
Leaderboard


Popular Content

Showing content with the highest reputation since 06/18/2019 in all areas

  1. 2 points
    Defects Per Million Opportunities (DPMO) is a very powerful metric for understanding the performance of a process. However, the following are the pitfalls while using DPMO:

    1. Calculating DPMO makes sense only if we have Discrete (Attribute) data. It is difficult to imagine the number of opportunities for Continuous (Variable) data. E.g. if we are monitoring temperature with a USL of 30, then what is an opportunity? The defect is easy to identify (temperature going above 30), but determining the opportunity is difficult. Should it be each second, each minute, etc.? It is for this reason that for continuous data we first calculate the Sigma Level, which is then converted to DPMO.

    2. Even for Discrete Data, DPMO is a metric that could portray a false picture of the process performance. Let's take an example.
    Number of Units made = 1000
    Opportunities for error (OFE) per unit = 10
    Total # of Defects = 124
    Total # of Defectives = 36 (i.e. all these 124 defects were found in 36 units only)
    Now, one could calculate the following metrics:
    Defects Per Unit (DPU) = 124/1000 = 0.124
    Defective % = 36/1000*100 = 3.6%
    Defects Per Million Opportunities (DPMO) = 124/(1000*10)*1,000,000 = 12,400
    Converting all these numbers to a Sigma Level:
    DPU = 0.124; Z (long term) = 1.19
    Defective % = 3.6%; Z (long term) = 1.80
    DPMO = 12,400; Z (long term) = 2.24
    It is evident from the above example that for the same process and the same numbers, DPMO provides the best Sigma Level, which might be misleading. This is the primary reason that the vendor always wants to report quality in terms of DPMO while the client always insists on either DPU or Defective %.

    3. For the DPMO calculation, all defects have the same importance. This sometimes becomes a challenge in service industries where some defects are considered more critical than others.

    4. DPMO does not give any indication of the number of units which have defects. It is quite likely that most of the defects are concentrated in only a handful of units, while on the other hand the same kind of defect could occur across multiple units. E.g. in my example, 124 defects occurred in only 36 units. However, these 124 defects could also have occurred in 124 units (1 defect in each of the 124 units).
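    The Z conversions in point 2 can be reproduced with a few lines of Python. This is a minimal sketch, assuming the usual conventions (first-time yield e^(-DPU) for the DPU conversion, and long-term Z read straight off the standard normal quantile, with no 1.5 shift added):

```python
from math import exp
from scipy.stats import norm

units, opportunities_per_unit = 1000, 10
defects, defectives = 124, 36

dpu = defects / units                                    # 0.124
defective_rate = defectives / units                      # 0.036
dpmo = defects / (units * opportunities_per_unit) * 1e6  # 12,400

z_dpu = norm.ppf(exp(-dpu))                 # yield = e^-DPU (Poisson) -> Z ~ 1.19
z_defective = norm.ppf(1 - defective_rate)  # Z ~ 1.80
z_dpmo = norm.ppf(1 - dpmo / 1e6)           # Z ~ 2.24
```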
  2. 2 points
    OFAT vs DOE? OFAT, or One Factor at a Time, is a method in which the impact of a change in one factor on the output is studied while all other factors are kept constant. DOE, or Design of Experiments, is a method in which the impact of changes in factors on the output is studied when multiple factors can be changed at the same time.

    Similarities between the two techniques:
    1. Both require experiments to be conducted.
    2. Both are statistical techniques. Solutions identified from them need to be checked for practical or business sense as well.

    Differences between the two techniques:
    1. In OFAT, only 1 factor can be changed at a time, while in DOE, all factors can be changed in a single experiment.
    2. DOE can be used to screen the critical factors from among a list of multiple factors and can also be used to optimize the factors for a desirable output. OFAT, on the other hand, can only be used for screening of critical factors.
    3. OFAT will only tell us the main effect of a factor on the output. DOE will tell us both the main effects and the interaction effects (i.e. the combined effect of 2 or more factors) on the output.
    4. In OFAT, the project lead can decide the number of experiments they want to do. DOE will give us the number of experiments that are required (based on the fractional or full factorial design).

    It is a well established fact that DOE is superior to OFAT, as it lets you change multiple factors at the same time and hence allows you to study their impact using fewer experiments. However, the question is whether there is a need to change multiple factors at all. E.g. let us take the mileage of a car as the output. There are multiple inputs to this (limiting to 5 for explanation):
    Mileage = f(Car condition)
    Mileage = f(Road condition)
    Mileage = f(Fuel type)
    Mileage = f(The way you drive)
    Mileage = f(Resistance between tyres and road)
    Now, if a car manufacturer wants to understand which of these factors is important for mileage, they will definitely prefer DOE over OFAT. They will be able to identify the critical factors and also optimize their values to get maximum mileage.

    Now, consider my situation. I have only one car (10 years old), I take the same route to office every day, I have a fixed driving style, and the tyres are in good condition. This means that except for fuel type, every other factor is almost constant. So if I want to maximize the mileage of my car, I don't need a DOE; I can simply do OFAT. This is precisely what I did. There is a BP station where I refuel my car, and I experimented with the Speed (97 octane) fuel against the normal fuel. Common sense would suggest a statistically significant change in mileage. However, when I did the OFAT test, the mileages were not different (maybe the car engine is old and higher octane makes no difference), so I could continue to use normal petrol and save by not spending extra on Speed.

    The point I want to highlight is that if experimentation does not cost much and you can reasonably assume the other factors to be constant, then OFAT is also useful. Otherwise, it is well established that DOE is advantageous over OFAT.

    P.S. The data for my fuel test is available on request (though I will have to dig it out from the hard-disk).
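    The fuel comparison above boils down to a two-sample test on mileage. Here is a minimal sketch of how such an OFAT result could be checked in Python; the mileage figures are made-up placeholders, since the author's actual data is only available on request:

```python
from scipy.stats import ttest_ind

# Hypothetical km/l readings over several tank refills (illustrative only)
normal_fuel = [14.2, 13.8, 14.5, 14.1, 13.9, 14.3]
speed_97    = [14.4, 14.0, 14.1, 14.6, 13.9, 14.2]

# Welch's two-sample t-test (no equal-variance assumption)
t_stat, p_value = ttest_ind(normal_fuel, speed_97, equal_var=False)
if p_value > 0.05:
    print("No statistically significant difference in mileage")
```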
  3. 2 points
    One Factor at a Time (OFAT) vs Design of Experiments (DOE):
    1. In OFAT, we hold all other factors constant and alter one factor's level at a time; in DOE, multiple factors (more than 2) can be manipulated.
    2. OFAT is sequential, one factor at a time; DOE is simultaneous, with multiple factors.
    3. In OFAT, the experimenter decides the number of experiments to be conducted; in DOE, the number of experiments is selected by the design itself.
    4. In OFAT, we CANNOT estimate interactions among the factors; in DOE, interactions are estimated systematically.
    5. In OFAT, the design is the experimenter's decision; DOE uses factorial designs (full and fractional).
    6. OFAT has low precision; in designed experiments, the estimate of each factor's effect has high precision.
    7. OFAT has a high chance of landing on a false optimum (when 2+ factors are considered), which can mislead; DOE has a high chance of reaching the true optimum.
    8. OFAT can be used to estimate curvature in factors; in DOE, if there is curvature, it is estimated by augmenting the design into a central composite design.
    9. OFAT suffers a domino effect: if one experiment goes wrong, the results are inconclusive; DOE is an orthogonal design, easy to predict from and draw conclusions with.

    It is sensible to say DOE is superior to OFAT, as we can save time and don't have to perform multiple tests/experiments. Let's see how designed experiments take the upper hand against OFAT with an example: a design for 3 factors in 15 runs.

    A few interpretations, with reference to the above diagram:
    1. In DOE, we can estimate the interactions between the factors, but not in OFAT.
    2. In DOE, prediction is better as the experimental runs have better data spread compared to OFAT with the same number of experimental runs.
    3. Curvature determination is better as DOE covers the entire spectrum compared to OFAT; for that matter, response optimisation is also better in designed experiments.
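    For reference, one standard way to arrive at a 15-run design for 3 factors is a face-centred central composite design (8 factorial corners + 6 axial points + 1 centre point). The post's exact design is not shown, so this sketch is an assumption:

```python
import itertools

# 2-level full factorial corners for 3 factors: 8 runs
corners = list(itertools.product([-1, 1], repeat=3))

# Axial (star) points, alpha = 1 for a face-centred design: 6 runs
axial = [tuple(a if i == j else 0 for j in range(3))
         for i in range(3) for a in (-1, 1)]

center = [(0, 0, 0)]                 # 1 centre run (allows a curvature check)
ccd_runs = corners + axial + center  # 8 + 6 + 1 = 15 runs
```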
  4. 1 point
    While Amlan, Vastupal and Natwarlal have given good answers, the winner this time is Natwarlal as he has picked the most important elements and explained them in the most lucid manner.
  5. 1 point
    Let’s see an example of the DPMO calculation for cinder blocks evaluated on length, breadth and height.

    Item              Length     Breadth    Height     Defective?  # of defects
    cinder block #1   correct    incorrect  correct    yes         1
    cinder block #2   correct    incorrect  incorrect  yes         2
    cinder block #3   incorrect  correct    correct    yes         1
    cinder block #4   correct    correct    correct    no          0
    cinder block #5   correct    correct    correct    no          0

    Opportunities/Unit = 3
    Total Units = 5
    Total Opportunities = 15
    Total Defects = 4
    DPO = 4/15 = 0.266667
    DPMO = 266,667
    Area to the right = 0.27; Area to the left = 0.73
    Sigma level (with 1.5 sigma shift) = 2.12

    The flaws in using DPMO as a metric are obvious, and are listed below:
    1. DPMO/Sigma Level are metrics which can theoretically be used to compare unlike products and processes.
    2. The complexity of defects can’t be represented with DPMO; not all defects are equal.
    3. Defect density is not captured by DPMO; i.e. a needle in a haystack vs a box of needles in a haystack.
    4. When back-calculating DPMO from the sigma level, if the defects don’t follow a normal distribution then the sigma level will be overestimated.
    5. DPMO and PPM are not the same, except when the # of opportunities for a defect per unit = 1; yet they are used interchangeably very often.
    6. To make the jump from 2 to 3 sigma, DPMO has to be reduced by 241,731, while from 5 to 6 sigma it is a mere 230 (all with 1.5 sigma shifts). This shows that DPMO is sensitive to the tails of the distribution, which is not always a nice thing. How? A Burr distribution with c=4.873717 and k=6.157568 perfectly resembles a standard normal distribution with mean = 0, sigma = 1, skewness = 0 and kurtosis = 3, yet the two are very different from a DPMO standpoint. I.e. our realization of the ‘true’ distribution of a process will never coincide perfectly with the truth.
    7. Chasing zero defects in accordance with DPMO, a good process can be made better but not perfect.
    8. Over-relying on DPMO may give inappropriate approximations of Cpk.
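    A minimal Python sketch that reproduces the numbers above, assuming the conventional sigma-level formula with the 1.5 shift:

```python
from scipy.stats import norm

total_units, opportunities_per_unit = 5, 3
total_defects = 4

total_opportunities = total_units * opportunities_per_unit  # 15
dpo = total_defects / total_opportunities                   # 0.266667
dpmo = dpo * 1e6                                            # 266,667

# Area to the left of the defect rate, then add the 1.5 sigma shift
sigma_level = norm.ppf(1 - dpo) + 1.5                       # ~2.12
```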
  6. 1 point
    Six Sigma, a defect-oriented approach to process improvement, is very popular in companies like General Electric, Texas Instruments, Kodak and many more. The main objective is to reduce output variability in order to increase customer satisfaction; in other words, this approach tries to keep the specification limits more than six standard deviations away from the process mean in both directions, which corresponds to a very low defect level of 3.4 defects per million opportunities (3.4 DPMO).

    Now the question comes: when is Six Sigma not called Six Sigma? The answer: when it is used merely as the Six Sigma Metric. There are various pitfalls in using it as a metric, given below:

    1. We use the term 'opportunities' to calculate DPMO (its full form is Defects Per Million Opportunities). If the customer gives different weightage to opportunities as per their importance, the metric can improve while customer satisfaction gets worse. For example, one may improve one type of defect at the expense of a more important one: someone eliminating 15 unimportant defects while leaving 5 important defects shows a net improvement of 10 defects, yet leaves behind poor customer satisfaction.

    2. Every process has its own limitations, and the DPMO calculation ignores them; it considers only the gap between existing performance and zero defects, so it fails to consider redesigning the process.

    3. The metric can be gamed very easily unless it is independently cross-checked. For example, give two different groups of experts the job of identifying the opportunities for defects, and you will see a huge difference between their lists.
  7. 1 point
    Secondary Metric: a metric in a project that has to be kept constant or prevented from deterioration, as it is an important process metric even though it is not the metric to be improved (taking the definition from the Forum's dictionary).

    Almost 99.9% of projects will have one or more secondary metrics. One could imagine the secondary metric as a contradiction or a constraint while improving the primary metric. Some examples:

    1. Formula 1 (or any other) race: the primary metric is speed. You want your vehicle to go as fast as possible. However, there are a few constraints (or secondary metrics) in achieving speeds greater than a certain value:
    a. The downforce has to be high at higher speeds, because at high speed the vehicle will have a tendency to leave the ground, which is undesirable. But if the downforce is kept high, then higher speeds are difficult to achieve. Hence, a goal would be to maximize the speed of the vehicle without increasing the downforce.
    b. Revolutions (or revs) of the engine. Higher speed requires the engine to rev higher, i.e. more revolutions per minute. However, higher revs mean higher fuel consumption. Hence, a goal would be to improve the speed without increasing the revs.
    Similarly, there are a host of other secondary metrics when we look at the design of a Formula 1 car whose objective is to go as fast as possible.

    2. Looking at the way India is playing in this semi-final: the primary metric is to improve the run rate while ensuring that the risk of the shots played does not increase. The risk of the shots played is the secondary metric here.

    Other common examples:
    3. Lower Average Handling Time should not compromise First Call Resolution.
    4. Higher Return on Investment while keeping the risk constant.
    5. Hiring the best available talent while keeping the cost constant.

    How do we identify the secondary metrics?
    a. Mostly it is intuitive: if you are well aware of the process, you can easily identify the list of secondary metrics for a particular primary metric.
    b. One could identify the secondary metrics by thinking about the constraints or contradictions.
    c. Look at the roof of the House of Quality (the correlation matrix between the technical specs).

    Situations where there is no secondary metric: ideally there will always be one or more secondary metrics (I wrote 99.9% above). The only 0.1% of situations where I think a secondary metric will not make sense are matters of life and death; in other words, situations where focusing on a secondary metric is of no relevance. Some examples:

    1. In the medical world, steroids are considered life-saving drugs. However, it is well established that these steroids have side effects as well. Now, if a person is on their death bed (sorry for such an extreme example) and a steroid can save their life, then the side effects really do not matter. Another example is the recent Jet Airways situation: the primary metric was to remain operational. Even though this came at a very high cost (secondary metric), Jet was not worried about cost because the survival of the organization was at stake (this was obviously before they were completely grounded).

    2. If the primary metric is about adherence to regulatory or compliance requirements, the focus on the secondary metric is not at all important. E.g. Indian automobile manufacturers have been directed to be BS 6 compliant; this is the primary metric. Due to this, the cost of cars (secondary metric) is getting higher, but the manufacturers are not worried about the cost as it is a regulatory requirement. Similarly, the reserves that a bank has to keep are a regulatory requirement from the RBI. The secondary metric is the cost of parking funds, but banks do not focus on it when maintaining the reserves.

    To conclude, secondary metrics will always be present. Only in special circumstances could one choose to ignore the secondary metric, when the primary metric is too critical and the improvement in the primary metric offsets the degradation in the secondary one.
  8. 1 point
    Kano Model: Dr Noriaki Kano created the Kano Model in 1984 for product development and customer satisfaction, to explain the different categories of customer requirements and how these requirements influence customer satisfaction. Any product or service offered by an organisation will only be considered by customers if it solves important customer problems effectively. It is not necessary that all customer requirements deliver equal satisfaction: you can have two customer needs of equal importance and be more satisfied when one goes well, yet neutral when the other goes well. It is even possible that a customer is more satisfied by a need of lower importance and neutral about a need of higher importance.

    The Kano Model has two axes (refer Fig 1). The horizontal axis represents the degree of implementation or execution: on the right side the requirement is fully executed, and on the left side it is not done at all. The vertical axis represents the satisfaction level of the customer: at the top the customer is fully satisfied, and at the bottom the customer is very dissatisfied. Using these axes, Dr. Kano gave five categories of customer needs, explained below:

    1. Performance Attributes: these attributes are one-dimensional and are at the top of the customer's mind when making choices and evaluating competitors in the market (refer Fig 1). Good performance on these attributes gives more satisfaction, and if they fail to perform the customer will be very dissatisfied. Because they are linear in nature, it is better to execute them fully so that the customer is more satisfied; in other words, you can call them satisfiers. For example: the battery life of a mobile (if it performs well the customer is satisfied, otherwise dissatisfied); a car claimed to deliver a mileage of 24 actually delivering 14, leaving the customer fully dissatisfied; the resolution of a new TV not being as claimed by the company; or a mobile marketed for gaming that hangs on simple applications. It means you will receive more customer satisfaction if you are able to execute these performance attributes fully.

    2. Threshold Attributes: these are the basic attributes; customers take them for granted and expect them in the product or service. If they are done well, customers might be just neutral, but if they are not performing well it may lead to customer dissatisfaction (refer Fig 1). In other words, we can call them "must-be's" because they must be included. For example: the door locks of the car we are considering buying, or the cleanliness of the hotel room booked for a trip.

    3. Excitement Attributes (refer Fig 1): these are unexpected surprises or delights for the customer, also termed the "WOW factor", such as the different offers a company gives on its product to attract more customers. They are called Delighters because they do exactly what their name suggests: they attract more customers, and sometimes customers set aside some of their other needs when they see such delighters. For example, an offer of 2 lakh off on a brand new car (a customer may overlook some other needs to grab that opportunity), longer service coverage in time or kilometres, or free roadside assistance. Such delighters normally appear in festive seasons when customers want to buy new things. If they are not given at some point in time, there is very little chance that the customer will be dissatisfied.

    4. Indifferent Attributes (refer Fig 1): these are attributes whose presence or absence does not matter to customer satisfaction; the customer is neutral either way. For example, some advanced application in a mobile phone that is not used by most people. They provide little value to your product because the majority of customers don't care about them.

    5. Reverse Attributes (refer Fig 1): this is the rarest of the five categories; these are items you don't want to offer, since their presence leads to customer dissatisfaction. Reverse attributes are found very rarely. Microsoft's little "paperclip helper" is a small example, because most people were annoyed by it.

    There are grey shades between the five categories defined above; the classification may change from person to person. It is very important to keep in mind that the Kano Model is not absolute: what one person describes as an excitement attribute, another might describe as a performance attribute. This is quite common and reflects the simple fact that customers and their requirements differ. The Kano Model helps an organisation take decisions that fulfill customer requirements.

    As everyone knows, customer needs and expectations are very dynamic, and any organisation must understand the nature of its business and the pace of change of its industry year by year to stay in this competitive market. If you do not move with change, you will be replaced by someone else; there are many examples of this, like Nokia, Blackberry, HMT Tractors. Time is a very important factor in the Kano Model (refer Fig 1). As time passes, industries change, technologies change and customer requirements also change. For excitement attributes, we should know how long they will last. There is a saying that whatever is exciting today (Excitement Attribute) will definitely be asked for tomorrow (Performance Attribute) and will be expected the day after (Threshold Attribute). There are various examples of this transition, and companies can take decisions by watching these factors; the reality is that it forces companies to innovate continuously (Excitement Attributes) to keep their competitive edge. For example, when the touchscreen was offered by Apple it was an excitement attribute for customers, but as time passed it became a threshold attribute, and now every mobile company produces phones with touchscreens. Another example is the headphones that used to come with every mobile phone; Xiaomi changed this scenario by giving more features and a quality product instead. Even though Xiaomi does not provide headphones with a new phone, it is still a top-selling smartphone company because of its performance and excitement attributes; customers are satisfied even without the headphones, and their absence makes no difference. Other examples: AC in cars has nowadays become a threshold attribute, as have Wi-Fi in hotels, the camera in your mobile phone and the remote control for your television.

    The Kano Model helps an organisation collect data based on the voice of the customer and classify that data into the different categories, so as to launch new products that satisfy more and more customers and stay in the market.
  9. 1 point
  10. 1 point
    Visuals are always easier to review and summarize than raw content. This is precisely the reason a graphical summary is preferred over reading data in multiple rows or columns.

    What is a Box Plot? The most commonly used method for graphical summary is a frequency distribution plot like a histogram (for continuous data). The same data can also be plotted using a box plot, which is just another way of looking at the histogram: a box plot is a top view of the histogram. I took the annual rainfall data (from the GoI website, for Andaman and Nicobar) and produced the graphical summary in Minitab. The same data is represented in a histogram and a box plot; even though both graphs represent the same data, the two are actually different, and I have tried to summarize the differences in the table above.

    In addition to the insights or usefulness of the box plot captured in the above table, a box plot can be used in the following scenarios as well:
    1. Compare data sets for the same metric (I have provided an example below), even when a project is not being done.
    2. Identify the problem in the Define phase (too much spread, or a process shifted to one side).
    3. Baseline the process performance in the Measure phase.
    4. Graphically compare the performance of two or more sub-groups (units, departments, centers, shifts etc.) in the Analyze phase.
    5. Confirm the improvement in the Improve phase (the spread reduces or the process is more centered).
    6. Check for the presence of outliers in the data to ensure process control in the Control phase.

    In the example below, I considered the annual rainfall data for 6 regions (from the GoI website). Observations from the box plot:
    1. It clearly identifies the regions which get higher rainfall compared to the others. A&N receives the maximum annual rainfall while Rajasthan West receives the lowest.
    2. Rainfall in Rajasthan, Delhi, Orissa and UP West (if I ignore the slightly elongated whisker) is almost equally distributed across the range, while it is skewed in A&N (left skewed) and Nagaland (right skewed).
    3. The variation in rainfall is the least in Delhi and Rajasthan West, and the highest in A&N and Nagaland (given that the length of the box is largest for them).
    4. There are no outliers in the data set.

    Just for illustration, I added another year's data (hypothetically, a drought year), and the box plot changes: it now shows an outlier (star mark) for all states except Rajasthan West (I had entered a value of 0, but it still did not consider it an outlier). These star marks indicate the presence of a value which is different from the other values in the data set, in other words an outlier. The box plot identifies it and gives us a chance to investigate and do an RCA to find the reason (remember, I had entered data for a hypothetical drought year where rainfall would be very low).

    I guess the limitations of Gauss' normal distribution plot and Karl Pearson's histogram led John Tukey to devise and start using the box plot :)
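    A multi-region box plot comparison like the one described can be sketched in a few lines of Python with matplotlib; the rainfall numbers below are synthetic placeholders, not the GoI data:

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(7)
# Synthetic annual rainfall (mm) for three regions, for illustration only
regions = {"A&N": rng.normal(2950, 350, 25),
           "Delhi": rng.normal(620, 90, 25),
           "Rajasthan West": rng.normal(290, 60, 25)}

plt.boxplot(list(regions.values()), labels=list(regions.keys()))
plt.ylabel("Annual rainfall (mm)")
plt.title("Region-wise annual rainfall")
plt.show()
```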
  11. 1 point
    Box plot (box and whisker plot): this analysis creates a visual representation of the range and distribution of quantitative (continuous) data. It creates 4 quartile groups:
    Quartile Group 1: Min - 25th percentile (Q1)
    Quartile Group 2: 25th percentile (Q1) - 50th percentile (Q2, Median)
    Quartile Group 3: 50th percentile (Q2) - 75th percentile (Q3)
    Quartile Group 4: 75th percentile (Q3) - Max
    Here, Q3 - Q1 is the Inter Quartile Range (IQR).

    Insights from a box plot: comparing multiple data sets (using a categorical variable for grouping), and understanding data symmetry and skewness.
    * It gives the spread of the data points: the lowest (min) and highest (max) values in the data set.
    * It shows outliers (if any) present in the data. Outliers are values more than 1.5 times the IQR away from the 25th or the 75th percentile.
    * It clearly shows if the distribution is skewed (left or right; refer to the enclosed pic).
    * Median: this separates the lower 50% of observations from the upper 50%.
    * Box plot with groups: when we have further categories, we can use 'categorical variables for grouping'; this helps us identify the distribution spread among the groups.

    Example reference: this example is a box plot graph with groups, Group A and Group B respectively. It is clearly evident that there are outliers in both graphs, and that Group A is right skewed. The visual representation gives us more clarity on the distribution of data in both groups.
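    The 1.5 x IQR outlier rule above is easy to compute directly; a minimal sketch with made-up numbers:

```python
import numpy as np

data = np.array([12, 15, 14, 10, 18, 22, 13, 16, 45])  # 45 looks suspicious

q1, q2, q3 = np.percentile(data, [25, 50, 75])
iqr = q3 - q1
lower_fence = q1 - 1.5 * iqr
upper_fence = q3 + 1.5 * iqr

outliers = data[(data < lower_fence) | (data > upper_fence)]
print(outliers)  # values beyond the whisker fences, here 45
```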
  12. 1 point
    In day to day life there are a number of examples of the binomial distribution, like:
    1. Exam result: pass or fail
    2. Application: rejected or not rejected
    3. Bus will come or not come
    4. Product: defective or not defective

    The binomial distribution has the following characteristics:
    1. It has only two possible outcomes per trial for n trials, like yes or no, go or no-go, pass or fail. The most familiar example is a coin toss, where only two outcomes are possible, Head or Tail, for n tosses.
    2. The trials are fixed in number, say "n", and all are identical.
    3. The probability of success (p) remains constant from trial to trial.
    4. The result of each trial is independent of the other trials.

    Example: suppose an exam has 10 questions and each question has four possible answers, of which only one is correct. Then the probability of any answer being correct is 1/4 = 0.25, and this probability is the same for each question. To calculate the probability of all answers being correct: probability of success on a single trial = 0.25, trials = 10, successes = 10, so the probability is 0.25^10, roughly 9.5 x 10^-7.
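    The same figure drops out of the binomial probability formula P(X = k) = C(n, k) p^k (1-p)^(n-k); a quick cross-check in Python:

```python
from math import comb
from scipy.stats import binom

n, p, k = 10, 0.25, 10
manual = comb(n, k) * p**k * (1 - p)**(n - k)  # 0.25**10 ~ 9.54e-07
library = binom.pmf(k, n, p)                   # same value via scipy
print(manual, library)
```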
  13. 1 point
    Sample size for Regression Analysis.

    What is sample size? Since we cannot work with population data (due to constraints of time and money), we prefer to work with sample data. Therefore, it becomes important to know how many data points (the sample size) are required in the sample. Usually, sample size determination depends on the following parameters:
    1. Significance level, or alpha
    2. Power of the test, or (1 - Beta)
    3. Effect, or the difference to be detected
    The smaller the alpha, the higher the power of the test, and the smaller the effect to be detected, the higher the sample size required.

    The sample size for regression analysis depends on the following (in addition to the parameters already listed above, hence starting with number 4):
    4. Type of regression being done (linear, multiple, ordinal etc.)
    5. Purpose of the regression:
    a. Determine the effectiveness of the model (looking at the R-square value)
    b. Determine the statistically important predictors (determining the beta values for each predictor)
    6. Level of correlation between the predictors

    For point 4: generally, the simpler the regression, the smaller the sample size required. Hence, a lower sample size is needed for a linear regression than for a multiple regression.
    For point 5: if the purpose is only to check the fit of the model, a smaller sample size will suffice compared to determining the significant factors from all the potential ones.
    For point 6: the higher the correlation, the higher the sample size (applicable only if there are multiple predictors).

    Now that we know the factors affecting sample size for regression, how should we check if we have the required number of samples? The best way is to follow the theory behind sampling: the higher the size, the better. But this raises another question: what sample size is sufficiently large? There are a few empirical formulae that can be of help here; I am listing some below.
    1. One common rule of thumb, and the most famous one, is that the sample size should be 10 times the number of predictors. So if you have 4 predictors, you should have a minimum of 40 samples for running the regression.
    2. As suggested by Green (1991):
    a. Sample size = 50 + 8*k, where k is the number of predictors; applicable if we are doing regression for purpose 5a.
    b. Sample size = 104 + k, where k is the number of predictors; applicable if we are doing regression for purpose 5b.
    There are more formulae depending on the kind of regression (ordinal, logistic etc.) that you plan to run.

    Sometimes it is difficult to have answers to all 6 parameters before deciding the sample size. A more practical approach is to work backwards: since we know the number of samples we have (or how many we could collect), we can always do a Power Analysis (keeping the other factors constant or pre-decided).
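    The rules of thumb above are trivial to encode; a small sketch:

```python
def n_ten_per_predictor(k):
    """Common rule of thumb: 10 samples per predictor."""
    return 10 * k

def n_green_model_fit(k):
    """Green (1991): testing overall model fit (R-square)."""
    return 50 + 8 * k

def n_green_predictors(k):
    """Green (1991): testing individual predictors (betas)."""
    return 104 + k

k = 4  # e.g. four predictors
print(n_ten_per_predictor(k), n_green_model_fit(k), n_green_predictors(k))
# -> 40 82 108
```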
  14. 1 point
    What are the key differences between Multiple Regression using historical data and Multiple Regression based on Experimental Data (DOE)?

    Using historical data:
    1. You can see how the actual data fits the regression line.
    2. You can see outliers and unusual observations, to investigate or to collect more data.
    3. You can confirm whether the residuals are random and follow a normal distribution.

    Using DOE:
    1. Models a regression line based upon experimental data, which may not reflect all the influences (noise, environmental and control factors) present in real data.
    2. Models are only as good as the SMEs/teams that provide insight into the scope, factors, boundaries and interactions.

    What are the advantages of one over the other, if at all? Using DOE, one can model and understand interactions with fewer runs/replications, screen for the critical X's and then evaluate interactions, and provide a model equation that covers more factors and levels more efficiently and effectively.
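    For the historical-data side, the three checks listed (fit, unusual observations, residual behaviour) map onto a standard regression workflow. A minimal sketch with synthetic data, assuming statsmodels and scipy are available:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import shapiro

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))                            # two predictors
y = 3 + 1.5 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(0, 0.5, 60)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.rsquared)        # how well the data fit the regression line
print(model.pvalues)         # which predictors are significant
print(model.outlier_test())  # flag unusual observations
_, p = shapiro(model.resid)  # check residuals for normality
```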
  15. 1 point

    [Image post] From the album: April-June 2019. © Benchmark Six Sigma

  16. 1 point
    Kano Model Summary: the Kano Model is a tool used to identify 5 categories of product features/services from a customer's perspective, to enable manufacturers and service providers to be competitive in the market.

    Kano Model feature categories (VOC = Voice of Customer):
    Category     | VOC if feature is present | VOC if feature is absent | Examples
    Basic        | Neutral                   | Dissatisfaction          | 24-hr hot water supply at hotels
    Performance  | Satisfaction              | Dissatisfaction          | High battery life in mobiles
    Excitement   | Extra satisfaction        | Neutral                  | Welcome drinks/complimentary chocolates at hotel check-in
    Indifferent  | Neutral                   | Neutral                  | Material used in packing juices or milk, if the packets are durable and do not leak
    Reverse      | Dissatisfaction           | Satisfaction             | Annoying pop-up help features in some software

    One important point to keep in mind: over time, as customers get used to an Excitement feature, the feature becomes more of an expectation and moves to become a Basic feature. In other words, a feature which was earlier not even expected becomes a "must-have". Earlier its absence would have gone unnoticed, but now its absence causes dissatisfaction among customers. Examples: power steering in cars, the camera feature in mobile phones.

    What would be your approach for putting these needs to good use?
    First, I would work towards developing the identified Basic and Performance features/services, so that they are maintained at a level where they continue to satisfy customers. There should NOT be any decline in these features/services.
    Second, I would focus on developing the identified Excitement features. These will eventually transform into "basic must-haves". I would innovate new features/services which continue to add the WOW factor.
    Third, I would work on cost optimization/cost cutting for the identified Indifferent features/services.
    Fourth, last but not least, I would take the precaution of not overwhelming the customer with product features and services. More is not always great! The product features and services should be in line with the requirements of the target customers.
  17. 1 point
    Hello Richa, I am also from a manufacturing quality background. 90% of the time, quality problems can be solved using Lean or Six Sigma concepts efficiently and in less time, because each is a methodology for solving problems. The quality department is supposed to support the production department in solving problems, and this becomes easier if you start using Six Sigma tools. The most commonly used tool in manufacturing is DOE, for optimization and for a better understanding of the 'output as a function of input parameters' model.
  18. 1 point
    Nice summary, VK. Another similar example is airline flight safety, which operates at Six Sigma levels even though I don't think many airlines follow the methodology. An exception would be GE Aircraft Engines, thanks to it being a GE company (although it is not an airline).
  19. 1 point
    Dear All, it is very important to keep a HIGH SIGMA LEVEL distinguished from a SIX SIGMA APPLICATION. Many companies have reached wonderful performance levels because they have a good process in place with some mistake proofing and visual controls; they applied these concepts because plain common sense warrants their use. My question was "Is the Mumbai Dabbawalas' performance a good application example of Six Sigma?" If we pay attention to the term "application example of Six Sigma", the answer should be "NO". I agree that we may use it as an example of the application of Lean concepts (or as an example of a high Sigma Level, or low DPMO).

    A lot of people have been referring to the Dabbawalas as a great example of Six Sigma, which is misleading. The best examples of Six Sigma are companies who use DMAIC or DFSS rigorously to progressively achieve performance excellence through customer- and business-focused projects using appropriate techniques. Some people even refer to the Dabbawalas as "Six Sigma certified". Please note that no company gets the tag of "Six Sigma certified".

    Thanks to all for contributing to this discussion. Amol, Rohit, Karthik, Ravindra, Renu and Imraan have highlighted the high SIGMA LEVEL appropriately and mentioned that the process can be considered beyond Six Sigma. However, Rahul, Anees, Reddy, Venkatesh and Arup have appropriately highlighted that we should not consider this a SIX SIGMA APPLICATION (and that this was not a conscious effort with the methodology).

    By mentioning this, we do not wish to take away credit for what they have achieved. May they achieve greater success! And let us hope that they really use Six Sigma some day.

    Best Regards, VK
  20. 1 point
    Dear VK, greetings from Sathya. I consider this not a SIX SIGMA EXAMPLE for the following reasons (even though every organization should adopt such practices to excel):
    1. Six Sigma comes into play when there is no ready business solution to a critical problem.
    2. I don't see Six Sigma tools and methodologies that would let us refer to this as an example of SIX SIGMA.
    3. The Dabbawalas' success story has greater relevance to the LEAN methodology than to SIX SIGMA.
    4. As LEAN eliminates WASTE and SIX SIGMA reduces variation in processes, this is again not a good example of SIX SIGMA: their process is near perfect, so where is the shift in the process that needs a SIX SIGMA application?
    I would like this classic example to be referred to as a LEAN application, as it has greater relevance to Standardization, VSM, Waste Elimination, TAKT, Visual Standards, 5S applications etc.