Popular Content

Showing content with the highest reputation since 07/10/2019 in all areas

  1. 1 point
    Defects Per Million Opportunities (DPMO) is a very powerful metric for understanding the performance of a process. However, the following are pitfalls when using DPMO:

    1. Calculating DPMO only makes sense for Discrete (Attribute) data. It is difficult to define the number of opportunities for Continuous (Variable) data. E.g. if we are monitoring temperature with a USL of 30, then what is an opportunity? The defect is easy to identify (temperature going above 30), but the opportunity is not. Should it be each second, each minute, etc.? It is for this reason that for Continuous Data we first calculate the Sigma Level, which is then converted to DPMO.

    2. Even for Discrete Data, DPMO can portray a false picture of process performance. Let's take an example:
    Number of Units made = 1000
    Opportunities for error (OFE) per unit = 10
    Total # of Defects = 124
    Total # of Defectives = 36 (i.e. all 124 defects were found in 36 units only)
    Now, one could calculate the following metrics:
    Defects Per Unit (DPU) = 124/1000 = 0.124
    Defective % = 36/1000*100 = 3.6%
    Defects Per Million Opportunities (DPMO) = 124/(1000*10)*1,000,000 = 12,400
    Converting all these numbers to Sigma Levels:
    DPU = 0.124; Z (long term) = 1.19
    Defective % = 3.6%; Z (long term) = 1.80
    DPMO = 12,400; Z (long term) = 2.24
    It is evident from this example that for the same process and the same numbers, DPMO yields the best Sigma Level, which can be misleading. This is the primary reason a vendor always wants to report quality in terms of DPMO while the client always insists on either DPU or Defective %.

    3. In the DPMO calculation, all defects carry the same importance. This sometimes becomes a challenge in service industries, where some defects are considered more critical than others.

    4. DPMO gives no indication of the number of units that contain defects. It is quite likely that most of the defects are concentrated in only a handful of units; on the other hand, the same kind of defect could occur across many units. In the example above, the 124 defects occurred in only 36 units, but they could equally have occurred in 124 units (1 defect in each of 124 units).
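The arithmetic in point 2 can be reproduced with a short Python sketch (standard library only). The Z values use the usual long-term conversions, with the DPU yield taken from the Poisson approximation e^(-DPU):

```python
from math import exp
from statistics import NormalDist

nd = NormalDist()  # standard normal

units = 1000
opportunities_per_unit = 10
defects = 124
defectives = 36

dpu = defects / units                                        # 0.124
defective_rate = defectives / units                          # 0.036
dpmo = defects / (units * opportunities_per_unit) * 1_000_000  # 12400.0

# Long-term sigma (Z) levels implied by each metric
z_dpu = nd.inv_cdf(exp(-dpu))                 # yield via Poisson approximation, ~1.19
z_defective = nd.inv_cdf(1 - defective_rate)  # ~1.80
z_dpmo = nd.inv_cdf(1 - dpmo / 1e6)           # ~2.24
```

Same data, three different sigma levels, with DPMO flattering the process the most.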
  2. 1 point
    Let’s see an example of a DPMO calculation for cinder blocks evaluated on length, breadth and height.

    Item/Criteria     Length     Breadth    Height     Defective   # of defects
    cinder block #1   correct    incorrect  correct    yes         1
    cinder block #2   correct    incorrect  incorrect  yes         2
    cinder block #3   incorrect  correct    correct    yes         1
    cinder block #4   correct    correct    correct    no          0
    cinder block #5   correct    correct    correct    no          0

    Opportunities/Unit = 3
    Total Units = 5
    Total Opportunities = 15
    Total Defects = 4
    DPO = 4/15 = 0.266667
    DPMO = 266,667
    Area to the right = 0.27; Area to the left = 0.73
    Sigma level (with 1.5 sigma shift) = 2.12

    The flaws in using DPMO as a metric are obvious, and are listed below:
    1. DPMO/Sigma Level are metrics that can only theoretically be used to compare unlike products and processes.
    2. The complexity of defects can’t be represented with DPMO; not all defects are equal.
    3. Defect density is not captured by DPMO; i.e. a needle in a haystack vs. a box of needles in a haystack.
    4. When back-calculating DPMO from a sigma level, if defects don’t follow a normal distribution, the sigma level will be overestimated.
    5. DPMO and PPM are not the same unless the # of opportunities per unit = 1, yet they are very often used interchangeably.
    6. To make the jump from 2 to 3 sigma, DPMO has to be reduced by 241,731, while the jump from 5 to 6 sigma needs a mere 230 (all with 1.5 sigma shifts). This shows that DPMO is sensitive to the tails of the distribution, which is not always a nice thing. How? A Burr distribution with c = 4.873717 and k = 6.157568 closely resembles a standard normal distribution (mean = 0, sigma = 1, skewness = 0 and kurtosis = 3) but is very different from a DPMO standpoint; i.e. our realization of the ‘true’ distribution of a process will never coincide perfectly with the truth.
    7. Chasing zero defects in accordance with DPMO, a good process can be made better but never perfect.
    8. Over-relying on DPMO may give inappropriate approximations of Cpk.
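The cinder-block numbers above can be checked with a few lines of Python (standard library only; the 1.5 sigma shift is the conventional long-term adjustment):

```python
from statistics import NormalDist

# Defects found on each cinder block (length, breadth, height checked per unit)
defects_per_unit = [1, 2, 1, 0, 0]
opportunities_per_unit = 3

total_units = len(defects_per_unit)                          # 5
total_opportunities = total_units * opportunities_per_unit   # 15
total_defects = sum(defects_per_unit)                        # 4

dpo = total_defects / total_opportunities    # 0.266667
dpmo = dpo * 1_000_000                       # ~266,667

# Sigma level with the conventional 1.5 sigma shift
sigma_level = NormalDist().inv_cdf(1 - dpo) + 1.5  # ~2.12
```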
  3. 1 point
    Six Sigma, a defect-oriented approach to process improvement, is very popular in many organizations such as General Electric, Texas Instruments, Kodak and many more. The main objective is to reduce output variability to increase customer satisfaction; in other words, this approach tries to keep the specification limits more than six standard deviations away from the mean in both directions, which translates to a very low defect level of 3.4 defects per million opportunities (3.4 DPMO). Now the question arises: when is Six Sigma not called Six Sigma? The answer is: when it is used as the Six Sigma Metric. There are various pitfalls of using it as a metric, which are given below:

    1. We use the term "opportunities" to calculate DPMO (its full form is Defects Per Million Opportunities). If the customer gives weightage to opportunities as per their importance, this can be very poor for customer satisfaction, because there is a chance that the metric improves while customer satisfaction worsens. For example, we might improve one type of defect at the expense of a more important one: someone eliminates 15 unimportant defects while leaving behind 5 important ones, showing an overall improvement of 10 defects but leaving customer satisfaction poor.

    2. Every process has its own limitations. While calculating DPMO, these limitations are ignored and only the gap between existing performance and zero defects is considered, so DPMO fails to prompt a redesign of the process when one is needed.

    3. The metric can be gamed very easily unless it is cross-checked by someone else. For example, give two different groups of experts the job of identifying the opportunities for defects in the same process, and you will see a huge difference between their lists.
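Point 1 can be made concrete with a small sketch. The defect counts and the 10x weight below are hypothetical, purely for illustration: the raw defect count (what DPMO sees) improves, while the customer-weighted count worsens.

```python
# Hypothetical defect counts before and after an "improvement"
before = {"critical": 5, "minor": 20}   # 25 defects in total
after  = {"critical": 10, "minor": 5}   # 15 defects: 15 minor fixed, 5 critical added

# Assumed customer weighting: a critical defect hurts 10x more than a minor one
weights = {"critical": 10, "minor": 1}

def raw_count(defects):
    return sum(defects.values())

def weighted_count(defects):
    return sum(weights[kind] * n for kind, n in defects.items())

raw_improvement = raw_count(before) - raw_count(after)            # 10 fewer defects: metric looks better
weighted_change = weighted_count(after) - weighted_count(before)  # +35: customer impact is worse
```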
  4. 1 point
    Secondary Metric: in a project, one that has to be kept constant or prevented from deterioration, as it is an important process metric even though it is not the metric to be improved (taking the definition from the Forum's dictionary). Almost 99.9% of projects will have one or more secondary metrics. One could imagine the secondary metric as a contradiction or a constraint while improving the primary metric. Some examples:

    1. Formula 1 (or any other race): the Primary Metric is speed. You want your vehicle to go as fast as possible. However, there are a few constraints (secondary metrics) in achieving speeds beyond a certain value. Listing some of them below:
    a. Downforce has to be high at higher speeds, because at high speed the vehicle tends to leave the ground, which is undesirable. But if downforce is kept high, higher speeds are difficult to achieve. Hence, a goal would be to maximize the speed of the vehicle without increasing the downforce.
    b. Engine revolutions (revs). Higher speeds require the engine to rev faster, i.e. more revolutions per minute. However, higher revs mean higher fuel consumption. Hence, a goal would be to improve speed without increasing the revs.
    Similarly, there are a host of other secondary metrics in the design of a Formula 1 car when the objective is to make it go as fast as possible.

    2. Looking at the way India is playing in this semi-final, the Primary Metric is to improve the run rate while ensuring that the risk of the shots played does not increase. The risk of the shots played is the secondary metric here.

    Other common examples:
    3. A lower Average Handling Time should not compromise First Call Resolution.
    4. Higher Return on Investment while keeping the Risk constant.
    5. Hiring the best available talent while keeping the cost constant.

    How do we identify the secondary metrics?
    a. Mostly it is intuitive; if you are well aware of the process, you can easily identify the list of secondary metrics for a particular primary metric.
    b. One could identify secondary metrics by thinking about the constraints or contradictions.
    c. Look at the roof of the House of Quality (the correlation matrix between the technical specs).

    Situations where there is no Secondary Metric:
    Ideally there will always be one or more secondary metrics (hence the 99.9% above). The only 0.1% of situations where I think a secondary metric will not make sense are matters of life and death. In other words, these are situations where focusing on a secondary metric is of no relevance. Some examples:

    1. In the medical world, steroids are considered life-saving drugs. However, it is well established that steroids have side effects as well. Now, if a person is on their death bed (sorry for such an extreme example) and a steroid can save their life, then the side effects do not matter. Another example comes from Jet Airways: the primary metric was to remain operational. Even though this came at a very high cost (the secondary metric), Jet was not worried about cost because the survival of the organization was at stake (this was obviously before they were completely grounded).

    2. When the primary metric is adherence to regulatory or compliance requirements, the focus on the secondary metric is not at all important. E.g. Indian automobile manufacturers have been directed to become BS 6 compliant; this is the primary metric. Because of this, the cost of cars (the secondary metric) is rising, but the manufacturers are not worried about the cost, as compliance is a regulatory requirement. Similarly, the reserves a bank has to keep are a regulatory requirement from the RBI. The secondary metric is the cost of parking funds, but banks do not focus on the cost of parking funds when maintaining the reserves.

    To conclude, secondary metrics will always be present. Only in special circumstances could one choose to ignore the secondary metric, when the primary metric is too critical and the improvement in the primary metric offsets the degradation in the secondary metric.
  5. 1 point
    OFAT vs DOE? OFAT, or One Factor At a Time, is a method in which the impact of a change in one factor on the output is studied while all other factors are kept constant. DOE, or Design of Experiments, is a method in which the impact of changes in factors on the output is studied while multiple factors can be changed at the same time.

    Similarities between the two techniques:
    1. Both require experiments to be conducted.
    2. Both are statistical techniques. Solutions identified from them need to be checked for practical or business sense as well.

    Differences between the two techniques:
    1. In OFAT, only 1 factor can be changed at a time, while in DOE all factors can be varied within a single experiment.
    2. DOE can be used both to screen the critical factors from among a list of multiple factors and to optimize the factors for a desirable output. OFAT can only be used for screening critical factors.
    3. OFAT only reveals the main effect of a factor on the output. DOE reveals both the main effects and the interaction effects (i.e. the combined effect of 2 or more factors) on the output.
    4. In OFAT, the project lead decides the number of experiments to run. DOE prescribes the number of experiments required (based on the fractional or full factorial design).

    It is a well-established fact that DOE is superior to OFAT, as it lets you change multiple factors at the same time and hence study the impact with fewer experiments. However, the question is whether there is actually a need to change multiple factors. E.g. let us take the mileage of a car as the output. There are multiple inputs (limiting to 5 for explanation):
    Mileage = f(Car Condition)
    Mileage = f(Road Condition)
    Mileage = f(Fuel Type)
    Mileage = f(Way you drive)
    Mileage = f(Resistance between tyres and road)
    If a car manufacturer wants to understand which of these factors matter for mileage, they will definitely prefer DOE over OFAT. They will be able to identify the critical factors and also optimize the values of those factors to get maximum mileage.

    Now, consider my situation. I have only one car (10 years old), I take the same route to office every day, I have a fixed driving style, and the tyres are in good condition. This means that, except for Fuel Type, every other factor is almost constant. If I want to maximize the mileage of my car, I don't need a DOE; I can simply do an OFAT. This is precisely what I did. I have a BP station where I refuel my car, and I compared the Speed (97 octane) fuel against the normal fuel. Common sense would suggest a statistically significant change in mileage. However, when I did the OFAT test, the mileages were not different (maybe the car engine is old and the higher octane makes no difference), so I could continue to use normal petrol and save by not spending extra for Speed.

    The point I want to highlight is that if experimentation does not cost much and you can reasonably assume the other factors to be constant, then OFAT is also useful. Otherwise, it is well established that DOE is advantageous over OFAT. P.S. The data for my fuel test is available on request (though I will have to dig it out from the hard disk).
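The difference in experimental plans can be sketched by enumerating the runs for a 2-level design with three of the mileage factors (coded -1/+1 levels; the factor names are just illustrative labels):

```python
from itertools import product

factors = ["car_condition", "fuel_type", "driving_style"]

# DOE: 2^3 full factorial - every combination of low/high levels
full_factorial = list(product([-1, +1], repeat=len(factors)))  # 8 runs

# OFAT: start from an all-low baseline, then raise one factor at a time
baseline = (-1,) * len(factors)
ofat_runs = [baseline] + [
    tuple(+1 if j == i else -1 for j in range(len(factors)))
    for i in range(len(factors))
]  # 4 runs - main effects only, no interaction information
```

The OFAT plan is smaller, which is exactly why it suffices when all but one factor is effectively constant; the factorial plan earns its extra runs only when interactions might matter.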
  6. 1 point
    One Factor At a Time (OFAT) vs Design of Experiments (DOE):
    1. OFAT: hold all other factors constant and alter one factor's level. DOE: multiple factors (2 or more) can be manipulated together.
    2. OFAT: sequential, one factor at a time. DOE: simultaneous, with multiple factors.
    3. OFAT: the experimenter decides the number of experiments to be conducted. DOE: the number of experiments is selected by the design itself.
    4. OFAT: interactions among the factors CANNOT be estimated. DOE: interactions are estimated systematically.
    5. OFAT: the design is the experimenter's decision. DOE: factorial designs (full and fractional).
    6. OFAT: low precision. DOE: the estimate of each factor's effect has high precision.
    7. OFAT: high chance of a false optimum (when 2+ factors are considered), which can mislead. DOE: high chance of finding the true optimum.
    8. OFAT: sometimes used to estimate curvature in factors. DOE: if there is curvature, estimation is done by augmenting into a central composite design.
    9. OFAT: domino effect; if one experiment goes wrong, the results are inconclusive. DOE: orthogonal design, easy to predict and draw conclusions from.

    It is sensible to say DOE is superior to OFAT, as we can save time and don't have to perform multiple tests/experiments. Let's see how designed experiments take the upper hand against OFAT with an example: 3 factors in 15 runs. A few interpretations, with reference to the above diagram:
    1. In DOE, we can estimate the interactions between the factors, but not in OFAT.
    2. In DOE, prediction is better, as the experimental runs have better data spread than OFAT with the same number of experimental runs.
    3. Curvature determination is better, as DOE covers the entire spectrum compared to OFAT, and for that matter Response Optimisation is also better in designed experiments.
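The claim that OFAT cannot estimate interactions can be illustrated with a tiny 2^2 factorial fit (the response values are invented, purely for illustration). The interaction effect falls out of the contrasts only because the design varies both factors together:

```python
# 2^2 full factorial in coded units, with an invented (illustrative) response
runs = [  # (A, B, response y)
    (-1, -1, 10.0),
    (+1, -1, 14.0),
    (-1, +1, 11.0),
    (+1, +1, 19.0),
]

responses = [y for a, b, y in runs]

# Effect estimate via contrasts: response weighted by the coded column
def effect(column):
    return sum(c * y for c, y in zip(column, responses)) / len(runs)

a_col  = [a for a, b, y in runs]
b_col  = [b for a, b, y in runs]
ab_col = [a * b for a, b, y in runs]  # interaction column

main_a      = effect(a_col)    # 3.0
main_b      = effect(b_col)    # 1.5
interaction = effect(ab_col)   # 1.0 - invisible to OFAT, which never varies A and B together
```

An OFAT plan never visits the (+1, +1) corner, so the A*B column is confounded and the 1.0 interaction effect cannot be separated from the main effects.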