

Popular Content

Showing content with the highest reputation on 03/13/2020 in all areas

  1. 2 points

    From the album: Jan-Mar 2020

    This is a group photograph of the Black Belt Workshop held in Mumbai, February-March 2020.

    © Benchmark Six Sigma

  2. 1 point
    Process Capability Assessment is the key step in the Measure phase, where the Baseline Metric is calculated. The following metrics can be used for the assessment:

    1. Sigma Level Long Term (Zlt or Zoverall) and Sigma Level Short Term (Zst or Zwithin)
    2. Pp and Ppk (using the overall standard deviation) and Cp and Cpk (using the within standard deviation)
    3. DPMO, DPU and Defective %

    Zwithin uses the within standard deviation for its calculation, while Zoverall uses the overall standard deviation. The difference between within and overall standard deviation lies in how you treat the collected data. If the entire data set (or the population data) is used as one sample, the result is the overall standard deviation. If we divide the data into rational subgroups, we get the within standard deviation (also known as the pooled standard deviation).

    Another common way to state the difference: the within standard deviation reflects only common cause variation, while the overall standard deviation reflects both common cause and special cause variation. Sub-grouping (rational subgroups) is the collection of data under similar process conditions, which results in less variation within each subgroup, leading to the following relationship:

    within standard deviation < overall standard deviation

    The following are a few scenarios where sub-grouping is NOT preferred:

    1. Rational subgroups do not make sense when working with discrete data. E.g. suppose we form weekly subgroups while collecting data on defects. If a particular week has no defects (unlikely, but still possible), the within standard deviation for that week will be 0. Hence it does not make much sense to use sub-grouping when dealing with discrete data. On the contrary, one should check for the possibility of sub-grouping in the case of continuous data.
    2. A consistent, standardized process that does not change very often. E.g. temperature control for stem cells. Assuming the temperature is maintained at -4 Celsius, it is unlikely to show much common cause variation. In such cases, even if we do sub-grouping, the within and overall variation will be more or less the same (unless a special cause was present).
    3. The project scope deals with a specific product or service delivered to a specific client. E.g. the delivery time of the same kind of pizza, from only one pizza outlet, to a specific corporate customer (assuming this corporate customer orders almost daily and orders the same pizza every time from the same outlet).
    4. All process inputs are well controlled. If all the process inputs are well controlled, there is less chance of variation in the process, and in such a scenario one could avoid rational sub-grouping. The closest example I can think of is the process of making a burger at McDonald's: all the process inputs are well controlled, and hence we get the same taste every time. One could argue that it is not a perfect example, and I tend to agree, because it is very difficult to find a process where all inputs can be controlled. There will always be fatigue, wear and tear, etc. As they say, there is no "perfect process".

    The important thing to note here is that irrespective of whether you do sub-grouping or not, one should be consistent with the approach when doing a pre vs. post project comparison. If you baselined with Zwithin, then compare the improvement with Zwithin only.

    P.S. - If all of this is too tedious, one could simply use the empirical relationship Zwithin = Zoverall + 1.5 (however, one should remember that if the data is continuous, both of these can be determined independently as well).
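    To make the within vs. overall distinction concrete, here is a small sketch in Python. The subgroup values, the upper specification limit of 13, and the variable names are all hypothetical, invented for illustration; the within standard deviation is computed as the pooled standard deviation across subgroups (variances weighted by their degrees of freedom), which is the pooling idea described above, and DPMO is estimated from the long-term sigma level under a normality assumption.

    ```python
    import math
    import statistics
    from statistics import NormalDist

    # Hypothetical data: cycle times (minutes) collected in four weekly
    # rational subgroups of five observations each.
    subgroups = [
        [12.1, 11.8, 12.3, 12.0, 11.9],
        [12.4, 12.2, 12.6, 12.3, 12.5],
        [11.7, 11.9, 11.6, 11.8, 12.0],
        [12.2, 12.0, 12.1, 12.4, 12.3],
    ]
    all_data = [x for sg in subgroups for x in sg]

    # Overall standard deviation: treat the entire data set as one sample.
    overall_sd = statistics.stdev(all_data)

    # Within (pooled) standard deviation: pool the subgroup variances,
    # weighting each by its degrees of freedom (n_i - 1).
    pooled_num = sum((len(sg) - 1) * statistics.variance(sg) for sg in subgroups)
    pooled_den = sum(len(sg) - 1 for sg in subgroups)
    within_sd = math.sqrt(pooled_num / pooled_den)

    # Sigma levels against an assumed upper spec limit (USL) of 13 minutes.
    usl = 13.0
    mean = statistics.mean(all_data)
    z_overall = (usl - mean) / overall_sd   # long-term sigma level (Zlt)
    z_within = (usl - mean) / within_sd     # short-term sigma level (Zst)

    # DPMO estimated from the long-term sigma level (normality assumed).
    dpmo = (1 - NormalDist().cdf(z_overall)) * 1_000_000

    print(f"overall SD = {overall_sd:.4f}, within SD = {within_sd:.4f}")
    print(f"Zoverall = {z_overall:.2f}, Zwithin = {z_within:.2f}, DPMO ≈ {dpmo:.0f}")
    ```

    Because the weekly subgroup means drift relative to each other, the pooled within standard deviation comes out smaller than the overall one, so Zwithin exceeds Zoverall, which is exactly the "within standard deviation < overall standard deviation" relationship stated above. (Note that statistical packages such as Minitab may apply small-sample unbiasing constants to the within estimate, so their figures can differ slightly from this plain pooled calculation.)
    
    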