Popular Content

Showing most liked content since 02/19/2018 in all areas

  1. 1 point
    DPMO and PPM: DPMO is Defects Per Million Opportunities (defects is the key word), while PPM is defective Parts Per Million (defective is the key word). Consider a process/product where a single unit of output has multiple opportunities for error/defect/failure (A) versus a process/product where a single unit of output has limited opportunities for error (B). For scenario A, DPMO is more suitable because it accounts for every defect that occurs in a unit. For example, a data entry form (a unit of output) might have 50 fields to enter; each field can be considered an opportunity for the operator to make a mistake (a defect), so accounting for each defect becomes important. But why? First, when we process millions of transactions, if we do not track exactly where we repeatedly go wrong within the 50 fields, it becomes extremely difficult to prioritise what to work on. Secondly, there is effort in entering every field accurately, so the DPMO measure brings out the actual score that reflects the accuracy of the work. PPM, on the other hand, measures defectives: in the same example, if the operator enters 49 fields correctly and 1 wrongly, the whole form is counted as defective. What about the 49 correct fields? That is the problem - PPM does not account for them; it only says the form is defective.
    Scenario A: 100 units with 50 fields per form and 50 defects in total (imagine 1 defect in each form). DPMO = (50 / (100 x 50)) x 1,000,000 = 10,000, i.e. 99% field-level accuracy.
    Scenario B: the same 100 units with 50 fields per form, but counted as 50 defectives (imagine 1 defect in each form). PPM = (50 / 100) x 1,000,000 = 500,000, i.e. only 50% of the forms are good.
    Therefore, I feel that DPMO rather than PPM will best suit situations where there are multiple opportunities for error in a unit of output.
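    A minimal sketch in Python (illustrative only; the helper names dpmo and ppm are hypothetical, not from the original post) of the two calculations above:

    ```python
    # Sketch of the DPMO vs PPM calculations from the two scenarios above.

    def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
        """Defects Per Million Opportunities."""
        return defects / (units * opportunities_per_unit) * 1_000_000

    def ppm(defectives: int, units: int) -> float:
        """Defective units per million."""
        return defectives / units * 1_000_000

    # Scenario A: 100 forms, 50 fields each, 50 defects in total
    print(dpmo(defects=50, units=100, opportunities_per_unit=50))  # 10000.0 -> 99% field accuracy

    # Scenario B: the same 50 defects, one per form, so 50 of the 100 forms are defective
    print(ppm(defectives=50, units=100))                           # 500000.0 -> 50% of forms defective
    ```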
  2. 1 point
    Long term performance is to be treated differently from short term performance because additional sources of variation can enter the process over time - for example (but not limited to) new employees in the process, changes in measurement standards, and/or other abnormalities that creep into the process. Since the scope for this to happen while measuring short term performance is smaller, and since the long term is effectively made up of many short term subsets, performance will most probably come down in the long run. This is also reflected in the calculation of short and long term sigma, where the long term sigma level is conventionally obtained by subtracting 1.5 (one point five sigma) from the short term sigma level, as sketched below.
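    A minimal sketch in Python (illustrative only, using assumed example values and scipy for the normal tail probability) of the conventional 1.5 sigma short-term to long-term conversion:

    ```python
    # Sketch of the conventional 1.5 sigma shift between short term and long term performance.
    from scipy.stats import norm

    def long_term_sigma(short_term_sigma: float, shift: float = 1.5) -> float:
        """Long term sigma level = short term sigma level - 1.5 (by convention)."""
        return short_term_sigma - shift

    def dpmo_from_sigma(sigma_level: float) -> float:
        """One-sided defect rate (per million opportunities) implied by a sigma level."""
        return norm.sf(sigma_level) * 1_000_000

    z_st = 6.0                                      # assumed short term sigma level
    z_lt = long_term_sigma(z_st)                    # 4.5 after the 1.5 sigma shift
    print(z_lt, round(dpmo_from_sigma(z_lt), 1))    # 4.5, ~3.4 DPMO in the long run
    ```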
  3. 1 point
    Q 71. Some of the commonly used measures of customer satisfaction are given below:
    NPS (Net Promoter Score) - loyalty and referral check.
    C-SAT (Customer Satisfaction Index) - satisfaction attained from the use of the product/service.
    Churn (Customer Churn Rate) - customer loss assessment.
    CAC (Customer Acquisition Cost) - all the costs spent on acquiring more customers (marketing expenses) divided by the number of customers acquired in the period the money was spent.
    CES (Customer Effort Score) - customer effort assessment in getting work done / issues resolved.
    What will be your ranking of the five metrics (NPS, C-SAT, Churn, CAC, CES) in order of importance for the performance of an app-based cab service provider, and why?
    This question is part of the Excellence Ambassador initiative and is open for 3 days. There is a reward of 2000 points for the best answer in 3 days. All rewards are mentioned here - https://www.benchmarksixsigma.com/forum/excellence-ambassador-rewards/. The scoreboard is here - https://www.benchmarksixsigma.com/forum/business-excellence-scoreboard/. All questions so far can be seen here - https://www.benchmarksixsigma.com/forum/lean-six-sigma-business-excellence-questions/
  4. 1 point
    This is a multiple choice question carrying 100 points for the right answer. Flash quiz closes in four hours - 6 PM IST on 29th January 2018.
  5. 1 point
    No, not at all. Completing a DMAIC or DMADV project will undoubtedly give great confidence to any professional in this field - "hands-on experience", so to say. Such experience is definitely valuable and will be an added advantage when hiring an improvement manager in any organisation. However, if the question is "Is it essential or mandatory that the candidate should have completed a DMAIC or DMADV project?", then the clear answer is no. The role of a Lean Six Sigma Black Belt professional as an improvement manager demands that the candidate be able to identify the right kind of projects to begin with and then mentor both function-specific and cross-functional projects. This calls for varying levels of skill - leadership, communication and interpersonal (soft) skills - and a rock-solid knowledge base covering not only DMAIC or DMADV but also the maturity and ability to understand the business in totality, a vantage point that is built over years of experience. Completion of one or two Six Sigma projects does not by itself qualify or ensure that a candidate is fit for the role of improvement manager, though it can be an added advantage for some deserving candidates when they are assessed for the role.
  6. 1 point
    There are many situations where we really require zero defects, like the already mentioned "surgical setup" or a "plane landing". The question here, though, is not whether zero defects are required; it is "Are zero defects achievable?" When we say zero defects, what do we mean?
    1. Absolutely no defect from a process - for how long a duration? Forever?
    2. Are we drawing some upper and lower tolerance on % defects or DPMO, such that as long as the defect rate falls within a service level agreement, we accept it as zero defects?
    3. If we are talking about a particular product on which multiple defects can manifest, are we referring to the non-occurrence of one particular defect, or do we mean that no defect type at all should occur?
    4. Are we referring only to the final output? Are we okay with in-process defects as long as the final outcome has zero defects?
    5. When we say zero defects, are we ignoring other factors like delivery time, processing cost, productivity, etc.?
    WHAT IS ZERO DEFECT? DEFINE IT.
  7. 1 point
    About Baseline
    One of the requirements of the Measure phase in the Six Sigma DMAIC cycle is the baseline measurement, sometimes expressed as Baseline Sigma. In fact, it is hard to tell whether the baseline data is required as part of the Define phase or the Measure phase. Ideally, the problem statement is expected to cover What, When, Magnitude and Impact. The 'When' portion is expected to show the metric related to the problem over a time period as a trend chart, so that we can see the magnitude of the problem and its variation over time - and this acts as a baseline. The baseline certainly serves as a reference against which to compare and assess the extent of improvement; it is important for getting a good measure of the quantum of improvement and, in turn, for quantifying the benefits in tangible terms. However, the following discussion brings out certain practical challenges related to baselines.
    1. The baseline metric did not exist - is it worth creating it after the fact? Suppose we are trying to improve an electronic product based on certain customer complaints; our project objective will be to ensure that the incidence of customer complaints is reduced or eliminated. Upon subjecting the product to a special lab evaluation, we could simulate the failure. However, a reasonable baseline metric will be possible only if we subject a set of sample units to testing for a certain period of time. This could prove quite costly and time consuming. On the other hand, the solution to the problem is known and we may proceed with the actions. Since our goal is to ensure zero failures under the given conditions and duration, comparison with a baseline is not important here. Many a time, when the company is anxious to implement the improvement to get the desired benefits, be it cost or quality, it may not make much sense to build up baseline data unless it is readily available.
    2. A new measurement methodology evolved as part of the improvement. Let's take an example of insurance claims processing, where the payment/denial decisions are taken based on a set of rules and associated calculations. The improvement being sought is to reduce the rate of processing errors. However, it was only as part of the improvement actions that an appropriate assessment tool was evolved to identify and quantify the errors made by the processors. By this time, the improvement has already begun and it is not practically possible to apply this tool retrospectively and obtain a baseline measurement.
    3. When the improvement is for 'delight factors'. Often we introduce enhancement features on a product, for example new models/variants of smart phones. In such cases, the emphasis is more on delight factors for customers - features that they haven't experienced earlier - and any baseline comparison may not have much relevance.
    4. An integrated set of modifications. Let's examine another scenario where a series of modifications were implemented on a software application and released together as a new version. Here, the set of actions taken influenced multiple factors, including performance improvement, elimination of bugs and inclusion of new innovative features. In such situations, any comparison of baseline performance with current performance will be very difficult, and the impacts would overlap. If we still need to do a before-vs-after comparison, we may have to do so after factoring in and adjusting for such interaction effects on the pre/post improvement outcomes.
    To conclude, a baseline metric is, in general, an important piece of information that we require in order to compare against the post-improvement results. However, it has to be borne in mind that certain situations challenge the feasibility and relevance of using a baseline measurement.
  8. 1 point
    SJ has explained this in detail. Two more lines to summarise: with a two-sided specification limit, Cpk multiplied by three is the smaller of the two one-sided sigma levels (sigma level upper and sigma level lower). In the example above, Cpk = 1 and Cpk multiplied by three is the Sigma Level (Upper).
  9. 1 point
    Dear Ari/Ramabadran, you have rightly pointed out that Cpk cannot be translated directly into sigma levels. Here are some additional points on that topic. Cpk (unlike Cp) includes both variation and shift to calculate the process capability. The traditional formula for Cpk (for normally distributed process data) is:
    Cpk = min [ (USL - Xbar)/(3*S), (Xbar - LSL)/(3*S) ]
    where LSL and USL are the Lower and Upper Specification Limits (as determined from the customer), Xbar is the average of the process data, and S is the sample standard deviation. When we look at the formula, we see that we compute the minimum. This means we look at the defects greater than the USL on one side and the defects less than the LSL on the other side, and pick only the worst case (a smaller process capability number relates to a larger number of defects). So, if a process had 10,000 PPM defects to the left of the LSL and 20,000 PPM defects to the right of the USL, Cpk would look at both and compute the process capability number related to 20,000. As pointed out earlier, Sigma Level (Bench) looks at defects on both sides and adds them up. So it is not directly possible to translate Cpk numbers into sigma levels. However, by assuming that the worst-case defect rate occurs on both sides, we can obtain a conservative estimate of the sigma level if we so desire.
    Example: Let's say LSL = 11.5, USL = 18, Xbar = 15, S = 1.
    Cpk = min[ (18 - 15)/(3*1), (15 - 11.5)/(3*1) ] = min[1, 1.167] = 1.0
    CpU (only looking at USL) = 1.0
    CpL (only looking at LSL) = 1.167
    PPM_U (only looking at USL) = 1350
    PPM_L (only looking at LSL) = 232
    PPM_Total (both sides) = 1582
    Sigma_U = 3.0
    Sigma_L = 3.5
    Sigma_Bench = 2.95
    Conservative estimate for Sigma_Bench based on Cpk = 1.0 (assume both sides have PPM = 1350, i.e. a centered process): Total PPM = 2700 => Sigma_Bench = 2.78.
    CONCLUSION: CpU * 3 and CpL * 3 give the sigma level on each side (Z_USL and Z_LSL respectively). Cpk cannot be directly translated into a sigma level for bilateral tolerances, as it only looks at the worst side. Of course, if it is a unilateral tolerance, then we can predict sigma levels from Cpk numbers.
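    A minimal sketch in Python (illustrative only, using scipy; not part of the original answer) that reproduces the example numbers above:

    ```python
    # Sketch reproducing the example: LSL = 11.5, USL = 18, Xbar = 15, S = 1.
    from scipy.stats import norm

    LSL, USL, XBAR, S = 11.5, 18.0, 15.0, 1.0

    cpu = (USL - XBAR) / (3 * S)          # 1.0   (capability vs upper spec)
    cpl = (XBAR - LSL) / (3 * S)          # 1.167 (capability vs lower spec)
    cpk = min(cpu, cpl)                   # 1.0   (worst side only)

    ppm_u = norm.sf(3 * cpu) * 1e6        # ~1350 PPM beyond USL
    ppm_l = norm.sf(3 * cpl) * 1e6        # ~232 PPM below LSL
    ppm_total = ppm_u + ppm_l             # ~1582 PPM on both sides

    sigma_bench = norm.isf(ppm_total / 1e6)                 # ~2.95
    # Conservative estimate from Cpk alone: assume the worst-side PPM on both sides
    sigma_bench_conservative = norm.isf(2 * ppm_u / 1e6)    # ~2.78

    print(cpk, round(ppm_total), round(sigma_bench, 2), round(sigma_bench_conservative, 2))
    ```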