

Popular Content

Showing content with the highest reputation since 09/22/2017 in all areas

  1. 5 points
    By and large, we come across situations where we want the mean value of the outcome of a process (central tendency) to be centered on a specified target value with as little variation as possible (dispersion). There are situations where the variation assumes relatively higher importance than the central tendency, mostly because high variation is less tolerable than some shift in central tendency. Interestingly, there are also situations where variation, or controlled variation, is advantageous.

    Study of process potential: The process potential index Cp is used to study the variation, or spread, of a process with respect to specified limits. While studying process potential, we are interested in the variation and not in the central tendency. The underlying idea is that if the process is able to maintain its variation within the specified limits, it is considered to possess the required potential; the centering of the mean can usually be achieved by setting adjustments. In other words, if Cp is not satisfactory, a satisfactory Cpk (process capability) can never be achieved, since Cpk can never exceed Cp; it can at best equal Cp.

    Examples where variation is generally considered more important than, or unfavorable to, the outcome:

    1. Analysis of Variance - While evaluating whether there is a significant difference between means (central tendency) for multiple sets of trials, as in ANOVA, the variation between sets and within sets is compared using F tests. In such situations, the comparison of variation assumes high importance.

    2. Relative grading systems - Many competitive examinations use the concept of a 'percentile', which is actually a relative grading system. Here, more than the absolute mark scored by a student, the relative variation from the highest mark matters, so relative variability becomes the key deciding factor.

    3. Control chart analysis - While studying a process using a control chart, instability and variation are given importance first. Only when these parameters are under control can we meaningfully study the 'off-target' component, i.e. the central tendency.

    4. Temperature variation in a mold - In certain compression molding processes, temperature variation across different points on the surface of the mold does more harm than the mean temperature. The mean temperature is permitted a wider tolerance, but variation across the mold warps the product.

    5. Voltage fluctuations - Many electrical appliances get damaged by high variation (fluctuation) in voltage, even though the mean voltage (central tendency) is maintained.

    Examples where controlled variation is favorable:

    1. Load distribution in a ship - While loading a ship, the mean value of the load can vary, but the distribution of the load is more important for maintaining the balance of the ship on water.

    2. The science of music - Those who understand the science of music would agree that, more than the base note, the appropriate variation of the other notes with respect to the base note is what produces good music.

    Examples where variation is favorable:

    1. Systematic Investment Plans (SIPs) take advantage of the variation in NAVs to accumulate wealth. Here, even an adverse shift of the central tendency is compensated by the variation!

    2. A law of physics states that Force = Mass x Acceleration (F = ma). If we consider speed as the variable, it is the change of speed that determines the force; the mean speed (central tendency) has little relevance.
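As a numerical sketch of the Cp/Cpk relationship described above, the following Python snippet (with made-up data and specification limits) shows that shifting a process off-center leaves Cp unchanged while Cpk drops, and that Cpk never exceeds Cp:

```python
import statistics

def process_indices(data, lsl, usl):
    """Compute Cp (potential) and Cpk (capability) for a sample.

    Cp looks only at spread versus the specification width;
    Cpk also penalizes off-center processes, so Cpk <= Cp always.
    """
    mean = statistics.mean(data)
    # Sample standard deviation as a stand-in for process sigma
    # (a real study would estimate sigma from a control chart).
    sigma = statistics.stdev(data)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# Illustrative data: a well-centered process, then the same process
# shifted off-center by a constant (spread is unchanged).
centred = [9.9, 10.1, 10.0, 9.8, 10.2, 10.0, 9.9, 10.1]
shifted = [x + 0.4 for x in centred]

cp1, cpk1 = process_indices(centred, lsl=9.0, usl=11.0)
cp2, cpk2 = process_indices(shifted, lsl=9.0, usl=11.0)
# Cp is unchanged by the shift; Cpk drops because the mean moved off target.
```

Note that for the perfectly centered data, Cp and Cpk coincide; the shift only hurts Cpk, which is exactly why an unsatisfactory Cp can never be rescued by centering adjustments.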
  2. 3 points
    This is a multiple choice question carrying 100 points for the right answer. Closes at 11 PM IST tonight.
  3. 3 points
    Q37 - The seven wastes of Lean is a great concept and has been an eye-opener for many professionals. Let us assume that a leadership/ business ownership team member asks you - "What are some of the ways we can put this concept to good use in the organization?" What would you say? This question is a part of the November Episode and can be answered by approved Excellence Ambassadors till 10 PM on November 3, 2017. There are many rewards amounting to 0.5 million INR or more. Just being regular here earns you a reward. Even a streak of 3 great answers can get you a reward. All rewards are mentioned here - https://www.benchmarksixsigma.com/forum/excellence-ambassador-rewards/ All questions so far can be seen here - https://www.benchmarksixsigma.com/forum/lean-six-sigma-business-excellence-questions/
  4. 3 points
    While continuous data is generally preferred over discrete data, please indicate circumstances where discrete is the preferred data type although continuous data is available for the same characteristic. This question is a part of Excellence Ambassador - Episode 2 - World's best Business Excellence Dictionary Project and can be answered only by registered participants. To know how to register, kindly visit Dictionary in the menu above.
  5. 3 points
    Excellence: Excellence is defined as the quality of being extremely good.

    So what is Personal Excellence? In simple words, it is setting the bar higher (the benchmark) in whatever activities the individual (who is compared with the rest) does.

    Process Excellence: Providing an environment where the processes are highly stable and controlled, with minimal or no variation and minimal or no wastage (Muda). The focus is on continuous improvement to ensure that processes remain highly stabilized.

    Operational Excellence: It reflects how you, as a person, unit, team or organisation, excel at parameters such as cost, human resources, scope, time, quality, etc. By excelling at these, the provider of a service can deliver value to the customer with optimal or maximum efficiency.

    Business Excellence: It is the means through which you run your business with effective strategies, efficient business plans and best business practices, so that optimal results are achieved at a sustained rate.

    How each one is related to the other(s): Personal Excellence is directly tied to Process Excellence. Only if the individual is interested in adhering to the processes laid out can process excellence, or for that matter any other initiative, be successful. If the cultural shift or mindset is not there in the individual or team, no change will work. This can be represented by the formula: Quality of the solution (Q) x Acceptance of the solution (A) = Effectiveness of the solution (E). Unless there is acceptance (which is the human part), nothing can be done. So if the individual has the desire to excel at his or her work, he or she will strive to ensure that the organization achieves Process Excellence.

    Process Excellence provides a way for continuous improvement. Its purpose is to streamline all the processes, make them stable and, in doing so, achieve a minimal degree of variation and minimal wastage.

    With a process excellence system in place, grey areas in Operational Excellence and Business Excellence can be identified and rectified. Practically, it is difficult to achieve excellence in one when another is absent. For instance, Business and Operational Excellence require process improvements; if streamlining does not happen, there is no excellence in the business or operational aspects either. Similarly, without human involvement and an elevated individual mindset, it becomes difficult to run the processes at a top-notch level.

    From an organisational perspective, the organisation should:

    - Provide a conducive work environment in which individuals are encouraged to share their ideas and thoughts, creating transparency and a sense of ownership of the unit's problems and constraints (Personal Excellence)
    - Encourage individuals to showcase their creativity in designing and providing solutions to problems (Personal Excellence)
    - Create challenging contests and reward people in categories such as best creativity, best solution, optimal solution, etc. (Personal Excellence)
    - Set up process standards and metrics for each parameter to define the expectation; set the upper and lower limits as well as the customer specification limits (Process Excellence)
    - Conduct awareness sessions on process expectations with reasoning and justification, providing details with SMART goals (Process Excellence)
    - Ensure that individuals and teams adhere to the standards, with constant monitoring through audits, inspections and reviews (Process Excellence)
    - Look for continuous improvement opportunities periodically and adjust the process baseline if required (Process Excellence)
    - Define the operational parameters that require excellence (Operational Excellence)
    - Conduct awareness sessions for key stakeholders on those operational parameters and provide a plan for when and how to achieve them (Operational Excellence)
    - Track the status of operational excellence through project management reviews, status reports and similar artefacts, and address deviations (Operational Excellence)
    - Preserve the best practices that were followed to achieve Operational Excellence (Operational Excellence)
    - Define the strategies and plans needed to improve business results (Business Excellence)
    - Define the best practices for getting business-oriented goals and activities done (Business Excellence)
    - Conduct confidential meetings with key stakeholders, present the envisaged plan and convey your expectations (Business Excellence)
    - Conduct monthly or quarterly review meetings with the respective units and review the four-quarter dashboard (Business Excellence)
    - Use the Business Management section of the Customer Satisfaction Survey to see whether the organisation is on target with its objectives (Business Excellence)
    - Document the outcomes of the business results and the effective means used to achieve them (Business Excellence)
  6. 2 points
    Q 49. What is the difference between Lead Time and Cycle Time? What is the reason for confusion between the two definitions?

    Cycle time and lead time are generic terms that often get confused in usage and in how they represent the work. Some people call the average time taken to complete a chart the production lead time, not the cycle time; others call it the cycle time. Hence, understanding what each term stands for is very important to avoid such confusion. These confusions lead to wrong data collection and poor decision making.

    Definitions:
    1. Cycle time - the time taken to complete the production of one unit from start to finish. It is based on the work process. CT = Net production time / number of units produced.
    2. Takt time - the rate at which you have to complete production in order to meet customer requirements. It is based on customer demand. TT = Net available production time / customer demand.
    3. Lead time - the time taken for one unit to pass through the multiple processes of the operation from front to end, i.e. from the order being received to payment being received. LT = time from order to dispatch.

    Difference between cycle time and lead time:

    Aspect | Cycle time | Lead time
    Definition | The time it takes to complete the production of one unit from start to finish. | The time it takes for one unit to make its way through your operation, from taking the order to receiving payment.
    Meaning | Starts when the actual production work begins and ends when the unit is ready for delivery. | Measures the time elapsed between the order and delivery to the customer.
    Perspective | Measured from the organization's perspective. | Measured from the customer's perspective.
    What it measures | The work completion rate; more of a mechanical process capability. | The arrival rate and the customer's waiting time.
    Units | Amount of time per unit (e.g. minutes per customer, parts per hour). | Minutes, hours, etc.
    Relationship | Related to work in progress within the unit. | Related to work in progress across the whole operation.
    VA / NVA | Segregates value-add activity time from non-value-add. | Includes both VA and NVA time.
    If one is higher | If CT is higher than takt time, customer demand is not met. | If lead time is much higher than CT, inventory is high.

    Example: A train manufacturer offers custom-manufactured replacement parts to customers. When an order is placed, it goes through several internal business processes, each with its own cycle time, including order processing, manufacturing and delivery. The lead time is the sum of these cycle times plus a delay of two days due to a manufacturing backlog.

    Conclusion: Cycle time and lead time are two different quantities seen from different stakeholders' perspectives. Both are related through net production, work in progress, etc., but lead time is measured from the customer's point of view while cycle time is measured from the internal process point of view. Both need to be well understood, with their limitations, before use. To me, the word 'production' is what puts many people into confusion mode.

    Another example: in a coding company, a client provides a batch at 8 am for the company to code and return. If the company delivers the completed batch at 8 pm, the lead time for this process is 12 hours. But when the batch start and end times are noted, the cycle time taken to complete the batch is only 2 hours. This shows that the inventory (waiting) is high; the company was presumably engaged on other client work in between. Hence, understanding these concepts is very important for defining the data collection process and for valid decision making.

    Thanks
    Kavitha
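The coding-company example above can be expressed with a few timestamps (the exact work start and end times are hypothetical); the gap between lead time and cycle time is the waiting time that signals work-in-progress piling up:

```python
from datetime import datetime

# Timestamps from the coding-company example above: the client hands
# over a batch at 08:00 and receives it back at 20:00, but (assumed
# here) the actual processing only ran from 14:00 to 16:00.
order_received = datetime(2017, 11, 1, 8, 0)
work_started   = datetime(2017, 11, 1, 14, 0)
work_finished  = datetime(2017, 11, 1, 16, 0)
delivered      = datetime(2017, 11, 1, 20, 0)

lead_time  = delivered - order_received    # customer's view: 12 hours
cycle_time = work_finished - work_started  # process view: 2 hours

# The gap between the two is waiting / queue time (here, 10 hours),
# the signal of inventory sitting idle between process steps.
waiting = lead_time - cycle_time
```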
  7. 2 points
    Q 44. Can a Type 1 Error in one situation be considered a Type 2 Error in a different situation? In other words, can the Null Hypothesis statement for one situation be the same as the Alternative Hypothesis for another situation?

    Null Hypothesis: Commonly denoted H0. This is typically the default position taken by the researcher, stating that there is no interaction between the variables; hence it is called the null hypothesis.

    Alternative Hypothesis: Denoted H1. The opposite of the null hypothesis, also called the researcher's hypothesis: it is the researcher's prediction, tested for the existence of a relationship between the variables.

    Significance: Statistical tests are done to determine whether the relationship is significant, i.e. whether the difference in the results is unlikely to be due to random chance.

    Type I and Type II errors: No hypothesis test gives 100% certainty for decision making. Because the decision is based on a probability value, there is a chance of making a wrong decision. Two types of error are possible:
    Type I error - rejecting the null hypothesis when it is true. Its probability is the level of significance (alpha).
    Type II error - failing to reject the null hypothesis when it is false. Its probability (beta) determines the power of the test (power = 1 - beta).

    Decision based on sample | H0 is true | H0 is false
    Fail to reject H0 | Correct decision (probability = 1 - alpha) | Type II error - fail to reject H0 when it is false (probability = beta)
    Reject H0 | Type I error - rejecting H0 when it is true (probability = alpha) | Correct decision (probability = 1 - beta)

    Negations: Hypothesis statements come in negated pairs:
    Null hypothesis: "x is equal to y." Alternative hypothesis: "x is not equal to y."
    Null hypothesis: "x is at least y." Alternative hypothesis: "x is less than y."
    Null hypothesis: "x is at most y." Alternative hypothesis: "x is greater than y."

    Example of null and alternative hypotheses with the two types of error:
    Null hypothesis (H0): mu1 = mu2 - the two medications are equally effective.
    Alternative hypothesis (H1): mu1 is not equal to mu2 - the two medications are not equally effective.

    In this example, the errors would be defined as:
    Type I error - the physician rejects the null hypothesis and concludes that the two medications are different when actually they are not.
    Type II error - the physician fails to reject the null and concludes that the two medications are the same when actually they are not. A Type II error here can be serious, even life threatening.
    Having considered the consequences and relative seriousness of committing each type of error, the decision criteria are set accordingly.

    Reference: http://support.minitab.com/en-us/minitab-express/1/help-and-how-to/basic-statistics/inference/supporting-topics/basics/type-i-and-type-ii-error/

    Another example:
    Null hypothesis - the Earth is not at the centre of the universe.
    Alternative hypothesis - the Earth IS at the centre of the universe.
    In such statements, instead of only proving the favourable condition, you first have to disprove the null: demonstrating that the null deserves rejection is just as important as accepting the alternative, because it shows that the study or experiment conducted is sound. Proving only that the alternative looks plausible, without showing that the null should be rejected, would be a flawed procedure.
    Type I error: the astronomer watches the sky over many nights, concludes that all the other planets revolve around the Earth, and hence that the Earth is at the centre of the universe. The alternative is accepted and the null is rejected, even though the null is actually true.
    Type II error: if the Earth really were at the centre (i.e. the null were false) and the astronomer still concluded that it is not, he would fail to reject a false null - a Type II error.

    Conclusion: The hypothesis statements depend on the situation we study, and disproving one hypothesis is just as important as accepting the other. Typically the null hypothesis says that nothing new has happened, whether before versus after, or after the solution is implemented: the difference is equal to 0. Generally, the existing claim is taken as true until proven otherwise; to prove otherwise, we must show evidence sufficient to reject the null hypothesis. To answer the second part: a null hypothesis can never simply be swapped with an alternative hypothesis, since the null is the default statement being nullified, whereas the alternative is its opposite in nature; they are not equivalent statements - the alternative can state a greater or lesser effect under study.

    Thanks
    Kavitha
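The statement that the Type I error probability equals the significance level alpha can be checked with a quick simulation: if we repeatedly test two samples drawn from the same population (so H0 is true by construction), roughly 5% of tests at alpha = 0.05 will reject. A rough sketch in Python, using a pooled t statistic and the standard critical value for 18 degrees of freedom:

```python
import random
import statistics

random.seed(42)

def t_statistic(a, b):
    """Pooled two-sample t statistic (equal sample sizes assumed)."""
    n = len(a)
    var_pooled = (statistics.variance(a) + statistics.variance(b)) / 2
    return (statistics.mean(a) - statistics.mean(b)) / ((2 * var_pooled / n) ** 0.5)

# Simulate many experiments where H0 is TRUE (both samples come from
# the same population). Any rejection is, by construction, a Type I error.
alpha_cutoff = 2.101  # two-sided critical t for alpha = 0.05, df = 18
rejections = 0
trials = 2000
for _ in range(trials):
    a = [random.gauss(10, 1) for _ in range(10)]
    b = [random.gauss(10, 1) for _ in range(10)]
    if abs(t_statistic(a, b)) > alpha_cutoff:
        rejections += 1

type_i_rate = rejections / trials  # hovers near alpha = 0.05
```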
  8. 2 points
    Q 39 - Is there anything called a mature process? When do you say that a process has good maturity? If a process is supposed to be improved or redesigned periodically, does an assessment of the maturity of a process carry any significance? (To get more clarity on the third question above, please read on - We like to believe that there is always scope for process improvement. Let us assume that a process is assessed and found to have high maturity. If this process is highly mature, does it mean it has little scope for improvement or redesign? On the other hand, if this high-maturity process is being improved (or redesigned), does it mean that it was not highly mature in the first place?) This question is a part of the November Episode and can be answered by approved Excellence Ambassadors till 10 PM on November 7, 2017. There are many rewards amounting to 0.5 million INR or more. Just being regular here earns you a reward. Even a streak of 3 great answers can get you a reward. All rewards are mentioned here - https://www.benchmarksixsigma.com/forum/excellence-ambassador-rewards/ All questions so far can be seen here - https://www.benchmarksixsigma.com/forum/lean-six-sigma-business-excellence-questions/
  9. 2 points
    About Baseline

    One of the requirements of the Measure phase in the Six Sigma DMAIC cycle is the baseline measurement, sometimes expressed as Baseline Sigma. In fact, it is hard to say whether the baseline data belongs to the Define phase or the Measure phase. Ideally, the problem statement is expected to cover the What, When, Magnitude and Impact. The 'When' portion is expected to show the metrics related to the problem over a time period as a trend chart, so that we can see the magnitude of the problem and its variation over time - this acts as a baseline. A baseline certainly serves as a reference against which to compare and assess the extent of improvement; it is important for getting a good measure of the quantum of improvement and, in turn, for quantifying the benefits in tangible terms. However, the following discussion brings out certain practical challenges related to baselines.

    1. The baseline metric did not exist - is it worth creating it after the fact? Suppose we are trying to improve an electronic product based on certain customer complaints; our project objective will be to ensure that the incidence of customer complaints is reduced or eliminated. By subjecting the product to a special lab evaluation, we could simulate the failure. However, a reasonable baseline metric would be possible only if we subjected a set of sample units to testing for a certain period of time, which could prove quite costly and time consuming. On the other hand, the solution to the problem is known and we may proceed with the actions. Since our goal is zero failures under the given conditions and duration, comparison with a baseline is not important here. Many a time, when the company is anxious to implement the improvement to get the desired benefits, be it cost or quality, it may not make much sense to build up baseline data unless it is readily available.

    2. A new measurement methodology evolved as part of the improvement. Take the example of insurance claims processing, where payment and denial decisions are taken based on a set of rules and associated calculations. The improvement being sought is to reduce the rate of processing errors. However, it was only as part of the improvement actions that an appropriate assessment tool evolved to identify and quantify the errors made by processors. By this time the improvement had already begun, and it is not practically possible to go back and use this tool to get a baseline measurement.

    3. When the improvement is for 'delight factors'. Often we introduce enhancement features on a product, for example new models or variants of smartphones. In such cases the emphasis is on delight factors for customers - features they have not experienced earlier - and a baseline comparison may not have much relevance.

    4. An integrated set of modifications. Consider a scenario where a series of modifications was implemented on a software application and released together as a new version. Here the set of actions taken influenced multiple factors, including performance improvement, elimination of bugs and inclusion of innovative new features. In such situations, any comparison of baseline performance with current performance is very difficult, with overlapping impacts. If we still need to compare before versus after, we may have to do so after factoring in and adjusting for such interaction effects on the pre- and post-improvement outcomes.

    To conclude, a baseline metric is, in general, important information that we require in order to compare post-improvement results. However, it has to be borne in mind that certain situations challenge the feasibility and relevance of using a baseline measurement.
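As a side note on how a Baseline Sigma figure is typically derived once the data does exist, here is a small sketch (the defect counts are hypothetical) converting a defect rate to DPMO and then to a sigma level with the conventional 1.5-sigma shift used in Six Sigma reporting:

```python
from statistics import NormalDist

def baseline_sigma(defects, units, opportunities_per_unit=1):
    """Convert a baseline defect count into DPMO and a sigma level.

    Adds the conventional 1.5-sigma shift to the z-value of the
    yield, as is common in Six Sigma sigma-level reporting.
    """
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    yield_frac = 1 - dpmo / 1_000_000
    sigma_level = NormalDist().inv_cdf(yield_frac) + 1.5
    return dpmo, sigma_level

# Hypothetical baseline: 387 claim-processing errors in 25,000 claims
dpmo, sigma = baseline_sigma(defects=387, units=25_000)
# dpmo = 15,480; sigma level works out to roughly 3.7
```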
  10. 2 points
    Some of the situations in which the "Push" system is generally successful would be one or more of the following. No two situations are the same, even if some appear similar.

    1. Demand is easily and accurately predictable. With an accurate forecasting system, the risk of carrying "dead" inventory is low. Moreover, by planning and pushing a steady volume to the market, the supply chain and production are also steadied, thereby eliminating delay losses.

    2. Conversion cost between products is low due to late-point differentiation. If, in spite of an accurate forecasting system, the final product type demanded differs, the stock of Product A can be converted to Product B at very low cost and pushed on to the market.

    3. A very short time is demanded from order to delivery. If the market or customer demands very short or instant delivery from the point an order is placed, there is no option except to supply from stock and avoid revenue losses due to short supplies.

    4. Products do not deteriorate during storage. When there is no constraint on shelf life, the risk of inventory having to be written off is low. Furthermore, inventory is used up sooner rather than later, reducing the cost of delays.

    5. Carrying cost is less than the cost of lost business. When a manufacturer is able to make up for the expense of carrying inventory by exploiting predictable demand, the likelihood of profiting "net-net" is high compared with the potential loss of business, customers and reputation from becoming Just-Short-Of-Time rather than Just-In-Time.

    6. Long, geographically global supply chains with their own unpredictability. Even with the best e-Kanban-powered pull system, a long, winding supply chain that traverses the globe is so packed with potential "delay bombs" that some good old stock, which can be pushed, becomes the life-saver.

    7. Shipping costs can be optimised by shipping in bulk. When the cost of transporting raw material, components or sub-assemblies can be whittled down to almost nothing by using up, say, full container space, stocking up and pushing is not a bad idea.

    8. Demand profiles across time periods are static. When there are no fluctuations between days of a week, weeks of a month and months of a year, it is profitable to stabilise production and supply chains by planning and pushing an average volume periodically to the market.
  11. 1 point
    I suppose everyone agrees that if one is not good with numbers, career growth is likely to face a serious roadblock at one stage or the other. I have noticed several people who fear mathematics, and this leads to certain problems in learning or applying Six Sigma. Many have already given up hope, assuming that they can never catch up. The good news, however, is that this weakness can be addressed by most people. It definitely needs a persistent effort to grasp the mathematics concepts that are really important. Some of these are Algebra, Data Handling, Decimals, Equations, Exponents and Powers, Fractions, Graphs, Integers, Mathematical Modelling, Mathematical Reasoning, Probability, Proportions, Ratios, Rational Numbers and Statistics. If you are one of those who felt this way and wish to improve your math, I can provide you a step-by-step approach which will broadly follow the sequence below.

    1. Plan study time for these topics.
    2. Use the uploaded material.
    3. Study the identified topics and answer the questions provided in the text.
    4. Check your answers with the answer key provided.
    5. Conquer your weakness and face the Six Sigma world more confidently.

    If a good number of people see value in such a sequence, I shall put in the extra effort to make the content and sequence available to you free of cost. I have written this post just to find out whether there are many people out there who really wish to use such content and approach. Reply to this post showing your interest so that I can see the count.

    Best Wishes,
    VK
  12. 1 point
    Q 49. What is the difference between Lead Time and Cycle Time? What is the reason for confusion between the two definitions? This question is a part of the November Episode and can be answered by approved Excellence Ambassadors till 10 PM on November 21, 2017. There are many rewards. Being regular earns you a reward. Even a streak of 3 good answers can get you a reward. Rewards are mentioned here - https://www.benchmarksixsigma.com/forum/excellence-ambassador-rewards/ All questions so far can be seen here - https://www.benchmarksixsigma.com/forum/lean-six-sigma-business-excellence-questions/
  13. 1 point
    The effectiveness of any tool depends on the user and the method of use. So it is with the "Fishbone Diagram" (FBD), or "Cause and Effect Diagram" (CED). No tool can achieve anything not intended by its user: a tool can only provide different perspectives to help the user take a decision. It is entirely possible for the user to junk the information the tool provides and go by his or her feeling, and the Fishbone Diagram is no exception. Misuse of a tool can also include erroneous use, which could be either a genuine error or an intended misuse.

    Means to a pre-conceived end. The most common misuse of the FBD is to doctor the various bones so that all the root causes that emerge are in line with decisions already taken. Logic is thrown to the winds as each immediate and root cause is written so as to justify the decision.

    Effects instead of causes. Another common mistake is to reverse the plotting, recording a hierarchy of effects instead of causes. Rather than progressing from the effect back to the root cause, the diagram progresses through subsequent effects.

    Incorrect or inaccurate problem statement. A guess or an assumption is made when documenting the problem statement or effect. With the effect itself not being correct, of what quality can the supposed "root causes" be?

    Too much guesswork in the causes. While all proposed causes are, to begin with at least, potential causes, if too many of them come purely from guesswork or assumptions without a validation plan, the likelihood of the problem being solved is next to nothing.

    Tracing back from the root cause. After reaching the root cause by relentlessly asking "Why?", a comfort syndrome results in picking up an immediate cause rather than the root cause.

    Using solutions as causes. To prepare a justification for investment in a solution, solutions end up getting prefixed with "lack of" - for example, lack of automation, lack of maintenance support, etc.

    Giving up after identifying one root cause. Either in the excitement of having identified a root cause, or out of sheer laziness, it is possible to forget the basic tenet that one problem may have multiple root causes.

    Confusing correlation with causation. Mistaking certain commonalities across various instances of the problem's occurrence for the cause of the problem itself is another common error.

    Working to a strict deadline. While no activity can go on endlessly, it is not possible to brainstorm and think through all root causes in a hurry, or when wanting to close the meeting within a fixed time. Many participants take quite some time to warm up, and by the time they are ready to contribute, the meeting is over.

    Criticising proposed root cause ideas. It takes free, unfettered thinking to arrive at all the root causes. If the thought process of the participants is stifled for any reason, the fishbone will be incomplete and thus ineffective.

    Holy cows. Certain people or processes in the organization are sacrosanct and cannot be touched, let alone changed, whatever the consequences. All root cause analyses stop at that point.

    "Out of control" causes. To stay on the safe side and avoid ending up with responsibilities, the fishbone analysis is guided towards causes outside organisational control, so that no one in the organisation is tasked with implementing corrective action.

    People-related causes. Documenting clichéd people-related causes like "human error" (are animal errors possible?) or "forgot" (is the process so dependent on memory?) will not help in resolving the problem.

    Focusing on "who" rather than "what". A classic distraction is to focus on who is the root cause instead of what.
  14. 1 point
    One of the very common methods used for dealing with a large amount of data is to "stratify" the data into groups. The stratification may be done in multiple ways depending upon the situation and the purpose of analyzing the data. For instance, if we are studying national sales data to understand the areas having improvement opportunities, the data may be stratified into groups for each state. Other ways of stratifying may be by age, income level, education level, month, etc. The stratification groups need to be decided based on the objective being pursued. Such segmentation helps us represent the data using a bar chart and compare the variation between the groups. It helps in narrowing our focus onto areas that show an abnormal problem, or areas of opportunity. During root cause analysis, such segmentation is one of the first steps adopted. It also helps in building a Pareto diagram and applying the 80/20 rule. Where deeper probing and analysis are required, it is a good idea to do the segmentation first, so that the effort for such deeper analysis may be restricted to the volumes shortlisted through the segmentation. Sometimes, when we have a large amount of data - say, product failure data for a period of six months - it helps to segment the data by time period, maybe month-wise or week-wise. Or, if we know of certain factors that we suspect influence the failure under study, the data may be segmented appropriately to compare the failure rates across those categories. Good segmentation optimizes the effort spent on root cause analysis and helps in arriving at the root cause faster.
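The stratify-then-Pareto sequence described above can be sketched in a few lines of Python (the failure categories and counts below are invented for illustration):

```python
from collections import Counter

# Hypothetical six months of product-failure records, each tagged
# with the suspected failure category.
failures = (["connector"] * 46 + ["solder joint"] * 21 + ["display"] * 9
            + ["battery"] * 6 + ["casing"] * 3)

# Stratify: count failures per category, largest first (Pareto order).
strata = Counter(failures).most_common()

# Build the cumulative percentage column of a Pareto analysis.
total = sum(count for _, count in strata)
cumulative = 0
pareto = []
for category, count in strata:
    cumulative += count
    pareto.append((category, count, round(100 * cumulative / total)))

# The running percentage shows where the 80/20 cut-off falls: here the
# top two categories already explain about 79% of all failures, so
# deeper root cause analysis can be focused on just those strata.
```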
  15. 1 point
    Type 1 error is rejecting a null hypothesis that is true (it should have been retained). Type 2 error is failing to reject a null hypothesis that is false (it should have been rejected).

Let us discuss this question with an example. Machine A and Machine B produce a certain part, and the weight of the part is the characteristic of interest. The weights of samples taken from these machines are as follows:

A – 10.8, 10.3, 10.7, 10.9, 10.4, 10.7, 11.0, 10.3, 10.8, 10.7
B – 11.2, 11.3, 11.1, 11.6, 11.0, 11.6, 10.8, 11.4, 11.4, 11.6

Mean weight for Machine A = 10.66
Mean weight for Machine B = 11.30

Situation 1: Assume that in reality there is a significant weight difference between the outputs of Machines A and B, but we are trying to establish this using a hypothesis test. The hypothesis statements are:

H0: Mean weight from Machine A = Mean weight from Machine B
H1: Mean weight from Machine A ≠ Mean weight from Machine B

The true conclusion in this situation would be to reject the null hypothesis. However, if as a result of the test H0 gets retained, it is an incorrect acceptance of the null hypothesis: a Type 2 error.

Situation 2: Now let us examine another situation. Here we want to test the effectiveness of an improvement action that is expected to bring down the difference between the weights of the outputs; our aim is to improve the process to reduce the difference. Assume the historical difference between the machines, 0.7, continues to exist. The hypothesis statements may be as follows:

H0: (Mean weight from M/c B) – (Mean weight from M/c A) = 0.7
H1: (Mean weight from M/c B) – (Mean weight from M/c A) < 0.7

The true conclusion in this situation would be to retain the null hypothesis and accept that the difference is still 0.7. However, if on conducting the hypothesis test H0 gets incorrectly rejected, it would mean concluding that the difference between the means is less than 0.7. This amounts to a Type 1 error.
Thus, the difference that serves as the alternate hypothesis in Situation 1 becomes the null hypothesis in Situation 2.
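As an illustrative sketch of Situation 1, Welch's t statistic for the two machine samples can be computed by hand with the standard library. The critical value quoted in the comment is the usual two-sided 5% threshold at roughly 17 degrees of freedom and is given only for orientation:

```python
import statistics as st

a = [10.8, 10.3, 10.7, 10.9, 10.4, 10.7, 11.0, 10.3, 10.8, 10.7]
b = [11.2, 11.3, 11.1, 11.6, 11.0, 11.6, 10.8, 11.4, 11.4, 11.6]

mean_a, mean_b = st.mean(a), st.mean(b)
var_a, var_b = st.variance(a), st.variance(b)   # sample variances
n_a, n_b = len(a), len(b)

# H0: mean_a == mean_b   vs   H1: mean_a != mean_b
# Welch's t statistic (unequal variances assumed):
t = (mean_b - mean_a) / (var_a / n_a + var_b / n_b) ** 0.5
# |t| comes out around 5.5, far above a two-sided critical value of about
# 2.1 (alpha = 0.05), so H0 is rejected; retaining H0 with such data
# would have been exactly the Type 2 error described above.
```

A statistics package such as SciPy (`scipy.stats.ttest_ind(a, b, equal_var=False)`) would give the same statistic together with a p-value; the hand computation is shown only to make the mechanics visible.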
  16. 1 point
    Every hypothesis test uses samples to infer properties of a population from an analysis of the sampling information. Therefore, there is some chance that, although the analysis is flawless, the conclusion may be incorrect. These sampling errors are not errors in the usual sense, because they cannot be corrected. There are two types of error: Type 1 and Type 2. When we reject the null hypothesis when it is true, that is a Type 1 error, and it is the producer's risk. When we fail to reject the null hypothesis when it is false, a Type 2 error occurs, and it is the consumer's risk.

To ensure that hypothesis tests are carried out properly, it is useful to have a well-defined process for conducting them:
1. Specify the parameter to be tested.
2. State the null and alternate hypotheses.
3. State the alpha value.
4. Determine the test statistic.
5. Define the rejection criteria.
6. Compute the critical values and the test statistic.
7. State the conclusion of the test.

The null hypothesis for one situation can be the same as the alternate hypothesis of another situation; it depends on how we frame the situations. If the null statement of one situation is used as the alternate of another, it also reverses the definitions of Type 1 and Type 2 error. We therefore need to think carefully about which hypothesis is more appropriate for the situation before finalizing the statements. The null and alternate hypotheses are mutually exclusive, so we need to take care while framing them.

For example, when we stand on the earth, our eyes see a flat surface. In this framing, the null hypothesis is "the earth is flat when viewed while standing on its surface" and the alternate is "the earth is not flat when viewed while standing on its surface". Now consider another situation, where we view the earth from space: here the null hypothesis is "the earth is not flat when viewed from space" and the alternate is "the earth is flat when viewed from space".

Another example: the sun appears to revolve around the earth as day changes to night. In one framing, the null is "the sun revolves around the earth, which is why day changes to night" and the alternate is "the sun does not revolve around the earth". If we instead say that day changes to night because the earth rotates, the null becomes "the earth rotates on its own axis, which is why day changes to night" and the alternate is "the earth does not rotate on its own axis".

Another example is a student passing an exam with a score >= 40. The null hypothesis is that the student passes with 40; the alternate is that the score must be greater than 40 to excel. The null says there is no difference between grades before and after, while the alternate says there is a difference. Here, passing the exam is acceptable, but exceeding the benchmark is also acceptable, so the null of one framing can serve as the alternate of another, if we take the null as "equal to" and the alternate as "less than", "greater than" or "not equal to". A similar example is the average salary of an engineer in a company being >= 50,000 per month. So there are various situations where we can use the null of one situation as the alternate hypothesis of another, but we need to think carefully while deciding the statements.
  17. 1 point
    Process Voices captures the four distinct "voices" for any given process:
- The Voice of the Business (VOB) reflects the needs of management.
- The Voice of the Employee (VOE) relays what it feels like to work within the process.
- The Voice of the Customer (VOC) details the needs of the end user.
- The Voice of the Process (VOP) lists the waste, rework and other observed process issues.
(Reference: goleansixsigma website, https://goleansixsigma.com/process-voices/)

Drilling down into VOC and VOB:

The Voice of the Customer identifies the needs and requirements of the customer. The VOC gives the following aspects for study: affordability, needs and expectations, accuracy, responsiveness, flexibility, friendliness, and convenience.

The Voice of the Business is derived from financial information and data. The VOB gives the following aspects for study: process complexity, strategic direction of the business, financial capability, market share and its weaknesses, utilization of investment capital, research and development status, and the production environment and conditions.

When we study VOC we also work with the Kano model, where we check for the basic needs of the customer, what in our product satisfies those needs, and what delights the customer (delighter features).

Example: every patient is to be treated within 15 minutes of entering the hospital. The footfall is 10 patients per hour, there are only 2 doctors, and the average time to diagnose is 10 minutes. On average, the ratio of footfall to diagnosis time and number of doctors creates a perfect equilibrium, but the goal of treatment within 15 minutes may not be met if all patients arrive in the first 10 minutes of the hour, since we are dealing with averages. The VOC of delighting patients with service within 15 minutes may thus come into conflict with the VOB: the business cannot add a doctor for every patient, for various reasons. This may also result in poor feedback from customers whose delight expectation clashes with the waiting time.

Many similar examples can be given:
1) Medicine should taste sweet for patients, but due to its chemical composition the business cannot make it sweet.
2) The cost of a hotel service should be cheaper, but the business needs to cover many other costs, including its profit margin, which make it costlier.
3) Food should be served hot by fast-food outlets during the winter or rainy season (e.g. pizza delivery), but the business cannot put a hot oven in the delivery vehicle, due to its infrastructure, working model and costs.
4) Customers want to reach their destination in the fastest possible time at an average of 60 km/hr, but there are constraints of maximum vehicle speed, cost-to-power ratios, and the quantity and mode of transport.

A lighter VOC-versus-VOB example is the conflict between boss and employee: the boss tells the employee at 6.00 pm today that he needs the presentation by 9.00 am tomorrow, and at the same time the presentation should contain 6 days of recorded data for a process implemented only yesterday. This results in a conflict of process, process environment, expectation, and the needs of the customer (the boss).
  18. 1 point
    There are 7 wastes in Lean, referred to as Muda (in Japanese). Let us quickly go through each one.

Waiting: Non-productive time due to a lack of equipment, material or people. This covers delayed arrival of materials or persons, or delays due to broken equipment; resources stay idle in the meantime. Eg: waiting for a human resource to be available to fix an issue.

Over-production: Manufacturing an item before it is actually needed, leading to excessive accumulation of WIP or finished-goods inventory. It is the worst form of waste because it contributes to all the others. Eg: preparing unnecessary reports which may never be read.

Rework (or Defects): The correction needed to fix mistakes. These activities are a waste of time. Eg 1: fixing bugs created in software code. Eg 2: corrections made to Word documents (typos/spelling mistakes).

Motion: Unnecessary movement of people and equipment. Eg: extra data entry, extra steps.

Over-processing: Non-value-added tasks that the customer is not interested in, often caused by poor design of the product or process. Eg 1: any gold-plating work, such as beautification of software code that was not required or asked for by the customer. Eg 2: sending unnecessary mails and doing extra communication. Eg 3: sending the same data in multiple reports (duplicated effort).

Inventory: Keeping stock of items, often in excess of what is required. Eg: keeping raw materials in abundance, well beyond what is needed for use.

Transportation: Unwarranted movement of materials or information. Process steps should be located close to one another for minimal movement. Eg 1: forwarding a mail to other persons after missing to loop them in on the original email. Eg 2: movement of files/papers from one place to another.

Beyond these 7 wastes of Lean, there is another type, often called the 8th waste: the under-utilization of talent and skills. If we address this waste, there is a good chance we can do well with the rest of the waste types.

Having said that, let us see how some of these wastes can occasionally be put to good use.

1. Waiting: Some IT applications are quite complex and involve multiple technologies. A typical enterprise web application may involve relational databases, server-side programming languages, application servers (which host the application), front-end/UI languages and scripts, messaging tools, etc. Assume such an application is in the development stage and we see a potential problem in the way the application is designed. This requires a technical architect, the technical Subject Matter Expert (SME), to do a course correction; the design change is critical to ensure application scalability is not impacted. Now assume this architect is shared across multiple projects within a business unit of the organisation that owns the application. When he is needed on this project to provide the optimal design, he is held up on another project's production implementation and will not be available for 2 days. The project must wait for him, and it is worth waiting those 2 days, since the architect will shape the application so that many users can visit it after go-live (scalability).

2. Rework: There are scenarios where this may be required. (a) There could be changes to the product or process design due to environmental conditions or government statutes. (b) There could be rework due to a design flaw in the product. Eg: a product might have sharp edges; a consumer hurt by them files a case, and the consumer court instructs the company to blunt the product's edges. This is a legal issue which definitely needs rework of the design.

3. Inventory: Why might this be needed? (i) In software, developers always keep a copy of the code they developed, even though it is available in a version control system (configuration management), because they would like to reuse it in other projects and to compare how their code has changed against the version in the repository. (ii) A supermarket can operate in push mode (holding inventory) or pull mode (Just-In-Time). Some items are fast-moving and some move slowly or have less demand, so the supermarket can stock fast-moving items that do not degrade over a period of time, or at least for a considerable time.

Conclusion: this goes to show how the different types of waste can sometimes be put to use!
  19. 1 point
    In any business, performance is typically expected to vary over time and with respect to inputs. When comparing two performances, it would not be correct to decide that they are different based on just one or a few data points from each; sampling errors should not influence the decision. It is essential that the decision taken remains correct, and sustainable, over time. For that, data reflecting the sustained behaviour of both performances is required; once this data is available or collected, a decision based on it can also be expected to hold over time. A decision taken on samples must hold good for the populations too. Even after some unavoidable overlap between the two performances, perhaps due to chance causes, the difference between the two populations must remain visible, conspicuous and clearly discernible; in other words, the difference between the two performances needs to be significant.

But "significance" is quantitative and statistical: the significance of the difference is assessed from the statistical data of the two performances. A statistically significant difference represents the clarity or discernibility of the difference between the two performances, and the sustainability of this difference over time. Performances of two populations with a statistically significant difference will remain different over time unless some special causes come into play on one or both of them.

But how significant is significant? This depends on the objective of the comparison and the stakes involved. The margin of error tolerable in deciding that the performances differ depends on these factors; for different combinations of conditions, this margin of error could be 1%, 5%, 10% or any other agreed number.
This is the error involved in concluding that the two performances are significantly different based on the available statistics.

Uses of the concept of Statistically Significant Difference in Problem Solving and Decision Making

The uses of this key concept are innumerable; a few are given below.

1. Comparison of performances between two or more:
a. Time periods
b. Processes
c. People
d. Suppliers or service providers
e. Applications

2. Assessing the effectiveness of:
a. Training
b. Improvements
c. Corrective actions
d. Actions taken on suspected root causes

3. Evaluating:
a. User ratings in market surveys against marketing campaigns
b. Performances of new recruits against agreed targets

In all the above cases, Hypothesis Testing can be effectively applied to assess the existence of a statistically significant difference.
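One way to operationalize "statistically significant difference" with nothing but the standard library is a simple permutation test. The before/after data below is hypothetical (think of a TAT metric before and after an improvement), and the 5% margin of error is just one of the agreed thresholds mentioned above:

```python
import random
import statistics as st

random.seed(1)  # fixed seed so the sketch is reproducible

# Hypothetical TAT measurements before and after an improvement action.
before = [42, 45, 44, 47, 43, 46, 44, 45]
after  = [40, 41, 39, 42, 41, 40, 43, 41]

observed = st.mean(before) - st.mean(after)   # observed reduction in TAT

# Permutation test: if the two performances were really the same, shuffling
# the labels should often produce a difference at least as large as observed.
pooled = before + after
trials, extreme = 5000, 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = st.mean(pooled[:len(before)]) - st.mean(pooled[len(before):])
    if diff >= observed:
        extreme += 1

p_value = extreme / trials
# A p_value below the agreed margin of error (here 0.05) means the
# difference is statistically significant, not a sampling accident.
```

With this toy data the observed difference is so large relative to the spread that almost no random relabelling matches it, so the difference would be declared statistically significant at the 5% level.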
  20. 1 point
    While VOC is considered as a key starting point for business excellence, can overemphasis on VOC be detrimental to business? Explain with examples. (This question is part of episode 2 of Excellence Ambassador series. It is open till 10 PM IST on 10th October 2017)
  21. 1 point
    There are various types of process mapping, but we can categorize them into 5 main groups:
1. SIPOC
2. High-level process map / flow chart
3. Detailed process map
4. Swim lane map
5. Value stream map

SIPOC: SIPOC stands for Supplier - Inputs - Process - Outputs - Customer. The required inputs (and their providers) are listed on the left, and the key process outputs (and their recipients) on the right. The SIPOC provides a focus for discussing what the process is all about: who supplies the process, what the output of the process is, and what the customer's requirements are. It is recommended to have a SIPOC for every project, because they are simple to make and helpful when discussing the process with others.

High-level process map / flow chart: Provides an overview of the processes and objectives that drive an organization. The purpose is to give quick and easy insight into what the process does, without getting into the details of how it is done.

Detailed process map / flow chart: While studying the high-level process map, if we want more detail on a particular process, we may need to make a detailed process map for it.

Swim lane map: Swim lanes are a technique used in process mapping to simplify the work procedure. The process is divided into several lanes, each represented by a different person or department that performs the work. Detailed process maps are often prepared in the swim lane format: when there are multiple detailed process maps, keeping track of who is supposed to do what can get confusing, and swim lanes help simplify this.

Value stream map: VSMs are typically used in Lean applications. They are rich with information that is useful when planning process improvements. Value stream maps are sometimes called Material and Information Flow Diagrams. With a value stream map we can see how material moves from one process to another and how information flows. We can also see WIP and its levels, and gather relevant process details such as cycle time, changeover time, etc., as well as the wait time for information or product. VSMs require more skill to build than simpler process maps, but they provide far richer information.

Below is a summary of the various process maps:

Process map | When it is used
SIPOC | To get an overview of the inputs/outputs and the customers' requirements
High-level process map | Shows how the process works
Detailed process map | To get a deep understanding of the process
Swim lane map | Shows which department is involved, and with what intensity, in the process
Value stream map | The ultimate process map, giving all the relevant detail about the process

For me, value stream mapping is the best template for process mapping. For an organization which is new to these tools, or one that I am not yet familiar with, I would follow the above sequence of process mapping.
  22. 1 point
    To start with, it has been proven that the pull system works well in many scenarios; it leads to savings, less inventory, etc. However, it cannot be implemented everywhere. In today's world the customer expects to be served immediately: the one who is able to satisfy the demand at that moment is the one who gets the business and, in turn, the money.

Take the case of a person who needs some medicine. Can the manufacturer make and ship one set of pills only for this customer? Is this feasible or doable? Currently we see outbreaks of influenza, dengue, etc. In such cases the demand exists, is known, can to a large extent be forecasted with some degree of accuracy, and the end product is needed at a particular point in time. It would not help if the end user had to wait for the product to be manufactured and delivered.

Take the case of vegetables or fruits: here too the demand is forecastable to some degree of accuracy, and they have a longer lead time; they cannot be produced on a pull methodology. Take another case, diamonds: these are generally not produced for one person at a time; they are mined, cut and kept ready in the hope of finding a buyer.

Overall, the thought is that where the demand for the product is to some extent forecastable, where the lead time is high, and where the demand needs to be fulfilled immediately, a pull system may not work. The supporting ecosystem (e.g. the supermarket which supplies the vegetables and fruits, or the drugstore which sells the drug) may use a pull system, but the product will be manufactured or grown and kept ready for sale even before the customer has demanded it.
  23. 1 point
    The First Jidoka

The automatic loom, invented in 1902 by Sakichi Toyoda, the founder of Toyota, can be considered the first example of Jidoka: if threads ran out or broke, the loom stopped automatically and immediately. In the early days of assembly-line mass production, work cycles were watched over by human operators. As competition increased, Toyota brought about a significant change by automating machine cycles so that human operators were freed to perform other tasks.

The Toyota Production System has many tools for efficient products and services. Developed over the years, these tools aim at reducing human effort and automating machines to increase productivity. Jidoka is one such tool, without which efficient manufacturing would today be practically impossible. The article below explains the Jidoka process.

The Concept of Autonomation

To begin with, understand that autonomation and automation are different from each other. By definition, autonomation is a 'self-working' or 'self-controlled' process; it is the feature that underlies the Jidoka process. Automation, in contrast, is a process whose work is still watched over by an operator: errors may go unnoticed, and detection and correction take longer.

Autonomation resolves two main points: it reduces human interference, and it prevents processes from making errors. These are outlined below.

PRODUCT DEFECT: Ordinarily, when a defect occurs, a worker detects it and later reports the problem. Autonomation enables the machine itself to stop the cycle when a defective piece is encountered.

PROCESS MALFUNCTION: If all the processed parts or components are not picked up at the end of the cycle, the machine might face problems and the process might halt, and it could take a while before the worker realizes that the process has been interrupted by a minor error.
In the case of autonomation, if the previous piece has not been picked up during ejection, the machine gives a signal or stops the cycle altogether.

An Introduction to Jidoka: The Evolution towards Jidoka

Jidoka can be simply defined as 'humanized automation'; autonomation is just another term for it, used in different contexts. It is mainly used to detect defects and immediately stop the production or manufacturing process, fix the defect, and find solutions so that the defect or error does not occur again. The concept, as mentioned before, was invented by Sakichi Toyoda. Its purpose is to reduce dependence on human error-spotting and judgment through automatic error detection and correction. It was developed to eradicate the time wasted on human observation of the process, transportation, inventory, correction of defects, etc. With Jidoka, production lines have become significantly more efficient, and the wastage of goods and inventory has been reduced too.

Other Toyota Tools and Terms

Keep in mind that Andon, Poka-yoke, Just-in-Time, etc., are all tools developed at Toyota. Jidoka is one of these tools, and it encompasses some of the others, such as Andon and Poka-yoke. Jidoka was developed to minimize errors that may be caused by relying on human observation. Remember that Andon is not an example of Jidoka but an important supporting tool: it displays the current state of work, whether the process is running smoothly, whether there is a malfunction, whether there are product glitches, etc. The relation between Andon and Jidoka is explained further below. Similar to Jidoka, Just-in-Time is another important tool and one of the crucial pillars of TPS: it delivers what product is required, when it is required, and in the quantity required. 'Takt time' is an important related principle: it refers to the time that should be taken to manufacture one product on one machine.

Line Stop

Jidoka is a term that applies to this process in automotive manufacturing plants.
It is called a 'line stop' because it interrupts and halts the entire line (process) when a defect is found.

The Elements of Jidoka

GENCHI GENBUTSU: One of the important elements of Jidoka. The basic principle of Genchi Genbutsu is to actually go and see the problem; it entails going to the root source of the problem. This is an important step in the Jidoka process: finding out why the defect occurred in the first place.

ANDON: As stated in the previous section, Andon is a visual representation of the current process. It indicates whether the process is running as per norms or whether there is a potential flaw, and gives out electronic signals according to the condition. If the signal is negative, workers understand that there is a problem in the process. The machine stops immediately, and the workers can halt production until the flaw in the process is fixed.

STANDARDIZATION: The main aim of Jidoka is to increase production quality, and this is what standardization deals with. It involves developing strategies that adhere to perfection and quality. When a flaw is discovered, it is not only fixed; efforts are also undertaken to ensure it does not occur again, and the quality and standard of the product are maximized.

POKA-YOKE: Also called mistake-proofing or error-proofing; poka-yoke devices are designed to prevent mistakes that could occur during production.

The Principles of the Jidoka Process

Without Jidoka, a defective piece continues to be produced and ejected; it is only after ejection that the worker may realize the product is defective and stop the process. With Jidoka, the Andon light glows to indicate that the product is defective, the process is halted immediately, and the necessary steps are taken.

DETECT: This involves detecting the problem. The machine is fitted with the right components so that an abnormality is immediately identified.
For this step, machines may be fitted with sensors, electrical cords, push buttons or electronic devices, or may be fed with proper instructions to identify whether a product is defective.

STOP: Once a defect has been spotted, the machine stops immediately. The machine is designed to stop on its own; no staff member or worker needs to physically stop it. The fact that a defect has been detected is indicated through signals, after which the staff can rush to the site to find out why the process has halted.

FIX: When the machine stops, the production line also needs to be stopped. You might wonder why the entire line must be halted because of one or more defective pieces. This is done because other defective parts or components are likely to have been manufactured along with the defective one. To avoid this over-production and the wastage of material and equipment, the production line is halted, and steps are then undertaken to fix the problem. Sometimes this is a minor glitch; at other times there may be a major problem. Once the error is fixed, production resumes.

INVESTIGATE: The last and rather vital step of Jidoka is to investigate the source of the problem. You have to find answers to questions such as: Why did the defect occur? What kind of defect is it? How can it be fixed? What can be done to prevent it? Root-cause analysis tools are widely used to get to the bottom of the problem. Through this process, efforts are made to find the best solution for the defect and to prevent it from occurring in the first place. As more investigation and research are carried out, better methods of manufacturing are discovered, better problem-solving techniques are invented, and product quality increases.

Examples

Jidoka is mainly used in the manufacturing and automotive industries; however, it can be demonstrated in simple products used in daily life as well.
For example, if your kitchen cabinet is fixed with a dustbin, you will notice that when you open the cabinet door, the lid of the dustbin lifts automatically: a string lifts the lid the moment the door is opened.

Consider a printing press machine: if a sheet is missing in the machine, a sheet detector raises the print cylinder. This is Jidoka at work. In the manufacturing industry, a sensor is used to check whether components are in alignment; even if a small part is out of alignment, the machine is stopped. Some high-quality machines use a recall procedure: sometimes, despite the best counter-measures, some products on the production line slip through the machine cycle undetected, so the recall procedure checks every single product once again before the final output is ejected. Light curtains are used in automatic feed machines; they have a presence sensor that stops the machine if a component is broken or defective.

Benefits of Jidoka

It helps detect problems as early as possible.
It increases the quality of the product through proper enhancement and standardization.
It integrates machine power with human intelligence to produce error-free goods.
It helps in proper utilization of labor: since the process is automated, workers can spend their time performing more value-added work.
There is less scope for errors in production, which substantially increases productivity and lowers costs.
Improved customer satisfaction is an important advantage as well: good products are manufactured in less time.

Jidoka is one of the strong pillars of TPS (the Toyota Production System). It helps prevent defects in the manufacturing process, identifies defect areas, and devises solutions to ensure the problem is corrected and the same defect does not recur. Jidoka helps build in 'quality' and has significantly improved the manufacturing process.
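The detect-and-stop behaviour described above can be caricatured in a few lines of code. This is a hypothetical sketch (the part weights and the tolerance are invented for illustration), not a model of any real Jidoka implementation:

```python
# Jidoka-style "detect and stop" sketch: process parts in order and halt
# the line at the first detected defect, so the halt position can be
# handed over to the FIX and INVESTIGATE steps.

def process_line(parts, is_defective):
    """Return (completed parts, index where the line halted or None)."""
    completed = []
    for i, part in enumerate(parts):
        if is_defective(part):      # DETECT
            return completed, i     # STOP: no worker has to intervene
        completed.append(part)
    return completed, None          # whole batch passed

# Hypothetical part weights; a part is defective if it deviates from the
# 10.7 target by more than 0.5 (both numbers are illustrative).
good, halted_at = process_line(
    [10.7, 10.8, 12.5, 10.6],
    lambda w: abs(w - 10.7) > 0.5,
)
```

In this toy run the line halts at the third part (the 12.5 outlier), with the two good parts completed, mirroring the sequence of detecting, stopping, and then fixing and investigating.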
Difference between Autonomation and Automation (in summary):

Description | Jidoka (Autonomation) | Automation
If a malfunction occurs | The machine detects the malfunction and stops itself. | The machine continues operating until someone turns off a switch.
Production of defects | No defective parts are produced. | If defects occur, their detection is delayed.
Breakdown of machines | Breakdowns of machines, molds and/or jigs can be prevented. | Breakdowns of machines, molds and/or jigs may result.
Malfunction detection | Easy to locate the cause of any malfunction and implement measures to prevent recurrence. | Difficult to locate the cause of malfunctions at an early stage, and difficult to implement measures to prevent recurrence.

thanks,
Kavitha
  24. 1 point
Hypothesis Testing is among the most powerful tools used in Business Excellence. It takes away decisions based on gut feeling, experience, or common sense, e.g. "Site A has better performance than Site B", "we should hire more experienced employees as their accuracy is higher", "it takes less time if we use System A vs System B", "older customers are less likely to use self-help compared to other age groups", "are we meeting the cut-off defective percentage, based on the proportion defective we see". Hypothesis testing allows us to collect valid sample sizes and make decisions about the population; it keeps gut feeling and statements such as "in our experience" out of the picture. You have statistical proof of whatever you "feel" or "think" is right. What must be kept in mind is that it is an OFAT (one factor at a time) testing technique: only the one factor under consideration can be varied, while all other Xs must be held constant.

Hypothesis Testing can be used in every phase of the DMAIC cycle.

Define - Usually the "1-sample" tests, where we compare a population to an external standard, are used in this phase, e.g. 1 Proportion test (if I have x defects out of y, am I meeting the client quality target of 95%?), 1 Sample Z, 1 Sample t, 1 Sample Sign, etc. (Has the cost of living gone up compared to the mean or median cost 10 years ago?). It helps us decide whether we even have a problem.

Measure - One can look at data and the eye can catch a "trend". But can we really say that the performance has dipped? Is the difference in performance statistically significant? Hypothesis testing can give you the answer.

Analyze - This hardly needs any explanation, as everyone uses hypothesis testing extensively in this phase to compare two or more populations, e.g. do the five swimming schools produce the same proportion of champions out of all those enrolled, is the lead time for a process on machine A better than on machine B, does Raw Material X give better quality than Raw Material Y, does Training Methodology 1 give better results than Methodologies 2, 3 and 4, does Vendor A have fewer billing discrepancies than Vendor B, etc.

Improve - Tests involving two populations are generally used, e.g. comparing Y pre- and post-solution implementation (we implemented a solution to improve the yield of a machine). Is the post-solution yield higher than the pre-solution yield, is the TAT post-solution better than the TAT before implementing the solution, are more customers buying our product than before, etc.

Control - We get different CTQ numbers every month after making an initial improvement. Can we really say that we have improved compared to before? If we saw a lower number for the metric for 5 months after Improve, was that really different from other months? Can we say that we are consistent? We can use hypothesis testing again.

Business Excellence is an iterative process to drive excellence throughout the business. As Hypothesis Testing helps us validate or invalidate what we suspect at every step of the DMAIC cycle, it is a "must use" tool in the armoury.
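Two of the tests mentioned above can be sketched with `scipy.stats`. The data here is made up purely for illustration: a 1 Proportion test against a 95% quality target (Define), and a 2-sample t-test comparing lead times on two machines (Analyze).

```python
import numpy as np
from scipy import stats

# --- 1 Proportion test (Define): 460 good units out of 480 inspected;
# H0: good proportion <= 0.95, H1: good proportion > 0.95
result = stats.binomtest(k=460, n=480, p=0.95, alternative='greater')
print(f"1-proportion p-value: {result.pvalue:.4f}")

# --- 2-sample t-test (Analyze): lead times in hours (simulated samples)
rng = np.random.default_rng(seed=7)
machine_a = rng.normal(loc=4.0, scale=0.5, size=30)
machine_b = rng.normal(loc=4.6, scale=0.5, size=30)
t_stat, p_val = stats.ttest_ind(machine_a, machine_b, equal_var=False)
print(f"2-sample t p-value: {p_val:.4f}")
if p_val < 0.05:
    print("Reject H0: the mean lead times differ")
```

In both cases the decision rule is the same: if the p-value is below the chosen significance level (typically 0.05), reject the null hypothesis; otherwise the data does not provide evidence of a difference.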
  25. 1 point
Dear Ari/Ramabadran,

You have rightly pointed out that Cpk cannot be translated directly into Sigma levels. Here are some additional points on that topic.

Cpk (unlike Cp) includes both variation and shift in the process capability calculation. The traditional formula for Cpk (for normally distributed process data) is:

Cpk = min [ (USL - Xbar)/(3*S), (Xbar - LSL)/(3*S) ]

where LSL and USL are the Lower and Upper Specification Limits (as determined by the customer), Xbar is the average of the process data, and S is the sample standard deviation.

When we look at the formula, we see that we take the minimum. This means we look at the defects greater than USL on one side and the defects less than LSL on the other, and pick only the worst case (a smaller process capability number corresponds to a larger number of defects). So, if a process had 10,000 PPM defects to the left of LSL and 20,000 PPM defects to the right of USL, Cpk would look at both and compute the process capability number related to the 20,000.

As pointed out earlier, Sigma Level (bench) looks at defects on both sides and adds them up. So it is not directly possible to translate Cpk numbers into Sigma Levels. However, assuming the worst-case defects occur on both sides, it is possible to make a conservative estimate of the Sigma level if we so desire.

Example: let LSL = 11.5, USL = 18, Xbar = 15, S = 1.
Cpk = min[(18-15)/(3*1), (15-11.5)/(3*1)] = min[1, 1.167] = 1.0
CpU (only looking at USL) = 1.0
CpL (only looking at LSL) = 1.167
PPM_U (only looking at USL) = 1350
PPM_L (only looking at LSL) = 232
PPM_Total (both sides) = 1582
Sigma_U = 3.0
Sigma_L = 3.5
Sigma_Bench = 2.95
Conservative estimate for Sigma_Bench based on Cpk = 1.0 (assume both sides have PPM = 1350, i.e. a centered process): Total PPM = 2700 => Sigma_Bench = 2.78.

CONCLUSION: CpU * 3 and CpL * 3 give you the sigma level on each side (Z_USL and Z_LSL). Cpk cannot be directly translated into a sigma level for bilateral tolerances, as it only looks at the worst side. Of course, for a unilateral tolerance, Sigma levels can be predicted from Cpk numbers.
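The worked example above can be reproduced with `scipy.stats.norm` (a sketch using the same LSL, USL, Xbar and S values from the post; small differences from the quoted PPMs are rounding):

```python
from scipy.stats import norm

LSL, USL, xbar, s = 11.5, 18.0, 15.0, 1.0

cpu = (USL - xbar) / (3 * s)          # CpU = 1.0
cpl = (xbar - LSL) / (3 * s)          # CpL = 1.167
cpk = min(cpu, cpl)                   # Cpk = 1.0 (worst side)

# Per-side defect rates: Z on each side is 3*CpU and 3*CpL
ppm_u = norm.sf(3 * cpu) * 1e6        # ~1350 PPM beyond USL
ppm_l = norm.sf(3 * cpl) * 1e6        # ~233 PPM below LSL
ppm_total = ppm_u + ppm_l             # ~1582 PPM

# Sigma (bench) converts the total defect rate back to a Z value
sigma_bench = norm.isf(ppm_total / 1e6)               # ~2.95

# Conservative estimate from Cpk alone: assume the worst side's
# PPM occurs on both sides (centered process)
ppm_conservative = 2 * norm.sf(3 * cpk) * 1e6         # 2700 PPM
sigma_conservative = norm.isf(ppm_conservative / 1e6) # ~2.78

print(cpk, round(ppm_total), round(sigma_bench, 2), round(sigma_conservative, 2))
```

This also makes the conclusion concrete: `sigma_conservative` (from Cpk alone) is always at or below `sigma_bench` (from both actual tails), which is why the Cpk-based translation is only a conservative bound, not an exact conversion.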