Venugopal R

Excellence Ambassador
  • Content Count

    137
  • Joined

  • Last visited

  • Days Won

    15

Venugopal R last won the day on June 14

Venugopal R had the most liked content!

Community Reputation

39 Excellent

5 Followers

About Venugopal R

  • Rank
    Advanced Member

Profile Information

  • Name
    Venugopal R
  • Company
    Benchmark Six Sigma
  • Designation
    Principal Consultant

  1. Benchmark Six Sigma Expert View by Venugopal R

Deciding and defining the "Opportunities" has been a highly debated topic when practicing DPMO. Let me share a couple of related points that I have come across.

Let's take an example of insurance claim forms being processed. Assume that the form has 20 fields of information, and that each field represents an opportunity for a defect. When 100 such forms are processed, the total number of defect opportunities is 2000. Now if we detect 50 defects after processing 100 forms, the defects per opportunity is the total number of defects divided by the total number of opportunities = 50 / 2000. The DPMO will be 50 / 2000 * 1,000,000 = 25,000. For the same example, if someone chooses to treat each field as two defect opportunities, 'wrong entry' and 'no entry', the number of opportunities in 100 forms becomes 20 * 2 * 100 = 4000, and the DPMO works out to 12,500.

Without a clear, common understanding and agreement on the definition of 'Opportunities', DPMO data can be misleading and is prone to manipulation of the sigma values. It is important to define and agree uniformly on the applicable 'Opportunities', so that the baseline for comparison remains relevant.

Another point of importance is that, many a time, customers prefer 'defective'-based metrics, viz. % of parts that did not conform, proportion of forms that are error-free, etc. In such situations, for the same level of quality, the defective-based metric will appear much more stringent than the DPMO-based metric, since the denominator of the latter can be very large.
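To make the arithmetic above concrete, here is a minimal Python sketch; the function name and figures are simply the hypothetical ones from the claim-form example.

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects Per Million Opportunities."""
    total_opportunities = units * opportunities_per_unit
    return defects / total_opportunities * 1_000_000

# 100 claim forms processed, 50 defects detected
print(dpmo(defects=50, units=100, opportunities_per_unit=20))      # 25000.0 (each field = 1 opportunity)
print(dpmo(defects=50, units=100, opportunities_per_unit=20 * 2))  # 12500.0 (each field = 2 opportunities)
```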
  2. Benchmark Six Sigma Expert View by Venugopal R

Most of the Excellence Ambassadors will be quite familiar with the Kano Model and I expect to see very good responses. However, for many organizations, the Kano model remains a concept rather than becoming a practice that is exercised regularly. I am outlining the key steps for building the Kano plot.

One of the simplest ways is to depict the features of a product or service as a 'scatter plot' on the Kano template. It is important to indicate the date of preparation of this Kano scatter plot. While preparing the plot, the process of judging the location of each feature is very important; it should be as objective as possible and as seen from the customers' shoes. The key steps in this process would be:

a) Decision on the features to be evaluated – decide the scope of your study
b) Identification of the source of inputs (select customers) – decide to pick your customers within certain limits, e.g. a particular demographic or a representative section of the target market
c) Structured method of gathering data – questionnaire-based survey; possibly a mock demonstration of features
d) Analysis of the findings – plan suitable sampling to capture possible variation in the responses to the same features and to decide their positioning on the Kano plot

The entire exercise has to be well documented, so that the process can be repeated at a later point of time to understand the shift in responses from 'Performance to Threshold' and 'Excitement to Performance'.
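As an illustration of the scatter-plot idea, here is a small Python/matplotlib sketch. The feature names and their positions are purely hypothetical placeholders; in practice they would come from the survey data gathered in steps (b) to (d), and the plot should carry its date of preparation.

```python
import matplotlib.pyplot as plt

# Hypothetical feature positions, judged from the customers' point of view:
# x = degree of implementation / functionality, y = customer satisfaction (-1 to +1)
features = {
    "Leak-proof packaging": (0.8, 0.1),   # Threshold / must-be
    "Battery life":         (0.6, 0.6),   # Performance
    "Gesture control":      (0.3, 0.9),   # Excitement / delighter
}

fig, ax = plt.subplots(figsize=(6, 6))
for name, (x, y) in features.items():
    ax.scatter(x, y)
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(5, 5))

ax.axhline(0, color="grey", linewidth=0.8)
ax.axvline(0, color="grey", linewidth=0.8)
ax.set_xlabel("Degree of implementation / functionality")
ax.set_ylabel("Customer satisfaction")
ax.set_title("Kano scatter plot (prepared on: <survey date>)")  # date the plot, as noted above
plt.show()
```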
  3. Benchmark Six Sigma Expert View by Venugopal R

Consider the temperatures recorded for XYZ city during 5 different months, with 10 readings taken randomly across the day in each month. If we represent this data using a box plot, it becomes far more easily interpretable and is mostly self-explanatory.

The box plot divides the data into 4 quartiles, has the median as the measure of central tendency, and the height of the box represents the placement of 50% of the data, i.e. between the 1st quartile and the 3rd quartile. Each whisker represents 25% of the data on either end, excluding any outliers. Outliers are shown as a star mark; in this data set, the month of April showed one such outlier. The distance between the 3rd and 1st quartiles, i.e. the height of the box, is known as the 'Inter-Quartile Range'. The inter-quartile range is a useful measure of dispersion, largely unaffected by outliers, and may be used for comparison between plots.

Thus, the diagrammatic representation of the same data speaks louder, clearer and faster, with more elaboration.
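Since the original table and plot did not survive extraction, here is a stand-in sketch with made-up monthly readings that produces the same kind of box plot and computes the inter-quartile range.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical stand-in for the original table: 10 temperature readings per month
rng = np.random.default_rng(1)
months = ["Jan", "Feb", "Mar", "Apr", "May"]
temps = [rng.normal(loc=mu, scale=2.0, size=10) for mu in (18, 21, 26, 31, 35)]

fig, ax = plt.subplots()
ax.boxplot(temps, sym="*")                       # outliers marked with a star
ax.set_xticks(range(1, len(months) + 1), months)
ax.set_ylabel("Temperature (deg C)")
ax.set_title("Temperatures for XYZ city, 10 readings per month")
plt.show()

# Inter-quartile range for one month = Q3 - Q1
q1, q3 = np.percentile(temps[0], [25, 75])
print("IQR for Jan:", q3 - q1)
```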
  4. Benchmark Six Sigma Expert View by Venugopal R

In general, data can be categorized as continuous data and attribute data. Within attribute data, we have further classifications, viz. count, yes/no, ordered, etc. The principle of the Binomial distribution applies to situations where there are two possible outcomes, like 'Yes / No' type data. The other requirements for applying the Binomial distribution are a fixed number of independent trials, with the probability of the outcome remaining the same throughout the exercise.

One of the most popular examples used to illustrate the Binomial distribution relates to throwing a die, which has 6 faces. Let us define the outcome of interest as obtaining a '2' when the die is thrown 5 times.

1. 'Success' is defined as obtaining the number 2.
2. The number of trials is 5.
3. The probability of success (obtaining the number 2) for each throw (trial) is 1/6, and this probability remains the same.

The probability of obtaining exactly r successes can be calculated by the formula nCr * P^r * (1 - P)^(n - r), where n is the number of trials, r is the number of successes and P is the probability of success in one trial. Where historic probabilities of outcomes are available, Binomial calculations can help in estimating the expected nature of outcomes for a given number of occurrences.

The principle of the Binomial distribution is applied to develop sampling plans for attribute data pertaining to 'defectives'. A 'defective' is an item that contains one or more 'defects'; hence, if I have a sample of 10 items, the number of 'defectives' in the sample can vary from 0 to 10. Here the sample size becomes the 'n' value and the outcome for each item is either 'defective' or 'not defective'. Using the sample observation, such plans estimate the proportion defective in the lot and decide whether the lot can be accepted or not. Another application is the attribute control chart, the 'p-chart', used for plotting proportion-defective data. The probability distribution of the Binomial can appear symmetric (when P is close to 0.5), but it is a discrete distribution and different from the normal distribution, which is continuous.
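A minimal sketch of the formula for the die example above, cross-checked against scipy's binomial pmf; both calculations should agree for every r.

```python
from math import comb

from scipy.stats import binom

n, p = 5, 1 / 6          # 5 throws of a die, success = obtaining a '2'

for r in range(n + 1):
    manual = comb(n, r) * p**r * (1 - p)**(n - r)   # nCr * P^r * (1-P)^(n-r)
    print(f"P(exactly {r} successes) = {manual:.4f}  (scipy: {binom.pmf(r, n, p):.4f})")
```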
  5. Benchmark Six Sigma Expert View by Venugopal R

The two broad classifications of the FMEA methodology are DFMEA and PFMEA, though more classifications are possible, such as 'Concept FMEA', 'Proto FMEA' and so on.

Design FMEA is an exercise that has to be performed before the design of a product is finalized. A Design FMEA may be done for a part design, a sub-assembly design or an assembly design. It starts with the item under consideration, describes its expected function and then lists the potential failure modes that could be anticipated for that function of the item. Some of the areas that the failure modes in a DFMEA will cover include performance, safety, compliance to standards, user friendliness and manufacturability. The potential causes will assess the inputs that go into the design and their possible inadequacies, mistakes and variations while evolving the input requirements. Inadequate or incomplete assessment of the performance requirements of the item is another area of potential cause. The detection controls will look at the existence and effectiveness of design verification and validation methods, any 'fail-safe' provisions applied, etc. Studying historical data relating to performance and past failures, and/or reviewing the DFMEA of similar products, are practices commonly adopted during a DFMEA.

Once the design of the product has evolved, the design of the process that will create the product is taken up. Before finalizing the process(es), the PFMEA is performed. The PFMEA begins with the operation sequence and process description. The requirements for each process step will include the product characteristics that depend on the process, and the requisite process controls are identified. A process (or process step) not fulfilling any of these requirements is identified as a potential failure mode. The potential cause(s) for the failure mode could typically cover process incapability, process sequencing, choice of equipment, skill, etc. The current detection controls will look for mistake proofing and verification methods, including MSA effectiveness. The Process Control Plan is a document that emerges from the PFMEA findings.

As we saw earlier, one of the potential failure modes in a DFMEA relates to 'manufacturability', or how well a process can be expected to fulfil the design requirements. This is an input from the process (and PFMEA) to the DFMEA. The product characteristics that depend on process controls become an input from the DFMEA to the PFMEA. The severity ratings applied by the DFMEA to part characteristics could qualify some of them as CTQs or special characteristics; if such characteristics depend on process controls, the associated process characteristics also need to be identified as CTQs. There could be certain potential failure modes in the PFMEA for which the detection control systems are weak. For example, reliability-related failures resulting from process limitations may not be easy to detect as part of a day-to-day control system. Such inputs need to go back to the DFMEA and the design team to build greater assurance into the design and reduce dependency on the process.

It is recommended to begin the PFMEA exercise even as the DFMEA is evolving, so that both exercises can benefit from the mutual exchange of inputs. The FMEAs will then undergo multiple iterations of refinement. Even after the design and the process are finalized, the FMEAs continue to be living documents that need to be referred to and updated from time to time.
  6. Benchmark Six Sigma Expert View by Venugopal R

Most of us carry out regression analysis using software applications such as Minitab. We will get a result whatever the number of samples we use for the regression exercise. Certain applications do indicate whether we have met a minimum sample size or not. Many follow a rule-of-thumb sample size of 10 or 30; this number may go up if we have more independent variables. The discussion regarding scientific determination of the required sample size for regression analysis can drag us into deeper statistical territory, so I will give my views and understanding briefly.

A statistical derivation of the sample size that takes into account the statistical power (i.e. the probability of rejecting the null hypothesis when it is false), for the case of multiple independent variables, gives the minimum sample size as 50 + K, where K is the number of independent variables. If we need to evaluate the weightage of each individual variable, the minimum sample size becomes 104 + K. These derivations indicate that a sample of over 100 will usually have statistical justification. If one needs to go deeper into this topic, the criteria for deriving the sample size can be extended to take into account the correlations amongst the independent variables and the correlations between the independent and dependent variables. The sample size in such cases would be higher and is represented as a table for the various correlation values mentioned above.

Sometimes practical constraints deprive us of the scientific sample sizes and we may resort to lower sample sizes. While this would certainly compromise the power of the test, we may look at the R-squared value: a higher R-squared value gives assurance that most of the variation of the dependent variable is explained by the considered independent variables.
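The two rules of thumb quoted above can be written down directly; this tiny helper simply encodes them and is not a substitute for a proper power analysis.

```python
def min_regression_sample_size(k, test_individual_predictors=False):
    """Rule-of-thumb minimum sample size for multiple regression, as quoted above:
    50 + K for testing the overall model, 104 + K for testing individual
    predictors, where K is the number of independent variables."""
    return (104 if test_individual_predictors else 50) + k

print(min_regression_sample_size(4))                                   # 54
print(min_regression_sample_size(4, test_individual_predictors=True))  # 108
```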
  7. Benchmark Six Sigma Expert View by Venugopal R

One of the important tasks that most of us encounter while working on improvement projects is to establish controls for sustaining our gains. In this context, it is important not only to identify the cause-effect relationship relevant to our problem, but also to prove it and implement sustenance measures. Once a cause-effect relationship is established and we have proven the relationship between two variables, we would certainly like to express the association in the best possible manner. To examine whether an established cause-effect relationship should necessarily exhibit strong correlation, let's look at some examples and think about this question.

Correlations that remain valid within a range: Let's take the example of a compression-moulded component. It was proven that the cause of the poor hardness of the moulded component was a low temperature setting. Once the temperature setting was increased, other parameters being maintained, the required hardness was attained. Both the dependent and independent variables are continuous in nature. In this case, if a study is taken up measuring hardness levels against various temperature settings, we can certainly expect to see a positive correlation. However, this correlation may not continue beyond a certain range of temperature. The correlation between the cause and the effect is valid within a certain range of the cause variable and would have an optimal value.

Discrete causal variable: Let's take the example of vehicle fuel mileage. Based on studies, it was established that the type of spark plug used was an important cause for the mileage of the vehicle. In this case we have 3 different types of spark plugs to choose from, making the causal variable a discrete one. In the strict sense, we may not be able to establish a correlation between the proven cause and effect, since we do not have paired sets of variable data from which to derive a correlation. However, those interested in deeper research may identify a variable factor within the spark plug that causes the difference and try to establish a correlation to the effect.

Discrete variables for both cause and effect: Let us take another example where a login account is not opening and the cause is identified as usage of a wrong passcode. Once the right passcode is used, the login works. The variables involved in the effect and the cause are both discrete. Is there a way to establish a 'correlation'?

Continuous causal variable and discrete effect: Let us consider a case where the input (causal) variable is continuous and the output (effect) variable is discrete. Consider a drop test for packed hardware equipment, where the input variable is the drop height and the output variable is "whether the equipment is damaged or not". It may not be possible to derive a correlation directly. However, if we can perform multiple tests at each drop height, then the proportion of products getting damaged at different drop heights, within a certain range, could show a correlation. Considering the destructive nature of such tests, this may be expensive in practice.

To sum up, a proven cause-effect relationship establishes an association between the two variables, dependent and independent. Correlation is one of the tools to depict this association, but may not be the best applicable tool in all situations. Other tools such as tests of hypothesis, ANOVA, logistic regression, etc. may be more appropriate depending on the types of data.
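For the drop-test scenario (continuous cause, discrete effect), a short sketch with made-up counts shows how repeated tests at each height turn the discrete outcome into a proportion that can be correlated with the drop height.

```python
import numpy as np

# Hypothetical drop-test results: number damaged out of 20 units per drop height
heights_cm = np.array([30, 60, 90, 120, 150])
damaged    = np.array([ 1,  3,  7,  12,  17])
proportion = damaged / 20

# Correlation between drop height and proportion damaged (within this range)
r = np.corrcoef(heights_cm, proportion)[0, 1]
print(f"Correlation between drop height and proportion damaged: {r:.2f}")
```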
  8. Benchmark Six Sigma Expert View by Venugopal R

To fully understand the answer to this question, one has to be clear on the principles of the XBar–R chart and also on the variations related to Gage R&R. While many of the ambassadors would have given good explanations, I will express my points briefly. Please be cautioned that this write-up will not give a full education on these topics, and I request readers who seek further clarity to read and develop more understanding of the two topics mentioned above.

How does an XBar–R chart work? The XBar–R chart is constructed from several groups of small samples processed under nearly similar conditions. Each such group is termed a "rational subgroup". The Range chart shows the variation within these samples, and the control limits for the Range chart are statistically derived from the sample data. A reasonable time gap is allowed between successive groups of samples, intending to bring out any process variations. The X-Bar represents the mean value of each subgroup. It is to be noted that the control limits of the X-Bar chart are also derived using the average range values.

How is the XBar–R chart interpreted? If all the X-Bar values fall within the control limits, the variation between the subgroups cannot be distinguished from the variation within the subgroups. This could mean that the process variations are very low and do not show up over and above the 'within-group' variations. It could also mean that the 'within-group' variations are so high that the process variations cannot be distinguished. If more X-Bar values fall outside the control limits, then the process is considered to be influenced by assignable causes, whose influence is over and above the within-group variations. On the whole, more points falling within the control limits is the desirable situation here.

Now let's examine the XBar–R chart used for interpreting Gage R&R. Here, each subgroup is represented by the readings taken by the same appraiser on the same part, assuming the range chart is made for each appraiser. The control limits for the X-Bar chart, being based on these R values, depict the variation of the measurement system. Each point on the X-Bar chart represents the average of an appraiser's readings on a part, and the variation between X-Bar values is considered to be due to part-to-part variation. Now, if most of the X-Bar values fall within the control limits of the X-Bar chart, it means that the part-to-part variation is not distinguishable from the measurement system variation. It either means that the measurement system variation is too high, or that the choice of parts is not representative enough to bring out the part-to-part variation, or a combination of both. If most of the X-Bar values fall outside the control limits, it means that the variation due to the measurement system is low enough to reveal the part-to-part variation. In other words, here we would like the relative variation of the measurement system to be low compared to the part-to-part variation that the system is expected to assess. Hence, more points falling outside the control limits is the desirable situation here.
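As a rough illustration of how the limits themselves are obtained, here is a sketch using made-up subgroup data and the standard published constants for a subgroup size of 5 (A2 = 0.577, D3 = 0, D4 = 2.114); the same mechanics apply whether the subgroups come from a process or from repeated Gage R&R readings.

```python
import numpy as np

# Hypothetical data: 8 subgroups of 5 readings each
subgroups = np.array([
    [10.1, 10.3,  9.9, 10.0, 10.2],
    [10.4, 10.2, 10.1, 10.3, 10.0],
    [ 9.8, 10.0, 10.1,  9.9, 10.2],
    [10.2, 10.1, 10.3, 10.4, 10.0],
    [10.0,  9.9, 10.1, 10.2, 10.1],
    [10.3, 10.2, 10.0, 10.1, 10.4],
    [ 9.9, 10.0, 10.2, 10.1, 10.0],
    [10.1, 10.3, 10.2, 10.0, 10.2],
])

# Published SPC constants for subgroup size n = 5
A2, D3, D4 = 0.577, 0.0, 2.114

xbar = subgroups.mean(axis=1)                               # subgroup means
r = subgroups.max(axis=1) - subgroups.min(axis=1)           # subgroup ranges
xbar_bar, r_bar = xbar.mean(), r.mean()

print("R chart:     CL =", round(r_bar, 3),
      " UCL =", round(D4 * r_bar, 3), " LCL =", round(D3 * r_bar, 3))
print("X-bar chart: CL =", round(xbar_bar, 3),
      " UCL =", round(xbar_bar + A2 * r_bar, 3),
      " LCL =", round(xbar_bar - A2 * r_bar, 3))
```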
  9. Benchmark Six Sigma Expert View by Venugopal R

Process Cycle Efficiency (PCE) is defined as the ratio of value-added time to the total cycle time of a process. It is an indicator of the extent of value-adding time in a process as against the total cycle time. Here, a process step is treated as 'value adding' based on whether a customer is willing to pay for that step, whether the step results in some transformation of the product, and that it is not a rework activity.

Process efficiency is determined as the ratio of output to input for a process. For example, the process performed by an internal combustion engine in a vehicle is considered more efficient if it gives more mileage for one litre of fuel. When a process consists of a larger number of non-value-added steps, it consumes more resources as inputs and hence the process efficiency is bound to dip. In such cases, process cycle efficiency is certainly one of the important contributors to overall process efficiency. For processes that involve a sequence of steps, PCE becomes more significant.

For processes where the output yield is more important than cycle time, PCE may not be a relevant or adequate metric. For example, in a reverse-osmosis process to purify water, the volume of pure water that comes out against the volume of water consumed gives us the process efficiency, whereas the role played by PCE may not be significant or adequate in relation to the overall process efficiency. For an assembly-line process, on the other hand, the number of products assembled in an hour is an important metric, and Process Cycle Efficiency becomes an important metric influencing process efficiency.
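The ratio itself is trivial to compute; the figures below are purely illustrative.

```python
def process_cycle_efficiency(value_added_time, total_cycle_time):
    """PCE = value-added time / total cycle time (same time units for both)."""
    return value_added_time / total_cycle_time

# Hypothetical order-processing example: 45 minutes of value-added work
# within a total cycle time of 6 hours (360 minutes)
print(f"PCE = {process_cycle_efficiency(45, 360):.1%}")   # 12.5%
```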
  10. Benchmark Six Sigma Expert View by Venugopal R

Cash in hand has the advantage that it can be invested immediately and start earning returns. Hence the value of an amount of cash that we would receive in the future is always lower than the same amount held now. Net Present Value (NPV) compares the amount invested today with the present value of the future returns from the investment, after discounting them at a given rate of return. Based on these inputs, the NPV helps in deciding whether an investment is expected to be profitable or not. The profitability, however, is not based on just the absolute value of the return on investment, but is assessed after applying the discount based on the prevailing interest rate.

For instance, let's say we have a certain amount of money in hand and it is expected to earn interest at the rate of 7% per annum through normal financial investments. If we invest the money in a business instead, we should expect a return that is more than that obtained through a normal financial investment. This can be ascertained by the NPV calculation. Let's take an example where we have a sum of Rs. 100,000 to invest in a business and we expect a future cash flow of Rs. 160,000 after 3 years. We need to know whether this would be a profitable venture, taking into account the prevailing interest rate of 7%. We may provide these inputs into the NPV calculator available at https://www.benchmarksixsigma.com/calculators/net-present-value/ . We get a positive NPV, which indicates that the venture is profitable over and above the expected rate of interest. On the other hand, if the expected future cash flow had been Rs. 120,000, you can observe that the NPV turns negative, even though the absolute net yield is 20%.

Thus, the NPV acts as an indicator to assess the worthiness of a business investment, considering the prevailing interest/discount rates. However, it needs to be remembered that NPV is just one input, and business decisions are taken considering several other factors as well.
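The same two scenarios can be checked with a few lines of Python; this is only a back-of-the-envelope sketch of the discounting, not a replacement for the calculator linked above.

```python
def npv(rate, initial_investment, future_cash_flows):
    """Net Present Value: discount each future cash flow back to today and
    subtract the initial investment. future_cash_flows[i] is assumed to be
    received at the end of year i + 1."""
    pv = sum(cf / (1 + rate) ** (year + 1) for year, cf in enumerate(future_cash_flows))
    return pv - initial_investment

# Rs. 100,000 invested today, a single return after 3 years, 7% discount rate
print(round(npv(0.07, 100_000, [0, 0, 160_000])))   # ~ +30,608 -> profitable at 7%
print(round(npv(0.07, 100_000, [0, 0, 120_000])))   # ~  -2,044 -> not worthwhile at 7%
```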
  11. Benchmark Six Sigma Expert View by Venugopal R

Hypothesis testing is no doubt a very powerful method for objectively deciding whether we have enough reason to believe that two populations are different. Once the concept of hypothesis testing is understood, one discovers that it has the potential to be applied in almost all the phases of DMAIC. However, if we need to look at some of the key reasons why the tool is not used to the extent it could be, I would put down the points below, though these may not be exhaustive.

1. A Green Belt professional can gain adequate proficiency and confidence in the use of TOH only through repeated practice and deep thinking. The few examples used in a GB training are meant to illustrate the tool and its application, but many more examples need to be tried out.
2. From the various examples that are worked through, the participant needs to relate situations in his/her own work area where the type of data is comparable. For instance, an example from a manufacturing situation can be compared to one in a services industry as far as the data is concerned; it could be 'number of units produced' vs 'number of transactions served'.
3. The non-availability of statistical software like Minitab, Sigma Magic or equivalent is seen as a deterrent. Most participants get trained using a trial version and later are not equipped with the software.
4. Many a time, the leader (and sponsor) is anxious to implement improvement actions and does not spend adequate time and effort on baseline data. Once the improvement is done, even if they want to compare with the 'before' situation, they are constrained by the lack of baseline data.
5. Participants are sometimes unsure of the choice of test applicable to their projects. Hence they tend to avoid using this tool, for fear of using a wrong test.
6. The sponsors and other senior management leaders may not have the knowledge to appreciate the usage of tests of hypothesis, which could discourage the GB from trying it out, unless strongly supported by a good Black Belt / Master Black Belt.
7. The ability to interpret the results in 'business language' rather than 'statistical language' is another important skill for a project leader to convey the benefits derived by using TOH and other tools.
8. There may be instances where the volume of data available is very large, or the delta is so large that the differences between populations are obvious, which could render hypothesis testing redundant.
9. There could be some who have not gained acceptance of, or belief in, the method and continue to be comfortable with 'gut feeling' decisions.

There would be many other reasons as well, which I expect other ambassadors to narrate. On the whole, the usage of TOH will improve with more mentorship, more exercises, making the software available, and exposing senior leadership to appreciate the use and power of such tools.
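For readers without Minitab-style software (point 3), open-source tools can run the same tests; here is a minimal two-sample t-test sketch with made-up before/after data.

```python
from scipy import stats

# Hypothetical before/after data: daily transactions served per agent
before = [42, 45, 39, 47, 44, 41, 43, 46, 40, 44]
after  = [48, 51, 46, 50, 47, 52, 49, 45, 50, 48]

# Two-sample t-test: H0 = no difference in means between 'before' and 'after'
t_stat, p_value = stats.ttest_ind(before, after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: the change appears statistically significant.")
else:
    print("Fail to reject H0: no evidence of a difference.")
```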
  12. Benchmark Six Sigma Expert view by Venugopal R

The Z-score is one of the measures used for assessing the capability of a process. The following are some of the benefits of using the Z score. It is a versatile measure that can be used for both variable and attribute data. Very often, Six Sigma projects are pursued without establishing a baseline measure of process performance, which makes it difficult to quantify the post-improvement benefits; the Z score helps in assessing and comparing pre- and post-improvement process performance. Computation of the Z score forces the project team to define the specification limits, mean and standard deviation for variable data; in the case of attribute data, it forces the team to define the defects or defectives, sample size and opportunities for error. When we deal with multiple projects in an organization, be it Operations, Maintenance, Supply Chain, Administration, HR and so on, the Z score serves as a universal measure for comparing process performance across different functions.

On the whole, one should remember that the objective of a Six Sigma project is to improve a process (or processes); it is important to be clear about the process being addressed by the project and to establish the measurement method. Considering the benefits discussed, the Z score is insisted upon.
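A small sketch of the two common routes to a Z score, with made-up numbers: from variable data against a one-sided specification limit, and from attribute data via DPMO (no 1.5-sigma shift convention is applied here).

```python
from statistics import mean, stdev

from scipy.stats import norm

# Variable data: Z = (USL - mean) / standard deviation (one-sided upper spec)
readings = [9.8, 10.1, 10.0, 10.3, 9.9, 10.2, 10.0, 10.1]
usl = 10.6
z_variable = (usl - mean(readings)) / stdev(readings)
print(f"Z (variable data): {z_variable:.2f}")

# Attribute data: convert DPMO to a Z score via the normal distribution
dpmo = 25_000
z_attribute = norm.ppf(1 - dpmo / 1_000_000)
print(f"Z (from DPMO = {dpmo}): {z_attribute:.2f}")   # ~1.96
```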
  13. Benchmark Six Sigma Expert View by Venugopal R

I am sure that most of the Excellence Ambassadors will have good answers for this question based on their experiences. My discussion will focus on an allied topic that arises while RCA is performed. One of the simple and popular methods used to drill down to the 'root cause' is the '5 Why' analysis. The underlying belief is that when we ask the first 'Why', we may not be sure whether the answer is the fundamental cause or just a symptom. We need to drill down until we identify a cause for which we can directly find a solution.

For example, when we ask "Why did the car not start?", the immediate answer could be that the starter motor did not work. "Why did the starter motor not work?" The answer could be that no power was being delivered to the motor. "Why was power not delivered to the motor?" The answer could be that the battery had drained and had no power left. "Why did the battery get drained?" The answer could be: "The driver left the parking lights 'on' and hence the battery drained overnight." Now, should we stop here and decide that the root cause is that the driver forgot to switch off the lights? So we go and ask the driver why he did not switch off the parking lights. He replies that the warning beep, which should sound when the lights are left 'on' with the engine not running, was not working. Now, should we hold the driver responsible for his neglect, or conclude that the root cause was that the warning beep wasn't working?

In the above example, there were two lapses that led to the failure: the warning beep not working and the driver forgetting to switch off the parking lights. If even one of them had not occurred, the failure wouldn't have happened. Those of you who are familiar with FMEA concepts will recall that while analyzing a failure mode, we look at something known as "Occurrence" and "Current Controls". The 'current controls' usually refer to a detection system or a 'mistake proofing' system. In the above example, the cause is certainly the driver neglecting to switch off the lights; however, there was a 'current control' system in the form of a warning beep that had not worked. Hence, isn't it important that, when the root cause is being finalized for this failure, we consider both the error by the driver and the breakdown of the control system? A similar approach would be applicable to many situations where we ask "What was the root cause that led to the failure?" and "What happened to the control system?". A few examples:

1. A fire broke out
   a. Cause – poor quality of cables leading to a short circuit
   b. Control system failure – the automatic sprinkler system failed
2. Amount credited to the wrong bank account
   a. Cause – human error in entering the account number
   b. Control system failure – automated validation of the account number against other details did not function
3. Vehicle skidded
   a. Cause – driver applied sudden brakes on a slippery road
   b. Control system failure – the ABS did not function

In case one is not able to identify a relevant 'current control' system, the absence of any 'current control' needs to be recorded as part of the root cause analysis. I hope the above discussion helps illustrate that, many times, failures occur not just because of the 'cause' but when it is combined with the failure of one or more control systems. The root cause investigation must identify both the factor that created the condition for failure and the ineffectiveness or absence of a 'current control' system. Ideally, one shouldn't wait for a failure to realize that the 'current control' did not work; there have to be proactive assessments to ensure the effectiveness of such controls. One last point: even when such controls existed and prevented failures, those incidents need to be investigated to act upon the fundamental cause that created the condition for failure, even though the failure was averted by the 'current control'.
  14. Benchmark Six Sigma Expert View by Venugopal R

Any organization that deals with multiple product lines or services will have activities and expenses that are highly specific to those product lines or service verticals. There will also be many activities and expenses that are more general in nature and apply across the organization; examples are administration, infrastructure, employee welfare, communication and IT, energy consumption, dealing with regulatory bodies and so on. The method of costing that finds ways of allocating such 'overhead' expenses to functions, products or services is known as Activity Based Costing (ABC).

Activity Based Costing will invoke more responsibility and cost consciousness within each function, since each function knows it is being monitored for its share of the 'common' expenditure. From a Lean Six Sigma perspective, this helps in allocating 'baseline costs' and 'post-project' cost benefits more specifically. For instance, suppose an efficiency improvement project is taken up by a testing laboratory within a factory, and one of the components of cost saving is energy consumption. ABC will help track whether there is a reduction in energy consumption by the testing laboratory after the project is implemented. Another example could be a project where the 'Learning & Development' department brings out innovative training methods, and one of the benefits is a reduced need for employees to travel from distant locations to attend training programs. If ABC allocates the portion of travel costs associated with training to the Learning & Development department, the savings associated with their project can be quantified objectively.

While ABC has many benefits, it does pose certain challenges as well. One example is where ABC is used to allocate the costs of an 'Enterprise Business Excellence Program' to every function. Sometimes, when the Business Excellence team tries to drive certain initiatives, some functions may show resistance, since they become overly cost conscious and may not envision the long-term organizational benefit of such initiatives. This will require good conviction building to gain acceptance. Some organizations keep the costs of such company-wide programs under a 'corporate cost head', so that individual functions cannot debate such programs in the name of their P&L getting impacted. There could also be certain expenses where it would be practically difficult to apply ABC; for example, if an organization has multiple floors, it may be difficult to allocate the expenses of running and maintaining the elevators across functions, products or services!

Overall, Activity Based Costing is a very useful methodology and may be applied with prudence, as per the tolerance of the organization.
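As a simple illustration of the allocation idea (with entirely hypothetical departments, driver and figures), a shared overhead can be spread in proportion to an activity driver.

```python
# Hypothetical example: allocate a shared monthly energy bill to departments
# in proportion to an activity driver (metered machine/equipment hours).
energy_bill = 300_000  # shared overhead for the month (Rs.)

machine_hours = {        # activity driver per department
    "Assembly line": 1_200,
    "Testing lab":     400,
    "Warehouse":       150,
    "Admin offices":   250,
}

total_hours = sum(machine_hours.values())
allocation = {dept: energy_bill * hours / total_hours
              for dept, hours in machine_hours.items()}

for dept, cost in allocation.items():
    print(f"{dept:15s} Rs. {cost:>10,.0f}")
```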
  15. Benchmark Six Sigma Expert View by Venugopal R

Many of the tools used in Six Sigma projects, where samples are used for analysis and decision making, apply the principle of the Central Limit Theorem (CLT). As per the CLT, sample means tend to follow a normal distribution irrespective of the population distribution, and hence the properties of the normal distribution apply to the sample means. The approximation to normality gets better with larger sample sizes.

In today's world, with so many user-friendly statistical software packages, the analysis and even the choice of the tools to be applied (for instance, the type of test of hypothesis to be used for a comparative analysis) can be left to the software. Hence the practical application of the CLT often happens inadvertently while using these tools. Control charts that use the mean values of subgroups have their limits and rules based on the CLT. The significance tests in which mean values of samples are compared have their acceptance conditions based on the CLT. If these tools have been used as part of a Six Sigma project, the CLT has been put to use through the in-built workings of the statistical software.
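A quick simulation makes the CLT visible even without plotting: sample means drawn from a clearly non-normal (exponential) population cluster around the population mean, with a spread close to sigma / sqrt(n). The population and sample sizes below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Heavily skewed (exponential) population: clearly non-normal
population = rng.exponential(scale=10.0, size=100_000)

# Draw many samples and look at the distribution of the sample means
sample_size, n_samples = 30, 5_000
sample_means = np.array([
    rng.choice(population, size=sample_size).mean() for _ in range(n_samples)
])

print("Population mean:", round(population.mean(), 2))
print("Mean of sample means:", round(sample_means.mean(), 2))
print("Std dev of sample means:", round(sample_means.std(), 2))
print("Theoretical standard error (sigma / sqrt(n)):",
      round(population.std() / np.sqrt(sample_size), 2))
```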