# Venugopal R


1. ## Latin Square Design

Benchmark Six Sigma Expert View by Venugopal R

Readers are expected to have some exposure to 'Design of Experiments' to be able to relate to some of the terminology in this answer on 'Latin Square Design'. Experiments are designed to study whether a response (output) depends on certain factors (inputs) and to establish the extent of the relationship. When we design and perform an experiment with planned settings of an input factor, there could be some known 'noise factors' that are likely to influence the behavior of the output. Such noise factors are also referred to as 'nuisance factors'. They are factors that we are not interested in studying, but we may be concerned that they might interfere and bias our results. If we suspect the presence of one noise factor, it is common practice to use a 'Randomized Block Design'. The example below illustrates such a situation.

It is believed that the concepts of 'Design of Experiments' originated from the field of agriculture. We will understand the Randomized Block Design, followed by the Latin Square Design, using an example relating to the yield of a crop. However, the concept can be applied to other situations dealing with nuisance factors. We are limiting our discussion to the experimental design portion and not covering the analysis here.

RANDOMIZED BLOCK DESIGN

Imagine that we are interested in studying the impact of fertilizer doses on the yield of a crop. We have divided the land into 24 plots (8 x 3), as shown below. Eight different doses of fertilizer (A, B, C, D, E, F, G, H) are to be tried out. However, it so happens that there is a river flowing on the left side of the land, and we suspect that the presence of the river will result in higher moisture content for the plots closer to it.
To study any possible impact of this moisture variation, we divide the plots into 3 vertical blocks, each block representing a different moisture level (High, Medium and Low). Within each block we perform all the treatments based on the 8 fertilizer doses, but in random order. Such a design is referred to as a 'Randomized Block Design' (RBD). The RBD helps to address one noise factor.

LATIN SQUARE DESIGN

Instead of one noise factor, suppose we have two: for example, a river that runs along the West side and a road that runs along the North side. We suspect that the river contributes to varying levels of moisture as we move from west to east along the land, and that the road contributes to varying levels of pollution as we move from north to south. We thus suspect two nuisance factors, viz. moisture level and pollution level. Will the plots closer to the river be influenced by higher moisture content, and the plots closer to the road by higher pollution? To account for the possible impact of these two suspected noise factors, we use an experimental design as shown below.

As seen, the design is in the form of a square, with an equal number of rows and columns. The treatment for each plot is represented by a letter; in this case we can try out 4 different doses of fertilizer, viz. A, B, C and D. Such a design is known as a 'Latin Square Design'. Each cell in the Latin Square design can accommodate only one treatment. It may be noticed that all the treatments (A, B, C and D) are covered in each row as well as each column. The number of blocks has to be the same, horizontally and vertically, for both noise factors. The Latin Square design is used when we suspect two noise factors and want to study whether those noise factors exert an (undesired) influence on the response.
Another example of a Latin Square application is shown below. The output of interest is the rate of sales for 3 variants (A, B and C) of a product. The noise factors suspected are the type of city and the type of dealer promotion scheme. We have considered 3 blocks with respect to city type and 3 blocks with respect to promotion scheme. The Latin Square design may be applied as below:
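As a sketch of how such a design can be generated in practice, the following Python snippet (an illustration, not a prescribed procedure) builds a randomized Latin square for any set of treatments. It starts from a cyclic square and then randomly permutes rows and columns, which preserves the Latin property:

```python
import random

def latin_square(treatments, seed=None):
    """Return an n x n Latin square: every treatment occurs
    exactly once in each row and in each column."""
    rng = random.Random(seed)
    n = len(treatments)
    # Cyclic "standard" square: row i is the treatment list shifted by i
    square = [[treatments[(i + j) % n] for j in range(n)] for i in range(n)]
    rng.shuffle(square)                      # randomly permute rows
    order = list(range(n))
    rng.shuffle(order)                       # randomly permute columns
    return [[row[c] for c in order] for row in square]

# The 4x4 fertilizer example: doses A, B, C and D
for row in latin_square(list("ABCD"), seed=7):
    print(" ".join(row))
```

Because row and column permutations cannot duplicate a treatment within any row or column, the output always remains a valid Latin square, whatever seed is used.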
2. ## Customers don't feel averages

'Customers do not feel averages'.... In the case of B2B, customers are organizations. Examples of expectations from such customers would be product availability, timely delivery, zero DOA, low response time, higher customer preference, etc. In the case of B2C, customers are end consumers, and the expectations could be different. Many end consumers may purchase a product or avail of a service only once in a while. For them, a failure of the product or service is perceived as a 100% failure. 'Time to First Repair' denotes the period for which the consumer expects failure-free performance. Other expectations would include quicker response time, user friendliness, after-sales support and so on.

If we look at the various customer expectations narrated above and convert them into metrics, most of them would need a one-sided specification (for example, delivery time 2 days maximum). Averages may not be considered. We may define the 'defect' for each expectation as an instance when the expectation is not met. Thus most expectations can be measured as DPMO (Defects Per Million Opportunities) or Defective %.

Quite often, the degree to which consumer expectations are met is assessed as part of pre-delivery audits. This addresses product quality and performance expectations. For example, a consumer durables manufacturer will do a Finished Product Audit on a random sampling basis and report a score based on the findings; the score will be weighted by the criticality and frequency of the findings. A service organization would measure a CSAT score based on customer feedback. Net Promoter Score (NPS) is one of the popular methods by which we obtain an estimate of the likelihood of customers recommending the company to others based on their experience.

If averages are not felt by customers, why do we have averages measured as part of various metrics in an organization?
In production processes, where we would like to monitor performance metrics based on samples, tracking sample averages helps us apply SPC tools such as control charts. Statistically, the principles of the Normal distribution work better on sample averages. Averages do not mean much unless we examine the associated variability as well; variability is derived from sample results and expressed as 'control limits'. Such tools help us to monitor the stability (consistency of performance) of a process, and assuring stability is a prerequisite to assessing the capability of a process. Averages would not be the final way of expressing performance: capabilities are expressed as Sigma levels, capability indices, or in terms of Parts Per Million.
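The DPMO metric mentioned above reduces to a one-line calculation. A minimal sketch, with made-up shipment figures for illustration:

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects Per Million Opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Hypothetical example: 14 late deliveries out of 2,000 shipments,
# each shipment counted as one opportunity to be on time
print(dpmo(14, 2000, 1))  # 7000.0
```

A one-sided expectation such as "delivery time 2 days maximum" fits this shape naturally: every shipment delivered beyond 2 days counts as one defect.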
3. ## Hanedashi

Hanedashi is a Japanese term referring to the automatic ejection of a part from a machine once processing is completed. Let me discuss this concept with reference to a compression molding operation. In this example, the processing is done using a molding press, and a pre-formed job needs to be loaded into the heated mold in the press. In a traditional set-up (without Hanedashi), the operator's typical set of activities would be:

1. Carries the preformed job from the previous process
2. Places it near the compression molding press
3. Inserts his hand inside the mold and pulls out the completed job
4. Places it on the table
5. Picks up the preformed job
6. Loads it into the mold
7. Picks the completed job and moves to the next machine

If Hanedashi is implemented, the press will be equipped with an automatic ejection mechanism for the finished job, and the machine will also make itself ready to receive the new input. Then, the set of activities with the same scope as we saw earlier would be:

1. Bring the preformed job
2. Place it in the mold
3. Pick the finished job and load it for the next process, as applicable

Hanedashi is an important methodology in Lean Management, and the types of waste it addresses are:

- Motion: Human movement is reduced, as seen in the reduction of the manual steps.
- Waiting: The job need not wait inside the machine until an operator attends to it. Similarly, the new job need not wait for the earlier job to be removed. Hence the throughput increases.
- Over-processing: Excess time inside the machine may sometimes make the job over-processed, which wastes energy and makes the job prone to defects.
- Defects: Defects can happen as a result of over-processing, as well as from handling while removing the job manually.
- Unutilized talent: The operator is able to handle multiple machines, so additional resources need not be deployed for the same operation, resulting in better utilization of talent.

Apart from the above, Hanedashi also addresses operator safety. For the example discussed, many instances have occurred where the operator gets injured (sometimes losing fingers) if the machine is activated while the job is being moved manually, and hot molds can cause burns. It may be interesting to note that if there is a sequence of machines with Hanedashi applied throughout, it is often referred to as a 'Chaku-Chaku' line (a Japanese term meaning 'Load-Load'!).
4. ## Screening Design

For the answer provided below, it is assumed that readers have an understanding of the basics of DOE, viz. levels, interactions, main effects, etc.

DOE overview: Design of Experiments (DOE) is an advanced application of statistical methods to identify the independent factors that significantly impact a response in which we are interested. For instance, suppose we are concerned about the 'time to cook' (response) for an instant food product, and we want to study the influence of a few factors, viz. (1) quality of ingredients, (2) cooking temperature, (3) moisture content, (4) quantity of certain ingredients, (5) sequence of the cooking process and (6) type of preservative. To study the effect of these factors on the response, we have to vary them, try various combinations and observe the results. In this case, we have 6 factors. The minimum variation to which we can subject each factor is 2 levels, and we need to define these levels for each factor. If we run a set of experiments to cover all the combinations of the 2 levels for each factor, we will have to run 64 experiments (2^6). Running all combinations of factors and levels is known as a 'Full Factorial Design'. With replication (running the entire design twice), the number of runs would be 128.

Need for screening experimentation: Imagine if we needed to conduct a full factorial experiment with 10 factors, each at two levels. The number of experiments would be 1024 (2^10), and with replication we would have to perform 2048 trials. The experimental effort, time and expense could be extremely high and would prove a deterrent to attempting such a full factorial experiment. In such situations, we can conduct an initial 'screening' to eliminate the factors that are not significant, and then perform a full factorial with the remaining few. This is how screening experiments help.
Fractional Factorial designs: One method used for screening experimentation is the 'Fractional Factorial design', which requires only a fraction of the trials. The table below shows that the number of trials for a fractional factorial with 6 factors at Resolution IV is 16. By conducting trials as per a Resolution IV design, we can assess the significance of the main effects, but not the interactions. Thus, out of the six factors, we will be able to screen for the significant ones. Let us imagine that we found 3 of the 6 factors significant after performing the screening experiment. We can then study these 3 factors with a full factorial and analyze all the main effects and interaction effects, performing only 8 experiments, or 16 trials with replication. Hence, the total number of experiments, including the screening experiments, will be 32 (i.e., 16 + 16), as against the 128 experiments for a full factorial without screening (6 factors with replication). Similarly, we can work out that for 10 factors, the total number of experiments (screening + full factorial for the reduced number of factors) can be brought down from 2048 to as low as 64, assuming that we find only 5 factors significant during the screening experiment. It is to be noted that with such a reduction, we are not compromising any critical inferences.

Plackett-Burman designs: Another method used for performing screening experiments is the 'Plackett-Burman' design. These are designs of Resolution III, which means that you will be able to identify only main effects; interactions are not considered while the screening experiment is conducted. The table below provides options as per the Plackett-Burman design for various numbers of factors. As an example, for 6 factors, you can identify a screening experimental design with 12 runs.
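The run-count arithmetic for the 6-factor example above can be sketched in a few lines (the screening run count of 16 is taken from the Resolution IV design mentioned; the helper names are illustrative):

```python
def full_factorial_runs(factors, levels=2, replicates=1):
    """Number of trials for a (replicated) full factorial design."""
    return (levels ** factors) * replicates

def total_with_screening(screening_runs, significant_factors, replicates=2):
    """Screening trials first, then a replicated full factorial
    on only the factors found significant."""
    return screening_runs + full_factorial_runs(significant_factors,
                                                replicates=replicates)

print(full_factorial_runs(6, replicates=2))   # 128 runs without screening
print(total_with_screening(16, 3))            # 16 + 16 = 32 runs with screening
```

The saving grows quickly with the number of factors, since the full factorial count doubles with every factor added while screening designs grow far more slowly.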
Conclusion: To sum up, screening designs are DOE methods that help to significantly reduce the overall number of experiments to be conducted when we have a large number of factors. This is achieved by using screening experiments to single out the most significant factors; screening experiments will not help to analyze interaction effects. Once the most significant factors are identified, a full factorial experiment (or an equivalent, by choosing Resolution V or above) can be conducted with the reduced number of factors and subjected to detailed analysis and conclusions.
5. ## Reliability

The dictionary meaning of 'Reliability' is 'the quality of being trustworthy or performing consistently well'. It is also defined as the degree to which the result of a measurement, calculation or specification can be depended on to be accurate.

Reliability as 'Trust': As a layman, the term 'Reliability' is often used with a connotation of trust. We may say that this brand is reliable, this person is reliable, this bank is reliable, this doctor is reliable, etc. It means that we can trust and proceed with any association with these entities.

Reliability as 'Accuracy': If the term reliability is used with respect to information, a measurement or a calculation, it implies accuracy.

Reliability as product performance: In the case of manufactured products, 'Reliability' is the ability of a product to perform a required function under stated conditions for a stated period of time. In simpler terms, reliability is the probability that a product will be failure-free for a stated period of time or beyond. The probability of survival is usually estimated from the percentage of 'survivors' out of a large number of products.

Reliability metrics: The reliability of equipment is also expressed in terms of 'Mean Time Between Failures' (MTBF) for repairable items. If the item is not repairable, 'Mean Time To Failure' (MTTF) is applicable. Mathematically, reliability is expressed as the probability of survival:

R = P(S) = e^(−t/µ)

where t = specified period of failure-free performance and µ = MTBF.

Reliability of Service: In the context of a service industry, 'Reliability' is the probability of providing the agreed level of service within a specified time (for example, a courier service). For certain types of services, safety also matters (for example, a cab service).

Reliability as a Commitment: The term 'Reliability' is also used to express the level of fulfilment of a commitment. For example, we book a hotel room based on the features and pictures depicted in an online advertisement.
After checking in, if we feel that the extent and quality of the facilities provided are not up to the projected levels, we may feel deceived and say that this hotel is 'not reliable'. Though we discussed multiple contexts, the common component of reliability is the factor of trust (or not being let down against expectations), be it for a product, a service or a commitment.
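The survival formula R = e^(−t/µ) above can be sketched in a few lines. The MTBF and mission time below are assumed figures, and the exponential model presumes a constant failure rate:

```python
import math

def reliability(t, mtbf):
    """R = e^(-t/mtbf): probability of failure-free operation
    for a period t, assuming a constant failure rate."""
    return math.exp(-t / mtbf)

# Assumed example: MTBF of 10,000 hours, mission time of 1,000 hours
print(round(reliability(1000, 10_000), 4))  # 0.9048
```

Note that at t equal to the MTBF itself, R drops to e^(−1), roughly 0.37: an item is more likely than not to have failed by the time its MTBF elapses.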
6. ## p-value

A hypothesis test is done to ascertain whether two variables (say Y and X) are related, i.e. whether the Y (also referred to as the 'output') is impacted by a change in X (also referred to as the 'input'). We take a few varying samples and see whether the metric of interest shows a difference in Y for different values of X. For example, if we want to study whether the average productivity of a process is the same or different for the day shift and the night shift, we would take samples of productivity numbers during day and night and compare the average productivity of the day shift with that of the night shift. In this example, Y is the productivity and X is the shift (day or night).

If we observe a difference in the average productivity between the day and night shifts based on the samples, the question that arises is: "Is this difference due to sampling (chance cause) variation, or really due to the change of shifts?" The p-value, an output obtained after performing the hypothesis test, gives the probability of seeing a difference at least this large if only chance causes were at work. Obviously, if the p-value is very high, it makes sense to believe that the difference is more likely due to chance causes and not due to the change of shifts. In the language of hypothesis testing, we say that we fail to reject the null hypothesis, H0. On the other hand, if the p-value is very low, it indicates that such a difference is very unlikely to arise from chance causes alone, and hence it is highly likely that the change of shifts has caused the difference in productivity levels. In hypothesis-testing language, we say that we reject H0 (or accept the alternate hypothesis, Ha).

The practice is to fix a threshold for the p-value, above which we consider that the difference in Y is not due to X but only sampling variation. This threshold is known as the 'alpha' value, and the default alpha value is 0.05 (equivalent to a 5% probability).
This also means that the confidence level (1 − alpha) is 95%. Now, a p-value of 0.049 indicates a 4.9% chance of seeing such a difference from chance causes alone, and hence 95.1% confidence that the difference is due to the change in the input (X) variable. Similarly, a p-value of 0.02 indicates a 2% chance of seeing such a difference from chance causes alone, and hence 98% confidence that the difference is due to the input variable. By fixing a confidence level of 95%, we are setting a threshold of 5% for the p-value, below which we recognize the difference as significant. In both the above cases, the basic inference from the hypothesis test would be the same, i.e. the p-value is lower than the alpha value (5%), so we would infer that the difference due to the X variable is significant and hence there is a relationship between the two variables. If we need to prioritize the strength of significance, as is done when hypothesis tests are used as part of an experimental analysis, the lower p-value may be taken as more significant.
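One transparent way to see the p-value as "the chance of such a difference arising from chance causes alone" is a permutation test. The sketch below uses made-up day/night productivity figures; it repeatedly relabels the pooled data at random and counts how often the relabelled mean difference is at least as large as the one actually observed:

```python
import random

def permutation_p_value(day, night, n_perm=10_000, seed=0):
    """Two-sided permutation test: the fraction of random
    relabellings whose mean difference is at least as large
    as the observed one approximates the p-value."""
    rng = random.Random(seed)
    observed = abs(sum(day) / len(day) - sum(night) / len(night))
    pooled = day + night
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        d, n = pooled[:len(day)], pooled[len(day):]
        if abs(sum(d) / len(d) - sum(n) / len(n)) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical productivity samples (units per hour)
day = [52, 55, 53, 58, 54, 56]
night = [49, 51, 50, 48, 52, 47]
p = permutation_p_value(day, night)
print(p < 0.05)  # is the shift difference significant at alpha = 0.05?
```

With these figures the day and night samples barely overlap, so very few random relabellings reproduce a gap that large and the p-value falls well below 0.05.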

8. ## Delphi Technique

If you traverse the different phases of DMAIC, you are likely to find several tools and methods for the Define, Measure and Analyze phases. When we reach the Improve phase and look for tools for identifying solutions, one method that comes to most people's minds is brainstorming. Brainstorming, though a very popular and widely applied method, is also known for certain drawbacks, and several improvements have evolved as methods for the creative identification of solutions. The Nominal Group Technique (NGT) and the Delphi technique are among such methods. NGT is a method by which we can generate solutions as well as evaluate them, whereas the Delphi technique is mostly considered a method for evaluating alternative solutions.

NGT requires the participants to provide their ideas on a slip of paper, often referred to as 'silent idea generation'. The advantage of NGT is that it promotes participation by all members and overcomes the domination and influencing that usually occur in traditional brainstorming. Once the ideas are collected from all the participants, they are discussed for clarification within the group. All participants are also involved in scoring the ideas to arrive at the prioritized ones.

The main focus of the Delphi technique is to engage subject matter experts (often referred to as a 'panel of experts') on the specific topics under consideration, to evaluate multiple ideas and finally decide upon the best solution to a problem. The relevant experts are identified, possibly at different geographical locations, and each one provides inputs anonymously. For this reason, it is sometimes referred to as a 'secret ballot'. After the inputs are received from each of the experts, the questions relating to the problem solution may be refined and subjected to subsequent rounds before arriving at a final decision.
The Delphi technique as compared to NGT:

- May be used for problems that need specific expert opinions, especially if there is a likelihood of differences of opinion between the experts. The anonymity between the experts is intended to avoid possible bias and conflicts. In this way, it differs from NGT, where all the ideas, once received, are discussed among the participants and no anonymity is maintained.
- May not be friendly for solutions that are required quickly, since it usually takes time to contact experts and obtain their inputs over multiple rounds; with NGT we could reach a decision faster.
- Helps in the evaluation of ideas and is not an ideal tool for generating them, whereas with NGT we can generate ideas as well as evaluate them.
- Does not need face-to-face meetings and interactions, which are important aspects of NGT.
- Encourages diverse thinking and even conflicting opinions.
- May be used to estimate the likelihood and outcome of future events with high levels of uncertainty.

10. ## CEDAC

The Fishbone diagram, also known as the 'Cause & Effect Diagram' or Ishikawa diagram, is a very popular tool used for identifying potential root causes. Most Business Excellence professionals will need no introduction to this widely used tool. The fishbone diagram leaves us with a list of potential root causes (also referred to as X factors) stratified under a few headings.

Dr. Ryuji Fukuda developed the method known as CEDAC, an acronym for 'Cause and Effect Diagram with the Addition of Cards'. In this method, each participant is asked to identify the causes of a problem independently and write them on post-it stickers. The recommended approach is to ask the question 'Why not?' (i.e., what do we consider to be the constraints to achieving our target?) and identify the possible reasons. All the stickers are collected and grouped as in an Affinity diagram, and then transferred to the fishbone diagram on the CEDAC board under the appropriate category. These stickers are stuck on the left side of the 'bones' of the fishbone diagram. It may be observed that the method adopted can be considered a 'modified Affinity Diagram'.

Once all the stickers with the causes are on the CEDAC board, each team member is asked to view all the causes and identify solution(s) for each cause. They write the solutions on post-it stickers of a different colour. This time, the stickers are stuck on the right side of the bones containing the corresponding causes on their left side. The solutions are evaluated by the team, and the ones that get shortlisted are shown in the upper right corner of the CEDAC board as 'New Standards'. The CEDAC board also houses the problem statement (Problem Effect) and the goal statement (Target Effect). An appropriate chart, such as a trend chart, is also included to help monitor progress based on the implementation of the finalized solutions.
Thus, unlike the traditional Fishbone diagram, which is useful for causal analysis, CEDAC becomes a tool for solution implementation as well as for tracking the progress of the target effect.
11. ## Process Decision Program Chart (PDPC)

FMEA is a very popular tool used for risk analysis, whereas PDPC (Process Decision Program Chart) was released by JUSE (Union of Japanese Scientists and Engineers) as early as 1976. While the Process FMEA is useful for analyzing the potential risks (failure modes) associated with a process, the PDPC is a tool that helps to assess the risks associated with a project. The Process FMEA begins by listing the process steps and identifying the potential failure modes at each step. It has its own built-in quantification method, considering ratings for Severity, Occurrence and Detection for each failure mode, and gives a composite Risk Priority Number (RPN). PDPC is a much simpler tool than Process FMEA; it not only identifies the potential failures, but also the possible countermeasures, and ends with the selection of the feasible countermeasures. When we manage complex projects in which the impact of even small failures could be very high, it is very important to foresee potential risks and do advance mitigation planning.

The first step in preparing a PDPC is to develop a tree diagram of the project. The tree diagram begins with the overall objective in the top box; this is the first horizontal level of the PDPC. At the second level, we branch out from the overall objective into the major activities necessary to accomplish the objective. The third horizontal layer comprises the tasks that branch out from each of the activities in the second layer. Having created this 3-layered tree diagram down to task level, we need to do a 'what-if' analysis and identify what could potentially go wrong with each of the tasks. This has to be done by brainstorming, using the experience and knowledge of the people and other experts connected with the project.
Some of the questions that may be asked to identify the potential failures are:

- If this task were to fail, how could it go wrong?
- What is the primary intent of this task? Can it result in doing something else, instead of or in addition to its primary intent?
- Have we allowed any margin for error?
- Are there any assumptions made?
- Do we have experience from similar situations?

The identified risks are included in the tree diagram at the fourth level. The team may review these risks and remove the ones that are very improbable. The countermeasure for each risk is identified at the fifth level of the tree diagram. The figure shows the structure of the PDPC tree diagram with the five levels. For each countermeasure, weigh the feasibility considering cost, time, effectiveness and ease of implementation. Mark the countermeasures that are finally selected with 'O' and the ones eliminated with 'X'.
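As a minimal sketch, the five-level structure (objective, activity, task, risk, countermeasure with its O/X mark) can be held as nested dictionaries and walked to pull out the selected countermeasures. All names below are hypothetical:

```python
# Hypothetical 5-level PDPC tree:
# objective -> activity -> task -> risk -> {countermeasure: "O" or "X"}
pdpc = {
    "Launch new product line": {                       # level 1: objective
        "Qualify supplier": {                          # level 2: activity
            "Audit supplier plant": {                  # level 3: task
                "Audit slot not available in time": {  # level 4: risk
                    "Book audit slot 8 weeks ahead": "O",  # level 5: selected
                    "Use remote video audit": "X",         # eliminated
                },
            },
        },
    },
}

def countermeasures(tree, selected_only=True):
    """Walk the tree and list (risk, countermeasure) pairs."""
    found = []
    for activities in tree.values():
        for tasks in activities.values():
            for risks in tasks.values():
                for risk, cms in risks.items():
                    for cm, mark in cms.items():
                        if not selected_only or mark == "O":
                            found.append((risk, cm))
    return found

print(countermeasures(pdpc))
```

In a real PDPC the same walk would be done visually on the chart, but holding the tree as data makes it easy to list only the 'O'-marked countermeasures for the mitigation plan.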
12. ## Acceptance Sampling

'Acceptance sampling' refers to sampling methods used to take a decision on accepting or rejecting a lot. The method has been widely used as part of the incoming-goods acceptance procedures of organizations that buy materials from suppliers or sub-contractors. However, it is applicable to other areas as well, viz. finished goods clearance and in-process acceptance. Some business examples where acceptance sampling may be applied:

- To evaluate lots or batches of incoming components for a manufacturing organization
- To assess the quality of transactions executed during a period of time in a BPO industry
- A bank that processes large batches of cheques by automatic optical character reading may use acceptance sampling to verify whether the output meets a zero-defect acceptance criterion
- A department store can verify the weight of pre-packed goods in a consignment on a sample basis to decide on accepting or rejecting the consignment
- Pharma approval authorities may use acceptance sampling as one among many procedures adopted to take decisions on certifying the release of a batch of medicines

The main advantage of using acceptance sampling is saving the cost, effort, handling and time involved in inspecting the entire lot. When a decision to accept or reject is taken based on the acceptance sampling procedure, the rejected lots are usually expected to be reworked, replaced or segregated by the concerned supplier. However, there are certain disadvantages as well. One of the disadvantages of acceptance sampling is the presence of 'sampling risks'. There are two types of sampling risk:

1. Good lots can be rejected (Producer's risk, or alpha risk)
2. Bad lots can be accepted (Consumer's risk, or beta risk)

'Good lots' are lots whose defect levels are within acceptable limits; 'bad lots' are those whose defect levels exceed them. Assume that we fix the Acceptable Quality Level (AQL) as 1.0%.
Then, ideally, the sampling plan should accept all lots that have a defective level less than 1.0% and reject all lots that have a defective level higher than 1.0%. The ideal operating characteristic curve (OC curve) would then look as shown below. This situation is only ideal; in reality, the OC curve for a sampling plan looks something like the one shown below. The AQL is defined as the maximum percent nonconforming that, for the purposes of sampling inspection, is considered satisfactory as a process average. The OC curve gives the probability of acceptance at the AQL. Similarly, another point towards the lower end of the Y axis represents what is known as the LQL (Limiting Quality Level). This determines the limiting quality for which we expect a very low probability of acceptance. These two points on the OC curve define a sampling plan, and any sampling plan has its own OC curve.

In the modern world, with the focus moving towards Lean, JIT, process capabilities and supplier certifications, the importance of acceptance sampling as a long-term control measure is reducing. However, it is important to understand the principles behind it, and it still continues to have application in many situations.
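A point on an OC curve can be computed directly from the binomial distribution. The sketch below assumes a hypothetical single sampling plan (sample size n = 80, acceptance number c = 2) and prints the probability of acceptance at a few lot quality levels:

```python
from math import comb

def p_accept(n, c, p):
    """Probability of accepting a lot under a single sampling plan:
    draw n items, accept if the number of defectives found is <= c."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

# Hypothetical plan: n = 80, c = 2; sweep the lot percent defective
for pct in (0.5, 1.0, 2.0, 5.0):
    print(f"{pct}% defective -> P(accept) = {p_accept(80, 2, pct / 100):.3f}")
```

Plotting P(accept) against the percent defective traces the real (non-ideal) OC curve: high acceptance probability near the AQL, falling off smoothly rather than dropping vertically, and reaching a low value at the LQL.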
13. ## Work Breakdown Structure

Work Breakdown Structure (WBS) is defined as 'the hierarchical decomposition of the total scope of work to be carried out by a project team to accomplish the project objectives and create the required deliverables' (PMBOK). The concept of the WBS emerged from PERT (Program Evaluation & Review Technique) by the US Department of Defense. In 1987, the Project Management Institute (PMI) documented the WBS method for application in non-defense industries.

The WBS is an important tool in project scope management. The overall project deliverable is broken down into sub-deliverables, and the project work into smaller, manageable components. This helps in the clear deployment of accountabilities across the project team members, while creating visibility across the team as to how each activity connects to the overall project objective. The WBS looks similar to an organization structure. An illustrative example of a WBS for the creation of a web application is given below.

A WBS mainly provides the outcomes of each stage of the breakdown, not the activities; one cannot expect prescriptive activities from a WBS. It is common practice to provide a hierarchical numbering system for each breakdown deliverable, e.g., 1.0, 1.1, 1.1.1, etc. Creating a WBS acts as a roadmap for the project manager in terms of the multiple deliverables of the project and how they lead to the overall deliverable. This brings good control to scope management, making it easy to ensure that 100% of the tasks get addressed through the components of the WBS and that no irrelevant component is included. The components of the WBS need to be MECE (Mutually Exclusive and Collectively Exhaustive): while the WBS has to incorporate all the necessary tasks, there should not be overlap between any two components. WBS principles provide guidelines for the level of detailing; two to four levels of breakdown are recommended.
The duration of the activities for the individual elements needs to be considered while deciding the final level of deliverables. One guideline is to ensure that no activity at the lowest level exceeds eighty hours of effort. Another is that the duration of the lowest-level activities should fall within a single reporting period for the project.
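The hierarchical numbering convention described above can be sketched in a few lines of Python. The web-application deliverables below are purely illustrative placeholders, not a prescribed breakdown:

```python
# Minimal sketch: assign 1 / 1.1 / 1.1.1-style codes to a WBS tree.
# The tree itself (names and nesting) is an assumed, illustrative example.

def number_wbs(node, prefix=""):
    """Walk a WBS tree (dict of name -> children) depth-first and
    return (code, name) pairs with hierarchical numbering."""
    numbered = []
    for i, (name, children) in enumerate(node.items(), start=1):
        code = f"{prefix}.{i}" if prefix else f"{i}"
        numbered.append((code, name))
        numbered.extend(number_wbs(children, code))
    return numbered

wbs = {
    "Web Application": {
        "Requirements": {"Gather user stories": {}, "Sign-off": {}},
        "Design": {"UI mock-ups": {}, "Database schema": {}},
        "Build": {"Front end": {}, "Back end": {}},
    }
}

for code, name in number_wbs(wbs):
    print(code, name)   # e.g. "1.2.1 UI mock-ups"
```

Because the numbering is derived from the tree structure, inserting or removing a deliverable automatically renumbers its siblings, which keeps the MECE breakdown and its codes consistent.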
14. ## Cost Reduction vs Cost Avoidance

Benchmark Six Sigma Expert View by Venugopal R

Lean Six Sigma and Business Excellence professionals often come across improvement projects that are important but whose gains are sometimes difficult to justify for CFO approval. A "cost avoidance" project is perhaps one such situation. I recall a situation where the customer had a penalty clause if the Quality level of our output fell below 98%. For many years we managed very well, barring some occasional blips, and maintained Quality levels high enough to avoid the penalties. One fine day, the customer revised the SLA, raised the Quality requirement to 99.5%, and gave us 3 months' time to attain it. This forced us to work frantically on an improvement project which, if completed successfully on time, would help us 'avoid the cost' of the penalty. However, the finance staff would not see it reflected as a cost saving on their books.

Now, imagine another situation where we are already incurring losses as a result of being penalized for not meeting the Quality score. If a project is taken up to address this issue and we succeed in getting rid of the penalty, this will obviously be seen as a saving on the finance books and would probably be appreciated better than the previous case.

"Cost reduction" refers to the reduction of a cost that is already being incurred. It is like being relieved of a pain we are already suffering. "Cost avoidance" refers to efforts that avoid a potential cost, which would be incurred if the action were not taken. It is like being prevented from a pain we are likely to suffer if we do not act on time. DFMEA and PFMEA are tools that help us prevent potential failures and thus help in "cost avoidance". Fault tree analysis and corrective action are efforts that help us solve an existing problem and hence result in "cost reduction". 
However, once we implement a "cost reduction" activity, it has to be regularized and applied as "cost avoidance" on similar new processes or product designs. From then on, it becomes an established practice and may no longer be perceived as a "cost avoidance" action when repeated. A project that removes non-value-adding steps in a process drives "cost reduction", whereas a process or layout that is designed right from the beginning, keeping out all those NVAs, is considered a "cost avoidance" action. If a machine is producing more rejects and costing money, getting it repaired could result in "cost reduction" by eliminating the reject generation. However, a good Preventive Maintenance program would have been a "cost avoidance" initiative, as it would have prevented the reject generation in the first place.

Greater awareness and appreciation of "cost avoidance" initiatives in an organization will encourage superior thinking and prevention-oriented actions. On the other hand, poor organizational awareness of "cost avoidance" will discourage prevention-oriented initiatives. In this context, let me mention the Cost of Quality (COQ), which has 3 broad components, viz. Prevention costs, Appraisal costs and Failure costs. The Appraisal and Failure costs are often referred to as the 'Cost of Poor Quality' (COPQ). The Prevention costs should ideally be considered 'investments' that help in avoiding the COPQ. However, information on COPQ is usually more easily available in an organization than on 'Prevention costs'.
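The COQ breakdown just described can be expressed as a small calculation. All figures below are invented for illustration:

```python
# Illustrative Cost of Quality (COQ) breakdown; every figure is assumed.
costs = {
    "prevention": 40_000,   # training, FMEA, preventive maintenance
    "appraisal": 25_000,    # inspection, testing, audits
    "failure": 90_000,      # scrap, rework, penalties, warranty
}

coq = sum(costs.values())                         # total Cost of Quality
copq = costs["appraisal"] + costs["failure"]      # Cost of Poor Quality

print(f"COQ  = {coq}")
print(f"COPQ = {copq}")
```

In these (assumed) numbers, prevention spend is less than half the COPQ it is meant to avoid, which is the usual argument for treating prevention costs as an investment.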
15. ## JIT

Benchmark Six Sigma Expert View by Venugopal R

While JIT (Just In Time) aims at improving operating efficiency, it is interesting to look at what may be considered its contrary, i.e., "Just In Case". Companies tend to keep excess stock of raw materials, just in case they run out. Rarely required items are kept just in case an order comes in suddenly. Materials are procured well in advance just in case there are delays in transportation or for other reasons. If we look at most of these 'just-in-case' situations, we will see many opportunities where we could move towards JIT. However, making an organization work on JIT is easier said than done. Most of us would have read or heard about the Toyota Production System, where the JIT methodology was proven and gained popularity. The prerequisites for JIT can largely be seen by looking at the factors that prevent an organization from implementing JIT. Let's look at a few of them:

1. Quality: Variations in Quality can result in time and effort for more inspections and checks, more rework, and uncertainties. A very mature Quality system and a high level of Quality throughout the supply chain is a fundamental prerequisite for JIT. One way of addressing this is to ensure high process capability (Sigma levels) for all processes. Poka Yoke methods should be used as much as possible to prevent mistakes. Many a time, the compulsion for JIT forces an organization to uplift its Quality levels!

2. Pull system: JIT works on the premise of a 'pull' system. This means that the entire system has to be 'pulled' based on customer orders / market requirements. The customer requirements and the associated communication processes need to be very well organized and mature.

3. Quick change-overs / set-up changes: Since JIT calls for producing only what is required, in the event of frequent changes in the type of product / model requirements, the company has to be efficient at doing set-up changes and production change-overs very quickly. Popular concepts like SMED are important here.

4. 5S: "A place for everything and everything in its place" is very important to attend to orders without wasting time searching and scrambling for material, information, orders or tools. A good 5S culture is a prerequisite.

5. Supplier Quality System: Though we mentioned Quality as one of the foremost requirements, it must be emphasized that a very effective SQA program is essential, so that the inputs that come in from suppliers or sub-contractors are highly reliable and can be used without Quality checks / rework.

6. Flexibility of suppliers: The ability of suppliers and sub-contractors to accommodate the requirements of the pull system is essential. Sometimes companies may have dedicated suppliers or dedicated processes with suppliers, but this may not always be possible. There may be many standard bought-out parts too. Managing flexibility across a variety of suppliers and components will be challenging.

7. Logistics-related challenges: Challenges with respect to transportation of materials and finished goods depend on factors, not all of which may be within the control of the company or its partners. Companies try various methods to overcome such issues, sometimes even involving strategic re-location of supplier sites.

8. Production floor layout: Refining the production layout to optimize material handling and streamline production flow will help minimize the handling effort and time for material and the movement of personnel. Sometimes, in very mature JIT implementations, material is offloaded from trucks and fed directly to the assembly lines!

9. Employee training: Once we move towards JIT, there is not much room to accommodate mistakes, rework, damage and poor performance. This calls for very well planned employee training and upskilling. Multi-skilling will also be an important requirement to handle quick change-overs.

10. Flexible automation: Automated handling and feeding of material would help. However, if frequent change-overs and set-up changes are required, the automation should have the flexibility to accommodate them without delay.

11. Eliminate the 7 wastes: In general, the 7 wastes of Lean Management need to be addressed continuously, viz. Transportation, Inventory, Motion, Waiting, Over Production, Over Processing and Defects. Many of them have been covered in the earlier points. However, a continuous culture of applying Lean techniques is important for the effective sustenance of JIT.

12. Integrated ERP and EQMS systems: Well-implemented digitalized ERP and EQMS systems are a necessity in today's world for running the normal functions of an organization. Such practices, along with successful system integration across the supply chain, are another prerequisite for JIT.

13. Pilot program: Since JIT implementation is a long-term program, it has to be started as a pilot in a selected area of the organization. This will help us train ourselves in overcoming various challenges and make us more confident to extend the implementation in phases across other areas of the organization.

14. Top management commitment: As with any companywide program, top management commitment and attention to drive the program is vital; without it, JIT implementation will never take off. This will also lead to the formation of a JIT steering committee and governance process. Several decisions may have to be taken with respect to thoughtful investments and other changes in the way of operation.

The above are some of the important prerequisites for JIT implementation, but the list is not exhaustive.
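The 'pull' principle behind JIT can be sketched as a toy kanban loop, where production is triggered only by downstream demand, never by a forecast. The station name, buffer size and quantities below are all assumptions for illustration:

```python
from collections import deque

class PullStation:
    """Toy pull-system station: replenishes only when something is pulled."""

    def __init__(self, name, buffer_size=2):
        self.name = name
        self.buffer = deque(maxlen=buffer_size)  # small, capped WIP buffer

    def demand(self):
        """A customer (or downstream station) pulls one unit."""
        if not self.buffer:
            self.produce()          # kanban signal: replenish on demand only
        return self.buffer.popleft()

    def produce(self):
        self.buffer.append(f"{self.name}-unit")

line = PullStation("assembly")
orders = [line.demand() for _ in range(3)]   # three customer pulls
print(orders)
```

Note that nothing is produced until a pull arrives, and the capped `deque` mirrors the small, fixed WIP that a kanban system enforces; a 'just-in-case' system would instead fill the buffer in advance.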

18. ## 2 Sample T vs Paired T Test

Benchmark Six Sigma Expert View by Venugopal R

When we have to compare the averages of two samples, it could be for different reasons:

1. To estimate whether two existing populations differ with respect to the average value of the characteristic of interest. Examples: comparing the average life span of bulbs produced by two different companies; average marks scored by male students vs those scored by female students.

2. To estimate whether the effect of some change on a given population is significant or not. Examples: performance of a group before and after training; average mileage of cars on one type of fuel vs another.

From the above, we can see that for point 1 the two samples being compared can never be the same, since the comparison is based on a difference in the very nature of the samples themselves. In such situations we have to use the 2-sample 't' test, and no 'pairing' is possible. For point 2, we have the possibility of subjecting the same set of samples to the first treatment and then to the second, and comparing the difference in performance for each individual sample. In such situations, the paired 't' test is the ideal comparative statistical tool.

We may also come across situations where paired sampling is not practically possible. For example, take the case of evaluating the average life of bulbs from the same company before and after a process improvement. Since the life testing of bulbs is a destructive test, the same samples will not be available for a paired 't' test; we have to use a different set of samples, and so only the 2-sample 't' test applies. Another example would be comparing the effect of two vaccines on a set of people. Once they are subjected to vaccine-1, they will have developed immunity, and we cannot subject the same set of people to vaccine-2, ruling out the possibility of a paired 't' test. 
A paired 't' test is recommended over the 2-sample 't' test whenever the situation permits, considering its advantages. Let me statistically illustrate certain advantages of the paired test using the example below. As part of a medical research study, the heart rates of 20 athletes were studied before and after subjecting them to a running program. Since heart rates of the same athletes were studied before and after the treatment, a paired test is possible. We will, however, carry out both the paired test and the unpaired 2-sample 't' test on the same data and compare the results. The mean heart rate before the treatment was 74.5 and after treatment was 72.3. The Minitab outputs for both tests are given below. From these results, it can be seen from the p-values that, for the same set of data, the paired t test shows significance whereas the 2-sample t test does not. Thus, the 2-sample t test on the same data exhibits a higher 'Type 2' error.

Now, let us fix the required power of the test at 0.8 and determine the sample size requirements for both tests, all other data remaining the same. The information above is the output of a 'Power & Sample Size' analysis. For both types of test, the sample size was determined based on a difference of 2, a target power of 0.8 and a standard deviation of 4.29. The paired test requires a sample of 39, whereas the 2-sample test requires a sample of 74. Hence, the paired t test is preferable whenever practically possible, from the sample size requirement as well.
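The contrast can be reproduced with a short, standard-library-only sketch. The heart-rate figures below are invented for illustration (they are not the study's data): the spread between athletes is large, but each athlete drops by roughly 2 beats, so the paired test detects the shift while the 2-sample test loses it in the between-subject variation.

```python
import math
from statistics import mean, stdev

# Invented before/after heart rates for 10 athletes.
before = [68.0, 72.0, 75.0, 80.0, 70.0, 77.0, 74.0, 69.0, 81.0, 73.0]
after  = [66.5, 70.2, 72.8, 78.1, 68.4, 74.9, 72.3, 67.2, 79.0, 70.8]
n = len(before)

# Paired t: test the per-athlete differences against zero (df = n-1 = 9).
d = [b - a for b, a in zip(before, after)]
t_paired = mean(d) / (stdev(d) / math.sqrt(n))

# Two-sample t, equal variances assumed: ignores the pairing (df = 2n-2 = 18).
sp = math.sqrt((stdev(before) ** 2 + stdev(after) ** 2) / 2)   # pooled SD
t_ind = (mean(before) - mean(after)) / (sp * math.sqrt(2 / n))

# Two-sided 5% critical values from a t-table: 2.262 (df=9), 2.101 (df=18).
print(f"paired t   = {t_paired:.1f}  significant: {abs(t_paired) > 2.262}")
print(f"2-sample t = {t_ind:.2f}  significant: {abs(t_ind) > 2.101}")
```

The paired statistic is large because its denominator uses only the small within-athlete variation of the differences, whereas the 2-sample denominator carries the full between-athlete spread, which is exactly why pairing also lowers the required sample size for a given power.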
19. ## Process Door vs Data Door

Benchmark Six Sigma Expert View by Venugopal R

One of the main intents of executing a project using the Six Sigma methodology is to guide a team through a disciplined approach to solve a problem in a highly objective manner. Various terminologies have been coined to reinforce this disciplined approach. "Process Door" and "Data Door" are terms used to broadly classify the approach to be used during the Analyze phase. Ironically, though one of the approaches is termed the "Data Door", it does not mean that the "Process Door" will not use data!

Effectiveness and Efficiency: First, let us understand that the objective of any Six Sigma project may be broadly classified as "Effectiveness improvement" or "Efficiency improvement". 'Effectiveness' refers to how effectively we deliver a product or service to a customer, internal or external. Improvements in product Quality, enhancing the performance of a product, improving process capability, reducing variation, improving market share, etc., are examples of effectiveness improvement. 'Efficiency' refers to providing a higher effective output with lower inputs. Lean projects in general are 'efficiency improvement' projects. Improving cycle times, reducing wastes and resource optimization are examples of 'efficiency improvement'.

Process door and Data door: Once we define a project, go through the 'Measure' phase and reach the 'Analyze' phase, the choice of 'Process Door' or 'Data Door' has to be made. In general, 'effectiveness improvement' projects take the 'Data Door', whereas 'efficiency improvement' projects take the 'Process Door'.

Data door: For 'effectiveness improvement' projects we will usually have a target, standard or specification which has to be complied with or attained. In the Data Door approach, the current-situation analysis gives us the gap, which has to be studied using tools such as Pareto analysis, control charts, histograms, scatter plots, Design and Process FMEAs, etc. 
Statistical tools such as confidence intervals, hypothesis testing, normality checks, correlation & regression, etc., are also applicable as required.

Process door: For 'efficiency improvement' projects, the approach normally starts with a process map and the identification of value-adding and non-value-adding (NVA) process steps. There are detailed definitions for NVAs; a quick definition would be: those process steps that the customer is not willing to pay for, that do not result in any physical transformation, or that happen to be rework. Some of the tools used in the Process Door are process mapping, effort vs elapsed time, the seven-plus wastes, VSM, Process FMEA, etc. While using these tools, it is possible that some of the statistical tools mentioned earlier under the Data Door, such as Pareto analysis and hypothesis testing, may also be used as necessary.

Practical application on projects: For many projects, it may not be right to stick strictly to the set of tools under one of these 'doors'. For instance, while working on a project that is trying to reduce the number of exterior damages on a consumer durable product, the analysis might throw up 'number of instances of product handling' as a possible cause. This might lead to the Process Door, and a process study has to be done to identify the number of 'handling steps' that could be avoided. This is one of the '7+ wastes'; hence, apart from the main objective, the project would also yield some efficiency-related benefits. Similarly, a project that begins as an efficiency improvement project, say a TAT improvement for loan processing, begins with the 'Process Door'; upon analysis, we might discover 'reworking errors' as one of the possible causes. This will lead us to the 'Data Door' to drill down into the details of the errors made, their causes and remedies. 
Sum up: Once a project is defined, the team is expected to have a clear idea of the problem statement and the objective, based on which they will be led to the approach and tools applicable to the situation. The concept of the Process Door and Data Door is intended to provide overall guidance to set them in the appropriate direction. The team will open the appropriate 'doors' and apply the tools as they traverse the course taken by the project.
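As a small illustration of a typical 'Data Door' tool, here is a Pareto tally over an invented defect log; the categories and counts are assumptions, not data from any real project:

```python
from collections import Counter

# Invented defect log for a Pareto analysis ('Data Door' style gap study).
defects = (["scratch"] * 42 + ["dent"] * 25 + ["stain"] * 9 +
           ["misalignment"] * 6 + ["other"] * 3)

counts = Counter(defects).most_common()          # sorted by frequency
total = sum(c for _, c in counts)

cumulative = 0
for category, count in counts:
    cumulative += count
    print(f"{category:<13} {count:>3}  {100 * cumulative / total:5.1f}% cum.")
```

In this made-up data, the top two categories account for roughly 79% of all defects, which is the kind of 'vital few' signal that decides where the analysis drills down next.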
20. ## Curse of Dimensionality

Benchmark Six Sigma Expert View by Venugopal R

When we want to study the relationship of an outcome to one factor, as in simple linear regression, we obtain a relationship model with a certain level of accuracy. If we enhance the model by adding another relevant factor (dimension), we can expect the accuracy of the prediction to increase. However, if we keep increasing the number of dimensions, beyond a certain threshold the accuracy will actually start decreasing, unless we keep increasing the quantum of data substantially.

The term "Curse of Dimensionality" was coined by Richard Bellman, an American mathematician, while dealing with problems in dynamic programming. While studying models relating outcomes to factors (referred to as dimensions), establishing a statistical relationship becomes very difficult as the number of dimensions increases, unless we exponentially increase the amount of data. This phenomenon is of particular interest in the field of Machine Learning related data analysis.

To illustrate this in simple terms, let's consider an example where the variation in Quality of a certain food is studied for varying temperature. The Quality is determined by applying a score at various levels of temperature. We obtain a scatter diagram as in figure-1 below. Now, we enhance the model by adding one more factor, viz. Time, while the total number of samples remains unchanged. Since we have added one more dimension, we have to use a 3D scatter plot, as in figure-2, to represent the relationship. In figure-1, when it was a two-dimensional model, we could observe that the points were quite dense and a regression line was fitted with apparently low residuals. Figure-2 represents the 3D regression with the additional factor 'Time' included, all other data remaining the same. 
The space of the scatter diagram becomes a cube, and we can observe that the data points have changed from a 'dense' pattern to a more 'sparse' pattern. If we continue to include more dimensions for the same sample size, the representation becomes more complex, and the 'sparseness' of the data will increase, making it difficult to obtain an accurate prediction from the model. Understanding the 'Curse of Dimensionality' is crucial while planning the number of dimensions and the data volumes for an effective machine learning exercise.
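The growing sparseness can be demonstrated numerically: for a fixed number of random points in a unit hypercube, the average distance to the nearest neighbour rises sharply with dimension. This is a standard-library-only sketch; the sample size of 100 points is arbitrary:

```python
import math
import random

def mean_nearest_neighbour(n_points, dims, seed=0):
    """Average nearest-neighbour distance for random points in [0,1]^dims."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(dims)] for _ in range(n_points)]
    total = 0.0
    for i, p in enumerate(pts):
        # Distance from point p to its closest other point.
        total += min(math.dist(p, q) for j, q in enumerate(pts) if j != i)
    return total / n_points

for d in (2, 5, 10):
    print(f"{d:>2} dims: mean NN distance = {mean_nearest_neighbour(100, d):.3f}")
```

With the same 100 points, neighbours that sit close together in 2 dimensions end up far apart in 10: this is the 'dense to sparse' transition of the scatter plots, and it is why the data volume must grow rapidly with each added dimension to keep predictions accurate.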

22. ## Interrelationship Diagram

Benchmark Six Sigma Expert View by Venugopal R

The Union of Japanese Scientists and Engineers (JUSE) came out with the 'New Seven QC Tools' in the late 1970s, and the Interrelationship Diagram was included as one of them. These tools were also called 'Management & Planning' tools. While dealing with multiple factors that are believed to be impacting a problem, the interrelationship diagram serves as a useful tool to pictorially represent the 'cause & effect' relationships among all the factors, and also helps to visualize, to a great extent, the relative impact of the factors on the ultimate 'effect'.

A simple example will help in understanding this tool quickly. An organization wanted to study why it was not getting the desired level of improvement in sales after subjecting its staff to a 'Learning and Development' program for imparting the skills to improve sales. The relevant stakeholders did a brainstorming and came up with the following possible causes:

- Insufficient training duration
- Trainer caliber
- Inadequate practical training
- Poor training plan
- Qualifying exam too easy
- Candidate background
- Insufficient training content
- Low skill imparted

For the above example, an interrelationship diagram was constructed as below for the identified factors. The arrows connecting the factors represent the 'cause and effect' relationships. For instance, "Poor training plan" has 5 outgoing arrows. The factor where an arrow begins is the cause, and its effect is the factor where the arrow ends. Thus, "Insufficient content" is the effect of the cause "Poor training plan". For each factor, the number of incoming and outgoing arrows is mentioned beside the respective box. It may be noticed that there are factors with no incoming arrows, only outgoing ones. Such factors are purely a 'cause' and are not the 'effect' of any other factor (e.g., "Poor training plan"). We may also have factors with only incoming arrows and no outgoing arrows. 
Such factors are the 'effect' of many other factors and are not a 'cause' of any other factor (e.g., "Low skill imparted"). It can be seen that the interrelationship diagram provides a visual picture not only of the C&E relationships among the factors, but also of the relative priority of the causes that have the greatest influence on the final effect.

Where would we find this tool relevant and useful? During a Lean Six Sigma project, after brainstorming, once the primary causes are identified and we need to shortlist the prioritized causes, this tool is handy. Similarly, during solution identification for a problem, once we list out the possible solutions, an interrelationship diagram can provide clarity on the solutions that produce the best effect. Even for identifying a set of projects to work on, this tool helps us narrow down a list of projects and remove most of the redundancy, based on the interrelationships. Whenever we use affinity diagrams, fishbone diagrams or tree diagrams, we can use the interrelationship diagram to explore their relationships. When we have to work with a set of factors (causes) that are overlapping and related, the interrelationship diagram helps clear up the clutter and lets us proceed with more clarity and focus. The tool is simple to apply whenever we need a quick summary, and its visual impact, along with the 'In-Out' quantification, helps bring a team to consensus. Even if we have debates, they will be focused on specific factor relationships. In situations where we have a list of factors but do not have objective data to substantiate the contribution of each one towards a desired effect, the interrelationship diagram helps make initial progress. The tool is also useful when we need to quickly classify the 'factor to factor' cause-effect relationships as nil, weak or strong. 
Although we may prioritize at a broad level based on the 'In-Out' arrow counts, it has to be remembered that certain factors may prove critical despite having a low 'In-Out' arrow count. The team will have to use their discretion, and gather data, for narrowing down such factors.
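The 'In-Out' counting can be sketched as a small program. The arrow list below is a hypothetical reconstruction consistent with the example (5 arrows leaving "Poor training plan"; "Low skill imparted" only receiving), not the actual diagram:

```python
from collections import defaultdict

# Hypothetical cause -> effect arrows for the training example.
arrows = [
    ("Poor training plan", "Insufficient training duration"),
    ("Poor training plan", "Insufficient training content"),
    ("Poor training plan", "Inadequate practical training"),
    ("Poor training plan", "Qualifying exam too easy"),
    ("Poor training plan", "Low skill imparted"),
    ("Trainer caliber", "Low skill imparted"),
    ("Insufficient training duration", "Low skill imparted"),
    ("Insufficient training content", "Low skill imparted"),
    ("Inadequate practical training", "Low skill imparted"),
    ("Qualifying exam too easy", "Low skill imparted"),
    ("Candidate background", "Low skill imparted"),
]

outgoing, incoming = defaultdict(int), defaultdict(int)
for cause, effect in arrows:
    outgoing[cause] += 1
    incoming[effect] += 1

# Factors with no incoming arrows are pure drivers; no outgoing, pure outcomes.
for f in sorted(set(outgoing) | set(incoming)):
    role = "driver" if incoming[f] == 0 else ("outcome" if outgoing[f] == 0 else "")
    print(f"{f:<32} out={outgoing[f]} in={incoming[f]} {role}")
```

Sorting factors by outgoing-arrow count surfaces the likely root drivers ("Poor training plan" here), while the factor with only incoming arrows ("Low skill imparted") is confirmed as the final effect; the caveat above still applies, since a critical factor may carry few arrows.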