Prashanth Datta's post in Sample Size was marked as the answer
Simply stated, the one-sample t-test compares the mean of our sample data to a known value. For example, if we want to measure the Intelligence Quotient [IQ] of a group of selected people in India, we compute their IQ using a set of predefined tests (mapped to global standards). From the results we get the average IQ of the selected group as well as their individual IQ scores. This group average can then be compared to a known value of 82, which is the average IQ of Indians [already computed by accredited testing organizations]. Further, an average score below 70 indicates poor IQ, and this lower threshold is also computed and made available through global studies. We thus have two reference averages against which the group's evaluated IQ score can be compared to draw meaningful conclusions: a group score below 70 indicates poor IQ, a score close to 82 maps to the Indian average, and a score above 82 implies the group has some really intelligent folks. While you may want to strengthen the argument with further statistical analysis, this serves as a starting point for discussion.
The one-sample t-test is used when we don't know the population standard deviation. Like any other statistical test, the one-sample t-test works on certain assumptions. To sum up the assumptions:
· The dependent variable Y should be a continuous data type
· The data analysed should be independent
· There should be no significant outliers, as we are using the mean as the reference here
· The data should be normally distributed

Further, as we are aware, one of the techniques we use to identify critical X's is statistical hypothesis testing. An interesting question we land up with is how much data (what sample size) should be analyzed to arrive at a meaningful conclusion, i.e. data leading to root cause identification, so that we can build more effective solutions during our Improve phase.
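The IQ comparison described above can be sketched in a few lines of Python. This is a minimal illustration, assuming made-up IQ scores for the group; the reference mean of 82 comes from the discussion above, and the critical t-value is the standard two-sided table value for alpha = 0.05 with 9 degrees of freedom:

```python
import math
import statistics

# Hypothetical IQ scores for the selected group (made-up values).
iq_scores = [85, 78, 92, 88, 76, 95, 81, 90, 84, 79]
mu0 = 82  # known reference mean (average IQ of Indians, per the post)

n = len(iq_scores)
mean = statistics.mean(iq_scores)
sd = statistics.stdev(iq_scores)             # sample standard deviation
t_stat = (mean - mu0) / (sd / math.sqrt(n))  # one-sample t statistic

# Two-sided critical value for alpha = 0.05, n - 1 = 9 degrees of freedom.
t_critical = 2.262
print(f"mean = {mean:.1f}, t = {t_stat:.3f}")
print("reject H0" if abs(t_stat) > t_critical else "fail to reject H0")
```

With these made-up scores the group mean is 84.8, above the reference, but the t statistic does not cross the critical value, so the difference is not statistically significant at the 5% level.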
The sample size calculator at https://www.benchmarksixsigma.com/calculators/sample-size-calculator-for-1-sample-t-test, is a good place to start with.
Before I delve into the calculator specifics, let me take an example. We have a diabetic clinic where HbA1C readings are kept as the baseline measurement for their patients. While I am not getting into the technicalities of how HbA1C readings work, a global average of 8 is kept as acceptable. The clinic has been running tests on their patients and computing the results. Their sample data gave an average HbA1C reading of 8.04 with a standard deviation of 0.34. The clinic had introduced a new alternative diabetic drug before computing this sample average of 8.04, and now wants to run a hypothesis test to see if the new drug has really helped bring diabetes under control.
While we want to define the problem statement and use hypothesis analysis to see if the change in drug has emerged as a critical X, the first step is really to see how much data needs to be evaluated before proceeding further. The calculator helps us with this critical step. Looking at the parameters of the calculator:
· Confidence Level - prevents Type I error, i.e. rejecting the null hypothesis while it is true. As a rule of thumb, a 5% rejection risk is acceptable, which means we need a 95% probability of preventing this Type I error. Let's keep this value at 95%.
· Power of the Test - prevents Type II error, i.e. accepting the null hypothesis while it is false. By rule of thumb we keep this risk at 10%, which means we need a 90% probability of preventing this error. [Type I and Type II error levels can change based on the risk appetite of the producer and the consumer.]
· Reference Mean Value - we will keep it at 8, as that is the globally accepted normal HbA1C value.
· Sample standard deviation - we will keep it at 0.34, based on the samples.
· Sample mean value - 8.04, as arrived at from the tests.

With the above data, we see we need 759 samples to check if our mean is similar to the reference mean. Anything less than this will not give us any meaningful inference.
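The arithmetic behind the calculator can be approximated as below. This is a sketch using the standard normal-quantile sample-size formula with the HbA1C figures above; the calculator's exact t-distribution iteration may differ by a sample or so:

```python
from statistics import NormalDist

# Inputs from the HbA1C example above.
alpha, power = 0.05, 0.90
mu0, sample_mean, sd = 8.0, 8.04, 0.34

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 95% confidence, two-sided
z_beta = NormalDist().inv_cdf(power)           # 90% power
delta = abs(sample_mean - mu0)                 # difference to detect

n = ((z_alpha + z_beta) * sd / delta) ** 2
print(f"required sample size ~ {round(n)}")    # ~759, matching the calculator
```

Notice how sensitive n is to delta: the smaller the difference you want to detect relative to the standard deviation, the larger the sample required.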
An interesting thing to observe: if we relax our Type I and Type II error levels, i.e. accept more producer and consumer risk, the sample size falls. Again, it depends on the industry in which you are analyzing the data. While medical and critical research industries will not accept high allowances, other industries may allow some tolerance.
In summary, this calculator helps identify the sample size when a sample mean is available, especially in service sectors, to start with some basic analytics in the Analyze phase and come up with good inferences for solution-building activities.
Prashanth Datta's post in Cost of Poor Quality was marked as the answer
Before analyzing the COPQ calculator, let us quickly understand what COPQ is all about.
As an organization, when we commit to deliver a product or service to our customers, it is deemed that the quality of this product or service meets and/or exceeds our customer expectations. Hence, right from the Design or Planning phase itself, it is important to give a lot of emphasis to the identified quality parameters associated with the product or service. When it comes to the financial planning part of your project, it is extremely important to assess the budget that needs to be allocated towards meeting the quality criteria.
The two primary costs that will be allocated first are Prevention Cost and Appraisal Cost.
Prevention Cost generally includes the budget allocated for quality planning (inputs), training (skills to avoid rework), preventive maintenance etc. Appraisal Cost generally includes the budget allocated for testing, inspection, audits, reviews etc.

In real-world scenarios, it is often seen that once the product or service is developed and ready for deployment, we identify certain defects, and post release to market we see a set pattern of complaints reported by our customers which need to be fixed. The former, termed Internal Failures or Defects, still need to be fixed before we deploy the product or service, for which we need to spend money. The latter, which come from customers and are termed External Failures or Defects, obviously need to be addressed but have a larger cost implication.
We can categorize these two as secondary costs as below
· Internal Failures or Defects, which generally include rework, rectification, scrapping, unnecessary holds or inventory etc.
· External Failures or Defects, which generally include repairs, warranty claims, replacement costs, refunds, cost of dissatisfaction etc.

While the primary costs of Prevention and Appraisal are considered the Cost of Good Quality, as they strive towards achieving the quality parameters committed to the customer, the secondary costs of Internal and External failure are considered the Cost of Poor Quality, as they reflect the negative impact of a defective and inefficient product or service.
Cost of Poor Quality is generally referred to as the cost of non-conformance and acts as a good indicator of a company's quality policies and programs. A consistently low COPQ trend indicates strong governance around Quality Management Programs.
Reducing the Cost of Poor Quality in itself is a very strong trigger and provides a good business case for your Lean Six Sigma Projects in the organization as it definitely calls for a systematic DMAIC approach to reduce the cost of non-conformance.
Let us now look at a very basic hypothetical example to compute Cost of Poor Quality.
Company AAA manufactures kids' play-area products, mainly gear such as swings, slides, see-saws etc. Company JKL is a newly formed kids' play-area institution and has placed an order for 100 slides for all their offices across India. There is an SOW signed between the two companies, with terms and conditions that include a safety clause (as the products are used by kids), strict delivery timelines, penalty clauses for delayed service etc. Each slide was priced at INR 6000/- all inclusive (including installation, maintenance etc.).
Post production of the 100 slides, it was realized that the inclination of the slides was not good enough to give a smooth glide down. Given that the core production was completed, it was not possible to reassemble the sets. After some calculations, it was found that using a tension spring they could slightly pull the slide back towards the ladder steps (each step is separately assembled to the side bars) to give the required additional inclination for an enjoyable slide down. The team now had to include a tension spring in the set-up and incorporate it with additional effort.
In summary, one internal defect was found, and the cost of rework had to be included, i.e. the cost of the tension spring, the required adjustments on both the slide and ladder set-up, and the labor cost to make the adjustments. This was also the only workable option company AAA had to meet the delivery timelines. With the products delivered, JKL installed these slides across their 5 offices (20 each). While they noticed the added tension spring, which led to some dissatisfaction owing to the lost aesthetics of the set-up, they kind of ignored it as the overall requirement was met.
External Defect Synopsis - Within a span of 3 months, 2 branches reported issues with steps 5 and 6 loosening on the ladder side, where the tension springs are anchored to the side bars, due to the stress caused by the pull of the tension springs. While JKL reported the issue to AAA, within the next 15 days all branches flagged the issue, and nearly 70 out of 100 slides had it. Given these were steps used to ascend, JKL expressed anxiety as kids were using them and asked for a quick fix. While JKL insisted on complete replacement of the units, AAA assured a quick workaround. AAA had to send their technicians to fasten the two steps firmly to the side bars and use additional clamping, supported through extended bars connecting the slide and ladder. In fact, they had to take the additional step of clamping all 10 steps to avoid future issues. An external defect was called out here by the customer and had to be fixed, but more importantly it left the customer unhappy. Let us now look at the cost of poor quality using the calculator at https://www.benchmarksixsigma.com/cost-of-poor-quality/
a. Internal Defects Identified = 1 per slide [no proper sliding experience]. So for 100 units, I need to input 100 defects in the calculator.
b. Average cost of internal rework = cost of tension spring + additional labor cost = INR 75 + INR 150 = INR 225/-
c. External Defects Identified = 1 [steps on the ladder side loosened]. The issue was found with 70 units only, so in the calculator I need to input 70 units.
d. Average cost of external rework = cost of replacement of 2 steps + additional bars to hold the steps + additional clamps to hold all steps + cost of technician = INR 100 + INR 300 + INR 500 + INR 400 = INR 1300
e. Penalties / Liabilities = INR 25000 claimed by company JKL for lost revenue
As per the calculator, the total cost incurred due to internal and external defects is INR 138,500. AAA had sold 100 slides at INR 6000/- each, i.e. INR 600,000/- all inclusive.
Let us assume that AAA had planned for 30% profit with no investment in Prevention and Appraisal costs. So the profit made was INR 180,000/-. With the Cost of Poor Quality, they lost around INR 138,500/-, and the net effective profit is around INR 41,500.
************************************************************
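The arithmetic above can be checked with a few lines of Python, with the figures taken directly from the example:

```python
# COPQ tally for the hypothetical AAA/JKL example (all figures in INR).
internal_defects, internal_cost_each = 100, 225   # tension-spring rework
external_defects, external_cost_each = 70, 1300   # ladder-step fix
penalties = 25_000                                # lost-revenue claim by JKL

copq = (internal_defects * internal_cost_each
        + external_defects * external_cost_each
        + penalties)

revenue = 100 * 6000              # 100 slides at INR 6000 each
planned_profit = 0.30 * revenue   # assumed 30% margin

print(f"COPQ = INR {copq}")                             # INR 138500
print(f"Net profit = INR {planned_profit - copq:.0f}")  # INR 41500
```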
Taking my example above and mapping it to the calculator, below are a few call-outs.
1. Internal Defects Identified - In my scenario, it was 1 defect per piece, so for 100 units sold, it would be 100 defects. If there had been more than 1 defect per piece in my example, it would have been number of defects per piece * number of pieces.
My feedback on the calculator - It would help to make it explicit that the input is the total number of defects across all units, or to have two separate fields, such as Defects/Unit and Total Number of Units, which would land up with the right number of defects for which the cost has to be estimated. For the services industry, we can have multiple defects while the unit count can still be 1.

2. Average Cost of Rework [Both Internal and External] - In my scenario, I have tried to assess the individual costs that go into the rework for both the internal and external scenarios.
My feedback on the calculator - While it clearly states average rework cost per defect, it would be helpful to list the types of costs that can go into it, as that will help the user appropriately think, sum and input the value into the calculator.

3. External Defects Identified - In my scenario, the issue was found across 70 units.
My feedback on the calculator - It would help to make it explicit that the input is the total number of external defects across all units, or to have two separate fields, such as Defects/Unit and Total Number of Units Affected, which would land up with the right number of defects for which the cost has to be estimated. For the services industry, we can have multiple defects while the unit count can still be 1.

4. Penalties / Liabilities / Recall Costs - In my scenario, I have included a penalty in the form of lost revenue.
My feedback on the calculator - It would be helpful if more specific inputs were listed for this cost. While penalties and recall costs are direct, expanding on liabilities would be really helpful. If the magnitude of the excursion is large, you may sometimes have to engage third-party support altogether to handle the volume of your issue, which can be a huge Opex cost. So being explicit will be helpful.

In summary, this calculator is a good start to proactively look at the possible cost of non-conformance, plan your overall cost of quality, and keep your cost of poor quality at a minimum through appropriate planning. While meeting customer expectations is the focal point, managing profitability is equally important to stay competitive in the market.
Prashanth Datta's post in Sensitivity Analysis was marked as the answer
As we have already seen and understood, Root Cause Analysis focuses on identification of all those independent variable X's (further narrowed to critical X's) deemed as input, which has an impact on the dependent output variable Y. In other words, identification of all causes which influences the effect.
In Sensitivity Analysis, also referred to as "What-if" or "Simulation" Analysis, we determine how the dependent output variable (Y, or effect) varies when each of the independent variables (X, or causes) is varied under a predefined set of assumptions. Simply stated: how different values of each independent X impact Y.
Once we have the critical X's identified using the relevant tools and techniques of Root Cause Analysis, applying Sensitivity Analysis to these critical X's is extremely helpful for seeing how the focus metric Y behaves as the values of each X change under a set of predefined assumptions. This paves the way to develop solutions more scientifically within the identified X's, and combining this approach across all input causes will help us arrive at a more comprehensive solution that can be implemented during the Improve phase.
Let us now see an example.
I am running a small coffee shop and below are my financial workings as on date
Cost/Cup of Coffee - INR 12
Number of Cups of Coffee Sold per Month - 4000
Operating Expense (incl. rent, salary, milk, sugar, coffee powder etc.) - INR 40,000

Based on the above workings, my monthly income is INR 48,000 [INR 12 per cup x 4000 cups per month]. My profit after deducting the operating expense is INR 8,000 [Monthly Income INR 48,000 - Opex INR 40,000]. I will now put up a problem statement with a business case that I need to improve my profits from this coffee shop.
At a high level, if I put this in the mathematical cause-and-effect format Y=f(X) for my business case, then using Root Cause Analysis techniques we can say profits are influenced by price per cup, number of cups sold and operating expenses.
Profits = f(Price per Cup, Number of Cups Sold, Operating Expense, ...). I need to work each of these levers, either up or down, to improve my profits.
At a high level, going by the thumb rule, we always say to reduce the Opex, and that in itself can lead to another root cause analysis on what we can control vs what we cannot.
But looking at the other two levers, cost per cup and number of cups sold, forms an interesting strategy to plan around, and Sensitivity Analysis will help me take a decision. With Sensitivity Analysis, I can play around by increasing or decreasing the price, or increasing or decreasing the number of cups sold, and observe the impact on my profit.
As per the rules of Sensitivity Analysis, we make some assumptions; in this case we assume that my operating expense will remain fairly constant, or be allowed to grow by not more than 10%. With this ground rule, the comparisons below help draw conclusions.
I can now make some quick comparisons. If I retain my current price of INR 12 but build strategies to increase sales to 5000 cups a month (an increase of 1000 cups), I will make INR 20,000 profit vs my current profit of INR 8,000. Even an increase in Opex of 10% would still yield INR 16,000, double my current profit. Likewise, if I retain sales at 4000 cups and increase the price to INR 15, I will have a similar story.
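The what-if grid behind these comparisons is simple to generate. A minimal sketch using the profit formula from the workings above:

```python
# Profit = price per cup x cups sold - operating expense (all INR).
def profit(price, cups, opex):
    return price * cups - opex

base_opex = 40_000
for price in (12, 15):
    for cups in (4000, 5000):
        for opex in (base_opex, int(base_opex * 1.10)):  # base and +10%
            print(f"price={price}, cups={cups}, opex={opex}: "
                  f"profit={profit(price, cups, opex)}")
```

The rows reproduce the comparisons above: INR 12 at 5000 cups gives INR 20,000, falling to INR 16,000 if Opex rises 10%, and INR 15 at 4000 cups also gives INR 20,000.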
In summary, Sensitivity Analysis applied on top of Root Cause Analysis shows how, within each input, you have options to explore to arrive at the desired stable output, with the Voice of Customer at the crux.
Prashanth Datta's post in Process Controls was marked as the answer
The Control phase is that critical stage of business process improvement where the selected solution(s) implemented to achieve the desired output [Y] are monitored for their effectiveness. In other words, the performance of the process, which needs to be maintained at the desired level as per the Voice of Business or Voice of Customer, has to be sustained through the actions implemented.
An effective Control Plan
a. Acts as Risk Assessment mechanism on the new improved process.
b. Ensures the process stays within control, helps identify any out-of-control variation due to special causes, and calls for appropriate action as required.
c. Continues to be a living document to monitor and control the new process
d. Doesn't replace the Standard Operating Procedure derived during the Improve phase but adds to its effectiveness by monitoring it.
e. Resides with the process owner for continuous review and documentation.
Now, knowing the importance of the Control phase, let us look at some of the types of process controls and their effectiveness. I have arranged these controls in order of their effectiveness, based on my understanding and judgement.
1. Mistake Proofing - Poka-Yoke
2. Risk Mitigation Methodologies like FMEA
3. Process documentation / Inspection and Audits
4. Statistical Process Controls
5. Response and Reaction Plans
6. Process Ownership
1. Mistake Proofing - Poka-Yoke - Mistake proofing is a technique wherein the inputs or causes are so well controlled that it becomes impossible for a mistake to happen at the process level itself.
The simplest example from our day-to-day life is the spell check on email. Unless all spellings are checked and corrected, the control implemented in the form of a dialog box will not allow the email to be sent.
2. Risk Mitigation Methodologies like FMEA (Failure Mode and Effects Analysis), wherein every solution implemented is assessed for its potential failure modes and their effects, so that mitigation actions are planned before the failures occur.

3. Process Documentation / Inspection and Audits - I have given equal weightage to both these controls, as an inspection or audit is done against a set standard, and that standard has to be well documented in your process documentation.
A well-documented Standard Operating Procedure provides detailed instructions on what needs to be done at every stage, which implies the required controls for possible deviations have also been included in the document.
While inspections and audits are an additional layer in the system and can add some lead time, they remain a desired quality control that helps ensure the end results are controlled internally rather than surfacing as escalations from customers.
4. Statistical Process Controls, like control charts, help as visual aids and pave the way for analytics. They help determine if a process is stable and within statistical control. They serve as good trigger points to catch any deviation; however, a further deep dive has to be conducted to understand the causes of the deviation.
5. Response and Reaction Plans - As the name suggests, these plans are reactive, i.e. they come into effect once an issue is identified. While not a top-preferred option, this is still a control plan, as it sets the direction for a response in the event of an issue trigger.
6. Process Ownership - Typically a control which is people-dependent, i.e. the process owner defines the rules to control the process outcome.
Prashanth Datta's post in FMECA (Failure Mode, Effects and Criticality Analysis) was marked as the answer
With increasing demands from customers for high-quality and reliable products or services, vendors (or service providers) face additional challenges in accomplishing this through a more scientific approach and reliable modeling, especially in the early phase of design or planning, to ensure the outcome maps to customer requirements by the time the final deliverables are ready.
Failure Modes and Effects Analysis (FMEA) is a tool for evaluating possible reliability issues at the early stages of the process cycle, where it is simpler to take actions to overcome these issues, thereby improving consistency through design.
In this method, we recognize probable failure modes, evaluate their effects on the process or product, and prioritize actions to diminish the failures at early stages so that the final deliverables map to the customer requirements. With this approach we move from a "find failure and fix it" approach to an "anticipate failure and prevent it" approach.
From a Six Sigma perspective, be it identifying critical X's or selecting an effective solution to implement for identified root causes, FMEA is the process-map-based tool that provides us with the required scientific rigor.
In crux, FMEA uses 3 components that are applied on the identified risks i.e. it takes into account
a. Severity [S] - What will be the severity of the anticipated failure?
b. Occurrence [O] – How frequently we expect this failure to occur?
c. Detection [D] – Do we have the required controls to detect the failure?
The combination of these three results in what is called the Risk Priority Number [RPN]: RPN = S × O × D.
Identified failures with higher RPN numbers are prioritized for corrective actions. Most of the time, the key controllable levers within the RPN formula are Occurrence and Detection, as Severity remains the same once the issue occurs.
What is FMECA and When FMECA helps?
Let us look at a scenario as below
· Failure item a – Severity = 8; Occurrence = 10; Detection = 2. RPN = 160
· Failure item b – Severity = 10; Occurrence = 8; Detection = 2. RPN = 160
· Failure item c – Severity = 8; Occurrence = 2; Detection = 10. RPN = 160
· Failure item d - Severity = 10; Occurrence = 2; Detection = 8. RPN = 160
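The four failure items above can be tallied quickly; each item's S, O and D values multiply to the same RPN:

```python
# Severity, Occurrence, Detection for the four failure items above.
failures = {
    "a": (8, 10, 2),
    "b": (10, 8, 2),
    "c": (8, 2, 10),
    "d": (10, 2, 8),
}
for name, (s, o, d) in failures.items():
    print(f"item {name}: RPN = {s * o * d}")  # all four come to 160
```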
In this case, the RPN is the same across all four items, and a further deep dive is needed. While in this simple example we can take a SWAG by looking at the Occurrence and Detection numbers, mapping them to Severity and assigning priority, in real-world problems, especially in the design of scientific, military or space equipment, the values can be too close to differentiate or to settle with a SWAG approach.
We use what is called as FMECA (Failure Mode, Effects and Criticality Analysis) methodology to handle such tricky scenarios.
While FMEA is an approach that identifies all possible ways that equipment can fail, and analyzes the effect that those failures can have on the system as a whole, FMECA goes a step beyond by assessing the risk associated with each failure mode, and then prioritizing corrective action that should be taken.
In FMECA, each failure mode is assigned a severity level, and the approach will not only identify but also investigate potential failure modes and their causes, i.e. the root cause of each failure is analyzed and corrective actions are evaluated for each identified failure.
A key thing to note here is, for FMECA to occur, we need to first have FMEA in place. A criticality analysis on FMEA results in FMECA.
FMECA is calculated in two ways.
· Mode Criticality = Item Unreliability x Mode Ratio of Unreliability x Probability of Loss
· Item Criticality = SUM of Mode Criticalities.
· Failure modes are then compared using a Criticality Matrix, a graphical form which keeps severity on the horizontal axis and occurrence on the vertical axis.
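A hedged sketch of the two criticality formulas quoted above, for one item with three failure modes. All the numbers here are made up purely for illustration; real values come from reliability data:

```python
# Mode criticality = item unreliability x mode ratio of unreliability
#                    x probability of loss.
# Item criticality = sum of the item's mode criticalities.
item_unreliability = 0.02  # assumed probability the item fails (hypothetical)
modes = [
    # (mode ratio of unreliability, probability of loss) - hypothetical
    (0.6, 0.5),
    (0.3, 0.9),
    (0.1, 1.0),
]
mode_crit = [item_unreliability * ratio * loss for ratio, loss in modes]
item_crit = sum(mode_crit)
print(f"mode criticalities = {[round(c, 4) for c in mode_crit]}")
print(f"item criticality = {item_crit:.4f}")
```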
FMEA vs. FMECA
a. FMEA is the first step required to generate FMECA. While FMEA focuses on failures, FMECA goes a step further to analyze the root cause for each failure
b. FMEA focuses on problem prevention while FMECA focuses on detection and control for each identified failure mode
c. FMEA can have multiple analysis levels while FMECA is focused at each failure level i.e. each failure is treated individually.
d. FMEA has no criticality analysis while FMECA looks at criticality of the potential failure and the areas of the design that need the most attention.
e. FMEA is focused on product design and process; it generates new ideas for improvements in similar designs or processes. FMECA identifies system and operator safety concerns and provides new ideas for system and machinery improvements.
f. FMEA is a fairly less time-consuming activity than FMECA.
g. FMEA requires knowledge of the process, product, service and customer requirements. FMECA goes a step further and needs additional inputs around the system, machinery etc., as each failure's root cause needs to be evaluated, i.e. FMECA is a more knowledge-intensive activity.
Finally, choosing FMECA over FMEA depends purely on the company's deliverables. If the design involves delivery of a critical product or service pertaining to space, medical or military applications, where we need a criticality evaluation of each potential failure, we need to go for FMECA. Note that time should be on your side, as these evaluations are time consuming.
FMEA can be a good starting point and usage of FMECA needs to be evaluated basis business case.
Prashanth Datta's post in EFQM Model was marked as the answer
Continual improvement remains the key DNA for any organization wanting to stay competitive in business and deliver value to its customers. Different organizations use different methodologies, approaches and tools for deploying such continual improvement programs to honor their quality improvement commitments. These programs go by different names: Total Quality Management, Six Sigma, Business Process Improvement, Business Process Re-engineering, Operational Excellence or Business Excellence.
EFQM is one such business excellence model. The tool helps organizations measure where they stand today on the path to excellence, understand the gaps and promote solutions. The model also helps ensure that business decisions incorporate the needs of all stakeholders and are aligned with the organization's objectives.
EFQM consists of 3 components
a. 8 Core Values - Key Management Principles for achieving sustainable excellence in any organization
b. 9 Criteria – Framework to help Organizations to convert Values into Practice.
c. RADAR – Tool to drive continuous improvement
a. The 8 Core Values of EFQM
1. Adding value for customers
2. Creating a sustainable future
3. Developing organizational capability
4. Harnessing creativity and innovation
5. Leading with vision, inspiration and integrity
6. Managing with agility
7. Succeeding through the talent of people
8. Sustaining outstanding results
b. The 9 Criteria – 5 Enablers and 4 Results
· Enablers – Leadership, People, Strategy, Partnerships & Resources and Processes – Products – Services
· Results – People, Customer, Society and Business Results.
The Criteria allow people to understand the cause-and-effect relationships between what their organization does and the results it achieves.
RADAR stands for Results, Approaches, Deploy, Assess & Refine
Results – Define the goal aimed as part of the organization strategy
Approaches – Plan and develop a set of approaches to deliver the required results now and in the future
Deploy – Implement the approaches in a systematic way
Assess & Refine – Monitor the deployed approaches and analyze the results achieved and ongoing learning
Let’s see how RADAR maps to DMAIC
· Results is equivalent to the Define phase, where you identify and set goals for improvement
· Approaches is equivalent to the Measure and Analyze phases, where you plan and develop a set of approaches to achieve the set goal
· Deploy is equivalent to the Improve phase, where you implement the developed approaches
· Assess & Refine is equivalent to the Control phase, where we monitor the deployed solution and see its impact on the results

Before we conclude this topic, we need to understand the limitations of EFQM.
This is a long-term strategic tool and cannot be used for day-to-day business, as the positive effects of this model are seen only in the long term. It is also a complex model and needs to be introduced properly, with strong support and commitment from top management. This is why it is extremely important for organizations to understand which tool to implement for their quality improvement strategy.
Prashanth Datta's post in Excellence in Results was marked as the answer
Y=f(X) represents a data-driven approach to summarizing our business problem statement. It shows the Causes (X) which have an impact on the Effect (Y). Further, the SIPOC diagram gives us a more detailed process map, which helps us, at least as a start, establish a relationship between X and Y. Schematically:
Inputs (X) ---> Process --> Output (Y)
By following the DMAIC methodology, we look at improving our Output (Y) generally by addressing those X's which can be influenced, along with working on any process-correction opportunities. In summary, an effective output is achieved by driving changes both in the inputs and in the process followed.
We have an interesting question which leads us to explore whether only one lever, i.e. either the Inputs or the Process, can be focused on to drive an improvement in output Y. To summarize, the two conditions from our question are as below.
A. Focus heavily on X (inputs) while keeping your Process at a below average level.
B. Focus heavily on your Process while keeping your X (inputs) at below average level.
In both cases, we need to understand that the de-emphasized lever, whether the Input (X) or the Process, still plays a role; it just does not drive the significant changes to Y.
A. For Scenario A, the focus is on X, while the Process has minimal impact on the Output.
The example I can think of here is that of McDonald's French fries.
The desired Output (Y) here is to deliver, at their store, tasty French fries to the customer within a committed time. To meet these conditions, the key controllable X is the potatoes. Enough investment is made to ensure that this input X is controlled from the farming stage itself, i.e. potatoes are grown under set conditions of size, quality, quantity etc., cut to proportion, processed using the defined ingredients, partially fried, frozen, packed and sent to the stores. At the store, the only process left is to fry and serve to the customer. The input is controlled so well right from the source that it delivers the required delight factor to the customer at the store.
Of late, most of the Ready-to-Eat food products available in the market focus on a better customer experience by providing a tasty product which can be prepared with less time and effort from the customer's standpoint. They achieve this by controlling their inputs at source, so that for the customer the process of preparing is as simple as heating it in a microwave oven or dipping it in boiling water for a couple of minutes before it is ready to be served.
B. For Scenario B, focus on the Process while the Input has minimal impact on the Output
The example that I can pick for this scenario is from my kitchen – The water purifier.
The desired Output (Y) here is safe, healthy, tasty and potable drinking water, available immediately when required. The input (X), water, can again be from any source, i.e. bore well, storage tank, direct municipal water supply and others. My desired output is driven by my water purifier, which has a set of processes designed within it that help me achieve my objective. The stages of filtration and purification are methodically designed, so that the input water goes through each stage of purification systematically and delivers my desired output. In this case, the water cleaning process plays the significant role in achieving my output Y.
In the context of this discussion, here is a call-out. In my first example, Scenario A, the Y is decided based on my customer requirement, and hence we say only the Xs are controlled. But if you look at the process of preparing the French Fries alone, it has its own checks and balances, and needs a balanced approach between both X and Process.
In closing, based on my observation and understanding, controlling only X or only the Process is a relative discussion, as it depends purely on the Y which we identify to drive.
Prashanth Datta's post in Y=f(X) was marked as the answer
As the topic suggests, Y=f(X) forms a very important part of a DMAIC project. Before delving into the actual question on the tools required to identify Xs, a quick recap on "What does Y=f(X) represent?" can help set the context to answer the question better.
Y=f(X) represents a more Data Driven Approach to summarize our business problem statement. The Y here is the Output and X is the Input which drives the Y.
There can be more than One X which drives the Output Y. In other words, we can represent the same as -- Y=f(X1,X2,X3....,Xn)
Apart from being identified as Output, Y is also referred to as Dependent Variable, Effect, Symptom, Monitor or Response.
Likewise, apart from being identified as Input, X is also referred to as Independent Variable, Cause, Problem, Control and Factor.
Putting it simply,
During the Define Phase, based on the Voice of Customer [VOC], Voice of Business [VOB] or Cost of Poor Quality [COPQ], we identify the Critical To Quality [CTQ] metric, Y, which needs improvement. During the Measure Phase we finalize the Y which we need to improve for the business problem, work on the standards and measurement system for Y, and document a Baseline (Current/As-Is) Performance against which the improvement will be tracked. Improvement here refers to either:
1. A shift in the Mean, upwards or downwards based on the KPI tracked (e.g. reduction in Average Handling Time implies shifting the Mean down from the current Mean, while improving Occupancy % implies shifting the Mean up from the current Mean), or
2. A reduction in the variation of the process, i.e. within the control limits.
From the above prelude, we now understand that the Output Y, or Effect, can have multiple Inputs X, or Causes, driving it.
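The two kinds of improvement above can be sketched with a minimal example. The Average Handling Time samples below are invented for illustration only; the point is simply how a mean shift and a variation reduction each show up in the numbers:

```python
import statistics

# Hypothetical Average Handling Time samples (minutes),
# before and after an improvement project
before = [12.0, 14.5, 11.0, 15.5, 13.0, 16.0, 12.5, 14.0]
after  = [10.5, 11.0, 10.0, 11.5, 10.5, 11.0, 10.0, 11.5]

# Shift in Mean: for AHT, improvement means the mean moves down
print("mean:", statistics.mean(before), "->", statistics.mean(after))

# Reduced variation: the spread (standard deviation) shrinks,
# keeping the process more comfortably within its control limits
print("stdev:", statistics.stdev(before), "->", statistics.stdev(after))
```

In a real project these baseline and post-improvement figures would come from the Measure and Control phases, not be assumed.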
The 3rd Phase of DMAIC which is Analyze will help us identify all Xs impacting Y and further helps narrow down Critical Xs which when controlled will help bring the desired improvement in the Output Y.
A simple example to illustrate,
Y [Over Weight] = f(X1 [Calories of food consumed], X2 [Sleep duration], X3 [Duration of Inactivity], X4 [Genetic Variations])
The tools that help list all possible Xs can be categorized under 3 sections:
1. Qualitative Analysis - Brainstorming and Structured Brainstorming, Affinity Diagram, Fishbone Diagram
2. Process Map Based Analysis - Value Add and Non-Value Add Analysis, Detailed Process Mapping, Failure Mode and Effects Analysis [FMEA] using the Risk Priority Number concept
3. Graphical Analysis - Historical input data trends can be analyzed using Box Plots, Histograms and Scatter Diagrams
For identifying Critical Xs,
1. We use Pareto along with other Graphical tools like Box Plots, Histogram and Scatter Diagram under Graphical Analysis
2. Value Add and Non-Value Add Analysis, Detailed Process Mapping and FMEA under Process Map Based Analysis
3. A more statistics-based approach using Hypothesis Testing
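The Pareto step above can be sketched in a few lines. The causes and defect counts below are invented purely for illustration; the logic shows how the "vital few" Xs are separated from the trivial many using the classic 80/20 cut-off:

```python
# Hypothetical defect counts per cause (X) gathered during Analyze
defects = {
    "Wrong data entry": 58,
    "System downtime": 22,
    "Missing approval": 11,
    "Training gap": 6,
    "Other": 3,
}

total = sum(defects.values())
cumulative = 0.0
vital_few = []

# Walk the causes from largest to smallest contribution,
# stopping once the cumulative share reaches 80%
for cause, count in sorted(defects.items(), key=lambda kv: -kv[1]):
    cumulative += 100.0 * count / total
    vital_few.append(cause)
    if cumulative >= 80.0:
        break

print(vital_few)
```

In practice this ranking is drawn as a Pareto chart (bars plus a cumulative-percentage line), and the shortlisted Xs then go through hypothesis testing to confirm they are truly critical.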
The purpose of identifying Critical Controllable Xs having an impact on Y is to help fix the problem at source itself.
Prashanth Datta's post in SWAG was marked as the answer
SWAG refers to a rough estimate made by experts in their field, based on experience and intuition. The decisions are driven by a combination of factors including
a. Past Experience
b. General Impressions and Assumptions &
c. Approximate calculations
It can simply be summarized as an "educated guess" and is best made by consensus within a group of experts. With a group of experts working on a problem, it often results in decisions driven by the factors mentioned above despite any incomplete information around the business problem. Theoretically, any decision made with 25-50% of the information can fall under the SWAG category.
SWAG can act as a good starting point to provide any estimate, presumably in a shorter time period and at low cost, if applicable.
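One common way to turn such expert guesses into a single number, offered here as an illustration rather than something prescribed by SWAG itself, is a three-point (PERT-style) estimate that blends optimistic, most likely and pessimistic expert inputs; all figures below are hypothetical:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Weighted three-point estimate: (O + 4M + P) / 6.

    A lightweight way to formalize an educated guess by
    weighting the most likely value four times as heavily
    as the two extremes.
    """
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Hypothetical expert inputs for a construction duration (months)
print(pert_estimate(8, 10, 15))
```

The output (10.5 months here) is still a guesstimate, but one where the assumptions behind it are stated explicitly and can be revisited as better information arrives.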
Example 1 - If I, as a customer, ask a building contractor for the estimated cost and time to construct a 1 BHK house, he can provide a ball-park cost figure and timeline estimate during the course of our conversation, based on his experience, certain assumptions and approximate calculations. This in turn helps me as a customer to plan ahead, with a certain hedge in cost and timelines, before we make any agreement on construction commitments.
Example 2 - W.R.T Services or IT industry, estimated time to deploy a pre-existing software for Dept. B which has already been implemented in Dept. A
Example 3 - W.R.T Services or IT industry, estimated FTE required to complete a task based on projected workload
In the above examples, with a group of experts applying their expertise, the SWAG approach can help build a straw-man to plan their work ahead.
Limitations of SWAG - While SWAG can help set a basic direction, we need to be extremely careful to limit its use in a few scenarios:
a. If your business problem statement has high risk involved, better to have an exhaustive search, proof, or rigorous calculation to provide the required levers for decision making.
e.g. The design of any life safety device in an automobile cannot work on guesstimates; rather, it has to be addressed in a more scientific and foolproof manner, even if we have a set of experts working on it.
b. If your business case is not pressed for time and calls for proper planning, SWAG is not your tool.
e.g. If you need to plan and present your annual budget for 2020, you need to take a more methodical, systematic and comprehensive approach to presenting it, rather than going with a guesstimate story.