Kavitha Sundar

Excellence Ambassador
Kavitha Sundar last won the day on October 11

About Kavitha Sundar

  • Rank
    Active Member

Profile Information

  • Name
    Kavitha S
  • Company
    Omega Healthcare Management Services Limited
  • Designation
    Manager - opex

  1. Kano Model

    Kano model: basic needs, performance needs, and excitement needs. The Kano model provides a description of, and a method to identify, these three types of needs. Once they are understood for a specific product or service, what would be your approach for putting these needs to good use?

As a customer: "Do I know my needs? Do my needs keep changing? Is what I needed two years ago no longer my need now? Will what I need now no longer be my need in the future?" So what is my stake? Yes, I know my needs. I can state my needs verbatim, but not perfectly. My needs are ever changing: as technology grows and urbanization spreads, needs keep changing along with them. Hence my needs are not constant.

What is the Kano model? A Japanese consultant, Noriaki Kano, developed this model. He tried to identify his customers' needs and requirements, and compared those needs to the product features in order to fit them into the model. Simply put, he compared customer satisfaction to the features of the products offered.

Before developing a Kano model, it is important to understand the following three things:
1. The customer satisfaction level in terms of product features ("satisfaction vs. functional level of product features")
2. The product features
3. A customer satisfaction survey / needs survey

1. Satisfaction vs. functionality of the product: Satisfaction is measured on a scale of 1 to 5, where 1 is mostly dissatisfied and 5 is highly satisfied / delighted. The functional level of the product or service is also measured on a 1-5 scale, with 1 being nothing / no use and 5 being the best product or service. Functionality of the product or service covers cost, investment, time, and how well the product is implemented in terms of customer usage (the reach of the product, usage definitions, etc.). When we say functionality and satisfaction are related, we can also say there is some waste involved in delighting the customer. Overprocessing or overproduction is the usual waste identified along with the Kano model. E.g. when a customer asks for a dosa, we may end up trying to delight him by giving him a ghee dosa. In such cases the customer's verbatim is misleading, and there is waste involved.

2. Product features: Kano divided product/service features into four distinct categories, which are directly related to the customer's reaction to the functionality of the product, i.e. how well the implemented features succeed in making the customer happy. E.g. when an officer is allotted an office with a fan, he will be happy; if he can adjust the speed according to the climate, he will be delighted. That was in the past; today the same change is achieved with air conditioners. The four categories are indifferent, attractive, performance, and must-be.

Indifferent: The feature's presence or absence makes no difference to the customer. There is no reaction from the customer to the functionality of the product, hence there is a huge waste of effort, money, and time.

Must-be: These are the basic requirements of the product or service that the customer expects. If the basic features are missing from the product bought or the service delivered, the customer will not buy; the product will look bad to the customer, and it will badly affect the organization. E.g. a washing machine should be able to wash clothes; a fridge should keep fruits and vegetables fresh and cool.

Performance: These are features which help the organization stay competitive in the market. More investment results in more customer satisfaction; these are a level higher than the basic needs, and satisfaction rises in direct proportion to functionality. This relationship between satisfaction and functionality is called linear or one-dimensional performance in the Kano model. E.g. a washing machine should be able to wash, rinse, and dry; in addition, you can add clothes in the middle of the wash. In a fridge, you can add extra slabs in the slots provided to store more.

Delighters/Attractive: These always cause a positive response from the customer. The bigger the investment, the bigger the satisfaction/delight. This uses a pull strategy through attractive products. E.g. a washing machine in the colours requested by, or with pictures chosen by, the client.

Change vs. delighters: As technology grows, the result of delighting the customer using the Kano model is not static; it keeps changing forever. Hence a Kano model, once developed, is not a model to be replicated in the future for the same problem or customer. Whatever analysis we do to arrive at a Kano model, it only illustrates the current reality of the customer's satisfaction and the product's functionality.

Approach for using the Kano model: you ask the customer two questions:
1. What will you do if the product/feature is not there?
2. What will you do if the product comes with only the basic features?

Yes, these are asked to the customer to know the real standing of the product at that moment.
- Performance: the customer likes having the feature and dislikes not having it.
- Must-be: customers tolerate the product for the moment, but will soon switch; they dislike it if the basic features are not there.
- Attractive: the customer likes the product because he did not expect it to have such mind-blowing features.

Conclusion: The Kano model is not static. It keeps changing forever, from product to product and time to time, due to technological evolution and the rise of competitors.

Thanks
Kavitha
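The two-question approach above is often tabulated in a Kano evaluation grid. A minimal sketch (the function name, answer scale, and simplified mapping rules are my own assumptions, not from the post) of classifying a feature from its "functional" answer (feature present) and "dysfunctional" answer (feature absent):

```python
# Hypothetical, simplified Kano classifier: each survey answer is one of
# "like", "expect", "neutral", "tolerate", "dislike".

def kano_category(functional: str, dysfunctional: str) -> str:
    """Classify a feature from the two Kano survey answers (simplified rules)."""
    if functional == "like" and dysfunctional == "dislike":
        return "performance"   # likes having it, dislikes missing it
    if functional == "like":
        return "attractive"    # delighter: unexpected, so absence is tolerated
    if dysfunctional == "dislike":
        return "must-be"       # basic need: presence is merely expected
    return "indifferent"       # presence or absence makes no difference

# The fan example from the post: adjustable speed behaves as performance,
# having a fan at all as must-be, air conditioning (back then) as attractive.
print(kano_category("like", "dislike"))    # performance
print(kano_category("expect", "dislike"))  # must-be
print(kano_category("like", "neutral"))    # attractive
```

A full Kano questionnaire uses a 5x5 answer grid (including "questionable" and "reverse" outcomes); this sketch keeps only the four categories discussed above.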
  2. Statistical Significance

    Question: What is the meaning of a statistically significant difference? What are some of the most important ways to utilize this concept in problem solving and decision making?

Definition: "Statistically significant" usually means a statistically significant difference. It means that an observed relationship between two or more variables is unlikely to be the result of random chance alone. Hypothesis tests are used to establish the relationship, and the result is interpreted through the p-value. In general, when the p-value is less than or equal to 5%, the result is called statistically significant. This does not by itself mean the finding is practically important or that a decision based on it is reliable.

E.g. 5,000 coders are given training, and we check whether there is any significant difference between male and female coders' test scores. Let's say the mean for males is 97 and for females is 99. We can use a t-test to compare the independent groups at the 0.01 level of significance. The calculated difference is very small and may not even be practically important. With a large sample, even a small difference can come out as statistically significant: the result is real and not due to chance, but it may still be trivial in practice. Significance means only that, statistically, a relationship exists between the two or more variables.

P-value: The p-value represents the likelihood of observing a result at least as extreme as the one obtained, assuming the null hypothesis is true. When the p-value is small, there is strong evidence in favour of the alternative hypothesis.

One-tailed and two-tailed significance tests: It matters whether the hypothesis stated is one-tailed or two-tailed. If it specifies the direction of the relationship, it is a one-tailed test; this is used to compare groups in one direction, e.g. "males are generally stronger than females" or "female coders score higher than male coders". A two-tailed test states its null hypothesis without direction, e.g. "males and females are equal"; the null hypothesis states there is no significant difference.

Procedure used to test for significance:
1. Decide on the alpha value.
2. Conduct the study.
3. Calculate the sample statistic.
4. Compare it with the critical value obtained.

Thanks
Kavitha
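The coder-score comparison above can be sketched with a pooled two-sample t statistic. This is a minimal illustration using the standard library only; the scores are hypothetical (five per group, chosen to match the means 97 and 99 from the post), not real data:

```python
# Pooled two-sample t-test sketch with made-up scores.
import math
from statistics import mean, variance

def pooled_t(sample_a, sample_b):
    """t statistic for two independent samples, assuming equal variances."""
    na, nb = len(sample_a), len(sample_b)
    sp2 = ((na - 1) * variance(sample_a) + (nb - 1) * variance(sample_b)) / (na + nb - 2)
    return (mean(sample_a) - mean(sample_b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

male = [97, 96, 98, 97, 97]      # mean 97
female = [99, 98, 100, 99, 99]   # mean 99
t = pooled_t(male, female)
# Compare |t| against the two-tailed critical value for alpha = 0.01 with
# df = 8 (about 3.355): if |t| exceeds it, the difference is significant.
print(round(t, 3))
```

Note that with 2,500 coders per group instead of 5, a far smaller mean difference would clear the same threshold, which is exactly the "significant but not necessarily important" point made above.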
  3. Process Stability vs. Process Capability

    Question: Differentiate between a stable process and a capable process. Is process stability supposed to be a pre-requisite for all types of processes? Explain with appropriate examples.

Process stability means consistently producing the output of the process over time. If a process is consistent over time and the distribution of the data is within the control limits, we call the process "stable and in control". If there is a shift from the mean but the data points still fall within the control limits, it is "stable but out of control"; this requires attention, since variation is present.

Process capability is a measure of how capable the process is of meeting the customer's needs/expectations. It tells us how good or bad the output is. Process capability is measured and represented by Cp, Cpk, Pp, and Ppk.

If the process is stable but not capable, customers will still be satisfied but not pleased/delighted. E.g. a customer requires trousers of size 40, and says anything between 38 and 42 would be okay. The store consistently supplies him size 39. The customer is happy but not delighted; only when the supply is exactly 40 would the customer be delighted. Here the process is stable but not capable of meeting the customer's expectation. You should always aim at the target to delight the customer, not merely at the range given.

Process stability vs. process capability, aspect by aspect:

Definition
- Stability: predictable output, consistently. Stability has nothing to do with capability.
- Capability: how well the process will behave in producing output in the future; it compares process performance against the specifications given by the client, to meet the customer's expectation at all times (100%). Capability has nothing to do with stability, though the two have an inherent relationship.

Central tendency measures
- Stability: a constant mean and constant variance are required to say a process is stable.
- Capability: range and target are provided by the client.

Limits
- Stability: control limits (UCL & LCL) are in place.
- Capability: spec limits (USL & LSL) are in place.

Variation
- Stability: constant, random, controlled variation is exhibited by a stable process.
- Capability: both controlled and uncontrolled variation can be seen in the process.

Graphical tool used
- Stability: control chart / scatter plot.
- Capability: histogram.

Interpretation
- Stability: random fluctuation around a constant mean over a period of time indicates a stable process. When a pattern is seen and variation is uncontrolled, the process is not stable even though the points fall within the control limits; controlled variation and a constant mean are required.
- Capability: if the histogram falls within the specification limits, the process is capable. If the variation is too high, or the mean is shifted away from the target, the process is incapable.

Distribution
- Stability: a constant distribution is required to say the process is stable.
- Capability: a normal distribution is required to say the process is capable.

Order of stability/capability
- Stability comes first: only when stability and normality of the process have been tested is process capability tested.

Measurement index
- Stability: root causes for variation are identified.
- Capability: Cp, Cpk, Pp, and Ppk are calculated. Use Cp & Cpk for samples and Pp & Ppk for the population to arrive at the capability index.

Definitions:
- Cp = Process Capability, a measure for a centered process.
- Cpk = Process Capability Index, an adjustment of Cp for an off-centered process.
- Pp = Process Performance, a measure of process performance for a centered process.
- Ppk = Process Performance Index, an adjustment of Pp for an off-centered process.

The capability index Cp tells you how well the data fit within the USL & LSL, and whether the process is well centered around the average. If the process is centered, Cp equals Cpk; if not, Cp > Cpk. When Cp and Cpk are greater than or equal to 1, the process is capable.

Cp/Cpk compared to sigma level:
- 1.00 corresponds to 3 sigma
- 1.33 corresponds to 4 sigma
- 1.67 corresponds to 5 sigma
- 2.00 corresponds to 6 sigma

Is process stability supposed to be a pre-requisite for all types of processes? Deming said: "Only when the process is stable, the process is capable of producing output." This means capability can't be checked before studying stability and normality. The order is: study stability, then normality, then capability of any given process. A stable process is always a prerequisite for meeting customer expectations or calculating process capability, because a process can't be capable if it is out of control.

In a DMAIC project, a Black Belt should always:
1. Check whether the process is stable, simply by using control charts.
2. Check whether the data is normal or non-normal in order to calculate capability.

If the process is not stable, start focusing on the root cause. Try to eliminate the root causes identified from the control chart and make the process stable. Make sure the data is normal, then calculate the capability. Remember: if the process is out of control, there is no use in calculating capability, because capability depends on the data collected while the process was running. Also remember that eliminating an identified root cause does not improve the process; it only brings the process back to where it belongs. Dr. Deming used a very simple analogy in his seminars: "If this building catches on fire, we must put out the fire. Putting out the fire does not improve the building. All it does is get the building back to where it should have been all along – no fires!"

E.g. when we apply for a credit card, the agent tells us that we will receive the kit within 7 working days. The customer, though aware of the process, expects the shortest possible delivery. Anything that exceeds the time period mentioned by the agent will annoy the customer; there the process is unstable and the variation is high. In such cases the customer makes many phone calls to the agent/bank to check the status, which involves cost, time, and human intervention. It also weakens the business, since word of mouth is the best way to increase sales for the bank, and poor delivery ruins the company's sales. This variation has to be identified and eliminated in order to satisfy the customer and to put the process back where it should be.

Conclusion: The process has to be stable and the data has to be normal; only then is the process assessed for whether it is capable of meeting customer needs. A process which is unstable can't be capable of meeting the customer's expectation. In simple words, stability and capability need to be treated hand in hand in terms of interpretation, but at all times the word "stable" comes before the word "capable".

Thanks
Kavitha
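The Cp and Cpk definitions above reduce to two short formulas: Cp = (USL - LSL) / 6σ and Cpk = min(USL - μ, μ - LSL) / 3σ. A minimal sketch (the trouser measurements are invented to mirror the size-39 store example, not real data):

```python
# Cp/Cpk sketch with hypothetical data; standard library only.
from statistics import mean, stdev

def cp(usl, lsl, sigma):
    """Potential capability, assuming the process is centered."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mu, sigma):
    """Actual capability, adjusted for an off-centered process."""
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Spec 38-42 (target 40), but the store consistently delivers near 39.
sizes = [39.0, 39.1, 38.9, 39.0, 39.2, 38.8, 39.0, 39.1]
mu, sigma = mean(sizes), stdev(sizes)
print(round(cp(42, 38, sigma), 2))       # high: spread is tiny
print(round(cpk(42, 38, mu, sigma), 2))  # lower: mean sits off the target
# For a centered process Cp == Cpk; here Cpk < Cp because mu ~ 39, not 40.
```

Note these indices only mean anything if the process is first shown to be stable on a control chart, as argued above.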
  4. Correlation

    Question: Correlation does not prove the cause-effect relationship between two variables. Why do we still use it in root cause analysis? Please answer in your own words.

- Correlation is a statistical measure that tells us the size and direction/extent of the relationship between two variables; a change in one variable does not automatically mean it causes a change in the other.
- Causation means that a change in one variable causes the change in the values of the other variable; also called cause and effect.
- Association is another name for correlation.

Two types of relationship:
- One action/variable causes the other variable (e.g. eating sugar causes diabetes).
- Two variables are correlated/associated (e.g. diabetes and hypertension are correlated, but diabetes does not cause hypertension).

Yes, correlation is a necessary condition but not a sufficient condition for causation. Sometimes correlation alone is enough; sometimes you need experimental rather than observational data to establish causation. Causation requires experimental evidence, while correlation can be based on observation alone. E.g. a study conducted by an insurance agency found that male drivers are more prone to accidents, hence insurance agencies charge them more. In this case you can't change the cause: the gender of the drivers can't be changed experimentally. Here male and female groups might be tested separately and the results analysed for correlation.

Causation: when the CEO of Ola suddenly passes away, there will be a change in the system; high cab prices may be experienced for some days. The death of the CEO is the cause of the revised cab prices.

Let's talk about each relationship in detail:

1. No relationship / no association: one variable remains constant while the other variable increases or decreases. Contrast this with the case where eating sweets/sugary products is correlated with the likelihood of obesity: after a detailed study, all the correlations said the more you eat sweets/sugary products, the more weight you put on. But where is the causation? Do sugary products cause one to gain weight, or does a gain in weight cause increased consumption of sugary products? (A controlled experiment with rats showed that the group fed yogurt with artificial sweetener gained more weight than the group fed normal yogurt.) More such experiments are still happening in medical laboratories.

2. Negative correlation: the two measured variables move in opposite directions (when one increases the other decreases, and vice versa). E.g. the size of the palm is negatively correlated with the longevity of a person: a female's palm is usually smaller than a male's, yet females live longer than males do.

3. Positive correlation: the two measured variables move together in the same direction (when one increases the other also increases; when one decreases the other decreases). E.g. whenever the outside weather is hot, more fruit juices/ice creams are sold. Temperature and ice cream sales move in the same direction, so they are positively correlated.

Another example, where a third variable is the cause but there is a correlation between the two observed variables: a strong correlation can be exhibited between the amount of crime and the amount of ice cream sold by vendors. In such a case, what is cause and what is effect? We can't label one as the cause of the other. The answer is that a third variable causes both crime and ice cream sales: summers are when crime is highest and ice cream/juice sales are recorded highest.

We always see patterns, and we normally tend to gather information that supports the views we have already formed; this behaviour is called confirmation bias. We often conclude a study with coincidence rather than causality. A relationship can't be proved, but it can be disproved with the help of hypothesis testing. Statistically it is possible to disprove a relationship: never try to prove a correlation; instead use the double negative and disprove its absence by rejecting the null hypothesis. With such considerations in mind, scientists must carefully design and control their experiments to weed out bias, circular reasoning, self-fulfilling prophecies, and hidden variables.

Importance of causation and correlation: Correlation is important for identifying the extent of the relationship established between two variables. After confirming the relationship, it is also important to investigate whether one variable causes the other. Understanding both provides the insight needed to target the best outcome.

Correlation measurement and values: Correlation identifies the direction and degree of association between two variables and is represented by r, a numerical value ranging between -1.0 and +1.0.
- Negative correlation: r < 0 indicates a negative relationship between the variables.
- Positive correlation: r > 0 indicates a positive relationship, meaning both variables move in tandem.
- No correlation: r = 0 indicates there is no relationship between the variables.

Limitations: While the correlation coefficient is a useful measure, it has its limitations. Correlation coefficients are usually associated with measuring a linear relationship. For example, if you compare hours worked and income earned for a tradesperson who charges an hourly rate, there is a linear (straight line) relationship, since each additional hour worked increases income by a consistent amount. If, however, the tradesperson charges an initial call-out fee plus an hourly fee which progressively decreases the longer the job goes on, the relationship between hours worked and income would be non-linear, and the correlation coefficient may be closer to 0.

Care is needed when interpreting the value of r. It is possible to find correlations between many variables where the relationship is due to other factors and has nothing to do with the two variables being considered. For example, sales of ice cream and sales of sunscreen can increase and decrease across a year in a systematic manner, but the relationship is due to the effects of the season (hotter weather sees more people wearing sunscreen as well as eating ice cream) rather than any direct relationship between the two. The correlation coefficient should not be used to say anything about a cause and effect relationship: by examining the value of r we may conclude that two variables are related, but r does not tell us whether one variable caused the change in the other.

Limitations reference: http://www.abs.gov.au/websitedbs/a3121120.nsf/home/statistical+language+-+correlation+and+causation

How can causation be established? People misunderstand correlation, assuming that if a relationship exists, it must be the real causal factor. Establishing the real root cause, or the third variable that really causes the effect, takes extra organizational effort, time, and human resources. Relying on observed data alone to establish solutions is the wrong method, unless the variable selection is correct. A controlled environmental study is the most effective way to separate the causes from the variables studied. In such a study, the sample is divided into two groups, making sure the groups are comparable in almost every way. The study is then conducted, the results are monitored, and they are later analysed for causation and correlation between variables. E.g. in pharmaceutical trials, two groups with a similar kind of disease are selected; one group is given the standard treatment, while the other receives the advanced or new medication. If the two groups have noticeably different outcomes, the different treatments may have caused the different outcomes.

Observational studies are often used to investigate correlation and causation for the population of interest. These studies look at the groups' behaviours and outcomes and observe any changes over time. Their objective is to provide statistical information to add to the other sources of information required for establishing whether or not causality exists between two variables.

Hence, to conclude: hypothesis testing is used to confirm correlation and causation between two variables, which makes correlation an important technique in root cause analysis for finding the critical X's.

Thanks
Kavitha
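The coefficient r described above can be computed directly from its definition (covariance divided by the product of the standard deviations). A minimal sketch; the temperature and sales figures are invented to echo the ice-cream example, not real observations:

```python
# Pearson correlation coefficient sketch; standard library only.
import math

def pearson_r(xs, ys):
    """r in [-1, 1]: direction and degree of linear association."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hot weather vs. ice creams sold: both move together, so r is close to +1.
temperature = [30, 32, 34, 36, 38]
ice_creams = [110, 128, 138, 155, 168]
print(round(pearson_r(temperature, ice_creams), 3))
# A high r here says only that the two move together; as argued above,
# it says nothing about which (if either) causes the other.
```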
  5. VOC, Voice of customer

    Question: While VOC is considered a key starting point for business excellence, can overemphasis on VOC be detrimental to business? Explain with examples.

VOC, the voice of the customer, is defined as the customer's voiced desires, wishes, expectations, requirements, and needs for the output, which can be either a product or a service. It is always captured as verbatim comments.

Who is a customer? A customer is the one who buys your service/product and is willing to pay on receiving it. Customers are of two types: internal and external.
- Internal customer: a person who wants a specific output and works within the organization.
- External customer: a person who wants the output and works outside the organization.

A customer's needs are translated into requirements in Six Sigma language. There are three easily confused terms — needs, wants, and requirements — and a Six Sigma practitioner has to be aware of these terminologies so as to drive the needs into CTQs and delight the customer in an innovative, technological world.

- Wants: additional features which might excite the customer about the product; add-ons/freebies. A customer would be disappointed if the additional feature is absent, but would still use the product.
- Needs: the clearly stated and unstated expectations of the customer for the product/service. If needs are not satisfied, the customer is highly dissatisfied and moves to the competitors.
- Requirements: the musts of the product/service, mandatorily required to fulfil the needs of the customer. If the requirements are not met, the product fails in the market by word of mouth itself; the customer doesn't even enquire about it.

E.g. an aged customer wants to know her condition and asks for BP, sugar, and hormone check-ups. The hospital personnel analyse her condition and recommend a special cardiac package for her wellbeing along with the master health check-up. The basic package (BP, sugar, and thyroid hormone tests) might satisfy the customer, but the cardiac package will delight her. Similarly, a customer wants a trainer who is friendly and explains concepts at the level the trainee understands: a friendly, knowledgeable trainer who makes the trainees understand all the concepts taught, with accuracy of the training content, quality of the trainer, and cost effectiveness.

Capturing VOC: it can be captured in many ways, such as:
1. Surveys
2. Focus groups
3. Brainstorming / direct discussion with the customer/client
4. Interviews
5. Field reports / observations & suggestions
6. Warranty cards / complaint logs

As customers are of two types, internal and external, the "voices" are likewise classified:
1. Voice of the Associate: employees' suggestions/feedback on the product/service processed.
2. Voice of the Business: primarily the needs of the business and its stakeholders, e.g. revenue, profit, loss.
3. Voice of the Customer: the clients / end customers.
4. Voice of the Process: the analysis results obtained from process capability studies and CTQ analysis.

Can overemphasis on VOC be detrimental to business? When is VOC harmful?

1. When VOC fails to identify the right customer, the project team/organization has to intervene in the middle of the DMAIC phases, resulting in additional time, manpower, and cost.
Example: a life insurance retention project gathered VOC from current and former policyholders, but the VOC missed a key component, namely whether customers were given alternatives to help decrease premium payments. This was only identified much later in the project, through the involvement of call-centre representatives during the Improve phase.
How to avoid: a SIPOC diagram will help you identify the supplier, the customer, and the process steps at a higher level. In the example mentioned, agents were identified as customers, a survey was taken, and the changes were considered by the insurance parties; this best practice was then implemented across other parties.

2. When VOC fails to clearly define the goals of data collection upfront, too much focus on the outcome instead of the factors (X) takes the project in a different direction.
Example: a project focused on reducing employee calls to the company service desk. One of the themes identified through VOC was the lack of information available to employees to resolve issues on their own. Because there was not enough detail on what types of information employees needed, an additional round of data collection was necessary.
How to avoid: list all potential X's, create a data collection plan for all the critical X's, and focus the group on collecting the critical X's.

3. When VOC directs the data collection methods wrongly. Overemphasis on a method like surveys will lead the project in the wrong direction if it is the one and only method of data collection; a solution can't be derived from it if the data collected is wrong.
Example: a project focused on the knowledge, attitudes, and beliefs of youngsters about HIV/AIDS, with a survey as the data collection method. Relying on this alone and arriving at a solution of creating awareness may involve additional cost, time, and resources, which might have been avoided if a focus group or one-to-one interview technique had been used.
How to avoid: understand the pros and cons, then identify and deploy the right data collection tool for the project.

4. When VOC is not converted into CTQs. Because the verbatims from the customer are not clear, the needs are not stated properly and converted into CTQs, which then leads to wrong data collection, baselining, analysis, and solution identification and implementation.
Example: a customer says "I don't like your service". This does not lead to an actionable CTQ; the project team has to drill down further by talking to the customer about what it really means. If the team does not know the customer's needs, the customer will never be satisfied by the organisation.
How to avoid: first understand what the customer's needs, wants, and requirements are.

Thanks
Kavitha
  6. Process mapping

    Process Map – is defined as a visual representations / hierarchical method of displaying the process steps / workflow involved in the entire process / part of the process, using the symbols provided. Process map is of 4 types. What is the difference between Process Map & Flow chart? The process of creating a diagram is called process map whereas the diagram is called flow chart. 4 types of Process map: Ø high-level, Ø common, Ø detailed, and Ø functional High level Common process flow detailed process map Functional Defined as SIPOC / COPIS is a a tool used to explain the relationship between, supplier, customer, input, process and outputs. Called as a simple flow chart, which describes the process steps. It is typically a lean tool, which adds details to common process flow Breaks the steps into functional areas, It is a macro level process map at above 60000 feet high. E.g. SIPOC It is a first step in construsting a detailed process flow.It uses Boxes and connnecting arrows to brief the process steps It classifies the inputs, adds VA,NVA in the process steps defined. E.g. VSM It is frequently mapped against a time line Realistic behind the flows It is ahigh level flow which describes the relationship between Preocess input variables and output variables and people. It pretends to follow the actual logic behind the process. It resembles the computer programming / original form of process design. It will give you the detailed flow with flow of information/material/people with value added to it. This will help you identify the waste. This flows involves a detailed process flow with department/ function wise to identify and eliminate waste in the process. Difference between Process Maps and VSM SIPOC – It stands out for Supplier, inputs, processes, output and customer. 
It describes the relationship between the supplier and the customer: what the customer can expect as the product / service output, and which steps are followed to convert the inputs into that output.

Value Stream Map: a pictorial representation of the flow of material, people or information pertaining to a product / service. Data is associated with each step defining its value, e.g. takt time, processing time, volume processed, number of errors, etc.

Process Maps vs. Value Stream Map:
Ø A process map defines and classifies the process input and output variables; a VSM does not.
Ø A process map identifies waste at a macro level; in short, it helps to visualise the current process. A VSM identifies waste between and within processes along with the improvement areas; it adds value to the process flow and helps both visualise and improve the process.
Ø In a process map, boundaries are clearly defined as to what to map in the flow chart; a VSM focuses on a detailed view of how material / information flows.
Ø A process map gives a high-level visual representation; a VSM calculates variables such as takt time, throughput yield, and VA / NVA.
Ø A process map can be drafted quickly; a VSM is time consuming.
Ø Process maps are used in various methodologies like DMAIC / DMADV; the VSM is typically a lean tool.

Part 2: If you had to suggest a sequential series for process mapping in an organization with increasing level of detail, what would your suggestion be? If I had to sequence the process maps in my organization: SIPOC tells us the entire process in a very brief manner but is not useful for identifying wastes. It does not answer questions like "What are the wastes? Why is this step in the process important? Which step yields less? Which step has more rework? Which step consumes more time? Where is the delay? Where is a process re-engineering step required?", etc. All these questions are answered in the detailed process map with a built-in VSM.
I would not conclude that only a detailed process map with VSM is sufficient, since the benefits of SIPOC are different. In fact, many researchers say that a detailed process map by itself is enough to locate waste and improve, though it covers all that a VSM does. Due to time, cost and people constraints, the VSM is often done first to identify the waste, and then a detailed process map of that particular portion is made. Hence, I would use both a detailed process map and a VSM added to it; both state the current state / voice of the process, which helps to visualize and improve the process. My suggestion is as below.
1. SIPOC / COPIS
2. Basic flow chart with boundaries defined (if required at the macro level)
3. VSM
4. Detailed process map
Or:
1. SIPOC / COPIS
2. Basic flow chart with boundaries defined (if required at the macro level)
3. Detailed process map with VSM built in
Note: (A moderator can decide whether the detailed process map is to be displayed here or not.) Thanks Kavitha Process Flow - Version 2.xlsx
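To make the first step of that sequence concrete, a SIPOC can be captured as plain data before any diagramming tool is involved. This is a minimal sketch; the claims-coding process and every entry in it are hypothetical examples, not from the source:

```python
# A SIPOC table as plain data. The process and its entries are
# hypothetical examples used only to illustrate the structure.
sipoc = {
    "Supplier": ["Hospital billing office"],
    "Input":    ["Patient charts"],
    "Process":  ["Receive chart", "Assign codes", "Audit sample", "Submit claim"],
    "Output":   ["Coded claim"],
    "Customer": ["Insurance payer"],
}

# A SIPOC keeps the Process column deliberately brief (a handful of
# high-level steps); detail is added later in the detailed map / VSM.
for column in ("Supplier", "Input", "Process", "Output", "Customer"):
    print(f"{column}: {', '.join(sipoc[column])}")
```

Keeping the Process list short is the point: anything that needs boxes, branches, or VA / NVA tags belongs to the later, more detailed maps in the sequence.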
  7. Continuous Data, Attribute Data

Question 4 in Episode 2: While continuous data is measured and attribute data is counted, there is sometimes confusion about whether a specific dataset should be considered continuous or attribute. Provide some examples of confusing datasets and your inference.

Data is defined as a collection of values / useful information that the recipient requires for any analysis. Data is generally used to prove / disprove a hypothesis. In statistics, data is of two types: quantitative or qualitative. Qualitative data is descriptive and can be categorized into subgroups for analysis; quantitative data is numerical, meaning it is either measurable or countable. Quantitative data is again divided into two types: continuous and discrete.

For example: Charlie Chaplin is fair, short, has a small mustache, is of thin build and wears a black jacket; this is qualitative data. Charlie Chaplin has one hat, one walking stick and 2 legs; this is quantitative discrete data. Charlie Chaplin, aged 45 years, weighs 57.2 kg and is 4.8 feet tall; this is quantitative continuous data.

4 types of measurement scales: nominal, ordinal, interval and ratio.
Ø Nominal data: assigns a numerical value as a label to an object / animal / person / any other non-numerical data.
Ø Ordinal data: any data which can be ordered and ranked; the distances between ranks cannot be measured. E.g. 1. A horse is numbered on the race course, which represents nominal data. 2. The winning horses are ordered and ranked 1st, 2nd and 3rd, which represents ordinal data. Another good example is a student's progress report.
Ø Interval: a numeric scale where we know the order as well as the differences between values, but there is no true zero / origin. E.g. the temperature of a room is considered normal if it is between 25 and 28 degrees C. Time of day is another good example of an interval scale, in which the increments are known, consistent and measurable.
Ø Ratio: ratio scales tell us about the order and the exact value between units, AND they also have an absolute zero, which allows a wide range of both descriptive and inferential statistics to be applied. Everything said above about interval data applies to ratio scales, plus ratio scales have a clear definition of zero. Good examples of ratio variables are height and weight.

Qualitative data is otherwise called categorical data. Quantitative data is divided into two types: continuous and discrete.

Difference between continuous and discrete data:
Ø Continuous data is measurable on a scale; discrete data is countable.
Ø Continuous data falls within a finite or infinite range; discrete data takes only finite whole-number values.
Ø Continuous data can be broken into sub-units; discrete data cannot, since it consists of whole numbers.
Ø Continuous frequencies are depicted in a histogram, where skewness shows clearly; discrete data takes distinct values and is represented in a bar diagram, where skewness cannot be seen.
Ø Continuous values can take any value within a range; discrete values are individual values.
Ø Continuous examples: temperature of a person, height, weight, age, cycle time taken to complete a task. Discrete examples: number of computers, students, books, certificates, errors, etc.

Confusion between continuous and discrete data:

Eg. 1:

Person    Age  Weight (kg)  Height (ft)  Colour
Ajay      34   51           5.1          Wheatish
Sharma    35   65.5         5.2          Fair
Roshini   23   45.5         4.8          Wheatish
Gaithri   53   72.5         4.8          Dark
Linda     43   46.5         5.1          Fair
Tanya     36   43           5.3          Wheatish
Balu      27   56           5.6          Fair
Vignesh   32   77           6.1          Dark
Aarav     43   76           5.9          Wheatish
Rithesh   45   64           5.3          Dark

Qualitative / categorical data: categorizing the 10 people in the group into wheatish, dark and fair based on colour represents categorical data.
Continuous data: the age, height and weight of the people in the table above are good examples of continuous data; the values can fall anywhere within a range. Discrete data: number of wheatish people: 4; number of fair: 3; number of dark: 3; total number of people: 10.

Conclusion of Eg. 1: Age is a continuous numerical variable. Although the recorded ages are truncated to whole numbers, the underlying concept of age is continuous; the number of people of a given age, on the other hand, is a discrete numerical variable (a count). When age is rounded down to a whole number it looks discrete, but it is actually continuous, because each recorded value stands for a range: "12 years, 153 days" really means a continuous age between 12Y 152.5D and 12Y 153.5D. Age is not constant, though the date of birth is. Depending on the context (say, a form that requires the exact age), age must be treated as continuous even when reported in whole years.

Eg. 2: Income is another example of continuous data.

Eg. 3: In practice, percentage data are often treated as continuous, because a percentage can take on any value along the continuum from zero to 100%, and dividing a percentage point into two or more parts still makes sense. Whether a given percentage is continuous, however, depends on what it is built from. If I have to track the error percentage, the metric is:

Error % = No. of errors (discrete) / Total charts audited (discrete)

Hence error % is discrete. Another example: if I have to track the availability of a machine, the formula is:

Availability % = Total hours available (continuous) / Expected hours of production (continuous)

Hence availability % is continuous, since time is continuous.

Conclusion: It depends. In certain situations, discrete data may take on the characteristics of continuous data.
But if the counts are large, the distribution of values is relatively wide, and the values are spread across the range, you can "pretend" the data is continuous and use the appropriate tools. Thanks Kavitha
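The two percentage metrics discussed above can be sketched in a few lines. The function names and the sample figures are illustrative assumptions; the point is only that one ratio is built from counts and the other from measured time:

```python
# Sketch of the two percentage metrics discussed above.
# Error % is a ratio of counts (discrete numerator and denominator);
# availability % is a ratio of durations (continuous measurements).

def error_pct(errors: int, charts_audited: int) -> float:
    """Error % from counts; it can only take a finite set of values."""
    return 100.0 * errors / charts_audited

def availability_pct(hours_available: float, hours_expected: float) -> float:
    """Availability % from measured time; it can take any value in a range."""
    return 100.0 * hours_available / hours_expected

print(error_pct(3, 60))           # 5.0  (only certain values are reachable)
print(availability_pct(7.25, 8))  # 90.625 (any value in [0, 100] is possible)
```

With 60 charts audited, error % can only step in increments of 1/60, which is what makes it discrete; availability % inherits the continuity of time.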
8. Correction, Corrective Action, Preventive Action

Definitions:
Ø Correction: action taken to eliminate the identified defect.
Ø Corrective action: action performed to identify and eliminate the root cause, so that the defect does not recur.
Ø Preventive action: action taken to address potential / suspected causes of defects before they occur.

Waste classification (rework):
Ø Correction: rework is required.
Ø Corrective action: fix the bug to prevent its recurrence and avoid rework, investing time and human resources in the root cause.
Ø Preventive action: a proactive measure taken to avoid the potential causes of defects / rework.

Focus:
Ø Correction and corrective action focus on current issues / problems; preventive action is a proactive measure against future problems.

Situations where each is deployed:
Ø Correction: when the product is unused or the defect is identified before it reaches the customer.
Ø Corrective action: when the product does not meet customer expectations after it reaches the customer; this situation is to be considered serious.
Ø Preventive action: (1) potential defects / errors; (2) when we want to improve processes by creating a defect-free / zero-error environment (also called developmental action).

Frequency of the defect / problem:
Ø Correction: once in a while.
Ø Corrective action: very often, but after shipment.
Ø Preventive action: since it is a proactive measure, the defect is yet to occur.

Tools used:
Ø Corrective action: why-why analysis, fishbone analysis, process flows, Is / Is-not.
Ø Preventive action: FMEA, control-impact matrix.

Type of solution:
Ø Correction: immediate, short-term solution.
Ø Corrective and preventive action: long-term solutions.

Procedure:
Ø Correction: fix it immediately.
Ø Corrective action: (1) identify the defect / error; (2) fix the bug first; (3) analyse and identify the root cause; (4) find and implement the solution; (5) document, and create controls to sustain the improvements.
Ø Preventive action: (1) list all potential failures; (2) create controls for all failures identified; (3) implement solutions; (4) document; (5) create a control plan and, if required, re-engineer the process and document it.

Examples:
Example 1:
Chart quality with an error rate of 10%, whereas the acceptable level is 5%.
Ø Correction: the defect occurred once in a while; the chart was corrected during the audit and billed to the client.
Ø Corrective action: the root cause was identified: the coder was not aware of the concept, since it was a new update. Hence training was given to all coders to prevent recurrence.
Ø Preventive action: a rule-based computer-assisted coding tool is implemented along with the coding updates, so that errors are proactively caught during coding and rectified before the work goes out to the client.

Example 2: A government regulator shut down a restaurant for violating the rules and regulations of sanitary practice.
Ø Correction: immediate adoption of the prescribed policies.
Ø Corrective action: separate the work areas, establish acceptable processes, perform cleaning tasks against checklists, and display the process steps in the kitchen area so everyone follows the rules.
Ø Preventive action: automated machinery to perform the listed actions, plus regular training and frequent audits to confirm the rules are adhered to.

Conclusion: Are there situations where both preventive action and corrective action are undesirable and correction is the only preferred action? There are no situations where only correction is preferred and CAPA is not required. Any organisation that wants to delight its customers would opt for CAPA rather than correction alone; an immediate fix is not a permanent solution in a high-tech environment. Hence CAPA is required to delight the customer, and correction is required to satisfy the customer when a defect is identified. Thanks Kavitha
  9. Check sheet

A check sheet is a document / form used to collect data in real time, at the place where the data is generated. A check sheet can collect both quantitative and qualitative data. It is a structured, simple data collection method. A properly designed check sheet answers the 5 Ws: what, when, why, where and who. It is a very useful process improvement and problem-solving tool.

The 7 QC tools were designed by Ishikawa. When he had to improve his processes, he lectured his engineers on the statistical tools and techniques used for problem solving. He then realized that the statistics were too advanced to understand and implement, so he distilled his thinking into 7 basic quality tools that help a team to: identify the problem from the workflow using a process flow chart, collect relevant data using check sheets and stratification, categorize causes using a cause & effect (fishbone) diagram, analyse using a Pareto chart and a histogram to understand the magnitude of the problem, study relationships using a scatter diagram, and then implement a solution and create a control plan.

Check sheets are of 5 types: process check sheets, defect-by-location check sheets, defect check sheets, stratified defect check sheets, and cause & effect diagram check sheets.

Check sheet procedure:
1. Decide what to observe / record.
2. Decide how long the data is to be observed.
3. Design the form and label it accordingly.
4. Trial it for a short period to make sure the data is sufficient to perform the analysis.
5. Keep it as a reference for future data collection.

The defect-by-location check sheet is also called a defect concentration diagram, since it shows where defects occur. Example of a data collection check sheet:

Reason            10/4/2017  10/5/2017  10/6/2017  10/7/2017
Incorrect coding  III        III        II         I
Query error       IIII       IIII       III        II
Typo error        II         I          II         I

Conclusion: The check sheet is an essential part of quality education as one of the 7 QC tools.
Whether to modify the form or use a predefined one depends on the data collection process.
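The tally-style check sheet above is essentially a count of defect reasons over time, which is easy to sketch with `collections.Counter`. The observation list below is a hypothetical subset of records, not the full tallies from the table:

```python
from collections import Counter

# Sketch of a defect check sheet: each observation is recorded as it
# occurs (date, reason); in a full design the other Ws (who, where, why)
# would be extra fields on each record. Records here are hypothetical.
observations = [
    ("10/4/2017", "Incorrect coding"), ("10/4/2017", "Query error"),
    ("10/5/2017", "Typo error"),       ("10/5/2017", "Query error"),
    ("10/6/2017", "Incorrect coding"), ("10/7/2017", "Query error"),
]

tally = Counter(reason for _, reason in observations)
for reason, count in tally.most_common():
    print(f"{reason}: {count}")
```

`most_common()` already orders the reasons by frequency, which is exactly the input a Pareto chart (the next of the 7 QC tools in the sequence above) needs.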
  10. Kanban / Pull System

Episode 2, Question 1 - While Pull based flow is considered better than Push based flow in many ways in general, it is not always the case that a pull system can be implemented. Please mention in your own words how and why the pull system is not practically better than the push system in certain situations.

Push and Pull are the two promotional strategies applied to get a product to the target market. People often juxtapose these two strategies, but they differ in the way consumers are approached. The terms derive from logistics and supply chain management, but their use in marketing is no less common. The movement of a product or information is the essence of push and pull strategy. In a push strategy, the idea is to push the company's product onto customers by making them aware of it at the point of purchase; a pull strategy relies on the notion "get the customers to come to you".

Push strategy vs. pull strategy:
Ø Also called: push is an outbound strategy, since the focus is outward, reaching the customer; pull is an inbound strategy, since the focus is inward, building brand reputation, etc. to pull customers in.
Ø Meaning: a push strategy directs marketing efforts to channel partners, targeting a specific audience and moving products / services and information through intermediaries to the end customer. A pull strategy promotes marketing efforts to the final consumer; customers / buyers do a lot of research, often online, and approach the company / intermediaries for the product.
Ø What is it? Push: devising ways to place the product with the customer through advertisement. Pull: creating awareness, often on websites, so customers become curious and do research; this also creates brand visibility.
Ø Means of communication: push: email / print / broadcast advertisements. Pull: customers research the product themselves; word of mouth, etc.
Ø Objective: push: make the customer aware of the product or brand. Pull: encourage the customer to seek out the product or brand.
Ø Demand creation: push: no demand is created; products are stocked for customers to become aware of and buy. Pull: demand is created by the shopper / buyer / end customer.
Ø Channels: push: direct postcards / pamphlets / email offers, sales force, trade promotion and incentives that attract customers. Pull: social networking, blogging, word of mouth, strategic product placement, media coverage, search engines (customers research, then buy online).
Ø Emphasis on: push: resource allocation. Pull: responsiveness.
Ø Suitability: push: when brand loyalty is low. Pull: when brand loyalty is high.
Ø Lead time: push: long. Pull: short.
Ø Cost effectiveness: push: expensive, since it has to repeatedly interrupt customers to promote the product. Pull: cost efficient; SEO pulls in the end customers, so the budget is focused there.
Ø Search for the next sale: push: yes. Pull: no.
Ø ROI: push: weaker, lasting only as long as the marketing is live. Pull: stronger and longer lasting.

Example 1 - Direct Response Print and SEO: You run direct response print offering a free trial for your product, and the prospect either visits a landing page or calls to place an order (push). You focus on search engine optimization and use keywords on your site that are relevant to your product or service; a shopper finds your site online, calls you and places an order (pull).

Example 2 - Direct Mail and Social: You mail out a coupon offering a 20% discount with a limited-time offer. The customer goes online to purchase and uses the offer code they find on the postcard, or they call you to purchase.
Example 3 – Benchmark Excellence Ambassadors Glossary: Benchmark uses intermediaries to build the glossary, creating awareness and learning for end customers (push). Benchmark also uses broadcast messages and online content to pull customers by building reputation and brand visibility on the website (pull).

Why You Need Both: Successful marketers rely on the strength of each approach and often use them together. You need push to reach those who might not have heard of your service or company; a push approach is also needed for communicating with your qualified leads, lapsed customers and existing customers to increase sales. You need pull to attract those in the research or buying stage who are searching for your product or service, and to promote your business as a thought leader.

Conclusion: Top multinational companies like Amazon, Forbes, Coca-Cola, Intel, Nike and many others employ both push and pull strategies effectively. When a push strategy is implemented alongside a well-designed and executed pull strategy, the result is phenomenal, as it generates consumer demand. Moral of the story: if the push strategy is the hare, then pull is the tortoise. Thanks Kavitha
11. The First Jidoka The automatic loom, invented in 1902 by Sakichi Toyoda, the founder of Toyota, can be considered the first example of Jidoka. In this innovation, if a thread ran out or broke, the loom stopped automatically and immediately. In the early days of assembly line mass production, work cycles were watched over by human operators. As competition increased, Toyota brought about a significant change by automating machine cycles so that human operators were free to perform other tasks. The Toyota Production System has many tools for delivering products and services efficiently. Developed over the years, these tools aim at reducing human effort and automating machines to increase productivity. Jidoka is one such tool without which efficient manufacturing would be practically impossible today. The article below explains the Jidoka process. The Concept of Autonomation To begin with, understand that autonomation and automation are different from each other. By definition, autonomation is a 'self-working' or 'self-controlled' process; it is the feature that makes the Jidoka process possible. Automation is a process where the work is still being watched by an operator, errors may still slip through, and detection and correction take longer. Autonomation resolves two main points: firstly, it reduces human intervention, and secondly, it prevents processes from producing errors. These are listed below. PRODUCT DEFECT Ordinarily, when a defect occurs, a worker detects it and later reports the problem. Autonomation enables the machine to stop the cycle when a defective piece is encountered. PROCESS MALFUNCTION If all the processed parts or components are not picked up at the end of the cycle, the machine might face problems and the process might halt, and it would take a while before the worker realizes that the process has been interrupted because of a minor error.
With autonomation, if the previous piece has not been picked up during ejection, the machine gives a signal or stops the cycle altogether. An Introduction to Jidoka The Evolution towards Jidoka Jidoka can be simply defined as 'humanized automation'; autonomation is another term for Jidoka, used in different contexts. It is mainly used to detect defects and immediately stop the production or manufacturing process, and then to fix the defect and find solutions so that the defect or error does not occur again. The concept, as mentioned before, was invented by Sakichi Toyoda. Its purpose is to reduce reliance on human error-prone judgment through automatic error detection and correction. It was developed to eradicate the wastage of time due to human observation of the process, transportation, inventory, correction of defects, etc. With Jidoka, production lines have become significantly more efficient, and the wastage of goods and inventory has been reduced too. Other Toyota Tools and Terms Keep in mind that Andon, Poka-yoke, Just-in-Time, etc. are all tools developed by Toyota. Jidoka is one of these tools, and it encompasses some of the others as well, like Andon and Poka-yoke. Jidoka was developed to minimize errors that could arise from relying on human observation. Remember that Andon is not an example of Jidoka, but an important tool within it. It displays the current state of work: whether the process is running smoothly, whether there is a malfunction, whether there are product glitches, etc. The relation between Andon and Jidoka is explained further in the article. Like Jidoka, Just-in-Time is another important tool, and one of the crucial pillars of TPS. It adheres to what product is required, when it is required, and how much is required. 'Takt time' is an important principle here: it refers to the pace at which each unit must be produced to match customer demand. Line Stop Line stop is a term that applies to the Jidoka process in automotive manufacturing plants.
It is called so because it interrupts and halts the entire line (process) when a defect is found. The Elements of Jidoka GENCHI GENBUTSU This is one of the important elements of Jidoka. The basic principle of Genchi Genbutsu is to actually go and see the problem. It entails going to the root source of the problem, an important step in the Jidoka process: finding out why the defect occurred in the first place. ANDON As stated in the previous section, Andon is a visual representation of the current process. It indicates whether the process is running as per norms or whether there is a potential flaw, and gives out electronic signals according to the condition. If the signal is negative, workers understand that there is a problem in the process. The machine stops immediately, and the workers can hold production until the flaw in the process is fixed. STANDARDIZATION The main aim of Jidoka is to increase production quality, and this is what standardization deals with. It involves developing strategies that adhere to perfection and quality. When a flaw is discovered, it is not only fixed; efforts are also undertaken to ensure it does not occur again, and the quality and standard of the product are maximized. POKA-YOKE Also called mistake-proofing or error-proofing; poka-yoke devices are designed to prevent mistakes that could occur during production. The Principles The Jidoka Process As seen in the first figure above, without Jidoka, the defective piece continues to be produced and ejected. Only after ejection may the worker realize that the product is defective and stop the process. In the second figure, with Jidoka, the Andon light glows brightly, indicating that the product is defective; the process is halted immediately and the necessary steps are taken. DETECT This involves detecting the problem. The machine is fitted with the right components so that the abnormality is immediately identified.
For this step, machines may be fitted with sensors, electrical cords, push buttons, or electronic devices, or fed with proper instructions to identify whether a product is defective. STOP Once a defect has been spotted, the machine stops immediately. The machine is designed to stop on its own; no staff or worker needs to physically stop it. The detection of the defect is indicated through signals, after which the staff can rush to the site to find out why the process has been halted. FIX When the machine stops, the production line needs to be stopped as well. You might wonder why the entire line must be halted for one or more defective pieces. This is done because other defective parts or components are likely to have been manufactured along with the one detected. To avoid this over-production and wastage of material and equipment, the production line is halted. After this, steps are taken to fix the problem; sometimes it is a minor glitch, while at other times there may be a major problem. Once the error is fixed, production resumes. INVESTIGATE The last and rather vital step of Jidoka is to investigate the source of the problem. You have to find answers to questions such as: 'Why did the defect occur?', 'What kind of defect is it?', 'How can it be fixed?', 'What can be done to prevent it?', and so on. Root-cause analysis tools are widely used to get to the bottom of the problem. Through this process, efforts are made to find the best solution for the defect and to prevent it from occurring in the first place. As more investigation and research are carried out, better methods of manufacturing are discovered, better problem-solving techniques are invented, and product quality increases. Examples Jidoka is mainly used in the manufacturing and automotive industries; however, it can be demonstrated in simple products used in daily life as well.
For example, if your kitchen cabinet is fitted with a dustbin, you will notice that when you open the door of the cabinet, the lid of the dustbin is automatically lifted; a string lifts the lid the moment the door is opened. Consider a printing press: if a sheet is missing in the machine, a sheet detector raises the print cylinder. This is Jidoka at work. In the manufacturing industry, a sensor is used to check whether components are in alignment; even if a small part is out of alignment, the machine is stopped. Some high-quality machines use a recall procedure. Sometimes, despite the best countermeasures, some products in the production line may slip through the machine cycle undetected; the recall procedure checks every single product once again before the final output is ejected. Light curtains are used in automatic feed machines; they have a presence sensor that stops the machine if a component is broken or defective. Benefits of Jidoka It helps detect the problem as soon as possible. It increases the quality of the product through proper enhancement and standardization. It integrates machine power with human intelligence to produce error-free goods. It helps in the proper utilization of labor: since the process is automated, workers can spend their time on more value-added work. There is less scope for errors in production, which substantially increases productivity and lowers costs. Improved customer satisfaction is an important advantage as well: good products are manufactured in less time. Jidoka is one of the strong pillars of TPS (Toyota Production System). It helps prevent defects in the manufacturing process, identifies defect areas, and devises solutions to ensure that the problem is corrected and the same defect does not occur again. Jidoka builds in 'quality' and has significantly improved the manufacturing process.
Difference between Autonomation and Automation (in summary):
Ø If a malfunction occurs: with Jidoka, the machine detects the malfunction and stops itself; with plain automation, the machine continues operating until someone turns off a switch.
Ø Production of defects: with Jidoka, no defective parts are produced; with automation, detection of defects is delayed if they occur.
Ø Breakdown of machines: with Jidoka, breakdowns of machines, molds and/or jigs can be prevented; with automation, such breakdowns may result.
Ø Malfunction detection: with Jidoka, it is easy to locate the cause of any malfunction and implement measures to prevent recurrence; with automation, it is difficult to locate the cause at an early stage and difficult to implement preventive measures.
Thanks, Kavitha
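The detect-and-stop behaviour described above can be sketched in a few lines. This is a toy illustration of the pattern only; the part names and the defect check are hypothetical:

```python
# Minimal sketch of the Jidoka detect-and-stop pattern: the "machine"
# halts itself on the first defective piece instead of producing and
# ejecting defects until an operator notices. Parts and the defect
# check below are hypothetical.

def run_line(parts, is_defective):
    """Process parts in order; stop immediately when a defect is detected."""
    produced = []
    for part in parts:
        if is_defective(part):
            print(f"ANDON: defect detected in {part!r}, line stopped")
            break                 # stop: no further parts are processed
        produced.append(part)     # only good parts reach the output
    return produced

good = run_line(["p1", "p2", "bad", "p4"], lambda p: p == "bad")
print(good)   # ['p1', 'p2']: 'p4' is never processed
```

Contrast this with plain automation, which would keep appending parts (including defective ones) until someone external intervened; the early `break` is the autonomation.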
  12. Pacemaker Process

Definition: A device or technique used to set the pace of production and maintain takt time is called a pacemaker; a technique for pacing a process to takt time is called the pacemaker process. Takt time is the maximum amount of time in which a product needs to be produced in order to satisfy customer demand. The term comes from the German word "Takt", which means "pulse" or "beat". Importance of the Pacemaker Process: The "pacemaker process" is a series of production steps, frequently at the downstream (customer) end of the value stream in a facility, that are dedicated to a particular product family and respond to orders from external customers. The pacemaker is the most important process in a facility, because how you operate here determines how well you can serve the customer and what the demand pattern looks like for upstream fabrication processes. Overview An unlinked production environment is like an accordion: some processes move faster than average and some operate more slowly. As a result, parts move through the system at varying speeds, only to end up in piles of inventory scattered along the value stream. Even with a takt time in place, there can still be fluctuation in the actual performance of processes if they are not somehow linked together. This fluctuation gets even more complicated when scheduling is done at multiple places in a value stream. For this reason, a pacemaker is often established. A pacemaker is the single point where a production process is scheduled; the upstream processes do not produce without a pull signal originating from the pacemaker. Discussion: The pacemaker simplifies production oversight. Having only one scheduling point greatly reduces the need for coordination, and the benefit is amplified when there is mixed-model production in a value stream. Actual demand determines the mix, and the pull signals generated by the pacemaker ensure that only the types of products that are needed are produced.
The Pacemaker Process: The production schedule is planned according to takt time and sent to the pacemaker process, which pulls from the upstream processes. Upstream processes produce ONLY when the pacemaker sends its signal. If there are multiple products, supermarkets are used. Continuous flow is used downstream from the pacemaker to manage production.

There are several things to consider when selecting a pacemaker:
- The pacemaker should be reliable. If it is frequently down for maintenance, it wreaks havoc on the rest of the value stream.
- It should have minimal setup times, to prevent surges.
- The closer it is to the end of production, the more tightly it is linked to the customer. The downside is that it might drive more inventory into the supermarkets of upstream processes.
- Branches in production processes need to be upstream of the pacemaker, or have a supermarket.
- Don't let a pacemaker process override the takt time; you don't want wild fluctuations in production rates.

When a process is well refined and tightly linked, the pacemaker becomes more of a scheduling point than a way to manage the actual pace. Consider an assembly line: the shifts occur at prescribed intervals, or the line moves at a constant pace, and the pacemaker simply determines the sequence of production.

Send the customer schedule to only one production process. By using supermarket pull systems, you will typically need to schedule only one point in your door-to-door value stream. This point is called the pacemaker process, because how you control production at this process sets the pace for all the upstream processes.
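The single-scheduling-point idea can be sketched as a toy simulation, under the assumption of one-for-one kanban replenishment (all names here are illustrative): only the pacemaker sees the customer schedule, and upstream produces only in response to its pull signals.

```python
from collections import deque

def run_value_stream(customer_orders):
    """Toy pull system: upstream produces only on pacemaker pull signals."""
    pull_signals = deque()      # kanbans travelling upstream
    produced_upstream = []

    for order in customer_orders:
        # The pacemaker is scheduled directly from customer demand...
        pull_signals.append(order)
        # ...and upstream reacts only to the pacemaker's pull signals.
        while pull_signals:
            item = pull_signals.popleft()
            produced_upstream.append(item)  # one-for-one replenishment

    return produced_upstream

# Upstream output exactly matches demand: no overproduction, and no
# schedule is pushed to multiple points in the value stream.
print(run_value_stream(["A", "B", "A"]))  # -> ['A', 'B', 'A']
```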
For example, fluctuations in production volume at the pacemaker process affect capacity requirements in upstream processes. Your selection of this scheduling point also determines which elements of your value stream become part of the lead time from customer order to finished goods.

Note: material transfer from the pacemaker process downstream to finished goods needs to occur as a flow (no supermarkets or pulls downstream of the pacemaker process). Because of this, the pacemaker process is frequently the most downstream continuous-flow process in the door-to-door value stream. On the future-state map, the pacemaker is the production process that is controlled by the outside customer's orders.

Thanks,
Kavitha
  13. Hypothesis Testing

    Answer 2 from Excellence Club 66:

Hypothesis Testing

In a process, we may face a problem with centering and/or a problem with spread. These are the practical Six Sigma problems that require hypothesis testing. Hypothesis testing tells us whether there is a statistically significant difference between data sets, i.e. whether we should consider that they represent different distributions.

What differences can be detected using hypothesis testing? For continuous data, hypothesis testing can detect a difference in averages and a difference in variances. For discrete data, it can detect a difference in proportion defective.

Steps in Hypothesis Testing:
Step 1: Determine the appropriate hypothesis test.
Step 2: State the null hypothesis Ho and the alternate hypothesis Ha.
Step 3: Calculate the test statistic / p-value and compare it against the table value of the test statistic.
Step 4: Interpret the results: accept or reject Ho.

Mechanism:
Ho = Null hypothesis: there is NO statistically significant difference between the two groups.
Ha = Alternate hypothesis: there IS a statistically significant difference between the two groups.

Hypothesis Testing Errors:
Type I error: P(reject Ho when Ho is true) = α. In a Type I error, we reject the null hypothesis when it is actually true. It is also called alpha error or producer's risk.
Type II error: P(accept Ho when Ho is false) = β. In a Type II error, we accept the null hypothesis when it is actually false. It is also called beta error or consumer's risk.

P-value: also known as the probability value, it is a statistical measure that indicates the probability of making an α error. The value ranges between 0 and 1. We normally work with a 5% alpha risk: a p-value lower than 0.05 means that we reject the null hypothesis and accept the alternate hypothesis.
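The four steps above can be walked through with a one-sample z-test. This is a hedged sketch with made-up numbers, and it assumes the population standard deviation is known (sigma = 5) and a historical mean of mu0 = 50:

```python
import math

def one_sample_z_test(sample, mu0, sigma):
    """Two-sided one-sample z-test (population sigma assumed known)."""
    n = len(sample)
    xbar = sum(sample) / n
    # Step 3: test statistic
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    # Two-sided p-value from the standard normal CDF
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    p_value = 2 * (1 - phi)
    return z, p_value

# Step 1: continuous data, one group vs. a known mean -> one-sample z-test
# Step 2: Ho: mu = 50, Ha: mu != 50  (illustrative data below)
sample = [52, 55, 49, 57, 54, 53, 56, 51, 58, 55]
z, p = one_sample_z_test(sample, mu0=50, sigma=5)

# Step 4: compare the p-value with alpha = 0.05
alpha = 0.05
print(f"z = {z:.2f}, p = {p:.4f}")
if p < alpha:
    print("Reject Ho: statistically significant difference")
else:
    print("Fail to reject Ho: no significant difference detected")
```

Here the sample mean is 54, giving z ≈ 2.53 and p ≈ 0.011, so at a 5% alpha risk we reject Ho.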
Types of Hypothesis Testing: We use the following grid to select the appropriate hypothesis test, depending on the data types:
- Normal continuous Y and discrete X
- Non-normal continuous Y and discrete X
- Continuous Y and continuous X
- Discrete Y and discrete X

How important is hypothesis testing? The purpose of hypothesis testing is to make a decision in the face of uncertainty. We do not have a fool-proof method for doing this: errors can be made.

In which phases of an improvement project is it likely to be used? It is used in the Analyze phase of the DMAIC improvement cycle: once the critical X's are identified, hypothesis testing is used to measure the effect of each X on Y.
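The selection grid above can be expressed as a simple lookup. This is a simplified sketch: the test names shown are the usual choices for each cell, but real selection also depends on the number of groups, paired vs. unpaired samples, and so on.

```python
# Rough lookup of the data-type grid: (Y type, X type) -> test family.
TEST_GRID = {
    ("continuous-normal", "discrete"):    "t-test / ANOVA (compare means)",
    ("continuous-nonnormal", "discrete"): "Mann-Whitney / Kruskal-Wallis (compare medians)",
    ("continuous", "continuous"):         "regression / correlation",
    ("discrete", "discrete"):             "chi-square test of proportions",
}

def pick_test(y_type: str, x_type: str) -> str:
    return TEST_GRID.get((y_type, x_type), "no standard test in this grid")

print(pick_test("continuous-normal", "discrete"))  # -> t-test / ANOVA (compare means)
print(pick_test("discrete", "discrete"))           # -> chi-square test of proportions
```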
  14. Hypothesis Testing

    A hypothesis is an educated guess about something in the world around you. It should be testable, either by experiment or observation. For example:
- A new medicine you think might work.
- A way of teaching you think might be better.
- A possible location of a new species.
- A fairer way to administer standardized tests.
It can really be anything at all, as long as you can put it to the test.

What is a Hypothesis Statement? If you are going to propose a hypothesis, it's customary to write a statement. Your statement will look like this: "If I ... (do this to an independent variable) ... then ... (this will happen to the dependent variable)." For example:
- If I (decrease the amount of water given to herbs) then (the herbs will increase in size).
- If I (give patients counseling in addition to medication) then (their overall depression scale will decrease).
- If I (give exams at noon instead of 7) then (student test scores will improve).
- If I (look in this certain location) then (I am more likely to find new species).

A good hypothesis statement should:
- Include an "if" and "then" statement (according to the University of California).
- Include both the independent and dependent variables.
- Be testable by experiment, survey or other scientifically sound technique.
- Be based on information in prior research (either yours or someone else's).
- Have design criteria (for engineering or programming projects).

What is Hypothesis Testing? Hypothesis testing in statistics is a way for you to test the results of a survey or experiment to see if you have meaningful results. You're basically testing whether your results are valid by figuring out the odds that they happened by chance. If your results may have happened by chance, the experiment won't be repeatable and so has little use. Hypothesis testing can be one of the most confusing aspects for students, mostly because before you can even perform a test, you have to know what your null hypothesis is.
Often, those tricky word problems that you are faced with can be difficult to decipher. But it's easier than you think; all you need to do is:
1. Figure out your null hypothesis,
2. State your null hypothesis,
3. Choose what kind of test you need to perform,
4. Either accept or reject the null hypothesis.

What is the Null Hypothesis? If you trace back the history of science, the null hypothesis is always the accepted fact. Simple examples of null hypotheses that are generally accepted as being true are:
- DNA is shaped like a double helix.
- There are 8 planets in the solar system (excluding Pluto).
- Taking Vioxx can increase your risk of heart problems (a drug now taken off the market).

How do I State the Null Hypothesis? You won't be required to actually perform a real experiment or survey in elementary statistics (or even disprove a fact like "Pluto is a planet"!), so you'll be given word problems from real-life situations. You'll need to figure out what your hypothesis is from the problem, which can be a little trickier than just figuring out what the accepted fact is. With word problems, you are looking for a fact that is nullifiable (i.e. something you can reject).

Hypothesis Testing Example #1 (basic example): A researcher thinks that if knee surgery patients go to physical therapy twice a week (instead of 3 times), their recovery period will be longer. The average recovery time for knee surgery patients is 8.2 weeks. The hypothesis statement in this question is that the researcher believes the average recovery time is more than 8.2 weeks. It can be written in mathematical terms as:

H1: μ > 8.2

Next, you'll need to state the null hypothesis; that's what will happen if the researcher is wrong. In the above example, if the researcher is wrong, then the recovery time is less than or equal to 8.2 weeks. In math, that's:

H0: μ ≤ 8.2

Rejecting the null hypothesis: Ten or so years ago, we believed that there were 9 planets in the solar system.
Pluto was demoted as a planet in 2006. The null hypothesis of "Pluto is a planet" was replaced by "Pluto is not a planet." Of course, rejecting the null hypothesis isn't always that easy; the hard part is usually figuring out what your null hypothesis is in the first place.

Hypothesis testing categorization: Hypothesis tests are categorized as parametric tests and nonparametric tests. Parametric tests include the z-test, t-test, F-test and χ² (chi-square) test. Nonparametric tests include the sign test, Wilcoxon rank-sum test, Kruskal-Wallis test and permutation test.

Parametric test: samples are taken from a population with a known distribution (a normal distribution), and a test of population parameters is executed.

Nonparametric test: also called a distribution-free test, it does not require the population to conform to a normal distribution, nor do the population parameters need to be statistically estimated.

How important is hypothesis testing? People often refer to a trial solution to a problem as a hypothesis, called an "educated guess" because it provides a suggested solution based on the evidence; however, some scientists reject the term "educated guess" as incorrect. Experimenters may test and reject several hypotheses before solving the problem. Much of running a small business is a gamble, buoyed by boldness, intuition, and guts, but wise business leaders also conduct formal and informal research to inform their business decisions. "Good research starts with a good hypothesis", which is simply a statement making a prediction based on a set of observations. The purpose of hypothesis testing is to make a decision in the face of uncertainty. We do not have a fool-proof method for doing this: errors can be made.

In which phases of an improvement project is it likely to be used? It is used in the Analyze phase of the DMAIC improvement cycle: once the critical X's are identified, hypothesis testing is used to measure the effect of each X on Y.
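The "errors can be made" point can be demonstrated with a short Monte Carlo sketch: when the null hypothesis is true by construction, a test run at alpha = 0.05 should wrongly reject Ho (a Type I error) in roughly 5% of trials. The distribution parameters below are arbitrary illustration values.

```python
import math
import random

# Monte Carlo check of the Type I error rate: data is generated from the
# SAME distribution that Ho asserts, so every rejection is a false alarm.

random.seed(42)

def z_test_p_value(sample, mu0, sigma):
    """Two-sided p-value of a one-sample z-test (sigma known)."""
    n = len(sample)
    xbar = sum(sample) / n
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

alpha = 0.05
trials = 2000
false_rejections = 0
for _ in range(trials):
    # Ho is true by construction: data really has mean 100, sd 10
    sample = [random.gauss(100, 10) for _ in range(30)]
    if z_test_p_value(sample, mu0=100, sigma=10) < alpha:
        false_rejections += 1

# The observed rate should come out close to alpha, i.e. around 0.05.
print(f"Observed Type I error rate: {false_rejections / trials:.3f}")
```

This is exactly the α = P(reject Ho when Ho is true) relationship from the error definitions earlier: the alpha risk we choose is the false-alarm rate we accept.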
  15. False Alert, Missed Alarm

    False Alarm Vs. Missed alert.docx