
All Activity


  1. Yesterday
  2. Qualitative Analysis usually takes a back seat while we work with numbers; however, it is equally important. To be honest, I was not expecting such wonderful answers to this question (and I strongly recommend reading all the answers). It was a difficult choice to make, but a choice had to be made. The answer from Rahul Arora has been selected as the winner for pointing out how Thematic Analysis can be utilized to understand the VOC and identify a potential project area.
  3. Q 482. Explain the KT Analysis method of problem solving using an example. Does it provide any advantage over other problem solving approaches? Note for website visitors - Two questions are asked every week on this platform. One on Tuesday and the other on Friday. All questions so far can be seen here - https://www.benchmarksixsigma.com/forum/lean-six-sigma-business-excellence-questions/ Please visit the forum home page at https://www.benchmarksixsigma.com/forum/ to respond to the latest question open till the next Tuesday/ Friday evening 5 PM as per Indian Standard Time. Questions launched on Tuesdays are open till Friday and questions launched on Friday are open till Tuesday. When you respond to this question, your answer will not be visible till it is reviewed. Only non-plagiarised (plagiarism below 5-10%) responses will be approved. If you have doubts about plagiarism, please check your answer with a plagiarism checker tool like https://smallseotools.com/plagiarism-checker/ before submitting. The best answer is always shown at the top among responses and the author finds honorable mention in our Business Excellence dictionary at https://www.benchmarksixsigma.com/forum/business-excellence-dictionary-glossary/ along with the related term
  4. What is Thematic Analysis? Thematic analysis is commonly described as the study of patterns of meaning. In other words, it is used to analyse the themes within a data set in order to identify meaning. It is one of the most common forms of analysis within qualitative research - identifying, analysing and interpreting patterns of meaning, or themes, within qualitative data. What is Qualitative Data? It is data in which participants write descriptively, and it can be obtained from questionnaires, interviews, focus groups, case studies, social media profiles, survey responses, etc. - such data is generally non-numerical. Qualitative research methods have been used in various areas such as sociology, political science, psychology and educational research. Some samples of qualitative data: The hair was smooth and silky. The girls have brown, black, blonde, and red hair. The room was very airy and bright with white curtains. Different Approaches · Inductive Approach: No preconceptions about themes; instead, we generate themes from the data. · Deductive Approach: We already have a set of themes that we expect to find in the data. · Semantic Approach: We work with the explicit content of the data, without interpreting its subjective meaning. · Latent Approach: We dive into the data to understand its underlying meaning. Steps To Do Thematic Analysis This is an iterative process which helps to go from messy data to the most important themes in the data.
There are six commonly used steps, developed by Braun and Clarke, which can be followed - · Data familiarization: reading, re-reading, taking notes · Initial code generation: coding interesting features across the entire data set · Searching for themes: collecting codes related to potential themes · Reviewing themes: reviewing and comparing themes against the data set · Defining themes: generating clear definitions and names for each theme · Writing up: the final analysis of selected extracts and producing a report of the analysis Advantages and Disadvantages Advantages - - It has a lot of flexibility in interpreting the data - It helps to approach large data sets easily by sorting them into broad themes Disadvantages - - It involves the risk of missing nuances in the data - It is often quite subjective - You need to reflect carefully on your interpretations
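The coding and theme-searching steps above can be sketched in a few lines of code. The excerpts, codes, and theme names below are invented purely for illustration and are not taken from any real data set.

```python
# A minimal sketch of steps 2-4 (coding, then collecting codes into themes).
from collections import defaultdict

# Step 2: excerpts labelled with shorthand codes (illustrative only)
coded_excerpts = [
    ("The new portal keeps changing every week", "rapid change"),
    ("I can never find the report I need", "navigation trouble"),
    ("Menus moved again after the last update", "rapid change"),
    ("Search rarely returns what I want", "navigation trouble"),
]

# Step 3: map each code to a candidate theme
code_to_theme = {
    "rapid change": "instability",
    "navigation trouble": "poor findability",
}

# Collect the coded excerpts under their themes
themes = defaultdict(list)
for excerpt, code in coded_excerpts:
    themes[code_to_theme[code]].append(excerpt)
```

After this grouping, each theme can be reviewed against the original data set (step 4) before defining, naming, and writing up.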
  5. In qualitative research, we can use thematic analysis to determine something about people's views, opinions, knowledge, experiences, or values from a set of qualitative data, such as interview transcripts, social media profiles, or survey responses. We can use thematic analysis to answer the following types of research questions: In a hospital setting, how do patients perceive doctors? In terms of climate change, what do non-experts think? What is the role of gender in high school history? The six steps developed by Braun and Clarke can help us decide whether thematic analysis is right for our purposes and how to analyse our data. Step 1: Familiarization Familiarizing ourselves with our data is the first step. Getting an overview of all the data we collected is essential before we analyse individual items. Step 2: Coding Once we are familiar with the data, it needs to be coded. Coding is, in essence, the process of highlighting sections of a text, usually a phrase or a sentence, and creating shorthand labels for them. We can quickly gain an overview of the main points and common meanings that recur across the data by using these codes. Step 3: Generating themes Next, we review the codes we've created, identify patterns among them, and begin generating themes. Themes are more general than codes; in most cases, we'll combine several codes into one theme. Again, what we decide will depend on what we are trying to discover. We are looking for themes that tell us something useful about the data for our purposes. Step 4: Reviewing themes We need to make sure our themes are useful and accurate representations of the data. In this step, we compare our themes with the actual data set. For example, we might decide upon looking through the data that “changing terminology” fits better under the “uncertainty” theme than under “distrust of experts,” since the data labelled with this code involves confusion.
Step 5: Defining and naming themes Now that we have the final list of themes, it's time to label and describe each of them. Defining themes involves formulating precisely what we mean by each theme and figuring out how it helps us understand the data. Naming themes involves coming up with a succinct and easily understandable name for each theme. Step 6: Writing up Lastly, we write up our analysis of the data. A thematic analysis needs to begin with an introduction that establishes our research question, aims, and approach. We should also include a methodology section, describing how we collected the data and explaining how we conducted the analysis itself. The results or findings usually address each theme in turn. We describe how often the themes come up and what they mean, including examples from the data as evidence. Finally, our conclusion explains the main takeaways and shows how the analysis has answered our research question.
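The reviewing step described above, where the code "changing terminology" is moved from "distrust of experts" to "uncertainty", can be sketched as a simple reassignment. The other code names shown are invented placeholders.

```python
# A small sketch of Step 4 (reviewing themes): reassigning a code to a
# better-fitting theme after re-reading the data. Only "changing
# terminology", "uncertainty" and "distrust of experts" come from the
# example above; the other codes are illustrative assumptions.
themes = {
    "uncertainty": ["don't know", "unsure about future"],
    "distrust of experts": ["changing terminology", "conflicting advice"],
}

def move_code(themes, code, src, dst):
    """Reassign a code from one theme to another during review."""
    themes[src].remove(code)
    themes[dst].append(code)

move_code(themes, "changing terminology", "distrust of experts", "uncertainty")
```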
  6. Thematic analysis is a method used to identify patterns or clusters of related data. This method is used to analyse qualitative/non-numeric data. This type of analysis is predominantly performed on data collected through surveys. Most organisations roll out employee surveys to understand how satisfied and motivated their employees are. In this example, org ABC rolled out a survey and the results were not good. They decided to initiate a Six Sigma project to improve the ESAT score by X%. As the data collected is from survey results and is qualitative in nature, they use thematic analysis to identify patterns or clusters of related data, themes emerging from the respondents' concerns, employees' perceived problems, etc.
  7. Last week
  8. What is Thematic Analysis? Thematic analysis is a study of patterns, a methodology used to analyse qualitative data (i.e. non-numerical data such as text, audio, or video) to understand opinions, experiences, or concepts. This analysis is used to gather in-depth insights into a problem or to generate new ideas for research. There are 4 approaches/ways to do this analysis: 1. Inductive approach – This approach derives meaning and creates themes from the data without any preconceptions. (We do the analysis without any idea of what themes will emerge; hence the themes are determined by the data.) 2. Deductive approach – In this approach, we start the analysis with a set of themes that we already expect to find in the data. (We do this analysis after getting knowledge from research or existing theory about the data.) 3. Semantic approach – In this approach, we ignore the underlying meaning of the data and identify themes based on what is openly stated or written. (This approach is taken when investigating opinions and viewpoints, as these tend to be stated explicitly.) 4. Latent approach – This approach focuses on underlying meanings and looks at the reasons behind the semantic content. It involves an element of interpretation, where data is not just taken at face value but its meanings are also theorized. Note – I personally prefer the Latent approach, though we have the option of choosing any of these four as per the analysis requirement. Application: This analysis is useful for interviews, transcripts, or psychological research, to examine the data and identify the patterns of meaning that come up repeatedly. How to do this analysis: There are different approaches to conducting thematic analysis; the most familiar is the six-step process. Step 1 - Familiarization: In this step the analyser makes himself familiar with the data that needs to be analysed.
This may include reading and re-reading the whole data set, thus getting an overview of its context, and taking notes on it. Step 2 - Coding: In this step, the analyser highlights or labels the keywords, groups of keywords, or even entire phrases in the data that indicate some meaning. This meaning comes in handy when the analyser is trying to capture the essence of the data. In this example, the survey question is “How has social media changed over the years?”, and we are interviewing a person who is 40+ years old and working in a middle school. We receive the opinion, “I think these social media platforms such as YOUTUBE/FACEBOOK and LINKEDIN are not for the oldies anymore. Because the current trends are rapidly changing and evolving every day. Hence it becomes difficult for people like me to keep up with them. This difficulty makes us feel disconnected.” Further, we derive codes for the key phrases, like – Quickly changing/ Uninterested/ Discomfort, etc. Step 3 - Generating themes: For the above-mentioned example, we can have a theme called “NOT SATISFIED” for the codes we derived from the interview. This step gives the analyser a brief idea of how many codes are being used repeatedly, which of them will be useful, and which need to be discarded. Step 4 - Reviewing themes: In this step, the analyser compares the themes with the original data, looks for any missing points or irrelevant results, and may modify the themes by checking how they satisfy and/or justify the intended data. Step 5 – Defining and naming themes: Next, the analyser names the themes depending on what they indicate and what we understand from the data. Step 6 - Writing up: In the last step, using all the results, we may conclude that social media has evolved so much that the elder generations find it difficult to understand and interact with it, which results in their dissatisfaction.
This is the method I prefer and practice to conduct a proper Thematic Analysis. Thanks
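The coding step in the interview example above can be sketched as a scan for key phrases, each attached to one of the codes mentioned (quickly changing, discomfort, uninterested). The phrase-to-code mapping is an illustrative assumption, not a fixed rule.

```python
# A rough sketch of Step 2 (coding) applied to the interview response
# quoted above. Which phrase maps to which code is an assumption made
# for illustration.
response = ("the current trends are rapidly changing and evolving every day. "
            "Hence it becomes difficult for people like me to keep up with "
            "them. This difficulty makes us feel disconnected.")

phrase_to_code = {
    "rapidly changing": "quickly changing",
    "difficult": "discomfort",
    "disconnected": "uninterested",
}

# Attach every code whose trigger phrase appears in the response
codes = [code for phrase, code in phrase_to_code.items() if phrase in response]

# Step 3: the derived codes roll up into the theme from the example
theme = "NOT SATISFIED" if codes else "neutral"
```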
  9. Thematic Analysis is a method of analyzing qualitative data. It is generally applied to a set of text, such as verbatim comments or transcripts, which is closely examined in order to identify common themes. It is a good approach to leverage whenever you are trying to find something meaningful (e.g. people’s views, opinions, etc.) from a set of qualitative data. It is basically a data analysis process which involves deep diving into a dataset, creating codes, identifying patterns, deriving themes & then finally creating a narrative. Following are the steps to perform Thematic Analysis:- The first step is to familiarise yourself with the data, i.e. performing an initial exploratory analysis of the data in order to identify meanings & patterns in it. After familiarising yourself with the data, create the initial codes that represent the meanings & patterns identified in the data. Decide on the codes, go through the data again & identify the excerpts, then apply the appropriate codes to them. Also add new codes as deemed fit. Bring together all the excerpts associated with a code & collate them under that code; repeat the same for the other codes as well. Group the collated codes containing the excerpts into suitable themes. Evaluate & revise the themes; also ensure that each theme has data to support it & that each theme is unique. Once the themes are finalised, the final step is to create the narrative in order to share your findings with the audience. Based on the above understanding, one of the common applications of thematic analysis in lean six sigma is VOC analysis. The objective here is to identify major recurring themes that reflect the concerns or problems raised by the customer & eventually this will help to identify the key needs of the customer, on the basis of which we can focus our improvement efforts accordingly.
Let us take the example of a bank that has rolled out a survey to its customers in order to get their valuable feedback on the overall performance of the services they are delivering. Once the responses from the bank’s customers have been received, the bank decides to use Thematic Analysis to analyze those responses. Post the analysis, two major themes were identified from the data points: one towards the accuracy & the other towards the timeliness of the Bank’s wire transfer process. Thus, by leveraging this analysis, they identified two major improvement areas, i.e. to improve the accuracy of the wire transfer process & to reduce the overall time to completion of the wire transfer process.
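The VOC analysis in the bank example above often boils down to tallying how frequently each theme appears in customer feedback, so that improvement areas can be ranked. The feedback lines and keyword lists below are invented for illustration only.

```python
# A hedged sketch of counting theme occurrences in VOC feedback.
from collections import Counter

feedback = [
    "wire transfer posted to the wrong account",
    "transfer took three days to complete",
    "amount credited was incorrect",
    "waited a week for my wire to arrive",
]

# Illustrative keyword lists for the two themes found in the example
theme_keywords = {
    "accuracy": ["wrong", "incorrect"],
    "timeliness": ["days", "week", "waited"],
}

theme_counts = Counter()
for line in feedback:
    for theme, keywords in theme_keywords.items():
        if any(k in line for k in keywords):
            theme_counts[theme] += 1
```

In a real project, the coding would be done by a human reader rather than a keyword match; the counts then drive the prioritisation of improvement efforts.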
  10. Thematic Analysis, as the name gives us an idea, is all about analysing the patterns or themes of data. It is useful for qualitative data analysis, which means it can be used to analyse non-numerical data such as audio/video/text collected from focus groups, interviews or surveys. In this method, qualitative data is analysed and labels are assigned through a process of coding to understand the explicit and implicit meanings of the data, which are then converted to themes through iterative comparison. Different types of Thematic Analysis are: 1. Inductive 2. Deductive 3. Semantic 4. Latent Steps in Thematic Analysis: Step 1: Familiarization Step 2: Coding Step 3: Generating Themes Step 4: Reviewing themes Step 5: Defining themes Step 6: Writing Example in Six Sigma: Inductive Thematic Analysis: 1. Observation – an office lift is busy. 2. Look for a pattern – the office lift is busy from 10 am to 7 pm. 3. Develop a theory – the office lift is busy during working hours. Deductive Thematic Analysis: It builds on the inductive approach. Starting with a theory – the office has a busy lift during working hours. Formulate a hypothesis – generally, all offices have busy lifts during working hours. Collect data to study the hypothesis – observe all the office lifts during working hours every day. Analyse the result (does the collected data reject or validate the hypothesis?) – since all the office lifts are busy during working hours, this supports the hypothesis.
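The inductive lift example above can be sketched as a pattern search over observations: from hourly trip counts, find the busy hours and check whether they match working hours (10 am to 7 pm). The trip counts and the busy-hour cutoff are invented for illustration.

```python
# A minimal sketch of the inductive observation -> pattern -> theory flow.
# Keys are hours of the day (24h clock); values are observed lift trips.
trips_per_hour = {8: 3, 9: 6, 10: 22, 11: 25, 12: 30, 13: 28, 14: 26,
                  15: 27, 16: 24, 17: 29, 18: 21, 19: 5, 20: 2}

BUSY_THRESHOLD = 20  # assumed cutoff for calling an hour "busy"
busy_hours = sorted(h for h, n in trips_per_hour.items() if n >= BUSY_THRESHOLD)

# Emerging theory: busy hours span 10 am through the hour starting at 6 pm,
# i.e. the lift is busy during working hours.
```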
  11. All published answers have explained the two tools correctly. Best answer has been provided by Kaviraj for using the same data set and comparing the two tools. Answers from Rahul Arora and Chandra Shekhar Chauhan are also a must read.
  12. Grubbs' test is used to find a single outlier in univariate data (like employees' salaries in an industry) with a normal distribution, whereas a Boxplot is more of a visual technique for the same, with more flexibility in terms of comparison between sets of data or groups; it gives a more direct representation of the distribution of the data. Grubbs' test is limited since it can detect only one outlier at a time, even though it is useful in detecting an outlier. I would prefer the BOXPLOT as it gives a more efficient visual summary of the central tendency, dispersion and density of the data.
  13. Q 481. What is Thematic Analysis and for what type of data is this used? Elaborate its usage in a Six Sigma project along with an example.
  14. I would prefer either the Grubbs test or the Box Plot based on the situation. If someone wants to detect the presence of a single outlier, one at a time, in a univariate data set that follows an approximately normal distribution, then we can use the Grubbs test. For simplicity, I would apply the Grubbs test using the following points: I will find the G test statistic. I will find the G critical value. Then I will compare the test statistic to the G critical value: If Gtest < Gcritical, I will keep the point in the data set; it is not an outlier. If Gtest > Gcritical, I will reject the point as an outlier. Also, the Grubbs test is defined with the following hypotheses: H0: There are no outliers in the dataset. Ha: There is exactly one outlier in the dataset. We can use a Box plot when we want to compare the shapes of distributions, find central tendencies, assess variability and also identify outliers. Boxplots display the 5-number summary. Box plots present ranges of values based on quartiles and display asterisks for outliers that fall outside the whiskers. Box plots work by breaking your data down into quartiles. When your sample size is too small, the quartile estimates might not be meaningful; box plots work best when you have at least 20 data points per group. The upper whisker covers roughly the top 25% of the data, the box covers the middle 50%, and the lower whisker covers roughly the bottom 25%. If we have multiple distributions, box plots are a good method. Example: Suppose we have five groups of scores from an Agile coaching assessment and we want to compare them; we can use the Box Plot method.
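The Gtest vs Gcritical comparison described above can be sketched in a few lines. The data set below is invented, and the critical value 1.672 (a one-sided 5% value for n = 5, quoted elsewhere in this thread) is taken as given rather than computed.

```python
# A sketch of the Grubbs decision rule: compute G and compare it with a
# tabulated critical value. Data and critical value are illustrative.
import statistics

def grubbs_statistic(data):
    """G = max |x_i - mean| / sample standard deviation."""
    m = statistics.mean(data)
    s = statistics.stdev(data)  # sample (n-1) standard deviation
    return max(abs(x - m) for x in data) / s

data = [9.6, 10.1, 10.3, 9.9, 15.2]  # invented data, n = 5
G = grubbs_statistic(data)
G_CRITICAL = 1.672                   # tabulated one-sided 5% value for n = 5
is_outlier = G > G_CRITICAL          # if True, reject the extreme point
```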
  15. The Grubbs test is a statistical method used to find an outlier in a data range. This test is used to find a single outlier in a normally distributed data set, by testing whether the maximum or the minimum value is an outlier in the given data range. Definition - Hypotheses of the Grubbs test: Ho - There are no outliers in the given data set. Ha - There is exactly one outlier in the given data set. Test statistic for the Grubbs test - With Y̅ representing the sample mean and s the standard deviation, the Grubbs test statistic is the largest absolute deviation from the sample mean in units of the sample's standard deviation. This is the 2-sided version of the test; the Grubbs test can also be defined as one of the following one-sided tests: 1. Test whether the minimum value is an outlier, 2. Test whether the maximum value is an outlier. Grubbs Test Example: Data given - 199.31, 199.53, 200.19, 200.82, 201.92, 201.95, 202.18, 245.57. Firstly, a normal probability plot was generated. This plot indicates that the normality assumption is reasonable except for the maximum value. We therefore compute the Grubbs test for the given case to find whether the maximum value, 245.57, is an outlier or not. Test results: H0: there are no outliers in the data. Ha: the maximum value is an outlier. Test statistic: G = 2.4687. Significance level: α = 0.05. Critical value for an upper one-tailed test: 2.032. Critical region: Reject H0 if G > 2.032. Hence we conclude that the maximum value is in fact an outlier at the 0.05 significance level. Boxplots are used to graphically display different parameters briefly. Among other things, the median, the interquartile range, and the outliers can be read from a boxplot. The data used must have a metric scale level, such as a person's age, electricity consumption, or temperature. How to interpret the boxplot? The box indicates the range in which the middle 50% of all values lie.
Therefore, the lower end of the box is the 1st quartile, and the upper end is the 3rd quartile. Below Q1 lies 25% of the data, and above Q3 lies 25% of the data. In the boxplot, the solid line represents the median whereas the dashed line represents the mean. The T-shaped whiskers are the last part of the boxplot; each whisker extends to the most extreme data point that lies within 1.5 times the interquartile range of the box. If there is an outlier, the whisker stops at 1.5 times the interquartile range; if there is no outlier present in the data, the whisker marks the maximum value. Hence, the upper whisker is either the maximum value or 1.5 times the interquartile range above Q3, whichever is smaller. The same applies to the lower whisker, which is either the minimum or 1.5 times the interquartile range below Q1. Points that lie further away are considered outliers. Box Plot Example: Data - 199.31, 199.53, 200.19, 200.82, 201.92, 201.95, 202.18, 245.57. From the above example it is graphically visible that the data value 245.57 does not fall within 1.5 times the interquartile range, hence it is an outlier. Conclusion – I would prefer a box plot to find the outliers in a normally distributed data range, since it is less complex and easier to understand because of its graphical representation. Thanks.
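The two worked examples above (the Grubbs statistic and the 1.5 x IQR rule for the same data) can be re-checked in code. Note that the median-of-halves quartile convention used here is only one of several; statistical software may compute quartiles slightly differently.

```python
# Re-checking the worked examples above: the Grubbs test statistic and
# the upper 1.5 * IQR fence for the given data set.
from statistics import mean, stdev, median

data = [199.31, 199.53, 200.19, 200.82, 201.92, 201.95, 202.18, 245.57]

# Grubbs test statistic: largest absolute deviation in units of s
m = mean(data)
G = max(abs(x - m) for x in data) / stdev(data)

# Box plot quartiles via the median-of-halves convention
s = sorted(data)
q1 = median(s[:len(s) // 2])            # median of lower half
q3 = median(s[(len(s) + 1) // 2:])      # median of upper half
iqr = q3 - q1
upper_fence = q3 + 1.5 * iqr
outliers = [x for x in data if x > upper_fence]
```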
  16. The Grubbs Test is used to detect outliers in a univariate data set (data of one variable) assumed to come from a normally distributed population. The Grubbs test is based on the assumption of normality, so we should first verify that the data can reasonably follow the normal distribution before applying the Grubbs test. The Grubbs test detects one outlier at a time. We need to calculate the G value using the formula GCalc = |Xi - X̄| / SD, with Xi, X̄ and SD denoting the questionable value, the sample mean and the standard deviation. The Grubbs test statistic is the largest absolute deviation from the sample mean in units of the sample standard deviation. Based on the number of samples in the data set, we can get the G table value. For example, for n=4, Gtab = 1.463 and for n=5, Gtab = 1.672 at 95% confidence. If GCalc > Gtab, then the questionable value should be rejected as an outlier; if GCalc < Gtab, then it should be kept. Example: Data 5, 10, 9.5, 9.8, 9.9. Let's say the questionable value is 5. X̄ = (5+10+9.5+9.8+9.9) / 5 = 8.84 SD = √{[(5-8.84)² + (10-8.84)² + (9.5-8.84)² + (9.8-8.84)² + (9.9-8.84)²] / (5-1)} = 2.155 GCalc = |Xi - X̄| / SD = |5 - 8.84| / 2.155 = 1.782 Gtab for n=5 is 1.672. Here GCalc > Gtab; therefore the value should be rejected as an outlier. Box Plot A box plot is a method for graphically demonstrating the locality, spread and skewness of groups of numerical data through their quartiles. In addition to the box on a box plot, there can be lines extending from the box indicating variability outside the upper and lower quartiles. Outliers that differ significantly from the rest of the data set may be plotted as individual points beyond the whiskers on the box plot. Box plots are non-parametric; they display variation in samples of a statistical population without making any assumptions about the underlying statistical distribution.
The spacings in each subsection of the box plot indicate the degree of dispersion and skewness of the data, which are usually described using the five-number summary - sample minimum, lower quartile, median, upper quartile, sample maximum. In addition, the box plot allows one to visually estimate various estimators, notably the interquartile range, midhinge, range, mid-range and trimean. Box plots can be drawn either horizontally or vertically. Example: Data 60, 82, 82, 84, 88, 90, 90, 92, 93, 97 Sample minimum = 60 Median = (88+90)/2 = 89 Lower quartile Q1 = median of lower values = 82 Upper quartile Q3 = median of upper values = 92 IQR = Q3-Q1 = 92-82 = 10 Sample maximum = 97 Upper range = Q3 + 1.5 IQR = 92 + 1.5 x 10 = 92+15 = 107 Lower range = Q1 - 1.5 IQR = 82 - 1.5 x 10 = 82-15 = 67 (Refer to the box plot for this example, which has been drawn freehand.) Generally, we prefer the box plot to identify outliers for any statistical data set, whereas the Grubbs test can only be used for a univariate data set from a normally distributed population. A box plot is a standardized way of displaying the dataset based on the five-number summary: the minimum, the maximum, the sample median, and the first and third quartiles.
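The five-number summary and the 1.5 x IQR ranges computed by hand above can be verified in code, using the same median-of-halves quartile convention as the worked example.

```python
# Re-checking the box plot example above for the data 60, 82, ..., 97.
from statistics import median

data = [60, 82, 82, 84, 88, 90, 90, 92, 93, 97]
s = sorted(data)

q1 = median(s[:len(s) // 2])            # median of the lower half
q3 = median(s[(len(s) + 1) // 2:])      # median of the upper half
med = median(s)
iqr = q3 - q1
lower_range = q1 - 1.5 * iqr
upper_range = q3 + 1.5 * iqr

# Any point outside the 1.5 * IQR ranges is flagged as an outlier
flagged = [x for x in data if x < lower_range or x > upper_range]
```

This confirms the hand calculation: Q1 = 82, Q3 = 92, ranges 67 and 107, and the sample minimum 60 falls below the lower range, so it is an outlier.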
  17. Grubbs' test is used to detect a single outlier in univariate data which follows a normal distribution. If you suspect more than one outlier may be present, this test may not be helpful. It considers the min and max values when detecting an outlier; the Grubbs test can be used to detect whether the max or min data point is an outlier. As a part of analysis, it is important to check for outliers as they may impact the mean and standard deviation. An outlier should be detected and corrected; however, the Grubbs test may not be a robust technique for determining an outlier. A box plot, instead, can be used as an excellent tool for detecting location and variation in a data set. It helps in identifying the middle 50% of the data, the lower quartile (25th percentile) and the upper quartile (75th percentile). Hence it helps identify the median and extreme points (outliers). A box plot helps in comparison between various data sets and identifies the significant factor; it lets you read the location and variation between different groups. Multiple data sets can be compared, hence it helps you work with large data sets.
  18. Outliers in a dataset are basically the data points whose magnitude is significantly different from the other data points in that dataset. Outliers signify either an error while keying in data or the presence of a special cause. The most common method for identifying outliers is the Box plot; however, we can also leverage the Grubbs Test to detect them, though there is a marked difference between the two methodologies. Let us understand both of these one by one:- Grubbs Test:- It is one of the most commonly used hypothesis tests for identifying outliers & it comes with the below hypotheses:- Ho: All the data points in a sample are drawn from a single population that follows a normal distribution. Ha: One data point is not drawn from the same normally distributed population as the other data points. Thus a p-value of less than 0.05 indicates the presence of an outlier in the data. One of the biggest limitations of the Grubbs test is that it assumes that the data is drawn from a normally distributed population; thus we have to first check whether the data qualifies the normality test. If the data fails the normality test, then we cannot use the Grubbs test. Another limitation associated with the Grubbs test is that it only detects a single outlier at a time, thus requiring the outlier to be removed from the data set first & then running multiple iterations of the test until no outliers are detected in the data. Box Plot:- The Box Plot is the commonly used graphical technique to detect outliers in a dataset. It leverages the Interquartile Range (IQR) with fences in order to identify outliers. Lower Fence: Q1 - 1.5*IQR Upper Fence: Q3 + 1.5*IQR Thus any value below the lower fence or above the upper fence will be considered an outlier. The box plot shows outliers as data points in the form of asterisks.
The box plot is a more robust method to detect outliers as it is not driven by the assumption of normality & one can also detect multiple outliers in the data in a single iteration. Conclusion:- The best blend would be to use a box plot coupled with domain expertise to identify & treat the outliers in the data.
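The iterative use of the Grubbs test mentioned above (detect one outlier, remove it, re-run until none is found) can be sketched as a loop. The critical-value lookup below holds one-sided 5% values for small n; in practice these come from a full Grubbs table for the current sample size.

```python
# A hedged sketch of iterative Grubbs outlier removal. The small
# critical-value table is for illustration only.
from statistics import mean, stdev

G_CRIT = {5: 1.672, 6: 1.822, 7: 1.938, 8: 2.032}  # one-sided 5% values

def grubbs_once(data, crit_table):
    """Return the single most extreme point if it exceeds the critical value."""
    m, s = mean(data), stdev(data)
    candidate = max(data, key=lambda x: abs(x - m))
    if abs(candidate - m) / s > crit_table[len(data)]:
        return candidate
    return None

def remove_outliers(data, crit_table):
    """Repeatedly apply the one-at-a-time test until no outlier is found."""
    data, removed = list(data), []
    while len(data) in crit_table:
        out = grubbs_once(data, crit_table)
        if out is None:
            break
        data.remove(out)
        removed.append(out)
    return data, removed

cleaned, removed = remove_outliers(
    [199.31, 199.53, 200.19, 200.82, 201.92, 201.95, 202.18, 245.57], G_CRIT)
```

On the data set used elsewhere in this thread, the first pass removes 245.57 and the second pass finds nothing, so the loop stops.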
  19. Earlier
  20. Q 480. Both, Grubbs Test and Box Plots are used to detect presence of outliers in the data set. Which of the two would you prefer to use and why? Provide examples to support your answer.
  21. Rahul Arora has provided the winning response for today's question. Viewers are advised to go through the Benchmark Six Sigma expert response by Mr. Venugopal as well. If you have responded but your answer is not approved, there are high chances that it failed the plagiarism (copied from elsewhere over the internet) test. Please pay attention to the following text mentioned under each question When you respond to the question, your answer will not be visible till it is reviewed. Only non-plagiarised (plagiarism below 5-10%) responses will be approved. If you have doubts about plagiarism, please check your answer with a plagiarism checker tool like https://smallseotools.com/plagiarism-checker/ before submitting.
  22. Benchmark Six Sigma Expert View by Venugopal R Readers are expected to have some exposure to 'Design of Experiments' to be able to relate to some terminologies in this answer on the 'Latin Square Design'. Experiments are designed to study whether a response (output) is dependent on certain factors (inputs) and also to establish the extent of the relationship. It is possible that when we design and perform an experiment with planned settings of an input factor, there could be some known 'noise factors' which are likely to influence the behavior of the output. Such 'noise factors' are also referred to as 'nuisance factors'. They are factors that we are not interested in studying, but we may be concerned that they might interfere and bias our results. If we suspect the presence of one noise factor, it is a common practice to use a 'Randomized Block Design'. The below example will illustrate such a situation. It is believed that the concepts of 'Design of Experiments' originated from the field of agriculture. We will understand the Randomized Block Design, followed by the Latin Square Design, using an example relating to the yield of a crop. However, the concept can be applied to other situations dealing with nuisance factors. We are limiting our discussion to the Experimental Design portion and not discussing the Analysis portion here. RANDOMIZED BLOCK DESIGN Imagine that we are interested in studying the impact of fertilizer doses on the yield of a crop. We have divided the land into 24 plots (8 x 3) as shown below. Eight different doses of fertilizer (A, B, C, D, E, F, G, H) are to be tried out. However, it so happens that there is a river flowing on the left side of the land. Now we suspect that the presence of the river will result in higher moisture content for the plots closer to the river.
To study any possible impact due to the possible moisture variation we divide the plots into 3 vertical blocks, each block representing the different moisture content (High, Medium and Low). Within each block we perform all the treatments based on the 8 fertilizer dozes, but with random distribution. Such a design is referred to as 'Randomized Block Design (RBD). The RBD will help to address one noise factor. LATIN SQUARE DESIGN Instead of one Noise factor, if we have two Noise Factors; for example, we have river that runs along the West side and a road that runs along the North side. We suspect that the river contributes to varied levels of moisture content as we move from west to east along the land. Whereas, we also suspect that the road is contributing to varied levels of pollution while moving from North to South across the land. We suspect two nuisance factors. viz. Moisture levels and Pollution levels. Will the plots closer to the river be influenced by higher moisture content and the plots closer to the road be influenced by higher pollution content? To consider the possible impacts due to these two suspected noise factors, we use an experimental design as shown below. As seen, the design is in the form of a square, with equal number of rows and columns. The treatment for each plot is represented by an alphabet. In this case we can try out 4 different dozes of fertilizers viz. A, B, C and D. Such a design is known as 'Latin Square Design'. Each cell in the Latin Square design can accommodate only one treatment. It may be noticed that all the treatments (A,B,C and D) are covered in each row, as well as each column. The number of blocks has to be the same, horizontally and vertically, for both the noise factors. The Latin Square design is used when we suspect two noise factors and want to study whether those noise factors cause (an undesired) influence on the response. 
Another example for Latin Square application is shown below: The output of interest is the rate of sales for 3 variants (A, B, and C) of a product. The noise factors suspected are the type of cities and the type of dealer promotion schemes. We have considered 3 blocking with respect to the city types and 3 blocking with respect to the dealer promotion scheme. The Latin Square design may be applied as below:
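A Latin Square layout like the ones described above can also be generated programmatically. The sketch below is only an illustration (the cyclic construction followed by row and column shuffling is one standard way to randomize a Latin square; the function and variable names are my own, not from the answer above):

```python
import random

def latin_square(treatments, seed=None):
    """Return a randomized Latin square: every treatment appears
    exactly once in each row and each column."""
    rng = random.Random(seed)
    n = len(treatments)
    # Start from the standard cyclic square...
    square = [[treatments[(i + j) % n] for j in range(n)] for i in range(n)]
    # ...then shuffle whole rows and whole columns, which preserves
    # the Latin property while randomizing the layout.
    rng.shuffle(square)
    order = list(range(n))
    rng.shuffle(order)
    return [[row[c] for c in order] for row in square]

# A 4x4 design for four fertilizer doses A-D, as in the crop example.
design = latin_square(["A", "B", "C", "D"], seed=42)
for row in design:
    print(" ".join(row))
```

Any output of this function satisfies the defining property: each dose occurs exactly once per row and once per column.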
  23. Latin Square design helps us to control variation in two directions: factors are arranged in rows and columns. Below are a couple of situations where a Latin Square Design is generally used.

1. Trials in agriculture. The fertility of a piece of agricultural land might change in both directions, East-West and North-South, due to moisture levels in the air or the soil. In this case we can block the land into columns and rows, where each row is a level of the row factor and each column a level of the column factor. By considering both rows and columns as factors in our design, we can remove the variation in both directions.

2. Trials in greenhouses where pots are arranged in a straight line perpendicular to the walls, such that the distance between the pots and the wall is a source of variability.
  24. My two cents on this: let us understand the concept and limitations of the two conventional experimental designs, and how the Latin Square design takes care of those limitations, through an example from the optical lens industry.

Completely Randomised Design (CRD) or One-Way ANOVA: In a CRD, each experimental unit is randomly assigned to one of the treatment levels. For example, in the optical industry, suppose we want to study the impact of different varnish types (coating formulations) on the final yield of our lens coating process. Here the experimental unit is the lens on which the coating will be done, and each sample is randomly allocated to a treatment group. Say we have 60 samples and three types of varnish (X, Y, Z); the samples will be divided into three groups of 20 each, with one group subjected to Varnish X, another to Y and the third to Z. This can be shown as:

Varnish X | Varnish Y | Varnish Z
Group B   | Group A   | Group C

We take into account the variability within each group in the overall sample (SS within) and the variability between the groups subjected to the three varnish types X, Y, Z (SS between).

Randomised Block Design (RBD) or Two-Way ANOVA: Now suppose we observe that the supplier (say Supplier A, B or C) from which a varnish is imported also influences the final yield of the coating process. The supplier then becomes the blocking variable. The units are first assigned to each block, and each unit within a block is subjected to one of the treatments, but cannot be assigned to other blocks. Say we have 180 samples; first we divide them into three groups of 90, one for Supplier A, one for Supplier B and one for Supplier C, and each group is further subdivided into subgroups of 30, with one subgroup subjected to Varnish X, the second to Y and the third to Z, and likewise for the Supplier B and C groups:

Block                | Varnish X  | Varnish Y  | Varnish Z
Group 1 (Supplier A) | Subgroup 1 | Subgroup 3 | Subgroup 2
Group 2 (Supplier B) | Subgroup 2 | Subgroup 3 | Subgroup 1
Group 3 (Supplier C) | Subgroup 3 | Subgroup 1 | Subgroup 2

Here we take into account the variability within each subgroup (SS within), the variability among the blocks, i.e. Suppliers A, B and C (SS blocks), and the variability between the groups based on the three varnish types X, Y, Z (SS between).

Latin Square Design: An RBD can handle only one blocking variable. The Latin Square design takes care of this limitation: the treatment combinations form a square, and each treatment occurs exactly once in each row and each column, which is the underlying principle of the Latin Square design. Continuing the example, say we have 60 lenses divided into groups of 20 across the supplier levels A, B, C as well as the varnish types X, Y, Z. Each group is subjected to a combination of supplier and varnish type, but only once. An important assumption in a Latin Square design is that the factors considered must have the same number of levels, as in this example where we have three levels of supplier (A, B, C) and three levels of varnish (X, Y, Z). Thus in this case it is a 3x3 Latin square:

           | Varnish X | Varnish Y | Varnish Z
Supplier A | Group B   | Group A   | Group C
Supplier B | Group C   | Group B   | Group A
Supplier C | Group A   | Group C   | Group B

Here we take into account the variability within each group (SS within), the variability among the supplier blocks (SS rows), the variability among the varnish types (SS columns), and the variability among the Latin-letter groups themselves (SS treatments).
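The sum-of-squares partition described above can be illustrated numerically. Below is a minimal sketch for the 3x3 Supplier x Varnish layout from the example; the yield figures are entirely made up for illustration and are not from the original answer:

```python
# Sum-of-squares partition for a 3x3 Latin Square design.
# Rows = supplier levels (A, B, C), columns = varnish types (X, Y, Z),
# Latin letters = experimental groups laid out as a Latin square.
# The yield values are hypothetical, chosen only to show the arithmetic.
yields = [
    [82.0, 85.0, 78.0],  # Supplier A under Varnish X, Y, Z
    [79.0, 83.0, 81.0],  # Supplier B
    [84.0, 80.0, 77.0],  # Supplier C
]
groups = [
    ["B", "A", "C"],
    ["C", "B", "A"],
    ["A", "C", "B"],
]

n = 3
grand_mean = sum(sum(row) for row in yields) / n**2

# Total variability around the grand mean.
ss_total = sum((y - grand_mean) ** 2 for row in yields for y in row)
# Variability across suppliers (rows).
ss_rows = n * sum((sum(row) / n - grand_mean) ** 2 for row in yields)
# Variability across varnish types (columns).
col_means = [sum(yields[i][j] for i in range(n)) / n for j in range(n)]
ss_cols = n * sum((m - grand_mean) ** 2 for m in col_means)
# Variability across the Latin-letter groups.
ss_groups = 0.0
for g in "ABC":
    vals = [yields[i][j] for i in range(n) for j in range(n) if groups[i][j] == g]
    ss_groups += n * (sum(vals) / n - grand_mean) ** 2
# Whatever remains is the residual (error) sum of squares.
ss_error = ss_total - ss_rows - ss_cols - ss_groups

print(round(ss_rows, 3), round(ss_cols, 3), round(ss_groups, 3), round(ss_error, 3))
```

With these made-up numbers the partition works out to SS rows ≈ 2.667, SS columns = 26.0, SS groups ≈ 28.667 and SS error ≈ 2.667, and the four components add back exactly to SS total = 60: the Latin square lets one data set account for both nuisance factors and the treatment at once.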
  25. P Balakumaaran has provided the best answer to the question. First, there is the definition and comparison of value-added and non-value-added activities. Second, non-value-added activities are further classified as essential and non-essential. Third, various methods are mentioned to address the two types of NVAs. Well written!
  26. Q 479. Latin Square design is a special kind of randomized design in DOE. Explain with examples, when is this kind of design generally used?
  27. First we need to understand the difference between value-added and non-value-added activities. Non-value-added activities (NVA) are called 'wastes' in the Lean world. There are 7 classical wastes, summarized by the acronym 'TIMWOOD'. Of late, an eighth waste (non-utilization of skill) has been added to this list, giving the acronym 'DOWNTIME':

D – Defects / Scrap
O – Overproduction
W – Waiting Time
N – Non-utilization of skill
T – unwanted Transportation
I – Inventory
M – unwanted Motion / Movement
E – Excess (over) Processing

These wastes increase the Process Time, thus increasing the Cycle Time, in turn increasing the Lead Time.

Process Time – time taken to complete an individual activity or process
Cycle Time – net production time / number of products produced
Lead Time – time between the order being received from the customer and the order being delivered back to the customer

From the Lean perspective, the ultimate goal is to eliminate all these wastes from the process. This helps to reduce the Process Time, and the saved time can be used to produce more products, thus reducing the Cycle Time. This in turn helps to reduce the Lead Time, improving On-Time Delivery to customers and providing a competitive edge in the market.

In practice, we cannot always categorize activities as only Value Added (VA) or Non Value Added (NVA). For example:

a) Quality inspection is considered a non-value-added activity from the Lean perspective. But we cannot eliminate quality inspection and still deliver good-quality products to customers. Hence quality inspection is business value added, which in Lean is defined as Necessary Non Value Added (NNVA).
b) Warehousing is considered NVA (from the inventory logic); however, we cannot eliminate the storage of finished goods in the warehouse, as that would have a direct impact on the business. Hence this can also be categorized as NNVA.
c) Equipment set-ups or changeovers are supposed to be NVA. However, from the business perspective, it is not always possible to eliminate changeovers, as this limits the flexibility of the production line. Hence changeovers are also termed NNVA.

Unlike NVA, we cannot eliminate NNVA, but we have to reduce them, thus reducing their impact on the Process Time, Cycle Time and Lead Time while not impacting customer satisfaction.

Business Value-Added Activities: these are activities the customer is not willing to pay for, but which are needed for running the processes and the business. They include work done for audits, control, risk reduction, regulation, or to support value-added work. Taiichi Ohno called all these NVA 'Muda' ('waste' in Japanese). Business value-added activities are called Type-1 Muda, while non-value-added activities are called Type-2 Muda.

Some critical questions that can help us demarcate VA from NVA are:
· Does the activity transform the form, feature, feeling or function that the customer is willing to pay for?
· Is it being done right the first time?
· Is this something the customer expects to pay for?

A positive answer to all of these questions indicates a VA; even a single negative response indicates either an NVA or a business value-added activity (NNVA). Note: when we stop doing a value-added activity, customers will complain, while eliminating a business value-added activity would lead to internal customers or regulators complaining.

Some approaches to manage the NNVA or business value-added activities are:

Approach 1:

Approach 2: ElCoMoRe: Eliminate – Combine – Modify – Reduce. This approach talks about eliminating all the non-value-added activities, as far as possible. For NNVA, we can Combine them with other VA so that they can be done in parallel, Modify the way they are done (e.g., automation / outsourcing), or Reduce the time taken to complete the activity. This approach is also termed ECRS: Eliminate – Combine – Rearrange – Simplify.

Approach 3: Theory of Constraints – this talks about identifying the limiting factor in the process, so that we can focus all resources to eliminate the NVA, boost the VA and reduce the NNVA. This approach helps to identify the bottleneck, optimize the usage of resources, and enable a quick and effective way to improve customer satisfaction.

Approach 4: Waste Hunting – an approach to hunt down the wastes in the processes, down the value stream. This is a ruthless approach to cutting down all NVAs from the process. Though effective, it often ends in conflicts among the different process owners, as the demarcation between NVA and NNVA is very thin. Blind implementation may cut down necessary activities and leave a lasting negative impact on the business.

Approach 5: Value Stream Mapping & Line Balancing – this helps to visualize the current-state value flow and identify the bottleneck process from the capacity, manpower and lead-time perspectives. This method throws light on the Takt Time, to assess customer satisfaction. It is vastly helpful in processes that involve a lot of changeovers, where the need for SMED application is determined from the VSM.

One effective metric for measuring the non-value-added content in a process is Process Efficiency:

Process Efficiency (PE) = (Value Added Time × 100) / (Value Added Time + Non Value Added Time + Business Value Added Time)
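The Process Efficiency formula above can be sketched as a small calculation. The timing figures below are hypothetical, chosen only to show the arithmetic:

```python
def process_efficiency(va, nva, bva):
    """Process Efficiency (%) = Value Added Time * 100 /
    (Value Added + Non Value Added + Business Value Added Time)."""
    total = va + nva + bva
    if total <= 0:
        raise ValueError("total time must be positive")
    return 100.0 * va / total

# Hypothetical timings (hours) for one order.
pe = process_efficiency(va=6.0, nva=10.0, bva=4.0)
print(f"Process Efficiency = {pe:.1f}%")  # 6 / 20 * 100 = 30.0%
```

A low PE like this would suggest that most of the lead time is spent on NVA and NNVA activities, which is where the approaches above would be applied.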
  28. I can suggest the following guidelines for the operations team:

1. There are eight types of wastes, namely Overproduction, Inventory, Defects, Motion, Over-processing, Waiting, Transportation and Underutilised staff, which should be taken into consideration. These wastes add cost without adding any value; they do not improve the existing process, nor will the customer be ready to pay for them.

2. The operations team needs to establish SLA workflows in their system, so that they can prioritize tickets as per business impact and customer needs and deliver without delays. They should then create a Value Stream Map to identify the wasteful activities that delay customer delivery and remove them, optimizing the flow that adds value (quality / fewer defects), reduces cost (over-processing) and brings positive change (motion) to the product delivery, thereby increasing profit and customer satisfaction.

3. There should also be regular collaboration and feedback between the operations team and the customer, so that time is not wasted in creating a wrong product or service delivery (defects/defectives).
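The SLA-based prioritization in point 2 above can be sketched as a simple sort. This is only an illustration: the field names, impact scale and ticket data are all made up, not from any particular ticketing system:

```python
# Hypothetical SLA-based ticket prioritization: order tickets by
# business impact (higher first), then by time left on the SLA
# (less time left first). All data below is illustrative.
tickets = [
    {"id": "T1", "impact": 2, "sla_hours_left": 8},
    {"id": "T2", "impact": 3, "sla_hours_left": 24},
    {"id": "T3", "impact": 3, "sla_hours_left": 4},
]

queue = sorted(tickets, key=lambda t: (-t["impact"], t["sla_hours_left"]))
print([t["id"] for t in queue])  # ['T3', 'T2', 'T1']
```

The key function encodes the policy: negating impact makes higher-impact tickets sort first, and ties are broken by the nearer SLA deadline.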