Venugopal R
Excellence Ambassador
Content Count
96 
Days Won
11
Venugopal R last won the day on September 7
Venugopal R had the most liked content!
Community Reputation
30 Excellent

About Venugopal R

Rank
Advanced Member
Profile Information

Name
Venugopal R

Company
SourceHOV

Designation
SVP Global Quality
Rolled Throughput Yield Part 2
Venugopal R replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins.
Rolled Throughput Yield (RTY) is calculated by multiplying the yields of each process step. Let me illustrate an application of this metric with an example. XYZ company manufactures friction material that goes into auto disc brake pads. The processes under consideration start with the Mix, which goes through a preform process, then compression molding, and then grind finishing. Assume that the standard weight of mix required for each pad is 100 g. If 10,000 g of mix is fed into the line, the yield for each of the three processes (Preform, Compression molding and Finishing) can be calculated, and the resulting RTY is 0.8. This means that when a quantity of mix equivalent to 100 pads was fed into the system, we ended up getting only 80 pads.

The loss of yield can be split into two categories:
1. Losses due to spillage, gaseous waste and finishing dust (SGF)
2. Rejections that were either scrapped or reworked (SRW)

The RTY brings out the practical yield of the process at large. If we take up a Six Sigma project to improve the RTY (say from 0.8 to 0.9), it will lead to the revelation and analysis of the 'Hidden Factory' in terms of the scrap and rework handling going on between the processes. Further probing would raise the question of how much of the SGF wastage can be reduced. Factories will typically have practices by which reworked material from a particular process is fed back into the next process. Similarly, wastage due to spillage may be retrieved and rerouted to the preform process, and the grind dust may be collected and recycled, at permitted proportions, into the molding process. If around 2% of the SGF and 8% of the SRW are reintroduced into the process, the resulting yield (had we not considered RTY) would have worked out to 90%, and we would have missed exposing and quantifying the 'Hidden Factory' and the opportunity for improvement.
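As a minimal sketch of the RTY arithmetic: the per-process yields below are my own assumptions, chosen only so that their product is about 0.8, matching the example.

```python
from functools import reduce

# Hypothetical per-process yields (assumed for illustration; only their
# product of ~0.80 matches the example above).
process_yields = {"Preform": 0.92, "Comp. molding": 0.93, "Finishing": 0.935}

# RTY is the product of the individual process yields.
rty = reduce(lambda a, b: a * b, process_yields.values())

mix_fed_g = 10_000   # mix fed into the line, in grams
g_per_pad = 100      # standard mix weight per pad
pads_out = (mix_fed_g / g_per_pad) * rty

print(f"RTY = {rty:.2f}")                # ~0.80
print(f"Pads produced: {pads_out:.0f}")  # ~80 out of a possible 100
```

Note how each step's yield looks respectable on its own (92% or better), yet the rolled yield exposes a 20% loss across the line.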

Tagged with: first time yield, hidden factory (and 2 more)

Power of Hypothesis Test
Venugopal R replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins.
Decision based on test | Reality: Ho is True                                | Reality: Ho is False
Accept Ho              | Correct Decision (1 – alpha), the Confidence Level | Type II error (Beta)
Reject Ho              | Type I error (alpha)                               | Correct Decision (1 – Beta), the Power of the Test

If we want the test to pick up a significant effect, it means that whenever H1 is true, the test should accept that there is a significant effect. In other words, whenever H0 is false, it should conclude that there is a significant effect; that is, whenever H0 is false, it should reject H0. The probability of this is (1 – Beta), which, as seen from the table above, is defined as the power of the test. Thus, if we want to increase the assurance that the test will pick up a significant effect, it is the power of the test that needs to be increased.
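A small sketch of how power is computed, assuming a one-sided one-sample z-test; the alpha, effect size and sample size values are illustrative choices of mine, not from the post.

```python
from statistics import NormalDist

nd = NormalDist()

alpha = 0.05   # Type I error rate (assumed)
effect = 0.5   # true shift in standard-deviation units (assumed)
n = 30         # sample size (assumed)

# Rejection threshold for the test statistic under H0.
z_crit = nd.inv_cdf(1 - alpha)

# Beta = P(accept H0 | H0 is false): under H1 the test statistic is centered
# at effect * sqrt(n), so beta is the mass left of the threshold.
beta = nd.cdf(z_crit - effect * n ** 0.5)
power = 1 - beta

print(f"beta = {beta:.3f}, power = {power:.3f}")
```

Increasing n or the effect size (or relaxing alpha) raises the power, which is why power, i.e. (1 – Beta), is the quantity to increase if the test must reliably detect a significant effect.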

Tagged with: power of hypothesis test, significant effect (and 1 more)

Measure of Dispersion
Venugopal R replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins.
Range is, no doubt, the simplest measure of dispersion. Range, however, can mislead us when there are outliers in the sample, since only the 2 extreme values are used to calculate it. We need not go into the advantages of using the standard deviation, since most of us would know them. However, in situations where we deal with small and equal sample sizes, the range is an ideal measure. One of the best examples we have is the usage of range in an Xbar-R chart. Here, the samples are taken in the form of rational subgroups. Each subgroup consists of a small but equal sample size, say around 4 pieces. Such sample sizes are too small for computing standard deviations. The concept of rational subgrouping, with very little time gap between the samples, reduces the possibility of outliers. Even if we do get outliers, those range values will stand out on the control chart and will be removed during the 'homogenization' exercise. Hence range can be used as a measure of variation in such cases.
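To sketch the idea, here are subgroup ranges and R-chart limits for a few hypothetical subgroups of size 4; the measurements are invented, while D3 = 0 and D4 = 2.282 are the standard R-chart constants for subgroup size 4.

```python
# Hypothetical rational subgroups of size 4 (measurement units assumed).
subgroups = [
    [10.1, 10.3, 9.9, 10.0],
    [10.2, 10.0, 10.1, 10.3],
    [9.8, 10.1, 10.0, 10.2],
]

# Range is simply max - min within each small subgroup.
ranges = [max(s) - min(s) for s in subgroups]
r_bar = sum(ranges) / len(ranges)

# R-chart control limits for subgroup size n = 4: D3 = 0, D4 = 2.282.
lcl_r = 0 * r_bar
ucl_r = 2.282 * r_bar

print(f"ranges = {ranges}")
print(f"R-bar = {r_bar:.3f}, LCL = {lcl_r:.3f}, UCL = {ucl_r:.3f}")
```

An outlier-inflated subgroup would show up immediately as a range point near or above the UCL, which is exactly the 'homogenization' screening described above.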

Tagged with: measure of dispersion, range (and 1 more)

Skewness and Kurtosis
Venugopal R replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins.
One of the most important things one would like to infer from the descriptive statistics of any data set is how much the data distribution complies with or deviates from a normal distribution. Skewness and Kurtosis are measures that quantify such deviation, often referred to as 'shape' parameters. These measures are particularly useful when comparing 2 distributions to decide on their extent of normality, for example the delivery time for a product compared between two delivery outlets. Data may be spread out more to the left, more to the right, or uniformly. For a normal distribution, the data is spread symmetrically about a central point; here the median, mode and mean coincide and the skewness is zero. When skewness is negative, the data is left skewed; when positive, the data is right skewed. While a graphical representation provides a quick and easily understood comparison of the skew or bias in a data distribution, the skewness measure quantifies it. This is particularly important for decision making when comparing distributions that appear similar but have small differences in skew that may not show up well on a graph. In economics, the skewness measure is often used to study income distributions, which are skewed to the right or to the left. Distributions of the lifetimes of certain products, like bulbs or other electrical devices, are right skewed: the smallest lifetime may be zero, whereas the long-lasting products provide the positive skew. Kurtosis is often referred to as a measure of the 'peakedness' of the distribution; it is also referred to as a measure of the 'weight of the tails' of the distribution.
However, I will attempt to make Kurtosis easier to understand in as simple terms as possible. It is known that while Normal distributions are symmetrical in nature, not all symmetrical distributions are Normal. A perfect normal distribution has a kurtosis represented as β2 – 3 = 0. A positive kurtosis, known as Leptokurtic, has β2 – 3 > 0; a negative kurtosis, known as Platykurtic, has β2 – 3 < 0. To illustrate with an example, most of us are familiar with the 't' distribution, which may appear similar to the Normal distribution but is differentiated by a β2 – 3 greater than zero. Though Kurtosis is mostly discussed in terms of 'peakedness' and 'tail heaviness', it really depends on the extent of mass that moves toward or away from the 'center' of the distribution as compared with a normal distribution of the same mean and variance. One of the main uses of Kurtosis is as an underlying factor for testing Normality, since many statistical techniques depend on the normality of the distribution.
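A minimal sketch of the population formulas for both measures (the excess kurtosis here is the β2 – 3 form discussed above), applied to a small right-skewed sample; the lifetime values are assumed for illustration.

```python
from statistics import mean

def skewness(data):
    # Third standardized moment (population form).
    m, n = mean(data), len(data)
    var = sum((x - m) ** 2 for x in data) / n
    return sum((x - m) ** 3 for x in data) / n / var ** 1.5

def excess_kurtosis(data):
    # Fourth standardized moment minus 3, i.e. beta2 - 3.
    m, n = mean(data), len(data)
    var = sum((x - m) ** 2 for x in data) / n
    return sum((x - m) ** 4 for x in data) / n / var ** 2 - 3

lifetimes = [1, 1, 2, 2, 3, 10]  # assumed right-skewed product lifetimes
print(round(skewness(lifetimes), 2))         # positive: right skewed
print(round(excess_kurtosis(lifetimes), 2))  # positive: heavier tail than normal
```

A symmetric sample such as [1, 2, 3, 4, 5] gives a skewness of exactly zero, which is the property that lets these two numbers flag departure from normality before any formal test.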

Tagged with: skewness and kurtosis, bell curve (and 2 more)

Risk Priority Number (RPN)
Venugopal R replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins.
I wouldn't want to go back to the points already covered by various Excellence Ambassadors for the previous question on the limitations of FMEA. Here I will limit my discussion to the limitations in the usage of the RPN.

1. The best-known caveat while using the RPN is that even though we rank priorities in descending order of RPN, severity has to be given very serious attention. Let's examine the following 2 scenarios:
Scenario 1: Severity 10 (hazardous effect without warning), Occurrence 3 (low frequency, say 1 in 10,000), Detection 3 (controls have a good chance of detection).
Scenario 2: Severity 4 (low effect, such as fit / finish issues that do not impact functionality or safety), Occurrence 7 (high frequency, say 1 in 100), Detection 8 (controls have a poor chance of detection).
In this case, prioritizing scenario 2 over scenario 1 just based on the RPN may be disastrous.

2. Often, it is difficult to obtain reasonably correct rating numbers for occurrence. Especially when we are dealing with new products / processes, the relevance of occurrence ratings derived from existing processes may be limited. Another risk is that the occurrence frequencies may have been based on data for a particular period, while in reality the occurrence frequency for a particular cause could change and alter our risk prediction and priority.

3. Where detection has a human dependency, there is a possibility that when the occurrence of a particular cause becomes very low, human alertness drops and actual detection could be lower than the score assigned to it.
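Pairing the ratings into the two scenarios as described, the RPN arithmetic makes the point concrete (a sketch; the tuple layout is my own):

```python
# (Severity, Occurrence, Detection) for the two scenarios above.
# Scenario 1: hazardous (S=10) but rare (O=3) and well detected (D=3).
# Scenario 2: cosmetic (S=4) but frequent (O=7) and poorly detected (D=8).
scenarios = {
    "scenario 1": (10, 3, 3),
    "scenario 2": (4, 7, 8),
}

# RPN = Severity x Occurrence x Detection.
rpns = {name: s * o * d for name, (s, o, d) in scenarios.items()}

print(rpns["scenario 1"])  # 90
print(rpns["scenario 2"])  # 224
```

Scenario 2 scores more than twice the RPN of scenario 1, so a pure RPN ranking would put a cosmetic issue ahead of a potentially hazardous one, which is exactly the trap described above.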

Tagged with: rpn, risk priority number (and 4 more)

Launching Lean Six Sigma Effectively
Venugopal R replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins.
Avoid branding the program with the "Lean Six Sigma" tag at the start. Understand the Leadership's biggest pains; there are always bound to be requirements for improving the effectiveness and efficiency of processes. Take up such a pain area (or improvement area) and initiate it through the organization's regular procedures, viz. change request or CAPA processes, which normally flow through the relevant cross-functional stakeholders. No SME should feel it is an added activity. Once you have succeeded, let leadership experience a fact-based success story that encourages them to give you the next one. Step up the usage of LSS tools and practices as required, and pursue this until it results in a seamless buy-in of the program.
Discrete data as continuous data
Venugopal R replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins.
Working with sample means: When we work with sample means, data from any distribution, even a discrete one, are subject to the properties of the normal distribution, as governed by the central limit theorem. This concept enables the use of normal distribution laws in tools such as control charts.

Ordinal data: Many a time we use ordinal data on a Likert scale with ratings 1 to 5. When we average such recordings for a particular parameter across various respondents, they get converted into a metric that can be seen on a continuous scale.

Histogram: Every time we plot a histogram, even for data of a discrete nature (for example, the number of corrections in a document per day), with a large amount of data it tends to exhibit the behavior of continuous data, say a normal distribution.

FMEA ratings: When we use the ratings in FMEA for severity, occurrence and detection, we assign discrete rankings between 1 and 10, but once converted to an RPN, the measure becomes more continuous in nature, though it remains a whole number.

Failure data / distributions: Another situation I can think of is failure data. Individual failure data are counts of occurrences, obviously discrete to start with. However, when we convert them to failure rates and plot distributions against time, they are treated as continuous distributions such as the exponential, Weibull, etc.
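The sample-means point can be sketched with a quick simulation: die rolls are discrete, yet their subgroup means cluster tightly around the population mean of 3.5, as the central limit theorem predicts. The subgroup size, number of subgroups and seed are arbitrary choices.

```python
import random
from statistics import mean, stdev

random.seed(42)  # arbitrary seed, for reproducibility only

# 1000 subgroup means, each computed from 30 discrete die rolls.
sample_means = [
    mean(random.randint(1, 6) for _ in range(30))
    for _ in range(1000)
]

grand_mean = mean(sample_means)
spread = stdev(sample_means)

# The means concentrate near 3.5 with spread ~ sigma/sqrt(30) ~ 0.31,
# even though the underlying data take only the values 1..6.
print(f"grand mean = {grand_mean:.2f}, stdev of means = {spread:.2f}")
```

This is the mechanism that lets Xbar charts apply normal-distribution limits to subgroup averages of discrete counts.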
My previous post discussed Situation 1, a non-correlation between the Internal Quality score and the VOC score. Let's look at another situation.

Situation 2: Lack of correlation because the internal Quality score shows poorer results than the VOC score. Before we conclude whether the internal score serves any purpose or not, some of the questions that need to be asked are:
1. Is the VOC score structured and being reported as per an agreed procedure?
2. Is there a possibility that, despite a dip in Quality, the VOC is silent on certain issues, but there is a risk of a silent drift by the customer?
3. Has a detailed analysis been done on the key findings from the internal measurement, and an assessment made of the relevance of those findings from the customer's point of view?
4. If sampling procedures are used, are the margins of error comparable for the methods employed internally and externally?
5. Is it possible that there are certain reliability-related issues that might show up in the VOC score only after a period of time?
6. It is sometimes common practice to keep the internal measurements more stringent than what the customer would apply, for higher sensitivity. This could affect the correlation.
7. The internal measurement might take into account issues that impact the customer as well as issues that do not impact the customer but are important from an internal process-efficiency point of view.

After considering the above couple of non-correlation situations: even if there is a positive correlation, there may be certain questions worth looking into. Depending on the interest of the debating participants, I will delve into that area. Thanks.

Let's consider specific cases of non-correlation between the Internal Quality score and the VOC score.

Situation 1: If the VOC score is showing poorer Quality than the internal score, it is certainly a cause for concern. It serves a purpose to examine some of the questions below:
1. Is the detection capability of the internal measurement adequate?
2. Could it be the result of damage that occurred after the internal measurement?
3. Is there a difference in the understanding / interpretation of the Quality standard?
4. Has a new problem cropped up that was never part of the existing Quality standard?
5. If a sampling methodology is being used for the score determination, are the margins of error for the internal and VOC scores comparable?
6. Is it a subjective / aesthetic preference-related issue, which could vary from customer to customer?
7. Is it an assignable spike due to a specific problem concentrated in a few products of a particular batch?

We will discuss another non-correlating situation in my next post.

Looking at some of the responses, I would like to reiterate the question of this debate. The question is not about whether correlation is required or desirable. The question is: "Given a situation where the internal service quality score fails to show a positive correlation with the VOC score, does it serve the purpose or not?" Or, to express the question in other words: "If the Internal Quality score does not positively correlate with the VOC score, is it to be discarded as not serving any purpose?" My answer has been: "It need not be discarded in all such situations." In other words, "Yes, it would still serve the purpose, depending upon the situation."

DPMO vs PPM
Venugopal R replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins.
PPM (Parts Per Million) is a measure of defectives; it indicates the number of parts having (one or more) defects in a given population. This measure does not provide insight into the quantum of defects, since some parts may have more than one defect. PPM is a popular measure when dealing with proportion defectives where large numbers of pieces are involved, and where even one defect usually renders a piece unusable or subject to rework, e.g. auto components supplied to a large automobile manufacturer. It also applies when we are referring to a single quality characteristic of interest, say the weight of a bottle of packaged drinking water, or the proportion of batches delivered on time.

DPMO (Defects Per Million Opportunities) is a measure of defects. When we deal with a part, it may be easy to express the defects per part or per x number of parts. But imagine we are dealing with a process and need to express the number of defects during a certain period of time. We could state the number of defects from the process in that period; however, if we need to compare the defect rates of process A and process B, the comparison is meaningful only if the opportunities for defects in the two processes are comparable. This is not always the case, and hence the approach adopted is to pre-identify the number of defect opportunities in a given process and use the ratio of defects to the number of opportunities. For ease of dealing with the numbers, this ratio is multiplied by a million, hence "Defects Per Million Opportunities". The opportunities represent potential failure modes. For e.g., DPMO can be used to express the Quality levels of a check-processing activity or a knowledge transfer process, or to compare different production processes.
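A sketch of the two metrics side by side; every count below is a hypothetical number of my own, chosen only to show that a part with several defects moves DPMO but not PPM.

```python
parts_inspected = 50_000
defective_parts = 120        # parts with one or more defects (hypothetical)
total_defects = 180          # total defects; some parts carry several (hypothetical)
opportunities_per_part = 5   # pre-identified defect opportunities (hypothetical)

# PPM counts defective parts per million parts.
ppm = defective_parts / parts_inspected * 1_000_000

# DPMO counts defects per million pre-identified opportunities.
dpmo = total_defects / (parts_inspected * opportunities_per_part) * 1_000_000

print(f"PPM  = {ppm:.0f}")
print(f"DPMO = {dpmo:.0f}")
```

Because DPMO normalizes by opportunities, two processes with different complexity (different opportunities per part) can be compared on the same scale, which PPM alone cannot do.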
If we had a VOC score that is well correlated with the internal score, then we would not really need to spend so much to maintain an internal score; we might as well depend on the VOC scores directly. However, this is not always the case. There are many situations when certain customers choose not to report Quality issues promptly but silently switch over to another supplier or service provider if they are not satisfied. Due to the lack of adequate customer inputs, the VOC scores may not correlate with the internal score, even assuming the internal measurements and metrics are maintained correctly. This lack of correlation should not lead to a false sense that we are overdoing things internally when there aren't as many issues from the customer. Such situations can be challenging for Quality professionals, when management may tend to view many of the Quality checks and assessments as NVAs. That's why I emphasize that with just a positive correlation one cannot sit back believing all is well; at the same time, the lack of correlation should not lead to complacency, especially when the VOC inputs are sparse.

There is no doubt that the internal Quality score has to reflect Quality as per the customer's requirements. However, there are practical scenarios where the internal score may fail to show a positive correlation with the VOC scores. Let me cover one such situation. The method of measuring the internal Quality score is in our control, whereas the VOC score is not. Where we have a commonly agreed measurement procedure and a structured measurement is performed by the customer and reported, it is fair to expect a higher degree of positive correlation. This is more likely in an OEM kind of client relationship, where there is a clear contract and service-level agreements. It may not be the case in a consumer-durables kind of industry, where the VOC is never structured in a way that can establish a positive correlation, though such correlation is desirable. Hence the question: should it be a matter of concern if we are not able to establish a positive correlation every time with whatever VOC we obtain? Or are there other ways of interpreting the internal scores for the benefit of the customer?

YES! After carefully reading the situation, I interpret the question as asking whether the internal Quality scores need to have a positive correlation with the VOC scores to be considered as serving their purpose. There are situations where such a correlation need not be a necessary condition.

Component failures
Venugopal R replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins.
The given situation is that of a reliability failure, where time is a factor. Obviously it is the infant mortality rate that is causing pain to the client. If the option of accelerated testing is ruled out due to cost considerations, the following approach may be adopted for quick identification of the most probable causes.

Assuming that sufficient failure data is available, plot the failure rate vs. time graph. This will usually tend to take the shape of an exponential distribution, with a high concentration of failures in the early period. From this plot, determine a time period beyond which the failure rate tapers down to a safe level. Pick a reasonable number of samples of failed components from the "early failure period" and seek an equal number of samples of components that are still performing successfully beyond the "safe cut-off" period. (This will need the client's cooperation, as well as willingness by the supplier to replace those good components, to support this exercise.)

Now we have a set of "survived components" and a set of "failed components". Depending upon the type of component, list a set of quality characteristics to be compared between the "survived" and "failed" components. The observations for each characteristic from the "survived" components need to be compared against the corresponding set from the "failed" components. To decide on the significance of the differences for each characteristic, appropriate hypothesis tests may be applied where relevant. As a result of this exercise, the supplier should be able to redefine certain specification tolerances and manufacture components that are bound to be more reliable.

The other alternative, or supplementary, approach could be to collaborate with the client to share the investment in accelerated testing. If setting up such facilities is not feasible, the services of external laboratories may be sought. Ultimately, the outcome is going to be a win-win for both parties!
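The "survived vs. failed" comparison can be sketched with a Welch t-statistic on one characteristic. The characteristic (hardness) and all readings below are assumptions for illustration; in practice each listed characteristic would get its own appropriate hypothesis test.

```python
from statistics import mean, stdev

# Hypothetical hardness readings (units assumed) for the two groups.
failed = [52.1, 51.8, 53.0, 52.5, 51.5, 52.8, 53.2, 52.0]
survived = [55.0, 54.6, 55.4, 54.9, 55.2, 54.4, 55.1, 54.7]

def welch_t(a, b):
    # Welch's two-sample t statistic (does not assume equal variances).
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

t = welch_t(failed, survived)
print(f"t = {t:.2f}")  # strongly negative: failed parts measure lower
```

A large |t| flags a characteristic that discriminates between the groups and whose tolerance may be worth tightening, while a near-zero t suggests that characteristic is not driving the early failures.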