Venugopal R
Excellence Ambassador
SVP Global Quality

  1. Avoid branding the program with a "Lean Six Sigma" tag at the start. Understand the leadership's biggest pains. There are always requirements for improving the effectiveness and efficiency of processes. Take up such a pain area (or improvement area) and initiate it through the organization's regular procedures, viz. change requests or CAPA processes, which normally flow through the relevant cross-functional stakeholders. No SME should feel it is an added activity. Once it succeeds, let leadership see a fact-based success story that encourages them to give you the next one. Step up the usage of LSS tools and practices as required, and a seamless buy-in for the program will follow.
  2. Working with sample means: When we work with sample means, data from any distribution, even a discrete one, are subject to the properties of the normal distribution, as governed by the central limit theorem. This concept enables the use of normal distribution laws in tools such as control charts.

    Proportions / percentages: When discrete data such as defectives are converted into a proportion or percentage, they take on some features of continuous data, viz. we can express them to any decimal level of accuracy.

    Ordinal data: We often collect ordinal data on a Likert scale with ratings 1 to 5. When we average such ratings for a particular parameter across respondents, the result becomes a metric that can be read on a continuous scale.

    Histogram: Whenever we plot a histogram, even for data of a discrete nature (for example, the number of corrections in a document per day), a large amount of data tends to exhibit the behavior of continuous data, say a normal distribution.

    FMEA ratings: In FMEA we assign discrete rankings between 1 and 10 for severity, occurrence and detection, but once these are multiplied into an RPN, the measure becomes more continual in nature, though it remains a whole number.

    Failure data / distributions: Individual failure data are counts of occurrences, obviously discrete to start with. However, when we convert them to a failure rate and plot distributions against time, they are treated as continuous distributions such as the exponential, Weibull, etc.
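The central limit theorem point can be seen in a short simulation (plain Python, all numbers simulated): sample means of an obviously discrete population, die rolls, cluster around the population mean of 3.5 with a spread close to σ/√n.

```python
import random
import statistics

random.seed(42)

# Population: a clearly discrete distribution (die rolls, values 1..6).
# Population mean = 3.5, variance = 35/12.
def sample_mean(n):
    return statistics.mean(random.randint(1, 6) for _ in range(n))

# Draw many sample means, each of size n = 30.
n, trials = 30, 2000
means = [sample_mean(n) for _ in range(trials)]

grand_mean = statistics.mean(means)
spread = statistics.stdev(means)

print(round(grand_mean, 2))  # close to the population mean 3.5
print(round(spread, 2))      # close to sigma / sqrt(n) = sqrt(35/12) / sqrt(30), about 0.31
```

A histogram of `means` would look bell-shaped even though the underlying data can only take the values 1 to 6.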
  3. Internal Quality Score

    My previous post discussed Situation 1, non-correlation between the internal Q score and the VOC score. Let's look at another situation.

    Situation 2: Lack of correlation where the internal Q score shows poorer results than the VOC score. Before we conclude whether the internal score serves any purpose, here are some of the questions that need to be asked:
    1. Is the VOC score structured and reported as per an agreed procedure?
    2. Is there a possibility that, despite a dip in Quality, the VOC is silent on certain issues, with a risk of a silent drift by the customer?
    3. Has a detailed analysis been done on the key findings of the internal measurement, and an assessment made of the relevance of those findings from the customer's point of view?
    4. If sampling procedures are used, are the margins of error (MOE) comparable for the internal and external methods?
    5. Could there be certain reliability-related issues that might show up in the VOC score only after a period of time?
    6. It is sometimes common practice to keep the internal measurements more stringent than what the customer would apply, for higher sensitivity. This could affect the correlation.
    7. The internal measurement might take into account issues that impact the customer as well as issues that do not, but are important from an internal process efficiency point of view.

    After considering the couple of non-correlation situations discussed above, even where there is a positive correlation, certain questions may still be worth looking into. Depending on the interest of the debating participants, I will delve into that area. Thanks.
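When checking for such correlation in practice, a Pearson coefficient between the two score series is a reasonable first look. A minimal sketch in plain Python; the monthly scores below are purely hypothetical.

```python
# Hypothetical monthly scores; the values are illustrative only.
internal = [92, 90, 88, 91, 87, 85, 89, 86]
voc      = [95, 94, 93, 95, 92, 91, 93, 92]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(internal, voc)
print(round(r, 2))  # close to +1 here, i.e. the two scores move together
```

A value near zero or a negative value would point to one of the situations discussed in these posts rather than to a useless internal score.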
  4. Internal Quality Score

    Let's consider specific cases of non-correlation between the internal Quality score and the VOC score.

    Situation 1: If the VOC score shows poorer Quality than the internal score, it is certainly a cause for concern. It serves a purpose to examine some of the questions below:
    1. Is the detection capability of the internal measurement adequate?
    2. Could it be the result of damage that occurred subsequent to the internal measurement?
    3. Is there a difference in the understanding / interpretation of the Quality standard?
    4. Has a new problem cropped up that was never part of the existing Quality standard?
    5. If a sampling methodology is used for the score determination, are the margins of error for the internal and VOC scores comparable?
    6. Is it a subjective / aesthetic preference-related issue, which could vary from customer to customer?
    7. Is it an assignable spike due to a specific problem concentrated in a few products of a particular batch?

    We will discuss another non-correlating situation in my next post.
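On question 5, the margins of error can be compared directly once the sample sizes are known. A sketch using the normal-approximation MOE for a sampled proportion; the defect rate and sample sizes below are illustrative only.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sampled proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative figures: an internal audit samples 400 units per period,
# while the customer's VOC survey covers only 50.
moe_internal = margin_of_error(0.05, 400)
moe_voc = margin_of_error(0.05, 50)

print(round(moe_internal * 100, 1), "%")  # internal MOE, about 2.1%
print(round(moe_voc * 100, 1), "%")       # VOC MOE, much wider
```

With such a mismatch in MOE, period-to-period disagreement between the two scores is expected even when both measure the same underlying quality.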
  5. Internal Quality Score

    Looking at some of the responses, I would like to reiterate the question of this debate. The question is not about whether correlation is required or desirable. The question is: "Given a situation where the internal service quality score fails to show a positive correlation with the VOC score, does it serve its purpose or not?" Or, to express the question in other words: "If the internal Quality score does not positively correlate with the VOC score, is it to be discarded as not serving any purpose?" My answer has been: "It need not be discarded in all such situations." In other words, "Yes, it would still serve its purpose, depending upon the situation."
  6. DPMO vs PPM

    PPM (parts per million) is a measure of defectives: it indicates the number of parts having one or more defects in a given population. This measure does not provide insight into the quantum of defects, since a part could carry more than one defect. PPM is a popular measure when dealing with proportion defectives where a large number of pieces is involved, and even one defect in a piece usually renders it unusable or forces rework, e.g. auto components supplied to a large automobile manufacturer. It also applies when we are referring to a single quality characteristic of interest, say the weight of a bottle of packaged drinking water, or the proportion of batches delivered on time.

    DPMO (defects per million opportunities) is a measure of defects. For a part, it may be easy to express the defects per part or per x number of parts. Imagine, however, that we are dealing with a process and need to express the number of defects over a certain period of time. We could state the number of defects from the process in that period, but if we need to compare the defect rates of process A and process B, the comparison is meaningful only if the opportunities for defects in these processes are comparable. This is not always the case, and hence the approach adopted is to pre-identify the number of defect opportunities in a given process and use the ratio of defects over the number of opportunities. For ease of dealing with the numbers, it is multiplied by a million, and hence known as DPMO. The opportunities represent potential failure modes. For example, DPMO can be used to express the Quality level of a check-processing activity or a knowledge transfer process, or to compare different production processes.
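The distinction can be shown in a few lines of Python; the batch figures below are purely illustrative.

```python
def ppm(defective_units, total_units):
    """Defective parts per million: units with one or more defects."""
    return defective_units / total_units * 1_000_000

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Illustrative batch: 1,000 units inspected, 8 of them defective,
# carrying 12 defects in total, with 5 defect opportunities per unit.
print(ppm(8, 1_000))       # 8000.0
print(dpmo(12, 1_000, 5))  # 2400.0
```

Note that the two numbers answer different questions: PPM counts bad units, while DPMO normalises the defect count by the complexity of the unit.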
  7. Internal Quality Score

    If we have a VOC score that is well correlated with the internal score, then we do not really need to spend so much to maintain an internal score; we might as well depend on the VOC scores directly. However, this is not always the case. There are many situations where certain customers may choose not to report Quality issues promptly, but might silently switch to another supplier or service provider if they are not satisfied. Due to the lack of adequate customer inputs, the VOC scores may not correlate with the internal score, even assuming the internal measurements and metrics are maintained correctly. This lack of correlation should not lead to a false sense that we are overdoing things internally because there aren't as many issues from the customer. Such situations can be challenging for Quality professionals, when management may tend to view many of the Quality checks and assessments as NVAs. That's why I emphasize that a positive correlation alone is no reason to sit back with the belief that all is well; at the same time, a lack of correlation should not lead to complacency, especially when the VOC inputs are lower.
  8. Internal Quality Score

    There is no doubt that the internal Quality score has to reflect Quality as per the customer's requirements. However, there are practical scenarios where the internal score may fail to show a positive correlation with the VOC scores. Let me cover one such situation. The method of measuring the internal Quality score is in our control, whereas the VOC score is not. Where there is a commonly agreed measurement procedure and a structured measurement is performed and reported by the customer, it is fair to expect a higher degree of positive correlation. This is more likely with an OEM kind of client, where there is a clear contract and service level agreements. However, it may not be the case in a consumer durables kind of industry, where the VOC is never structured enough to establish a positive correlation, though that is desirable. Hence the question is: "Should it be a matter of concern if we are not able to establish a positive correlation every time with whatever VOC we obtain?" Or are there other ways of interpreting the internal scores for the benefit of the customer?
  9. Internal Quality Score

    YES! After carefully reading the situation, the question is interpreted as whether the internal Quality scores need to have a positive correlation with the VOC scores to be considered as serving their purpose. There are situations where such a correlation need not be a necessary condition.
  10. Component failures

    The given situation is that of a reliability failure, where time is a factor. Obviously it is the infant mortality rate that is causing pain to the client. If the option of accelerated testing is ruled out on cost considerations, the following approach may be adopted for quick identification of the most probable causes.

    Assuming that sufficient failure data is available, plot the failure rate vs. time graph. This will usually tend to take the shape of an exponential distribution, with a high concentration of failures in the early period. From this plot, determine a time period beyond which the failure rate tapers down to a safe level. Pick a reasonable number of samples of failed components from the "early failure period" and seek an equal number of samples of components still performing successfully beyond the "safe cut-off" period. (This will need the client's co-operation, as well as willingness by the supplier to replace those good components, to support this exercise.)

    Now we have a set of "survived" components and a set of "failed" components. Depending upon the type of component, list a set of quality characteristics to be compared between the two sets. The observations for each characteristic from the "survived" components are compared against the corresponding set from the "failed" components. To decide on the significance of the differences for each characteristic, appropriate hypothesis tests may be applied where relevant. As a result of this exercise, the supplier should be able to re-define certain specification tolerances and manufacture components that are bound to be more reliable.

    The other alternative or supplementary approach could be to collaborate with the client to share the investment in accelerated testing. If setting up such facilities is not feasible, the services of external laboratories may be sought. Ultimately, the outcome is going to be a win-win for both parties!
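The survived-vs-failed comparison can be sketched as below. The component characteristic and its values are hypothetical, and Welch's t statistic stands in for whichever hypothesis test is appropriate to the data.

```python
import statistics

# Hypothetical measurements of one quality characteristic (say, a
# plating thickness in microns) from failed vs. survived components.
failed   = [11.2, 11.5, 10.9, 11.8, 11.4, 11.1, 11.6, 11.3]
survived = [12.4, 12.1, 12.6, 12.3, 12.8, 12.2, 12.5, 12.0]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

t = welch_t(failed, survived)
print(round(t, 2))  # a large |t| (roughly above 2) flags a significant difference
```

A characteristic showing a large |t| is a strong candidate for the tightened specification tolerance mentioned above; one with |t| near zero can be set aside.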
  11. I believe the excellence ambassadors will be familiar with the fundamental definitions and calculations for Cpk, Ppk and Sigma, so I am not elaborating on them and will get straight into the comparative discussion.

    Process capability index: Cp is calculated using the variation within a rational subgroup and hence indicates the process potential, but does not include the likely real-time variation between groups, which could have an influence in the long term. Cp is a very good measure for assessing the inherent process potential, and for assessing the impact of any change or improvement on the capability of a given process.

    Process performance index: Pp considers the actual overall variation in its calculation and hence gives a more realistic prediction of the process performance. Although Cp may appear good (say > 1.67), it is important that Pp is also calculated, to assess the process performance over time when subjected to the day-to-day variations of real-life production. Pp can serve as a measure for production part approval criteria.

    When the above measures take into account the shift of the mean from the centre, assuming we have upper and lower specification limits, the respective measures Cpk and Ppk are used. Unless the study is for a very short run, it is always recommended to use Cpk or Ppk, as the case may be. When the process variation is in statistical control, Cp and Pp tend to become equal.

    Cpk and Ppk are meaningful only when we have upper and lower specification limits, and are ideal when we deal with variable data. When we need a process performance measure that is comparable across different processes, including those with attribute data, the sigma level becomes useful. Since there is an established relationship between the defect level (DPMO) and the associated sigma level, it is a versatile measure for expressing and comparing process performance. However, it is recommended to maintain the Cpk and Ppk values for the benefits discussed, and to maintain the corresponding sigma conversion for company-wide uniformity.
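The four indices can be computed from the same inputs. A minimal sketch, assuming illustrative spec limits of 10 ± 0.6, a within-subgroup sigma of 0.1, an overall sigma of 0.15, and a mean shifted to 10.1:

```python
def cp(usl, lsl, sigma):
    """Potential index: spec width over six sigma, ignoring centring."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Capability index penalising the distance of the mean from the nearer limit."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

usl, lsl = 10.6, 9.4
print(round(cp(usl, lsl, 0.1), 2))          # Cp,  within-subgroup sigma -> 2.0
print(round(cpk(usl, lsl, 10.1, 0.1), 2))   # Cpk, same sigma with the mean shift -> 1.67
print(round(cp(usl, lsl, 0.15), 2))         # Pp,  overall sigma -> 1.33
print(round(cpk(usl, lsl, 10.1, 0.15), 2))  # Ppk -> 1.11
```

The same two formulas serve all four indices; only the sigma estimate (within-subgroup vs. overall) and the use of the actual mean change between them, which is the comparison made in the text above.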
  12. Long term vs Short Term

    The long-term performance, also known as the "long-term capability" of a process, by definition has to be assessed over a reasonable period of time. At any given point in time, if we measure the process capability, it will always be the short-term capability. The short term denotes the process potential when operated under the set of variations that are inherent in the process at any point in time. Statistically, these are variations typically depicted by the spread of an associated normal distribution on both sides of the mean value. The short-term capability is particularly useful for quickly understanding the effectiveness of a change that is expected to reduce variation, i.e. improve the process capability. If the short-term capability itself does not meet the requirements, there is no need to run a long-term capability study.

    Knowing that in the long term a process will be subjected to additional variations that could shift the mean value, the short-term capability has to be good enough for the process to accommodate those additional variations, so that the long-term capability still meets the requirements. Considering the practical challenges, in time and effort, of obtaining the long-term process capability, it is conventionally agreed that a shift of the mean value by 1.5σ on either side during the long term is an acceptable allowance. Thus, in order to attain a long-term process capability of 4.5σ, we need to ensure a short-term capability of 6σ.
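The familiar 6σ-to-3.4-DPMO arithmetic follows directly from this 1.5σ allowance. A one-sided sketch using the standard normal distribution from the Python standard library:

```python
from statistics import NormalDist

def dpmo_at(z_short, shift=1.5):
    """One-sided tail DPMO at a short-term sigma level, allowing a long-term mean shift."""
    z_long = z_short - shift          # e.g. 6.0 sigma short term -> 4.5 sigma long term
    return NormalDist().cdf(-z_long) * 1_000_000

print(round(dpmo_at(6.0), 1))  # the well-known 3.4 DPMO for a "6 sigma" process
```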
  13. Control Limits

    For statistical control charts, the control limits are formed from the process's own historical data. To answer the above question, let's quickly recap how the control limits are formed. Typically, 30 or more past data points are taken and the control limits are worked out using the formula appropriate to the nature of the data and the control chart applied. I am skipping the elaboration of control chart construction in this discussion.

    (i) Once the control limits are derived as above, they become a baseline against which subsequent readings are plotted. Since we keep the limits fixed on the baseline inputs, if the variation increases, points will start falling outside the control limits, or runs will start to appear indicating that the process is no longer in control with respect to the baseline limits.

    (ii) Another scenario is when we do not fix the baseline limits, but let the UCL and LCL keep revising themselves as and when data points are added to the control chart. In this case, if the variation increases, the control limits will keep widening and might give the illusion that the process continues to be in control. As a matter of fact, the process can still be termed "within statistical control" even with an increased variation, so long as the points are contained within those widened limits.

    (iii) Hence, to keep track of changes in the variation level and at the same time watch whether the process is within statistical control, "stages" can be defined for periods of the control chart run, and the control limits worked out for each stage. This helps us see graphically any change in variation (the distance between the control limits) and the extent of statistical control within each stage. Such an option is available in Minitab.
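The baseline-limit calculation in (i) can be sketched for an individuals (I) chart. The readings below are hypothetical, and 2.66 is the standard constant (3/d2 with d2 = 1.128) applied to the average moving range of size 2:

```python
import statistics

def i_chart_limits(points):
    """Centre line and 3-sigma control limits for an individuals chart,
    estimated from the average moving range of the data itself."""
    mr = [abs(b - a) for a, b in zip(points, points[1:])]
    mr_bar = statistics.mean(mr)
    centre = statistics.mean(points)
    width = 2.66 * mr_bar  # 3-sigma width via the moving-range estimate
    return centre - width, centre, centre + width

# Hypothetical baseline readings used to freeze the limits.
baseline = [10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 10.0, 9.9, 10.2, 10.0]
lcl, cl, ucl = i_chart_limits(baseline)
print(round(lcl, 2), round(cl, 2), round(ucl, 2))
```

Calling `i_chart_limits` separately on the data of each defined stage gives exactly the staged limits described in (iii).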
  14. The table below gives the ranking for the five metrics (NPS, C-SAT, Churn, CAC, CES) in order of importance for the performance of an e-commerce website for electronic goods (for the website only).

    Rank  Customer Satisfaction Metric
    1     CES – Customer Effort Score
    2     NPS – Net Promoter Score
    3     C-SAT – Customer Satisfaction Index
    4     Churn – Customer Loss Assessment
    5     CAC – Customer Acquisition Cost

    1. CES – Customer Effort Score: When someone navigates an e-commerce website all by themselves, the ease and user-friendliness of the website are most important. The main reason one purchases a product through a website is to save the time that would otherwise be spent hunting and negotiating for the right product across numerous shops. The speed, clarity and detail one can obtain with relative ease during the interaction make CES the most important score.

    2. NPS – Net Promoter Score: A high CES will motivate a customer to be loyal and to refer the site to other potential customers. This score not only supports repeat visits by existing customers, but also the attraction of new customers. Hence NPS takes the second rank.

    3. C-SAT – Customer Satisfaction Index: This reflects the satisfaction of customers who have already used the site and gives an assessment of their repeat use, but unlike NPS, it does not assess referrals. Thus it is placed just below NPS.

    4. Churn – Customer Loss Assessment: While this assesses the ratio of the number of new customers to the number of customers lost, it does not reveal the dissatisfaction drivers of the lost customers. Relying only on this metric could prove risky, since, for want of the real drivers that influence the ratio, it could change drastically at any time. It would be a good idea to use this metric along with one of the higher-ranked metrics. Thus it takes the 4th rank.

    5. CAC – Customer Acquisition Cost: While this metric is influenced by customer satisfaction, there are other factors that influence the acquisition cost. Hence it may not be a good metric for assessing customer satisfaction with the use of the product (the website).
  15. The table below gives the ranking for the five metrics in order of importance and relevance for assessing the performance of a call center for credit card support services.

    Rank  Customer Satisfaction Measure
    1     CES – Customer Effort Score
    2     C-SAT – Customer Satisfaction Index
    3     Churn – Customer Loss Assessment
    4     NPS – Net Promoter Score
    5     CAC – Customer Acquisition Cost

    1. The top rank goes to CES, since it gives a metric to assess the customer's experience of getting an issue resolved. CES is best applied after every customer interaction for issue resolution.
    2. The C-SAT metric is not as specific as CES for the given question; however, it is useful for obtaining a customer satisfaction index with respect to a product or service.
    3. Churn, or customer loss assessment, could be a measure, provided we are able to segregate the customer loss caused by the call center experience from other reasons.
    4. NPS evaluates the overall experience of holding the credit card, not just the call center support. NPS matters for assessing the potential to attract new customers through referrals by existing ones, whereas the given situation pertains to existing customers only.
    5. CAC applies more to determining how many leads convert into actual customers, i.e. the cost invested to acquire a customer. This metric is least applicable to the given example.