Venugopal R

Excellence Ambassador

About Venugopal R

Profile Information

  • Name: Venugopal R
  • Company: SourceHOV
  • Designation: SVP Global Quality

  1. Venugopal R

    BHAG (Big Hairy Audacious Goal)

    Benchmark Six Sigma Expert View by Venugopal R

    BHAG is, no doubt, a long-term vision, usually spanning ten-plus years. It is a transformation goal that aims to position the organization for revolutionary change. The guideline for Black Belt projects, in contrast, is to have a SMART goal, and Black Belt projects need to be completed within a few months at most.

    The strategic element of Six Sigma calls for annual goal setting and the deployment of goals to identify the needs and opportunities to improve, re-design or newly design processes. The well-known approaches, viz. DMAIC, DMADV and DFSS, are popular methodologies that guide teams in executing such projects. While deciding the annual goals for an organization, the senior leadership would consider the BHAG vision and ensure that the annual goals are aligned to steer the organization towards the BHAG. These then translate into more specific objectives that can be chartered as Black Belt projects.

    Thus Black Belt projects would certainly serve as a vehicle that provides substantial traction towards the BHAG, provided the senior leadership makes use of the Six Sigma organization effectively. However, Black Belt projects alone may not be sufficient to fulfill the aspiration of a BHAG. It will certainly need an emphasis on strategic fortitude, using tools and methodologies that apply creativity and innovation as well.
  2. Many of us will be familiar with, and are likely to have dealt with, "Special processes" as defined by ISO 9001. To re-iterate the definition of a 'Special Process': these are processes whose outcome cannot be easily measured or evaluated, and hence it is very important to ensure compliance with the process parameters, to provide assurance that the output can be confidently relied upon. The most popular examples provided are welding, soldering, painting, etc.

    In my experience I have come across some specific examples. For instance, the 'burst strength' of an auto clutch facing depends upon proper processing and curing of the friction material; this in turn depends upon several process parameters, ranging from the appropriate proportion of the pre-mix, through the parameters of the molding and baking processes, to the extent of force applied in the grinding and finishing operation. Other examples would include the 'insulation breakdown resistance' of wiring harness systems used in appliances and automobiles. In the IT services industry, many processes are performed directly on the customer's mainframe, with no or very limited opportunity for verification or correction. In the banking industry, if the applicable discounts for a product are not withdrawn by the system after the intended period, it causes revenue losses for the bank, which may not be easily recovered. Usage of the right skills and checkpoints is crucial to assure that poor quality does not hit the customer's processes or the end customer. It is the responsibility of the producer to identify special processes, whether or not they are pointed out by the customer, and to exercise and demonstrate appropriate pro-active controls.

    Coming to 'Special Requirements' as defined by the Aerospace standards: they are a bit different from the "Special processes" defined in ISO 9001, in the sense that Special Requirements as per the AS standards are identified by the customer as product characteristics / performance parameters that carry a 'high risk' of not being met. Factors used in the determination of special requirements could include process complexities, past experience and limitations of industry process capabilities. Identification of special requirements, including the key characteristics and critical items, is one of the defined outputs of Phase 2 of the Aerospace APQP. Some examples provided by the IAQG guide for the 9101 standard include new technology applications, new work sharing, introduction of new processes or machines, and new competency requirements. The focus here is on the product requirements, and from the way the standard has defined it, it appears that one of the criteria considered for identifying 'special requirements' is the fact that the product may be produced through a 'special process'.

    In the context of this discussion, I would also like to mention NADCAP (National Aerospace and Defense Contractors Accreditation Program), which is an industry-managed approach to conformity assessment of 'Special processes' in the Aerospace industry.
  3. Venugopal R

    Yokoten

    Reinventing the wheel can be an arduous task. It is basic common sense that we should try not to duplicate efforts, but build upon wisdom that already prevails. The distinctiveness of the Japanese companies is that they have demonstrated the art of picking up an invention that already exists and taking it to an unimaginable dimension. The transformation of the auto industry by the Japanese during the 1980-90 period awakened the US auto giants to revise their own standards for automobiles. Similar is the case with many other products that the world has seen. It would not be out of place to mention the pioneering work by the Indian Statistical Institute on the statistical design of experiments - many of those approaches were practically applied in what emerged as the very popularly accepted Taguchi methods. Indeed, the Japanese have left a legacy in the ability to build and excel upon existing work in many areas, be it Product, Process or Practices.

    Now let us see the Yokoten practice as applied within an organization. Yokoten, as many of you will have figured out, is commonly referred to as the lateral sharing of learning across an organization. In many of our organizations, we continue to have pockets of good work going on, but with little or no publicity. People who have been in an organization for a long tenure would have seen the same or similar continuous improvement projects being repeated over time. We often talk about 'sharing best practices', but from a Yokoten point of view, shouldn't we rather say "building upon best practices"?

    In order to propagate Yokoten practices better in organizations, we need to consider multiple factors. Let's discuss one such factor here. Usually, when an improvement project is completed, there is a requirement for the team to come out with 'opportunities for replication'; this gets presented, and many a time nothing much emerges out of it. The impression prevails that replication is a relatively simple process, and mostly, even if someone takes it up sincerely, it is perceived as a low-recognition effort. Instead, "building upon best practices" can be viewed as a creative ability and an effort that carries equal importance, or maybe more in some cases. However, the credit for the original effort will not diminish at all. Thomas Edison is still remembered as the inventor of the bulb, though in today's world the bulb has undergone significant transformations from its original form!
  4. Venugopal R

    Gemba

    Benchmark Six Sigma's MBB Expert Response (not contesting)

    I have been fortunate to have had rich work experience with organizations imbibing Japanese and Western management styles. I would not want to come to any conclusion as to which is better; I find positives in both approaches, and finally it is the effective blend of best practices, applied with cognizance, that gives the result. Whether we talk Gemba or MBWA, it is the manner in which they are practised that makes the difference. Both mean that we need to visit the workplace. Both mean that we need to interact with the people who are closest to production and who touch the products. Both mean that we need to focus on continuous improvement.

    I am not sure what thoughts many of you would get when you hear these terms, but let me express mine. When I hear Gemba, it denotes "roll up your sleeves and get down to the workplace". If it is a manufacturing floor, go near the production area, the machines and the people at the work spot under consideration. If we are talking about sales, go to the showroom or sales counters where the actual handshake with customers is happening, and participate in the sales process. In the case of IT services, go and sit down in front of the monitor, by the side of the processors who are processing the transactions or doing testing. Getting a 'hands-on' feel of the work, and empathising with the people engaged in it to understand the ground reality, is what Gemba is all about. Gemba visits may be done any time, as required, and need not follow a scheduled timetable.

    MBWA gives me the feeling of getting an overall view of what goes on in the actual workplaces. These are more structured and planned visits by senior leaders, mostly accompanied by the concerned area supervisors. Here the senior leaders may assess the processes as per a systematic schedule/checklist, or it could be an ad-hoc assessment. Unlike Gemba, MBWA does not give a feel of 'rolling up sleeves' and working, but more of 'higher level' observations, assessments and understanding. Observations are made on the spot, the issues are heard and seen at the workplaces, and questions are asked on the spot of the people who are closest to the work. Senior leaders visiting the workplace instils seriousness and a sense of importance in the minds of the people there, be it a shop floor, sales and service counter, call centre or IT services.

    Which is better, Gemba or MBWA? Considering the above discussion and understanding, both need to be practised. There is a need for structured MBWA as well as Gemba visits by senior leaders. Both have common benefits as well as specific benefits.
  5. Venugopal R

    Should one know the formulas to be good at LSS?

    It depends on what position you are aiming for. If you are aiming for an LSS trainer role, it would be important to have a reasonable grasp of the underlying statistical principles, if not the actual formulas. For other roles, where you may have to lead an LSS project in whatever else may be your area of competency, you can rely on statistical software and take the help of an LSS BB or MBB where you need it. One of the main reasons why these applied subjects did not (and maybe still do not) get enough buy-in was that many people used to be put off by the statistics part. In the earlier days one had to use tables and calculators to do the workings, but now, thanks to the advanced software packages available, we are able to perform that part with ease. So, if we keep harping too much on the theoretical part, we may once again kindle that discouragement. We need to be careful and practical in this approach.
  6. Venugopal R

    Rolled Throughput Yield Part 2

    Rolled Throughput Yield (RTY) is calculated by multiplying the yields of each process. Let me illustrate an application of this metric using an example. Company XYZ manufactures friction material that goes into auto disc brake pads. The processes under consideration start with the mix, which is subjected to a pre-form process, then compression molding and then grind finishing. Let's assume that the standard weight of mix required for each pad is 100 gms. Suppose 10000 gms of mix is fed into the processes, and the yields of the three processes, Preform, Compression molding and Finishing, multiply out to an RTY of 0.8. This means that when a quantity of mix equivalent to 100 pads was fed into the system, we ended up getting only 80 pads.

    The loss of yield can be categorized into 2 categories:
    1. Losses due to spillage, gaseous waste and finishing dust (SGF)
    2. Rejections that were either scrapped or reworked (SRW)

    The RTY brings out the practical yield from the process at large. If we take up a Six Sigma project to improve the RTY (say from 0.8 to 0.9), it will lead to the revelation and analysis of the 'Hidden Factory' in terms of the scrap and rework handling that goes on in between the processes. Further probing would lead to the question of how much of the SGF wastage can be reduced. It is likely that factories will have practices by which reworked material from a particular process is fed back into the next process. Similarly, the wastage due to spillage may be retrieved and re-routed to the preform process, and the grind dust may be collected and recycled, at permitted proportions, into the molding process. If we assume that around 2% of the SGF and 8% of the SRW are re-introduced into the process, the resulting yield (had we not considered RTY) would have worked out to 90%, and we would have missed exposing and quantifying the "Hidden Factory" and the opportunity for improvement.
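    To make the arithmetic concrete, here is a minimal Python sketch. The per-process yields are hypothetical, chosen only so that their product matches the 0.8 RTY in the example (the original table's exact figures are not reproduced here); the 90% apparent yield is the figure quoted above, not derived.

    ```python
    # Rolled Throughput Yield: the product of individual process yields.
    from math import prod

    # Hypothetical per-process yields; only their product (~0.80) is
    # taken from the example in the text.
    process_yields = {
        "preform": 0.93,
        "compression_molding": 0.90,
        "grind_finish": 0.956,
    }

    rty = prod(process_yields.values())
    print(f"RTY = {rty:.2f}")  # ~0.80: mix for 100 pads -> ~80 good pads

    # When reworked/recycled material is re-introduced and counted as
    # good, the apparent yield looks far better and hides the
    # scrap/rework loop (the 'Hidden Factory').
    apparent_yield = 0.90  # quoted in the example above
    print(f"Hidden-factory gap = {apparent_yield - rty:.2f}")
    ```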
  7. Venugopal R

    Power of Hypothesis Test

    Decision based on test | Reality: H0 is True                            | Reality: H0 is False
    -----------------------+------------------------------------------------+-----------------------------------------------
    Accept H0              | Correct Decision (1 - alpha), Confidence Level | Type II error (Beta)
    Reject H0              | Type I error (alpha)                           | Correct Decision (1 - Beta), Power of the Test

    If we want the test to pick up a significant effect, it means that whenever H1 is true, the test should conclude that there is a significant effect. In other words, whenever H0 is false, it should conclude that there is a significant effect; that is, whenever H0 is false, it should reject H0. The probability of this is represented by (1 - Beta), which, as seen from the above table, is defined as the power of the test. Thus, if we want to increase the assurance that the test will pick up a significant effect, it is the power of the test that needs to be increased.
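    As a hedged illustration, a one-sided one-sample z-test makes the relationship between alpha, Beta and power easy to compute directly; all the numbers below (shift, sigma, sample size) are hypothetical:

    ```python
    # Power of a one-sided one-sample z-test, from first principles.
    from math import sqrt
    from scipy.stats import norm

    alpha = 0.05   # Type I error rate (significance level)
    delta = 1.0    # hypothetical true shift of the mean under H1
    sigma = 2.0    # hypothetical known process standard deviation
    n = 25         # hypothetical sample size

    se = sigma / sqrt(n)                  # standard error of the mean
    z_crit = norm.ppf(1 - alpha)          # rejection cutoff under H0
    beta = norm.cdf(z_crit - delta / se)  # P(accept H0 | H1 is true)
    power = 1 - beta                      # P(reject H0 | H1 is true)

    print(f"beta = {beta:.3f}, power = {power:.3f}")  # ~0.196, ~0.804
    ```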
  8. Venugopal R

    Measure of Dispersion

    Range is, no doubt, the simplest measure of dispersion. Range, however, can mislead us when there are outliers in the sample, since only the 2 extreme values are used to calculate it. We need not go into the advantages of using the standard deviation, since most of us know them. However, in situations where we deal with small and equal sample sizes, the range is a very suitable measure.

    One of the best examples we have is the usage of range in an Xbar-R chart. Here, the samples are taken in the form of rational subgroups. Each subgroup consists of a small but equal sample size, say around 4 units. Such sample sizes are too small for computing standard deviations. The concept of rational subgrouping, with a very short time gap between the samples, reduces the possibility of outliers. Even if we do have outliers, those range values will stand out in the control chart and will be removed during the 'homogenization' exercise. Hence range can be used as a measure of variation in such cases.
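    A small sketch of this use of range, assuming normally distributed subgroup data; the value 2.059 is the standard control-chart d2 constant for subgroups of size 4:

    ```python
    # Estimating within-subgroup sigma from subgroup ranges (Xbar-R logic).
    import numpy as np

    rng = np.random.default_rng(0)
    # 25 rational subgroups of size 4 from a simulated stable process
    subgroups = rng.normal(loc=50.0, scale=2.0, size=(25, 4))

    ranges = subgroups.max(axis=1) - subgroups.min(axis=1)
    r_bar = ranges.mean()
    sigma_hat = r_bar / 2.059  # d2 constant for subgroup size n = 4

    print(f"R-bar = {r_bar:.2f}, sigma estimate = {sigma_hat:.2f}")  # ~2.0
    ```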
  9. Venugopal R

    Skewness and Kurtosis

    One of the most important things one would like to infer from a descriptive statistics output for any data is how much the data distribution complies with, or deviates from, a normal distribution. Skewness and Kurtosis are measures that quantify such deviation, often referred to as 'shape' related parameters. These measures are particularly useful when comparing 2 distributions to decide on the extent of normality - e.g., the delivery time for a product compared between two delivery outlets.

    Data may be spread out more on the left, more on the right, or uniformly around the center. For a normal distribution, the data is spread symmetrically about a central point and is not skewed; here the median, mode and mean are at the same point and the skewness is zero. When skewness is negative, the data is left skewed; if it is positive, the data is said to be right skewed. While a graphical representation provides a very quick and easily understandable comparison of the skew or bias in a data distribution, the skewness measure helps in quantifying it. This is particularly important for decision making when comparing distributions that appear similar but have small differences in skew that may not show up well on a graph. In economics, the skewness measure is often used to study income distributions that are skewed to the right or to the left. Data distributions based on the lifetimes of certain products, like a bulb or other electrical devices, are right skewed: the smallest lifetime may be zero, whereas the long-lasting products provide the positive skew.

    Kurtosis is often referred to as a measure of the 'pointedness' of the peak of the distribution. It is also referred to as a measure of the 'weight of the tails' of the distribution. Let me attempt to make the understanding of Kurtosis as simple as possible. It is known that while Normal distributions are symmetrical in nature, not all symmetrical distributions are Normal. A perfect normal distribution will have an excess kurtosis of β2 - 3 = 0. A distribution with positive excess kurtosis, known as Leptokurtic, will have β2 - 3 > 0; one with negative excess kurtosis, known as Platykurtic, will have β2 - 3 < 0. To illustrate with an example, most of us are familiar with the 't' distribution, which may appear seemingly similar to the Normal distribution but is differentiated by a β2 - 3 that is greater than zero. Though Kurtosis is mostly discussed with respect to 'peakedness' and 'tail heaviness', it really depends on the extent of mass that moves to or from the 'center' of the distribution as compared with a normal distribution with the same mean and variance. One of the main uses of Kurtosis is as an underlying factor for testing normality, since many statistical techniques depend on the normality of the distribution.
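    A brief sketch of how these shape measures can be computed; note that scipy's kurtosis() returns excess kurtosis (β2 - 3) by default, so it is near zero for normal data and positive for the heavier-tailed t distribution mentioned above:

    ```python
    # Quantifying shape: skewness and excess kurtosis of simulated data.
    import numpy as np
    from scipy.stats import skew, kurtosis

    rng = np.random.default_rng(1)
    normal_data = rng.normal(size=10_000)
    t_data = rng.standard_t(df=5, size=10_000)  # symmetric, heavier tails

    print(skew(normal_data), kurtosis(normal_data))  # both near 0
    print(skew(t_data), kurtosis(t_data))            # skew ~0, kurtosis > 0
    ```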
  10. Venugopal R

    Risk Priority Number (RPN)

    I wouldn't want to go back to the points already covered by various Excellence Ambassadors for the previous question on the limitations of FMEA. Here I will limit my discussion to the limitations in the usage of the RPN.

    1. The best-known caveat in using the RPN number is that even if we rank priorities in descending order of RPN, the severity has to be given very serious attention. Let's examine the 2 scenarios below (also worked through in the sketch after this post):

    Scenario 1: Severity 10 (hazardous effect without warning), Occurrence 3 (low frequency, e.g. 1 in 10000), Detection 3 (controls have a good chance of detection) -> RPN = 10 x 3 x 3 = 90
    Scenario 2: Severity 4 (low effect, such as fit / finish issues that do not impact functionality or safety), Occurrence 7 (high frequency, e.g. 1 in 100), Detection 8 (controls have a poor chance of detection) -> RPN = 4 x 7 x 8 = 224

    In the above case, prioritizing Scenario 2 over Scenario 1 just based on the RPN number may be disastrous.

    2. Often it is difficult to obtain reasonably correct rating numbers for occurrence. Especially when we are dealing with new products / processes, arriving at occurrence ratings based on existing processes has limitations. Another risk is that the occurrence frequencies would have been based on data for a particular period, but in reality the occurrence frequency for a particular cause could change and alter our risk prediction and priority.

    3. Where detection has a human dependency, there is a possibility that when the occurrence of a particular cause becomes very low, human alertness reduces and actual detection could be poorer than the favourable low score assigned to it.
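    A tiny sketch of the arithmetic behind point 1, using the two scenarios above; it shows how a pure RPN ranking would (dangerously) put the low-severity scenario first:

    ```python
    # RPN = Severity x Occurrence x Detection; ranking by RPN alone can
    # bury a Severity-10 (hazardous) failure mode below a cosmetic one.
    scenarios = {
        "scenario_1_hazardous": {"S": 10, "O": 3, "D": 3},
        "scenario_2_cosmetic":  {"S": 4,  "O": 7, "D": 8},
    }

    for name, r in scenarios.items():
        rpn = r["S"] * r["O"] * r["D"]
        print(f"{name}: RPN = {rpn}")
    # scenario_1_hazardous: RPN = 90
    # scenario_2_cosmetic:  RPN = 224  <- ranks first despite Severity 4
    ```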
  11. Venugopal R

    Launching Lean Six Sigma Effectively

    Avoid branding the program with the "Lean Six Sigma" tag at the start. Understand the biggest Leadership pains; there are always bound to be requirements for improving the effectiveness / efficiency of processes. Take up such a pain area (or improvement area) and initiate it through the regular procedures in the organization, viz. change request or CAPA processes, which normally flow through the relevant cross-functional stakeholders. No SME should feel it as an added activity. Once you succeed, let leadership experience a fact-based success story that encourages them to give you the next one. Step up the usage of LSS tools / practices as required, and pursue this until it results in a seamless buy-in of the program.
  12. Venugopal R

    Discrete data as continuous data

    Working with sample means
    When we work with sample means, the means of samples drawn from any distribution, even a discrete one, follow the properties of the normal distribution, as governed by the central limit theorem (a small simulation follows at the end of this post). This concept enables the usage of normal distribution laws for tools such as control charts.

    Ordinal data
    Many a time we use ordinal data on a Likert scale with ratings 1 to 5. When we average such recordings for a particular parameter across various respondents, they get converted into a metric that can be seen on a continuous scale.

    Histogram
    Every time we plot a histogram, even for data of a discrete nature (for example, the number of corrections in a document per day), with a large amount of data it tends to exhibit the behavior of continuous data, say a normal distribution.

    FMEA ratings
    When we use the ratings in FMEA for severity, occurrence and detection, we assign discrete rankings between 1 and 10; but once converted to an RPN, the metric becomes more continual in nature, though it remains a whole number.

    Failure data / distributions
    Another situation I can think of is failure data. Individual failure data are counts of occurrences, obviously discrete to start with. However, when we convert them to failure rates and plot distributions against time, they are treated as continuous distributions such as the exponential, Weibull, etc.
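    A minimal simulation of the first point, assuming counts from a Poisson distribution (clearly discrete) whose sample means behave like a continuous, nearly normal variable:

    ```python
    # Central limit theorem on discrete data: means of Poisson samples
    # cluster into an approximately normal, continuous-looking metric.
    import numpy as np

    rng = np.random.default_rng(2)
    # 10,000 samples, each of n = 30 daily correction counts (lambda = 3)
    counts = rng.poisson(lam=3, size=(10_000, 30))
    sample_means = counts.mean(axis=1)

    # Approximately normal: mean ~ 3, sd ~ sqrt(3 / 30) ~ 0.32
    print(f"mean = {sample_means.mean():.2f}, sd = {sample_means.std():.2f}")
    ```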
  13. Venugopal R

    Internal Quality Score

    My previous post discussed Situation 1 of non-correlation between the Internal Q score and the VOC score. Let's look at another situation.

    Situation 2: Lack of correlation where the internal Q score shows poorer results than the VOC score.

    Before we conclude whether the internal score serves any purpose or not, below are some of the questions that need to be asked:
    1. Is the VOC score structured and being reported as per an agreed procedure?
    2. Is there a possibility that despite a dip in Quality, the VOC is silent on certain issues, but there is a risk of a silent drift by the customer?
    3. Has a detailed analysis been done on the key findings of the internal measurement, and an assessment done on the relevance of the findings from the customer's point of view?
    4. If sampling procedures are used, are the margins of error (MOE) comparable for the methods employed internally and externally? (A quick sketch of this comparison follows below.)
    5. Is it possible that there are certain reliability related issues that might show up on the VOC score only after a period of time?
    6. Sometimes it is common practice to keep the internal measurements more stringent than what the customer would apply, for higher sensitivity. This could affect the correlation.
    7. The internal measurement might take into account issues that impact the customer, as well as issues that do not impact the customer but are important from an internal process efficiency point of view.

    After considering the couple of non-correlation situations discussed above: even if there is a positive correlation, there may be certain questions worth looking into. Depending on the interest of the debating participants, I will delve into that area. Thanks.
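    A minimal sketch of the margin-of-error comparison mentioned in point 4, assuming simple random sampling of a pass/fail quality check at 95% confidence; the sample sizes and pass rate below are hypothetical:

    ```python
    # Margin of error for a sampled proportion: z * sqrt(p * (1 - p) / n).
    # Hypothetical internal vs. customer (VOC) audit sample sizes.
    from math import sqrt

    def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
        """95% margin of error for a sampled pass rate p from n checks."""
        return z * sqrt(p * (1 - p) / n)

    internal_moe = margin_of_error(p=0.95, n=2000)  # large internal sample
    voc_moe = margin_of_error(p=0.95, n=150)        # small customer sample

    # If the two MOEs differ widely, apparent non-correlation between the
    # scores may be sampling noise rather than a real quality gap.
    print(f"internal MOE = +/-{internal_moe:.3f}, VOC MOE = +/-{voc_moe:.3f}")
    ```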
  14. Venugopal R

    Internal Quality Score

    Let's consider specific cases of non-correlation between the Internal Quality score and the VOC score.

    Situation 1: The VOC score is showing poorer Quality than the internal score. This is certainly a cause for concern, and it serves a purpose to examine some of the questions below:
    1. Is the detection capability of the internal measurement adequate?
    2. Could it be the result of damage that occurred subsequent to the internal measurement?
    3. Is there a difference in the understanding / interpretation of the Quality standard?
    4. Has a new problem cropped up that had never been part of the existing Quality standard?
    5. If some sampling methodology is being used for the score determination, are the margins of error for the internal and VOC scores comparable?
    6. Is it a subjective / aesthetic preference related issue, which could vary from customer to customer?
    7. Is it an assignable spike due to a specific problem concentrated in a few products of a particular batch?

    We will discuss another non-correlating situation in my next post.
  15. Venugopal R

    Internal Quality Score

    Looking at some of the responses, I would like to reiterate the question of this debate. The question is not about whether correlation is required or desirable. The question is: "Given a situation where the internal service quality score fails to show a positive correlation with the VOC score, does it serve a purpose or not?" Or, to express the question in other words: "If the Internal Quality score does not positively correlate with the VOC score, is it to be discarded as not serving any purpose?" My answer has been: "It need not be discarded in all such situations." In other words: "Yes, it could still serve a purpose, depending upon the situation."