Venugopal R

Excellence Ambassador
Venugopal R last won the day on September 7 2018

About Venugopal R

  • Rank
    Advanced Member

Profile Information

  • Name
    Venugopal R
  • Company
    Benchmark Six Sigma
  • Designation
    Principal Consultant

  1. Benchmark Six Sigma Expert View by Venugopal R

A few thoughts based on my experiences. There have been times when I interacted with an organization about identifying Six Sigma projects and they had some confusion with the term ‘Project’, since they associated it mostly with the context of a business contract. Most of us who are experienced in Six Sigma terminology understand how the term ‘project’ needs to be interpreted based on context. However, I have learned that depending upon the audience, we may have to be careful to ensure that the term ‘project’ is interpreted as intended. Should we specifically call them ‘Business Projects’ and ‘Six Sigma Projects’? On the other hand, the fundamental definition of a ‘Project’ and the phases of a project apply to both contexts quite well. The term project implies an undertaking to deliver an objective with a fixed start and end time.

Coming to the definition of ‘Process’, the understanding appears to be more uniform and unambiguous, and I haven’t seen much confusion between the usage of Project and Process. The statement below is an example outlining the meaning of ‘Project’ and ‘Process’: “Most Six Sigma DMAIC ‘Projects’ aim to improve a ‘Process’ or set of ‘Processes’.” However, when we expect people to map the processes that they are involved in, many have some difficulty. In the Six Sigma world, we often use the SIPOC methodology to depict a high-level process. If you want to test your own clarity with respect to a particular process, try building its SIPOC and see how well you are able to do it!
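For those who want to try the exercise, the elements of a SIPOC can be sketched as below. This is a minimal illustration in Python; the invoice-processing process and all its entries are hypothetical, not a prescribed template.

```python
# A minimal SIPOC sketch for a hypothetical invoice-processing operation.
# All entries are illustrative assumptions, not a prescribed template.
sipoc = {
    "Suppliers": ["Sales team", "Customer master database"],
    "Inputs":    ["Approved sales order", "Customer billing details"],
    "Process":   ["Receive order", "Validate details", "Generate invoice",
                  "Review invoice", "Dispatch to customer"],
    "Outputs":   ["Accurate invoice", "Invoice register entry"],
    "Customers": ["Customer", "Accounts receivable team"],
}

# Print each SIPOC element with its items on one line.
for element, items in sipoc.items():
    print(f"{element:<10}: {'; '.join(items)}")
```

The test of clarity is whether the five columns can be filled without debate; the Process column in particular should stay at a high level, typically five to seven steps.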
  2. Benchmark Six Sigma Expert View by Venugopal R

The damage due to a failure can often be avoided or reduced if the failure is detected sufficiently early. A very common example: if a smoke detector raises an alarm, there is a high possibility that a fire about to spread can be attended to and put out. It gives us a certain comfort when we are assured that we have adequate detection ability for certain potential failures. Historical data and experience showing that a particular type of failure has a very low frequency of occurrence is another piece of information that can influence our comfort level with respect to a potential failure, and we do have better quantifiable methods available today to express the ‘capabilities’ of processes, if we have to. Even if the failure occurs, the extent of consequential damage it can cause is yet another factor that decides the extent to which we may breathe easy.

We recognize that the above three factors have been considered in the FMEA methodology in the form of Detection, Occurrence and Severity. Thus, the worst can happen if a failure capable of causing high-severity damage occurs frequently and catches us by surprise; if even one of these factors is addressed favorably, we can prevent or reduce the damage. With many knowledgeable members in this forum, the FMEA method, which is essentially a cross-functional activity, would not require any further detailing here. While FMECA is widely defined as an extension of FMEA, and the criticality calculation was defined way back in MIL-STD-1629A, it is still possible to raise questions on clarity and uniform understanding of the method. I am not getting into the details of the ‘qualitative and quantitative’ calculations used to evaluate criticality, which decide the prioritization of corrective actions for risk mitigation, and to which most forum members would have been exposed.
However, the emphasis of criticality analysis is on improving design and system reliability, whereas the RPN in FMEA gives a practical approach to prioritization that considers detection capability as well. It is my belief that we would all agree that FMECA is a step up from FMEA, driving us to keep improving design robustness, preventive controls and mistake-proofing as much as possible, and to make it a continuous effort.
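The RPN prioritization described above can be sketched in a few lines. This is a hedged illustration: the failure modes and the 1-10 ratings below are hypothetical, and only the RPN (Severity x Occurrence x Detection) route is shown, not the MIL-STD-1629A criticality calculation.

```python
# Hedged sketch: RPN = Severity x Occurrence x Detection, each rated 1-10.
# The failure modes and ratings below are hypothetical examples.
failure_modes = [
    # (failure mode, severity, occurrence, detection)
    ("Seal leak",           8, 3, 4),
    ("Connector corrosion", 6, 5, 7),
    ("Firmware hang",       9, 2, 2),
]

# Compute RPN for each mode and sort so the highest-risk item comes first.
ranked = sorted(
    ((name, s * o * d) for name, s, o, d in failure_modes),
    key=lambda x: x[1], reverse=True,
)
for name, rpn in ranked:
    print(f"{name:<20} RPN = {rpn}")
```

Note how a moderate-severity failure with poor detection can outrank a severe one that is well detected; this is exactly the practical trade-off the RPN captures.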
  3. Venugopal R

    Point of Use Inventory

    Benchmark Six Sigma Expert View by Venugopal R

While most of the Excellence Ambassadors will be familiar with the concept of ‘Point-of-Use Inventory’ as used in Lean, and its advantages in controlling waste, I am expressing some thoughts on certain practical challenges one would face when trying to transform an existing workplace to the POUS approach. The experience of executing such a transformation of a conventional workplace provides learning that also helps in setting up an operation as POUS right from the beginning.

One of the common challenges is the workplace layout: does the existing layout support POUS, or can it be modified suitably with little effort? Another is the geographical location of the part suppliers: are they located near or far? For suppliers who are located far away or overseas, each consignment has to be of a minimum lot size to make the transportation costs practicable. I have seen suppliers setting up exclusive operations near the customer organization as part of a long-term relationship. The question is how far one can succeed with such major changes, and in what period of time. Many factors influence the possibility of overcoming the transportation and handling cost challenges to make POUS a reality for most of the supplies.

Another important factor is the Quality of the supplies. In a POUS system, we will not have much opportunity to evaluate the Quality of incoming material before it is used for production. We neither want to compromise on Quality nor want it to be a showstopper for POUS. Hence, assuring “Quality at source”, by ensuring a clear understanding of requirements and adequate process capabilities, becomes an essential prerequisite. We would certainly have some more important factors to address to ensure an effective POUS system, apart from the above.
Thus the whole implementation may be viewed as a systematic Black Belt DMAIC project when taken up as a transformation initiative, or as a DFSS project when taken up as an initial setup. There are clear output and input metrics to be identified, monitored and improved.
  4. Venugopal R


    Benchmark Six Sigma Expert View by Venugopal R

In the Lean Six Sigma approach, during the Analyze phase we have an exercise to come up with potential causes, and similarly during the Improve phase, an exercise to come up with possible solutions. Subjective methods such as rating and ranking, the Cause and Effect matrix, etc. are used for shortlisting the potential causes; likewise, for identifying possible solutions, brainstorming and creative thinking are applied. It is quite possible that during these exercises we come across many ideas that might be ‘wild guesses’ based on individuals’ historical experiences and intuitions. Many a time such ideas and thoughts trigger lateral thinking and shape up into ‘out of the box’ thinking.

However, the Lean Six Sigma approach has its own checkpoints to verify and validate, both at the cause-identification stage and at the solution-identification stage, to filter out such ideas. The second level of cause/solution identification is based on statistical methods that must show a significant effect. Thus decisions involving major time, effort, cost or customer risk should not be taken just on the ‘wild guess’ approach, but have to be ratified through statistical validation. On the other hand, there are situations where decisions believed to be based on a strong scientific thought process have been proven wrong once implemented, because they proceeded without suitable statistical validation. Thus, I would conclude that ‘guess driven’ ideas may not be avoidable in Lean Six Sigma; however, the expensive risk of wrong decisions getting implemented can be largely controlled by subjecting them to appropriate validation on a sound statistical basis.
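As one illustration of ratifying a ‘wild guess’ statistically, here is a minimal permutation-test sketch using only the standard library. The before/after cycle-time data are hypothetical, and a real project would choose the significance test (t-test, ANOVA, etc.) to suit the data.

```python
import random
import statistics

# Hedged sketch: a simple permutation test to check whether a "guessed"
# cause (say, a fixture change) really shifted the process mean.
# The cycle times (minutes) below are hypothetical.
before = [12.1, 11.8, 12.4, 12.0, 12.3, 11.9, 12.2, 12.1]
after  = [11.2, 11.5, 11.1, 11.4, 11.3, 11.6, 11.2, 11.4]

observed = statistics.mean(before) - statistics.mean(after)

random.seed(42)            # fixed seed so the shuffles are reproducible
pooled = before + after
trials, count = 5000, 0
for _ in range(trials):
    random.shuffle(pooled)
    # Split the shuffled pool into two groups of 8 and compare means.
    diff = statistics.mean(pooled[:8]) - statistics.mean(pooled[8:])
    if diff >= observed:
        count += 1

p_value = count / trials
print(f"observed shift = {observed:.4f} min, p = {p_value:.4f}")
```

A small p-value suggests the observed improvement is unlikely to be a chance effect, which is the kind of ratification the post argues for before committing major effort or cost.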
  5. Venugopal R

    Stable vs Capable Process

    Benchmark Six Sigma Expert View by Venugopal R

Most of us would have understood the three types of variability, viz. instability, variation and off-target. A control chart is a tool that helps to represent these variabilities, both statistically and in an easily understandable way. If all the points fall within the limits and the points comply with the rules for all other ‘runs’ as applicable for a control chart, the process is considered stable. If the overall variation of the process, as estimated from the control chart data, is such that the spread of the population falls within the specified tolerances, the process is considered capable.

Well, when can a capable and stable process be rendered incapable? One such possibility is if we take the unwanted action of altering the settings of the process. But why should this happen? Those of us who are familiar with the “Type-1” error would understand that there is a small risk of being misled by a control chart into believing that the process is not stable when it truly is. Of course, such Type-1 errors are expected to occur very rarely, but it is a possibility whereby one can adversely tamper with a process that has truly been capable and stable. However, if we stick to the control chart methodology, we will quickly discover that there has been a shift and promptly restore the process. Imagine a situation where no control charting is done and the decision to alter the process is taken based on ad-hoc measurements: the chances of disturbing a process away from its stability and capability are certainly higher in such situations.

Another incident that comes to my mind is one where the concept of ‘fits’ and ‘tolerances’ had not been applied effectively to the dimensions of two components that needed to be fitted to each other.
As per the definitions of stability and capability, the processes for both components complied; but when an extreme match between the components came up, there were failures due to improper fit. Here the assembly has been rendered incapable. Hence, apart from the process capability and stability of the individual mating components, a study of fits and tolerances needs to be considered as well.
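The stability and capability checks discussed above can be sketched numerically. This is a minimal illustration assuming hypothetical shaft-diameter data with a spec of 10.00 ± 0.15 mm, a subgroup size of 4, and the standard SPC constants A2 = 0.729 and d2 = 2.059.

```python
import statistics

# Hedged sketch: Xbar chart limits and a Cp/Cpk estimate for a
# hypothetical shaft-diameter process (spec: 10.00 +/- 0.15 mm).
subgroups = [
    [10.02, 10.05, 9.98, 10.01],
    [9.99, 10.03, 10.04, 10.00],
    [10.01, 9.97, 10.02, 10.03],
    [10.00, 10.04, 9.99, 10.02],
]
A2, d2 = 0.729, 2.059          # standard SPC constants for subgroup size 4
usl, lsl = 10.15, 9.85         # specification limits

xbars = [statistics.mean(g) for g in subgroups]
ranges = [max(g) - min(g) for g in subgroups]
xbarbar, rbar = statistics.mean(xbars), statistics.mean(ranges)

ucl, lcl = xbarbar + A2 * rbar, xbarbar - A2 * rbar   # Xbar chart limits
sigma = rbar / d2                                     # within-subgroup sigma
cp = (usl - lsl) / (6 * sigma)                        # potential capability
cpk = min(usl - xbarbar, xbarbar - lsl) / (3 * sigma) # capability incl. centering
print(f"UCL={ucl:.3f}, LCL={lcl:.3f}, Cp={cp:.2f}, Cpk={cpk:.2f}")
```

Stability is judged from the points against the UCL/LCL (plus run rules); capability is judged from Cp/Cpk against the spec, which is exactly why a process can be stable yet incapable, or vice versa.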
  6. Venugopal R

    Stop Gap Arrangement

    Expert comments column

I remember a situation when we had a shortage of a particular cast component that was essential to complete the final product assembly. The concerned supplier suddenly had an equipment problem and was able to supply only a few pieces per hour. So we had one officer take a car and travel to the supplier's site, a few hundred kilometers away, two to three times a day to fetch those small quantities of the casting and keep our assembly line running. This arrangement had to be carried on for a week, by which time the supplier got his machine and process fixed and was able to produce in bulk. The arrangement we employed during that week is certainly not an efficient way of transporting material, but it had to be done as a ‘Stop Gap’ arrangement. Thus a ‘Stop Gap’ arrangement is a conscious decision taken for the time being to keep things going through a temporary setback, even though it may not be the right or efficient method in the long run.

I will now quote an example from a Six Sigma project that was aiming at cycle-time reduction for an online processing work. While the charter was prepared in the Define phase and we were about to identify the metrics for the Measure phase, it came to our notice that an automated ‘search’ tool had already been developed but never used, due to a snag in implementation. Usage of this tool would help us achieve a part of our target, say around 25%. The project team was able to coordinate with the concerned tool developer and the users of the tool in operations, and get the snag resolved with a couple of days' focused effort. This not only took us nearer to our targeted objective but also helped boost the confidence of the project team and the sponsors. It also gave due credit to those who had developed the tool but were unable to show the result. This is considered a ‘Quick Win’ for the project, though the major improvements were yet to be done.
The ‘Quick Win’ in the above example was an action that did not need time-consuming efforts like detailed analysis and validation, but was a ‘low-hanging fruit’ that could be implemented quickly and helped to attain a benefit, though perhaps a small one.
  7. Venugopal R

    Creativity and Productivity

    Expert Comments by Venugopal

This is an interesting topic, which each one of us would have encountered in one way or another, many times in our careers, wherever we may be working. The world could not have come so far without creativity. Creativity is an inborn trait, not only in humans but even among other living beings. It is my personal belief that everyone understands how to view creativity and productivity appropriately, but people tend to get into debates or arguments depending on the positions they hold at the time. While one set of people, typically the production workforce, focuses on routine productivity, another group, typically Production Engineering or Design Engineering, focuses on coming up with creative methods to improve and enhance productivity.

Over time, we have seen the emergence of concepts like “Kaizen”, whose objective is to harness the creativity in the minds of the people who are engaged in day-to-day production, chasing productivity targets. This is a good example illustrating that while being productive, and being closest to the workplace, those minds have been active in building up creative ideas. Unless these thoughts are tapped and encouraged, we miss good opportunities for enhancing productivity (and other improvements) at the workplace. However, improvement through Kaizens has its limitations, and it is important to have dedicated experts explore best practices and leverage technological advancements and ergonomics for breakthrough improvements in productivity. On the whole, it is a combination of Kaizens (continuous small improvements) and re-engineering / innovation (continual thought that results in periodic breakthrough improvements) that brings transformation over a period of time. So long as ‘human’ minds are involved, creativity will co-exist with productivity.
Maybe I would take a step forward and mention that even RPA keeps striving for creativity through “Machine Learning” methods, even while productivity is in progress.
  8. Venugopal R

    BHAG (Big Hairy Audacious Goal)

    Benchmark Six Sigma Expert View by Venugopal R

A BHAG is no doubt a long-term vision, usually ten-plus years. It is a transformational goal and aims to position the organization for revolutionary change. The guideline for Black Belt projects is to have a SMART goal, and Black Belt projects need to be completed within a few months at the maximum. The strategic element of Six Sigma calls for annual goal setting and deployment of goals to identify the needs and opportunities to improve, re-design or newly design processes. The famous approaches, viz. DMAIC, DMADV and DFSS, are popular methodologies that guide teams in executing such projects.

While deciding the annual goals for an organization, the senior leadership would consider the BHAG vision and ensure that the annual goals are aligned to steer the organization towards it. This then translates into more specific objectives that can be chartered as Black Belt projects. Thus the Black Belt projects would certainly serve as a vehicle providing substantial traction to steer the organization towards the BHAG, provided the senior leadership makes use of the Six Sigma organization effectively. However, Black Belt projects alone may not be sufficient to fulfill the aspiration of a BHAG; it will certainly need emphasis on strategic fortitude, using tools and methodologies that apply creativity and innovation as well.
  9. Many of us will be familiar with, and are likely to have dealt with, “Special processes” as defined by ISO 9001. To re-iterate the definition, special processes are processes whose outcome cannot be easily measured or evaluated; hence it is very important to ensure the compliance of the process parameters, to provide assurance that the output can be confidently relied upon. The most popular examples provided are welding, soldering, painting, etc.

In my experience I have come across some specific examples. For instance, the ‘burst strength’ of an auto clutch facing depends upon proper processing and curing of the friction material; this in turn depends upon several process parameters, ranging from the appropriate proportions of the pre-mix, through the process parameters of the molding and baking processes, to the extent of force applied in the grinding and finishing operation. Other examples would include the insulation breakdown resistance of wiring-harness systems used in appliances and automobiles. In the IT services industry, many processes are performed directly on the customer’s mainframe with no or very limited opportunity for verification or correction. In the banking industry, if the applicable discounts for a product are not withdrawn by the system after the intended period, it causes revenue losses for the bank which may not be easily recovered. Usage of the right skills and checkpoints is crucial to assure that poor quality does not hit the customer’s processes or the end customer. It is the responsibility of the producer to identify special processes, whether pointed out by the customer or not, and to exercise and demonstrate appropriate proactive controls.
Coming to the ‘Special Requirements’ as defined by the Aerospace standards, they are a bit different from the “Special processes” defined in ISO 9001, in the sense that Special Requirements as per the AS standards are product characteristics / performance parameters identified by the customer as having a ‘high risk’ of not being met. Factors used in the determination of special requirements can include process complexity, past experience and limitations of industry process capability. Identification of special requirements, including the key characteristics and critical items, is one of the defined outputs of phase 2 of the Aerospace APQP. Some examples provided by the IAQG guide for the 9101 standard include new technology applications, new work sharing, introduction of new processes or machines, and new competency requirements. The focus here is on the product requirements, and from the way the standard has defined it, it appears that one of the criteria for identifying ‘special requirements’ is the fact that the product may be produced through a ‘special process’. In the context of this discussion, I would also like to mention NADCAP (the National Aerospace and Defense Contractors Accreditation Program), which is an industry-managed approach to conformity assessment of ‘special processes’ in the Aerospace industry.
  10. Venugopal R


    Reinventing the wheel can be an arduous task. It is basic common sense that we should try not to duplicate effort, but build upon wisdom that already prevails. The distinctiveness of the Japanese companies is that they have demonstrated the art of picking up an invention that already exists and taking it to an unimaginable dimension. The transformation of the auto industry by the Japanese during the 1980–90 period awakened the US auto giants to revise their own standards on automobiles. Similar is the case with many other products the world has seen. It would not be out of place to mention the pioneering work by the Indian Statistical Institute on statistical design of experiments; many of those approaches were practically applied in what came to be the very popularly accepted Taguchi methods. Indeed, the Japanese have left a legacy in the ability to build and excel upon existing work in many areas, be it product, process or practices.

Now let us see the Yokoten practice as applied within an organization. Yokoten, as many of you have figured out, is commonly referred to as lateral sharing of learning across the organization. In many of our organizations, we continue to have pockets of good work going on, but with very little publicity. People who have been in an organization for a long tenure would have seen the same or similar continuous improvement projects being repeated over time. We often talk about ‘sharing of best practices’, but from a Yokoten point of view, shouldn’t we rather say “building upon best practices”? In order to propagate Yokoten practices better in organizations, we need to consider multiple factors. Let’s discuss one such factor here. Usually when an improvement project is completed, there is a requirement for the team to come out with ‘opportunities for replication’; this gets presented, and many a time nothing much emerges out of it.
The impression prevails that replication is a relatively simple process, and mostly, even if someone takes it up sincerely, it is perceived as a low-recognition effort. Instead, “building upon best practices” can be viewed as a creative ability and an effort that carries equal importance, or maybe more in some cases; yet the credit for the original effort does not diminish at all. Thomas Edison is still remembered as the inventor of the light bulb, though the bulb in today’s world has undergone significant transformations from its original form!
  11. Venugopal R


    Benchmark Six Sigma's MBB Expert Response (not contesting)

I have been fortunate to have had rich work experience with organizations imbibing Japanese and Western management styles. I would not want to come to any conclusion as to which is better; I find positives in both approaches, and finally it is the effective blend of best practices, applied with cognizance, that gives the result. Whether we talk Gemba or MBWA, it is the manner in which they are practised that makes the difference. Both mean that we need to visit the workplace. Both mean that we need to interact with the people who are closest to production and who touch the products. Both mean that we need to focus on continuous improvement.

I am not sure what thoughts many of you get when you hear these terms, but let me express mine. When I hear Gemba, it says “Roll up your sleeves and get down to the workplace”. If it is a manufacturing floor, go near the production area, the machines and the people at the work spot under consideration. If we are talking about sales, go to the showroom or sales counters where the actual handshake with customers is happening and participate in the sales process. In the case of IT services, go and sit down in front of the monitor, by the side of the processors who are processing the transactions or doing the testing. Getting a ‘hands-on’ feel of the work, and empathising with the people engaged in it to understand the ground reality, is what Gemba is all about. Gemba visits may be done any time as required and need not follow a scheduled timetable.

MBWA gives me the feeling of getting an overall view of what goes on in the actual workplaces. These are more structured and planned visits by senior leaders, mostly accompanied by the concerned area supervisors. Here the senior leaders may assess the processes as per a systematic schedule or checklist, or it could be an ad-hoc assessment.
Unlike Gemba, MBWA doesn’t give the feel of ‘rolling up sleeves’ and working, but more of ‘higher level’ observation, assessment and understanding. Observations are made on the spot, issues are heard and seen at the workplace, and questions are asked on the spot of the people closest to the work. Senior leaders visiting the workplace instils seriousness and a sense of importance in the minds of the people there, be it a shop floor, sales and service counter, call centre or IT services. So which is better, Gemba or MBWA? Considering the above discussion, both need to be practised: there is a need for structured MBWA as well as Gemba visits by senior leaders. Both have common benefits as well as specific benefits.
  12. Venugopal R

    Should one know the formulas to be good at LSS?

    It depends on what role you are aiming for. If you are aiming for an LSS trainer role, it is important to have a reasonable grasp of the underlying statistical principles, if not the actual formulas. For other roles, where you may have to lead an LSS project in whatever your area of competency may be, you can rely on statistical software and take the help of an LSS BB or MBB where you need it. One of the main reasons why these applied subjects did not (and maybe still do not) get enough buy-in was that many people used to be put off by the statistics part. In the earlier days one had to use tables and calculators to do the workings, but now, thanks to the advanced software packages available, we are able to perform that part with ease. So, if we keep harping too much on the theoretical part, we may once again kindle the discouragement. We need to be careful and practical in this approach.
  13. Venugopal R

    Rolled Throughput Yield Part 2

    Rolled Throughput Yield (RTY) is calculated by multiplying the yields of each process. Let me illustrate an application of this metric with an example. XYZ company manufactures friction material that goes into auto disc brake pads. The processes under consideration start with the mix, which is subjected to a pre-form process, then compression molding and then grind finishing. Let's assume that the standard weight of mix required for each pad is 100 gms, and that 10000 gms of mix is fed into the processes. Multiplying the yields of the three processes, Preform, Compression molding and Finishing, gives an RTY of 0.8, which means that when a quantity of mix equivalent to 100 pads was fed into the system, we ended up getting only 80 pads.

The loss of yield falls into two categories: 1. Losses due to spillage, gaseous waste and finishing dust (SGF). 2. Rejections that were either scrapped or reworked (SRW).

The RTY brings out the practical yield of the process at large. If we take up a Six Sigma project to improve the RTY (say from 0.8 to 0.9), it will lead to the revelation and analysis of the 'Hidden Factory' in terms of the scrap and rework handling that goes on between the processes. Further probing would lead to the question of how much SGF wastage can be reduced. It is likely that factories have practices by which reworked material from a particular process is fed into the next process. Similarly, wastage due to spillage may be retrieved and re-routed to the preform process, and the grind dust may be collected and recycled into the molding process at permitted proportions. If around 2% of the SGF and 8% of the SRW are re-introduced into the process, the resulting yield (had we not considered RTY) would work out to 90%, and we would have missed out on exposing and quantifying the "Hidden Factory" and the opportunity for improvement.
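The RTY arithmetic above can be sketched as follows. The individual process yields here are hypothetical values chosen so that their product comes to roughly the 0.8 quoted in the example; the original per-process figures are not reproduced.

```python
# Hedged sketch: RTY is the product of the per-process yields.
# The yields below are hypothetical, chosen to multiply out to ~0.8.
yields = {"Preform": 0.95, "Compression molding": 0.90, "Finishing": 0.935}

rty = 1.0
for step, y in yields.items():
    rty *= y

print(f"RTY = {rty:.3f}")
# 100 pads' worth of mix in -> roughly 80 good pads out the first time.
```

Contrast this with a "final yield" view that counts re-introduced rework and recycled dust as good output: that view can report 90% while hiding the scrap-and-rework churn, the 'Hidden Factory', between the steps.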
  14. Venugopal R

    Power of Hypothesis Test

    Decision based on test | Reality: Ho is True                 | Reality: Ho is False
    Accept Ho              | Correct Decision (1 – alpha),       | Type II error (Beta)
                           | the Confidence Level                |
    Reject Ho              | Type I error (alpha)                | Correct Decision (1 – Beta),
                           |                                     | the Power of the Test

If we want the test to pick up a significant effect, it means that whenever H1 is true, it should accept that there is a significant effect. In other words, whenever H0 is false, the test should reject H0. This is represented by (1 – Beta), which, as seen from the above table, is defined as the power of the test. Thus, if we want to increase the assurance that the test will pick up a significant effect, it is the power of the test that needs to be increased.
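To connect the table to numbers, here is a minimal sketch of how power (1 – Beta) grows with sample size for a one-sided z-test with known sigma. The effect size, sigma and sample sizes are illustrative assumptions, not from the original post.

```python
from statistics import NormalDist

# Hedged sketch: power of a one-sided z-test for a mean shift with known
# sigma. H0: mu = mu0; H1: mu = mu0 + delta (illustrative numbers only).
def z_test_power(delta, sigma, n, alpha=0.05):
    z_alpha = NormalDist().inv_cdf(1 - alpha)     # critical value under H0
    shift = delta * (n ** 0.5) / sigma            # standardized true shift
    return 1 - NormalDist().cdf(z_alpha - shift)  # P(reject H0 | H1 true)

for n in (10, 30, 100):
    print(f"n={n:>3}: power = {z_test_power(delta=0.5, sigma=1.0, n=n):.3f}")
```

The pattern it shows is the practical lever: for a fixed effect size and alpha, increasing the sample size is the usual way to raise the power of the test.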
  15. Venugopal R

    Measure of Dispersion

    Range is no doubt the simplest measure of dispersion. Range, however, can mislead us when there are outliers in the sample, since only the two extreme values are used in its calculation. We need not go into the advantages of using the standard deviation, since most of us know them. However, in situations where we deal with small and equal sample sizes, the range is a very practical measure. One of the best examples we have is the usage of the range in an Xbar-R chart. Here, the samples are taken in the form of rational sub-groups; each sub-group consists of a small but equal sample size, say around 4 numbers. Such sample sizes are too small for computing standard deviations reliably. The concept of rational sub-grouping, with a very short time gap between the samples within a sub-group, reduces the possibility of outliers. Even if we do have outliers, those range values will stand out in the control chart and will be removed during the 'homogenization' exercise. Hence the range can be used as a measure of variation in such cases.
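The range-based route described above can be sketched as below: sigma is estimated from the average subgroup range via Rbar/d2 and compared with the overall sample standard deviation. The subgroup data are hypothetical, and d2 = 2.059 is the standard bias-correction constant for subgroups of size 4.

```python
import statistics

# Hedged sketch: estimating process sigma from subgroup ranges (Rbar/d2),
# as done on an Xbar-R chart, versus the overall sample standard deviation.
# The measurement data below are hypothetical.
subgroups = [
    [5.1, 5.3, 4.9, 5.2],
    [5.0, 5.2, 5.1, 4.8],
    [5.2, 5.0, 5.3, 5.1],
]
d2 = 2.059  # bias-correction constant for subgroup size n = 4

rbar = statistics.mean(max(g) - min(g) for g in subgroups)
sigma_from_range = rbar / d2                    # within-subgroup estimate
sigma_overall = statistics.stdev([x for g in subgroups for x in g])
print(f"Rbar/d2 = {sigma_from_range:.3f}, overall s = {sigma_overall:.3f}")
```

The two estimates answer slightly different questions: Rbar/d2 reflects only within-subgroup (short-term) variation, while the overall standard deviation also picks up any shift between subgroups.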