Venugopal R

Excellence Ambassador
  • Content Count: 121
  • Joined
  • Last visited
  • Days Won: 13

Venugopal R last won the day on March 26

Venugopal R had the most liked content!

Community Reputation: 34 Excellent

5 Followers

About Venugopal R

  • Rank: Advanced Member

Profile Information

  • Name: Venugopal R
  • Company: Benchmark Six Sigma
  • Designation: Principal Consultant

Recent Profile Visitors: 1,104 profile views
  1. Benchmark Six Sigma Expert View by Venugopal R

Observing human behavior is very important during training sessions, to gather feedback and modify your approach so that the session is as effective as possible and supports long-term learning. Similarly, while mentoring projects with Black Belt or Green Belt leaders, observation techniques are equally important.

1. Look for things that prompt behavior
One of the important aspects during a project drive is constancy in pursuing the objective. An important behavioral observation is whether the leader sets himself / herself a disciplined schedule. It is important to have a planned schedule and adhere to it in terms of meetings, reviews and working sessions. Setting up such a time structure is a behavioral trait that prompts one to fill the time slots with targeted accomplishments.

2. Look for adaptations / hacks / workarounds
Changes keep happening in an organization, and during the course of a project it is quite possible that the relevance of the project objective gets altered due to other business strategies or factors. Some project leaders treat this as an excuse for lack of progress in their projects, whereas others will figure out a workaround to adapt their projects to the changed scenario.

3. Look for what people care about / value the most
The WIFM factor (What Is in it For Me) is very prevalent and influences the motivation of any leader to put in their best on the project. The WIFM factor varies across individuals. Some may be looking for:
a) Monetary rewards
b) Recognition
c) Making their job easier
d) Learning experience
e) Any other
If we understand which category of WIFM an individual belongs to, in terms of what he / she values most as the outcome of doing the project, it can help shape the way you address that individual. There would also be individuals who do not appear to value anything and show poor involvement and urgency on their project. And there would be individuals who value something, but may not make it obvious!

4. Look for body language
While mentoring someone on a project, you cannot miss the body language. The cell phone is the most common distraction. It is also a wonderful tool for pretending to be busy with something; however, an experienced eye can easily differentiate between someone who is genuinely busy and someone who pretends to be. It is important to keep observing, assess how much of your mentee's attention you are able to gain, and change your approach and strategy accordingly.

5. Look for patterns
One pattern of behavior in people is the time of day at which they work best. Certain people are at their best in the morning hours and some in the evening hours. There are many whose concentration seriously tapers off from Friday afternoon and is regained only after the next Monday afternoon. Again, there are some busy bees who prefer to take up work on weekends. Mostly, I find that meeting effectiveness is best if the duration is kept to one to one and a half hours, with a pre-planned agenda and objective.

6. Look for the unexpected
A few project leaders who are sluggish might make a sudden spurt of improvement, and the opposite is also possible. Some start off with high excitement and vigor, but by the time you reach the Measure phase, the energy levels may dip considerably. Another unpredictable pattern is 'mood swings', and your effectiveness with the person depends on which way the swing occurs.

On the whole, observing the conduct and manners of your audience / participants / mentees, and adapting based on the feedback from those observations, is a very important control method to improve the effectiveness of your endeavor.
  2. Benchmark Six Sigma Expert View by Venugopal R

The first time I ever saw and experienced a "prototype" was during the early years of my career with a white goods MNC. The product was a consumer durable with a unique mechanism and control features, being introduced to the world for the first time. The product was developed in a well-equipped research and engineering division of the company in the US. Special purpose tools and machines were used to create the several hundred components in the Bill of Materials as per the design drawings; the prototype was assembled and made functional. Such a prototype was very useful for concept evaluation, fit and function. It would not be adequate to evaluate reliability or endurance, nor would it help to assess the capability of a production process. The unit was used internally by the engineers to perform concept / design FMEA studies and improve the design of the components and assemblies. The term 'proto' means 'first' or 'original', the point from where something starts.

In later years, the concept of rapid prototyping emerged with the advancement of Computer Aided Design and 3D printing technologies. The 3D designs are broken into layers and 'printed' out as products using special materials. Rapid prototyping methods have helped bring down cost and time compared to conventional prototyping methods.

Pilot testing is usually done once all the necessary preparations and set-up are ready for production of the product. Before commencing regular commercial production, an initial batch of products is produced and subjected to evaluation in real use, but monitored closely. The objective of pilot testing is to evaluate not just the functionality of the product, but also its reliability in actual working conditions, user acceptance, production related issues and any other feedback based on actual field usage. For my example of the consumer durable above, the company had a practice known as "Customer Use Field Trial". Some products were offered to employees for use at their homes for a certain period of time and feedback was obtained. Another method was to launch the product in a non-prominent market; for example, sell a few products only in a 'B' city in a distant locality and monitor the feedback.

How do prototype testing and pilot testing translate to the IT services industry? Well, the coded software is subjected to test cases. Tools are available for handling large numbers of test cases and they help improve the concept and design of the software. Another interesting practice is the Agile model, where, at different stages of development, the product is made available to the customer, who understands that the output may not be perfect and willingly provides feedback so the design can be rapidly modified until the final version emerges. However, only when the developed software product is put to real use by multiple users on multiple systems, and subjected to real-life variations, is the pilot testing complete.

In short, we should not rely on pilot testing for key changes in concept and design; most of those should have taken place during the prototyping stage.
  3. Benchmark Six Sigma Expert View by Venugopal R

Having been associated with Japanese experts on production planning methods several years ago, I obtained very interesting inputs that transformed many conventional thoughts and practices. One such experience was seeing how the concept of the 'U-shaped' manufacturing 'cell' brought about significant benefits and simplicity. The same production volume that used to be carried out in a large factory layout was unimaginably simplified into around one tenth of the area previously used. The underlying principle for this transformation was to move away from "batch processing" to a "single piece flow" concept.

Within this single piece flow concept, several methodologies have emerged, depending on certain factors. One such methodology is Chaku Chaku, which translates as 'Load-Load'. It is not necessary that all U-shaped cells follow Chaku Chaku. It is ideally possible when we have several machine stations positioned in the right sequence of the process, each capable of performing its individual process automatically, including unloading of the component. Then all the operator has to do is pick up the component and load it into the subsequent machine.

As one can guess, a high degree of meticulous planning is required to make this line work. The timing of each station's operation has to be almost the same, or appropriately balanced, and equal to the time taken for the operator to complete one round of 'loading'. It is also essential that the machines are capable of automatically unloading the component so that the operator can pick it up. Sometimes, however, the operator is required to perform the unloading as well on a particular machine where auto-unloading has not been possible. This could potentially reduce overall efficiency unless the timing is well balanced. Lean and Quality also need to go hand in hand: the machines should have high process capabilities to enable this method, and rework will spoil the game.

This method may not be easily adaptable under certain circumstances. Imagine a molding shop where machines automatically complete the molding and eject the product out, but there is also a heat treatment process where a large number of components have to be loaded into an oven with a much larger cycle time. Then we have to go back to batching, unless an expensive investment is made in equipment that can accommodate single piece flow, and the volume is large enough to allow the required baking time.

The obvious advantage of Chaku Chaku over full automation is cost. Apart from that, it also provides more flexibility to change any particular machine to accommodate variants of the products.
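To make the balancing idea concrete, here is a minimal sketch in Python. The station names and cycle times are purely illustrative assumptions of mine, not figures from any real cell; the sketch only shows how one might check whether the operator's loading loop is paced to the automatic station cycles in a Chaku Chaku line.

```python
# Illustrative check of Chaku Chaku line balance (hypothetical numbers).
# Each station runs and unloads automatically; the operator only loads.

station_cycle_s = {          # automatic cycle time per station, in seconds (assumed)
    "drill": 55,
    "mill": 58,
    "deburr": 52,
    "inspect": 50,
}
load_time_s = 12             # time for the operator to load one station (assumed)
walk_time_s = 4              # walking time between adjacent stations (assumed)

# Time for the operator to complete one full round of loading every station
operator_loop_s = len(station_cycle_s) * (load_time_s + walk_time_s)

bottleneck = max(station_cycle_s, key=station_cycle_s.get)
print(f"Operator loop time : {operator_loop_s} s")
print(f"Slowest station    : {bottleneck} at {station_cycle_s[bottleneck]} s")

if operator_loop_s >= max(station_cycle_s.values()):
    print("Balanced: every machine finishes its cycle before the operator returns to it.")
else:
    print("Not balanced: the operator will have to wait at the slowest station.")
```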
  4. Benchmark Six Sigma Expert View by Venugopal R

Cash in hand has the advantage that it can be invested immediately and start earning returns. Hence the value of the same amount of cash received in the future is always lower than its value today. Net Present Value (NPV) compares the amount invested today with the present value of the future returns from the investment, after discounting them at a given rate of return.

The answer to the given question may be debatable. The NPV being zero means that we would be in a no profit, no loss situation. Considering that this investment gives tremendous intangible benefits, with no expected loss, it could be taken up. Further, companies may not strictly go by the NPV alone. The result also depends on the assumed discount rate, which need not be accurate.

Another situation that some of us would have come across is when we have multiple projects with a large client, and the gains from all the projects for that client, put together, are significant; then we can afford to take on a project even with a negative NPV to retain the overall goodwill of the client partnership. Some companies have good continuous improvement practices in place and will have the confidence of bringing about adequate process improvements within a couple of years to make the project profitable.

However, if there are other evidently lucrative investment opportunities available, it may not make sense to go ahead with this project, unless the Employee Satisfaction and CSR needs heavily outweigh the tangible benefits from the other projects.
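For readers who want to see the arithmetic behind the statement above, here is a minimal sketch in Python. The cash flows and discount rate are invented for illustration, not taken from the question being discussed.

```python
# NPV = sum of cash flows discounted to today, including the initial outflow.
# All figures below are hypothetical, purely for illustration.

def npv(rate, cash_flows):
    """Net Present Value, where cash_flows[0] is the initial outflow (negative)
    at time 0 and cash_flows[t] occurs at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

investment = -100_000                      # outflow today
returns = [40_000, 40_000, 40_000]         # inflows at the end of years 1 to 3
rate = 0.10                                # assumed discount rate

print(round(npv(rate, [investment] + returns), 2))
# A positive result means the project earns more than the assumed 10% p.a.;
# zero means it exactly breaks even at that discount rate;
# negative means the discounted returns do not recover the investment.
```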
  5. Benchmark Six Sigma Expert View by Venugopal R

Most of us will be familiar with a requirement of the ISO 9000 standards, namely the Management Review. Organizations that gave importance to this requirement and had senior leaders participate in these reviews derived greater benefits than those that did not. There are people who say that ISO 9000 systems aren't successful; one of the possible reasons could be the lack of, or inadequacy in, conducting Management Reviews.

The "Toll Gate reviews" that are part of a Six Sigma program are equally important for the success of the projects and the program at large. Good Six Sigma projects require cross functional involvement and senior management approvals. One popular way is to schedule Toll Gate reviews at the end of each phase. However, quite often there is overlap of activities across phases, and hence some organizations prefer to schedule the Toll Gate reviews at a monthly frequency, with projects reviewed at whatever phase they are in.

One of the key objectives of Toll Gate reviews is to provide support, solutions and guidance to the team to overcome any hurdles they might be facing. The review also serves as an approval for having completed the deliverables for the phase, and any shortfalls or gaps are understood by all concerned so that appropriate actions and course corrections can be taken before it is too late.

The Six Sigma program coordinator decides the agenda for the Toll Gate reviews and the required participants. The coordinator, along with the steering group, the sponsors and leaders of the projects being reviewed, and the relevant process owners, are necessary participants. The 'if needed' participants include Subject Matter Experts, other select team members of the project, and any other stakeholder, depending on the topics to be discussed.

The Six Sigma coordinator should do some prior planning for the review:
1. Decide the projects that need to be reviewed in the session
2. Identify the key points for discussion / decisions / approvals on each project
3. Decide the participants based on the above
4. Send an invite and agenda in advance to all the participants
5. For any critical decisions from senior leaders, meet or directly connect with those individuals prior to the meeting and provide a prelude
6. Any issues that need more preparation, information or deliberation may best be kept out of the review, with the project leader and team advised to do their homework before taking them up in a Toll Gate review

If the Toll Gate reviews are too lengthy, there is a risk of losing attention, and that may discourage people from participating in future meetings. It is important for the Six Sigma program coordinator to keep the meetings short but effective, and to ensure the regularity of the reviews. It is extremely important to establish this practice, and it is sure to pay off in the long run.
  6. Benchmark Six Sigma Expert View by Venugopal R

A reasonable understanding of regression analysis and its application is a pre-requisite for answering this question. Whenever we arrive at a relationship between two variables, one dependent and the other independent, it has to be remembered that a dependent variable is influenced not by just one independent variable but by several others as well. However, it helps if we can quantify what extent of the variation of the dependent variable is explained by the independent variable in question.

A very simple example: if my body weight is the dependent variable, we know that it could be impacted by several factors, viz. change in diet, extent of exercise, hours of sleep, hours of sitting, effect of certain medicines and so on. However, if I am able to quantify the extent of impact of each of these factors on my body weight, I will be able to address the most significant one to my benefit.

'R-square' expresses the extent to which the predictor variable(s) explain the dependent (Y) variable. However, a problem arises when we have multiple predictor (x) variables. For each predictor variable added, the R-square value keeps increasing, irrespective of whether the added x factor has a significant correlation with the dependent variable or not. This is where the 'R-square adjusted' value helps. For any added x variable, the increase in the 'R-square adjusted' value depends on whether the added factor influences the dependent variable over and above chance-cause variation. Thus, it makes sense to refer to the 'R-square adjusted' value when dealing with multiple regression.

I will try to make this point clearer with the example below. Here the dependent variable is the number of transactions per hour at an ATM located in the premises of a very busy mall. The predictor factors considered are:
1. The number of shops that are open
2. The number of cars that come in per hour
3. The number of senior citizens who come in per hour
A set of data (restricted to 10 sets for simplicity) was tabulated, and for each independent factor the correlation coefficient with respect to the output variable worked out as:
1. No. of transactions vs No. of shops open = 0.955 (strong correlation)
2. No. of transactions vs No. of cars coming in / hr = -0.102 (no correlation)
3. No. of transactions vs No. of senior citizens / hr = -0.22 (no correlation)
It is clear from the above that only factor no. 1, 'the number of shops open', shows a strong correlation with the number of transactions per hour.

Let us use this example to see the behavior of R-square and R-square adjusted for the regressions with factor 1, factors 1 & 2, and factors 1, 2 & 3. Moving from scenario 1 to scenario 3, the R-square value shows an increase with the addition of factors, whereas the R-square adjusted value shows a decline.

Now, let's consider another factor for scenario 4: the number of youngsters below 25 years who enter the mall in an hour, and the corresponding number of transactions. With data for 10 sets, the correlations are:
1. No. of transactions vs No. of shops open = 0.955 (strong correlation)
4. No. of transactions vs No. of youngsters = 0.989 (strong correlation)
Studying the behavior of R-square and R-square adjusted for scenarios 1 and 4, it is seen that while the R-square value increased with the addition of this factor, the R-square adjusted value also increased comparably.

Hope this example illustrates how R-square adjusted is useful when dealing with multiple regression analysis.
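Since the original data tables are not reproduced here, the following is a small self-contained sketch in Python, using simulated data of my own rather than the ATM figures above, that shows the same behavior: plain R-square always rises when a predictor is added, while adjusted R-square rises only if the new predictor genuinely explains variation.

```python
import numpy as np

rng = np.random.default_rng(0)

def r2_and_adj(X, y):
    """Ordinary least squares fit; returns (R-square, adjusted R-square)."""
    n, p = X.shape                                  # p = number of predictors
    X1 = np.column_stack([np.ones(n), X])           # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
    adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)      # adjusted R-square formula
    return r2, adj

n = 10
shops = rng.uniform(20, 60, n)                      # meaningful predictor
noise_factor = rng.uniform(0, 100, n)               # irrelevant predictor
transactions = 3 * shops + rng.normal(0, 5, n)      # y depends only on 'shops'

print("factor 1 only     :", r2_and_adj(shops.reshape(-1, 1), transactions))
print("factor 1 + noise  :", r2_and_adj(np.column_stack([shops, noise_factor]), transactions))
# R-square creeps up in the second model, but adjusted R-square typically drops,
# signalling that the extra predictor adds no real explanatory power.
```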
  7. Benchmark Six Sigma Expert View by Venugopal R

Sensitivity Analysis, or "What-if" analysis, is popularly used in financial studies to project the possible effect on a business outcome of projected variations in the input factors that are expected to influence that outcome. It is a model built using knowledge of the current situation, which is taken as a baseline; by subjecting the inputs to assumed deviations from their base values, the effect on the outcome is estimated.

Though sensitivity analysis is more associated with financial projections, used for decisions such as the viability of a project, it bears commonality with many of the techniques used for root cause analysis. The Analyse phase of a Six Sigma project aims to establish the relationship between the input (x) factors and the output (y) factors, which is the intent of the sensitivity analysis model as well. However, in the case of root cause analysis, especially for finalizing the secondary causes, analysis of actual data and validation of the effect of the input factors are necessary.

I have found situations where sensitivity analysis helped to set targets for input factors. One such example is a project whose objective was to make a particular branch of a bank profitable. Here there are several input factors to be considered, such as lending volume, lending interest rates, borrowing volumes, borrowing interest rates, volume of low interest deposits, mix proportion of other products, Fund Transfer Pricing and so on. The existing values of all these input parameters and their effect on overall profitability are taken as the baseline scenario, and multiple future scenarios are modeled considering variations of the inputs. The inputs are also classified as 'more controllable' and 'less / not controllable'. Based on these studies, a scenario is chosen and the 'more controllable' factors identified for improvement. This is followed by a more detailed root cause analysis, supported by actual data, of what causes the gap between the current and desired levels of those factors.

Thus, sensitivity analysis helped give a better depiction of the current situation with respect to the input / output factors, and also helped guide us towards a desirable scenario upon which the RCA could be carried out. I will be eager to know about other experiences where this methodology relates to RCA.
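As a simple illustration of the what-if mechanics described above, here is a minimal Python sketch. The profit model and every number in it are invented for illustration; they are not the actual bank branch figures from the project.

```python
# Toy sensitivity ("what-if") analysis: vary one input at a time around a
# baseline and observe the effect on the outcome. All figures are hypothetical.

def branch_profit(lending_vol, lending_rate, deposit_vol, deposit_rate, opex):
    """Very simplified profit model: interest earned minus interest paid minus opex."""
    return lending_vol * lending_rate - deposit_vol * deposit_rate - opex

baseline = dict(lending_vol=500.0, lending_rate=0.11,
                deposit_vol=600.0, deposit_rate=0.06, opex=12.0)

base_profit = branch_profit(**baseline)
print(f"Baseline profit: {base_profit:.1f}")

# Perturb each input by +10% and report the resulting change in profit
for factor in baseline:
    scenario = dict(baseline)
    scenario[factor] *= 1.10
    delta = branch_profit(**scenario) - base_profit
    print(f"+10% in {factor:12s} -> profit change {delta:+.1f}")
```

The inputs whose perturbation moves the outcome the most, and which are also 'more controllable', are the natural candidates for the subsequent root cause analysis.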
  8. Benchmark Six Sigma Expert View by Venugopal R

In the earlier days, 'JIT' (Just In Time) was a very popular term from the Toyota Production System. Later came the terms "Lean" and "Lean Management". While one could get into the detail of why the term "Lean" was chosen for whatever it means in management, let's look at what it denotes. To me, it indicates 'discipline'. If one has to remain lean even in the physical sense, it requires a certain disciplined lifestyle. Similarly, if I need my workplace to be Lean, it will require a disciplined work style on a continuous basis.

Simple practices such as 5S may be taken as the starting point for Lean; in fact, many of the lean methodologies can be related to the 5S steps. Many organizations start this, but it is sustaining the standardized process and perpetuating the 5S cycle that ensures continuous improvement in efficiency.

One of the most important topics in Lean is the distinction between VA (value-adding) and NVA (non value-adding) activities. This is a very dynamic concept, since what we consider VA today has the potential of becoming NVA tomorrow with the advent of technology. Consider how billing happened in a supermarket some years ago, before bar coding came into place! Similarly, what is considered NVA could also cease to be NVA with technology. A mix-up of different types of invoices during accounts payable processing used to require manual inspection and sorting in the middle of the process; by implementing auto-sorting based on optical character recognition, it was no longer perceived as an NVA.

If an organization has to effectively practice and sustain lean processes, the practices have to be extended to its suppliers and customers as well. For instance, to practice the concept of "point of use" inventory, it is important to evolve phases of improvement starting in-house and extending to suppliers. This could involve modification of layouts, improvement of process capabilities, improved material handling, improved supply chain management processes, improved MIS, and above all improved mindsets of employees. The chain will have to spread across the suppliers' organisations as well, leading to extended efficiency improvements.

Application of modern methods like data analytics and machine learning has helped in leaning out processes through automation. These methods present a model of continuous improvement that keeps evolving: historical data is continuously used to learn more about the expectations of the process, and the process gets refined automatically and continuously, making itself leaner and more powerful with time.
  9. Benchmark Six Sigma Expert View by Venugopal R

How did it start and how long has it taken? Tracing the roots of 'Lean Six Sigma' depends on where we take the starting point. I believe that parts of "Lean Six Sigma" were conceived well before it came to be known by that name.

Let me take a starting point somewhere in the 1920s, when Walter Shewhart invented control charts. In the 1940s, Deming popularized the philosophy of PDCA. Later, in the 1950s, Deming and Juran, two American Quality gurus, started helping the Japanese, who were very deft at picking up the techniques and putting them into practical use in their own ways. The JUSE (Japanese Union of Scientists & Engineers) was formed and provided courses in Quality techniques. Before the 1980s, Quality management had become a science and was being taught in the US, Japan, India and other countries. Practical application of statistical methods in Quality management became very prominent. Professional bodies like the American Society for Quality (ASQ) and many others emerged. Then came the concept of 'zero defects', largely associated with Philip Crosby. The first international standard for Quality systems was introduced in the late 1980s. This was followed by many industry specific standards like QS9000, AS9145, TL9000, SEI-CMM, COPC, eSCM and more. 'Business Excellence' models such as the Baldrige award, EFQM and the Deming Prize evolved. Then there was another skill, in the name of 'Project Management capability', that became sought after. Maybe there are more such happenings of relevance, but I am limiting them here to what relates to the topic of our discussion.

Most of us know that the term 'Six Sigma', which literally represents a statistical metric, has been developed to encompass many of the above-mentioned practices, applied in an orderly and logical manner, resulting in a powerful package of a management methodology. This methodology helps in resolving most business problems and in unearthing opportunities to continuously improve effectiveness and efficiency. The 'Six Sigma' package by itself does not bring any new tool; it is the intelligent knitting together of several proven practices, including project management directives, that makes it a pragmatic approach. This packaging happened through the efforts of several organizations, a few of the prominent ones being Motorola, GE and Allied Signal.

Lean methodologies evolved from the compulsion to reduce waste, which in effect relates to efficiency improvement, i.e. getting more output with fewer inputs (or resources). Thus, the techniques used in lean methodology fit seamlessly into the overall framework of Six Sigma management, hence the term 'Lean Six Sigma'.

The beauty of 'Lean Six Sigma' is that any organization at any stage can find a way of using this methodology. 'Lean Six Sigma' may be used as a toolbox for problem solving. Or it could be used as an approach to improve efficiency and growth. Or it could be a management philosophy for transforming an organization. Thus, there is no need for any organization to believe that it is not ready to adopt 'Lean Six Sigma'. Further, its methods and tools are so versatile that they may be applied to any industry, business or organization. Modern technology has given us many ways of easily performing complex statistical calculations, which has made 'Lean Six Sigma' even more adoptable.

With ever increasing abilities in computing and storage of data, we have been able to move from a 'sampling' thought process to the application of 'Big Data' analytics, for which many of the Six Sigma tools are relevant. If one looks at 'Lean Six Sigma' holistically, it has a strategic goal setting and deployment component, a tactical component with tools and techniques for problem solving, and a behavioral component that deals with the human aspects of change management. On the whole, this management methodology, if understood comprehensively, has the power of being a single umbrella enveloping continual positive transformation of the organization.
  10. Benchmark Six Sigma Expert View by Venugopal R

If the question had been "During which phases of DMAIC is TOH (Test Of Hypothesis) largely made use of?", then the answer would be very obvious. Having been asked to identify the phase where TOH does not find an application, we need to put some thought into every phase. My discussion here is not to be taken as a counter to any of the other responses, but may be viewed as a thought inciter.

TOH is a statistical tool that helps compare a characteristic of a population with that of another population or a standard, and take a decision on whether we have sufficient reason to believe they are equal or not. The decision is based on evaluation of a few samples that represent the population. The phases of DMAIC that predominantly use TOH are Analyse and Improve, and hence I will keep these two phases aside and look at the others.

The DEFINE phase is where the business case has to be evolved and the management buy-in obtained. For example, if we need to decide on taking up a project to improve the market share of a product for a segment of customers across geographies, we may use TOH in the form of a Chi-square comparison with a competitor's product while trying to get management approval for the business case.

The MEASURE phase is where the measurement systems need to be finalized and the baseline measurements done. An important aspect of the Measure phase is to carry out a Measurement Systems Analysis. MSA practices use ANOVA, which is built upon TOH principles, for determining parameters like linearity, bias, etc.

As indicated, I am skipping discussion on the Analyse and Improve phases, which are the most popular for use of TOH.

The CONTROL phase is where the focus is on monitoring and ensuring the sustenance of gains. Control plans and mistake proofing are very prevalent methods here. Control charts, which would have been initiated during the Measure phase, continue to be used for monitoring performance in this phase. Usage of control charts is possible when we obtain sample data points periodically. There could be situations where we have practical difficulties in using a control chart. For example, consider a project whose objective is to improve training effectiveness; here, we can monitor the sustained effectiveness only as and when training happens. Another example could be a project whose objective is to improve the cycle time to 'go live' of a New Product Development process; here, we can monitor the sustained effectiveness only when the next new product is developed and launched. Wouldn't TOH find suitable application in such situations, for comparing the performance indicators of the improved process with the previous process, or with a standard, to assure sustenance?

Let me conclude this discussion with the thought: "TOH is well known to be applied during the Analyse and Improve phases; however, aren't there situations in other phases where it could find useful application for practical decision making?" I look forward to seeing the views of others on this question.
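To make the Control-phase idea concrete, here is a minimal sketch in Python, using simulated scores of my own rather than real project data, of a two-sample t-test comparing the latest training batch's effectiveness scores against the pre-improvement baseline.

```python
# Two-sample (Welch) t-test as a sustenance check in the Control phase.
# The scores below are simulated for illustration only.
from scipy import stats
import numpy as np

rng = np.random.default_rng(1)
baseline_scores = rng.normal(loc=68, scale=8, size=25)   # before improvement
current_scores = rng.normal(loc=75, scale=8, size=12)    # latest training batch

t_stat, p_value = stats.ttest_ind(current_scores, baseline_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

alpha = 0.05
if p_value < alpha and current_scores.mean() > baseline_scores.mean():
    print("Evidence that the improved effectiveness is being sustained.")
else:
    print("No statistical evidence of a difference; investigate further.")
```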
  11. Benchmark Six Sigma Expert View by Venugopal R

Apart from the cost, it is sometimes impractical to have a high number of samples for taking a decision. There have been many situations where depending on a few samples was the only choice. I would like to share one such case study, which happens to be one of my lingering experiences of solving a very serious field failure.

This happened on an IT hardware product, where the failure incidents became a threat to the product's acceptance in the market. The severity of the effect of this failure could be classified as 8 to 9. The problem occurred in around 2 to 3 percent of the production volume, and it could occur any time between the 1st and the 30th day of the product's usage. This means that if I dispatched 30,000 units in a month, I could expect more than 750 failures within one month of usage of that batch, which was a disastrous situation considering the severity of the failure. The mandate was to get this problem fully resolved in no time, say within a maximum of a week!

This being a product reliability related failure with an unknown cause, it was not easy to find any screening method to identify and contain the potential defectives in-house. Among the suspected causes were a few components that had undergone certain changes in recent times. The changes included a change of vendor and elimination of some components based on tests and validations. All changes had undergone the necessary technical evaluation in-house and by third party regulatory authorities before implementation.

So, the variables that could impact the failure incidence rate were:
1. Component (type & presence)
2. No. of hours of operation
3. Volume of production
4. Possible interaction effects (on component combinations)

Every component change had been individually evaluated and certified, and hence the technical team was not willing to accept a cause attributable to any component from a design point of view. Without knowing the cause, if I had to contain the failure, the only way was to subject all 30,000 units to a functional test for 30 days and then dispatch only the products that did not exhibit the failure. This was practically impossible. I had to come up with something better than this. It was a situation that demanded quick resolution of a problem plaguing the population, but it had to be resolved through decisions based on smaller samples.

After quick deliberations with my teams, we came up with the idea of creating a customized reliability evaluation plan using 100 samples. Why 100? That was the testing facility limitation! The test was to subject the units to an accelerated burn-in test for 24 hours under extreme conditions, approximated as equivalent to a normal life period of 30 days. The combinations of the components (type / presence) were applied using factorial principles. Considering 4 factors and 2 levels, we required 16 trial combinations, with a limitation of running only 6 samples per combination at a time. To detect a failure occurrence rate of 2%, we had to repeat the entire cycle 8 to 10 times to witness simulation of the failure and to isolate the combinations that gave rise to it. Thus, the whole exercise lasted at least 10 days, using "small" samples to help us identify and quarantine the cause, convince the stakeholders and successfully resolve the issue.

This event was one situation where there was no alternative but to depend on samples to unearth the cause and decide the appropriate action. It also proved that through thoughtful usage of samples, we can identify the right actions successfully and quickly. Very careful and detailed planning, even in such a panic situation, was essential to get the best from 'small' samples. Though those were challenging times, I am glad that I have a case study to share with others who may face similar situations.
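As a rough sanity check on why so many repetitions were needed, here is a minimal sketch in Python. It deliberately simplifies the real situation by assuming that each unit of a suspect combination fails independently with roughly the field incidence rate of 2%, and then computes the chance of the failure actually showing up as the number of tested units of that combination grows.

```python
# Probability that the failure shows up among the units of one particular
# factor combination, assuming each such unit fails independently with the
# field incidence rate. Simplified illustration with assumed numbers only.
p = 0.02                    # approximate failure incidence per unit (assumed)
units_per_combination = 6   # samples of each combination run per cycle

for cycles in (1, 4, 8, 10):
    n = cycles * units_per_combination
    p_seen = 1 - (1 - p) ** n
    print(f"{cycles:2d} cycle(s): {n:2d} units of the combination tested, "
          f"P(failure observed) = {p_seen:.2f}")
# One cycle gives only about an 11% chance of catching the failure in a suspect
# combination; repeating the cycle 8-10 times raises it to roughly 60-70%,
# which is consistent with the exercise having to run for about 10 days.
```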
  12. Benchmark Six Sigma Expert View by Venugopal R

As long as we use the term "control", it implies that we expect variation from the process and that we do not have perfect confidence in it. Come to think of it, can we imagine any process in our day to day life without any controls? One simple example that often crosses my mind is: "Can we ride a bicycle if the handlebar is kept straight and locked?" You may try it out! Every moment we are riding the bicycle, control over balance is provided by small movements of the handlebar, even when we want to ride straight. We walk straight on a path because there is a control mechanism through our eyes that keeps directing every step with some reference. If we think carefully about any process, there are bound to be some built-in controls that make the process work the way it is intended.

I am not going to discuss any preferred order for the types of controls, since most of the Excellence Ambassadors will have their own experienced views. I am limiting my discussion to some aspects relating to the effectiveness of controls. Most of you will agree that controls may be broadly classified as Preventive, Detective and Corrective. Controls may also be classified as 'Technical' and 'Statistical'. What are the factors that adversely impact the effectiveness of these controls? A few of my thoughts are below:

1. Missing out on the 'anticipated failures' list
If a failure has not even been anticipated (maybe it was missed despite doing an FMEA, or maybe there was no history of past occurrence), then obviously no control will exist when such a failure occurs. The exhaustiveness of the list of anticipated failures is a factor, and there is always the question "How do I know what I don't know?"

2. Lack of control on the "preventive control"
Once we term a control "preventive", can we sit back trusting that nothing can go wrong? Every preventive control has to be assessed for its effectiveness from time to time. For example, a loan disbursement will not be allowed by the system unless the security clearances are fulfilled, or an elevator should not start if it is overloaded. Such controls have to be identified and evaluated from time to time to proactively ensure they are functioning.

3. Misuse of preventive / detection controls
This is about having knowledge of such controls and manipulating them. For example, disabling the firewall deliberately to allow certain downloads, or tampering with a smoke detection system (maybe to enjoy a smoke!).

4. Not paying heed to 'warning' based controls
This could happen due to lack of awareness and training, a 'taking it for granted' syndrome, or simply missing the warning. If a 'low oil' symbol lights up on the car dashboard, the driver should be educated to understand it and take action (awareness). If a fire alarm sounds, people tend to take it as a false alarm and not rush out (taking for granted). During a vendor payment, an alert message pops up on the screen when an abnormally high amount appears, but if it escapes the attention of the processor, an overpayment might occur (missing out).

I am sure we can think of more factors and examples that influence the effectiveness of process controls. Thus, the pursuit of perfecting a process control system needs to be a continual effort, and there will always be room for improvement.
  13. Benchmark Six Sigma Expert View by Venugopal R

Let me attempt to narrate the unfolding of my understanding of the control plan over the past three decades. Maybe my first introduction to the term "control plan" was through the ISO 9000 standards released during the late eighties. However, I had worked with an auto ancillary prior to that, where we had a collaboration with a Japanese organization to set up manufacturing of an auto component, I believe for the first time in India. As part of the technology transfer, one of the key documents we received was a lengthy, multiple-folded, handwritten, tabulated document with all the process steps outlined, the quality characteristics for each stage of the process, the specifications, the method for evaluating compliance to each characteristic, and the sampling recommendations. I am not sure whether this document was named a "control plan" at that time, but I always remembered it during subsequent stages of my career, when I was formally introduced to control plans and whenever I worked with them. This only proves that this tool, whatever it might have been called in those days, was part of good Japanese production practices from very early times. And, more importantly, it found its place among the 'most important' documents required for a technology transfer.

Subsequently, the automotive industry came up with the set of QS9000 standards, along with which emerged APQP (Advanced Product Quality Planning). APQP provides a good framework that gives clarity about the creation of the control plan and its linkages and sequence with respect to other methodologies. APQP goes through five phases after a pre-planning phase:
1. Plan & Define
2. Product Design & Development
3. Process Design & Development
4. Product & Process Validation
5. Feedback, Assessment & Corrective Action

The Product Design & Development phase includes DFMEA and design verification plans. The Process Design & Development phase includes PFMEA, and the prototype control plan begins to take shape here. The Product & Process Validation phase includes the evaluation methods, MSA and the setting up of Statistical Process Controls, all of which are inputs into the control plan. The production control plan is an output of APQP at Phase 4.

It is evident from this approach that a control plan needs to cover:
Systemic controls (for instance, the effectiveness of mistake proofing systems needs to be validated from time to time)
Process controls (for example, a thermostat-based temperature control needs to be validated periodically)
Human based controls
Reliability of measurement systems
Reaction plans for any non-conformances

If we need dependable control plans for all the above, the inputs for the control plan have to evolve from the above-mentioned phases of APQP. Some of you may wonder why the control plan should be seen only in the context of a quality system associated with the auto industry. It is for the conceptual clarity that may be obtained from the framework of APQP and how the control plan is derived from it. The same concept can be adopted in any industry, including Information Technology services. The control plan remains a live document that keeps getting updated in line with the level of knowledge maturity.

I conclude by saying that although the concept of the control plan has existed for several decades, we now have many avenues that have brought very refined clarity on the pre-requisites for, and the building and execution of, a control plan.
  14. Benchmark Six Sigma Expert View by Venugopal R

A few thoughts based on my experiences. There have been times when I interacted with an organization about identifying Six Sigma projects and they had some confusion with the term 'project', since they mostly associated the term with the context of a business contract. Most of us who are experienced in Six Sigma terminology understand how the term 'project' needs to be interpreted based on context. However, I have learned that, depending upon the audience, we may have to be careful to ensure that the term 'project' is interpreted as intended. Should we specifically call them 'business projects' and 'Six Sigma projects'? On the other hand, the fundamental definition of 'project' and the phases of a project apply to both contexts quite well: the term implies an undertaking to deliver an objective with a fixed start and end time.

Coming to the definition of 'process', the understanding appears to be more uniform and unambiguous. I haven't seen much confusion between the usage of 'project' and 'process'. Hope the statement below is an example outlining the meaning of 'project' and 'process':

"Most Six Sigma DMAIC 'projects' aim to improve a 'process' or a set of 'processes'."

However, when we expect people to map the processes they are involved in, many have some difficulty. In the Six Sigma world, we often use the SIPOC methodology to depict a high-level process. If you want to test your own clarity with respect to a particular process, try building its SIPOC and see how well you are able to do it! A small illustrative example follows below.
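As a quick, hypothetical illustration (an invoice processing process invented purely for this purpose, not one referred to in the post above), a high-level SIPOC might look like this:

Suppliers: Vendors, Procurement team
Inputs: Vendor invoice, Purchase order, Goods receipt note
Process: Receive invoice -> Validate against PO and receipt -> Approve -> Post payment
Outputs: Approved payment, Updated accounts payable records
Customers: Vendor, Finance / Accounts team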
  15. Benchmark Six Sigma Expert View by Venugopal R

The damage, or the extent of damage, due to a failure may often be averted or reduced if the failure is detected sufficiently early. A very common example: if a smoke detector raises an alarm, there is a high possibility that a fire which is about to spread can be attended to and put out. It gives a certain comfort when we are assured that we have adequate detection ability for certain potential failures. Historical data and experience showing that a particular type of failure has a very low frequency of occurrence is another piece of information that can influence our comfort level with respect to a potential failure; we have better quantifiable methods available today to express the 'capabilities' of processes, if we have to. And even if the failure occurs, the extent of consequential damage it could cause is yet another factor that decides the extent to which we may breathe easy.

We recognize that these three factors are considered in the FMEA methodology in the form of Detection, Occurrence and Severity. Thus, the worst can happen if a failure capable of causing damage of high severity occurs frequently and catches us by surprise. If any one of these factors is addressed favorably, we can prevent or reduce damage. With many knowledgeable members in this forum, the FMEA method, which is essentially a cross functional activity, would not require any further detailing here.

While FMECA is widely defined as an extension of FMEA, and the criticality calculation was defined in MIL-STD-1629A long ago, it is still possible to raise questions about the clarity and uniform understanding of the method. I am not getting into the details of the calculations for the 'qualitative and quantitative' methods used to evaluate criticality, which decides the prioritization of corrective actions for risk mitigation, and to which most forum members would have been exposed. However, the emphasis of criticality analysis is on improving design and system reliability, whereas the RPN number in FMEA gives a practical approach for prioritization that considers the detection capabilities as well.

It is my belief that we would all agree that FMECA is a step up from FMEA, one that drives us to keep improving design robustness, preventive controls and mistake proofing as much as possible, and to make it a continuous effort.
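For completeness, here is a minimal sketch in Python of the standard FMEA prioritization arithmetic mentioned above (RPN = Severity x Occurrence x Detection). The failure modes and ratings are hypothetical, invented only to show the calculation.

```python
# RPN (Risk Priority Number) = Severity x Occurrence x Detection, each rated 1-10.
# The failure modes and ratings below are hypothetical examples.

failure_modes = [
    # (description, severity, occurrence, detection)
    ("Connector solder crack in field", 9, 3, 7),
    ("Cosmetic scratch on housing",     2, 6, 2),
    ("Firmware hang on power-up",       8, 2, 4),
]

# Rank failure modes by RPN, highest first
ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)

for desc, s, o, d in ranked:
    print(f"RPN = {s*o*d:3d}  (S={s}, O={o}, D={d})  {desc}")
# The highest RPN gets attention first; a high-severity failure that occurs
# rarely but is hard to detect (like the first item) can still top the list.
```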