
Venugopal R

Excellence Ambassador

About Venugopal R

  • Rank
    Advanced Member

Profile Information

  • Name
    Venugopal R
  • Company
    Benchmark Six Sigma
  • Designation
    Principal Consultant

  1. Benchmark Six Sigma Expert View by Venugopal R

"Stop-the-line" is a tough situation. On one side, the organization wants to ensure that product quality is not compromised. On the other side, there are delivery targets for the day, for which Operations is answerable. Further, if the line is stopped, it causes idleness for the workers, who get an unexpected respite from their routine, and the supervisors have to manage a workforce that may scatter once the line stops. Getting them back and regaining the rhythm is another concern. Altogether, it is certainly not an enviable situation for anyone. Even when Taiichi Ohno introduced the 'stop-the-line' system at Toyota, he had many opponents within the organization. The 'Andon cord' was a popular system whereby any worker, at any level, had the right to pull the cord, resulting in stoppage of production. The concerned staff and workers would then discuss how to resolve the issue as quickly as possible and restore the production line.

Many organizations claim that they practice this system and even empower certain employees to stop production based on an observed issue. However, the person who stops the line is bound to face many questions. Some of them are:

  • We could have continued the production and fixed it without stopping.
  • This problem doesn't appear serious enough to have stopped the production.
  • Why did you wait till this problem reached the production line and not discover it earlier?
  • Who approved the process? Why are you not questioning him / her instead of stopping the line?
  • You will be held responsible for the loss of production, because you stopped the line.
  • Since you stopped the line, you are responsible for fixing it and restarting the line.
  • We could have completed today's 'numbers' and then fixed the issue as retro-fitment in the finished goods warehouse.
  • This is how we have been producing all along. Why are you waking up today and stopping the line?

As seen, this is one of the common causes of shop floor 'disputes' between Quality and Operations. I will share some approaches below that have helped in taking rational decisions on line stoppages.

  • Firstly, there has to be a clear, unified commitment from the leadership team for such situations, which are bound to happen some time or other.
  • A very good QMS is important to ensure that adequate systems and controls exist on supplied parts, first part approvals, equipment calibrations, change management, design controls for products and processes, and so on. We do not want to empower line-stoppage and keep facing it every day! It should only occur as a rare situation.
  • A well documented procedure should explain the circumstances under which a 'stop-the-line' decision may be taken.
  • The procedure will cover who can take such a decision. Care needs to be taken to ensure that the coverage is reasonably adequate to prevent dependency on just one or a few individuals who may not be available during such an emergency.
  • The procedure should also include the reaction and restoration plans. Sometimes these would involve pulling back finished goods and retesting / reworking them.
  • A "shooting the messenger" attitude should be discouraged, and the focus should be on quick and effective restoration.
  • While every effort needs to be taken to prevent 'false alarms', a rare incident of a false alarm needs to be taken in the right spirit. It is better to be safe than sorry! The management should be relieved that someone has been able to point out the defect. Though it resulted in a line stoppage, it is a better situation than a product recall.
  • Every line-stoppage situation has to be taken as an incident of hard learning and should be included in the directory for preventive actions.
  2. Benchmark Six Sigma Expert View by Venugopal R

'Game Theory' is the study, through theoretical frameworks and mathematical models, of social situations among competing players, with the aim of supporting optimal decision making. In game theory, a Nash Equilibrium refers to a state of decision making among two or more players in which each player knows the 'equilibrium strategy' of the other players and no player can gain anything by changing only their own strategy. The idea is illustrated in most literature on this topic using the popular example of the 'Prisoner's Dilemma'.

To see the application of Nash Equilibrium in a business scenario, consider two competing companies, A and B, trying to fix their pricing strategies for a competing product. A relatively higher price can pull down the sales volume and, in turn, the overall profits. Let us assume that the strategy of one player is known to the other player. The different scenarios could be as follows:

1. Player A fixes a High price and Player B also fixes a High price
2. Player A fixes a High price and Player B fixes a Low price
3. Player A fixes a Low price and Player B fixes a High price
4. Player A fixes a Low price and Player B also fixes a Low price

The scenarios are represented in a payoff table. Let us fix a numerical index of profitability, shown inside the cells of the table: the first number represents the profitability index for Player A, and the second represents the same for Player B. The profitability index is influenced by price and volume; a higher price pulls down the volumes, thus reducing the overall profitability. From the table it appears that the lower price has improved the overall volumes for both players, resulting in the best profitability index for both.

In scenario 1, both players have an equal profitability index. However, Player A would be tempted to move to scenario 3 to increase its index; similarly, Player B might shift to scenario 2 to improve its index. Also, in scenarios 2 and 3, there is a possibility for Players A and B respectively to change strategy to improve their competitiveness with respect to the profitability index. However, in scenario 4, neither player sees a benefit in changing its strategy given its competitor's strategy, and hence we can expect the best stability. The state represented by scenario 4 in this example denotes the Nash Equilibrium. In this state, there is a 'Win-Win' situation for both players as well as for the consumers!

We may think of another business situation where the players are the Marketing department and the Product Development department. While Product Development's strategy is to include more innovative features in a newly developed product, Marketing's strategy is to time the launch of the product to beat the competition. We can build a scenario to study the effect on the 'market success' of the product, where the factors are the level of innovative features and the time to launch. There could be a state of Nash Equilibrium where neither player wants to alter its strategy after knowing the other's strategy. At the Nash Equilibrium state, Marketing would not want to squeeze the dates any further, since product competitiveness may be affected. Product Development would put a freeze on the features to abide by the launch dates, as otherwise there is a risk of their future funds getting impacted due to potential loss of revenue opportunity. Again, the Nash Equilibrium gets them to settle for a Win-Win situation! The above examples have considered only two players for simplicity, whereas there could be more in real-life scenarios.
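To make the idea concrete, here is a minimal Python sketch that searches for pure-strategy Nash Equilibria in a 2x2 pricing game. The payoff numbers are illustrative assumptions (not the values from the table in the original post), chosen so that the (Low, Low) cell is the equilibrium, in line with the narrative above.

```python
# Minimal sketch: find pure-strategy Nash Equilibria in a 2x2 pricing game.
# Payoff values below are illustrative assumptions only.
from itertools import product

strategies = ["High", "Low"]

# payoff[(a_strategy, b_strategy)] = (profit_index_A, profit_index_B)
payoff = {
    ("High", "High"): (6, 6),   # scenario 1
    ("High", "Low"):  (2, 8),   # scenario 2
    ("Low",  "High"): (8, 2),   # scenario 3
    ("Low",  "Low"):  (7, 7),   # scenario 4 (assumed best for both here)
}

def is_nash(a, b):
    """A cell is a Nash Equilibrium if neither player gains by deviating alone."""
    pa, pb = payoff[(a, b)]
    a_best = all(payoff[(alt, b)][0] <= pa for alt in strategies)
    b_best = all(payoff[(a, alt)][1] <= pb for alt in strategies)
    return a_best and b_best

for a, b in product(strategies, strategies):
    if is_nash(a, b):
        print(f"Nash Equilibrium: A={a}, B={b}, payoffs={payoff[(a, b)]}")
```

With these assumed payoffs, only the (Low, Low) cell is printed, mirroring scenario 4 in the discussion above.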
  3. Benchmark Six Sigma Expert View by Venugopal R

Using the median as a measure of central tendency helps to avoid the effect of outliers. For those who need clarity on the fundamental behavior of mean and median, the following simple example will help. Consider a set of nine data points representing the minimum time in days between failures for nine similar pieces of equipment:

70, 248, 2400, 240, 2, 1460, 230, 180, 440

The mean for the above data is 586, whereas the median is 240. Now consider the data set below, which is the same as above except that the maximum value has increased from 2400 to 4800:

70, 248, 4800, 240, 2, 1460, 230, 180, 440

The mean has shot up to 852, whereas the median remains unaffected at 240. In this situation, the median is a more realistic representation of the central tendency of the data. A few examples where the median may be a better choice:

1. Income data in an organization: It is quite possible that there are a few highly paid individuals, by which the mean could be severely biased; hence the median is preferable.
2. Age of people in a society: A few very senior citizens among a majority of people in the lower middle age band could give a non-normal distribution.
3. Customer satisfaction surveys using a Likert scale of 1 to 10: A few customers voting at the upper or lower extreme could distort the reality; hence using the median helps.
4. Life expectancy based on a specialized treatment: For instance, if most patients had a post-treatment life span in the range of 10 to 15 years, one odd patient living for 45 years could give an unrealistic expectancy, unless we use the median as the measure of performance.
5. Comparative tests performed on non-normal distributions, known as non-parametric tests, are based on the median. Examples of such tests are 1-Sample Sign, Wilcoxon Signed Rank, Mann-Whitney, Kruskal-Wallis, and Mood's Median.
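The arithmetic above is easy to verify; a quick check with the Python standard library's statistics module, as a sketch:

```python
# Verify the mean/median behaviour described above using the standard library.
from statistics import mean, median

original = [70, 248, 2400, 240, 2, 1460, 230, 180, 440]
with_larger_outlier = [70, 248, 4800, 240, 2, 1460, 230, 180, 440]

print(round(mean(original)), median(original))                        # 586 240
print(round(mean(with_larger_outlier)), median(with_larger_outlier))  # 852 240
```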
  4. Benchmark Six Sigma Expert View by Venugopal R

When we have two sets of data with different averages, moving one data point from one set to the other can increase the average of both sets, subject to two conditions: the data point is moved from the set with the higher average to the one with the lower average, and the data point is lower than the average of its original set but higher than the average of the set to which it is moved. The Will Rogers phenomenon can thus cause an improvement in the average values of both groups through mere reclassification, with all the individual values remaining the same.

Let me illustrate this with a simplified practical example. Consider the outstanding loan dues of customers, split into two groups based on age; for simplicity we consider only 5 data points per group. The average for Group 1 is 4600 and for Group 2 is 1370. Assume that one customer in Group 1, with an outstanding of 3200, turns 60 and thus gets moved to Group 2. The Group 1 average increases to 4950, and the Group 2 average also increases, to 1675. It appears as if there is an increase in the average outstanding for both groups, whereas none of the individual values has changed; the effect is due only to the regrouping. Such regrouping, whether done intentionally or unintentionally, can alter the average value of a group. We need to be careful in interpreting such business results and confirm whether the changes are genuine or are a result of the "Will Rogers" phenomenon.

We see articles about how the Will Rogers effect impacts certain practices in the healthcare world. For example, advancements in the methods for detecting the growth of cancer have resulted in classifying more cases as 'stage 3' that were earlier classified as 'stage 2' under the prevailing detection methods. Such a shift in classification has moved certain cases from stage 2 to stage 3. These cases are among the most sick as per stage 2, but among the least sick as per stage 3. By this reclassification, the average mortality rates showed improvement for both stages, whereas there is no change in the overall situation. While mortality rates could have genuinely improved due to advancements in treatment, the Will Rogers effect from reclassification could make the benefits appear further boosted.
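A short sketch of the arithmetic, using hypothetical individual values chosen only to match the group averages quoted above (4600 and 1370); the actual figures from the original table are not reproduced here:

```python
# Will Rogers phenomenon: both group averages rise after moving one value.
# The individual values below are hypothetical, chosen to match the quoted averages.
from statistics import mean

group1 = [3200, 4000, 4800, 5000, 6000]   # mean = 4600
group2 = [900, 1100, 1350, 1500, 2000]    # mean = 1370

print(mean(group1), mean(group2))          # 4600 1370

# The customer with an outstanding of 3200 turns 60 and moves to Group 2
group1.remove(3200)
group2.append(3200)

print(mean(group1), mean(group2))          # 4950 1675
```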
  5. Benchmark Six Sigma Expert View by Venugopal R

Though the evolution of industry in the world has been continuous, it is commonly classified into four stages, or generations, starting from the 18th century. The current advancements are termed the 4th Industrial Revolution, also known as Industry 4.0. Before we discuss the characteristics of Industry 4.0, let's take a brief look at the earlier stages to get an idea of where we have come from.

Industry 1.0: It was during the 18th century that manual methods of production were replaced by the use of steam power and water power in the Western world. The weaving industry was one of the first to adopt it, followed by others. Industry 1.0 may be seen as the beginning of an industrial culture of producing volumes with efficiency and consistency.

Industry 2.0: This revolution came around the beginning of the 20th century and was propelled mainly by the advent of electricity. The advantages of electric power displaced steam- and water-driven machines, and practices for mass production of goods emerged. Further, the development of railroad networks and the telegraph brought people together through travel and communication. This revolution led to a surge of economic growth, and the early concepts of Management and Industrial Engineering surfaced.

Industry 3.0: This is an era that most of us would have experienced in the latter half of the 20th century, with developments starting after the two world wars. Digitization, starting with electronic calculators, and the invention of semiconductors, integrated chips and programmable controllers made deeper strides possible. We saw the growth of extensive use of computers for various industrial and other purposes, which in turn led to the development of the software industry. Software usage expanded to various supporting areas of management, viz. Enterprise Resource Planning, Logistics, Workflow, Supply Chain Management and so on.

Industry 4.0: From the 1990s onwards we saw abundant development in the fields of communication and Internet applications.

Characteristics
Industry 4.0 has revolutionized, and will continue to revolutionize, the methods for exchange of information. While the previous industrial revolutions helped bring the world closer in terms of communications and reach, one of the characteristics of Industry 4.0 is overcoming geographical barriers to carry out various activities on a real-time basis. Cyber-physical systems have brought about a phenomenal transformation in various businesses, enabling machines to communicate intelligently with each other across physical and geographical barriers.

Components
Industry 4.0 is expected to evolve significantly in the near future. It has multiple components, many of them inter-related, and various articles list several components for Industry 4.0. I am furnishing the nine components identified by the Boston Consulting Group:

1. Big Data & Analytics: Analysing large and varied sets of data to uncover hidden patterns, unknown correlations and trends, and to obtain meaningful inferences that help in various situations, especially business.
2. Autonomous Robots: Robots will eventually interact with one another, work side by side with humans and undergo continuous learning.
3. Simulation: 3D simulation of product and material development and of production processes will become widespread, allowing operators to work out machine settings for the next product in advance.
4. Horizontal & Vertical System Integration: Horizontal integration means networking between individual machines, items of equipment or production units. Vertical integration means gaining control of, and connection between, different parts of the supply chain.
5. Internet of Things: A network of a multitude of devices connected by communication technologies, resulting in systems that can monitor, collect, exchange and analyse data and deliver valuable new insights.
6. Cyber Security: Processes and controls designed to protect systems, networks and data from cyber attacks.
7. Cloud Computing: Storing and accessing data and programs over the internet, providing real-time information and the scalability to support a multitude of devices and sensors, along with all the data they generate.
8. Additive Manufacturing: Also known as 3D printing, it is used to prototype and produce individual components.
9. Augmented Reality: Currently at a nascent stage, these systems support a variety of services, such as selecting parts in a warehouse and sending repair instructions through mobile devices.

The above list may not be exhaustive, and we can expect new components to be added rapidly going forward.
  6. Benchmark Six Sigma Expert View by Venugopal R

Burn Down and Burn Up charts are used in Agile Scrum for visual tracking of the progress of a project. The charts typically plot project story points on the Y axis and the number of iterations on the X axis. Story points are a metric used in agile management to quantify the effort for implementing a given story; sometimes time (total FTE hours) is used instead of story points.

Burn Down charts show the remaining amount of work and the pace of the project with respect to the target, and give an idea of how close or far the actual completion date will be, compared with the targeted date, at the current pace of the project.

Burn Up charts show the progress of work to date with respect to the ideal curve. They have an additional horizontal line that shows the scope of the project at any point in time; in case there is a change in scope, this line shows it as a step up or down. This chart helps in assessing the real effort being put in by the team, since the effect of scope changes can be accounted for; thus it also helps in assessing KPIs.

Since Burn Down charts depict the remaining work to be completed as compared to an ideal target at each iteration, they are useful for providing commitments to clients and keeping them apprised of how close the project is to completion. Burn Down charts are simpler and more easily comprehensible, and serve the purpose if there are no changes in scope. Both Burn Up and Burn Down charts may be used together for a project for their respective benefits.
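As a minimal sketch (with assumed story-point numbers, not from any real project), the data behind a Burn Down and a Burn Up chart can be computed like this:

```python
# Burn Down vs Burn Up data for a small project; all numbers are assumed.
initial_scope = 100                              # total story points at the start
completed_per_iteration = [12, 15, 10, 18, 14]   # points completed each iteration
scope_change_per_iteration = [0, 0, 10, 0, 0]    # scope added in iteration 3

scope = initial_scope
done = 0
for i, (completed, added) in enumerate(zip(completed_per_iteration,
                                           scope_change_per_iteration), start=1):
    scope += added               # Burn Up charts show this scope line
    done += completed            # Burn Up charts plot cumulative completed work
    remaining = scope - done     # Burn Down charts plot this value
    print(f"Iteration {i}: scope={scope}, completed so far={done}, remaining={remaining}")
```

Plotting 'remaining' against the iteration number gives the Burn Down line, while plotting 'done' together with the 'scope' line gives the Burn Up view, including the step caused by the scope change.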
  7. Benchmark Six Sigma Expert View by Venugopal R

The Bathtub Curve (BTC) is well known for depicting the behavior of the failure rate of a product over its life cycle. As per the BTC, the failure rate, or probability of failure, of a product is high during its initial life; once it survives a certain initial period, the failure rate reduces and remains more or less constant until it reaches a 'wear-out' period, when the failure rate starts increasing. The failures that occur during the initial phase of life are also known as 'infant mortality', and the failures that occur during the last phase are 'ageing' or 'wear-out' failures.

The causes of failures at the initial stage and at the wear-out stage are different. The initial failures could be due to design flaws, manufacturing defects or shipment-related damage. Companies resort to various measures to reduce or manage infant mortality. It is a common practice to subject electronic products to a 'burn-in' test with the intent of weeding out most of the initial failures before shipping the product. Other methods involve pre-delivery inspections and testing of the product at the dock or after shipment to its location. Manufacturers also provide a 'warranty' that covers free replacement / repair of products that fail during early life, subject to certain conditions. The increase in failure rate towards the later part of life is due to wear-out of components and other environmental factors such as corrosion, degradation etc. Methods to reduce the rate of wear-out failures include preventive maintenance, operating procedures, and periodic replacement of certain parts such as bearings, belts, tires etc. to protect the rest of the product.

High infant mortality, if not controlled within acceptable limits, adversely impacts the reputation of a product. Further, the resulting expenses due to warranty replacements and product recalls are a drain in the form of costs of poor quality (COPQ). Some products, like cell phones and computers, turn obsolete very fast due to rapid upgrades of technology and features. Most users may change over to newer models even before these products complete their 'useful life' period, so they may never reach the point of wear-out failures. However, the concept of infant mortality very much applies to them. The concept of the BTC applies to all 'durable' products that are expected to be in use for a considerable period of time.

A related metric used for products such as consumer durables is MOL (Month of Life) failures. Here, shipped products are tracked as month-wise batches based on installation dates, and the service incidents that occur during each month of their life are tracked for a certain period of time. The company will have MOL performance targets to be achieved.

Interestingly, the 'early failure' concept applies not only to manufactured products, but to new processes as well. We would have experienced that after rolling out a new process, a high number of process failures is likely during the early days after implementation, based on which corrections are carried out. For example, a newly introduced courier service had several incidents of consignments going to wrong locations due to a flaw in their bar-coding system. While ageing failures of a manufactured product may be controlled to a certain extent by adequate maintenance procedures, we can reduce or even prevent ageing failures in services by exercising good knowledge management.

The concept of the Bathtub Curve is not very applicable to goods that are consumed very quickly, such as food items and certain FMCG goods. For such products, the life cycle is too short to distinguish the three phases of the BTC; instead, the concept of 'shelf life' is more applicable.
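A common way to approximate a bathtub-shaped failure rate is to add Weibull hazard functions with shape parameters below, equal to and above 1 (decreasing, constant and increasing hazard respectively). This is a standard reliability-engineering sketch rather than something from the original post, and the parameter values below are illustrative assumptions, not fitted to any real product data:

```python
# Bathtub-shaped hazard approximated as the sum of three Weibull hazards.
# Shape < 1: infant mortality; shape = 1: constant (useful life); shape > 1: wear-out.
# Parameter values are illustrative assumptions only.

def weibull_hazard(t, shape, scale):
    return (shape / scale) * (t / scale) ** (shape - 1)

def bathtub_hazard(t):
    infant = weibull_hazard(t, shape=0.5, scale=20.0)     # decreasing failure rate
    useful = weibull_hazard(t, shape=1.0, scale=200.0)    # constant failure rate
    wearout = weibull_hazard(t, shape=4.0, scale=120.0)   # increasing failure rate
    return infant + useful + wearout

for t in [1, 5, 20, 50, 100, 130, 150]:
    print(f"t={t:>3} months  hazard={bathtub_hazard(t):.4f}")
```

Evaluating the hazard at increasing ages shows it falling, flattening and then rising again, which is the bathtub shape described above.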
  8. Benchmark Six Sigma Expert View by Venugopal R

While a Pareto chart depicts a set of factors in descending order of their frequencies of occurrence, a Paynter chart divides each bar of the Pareto into further sub-groups. The sub-groups thus split out should be comparable across the bars of the Pareto. Paynter charts are popularly known for combining run charts with a Pareto to track the trend of each sub-group item across a period of time. However, a Paynter chart is also useful for depicting comparisons of sub-groups other than time. For instance, if we have a Pareto for the number of units of a product sold across various metros, we may construct a Paynter chart by sub-grouping each bar by product type. The Paynter chart represents the split-up details of each bar of the Pareto in the same order. In such a pair of charts, the Pareto may show that overall sales for Metro 4 are lower than for Metro 1, while the Paynter breakdown reveals that the sale of Product B is higher in Metro 4 than in Metro 1; similarly, other inferences may be derived. It helps to prepare the overall Pareto chart first and then create the Paynter chart for the chosen sub-groups.

The Paynter chart helps to drill down to the next level of detail from a Pareto chart and is very useful for root cause analysis. Other examples where we can apply the combination of Pareto and Paynter charts would be:

1. Defect types across months
2. Productivity of a set of services across multiple sites
3. Purchase volumes of a product across different age groups

The Paynter chart may also be adapted to track the impact of actions on specific issues; for example, to track the improvement of specific product failures based on corrective actions implemented. For this, the Paynter chart is constructed with the sub-groups following a chronological sequence.
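A Paynter chart is essentially a Pareto ordering plus a cross-tabulation by sub-group. A minimal sketch with made-up sales records (the metro and product names and counts are assumptions for illustration):

```python
# Build Pareto ordering and a Paynter-style breakdown from raw records.
# The records below are made-up illustrations.
from collections import Counter, defaultdict

records = [("Metro1", "A"), ("Metro1", "A"), ("Metro1", "C"),
           ("Metro2", "B"), ("Metro2", "A"), ("Metro3", "C"),
           ("Metro4", "B"), ("Metro4", "B"), ("Metro1", "A")]

# Pareto: total count per metro, in descending order
totals = Counter(metro for metro, _ in records)
pareto_order = [metro for metro, _ in totals.most_common()]
print("Pareto order:", pareto_order)

# Paynter: split each Pareto bar by product type, keeping the same order
breakdown = defaultdict(Counter)
for metro, product in records:
    breakdown[metro][product] += 1

for metro in pareto_order:
    print(metro, dict(breakdown[metro]))
```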
  9. Benchmark Six Sigma Expert View by Venugopal R

VUCA is an acronym coined by the American military to describe certain extreme wartime situations. The fast-paced world has led corporate leaders to find VUCA applicable to many corporate situations. I would like to share my thoughts and experiences below.

Example 1
One of the common VUCA situations in the corporate world arises during mergers and acquisitions. Volatility can be expected, with several changes happening simultaneously: new senior leadership formation, organization reshuffles, new clients, products and services, new geographies, new processes and operating platforms, and so on. Uncertainty will prevail among employees, shareholders and other stakeholders. Long-term customers and suppliers could get perturbed and be left wondering whether they will be impacted by the change. Complexity may arise out of the need to integrate different workflows, processes and platforms; synergizing operations, sites, technologies and functions also presents complexities. Ambiguity crops up due to the possibility of multiple interpretations of the same messages, policies, procedures and reporting structures. Another area of ambiguity that is likely to hover for some time could be the name of the combined organization.

Example 2
Another situation that comes to my mind is the time when the city was affected by severe floods. We saw volatility in the form of our normal life getting disrupted overnight and the damage spurting up in various forms: our office sites became inaccessible, all communications were disrupted, all modes of power failed, employees got stranded, some critical equipment got submerged, and so on. We were uncertain about how long it would take to get back to normalcy. We did not know how much damage had happened and what more might happen. We did not know about the safety of employees and had no means of communication. We did not know how much our customers would be impacted by delivery issues and how they would look upon us. What were the complexities that we confronted? Our BCP / DRM strategies swung into action. We had to divert our work to other sites that were not affected and to sites overseas, which meant employing different resources to handle new work, with their limited familiarity and training. Damage to critical equipment that was part of the backup process meant data was not accessible to support the remote locations. Since the entire city was impacted, rescue and draining equipment were in high demand and not easily available. Many employees had emergencies at home as well, and balancing priorities between saving home and office was challenging. Arranging logistics to safely move available employees from their homes to workable locations and providing them food proved complex in such a disastrous situation. Ambiguity prevailed, with multiple inputs and information coming in about the condition of our sites and of employees' homes. It was difficult to agree on priorities with the limited resources available. Providing any sort of commitment to clients about returning to normalcy was very difficult with inconsistent and ambiguous inputs.

Example 3
Moving to another scenario, we once faced a situation where there was a mass exodus of employees from a particular division, most of them lured by a competitor in the city who suddenly set up an operation for the same services. Volatility was seen in the speed at which we were losing employees; with more people leaving and rumors spreading, it became contagious, and in a very short time more than 80% of the employees in that division had disappeared. Uncertainty was further fueled by the volatility, with rumors spreading about possible closure of the division, and many employees were in a dilemma whether to continue or to quit. The management was uncertain about how much loss would be caused and whether we would be able to retain the customers. Complexities included inducting employees from other divisions and frantically hiring new employees, getting them trained and exposed to the client requirements and processes without disrupting deliveries and accuracy levels. Ambiguous inputs were received on the number of employees lost and potentially to be lost. The cause of getting into such a situation was presented and interpreted by various people in different ways: whether it was the company, the competition or the customer who was responsible for the exodus.

Dealing with VUCA
Coming to the Business Excellence practices that could help during VUCA situations, we may look at each of the four components of VUCA, though some practices help across all of them. One essential practice is to have good Business Continuity and Disaster Recovery Management plans; these are useful for unexpected situations that may emerge due to natural and other causes. Volatility coupled with uncertainty can result in 'not knowing the unknown' and hence a risky situation. Anticipate potential failures and prepare for the worst; PFMEA can help assess each potential risk, classify priorities and build preparedness. Proactive multiskilling and multitasking of resources helps in situations of heavy fluctuation in product sales or employee turnover. Complexities can arise out of the interconnection between processes and activities and the interdependence of multiple decision factors. Entity relationship diagrams help to define and depict relationships between multiple entities. We may use SIPOC for a high-level depiction of a process in a complex situation, and if sufficient data can be obtained, we may use multiple regression models. Ambiguities arise because there is room for multiple interpretations of the same data, coupled with insufficient data. One approach in statistical decision making is to attach a confidence level to any inference; even though we are unable to decide something with 100% certainty, we can take a practically applicable interpretation by attaching a quantified risk. The risk levels can decrease with an improved quantum of inputs over time.
  10. Benchmark Six Sigma Expert View by Venugopal R

The term 'regress' means 'to return to a former or less developed state'. Regression testing is the term used in software testing for checking whether functionality that was performing well before a change continues to perform well after the change is introduced. Those of us who have been associated with software quality, directly or indirectly, are likely to have experienced the loss or deterioration of existing functions of a software product after it is subjected to some change, be it a bug fix or an upgrade. A similar situation prevails in manufacturing and other products as well. I once had the AC condenser replaced in my car, after which the AC functioned perfectly; however, after a day, I realized that some of my dashboard indicators, which used to work properly, had stopped working. Most of you would have faced similar issues. Though the term 'regression testing' is associated with software testing, the need for similar testing is prevalent in all fields.

As LSS professionals, we are often torn between "effectiveness and speed". How do we know whether any previous functionality will be adversely impacted by a change in a software product? How do we decide which functionality could be impacted and how much variability needs to be considered for evaluation? Since we do not know the 'unknown', the blind decision would be to test everything, i.e. retest all. While this could be very expensive and time consuming, it still does not guarantee that all regression issues will be identified. For instance, not all test cases prepared earlier may be reusable; some would have become obsolete. Moreover, those test cases could have been developed without considering any interactive impact of the new change. Hence even if we can afford a 'retest all', it may not provide the required protection. Software organizations and divisions therefore strive to find the components and areas most vulnerable to being impacted by the change and ensure effective evaluation of those functionalities, coupled with contingency plans. Based on my experience, I share a few thoughts below and look forward to seeing many more from other professionals.

Test automation
Test automation is a fundamental approach that may be adopted. It is especially useful when we re-run the same test cases involving multiple scenarios. There are numerous tools available, depending upon the need, such as Selenium, Ranorex and QTP, to name a few.

Keep the client in the loop
Where the software product is for a client organization and the change is requested by the customer, it is common practice to keep the customer involved during development and during UAT planning and execution for the change. Many clients with long-term tie-ups insist that they be kept involved throughout the change process for all changes, even those not initiated by them. It may so happen that certain tests pass in the developer's environment but fail in the customer's environment.

Keep 'roll back' options
Despite the best attention, if an unexpected regression issue crops up only after going live, the developer should have ensured the option to switch back to the previous version until the issue is fixed. However, this may not be practical in all cases, for instance if the change is linked with a change done at the client side as well.

Prioritising test cases
Selecting the test cases, with the confidence of not missing an important one, is a tricky decision and should not be done without cross-functional involvement. A good understanding of test cases that detect frequent defects, functionalities that have high user visibility and sensitivity, test cases related to core functionalities, and integration test cases are important inputs for deciding the priorities. FMEA is a useful tool to structure, document and prioritize this profusion of knowledge and experience (a small illustrative sketch is given at the end of this answer).

Test driven development
The legacy approach is to re-run the previous set of test cases as 'regression testing'; as discussed earlier, it may not always be adequately effective. Developing appropriate tests in parallel with the code development for the change helps in identifying relevant test cases. Regression testing may also be integrated with the exploratory testing used in Agile development.

Mitigation options other than tests
Shape the development approach keeping in mind that changes are to be expected; with the focus on 'Agile' development, customers want to keep the options for change requests very open. The system architecture may focus on building smaller, more independent components. This makes the effect of changes more isolated than in the case of common coding, and hence the test cases can be focused and limited.

Version and change controls
Well defined and well complied-with version and change controls help ensure that access to code changes is controlled and authorized and undergoes the required levels of approval, thus preventing the possibility of impairing unrelated sections inadvertently or through ignorance.

The success of optimizing the effort and resources for regression testing is achieved with a combination of cross-functional involvement, software quality management systems, and proactive developmental prudence.
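As mentioned under 'Prioritising test cases', the selection can be structured as a simple risk score, similar in spirit to an FMEA RPN. The sketch below is purely illustrative; the test case names, score scales and the budget of three test cases are assumptions, not a prescribed standard:

```python
# Illustrative prioritization of regression test cases by a simple risk score
# (defect history x user visibility x criticality), similar in spirit to an FMEA RPN.
# Test case names and scores are assumptions for illustration.

test_cases = [
    # (name, defect_history 1-5, user_visibility 1-5, criticality 1-5)
    ("login_flow",           4, 5, 5),
    ("report_export",        2, 3, 2),
    ("payment_gateway",      3, 4, 5),
    ("profile_photo_upload", 1, 2, 1),
]

def risk_score(defect_history, user_visibility, criticality):
    return defect_history * user_visibility * criticality

ranked = sorted(test_cases, key=lambda tc: risk_score(*tc[1:]), reverse=True)

budget = 3  # suppose we can only afford to re-run 3 test cases this cycle
for name, *scores in ranked[:budget]:
    print(f"Run: {name:22s} risk score = {risk_score(*scores)}")
```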
  11. Benchmark Six Sigma Expert View by Venugopal R

Genchi Gembutsu translates to "go and see". It is a term that emerged from the Toyota Production System. Japanese leaders like Taiichi Ohno insisted that engineers visit the Gemba and observe how value is created and how waste is generated. The context in those situations was a manufacturing shop floor, or the actual usage of, and expectations from, a product by the end customer. In many of the newer businesses like software development, IT services and e-commerce, we may not have a comparable shop-floor atmosphere. However, we do have customers, customer expectations, customer usage experiences, competitive offerings etc., and similarly we have design teams, operations teams, customer relations teams etc. Any software being developed is meant to interface with a human process to serve some purpose. Many times there is a 'requirement' document created by the user (internal or external), based on which the development commences. The developed software product seldom comes right the first time and requires iterations of rework until it meets the user's requirements. Applying the principle underlying Genchi Gembutsu is very important to reduce such wastage of effort and resources.

For example, imagine a software development exercise to create a web interface for potential customers who want to approach a bank for a product. The developer would have to get a feel for the requirements by:

  • Becoming a potential customer himself / herself
  • Obtaining first-hand inputs from a representative sample of potential customers
  • Studying the similar facilities provided by competitors in the market
  • Visiting the recipients of these inputs (the sales team or contact center) and understanding how best the inputs should reach them for further action
  • Identifying possible areas of ambiguous interpretation and ways to improve user-friendliness
  • Checking the adaptability of the portal to multiple applications and mobile devices
  • Checking the ability to reach users through popular social media
  • Identifying areas where flexibility of coding is important, considering the possibility of ongoing modifications and upgrades

The above are just examples to illustrate the possibilities. With adequate involvement of the right teams and brainstorming, one could arrive at the points most appropriate for the situation. Taking the example of an e-commerce platform, the most obvious Gembas would be the end user and all the locations where the customer requests and inputs are made use of, viz. the teams involved in order processing, logistics, payment and delivery. As discussed in the above examples, a customized list of check points has to be evolved. Direct knowledge and feel of the inbound and outbound users will also help in developing appropriate test cases for effective and efficient UATs.
  12. Benchmark Six Sigma Expert View by Venugopal R

One of the important prerequisites for answering this question is a good grasp of the concepts, methods and interpretation of tests of hypothesis. My discussion below assumes that readers are reasonably conversant with TOH.

Hypothesis testing is a popular and well known method among Lean Six Sigma practitioners. The fundamental rule applied in hypothesis testing is that a null hypothesis, or 'hypothesis of no difference', is compared against an alternate hypothesis to examine whether sufficient evidence exists to reject the null hypothesis. For instance, suppose we want to compare the mean values of two sets of data, say the weights of sachets of medicinal solution packed by two different methods. In this case:

H0: Mean weight of sachets by method 1 = Mean weight of sachets by method 2
HA: Mean weight of sachets by method 1 ≠ Mean weight of sachets by method 2

If we run a two-sample test, depending on the p-value, we either reject H0 or 'fail' to reject H0. While interpreting the TOH results, the important point to note is that even when we fail to reject H0, it does not necessarily mean that H0 is true. It is just that we do not have sufficient evidence to reject H0; it does not establish equivalence.

Equivalence tests allow us to conclude equivalence within a confidence interval. For an example like the one above, we need to specify the extent of difference between the group averages that is considered important. The ideal difference is zero, so we specify an interval that spans either side of zero; differences that fall within this range are considered practically insignificant, and equivalence may be concluded. The largest difference permitted by the specified interval defines the 'equivalence interval'. The hypothesis statements for the equivalence test are:

H0: The difference of means between groups is outside the equivalence interval
HA: The difference of means between groups is inside the equivalence interval

Typical graphical output from tools such as Minitab illustrates both situations: where equivalence can be claimed and where it cannot. Equivalence tests are known as the "opposite of hypothesis testing" because conventional hypothesis tests look for evidence of a difference, or 'non-equivalence', whereas equivalence tests look for evidence of equivalence. When our aim is to establish equivalence between groups, or with respect to a standard, equivalence tests are the more advantageous choice.
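A common way to run an equivalence test is the 'two one-sided tests' (TOST) procedure. The sketch below uses a large-sample normal approximation with only the Python standard library, so it illustrates the logic rather than reproducing Minitab's t-based equivalence test; the sachet weights and the ±0.5 g equivalence interval are assumed values:

```python
# Two one-sided tests (TOST) for equivalence, using a normal approximation.
# Sachet weights (grams) and the +/- 0.5 g equivalence interval are assumed values.
from statistics import NormalDist, mean, stdev
from math import sqrt

method1 = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 10.0]
method2 = [10.0, 10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 10.0, 9.9, 10.2]

delta = 0.5    # equivalence interval: difference of means within +/- 0.5 g
alpha = 0.05

diff = mean(method1) - mean(method2)
se = sqrt(stdev(method1) ** 2 / len(method1) + stdev(method2) ** 2 / len(method2))

z = NormalDist()
p_lower = 1 - z.cdf((diff + delta) / se)   # one-sided test of H0: diff <= -delta
p_upper = z.cdf((diff - delta) / se)       # one-sided test of H0: diff >= +delta

equivalent = p_lower < alpha and p_upper < alpha
print(f"difference = {diff:.3f}, p_lower = {p_lower:.4f}, p_upper = {p_upper:.4f}")
print("Equivalence claimed" if equivalent else "Equivalence cannot be claimed")
```

Equivalence is claimed only when both one-sided tests reject their null hypotheses, i.e. the observed difference is demonstrably inside the equivalence interval.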
  13. Benchmark Six Sigma Expert View by Venugopal R

Any sampling method that is used to draw conclusions about a population is bound to have errors. However, it is not practical to assess entire populations in many situations, and one has to rely on sampling. Hence it is important to understand the errors that can arise while using a sample, and to take decisions with that knowledge of sampling error in mind.

Biased and unbiased sampling errors
Biased sampling errors occur when a sample is drawn from a large base and there is a likelihood that certain types of members of the population are not included, or are disproportionately included. For instance, in a bank, in order to understand the reasons for delayed payment of loan installments, we take a sample of defaulters who are employed with various companies. This is a biased sample, since we are getting the causes only from 'salaried' people; defaulters who are not salaried (say, businessmen) may have different causes. Biased errors can also occur when the measuring instrument used on the sample has a bias; for example, if samples are weighed using a weighing machine with a bias, we get biased errors. Even after taking the necessary precautions to minimize bias, sampling can still have errors due to chance variation; these are unbiased sampling errors. For variable data, by the central limit theorem, the variance of the sample mean equals the variance of the population divided by n. In general, if we keep increasing the sample size, the sample characteristics tend towards the population characteristics, and hence the errors reduce.

Sampling techniques
There are various sampling methods that may be chosen for a given situation to limit the errors due to sampling bias; a few of them are given below. Use discretion while using 'non-probability samples', which do not make use of a 'frame'; such samples are subject to unknown bias and are advisable only for rough estimates. For probability samples, decide the best 'frame' for sampling, such that the units within the frame best represent the population.

Simple random sampling
The random sampling method uses a frame with every item numbered from 1 to N, where N is the population size. Random numbers are used to select n samples. Statistically, random samples have no bias in the mean value. The sampling error can be evaluated and kept within limits by controlling the sample size.

Stratified sampling
If we can divide the population into portions based on some common characteristic within each portion, then a 'stratified sample' can be used. Here the frame is divided into portions, or strata, and simple random sampling is applied within each stratum; the results are combined at the end. Stratified sampling helps to reduce the sample size, and hence the cost, compared to simple random sampling, without worsening the bias. For example, if we need to pick samples of products produced at different sites, and within each site the samples exhibit homogeneity of characteristics, we may use the stratified sampling technique.

Systematic sampling
Another method is 'systematic sampling'. Here the population inside the frame is divided into a number of groups depending upon the sample size, and a sample is picked at equal intervals. This method can be useful when taking samples from a running production line, for customer feedback in a supermarket, and in other such situations. However, if there is a sequential pattern in the characteristic, this method will induce bias.

Factors that determine sample size
One more useful piece of information: while doing comparative tests such as tests of hypothesis, a smaller sample size tends to favour retaining H0, since the power to detect a real difference is lower. In some statistical software, you can provide the delta that you consider important, along with the confidence level and the power of the test, to determine the minimum sample size.
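The statement that the variance of the sample mean equals the population variance divided by n is easy to check by simulation. The sketch below uses an arbitrary assumed population (uniform between 0 and 100) purely for illustration:

```python
# Simulation check: the standard deviation of sample means shrinks roughly as sigma/sqrt(n).
# The population (uniform 0-100) is an arbitrary assumption for illustration.
import random
from statistics import mean, pstdev
from math import sqrt

random.seed(1)
population = [random.uniform(0, 100) for _ in range(100_000)]
sigma = pstdev(population)

for n in [5, 25, 100]:
    sample_means = [mean(random.sample(population, n)) for _ in range(2_000)]
    print(f"n={n:>3}  observed SD of sample means={pstdev(sample_means):6.2f}  "
          f"sigma/sqrt(n)={sigma / sqrt(n):6.2f}")
```

As n increases, the spread of the sample means narrows in line with sigma divided by the square root of n, which is why larger samples reduce (unbiased) sampling error.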
  14. Benchmark Six Sigma Expert View by Venugopal R

We have come across many leaders commenting that 'Six Sigma' is not successful. It is likely to be unsuccessful if Six Sigma is treated only as a set of improvement projects. Six Sigma is no different from any other TQM program, but it has its own rigor through its uniquely defined approaches; if those approaches are not adopted holistically, we may not be able to harness the power of the program. One such inbuilt approach is the "Toll Gate review". Before even coming to the question of the quality of Toll Gate reviews, it has to be ascertained whether Toll Gate reviews are being conducted at all! Earlier in this forum there was a question on the objectives of Toll Gate reviews: when they should be carried out, who the necessary participants should be and who the 'if needed' participants are. I believe you would have refreshed yourselves with the answers to those questions, so that we can take the topic further. Assuming that we have taken care of the schedules and the presence of the right participants, let us discuss the administration of the review.

1. Have a clear agenda specific to the review
Do not go with just a generic agenda; each review should have specific purposes. The co-ordinator of the Toll Gate review has to prepare the agenda in advance in collaboration with the key participants of the review, especially the project leaders. There will be points that need leadership clarification, support, guidance or approval. The agenda has to be circulated in advance, at least a day before the review.

2. Selective continuity from previous reviews
Actions that emerged during the previous review should have progressed or been completed as per their schedules. However, if certain actions need further discussion in the forum, they should be selectively included in the agenda at the discretion of the review co-ordinator.

3. Pre-planned presentation structure
The review co-ordinator should begin by providing a snapshot of the overall status of the project and touch upon selected highlights, followed by the presentations of the individual project leaders. Use of a standard presentation template is recommended; however, the leaders may supplement it with customized slides as required to express their points effectively. The time slots for each presenter, including the discussion time, should be pre-planned. If the time is overshooting due to prolonged discussions, the co-ordinator should intervene and plan offline continuation of such discussions with the concerned stakeholders, so that the time slots of other presenters are not compromised.

4. Appreciation for progress and good work
It is important to acknowledge and appreciate the actions completed on time and any other good initiatives. It is recommended to commence the review on such a positive note and then move on to the rest of the agenda.

5. Keep the project charters ready for reference at any time
Sometimes, while discussing a project, ideas keep coming to mind, and senior members may, voluntarily or involuntarily, keep adding to the scope or altering the objective. The review co-ordinator should step in and support the leader by referring to the approved charter if a major deviation from the original objective or scope is being pressed. Only when the situation really warrants it should any change be made to the objective / scope, and then it should be assessed whether the project schedule remains realistic or needs to be revised as well.

6. Direct the questions to the respective project leaders or stakeholders
We often see that the co-ordinator answers most of the questions, and that questions are directed to the co-ordinator. It is important that questions are directed to the respective leaders or stakeholders who are process owners or closely related to the area in question, and that they answer. The co-ordinator can certainly facilitate the discussion, but should maintain his / her limits.

7. Sponsor support plays a major role
One of the problems we come across is inadequate involvement of the sponsors (or champions) of the projects. Very often the project leader is grilled with questions which may not be answerable within his / her level of authority in the organization. A quick review and sync-up on the points between the sponsor and the leader before the Toll Gate review will help. Worse still is a situation where the sponsor questions the leader during a Toll Gate review. It is also important that the leader does not put the sponsor in an embarrassing situation by letting him / her be caught off guard.

8. Do not allow the review to drift
Since the Toll Gate review brings most of the senior leaders together, it is quite possible for the discussion to stray from its intended agenda to other topics. The co-ordinator should be empowered to steer the discussion back on track.

9. Minute the actions and discussions, and follow up
As far as possible, get the points and actions discussed captured as minutes of the meeting and read them out to the participants before closing the meeting. What happens between the Toll Gate review meetings also plays a major role in their success. Following up on actions, identifying challenges or hurdles, and ensuring that any agreed offline discussions happen are some of the important tasks for the review co-ordinator between the review meetings.
  15. Benchmark Six Sigma Expert View by Venugopal R

We would all have learned about DMAIC and DMADV during our Six Sigma courses, and many of us have used these approaches in our projects; hence there may be no need for an introductory explanation of the acronyms. However, I wish to provide my viewpoint based on my experience of handling projects under various circumstances. DMAIC (Define, Measure, Analyse, Improve, Control) is the methodology used in most Six Sigma projects; DMADV (Define, Measure, Analyse, Design, Validate) is seen less often. In general, it is explained that DMAIC is used for improving a process and DMADV is used when a process needs to be designed.

Projects are taken up to address pain points or to pursue improvement opportunities. Do we know, every time we take up a project, whether a process has to be improved or designed? If no process exists, then of course we will have to design one; however, if there is an existing process, we may have to either improve it or re-design it. Sometimes only after completing the Analyse phase are we able to decide whether it is worth improving the existing process or whether it needs to be re-designed. In that situation the D-M-A phases are common, and our decision to improve or (re)design the process is taken only after doing some analysis.

For example, suppose we have a problem relating to supplier quality and we find that parts supplied by certified suppliers are defective. We go through the Define and Measure phases and collect sufficient, relevant data. Once we analyse the data, if we find our problem to be confined to very few suppliers, or find that most of the issues relate to one or two parameters of the process, then we may decide that the existing supplier selection and certification process is by and large successful and needs improvement only in a few areas and parameters. On the other hand, if after our Analyse phase we are convinced that the majority of the suppliers certified through the process have issues, or we see issues across many parameters, then we may conclude that it is not worthwhile fixing the existing process and instead go for re-designing it. In such a case, we may want to trace back through our D-M-A phases and re-define our objective and goals.

If we are using D-M-A as part of DMADV, some of the tools will be the same as in DMAIC, but the intent may be different. For example, Process FMEA may be a common tool: in DMAIC we might use it for the specific steps of the process identified for improvement, whereas in DMADV we would use it for the entire process. There are some tools that are more applicable in DMADV, such as customer surveys and QFD. It may also be noted that when we are clear from the start about the need to design a process, we may go straight for DFSS (Design for Six Sigma); IDOV (Identify, Design, Optimize, Verify) is one of the popular approaches for DFSS.