Venugopal R
Excellence Ambassador
Principal Consultant, Benchmark Six Sigma
  1. Benchmark Six Sigma Expert View by Venugopal R

Let's begin with a brief introduction to the TRIZ methodology. TRIZ is a Russian acronym for "Teoriya Resheniya Izobretatelskikh Zadach", roughly translating as "Theory of Inventive Problem Solving". TRIZ was developed around 1946 by Genrich Altshuller, a Soviet inventor and science fiction author. According to Altshuller, any problem in the world can be related to a previously found solution, if we are able to express the problem at a generic level. The generic solution can then be adapted specifically to our problem. In this context, the three principles of TRIZ are as follows:

Fundamental principles of TRIZ

Principle 1: All problems have solutions outside the technical domain of that problem. Mostly, similar problems in other fields have already been solved by someone else.

Principle 2: An invention happens only when a contradiction is resolved. When we want to improve something, something else is likely to get adversely affected. The invention has to be made without such a deterioration.

Principle 3: There are only 39 general issues faced by inventors. When one of these has to be improved, one or more of the remaining issues could get adversely impacted.

Thus, the challenge is to resolve a problem with an inventive solution without causing any adverse impact. Fortunately, TRIZ also provides guidelines to address the contradictions. Broadly, TRIZ recommends applying the principle of separation to avoid or reduce the effect of contradictions.

Types of Separations
1. Separation in space
2. Separation in time
3. Separation between parts and whole
4. Separation upon condition

I am providing some simple examples below to help understand these types of separations in practical application. A common example of separation in space is how crossroads are managed. We have a contradiction: we want vehicles in both directions to pass quickly without collisions.
By building an overpass, we achieve this by a separation in space.

Consider the requirement for quick braking of an automobile. The contradiction here is that the vehicle should not skid on a wet road while braking. The invention of the Anti-lock Braking System (ABS) ensures that the braking force is separated in time, based on the extent of friction between the tires and the road.

We would like to perform multiple tasks on our computers, but the contradicting parameter is that we would like to keep the power / battery consumption to a minimum. By inventing the sleep mode, we separate the parts and the whole system – the system slips into power-saving mode when not used for a certain period of time; however, the individual components of our work are maintained, so that we can resume from where we left off at any time.

Consider an automated data capture and processing method using optical character recognition. The contradiction in this case is that while we increase the speed of capture, we should not allow errors to pass through. Here, built-in validation rules automatically separate the suspected errors for special attention, applying separation upon condition.

Contradiction Matrix

A contradiction matrix is provided for the 39 parameters. Using this matrix, the contradicting parameter(s) for each parameter can be seen. The matrix also provides certain reference numbers for each contradicting combination of parameters. These numbers refer to another table that contains 40 suggested 'Inventive Principles', which give high-level suggested ideas for solutions. For example, if we need to reduce the weight of a motorcycle, we can refer to parameter no. 1 in the table, which is 'weight of a moving object'. One of the possible contradicting factors would be 'Strength', which is parameter no. 14 on the horizontal row.
An extract of the table is shown below:

Inventive Principles (Table of 40 ideas)

The cell where the row and column of the contradicting parameters intersect contains four numbers. If you refer to these numbers in the table of Inventive Principles (not included in this article), which contains 40 principles, you can see the ideas listed against each of those numbers. These ideas have to be taken as clues to find the solutions. In this example, it is quite likely that the use of composite materials (no. 40) could help in reducing the weight of the vehicle without compromising strength.

Let's also examine a few other examples in the same context of contradiction, to see the applicability of the other ideas. Imagine we have to reduce the weight of a piston moving within a cylinder, without compromising the strength aspect. Taking a cue from point no. 27, "Inexpensive, short-lived object for expensive durable one", we may use a ring around the piston that can be replaced upon wear, but at the same time protects the piston.

Another example would be that of the cam and gear-controlled mechanism traditionally used for intermittent reversal of a washing machine agitator. By applying point no. 28, "Replacement of mechanical system", the weight of several moving parts could be avoided by providing microcomputer-controlled reversibility to the motor using an electronic control board.

Take the example of a cutting process that uses heavy moving tool and holding equipment, where we are interested in reducing the weight of the equipment. By using point no. 18, "Mechanical vibration", the cutting process and equipment could be reinvented using a vibrating tool to produce the same result without compromising strength.

The examples we saw above are inventions that already exist, but we related them to one or more of the 40 inventive principles based on the contradiction matrix. We do not know whether the TRIZ matrix had been used for the above inventions.
Evidently, had they used it, they would have implemented the solutions faster. For new problems, if the TRIZ principles and matrices are referred to, the time and effort of 're-inventing the wheel' for an already known solution, at the generic level, can be saved.

Applicability – beyond Manufacturing

If you go through the table of 39 contradictions and the table of 40 inventive principles, you may observe that the evolution of the TRIZ methodology has predominantly focused on Engineering and Manufacturing. However, the concepts of TRIZ, i.e. contradictions and separations, are applicable to non-manufacturing situations as well, by suitably modifying the tables of contradictions and inventive principles. It has to be borne in mind that these methods will not provide a ready-made solution for your problem; a thorough understanding of these methods and the ability to relate your problem to the generic contradictions and inventive principles is essential for successful application of the technique.
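The matrix lookup described above can be sketched as a small program. This is a minimal, illustrative sketch: it encodes only the single contradiction discussed in the examples (improving parameter 1, worsening parameter 14, whose cell lists principles 28, 27, 18 and 40); a real application would load the full 39 x 39 matrix and all 40 principle names.

```python
# Sketch of a TRIZ contradiction-matrix lookup.
# Only the parameters and the one matrix cell discussed in the article
# are encoded here; a complete TRIZ matrix is needed for real use.

PARAMETERS = {
    1: "Weight of moving object",
    14: "Strength",
}

INVENTIVE_PRINCIPLES = {
    18: "Mechanical vibration",
    27: "Inexpensive, short-lived object for expensive durable one",
    28: "Replacement of mechanical system",
    40: "Composite materials",
}

# (improving parameter, worsening parameter) -> suggested principle numbers
CONTRADICTION_MATRIX = {
    (1, 14): [28, 27, 18, 40],
}

def suggest_principles(improving: int, worsening: int) -> list:
    """Return the inventive-principle names suggested for a contradiction."""
    numbers = CONTRADICTION_MATRIX.get((improving, worsening), [])
    return ["%d: %s" % (n, INVENTIVE_PRINCIPLES[n]) for n in numbers]

# Reduce weight of a moving object (1) without compromising strength (14):
for idea in suggest_principles(1, 14):
    print(idea)
```

Each returned idea is a clue, not a solution: the engineer still has to translate "Composite materials" into, say, a carbon-fibre motorcycle frame.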
  2. Benchmark Six Sigma Expert View by Venugopal R

Benford's law states that for a data set of numbers representing a random sample from many naturally occurring populations, the expected percentage of occurrence of 1 as the first digit is around 30%. Similarly, an expected percentage is assigned for 2, 3 and so on as the first digit. As we move from 1 to 9 as the first digit, the expected percentage keeps decreasing. This expectation follows the logarithmic equation p = log10(n+1) – log10(n), where n is the first digit and p is the probability of occurrence. The phenomenon was first discovered in 1881 by the astronomer Simon Newcomb; the hypothesis was later tested and verified by Frank Benford in 1938.

In today's world, we come across numerous situations where big data analysis is made use of. Hence, it has become important to have some quick and effective method of evaluating the genuineness of data and ruling out possibilities of fraud or inadvertent lapses. Benford's law becomes a useful tool to study and compare the pattern of the data against the expectations for the occurrence of starting digits. While there are applications of this phenomenon in business and practical life, we need to be cautious before drawing conclusions by applying Benford's law.

Before we move into discussions about applying this principle, let's discuss a bit more about the probability of occurrence of the first digits. Some insight into the basis of this law will help us exercise discretion in its application. The full analysis and scientific proof of Benford's law are complex; in an attempt at a simplified view, I am trying to give a quick idea about the dynamics of the behavior. Table-1 below gives the actual percentage of occurrence of 1 as the starting digit for sets of numbers up to 100, 1,000 and so on up to 1,000,000.
We can observe that the actual percentage of occurrence is almost the same across these sets for the starting digits of both 1 and 9. Now see Table-2, where additional groupings have been inserted and highlighted yellow, viz. up to 125,000, 250,000, 500,000 and 750,000. Evidently, the percentage of 1 being the first digit goes up in the highlighted sets in the regions between 100,000 and 1,000,000. You may observe the dynamics of the variation in percentages across these sets and how they compare with that of 9 as the first digit. This clearly brings out the fact that, among many other aspects, it is important to examine the data and look for certain characteristics before applying Benford's law.

This law applies to many types of data such as stock prices, tax calculations, electricity bills, census figures, birth rates, bank accounts etc. The law finds good application in data science for catching anomalies and detecting fraud. However:

The law does not work well with small data sets. Experts recommend at least 500 data points for effective application of Benford's law.

The nature of the data should be such that there is a reasonable probability of occurrence for all the digits from 1 to 9. For instance, if the study involves analysis of heights of humans in cm, the data is unlikely to have any starting digit other than 1. Similarly, if we are dealing with invoices whose minimum and maximum values are known to be 45 and 75, we are not going to see any data beginning with 1, 2, 3, 8 or 9.

The data has to be spread over multiple orders of magnitude, and not uniformly. For instance, if my data is uniformly distributed from 1 to 1000, the probability of occurrence for each first digit is almost the same and Benford's law would not work. (See the example narrated in the link on the question.)

Benford's law may be used as a tool for screening data, where applicable by the nature of the data, but cannot be used as conclusive proof for deciding the credibility of the data.
Where data is suspect based on Benford's law, further investigation will be required to arrive at a final conclusion.
  3. Benchmark Six Sigma Expert View by Venugopal R

As the world moves towards Industry 4.0, it is expected that the associated facets of business will also have to evolve accordingly. Most of us would already be informed about the key elements of Industry 4.0, viz. autonomous robots, simulation, system integration, IoT, cyber security, cloud computing, additive manufacturing, augmented reality, big data and so on. Quality 4.0 and Service 4.0 emerge as followers of this evolution.

Service, in economics, is defined as an intangible commodity – an economic activity where the buyer does not generally, except by exclusive contract, obtain exclusive ownership of what is being purchased. Though there are several discussions about the application of the new-age technologies for Service 4.0, we can expect significant development in this area in the days to come.

The service expectations of a modern-day customer can be quite a large and complex list; the ones listed below are just a few:

Service interactions that are straightforward and spontaneous
Expectation that their individual behaviors, preferences and expectations are understood to an extent
Diverse interaction capabilities with different devices and multi-channel accessibility
Service status and remote monitoring
Quick response to a variety of service demands, including online and remote resolutions where applicable
Round-the-clock, 24/7 availability
Real-time transparency
Intelligence to proactively prompt possible customer options
Predictable and quick response patterns
Responses that are humane and do not appear artificial
Service continuity with portability of historical data

No matter which industry it is, one has to realize that the modern-day customer is well informed and very literate about the technological developments and facilities available in present times.
They will be able to sense a service level that hasn't been updated with the changing world and will prefer to move to those who have adopted and advanced with technology. Many of the technologies referred to as part of Industry 4.0 are applicable to Service 4.0 with specific intents:

Big Data Analytics – To study large amounts of data that help understand individual behaviors, preferences and expectations
IoT – To help connect devices and provide remote servicing / monitoring
Cloud storage and computing – To complement big data analytics and serve various other purposes that require massive data
Augmented reality – To provide a real-world connection to customers while using digitalized approaches
Cognitive computing – To simulate a variety of human thinking using self-learning algorithms and NLP
Automation methods – To reduce the effort required from customers for data gathering, and for repetitive, rule-based services
Bionic computing – For natural interaction with virtual agents, devices and services
Virtualization – To reduce dependency on specific locations, hardware, software or devices to carry out a service

Service revolution 4.0 is expected to re-define customer expectations in terms of speed, flexibility, efficiency, quality and customer experience.
  4. Benchmark Six Sigma Expert View by Venugopal R

Instructions are created for a process with the intent of being followed to maximize consistency while executing the process. This discussion applies to an SOP for a business process as well as to a Work Instruction for a production process. Over a period of time, instructions get amended based on inputs, feedback, changes and continuous improvements. The amendments could be additions of steps, exceptions or changes. The bulk of information in the instruction or SOP grows, and the chance of variation in interpretation increases, thus defeating the original intent of ensuring consistency.

Most large organizations had the practice of maintaining voluminous procedures and instructions even before the days of ISO 9000. However, with the advent of the international QMS standards, the practice of creating various levels of SOPs and instructions received higher impetus and awareness. Work instructions are even displayed, often with pictures and sketches, at the respective work stations. Most of us would have noticed that, for repeated processes and operations, people do not keep referring to SOPs and instructions. Then the question arises: why do we need such SOPs and instructions? Some of the reasons would be:

They serve as ready reckoners and reference documents.
They serve as documents against which process audits can be conducted to assess the extent of compliance.
They are useful while performing RCAs, to assess whether an issue has happened due to non-compliance with laid-down procedures.
They are useful while training / re-training employees.

For most critical processes, operator (or processor) certification is essential, the criteria for which include not only knowledge about the process, but also the skills and ability to perform the task consistently and correctly.
The certification criteria have to be included in the respective SOPs, and it is important to ensure that visible evidence is available at the work stations that the person performing the task is certified. Some actions necessary to keep "instruction creep" under check:

Periodic reviews of the SOPs and instructions are essential to keep all contents up to date, improve sequencing, simplify the contents and reduce ambiguities of interpretation.
Identify and introduce as much automation, visual and audio based andons, and mistake proofing as possible, so that the effort to follow instructions is simplified.
Have clear certification criteria for all critical processes and ensure compliance with the certification process and usage of certified personnel.
SOPs for business processes need to be built in as part of the enterprise workflows, with necessary checks and balances.
For critical activities or decisions that still need human judgement, institute a process by which such decisions cannot be executed by a single individual.
  5. Benchmark Six Sigma Expert View by Venugopal R

Although the term "reporting bias" and its categorization are popularly related to epidemiology, some of the categories are relevant for other businesses as well. Reporting bias is also referred to as 'selective reporting', where certain information tends to get reported dominantly, advertently or inadvertently. Such bias is common, especially in the reporting of scientific matter and clinical trials. Below are various types of reporting biases, their definitions and some general thoughts on how an organization can safeguard against them.

1. Citation bias

Basing a report on other articles and reports carries the risk of providing only 'one side of the story'. There could also be a tendency to report the 'positive outcomes' of a study while not focusing on the 'negative' aspects.

Tips to safeguard: Ensure the practice of quoting the sources of references for citations. Corroboration by multiple references should be insisted upon to obtain a realistic picture of the findings being reported.

2. Language bias

This is a possibility when reporting needs to be done in multiple languages. In an organizational context, while a corporate report would be released in English, there would be a need to translate it into regional languages for the benefit of all levels of employees. Or the need could arise from the organization's presence in multiple geographies. A bias in such a situation could be intentional or unintentional.

Tips to safeguard: Translations may be subjected to review by unbiased translators along with an SME, to ensure that the message does not get biased intentionally or otherwise.

3. Duplicate reporting bias

Duplicate reporting could happen when the same topic is reported multiple times by the same source or different sources. This can result in incorrect exaggeration of certain results or double counting of certain benefits.
Tips to safeguard: Have defined authorities for reporting on specified topics. If anyone else has inputs on the same topic, they need to forward them to the designated authority, to ensure that the chances of duplication are avoided.

4. Selective reporting bias

This is a very common type of bias, where some outcomes are reported and some are omitted, depending upon the nature of the results. Such bias could be introduced with vested motives to skew the interpretation in favor of the expectations of the author.

Tips to safeguard: Provide equal opportunity of representation to all stakeholders for the given report. Any interpretation and related actions need to be taken up only after verifying that the facts are represented in full. Involvement of key stakeholders before final interpretation is important.

5. Time-lag bias

This bias could be related to some of the earlier discussed biases, for instance, selective reporting bias. A positive outcome may get reported faster, and a negative outcome may get reported after a significant time gap. This could result in incorrect actions being taken based on a partial interpretation of the situation. For example, a product launch based only on the success outcomes of a new product trial, reported without timely mention of certain potential risks, could result in serious reputation damage.

Tips to safeguard: Follow a balanced set of business reporting parameters as far as possible, and verify that observations on all parameters have been considered with equal attention and reported on time, before taking major decisions.

For any organization, reporting methodologies for various business activities need to be pre-planned and reporting standards established, taking into account the potential reporting biases. To the extent possible, build in logical validations that could throw up suspected biases; for the rest, organizational discipline needs to be in place to minimize errors due to reporting biases.
  6. Benchmark Six Sigma Expert View by Venugopal R

Monte Carlo simulation is a statistical method, using computers, for quantitative risk analysis. In this method, random sample data are generated based on known distributions, instead of carrying out actual experiments, which could be impractical in many situations. Monte Carlo simulation methods are used in finance, manufacturing, engineering, project management, medicine and many other areas. Interestingly, the method evolved from a study of the outcomes of a gambling game, but was then applied to studying neutron diffusion!

Monte Carlo simulation is not required if we have a deterministic solution based on an analytical relationship between variables. The method is needed where there is influence and interaction by a complex set of variables. For instance, if we need to evaluate the health risks to children due to vehicular pollution in cities, multiple factors and conditions would impact the outcome, such that no two iterations of the simulation would give exactly the same outcome. Based on a large number of trials, a collective picture of the outcome is possible.

To explain in simple terms, when a process takes multiple inputs to produce a combined outcome, the outcome will be reasonably predictable if each of the inputs is stable. However, if there is variation in each of the inputs, it becomes quite complex to predict the outcome. Monte Carlo simulation is a useful method for such situations. The method uses computational algorithms to simulate the process a very large number of times, encompassing the entire variability span of each input. The output is a probability distribution that depicts all outcomes along with the likelihood of occurrence of each. Monte Carlo simulations are considered remarkably accurate models, provided there is good accuracy and randomization in the input data.
The steps involved in a simulation exercise include:

1. Define the problem
2. Collect real system data
3. Formulate and develop a model
4. Validate the model and document it
5. Design the simulation exercise
6. Perform simulation runs
7. Interpret the results
8. Make recommendations based on the results

The simulation outcomes are presented as 'expected ranges', with a confidence level associated with each range. As the number of trials increases, the range of the outcome narrows. Consider an example where the Project Leader comes up with estimated delivery times for a project with three options, viz. Relaxed, Normal and Aggressive. Considering that the tasks for the project depend upon several factors with varying extents of control by the team, such point estimates alone cannot provide the likelihood of each option. Applying Monte Carlo simulation and running a large number of trials, the range of each of the tasks is taken into consideration as random values, and the percentage likelihood of each option is obtained. It may look like the table below.

The above example hopefully gives a broad understanding of how the Monte Carlo simulation tool can help take decisions in a business situation. The methodology can be used for diverse applications. It is very important to ensure that the inputs provided are realistic and properly randomized to obtain a reliable output.
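The delivery-time example above can be sketched as a short simulation. This is a minimal, illustrative sketch: the task names, the (min, most-likely, max) duration estimates in days, and the deadline for each option are hypothetical; triangular distributions stand in for whatever distributions the team's data actually supports.

```python
# Minimal Monte Carlo sketch for estimating project delivery likelihood.
# Task names and (min, most-likely, max) day estimates are hypothetical.
import random

TASKS = {
    "design": (5, 8, 14),
    "build":  (10, 15, 25),
    "test":   (4, 6, 12),
}

def simulate_once():
    """One simulated project duration: a triangular draw per task, summed."""
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in TASKS.values())

def probability_within(deadline_days, trials=100_000):
    """Fraction of simulated runs that finish within the deadline."""
    hits = sum(simulate_once() <= deadline_days for _ in range(trials))
    return hits / trials

random.seed(42)
for label, deadline in [("Aggressive", 28), ("Normal", 32), ("Relaxed", 40)]:
    print("%s (%d days): %.1f%% likely" % (label, deadline,
                                           100 * probability_within(deadline)))
```

The three printed percentages play the role of the likelihood table described above: each delivery option gets a probability instead of a single point estimate.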
  7. Benchmark Six Sigma Expert View by Venugopal R

Any organization that looks for business growth can consider different strategies, viz.

1. Continue with existing products in the existing markets, but try to sell more (Market penetration)
2. Continue with existing products, but try to capture new markets (Market development)
3. Introduce new products and continue with existing markets (Product development)
4. Develop new products that are meant for new markets (Diversification)

Igor Ansoff, a Russian-American applied mathematician and business manager, introduced this concept for organizations to develop their marketing strategy, in the form of a four-quadrant grid known as the 'Ansoff Matrix'. As seen from the Ansoff matrix, quadrant 1, Penetration, is considered the least risky, whereas quadrant 4, Diversification, is considered 'high risk'. The other two options, Market development and Product development, are considered 'medium risk'. Let's briefly look at each of the strategy options:

1. Penetration

This is a low-risk strategy in which the company tries to grow the presence of its existing products in the existing market. It would involve promotion and advertisement programs to sustain and improve product sales within the segments and geographies in which the company already operates. This can happen by increasing the market presence as well as the market share. For instance, a company engaged in selling washing machines may try to grab some of its competitors' market share, as well as try to convert more customers who never owned a washing machine earlier. Though the risk is low in the penetration strategy, the returns too may not be very high. However, it is very important to keep up the pace in whatever the company is already into.

2. Market Development

This strategy is needed to expand the market for existing products. The expansion could be into new geographies or new segments.
For instance, a company would expand its market by deciding to export its existing products. A company that has been focusing on selling air conditioners to commercial customers now targeting household use is an example of expanding the market segment. The market development strategy poses medium risk, since the company is treading upon a new market; it requires research, and the company might have to counter unforeseen factors pertaining to the new market. Once successful in expanding the market, the company will benefit from higher revenues.

3. Product Development

In this strategy, the company continues to deal with the already familiar market, but with new products that could be variants or upgraded versions of the existing ones. Since the company is dealing with a familiar market, the risk is medium, limited to any uncertainties about the new product. Manufacturers of automobiles, cell phones and consumer durables bringing out new, upgraded or variant versions of products, or even new products that attract existing customers, are examples of this strategy. This strategy provides room for innovative offerings that can make the company competitive for a period of time, of course with the risk element. Sometimes it is important that all companies selling such products quickly upgrade to certain new products; for instance, when LCD TVs came to the market, all TV manufacturers had to come up with new products in line with the updated technology to stay afloat in the market.

4. Diversification

This is the most advanced strategy, wherein a company decides to explore not only a new product, but also an altogether new market. We have seen companies that were predominantly in the steel blanking business becoming major manufacturers of consumer durable goods. We have brands that were famous in the automobile field becoming major producers of electrical goods. We have companies whose brand used to be associated with cigarettes becoming major FMCG players.
Diversification is a breakthrough strategy that has to be viewed for its long-term payoff. Since this strategy carries the highest risk, very detailed planning, foresight, research, learning and unlearning, investments, reviews, experimentation, trials, and concerted, patient efforts are required to succeed. Having gone through the elements of the Ansoff matrix, it may be observed that many companies adopt all of these strategies concurrently. Each element has its significance, and a balanced approach needs to be adopted, depending upon the capabilities and management vision of the organization.
  8. Benchmark Six Sigma Expert View by Venugopal R

Many methods are used for prioritizing project ideas as well as solutions during the 'Define' and 'Improve' phases. An organization always has limited resources but has to handle multiple tasks. For any on-going business, there is bound to be a flood of day-to-day tasks, and there would also be some that keep cropping up suddenly. It could be chasing a new business opportunity, addressing a major customer issue, production issues related to material, resources or equipment... and so on. Despite the best attention and support provided by the executive leadership, it will usually be a challenge to mobilize a set of improvement ideas, get the teams involved, convert them to projects and keep a continuous improvement program going. Hence it is important to involve the teams from the beginning and use a relatively objective approach to collectively decide where the priority of focus needs to be.

Effort vs Pay-Off Chart

GE came up with the Effort vs Pay-Off chart to classify and prioritize a list of solutions during the 'Improve' phase. The chart can certainly be made use of in the 'Define' phase as well, to prioritize project ideas. The classification for each quadrant of this chart is as below, which is self-explanatory:

Low Effort, Low Pay-Off (Low Hanging Fruits)
Low Effort, High Pay-Off (Jewels)
High Effort, High Pay-Off (High Hard)
High Effort, Low Pay-Off (Drop)

PICK chart

Similar to the Effort vs Pay-Off chart, the PICK chart was developed by Lockheed Martin to classify a list of generated project ideas into four categories, viz.

Possible (Easy to do, Low Pay-off)
Implement (Easy to do, High Pay-off)
Challenge (Difficult to do, High Pay-off)
Kill (Difficult to do, Low Pay-off)

They are represented using a four-quadrant window as below:

PICK chart along with PPI

The use of the PICK chart may be further enhanced by combining it with the Pareto Priority Index (PPI).
The PPI method provides a numerical value for each candidate project as per the formula:

PPI = (Savings x Probability of success) / (Cost x Time to complete)

The numerator of the PPI formula quantifies the pay-off along with its probability of success. It considers not only the savings, but also the probability of success, which is certainly dependent on the level of difficulty of implementation. This helps to represent the X axis of the PICK chart. Higher 'effort' could require more work hours and higher cost; hence the denominator would represent the Y axis of the PICK chart. On the PICK chart we have 10 project ideas plotted, spread across the four quadrants, depending upon their X and Y coordinates. A more detailed discussion of the PPI could be a separate topic.
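The PPI formula and the PICK classification can be combined in a short sketch. This is illustrative only: the project names, figures, and the payoff/effort cut-off values that separate the quadrants are hypothetical and would need to be calibrated by the team.

```python
# Sketch combining the Pareto Priority Index (PPI) with PICK classification.
# PPI = (savings * probability of success) / (cost * time to complete)
# Project names, figures and quadrant cut-offs are hypothetical.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    savings: float       # expected savings (e.g. in $1000s)
    prob_success: float  # probability of success, 0.0 .. 1.0
    cost: float          # implementation cost, same units as savings
    time_weeks: float    # time to complete

    @property
    def ppi(self):
        return (self.savings * self.prob_success) / (self.cost * self.time_weeks)

def pick_category(p, payoff_cutoff, effort_cutoff):
    """Classify using payoff (savings x probability) vs effort (cost x time)."""
    payoff = p.savings * p.prob_success
    effort = p.cost * p.time_weeks
    easy, high = effort <= effort_cutoff, payoff >= payoff_cutoff
    if easy and high:
        return "Implement"
    if easy:
        return "Possible"
    if high:
        return "Challenge"
    return "Kill"

projects = [
    Project("Reduce scrap", savings=120, prob_success=0.8, cost=20, time_weeks=8),
    Project("New ERP module", savings=300, prob_success=0.5, cost=150, time_weeks=40),
]
for p in projects:
    print("%s: PPI=%.2f, PICK=%s" % (p.name, p.ppi, pick_category(p, 100, 500)))
```

Sorting candidate projects by descending PPI within each PICK quadrant gives a combined priority order, which is the enhancement suggested above.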
  9. Benchmark Six Sigma Expert View by Venugopal R

The Ben Franklin effect may sound paradoxical. As quoted in his autobiography, his statement, in simpler terms, implies that one who has already done you a favor will be highly likely to do another one. He also wrote that, in his personal experience, it was one of his adversaries whom he approached for a favor, and they later ended up becoming lifelong friends.

Based on my own personal experiences, both in career and day-to-day life, there have been many instances where the Ben Franklin effect can be related positively, and at the same time, there have been negative incidents as well. However, since there is more to gain than to lose by trying and adopting the approach, it is worthwhile discussing and exploring this topic. As per general human psychology, most people hesitate to approach someone for a favor for various reasons; more so if it happens to be someone with whom the relationship is not so good. Let us consider two different situations:

In many instances, there may not be anything wrong in asking for help, when one needs it, from someone who is in a better position to provide that help. What might come in the way could possibly be a psychological barrier. The second case is when the person happens to be one with a strained relationship due to some past incident. While we could find examples of the above situations in both business and personal lives, we will discuss a few business scenarios.

Customer–Supplier relationship

When we find it difficult to deliver products or services on time or with the right quality, we may request our customer for help. It could be clarifications on specifications, training on a process, help with some checks at their end along with feedback, or a discussion on an alternate approach, and so on.
Many times, reaching out to the customer for such assistance is hesitated over or postponed, with the notion that we might create a negative impression by asking for help; however, more often than not, we would have noticed that the customer was eager to help us out. Upon thanking the customer for their help, we can expect that this would encourage them to enhance the collaboration and support us in genuine situations in the future as well. Similarly, on certain occasions a customer may reach out to their suppliers or service providers to help them out of an exigent situation that may be beyond the contract boundaries. This could give the supplier a feeling of importance and result in them offering such help again in the future. However, it is important that this does not evolve into unchecked 'scope creep' and impair a healthy business relationship.

Leader-Subordinate relationship

A leader may ask a subordinate for help without reservations, perhaps to explain a subject the subordinate is good at, or to take his or her view before making a personal purchase of a product with which the subordinate is familiar. This can highly motivate the employee, who would look forward to similar opportunities in the future. It can also happen the other way, when an employee approaches the leader for help with some topic, to obtain an attestation for a personal requirement, or for personal guidance on career. Such mutual help, requested from either side, so long as it does not amount to exploitation of authority or freedom, fosters healthier relationships and results in better performance as a team.

Inter-functional relationship

Co-operation between different functions in an organization is very important for the success of a 'process oriented' company.
Seeking ideas, involving other stakeholders to provide their viewpoints and support, and giving them a larger organizational picture are essential principles for driving organizational excellence. It is akin to the prime minister of a country seeking ideas and support from all parties, including opposition parties, to jointly address a national emergency.

Key points to bear in mind

We can certainly identify many more examples from business and day-to-day life. However, we may not see the Ben Franklin effect work in all situations. The underlying principle behind the Ben Franklin effect is that the requested favor should evoke a feeling of pride and importance in the other individual. Hence, it matters who is approached and for what type of help. Ensure that you do not put someone in an embarrassing situation by asking for the favor. The manner in which such a request is put forth also matters: it should be neither commanding nor pleading. You should also ensure that genuine appreciation and compliments are given in return for the help provided.
10. Benchmark Six Sigma Expert View by Venugopal R

Both Scrum and Kanban are agile methodologies popularly used for project management. Although they are often referred to in the context of software development, the methods are applicable to any type of project management. Scrum originated in the software world, whereas Kanban originated long ago as part of the Toyota Production System. While these two methods exhibit similarities, there are differences in the way they are organized and administered. 'Scrumban' is a hybrid structure, evolved by Corey Ladas, that leverages the best of both methodologies. To understand Scrumban, we will first take a quick look at Scrum and Kanban independently and at their differences.

The Scrum method works with a well-defined organization structure. It has a Product Owner, a Scrum Master, Developers and Testers. The Product Owner represents the customer and decides the priorities for the items, also known as 'user stories', to be worked upon. The Product Owner mentors the team with respect to the product requirements and liaises between the customer and the development team. The Scrum Master is an experienced facilitator who organizes the meetings and helps the team overcome hurdles. The Developers write the code and the Testers test the functionality in the required environments. The entire team is self-organized and works on the stories listed under the 'Product Backlog'. The user stories are moved from the backlog list into the stages within the Development phase, viz. 'To do', 'Doing' and 'Done'. Afterwards, the user stories move through the same three stages within the Testing phase. The entire cycle of this activity for a set of user stories is known as a 'Sprint' and is usually set for a time frame of 2-3 weeks. Certain meetings are conducted as part of the Sprint cycle, viz. Sprint Planning, Daily Stand-up, Sprint Review and Retrospective meetings.
If any change requirement comes up during the Sprint, it will be accommodated only after the Sprint cycle is completed. Kanban focuses on continuous delivery with demand-based supply. The team comprises developers and testers with an Agile coach. The Kanban board also contains the backlog, user stories and the three stages during development and testing, quite similar to the Scrum method. While Scrum employs a 'time-box' method based on Sprint cycles, Kanban works with continuous flow, keeping minimum work-in-progress depending on the team's capacity. If work accumulates in any of the stages up to the permitted capacity, other team members can pitch in and clear the bottleneck. The Kanban model is flexible enough to accommodate any change during the development stages. Delivery performance is measured as the lead time from the time the request is placed until go-live. A few evident benefits of Kanban over Scrum are greater flexibility and continuous flow.

Now let us look at 'Scrumban', which is actually an enhancement of Scrum towards the Kanban methodology. Some of the key features of Scrumban are:

It incorporates the visual workflow based on Kanban.

It uses a daily Scrum: the project team meets in front of the Kanban board and discusses the jobs done yesterday, the jobs to be done today and any obstacles to be addressed.

The job is pulled, as in Kanban, and not assigned as in Scrum Sprints. The work is pulled by the member, which gives higher ownership of the pulled task. For this purpose, additional columns are created on the Kanban board and limits are set for certain stages.

Clear WIP (Work In Progress) limits are applied, as in Kanban, to prevent excessive accumulation of work at any stage. This necessitates a 'pull' approach and prevents stagnation of work at a stage.

Specific roles, by way of a structured team as in Scrum, are not defined; more collaborative working is encouraged.
Specialized team members are used: depending on the requirements of the tasks, the required subject matter experts are pulled into the team.

Due to the 'pull' approach, the Sprint Planning meeting is based on a trigger, rather than 'once a fortnight' as per the traditional Scrum approach.

The table below gives a typical Scrumban visual workflow display. It may be noticed that the dark blocks indicate the limits set for the stages represented by those columns. The flexibility and pull features of Scrumban make it a better structure to adopt when the development and testing teams have the dual responsibilities of handling new developments as well as ongoing requests.
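The WIP-limit and pull behavior that distinguishes Scrumban (and Kanban) from plain Scrum can be sketched in a few lines of code. The stage names, story names and limits below are hypothetical; a real board is a shared visual tool, but the rule is the same: a story moves only when the receiving column has spare capacity.

```python
# Minimal sketch of a Scrumban-style board with WIP limits (hypothetical
# stage names and limits, for illustration only).

class Board:
    def __init__(self, wip_limits):
        self.wip_limits = wip_limits                  # e.g. {"Doing": 2}
        self.columns = {stage: [] for stage in wip_limits}

    def pull(self, story, from_stage, to_stage):
        """A team member pulls a story only if the next stage has capacity."""
        if len(self.columns[to_stage]) >= self.wip_limits[to_stage]:
            return False                              # WIP limit reached: clear the bottleneck first
        self.columns[from_stage].remove(story)
        self.columns[to_stage].append(story)
        return True

board = Board({"To do": 10, "Doing": 2, "Done": 10})
board.columns["To do"] = ["story-1", "story-2", "story-3"]

print(board.pull("story-1", "To do", "Doing"))   # True  - capacity available
print(board.pull("story-2", "To do", "Doing"))   # True  - now at the limit
print(board.pull("story-3", "To do", "Doing"))   # False - WIP limit (2) reached
```

The third pull is refused, which is exactly the signal that tells the team to swarm on the "Doing" column rather than start new work.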
11. Benchmark Six Sigma Expert View by Venugopal R

The first and foremost priority of any business owner or leader will be to ensure the health and safety of their employees and their families. Almost every organization will have emergency handling procedures, business continuity plans and disaster recovery policies. Emergencies erupt in many types, viz.

Limited to the organization, at one or more sites (e.g. a server issue or major failure of certain equipment)

A problem that has affected the region (e.g. flood or other natural calamity, political issues, epidemic)

A situation that has impacted the entire nation (e.g. political issues)

A situation that impacts movement / logistics, but not the health or safety of employees (transportation breakdown, connectivity issues)

Today, we have a situation that is beyond the limits of those listed above. We have a crisis that is not restricted to geographies but threatens to cripple the entire world. It impacts not only an organization, but its customers and service providers as well. It concerns individual safety, and at the same time people are bound by various restrictions imposed mainly in the interest of society at large. Many of the measures undertaken by organizations during past experiences would apply, but the current situation probably demands much more, and is unlikely to have a comparable precedent. Some of the measures that could be taken, and are being exercised by many organizations, include:

Streamlining and channelizing communications regarding the current situation. Reliable information updates are required both by the decision makers and by employees overall. Make sure everyone tunes into a common source of information to avoid the unwieldy spread of rumors and confusion.

Identify an emergency handling team who will represent the entire organization in taking key decisions, communicating and leading selected sections during the crisis period. Ensure that this team is introduced to all employees.
While employees may be given the freedom to provide inputs and communications relating to the crisis, ensure that such communications are streamlined through the emergency handling team.

Draft communication to customers and other stakeholders on the company's strategy to maintain deliveries, and let them know how they will be kept informed through periodic updates.

In the event of limitations on capacities, discuss with clients to understand their priorities, so that the limited capacities can focus on the highest priorities.

Ensure that the organization is well informed about, and complies with, any regulatory requirements in force at any point of time.

Specific to the current pandemic, ensure that aggressive measures are taken for the necessary hygiene: hand sanitizing while entering the workplace; frequent wiping of door handles, elevators, staircase railings, conference room furniture, pens, projector remotes, keyboards, mice, restroom faucets and all other points of human contact.

Ensure adequate communication to all employees and visitors about the hygiene practices, and provide sufficient visual displays, audio-visuals and any other effective means.

Considering that the current pandemic has spread worldwide, the common business continuity strategy of providing alternate processing in different geographies may not be effective. 'Work from home' is a very popular measure adopted by companies, where possible, depending on the nature of work. This would be possible mostly for certain types of IT companies but would not apply to operations in the manufacturing sector. Even if the company is able to permit limited numbers of employees to work in its offices and factories in compliance with the regulations, the company has a grave responsibility to protect not only its employees but also to prevent any impact on society. Adequate checkpoints and action plans need to be evolved to ensure the same.
Necessary notifications to the concerned regulatory authorities need to be provided, as required, to ensure that the company does not violate any legal requirement.

The emergency team (or core team) has to be in constant touch, and should meet through video conferencing or other means every day, or even multiple times a day, to keep updating and reviewing their plans and actions continually.

Once the emergency situation eases, the leadership team will have to revisit their annual budgets and review decisions on capital, new hires and other spending. Strategies regarding product mix, the launch of new products, etc., will have to be reviewed with a view to making up for the lost hours and profits. It is quite possible that customers too will rapidly revise their plans and requirements during the crisis period; it is important for customer relations personnel to stay in touch with them for constant updates of their requirements, which need to be linked to the company's plans and to the available capacities. There has to be a real-time dashboard, which is likely to change dynamically.

Some companies have a practice of collecting a very small amount from every employee to build a corpus for supporting any employee(s) who are personally impacted. Such practices may be considered in the long run.
12. Benchmark Six Sigma Expert View by Venugopal R

A confidence interval is an estimated interval, calculated from a set of observed data, within which a population parameter (e.g. the mean) is expected to fall with a given confidence level (e.g. 95%). It is to be remembered that the confidence interval is used for estimating the position of the population mean, and not an individual value from the population. A prediction interval is an estimated interval within which an individual future value from a population is 'predicted' to fall with a certain probability.

Confidence intervals quantify the degree of certainty or uncertainty associated with a sampling method. They provide the limits within which a population parameter will be contained. The mean value of a sample taken from a population provides a 'point' estimate. Imagine a large lot of apples from which we need to estimate the mean weight of the apples in the population. If we take a random sample from the population and measure the mean value as 300 grams, this value is a point estimate of the population mean and will be subject to uncertainty. While we may not be able to obtain the real value of the population mean unless we weigh all the apples, we can make the point estimate practically more useful by providing confidence intervals around it, within which the population mean is expected to fall with a specified confidence level. This is possible by using the sample mean and sample standard deviation and assuming a normal distribution. One of the common applications of confidence intervals is in tests of significance for means. A common misconception about confidence intervals is that they are sometimes wrongly interpreted as representing the interval within which 95% of the individual values fall (if the confidence level is 95%).

Coming to prediction intervals: as defined earlier, they represent intervals for an individual future value from the population.
Since the variation of individual values is much larger than that of mean values, prediction intervals will be wider than confidence intervals. Prediction intervals are usually used in regression analysis. Prediction intervals are preferred in many situations over confidence intervals, since they provide estimates for an individual observation rather than for an unobservable population parameter. The graph below shows a fitted regression plot depicting the confidence intervals (green inner dotted lines) and the prediction intervals (violet outer dotted lines).
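To make the difference concrete, here is a minimal sketch for the apple-weight example, with made-up sample data and an assumed normal distribution. For simplicity it uses the normal z-quantile; with only 10 samples a t-quantile would give slightly wider intervals.

```python
# Sketch of 95% confidence vs prediction intervals for the apple-weight example.
# Sample data are hypothetical; assumes weights are roughly normally distributed.
import math
from statistics import mean, stdev, NormalDist

weights = [295, 310, 288, 305, 299, 312, 290, 301, 297, 303]  # grams (made up)
n, xbar, s = len(weights), mean(weights), stdev(weights)

z = NormalDist().inv_cdf(0.975)            # ~1.96 for a 95% two-sided interval

ci_half = z * s / math.sqrt(n)             # half-width for the POPULATION MEAN
pi_half = z * s * math.sqrt(1 + 1 / n)     # half-width for a single FUTURE apple

print(f"95% CI for mean:       {xbar - ci_half:.1f} to {xbar + ci_half:.1f} g")
print(f"95% PI for next apple: {xbar - pi_half:.1f} to {xbar + pi_half:.1f} g")
assert pi_half > ci_half                   # the prediction interval is always wider
```

The factor sqrt(1 + 1/n) versus 1/sqrt(n) is what makes the prediction interval wider: it must cover both the uncertainty in the estimated mean and the natural spread of individual apples.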
13. Benchmark Six Sigma Expert View by Venugopal R

Most of us are aware that GE played a major role in popularizing Six Sigma practices and is well known for obtaining huge benefits from them. During my early Black Belt training through IGE, one of the mandatory modules for the leadership team was the Change Acceleration Process (CAP), which was imparted to all champions and executive leaders. CAP broadly looks at change as moving from a 'current state' to an 'improved state'. It involves leading the change and also changing the associated systems and structures. One of the major challenges for an organization during a change is to overcome the 'resistance to change'. 'Leading change' focuses on creating a shared need, shaping a vision and mobilizing commitment. The systems and structures ensure monitoring the change, finishing the job and anchoring the change. The 'Work-Out' session is an approach taken up to accelerate the whole cycle of change management.

Now, if we look at a Six Sigma project, we are trying to bring about a change, mostly to a process, which is expected to provide a sustained benefit in efficiency, effectiveness, or both. Some of the main problems we face in running Six Sigma projects are:

Inability to provide continued attention

Hindrance due to overlapping priorities of day-to-day work

Non-availability of key resources at one time to take decisions on the project

Delays in obtaining approvals

The 'Work-Outs' are highly focused, planned and dedicated sessions. The Work-Out consultant works closely with the Champion and the Sponsor on the initial decisions about the project, and designs the sessions and the associated logistics.
Typically the Work-Out process has a Design phase, a Conduct phase and an Implement phase.

Design phase:

Obtain executive sponsorship

Define the critical business issues and desired outcomes

Clarify boundary conditions as applicable

Select experts across functions

Define and gather data

Design the Work-Out session agenda

Conduct phase (usually a 1 to 4 day session):

Neutral third-party facilitation of team chartering and launch

Process analysis and problem solving

Application of analysis tools

Decision makers' briefing

Team report-out and on-the-spot decisions

Team implementation and communication plan

Implement phase:

Ongoing team implementation of approved recommendations

Consulting liaison between team and decision makers

30, 60 and 90 day checkpoints with decision makers

Celebration of success, capturing of learning

Building of internal change management capability

As seen above, most of these activities are covered during the phases of Six Sigma DMAIC or DMADV, and appropriate tools are available. Effective Work-Outs help in accelerating the Six Sigma project cycle.
14. Benchmark Six Sigma Expert View by Venugopal R

A few thoughts on addressing the Sisyphus effect.

PDCA wheel analogy

Looking at the story of Sisyphus trying to roll the rock up the hill, I cannot help relating it to the PDCA wheel, which has to be rolled continuously for improvements; and, what is more, the PDCA wheel has to be imagined as being rolled up an inclined plane. The problem that most of us would have experienced, and possibly continue to experience, is that the PDCA wheel tends to roll backwards. We experience this in the form of repeating the same 'continuous improvement projects' again and again. The lack of adequate control systems, mistake proofing and SOPs are some of the major reasons improvements roll back. That is why a 'wedge' is placed under the PDCA wheel to prevent its rollback, and this wedge is the QMS (Quality Management System). A good QMS is a pre-requisite for continuous improvement programs, in the absence of which we risk losing the gains.

Separating resource allocation for continuous improvement

Very often we see the same set of resources being given the responsibility for handling 'day-to-day' roles as well as 'strategic' roles. As a result, most improvement-related actions get postponed due to preoccupation with day-to-day activities. In order to obtain specific focus on strategic improvements and innovative thinking, specific time and resources have to be budgeted and complied with, backed by leadership focus.

Comfort zone syndrome

If one is used to a set pattern of activities for a long period, one tends to develop a 'comfort zone' around this routine, despite it not being the most efficient method possible. There will be resistance to coming out of this routine and taking up creative thinking. It helps to have periodic job rotations to break the formation of such comfort zones.
Continual application of Lean

Develop process maps for all processes, including administrative processes, and periodically perform a VA / NVA analysis. It is quite possible that you will come across a few steps that could be simplified, eliminated or clubbed with some other step to reduce effort and time. With the ongoing improvements in information technology, it is important to keep identifying ways of digitizing and automating tasks, relieving humans of repetitive effort and releasing their time for more creative thinking and development.

Balanced work allocation

It is not uncommon to find a few individuals who appear extremely busy and over-occupied with routine work, while others appear relatively less occupied. The reason could be an imbalance in work allocation, or a difference in the methods or behavioral traits of the individuals. Apart from balancing the workload, it may be worthwhile to capture and share best practices for performing a similar job more efficiently.
15. Benchmark Six Sigma Expert View by Venugopal R

The Kaplan-Meier chart is used to estimate the probability of survival during medical research. For instance, let us consider that we are interested in studying the effect of a particular drug for the treatment of a life-threatening disease. The study, based on 10 patients who were subjected to this treatment, is plotted as below in what is known as a Kaplan-Meier chart. The Y axis represents the probability of survival and the X axis represents time (say, number of years). As seen, at the start the probability of survival is taken as 1 (or 100%). After two years a patient dies, and the probability of survival drops to 0.9 (90%). At the end of 3 years we have one more death; we then calculate the survival rate as the conditional probability of survival at the end of 3 years for the patients who survived the first lap, i.e. 0.9 x (8 / 9) = 0.8 (or 80%). The calculation continues for each step of this chart. However, it may sometimes happen that we lose track of a patient. Such patients are no longer available for the study and are categorized as 'censored' patients. A censored patient is represented by a vertical cross line, as seen during the 5th year. Censored patients are removed from the denominator while calculating the survival probability for that year and for subsequent years.

In the above figure, the red graph represents the Kaplan-Meier chart for another drug, B, in a similar exercise. If we look at the median survival for both groups, it is:

Median survival for Drug A = 7 years

Median survival for Drug B = 4 years

One can also compare the estimates of the survival probabilities for a given period. For instance:

3-year survival probability for Drug A = 0.80

3-year survival probability for Drug B = 0.54

In general, a steeper curve represents a worse situation.
Though not discussed in detail here, it is also to be noted that there is a confidence interval associated with each estimate, and the width of the confidence interval depends on the number of patients being studied. I hope this brief discussion of Kaplan-Meier charts provides a broad idea of how medical researchers use this tool for estimating and comparing the effectiveness of treatments.
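The product-limit calculation described above, including the treatment of censored patients, can be sketched in code. The event times below are hypothetical, but arranged so that the first two steps reproduce the 0.9 and 0.8 values from the example:

```python
# Minimal Kaplan-Meier estimator sketch for a 10-patient illustration.
# Times and statuses are hypothetical; status 1 = death observed, 0 = censored.

def kaplan_meier(times, events):
    """Return [(time, survival_probability)] using the product-limit formula."""
    data = sorted(zip(times, events))
    at_risk, surv, curve = len(data), 1.0, []
    for t, died in data:
        if died:
            surv *= (at_risk - 1) / at_risk   # conditional survival at time t
            curve.append((t, surv))
        at_risk -= 1                          # censored patients leave the risk set
    return curve

times  = [2, 3, 5, 6, 7, 7, 9, 10, 10, 10]
events = [1, 1, 0, 1, 1, 1, 1, 0, 0, 0]      # the patient at year 5 was censored

for t, p in kaplan_meier(times, events):
    print(f"After year {t}: survival = {p:.2f}")
```

Processing tied deaths one at a time, as done here, is arithmetically equivalent to the usual grouped form (n - d) / n at each distinct event time, and the censored patient at year 5 simply shrinks the denominator for all later steps.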