Venugopal R

Excellence Ambassador
About Venugopal R

  • Rank
    Advanced Member

Profile Information

  • Name
    VENUGOPAL R
  • Company
    Benchmark Six Sigma
  • Designation
    Principal Consultant, MBB

  1. Benchmark Six Sigma Expert View by Venugopal R

The service disciplines that form part of Queuing Theory are applicable to many situations, but are used very extensively in CPU scheduling algorithms.

1. FIFO (First In First Out) is a very popular method, also referred to as the FCFS (First Come First Served) algorithm in CPU scheduling. The FIFO concept is commonly applied to most queues in daily life, say a ticket counter or a grocery store billing counter. FIFO is important in inventory management, as we would generally like to use or sell materials and products before they become aged, especially when there is a risk of shelf life or obsolescence. For CPU scheduling, FCFS is preferable when the processes have short burst times.

2. LIFO (Last In First Out) is literally the opposite of FIFO. In day-to-day life, LIFO is likely to happen when we stack up any material that is expected to be consumed fast, with no risk of expiry or obsolescence. For instance, even if a FIFO model is followed by a supermarket or an assembly shop at a batch level to stack their shelves and bins, the consumption of goods within the batch will happen on a LIFO basis, since the item that has been stacked last has the best reach. LIFO is applied by a business if it wants to use its most recent inventory first. If the costs of recent goods are higher, LIFO will reflect higher inventory costs, meaning less profit and lower tax for that period. LIFO is permitted as an accepted accounting principle in some countries.

3. Processor Sharing - In this approach, all the recipients are served at the same time by sharing the available resource. It is akin to many households tapping water from a common water tank through a well laid out network of pipes. There is no priority, and the available source gets shared by all. Such scheduling is also referred to as ‘egalitarian processor sharing’, where each client obtains a fraction of the available capacity. The Processor Sharing algorithm is considered to have emerged from the ‘Round Robin’ scheduling algorithm. The application of this scheduling discipline, making use of the internet and myriad other service portals, has revolutionized the way the world does many activities over the last couple of decades.

4. Priority scheduling - To understand this discipline, let us imagine a queue of patients waiting to see a doctor on a FIFO basis. Suddenly, if an emergency case comes in and that patient is given priority, there are two possibilities: i) the doctor interrupts the session with the current patient and goes to attend to the emergency case – this is pre-emptive; ii) the doctor completes the session with the current patient and then attends to the emergency case – this is non-pre-emptive. In the case of CPU scheduling, each of the multiple processes handled by a CPU will have a priority number assigned. The CPU will start processing the process that arrived first. When another process arrives, the priority numbers will be checked. If it is a non-pre-emptive schedule, the CPU will complete its current process and then check the priority numbers of all the processes waiting in the ‘ready queue’. The process with the highest priority will be taken up next. Whereas, if it is a pre-emptive schedule, the CPU will check the priority number of new processes as and when they arrive, and if a process with higher priority than the current one is available, the CPU will be allocated to that new process and the current process will be moved to the ‘ready queue’ to be resumed later.

5. Shortest Job First - This will be easy to understand if we have understood the ‘Priority’ discipline explained above. Shortest Job First (SJF) is a non-pre-emptive algorithm where priority is given based on the execution time, also known as 'burst time'. In this case, the shorter the duration, the higher the priority. This finds use in CPU scheduling, where the shorter processes are not made to wait too long, thus reducing the overall waiting time. The SJF algorithm is preferred if many processes arrive at the processor simultaneously.

6. Pre-emptive shortest job first - This is a pre-emptive variant of the above discipline, where the current process will be interrupted to accommodate a newly arrived process with a shorter duration. The idea is to reduce the overall waiting time and allow faster completion for shorter processes. However, this method is possible only if the processor has knowledge of the burst time for each process. This is not a recommended method if too many short-duration processes keep coming in between longer-duration processes, since it will lead to long waiting times, or ‘starvation’, for the longer processes.

7. Shortest remaining processing time - This is a pre-emptive CPU scheduling discipline where the processing time of a new process is compared with the remaining time of the current process. If the remaining time of the current process is less than the processing time of the new process, the current process will continue to be executed till completion. On the other hand, if the processing time of the new process happens to be less than the remaining time of the current process, the existing process will be pre-empted and the new process will be taken up by the CPU. This discipline can be exercised only if the estimated burst times for the processes are known. This is a bit more advantageous than the earlier case of pre-emptive shortest job first, since a current process that has already executed partially and is closer to completion than a new one will be allowed to complete.
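To make the waiting-time comparison concrete, here is a minimal Python sketch (not part of the original post; the burst times are invented and all jobs are assumed to arrive together) that computes the average waiting time under FCFS and under non-pre-emptive SJF for the same workload.

```python
# Minimal sketch: average waiting time under FCFS vs non-pre-emptive SJF.
# Assumption: all jobs arrive at time zero; burst times are illustrative.

def waiting_times(bursts):
    """Serve jobs in the order given; each job waits for all earlier jobs."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

if __name__ == "__main__":
    bursts = [7, 3, 1, 4]                      # hypothetical CPU burst times
    fcfs = waiting_times(bursts)               # First Come First Served
    sjf = waiting_times(sorted(bursts))        # Shortest Job First
    print("FCFS average wait:", sum(fcfs) / len(fcfs))
    print("SJF  average wait:", sum(sjf) / len(sjf))
```

With these sample bursts, SJF yields a lower average wait (3.25 vs 7.0 time units) because the short jobs are no longer stuck behind the long one.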
  2. Benchmark Six Sigma Expert View by Venugopal R

The term Heijunka has emerged from the Toyota Production System and aims to level the irregularity in production. The Lean Lexicon defines Heijunka as “Levelling the type and quantity of production over a fixed period of time, which enables production to efficiently meet customer demands while avoiding batching and results in minimum inventories, capital costs, manpower and production lead time through the whole value stream”. Heijunka is a pre-requisite for the popular concept of JIT (Just-In-Time).

Though Heijunka is referred to as a solution for Mura, it is important to understand how the 3Ms, Mura, Muri and Muda, are interrelated. Hence, before we get to discuss Heijunka, let’s take a quick look at these Japanese terms. Muda means ‘Waste’ and includes non-value-added activities such as avoidable Transportation, Inventory, Motion, Waiting, Over Production, Over Processing and creation of Defects. Muri means ‘Overburden’ and relates to tasks that are overbearing, risky or high-stress-causing. Mura means ‘Unevenness, Irregularity or Non-Uniformity’. In Six Sigma terminology, we may refer to it as ‘Variability’.

We will understand more about Mura with some example situations. If we have a product whose demand is low during the beginning of the month and very high during the end of the month, we will have unevenness with respect to capacity utilization across the month. This irregularity is Mura. In the beginning of the month, being a low-demand period, there will be idle waiting time, which is a form of Muda. On the other hand, towards the last few days of the month, the demand becomes higher and is bound to put pressure on the employees to deliver the volumes, and this causes Muri or overburden.

The three components of Heijunka:
Leveling – Overall smoothening of the process to reduce the variability
Sequencing – Managing the sequencing of work – mixed production
Stability – Reducing process variation

If we have a product with demand levels opposite to those mentioned earlier, i.e. whose demand is higher in the beginning of the month and lower towards the end, then cross-training the employees to work on both this process and the earlier-mentioned process could help to even out the variability and thus reduce Mura and Muda. Another aspect that needs to be considered is to balance the production line with respect to the resources allocated and the time taken for each step in the process. By allocating more resources to the process steps that consume more time, we can balance the process and also prevent WIP inventory building up between the steps.

When we have products of varying complexities handled by the same set of people on the production line, it is bound to cause variation, idle time and overburden if products of the same complexity level come together. For example, if all the easy products are processed during the beginning of the month and the difficult ones turn up together towards the end of the month, we will see Muda (excess time) and Muri (overburden) alternately. If we are able to sequence the flow of products in a mixed manner so that the overall complexity level at any point of time is more or less uniform, this will help in leveling out the variations. It may also be noted that if we want the entire product mix to be available to the customer uniformly throughout the month, this aspect of Heijunka becomes very important and the concept of SMED will play an important role.

Many of the Lean concepts are essential for successful Heijunka implementation:
Takt Time: The available production time divided by the customer demand, i.e. the pace at which each unit must be completed to satisfy demand
Volume Leveling: Understanding the variability in demand, maintaining production at levels comparable to the long-term average demand and maintaining a buffer inventory in proportion to the demand variability
Type Leveling: Maintaining the product type mix on a frequent basis, if possible every day, and reserving capacity for changeover flexibility
Changeover Time: We already mentioned the importance of the SMED concept

Implementation of Heijunka is an important element of Lean implementation for an organization. Successful implementation requires good understanding of, and data for, the 3Ms (Muda, Muri & Mura), and building flexibility in terms of mixed manufacturing, quick changeover and employee allocation.
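As a small illustration of the arithmetic behind levelling (illustrative only; the available time, demand figures and the simple 'goal-chasing' sequencing rule are assumptions, not from the post), the sketch below computes a takt time and builds a mixed-model sequence that spreads product types across the day instead of batching them.

```python
# Minimal sketch: takt time and a naive mixed-model (type-levelled) sequence.
from collections import Counter

def takt_time(available_minutes, demand_units):
    """Takt time = available production time / customer demand."""
    return available_minutes / demand_units

def level_sequence(demand):
    """At each slot, pick the product furthest behind its ideal cumulative share."""
    total = sum(demand.values())
    produced = Counter()
    sequence = []
    for slot in range(1, total + 1):
        product = max(demand, key=lambda p: demand[p] * slot / total - produced[p])
        produced[product] += 1
        sequence.append(product)
    return sequence

if __name__ == "__main__":
    print("Takt time:", takt_time(available_minutes=480, demand_units=240), "min/unit")
    print("Levelled sequence:", "".join(level_sequence({"A": 4, "B": 2, "C": 2})))
```

Instead of the batched sequence AAAABBCC, the rule interleaves the products (for example A B C A A B C A), which is the 'sequencing' component of Heijunka described above.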
  3. Benchmark Six Sigma Expert View by Venugopal R

Divergent & Convergent Thinking - Definition
During divergent thinking, we look for several potential ideas for a problem or solution, and during convergent thinking we tend to focus on a specific idea or solution. For example, if we want to think about increasing the sales of a product, we would start with divergent thinking and explore various potential opportunities such as expanding the market, adding more product types, improving features, optimizing price, placing more promotional programs and so on. However, once we assess and evaluate all these ideas, we will have to narrow down to one or a very few ideas based on various facts and factors. This is where convergent thinking happens.

Design Thinking – brief overview
‘Design Thinking’ goes through five phases, viz. Empathize, Define, Ideate, Prototype and Test. A quick explanation of each of these phases is as below:
Empathize: During this phase, the customer requirements and expectations are gathered, not just through the voice of customer, but also from what customers think, feel, see, hear, say and do. The pains and gains as perceived by the customer are also captured. This is a very important aspect of Design Thinking.
Define: The requirements are processed and defined in a structured manner which can be incorporated into the design of the product, process or service, as the case may be.
Ideate: A wide variety of potential solutions are explored, and multiple options are generated. Out of these, the best options for the given situation are narrowed down.
Prototype: Based on the chosen solution, a working model of the product, service or process is developed and subjected to actual customer use. This will help in refining the Define and Ideate phases.
Test: By using the prototypes, test whether the design intent of satisfying customer requirements and expectations is being achieved; use the feedback to improve the prototype.
The above phases do not happen in a linear fashion, but will loop back to previous phases to get refined.

Double Diamond: There will be a very high amount of divergent thinking during the Empathize phase and some convergent thinking during the Define phase. Once again, when we move into the Ideate phase, there would be divergent thinking to come up with alternate design solutions, and by convergent thinking, we narrow down on the option for which we create a prototype. Thus, divergent and convergent thinking happen in two cycles, often referred to as the ‘Double Diamond’.

Supply Chain example: Let’s consider an example – a company wants to reduce its supply chain related costs. Applying just convergent thinking might limit us to reducing transportation costs and material handling costs. Whereas if we apply the Design Thinking process, the Empathize phase will pave the way for divergent thinking. Some of the likely aspects that would come out of divergent thinking may include:
1. Better space utilization
2. Automation of material handling
3. Streamlining the ordering process
4. Monitoring customer demand
5. Leaning out the supply chain process
6. Inventory management
7. Outsourcing
8. Relocation of sites
9. Alternate methods of transportation
Convergent thinking has to be applied now to narrow down the priority areas to work upon. Let’s assume that the chosen areas are point nos. 1, 6 and 7. For each of the chosen points, we will have to apply divergent thinking to identify the potential factors that need to be addressed. After this, we move to convergent thinking to shortlist the solutions and finalize the set of actions.

Apart from Design Thinking, divergent and convergent thinking are used in many situations, for instance during the DMAIC cycle of a Six Sigma project. Both these types of thinking are important and often go hand-in-hand.
  4. Benchmark Six Sigma Expert View by Venugopal R

‘The Six Thinking Hats’ is a popular method of getting a team to think about a topic from multiple angles. Any brainstorming exercise needs good planning, facilitation and post-session work to derive the benefits of the time spent by a group of experts. Brainstorming, if allowed to happen as a ‘free for all’ exercise, will seldom provide a useful outcome. Various methods have been recommended for channelizing brainstorming efforts. The ‘Six Thinking Hats’ by Edward de Bono is one widely accepted method to overcome some of the issues faced during a traditional brainstorming exercise.

Genesis of the ‘Six Thinking Hats’ method
Each individual has his / her own characteristic and habitual way of thinking. Some will be optimistic by nature, whereas some will be cautious, and some others will be intuitive, creative and so on. With such different approaches of mind, based on individual behavioural characteristics, we would face clashes of interest, hurdles and passiveness during a brainstorming session. As per de Bono’s thinking, each of these characteristics is important and we need to look at a problem from all these angles before concluding upon the solution. He brought in six perspectives to be considered mandatorily during a brainstorming session and related each one to the colour of a hat. These six perspectives are expected to largely encompass the variety of perspectives that could emerge from a group of individuals.

What each coloured hat represents:
White Hat – Facts: Focus on data, facts and information available or needed
Blue Hat – Process: Focus on managing the thinking process, next steps, action plans
Red Hat – Feelings: Focus on feelings, hunches, gut feelings and intuitions
Yellow Hat – Benefits: Focus on values and benefits; why something may work
Black Hat – Cautions: Focus on difficulties, potential problems; why something may not work
Green Hat – Creativity: Focus on possibilities, alternatives, solutions and new ideas

How does it differ from traditional brainstorming? In traditional brainstorming, the heterogeneity in the team's thinking at any point of time would cause conflicts of interest and result in valuable ideas from the multiple thought perspectives being missed. There is bound to be dominance by a few individuals, which could result in bias towards their ideas. The participants whose perspectives could not be voiced, or got overpowered, would feel let down and will tend to have poor ownership of the final solution. By ‘wearing’ a particular colour of hat, all the participants force themselves to approach the problem from the perspective represented by the hat colour at any given point of time, irrespective of their natural inclination. This enables the entire team to address the problem from the same perspective at a given point of time. Room for dominance-based bias is reduced. By going through all the ‘colours’, the likelihood of anyone’s perspective getting left out is significantly reduced. This will help build a higher overall level of ownership of the accepted solution.

Example case study: Let’s consider a situation where an organization wants to decide whether it should purchase an expensive RPA tool. They use the ‘Six Thinking Hats’ for discussion and decision making. Please note that the points mentioned here are just for illustration and would not be exhaustive enough for an actual case.

With the White Hat on, the team will focus on available data and the data required. They look at the number of automation opportunities, existing and likely to emerge in the next couple of years. They look at the data on the multiple RPA tools available and their comparative costs and features. They also look at past industry trends and future prospects based on automation.

With the Red Hat, the team will gather the intuitive opinions of the team members on the different products available and the pros and cons based on hunches and individual opinions. They will also gather inputs on what the team ‘feels’ about the need for automation, going for a third-party tool or developing it in-house.

Wearing the Green Hat, the team encourages innovative thinking – alternate approaches to overcome their productivity issues, smartly modifying available software with internal expertise, or simplifying the process with creative design thinking that could vastly reduce the number of steps involved. Other thoughts could be to leverage options offered through cloud computing.

The Yellow Hat may be introduced at this stage to focus on the tangible and intangible benefits of acquiring an RPA tool. They will look at the investment and the ROI time frame. Other benefits could include improved accuracy and winning more customer goodwill by providing faster and higher quality services. Another factor could be the enhancement in competitiveness.

The Black Hat may be brought in now – concerns are raised on the credibility of the projections for automation. What if the technology becomes obsolete faster than the ROI is realized? Will it result in loss of jobs for employees?

The Blue Hat is worn by the person facilitating the thought process, encouraging the ideas to flow and directing the switching of the thought process from one perspective to the next.

Had they not followed the ‘Six Thinking Hats’ method, a few of the above points would have had a biased domination, quite likely around points 1, 2 and 4. Having explored the problem from all the perspectives, the summary of the discussion will be comprehensive and will help the management team to take a well-informed decision, with a higher degree of ownership.
  5. Benchmark Six Sigma Expert View by Venugopal R

Overall Equipment Effectiveness (OEE) is a very common indicator used to assess the ‘Value Adding Time’ in manufacturing and other processes that involve the usage of equipment. 'Value Adding Time' is defined as the time used by the process for which the customer is willing to pay, which involves a transformation of the product / service, and which is used to get the output ‘right the first time’.

Effective usage of equipment depends on:
The extent to which the equipment is 'Available' as required.
The 'Rate of production', as compared to a standard, while the equipment is in use.
The 'Quality' of the products / services generated.

The calculation for Overall Equipment Effectiveness is:
OEE = Availability x Efficiency x Quality
Each factor is calculated as:
Availability = Operating time as a % of the Planned Production Time
Efficiency = Units produced as a % of (Operating time x Capacity)
Quality = First-pass units as a % of Total output (product or service)

Now, if we want the OEE to be 100%, we must have each of the above factors at 100%. While 100% is an ideal value for OEE, benchmark data shows that an OEE of 85% is considered an excellent score for discrete manufacturing, though there is always room for further improvement. It is not uncommon to find OEEs in the order of 50 to 70 percent for organizations just embarking on Lean Management techniques. So, considering that even the best of companies are not able to maintain an OEE of 100%, let’s see the constraints that come in their way. The major reasons that impact the OEE are broadly consolidated as the ‘6 big losses’, as follows:

Six Big Losses
AVAILABILITY related losses: 1. Unplanned stoppages; 2. Planned stoppages
EFFICIENCY (Performance) related losses: 3. Small stoppages; 4. Reduced speed
QUALITY related losses: 5. Start-up rejects; 6. Production rejects

1. Unplanned stoppages: Loss of production time because the process or equipment that is scheduled for production is not run due to some fault. The equipment can start running only after the fault is fixed. Unplanned stoppages can happen due to breakdowns, lack of resources / material, tool failures, unscheduled maintenance etc.
2. Planned stoppages: The process or the equipment is stopped for performing a setup, adjustment or changeover of tools. Planned stoppages also happen for preventive maintenance and change of input materials.
3. Small stops: Stoppages for very short durations, also referred to as ‘micro stops’, due to minor hiccups. Such stoppages are very short (< 5 minutes) and often not captured and monitored.
4. Reduced speed: Time loss that occurs when the equipment / process delivers a rate of production that is lower than the recommended standard.
5. Start-up rejects: The time spent generating rejects that occur during the initial run after a changeover or adjustment.
6. Production rejects: Time spent generating rejects and attending to them during regular production.

Now, let us think about why we cannot obtain and sustain an OEE of 100%. If we look at the above factors, all are quantifiable and hence can be taken up for improvement from their current levels. However, maintaining each one of them at 100% in a sustained manner may not be practically viable. It may also be noted that since ‘Planned stoppages’ are also part of the big losses, and quite often the major contributor, this component will never reach zero. There will always be a need for planned maintenance, changeovers, adjustments and so on. At the same time, this provides the highest opportunity for applying Lean Six Sigma tools like SMED, predictive maintenance etc. for continual improvement.

Yet another reason for OEE not touching 100% is that the ‘Efficiencies’ are measured against set standards. Different organizations within the same industry can set different standards for throughput. Further, it is expected that as part of continuous improvement, the standards will undergo upward revision periodically. This will increase the challenge for the Efficiency scores to catch up and sustain.

Conclusion: OEE is a very powerful metric that tells us about the extent of value-added time spent by the process or the equipment. For the reasons explained, a perfect score of 100% is not the real objective, but this metric and its break-up details help to continually point out the areas where losses are occurring and to continually improve such areas to increase the value-added time.
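A minimal numeric sketch of the OEE arithmetic described above (the shift figures are invented for illustration):

```python
# Minimal sketch: OEE = Availability x Efficiency (Performance) x Quality.
# All input figures are hypothetical.

def oee(planned_minutes, operating_minutes, ideal_rate_per_min, total_units, good_units):
    availability = operating_minutes / planned_minutes                   # time actually run
    efficiency = total_units / (operating_minutes * ideal_rate_per_min)  # speed vs standard
    quality = good_units / total_units                                   # first-pass yield
    return availability, efficiency, quality, availability * efficiency * quality

if __name__ == "__main__":
    a, e, q, overall = oee(planned_minutes=480, operating_minutes=420,
                           ideal_rate_per_min=1.0, total_units=380, good_units=361)
    print(f"Availability = {a:.1%}, Efficiency = {e:.1%}, Quality = {q:.1%}, OEE = {overall:.1%}")
```

With these figures the factors are about 87.5%, 90.5% and 95.0%, giving an OEE of roughly 75%, which illustrates why even good individual factors multiply down to a noticeably lower overall score.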
  6. Benchmark Six Sigma Expert View by Venugopal R

As early as 1924, Walter Shewhart introduced what we all know as Control Charts, which have become a very popular and important tool in Statistical Process Control. The Upper Control Limit (UCL) and Lower Control Limit (LCL) represent the +/- 3 sigma limits derived from the historical data generated by the process itself. So long as the process is in statistical control, the probability of any point falling outside these limits is as low as 0.003. Hence any point falling outside the UCL or LCL is suspected as an abnormal occurrence and subjected to analysis to look for any assignable cause(s). Based on the above, it generally implies that so long as the points fall within the control limits, the process is in statistical control and there is no need to suspect any abnormality.

In 1984, Lloyd S. Nelson published in the Journal of Quality Technology that, just as the probability of a point falling outside the control limits is very low, there are other patterns with an equally low probability of occurrence, even though all the points may lie within the control limits. Hence, such situations would also be indicative of the existence of special causes. He came out with 8 rules for suspecting the presence of special causes, of which Rule-1 is the original case of a point falling outside the control limits. Each of these rules is illustrated below:

While multiple rules are available, it is important to decide which rule needs to be applied when. Rule-1 is the fundamental one, and hence we start with it for any situation. Subsequently, Rules 2 to 4 comprise a good set that helps with many of the commonly occurring special causes. For an engineering study, adding Rules 5 and 6 will increase the sensitivity to changes in the process average. Rules 7 and 8 will help to identify problems relating to sampling, viz. stratification and mixtures.

While this battery of rules is expected, by and large, to reveal more special causes with higher sensitivity, there are still possibilities of special causes that escape these rules. For example, points may alternate up and down repeatedly, except for an adjacent pair of points that move in the same direction beginning at every nth sub-group. This could mean an underlying special cause that would not get detected by Rule-4. One has to be alert in the usage of control charts to detect any patterns that may need attention, while at the same time avoiding over-reaction.

Reference: Journal of Quality Technology, April 1985 edition
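As an illustration of how such rules can be checked programmatically, here is a minimal Python sketch (synthetic data; only Rule 1, a point beyond the 3-sigma limits, and Rule 2, nine consecutive points on the same side of the centre line, are shown, and sigma is estimated crudely from the overall standard deviation rather than from moving ranges as a proper individuals chart would use).

```python
# Minimal sketch: checking Nelson Rule 1 and Rule 2 on a series of readings.
from statistics import mean, stdev

def rule1(data, centre, sigma):
    """Rule 1: any point more than 3 sigma away from the centre line."""
    return [i for i, x in enumerate(data) if abs(x - centre) > 3 * sigma]

def rule2(data, centre, run=9):
    """Rule 2: 'run' consecutive points on the same side of the centre line."""
    hits = []
    for i in range(run - 1, len(data)):
        window = data[i - run + 1:i + 1]
        if all(x > centre for x in window) or all(x < centre for x in window):
            hits.append(i)
    return hits

if __name__ == "__main__":
    readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.4, 10.3, 10.2, 10.5,
                10.3, 10.4, 10.6, 10.2, 13.5, 9.7, 10.0]
    centre, sigma = mean(readings), stdev(readings)  # crude estimates for the sketch
    print("Rule 1 flags at indices:", rule1(readings, centre, sigma))
    print("Rule 2 flags at indices:", rule2(readings, centre))
```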
  7. Benchmark Six Sigma Expert View by Venugopal R

It is believed that the acronym SMART was first used in 1981 by George T. Doran, who worked for the Washington Water Power Company. Since then this acronym has become very popular, widely accepted, and used as a guideline for goal setting during the Define phase of a Six Sigma project. SMART is an excellent tool to guide and focus the thoughts of a team to evolve a pragmatic goal for a project.

It is true that a project is taken through the Six Sigma route only if the causes and solutions are not clear. What is clearly known is the problem, which has a business case derived from its business impact. The very reason to establish a structure like DMAIC and the associated tools and methodology is to help the leader and team think in a structured manner and proceed through a path that has many uncertainties and unknown factors. The approval and support of the project sponsor and the leadership is an essential requirement without which a Six Sigma project seldom succeeds. As part of the charter approval, it is important that the Project Leader, Sponsor and the Executive Leadership agree not only that the project is important for the business at that point of time, but also that the goal that has been set is considered, by consensus, as achievable.

Since neither the exact causes nor the solutions are known at that point of time, the criteria for 'achievability' have to be based on several other factors. Ideally the goal for a Six Sigma project has to be challenging enough to be accomplished with some amount of stretch, but the ‘challenge’ should not be ‘unrealistic’ based on the capabilities and resource availability of the project team. The project goals are usually deployed from the strategic business goals or from current pain points of the organization. If the project goal is deployed from a strategic business goal, then it becomes a critical prerequisite to fulfil the larger business requirement. When the strategic goals are decided, the organization plans for the necessary budget, knowledge, technology and other resources. If this planning is done with adequate thought, a good deal of the support that is necessary for a deployed project would get built in. Once the project is seen within such a framework by all the concerned stakeholders, judging its 'achievability' becomes more realistic.

The involvement of the stakeholders mentioned above is key. On the other hand, if the responsibility of deciding and setting the SMART goals is vested only with the project leader, without adequate involvement, review or support from the other parties, there is a risk of the project goal not aligning with the broader business goals and the 'achievability' becomes questionable. This is a mistake that happens in many organizations where many Lean Six Sigma projects do not progress successfully.

Another important point to be noted is that the “A” in a SMART goal cannot be seen without associating it with the rest of the letters. Often, the project goals start with a broad thought, viz. “Improve Market Share”. Once we make it Specific, we would have to think about and stipulate more details such as “Market share for Product A”, “Market share within a certain region” etc. As part of 'Measurability', a commonly accepted, reliable method of measurement is established. Although it may be argued that this detailing happens during the Measure phase, the DMAIC phases do not take place in a rigid sequence and some back and forth movement is essential. One has to think in totality, keeping in mind all the phases, at every stage of the project. We have already discussed the ‘Relevance’ of the project with respect to the overall business goals. The ‘Achievability’ has to be credible within a time period, thus emphasizing the importance of ‘Time Bound’.

Other factors that may be considered to judge the ‘Achievability’ of the goal include:
Review the type of the goal – whether it relates to a “compliance gap” or is “enhancement related”. If it is a compliance gap for an existing target, compare with the past performance trend for the same / similar processes. Based on the above, assess the process potential and process capability of the organization to achieve the target. If it is an enhancement kind of target, compare with industry benchmarks and then assess whether the organization is equipped (or has plans to equip itself) with the required capabilities, resources and know-how.
Consider the level of the project. Is it a Green Belt or Black Belt project? This will also be reflective of the experience, capabilities and track record of the team leader and the team members for achieving the set target.

While a project goes through the Define and Measure phases, it is possible that many facts will surface and the level of clarity about the project definition will be greater than when the initial charter was developed. This elevates the confidence level of the team, and it is important to revisit the charter and goals, whereby the ‘Achievability’ will get re-assessed and the goal may be revised accordingly.
  8. Benchmark Six Sigma Expert View by Venugopal R

When a company wants to launch a new product, there are certain concerns that need to be addressed: How do we know whether the product functions and features will be useful and appealing to customers? Customers can provide relevant feedback only if they actually feel and use a product, but it may be too expensive to launch a product and then modify it based on feedback. It would take too long to develop a product with several features, and what happens if there is too much adverse feedback? It could be too expensive to invest in many features, only to realize later that they need major modifications. There is a risk of losing the market as well.

The Minimum Viable Product (MVP) is a term introduced by Frank Robinson in 2001. It refers to a version of the product that is launched in the market and is functional, but with only a minimal set of features. By launching a product with bare minimum features, but fully usable by customers, the company will be able to obtain very realistic feedback from the early users and further evolve the product. It is very likely that prior to launching the MVP, the company would have done adequate market studies and even developed a prototype. However, the MVP is a version which actually hits the market commercially. Especially for newly conceived products, it is very difficult to obtain the customer expectations and requirements unless customers actually get a practical feel of the product. That is why the MVP became important. Strategically, the MVP provides an avenue to enter the market and, at the same time, use an iterative approach to keep gathering feedback and continually upgrade the product from its minimum viability to maximum viability.

The thought process of the MVP is comparable to that of the Agile methodology used in software development. One of the principles within the manifesto of Agile development is to keep delivering versions of working software frequently and obtain iterative, incremental and evolutionary progress.

Some of the popular products that impact day-to-day life for most of us and that used an MVP are Facebook, Airbnb, Amazon and Uber. If we examine the origins of these companies, they started on a very small scale, but offered a completely viable service before growing enormously. Facebook’s MVP was about linking students through their college / class and getting them to post messages to their boards. All further features were built upon that success over a period of time. Airbnb started with an aim to provide affordable short-term rentals; their MVP offered accommodation to 3 guests who were visiting a design conference. Amazon’s MVP offering began with a simple web design to sell books at low prices – from then on, they have grown to their current levels in e-commerce. Uber’s MVP was to offer cheap taxi services by linking some of San Francisco’s iPhone users, who were willing to make credit card payments through an app, with drivers.

It may be appropriate to say that by using an MVP, products and services are built by a company for its customers in collaboration with the customers, by using valuable customer inputs.
  9. Benchmark Six Sigma Expert View by Venugopal R

First of all, let us salute all those who are engaged in protecting us from this pandemic, which greatly includes all the people who are directly or indirectly involved in healthcare activities, under such challenging and trying circumstances. Any views on this forum should never be mistaken as criticism towards anyone doing such noble service, but taken as a discussion for learning from the experiences and generating thoughts that could help society as a whole to be better prepared in future.

One of the main issues that we see across the healthcare systems of the world with the prevailing problem is Muri, which means ‘overwork’, for healthcare workers. It would be unfair to blame the healthcare systems for the excess Muri, based on the current situation, since it is beyond anyone’s imagination. However, under these circumstances it is essential to do everything possible to provide relief to the people who are getting overburdened. It has to be mentioned that there are many efforts being taken by various governments and many volunteers to this effect as well. The three components of waste, viz. Mura, Muri and Muda, can affect one another and hence it is important to address all of them together. Very often, we see that Muda is the one that gets the most attention. I have tabulated these waste components with some examples and suggested systemic remedies. It may be noted that this is only a very small representation; there are bound to be many more situations for each category, and the solutions may not always be easy to implement.

1.0 MURI (Overburden)
1.1 Overbearing tasks
Examples: Stretched working hours for direct and indirect healthcare staff; staff forced to handle an excessive number of cases compared to normal
Suggested remedies: Clear criteria to identify genuine cases who need to be admitted, plus awareness; consider geographical redistribution of staff based on need
1.2 Work related stress
Examples: Handling patients who are not very cooperative; personal attenders / relatives of patients are not permitted due to risk of infection, which adds burden to the healthcare staff
Suggested remedies: Support the skilled healthcare staff with other in-house staff who can play the role of personal attenders; maintain regular contact with patients’ relatives and obtain oral assistance
1.3 High risk tasks
Example: Frontline healthcare staff are at high risk of being exposed
Suggested remedies: Continued awareness and providing equipment to staff; ensure routine 5S in the workplace; plan staff rotation in high risk areas to prevent prolonged risk to anyone

2.0 MURA (Variability)
2.1 Materials related
Examples: For materials required by healthcare staff, mismatch between requirement and availability; variation in the quality of the materials
Suggested remedies: Material planning exercise to be done at treatment centre level and at regional level; have standards for each item and centralized compliance monitoring; establish an authority who understands the risks to decide on the use of any material that doesn’t meet the standards in case of emergencies
2.2 Methods related
Example: Differences with respect to diagnosis, treatment approach, handling, duration, and conditions within and between treatment facilities
Suggested remedies: Standards for all methods with frequent updating and compliance monitoring by a central organization; frequent sharing and synching of best practices across centres
2.3 Manpower related
Examples: Unpredictable variation in day-to-day patient count; variation in knowledge & skills among staff
Suggested remedies: State level planning for potential patient turnover and necessary treatment facilities; adopt a buddy system to quickly orient staff and reduce knowledge variation
2.4 Machines related
Example: Critical equipment not functioning or functioning with variations, especially during emergency situations – leading to waiting or treatment deficiencies
Suggested remedy: Equipment availability, both in terms of numbers and through predictive maintenance
2.5 Measurements related
Examples: False positives / false negatives on the screening tests; dependency on sampling for screening evaluation
Suggested remedies: MSA on the screening measurements to understand the measurement reliability and improvement actions; application of different sampling methods, like stratified sampling, to obtain a realistic density of the problem

3.0 MUDA (Wastages)
3.1 Transportation
Examples: Transporting patients for various requirements - testing, ICU; transporting equipment across centres
Suggested remedies: Study transportation data to review the facility layout for optimizing movements; consider creating ‘self-sufficient’ zones based on cost-effort-benefit analysis
3.2 Excess Inventory
Examples: Excess stock of medicines that does not get consumed for long; large numbers of patients queuing up to be attended at various stages; test requests / reports piled up
Suggested remedy: Value stream and Kanban methods could help in streamlining the processes and minimizing inventory
3.3 Excess movements
Examples: Healthcare staff having to move about within a centre for various activities; too many movements required by physicians and other healthcare staff while examining / treating patients
Suggested remedies: Layout optimization, maximise communications, review positioning of facilities; workplace management – 5S practices
3.4 Waiting time
Examples: Patients waiting to get admitted, to see the physician, for tests, reports, discharge etc.; physicians waiting for test reports
Suggested remedies: Many of the solutions for the other wastes will help to reduce waiting times; the reasons for the waiting time have to be analysed to see whether it is a result of an inefficiency in another process, so as to decide the best options for a solution
3.5 Over processing
Examples: Assigning more staff than required for a patient; keeping a patient under care longer than required; doing unnecessary diagnostic procedures; patients having to repeat the same information multiple times
Suggested remedies: Expectations at each stage need to be understood well, so that both under-processing and over-processing are minimized / avoided; ensure patient inputs are recorded and the same file is maintained
3.6 Over production
Example: Preparing for a patient too much in advance
Suggested remedy: Define a preparatory lead time for effective & efficient preparation to receive a patient
3.7 Defects
Examples: Wrong diagnosis, administering the wrong medicine, improper dosage, mix-up of reports etc. are some serious defects; other defects are also possible, such as skipping a step during a clinical test, or missing out an instrument on a surgical case cart
Suggested remedies: Training & certification, checklists, software-controlled protocols
3.8 Talent unutilized
Examples: Feedback / suggestions from junior and supporting healthcare staff not considered; best practices between individuals and treatment centres not captured
Suggested remedies: Institute and encourage a Kaizen system, encouraging all staff to provide suggestions and best practices; reward & recognition schemes
  10. Benchmark Six Sigma Expert View by Venugopal R

Regression shows us the relationship between two variables, graphically as well as by an equation. By fitting a regression model between two variables, we can predict the dependent variable for any given value of the independent variable.

What is a residual? One of the common methods used to fit a regression line is the 'Least Squares' method, which minimizes the sum of the squared residuals. On a fitted regression, many of the observed values will not fall exactly on the fitted line. The vertical distance between an observed value and the fitted line is the residual. The residual can be either positive or negative, depending on whether the observed value falls above or below the fitted line.

Residuals and Goodness of Fit: The residuals and their pattern represent how the errors are distributed along the regression fit. By assessing the pattern of the residuals, we can determine whether they represent a stochastic error pattern. By stochastic, we mean that the error pattern is random and unpredictable. For a simple analogy, we may take the case of rolling a die. Though we would not be able to predict the outcome each time an unbiased die is rolled, over a series of rolls we can determine whether the appearing numbers follow a random pattern or not. For instance, with a biased die, if we observe the number “3” appearing more often, there is a bias, and if this is noticed by a player, it can be used to his / her advantage. A similar principle applies to regression models as well. For a series of observations, the errors should be random and unpredictable. By analyzing the residuals, we will be able to decide whether the regression model is a systematically correct and reliable model or whether we need to improve it.

Randomly distributed residuals: For simplicity, we will restrict our discussion to the Ordinary Least Squares (OLS) regression model. The relevant plot is that of the residuals, plotted with zero as the X axis; we get this plot by choosing the 'Residuals vs Fits' option in Minitab. Since natural variations tend to follow normal distributions, with more values falling close to the center and symmetrically spread on both sides, the residuals of a systematically correct OLS model are also expected to show randomness and an even spread across the horizontal axis. A similar pattern of spread across the breadth shows that the level of randomness is maintained across the range of the relationship. This behaviour is also known as homoscedasticity, or constant variance.

Non-randomly distributed residuals: A 'Residuals vs Fits' plot can instead depict heteroscedasticity, or non-constant variance, for example where the variation of the residuals is higher towards the left and reduces as we go to the right. Such a pattern indicates that the error is higher for the lower set of values in the regression relationship and decreases significantly as we move to the higher set of values. Heteroscedasticity does not cause bias in the estimation of the coefficients, but it adversely affects their precision. This pattern of variation violates the assumptions of linear regression modelling, and the model becomes unreliable for predictions.

Non-normally distributed residuals: Another problematic pattern arises with nonlinear data, where a linear model is the wrong choice. The residuals then follow an arch-like shape. This indicates that the data is nonlinear and applying a linear model is a mistake. In this example, the residuals will be non-normal and skewed to one side.

What do heteroscedastic models indicate, in general? Heteroscedastic models indicate that the deterministic component of the model, i.e. the predictor variable, is not capturing some assignable or explanatory information, which is instead getting added into the residuals. Or there could be an interaction between the variables across the levels which has not been identified. For example, if we want to study the productivity of workers across age, we might have a situation where the variation in performance is less at younger ages and, as age increases, there could be a drop in productivity coupled with increased variation. This will result in a cone-shaped pattern with increasing variation along the X axis of the ‘Residuals vs Fits’ graph.
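To show how a residual check can be done numerically, here is a minimal Python sketch (synthetic data; the simple 'compare the spread in the lower and upper halves' check is an illustrative assumption, not a formal test such as Breusch-Pagan). The noise is deliberately made to grow with x, so the residuals fan out the way the heteroscedastic pattern described above would.

```python
# Minimal sketch: fit a least-squares line and compare residual spread across the range.
import random
from statistics import pstdev

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

if __name__ == "__main__":
    random.seed(1)
    x = [i / 10 for i in range(1, 101)]
    y = [2 + 3 * xi + random.gauss(0, 0.2 * xi) for xi in x]   # noise grows with x
    a, b = fit_line(x, y)
    residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    half = len(x) // 2
    print(f"Fitted line: y = {a:.2f} + {b:.2f}x")
    print(f"Residual spread, lower half of x: {pstdev(residuals[:half]):.3f}")
    print(f"Residual spread, upper half of x: {pstdev(residuals[half:]):.3f}")
```

A clearly larger spread in one half is the numeric counterpart of the fanning-out seen on a 'Residuals vs Fits' plot.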
  11. Benchmark Six Sigma Expert View by Venugopal R

Critical Path Method: The Critical Path Method (CPM) has been in existence since the 1950s. It was developed by Morgan Walker and James Kelley. The trigger for the development of CPM is attributed to PERT (Program Evaluation Review Technique), which was developed around the same time by Booz Allen Hamilton. CPM is a project modelling method that applies to any kind of project - construction, aerospace, defence, software, product development, research, healthcare and so on. The steps involved in CPM are broadly outlined below:
Break down and list the activities involved in the project
Sketch the activities sequentially, depicting the serial and parallel paths
Identify and indicate the expected time duration for each activity
Indicate dependencies between the activities
Calculate the time for each path and identify the path that is expected to take the longest duration, which is the ‘Critical Path’
Any delay on the critical path will directly impact the project timelines, and hence it is very important in Project Management to ensure the Critical Path is not delayed.
Assumptions while using CPM:
1. Multi-tasking is not done using the resources
2. The time duration estimates are considered realistic and honest

Critical Chain Method: In the late 1990s, Eliyahu Goldratt, the author of the famous book “The Theory of Constraints” (TOC), came up with another business novel by the name “Critical Chain”. Critical Chain Project Management is based on the methods and algorithms derived from TOC. He brought out the fact that, practically, there would be constraints in the form of resource limitations that could adversely impact the Critical Path Method. There would also possibly be a ‘buffer’ in the time durations provided by the respective stakeholders. Two other syndromes that are likely to occur with CPM are:
Student Syndrome – Whenever the available time is generous, there is a tendency to commence the work only towards the later part of the duration.
Parkinson Syndrome – Whenever a task is completed before time, there is a tendency to keep ‘polishing’ the work to fill in the remaining time, though such activity is only wasteful ‘over-processing’.
In the Critical Chain Method, the activity time durations are cut by 50% and the time so removed is kept as a buffer. The buffers may be classified as Project Buffers, Feeding Buffers and Resource Buffers. Project progress is monitored by tracking the extent of consumption of the buffers rather than individual adherence to schedules.

Theory of Constraints: A quick overview of the principle of the Theory of Constraints is as follows:
Identify the system’s constraint
Exploit the system’s constraint
Subordinate & synchronize to the constraint
Elevate the performance of the constraint
Repeat the process

Critical Chain Method – Benefits: By applying Critical Chain Management, the following benefits are obtained:
Resource conflicts removed – Constraints due to resource variability, interdependence and contingencies are removed from the required resources. Resource de-contention is done for the project based on the capacity of the available resources.
Critical Chain is identified – The critical chain is the longest series of interdependent activities through a network that are connected by task or resource dependencies. The critical chain is the project constraint for meeting the delivery commitments.
Feeding Buffers – Variability in the non-critical feeding chains can become a constraint for the critical chain. The feeding buffers help to address those constraints.
Start only as early as necessary – By this, the delaying of Critical Chain tasks is controlled and, in turn, the resource constraints that occur due to company-wide imbalanced loading of resources are addressed.
Address variability in the Critical Chain – The project buffer protects the critical chain completion date from variability within the critical chain itself.
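The 'calculate the time for each path' step of CPM can be illustrated with a minimal Python sketch (the activity names, durations and dependencies are invented for illustration); it does a forward pass through the network and traces back the longest chain of dependent activities.

```python
# Minimal sketch: earliest-finish forward pass and critical path for a small network.

def critical_path(activities):
    """activities: {name: (duration, [predecessor names])}; assumes no cycles."""
    earliest_finish, best_pred = {}, {}

    def finish(name):
        if name not in earliest_finish:
            duration, preds = activities[name]
            start = 0
            for pred in preds:
                if finish(pred) > start:
                    start, best_pred[name] = finish(pred), pred
            earliest_finish[name] = start + duration
        return earliest_finish[name]

    last = max(activities, key=finish)          # activity that finishes last
    path = [last]
    while path[-1] in best_pred:                # walk back along the driving predecessors
        path.append(best_pred[path[-1]])
    return list(reversed(path)), earliest_finish[last]

if __name__ == "__main__":
    network = {
        "A": (3, []), "B": (2, []), "C": (4, ["A"]),
        "D": (5, ["A", "B"]), "E": (2, ["C", "D"]),
    }
    path, duration = critical_path(network)
    print("Critical path:", " -> ".join(path), "| project duration:", duration)
```

Here the longest path is A -> D -> E with a duration of 10, so any delay to A, D or E delays the whole project.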
  12. Benchmark Six Sigma Expert View by Venugopal R

Statistical sampling is a method that has long been used to help assess the characteristics of a population. Though the best option would be to assess the entire population, that may not be practically possible, hence the dependency on sampling to take decisions.

Sampling risks: While every method of sampling is associated with a risk of errors, it is possible to understand and even quantify these risks and thus take an informed decision. Most of us will be aware of sampling errors, but we can have a quick recap:
1. Risk of declaring a good population as bad (alpha risk)
2. Risk of declaring a bad population as good (beta risk)
Any sampling plan is governed by its operating characteristic curve (OC curve), which depicts and quantifies these risks. The OC curves and the sampling plans based on them have been very widely used in business for deciding the appropriate acceptance sampling. However, I am not elaborating further on this topic here, since there are many other aspects of sampling to be covered.

Sampling frame: To obtain a representative sample from a population, it is important to define the ‘sampling frame’. The sampling frame is the set of units that exhaustively represents the universe from which we take a sample. For instance, if we need to pick a sample for assessing customer satisfaction for a certain product and we pick the sample customers based on credit card details, the sample will not cover the set of customers who paid through other means, and it is possible that their levels of satisfaction could be markedly different. Hence, in this case, the ‘sampling frame’ should include customers from all modes of payment. A sampling frame should be defined in such a manner that it considers and represents all possible stratifications of the population. The number of units in the population not covered by the frame is known as the ‘gap’. If the units in the gap are distributed like the units in the frame, then the sample will be a good representation of the population. Samples taken without using a frame are called ‘non-probability’ samples, whereas samples taken using frames are called ‘probability’ samples. It is recommended to use probability sampling whenever possible, so that valid statistical inferences can be derived.

Let us discuss various types of probability samples that could be used in different situations:

Simple Random Sample: This is one of the most basic sampling methods. In this method, every item in a population of N items has an equal chance of being picked. The lot of N items represents the frame. One may use random numbers to pick the samples.

Stratified Sampling: Here the N items in a population are divided into sub-groups, or strata, based on a characteristic of relevance. A simple random sample is selected from each stratum and the combined result is obtained. For instance, if we need to pick a sample to perform a medical test on the population of a state, we can sub-classify the population into districts and pick random samples from each district. The stratified sampling technique can help to reduce the overall sample size needed to obtain the same level of confidence in the inferences. Further, it will also help us understand whether any heterogeneity is present between the strata.

Systematic Sampling: In systematic sampling, we classify all the items in the frame into groups by dividing the total number of items by the sample size. A very simple example of systematic sampling is to pick every nth item from a production line for inspection. While this sampling method gives a uniform coverage across the frame, one has to be cautious of certain disadvantages. For instance, imagine this is used for assessing the travel experience of people who got off a flight, and the method followed was to pick every 12th passenger who exits. There is a possibility that you might be picking up more passengers who were occupying a particular seat location, say a window seat, which is likely to introduce bias into the sampling.

Cluster Sampling: All the items in the frame are divided into clusters. Clusters are naturally occurring sub-categories of the frame, for example districts within a state, or colleges within a region. Out of the n clusters, a few clusters are selected and all the items in those clusters are studied. It may be noticed that the cluster sampling method is different from the stratified sampling method. Cluster sampling could result in an increased sample size, but sometimes it may be convenient and reduce the need to travel.

Keeping the objective in mind, the sampling strategy and method will have to be decided so that the inferences based on the sample are meaningful and reliably representative of the population.
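A minimal Python sketch (the frame, strata labels and sample sizes are invented for illustration) showing how simple random, systematic and stratified samples can be drawn from the same sampling frame.

```python
# Minimal sketch: three probability sampling methods applied to the same frame.
import random

def simple_random(frame, n):
    return random.sample(frame, n)

def systematic(frame, n):
    step = len(frame) // n                      # sampling interval
    start = random.randrange(step)              # random starting point
    return frame[start::step][:n]

def stratified(frame, stratum_of, n_per_stratum):
    strata = {}
    for unit in frame:
        strata.setdefault(stratum_of(unit), []).append(unit)
    return {s: random.sample(units, min(n_per_stratum, len(units)))
            for s, units in strata.items()}

if __name__ == "__main__":
    random.seed(7)
    # frame: 100 customers tagged with an invented region as the stratification variable
    frame = [(i, "North" if i % 2 else "South") for i in range(1, 101)]
    print("Simple random:", simple_random(frame, 5))
    print("Systematic   :", systematic(frame, 5))
    print("Stratified   :", stratified(frame, stratum_of=lambda u: u[1], n_per_stratum=3))
```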
  13. Benchmark Six Sigma Expert View by Venugopal R CONWIP and KANBAN are methods in Lean Management, aiming for triggering ‘Pull Systems’. KANBAN originated as early as 1940 in Toyota Production Systems and was developed by Taiichi Ohno. KANBAN, in Japanese translates as ‘Billboard’. The idea of KANBAN was to indicate the ‘available capacity’ and helps to implement Lean and JIT (Just-In-Time) production. KANBAN operates under the principle of using a limited number of cards (KANBAN cards), with each card representing a specific quantity (one or more) of a specific type of part. Thus, every part will have a KANBAN card attached to it. To express KANBAN in simplest terms, if any part is completely processed and consumed by market or for subsequent processing, its card gets released and goes back to the beginning of the process flow and pulls-in another part for being processed. While KANBAN works well for many situations, it faces challenges when we have to accommodate a changing product-mix. It becomes difficult to have ‘product-specific’ cards for each type of product. Further, KANBAN method faces practical issues when we have small orders or infrequent jobs. WIPs may have to wait for long time, since the system would respond only after the KANBAN system authorizations will have to be propagated all the way to the beginning of the process to trigger new release. The CONWIP system (Constant Work-In-Progress) was developed by Mark Spearman and Wallace Hope in 1990. The CONWIP system is similar to KANBAN in many ways, but instead of the card being associated with a specific type of part, it is associated with a certain quantity. So, when CONWIP card gets released upon the part(s) represented by it being consumed, it goes back, just like the KANBAN card does, to the beginning of the process. However, on the way, it checks with the backlog to see which parts are in demand along with its quantity. These details are captured by the card, so that the replenishment will be done for the part in demand for the required quantity. KANBAN was focusing on pulling the required part to keep the inventory low, whereas CONWIP focuses on replenishing the capacity, in line with market requirement, by ensuring constant WIP capacity. One of the ways of ensuring the WIP capacity, is by pre-defining WIP levels for each type of part and the processing time / batch for each of those parts. With that, not only the overall inventory can be expressed in terms of time, but also the breakdown of part specific inventory hours as well. The below table gives an example: It may be seen that the inventory information for various parts have been made comparable by expressing in hours. The part wise comparison to targeted WIP is also available. Usually an MRP system will provide real time data for the CONWIP card on the priority capacity. However, it may be kept in mind that for certain situations, the target WIP could be dynamic and would vary depending on the fluctuating market requirements. While KANBAN system is very suitable for high volume, low variant parts, CONWIP will be a better choice for made-to-order parts with lower volumes, but higher variants. A hybrid system encompassing both KANBAN and CONWIP is also used in certain situations where both high-volume parts and less frequent ones co-exist. One such model is where KANBAN cards are assigned to high volume parts and CONWIP to low volume parts. 
Depending on the type of card received, the part to be produced is chosen as per the KANBAN system or the CONWIP system. If this is practiced successfully, the benefits of both systems can be availed.
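As a rough illustration of expressing part-wise WIP in hours against a target, here is a minimal Python sketch. The part names, quantities, times and targets are made-up assumptions, and the calculation (WIP quantity multiplied by hours per unit) is only one plausible way the table referred to above could be built.

```python
# Minimal sketch: expressing part-wise WIP in hours and comparing it to a
# target. All figures below are illustrative assumptions, not data from the
# original answer.
parts = {
    # part: (wip_quantity, hours_per_unit, target_wip_hours)
    "Part A": (40, 0.5, 18.0),
    "Part B": (10, 2.0, 25.0),
    "Part C": (25, 1.0, 20.0),
}

total_wip_hours = 0.0
for name, (qty, hours_per_unit, target) in parts.items():
    wip_hours = qty * hours_per_unit        # inventory expressed in time
    total_wip_hours += wip_hours
    gap = wip_hours - target                # positive means above target WIP
    print(f"{name}: WIP = {wip_hours:.1f} h, target = {target:.1f} h, gap = {gap:+.1f} h")

print(f"Overall WIP = {total_wip_hours:.1f} h")
```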
  14. Benchmark Six Sigma Expert View by Venugopal R

Team-based activities require different kinds of motivation. Those who have played the role of facilitator, engaging cross-functional employees in team exercises, need to keep evolving interesting ways, on an on-going basis, to get the best from team sessions. Many of us would be familiar with voting techniques such as the Delphi method, used as part of the Improve phase of a DMAIC cycle and in many other situations. Planning Poker is a similar method, used popularly as part of the Agile Scrum framework, by which the size of a user story is determined by consensus.

A user story, in software development, is a description of a software feature from an end-user perspective, written in natural language. Sizing of a user story considers the quantum of work involved, the complexity of the work, risks and uncertainties, time duration, etc. During a facilitated meeting, the team members are given an overview of the story and asked to estimate its size by independently selecting a card from a set of cards. The set provided to each participant contains cards bearing different numbers, usually as per the Fibonacci series. Each team member independently selects the number that represents the size of the story according to his or her assessment, and must not disclose the choice to others until everyone has voted and all cards are flipped at the same time, when asked to do so. Evidently, the purpose of the secret voting is to avoid anyone being influenced by another's vote and to obtain independent views.

The term "Planning Poker" was first introduced by James Grenning in 2002, and it became more popular through Mike Cohn's book 'Agile Estimating and Planning'. The Fibonacci series (1, 2, 3, 5, 8, 13, …) is used to allow a significant difference between the voting choices. Apart from physical cards, mobile apps and collaborative software have also emerged, enabling team members in remote locations to participate.

The steps below depict the typical procedure for Planning Poker (a simple sketch of such a voting round is given after the points of caution that follow):
Team members meet physically or remotely.
The meeting is moderated by a facilitator, who does not vote.
The user story to be estimated is presented, usually by the product owner.
Any questions or clarifications are addressed by the presenter.
Each individual chooses a card representing his / her estimate and places it face down. Individual choices are strictly not to be disclosed at this point. If there are remote participants, other voting tools are used.
When asked by the facilitator, all team members flip their cards at the same time.
In case of extreme differences in the voted numbers, the team members concerned are asked to justify their outlying votes.
The estimation process is repeated until a consensus is arrived at.

Certain points of caution to be exercised while using Planning Poker:
Like any other project management tool, this technique works only as well as it is implemented. The tool cannot replace the creativity and presence of mind required of the Scrum master.
Scrum masters are at times under pressure to arrive at the story points and may start declaring the points even before the exercise is complete.
Gathering the right set of team members and holding their attention through the exercise is often a challenge.
Any disclosure of the points before it is called for defeats the purpose of the exercise.
People tend to misinterpret and compare story sizes based on the point values, whereas effort levels need not be proportional to values based on the Fibonacci series.
Too many stories should not be attempted for sizing in one session.
If the list of acceptance criteria is too long, it could be an indication that the story is too big to fit into a sprint.
Sometimes the focus is more on the development side, and the points for testing effort may get left out or given lower attention.
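As a rough, non-authoritative illustration of a voting round, here is a minimal Python sketch. The team names, the Fibonacci card set, and the crude "move towards the median after discussion" rule are all assumptions made for the sketch, not part of the Planning Poker definition.

```python
import random
import statistics

FIB_CARDS = [1, 2, 3, 5, 8, 13, 21]

def nearest_card(value):
    """Return the Fibonacci card closest to a numeric value."""
    return min(FIB_CARDS, key=lambda card: abs(card - value))

def reveal(votes):
    """All cards are flipped at the same time, only after everyone has voted."""
    print(", ".join(f"{member}: {vote}" for member, vote in votes.items()))

team = ["Asha", "Ben", "Chitra", "Dev"]   # illustrative names

# Round 1: everyone selects a card independently and secretly.
votes = {member: random.choice(FIB_CARDS) for member in team}
reveal(votes)

# If the votes differ, the outliers explain their reasoning and the team
# re-votes. The "discussion" is modelled crudely here: each member moves to
# the card nearest the median of the previous round.
while len(set(votes.values())) > 1:
    discussed_view = statistics.median(votes.values())
    votes = {member: nearest_card(discussed_view) for member in team}
    reveal(votes)

print("Consensus story size:", next(iter(votes.values())))
```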
  15. Benchmark Six Sigma Expert View by Venugopal R

Let's begin with a brief introduction to the TRIZ methodology. TRIZ is a Russian acronym for "Teoriya Resheniya Izobretatelskikh Zadach", roughly translating as "Theory of Inventive Problem Solving". TRIZ was developed by Genrich Altshuller, a Soviet inventor and science-fiction author, starting around 1946. According to Altshuller, for almost any problem we can find a previously discovered solution, provided we are able to express the problem at a generic level; the generic solution can then be adapted to our specific problem. In this context, the three principles of TRIZ are as follows.

Fundamental principles of TRIZ
Principle 1: All problems have solutions outside the technical domain of that problem. Mostly, similar problems in other fields have already been solved by someone else.
Principle 2: An invention happens only when a contradiction is resolved. When we want to improve something, something else is likely to get adversely affected; the invention has to be achieved without such a deterioration.
Principle 3: There are only 39 general issues (parameters) faced by inventors. When one of these has to be improved, one or more of the remaining issues could get adversely impacted.

Thus, the challenge is to resolve a problem with an inventive solution without causing any adverse impact. Fortunately, TRIZ also provides guidelines to address the contradictions. Broadly, TRIZ recommends applying the principle of separation to avoid or reduce the effect of contradictions.

Types of Separation
1. Separation in space
2. Separation in time
3. Separation between parts and whole
4. Separation upon condition

Some simple examples below help illustrate these types of separation in practice.

A common example of separation in space is how crossroads are managed. The contradiction is that we want vehicles in both directions to pass quickly without collisions; by building an overpass, we achieve this through separation in space.

Consider the requirement for quick braking of an automobile. The contradiction here is that the vehicle should not skid on a wet road while braking. The Anti-lock Braking System (ABS) ensures that the braking force is separated in time, based on the extent of friction between the tires and the road.

We would like to perform multiple tasks on our computers, but the contradicting requirement is to keep power / battery consumption to a minimum. The sleep mode separates the parts from the whole system: the system slips into power-saving mode when not used for a certain period, yet the individual components of our work are preserved, so that we can resume from where we left off at any time.

Consider an automated data capture and processing method using optical character recognition. The contradiction here is that while we increase the speed of capture, we should not allow errors to pass through. Built-in validation rules automatically separate the suspected errors for special attention, applying separation upon condition.

Contradiction Matrix
The contradiction matrix covers all 39 parameters. Using this matrix, the contradicting parameter(s) for each parameter can be seen. The matrix also provides certain reference numbers for each contradicting combination of parameters. These numbers point to another table containing 40 suggested 'Inventive Principles', which give high-level ideas for solutions.
For example, if we need to reduce the weight of a motorcycle, we can refer to parameter no. 1 in the table, 'weight of a moving object'. One of the possible contradicting factors would be 'Strength', which is parameter no. 14 on the horizontal row. An extract of the table is shown below:

Inventive Principles (Table of 40 ideas)
The cell where the row and column of the contradicting parameters intersect contains four numbers. If you look up these numbers in the table of Inventive Principles (not included in this article), which contains 40 principles, you can see the idea listed against each number. These ideas are to be taken as clues for finding solutions. In this example, it is quite likely that the use of composite materials (No. 40) could help reduce the weight of the vehicle without compromising strength.

Let's also examine a few other examples in the same context of contradiction, to see the applicability of the other ideas.

Imagine we have to reduce the weight of a piston moving within a cylinder without compromising strength. Taking a cue from principle no. 27, "Inexpensive, short-lived object for expensive durable one", we may use a ring around the piston that can be replaced upon wear, while at the same time protecting the piston.

Another example is the cam-and-gear mechanism traditionally used for intermittent reversal of a washing machine agitator. By applying principle no. 28, "Replacement of mechanical system", the weight of several moving parts can be avoided by providing microcomputer-controlled reversibility to the motor through an electronic control board.

Take the example of a cutting process that uses heavy moving tools and holding equipment, where we are interested in reducing the weight of the equipment. Using principle no. 18, "Mechanical vibration", the process and equipment could be redesigned to use a vibrating tool that produces the same result without compromising strength.

The examples above are inventions that already exist, but we have related them to one or more of the 40 inventive principles based on the contradiction matrix. We do not know whether the TRIZ matrix was actually used for these inventions; had it been used, the solutions might have been arrived at faster. For new problems, referring to the TRIZ principles and matrices could save time and effort by avoiding 're-inventing the wheel' for a solution that is already known at the generic level.

Applicability – beyond Manufacturing
If you go through the table of 39 contradictions and the table of 40 inventive principles, you will observe that the TRIZ methodology has evolved with a predominant focus on Engineering and Manufacturing. However, the concepts of TRIZ, i.e. contradictions and separations, are applicable to non-manufacturing situations as well, by suitably modifying the tables of contradictions and inventive principles. It has to be borne in mind that these methods do not provide a ready-made solution for your problem; a thorough understanding of the methods and the ability to relate your problem to the generic contradictions and inventive principles are essential for successful application of the technique.
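To illustrate the lookup idea, here is a minimal Python sketch that holds only the single matrix cell discussed above (improving parameter 1 against worsening parameter 14, suggesting principles 40, 27, 28 and 18). The dictionary layout and the abbreviated principle labels are illustrative, not a reproduction of the full 39 x 39 matrix or the full table of 40 principles.

```python
# Minimal sketch of the contradiction-matrix lookup idea. Only the single
# cell discussed above is included; the real matrix covers all 39 x 39
# parameter combinations, and the principle labels below are abbreviated.
CONTRADICTION_MATRIX = {
    # (improving parameter, worsening parameter): suggested inventive principles
    (1, 14): [40, 27, 28, 18],   # weight of moving object vs. strength
}

INVENTIVE_PRINCIPLES = {
    18: "Mechanical vibration",
    27: "Inexpensive, short-lived object for expensive durable one",
    28: "Replacement of mechanical system",
    40: "Composite materials",
}

def suggest_principles(improving, worsening):
    """Return (number, idea) pairs suggested for a pair of contradicting parameters."""
    numbers = CONTRADICTION_MATRIX.get((improving, worsening), [])
    return [(n, INVENTIVE_PRINCIPLES.get(n, "(see full table of 40 principles)")) for n in numbers]

# Example: reduce the weight of a moving object (1) without losing strength (14).
for number, idea in suggest_principles(1, 14):
    print(f"Principle {number}: {idea}")
```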