Excellence Ambassador
mohanpb0 last won the day on November 25

About mohanpb0

  • Rank
    Advanced Member

Profile Information

  • Name
  • Company
    M/s CMA CGM Shared Service Centre India
  • Designation
    Director (Performance Management and BCMS)
  1. ARMI / RACI

    The RACI matrix (Responsible, Accountable, Consulted, Informed) is the older of the two and can be traced back to the Responsibility Assignment Matrix (RAM) introduced in the early 1970s. A variation, RASCI or RASIC (Responsible, Accountable, Support, Consulted, Informed), is also used in some organizations. Both RACI and RASCI are popular role-documentation tools. The ARMI (Approver, Resource Person, Member, Interested Party) is more Six Sigma in origin; it serves as a tool to list and categorize stakeholders in a Six Sigma improvement project.

    I have used only ARMI in Six Sigma projects, probably because my initial Six Sigma training included ARMI, and the practice has simply continued. Additionally, having been in organizations where Six Sigma is not necessarily a way of life, there was still a need to have people volunteer, or be persuaded, to join a Six Sigma project team. In such situations, around a decade and a half ago, it had to appear a privilege for a person's name to be associated with a Six Sigma improvement project. The tool for association needed to be both comprehensive and soft. ARMI has this characteristic: it can boost the ego of people who are empowered as Approvers or recognized as Resource Persons. Even being a Member can appear a good opportunity, while being an Interested Party can be taken as a demonstration of the person's commitment.

    RACI, by contrast, carries a certain amount of authority in its intent and sometimes creates unpleasantness, perhaps even dread, when one finds oneself in the "Responsible" or "Accountable" cell or column. This is even more so if one has not been consulted before being made responsible or accountable for a project, activity or task, and it brings stress to people who worry about succeeding in their responsibilities. While being "Consulted" certainly makes people feel good, being merely "Informed" rather sidelines a person and perhaps renders them powerless. Moreover, it is still sometimes difficult to explain the difference between Responsibility and Accountability to some people. For the above, admittedly not entirely professional, reasons I have preferred ARMI to RACI.
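A minimal sketch of what an ARMI chart looks like in practice: each stakeholder is tagged A, R, M or I per project phase. The names, phases and tags below are illustrative assumptions, not from any specific project.

```python
# ARMI chart sketch: stakeholder -> role tag per DMAIC phase.
# A = Approver, R = Resource person, M = Member, I = Interested party.
# All names and assignments are hypothetical, for illustration only.

phases = ["Define", "Measure", "Improve"]

armi = {
    "Plant Head":       ["A", "A", "A"],  # empowered as Approver throughout
    "Quality Engineer": ["R", "R", "M"],  # Resource person early, Member later
    "Line Supervisor":  ["M", "M", "R"],
    "Finance Analyst":  ["I", "I", "M"],  # Interested party drawn in at Improve
}

# Print the chart one stakeholder per row, tagged per phase.
for person, roles in armi.items():
    tagged = ", ".join(f"{ph}: {r}" for ph, r in zip(phases, roles))
    print(f"{person:18s} {tagged}")
```

The soft framing the answer describes comes through in the tags themselves: no one is "Accountable", only approving, resourcing, participating or watching.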
  2. Specification Limits

    It is indeed a challenge to visualise even the characteristics of a totally new product, let alone decide their specifications. The following are some options for making the best of an inherently difficult situation.

    1. Benchmark on split characteristics
    The product as a whole is not available in the market, as this organization is its pioneer. But various features or characteristics of the new product may be available, singly or in combination, in other products already in the market. The specifications for those characteristics can serve as a basis for deciding the specifications of the same characteristics in the new product. They need not be blindly replicated; they can be used as a starting point, with appropriate adjustments applied.

    2. Simulating end-user experience
    Gather from the customer whatever information is available on the intended end-user experience. Assess the various objectives the product is to achieve, both for the customer and for the end user, or at least document a fair idea of them. From this, execute a realistic table-top simulation of the end-user experience. Once this is done, the organization has gone some way towards understanding what its customer wants end users to experience with the new product, and the specifications of the features that deliver this experience can be deduced.

    3. Technical absolutes
    Another option is to go for the best the available technology can offer and negotiate an appropriate agreement with the customer on absorbing the cost of the initial investment. After prototype production, the specifications can be scaled down from the technical maximums to suit the intent. The option of correcting specifications as the organization, the customer and the end users become more knowledgeable about the product needs to be kept alive.
  3. Rolled Throughput Yield

    Based on the calculated Rolled Throughput Yield of 100%, the process cannot be inefficient. But the assumptions made in this calculation may conceal inefficiencies: they can result in convenient omissions that boost the Rolled Throughput Yield to 100%, and sometimes they also lower the perceived design efficiency. Some of these assumptions are:

    1. Related to available time
    The equipment or plant is available for operation 24 x 7 = 168 hours a week, but the Management chooses, or there is business only, to run five days a week for two eight-hour shifts. Within these 80 hours per week the plant operates at 100% Rolled Throughput Yield, but from an overall time-availability angle there is already a loss of around 52% ([168 - 80] / 168).

    2. Related to capacity
    Consider a batch process operating in a drum with a design capacity of 80 metric tonnes. Due to poor maintenance, some residue of the charge has solidified inside the drum, reducing its usable capacity to 70 metric tonnes. The Rolled Throughput Yield on 70 metric tonnes is indeed 100%, but with respect to the design capacity the yield loss is 10/80 = 12.5%.

    3. Related to scope
    For most product types the Rolled Throughput Yield is 100%, but for some product types the yield drops in one or more processes. These product types are scoped out when calculating or presenting the Rolled Throughput Yield.

    4. Related to mode of operation
    Some low-yield processes are outsourced so that the "organization's" Rolled Throughput Yield remains at 100%. Assessed end to end, the Rolled Throughput Yield may fall below 100%.

    5. Related to changeovers
    Changeovers are "non-production" times, so they are excluded from the available time. Accounted for correctly, the Rolled Throughput Yield will be less than 100%.
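The hidden-loss arithmetic above can be sketched in a few lines. RTY itself is the product of the first-pass yields of the process steps; the time and capacity figures are the illustrative numbers from the examples above.

```python
# A minimal sketch: a "100%" Rolled Throughput Yield can coexist with large
# availability and capacity losses that sit outside the RTY calculation.

def rolled_throughput_yield(step_yields):
    """RTY is the product of the first-pass yields of each process step."""
    rty = 1.0
    for y in step_yields:
        rty *= y
    return rty

# Reported RTY: every step runs defect-free during the hours the plant is up.
print(rolled_throughput_yield([1.0, 1.0, 1.0]))  # 1.0

# Hidden loss 1 (available time): 168 hours in the week, only 80 are run.
time_loss = (168 - 80) / 168
print(round(time_loss, 2))  # 0.52 -> ~52% of calendar time unused

# Hidden loss 2 (capacity): an 80 t drum fouled down to 70 t usable volume.
capacity_loss = (80 - 70) / 80
print(capacity_loss)  # 0.125 -> 12.5% of design capacity lost
```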
  4. Hawthorne Effect

    Any Industrial Engineer who has attempted Time and Motion Studies in a factory will have experienced first-hand the reverse Hawthorne effect: workers trying hard to stretch work to fill the available time, intending to get a smaller quota of daily work. While the practice of rating during a Time Study can, to some extent, help the observer arrive close to the correct time standard, this alone may not be sufficient to completely nullify the reverse Hawthorne effect. In other situations, the fear of being branded slow, uncooperative, or as having what is conveniently, if popularly, called "an attitude problem" can make people work faster than normal. The same can happen when the people being observed are mis-motivated to impress the observer with their speed of working. In both situations it would be very difficult to arrive at the correct baseline for the process. Neutralizing these effects and getting people to work normally cannot start on the floor during the observation itself.

    Setting the stage
    This process begins with setting the right environment throughout the organization, so that all staff can be themselves without fear of any kind of retribution. That happens only if staff are genuinely convinced of it in their heart of hearts, and staff will be convinced only if Management demonstrates its intentions and walks its talk; the right actions are more effective than a million words spoken or written. Before the observation begins, all the staff being observed need to be addressed by the organizational Management. The purpose of the observation and baselining needs to be explained clearly, and any questions the staff ask in this meeting need to be answered completely and satisfactorily.

    Draft benchmark
    Once the staff being observed are satisfied and willing to cooperate by behaving normally during the baselining, a few other things need to be done before the observation. Management should obtain, through its own network, an idea of the baseline for similar processes in other organizations in the same or similar industrial sectors. Further, Management should take the help of one trusted staff member or Supervisor to have the process executed privately, away from the floor, observing enough transactions to be representative of the real-life day-to-day scenario. From these two sources, Management gets a reasonable idea of the practices and the time taken for the process it plans to baseline.

    Rating during observation
    The observation can then start, with a trained person continually rating the pace at which the work is executed. Management will need to use the draft benchmark to check whether the results returned by the current baselining effort are close to the earlier assessment. This check should be done at least twice a day, and can be done once an hour.

    Feedback
    If the check shows that the Hawthorne effect or its reverse is visible, the baselining effort should be temporarily halted or its results temporarily ignored. The staff involved need to be called in again, the objective of the baselining study re-explained, and their cooperation requested once more. The feedback that they are working faster or slower than normal needs to be shared with them, along with the request to work at a normal manner and pace for the mutual benefit of all concerned.

    To summarize, the approach for neutralizing the Hawthorne effect or its reverse comprises the following:
    1. Setting the stage with the staff being studied through an open and transparent discussion, ensuring each of them is convinced of the necessity and advantages of working at a normal pace
    2. Developing a draft benchmark by:
       a. Getting external benchmarks from other organizations for a similar process
       b. Preparing a draft baseline by observing a reliable staff member perform the process away from the floor
    3. Rating the pace of the work during the study
    4. Checking the results of the study against the draft benchmark at least twice a day, and giving feedback to the staff being observed if any Hawthorne effect or its reverse is seen
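The rating step mentioned above can be sketched numerically. In classical time study, the observer judges the worker's pace against a notional 100% normal pace, which partly corrects an observation distorted by the Hawthorne effect or its reverse; an allowance for rest and fatigue is then added. The allowance fraction below is an illustrative assumption.

```python
# A minimal sketch of performance rating in time study.
# A worker deliberately pacing at 80% of normal (reverse Hawthorne effect)
# inflates the observed time; the rating deflates it back towards normal.

def normal_time(observed_minutes, rating_percent):
    """Normal time = observed time x (observer's pace rating / 100)."""
    return observed_minutes * rating_percent / 100.0

def standard_time(observed_minutes, rating_percent, allowance_fraction=0.15):
    """Standard time adds an allowance for rest, fatigue and personal needs.
    The 15% default allowance is an assumed figure for illustration."""
    return normal_time(observed_minutes, rating_percent) * (1 + allowance_fraction)

# Observed 10 min at a judged 80% pace -> 8 min normal time.
print(normal_time(10, 80))                    # 8.0
print(round(standard_time(10, 80, 0.15), 2))  # 9.2
```

Rating is a judgment call, which is why the answer above pairs it with a privately prepared draft benchmark as a cross-check.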
  5. The Coefficient of Variation (CV), also known as the Relative Standard Deviation (RSD), is the ratio of the standard deviation of a dataset to its mean, popularly expressed as a percentage. The CV is a useful metric for comparing the variation of two datasets with different means, and it has the advantage of all ratio coefficients: it acts as a "common denominator" when comparing diverse datasets. Relevant features of the CV are that it is independent of the order of values in the dataset and that it is meaningful only when the values are positive. Its applications are many and include:
    1. Evaluating the risk of investments vis-à-vis the return: the lower the CV, the better the risk-return match
    2. Assessing the homogeneity of solid powder mixtures: the closer the CV to the defined norm, the more homogeneous the mixture
    3. Measuring specific properties of chemicals or the proportions of specific materials in mixtures
    4. Calculating the economic disparity of a community or group
    5. Comparing the performance of two batches in a batch-processing industry
    The CV can also be used to test hypotheses, for instance through Levene's test. The general interpretation is that the lower the CV, the lesser the variation relative to the mean, so a lower CV is preferable. While the advantages of the CV are many, one disadvantage is that it is usable only with parameters on a ratio scale, not on interval, ordinal or categorical scales. Further, if a dataset contains both positive and negative values, the mean tends to zero and the CV tends to infinity. And if two datasets measure the same quantity on scales related to one another, their CVs will differ despite the datasets being related (e.g. the CVs of two datasets measuring the temperatures of the same substances, one expressed in Celsius and the other in Fahrenheit).
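The Celsius/Fahrenheit caveat above can be demonstrated directly: the same three temperatures give different CVs on the two scales, because their zero points are arbitrary (they are interval, not ratio, scales). The temperature values are illustrative.

```python
# A minimal sketch of the CV and of its interval-scale caveat.
import statistics

def cv_percent(data):
    """Coefficient of Variation = sample standard deviation / mean, as a %."""
    return statistics.stdev(data) / statistics.mean(data) * 100

celsius = [20.0, 25.0, 30.0]
fahrenheit = [c * 9 / 5 + 32 for c in celsius]  # 68.0, 77.0, 86.0

print(round(cv_percent(celsius), 1))     # 20.0
print(round(cv_percent(fahrenheit), 1))  # 11.7 -> differs, same physical data
```

This is exactly why the CV is restricted to ratio-scale measurements: rescaling with an offset changes the mean without changing the spread proportionally.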
  6. FMEA

    In my humble opinion, no tool has distinct advantages or disadvantages as such. Each tool has unique features which under some conditions act as advantages and under others as disadvantages, and so it is with Failure Modes and Effects Analysis (FMEA). The best way to identify its limitations, and the conditions under which they are experienced, is to do an FMEA on FMEA, the results of which are below.

    1. Limitation: Teams that use FMEA may not always be sufficiently trained on FMEA, or may not know the domain adequately.
       Counter-measure: Establish and institutionalize norms for participation in FMEA.
    2. Limitation: Not all failure modes may be covered.
       Counter-measure: Appoint well-experienced facilitators to run the FMEA session and have them guide the team through multiple perspectives.
    3. Limitation: Prioritization tends to draw attention only to certain failure modes and may not eliminate all of them.
       Counter-measure: Consolidate low-priority failure modes into a repository and review them periodically to ensure they do not drop off the radar.
    4. Limitation: The FMEA team may misjudge scope, either biting off more than it can chew or nibbling without achieving anything substantial.
       Counter-measure: Present the initial scope in manageable blocks and work on one block at a time.
    5. Limitation: Yesterday's FMEA can become obsolete today.
       Counter-measure: Build a review frequency into the FMEA procedure and follow it. Include FMEAs and their reviews in the scope of periodic process audits or Management Systems audits.
    6. Limitation: Certain factors may be rated the full 10 out of 10 and never change.
       Counter-measure: When discussing actions to reduce the RPN, spend adequate time on reducing all three factors, viz. Severity, Occurrence and Detectability.
    7. Limitation: The FMEA can "run away" with an uncontrollable and unmanageable number of failure modes.
       Counter-measure: Whenever a failure mode is proposed for discussion, make the first step of the discussion a review for duplication and overlap, then decide whether to record it as a separate failure mode or merge it with another.
    8. Limitation: Templates are too cumbersome to fill in and discuss.
       Counter-measure: There is no need to stick to the classical template. Review it cell by cell and customize it to requirements, buy computerized templates, or develop your own FMEA application.
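The RPN prioritization underlying limitations 3 and 6 above can be sketched as follows. RPN = Severity x Occurrence x Detectability, each conventionally rated 1 to 10; the failure-mode names and ratings below are illustrative assumptions.

```python
# A minimal sketch of Risk Priority Number (RPN) ranking in an FMEA.
# Illustrative (hypothetical) failure modes with S/O/D ratings on a 1-10 scale.

failure_modes = [
    ("Seal leaks under pressure", {"S": 9, "O": 3, "D": 4}),
    ("Label misprint",            {"S": 3, "O": 6, "D": 2}),
    ("Fastener over-torqued",     {"S": 6, "O": 4, "D": 5}),
]

def rpn(ratings):
    """RPN = Severity x Occurrence x Detectability."""
    return ratings["S"] * ratings["O"] * ratings["D"]

# Rank by RPN, highest first. Note the limitation in action: the severity-9
# seal leak ranks below the severity-6 fastener issue, so Severity also
# needs its own review rather than relying on the RPN alone.
for name, ratings in sorted(failure_modes, key=lambda fm: rpn(fm[1]), reverse=True):
    print(f"{name}: RPN = {rpn(ratings)}")
```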
  7. Lead Time, Cycle Time

    Lead Time and Cycle Time are as different as chalk and cheese. Cycle Time is the total time elapsed from when raw material enters the production process until the finished product is ready for shipment. Lead Time is the total time elapsed from when a customer expresses a need to when that need is satisfied: the total time a customer must wait to receive a product or service after placing an order, and the time quoted to customers (usually in days or weeks rather than hours or minutes) between the order date and the shipment date.

    Lead Time is the sum of all the cycle times and waiting times of a process. The Lead Time clock starts when a request is made and ends at delivery; the Cycle Time clock starts when work begins on the request and ends when the item is ready for delivery. Cycle Time is the more mechanical measure of the process, while Lead Time is what the customer experiences and may include the cycle times of multiple internal processes, the delays between them and any pending backlog. Lead Time is what is communicated to customers; cycle times are used to manage internal business processes. Since Cycle Time is a subset of Lead Time, Lead Time cannot be shorter than Cycle Time, and in processes that have not been leaned out it is a lot longer. Lead Time is relevant from the business perspective; Cycle Time is what the team can improve by changing its process. To reduce Lead Time one can and should reduce Cycle Time, but often the waiting time before work starts and between process steps is very high, so this time should be reduced as well. Cycle Time is an internal metric, possibly invisible to the customer, signifying the effort spent on making the product; Lead Time is an external metric, visible to customers, signifying the speed of delivery.

    The confusion between the two probably stems from loose, interchangeable usage of the terms by those not yet sensitized to the differences. Questions like "What is your lead time to start working?" can confuse, as can using Cycle Time alone to plan production runs and make delivery commitments to the customer. Another source of confusion could be the hitherto unfulfilled aspiration of making Lead Time equal to Cycle Time, i.e. without any waiting time before, during or after the process. Without leaning out the process and bringing Lead Time close to Cycle Time, some may start planning with the two used interchangeably, which also embeds the wrong understanding in others.
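The relationship described above reduces to one line of arithmetic: lead time = queue before work starts + the cycle times + the delays between steps. The step times below are illustrative assumptions, in hours.

```python
# A minimal sketch of the lead time / cycle time relationship.
# All figures are hypothetical, in hours.

queue_before_start = 48.0            # order waits in a pending backlog
cycle_times = [2.0, 3.5, 1.5]        # value-adding work at each process step
delays_between_steps = [12.0, 6.0]   # waiting between steps

cycle_time = sum(cycle_times)
lead_time = queue_before_start + cycle_time + sum(delays_between_steps)

print(cycle_time)  # 7.0  -> what the team can improve by changing its process
print(lead_time)   # 73.0 -> what the customer experiences
# Lead time can never be shorter than cycle time; leaning out the process
# attacks the 66 hours of waiting, not just the 7 hours of work.
```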
  8. Zero Defects

    The journey towards workforce involvement, improvement and goal setting can be successfully completed with the following strategy.

    1. IPL - Idea Premier League
       a. An IPL of a different kind, the Idea Premier League, is constituted with its own Management Committee, sub-committees, budgets, infrastructure etc.
       b. The IPL Management Committee is equipped with adequate authority to take and implement process-related decisions, and is sufficiently staffed for process awareness and the authority to change processes.
    2. Communication and socialization of the IPL
       a. At regular intervals, all staff in the company are addressed through road shows, audio-visual presentations, brochures, pamphlets, newsletters etc.
       b. Every staff member is made to understand the benefits arising from improvements and the life cycle of an improvement idea, from ideation to the auditing of benefits.
    3. KRAs (Key Result Areas) with improvement targets for all
       a. All Management staff in the organization (Supervisor and above) have, as part of their KRAs, improvement-related targets in terms of monetary savings, number of ideas from their Department or Line, extent of staff involvement and so on.
       b. These KRAs carry sufficient weight to mandate focus.
       c. A Manager gets his target from his Business Unit Head and allots targets among his Supervisors.
    4. Ideation
       a. This forum is open to all staff below the level of Supervisor, i.e. only the workforce is eligible to participate in the IPL.
       b. Any eligible staff member with an idea to improve quality or productivity can drop it in the nearest Idea Box, post it on the IPL portal, or mail it to the IPL mail id, in the relevant format available in both hard and soft copies.
       c. The format includes a description of the problem, the proposed solution, the expected benefits etc.
       d. Once an idea is received, it is acknowledged by the IPL Management Team, which then processes it under its governance procedure.
    5. Strong idea governance system
       a. A dedicated IPL Evaluation Team, with representatives from all Functions, reviews each submitted idea and submits an Idea Implementation Feasibility Report (IIFR) within 24 hours of receipt, stating whether the idea can be implemented, with the required justification. This involves, if required, a discussion with the ideator to understand the idea better. The evaluation also rules on duplication and plagiarism of ideas.
       b. Once the IIFR is received, IPL Management reviews it and decides on implementation and its time frame.
       c. Wherever investment is required (not related to automation), IPL Management decides whether it can be accommodated in the current year's IPL budget or must be included in the next year's budget.
       d. The idea, with IPL Management's decision, is passed to the IPL Implementation Team, which plans and executes the implementation.
       e. Implementation can include alterations to equipment and tooling, investment in and installation of new equipment and tooling, changes to processes, flow or layout, additional periodic reviews, short-term increased inspection to verify a change, and so on.
       f. Whenever an idea is implemented, the line Supervisor sends a weekly report on the results of the change.
       g. At a planned monthly meeting, IPL Management reviews the results of implemented ideas, clears some as successful, marks some for further testing and, where relevant, holds implementation for further discussion.
       h. The status is periodically communicated to the ideator.
       i. At regular intervals, the Finance team audits the benefits accrued from improvements and publishes financial dashboards detailing them.
    6. IPL Awards
       a. Monthly, quarterly and annual awards are instituted and given away.
       b. Awards include monthly "Best Innings" awards for the idea with the highest benefits, quarterly "Man of the Match" awards and annual "Man of the Series" awards for the maximum number of ideas and maximum benefits.
       c. These awards are held at the Line, Departmental and Organizational levels.
       d. Awards are carefully selected to be of practical value to the awardee as well as something to be longed for.

    The afore-mentioned approach, which uses a bit of options A, B, E and F, would help the organization reach a state where continual improvement is embedded in its DNA.
  9. The effectiveness of any tool depends on the user and the method of use, and the Fishbone Diagram (FBD), or Cause and Effect Diagram (CED), is no exception. No tool can achieve anything not intended by its user; a tool can only offer the user different perspectives for a decision, and the user is quite free to junk the information the tool provides and go by his or her feeling. Misuse of a tool also includes erroneous use, whether a genuine error or an intended misuse.

    Means to a pre-conceived end
    The most common misuse of the FBD is to doctor the various bones so that all the root causes that emerge are in line with decisions already taken. Logic is thrown to the winds as each immediate and root cause is written to justify the decision.

    Effects instead of causes
    Another common mistake is to reverse the plotting of causes into a hierarchy of effects: rather than progressing from the effect back to the root cause, the diagram progresses through subsequent effects.

    Incorrect or inaccurate problem statement
    A guess or assumption is made when documenting the problem statement or effect. If the effect itself is not correct, of what quality can the supposed "root causes" be?

    Too much guesswork in the causes
    While all proposed causes are, to begin with at least, potential causes, if too many of them are pure guesswork or assumptions without a validation plan, the likelihood of the problem being solved is next to nothing.

    Tracing back from the root cause
    After reaching the root cause by relentlessly asking "Why?", a comfort syndrome results in picking an immediate cause rather than the root cause.

    Using solutions as causes
    To prepare a justification for investment in a solution, solutions end up being prefixed with "lack of": lack of automation, lack of maintenance support, etc.

    Giving up after identifying one root cause
    Either from the excitement of having identified a root cause or from sheer laziness, it is possible to forget the basic tenet that one problem may have multiple root causes.

    Confusing correlation with causation
    Mistaking commonalities across instances of the problem for the cause of the problem itself is another common error.

    Working to a strict time deadline
    While no activity can go on endlessly, it is not possible to brainstorm and think through all root causes in a hurry, or while wanting to close the meeting within a particular time. Many participants take quite some time to warm up, and by the time they are ready to contribute, the meeting is over.

    Criticizing proposed root-cause ideas
    It takes free, unfettered thinking to arrive at all the root causes. If the participants' thought process is stifled for any reason, the fishbone will not be complete and thus not effective.

    Holy cows
    Certain people or processes in the organization are sacrosanct and cannot be touched, let alone changed, whatever the consequences. All root cause analyses stop at that point.

    "Out of control" causes
    To be on the safe side and avoid ending up with responsibilities, the fishbone analysis is guided towards causes entirely outside organizational control, so that no one in the organization is tasked with implementing corrective action.

    People-related causes
    Documenting clichéd people-related causes like "human error" (are animal errors possible?) or "forgot" (is the process so dependent on memory?) will not help resolve the problem.

    Focussing on "who" rather than "what"
    A classic distraction is to focus on who is the root cause instead of what.
  10. Segmentation

    Segmentation is the process of dividing a population into distinct subsets, or segments, whose members behave in the same way or share similar features; because each segment is homogeneous, its members are likely to respond similarly. For effective segmentation, segments need to be measurable (the very purpose is to measure effects within and between segments), identifiable (mandatory if all data is to be correctly segmented), accessible (the effort of segmenting should not exceed the benefit obtained by solving the problem), actionable (the segments arrived at must be practically feasible to work on) and large enough to be effective (each segment should have a critical mass). The segments need to be based on a logic that relates to the problem being investigated or the business goals being pursued. Questions such as the following can drive the analysis of the segmented data:
    · Is there one defect category that occurs more frequently than others?
    · What factors contribute the most to the variation in the Project Y?
    · Do results differ across factors?

    Segmentation, sub-segmentation, cross-segmentation and matrix segmentation divide the data population into homogeneous segments, and multiple segmentations can be used to isolate the problem transactions that give us a handle on solving the problem. The criteria can be the natural transaction categories of the process or specially created criteria. In the former case, segmentation along transaction categories, the population is split into segments and the extent to which the transactions in each segment are impacted by the problem is measured. This identifies the segments most adversely impacted, i.e. the problem segment or segments. Then, by identifying the characteristics and features of these problem segments that differ significantly from those of the unimpacted segments, it is possible to identify the characteristics or features most associated with the problem. These could be the immediate causes of the problem, which can then be root-cause analysed and appropriate controls implemented. This avoids shooting in the dark when searching for the root cause: segmentation along transaction categories narrows down the areas to be analysed, saving time, effort and money in the problem-solving exercise.

    In the latter case, segmentation along specially created criteria, the criteria can be formulated along suspected or potential root causes. By segmenting the data population along potential root causes, the segments most impacted by the problem, and with them the root causes themselves, can be identified. By almost directly identifying the root cause, even more time, effort and money are saved; here, data segmentation is actually being used to verify root causes. Going further, segmentation analysis also assists in planning and implementing different corrective actions for different segments, contributing effectively to improvement. Repeating the segmentation after the improvements and measuring the problem's impact reveals the effectiveness of the corrective actions. Thus segmentation analysis supports the preparation for, conduct of, and verification of the effectiveness of root cause analysis.
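Segmentation along transaction categories, as described above, can be sketched as a defect rate per segment: the worst segment is where root cause analysis should start. The transaction records and category names below are illustrative assumptions.

```python
# A minimal sketch: segment transactions by category and rank segments by
# defect rate to isolate the problem segment. Data is hypothetical.
from collections import defaultdict

transactions = [
    {"category": "export",   "defective": True},
    {"category": "export",   "defective": True},
    {"category": "export",   "defective": False},
    {"category": "domestic", "defective": False},
    {"category": "domestic", "defective": False},
    {"category": "domestic", "defective": True},
    {"category": "transit",  "defective": False},
    {"category": "transit",  "defective": False},
]

counts = defaultdict(lambda: [0, 0])  # category -> [defects, total]
for t in transactions:
    counts[t["category"]][0] += t["defective"]  # bool adds as 0 or 1
    counts[t["category"]][1] += 1

# Rank segments worst-first; root cause analysis starts with the top segment
# instead of the whole population.
for cat, (bad, total) in sorted(counts.items(), key=lambda kv: -kv[1][0] / kv[1][1]):
    print(f"{cat}: {bad}/{total} defective ({bad / total:.0%})")
```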
  11. Quality Costs

    Need to strike an equilibrium between these costs There is definitely a need to strike a balance between Prevention & Appraisal Costs on one side and Internal Failure & External Failure Costs on the other. Any investments in an organization need to be justified, preferable in a tangible manner. The best way to justify investments on Prevention & Appraisal Costs would be to assess the Internal and External Failure Costs avoided by the investment. Approach to reach the best scenario Data collection, consolidation, Analysis and Actionable insights from the analysis would be the way to go on optimising Cost of Quality. For all existing Prevention and Appraisal activities, there need to be costs assessed both one time and also recurring. Prevention and Appraisal actions could be training of people, process embedded checks, testing at various points of the process of components, sub-assemblies and assemblies etc. These costs will need to be assessed for a line and also per product. These costs will need to be tracked and reported monthly. Concepts of Depreciation of Facilities and Net Present Value etc. will need to be used. The Finance Team would be able to help in this regard. In addition to the above, the results from the Prevention and Appraisal actions will need to be used to assess their effectiveness in eliminating or preventing poor Quality products from being produced and from reaching the market. The costs of repair and rework and retesting will need to be tracked and periodically reported. The feedback from customers including complaints will also need to be used in assessing the effectiveness of the preventive and appraisal actions. Using the two, estimates will need to be made of the quantity of poor quality products that reach the market. The historical data regarding customer rejections will obviously need to be used here. From this, the cost of recall including penalties, fines, repair, rework, rechecking and re-dispatch will need to be computed. 
Additionally, the cost of business already lost due to poor Quality will need to be assessed from past data. For the future, the cost of business that could be lost needs to be predicted using realistic estimates. A doomsday-prophet approach should be avoided here, as it would artificially inflate the external failure costs by assuming that every faulty product reaching the customer results in cancellation of all orders. The cost of potential business that could be lost will need to be assessed considering the customer’s brand equity, the likelihood of the customer increasing business in case of issue-free delivery, and so on. Any estimated loss of business should be weighted by the probability of the event happening, which makes the assessment more realistic.

Now the return on investment can be calculated as:

    External and Internal Failure Costs Avoided
    -------------------------------------------
     Prevention and Appraisal Costs Incurred

This needs to be periodically computed for different product or service lines, different customers, different products etc., and continually reviewed to see if the return is going too low. If, for reasons such as internal expertise developed, improved technology used or agreements signed with customers, the occurrence of defective products is reduced, the probability of such a product reaching the customer is reduced, or penalties payable to customers are reduced, then the investments in Prevention and Appraisal actions need to be reviewed and, if required, optimised.

Best Scenario

The best scenario would be one in which all Costs of Quality are dispassionately reviewed in terms of tangible benefits and not for any sentimental or passion-related reasons. If Failure Costs reduce, it needs to be seen as a success of the Prevention and Appraisal investments.
If the existing Prevention and Appraisal costs are consistently yielding results in avoiding Internal and External Failure Costs, then a calculated decision needs to be made on conducting a pilot with partly optimized investments, e.g. reduced sampling for testing. If the experiment is successful, the organization should optimize its investments in Prevention and Appraisal. The overall motto should not be “Quality at any cost”, but “Quality at a cost”.
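As a minimal sketch of the return-on-investment ratio described above (the monthly figures are purely illustrative, not data from any real programme):

```python
def coq_roi(failure_costs_avoided, prevention_cost, appraisal_cost):
    """Return on Cost-of-Quality investment:
    (Internal + External Failure Costs avoided) /
    (Prevention + Appraisal Costs incurred)."""
    return failure_costs_avoided / (prevention_cost + appraisal_cost)

# Hypothetical monthly figures for one product line:
# 150,000 in failure costs avoided against 50,000 invested.
roi = coq_roi(failure_costs_avoided=150_000,
              prevention_cost=40_000,
              appraisal_cost=10_000)
print(roi)  # 3.0 -- each unit invested avoids three units of failure cost
```

Computed per line and per customer each month, a falling value of this ratio is the trigger the text describes for reviewing and optimising the Prevention and Appraisal spend.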
12. An old proverb goes, “One man’s meat is another man’s poison”. This is true of hypothesis testing as well. The following situations are examples where a Type 1 error in one situation would be a Type 2 error in another, where “situation” includes the conditions, environment, organization, point in time etc.

Improved Technology

In a factory, a process is presently run using Technology A. The organization is upgrading this process to Technology B, so that all products previously produced through Technology A would be produced through Technology B. This progress is tracked by a KPI: the proportion of product volume produced through Technology B. While this upgrade is progressing towards completion, another wonder technology, Technology C, starts doing the rounds, and the organization’s Management decides to bite the Technology C bullet. The progress of this second upgrade is tracked using the same KPI, “proportion of product volume produced through Technology B”. While earlier the objective was to maximise this KPI, now the objective is to minimize it. In this case, a Type 1 error in the A-to-B upgrade would be a Type 2 error in the B-to-C upgrade.

As an example of the above, consider the manufacture of plates for chains. The traditional method is to blank sheets first and pierce the blanks next; this is Technology A. The first improvement is a better blank layout, which uses the sheets better, wastes less material and reduces cost; improved blank layout is Technology B. The next improvement is a progressive tool, which pierces the sheet and then blanks plates in the pierced condition; this is Technology C.
Salvage and repair section of a factory

In a typical “Stockholm Syndrome” case, the extent to which the Salvage and repair section in a manufacturing unit is utilized is also an (inverse) measure of the overall quality of production in the factory: if the Salvage team is kept busy, the factory is producing too many out-of-specification products. If improvements were being implemented in the Salvage section and a hypothesis test were conducted, a Type 1 error on a Salvage section parameter would be close to being a Type 2 error for the overall organization.

Supplier–Customer contradictions

Some processes, components or services may be outsourced only as an exception, when the customer organisation is facing a problem. This means that the more that particular service, process or component is outsourced, the more problems the organization is facing. While for the vendor the volume produced is a positive KPI, for the customer organization it is a negative one. In this situation also, a Type 1 error for the vendor would be a Type 2 error for the customer.

Public Service vs Private Enterprise

As part of a service-spirit-driven, health-driven and value-driven campaign, a state or local government implements various measures to supply clean drinking water to all its residents. It does this by maintaining its natural water resources well, implementing rainwater harvesting, strictly controlling effluent disposal into water bodies, purifying the water supplied through traditional and modern methods, and various other administrative and legal measures. This results in all residents getting good-quality drinking water, and in a decrease in the number of patients suffering from water-borne diseases. But the success of the same campaign also results in a reduction in the sales of bottled water and of various types of water purification equipment.
If the government’s campaign is tracked using (say) the proportion of people not getting clean drinking water, a Type 1 error here could actually be a Type 2 error for those organizations that are impacted adversely by the campaign’s success.

Others

In addition to the above, this “phenomenon” can be observed whenever there is a fundamental difference between the motives of two different entities.
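The directional flip described above can be illustrated with a one-sample proportion z-test (a sketch using the normal approximation; the 90% baseline and the sample figures are hypothetical). The same data that constitute a significant “improvement” signal for one party are evidence of an adverse shift for the other, because their alternative hypotheses point in opposite directions:

```python
from math import sqrt
from statistics import NormalDist

def one_sided_p_value(successes, n, p0, alternative):
    """One-sample z-test for a proportion (normal approximation).
    alternative='greater' tests H1: p > p0; 'less' tests H1: p < p0."""
    p_hat = successes / n
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
    nd = NormalDist()
    return 1 - nd.cdf(z) if alternative == "greater" else nd.cdf(z)

# Hypothetical: 930 of 1000 units produced on Technology B, 90% baseline.
# During the A-to-B upgrade, exceeding 90% is the hoped-for improvement...
p_up = one_sided_p_value(930, 1000, 0.90, "greater")   # ~0.0008, significant
# ...but during the B-to-C upgrade, the same share on B is an adverse signal,
# and the very same data give no support at all for the desired decrease.
p_down = one_sided_p_value(930, 1000, 0.90, "less")    # ~0.9992
```

Wrongly rejecting the null in the first test (a Type 1 error for the A-to-B tracker) corresponds to missing a real adverse shift in the second (a Type 2 error for the B-to-C tracker), which is the flip the examples above describe.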
  13. 8D Problem Solving

Quite often, the use of a methodology is indirectly and perhaps unintentionally governed by its history and the original purpose for which the approach was created. Such is the case with Six Sigma and 8D. According to Eileen Beachell, one of those involved in documenting the 8D approach originally at Ford, “the 8Ds are a well-defined linear logic methodology to address chronic problems with the purpose of changing the management procedures that allowed the problem to occur in the first place”. On the other hand, Bill Smith evolved Six Sigma at Motorola from a study of the relationship between manufacturing defects and field reliability, which resulted in a thrust to improve process capability to the point that no more than 3.4 defects per million opportunities would be created when combined with the respective design specifications. The method, of course, involved the use of statistical tools. The one key difference between these two methodologies, in spite of their other similarities, is the “Implementation and Verification of Interim Containment” step in 8D, which is not part of the Six Sigma methodology. In certain problem-solving situations, there is a really burning problem and a need for very quick yet considered action, involving multiple skill inputs and multiple stakeholder representation. The action should also include damage control in addition to a permanent solution. There may not be sufficient time to form a cross-functional team, train key people in Six Sigma, go through the DMAIC phases, complete tollgate reviews, run a well-controlled and monitored pilot and complete a full-fledged Six Sigma project. In such cases, the 8D approach may be easier to follow, arrest the adverse impacts of the problem, resolve the issue quickly and keep stakeholders satisfied. It need not necessarily be better than the Six Sigma approach, but in this situation it may be that little bit easier to do.
A smaller team could be formed quickly within the closest circle of influence and contribution. Since the team members are already familiar with one another and with the process, they can get cracking as a team pretty quickly. To begin with, the problem could be described in detail and a quick correction implemented. This would satisfy the stakeholders for the time being, as the adverse impacts of the problem have been contained. The team could then do a thorough analysis of all potential root causes, identify the relevant ones, identify likely escape routes, and design and implement corrective and preventive actions. Additionally, the last scenario of the enhanced Kano Model is the “Reverse” trend, in which customers who are out to prove their capability by demanding product features that cannot be provided get dissatisfied if their requirements are fulfilled. Something similar can happen when implementing Six Sigma in certain organizations. The completeness of the Six Sigma approach, the structure in every phase, the need to be aware of and use some basic statistics, and sometimes its sheer success and popularity can occasionally create some irritation in people. They may not really want to be involved in a Six Sigma project and would be interested in alternative structured improvement approaches. Such people may be satisfied with the 8D approach, which does share some of the positives of the Six Sigma approach.
  14. Control Limits

The role of a Lower Control Limit on a defects or defectives control chart is a very relevant question, as who would not like a process with a defect or defective rate as low as possible? Anyone would probably be happy if some data points fell below the Lower Control Limit. It would be an “Out of Control” problem that is good to have: an opportunity for the process owner to investigate why the process’s defect or defective rate went below the control limit, identify whether any best practices had been effective, and then replicate those practices elsewhere. Yet there could be situations where the Lower Control Limit becomes relevant for other reasons.

Outsourced Process – you have to meet the requirement, not exceed it

There could be an outsourced process where the customer requires the vendor to inspect and remove defects or errors to the levels the customer has agreed with the end user. By good process control practices and by using the right methodology and equipment, the vendor may be able to bring the error rate even below the agreed limits. The customer could react in two different ways. He may accept the output quietly and leave it. Or he may have other points to worry about: he may feel that if the vendor’s good work in bringing the error rate far below the agreed limits is accepted and acknowledged, let alone appreciated, the vendor may use such events to negotiate a higher rate and increase costs, which, as far as the customer is concerned, adds no value, since his contract with the end user is at the higher defect rate. Therefore, if the customer control-charts the vendor’s performance, he would need the Lower Control Limit to tell the vendor that his process is “not in control” and that his “performance has to improve”.
The customer may also be worried that the vendor, buoyed by this acknowledged performance, may quote it to other potential customers, who would be the customer’s own competitors, get more business, become less dependent on him, and so on. Furthermore, the customer may be worried that if he “spoils” his end user with such “Super-Quality” deliveries, the end user may get used to this and then start cribbing when deliveries are within the agreed defect rates but not significantly better. To avoid this, the customer may prefer to stick to the agreed norms.

“Negative” Lower Control Limit

In another scenario, the Lower Control Limit, when calculated, could be negative. Obviously it is not possible to have a negative defect or defective rate, as this would mean that when the process is run, defects in the input material or information are removed. A negative calculated Lower Control Limit could mean that the process is occasionally capable of operating at zero-error or zero-defect levels. When a process has a positive Lower Control Limit, this may mean that the process in its present form is not capable of zero-defect production and will need considerable improvement. While in the above two situations the Lower Control Limit may become more relevant than usual, the following Quality Story makes interesting reading in the context of the Lower Control Limit.

Quality Story

An American firm scouting globally for an automotive spare signed a deal with a Japanese firm at 25 cents per piece, including all packing, transport, taxes, duty etc., and placed an initial order for three million parts. Wanting to impress their vendor with how strict their quality standards were, they added, “We accept just three defects for every thousand parts”. The order was delivered as per the agreed schedule, but accompanied by a bill for $750,900. It did not take the American accountants very long to figure out that the bill was $900 more than what was agreed.
A bit perturbed about this (especially since it was believed that sticking to deals was part of both Japanese tradition and Japanese management practice), the firm rushed a cable to the Japanese supplier, requesting an explanation. Back came a letter from the supplier: “You had ‘asked’ for 3 defectives for every 1000 parts. At this rate, you will require 9000 defectives for three million parts. We have made extra efforts to produce the 9000 defectives, which works out to an additional 10 cents per piece. The extra $900 is on account of this!”
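The “negative Lower Control Limit” situation above can be seen directly in the standard p-chart limit formula, p̄ ± 3·√(p̄(1−p̄)/n): when the computed LCL comes out negative, it is conventionally clamped to zero, so no point can ever fall below it. A small sketch (the defective rates and subgroup sizes are illustrative):

```python
from math import sqrt

def p_chart_limits(p_bar, n):
    """Control limits for a p-chart (proportion defective),
    subgroup size n, process average p_bar."""
    sigma = sqrt(p_bar * (1 - p_bar) / n)
    ucl = p_bar + 3 * sigma
    lcl = max(0.0, p_bar - 3 * sigma)  # a negative raw LCL is clamped to zero
    return lcl, ucl

# 2% defective with small subgroups: the raw LCL is negative, so it is set
# to 0 and the chart can never signal "unusually good" on the low side.
print(p_chart_limits(0.02, 100))    # LCL = 0.0, UCL ≈ 0.062
# Larger subgroups give a positive LCL, so a below-LCL point (a "good"
# out-of-control signal, or the customer's lever in the outsourcing
# scenario above) becomes possible.
print(p_chart_limits(0.02, 2000))   # LCL ≈ 0.0106, UCL ≈ 0.0294
```

This is why the text notes that a negative calculated LCL suggests a process occasionally capable of zero-defect output, while a positive LCL means even the best expected subgroup still contains defectives.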
  15. Sigma Level

Long-term and short-term Sigma levels can be calculated from Ppk and Cpk, which in turn are calculated using long-term and short-term standard deviations. Short-term Sigma reflects primarily common causes, while long-term Sigma reflects both common and special causes. Very often it is difficult to assess the long-term standard deviation, as gathering data over a sufficiently long duration can be challenging. The period may extend to many months, during which many things can change, including the market demand, business scenario, departmental and organisational leadership, observers, sponsorship of the study and so on. Therefore, to cut the lead time for the long-term Sigma assessment, the relationship between short-term and long-term Sigma can be used. Moreover, when setting a target for any process, two things need to be considered: the target under standard environmental conditions, and the changing environmental conditions which may result in variation. Even highly stable processes may, over an extended period of time, feel the impact of changing environmental conditions. These changes need to be balanced by a compensation factor to ensure that the long-term target is met; the short-term target would therefore be the long-term target plus a compensation factor. This compensation factor was empirically arrived at by Motorola, under some assumptions, as approximately 1.5 Sigma, originally referred to as the “Long Term Dynamic Mean Variation”. Thus, a process operating at 3.4 DPMO would be at a short-term Sigma level of 6, but in the long term would be only at a Sigma level of 4.5. For a process to be at a long-term Sigma level of 6, it would need to operate at about 2 DPBO (Defects Per Billion Opportunities).
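The 1.5-sigma conversion above can be sketched numerically with the standard normal distribution. This assumes the usual one-sided-tail convention for the 3.4 DPMO figure, and counts both tails for the unshifted defects-per-billion figure:

```python
from statistics import NormalDist

def dpmo_from_short_term_sigma(sigma_st, shift=1.5):
    """Long-term DPMO implied by a short-term Sigma level, applying the
    Motorola 1.5-sigma shift (one-sided tail area convention)."""
    z_long_term = sigma_st - shift
    tail = 1 - NormalDist().cdf(z_long_term)
    return tail * 1_000_000

# Short-term Sigma 6 -> shifted z of 4.5 -> the familiar 3.4 DPMO.
print(round(dpmo_from_short_term_sigma(6.0), 1))  # 3.4

# An unshifted (long-term) Sigma level of 6, counting both tails, gives
# roughly 2 defects per billion opportunities, as quoted above.
dpbo = 2 * (1 - NormalDist().cdf(6.0)) * 1e9  # ≈ 1.97 DPBO
```

The same function applied to other short-term Sigma levels reproduces the standard conversion tables, e.g. Sigma 4.5 short term corresponds to about 1,350 DPMO long term.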