Excellence Ambassador
About mohanpb0

  • Rank
    Advanced Member

Profile Information

  • Company
    M/s CMA CGM Shared Service Centre India
  • Designation
    Director (Performance Management and BCMS)
  1. As the real benefit of a classroom training session is the fulfilment of the trainee's objectives, a C-SAT survey conducted after an appropriate lead time, allowing the benefits of the training to kick in, would be the best choice. For example, for a Green Belt training, a survey conducted after 3 to 6 months (roughly the time required to initiate and complete a Green Belt project) may be the most authentic measure of satisfaction. However, as a relatively quicker measure, the satisfied (or otherwise) trainee is very likely to keep talking about the training at work, at home, on social media and so on, which would result in more people enrolling (or not) for the same or similar training programme. Therefore, NPS would rank alongside C-SAT as the first choice, the only difference being that while C-SAT is done after a lead time, NPS assessment can start soon after the training, or in some cases even after the first day. As a contra-metric to NPS, Churn, in terms of registered trainees opting out or interested trainees not registering, would be the next best measure of customer satisfaction. The next preferred measure would be CAC, as the trainee potentially has multiple training options and it takes a well-planned, judiciously funded campaign to first reach and then convince the trainee to opt for the programme; but this is relatively less likely to be relevant in a classroom training scenario. With trainees' objectives in attending varying from genuine career development and genuine capability improvement to being forced to attend by their management or meeting a mandatory requirement for some other educational or career option, CES may not be uniformly relevant in all situations and hence would be the last option. The preferred list, in order of preference, would be: C-SAT, NPS, Churn, CAC, CES.
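For concreteness, the two front-runners above differ even in how they are computed. The sketch below uses hypothetical survey numbers and the conventional formulas: NPS from a 0-10 likelihood-to-recommend question, and C-SAT as the share of respondents choosing the top two boxes on a 1-5 satisfaction scale.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def csat(ratings):
    """C-SAT: percentage of respondents rating 4 or 5 on a 1-5 scale."""
    return 100 * sum(1 for r in ratings if r >= 4) / len(ratings)

# hypothetical post-training survey responses
recommend_scores = [10, 9, 9, 8, 7, 6, 10, 3]
satisfaction = [5, 4, 4, 3, 5, 2, 5, 4]

print(nps(recommend_scores))   # 25.0: (4 promoters - 2 detractors) / 8 * 100
print(csat(satisfaction))      # 75.0: 6 of 8 rated 4 or 5
```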
  2. There is something very different about after-sale service: if the sale had been good in terms of both the product and the information provided, there would not have been a need for the customer to come to the outlet again and create a need for after-sale service in the first place. Thus, as there has been some source of dissatisfaction because of which the customer has approached the after-sale service point, measuring KPI No. 2, C-SAT, may not be the best option, and so it would be a natural fifth choice. By the same logic, expecting references from a customer forced to use after-sale service would be futile, and hence KPI No. 1, NPS, would rate fourth. Equally, as customers are not acquired through after-sale service at all, KPI No. 4, CAC, would end up third. Retaining a dissatisfied customer is a real challenge, and as customers who come for after-sale service can by default be expected to be dissatisfied, KPI No. 3, Churn, becomes more relevant at No. 2. On the contrary, in trying to make the best of a bad situation, if we could reduce the pain felt by the customer by minimizing the effort and running around the customer needs to make to get the problem corrected and the issues resolved, it may actually turn the situation around. Therefore, KPI No. 5, CES, would rank as the most suitable measure of customer satisfaction. My final preferred KPIs, in descending order of relevance, would be CES, Churn, CAC, NPS and C-SAT.
  3. Complaint handling aspects, the best practices already followed, and what more could be done:

     1. Simple method
        Best practices followed: ease of making a complaint; crisp and helpful notes to assist the customer; making the customer not feel guilty about complaining.
        What more could be done: an app, downloadable on mobile, using which complaints could be made directly.
     2. Acknowledgement
        Best practices followed: auto-acknowledgement.
        What more could be done: nothing more to add, as long as it is followed by a proper response.
     3. Quick response
        Best practices followed: committed target time.
        What more could be done: nothing more to add.
     4. Well-written response
        Best practices followed: accepting the error; mentioning the refund in the beginning; explaining the action taken; thanking the complainant for the feedback and for the photographic evidence; apologizing again before signing off.
        What more could be done: "pink ladies" could be replaced with "pink lady apples"; abbreviations like "I'm" and "I've" could be expanded; as the mood of the customer is obvious, OCADO could have "apologized for the disappointment" rather than "any disappointment"; in addition to the refund, for helping OCADO trace and correct a failure mode, the customer could be given a free e-voucher for the goods damaged, i.e. six pink lady apples, which could be collected the next time the customer visits any OCADO store.
     5. No quibbles
        Best practices followed: trusting the customer, as no further questions are asked.
        What more could be done: nothing more to add; may need to be reviewed for bigger refunds.
  4. Poka Yoke

    I would like to stick to one example: a form required to be filled in online following elaborate business rules. The complexity necessitated a check of at least the critical, sensitive fields which, if mis-filled, could cause fatal errors. Very soon, the usual pressure of targets over-ruled "old-fashioned" quality intentions, and staff started submitting forms without even a cursory glance at what they had hurriedly keyed in. The Quality team dutifully created a hard-copy checklist to be filled in for every form, which was supposed to make the inputter check each entry before ticking the relevant item in the list. Some sanity was restored and quality stabilized, albeit temporarily. But with the characteristic vigour in beating the system that Operations usually displays, the inputters continued their "Rajdhani"-speed data entry while filling in all the checklists at the end of the shift, long after submitting the forms and committing the erroneous information to the customer. The Quality team got wise to this when customer complaints resumed, and stepped in with random audits at different points of time during the shift, covering various combinations of transaction types, time of day, day of the week, staff and so on. The auditor would pick up completed transactions at random and demand the checklist; if the checklist were not available, the staff could face disciplinary action. Again there was a brief lull in customer complaints, which was all too short-lived as the inputters began to take calculated risks. The Quality team then made the checklist online instead of hard copy, which was supposed to ease the job of filling it in, but this proved to be mere wishful thinking. Finally, technology was brought in which prevented the inputter from submitting the form unless the checklist was filled in and submitted. This was effective, but Operations still complained about reduced productivity and rising costs.
More advanced technology was then introduced which used business rules to fill in all rule-driven fields, leaving only those fields requiring thinking and judgment to be filled in manually. The inputters were now given fancier designations like "Validators" or "Integrators" and were sufficiently self-motivated to do their own checks of the manually entered fields, which improved quality.
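The final two stages of the story can be sketched in code. This is a minimal illustration with hypothetical field names and business rules, not the actual system described: submission is blocked until the checklist is complete (the hard poka-yoke), and rule-driven fields are auto-filled so only judgment fields need manual entry.

```python
# Assumed business rules and field names, purely for illustration.
RULE_DRIVEN = {"currency": "INR", "branch_code": "MUM-01"}
JUDGMENT_FIELDS = ["customer_name", "remarks"]

def autofill(form):
    """Later stage: populate every rule-driven field from the business rules."""
    filled = dict(form)
    filled.update(RULE_DRIVEN)
    return filled

def submit(form, checklist):
    """Earlier stage: refuse submission unless every checklist item is ticked."""
    if not all(checklist.values()):
        raise ValueError("Checklist incomplete: submission blocked")
    missing = [f for f in JUDGMENT_FIELDS if not form.get(f)]
    if missing:
        raise ValueError(f"Manual fields not filled: {missing}")
    return "submitted"

form = autofill({"customer_name": "A. Kumar", "remarks": "priority"})
checklist = {"name_verified": True, "amount_verified": True}
print(submit(form, checklist))   # submitted
```

The point of the design is that the check cannot be deferred to the end of the shift: the erroneous form physically cannot reach the customer without the checklist.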
  5. Normal Distribution

    That the “Normal” in “Normal Distribution” or “Normal Data” means “Natural”, rather than the other dictionary meanings like “Ordinary”, “Typical”, “Regular”, “Usual” or “Standard”, becomes clearer if the origin of the so-called “Normal” distribution is traced. Somewhere in the 18th century C.E., a group of mathematicians and scientists in France had been trying for a long time to make sense of a peculiar data distribution they had come across. They realized that one value occurred most often, and that values lesser and greater than this most frequently occurring value occurred at progressively lower frequencies. In other words, as the value decreased below the most frequently occurring value, the frequency decreased, and as the value increased above it, the frequency again decreased. After a few days of research and discussion, they could not come to a conclusion and decided to take a walk in the fresh air to clear their heads. They came to an orchard full of trees bearing ripe oranges. Unable to resist the temptation, they plucked a few oranges and began to enjoy nature’s bounty. One of the group, still thinking of the data, started to keep a tally of the number of seeds in the various oranges he ate, using a twig for a pencil and the mud for a notebook. To his surprise, he found that the seeds in the oranges followed a distribution similar to the one they had been breaking their heads over in the lab for the previous few days. Quickly, he brought this to the notice of the others, who soon confirmed the similarity of the distributions. It struck them that perhaps this distribution was something that occurred naturally. They tested this theory with certain other naturally occurring parameters and concluded that it was indeed correct. For reasons best known to themselves, they chose to name this distribution “Normal”, meaning that such a distribution occurred naturally.
Or perhaps the original French name was simply translated as “Normal”. Whatever the reason, it is now accepted that the “Normal” distribution occurs naturally in many physical, social and biological processes. Therefore, if such measurements are made in a truly random manner, the data collected is expected to be naturally normal. Many a time, an apparently non-normal distribution, when investigated, reveals some man-made cause, like blending two distinct groups into one, or skewed, non-random sampling, and so on. Apart from the usually quoted examples of normality, like the heights and weights of people from a randomly constituted group, even product characteristics from a machine or a cell with untampered settings and without any technological restrictions can be expected to be normal, not just in physical characteristics like length and diameter but also in functional characteristics like strength, power, torque and so on. Additionally, medical parameters like blood pressure and blood sugar are also expected to be normally distributed. In the above situations, any distinctly non-normal distribution would need to be treated as unusual.
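One standard way to see why so many natural measurements come out normal, whatever the historical anecdote, is the Central Limit Theorem: a quantity that is the sum of many small, independent disturbances tends towards a normal distribution regardless of the shape of the individual disturbances. A quick simulation illustrates this:

```python
# Each "measurement" is the sum of 50 small uniform disturbances.
# Individually the disturbances are flat, not bell-shaped, yet the sums
# cluster symmetrically around the mean - the Central Limit Theorem at work.
import random

random.seed(1)
measurements = [sum(random.uniform(-1, 1) for _ in range(50))
                for _ in range(10_000)]

mean = sum(measurements) / len(measurements)
below = sum(1 for m in measurements if m < mean)
print(round(mean, 2))              # close to 0
print(below / len(measurements))   # close to 0.5: symmetric about the mean
```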
  6. Hiring a Lean Six Sigma Black Belt Professional

    YES. “Captaincy is 90% luck and 10% skill. But don't try it without that 10%.” - Richie Benaud, former Australian Test all-rounder, cricket commentator and author.

Ambassador arguments and my responses:

Priyer
Argument: It is necessary to have Lean Six Sigma skills and awareness, but executing a DMAIC or DMADV project is not necessary. The person can also come from a Lean background and apply the experience derived from the methodology cycle followed for Lean projects (12-week or 16-week cycles, or the A3 approach) to be successful in an improvement manager role.
Response: How can a manager without the experience of a complete project add value to his reports and the Six Sigma project team? Even if he is sincere and committed, the inability to add value will show up the manager.

Kavitha Sundar
Argument: As per the previous question, if the organisation does not allow the BB/GB to do a DMAIC project, the opportunity is not given to him to prove his skill set. In such cases, he can only refresh his knowledge, simulate a project experience, and move on to the next firm if there is an opportunity. Hence the BB/GB should be ready to take up a project and prove his skill set at any point in time, even without an opportunity. He can also use a Lean approach instead of DMAIC/DMADV. So, a project using the DMAIC/DMADV methodology should not be a roadblock for his career growth.
Response: This is a different question, and nothing is mentioned about the organisation not allowing a project. Simulation is no substitute for actual experience. If the improvement manager restricts himself only to Lean projects, many improvement opportunities could be lost. By not doing any Six Sigma projects, his career would not develop and the organization would lose out on improvements.

Togy Jose
Argument: I would have preferred to say Yes, but there isn’t a lot of adoption of Lean Six Sigma methodologies by organizations, which means that even if an individual has put a lot of effort into getting certified, he/she may not get to work on a project because the organization is not interested. So, for selecting a BB, an MBB or a highly experienced BB should have an in-depth conversation with the candidate to check for conceptual clarity, aptitude, functional experience, maturity level and so on. Given that in some industries (e.g. consulting) LSS is not encouraged but there are enough high-quality profiles, would we want to be limited by this requirement? Even if someone were to claim having done a project end to end, there is no way to review the data and verify the findings on account of confidentiality, so we would limit ourselves to checking only conceptual clarity anyway. Not every LSS intervention needs to be a project; even a well-timed and well-documented FMEA can help with prioritised corrective action, or a well-documented QFD can help with a well-structured design process. So a certified BB who has done a lot of standalone interventions deserves a chance.
Response: Standards cannot be diluted just because organizations are not following the Six Sigma methodology. The profile would not be that high quality if there were no Six Sigma project experience, so this cannot be a reason for leaving out the requirement. It is possible to understand the truth by repeated questioning. Agreed that not every LSS intervention needs to be a project; tools were, are and will be used by themselves. But without Six Sigma project experience, the main benefits of the Six Sigma methodology are lost.

Atul Dev
Argument: Completion of a Six Sigma project depends on opportunity.
Response: That is fine, but not having had an opportunity still remains a drawback.

Alex Fernandes
Argument: Completion of a full-fledged DMAIC or DMADV project should not be an essential criterion for hiring a Lean Six Sigma Black Belt professional in an improvement manager job role, and this is one heuristic that the fraternity needs to change. It is important for the interviewer to assess the candidate's knowledge, and project completion is a good source to gauge from, but this is not always true. Reasons: 1. Genuineness of projects cannot always be verified. 2. Success of the project cannot be established. 3. The level of participation of the candidate in the project cannot always be checked. In fact, projects could be misleading and could give an upper edge to undeserving candidates. It is important for a candidate to know and apply Six Sigma tools and techniques, and that could be established even without a full-fledged project.
Response: Leading interview questions can reveal the truth; the same applies to each of the three reasons.

Phani Kumar N.
Argument: Since Six Sigma is an approach for process improvement, a person experienced in the area of operation who acquires the skills and techniques to be implemented for process improvement can be a better pick than a person who has handled improvement projects in other areas. However, nowadays doing projects has become a prerequisite for handling the Business Excellence function when it comes to hiring by organizations. I see this as a drawback in the hiring process.
Response: The USP of the Six Sigma methodology is its completeness of approach, and expertise in this completeness is acquired only by completing projects. All other related skills cannot make up for a lack of experience in completing a sufficient number of projects in various roles such as member, leader, mentor and so on.

Rajesh Chakrabarty
Argument: I do agree on the point about a person having experience in the area of operations acquiring skill and technique. This person can definitely be of great help to the project lead/improvement manager, especially for FMEA.
Response: The experience is never complete without the project.

Nazim
Argument: Because identifying improvements does not depend on experience of DMAIC and DMADV, the candidate should be aware of how to find areas of improvement and drive business benefit out of them.
Response: Unless the candidate is experienced in the various nuances of the methodology, which are best learnt by doing different types of projects, the person will never be sufficiently aware.

Arunesh Ramalingam
Argument: In my opinion it should be a "good to have" requirement. I strongly feel the following two aspects should be given more importance: 1. The professional's familiarity with and understanding of Lean Six Sigma concepts, and his attitude and thought process towards the concept of continuous improvement; this would indicate whether the person would be able to identify, initiate and promote improvement activities. 2. The professional's overall job experience; this would highlight his skills in working in a team, leading projects, communicating with management, handling conflicts and so on, which are critical for executing any Six Sigma project. I would agree that a person with prior project execution experience may be more familiar with all the aspects of project execution, but he may not necessarily be a keen promoter of a continuous improvement culture. Also, the ambience under which he completed the projects is an unknown factor; for example, there could have been a high level of support from the management and his team, enabling him to complete the projects. On the other hand, a professional with the right attitude and skills may turn out to be a better option (albeit with some mentoring or a learning gap). The two points identified above could be evaluated with well-drafted, detailed interview questions involving case study analysis and presentations. The completion of a full-fledged DMAIC or DMADV project should be a "good-to-have" requirement, and making it an essential criterion may not be the right thing to do.
Response: Knowing Arunesh’s performance in the competitions, the first reaction, on the lighter side, would be “Et tu, Brute” :-) The question is only about the essentiality of project experience and has nothing to do with attitude. A person’s ability to identify, initiate and promote improvement activities will be complete only with relevant project experience; merely having those skills is insufficient. What is required is using the skills effectively in projects. If the person were not a keen promoter of a continuous improvement culture, the person would not have applied for the role in the first place. As mentioned above, the candidate’s likelihood of success lies in understanding the full potential of the Six Sigma methodology, which is best achieved by performing different types of projects.
  7. Organizations with a full-fledged LSS program can budget for part of their projects to be executed jointly by their own staff and freelancers who may not have such opportunities within their own organizations. The freelancers' roles would be restricted to data analysis and relatively offline work, as the information security of the host organizations needs to be respected. Another option would be to use a tool to simulate a project and solve it within the tool itself. Yet another possibility is solving small issues outside work, e.g. reaching the place of work on time every day without arriving too early.
  8. Efficient, Effective

    "Efficiency" measures the use of inputs for realizing maximum output, a waste process can also be performed efficiently. But nothing is more inefficient than doing efficiently, that which should not be done at all. Effectiveness also brings into focus, the purpose or objective of the process, ensuring that it should be fruitful in the larger scheme of things. The best example of a process that could be efficient without being effective is the “Corrections” or “Repair” processes. The process may be efficiently using resources, but will be effective only if the intelligence it generates leads to its own redundancy.
  9. ARMI / RACI

    RACI (Responsible, Accountable, Consulted, Informed) is slightly older and can be traced back to the Responsibility Assignment Matrix (RAM) introduced in the early 1970s. A variation, RASCI or RASIC (Responsible, Accountable, Support, Consulted, Informed), is also used in certain organizations. Both RACI and RASCI are popular as role documentation tools. ARMI (Approver, Resource Person, Member, Interested Party) is more Six Sigma in origin; it serves as a tool to list and categorize stakeholders in a Six Sigma improvement project. I have been using only ARMI in Six Sigma projects. This could probably be because my initial Six Sigma training included ARMI, and it has simply continued as a practice. Additionally, having been in organizations where Six Sigma is not necessarily a way of life, there was still a need to have people volunteer, or be persuaded, to be part of a Six Sigma project team. In such situations, around a decade and a half ago, there was a need to make it appear a privilege for a person’s name to be associated with a Six Sigma improvement project. The tool for association needed to be both comprehensive and soft. ARMI has this characteristic in that it can boost the ego of people who are empowered as Approvers or recognized as Resource Persons. Even being a Member can appear to be a good opportunity, while being an Interested Party can be taken as a demonstration of the person's commitment. RACI had, and still has, a certain amount of authority in its intent, and it sometimes creates unpleasantness, perhaps even dread, when one finds oneself in the “Responsible” or “Accountable” cell or column. This is even more so if one has not been consulted before being made responsible or accountable for some project, activity or task, and it brings stress to people who may be worried about succeeding in their responsibilities.
While being “Consulted” certainly makes people feel good, being “Informed” rather sidelines the person and perhaps renders the person powerless. Moreover, it is still sometimes difficult to explain the difference between responsibility and accountability to some people. For the above, not necessarily professional, reasons, I have preferred ARMI to RACI.
  10. Specification Limits

    It is indeed a challenge to visualise even the characteristics of a totally new product, let alone decide their specifications. But the following are some options for making the best of an inherently difficult situation.
1. Benchmark on split characteristics: The product as a whole is not available in the market, as this organization is its pioneer. But various features or characteristics of the new product could be available, singly or in combinations, in other products already in the market. The specifications for those characteristics could be used as a basis for deciding the specifications of the same characteristics of the new product. The same specifications need not be blindly replicated; they can serve as starting points, with appropriate adjustments applied.
2. Simulating end-user experience: Whatever information is available from the customer on the intended end-user experience needs to be gathered. The various objectives which the product would achieve, both for the customer and for the end user, need to be assessed, or at least a fair idea of them documented. From the above, a realistic table-top simulation of the end-user experience needs to be executed. When this is done, the organization progresses some way towards understanding what its customer wants end users to experience with the new product, and from this understanding the specifications of the various features giving that experience can be deduced.
3. Technical absolutes: Another option would be to go in for the best that the available technology can offer, and negotiate an appropriate agreement with the customer on absorbing the cost of the initial investment. After prototype production, the specifications can be scaled down from the technical maximums to suit the intents. The option of correcting specifications, as the organization, the customer and the end users become more knowledgeable about the product, needs to be kept alive.
  11. Rolled Throughput Yield

    Based on the calculated Rolled Throughput Yield of 100%, the process cannot be inefficient. But there could be assumptions made in this calculation which, when examined, reveal inefficiencies in the process. These assumptions result in convenient omissions which end up boosting the Rolled Throughput Yield to 100%; sometimes they also lower the perceived design efficiency. Some of these assumptions are:
1. Related to available time: The equipment or plant is available for operation 24 x 7 = 168 hours in a week, but the Management chooses, or there is business only for, running five days a week for two shifts of eight hours each. Within this time of 80 hours per week, the plant operates at 100% Rolled Throughput Yield, but from an overall time-availability angle there is already a loss of around 52% ([168 - 80] / 168).
2. Related to capacity: There could be a batch process operating in a drum with a design capacity of 80 metric tonnes. Due to poor maintenance, some residue of the charge has solidified inside the drum, reducing its usable capacity to 70 metric tonnes. The Rolled Throughput Yield on 70 metric tonnes is indeed 100%, but with respect to the design capacity, the yield loss is 10/80 = 12.5%.
3. Related to scope: For most product types the Rolled Throughput Yield is 100%, but for some product types the yield drops in one or more processes. These product types are scoped out when calculating or presenting the Rolled Throughput Yield.
4. Related to mode of operation: Some low-yield processes get outsourced so that the "organization's" Rolled Throughput Yield remains at 100%. But if the Rolled Throughput Yield is assessed for the end-to-end process, it may go below 100%.
5. Related to changeovers: Changeovers are "non-production" times, so they are not included in the available time. If they were considered the right way, the Rolled Throughput Yield would be less than 100%.
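The arithmetic behind these points is straightforward. The sketch below recomputes the quoted figures: RTY as the product of step yields, alongside the availability and capacity losses that a 100% RTY can hide.

```python
# RTY is the product of the first-pass yields of each process step.
from math import prod

def rolled_throughput_yield(step_yields):
    return prod(step_yields)

print(rolled_throughput_yield([1.0, 1.0, 1.0]))   # 1.0: a "perfect" RTY

# ...yet the overall losses can still be large (figures from points 1 and 2):
availability_loss = (168 - 80) / 168   # ~52.4% of the week unused
capacity_loss = (80 - 70) / 80         # 12.5% of design capacity lost
print(round(availability_loss, 3), capacity_loss)   # 0.524 0.125
```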
  12. Hawthorne Effect

    Any Industrial Engineer who has attempted to conduct time and motion studies in a factory will have experienced first-hand the reverse Hawthorne effect: workers trying hard to make the work stretch to fill the available time, intending to get a smaller quota of daily work. While the practice of rating during a time study can, to some extent, help the observer arrive close to the correct time standard, this alone may not be sufficient to completely nullify the reverse Hawthorne effect. In certain other situations, the fear of being branded slow, uncooperative, or as having what is conveniently, if popularly, called "an attitude problem" can make people work faster than normal. This can also happen when the people being observed are mis-motivated to impress the observer with their speed of working. In both situations, it would be very difficult to arrive at the correct baseline for the process. The work of neutralizing these effects and getting people to work normally cannot start on the floor during the observation for baselining.

Setting the stage: This begins with setting the correct environment throughout the organization for all staff to be themselves, without fear of retribution of any kind. This can happen only if staff are genuinely convinced of it in their heart of hearts, and they will be convinced only if the Management demonstrates its intentions and walks its talk; the right actions are more effective than a million words spoken or written. Before the observation begins, all the staff being observed need to be addressed by the organizational Management. The purpose of the observation and baselining needs to be explained clearly, and any questions asked by the staff during this meeting need to be answered completely and satisfactorily.

Draft benchmark: Once the staff being observed are satisfied and willing to cooperate by being normal during the baselining, a few other things need to be done before the observation. The Management should try to get, through their own network, an idea of the baseline for similar processes in other organizations in the same or similar industrial sectors. Further, the Management should take the help of one trusted staff member or supervisor to have the process executed away from the floor, in private. A sufficient number of transactions, representative of the real-life day-to-day scenario, need to be observed. From both these sources, the Management will get a reasonable idea of the practices and of the time taken for the process they are planning to baseline.

Rating during observation: The observation can then start, with a trained person continually rating the pace of execution of the work. The Management will need to use the draft benchmark to check whether the results returned by the current baselining efforts are close to those assessed earlier. This check needs to be done at least twice a day and can be done as often as once an hour.

Feedback: If the check shows that the Hawthorne effect or its reverse is visible, the baselining efforts need to be temporarily halted, or the results temporarily ignored. The staff involved need to be called in again, the objective of the baselining study re-explained, and their cooperation requested once more. The feedback that they are working faster or slower than normal needs to be shared with them, along with the request to work at a normal manner and pace for the mutual benefit of all concerned.

To summarize, the approach required to neutralize the Hawthorne effect or its reverse would constitute the following:
1. Setting the stage with the staff being studied through an open and transparent discussion, ensuring that each staff member is convinced of the necessity and advantages of working at a normal pace
2. Developing a draft benchmark by:
   a. Getting external benchmarks from other organizations for a similar process
   b. Preparing a draft baseline by observing a reliable staff member perform the process away from the floor
3. Rating the pace of the work during the study
4. Checking the results of the study against the draft benchmark at least twice a day, and giving feedback to the staff being observed if any Hawthorne effect or its reverse is observed
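The "rating during observation" step follows the standard time-study arithmetic: normal time = observed time x pace rating. A minimal sketch follows; the 15% allowance figure is an assumed illustration, not from the text.

```python
def normal_time(observed_minutes, rating_percent):
    """Normal time = observed time x (observed pace / normal pace)."""
    return observed_minutes * rating_percent / 100

def standard_time(normal_minutes, allowance_fraction=0.15):
    """Standard time adds allowances for fatigue, delays, etc. (assumed 15%)."""
    return normal_minutes * (1 + allowance_fraction)

# A worker stretches the job to 12 min (reverse Hawthorne effect);
# the observer rates the pace at 75% of normal, correcting the baseline.
nt = normal_time(12.0, 75)
print(nt)                              # 9.0 minutes at normal pace
print(round(standard_time(nt), 2))     # 10.35 minutes with the 15% allowance
```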
  13. The Coefficient of Variation (CV), also known as the Relative Standard Deviation (RSD), is the ratio of the standard deviation of a dataset to its mean, popularly expressed as a percentage. The CV is a useful metric for comparing the variation of two datasets with different means. It has the advantage of all ratio coefficients in that it acts as a "common denominator" when comparing diverse datasets. Some relevant features of the CV are that it is independent of the order of values in the dataset and that it is meaningful only when the values are positive. The applications of the CV are many and include: 1. Evaluating the risk of investments vis-à-vis the return (the lower the CV, the better the risk-return match); 2. Assessing the homogeneity of solid powder mixtures (the closer the CV to the defined norm, the more homogeneous the mixture); 3. Measuring specific properties of chemicals or proportions of specific materials in mixtures; 4. Calculating the economic disparity of a community or group; 5. Comparing the performance of two batches in a batch-processing industry. The CV can be used to test hypotheses through Levene's test. The general interpretation is that the lower the CV, the lesser the variation relative to the mean, and therefore a lower CV is preferable. While the advantages of the CV are many, one of its disadvantages is that it is usable only with parameters on a ratio scale, not on ordinal, interval or categorical scales. Further, if the dataset consists of both positive and negative values, the mean tends to zero and the CV tends to infinity. And if two datasets contain the same values expressed on related scales, their CVs will differ despite the datasets being related (e.g. the CVs of two datasets measuring the temperatures of the same substances, expressed in Celsius in one dataset and in Fahrenheit in the other).
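The scale-dependence noted in the last example can be shown in a few lines. The sketch below, with illustrative temperature readings, computes CV = s/mean for the same data on Celsius and Fahrenheit scales:

```python
# CV = sample standard deviation / mean. The same physical temperatures give
# different CVs on Celsius vs Fahrenheit, because Fahrenheit shifts the zero
# point - temperature in these units is an interval scale, not a ratio scale.
import statistics

def cv(data):
    return statistics.stdev(data) / statistics.mean(data)

celsius = [20.0, 22.0, 24.0, 26.0, 28.0]            # illustrative readings
fahrenheit = [c * 9 / 5 + 32 for c in celsius]      # same readings, new scale

print(round(cv(celsius), 3))      # 0.132
print(round(cv(fahrenheit), 3))   # 0.076: a different CV for the same data
```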
  14. FMEA

    In my humble opinion, there are no distinct advantages or disadvantages of any tool, or for that matter of anything. Each tool has unique features which act as advantages under some conditions and as disadvantages under others. So it is with Failure Modes and Effects Analysis (FMEA) as well. The best way to identify its limitations, and the conditions under which they are experienced, is to do an FMEA on FMEA itself, the results of which are listed below.
1. Limitation: Teams that use FMEA may not always be sufficiently trained in FMEA or know the domain adequately. Counter-measure: Establish and institutionalize norms for participation in FMEA.
2. Limitation: Not all failure modes may be covered. Counter-measure: Appoint well-experienced facilitators to run the FMEA session and guide the team through multiple perspectives.
3. Limitation: Prioritization tends to draw attention to certain failure modes only and may not eliminate all failure modes. Counter-measure: Consolidate low-priority failure modes into a repository and review them periodically to ensure they do not drop off the radar.
4. Limitation: The FMEA team may misjudge scope, either biting off more than it can chew or just nibbling without achieving anything substantial. Counter-measure: Break the initial scope into manageable blocks and work on one block at a time.
5. Limitation: Yesterday’s FMEA can become obsolete today. Counter-measure: Build a review frequency into the FMEA procedure and follow it; include FMEAs and their reviews in the scope of periodic process audits or Management Systems audits.
6. Limitation: Certain factors may be rated the full 10 out of 10 and may never change. Counter-measure: When discussing actions to reduce the RPN, spend adequate time on reducing all three factors, viz. Severity, Occurrence and Detectability.
7. Limitation: The FMEA can “run away” with uncontrollably and unmanageably many failure modes. Counter-measure: Whenever a failure mode is proposed for discussion, first review it for duplication and overlap, and only then decide whether to record it as a separate failure mode or merge it with another.
8. Limitation: Templates are too cumbersome to fill in and discuss. Counter-measure: There is no need to stick to the classical template; review it cell by cell and customize it to requirements, or buy a computerized template or develop your own FMEA application.
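The RPN-based prioritization that limitation 3 refers to can be sketched as follows. The failure modes and their ratings here are purely illustrative, not taken from any real study; the point is only to show how RPN = Severity x Occurrence x Detectability drives the ranking.

```python
# Minimal sketch of RPN-based prioritization for an FMEA worksheet.
# Each failure mode carries three ratings on a 1-10 scale.

failure_modes = [
    # (failure mode, severity, occurrence, detectability) -- illustrative values
    ("Wrong data entry", 7, 5, 4),
    ("Server outage",    9, 2, 3),
    ("Late report",      4, 6, 5),
]

def rpn(severity, occurrence, detectability):
    """Risk Priority Number = Severity x Occurrence x Detectability."""
    return severity * occurrence * detectability

# Rank failure modes from highest to lowest RPN.
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for name, s, o, d in ranked:
    print(f"{name}: RPN = {rpn(s, o, d)}")
```

Note how a high-severity failure mode (“Server outage”, severity 9) can still rank last on RPN alone; this is why the counter-measure for limitation 3 insists on keeping low-RPN items under periodic review rather than discarding them.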
  15. Lead Time, Cycle Time

    Lead Time and Cycle Time are as different as chalk and cheese, as illustrated below. Cycle Time is the total time elapsed from when raw material enters the production process until the finished product is ready for shipment to the customer; its clock starts when work begins on a request and ends when the item is ready for delivery. Lead Time is the total time elapsed from when a customer expresses a need to when that need is satisfied; its clock starts when a request is made and ends at delivery. Lead time is also the time quoted to customers (more usually in days or weeks than in hours or minutes) between the order date and the shipment date; it is the total time a customer must wait to receive a product or service after placing an order. Lead time is the sum of all the cycle times and waiting times in a process, including any delay due to a pending backlog; it is the figure communicated to customers, while cycle times are used to manage internal business processes.
Cycle time is the more mechanical, internal measure of the process: it may not be visible to the customer, and it signifies the effort spent on making the product. Lead time, on the other hand, is what the customer experiences: an external metric, visible to the customer, that signifies the speed of delivery and may include the cycle times of multiple internal processes plus the delay time between them. Lead time can never be shorter than cycle time, since cycle time is a subset of lead time; in processes that have not been leaned out, lead time is a lot longer. Lead time is relevant from the business perspective, while cycle time is what the team can directly improve by changing its process. To reduce lead time, one can and should reduce cycle time; but often the waiting time before work starts and between process steps is also very high, so this time should be reduced as well.
The reasons for confusion between the two probably include loose, interchangeable usage of the terms by those not yet sensitized to the differences. Questions like “What is your lead time to start working?” can confuse. Using cycle time alone for planning production runs and making delivery commitments to customers adds to the confusion, as does the hitherto unfulfilled aspiration of making lead time equal to cycle time, i.e. with no waiting time before, during or after the process. Without leaning out the process and bringing lead time close to cycle time, some may start planning with the two terms used interchangeably, which can also embed the wrong understanding in others.
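The two clocks described above can be made concrete with timestamps. The dates below are invented purely for illustration: the lead-time clock starts at the customer order, while the cycle-time clock starts only when work begins.

```python
# Illustrative timeline for one order: lead time spans order to delivery,
# cycle time covers only the span when work is actually being done.
from datetime import datetime

order_placed  = datetime(2024, 5, 1, 9, 0)   # customer places the order (lead-time clock starts)
work_started  = datetime(2024, 5, 3, 10, 0)  # item leaves the backlog (cycle-time clock starts)
work_finished = datetime(2024, 5, 3, 16, 0)  # item ready for delivery (both clocks stop)

cycle_time = work_finished - work_started   # 6 hours of actual processing
lead_time  = work_finished - order_placed   # 2 days 7 hours as seen by the customer

print("Cycle time:", cycle_time)
print("Lead time: ", lead_time)

# Lead time can never be shorter than cycle time, since cycle time is a subset of it.
assert lead_time >= cycle_time
```

Here most of the lead time is the two-day wait in the backlog before work even starts, which is exactly the waiting time the answer says should be attacked alongside cycle time.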