Popular Content

Showing content with the highest reputation since 08/22/2019 in all areas

  1. 3 points
    Hello Team, Thank you for asking the two questions per week. The questions that you ask are extraordinary - they make the boring LSS tools look very interesting :) Kudos to all of you for asking such amazing questions. These questions have helped me immensely. While writing the answers to these questions, I have gained better conceptual clarity and understanding of the tools. The competitive spirit makes it even more interesting. I eagerly wait for 5 pm on Tuesdays and Fridays. Obviously I love it when I win (especially on a Friday), but I also feel jealous if I don't win. It motivates me to write better responses for the next question. I also get to learn a lot from the different perspectives in the answers posted to these questions. Sometimes I feel that my answer wasn't the best, but I don't mind as long as I win. Thanks again. Keep throwing the googly questions!!
  2. 3 points
    Misuse of tools and techniques is a very common phenomenon. Misuse of a tool happens primarily for two reasons:
    1. Intentional misuse (better called misrepresentation)
    2. Unintentional misuse (due to lack of understanding of the concept)
    Pareto Analysis, or the 80/20 rule, is a prioritization tool that helps identify the VITAL FEW from the TRIVIAL MANY. 80/20 implies that 80% of problems are due to 20% of the causes.
    Intentional
    1. The top 20% of causes might not be the ones leading to the bigger problems - it is often observed that causes with smaller effects occur more frequently. Applying the Pareto principle then diverts the team's focus to causes that have a smaller effect on the customer, while the actual cause languishes among the trivial many.
    2. Prioritization without keeping the goal in mind - Pareto helps if the significant contributors identified help us achieve the goal. However, it is seldom checked whether the VITAL FEW will achieve the goal or whether a larger number of causes needs to be taken up.
    Unintentional
    1. Going strictly by the 80/20 rule - some people take the 80/20 principle literally. They draw a Pareto plot and blindly apply the 80/20 split. What needs to be noted is that 80/20 is a rule of thumb; the split need not always be 80/20. It could also be 70/30 or 90/10.
    2. Keeping the total at 100 = 80 + 20 - this is one of the most common misunderstandings of the 80/20 rule, the belief that the two numbers should always sum to 100. Again, the rule is empirical in nature, and the split could be 80/15 or 75/25 as well.
    3. Being unclear about the purpose of the Pareto Analysis - Pareto can be used in the Define phase to identify projects and in the Analyze phase to identify significant contributors. In the former, the data is problems and their occurrences; in the latter, it is causes and their occurrences. Due to lack of clarity of purpose, if problems and causes are clubbed together in the same Pareto, meaningful inferences cannot be drawn.
    4. Treating Pareto as a one-time tool - Pareto is usually done once and the result is treated as sacrosanct for a long period. A Pareto chart only provides a snapshot in time. Over time, the defect categories or causes and their occurrence counts may change, so a Pareto Analysis done at different points in time may yield different results.
    Some that could fit in both categories
    1. Small data sets - Pareto Analysis helps when you want to prioritize the vital few from a large data set. Doing a Pareto analysis on 4-5 categories will seldom yield a good result.
    2. Completely ignoring the trivial many - Pareto analysis helps identify the vital few, but it does not say one should ignore the trivial many. It simply says to fix the vital first and then move on to the trivial. However, many people assume that once they fix the top 20%, they need not work on the rest. Pareto can be used to improve the process continuously by repeatedly re-prioritizing the causes to focus on.
    3. Doing Pareto at a high level only - like most tools in the Analyze phase, Pareto can also be used to drill down. E.g. a first Pareto can identify the top defect categories, and a second-level Pareto can then be done within those top categories (using the causes).
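    The vital-few selection described above can be sketched in a few lines of code. This is a minimal illustration, not any standard library routine, and the defect categories and counts are made up for the example: causes are sorted in descending order of frequency, cumulative percentages are accumulated, and categories are kept until the cumulative share first reaches roughly 80%.

```python
# Minimal Pareto "vital few" sketch (illustrative data, hypothetical helper).
def pareto(counts, cutoff=80.0):
    """Return causes, largest first, until cumulative % reaches the cutoff."""
    total = sum(counts.values())
    vital, cumulative = [], 0.0
    for cause, freq in sorted(counts.items(), key=lambda kv: -kv[1]):
        cumulative += 100.0 * freq / total
        vital.append(cause)
        if cumulative >= cutoff:
            break
    return vital

defects = {"Scratches": 45, "Dents": 30, "Misalignment": 15,
           "Discoloration": 6, "Other": 4}
print(pareto(defects))  # → ['Scratches', 'Dents', 'Misalignment']
```

Note how the cutoff is a parameter, not a law: running the same function with `cutoff=70` or `cutoff=90` reflects the point above that the split could equally be 70/30 or 90/10.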
  3. 2 points
    The Excellence Scoreboard represents the total number of points accumulated by members of this Forum by answering the Weekly Questions. Thanks a lot to Mr. Vishwadeep Khatri and his entire team for their efforts in creating this useful platform, where an individual member can earn points and use them to nominate someone who wants to upgrade himself/herself in this competitive field. Nominating someone brings benefits in various forms, both for yourself and for them. I nominated a participant for the Green Belt course by utilizing my points; his entire Green Belt fee was waived off, and he has successfully completed the course and is using the skills in his field.
  4. 2 points
    A Run Chart is a plot of the data points for a particular metric with respect to time. It is primarily used for the following two purposes:
    1. Graphical representation of the performance of the metric (without checking for any patterns in it). E.g. the scoring comparison in a cricket match, where the runs are plotted on the Y axis and the overs (a substitute for time spent) on the X axis. (Source: The Telegraph)
    2. To check whether the data from the process is random or contains a particular pattern. These patterns could be one or more of the following: a. Clusters b. Mixtures c. Trends d. Oscillations. (Source: Minitab help section)
    When used for purpose 2, a run chart performs the following tests for randomness:
    - Test for the number of runs about the median. This checks for Clusters and Mixtures. Clusters are present if the actual number of runs about the median is less than the expected number of runs, implying that data points are bunched in one part of the chart. Mixtures are present if the actual number of runs is more than the expected number, implying frequent crossings of the median line.
    - Test for the number of runs up or down. This checks for Trends and Oscillations. Trends are present if the actual number of runs is less than the expected number, implying a sustained drift in the process (either up or down). Oscillations are present if the actual number of runs is more than the expected number, implying that the process is not steady.
    These are hypothesis tests with the following hypotheses:
    Ho - Data is random
    Ha - Data is not random
    p values are calculated for all 4 patterns. A p value of less than 0.05 leads to rejection of Ho (i.e. acceptance of Ha), implying that the particular pattern is present in the data set. Absence of these patterns indicates that the process is random.
    Advantages of a Run chart over a Control chart
    Ideally, a control chart is a more advanced tool than a run chart. However, the following situations warrant the use of a run chart over a control chart:
    1. A run chart is preferred when we need a snapshot of the metric's performance over time without taking into account control limits or whether the process is stable/unstable, e.g. the scoring run rate comparison for cricket (refer to the example above).
    2. One can start creating a run chart without any prior data collection, unlike a control chart (where data is collected first to determine the control limits).
    3. As a quick check of whether the process data is random. For such checks (clusters, mixtures, trends and oscillations) in a control chart, one would have to run all the Nelson tests (usually control charts are used with only one test, i.e. any points outside 3 standard deviations, and hence might not detect such patterned data).
    4. Apart from the above, a run chart is easier to prepare and interpret than a control chart.
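    The "runs about the median" test described above can be sketched as follows. This is a simplified illustration using the standard runs-test formula for the expected number of runs; it is an assumption that this matches the textbook form of the test, and it does not reproduce Minitab's exact p-value calculation. Fewer actual runs than expected suggests clustering; more suggests mixtures.

```python
# Sketch of the runs-about-the-median count vs. expectation (assumed
# textbook formula, not Minitab's exact implementation).
def runs_about_median(data):
    s = sorted(data)
    n_all = len(s)
    # Median: middle value for odd n, mean of the two middle values for even n
    median = s[n_all // 2] if n_all % 2 else (s[n_all // 2 - 1] + s[n_all // 2]) / 2
    # Classify each point as above (+1) or below (-1) the median, dropping ties
    signs = [1 if x > median else -1 for x in data if x != median]
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    n_plus, n_minus = signs.count(1), signs.count(-1)
    expected = 2 * n_plus * n_minus / (n_plus + n_minus) + 1
    return runs, expected

data = [5, 6, 5, 7, 8, 9, 8, 4, 3, 4, 5, 6]
actual, expected = runs_about_median(data)
print(actual, round(expected, 2))  # → 6 7.0
```

Here 6 actual runs against 7 expected hints (weakly) at clustering; a formal conclusion would still need the p value.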
  5. 2 points
    In DMADV the focus is on new product/service design, unlike DMAIC, which works on an existing product/service. During the last phase of DMADV, the design is verified, and whether the design is capable of meeting the needs of the customer is validated. Numerous pilot runs are required to validate and verify the design outcomes. A major aspect of this phase is to check whether all the metrics that were designed are performing as expected, i.e. conformance to specification. Some of the commonly used tools in the Verify phase include control charts, control plans, flagging, Poka Yoke, check sheets, SOPs and work instructions.
    Software application design: from a new design viewpoint, verification asks whether the software application is being developed in the right way, and validation asks whether the right software application is being produced. In simple terms, verification is checking whether the application works without errors/bugs, and validation is checking whether the application meets the requirement and expectation.
    Verification: application and design review, code walk-through, code inspection | Validation: black box and white box testing
    Verification: static testing | Validation: dynamic testing
    Verification: performed first | Validation: usually performed post verification
    Verification: done without software execution | Validation: done with software execution
    Automotive manufacturing: with reference to gearbox manufacturing as per a new design in the DMADV process, the high-level manufacturing steps include preforming, annealing, machining, producing teeth, shaving, grinding and inspection. Here verification is comparing the gearbox to the design requirements of material, dimensions, tolerances etc., i.e. all specs are verified. In validation, after inspection, the gearbox is assembled and given a dry run to test whether it runs as expected.
    Verification: done during development, review and inspection, production and scale-up | Validation: usually done before scale-up and after the actual production
    Verification: random inspection can be done | Validation: stringent checks are done
    Validation can be done directly by skipping verification in some scenarios, especially when we are not able to measure component outcomes or when the cost of verification is very high.
    Medical devices: verification is usually done on the design - design input, process and output - by tests, inspections and analysis. Validation is checking whether the intended need of the medical device is met. (Source: U.S. Food and Drug Administration (FDA))
  6. 2 points
    Pareto Analysis is used to separate the Vital Few from the Trivial Many parameters - the vital few causes (about 20%) versus the trivial many (about 80%). This principle is otherwise called the 80-20 Rule. It simply says that the majority of results come from a minority of causes. In numerical terms:
    20% of inputs are accountable for 80% of output
    80% of productivity comes from 20% of associates
    20% of causes are accountable for 80% of the problem
    80% of sales comes from 20% of customers
    20% of efforts are accountable for 80% of results
    Example dataset:
    Metric | Freq | Percentage | Cumulative
    Demand Exceeds Supply | 232 | 24.12% | 24.12%
    Incorrect Memory and CPU Usage | 209 | 21.73% | 45.84%
    Bandwidth Constraints | 203 | 21.10% | 66.94%
    Network Changes | 64 | 6.65% | 73.60%
    Fatal Bugs in Production | 59 | 6.13% | 79.73%
    Poor Front-End Optimization | 52 | 5.41% | 85.14%
    Integration Dependencies | 39 | 4.05% | 89.19%
    Database Contention | 34 | 3.53% | 92.72%
    Browser Incompatibility | 23 | 2.39% | 95.11%
    Device Incompatibility | 14 | 1.46% | 96.57%
    Hardware Conflicts | 13 | 1.35% | 97.92%
    Inadequate testing | 9 | 0.94% | 98.86%
    Too much code | 6 | 0.62% | 99.48%
    Exception handling | 5 | 0.52% | 100.00%
    (Pareto chart image)
    Some of the common misuses include the scenarios below:
    Working only on vital few parameters: there could be other potential parameters where the frequency is low and which fall among the trivial many; however, when the criticality or severity of such a parameter is high, it gets underestimated because of its low frequency. In the referred example, Inadequate Testing can be critical - insufficient test cases or poor test reviews can lead to multiple production issues - and this is not factored in when focusing only on the vital few. Ideally, 80% of the resources should focus on reducing the vital few and 20% of the resources on minimizing the trivial many parameters.
    Using Pareto for defects belonging to multiple categories: another misuse of Pareto analysis is combining defects from multiple categories. We need to clearly understand that categories must be mutually exclusive.
    Using Pareto when parameters are not collectively exhaustive: what is collectively exhaustive? Collectively, all the failures in the list should cover all the possible failures for the problem, i.e. there should not be any gap. Definition: events are said to be collectively exhaustive if the list of outcomes includes every possible outcome.
    Performing analysis on small data sets / few data points: for a statistically meaningful analysis, we have to use relatively large data sets rather than working with a few data points. At the same time, the number of categories needs to be practically large enough; a Pareto analysis does not make sense when the data set is very small.
    Inaccurate measuring: visually eyeballing the Pareto chart to select the vital few, rather than using the cumulative % (less than 80%).
    Analyzing defects only once: Pareto Analysis should be performed before the problem is solved, during the implementation period to see the trend, and post improvement. It is a repetitive and iterative process, rather than being run only once with focus kept on the defects identified during the early stages of the analysis.
    Believing that 80 + 20 must total 100: the two percentages measure different things (effects vs. causes) and need not sum to 100; the split could be 75/20 or 90/40 as well.
    Considering 80 on the left axis: the left axis displays frequency and the right axis the cumulative percentage. Sometimes people look for 80 on the left axis, leading to selection of the wrong vital few and hence to poor problem solving.
    Flattened Pareto Analysis: if there is any bias in the data collection methods, we might end up with flat bars; this happens mainly when we break vital problems into many small problems. It does not make sense to proceed with Pareto Analysis in that case; rather, work on action plans based on severity and criticality.
    Considering defects as root causes: treating the vital defects identified during the analysis as root causes, without analyzing further / deep diving to understand the actual root cause. This will not stop the defect from occurring; rather, it is applying a band-aid to the identified loopholes.
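    As a sketch of the cumulative-% selection rule mentioned above (rather than eyeballing the chart), the cumulative column of the example dataset can be recomputed and the vital few taken as the categories whose cumulative share stays below 80% - which lands exactly on the first five rows, at 79.73%, rather than on a forced 80/20 split:

```python
# Recompute the cumulative percentages from the example table and
# select categories while cumulative % stays within 80%.
freqs = [("Demand Exceeds Supply", 232),
         ("Incorrect Memory and CPU Usage", 209),
         ("Bandwidth Constraints", 203),
         ("Network Changes", 64),
         ("Fatal Bugs in Production", 59),
         ("Poor Front-End Optimization", 52),
         ("Integration Dependencies", 39),
         ("Database Contention", 34),
         ("Browser Incompatibility", 23),
         ("Device Incompatibility", 14),
         ("Hardware Conflicts", 13),
         ("Inadequate testing", 9),
         ("Too much code", 6),
         ("Exception handling", 5)]

total = sum(f for _, f in freqs)          # 962
cumulative, vital = 0.0, []
for name, f in freqs:
    cumulative += 100.0 * f / total
    if cumulative > 80.0:
        break
    vital.append(name)

print(len(vital))  # → 5
print(vital[-1])   # → Fatal Bugs in Production
```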
  7. 2 points
    Some of the common challenges in conducting severity rating in PFMEA are listed below, along with some thoughts on how they could be mitigated.
    1. Understanding the ordinal rating scale: an ordinal rating scale is interpreted differently from a ratio scale, and there is a risk of drawing incorrect conclusions. For example, if the rating scale labels 3 as 'likely' and 6 as 'very likely', the impact of a 6 is not exactly double that of a 3 (as the ratio would suggest), and the gap between 3 and 6 may differ significantly from the gap between 2 and 4. The range, however, may be treated as such if the rating scale is not well explained. The solution is to have a detailed discussion on the assessment mechanism, including the rating scale.
    2. Different rating scales for different industries: the severity rating scale may have very different implications; for example, the scale used in the healthcare industry will have very different parameters and levels versus the insurance or automobile industry. This challenge can be addressed by working with the actual team members and the respective functional leaders to design a rating scale that is relevant to the organization.
    3. Differences in interpretation: even when the same rating scale is provided, there can be differences in interpretation of the severity and impact of a possible risk, based on the personal experiences of the person conducting the assessment. The solution is to hold calibration meetings to ensure that everyone is on the same page.
    4. Cognitive biases: the challenge in using rating scales rather than statistical data for severity rating is that it may be subject to cognitive biases, such as:
    a. Only taking into account "known unknowns" and not planning or designing suitable response mechanisms for black swan events and "unknown unknowns".
    b. Availability: people typically ignore statistical evidence and base their estimates on their memories, which favor the most recent, emotional and unusual events that had a significant impact on them.
    c. Gambler's fallacy: people assume that individual random events are influenced by previous random events, which might be spurious correlations without causal relationships.
    d. Optimism bias: people overestimate the probability that positive events will occur for them, compared with the probability that these events will occur for other people.
    e. Confirmation bias: people seek to confirm their preconceived notions while gathering information or deriving conclusions.
    f. Majority: people may go with the assessment of the majority to conform with the group, at the cost of their objective opinion, which may be a truer representation but different from the group opinion.
    g. Self-serving bias: people have a propensity to assign to themselves more responsibility for successes than for failures.
    h. Anchoring: people tend to base their estimates on previously derived/used quantities, even when the two quantities are not related.
    i. Overconfidence: people consistently overestimate the certainty of their forecasts.
    j. Inconsistency: when asked to evaluate the same item on separate occasions, people tend to provide different estimates, even though the information has not changed.
    The solution in this case is to screen whether the ratings have been influenced by these biases, and to inform the participants in advance to consider whether their ratings may have been so influenced. Other mitigation measures could be blind peer rating, or benchmark comparison with industry ratings for similar processes.
    5. Interdependence between causal factors and failure modes: FMEA assumes that each risk is an independent event, whereas there may be a high degree of interdependence between factors, which could influence the risk rating significantly. Understanding and articulating such interrelationships can be challenging, and not considering their impact could mean that the assessment is not representative of the possible risks and the resulting impacts. The way to mitigate this is to have a detailed discussion with all the relevant stakeholders and the process expert, in a well-designed structure, to ensure that all the risks and their interrelationships are well understood and documented.
    6. Challenge in considering the effect on both the customer and the process (assembly/manufacturing unit): as against DFMEA (Design FMEA), where we look at the effects on the customer, in process FMEA we need to consider the impact, and hence the severity rating, of a failure mode that affects both the process and the customer. This leads to more complications, as multiple scenarios must be considered. The challenge can be mitigated by taking the higher of the severity ratings of the failure mode - for the process or for the customer - as the severity rating for that causal factor/failure mode.
    7. Challenge in deconstructing the impact of the root cause vs. assessing the failure mode: though there is a perspective that in some cases root causes and failure modes can be used interchangeably, if we drill down further it is evident that root cause analysis is typically conducted post facto (after the event), whereas failure mode identification happens proactively and takes into account various other factors apart from the proximate cause. The challenge is to ensure that this understanding percolates to the team creating the FMEA document.
    8. Challenge in running risk assessment as an ongoing process vs. a one-time activity: risk assessment (including identification and severity assessment) has to be an ongoing process and not a single point-in-time activity, as the severity and impact may change materially when there are significant changes in internal or external drivers, process dynamics or key environmental factors. The challenge is to ensure that the rigor of the assessment is maintained and updated with any relevant changes. The solution is to have a monitoring/governance mechanism that keeps the FMEA as a live document with relevant updates to ensure correct risk ratings.
    9. Challenge in considering the impact over different timescales: e.g. the impact of a risk that manifests immediately may be significantly different from one that manifests after some time. The solution would be to conduct a timescale analysis of such risk factors, take into consideration the impacts of recent events, and see whether the severity rating should change in such cases.
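    The mitigation for challenge 6 above (take the higher of the two severity ratings when a failure mode affects both the process and the customer) is simple enough to express as a one-line rule. The function name and the rating values are illustrative, not from any FMEA standard:

```python
# Sketch of the "take the higher severity" rule for PFMEA (challenge 6).
def pfmea_severity(process_severity, customer_severity):
    """Use the worse (higher) of the two severity ratings."""
    return max(process_severity, customer_severity)

# A failure mode rated 4 for the process but 7 for the customer
# carries severity 7 in the PFMEA.
print(pfmea_severity(4, 7))  # → 7
```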
  8. 2 points
    From the album: July-Sep 2019 (© Benchmark Six Sigma)
  9. 1 point
    A Kanban Board is a tool used to depict the position of work in a process. As the question itself mentions, Kanban boards were primarily used for work allocation, monitoring progress, decision making and reporting (at the end of the day). The most common usage of these boards is in daily huddles / daily team meetings / stand-up meetings (whatever you might want to call them). It is mostly done on a whiteboard where columns are created to track progress. These days there are multiple online versions of Kanban boards (but the joy of doing it is using post-it notes or a marker pen on a whiteboard - the good old way). The choice between a manual and a systemic Kanban board is of lesser significance; what is more important is to track the progress. The simplest Kanban boards have columns for work to be done, in progress and completed (image source: smartsheet.com via Google Images). The best feature of the Kanban board is how it has evolved across industries and domains while the underlying features of allocating work, tracking progress and decision making remain the same:
    1. Kanban Board in Agile software development / project management
    2. Kanban Board in sales
    3. Kanban Board in hiring
    4. Kanban Board in incident management
    5. Kanban Board in aviation (flight progress strips, including automated versions)
    6. Kanban Board in food ordering
    You will notice that there are multiple variations of the Kanban board (manual or systemic), all trying to help the business and/or customer know the progress of their product/service through the various process stages. A more advanced or recent variation is the Swimlane Kanban Board, where additional characteristics can also be tracked. (Images sourced via Google Images search)
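    The column-and-card structure described above maps naturally to a small data structure: columns hold ordered lists of cards, cards move between columns, and (as many Kanban variants add) a column can enforce a work-in-progress limit. The class, column names and limits below are an illustrative sketch, not any particular tool's API:

```python
# Minimal kanban board sketch: columns of cards, a move operation,
# and an optional WIP limit per column (all names illustrative).
class KanbanBoard:
    def __init__(self, columns, wip_limits=None):
        self.columns = {c: [] for c in columns}
        self.wip_limits = wip_limits or {}

    def add(self, column, card):
        limit = self.wip_limits.get(column)
        if limit is not None and len(self.columns[column]) >= limit:
            raise ValueError(f"WIP limit reached for {column}")
        self.columns[column].append(card)

    def move(self, card, src, dst):
        self.columns[src].remove(card)
        self.add(dst, card)

board = KanbanBoard(["To Do", "Doing", "Done"], wip_limits={"Doing": 2})
board.add("To Do", "Fix login bug")
board.move("Fix login bug", "To Do", "Doing")
print(board.columns["Doing"])  # → ['Fix login bug']
```

The WIP limit is the design choice that distinguishes a Kanban board from a plain to-do list: it forces work to be pulled only when capacity exists, which is the same pull signal the physical card system provides.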
  10. 1 point
    Kanban boards and kanban cards have evolved in different ways in different industries.
    1. Open-source kanban boards - used for managing your own or the team's daily tasks. You can use all sorts of tags, comments, member assignment etc.
    2. Online kanban boards - practicing LSS and managing all work through tablets and smartphones helps in managing tasks in real time.
    3. Agile kanban boards - help teams accept incremental changes by visually tracking the progress of activities, the backlog etc.
    4. Excel kanban boards - activity tracking in a project can be done in a basic Excel sheet just by naming the columns and listing the planned activities and the status against each.
    5. Kanban bins - 2-bin/3-bin kanban for monitoring inventory: pulling one box from the supplier end and replenishing the first box by the time the second box gets consumed.
    6. Physical kanban boards - physical kanban cards/placards are placed and maintained, and production is done based on the pull.
  11. 1 point
    Imagine you get to choose a solution from a list of probable solutions with the following conditions:
    1. all solutions will be evaluated one after the other
    2. a solution, once evaluated and rejected, cannot be selected again
    3. each solution has a different reward or benefit associated with it, which you are unaware of; you will know the rewards only for those solutions that have been evaluated
    4. the probable solutions are in no particular order
    5. if you reject all solutions, by default the last solution will be selected, even though it may not give you the best result
    In such a scenario, the biggest challenge is to determine where to stop. Ideally you want the maximum reward, or the best solution. However, you do not know whether it is still to be evaluated or whether you have already rejected it assuming a better solution was yet to come. The Optimal Stopping Problem provides an answer in such situations. It says that if you have to choose from 'n' solutions, always reject the first 'n/e' solutions (where e ≈ 2.718). Let us call this number 'x'. Then select the next solution that is better than all 'x' solutions already evaluated. Working with this rule, you will select the best solution in about 37% of cases (which, as per Wikipedia, is a very good success rate - I have not gone into the validation of it yet). 'x' is basically a sample drawn from the population 'n', and 'n/e' ensures that we have a sufficient sample size to consider.
    E.g., picked from the classic 'Secretary Problem' associated with the Optimal Stopping Problem (source: Wikipedia): you have 100 applicants for the position of secretary. All the above rules (points 1 through 5) apply, and you have to select the best candidate. As per the Optimal Stopping rule, one should reject the first 100/e ≈ 37 candidates and then select the next candidate who is better than all candidates interviewed so far.
    P.S. This will ideally not happen in practice, as the interviewer always has the option to go back to a candidate. I do not have examples of any practical application, as this concept is new to me. Hoping someone shares practical examples here.
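    The roughly-37% success rate claimed above is easy to check with a quick Monte Carlo sketch of the 1/e rule: shuffle candidate ranks, skip the first n/e, then take the first candidate better than everyone seen so far (falling back to the last candidate, per condition 5). The function and simulation parameters are illustrative:

```python
# Monte Carlo sketch of the secretary problem's 1/e stopping rule.
import math
import random

def secretary_trial(n, rng):
    ranks = list(range(n))            # higher rank = better candidate
    rng.shuffle(ranks)
    k = round(n / math.e)             # size of the rejected "sample"
    threshold = max(ranks[:k]) if k else -1
    for r in ranks[k:]:
        if r > threshold:
            return r == n - 1         # did we pick the very best?
    return ranks[-1] == n - 1         # forced to take the last one

rng = random.Random(42)
trials = 10_000
wins = sum(secretary_trial(100, rng) for _ in range(trials))
print(wins / trials)                  # ≈ 0.37
```

Over many trials the success rate settles near 1/e ≈ 0.368, consistent with the "about 37%" figure quoted from Wikipedia.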
  12. 1 point
    What are existing process controls? Simply stated, these are the measures taken in the process to ensure that defective items are not produced. What are the types of existing process controls? There are two types:
    1. Preventive
    2. Detective
    Preventive process controls affect Occurrence ratings, as they prevent the occurrence of the failure modes. Detective controls affect Detection ratings, as they help detect that a failure mode has occurred. The same principle is applied when mistake proofing is done in the Control phase of a Six Sigma project. Let us take some examples:
    1. The production checklist that an agent uses is a preventive control, as it ensures that processing is done correctly; if used, it reduces the occurrence of defects. An audit checklist is a detective control, as it checks whether a defect has occurred.
    2. Using coding best practices is a preventive control, as it reduces the number of bugs. Unit testing is a detective control, as it checks for bugs present in the system.
    3. Spelling auto-correction, or highlighting a word with a red underline in MS Word, is a preventive control that reduces the number of spelling mistakes (while writing the article). However, running a spell check in MS Word is a detective control, as it detects the incorrect spellings (after the article is written). By the way, Excel has only spell check and no auto-correction.
    4. Preventive maintenance is done in manufacturing. It reduces the occurrence of unplanned downtimes due to faults and hence impacts the occurrence rating.
    5. The "Caps Lock is on" warning is a preventive control intended to prevent instances of entering incorrect passwords, and hence impacts the occurrence rating. The system not allowing log-in when an incorrect password is entered is a detective control, as it first checks for a valid password, and hence impacts the detection rating.
    6. On certain websites, one cannot enter alphabets in a numeric field. This is a preventive control that reduces the occurrence of incorrect entries in the field. The warning that some mandatory fields are left blank, with the system not allowing the user to go to the next screen, is the detective control, as it checks for entries in all mandatory fields.
    7. Metal detectors and smoke detectors are examples of detective process controls. They will not impact the occurrence but will definitely have a bearing on the detection rating.
    8. The seat-belt-not-worn alert is a detective process control, as it detects that the seat belt is not worn. In some advanced cars, the car will not start unless the seat belt is worn; this is a preventive control, as it does not let the occurrence happen.
    In all the above examples, preventive controls impact the occurrence rating, while detective controls impact the detection rating. To sum up, while doing a process FMEA, the format below is more useful, as it clearly shows that preventive process controls impact Occurrence ratings and detective process controls impact Detection ratings. (Source: APQP FMEA format sourced via Google Images)
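    The distinction in example 6 above (numeric-only field vs. mandatory-field check) can be sketched in code: the preventive control stops bad characters from ever entering the field, lowering occurrence, while the detective control inspects the form after entry, improving detection. Both function names and the form fields are hypothetical illustrations:

```python
# Illustrative preventive vs. detective controls for form entry
# (hypothetical helpers, not any real framework's API).
def preventive_filter(keystrokes):
    """Preventive: reject non-digit characters at entry time,
    reducing the OCCURRENCE of bad data in a numeric field."""
    return "".join(ch for ch in keystrokes if ch.isdigit())

def detective_check(form, mandatory):
    """Detective: after entry, flag mandatory fields left blank,
    improving the DETECTION of the failure mode."""
    return [field for field in mandatory if not form.get(field)]

print(preventive_filter("12a4b"))  # → 124
print(detective_check({"name": "A", "phone": ""}, ["name", "phone"]))  # → ['phone']
```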
  13. 1 point
    Existing process controls affect the detection rating and also have an influence on the occurrence rating.
    Step | Potential failure mode | Potential effect of failure | SEVERITY | Potential causes | OCCURRENCE | Current control | DETECTION | Comments
    (What is the step | In what ways can the step go wrong | Impact if the failure mode is not prevented/corrected | How severe is the impact on the customer | What causes the step to go wrong | How frequently is the cause likely to occur | What current controls prevent the failure mode from happening or detect it if it occurs | How probable is the detection of the failure mode | -)
    Report to be saved in the respective location | Report missed being saved in the location | SLA not met | 7 | Report assumed saved but not saved in the location / other issue | 1 | Self-check / sample audit | 6 | Sample audit / self-check has an influence on the occurrence of the report not being saved
    Team contacts the customer if no remittance details are available | Team contacts the wrong customer | SLA not met | 6 | Lack of attention / name not clear in the invoice | 1 | Checking previous history before contacting | 7 | Referring to historical data for exceptions will give better clarity
    Log into the application | Incorrect user ID | Unable to log in | 3 | Typo or typing in urgency | 2 | System displays an alert message when the user ID is incorrect; user ID is the same as the Z ID and hence also available with the TL | 3 | The system control guards against typos or errors due to urgent typing
  14. 1 point
    Benchmark Six Sigma Expert View by Venugopal R People who are not trained or exposed to the principles of Control charts often find it difficult to understand the significance of the Control limits and their interpretation. A good understanding of control charts starts with understanding data types. Then one has to understand some probability theory. Then the principle of Normal distribution. Good clarity on Special and Common causes. Then preferably an insight into Central Limit Theorem. Such a foundation prepares a person to have a good grasp of the underlying principles of Control Charts, their different types and application of each type of chart and so on. Even with all this understanding, usually the control charts are used for the observing any points falling outside the control limits, though there are 8 rules defined to observe statistical instability. There still remains the confusion in the minds of some as to how the Control limits differ from the specification limits and some are not comfortable with out including the specification limits also on the control charts. The run charts are much simpler and their understanding and interpretation do not require the extent of subject knowledge as above. Run charts do not have ‘Control limits’ much to the relief of those who had discomfort with control limits of control charts. Those who use Minitab to create run charts would have seen the chart has p values pertaining to the types of instability viz. Mixtures, Clusters, Oscillations and Trends. I am not explaining these terms here, since I am sure many respondents will do a good job there. However, if we go through the rules to detect instability as per the control chart, we can see that not only the four terms that are used for run charts are well covered by those rules, but additional ones as well. One may choose to use Run charts or Control charts depending upon the situation and the ease of comprehension by stakeholders involved. 
In many instances, some of the instability observations will be quite evident on a run chart and one may proceed by taking decisions for improvement.
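The distinction drawn above between control limits and specification limits can be made concrete with a small sketch. The control limits below are computed from the process data itself, using the individuals-chart moving-range method; specification limits would come from the customer and never enter this calculation. The data values are illustrative.

```python
# Individuals (I) chart limits via the average moving range.
# Control limits come from the process data; specification limits
# (from the customer) play no part in this computation.

def i_chart_limits(data):
    """Return (LCL, center, UCL) for an individuals chart."""
    n = len(data)
    center = sum(data) / n
    # Average moving range of successive points
    mr_bar = sum(abs(data[i] - data[i - 1]) for i in range(1, n)) / (n - 1)
    # d2 constant for subgroup size 2 is 1.128; 3-sigma limits:
    sigma_est = mr_bar / 1.128
    return center - 3 * sigma_est, center, center + 3 * sigma_est

values = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 10.2]
lcl, cl, ucl = i_chart_limits(values)
print(f"LCL={lcl:.3f}  CL={cl:.3f}  UCL={ucl:.3f}")
out_of_control = [x for x in values if x < lcl or x > ucl]
print("Points outside limits:", out_of_control)
```

A run chart of the same data would draw only the median as a center line, with no limits at all, which is exactly why it is easier to explain to untrained stakeholders.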
  15. 1 point
    Run Chart

A Run Chart is a line chart that visually displays data over a period of time. It is also known as a trend or time series chart.

Why a Run Chart? Run Charts help us identify patterns that may exist in the data, and trends over time.

When to use a Run Chart? Use a Run Chart whenever you want to understand how your process has performed over a certain period of time, or when you want to see if your changes resulted in sustainable improvements.

In a Run Chart we can identify the following non-random variations or patterns:

Clusters: A cluster is a group of data points in a particular area of the Run Chart. We can also identify clustering by looking at the probability value: if the p-value for clustering is less than 0.05, there is likely clustering in the data.

Mixtures: If the data points frequently cross the median line, it is called a mixture. We can see this when we pool data points from more than one population. If the p-value for mixtures is less than 0.05, there is likely a mixture of data in the Run Chart.

Trends: A trend is a continued drift of data, either in the upward or the downward direction. It is an indication that the process may go unstable in the near future. It may be caused by replacement of operators, or by aspects such as dilapidated tools. If the p-value for trends is less than 0.05, there is likely a trend in the data.

Oscillations: When data swings up and down, it is called oscillation. This is a warning that the underlying process is not stable. If the p-value for oscillations is less than 0.05, there are likely oscillations in the data.

Comparison of Run Charts vs. Control Charts:

Run Chart:
· Is simple and can be created easily
· Can be quickly analyzed
· Does not require statistical knowledge to read

However, a run chart lacks the advantages that a Control Chart possesses:

Control Chart:
· Helps us understand whether the process is stable or in control
· Shows whether the process is on the right track
  16. 1 point
    We use a run chart to see if there is any sign of special cause variation in our process data. It is a graphical representation of process performance plotted over time (hourly for continuous flow processing, and most commonly in days or months).

Most importantly, what is a run? It is 1+ consecutive data points on the same side of the median (either above or below it).

Variations can be common cause or special cause. Point to note: common cause variation is the outcome of a stable process and is predictable, while special cause variation is the outcome of an unstable process and is not predictable.

By using a run chart, we are able to find trends and patterns in the process data set. Common patterns of non-randomness include:
Mixture patterns
Cluster patterns
Oscillating patterns
Trend patterns

When we create a run chart in Minitab, it detects whether the above-mentioned patterns exist in the data. Sample data: gold price per 10 grams for the last 55 months. In the above chart we can witness clustering and trends.

Cluster pattern: In general, it is a set of points in one area of the chart, above or below the median line. Thumb rule for a cluster: 6+ continuous nearby points above/below the median line. We can also check the p-value to see if there is a potential cluster in the data; specifically, when the p-value is < 0.05, the data could indicate a cluster. In the above run chart, the approximate p-value for clustering is 0.000, which is less than 0.05, so we reject the null hypothesis. Clusters can be a sign of potential sampling or measurement issues.

Trend pattern: It is a sustained drift in the data set, either upward or downward. Thumb rule to conclude a trend: 6+ consecutive points each higher than the previous one in a continuous period, or the other way around, i.e. 6+ consecutive points each lower than the previous one.
In the chart referred to above, we could observe an upward trend, and the p-value is also less than 0.05, enough to conclude a potential trend.

Now that we know about clusters and trends, let us note the following: the opposite of a cluster is a mixture, and the opposite of a trend is an oscillation.

Oscillation: When the process is not stable, we get data points spread above and below the median line in a way that looks like oscillation. Thumb rule: 14+ points in one continuous period alternately increasing and then decreasing. For a p-value < 0.05, possible oscillation can be concluded.

Mixture: When there are no points near the center line, with 14+ points moving upward and downward, increasing and decreasing across the median line, and the p-value is < 0.05, we may have a potential mixture in the data set.

Run Chart & Control Chart: In a control chart, along with the center line we have the upper and lower control limits. Another major difference is that in a control chart the center line is the mean, while in a run chart the center line is the median; a run chart does not give any detail on statistical control limits. We can see the control chart as an enhancement of the run chart. With a control chart, we are able to check stability - whether the process mean and variation are stable, and whether any points are out of control. We can also check normality - whether the data is normal or non-normal - but it does not provide a view of patterns. When we use a control chart from the Assistant view in Minitab, we get a Stability Report as output. It gives commonly used patterns for reference; however, it does not highlight the pattern in the output.

Control charts are useful over a run chart when the focus is on variation and on identifying potential deviation. However, the downside of control charts is that they can have the below limitations and can cause unnecessary wastage of time.
False alarms
Incorrect assumptions
Incorrect control limits

Both run charts and control charts have their own advantages and are used for different purposes [Run - trends & patterns; Control - stability], and each is useful depending on the objective, situation and analysis required.
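The run chart p-values discussed above come from runs tests. Below is a minimal sketch of the runs-about-the-median test behind the clustering and mixtures p-values, assuming the standard normal approximation for the number of runs; Minitab's exact computation may differ slightly (e.g. in handling ties or continuity corrections).

```python
# Runs-about-the-median test underlying the run chart's
# "clustering" and "mixtures" p-values (normal approximation).
# Points equal to the median are dropped.
import statistics
from math import erf, sqrt

def runs_about_median(data):
    med = statistics.median(data)
    signs = [x > med for x in data if x != med]
    m = sum(signs)               # points above the median
    n = len(signs) - m           # points below the median
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    expected = 2 * m * n / (m + n) + 1
    var = (2 * m * n * (2 * m * n - m - n)) / ((m + n) ** 2 * (m + n - 1))
    z = (runs - expected) / sqrt(var)
    phi = lambda t: 0.5 * (1 + erf(t / sqrt(2)))  # standard normal CDF
    p_clustering = phi(z)        # too few runs -> clusters
    p_mixtures = 1 - phi(z)      # too many runs -> mixtures
    return runs, expected, p_clustering, p_mixtures

# Strongly clustered data: long stretches on one side of the median
data = [1, 1, 2, 2, 1, 2, 9, 8, 9, 8, 9, 8]
runs, exp_runs, p_clu, p_mix = runs_about_median(data)
print(f"runs={runs}, expected={exp_runs:.1f}, "
      f"p(clustering)={p_clu:.4f}, p(mixtures)={p_mix:.4f}")
```

With only 2 runs against an expected 7, the clustering p-value comes out well below 0.05, matching the "reject the null hypothesis" reading described above.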
  17. 1 point
    While most of the answers very well highlight the misuses of Pareto Analysis, the most comprehensive answer is that of Natwar Lal; thereby marked as the best answer. His idea of mentioning intentional vs non-intentional misuses is interesting. Benchmark Expert View has been provided by Venugopal R.
  18. 1 point
    The Pareto principle is an effective root cause analysis (RCA) tool that helps us separate the vital few factors from the trivial many without having to conduct deep research into every cause. However, if it is not applied well, we may oversimplify critical factors, fail to resolve the issue, or be taken in a different direction from the factors that actually affect business outcomes. There are some inherent limitations of the Pareto principle, and if these are not taken into account, wrong conclusions may be drawn while applying it:

Historical data: It is based on historical data and does not account for the changed dynamics of the present. If, for example, 20% of the customers currently contribute 80% of revenue, incorrectly applied Pareto conclusions may lead the business to ignore the others, who may have greater untapped revenue potential.

Does not take future possibilities into account: The Pareto principle is descriptive, not prescriptive, so it is a mistake to use it for forecasting future patterns.

May not apply to all business phenomena: In some cases it is entirely possible to have evenly distributed causal factors, or to have only one significant cause with the others more or less equally distributed. E.g. 80% of system downtime may not be directly linked to 20% of the computers.

Based on quantitative data only: It ignores human factors, which may have considerable influence on the outcomes. E.g. in a ten-member team, while there may be significant productivity variations, it is unlikely that 2 people will contribute 80% of the work.

Data independence: While we take care to have mutually exclusive & collectively exhaustive (MECE) data as far as possible, interrelationships between the causal factors cannot be ruled out; in real life they may not be totally independent of each other.
Pareto works well on a large data sample and may not apply well where there are few samples and causal factors.

Short-period data may be lopsided and may not reflect the true behavior of the process when observed over a long period, which may lead to incorrect conclusions.

Very long-period data may not account for changes in the nature or attributes of the causal factors over time. E.g. while looking at response times, the fact that the information processing system may have undergone key upgrades and changes over time may be overlooked.

Data collected from unstable processes or outlier events may lead us to draw incorrect conclusions.

80 & 20 do not necessarily add up to 100: The 80% and 20% do not refer to the same elements and hence do not necessarily total 100%. The 80% represents the result/outcome or Y, whereas the 20% represents the causal factor/input or X. It is not correct to take them together to make 100%, as may mistakenly be done.

Mistaking the 20% drivers for root causes: The 20% causal factors are only the categories that contribute to 80% of the outcome; these are NOT the root causes, which need to be understood by conducting further analysis. The mistake here is to stop at the vital few identified and not go deeper into the root causes.

Not reviewing the "Others" category: Typically we tend to ignore the "Others" category while looking at a Pareto; however, if this category has key drivers, it would not be wise to ignore them completely, as there could be significant business implications, especially if the business environment and other factors change.
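The point that the vital-few split need not be exactly 80/20 can be checked numerically rather than assumed. A minimal sketch follows; the defect categories and counts are made up for illustration.

```python
# Pareto analysis: sort categories by count, compute the cumulative
# share, and pick the "vital few" that explain a chosen share of the
# total. Category names and counts are illustrative.

def pareto(counts, cutoff=0.8):
    total = sum(counts.values())
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    vital, cum = [], 0
    for cat, cnt in ranked:
        vital.append(cat)
        cum += cnt
        if cum / total >= cutoff:
            break
    return ranked, vital

defects = {"scratch": 52, "misprint": 21, "dent": 12,
           "wrong label": 8, "leak": 4, "others": 3}
ranked, vital_few = pareto(defects)
for cat, cnt in ranked:
    print(f"{cat:12s} {cnt:3d}")
print("Vital few (>=80% of defects):", vital_few)
```

Here 3 out of 6 categories (50%, not 20%) cover 85% of the defects, which illustrates that the rule is empirical: the split could just as well be 70/30 or 90/10. Note also that "vital few by count" says nothing about severity, which must be weighed separately.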
  19. 1 point
    Benchmark Six Sigma Expert View by Venugopal R

'Priority consciousness' is one of the key topics discussed in Management. Sometimes we do hear people saying that 'everything is equally important'; however, in reality it becomes difficult and even inefficient if we do not prioritize our tasks. The principle of Pareto Analysis would not require any explanation for most members of this forum. The Pareto principle, though named after the Italian economist Vilfredo Pareto, was popularized and adopted in the field of Quality Management by Joseph Juran. All seven Quality tools are excellent methods to provide guidance for problem solving, but teams have to apply their minds, process knowledge and situational requirements for the best decisions. This applies to the usage of Pareto analysis as well. There are many ways in which a Pareto analysis may fail to deliver its best benefits, and some ways it can be misused.

1. Not considering severity
We may use Pareto analysis to classify the defects of a product based on the frequency of occurrence over a period of time. For example, take the case of an electrical home appliance: the most frequently occurring defect could be a scratch on the panel, and the least frequent could be an insulation breakdown. Obviously, if priorities are judged based on frequency of occurrence alone, without considering severity, it could be disastrous! It is good practice to perform an FMEA as well, so that priorities are not decided based on occurrences alone.

2. Using Pareto charts only as a presentation tool
Pareto charts are meant to be used as part of causal analysis, but they also serve as a good presentation method. If we draw up Pareto charts just for a project presentation, and do not build them during the appropriate phase of problem solving, it is a misuse.

3. Labeling 'stratification' as 'cause'
Pareto analysis can be used for stratification of data as well as for causal analysis.
For example, the sales figures of a particular product across 12 cities can be depicted using a Pareto as a stratification exercise. However, if you drill down to 10 reasons for poor sales and depict them using a Pareto for each city, then you are using the tool for causal analysis. Failure to differentiate between the two can result in labelling 'stratifications' as 'causes'.

4. Improper grouping
The purpose of a Pareto is to identify a pattern of "vital few and trivial many". If one type of grouping results in a flat Pareto, you may have to try some other type of grouping. For example, if you are working on improving the productivity of processing invoices and you develop a Pareto of productivity grouped vendor-wise, and you get quite a flat Pareto, it does not allow you to differentiate productivity levels across vendors. So you may try grouping the data based on types of invoices, irrespective of vendor, and develop a Pareto. Similarly, different types of grouping need to be tried to identify a pattern of 'vital few'.

5. Making 'Others' too tall
Lack of adequate grouping can result in a very tall 'Others' bar. We have seen Pareto charts where the 'Others' bar comes up as the tallest! Certainly, the thought and effort put into grouping were not adequate.

6. Missing out on 'quick wins'
Many times, an occurrence with lower frequency can have an easy solution requiring little effort. You should not keep putting in effort only as per the Pareto sequence, failing to notice the quick wins.

Pareto analysis finds application in all phases of DMAIC. However, this tool has to be applied with logical thinking and subject matter knowledge. It is a tool that gives a broad level of prioritization, which has to be used along with other considerations.
  20. 1 point
    Q. 188 Explain the common challenges in Severity Assessment as a part of PFMEA (Process Failure Mode and Effects Analysis). Also mention how these challenges can be addressed. Note for website visitors - Two questions are asked every week on this platform. One on Tuesday and the other on Friday. All questions so far can be seen here - https://www.benchmarksixsigma.com/forum/lean-six-sigma-business-excellence-questions/ Please visit the forum home page at https://www.benchmarksixsigma.com/forum/ to respond to the latest question open till the next Tuesday/ Friday evening as per Indian Standard Time. The best answer is always shown at the top among responses and the author finds honorable mention in our Business Excellence dictionary at https://www.benchmarksixsigma.com/forum/business-excellence-dictionary-glossary/ along with the related term.
  21. 1 point
    We are going to discuss PFMEA - Process Failure Mode and Effects Analysis. Before that, let us look at FMEA, i.e. Failure Mode and Effects Analysis.

FMEA is a tool used to identify, prioritize, quantify and evaluate risk.

Goals:
Reduce the risk of failure
Ensure detection of failures
Prevent failures

Why FMEA:
To track potential failures
To reduce risk
To ensure counter-measures are taken

A successful FMEA starts with capturing all requirements properly and listing the potential failure modes. FMEA has 3 elements:
Failure mode
Failure effect
Failure cause

There are 3 types of FMEA:
Design FMEA: detects errors in design. For example, if any dimension or size of a product varies from the standard, risk is generated.
Process FMEA: detects errors in a process. For example, a problem with IDs.
System FMEA: detects errors in a system.

Process FMEA
The primary objective of PFMEA is to give proof of the specific cause of a failure. If this cannot be given, the next level is mistake proofing, where the team has to come up with ways of catching either the cause or the failure of a specific failure mode or defect.

The PFMEA approach is to:
Identify and reveal potential failures
Recognize functions within the process that reduce the opportunity for potential failures
Prevent out-of-conformance output based on current details, and document the process
Work towards correction and prevention

RPN: The Risk Priority Number is a component of PFMEA. It deals with 3 factors:

Severity: The seriousness of the problem, rated on a range of 1 to 10 depending on how severe the problem is. A rating of 10 may mean a hazardous failure without warning. If the problem does not affect function, it may be rated 1 or 2 - for example, an unsuitable color on a product that otherwise works fine.

Occurrence: How frequently the cause of the problem arises, rated from 1 to 10.
A cause that occurs or repeats most frequently is rated toward the extreme value of 10.

Detection: The likelihood of identifying the problem, rated from 1 to 10. A problem that is very difficult to detect is rated toward 10, while one that is detected easily, with little effort, is rated toward 1.

Risk Priority Number = product of Severity, Occurrence and Detection:
RPN = S * O * D
The highest possible RPN value is 1000 and the lowest is 1.

How to conduct a PFMEA:
A team needs to be formed with the process owners
Expectations are to be set about goals, objectives and timelines
The team has to go through the process map
The process map is implemented in the FMEA step by step
The team works on the Severity, Occurrence and Detection scores
Once the team gets the RPN values, it works on corrective measures
Corrective measures are tracked for consistency
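The RPN arithmetic above can be sketched in a few lines of code. The failure modes and ratings below are illustrative (borrowed from the PFMEA table earlier in this thread), not prescribed values.

```python
# RPN = Severity x Occurrence x Detection, each rated 1-10.
# Failure modes and ratings below are illustrative.

def rpn(severity, occurrence, detection):
    for r in (severity, occurrence, detection):
        if not 1 <= r <= 10:
            raise ValueError("FMEA ratings must be between 1 and 10")
    return severity * occurrence * detection

failure_modes = [
    ("Report not saved to location", 7, 1, 6),
    ("Wrong customer contacted",     6, 1, 7),
    ("Incorrect user ID",            3, 2, 3),
]
# Rank by RPN so corrective actions target the highest risk first
for name, s, o, d in sorted(failure_modes,
                            key=lambda f: rpn(*f[1:]), reverse=True):
    print(f"{name:30s} RPN={rpn(s, o, d)}")
```

Since each factor runs from 1 to 10, RPN ranges from 1 to 1000; ties (like the two 42s here) are common and usually broken by severity.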
  22. 1 point
    Severity ranking is the value given to the failure mode effects. It quantifies the impact of the failure mode on the customer in an FMEA. Denoted by S, its range is 1 to 10. Working on an FMEA is in itself a challenging task, as the team is trying to figure out the "risks". Some of the challenges while assigning the Severity ranking are as follows:

1. Keeping Severity independent of Occurrence and/or Detection. This is one of the most common challenges. The team usually thinks that if the occurrence of a failure mode is low, or if it can be easily detected, the failure mode is not as severe.
How do you address it - keep reminding the team that the three rankings in FMEA are independent of each other. E.g. the presence of a smoke detector (which makes detection easy) does not reduce the severity of a fire.

2. Considering the effect of the failure mode on the external customer vs the internal customer. This is a classical debate topic. Do you consider the effect on the end customer or on the next process step while doing a PFMEA?
How do you address it - list all failure effects as separate row items. This way one can separate the Severity rankings for effects on internal vs external customers.

3. Assigning the Severity ranking considering the effect of product failures (DFMEA) while doing the PFMEA. PFMEA is done for process failures, not product failures, but sometimes, in the mindset of identifying risks, one could also start listing DFMEA failure modes and their severity rankings.
How do you address it - product failure modes are ideally covered in the DFMEA. One should abstain from capturing the same in the PFMEA. Failure modes and severity rankings in a PFMEA should pertain only to failures of the process.

4. The subjective nature of Severity rankings. This comprises multiple challenges:
a. The criteria for Severity ranking span multiple themes - level of dissatisfaction, monetary impact, amount of rework, scrap, etc.
If one keeps switching between these themes, i.e. for one failure mode you look at the level of dissatisfaction while for another you consider the amount of rework, it might lead to confusion and misunderstandings.
b. Ratings of 9 and 10. Both are hazardous, 9 being with warning and 10 without warning. Now, if there is a warning, it could arguably be considered a sort of detection. But if Severity is to be kept independent of any sort of detection, then why have both 9 and 10? If it is hazardous, it simply is hazardous, irrespective of whether there is any warning or not.
c. Lack of quantified impact for the Severity ranking. The themes for Severity ranking are mostly qualitative and lack quantification. As is true of any qualitative ranking system, one could debate assigning a rating of 6 vs 5 vs 4, etc.
How do you address it - before starting the PFMEA, spend some time forming a common ground of understanding (pick one theme) for the different Severity rankings, to avoid unnecessary confusion and debate. You might also want to quantify the Severity rankings to make selection easier. Finally, once you are done with a few steps, or at the end of the PFMEA (though it is advised to do it after a few steps), stop and review the severity rankings you have assigned, to ensure consistency and reconfirm the understanding.
  23. 1 point

    From the album: April-June 2019

    © Benchmark Six Sigma

  24. 1 point
    From what I have seen and observed, the general human tendency is to correlate and compare things. People correlate one thing to another and compare things (objects, persons - read attributes/characteristics...). Why do they do this? Because human beings have been trained and taught to do so since time immemorial. Man is a social animal with plenty of knowledge!! The human tendency, in many cases, is to go from a 'known' entity to an 'unknown' entity, and the comparison starts from there.

What are we trying to say by an "apple to orange" comparison? It means that we are trying to compare two different things (in reality these could be objects or human beings). In the case of human beings, the comparison could be about their characteristics in a personal context; or skill set, experience level, soft skills, etc. in a professional context. So why is this important? Because the human brain is naturally inclined to compare people, it does not stop to ask what/whom should be compared with what/whom. This generic behaviour needs to be consciously changed by every individual to stay balanced. Apple-to-orange comparisons, therefore, happen in reality in several cases, even though the comparison is often unfair.

Let us see some examples where an 'apple to orange' comparison becomes necessary:

1. In an IT organisation, an experienced employee (10+ years of experience) resigned from his job. As his was a billable position, the service-providing company found a less experienced (1 year) professional to replace him. The customer knew the experience of the replacement staff, but a week later was unhappy with the new staff's performance, telling the team that the new staff needed to come up to speed and that the same quality was expected as from the experienced person. Here a clear 'apple to orange' comparison has happened.
Both the experienced employee (who resigned) and the new employee have knowledge of that technology (Advanced Java), but the knowledge gap is wide. For the customer, this is immaterial; he needs the same quality as before. The onus is on the service provider to bridge the gap, but the point is that this comparison is made because the customer expects the same quality.

2. In an appraisal process in an IT company, all the team members (TMs) were appraised. TM1, TM2 and TM3 were experienced in that order, with 10, 7 and 5 years of experience respectively. TM4, TM5, TM6 and TM7 had 4, 2, 1 and 1 years respectively. TM5, TM6 and TM7 felt that it was unfair to compare them with the senior TMs (TM1, TM2) and that they were under pressure. Even though the project management team had set criteria for each experience level, the less experienced TMs still felt that being appraised alongside the seniors would not help their appraisal. This is the case we come across in many teams in many industries.

3. While doing software estimation:
a) Say you have a start-up company and a few development projects, and suddenly you get an enhancement-cum-maintenance project. Since you do not have an estimation template for enhancements, you rely on your estimation template for development. You use it (only to get some initial idea) and try to tailor it to suit your enhancement project requirements. In reality this could be a failure.
b) Say you have an estimation model for Java/J2EE technology. Now you are getting a mainframe project and you do not have an estimation model for it. You decide to take a leaf out of your Java/J2EE estimation model and use it as the basis for your mainframe estimation.
In both cases, we are making an apple to orange comparison, as we are left with no choice but to go from a 'known' to an 'unknown' entity (to begin with).

4.
How many times have we seen that, to find the Greatest of All Time (GOAT) in a sport, we compare people who played the sport in different eras? Imagine how that sport was played in each era - then how will you find the GOAT? Of course, various factors might be considered by the agencies (companies/relevant parties) that decide on the GOAT. Sometimes multiple agencies that pronounce a result on the GOAT choose different players (they do not concur), because they may choose different factors. So to find the GOAT, we make an apple to orange comparison, and we try to minimize this uneven comparison by bringing in various factors. Otherwise, how can you say who is the greatest cricketing batsman, the greatest tennis player, or the greatest golfer of all time? How difficult or easy is it to compare a tennis player who played lawn tennis with a wooden racquet against one who played on synthetic hard courts with a graphite racquet?

Is there any technique to carry out such a comparison? Honestly, I am not sure there is one. I can, however, think of a few things that can give some quick guidance, in my opinion:
1. Organisational Process Assets (OPA) - previous projects could have updated the organisation's repository with documents such as logs, Excel sheets, lessons learnt, etc. Those artifacts can contain data on the comparable entities, the data required for each, and the decisions made in the comparison.
2. SharePoint or an online database - where information on the factors deciding the comparison, and data about the comparable entities, can be captured.

Conclusion: Comparing things is a human mindset and has become a prerogative. Nothing can stop a person from comparing things.
In a business context, this comparison takes on importance because of cut-throat competition across the globe in every industry. Therefore, apple-to-orange comparison, even though it may not seem correct, is becoming an increasingly essential part of our day-to-day life.
  25. 1 point
    Dear Ransingh,

A very good question, and the link shared by VK will help you visualize how the CLT works. I want to highlight a common misconception about the Central Limit Theorem. It is probably one of the most misunderstood concepts in Lean Six Sigma. Most people assume that if they have a large sample size (read: greater than 30), then the data set follows a normal distribution. This is far from the truth. Irrespective of the sample size, the sample will always follow the distribution of the original data set. So if the original data set is not normal, then the sample (be it of size 1 or 2 or 10 or 30 or 100 or however big) will also be not normal.

Then where does the CLT apply? The CLT applies to the distribution of sample means or sample sums, i.e. if I pick multiple samples from the not-normal data set, calculate either the sum or the mean of each sample and plot them on a histogram, then that plot will follow a normal distribution.

For example, consider the roll of a single die. The possible values are 1, 2, 3, 4, 5, 6, each having the same probability. A common misconception is that if I roll the die many times (say 6000 times), I will get a normal distribution. This is not true. The roll of a die follows a uniform distribution, and hence if you roll it 6000 times, each of 1 through 6 is likely to occur about 1000 times.

However, consider what happens if 2 dice are rolled and the sum of each roll is noted. The possible values are 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 and 12. Here, however, the probabilities are not all the same:
Prob. of getting 2 = 1/36 (only 1 combination gives 2)
Prob. of getting 3 = 2/36 (2 combinations give 3)
Prob. of getting 4 = 3/36 (3 combinations give 4)
and so on...
7 has the maximum probability of occurrence (6/36), while 2 and 12 have the least (1/36). Now, if I roll the 2 dice 6000 times and plot the sums of each roll on a histogram, the plot will start resembling a normal distribution because of the variation in the probabilities of each sum.
Here, if you notice closely:
1. The original distribution is not normal.
2. Taking 2 data points from the original data set gives me a sample (equivalent to rolling 2 dice). Then for each sample, the sum is calculated and plotted.
3. The CLT is being applied to the sum, and not to the individual data points.
The same is evident in the animation link shared by VK. So let's be aware of the misuse of this theorem and apply it correctly.
P.S. There are multiple online sources where you can also find the mathematical proof of the Central Limit Theorem.
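The dice example above is easy to simulate. This sketch counts outcomes for a single die (which stays uniform no matter how many rolls are taken) and for the sum of two dice (which peaks at 7 and tails off toward 2 and 12, the first step toward a bell shape):

```python
# Simulating the dice example: a single die stays uniform however many
# rolls we take, but the SUM of two dice starts to look bell-shaped.
import random
from collections import Counter

random.seed(42)
ROLLS = 60_000

# One die per trial: frequencies stay flat (uniform, NOT normal)
single = Counter(random.randint(1, 6) for _ in range(ROLLS))

# Sum of two dice per trial: frequencies peak at 7
sums = Counter(random.randint(1, 6) + random.randint(1, 6)
               for _ in range(ROLLS))

print({face: single[face] for face in range(1, 7)})
print({s: sums[s] for s in range(2, 13)})
```

Increasing the sample size (the number of dice summed per trial, not the number of trials) makes the histogram of sums look ever closer to a normal curve, which is exactly the point of the CLT.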
  26. 1 point
    Why Baselining is Important

Baseline is the first step for anyone to understand how well a process is working (or should work), and then how much further we can take it to the next level. In order to calculate the baseline process sigma, we need the following information at hand:
1. Number of units that the process produces
2. Total number of defect opportunities per unit
3. Total number of defects

Let us take an example. In a software development project for an online site (a Java web-based application), there are 30 files (codes). The customer is moving to an Agile framework for the first time and is OK to have 10% defects for the first quarter. There are 5 opportunities to produce defects per file:

| S.No | Opportunity |
|---|---|
| 1 | Usage of console - print statements for debugging purposes, which will clog memory |
| 2 | Code not structured properly |
| 3 | Not handling exceptions properly |
| 4 | Improper relational database handling |
| 5 | User interface guidelines not properly followed |

Here is the defect count for each file:

| File | Defects | File | Defects | File | Defects |
|---|---|---|---|---|---|
| File 1 | 3 | File 11 | 0 | File 21 | 5 |
| File 2 | 2 | File 12 | 0 | File 22 | 2 |
| File 3 | 3 | File 13 | 2 | File 23 | 3 |
| File 4 | 4 | File 14 | 3 | File 24 | 2 |
| File 5 | 5 | File 15 | 4 | File 25 | 1 |
| File 6 | 5 | File 16 | 2 | File 26 | 2 |
| File 7 | 5 | File 17 | 2 | File 27 | 3 |
| File 8 | 2 | File 18 | 3 | File 28 | 2 |
| File 9 | 4 | File 19 | 1 | File 29 | 1 |
| File 10 | 1 | File 20 | 4 | File 30 | 2 |

Total count = 78

To calculate the DPMO:
Total number of defects (D) = 78
Number of units (N) = 30
Total number of defect opportunities per unit (O) = 5
Defects Per Million Opportunities (DPMO) = 1,000,000 * D/(N*O) = 1,000,000 * [78/(30*5)] = 1,000,000 * 78/150 = 520,000

This is equal to 1.45 sigma (with 1.5 shift), with yield at 48% and 52% defects - much higher than the customer-baselined 10% defect allowance.

Now a process improvement was put in place: coding standards were introduced and a code review process was instituted. With this improvement, the opportunities for a defect to happen reduced to two.
| S.No | Opportunity |
|---|---|
| 1 | Improper relational database handling |
| 2 | User interface guidelines not properly followed |

| File | Defects | File | Defects | File | Defects |
|---|---|---|---|---|---|
| File 1 | 0 | File 11 | 0 | File 21 | 0 |
| File 2 | 0 | File 12 | 0 | File 22 | 0 |
| File 3 | 0 | File 13 | 0 | File 23 | 0 |
| File 4 | 0 | File 14 | 0 | File 24 | 0 |
| File 5 | 0 | File 15 | 0 | File 25 | 0 |
| File 6 | 0 | File 16 | 0 | File 26 | 0 |
| File 7 | 0 | File 17 | 0 | File 27 | 0 |
| File 8 | 0 | File 18 | 0 | File 28 | 0 |
| File 9 | 1 | File 19 | 0 | File 29 | 0 |
| File 10 | 0 | File 20 | 0 | File 30 | 1 |

Total count = 2

Total number of defects (D) = 2. Note that the 2 defects are due to one file having a database error and another file having a user interface guideline issue.
Number of units (N) = 30
Total number of defect opportunities per unit (O) = 2
Defects Per Million Opportunities (DPMO) = 1,000,000 * D/(N*O) = 1,000,000 * [2/(30*2)] = 1,000,000 * 2/60 = 33,333.33

This is equal to approximately 3.33 sigma (with 1.5 shift), with about 96.7% yield and 3.3% defects, which is well below the 10% baseline allowance from the customer.

As we see, there is a drastic improvement in the yield, and the process improvement made has virtually eliminated all but 2 defects. The difference between the states before and after the process improvement is vast. The baseline can even be re-shifted to this new state (3.3%), since the improved process is quite capable of producing a lower number of defects.

Conclusion
Thus we can observe how baselining helps us know the current position of the process and how much we can improve it. We also saw how the performance after an improvement differs vastly from the performance before it, and how the baseline is adjusted to allow a proper comparison.
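The DPMO and sigma arithmetic in this example can be checked with a short script. The sigma conversion below uses the standard normal inverse CDF plus the conventional 1.5 shift; note that with the two remaining defect opportunities per unit stated above, the "after" DPMO works out as 2/(30*2) per opportunity.

```python
# DPMO and sigma level (with the conventional 1.5-sigma shift).
# Uses the normal inverse CDF from the standard library.
from statistics import NormalDist

def dpmo(defects, units, opportunities):
    return 1_000_000 * defects / (units * opportunities)

def sigma_level(dpmo_value):
    # Long-term defect rate -> short-term sigma, 1.5-shift convention
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

before = dpmo(78, 30, 5)   # 30 files, 5 opportunities each, 78 defects
after = dpmo(2, 30, 2)     # 30 files, 2 opportunities each, 2 defects
print(f"Before: DPMO={before:,.0f}, sigma={sigma_level(before):.2f}")
print(f"After:  DPMO={after:,.0f}, sigma={sigma_level(after):.2f}")
```

The "before" figure reproduces the 520,000 DPMO / ~1.45 sigma baseline, and the "after" figure shows the improved process comfortably inside the customer's 10% allowance.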
  27. 1 point
    Hi Manik, a very important question for service sector professionals. Here is my answer - anything that needs improvement should be measured first. If something is critical, you can surely find methods to measure it. In the service sector:
Dissatisfaction is measured by poor referrals, poor repeat sales, or loss of customers.
Employee motivation is measured by absenteeism, productivity and attrition.
Success of social activity is measured by the number of posts, new friends and "likes".
Taste of liquor is measured by scores given by experts (with their tongue as the measuring device).
Quality of service is measured through carefully designed feedback forms.
Success of schemes is measured by increase in sales.
Success of an email campaign is measured by clicks.
Success of outbound sales is measured by various conversion rates - cold contacts to leads, leads to hot leads, hot leads to accounts, accounts to up-sell.
I presume you got the idea. Let us play a game - I suggest you and other viewers challenge the creativity/ingenuity of this community by providing situations where measures are tough to devise.