

Leaderboard
Popular Content
Showing content with the highest reputation since 08/22/2018 in all areas

3 points
A deep intrinsic problem with FMEA is how we calculate the RPN (Risk Priority Number): by performing a mathematical operation on three pieces of ordinal-scale data. Severity, Occurrence and Detection are purely ranked numbers, and we never see the absolute difference between two ranks, so mathematical operations like addition, subtraction or multiplication do not strictly hold true, although they can certainly produce a number. We calculate the RPN in exactly this fashion and then use that number to prioritize risks. Moreover, the three building blocks of the RPN are not on the same scale, and they carry different priorities in different organizations. Severity should definitely be considered the most important.

Let's look at a scenario: calculating the RPN for two earthquakes of different magnitudes, one at Richter scale 2.0 and another at 6.0.

1. Richter scale 2.0 earthquake: Severity = 2 (as per the Richter reading), Occurrence = 5 (assuming this occurs very often), Detection = 4 (we use the same detection rating for both scenarios). RPN = 2 * 5 * 4 = 40
2. Richter scale 6.0 earthquake: Severity = 6 (as per the Richter reading), Occurrence = 1 (very infrequent), Detection = 4 (same as above). RPN = 6 * 1 * 4 = 24

If we prioritize risks purely by RPN, the first risk gets prioritized, yet in practice it is far safer than the second. A Richter 6.0 earthquake is rare, but if it occurs even once, it is a disaster. The plain RPN calculation does not account for this, and that matters greatly in practical scenarios.

One way to overcome this problem could be a weighted-count method for calculating the RPN. Severity should get the highest weight (maybe 3), followed by Occurrence (maybe 2) and then Detection (maybe 1). Let's redo the earthquake scenario, calling our metric the Weighted Ordinal RPN (WORPN).

1. Richter scale 2.0 earthquake: Severity = 2, multiplied by weight 3 gives 6; Occurrence = 5, multiplied by weight 2 gives 10; Detection = 4, multiplied by weight 1 gives 4. WORPN = 6 + 10 + 4 = 20
2. Richter scale 6.0 earthquake: Severity = 6, multiplied by weight 3 gives 18; Occurrence = 1, multiplied by weight 2 gives 2; Detection = 4, multiplied by weight 1 gives 4. WORPN = 18 + 2 + 4 = 24

This weighted ordinal RPN brings the second risk to the top priority, which is where the real concern lies. I welcome your further thoughts on this subject.
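The two calculations above can be sketched in a few lines of Python. The weights (3, 2, 1) are the answer's own assumption, not a standard:

```python
# Conventional RPN vs. the proposed Weighted Ordinal RPN (WORPN).
# The (3, 2, 1) weights follow the assumption made in the post above.

def rpn(severity, occurrence, detection):
    """Conventional RPN: product of the three ordinal ratings."""
    return severity * occurrence * detection

def worpn(severity, occurrence, detection, weights=(3, 2, 1)):
    """Weighted Ordinal RPN: a weighted sum instead of a product."""
    w_sev, w_occ, w_det = weights
    return w_sev * severity + w_occ * occurrence + w_det * detection

quake_small = dict(severity=2, occurrence=5, detection=4)
quake_large = dict(severity=6, occurrence=1, detection=4)

print(rpn(**quake_small), rpn(**quake_large))      # 40 24: small quake ranked higher
print(worpn(**quake_small), worpn(**quake_large))  # 20 24: large quake ranked higher
```

Under the plain RPN the frequent minor quake outranks the rare disaster; under the weighted sum the ranking flips, as argued above.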

1 point
This was a tricky one, as most people use only the normal Pareto chart. The answers provided by Mohan PB and Vastupal are very close. The best answer selected is Vastupal's, on the basis of the detailed explanation provided for both the Pareto and the Weighted Pareto, along with an example.

1 point
Hello Sir, we deal with data capture for insurance claim processing. While working on the Pareto along with frequencies, we also examine the nature of each field, i.e., how the claim is impacted by that field. Apart from this, we look at the nature of the failure and the impact it creates: for the same field, for example, what is the impact when it is miskeyed (typo) versus when it is not keyed at all. Based on this we prioritize our actions. Thanks! Vanchinathan M

1 point
Apart from "Cost", there could be other measures like "Amount of rework involved". This depends not just on the type of error but also on the point in the process lifecycle at which the error is discovered. Cost can depend on the amount of rework involved, and can also include fines or penalties. Further, there could be other factors like "Show-stopper" versus "Non-show-stopper", with the obvious meaning that a show-stopper error brings the entire process to a grinding halt and hence warrants a higher weight. But the problem with this and other error-criticality-based weights (including "Fatal", "Critical", "Non-critical", etc.) is quantifying the conceptual criticality. This is tricky: while it is clear that a fatal error is more serious than a critical error, the question is by how much. Would a fatal error be rated twice as serious as a critical error? If so, why?

In back-office documentation for the container shipping process, some voyages involve declaring the contents of the containers in detail to the Customs of the destination country. Only after this declaration is cleared can the container be shipped. If any changes to the declaration are needed after clearance, additional costs must be paid. If the changes are made after a certain point relative to the time of sailing, there is also an additional fine. In some cases, no changes are allowed at all, so the container has to be unloaded from the vessel and rolled over to the next voyage, resulting in delayed revenue from the customer and perhaps loss of future business. In this situation, a documentation error that necessitates re-declaration to Customs, additional costs and/or fines, or worse still a rollover, will carry a higher weight than other errors even if the frequency of such "costly" errors is lower. Where relevant, additional audits for these errors are also justified.

1 point
A Pareto chart (or Pareto diagram) is an excellent tool for prioritizing potential root causes or defects/problems. It is based on the Pareto Principle, the 80-20 rule, which originated in the theory of the unequal distribution of wealth. The principle dictates that roughly 80% of the failures come from only 20% of the causes. The tool is based on present data; it does not project the future movement of any contributing factor. It is a visual representation of the vital few against the trivial many.

A Pareto diagram is a combination bar-and-line graph of accumulated data, where the data associated with a problem is divided into smaller groups by cause. It is a dual-scale chart. The length of each bar represents frequency, arranged with the longest bar on the left and the shortest on the right, in decreasing order. The left Y axis shows the number of problems or defects, and the right Y axis shows the cumulative percentage; for an accurate Pareto chart, the cumulative percentage of 100% should correspond to the total number of problems or defects. The X axis lists the types of defects or problems, and the cumulative % is derived after arranging all defect/problem types in decreasing order.

We can use a Pareto chart in the following situations:
- When analysing data about the frequency of problems in a process.
- When comparing results before and after taking a countermeasure.
- When communicating our data to others so they understand it better.
- When analysing causes broadly by looking at their specific components.
- When we have many problems and want to focus only on the significant ones.

A weighted Pareto does not look at how often defects occur, as a normal Pareto chart does, but at how important they are. Attributes we can use in a weighted Pareto include the severity of the defect, the cost related to the defect, and the detectability of the defect. A weighted Pareto may change how we see the priority of improvement projects. It requires a valuation: the chosen attribute is used to assign a value (for example, a cost) to each type of defect/problem so we can identify which type has more impact on the company and its reputation, and where the high-cost defects occur. We construct the weighted Pareto by multiplying each frequency by its assigned attribute value and rearranging all defects in decreasing order to calculate the cumulative % of each defect type.

For example, in a sheet-metal plant, suppose the defects are 10 cracks, 25 dents and 30 deformations. A plain Pareto chart ranks them Deform, Dent, Crack in decreasing order. Now consider the cost of these defects: we spend 5 rupees to repair one deformation and 7 rupees to repair one dent, but a crack cannot be repaired, so we lose 20 rupees per crack. Then:
10 cracks waste 10 * 20 = 200 rupees
25 dents waste 25 * 7 = 175 rupees
30 deformations waste 30 * 5 = 150 rupees
By weight, Crack, which was least important before, now comes to the top position and deserves the most focus for countermeasures, while Deform, which was in first position before, now comes last. So a weighted Pareto changes how we see the priority of improvement projects.
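The sheet-metal example can be sketched as a small Python calculation, with the counts and repair costs taken from the figures above:

```python
# Re-rank defects by cost-weighted frequency instead of raw frequency,
# using the counts and rupee costs from the sheet-metal example above.

defects = {"Crack": 10, "Dent": 25, "Deform": 30}        # observed counts
cost_per_defect = {"Crack": 20, "Dent": 7, "Deform": 5}  # rupees per defect

# Ordinary Pareto: rank by frequency alone
by_frequency = sorted(defects, key=defects.get, reverse=True)

# Weighted Pareto: rank by frequency x cost
weighted = {d: defects[d] * cost_per_defect[d] for d in defects}
by_cost = sorted(weighted, key=weighted.get, reverse=True)

print(by_frequency)  # ['Deform', 'Dent', 'Crack']
print(by_cost)       # ['Crack', 'Dent', 'Deform']  (200, 175, 150 rupees)
```

The same three defect types produce opposite rankings depending on whether the sort key is raw frequency or rupee impact.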

1 point
The central limit theorem (CLT) establishes that, in some situations, when independent random variables are added, their properly normalised sum tends toward a normal distribution even if the original variables themselves are not normally distributed. The question is how this can be demonstrated, as it looks very intimidating at times. Why is 30 considered the minimum sample size in some forms of statistical analysis? Is there any rationale for this?

1 pointDear Ransingh, There is a very good animation that you can see at http://onlinestatbook.com/stat_sim/sampling_dist/ You can change the distribution type of independent random variables from different distributions, and see how the averages (or sums) become normally distributed. As the sample size approaches 30, the curve becomes quite normal. Of course, beyond 30, it will become even more beautifully normal but the normality test passes very well at 30. The differences you see in the match of the obtained curve with the perfect bell curve between sample sizes of 5 and 6 are much bigger than what you see between 30 and 31. As the perfection towards normality grows very slowly with an addition of every single number after 30, the number 30 is considered as a reasonably good minimum size. Of course, the number 30 is just a rule of thumb and one could always take more to be safer.
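For readers who prefer code to the animation, here is a minimal simulation of the same idea using an exponential (highly skewed) source distribution. The sample sizes and trial count are arbitrary illustrative choices:

```python
# CLT sketch: sample means drawn from a skewed exponential distribution
# look more and more normal as the sample size n grows. We track skewness,
# which is ~2 for the raw exponential and should shrink toward 0 (normal).
import random
import statistics

random.seed(1)

def sample_means(n, trials=2000):
    """Draw `trials` sample means, each over n exponential(1) variates."""
    return [statistics.mean(random.expovariate(1.0) for _ in range(n))
            for _ in range(trials)]

skews = {}
for n in (2, 5, 30):
    means = sample_means(n)
    m = statistics.mean(means)
    s = statistics.stdev(means)
    # Standardised third moment: a rough skewness estimate
    skews[n] = sum(((x - m) / s) ** 3 for x in means) / len(means)
    print(f"n={n:2d}  mean={m:.3f}  sd={s:.3f}  skew={skews[n]:.2f}")
```

The skewness drops sharply between n = 2 and n = 30, mirroring the observation above that the gain in normality per additional sample flattens out around 30.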

1 point
Great attempts by everyone, neatly explaining that the customer will not be willing to pay for the cost of the "hidden factories". The chosen best answer is Vastupal's, for providing the explanation in great detail. Do read Venugopal's answer to understand the concept with the help of a clearly outlined example, complete with calculations. Keep it up!

1 point
Rolled Throughput Yield (RTY) is calculated by multiplying the yields of each process. Let me illustrate an application of this metric with an example. XYZ company manufactures the friction material that goes into auto disc brake pads. The processes under consideration start with the mix, which is subjected to a preform process, then compression molding and then grind finishing. Let's assume the standard weight of mix required for each pad is 100 gms. If 10000 gms of mix is fed into the processes, the yield for each of the 3 processes (preform, compression molding and finishing) is calculated in the last column of the table, and the resulting RTY is 0.8. This means that when a quantity of mix equivalent to 100 pads was fed into the system, we ended up getting only 80 pads.

The loss of yield falls into 2 categories: 1. losses due to spillage, gaseous waste and finishing dust (SGF); 2. rejections that were either scrapped or reworked (SRW).

The RTY brings out the practical yield of the process at large. If we take up a Six Sigma project to improve the RTY (say from 0.8 to 0.9), it will lead to the revelation and analysis of the 'Hidden Factory' in terms of the scrap and rework handling going on in between the processes. Further probing would raise the question of how much of the SGF wastage can be reduced. Factories are likely to have practices by which reworked material from a particular process is fed into the next process. Similarly, wastage due to spillage may be retrieved and rerouted to the preform process, and the grind dust may be collected and recycled in permitted proportions into the molding process. If around 2% of the SGF and 8% of the SRW are reintroduced into the process, the resulting yield (had we not considered RTY) would have worked out to 90%, and we would have missed exposing and quantifying the "Hidden Factory" and the opportunity for improvement.
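A minimal sketch of the RTY arithmetic. The three per-process yields below are illustrative assumptions (the original table is not reproduced in this digest), chosen only to be consistent with the overall RTY of 0.8 quoted above:

```python
# RTY = product of the sequential process yields.
# The per-process yields here are assumed figures multiplying to ~0.80.
from math import prod

yields = {
    "Preform": 0.95,              # assumed
    "Compression molding": 0.90,  # assumed
    "Finishing": 0.9357,          # assumed
}

rty = prod(yields.values())
print(f"RTY = {rty:.2f}")  # ~0.80: mix for 100 pads in, ~80 good pads out
```

Note how the overall yield is well below the worst individual process yield; that gap is exactly the "hidden factory" the answer describes.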

1 pointGreat job, Vastupal and Venugopal for providing a detailed and well explained answer. The chosen best answer is that of Venugopal as the table makes the description easy to read and understand.

1 point

Decision based on test   | Reality: Ho is True                           | Reality: Ho is False
Accept Ho                | Correct decision (1 - alpha), Confidence Level | Type II error (Beta)
Reject Ho                | Type I error (alpha)                          | Correct decision (1 - Beta), Power of the Test

If we want the test to pick up a significant effect, it means that whenever H1 is true, it should accept that there is a significant effect. In other words, whenever H0 is false, it should accept that there is a significant effect; that is, whenever H0 is false, it should reject H0. This is represented by (1 - Beta). As seen from the table above, this is defined as the power of the test. Thus, if we want to increase the assurance that the test will pick up a significant effect, it is the power of the test that needs to be increased.

1 point
If we want to ensure that a statistical test picks up a significant effect, what will we want to increase: confidence or the power of the test? Before answering, we should define the Null Hypothesis, the Alternate Hypothesis, confidence and power.

Null Hypothesis: indicates that there is no significant effect.
Alternate Hypothesis: indicates that there is a significant effect.
Confidence: the probability that the test accepts the Null Hypothesis when the Null Hypothesis is true.
Power: the probability that the test rejects the Null Hypothesis when the Alternate Hypothesis is true.

Statistical power ranges from 0 to 1, and as power increases, the probability of making a Type 2 error decreases. For a Type 2 error probability of β, the corresponding statistical power is 1 - β; this means we accept the Alternate Hypothesis when it is true. A Type 2 error is failing to reject the Null Hypothesis when in fact the Alternate Hypothesis is true. So by decreasing the Type 2 error, we decrease the probability of wrongly retaining the Null Hypothesis.

Alpha is the Type 1 error probability you are willing to accept; its value is typically set between 0.01 and 0.1. An alpha of 0.05 means you are willing to accept a 5% chance that your results are due to chance rather than a real effect. A Type 1 error is rejecting the Null Hypothesis when it is in fact true. The confidence level is 1 - α. If we want to increase the confidence level, we need to decrease α, which makes the test more conservative about declaring an effect; increasing confidence does not, by itself, help the test pick up a significant effect.

So, from the above explanation, if we want to ensure that a statistical test picks up a significant effect, we want to increase the power of the test.
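The power relationship can be sketched from first principles for a one-sided one-sample z-test. The effect size, alpha and sample sizes below are illustrative assumptions:

```python
# Power of a one-sided one-sample z-test, computed directly from the
# normal distribution. Effect size, alpha and n are illustrative choices.
from statistics import NormalDist

def power_one_sided_z(effect_size, n, alpha=0.05):
    """P(reject H0 | H1 true): the power, i.e. 1 - beta."""
    z_crit = NormalDist().inv_cdf(1 - alpha)  # rejection threshold under H0
    shift = effect_size * n ** 0.5            # noncentrality under H1
    return 1 - NormalDist().cdf(z_crit - shift)

for n in (10, 30, 100):
    print(f"n={n:3d}  power={power_one_sided_z(0.5, n):.3f}")
```

Power climbs with sample size while alpha (and hence the confidence level) stays fixed, which is the post's point: confidence guards against false alarms, power is what detects real effects.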


1 point
It is great to see detailed answers for a question on a tool deemed complex by many Six Sigma experts. The question sought answers on what, apart from subjectivity in the rating scale, is the limitation of using RPN. Most answers bring out one key point: RPN does not draw your attention to the severity number. This is explained very well in several answers, making it complex to choose the best one. Breaking the tie is the "suggested treatment" for this limitation, which was provided by Vishwadeepak Choudhary, hence chosen as the best answer. I do recommend everyone read all the answers to get a well-rounded understanding.

1 point
As my role is in process optimization, learning Six Sigma will help me reduce manual effort and put some automation in place using the DMAIC methodology, which in turn helps me improve quality, customer satisfaction and productivity.

1 point
By and large, we come across situations where we want the mean value of the outcome of a process (central tendency) to be focused around a specified target value with as little variation as possible (dispersion). There are situations where variation assumes relatively higher importance than central tendency, mostly because high variation is less tolerable than some shift in central tendency. Interestingly, there are even situations where variation, or controlled variation, is advantageous.

Study of process potential: The process potential index Cp is used to study the variation, or spread, of a process with respect to specified limits. While studying process potential, we are interested in the variation and not in the central tendency. The underlying idea is that if the process can keep its variation within the specified limits, it possesses the required potential; the centering of the mean can always be achieved by setting adjustments. In other words, if Cp is not satisfactory, a satisfactory Cpk (process capability) can never be achieved, since Cpk can never exceed Cp; at best it can equal Cp.

Examples where variation is generally unfavorable to the outcome:
1. Analysis of Variance: When evaluating whether there is a significant difference between means (central tendency) across multiple sets of trials, as in ANOVA, the variation between sets and within sets is compared using F tests. In such situations, the comparison of variation assumes high importance.
2. Relative grading systems: Many competitive examinations use the concept of a 'percentile', which is actually a relative grading system. Here, more than the absolute mark of a student, the relative variation from the highest mark matters; relative variability becomes the key decisive factor.
3. Control chart analysis: While studying a process with a control chart, instability and variation are given first importance. Only when these parameters are under control can we meaningfully study the 'off-target' condition, i.e. the central tendency.
4. Temperature variation in a mold: In certain compression molding processes, temperature variation across different points on the mold surface does more harm than the mean temperature. The mean temperature is permitted a wider tolerance, but variation across the mold warps the product.
5. Voltage fluctuations: Many electrical appliances are damaged by high variation (fluctuation) in voltage even when the mean voltage (central tendency) is maintained.

Where controlled variation is favorable:
1. Load distribution in a ship: While loading a ship, the mean value of the load can vary, but the distribution of the load is more important for maintaining the ship's balance on the water.
2. The science of music: Those who understand the science of music would agree that, more than the base note, the appropriate variation of the other notes with respect to the base note is what produces good music.

Some examples where variation is outright favorable: Systematic Investment Plans (SIPs) take advantage of variation in NAVs to accumulate wealth; here even an adverse shift of the central tendency is compensated by the variation. And a law of physics states that Force = Mass x Acceleration (F = ma); if we consider speed as the variable, it is the variation of speed (the acceleration) that determines the force, while the mean speed (central tendency) has little relevance.
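The Cp versus Cpk point made above can be sketched numerically. The specification limits and process parameters below are invented purely for illustration:

```python
# Cp looks only at spread; Cpk also penalises an off-centre mean,
# so Cpk <= Cp always. Spec limits and process data are assumed values.

def cp(usl, lsl, sigma):
    """Process potential: spec width over six-sigma process spread."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mu, sigma):
    """Process capability: distance from mean to nearer spec limit."""
    return min(usl - mu, mu - lsl) / (3 * sigma)

usl, lsl = 10.6, 9.4    # assumed specification limits
mu, sigma = 10.1, 0.15  # process slightly off-centre

print(f"Cp  = {cp(usl, lsl, sigma):.2f}")        # 1.33: the spread is capable
print(f"Cpk = {cpk(usl, lsl, mu, sigma):.2f}")   # 1.11: off-centring erodes it
```

With the mean recentred at 10.0, Cpk would rise to equal Cp, matching the statement that Cpk can at best equal Cp.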

1 point
Check sheet: A check sheet is used to systematically record and compile data from historical sources, or from observations as they occur. It can be used to collect data in real time at the location where the data is actually generated. The data can be quantitative or qualitative. The check sheet is one of the seven basic tools of quality.

What it does:
1. Creates easy-to-comprehend data from a simple, efficient process.
2. With every entry, builds a clear picture of facts, as opposed to the opinions of individual team members.
3. Standardizes agreement on the definition of each condition or event.

How is it done? It can be treated as an 8-step process:
1. Agree on the definition of the events or conditions being observed. E.g., if we seek the root cause of Severity-1 defects, agree first on what "Severity-1" means.
2. Decide who collects the data.
3. Note the source of the data. The data could be from a sample or from an entire population, and it can be quantitative or qualitative.
4. Decide on the knowledge level required of the person collecting the data.
5. Decide on the frequency of data collection (hourly, daily, weekly, monthly...).
6. Decide on the duration of data collection (how long data should be collected to yield a meaningful outcome).
7. Construct a check sheet that is simple to use, concise and complete.
8. Be consistent in accumulating the data throughout the collection period.

What can a check sheet look like? It normally has:
- the project name for which the data is collected
- the name of the person collecting the data
- the location where the data is collected
- the date(s) on which the data is collected
- any significant identifiers, if applicable
- a column for the event name
- net totals for rows and columns

A sample check sheet for a hospital:
Project Name: In-Patient bottlenecks | Name: Rajesh R | Shift: Night | Location: Ward Room | Dates: 01-Sep-2017 to 03-Sep-2017

Reason                                  01-Sep  02-Sep  03-Sep  Total
Patient's attire not taken care of         1       1       1      3
Beds not available                         1       1       1      3

Here 'Shift' is the key identifier and 'Reason' is the event.

A sample check sheet for a mainframe batch:
Project Name: Mainframe Op | Name: Rajesh R | Shift: Midnight | Location: Batch | Dates: 09-Apr-2017 to 11-Apr-2017

Reason                  09-Apr  10-Apr  11-Apr  Total
Weekday batch failure      1       9       1     11
Weekend batch failure      1       5       6     12

Here mainframe batch jobs fail both for jobs running on weekdays and for jobs running on the weekend.

Future state of the check sheet: Check sheets have largely been replaced by Business Process Management (BPM) software, which can handle much more complex situations with relative ease and can quickly present data in an easy-to-view format, so the standalone value of the check sheet seems diminished. From a quality perspective, though, I personally feel the check sheet remains part of the 7 basic tools of quality. For many small companies that do not yet have BPM software, the check sheet is still the go-to option. Unless an organisation is well versed in the nuances of BPM software or an alternative tool, it cannot jump straight into those new techniques. I sincerely feel the check sheet should therefore live on, perhaps in a modified form, with attention to the viewing format and the ease with which we collect the data.

Conclusion: The check sheet is one of the key seven basic tools of quality. With it, we can provide correct data to our processes. Even if considered obsolete, it can still be used by companies that cannot afford advanced BPM software or tools, or by startups whose staff lack exposure to BPM tools and who want to experiment with check sheets to get a feel for things before moving to those tools later. Hence check sheet availability is still a must for people to work with.
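A check sheet tally is easy to reproduce in a few lines of code. The observations below are dummy entries mirroring the hospital example:

```python
# A minimal digital check sheet: tally observed events (reasons) per day.
# The observation list is dummy data echoing the hospital example above.
from collections import Counter, defaultdict

observations = [
    ("01-Sep", "Patient's attire not taken care of"),
    ("02-Sep", "Patient's attire not taken care of"),
    ("03-Sep", "Patient's attire not taken care of"),
    ("01-Sep", "Beds not available"),
    ("02-Sep", "Beds not available"),
    ("03-Sep", "Beds not available"),
]

# One Counter of per-date tallies for each reason (the check sheet rows)
tally = defaultdict(Counter)
for date, reason in observations:
    tally[reason][date] += 1

for reason, counts in tally.items():
    print(f"{reason}: {dict(counts)}  total={sum(counts.values())}")
```

Each row of the printed output corresponds to one row of the paper check sheet, with the same per-day marks and row totals.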

1 point
Excellence is defined as the quality of being extremely good.

Personal excellence: In simple words, setting the bar higher (the benchmark) in whatever activities the individual, compared with the rest, does.

Process excellence: Providing an environment where the processes are highly stable and controlled, with minimal or no variation and minimal or no wastage (muda). The focus is on continuous improvement to keep processes highly stabilized.

Operational excellence: Reflects how you, your team, unit or organisation excel at parameters such as cost, human resources, scope, time, quality, etc. By excelling at these, the provider of a service can deliver value to the customer with optimal or maximum efficiency.

Business excellence: The means by which you run your business with effective strategies, efficient business plans and best business practices so that optimal results are achieved at a sustained rate.

How each relates to the others: Personal excellence is directly tied to process excellence. If and only if the individual is willing to adhere to the processes laid out can process excellence, or any other initiative, succeed. If the cultural shift or mindset is not there in the individual or team, no change will work. This can be represented by the formula: Quality of the solution (Q) * Acceptance of the solution (A) = Effectiveness of the solution (E). Unless there is acceptance of a thing (the human part), nothing can be done. So if the individual has the desire to excel at his or her work, he or she will strive to ensure that the organization achieves process excellence.

Process excellence provides a path for continuous improvement. Its purpose is to streamline all the processes and make them stable, achieving in the process a minimal degree of variation and minimal wastage. With a process excellence system in place, grey areas in operational excellence and business excellence can be identified and improved or rectified. Practically, it is difficult to achieve excellence in one area when another is absent. For instance, business and operational excellence require process improvements; if streamlining does not happen there, there is no excellence in the business or operational aspects either. Similarly, without human involvement and the elevated mindset of the individual, it becomes difficult to run the processes at a top-notch level.

From an organisational perspective, the organisation should:
- Provide a conducive work environment in which individuals are encouraged to share their ideas and thoughts, creating transparency and making them feel ownership of the unit's problems and constraints (Personal Excellence)
- Encourage individuals to showcase their creativity in designing and providing solutions to problems (Personal Excellence)
- Create challenging contests and reward people in categories such as best creativity, best solution, optimal solution, etc. (Personal Excellence)
- Set up process standards and metrics for each parameter (define the expectation), including the upper and lower limits and the customer specification limits (Process Excellence)
- Conduct awareness sessions on process expectations with reasoning and justification, providing details with SMART goals (Process Excellence)
- Ensure that individuals and teams adhere to the standards through constant monitoring via audits, inspections and reviews (Process Excellence)
- Look for continuous improvement opportunities periodically and adjust the process baseline if required (Process Excellence)
- Define the operational parameters that require excellence (Operational Excellence)
- Conduct awareness sessions for key stakeholders on those operational parameters and provide the plan for when and how to achieve them (Operational Excellence)
- Track the status of operational excellence through project management reviews, status reports and similar artefacts, and address deviations (Operational Excellence)
- Preserve the best practices that were followed to achieve operational excellence (Operational Excellence)
- Define the strategies and plans needed to improve business results (Business Excellence)
- Define the best practices for getting business-oriented goals and activities done (Business Excellence)
- Hold confidential meetings with key stakeholders, present the envisaged plan and convey expectations (Business Excellence)
- Conduct monthly or quarterly review meetings with the respective units and examine the 4-quarter dashboard (Business Excellence)
- Use the Business Management section of the Customer Satisfaction Survey to check whether the organisation is on target with its objectives (Business Excellence)
- Document the business results and the effective means used to achieve them (Business Excellence)