Everything posted by Mohamed Asif Abdul Hameed

1. 自働化 - Jidoka - Autonomation simply means that the process automatically halts when there are non-conformities/irregularities/abnormalities in the system. The Andon light system is one of the vital components of autonomation. In the reference picture below, Andon is used as a visual management tool to show the status of production.
Legend reference:
Green - All good, normal operation > proceed further
Yellow - Warning, issue identified, requires attention > CAPA required
Red - Production halted, issue not identified > immediate supervisor inspection and RCA required
Some of the planning aspects necessary to benefit from Jidoka are listed below. Organizations should combine JIT and Jidoka; by doing this, overproduction is avoided and poor quality is minimized, along with increased productivity. Under continuous flow, this avoids bottlenecks and idle time.
- Implement lean flow before autonomation
- Make effective use of systems and technology to make Andon lights interactive; this can improve communication between operators and engineers
- Keep downtime minimal to maximize Quality and Overall Equipment Effectiveness (OEE)
- Have Rapid Issue Response (RIR) teams ready to address open and high-priority tickets
- Integrate Andon boards, monitoring systems and alert systems for quick response
- Train operators and engineers on autonomation tools - Andon, Andon cord, fixed-position stop, Poka-Yoke, sensors - and appropriate lean tools
- Empower the workforce for the pursuit of excellence
- Corrective action is essential; however, preventive action and Poka-Yoke should be emphasized for effective Jidoka benefits
Moving to Jidoka: Minimize Manual Labor > Mechanize Flow > Implement Lean > Optimize > Automate > Autonomate
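To make the Andon legend above concrete, here is a minimal Python sketch; it is illustrative only, and the state names and inputs are hypothetical rather than taken from any specific Andon system.

```python
from enum import Enum

class AndonState(Enum):
    GREEN = "Normal operation - proceed"
    YELLOW = "Issue identified - attention/CAPA required"
    RED = "Production halted - supervisor inspection and RCA required"

def andon_state(abnormality_detected: bool, issue_identified: bool) -> AndonState:
    """Map the legend described above to an Andon state.

    abnormality_detected: a sensor / poka-yoke check flagged a non-conformity.
    issue_identified: the cause is localized (warning) vs. unknown (halt the line).
    """
    if not abnormality_detected:
        return AndonState.GREEN
    return AndonState.YELLOW if issue_identified else AndonState.RED

# Example: a sensor flags an abnormality whose cause is not yet identified
print(andon_state(abnormality_detected=True, issue_identified=False))
# -> AndonState.RED: halt the line (Jidoka) and trigger RCA
```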
2. I have summarized some of the methods to overcome and defeat groupthink during a brainstorming session. A few best practices include:
- Engage in open discussions
- Allocate a "Devil's Advocate" in the team - Red Teaming
- Structure the brainstorming session
- Encourage wild ideas
- Evaluate alternatives cautiously - use the "Six Thinking Hats" approach
- Disrupt abundantly when required
- Encourage conflict of ideas, so that the group doesn't end up with limited decisions
- Give everyone a chance to speak up during the session, perhaps in a timed round-robin fashion
- Add new elements to brainstorming → introduce "Reverse Brainstorming" and "Brainwriting"
- Give more attention to group dynamics
- Encourage diversity
- Occasionally, invite a cross-functional team member as an external consultant
- Remain impartial until the wrap-up
3. By definition, Nash Equilibrium is a stable state of a system involving the interaction of different players, where no player can gain by an independent (in isolation) change of strategy if the strategies of the other players remain unchanged. Below is the payoff matrix for Company A and Company B for their decision to diversify or not. In this scenario:
"Players" are the firms, Company A and Company B.
"Moves" are the actions the firms can take: either diversify or not diversify (something like Apple's strategic decision of getting into the car business).
"Payoffs" are the profits the firms will earn (diversifying increases a firm's operational costs, but in the long run can increase revenues).
Here the equilibrium outcome is that both companies will diversify. Even though both A and B would perform better if they did not diversify, such decisions are highly unstable, as each company gains the upper hand by diversifying (an extra +30) when the competitor is not diversifying. This result is called the "Nash Equilibrium". Neither Company A nor Company B has anything to gain by modifying its own decision separately. Simply put, the Nash Equilibrium position is the most stable state, though not the most obvious solution, when there is a multi-party conflict.
Nash Equilibrium is one of the fundamental concepts in Game Theory and provides the basis for rational decision making. It can be used to predict a company's response to competitors' prices and decisions. In 2000, advice from economists raised £22.5 billion for the UK government from an auction of 3G bandwidth licenses for mobile phones. Source: UKRI Economic and Social Research Council. In an oligopolistic market, if one organization reduces its service prices, the other competitor must reduce its prices as well in order to retain customers.
Classical Indian examples:
> Bharti Infratel and Jio striking a Nash Equilibrium for telecom infrastructure sharing
> The dilemma of Shiv Sena whether to support or scoot from the alliance with the BJP while forming the government in Maharashtra
Organizational decision making involves deciding between alternatives, uncertainty, complexity, interpersonal issues and high-risk consequences. Organizations can apply Game Theory by changing the payoffs. Even though it is difficult to shift from a competitive to a cooperative strategy with any degree of success, it is better for organizations to cooperate with rivals/competitors, which would leave everyone better off. Applying the concept in organizations helps narrow down the number of possible outcomes, and therefore the strategic decisions that would be taken.
Take away: lose-win and win-lose situations in any kind of relationship usually do not last; they are temporary and can easily turn into a lose-lose situation later. To make a strong, long-term relationship sustainable, we have to rely upon a win-win situation.
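As a quick illustration of how the both-diversify equilibrium can be checked mechanically, here is a small Python sketch. The payoff numbers are hypothetical, chosen only to mirror the structure described above (including the +30 advantage for diversifying when the competitor does not); the actual figures in the original payoff matrix may differ.

```python
from itertools import product

# Hypothetical payoffs (profits): (A's move, B's move) -> (A's payoff, B's payoff)
payoffs = {
    ("Not Diversify", "Not Diversify"): (100, 100),
    ("Diversify",     "Not Diversify"): (130, 60),   # +30 upper hand for A
    ("Not Diversify", "Diversify"):     (60, 130),   # +30 upper hand for B
    ("Diversify",     "Diversify"):     (80, 80),
}
moves = ["Diversify", "Not Diversify"]

def is_nash(a_move, b_move):
    """A profile is a Nash equilibrium if neither player gains by deviating alone."""
    a_pay, b_pay = payoffs[(a_move, b_move)]
    best_a = all(payoffs[(alt, b_move)][0] <= a_pay for alt in moves)
    best_b = all(payoffs[(a_move, alt)][1] <= b_pay for alt in moves)
    return best_a and best_b

for profile in product(moves, moves):
    if is_nash(*profile):
        print("Nash equilibrium:", profile, "payoffs:", payoffs[profile])
# -> ('Diversify', 'Diversify'), even though (Not Diversify, Not Diversify) pays both more
```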
4. The value of any measurement is the sum of the actual measurement and the measurement error. Measurement system variation/error can occur because of precision or accuracy. The Gage R&R tool measures the amount of variation in the measurement system; the variation could come from the device or from the people. In the diagram below, Ref 1.1 is a classical example of high precision and low accuracy: even though precision is high, the values/points are highly biased and inaccurate. In Ref 1.3, the values/points are both accurate and precise.
Resolution is pivotal in measurement systems, as it discriminates between measurement values. After looking at resolution, it makes sense to look at accuracy, that is, to measure the distance between the average value and the true value; moving from a constant bias to zero bias is the next objective. Linearity is the consistency of the bias existing in the system over the measurement range. Then comes the stability of the system, that is, whether the measurement system can produce the same value over time when the same sample is measured, and finally precision - repeatability and reproducibility. The primary objective is to find out whether there is any variation (either process or appraiser) and then look at the total measurement system variation. So the best order for checking the variation would be:
1. Resolution / Discrimination against tolerance (smallest unit of measure of the gage)
2. Accuracy / Bias (closeness of the data to the target value)
3. Linearity (change in bias value within the range)
4. Stability (change in bias over a period)
5. Precision - Repeatability and Reproducibility (closeness of the values to each other)
Another view on the order could be: 1. Resolution, 2. Accuracy, 3. Linearity, 4. Precision and 5. Stability.
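A minimal sketch of the accuracy (bias) and precision (repeatability) checks in steps 2 and 5 above, assuming a set of repeated measurements of a single reference part with a known true value; all numbers are illustrative.

```python
import statistics

# Illustrative only: repeated measurements of one reference part by one appraiser on one gage
true_value = 10.00                                         # certified reference value (assumed)
measurements = [10.02, 10.01, 10.03, 9.99, 10.02, 10.01]

bias = statistics.mean(measurements) - true_value          # accuracy check (step 2)
repeatability = statistics.stdev(measurements)             # precision: within-appraiser spread (step 5)

print(f"Bias          : {bias:+.3f}")
print(f"Repeatability : {repeatability:.3f}")
# Linearity would repeat this bias check at several points across the measurement range,
# and stability would repeat it over time on the same sample.
```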
5. IoT (Internet of Things) means connecting things (devices, appliances, utilities, objects, machines, etc.) to the internet. A car gate/barrier opening automatically when you reach your home location, or an air conditioner, washing machine, geyser or TV switching on and off based on a pattern/specification, are some examples of IoT. According to recent research, the share of multipurpose RATs (Remote Access Tools) affecting IoT has nearly doubled in recent years (6.5% in 2017 to 12.2% in 2018). Source: Kaspersky Global Research and Analysis.
Security concerns of IoT: as multiple devices are connected over the internet, there is a possibility that information/data can fall into the wrong hands (hackers), resulting in misuse of the data and raising security concerns such as:
- Data privacy
- Home security
- Network hacking
- Distributed Denial of Service (DDoS) attacks
- Deliberate radio frequency jamming
- Extortion losses
- Theft of financial information/money
To summarize, the losses could be physical, digital, economic, psychological, reputational or social damages. There is a clear limitation of IoT security: we cannot install antivirus software on most IoT devices (smart TVs, internet security cameras), as they do not have adequate computing power to run an antivirus program.
To overcome security concerns with IoT, we could follow some of the best practices listed below:
- Create strong passwords for the connected devices - encrypted, complex and not guessable (e.g., not "admin" or "12345")
- Reset/change passwords at a regular frequency
- Do not use the same password for all connected devices
- Enable notifications for any intrusion/invasion to take rapid action (intrusion prevention)
- Frequently monitor for suspicious/unusual activities (anomaly detection)
- Apply regular application updates from hardware vendors for improved security
- Select devices with built-in security and embedded firmware for IoT connectivity
IoT has great potential; doing due diligence before investing is wise.
6. Both a scatter plot and a bubble plot examine the relationship between two variables (an X variable and a Y variable). However, in a bubble chart, the size of each bubble represents the value of a third variable; size can mean the area or the width of the bubble, based on the input specification. A bubble chart is built upon a scatter plot as its base. The scatter plot and bubble plot below reference the same data points.
Scatter Plot 1 - examining the relationship between the Y variable and the X variable.
Bubble Plot 1 - examining the relationship between the Y variable and the X variable, with bubble size representing the third variable.
Variants: based on the groups, we could have a simple bubble plot or one with groups.
Bubble Plot 2 - with groups - 3 categories: A, B, C.
Limitations and misinterpretations:
- The area or size of a bubble increases or decreases proportionally within the plot and does not depend on the largest value/size of the bubble, so there is a high chance of misinterpreting a value based on bubble size alone. However, in Minitab we have the option to edit the bubble size (Minitab can calculate the size, or we can use the actual size from the specified variable).
- It is more complex to understand and read the data compared to a scatter plot.
- It becomes chaotic/confusing when there are many data points in the bubble plot (in Bubble Plot 2 above, 50 data points are considered with 3 categories); it is not ideal for large data sets.
- It is hard to identify a smaller bubble (it might be covered/hidden), especially when it is close to or overlapped by a bigger bubble - information is lost. Using jitter can help reveal overlapping points; however, it could confuse the reader, as jitter is generated by a random function (it is not the same point each time it is generated).
- It can be difficult to determine the exact location of the data points when the bubbles are clustered.
- When there is no clear legend, the reader can misinterpret/misunderstand the data points and the relationship.
- Negative size? Any negative/null value of the third variable would not be visible; after all, a shape cannot have a negative area.
Data is valuable only if we know how to visualize it and give it context. It is better to select the chart based on the message that we want to share with the audience rather than just going with a chart type.
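For readers who want to try this outside Minitab, here is a minimal bubble plot sketch in Python using matplotlib, with illustrative data; the explicit size scaling addresses the area-versus-value misreading noted above.

```python
import matplotlib.pyplot as plt

# Illustrative data: X, Y and a third variable mapped to bubble area
x = [2, 4, 6, 8, 10]
y = [3, 7, 5, 9, 6]
third_var = [10, 40, 25, 60, 15]

# Scale the third variable to marker area explicitly, so readers are not left
# guessing how size maps to value; alpha helps reveal overlapping bubbles.
sizes = [v * 20 for v in third_var]

plt.scatter(x, y, s=sizes, alpha=0.5, edgecolors="black")
plt.xlabel("X Variable")
plt.ylabel("Y Variable")
plt.title("Bubble plot: marker area represents a third variable")
plt.show()
```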
7. It is an incremental validation technique, advantageous especially in an agile development environment. In regression testing, we re-run previously performed test cases to validate the working of current functionality. This is done mainly to test whether code changes (enhancements) have any impact on the existing features. Regression testing is necessary because any change to the existing code can throw erroneous output and might cause the software to work inaccurately. It is a software testing type usually done at regular intervals, specifically after bug fixing (error correction), enhancements (adding a new feature to existing software), code optimization, environment changes and performance fixes. For instance, in the example referred to below, Instagram added dark mode to its existing application. Irrespective of whether the update/release is a minor, major or patch fix, regression testing is performed. On average, 1-4 weeks of regression testing is performed before releasing to the production environment, depending on the complexity of the application/system. However, we can optimize and make the testing effective by formulating and following a good regression testing strategy. Comprehensive techniques include: retest all, regression test selection, and prioritization of test cases. Below are the different types of regression testing.
Types of Regression Testing
SN | Type | Performed when
1 | Corrective | No changes are introduced to the application specification
2 | Re-test all | All existing test cases are reused
3 | Selective | Testing a specific module / subset
4 | Progressive | Changes to the specification + new test cases created
5 | Complete | Multiple changes are performed
6 | Partial | New code is added to existing code
7 | Unit | Unit testing phase (code isolation), dependencies blocked
In the QA process, regression testing is a significant step. However, it can be complex, tedious (executing tests again and again) and time-consuming. The challenge is to achieve wide test coverage with minimal execution of test cases. For rapid and effective testing, a strategy is used when selecting the test type. Best practices and some recommended steps for effective testing follow:
- Primarily, maintain and amend the test cases in the regression test suite. Amendments can include adding new test cases, removing outdated test cases and modifying expected test results. Further categorize test cases in the test suite for effective regression testing, viz., reusable, re-testable and obsolete categories.
- Based on the bug report, identify the vital few problematic areas to prioritize testing. Focusing on functionality that is commonly and frequently used by users and selecting appropriate test cases can make the testing effective.
- We might miss certain scenarios in test cases, so it is recommended not to forget/ignore random testing.
- Use cross-team QAs/testers to perform regression testing.
Having an effective regression strategy can help an organization save the time and effort invested in quality testing.
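As a small illustration of selective regression testing, here is a sketch using pytest markers to tag and run only a prioritized regression subset. The marker names and the functions under test are assumptions for illustration, not part of any specific application.

```python
# test_photo_feed.py - a minimal sketch of regression test selection with pytest.
# The custom markers ("smoke", "regression") would be registered in pytest.ini to
# avoid warnings; the functions under test are placeholders for real application code.
import pytest

def apply_dark_mode(theme: str) -> str:        # new feature (enhancement)
    return "dark" if theme == "dark" else "light"

def upload_photo(data: bytes) -> bool:         # existing feature
    return len(data) > 0

@pytest.mark.smoke
def test_dark_mode_enabled():
    assert apply_dark_mode("dark") == "dark"

@pytest.mark.regression
def test_existing_upload_still_works():
    # Re-run after the dark-mode change to confirm existing behavior is intact
    assert upload_photo(b"\x89PNG") is True

# Selective regression run (only the prioritized subset):
#   pytest -m regression
```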
8. There is, of course, a substantial difference between the D-M-A phases of DMAIC and those of DMADV. I have briefly tabulated the differences, common tools and deliverables used in each of the phases of both methodologies.
9. An outlier is an anomaly, an extreme observation: any observation that falls outside the pattern of the overall population distribution. Simply put, it is any data point that is more than 1.5 * IQR either below the first quartile or above the third quartile. Many times an outlier is treated as a mistake in data collection, and it can skew statistical relationships. However, we could get an outlier for the following reasons:
- Data entry / typing errors
- Measurement errors
- Experimental errors
- Intentional/dummy data
- Data processing errors (due to formulas)
- Sampling errors
- Natural (not usually an error; it could be a novelty in the data)
We can find outliers by:
- Foremost, using common sense
- Visually (a graphical summary output helps to find outliers, as do boxplots and scatterplots)
- Using statistical tests; there are many tests to find an outlier, a few of which are listed below: Grubbs' test for outliers (also called the extreme studentized deviate test), Dixon's Q test, Cochran's C test, Mandel's h and k statistics, Peirce's criterion, Chauvenet's criterion, Mahalanobis distance and leverage
Methods of detection include: Z-score / extreme value analysis, probabilistic and statistical modeling, linear regression models, proximity-based models, information theory models, and high-dimensional outlier detection methods. In SAS, PROC UNIVARIATE and PROC SGPLOT can be used to find outliers. Statistical tests can be used to detect an outlier; however, they should not be used to determine what to do with it (ignore/remove). One should have good domain knowledge when analyzing outliers. Below is an example data set shown with and without the outlier.
We could have either univariate or multivariate outliers.
Univariate outlier: a data point that is an outlier on one variable.
Multivariate outlier: a combination of outliers on at least two variables.
Other forms of outliers include:
Point outliers: a single outlier.
Contextual outliers: can be noise in the data.
Collective outliers: can be a subset of uniqueness in the data (novelties).
We can ignore an outlier when it is a bad outlier and:
- We know that it is wrong data (common sense)
- We have a big data set (ignoring the outlier doesn't matter in this situation)
- We can go back and validate the data set for accuracy
- The outlier does not change the result but does influence a change in assumptions
- The outlier influences both the result and the assumptions; here it is better to run the analysis with and without the outlier (as we are not sure whether it is a mistake or a misclassification of the data), then investigate both results to see whether the significance is minor or major
- The outlier is data from an unintended population
We should not ignore an outlier when it is a good outlier and:
- Results and outcomes are critical
- We have too many outliers (usually when they are not unusual)
Before ignoring, we should run through this checklist (for cautious and safe removal):
- Is the outlier due to a data entry typo?
- Is the identified outlier value scientifically impossible?
- Is the assumption of a Gaussian distribution for the data set uncertain?
- Does the outlier value seem scientifically interesting?
- Do we have substantial information about the outlier such that we need to retain it?
- Are there any special circumstances/situations/cases for the data points?
- Are there any potential measurement errors?
- In a multi-outlier situation, can masking be a problem? (In masking, the "outlier" is not detected.)
If the answer to the above questions is no, then either:
(Situation A) the so-called outlier could have come from the same Gaussian population; we simply collected an observation from the top or bottom tail of the population, or
(Situation B) the identified outlier could be from a different distribution, and we collected it due to a mistake or a bad sampling technique.
For Situation A, removing the outlier would be a mistake. For Situation B, we can remove the outlier cautiously.
Removal of outliers can be dangerous: it may improve the distribution and fit, but most of the time some important information is lost. Points to remember if we remove an outlier:
- Trim the data set
- Winsorize (replace outliers with the nearest good data)
- Transform the data, discretization
- Top, bottom and zero coding
- Replace the outlier with the mean or median (extreme outliers will influence the mean, but not the median; refer to the example below), or use random imputation
When we run experiments and observe many outliers in the data, we should repeat the data collection instead of simply removing them, and when the outliers are significant, consider using robust statistical techniques. Outliers are not always bad data points; however, when the data set is small, an outlier can greatly influence the data statistics (we could get skewed data, inflated or deflated means, a distorted range, and type I and type II errors). So it is better to do a thorough investigation and have background domain knowledge while performing this analysis. The analysis differs case by case, and based on that we should take a cautious decision on whether to remove, keep or change the outlier.
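A minimal Python sketch of the 1.5 × IQR rule and of the Winsorization mentioned above, on an illustrative data set; it also shows how strongly a single extreme value pulls the mean but not the median.

```python
import statistics

def iqr_outliers(data):
    """Flag points beyond Q1 - 1.5*IQR or Q3 + 1.5*IQR (the rule stated above)."""
    q1, _, q3 = statistics.quantiles(data, n=4)
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in data if x < lower or x > upper], (lower, upper)

def winsorize(data, lower, upper):
    """Replace outliers with the nearest fence value instead of dropping them."""
    return [min(max(x, lower), upper) for x in data]

data = [12, 13, 13, 14, 15, 15, 16, 17, 18, 95]   # 95 is an obvious extreme value
outliers, (lo, hi) = iqr_outliers(data)

print("Outliers:", outliers)
print("Mean with / without outlier:", statistics.mean(data),
      statistics.mean([x for x in data if x not in outliers]))
print("Median (barely moves):", statistics.median(data))
print("Winsorized data:", winsorize(data, lo, hi))
```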
10. Net Promoter Score (NPS) is a metric commonly used to measure customer loyalty. NPS scoring was created by Fred Reichheld of Bain & Company. NPS originally stood for Net Promoter Score; however, it has evolved to stand for Net Promoter System. Many Fortune companies, such as Apple, GE, Amex, Allstate and Walmart, use NPS, and it can be a focal point for organizational learning. It is calculated by asking the customer just one question: "On a scale of 0 to 10, how likely are you to recommend this product/service to family, friends or colleagues?" It is an 11-point scale, 0 being not at all likely to recommend and 10 being most likely to recommend the service/product. The responses are further categorized into 3 segments:
Promoters (ratings 9, 10)
Passives (ratings 7, 8)
Detractors (ratings 0 to 6)
Promoters are delighted by the service/product, loyal, and most likely to recommend. Passives have a neutral opinion; they neither promote nor demote. Detractors are dissatisfied and most likely to switch to competitors, or to wherever the service is excellent (e.g., from SBI to AmEx). An objective of NPS is to listen to detractors, fix the dissatisfaction and move them towards being promoters.
NPS = % of Promoters - % of Detractors
The NPS score ranges from -100 to 100 and is simply calculated by the formula above: -100 signifies that all respondents are detractors and 100 signifies that all are promoters; a score > 0 implies that promoters outnumber detractors. Below are the NPS leaders by industry (Source: NICE Satmetrix - US Consumer Report 2018). Allstate finished 4.9 points higher in 2018 compared to year-end 2017.
Some of the quick benefits include: simplicity, ease of use, easy and quick follow-up, learning and experience, and adaptability.
Categorical values (qualitative): observations clubbed into groups or categories (Promoters, Passives, Detractors).
Ordinal values: observations on a rating scale (0-10), which has an implied order.
With ordinal data it is easy to detect shifts in responses, even when there is a change in the distribution. For instance, when 30% of respondents rate between 0-3 and 40% rate between 4-6, the categorical classification treats the entire 70% as detractors, and any movement within the detractors is untraceable when the data is categorical. So why is the ordinal data converted to categorical data? Is statistical power and precision lost in the move?
We know the standard error is derived from the variance. Applying variance:
Var[NPS] = Var[% of Promoters - % of Detractors]
Var[NPS] = Var[% of Promoters] + Var[% of Detractors] - 2 Cov[% of Promoters, % of Detractors]
Note: Cov[% of Promoters, % of Detractors] is going to be negative, as a customer cannot be both a Promoter and a Detractor at the same time. Therefore,
Var[NPS] = Var[% of Promoters] + Var[% of Detractors] - 2 (negative number)
Var[NPS] = Var[% of Promoters] + Var[% of Detractors] + (positive number)
Here, the variance of NPS is greater than the sum of the variances of its parts. Even so, the categorical form is considered better than the ordinal form. Categorizing also influences the customer: a customer who originally planned to give a 5 or 6 may give a better category when labels are shown. Categorization is not symmetrical. Converting from ordinal to categorical does lose some information, but the extreme responses (most likely and least likely) are good predictors, so what is lost in the scale conversion is acceptable to lose. We could not start categorically with just 3 points, as we need at least 4 points to understand the intensity of agreement.
The 11-point scale is used to capture satisfaction at a granular level (macro and micro), and the responses are then categorized to understand the groups better. Categorizing/segmenting is a better way to notice patterns and movements between customers, and this helps in improving the experience at specific touchpoints.
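A minimal sketch of the NPS calculation described above, using hypothetical survey responses.

```python
def nps(ratings):
    """Compute Net Promoter Score from 0-10 ratings using the segments above."""
    promoters  = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical survey responses
responses = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]
print(f"NPS = {nps(responses):.0f}")
# 4 promoters, 3 passives, 3 detractors out of 10 -> 40% - 30% = +10
```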
11. We use a run chart to see if there is any sign of special cause variation in our process data. It is a graphical representation of process performance plotted over time (hourly for continuous flow processing, and most commonly in days or months). Most importantly, what is a run? It is one or more consecutive data points on the same side of the median (either above or below it). Variation can be common cause or special cause. Point to note: common cause variation is the output of a stable process and is predictable; special cause variation is the output of an unstable process and is not predictable. By using a run chart, we can find trends and patterns in the process data.
Common patterns of non-randomness include:
- Mixture patterns
- Cluster patterns
- Oscillating patterns
- Trend patterns
When we run a run chart in Minitab, it tests whether the above patterns exist in the data. Sample data - gold price per 10 grams for the last 55 months was considered. In the chart we can witness clustering and trends.
Cluster pattern: in general, a set of points in one area of the chart, above or below the median line. The thumb rule for a cluster is 6+ consecutive nearby points above/below the median line. We can also check the p-value to see if there is a potential cluster in the data: specifically, when the p-value is < 0.05, we can say the data possibly indicates clustering. In the run chart referred to above, the approximate p-value for clustering is 0.000, which is less than 0.05, so we reject the null hypothesis. Clusters can be a sign of potential sampling or measurement issues.
Trend pattern: a sustained drift in the data, either upward or downward. The thumb rule to conclude a trend is 6+ consecutive points each higher than the previous point, or, the other way around, 6+ consecutive points each lower than the previous point. In the chart referred to above, we can observe an upward trend, and the p-value is also less than 0.05, indicating a potential trend.
Now that we know about clusters and trends, note the following: the opposite of a cluster is a mixture, and the opposite of a trend is oscillation.
Oscillation: when the process is not stable, we get data points spread above and below the median line that look like an oscillation. Thumb rule: 14+ points increasing and then decreasing cyclically in one continuous period; for a p-value < 0.05, possible oscillation can be concluded.
Mixture: when there are few or no points near the center line, with 14+ points moving up and down across the median line and a p-value < 0.05, we may have a potential mixture in the data.
Run chart and control chart: in a control chart, along with the center line, we have upper and lower control limits. Another major difference is that in a control chart the center line is the mean, whereas in a run chart the center line is the median; a run chart does not give any detail on statistical control limits. We can see the control chart as an enhancement of the run chart. With a control chart, we can check stability - whether the process mean and variation are stable and whether any points are out of control - and we can check normality (whether the data is normal or non-normal), but it does not provide a view of patterns. When we use a control chart from the Assistant view in Minitab, we get a Stability Report as the output; it shows the commonly used patterns for reference but does not highlight the pattern in the output.
Control charts are useful over a run chart when the focus is on variation and on identifying potential deviations. However, the downside of control charts is that they have the limitations below and can cause unnecessary waste of time:
- False alarms
- Incorrect assumptions
- Incorrect control limits
Both run charts and control charts have their own advantages and are used for different purposes [Run - trends and patterns; Control - stability]; each is useful depending on the objective, situation and analysis required.
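For readers without Minitab, here is a rough Python sketch of the two thumb rules discussed above - counting runs about the median and looking for a 6+ point monotone stretch - on illustrative data. This is only a heuristic check, not Minitab's approximate p-value tests.

```python
import statistics

def runs_about_median(data):
    """Count runs: groups of consecutive points on the same side of the median."""
    med = statistics.median(data)
    sides = [x > med for x in data if x != med]     # points exactly on the median are skipped
    return 1 + sum(1 for a, b in zip(sides, sides[1:]) if a != b)

def longest_monotone_stretch(data):
    """Number of points in the longest run of consecutive increases or decreases."""
    best = run = 1
    for prev, cur, nxt in zip(data, data[1:], data[2:]):
        same_direction = (cur - prev) * (nxt - cur) > 0
        run = run + 1 if same_direction else 1
        best = max(best, run)
    return best + 1

data = [31, 32, 30, 33, 35, 36, 38, 39, 41, 43, 44, 46]   # illustrative monthly values
print("Runs about the median:", runs_about_median(data))
print("Longest monotone stretch:", longest_monotone_stretch(data),
      "points (6+ suggests a possible trend)")
```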
12. In DMADV, the focus is on new product/service design, unlike DMAIC, which works on an existing product/service. During the last phase of DMADV, verification of the design is performed, and whether the design is capable of meeting the needs of the customer is validated. Numerous pilot runs will be required to validate and verify the design outcomes. A major aspect of this phase is to check whether all the metrics that were designed are performing as expected - conformance to specification. Some of the commonly used tools in the Verify phase include control charts, control plans, flagging, Poka Yoke, check sheets, SOPs and work instructions.
Software application design: from a new design viewpoint, verification asks whether the software application is developed in the right way, and validation asks whether the right software application is being produced. In simple terms, verification is checking whether the application works correctly without any errors/bugs, and validation is checking whether the application meets the requirements and expectations.
Verification vs. Validation (software):
- Verification: application and design review, code walk-through, code inspection | Validation: black box and white box testing
- Verification is static testing | Validation is dynamic testing
- Verification is performed first | Validation is usually performed after verification
- Verification is done without executing the software | Validation is done by executing the software
Automotive manufacturing: with reference to gearbox manufacturing, as per the new design in the DMADV process, the high-level manufacturing steps include preforming, annealing, machining, producing teeth, shaving, grinding and inspection. Here, verification means comparing the gearbox to the design requirements for material, dimensions, tolerances, etc., i.e., all specs are verified. In validation, after inspection, the gearbox is assembled, given a dry run and tested to check whether it runs as expected.
Verification vs. Validation (manufacturing):
- Verification is done during development, review and inspection, production and scale-up | Validation is usually done before scale-up and after actual production
- Random inspection can be done for verification | Stringent checks are done during validation
Validation can be done directly, skipping verification, in some scenarios, especially when we are not able to measure component outcomes or when the cost of verification is very high.
Medical devices: verification is usually done on the design - design inputs, process and outputs - through tests, inspections and analysis. Validation is checking whether the intended need of the medical device is met. Source: U.S. Food and Drug Administration (FDA).
13. Pareto Analysis is used to separate the vital few from the trivial many parameters: the vital few are roughly 20% of the causes, the trivial many the remaining 80%. This principle is otherwise called the 80-20 rule. It simply says that the majority of results come from a minority of causes. In numerical terms:
- 20% of inputs are accountable for 80% of output
- 80% of productivity comes from 20% of associates
- 20% of causes are accountable for 80% of the problem
- 80% of sales comes from 20% of customers
- 20% of efforts are accountable for 80% of results
Example data set:
Metric | Freq | Percentage | Cumulative
Demand Exceeds Supply | 232 | 24.12% | 24.12%
Incorrect Memory and CPU Usage | 209 | 21.73% | 45.84%
Bandwidth Constraints | 203 | 21.10% | 66.94%
Network Changes | 64 | 6.65% | 73.60%
Fatal Bugs in Production | 59 | 6.13% | 79.73%
Poor Front-End Optimization | 52 | 5.41% | 85.14%
Integration Dependencies | 39 | 4.05% | 89.19%
Database Contention | 34 | 3.53% | 92.72%
Browser Incompatibility | 23 | 2.39% | 95.11%
Device Incompatibility | 14 | 1.46% | 96.57%
Hardware Conflicts | 13 | 1.35% | 97.92%
Inadequate testing | 9 | 0.94% | 98.86%
Too much code | 6 | 0.62% | 99.48%
Exception handling | 5 | 0.52% | 100.00%
(Pareto chart of the data above.)
Some common misuses include the following scenarios:
- Working only on the vital few parameters: there could be other potential parameters where the frequency is low and which fall among the trivial many factors; when the criticality or severity of such a parameter is high, it gets underestimated simply because its frequency is low. In the example above, Inadequate testing can be critical: insufficient test cases or poor test reviews can lead to multiple production issues, which is not factored in when focusing only on the vital few. In an ideal situation, 80% of the resources should focus on reducing the vital few and 20% of the resources on minimizing the trivial many parameters.
- Using Pareto for defects belonging to multiple categories: another misuse of Pareto analysis is combining defects from multiple categories. We need to clearly understand that the categories must be mutually exclusive.
- Using Pareto when parameters are not collectively exhaustive: what is collectively exhaustive? Collectively, all the failures in the list should cover all possible failures for the problem, i.e., there should not be any gap. Definition: events are said to be collectively exhaustive if the list of outcomes includes every possible outcome.
- Performing analysis on small data sets / few data points: for statistically significant analysis, we have to use relatively large data sets rather than working with a few data points. At the same time, the number of categories needs to be practically large enough; the above Pareto analysis does not make sense when the data set is relatively small.
- Inaccurate measuring: visually looking at the Pareto chart and selecting the vital few, rather than using the cumulative percentage cut-off of 80%.
- Analyzing defects only once: Pareto analysis should be performed before the problem is solved, during the implementation period to see the trend, and post improvement. It is a repetitive and iterative process, rather than running it only once and focusing on the defects identified during the early stages of the analysis. 80 and 20 should add up to 100, not 75-20 or 90-40.
- Considering 80 on the left axis: the left axis displays frequency and the right axis the percentage; sometimes people read 80 off the left axis, which leads to selecting the wrong vital few and hence to poor problem solving.
- Flattened Pareto analysis: if there is any bias in the data collection methods, we might end up with flat bars; this happens mainly when we are separating/breaking vital problems into small problems. In that case it does not make sense to proceed with Pareto analysis; rather, work on action plans based on severity and criticality.
- Considering defects as root causes: treating the vital defects identified during the analysis as root causes, and not analyzing further / deep diving to understand the root cause. This will not stop the defect from occurring; rather, it amounts to applying a band-aid to the identified loopholes.
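A short Python sketch of the vital-few selection on the data set tabulated above, applying the cumulative-percentage cut-off (rather than reading 80 off the left frequency axis).

```python
# Vital-few selection on the defect data tabulated above (cumulative % up to ~80%)
defects = {
    "Demand Exceeds Supply": 232, "Incorrect Memory and CPU Usage": 209,
    "Bandwidth Constraints": 203, "Network Changes": 64,
    "Fatal Bugs in Production": 59, "Poor Front-End Optimization": 52,
    "Integration Dependencies": 39, "Database Contention": 34,
    "Browser Incompatibility": 23, "Device Incompatibility": 14,
    "Hardware Conflicts": 13, "Inadequate testing": 9,
    "Too much code": 6, "Exception handling": 5,
}

total = sum(defects.values())
cumulative = 0.0
vital_few = []
for metric, freq in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += 100 * freq / total
    vital_few.append((metric, freq, round(cumulative, 2)))
    if cumulative >= 80:          # cut-off on the cumulative % (right axis), not the frequency axis
        break

for row in vital_few:
    print(row)
```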
14. Process maps are vital in any Six Sigma project. They visually represent the steps, actions and decisions that constitute the process. With the help of process maps, we can easily identify strengths and weaknesses in the process flow and identify value adds and non-value adds, depending on the level of the process map. Process maps are an essential component of a project and are useful at both micro and macro levels. The sequence of process maps runs from Level 1 to Level 5, as commonly referred to in most organizations:
Level 1 - high-level SIPOC
Level 2 - flow chart level
Level 3 - swim lanes
Level 4 - value stream mapping
Level 5 - KPIV, KPOV
For detailed insights on the levels of process maps, refer to the previous forum discussion (posted October 6, 2017): https://www.benchmarksixsigma.com/forum/topic/34895-process-mapping/?ct=1566389702
In most projects, basic process maps are created in the initial stages. In DMAIC improvement projects, we use a process flow chart and SIPOC in Define, an As-Is process map in Analyze, and swim lanes, multilevel process maps and To-Be process maps in Improve (if any change in flow is required). As-Is describes the current state and To-Be describes the future state; a To-Be process map is essentially the improved flow for the current state.
Which level of process map do we select in a DMAIC project? The process map is selected depending on the complexity and type of the project: on how much information is necessary, how specific the information/details to be captured are, and on the intention and purpose. For instance, VSM is used in improvement projects, while for complete radical transformation of processes a detailed end-to-end process map is used, as in Business Process Reengineering projects. A few considerations while selecting process maps include:
- Scope of the project (in scope and out of scope)
- Improvement vs. process reengineering project
- Level of focus/granularity
- Objective (SLA improvement / optimization)
- Automation (full / partial / RPA / RDA)
Based on the above considerations, the level of process map is selected. Process maps are usually created in Visio; some organizations use advanced process mapping tools. In our organization we use Blueworks Live (from IBM) for cross-functional collaboration in designing process maps across our offices for improvement projects. (Reference: sample process map from Blueworks Live.)
Selecting the right process map based on purpose:
Purpose | Process map type
For a process snapshot | SIPOC
Simple description of how a process works | High-level process map
Radical process transformation | Detailed end-to-end process map (within scope)
BPR | Detailed end-to-end process map (within scope)
For critical problem solving | Detailed end-to-end process map (within scope)
For displaying different departments' operations | Swim lane map
For displaying interaction/collaboration | Relationship map
For lean implementation | Value stream mapping
Each type of map has its pros and cons; hence, based on the situation, criticality and purpose, we can select the relevant process map.
15. Drum Buffer Rope, commonly known as DBR, is an application of the Theory of Constraints (ToC) used for planning and scheduling: the drum is the constraint, the buffer is inventory, and the rope controls scheduling. DBR was developed by Eliyahu Goldratt, who also spearheaded Optimized Production Technology, ToC and Critical Chain Project Management (CCPM). His famous writings include "The Goal" and "The Race"; in "The Race" he describes a logistical system based on the metaphor developed in "The Goal". DBR is similar to Kanban or CONWIP (Constant Work in Progress).
The primary objective of DBR is to ensure throughput expectations are met while managing operating expense and inventory. In an assembly line, typically a manufacturing setup, the constraint is referred to as the drum (the slowest step/activity); this constraint acts as a drumbeat to set the speed of the system. The process of releasing new work into the system is referred to as the rope. The drum and rope work together: as the drum completes its load, the rope allows new work orders to be released. To ensure the drum is always engaged with load (working at full capacity), a buffer is created in front of the drum. System throughput depends on the drum's working. Considering the production line below, A3 is the constraint and is used as the drumbeat: the pace is set based on A3. A3 can process only 35 pieces per hour, while A1 and A2 produce more than 35 pieces per hour. The rope monitors the workload, pulls work from the supplier and keeps the buffer ready.
Referring to the 5 focusing steps in the Theory of Constraints (ToC): 1) Identify the constraint, 2) Exploit the constraint, 3) Subordinate all other decisions to the exploit decision, 4) Elevate the constraint, 5) Avoid inertia.
Drum (high throughput, as the constraint churns out work at maximum capacity): identifies the constraint in the system; sets the pace, schedule and capacity.
Buffer (high throughput, as a constant buffer is maintained before the constraint): exploits the constraint; shields the constraint (ensuring a buffer is created).
Rope: subordinates work release to the constraint; alerts/signals when to release work to the constraint.
By effectively using DBR, we get maximum output and less idle time. In general, the objective of DBR in a manufacturing setup is minimum inventory, maximum output and obvious/predictable delivery dates; processes that are predictable can be well planned and continuously improved.
Both DBR and Kanban are pull-based systems to control production and are very similar to each other. However, there are a few differences: DBR assumes a single constraint in the production system, which need not be the case for Kanban; Kanban controls the inventory level at each production stage, whereas DBR does so only at the constraint. Kanban can be improved by applying DBR (ToC), combining lean and JIT manufacturing along with a planning tool to improve throughput, thus preventing overproduction and creating a highly efficient system. Kanban with a supermarket before the constraint is similar to DBR. Kanban can have more than one rope for effective processing steps and hence has an upper hand over DBR.
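A minimal sketch of the drum setting the pace, using the A3 = 35 pieces/hour constraint from the example above; the A1 and A2 rates and the buffer sizing numbers are illustrative placeholders.

```python
# Illustrative station rates (pieces per hour); only A3 = 35 is taken from the example above.
stations = {"A1": 50, "A2": 45, "A3": 35}

drum = min(stations, key=stations.get)      # the slowest station is the drum (constraint)
throughput = stations[drum]                 # system throughput is set by the drum

print(f"Drum (constraint): {drum} at {throughput} pieces/hour")
print(f"System throughput: {throughput} pieces/hour")

# Rope: release new work only at the drum's pace, and hold a small time buffer
# of work in front of the drum so it is never starved.
buffer_hours = 2                            # assumed buffer size, for illustration
release_rate = throughput                   # pieces/hour released into the line
buffer_size = throughput * buffer_hours     # pieces held ahead of the constraint
print(f"Release {release_rate}/hour; hold about {buffer_size} pieces ahead of {drum}")
```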
16. Bessel's work includes: correction to the seconds pendulum, corrected observation of the personal equation, correcting the effects of instrumental errors, Bessel functions (cylinder functions), measuring stellar parallax, and many more. The application of Bessel's correction in statistics is the correction to the sample variance and sample standard deviation calculations: the formula uses (n-1) instead of n, where n is the number of observations in the sample.
Why n-1 instead of n? The sample variance/standard deviation computed with n gives a biased estimate of the population variance/standard deviation - a minor bias. Bessel's correction eliminates this bias in the sample standard deviation and sample variance calculations, so using n-1 makes these estimates significantly more accurate.
Let's refresh the standard deviation and variance calculations and deep dive with an example. For both the standard deviation and the variance of sample data, N is replaced with (n-1): for a population we use N, and for a sample we use (n-1). For more clarity, consider a data set of 100 observations (the population) and sample data randomly picked at 5% from the population. For the same sample data, Excel gives two different values:
"STDEV.P" formula -> divides by N
"STDEV.S" formula -> divides by (n-1)
R code, considering the same sample vector:
sample_data <- c(82, 31, 95, 33, 92)
> mean(sample_data)
[1] 66.6
> var(sample_data)
[1] 1021.3
Verification:
> sum(sample_data)/length(sample_data)
[1] 66.6
> sum((sample_data - mean(sample_data))**2)/length(sample_data)
[1] 817.04
The manual calculation differs from the direct var() output because R applies Bessel's correction inside var(). Applying the correction to the manual calculation:
> sum((sample_data - mean(sample_data))**2)/(length(sample_data)-1)
[1] 1021.3
As samples are only representations of the population, variation exists between them; this is referred to as "sampling variation". Samples can vary; the population is fixed. Usually, for normally distributed data, when we sample, we most likely select values around the mean and miss the lower and upper extreme values. Why this bias? This downward bias is corrected by dividing the sample numerator by (n-1), while the population numerator is divided by N. This follows from the degrees of freedom (DoF): the sample data loses one observation [for 6 samples, DoF is 5; for 5 samples, DoF is 4; for n samples, DoF is (n-1)]. Dividing by (n-1), that is, dividing by the DoF instead of the sample size, gives an unbiased estimate of the variation - a less biased estimator and thus a more accurate calculation.
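The same comparison in a short Python sketch, showing the divide-by-n and divide-by-(n-1) calculations side by side for the sample vector above; statistics.pvariance and statistics.variance are the library equivalents of the two formulas.

```python
import statistics

sample_data = [82, 31, 95, 33, 92]
n = len(sample_data)
mean = sum(sample_data) / n

# Divide by n (population formula) vs. n-1 (sample formula with Bessel's correction)
var_n  = sum((x - mean) ** 2 for x in sample_data) / n
var_n1 = sum((x - mean) ** 2 for x in sample_data) / (n - 1)

print(var_n)                                  # 817.04  (like Excel's VAR.P / STDEV.P squared)
print(var_n1)                                 # 1021.3  (like Excel's VAR.S or R's var())
print(statistics.pvariance(sample_data))      # library equivalent of dividing by n
print(statistics.variance(sample_data))       # library equivalent of dividing by n-1
```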
17. Scope creep: when we start any project, one of the key initial steps is defining the scope (what is in scope and what is outside the boundary, i.e., out of scope). Change is unavoidable, so change in scope is also inevitable based on current performance and the situation. Anything (additional features, requirements or considerations) added over and above the accepted/agreed-upon scope is scope creep. To quote an example from software development (traditional waterfall and agile), or even from a DMAIC Six Sigma project: scope creep can happen in any of the phases. The image below gives a view of scope change iterations in each sprint delivery in an agile environment; iterative scope change, along with the Scrum Master's expertise, allows better management and control of scope change. A simple definition of scope creep would be any change introduced to the project after the requirement gathering phase.
CHANGE is not usually bad! Change is necessary in order to sustain competition.
When do we usually have scope creep? It can be due to market demand, new technology, a change in business need, or development constraints. Some additional scenarios would be:
- When change control is uncommon
- Poor scope identification (initial analysis) and definition during the project charter stage
- When communication is not clear
- When there is external influence
- Bad project planning and deployment
- Dynamic market change
- Quick changeover / late-point differentiation (usually for a mature process/product)
- When the project manager is weak
- When the initial scope definition is no longer applicable
Impact of scope creep: project cost overruns; mix-ups/misunderstandings of new requirements.
How do we identify scope creep? Scope definition itself is not a simple process; it involves various steps/documents, viz., scope planning, scope definition, scope work breakdown structure, scope verification and scope change control. Scope definition is therefore a critical and vital step, so whenever there is a deviation or scope creep, we have to take the necessary immediate actions rapidly. Identify scope creep through misalignment with objectives, deviation from deliverables and multiple change requests from external stakeholders, and most importantly, anticipate and ensure availability during scheduled project connects [do NOT wait for situations to come up - be proactive].
How do we avoid scope creep?
- Document the requirement details
- Create SMART objectives
- Deploy a change control plan
- Prepare a clear and attainable project schedule
- Verify, validate and get SIGN-OFF from stakeholders (before the project starts)
- Engage the team
- Create an SOW (Statement of Work) to outline the work and monitor development progress
How do we manage scope creep? If we do not manage scope creep, it can have negative impacts. Use a good project management tool/application at your disposal (such as JIRA, Trello or Easy Projects), and to manage the changes, some of the following best practices/actions can be taken:
- Define/re-define the project scope
- Re-baseline and measure the change difference (work re-estimation; e.g., in agile, a T-shirt sizing exercise with the development team)
- Keep all stakeholders informed
- Update the project cost document (request extra resources/cost if required)
- Update the new target deadline / milestone document / Gantt chart and communicate it to the project team, sponsor and supplier
- Reprioritise WIP
Managing key project team members, stakeholders and users: keeping everyone informed is critical and necessary in projects, so keep customers, users, members and stakeholders engaged.
- Be agile
- Create a two-way communication channel (through tools/applications/forums/meetings)
- Ensure regular updates and information are available to all stakeholders at any point in time
- Ensure time schedules for meetings are feasible; if required, re-schedule, and do not cancel meetings without reason
- Keep the RACI matrix updated and key SPOCs defined during the project
- Maintain relationships and request sign-off whenever required
- Get USERS using the product or service at an early stage
- It is OK to say "No" when it is not possible to accept a scope change (final stage, 11th-hour feedback, when the change doesn't make sense, ...)
- Keep development progress and details transparent; this ensures team members are not demotivated or diverted from the objectives.
18. Covering some basics. Posterior probability (a conditional probability): we use this when we have new considerations based on recently updated data and want to update the probability of an event. With a normally distributed prior and likelihood, we can describe the posterior probability with a closed-form expression; this is referred to as a closed form solution. Posterior probability is often referred to as revised probability. Bayes' theorem gives:
P(A|B) = P(B|A) * P(A) / P(B)
where A and B are events; P(A) is the probability of A occurring; P(B) is the probability of B occurring; P(A|B) is the conditional probability of A given that B occurs; P(B|A) is the conditional probability of B given that A occurs; P(A) and P(B) are the prior probabilities. Note: the posterior probability is calculated by updating the prior probability. To make it simple: Posterior Probability = Prior Probability + New Evidence.
Consider an example of the gold rate and rupee strength. Suppose the gold rate increased 70% of the time and decreased 30% of the time, i.e., P(Increase) = 0.7 and P(Decrease) = 0.3. In the recent past, after demonetization, when the gold price increased, the rupee had lost strength against the dollar 80% of the time, and when the gold price decreased, the rupee had gained strength 40% of the time, i.e., P(Loss | Increase) = 0.8 and P(Gain | Decrease) = 0.4. Here, P(Increase) and P(Decrease) are prior probabilities, and P(Loss | Increase) and P(Gain | Decrease) are conditional probabilities. Now getting to the result:
P(Increase | Loss) = P(Loss | Increase) * P(Increase) / [P(Loss | Increase) * P(Increase) + P(Loss | Decrease) * P(Decrease)]
= (0.8 * 0.7) / (0.8 * 0.7 + 0.6 * 0.3) = 0.56 / 0.74 ≈ 0.76, using P(Loss | Decrease) = 1 - P(Gain | Decrease) = 0.6.
So the probability that the gold price will increase, given a weak rupee, is 0.76.
We can apply the Bayesian approach widely in the business scenarios below to predict outcomes: marketing and R&D, pricing decisions, new product development, logistics, promotional campaigns. Taking Bayesian statistics to the next level is the leap into MCMC, Markov Chain Monte Carlo methods. These methods help in finding the posterior distribution of the metrics; the algorithms generate simulations to find the metric parameters. We can write simple code to have the system calculate the probability (see the Python sketch below).
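The gold-rate posterior worked out in a short Python sketch, using only the probabilities given above.

```python
# Posterior probability for the gold-rate example above, via Bayes' theorem
p_increase = 0.7                      # prior: gold rate increased 70% of the time
p_decrease = 0.3                      # prior: gold rate decreased 30% of the time
p_loss_given_increase = 0.8           # rupee weakened when gold increased
p_gain_given_decrease = 0.4           # rupee strengthened when gold decreased
p_loss_given_decrease = 1 - p_gain_given_decrease   # = 0.6

# P(Increase | rupee weakens) = P(weak | Increase) * P(Increase) / P(weak)
p_loss = (p_loss_given_increase * p_increase
          + p_loss_given_decrease * p_decrease)
posterior = p_loss_given_increase * p_increase / p_loss
print(round(posterior, 2))            # 0.76
```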
19. Kaplan and Norton's Balanced Scorecard helps in measuring the performance of a business in terms of both financial and non-financial metrics. It was developed around four perspectives, viz. Financial, Customer, Internal Process, and Learning and Growth. It is a powerful tool, and the objective of the Balanced Scorecard is to create cause-and-effect logic for the organization's strategic goals. Below is a representation of the Balanced Scorecard framework. Each perspective has objectives, measures, targets and initiatives for goal conversion. For instance:
Objective: "Cut rework cost"
Metrics: "Annual savings", "Cost of poor quality"
Target: by 80%
Initiatives: replacing manual inspection with automation; implementing mistake-proofing in operations
Principles to remember when mapping goals into perspectives:
• Follow cause-and-effect logic between perspectives
• Map business goals
• Align KPIs with goals
The Balanced Scorecard is most often compared with the Performance Pyramid model, as both are strategically driven performance management systems.
Some of the advantages of the Balanced Scorecard include:
• It helps the organization create breakthrough performance
• It helps in making strategy operational
• It acts as a vital component in device integration
• It drills organization-level measures down to functional measures
The Balanced Scorecard becomes ineffective in the following scenarios:
• When there are too many performance indicators
• When not everyone in the organization is involved
• When the metrics are poorly defined
• When KPIs are insufficient
• Lack of planning
• Lack of communication
• When there is no balance between the four perspectives
• When leadership focuses only on financial metrics/performance
• It does not have an infinite shelf life: when the Balanced Scorecard is not updated at a regular frequency, sustainability becomes an issue
• When the tool is used for strategy formulation (it is a tool for strategy implementation)
• Not capturing the voice of external stakeholders apart from customers, such as suppliers and public authorities
• It doesn't include environmental factors and metrics that can influence the overall performance of the organization
• When it takes a long time to implement/deploy, mainly because of: no proper understanding of the Balanced Scorecard, lack of management support for implementation, no proper training, or inadequate information technology support
Other cautionary elements/disadvantages of the Balanced Scorecard include:
• High implementation cost for deploying the Balanced Scorecard (number of employees trained × training hours × cost per hour per FTE)
• Objectives need to be defined clearly (SMART)
• The four perspectives do not provide a holistic end-to-end picture
• It doesn't provide recommendations as output
• It takes time to measure the output, and it is a time-consuming process
• The translation of strategy into objectives needs to be accurate
• When the cause-and-effect relationship is inaccurate, it can lead to inappropriate performance indicators
• Lack of validation: it doesn't provide a mechanism for maintaining the relevance of the defined measures
• When there is a lack of integration between strategy-level and operation-level metrics (it is not a two-way process)
• Competitor analysis is not part of the framework
To overcome these limitations, companies should focus on the following parameters:
Emphasis: instead of taking all available metrics into the Balanced Scorecard, take only the metrics that matter - the "vital few" - and focusing on those metrics will give tremendous results.
Validation: validate the metrics, KPIs, objectives and strategy linkage; ensure the cause-and-effect logic is validated.
Clarity: the organization should have a clear communication plan to clarify its strategy at all levels - the factory floor, mid management and senior management. Strategy should flow from top management, and cascading goals should be done with utmost attention.
Integration: this is significant and essential for ensuring the Balanced Scorecard is successful; integrating performance measures with development practices makes the initiative effective.
When setting up a Balanced Scorecard, we have to focus on the following points:
Define/Evaluate: vision and mission, values, communication and change management plan
Set strategy: align the organization's strategy with themes, results and perspectives
Set objectives: strategic objectives categorized and drilled down by perspective, with cause-and-effect linkages aligned
Strategy map: extend the map organization-wide, creating an enterprise-wide strategy map
Performance measures: objectives, measures, targets and initiatives are set and measured
Accountability: involve everyone in the organization and assign responsibility and ownership of the performance measures
Assessment: assess performance and monitor progress to ensure strategic objectives remain aligned
Alignment: alignment/realignment is done carefully for successful deployment
Evaluation: evaluate results for sustenance
20. I would like to give an overview of DFX and DFMA before getting into the DFA index. DFX is Design for Excellence; DFMA is Design for Manufacture and Assembly. DFMA is predominantly used in the manufacturing industry to keep product cost minimal through various improvements via design changes and process optimization.
While developing the product: we could use dFMEA, DFR (Reliability)
During high-level design: we could use DFT (Testability)
During physical design: DFA (Assembly), DFF (Fabrication)
During prototyping: DFM (Manufacturing)
With this high-level overview, we can narrow down to DFA versus DFM, which are commonly compared in manufacturing terms.
DFA index: this indicates how easy a component is to assemble and is used to measure assembly efficiency; the DFA index is an integral metric in the DFA method.
Formula: DFA Index = (Nm × tm) / ts
where
Nm = theoretical minimum number of parts
tm = minimum assembly time (per part)
ts = total estimated assembly time
For example, if the total time to assemble is 500 s, with 9 parts, and the minimum assembly time is 50 s, then the DFA index = (9 × 50) / 500 = 0.9, i.e., 90%. The larger the value of the DFA index, the more efficient the process. It is mainly used to analyze the data, which gives clarity for further actions such as "part elimination" or "redesign of specific parts". There are various software tools and applications available for performing DFA analysis; DFMA and DFA10 are a few of them.
Applications of the DFA index: the DFA index is a good indicator of assembly efficiency, and it can be used to analyze the data. Decisions based on the analysis could lead to fewer parts or a simplified assembly, with the benefits that follow from those. Explaining DFA from a DMAIC methodology perspective:
Define: collecting product information and identifying opportunities for improvement
Measure: measuring current assembly time and cost; calculating the DFA index (assembly efficiency)
Analyze: analyzing results to determine the various complexities of product assembly
Improve: improving assembly efficiency by reducing the number of parts or by reducing/changing part types
Control: continuing with best practices and following the DFA process improvement cycle
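A small Python sketch of the DFA index calculation, under the interpretation of the formula given above (theoretical minimum part count × minimum assembly time per part, divided by total assembly time); treat the helper name and the percentage convention as illustrative assumptions.

```python
def dfa_index(n_min_parts: int, t_min_per_part: float, t_total: float) -> float:
    """Assembly efficiency as a percentage, under the interpretation above:
    (theoretical minimum part count x minimum assembly time per part) / total assembly time."""
    return 100 * n_min_parts * t_min_per_part / t_total

# The worked example above: 9 parts, 50 s minimum assembly time, 500 s total assembly time
print(dfa_index(n_min_parts=9, t_min_per_part=50, t_total=500))   # 90.0 -> higher is better
```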
  21. Spaghetti Diagram: a process analysis tool.

It provides lucidity and understanding of the workflow. It gives better clarity of the current process, lets us see the visual flow of the process, and helps in identifying areas of improvement. Without breaking the flow, continuous tracking is done to trace the flow of the activity in the process. It is used to track the ACTUAL flow of the process. It is also referred to as a Point-to-Point Flow Chart, Workflow Diagram or simply a Spaghetti Chart. The diagram below on the right looks messy, like spaghetti, hence the name Spaghetti Diagram.

This powerful tool gives a sense of the "impact of movement" from a LEAN perspective, identifies wasteful movements (motion), waiting time and various transportation wastes in the process, and keeps them minimal. It is usually used in complex processes to understand the current flow ("Current State - As-Is") and to propose opportunities for optimization ("Future State - To-Be"). It can be used to track a product/part, multiple people or a single person, paper, tools and activities.

I would like to explain this with an example. In the example, the after-lean Spaghetti Diagram (improved layout) considers the below points (a small sketch quantifying the travel distance follows this section):

Activity distance travelled is reduced
Distance is optimized
People/processes are relocated (layout reorganized) to follow the order in which the process flows, or activities are brought closer to each other (better sequencing)
Unnecessary flow is eliminated
Waiting time is reduced
Interruptions/facility constraints are identified and eliminated
KANBAN opportunities are identified

>> This can lead to faster deliverables/delivery, or
>> With no change in delivery time, less effort is put in to complete the activity
>> This can result in space and motion savings

Benefits of using a Spaghetti Diagram:
Helps detect wasteful movements and identify ways to increase the speed of the process
Identifies possible inefficiencies/bottlenecks/critical paths in the workplace layout which cause delays
Helps in creating an ideal layout design for the process
Resource allocation improvements are also identified
Unnecessary, confused flow can be identified and eliminated
It reduces employee fatigue and improves employee morale
It is used to identify redundancies in the process flow and eliminate them for optimal performance
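A spaghetti diagram is usually drawn by hand, but the travel distance it captures can be quantified to compare the As-Is and To-Be layouts. The sketch below is a hypothetical illustration: the workstation names, coordinates and walk sequences are made up for the example, not taken from the enclosed diagram.

```python
# Hypothetical sketch: quantifying the walking distance traced by a spaghetti diagram.
# Workstation names, coordinates and walk sequences below are illustrative assumptions.
from math import dist

stations = {"Receiving": (0, 0), "Cutting": (8, 2), "Welding": (3, 9),
            "Inspection": (9, 9), "Packing": (1, 5)}

def walk_length(path):
    """Total straight-line distance for one traced walk through the layout."""
    points = [stations[s] for s in path]
    return sum(dist(a, b) for a, b in zip(points, points[1:]))

current_state = ["Receiving", "Welding", "Cutting", "Welding", "Inspection", "Packing"]  # As-Is
future_state  = ["Receiving", "Cutting", "Welding", "Inspection", "Packing"]             # To-Be

print(round(walk_length(current_state), 1))  # baseline travel distance
print(round(walk_length(future_state), 1))   # improved layout / better sequencing
```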
  22. Understanding Metrics: A secondary metric measures the unintended consequences of changes to the process or product. The below table differentiates primary and secondary metrics.

Primary Metric (Y) | Secondary Metric (y's)
Also called the Project CTQ; Y = f(x), where X's are the critical inputs and Y is the output | Also called a consequential metric; has a cause-and-effect relationship with the primary metric
Used to measure success; measures the direct output | Drives the right behaviour; measures the result of the primary metric
Measures what needs to be fixed | Measures what must not be broken (a "protection measure")
The metric considered for improvement | The metric that should not be negatively impacted while improving the primary metric
Examples: Time, Quality, Service, Resources | Examples: Flexibility, Engagement, Cost/Revenue, Customer Satisfaction

A process driven by the primary metric alone is not mistake-proofed and is prone to output variations/failures; thus considering secondary metrics is vital. Secondary metrics help the organization gain a comprehensive view of operations. For example, it is NOT acceptable to reduce PCT (Process Cycle Time) at the cost of declining service quality, i.e., the secondary metric should not be compromised for the sake of the primary metric. Selecting the correct metrics helps the organization validate the hypothesis and ensures teams are aligned with the business goals.

I would like to give an example from the insurance industry for primary and secondary metrics. It is imperative to respond to customer needs quicker; technological advancements such as Automation (RPA), Artificial Intelligence (AI) and Machine Learning (ML) can help the organization achieve this. From an insurance claims perspective, quicker claim processing is vital from the customer's point of view. For that, the primary metric can be: Claim Cycle Time, and the secondary metrics can be: number of claims processed per person, delinquency rate. (A small guard-rail sketch using these metrics follows this section.)

Is a secondary metric necessary? Secondary metrics are essential; they help gather insight into the process and are necessary to meet long-term goals.

Multiple secondary metrics: Other CTQs apart from the primary CTQ become secondary metrics. We could have one or more secondary metrics in a project in order to choose the best solution. However, it is better to focus on 3 or 4 controllable secondary metrics that complement the primary metric.
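To make the "protection measure" idea concrete, here is a hedged sketch of a guard rail using the insurance example above: it accepts an improvement only if the primary metric (claim cycle time) improves without the secondary metric (claims processed per person) degrading beyond a tolerance. The field names, threshold and sample values are illustrative assumptions.

```python
# Hedged sketch: flag changes that improve the primary metric at the expense of a secondary metric.
# Field names, the 5% tolerance and the sample values are illustrative assumptions.

def evaluate_change(baseline: dict, pilot: dict, max_secondary_drop_pct: float = 5.0) -> str:
    cycle_time_gain = baseline["claim_cycle_time_days"] - pilot["claim_cycle_time_days"]
    throughput_drop_pct = (100 * (baseline["claims_per_person"] - pilot["claims_per_person"])
                           / baseline["claims_per_person"])
    if cycle_time_gain <= 0:
        return "No improvement in primary metric"
    if throughput_drop_pct > max_secondary_drop_pct:
        return "Primary improved, but secondary metric degraded beyond tolerance"
    return "Improvement accepted: primary improved, secondary protected"

baseline = {"claim_cycle_time_days": 12.0, "claims_per_person": 40}
pilot    = {"claim_cycle_time_days": 9.5,  "claims_per_person": 39}
print(evaluate_change(baseline, pilot))  # secondary drop is 2.5%, within tolerance
```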
  23. OFAT versus DOE:

One Factor at a Time (OFAT) | Design of Experiments (DOE)
Hold all other factors constant and alter one factor's level at a time | Multiple factors (more than 2) can be manipulated together
Sequential, one factor at a time | Simultaneous, with multiple factors
The experimenter decides the number of experiments to be conducted | The number of experiments is determined by the design itself
Interactions among the factors CANNOT be estimated | Interactions are estimated systematically
The design is the experimenter's decision | Factorial designs (full and fractional)
Low precision | Higher precision in the estimates of each factor effect
High chance of a false optimum (when 2+ factors are considered), which can mislead | High chance of finding the true optimum
Curvature can only be assessed in the single factor being varied | If there is curvature, it is estimated by augmenting the design into a central composite design
Domino effect: if one experiment goes wrong, the results become inconclusive | Orthogonal design, so it is easy to predict and draw conclusions

It is sensible to say DOE is superior to OFAT, as we save time and do not have to perform as many tests/experiments. Let's see how designed experiments take the upper hand over OFAT with an example: running 3 factors in 15 runs. A few interpretations, with reference to the above diagram (a short design-generation sketch follows this section):

In DOE, we can estimate the interactions between the factors, but not in OFAT
In DOE, prediction is better as the experimental runs have better data spread compared to OFAT with the same number of experimental runs
Curvature determination is better as DOE covers the entire spectrum compared to OFAT, and for that matter response optimisation is also better in designed experiments
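The contrast between the two run plans can be shown in a few lines of code. This is a minimal sketch, assuming three generic two-level factors (A, B, C) coded as -1/+1; it simply enumerates the OFAT runs and a 2^3 full factorial, which is why the factorial design can estimate interactions while OFAT cannot.

```python
# Minimal sketch contrasting OFAT runs with a 2^3 full factorial design.
# Factor names A, B, C and coded levels -1/+1 are generic illustrations.
from itertools import product

factors = ["A", "B", "C"]

# OFAT: start from a baseline and change one factor at a time (4 runs, no interactions estimable)
baseline = {"A": -1, "B": -1, "C": -1}
ofat_runs = [baseline] + [{**baseline, f: +1} for f in factors]

# Full factorial: all 2^3 = 8 combinations, so main effects AND interactions can be estimated
factorial_runs = [dict(zip(factors, levels)) for levels in product([-1, +1], repeat=3)]

print(len(ofat_runs), "OFAT runs:", ofat_runs)
print(len(factorial_runs), "full factorial runs:", factorial_runs)
```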
  24. Box plot (box and whisker plot): This analysis creates a visual representation of the range and distribution of quantitative (continuous) data. It creates 4 quartile groups:

Quartile Group 1: Min - 25th percentile (Q1)
Quartile Group 2: 25th percentile (Q1) - 50th percentile (Q2, Median)
Quartile Group 3: 50th percentile (Q2) - 75th percentile (Q3)
Quartile Group 4: 75th percentile (Q3) - Max

Here, Q3 - Q1 is the Inter-Quartile Range (IQR).

Insights from a box plot: comparing multiple data sets (using a categorical variable for grouping) and understanding data symmetry and skewness.

* It gives the spread of the data points, i.e., the lowest (min) and highest (max) values in the data set.
* It shows outliers (if any) present in the data. Outliers are values that lie more than 1.5 times the IQR below the 25th percentile or above the 75th percentile.
* It clearly shows if the distribution is skewed (left or right; refer to the enclosed picture).
* Median: this separates the lower 50% of observations from the upper 50% of observations.
* Box plot with groups: when we have further categories, we can use a categorical variable for grouping; this helps in identifying the distribution spread within each group.

Example reference: this example is for a box plot graph with groups, Group A and Group B respectively. It is clearly evident that there are outliers in both graphs, and Group A is right-skewed. The visual representation gives us more clarity on the distribution of data in both groups. (A small sketch reproducing these calculations on synthetic data follows this section.)
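The quartile, IQR and outlier-fence calculations described above can be reproduced directly. This is a hedged sketch on synthetic data (a right-skewed Group A and a roughly symmetric Group B); the sample sizes and distributions are my own illustrations, not the data behind the enclosed picture.

```python
# Hedged sketch: quartiles, IQR, 1.5*IQR outlier fences, and a grouped box plot.
# The two sample groups are synthetic illustrations, not the data from the enclosed picture.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
group_a = rng.exponential(scale=10, size=100)    # right-skewed sample
group_b = rng.normal(loc=50, scale=5, size=100)  # roughly symmetric sample

q1, q2, q3 = np.percentile(group_a, [25, 50, 75])
iqr = q3 - q1
lower_fence, upper_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = group_a[(group_a < lower_fence) | (group_a > upper_fence)]
print(f"Q1={q1:.1f}, Median={q2:.1f}, Q3={q3:.1f}, IQR={iqr:.1f}, outliers={len(outliers)}")

plt.boxplot([group_a, group_b], labels=["Group A", "Group B"])  # box plot with groups
plt.ylabel("Value")
plt.show()
```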
  25. Major differences between DFMEA and PFMEA: DFMEA (Design Failure Mode and Effects Analysis) detects potential failure modes with regard to a product or a service, whereas PFMEA (Process Failure Mode and Effects Analysis) detects potential failure modes with regard to a process (usually manufacturing and assembly processes).

DFMEA - emphasis on product function
PFMEA - emphasis on process input variables

Another major difference: DFMEA is used to detect potential deficiencies in products/services before they are released to production. PFMEA is preferably performed before a new process is started; however, most of the time it is used for analysing deficiencies in existing processes. Irrespective of the differences, a similar set of steps is followed in both DFMEA and PFMEA (a minimal risk-ranking sketch common to both follows this section). DFMEA is mostly seen from the perspective of identifying deficiencies in product life or product malfunction, whereas PFMEA is seen from the perspective of identifying deficiencies in product quality or improving the reliability of the process. In a typical product manufacturing flow, the DFMEA activity is performed before the PFMEA.

I would like to explain this with an example from the software and manufacturing sectors. [Refer Enclosed Image]

PFMEA is developed to ensure effective process control, and any abnormality/feedback is sent back to the DFMEA for design changes (if any) for an optimized flow. If used effectively, this can result in good improvements in Quality, Cost, Delivery and Reliability. Feedback from the PFMEA to the DFMEA can drive proactive design changes that prevent process/product failures.
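Since both DFMEA and PFMEA follow the same basic worksheet steps, a single record structure can serve both; the sketch below shows the standard RPN = Severity x Occurrence x Detection ranking on one design-related and one process-related row. The item names, failure modes and ratings are illustrative assumptions, not taken from the enclosed image.

```python
# Hedged sketch: a minimal FMEA row with the standard RPN = Severity x Occurrence x Detection,
# applicable to both DFMEA and PFMEA worksheets. All field values are illustrative.
from dataclasses import dataclass

@dataclass
class FmeaRow:
    item: str
    failure_mode: str
    severity: int    # 1-10
    occurrence: int  # 1-10
    detection: int   # 1-10

    @property
    def rpn(self) -> int:
        """Risk Priority Number used to rank failure modes for corrective/preventive action."""
        return self.severity * self.occurrence * self.detection

rows = [
    FmeaRow("Battery housing (DFMEA)", "Cracks under vibration", 8, 3, 4),
    FmeaRow("Welding station (PFMEA)", "Incomplete weld penetration", 7, 5, 3),
]
for r in sorted(rows, key=lambda r: r.rpn, reverse=True):
    print(f"{r.item}: {r.failure_mode} -> RPN = {r.rpn}")
```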