Mohamed Asif

Fraternity Members
Mohamed Asif last won the day on September 8 2019

Mohamed Asif had the most liked content!

Community Reputation

13 Good


About Mohamed Asif

  • Rank
    Active Member
  • Birthday 03/30/1985

Profile Information

  • Name
    Mohamed Asif Abdul Hameed
  • Company
    Allstate India
  • Designation
    Senior Lead Consultant - Operations Excellence

  1. Ensuring compliance and risk mitigation are vital elements of an organization's Fraud Detection and Risk Management framework. Fraud detection is inevitable in organizations because "undetected fraud encourages more fraud". In the banking environment, fraud detection and prevention are done more proactively than in other domains.

Below are some of the methods followed to detect fraud:
  • Intrusion detection systems (IDS) – passive systems that monitor and notify the user
  • Transaction monitoring for suspicious activity and operating-procedure violations
  • Alerting the user and the risk-response team when unusual activity is detected (based on spending behaviour and location)
  • Real-time monitoring of high-value transactions
  • Advanced firewalls that auto-detect and block traffic based on IP address and port number

Below are some of the frequently scheduled activities and best practices to prevent fraud:
  • Intrusion prevention systems (IPS) – active systems that monitor for and automatically block attacks
  • 2FA (two-factor or multi-factor authentication), an extra layer of protection to secure online transactions
  • Blocking debit/credit cards when a wrong PIN is entered repeatedly
  • OTP and secure-code authentication for online transactions
  • Limiting the transfer value to a new beneficiary for the first 24 hours, and limiting the number of beneficiaries that can be added within a 24-hour window
  • Auto log-off when the user is idle and no activity is detected

Commonly used security tools in financial institutions include:
  • Proxy piercing – helps trace a fraudster's true location
  • Device fingerprinting – captures the transaction pattern associated with a device and flags deviations
  • Blacklisting – blocks traffic initiated from a specific user/domain/location/country (dark-web monitoring)
  • Velocity checking – watches for repeat purchases from the same user and flags them

Adopting multiple fraud detection tools and methodologies is the only way to fight online fraud effectively. These tools can help with:
  • Payment fraud prevention
  • New-account fraud prevention
  • Account takeover protection
  • Payment authorization
  • Dynamic checkout
  • Charge-back guarantee and representment
  • Content integrity
  • CNP (card-not-present) fraud protection

In the insurance environment, especially during claims, organizations traditionally relied on measures such as expert judgement, Special Investigation Teams and adjusters. However, organizations should leverage technology to mitigate, prevent and combat fraudulent activities, for instance:
  • Analytical techniques such as Artificial Neural Networks (ANN) to flag an unusual claim
  • Data mining methods such as clustering based on specific customer UIDs and segments
  • Pattern recognition algorithms and models to identify patterns against historical records
  • Text mining and logistic regression techniques to identify claimant records

Categorization can be done based on the available data:
  • Clean Claim – for fast-track settlement
  • Standard Analysis – normal processing TAT
  • Critical Investigation – for potentially fraudulent claims

Lemonade Insurance Company reports a claim paid in 3 seconds with no paperwork (Source: Insurance Innovation Reporter). For companies like Lemonade, the fraud detection and prevention system should meet apex standards to maintain reputation and customer relationships.
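To make the velocity-checking idea above concrete, here is a minimal sketch in Python. The class name, thresholds and sliding-window logic are illustrative assumptions, not from any real fraud product:

```python
from collections import defaultdict, deque

class VelocityChecker:
    """Flag a card when more than `max_txns` transactions occur
    within a sliding window of `window_s` seconds (illustrative)."""

    def __init__(self, max_txns=3, window_s=60):
        self.max_txns = max_txns
        self.window_s = window_s
        self.history = defaultdict(deque)  # card_id -> recent timestamps

    def is_suspicious(self, card_id, ts):
        q = self.history[card_id]
        q.append(ts)
        # Drop timestamps that fell out of the sliding window
        while q and ts - q[0] > self.window_s:
            q.popleft()
        return len(q) > self.max_txns

checker = VelocityChecker(max_txns=3, window_s=60)
flags = [checker.is_suspicious("card-1", t) for t in [0, 10, 20, 30, 40]]
print(flags)  # -> [False, False, False, True, True]
```

The first three rapid transactions fall under the limit; the fourth and fifth exceed it and get flagged, which is the repeat-purchase pattern velocity checking is meant to catch.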
  2. 自働化 – Jidoka – Autonomation. It simply means that the process automatically halts when there are non-conformities/irregularities/abnormalities in the system. The andon light system is one of the vital components of autonomation. In the reference picture below, andon is used as a visual management tool to know the status of production. Legend reference:
  • Green – all good, normal operation > proceed further
  • Yellow – warning, issue identified, requires attention > CAPA required
  • Red – production halted > issue not identified; immediate supervisor inspection and RCA required

Some of the planning aspects necessary to benefit from Jidoka are mentioned below:
  • Combine JIT and Jidoka; by doing this, overproduction is avoided and poor quality is minimized along with increased productivity. Under continuous flow, this avoids bottlenecks and idle time
  • Implement lean flow before autonomation
  • Make effective use of systems and technology to make andon lights interactive; this can improve communication between operators and engineers
  • Keep downtime minimal to magnify quality and Overall Equipment Effectiveness (OEE)
  • Have Rapid Issue Response (RIR) teams ready to address open and high-priority tickets
  • Integrate andon boards, monitoring systems and alert systems for quick response
  • Train operators and engineers on autonomation tools – andon, andon cord, fixed-position stop, poka-yoke, sensors – and the appropriate lean tools
  • Empower the workforce for the pursuit of excellence
  • Corrective action is essential; however, preventive action and poka-yoke should be given importance for effective Jidoka benefits

Moving to Jidoka: Minimize Manual Labor > Mechanize Flow > Implement Lean > Optimize > Automate > Autonomate
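The core Jidoka behaviour, halting the line the moment a non-conformity is detected rather than letting defects flow downstream, can be sketched in a few lines of Python. The status names and the defect check are illustrative assumptions:

```python
# Andon-style statuses (illustrative labels, matching the legend above)
GREEN, RED = "green", "red"

def process_units(units, is_defective):
    """Process units in order; halt immediately on the first defect."""
    status, processed = GREEN, []
    for unit in units:
        if is_defective(unit):
            status = RED  # production halts; supervisor inspection and RCA follow
            break
        processed.append(unit)
    return status, processed

# Example: negative values stand in for defective units
status, done = process_units([1, 2, -3, 4], lambda u: u < 0)
print(status, done)  # -> red [1, 2]  (unit 4 is never processed)
```

The point of the sketch is that the halt is built into the process itself: no defective unit continues, and no good unit after the defect is produced until the issue is addressed.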
  3. I have summarized some of the methods to overcome and defeat groupthink during a brainstorming session. A few best practices include:
  • Engage in open discussions
  • Allocate a "devil's advocate" in the team – red teaming
  • Structure the brainstorming session
  • Encourage wild ideas
  • Evaluate alternatives cautiously – use the "Six Thinking Hats" approach
  • Disrupt abundantly when required
  • Encourage conflict of ideas, so that the group doesn't end up with limited decisions
  • Give everyone a chance to speak during the session, perhaps in a timed round-robin fashion
  • Add new elements to brainstorming – introduce "reverse brainstorming" and "brainwriting"
  • Give more attention to group dynamics
  • Encourage diversity
  • Occasionally, invite a cross-functional team member as an external consultant
  • Remain impartial until the wrap-up
  4. By definition, a Nash equilibrium is a stable state of a system involving the interaction of different players, where no player can gain by an independent (in isolation) change of strategy if the strategies of the other players remain unchanged. Below is the payoff matrix for Company A and Company B for their decision to diversify or not. In this scenario:
  • "Players" are the firms, Company A and Company B
  • "Moves" are the actions the firms can take: either diversify or not diversify (something like Apple's strategic decision about getting into the car business)
  • "Payoffs" are the profits the firms will earn (diversifying increases a firm's operational costs but can increase revenues in the long run)

Here the equilibrium outcome is that both companies will diversify. Even though both A and B would perform better if neither diversified, such a position is highly unstable, as each company gains the upper hand by diversifying (an extra +30) when its competitor does not. The resulting outcome is called the "Nash equilibrium". Neither Company A nor Company B has anything to gain by modifying its own decision separately. Simply put, the Nash equilibrium position is the most stable state, though not the most obvious solution when there is a multi-party conflict.

Nash equilibrium is one of the fundamental concepts in game theory, and it provides the basis for rational decision making. It can be used to predict a company's response to a competitor's prices and decisions. In 2000, advice from economists raised £22.5 billion for the UK government from an auction of 3G bandwidth licences for mobile phones (Source: UKRI Economic and Social Research Council). In an oligopolistic market, if one organization reduces its service prices, the competitor must reduce its prices as well in order to retain customers.

Classical Indian examples:
  • Bharti Infratel and Jio striking a Nash equilibrium for telecom infrastructure sharing
  • The dilemma of Shiv Sena whether to support BJP or scoot from the alliance while forming the government in Maharashtra

Organizational decision making involves deciding between alternatives under uncertainty, complexity, interpersonal issues and high-risk consequences. Organizations can apply game theory by changing the payoffs. Even though it is difficult to shift from a competitive to a cooperative strategy with any degree of success, it is better for organizations to cooperate with rivals/competitors when that leaves everyone better off. Applying the concept helps organizations narrow down the number of possible outcomes, and therefore the strategic decisions to be taken.

Takeaway: lose-win and win-lose situations in any kind of relationship usually do not last; they are temporary and can easily turn into a lose-lose situation later. To make a strong, long-term relationship sustainable, we have to rely on a win-win situation.
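The diversification game above can be checked mechanically. The payoff numbers below are illustrative assumptions consistent with the post's description (not diversifying together pays more, but each firm gains an extra +30 by diversifying when the rival does not); the code simply tests every cell for profitable unilateral deviations:

```python
from itertools import product

# Hypothetical payoffs (Company A, Company B); numbers are assumptions
payoffs = {
    ("diversify", "diversify"): (80, 80),
    ("diversify", "hold"):      (130, 60),
    ("hold", "diversify"):      (60, 130),
    ("hold", "hold"):           (100, 100),
}
moves = ["diversify", "hold"]

def is_nash(a, b):
    """A cell is a Nash equilibrium if neither player can gain
    by unilaterally switching its own move."""
    pa, pb = payoffs[(a, b)]
    best_a = all(payoffs[(alt, b)][0] <= pa for alt in moves)
    best_b = all(payoffs[(a, alt)][1] <= pb for alt in moves)
    return best_a and best_b

equilibria = [cell for cell in product(moves, moves) if is_nash(*cell)]
print(equilibria)  # -> [('diversify', 'diversify')]
```

Note that (hold, hold) pays both firms more, yet it is not an equilibrium: either firm can deviate to diversify for +30, which is exactly the instability the post describes.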
  5. The value of any measurement is the sum of the actual measurement and the measurement error. Measurement system variation/error can occur because of precision or accuracy problems. The Gage R&R tool measures the amount of variation in the measurement system; variation can come from the device or from the people. In the diagram below, Ref 1.1 is a classical example of high precision and low accuracy: even though precision is high, the values/points are highly biased and inaccurate. In Ref 1.3, the values/points are both accurate and precise.

Resolution is pivotal in measurement systems, as it discriminates between measurement values. After looking at resolution, it makes sense to look at accuracy, that is, to measure the distance between the average value and the true value; moving from constant bias to zero bias is the next objective. Linearity is the consistency of the bias existing in the system over the measurement range. Then comes the stability of the system, that is, whether the measurement system can produce the same value over time when the same sample is measured, and finally precision – repeatability and reproducibility. The primary objective is to find out whether there is any variation (either process or appraiser) and then look at the total measurement system variation. So the best order for checking the variation would be:
  1. Resolution / Discrimination against tolerance (smallest unit of measure of the gage)
  2. Accuracy / Bias (closeness of the data to the target value)
  3. Linearity (change in bias value within the range)
  4. Stability (change in bias over a period)
  5. Precision – Repeatability and Reproducibility (closeness of the values to each other)

Another view on the order could be: 1. Resolution, 2. Accuracy, 3. Linearity, 4. Precision, 5. Stability.
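The high-precision/low-accuracy case (Ref 1.1) can be illustrated numerically: bias is the gap between the average measurement and the reference value, while repeatability is the spread of repeated measurements. The reference value and readings below are made-up illustrative data:

```python
from statistics import mean, stdev

reference = 10.00                                  # assumed true value
measurements = [10.12, 10.09, 10.11, 10.10, 10.13]  # tight cluster, but shifted

bias = mean(measurements) - reference   # accuracy problem (constant offset)
repeatability = stdev(measurements)     # precision (spread of repeated readings)

print(f"bias = {bias:.3f}, repeatability = {repeatability:.4f}")
```

Here the spread is tiny (high precision) while the average sits well above the true value (low accuracy), which is exactly why accuracy/bias is checked as its own step after resolution.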
  6. IoT (Internet of Things) means connecting things (devices, appliances, utilities, objects, machines, etc.) to the internet. A car gate/barrier opening automatically when you reach your home location, or an air conditioner, washing machine, geyser or TV switching on and off based on a pattern/specification, are some examples of IoT. According to recent research, the share of multipurpose RATs (Remote Access Tools) affecting IoT has nearly doubled in recent years (from 6.5% to 12.2%) (Source: Kaspersky Global Research and Analysis).

Security concerns of IoT: as multiple devices are connected over the internet, information/data can fall into the wrong hands, resulting in misuse of the data and rising security concerns such as:
  • Data privacy
  • Home security
  • Network hacking
  • Distributed Denial of Service (DDoS) attacks
  • Deliberate radio-frequency jamming
  • Extortion losses
  • Theft of financial information/money

To summarize, the losses can be physical, digital, economic, psychological, reputational or social. There is also a clear limitation of IoT security: we can't install antivirus software on most IoT devices (smart TVs, internet security cameras), as they do not have adequate computing power to run an antivirus program.

To overcome security concerns with IoT, we could follow some of the best practices listed below:
  • Create a strong password for the connected devices – complex and not guessable (viz., not "admin" or "12345")
  • Reset/change passwords at regular intervals
  • Do not use the same password for all connected devices
  • Enable notifications for any intrusion/invasion to take rapid action (intrusion prevention)
  • Frequently monitor for suspicious/unusual activities (anomaly detection)
  • Apply regular application updates from hardware vendors for improved security
  • Select devices with built-in security and embedded firmware for IoT connectivity

IoT has great potential; doing due diligence before investing is wise.
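The "strong, not guessable" password practice above can be sketched as a simple check. The blocklist of default passwords, the minimum length and the character-class rule are all illustrative assumptions, not a real standard:

```python
# Known-guessable defaults to reject (illustrative blocklist)
COMMON_DEFAULTS = {"admin", "12345", "password", "123456"}

def is_acceptable(password: str, min_len: int = 12) -> bool:
    """Reject blocklisted defaults, short passwords, and passwords
    lacking character diversity (assumed policy for the sketch)."""
    if password.lower() in COMMON_DEFAULTS:
        return False
    if len(password) < min_len:
        return False
    classes = [any(c.islower() for c in password),
               any(c.isupper() for c in password),
               any(c.isdigit() for c in password)]
    return sum(classes) >= 3

print(is_acceptable("admin"))                 # -> False (blocklisted default)
print(is_acceptable("Xk3#long-passphrase7"))  # -> True
```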
  7. Both a scatter plot and a bubble plot examine the relationship between two variables (an X variable and a Y variable). However, in a bubble chart, each bubble's size represents the value of a third variable; based on the input specification, size can mean the area of the bubble or the width of the bubble. A bubble chart is built upon a scatter plot as its base. The scatter plot and bubble plot below reference the same data points.
  • Scatter Plot 1 – examines the relationship between the Y variable and the X variable
  • Bubble Plot 1 – examines the relationship between the Y variable and the X variable, with bubble size representing the third variable

Variants: based on the groups, we could have a simple bubble plot or one with groups.
  • Bubble Plot 2 – with groups – 3 categories A, B, C

Limitations and misinterpretations:
  • The area or size of a bubble increases or decreases proportionally in the plot and does not depend on the largest value/bubble, so there is a high chance of misinterpreting a value based on bubble size. However, in Minitab we have the option to edit the bubble size (Minitab can calculate the size, or we can use the actual size of the bubble from the given variable)
  • It is more complex to understand and read the data than in a scatter plot
  • It becomes chaotic/confusing when there are many data points in the bubble plot (in Bubble Plot 2 above, 50 data points with 3 categories are considered); it is not ideal for large data sets
  • It is hard to identify a smaller bubble (it might be covered/hidden), especially when it is close to or overlapped by a bigger bubble, so information is lost. Using jitter can help reveal overlapping points; however, it can confuse the reader, as jitter is generated by a random function (it is not the same point each time it is generated)
  • It can be difficult to determine the exact location of the data points when the bubbles are clustered
  • When there is no clear legend, the reader can misinterpret/misunderstand the data points and the relationship
  • Negative size? Any negative/null value of the third variable would not be visible; after all, a shape cannot have negative area

Data is valuable only if we know how to visualize it and give it context. It is better to select the chart based on the message we want to share with the audience rather than just going with a chart type.
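The size-misinterpretation point above usually comes down to whether the third variable is mapped to the bubble's area or its radius. A value twice as large should get twice the area, i.e. a radius scaled by sqrt(2), not 2; scaling the radius directly exaggerates differences. A small sketch with made-up values:

```python
import math

values = [10, 20, 40]  # illustrative third-variable values

def radii_by_area(vals, base_radius=1.0):
    """Correct mapping: bubble AREA proportional to the value."""
    base = vals[0]
    return [base_radius * math.sqrt(v / base) for v in vals]

def radii_by_radius(vals, base_radius=1.0):
    """Misleading mapping: radius proportional to the value
    (area then grows with the square of the value)."""
    base = vals[0]
    return [base_radius * (v / base) for v in vals]

print([round(r, 2) for r in radii_by_area(values)])    # -> [1.0, 1.41, 2.0]
print([round(r, 2) for r in radii_by_radius(values)])  # -> [1.0, 2.0, 4.0]
```

With radius scaling, the 40-unit bubble gets 16× the area of the 10-unit bubble instead of 4×, which is one reason readers misjudge values from bubble size.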
  8. Regression testing is an incremental validation technique, advantageous especially in an agile development environment. In regression testing, we re-run previously performed test cases to validate the working of current functionality. This is done mainly to test whether code changes (enhancements) have any impact on existing features. Regression testing is necessary because any change to existing code can throw erroneous output and might cause the software to work inaccurately. It is a type of software testing usually done at regular intervals, specifically after bug fixes (error correction), enhancements (adding a new feature to existing software), code optimization, environment changes and performance fixes. For instance, in the example referred to below, Instagram added dark mode to its existing application. Regression testing is performed irrespective of whether the update/release is a minor, major or patch fix. On average, 1-4 weeks of regression testing is performed before releasing to the production environment, depending on the complexity of the application/system. However, we can optimize the testing and make it effective by formulating and following a good regression testing strategy. Comprehensive techniques include:
  • Retest all
  • Regression test selection and prioritization of test cases

Types of regression testing, and when each is performed:
  1. Corrective – when no changes are introduced to the application specification
  2. Re-test all – reusing all test cases
  3. Selective – testing a specific module/subset
  4. Progressive – changes to the specification, with new test cases created
  5. Complete – when multiple changes have been performed
  6. Partial – when new code is added to existing code
  7. Unit – in the unit testing phase (code isolation), with dependencies blocked

In the QA process, regression testing is a significant step. However, it can be complex, tedious (executing tests again and again) and time-consuming. The challenge is to achieve wide test coverage with minimal execution of test cases; for rapid and effective testing, strategy is used in selecting the test type. Best practices and recommended steps for effective testing:
  • Maintain and amend the test cases in the regression test suite. Amendments can include adding new test cases, removing outdated ones and modifying expected test results. Further categorize test cases in the suite for effective regression testing, viz., reusable, re-testable and obsolete categories
  • Based on the bug report, identify the vital few problematic areas to prioritize testing
  • Focus on functionality that is commonly and frequently used by the users, and select appropriate test cases to make the testing effective
  • We might miss certain scenarios in the test cases, so it is recommended not to forget/ignore random testing
  • Use cross-team QAs/testers to perform regression testing

Having an effective regression strategy can help the organization save the time and effort invested in quality testing.
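The selection-and-prioritization idea above can be sketched as a scoring function: run the cases touching changed modules, recent failures and heavily used features first. The fields, weights and test-case records are illustrative assumptions, not a real test-management API:

```python
# Hypothetical regression suite metadata (illustrative)
test_cases = [
    {"id": "TC1", "module": "login",    "usage": 9, "last_failed": True},
    {"id": "TC2", "module": "reports",  "usage": 3, "last_failed": False},
    {"id": "TC3", "module": "checkout", "usage": 8, "last_failed": False},
    {"id": "TC4", "module": "login",    "usage": 7, "last_failed": False},
]
changed_modules = {"login"}  # modules touched by the current release

def priority(tc):
    score = tc["usage"]                                   # frequently used features first
    score += 10 if tc["module"] in changed_modules else 0  # impacted by the change
    score += 5 if tc["last_failed"] else 0                 # known problematic area
    return score

ordered = sorted(test_cases, key=priority, reverse=True)
print([tc["id"] for tc in ordered])  # -> ['TC1', 'TC4', 'TC3', 'TC2']
```

Under time pressure, executing the suite in this order front-loads the cases most likely to catch a regression, which is the point of selective/prioritized regression testing.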
  9. There is, of course, a substantial difference between the D-M-A phases of DMAIC and DMADV. I have briefly tabulated the differences, common tools and deliverables used in each phase of both methodologies.
  10. An outlier is an anomaly, an extreme observation: any observation that lies outside the pattern of the overall population distribution. Simply, it is any data point more than 1.5 * IQR below the first quartile or above the third quartile. Many a time the presence of an outlier is treated as a mistake in data collection, and it can skew the statistical relationship. However, we can get an outlier for any of the following reasons:
  • Data entry/typing errors
  • Measurement errors
  • Experimental errors
  • Intentional/dummy data
  • Data processing errors (due to formulas)
  • Sampling errors
  • Natural variation (not usually an error; it could be a novelty in the data)

We can find outliers by:
  • Foremost, using common sense
  • Finding the outlier visually (a graphical summary can help, or a boxplot/scatterplot)
  • Using statistical tests; there are many tests for outliers, a few of which are listed below:
    • Grubbs' test for outliers (also called the extreme studentized deviate test)
    • Dixon's Q test for outliers
    • Cochran's C test
    • Mandel's h and k statistics
    • Peirce's criterion
    • Chauvenet's criterion
    • Mahalanobis distance and leverage

Methods of detection include:
  • Z-score / extreme value analysis
  • Probabilistic and statistical modeling
  • Linear regression models
  • Proximity-based models
  • Information theory models
  • High-dimensional outlier detection methods

In SAS, PROC UNIVARIATE and PROC SGPLOT can be used to find outliers. Statistical tests can be used to detect an outlier; however, they should not be used to determine what to do with it (ignore/remove). One should have good domain knowledge when analyzing outliers. Below is an example data set with an outlier and without an outlier.

We can have either univariate or multivariate outliers:
  • Univariate outlier: a data point that is an outlier on one variable
  • Multivariate outlier: a combination of outliers on at least two variables

Other forms of outlier include:
  • Point outliers: a single outlier
  • Contextual outliers: can be noise in the data
  • Collective outliers: can be a subset of uniqueness in the data (novelties)

We can ignore an outlier when it is a bad outlier and:
  • We know that it is wrong data (common sense)
  • We have a big data set (ignoring the outlier doesn't matter in this situation)
  • We can go back and validate the data set for accuracy
  • The outlier does not change the result, though it influences a change in assumptions
  • The outlier is a data point from an unintended population

When an outlier influences both the result and the assumptions, it is better to run the analysis with and without the outlier (as we are not sure whether it arises from a mistake or a misclassification of the data), and then investigate both results to see whether the significance is minor or major.

We should not ignore an outlier when it is a good outlier and:
  • The results and outcomes are critical
  • We have too many outliers (usually when they are not unusual)

Before ignoring an outlier we have to run through this checklist (for cautious and safe removal):
  • Is the outlier due to a data entry typo?
  • Is the identified outlier value scientifically impossible?
  • Is the assumption of a Gaussian distribution for the data set uncertain?
  • Does the outlier value seem scientifically interesting?
  • Do we have substantial information about the outlier that we need to retain?
  • Are there any special circumstances/situations/cases for the data points?
  • Are there any potential measurement errors?
  • In a multi-outlier situation, can masking be a problem? (In masking, an outlier is not detected)

If the answer to the above questions is no, then either:
  • (Situation A) the so-called outlier could have come from the same Gaussian population; we just happened to collect an observation from the top or bottom tail of the population, or
  • (Situation B) the identified outlier could be from a different distribution, and we collected it due to a mistake or a bad sampling technique.

For Situation A, removing the outlier would be a mistake. For Situation B, we can remove the outlier cautiously.

Removal of outliers can be dangerous: it may improve the distribution and fit, but most of the time some important information is lost. Points to remember if we remove an outlier:
  • Trim the data set
  • Do Winsorization (replace outliers with the nearest good data)
  • Transform the data; discretization; top, bottom and zero coding
  • Replace the outlier with the mean/median (extreme outliers will influence the mean, but not the median; see the example below), or use random imputation

When we run experiments and observe many outliers in the data, we should repeat the data collection instead of simply removing them, and when the outliers are significant, consider using a robust statistical technique. Outliers are not always bad data points; however, when the data set is small, an outlier can greatly influence the data statistics (we could get skewed data, inflated or deflated means, a distorted range, and type I and type II errors). So it is better to do a thorough investigation and to have background domain knowledge while performing this analysis. The analysis differs case by case, and based on it we should take a cautious decision on whether to remove, keep or change the outlier.
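The 1.5 * IQR rule described at the top of this answer can be shown in a few lines. The data set is illustrative; note that `statistics.quantiles` uses the exclusive method by default, so other tools (which use different quartile conventions) may give slightly different fences:

```python
from statistics import quantiles

data = [12, 13, 13, 14, 15, 15, 16, 17, 18, 45]  # 45 is the suspect point

q1, _, q3 = quantiles(data, n=4)   # first and third quartiles
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = [x for x in data if x < lower or x > upper]
print(f"Q1={q1}, Q3={q3}, fences=({lower}, {upper}), outliers={outliers}")
```

Flagging is only the first step: as argued above, whether to remove, keep or replace the flagged point is a domain-knowledge decision, not a statistical one.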
  11. Net Promoter Score is a metric commonly used to measure customer loyalty. NPS scoring was created by Fred Reichheld of Bain & Company. NPS originally stood for Net Promoter Score; however, it has evolved to stand for Net Promoter System. Many Fortune companies, such as Apple, GE, Amex, Allstate and Walmart, use NPS, and it can be a focal point for organizational learning. It is calculated by asking the customer just one question: "On a scale of 0 to 10, how likely are you to recommend this product/service to family, friends or colleagues?" It is an 11-point scale, 0 being not likely to recommend and 10 being most likely to recommend the service/product. This is further categorized into 3 segments:
  • Promoters (ratings 9-10)
  • Passives (ratings 7-8)
  • Detractors (ratings 0-6)

Promoters are delighted by the service/product, loyal and most likely to recommend. Passives have a neutral opinion; they neither promote nor demote. Detractors are dissatisfied and most likely to switch to competitors, or to wherever the service is excellent (e.g., from SBI to AmEx). The objective of NPS is to listen to detractors, fix the dissatisfaction and move them towards promoting.

NPS = % of Promoters - % of Detractors

The NPS score ranges from -100 to 100: -100 signifies that all respondents are detractors, and 100 signifies that all are promoters. A score > 0 implies there are more promoters than detractors. Below are the NPS leaders by industry (Source: NICE Satmetrix – US Consumer Report 2018). Allstate finished 4.9 points higher in 2018 compared to year-end 2017. Some quick benefits of NPS include simplicity, ease of use, easy and quick follow-up, learning and experience, and adaptability.

Two ways to view the responses:
  • Categorical values (qualitative): observations clubbed into groups or categories (promoters, passives, detractors)
  • Ordinal values: observations on a rating scale (0-10), which has an implied order

With ordinal data it is easy to detect changes in responses, even when there is a change in the distribution. For instance, suppose 30% of respondents rate between 0-3 and 40% rate between 4-6; in the categorical classification, the entire 70% is considered detractors, and any movement within the detractors is untraceable. So why is the ordinal data converted to categorical data? Is statistical power and precision lost in the move? We know the standard error is derived from the variance, so applying variance:

Var[NPS] = Var[% of Promoters - % of Detractors]
Var[NPS] = Var[% of Promoters] + Var[% of Detractors] - 2 Cov[% of Promoters, % of Detractors]

Note that Cov[% of Promoters, % of Detractors] will be negative, as a customer cannot be both a promoter and a detractor at the same time. Therefore:

Var[NPS] = Var[% of Promoters] + Var[% of Detractors] - 2 (negative number)
Var[NPS] = Var[% of Promoters] + Var[% of Detractors] + (positive number)

Here the variance of NPS is greater than the sum of the variances of its parts, so some precision is indeed given up. Even so, the categorical view is preferred over the raw ordinal scale:
  • Categorizing also influences the customer: a customer who originally planned to give a 5 or 6 may choose a better category when the options are labeled
  • The categorization is not symmetrical, and converting from ordinal to categorical loses some information; but the extreme responses (most likely and least likely) are good predictors, so what is lost in the scale conversion is acceptable to lose
  • We could not start with a 3-point categorical scale, as we need at least 4 points to understand the intensity of agreement; the 11-point scale captures satisfaction at a granular level, both macro and micro, and is then categorized to understand the groups better
  • Categorizing/segmenting is a better way to notice patterns/movements between customers, which helps improve the experience at specific touchpoints
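The NPS formula above is straightforward to compute from raw 0-10 ratings. The ratings list here is illustrative:

```python
ratings = [10, 9, 9, 8, 7, 6, 10, 4, 9, 2]  # made-up survey responses

promoters = sum(1 for r in ratings if r >= 9)    # ratings 9-10
detractors = sum(1 for r in ratings if r <= 6)   # ratings 0-6
# Passives (7-8) count toward the denominator but not the score

nps = 100 * (promoters - detractors) / len(ratings)
print(nps)  # 5 promoters, 3 detractors out of 10 -> 20.0
```

Note that the two passives pull the score toward zero only through the denominator, which is why moving a passive to a promoter raises NPS even though no detractor was fixed.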
  12. We use a run chart to see whether there is any sign of special cause variation in our process data. It is a graphical representation of process performance plotted over time (hourly for continuous-flow processing, and most commonly in days or months). Most importantly, what is a run? It is one or more consecutive data points on the same side of the median (either above or below it). Variation can be common cause or special cause. Point to note: common cause variation is the outcome of a stable process and is predictable; special cause variation is the outcome of an unstable process and is not predictable. Using a run chart, we can find trends and patterns in the process data. Common patterns of non-randomness include:
  • Mixture patterns
  • Cluster patterns
  • Oscillating patterns
  • Trend patterns

When we create a run chart in Minitab, it detects whether the above patterns exist in the data. Sample data considered: gold price per 10 grams for the last 55 months. In the chart above we can witness clustering and trends.

Cluster pattern: in general, a set of points in one area of the chart, above or below the median line. The rule of thumb for a cluster is 6+ continuous nearby points above/below the median line. We can also check the p-value to see whether there is a potential cluster in the data; specifically, when the p-value is < 0.05, the data could indicate clustering. In the run chart referred to above, the approximate p-value for clustering is 0.000, which is less than 0.05, so we reject the null hypothesis. Clusters can be a sign of potential sampling or measurement issues.

Trend pattern: a sustained drift in the data, either upward or downward. The rule of thumb to conclude a trend is 6+ consecutive points each higher than the previous point in one continuous period, or the other way around, 6+ consecutive points each lower than the previous point. In the chart referred to above, we can observe an upward trend, and the p-value is also less than 0.05, indicating a potential trend.

Now that we know about clusters and trends, note the following: the opposite of a cluster is a mixture, and the opposite of a trend is oscillation.

Oscillation: when the process is not stable, we get data points spread above and below the median line that look like an oscillation. Rule of thumb: 14+ points in one continuous period increasing and then decreasing cyclically; for a p-value < 0.05, possible oscillation can be concluded.

Mixture: when there are no points near the center line, with 14+ points jumping upward and downward across the median line and a p-value < 0.05, we may have a potential mixture in the data set.

Run chart vs control chart: in a control chart, along with the center line we have upper and lower control limits. Another major difference is the center line itself: in a run chart the center line is the median, whereas in a control chart it is the mean. A run chart does not give any detail on statistical control limits, so we can see the control chart as an enhancement of the run chart. With a control chart we can check stability (whether the process mean and variation are stable, and whether any points are out of control) and normality (whether the data is normal or non-normal), but it does not provide a view of patterns. When we use a control chart from the Assistant view in Minitab, we get a Stability Report as output; it shows commonly used patterns for reference but does not highlight which pattern is present in the output. Control charts are useful over a run chart when the focus is on variation and on identifying potential deviations. However, the downside of control charts is that they can have the limitations below and cause unnecessary wastage of time:
  • False alarms
  • Incorrect assumptions
  • Incorrect control limits

Both run charts and control charts have their own advantages and serve different purposes [run – trends and patterns; control – stability]; each is useful depending on the objective, situation and analysis.
  13. In DMADV, focus is on new product/service design, unlike for existing product/service in DMAIC, during the last phase of DMADV, verification of design is performed and whether the design is capable of meeting needs of the customer is validated. Numerous pilot runs will be required to validate and verify the design outcomes. Major aspect of this phase to check whether all metrics which are designed are performing as expected. Conformance to Specification. Some of the common used tools in verify phase includes Control charts, control plans, Flagging, Poka Yoke, check sheets, SOP’s and work instructions. Software Application Design: In a new design viewpoint, Verification is whether Software Application developed in right way & Validation is whether Right Software Application is being produced In simple terms, verification is checking whether the application works perfectly without any errors/bugs and validation is checking whether the application is meeting the requirement and expectation Verification Validation Application and design review, code walk through, code inspection Black Box and White box testing It is static testing It is dynamic testing Performed first Usually performed post verification Verification done without software execution Validation done with software execution Automotive Manufacturing: Reference to a gearbox manufacturing, as per the new design in DMADV process, in actual manufacturing high level steps include preforming, annealing, machining, producing teeth, shaving, grinding and inspection. Here verification is, comparing the gearbox to design requirement of material, dimension, tolerance etc., that is all specs are verified Whereas, in validation, post inspection assembling gearbox and doing a dry run, test it to check whether it runs as expected. 
Verification | Validation
Done during development, review and inspection, production and scale-up | Usually done before scale-up and after the actual production
Random inspection can be done for verification | Stringent checks are done during validation

Validation can be done directly, skipping verification, in some scenarios, especially when we are not able to measure component outcomes or when the cost of verification is very high.

Medical Devices: Verification is usually done on the design: design input, process and output. It is done through tests, inspections and analysis. Validation is checking whether the intended need of the medical device is met.

Source: U.S. Food and Drug Administration (FDA)
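The gearbox example above can be sketched in code. This is purely illustrative: the names (GearboxSpec, verify_specs, validate_run) and the tolerance/noise numbers are hypothetical, chosen only to show the static-check vs dry-run split.

```python
from dataclasses import dataclass

@dataclass
class GearboxSpec:
    diameter_mm: float      # design dimension
    tolerance_mm: float     # allowed deviation
    max_noise_db: float     # intended-use requirement

def verify_specs(measured_diameter_mm: float, spec: GearboxSpec) -> bool:
    """Verification: compare the built part against design requirements (a static check)."""
    return abs(measured_diameter_mm - spec.diameter_mm) <= spec.tolerance_mm

def validate_run(noise_readings_db: list, spec: GearboxSpec) -> bool:
    """Validation: dry-run the assembled gearbox and check it behaves as intended."""
    return max(noise_readings_db) <= spec.max_noise_db

spec = GearboxSpec(diameter_mm=120.0, tolerance_mm=0.05, max_noise_db=70.0)
print(verify_specs(120.03, spec))              # dimension within tolerance
print(validate_run([62.1, 64.8, 66.0], spec))  # runs quietly enough in the dry run
```

Note that a part can pass verification (all specs met) yet fail validation (too noisy when actually run), which is exactly why both checks exist.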
  14. Pareto Analysis is used to separate the vital few from the trivial many parameters: the vital few contribute 20% of the causes and the trivial many the remaining 80%. This principle is otherwise called the 80-20 rule. It simply says that the majority of results come from a minority of causes. In numerical terms:

20% of inputs account for 80% of output
80% of productivity comes from 20% of associates
20% of causes account for 80% of the problem
80% of sales comes from 20% of customers
20% of efforts account for 80% of results

Example Dataset:

Metric                           Freq   Percentage   Cumulative
Demand Exceeds Supply             232     24.12%       24.12%
Incorrect Memory and CPU Usage    209     21.73%       45.84%
Bandwidth Constraints             203     21.10%       66.94%
Network Changes                    64      6.65%       73.60%
Fatal Bugs in Production           59      6.13%       79.73%
Poor Front-End Optimization        52      5.41%       85.14%
Integration Dependencies           39      4.05%       89.19%
Database Contention                34      3.53%       92.72%
Browser Incompatibility            23      2.39%       95.11%
Device Incompatibility             14      1.46%       96.57%
Hardware Conflicts                 13      1.35%       97.92%
Inadequate testing                  9      0.94%       98.86%
Too much code                       6      0.62%       99.48%
Exception handling                  5      0.52%      100.00%

Pareto Chart: (chart image not reproduced here)

Some of the common misuses include the below scenarios:

Working only on vital few parameters: There could be other potential parameters where the frequency is low, falling among the trivial many factors, yet whose criticality or severity is high; because the frequency is low, they are not considered and are underestimated. In the referred example, inadequate testing can be critical: insufficient test cases or a poor test review can lead to multiple production issues, which is not factored in when focusing only on the vital few. In an ideal situation, 80% of the resources should focus on reducing the vital few and 20% on minimizing the trivial many parameters.
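A minimal Python sketch (the variable names and the 80% cumulative cutoff are my choices, not from the original post) that reproduces the cumulative-percentage column of the dataset above and picks out the vital few:

```python
# Frequencies from the example dataset above.
freqs = {
    "Demand Exceeds Supply": 232, "Incorrect Memory and CPU Usage": 209,
    "Bandwidth Constraints": 203, "Network Changes": 64,
    "Fatal Bugs in Production": 59, "Poor Front-End Optimization": 52,
    "Integration Dependencies": 39, "Database Contention": 34,
    "Browser Incompatibility": 23, "Device Incompatibility": 14,
    "Hardware Conflicts": 13, "Inadequate testing": 9,
    "Too much code": 6, "Exception handling": 5,
}

total = sum(freqs.values())
cumulative = 0.0
vital_few = []
# Sort causes by frequency, descending, then accumulate percentages.
for cause, f in sorted(freqs.items(), key=lambda kv: kv[1], reverse=True):
    pct = 100 * f / total
    cumulative += pct
    print(f"{cause:32s} {f:4d} {pct:6.2f}% {cumulative:7.2f}%")
    if cumulative <= 80:   # vital few: causes up to the ~80% cumulative mark
        vital_few.append(cause)

print("Vital few:", vital_few)
```

Run on this dataset, the first five causes reach 79.73% cumulative and form the vital few; note the cutoff is applied to the cumulative column, not read off the left (frequency) axis of the chart.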
Using Pareto for defects belonging to multiple categories: Another misuse of Pareto analysis is combining defects from multiple categories. We need to clearly understand that the categories must be mutually exclusive.

Using Pareto when parameters are not collectively exhaustive: What is collectively exhaustive? Collectively, the failures in the list should cover all possible failures for the problem; that is, there should not be any gap. Definition: events are said to be collectively exhaustive if the list of outcomes includes every possible outcome.

Performing analysis on small data sets/few data points: For a statistically significant analysis, we have to use relatively large data sets rather than working on a small number of data points. At the same time, the number of categories needs to be practically large enough. A Pareto analysis like the one above does not make sense when the data set is relatively small.

Inaccurate measuring: Visually looking at the Pareto chart and selecting the vital few, rather than considering the causes whose cumulative % is less than 80%.

Analyzing defects only once: Pareto analysis should be performed before the problem is solved, during the implementation period to see the trend, and post-improvement. It is a repetitive, iterative process, rather than being run only once with focus on the defects identified during the early stages of the analysis. 80 + 20 should be 100, and not 75 + 20 or 90 + 40.

Considering 80 on the left axis: The left axis displays frequency and the right axis the percentage. Sometimes people read 80 on the left (frequency) axis, selecting the wrong vital few, which can lead to poor problem solving.

Flattened Pareto analysis: If there is any bias in the data collection methods, we might end up with the bars being flat; this happens mainly when vital problems are separated/broken into many small problems. It does not make sense to proceed with Pareto analysis in that case; rather, work on action plans based on severity and criticality.
Considering defects as root causes: Treating the vital defects identified during the analysis as root causes, and not analyzing further/deep-diving to understand the real root cause. This will not stop the defect from occurring; it merely applies a band-aid over the identified loopholes.
  15. Process maps are vital in any Six Sigma project. A process map visually represents the steps, actions and decisions that constitute the process. With the help of process maps, we can easily identify strengths and weaknesses in the process flow, and identify value-adds and non-value-adds depending on the level of the process map. Process maps are an essential component of a project and are useful at both micro and macro level. The sequence of process maps runs from Level 1 to Level 5, as commonly referred to in most organizations:

Level 1, the high-level SIPOC
Level 2, flow chart level
Level 3, swim lanes
Level 4, value stream mapping &
Level 5, KPIV, KPOV

For detailed insights on the levels of process maps, refer to the previous forum discussion at the below link:

Posted October 6, 2017
https://www.benchmarksixsigma.com/forum/topic/34895-process-mapping/?ct=1566389702

In most projects, basic process maps are created in the initial stages. In DMAIC improvement projects, we use a process flow chart & SIPOC in Define, the as-is process map in Analyze, and swim lanes, multilevel process maps and to-be process maps in Improve (if any change in flow is required). As-is describes the current state and to-be describes the future state; the to-be process map is essentially the improved flow of the current state.

Which level of process map do we select in a DMAIC project? The process map is selected depending on the complexity and type of the project: how much information is necessary, how specific the details to be captured are, and the intention and purpose. For instance, VSM is used in improvement projects, while for a complete radical transformation of processes, a detailed end-to-end process map is used, as in business process reengineering projects.
A few considerations while selecting process maps include:

- Scope of the project (in scope & out of scope)
- Improvement vs. process reengineering project
- Level of focus / granularity
- Objective (SLA improvement / optimization)
- Automation (full / partial / RPA / RDA)

Based on the above considerations, the level of process map is selected. Process maps are usually created in Visio; some organizations use advanced process mapping tools. In our organization we use Blueworks Live (from IBM) for cross-functional collaboration when designing process maps across our offices for improvement projects.

Reference: sample process map from Blueworks Live (image not reproduced here)

Selecting the right process map based on purpose:

Purpose | Process Map Type
For a process snapshot | SIPOC
Simple description of how the process works | High-Level Process Map
Radical process transformation | Detailed End-to-End Process Map (Within Scope)
BPR | Detailed End-to-End Process Map (Within Scope)
For critical problem solving | Detailed End-to-End Process Map (Within Scope)
For displaying different departments' operations | Swim Lane Map
For displaying interaction/collaboration | Relationship Map
For Lean implementation | Value Stream Mapping

Each type of map has its pros and cons; hence, based on the situation, criticality and purpose, we can select the relevant process map.