Mohamed Asif Abdul Hameed

Fraternity Member

Profile Information

  • Name
    Mohamed Asif Abdul Hameed
  • Company
    Global FinTech Company
  • Designation
    Manager - Process Excellence


Mohamed Asif Abdul Hameed's Achievements

  1. Synchronous (dependent) execution is sequential and happens one task at a time, whereas asynchronous (independent) execution allows concurrent, parallel operations. In the reference below, the total time taken to complete all 4 tasks in the asynchronous system is just 20 seconds, compared to 45 seconds for the synchronous, sequential execution. For instance, a Zoom meeting happens sequentially, which is synchronous, whereas email communication and online posts can be asynchronous and run concurrently to keep the target audience engaged across different roles, functions, regions, and programs. As the example above shows, requests stack up in a synchronous system; in a typical web service request scenario, clients have to wait in the queue until the previous loop is executed, and most of the time what they see is a timeout response. An asynchronous system allows multitasking, has better resource utilization with fewer wait times, and is more adaptable; its leading contribution is enhanced throughput. A synchronous system, on the contrary, performs one function at a time and follows a rigid sequence. Thus, it is advantageous to use an asynchronous system, especially in an agile, multi-request environment; however, it is wise to use synchronous execution in reactive systems. To conclude, it is better to evaluate and identify the dependencies in the processes to select the optimal approach that works for the organization.
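The timing difference described above can be sketched with Python's asyncio. The four task durations are hypothetical stand-ins (scaled down by 100x so the demo runs quickly), chosen so that the sequential total is 45 units while the longest single task is 20 units, matching the example's 45 s vs 20 s comparison.

```python
import asyncio
import time

# Hypothetical task durations; 1 unit = 0.01 s for a fast demo.
# Sum = 45 units (sequential total), max = 20 units (concurrent total).
DURATIONS = [10, 20, 5, 10]
SCALE = 0.01

async def task(d):
    await asyncio.sleep(d * SCALE)  # stands in for real I/O work
    return d

async def run_sequential():
    # Synchronous (dependent): one task at a time -> total ~= sum
    for d in DURATIONS:
        await task(d)

async def run_concurrent():
    # Asynchronous (independent): all tasks overlap -> total ~= max
    await asyncio.gather(*(task(d) for d in DURATIONS))

t0 = time.perf_counter()
asyncio.run(run_sequential())
sync_elapsed = time.perf_counter() - t0

t0 = time.perf_counter()
asyncio.run(run_concurrent())
async_elapsed = time.perf_counter() - t0

print(f"sequential: {sync_elapsed:.2f}s, concurrent: {async_elapsed:.2f}s")
```

Running this shows the concurrent version finishing in roughly the time of the single longest task, illustrating the throughput gain claimed above.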
  2. Activity on Arrow (AOA): as the name implies, these network diagrams denote each activity as an arrow, and the nodes denote the start and end of an activity. Activity on Node (AON): this method is also referred to as the Precedence Diagramming Method (PDM); here the arrows represent the logical relationships among the activities and the nodes denote the activities. Decision: Even though both methods accomplish the same results, practitioners prefer AON over AOA, as it does not require the use of dummy activities. A dummy activity (connecting link) is an imaginary activity which requires neither time nor resources, yet is used to identify the dependence among operations while maintaining the network logic and avoiding opacity (difficulty in interpretation). In AON, as activities are represented by nodes and their interdependencies are denoted directly by connecting arrows, there is accordingly no need for dummy activities. Considerations:
     - AOA can have several possible networks illustrating the same project, whereas AON representations are unique
     - AON diagrams are comparatively easier to create and interpret
     - When it comes to amendments, design changes are easier in AON than in an AOA structure
     - AON focuses on tasks, whilst AOA focuses on events
     - Leaders every so often get confused by AOA networks and prefer to see AON representations
     Let's consider the below example (Activity, ID, Duration) and design the respective AOA and AON structures:
     - Choose Project, A, 1
     - Discovery, B, 3
     - Get Go-ahead, C, 2
     - Data Collection, D, 3
     - Assemble Team and Kickoff, E, 2
     - Finalize Actionables, F, 3
     - Leadership Summary, G, 1
     Although AON is advantageous, it becomes challenging in the situations listed below:
     - Path tracking by activity number is hard
     - When there are multiple dependencies, drawing and interpretation become difficult
     Ruling: Certain planning and optimization techniques precisely require the AOA network structure, and some might require the AON format. It is hard to prefer one; opting for AOA over AON, or conversely, is solely based on the specific project requirements. Nevertheless, the advantages of AON are more apparent, and it takes the upper hand over AOA.
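The scheduling arithmetic on an AON network can be sketched with a forward pass. The durations below come from the A–G example above; the dependency structure is an assumption for illustration only (D and E are taken as parallel branches after C), since the original example does not state the precedences.

```python
# Forward-pass (earliest-finish) calculation on an AON network.
# Durations are from the example; the precedences are illustrative
# assumptions: A -> B -> C, then D and E in parallel, then F -> G.
durations = {"A": 1, "B": 3, "C": 2, "D": 3, "E": 2, "F": 3, "G": 1}
predecessors = {
    "A": [], "B": ["A"], "C": ["B"],
    "D": ["C"], "E": ["C"],          # parallel branches after C
    "F": ["D", "E"], "G": ["F"],
}

earliest_finish = {}
for node in ["A", "B", "C", "D", "E", "F", "G"]:  # already topologically ordered
    start = max((earliest_finish[p] for p in predecessors[node]), default=0)
    earliest_finish[node] = start + durations[node]

print(earliest_finish["G"])  # project duration along the longest path -> 13
```

Because nodes are activities and arrows are plain dependency edges, no dummy activities are needed, which is exactly the AON advantage described above.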
  3. “Data is more valuable than oil” — nevertheless, are we leveraging it to its full capacity? The answer is simply “No”, and it becomes dark data! Gartner coined the term ‘dark data’ and defines it as “the information assets organizations collect, process, and store during regular business activities, but generally fail to use for their analytics, business relationships and direct monetizing”. Dark data can be generated by an organization’s systems, devices, and interactions; typically it is CRM, ERP, SCADA, HTTP, IoT and even Wi-Fi systems that collect the data. It can be stored physically, on storage peripherals, or in the cloud. Most of this data is unstructured. Examples of dark data include, but are not limited to:
     - Application logs
     - Customer records
     - Geolocation
     - Survey data
     - Financial statements
     - Customer addresses
     - Contact details
     - CCTV footage
     - Emails
     - Chat messages
     - Medical records
     - Zip files
     - Archived web content
     - Code snippets
     The biggest challenges with dark data relate to:
     - Security dangers (hacks)
     - Compliance issues
     - Data authenticity
     - High storage cost
     - Brand reputation
     - Opportunity cost
     The risk associated with dark data can be mitigated by adhering to the audit and retention policies defined by the organization; however, some best practices can have a high impact in managing that risk. The model below shows how data is collected, stored, retained and deleted, from an analyze, categorize and classify approach. Model explained: start with data classification (Public, Internal, Restricted). While classifying, it is vital to bucketize based on a few critical factors, viz.:
     - Is it critical data?
     - Is it a permanent document?
     - Is it proprietary intellectual property?
     - Does the document/data serve the current needs of the operations?
     - Is there a legal or regulatory requirement? (For instance, HIPAA mandates a minimum retention of 6 years. On the contrary, GDPR allows data storage for an extended period, but solely for purposes of public interest, statistical analysis or historical research.)
     - Is it hot data or cold data? (Hot data is accessed frequently and used for quick decisions, whereas cold data is old and not frequently used.)
     Based on the classification, then decide whether to store or delete; if we want to store, define the retention period and how the data will be useful. When we follow this approach, along with regular data audits and internal Data Life Cycle Management (DLCM), we can make maximum use of the data pool. Ways to leverage dark data:
     - Text mining / word mining
     - Data mining methods
     - Voice-to-text analytics
     - Data analytics
     - Prescriptive analytics
     - Behavior analysis, which can be used to train AI models for prediction
     - Big data analytics and visualization (SAP HANA)
     - Data forecasting
     - Trend analysis
     - Investigating past complaints
     Google’s approach to data management: “Some data you can delete whenever you like, some data is deleted automatically, and some data we retain for longer periods of time when necessary. When you delete data, we follow a deletion policy to make sure that your data is safely and completely removed from our servers or retained only in anonymized form.” Apple’s approach to data storage: “Apple uses personal data to power our services, to process your transactions, to communicate with you, for security and fraud prevention, and to comply with law. We may also use personal data for other purposes with your consent.” Final say: Data violations have earned a lot of notice in recent years as businesses have become more dependent on digital data, cloud computing, and remote working. As a result, compliance and regulations have emerged as a requirement for ensuring information security. Data analytics application suites can manage unstructured data effectively and provide intelligent identification of data sets in the organization, in line with industry legal and regulatory requirements.
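The classify-then-retain decision described in the model above can be sketched as a simple lookup. The categories follow the text (Public/Internal/Restricted, hot/cold); the retention periods themselves are illustrative assumptions, not a statement of any regulation, apart from reusing the 6-year HIPAA-style minimum mentioned above for restricted data.

```python
# A minimal sketch of the classify-then-retain decision. Retention
# periods are illustrative assumptions; only the 6-year figure echoes
# the HIPAA minimum cited in the text.
RETENTION_YEARS = {
    ("Restricted", "hot"): 6,    # regulated records in active use
    ("Restricted", "cold"): 6,   # regulated but rarely accessed -> archive
    ("Internal", "hot"): 3,
    ("Internal", "cold"): 1,
    ("Public", "hot"): 1,
    ("Public", "cold"): 0,       # 0 -> candidate for deletion
}

def retention_decision(classification: str, temperature: str) -> str:
    """Return 'delete' or 'retain Ny' for a classified data asset."""
    years = RETENTION_YEARS[(classification, temperature)]
    return "delete" if years == 0 else f"retain {years}y"

print(retention_decision("Restricted", "cold"))  # retain 6y
print(retention_decision("Public", "cold"))      # delete
```

In practice such a table would be driven by the organization's own DLCM policy rather than hard-coded values.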
  4. Berkson’s paradox is a special case of collider bias. In simple terms, this bias results from conditioning on a common effect of at least two causes. In even simpler terms: it happens when two variables appear to be negatively correlated in the sample data, yet they are actually positively correlated (or independent) in the overall population. For instance, consider two ancestors, namely exposure (E) and disease (D), and a common descendant (C). Here, conditioning on C leads to a distortion of the association between E and D; that is Berkson’s fallacy. In the example below, if we condition on the collider ‘hospitalization’, we notice a reversal in the association between smoking and Covid. This is very similar to Berkson’s original work in 1946, where he observed a negative correlation between cholecystitis and diabetes in hospital patients, in spite of diabetes being a risk factor for cholecystitis. One of the best ways to prevent this bias is to collect simple random samples from the population, which by itself reduces errors in data gathering. Ensure the population is properly defined, and then examine statistically whether the sample is an unbiased representation of the population.
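The smoking/Covid/hospitalization reversal can be reproduced with a small simulation. All probabilities below are made-up illustration values: smoking and Covid are generated independently, but each raises the chance of hospitalization (the collider), so conditioning on hospitalization manufactures a negative association.

```python
import random

random.seed(42)

def pearson_r(xs, ys):
    """Pearson correlation, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Smoking and Covid are independent in the simulated population...
population = []
for _ in range(100_000):
    smoker = random.random() < 0.3
    covid = random.random() < 0.1
    # ...but each independently raises P(hospitalization) -- the collider.
    p_hosp = 0.02 + 0.3 * smoker + 0.5 * covid
    population.append((int(smoker), int(covid), random.random() < p_hosp))

r_population = pearson_r([s for s, c, h in population],
                         [c for s, c, h in population])
hosp = [(s, c) for s, c, h in population if h]
r_hospitalized = pearson_r([s for s, c in hosp], [c for s, c in hosp])

print(f"population r   = {r_population:+.3f}")   # near zero
print(f"hospitalized r = {r_hospitalized:+.3f}") # spuriously negative
```

The population-level correlation is essentially zero, while among the hospitalized subset it is strongly negative, which is exactly the distortion the paragraph describes.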
  5. Dimensions hold qualitative values such as names and dates, whereas measures hold numeric, quantifiable values. Combinations of discrete and continuous are viable with both dimensions and measures, so we can have discrete dimensions, continuous dimensions, discrete measures, and continuous measures as possible data types. With both continuous and discrete measures, aggregation (sum, average, count, min, max, percentile, std. dev, variance etc.) is possible; however, an aggregated value is shown as a continuous data value for a continuous measure, whereas for a discrete measure the aggregated value is shown as a categorical value.
     Dimension examples (descriptive fields):
     - Client Name
     - Client Segment
     - Client ID
     - State
     - City
     - Country
     - Postal Code
     Measure examples (numeric fields):
     - Profit
     - Unit Cost
     - Order Quantity
     - Sales
     - Salary
     Most data visualization tools auto-detect data types; for instance, Tableau automatically detects data types and represents them as symbols. Differences: I have used the Tableau data visualization tool for reference to give an elaborate comparison between dimensions and measures. Below are some examples of the effective usage of dimensions and measures in an overall data visualization dashboard:
     - Example: Sales Dashboard
     - Example: Marketing Dashboard
     - Example: Revenue and Customer Distribution Overview
     Dimensions and measures are key concepts in any data visualization tool, as they play a major role when working with data sets.
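The dimension/measure split boils down to "slice by the qualitative field, aggregate the numeric field". A minimal sketch with made-up sales rows, where `state` acts as the dimension and `sales` as the measure:

```python
from collections import defaultdict

# Hypothetical rows: 'state' is a dimension, 'sales' is a measure.
rows = [
    {"state": "TX", "sales": 120.0},
    {"state": "TX", "sales": 80.0},
    {"state": "CA", "sales": 200.0},
    {"state": "CA", "sales": 50.0},
]

totals = defaultdict(float)   # SUM(sales) grouped by the dimension
counts = defaultdict(int)     # COUNT(*) per dimension member
for row in rows:
    totals[row["state"]] += row["sales"]
    counts[row["state"]] += 1

avg_sales = {state: totals[state] / counts[state] for state in totals}
print(dict(totals))  # {'TX': 200.0, 'CA': 250.0}
print(avg_sales)     # {'TX': 100.0, 'CA': 125.0}
```

This is the same groupby-and-aggregate operation a tool like Tableau performs when a dimension is placed on a shelf and a measure is aggregated against it.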
  6. Kappa is defined as the ratio of the proportion of times that the appraisers agree to the maximum proportion of times that the appraisers could agree. Kappa ranges from -1 to 1; the larger the kappa, the more agreement in that category. For instance, a kappa value of 1 represents absolute agreement. The tables below show commonly accepted values for reliability measures.
     Cohen’s kappa value interpretation:
     - 0.91 - 1.00: Almost perfect
     - 0.80 - 0.90: Strong
     - 0.60 - 0.79: Moderate
     - 0.40 - 0.59: Weak
     - 0.21 - 0.39: Minimal
     - 0.00 - 0.20: None
     Krippendorff’s alpha value interpretation:
     - 0.80 - 1.00: Reliable value
     - 0.67 - 0.79: Acceptable for tentative conclusions
     - 0.00 - 0.66: Not acceptable
     Take away: with caution, statistics practitioners should primarily examine the marginal distributions and not uncritically interpret the kappa value, whether high or low, as prevalence, odds, rater independence, the impact on diagnosis and other additional factors can significantly influence the kappa statistic. Kappa statistics represent the degree of absolute agreement among ratings; popular statistics include:
     - Cohen’s kappa: measures assessment agreement between two raters
     - Fleiss’s kappa: a generalization of Cohen’s kappa (more than 2 raters)
     In most statistical tools, such as Minitab, Fleiss’s kappa is calculated by default for Attribute Agreement Analysis (AAA). As we can note here, Fleiss’s kappa is based on the observed agreement corrected for the agreement expected by chance; Krippendorff’s alpha, on the contrary, is based on the observed disagreement corrected for the disagreement expected by chance.
     Key differences:
     Fleiss’s kappa:
     - Cannot handle missing values
     - Expected agreement assumes an infinite sample size
     - Best suited for nominal data
     Krippendorff’s alpha:
     - Can handle missing values
     - The actual sample size is considered
     - Can handle all data types
     Both Fleiss’s kappa and Krippendorff’s alpha can be equally recommended when the data is nominal and there are no missing values. However, Krippendorff’s alpha is preferred in the below situations, viz.:
     - Whenever data is missing
     - For data of higher than nominal order (ordinal, interval, ratio)
     - When there is bias in the distribution of disagreements (even strong bias will not have a distorting effect)
     - When different units have different numbers of raters (usually when the number of raters is more than 2; applicable at any scale level)
     - When obtaining observation ratios by pair counting is unreliable in small samples
     Final take away: before deep diving into reliability data, it is recommended that practitioners select the index of inter-coder reliability based on the data properties and assumptions, including the level of measurement of each variable and the number of coders. Most of the time, Krippendorff’s alpha is more difficult and complex to compute than Fleiss’s kappa; however, Krippendorff’s alpha provides higher reliability, particularly when research conditions are not perfect.
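The "observed agreement corrected for chance agreement" definition can be made concrete for the two-rater case. The confusion-matrix values below are made up for illustration; the formula kappa = (p_o - p_e) / (1 - p_e) is the standard Cohen's kappa.

```python
def cohens_kappa(matrix):
    """Cohen's kappa from a square rater-A-by-rater-B confusion matrix.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from the marginals.
    """
    n = sum(sum(row) for row in matrix)
    p_o = sum(matrix[i][i] for i in range(len(matrix))) / n
    p_e = sum(
        (sum(matrix[i]) / n) * (sum(row[i] for row in matrix) / n)
        for i in range(len(matrix))
    )
    return (p_o - p_e) / (1 - p_e)

# Illustrative data: rater A on rows, rater B on columns.
# Both say pass: 20; A pass / B fail: 5; A fail / B pass: 10; both fail: 15.
matrix = [[20, 5],
          [10, 15]]
print(round(cohens_kappa(matrix), 3))  # 0.4 -> "Weak" on the scale above
```

Here p_o = 0.7 but chance agreement p_e = 0.5, so the chance-corrected kappa is only 0.4, which is why the raw agreement percentage alone overstates reliability.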
  7. Cobots are similar to industrial robots but smaller, comparatively cheaper, and much more user-friendly. For large-scale mass production, industrial robots provide the best efficiency; however, for small and medium businesses, cobots can be much more effective for automation on the shop floor. Cobots are collaborative robots: a collaboration between human and robot in a shared space that can optimize human work in various aspects. The International Federation of Robotics (IFR) defines multiple levels of collaboration, viz., coexistence, sequential collaboration, cooperation and responsive collaboration.
     Traditional robots are the best fit for:
     - Large batches, small variability
     - Complex deployment
     - A consistent environment
     - Human monitoring
     - Focus on robot automation
     - Big investments, longer ROI
     Alternatively, cobots can be the best alternative for:
     - Low-volume, high-mix production
     - Fast and easy deployment
     - Agile operation that adapts to the environment
     - Collaborative work
     - Focus on end-of-arm tooling (EOAT)
     - Lower upfront cost, faster ROI
     Cobots in the service industry: this is often referred to as RaaS (Robots-as-a-Service), and a few of the use cases include:
     - Robotic-assisted knee surgery (robotic arm assistance)
     - Food robots: packaging (wrappers, vacuum sealers)
     - Food robots: other applications (palletizing, pick-and-place, logistics automation)
     - Product quality inspection (cobot arms for visual inspection using 3D cameras)
     - Aviation (cobot co-pilots, mainly for military UCAVs (unmanned combat aerial vehicles))
     - Agriculture (smart farming: once the cobot identifies flowers, a fan is activated for effective pollination)
     - Dairy (robotic milking)
     - Restaurant cobots
     These cobots are identified and selected based on critical factors such as:
     - Reach (500 - 900 mm)
     - Payload (2 kg - 16 kg)
     - Footprint (Ø 128 - 200 mm)
     - Weight (10 kg - 35 kg)
     Technology advancements such as IoT features with capabilities like heat sensors and thermal cameras help cobots perform tasks more accurately for their use cases. With the anticipated rise of 5G, cobots could become fully automated and perform tasks with even greater accuracy.
  8. A sandbox is one of several testing environments, each of which has a particular purpose. It is critical that testers know all aspects of these environments, which leads to a better testing and QA strategy in the organization. Types of environments:
     - DEV (development)
     - QA (testing)
     - SANDBOX (isolated virtual environment)
     - STAGING (pre-production)
     - PROD (live)
     - DR (disaster recovery)
     Sandbox-evading malware is a known type of malware (malicious software) that can identify whether or not it is running inside a virtual machine environment. A sandbox is a highly controlled environment that can be used to test unverified programs that may contain malicious code, with no harm to the host device. Sandbox-evading malware, however, does not execute its malicious code until it is out of the controlled environment. There are several recent instances in the industry where AI algorithms have been used by such malware to evade virtual environments. Malware writers (i.e., cybercriminals) are users of sandbox environments themselves, and in fact there are more than 500 evasion techniques for avoiding detection and analysis. To list a few common evasion techniques:
     - Human-like behavior (interaction detection, such as scrolling and mouse clicks)
     - System interaction detection (shutdown by payload, self-debugging)
     - File systems (specific files, directories, strings)
     - Hooks (mouse hooks)
     - Generic OS queries (specific username, computer name, host name)
     - Global OS objects (specific global mutexes, virtual devices, pipes and objects)
     - Windows Management Instrumentation (WMI) (Win32_Process class, task scheduler, last boot time, last reset time)
     - Timing-based evasion and delayed execution (stalling, droppers, logic bombs, extended sleep)
     - Obfuscating internal data (encrypting API calls, domain generation algorithms (DGA))
     - Firmware tables (specific strings in the SMBIOS table)
     - UI artifacts (class names)
     - Registry (registry paths, keys)
     - OS features (debug privileges, unbalanced stack)
     - Processes (specific running processes, loaded libraries)
     - Network (specific MAC addresses, adapter names, anti-emulation)
     To address evasion, organizations need to deploy strong systems (typically SaaS) that can counter anti-sandbox strategies and that continuously evaluate and monitor threat trends, including vulnerabilities, exploits, active attacks, viruses and further malware such as spam, phishing, and malicious web content. Further factors to leverage while monitoring include:
     - Behaviors exhibited
     - Data reputation (whether hosted on a suspicious IP/URL)
     - Digital certificate (correctly signed?)
     - VirusTotal (known sample?)
     - Industry reputation (popular application?)
     Alongside, deploying detection mechanisms such as the list below can effectively control and counter evasion:
     - Changing sleep duration dynamically
     - Human interaction simulation
     - Adding real-environment and hardware artefacts
     - Performing static analysis in addition to dynamic analysis
     - Using fingerprint analysis
     - Using behavior-based analysis
     - Customizing the sandbox
     - Adding kernel analysis
     - Implementing ML
     - Considering content disarm and reconstruction (CDR) as an extra security layer
     These measures, when combined and deployed, can result in an effective security solution for countering malware evasion.
  9. A Prioritization Matrix is an essential and useful tool which assists in breaking down tasks and activities when there is too much on the plate. It facilitates decision making and helps leaders focus on the activities which are most relevant, urgent, important and required for project and process sustenance. There are many variants of the 2x2 prioritization matrix; a few frequently used ones are listed below:
     - RVCE Matrix (Risk, Value, Cost and Effort)
     - Eisenhower Matrix (Urgent, Important)
     - MoSCoW, a value-based prioritization technique (Must Have, Should Have, Could Have, Won't Have)
     - WSJF (Value and Effort)
     - Kano (Performance, Must-be, Attractive, Indifferent)
     RVCE Matrix: decision criteria are Risk, Value, Cost and Effort; the decisions/outcomes are Consider, Avoid, Investigate and Prioritize.
     Eisenhower Matrix: decision criteria are important or not, urgent or not; the decisions/outcomes are Do, Decide, Delegate, Delete.
     MoSCoW: prioritization based on value/features; the decisions are Must Have, Should Have, Could Have, Won't Have.
     WSJF: decision criteria are Value and Effort, high and low; the decisions/outcomes are Do Now, Do Next, Do Later, Don't Do.
     Kano: decision criteria are Satisfaction and Functionality.
     Other applications/variants of prioritization models include Lean Prioritization, where effort is compared with ROI/degree of impact to classify outcomes as Low Priority, Just Do It, Reconsider, and Complex but Worthwhile; Value vs. Risk; Value vs. Effort; and Value vs. Complexity.
     Benefits:
     - Allows us to analyze and compare results
     - Removes bias
     - Allows us to objectively rank priorities
     - Determines the most critical focus areas
     - Keeps the project progressing
     - Better time management
     Depending upon the type of project and its considerations, we can select any of the above-mentioned models to focus on the right project and better manage our time. My personal favorite is the Eisenhower Matrix, which combines the essence of all the prioritization models available.
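The Eisenhower Matrix reduces neatly to a two-boolean lookup. The backlog items below are hypothetical examples; the four quadrant actions are the ones listed above.

```python
# Eisenhower matrix as a lookup: (important?, urgent?) -> action.
ACTIONS = {
    (True, True): "Do",
    (True, False): "Decide",     # schedule it for later
    (False, True): "Delegate",
    (False, False): "Delete",
}

def eisenhower(important: bool, urgent: bool) -> str:
    return ACTIONS[(important, urgent)]

# Hypothetical backlog items: (name, important, urgent)
tasks = [
    ("Fix production outage", True, True),
    ("Renew certificates", True, False),
    ("Reply to vendor email", False, True),
    ("Re-read old reports", False, False),
]
for name, imp, urg in tasks:
    print(f"{name}: {eisenhower(imp, urg)}")
```

The same pattern generalizes to any 2x2 matrix above: swap the two criteria and the four outcome labels.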
  10. Hanedashi: auto-eject / auto-unloading / automatic ejection. This technique provides automation for machines to remove finished parts from the process. Below is a typical example of manual and automatic unloading of the finished product. Manual unloading: Automatic unloading: Manual effort is put only into loading, not into unloading. Hanedashi is crucial for a "Chaku-Chaku" ("Load-Load") line. In Chaku-Chaku, the operator picks the finished part from Machine A and loads it into Machine B to complete the cycle. If Hanedashi is used, the machine can itself unload the parts without the operator's effort. Thus Hanedashi can effectively eliminate the below wastes in a lean manufacturing setup:
     - Transportation: wasted time moving materials unnecessarily
     - Motion: wasted time and effort due to unnecessary movements by operators
     - Waiting: wasted time waiting for a finished product to complete the next steps
     Some of the benefits of Hanedashi:
     - Operating multiple machines at the same time becomes possible
     - Operator productivity improvement
     - Improved working conditions due to better ergonomics
  11. A/B Testing (split testing, bucket testing): A/B testing lets marketers better understand which formatting of a website or piece of content keeps customers and clients most engaged. It simply compares two versions of a webpage (sometimes more) to identify which variant attracts more clicks. It is used predominantly for web pages, but also for comparing emails, application interfaces and advertisements. The testing process is simple; below are a few of the milestones:
     - Collecting data
     - Identifying conversion goals
     - Generating a hypothesis
     - Creating variations
     - Running the experiment
     - Analyzing results
     Statistical analysis is performed to identify which variation/version performs better and is in line with the conversion goals. Analytics tools like Stats Engine, Bloomreach, Siteimprove and Semrush use built-in advanced statistical models which can produce results in real time. Is A/B testing similar to multivariate testing? A/B testing compares 2 pages with entirely different headlines, text and images; a multivariate test compares otherwise identical pages in which elements such as fonts and sizes differ. Below are some essential A/B testing variables:
     - Layout
     - CTAs (calls to action)
     - Content
     - Offers
     - Color
     - Size
     - Email subject line
     - Headlines
     - Email sender
     - Pricing scheme
     - Copy length
     - Landing page
     - Tone
     - Images
     - Timing
     - Frequency
     - Video vs. text
     - Sales forms
     - Targeting and personalization
     - Sales copy
     - Data visualization
     Mostly used in the below industries:
     - Media
     - Travel
     - E-commerce
     - Banking
     - FinTech
     - Technology
     Benefits of A/B testing:
     - Helps achieve conversion goals/sales
     - Helps in making data-driven decisions
     - Improved user engagement
     - Reduced bounce rates
     - Ease of analysis
     It would be wise to use A/B testing and multivariate analysis together: first use A/B testing to determine which layout and design converts well, and then use multivariate testing to fine-tune the page's formatting to attract widespread traffic.
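The statistical check behind the scenes is typically a two-proportion test on conversion rates. A minimal sketch with made-up traffic figures, using the standard pooled two-proportion z-test (the commercial engines named above use their own, often sequential, methods):

```python
import math

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B convert differently from A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant A converts 200/5000, variant B 260/5000.
z, p = ab_test(200, 5000, 260, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these illustrative numbers z is about 2.86 and p well below 0.05, so the variant B lift would be declared statistically significant at the usual 5% level.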
  12. Both correlation and covariance measure the linear association between two variables. To be specific and make the key difference apparent: correlation measures the strength of a relationship between two variables, while covariance measures the direction of a relationship between two variables. Specific comparison:
     - Values: correlation is standardized; covariance is unstandardized
     - Units: correlation is unit-free (dimensionless); covariance carries units (the product of the two variables' units)
     - Scale: a change in scale does not affect the value of correlation, but it will affect the value of covariance
     - Range: correlation lies between -1 and +1; covariance ranges from -∞ to +∞
     Why does the correlation value lie between -1 and +1? Correlation is covariance divided by the product of the standard deviations of the variables, hence the value lies between -1 and +1; it is a scaled-down version of covariance.
     Inferences from analysis:
     Covariance inference:
     - Positive: both variables increase or decrease together (directly proportional)
     - Negative: inverse relationship; if one variable increases, the other decreases (inversely proportional)
     Correlation inference:
     - +1: perfect positive linear relationship
     - 0: no linear relationship
     - -1: perfect negative linear relationship
     Some more examples:
     Correlation (Pearson r) examples:
     - 0: no relationship
     - 0.466: moderate positive relationship
     - 0.95: large positive relationship
     - -0.96: large negative relationship
     Covariance examples:
     - 0.0036: positive
     - 0: no co-variation
     - -0.007: negative
     - -0.0376: negative
     Covariance can typically take any value, and it is toilsome to interpret the number.
     Sample data set (G Price, CO Price):
     49000, 95.17
     48600, 98.4
     48600, 98.4
     48600, 98.4
     48250, 97.17
     48000, 97.16
     47800, 101.24
     47800, 101.24
     47800, 101.24
     47950, 103.66
     Based on this data set, the association summary is: Correlation (R) = -0.74682; Covariance (G, CO) = -744.37.
     There are numerous applications of correlation and covariance; some are listed below:
     - Data science: covariance is one of the frequently used measures; insights from covariance analysis give more clarity on multivariate data.
     - Stock market: investors, traders and analysts often use correlation and covariance, specifically to understand the hidden correlation between the stock returns of one company and another, which can help bring down and minimize investment risk.
     - Implied Correlation Index by CBOE (Chicago Board Options Exchange): tracks the correlation between the implied volatilities of options and a weighted portfolio of options.
     - Banking and insurance: exploratory analysis can give more insight into variable relationships, which assists in customer churn and retention analysis.
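The association summary above can be reproduced directly from the sample data set. Note that the reported -744.37 corresponds to the population form of covariance (dividing by n rather than n-1); the correlation is the same under either convention.

```python
# Reproducing Correlation(R) = -0.74682 and Covariance(G, CO) = -744.37
# from the sample data set above (population forms, dividing by n).
g  = [49000, 48600, 48600, 48600, 48250, 48000, 47800, 47800, 47800, 47950]
co = [95.17, 98.4, 98.4, 98.4, 97.17, 97.16, 101.24, 101.24, 101.24, 103.66]

n = len(g)
mg, mco = sum(g) / n, sum(co) / n
cov = sum((x - mg) * (y - mco) for x, y in zip(g, co)) / n
sd_g  = (sum((x - mg) ** 2 for x in g) / n) ** 0.5
sd_co = (sum((y - mco) ** 2 for y in co) / n) ** 0.5
r = cov / (sd_g * sd_co)  # covariance scaled by both standard deviations

print(f"Covariance  = {cov:.2f}")   # -744.37
print(f"Correlation = {r:.5f}")     # -0.74682
```

Dividing the covariance by both standard deviations is exactly the standardization step described above, which is what pins the result into the [-1, +1] range.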
  13. Screening designs screen out factors that are not statistically significant. Intention: used for exploratory analysis. Focus: estimating main effects in the presence of negligible interactions. They are most suitable for industrial experimentation during the early stages of design. When there are many potentially significant factors, a screening design can be used to condense the list to fewer ones. Time and again it becomes tedious to study all factors in detail; a screening design can be used effectively here, and compared to traditional design methods it requires fewer experimental runs. In short, the experiments are "small and efficient". Often-used screening designs include the 2-level fractional factorial design, the 2-level full factorial design, the definitive screening design, and mixed-level designs. A few specific designs: Plackett-Burman designs, Taguchi methods, and Cotter designs. However, since there are many different screening designs, some considerations for the best design fit are listed below. Questions we can ask before finalizing the design method:
     - What is the overall goal?
     - What are the specific response types?
     - How are these responses measured?
     - What are the factors that need to be considered?
     - What should be the range for each factor?
     - Do we have blocking factors?
     - Are we working on a split-plot problem?
     - Do we have problematic combinations of factor settings?
     Answers to the above questions effectively guide us in picking the best design model for the screening. Let's take the case of a chemical product ABC as an example and use a Plackett-Burman (2-level fractional factorial) design. Based on preliminary analysis, 11 potential factors were identified that might impact the yield of the chemical product, as listed in the table below. If we simply ran a 2-level full factorial design, the total number of runs would be 2^11 = 2048. Since some of the interactions between the potential factors are likely of trivial importance, a Plackett-Burman design can be used effectively here. Let's run the design with the base number of runs set to 12. The design settings, and the factor combinations with their yields for the 12 runs, are shown in the tables below. Inference from the effect probability plot can be used to identify the significant few (the important factors in the experiment), shown as red squares in the plot; this brings the potential factors down from 11 to 5. The properties of the generated design can also be evaluated effectively through various other output metrics, viz., power analysis, the prediction variance profile, the effect probability plot, the fraction of design space plot, the prediction variance surface, estimation efficiency, the alias matrix, the color map on correlations, and design diagnostics. Screening designs are not limited to industrial experimentation; they can be applied in other functions as well. For instance, consider a marketing example: 2^7 = 128 runs could be too many for the tight delivery timeline of Product X, but a screening design based on preliminary analysis can help the product reach new sales success stories.
     Benefits of screening designs:
     - Relatively inexpensive (saves money)
     - An efficient approach for process improvement
     - Experiments can run with limited resources
     - Effective simulation
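The 12-run matrix used in the example can be constructed by hand: the classic Plackett-Burman 12-run design is built by cyclically shifting a standard generating row of +/- levels and appending an all-minus run, giving a balanced 12 x 11 array instead of 2048 full-factorial runs.

```python
# Building the 12-run, 11-factor Plackett-Burman design: cyclic shifts
# of the standard generating row plus a final all-low run.
GENERATOR = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]  # classic PB12 row

design = []
for shift in range(11):
    # Rotate the generator right by `shift` positions -> one run per shift.
    design.append(GENERATOR[-shift:] + GENERATOR[:-shift])
design.append([-1] * 11)  # 12th run: every factor at its low level

# Each factor column is balanced: six runs at +1 and six at -1,
# so main effects can be estimated in only 12 runs.
for col in range(11):
    assert sum(row[col] for row in design) == 0

print(len(design), "runs x", len(design[0]), "factors")  # 12 runs x 11 factors
```

Each run is then executed with factors set per its row, and the yield column is regressed on these +/-1 columns to flag the "significant few" as in the effect probability plot described above.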
  14. A Measles Chart, commonly referred to as a defect location check sheet/map, is a graphical analysis tool. It can be closely compared to a scatter plot (from a clustering viewpoint) combined with a defect concentration diagram. To use a measles chart, all we require is an image of the subject of interest. Whenever there is an issue, we mark the location of the issue on the image; depending on the issue category, we can use different symbols. Based on the clusters, we are able to identify exactly where on the image issues are frequent and recurring. This distinguishes true failures, handling failures and test errors, thus helping us get into category-wise solution mode right away with a quick turnaround. The example images below help operators quickly identify where defects are happening so they can focus on the root cause of the issue and take action; undoubtedly, this helps in defect reduction.
     - Example 1: Defects in a side-by-side refrigerating unit
     - Example 2: Defects in an adventure touring motorcycle
     - Example 3: Manufacturing defects in a heavy-load transport truck
     - Example 4: Stitching faults in shirt manufacturing
     - Example 5: Casual shoe adhesive faults
     Benefits of a measles chart:
     - Provides a visual indication of defects
     - Easy to deploy
     - Effective defect prevention based on the application
     - Quantifies defects and issues by category and location
     - Helps to identify the frequency of defect occurrence
     - Can be performed at the factory level, on the production floor
     Precisely, measles charting is a structured analysis that helps in locating, diagnosing and correcting frequent issues on the work floor to improve operational efficiency. This analysis can help expert teams retrospect and go into upstream processes to identify and fix issues enduringly. More reads and a comparative study on the defect concentration diagram, a subject similar to the defect location map, can be seen in the below link:
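In digital form, a measles chart reduces to tallying defect marks by location and category. The zones and defect types below are illustrative (loosely echoing the refrigerator example above):

```python
from collections import Counter

# Each mark = (zone on the product image, defect category); data is made up.
marks = [
    ("door-hinge", "scratch"), ("door-hinge", "scratch"),
    ("door-hinge", "dent"), ("compressor", "leak"),
    ("door-hinge", "scratch"), ("shelf-rail", "crack"),
]

by_location = Counter(zone for zone, _ in marks)   # where defects cluster
by_cluster = Counter(marks)                        # location x category detail

hotspot, count = by_location.most_common(1)[0]
print(f"hotspot: {hotspot} ({count} defects)")  # hotspot: door-hinge (4 defects)
print(by_cluster.most_common(1)[0])             # (('door-hinge', 'scratch'), 3)
```

The `most_common` output is the tabular equivalent of the visual cluster on the image: it points the team straight at the recurring location and category.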
  15. Weighted Shortest Job First (WSJF) is a prevailing model for ranking and prioritization. It is used to sequence jobs such as features, capabilities and epics; the Scaled Agile Framework (SAFe) uses WSJF to prioritize backlogs. WSJF is also referred to as CD3 (Cost of Delay Divided by Duration). Calculation: WSJF = CoD / Job Size, where CoD (Cost of Delay) is the economic impact of a delay in project delivery and Job Size is the job duration. The typical WSJF calculation used during a PI cycle is given below. CoD considerations include user business value, time criticality and risk reduction; below are some drill-downs of these considerations.
     User business value:
     - Relative value of the opportunity
     - The opportunity's rank in comparison to others
     - Revenue generation or cost avoidance
     Time criticality:
     - Decline in the opportunity's value over time
     - Target/fixed deadlines
     - Impact
     Risk reduction:
     - The opportunity reduces risk
     - It creates new opportunities
     Cost of Delay is a critical and key metric while prioritizing, and it is essential to ask the fundamental question: "What will cost us the most, doing it now or delaying it?" Specific to cost of delay, we can compare WSJF with other prioritization models like Shortest Job First and Most Valuable First; the table below summarizes the comparison. From a broader prioritization outlook, we can also compare WSJF with the MoSCoW, Kano, RICE, Eisenhower, Value vs. Effort, and Walking Skeleton models; each model has its own pros and cons. WSJF is a great tool for prioritization: it gives a clear picture of when to go for the low-hanging fruit versus projects with higher value. However, we do not have to rely on WSJF every single time, as some features and deliverables are supposed to be delivered at the right time without much deliberation.
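The WSJF arithmetic above can be sketched in a few lines. The backlog items and their relative scores are hypothetical; CoD is taken, as described, as the sum of business value, time criticality and risk reduction, each on a relative scale.

```python
# WSJF = Cost of Delay / Job Size, with CoD = business value
# + time criticality + risk reduction (relative scores). Items are made up.
def wsjf(business_value, time_criticality, risk_reduction, job_size):
    return (business_value + time_criticality + risk_reduction) / job_size

backlog = {
    "Feature A": wsjf(8, 5, 3, 8),    # CoD 16 / size 8 = 2.0
    "Feature B": wsjf(13, 8, 5, 5),   # CoD 26 / size 5 = 5.2
    "Feature C": wsjf(3, 2, 1, 2),    # CoD  6 / size 2 = 3.0
}

# Highest WSJF first: the small, high-cost-of-delay job wins.
ranked = sorted(backlog, key=backlog.get, reverse=True)
print(ranked)  # ['Feature B', 'Feature C', 'Feature A']
```

Note how Feature C outranks Feature A despite a much lower Cost of Delay: dividing by job size is what lets a quick, moderately valuable job jump ahead of a slow, valuable one.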