
Leaderboard


Popular Content

Showing content with the highest reputation since 08/07/2019 in all areas

  1. 5 points
    Description - Bench happily recounts that, while planning his career, he considered the choice between being a Generalist and a Specialist early in life. Mark wants to know what decision he took. Bench says that he decided to keep his options open and proclaims himself a "very general Generalist". After listening to Bench, Mark says that he has realized he has taken a path different from the two options: he considers himself a "specialized Generalist", or what could be called a "generalized Specialist". Bench wants to understand what this means. Mark explains that he is a Business Excellence Master Black Belt. He calls himself a generalized Specialist because he specializes in problem solving, which he can do in any sector. He further explains that he could be considered a generalist too, as he can work with a large variety of processes, but in a specialized way. This cartoon depicts that Lean Six Sigma and Business Excellence competencies allow one to be specialized without dependence on a specific industry or functional domain.
  2. 5 points
  3. 5 points
    Misuse of tools and techniques is a very common phenomenon. Misuse of a tool primarily happens for two reasons:
    1. Intentional misuse (better called misrepresentation)
    2. Unintentional misuse (due to lack of understanding of the concept)
    Pareto analysis, or the 80/20 rule, is a prioritization tool that helps identify the VITAL FEW from the TRIVIAL MANY. 80/20 implies that 80% of problems are due to 20% of the causes.
    Intentional
    1. The top 20% of causes might not be the ones leading to the bigger problems - it is often observed that causes with smaller effects occur more frequently. Applying the Pareto principle blindly can divert the team's focus to causes that have a smaller effect on the customer, while the actual cause languishes among the trivial many.
    2. Prioritization without keeping the goal in mind - Pareto helps if the significant contributors identified help us achieve the goal. However, it is seldom checked whether the VITAL FEW will achieve the goal or whether a larger number of causes needs to be taken up.
    Unintentional
    1. Going strictly by the 80/20 rule - some people take the 80/20 principle literally. They will draw a Pareto plot and blindly apply the 80/20 split. What needs to be noted is that 80/20 is a rule of thumb; the split could also be 70/30 or 90/10.
    2. Keeping the total to 100 = 80+20 - one of the most common misunderstandings of the 80/20 rule is the belief that the two numbers should always add up to 100. Again, the rule is empirical in nature and the split could be 80/15 or 75/25 as well.
    3. Being unclear about the purpose of the Pareto Analysis - Pareto can be used in the Define phase to identify projects and also in the Analyze phase to identify significant contributors. In the former, the data is problems and their occurrence, while in the latter it is causes and their occurrence. If, due to lack of clarity of purpose, problems and causes are clubbed together in the same Pareto, meaningful inferences cannot be drawn.
    4. Treating Pareto as a one-time tool - a Pareto is usually done once and the same result is treated as sacrosanct for a long period. A Pareto chart only provides a snapshot in time. Over a period of time, the defect categories or causes and their occurrence counts may change, so a Pareto Analysis done at different points in time might yield different results.
    Some that could fit in both categories
    1. Small data set - Pareto Analysis helps when you want to prioritize the vital few from a big data set. Doing a Pareto analysis on 4-5 categories will seldom yield a good result.
    2. Completely ignoring the trivial many - Pareto analysis helps identify the vital few, but it does not say that one should ignore the trivial many. It simply states that you should first fix the vital and then move on to the trivial. However, many people assume that if they fix the top 20%, they do not need to work on the rest. Pareto can be used to continuously improve the process by repeatedly re-prioritizing the causes to focus on.
    3. Doing Pareto at a high level only - like most tools in the Analyze phase, Pareto can also be used to drill down. E.g. a Pareto can be done first to identify the top defect categories, and then a second-level Pareto can be done for the top defect categories (using the causes).
  4. 4 points
    Check this debate between Bench and Mark on announcing winners for a contest. Is Bench incorrect here? Description - There is an announcement that the results of a Men vs Women contest are about to be announced by Bench and Mark. The reward is to be given to the better performing gender. To add spice to the contest, the men and women had been divided into younger and older folks. Bench comes on stage and says that he will announce the result with some analysis. He explains that the younger men have done better than the younger women. He then goes on to show that the older men have also done better than their older counterparts. With this analysis, he concludes that the men have won the contest. Dramatically, Mark comes in and announces that the final winners are the women. Bench argues with Mark, saying that the men were better in both categories. Mark says that the overall pass rate of the women is better than that of the men and shows the aggregate scores. Mark highlights that the award was meant for better overall performance. Bench cannot figure out how this makes any sense; he continues to think that if there are only two categories possible and the men did better in both, the men should be considered the winners. The cartoon highlights a possibility of misrepresentation that exists in many analyses, especially where sub-grouping is done and subgroups are analysed. Feel free to discuss this below.
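    The situation Bench runs into is commonly known as Simpson's paradox. The short Python sketch below uses made-up pass counts (purely illustrative, not taken from the cartoon) to show how one group can lead in every subgroup yet trail in the aggregate, simply because the subgroups are of very different sizes.

    # Hypothetical pass counts illustrating Simpson's paradox: men lead in
    # each age group, yet women lead in the aggregate.
    data = {
        #            (men_passed, men_total, women_passed, women_total)
        "younger": (9, 10, 160, 200),
        "older":   (50, 200, 2, 10),
    }

    def rate(passed, total):
        return 100.0 * passed / total

    men_p = men_t = women_p = women_t = 0
    for group, (mp, mt, wp, wt) in data.items():
        men_p, men_t = men_p + mp, men_t + mt
        women_p, women_t = women_p + wp, women_t + wt
        print(f"{group:8s} men {rate(mp, mt):5.1f}%  women {rate(wp, wt):5.1f}%")

    print(f"overall  men {rate(men_p, men_t):5.1f}%  women {rate(women_p, women_t):5.1f}%")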
  5. 4 points
    Who does not like a pat on the back!! We do too, and hence we would like to hear from you. Your positive feedback helps us stay motivated and continue to deliver excellent workshops. Please click on 'Reply to this Topic' and share the comment that you would like displayed on this forum as your positive feedback. We value your opinion!!!
  6. 3 points
    Hello Team, Thank you for asking the two questions per week. The questions that you ask are extraordinary. They make the boring LSS tools look very interesting :) Kudos to all of you for asking such amazing questions. These questions have helped me immensely. While writing the answers to them, I have gained better conceptual clarity and understanding of the tools. The competitive spirit makes it even more interesting. I eagerly wait for 5 pm on Tuesdays and Fridays. Obviously I love it when I win (especially on a Friday), but I also feel jealous when I don't. It motivates me to write better responses for the next question. I also get to learn a lot from the different perspectives in the answers that are posted. Sometimes I feel that my answer wasn't the best, but I don't mind as long as I win. Thanks again. Keep throwing the googly questions!!
  7. 2 points
    What is a Venn diagram? John Venn, who introduced the Venn diagram in 1880, was an English mathematician, logician and philosopher. He also called them Euler diagrams, after Leonhard Euler, who had used similar diagrams a century earlier. It is a flexible technique of combining circles, useful for identifying the contrast between overlapping areas and areas of uniqueness. The groups represented in this way are generally called "sets". There must be a minimum of two circles, and for most uses the practical maximum is three. However, a diagram can contain more shapes depending on the number of sets, and such a diagram can use different shapes, as per the figures below. Once the circles are interlocked, they reveal discrete areas (in which there is no overlap), which can then be compared with the qualities of the overlap areas. Where there are three circles, the central area shows the characteristics common to all three. The size of the areas should ideally be kept approximately proportional to their percentage of overlap, so that the extent of the overlap is visually representative. When to use a Venn diagram: We often see Venn diagrams in mathematical contexts, but businesses and professionals also use them. In each case, the person creating the illustration wants to resolve a problem, make a crucial decision, predict probabilities, or visualize and understand how multiple sets or objects relate to one another. Instances when a Venn diagram might be useful in business: Market analysis: A business analysis practitioner might use a Venn diagram for basic market research. While using two or more sets of data, members in the meeting observe the overlapping areas, as those areas contain the business's target market. Competitor analysis: A firm might use Venn diagrams to compare its products to the competition. Most of the time, only two sets of data are needed to work out how the business differs from the competition and to find any similarities. This helps the business discover what advantages it already has and focus on areas where it can make improvements. Product comparison: Alternatively, a business analyst may create a diagram with overlapping shapes to weigh the advantages of two or more product ideas. In the same way that the business analyzes the market, the analyst weighs the differences and similarities the ideas share to work out which features of a product are the most desirable, as shown in the overlapping areas. Decision-making: The same principles used for analyzing two or more product ideas apply to a business's general decision-making process. Advantages of a Venn diagram: It allows an analyst to visualize concepts and relationships between two or more sets of data. It distils complex information into terms that an analyst can understand and represent easily. It helps an analyst retain information better. Venn diagram symbols: "∪" Union of two sets - the entire Venn diagram represents the union of the sets. "∩" Intersection of two sets - the intersection shows the items shared between categories. "Ac" Complement of a set - the complement represents everything that is not in the set. A classic example of Venn diagrams: a survey of the fast-food preferences of three people. We label these three people A, B and C, and record which restaurants they enjoy.
    A three-circle diagram covers every possibility: a restaurant may be chosen by one, two, three or none of the respondents. Restaurant survey results (1 = chosen, 0 = not chosen):
    Restaurant | A | B | C
    McDonald's | 1 | 0 | 1
    Wendy's | 1 | 1 | 0
    Burger King | 0 | 0 | 0
    In-N-Out | 0 | 1 | 1
    Taco Bell | 1 | 0 | 1
    KFC | 0 | 0 | 0
    A&W | 0 | 0 | 0
    Chick-fil-A | 1 | 1 | 1
    While creating the Venn diagram representing the results, we observe that A∩B contains Wendy's, because respondents A and B both chose it. A few fast-food restaurants such as Burger King, KFC and A&W remain outside the circles but exist within the universe. Since all three people chose Chick-fil-A, it lies in the intersection of all three sets, A∩B∩C. The final Venn diagram is shown in the figure below.
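    For readers who want to reproduce the regions of the diagram without drawing it, here is a minimal Python sketch using built-in sets and the survey data above; the set names A, B and C correspond to the three respondents.

    # Each respondent's set of preferred restaurants, from the survey table.
    A = {"McDonald's", "Wendy's", "Taco Bell", "Chick-fil-A"}
    B = {"Wendy's", "In-N-Out", "Chick-fil-A"}
    C = {"McDonald's", "In-N-Out", "Taco Bell", "Chick-fil-A"}
    universe = {"McDonald's", "Wendy's", "Burger King", "In-N-Out",
                "Taco Bell", "KFC", "A&W", "Chick-fil-A"}

    print("A intersect B      :", A & B)                   # Wendy's, Chick-fil-A
    print("A intersect B and C:", A & B & C)                # Chick-fil-A
    print("Outside all circles:", universe - (A | B | C))   # Burger King, KFC, A&W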
  8. 2 points
    Benchmark Six Sigma Expert View by Venugopal R
    First of all, let us salute all those who are engaged in protecting us from this pandemic, which greatly includes all the people who are directly or indirectly involved in healthcare activities, under such challenging and trying circumstances. Any views on this forum should never be mistaken as criticism of anyone doing such noble service, but as a discussion for learning from the experiences and generating thoughts that could help society as a whole be better prepared in future. One of the main issues that we see across healthcare systems in the world with the prevailing problem is Muri, which means 'overburden', for healthcare workers. It would be unfair to blame the healthcare systems for the excess Muri, since the current situation is beyond anyone's imagination. However, under these circumstances it is essential to do everything possible to provide relief to the people who are getting overburdened. It has to be mentioned that many efforts are being taken by various governments and many volunteers to this effect. The three components of waste, viz. Mura, Muri and Muda, can affect one another and hence it is important to address all of them together. Very often, Muda is the one that gets the most attention. I have tabulated these waste components with some examples and suggested systemic remedies. It may be noted that this is only a small representation; there are bound to be many more situations for each category, and the solutions may not always be easy to implement.
    1.0 MURI (Overburden)
    1.1 Overbearing tasks - Examples: stretched working hours for direct and indirect healthcare staff; being forced to handle an excessive number of cases. Remedies: clear criteria (plus awareness) to identify genuine cases who need to be admitted; consider geographical redistribution of staff based on need.
    1.2 Work related stress - Examples: handling patients who are not very cooperative; personal attenders / relatives of patients are not permitted due to risk of infection, which adds burden to the healthcare staff. Remedies: support the skilled healthcare staff with other in-house staff who can play the role of personal attenders; maintain regular contact with patients' relatives and obtain oral assistance.
    1.3 High risk tasks - Example: frontline healthcare staff are at high risk of being exposed. Remedies: continued awareness and providing equipment to staff; ensure routine 5S in the workplace; plan staff rotation in high risk areas to prevent prolonged risk to anyone.
    2.0 MURA (Variability)
    2.1 Materials related - Examples: mismatch between requirement and availability of materials required by healthcare staff; variation in the quality of the materials. Remedies: material planning exercises at treatment centre level and at regional level; standards for each item with centralized compliance monitoring; establish an authority who understands the risks to decide on use of any material that does not meet the standards in case of emergencies.
    2.2 Methods related - Example: differences with respect to diagnosis, treatment approach, handling, duration and conditions within and between treatment facilities. Remedies: standards for all methods, with frequent updating and compliance monitoring by a central organization; frequent sharing and synchronizing of best practices across centres.
    2.3 Manpower related - Examples: unpredictable day-to-day variation in patient count; variation in knowledge and skills among staff. Remedies: state level planning for potential patient turnover and necessary treatment facilities; adopt a buddy system to quickly orient staff and reduce knowledge variation.
    2.4 Machines related - Example: critical equipment not functioning, or functioning with variations, especially during emergencies, leading to waiting or treatment deficiencies. Remedy: ensure equipment availability both in terms of numbers and through predictive maintenance.
    2.5 Measurements related - Examples: false positives / false negatives on the screening tests; dependency on sampling for screening evaluation. Remedies: MSA on the screening measurements to understand measurement reliability and drive improvement actions; application of different sampling methods, such as stratified sampling, to obtain a realistic density of the problem.
    3.0 MUDA (Wastages)
    3.1 Transportation - Examples: transporting patients for various requirements (testing, ICU); transporting equipment across centres. Remedies: study transportation data to review the facility layout for optimizing movements; consider creating 'self-sufficient' zones based on cost-effort-benefit analysis.
    3.2 Excess inventory - Examples: excess stock of medicines that does not get consumed for long; large numbers of patients queuing up at various stages; test requests / reports piled up. Remedy: value stream and Kanban methods could help streamline the processes and minimize inventory.
    3.3 Excess movements - Examples: healthcare staff having to move about within a centre for various activities; too many movements required by physicians and other healthcare staff while examining / treating patients. Remedies: layout optimization, maximised communications, review of positioning of facilities; workplace management through 5S practices.
    3.4 Waiting time - Examples: patients waiting to get admitted, to see a physician, for tests, reports, discharge etc.; physicians waiting for test reports. Remedies: many of the solutions for the other wastes will help reduce waiting times; the reasons for waiting have to be analysed to see whether they result from inefficiency in another process, in order to decide the best solution options.
    3.5 Over processing - Examples: assigning more staff than required for a patient; keeping a patient under care longer than required; doing unnecessary diagnostic procedures; patients having to repeat the same information multiple times. Remedies: expectations at each stage need to be understood well, so that both under processing and over processing are minimized / avoided; ensure patient inputs are recorded and the same file is maintained.
    3.6 Over production - Example: preparing for a patient too much in advance. Remedy: define a preparatory lead time for effective and efficient preparation to receive a patient.
    3.7 Defects - Examples: wrong diagnosis, administering the wrong medicine, improper dosage, mix-up of reports and other serious defects; other defects such as skipping a step during a clinical test or missing an instrument on a surgical case cart. Remedies: training and certification, checklists, software-controlled protocols.
    3.8 Talent unutilized - Examples: feedback / suggestions from junior and supporting healthcare staff not considered; best practices between individuals and treatment centres not captured. Remedies: institute and encourage a Kaizen system, encouraging all staff to provide suggestions and best practices; reward and recognition schemes.
  9. 2 points
    Q 271. What is a residual in Regression? Why is it important to analyze the residuals before assessing the goodness of a Regression Model? What does it mean if Residuals are non normal or non random?
    Residual: In statistics and optimization, residuals and statistical errors are closely related and easily confused measures of the deviation of an observed value from its "theoretical value".
    Residual = Observed value - Predicted value
    The error of an observed value is the deviation of the observed value from the true value of the quantity of interest (for example, a population mean), while the residual of an observed value is the difference between the observed value and the estimated value of the quantity of interest (for example, a sample mean). The distinction is most important in regression analysis, where the concepts are sometimes called regression errors and regression residuals, and where they lead to the concept of studentized residuals.
    Error vs Residual:
    - The difference between the height of each person in the sample and the unobservable population mean is a statistical error, whereas
    - The difference between the height of each person in the sample and the observable sample mean is a residual.
    Residual in Regression: Since a linear regression model is not always appropriate for the data, you should assess the appropriateness of the model by computing residuals and examining residual plots. The residual (e) is the difference between the observed value of the dependent variable (y) and the predicted value (ŷ); each data point has one residual.
    Residual = Observed value - Predicted value
    e = y - ŷ
    Both the sum and the mean of the residuals are equal to zero (Σe = 0 and ē = 0).
    Notation: e = Residual, y = Observed Value, ŷ = Predicted Value
    Properties: Σe = 0; mean of the residuals ē = 0
    Why it is important to analyze the residual plots: A residual plot is a chart that shows the residuals on the vertical axis and the independent variable on the horizontal axis. If the points in a residual plot are randomly dispersed around the horizontal axis, a linear regression model is appropriate for the data set; otherwise, a nonlinear model is more appropriate. The table below shows the inputs and outputs from a simple linear regression analysis, and the chart below displays the residual (e) against the independent variable (X) as a residual plot. The residual plot shows a fairly random pattern: the first residual is positive, the next two are negative, the fourth is positive, and the last is negative. This random pattern indicates that a linear model provides a reasonable fit to the data. Below, residual plots show three typical patterns for reference. The first plot shows a random pattern, indicating a good fit for a linear model.
    1) Random pattern
    2) Non-random: U-shaped
    3) Non-random: Inverted U
    The last two patterns are non-random (U-shaped and inverted U), suggesting a better fit for a nonlinear model.
    Residuals that are non normal or non random: Non-normality or non-randomness of the residuals is an indication of an inadequate model. It means that the errors the model makes are not consistent across variables and observations (i.e. the errors are not random).
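    As a small illustration of the definitions above, the following Python sketch (with hypothetical x and y values) fits a simple linear regression and computes the residuals e = y - ŷ; plotting these residuals against x is exactly what the residual plots discussed above do.

    # Hypothetical data for a simple linear regression.
    x = [1, 2, 3, 4, 5]
    y = [2.1, 3.9, 6.2, 7.8, 10.1]

    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    intercept = mean_y - slope * mean_x

    predicted = [intercept + slope * xi for xi in x]          # ŷ for each point
    residuals = [yi - pi for yi, pi in zip(y, predicted)]     # e = y - ŷ

    print("residuals:", [round(e, 3) for e in residuals])
    print("sum of residuals (should be ~0):", round(sum(residuals), 6))
    # A residual plot would chart these residuals against x; a random
    # scatter suggests the linear model is adequate.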
    Transformations of Variables: Once a residual plot shows a data set to be nonlinear, it is often possible to "transform" the raw data to make it more linear, which allows us to use linear regression techniques more effectively with nonlinear data.
    What is a transformation to achieve linearity? Transforming a variable involves using a mathematical operation to change its measurement scale. Broadly, there are two kinds of transformations.
    i) Linear transformation: A linear transformation preserves linear relationships between variables. Therefore, the correlation between x and y would be unchanged after a linear transformation.
    ii) Nonlinear transformation: A nonlinear transformation changes (increases or decreases) the linear relationship between variables and, therefore, changes the correlation between them. In regression, a transformation to achieve linearity is a special kind of nonlinear transformation: one that increases the linear relationship between two variables.
    Methods of transforming variables to achieve linearity: There are numerous ways to transform variables to achieve linearity for regression analysis; some common methods are summarized below.
    Performing a transformation to achieve linearity: Transforming a data set to enhance linearity is a multi-step, trial-and-error process. The following steps are performed:
    i) Conduct a standard regression analysis on the raw data.
    ii) Construct a residual plot.
    a. If the plot pattern is random, do not transform the data.
    b. If the plot pattern is not random, continue.
    iii) Compute the coefficient of determination (R2).
    iv) Choose a transformation method (see the table above).
    v) Transform the independent variable, the dependent variable, or both.
    vi) Conduct a regression analysis using the transformed variables.
    vii) Compute the coefficient of determination (R2) based on the transformed variables.
    a. If the transformed R2 is greater than the raw-score R2, the transformation was successful.
    b. If not, attempt a different transformation method.
    The best transformation method (exponential model, quadratic model, reciprocal model, etc.) will depend on the nature of the original data. A practical way to determine which method is best is to try each and compare the results (residual plots, correlation coefficients). The best method will yield the highest coefficient of determination (R2).
    Reference:
    https://en.wikipedia.org/wiki/Errors_and_residuals
    https://stattrek.com/regression/residual-analysis.aspx?tutorial=AP
    Thanks and Regards, Senthilkumar Ganesan, Email: senthillak@gmail.com Mobile: +91-7598124052.
  10. 2 points
    This album contains Benchmark Six Sigma Training Photographs from January to March 2020.
  11. 2 points

    From the album: Jan-Mar 2020

    This is a group photograph of the Black Belt Workshop held in Mumbai, February-March 2020.

    © Benchmark Six Sigma

  12. 2 points
    With reference to the Weight of the Ox experiment, Galton discovered that the average guess was remarkably close to the actual weight. Occasionally, it makes sense to go with a guess from crowd intelligence, specifically when there is no scale, when measurement is expensive, or when accurate measurement is time-consuming. A few considerations and points to be factored in before we rely on this method include the following:
    Diversity of opinion
    Independence of opinion
    Trusting the group to be fair in giving its opinion
    Being cautious of groupthink
    Consideration of activity time
    Distribution of the expert panel is important to avoid dominance and skewness
    Diffusion of responsibility
    We still use similar versions of the Wisdom of Crowds in organizations.
    Ref 1: Story point estimation in Agile development. During development, it is critical to plan the budget, time and resources required to complete the module in every sprint. The entire project is broken into different levels, and function point analysis and story points are assigned to each user story. Story point estimation gives a rough estimate (more like a guess) for the product backlog items using relative sizing; it is a comparative analysis. Key estimators include developers, testers, the scrum master, the product owner and other related stakeholders. Some of the common estimating methods are:
    T-shirt sizing
    Using a Fibonacci series
    Animal sizing
    Typical measures used in T-shirt sizing are XS, S, M, L, XL, XXL; the Fibonacci sequence uses the numbers 1, 2, 3, 5, 8, 13, 21, 34, 55, etc.; in animal sizing the team uses ant, bee, cat, dog, lion, horse, elephant, etc. It's fun to use animals instead of numbers. Based on the effort required, points are assigned to the stories during planning: a quick and effective way of using crowd intelligence while doing story point estimation.
    Ref 2: Delphi technique. This method relies on an expert panel's decisions, considering opinions from experts much like gathering crowd intelligence in the wisdom of crowds.
    Ref 3: Group Decision Support System (GDSS). Many organizations use this method for idea generation, to select optimal solutions, for voting and so on. Using this technique, it is easy to arrive at solutions for complex problems.
    Other effective methods and applications include:
    Brainstorming
    Nominal group technique
    Devil's advocacy
    Fish bowling
    Didactic interaction
    Closing quotes to go with the Wisdom of Crowds.
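    As a small, purely illustrative sketch of crowd-style estimation, the Python snippet below (with hypothetical team members and story point votes) tallies independent planning poker estimates and flags the outliers the team would discuss before converging on a number.

    # Hypothetical, independent story point estimates from one planning poker round.
    from collections import Counter

    estimates = {"dev1": 5, "dev2": 8, "tester": 5, "scrum_master": 5, "dev3": 13}

    tally = Counter(estimates.values())
    consensus, votes = tally.most_common(1)[0]

    print("estimates:", estimates)
    print(f"most common estimate: {consensus} points ({votes} votes)")
    print("outliers to discuss:", [name for name, pts in estimates.items() if pts != consensus])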
  13. 2 points
    We wish a very Happy and Prosperous New Year 2020 to all Bench and Mark viewers! While Bench makes a wish about what is controlled by others, Mark wants to focus on what he can do. Being able to apply what has been learnt, and to learn more of what is application oriented, seems a wise approach.
  14. 2 points
    Benchmark Six Sigma Expert View by Venugopal R
    Pascal's Triangle is named after the French mathematician Blaise Pascal. It looks as depicted below. A quick examination of Pascal's triangle reveals the following: The topmost row (referred to as the 0th row) has one number, which is 1. The next row (first row) has two numbers (or two columns), and each number is the sum of the numbers in the boxes above it from the previous row. The same practice continues, and we get Pascal's triangle. Thus, if the number in the nth row and kth column is represented as C(n, k), then C(n, k) = C(n-1, k-1) + C(n-1, k). Let us look at an example of a simple binomial probability: the outcome of tossing a coin. The following table gives the number of tosses, the outcomes and the numerical representation of each outcome combination. The last column of the table emerges as Pascal's triangle. It may also be seen that the binomial probability for a particular outcome can be worked out. For example, let's see the probability of obtaining exactly two heads when the coin is tossed 4 times. The total number of possible outcomes is 1+4+6+4+1 = 16. The number of combinations that give exactly two heads is 6. Hence the probability of obtaining exactly two heads is 6/16 = 0.375 or 37.5%
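    A short Python sketch of the same idea: it builds Pascal's triangle with the recurrence described above and then uses the 4th row to reproduce the coin-toss probability worked out in the example.

    # Build Pascal's triangle row by row: each inner entry is the sum of the
    # two entries above it.
    def pascal(rows):
        triangle = [[1]]
        for _ in range(rows - 1):
            prev = triangle[-1]
            triangle.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])
        return triangle

    for row in pascal(5):
        print(row)

    row4 = pascal(5)[4]            # [1, 4, 6, 4, 1] -> outcome counts for 4 tosses
    print("P(exactly two heads in 4 tosses) =", row4[2] / sum(row4))   # 6/16 = 0.375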
  15. 2 points
    Zipf's law, proposed by the American linguist George Kingsley Zipf, states that the frequency of any word in a language is inversely proportional to the rank of that word in the frequency table. Frequency is the number of times the word appears in the sample or text.
    - Suppose the most common word in a corpus has a frequency f
    - The second most common word will have a frequency of around f/2
    - The third most common word will have a frequency of nearly f/3
    - The fourth most common word will have a frequency of around f/4
    Zipf's law can be applied to many other data sets and rankings, such as:
    - Population ranks of cities in various countries
    - Corporation sizes
    - Income rankings of the richest persons
    - Ranks or numbers of people watching the same TV channel
    - Temperature trends over recent years
    - Facebook likes of favourite teams
    - Number of citations to papers
    - Number of hits on web sites
    - Copies of books sold in the US
    - Telephone calls received
    - Magnitude of earthquakes
    - Diameter of moon craters
    - Intensity of solar flares
    - Intensity of wars
    - Net worth of Americans
    - Frequency of family names
    The level of fit between the data and Zipf's distribution can be tested with the Kolmogorov-Smirnov test and then compared with fits to alternative distributions, like the lognormal or exponential distributions. Zipf's law is more or less in line with one of the most widely acclaimed economic and statistical principles, the Pareto Principle. The Zipf distribution is also labelled the discrete Pareto distribution, as it deals primarily with discrete data, frequencies and rankings. The Pareto principle states that 20% of the invested input accounts for 80% of the output; 20% of work-related input yields 80% of the results. Similarly, Zipf's law reflects the fact that a few words, roughly the first 20% of words, account for about 80% of the frequency of the entire corpus. The probability mass function (pmf) of the Zipf distribution is f(x) = C / x^s, where x = 1, 2, 3, ..., n, C is a calculated normalizing constant, and s is the value of the exponent characterizing the distribution.
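    The following minimal Python sketch (using a tiny made-up text, so the fit is only indicative) ranks words by frequency and compares the observed counts with the f, f/2, f/3, ... pattern that Zipf's law predicts.

    # Count word frequencies in a small hypothetical corpus and compare the
    # observed counts with the Zipf expectation f/rank.
    from collections import Counter

    text = ("the cat sat on the mat the dog sat near the cat "
            "the cat and the dog ran on the mat")
    counts = Counter(text.split()).most_common()

    f1 = counts[0][1]   # frequency of the top-ranked word
    for rank, (word, freq) in enumerate(counts[:5], start=1):
        print(f"rank {rank}: {word:5s} observed {freq}, Zipf expectation ~{f1 / rank:.1f}")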
  16. 2 points
    Fast-tracking and crashing are important project management techniques for shortening or compressing the project schedule. Though not commonly used, they have important applications in project management. There may be various business reasons to use them, for example when the project is already running late due to unexpected conditions like a crunch of manpower and other resources. They may also be mooted in the event of internal and external pressure from various stakeholders of the organization to expedite the project and finish before the deadline.
    FAST TRACKING
    - Fast tracking is usually the first line of action when schedule compression is warranted. It involves performing multiple activities in parallel, even though there may be some degree of finish-to-start dependency between the activities.
    - One of the best examples is starting work on product development when the product design is not complete but a part of it is, whereas the earlier plan was to initiate product development only after the complete product design. Other relevant examples are:
    - Starting to lay the foundation of a construction before the architectural drawings/designs are completely done.
    - Constructing a different portion of a highway simultaneously with another portion already under way.
    - We need to analyze the dependencies between activities to see whether they are really mandatory or just presumed. If a dependency is only discretionary, we can adjust the schedule of activities so that the overall time is shortened. For example, Activity 1 and Activity 2 have a finish-to-start dependency, each lasting 5 days, for a total duration of ten days. Assume the project manager gets a deadline from the project sponsor to finish the project in 8 days. If we start the second activity at the end of the 3rd day (beginning of the 4th day), we cut two days and finish both activities in 8 days.
    The biggest drawbacks of fast tracking are:
    - It cannot be done when there is complete interdependency or a strict finish-to-start relationship between processes.
    - It often leads to rework, project extension and even project failure.
    - A general rule in fast tracking is that the second activity can be started when the first activity is at least two-thirds (around 66%) complete. This usually fits well and is commonly practiced.
    CRASHING
    Crashing is a technique that entails using additional resources, e.g. overtime, manpower, additional material and equipment. The aim is to finish the activities or reach the project deadline as early as possible, ahead of the anticipated or projected deadline.
    - Crashing works very well in certain scenarios, such as the construction industry, where more workers finish the task earlier than a smaller number of workers would. A well-known example of crashing was seen in the Year 2000 (Y2K) projects, when many companies accelerated work to meet the deadline of completing the projects by the end of 1999.
    - The biggest issue with crashing is that it increases the financial burden, so the cost vs time trade-offs have to be weighed carefully when deciding to crash.
    - Crashing is usually never the first choice; it is usually carried out when fast tracking does not yield the desired result.
    - Crashing can also lead to waste of resources, especially when more manpower leads to more confusion and errors. For example, in complex neurosurgeries that go on for many hours, adding more surgeons to the team to shorten the surgery time may rather lead to more complications due to differences of opinion, differences in the surgeons' skills and lack of coordination.
    - Both fast tracking and crashing need to be applied only to critical path activities. If we employ them on non-critical path activities, there will be no shortening of the overall duration.
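    A very small Python sketch of the 10-day vs 8-day fast-tracking example above; the durations and the 2-day overlap are taken from that example, not from a general scheduling model.

    # Two activities of 5 days each with a finish-to-start link.
    durations = [5, 5]

    def sequential(durations):
        return sum(durations)

    def fast_tracked(durations, overlaps):
        # overlaps[i] = days by which activity i+1 starts before activity i finishes
        return sum(durations) - sum(overlaps)

    print("Sequential duration  :", sequential(durations), "days")          # 10
    print("Fast-tracked duration:", fast_tracked(durations, [2]), "days")   # 8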
  17. 2 points

    From the album: Jan-Mar 2020

    This is a group photo of Kolkata Black Belt Workshop January 2020 batch.

    © Benchmark Six Sigma

  18. 2 points

    From the album: Jan-Mar 2020

    This is a group photo of the Bangalore Green Belt Workshop January 2020 batch.

    © Benchmark Six Sigma

  19. 2 points
    Benchmark Six Sigma Expert View by Venugopal R
    Using the median as a measure of central tendency helps avoid the effect of outliers. For those who need clarity on the fundamental behaviour of mean and median, the following simple example will help. Consider a set of nine data points representing the minimum time in days between failures for nine similar pieces of equipment: 70, 248, 2400, 240, 2, 1460, 230, 180, 440. The mean for this data is 586, whereas the median is 240. Now consider the data set below, which is the same as above except that the maximum value has increased from 2400 to 4800: 70, 248, 4800, 240, 2, 1460, 230, 180, 440. The mean has shot up to 852, whereas the median remains unaffected at 240. In this situation, the median is a more realistic representation of the central tendency of the data. A few examples where the median may be a better choice:
    1. Income data in an organization: It is quite possible that there are a few highly paid individuals by whom the mean could be severely biased, hence the median is preferable.
    2. Age of members in a society: A few very senior citizens among a majority of people in the lower middle age band could give a non-normal distribution.
    3. Customer satisfaction surveys using a Likert scale of 1 to 10: A very few customers voting at the upper or lower extreme could distort the reality, hence using the median helps.
    4. Life expectancy based on a specialized treatment: For instance, if most patients had a post-treatment life span in the range of 10 to 15 years, one odd patient living for 45 years could give an unrealistic expectancy, unless we use the median as the measure of performance.
    5. Comparative tests performed on non-normal distributions, known as non-parametric tests, are based on the median. Examples of such tests are 1-Sample Sign, Wilcoxon Signed Rank, Mann-Whitney, Kruskal-Wallis and Mood's Median.
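    The two data sets above can be checked in a couple of lines of Python; this is simply a verification of the numbers quoted, using the standard library.

    # The mean shifts with the outlier; the median does not.
    from statistics import mean, median

    original = [70, 248, 2400, 240, 2, 1460, 230, 180, 440]
    shifted  = [70, 248, 4800, 240, 2, 1460, 230, 180, 440]

    print("mean  :", round(mean(original)), "->", round(mean(shifted)))   # 586 -> 852
    print("median:", median(original), "->", median(shifted))             # 240 -> 240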
  20. 2 points
    Mean (Average): The mean is the best measure of central tendency for normally distributed data without significant outliers. As a large number of distributions are symmetrical, the mean represents a true estimate of the distribution. Example: mean height, weight etc.
    Mode: The mode is the most repeated value in a set of data. For example, people tend to become inclined towards things undertaken by the majority of people.
    Median: The median is the middle value that divides a set of values into a top and bottom 50%. The income distribution in a country is asymmetrical, with 20% of the population accounting for a major proportion of the wealth and the remaining 80% of the people having lower incomes, such that the wealth of the top 20% equals that of the bottom 80%. In this case the mean income gives a false and biased picture, because the distribution peaks in two different regions; the median is the best representative of the income of the people in the country. In India, many educational institutes advertise the 'placement packages' of their graduating students, largely from on-campus selection, to attract new students. In this example the average placement package, which is commonly quoted, is a misleading measure. The median serves better, as the salary range is quite wide: for example, 20 students selected for India locations (salary up to 3 million INR) and 5 students for USA locations (salary in the range of 8 to 10 million INR after conversion from USD to INR). The median also finds use in commonly measured health indices such as blood pressure. If we measure the blood pressure of 5000 persons in a community health survey and tabulate the systolic and diastolic pressures separately, the mean will give an erroneous impression, as 10-15% of the patients may have very high systolic and diastolic blood pressures, much above the normal reference range (say, systolic > 200 mm Hg and diastolic > 140 mm Hg), which is not representative of the majority. Here the median is the best measure of the blood pressure levels of the community and can be used to initiate health interventions for the community.
  21. 2 points
    Q 223. Resolution, Bias, Stability, Linearity and Precision are the five things that are checked while performing Gage R&R. What is the order in which these 5 things are to be checked? Note for website visitors - Two questions are asked every week on this platform. One on Tuesday and the other on Friday. All questions so far can be seen here - https://www.benchmarksixsigma.com/forum/lean-six-sigma-business-excellence-questions/ Please visit the forum home page at https://www.benchmarksixsigma.com/forum/ to respond to the latest question open till the next Tuesday/ Friday evening 5 PM as per Indian Standard Time The best answer is always shown at the top among responses and the author finds honorable mention in our Business Excellence dictionary at https://www.benchmarksixsigma.com/forum/business-excellence-dictionary-glossary/ along with the related term
  22. 2 points

    From the album: Oct-Dec 2019

    This is a group photo from Delhi Lean Six Sigma Green Belt Workshop December 2019 batch.

    © Benchmark Six Sigma

  23. 2 points

    Version 1.0.0

    403 downloads

    This zip file contains the study and exercise content for Lean Six Sigma Green Belt Training. You are requested to download and save this file in the laptop you will be using throughout the training. Note: It's important to download this file before the training to avoid any delay at the venue
  24. 2 points
    Bench still thinks of Six Sigma as a measure of process performance and struggles with the defects per million concept. Mark has continued to learn as Lean Six Sigma has evolved into a strategy-driven Business Result Improvement Program. Are you ready to take your program to the next level? Do provide your comments on where you think different organizations are in this journey. This is an important discussion board for all Lean Six Sigma practitioners.
  25. 2 points
  26. 2 points
    The Excellence Scoreboard represents the total number of points accumulated by all members of this Forum by answering the Weekly Questions. Thanks a lot to Mr. Vishwadeep Khatri and his entire team for their efforts in creating this useful platform, where an individual member can earn points and use them to nominate someone who wants to upgrade himself/herself in this competitive field. Nominating someone brings many benefits, in various forms, both for you and for them. I nominated a participant for the Green Belt Course by utilizing my points and found that his entire Green Belt fee was waived; he has successfully completed the course and is using the skills in his field.
  27. 2 points
    A Run Chart is a plot of the data points for a particular metric with respect to time. It is primarily used for the following two purposes:
    1. Graphical representation of the performance of the metric (without checking for any patterns in it). E.g. the scoring comparison in a cricket match, where runs are plotted on the Y axis and the X axis has overs (a substitute for time spent). Source: The Telegraph
    2. To check whether the data from the process is random or whether there is a particular pattern in it. These patterns could be one or more of the following:
    a. Clusters
    b. Mixtures
    c. Trends
    d. Oscillations
    Source: Minitab help section
    A run chart, when used for purpose 2, performs the following tests for randomness:
    - Test for the number of runs about the median. This is used for checking clusters and mixtures. Clusters are present if the actual number of runs about the median is less than the expected number of runs; this implies that data points bunch together in one part of the chart. Mixtures are present if the actual number of runs is more than the expected number of runs; this implies frequent crossings of the median line.
    - Test for the number of runs up or down. This is used for checking trends and oscillations. Trends are present if the actual number of runs is less than the expected number of runs; this implies a sustained drift in the process (either up or down). Oscillations are present if the actual number of runs is more than the expected number of runs; this implies that the process is not steady.
    These are hypothesis tests with the hypotheses: Ho - data is random; Ha - data is not random. p values are calculated for all four patterns. A p value of less than 0.05 leads to rejection of Ho (in favour of Ha), implying that the particular pattern is present in the data set. Absence of these patterns indicates that the process is random.
    Advantages of a run chart over a control chart: Ideally a control chart is a more advanced tool than a run chart. However, the following situations warrant the use of a run chart over a control chart:
    1. A run chart is preferred when we need a snapshot of the metric's performance over time without taking into account control limits or whether the process is stable/unstable, e.g. the scoring run rate comparison in cricket (refer to the screenshot above).
    2. One can start creating a run chart without any prior data collection, unlike a control chart (where data is collected first to determine the control limits).
    3. It serves as a quick check of whether the process data is random. For doing such checks (clusters, mixtures, trends and oscillations) with a control chart, one would have to run all the Nelson tests (usually control charts are used with only one test, i.e. any points outside 3 standard deviations, and hence might not detect such patterned data).
    4. Apart from the above, a run chart is easier to prepare and interpret than a control chart.
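    To make the "runs about the median" test concrete, here is a minimal Python sketch on a hypothetical data series; it counts the observed runs and compares them with the expected number of runs (fewer observed runs than expected suggest clustering, more suggest mixtures). This is only the counting step, not the full p-value calculation a statistics package would perform.

    # Count runs about the median for a hypothetical series.
    from statistics import median

    data = [5, 7, 6, 8, 9, 4, 3, 2, 6, 7, 8, 5, 4, 3, 9, 8]
    med = median(data)

    signs = [v > med for v in data if v != med]      # above/below the median
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)

    n_above = sum(signs)
    n_below = len(signs) - n_above
    expected = 1 + 2 * n_above * n_below / (n_above + n_below)

    print("observed runs about the median:", runs)
    print("expected runs about the median:", round(expected, 1))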
  28. 2 points
    In DMADV the focus is on new product/service design, unlike DMAIC which works on existing products/services. During the last phase of DMADV, verification of the design is performed, and whether the design is capable of meeting the needs of the customer is validated. Numerous pilot runs may be required to validate and verify the design outcomes. A major aspect of this phase is to check whether all the metrics that were designed are performing as expected: conformance to specification. Some of the commonly used tools in the Verify phase include control charts, control plans, flagging, Poka Yoke, check sheets, SOPs and work instructions.
    Software application design: From a new design viewpoint, verification asks whether the software application is being developed in the right way, and validation asks whether the right software application is being produced. In simple terms, verification is checking whether the application works correctly without any errors/bugs, and validation is checking whether the application meets the requirement and expectation.
    Verification: application and design review, code walkthrough, code inspection; static testing; performed first; done without executing the software.
    Validation: black box and white box testing; dynamic testing; usually performed after verification; done by executing the software.
    Automotive manufacturing: With reference to gearbox manufacturing as per a new design in the DMADV process, the high-level manufacturing steps include preforming, annealing, machining, producing teeth, shaving, grinding and inspection. Here verification is comparing the gearbox to the design requirements of material, dimension, tolerance etc., i.e. all specs are verified. In validation, after inspection the gearbox is assembled and given a dry run to test whether it runs as expected.
    Verification: done during development, review and inspection, production and scale-up; random inspection can be used.
    Validation: usually done before scale-up and after the actual production; stringent checks are done.
    Validation can be done directly by skipping verification in some scenarios, especially when we are not able to measure component outcomes or when the cost of verification is very high.
    Medical devices: Verification is usually done on the design (design inputs, process and outputs) by tests, inspections and analysis. Validation is checking whether the intended need of the medical device is met. Source: U.S. Food and Drug Administration (FDA)
  29. 2 points
    Pareto Analysis is used to separate the vital few from the trivial many parameters: the vital few are roughly 20% of the causes and the trivial many the remaining 80%. This principle is otherwise called the 80-20 rule. It simply says that the majority of the results come from a minority of causes. In numerical terms:
    20% of inputs are accountable for 80% of output
    80% of productivity comes from 20% of associates
    20% of causes are accountable for 80% of the problem
    80% of sales comes from 20% of customers
    20% of efforts are accountable for 80% of results
    Example dataset (metric, frequency, percentage, cumulative):
    Demand Exceeds Supply | 232 | 24.12% | 24.12%
    Incorrect Memory and CPU Usage | 209 | 21.73% | 45.84%
    Bandwidth Constraints | 203 | 21.10% | 66.94%
    Network Changes | 64 | 6.65% | 73.60%
    Fatal Bugs in Production | 59 | 6.13% | 79.73%
    Poor Front-End Optimization | 52 | 5.41% | 85.14%
    Integration Dependencies | 39 | 4.05% | 89.19%
    Database Contention | 34 | 3.53% | 92.72%
    Browser Incompatibility | 23 | 2.39% | 95.11%
    Device Incompatibility | 14 | 1.46% | 96.57%
    Hardware Conflicts | 13 | 1.35% | 97.92%
    Inadequate testing | 9 | 0.94% | 98.86%
    Too much code | 6 | 0.62% | 99.48%
    Exception handling | 5 | 0.52% | 100.00%
    Pareto Chart (see figure)
    Some of the common misuses include the following scenarios:
    Working only on the vital few parameters: There could be other potential parameters where the frequency is low, placing them among the trivial many, yet whose criticality or severity is high; because the frequency is low they are underestimated and not considered. In the example above, Inadequate Testing can be critical - insufficient test cases or poor test reviews can lead to multiple production issues - but it is not factored in when focusing only on the vital few. Ideally, 80% of the resources should focus on reducing the vital few and 20% of the resources on minimizing the trivial many.
    Using a Pareto for defects belonging to multiple categories: Another misuse of Pareto analysis is combining defects from multiple categories. We need to clearly understand that categories must be mutually exclusive.
    Using a Pareto when parameters are not collectively exhaustive: What is collectively exhaustive? Collectively, all the failures in the list should cover all the possible failures for the problem; that is, there should not be any gap. Definition: events are said to be collectively exhaustive if the list of outcomes includes every possible outcome.
    Performing the analysis on small data sets / few data points: For a statistically meaningful analysis, we have to use relatively large data sets rather than a few data points. At the same time the number of categories needs to be practically large enough; a Pareto analysis does not make sense when the data set is very small.
    Inaccurate reading: Visually picking the vital few from the Pareto chart rather than considering the categories whose cumulative % is less than about 80%.
    Analyzing defects only once: Pareto Analysis should be performed before the problem is solved, during the implementation period to see the trend, and post improvement. It is a repetitive and iterative process, rather than being run only once and focusing on the defects identified during the early stages of the analysis.
    Insisting that 80 + 20 must total 100: the split could equally be 75-20 or 90-40, since the two percentages refer to different things (effects and causes).
    Reading 80 on the left axis: The left axis displays frequency and the right axis the percentage; when people read 80 on the left axis they may select the wrong vital few, leading to poor problem solving.
    Flattened Pareto: If there is any bias in the data collection methods, we might end up with the bars being almost flat; this happens mainly when we break vital problems into many small problems. It does not make sense to proceed with Pareto Analysis in such cases; rather, work on action plans based on severity and criticality.
    Considering defects as root causes: Treating the vital defects identified during the analysis as root causes, and not analyzing further to understand the actual root cause, will not stop the defect from occurring; it amounts to applying a band-aid to the identified loopholes.
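    A short Python sketch that reproduces the cumulative percentages from the example dataset above and tags the categories falling within the ~80% cut-off as the vital few (the 80% threshold is the usual rule of thumb, not a fixed requirement, as noted earlier).

    # Sort categories by frequency, compute cumulative %, and flag the vital few.
    defects = {
        "Demand Exceeds Supply": 232, "Incorrect Memory and CPU Usage": 209,
        "Bandwidth Constraints": 203, "Network Changes": 64,
        "Fatal Bugs in Production": 59, "Poor Front-End Optimization": 52,
        "Integration Dependencies": 39, "Database Contention": 34,
        "Browser Incompatibility": 23, "Device Incompatibility": 14,
        "Hardware Conflicts": 13, "Inadequate testing": 9,
        "Too much code": 6, "Exception handling": 5,
    }

    total = sum(defects.values())
    cumulative = 0.0
    for name, freq in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
        share = 100.0 * freq / total
        cumulative += share
        label = "VITAL FEW" if cumulative <= 80 else "trivial many"
        print(f"{name:32s} {freq:4d} {share:6.2f}% {cumulative:7.2f}%  {label}")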
  30. 2 points
    Some of the common challenges in assigning severity ratings in PFMEA are listed below, along with some thoughts on how they could be mitigated.
    1. Understanding the ordinal rating scale: Interpretation of an ordinal rating scale differs from that of a ratio scale, and there is a risk of drawing incorrect conclusions. For example, if the rating scale labels 3 as 'likely' and 6 as 'very likely', the impact implied may be significantly different from that of a 2 versus a 4, and not exactly double, even though the ratio is the same in both cases. The range, however, may be treated as if it were proportional if the rating scale is not well explained. The solution is to have a detailed discussion on the assessment mechanism, including the rating scale.
    2. Different rating scales for different industries: The severity rating scale may have very different implications; for example, the scale used for the healthcare industry will have very different scaling parameters and levels versus the insurance or automobile industry. This challenge can be addressed by working with the actual team members and the respective functional leaders to design a rating scale that is relevant to the organization.
    3. Differences in interpretation: Even with the same rating scale, there can be differences in the interpretation of the severity and impact of a possible risk, based on the personal experiences of the person conducting the assessment. The solution in such a situation is to hold calibration meetings to ensure that everyone is on the same page.
    4. Cognitive biases: The challenge in using rating scales rather than statistical data to arrive at severity ratings is that the ratings may be subject to cognitive biases such as:
    a. Considering only "known unknowns" and not planning and designing suitable response mechanisms for black swan events and "unknown unknowns".
    b. Availability: People typically ignore statistical evidence and base their estimates on their memories, which favour the most recent, emotional and unusual events that had a significant impact on them.
    c. Gambler's fallacy: People assume that individual random events are influenced by previous random events, which may be spurious correlations without causal relationships.
    d. Optimism bias: People overestimate the probability that positive events will occur for them, in comparison with the probability that these events will occur for other people.
    e. Confirmation bias: People seek to confirm their preconceived notions while gathering information or deriving conclusions.
    f. Majority: People may go along with the assessment of the majority to conform with the group, at the cost of their objective opinion, which may be a truer representation but different from the group opinion.
    g. Self-serving bias: People have a propensity to assign themselves more responsibility for successes than for failures.
    h. Anchoring: People tend to base their estimates on previously derived or used quantities, even when the two quantities are not related.
    i. Overconfidence: People consistently overestimate the certainty of their forecasts.
    j. Inconsistency: When asked to evaluate the same item on separate occasions, people tend to provide different estimates, despite the fact that the information has not changed.
    The solution in this case is to screen whether the ratings have been influenced by these biases and to inform the participants in advance to consider whether their ratings may have been so influenced. Other mitigation measures could be blind peer rating or benchmark comparison with industry ratings for similar processes.
    5. Interdependence between causal factors and failure modes: FMEA assumes that each risk is an independent event, whereas there may be a high degree of interdependence between factors, which could influence the risk rating significantly. Understanding and articulating such interrelationships can be challenging, and not considering their impact could mean that the assessment is not representative of the possible risks and the resulting impacts. The way to mitigate this is to have a detailed discussion with all relevant stakeholders and the process experts, in a well-designed structure, to ensure that all the risks and their interrelationships are well understood and documented.
    6. Challenge in considering the effect on both the customer and the process (assembly/manufacturing unit): Unlike DFMEA (Design FMEA), where we look at the effects on the customer, in a process FMEA we need to consider the impact, and hence the severity rating, of the failure mode whether it affects the process or the customer. This leads to more complexity, as multiple scenarios have to be considered. The challenge can be mitigated by taking the higher of the severity ratings for the process and for the customer as the severity rating for the failure mode.
    7. Challenge in separating the impact of the root cause from the assessment of the failure mode: Though there is a perspective that in some cases root causes and failure modes can be used interchangeably, if we drill down further it is evident that root cause analysis is typically conducted post facto (after the event), whereas failure mode identification happens proactively and takes into account various other factors apart from the proximate cause. The challenge is to ensure that this understanding percolates to the team creating the FMEA document.
    8. Challenge in ensuring that risk assessment is an ongoing process rather than a one-time activity: Risk assessment (including identification and severity assessment) has to be an ongoing process and not a single point-in-time activity, as the severity and impact may change materially where there have been significant changes in internal or external drivers, process dynamics or key environmental factors. The challenge is to ensure that the rigour of the assessment is maintained and updated with any relevant changes. The solution is to have a monitoring/governance mechanism that keeps the FMEA as a live document with relevant updates, ensuring correct risk ratings.
    9. Challenge in considering the impact over different timescales: E.g. the impact of a risk that manifests immediately may be significantly different from one that manifests after some time. The solution would be to conduct a timescale analysis of such risk factors, take into consideration the impacts of recent events, and see whether the severity rating could change in such cases.
  31. 2 points

    From the album: July-Sep 2019

    © Benchmark Six Sigma

  32. 2 points
Kaizen, Kaikaku and Kakushin are three approaches within Lean which have their roots in Toyota. They work well together, but differ in focus area and in the magnitude of impact and risk. The comparison below provides differentiation and tips for their implementation.

Meaning of the terms: Kaizen (Kai – change, Zen – good), Kaikaku (radical change), Kakushin (innovation).

Definition:
- Kaizen: Evolutionary change for the better, focused on incremental improvements
- Kaikaku: Revolutionary change, focused on radical improvements
- Kakushin: Innovation, transformation, reform and renewal

Focus area:
- Kaizen: Continual improvement of processes
- Kaikaku: Transformation of the organizational culture
- Kakushin: Bringing something new into existence

People involved:
- Kaizen: All levels, including workers
- Kaikaku: Executives and top management
- Kakushin: Top management

Risk / Impact:
- Kaizen: Low
- Kaikaku: Medium
- Kakushin: High

Steps / Tips / Techniques:
- Kaizen: 5S (Seiri – sort; Seiton – set in order; Seiso – shine the workplace; Seiketsu – standardize; Shitsuke – self-discipline) and the 7+1 wastes (transportation, inventory, motion, waiting, over-processing, over-production, defects, under-utilization of skills)
- Kaikaku: Look for ways to make the maximum contribution to the ideal state ("what would be the ideal customer experience"); search for opportunities for radical improvement; apply the 80-20 rule to do more with less; creative problem solving; challenge assumptions; ask "what" and "why" questions to think differently; brainstorm creative solutions; know how to sell radical ideas and overcome resistance; think positively and act promptly; follow radical improvements with continual improvement (Kaizen)
- Kakushin: Attribute listing; biomimicry; brainwriting 6-3-5; challenge assumptions; Osborn checklist; Harvey cards; Lotus Blossom technique; redefinition; reverse brainstorming; Systematic Inventive Thinking; COCD box; force field analysis; Six Thinking Hats; follow it with radical and incremental improvements

Examples:
- Kaizen: Reduce production time by implementing 5S; usability improvement in software that lets people enter data with a reduced number of values
- Kaikaku: Introduce a new, lighter material for the vehicle body and reform the production processes; upgrade software with new technology that allows faster development, better performance and more features
- Kakushin: Make simplified cars by cutting the number of parts in half; extend software to multiple media, allowing ease of access and seamless collaboration and eliminating duplication throughout the supply chain

Conclusion: All three techniques have different roles in the Lean journey and allow an organization to identify and implement changes at different levels and magnitudes of impact. Each of them is necessary, and they must be run in tandem for an organization to be truly lean and successful: a company that is only innovative may not succeed in the long run because it loses out on efficiency, while a company that is strong only on efficiency cannot sustain itself in the long run because innovative competitors will beat it in the market.
  33. 2 points
Kaizen: A combination of two Japanese words, kai and zen. Kai means "change" and zen means "for the better", so the term means "change for the better". It refers to continuous improvement done in the workplace using small, incremental changes.

Kaikaku: The Japanese term for "radical change". It refers to fundamental, radical changes made to the system in which we are working.

Kakushin: In Japanese it means "innovation". It refers to changes to the system that can lead to a paradigm shift in how it works, such that we need to realign our thinking to be more innovative.

Comparison:
1. Focus – Kaizen focuses on elimination of waste (Muda), productivity improvement and reducing overburden on employees (Muri) through small continuous improvements; Kaikaku focuses on radical or revolutionary changes with big improvements; Kakushin focuses on breakthrough ideas, products or services.
2. Cultural change – With Kaizen it is slowly imbibed into the working DNA of employees; with Kaikaku it happens explicitly and drastically; with Kakushin it happens consciously through focused thinking.
3. Participation – Kaizen normally involves all workers, since it deals with process kaizen (individual workstations) and flow kaizen (material and information); Kaikaku and Kakushin do not necessarily involve all workers.

How do they complement each other? Kaizen is the base, the building block on top of which Kaikaku and Kakushin can be done. The objective is first to remove non-value-adding work through Kaizen and then see what else needs to be done. When many Kaizen activities stop yielding results, we go for Kaikaku. This is akin to DMAIC and DMADV: if we think DMAIC is not going to work, there is no point in trying to improve the existing process, so we opt for DMADV. In the same way, moving to Kaikaku is a radical shift in approach. What next? If we find an even better way to optimise our benefits, the system should move to a transformed state with our thought process realigned to innovation; that is where Kakushin comes into the picture.

Conclusion: All three are a must for an organisation to stay competitive in the market.

What would a company lose if one of these concepts was not utilised?

Case 1 – If Kaizen is not utilised: It would be like building a house without a strong base. Kaizen helps in setting individual standards, eliminating waste and non-value-added activities, and controlling overwork of employees. Without Kaizen:
a) The impact of the other two types of improvement may not be effective, because the processes remain weak while non-value-added activities persist.
b) Kaikaku and Kakushin focus primarily on system improvements and not on individual standards, so the employee focus would be missing.
c) Employee morale may go down, because cultural change is thrust upon employees with nothing addressing Muri; they might spend long hours adapting to the cultural changes brought by the other two types of improvement.

Case 2 – If Kaikaku is not utilised:
1. Small changes might keep happening eternally without much impact.
2. Management and key stakeholders may not be able to take decisions on issues/problems.

Case 3 – If Kakushin is not utilised:
1. The organisation will not be competitive in its business.
2. It becomes difficult to grow in a niche market.
3. Business growth, and hence revenue, stagnates.
4. Morale of top management goes down.

Example of Kaizen, Kaikaku and Kakushin (assuming we are in a primitive age of IT):

Kaizen
- Problem statement: Multiple developers working on the same code/functionality creates instability and delays deployment of files.
- Before Kaizen: Code written by one developer is inadvertently overwritten by another, sometimes on the delivery date, creating customer escalations.
- Kaizen: Introduce a version control system.
- Results: Version control eliminates overwriting; the latest code is always used for delivery and the right file is deployed, eliminating customer escalations.

Now version control is available. Next issue.

Kaikaku
- Problem statement: With more than one developer working on the same and multiple files, changes need to be deployed frequently to the code repository, which is not happening.
- Before Kaikaku: Due to time pressure, code deployed to the repository throws errors during testing, so the tester cannot test the application.
- Kaikaku: Adopt Continuous Integration (CI) with an integration server that seamlessly integrates all code, produces a build (a compiled version ready for consumption) and reports whether the build passed or failed.
- Results: Testers and developers are notified of the success or failure state, making it easy for testers to test.

In today's environment, time to market is key, so the sooner we make changes, the faster we should deploy them to production; otherwise business is lost. We now make frequent changes, deploy them to the local environment and test; do we have the capacity to deploy those changes to production in real time?

Kakushin
- Problem statement: As frequent changes are made and tested locally, it becomes difficult to deploy them to production every time, because the environments differ and changes are needed in various places, including the code, so that nothing breaks in production.
- Before Kakushin: It takes 2 days of manual effort; the stress of long office hours takes a toll on individual health, and multiple SMEs are needed because the work spans more than a day.
- Kakushin: Automate the deployment.
- Results: Manual deployment effort is avoided; a single SME who knows automation is sufficient; if the automation sequence is done properly, there is no mental stress or boredom.

A minimal sketch of such a deployment automation step is shown below.
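As a purely illustrative follow-up to the Kakushin example above, here is a minimal sketch of what automating a deployment step might look like. The artifact path, server names, directory and service name are hypothetical assumptions made for illustration, not a description of any real pipeline.

```python
# Hypothetical sketch of an automated deployment step (illustrative only).
import subprocess
from pathlib import Path

BUILD_ARTIFACT = Path("build/app.war")          # assumed build output
SERVERS = ["app-server-1", "app-server-2"]      # assumed target hosts
REMOTE_DIR = "/opt/webapps"                     # assumed deployment directory

def deploy() -> None:
    if not BUILD_ARTIFACT.exists():
        raise FileNotFoundError(f"Build artifact missing: {BUILD_ARTIFACT}")
    for server in SERVERS:
        # Copy the artifact to the server (scp assumed to be available)
        subprocess.run(
            ["scp", str(BUILD_ARTIFACT), f"{server}:{REMOTE_DIR}/"],
            check=True,
        )
        # Restart the application service on the server (service name assumed)
        subprocess.run(
            ["ssh", server, "sudo systemctl restart myapp"],
            check=True,
        )
        print(f"Deployed {BUILD_ARTIFACT.name} to {server}")

if __name__ == "__main__":
    deploy()
```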
  34. 1 point
Muda, Muri and Mura are three deviations, often discussed together in the Toyota Production System, that jointly explain the inefficient allocation of resources and the wasteful practices to be eliminated. This is also called the 3M model of the Toyota Production System, and the three are known as the enemies of Lean – Muda (waste), Muri (overburden) and Mura (unevenness).

What are Muda, Muri and Mura?

Muda (waste): Several definitions help in understanding Muda:
1. Work that does not add value for the customer.
2. Activity that consumes resources but does not create value.
3. Activity that does not help the business or the workers in any way.
4. A direct obstacle to flow.
Two types of Muda:
- Type 1: Non-value-added activities in the process that are nevertheless essential, e.g. inspection and safety testing.
- Type 2: Non-value-added activities in the process that are non-essential for the end customer.
There are eight categories of waste that can be eliminated because they are not essential for the end customer, remembered through abbreviations such as TIMWOOD(T), DOWNTIME or WORMPIIT:
1. Transport
2. Inventory
3. Motion
4. Waiting
5. Overproduction
6. Over-processing
7. Defects
8. Non-use of resources or talent (intellect)

Muri (overburden): Several definitions help in understanding Muri:
1. Manpower or equipment/machines are utilised at more than 100% to complete a task.
2. Running at a higher or harder pace, with more force, for longer than designed.
3. Overburden, excessiveness, unreasonableness, close to the impossible.
4. Work that is overly difficult or that overburdens workers.
Muri can result from Mura and, in some cases, from Muda.
Why it occurs:
1. Ineffective training
2. No standard operating procedures
3. Wrong tools/equipment
4. Wrong process
5. No optimisation
Long-term impacts:
1. Absenteeism due to illness
2. High attrition rate (people leaving)
3. More breakdowns (machines run beyond designed capability, with little or no time for maintenance)
How to avoid Muri:
1. Define standard work
2. Distribute load evenly to avoid overburden
3. Preventive or autonomous maintenance

Mura (unevenness): Several definitions help in understanding Mura:
1. Waste that occurs due to unevenness in a production system or service
2. Unevenness in an operation
3. Non-uniformity or irregularity
Mura results from the existence of any of the seven/eight wastes in the system or process.
Why it occurs:
1. Fluctuation in customer demand (a production system that cannot handle the demand) or an uneven workplace
2. Variation in cycle time
3. Uneven workload
4. Low volume but high product variation
5. Flexibility being more important than volume
Long-term impacts:
1. Defects are manufactured
2. Inconsistent products are delivered
3. Capacity is lost (at some point the production floor struggles to complete large orders and then sits idle when orders are few)
4. It creates Muri (overburden), which undermines efforts to eliminate Muda (the 8 wastes)
How to avoid Mura:
1. Just-In-Time (JIT)
2. Kanban system
3. Pull system
4. Level scheduling
5. Workload balancing
6. Standard work

How can the healthcare sector address these wastes? Healthcare is a huge industry with multiple start and end points, which leads to a lot of waste, imbalance and overcrowding in specific areas.
Let us understand, one by one, how the healthcare sector can address these wastes, especially in the COVID-19 pandemic situation.

Muda: As explained above, Muda has eight wastes (TIMWOOD-T), which can be related to the COVID-19 situation.
1. Transportation – inefficient movement:
a. Patient movement from room to lab/diagnostic department or other locations. Example: only a limited number of hospitals and labs are allowed to do COVID-19 tests, which leads to massive movement from one location to another.
b. Daily essentials moved from the storage room to different floors. Example: masks and gloves moved from the storage room to the floors where COVID-19 patients are admitted.
c. Medication moved from the pharmacy to different floors as required. Example: specific medicines still under trial and used for other diseases.
2. Inventory – huge stock, bulk ordering:
a. Overstocked consumables. Example: gloves and masks ordered or kept in bulk in anticipation of heavy use.
b. Stationery. Example: pre-printed stationery with specific details cannot be used because only limited sections of the hospital are open during COVID-19; most doctors are also moving to digital records.
c. Medicine expiry. Example: bulk ordering of medicines with expiry dates, while only limited departments are open and people avoid coming to the hospital, reducing medicine purchases.
3. Motion – unnecessary movement of people in the hospital:
a. Layout. Example: a doctor has to move from one building to another because his office is in building A while the COVID ward is in building B.
b. Goods not stored where needed. Example: gloves, masks and COVID-specific medication kept in building A while patients are in building B.
c. Testing equipment. Example: patient movement between buildings or floors for specific tests.
4. Waiting – occurs when flow is blocked by unavailability of material or by problems such as equipment downtime:
a. Patients in the OPD waiting area. Example: queues have increased because people fear they have COVID-19 even with minor symptoms.
b. Patients waiting for testing. Example: the testing queue has grown because there are not enough kits to match demand.
c. Patients waiting for test results. Example: the huge number of test requests creates a massive backlog.
d. Patients waiting for admission. Example: limited beds mean patients wait for admission.
5. Overproduction – occurs when providers do more than the customer needs:
a. Unnecessary diagnostic tests. Example: an ordinary flu creates fear and people ask for a COVID-19 test.
b. Excess medicine orders. Example: dispensing more medicine than needed in anticipation of future requirements.
6. Over-processing – doing more than required, making things more complex:
a. Seeking a big hospital and a highly qualified doctor. Example: consulting a specialist for a light flu that can be recovered from by resting at home.
b. Over-testing. Example: referring every type of flu for a COVID-19 test without looking at the symptoms.
7. Defects – defects in healthcare can cost a human life:
a. Wrong diagnosis after a test. Example: COVID-19 test results were highly unreliable due to bad kits.
b. Administrative mistakes (incorrect medication). Example: wrong name mentioned on a testing sample.
c. Wrong coding related to a patient. Example: wrong name mentioned on test results.
8. Talent – unutilised resources:
a. Not using the right resource in the right place. Example: doctors or nurses who know how to use PPE kits properly are not deployed in COVID-19 treatment, or their ideas for controlling cases are not implemented.

Muri (overburden): As explained above, manpower or equipment is utilised at more than 100%, or run at a higher or harder pace for longer than designed. In the current pandemic situation:
- Doctors and hospital staff are working more than 8 hours, and in some places more than 12 hours (taking 8 hours as the standard working time under labour law).
- Cleaning staff in hospitals, factories and offices are working beyond defined hours, in some places around the clock, due to shortage of manpower.
Why is this happening?
- [No standard operating procedure / ineffective training] There is no specific medication for COVID-19, so doctors spend a lot of time on different trials, testing different approaches and medicines.
- [Wrong process] The life of the COVID-19 virus is not fully known and information is shared based on experience, which is why frequent cleaning and sanitation are required.

Mura (unevenness): As explained above, this waste occurs due to unevenness in the production system or service, and Mura results from the existence of any of the eight wastes, which are fully or partially linked to the current pandemic situation.
Why is this happening?
- [Fluctuation in customer demand / uneven workplace] There are currently more COVID-19 patients than available hospital beds, and not every hospital is equipped to handle COVID-19 patients.
- [Variation in cycle time] The recovery rate varies from patient to patient.
- [Uneven workload] Since only a limited number of hospitals are allowed to handle COVID-19 patients, the doctors and staff in those hospitals carry a heavier load.

References:
https://theleanway.net/muda-mura-muri
https://www.mudamasters.com/en/lean-production-theory/toyota-3m-model-muda-mura-muri
https://www.lean.org/lexicon/muda-mura-muri
https://www.kaizen.com/blog/post/2018/05/09/muda-mura-muri.html
https://blog.kainexus.com/improvement-disciplines/lean/7-wastes-of-lean-in-healthcare
  35. 1 point
'Analysis Paralysis' and 'Extinct by Instinct': Analysis paralysis (or paralysis by analysis) is a behavioural phenomenon in which an individual or a group over-processes, over-analyses or over-thinks a critical situation to the point where forward motion and decision-making become paralysed, meaning that no solution or course of action is decided upon. The situation may feel too complicated, and a decision is never made for fear that the change may create a problem larger than the one being solved. A person, a group or the leadership should instead take a decision within an agreed time frame, supported by business-analysis evidence. A person may desire the perfect solution, but fear that a decision on the way to a better solution could result in error. At the other end of the time spectrum is 'extinct by instinct': making a fatal decision based on hasty judgement or a gut reaction.

Analysis paralysis generally happens when the fear of making an error outweighs the realistic expectation or potential value of a decision made in a timely manner. An overload of options can overwhelm the situation and cause this "paralysis", rendering one unable to come to a conclusion. It becomes a larger problem in critical situations where decisions must be reached but the leader or person cannot respond fast enough, potentially causing a much bigger issue. Leaders should take decisions based on the data available, past experience and how risk has been handled and mitigated before.

Business Analysis Core Concept Model: Whoever is taking a decision in a large organization should consider the following Business Analysis Core Concept Model in any critical situation:
1. Value
2. Solution
3. Need
4. Stakeholders
5. Context
6. Change
Definition: We deliver Value from a Solution to a Need of Stakeholders within a Context of Change.

Decision making: A person, group or leader must be effective in understanding the criteria involved in making the best decision and must also be able to help others make better decisions in critical situations.

Business analytics practices (iterative, methodical exploration) for data-driven decision making include:
i) Descriptive analytics
ii) Predictive analytics
iii) Prescriptive analytics

Measures of effective decision making include:
i) The respective stakeholders are present or available in the decision-making process.
ii) Stakeholders understand the decision-making process end to end and the rationale behind each decision.
iii) The pros and cons of all available options are clearly communicated to stakeholders.
iv) The chosen decision reduces or eliminates risk, and any remaining uncertainty is accepted.
v) The decision addresses the opportunity at hand and the best interests of all stakeholders.
vi) All stakeholders understand the conditions, environment and measures under which the decision will be made.
vii) A best decision is made.

What causes analysis paralysis? Analysis paralysis shows up as the inability to make a decision because of over-thinking and over-processing the available options, possibilities and data.
It is one of the major causes of project interruptions, exhausting project-level planning sessions, the gathering of unnecessary data, and slow movement between phases or stages.
1. Personal analysis: Analysis paralysis can occur at a personal level during decision making. Overwhelmed by the information and data on hand, and fearing change or the act of deciding, the decision maker is unable to make a rational decision.
2. Conversational analysis: Analysis paralysis can occur at any time, on any issue, in ordinary, elevated or intellectual discussions. In such discussions it involves over-analysis of a specific issue to the point where the issue can no longer be recognised or agreed upon, and the subject of the conversation is lost. This can also lead to major risk in a large organization. The Business Analysis Core Concept Model helps deliver the best decision during such decision-making discussions.

Prevention and overcoming: The following approaches help to prevent or overcome analysis paralysis and extinct by instinct:
1. Set limits
2. Clarify objectives and priorities
3. Do not insist on perfection
4. Use incremental development or an Agile approach rather than waterfall, based on the methodologies defined in the project management plan
5. Involve stakeholders during the analysis phase
6. Define the scope of the project
7. Define goals and deliverables well in advance
8. Define the success criteria
9. Take small iterative steps
10. Regression
11. Change the number of options
12. Add or remove emotion
13. Random selection
14. Talk about it
15. Make your best decision

Business analysis techniques to prevent analysis paralysis and extinct by instinct: The following integrated business analysis techniques can be used to reach good decisions in a large organization and help avoid both failure modes:
1. Interviews
2. Legal/regulatory information
3. Surveys or questionnaires
4. Workshops
5. Presentations
6. Assess requirements changes (with respect to cost and time estimates, benefits, risks, priority and course of action)
7. Decision modelling (decision trees, decision requirements diagrams)
8. Benchmarking and market analysis
9. Business cases
10. Business rules analysis
11. Data mining
12. Estimation
13. Observation: active/noticeable, passive/unnoticeable
14. Stakeholder lists, maps and personas

Selected techniques explained:
i) Benchmarking and market analysis: The objective of market analysis is to acquire information that supports the various decision-making processes within an organization, improves operations, increases customer satisfaction and increases value to stakeholders.
ii) Decision trees / decision requirements diagrams: A decision requirements diagram is a visual representation of the information, data, knowledge and decision making involved in a complex business decision. Its key elements are decisions, input data, business knowledge models and knowledge sources; these nodes are linked into a network to show the decomposition of complex decision making into simpler building blocks.
Guidelines and tools: The following business-analysis guidelines and tools can be used to avoid analysis paralysis:
1. Governance approach
2. Policies
3. Validated performance measures

With respect to analysis paralysis and extinct by instinct, the guidelines, tools and techniques above, together with the prevention measures, help organise organizational data with high accuracy. High-quality data can then be used for data-driven decision making, which avoids over-thinking, over-processing, fear of change and confusion. The resulting outputs give leadership and decision makers more confidence to make effective decisions on time, in a safe and secure manner.

References:
https://en.wikipedia.org/wiki/Analysis_paralysis
https://en.wikipedia.org/wiki/Extinct_Instinct
https://www.iiba.org/
https://businessanalystlearnings.com/blog/2014/2/10/managing-analysis-paralysis

Thanks and Regards,
Senthilkumar Ganesan
Email: senthillak@gmail.com
Mobile: +91-7598124052
  36. 1 point
As new lurking variables keep appearing, the model seems to be adapting to the new inputs. However, with the limited trend of the last 4 blocks, I feel the actual will be between 70,500 and 72,500.
  37. 1 point
    GMT20200306-153053_Benchmark-_1920x1080.mp4
  38. 1 point
The Wisdom of Crowds refers to the aggregation of information, ideas or guesses within a group, which is regarded as better than what any particular member of the group could have put forward alone. The idea gained prominence with Francis Galton's observation that when a crowd at a country fair guessed the weight of an ox, the average of the guesses was much closer to the true weight than the guesses of the individual members.

The advantage of the wisdom of crowds is attributed to the following factors:
- Cognition: Individual thinking and opinion-forming are much faster than the deliberation of experts or expert committees, which often produce biased decisions because members influence each other or are influenced externally.
- Diversity of opinion
- Independent opinions: not influenced by others' opinions
- Decentralisation: drawing opinions from local knowledge
- Aggregation: assembling all individual judgements into a collective decision
- Trust: every person trusts the collective group and respects its decision

Reasons the wisdom of crowds fails, and strategies to combat them:
1. Homogeneity / lack of diversity: There has to be enough diversity to generate sufficient variance in thought and private information.
2. Centralisation: The opinion must be derived from local knowledge rather than from a centralised controlling factor.
3. Division / lack of dissemination of information: Information needs to flow freely between subdivisions; its absence can lead to failures, as in the 9/11 attacks, where lack of information-sharing between subdivisions contributed to the failure of intelligence to prevent them. By contrast, the free flow of research information on the SARS virus and its isolation, without central control, led to a better curb on the infection.
4. Imitation: The focus should be on making the right decision based on the current choices, rather than looking for a similar past decision and imitating it.
5. Emotionality: Emotional factors such as togetherness can create peer pressure, members influence each other, and collective hysteria can result. Members should therefore be independent, without significant peer pressure.

Applications of the wisdom of crowds in the real world:
- Prediction markets: One of the most common applications; these create speculative or betting markets based on questions such as "who do you think will win the polls?". Current market prices indicate the probability of the event. A well-known example is Betfair, one of the world's biggest prediction exchanges, with a very high trade volume based on collective prediction. Many web-based quasi-prediction-market companies use this phenomenon to offer predictions on sporting events, the stock market and so on. The principle is also used in project-management software to let team members predict realistic deadlines and probable budgets.
- Delphi method: A structured, interactive decision process based on a panel of independent experts. The selected experts answer questionnaires in a few rounds; after each round a moderator provides a summary of all expert opinions, and participants are encouraged to modify their answers in light of the others'. The range of answers narrows over the rounds and converges towards an answer that is better than the individual answers.
- Human swarming: Enabled by software such as the UNU collective-intelligence platform, in which groups of networked users collectively respond to questions, generate ideas and make collective predictions. Studies have shown that human swarms outperform individuals across a number of real-world predictions.
- Stock markets: The wisdom-of-crowds aspect of stock markets enables decision makers (e.g. firms' managers, capital providers, regulators or central bankers) to use stock markets, together with their own information, as a large-scale predictive market.

A small simulation of the averaging effect behind all of these applications is shown below.
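The aggregation effect described above is easy to demonstrate numerically. Below is a minimal simulation sketch, assuming each person's guess of a true quantity equals the true value plus independent noise; the true weight, noise level and crowd size are made-up parameters chosen only to illustrate how the averaged guess tends to beat a typical individual guess.

```python
# Minimal simulation of the wisdom-of-crowds averaging effect.
# Assumptions (illustrative only): true value 1200, each guess = true value
# plus independent Gaussian noise, crowd of 800 people.
import random

random.seed(42)

TRUE_WEIGHT = 1200.0   # hypothetical true weight of the ox
CROWD_SIZE = 800

guesses = [random.gauss(TRUE_WEIGHT, 150.0) for _ in range(CROWD_SIZE)]

crowd_estimate = sum(guesses) / len(guesses)
avg_individual_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"Crowd average estimate   : {crowd_estimate:8.1f}")
print(f"Crowd average error      : {abs(crowd_estimate - TRUE_WEIGHT):8.1f}")
print(f"Typical individual error : {avg_individual_error:8.1f}")
```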
  39. 1 point
Benchmark Six Sigma Expert View by Venugopal R

Though the evolution of industry has been continuous, it is commonly classified into four stages, or generations, starting from the 18th century. The current advancements are termed the 4th Industrial Revolution, also known as Industry 4.0. Before discussing the characteristics of Industry 4.0, let us take a brief look at the earlier stages to see where we have come from.

Industry 1.0: During the 18th century, manual methods of production were replaced by steam and water power in the Western world. The weaving industry was one of the first to adopt this, followed by others. Industry 1.0 may be seen as the beginning of an industrial culture of producing volumes with efficiency and consistency.

Industry 2.0: This revolution came around the beginning of the 20th century and was propelled mainly by the invention of electricity. Electric power replaced steam- and water-driven machines, and practices for the mass production of goods emerged. The development of railroad networks and the telegraph brought people together through travel and communications. This revolution led to a surge in economic growth, and the early concepts of management and industrial engineering surfaced.

Industry 3.0: This is the era most of us experienced in the later half of the 20th century, with developments starting after the two world wars. Digitization, starting with electronic calculators, and the invention of semiconductors, integrated circuits and programmable controllers made deep strides. Computers came into extensive use for industrial and other purposes, which in turn led to the growth of the software industry. Software usage expanded to supporting areas of management such as Enterprise Resource Planning, logistics, workflow, supply chain management and so on.

Industry 4.0: By the 1990s there was abundant development in communications and Internet applications.

Characteristics: Industry 4.0 has revolutionized, and will continue to revolutionize, the methods for exchanging information. While the previous industrial revolutions helped bring the world closer in terms of communications and reach, one characteristic of Industry 4.0 is overcoming geographical barriers so that activities can be carried out on a real-time basis. Cyber-physical systems have brought phenomenal transformation to many businesses by allowing machines to communicate intelligently with each other across physical and geographical barriers.

Components: Industry 4.0 is expected to evolve significantly in the near future. It has multiple components, many of them inter-related, and various articles list several of them. The nine components below are those identified by the Boston Consulting Group:
1. Big data and analytics: Analysing large and varied data sets to uncover hidden patterns, unknown correlations and trends, and to obtain meaningful inferences that help various situations, especially business.
2. Autonomous robots: Robots will eventually interact with one another, work side by side with humans and learn continuously.
3. Simulation: 3D simulation of product and material development and of production processes will become widespread; operators will be able to prepare and optimise machine settings for the next product in advance.
4. Horizontal and vertical system integration: Horizontal integration means networking individual machines, items of equipment or production units; vertical integration means gaining control of, and connection between, different parts of the supply chain.
5. Internet of Things: A network of a multitude of devices connected by communication technologies, resulting in systems that can monitor, collect, exchange and analyse data and deliver valuable new insights.
6. Cyber security: Processes and controls designed to protect systems, networks and data from cyber attacks.
7. Cloud computing: Storing and accessing data and programs over the Internet, providing real-time information and the scalability to support a multitude of devices and sensors along with all the data they generate.
8. Additive manufacturing: Also known as 3D printing, used to prototype and produce individual components.
9. Augmented reality: Currently at a nascent stage; these systems support a variety of services, such as selecting parts in a warehouse and sending repair instructions through mobile devices.

The above list may not be exhaustive, and new components can be expected to be added rapidly going forward.
  40. 1 point

    From the album: Oct-Dec 2019

    This is a group photo from the December batch of Lean Six Sigma Black Belt training conducted by Benchmark Six Sigma at Chennai.

    © Benchmark Six Sigma

  41. 1 point

    From the album: Oct-Dec 2019

    This is the group photo of Team 1 at the December 2019 Batch of Lean Six Sigma Black Belt conducted by Benchmark Six Sigma in Bangalore.

    © Benchmark Six Sigma

  42. 1 point
Benchmark Six Sigma Expert View by Venugopal R

Burn-down and burn-up charts are used in Agile Scrum for visual tracking of the progress of a project. The charts typically use project story points on the Y axis and the number of iterations on the X axis. Story points are a metric used in agile management to quantify the effort of implementing a given story; sometimes time (total FTE hours) is used instead of story points.

Burn-down charts show the remaining amount of work and the pace of the project with respect to the target, and give an idea of how close the actual completion date will be to the targeted date at the current pace.

Burn-up charts show the progress of work to date against the ideal curve. They have an additional horizontal line that shows the scope of the project at any point in time; if the scope changes, this line steps up or down. This helps in assessing the real effort being put in by the team, since the effect of scope changes can be accounted for, and it therefore helps in assessing KPIs.

Since burn-down charts depict the remaining work compared with an ideal target at each iteration, they are useful for giving commitments to clients and keeping them apprised of how close the project is to completion. Burn-down charts are simpler and more easily comprehensible, and serve the purpose if there are no changes in scope. Both burn-up and burn-down charts may be used together on a project for their respective benefits. A minimal numeric sketch of both views is shown below.
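To make the difference concrete, here is a small sketch that derives burn-down values (remaining story points) and burn-up values (completed story points plus a scope line) from the same sprint data. The iteration count, completed points and the mid-project scope change are invented purely for illustration.

```python
# Illustrative burn-down / burn-up numbers for a hypothetical 6-iteration project.
initial_scope = 120                              # story points planned at the start
completed_per_iter = [18, 22, 15, 20, 25, 24]    # points completed in each iteration
scope_change_at_iter = {4: 10}                   # +10 points of scope added before iteration 4

scope = initial_scope
done = 0
print(f"{'Iter':>4} {'Scope':>6} {'Done (burn-up)':>15} {'Remaining (burn-down)':>22}")
for i, pts in enumerate(completed_per_iter, start=1):
    # Burn-up shows a scope change as a step in the scope line;
    # burn-down simply absorbs it into the remaining work.
    scope += scope_change_at_iter.get(i, 0)
    done += pts
    remaining = max(scope - done, 0)
    print(f"{i:>4} {scope:>6} {done:>15} {remaining:>22}")
```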
  43. 1 point
Bench is wrong. We can never consider a process to be perfect in a fast-changing environment. Six Sigma projects not only improve the process but also provide an opportunity for innovative and creative solutions. If Kodak had done a Six Sigma project, they could have been the leaders in the digital camera world today. Six Sigma projects help an organization create products and services for the future; they keep the process active and alive, and without them the process becomes obsolete.
  44. 1 point
    The chosen best answer is that of Rajesh Patwardhan - this is the only answer that mentions that the second approach is unbiased and is written in very simple words. This is a tricky topic to answer questions on - so kudos to all who have replied! Very well attempted.
  45. 1 point
    Genchi Genbutsu - "Go and See" to investigate the issue and truly understand the customer situation. It basically refers to go and observe the process where the actual value is being added. As the question suggests, it makes perfect sense to use in in manufacturing however it is a myth that it is only used in manufacturing. As a concept Genchi Genbutsu is domain and industry agnostic. While preparing process maps, we usually tell the participants to create a map of "What the process is" and not "What it should be" or "what you think it is". One of the best means of understanding "What the process is" is to pick up a transaction and do a walkthrough of the process with it. This is Genchi Genbutsu for you as when you do a walkthrough of the process with the transaction you actually go to the process and see how it works. I am providing some examples below where the idea is same "Go and See". 1. Issue Resolution: when you raise an issue, the first thing that the agent / engineer will do is try to replicate the issue. They might do a screen share or take control of your computer and replicate the issue to understand where to attack and what to do 2. Software Testing: The first one happens when the code is compiled. The compiler does a walkthrough of the entire code and highlights the section of the codes that could not be compiled due to incorrect coding. Second happens during the multiple stages of testing - unit testing, integration testing and UAT. If a particular test case fails and the code is sent back to developer, the developer will first recreate the situation to see the failure (this is Genchi Genbutsu) 3. Medical conditions: Various invasive and non-invasive screening methods are used to first go to the specific location in the body and see the extent of the problem. E.g. X-ray, MRI, CT-scans, angiography etc. 4. Servicing of car: when you take your car for its regular service, the mechanic will first take a test drive of the car. What he is trying to do is to get a feel of how the car is driving so that he could pinpoint the issue which he will not be able to do unless he drives it himself.
  46. 1 point
One of the reasons for project failure is lack of planning, and this includes not only planning what one is going to do in the project but also planning how to check that the project is on track. Effective tollgates are an excellent mechanism to check progress and ensure that the project is still on the right path. For tollgates to be effective, one basically has to answer 5W and 1H (What, Why, Where, When, Who and How). Looking at each element in slightly more detail:
1. WHAT – Determine the requirements. What is the purpose of the tollgate? What information or artifacts are required? What questions have to be asked?
2. WHY – Determine the objectives of the tollgate. Why are we doing it? Why is it important? Is the purpose only to review, or also to approve?
3. WHERE – Determine the logistics. Where are we doing the tollgate?
4. WHEN – Determine the frequency and duration. When should the tollgates be set up in the project lifecycle?
5. WHO – Determine the participants. Who should present the progress? Who should be the audience? Who should ask the questions? Who will record the action items and meeting minutes?
6. HOW – Determine the decision criteria for acceptance or rejection. How will we judge the success of the tollgate? How many tollgates are required in the project lifecycle?
If the team has thought through the above indicative questions, the chances of an effective tollgate increase manifold. An effective tollgate has the following benefits:
1. It keeps the project team honest and true to the project objective.
2. It ensures that scope, cost and schedule creep do not happen.
3. It drives effective communication across levels in the organization (as the sponsor and/or other stakeholders may not be close to the project).
4. Any issues or challenges are brought to the notice of the right people at the right time so that solutions can be identified.
  47. 1 point
An outlier is an anomaly, an extreme observation: any observation that lies outside the pattern of the overall population distribution. A common rule of thumb is that any data point more than 1.5 × IQR below the first quartile or above the third quartile is an outlier. An outlier is often taken as an indication of a mistake in data collection, and it can skew statistical relationships. However, outliers can arise for several reasons:
- Data entry / typing errors
- Measurement errors
- Experimental errors
- Intentional/dummy data
- Data processing errors (e.g. due to a formula)
- Sampling errors
- Natural variation (not usually an error; these can be genuine novelties in the data)

We can find outliers by:
- Applying common sense, first of all
- Finding them visually (a graphical summary, boxplot or scatterplot can help)
- Using statistical tests, for example:
  - Grubbs' test for outliers (also called the extreme studentized deviate test)
  - Dixon's Q test
  - Cochran's C test
  - Mandel's h and k statistics
  - Peirce's criterion
  - Chauvenet's criterion
  - Mahalanobis distance and leverage

Methods of detection include:
- Z-score / extreme value analysis
- Probabilistic and statistical modelling
- Linear regression models
- Proximity-based models
- Information theory models
- High-dimensional outlier detection methods
In SAS, PROC UNIVARIATE and PROC SGPLOT can be used to find outliers.

Statistical tests can be used to detect an outlier; however, they should not be used to decide what to do with it (ignore or remove). One should have good domain knowledge when analysing outliers. (The original post included two example images: a data set with an outlier and the same data set without the outlier.)

Outliers may be univariate or multivariate:
- Univariate outlier: a data point that is an outlier on one variable
- Multivariate outlier: a combination of outliers on at least two variables
Other forms of outliers include:
- Point outliers: a single outlying value
- Contextual outliers: can be noise in the data
- Collective outliers: a subset of unusual points in the data (novelties)

We can ignore an outlier when it is a bad outlier and:
- We know the data is wrong (common sense)
- We have a big data set (ignoring the outlier does not matter much)
- We can go back and validate the data set for accuracy
- The outlier does not change the result, although it changes an assumption
- The outlier influences both the result and the assumptions; in this case it is better to run the analysis with and without the outlier (since we are not sure whether it arose from a mistake or from misclassification of the data) and then investigate whether the difference between the two results is minor or major
- The outlier is a data point from an unintended population

We should not ignore an outlier when it is a good outlier and:
- The results and outcomes are critical
- There are many outliers (which usually means they are not unusual)

Before ignoring an outlier, run through this checklist (for cautious and safe removal):
- Is the outlier due to a data entry typo?
- Is the identified value scientifically impossible?
- Is the assumption of a Gaussian distribution for the data set uncertain?
- Does the outlier value seem scientifically interesting?
- Do we have substantial information about the outlier that we need to retain?
- Are there any special circumstances, situations or cases behind the data points?
- Are there any potential measurement errors?
- If there are multiple outliers, could masking be a problem?
(In masking, an outlier goes undetected because other outliers hide it.) If the answer to the above questions is no, then either:
- (Situation A) the so-called outlier comes from the same Gaussian population, and we simply happened to observe a value from the top or bottom tail of the distribution, or
- (Situation B) the identified outlier comes from a different distribution but ended up in our data because of a mistake or a bad sampling technique.
For Situation A, removing the outlier would be a mistake. For Situation B, the outlier can be removed, cautiously.

Removal of outliers can be dangerous: it may improve the distribution and the fit, but most of the time some important information is lost. If we do treat outliers, the options include:
- Trimming the data set
- Winsorization (replacing outliers with the nearest good data)
- Transforming the data, or discretization
- Top, bottom and zero coding
- Replacing the outlier with the mean or median (extreme outliers influence the mean but not the median), or random imputation

When we run experiments and observe many outliers in the data, we should repeat the data collection instead of simply removing them, and when the outliers are significant, consider using robust statistical techniques. Outliers are not always bad data points; however, when the data set is small, an outlier can greatly influence the statistics (skewed data, inflated or deflated means, distorted range, and Type I and Type II errors). So it is better to do a thorough investigation, backed by domain knowledge, while performing this analysis. The analysis differs from case to case, and based on it we should take a cautious decision on whether to remove, keep or change the outlier. A small sketch of the IQR rule, a z-score check and winsorization is shown below.
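As a minimal illustration of the points above, the sketch below applies the 1.5 × IQR rule and a z-score check to a small made-up sample, and shows winsorization as one treatment option. The sample values and thresholds are assumptions for demonstration only, not a recommendation on what to do with any real data set.

```python
# Illustrative outlier detection (1.5*IQR rule and z-score) and winsorization
# on a made-up sample.
import statistics

data = [12, 14, 15, 15, 16, 17, 18, 19, 20, 55]   # hypothetical sample; 55 looks extreme

# 1.5 * IQR rule using quartiles
q1, _, q3 = statistics.quantiles(data, n=4)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
iqr_outliers = [x for x in data if x < lower or x > upper]
print(f"IQR fences: [{lower:.1f}, {upper:.1f}]  -> outliers: {iqr_outliers}")

# z-score check (|z| > 3 is a common but arbitrary cut-off). Note that with a
# small sample the extreme point inflates the standard deviation, so the
# z-score test may fail to flag the very point the IQR rule catches.
mean, sd = statistics.mean(data), statistics.stdev(data)
z_outliers = [x for x in data if abs((x - mean) / sd) > 3]
print(f"z-score outliers (|z| > 3): {z_outliers}")

# Winsorization: replace values beyond the fences with the nearest fence value
winsorized = [min(max(x, lower), upper) for x in data]
print(f"Winsorized data: {winsorized}")

# Note how the mean moves while the median barely changes
print(f"Mean   (raw / winsorized): {mean:.2f} / {statistics.mean(winsorized):.2f}")
print(f"Median (raw / winsorized): {statistics.median(data):.2f} / {statistics.median(winsorized):.2f}")
```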
  48. 1 point

    From the album: July-Sep 2019

    © Benchmark Six Sigma

  49. 1 point
The customer is always king! But every organization needs to determine what type of kings it wants to deal with, which will help it achieve sustainable and profitable growth. In the IT services sector, you ideally want to work with customers who treat you as a transformation partner, working jointly on solving their business requirements; this is in contrast to a vendor relationship that results in staff augmentation. Getting there takes a lot of effort in building the brand, capability and delivery excellence needed for customers to treat you that way. Working with large Fortune 500 corporations means a more structured approach to contracts, services and payments, but they also typically mitigate risk by having more than one vendor for their requirements. Working with small to medium-size organizations may bring challenges around contract size and payments, with the upside of becoming a strategic partner when the customer organization grows. Working with customers means investments in customer relationships, proposals, proofs of concept and so on, which are justifiable only if the relationship grows. It is better to trim the tail (where returns are meagre compared to investments) and refocus on profitable customers. Organizations need to choose their customers based on the products and services they provide, their stage of growth, risk appetite and financial situation. It is better to turn down customers when the requirement is not a skill the company has or wants to build, when there is insufficient capacity to meet the requirement, or when requirements are unclear and likely to lead to scope creep.

There are explicit and implicit ways of selecting the customers to do business with. Implicit means indirectly stated or implied, while explicit means directly stated and spelled out. In the explicit approach, companies spell out the policies a customer or vendor must meet to do business with them: a minimum revenue size, years of existence, creditworthiness, geographical location and so on. This usually changes as the organization grows. In the implicit approach, companies segment the customers they want to work with and design strategies so that customers self-select. For example, a high-end luxury retailer will have stores in premium locations or malls, offer limited but exquisite products, price in the higher range and offer excellent customer service (Nordstrom, Coach, Burberry, etc.). Retailers like Walmart, offering "everyday low prices", target the general population with a wider catalogue at an affordable price range. There is a margin-versus-volume dynamic, based on which companies design their investments; it is visible in many sectors: hotels (2-3 star vs 5 star), mobiles (Apple vs Android), cars (BMW vs Hyundai), etc.
  50. 1 point
A Kanban board is a tool used to depict the position of work in a process. As the question itself mentions, Kanban boards were primarily used for work allocation, monitoring progress, decision making and reporting (at the end of the day). Their most common use is in daily huddles / daily team meetings / stand-up meetings (whatever you may want to call them). The board is usually a whiteboard on which columns are created to track progress. These days there are multiple online versions of Kanban boards (though the joy of doing it with post-it notes or a marker pen on a whiteboard, the good old way, is hard to beat). Whether the board is manual or software-based matters less; what matters is tracking the progress. The simplest Kanban board has columns such as "To Do", "In Progress" and "Done" (the original post showed example images sourced from Google Images / smartsheet.com).

The best thing about the Kanban board is how it has evolved across industries and domains while the underlying features of allocating work, tracking progress and supporting decisions remain the same. The original post illustrated boards (images from Google Images) used in:
1. Agile software development / project management
2. Sales
3. Hiring
4. Incident management
5. Aviation (flight progress strips, both manual and automated)
6. Food ordering

You will notice that there are multiple variations of the Kanban board (manual or software-based), all trying to help the business and/or customer know the progress of their product or service through the various process stages. A more advanced, recent variation is the swimlane Kanban board, where additional characteristics can also be tracked. A minimal sketch of the underlying board structure is shown below.
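To show the underlying idea in data terms, here is a minimal sketch of a Kanban board as a simple data structure with cards moving between columns. The column names, card titles and the work-in-progress limit are illustrative assumptions, not part of any specific Kanban tool.

```python
# Minimal illustrative Kanban board: ordered columns, cards, and a simple
# WIP (work-in-progress) limit on the "In Progress" column.
from collections import OrderedDict

board = OrderedDict([
    ("To Do",       ["Fix login bug", "Write test cases", "Update user guide"]),
    ("In Progress", ["Design checkout page"]),
    ("Done",        ["Set up CI pipeline"]),
])
WIP_LIMIT = {"In Progress": 2}   # hypothetical limit to expose bottlenecks

def move(card: str, src: str, dst: str) -> None:
    """Move a card between columns, respecting any WIP limit on the destination."""
    if card not in board[src]:
        raise ValueError(f"'{card}' is not in column '{src}'")
    if dst in WIP_LIMIT and len(board[dst]) >= WIP_LIMIT[dst]:
        raise RuntimeError(f"WIP limit reached for '{dst}'")
    board[src].remove(card)
    board[dst].append(card)

move("Fix login bug", "To Do", "In Progress")
for column, cards in board.items():
    print(f"{column:12s}: {cards}")
```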