Popular Content

Showing most liked content since 09/21/2017 in all areas

  1. 2 points
    Some of the situations in which the "Push" system is generally successful are listed below. No two situations are the same, even if some appear similar.

1. Demand is easily and accurately predictable. With an accurate forecasting system, the risk of carrying "dead" inventory is low. Moreover, by planning and pushing a steady volume to the market, the supply chain and production are also steadied, eliminating delay losses.

2. Conversion cost between products is low due to late-point differentiation. If, in spite of an accurate forecasting system, there is a difference in the final product type demanded, the stock of Product A can be converted to Product B at very low cost and pushed to the market.

3. A very short time is demanded from order to delivery. If the market or customer demands very short or instant delivery from the moment an order is placed, there is no option except to supply from stock and avoid revenue losses due to short supplies.

4. Products do not deteriorate during storage. When there is no constraint on "shelf life", the risk of inventory being written off is low. Furthermore, inventory is used up sooner rather than later, reducing the cost of delays.

5. Carrying cost is less than the cost of lost business. When a manufacturer is able to make up for the expense of carrying inventory by exploiting predictable demand, the likelihood of profiting "net-net" is high compared with the potential loss of business, customers and reputation from becoming Just-Short-Of-Time rather than Just-In-Time.

6. Long, geographically global supply chains carry their own unpredictability. Even with the best e-Kanban-powered pull system, a long, winding supply chain that traverses the globe is so packed with potential "delay bombs" that some "good old" stock, which can be pushed, becomes the life-saver.

7. Shipping costs can be optimised by shipping in bulk. When the cost of transporting raw material, components or sub-assemblies can be whittled down to almost nothing by using up (say) full container space, stocking up and pushing is not a bad idea.

8. Demand profiles across time periods are static. When there are no fluctuations between days of a week, weeks of a month and months of a year, it is profitable to stabilise production and supply chains by planning and pushing an average volume periodically to the market.
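Point 5 above, carrying cost versus the cost of lost business, can be sketched as a simple comparison. The figures below are purely hypothetical and the model ignores real-world subtleties such as service levels and demand variability; it only shows the "net-net" trade-off in code:

```python
# Hypothetical illustration of point 5: comparing the cost of carrying
# stock against the expected cost of lost business when stocking out.
# All numbers below are invented for illustration.

units_stocked = 1000          # units pushed to stock
carrying_cost_per_unit = 2.0  # storage + capital cost per unit per period

stockout_probability = 0.30   # chance of a stock-out if we do not push stock
lost_sales_units = 800        # expected demand missed during a stock-out
margin_per_unit = 15.0        # contribution margin lost per missed unit

carrying_cost = units_stocked * carrying_cost_per_unit
expected_lost_business = stockout_probability * lost_sales_units * margin_per_unit

# Pushing stock is attractive "net-net" when the carrying cost is lower
# than the expected cost of lost business, customers and reputation.
net_benefit_of_pushing = expected_lost_business - carrying_cost
print(f"Carrying cost:          {carrying_cost:,.0f}")
print(f"Expected lost business: {expected_lost_business:,.0f}")
print(f"Net benefit of pushing: {net_benefit_of_pushing:,.0f}")
```

With these invented numbers the expected cost of lost business exceeds the carrying cost, so pushing stock pays off; with a lower stock-out probability the balance would tip the other way.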
  2. 2 points
    Excellence: Excellence is defined as the quality of being extremely good.

Personal Excellence: In simple words, setting the bar higher (a benchmark) in whatever activities the individual (who is compared with the rest) does.

Process Excellence: Providing an environment where the processes are highly stable and controlled, with minimal or no variation and minimum or no wastage (Muda). The focus is on continuous improvement to ensure the processes remain highly stabilised.

Operational Excellence: It reflects the way a person, unit, team or organisation excels at parameters such as cost, human resources, scope, time, quality, etc. By excelling at these, the provider of a service can deliver value to the customer with optimal or maximum efficiency.

Business Excellence: It is achieved through effective strategies, efficient business plans and best business practices, so that optimal results are achieved at a sustained rate.

How each one is related to the other(s): Personal Excellence is directly tied to Process Excellence. Only if the individual is interested in adhering to the processes laid out can process excellence, or for that matter any other activity, be successful. If the cultural shift or mindset is not there in the individual or team, no change will work. This can be represented by the formula: Quality of the solution (Q) x Acceptance of the solution (A) = Effectiveness of the solution (E). Unless there is acceptance of a change (the human part), nothing can be done. So if the individual has the desire to excel at his/her work, he/she will strive to ensure that both the individual and the organisation achieve Process Excellence.

Process Excellence provides a way for continuous improvement. The purpose of process excellence is to streamline all the processes, make them stable, and in doing so achieve a minimal degree of variation and minimal wastage. With a process excellence system in place, grey areas in Operational Excellence and Business Excellence can be identified and improved or rectified. Practically, it is difficult to achieve excellence in one when another is absent. For instance, Business and Operational Excellence require process improvements; if streamlining does not happen, there is no excellence in the business or operational aspects either. Similarly, without human involvement and an elevated mindset in the individual, it becomes difficult to run the processes at a top-notch level.

From an organisational perspective, the organisation should:
- Provide a conducive working environment in which individuals are encouraged to share their ideas and thoughts, creating transparency and a sense of ownership of the organisation's/unit's problems and constraints (Personal Excellence)
- Encourage individuals to showcase their creativity in designing and providing solutions to problems (Personal Excellence)
- Create challenging contests and reward people in various categories such as best creativity, best solution, optimal solution, etc. (Personal Excellence)
- Set up process standards and metrics for each parameter (define the expectation). Set the upper and lower limits, and also the customer specification limits (Process Excellence)
- Conduct awareness sessions on process expectations with reasoning and justification. Provide details with SMART goals (Process Excellence)
- Ensure that individuals and teams adhere to the standards, with constant monitoring through audits, inspections and reviews (Process Excellence)
- Look for scope for continuous improvement periodically, and adjust the process baseline accordingly if required (Process Excellence)
- Define the operational parameters that require excellence (Operational Excellence)
- Conduct awareness sessions for key stakeholders on those operational parameters, and provide the plan on when and how to achieve them (Operational Excellence)
- Track the status of operational excellence through project management reviews, status reports and similar artefacts, and address deviations (Operational Excellence)
- Preserve the best practices that were followed to achieve Operational Excellence (Operational Excellence)
- Define the strategies and plans needed for improving business results (Business Excellence)
- Define the best practices for getting business-oriented goals and activities done (Business Excellence)
- Conduct confidential meetings with key stakeholders, present the envisaged plan and convey expectations (Business Excellence)
- Conduct monthly/quarterly review meetings with the respective units and review the 4-quarter dashboard (Business Excellence)
- Use the Business Management section of the Customer Satisfaction Survey to see whether the organisation is on target with its objectives (Business Excellence)
- Document the outcomes of the business results and the effective means used to achieve them (Business Excellence)
  3. 1 point
    Case for Statistical Significance – an example: Let's consider the following data, the ages of 10 employees: 42, 35, 24, 31, 33, 41, 33, 31, 33, 32. Assume these 10 data points are a sample that represents a large population of, say, over 5000 employees. Using this available information, we are asked: "Can the average age of the employees in this population be considered equal to 30 years?" The quickest thing anyone would do is compute the average of the samples, which comes to 33.5. Since this is 3.5 more than 30, can we say that the population average age is more than 30? These are situations where there is bound to be judgmental subjectivity and a likelihood of reaching incorrect conclusions. This is a simple example of a situation where a test of hypothesis may be done, and the concept of statistical significance helps to reach an objective conclusion.

Statistical Significance – what does it imply? Statistical significance implies that the difference under evaluation (whether a population average is being compared to a specified value, the averages of two populations are being compared, the variances of two populations are being compared, etc.) can be considered significantly larger than what chance-cause variation would have produced. Since what we have is sample data, note that for different sets of samples the sample average is expected to vary within certain limits for the same population (and the same population average). These limits are governed by the variance of the population. The test of significance evaluates, with the given set of data, whether the sample average falls within the confidence limits. So long as the sample mean falls within the confidence limits, the conclusion will be that there is not sufficient reason to believe that the population average represented by this sample is different from the specified value.

Usage of Statistical Significance: In today's world the application of tests of significance has been simplified by statistical software such as Minitab. Once we give the inputs, depending upon the case being studied, the application produces a P value, which is used to determine the significance of the results. The smaller the p-value, the stronger the evidence against the null hypothesis. Usually, a p-value < 0.05 is used as the criterion for rejecting the null hypothesis, i.e. the difference is considered significant. As part of problem solving, tests of significance are an integral part of Hypothesis Testing, Analysis of Variance, Design of Experiments and other tools. They help in taking objective decisions with small samples. These methods are particularly useful during the Analyse phase, where they help to narrow down the short-listed causes, and in the Improve phase, where the effectiveness of identified solutions can be validated.
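As an illustration, the age example above can be run in code. The sketch below uses Python's scipy rather than the Minitab mentioned in the post, but the one-sample t-test and its p-value are the same:

```python
# One-sample t-test on the ages from the example above, testing whether
# the population mean age can be considered equal to 30 years.
from scipy import stats

ages = [42, 35, 24, 31, 33, 41, 33, 31, 33, 32]

t_stat, p_value = stats.ttest_1samp(ages, popmean=30)

sample_mean = sum(ages) / len(ages)   # 33.5
print(f"sample mean = {sample_mean}, t = {t_stat:.3f}, p = {p_value:.3f}")

# With the usual 0.05 criterion the p-value (about 0.059) is NOT below
# 0.05, so despite the sample mean of 33.5 there is not sufficient
# evidence that the population mean differs from 30.
reject_null = p_value < 0.05
```

This is exactly the "objective conclusion" the post describes: the 3.5-year gap looks large, but for a sample of 10 with this much variation it is not statistically significant at the 5% level.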
  4. 1 point
    In any business, performance is typically expected to vary over time and with respect to inputs. When comparing two performances, it would not be entirely correct to decide that the performances are different based on just one or a few data points from each. Sampling errors should not influence the decision. Therefore, it is essential that the correctness of the decision is sustainable over time. For the decision to be sustainable, data that reflects the sustainability of both performances is required. Once this data is available or collected, the decision based on it is also expected to hold over time. A decision taken based on samples must hold good for the populations as well. In other words, even after some unavoidable overlaps between the two performances, perhaps due to chance causes, the difference in the performances of the two populations must be visible, conspicuous and clearly discernible. In short, the two performances need to be significantly different.

But "significance" is quantitative and statistical. The significance of the difference is assessed from the statistical data of the two performances. A statistically significant difference represents both the clarity or discernibility of the difference between the two performances and the sustainability of this difference over time. Performances of two populations with a statistically significant difference will remain different over time unless special causes act on one or both of them.

But how significant is significant? This depends on the objective of the comparison and the stakes involved. The margin of error tolerable in taking a decision on the difference between the performances depends on these factors. For different combinations of conditions, this margin of error could be 1%, 5%, 10% or any other agreed number. This is the error involved in concluding that the two performances are significantly different based on the available statistics.

Uses of the concept of Statistically Significant Difference in Problem Solving and Decision Making: The uses of this key concept are innumerable, a few of which are given below.
1. Comparison of performances between two or more:
   a. Time periods
   b. Processes
   c. People
   d. Suppliers or service providers
   e. Applications
2. Assessing the effectiveness of:
   a. Training
   b. Improvements
   c. Corrective actions
   d. Action taken on suspected root causes
3. Evaluating:
   a. User ratings in market surveys against marketing campaigns
   b. Performances of new recruits against agreed targets
In all the above cases, Hypothesis Testing can be effectively applied to assess the existence of a statistically significant difference.
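A typical comparison of two performances, such as cycle times before and after an improvement, can be sketched with a two-sample t-test. The data below is invented purely for illustration:

```python
# Hypothetical sketch: comparing two performances (e.g. process cycle
# times before and after an improvement) with a two-sample t-test.
# The data below is invented purely for illustration.
from scipy import stats

before = [12.1, 13.4, 12.8, 14.0, 13.2, 12.9, 13.7, 13.1]  # minutes
after  = [10.2, 9.8, 10.5, 9.9, 10.1, 10.4, 9.7, 10.3]     # minutes

t_stat, p_value = stats.ttest_ind(before, after)

# A small p-value indicates a statistically significant difference,
# i.e. one unlikely to be explained by chance-cause variation alone.
significant = p_value < 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.4g}, significant: {significant}")
```

Here the two samples are clearly separated, so the test confirms a statistically significant difference; with overlapping samples the same code would report an insignificant one.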
  5. 1 point
    I suppose everyone agrees that if one is not good with numbers, career growth is likely to face a serious roadblock at one stage or another. I have noticed several people who fear mathematics, and this leads to certain problems in learning or applying Six Sigma. Many have already given up hope, assuming that they can never catch up. The good news, however, is that most people can address this weakness. It definitely needs a persistent effort to capture the mathematics concepts that are really important. Some of these are Algebra, Data Handling, Decimals, Equations, Exponents and Powers, Fractions, Graphs, Integers, Mathematical Modelling, Mathematical Reasoning, Probability, Proportions, Ratios, Rational Numbers and Statistics. If you are one of those who felt this way and wish to improve your math, I can provide you a step-by-step approach which broadly follows the sequence below:
1. Plan study time for these topics
2. Use the uploaded material
3. Study the identified topics and answer the questions provided in the text
4. Check your answers with the answer key provided
5. Conquer your weakness and face the Six Sigma world more confidently
If a good number of people see value in such a sequence, I shall put in the extra effort to make the content and sequence available to you free of cost. I have written this post just to find out whether there are many people out there who really wish to use such content and approach. Reply to this post showing your interest so that I can see the count. Best Wishes, VK
  6. 1 point
    Thanks to all participants for making this a rich discussion. Almost all of you spelled out the correct definitions of continuous and attribute data and provided good examples. Some of you also pinpointed the areas of confusion when a metric mixes continuous and discrete data, e.g. percentages, practical usage of the data, large amounts of discrete data being used as continuous, the measurement gauge used, etc. Kavitha Sundar provided a comprehensive explanation of the data types with examples and inference. Congratulations Kavitha, and thanks everyone for your inputs.
  7. 1 point
    Q4 in Episode 2 - While continuous data is measured and attribute data is counted, there is sometimes confusion if some specific dataset should be considered continuous or attribute. Provide some examples of confusing datasets and your inference. This question is available for response till 10 PM IST on 6th October 2017. Only registered Excellence Ambassadors can respond. To know how to register, please visit the dictionary page.
  8. 1 point
    There are several types of process maps, varying in their objectives and level of detail. SIPOC, swimlane and value stream maps are some of them. If you had to suggest a sequential series for process mapping in an organization with increasing level of detail, what would your suggestion be?
  9. 1 point
    There are various types of process mapping, but we can categorise them into five main groups:
1. SIPOC
2. High-Level Process Map / Flow Chart
3. Detailed Process Map
4. Swim Lane Map
5. Value Stream Map

SIPOC: SIPOC stands for Supplier – Inputs – Process – Outputs – Customer. The required inputs (and their providers) are listed to the left, and the key process outputs (and their recipients) are listed to the right. The SIPOC provides a focus for discussion of what the process is all about. With a SIPOC we can see who supplies the process, what the output of the process is, and what the requirements of the customer are. It is recommended to have a SIPOC for every project because they are helpful when discussing the process with others and simple to make.

High-Level Process Map / High-Level Flow Chart: It provides an overview of the processes and objectives that drive an organisation. The purpose is to provide quick and easy insight into what the process does, without getting into the details of how it is done.

Detailed Process Map / Detailed Flow Chart: While studying the high-level process map, if we want more detail on a particular process, we may need to make a detailed process map for that process.

Swim Lane Map: Swim lanes are a technique used in process mapping to simplify the work procedure. The process is divided into several swim lanes, represented by the different people who will perform that job. Detailed process maps are often prepared in the swim lane format, because there are often multiple detailed process maps and keeping track of who is supposed to do what can get confusing; swim lanes help to simplify them.

Value Stream Map: VSMs are typically used in Lean applications. They are rich with information that is useful when planning process improvements. Value Stream Maps are sometimes called Material and Information Flow Diagrams. With a value stream map we can see how material moves from one process to another and how information flows. We can also see WIP and its level, and gather relevant process details such as cycle time, changeover time, etc. The wait time for information or product can also be read from a value stream map. They require more skill to build than simpler process maps, but they provide great information.

Below is a summary of the various process maps.

Process Map             | When it is used
SIPOC                   | To get an overview of the inputs/outputs and the customers' requirements
High-Level Process Map  | Shows how the process works
Detailed Process Map    | To get a deep understanding of the process
Swim Lane Map           | Shows which department is involved, and with what intensity, in the process
Value Stream Map        | The ultimate process map, giving all the relevant detail about the process

For me, value stream mapping is the best template for process mapping. For an organisation which is new to these tools, or for an organisation which I am not aware of, I will follow the below sequence of process mapping.
  10. 1 point
    Question 4 in Episode 2: While continuous data is measured and attribute data is counted, there is sometimes confusion whether a specific dataset should be considered continuous or attribute. Provide some examples of confusing datasets and your inference.

Data is defined as a collection of values / useful information that is required by the recipient for any analysis. Data is generally used to prove or disprove a hypothesis. In statistics, data is of two types: qualitative or quantitative. Qualitative data is descriptive data, which can be categorised into subgroups for analysis; quantitative data is numerical, which means it is either measurable or countable. Quantitative data is again divided into two types: continuous and discrete.

For example:
- Charlie Chaplin is fair, short, has a small moustache, a thin build and wears a black jacket – qualitative data.
- Charlie Chaplin has one hat, one walking stick and 2 legs – quantitative, discrete data.
- Charlie Chaplin, aged 45 years, weighs 57.2 kg and is 4.8 feet tall – quantitative, continuous data.

The four measurement scales: nominal, ordinal, interval and ratio.
- Nominal data: assigns a numerical value as a label to an object / animal / person / any non-numerical data.
- Ordinal data: any data which can be ordered and ranked. The intervals between ranks cannot be measured. E.g. a horse is numbered on the race course – nominal data; the winning horses are ordered and ranked as 1st, 2nd and 3rd place – ordinal data. Another good example is a student's progress report.
- Interval: a numeric scale where we know the order as well as the differences between values, but there is no true origin (zero). E.g. the temperature of a room is considered normal if it is between 25 and 28 degrees C. Time of day is another good example of an interval scale, in which the increments are known, consistent and measurable.
- Ratio: ratio scales tell us about the order, they tell us the exact value between units, AND they have an absolute zero, which allows a wide range of both descriptive and inferential statistics to be applied. Everything above about interval data applies to ratio scales, plus ratio scales have a clear definition of zero. Good examples of ratio variables are height and weight.

Qualitative data is otherwise called categorical data. Quantitative data is divided into continuous and discrete data.

Difference between continuous and discrete data:

Continuous data                                                | Discrete data
Measurable on a scale                                          | Countable
The data falls within a finite or infinite range               | The data takes only finite (whole) numbers
Can be broken into subcategories                               | Cannot be broken down, since it is a whole number
The frequency is depicted in a histogram, where skewness shows | The values are distinct, so they are shown in a bar diagram; skewness cannot be seen
Values can fall anywhere within the range                      | The values are individual values
E.g. temperature, height, weight, age, time, cycle time        | E.g. number of computers, students, books, certificates, errors, etc.

Confusion between continuous and discrete data:

Example 1:

Person   | Age | Weight (kg) | Height (feet) | Colour
Ajay     | 34  | 51          | 5.1           | Wheatish
Sharma   | 35  | 65.5        | 5.2           | Fair
Roshini  | 23  | 45.5        | 4.8           | Wheatish
Gaithri  | 53  | 72.5        | 4.8           | Dark
Linda    | 43  | 46.5        | 5.1           | Fair
Tanya    | 36  | 43          | 5.3           | Wheatish
Balu     | 27  | 56          | 5.6           | Fair
Vignesh  | 32  | 77          | 6.1           | Dark
Aarav    | 43  | 76          | 5.9           | Wheatish
Rithesh  | 45  | 64          | 5.3           | Dark

Qualitative / categorical data: categorising the 10 people in the group into wheatish, dark and fair based on colour represents categorical data.

Continuous data: the age, height and weight of the people in the table above are good examples of continuous data, where the values can fall anywhere within a range.

Discrete data: number of wheatish – 4; number of fair – 3; number of dark – 3; total number of people – 10.

Conclusion of Example 1: Age is a continuous numerical variable. Although the recorded ages have been truncated to whole numbers, the concept of age is continuous. The number of people of a given colour is a discrete numerical variable (a count). Age can be rounded down to a whole number, in which case the recorded value looks discrete, but it is actually continuous data because it lies within a range; age is not a constant, though the date of birth is. Depending on the context – say, filling in a form where the exact age is required – "12 years, 153 days" really means a continuous age that is between 12Y152.5D and 12Y153.5D.

Example 2: Income is another example of continuous data.

Example 3: In practice, percentage data is often treated as continuous, because the percentage can take on any value along the continuum from zero to 100%, and dividing a percentage point into two or more parts still makes sense. Discrete data is easy to collect and interpret. A percentage can be considered continuous, but it depends on the underlying metric. If I have to track the error percentage, the metric is:

Error % = Number of errors (discrete) / Total charts audited (discrete)

Hence Error % is discrete. Another example: if I have to track the availability of a machine, the formula is:

Availability % = Total hours available (continuous) / Expected hours of production, e.g. 8 hours (continuous)

Hence Availability % is continuous, since time is continuous.

Conclusion: it depends. In certain situations, discrete data may take on characteristics of continuous data. If the counts are large, the distribution of values is relatively wide and the values are spread across the range, you can "pretend" the data is continuous and use the appropriate tools.

Thanks
Kavitha
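The Example 3 contrast above (Error % built from counts vs. Availability % built from times) can be made concrete in a short sketch. The figures are invented for illustration; the point is only that the nature of the inputs drives how the percentage should be treated:

```python
# Illustrative sketch of Example 3: two percentage metrics, one built
# from discrete counts and one from continuous times.
# All numbers are invented for illustration.

# Error %: both numerator and denominator are counts (discrete data).
errors = 4              # number of errors found (count)
charts_audited = 80     # total charts audited (count)
error_pct = 100 * errors / charts_audited            # 5.0

# Availability %: both quantities are times (continuous data).
hours_available = 7.5   # total hours the machine was available
hours_expected = 8.0    # expected hours of production
availability_pct = 100 * hours_available / hours_expected   # 93.75

print(f"Error % = {error_pct}% (from discrete counts)")
print(f"Availability % = {availability_pct}% (from continuous times)")
```

Error % can only jump in steps of 1/80 (one more or one fewer error), while Availability % can take any value in between, which is the distinction the post draws.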
  11. 1 point
    The reason why the humble flow chart evolved into the powerful process map lies in the analogy between the process map and the geographical map. Just as a location on a map is referenced by its latitude and longitude, a process step in a process map is referenced by a combination of (say) the person or team doing that step and the stage of the process in which that step occurs. The references could also be different – for example, a timeline could be one of the references. These references, or the facility to reference a process step, constitute the life of a process map. Now that this facility to reference is here to stay, swim lanes, be they horizontal, vertical or both, are also an inseparable part of the process map, no matter where the map sits in the sequence of detailing. Swim lanes make the process map easier to read and use. Therefore, it would be advisable to create and maintain one full set of swim-lane process maps from the L0 to L5 levels. In the ITeS sector, and in a typical BPO scenario, I would use the following sequence of increasing detail.

Level | Description                 | Details
L0    | Entity Level                | Customer, supplier, other external parties
L1    | Sub-entity Level            | Different departments of the customer and supplier, other external parties
L2    | Process / Sub-process Level | Interactions of different processes or sub-processes, with hand-ins and hand-outs
L3    | Activity Level              | Activities done by different stakeholders at different stages of the process
L4    | Task / Sub-task Level       | The various tasks or sub-tasks that constitute activities
L5    | Field / Key-stroke Level    | Absolute detail of every field touched or every key struck

This set of process maps for every process is valuable as a training tool, as a real-time guide or SOP, and as a trigger to identify improvement opportunities. To augment the above, I would also use an enhanced SIPOC that contains, apart from the usual Suppliers, Inputs, Process, Outputs and Customers, related information such as process step times, who does which step, the team size and distribution across shifts, the average transaction volumes, the qualifications of staff for the process, the training required, and so on. Other maps can be used to explain a specific perspective or to support a specific initiative. A turtle diagram, or alternatively a relationship map, can be used to understand interlinks and dependencies at a glance. A value stream map could be used to identify opportunities for leaning out a process by crashing lead time. Overall, a simple, situation-based approach to selecting process map types would help in the optimal utilisation of this wonderful tool.
  12. 1 point
    Dear Participants, firstly, it's a pleasure to learn from so many industry experts under the BSS roof about their varied experience and diverse views on an interesting topic. Kudos to everyone for sharing such enlightening aspects. I must mention some brilliant comments regarding both checksheets and BPM interventions. Mohan draws a wonderful analogy between preparing spices at home and outsourcing the same. Rajesh Chakrabarty lists the advantages and disadvantages of digitization in the simplest way. Sandhya brings out the difference between a checksheet and a checklist adeptly. Additionally, there were some great arguments by Sabyasachi, Kiran and Arunesh. The argument I found most intriguing and most structured, both for and against checksheets, is by R Rajesh. He lists the what and how of a checksheet and the future state, and then logically defends his argument in his conclusion. The post is here. Thank you all for sharing your thoughts. Best wishes, Tina Arora
  13. 1 point
    To start with, it has been proven that the pull system works well in many scenarios. It also leads to savings, less inventory, etc. However, it cannot be implemented everywhere. In today's world the customer expects to be served immediately – the one who is able to satisfy the demand at that moment is the one who gets the business, and in turn the money. Take the case of a normal person who needs some medicine. Can the manufacturer then make and ship one set of pills only for this customer – is this feasible or doable? Currently we have seen outbreaks of influenza, dengue, etc. In such cases the demand exists, is known, can to a large extent be forecast with some degree of accuracy, and the end product is needed at a particular point in time. It would not help if the end user had to wait for the product to be manufactured and delivered. Take the case of vegetables or fruits being produced – here too, the demand is to some extent forecastable and the products have a longer lead time – they cannot be produced using the pull methodology. Take another case, diamonds – these are generally not produced for one person at a time; they are mined, cut and kept ready in the hope of finding a buyer. Overall, the thought is that where the demand for the product is to some extent forecastable, where the lead time is high and where the demand needs to be fulfilled immediately, a pull system may not work. The supporting ecosystem (e.g. the supermarket which supplies the vegetables and fruits, or the drugstore which sells the drug) may use a pull system, but the product will be manufactured or grown and kept ready for sale even before the customer has demanded it.
  14. 1 point
    The First Jidoka The automatic loom, invented by Sakichi Toyoda, the founder of Toyota, in the year 1902, can be considered as the first Jidoka example. In this innovation, if threads ran out or broke, the loom process was stopped automatically and immediately. In the early days of assembly line mass production, work cycles were watched over by a human operators. As competition increased, Toyota brought about a significant change in this process by automating machine cycles so that human operators were free to perform other tasks. The Toyota Production System has many tools for efficient products and services. Developed over the years, these tools aim at reducing human effort and automating machines to increase productivity. Jidoka is one such tool without which efficient manufacturing would practically be impossible, as of today. The article below explains all about the Jidoka process. The Concept of Autonomation To begin with, understand that autonomation and automation are different from each other. According to the definition of autonomation, it is a 'self-working' or 'self-controlled' process. It is a feature that contributes to the Jidoka process. Automation is the process where the work is still being watched by an operator, where errors may still be apparent, and detection and correction take a longer period. Autonomation resolves two main points. Firstly, it reduces human interference, and secondly, it prevents processes from making errors. This has been enlisted below. PRODUCT DEFECT Ordinarily, when a defect occurs, a worker detects it and later reports the problem. Autonomation enables the machine to stop the cycle when a defective piece is encountered. PROCESS MALFUNCTION If all the processed parts or components are not picked up at the end of the cycle, the machine might face problems, and the process might halt, and it would take a while before the worker realizes that the process has been interrupted because of a minor error. 
In the case of autonomation, if the previous piece has not been picked up at ejection, the machine gives a signal or stops the cycle altogether.

An Introduction to Jidoka: The Evolution towards Jidoka
Jidoka can be simply defined as 'humanized automation'; autonomation is another term for it, used in different contexts. It is mainly used to detect defects and immediately stop the production or manufacturing process, fix the defect, and find solutions so that the defect or error does not occur again. The concept, as mentioned before, was invented by Sakichi Toyoda. Its purpose is to reduce reliance on error-prone human judgment through automatic error detection and correction. It was developed to eradicate the time wasted on human observation of the process, transportation, inventory, correction of defects, etc. With Jidoka, production lines have become significantly more efficient, and the wastage of goods and inventory has been reduced too.

Other Toyota Tools and Terms
Keep in mind that Andon, Poka-yoke, Just-in-Time, etc., are all tools developed at Toyota. Jidoka is one of these tools, and it encompasses some of the others as well, such as Andon and Poka-yoke. Jidoka was developed to minimize errors that may be caused by relying on human observation. Remember that Andon is not an example of Jidoka but an important supporting tool: it displays the current state of work, i.e., whether the process is smooth, whether it has a malfunction, whether there are product glitches, and so on. The relationship between Andon and Jidoka is explained further in the article. Similar to Jidoka, Just-in-Time is another important tool and one of the crucial pillars of TPS. It adheres to what product is required, when it is required, and how much is required. 'Takt time' is an important related principle: it refers to the time that should be taken to manufacture a product on one machine.

Line Stop
'Line stop' is a term applied to the Jidoka process in automotive manufacturing plants.
It is called so because it interrupts and halts the entire line (process) when a defect is found.

The Elements of Jidoka
GENCHI GENBUTSU
This is one of the important elements of Jidoka. The basic principle of Genchi Genbutsu is to go and actually see the problem; it entails going to the root source of the problem. This is an important step in the Jidoka process: finding out why the defect occurred in the first place.

ANDON
As stated in the previous section, Andon is a visual representation of the current process. It indicates whether the process is running as per norms or whether there is a potential flaw, and it gives out electronic signals accordingly. If the signal is negative, workers understand that there is a problem in the process. The machine stops immediately, and the workers can hold production until the flaw in the process is fixed.

STANDARDIZATION
The main aim of Jidoka is to increase production quality, and this is what standardization addresses. It involves developing strategies that adhere to perfection and quality. When a flaw is discovered, it is not only fixed; efforts are also undertaken to ensure that it does not occur again, and the quality and standard of the product are maximized.

POKA-YOKE
The concept is also called mistake-proofing or error-proofing; poka-yoke devices are designed to avoid mistakes that could occur during production.

The Principles: The Jidoka Process
As seen in the first figure above, without Jidoka, the defective piece continues to be produced and ejected. Only after ejection may the worker realize that the product is defective and then stop the process. In the second figure, with Jidoka, the Andon light glows to indicate that the product is defective; the process is halted immediately, and the necessary steps are taken.

DETECT
This involves detecting the problem. The machine is fitted with the right components so that an abnormality is immediately identified.
For this step, machines may be fitted with sensors, electrical cords, push buttons, or electronic devices, or may be fed with proper instructions to identify whether a product is defective.

STOP
Once a defect has been spotted, the machine stops immediately. The machine is designed to stop on its own; no staff member or worker needs to physically stop it. The fact that a defect has been detected is indicated through signals, after which the staff can rush to the site to find out why the process has been halted.

FIX
When the machine stops, the production line needs to be stopped as well. You might wonder why the entire line needs to be halted for one or more defective pieces. This is done because other defective parts or components are likely to have been manufactured along with the detected one. To avoid this overproduction and wastage of material and equipment, the production line is halted. After this, steps are undertaken to fix the problem. Sometimes it is a minor glitch, while at other times there may be a major problem. Once the error is fixed, production resumes.

INVESTIGATE
The last and rather vital step of Jidoka is to investigate the source of the problem. You have to find answers to questions such as: Why did the defect occur? What kind of defect is it? How can it be fixed? What can be done to prevent it? Root-cause analysis tools are widely used to get to the bottom of the problem. Through this process, efforts are made to find the best solution for the defect and to prevent it from occurring in the first place. As more investigation and research is carried out, better methods of manufacturing are discovered, better problem-solving techniques are invented, and product quality increases.

Examples
Jidoka is mainly used in the manufacturing and automotive industries; however, it can be demonstrated in simple products used in daily life as well.
For example, if your kitchen cabinet is fitted with a dustbin, you will notice that when you open the cabinet door, the lid of the dustbin lifts automatically: a string lifts the lid the moment the door opens. Consider a printing press: if a sheet is missing in the machine, a sheet detector raises the print cylinder. This is Jidoka at work. In the manufacturing industry, a sensor is used to check that components are in alignment; even if a small part is out of alignment, the machine is stopped. Some high-quality machines use a recall procedure: sometimes, despite the best countermeasures, some products in the production line slip through the machine cycle undetected, so the recall procedure checks every single product once again before final ejection. Light curtains are used in automatic feed machines; they have a presence sensor that stops the machine if a component is broken or defective.

Benefits of Jidoka
It helps detect problems as early as possible.
It increases product quality through proper enhancement and standardization.
It integrates machine power with human intelligence to produce error-free goods.
It helps in proper utilization of labor: since the process is automated, workers can spend their time on more value-added activities.
There is less scope for errors in production, which substantially increases productivity and lowers costs.
Improved customer satisfaction is an important advantage as well: good products are manufactured in less time.

Jidoka is one of the strong pillars of TPS (the Toyota Production System). It helps prevent defects in the manufacturing process, identifies defect areas, and devises solutions so that the problem is corrected and the same defect does not occur again. Jidoka helps build in 'quality' and has significantly improved the manufacturing process.
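The detect-stop-fix-investigate cycle described above can be sketched as a small program. This is a hedged illustration only: the part dimensions, tolerance, and function names are assumptions invented for the example, not part of any real Toyota system.

```python
def is_defective(part: dict, tolerance: float = 0.05) -> bool:
    """DETECT: flag a part whose measured dimension drifts beyond tolerance."""
    return abs(part["measured"] - part["nominal"]) > tolerance

def run_line(parts):
    """Process parts until a defect stops the line (STOP), then report it
    so it can be fixed and its root cause investigated (FIX / INVESTIGATE)."""
    good = []
    for i, part in enumerate(parts):
        if is_defective(part):
            # STOP: the line halts by itself; no operator has to notice first.
            return good, {"stopped_at": i, "part": part,
                          "action": "fix defect, then investigate root cause"}
        good.append(part)
    return good, None   # whole batch processed with no defect

# Illustrative batch: the third part is out of tolerance.
parts = [{"nominal": 10.0, "measured": 10.01},
         {"nominal": 10.0, "measured": 9.99},
         {"nominal": 10.0, "measured": 10.20},
         {"nominal": 10.0, "measured": 10.00}]
good, andon = run_line(parts)
print(len(good))            # 2 good parts before the line stopped
print(andon["stopped_at"])  # 2 (the index of the defective part)
```

The point of the sketch is the early return: downstream processing of the remaining parts never happens once a defect is seen, mirroring the line stop.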
Difference between Autonomation and Automation (in summary):

If a malfunction occurs:
- Autonomation (Jidoka): the machine detects the malfunction and stops itself.
- Automation: the machine continues operating until someone turns off a switch.

Production of defects:
- Autonomation: no defective parts are produced.
- Automation: if defects occur, their detection is delayed.

Breakdown of machines:
- Autonomation: breakdowns of machines, molds, and/or jigs can be prevented.
- Automation: breakdowns of machines, molds, and/or jigs may result.

Malfunction detection:
- Autonomation: easy to locate the cause of any malfunction and implement measures to prevent recurrence.
- Automation: difficult to locate the cause of malfunctions at an early stage and difficult to implement measures to prevent recurrence.

thanks,
Kavitha
  15. 1 point
    Automation vs. Autonomation

Definition:
- Automation: technology by which a process or procedure is performed without human assistance.
- Autonomation: intelligent automation, or automation with human (supervisory) assistance; it is a process of detecting automation errors.

Aims:
- Automation: (1) cost savings; (2) improved quality (accuracy and precision).
- Autonomation: (1) detect product defects or process malfunctions; (2) stop the process; (3) fix or correct the immediate condition; (4) investigate the root cause and fix it before restarting.

Example: producing a sheet-metal part (a multistage operation):
1. Cut the part to the right size.
2. Pick and place the part in the next station.
3. Bend it to the right dimensions.
4. Transfer the finished part to the conveyor for assembly into the final product.
If, before step 4, a camera is fitted to check the critical dimensions, give error feedback to the operator, and stop production so the problem can be root-caused and fixed, that is autonomation.

Summary: Autonomation (Jidoka) helps in the following ways:
- It improves the speed of detecting defects.
- It reduces costs by reducing damage to work-in-progress and equipment, and by preventing further processing of flawed work-in-progress.
- It improves operator morale, particularly if the operator is trained to resolve problems (rather than simply calling for a technician).
- It may reduce direct labor costs by permitting one worker to "supervise" several machines.
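The contrast in the sheet-metal example above can be made concrete with a short simulation: a plain automated line keeps running after the bending tool drifts, while an autonomated line stops at the first out-of-spec bend. The angles, target, and tolerance are illustrative assumptions, not real process values.

```python
def run(bend_angles, target=90.0, tol=1.0, autonomation=True):
    """Return the parts shipped downstream. With autonomation the line
    stops at the first defect; without it, defective parts keep flowing."""
    shipped = []
    for angle in bend_angles:
        defective = abs(angle - target) > tol
        if defective and autonomation:
            break                      # camera check stops the line here
        shipped.append((angle, defective))
    return shipped

# The bending tool drifts after the 3rd part: later bends are out of spec.
angles = [90.1, 89.8, 90.0, 93.5, 94.0, 94.2]
plain = run(angles, autonomation=False)
jidoka = run(angles, autonomation=True)
print(sum(d for _, d in plain))   # 3 defective parts shipped downstream
print(sum(d for _, d in jidoka))  # 0 defective parts shipped
```

This is the "no defective parts will be produced" row of the comparison above: the cost of a fault is bounded by when it is detected.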
  16. 1 point
    Hypothesis testing is an essential procedure in statistics. It evaluates two mutually exclusive statements about a population to determine which statement is best supported by the data, i.e., which statement is statistically significant. But why do we need hypothesis testing? Because we draw conclusions about a population based on sample data, and hypothesis tests help assess the likelihood that the sample data are representative of the population. This in itself makes hypothesis testing very significant, as it lets us assess population parameters using sample statistics.

Hypothesis tests can be used across the Define, Measure, Analyze, and Improve phases of an improvement project:

Define phase: to test whether the target set is significantly different from the baseline performance.
Measure phase: to understand the likelihood that a data sample comes from a population that follows a given probability distribution (normal, exponential, uniform, etc.).
Analyze phase: for screening potential causes, i.e., evaluating several process factors (process inputs, or x's) in a designed experiment to understand which factors are significant to a given output, and which are not.
Improve phase: evaluating a proposed process improvement, using pilot-study output, to see if its effect is statistically significant, or if the same improvement could have occurred by random chance.

Thanks
Jisha Nair
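The Improve-phase use above — checking whether a pilot improvement's effect is statistically significant or could have arisen by chance — can be illustrated with a small test. This is a hedged sketch: the cycle-time data are made up, the 0.05 cut-off is the conventional choice, and a permutation test on the difference of means stands in for the usual t-test so the example needs no external libraries.

```python
import random

def permutation_p_value(before, after, n_perm=5000, seed=42):
    """Two-sided permutation test on the difference of group means.

    Under the null hypothesis (no improvement), group labels are
    exchangeable, so we reshuffle them and count how often a difference
    at least as large as the observed one appears by chance.
    """
    rng = random.Random(seed)
    observed = abs(sum(after) / len(after) - sum(before) / len(before))
    pooled = list(before) + list(after)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:len(before)], pooled[len(before):]
        if abs(sum(b) / len(b) - sum(a) / len(a)) >= observed:
            extreme += 1
    return extreme / n_perm

# Cycle times (minutes) before and after a pilot improvement (made-up data).
before = [12.1, 11.8, 12.4, 12.0, 12.3, 11.9, 12.2, 12.5]
after  = [11.2, 11.0, 11.5, 11.1, 11.4, 10.9, 11.3, 11.2]
p = permutation_p_value(before, after)
print(p < 0.05)   # True: the improvement is unlikely to be random chance
```

With a larger p-value we would conclude the pilot's apparent gain could plausibly be noise, which is exactly the question the Improve phase asks.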
  17. 1 point
    Poka-yoke, or mistake proofing, is about using process or design features and control mechanisms to prevent defects, detect them if they are not preventable, or reduce their severity. The main motive is to PREVENT a defect from occurring and, if that is not possible, to DETECT the defect every time it occurs. It is critical to prevent and detect errors/defects as early in the process as possible, because the later they are found the more expensive they become, i.e., the costs associated with them increase: more materials, labour, overhead, and time. While implementing poka-yoke designs, care should also be taken that the implementation does not aggravate other issues or open new ones that may cause defects.

Poka-yoke has varying degrees of effectiveness (control vs. warning poka-yoke). One must balance getting the most effective poka-yoke against the practical and economic feasibility of the solution. I feel all the interpretations provided in the question are correct and reflect these varying degrees of effectiveness.

1. The human error will not happen at all.
Example: the rectangular design of the 3.5" floppy disc, so that the wrong side cannot be inserted. The SIM card slot in cell phones is designed in such a way that the user can insert the SIM card the correct way only. There is no chance for the user to make a mistake while putting a SIM card into a cell phone or a floppy into the drive.

2. Human error may continue to happen, but the defect will not happen.
Example: a validation check when creating a new password, requiring a combination of upper-case, lower-case, numeric, and special characters to ensure a strong password. The system does not accept a password unless it fulfils the criteria. Double-entry box: on most websites and software where one needs to enter a critical bank account number or create a password, users are asked to enter the same value twice (with the paste option disabled).
This is to ensure people haven't made a mistake while entering the value, and that both boxes hold the same value.

3. Human error may happen, but the defect is less likely to happen.
Example: some email software pops up an error message like "There is no attachment; do you want to send it anyway?" if it finds key words such as "find attached" (or variants) but sees no attachment when the user tries to send the email. Some email software pops up a message if the subject is missing when the user tries to send the message. The car seat-belt warning indicator beeps to warn the user who drives without putting on the seat belt.

4. Human error may happen, and the defect will also happen, but it will be detected and corrected automatically.
Example: Microsoft Word and Google search automatically correct typographical spelling errors. Auto-logout functionality on websites (especially banks): when the user forgets to log out before closing the website and reopens it, he has to provide his credentials and log back in.
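The two software poka-yokes in example 2 above can be sketched in a few lines. This is a minimal illustration with assumed rules (minimum length 8, one character from each class); real password policies vary.

```python
import string

def is_strong(password: str) -> bool:
    """Strength poka-yoke: refuse the password unless it mixes
    upper case, lower case, a digit, and a special character."""
    return (len(password) >= 8
            and any(c.isupper() for c in password)
            and any(c.islower() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))

def accept_new_password(entry1: str, entry2: str) -> bool:
    """Double-entry box: both entries must match AND pass the strength
    check, so neither a typing slip nor a weak choice gets through."""
    return entry1 == entry2 and is_strong(entry1)

print(accept_new_password("Secur3!pw", "Secur3!pw"))  # True
print(accept_new_password("Secur3!pw", "Secur3!pq"))  # False: typo caught
print(accept_new_password("password", "password"))    # False: too weak
```

Both checks are "control" poka-yokes in the terms above: the system refuses to proceed rather than merely warning.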
  18. 1 point
    Background and Concept: false alarms and missed alerts are best understood through the two types of errors possible in statistical hypothesis testing, and dealing with them with reference to tests of hypotheses provides more insight than otherwise. Any hypothesis test begins with the assumption that the null hypothesis is correct. The null hypothesis is the default position and corresponds to the idea that "one is innocent until proven guilty".

False alarms, Type 1 errors, or false positives (α): these happen when we reject a true null hypothesis.
Missed alerts, Type 2 errors, or false negatives (β): these happen when we accept (fail to reject) a false null hypothesis.

Which error would you prefer over the other? The answer depends on the problem and on the worst that could happen if either a Type 1 or a Type 2 error were committed.

Example 1: a person accused of murder, awaiting a death sentence.
Null hypothesis: the person did not commit the murder.
Type 1 error: the person did not commit the murder but is pronounced guilty (a true null hypothesis is rejected).
Type 2 error: the person committed the murder but is pronounced not guilty (a false null hypothesis is accepted).
In this example, although a Type 2 error is not favorable to society, hanging an innocent person is far worse. So a Type 2 error, the missed alert, is preferable.

Example 2: a person being screened for a disease to decide whether to prescribe further tests.
Null hypothesis: the person does not have the disease.
Type 1 error: the person does not have the disease but is recommended for further tests (a true null hypothesis is rejected).
Type 2 error: the person has the disease but is not recommended for further tests (a false null hypothesis is accepted).
In this example, a Type 1 error might cause the patient to undergo further tests that finally reveal he does not have the disease. A Type 2 error would prevent a legitimate patient from undergoing further tests.
But a legitimate patient can redo the test if the symptoms persist, and it is acceptable for a person to undergo some further tests even if he does not have the disease. So a Type 1 error, the false alarm, is preferable.

Example 3: a person being screened for a disease (with which survival rates and quality of life are good) to decide whether to prescribe a delicate, specialised surgery that has a poor success rate.
Null hypothesis: the person does not have the disease.
Type 1 error: the person does not have the disease but is recommended for surgery (a true null hypothesis is rejected).
Type 2 error: the person has the disease but is not recommended for surgery (a false null hypothesis is accepted).
In this example, a Type 2 error might cause a legitimate patient to miss the surgery, which is bad, but it is much worse to have a person without the disease undergo the delicate, critical surgery. The legitimate patient may redo the tests if he still feels the symptoms and may be re-diagnosed for surgery. In this case, a Type 2 error, the missed alert, is preferable.
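The α/β trade-off in the screening examples above can be made concrete with a small Monte Carlo sketch. The distributions and cut-off are illustrative assumptions: a healthy patient's test marker is drawn from N(0, 1), a diseased patient's from N(1.5, 1), and we "reject the null" (flag the patient) when the marker exceeds 1.645, the cut-off that sets α near 5%.

```python
import random

random.seed(0)
N = 20_000
cutoff = 1.645   # chosen so that alpha is close to 0.05

healthy = [random.gauss(0.0, 1.0) for _ in range(N)]   # null hypothesis true
diseased = [random.gauss(1.5, 1.0) for _ in range(N)]  # null hypothesis false

alpha = sum(x > cutoff for x in healthy) / N    # false alarm rate (Type 1)
beta = sum(x <= cutoff for x in diseased) / N   # missed alert rate (Type 2)

# Theory predicts alpha near 0.05 and beta near 0.56 for these inputs:
# a strict cutoff keeps false alarms rare at the price of many missed alerts.
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
```

Lowering the cutoff raises α and lowers β, and vice versa, which is exactly why the preferred error type depends on the consequences, as the three examples argue.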
  19. 1 point
    There is no single body designated to provide Six Sigma certification to the quality profession. Almost every one of the dozens of companies providing Six Sigma training and consulting also provides certification. Why is this? Because individuals and companies spend a great deal of money, sometimes in excess of $30,000 per individual, to become trained, and they feel they should have something to show for it. Hence, certification became a popular add-on service for consulting companies because it allowed them to differentiate between skill levels, as well as charge additional fees.

The reasons for certification are the same as for any other certification:
- to display proficiency in the subject matter
- to increase desirability to employers
- to potentially increase your salary

Ultimately, certification is a professional decision that can only be made by you. In some cases, it will be required for you to advance within an organization. For instance, at some companies, every salaried employee must be Green Belt trained and certified in order to be promoted. In other cases, Six Sigma certification will display your energy and intent to be a leader within the quality profession.
  20. 1 point
    Hi All, please find below a comparison on the topic. Hope I can connect with the lot here.

INQUIRY: Business Excellence - What am I supposed to do? | Process Excellence - How am I supposed to do it? | Operational Excellence - When am I supposed to do it? | Personal Excellence - Who am I?
CRITERIA: Business - Vision | Process - Outcome | Operational - Output | Personal - Realization
FOCUS AREA: Business - Market competitiveness | Process - Continuous improvement | Operational - Quality service | Personal - Learning
RELATIONSHIP: Business - Transforming | Process - Reframing | Operational - Refining | Personal - Acting
ORDER: Business - You start with it | Process - You design it in | Operational - You execute it | Personal - You reinvent it each time
ABSENCE CAUSES: Business - Annihilation | Process - Variation | Operational - Waste | Personal - Insatiety
APPROACH: Business - Balanced Scorecard, etc. | Process - Value Stream Mapping, etc. | Operational - 7 QC tools, 7 management tools, 7 wastes | Personal - Selflessness and learning

Regards
Igniting Minds 95 (on behalf of Nagraj Bhat)
  21. 1 point
    When we talk about excellence, the first quote that comes to mind is "Excellence/perfection is not a destination; it is a continuous journey that never ends" by Brian Tracy. The word excellence derives from the Latin excellere (to surpass), so excellence can be defined as the condition or state that surpasses expectations and delights a user who is constantly on the lookout for improvement. The terms personal excellence, process excellence, operational excellence, and business excellence are interrelated, and it is not possible to achieve one in the absence of another. Being a strong believer in taking one step at a time, I would summarize the way excellence can be achieved in any discipline (personal/process/operational/business) as follows:
1. Self-examination and the desire to be better, raising the bar from the current state: winning the minds of self and other leaders.
2. Selecting the right goals and their duration (short, medium, and long term): ownership.
3. Evaluating strengths and weaknesses: key performance results.
4. Placing full focus on the areas that need improvement, covering what needs to be done, how it needs to be done, and by when: strategy.
5. Identifying solutions/improvement plans, verifying whether they will bring benefits (top line/bottom line), implementing the improvement plan, reviewing with stakeholders, and gathering feedback.
Steps 1-5 above need to be worked on constantly to remain relevant and create value.
  22. 1 point
    Here is a video which should be of interest to healthcare professionals: http://www.youtube.com/watch?v=dgjoDYeoRvU