Saravanan S.

Excellence Ambassador
  • Content Count

    8
  • Joined

  • Last visited

Community Reputation

0 Average

About Saravanan S.

  • Rank
    Newbie

Profile Information

  • Name
    Saravanan Shanmugam
  • Company
    Think42 Labs Pvt. Ltd.
  • Designation
    Product Manager

  1. Stop-The-Line Philosophy: A production system rests on two main pillars: JIT (Just In Time) and Jidoka (Stop-The-Line). While JIT is about the flow of work, Jidoka is about stopping the line. Jidoka means quality at the source, i.e., building quality into the processes of the assembly line rather than checking quality only after the goods are manufactured; in lean companies, however, Jidoka is used alongside inspection as a powerful combination of tools to prevent defects from reaching customers. Jidoka was pioneered by the Toyota Production System and has become one of the most important lean principles for achieving true excellence.

The line stop is a big step in the right direction toward maintaining product quality in lean companies. Every individual in a lean company has the authority to stop the line on finding an abnormality that, if left unchecked, could affect the quality of the manufactured goods. Workers are empowered not only to stop production when a defect is found but also to immediately call for whatever help is needed. In Jidoka, the line is not merely stopped; the approach is to identify the problem, fix it, and analyse the root cause so that the problem does not recur. The principle of Jidoka (line stop) can thus be broken into the following steps: (1) detect an abnormality; (2) stop the line; (3) fix the issue; (4) investigate and fix the root cause. A small illustrative sketch of these steps follows at the end of this post.

Jidoka is not followed in many Western companies because of the fear of losses from the line being stopped constantly, even for minor problems. Although Jidoka may seem to hurt productivity in the short term, line stops soon begin to reduce as line problems are fixed or eliminated, and productivity improves as root causes are addressed as and when they appear. The fear of a productivity dip naturally comes to mind when we think of stopping the line over a quality failure; the real failure, however, would be to not highlight the problem, apply a fix, and eliminate its root cause.

While implementing Jidoka, companies should not only give employees the power to stop the line when an abnormality is found, but also train everyone on the assembly line in appropriate problem-solving tools so that they can fix problems and eliminate their root causes. Proper documentation of the fixes and changes made on the line should also be maintained and communicated to everyone involved in production. In this way, line problems are identified and fixed then and there, root causes are analysed and removed in due course, and the documentation supports successful future runs of the production line.
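To make the four steps concrete, here is a minimal, purely illustrative Python sketch of a stop-the-line check. The function names, the 10% abnormality rate, and the log format are assumptions for demonstration only, not part of any real system.

```python
import random

def detect_abnormality(unit_id):
    """Step 1: stand-in defect check; flags a unit as abnormal 10% of the time."""
    return random.random() < 0.1  # illustrative probability only

def run_line(units):
    stop_log = []  # record of every stop-the-line (andon) event
    for unit_id in range(units):
        if detect_abnormality(unit_id):
            # Step 2: stop the line immediately instead of passing the defect on.
            print(f"Line stopped at unit {unit_id}: abnormality detected")
            # Steps 3 and 4: fix the issue and record it for root-cause follow-up.
            stop_log.append({"unit": unit_id,
                             "action": "fixed on the spot",
                             "root_cause": "to be investigated"})
        # Otherwise the unit flows on (the JIT side of the system).
    return stop_log

if __name__ == "__main__":
    log = run_line(50)
    print(f"Total line stops: {len(log)}")
```

The point of the sketch is simply that the stop, the fix, and the root-cause record happen at the station where the abnormality is detected, rather than at a downstream inspection.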
  2. Groupthink: Groupthink is a term coined by the social psychologist Irving Janis. When ideas raised in group discussions or brainstorming sessions are not challenged but accepted without any debate, this psychological phenomenon sets in. In many such situations, people feel it is better to go along with the opinion of the majority instead of contesting it. This leads to poor team performance and bad decision making on the issue at hand. In many cases, people fear that objecting to ideas floated by team members might create disharmony or friction, and that they may be alienated from the rest of the team. Groupthink is more common in highly cohesive groups and in groups with partial (biased) leadership; it suppresses individual opinions and creative thought, leading to poor decisions and ineffective problem solving.

Symptoms. Some symptoms by which the tendency toward groupthink can be identified:
1. Group cohesiveness is valued more than individual contribution or creativity.
2. There is no intention to debate any alternative approach.
3. Group leaders behave in a partial or biased way.
4. The group has faced a recent failure and is under great stress to succeed.
5. There is no standard method or metric in place to evaluate the ideas.

Prevention. There are various steps to prevent groupthink:
1. Encourage debate – the team leader should talk to the team members and make them understand how important it is for other members' ideas and opinions to be challenged, in order to stimulate creative thinking and arrive at better decisions. The leader should ensure that everyone in the group has offered their opinions and ideas, and that a real debate has taken place before a decision is made. If the leader realizes that there has not been enough debate on the ideas, the decision should be postponed and more research requested.
2. Devil's Advocate – during group discussions or brainstorming sessions, one person should act as devil's advocate, countering the ideas and opinions being discussed. This creates an environment for healthier debate and creative thought, which leads to better decisions.
3. Team leader ethics – ideally, the team leader should abstain from attending such discussions or brainstorming sessions; if attendance is necessary because of the gravity of the situation, the leader should at least refrain from sharing their own opinions with the team. The trouble with being a leader is that their opinions carry great weight, especially with timid team members who would not dare to dissent from the leader's views, even though those members might have a better idea than the leader's. In that way, we implicitly stop other team members from exploring some great ideas.
4. External perspective – the group's decision can be scrutinized by someone from outside the group who can still offer a useful view of the issue at hand; this could be a stakeholder from a different department, or a team member or group leader who has handled similar projects before. The logic is that such outsiders are not influenced by the group leader's ideas or other group dynamics, and hence will voice their own ideas, which might be better than any raised within the group.
5. Standard methodology to assess – after collecting all the ideas from the team, put them through a standard evaluation methodology to determine how each idea helps the team move toward a better solution for the issue at hand and toward the organizational goal, along with the steps and risks involved in executing it.

By following the above steps, groupthink can be avoided in group discussions.
  3. Nash Equilibrium: Decision making is always challenging, because the future outcome of any particular action is not fully predictable. Moreover, the action-outcome relationship keeps changing, which calls for a dynamic, adaptive approach to decision making that builds on previous choices. Against this background, Nash Equilibrium is significant as a concept of game theory: the optimal outcome of a game is one in which no player has any incentive to change their original strategy even after learning their opponent's strategy. Nash Equilibrium is a decision-making theorem within game theory which states that a player can achieve their desired outcome without changing their choice even after knowing the opponent's choice. In other words, each player's decision is optimal when the decisions of the other players are taken into account. Each player gets their desired outcome by sticking to their own choice, even after knowing the opponent's choice.

Real-time application: Consider two players playing a game with two strategies, A and B, to choose from. A player who chooses strategy A wins $1, and a player who chooses strategy B loses $1. If both players choose strategy A, they each gain $1. If player 2's choice is revealed to player 1, we can see that neither player's choice changes; both still choose strategy A. In summary, knowing the other player's choice does not change one's own choice. This situation is known as a Nash Equilibrium; a small sketch of the check appears below.
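Here is a minimal Python sketch that simply encodes the example's payoffs (+$1 for strategy A, -$1 for strategy B, regardless of the opponent's choice) and tests each strategy profile for equilibrium. It is an illustration of the unilateral-deviation check, not a general game-theory routine.

```python
from itertools import product

STRATEGIES = ["A", "B"]

def payoff(own, other):
    # Payoffs from the example above: A earns +1, B earns -1,
    # independent of the opponent's choice (a deliberately simple game).
    return 1 if own == "A" else -1

def is_nash_equilibrium(s1, s2):
    """A profile is a Nash equilibrium if neither player can gain by
    unilaterally switching to a different strategy."""
    p1_ok = all(payoff(s1, s2) >= payoff(alt, s2) for alt in STRATEGIES)
    p2_ok = all(payoff(s2, s1) >= payoff(alt, s1) for alt in STRATEGIES)
    return p1_ok and p2_ok

if __name__ == "__main__":
    for s1, s2 in product(STRATEGIES, repeat=2):
        label = "Nash equilibrium" if is_nash_equilibrium(s1, s2) else "not an equilibrium"
        print(f"({s1}, {s2}) -> {label}")
```

Running it shows that (A, A) is the only profile from which neither player wants to deviate, matching the conclusion of the example.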
  4. Gage R&R: Gage R&R, which stands for Gage Repeatability & Reproducibility, is a Measurement System Analysis (MSA) process for measuring how much variation in the measurement system comes from the measuring device and from the people taking the measurements. The most important objective of MSA is to assess the validity of the measurement system and to reduce the portion of the observed process variation that actually stems from the measurement system itself. The following steps are generally followed, in the given order, to study the errors within a measurement system: 1. Resolution; 2. Bias; 3. Linearity; 4. Stability; 5. Precision.

Resolution: in this step of Gage R&R, the goal is to have a minimum of five distinct readings for the MSA study. A lack of resolution prevents the measurement system from detecting defects accurately, or even close to accurately. This part of the MSA study is usually the easiest to fix, for instance by finding a testing device that can read the characteristic to the nearest required decimal. A sufficiently large sample size also helps ensure adequate resolution in the measuring system.

Bias and Linearity: technically, bias and linearity are part of calibration, i.e., accuracy. Bias tells us how the device deviates from the industry standard, and linearity tells us whether that bias is consistent across the range of values being measured. Ideally, these two are assessed together as part of a calibration study.

Stability: stability tells us how the accuracy and precision of the system change over time.

Precision: precision is the variation we see when the same part is measured repeatedly with the same device. It is common to report the P/T ratio, the ratio of the precision of the measurement system to the total tolerance of the manufacturing process. If the P/T ratio is low, the impact of measurement-system variation on product quality is small; if the P/T ratio is large, measurement-system variation is significant and intervention is needed. Precision has two components, repeatability and reproducibility (a small calculation sketch follows at the end of this post):
Repeatability --> variation arising from the measuring device itself; it is observed when the same operator measures the same part with the same device multiple times.
Reproducibility --> variation arising from different operators measuring the same part with the same device multiple times.
Alternatively, a single large study can cover both repeatability and reproducibility at the same time.

As the definitions above suggest, stability is a relatively larger and longer study. Since the stability step concerns how the accuracy and precision of the entire system change over time, it can be done as the last step in a very large measurement system. In that case, the order of the Gage R&R steps changes to: 1. Resolution; 2. Bias; 3. Linearity; 4. Precision; 5. Stability (Precision and Stability are interchangeable, depending on the size of the system).
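As an illustration of the precision components and the P/T ratio discussed above, here is a simplified Python sketch. The measurement values, the tolerance limits, and the pooled-variance shortcut are all assumptions for demonstration; a full Gage R&R study would normally use several parts and an ANOVA-based analysis.

```python
import statistics

# Hypothetical data: three operators each measure the same part three times.
measurements = {
    "operator_1": [10.01, 10.02, 10.00],
    "operator_2": [10.05, 10.04, 10.06],
    "operator_3": [10.02, 10.01, 10.03],
}
USL, LSL = 10.20, 9.80  # assumed tolerance limits for the characteristic

# Repeatability: pooled within-operator variance (same part, same device, same operator).
within_vars = [statistics.variance(vals) for vals in measurements.values()]
var_repeatability = sum(within_vars) / len(within_vars)

# Reproducibility: variance of the operator means (different operators, same part).
operator_means = [statistics.mean(vals) for vals in measurements.values()]
var_reproducibility = statistics.variance(operator_means)

# Combined measurement-system variation and the P/T ratio (6 sigma over the tolerance).
sigma_grr = (var_repeatability + var_reproducibility) ** 0.5
pt_ratio = 6 * sigma_grr / (USL - LSL)

print(f"Repeatability sigma:   {var_repeatability ** 0.5:.4f}")
print(f"Reproducibility sigma: {var_reproducibility ** 0.5:.4f}")
print(f"P/T ratio:             {pt_ratio:.2%}")
```

A low P/T ratio in this sketch would indicate that the measurement system consumes only a small share of the tolerance, in line with the interpretation given above.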
  5. Will Rogers Phenomenon: The Will Rogers phenomenon comes from the idea that moving an element from one set to another can increase the average of both sets. It is a statistical paradox, well known in epidemiology, named after the Oklahoma comedian Will Rogers, who reportedly joked that the Oklahomans who moved to California during the Great Depression raised the average intelligence in both states. Two conditions must both be fulfilled for the Will Rogers phenomenon to occur in a real scenario:
· The element being moved must be below average for its current set, so that removing it raises the average of the remaining elements of that set.
· The element being moved must be above the existing average of the set it is entering, so that adding it raises the average of the new set.
The element being moved does not have to be the very lowest of its set; it only has to have a value that lies between the arithmetic means of the two sets.

One example of the Will Rogers phenomenon is the following. Assume a city has two automobile branches with three salesmen each: A, B, and C in branch 1, and D, E, and F in branch 2. On average, salesman A sells one car per week, B sells two, C sells three, and so on up to F selling six cars per week. The average sales of branch 1 is 2 and that of branch 2 is 5, so branch 2 is far better than branch 1 on average sales. If we want to improve the averages of both branches, we can use the Will Rogers phenomenon and move salesman D to branch 1: this raises the average of branch 1 from 2 to 2.5 and also raises the average of branch 2 to 5.5 with only two salesmen left in it (a short verification of this arithmetic is sketched below). Although the actual volume of sales has not changed, the Will Rogers phenomenon can be used to boost company morale to start with, after which we can get down to work and actually increase the net worth of the company by taking up some real action items.
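The arithmetic of the branch example can be verified with a few lines of Python; the numbers are exactly those used above.

```python
# Weekly car sales per salesman, as in the example above.
branch_1 = {"A": 1, "B": 2, "C": 3}
branch_2 = {"D": 4, "E": 5, "F": 6}

def avg(branch):
    return sum(branch.values()) / len(branch)

print("Before move:", avg(branch_1), avg(branch_2))  # 2.0 and 5.0

# Move salesman D (4 cars/week) from branch 2 to branch 1.
branch_1["D"] = branch_2.pop("D")

print("After move: ", avg(branch_1), avg(branch_2))  # 2.5 and 5.5
```

Salesman D's value (4) lies between the two original means (2 and 5), which is exactly the condition under which moving him raises both averages.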
  6. The Bathtub Curve Analysis: (not able to copy/attach the bathtub curve from my office laptop) The bathtub curve is used to describe the likely failure rate of a product over its life, whether a manufactured product or a technological one. The concept holds good for almost every product that is new to the market and has been widely accepted by the reliability community over the years. It has three parts: a decreasing failure rate (infant mortality failures), a constant failure rate (useful lifetime), and an increasing failure rate (wear-out failures). A small sketch of the curve's shape appears at the end of this post.

During the initial period of any new product, the product faces many failures because of design flaws in the manufactured goods, bugs in the software, wrong positioning of the product, and so on. This phase of the product life cycle is backed by a strong support team working to make the product a success. Hence the failure rate, which is high when the system first comes into existence, ramps down over time; this is the infant mortality phase.

In the second phase of the bathtub curve, the failure rate is low because the product has achieved stability. This phase is called the useful lifetime of the product, during which the product experiences only chance failures. Products are designed to operate under certain external conditions and stress levels, and failures occur only when these limits are crossed, which is rare. End users know how to use the product, do not attempt to cross those limits, and try to extract the most usability from it. Hence this phase is aptly called the useful lifetime of the product.

The third phase of the bathtub curve is the increasing-failure phase, in which the product faces many failures due to deterioration of the machinery, wearing out of its parts, outdated technology, slowness, and so on. We either need to repair the product and increase support activity, or install new equipment, or move to newer technologies in the case of technological products. This phase is also called the wear-out failure phase.

For any product, the initial and final phases are very short compared with the middle phase, the useful lifetime of the machinery or technological product. In my opinion, the bathtub curve concept applies to all products, whether manufactured, technologically innovated, or written as software.
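Since the curve image could not be attached, here is a minimal Python sketch of its shape. One common way to produce a bathtub-shaped hazard is to add Weibull hazards with shape parameters below, equal to, and above 1 (decreasing, constant, and increasing failure rates respectively); all parameter values here are illustrative assumptions, not data about any real product.

```python
def weibull_hazard(t, shape, scale):
    """Weibull hazard rate: h(t) = (shape/scale) * (t/scale)**(shape - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def bathtub_hazard(t):
    # Illustrative parameters only: shape < 1 gives the decreasing infant-mortality
    # hazard, shape = 1 a constant useful-life hazard, shape > 1 an increasing
    # wear-out hazard. Their sum traces a bathtub shape over time.
    infant = weibull_hazard(t, shape=0.5, scale=100.0)
    useful = weibull_hazard(t, shape=1.0, scale=500.0)
    wearout = weibull_hazard(t, shape=5.0, scale=1000.0)
    return infant + useful + wearout

if __name__ == "__main__":
    for t in [1, 10, 100, 500, 900, 1200]:
        print(f"t = {t:5d}  hazard = {bathtub_hazard(t):.5f}")
```

Printing the hazard at a few time points shows it falling at first, staying roughly flat through mid-life, and rising again at the end, which is the bathtub pattern described above.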
  7. Process Maturity: The essence of any business is people working together to achieve a common goal. For the overall effort to succeed, decisions and actions must be coordinated among individuals and the various groups of the company. These actions must be consistent and coherent, and should yield satisfactory results at a reasonable cost. Such concerted action is what we call a process. In other words, a process can be defined as "a sequence of steps performed to achieve certain goals". It is a predetermined course of action and a standard plan for company staff to follow in order to achieve the set goals and objectives. Business leaders should constantly monitor the current state of their business and the processes being exercised in their companies, and keep improving those processes to stay on pace toward the company's objectives. By doing so, over a period of time, the company's processes mature while the set goals are being achieved. This is how process maturity is attained in a company.

Good maturity of the process: Highly competitive businesses around the world focus on their processes for meeting SLAs, improving quality, reducing cost, improving ETAs, and so on. Process maturity is an indication of how close a developing process is to being complete and capable of continual improvement through qualitative measures and feedback. For a process to achieve good maturity, it has to be complete in its usefulness, automated, reliable in its information, and continuously improving. It is also important to improve the entire gamut of business processes to achieve the desired competitive edge. All of the above together helps a process reach a good maturity level.

Significance of the process maturity assessment: Even for mature processes, assessing process maturity is very important, because executive management has to evaluate the company from a process point of view to conclude whether it is moving in the right direction or not. This self-evaluation is very much needed to help the industry leadership decide whether any additional quality methodologies should be implemented beyond what is already in place in the company. In that way, the process maturity assessment is significant, even though all existing processes should in any case be improved and redesigned periodically.
  8. VOC vs VOB: Voice of Customer (VOC) is the needs, wants, and preferences of the customers or clients, whether internal or external. Voice of Business (VOB), on the other hand, is the needs, wants, expectations, and preferences of the people who run the business. Metrics used for VOC include customer satisfaction scores, product ETAs, after-sales customer care, product technical support, etc., whereas metrics used to measure VOB include ROI, growth in shareholders' net worth, percentage increase in the customer base, etc. Meeting the customers' needs and preferences is the company's priority, and it is a never-ending challenge. Companies have to formulate strategies to align these two angles (VOB and VOC) of the business. There are certain scenarios in which VOC and VOB compete with each other:
1. The VOC expects the business to provide a first-class product or service, and the company needs strong internal processes to support the processes that drive value for the customer. Whenever sales of the product or service increase, production or customer support must increase in line with the demand. However, because the objectives of the company (better ROI and more profit) pull in the opposite direction from the objectives of the customers (zero defects, zero errors, etc.), it can be difficult to match supply with demand at such times. Though this period is transient, in such scenarios VOC and VOB act in opposite directions.
2. Due to industry competition, companies may have to slash the prices of their products or services. During such times, the company's objective of increasing ROI is affected. In this case, VOC competes directly with VOB.
In both cases, VOC and VOB act directly opposite to each other. However, these are only transient phases; it is the company's highest priority to take up the challenge and find the match and synergy between VOC and VOB.