Venugopal R

Excellence Ambassador

Community Answers

  1. Venugopal R's post in Six Thinking Hats was marked as the answer   
    Benchmark Six Sigma Expert View by Venugopal R

‘The Six Thinking Hats’ is a popular method for getting a team to think about a topic from multiple angles.
Any brainstorming exercise needs good planning, facilitation and post-session work to derive the benefits of the time spent by a group of experts. Brainstorming, if allowed to happen as a ‘free for all’ exercise, will rarely provide a useful outcome. Various methods have been recommended for channeling brainstorming efforts.
The ‘Six Thinking Hats’ method by Edward de Bono is one widely accepted way to overcome some of the issues faced during a traditional brainstorming exercise.
     
    Genesis of the ‘Six Thinking Hats’ method.
Each individual has a characteristic and habitual way of thinking. Some are optimistic by nature, some are cautious, and others are intuitive, creative and so on. With such different approaches of mind, based on individual behavioral characteristics, a brainstorming session can run into clashes of interest, hurdles and passiveness.
As per de Bono’s thinking, each of these characteristics is important, and we need to look at a problem from all these angles before concluding upon a solution. He brought in six perspectives to be considered mandatorily during a brainstorming session and related each one to the colour of a hat. These six perspectives are expected to largely encompass the variety of perspectives that could emerge from a group of individuals.
     
    What each coloured hat represents:
White Hat – Facts: Focus on data, facts and information available or needed.
Blue Hat – Process: Focus on managing the thinking process, next steps and action plans.
Red Hat – Feelings: Focus on feelings, hunches, gut feelings and intuitions.
Yellow Hat – Benefits: Focus on values and benefits; why something may work.
Black Hat – Cautions: Focus on difficulties and potential problems; why something may not work.
Green Hat – Creativity: Focus on possibilities, alternatives, solutions and new ideas.
    How does it differ from traditional Brainstorming?
In traditional brainstorming, the heterogeneity in team thinking at any point of time causes conflicts of interest and results in valuable ideas from the multiple thought perspectives being missed. A few individuals are bound to dominate, which can bias the outcome towards their ideas. Participants whose perspectives could not be voiced, or were overpowered, feel let down and tend to have poor ownership of the final solution.
     
By ‘wearing’ a particular colour of hat, all the participants force themselves to approach the problem from the perspective represented by that hat colour at any given point of time, irrespective of their natural inclination. This enables the entire team to address the problem from the same perspective at a given point of time. Room for dominance-based bias is reduced. By going through all the ‘colours’, the likelihood of anyone’s perspective getting left out is significantly reduced. This helps build an overall higher level of ownership of the accepted solution.
     
    Example case study:
Let’s consider a situation where an organization wants to decide whether it should purchase an expensive RPA tool. They use the ‘Six Thinking Hats’ for discussion and decision making. Please note that the points mentioned here are just for illustration and would not be exhaustive enough for an actual case.
     
With the White Hat on, the team focuses on available data and the data required. They look at the number of automation opportunities, existing and likely to emerge in the next couple of years. They look at data on the multiple RPA tools available and their comparative costs and features. They also look at past industry trends and future prospects for automation.

With the Red Hat, the team gathers the intuitive opinions of the team members on the different products available, and the pros and cons based on hunches and individual opinions. They also gather inputs on what the team ‘feels’ about the need for automation, and about going for a third-party tool versus developing it in-house.

Wearing the Green Hat, the team encourages innovative thinking – alternative approaches to overcome their productivity issues, smartly modifying available software with internal expertise, or simplifying the process with creative design thinking that could vastly reduce the number of steps involved. Other thoughts could be to leverage options offered through cloud computing.

The Yellow Hat may be introduced at this stage to focus on the tangible and intangible benefits of acquiring an RPA tool. The team looks at the investment and the ROI time frame. Other benefits could include improved accuracy and winning more customer goodwill by providing faster and higher quality services. Another factor could be enhanced competitiveness.

The Black Hat may be brought in now – concerns are raised on the credibility of the projections for automation. What if the technology becomes obsolete before the ROI is realized? Will it result in loss of jobs for employees?

The Blue Hat is worn by the person facilitating the thought process, encouraging the ideas to flow and directing the switching of the thought process from one perspective to the next.

Had they not followed the ‘Six Thinking Hats’ method, a few of the above perspectives – quite likely the facts, feelings and benefits angles – would have dominated with bias. Having explored the problem from all the perspectives, the summary of the discussion will be comprehensive and will help the management team take a well-informed decision, with a higher degree of ownership.
  2. Venugopal R's post in VUCA - Volatility, Uncertainty, Complexity and Ambiguity was marked as the answer   
    Benchmark Six Sigma Expert View by Venugopal R
     
VUCA is an acronym coined by the American military to describe certain extreme wartime situations. Today's fast-paced world has led corporate leaders to find VUCA applicable to many corporate situations as well. I would like to share my thoughts and experiences below.
     
Example 1
One of the common VUCA situations in the corporate world arises during mergers and acquisitions.
     
Volatility can be expected with several changes happening simultaneously: new senior leadership formation, organization reshuffles, new clients, products and services, new geographies, new processes and operating platforms, and so on.
     
Uncertainty will prevail among employees, shareholders and other stakeholders. Long-term customers and suppliers could get perturbed and be left wondering whether they will be impacted by the change.
     
Complexity may arise out of the need to integrate different workflows, processes and platforms. Synergizing operations, sites, technologies and functions would also present complexities.
     
Ambiguity crops up due to the possibility of multiple interpretations of the same messages, policies, procedures and reporting structures. Another area of ambiguity that is likely to hover for some time could be the name of the combined organization.
     
Example 2
Another situation that comes to my mind is when our city was affected by severe floods.
     
We saw volatility in the form of our normal life getting disrupted overnight and damages mounting in various forms. Our office sites became inaccessible, all communications were disrupted, all modes of power failed, employees got stranded, some critical equipment got submerged, and so on.
     
We were uncertain about how long it would take to get back to normalcy. We did not know how much damage had happened and what more might happen. We did not know about the safety of employees and had no means of communication. We did not know how much our customers would be impacted by delivery issues and how they would look upon us.
     
What were the complexities we confronted? Our BCP / DRM strategies swung into action. We had to divert our work to other sites that were not affected and to sites overseas. This resulted in deploying different resources to handle new work, with their limited familiarity and training. Damage to critical equipment that was part of the backup process meant that data was not accessible to support the remote locations. Since the entire city was impacted, rescue and draining equipment were in high demand and not easily available. Many employees had emergencies at home as well, and balancing priorities between saving home and office was challenging. Arranging logistics to safely move the employees who were available from their homes to workable locations, and providing them food, proved complex in such a disastrous situation.
     
Ambiguity prevailed, with multiple inputs and information coming in about the condition of our sites and of employees' homes. It was difficult to reach agreement on priorities with the limited resources available. Providing any sort of commitment to clients about returning to normalcy was very difficult with inconsistent and ambiguous inputs.
     
Example 3
Moving to another scenario, we had once faced a situation where there was a mass exodus of employees from a particular division, most of them lured by a competitor in the city who had suddenly set up an operation for the same services.
Volatility was seen in the speed at which we were losing employees. With more people leaving and rumors spreading, it became contagious, and in a very short time more than 80% of the employees in that division had disappeared.
Uncertainty was further fueled by the volatility. With rumors spreading about a possible closure of the division, many employees were in a dilemma over whether to continue or quit. The management was uncertain about how much loss would be caused and whether we would be able to retain the customers.
Complexities included inducting employees from other divisions, frantically hiring new employees, and getting them trained and exposed to the client requirements and processes without disrupting deliveries and accuracy levels.
Ambiguous inputs were received on the number of employees already lost and the number potentially to be lost. The cause of getting led into such a situation was presented and interpreted by various people in different ways: whether it was the company, the competition or the customer that was responsible for the exodus.
     
    Dealing with VUCA
Coming to the Business Excellence practices that could help during VUCA situations, we may look at each of the four components of VUCA, though some of the practices would help across all of them.
     
One of the essential practices is to have good Business Continuity and Disaster Recovery Management plans. These will be useful for unexpected situations that may emerge due to natural and other causes. Volatility coupled with uncertainty can result in 'not knowing the unknown' and hence a risky situation. Anticipate potential failures and get prepared for the worst situation. A PFMEA can help us assess each potential risk, classify priorities and build our preparedness.
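To make the PFMEA point concrete, here is a minimal, purely illustrative Python sketch (the failure modes and ratings are hypothetical) of how Risk Priority Numbers can rank where preparedness effort should go first:

```python
# Minimal PFMEA prioritization sketch; all failure modes and ratings are illustrative.
# RPN = Severity x Occurrence x Detection, each rated on a 1-10 scale.
risks = [
    {"failure_mode": "Server room flooding",  "S": 9, "O": 3, "D": 4},
    {"failure_mode": "Backup link outage",    "S": 7, "O": 5, "D": 3},
    {"failure_mode": "Key staff unavailable", "S": 6, "O": 6, "D": 5},
]

for r in risks:
    r["RPN"] = r["S"] * r["O"] * r["D"]   # Risk Priority Number

# Address the highest-RPN items first when building preparedness plans.
for r in sorted(risks, key=lambda x: x["RPN"], reverse=True):
    print(f'{r["failure_mode"]:22s} RPN = {r["RPN"]}')
```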
     
Proactive multi-skilling and multi-tasking of resources would help in situations where we have heavy fluctuations in product sales or in employee turnover.
     
Complexities can arise out of the interconnection between processes and activities and the interdependence of multiple decision factors. Entity relationship diagrams help to define and depict relationships between multiple entities. We may use a SIPOC for a high-level depiction of a process in a complex situation, and if we are able to obtain sufficient data, we may use multiple regression models.
     
Ambiguities arise because there is room for multiple interpretations of the same data, coupled with insufficient data. One of the approaches in statistical decision making is to attach a confidence level to any inference. Thus, even though we are unable to decide something with 100% certainty, we are able to take a practically applicable interpretation by attaching a quantified risk. The risk levels can decrease with an improved quantum of inputs over time.
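As a simple illustration of attaching a confidence level to an inference, the sketch below (with made-up observations) computes a 95% confidence interval; the width of the interval is the quantified risk referred to above, and it narrows as more data comes in:

```python
# Minimal sketch: quantifying ambiguity with a confidence interval.
# The 'data' values are hypothetical daily recovery-time readings.
import math
import statistics
from scipy import stats

data = [42, 47, 45, 50, 44, 48, 46, 49, 43, 45]
mean = statistics.mean(data)
sem = statistics.stdev(data) / math.sqrt(len(data))   # standard error of the mean

# 95% confidence interval based on the t-distribution
lo, hi = stats.t.interval(0.95, df=len(data) - 1, loc=mean, scale=sem)
print(f"Estimate: {mean:.1f}, 95% CI: ({lo:.1f}, {hi:.1f})")
```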
  3. Venugopal R's post in Stakeholder Engagement was marked as the answer   
    Benchmark Six Sigma Expert View by Venugopal R

Among the factors that hinder the progress and success of a project, inadequate involvement of stakeholders is quite common. The reasons for not involving stakeholders are many. Sometimes enough effort isn't taken to identify the stakeholders. Sometimes the team takes a 'cautious' approach and keeps postponing the involvement of certain stakeholders, saying that it is better to involve them after showing some success on the project. Sometimes the team fears that certain stakeholders, if involved at the beginning, could raise too many questions and retard the project.
However, with the experience of many projects, successful or otherwise, it is seen that the risk of not involving the right stakeholders is much higher than any perceived risk of involving them. The very first step is to identify the stakeholders relevant to the project.
     
    1.     Involve key leaders during project identification
There are some stakeholders who will be part of even identifying the project. Identifying potential stakeholders across functions, who could be leaders of key functions, and involving them to rate and prioritize the list of projects is a good beginning to build ownership. A tool such as the Pareto Priority Index would help bring objectivity.
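For illustration only, here is a small sketch of how such a ranking might be computed, taking the Pareto Priority Index in its commonly quoted form of (savings x probability of success) / (cost x time to complete); the project names and figures are hypothetical:

```python
# Hypothetical candidate projects: (name, savings, probability of success, cost, years to complete)
projects = [
    ("Reduce invoice rework", 100_000, 0.8,  20_000, 0.5),
    ("Automate report packs",  60_000, 0.9,  30_000, 1.0),
    ("New workflow platform", 300_000, 0.5, 150_000, 2.0),
]

def ppi(savings, prob, cost, time):
    # Pareto Priority Index: higher values suggest a higher-priority project
    return (savings * prob) / (cost * time)

for name, s, p, c, t in sorted(projects, key=lambda x: ppi(*x[1:]), reverse=True):
    print(f"{name:24s} PPI = {ppi(s, p, c, t):.2f}")
```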
     
    2.     Project Chartering
Once a project is identified, certain stakeholders associated with the project will be obvious while preparing the project charter itself. For instance, a project that has to improve a business process will identify the concerned process owner(s). The requirement for Subject Matter Experts, who need not be full-time members of the project, would emerge. While estimating the financial worthiness of the project, it will be important to involve appropriate staff from the Finance function.
     
    3.     SIPOC
While preparing the high-level SIPOC, it is possible to identify stakeholders who may be suppliers or customers to certain process steps, or could even be enablers. For example, a project involving a process improvement in data aggregation services that calls for mass testing will require high-bandwidth management. Hence the early involvement of the IT support function may be essential to avoid an otherwise potential hurdle in due course.
     
    4.     Stake holder Impact
Not all stakeholders will be equally impacted by the project. It is very important to identify the stakeholders on whom the impact is high; winning their confidence will help in obtaining their support for the project. On the other hand, there will be stakeholders on whom the impact is low, but whose support, expertise or authority is required during certain stages of the project. The leader has to keep in mind that these stakeholders are always kept in the loop with the necessary involvement. Preparing a table laying out the extent of stakeholder impact will be a useful tool for their involvement planning.
     
    5.     Stakeholder Involvement planning
While not involving a stakeholder adequately can cause issues as the project progresses, over-involvement of a stakeholder is also not desirable. Over-involvement could grab too much of the stakeholder's time and later lead to withdrawal tendencies. Sometimes it could result in too much influence of one stakeholder on the project. Hence it helps to plan in advance which stakeholder(s) should be involved at various process steps and also the degree of involvement – low, medium or high. A process map with extended columns to include the stakeholders and their degree of involvement will not only help in planning but will also remain a reference document during the course of the project. This may be referred to as the Stakeholder Involvement Plan.
     
    6.     Stakeholder communication plan
At each stage of the project, plan the communication to stakeholders – what should be communicated, at what point of time, and the extent of communication. Any good communication matrix will be helpful. There could be certain projects that involve customer value-add. In such projects, including the customer as a stakeholder and the necessary planning for involvement and communication apply. Interestingly, I remember leading a customer value-add project where one of our competitors was a stakeholder!
     
7.       During-project involvement of stakeholders
Though the above discussion covers some practices for the early involvement of stakeholders in a project, we will also touch upon their selective involvement in further areas:
    a)       For Identifying root causes
    b)      For identifying / selection of final solutions and actions
    c)       Obtaining stakeholder buy-in on solutions
    d)      Sharing of results with stakeholders
    e)      Appreciating and thanking stakeholders
  4. Venugopal R's post in Power of Hypothesis Test was marked as the answer   
Decision based on test | Reality: Ho is True                              | Reality: Ho is False
Accept Ho              | Correct Decision (1 – alpha) – Confidence Level  | Type II error (Beta)
Reject Ho              | Type I error (alpha)                             | Correct Decision (1 – Beta) – Power of the Test
     
If we want the test to pick up a significant effect, it means that whenever H1 is true, the test should accept that there is a significant effect.

In other words, whenever H0 is false, the test should accept that there is a significant effect.

Again, in other words, whenever H0 is false, the test should reject H0. This probability is represented by (1 – Beta). As seen from the above table, this is defined as the power of the test.
     
    Thus, if we want to increase the assurance that the test will pick up significant effect, it is the power of the test that needs to be increased.
     
    Hypothesis testing.
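As a rough illustration of how the power (1 – Beta) behaves, the sketch below computes the power of a simple one-sided z-test for detecting a mean shift; all numbers are assumed for illustration:

```python
# Power of a one-sided, one-sample z-test with known sigma (illustrative values).
import math
from scipy.stats import norm

alpha, sigma, delta, n = 0.05, 2.0, 1.0, 16   # significance level, std dev, true shift, sample size

z_crit = norm.ppf(1 - alpha)                  # rejection threshold under H0
power = 1 - norm.cdf(z_crit - delta * math.sqrt(n) / sigma)
print(f"Power (1 - Beta) = {power:.2f}")

# Increasing n or alpha, a larger true shift, or a smaller sigma all raise the power.
```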
  5. Venugopal R's post in Launching Lean Six Sigma Effectively was marked as the answer   
Avoid branding the program with a “Lean Six Sigma” tag at the start. Understand the biggest Leadership pains; there are always bound to be requirements for improving the effectiveness / efficiency of processes. Take such a pain area (or improvement area) and initiate it through the organization's regular procedures, viz. change requests or CAPA processes, which normally flow through the relevant cross-functional stakeholders. No SME should feel it as an added activity. Once it succeeds, let leadership experience a fact-based success story that encourages them to give you the next one. Step up the usage of LSS tools / practices as required, and pursue this to achieve a seamless buy-in of the program.
  6. Venugopal R's post in Discrete data as continuous data was marked as the answer   
    Working with sample means
When we work with sample means, the data from any distribution, even a discrete one, tend to follow the properties of the normal distribution, as governed by the central limit theorem. This concept enables the use of normal distribution laws for tools such as control charts (a small simulation sketch is given at the end of this answer).
    Ordinal data
Many a time we use ordinal data on a Likert scale with ratings 1 to 5. When we average such recordings for a particular parameter across various respondents, it gets converted into a metric that can be seen on a continuous scale.
    Histogram
Every time we plot a histogram, even for data of a discrete nature (for example, the number of corrections in a document per day), with a large amount of data it tends to exhibit the behavior of continuous data, say a normal distribution.
    FMEA ratings
When we use the ratings in FMEA for severity, occurrence and detection, we assign discrete rankings between 1 and 10, but once converted to an RPN, the measure becomes more continual in nature, though it remains a whole number.
    Failure data / distributions
Another situation I can think of is failure data. Individual failure data are counts of occurrences, obviously discrete to start with. However, when we convert them to failure rates and plot distributions against time, they are treated as continuous distributions such as the exponential, Weibull etc.
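The simulation sketch below (with assumed Poisson counts) illustrates the sample-means point made at the start of this answer: averages of discrete counts behave close to a normal distribution, with the spread predicted by the central limit theorem:

```python
# Averages of discrete counts tend toward normality (illustrative simulation).
import numpy as np

rng = np.random.default_rng(1)
counts = rng.poisson(lam=4, size=(1000, 5))     # 1000 subgroups of 5 discrete counts
subgroup_means = counts.mean(axis=1)

print("Mean of subgroup means:", subgroup_means.mean().round(2))        # close to 4
print("Std dev of the means  :", subgroup_means.std(ddof=1).round(2))   # close to sqrt(4/5), ~0.89
```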
  7. Venugopal R's post in DPMO vs PPM was marked as the answer   
PPM (Parts Per Million) is a measure for defectives, which gives an indication of the number of parts having (one or more) defects in a given population. This measure does not provide insight into the quantum of defects, since there could be parts that have more than one defect. PPM is a popular measure when dealing with proportion defectives, where large numbers of pieces are involved and even one defect in a piece usually renders it unusable or subject to rework, e.g. auto components being supplied to a large automobile manufacturer. It also applies when we are referring to a single quality characteristic of interest, say the weight of a bottle of packaged drinking water, or the proportion of batches delivered on time.
     
DPMO (Defects Per Million Opportunities) is a measure for defects. When we deal with a part, it may be easy to express the defects per part or defects per x number of parts. Imagine we are dealing with a process and need to express the number of defects during a certain period of time. We could state the number of defects from the process in that period. However, if we need to compare the defect rates of process A and process B, it will be meaningful only if the opportunities for defects in these processes are comparable. This may not always be the case, and hence the approach adopted is to pre-identify the number of defect opportunities in a given process and use the ratio of defects over the number of opportunities. For ease of dealing with the numbers, this ratio is multiplied by a million and hence known as DPMO. The opportunities represent potential failure modes. For example, DPMO can be used to express the quality levels for a check processing activity or a knowledge transfer process, or to compare different production processes.
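A minimal sketch of the two calculations, using hypothetical inspection figures, is given below:

```python
def ppm(defective_units, units_inspected):
    """Defective parts per million (defectives, not defects)."""
    return defective_units / units_inspected * 1_000_000

def dpmo(defects, units_inspected, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units_inspected * opportunities_per_unit) * 1_000_000

print(ppm(defective_units=18, units_inspected=25_000))                      # 720.0 PPM
print(dpmo(defects=45, units_inspected=25_000, opportunities_per_unit=12))  # 150.0 DPMO
```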
     
  8. Venugopal R's post in Component failures was marked as the answer   
    The given situation is that of a reliability failure, where time is a factor. Obviously it is the infant mortality rate that is causing pain to the client.
    If the option of accelerated testing is ruled out due to cost considerations, the following approach may be adopted for quick identification of the highly probable causes.
Assuming that sufficient failure data is available, plot the failure rate vs time graph. This will usually tend to take the shape of an exponential distribution, with a high concentration of failures in the early period. From this plot, determine a time period beyond which the failure rate tapers down to a safe level.
Pick a reasonable number of samples of failed components from the "early failure period" and seek an equal number of samples of components that are still performing successfully even after the 'safe cut-off' period. (This will need client co-operation, as well as willingness by the supplier to replace those good components, to support this exercise.)
Now we have a set of "survived components" and a set of "failed components". Depending upon the type of component, list out a set of quality characteristics that may be compared between the "survived" and "failed" components.
The sets of observations for each characteristic from the "survived" components need to be compared against the corresponding sets from the "failed" components. To decide on the significance of the differences for each characteristic, appropriate hypothesis tests may be applied where relevant (an example is sketched below).
As a result of this exercise, the supplier would be able to re-define certain specification tolerances and manufacture components that are bound to be more reliable.
The other alternative / supplementary approach could be to collaborate with the client on sharing the investment in accelerated testing. If setting up such facilities is not feasible, the services of external laboratories may be sought. Ultimately, the outcome is going to be a win-win for both parties!
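As an illustration of the hypothesis-testing step above, the sketch below compares one hypothetical quality characteristic between the "failed" and "survived" sets using a two-sample t-test; the readings are made up for illustration:

```python
# Comparing one characteristic between failed and survived components (illustrative data).
from scipy import stats

failed_thickness   = [4.8, 4.7, 4.9, 4.6, 4.8, 4.7, 4.9, 4.8]
survived_thickness = [5.1, 5.2, 5.0, 5.3, 5.1, 5.2, 5.0, 5.1]

t_stat, p_value = stats.ttest_ind(failed_thickness, survived_thickness, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (say < 0.05) flags this characteristic as significantly different
# between the two sets, making it a candidate for tightened specification tolerances.
```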
  9. Venugopal R's post in Capability/ Performance Indices vs Sigma Level was marked as the answer   
I believe that the excellence ambassadors will be familiar with the fundamental definitions and calculations for Cpk, Ppk and Sigma, so I am not elaborating on them and will get straight into the comparative discussion.
     
The process capability index Cp is calculated using the variation within a rational sub-group and hence indicates the process potential, but it does not include the likely real-time variation between groups, which could influence the process in the long term.
     
Cp is a very good measure for assessing the inherent process potential. It is useful for assessing the impact of any change / improvement on the process capability of a given process.
     
The process performance index Pp considers the actual overall variation for its calculation and hence gives a more realistic prediction of the process performance.
Although Cp may appear good (say >1.67), it is important that Pp is also calculated to assess the process performance over time when subjected to the day-to-day variations of real-life production. Pp can serve as a measure for production part approval criteria.
     
When the above measures are calculated taking into consideration the centre shift of the mean, assuming we have upper and lower specification limits, the respective measures Cpk and Ppk are used.
Unless the study is for a very short run, it is always recommended to use Cpk or Ppk as the case may be.
     
When the process variation is in statistical control, Cp and Pp tend to become equal.
Cpk and Ppk are measures that are meaningful when we have upper and lower specification limits, and are ideal when we deal with variable data.
     
When we need to express a process performance index that is comparable across different processes, including those with attribute data, the sigma level as a measure of process performance becomes useful. Since there is an established relationship between the defect level (DPMO) and the associated sigma level, it becomes a versatile measure to express and compare process performances. However, it is recommended to maintain the Cpk and Ppk values for the benefits discussed, and to maintain the corresponding sigma conversion for company-wide uniformity.
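For illustration, the sketch below applies the commonly used DPMO-to-sigma-level conversion (with the conventional 1.5 sigma long-term shift); the DPMO values are the familiar table figures:

```python
# DPMO to sigma level, using the conventional 1.5 sigma shift.
from scipy.stats import norm

def sigma_level(dpmo, shift=1.5):
    return norm.ppf(1 - dpmo / 1_000_000) + shift

for dpmo in (308_537, 66_807, 6_210, 233, 3.4):
    print(f"DPMO {dpmo:>9}: sigma level ~ {sigma_level(dpmo):.1f}")
# Prints roughly 2.0, 3.0, 4.0, 5.0 and 6.0 - the familiar sigma table values.
```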
  10. Venugopal R's post in Long term vs Short Term was marked as the answer   
The long-term performance of a process, also known as its "Long Term Capability", by definition has to be assessed over a reasonable period of time.
     
At any given point of time, if we measure the process capability of a process, it will always be the "short-term capability". The short term denotes the process potential when operated under the set of variations that are inherent in the process at any point of time. Statistically, these are variations that may be typically depicted by the spread of an associated normal distribution on both sides of the mean value. Short-term capability is particularly useful for quickly understanding the effectiveness of a change that is expected to reduce variation, i.e. improve the process capability.
     
If the short-term capability itself does not meet the expected requirements, there is no need to run a long-term capability study. Knowing that in the long term a process will be subjected to additional variations that could shift the mean value, the short-term capability has to be good enough for the process to accommodate these additional variations, so that the long-term capability will still meet the expected requirements.
     
Considering the practical challenges, in terms of time and effort, of obtaining the long-term process capability, it is generally accepted that in the long term the mean value may shift by up to 1.5σ on either side, and a short-term capability adjusted for this shift is taken as an acceptable indication of the long-term capability.
     
    Thus in order to attain a long term process capability of 4.5σ, we need to ensure a short term capability of 6σ.
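A quick arithmetic sketch of that statement, using the standard normal tail area, is given below:

```python
# The 1.5 sigma shift convention in numbers.
from scipy.stats import norm

short_term_sigma = 6.0
long_term_shift  = 1.5
effective_z = short_term_sigma - long_term_shift       # 4.5 sigma after the shift

dpmo_long_term = norm.sf(effective_z) * 1_000_000      # one-sided tail area in DPMO
print(f"Long-term defect level ~ {dpmo_long_term:.1f} DPMO")   # ~3.4 DPMO
```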
  11. Venugopal R's post in Control Limits was marked as the answer   
For statistical control charts, the control limits are formed from the process's own historical data. To answer the above question, let's quickly recap the process of forming the control limits.
     
Typically, the past 30 or more data points are taken as input and the control limits are worked out using the formula appropriate to the nature of the data and the control chart applied. I am skipping the elaboration of control chart construction in this discussion.
     
(i) Once the control limits are derived as above, they become the baseline against which readings are plotted subsequently. Since we keep the limits fixed based on the baseline inputs, if the variation increases, points will start falling outside the control limits, or runs will start to appear indicating that the process is no longer in control with respect to the baseline limits.
     
(ii) Another scenario is if we do not fix the baseline limits, but let the UCL and LCL keep revising themselves as data points are added to the control chart. In this case, if the variation increases, the control limits will keep widening and might give the illusion that the process continues to be in control.
     
    As a matter of fact, the process can still be termed as “within statistical control” even with an increased variation, so long as the points are contained within those widened limits.
     
(iii) Hence, to keep track of changes in variation levels and at the same time watch whether the process is within statistical control, "stages" can be defined for periods of the control chart run, and the control limits for each stage can be worked out. This will help us graphically see any changes in the variation (the distance between the control limits) and the extent of statistical control within each stage. Such an option is available in Minitab.
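To illustrate scenario (i), here is a hedged sketch that fixes baseline limits from roughly 30 points on an individuals chart; the data are simulated and 2.66 is the standard individuals-chart constant (3/d2 for a moving range of 2):

```python
# Fixing baseline control limits from the first ~30 readings (individuals chart).
import numpy as np

rng = np.random.default_rng(7)
baseline = rng.normal(loc=50, scale=2, size=30)        # simulated baseline readings

moving_ranges = np.abs(np.diff(baseline))
center = baseline.mean()
ucl = center + 2.66 * moving_ranges.mean()
lcl = center - 2.66 * moving_ranges.mean()
print(f"CL = {center:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")

# Keep these limits fixed for later points (scenario i); recomputing them as new,
# more variable data arrives (scenario ii) would quietly widen the limits.
```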
  12. Venugopal R's post in Measuring Customer Satisfaction for e-commerce website was marked as the answer   
The table below gives the ranking of the five metrics (NPS, C-SAT, Churn, CAC, CES) in order of importance for the performance of an e-commerce website for electronic goods (only for the website).
     
Customer satisfaction metric              Rank
CES – Customer Effort Score                 1
NPS – Net Promoter Score                    2
C-SAT – Customer Satisfaction Index         3
Churn – Customer Loss Assessment            4
CAC – Customer Acquisition Cost             5
     
    1.      CES – Customer Effort Score
When someone wants to navigate through an e-commerce website all by themselves, the ease and user-friendliness of the website is most important. The main reason one is trying to purchase a product through the website is to save the time that would otherwise have been spent hunting and negotiating for the right product by visiting numerous shops. The speed, clarity and details one can obtain with relative ease during the interaction make CES the most important score.
    2.      NPS – Net Promoter Score
A higher CES will motivate a customer to be loyal and to refer the site to other potential customers. This score not only supports repeat visits by existing customers, but also attracts new customers. Hence NPS gets the second ranking.
    3.      C-SAT - Customer satisfaction Index
This reflects the satisfaction of customers who have already used the site and will give an assessment of their repeat use; but unlike NPS, it may not give an assessment of added referrals. Thus this is placed just below NPS.
    4.      Churn – Customer Loss Assessment
While this will assess the ratio of the number of new customers to the number of customers lost, we may not be able to assess the dissatisfaction drivers of the lost customers. Relying only on this metric could prove risky, since without the real drivers that influence this ratio, it could change drastically at any time. It could be a good idea to use this metric along with any of the higher-ranked metrics. Thus it attains the 4th rank.
    5.      CAC – Customer Acquisition Cost
While this metric would be influenced by customer satisfaction, there are other factors that influence the customer acquisition cost. Hence it may not be a good metric for assessing customer satisfaction with the usage of the product (the website).
  13. Venugopal R's post in Measures of Customer Satisfaction in a call center was marked as the answer   
The table below gives the ranking of the five metrics in order of importance and relevance for assessing the performance of a call center for credit card support services.
Rank    Customer Satisfaction Measure
1       CES – Customer Effort Score
2       C-SAT – Customer Satisfaction Index
3       Churn – Customer Loss Assessment
4       NPS – Net Promoter Score
5       CAC – Customer Acquisition Cost
     
1.      The top ranking has been given to CES, since it gives a metric to assess the customer experience of resolving an issue. CES is best applied when used after every customer interaction for resolving an issue.
    2.      The C-SAT metric is not as specific as the CES for the given question; however, it is useful to get the customer satisfaction index with respect to a product / service.
    3.      Churn or customer loss assessment could possibly be a measure, provided we are able to segregate the customer loss happening due to the call center experience from the other reasons.
4.      NPS would be a score to evaluate the overall experience of holding the credit card, not just the call center support. NPS is important for assessing the potential to attract new customers through referrals by existing ones, whereas the given situation pertains to existing customers only.
5.      CAC – This applies more to determining how many leads get converted into actual customers, i.e. the amount of cost being invested to acquire a customer. This metric is least applicable for the given example.
  14. Venugopal R's post in Measures of Customer Satisfaction for App based Cab Service was marked as the answer   
For an app-based cab service provider, the following is the ranking of the customer satisfaction metrics in order of importance and relevance:

Rank    Customer Satisfaction Measure
1       NPS – Net Promoter Score
2       CES – Customer Effort Score
3       C-SAT – Customer Satisfaction Index
4       Churn – Customer Loss Assessment
5       CAC – Customer Acquisition Cost
     
1. For the cab service provider it is equally important to have repeat customers as well as new customers. NPS, being a loyalty-based referral measure, is an important metric for retaining customers as well as attracting new ones.
     
2. CES is important here since most operations as well as customer complaints would be handled using the app interface. The amount of effort a customer has to put in to get his / her message across and evoke the appropriate response becomes a critical factor, and CES provides that measure.
     
3. C-SAT, being a versatile metric, would otherwise have taken the second place. However, since the service is app based, CES takes the second place and C-SAT the third.
     
4. The churn rate would apply better in a situation where many customers remain subscribed. Hence it is not considered a highly relevant metric in this case.
     
5. Customer acquisition cost may be taken as a measure that is important from a longer-term perspective; once many customers are already engaged, the above metrics have relatively higher importance.
     
     
  15. Venugopal R's post in Control Limits was marked as the answer   
The control limits for control charts are derived from the process's own data, applying the statistical principles applicable to the distribution the data falls into. 'c' charts and 'u' charts are used for 'count' data, such as the number of defects in a part / sample. The choice between 'c' and 'u' is made based on fixed or varying sample sizes.
     
It goes without saying that, when these charts are used for monitoring counts of defects, anyone will want the defect count to be as low as possible. Hence the UCL for defects makes sense, but the question is: why do we require a lower control limit for the defect count?
     
    LCL - little significance:
Sometimes, when the limits are worked out, the lower control limit takes a negative value; in such cases the calculated LCL, being negative, has no meaning and the LCL is taken as zero. Obviously, no point is going to fall below zero, and hence the LCL is of little significance here, except when the count is zero.
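A small sketch with hypothetical defect counts shows how a negative calculated LCL on a 'c' chart gets floored at zero:

```python
# c-chart limits: c-bar +/- 3*sqrt(c-bar), with the LCL floored at zero.
import math

defect_counts = [3, 5, 2, 4, 6, 3, 1, 4, 5, 2, 3, 4]   # defects per sample (fixed sample size)
c_bar = sum(defect_counts) / len(defect_counts)

ucl = c_bar + 3 * math.sqrt(c_bar)
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))           # negative value has no meaning, so use zero
print(f"c-bar = {c_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
```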
     
However, if we are using the 'run' patterns for our study of stability as per their rules, then the 1-sigma and 2-sigma limits are also used, apart from the LCL.
     
    LCL - Could unearth important finding:
    Where we do have a positive LCL, and if some data points fall outside, it indicates a situation that may be “too good to be true”. It will be worthwhile to investigate the special cause(s) that could have resulted in this occurrence.
     
1. It could be a measurement error. For example, a wrong gauge could have been used that was failing to detect defects.
     
    2. It could be a change of an inspector that added subjectivity in the defect identification, especially if the defect was to be visually identified.
     
    3. Or it could be some genuinely favorable condition that brought down the defect count. These could be opportunities of unearthing some favorable factor that we have been missing or ignoring.
One example from my experience is when we were using a 'u' chart to plot the count of character errors in captured data, processed from multiple sites. For a few consecutive days we observed the count falling below the LCL. Upon investigation, we realized that one particular processing site had been down during those days. A further probe revealed that this site was operating with an obsolete version of the application. Once the correct version was installed, we were able to sustain a reduced mean error count and the control limits could be narrowed.
    LCL - More important (than UCL?)
4.    It is not necessary that c and u charts always represent defects, which are always "lower the better". For example, a consumer goods company selling a popular brand of shaving cream wants to study the number of individuals out of a sample who use their product. They pick a sample of individuals in a city every day and find out how many of them are using their brand. In this case, since the sample size varies every day and it is count data, the 'u' chart applies. However, this is a case where the higher the count, the better. Hence the LCL, and counts falling below the LCL, are of utmost importance.
  16. Venugopal R's post in VOC, VOB was marked as the answer   
    The Voice of Customer (VOC) is one important element during the Quality Function Deployment exercise. VOC includes stated and implied requirements of the customer. The QFD helps in ensuring that all the elements of the VOC are addressed by the process and also provides a quantitative expression as to how much of each characteristic is addressed by the associated process / process steps. The Critical to Quality (CTQ) characteristics are also identified and subjected to the extra attention and care required from the process.
     
    The Strategic Business Objectives form one of the key starting points for the Policy Deployment exercise as part of Business Process Management. Here the Voice of Business (VOB) forms a key input, being the primary need of a business and its stakeholders, including profitability, revenue, growth, market share etc.  
     
    Some situations where we encounter conflict between VOC & VOB
     
    1. Pricing of a Product / Service:
This is one of the commonest factors where the VOC and VOB will undoubtedly have very strong negotiations. The VOB will seek to maximize the profit margin, but will have to maintain competitiveness. The VOC seeks the best price, and this expectation gets calibrated by comparison with the prevalent competing offers. Here, one of the most important terms used by the VOB is that we should ensure the customer gets "value for money". I would like to state a quote I picked up from one of my mentors: "When we provide a product or service worth Rs.1, the customer should end up feeling that they obtained a service worth Rs.2." However, the same leadership also expects the VOB to be fulfilled. This is the challenge.
     
    2. Quality requirements:
    For industrial products, Quality requirements are expressed quite clearly and in detail through specifications, drawings and standards. In the case of consumer goods and durables, Quality standards are set by the manufacturer based on various inputs from market and focused customer groups and past customer feedback. In the case of service industry, Quality requirements are expressed as part of service level agreements and are likely to be quite voluminous and subject to higher interpretation variations.
     
Conflicts arise when interpretation variations and alterations crop up in quality expectations, especially after a contract is signed off. The VOB is likely to raise concerns relating to the feasibility of the agreed pricing and delivery times if there are differences from the quality levels initially agreed. Close involvement of all concerned stakeholders and transparent discussions are key to reaching a consensus.
     
    3. Scope creep:
Scope creep refers to continuous or uncontrolled changes or expansion in a project's scope at any point after the project begins. To a large extent, businesses will accommodate changes that creep in after the initial agreement, honoring the customer's needs and considering long-term relationships. However, beyond a certain point, the VOB is likely to raise questions about accepting such scope changes without reviewing or revising other contractual agreements. Again, the fairness of the expectations also has to be seen, considering the competitive offerings available to the customer, to strike the right balance between VOC and VOB.
     
    4. Overbooking:
This situation is commonly seen in flight bookings, where airlines tend to overbook in anticipation of a certain amount of last-minute cancellations. The VOB wants to maximize capacity utilization as much as possible. Sometimes this results in customers with valid reservations going without seats. Although the airlines try to compensate by providing accommodation and other benefits, not every customer will be happy about such deprivation. The same analogy may be made for other businesses, where anxiety about not losing an order could trigger over-commitment but sometimes ends in under-delivery. The 'over-ambitiousness' of the VOB rubs the VOC the wrong way.
     
    5. Forecast vs Actual:
There could be cases where we have an ongoing customer contract for which a monthly forecast is provided by the customer to their vendor. It could be an auto manufacturer giving a monthly forecast to a supplier, or a health insurance company providing a forecast of claim volumes to be processed every month. The vendor invests and plans capacity, hires resources and sets up equipment as per the forecasts. Where the demand exceeds the forecast for a particular month, it puts the vendor under pressure, and where the demand falls short of the forecast, the vendor suffers on capacity utilization. Such conflicts are seen between the VOC and the VOB.
     
If the vendor, as part of their business, does their own independent homework on market / customer trends, apart from the forecast provided by the customer, they would be able to apply realistic flexibility to their plans and investments.
     
    6. Change requests:
Change requests from a customer for a running business could require investments from the supplier organization. Carrying out changes in a running business may not be easy without disrupting the flow. The change management system has to ensure the effectiveness of the change while ensuring that no adverse impact results. The VOB may sometimes find it tough to accept the demands of the VOC but, considering the priority of an existing customer, will have to take risks and yield.
     
    Elaboration of the change management related terms and conditions in the Service Level Agreements could help bridge such expectation gaps between VOC and VOB.
     
    7. Long term business interest:
Several situations require the VOB to adjust itself consciously, considering the long-term business relationship with a customer. For instance, if we are provided multiple business accounts by one customer with varying profit margins, we might consciously choose, with discretion, to serve some business accounts that may not be profitable at all. Such decisions are taken in the larger interest of maintaining the overall set of accounts in the long term. The P&L heads for such accounts will feel the conflict between the VOC and the VOB.
     
Appropriate prior communication and involvement of such P&L heads in these strategic discussions would help ease such situations.
     
    8. Moments of truth:
Moments of Truth (MOT) is a phrase that refers to instances where the customer gets an opportunity to form or change his / her impression about the company (or service provider). These could be very customized instances to deal with unique situations with a customer, and sometimes one may have to deviate from the usual VOB. I recall one instance when I had to be bold enough to host a customer, very upset due to product performance, at my organization, give him an audience with key functions in the organization and get the product fixed right in his presence. This is not an activity that is normally permitted and obviously cannot set a precedent; neither was I authorized to do so. However, it worked magic with the customer, whose impression was transformed, and it also helped gain a sizable order.
     
Hence, MOT is something every organization needs to sensitize its employees to, and there should be a way to exercise it with employee discretion when a situation demands, although it may temporarily appear as a conflict between VOC and VOB.
     
    9. Competitive rivalry, predatory pricing:
Such wars are common, especially with consumer products and services. We do see competing companies, usually large organizations with multiple business lines, offering what appear to be unreasonably low prices for selected products / services in order to rapidly gain market share. This puts huge pressure on the VOB of smaller organizations that are more dependent on that particular line of business. It is very important not to falter on addressing the VOC under such circumstances. One has to be very patient, but focused on providing the best value to the customer, who finally takes the call.
     
To conclude, both VOC and VOB are very important for the successful sustenance of a business. While the intention of the VOB is to satisfy the needs of the VOC, sustenance is possible only with business growth. So long as the 'VOC vs VOB' conflicts help in constructive decisions and strategies, it will be a win-win in the long run.
  17. Venugopal R's post in Central Tendency, Spread was marked as the answer   
By and large, we come across situations where we want the mean value of the outcome of a process (central tendency) to be focused around a specified target value with as little variation as possible (dispersion). There are situations where the variation assumes relatively higher importance than the central tendency, mostly because high variation is less tolerable than some shift in central tendency. Interestingly, there are also certain situations where variation, or controlled variation, is advantageous.
     
    Study of Process Potential:
The process potential index Cp is used to study the variation, or spread, of a process with respect to the specification limits. While we study process potential, we are interested in the variation and not in the central tendency. The underlying idea is that if the process is able to maintain the variation within the specified limits, it is considered to possess the required potential. The centering of the mean can always be achieved by setting adjustments. In other words, if Cp is not satisfactory, Cpk (process capability) can never be achieved, since Cpk can never exceed Cp; it can at best equal Cp.
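For reference, here is a minimal sketch of the Cp / Cpk relationship described above, with hypothetical specification limits and process values:

```python
def cp(usl, lsl, sigma):
    # Process potential: uses only the spread, not the centering
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    # Process capability: penalized when the mean is off-centre
    return min(usl - mean, mean - lsl) / (3 * sigma)

usl, lsl, sigma = 10.6, 9.4, 0.1
print(cp(usl, lsl, sigma))                       # 2.0   - the potential
print(cpk(usl, lsl, mean=10.0, sigma=sigma))     # 2.0   - equals Cp only when centred
print(cpk(usl, lsl, mean=10.2, sigma=sigma))     # ~1.33 - off-centre, so Cpk < Cp
```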
     
    Many examples where the variation is generally considered unfavorable to the outcome:
    1. Analysis of Variance
While evaluating whether there is a significant difference between the means (central tendency) of multiple sets of trials, as in ANOVA, the variation between sets and within sets is compared using F tests. Thus, in such situations, the comparison of variation assumes high importance (a short F-test sketch follows after this list of examples).
    2. Relative grading systems
For many competitive examinations, the concept of 'percentile' is used, which is actually a relative grading system. Here, more than the absolute mark of a student, the relative variation from the highest mark is more important; thus the relative variability becomes the key deciding factor.
    3. Control chart analysis
While studying a process using a control chart, instability and variation are given importance first. Only if we have control over these parameters will we be able to meaningfully study the 'off-target' aspect, i.e. the central tendency.
    4. Temperature variation in a mold
In certain compression molding processes, temperature variation across different points on the surface of the mold does more harm than the mean temperature. Here the mean temperature is permitted a wider tolerance, but the variation across the mold causes more warping of the product.
    5. Voltage fluctuations
    Many electrical appliances get damaged due to high variation (fluctuation) in the voltage, although the mean voltage (central tendency) is maintained.
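As indicated against the ANOVA example above, here is a short, purely illustrative F-test sketch comparing between-set and within-set variation for three hypothetical sets of trials:

```python
# One-way ANOVA: is the between-set variation large relative to the within-set variation?
from scipy import stats

trial_a = [12.1, 11.8, 12.4, 12.0, 11.9]
trial_b = [12.6, 12.9, 12.7, 13.1, 12.8]
trial_c = [12.2, 12.0, 12.3, 12.1, 12.4]

f_stat, p_value = stats.f_oneway(trial_a, trial_b, trial_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A large F and a small p-value indicate a significant difference between the set means.
```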
     
    Controlled variation is favorable:
    1. Load distribution in a ship
    While loading a ship the mean value of the load can vary, but the distribution of the load is more important to maintain the balance of the ship on water.
    2. Science of music
    Those who understand the science of music would agree that more than the base note, the appropriate variation of the other notes with respect to the base note is extremely important to produce good music.
     
    Some examples where variation is favorable:
1. Systematic Investment Plans (SIPs) take advantage of the variation in NAVs to accumulate wealth. Here even an adverse shift of the central tendency is compensated by the variation!
2. The law of physics states that Force = Mass x Acceleration (F = ma). Thus, if we consider speed as the variable, it is the variation of speed (acceleration) that decides the force, and the mean speed (central tendency) has little relevance.
  18. Venugopal R's post in Tribal Knowledge was marked as the answer   
    About ‘Tribal Knowledge’
In the business world, tribal knowledge refers to information that is confined to the minds of certain people in an organization. Unfortunately, such information has not been documented and is thus not known to many others, even though it may be very important for successful business outcomes, say delivery, service and quality.
I remember one of my leaders once saying, "Much of what we have can be built by someone else, be it our buildings, our machinery, our technology, our processes and so on. But what would be most difficult to replicate is what goes on in the minds of our employees: the knowledge, the unique set of skills, experiences and the organization culture." This clearly underlines the importance that has to be attached to the experience of the workforce, built over the years.
It is very important to continually attract fresh and younger talent to an organization. However, it is equally important to instill a culture that fosters healthy collaboration among the fresh resources, the experienced resources within the organization and the experienced resources who have joined from other organizations.
     
    Unlocking, Capturing and Harnessing ‘Tribal knowledge’
For various reasons, tribal knowledge prevails in any established organization, despite the best efforts at documented systems. Some thoughts on unlocking, capturing and harnessing tribal knowledge are below:
     
    1.      Has to be an ongoing activity:
Many a time, the need for specialized knowledge is felt only when the concerned individual is not available or has decided to leave the organization. The scramble for knowledge transfer, or desperate attempts to retain the individual, are not uncommon. It is important to take stock of 'tribal knowledge' from time to time in all areas of the organization and proactively get it addressed.
     
    2.      Identify the pockets of such knowledge:
    Some of the typical areas where we may have such confined knowledge are Technology, Customer relations, Quality, Maintenance etc.
     
Software applications developed and maintained for years would have undergone several revisions and modifications, resulting in a complex set of code that may be difficult for a new engineer to decipher. Very often, any change attempted on such code results in unwanted adverse effects that could trigger quality and delivery issues.
     
Customer relations staff would have long-term experience and rapport established with specific individuals at the customer, all of which is difficult to transfer as knowledge.
     
Maintenance folks would have attended to numerous problems and helped restore equipment to avoid downtime and production loss. At times of emergency, everyone would have been anxious to get production up and running, and the discipline of maintaining detailed documentation might not have attained priority. This results in the growth of 'tribal knowledge' with the maintenance staff.
     
Areas like finance and cost accounting are usually covered by robust documentation and internal and external audits, which help keep them at relatively higher transparency levels.
However, these 'knowledge pockets' vary from organization to organization and need to be assessed periodically.
     
    3.      Map such knowledge pockets with the individuals who hold them
The question "What could get impacted if this person quits the organization?" needs to be asked for all employees, especially those with longer tenure. Once the knowledge reserves are associated with individuals, we will be in a position to work on them before it is too late.
     
    4.      Examine whether the ‘Tribal Knowledge’ accumulation was unintentional or intentional?
Each of the 'knowledge pockets' identified needs to be examined to understand whether it is being created intentionally or not. Over time, employees holding on to specialized knowledge could be an effect of a feeling of insecurity. The possession of such knowledge or skill makes them critical to the organization and could lead to tendencies to confine such knowledge.
     
However, in many other situations the 'tribal knowledge' accumulation is not intentional and could be the result of a lack of organizational systems and planning.
In the latter case, once the organization decides on a knowledge transfer program, there will be no 'will' issue to be tackled.
     
    5.      Decide upon pro-active knowledge sharing programs:
Functions like HR, Training and TQM, along with the concerned functional leaders, need to embark on a time-targeted program to continually convert the confined knowledge into organizational knowledge.
     
    Some of the stratifications that may be done are:
    - Number of employees nearing retirement age in the near future – assess potential tribal knowledge residing in them
    - Employees whose critical knowledge could be tempting for competitors to lure them
    - Possessors of knowledge that proves critical in times of an emergency
    - Other knowledge pockets that are important but may not be as urgent as the ones above
    This will help in setting the priorities for the Knowledge sharing / transfer program.
     
    6.      Knowledge conversion – Execution
    - Retiring employees – plan for retention / extension of retiring employees with the required knowledge. Some of them may welcome such an extension and could even work out a flexible working arrangement to suit mutual convenience. If extension is not workable, explore the possibility of hiring them on a contract basis to provide the knowledge transfer.
    - For knowledge-intensive processes, introduce a ‘buddy system’ where such jobs are performed by 2 or more employees together to ensure a larger spread of knowledge and skills.
    - Where the ‘tribal knowledge’ dependency is for process or technology applications that are due for updating, institute a program for replacing them with newer applications that will not only be more up to date but also better supported for service.
    - Leverage systems like ISO 9001 and other Quality Management Systems, where adequate multi-tier documentation is mandated and subjected to periodic internal / external audits.
    - Procedures, methods and technologies can be documented to a large extent, but for ‘skill’, documentation is necessary yet may not be sufficient. For this, many of the human-related actions discussed above will have to be performed proactively on an ongoing basis.
    7.      Importance of harnessing and preserving ‘Tribal Knowledge’
    Knowledge, tacit or explicit, has to be considered a treasure for the organization. Adequate management attention and priority have to be allocated for ensuring the ongoing capture, harnessing and preservation of this knowledge in the organization.
     
    This activity should be included as part of the annual budget exercise, with reasonable funds allocated. It is sure to pay back and, more importantly, to control and prevent erosion of the knowledge treasure.
  19. Venugopal R's post in Rational Subgrouping was marked as the answer   
    What would an excellence practitioner lose if he does not utilize the concept of rational subgrouping in the pursuit of process improvement?
     
    The principle underlying the concept of Rational Sub-grouping
    As per the Central Limit Theorem, the distribution of averages of samples taken from a population tends to be normal, irrespective of the shape of the population distribution. The mean of the sample averages will equal the population mean, and the standard deviation of the sample averages will be σ / √n, where σ is the population standard deviation and n is the sample size.
    This principle is used for deriving the control limits for a control chart.
     
    By rational sub-grouping, we mean samples taken in succession during a particular time. Usually the number of samples in a rational sub-group (i.e. the sample size) will be very small, say 4 or 5. The next such sub-group is taken only after a time interval. The reason for taking the samples in succession is to ensure that they will (predominantly) exhibit only variation due to chance causes, since they are produced under very similar conditions. The reason to keep the sample size small is to minimize any assignable variation that could creep in due to too much time gap between the samples within a sub-group.
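    As a minimal sketch (the readings and the number of sub-groups are purely hypothetical), the snippet below shows how sub-group data of size 5 could be organized and used to derive the X-bar and R control limits, using the standard control chart constants for that sub-group size:

    # Minimal sketch with hypothetical readings: X-bar / R control limits
    # from rational sub-groups of size n = 5.

    subgroups = [
        [10.2, 10.1, 10.3, 10.2, 10.0],   # 5 consecutive readings at time t1
        [10.4, 10.2, 10.3, 10.5, 10.3],   # 5 consecutive readings at time t2
        [10.1, 10.0, 10.2, 10.1, 10.2],   # 5 consecutive readings at time t3
        [10.3, 10.4, 10.2, 10.3, 10.5],   # 5 consecutive readings at time t4
    ]

    A2, D3, D4 = 0.577, 0.0, 2.114        # standard constants for sub-group size 5

    xbars = [sum(g) / len(g) for g in subgroups]       # sub-group averages
    ranges = [max(g) - min(g) for g in subgroups]      # sub-group ranges

    xbar_bar = sum(xbars) / len(xbars)                 # grand average (centre line of X-bar chart)
    r_bar = sum(ranges) / len(ranges)                  # average range (centre line of R chart)

    ucl_x, lcl_x = xbar_bar + A2 * r_bar, xbar_bar - A2 * r_bar   # X-bar chart limits
    ucl_r, lcl_r = D4 * r_bar, D3 * r_bar                         # R chart limits

    print(f"X-bar chart: LCL = {lcl_x:.3f}, CL = {xbar_bar:.3f}, UCL = {ucl_x:.3f}")
    print(f"R chart:     LCL = {lcl_r:.3f}, CL = {r_bar:.3f}, UCL = {ucl_r:.3f}")

    The A2 factor is simply a tabulated way of applying the σ / √n relationship above, converting the average within-sub-group range into limits for the sub-group averages.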
     
    The below table gives a representation of how data may be organized in sub-groups.

     
    What if an excellence practitioner does not utilize the concept of rational subgrouping?
    Let’s consider the following possibilities, instead of picking rational sub-groups as explained above.
     
    1.     If he picks up one large set of samples with no sub-groups:
    Using such a sample, he will be able to prepare a frequency diagram with class intervals and study characteristics such as the mean and overall variance. However, the two types of variation, i.e. due to chance causes and assignable causes, will be combined, and he will not be able to distinguish between them. He will not be able to construct a control chart to assess the different types of variability. (See the simulation sketch at the end of this answer.)
    2.     If he picks up sub-groups with large no. of samples in each sub-group:
    Each sub-group is likely to exhibit variations other than chance causes. This can magnify the range and widen the control limits, if a control chart is constructed using this data. This will reduce the sensitivity of the control chart to detect instabilities.
    3.     If he picks up the samples for a sub-group over a larger time interval:
    Any variability due to special causes that occurred between the samples could be missed. Causes that lead to a drift of the mean value or an expansion of variation (range) could go unnoticed. This could impact the correctness of the control limits derived.
    4.     If he does not give sufficient interval between picking up successive sub-groups:
    The conditions of the samples in one sub-group are likely to overlap with those of the adjacent sub-group, depriving the practitioner of a realistic ‘between sub-group’ variation. This could result in reduced R values and lead to narrower control limits.
    5.     If he picks up just one (or two) samples each time:
    If only one sample is picked, the range cannot be estimated and it will not be possible to work out the control limits. If only two samples are picked, he runs the risk of underestimating the range and hence narrowing the control limits.
     
    Thus, by not using the concept of rational sub-grouping, the practitioner will fail to come up with the best assessment of the three types of variability in the existing process, viz. instability, off-target performance and variation.
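    To illustrate the first scenario above, here is a minimal simulation sketch (all figures hypothetical): the process mean drifts hour to hour due to an assignable cause; the within-sub-group estimate based on R-bar stays close to the true common-cause spread, while the pooled standard deviation from one large sample mixes both kinds of variation:

    # Minimal simulation sketch: pooling all data hides the split between
    # common-cause ('within') and assignable ('between') variation that
    # rational sub-groups of 5 consecutive readings would reveal.
    import random

    random.seed(7)
    d2 = 2.326                                   # bias-correction constant for sub-group size 5

    subgroups = []
    for hour in range(20):
        drift = 0.4 * hour                       # assignable cause: mean drifts every hour
        subgroups.append([random.gauss(50 + drift, 1.0) for _ in range(5)])

    r_bar = sum(max(g) - min(g) for g in subgroups) / len(subgroups)
    within_sd = r_bar / d2                       # estimates only the common-cause spread (~1.0)

    pooled = [x for g in subgroups for x in g]
    mean = sum(pooled) / len(pooled)
    overall_sd = (sum((x - mean) ** 2 for x in pooled) / (len(pooled) - 1)) ** 0.5

    print(f"Within-sub-group sd estimate (R-bar / d2): {within_sd:.2f}")
    print(f"Overall sd of one pooled sample          : {overall_sd:.2f}")   # inflated by the drift

    A control chart built on the rational sub-groups would flag the drifting sub-group averages, whereas the single pooled sample only shows a wide, unexplained spread.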
  20. Venugopal R's post in Baseline was marked as the answer   
    About Baseline
    One of the requirements of the Measure phase in the Six Sigma DMAIC cycle is the baseline measurement, sometimes expressed as the Baseline Sigma. In fact, it is hard to tell whether the baseline data is required as part of the Define phase or the Measure phase.
    Ideally, the problem statement is expected to cover the What, When, Magnitude and Impact. The ‘When’ portion is expected to show the metrics related to the problem for a time period as a trend chart, so that we can see the magnitude of the problem and its variation over a period of time – this acts as a baseline.
    Baseline certainly helps to act as reference to compare and assess the extent of improvement. Baseline is important to get a good measure of the quantum of improvement and in turn to quantify the benefits in tangible terms.
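    For reference, a minimal sketch (all defect figures purely hypothetical) of how such a baseline is often expressed as a sigma level from defect data, using the conventional 1.5-sigma shift:

    # Minimal sketch with hypothetical figures: baseline DPMO and sigma level.
    from statistics import NormalDist

    defects = 320           # defects observed in the baseline period (hypothetical)
    units = 5000            # units inspected (hypothetical)
    opportunities = 4       # defect opportunities per unit (hypothetical)

    dpmo = defects / (units * opportunities) * 1_000_000
    # Convert the long-term yield to a short-term sigma level (customary 1.5 shift)
    sigma_level = NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

    print(f"Baseline DPMO: {dpmo:.0f}")
    print(f"Baseline Sigma: {sigma_level:.2f}")

    Capturing the baseline in this form makes the before vs after comparison straightforward whenever the data to support it is available, which is exactly what the situations below challenge.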
    However, the following discussion brings out certain practical challenges related to Baseline.
     
    1.    Baseline metric did not exist, but is it worth post-creating it?
    Suppose we are trying to improve an electronic product based on certain customer complaints; our project objective will be to ensure that the incidence of customer complaints is reduced or eliminated. Upon subjecting the product to a special lab evaluation, we could simulate the failure. However, a reasonable baseline metric will be possible only if we subject a set of sample units to testing for a certain period of time. This could prove quite costly and time consuming. On the other hand, the solution to the problem is known and we may proceed with the actions. Since our goal is to ensure zero failure under the given conditions and duration, comparison with a baseline is not important here.
    Many a time, when the company is anxious to implement the improvement to get the desired benefits, be it cost or quality, it may not make much sense to build up baseline data unless it is readily available.
     
    2.    New measurement methodology evolved as part of improvement
    Let’s take an example of insurance claims processing, where the payment / denial decisions are taken based on a set of rules and associated calculations. The improvement being sought is to reduce the rate of processing errors. However, it was only as part of the improvement actions that an appropriate assessment tool was developed to identify and quantify the errors made by the processors. By this time, the improvement has already begun and it is not practically possible to trace backwards, apply this tool and get a baseline measurement.
     
    3.    When improvement is for ‘Delight factors’
    Often we introduce enhancement features on a product, for example new models / variants of smart phones. In such cases, the emphasis is more on delight factors for customers, i.e. features that they haven’t experienced earlier, and any baseline comparison may not have much relevance.
     
    4.    Integrated set of modifications
    Let’s examine another scenario where a series of modifications were implemented on a software application and released together as a new version. Here, the set of actions taken influenced multiple factors, including performance improvement, elimination of bugs and inclusion of new innovative features. In such situations, any comparison of the baseline performance with the current performance will be very difficult and would involve overlapping impacts. If we still need to do a before vs after comparison, we may have to do so after factoring in and adjusting for such interaction effects on the pre / post improvement outcomes.
     
    To conclude, in general, a baseline metric is an important piece of information that we require to compare the post-improvement results against. However, it has to be borne in mind that certain situations challenge the feasibility and relevance of using a baseline measurement.
  21. Venugopal R's post in Correlation was marked as the answer   
    Just because two variables have a strong correlation, it does not form a sufficient condition for a cause-effect relationship.
     
    Let us consider two events P and Q that have shown a correlation. The various possibilities may be examined as follows:
     
    1.     Event P may be dependent on Event Q (direct causation)
    This is the straightforward and genuine conclusion that one may derive from a correlation. For e.g., days and nights are caused by the rotation of the Earth.
     
    2.     Event Q may be dependent on Event P (reverse causation)
    How would it sound if we concluded that the rotation of the Earth is caused by days and nights? This being an obvious example, one may not make such a mistake. However, for less familiar events, going just by correlation, the cause-effect relationship may be mistaken in the opposite direction.
     
    3.     Event P and Event Q may both be the result of a third variable that acts as a common cause for both these events, though they do not impact each other
    For example, we may see a negative correlation between the number of people travelling by public transport and farmers’ productivity. In reality, they do not influence each other; both are influenced by another factor, viz. a shortage of fuel: more people switched to public transport instead of using their own cars, and farmers were hit by the diesel shortage, which impacted their productivity.
     
    4.     Event P causes Event Q; and Event Q causes Event P (bidirectional or cyclic causation)
    When more people invest in the stock market, the market indices go up, which in turn makes more people invest.
     
    5.     Event P causes Event R, which in turn causes Event Q (indirect causation)
    Longer hours of work result in consuming more junk food, which in turn causes obesity. So we cannot generalize the expectation that obesity can be reduced simply by reducing long work hours.
     
    6.     In reality there is no connection between Event P and Event Q; it is a spurious correlation
    We may find a correlation between the number of cellphone users in India and the number of women joining yoga classes in the UK. Practically, they are not related; hence it is a case of spurious correlation.
     
    The above scenarios and examples bring out the fact that while a correlation exercise is a tool that can help us eliminate certain suspected causes, it may not help us ascertain a real cause unless we have a good understanding of the events and processes under study and of the underlying logical or scientific possibilities of a relationship. Many a time, the relationship indicated by a statistical correlation has to be validated by other tools or trials before we establish the cause-effect relationship.
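    As a minimal simulation sketch of scenario 3 (all numbers hypothetical), the snippet below generates two variables, P and Q, that never influence each other, yet show a strong correlation simply because both respond to a common lurking variable Z:

    # Minimal simulation sketch: a common cause Z drives both P and Q,
    # producing a strong P-Q correlation without any causal link between them.
    import random
    from statistics import correlation   # available in Python 3.10+

    random.seed(1)
    z = [random.gauss(0, 1) for _ in range(1000)]        # common cause (e.g., a fuel shortage index)
    p = [2.0 * zi + random.gauss(0, 0.5) for zi in z]    # P responds only to Z
    q = [-1.5 * zi + random.gauss(0, 0.5) for zi in z]   # Q responds only to Z

    print(f"corr(P, Q) = {correlation(p, q):.2f}")       # strongly negative despite no P -> Q or Q -> P link

    Reading the strong coefficient here as P causing Q (or vice versa) would be exactly the mistake the scenarios above warn against.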
  22. Venugopal R's post in VOC, Voice of customer was marked as the answer   
    Voice of Customer is an important input while we work on understanding customer requirements and translating them into design and process requirements. VOC is an essential input in preparing the HOQ (House of Quality) as part of the QFD (Quality Function Deployment) exercise.
    To illustrate certain situations where overemphasis on VOC may not be practical or desirable, the following examples are discussed:
    1.     Varying or even contradicting customer preferences:
    Consider the case of designing the features for a consumer durable product, e.g. a washing machine. One of the varying preferences could be the colour combinations on the machine; another could be the capacity requirements. While the range of preferences may be addressed to a certain extent in the design and product mix, it has to be limited to a meaningful scope that will ensure return on investment.
    2.     Feasibility, cost and technological competency:
    There would be times when the expectations in the VOC may not be reachable, considering a gap in technological and process competency that may prove too large to bridge within the given span of time and investment capacity. For e.g., if a normal passenger car manufacturer gets a requirement for a racing car, it may not be a requirement that they would practically want to attempt.
    3.     Statutory and safety standards:
    Irrespective of the VOC, certain statutory and safety-related requirements and regulations have to be complied with. In the overall interest of compliance with regulatory requirements, companies may have to override customer expectations. In such cases the company should have the ability to explain and educate the customer on such decisions. E.g., containing engine performance in an automobile to meet pollution norms, or exercising adequate pre-conditions for withdrawing money from a bank account, which might result in exceeding the expected waiting time.
    4.     Innovative Products:
    Innovative products could be a game changer for consumers, where the design has to be conceived differently from the prevailing VOC. For e.g., the concept of smart phones emerged based on the innovation by Apple. Here, there is always a risk in straying away from the existing VOC, but if it succeeds, the payoffs could be tremendous.
    5.     Customers rely on supplier expertise:
    For certain products, customers might rely on a popular brand that has been in existence and proven for long. Many a time the customer may not be knowledgeable about all the features and would rely on the branded product, which could offer features beyond the expectations of a normal customer. This is a case where the company provides features that were not even known to the customer. This would apply to various consumer durable products, electronic appliances, cars etc. and could result in customer delight.
  23. Venugopal R's post in Correction, Corrective Action and Preventive Action was marked as the answer   
    Correction is a reaction to a problem or undesired situation, providing an immediate remedy to save the situation. We need to put the "fire" out immediately to prevent further damage. The correction gives no guarantee of fixing the cause of the problem, and hence recurrence of the problem is very likely.
     
    Correction may also be referred to as "damage control". Once it is performed, we need to immediately attend to the corrective action that has to be taken to avoid recurrence of the problem. Identifying an effective corrective action will be possible only if we know the right cause of the problem.
     
    For instance, if a fire broke out due to an electric fault, we may trace the cause as the use of inferior cables whose insulation resistance was not up to the required standard. Then, the corrective action would be to replace all the cables with the right insulation standard. However, the same action of determining the right insulation material in the first place could have prevented the fire altogether. Thus, the same action, which is termed as "Corrective action",  could have become a "Preventive Action", had it been done pro-actively at the right time, in this case while selecting the cables.
     
    It would have been a complete preventive action if an FMEA had been done to identify all the possible causes of a potential fire and the right precautions taken, be it on the design, materials or controls. Any of these actions, if done after a fire breaks out, will only become a corrective action or, at best, if the lessons are carried over, a preventive action for future installations.
     
    The below table gives a few more examples of failures, with the typical correction and corrective actions. The preventive action, that could have prevented the failure, is also given in the last column. It may be noted that the preventive actions are described as "should have been".
     

     
    It is hard to think of a situation where we need only correction and no corrective or preventive action. However, for a new innovation, where we would be exploring something completely new with very limited background information and limited knowledge about failure modes, we are left with no option other than to keep proceeding and attend to any failures that crop up. Even then, there would be learning from the failure, and a causal analysis can lead to corrective action for similar situations.
     
    Another thought that crosses my mind, based on our understanding of the types of variation: if a failure happens due to a common cause, where the cause may not be easily controllable, it may be debatable whether it is worthwhile taking a corrective action that might not prove cost effective. For instance, take the case of missing baggage at an airport. If the rate of missing baggage is very, very low, it may be better to do a correction on the failed instances, which would involve effort in tracing the baggage, re-transporting it, payment of compensation etc., as against implementing a systemic corrective action.
     
    However, such effort vs pay-off decisions will have to be taken depending upon factors such as the criticality of the failures, safety implications and impact on business credibility.
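    As a minimal sketch of such an effort vs pay-off comparison (all figures purely hypothetical), one could weigh the expected annual cost of simply correcting each failure against the cost of a systemic corrective action:

    # Minimal sketch with hypothetical figures: correction-only cost vs a systemic fix.

    failure_rate = 0.0005                 # fraction of bags mishandled (hypothetical)
    volume_per_year = 2_000_000           # bags handled per year (hypothetical)
    cost_per_correction = 150.0           # tracing, re-transport, compensation per incident (hypothetical)
    corrective_action_cost = 400_000.0    # one-time systemic corrective action (hypothetical)
    residual_rate_after_fix = 0.0001      # expected failure rate after the fix (hypothetical)

    annual_cost_correction_only = failure_rate * volume_per_year * cost_per_correction
    annual_cost_after_fix = residual_rate_after_fix * volume_per_year * cost_per_correction
    annual_saving = annual_cost_correction_only - annual_cost_after_fix
    payback_years = corrective_action_cost / annual_saving

    print(f"Annual cost of correction only: {annual_cost_correction_only:,.0f}")
    print(f"Annual saving after the fix   : {annual_saving:,.0f}")
    print(f"Payback period for the fix    : {payback_years:.1f} years")

    This captures only the monetary side; the criticality, safety and credibility factors mentioned above can easily override a purely financial payback calculation.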
  24. Venugopal R's post in Autonomation vs Automation, Jidoka was marked as the answer   
    Autonomation (or Jidoka)
    Autonomation (or Jidoka) is referred to as “automation with a human touch”. Autonomation refers to the automation of tasks that are mundane or important from a safety or quality point of view, but still require human attention. The idea is to minimize operator intervention: a problem gets automatically detected, the machine (or process) stops, and the operator is alerted to take action. The expectation is that the action will involve not just restoring the process, but also performing a root cause analysis. This helps to prevent a quality or safety problem and, at the same time, improve productivity with less manpower deployed.
     
    Autonomation vs Automation
    Shigeo Shingo says there are 23 stages between a fully manual process and a fully automated process. A fully automated machine must be able to detect and correct its own operating problems, which may not always be cost effective. However, a good portion of the benefits of automation can be obtained through autonomation at a much lower cost.
     
    Autonomation in modern day context
    Apart from the production floor, the Jidoka principle is used in many applications in the modern-day context. A few day-to-day examples are as below:
    - When you log in to a bank account, if you repeatedly enter the login details wrongly, the account gets locked.
    - The electrical safety trips used in our homes trip if they sense excess flow of current, thus providing safety and protection of equipment.
    - An elevator will not move if the load exceeds the limit; it will also give a warning beep.
    - In piped domestic gas supply systems, in case of a leak, excess flow is sensed and the valve closes supply to the suspected line.
    - In the event of a drop in cabin pressure in an airplane, the oxygen masks automatically drop down for the passengers’ use.
    In all the above examples, even though an automatic sensing and protective action happens, it has to be attended to by a human, who must also take up the necessary corrective action as applicable.
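    As a minimal illustrative sketch (not tied to any specific product or system), the snippet below mimics the Jidoka pattern of detect, stop and alert, deliberately leaving the investigation to a human:

    # Minimal sketch: a jidoka-style guard that stops processing on the first
    # out-of-spec reading and alerts a human, instead of silently continuing.

    LOWER_SPEC, UPPER_SPEC = 9.8, 10.2       # hypothetical specification limits

    def process(value):
        # Placeholder for the normal value-adding step
        pass

    def run_line(readings):
        for i, value in enumerate(readings, start=1):
            if not (LOWER_SPEC <= value <= UPPER_SPEC):
                # Detect -> stop -> alert: the operator investigates the root cause
                print(f"ALERT: reading {i} = {value} is out of spec; line stopped for investigation")
                return False                 # the 'stop' in jidoka
            process(value)
        return True

    run_line([10.0, 10.1, 9.9, 10.4, 10.0])  # halts and alerts at the 4th reading

    The key design choice is that the code does not try to 'fix' the deviation itself; it halts and escalates, which is what distinguishes autonomation from full automation.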
  25. Venugopal R's post in 5S was marked as the answer   
    5S is an orderly discipline applicable to any process in life, even though it has been popularized from a manufacturing background and most of us tend to relate to it accordingly.
     
    The below table gives a brief description of how it could apply to different non-manufacturing situations, viz. a pharmacy, a dentist, a fast food restaurant, airplane boarding and training. All of us will certainly realize that good practices such as these and more are being applied in situations like those below and many others, whether they are recognized as "5S" or not. In all these situations, it is obvious that the efforts are towards improving efficiency and effectiveness, which translate to dollar savings in terms of revenue maximization and quality of delivery.
     

     