Excellence Ambassador
mohanpb0 last won the day on February 1 2018


Profile Information

  • Name
  • Company
    M/s CMA CGM Shared Service Centre India
  • Designation
    Director (Performance Management and BCMS)



Community Answers

  1. Apart from “Cost”, there could be other measures like “Amount of rework involved”. This would depend not just on the type of error but also on the point in the process life-cycle at which the error is discovered. Cost could depend on the amount of rework involved, and can also include fines or penalties. Further, there could be factors like “Show-stopper” versus “Non-show-stopper”, with the obvious meaning that a “Show-stopper” error brings the entire process to a grinding halt and hence warrants a higher weight. But the problem with this and other error-criticality-based weights, including “Fatal”, “Critical”, “Non-critical” etc., is that of quantifying the conceptual criticality. This is tricky: while it is clear that a fatal error is more serious than a critical error, by how much is the question. Would a fatal error be rated twice as critical as a critical error? If so, why?
In back-office documentation for the container shipping process, some voyages involve declaring the contents of the containers in detail to the Customs of the destination country. Only after this declaration is cleared can the container be shipped. If any changes have to be made to the declaration after it is cleared, additional costs have to be paid. If these changes are made after a certain point of time linked to the time of sailing, an additional fine is also involved. In some cases, no changes are allowed at all, and the container has to be unloaded from the vessel and rolled over to the next voyage, resulting in delayed revenue from the customer and perhaps loss of business in future. In this situation, an error in documentation that necessitates re-declaration to Customs, additional costs and/or fines, or worse still a rollover, will carry a higher weight than other errors, even if the frequency of occurrence of such “costly” errors is lower. Where relevant, additional audits for these errors are also justified.
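The criticality-weighting idea can be sketched numerically. A minimal sketch, assuming hypothetical weight values (the 10/5/1 scale below is illustrative; choosing the actual ratio between "fatal" and "critical" is exactly the judgment call discussed above):

```python
# Hypothetical criticality weights -- the values are assumptions for
# illustration, not a prescribed scale.
WEIGHTS = {"fatal": 10, "critical": 5, "non-critical": 1}

def weighted_error_score(error_counts):
    """Weighted defect score: error_counts maps criticality class -> count."""
    return sum(WEIGHTS[cls] * n for cls, n in error_counts.items())

# A single rollover-class ("fatal") error outweighs several minor errors,
# even though the minor errors occur more frequently.
print(weighted_error_score({"fatal": 1, "non-critical": 3}))  # 13
print(weighted_error_score({"non-critical": 8}))              # 8
```

With such a score, error trends can be compared across periods on a common scale, whatever weights the team finally agrees on.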
  2. Standard Deviation indeed has a completeness when measuring dispersion, in that every data point in the set is used in the calculation. Would this feature not make it a generally better measure than others like the Range, which touches only the two extremities of the data distribution, or the Inter-Quartile Range (IQR), which touches just two other points in the distribution? Standard Deviation indicates the spread of data from the mean, and it uses every value in the data set. The Range, on the other hand, is the difference between the highest and lowest values in a data set, so it uses only two values. The IQR also uses only two values from the entire data set: IQR = Quartile 3 - Quartile 1. Furthermore, two distributions with widely different variations can have the same Range by virtue of having the same minimum and maximum values, but it is very unlikely for two such widely different distributions to have the same Standard Deviation. Standard Deviation is also not as hyper-sensitive to extremities and outliers as the Range. (It is, however, not suitable when the sample size is very small.)
So the question can have only an affirmative answer, but that alone need not make Standard Deviation the best measure of dispersion under all circumstances. For non-normal processes, the Range may be a more useful measure of dispersion than the Standard Deviation, which is at its most relevant in a Normal Distribution. When ordinal data are being dealt with, the Range could be better suited. Further, if there are open-ended class intervals at either end of a distribution, the Range may again be more appropriate. The IQR works better when there are outliers in the data, as it discards them by using IQR = Q3 - Q1. Finally, when explaining concepts of variation to a diverse audience including staff at the lower levels of the hierarchy, the Range may be easier to explain and understand.
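The contrast between the three measures can be seen in a small sketch using Python's standard statistics module (the two data sets are invented for illustration):

```python
import statistics

def dispersion(data):
    """Range, IQR and sample standard deviation of a data set."""
    q1, _, q3 = statistics.quantiles(data, n=4)  # quartile cut points
    return {
        "range": max(data) - min(data),
        "iqr": q3 - q1,
        "stdev": statistics.stdev(data),
    }

# Same minimum and maximum, hence the same Range -- but clearly different
# spread, which Standard Deviation picks up because it uses every value.
a = [10, 50, 50, 50, 50, 50, 90]   # values huddled around the centre
b = [10, 10, 10, 50, 90, 90, 90]   # values pushed towards the extremes
print(dispersion(a))
print(dispersion(b))
```

Both sets report a Range of 80, while the standard deviations differ widely, illustrating why identical Ranges can hide very different variation.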
  3. This is a very difficult one for me to answer, as I have always been a follower of the concept of quantifying all measures and criteria, including risk and its priority. To me, it is as difficult as trying to prove that the sun rises in the west. It is very easy to criticise any measure as being academic, including the Risk Priority Number. The RPN by definition already captures the most relevant features of a risk: whether it would materialise at all (Likelihood), whether it would matter if it materialised (Impact), and whether it is possible to have an advance warning (Detectability). Is there any feature of risk that is not covered in the RPN? Nothing that at least I am able to think of. Is it difficult to calculate and use the RPN? The potential problems in calculation have already been de-scoped from the question as "other than subjectivity in rating". Therefore, there does not appear to be any other problem in calculating and using the RPN for prioritising risk treatment actions. One problem, related to group dynamics when assessing risk rather than to the RPN per se, is that in a cross-functional team it may be a bit difficult to get a very diverse group to converge on a view that is converted to a rating. As a result, when a potential risk is assessed and the likelihood, impact and detectability are discussed and rated, some participants may not feel quite satisfied with the conclusion, as it is not what they had in mind or expected. Of course, not all participants would have the same or similar experience, especially in risk assessment. I can only conclude that despite my best efforts, I am unable to really fault the RPN on any account except the possible subjectivity in rating, which (thankfully) is out of scope.
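The arithmetic of the RPN is itself trivially simple, which supports the point above. A minimal sketch, where the risk names and ratings are invented for illustration:

```python
def rpn(likelihood, impact, detectability):
    """Risk Priority Number: the product of the three ratings,
    each conventionally scored on a 1-10 scale."""
    return likelihood * impact * detectability

# Hypothetical risks with (likelihood, impact, detectability) ratings.
risks = {
    "documentation error": (4, 6, 3),
    "system outage":       (2, 9, 5),
}
ranked = sorted(risks, key=lambda name: rpn(*risks[name]), reverse=True)
for name in ranked:
    print(name, rpn(*risks[name]))
```

The risk with the highest RPN is treated first; note how a low-likelihood risk can still top the list on impact and poor detectability.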
  4. In such a "Catch-22" situation, the POC (Proof of Concept) needs to be scoped judiciously. A function or business line with reduced dependencies on others (as nil is not possible) needs to be identified, along with a problem which, when solved, would deliver quantum, breakthrough improvements for that function. It needs to be clearly projected that the top management is behind the POC, which is true. The function selected for the POC and its personnel need to be made to feel special and lucky on being selected for the POC.
  5. The difference between PPM and DPMO is that in PPM a whole product (part) is the opportunity, while in some sectors like ITeS, a unit (product) of delivery may contain many opportunities. Where one of the two is a better measure partly depends on defining "better for whom": the buyer or the seller. Generally, the broader the definition of opportunity, the tougher the target. For the buyer, a tougher target always goes some way to assure higher product quality, while for the seller, an easier target means lower costs and higher profits. For example, in an automobile component company, a part that is delivered is one opportunity. But in a banking BPO, a form with (say) 12 fields represents 12 opportunities. In the former, a target of 100 PPM means a million products with not more than 100 of them being defective. In the latter, 100 DPMO means just 83,334 forms (one million fields) with not more than 100 fields being defective. Here, 100 PPM is clearly the tougher target than 100 DPMO. However, if the opportunity is redefined as a complete form, then the target is on par with the former, requiring one million forms with not more than 100 forms having errors.
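The two metrics and the 83,334-forms figure above can be checked with a few lines (the 12-field form is the example from the text):

```python
import math

def ppm(defective_units, total_units):
    """Defective parts per million units."""
    return defective_units / total_units * 1_000_000

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# 100 PPM: one million parts, at most 100 of them defective.
print(ppm(100, 1_000_000))        # 100.0

# 100 DPMO with 12 fields per form: one million opportunities
# take ceil(1_000_000 / 12) forms.
forms = math.ceil(1_000_000 / 12)
print(forms)                      # 83334
```

So at the same numeric target of 100, PPM demands a million defect-scarce units while DPMO reaches a million opportunities in far fewer forms, which is why the PPM target is the tougher one.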
  6. As the real benefit of a classroom training session is the fulfilment of the trainee's objectives, a C-SAT survey conducted after an appropriate lead time, allowing the benefits of training to kick in, would be the best choice. For example, for a Green Belt training, a survey conducted after 3 to 6 months, which is roughly the time required for initiating and completing a Green Belt project, may be the most authentic measure of satisfaction. However, as a relatively quicker measure, the satisfied (or otherwise) trainee is very likely to keep talking about the training at work, at home, on social media etc., which would result in more people getting enrolled (or otherwise) for the same or similar training programme. Therefore, NPS would rank alongside C-SAT as the first choice, the only difference being that while C-SAT is done after a lead time, NPS assessment can start soon after the training, or in some cases even after the first day of the training. As a relative contra-metric to NPS, churn, in terms of registered trainees opting out or interested trainees not registering, can be the next best measure of customer satisfaction. The next preferred measure would be CAC, as the trainee has potentially multiple options for training and it needs a well-planned, judiciously funded campaign to first reach and then convince the trainee to opt for the training programme. But this is relatively less likely to be relevant in a classroom training scenario. With the objectives of trainees in opting to attend training programmes varying from genuine career development and genuine capability improvement to being forced to attend by their management or meeting a mandatory requirement for some other educational/career option, CES may not be uniformly relevant for all situations and hence would be the last option. The preferred list, in order of preference, would be: C-SAT, NPS, Churn, CAC, CES.
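For reference, the two front-running metrics reduce to simple formulas. A sketch with invented survey data (the 1-5 C-SAT scale and the 0-10 NPS scale are the conventional ones):

```python
def csat(ratings):
    """C-SAT: percentage of respondents rating 4 or 5 on a 1-5 scale."""
    return 100 * sum(r >= 4 for r in ratings) / len(ratings)

def nps(scores):
    """NPS: % promoters (9-10) minus % detractors (0-6) on a 0-10
    'how likely are you to recommend this training?' scale."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

print(csat([5, 4, 4, 3, 5]))          # 80.0
print(nps([10, 9, 9, 8, 7, 6, 3]))
```

The NPS call illustrates why the score can be low even with mostly positive responses: passives (7-8) count for nothing, and a single detractor cancels a promoter.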
  7. There is something very different about after-sale service: if the sale had been good in terms of both the product and the information, there would not have been a need for the customer to come to the outlet again and create a need for after-sale service in the first place. Thus, as there has been some source of dissatisfaction due to which the customer has approached the after-sale service point, measuring KPI No. 2, C-SAT, may not be the best option, and so it would be a natural fifth choice. By the same logic, even expecting references from a customer forced to use after-sale service would be futile, and hence KPI No. 1, NPS, would rate fourth. Equally certainly, as customers are not sought for after-sale service at all, KPI No. 4, CAC, would end up third. Retaining a dissatisfied customer is a real challenge, and as customers who come for after-sale service can by default be expected to be dissatisfied, KPI No. 3, Churn, becomes more relevant at No. 2. On the contrary, in trying to make the best of a bad situation, if we could reduce the pain felt by the customer by minimizing the effort and running around the customer needs to make to get the problem corrected and the issues resolved, it may actually turn the situation around. Therefore, KPI No. 5, CES, would rank as the most suitable measure of customer satisfaction. My final preferred KPIs, in descending order of relevance, would be: CES, Churn, CAC, NPS and C-SAT.
  8. Sr. No. 1 - Simple method (ease of making a complaint)
Best practices already followed: Crisp and helpful notes to assist the customer; making the customer not feel guilty about complaining.
What more could be done: An app downloadable on mobile, using which complaints could be made directly.

Sr. No. 2 - Acknowledgment
Best practices already followed: Auto acknowledgement.
What more could be done: Nothing more to add, as long as it is followed by a proper response.

Sr. No. 3 - Quick response
Best practices already followed: Committed target time.
What more could be done: Nothing more to add.

Sr. No. 4 - Well-written response
Best practices already followed: Accepting the error; mentioning the refund in the beginning; explaining the action taken; thanking the complainant for the feedback and for the photographic evidence; apologizing again before signing off.
What more could be done: "Pink ladies" could be replaced with "pink lady apples". Contractions like "I'm" and "I've" could be expanded. As the mood of the customer is obvious, OCADO could have "apologized for the disappointment" rather than "any disappointment". In addition to the refund, for helping OCADO trace and correct a failure mode, the customer could be given a free e-voucher for the goods damaged, i.e. six pink lady apples, which could be collected the next time the customer visits any OCADO store.

Sr. No. 5 - No quibbles
Best practices already followed: Trusting the customer, as no further questions are asked.
What more could be done: Nothing more to add; may need to be reviewed for bigger refunds.
  9. I would like to stick to one example of a form required to be filled online following elaborate business rules. The complexity necessitated a check of at least the critical, sensitive fields, which, if mis-filled, could cause fatal errors. Very soon, the usual pressure of targets over-ruled "old-fashioned" quality intents, and staff started submitting forms without even a cursory glance at what they had hurriedly keyed in. The Quality team dutifully created a hard-copy checklist to be filled in for every form, which was supposed to make the inputter check each entry before ticking the relevant item in the list. Some sanity was restored and quality stabilized, albeit temporarily. But with the characteristic vigor in beating the system which Operations usually displays, the inputters continued their "Rajdhani"-speed data entry while filling in all the checklists at the end of the shift, long after submitting the forms and committing the erroneous information to the customer. The Quality team got wise to this when customer complaints resumed, and stepped in with random audits at different points of time during the shift, covering various combinations of transaction types, time of day, day of the week, staff and so on. The auditor would pick up completed transactions at random and demand the checklist; if the checklist was not available, the staff could face disciplinary action. Again there was a brief lull in customer complaints, which was all too short-lived as the inputters began to take calculated risks. The Quality team then made the checklist online instead of hard copy, which was supposed to ease the job of filling it in, but that proved to be mere wishful thinking. Finally, technology was brought in which prevented the inputter from submitting the form unless the checklist was filled in and submitted. This was effective, but Operations still complained about reduced productivity and rising costs.
More advanced technology was introduced which used business rules to fill in all rule-driven fields and left only those fields which required thinking and judgment to be manually filled in. The Inputters were now given other fancy designations like, "Validators" or "Integrators" and were sufficiently self-motivated to do their own checks of the manually entered fields, which improved Quality.
  10. That the “Normal” in “Normal Distribution” or “Normal Data” means “Natural” rather than the other dictionary meanings like, “Ordinary” or “Typical” or “Regular” or “Usual” or “Standard” would become clearer if the origin of the so called “Normal” distribution is traced. Somewhere in the 18th century C.E. a group of mathematicians and scientists in France were trying for a long time to make sense of a peculiar data distribution they had come across. They realized that one value was occurring most often and also that the other values lesser than and higher than this most frequently occurring value occurred at a progressively lesser frequency. In other words, as the value decreased from the most frequently occurring value, the frequency decreased and as the value increased more than the most frequently occurring value, the frequency again decreased. After a few days of research and discussion, they could not come to a conclusion and decided to take a walk in fresh air to clear their heads. They came to an orange orchard which was full of trees bearing ripe oranges. Unable to resist the temptation, they plucked a few oranges and began to enjoy nature’s bounty. One of the group who was still thinking of the data, started to keep a tally of the number of seeds in various oranges he ate, using a twig for a pencil and the mud as a note book. To his surprise, he found that the seeds in the oranges followed a distribution similar to what they were breaking their heads about in their lab for the last few days. Quickly, he brought this to the notice of others who soon confirmed the similarity of the data distributions. It struck them that perhaps this distribution could be something that occurred naturally. They tested this theory with certain other naturally occurring parameters and concluded that their theory was indeed correct. For reasons best known to themselves, they chose to name this distribution, “Normal” meaning that such a distribution occurred naturally. 
Or perhaps the original French name given was translated as "Normal". Whatever the reason, it is now accepted that the "Normal" distribution occurs naturally in many physical, social and biological processes. Therefore, if such measurements are made in a truly random manner, the data collected are expected to be naturally normal. Many a time, an apparently non-normal distribution, when investigated, reveals some man-made cause, like blending two distinct groups into one, or skewed, non-random sampling, and so on. Apart from the usually quoted examples of normality, like the heights and weights of people from a randomly constituted group, even product characteristics from a machine or a cell with untampered settings and without any technological restrictions can be expected to be normal, not just in physical characteristics like length, diameter etc. but also in functional characteristics like strength, power, torque and so on. Additionally, medical parameters like blood pressure, blood sugar etc. are also expected to be normally distributed. In the above situations, any distinctly non-normal distribution would need to be treated as unusual.
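The claim that randomly collected natural measurements tend to come out normal has a mathematical backbone in the Central Limit Theorem: a quantity that is the sum of many small independent effects tends towards a bell curve. A quick standard-library sketch (the 12-uniform sum is a classic illustration, not a model of any real process):

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Each sample is the sum of 12 independent uniform(0, 1) "small effects";
# such sums have mean 6, standard deviation 1, and a near-normal shape.
samples = [sum(random.random() for _ in range(12)) for _ in range(10_000)]

mean = statistics.fmean(samples)
sd = statistics.stdev(samples)
within_1sd = sum(abs(x - mean) <= sd for x in samples) / len(samples)

print(round(mean, 2), round(sd, 2))   # close to 6.0 and 1.0
print(round(within_1sd, 2))           # close to 0.68, as a normal curve predicts
```

About 68% of the samples fall within one standard deviation of the mean, matching the familiar normal-curve rule, even though no individual ingredient was normal.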
  11. YES
"Captaincy is 90% luck and 10% skill. But don't try it without that 10%." - Richie Benaud, former Australian Test all-rounder, cricket commentator and author

Priyer's argument: It is necessary to have Lean Six Sigma skills and awareness, but executing a DMAIC or DMADV project is not necessary. The person can also come from a Lean background and apply the experience derived from following the methodology cycle used for Lean projects (12-week or 16-week cycles, or the A3 approach) to be successful in an improvement manager role.
Response: How can a manager without the experience of a complete project add value to his reports and the Six Sigma project team? Even if he is sincere and committed, the inability to add value will show up the manager.

Kavitha Sundar's argument: As per the previous question, if the organisation does not allow the BB/GB to do a DMAIC project, he is not given the opportunity to prove his skill set. In such cases, he can only refresh his knowledge, simulate a project experience and move on to the next firm if there is an opportunity. Hence the BB/GB should be ready to take up a project and prove his skill set at any point in time, even without an opportunity. He can also use the Lean approach instead of DMAIC/DMADV. So not having applied the DMAIC/DMADV methodology in a project should not be a roadblock for his career growth.
Response: This is a different question, and nothing is mentioned about the organisation not allowing a project. Simulation is no substitute for the actual experience. If the improvement manager restricts himself only to Lean projects, many improvement opportunities could be lost. By not doing any Six Sigma projects, his career would not develop and the organization would lose out on improvements.

Togy Jose's argument: I would have preferred to say yes, but there isn't a lot of adoption of Lean Six Sigma methodologies by organizations, which means that even if an individual has put in a lot of effort in getting certified, he/she may not get to work on a project because the organization is not interested. So, for selecting a BB, an MBB or a highly experienced BB should have an in-depth conversation with the candidate to check for conceptual clarity, aptitude, functional experience, maturity level and so on. Given that in some industries (e.g. consulting) LSS is not encouraged but there are enough high-quality profiles, would we want to be limited by this requirement? Even if someone were to claim having done a project end to end, there is no way to review the data and verify the findings on account of confidentiality, so we would limit ourselves to checking conceptual clarity anyway. Not every LSS intervention needs to be a project: even a well-timed and well-documented FMEA can help with prioritised corrective action, and a well-documented QFD can help with a well-structured design process. So a certified BB who has done a lot of standalone interventions deserves a chance.
Response: Standards cannot be diluted just because organizations are not following the Six Sigma methodology. The profile would not be that high quality if there were no Six Sigma project experience, and this cannot be a reason for leaving out the requirement. It is possible to understand the truth by repeated questioning. Agreed that not every LSS intervention needs to be a project; tools were, are and will be used by themselves. But without Six Sigma project experience, the main benefits of the Six Sigma methodology are lost.

Atul Dev's argument: Completion of a Six Sigma project depends on opportunity.
Response: That is fine, but not having had an opportunity still remains a drawback.

Alex Fernandes's argument: Completion of a full-fledged DMAIC or DMADV project should not be an essential criterion for hiring a Lean Six Sigma Black Belt professional in an improvement manager job role, and this is one heuristic that the fraternity needs to change. It is important for the interviewer to assess the candidate's knowledge, and project completion is a good source to gauge from, but this is not always true. Reasons: 1. The genuineness of projects cannot always be verified. 2. The success of the project cannot be established. 3. The level of participation of the candidate in the project cannot always be checked. In fact, projects could be misleading and could give an upper edge to undeserving candidates. It is important for a candidate to know and apply Six Sigma tools and techniques, and that could be established even without a full-fledged project.
Response: Leading interview questions can reveal the truth; the same applies to all three reasons.

Phani Kumar N's argument: Since Six Sigma is an approach for process improvement, a person experienced in the area of operation who acquires the skills and techniques to be implemented for process improvement can be a better pick than a person who has handled improvement projects in other areas. However, nowadays having done projects has become a pre-requisite for handling the Business Excellence function when it comes to hiring by organizations. I see this as a drawback in the hiring process.
Response: The USP of the Six Sigma methodology is its completeness of approach. Expertise in this completeness is acquired only by completing projects. All other related skills cannot make up for a lack of experience in completing a sufficient number of projects in various roles, like member, leader, mentor and so on.

Rajesh Chakrabarty's argument: I do agree on the point about a person having experience in the area of operations acquiring skill and technique. This person can definitely be of great help to the project lead/improvement manager, especially for FMEA.
Response: The experience is never complete without the project.

Nazim's argument: Because identifying improvements is not dependent on experience of DMAIC and DMADV, the candidate should be aware of how to find the areas of improvement and drive business benefit out of them.
Response: Unless the candidate is experienced in the various nuances of the methodology, which are best learnt by doing different types of projects, the person will never be sufficiently aware.

Arunesh Ramalingam's argument: In my opinion it should be a "good to have" requirement. I strongly feel the following two aspects should be given more importance: 1. The professional's familiarity with and understanding of Lean Six Sigma concepts and his attitude/thought process towards the concept of continuous improvement; this would indicate if the person would be able to identify, initiate and promote improvement activities. 2. The professional's overall job experience; this would highlight his skills related to working in a team, leading projects, communicating with the management, handling conflicts and so on, which are critical for executing any Six Sigma project. I would agree that a person with prior project execution experience may be more familiar with all the aspects of project execution, but he may not essentially be a keen promoter of a continuous improvement culture. Also, the ambience under which he completed the projects is an unknown factor; for example, there could have been a high level of support from the management and his team, enabling him to complete the projects. On the other hand, a professional with the right attitude and skills may turn out to be a better option (albeit with some mentoring or a learning gap). The two points above could be evaluated with well-drafted, detailed interview questions involving case study analysis and presentations. Completion of a full-fledged DMAIC or DMADV project should be a "good to have" requirement, and making it an essential criterion may not be the right thing to do.
Response: Knowing Arunesh's performance in the competitions, the first reaction on the lighter side would be, "Et tu, Brute" :-) The question is only about the essentiality of project experience and has nothing to do with attitude. A person's ability to identify, initiate and promote improvement activities will be complete only with relevant project experience; merely having those skills is insufficient - what is required is using the skills effectively in projects. If the person were not a keen promoter of a continuous improvement culture, the person would not have applied for the role in the first place. The same reasoning applies to overall job experience. As mentioned above, the candidate's likelihood of success lies in understanding the full potential of the Six Sigma methodology, which is best achieved by performing different types of projects.
  12. Organizations with a full-fledged LSS program can budget for a part of their projects to be executed jointly by their own staff and freelancers who may not have such opportunities within their own organizations. The freelancers' roles would be restricted to data analysis and relatively offline work, as the information security of these organizations needs to be respected. Another option would be to use a tool to simulate a project and solve it in the tool itself. Yet another possibility could be solving small issues outside work, e.g. reaching the place of work on time every day without arriving too early.
  13. "Efficiency" measures the use of inputs for realizing maximum output; even a wasteful process can be performed efficiently. But nothing is more inefficient than doing efficiently that which should not be done at all. Effectiveness brings into focus the purpose or objective of the process, ensuring that it is fruitful in the larger scheme of things. The best example of a process that could be efficient without being effective is a "Corrections" or "Repair" process: it may use resources efficiently, but it will be effective only if the intelligence it generates leads to its own redundancy.
  14. RACI (Responsible, Accountable, Consulted, Informed) is slightly older and can be traced back to the Responsibility Assignment Matrix (RAM) introduced in the early 1970s. A variation, RASCI or RASIC (Responsible, Accountable, Support, Consulted, Informed), is also used in certain organizations. Both RACI and RASCI are popular as role documentation tools. ARMI (Approver, Resource Person, Member, Interested Party) is more Six Sigma in origin; it serves as a tool to list and categorize stakeholders in a Six Sigma improvement project. I have been using only ARMI in Six Sigma projects. This could probably be because my initial Six Sigma training included ARMI, and the practice has simply continued. Additionally, having been in organizations where Six Sigma is not necessarily a way of life, there was still a need to have people volunteer, or be persuaded, to be part of a Six Sigma project team. In such situations, around a decade and a half back, there was a need to make it appear a privilege for a person's name to be associated with a Six Sigma improvement project. The tool for association needed to be both comprehensive and soft. ARMI has this characteristic in that it can boost the ego of people who are empowered as Approvers or recognized as Resource Persons. Even being a Member can appear a good opportunity, while being an Interested Party can be taken as a demonstration of the person's commitment. RACI had, or still has, a certain amount of authority in its intent and sometimes creates unpleasantness, perhaps even dread, when one finds oneself in the "Responsible" or "Accountable" cell or column. This is even more so if one has not been consulted before being made responsible or accountable for a project, activity or task, and it brings some stress to people who may be worried about being successful in their responsibilities.
While being "Consulted" certainly makes people feel good, being "Informed" sort of sidelines the person and perhaps renders the person powerless. Moreover, it is still sometimes difficult to explain the difference between Responsibility and Accountability to some people. For the above, not necessarily professional, reasons, I have preferred ARMI to RACI.
  15. It is indeed a challenge to visualise even the characteristics of a totally new product, let alone decide their specifications. But the following are some of the options available for making the best of an inherently difficult situation.
1. Benchmark on split characteristics: The product as a whole is not available in the market, as this organization is its pioneer. But various features or characteristics of the new product could be available, singly or in combinations, in other products already in the market. The specifications for these characteristics could be used as a basis for deciding the specifications of the same characteristics in the new product. The same specifications need not be blindly replicated; they can be used as a starting point with appropriate adjustments applied.
2. Simulating end-user experience: Whatever information is available from the customer on the intended end-user experience needs to be gathered. The various objectives which the product would achieve, both for the customer and for the end user, need to be assessed, or at least a fair idea of them documented. From the above, a realistic tabletop simulation of the end-user experience can be executed. When this is done, the organization goes some way towards understanding what its customer wants his customers to experience with the new product, and from this understanding, the specifications of the various features giving this experience to end users can be deduced.
3. Technical absolutes: Another option would be to go in for the best the available technology can offer and negotiate an appropriate agreement with the customer on absorbing the cost of the initial investment. After prototype production, the specifications can be scaled down from the technical maximums to suit the intents. The option of correcting specifications as the organization, the customer and the end users become more knowledgeable about the product needs to be kept alive.