Ram Kumar Chaudhary

About Ram Kumar Chaudhary

  1. A Venn diagram is a visual method (commonly used in set theory) for building perspective about a problem or its causes. It enables developing an appropriate solution by breaking the whole situation down into smaller, distinct components. Hypothetical example scenarios:
     1. Fatal error in transaction processing: the analyst processing the transaction is new, the transaction type picked is complex, and the team leader is on leave. When these three situations come together, there is a high probability of a fatal error. The intersection of the three circles is the red zone, so such a situation should be mitigated. A Venn diagram provides a simple way of communicating such combinations of situations to operations staff as a watch-out.
     2. Not meeting our contractual productivity improvement commitment for a client. When probed further, based on experience, the following components surfaced: a) Delivery not meeting basic SLAs (hence their focus was on improving SLA performance), b) Process Excellence not having a plan (engaging with Delivery on an ad hoc basis), and c) the client team changing priorities frequently. This resulted in multiple initiatives, none crossing the finishing line.
     3. Looking at which subset is causing the maximum issue, for example call quality not meeting client expectations. Similar to a Pareto analysis, look at the sub-parts of the call quality form where the team is not able to meet standards. The form had 5 parts; the team was meeting or exceeding expectations on 4 parts and significantly missing on one (customer experience) due to dead air and hold time. To resolve calls accurately, team members were validating with SMEs/team leads, hence putting customers on hold frequently and causing a significant dip in customer experience.
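The red-zone intersection in the first scenario can be sketched directly with Python sets; the transaction IDs below are hypothetical, purely to illustrate the set-theory view behind the diagram.

```python
# Hypothetical transaction IDs exposed to each risk factor
new_analyst = {"T1", "T2", "T3", "T4"}
complex_transaction = {"T3", "T4", "T5"}
team_leader_on_leave = {"T2", "T3", "T6"}

# The intersection of all three circles is the "red zone"
red_zone = new_analyst & complex_transaction & team_leader_on_leave
print(red_zone)  # transactions where all three risk factors coincide: {'T3'}
```

Pairwise intersections (two circles overlapping) can be examined the same way to spot lower-severity amber zones.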
  2. Brainstorming and Six Thinking Hats are both creative techniques for generating ideas and solutions to a defined problem. They are similar, yet each is distinct in its own way. The Six Thinking Hats technique was developed later, in the 1980s, so it aims to address some of the shortcomings of traditional brainstorming (developed in the 1940s) and make the creativity session more productive. Both techniques are group based. Here is a brief on why Six Thinking Hats may have certain advantages over traditional brainstorming:
     1. Both leverage a group to generate ideas; however, traditional brainstorming does not address the fact that people in the group have different styles of thinking, and this difference needs to be managed well to make the session productive. While both techniques use facilitators, Six Thinking Hats takes the varied thinking approaches into account and channels them productively. The facilitator needs to understand and use the hats accordingly to bring in the appropriate perspectives.
     2. The initial stage of a session is similar in both; where Six Thinking Hats scores over brainstorming is the middle part of the session, a period of high volatility. Using the six hats, discussions can be made more productive, confrontation reduced, inhibition diluted, and multiple aspects covered holistically. The process is more disciplined in Six Thinking Hats.
     3. In the Six Thinking Hats approach, each idea is viewed through multiple lenses in a structured way, so the output quality is usually more refined and comprehensive than in traditional brainstorming.
  3. There are 3 main components of OEE: 1. Availability, 2. Performance, 3. Quality. OEE is computed as Availability * Performance * Quality, and there are sub-parts to each of the three components.
     Why is it not possible to achieve 100% OEE? 100% OEE implies 100% availability, 100% performance and 100% quality.
     1. 100% availability means machines run as per the planned schedule without interruption (e.g., unplanned stoppages, changeovers). While unplanned stoppages can be minimized via preventive maintenance, they cannot be completely eliminated: there are multiple variables at play that can result in an unplanned outage, and not all of these factors can be predicted or factored in ahead of time. It is also difficult to bring changeovers down to negligible levels, since market dynamics will require product variants, and variants need changeovers.
     2. 100% performance means machines run at an optimal level with ideal yield. This necessitates no deterioration or wear and tear of the machine over its lifetime via application of TPM principles. This is a desired but not a practical state, since performance is bound to deteriorate with time, or the organization must spend extraordinary effort on TPM or on quick replacement of machines, which is financially not viable.
     3. 100% quality is first-time yield, which can be aimed or aspired for but is rarely sustained in practice.
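The multiplicative nature of the OEE formula can be illustrated with a minimal Python sketch; the component figures below are invented, not taken from the answer, and show how even strong individual components compound to an OEE well below 100%.

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness = Availability * Performance * Quality."""
    return availability * performance * quality

# Illustrative values: 90% availability, 95% performance, 98% first-pass quality
score = oee(0.90, 0.95, 0.98)
print(f"OEE = {score:.1%}")  # OEE = 83.8%
```

Because the three factors multiply, a shortfall in any one of them drags the overall figure down, which is why world-class OEE benchmarks sit around 85% rather than 100%.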
  4. Visual Factory is a powerful concept from manufacturing with a strong business case for service operations as well (e.g., back office operations where work happens on computing machines/networks, hospitality, hospitals, restaurants, airports). The application areas for visual management across industry sectors are many. Let me take an example from back office operations to illustrate what information should be communicated and which tools are used.
     What type of information should be communicated using a visual factory? It will vary from one back office setup to another depending upon the problem areas we are trying to address or the objective we aim to achieve. However, some generic dimensions to consider are:
     1. Controllable X's (inputs to the process): operations where there is inherently large variation in the mix, quality or timeliness of inputs, which needs to be governed rigorously. Example: number of call center reps available to take calls in any hour; timely submission of a payment batch for payment processing; pre-check of documents for loan eligibility before sending for underwriting.
     2. Output monitoring (different characteristics of output): operations where it is possible to remediate the situation and reduce the impact of an adverse event or error present a strong case for continuously monitoring output. Example: number of calls getting dropped in a contact center; revalidation of high-value payments for any fraudulent event.
     3. Do's and don'ts (visual/safety aids for operators): continuous reinforcement of do's and don'ts to reduce the possibility of an error or risk event. Example: a list of entities/accounts that must be immediately notified if any transaction is observed in them while executing the day-to-day process.
     4. Standards/goals: ongoing communication of the standards, goals and objectives of the operation to maintain service standards. Example: first-time resolution over individual productivity, to ensure service representatives prioritize resolving the issue rather than rushing through the process to close the case.
     5. Job aids: day-to-day aids for operators to refer to while executing their process. Example: verifications to be performed while approving a commercial mortgage loan application.
     6. Uncontrollable X's (demand): volumes in a back office operation.
     What are some common tools used for a visual factory in the back office? Principles leveraged: 5S, mistake proofing (detective and preventive controls), and statistical process control.
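One of the tools named above, statistical process control, can be sketched minimally for the output-monitoring case: derive 3-sigma control limits from a stable baseline period, then flag hours whose dropped-call counts breach them. All numbers here are invented for illustration.

```python
import statistics

# Hypothetical hourly dropped-call counts from a stable baseline period
baseline = [4, 5, 3, 6, 5, 4, 5, 3, 4, 6]

mean = statistics.mean(baseline)
sigma = statistics.pstdev(baseline)
ucl = mean + 3 * sigma            # upper control limit
lcl = max(0.0, mean - 3 * sigma)  # counts cannot go below zero

# New hours to check against the visual board's limits
new_hours = [5, 14, 4]
alerts = [x for x in new_hours if x > ucl or x < lcl]
print(alerts)  # hours breaching the control limits: [14]
```

On a visual board this becomes a run chart with the UCL/LCL drawn in, so any operator can see at a glance when the process has left its normal operating band.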
  5. The Code of Hammurabi is a set of 282 laws, among the first ever written, issued around 1750 BC by the king of Babylon. These laws were codified to drive discipline, equality and fairness and to resolve disputes as civilization started to urbanize. The laws ranged from moderate to extreme when compared with current law for a civil society; they were also an early start of evidence- and witness-based judiciary. Some ways in which the Code of Hammurabi can be adapted to current organizational culture to drive or correct behavior (in some shape and form this is already happening):
     1. Balancing the equation between Sales and Delivery: Sales needs to balance commitments to customers against the current capability of Delivery and the cost structure. Example: linking deal commissions to the actual profitability of the client and length of stay.
     2. Aligning HR policy with market reality: HR policies and practices are typically renewed or refreshed after extreme events rather than on an ongoing basis to keep pace with market conditions. Example: linking delivery performance into the HR function's balanced scorecard to ensure alignment of HR policy with Delivery, such as matching entry-level talent criteria to the talent required by Delivery for projects.
     3. Driving a culture of honoring commitments: many leaders and colleagues make commitments during the normal course of their work day, and some of these roll up into larger commitments, for example while building or proposing solutions. We typically detach after committing, and when we re-engage at the delivery stage, colleagues start backing away from or adding caveats to their earlier commitments. This can be addressed via a Say/Do ratio metric.
     4. Being responsive: organization structure, bureaucracy, hierarchy and culture slow down the responsiveness of the organization, especially in support functions like IT, Finance and HR. There could be an Uber-style rating of employees on their responsiveness; when the same employees reach out for assistance, they get prioritized based on their rating.
  6. The objective of Service 4.0 is to provide unparalleled customer experience; its salient features derive from that same objective: reduce customer effort, improve customer convenience, increase the presence and provisioning of services, reduce cost, and personalize services. Multiple aspects of services and service delivery are being transformed via technology, and in totality this is being termed Service 4.0. Below is a directional, non-comprehensive list of technologies enabling Service 4.0:
     - Big data: the vast volume of data generated via different mediums gives the service provider a richer perspective on the customer. Example: social media posts showing interest in specific pictures can trigger a vacation offer, or recent card transaction history at a hospital can trigger an insurance call.
     - Internet of Things/services: wearable gadgets monitoring your health by generating data on your health parameters, resulting in proactive health management.
     - Robotic/intelligent automation: reducing service effort, time to serve and cost to serve.
     - Cloud computing: ubiquity of applications and data, reduced cost of storage and subscription, with the ability to scale up and down in a short span.
     - Augmented reality: transforming the level of assistance available to a service operator or customer performing a task (e.g., new house design).
     - Virtualization: reducing dependency on physical presence (e.g., virtual meetings or halo rooms providing a near in-person experience).
  7. "Paralysis by analysis" and "extinct by instinct" both relate to extreme corporate behaviors in the decision-making process, and both induce significant risk for the corporation where they exist. Paralysis by analysis implies dead-slow decision making, leading to inaction or no response to a situation (e.g., a threat or opportunity) by the corporation or its employees. Extinct by instinct implies on-the-feet decision making by employees or leaders relying heavily on gut feel or intuition; such decisions can come at the cost of inducing significant risk (e.g., betting on a stock purely on gut in a volatile market). These behaviors are driven by multiple factors, and even within one organization you might see both types of approach alongside other, more balanced ones:
     1. The leader's approach to problem solving (e.g., gut based versus fact based)
     2. The structure of the organization (e.g., concentrated versus distributed power)
     3. The market environment or reaction time available to the organization (e.g., a battalion in the middle of an adverse situation)
     4. The culture of the organization (e.g., consensus driven across the hierarchy versus top down, where one person calls the shots)
     5. The timely availability and quality of information and facts (e.g., poor data availability and trust drives one over the other)
     6. The impact of the decision or priority of the issue (e.g., an issue of significant impact, like M&A)
     Paralysis by analysis results in significantly slower response time compared to the competition, frustration in the workforce, mistrust and misalignment between functions and leaders, and wastage of organizational resources. This can be solved through execution focus, since the organization is already tuned to making decisions based on facts and data. Extinct by instinct, by contrast, reduces consistency in organizational performance, creates unpredictability in workforce motivation and engagement, and produces band-aid solutions or issues going unnoticed or hidden. This is a harder situation to solve, since it is driven by autocratic leaders, experience and expertise being valued more than anything else, and the absence of structure in the decision-making process.
  8. The commonly understood definition of instruction creep is documentation or codified knowledge used for process execution becoming overbearing and losing effectiveness. Documented process knowledge resides in the form of SOPs (standard operating procedures) in organizations. These are usually step-level instructions on how to execute a process, with day-to-day relevance to operations management, training, regulatory compliance and audits of adherence to laid-down procedures as well as best practices. Amazing, isn't it, how many different objectives and lenses there are for looking at SOPs? Can the process operator still make mistakes in executing a process task? The answer is a BIG YES. Let's decode why this happens:
     1. Over time, SOP documents become epics running into pages (from 2-3 pages to over 500+); process operators start trusting their own experience and knowledge rather than referring to bulky SOPs while executing the process. In most cases this happens due to unhealthy practices around SOP management (e.g., long gaps between SOP reviews, low-quality indexing, not removing redundant sections, delegating responsibility for SOPs to the lowest level).
     2. Poor design of SOPs (e.g., objective of the SOP, layout, content and its indexing, visual versus text-heavy, searchability within the SOP, Word/Excel documents versus a digital platform). SOPs are mostly designed to cover content comprehensively to ensure compliance rather than usability. Process operators find it difficult to search for information in SOPs when they need it most, when encountering a new situation or scenario. They struggle to find the information and make mistakes.
     3. Delayed updating of SOPs: in a high-demand-variation environment, SOPs usually don't keep pace with the changing nature of the process. The lack of timely updates results in poor knowledge dissemination and mistakes when process operators encounter an exceptional scenario.
     4. Lack of monitoring discipline on SOP compliance: organizations, especially in the ITeS space, usually lack a mechanism to monitor operator adherence to SOPs beyond sampled transactions. Few organizations track when a process operator last read the SOP, resulting in dilution of SOP adherence and subsequent mistakes.
     5. Over-leveraged staff: a large variety of processes with significant demand variability leads to cross-skilling of process operators across multiple processes they usually don't execute in the normal course of a day. In special events, when they are assigned a process on which they are merely cross-skilled, they often end up making mistakes due to nuances in the process.
     How can process owners avoid instruction creep in SOPs?
     1. Process owners should be directly engaged in the review of SOPs at least once a quarter.
     2. Keep only the relevant, current process in the SOP and move historic process content to an appendix or archive.
     3. Improve the UX of SOPs via digital platforms for quick review, updating and dissemination, with an SOP workflow.
     4. Make SOPs visual or voice enabled rather than textual.
     5. Track the usage of SOPs; as usage improves, process operators will ensure SOPs stay live and relevant.
  9. There are varied forms of reporting bias depending on the context, motivations and objectives of the provider and the consumers of a report, and they impact the quality of the decision that is supposed to be taken based on the information shared. In an organizational context, here are various types of reporting bias:
     1. Over-reporting: consciously providing an over-optimistic view, e.g., marketing on the potential of a certain campaign, or R&D on the market potential or performance of a certain product.
     2. Under-reporting: consciously providing a pessimistic view to influence certain decisions, e.g., the sales pipeline (and hence the sales target), or the capacity or timelines required to execute a project.
     3. Delayed reporting: consciously delaying the information flow upwards or downwards to manage the impact of the information (e.g., issues in operations).
     4. Convenience reporting: reporting driven by the availability of information, to solve for immediate information needs.
     5. Confirmation reporting: consciously driving data collection and subsequent analysis to prove or disprove an already-laid hypothesis as per individual preference.
     Such situations can be tackled via the following approaches:
     1. An independent agency or setup for data collection (e.g., companies engage independent survey agencies to measure their CSAT).
     2. Deploying scientific sampling techniques: increasing the sample size, increasing the duration of sample collection, and ensuring sampling representative of the situation on the ground.
     3. Cross-validating the analysis in reports via a triangulation mechanism (e.g., sales increased: did inventory reduce or production increase, did invoicing happen on time, was cash collected from customers, and was there any spike in returns?).
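The scientific-sampling countermeasure can be sketched with the Python standard library: simple random sampling over the whole population removes the reporter's discretion over which records get counted, the root of convenience reporting. The transaction records here are made up for illustration.

```python
import random

# Hypothetical population of transactions tagged by processing site
population = [{"id": i, "site": "B" if i % 3 == 0 else "A"} for i in range(300)]

random.seed(42)  # fixed seed only to make the illustration reproducible
sample = random.sample(population, k=30)  # every record equally likely

share_site_b = sum(r["site"] == "B" for r in sample) / len(sample)
print(f"Site B share in sample: {share_site_b:.0%} (population share: 33%)")
```

With a hand-picked (convenience) sample, nothing stops the reporter from drawing only from the site that looks better; random sampling makes the sample's composition track the population's in expectation.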
  10. Leveraging Monte Carlo simulation to reduce risk in the back office operations of financial institutions. Category of risk: operational (non-IT related), resulting in operational or reputational loss to the client.
     1. Back office operations of financial institutions are vulnerable to multiple risks induced by the manual interventions required to complete transactions. Example: transaction details shared with the wrong counterparty (data breach); a maker-checker miss (resulting in an incorrect payment); delayed processing of a transaction (resulting in interest charges). While these events may be few, their impact is high. Mostly, manual controls are put in place in operations for want of funds to arrest such events. Monte Carlo simulation presents leadership, on both the provider and buyer side of financial services, with a data-driven view of possible scenarios, helping them pick opportune areas to invest in technology against the prime categories of these incidents. Usually, on the ground, RCAs are developed for specific incidents, each one is treated as a standalone event, and the case fails to draw investment on the technology front.
     2. There is an extended debate in back office operations that the profile of individuals required for a financial back office is significantly different from others, as is their learning curve on the domain. When attrition and cost pressure play out simultaneously, eroding tenure and domain knowledge, incidents spike. Talent with relevant domain knowledge is scarce; in such situations, Monte Carlo simulation can be leveraged to model a talent mix that optimizes cost while maintaining risk levels.
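The first use case can be sketched as a minimal Monte Carlo model: simulate many years of rare, high-impact incidents and read off the expected and tail annual loss. The daily incident probability and the exponential loss distribution are invented purely for illustration.

```python
import random

random.seed(7)  # fixed seed only to make the illustration reproducible

def simulate_annual_loss(daily_incident_prob=0.01, mean_loss=50_000, days=250):
    """One simulated year: each working day may produce an operational
    incident whose loss amount is drawn from an exponential distribution."""
    loss = 0.0
    for _ in range(days):
        if random.random() < daily_incident_prob:
            loss += random.expovariate(1 / mean_loss)
    return loss

years = sorted(simulate_annual_loss() for _ in range(5_000))
expected = sum(years) / len(years)       # mean annual loss
tail_99 = years[int(0.99 * len(years))]  # 99th-percentile annual loss
print(f"Expected annual loss ~ {expected:,.0f}; 99th percentile ~ {tail_99:,.0f}")
```

Leadership can then compare the tail loss against the cost of a technology control to decide whether the investment pays for itself, instead of treating each incident as a one-off RCA.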