
Johanan Collins


Everything posted by Johanan Collins

  1. Outliers are part of the real world and need to be investigated before the data is analyzed and interpreted. This is even more the case with small sample sizes, where outliers have a greater impact on the results. Some models, such as Principal Component Analysis, hierarchical models, K-Means, and linear and logistic regression, are very sensitive to outliers. Sometimes the detection of unusual transactions is the very aim of the analysis; such unusual transactions generally appear as outliers, as in fraud detection, stock forecasting, etc. Understanding outliers is therefore critical: they are likely to bias the entire interpretation, or they may be exactly what we are looking for.

Reasons for Outliers

Error. The error may be due to data entry, recording, gage measurement, operator measurement, calibration, sampling, or data processing.

Part of the Normal Process. Outliers may be present in the data due to bulk orders, resellers, extra-loyal customers, etc.

How to Detect Outliers?

Data Visualization. Outliers can be detected through data visualization such as box plots, scatter plots, histograms, run charts, lag plots, and line charts.

Statistical Methods. Outliers can be detected through statistical methods such as the standard deviation method, Tukey's method, etc.

What is the strategy to deal with outliers?

- Keep the outlier and carry out the test with the outliers included.
- Segment the data and carry out a deeper analysis.
- Impute the outliers and treat them separately.
- Set up a filter to run the test without the outliers. Since significant effects can be hidden by outliers, it may be appropriate to examine the results without them.
- Delete the outlier. Outliers may be deleted if there was an error in the data or if the cause of the outlier is not likely to recur.
- Delete the outlier after post-test analysis.
- Change the value of the outlier.
This may be done by replacing it with a more appropriate value, such as the mean or the median.
- Consider the underlying distribution. An Anderson-Darling or Shapiro-Wilk test may be done to check the normality of the data, and a non-parametric test carried out if the underlying distribution is not normal.
- Transform the data, using the Box-Cox transformation, Johnson transformation, log transformation, scaling, cube-root normalization, etc.

Methods and Tests for Data with Outliers

Winsorizing (Winsorization). Named after Charles P. Winsor, an engineer and biostatistician. In this process the effect of the outliers is reduced by limiting the extreme values: all values beyond a specified percentile are set to that percentile of the sample. Estimates from Winsorized data are generally more robust to outliers. Example: a 95% Winsorization would set all data below the 2.5th percentile to the 2.5th percentile value and all data above the 97.5th percentile to the 97.5th percentile value.

Trimming (Truncation). This is a method of censoring data: all data above/below a certain percentile is removed. Example: 95% trimming would remove the bottom 2.5% of the data and the top 2.5% of the data (above the 97.5th percentile). The TRIMMEAN function in Excel may be used for trimming. Note that the Winsorized mean and the trimmed (truncated) mean are not the same.

Non-Parametric Tests such as the 1-Sample Sign test, 1-Sample Wilcoxon test, Mann-Whitney, Kruskal-Wallis, Mood's Median, Friedman, and Runs tests can be used when the underlying distribution is not normal.

Transformation. Transform the data and then carry out parametric tests.

Univariate Methods

Box Plot. The box plot is the easiest method for identifying outliers. It uses the median and the first and third quartiles (Q1 and Q3) to determine the outliers.
Tukey Method. This method flags extreme outliers as points more than 3 times the interquartile range (IQR) below the first quartile or above the third quartile, and mild outliers as points between 1.5 and 3 times the IQR beyond those quartiles.

Multivariate Methods

At times a univariate method may not detect the outliers. Multivariate methods, such as multiple linear regression, may be used.

Minkowski Error. This loss index can be used to minimize the impact of outliers on a model. It is less sensitive to outliers than the mean squared error, in which the contribution of an outlier grows quadratically with its size.

References
https://en.wikipedia.org/wiki/Winsorizing
https://www.sigmamagic.com/blogs/how-to-handle-outliers/
https://cxl.com/blog/outliers/
https://aichapters.com/how-do-you-handle-outliers-in-data/
https://www.aquare.la/en/what-are-outliers-and-how-to-treat-them-in-data-analytics/
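As a minimal sketch of the methods from this post, the following Python example (using NumPy and SciPy, with made-up data containing one obvious outlier) detects outliers with Tukey's IQR fences, then computes a Winsorized mean and a trimmed mean:

```python
import numpy as np
from scipy.stats import trim_mean
from scipy.stats.mstats import winsorize

def tukey_fences(data, k=1.5):
    """Return (lower, upper) Tukey fences: Q1 - k*IQR, Q3 + k*IQR.
    k=1.5 flags mild outliers, k=3.0 flags extreme outliers."""
    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

# Hypothetical sample with one obvious outlier (95)
data = np.array([10, 12, 11, 13, 12, 11, 14, 12, 13, 95], dtype=float)

low, high = tukey_fences(data)
outliers = data[(data < low) | (data > high)]
print("Outliers:", outliers)                     # the 95 is flagged

# 80% Winsorization here: clamp the bottom/top 10% to the nearest kept values
wins = winsorize(data, limits=[0.1, 0.1])
print("Winsorized mean:", wins.mean())

# Trimmed mean: drop the bottom/top 10% entirely (cf. Excel's TRIMMEAN)
print("Trimmed mean:", trim_mean(data, proportiontocut=0.1))
```

Note how the Winsorized mean and the trimmed mean differ, as the post points out: Winsorizing replaces the extreme values while trimming discards them.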
  2. Multi-voting is used in decision-making to narrow a very large list of ideas down to a smaller one. It is generally done after brainstorming, since brainstorming generates a large number of ideas. In multi-voting all members of the team have an equal vote, so weaker members cannot be overpowered by stronger ones. Multi-voting also allows an item that is the choice of many, but not the top choice of any member, to get selected. It is also called the Nominal Group Technique (NGT).

Steps
· Generate a list of ideas and display them (e.g., after brainstorming).
· Combine similar ideas into groups (affinity mapping may be used).
· Give an ID to each item, such as a letter or number.
· Decide how many items each person votes for (generally one-third of the list).
· Tally the votes.
· Eliminate items with few votes (the cut-off generally depends on the group size).
· Have a brief discussion.
· Repeat the process until a decision is arrived at.

Benefits
Multi-voting helps narrow down a large number of ideas/problems in a systematic and democratic manner. All members have an equal vote, so powerful and outspoken members cannot overpower quieter, introverted members. It is popular because it is a very easy and democratic process.

Pitfalls or Limitations
Members with less experience and/or professional knowledge have the same vote as members with more experience and greater professional knowledge. If team members come from different domains and experience levels, multi-voting can lead to a sub-optimal decision.

Example
There are 15 Green/Black Belt certified employees in ABC Enterprises. The departmental heads, Rashmi, Ted, Prakash, Murthy, and Anita, have to select three employees to attend a two-week Black Belt Six Sigma training workshop. After discussing the experience, skills, etc. of each candidate, they carry out multi-voting to decide on the three candidates. The multi-voting is done in two stages, since Stage 1 ends in a tie between 4 candidates.
The results are given below:

References
https://asq.org/quality-resources/multivoting
https://sixsigmastudyguide.com/multivoting/
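As an illustration of the tally-and-eliminate steps above, a Stage 1 round like the one in the example could be sketched in Python (the ballots, candidate IDs, and the cut-off of three votes are all hypothetical):

```python
from collections import Counter

# Hypothetical Stage 1 ballots: each department head votes for
# one-third of the 15 candidates (5 votes each).
ballots = {
    "Rashmi":  ["A", "C", "D", "F", "J"],
    "Ted":     ["A", "B", "D", "G", "J"],
    "Prakash": ["A", "C", "E", "J", "K"],
    "Murthy":  ["B", "C", "D", "H", "J"],
    "Anita":   ["A", "D", "E", "J", "K"],
}

# Tally all votes across the group
tally = Counter(vote for votes in ballots.values() for vote in votes)

# Eliminate items with few votes (cut-off of 3 for a group of 5 voters)
shortlist = sorted(c for c, n in tally.items() if n >= 3)
print(shortlist)   # ['A', 'C', 'D', 'J'] - four survive, forcing a second stage
```

Four candidates clear the cut-off for three seats, which is exactly the tie situation that sends the group into a Stage 2 vote.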
  3. A two-proportion test is a hypothesis test to check whether the difference between two population proportions is statistically significant; for example, is the proportion of girls in one school significantly different from the proportion in another? The null hypothesis is that there is no difference between the population proportions. It is tested against an alternate hypothesis that can be two-tailed or left/right-tailed.

The output of a 2-proportions test has two p-values: one is the normal approximation based on the Z statistic, and the other is Fisher's Exact Test. As the names indicate, the normal approximation test is an approximation; it has greater error for smaller sample sizes and becomes more accurate as the sample size increases. Fisher's Exact Test, by contrast, is exact irrespective of the sample size, but becomes harder to compute as the sample size grows.

Fisher's Exact Test is calculated using the hypergeometric distribution. The factorials in the formula make the p-value harder and harder to compute as the sample size increases, since the test runs over every possible arrangement of successes and failures at the given sample size and computes the p-value from those counts. Thus, for larger samples, it is not only easier to calculate the p-value using the normal approximation test, but the results are also closer to the Fisher's Exact Test results. Since we no longer do manual calculations, and statistical software can quickly calculate the Fisher's Exact Test p-value, it makes sense to use Fisher's Exact Test irrespective of the sample size.

For small expected counts, Fisher's exact test is more accurate than the Chi-Square or G-test of independence. The normal approximation test (Z-test) is not accurate when the number of events/non-events is less than 5.
This is based on the rule that N*P and N*(1-P) should both be greater than 5 (where N is the number of trials and P is the proportion of successes). In other words, the normal distribution can be used in place of the binomial distribution when the sample size is large. If N is small, or P is close to 0 or 1, the binomial distribution will be skewed and the normal distribution cannot be taken to represent it. This is evident from Table 1, where N is increased, and Table 2, where P is increased: as N and P increase, the normal approximation approaches Fisher's Exact Test. It is also evident from Table 2 that for small samples/P, the normal approximation test may indicate that a difference between the populations exists when no difference exists.

References
https://stats.stackexchange.com/questions/234010/2-sample-proportions-z-test-vs-fishers-exact-test
https://blog.minitab.com/en/quality-data-analysis-and-statistics/two-p-values-for-a-2-proportions-test-am-i-seeing-double
http://www.biostathandbook.com/fishers.html
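The two p-values described in this post can be sketched for one hypothetical 2x2 table: the z-test is implemented directly from the pooled-proportion formula, and Fisher's exact test comes from SciPy.

```python
from math import sqrt
from scipy.stats import norm, fisher_exact

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                     # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))    # standard error under H0
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))                 # two-tailed p-value

# Hypothetical data: 30/100 successes in group 1 vs 50/100 in group 2
z, p_approx = two_proportion_z(30, 100, 50, 100)

# Fisher's exact test on the same 2x2 table of successes/failures
_, p_exact = fisher_exact([[30, 70], [50, 50]])

print(f"z = {z:.3f}, normal-approximation p = {p_approx:.4f}")
print(f"Fisher's exact p = {p_exact:.4f}")
```

With 100 trials per group, both p-values are small and close to each other, illustrating how the normal approximation converges toward the exact test as the sample size grows.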
  4. W. Edwards Deming was born in 1900 in the United States. He was an American engineer, statistician, consultant, and the author of numerous books. Philip B. Crosby was born in 1926; he was a businessman, consultant, and author. Crosby's philosophy of quality focused on doing it right the first time, and he defined quality from the perspective of the customer.

Deming's 14 points for business excellence include constancy of purpose, adopting the new philosophy, moving away from mass inspection, constantly improving production and service, on-the-job training, education and self-improvement programs, leadership in supervision, removing barriers between staff areas, eliminating work standards, numerical quotas and numerical goals, removing barriers that rob workers of pride of workmanship, and commitment from top management.

Crosby's 14 steps to quality improvement include long-term commitment to quality by management, cross-departmental quality improvement teams, identification of current and potential problem areas, assessment of the cost of quality, being proactive and quick in correcting problems, a focus on zero defects, training of supervisors in quality, individual and team goal-setting, communication between workers and supervisors, recognition, and quality councils.

Deming looked at the management process as a single system and focused on constant improvement. He used principles of management to improve quality and reduce cost, including the reduction of waste, staff attrition, etc. The Japanese summarized Deming's methodology thus: a focus on quality will, in the long term, reduce cost, whereas a focus on cost will reduce quality and, in the long term, increase cost. Deming is also known for the PDCA Cycle (Plan, Do, Check, Act). Deming further described the Seven Deadly Diseases that stand in the way of effectiveness and constant improvement; these include lack of constancy of purpose, an emphasis on short-term profits,
performance evaluation systems, job-hopping by management, excessive medical costs, and excessive liability costs. Deming emphasized training over education: he believed that training provides the skills to do the task, whereas education only provides knowledge, and he therefore emphasized the training of all employees.

Crosby defined the four absolutes of quality management: quality is defined as conformance to requirements; the system of quality is the prevention of defects; the performance standard of quality is zero defects; and the measurement of quality is the price of non-conformance. Crosby believed that training management at all levels was necessary and that management should communicate its understanding to the employees. Crosby placed great emphasis on zero defects and included determination, education, and implementation among the basic elements of improvement.

References
https://www.simplilearn.com/deming-vs-juran-vs-crosby-comparison-article
https://treehozz.com/what-is-the-difference-between-demings-and-crosbys-approach#:~:text=Subsequently%2C%20one%20may%20also%20ask%2C%20which%20is%20a,levels.%20Management%20communicates%20their%20understanding%20to%20other%20employees.
  5. The CUSUM chart is used to detect very small shifts in a process. It is able to detect these small shifts because it incorporates the information from the sequence of samples: it plots the cumulative sums of the deviations of each sample value from the target. These deviations should vary randomly above and below zero; a trend in either direction indicates a shift in the process mean. Being cumulative helps in the detection of very minor drifts in the process mean, which cause a steady increase or decrease in the cumulative deviation values. The sample values can be individual measurements or the means of subgroups. If we do not need to detect very small shifts, variables charts such as the I-MR chart (for individual measurements) or the Xbar-R chart (for subgroups) may be used instead. If the data are counts of defects/defectives, the U or P chart can be used.

Example
A quality engineer at a plant that assembles automobile engines monitors the movement of crankshafts in the engines. In an operating engine, parts of the crankshaft move up and down a certain distance from the baseline position. The engineer took five measurements per day from September 28 through October 15, and then ten per day from October 18 through 25. Plotting the same data on both a CUSUM chart and an Xbar-R chart shows that while the Xbar-R chart is in control, the CUSUM chart detects the minor shift in the process mean, indicated by the red dots on the control chart.
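As an illustration of the idea, a minimal tabular (two-sided) CUSUM can be sketched in Python. The reference value k = 0.5 sigma and decision interval h = 4 sigma are common textbook defaults, and the data is made up to contain a small sustained shift:

```python
def tabular_cusum(data, target, sigma, k=0.5, h=4.0):
    """One-sided upper/lower CUSUMs with reference value k*sigma
    and decision interval h*sigma; returns indices that signal."""
    K, H = k * sigma, h * sigma
    cu = cl = 0.0
    signals = []
    for i, x in enumerate(data):
        cu = max(0.0, cu + (x - target) - K)   # accumulates upward drift
        cl = max(0.0, cl + (target - x) - K)   # accumulates downward drift
        if cu > H or cl > H:
            signals.append(i)
    return signals

# Process on target (10.0) for 10 points, then a sustained 1-sigma upward shift
data = [10.0] * 10 + [11.0] * 15
print(tabular_cusum(data, target=10.0, sigma=1.0))   # first signal at index 18
```

Because each shifted point adds (1.0 - 0.5) sigma to the upper sum, the cumulative statistic crosses the decision interval a few samples after the shift begins, even though no single point is far from the target.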
References
https://support.minitab.com/en-us/minitab/18/help-and-how-to/quality-and-process-improvement/control-charts/how-to/time-weighted-charts/cusum-chart/before-you-start/overview/
https://support.minitab.com/en-us/minitab/19/help-and-how-to/quality-and-process-improvement/control-charts/how-to/time-weighted-charts/cusum-chart/methods-and-formulas/methods-and-formulas/
Example data from support.minitab.com:
https://support.minitab.com/en-us/minitab/19/help-and-how-to/quality-and-process-improvement/control-charts/how-to/time-weighted-charts/cusum-chart/before-you-start/example/
  6. Economic and social activity facilitated by platforms is broadly defined as the platform economy. It includes frameworks that facilitate sales, logistics, technology, social activity, etc. Transaction platforms are also called digital matchmakers; major examples are eBay, Amazon, Facebook, Ola, Udemy, and Coursera. An innovation platform offers a technological framework on which other developers can build solutions; examples are the Microsoft and Ethereum platforms. Alex Moazed explains in his book Modern Monopolies that "platforms don't own the means of production; they create the means of connection." (Moazed, Alex (2016). Modern Monopolies. Macmillan. p. 30.)

The platform economy took shape around the year 2000, with many large organizations adopting the technology and disrupting traditional business models. Firms like Nokia and Blackberry lost out to firms such as Netflix and Amazon, which leveraged platform technology. Today, private, government, and non-government organizations are all increasing their presence in the platform economy.

Categories of Platforms
Platforms can be categorized as transactional, innovation, integrated, or investment platforms. Transactional platforms facilitate different types of online buying and selling. Innovation platforms provide the technological framework on which other developers can create an ecosystem around the platform. Integrated platforms combine the transactional and innovation aspects. Investment platforms invest in multiple platform businesses.

Key Drivers and Emerging Key Drivers
The key and emerging drivers of the platform economy are the network, the ecosystem built around the network, database management systems, machine learning, artificial intelligence, the cloud, the blockchain, decentralized finance, decentralized autonomous organizations, non-fungible tokens, web3, etc.
Use of Lean Six Sigma Methodology in the Platform Economy and Its Key Drivers
The platform economy ties the entire chain of an ecosystem into a single framework. Master Black Belts have knowledge and experience across various domains in both manufacturing and service industries, across the entire spectrum of that chain. Black Belts have delivered projects that increase productivity and efficiency, reduce overall cost, create new markets, and reduce inefficiencies in existing markets. Some of the key drivers, and the role Master Black Belts can play, are explained below.

Optimizing the Network Effect. The success of a platform depends on the network effect: a platform becomes powerful based on the network it is able to bring together. Six Sigma methodology can be used to study the network and create an optimal balance between its components. For example, in a buyer/seller network there has to be a balance between supply and demand. This balance is dynamic, and Six Sigma methodology, through statistical analysis, can be used to maintain the balance of the network so that it can be monetized profitably.

Creating the Ecosystem. The platform economy works around an ecosystem that is built on, but distinct from, the platform itself. This ecosystem consists of various developers who create applications for it; these applications could address any part of the end-to-end supply chain. Lean Six Sigma methodologies can be used to identify the elements of the ecosystem and create relevant applications that maximize the platform economy.

Blockchain Technology, NFTs, DeFi, DAOs, etc. As a centralized technology, the platform economy has tended to let only big players take advantage of the network. However, open-source, decentralized blockchain technology, with smart-contract functionality and built-in trust, has enabled small players, innovators, and entrepreneurs to get onto the platform economy.
Take, for example, the Ethereum platform, which can attach tangible assets such as property, stocks, art, and music to the blockchain. Ethereum is an open-source, decentralized blockchain technology with trusted, built-in smart contracts. Any developer can deploy permanent, immutable, decentralized applications, and users can build on this technology to enable Decentralized Finance (DeFi), Decentralized Autonomous Organizations (DAOs), Non-Fungible Tokens (NFTs), etc. Master Black Belts, with their vast experience of working on varied projects, are uniquely placed to take advantage of the opportunities emerging on decentralized platforms.

References
https://en.wikipedia.org/wiki/Platform_economy
https://en.wikipedia.org/wiki/Ethereum
  7. Fractional Odds (British)
Fractional odds are given in fraction form. The first number (the numerator) is the potential net profit; the second number (the denominator) is the wager.

Example 1: a wager on Chennai Super Kings at 10 to 15 fractional odds. For every Rs 15 wagered, the potential net profit is Rs 10. A winner gets Rs 25, the sum of the original stake of 15 and the net profit of 10.

Fractional Odds: Wagering on an Underdog (High-Odds Bet)
In a high-odds bet, the potential pay-out relative to the stake is larger. For example, when wagering on Kolkata Knight Riders, the underdog, at 5 to 1 fractional odds, a bet of Rs 100 returns Rs 600 if they win (Rs 100 original bet + Rs 500 net profit).

Decimal Odds (European)
This is the most widely used odds format, and its popularity has spread from Europe across the world. In decimal odds, the pay-out is the stake multiplied by the odds, so the original stake is included in the pay-out.

Pay-Out = Stake x Odds

Example 1: decimal odds for Mumbai Indians to win are 1.82. A wager of Rs 100 has a potential pay-out of Rs 182 (the Rs 100 stake plus Rs 82 profit). In fractional odds, a similar wager on Mumbai Indians would be at 82 to 100.

Example 2: a wager of Rs 100 on Punjab Kings at 2.75 decimal odds.
Pay-Out = Stake x Odds = 100 x 2.75 = Rs 275
In fractional odds, this is a wager on Punjab Kings at 175 to 100: a risk of Rs 100 against a pay-out of Rs 275, for a net profit of Rs 175.

American Odds (Moneyline)
This system is based on $100. Negative odds give the amount that must be wagered to win $100; for example, -110 means you need to risk $110 in order to win $100. Positive odds give the amount won when $100 is wagered.
Therefore +110 means you risk $100 in order to win $110.

The break-even win probability implied by American odds is 100/((100/|odds|) + 1) percent for negative odds, and 100/((odds/100) + 1) percent for positive odds. Breaking the formula down for odds of -120:
• 100/120 ≈ 0.83 is the profit per unit risked: you win about 83% of the amount you risk.
• Adding 1 includes the stake: 0.83 + 1 = 1.83, so a $100 wager returns $183 ($83 profit plus the $100 stake). This is the decimal-odds equivalent.
• 100/1.83 ≈ 54.6, so you would need to win about 54.6% of such wagers to break even.

Conversion of US Odds to Decimal Odds
Positive US odds: Decimal Odds = (US Odds/100) + 1
Negative US odds: Decimal Odds = (100/|US Odds|) + 1

Conversion of Decimal Odds to US Odds
Decimal Odds >= 2: US Odds = (Decimal - 1) x 100
Decimal Odds < 2: US Odds = -100/(Decimal - 1)

Breakeven Odds
The break-even percentage is the proportion of wagers you must win in order to break even. Even-money odds are 2.0 in decimal form (a win doubles your stake) and 1 to 1 in fractional form. Decimal odds below 2.0 return less profit than the stake; decimal odds above 2.0 return more profit than the stake.

Comparison of the Same Wager
A wager of Rs 100 with a potential profit of Rs 90:
Fractional odds: 90 to 100
Decimal odds: 1.9
US odds: -100/(1.9 - 1) ≈ -111

References
https://www.sbo.net/strategy/fractional-odds/
https://www.bettingpros.com/articles/break-even-win-for-sports-betting/
https://www.sbo.net/strategy/decimal-odds/
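The conversion formulas above can be collected into a few small Python helpers (the odds values passed in are illustrative):

```python
def american_to_decimal(us):
    """Convert American (moneyline) odds to decimal odds."""
    return us / 100 + 1 if us > 0 else 100 / abs(us) + 1

def decimal_to_american(dec):
    """Convert decimal odds to American odds."""
    return (dec - 1) * 100 if dec >= 2 else -100 / (dec - 1)

def implied_probability(us):
    """Break-even win probability implied by American odds.
    The decimal odds are the pay-out per unit staked, so their
    reciprocal is the win rate needed to break even."""
    return 1 / american_to_decimal(us)

print(american_to_decimal(-110))             # 1.909...
print(decimal_to_american(2.75))             # 175.0
print(round(implied_probability(-120), 3))   # 0.545
```

Note that converting decimal 1.9 gives about -111, not -90: with sub-even decimal odds the American figure is the stake needed to win 100, which is why the division form -100/(Decimal - 1) is used rather than a multiplication.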
  8. Technical debt is also known as code debt or design debt. It is the additional cost of rework attributable to shortcuts taken to speed up a project, as in Agile software development. Changes that are required but not made accumulate as technical debt, and technical debt accumulates interest: the more it builds up, the harder it becomes to implement further change. Technical debt is not bad in itself and can help in prioritizing the various tasks of a project.

Technical debt can arise for various reasons: starting development before the design is complete, lack of knowledge, lack of ownership, competitive business pressure, lack of understanding of technical debt, tightly coupled components, inadequate testing leading to sub-optimal quick-fix solutions, inadequate documentation or collaboration, parallel development, delayed refactoring, non-alignment with frameworks and standards, poor technical leadership, and last-minute changes to requirements.

Kenny Rubin categorizes technical debt into targeted, known, and happened-upon debt. Targeted and known technical debt is known in advance and planned for by the design and development team; happened-upon technical debt is hidden, and the team is unaware of it until it is revealed while the product is in use.

Acceptable Level of Technical Debt
Just as debt is a necessary evil in the financial world, technical debt is a necessary evil in project development. A balance between technical debt and turnaround time needs to be struck: it is a choice of speed over perfection, and of where to draw the line in between. Technical debt accumulates as upstream work on the project proceeds, increasing the debt to be repaid later, and this causes projects to miss their deadlines.
Each change incorporated into the project adds to the technical debt, i.e., the uncompleted work in the project. Deadlines are missed when the uncompleted work (technical debt) requires more time than is available to finish the project. One way to keep technical debt at an acceptable level is to limit the amount of work in progress, which keeps the uncompleted work (debt) to a minimum.

At times, projects are released carrying a large amount of technical debt. Further refactoring can then increase the technical debt sharply, and modifying code that is already in production carries a large risk, especially if the contract has strict service-level agreements. Some projects deliberately accumulate technical debt by rushing out the software in order to gain experience, and then refactor the software to pay the debt back; the experience gained leads to better software. This can be seen as a tool for getting ahead of the competition by test-driving the software. Since software companies are generally under competitive pressure to develop and ship quickly, technical debt can be a necessary evil in beating the market. Teams need to trade off taking on technical debt and being first out of the blocks against launching a more polished product later. There is thus a broad consensus among software teams that some technical debt is necessary.

Consequences of Technical Debt
According to Shaun McCormick, the consequence of technical debt is that it decreases agility as the project matures; for that reason, it should be intentional and not accidental. Technical debt also leads to uglier, harder-to-maintain code.

References
https://en.wikipedia.org/wiki/Technical_debt
https://www.productplan.com/glossary/technical-debt/
  9. Time Series Analysis
Time series analysis, a part of predictive analytics, is used for predicting future data based on past data. It is the analysis of a sequence of data points recorded over consistent intervals of time: the data is not recorded at random but at regular intervals. For accurate results it requires a fairly large data set, so that the data is representative and the analysis reliable and consistent. The analysis depicts how the variable changes over time, and statistical software with good data visualization capability helps in the analysis, visualization, and prediction of the series. Time series analysis is used wherever the data is affected by time, such as retail, stock markets, foreign exchange markets, cryptocurrency, medical data, weather data, educational data, and many other areas.

Classification of Time Series Variation
Time series variation can be classified as trend, seasonal variation, or cyclical variation.

Trend analysis can be deterministic or stochastic. In deterministic trend analysis the underlying cause can be explained; in stochastic trend analysis the underlying cause is random and unexplained. A trend may increase, decrease, or move sideways over a period of time.

Seasonal variation takes place at specific, regular intervals during the year. It occurs due to rhythmic forces that act in a periodic manner, such as the revolution of the earth around the sun or of the moon around the earth, and due to habits, traditions, festivals, weather, etc. Seasonal variations repeat themselves over time; examples are air fares, hotel bookings, and sales of winter clothing.

Cyclical variation occurs due to various factors and differs from seasonal variation in that it follows an irregular periodic pattern. It may be due to the business cycle of boom, recession, depression, and recovery.
In a cyclical variation, the outcomes may differ between cycles. An example of cyclical variation is the savings of a person during the various stages of his life; these savings differ for each individual and are hence more difficult to predict.

Applications of Time Series
Time series analysis is used to predict the future based on past data. It is used to detect underlying causes, seasonal or cyclical trends, and systemic patterns that occur over a period of time, including seasonal variations due to habits, traditions, festivals, weather, etc., and cyclical variations such as business cycles.

Most Difficult Component to Handle
The cyclic component is longer and less predictable than the seasonal component, which is shorter and easier to predict. Some cyclic components, such as business cycles, can run for years or even decades.

References
https://www.tableau.com/learn/articles/time-series-analysis
https://www.toppr.com/guides/fundamentals-of-business-mathematics-and-statistics/time-series-analysis/definition-of-time-series-analysis/
https://www.coursebb.com/2017/07/24/difference-cyclical-component-seasonal-component/
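As a sketch of separating a deterministic trend from a seasonal component, the following NumPy example fits a straight-line trend and then averages the detrended values at each position in the seasonal cycle. The monthly series is synthetic (a known linear trend plus a 12-month sinusoidal pattern), so the decomposition can be checked against the truth:

```python
import numpy as np

# Synthetic monthly series: linear trend + 12-month seasonal pattern
period = 12
t = np.arange(72)                                  # six years of months
seasonal_true = 5.0 * np.sin(2 * np.pi * t / period)
series = 100 + 0.5 * t + seasonal_true

# Deterministic trend: fit a straight line to the series
slope, intercept = np.polyfit(t, series, 1)
trend = intercept + slope * t

# Seasonal component: average the detrended values at each cycle position
detrended = series - trend
seasonal_est = np.array([detrended[i::period].mean() for i in range(period)])
print(np.round(seasonal_est, 2))   # close to the true +/-5 sinusoidal pattern
```

The recovered slope is close to the true 0.5 per month, and the twelve seasonal averages approximately reproduce the sinusoidal pattern; real data would add noise and possibly a cyclical component on top of these two pieces.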
  10. Kotter's Eight Steps and the DMAIC Framework

1. Creating a Sense of Urgency. In the Define stage, the Project Charter creates a sense of urgency for senior management. It needs a compelling business case in order to get the buy-in of senior management.

2. Building a Guiding Team. The team members included in the Project Charter need to be selected with great care, and the stakeholder analysis done in the Define stage contributes to building a guiding team. Position power, expertise, credibility, and leadership need to be taken into consideration right at the Define stage of the DMAIC process. In the Improve and Control phases, the team members who will improve, implement, and control the project should likewise be chosen with great care.

3. Get the Vision Right. John Kotter, in his book Leading Change, defines a right vision as imaginable, desirable, feasible, focused, flexible, and communicable. Getting the vision right is critical to a DMAIC project and starts with the Project Charter in the Define phase. The project should be approved only if the vision is right and has SMART goals.

4. Communicate for Buy-in. Frequent and uniform communication is necessary for change to take place. Buy-in has to be communicated strongly in the Define phase, and the Measure phase further validates it. Once the Measure phase has validated the buy-in, communication needs to be sustained through the Analyse and Improve phases and stepped up again during the Control phase, where it must include the entire staff of the area where the LSS project is implemented.

5. Empower Action. Often the obstacles to empowering action are structural barriers, skills, systems, and supervisors. These barriers need to be removed in order to empower action.
The LSS Project Leader needs to empower the team during the Improve and Control phases. It is during these phases that the team comes into direct contact with the staff of the area of implementation, and without empowerment it would be difficult for the team to improve, implement, and control the project.

6. Create Short-Term Wins. Short-term wins help in maintaining the momentum of the project and should be created right from the Define phase. There is plenty of scope for the Project Leader to create short-term wins during the Analyse, Improve, and Control phases, and the leader needs to be creative in coming up with them throughout the project.

7. Don’t Let Up. The Project Leader should ensure that the team stays motivated and does not give up, that timelines are followed, and that scope creep does not take place. The documentation prepared through the project, which is a tollgate requirement of the Control phase, should include the issues, opportunities, and lessons learnt that may be taken up in the next project.

8. Make Change Stick. This last step involves cultural change. DMAIC and Lean projects embody a philosophy that operates within a Lean culture. The Project Leader needs to study the prerequisites of the new system and prepare for a cultural change in its area of operation.

Conclusion
A DMAIC project would be a failure without change management. Incorporating the eight steps of change enumerated by John Kotter, and the three steps of Unfreeze, Change, Refreeze enumerated by Kurt Lewin, at various stages of the DMAIC framework is necessary for the success of the project. The LSS Project Leader needs to understand both of these change models and be creative in integrating them into the DMAIC framework.

References
Benchmark Six Sigma Black Belt Preparatory Module
  11. The two leaders in the BI market according to the Gartner 2021 report are Power BI and Tableau. Tableau has been around since 2003; Power BI was launched in 2011 and added to the Office 365 suite in 2013.

Power BI
Power BI is Business Intelligence software that was added to the Microsoft family as a SaaS model. It is very closely related to Excel and consists of a group of applications and services on the cloud, the main ones being Power Query, Power View, Power Pivot, Power Map, and Q&A. With its integration with Excel, it is very easy to create dashboards and reports; hence it is the go-to tool for inexperienced BI users. Microsoft has added Power BI to its Power Platform, which includes Power Virtual Agents, Power Automate, Power Apps, etc. Its disadvantages are that it offers less functionality than Tableau, and that getting its full functionality requires installing SQL Server and Reporting Services.

Tableau
Tableau was introduced as BI software in 2003. It is more powerful than Power BI and is one of the go-to tools for data visualization. Tableau has a strong user community and an end-to-end solution that spans collaboration, analytics, content discovery, data preparation, data access, and deployment. Tableau is more flexible than Power BI, and its desktop version can be installed without SQL Server. Its disadvantages are that it is much more expensive than Power BI and its learning curve is steeper, since it requires you to build your own data warehouse. Further, Tableau licenses have incremental costs, and connecting to third-party service providers adds to the cost.

Similarities between Power BI and Tableau
Both Power BI and Tableau can create a variety of data visualizations such as bar, line, and pie charts, treemaps, and geographical maps. 
The visualizations are interactive in both tools, with features such as filtering, dashboards, etc. Both can connect to various data sources, are user-friendly, and require no coding.

Differences between Power BI and Tableau
Power BI works only with MS Windows and integrates easily with the Microsoft ecosystem. Tableau, having been acquired by Salesforce, integrates easily with Salesforce.

R and Data Visualization
R is free software that was initially used for statistics and graphics. The R Core Team and the R Foundation for Statistical Computing support its development. It has been under development since the early 1990s and is available under the GNU General Public License for various operating systems. Besides the command-line interface, it can be used with third-party GUIs such as RStudio and IDEs such as Jupyter. Since R is open source, it is extensible through functions and packages, and the R community is constantly contributing to and improving its functionality. Besides great libraries for data visualization, it has numerous libraries for statistics, linear and non-linear modelling, spatial analysis, time series, machine learning, and artificial intelligence, with packages for classification, clustering, computer vision, etc.

Comparing R to Power BI and Tableau
As open source, R is free, is developed by a vast community of R programmers, and has up-to-date packages in most domains. Even though its learning curve is steep, its data visualization packages are comparable to both Power BI and Tableau, and it additionally has great packages for simulation, machine learning, artificial intelligence, etc.

References
https://spreadsheeto.com/power-bi-vs-tableau/
https://www.datacamp.com/community/blog/power-bi-vs-tableau
https://en.wikipedia.org/wiki/R_(programming_language)
  12. Code Refactoring involves changing the structure of the code, which may also be described as changing the factoring of the code. While code refactoring changes the structure of the code, it does not change its functionality. The benefits of code refactoring are that it helps to improve the structure, design, and implementation of the code, making it more readable, less complex, and easier to understand and maintain without changing its behavior. While code refactoring does not change the functionality, it can improve performance so that the code uses less memory and runs faster. Code refactoring is done in a series of smaller steps called micro-refactorings. Such small steps, done continuously, help in removing hidden bugs and software vulnerabilities. However, if code refactoring is not done well, it can introduce bugs into the system.

Benefits
Maintainability and supportability. Refactored code is easy to read and understand. Its logic is clear not only to the person who wrote the code, but also to the one who maintains it.
Scalability and extensibility. It is easier to scale and extend the capabilities of refactored code.
Saved resources, such as time and money, in the future.

Problems
Code refactoring is not easy. It requires understanding the metadata, data models, dependencies, and the very structure of the software system. With the high turnover of team members in the software development industry, a new team will have to recreate this understanding of the software system. Code refactoring should be avoided when the modifications being made could cause the structural architecture of the system to deteriorate; this can affect maintainability and comprehensibility, leading to a total rewrite of the software system.

References
https://en.wikipedia.org/wiki/Code_refactoring
https://lvivity.com/what-is-code-refactoring
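As a small illustration (not from the cited sources), here is a micro-refactoring in Python: an extract-function step that changes the structure of the code while leaving its behaviour identical. The order-total example and all names in it are invented.

```python
# Before: one function mixes looping, discount logic, and rounding.
def total_before(items, vip):
    total = 0.0
    for price, qty in items:
        total += price * qty
    if vip:
        total = total * 0.9
    return round(total, 2)

# After micro-refactoring: the same behaviour, split into small named
# steps so each piece can be read, tested, and changed independently.
def subtotal(items):
    return sum(price * qty for price, qty in items)

def apply_discount(amount, vip, rate=0.10):
    return amount * (1 - rate) if vip else amount

def total_after(items, vip):
    return round(apply_discount(subtotal(items), vip), 2)

# A sample cart to confirm the two versions agree.
cart = [(19.99, 2), (5.00, 1)]
```

The point of the exercise is the property the text describes: for every input, `total_before` and `total_after` return the same result, but the refactored version exposes the discount rate as a parameter and isolates each responsibility.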
  13. User Story. The term user story was first coined by Alistair Cockburn in 1998 and adapted into the Agile framework by Mike Cohn in 2004. In a project, a user story is a way of describing the features of the system in an informal, natural language that is understood by the layman. User stories are generally written from the angle of the end-user of the system, and may be written by different end-users so that the requirements of all users are captured across all functions. The user story has various templates, such as the role–capability–benefit template, or the 5Ws: who, when, what, why, where. These templates help in making sense of a problem without imposing undue structure.

INVEST. The acronym INVEST was first used by Bill Wake to define the characteristics of a good Product Backlog Item used in a user story.

Independent. User stories should be independent, meaning they could be taken up for work in any sequence. This way, each user story can be prioritized independently of the others. If user stories are dependent, and a more valuable story follows a less valuable one, it would not be possible to prioritize the more valuable story first. User stories may be combined to make them independent.

Negotiable. A user story is used to begin a conversation and is not set in stone. The user story gives the main essence of what is required; the final requirements are then worked out between the customer and the designer through negotiation and collaboration. The user story is a starting point for understanding the needs of the customer.

Valuable. If a user story does not have sufficient value, it is not prioritized and not done. At times a user story may not create value for the end-user but instead cover a non-functional requirement that is necessary, thereby creating value for the business process. 
The value/benefit clause in the user story template indicates the value that the user story aims to deliver.

Estimable. In order to prioritize a user story, it is important to be able to accurately estimate its size in terms of resource requirements. At times, user stories that take a lot of time are given lower priority; in such cases, the user story may be split into smaller parts so it can be estimated more accurately and given a higher priority. If you are unable to estimate a user story, it may be necessary to redefine it. Stories that cannot be estimated cannot be used, as they cannot be made part of any iteration.

Small. User stories should generally take between a few person-days and a few person-weeks. In general, a single story should not consume more than 50% of the time available, and it should be small enough to be done in a sprint.

Testable. In order to do a story, it has to be testable. The story should therefore have laid-down acceptance criteria for testing.

Challenging Guidelines to Follow. Even though each of the guidelines poses its own challenges, the guidelines of an “Independent” and a “Small” user story can be contrary to each other. Smaller user stories may have to be merged to create an independent user story, so striking a balance between the two is a challenge.

Conclusion. Firstly, successful implementation of the Agile framework requires a change in the organizational culture. Secondly, the more time spent on INVEST user stories, the better your understanding of the users’ requirements, which leads to cohesion amongst team members and a shorter time to complete the project.

References
https://en.wikipedia.org/wiki/User_story
https://en.wikipedia.org/wiki/INVEST_(mnemonic)
https://agileforall.com/new-to-agile-invest-in-good-user-stories/
https://capgemini.github.io/agile/invest-in-user-stories/
  14. What is Human Centred Design? Human Centred Design is a problem-solving approach that develops solutions by taking the human perspective and behaviour into consideration at every level of the design. Its goal is to address and incorporate the preferences, pain points, likes, and dislikes of the user. It was first popularised by the global design firm IDEO, which found that the Human Centred Design approach led to quicker development, increased profits, and fewer products failing in the market. It starts with the end-user for whom the solution is being designed and ends with a tailor-made solution suited to their requirements.

A product may typically meet a functional, emotional, or social need. The functional need is the actual use of the product, whereas the emotional and social needs come from the feeling one gets from possessing a product; for example, it could meet an aesthetic or status need. The Human Centred Design approach is akin to the ‘Jobs to be Done’ theory of Harvard Business School professor Clayton Christensen, which states that people do not buy a product but hire it to do a specific job or achieve a particular goal. Human Centred Design looks at the design through this framework of the users’ motivations rather than demographic attributes such as income, gender, age, etc.

Human Centred Design is so well established that it has been defined in ISO 9241-210:2019(E) as “an approach to interactive systems development that aims to make systems usable and useful by focusing on the users, their needs and requirements, and by applying human factors/ergonomics, and usability knowledge and techniques. This approach enhances effectiveness and efficiency, improves human well-being, user satisfaction, accessibility and sustainability; and counteracts possible adverse effects of use on human health, safety and performance”.

It has three phases, viz., the inspiration phase, the ideation phase, and the implementation phase. 
The inspiration phase entails interacting with and learning from the end-user. The designers immerse themselves totally in the lives of the people they are designing for, so as to understand their context and needs. This removes preconceived notions, biases, and misunderstandings about what the customer actually requires, and also helps get buy-in from the end-user. This stage requires empathy to understand the emotions and experiences of the customer: the designer puts himself into the shoes of the customer, gets a feel for the product, and asks the questions the customer would ask, such as where, when, or for what purpose they use the product. The ideation phase is used to generate various ideas and prototypes based on the experiences from the inspiration phase. The implementation phase involves bringing the solution to the end-user.

Salient Features
The salient features of Human Centred Design are that it needs empathy, the generation of a large number of ideas through brainstorming or bodystorming, and the creation of a large number of prototypes along with the end-user. It caters to the functional, emotional, and social needs of the end-user right from the start. Product designs that follow the HCD approach are generally successful in the market; on the downside, these products have a long lead time for development.

Benefits
Even though designing and producing the product takes longer, the likelihood of the product succeeding in the market is very high. Having been designed from the perspective of the end-user, the product gains early acceptance in the market.

Examples
Zoom. Zoom realized the increased need for videoconferencing solutions not only for businesses but also for educational institutions, religious organizations, individual trainers, etc. Zoom created its virtual learning system through interaction with teachers and students and designed a system that met the varied end-users’ requirements. 
Zoom ensured security and compliance requirements arising from young students using its platform; it further built a whiteboard for easy interaction, dashboards to track student engagement, and integrations with various learning management systems.

Products Used during Commutes. Designers of products used during commutes, such as coffee, milkshakes, mobile phones, and music devices, have interacted with end-users to make their products more acceptable. For example, Bluetooth-enabled hands-free mobile phones, music player controls on the steering wheel, cup holders in the car, and increasing the thickness of a drink have all been incorporated into products based on end-user requirements.

Changing User Requirements during Covid
Payment and Logistics. With the onset of the pandemic, people needed reduced cash payments and increased home delivery of items. Payment and logistics processes have been redesigned to cater to these specific user requirements.
Electronic Tablets. Tablets were basically used by the designer community. However, during Covid, the customer base extended to teachers, students, managers, etc. Wacom identified this requirement and, through interaction with end-users, incorporated various hardware and software changes into its offerings.

References
https://en.wikipedia.org/wiki/Human-centered_design
https://www.iso.org/standard/77520.html Ergonomics of human-system interaction — Part 210: Human-centred design for interactive systems
https://www.designkit.org/human-centered-design
https://online.hbs.edu/blog/post/what-is-human-centered-design
https://online.hbs.edu/blog/post/jobs-to-be-done-examples
  15. McClelland’s Theory of Needs was developed by David McClelland in the 1960s, about two decades after Maslow’s hierarchy of needs. Also called Need Theory, it was put forward as a motivational model. McClelland hypothesized that an individual's needs are acquired over a period of time and are shaped by their unique experiences. The theory describes three needs, viz. achievement, affiliation, and power, and states that all humans, irrespective of their gender, age, or race, have these needs. The Thematic Apperception Test (TAT) is used to measure these needs, and the score on the TAT can be used to suggest the type of work that will suit an individual.

Need for Achievement. People who have a dominant need for achievement are most likely to take on moderate tasks that are neither too easy nor so difficult that they fail. They like to continuously receive feedback on their work, further reinforcing and motivating their need to achieve. Such people are motivated by rewards such as promotions, bonuses, etc.

Need for Affiliation. Such people look forward to social relationships and have a desire to be part of a group and to be loved and accepted by it. Out of fear of rejection, they adhere to the norms of the group. People with a need for affiliation prefer collaboration over competition and detest situations with high risk and uncertainty. They work best in areas where social interactions are called for, such as customer service or directly dealing with clients.

Need for Power. The need for power can be either personal or institutional. People with a need for institutional power desire to organize the efforts of the group to further the overall goals of the organization, whereas people with a need for personal power want to direct others and be in control. People with a need for institutional power are generally more effective than people with a need for personal power. 
People with a need for power are likely to see everything as a zero-sum game. They are meticulous and disciplined. They look for competition, winning the argument, and influencing others rather than being influenced, and they desire recognition and status.

Implications for Project Leaders
1. About seven-eighths of the population is dominant in one, two, or all three of these needs. Project leaders should attempt to identify these needs in each of their project team members.
2. Project leaders need to interact with top management keeping in mind that top management generally has a low need for affiliation and a high need for power.
3. Project leaders need to identify people with a high need for institutional power and be cautious with people with a high need for personal power. They should give people with a high need for institutional power every opportunity to take on leadership.
4. Project leaders should remember that people with a high need for affiliation do not make good top managers and are more suited to non-leadership roles. They should be put in a collaborative environment that requires high personal interaction and harmonious relationships, not in a highly creative and flexible environment, since they generally conform to group norms and would feel threatened there.
5. People with a strong need for achievement generally do not rise to top management. Understanding that they seek to excel and prefer projects where they can succeed through their own efforts, project leaders should either group them with other high achievers or give them independent tasks. Since they avoid high-risk and high-uncertainty situations, project leaders should generally give them challenging tasks with reachable goals and regular feedback, either as verbal appreciation or some reward. 
References
https://en.wikipedia.org/wiki/Need_theory
http://www.netmba.com/mgmt/ob/motivation/mcclelland/
  16. As with PERT, which was first adopted by the US Navy, Earned Value Project Management was first adopted by the US Air Force in the early 1960s. The Earned Value of a project, also called BCWP (Budgeted Cost of Work Performed), is a measure of the project’s performance. It is essentially a relationship between the percentage of the project actually completed and the budget. It links project time and cost and is used to measure the health and status of the project. The foundation of EV management is the measurement and tracking of project work.

At the commencement of the project, the Budget at Completion (BAC) is assessed: the cost of the project and the value it will deliver on its completion. The Earned Value is calculated by the formula below:
Earned Value = Actual Percent Completed x Budget for the Task
The Planned Value, or BCWS (Budgeted Cost of Work Scheduled), is the value of the work that should have been completed by now and is calculated by the formula below:
Planned Value = Planned Percent Completed x Budget for the Task
The Earned Value is an important metric for the Project Manager, as it gives the value so far earned on the project compared to the amount that has been spent. It is thus a good measure for calculating the ROI and the efficiency of the project.

Example
AMCO Project Management Co has undertaken a project of $100,000. As of date, 20% of the task has actually been completed, against a planned percentage of 30%. The EV and PV for this project are calculated below.
EV = Actual Percent Complete x Budget for the Task = 20% of $100,000 = $20,000
PV = Planned Percent Complete x Budget for the Task = 30% of $100,000 = $30,000
Comparing the EV with the PV gives a good indication of the progress of the project.

When Not to Use Earned Value
Some Project Managers do not use Earned Value, for the wrong reasons. 
Some lack the will to do it, think of it as a complex process, and are happy to monitor their project through the present system. In most cases, the Project Managers are not aware of the actual cost of their project. One of the important considerations in whether to use the Earned Value system is the maturity of the project management system. This involves the use of, at the very minimum, basic project management disciplines: a reliable enterprise-wide DBMS, an estimation process, Work Breakdown Structures (WBS), a contract management system, a scheduling system, reporting systems for direct and indirect cost, an EAC process, a risk management system, a change management system, and a work authorization process. Earned Value project management should not be used in an immature project management system that lacks the basic disciplines required for its success. It requires the project manager to be able to define the project work in detail: at the very minimum, a detailed WBS and a good understanding of the work authorization process.

References
https://www.projectmanagement.com/contentPages/wiki.cfm?ID=711501
https://sitemate.com/us/resources/articles/finance/earned-value-calculation/
https://www.ipma-usa.org/articles/NotUsingEV.pdf (© 2014, Jim Baber, published at www.asapm.org, April 2014)
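The figures in the AMCO example above can be reproduced with a short sketch. The schedule variance and schedule performance index at the end are standard EVM formulas added for illustration; they are not part of the original example.

```python
# Earned-value sketch using the AMCO example from the text:
# BAC = $100,000, actual completion 20%, planned completion 30%.

def earned_value(bac, actual_pct):
    """EV (BCWP) = actual percent complete x budget at completion."""
    return actual_pct * bac

def planned_value(bac, planned_pct):
    """PV (BCWS) = planned percent complete x budget at completion."""
    return planned_pct * bac

bac = 100_000
ev = earned_value(bac, 0.20)   # $20,000, as in the example
pv = planned_value(bac, 0.30)  # $30,000, as in the example

# Standard EVM schedule indicators (added for illustration):
sv = ev - pv    # schedule variance: negative means behind schedule
spi = ev / pv   # schedule performance index: below 1 means behind schedule
```

Here SV is -$10,000 and SPI is about 0.67, which quantifies the "comparing EV with PV" step the text describes: the project has earned only two-thirds of the value it was planned to have earned by now.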
  17. A critical aspect of project management is the estimation of the cost and time taken for each activity of the project. These estimates determine the framework of the project, such as the schedule, scope, and cost baseline. The PMI Body of Knowledge uses the 3-point estimate, which is based on the PERT (Beta) or Triangular distribution.

Three-Point Estimate
It is used to estimate the duration and cost of the various activities of a project. It takes into consideration the optimistic, pessimistic, and most likely estimates of the activity, generally given by subject matter experts (SMEs). The optimistic estimate assumes no hindrances to the activity and is generally the best-case outcome, whereas the pessimistic estimate is the worst-case outcome, which assumes, per Murphy’s Law, the possibility of everything going wrong. These estimates are the extremes and need to be realistic. The third estimate is the most likely outcome and is the most realistic. As it would be erroneous to simply take the mean of the best-case and worst-case scenarios to arrive at the most likely case, the opinion of the subject matter expert is important. These three estimates form a triangular distribution, one of the popular distributions used in simulation software such as Oracle Crystal Ball.

Advantages and Disadvantages
The 3-point estimate is easy to use and can be applied to small projects where the range between the optimistic and pessimistic estimates is narrow. It is simple and does not require many calculations to arrive at the estimated duration or cost. The disadvantage is that in larger projects with a large number of interdependent activities, any inaccuracy in the estimates is magnified down the line.

PERT
PERT (Program Evaluation and Review Technique) is a project management technique that was developed by the United States Navy in the 1950s. 
It is very similar to the Critical Path Method, but differs in that CPM uses a single time estimate, which is generally accurate. Hence CPM is used for projects that have been done in the past, where an accurate time estimate is possible, whereas PERT is used in larger projects that are generally executed for the first time, where time estimates from past experience or data are not available. The PERT method takes the 3-point estimate, gives extra weight to the most likely estimate, and models the duration with a Beta distribution rather than a triangular one.

Advantages and Disadvantages
The advantage of PERT is that it is useful when there is little historical data available and helps the project manager make an informed estimate of the project timelines. It is therefore a good tool for project planning and optimal resource utilization where historical data is not available. Being subjective in nature, it is at times difficult to interpret, update, modify, and maintain. The biases and prejudices of the subject matter expert may affect the estimates and hence both the cost and schedule of the project.

Differences between the Triangular and PERT Distributions
In a triangular distribution, the expected time or cost is calculated from the area under the triangular distribution itself, whereas with a PERT estimate, probabilities for the estimated durations or costs are read from a normal distribution. The properties of a normal distribution are that about 68%, 95.4%, and 99.7% of the data lies within one, two, and three standard deviations of the mean, respectively. In the triangular distribution, the expected time or cost is calculated as the mean of the optimistic, most likely, and pessimistic estimates. In the PERT (Beta) distribution, the expected time or cost is calculated by giving a weight of 1 to the optimistic and pessimistic estimates and a weight of 4 to the most likely estimate. 
The standard deviation of the PERT distribution is (Pessimistic – Optimistic)/6. Considering the shape of the two distributions, the area under the curve near the tails is greater in a 3-point (triangular) distribution than in a PERT distribution, while the area near the expected value is greater in a PERT distribution. Hence a PERT distribution is likely to give a more accurate expected value than a 3-point distribution.

Examples of PERT and 3-Point Estimates
Let us assume the optimistic estimate is 10 days, the most likely estimate is 14 days, and the pessimistic estimate is 30 days. The expected duration with a triangular estimate would be (10 + 14 + 30)/3 = 18 days, whereas the expected duration with a PERT distribution would be (10 + 4×14 + 30)/6 = 96/6 = 16 days. The standard deviation would be (30 − 10)/6 = 3.33 days.

PERT is used in larger projects that are generally executed for the first time, where time estimates from past experience or data are not available. It would therefore be used in projects like landing a man on the Moon or Mars, the design and development of electric cars, augmented reality, etc. The US Navy first used PERT in the Polaris submarine missile program.

References
https://project-management.info/three-point-estimating-pert/
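The worked example above (optimistic 10 days, most likely 14, pessimistic 30) can be checked with a short helper that encodes the triangular mean, the PERT weighted mean, and the PERT standard deviation exactly as the text states them:

```python
# Three-point (triangular) and PERT (Beta) estimates, using the formulas
# and the 10/14/30-day example given in the text.

def triangular_estimate(optimistic, most_likely, pessimistic):
    """Triangular expected value: simple mean of the three estimates."""
    return (optimistic + most_likely + pessimistic) / 3

def pert_estimate(optimistic, most_likely, pessimistic):
    """PERT expected value: most likely estimate weighted four times."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def pert_std_dev(optimistic, pessimistic):
    """PERT standard deviation: (pessimistic - optimistic) / 6."""
    return (pessimistic - optimistic) / 6

o, m, p = 10, 14, 30
tri = triangular_estimate(o, m, p)  # 18 days, as in the example
pert = pert_estimate(o, m, p)       # 16 days, as in the example
sd = pert_std_dev(o, p)             # 3.33 days, as in the example
```

Because the pessimistic tail (30) sits far from the most likely value (14), the triangular mean is pulled two days further toward the tail than the PERT mean, which is the shape difference the text describes.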
  18. Bodystorming
Bodystorming is a hands-on method used as a creativity technique in interaction design. It helps the designer empathize with the end-user: the designers subject themselves to the experience in order to feel and understand it from the perspective of the customer, which instills a feeling of empathy. It can include role-play, drama, and simulation. The process also involves brainstorming with the body, which helps bring the simulation closer to reality and thus generates better ideas.

Advantages
Bodystorming builds empathy and reflection in the participants, connects the researcher and the users, and gives the designer/researcher first-hand experience and greater awareness.

Disadvantages
At times bodystorming may make some users uncomfortable, and when used in a sensitive context it can trigger bad reactions. In this fast-paced world, some designers find bodystorming time-consuming when similar results can be achieved using machine learning, artificial intelligence, and simulation models. A disadvantage of particular concern is that bodystorming requires experienced and well-trained designers; if done in a non-professional manner, wrong conclusions can be drawn.

Use Cases
Bodystorming is generally used in the design of physical spaces, such as the layout of a store or a clinic. It can also be used in the design of software and physical products. Interestingly, it is also used in scientific research, where it has been successfully used to teach, learn, and discover new interdisciplinary boundaries.

Example from Scientific Research
In 2009, Carl Flink, dance director of the American dance company Black Label Movement, and a biomedical engineer at the University of Minnesota got together and created a bodystorming system as part of the Moving Cell Project. 
Dance artists and scientists were brought together to rapidly prototype research hypotheses in biomedical engineering, using choreographic rules for the dancers to follow. Interestingly, bodystorming has been found to increase the speed of scientific research by giving the scientist a psychological sense of empathizing with a molecule. In 2018, bodystorming was used at the Neuro-Oncological Symposium to model recent research, and in 2019 it was used at the PSON Annual Investigators Meeting.

References
https://study.com/academy/lesson/bodystorming-in-design-thinking-definition-purpose-example.html
https://think.design/user-design-research/bodystorming/
https://en.wikipedia.org/wiki/Bodystorming
  19. Card Sorting

Card sorting is an information architecture technique for researching how people group information into categories. Information architecture is a discipline that studies the structure of information and includes taxonomies for various subjects. Card sorting is ideally suited to areas where information cannot be categorized from first principles and where there is a large variety in functions and content. It is a user experience design technique that uses either subject matter experts or end-users to create a category tree. It can be used to categorize and group information for a website, blog, newspaper, workflow, navigation menu, navigation path, etc., in the way that makes the most sense to the end-user.

The procedure is first to identify and write down the key concepts of the design on post-it notes or index cards. The experts or end-users then, either individually or as a group, sort these notes into categories. They may also establish the structure of, and relationships between, the information and the categories.

Card sorting is generally used when the number of items is large and there is no acceptable existing taxonomy, when the items are very similar to each other making segregation difficult, or when the heterogeneous nature of the end-users makes it difficult to differentiate the items into logically consistent groups. It is useful in the design of new websites or apps, or in improving existing ones. It is also used to assess how customers expect information to be grouped on a website, to compare how people look at different ideas or concepts, or to rank items based on designated rules.

Types of Card Sorting
Card sorting can be classified into open card sorting, closed card sorting, reverse card sorting, hybrid card sorting, and Delphi card sorting.
In open card sorting, the individuals or teams choose the names of the categories themselves. This helps determine the terms that users naturally apply when categorizing the items, and generates ideas for how to organize the information. Open card sorting can be used to compare a current website or app with the expectations of the customers, to identify the sections into which to categorize a blog, to group products in an online store, or to design the structure of a help file.

In closed card sorting, a fixed or predetermined list of categories is given to the individual or the team, who then sort the items into these predetermined categories. This method is evaluative in design, as it determines whether the category names are effective in organizing the cards. Closed card sorting can be used to identify the content that people use the most on a website or app, to assess the alignment between staff, customers, and the company's values, to obtain quick design feedback from customers, or to assign priorities to a product's or service's features.

In reverse card sorting, an existing architecture of categories is tested. This method is also called tree testing.

In hybrid card sorting, the participants sort the items into predetermined categories but have the option of adding their own categories.

Delphi card sorting. Celeste Paul created what is called the modified Delphi card sort. This method imitates the Delphi method in that after the first person completes the card sort, it is handed down the line to the other individuals. Each individual iterates on the previous structure, thereby improving and refining it.

Card Sorting vs Affinity Diagram
As per rastplatznotizen, “Card sorting finds common patterns in the way different people group information, while affinity diagramming obtains a consensus result.” Wikipedia describes card sorting as one way of producing an affinity diagram.
Donna Maurer Spencer uses external people or end-users of the site for card sorting, and her own design team for affinity mapping. One of the main differences between the two is that the affinity diagram is created by a team and may include a discussion before the content is organized.

References
https://en.wikipedia.org/wiki/Card_sorting
https://www.optimalworkshop.com/learn/101s/card-sorting/
https://twobenches.wordpress.com/2008/10/15/card-sort-vs-affinity-diagramming/
http://uxpod.com/card-sorting-an-interview-with-donna-maurer-spencer/
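A common way to analyze open card-sort results is a co-occurrence count: how often did each pair of cards land in the same group across participants? Pairs with high counts are strong candidates for the same category. The sketch below is illustrative only (the function name, card labels, and data are invented, not from the sources above):

```python
from itertools import combinations
from collections import Counter

def cooccurrence(sorts):
    """Count how often each pair of cards is placed in the same group
    across all participants' card sorts."""
    counts = Counter()
    for groups in sorts:                 # one participant's sort
        for group in groups:             # one group of cards
            for pair in combinations(sorted(group), 2):
                counts[pair] += 1
    return counts

# Two hypothetical participants sorting four website links
sorts = [
    [["Pricing", "Plans"], ["Blog", "Careers"]],
    [["Pricing", "Plans", "Blog"], ["Careers"]],
]
matrix = cooccurrence(sorts)
# ("Plans", "Pricing") co-occurs in both sorts, suggesting one category
print(matrix.most_common(3))
```

Pairs that co-occur for most participants become the backbone of the category tree; pairs that split evenly flag items whose placement the design team must decide.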
  20. Hick’s Law

Hick’s Law is named after British psychologist William Edmund Hick and American psychologist Ray Hyman. The law states, quite intuitively, that the more choices a person has, the longer the person will take to reach a decision. The beauty of the law is the mathematical equation that goes with it: the time a person takes to reach a decision is a logarithmic function of the number of choices. Because the function is logarithmic, the increase in time diminishes as the number of choices grows. A picture that comes to mind is a child selecting an ice cream at a Baskin-Robbins ice-cream shop.

Considering its implications, the law is most relevant when designing a short list and loses significance as the length of the list increases. Hick’s Law can therefore be used in the design of user experience (UX). Examples of short lists are action buttons or navigation menus in an app or website. A designer may think it wise to build more functionality into a website or app, but should use Hick’s Law to assess how many functions to include. A user who encounters too many options is likely to be overwhelmed by the choices available and leave the website quickly. This can be measured through metrics such as bounce rate, conversion rate, user engagement, time on site, and page views, using analytics software such as Google Analytics.

Hick’s Law helps in various design decisions, whether in physical products, such as the number of buttons on a TV remote or the number of controls on a washing machine, or in software products, such as the number of links in the header tab of a website. It can be applied to the tree structure of a menu to determine both the horizontal width and the vertical depth of the menu.
To apply Hick’s Law, designers should group the choices into categories, thereby reducing the number of choices presented at once. Designers can also obscure complexity by breaking a process down into manageable steps with fewer options at each stage.

The formula is RT = a + b log2(n), where:
RT – Reaction time
n – Number of stimuli
a and b – Constants that depend on the task/condition

Applicability of Hick’s Law to Project Management
Hick’s Law applies to choices that have an equal probability of selection. This means the user has no previous knowledge of the choices and decides based only on what is presented. If a user is intentionally looking for a specific choice, or has a certain bias, Hick’s Law will not apply; the time taken to act is then likely to be less than the logarithmic function predicts. In such cases, other decision-making tools such as the Pugh Matrix or the Analytic Hierarchy Process may be used.

References
https://www.interaction-design.org/literature/topics/hick-s-law
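The formula’s logarithmic shape is what makes it useful for menu design: every doubling of the option count adds only a constant increment b to the reaction time. A small sketch of the formula above (the constants a and b are illustrative placeholders, not empirical values):

```python
import math

def reaction_time(n, a=0.2, b=0.15):
    """Hick's law: RT = a + b * log2(n) for n equally likely choices.
    a and b are task-dependent constants; the defaults are illustrative."""
    return a + b * math.log2(n)

# Doubling the number of choices adds a constant b seconds,
# so each additional option costs less time than the one before it.
for n in (2, 4, 8, 16):
    print(n, round(reaction_time(n), 3))
```

This is why splitting 16 items into 4 categories of 4 helps: the user makes two cheap log2(4) decisions instead of one log2(16) scan of the full list.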
  21. Difference between Agile and Lean

While Agile and Lean are sometimes thought to be similar, they differ in their very approach.

Agile has more to do with project management in an environment of quick change and technology. It is generally used in software development: because of rapid improvements in hardware technology such as processor speeds and storage, software improvements automatically piggyback on that change, and end-users' requirements and expectations change with it. Traditional methods of project management have failed in this fast-changing environment. The Agile methodology is better suited to quick change, chaos, and uncertainty. Agile is implemented using frameworks like Scrum or Kanban.

Lean has more to do with a process, whether in manufacturing or service. Lean looks at the removal of waste in the process and delivers increased value, using management principles and processes. Lean is used to improve the quality or the speed of delivery of the product. It has seven principles and aims at changing the process from batch to continuous flow, or at least at reducing the batch size.
  22. Forward vs Backward Integration

Every company forms part of a value chain that runs from the source of supply to the end customer in order to deliver a product or a service to the end-user.

Vertical Integration. This is a business strategy in which a company integrates with other companies either up or down the value chain in order to take advantage of economies of scale, reduce risk, optimize output, or reduce cost. Forward integration is a manufacturer merging with or acquiring its distributors or retailers, whereas backward integration is a manufacturer merging with or acquiring its suppliers. Vertical integration is not suitable where the up-chain supplier or down-chain distributor is already operating at economies of scale.

Examples
A good example of forward integration is Disney acquiring over 300 retail stores that market products based on Disney movies and characters. Amazon and Ford Motors are good examples of backward integration: these companies acquired the affiliates of their key supply inputs. Ford Motors, for example, acquired suppliers of glass, metal, and rubber to increase efficiency, reduce cost, and reduce supply chain risk. Apple is a good example of both forward and backward integration: Apple has not only integrated with the manufacturers of the components that go into Apple products but has also integrated with retail through its exclusive Apple Retail Stores.

Advantages
Some of the advantages of integration are securing the value chain and minimizing disruption to the supply chain. Besides this, integration enables a unified culture, uniform quality norms, better communication, shared objectives, better planning, economies of scale, etc. Overall, vertical integration leads to higher productivity, efficiency, and lower costs.

Disadvantages
The disadvantages of vertical integration are that it kills competition and leads to oligopolistic or monopolistic companies, thereby increasing prices. It may also kill flexibility, creativity, and innovation, as the integrated company loses its ability to be agile.

Antitrust Laws
Some countries have antitrust laws that aim to prevent harmful vertical integration. Antitrust laws are in place to protect customers from businesses whose practices are predatory in nature and aim to kill the competition in order to raise prices.

References
https://www.differencebetween.com/difference-between-forward-and-vs-backward-integration/
https://www.investopedia.com/ask/answers/09/antitrust-law.asp
  23. The Shingo Model

Introduction. Shigeo Shingo contributed to tools such as Total Quality Management, Just in Time, Flow, SMED, Quality at Source, Genchi Genbutsu, and Lean. Along with Taiichi Ohno, he applied his concepts in the real world at Toyota. Over the years, management gave greater emphasis to the tools, i.e., the “How”, losing sight of the important “Why”. The Shingo Model was put together to link the “How” to the “Why”. It describes the relationship between principles, systems, tools, and results.

The Shingo Model is a step towards a culture of operational excellence. It helps an organization imbibe such a culture by aligning its systems with principles rather than focusing on the tools. The model comprises guiding principles and a transformational process. It validates the teaching of Stephen R. Covey, who said that values govern our actions but principles govern the consequences of our actions; in other words, principles predict outcomes. Rather than focusing on the tools, the model focuses on the underlying principles that operate behind them. It requires the leaders to anchor the mission, vision, and values of the organization to the principles of operational excellence, and then for the entire organization to do so.

Dimensions of Operational Excellence
The Shingo Model has four dimensions, viz. Cultural Enablers, Continuous Process Improvement, Enterprise Alignment, and Results. To realize the potential of the principles and the business outcomes, all four dimensions are important and require focus. These four dimensions cover five business areas, viz. customer relations, product or service development, operations, supply, and administration. Each dimension has guiding principles and supporting concepts; the supporting concepts are important but not as universal as the principles.

Culture. Sustainable results require keeping culture central to the guiding principles, systems, and tools.
Culture does not come through the “know-how”, the mere use of tools and techniques, but through the “know-why”, the principles behind their use. Leaders need first to experiment with and imbibe a principle, and then teach it to their teams. This empowers the team to take initiative, be creative, and move in the right direction.

Aligning the Systems with Principles. A bad system produces large variation in behavior, leading to variation in outcomes. The Shingo Model helps an organization align every system with the principles; by doing this, the behavior of the staff is influenced towards the ideal.

Tools. Since the guiding principles that form the culture of the organization are aligned with the systems, the staff can better understand the “why” behind the use of the tools.

Reference
The Shingo Model for Operational Excellence, Jon M. Huntsman School of Business, Utah State University.
  24. One Touch Exchange of Die (OTED)

OTED, as defined in the portal termwiki.com, is “a setup reduction goal that reduces the process to a single step or one touch.” The Encyclopedia of Production and Manufacturing Management defines it thus: “Setup times are non-value-adding activities. Lean manufacturing attacks setup times until they are reduced to zero or near zero. OTED refers to setup or changeover accomplished by the operator in one quick handling of the dies or tooling, usually in 10-15 seconds.” The portal leansixsigmadefinition.com defines OTED as “an exchange done with a single motion, rather than multiple steps and a changeover that takes less than 100 seconds”; other references mention it as being less than one minute.

The principle behind OTED and SMED is that set-up time in a process is a non-value-added step and leads to waste. It is related to the Economic Order Quantity (EOQ) and Economic Production Run models. In the EOQ model, the ordering cost and the inventory holding cost are balanced, whereas in the Economic Production Run model, the set-up cost and the inventory holding cost are balanced. The main idea from both models is that the smaller the ordering/set-up cost, the smaller the optimal order size/production run.

An important principle in Lean is the move from batch flow to continuous flow: moving from batch flow to single-piece flow to continuous flow reduces waste. Gradually reducing set-up times leads to smaller production runs, which naturally lead to smaller batch sizes and finally to continuous flow. Once this principle is understood, the endeavor is to reduce the set-up times in the process. At times continuous-improvement kaizen steps are taken to reduce the set-up time, and at times entire DMAIC projects are dedicated to it.
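The EOQ relationship makes the set-up-reduction argument quantitative: the optimal lot size is Q* = sqrt(2DS/H), where D is demand, S is the set-up (or ordering) cost, and H is the holding cost per unit, so cutting S by a factor of 100 cuts the optimal run size by a factor of 10. A rough numerical sketch (the demand and cost figures are invented for illustration, not from the sources cited here):

```python
import math

def economic_run_size(demand, setup_cost, holding_cost):
    """Classic EOQ/economic-production-run lot size: Q* = sqrt(2*D*S / H).
    A smaller set-up cost S yields a smaller optimal run, enabling flow."""
    return math.sqrt(2 * demand * setup_cost / holding_cost)

# Annual demand 10,000 units, holding cost $2/unit/year (illustrative)
big_setup = economic_run_size(10_000, setup_cost=500, holding_cost=2)  # ~2236 units
oted_setup = economic_run_size(10_000, setup_cost=5, holding_cost=2)   # ~224 units
print(round(big_setup), round(oted_setup))
```

This is why SMED and OTED matter: as S approaches zero, the economic batch size approaches one, which is exactly the single-piece flow that Lean aims for.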
This technique is used in conjunction with the Theory of Constraints and Value Stream Mapping: identify the bottleneck in the process, reduce its set-up time until its cycle time is less than the takt time, and move on to the next bottleneck. In this iterative process, at first there will be a need to use SMED principles to relieve the bottleneck; after a few iterations, however, further improvement will only be possible through the application of OTED principles.

Single Minute Exchange of Die (SMED) is a changeover in single-digit minutes, i.e., less than 10 minutes, whereas One Touch Exchange of Die (OTED) is a near-instantaneous changeover. “One touch” implies that the change can be done with a single motion rather than multiple steps. Implementing OTED is challenging and takes a process to best-in-class. It is important to realize, however, that making just one step in the process OTED-capable may not optimize the entire process. At times, a balance must be struck between a single machine with OTED capability but high operating and maintenance costs, and dedicated, low-tech machines in a work-cell layout.

References
https://www.velaction.com/one-touch-exchange-of-die-oted/
https://en.termwiki.com/EN/one-touch_exchange_of_die_(OTED)
One Touch Exchange of Dies (OTED). In: Swamidass P.M. (ed.) (2000) Encyclopedia of Production and Manufacturing Management. Springer, Boston, MA. https://doi.org/10.1007/1-4020-0612-8_644
https://www.leansixsigmadefinition.com/glossary/oted/
  25. Iceberg Theory

The Iceberg Theory is also called the theory of omission. It is a theory of writing popularized by journalist and writer Ernest Hemingway, who carried his journalistic style into his short stories. He focused on the immediate events evident on the surface and less on their context, causing the reader to understand the context of the story implicitly. His minimalistic style led each reader to contextualize the story within the reader's own framework.

Quoting from Oliver, Charles M.: “If a writer of prose knows enough of what he is writing about he may omit things that he knows and the reader, if the writer is writing truly enough, will have a feeling of those things as strongly as though the writer had stated them. The dignity of movement of an iceberg is due to only one-eighth of it being above water. A writer who omits things because he does not know them only makes hollow places in his writing.”

The key to the quote is that when the writer omits what he knows, the reader is likely to pick up strongly on the things omitted, which adds to the writer's authenticity. When a writer omits things he does not know, however, the reader is naturally unable to pick them up, and this leads to a hollowness in his writing.

Application of the Iceberg Theory
Leadership/Management. Questions such as “Are leaders born or made?” and “Is management an art or a science?” have been asked through the ages. Advances in knowledge and technology have shown that through training, patience, and discipline leaders are made, and that management is more of a science than an art. Leadership qualities span a vast array of attributes such as professional knowledge, integrity, and empathy. This inward focus on character and knowledge, and outward focus on people, takes time and effort to build, and makes a leader authentic.
Consultants. Successful consultants are those who have a wide range of knowledge and experience spanning various fields, including an understanding of human behavior. With processes crossing numerous functions, consultants who are experts in only a very narrow area are unable to optimize solutions.

Actors. Great actors spend months or years researching their parts. Good examples are Tom Hanks, who plays a physically disabled man of low intelligence in the movie “Forrest Gump”; Freddie Highmore, who plays a young autistic savant surgical resident in the series “The Good Doctor”; and Darsheel Safary, who plays a dyslexic child in “Taare Zameen Par”.

Teachers, Professors. Similarly, teachers and professors with a deep knowledge not only of their subject but of related subjects are generally more successful and popular.

Doctors. With the human body being a diverse spectrum of systems that interact with each other and with the external environment, a doctor with deeper knowledge not only of the human body but of other external factors is more likely to be successful.

Conclusion. The Iceberg Theory brings out the fact that when a person with in-depth knowledge across a wide array of subjects speaks, the information appears authentic. On the contrary, when a person with only superficial knowledge speaks, the lack of depth, confidence, and authenticity will betray him, and he will easily be found out.

References
https://en.wikipedia.org/wiki/Iceberg_theory
Oliver, Charles M. (1999). Ernest Hemingway A to Z: The Essential Reference to the Life and Work. New York: Checkmark. ISBN 0-8160-3467-2.