nilesh.ghm

Members
  • Content Count

    12
  • Joined

  • Last visited

Community Reputation

1 Average

About nilesh.ghm

  • Rank
    Active Member

Profile Information

  • Name
    Nilesh Gham
  • Company
    Westpoint Home
  • Designation
    Group Manager Continuous Improvement


  1. The bullwhip effect is defined as an extreme change in the supply position upstream (near the start of the process) that is generated by a small change in demand downstream (near the customer) in the supply chain. Below is an example taken from a case study involving children's diaper sales at a large conglomerate:
1. Data on purchase patterns suggested that there is no clear pattern in purchasing behaviour. Parents of very small children have very different diaper-buying patterns, and these change in fairly small increments owing to factors that are difficult to understand. They visit various stores, shop on Mondays instead of Thursdays, or buy two or three weeks' worth at one time because diapers are on sale. So actual demand never quite seemed to meet the forecast.
2. Meanwhile, the retailer had already ordered enough to keep a little extra safety stock in the storeroom, or perhaps ran a promotion without informing the distributor at all. This may cause a larger order to be placed than was originally forecast. These fluctuations affect forecasting for the distributor.
3. The wholesale distributor had, perhaps, forecast demand based on past orders from the retailer. However, those order patterns now show greater variation than the demand at the retailer's checkout counters, because of the safety and buffer stock that the retailer held on to. Sometimes safety stock accumulates because demand is less than the forecast, and this means that the retailer's next order is for less than its forecast, or perhaps it doesn't have to order at the usual time at all because there is already an excess of diapers, which would probably have to be sold off in a promotion. The combined effect of all these activities is that minuscule variations in end-user demand are completely blown up and magnified at the distributor.
4. Now, moving further upstream in the supply chain, the manufacturer of those products (diapers in this case) looks at the demand pattern from the distributor and makes its own forecasts, which display an even broader fluctuation and variability.
5. And this variability goes up the supply chain with even wider swings.
Thus, decentralized inventory planning can lead to the bullwhip effect and other problems, especially if customer demand isn't visible to all stages of the supply chain. The bullwhip effect hits supply chains that rely heavily and serially on forecasting, and it is especially exacerbated when each entity in the supply chain forecasts independently of the other players (a small simulation sketch appears at the end of this post). As the bullwhip effect is driven by inefficiencies in forecasting, the solution is clearly to replace the forecasts with actual demand information. This is not a simple matter to address, and the supply chain fraternity has, over time, come up with various techniques to let actual orders (not forecasts) drive production and distribution:
· One such system is the pull system, where items are produced only as demanded for use or to replace those taken for use. Material is controlled by withdrawing inventory as demanded by the using operations; it is held and not issued until a valid signal comes from the (end) user.
· In distribution systems, "Vendor Managed Inventory" is a good example of replenishing field warehouse inventories where replenishment decisions are made at the customer warehouse itself, and not at the central warehouse or the plant. In the example above, diaper stocks at the retailers were managed directly by the conglomerate. Manufacturers of snacks, chips, bread and soft drinks now routinely send their representatives to stock items at grocery and convenience stores.
· A very important and often neglected way to prevent the bullwhip effect is to have excellent communication between supply chain partners, rather than assuming that current orders form an absolutely reliable pattern.
· IT can be used to gather, integrate and share data that shows actual supply chain activity.
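Purely as an illustration (my own sketch, not from the case study), the following Python snippet shows how forecast-driven, order-up-to replenishment at each echelon amplifies a small demand variation as it travels upstream. The demand level, lead time and smoothing constant are assumed numbers chosen only for the example.

```python
import random
import statistics

random.seed(1)

def stage_orders(incoming, lead_time=2, alpha=0.4):
    # Order-up-to policy with an exponentially smoothed forecast: each period the
    # stage observes the demand placed on it, updates its forecast, and orders just
    # enough to restore its inventory position to (lead_time + 1) * forecast.
    forecast = incoming[0]
    position = (lead_time + 1) * forecast   # start exactly at the target
    orders = []
    for demand in incoming:
        forecast = alpha * demand + (1 - alpha) * forecast
        target = (lead_time + 1) * forecast
        position -= demand                   # demand depletes the inventory position
        order = max(0.0, target - position)  # orders cannot be negative
        position += order
        orders.append(order)
    return orders

# End-customer demand: small random variation around 100 units per week.
customer = [random.gauss(100, 5) for _ in range(200)]
retailer = stage_orders(customer)       # orders the retailer places on the distributor
distributor = stage_orders(retailer)    # orders the distributor places on the manufacturer
factory = stage_orders(distributor)     # the manufacturer's production requests

for name, series in [("customer demand", customer), ("retailer orders", retailer),
                     ("distributor orders", distributor), ("factory orders", factory)]:
    print(f"{name:20s} std dev = {statistics.stdev(series):6.1f}")
```

Running this, the standard deviation of orders grows at every echelon even though end-customer demand barely varies, which is exactly the amplification described above.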
  2. Given the current competitive environment, Mark is well aligned to scale ahead!!!
  3. "Adam Smith was wrong!! Adam Smith was wrong!! The best result will come, when the individual does what's good for himself, AND the group", a phrase by Russel Crowe portraying John Nash in the lovely movie A Beautiful Mind. And those very words describe everything that there is about the Nash Equilibrium. At the outset, Game theory tries to look at how individuals (or a collection of individuals) make choices that will, in turn affect other's choices. Nash Equilibrium makes reference to a condition in which each individual makes an optimized outcome, based on the expected decisions of others. Below are a few popular Game Theory Strategies/ Scenarios each one with of course, much more technical and mathematical workings of various scenarios involving the roles of individuals and the groups. Note that they are called "Games" but are correlated scenarios in the real. Many lab experiments have been conducted and outcomes deduced. 1. The Prisoner's Dilemma: In the prisoner’s dilemma, two suspected criminals are caught for a crime and are questioned in separate rooms. They cannot talk to each other. each suspect is told individually that if he/ she confesses AND gives a testimony against the other suspect, he/ she can go free, but if the doesn't co-operate, and the other suspect co-operates, he would be sent to prison for three years. In case they both confess, they would be given a two year sentence. If non of them confesses, they would be given 1 year in prison. Co-operating with each other is the best strategy for the two suspected criminals, when confronted with such a scene, research has shown that most people prefer to confess and give testify against the other, than to remain silent and take the chance that the other suspect would confess. 2. Matching Pennies: This is a game that has two players A & B, who place a coin on the table at the same time. they result depends on if the coins match. If both coins have heads (or tails), A wins and keeps B's coin too. If the heads/ tails don't match, B wins and keeps A's coin. 3. Deadlock A very trivial and common example of this game is of two nations have Weapons of mass destruction and are trying to reach agreement with each other on eliminating their stocks of such weapons. Here, co-operation would mean sticking to the agreement, while not agreeing would mean secretly retaining these weapons and not destroying them. The best outcome of such a a deadlock, is in despair, to secretly retain the weapons while the other nation destroys it's stockpile. Clearly, this will give a tremendous, but hidden advantage over the other nation, in case a war breaks out. Thus, sadly, the next best option is for both to retain their status of having Weapons of Mass Destruction 4. Cournot Competition Assume that two companies A and B manufacture the same product and can produce quantities in large and small numbers. If they both agree with each other to produce small numbers, then an overall lesser supply in the market would give a high price for the product and high profits for both the companies. However, if one of them does not agree and makes larges quantities, the market will be flooded with the product(s) at a low price and thus, reducing profits for both companies. However, if one of them agrees and makes less quantities, and the other makes large quantities, the one making less product would barely break even while the one making large quantities would have a much higher profit than if the both agreed 5. 
Co-ordination Assume two tech giants which are choosing and deciding to introducing a dashing new technology in microchip the would help generate millions of dollars in profits, or a revised version of a legacy technical that would generate less profit. If one of giants goes ahead with the new tech, the rate of adoption and use by customers would be much less, and as a consequence, this company would earn much less than if both of them decide on the same course of action. consider two technology giants who are deciding between introducing a radical new technology in memory chips that could earn them hundreds of millions in profits, or a revised version of an older technology that would generate much lower profits. If only one company decides to go ahead with the new technology, rate of adoption by consumers would be significantly lower, and as a result, it would earn lower profits if the two firms decide on the same course of action. 6. Centipede Game An extensive-form type of game, in which the two players get an alternating chance to take the bigger portion of a slowly rising pile of cash. The Centipede game happens sequentially, since each player makes his/ her move after the other, rather than at the same time. Every player is also aware of the stratagems chosen by the other players who played and performed this game before them. The game finishes as soon as one of the players take the case pile, with this player having the larger portion and the other player getting the much reduced portion. 7. Traveler's Dilemma In this game, a travelling company (say an airline) agrees to give compensation to two passengers with identical damages. The two passengers are separately asked to valuate the damages with a minimum of Rs. 5 and a maximum of Rs. 150. If both estimate the same value, the travelling company will give each of them that amount as reimbursement. However, if the estimates and values are different, the company will pay the lower estimate. with a small bonus of Rs. 5 to the passenger who wrote this lower estimate and a penal fine of Rs. 5 for the passenger who wrote the higher estimate. 8. Battle of the Sexes This is again a form of co-ordination game as in 5. above, however, with some dissimilarities in the pay offs. This type essential has a couple trying to co-ordinate their night out. They had been in agreement to go either to see the cricket match (the husband's preference) or a drama/ movie (the wife's preference), but, they have forgotten what they decided and to complicate the problem, they cannot communicate with each other. How should they manage this and where should they go? 9. Dictator Game In this simple game, say with two players A and B, A should decide how he would split a high cash prize with B, who has no inputs in A's decision. This many not be a game theory strategy, but it provides good insights into people's behavior and responses. 10.Peace-War This is an interesting variation of the prisoner's dilemma where the "co-operate or non-co-operate" is replaced by a "peace or war". A simple analogy would be to compare two companies who are competing in a price war. If both don't cut the price, they enjoy a prosperity in relation to each other, but a price war would reduce payoffs dramatically. However, if one company reduces prices, and the other does not, the first company would have a much higher profit, since it may be in a position to gain substantial market share, and the large volumes generate give lower production costs. 11. 
Volunteer's Dilemma Here, imagine one has undertaking volunteering work for the common good. The most dreaded outcome would be if none volunteers. Image a company where there are many accounting frauds, and the top management is not aware of it. Many junior workers in this department could be aware of these fraudulent activities but would hesitate to convey to the top management it could result in the employees who are involved in the fraudulent activities to be removed from duty possibly with court proceedings against them as well. Also, being a whistle blower could also have it's own repercussions, However, if nobody volunteers, the big fraud could result in the company's eventual demise with everyone loosing their dear jobs. Organizations can benefit greatly using these concepts, either individually or in combination with one another. The Nash Equilibrium has many applications from evolutionary biology, politics, International Relations, economics, etc.
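As a small worked check (my own sketch, using the sentence lengths described in the prisoner's dilemma above: 0, 1, 2 and 3 years), the following Python snippet brute-forces every pure-strategy outcome and flags the ones where neither suspect can reduce his own sentence by unilaterally switching, which is the Nash Equilibrium condition.

```python
from itertools import product

# Years in prison (lower is better) for every combination of choices,
# as described in the prisoner's dilemma above.
ACTIONS = ("confess", "silent")
YEARS = {
    ("confess", "confess"): (2, 2),
    ("confess", "silent"):  (0, 3),
    ("silent",  "confess"): (3, 0),
    ("silent",  "silent"):  (1, 1),
}

def is_nash(a, b):
    # Pure-strategy Nash equilibrium: neither suspect can do better (serve fewer
    # years) by unilaterally switching his own action.
    years_a, years_b = YEARS[(a, b)]
    a_cannot_improve = all(years_a <= YEARS[(alt, b)][0] for alt in ACTIONS)
    b_cannot_improve = all(years_b <= YEARS[(a, alt)][1] for alt in ACTIONS)
    return a_cannot_improve and b_cannot_improve

for a, b in product(ACTIONS, ACTIONS):
    tag = "  <- Nash equilibrium" if is_nash(a, b) else ""
    print(f"A {a:7s} / B {b:7s} -> {YEARS[(a, b)]}{tag}")
```

Only the confess/confess cell survives the check, which is why, even though staying silent is jointly better, the equilibrium outcome is for both suspects to confess.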
  4. The median is used as an indicator of central tendency when there is high skewness or asymmetry in the underlying data. Unlike the mean, which is pulled around by every extreme value, the median is largely insensitive to them. Typical examples of data sets where the median is preferred include salaries, real estate prices, etc. Given the above, the median is quite easily jumped upon as the preferred measure of evaluation, especially when data is found to be non-normal, on the grounds that it is more robust and leads one towards the related non-parametric tests. I would like to take this opportunity to mention a few points. When ANOVA and the other variance/mean "parametric" evaluation techniques were developed in the early part of the twentieth century, a few basic assumptions were considered important:
1. Observations in a data set need to be independent of each other; that is, having observed a value in one group has no effect on the likelihood of observing another value in either group.
2. The data sets being evaluated need to have equal variances.
3. The data need to be normally distributed.
Belief in these assumptions was sacrosanct; if your data sets did not have equal variances, for example, you simply did not, at the time, proceed with an F-test. However, the above assumptions were difficult to "test", basically owing to the limited computing power of the era. As time passed, in the 1950s to 1970s, "robustness" studies were performed, which shed good light on the above:
1. The independence assumption is extremely important; paired data correlate and covary, and this must be respected.
2. As long as the data sets have equal (or nearly equal) sample sizes, the equal-variance assumption can be forgiven.
3. Normality: this assumption can often be safely relaxed in practice; the robustness studies gave good confirmation that non-normality of the data (especially for the t-distribution) has a trivial effect on the results. These studies did, however, emphasize that it is the residuals obtained after fitting, rather than the raw data, that should be normally distributed.
Not to mention, the median and its accompanying tests do have good applications when the critical assumptions above are not met. So let's all think again before jumping on the median!! (A small illustration follows below.)
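As a quick, made-up illustration of why the median is preferred for skewed data such as salaries (the figures are invented for the example):

```python
import statistics

# Illustrative monthly salaries for a small team, in thousands of rupees.
# One very high value skews the distribution to the right.
salaries = [28, 30, 31, 33, 35, 36, 38, 40, 42, 250]

print("mean  :", statistics.mean(salaries))    # 56.3, pulled up by the single large value
print("median:", statistics.median(salaries))  # 35.5, still describes the typical salary
```

The single large salary drags the mean well above what most people earn, while the median barely moves; whether a parametric test is still valid, though, is better judged from the residuals, as noted above.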
  5. I would prefer the below order:
1. Resolution: Resolution represents the measurement system's capability to detect and indicate small changes in the characteristic being measured. Resolution is also known as discrimination. For example, a tape measure with graduations in cm cannot distinguish between measurements smaller than 1 cm, such as 2 mm. If an instrument is not able to measure the required attribute in the first place, there is no point in proceeding.
2. Bias: Bias can be defined as the difference between the mean of the observed results (say, of a standard) and the true/accepted reference value, and can be designated a systematic error. Bias is checked using calibration. Once an instrument has the necessary resolution, as above, bias is the second thing to check.
3. Linearity: The next question to answer is whether the procedure is able to deliver accurate and precise results (established above) over a range of values (higher or lower). Linearity answers this question by performing calibration over a range of the measurement.
4. Precision: Precision is also an important parameter needed to demonstrate that a procedure will provide valid results, and it helps quantify the random error. There may be a myriad of sources of variability in each measurement, many of them transient and not easily identified or controlled. Therefore a variety of approaches are used: repeatability, intermediate precision and reproducibility. Once all systematic errors are noted (and removed) through bias estimation, precision is the next thing to check.
5. Stability: Stability may be interpreted as the change in bias of a measurement system over time and usage when the system is used to measure a master part or standard. A stable measurement system is one whose variation is in statistical control. Once all of the above are established, stability can be used as an ongoing check.
(A small bias and repeatability calculation is sketched below.)
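As a minimal sketch of the bias and precision (repeatability) checks described above, using hypothetical repeated readings of a 25.000 mm reference standard taken by one appraiser on one gauge:

```python
import statistics

reference = 25.000  # accepted value of the standard, in mm
readings = [25.012, 25.008, 25.015, 25.010, 25.011, 25.009, 25.013, 25.010]

mean_reading = statistics.mean(readings)
bias = mean_reading - reference             # systematic error, normally checked via calibration
repeatability = statistics.stdev(readings)  # random error of the same appraiser and gauge

print(f"bias          = {bias:+.4f} mm")
print(f"repeatability = {repeatability:.4f} mm (one standard deviation)")
```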
  6. The Will Rogers phenomenon revolves around increasing "averages". It is named after the comedian Will Rogers, who quipped that when people left one state and moved to another, they raised the average intelligence level in both states. In the medical field, particularly in cancer research, such a phenomenon appears when a new clinical investigation uses more accurate cancer staging than earlier data; this results in a spurious, apparent increase in survival rates for each stage. At the outset, using this phenomenon can seem highly delusional. Going after a better average/mean may serve as a quick fix that hides the inherent variation and shows better "numbers". Window dressing like this can have serious consequences, which may crop up in the future with a much aggravated effect, possibly making the initial problem a bigger monster to solve. However, at times such a phenomenon may also be observed when there are genuine process upgrades, in which case the data going forward can show better averages and, eventually, a genuinely better population. (A small numeric illustration follows below.)
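A toy numeric illustration of the phenomenon (figures invented for the example): moving the lowest value of the stronger group into the weaker group raises both averages, even though no individual value has improved.

```python
import statistics

group_a = [90, 85, 80, 70]   # stronger group
group_b = [60, 55, 50, 45]   # weaker group

print("before:", statistics.mean(group_a), statistics.mean(group_b))  # 81.25 and 52.5

moved = min(group_a)         # 70: the lowest in A, yet higher than every value in B
group_a.remove(moved)
group_b.append(moved)

print("after :", statistics.mean(group_a), statistics.mean(group_b))  # 85 and 56
```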
  7. Although Mark's approach seems to fit perfectly with a well-seasoned Six Sigma professional's way of thinking, it also sounds a bit like having a hammer and seeing everything as a protruding nail. While I may not completely disagree with Mark, I would say that at times we really need to evaluate the impact of his approach. Sometimes a mature process, once thoroughly evaluated, can be allowed to be "let go".
  8. All three parameters revolve around, and are compared against, the "interest rate", which is a way of expressing the risk-free expected gain. Note the example below.
NPV: Assume that we have a risk-free interest rate of 15% and that we seek to have Rs. 100 at the end of the year (owing to, say, the project being completed). At that rate we would have to invest 100/1.15 = Rs. 86.96 at the start of the year. The difference, 100 – 86.96 = Rs. 13.04, is the gain earned over the year; the NPV (Net Present Value) itself is the difference between the present value of the future cash inflows and the cash outflows made today.
IRR: The IRR is the percentage rate of return calculated for each period invested (the above assumes only one period). It is essentially the discount rate that would make the NPV equal to zero. Thus, IRR uses the same formula, but it solves for the discount rate that gives an NPV of zero rather than for the NPV itself. In our case, since we have only one cash outflow and one inflow, the IRR is 15% itself, meaning: 100 – (86.96 x 1.15) = 0. Note how the 15% is expressed as 1.15, indicating the growth over the period. Now assume that we invest Rs. 86.96 and, owing to the beauty of the project, we further assume that we would receive Rs. 120 at the end of the year. Using Excel's IRR function, we get the IRR to be about 38%. This is clearly much higher than the nominal risk-free rate of 15%, and the project thus looks promising and good. IRR is, thus, a useful tool to help make an investment decision, compared against the interest rate and the cost of capital: if the IRR is greater than the cost of capital, it is a profitable investment; if the IRR is lower than the cost of capital, it will be a loss-making investment.
ROI: Even though NPV is a useful tool to evaluate an investment decision, it requires a lot of assumptions and estimates, leading to potential errors or misleading analysis. It is difficult to estimate the costs and returns with 100% certainty, and in practical terms the interest rate assumed may not remain fixed; thus the ROI is often the most commonly used measure of an investment. When you invest, some of the questions that you seek answers to are: Do the expected returns justify the risk or costs associated with the investment? Is the investment profitable? How much money will I make from my investment? ROI can be calculated in various ways; the most common method is net income as a percentage of net book value, which can be stated in accounting terms as total assets minus intangible assets and liabilities. ROI is simple to calculate but has limitations; for instance, it does not measure the return over shorter (say yearly) periods of time.
Thus, the major difference between IRR and ROI is that IRR takes into account the time value of money for each cash inflow/outflow, whereas ROI gives the total growth of an investment from beginning to end. Depending on the project and its duration, all three of NPV, IRR and ROI can be used. (A short worked calculation follows below.)
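A short worked calculation of the figures above (my own sketch; single period, 15% discount rate, Rs. 120 received after one year). With one outflow and one inflow the IRR can be computed directly; for longer cash-flow series one would solve NPV(rate) = 0 numerically, which is what Excel's IRR function does.

```python
rate = 0.15                       # assumed risk-free discount rate
investment = 100 / (1 + rate)     # amount invested today, about Rs. 86.96
inflow = 120.0                    # cash received at the end of year 1

npv = -investment + inflow / (1 + rate)    # discount the inflow back to today
irr = inflow / investment - 1              # the rate that makes the NPV exactly zero
roi = (inflow - investment) / investment   # simple return, ignores the timing of cash flows

print(f"NPV at 15% : {npv:8.2f}")   # about 17.39
print(f"IRR        : {irr:8.1%}")   # about 38.0%
print(f"ROI        : {roi:8.1%}")   # about 38.0%
```

Note that for a single period the IRR and the simple ROI coincide; they diverge once cash flows span several periods, because IRR discounts each one back to the present.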
  9. I would choose the second one, mainly owing to the statistical power I desire. Statistical power, i.e. the probability of rejecting an incorrect null hypothesis, is usually expressed as (1 - beta) and is computed assuming the alternative hypothesis is true!! Ways to increase such statistical power include:
1. The direction of the hypothesis: a uni-directional (one-sided) hypothesis is more powerful, as it concentrates the rejection region on only one side of the curve.
2. The alpha level I choose: I may get more statistical power if I choose a "relaxed" alpha (e.g. 0.1, which may be suitable for some experiments).
3. The sample size: a higher sample size reduces the standard error, making the curves narrower.
4. The standard deviation: a smaller standard deviation has the same effect as point 3.
Keeping an eye on the statistical power is a far better approach to choosing a sample size (as in point 3) than the ad hoc method in option 1. Even if option 1 is used owing to feasibility reasons, it is still better to keep an eye on the statistical power. Also, an inexperienced experimenter, given option 1, may lose track of the critical value as well!! A small power calculation is sketched below. Nilesh
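As a rough sketch of how power responds to the four levers listed above, here is a normal-approximation power calculation for a one-sample test of a mean; the effect size, standard deviation, sample sizes and alpha values are assumed purely for illustration.

```python
from statistics import NormalDist

def power(effect, sigma, n, alpha=0.05, one_sided=True):
    # Normal-approximation power: probability of rejecting H0 when the true mean
    # really sits 'effect' away from the hypothesised value.
    z_crit = NormalDist().inv_cdf(1 - alpha if one_sided else 1 - alpha / 2)
    shift = effect / (sigma / n ** 0.5)          # distance from H0 in standard-error units
    return 1 - NormalDist().cdf(z_crit - shift)  # 1 - beta

print(f"{power(0.5, 2.0, 30):.2f}  baseline (n=30, sigma=2, alpha=0.05, one-sided)")
print(f"{power(0.5, 2.0, 30, one_sided=False):.2f}  two-sided test -> less power")
print(f"{power(0.5, 2.0, 30, alpha=0.10):.2f}  relaxed alpha -> more power")
print(f"{power(0.5, 2.0, 60):.2f}  larger sample -> more power")
print(f"{power(0.5, 1.0, 30):.2f}  smaller standard deviation -> more power")
```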
  10. I guess the biggest concern would be data privacy. However, a few others below may also be relevant:
1. Design details of the system
2. Improper testing
3. Malware / virus attacks