# All Activity

1. Last week
2. Q 507. Analysis of Means is a graphical variation of ANOVA; however, there are a few differences between the two. Under what conditions would you prefer to use ANOM instead of ANOVA? Note for website visitors - Two questions are asked every week on this platform. One on Tuesday and the other on Friday. All questions so far can be seen here - https://www.benchmarksixsigma.com/forum/lean-six-sigma-business-excellence-questions/ Please visit the forum home page at https://www.benchmarksixsigma.com/forum/ to respond to the latest question, open till the next Tuesday/Friday evening 5 PM as per Indian Standard Time. Questions launched on Tuesdays are open till Friday and questions launched on Friday are open till Tuesday. When you respond to this question, your answer will not be visible till it is reviewed. Only non-plagiarised (plagiarism below 5-10%) responses will be approved. If you have doubts about plagiarism, please check your answer with a plagiarism checker tool like https://smallseotools.com/plagiarism-checker/ before submitting. The best answer is always shown at the top among responses and the author finds honorable mention in our Business Excellence dictionary at https://www.benchmarksixsigma.com/forum/business-excellence-dictionary-glossary/ along with the related term
3. Going by the description, the example, and the methods to avoid the paradox, Rahul Arora's answer is selected as the best answer this week.
4. Lindley’s Paradox concerns the situation where a frequentist comparison of the null and alternative hypotheses yields a significant result, leading us to reject the null hypothesis even though a Bayesian analysis favours it. The example follows a binomial distribution: a survey of people who feel positive about the government. We take the null hypothesis Ho: p = 0.5 and the alternative Ha: p ≠ 0.5. We observed 20,000 cases, of which 9,800 were reported as positive. Here the p-value is approximately 0.005, so at the 5% significance level the null hypothesis is rejected, even though the observed proportion (0.49) sits very close to Ho. Lindley’s paradox can happen when:
   - The sample size is large
   - Ho is precise (a point hypothesis)
   - Ha is not a strong opposite but relatively diffuse, and not one-sided

   There are multiple ways to address such false positives in sampling:
   - In the above example it is not clear whether the sample needs gender segregation; we need to take samples that represent the true population.
   - Take a sub-sample from each sector to better approximate the true population.
   - When handling a large sample, the null hypothesis should not be an overly precise point but a more realistic interval.
   - The alternative hypothesis should pose a strong, ideally one-sided, contrast.
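To make the survey numbers concrete, here is a small Python sketch, assuming the figures above (9,800 positives out of 20,000, a point null Ho: p = 0.5) and a uniform prior on p under Ha. It computes the frequentist two-sided p-value alongside the Bayes factor in favour of Ho:

```python
import math

def log_binom_pmf(k, n, p):
    """Log of the binomial probability mass function."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

n, k = 20_000, 9_800   # survey size and positive responses (from the example)

# Frequentist: two-sided p-value via the normal approximation to Binomial(n, 0.5)
z = (k - n * 0.5) / math.sqrt(n * 0.25)
p_value = math.erfc(abs(z) / math.sqrt(2))

# Bayesian: Bayes factor BF01 = P(data | H0) / P(data | H1).
# Under H1 with a uniform prior on p, the marginal likelihood is 1 / (n + 1).
bf01 = math.exp(log_binom_pmf(k, n, 0.5)) * (n + 1)

print(f"p-value = {p_value:.4f}")   # below 0.05: rejects H0 at the 5% level
print(f"BF01    = {bf01:.2f}")      # above 1: the Bayes factor favours H0
```

The two measures disagree: the p-value rejects Ho while the Bayes factor leans towards it, which is the paradox in miniature.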
5. Lindley's paradox. The Lindley paradox is a perplexing situation in statistics where, depending on the choice of prior distribution, the frequentist and Bayesian approaches to a hypothesis testing problem give conflicting results. The outcome x of an experiment can be explained by two hypotheses, Ho and H1, together with a prior distribution π that represents uncertainty about which hypothesis is more accurate before the data are taken into account. The Lindley paradox appears when:
   1. A frequentist test of Ho finds the result x "significant", i.e., there is enough evidence to reject Ho at the 5% level, and
   2. the posterior probability of Ho given x is high, i.e., there is strong evidence that Ho agrees with x better than H1 does.
   These outcomes can occur simultaneously when Ho is highly specific, H1 is more diffuse, and the prior distribution does not strongly favour either, as in the following numerical example. In a particular city, 49,581 boys and 48,870 girls were born over a specific time frame. The observed proportion of male births is therefore 49,581/98,451 ≈ 0.5036. The proportion of male births is assumed to be a binomial variable with parameter θ, and we want to find out whether θ is 0.5 or some other value. In other words, the null hypothesis is Ho: θ = 0.5 and the alternative is H1: θ ≠ 0.5. The frequentist method for testing Ho is to calculate a p-value, the probability of observing a fraction of boys at least as large as the one seen, assuming Ho is true. Because of the large number of births, we can use a normal approximation for the number of male births, x ~ N(μ, σ²), with μ = np = 98,451 × 0.5 = 49,225.5 and σ² = np(1 − p) = 98,451 × 0.5 × 0.5 = 24,612.75. A frequentist would typically conduct a two-sided test, for which the p-value is p ≈ 2 × 0.0117 = 0.0235, because we would have been equally surprised to observe 49,581 female births, i.e. a male fraction of 0.4964.
   Since this p-value is smaller than the significance level α of 5%, the frequentist approach rejects Ho as disagreeing with the observed data.
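The birth-ratio numbers above can be checked directly. The sketch below assumes equal prior weight π(Ho) = π(H1) = 0.5 and a uniform prior on θ under H1, and computes both the frequentist p-value and the Bayesian posterior probability of Ho:

```python
import math

n, boys = 98_451, 49_581   # births in the example above

# Frequentist: two-sided p-value under H0: theta = 0.5 (normal approximation)
mu, var = n * 0.5, n * 0.25
z = (boys - mu) / math.sqrt(var)
p_value = math.erfc(abs(z) / math.sqrt(2))

# Bayesian: posterior P(H0 | x) with prior P(H0) = P(H1) = 0.5
# and theta ~ Uniform(0, 1) under H1, so P(x | H1) = 1 / (n + 1).
log_pmf = (math.lgamma(n + 1) - math.lgamma(boys + 1)
           - math.lgamma(n - boys + 1) + n * math.log(0.5))
p_x_h0 = math.exp(log_pmf)
p_x_h1 = 1.0 / (n + 1)
posterior_h0 = p_x_h0 / (p_x_h0 + p_x_h1)

print(f"p-value      = {p_value:.4f}")
print(f"P(H0 | data) = {posterior_h0:.2f}")
```

With these numbers the frequentist test rejects Ho (p ≈ 0.024) while the posterior probability of Ho is about 0.95; the two conclusions pull in opposite directions, which is exactly Lindley's paradox.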
6. Larger the sample size, the better it is:
   - A larger sample more closely approximates the population. Since the primary goal of inferential statistics is to generalize from a sample to a population, there is less inference involved when the sample size is larger.
   - A small sample carries a greater risk: it may be unrepresentative of the whole population simply by chance, so variability is greater when the sample size is small.
   - The standard error depends directly on the sample size: it is the standard deviation divided by the square root of the sample size.
   - If the sample size is large enough, the sampling distribution will be approximately normal, and a normal sampling distribution lets us make better inferences about the population from the sample.
   - A large sample gives more power and a smaller standard error.

   Lindley’s paradox highlights the conflict between Bayesian and frequentist evidence in hypothesis testing. As the sample size grows, we become more confident about our estimate and our confidence intervals shrink. When the sample size is very large, even a tiny error in the null hypothesis produces a highly statistically significant result. The way to avoid the paradox is not to mix Bayesian and frequentist evidence in the same hypothesis test, and to design the analysis with the large sample size in mind.
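The "small errors magnified by large samples" effect is easy to demonstrate. The sketch below uses an assumed observed proportion fixed at 0.501 against a point null of 0.5, and shows how the same tiny deviation goes from negligible to "highly significant" purely because n grows:

```python
import math

def two_sided_p(phat, p0, n):
    """Two-sided p-value for an observed proportion (normal approximation)."""
    z = (phat - p0) * math.sqrt(n) / math.sqrt(p0 * (1 - p0))
    return math.erfc(abs(z) / math.sqrt(2))

p0, phat = 0.5, 0.501   # a 0.1-percentage-point "error" in the null (assumed)

for n in (1_000, 100_000, 10_000_000):
    print(f"n = {n:>10,}  p-value = {two_sided_p(phat, p0, n):.6f}")
```

At n = 1,000 the deviation is statistically invisible; at n = 10,000,000 the same deviation is rejected at any conventional level.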
7. Lindley’s Paradox, also known as the Jeffreys–Lindley paradox, showcases the conflict between the frequentist and Bayesian approaches to hypothesis testing. It refers to the fact that as the sample size increases (with a fixed significance threshold, e.g. p < 0.05), p-values and Bayes factors can conflict: the p-value suggests that the null hypothesis (Ho) should be rejected, while the Bayes factor indicates that Ho out-predicts the alternative hypothesis (Ha). The result is that Ho is rejected under the frequentist approach and accepted under the Bayesian approach simultaneously.
   Let us try to understand this concept through an example. Suppose a bank that processes loan applications regularly receives all kinds of applications in two batches: one batch containing 25% home loan applications and a second batch containing 50% home loan applications. The bank wants to figure out which of these two batches a received set of applications belongs to. Say the bank takes a random sample of 48 applications and observes that 36 of them, i.e. 75%, are home loan applications. Going by this result alone, the applications look more like the second batch, the one containing 50% home loan applications.
   Now let us apply hypothesis testing, starting with the hypothesis that the applications belong to the first batch (25% home loans). The population parameters are:
   μ = np = 48 × 0.25 = 12
   σ = sqrt(np(1 − p)) = sqrt(48 × 0.25 × 0.75) = 3
   Using a ±3σ acceptance region (roughly the 99% confidence level), the range is 12 ± 3 × 3, i.e. from 3 to 21. The observed count of 36 is nowhere near this range, so we reject the null hypothesis that the applications belong to the batch containing 25% home loan applications.
   Next, let us also test whether the applications belong to the second batch (50% home loans). Again:
   μ = np = 48 × 0.50 = 24
   σ = sqrt(np(1 − p)) = sqrt(48 × 0.50 × 0.50) ≈ 3.5
   The ±3σ range is 24 ± 3 × 3.5, i.e. from 13.5 to 34.5, which again does not include the observed 36, so we also reject the null hypothesis that the applications belong to the second batch.
   Both candidate hypotheses end up rejected, which is the underlying premise of Lindley’s paradox. Ways to mitigate it include:
   - Lower the alpha level as a function of the sample size, so that the ratio of the critical value to the standard error increases as the sample size increases.
   - Examine the Bayes factor, the ratio of the probability of the data under the alternative and null hypotheses, i.e. p(data|Ha) / p(data|Ho); a value of 1 implies equal evidence for both hypotheses.
   - Adjust the alpha level so that the Bayes factor at the critical test-statistic value is not greater than 1.
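The two tests in the bank example can be reproduced in a few lines. This is a sketch using the normal approximation and the ±3σ cut-off described above; the batch proportions and sample counts are taken from the example:

```python
import math

n, observed = 48, 36   # sample size and home-loan applications seen

def z_score(observed, n, p):
    """Standardized distance of the observed count from Binomial(n, p)."""
    mu = n * p
    sigma = math.sqrt(n * p * (1 - p))
    return (observed - mu) / sigma

for p in (0.25, 0.50):
    z = z_score(observed, n, p)
    verdict = "reject" if abs(z) > 3 else "fail to reject"
    print(f"H0: batch with {p:.0%} home loans -> z = {z:.2f} ({verdict})")
```

Both z-scores exceed 3, so both candidate batches are rejected, reproducing the paradoxical outcome in the answer above.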
8. This was a tough one to answer. Rahul Arora has given the best answer to this question. Read through the other answers for more examples that highlight the differences between the Bayesian and frequentist approaches.
9. Q 506. There is a golden rule of sampling - larger the sample size better it is. However, if your sample size is too large, it leads to Lindley's Paradox - Small errors in the null hypothesis are magnified when large data sets are analyzed, leading to false but highly statistically significant results. Illustrate this paradox by providing examples. What are the ways to avoid it?
10. A Bayesian test interprets probability as a measure of the belief or confidence an individual holds about the likelihood of an event occurring, and prior beliefs about the event are updated as new information is revealed. It considers not only the likelihood of the occurrence but also the beliefs and experiences the individual may hold. This helps in arriving at a hypothesis based on historical trends and real-life experience, which is more practical. Frequentist inference interprets probability as the frequency of repeatable experiments and the gathering of information. It may be used when we have existing data, but the inferences are tied to the sample and depend on the particular data sample selected. Frequentist inference relies on the p-value and assumes the null hypothesis is true, whereas the Bayesian approach updates beliefs with new information derived while conducting the test.
    Let's take the example of tossing a coin. Under the frequentist approach, the likelihood of heads depends on the frequency of heads in the sample data versus the hypothesized value: parameters remain fixed and data is random, since probability is based on the frequency of repeated events. Under the Bayesian approach, observing heads 90 out of 100 times could also suggest something about the flipping process or the coin itself: parameters are random, data is fixed, and probability reflects a degree of certainty about parameter values. While the frequentist method is the usual choice for hypothesis testing, the Bayesian method is a viable alternative, since its inferences are not based only on sample data but also take observations and prior knowledge into account, weighing both the null and the alternative.
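The coin example above can be written as a conjugate Beta–Binomial update. This is a sketch with an assumed flat Beta(1, 1) prior and the 90-heads-in-100-flips data mentioned above:

```python
# Bayesian updating for a coin's heads probability with a Beta prior.
a, b = 1, 1            # flat Beta(1, 1) prior (an assumption, not from the answer)
heads, tails = 90, 10  # data from the example: 90 heads in 100 flips

a_post, b_post = a + heads, b + tails       # conjugate update -> Beta(91, 11)
posterior_mean = a_post / (a_post + b_post)

print(f"posterior mean P(heads) = {posterior_mean:.3f}")
```

The posterior mean of about 0.89 expresses the updated belief that the coin (or the flipping process) is far from fair, rather than a yes/no verdict on a null hypothesis.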
11. Frequentist approach: This is the model of statistics taught in most core-requirement courses and the approach most often used in A/B testing. The frequentist method makes predictions about the underlying truths of the experiment using data from the current experiment only. Example: asking "is this variation different from the control?" in a t-test; or saying that the probability of a coin landing heads is 0.2, meaning that if we were to flip the coin enough times, we would see heads 20% of the time.
    Bayesian approach: The Bayesian approach is a more bottom-up approach to data analysis. Past knowledge of similar experiments is encoded into a statistical device known as a prior, and this prior is combined with current experiment data to reach a conclusion on the test. Example: if X-company knows that by 5 PM there are 50 reservations, then they can predict that there will be around 250 covers for the night.
    Major differences between the frequentist and Bayesian approaches: Frequentist statistics never uses or calculates the probability of the hypothesis, while Bayesian statistics uses probabilities of both the data and the hypotheses. In the frequentist approach, the parameters being estimated are fixed variables; in the Bayesian approach, they are treated as random variables. In the Bayesian view a probability is assigned to a hypothesis, whereas in the frequentist view a hypothesis is tested without being assigned a probability.
12. Both the Bayesian and frequentist approaches to hypothesis testing are important and relevant methods for reasoning about an event. Depending on the decision-making approach, a choice can be made between these two well-known ways of testing hypotheses.
    Bayesian hypothesis testing assigns a probability to an unknown parameter based on available historical trends and gathered knowledge, then updates it with the most recent information about the parameter in question. It is like simulating with the in-scope data multiple times, which gradually provides more detail on the alternatives and firms up the probability of occurrence over time. In simple terms, it is how we normally form and affirm an opinion: start with a prior belief and keep improving it in light of new evidence. We regularly update our knowledge of the known facts: focus on what is known via knowledge and existing data, identify the unknown in scope for the decision, and carry out repeated actions that allow us to evolve and firm up a decision. Scientifically, it has three stages: specifying a prior probability distribution on the unknown parameter; summarizing the observed data with a likelihood function; and forming the posterior distribution, also known as the updated knowledge.
    The frequentist approach makes predictions based on data from the current experiment and is driven by what is known at a given point in time. The hypothesis test (null and alternative) applies statistical conclusions to the identified data and, via comparison with the p-value, recommends either rejection or non-rejection of the specific outcome. In simple terms, the frequentist approach asks for the probability of the same outcome if the condition is repeated again and again. This model uses only data from the current experiment when evaluating outcomes.
Statistically it has four stages: defining the assumptions (model), stating the null and alternative hypotheses, testing the data with the available tools, and using the outcome to form a mathematical conclusion. Despite having the same intent, the underlying theory and characteristics differentiate the two approaches; some differences are noted below:

| S. No | Bayesian Hypothesis Testing | Frequentist Approach |
|---|---|---|
| 1 | Derives probability by combining past knowledge with the outcome of the current experiment | Makes predictions purely from current experiment data; it is the long-run frequency of repeated experiments |
| 2 | Parameters are random variables and data is fixed | Parameters are fixed and data is replicated |
| 3 | Probability is assigned to both the hypothesis and past data | No probability is assigned to the hypothesis |
| 4 | Performs well with small data sets; one can start with as little as a single data point | Gives confidence with large, randomized data sets |
| 5 | Driven by the ability to form a prior model and relate it to differences in the answers | Easy to calculate and formulate; based on standard statistical analysis |
| 6 | Inferences lead to better communication of uncertainty | Based on fixed data, so it lacks flexibility in expressing uncertainty |
| 7 | Outcomes are easy to relate to, given the prior parameter and knowledge | The p-value is often difficult to interpret, which keeps some confusion about its absolute meaning |
| 8 | Comparison is with the prior probability, and the prior is subjective | Hypothesis testing compares against a p-value computed for data that has never been observed |

Let's also look at the Bayesian way of thinking compared to a frequentist way with an example. If someone asks what the probability is of getting the King of Hearts when picking one of two face-down cards, the answer will most often be 1/2, or 50%. Absolutely right!
But if the cards are shuffled again and you are asked to choose one card and the same question is raised, will the answer be different now? Some of us will say that since a card has now been picked, the King of Hearts has either been picked (100%) or not (0%); obviously there is no longer a choice between the two options, Heart or Club. That is how a frequentist thinks. Some of you will fall back on the prior knowledge of probability and say it is still 50% unless proven otherwise by a series of trials. You have just chosen the Bayesian way of thinking.
A similar real-life example is how a doctor examines a patient's health. In a typical Bayesian way of thinking, the doctor weighs prior diagnostics, does a fresh investigation, and then recommends medication based on the fresh assessment together with the prior results. A frequentist approach would rely on the current diagnostic results alone and prescribe the related medication.
These examples point to the following characteristics:
- The Bayesian approach tells you what you want to know: which option is better.
- The frequentist approach makes uncertainty difficult to interpret, since the comparison is with a p-value.
- Frequentists do not explicitly call out assumptions.
- The Bayesian method is immune to data peeking: whether you update the prior with every experiment or at any given point in the experiment makes no difference.
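The last point, that Bayesian updating is unaffected by when you look at the data, can be verified directly. A sketch, assuming a Beta prior on a success probability and a made-up outcome sequence, showing that one batch update and an observation-by-observation sequential update land on the same posterior:

```python
# Beta-Binomial updating: batch vs sequential gives the same posterior.
data = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]   # hypothetical outcomes (1 = success)

# One-shot batch update from a flat Beta(1, 1) prior
a_batch = 1 + sum(data)
b_batch = 1 + len(data) - sum(data)

# Sequential update, peeking at the posterior after every observation
a_seq, b_seq = 1, 1
for x in data:
    a_seq += x
    b_seq += 1 - x

print((a_batch, b_batch) == (a_seq, b_seq))  # True: peeking changes nothing
```

Because the conjugate update is just addition of counts, the order and timing of the updates cannot change the final posterior.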
13. Earlier
14. Bayesian vs Frequentist: the main difference between the two schools of thought is their reasoning about probability.

| Bayesian | Frequentist |
|---|---|
| Sees probability as a degree of certainty or uncertainty about a trial; it is a belief about an event based on prior information | Sees probability as the frequency of repeated trials |
| The p-value analogue is the probability of the hypothesis given the data, the inverse of the frequentist view | The p-value is the probability of data at least as extreme, under the assumption that the null hypothesis is true |
| There is no variation in the data; the variation is in the model/parameter | Variation is in the data, assessed through repeated measurement with a fixed model/parameter |
| Assumes 95% of the probability for the true value of the model lies within the credible region | In 95% of cases, the confidence interval will contain the true value of the model |
| Fixed credible region, varying true value | Varying confidence interval, fixed true value |
| A process to understand how plausible the hypothesis is given the observed result | A process to understand how extreme the observed result is under the hypothesis |
| Example: a friend draws a card from a 52-card deck and, looking at it, asks for the probability that it is a diamond. The Bayesian answer: 13 of the 52 cards are diamonds, so 25% | Example: since the card has already been drawn and the result is fixed, it is either a diamond or not, so the probability is either 0 or 1 |

Bayesian thinking is preferred for clinical trials, where results are updated based on prior data and the new state. Current AI models also rely on Bayes' theorem to learn from prior data and revise their results, which allows bias and noise to be corrected against prior data for greater accuracy.
But in uncertain cases where there is no prior data, a non-informative prior that treats all values as equally likely can itself bias the hypothesis, and the result for the given data will not always be correct.
15. Frequentist methodology: In the frequentist model, probability is the limit of the relative frequency of an event after many trials. This method calculates the probability that the current experiment would produce the same outcome if the same conditions were replicated again. When applying frequentist statistics, we come across the p-value: the calculated probability of obtaining an effect at least as extreme as the one in the sample data, assuming the null hypothesis is true. A small p-value means there is only a small chance that the results arose completely at random; a large p-value means the results have a high probability of being random. In short, the smaller the p-value, the more statistically significant the result. The p-value is often misinterpreted. It is the probability of a false positive based on the data in the given experiment. It does not tell the probability of a specific event actually happening, and it does not give the probability that a variant is better than the control. P-values are probability statements about the data sample, not about the hypothesis itself. So in an A/B test where the conversion rate of the variant is 10% higher than that of the control, a p-value of 0.01 would mean the observed result is statistically significant in that experiment.
    Bayesian methodology: Bayesian statistics is named after Thomas Bayes; here probability expresses a degree of belief in an event. The Bayesian method differs from the frequentist methodology in a number of ways. One of the biggest differences is that the probability directly expresses the chance of an event happening. Although the calculation can be extremely complex, the Bayesian method is arguably a simpler and more intuitive approach.
    In simple words, a Bayesian methodology will tell you the probability that a variant is better than the original, and vice versa. The Bayesian concept of probability is more conditional: it uses prior and posterior knowledge along with current experiment data to predict outcomes. Since we often have to make assumptions when running experiments, the Bayesian approach attempts to account for previous learnings from experiments already done and data that could influence the end results.
    At this point, many experimentation platforms use proprietary, hybrid models that combine a traditional statistical model, Bayesian or frequentist, with other technology such as machine learning. It is useful to have at least a basic understanding of the methodologies, but when it comes down to it, what actually matters is how well we understand the results from the experimentation platform of our choice. That understanding leads to a more data-driven assessment of risk, of what the organization is willing to accept, and of the predicted improvement to business outcomes. While we debate the pros and cons of Bayesian and frequentist statistical methodologies, experimentation stakeholders from multiple departments often simply want a decision, with no regard for the statistical methodology used.
    Examples. Bayesian way of thinking: Bayesian reasoning is used on various occasions in daily life, including medical testing for a rare disease, where we can estimate the probability of actually having the condition given that the test comes out positive. Frequentist way of thinking: frequentist probability is the long-run frequency of repeatable experiments. For example, saying that the probability of a coin landing heads is 0.5 means that if we were to flip the coin enough times, we would see heads 50% of the time.
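The rare-disease example can be made concrete with Bayes' rule. The numbers below are illustrative assumptions (0.1% prevalence, 99% sensitivity, 5% false-positive rate), not figures from the answer above:

```python
# P(disease | positive test) via Bayes' rule, with assumed illustrative rates.
prevalence = 0.001      # P(disease): 1 in 1,000 people (assumed)
sensitivity = 0.99      # P(positive | disease) (assumed)
false_positive = 0.05   # P(positive | no disease) (assumed)

p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive) = {p_disease_given_positive:.3f}")
```

Despite a 99%-sensitive test, the posterior probability is only about 2%, because the disease is rare; this prior-driven conclusion is exactly what the frequentist summary alone does not provide.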
16. There are two common approaches to statistical testing: the frequentist approach, which is based on the observation of data at a given moment or instance, and the Bayesian approach, which is essentially a forecasting approach and involves analyzing prior information. The frequentist approach is also described as experimental or inductive, as it relies on observations, while the Bayesian approach is theoretical or deductive, as it combines the information provided by the data with a priori knowledge from previous studies or expert opinions. A very simple example to understand both concepts: toss a coin 10 times. Under the frequentist approach, the theoretical probability of getting either a head or a tail is 0.5; if we get heads on 7 out of 10 tosses, the observed probability of heads is 7/10, i.e. 0.7. Now suppose prior experiments or expert experience tell us that heads will come up 6 out of 10 times, giving a prior probability of 0.6; we then compare the outcome of the experiment against this prior. Thus the objective of the frequentist approach is to explore the data collected in order to identify a significant effect that can only be explained by the hypothesis of the experiment, while the Bayesian approach focuses on comparing two hypotheses by weighing the data collected during the experiment against the prior information available, assessing the chances that one is true in comparison to the other.
As an organization performing experiments and relying on statistical analysis to interpret their results, it is important to understand the differences between the two approaches across several parameters:
- Analyzing the test data: The frequentist approach requires the experiment to be completed first, collecting sufficient samples before analyzing the data, which limits the test to an offline experiment. Bayesian analysis can be performed during the experiment while the data is being collected; it works as an online experiment, with the analysis results updated as each new batch of data is ingested.
- Sample size: The frequentist approach requires calculating the sample size prior to conducting the test, and the number of samples across test groups needs to be balanced. The Bayesian approach does not require a pre-defined sample size, nor the same number of samples in each test group, so it allows imbalanced sample sizes.
- Explaining test results: With the frequentist approach, conclusions take the form "we reject / fail to reject the hypothesis that group A is better than group B", based on the historical data collected during the test; the p-value quantifies the confidence of the business conclusion. The Bayesian approach introduces probability into the interpretation, e.g. "there is a 98% probability that group A is better than group B"; this probabilistic result quantifies the confidence of the business conclusion.
- Leveraging test results: The frequentist approach gives summary statistics of the samples collected during the experiment period, so it cannot be used to make conclusions about future, unseen data. The Bayesian approach leverages the parameters of the distribution fitted to the data and gives a posterior predictive distribution for unobserved future values based on the observed data.
- Duration of the test: In the frequentist approach, the duration of the experiment can be estimated from the designed sample size, so it is easy to estimate how long an experiment will run. In the Bayesian approach, the duration cannot be estimated in advance: more samples arriving every day yield more confident conclusions, but there is no fixed point at which a specific experiment is done.
- Granularity of input data: In the frequentist approach, the input data is at the most granular level, e.g. data collected per user/ID, and it depends on the duration of the test. In the Bayesian approach, the granularity depends on how frequently the test results are updated; for example, when testing click-through rate with results updated every 24 hours, one needs the daily totals of seen events and click events to compute the daily click-through rate.
- Multiple comparisons: The frequentist approach leverages the Bonferroni adjustment when multiple variants are tested at the same time. The Bayesian approach uses hierarchical Bayesian methods for cases involving multiple variants.
- Testing approach: The frequentist approach recommends different tests based on the distribution(s) that the variable(s) follow. The Bayesian approach leverages conjugate families for variables following different distributions; for example, click-through rate would use the Beta conjugate: set the prior parameters of the Beta distribution, update them with the collected data via Bayes' rule to obtain the posterior of the parameters, then draw samples from the posterior distribution and make inferences on the test results accordingly.
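The click-through-rate workflow in the last bullet can be written out directly. This is a sketch with assumed counts: a flat Beta(1, 1) prior, 120 clicks in 1,000 impressions for variant A and 90 in 1,000 for variant B; the standard library's `random.betavariate` does the posterior sampling:

```python
import random

rng = random.Random(0)  # seeded for reproducibility

def posterior(clicks, impressions, a0=1, b0=1):
    """Beta posterior parameters after a conjugate update of a Beta(a0, b0) prior."""
    return a0 + clicks, b0 + impressions - clicks

a_A, b_A = posterior(clicks=120, impressions=1_000)   # variant A (assumed counts)
a_B, b_B = posterior(clicks=90, impressions=1_000)    # variant B (assumed counts)

# Monte Carlo estimate of P(CTR_A > CTR_B) from the two posteriors
draws = 20_000
wins = sum(rng.betavariate(a_A, b_A) > rng.betavariate(a_B, b_B)
           for _ in range(draws))
print(f"P(A beats B) ~= {wins / draws:.3f}")
```

The output is exactly the Bayesian-style statement from the "Explaining test results" bullet: a probability that one variant beats the other, rather than a reject/fail-to-reject verdict.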
17. Bayesian vs frequentist approaches to hypothesis testing.
    Bayesian approach: asks how probable the hypothesis is, given the observed result. The p-value analogue for a Bayesian is an expression of a degree of belief in an event, based on prior knowledge (previous experiments) or personal belief: the probability of the hypothesis given the data, P(Hypothesis|Data).
    Frequentist approach: asks how unusual the observed result is under the given hypothesis. The p-value for a frequentist is the probability of the observed data, or more extreme data, under the assumption that the null hypothesis is true: P(Data|Hypothesis).
    Bayesian way of thinking:
    - Bayesian statistics deals with subjective belief.
    - It uses the idea of updating beliefs with new information when testing a hypothesis.
    - Prior belief × Bayes factor = posterior belief (the updated, new belief).
    - The observed data is fixed, and the model varies around it.
    - Given the observed data, there is a 95% probability that the true value of the parameter lies within the credible region.
    Frequentist way of thinking:
    - Frequentist statistics is about the absolute truth and cares about the true answer.
    - It does not involve opinion.
    - The model is fixed, and the data varies around it.
    - If the experiment is repeated multiple times, in 95% of the cases the computed confidence interval will contain the true value of the parameter.
18. Frequentist statistics, which could also be described as experimental or inductive, relies on repeated observations. In a frequentist model, probability is the limit of the relative frequency of an event after numerous trials. In other words, this approach estimates the probability that an experiment would produce the same outcomes if you were to replicate the same conditions again, and it uses only the data from the current experiment. When applying frequentist statistics, or using a tool built on a frequentist model, you'll often hear the term p-value. A p-value is the probability of observing an effect at least as extreme as the one in your sample data, assuming the null hypothesis is true. For illustration, a small p-value means that there is only a small chance that your results are purely random; a large p-value means your results have a high probability of being random and not due to anything you did in the experiment. In short, remember that the lower the p-value, the more statistically significant your results. Unfortunately, people frequently misinterpret what the p-value represents. The p-value is essentially the probability of a false positive based on the data in the experiment. It does not tell you the probability of a specific event actually happening, and it does not tell you the probability that a variant is better than the control. P-values are probability statements about the data sample, not about the hypothesis itself. So if you ran an A/B test where the conversion rate of the variant was 10% higher than that of the control, and the test had a p-value of 0.01, it would mean that the observed result is statistically significant.
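The A/B-test p-value mentioned above can be computed with a standard two-proportion z-test; the visit and conversion counts below are made-up numbers for illustration, not data from the post.

```python
from math import sqrt, erf

def two_proportion_p_value(x_a, n_a, x_b, n_b):
    """Two-sided z-test for a difference in conversion rates
    (normal approximation with a pooled standard error)."""
    p_a, p_b = x_a / n_a, x_b / n_b
    pooled = (x_a + x_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF via erf.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Control: 100/1000 conversions (10%); variant: 110/1000 (11%,
# a 10% relative lift) -- hypothetical counts.
p = two_proportion_p_value(100, 1000, 110, 1000)
```

Note that with these sample sizes the same 10% relative lift is nowhere near p = 0.01; whether a lift is significant depends on the traffic behind it, which is exactly why the p-value must be computed rather than read off the lift.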
Bayesian statistics, which is theoretical/deductive, enables us to combine the information provided by data with a priori knowledge from previous studies or expert opinions. Bayesian statistics is named after the statistician Thomas Bayes, who held that "probability is orderly opinion, and that inference from data is nothing other than the revision of such opinion in the light of relevant new information." With Bayesian statistics, probability indicates a degree of belief in an event. This approach differs from the frequentist methodology in several ways. One big difference is that probability expresses the chance of an event occurring. Although the computation can be complex, this approach tends to be simpler and more intuitive for A/B testing: put simply, a Bayesian methodology will tell you the probability that a variant is better than the original, or vice versa. The Bayesian conception of probability is also more conditional. It uses prior and posterior knowledge as well as current experiment data to predict outcomes. Since experiments don't happen in a vacuum, we constantly have to make assumptions when running them, and the Bayesian approach attempts to account for prior knowledge and data that could influence the end results.
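A minimal sketch of the Bayesian A/B claim above - estimating the probability that a variant beats the control - using Beta(1, 1) priors on each conversion rate and Monte Carlo draws from the two posteriors (the conversion counts are hypothetical):

```python
import random

def prob_variant_beats_control(conv_a, n_a, conv_b, n_b,
                               draws=20000, seed=0):
    """Monte Carlo estimate of P(rate_B > rate_A) under independent
    Beta(1, 1) priors updated with the observed conversion counts."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        rate_a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += rate_b > rate_a
    return wins / draws

# Control 100/1000 vs. variant 150/1000 -- hypothetical counts.
p_b_better = prob_variant_beats_control(100, 1000, 150, 1000)
```

The output is a direct answer to the business question ("how likely is it that B is better than A?"), which is the interpretability advantage the answer above describes.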
19. This was a slightly tricky question, as it is well known that AHP has an edge over the Pugh Matrix. There are a few answers worth reading - Piyush Jain, Rahul Arora, Ashish Kumar Sharma and Rakesh Chandra. The winner for this question is Ashish Kumar Sharma, for accurately summarizing the differences between the two tools and providing an example where the Pugh Matrix can be used over AHP.
22. Excellent question, Amith. Let me put it this way - a Quality Assurance professional who has Lean Six Sigma competence creates opportunities for growth outside the Quality Assurance domain. Many of the Lean Six Sigma tools originated in the Quality domain, but a true and successful LSS professional applies the learning to generate better business results than ever before. If you see the evolution of Lean Six Sigma in the image below, you will notice how things have progressed in this domain over the years. Hope this helps. Do ask a follow-up question if you want to understand more.
21. The Pugh Matrix and AHP are both decision-making methods that use semi-objective input and attempt to make quantifiable comparisons between alternative solutions. Both rely on creating criteria based on attributed customer value and subjective comparison, and the best framework for making a choice ultimately depends on the demands of the situation and the preferences of the people involved. Both involve multi-criteria evaluation of alternatives, wherein each alternative is given a rating against criteria of defined importance, and both are commonly used for project prioritization and selection. AHP is a structured means of modelling the decision at hand. It consists of an overall goal, a group of options or alternatives for reaching the goal, and a group of factors or criteria that relate the alternatives to the goal. Its disadvantages are the hidden assumptions it makes (such as consistency), the difficulty of using it when the number of criteria or alternatives is high (>7), and the difficulty of adding a new criterion or alternative or removing an existing one. The Pugh Matrix is easy to use and relies upon a series of pairwise comparisons between design candidates against a number of criteria or requirements. One advantage it has over other similar tools is its ability to handle many different decision criteria.
22. Q 505. What is the difference in Bayesian and Frequentist approach for hypothesis testing? Also explain Bayesian way of thinking and the Frequentist's way of thinking with simple examples. Note for website visitors - Two questions are asked every week on this platform. One on Tuesday and the other on Friday. All questions so far can be seen here - https://www.benchmarksixsigma.com/forum/lean-six-sigma-business-excellence-questions/ Please visit the forum home page at https://www.benchmarksixsigma.com/forum/ to respond to the latest question open till the next Tuesday/ Friday evening 5 PM as per Indian Standard Time. Questions launched on Tuesdays are open till Friday and questions launched on Friday are open till Tuesday. When you respond to this question, your answer will not be visible till it is reviewed. Only non-plagiarised (plagiarism below 5-10%) responses will be approved. If you have doubts about plagiarism, please check your answer with a plagiarism checker tool like https://smallseotools.com/plagiarism-checker/ before submitting. The best answer is always shown at the top among responses and the author finds honorable mention in our Business Excellence dictionary at https://www.benchmarksixsigma.com/forum/business-excellence-dictionary-glossary/ along with the related term
23. Pugh Matrix
The Pugh Matrix is a kind of prioritization or decision matrix that allows us to choose between a list of alternatives based on certain criteria. The ultimate aim is to narrow the options down to one choice; it is normally employed after we have captured the voice of the customer. The Pugh Matrix was developed by Stuart Pugh. It basically helps determine which items or potential solutions are more important or better than the other options. Several concepts are evaluated against a datum, which is the best current concept that the team has to date.
Steps involved in constructing a Pugh Matrix:
Step 1: List the important criteria.
Step 2: Select the datum/baseline.
Step 3: List the alternatives.
Step 4: Rank each alternative against the datum (+ve, -ve, S): +ve = better than the datum, -ve = worse than the datum, S = same as the datum.
Analytic Hierarchy Process (AHP)
AHP is a multiple-criteria decision-making technique based on mathematics and psychology, developed by Dr. Thomas L. Saaty in the 1970s.
Applications:
- Decisions regarding selection of the best alternative (e.g., selecting the most suitable vendor based on certain criteria).
- Prioritizing factors that may influence some phenomenon (e.g., factors influencing ROI due to high taxation).
- Comparisons while taking strategic decisions.
Advantages of using the Analytic Hierarchy Process (AHP) over the Pugh Matrix:
- In the Pugh Matrix we deal with discrete data, whereas in AHP we deal with continuous data. Continuous data gives better and more accurate results than discrete data.
- A decision may be biased when using the Pugh Matrix, depending on the criteria selection and judgment, whereas with AHP we can identify the best solution among the alternatives.
- With the Pugh Matrix there is a high chance of getting two optimal solutions with the same score, in which case it is difficult to pick the single best one; with AHP we will get the best solution, and for each criterion we will also see which option is best.
- The Pugh Matrix rating scale has 5 points, while the AHP scale has 17 points in total (the 1-9 scale plus reciprocals); a finer scale magnifies the differences in the result.
- Criteria and their importance in AHP differ from company to company.
Are there situations where the Pugh Matrix is preferred over AHP? A company's production PPE requirement increased because the number of operators increased. The company was not sure whether its current vendor was the best in the marketplace, so it identified criteria and their weightages to compare three vendors with the current one: higher weightage was given to the most important criteria and lower weightage to the least important. Using the Pugh Matrix, the decision-makers could decide which vendor best satisfied their criteria. The reason for using the Pugh Matrix over AHP in this case: an existing vendor is already providing PPE to the company and, due to the demand increase, the company is looking for an alternative that provides a better solution in terms of the selected criteria, so a natural datum for comparison already exists.
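The four Pugh Matrix steps listed above can be sketched in a few lines of code; the vendor names, criteria and ratings below are hypothetical, chosen to mirror the PPE vendor example.

```python
def pugh_scores(ratings):
    """ratings: {alternative: {criterion: '+', '-' or 'S'}}, where each
    rating compares the alternative to the datum per criterion.
    Returns {alternative: net score}: '+' counts +1, '-' counts -1, 'S' is 0."""
    value = {'+': 1, '-': -1, 'S': 0}
    return {alt: sum(value[r] for r in crit.values())
            for alt, crit in ratings.items()}

# The current vendor is the datum; candidates are rated against it.
vendors = {
    "Vendor A": {"cost": "+", "quality": "S", "delivery": "-"},
    "Vendor B": {"cost": "+", "quality": "+", "delivery": "S"},
}
scores = pugh_scores(vendors)   # Vendor B outranks Vendor A here
```

A weighted variant would multiply each rating by the criterion's weightage before summing, which is how the example above applied higher weightage to more important criteria.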
24. The Analytic Hierarchy Process (AHP) is an organized decision-making method that enables analysis of a problem where a choice must be made between available alternatives, by determining the criteria on which selection or prioritization will be done. It is a process of quantifying criteria and alternatives and relating each element to the desired outcome. The Pugh Matrix is a popular Six Sigma decision-making tool that awards weights to criteria and scores each alternative against them. It is a qualitative technique that allows stakeholders to choose between alternatives on the basis of scores. While both are used for the same purpose, preference and usage are largely driven by the stakeholders' approach, the problem at hand and the proof of concept needed. Let's look at some of the differences.
1. AHP: pairwise matching - compares two criteria at a time, and alternatives against each other. Pugh: each alternative is independently awarded a score and compared with the datum, against the weightage decided for each criterion.
2. AHP: quantitative method of evaluation. Pugh: qualitative method of awarding scores.
3. AHP: complex statistical method. Pugh: simple method based on ranking.
4. AHP: enables direct comparison between alternatives via defined criteria. Pugh: alternatives are not compared with each other.
5. AHP: a Consistency Index (<10%) aids validation of the comparison outcome, so improving a decision is possible. Pugh: no such validation or standard is possible.
6. AHP: based on continuous data (ratio). Pugh: based on discrete data (ordinal).
7. AHP: not a Lean Six Sigma quality tool. Pugh: an integral part of the Lean Six Sigma quality playbook.
8. AHP: very difficult and time-consuming, especially with more criteria. Pugh: the preferred tool for handling several criteria.
9. AHP: based on stimulus-response; a mathematical numeric relationship is established. Pugh: based on logical thinking, experience and the willingness of stakeholders.
10. AHP: individual and group decisions can be combined, and everyone has a strong reason to believe in the outcome. Pugh: stays subjective to a great extent, but enables understanding of each alternative compared to the existing one.
The complexity involved, and the ability to run AHP, differentiate the choice to be made in comparison with Pugh. AHP is more time-consuming and requires complex calculations to reach a conclusion; the ability to handle the data and use the method is key. Hence AHP is often less preferred than Pugh, mostly due to the simplicity factor - for example, scoring movies to judge the most preferred one for an annual award, compared to choosing IT software involving investment. In the first case, the experience and knowledge of the stakeholders with respect to the problem at hand govern the success and accuracy of the Pugh Matrix outcome; the Pugh Matrix handles such analysis well where that knowledge is available in abundance. Whereas when the client wants a more statistical proof of concept for deciding which software to install and why, AHP will be the preference.
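The AHP mechanics referenced above (pairwise comparison, priority weights, a Consistency Index below 10%) can be sketched with the common geometric-mean approximation of the priority vector; the 3x3 comparison matrix below is a made-up example, not from the post.

```python
from math import prod

def ahp_weights(matrix):
    """Priority vector via the geometric-mean method, plus the
    consistency ratio CR = CI / RI using Saaty's random indices."""
    n = len(matrix)
    geo = [prod(row) ** (1 / n) for row in matrix]
    total = sum(geo)
    weights = [g / total for g in geo]
    # Estimate lambda_max as the average of (A w)_i / w_i.
    lam = sum(sum(matrix[i][j] * weights[j] for j in range(n)) / weights[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random consistency index
    return weights, ci / ri

# Hypothetical pairwise judgments for three criteria
# (cost vs. quality vs. delivery) on the 1-9 scale with reciprocals.
A = [[1,     3,     5],
     [1 / 3, 1,     3],
     [1 / 5, 1 / 3, 1]]
weights, cr = ahp_weights(A)   # cr < 0.1 means the judgments are consistent
```

This illustrates point 5 above: unlike the Pugh Matrix, AHP yields a numeric check (the consistency ratio) that tells the team whether its pairwise judgments hang together.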
25. AHP refers to solving complex problems by organising and analysing the decision using mathematics and psychology. AHP provides a rational framework that quantifies the criteria against the different options for achieving the final goal and relates those elements to it. The main benefit of AHP is that it removes bias from the decision-making process, ensuring that the decision is based purely on values and priorities.
AHP process:
1. Define the decision problem.
2. Develop a conceptual framework.
3. Set up the decision hierarchy.
4. Collect data from experts.
5. Employ pairwise comparison.
6. Estimate the relative weights of the elements.
7. Calculate the degree of consistency.
8. Calculate the mean relative weight.
Other benefits:
1. AHP is a very simple and practical process.
2. AHP is used for decision-making on complex problems.
3. AHP actively nurtures intellectual discussion, debate and research in various studies and fields.
The Pugh Matrix is a decision-matrix method - a qualitative method to rank the multi-dimensional options of a given option set. A weighted decision matrix operates in the same way as any basic decision matrix but introduces the concept of weighting the criteria in order of importance. By doing this, a criterion is considered more important when its weight is on the higher end. The advantage is that it encourages self-reflection among the members of a design team and analysis of each criterion with minimum bias. The disadvantage is that there is a high possibility that the weightages turn out similar for many criteria, which makes prioritisation difficult.
Pugh Matrix vs. Analytic Hierarchy Process: The Pugh Matrix and AHP are the more commonly used methods. Both are decision-making methods that incorporate semi-objective input and attempt to make quantifiable comparisons between alternative solutions. Both rely on establishing criteria based on attributed customer value and subjective comparison, though they are not interchangeable. AHP is more compelling to use: it is better at forcing a decision when there is a lot of disagreement and uncertainty. But the Pugh Matrix is good at optimising and eliminating bad alternatives. There are many situations where the Pugh Matrix is preferred over AHP, the main one being that the Pugh Matrix has the ability to handle a large number of decision criteria.
26. Advantages of using AHP over the Pugh Matrix
Ø The Pugh Matrix needs a baseline (datum) to compare the other alternatives with; AHP doesn't require a datum.
Ø AHP provides a comparison between every pair of alternatives, while Pugh only compares each alternative with the datum, which does not provide a full landscape for making a decision.
Ø Criteria weightage in the Pugh Matrix is an estimate based on input from the VOC or from a survey; in AHP the weightage can be arrived at by calculation, using the pairwise comparison method.
Ø The Pugh Matrix uses discrete data for comparison, while AHP uses both discrete and continuous data, so precision in decision making is better.
AHP has the advantage in comparing alternatives for decision making and will be the best choice for strategic decisions on critical solutions. It has its own demerits due to the pairing and the scale: the scale allows only 1-9 and the user can't choose an in-between value, which matters in multi-attribute cases with minor differences between attributes. Also, when providing ratings on the 1-9 scale, the person ranking needs to keep the distances between attributes consistent; otherwise the Consistency Index will be higher, and it is difficult to get people to understand and correct this. AHP needs multiple calculations and data points to arrive at a consistent ratio and index for the comparison, and sometimes this becomes too heavy for a simple selection. In such cases the Pugh Matrix can be preferred. Likewise, when the criteria ratings come mostly from personal experience and a mathematical rationale is not important, the Pugh Matrix is preferred over AHP.