Suresh Jayaram
Members
Content count: 179
Days won: 7
Suresh Jayaram last won the day on February 19, 2019
Suresh Jayaram had the most liked content!
Community reputation: 19 (Good)
Rank: Advanced Member
Company: Benchmark Six Sigma

Six sigma project for supermarkets
Suresh Jayaram replied to jagadeeswara_8's topic in Other Services
Six Sigma is all about reducing variation, reducing defects, and improving customer satisfaction. All businesses are made of processes: the hiring process, the ordering process, the inventory management process, etc. All processes have variation; it is not possible to have a single process with no variation in it. There is bound to be some variation, however small. The key question is: is this variation bigger than what your customers and/or business stakeholders can tolerate? So, identify the key processes, identify the amount of variation you currently see in these processes, and identify the amount of variation your customers can tolerate. Once you have this data, you will be able to point to projects that could benefit from Six Sigma. There are other threads in this group that talk about initiating projects; please do a search. One such search is found below: http://forum.benchmarksixsigma.com/forum/63improvementassignmentsupport/
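The comparison described above (observed process variation vs. what the customer can tolerate) can be sketched numerically with a capability index. Everything here is hypothetical: the cycle-time data and the spec limits of 3 and 5 days are made up purely for illustration.

```python
import statistics

# Hypothetical cycle times (days) observed for an ordering process
cycle_times = [4.1, 3.8, 4.5, 4.0, 3.9, 4.3, 4.2, 3.7, 4.4, 4.0]

# Assumed customer tolerance, expressed as spec limits
lsl, usl = 3.0, 5.0

mean = statistics.mean(cycle_times)
sd = statistics.stdev(cycle_times)  # sample standard deviation

# Cp compares the tolerance width to the process spread (6 sigma);
# Cp well above 1 suggests the variation fits inside the tolerance
cp = (usl - lsl) / (6 * sd)
print(f"mean={mean:.2f}, sd={sd:.2f}, Cp={cp:.2f}")
```

If Cp (or, better, Cpk, which also accounts for centering) comes out low, that process is a candidate for a Six Sigma project.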
Hi Shalini, Don't worry; these are advanced topics that are covered in MBB training. Most of the questions here refer to Multiple Linear Regression, which is not covered in BB training; only Simple Linear Regression is covered. You should be able to answer 1 and 4 based on BB training. Best Regards, SJ

Sequential Test Method For Process Capability Decisions
Suresh Jayaram replied to Kiran Varri's topic in General Discussions
Dear Shalini, This concept is very similar to Acceptance Sampling. Please refer to the following article: http://www.itl.nist.gov/div898/handbook/pmc/section2/pmc2.htm Best Regards, SJ. 
VSM: Differentiation between "Ideal State" & "Future State"
Suresh Jayaram replied to Kiran Varri's topic in Other Services
Dear Kiran, Ideal state refers to perfection. For example, if the current Work in Process (WIP) = 1000, the ideal WIP = 0. However, we cannot get to the ideal state within a short period of time. So, people plan for an interim future state usually six months or one year from now. This is called the future state. For this example, maybe the future state could show that we want to reduce WIP from 1000 to 500. After the end of one year, the future state becomes the new current state and we plan for another future state at that point with the goal of getting to the ideal state in the long run. Hope this helps, SJ 
Dear Kiran/Shalini, Both of you are right. If both are known, then we usually work with the defects, and if we drive the defects down to zero, then defectives also go down to zero. However, if we are working with only defectives, some people assume defects = defectives. As Kiran points out, technically, they are different. Please note that there are some practitioners who believe that we should only be working with defectives and not defects, because we may artificially claim a good process sigma level if we inflate the number of possible defects in a product/service. Best Regards, SJ
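A small numeric sketch of the defects-vs-defectives distinction. The batch of five forms and the four error opportunities per form are hypothetical; note how the DPMO figure depends directly on the opportunity count, which is exactly the inflation risk mentioned above.

```python
# Hypothetical batch of 5 forms, each checked for 4 possible error types.
# "Defects" counts every error; "defectives" counts forms with >= 1 error.
errors_per_form = [0, 2, 0, 1, 3]  # errors found on each form

defects = sum(errors_per_form)                     # total errors: 6
defectives = sum(1 for e in errors_per_form if e)  # bad forms: 3

units = len(errors_per_form)
opportunities = 4  # assumed defect opportunities per form

dpu = defects / units                                   # defects per unit
dpmo = defects / (units * opportunities) * 1_000_000    # shrinks if we inflate opportunities
proportion_defective = defectives / units

print(defects, defectives, dpu, dpmo, proportion_defective)
```

Driving defects to zero forces defectives to zero as well, but the converse mapping between the two metrics depends on the assumed opportunity count.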

Sequential Test Method For Process Capability Decisions
Suresh Jayaram replied to Kiran Varri's topic in General Discussions
Hi Kiran, If you look at the confidence interval of Cp/Cpk (process capability index), you will find that the confidence interval is pretty wide when the sample size is small. For example, the confidence interval is around +/- 0.4 when the sample size is 30. If you calculate the process capability index as 1.0, it could be as low as 0.6 or as high as 1.4. If you use a smaller number of samples as recommended by the Sequential Test Method, say 15, then the confidence interval would be a lot wider. This would mean that the error in analysis could be a lot higher. A process that is shown to be capable may in fact not be capable. So, I would recommend using this method with caution. Best Regards, SJ
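The widening of the interval as the sample size drops can be sketched for Cp, whose confidence interval follows from the chi-square distribution of the sample variance. (This is the Cp interval only; Cpk intervals are wider still, which is consistent with the rough +/- 0.4 figure quoted above.)

```python
from scipy.stats import chi2

def cp_confidence_interval(cp_hat, n, alpha=0.05):
    """Approximate 100*(1-alpha)% confidence interval for Cp, based on
    the chi-square distribution of the sample variance (n-1 d.o.f.)."""
    df = n - 1
    lower = cp_hat * (chi2.ppf(alpha / 2, df) / df) ** 0.5
    upper = cp_hat * (chi2.ppf(1 - alpha / 2, df) / df) ** 0.5
    return lower, upper

# The interval widens sharply as n drops, as the post describes
print(cp_confidence_interval(1.0, 30))  # roughly (0.74, 1.26)
print(cp_confidence_interval(1.0, 15))  # roughly (0.63, 1.37)
```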
Help with Probability Distribution function
Suresh Jayaram replied to Kiran Varri's topic in General Discussions
Dear Kiran, We need to differentiate between continuous and discrete variables. Let's first look at the discrete case: for example, tossing a coin. The probability of getting a head is 0.5 and the probability of getting a tail is 0.5. The function assigning the value 0.5 to each outcome is referred to as the probability mass function. For continuous variables, the analogue of the probability mass function is the probability density function. However, the value of the probability density function does not equal the probability of getting a value in the continuous case. In fact, the probability of getting exactly one value for a continuous distribution is always 0. The area under the probability density function gives the probability in the continuous case. For example, if we have normally distributed data with mean = 20 and standard deviation = 5, then the probability of getting exactly 20, P(20) = 0. We can, however, calculate the probability of getting values between 19 and 21, represented as P(19 < X < 21). If we look at the probability of getting all values less than 21, i.e. P(X < 21), this function is called the cumulative distribution function. It is the area to the left of that value under the probability density function. This can also be represented as CDF(21). Note: P(19 < X < 21) = CDF(21) - CDF(19). The area between 19 and 21 is equal to the total area to the left of 21 minus the total area to the left of 19. In most cases, for continuous distributions, we usually work with areas, so CDF values are more important than PDF values. However, when we plot the distribution functions, we usually plot the PDF, as its shape is easier to recognize compared to the CDF. It is hard to explain this without a figure. Hope this helps. SJ.
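The numbers in that explanation (mean = 20, standard deviation = 5, the interval from 19 to 21) can be reproduced directly, for example with `scipy.stats`:

```python
from scipy.stats import norm

dist = norm(20, 5)  # mean = 20, standard deviation = 5

# The PDF value at 20 is a density, NOT a probability; P(X = 20) is 0
print(dist.pdf(20))  # ~0.0798

# P(19 < X < 21) = CDF(21) - CDF(19): the area between 19 and 21
p = dist.cdf(21) - dist.cdf(19)
print(p)  # ~0.159
```

Plotting `dist.pdf` over a grid of x values produces the familiar bell curve mentioned at the end of the post.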
Need help with choosing the right test
Suresh Jayaram replied to siv_santh's topic in General Discussions
Hi, I am not sure I could comment on the temporary vs. permanent solution with this exercise. You can only state whether there is or is not a statistical difference between the two proportions. For example, proportion of licenses with reconciliation is statistically similar to the proportion of licenses without reconciliation. Statistically, there is a difference between license policy of 60 days vs. 90 days. You will have to determine if the change you detected is practically important and how you can establish a good control plan so that the process will work over the long term. Best Regards, SJ. 
Calculating Cost of a process
Suresh Jayaram replied to Adhiraj Bandyopadhyay's topic in Other Services
Dear Adhiraj, You can calculate incremental costs and claim these as benefits for your project. Cost A: the cost that would impact your organization assuming you did not do this project. Cost B: the cost that would impact your organization assuming you did this project. The difference between A and B would be your project benefits. For example: if you did not do the project, the revenue would increase by 2% and cost by 1%. If you did the project, the revenue would increase by 5% and cost by 3%. Case A: 1% increase in margin. Case B: 2% increase in margin. So, you can claim 1% as a result of your project. Hope this helps! SJ.
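The incremental-benefit idea can be made concrete with absolute figures. The baseline revenue and cost below are hypothetical; the percentages are the ones from the post.

```python
# Hypothetical baseline, all figures in the same currency units
revenue = 1_000_000
cost = 600_000

# Scenario A: without the project (revenue +2%, cost +1%)
margin_without = revenue * 1.02 - cost * 1.01

# Scenario B: with the project (revenue +5%, cost +3%)
margin_with = revenue * 1.05 - cost * 1.03

# Only the difference between the two scenarios is claimable
project_benefit = margin_with - margin_without
print(project_benefit)
```

Only the incremental margin (Scenario B minus Scenario A) is claimed as the project benefit, not the whole improvement in Scenario B.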
Regression analysis for optimisation
Suresh Jayaram replied to Satheesh.nt's topic in General Discussions
Dear Satheesh, A regression model is nothing but a relationship between your input(s) and your output. It is primarily used when your input(s) and output are continuous. Typically, we build a linear model between the input(s) and output. If you have one input and one output, we use simple regression of the form Y = m*X + c, where X is your input and Y is your output. For your question, if I understand it correctly, you have three inputs, X1, X2, X3, in which case we would use multiple regression, where the model would be: Y = m1*X1 + m2*X2 + m3*X3 + c. Once you build a regression model and check that you have a decent model between your inputs and output, you can use it for prediction or optimization. Make sure you check the adjusted R^2 value and the appropriate P-values, and also make sure that the model assumptions are satisfied. One of the most important requirements is that X1, X2, and X3 should not be collinear. Once you have a model, you can then use it for optimization by adding additional constraints on X1, X2, and X3 (if appropriate). This optimization can be done using Linear Programming (LP); a reference to LP is shown below. Reference: http://en.wikipedia.org/wiki/Linear_programming. Hope this helps, SJ
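A minimal sketch of the two steps described above, using synthetic data (the true coefficients 2.0, -1.5, 0.5 and the 0-to-10 bounds are made up for illustration): fit Y = m1*X1 + m2*X2 + m3*X3 + c by least squares, then maximize the fitted Y under bounds on the X's with linear programming.

```python
import numpy as np
from scipy.optimize import linprog

# Synthetic data: 3 continuous inputs, 1 continuous output
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 3))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + 7.0 + rng.normal(0, 0.1, 50)

# Fit Y = m1*X1 + m2*X2 + m3*X3 + c by least squares
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
m1, m2, m3, c = coef
print(m1, m2, m3, c)  # recovered close to 2.0, -1.5, 0.5, 7.0

# LP step: maximize the fitted Y subject to 0 <= Xi <= 10
# (linprog minimizes, so negate the objective coefficients)
res = linprog(c=-coef[:3], bounds=[(0, 10)] * 3)
print(res.x)  # pushes X1 and X3 to 10, X2 to 0
```

In a real project you would also examine adjusted R^2, P-values, residual diagnostics, and collinearity (e.g. variance inflation factors) before trusting the model for optimization.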
Books to go for regarding Six Sigma
Suresh Jayaram replied to zuluzulu2002's topic in General Discussions
Dear Shanka, If you look at the right hand top corner of this website, you will find a search feature. If you search for Lean Six Sigma as the keywords, you will find several articles on this topic. Best Regards, SJ 
Need help with choosing the right test
Suresh Jayaram replied to siv_santh's topic in General Discussions
Dear Shiva, Just wanted to understand why you indicate that a 2-proportion test is not relevant here. Could you not do a 2-proportion test with the first case (90-day policy) and determine how many licenses would be freed (events) vs. the total number of licenses (trials)? Similarly, you can calculate for the second case (60-day policy) how many licenses would be freed (events) vs. the total number of licenses (trials). Using this information, you should be able to use the 2-proportion test and get an appropriate value of P based on the discrete distribution (in this case a Fisher's test). As an approximation, you could use the 2-sample t test if the proportions you calculated above are not close to 0 or 1. SJ
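The events/trials framing above maps directly onto a 2x2 table for Fisher's exact test. The counts below are hypothetical, since the post does not give the actual data:

```python
from scipy.stats import fisher_exact

# Hypothetical counts: licenses freed vs. not freed under each policy
#                 freed   not freed
# 90-day policy:    45        155
# 60-day policy:    70        130
table = [[45, 155], [70, 130]]

odds_ratio, p_value = fisher_exact(table)
print(p_value)  # a small p-value suggests the two proportions differ
```

With these made-up counts the proportions are 0.225 vs. 0.35, and the test flags a statistically significant difference; whether that difference matters practically is a separate judgment.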
What is Box-Cox transformation?
Suresh Jayaram replied to AnishMohandas's topic in General Discussions
Dear Anish, There are several approaches to solve any problem. You will have to pick the most appropriate one depending on your situation. The Box-Cox transformation is a family of transformations that help transform data from non-normal to normal. This family includes, among others, taking the reciprocal of the data set, taking the square of the data set, or taking the logarithm of the data, etc., in order to make it normal. Box-Cox tries different transformations and then picks the one that best makes the data normal. There is NO guarantee that if you apply the Box-Cox transformation, your data will become normal. One application of the Box-Cox transformation is in Control Charts for Individuals (for example, the I-MR Chart). In order to use I-MR charts, data has to be normal. If it is not normal, then you may try using a Box-Cox transformation to make it normal before using the I-MR chart. When you are doing hypothesis testing, say comparing two populations, it may be preferable to use non-parametric tests rather than apply Box-Cox transformations (please see one of my earlier posts on this topic). Hope this helps. SJ.
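A quick sketch of picking the transformation automatically and then re-checking normality, using synthetic lognormal data (for which the best Box-Cox lambda should come out near 0, i.e. roughly a log transform):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
skewed = rng.lognormal(mean=0.0, sigma=0.8, size=200)  # clearly non-normal

# boxcox searches over lambda and returns the best-fitting transform
transformed, lam = stats.boxcox(skewed)
print(lam)  # near 0 for lognormal data

# There is no guarantee of normality, so always re-test after transforming
_, p_before = stats.shapiro(skewed)
_, p_after = stats.shapiro(transformed)
print(p_before, p_after)  # normality improves markedly here
```

Note that `boxcox` requires strictly positive data; for data containing zeros or negatives, a shifted variant (or the Johnson transformation) would be needed.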
What's Wrong with Specification Limits?
Suresh Jayaram replied to Suresh Jayaram's topic in General Discussions
Dear LR, It is okay to take action when points go beyond the control limits (especially when there is a special cause which causes the points to fall out of the control limits). In order to compute process capability, we compare the process performance to specification limits. The control limits are an indication of process performance. When we talk about specification limits, we implicitly use the LSL or USL or both. However, when we calculate DPMO or sigma level using this approach, we assume any parts that fall outside the LSL/USL are defects, which causes some problems, as you have correctly identified. It may be better to use the Taguchi loss function to characterize quality/cost rather than use specification limits. SJ
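The Taguchi loss function mentioned above treats loss as growing quadratically with distance from the target, instead of jumping from zero to full loss at the spec limit. A sketch with made-up numbers (target 10 mm, an assumed $50 loss at the spec limit of 10.5 mm):

```python
def taguchi_loss(y, target, k):
    """Quadratic loss L = k * (y - target)^2: cost grows smoothly with
    deviation from target, unlike the all-or-nothing spec-limit view."""
    return k * (y - target) ** 2

# Calibrate k so that loss = $50 at the spec limit (hypothetical figures)
k = 50 / (10.5 - 10.0) ** 2  # 200 $/mm^2

print(taguchi_loss(10.0, 10.0, k))  # zero loss exactly on target
print(taguchi_loss(10.4, 10.0, k))  # substantial loss even inside the specs
print(taguchi_loss(10.5, 10.0, k))  # full loss at the spec limit
```

Under this view, a part measuring 10.4 mm is nearly as costly as one at 10.5 mm, whereas the spec-limit view would call the first "good" and the second "defective".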
Dear SK, These are two different tools within Six Sigma with totally different objectives, even though they do have some similarities. QFD (Quality Function Deployment) is a tool that is used to translate the Voice of the Customer (VOC) into design requirements. It basically translates what the customers desire into specific measurable metrics with well-defined targets. QFD is one of the primary tools in Design for Six Sigma (DFSS) projects. FDM (Function Deployment Matrix) is a tool that is also called the Cause and Effect Matrix (C&E Matrix) or the XY matrix. This tool can be used to map which X's have a big impact on the output Y (usually based on team perception). So, if you have a large number of X's, you can use the FDM tool to narrow down the list of X's that may be important to investigate further. The similarity between QFD and FDM is that one of the houses of QFD (the central house), which determines which design specifications are important, uses a relationship matrix that is very similar to the FDM methodology, usually using a 1-3-9 scale. QFD does a lot more than what an FDM does; for example, it can be used to evaluate competitors' products, conflicts between design requirements, etc. In addition, there are second- and third-level QFDs which can further translate the design requirements into the next level, such as manufacturing tolerances. Hope this helps, SJ.
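The FDM/C&E scoring logic described above can be sketched in a few lines: rate each X against each output Y on the 1-3-9 scale, weight by the importance of each Y, and rank the X's by total score. The X names, Y names, and ratings below are entirely hypothetical.

```python
# Hypothetical C&E (FDM / XY) matrix for a service process
y_importance = {"delivery_time": 9, "accuracy": 5}

ratings = {  # each X rated 1-3-9 against each output Y (team perception)
    "staffing_level": {"delivery_time": 9, "accuracy": 3},
    "form_design":    {"delivery_time": 1, "accuracy": 9},
    "batch_size":     {"delivery_time": 3, "accuracy": 1},
}

# Score each X as the importance-weighted sum of its ratings
scores = {
    x: sum(y_importance[y] * r for y, r in row.items())
    for x, row in ratings.items()
}

# Rank: the highest-scoring X's are the ones to investigate first
for x, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(x, s)
```

The central relationship matrix of a QFD house of quality uses the same weighted-rating mechanics, which is the similarity noted in the post.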