About mohanpb0

  • Rank
    Active Member

Profile Information

  • Name
  • Company
    M/s CMA CGM Shared Service Centre India
  • Designation
    Director (Performance Management and BCMS)
  1. Kano Model

    In any business, resources are expensive and need to be expended judiciously. The customer is important, but the importance lies in the fact that the customer is the source of profits. Customer satisfaction is therefore paramount, which is why the correct needs must be identified and the right resources expended on fulfilling them. Typically, many hundreds of customer needs can be identified through various methods, and it would be an exercise in futility to try to prioritize each one individually. The sensible approach is to categorize these needs into a smaller number of groups, prioritize the groups first, and then the needs within each group.

    The Kano model is a useful tool for categorizing these needs as dissatisfiers (basic requirements), satisfiers (performance requirements) and delighters (excitement requirements). Non-fulfillment of dissatisfiers results in dissatisfaction, while their fulfillment does not increase satisfaction. Fulfillment of satisfiers results in a proportional increase in satisfaction. Non-fulfillment of delighters does not cause dissatisfaction, but their fulfillment delights the customer.

    The irony of customer satisfaction vis-à-vis customer needs is that the most important needs turn out to be dissatisfiers, as they are very basic and practically taken for granted. As an extension of the same irony, a less important need can surprisingly turn out to be a satisfier, as these are the needs the customer wants fulfilled and is willing to pay for. Therefore, more customer satisfaction may be obtained by improving fulfillment of performance needs rather than of basic needs. In many cases, the fulfillment of dissatisfiers is in any case mandated by regulations related to safety, global product standards and the like.
After using the Kano model to categorize and then prioritize customer needs, the next step is to convert these, using QFD, into product or service functionalities, design features, design specifications and finally process specifications. Providing a feature in a product or service always involves a cost and an estimated additional revenue. To produce the product within the allotted budget, resources need to be focused on the features identified as satisfiers through the Kano model. Obviously, the basic needs must be fulfilled first, because the product does not exist without them; but when it comes to improving features, satisfiers should get priority.
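The categorization step above can be sketched in code. The following is a minimal classifier based on the standard Kano questionnaire, where each need gets a "functional" answer (how the customer feels if the feature is present) and a "dysfunctional" answer (if it is absent); the mapping below is a simplified version of the usual Kano evaluation table, and the example needs are hypothetical.

```python
# Minimal Kano-questionnaire classifier (a sketch; the answer scale and
# the simplified mapping below approximate the standard Kano evaluation table).

def kano_category(functional: str, dysfunctional: str) -> str:
    """Classify one need from its functional ("How do you feel if the
    feature is present?") and dysfunctional ("...if absent?") answers.
    Answers: like, must-be, neutral, live-with, dislike."""
    f, d = functional, dysfunctional
    if f == "like" and d == "like":
        return "Questionable"                # contradictory answers
    if f == "like" and d == "dislike":
        return "Performance (satisfier)"     # more is better
    if f == "like":
        return "Excitement (delighter)"      # absence is tolerated
    if d == "dislike" and f != "dislike":
        return "Basic (dissatisfier)"        # taken for granted
    if f == "dislike":
        return "Reverse"                     # customer dislikes the feature
    return "Indifferent"

# Hypothetical survey responses for three needs of a car
needs = {
    "brakes work reliably": ("must-be", "dislike"),
    "fuel efficiency":      ("like",    "dislike"),
    "ambient lighting":     ("like",    "neutral"),
}
for need, (f, d) in needs.items():
    print(f"{need}: {kano_category(f, d)}")
```

Once classified, the groups (not the individual needs) can be prioritized first, as described above.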
  2. In any business, performance is typically expected to vary over time and with respect to inputs. When comparing two performances, it would not be correct to decide that they are different based on just one or a few data points from each; sampling errors should not influence the decision. For the decision to be sustainable over time, data that reflect the sustained behaviour of both performances are required. The decision taken on samples must hold good for the populations as well. In other words, even after some unavoidable overlap between the two performances, perhaps due to chance causes, the difference between the two populations must remain visible, conspicuous and clearly discernible: the two performances need to be significantly different. But "significance" here is quantitative and statistical; it is assessed from the statistical data of the two performances. A statistically significant difference represents both the discernibility of the difference between the two performances and the sustainability of that difference over time. Two populations with a statistically significant difference will remain different over time unless special causes act on one or both of them. But how significant is significant? That depends on the objective of the comparison and the stakes involved: the margin of error tolerable in deciding that the performances differ could be 1%, 5%, 10% or any other agreed number.
This is the error involved in concluding that the two performances are significantly different based on the available statistics.

Uses of the concept of statistically significant difference in problem solving and decision making

The uses of this key concept are innumerable; a few are given below.
1. Comparison of performances between two or more
   a. Time periods
   b. Processes
   c. People
   d. Suppliers or service providers
   e. Applications
2. Assessing the effectiveness of
   a. Training
   b. Improvements
   c. Corrective actions
   d. Actions taken on suspected root causes
3. Evaluating
   a. User ratings in market surveys against marketing campaigns
   b. Performances of new recruits against agreed targets
In all the above cases, hypothesis testing can be effectively applied to assess the existence of a statistically significant difference.
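A simple way to test for such a difference without any distributional assumptions is a permutation test on the difference in means. The sketch below uses only the Python standard library; the two samples (cycle times before and after a process change) are invented for illustration.

```python
import random
from statistics import mean

def perm_test(a, b, n_perm=10000, seed=42):
    """Two-sided permutation test for a difference in means.
    Returns the estimated p-value: the fraction of random relabellings
    whose mean difference is at least as large as the observed one."""
    random.seed(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    k = len(a)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        if abs(mean(pooled[:k]) - mean(pooled[k:])) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical cycle times (minutes) before and after an improvement
before = [42, 45, 44, 47, 43, 46, 44, 45]
after  = [39, 41, 40, 42, 38, 41, 40, 39]
p = perm_test(before, after)
print(f"p-value = {p:.4f}; significant at 5%? {p < 0.05}")
```

The chosen margin of error (1%, 5%, 10%) is simply the threshold against which this p-value is compared.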
  3. Stable Process and Capable Process

In the chapter entitled "Common Causes and Special Causes of Improvement. Stable System" of his treatise Out of the Crisis, Deming says, "As we shall learn, a process has a capability only if it is stable". A process that operates within its control limits is a stable process, while one that operates within its specification limits is capable. For a process to be deemed capable, it needs to be consistently capable, and for it to be consistent, it needs to be stable. Process stability and process capability measure different things, but the key connection is that process capability assessments should be performed only after demonstrating the stability of the process. Process capability assesses the ability to meet specifications; with an unstable process, capability is difficult to assess or predict, and any estimate of it is relevant only at that point in time. The capability of a stable process can be improved, but an unstable process cannot be considered capable. In conclusion, while stability and capability need to be treated together when drawing conclusions about a process, it is imperative that "stable" comes before "capable".

Process stability: a prerequisite for every process? As mentioned above, stability of a process confirms that its capability can be predicted. Stability is a characteristic observed over time: is the process as good now as it was before, and will it be as good later? This assumes a certain repetitiveness of the process within a reasonable time frame. But some processes have a long interval between repetitions, or are one-off, once-in-a-long-time or even once-in-a-lifetime events, for which predictability has no meaning, because there is no definite future for the process and no likelihood of it happening again in the conceivable future.
In such cases, process stability, although important as a concept, may not be quite relevant. Examples of such processes include construction projects, large machine assemblies, equipment erections, rocket launches, software upgrades, ERP implementations, and various financial, process or system audits. In these "processes", process stability may not be a prerequisite.
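Once stability has been demonstrated, capability can be quantified with the usual Cp and Cpk indices. A minimal sketch follows; the measurements and specification limits are invented, and the overall sample standard deviation is used as a simple stand-in for the within-subgroup sigma that a full capability study would estimate.

```python
from statistics import mean, stdev

def process_capability(data, lsl, usl):
    """Cp and Cpk from sample data. Meaningful only if the process is
    stable; here the overall sample stdev approximates process sigma.
    Cp  = (USL - LSL) / 6*sigma        (potential capability)
    Cpk = min(USL - mu, mu - LSL) / 3*sigma  (actual, centring-aware)"""
    mu, sigma = mean(data), stdev(data)
    cp  = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical measurements against specification limits 9.5 .. 10.5
measurements = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9]
cp, cpk = process_capability(measurements, lsl=9.5, usl=10.5)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

When the process is perfectly centred, Cp and Cpk coincide; a Cpk well below Cp signals an off-centre process.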
  4. Correlation

    Despite the fact that correlation does not necessarily mean causation, exploring, and where relevant identifying, correlations remains a key step in the improvement project life cycle. Exploring correlations of the observed effect with potential causes is a preliminary step in identifying root causes. It shortlists the potential causes from the huge universe of causes, leaving the project team with a reduced list of the most likely causes to investigate further. Without this shortlisting, the project team can drown in the sheer number of potential causes, and recovering from that can use up valuable project time. Further, when people are being introduced to and trained in structured problem solving, correlations are a good route for inducting them into a "cause-and-effect" mode of thinking. The logic of a relationship between a potential cause and an effect, summed up as correlation, is relatively easy to understand, as examples from all walks of life can be quoted; when followed by quantified correlation, this further embeds the understanding in people's minds. The same applies when training people on data-based decision making and data-driven improvement. Additionally, the correlation coefficient is a relatively easy-to-understand measure and can be used to illustrate both positive and negative correlation. When combined with visual tools like the scatter diagram, which are easy to create in popular spreadsheet applications, the concept of correlation becomes even easier to grasp. As an extension, the concept of interactions between potential causes, resulting in varied impacts on the effect, can also be well illustrated and understood through correlation analysis. In summary, while correlation does not imply causation, causation typically displays correlation, making correlation analysis an essential step in the root-cause analysis process.
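The correlation coefficient mentioned above is straightforward to compute from first principles. The sketch below implements Pearson's r without any third-party libraries; the temperature/defect data are hypothetical, chosen to show a strong negative correlation.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient: covariance of x and y divided by
    the product of their standard deviations (computed from sums here)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx  = sqrt(sum((x - mx) ** 2 for x in xs))
    sy  = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: oven temperature vs. defect count per batch
temp    = [180, 185, 190, 195, 200, 205]
defects = [12, 10, 9, 7, 5, 4]
print(f"r = {pearson_r(temp, defects):.3f}")   # strongly negative
```

A value near -1 or +1 only flags a candidate cause for further investigation; as the text stresses, it does not by itself establish causation.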
  5. VOC, Voice of customer

    In any initiative, over-emphasis on any single aspect of business is bound to hurt the business at some point, and voice of customer is no exception. That said, VOC remains a most effective method of getting customer feedback, both to identify opportunities for improvement and to confirm the effectiveness of improvements implemented. Nowadays, VOC is used not just by suppliers but also by customers themselves. Many of these customers may themselves be using VOC to feel the pulse of their own customers, and so know exactly how a response to a VOC survey will be interpreted. Therefore, a customer can sometimes be manipulative in responding to a VOC survey. One of the main reasons is to proactively snuff out any possibility of a rate or price increase request from the supplier, which might follow if the customer were to admit total satisfaction with the supplier's performance. In such situations, using VOC in the usual way may not be effective, and worse still, may get the supplier to focus on incorrect priorities and waste resources solving non-existent problems. In conclusion, if it is observed that a customer is not quite transparent in feedback, VOC can still be used as a dipstick, but major decisions, including judging the success of projects, need not be based on VOC alone.
  6. Process mapping

    The reason the humble flow chart evolved into the powerful process map lies in the analogy between the process map and the geographical map. Just as a location on a map is referenced by its latitude and longitude, a process step in a process map is referenced by a combination of (say) the person or team doing that step and the stage of the process in which the step occurs. The references could also be different; for example, a timeline could be one of them. This facility to reference a process step constitutes the life of a process map. Now that this facility is here to stay, swim lanes, be they horizontal, vertical or both, are an inseparable part of the process map, whatever the level of detail. Swim lanes make the process map easier to read and use. Therefore, it is advisable to create and maintain one full set of swim-lane process maps from the L0 to the L5 level. In the ITeS sector, in a typical BPO scenario, I would use the following sequence of increasing detail:

    L0 – Entity level: Customer, supplier, other external parties
    L1 – Sub-entity level: Different departments of the customer and supplier, other external parties
    L2 – Process / sub-process level: Interactions of different processes or sub-processes, with hand-ins and hand-outs
    L3 – Activity level: Activities done by different stakeholders at different stages of the process
    L4 – Task / sub-task level: Various tasks or sub-tasks that constitute the activities
    L5 – Field / key-stroke level: Absolute detail of every field touched or every key struck

This set of process maps for every process is valuable as a training tool, as a real-time guide or SOP, and as a trigger to identify improvement opportunities.
To augment the above, I would also use an enhanced SIPOC that contains, apart from the usual suppliers, inputs, process, outputs and customers, related information such as process step times, who does which step, the team size and its distribution across shifts, the average transaction volumes, the qualifications of staff for the process, the training required and so on. Other maps can be used to explain a specific perspective or to support a specific initiative. A turtle diagram, or alternatively a relationship map, can be used to understand interlinks and dependencies at a glance. A value-stream map can be used to identify opportunities for leaning out a process by crashing its lead time. Overall, a simple, situation-based approach to selecting process map types helps in optimal utilization of this wonderful tool.
  7. It is possible to make data traditionally considered continuous appear as attribute data through an appropriately worded question. Some examples are given below: the item on the left is a parameter traditionally considered continuous, while the question on the right, if answered, leads to "counting" the value of that supposedly continuous variable.

    Weight of an object – How many grams of matter are there in that object?
    Volume of water in a container – How many millilitres of space are occupied by water in that container?
    Bank balance – How many paise are there in that account?
    Height of a building – How many metres of height are there in that building?

If there is an argument that the value cannot always be counted in a whole number of grams or millilitres, as there can be fractions thereof, the counter would be to narrow the unit of measurement down to micrograms, picograms, femtograms and so on, until at some point the value can be counted. To resolve this, one just needs to stick to the unit of measurement traditionally used. For example, bank balances are normally measured in rupees and paise, which sustains the continuous treatment of the parameter; similarly, the traditional units for weights and heights can be retained, again preserving the continuous nature of the parameter being measured. Some discrete data, like errors, can acquire a continuous "make-up" when averaged. For example, errors or error transactions assessed every hour are discrete, but their hourly average over a day appears continuous. Further, from the discrete data "errors", various related parameters can be derived that appear either continuous or discrete. For example, a defect rate of 10%, traditionally considered attribute data, can also be expressed as an "average defects per product" of 0.1, which appears continuous. Such data could perhaps be considered "quasi-continuous".
Additionally, in hard-core Mathematics, mixed random variables and topological sets are conceptually considered neither continuous nor discrete.
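The averaging effect described above is easy to demonstrate. In this small sketch (the counts are invented), hourly error counts can only take integer values, but their average, and the derived defects-per-unit figure, take fractional values and so look continuous.

```python
# Hourly error counts (discrete) vs. their daily average (quasi-continuous)
hourly_errors = [2, 0, 1, 3, 1, 0, 2, 1]        # counts can only be integers
daily_avg = sum(hourly_errors) / len(hourly_errors)
print(f"Average errors per hour: {daily_avg}")   # fractional, not an integer

# The same defect data expressed two ways
defects, units = 10, 100
print(f"Defect rate: {defects / units:.0%}")           # attribute view
print(f"Average defects per unit: {defects / units}")  # quasi-continuous view
```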
  8. Examples of Correction, Corrective Action and Preventive Action

In a typical outsourcing organisation, suppose that, as part of a transaction being processed, a wrong entry is made by an associate in a web form, and this is identified in an internal audit. The transaction is not completed and dispatched but is sent back to the associate for action. The associate corrects the entry and resubmits it. This is the Correction.

At the end of the week, the error log is compiled and this error is root-cause analysed. It turns out that the error occurred because the associate had missed a recent update on the rules to be followed, given by the team leader, who had in turn received it from the customer. Therefore, to ensure that this root cause does not lead to errors again, a practice of recording the attendance of all team members at these update and feedback sessions, against the full team list and across shifts as required, is implemented. This reveals the names of team members who have not attended a session. When these team members return to work and log in to the application, they get a prompt, "Please get the last update from your Team Leader", without which they cannot log in and start working. Thus all team members have to get the update before working again, and the SOP is updated with this additional control. This is the Corrective Action.

In the monthly review meeting attended by all business heads, a summary with selected details of all errors and omissions is presented and reviewed by senior management. This error and the actions taken are also discussed, and it is agreed to extend the practice to all business lines and customers. This is implemented with due support from the internal development team.
The central documentation and all relevant business-line-level documentation are updated to ensure that this root cause does not get the opportunity to cause errors in other business lines. This is the Preventive Action.

Situations where both preventive action and corrective action are undesirable and correction is the only preferred action: A popular, if old-time, saying on efficiency goes, "Nothing is more inefficient than doing efficiently that which should not be done at all". There could be situations in which some obviously non-value-adding or outright waste steps are being performed in a process. These are typically temporary situations, lasting only until a fix is implemented that will eliminate the NVA / waste steps altogether. If the fix is not far away and, in the intervening period, an error occurs in one of these steps, it may actually be a good idea to restrict action to correction alone, as the investment of time, effort and money in CAPA may not be justifiable: the environment or even the ecosystem may soon change completely, leaving too little time to earn a return on the investment. The same reasoning may justify not implementing CAPA when a core application is about to be upgraded to a higher version or replaced by an entirely new technology. Again, there could be errors associated with a specific location or work station that preclude CAPA, because the process could be moved to a different location or the work station shifted elsewhere. Further, in this age of instant mergers and acquisitions, an organisation being swallowed by a larger one may cease to exist altogether, or a product or an entire product line may be discontinued; in such situations too, CAPA may be superfluous.
To push the envelope further, when a CAPA relates to a staff member who is going to attrite shortly, it may not be worthwhile to plan and implement the CAPA.
  9. Check sheet

    The best way to understand the evolution of quality tools is through an analogy with a process from whose results all of us have benefited, are benefiting and will benefit: cooking. Once upon a time, a most important part of cooking was preparing the spices (masala) that were the very life of the dish. The process of dry-roasting raw materials, crushing and grinding them, and mixing them in the correct proportions was a difficult skill to master, needing specialized training and practice to achieve reasonable proficiency. As time went on, lean principles were applied, and once it was realized that spice preparation was not quite as core a process as the cooking itself, it was outsourced. Now the cook has multiple options and brands of spices to choose from; neither is the value the spice adds to the dish diminished by outsourcing, nor are the performance KPIs of the output adversely impacted. The process of cooking has effectively been deskilled, with the result that more people can do an effective job of it.

    Similarly, all quality tools go through a process of evolution and become even better, albeit in different forms. The check sheet may not be explicitly used nowadays as it was in the past, but the principles behind check-sheeting, which yield the valuable data that are the life of any structured improvement approach, remain relevant. Indeed, it is precisely because the check sheet is such an essential, inseparable part of data collection that it has been automated. On the quality-education side, one good way to help trainees or students understand tools and methodologies better is to explain them briefly from first principles; explaining the concept of the check sheet will therefore continue to be a part of quality education.
It need not continue as an independent tool, but it will remain an essential part of an integrated, automated data collection "tool", perhaps along with stratification.
  10. Kanban / Pull System

    Some of the situations in which the "push" system is generally successful would involve one or more of the following. No two situations are the same, even if some appear similar.

    1. Demand is easily and accurately predictable – With an accurate forecasting system, the risk of carrying "dead" inventory is low. Moreover, by planning and pushing a steady volume to the market, the supply chain and production are also steadied, eliminating delay losses.
    2. Conversion cost between products is low due to late-point differentiation – If, despite an accurate forecast, the final product type demanded differs, the stock of product A can be converted to product B at very low cost and pushed to the market.
    3. Very short time demanded from order to delivery – If the market or customer demands very short or instant delivery from the moment an order is placed, there is no option but to supply from stock and avoid revenue losses due to short supplies.
    4. Products do not deteriorate during storage – When there is no constraint on shelf life, the risk of inventory being written off is low. Furthermore, inventory is used up sooner rather than later, reducing the cost of delays.
    5. Carrying cost is less than the cost of lost business – When a manufacturer can make up for the expense of carrying inventory by exploiting predictable demand, the likelihood of profiting "net-net" is high compared with the potential loss of business, customers and reputation from becoming just-short-of-time rather than just-in-time.
    6. Long, geographically global supply chains with their own unpredictability – Even with the best e-Kanban-powered pull system, a long, winding supply chain that traverses the entire globe is so packed with potential "delay bombs" that some good old stock, which can be pushed, becomes the life-saver.
    7. Shipping costs can be optimised by shipping in bulk – When the cost of transporting raw materials, components or sub-assemblies can be whittled down to almost nothing by using up (say) full container space, stocking up and pushing is not a bad idea.
    8. Demand profiles across time periods are static – When there are no fluctuations between days of a week, weeks of a month or months of a year, it is profitable to stabilise production and the supply chain by planning and pushing an average volume to the market periodically.
  11. The principle "no pain, no gain", used by body-builders, athletes and sportsmen, is applicable to all aspects of life, including the corporate world, and Cost of Poor Quality (CoPQ) is a classic example of the truth of this adage. I would explain CoPQ as the cost of the four "-tions": Prevention, Detection, Correction and Dereliction. The last "-tion" may raise more than a few eyebrows, but the more popular term, "failure", ultimately happens due to some kind of dereliction somewhere. CoPQ, when correctly assessed and used, turns out to be a great decision-making tool. It speaks the language preferred by top management, the language of Vitamin M: money. Convincing top management to take decisions and approve investments becomes easier when the negative impact (costs) is realistically assessed and presented; after all, avoiding the loss of a rupee is considered the equivalent of three rupees earned. For example, the decision to invest in an advanced machine or in specialised training becomes easier when the costs of dereliction (failure), i.e. revenue already lost or at risk of being lost, are quantified realistically and presented. In other scenarios, awareness beforehand of the expected costs of 100% inspection and rework (detection and correction) can sway a decision on investment in prevention efforts. This investment need not be only in equipment; it could also be increased inspection of inputs and raw materials, or a longer training programme for staff, either of which precludes possibilities of dereliction leading to increased losses downstream. In yet other situations, the decision to implement sample inspection or audits of the output becomes easier if the costs of not doing so are known beforehand. Reaching the position described above, where an organisation can calculate the various components of CoPQ quickly as well as accurately, is not easy.
It requires a lot of effort to set up the system, but only some effort at periodic intervals to maintain it. Whether it is the cost of activities done by people of different skills at different levels using equipment of varied sophistication, or the cost of management time spent in discussions, meetings and video conferences root-cause analysing, defending, explaining or apologising for defects, the basis needs to be established first in terms of (say) cost per hour of people at various levels, functions and roles, and cost per hour of various activities in terms of the equipment used. Once this is made available and periodically updated, extracting the value of the various components of CoPQ is not as difficult as it was the first time. Review and update of these cost norms should happen both at a fixed period and when triggered by specific events. An annual review is a good trade-off that keeps the CoPQ cost norms updated without indulging in overkill: salary increases happen once a year, and all costs, including CoPQ, change with them. Other events warranting a review, even if not immediately then at the next opportunity, would be organisational changes, equipment replacement, equipment AMC changes, national budget and tax-regime changes and so on. For any metric to be accurate, time, effort and money must be invested in setting up and maintaining the metric infrastructure, and this applies to CoPQ too. The CoPQ measured and reported needs to be accurate enough to differentiate between different products, different types of errors or defects, and different market situations. If, with the good intention of simplifying CoPQ calculations to increase their usage, the calculation is approximated to the extent that the system becomes insensitive to some of these key points of differentiation, the effectiveness of the CoPQ so derived will not be very high.
One need not always measure CoPQ to the last paisa, but CoPQ needs to remain accurate in the context in which it is used. When projecting future costs, one can weight them with a probability factor to present a realistic picture. Typically, if a defective piece reaches a strategic customer, the potential loss of business with that customer, plus a chain reaction from some others, could be a crore of rupees. But if past history suggests the likelihood of that happening is only 5%, the projected CoPQ could be 5% of one crore, which is less threatening than the total value of the business. Such considerations may avoid the usual branding of CoPQ as "theoretical" by operations and top management. Monthly CoPQ reports add more value when they contain trends for the earlier months of the current year and for the previous year. It is also a good idea, when publishing monthly CoPQ reports, to add narratives explaining changes, especially increases. For example, if in a month, for unavoidable reasons, the inputs are of slightly poorer quality than normal, and, this being known beforehand, a temporary pre-processing cell is created to filter out defect-ridden inputs, and if for various reasons this cost cannot be passed on to the supplier, it can appear that the costs of prevention have increased. In such a case, if the narrative includes an estimate of the other components of CoPQ avoided by investing upstream in prevention, the reasons for the increase will be better understood. To summarise, it may be asking too much to expect everyone to appreciate CoPQ, but achieving a consistent, organisation-wide understanding of it would be a commendable job done.
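The probability-weighting idea above is just an expected-loss calculation. The sketch below uses the illustrative figures from the text (a one-crore exposure with a 5% likelihood); both numbers are examples, not norms.

```python
# Expected-loss weighting of a potential failure cost.
# Figures are illustrative: Rs 1 crore exposure, 5% estimated likelihood.
exposure_inr  = 1_00_00_000   # Rs 1 crore potential loss of business
likelihood    = 0.05          # estimated from past history
expected_loss = exposure_inr * likelihood
print(f"Expected CoPQ contribution: Rs {expected_loss:,.0f}")
```

Reporting Rs 5 lakh of expected loss rather than the full Rs 1 crore exposure gives a projection that operations and top management are likelier to accept as realistic.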
  12. SIPOC

    The SIPOC is one of the most useful yet most maligned tools. As with any tool, merely preparing a SIPOC will not yield much benefit; one should consciously attempt to draw maximum benefit from it. The SIPOC is a pocket treasure trove of process information, sufficiently all-encompassing in its completeness yet manageably small in size, both physical and electronic. The traditional SIPOC has information on Suppliers, Inputs, Process, Outputs and Customers, while an enhanced SIPOC holds a whole lot of additional, related information such as process step times, who does which step, the team size and its distribution across shifts, the average transaction volumes, the qualifications of staff for the process, the training required and so on. The first beneficiary of a SIPOC is the person or team preparing it. Documenting multiple aspects of a process makes the SIPOC's creators understand their process better and raises various questions in their minds; they will either find answers to these questions and improve their understanding, or identify improvement opportunities in the unanswered ones. Once the SIPOC is complete and available for study by all, it opens up still more opportunities for improvement. All the potential triggers required for stimulating improvement ideas are self-contained in the SIPOC: process step times can provoke improvements to automate and reduce work content; the various stakeholders in the process can trigger efforts to simplify it; the staff qualifications and skills required can initiate deskilling programmes. Every data point or piece of information in the SIPOC has value, both in the current context and in the outlook for improvements.
Additionally, the SIPOC is an excellent training tool, both for senior management who want a bird's-eye view of the process to take strategic decisions, and for the hands-on trainee who wants to understand the process step by step in order to execute it. To summarise, the SIPOC, being a repository of process information, is a wonderful documentation, improvement, knowledge management and training tool.
  13. Hypothesis Testing

    In my humble opinion, Hypothesis Testing (HT) is the life of the Six Sigma methodology: the reality check that validates, one way or the other, assumptions, hunches and expert opinion. In short, hypothesis testing is the password to access the domain of excellence. Of course, access to the domain alone is not sufficient to achieve excellence, but HT provides the opportunity. HT symbolises the value the Six Sigma methodology adds compared with traditional improvement approaches, and captures the essence of its data-drivenness. Given a validated measurement system, HT remains the ultimate decision enabler. Apart from its uses during the Six Sigma project life cycle, HT is very effective even as a stand-alone "Six Sigma infrastructural process tool". Any comparison of period-based performance becomes that much richer with HT; daily, weekly and monthly performance reports all become smarter when combined with it, and staff can take better-educated decisions in their daily work by applying it. In the Analyse phase, HT is used to identify the vital few Xs from the overall universe of Xs. In the Improve phase, HT is used to validate the results of the pilots. After full-blown implementation of the improvements in the Control phase, HT is again used to validate the success of the project. Apart from these almost mandatory uses, in certain very specific situations HT is also used in the Measure phase to come to a conclusion on the measurement system, and in some even more peculiar situations it can come in handy for project selection in the Define phase.
  14. Six Sigma in a KPO environment

    Thanks to Senthil, Anees and Gurshit for their kind responses. Kind regards, P.B.Mohan
  15. Six Sigma Applicability

    Hi, Maybe you are right Sundar; just to keep the data complete, one gentleman (incidentally named after a title given to the first Prime Minister of independent India) replied privately on another BMSS forum mentioning that he may be willing to share a case study from his company; the bottom line still remains that till now I have not received a case study yet Thanks, P.B.Mohan