
Santosh SHARMA

Members
  • Posts

    11
  • Joined

  • Last visited

  • Days Won

    2

Santosh SHARMA last won the day on January 22

Santosh SHARMA had the most liked content!

Profile Information

  • Name
    Santosh SHARMA
  • Company
    Aarti Industries Limited
  • Designation
    Zone Excellence Manager

Recent Profile Visitors

345 profile views

Santosh SHARMA's Achievements

  1. This method consists of rapid cycles of real-time experimentation, used to test and adjust improvement ideas before establishing standard work or implementing processes broadly. In plain language: try it out! Try-storming incorporates physical actions that engage the other senses and give testers a far better sense of whether an idea is viable. Try-storming differs from brainstorming in that it encourages the rapid development and testing of an idea instead of merely thinking about possible solutions. It allows people to see, touch, and further improve on an initial idea. It also models action rather than talk. Often, in our desire to design the perfect Future State, we forget that the best way to build a process that works is through the iterative cycle of trying, adjusting/correcting, and trying again. The process is built on three basic principles: do not aim for perfect solutions; be action-oriented; keep solutions simple. These principles work hand in hand to develop effective solutions. Implemented correctly, try-storming can be used to continuously improve any business process. One of the key reasons to use try-storming as part of any process design activity is that it models action instead of talk. By leaving the room and actually trying ideas during the course of the work, your team will quickly realize that the activity is more than just a meeting or an exercise in theory. In addition, taking action typically increases idea generation and team engagement exponentially. By mocking up and trying concepts, the team can visualize its ideas and transform plans into tangible improvements quickly. While try-storming requires far more energy than the traditional design approach, using it will significantly reduce the overall time needed to reach a workable solution.
  2. Competency Mapping is the process of identifying the key competencies for an organization or institution, and for the jobs and functions within it. It is an important exercise: a way of assessing the strengths and weaknesses of an employee, team, or organization, and of identifying a person's job skills and strengths in areas like teamwork, leadership, and decision-making. Boyatzis (1980) defined a competency as "a capacity that exists in a person which leads to behaviour that meets the job demands within the parameters of the organisational environment and that, in turn, brings about desired results". Need for Competency Mapping: Competency mapping has gained momentum and popularity. The old maxim 'slow and steady wins the race' has lost its validity in light of the fast-changing business environment. To cope with the changing world economy, and given that the world is becoming a global village, companies have become more aware of the need for competent employees and for developing distinguishing competencies for the organization. • Identify the key attributes (skills/knowledge/abilities/behaviours) required to perform tasks effectively. • Analyse an individual's strengths and development areas to better leverage them for task/job completion. • Evaluate the suitability of an individual for an identified position/role against the required competencies. • Anticipate developmental needs to keep the organization 'future-ready'. Understanding the key steps of the Competency Mapping process: There are five key steps in any competency mapping process. These are not simply steps; they also form the elements of the competency map that together become a framework. 1. Classification of Competencies: Generally, competencies are classified into two categories, Functional and Behavioural.
However, competencies can be classified into more categories depending on the overall objective of the competency framework being developed. 2. Definition of Competencies: It is vital that each competency is defined well. This gives a clear picture of exactly which skills and abilities are required to do the job. In the example below, a competency for a sales team has been classified as functional and clearly defined. 3. Identifying Behavioural Indicators (BI): The key element of any competency mapping process, and of developing a competency framework, is the set of behaviours that defines each competency. In the example above, we have classified the competency as functional and defined it; now the behavioural indicators are assigned. In effect, they say that you need to demonstrate these behaviours to show that you have the competency of 'Drive for Sales Results'. Behavioural indicators are also called Behavioural Descriptors (BD). 4. Identifying Proficiency Levels (PL): Not everyone in a department will be at the same level of experience or, in the context of competency mapping, at the same level of proficiency. Therefore, it is critical that the desired proficiency levels are defined for each job role, department, level, or grade of employee. In the example above, we have a Desired Proficiency Level (DPL) of 3, which is two below the maximum, but we do not yet know what would define a DPL of 4. It is therefore extremely important to clearly define the meaning of every proficiency level from 1 to 5. SOME OF THE TOOLS USED FOR COMPETENCY MAPPING: 1. Interviews 2. Competency-based questionnaires 3. Assessment and development centres 4. Critical incidents technique 5. Psychometric tests
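The mapping steps above can be sketched as a simple data structure. This is a minimal illustration only; the competency name 'Drive for Sales Results' comes from the post, but the definitions, indicators, and level descriptions here are invented examples:

```python
# Hypothetical competency-framework entry illustrating the mapping steps:
# classification, definition, behavioural indicators, and proficiency levels.
competency = {
    "name": "Drive for Sales Results",
    "classification": "Functional",          # step 1: Functional vs Behavioural
    "definition": "Consistently achieves agreed sales targets "
                  "through disciplined pipeline management.",
    "behavioural_indicators": [              # step 3: observable behaviours (BI/BD)
        "Tracks pipeline weekly against target",
        "Escalates at-risk deals early",
        "Closes the agreed quota each quarter",
    ],
    "proficiency_scale": {                   # step 4: define every level, 1 to 5
        1: "Aware of the sales process",
        2: "Executes with supervision",
        3: "Independently meets targets",
        4: "Coaches others and improves the process",
        5: "Sets the standard across the organization",
    },
    "desired_proficiency_level": 3,          # the role's DPL
}

def gap(current_level: int, comp: dict) -> int:
    """Development gap between an assessed level and the role's DPL."""
    return max(0, comp["desired_proficiency_level"] - current_level)

print(gap(1, competency))  # an employee at level 1 must grow 2 levels
```

Defining every level of the scale explicitly is what makes the gap calculation meaningful, which is the point made about the DPL above.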
  3. What is the Diffusion of Innovation? This model helps businesses understand how buyers adopt and engage with new products or technologies over time. Companies use it when launching a new product or service, adapting one, or introducing an existing product into a new market. It shows how a product can be adopted by five different customer categories, and how a business should interact with each of these groups. The emergence of new digital technologies and marketing techniques means the diffusion of innovation model is particularly relevant to digital marketers. The analysts Gartner publish a long-standing report showing the stages of adoption of new technologies that is useful for digital strategists to follow. Returning to the DOI, what characterises each group of adopters? In general they have the following characteristics (see the original work by Everett M. Rogers for more details): 1. Innovators. A small group of people exploring new ideas and technologies; it includes the "gadget fetishists!" In an online marketing context there are many specialist blogs and media platforms to engage them, for example Engadget and Gizmodo. 2. Early Adopters. Considered to be opinion leaders who will share positive testimonials about new products and services, seeking improvements and efficiency. Engagement requires little persuasion, as they are receptive to change. Provide guides on how to use the product/service. 3. Early Majority. These are followers who read reviews by earlier adopters before purchasing. They can be engaged with reviews and via the various media channels where they go to look for your products. 4. Late Majority. To generalise, these are sceptics who are not keen on change and may only adopt a new product or service if there is a strong feeling of being left behind or missing out.
They can be engaged by providing marketing material, evidence, reviews from opinion leaders, and case studies that show how the product works. 5. Laggards. The descriptor says it all! They typically prefer traditional communications and may adopt new products only when there are no alternatives. Laggards come on board once others have written about your products/services and they have research evidence, statistics, or felt pressure from others. How to use the Diffusion of Innovation: If you are launching a new tech product, such as software, this model can help identify the marketing materials needed for each group. The adoption theory is most useful when looking at new product launches, but it can also be useful when taking existing products or services into a new market. Examples of how it can be applied to digital marketing strategies (based on launching new software to the various groups): Innovator: Show the software on key software sites like TechCrunch or Mashable. Provide marketing material on the website, with relevant information, and lead potential sales with downloads. Early Adopter: Create guides, get listed on the main software sites, and provide marketing material such as case studies, guides, and FAQs. Early Majority: Blogger outreach with guest blog posts; provide links to social media pages, key facts and figures, and 'how to' YouTube videos. Late Majority: Encourage reviews and comparisons, and share press commentary on your website. Provide a press section and social proof, with information and links to reviews, testimonials, third-party review sites, etc. Laggards: It is probably not worth trying to appeal to this group!
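As a sketch of how the five categories could be operationalised, the snippet below buckets customers by adoption-order percentile. The 2.5/13.5/34/34/16 split is Rogers' often-quoted breakdown, not something stated in the post itself, so treat the cutoffs as an assumption:

```python
# Cumulative percentile cutoffs from Rogers' commonly cited category sizes.
CUTOFFS = [
    (0.025, "Innovators"),
    (0.16,  "Early Adopters"),   # 2.5% + 13.5%
    (0.50,  "Early Majority"),   # + 34%
    (0.84,  "Late Majority"),    # + 34%
    (1.00,  "Laggards"),         # + 16%
]

def adopter_category(rank: int, total: int) -> str:
    """Classify the rank-th adopter (1-based) out of `total` customers."""
    percentile = rank / total
    for cutoff, label in CUTOFFS:
        if percentile <= cutoff:
            return label
    return "Laggards"

print(adopter_category(1, 1000))    # Innovators
print(adopter_category(900, 1000))  # Laggards
```

A segmentation like this is what lets the group-specific marketing tactics above (guides for early adopters, social proof for the late majority) be targeted at the right customers.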
  4. What is Inverted-U Theory? It is a theory that throws light on the relation between performance and pressure/arousal. In the original study, rats were given electric shocks as motivation for escaping from a maze. The Inverted-U Theory owes its name to the curve, in the form of an inverted U, that appears when pressure is plotted against performance. A quick look at the curve reveals that performance lags when there is little pressure, and that performance is positively influenced when there is somewhat more pressure. If even more pressure is added, performance is influenced negatively and efficiency decreases. The worker's efficiency and performance reach an optimal point when the pressure or arousal has reached an optimal point. Inverted-U Theory was developed by psychologists Robert Yerkes and John Dodson in 1908. Despite the fact that the model was developed long ago, it continues to be relevant. Interpreting the Model: When looking at the left-hand side of the graph, it is notable that low pressure or low stress levels result in a stress response corresponding to 'boredom or lack of challenge'. Even if the task itself is a critical activity, the attention, concentration, and precision required to properly execute it are absent in the absence of an appropriate level of pressure or stress. On the right-hand side of the graph, we can see that extreme pressure or high stress levels do not automatically result in good performance. The opposite is true: if pressure gets too high, or too high a stress level is activated, the result is a feeling of unhappiness, stress, and anxiety, all consequences of overwhelming stress. In the middle of the graph, however, is a region where the worker performs best. This is where an optimal amount of pressure is applied: moderate pressure leads to an optimal, manageable stress level.
Eventually, this results in the highest performance level for the worker. Four Influencing Factors: It can be hard to determine how much impact pressure and stress have, because the desired amount of pressure is influenced by four factors, also known as influencers. Inverted-U Theory recognises the following four influencers: Personality. Different personality types benefit from different levels of stress or pressure. Generally, extraverted personalities are more resistant to stress and better able to keep their head above water when stressed than introverted personalities. Introverted people usually have a higher chance of performing well in environments with little stress or excitement. Task Difficulty. The degree of complexity of a task relates to the level of attention and effort a person requires to successfully complete it. People are generally able to carry out simple activities even when pressure is high, but complex tasks are better taken care of in quiet surroundings. Skills. A shop manager and an accountant have completely different jobs. Each has more knowledge of their own work than of the other's job. If they were to swap jobs, the challenge and the pressure would be so high in the beginning that their performance would suffer. After a while, as the tasks get easier, they would need a new source of pressure to keep their performance up. Fear. Inverted-U Theory shows that fear can also affect performance. This mainly relates to the ability to set aside or ignore feelings of fear in order to keep one's focus on the situation and the tasks. People who are better at this perform better under pressure; people who are not will struggle more often in challenging situations. Complexity and Motivation: In situations that require carrying out tasks with a high level of complexity, or solving complex problems, motivation plays an important role.
There have been various studies of the relation between motivation and complex problem solving. These have yielded several theories, such as McClelland's motivation theory and Maslow's hierarchy of needs. Using the Inverted-U Theory to get the best performance from the team: The simplest way to use the Inverted-U Theory is to be aware of it when you allocate tasks and projects to people on your team, and when you plan your own workload. Start by thinking about existing pressures. If you are concerned that someone might be at risk of overload, see if you can take some of the pressure off them. This is a simple step to help them improve the quality of their work. By contrast, if anyone is under-worked, it may be in everyone's interest to shorten some deadlines, increase key targets, or add extra responsibilities, but only with clear communication and agreement. From there, balance the factors that contribute to pressure so that your people can perform at their best. Remember, too little pressure can be just as stressful as too much! Try to provide team members with tasks and projects of an appropriate level of complexity, and work to build confidence in the people who need it. However, bear in mind that you won't always be able to balance the "influencers". Motivate and empower your people so that they can make effective decisions for themselves.
  5. DIKW refers to a pyramid/model/hierarchy representing the functional and/or structural relationships between data, information, knowledge, and wisdom. The DIKW model is often quoted, or used implicitly, in definitions of data, information, and knowledge within the information management, information systems, and knowledge management literatures, but there has been limited direct discussion of the hierarchy itself. Data is simply a group of signals or symbols. Nothing more, just noise. It may be server logs, user behaviour events, or any other data set. It is unorganized and/or unprocessed; it is inert, and if we do not know what it means, it is useless. You get Information once you start to make data useful. When we apply systems to organize and classify data, we transform this unstructured noise into information. The "what", "when", and "who" questions should be answered at this stage. In short, information is data with meaning. This meaning can be useful, but it is not always useful. Knowledge is the next step in the journey, and probably the most significant leap. It implicitly requires learning. It means we can take data, categorize and process it to generate information, and then organize all this information in a way that makes it useful. Whereas information helps us understand relationships, knowledge allows us to detect patterns. It is the foundation that allows us to build predictive models and generate real insights. A definition that I like is that knowledge is a mental structure, made from accumulated learning and systematic analysis of information. Wisdom is the final frontier. It allows us to predict the future correctly, not only by detecting and understanding patterns but also by deeply comprehending the "why" behind those patterns.
Wisdom is all about the future: it relies on knowledge and pattern models, but it also shapes your "gut feeling" and intuition, giving you an exponential competitive advantage. Knowledge ages quickly because of how fast reality changes, while wisdom is more durable. For now, this is a purely human skill, but AI is catching up fast. When AI wisdom becomes better than human wisdom, the outcomes will be unpredictable. The following image illustrates this mental model perfectly. It also introduces the 'Insight' concept, sometimes referred to as 'Intelligence': a sporadic manifestation of wisdom. Insight is what connects knowledge and wisdom.
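The data-to-information-to-knowledge progression can be sketched in a few lines. The server-log events below are made up for illustration; only the shape of the pipeline matters:

```python
# A minimal sketch of the Data -> Information -> Knowledge progression.
from collections import Counter

# Data: raw, unorganized signals (here, invented log lines).
raw_logs = [
    "2024-01-22 10:01 user=anna action=login",
    "2024-01-22 10:05 user=ben action=error",
    "2024-01-22 10:07 user=anna action=error",
    "2024-01-22 10:09 user=anna action=error",
]

# Information: the same data organized to answer "what/when/who".
events = [dict(kv.split("=") for kv in line.split()[2:]) for line in raw_logs]

# Knowledge: a pattern detected across the organized information.
errors_per_user = Counter(e["user"] for e in events if e["action"] == "error")
print(errors_per_user.most_common(1))  # [('anna', 2)] -> the error pattern
```

Wisdom would be the step the code cannot take: understanding *why* that user keeps hitting errors and predicting what happens next.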
  6. Cellular manufacturing consists of a series of product-focused work groups (cells) that house all the operations needed to manufacture a product family. Each cell is devoted to manufacturing products that require similar operations. While a traditional manufacturing environment is organized functionally, with similar machines grouped in one area (for example, all molding machines in the Molding Dept.), cellular manufacturing operates like a series of plants-within-a-plant, each starting with raw materials and ending with finished product, with all operations performed within the cell. Machines in manufacturing cells are located in close proximity to maintain continuous flow with zero inventory between operations and to minimize product transportation. The cell is operated by multi-skilled operators who have complete responsibility for delivery and quality performance within the cell. Benefits of Cellular Manufacturing: • Cells shorten the distance a part or product has to move. This reduces handling costs, allows quicker feedback on potential quality problems, reduces work-in-process inventories, permits easier scheduling, and reduces throughput time. • Cells locate materials at the point of use, making it easy to see the work ahead. • Cell teams better understand the entire process of creating parts/assemblies. • Cell members feel responsible to a small group rather than to an impersonal company; this understandable, logical participation results in a sense of empowerment. 4 Dimensions of Cells: Man: Operators are cross-trained on support as well as manufacturing equipment. Leaders and facilitators encourage teamwork in a professional and fair manner. Material: Materials management practices reduce work-in-process buffers. Material flow within the cell is streamlined to minimize travel distances, and team members take full responsibility for the quality of parts.
Machine: The layout of the cell permits smooth material flow with minimum buffers, arranging equipment to be immediately adjacent. Method: Procedures related to the cell help eliminate waste. The most apparent waste is flawed parts, so quality procedures detect any potential errors. Limitations of Cellular Manufacturing: While its benefits are well documented, some have argued that implementing cellular manufacturing can reduce manufacturing flexibility. Conversion to cells may cause a loss of flexibility, which could affect the viability of cell use. Obtaining balance among cells is also more difficult than for flow or job shops. Flow shops have relatively fixed capacity, and job shops can draw from a pool of skilled labor, so balance is not as much of a problem. By contrast, with cells, if demand diminishes greatly, it may be necessary to break up that cell and redistribute the equipment or re-form the families.
  7. Grounded theory involves a set of techniques that enable researchers to effectively analyse 'rich' (detailed) qualitative data. It reverses the classic hypothesis-testing approach to theory development by making data collection the first stage and requiring that theory be closely linked to the entirety of the data. The researcher stays close to the data when developing theoretical analyses; in this manner the analysis is 'grounded' in the data rather than based on speculative theory that is then tested using hypotheses derived from it. It employs a continuous process of comparison, back and forth between the various aspects of the analysis and the data. Grounded theory does not suggest that there are theoretical concepts simply waiting in the data to be discovered; it means that the theory is anchored in the data. In grounded theory, categories are developed and refined by the researcher so as to explain whatever the researcher regards as the significant features of the data. Highlights of grounded theory: It consists of guidelines for conducting data collection, data analysis, and theory building, which can lead to research that is closely integrated with social reality as represented in the data. The analysis of data to derive theory does not depend on a stroke of genius or divine inspiration, but on perspiration and the application of general principles and methods. Grounded theory involves inductive guidelines rather than deductive processes. This is very different from what is often considered conventional theory building (sometimes described as the 'hypothetico-deductive method'). The theory should develop out of an understanding of the complexity of the subject matter; it knits the complexity of the data into a coherent whole. Primarily, such theories may be tested effectively only in terms of the fit between the categories and the data, and by applying the categories to new data.
In some ways this contrasts markedly with mainstream quantitative psychology, where there is no requirement that the analysis fit all of the data closely, merely that there are statistically significant trends, of whatever magnitude, which confirm the hypothesis derived from the theory. The unfitting data are treated as measurement error rather than as a reason to explore the data further in order to produce a better analysis, as they would be in qualitative research. The theory-building process may be a never-ending one rather than a sequence of critical tests of the theory through hypothesis testing. In some ways it is impossible to separate the phases of the research into discrete components such as theory development, hypothesis testing, and refining the theory. The data collection phase, the transcription phase, and the analysis phase all share the common intent of building theory by matching the analysis closely to the complexity of the subject of interest.
  8. WHAT IS A PROCESS DECISION PROGRAM CHART? The PDPC is a management planning tool that systematically identifies what might go wrong in a plan. Countermeasures are then developed to prevent or offset those problems. By using PDPC, we can either revise the plan to avoid the problems or be ready with the best response when a problem occurs. PDPC Diagram Overview: A useful way of planning is to break tasks down into a hierarchy using a tree diagram. PDPC simply extends this chart a couple of levels further to identify risks and countermeasures for the lowest-level tasks, as in the diagram below. The PDPC is a very simple tool with an unnecessarily impressive-sounding name, possibly derived from its Japanese origin, where it emerged as one of the 'second seven tools' (also referred to as the 'Seven Tools for Management and Planning', or more commonly the 'Seven New QC Tools'). When should a PDPC be used? · Before implementing a plan, especially when the plan is large and complex · When the plan must be completed on schedule · When the price of failure is high. Advantages of using a PDPC: it identifies what can fail (failure modes or risks), the consequences of that failure (effects), and possible countermeasures (the risk mitigation action plan).
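The tree structure described above, lowest-level tasks carrying "what could go wrong" branches with countermeasures, can be sketched as a nested structure. The plan content below is invented purely for illustration:

```python
# A hedged sketch of a PDPC as nested data: tasks -> risks -> countermeasures.
pdpc = {
    "objective": "Launch new product line",
    "tasks": [
        {
            "task": "Train operators",
            "risks": [
                {"what_if": "Trainer unavailable on schedule",
                 "countermeasure": "Book a backup trainer in advance"},
            ],
        },
        {
            "task": "Install new equipment",
            "risks": [
                {"what_if": "Delivery delayed",
                 "countermeasure": "Order well before the needed date"},
                {"what_if": "Power supply insufficient",
                 "countermeasure": "Survey electrical capacity first"},
            ],
        },
    ],
}

def countermeasures(plan: dict) -> list:
    """Flatten the chart into the risk-mitigation action list."""
    return [r["countermeasure"] for t in plan["tasks"] for r in t["risks"]]

print(len(countermeasures(pdpc)))  # 3 countermeasures identified
```

Walking the tree and collecting the countermeasures yields exactly the risk mitigation action plan the tool is meant to produce.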
  9. The Knoster Model is a change management tool. It defines six elements that must be addressed to effect the desired behaviour change: 1. Vision, 2. Incentive, 3. Agreement, 4. Action Plan, 5. Skills, 6. Resources. Addressing every element increases the likelihood of achieving the desired results. Failing to deliver on any of the elements will result in predictable reactions from stakeholders/end users. These reactions (ref: fig) help us pinpoint where our change strategy falls short; all we need to do is pay attention. Our ability to recognize the reactions determines our ability to spot where the change plan is falling apart. What's Good About the Model: It is an excellent tool for managing complex change in both situations, i.e., diagnosing issues when a project is already under way and planning change up front. It provides a consolidated list of all the elements needed. I particularly like the focus on incentives, as this is too often missed in many alternative models. What's Bad About the Model: There is nothing inherently bad about this model for managing complex change, except that it does not address the sequence of the elements. Compared with Kotter's model, it lacks direction or phasing of the various elements. Conclusion: This model for managing complex change gives a clear advantage because it allows us to truly understand the importance of every element required for change to succeed. It also gives a clear understanding of the negative change outcomes in case any element is not considered or is left out. Using it as a 'matrix' that ensures consistency and coherence is a must in any change process, as it creates the right mindset for success.
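The missing-element-to-reaction mapping referred to above ("ref: fig") is often given as follows. The pairings below are the commonly cited ones; the 'Sabotage' entry for a missing Agreement varies between versions of the model, so treat this table as an assumption rather than the definitive figure:

```python
# Commonly cited Knoster mapping: missing element -> predictable reaction.
MISSING_ELEMENT_REACTION = {
    "Vision":      "Confusion",
    "Skills":      "Anxiety",
    "Incentive":   "Resistance",
    "Resources":   "Frustration",
    "Action Plan": "False starts",
    "Agreement":   "Sabotage",    # assumption: varies by version of the model
}

def diagnose(present: set) -> list:
    """Predict stakeholder reactions from whichever elements the plan lacks."""
    return sorted(MISSING_ELEMENT_REACTION[e]
                  for e in MISSING_ELEMENT_REACTION if e not in present)

print(diagnose({"Vision", "Skills", "Incentive", "Resources", "Agreement"}))
# missing only the Action Plan -> ['False starts']
```

Reading the table backwards is the diagnostic use of the model: observe the reaction in the organization, then look up which element is missing.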
  10. Catchball is one of the practices that make Lean one of the most effective methodologies for managing teams. It allows us to align the company's goals and objectives with the actions of people on all hierarchical levels of the organization. It has a vertical application: top management sets goals for the company and creates a strategy, tosses it like a ball to the level below, and waits to receive proposed tactics and feedback. In the next step, mid-level management tosses the goals down to front-line managers, and the process is repeated until 'the ball' has reached the people at the bottom of the pyramid. The final goal is to gather input from every person working towards the company's goals and to align every action in a common direction. There may be several iterations before consensus is reached. In Lean, strategy is usually thrown down by executives, while tactics and process improvements are tossed up by lower management levels and regular team members. This makes the process extremely suitable for companies that have embraced a culture of shared leadership. It is an effective way to make sure that employees understand how they fit into the bigger picture and become more connected to the organization. Benefits of Applying Catchball: Catchball can lead to a rapid increase in team engagement, which helps achieve continuous improvement. When looking at new ideas or plans for the company, this element of the Hoshin Kanri method can build a better understanding of the practicality of the plans and help executives decide whether they will succeed. This is done by allowing people from multiple areas to contribute to the analysis of the plan/idea. They will be able to suggest practical improvements coming straight from the 'gemba' and will be more determined to execute the plan, wanting to prove that their suggestions have actual value.
  11. "It is comparatively easy to make computers exhibit adult-level performance, and difficult or impossible to give them the skills of a one-year-old." The effort needed to reverse-engineer a human skill was expected by AI practitioners to be proportional to the amount of time that skill had been evolving over mankind's evolution. In the early stages of AI, skills that appear effortless to humans were expected to be difficult to reverse-engineer (typically cognitive functions like reading comprehension, visual perception, and speech recognition). Teaching a machine to beat a human at chess, by contrast, was considered comparatively easy by AI practitioners, as it is a skill that mainly requires effort (chiefly compute power). Moravec's paradox is a factor that held back developments in AI. The availability of pretrained cognitive functions from hyperscale cloud providers is now breaking it and fueling the current wave of AI. Moravec's paradox rests on the following assumption: the oldest human skills are largely unconscious, so they appear to us to be effortless. Therefore, skills that appear effortless should be expected to be difficult to reverse-engineer, while skills that require effort may not necessarily be difficult to engineer at all.