
Algorithmic Bias (also called AI bias or machine learning bias) refers to repeatable decisions that are systematically unfair to an individual or a group. Such bias usually arises from skewed training data, flawed algorithm design, or the manner in which the algorithm is deployed.

 

An application-oriented question on the topic along with responses can be seen below. The best answer was provided by Sumukha Nagaraja on 11th Jun 2024.

 

Applause for all the respondents - Hardik Joshi, Radhika G, Rohit Kurup, Amol Ingole, Sameer Ahuja, Abhijeet Sonake, Nethaji, Sumukha Nagaraja.

Question

Q 676. With a higher focus on AI and tech-based solutions in Lean Six Sigma projects, a new problem is also arising - Algorithmic Bias. Explain it with some examples. How does one prevent this bias in tech-based solutions?

 



9 answers to this question


Algorithmic bias refers to unfair or discriminatory outcomes in automated decision-making systems caused by biases present in the data, the algorithm, or its design.

 

Examples and some consequences of Algorithmic Bias:

  • Search Engines - Social biases and meanings associated with certain words may be picked up unintentionally by algorithms. As a result, search engines might display biased or inappropriate results when users search for specific terms or phrases.
  • Online Content and Social Media - Algorithmic bias can amplify misinformation, hate speech, and filter bubbles. Social media platforms that optimize for engagement may unintentionally promote harmful content.
  • Facial Recognition: Facial recognition technology can struggle with darker skin tones, leading to misidentification and bias.
  • Criminal Justice - Criminal Sentencing Algorithms: Some jurisdictions use algorithms to predict recidivism and determine sentences. However, these models may disproportionately impact certain racial or socioeconomic groups due to biased training data. Unfair decisions may result in wrong convictions or harsh punishments.
  • Financial Services - Credit Scoring Models: Algorithms used by banks to assess creditworthiness can inadvertently discriminate against certain demographics if historical data contains biases, affecting loan approvals, interest rates, and access to investment opportunities.
  • Healthcare - Bias in medical algorithms can affect diagnosis, treatment, and patient outcomes. For instance, if an algorithm underperforms for specific demographics, it may delay critical medical interventions.
  • Hiring and Employment - AI-driven hiring tools may inadvertently favor certain groups over others. Discrimination can occur during resume screening or interview processes.
  • Education - Biased algorithms in educational tools can impact student performance and opportunities. Students from marginalized backgrounds may receive less personalized support.
  • Public Services - Bias in predictive policing tools can lead to additional policing/enforcement in certain neighborhoods and may affect resource allocation in public services.

Measuring algorithmic bias involves several techniques and metrics. Here are some common approaches (a small Python sketch computing two of these metrics follows the list):

  • Disparate Impact Ratio (DIR): Measures the ratio of favorable outcomes for different groups (e.g., protected vs. non-protected classes) with a value close to 1 indicating fairness.
  • Equalized Odds: Compares the true positive rate (sensitivity) and false positive rate (fall-out) for each group to evaluate whether these rates are similar across different groups.
  • Demographic Parity: Compares the overall favorable-outcome rate of each group to ensure similar favorable outcomes across different groups.
  • Conditional Demographic Disparity (CDD): Measures bias in specific subgroups (e.g., age, gender, race) and compares the favorable outcome rates within each subgroup.
  • Fairness-Aware Machine Learning Metrics: Use specialized fairness metrics (e.g., disparate impact, equalized odds) during model evaluation and implement them in the evaluation pipeline.
  • Bias Auditing Tools: Use tools for visualizing and quantifying bias (E.g. IBM’s AI Fairness 360 or Google’s What-If Tool) for analyzing different fairness metrics
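
To make these metrics concrete, here is a minimal Python sketch that computes the Disparate Impact Ratio and the Equalized Odds gaps described above. The function names and the toy 0/1 arrays are illustrative assumptions for this post, not part of any specific auditing toolkit.

```python
# Minimal sketch: Disparate Impact Ratio and Equalized Odds gaps
# computed from model predictions. All data below is made up.
import numpy as np

def disparate_impact_ratio(y_pred, group):
    """Ratio of favorable-outcome rates, protected vs. reference group.
    A value close to 1 suggests parity; below 0.8 is a common red flag."""
    return y_pred[group == 1].mean() / y_pred[group == 0].mean()

def equalized_odds_gaps(y_true, y_pred, group):
    """Absolute TPR and FPR differences between the two groups."""
    gaps = {}
    for label, name in [(1, "tpr_gap"), (0, "fpr_gap")]:
        mask = y_true == label
        rate_ref = y_pred[mask & (group == 0)].mean()
        rate_prot = y_pred[mask & (group == 1)].mean()
        gaps[name] = abs(rate_ref - rate_prot)
    return gaps

# Hypothetical predictions (1 = favorable), ground truth, group membership
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_true = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # 1 = protected class

print(disparate_impact_ratio(y_pred, group))   # 0.4 / 0.6 ≈ 0.67 -> investigate
print(equalized_odds_gaps(y_true, y_pred, group))
```

Dedicated toolkits such as IBM's AI Fairness 360 implement these and many more metrics; the sketch only shows the underlying arithmetic.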

Strategies to Prevent Algorithmic Bias:

  • Diverse and Representative Data: Ensure that sample/training data is diverse and representative of the population. Collect data from multiple sources and demographics for minimizing bias.
  • Regular Audits: Continuously audit algorithms for bias to evaluate the impact on different groups and tweak/adjust as required. 
  • Fairness Metrics: Define fairness metrics (e.g., demographic parity, equalized odds) and incorporate them into the model evaluation process.
  • Sensitive Attribute Protection: Use techniques like adversarial de-biasing or encoding invariant representations to protect sensitive attributes (e.g., race, gender) during model training.
  • Human Oversight: Involve human experts to review and validate algorithmic decisions, especially in critical areas like criminal justice.
  • Transparency and Explainability: Make algorithms more interpretable. Understand how they arrive at decisions and provide explanations to affected individuals.
  • Ethical Guidelines: Define ethical guidelines for AI development and deployment, and ensure adherence to them.

To summarize, addressing algorithmic bias is an ongoing process; collaboration between data scientists, policymakers, and domain experts is crucial to designing fair and unbiased tech-based solutions.


Algorithmic bias means that the accuracy of an AI or machine learning tool skews to one side; in other words, the accuracy of the AI is higher for certain inputs than for others. Joy Buolamwini brought it to wide attention when she noticed facial detection bias in Face++, Microsoft Face Detect and IBM Watson: each platform gave correct results 87-93% of the time when evaluating lighter-skinned people, with accuracy decreasing to about 65% as skin got darker.

Six Sigma depends mainly on statistical analysis, and informed decisions are made based on the interpretation of data. Algorithmic bias during analysis can therefore call the accuracy of results in Six Sigma projects into question.

 

Examples of bias in Six Sigma projects:

1) When the data selected for building an algorithm comes from a specific subpopulation, the output will be skewed in one direction. E.g., in a manufacturing process optimization project, if data is collected from one operator rather than multiple operators, the solution will reflect that single operator only (a quick representativeness check is sketched after these examples).

2) When measurement tools used for data collection systematically generate data that differs from the true values. This bias can lead to inaccuracies and distortions in the data that affect the validity and reliability of the project.

3) Interpretation of data is a critical concern while analyzing Six Sigma data. Analysts may inadvertently bend their interpretation toward preconceived outcomes, and the resulting wrong conclusions impact root cause and solution identification.
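
As a small illustration of example 1, here is a Python (pandas) sketch that checks whether any single operator dominates the training data before a model is built. The column names, values, and 50% threshold are hypothetical assumptions for this post.

```python
# Minimal representativeness check before model building.
# Column names ("operator", "cycle_time") are hypothetical examples.
import pandas as pd

data = pd.DataFrame({
    "operator":   ["A", "A", "A", "A", "B", "B", "C"],
    "cycle_time": [10.1, 10.3, 9.9, 10.2, 11.4, 11.1, 9.5],
})

# What share of the training data comes from each operator?
share = data["operator"].value_counts(normalize=True)
print(share)  # operator A dominates with ~57% of the rows

# Flag the dataset if one operator contributes more than half the rows
if share.max() > 0.5:
    print("Warning: data dominated by one operator;"
          " collect data from more operators before modeling.")
```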

 

How to prevent algorithmic bias:

1) Data transparency: Data sources, the measurement system and the sampling procedure should be well controlled and documented.

2) Diverse analysis teams: Data should be verified by cross-functional teams.

3) Robust statistical methods: Use rigorous statistical tools to minimize the impact of bias.

4) Continuous training: Provide training to Six Sigma project members, with regular refresher updates.

5) Document assumptions: Identify potential sources of bias and document them properly. This helps stakeholders understand the potential risk.


Algorithmic bias creeps in through systemic limitations in the basic assumptions made right at the beginning of the hypothesis. Many times these are AI-driven and might go unnoticed during development. Interestingly, these are actually human biases, hard-coded into the systems.

 

Examples:

1- If a solution is built with only the white population as the sample group, there might be a complete set of use cases that gets skipped. The solution might be quicker to market, but it will not be able to cater to the entire world population, e.g. biometric and facial recognition systems.

 

2- A recruitment tool that eliminates women candidates who have had a career break of more than 2 years is another good example of algorithmic bias.

 

3- Narrower standards of health and fitness applied to women than to men, for example when rating attractiveness.

 

Keys to eliminating these biases:

1- Question your assumptions, and understand openly why we are making them. A group discussion with the right sample and mix of seniority in the organization is key to ensuring that we do not let our biases come into play.

 

2- Trust & fairness: It is important to remain fair to all cultural and gender groups, and to eliminate stereotypes and biases that might creep in at various stages of the process.

 


Algorithmic bias is the systematic and unfair discrimination that results from algorithms and automated systems. Such biases can arise at the early stages of algorithm development: data gathering, design of the algorithm, and implementation. There is a danger that where Lean Six Sigma projects make use of AI and other tech-led solutions, the outcome could end up unfair or inaccurate, reinforcing or aggravating an already existing bias.

 

1.      Effects of Algorithmic Bias

1.1 Hiring Systems

Example: Algorithmic shortlisting based on biased previous hiring practices can end up biased against female candidates compared to male candidates. Amazon identified that its AI-based recruitment tool showed bias against female candidates: the algorithm, trained on historical hiring data, learned to prefer male candidates because the past data reflected a male-dominated hiring trend.

Impact: Highly qualified female candidates potentially get erroneously filtered or screened out; bias tilts the scales towards male candidates.

1.2 Credit Scoring

Example: AI credit scoring models, if trained on data that reflects biased human decision-making from previous erroneous practices at financial companies, will reinforce that bias and penalize racialized groups.

Impact: Members of marginalized communities might be denied equitable access to loans or receive worse terms, perpetuating economic inequalities.

1.3 Route Maps for Online Navigation

Example: Cabs are routed through highly congested roads even though a longer but more suitable route is available elsewhere. It is still a common phenomenon to get stuck in peak-hour congestion caused by an accident, repair work, special occasions, etc., because the algorithm is not tweaked to identify such patterns. Cab drivers also do not take the risk of a penalty for choosing longer routes, thereby not only causing discomfort to the customer but also contributing to the traffic chaos.

Impact:  Traffic jams and delays

2.      How to Combat Algorithmic Bias

To prevent algorithmic bias in these tech-led solutions, especially in Lean Six Sigma projects, consider the following tactics:

2.1 Diverse and Representative Data

Ensure the training data set is diverse and representative of all groups. This is best accomplished by proactively seeking inputs from under-represented populations and applying auditing systems to the datasets to filter out bias.

Action: Regularly audit and update datasets to ensure they remain representative.

2.2 Bias Detection and Mitigation Techniques

 

Detect and mitigate bias in the algorithms. This refers to the use of fairness metrics and bias-correction techniques during model development.

Action: Apply fairness-aware machine learning techniques such as re-weighting, re-sampling, or adversarial debiasing (a minimal re-weighting sketch follows).
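
As one concrete instance of the re-weighting idea, here is a minimal Python sketch of reweighing in the spirit of Kamiran and Calders: each (group, outcome) combination gets a weight that makes group membership and outcome look statistically independent in the training set. The toy arrays are assumptions for illustration, not from any real project.

```python
# Minimal reweighing sketch: weight each (group, label) cell so that
# group and outcome appear independent in the training data.
import numpy as np

y     = np.array([1, 1, 0, 0, 1, 0, 0, 0])  # outcome (1 = favorable)
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # 1 = protected class

weights = np.empty_like(y, dtype=float)
for g in (0, 1):
    for label in (0, 1):
        cell = (group == g) & (y == label)
        if cell.any():
            # expected cell share if group and outcome were independent
            expected = (group == g).mean() * (y == label).mean()
            weights[cell] = expected / cell.mean()

print(weights)  # pass as sample_weight to most scikit-learn estimators
```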

2.3 Transparent and Explainable AI

Develop transparent and explainable AI models to understand how decisions are made. This helps in identifying and mitigating any biases.

Action: Use explainability tools to interpret model decisions and provide transparency (see the sketch below).
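
One widely available explainability technique is permutation importance: shuffle one feature at a time and measure how much model performance drops. Below is a sketch using scikit-learn on a synthetic dataset; a real audit would run this on the production model and data rather than the toy setup assumed here.

```python
# Permutation importance sketch: which features drive the model's decisions?
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: {imp:.3f}")
# A large importance on a sensitive feature (or a proxy for one)
# is a signal to investigate further.
```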

2.4 Continuous Monitoring and Auditing

Constantly monitor and audit the performance of AI systems to ensure they remain fair and unbiased throughout their operation.

Action: Develop standard review mechanisms to assess the output of the algorithm and modify it accordingly.

2.5 Inclusive Design and Development Teams

Make sure the teams involved in designing and developing AI systems are diverse and inclusive. This can help diversify perspectives and reduce the risk of unintentional biases.

Action: Increase diversity in hiring and ensure an inclusive work environment.

2.6 Ethical Guidelines and Accountability

Lay down ethical guidelines and hold developers responsible for the outcomes of their algorithms. Such ethical considerations should be encouraged from the outset of design.

Action: Create governance bodies, such as ethics boards, to oversee AI projects and ensure compliance with ethical standards.

 

In conclusion, algorithmic bias arguably represents one of the most significant challenges in AI integration and tech-based solutions within Lean Six Sigma projects. However, proactive, well-rounded and multi-pronged measures can keep its adverse effects in check, making the systems fairer and more equitable.


Algorithmic bias refers to systematic errors in a computer system that create unfair outcomes, so the decisions produced are biased rather than fair.

Some causes of algorithmic bias – biased training data, human bias, inherent bias in the model.

Examples –

1.      Some companies use algorithms for job screening, and mostly one gender gets shortlisted because the algorithm favours one gender over the other

2.      It has been found in some policing algorithms that minority communities are disproportionately targeted, leading to over-policing in those areas

3.      Medical algorithms sometimes perform worse on minority populations because they were trained primarily on data from the majority segment of patients, leading to disparities in healthcare outcomes

How to avoid algorithmic bias –

1.     Consider Diverse and Representative Data - Ensure the training data is diverse and representative of the population to mitigate biases.

E.g. When designing a healthcare algorithm, include data from a wide range of demographics to ensure it works well for everyone

2.     Transparency and Accountability - Make the decision-making process of algorithms transparent and establish accountability mechanisms.
E.g. Implementing explainable AI systems where users can understand how decisions are made and challenge them if necessary

3.     Regular Updates, tracking & Monitoring: Algorithms should be regularly updated and monitored to ensure they adapt to new data and continue to perform fairly
E.g. An organization might set up a committee to periodically review the performance of its algorithms and update them based on the latest, most representative data

4.     Bias Audits and Testing - Regularly test algorithms for biases and address any issues that arise.
E.g. Companies like Facebook and Google perform bias audits on their algorithms to identify and rectify discriminatory patterns

Organizational Examples:

1. IBM has developed a toolkit called AI Fairness 360, which helps developers detect and mitigate bias in machine learning models.
2. Microsoft has established an AI Ethics and Effects in Engineering and Research (Aether) Committee to guide responsible AI development.
3. Accenture uses fairness and ethics reviews as part of its AI development process to ensure that its algorithms are free from bias.


Algorithmic bias refers to a bias or partiality in the performance of a deployed AI-based solution.

Example: An AI system designed for hiring, which shortlists candidates based on their CVs, can have a biased approach towards hiring candidates of a specific gender if it is trained on historical data that is itself biased towards that gender.

 

This happens primarily due to the factors mentioned below:

 

Biased training data:

If the data on which the AI system is trained is biased, that bias becomes part of the AI system and results in biased outputs when it goes live.
Prevention: Ensure the data on which the AI is trained is representative of the general population, without any biased skew.

 

Designer bias:

Any biased approach during design will also result in the AI system producing biased results. Prevention: Ensure the design of the AI system is clearly documented and reviewed by experts so that any bias in the design is addressed in a timely fashion. Alternatively, having a diverse team of designers that does peer-to-peer review of the designs can also help mitigate design bias.


Below are a few examples of algorithmic bias and the steps to prevent them:

 

  • If AI is used by a company for screening job applicants, it is possible that it was trained on historical hiring data that reflects past biases. This can result in less diversity in the workplace and the loss of potentially qualified candidates.
  • Credit scoring algorithms may use factors like PIN code or past credit history that disproportionately affect certain racial or socioeconomic groups. This can limit access to loans and credit for certain groups, resulting in financial inequalities.

Preventing Algorithmic Bias

 

  • We can ensure that the datasets used to train algorithms are comprehensive and representative of all groups.
  • Regularly audit and update datasets to reflect current and accurate information, avoiding reliance on outdated or biased data.
  • Implement tools and techniques designed to detect bias in algorithms.
  • Provide stakeholders with clear explanations of how algorithms work and how decisions are reached, ensuring accountability.
  • Conduct regular audits of AI systems to ensure they continue to operate fairly and accurately.

Algorithmic Bias:

  • Algorithmic bias relates to errors generated by a software system when the training data is limited or does not represent the population.
  • Example 1: A company on an excellence journey thought of building a new digital system with the available data. They collected one to two months of data and created a model to predict the output of the process. During training and testing, the model gave good accuracy on this limited data, but when it was tested in the production environment it started failing and giving erroneous readings: the machine had undergone maintenance in those two months, while over a longer period the equipment's behaviour had changed due to wear and tear. The model therefore needs to be trained on data from the entire life cycle of the equipment, so that it represents the population (a simple training-vs-production drift check is sketched after this list).
  • Example 2: In the quality department, a supervisor was studying the tensile strength of the product using input parameters. The company produces 200+ different products, but the supervisor considered only the 50 products for which complete data was available. When a new grade (outside those 50 products) comes in, the tensile prediction goes wrong. He needs to collect data across the entire range of products to predict tensile strength and thereby control the input dosages.
  • Example 3: A company started a project to improve the efficiency of Motor A. The ML developer took data from the SCADA system for the model. In the production environment, the model failed. After a careful study, the SCADA engineer found that whenever the motor RPM goes beyond limits or the temperature is very high, the sensor is unable to capture the efficiency data; the entire out-of-range region was therefore missing from the training data. This is also considered algorithmic bias.
  • Example 4: A company started building a facial recognition system. The developer, who had good experience and had built multiple systems, used employee data from the HR system to develop the model, but the facial recognition system kept showing errors: the data used was not up to date, with most face pictures almost 10 years old, so the system failed to recognize people. This can also be considered algorithmic bias.
  • Some companies do not have proper data-capturing mechanisms or a data-lake kind of structure, but as competitors start implementing AI/ML models, everyone rushes to jump into this world. The fact remains that bad data gives bad predictions. Sometimes we train the model on hourly-frequency data but expect minute-level predictions at run time; this too gives biased predictions.
  • In summary, here are the sources of algorithmic bias:
    • Historically biased data
    • Implicit human biases
    • Feedback loops
    • Lack of diverse representation
  • Here are some of the best practices one can adopt to avoid algorithmic bias:
    • Diverse and representative data: Before addressing the problem through a data-driven model, ensure that the data collected for training represents the entire population (covering all products, various speeds, input materials, timings, and operating conditions).

    • Retraining the models regularly: Many manufacturing industries keep improving by changing the design of their equipment. With every change, the model needs to be retrained on a fresh data set to reduce prediction errors.

    • Transparency: We need clear documentation of how decisions are made by the developed software system.

    • Inclusive development teams: A business with various verticals needs a diverse team who can check and balance biases that may otherwise go unnoticed.

    • Including human interaction in decision making: If the software system fails to give a correct decision, it should allow humans to correct it; COBOTs are a good example.
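
To illustrate the life-cycle point from Example 1, here is a minimal Python sketch of a training-vs-production drift check for one sensor feature, using a two-sample Kolmogorov-Smirnov test. The temperature values are simulated assumptions for this post: wear and tear shifts the production distribution away from the training window.

```python
# Drift check: has the sensor's distribution shifted since training?
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_temp   = rng.normal(60.0, 2.0, size=1000)  # months after maintenance
production_temp = rng.normal(63.5, 2.5, size=1000)  # later, with wear and tear

stat, p_value = ks_2samp(training_temp, production_temp)
if p_value < 0.01:
    print("Distribution shift detected: retrain on data covering"
          " the equipment's full life cycle.")
```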
