Hey… Your AI is Hurting Me!!

Mandar Karhade, MD, PhD
Published in Towards AI · 7 min read · Dec 15, 2022


An immature AI is not a reason to allow systemic bias.

As the use of artificial intelligence (AI) continues to grow and expand, concerns about bias in AI systems have become increasingly prominent. Bias in AI can have serious consequences, including discriminating against certain groups of people, perpetuating existing social inequalities, and making decisions that are unfair or even harmful.

Let’s go through some prominent examples of eagerly implemented AI that did not go as expected:


Kidney disease-predicting AI sucks for women: DeepMind Health

  • The DeepMind researchers, working with VA data, acknowledged that the training data comprised only 6.38% women and 93.62% men. “The model performance was lower for this demographic,” they wrote at the time, though their findings were limited to patients in the earlier stages of acute kidney failure. The AUC (a measure of accuracy) was 83% for men but only 71% for women. This type of bias can lead to unequal access to health care and can result in individuals receiving inadequate or inappropriate care. Now imagine if the disease is biologically different across ethnicities. Are those even represented? A rough sketch of this kind of per-group check follows below.
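To make the per-group gap concrete, here is a minimal sketch of how one might compute AUC separately for each demographic group. This is not DeepMind’s evaluation code; the DataFrame and column names (`aki`, `model_score`, `sex`) are purely illustrative.

```python
# Minimal sketch of a per-group AUC check (not DeepMind's actual pipeline).
# Assumes a pandas DataFrame with a binary outcome, a model score, and a
# demographic column -- all column names here are illustrative.
import pandas as pd
from sklearn.metrics import roc_auc_score

def auc_by_group(df: pd.DataFrame, label: str, score: str, group: str) -> pd.Series:
    """AUC of `score` against `label`, computed within each level of `group`."""
    return df.groupby(group).apply(lambda g: roc_auc_score(g[label], g[score]))

# Usage (with hypothetical columns):
#   auc_by_group(df, label="aki", score="model_score", group="sex")
# A result like {"male": 0.83, "female": 0.71} is exactly the kind of gap
# described above.
```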

AI flubs people with disabilities: Microsoft Research

  • Microsoft Research has also shown that AI-driven systems such as search results, advertisements, and maps often return results that are irrelevant or biased against certain groups of people, such as individuals with disabilities or those who live in certain geographic regions. An extension of this problem is that people with disabilities often cannot use voice recognition systems effectively. This type of bias can result in individuals not being helped when they really need it. Even passive biases like these contribute to existing inequalities in access to information and opportunities.

Students! We wrongly judge your potential: Cornell Research

  • AI algorithms of this kind predict a student’s likelihood of success in a given academic program or field, yet they had never been evaluated for fairness in underserved populations. The Cornell paper evaluates different group fairness measures for student performance prediction on various educational datasets and fairness-aware learning models. The study found that the choice of fairness measure matters, and so does the choice of grade threshold. AI used in such systems can discriminate against students of certain races or socioeconomic backgrounds. This type of bias can result in students being placed in inappropriate or unequal learning environments and can contribute to existing inequalities in education. A sketch of two such group fairness measures follows below.
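The paper’s point about the fairness measure and the grade threshold is easier to see in code. Below is a hedged sketch of two common group fairness measures, demographic parity difference and equal opportunity difference, evaluated at a chosen grade threshold; the column names (`predicted_grade`, `true_grade`, `group`) are assumptions for illustration, not the datasets used in the study.

```python
# Sketch of two group fairness measures for pass/fail prediction at a grade
# threshold. Column names are illustrative, not from the Cornell paper.
import pandas as pd

def fairness_at_threshold(df: pd.DataFrame, threshold: float,
                          group_a: str, group_b: str) -> dict:
    pred = df["predicted_grade"] >= threshold   # model predicts "will pass"
    true = df["true_grade"] >= threshold        # student actually passed
    a = df["group"] == group_a
    b = df["group"] == group_b

    # Demographic parity difference: gap in positive-prediction rates.
    dp_diff = pred[a].mean() - pred[b].mean()

    # Equal opportunity difference: gap in true-positive rates among
    # students who actually passed.
    eo_diff = pred[a & true].mean() - pred[b & true].mean()

    return {"demographic_parity_diff": dp_diff,
            "equal_opportunity_diff": eo_diff}

# Moving `threshold` up or down can change which group the model appears to
# favor, which is why the choice of threshold matters as much as the measure.
```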

Facebook says Fight Puppets: Washington Post report

  • AI algorithms that prioritize content in users’ news feeds can amplify controversial or sensational content, making it more likely for users to encounter and engage with this type of material. In addition, AI-powered chatbots and social media accounts can be used to automatically generate and disseminate divisive or inflammatory content, potentially inciting strong reactions from users. Furthermore, AI algorithms designed to generate personalized content or recommendations can lead users into “echo chambers” where they only encounter information and perspectives that align with their own beliefs, which further fuels divisive or inflammatory interactions. All of these have been mechanisms to keep “the user engaged” so that data could be harvested for sale, at the expense of users’ mental health, or of an entire race or minority group.

Black people… you are all the same, right? (2019): Harvard Study

  • According to a Harvard study, commercial facial recognition systems are more likely to misidentify individuals with darker skin tones. The study found that Asian and African American people were up to 100 times more likely to be misidentified than white men, that Native Americans had the highest false-positive rate, and that African American women were falsely identified more often in the kinds of searches used by police investigators. These are the same systems the FBI uses (390,000 searches from 2011–19). This type of bias can lead to false arrests and other negative consequences for individuals who are wrongly identified by the system.

Microsoft’s AI-powered chatbot (2016): Microsoft Tay

  • Microsoft’s Tay, a chatbot that learned from Twitter conversations, was found to exhibit racist and sexist behavior, using discriminatory or offensive language. This type of bias can create a hostile environment for the individuals targeted by the chatbot and can also reflect poorly on the organization or individual that created it. Is OpenAI immune to this? Not really. Stories have surfaced, like this tweet in which ChatGPT wrote an equation implying that to be a good scientist, you have to be white and male.

AI for screening job candidates discriminates (2022): Link

  • AI systems used for hiring or job performance evaluations have been found to exhibit bias against certain groups of people, such as women or individuals with certain cultural backgrounds. Among the examples given of popular work-related AI tools were resume scanners, employee monitoring software that ranks workers based on keystrokes, game-like online tests to assess job skills, and video interviewing software that measures a person’s speech patterns or facial expressions. This type of bias can prevent qualified individuals from being hired or promoted and can contribute to existing inequalities in the workplace.

Credit scoring algorithms are biased against disadvantaged borrowers: Stanford study

  • AI systems used for credit scoring and loan approval have been found to exhibit bias against certain groups of people, such as individuals with lower incomes or those who live in certain neighborhoods. A preprint study in which researchers used artificial intelligence to test alternative credit-scoring models found that there is indeed a problem for lower-income families and minority borrowers: the predictive tools are between 5 and 10 percent less accurate for these groups than for higher-income and non-minority groups. This type of bias can prevent individuals from accessing credit or loans and can contribute to existing inequalities in the financial system.

“Black people have higher risk scores as a group”: COMPAS AI

  • In 2016, ProPublica reported that an artificial intelligence tool used in courtrooms across the United States to predict future crimes, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), was biased against Black defendants. This type of bias can lead to unfair treatment of individuals by the criminal justice system. The worst part is that Northpointe, the parent company of COMPAS, refuted ProPublica’s claims, stating that the algorithm was working as intended. Northpointe argued that Black people have a higher baseline risk of committing future crimes after an arrest; according to Northpointe, this results in higher risk scores for Black people as a group. Northpointe, now rebranded as Equivant, hasn’t publicly changed the way it computes risk assessments. The sketch below shows the kind of error-rate comparison ProPublica ran.
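The core of ProPublica’s analysis was a comparison of error rates across racial groups: how often people who did not reoffend were flagged high-risk, and how often people who did reoffend were flagged low-risk. The sketch below reproduces that idea with illustrative column names and an assumed high-risk cutoff; it is not the actual COMPAS schema or ProPublica’s code.

```python
# Illustrative error-rate comparison in the spirit of ProPublica's COMPAS
# analysis. Column names and the high-risk cutoff are assumptions.
import pandas as pd

def error_rates_by_group(df: pd.DataFrame, cutoff: int = 5) -> pd.DataFrame:
    flagged = df["risk_score"] >= cutoff        # labeled "high risk"
    reoffended = df["reoffended"].astype(bool)  # observed recidivism
    rows = []
    for race, g in df.groupby("race"):
        f, r = flagged.loc[g.index], reoffended.loc[g.index]
        rows.append({
            "race": race,
            # non-reoffenders wrongly flagged high-risk
            "false_positive_rate": (f & ~r).sum() / max((~r).sum(), 1),
            # reoffenders wrongly flagged low-risk
            "false_negative_rate": (~f & r).sum() / max(r.sum(), 1),
        })
    return pd.DataFrame(rows)

# ProPublica's finding, in these terms: the false positive rate was roughly
# twice as high for Black defendants as for white defendants.
```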

The Change Needs to Be Universal: Policy

To address these concerns, the role of policy in decreasing bias in the implementation of AI in our lives is crucial. Policymakers have the ability to establish guidelines and regulations that ensure that AI systems are designed, developed, and used in ways that are fair and unbiased.

Transparency & Accountability

One way that policy can help to decrease bias in AI is by requiring transparency and accountability in the development and use of AI systems. This can include measures such as publishing information about the data used to train and evaluate AI systems and making it possible for individuals to challenge the decisions made by AI systems. By making it clear how AI systems are developed and used, policymakers can help to ensure that AI systems are not biased against certain groups of people.

Setting Standards for the Quality and Accuracy

Another way that policy can help to decrease bias in AI is by setting standards for the quality and accuracy of the data used to train and evaluate AI systems. AI systems are only as good as the data they are trained on, and if the data is biased or flawed, the AI system will also be biased or flawed. By establishing standards for the quality and accuracy of data used in AI systems, policymakers can help to ensure that AI systems are based on high-quality data that is representative of the population they are intended to serve.
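One simple way such a standard could be operationalized is an audit that compares the demographic makeup of a training set with the population the system is meant to serve and flags under-represented groups. The sketch below is only an illustration of that idea; the column names, reference shares, and tolerance are assumptions, not any actual regulatory rule.

```python
# Illustrative representativeness audit for a training set. The reference
# shares and the tolerance are assumptions, not an official standard.
import pandas as pd

def representation_audit(train: pd.DataFrame, group_col: str,
                         population_share: dict, tolerance: float = 0.5) -> pd.DataFrame:
    """Flag groups whose share of the training data falls below
    `tolerance` times their share of the reference population."""
    train_share = train[group_col].value_counts(normalize=True)
    rows = []
    for group, pop in population_share.items():
        share = float(train_share.get(group, 0.0))
        rows.append({"group": group,
                     "train_share": round(share, 4),
                     "population_share": pop,
                     "under_represented": share < tolerance * pop})
    return pd.DataFrame(rows)

# representation_audit(df, "sex", {"female": 0.51, "male": 0.49}) would
# immediately flag a training set that is only 6.38% women, like the kidney
# disease example above.
```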

Establishing Mechanisms for Mitigating Bias

  • Finally, policy can play a role in decreasing bias in AI by establishing mechanisms for addressing and mitigating bias when it is detected. This can include measures such as providing resources and support for researchers and developers working on methods for detecting and mitigating bias in AI systems, and establishing processes for addressing bias when it is discovered. By creating these mechanisms, policymakers can help to ensure that AI systems are used in a way that is equitable and benefits society as a whole.
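As one example of what “mechanisms for mitigating bias” can look like in practice, a widely studied pre-processing technique is reweighing (Kamiran & Calders): each (group, label) combination in the training data is weighted as if group membership and outcome were statistically independent, and the weights are then passed to any learner that accepts sample weights. The sketch below illustrates the idea with hypothetical column names; it is not a policy prescription or a complete fairness toolkit.

```python
# Sketch of the reweighing idea (Kamiran & Calders, 2012). Column names are
# hypothetical; any estimator that accepts sample_weight can consume the output.
import pandas as pd

def reweighing_weights(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)     # P(group)
    p_label = df[label_col].value_counts(normalize=True)     # P(label)
    p_joint = df.groupby([group_col, label_col]).size() / n  # P(group, label)

    def weight(row):
        g, y = row[group_col], row[label_col]
        # Weight = probability expected under independence / observed probability.
        return (p_group[g] * p_label[y]) / p_joint[(g, y)]

    return df.apply(weight, axis=1)

# weights = reweighing_weights(train_df, "sex", "approved")
# model.fit(X, y, sample_weight=weights)   # e.g. a scikit-learn estimator
```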

Conclusion

Bias in AI is a bane of the new age. The role of policy in decreasing bias in the implementation of AI in our lives is essential. By establishing guidelines and regulations that ensure that AI systems are designed, developed, and used in a fair and unbiased way, policymakers can help to mitigate the potential negative consequences of bias in AI and ensure that AI benefits people of all colors.

