By: Ambreena Ladhani

ChatGPT is an AI language processing model trained to interact in a conversational way.[1] Its dialogue format allows it to answer questions, admit mistakes, and challenge incorrect premises.

When ChatGPT is asked if it is biased, it generates the following response:

Question: ChatGPT, are you biased?

Answer: “As an artificial intelligence language model, I don’t have personal beliefs, emotions, or biases. However, I may reflect any biases that exist in the data I was trained on or in the language patterns that are commonly used in society.”

With its exceptional ability to respond to open-text inquiries, it is no surprise that ChatGPT can recognize its drawbacks when asked. AI is methodical, algorithmic, and inevitably coloured by human bias.

What is AI Bias?

As articulated by ChatGPT itself, AI bias occurs when algorithms reproduce human bias in the information they generate.[2] When trained on data derived from human decisions, AI technology may unintentionally adopt the social and political biases of those decision-makers. Minimizing AI bias is imperative to ensure that human bias does not colour the results produced by AI.[3]

Tay, the Racist Robot

In 2016, Microsoft unveiled an artificially intelligent chatbot on Twitter.[4] The chatbot, named “Tay,” was programmed to mimic the language patterns of a typical American teenager and to learn from its interactions with other users. After being spoon-fed profanities by Twitter users, Tay quickly echoed racist, misogynistic, and extremist views. While it’s true that Tay wasn’t actually racist (or curiously pro-genocide), Tay’s short-lived legacy leaves us with an important reminder: artificial intelligence can be artificially biased.

Legal Consequences When AI Bias is Unchecked

AI models are susceptible to embedding human and societal biases, which can lead to skewed outcomes with severe consequences. For example, an AI risk assessment tool used to predict recidivism incorrectly classified African American defendants as “high risk” at nearly double the rate of white defendants.[5] While AI has the potential to enhance efficiency, it is imperative to monitor AI bias to prevent inaccurate outcomes resulting from human bias.
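The disparity in that example is, at bottom, a difference in false positive rates between groups: how often members of each group are labelled “high risk” without going on to reoffend. A minimal sketch of that check in Python, using a hypothetical toy dataset with illustrative column names (group, predicted_high_risk, reoffended), might look like this:

```python
import pandas as pd

# Hypothetical toy data; column names are illustrative, not from any real tool.
df = pd.DataFrame({
    "group":               ["A", "A", "A", "B", "B", "B"],
    "predicted_high_risk": [1,   1,   0,   0,   1,   0],
    "reoffended":          [0,   1,   0,   0,   1,   1],
})

def false_positive_rate(sub: pd.DataFrame) -> float:
    """Share of non-reoffenders who were nonetheless labelled 'high risk'."""
    non_reoffenders = sub[sub["reoffended"] == 0]
    return non_reoffenders["predicted_high_risk"].mean()

# A large gap between groups is the kind of disparity ProPublica reported.
for name, sub in df.groupby("group"):
    print(f"Group {name}: FPR = {false_positive_rate(sub):.2f}")
```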

How Do You Minimize AI Bias?

AI systems that are entirely impartial are unlikely to exist.[6] The reason for this is that entirely impartial humans do not exist. The effectiveness of AI relies heavily on the quality of the data it receives. When that data is generated by human decisions, producing an unbiased outcome becomes nearly impossible.[7] Nonetheless, it is possible to reduce AI bias, both to boost public confidence in AI technology and to prevent biased outcomes from furthering social and historical inequalities.

McKinsey & Company proposes the following six ways to minimize AI bias:[8]

1. Be aware of contexts in which AI can help correct for bias and those in which there is a high risk that AI will exacerbate bias.

It is crucial to understand the contexts in which AI can help mitigate bias and those in which it is likely to exacerbate it. This is especially important when AI tools are adopted rapidly for their efficiency and commercial benefits.

2. Establish processes and practices to test for and mitigate bias in AI systems.

These procedures may involve enhancing data collection through conscious sampling techniques (a minimal example follows this list) and maintaining transparency in the methods used to train AI systems.

3. Engage in fact-based conversations about potential bias in human decisions.

AI can reveal biased patterns in human decision-making, surfacing long-standing biases that may otherwise go unnoticed. Human-driven processes can then be improved in light of the biased decision-making that AI reveals.

4. Fully explore how humans and machines can work well together.

This requires identifying situations where automated decision-making is suitable and when human oversight is indispensable. AI is capable of handling tasks that require minimal supervision, but in some instances, it is constrained by the requirement for independent human validation. Recognizing the distinction between the two and adjusting accordingly is crucial.

5. Invest more in bias research, make more data available for research (while respecting privacy), and adopt a multidisciplinary approach.

A multidisciplinary approach may involve ethicists, social scientists, and experts who best understand the nuances of each application area. Continually evaluating the role of AI decision-making as the field progresses is a vital part of this approach.

6. Invest more in diversifying the AI field itself.

The AI field, as it stands, lacks diversity with respect to gender, race, class, geography, disability, and other characteristics. Enhancing diversity within the AI field can better equip the discipline to identify, alleviate, and rectify AI bias. Broadening access to AI education, tools, and opportunities for underrepresented groups can aid in achieving this diversity.
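As a concrete illustration of the conscious sampling mentioned in point 2, the sketch below draws an equally weighted training sample from each group so that no single group dominates. The dataset and the group column name are hypothetical, and real sampling designs are considerably more involved:

```python
import pandas as pd

# Hypothetical dataset in which group "A" dominates the raw data 90/10.
df = pd.DataFrame({
    "group": ["A"] * 90 + ["B"] * 10,
    "feature": range(100),
})

# Naive random sampling would reproduce the 90/10 imbalance. Drawing a
# fixed number of rows per group yields a balanced training sample instead.
per_group = 10
balanced = df.groupby("group").sample(n=per_group, random_state=0)

print(balanced["group"].value_counts())  # A: 10, B: 10
```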

AI Bias: A Helpful Tool

While AI bias is inevitable, an AI system that is properly monitored and mitigated can serve as a helpful tool in reducing disparities caused by human bias. AI decisions can be examined for their results in a way that human decisions cannot, resulting in fairer processes.[9]

For example, a study conducted by the University of Chicago examined the records of 554,689 defendants in New York bail hearings.[10] Judges had released just over 400,000 of those defendants on bail. An artificial intelligence tool, programmed to consider only each defendant’s age and criminal record, was directed to make its own list of 400,000 people to release.

The study then compared the list generated by the AI tool with the list of people actually released by the judges. Those on the AI-generated list were found to be 25% less likely to commit crimes while out on bail than the 400,000 people the judges in New York had released.
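A much-simplified sketch of that experimental design, using entirely synthetic data and a basic logistic regression in place of the study’s actual model, might look like the following. Every number and variable here is illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for the study's two inputs: age and prior record.
n = 10_000
age = rng.integers(18, 70, n)
priors = rng.poisson(2, n)

# Synthetic outcome: whether a defendant would fail while on bail, loosely
# tied to youth and prior record (an assumption for illustration only).
p_fail = 1 / (1 + np.exp(1.5 + 0.05 * (age - 30) - 0.4 * priors))
failed = rng.random(n) < p_fail

# Fit a simple risk model on the two features.
X = np.column_stack([age, priors])
model = LogisticRegression().fit(X, failed)

# "Release" the lowest-risk 70% by predicted probability of failure,
# mirroring the study's comparison against the judges' release decisions.
k = int(0.7 * n)
released = np.argsort(model.predict_proba(X)[:, 1])[:k]

print(f"Failure rate among model-released: {failed[released].mean():.3f}")
print(f"Failure rate overall:             {failed.mean():.3f}")
```

Even in this toy version, the model-selected group should show a lower failure rate than the overall base rate, which is the shape of the comparison the study makes against the judges’ choices.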

While pre-trial decisions are meant to be based solely on the defendant’s risk, human bias often colours decision-making processes. The use of AI can reduce this by not conflating an offender’s risk with their racial background, facial expressions, or body language during bail hearings.[11]

Takeaways

While AI bias can be harmful if it is not accounted for, there are ways to minimize AI bias to ensure its predictive outcomes do not unintentionally reflect human bias. AI can recognize bias in human decision-making that may otherwise go unnoticed. AI can also produce accurate results based solely on the variables it is trained to recognize. While artificial intelligence can be artificially biased, examining this bias can lead to fairer decision-making and more accurate results.

Disclaimer: The information provided in this article is for general informational purposes only and is not intended to be legal advice. The content provided does not create a lawyer-client relationship, and nothing in this article should be considered a substitute for professional legal advice. The information is based on general principles of law and may not reflect the most current legal developments or interpretations in your jurisdiction. Laws and regulations vary by jurisdiction, and the application and impact of laws can vary widely based on the specific facts and circumstances involved. You should consult a qualified legal professional for advice regarding your specific situation.


[1] ChatGPT, online: OpenAI https://openai.com/blog/chatgpt

[2] Zoe Larkin, “AI Bias – What Is It and How to Avoid It?” (2022), online: https://levity.ai/blog/ai-bias-how-to-avoid

[3] Jake Silberg & James Manyika, “Notes from the AI frontier: Tackling bias in AI” (2019), McKinsey Global Institute.

[4] Elle Hunt, “Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter” (2016), online: The Guardian https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter

[5] Julia Angwin et al., “Machine Bias” (2016), ProPublica.

[6] Larkin, supra note 2.

[7] Ibid.

[8] Silberg & Manyika, supra note 3.

[9] Silberg & Manyika, supra note 3.

[10] Jon Kleinberg et al., “Human Decisions and Machine Predictions” (2017), online: http://www.nber.org/papers/w23180.

[11] Ibid.