By: Sara Franklin-White

Over the past several years, Artificial Intelligence (“AI”) has made its way into industries and sectors around the world. Despite all the benefits of AI, legal and ethical concerns arise with its use. Serious issues relating to racial bias in AI tools have been surfacing in recent years, resulting in discrimination, marginalization and infringements on privacy and basic human rights.

What exactly is ‘bias’?

When people hear the word ‘bias,’ they usually think it means something bad. However, algorithmic bias is not always negative. For a data scientist, bias can simply refer to a neutral, measurable property of an algorithm,[1] but algorithmic bias is more broadly understood through its cultural, moral and social definitions.

In this context, algorithmic bias has been described as “the lack of fairness that results from the performance of a computer system,”[2] or more specifically as “possible sources of harm throughout the machine learning process,” including “unintended or potentially harmful” properties of the data that lead to “unwanted or societally unfavourable outcome[s].”[3] The latter definition of algorithmic bias will be used throughout this post.
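To make the idea of a measurable but socially loaded quantity more concrete, the short sketch below compares how often a system produces a favourable prediction for two groups. This is a hypothetical illustration, not drawn from the cited sources; the groups, decisions and numbers are all invented.

```python
# Hypothetical illustration: one simple way to quantify an "unwanted or
# societally unfavourable outcome" is to compare how often a model gives
# the favourable prediction (1) to different groups. All data are invented.

def positive_rate(predictions):
    """Share of cases receiving the favourable outcome (1)."""
    return sum(predictions) / len(predictions)

# Invented model decisions (1 = favourable, 0 = unfavourable) for two groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # favourable 75% of the time
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # favourable 37.5% of the time

# A large gap between the groups is one (imperfect) signal of the kind of
# unfairness described above; a gap near zero does not prove fairness.
gap = positive_rate(group_a) - positive_rate(group_b)
print(f"Group A favourable rate: {positive_rate(group_a):.2f}")
print(f"Group B favourable rate: {positive_rate(group_b):.2f}")
print(f"Gap between groups:      {gap:.2f}")
```

A measure like this is only one lens: a system can look balanced on one metric and still produce the harms described above.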

How do algorithms become biased?

Even if they are not aware of it, the people who make AI tools may hold biases or assumptions about certain groups of people. This may produce a prejudiced algorithm, resulting in discriminatory categorization of some groups over others based on how the algorithm is coded.[4] Similarly, algorithmic discrimination may result from a dataset that has captured historical or social inequities, such as how certain racial groups are overrepresented in the criminal justice system.[5]
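As a hedged sketch of the second mechanism, the example below trains a simple model on synthetic “historical” decisions that partly tracked group membership rather than the factor the decision was supposed to turn on. The dataset, variable names and model choice are assumptions made purely for illustration; no real system or dataset is being reproduced.

```python
# Hypothetical sketch: training data that encodes a historical inequity
# leads a model to reproduce that inequity in its own predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)            # 0 or 1: a protected attribute
merit = rng.normal(0, 1, n)              # the factor decisions *should* turn on

# Synthetic "historical" outcomes: past decisions partly reflected group
# membership, not just merit. This is the inequity captured in the dataset.
past_outcome = (merit + 1.5 * group + rng.normal(0, 0.5, n) > 0.75).astype(int)

# A model fit to those outcomes learns the group effect along with merit.
X = np.column_stack([merit, group])
model = LogisticRegression().fit(X, past_outcome)

preds = model.predict(X)
for g in (0, 1):
    print(f"Predicted favourable rate for group {g}: {preds[group == g].mean():.2f}")
```

Simply dropping the protected attribute does not necessarily fix this, since other features often act as proxies for it.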

Societal and contextual data bias

Bias and discrimination may also result from contextual and social circumstances. Consider “Tay,” Microsoft’s AI chatbot launched on Twitter and designed to engage users in conversation. Tay was programmed to discover patterns of language through its interactions with users, which it then used in future conversations.[6] Within a few hours, Tay had ‘learned’ to become racist, misogynistic and anti-Semitic, and had tweeted abusive and offensive statements over 95,000 times.[7] Tay was removed from the platform sixteen hours after it launched, indicating that discriminatory algorithms are not only a technical problem, but a social one.[8] These contextual considerations must be addressed as AI tools continue to grow and expand.

Other examples of biased AI

Amazon’s facial recognition software, Rekognition, was found to have both racial and gender biases: researchers discovered that the software was more likely to misidentify darker-skinned women than members of any other group.[9] When analyzing an image of Oprah Winfrey, one of the most widely known television presenters, the software concluded that there was a 76.5% chance she was a man.[10]

In another study, Rekognition was given a database of 25,000 mugshots and tasked with comparing them to official photos of members of Congress. The software misidentified 28 Congressional representatives as criminals, and a disproportionate share of those false matches (38 percent) were people of color.[11] This is clearly problematic in and of itself, but it is especially concerning that, even after these issues were raised with Amazon’s leadership, the company continued to sell the technology to law enforcement and military agencies around the world until 2020.[12]
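The Congress test described above is, in effect, a comparison of false-match rates across groups. The snippet below is a rough, hypothetical sketch of how such an audit could be tallied; the records are invented and do not reproduce the ACLU test, the MIT study or Amazon’s API.

```python
# Hypothetical audit sketch: compare false-match (false-positive) rates
# across demographic groups, given a system's match decisions and the truth.
from collections import defaultdict

# Each invented record: (group, system_said_match, actually_a_match)
results = [
    ("group_x", True, False), ("group_x", False, False), ("group_x", True, True),
    ("group_y", True, False), ("group_y", True, False), ("group_y", False, False),
]

false_matches = defaultdict(int)
true_non_matches = defaultdict(int)
for group, predicted, actual in results:
    if not actual:                      # only genuine non-matches can be falsely matched
        true_non_matches[group] += 1
        if predicted:
            false_matches[group] += 1

for group, total in true_non_matches.items():
    print(f"{group}: false-match rate {false_matches[group] / total:.0%}")
```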

What can be done?

AI tools will continue to evolve and grow around the world. These tools show incredible promise to make processes more effective and efficient, exemplified by innovations such as MyOpenCourt, which uses AI to increase access to basic legal support.[13] However, if technology is to benefit all members of society equally, the serious legal and ethical issues surrounding algorithmic bias and discrimination must still be addressed.

Regulation and accountability

Regulating the development and use of AI does not have to limit innovation; it can ensure that these tools are used responsibly and ethically. Responsible AI means “AI that is fair and non-biased, transparent and explainable, secure and safe, privacy-proof, accountable, and to the benefit of mankind.”[14] To promote responsible AI, Canada can incorporate the principles of fairness, privacy, transparency and accountability into future policies, and shape regulations and guidance for those who develop, sell, buy and use AI tools. While certain companies have developed their own responsible AI strategies, federal regulations and safeguards can ensure these principles are upheld, which could result in positive, transformative outcomes for all members of society.

Disclaimer: The information provided in this post is for general informational purposes only and is not intended to be legal advice. The content provided does not create a lawyer-client relationship, and nothing in this post should be considered a substitute for professional legal advice. The information is based on general principles of law and may not reflect the most current legal developments or interpretations in your jurisdiction. Laws and regulations vary by jurisdiction, and the application and impact of laws can vary widely based on the specific facts and circumstances involved. You should consult a qualified legal professional for advice regarding your specific situation.


[1] “Azeem Azhar’s Exponential View – How to Practice Responsible AI” at 00h:10m:52s, online (podcast): Harvard Business Review <hbr.org/podcast/2021/06/how-to-practice-responsible-ai>.

[2] Steve Nouri, “The Role of Bias in Artificial Intelligence” (4 February 2021), online: Forbes <forbes.com/sites/forbestechcouncil/2021/02/04/the-role-of-bias-in-artificial-intelligence/?sh=5f4d2a30579d>.

[3] Harini Suresh & John V Guttag, “A Framework for Understanding Unintended Consequences of Machine Learning” (15 June 2021), online: Cornell University <arxiv.org/abs/1901.10002>.

[4] Richmond Alake, “Algorithm Bias in Artificial Intelligence Needs to Be Discussed (And Addressed)” (27 April 2020), online: Towards Data Science <towardsdatascience.com/algorithm-bias-in-artificial-intelligence-needs-to-be-discussed-and-addressed-8d369d675a70>.

[5] Julia Angwin et al, “Machine Bias” (23 May 2016), online: ProPublica <www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing>.

[6] Oscar Schwartz, “In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Online Conversation” (25 November 2019), online: IEEE Spectrum <spectrum.ieee.org/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation>.

[7] Ibid.

[8] Ibid.

[9] Kyle Wiggers, “MIT researchers: Amazon’s Rekognition shows gender and ethnic bias” (24 January 2019), online: Venture Beat <venturebeat.com/2019/01/24/amazon-rekognition-bias-mit/>.

[10] Zoe Kleinman, “Amazon: Facial recognition bias claims are ‘misleading’” (4 February 2019), online: BBC News <www.bbc.com/news/technology-47117299>.

[11] Kyle Wiggers, “Amazon’s Rekognition misidentified 28 members of Congress as criminals” (26 July 2018), online: Venture Beat <venturebeat.com/2018/07/26/amazons-rekognition-misidentified-28-members-of-congress-as-criminals/>.

[12] Monica Melton, “Amazon Still Pushing Biased Facial-Recognition Software to Law Enforcement, MIT Researcher Contends” (1 February 2019), online: Forbes <www.forbes.com/sites/monicamelton/2019/02/01/amazon-still-pushing-biased-facial-recognition-software-to-law-enforcement-military-mit-researcher-contends/?sh=57dd4ad11221>.

[13] “MyOpenCourt” (2021), online: MyOpenCourt <myopencourt.org>.

[14] Paul B de Laat, “Companies Committed to Responsible AI: From Principles towards Implementation and Regulation?” (6 October 2021), Philosophy & Technology, online: <doi.org/10.1007/s13347-021-00474-3>.