By: Jessica Krueger


Artificial intelligence (“AI”) has proven to be an incredibly useful tool across countless industries. Just a few of the many benefits include increasing efficiency, creativity, and productivity at reduced costs.[1]

Employers, for example, have been increasingly using AI to assist in their recruitment and hiring processes. AI can be used to sort through large numbers of resumes, narrow down a list of candidates, analyse interviews, and communicate with prospective employees in a personalized way. The technology can also combine various data points to predict the best-fit candidate for a role.[2] As of 2021, the AI recruitment industry was worth over $600 million USD.[3]

As with any powerful tool, however, AI raises areas of concern. One of the major concerns with the utilization of AI is the prevalence of bias. At first glance, one may be inclined to believe that the use of AI would help to eliminate the human biases that inevitably affect one's decision-making. However, studies have repeatedly shown that this is not the case. AI is only as good as the data set programmers use to train it, and any errors or inherent biases will be reflected in the AI's output.[4] A 2018 Boston Consulting Group survey found that 84% of executives reported that their AI algorithms reinforced existing gender or racial bias.[5]

Bias in AI

AI can produce biased results in two main ways. First, an AI tool is taught to interpret information in a specific way and is therefore prone to adopting the biases of its programmers. Second, an AI tool will produce biased results if its analysis is based on biased information. In either case, the output can lead to unfair outcomes and perpetuate discrimination.

To see the prevalence of bias in AI, simply search Google Images for the word “beautiful.” You will notice that the results are mostly photos of white women. The search engine does not have racial preferences, but the samples from which it draws its results were created by people who do.[6] The AI can then magnify those biases: as photos rise to a search engine’s front page, more people click on them, creating a positive feedback loop in which the algorithm suggests them even more.[7]

Another potential issue is that the AI may unintentionally link specific features to predictions or hidden categories. For instance, Zac Amos’s article “How to Reduce the Effects of AI Bias in Hiring” provided the following example:

“suppose programmers don’t give AI a sample containing female truck drivers. In that case, the software will automatically link the “male” and “truck driver” categories together by process of exclusion. It then creates a bias against women and may conclude they should not be hired as truck drivers based on previous patterns.”[8]

This issue led to the discontinuance of Amazon’s AI recruiting tool. The tool was found to be biased in favour of men. The AI learned this preference because the technology field is dominated by men, and thus, the patterns in successful candidates led to the system learning that male candidates were preferable.[9] The tool automatically penalized resumes that included the word “women’s,” as in “women’s studies” or “women’s university” and favoured candidates who described themselves using verbs more commonly found on male engineers’ resumes, such as “executed” and “captured.”[10]
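The pattern-learning failure described above can be sketched in a few lines of code. This is a deliberately simplified toy model, not Amazon's actual system: it scores a resume by how often its words appeared in historically "successful" resumes, so any term absent from a male-dominated history (such as "women's") silently scores zero.

```python
from collections import Counter

def train_term_scores(past_resumes):
    """Learn how often each term appears in historically 'successful' resumes."""
    counts = Counter()
    for resume in past_resumes:
        counts.update(resume.lower().split())
    total = sum(counts.values())
    return {term: n / total for term, n in counts.items()}

def score_resume(resume, term_scores):
    """Score a resume as the sum of its terms' historical frequencies.
    Terms never seen in the training data contribute nothing at all."""
    return sum(term_scores.get(term, 0.0) for term in resume.lower().split())

# Hypothetical historical hires from a male-dominated field: the word
# "women's" never appears, so the model has no way to reward it.
history = [
    "executed backend migration captured performance gains",
    "executed deployment pipeline captured key metrics",
]
scores = train_term_scores(history)

a = score_resume("executed platform rewrite captured uptime targets", scores)
b = score_resume("women's chess club captain led platform rewrite", scores)
# Resume B is penalized purely because its vocabulary differs from past hires.
```

No term in resume B ever appeared in the historical data, so it scores zero: the "bias" emerges from the training set alone, with no explicit rule about gender anywhere in the code.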

In the recruitment context, the use of AI can have serious implications and may expose an employer to legal liability. Although employers have a lot of flexibility when deciding who to hire, they must ensure that their actions comply with the Ontario Human Rights Code (the “Code”). As set out in section 5 of the Code, an employer cannot base hiring decisions on Code-prohibited grounds:

5 (1) Every person has a right to equal treatment with respect to employment without discrimination because of race, ancestry, place of origin, colour, ethnic origin, citizenship, creed, sex, sexual orientation, gender identity, gender expression, age, record of offences, marital status, family status or disability.

The use of an AI program which discriminates, even unintentionally, on any of the above grounds may open the employer up to legal liability.

There are countless characteristics that AI may use to discriminate against a candidate based on a Code-protected ground. For instance, a candidate’s name and word choice on a resume may be indicative of the candidate’s race, gender, or age. AI may also infer a candidate’s gender from their extra-curricular activities: if golf is listed as a pastime, the candidate is assumed to be likely male.[11]

This kind of subtle discrimination has been found to be a breach of the Code on numerous occasions. For instance, in the case of Ramsay v. Brantford (City),[12] the City was found to have discriminated against the Applicant by tailoring the interview process to emphasise a male candidate’s skills. By doing so, the City had favoured a male candidate’s skills over the female Applicant’s qualifications, which were consistent with the qualifications asked for in the job posting.[13]

Strategies for Avoiding Bias in AI

To ensure that AI decision-making is not biased, several strategies may be employed. First, biases should be actively identified; once recognized, programmers may be able to correct them. Second, programmers should be as transparent as possible about their algorithms, so that any issues with the data or the programming are easier to spot. If this information is accessible to a sociologist or psychologist, they may be better able to notice biases in training sets and offer advice on correcting them.[14] It is also beneficial to educate recruiters and hiring managers to recognize biases and to intentionally adjust processes to mitigate them. Third, programmers can rely on a wide range of inputs so as to dilute potential biases.[15]
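The third strategy, relying on a wide range of inputs, can be checked mechanically before training ever begins. The sketch below is one illustrative approach, not a standard auditing tool; the names `representation_audit`, `field`, and `threshold` are assumptions for this example. It flags any demographic group whose share of the training data falls below a chosen minimum, so the data can be rebalanced first.

```python
from collections import Counter

def representation_audit(records, field, threshold=0.2):
    """Compute each group's share of the training data for a given
    demographic field, and flag groups below the minimum share.
    All names here are illustrative, not from any real library."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    underrepresented = [g for g, s in shares.items() if s < threshold]
    return shares, underrepresented

# Hypothetical training set echoing the truck-driver example: women
# make up only a quarter of the records.
training_set = [
    {"role": "driver", "gender": "male"},
    {"role": "driver", "gender": "male"},
    {"role": "driver", "gender": "male"},
    {"role": "driver", "gender": "female"},
]
shares, flagged = representation_audit(training_set, "gender", threshold=0.3)
# "female" sits at 0.25, below the 0.3 threshold, so it is flagged
# for rebalancing before the data is used to train a hiring model.
```

A check like this will not catch every bias (proxies such as word choice survive rebalancing), but it makes the most obvious gaps in the training sample visible before they harden into the model's predictions.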

Disclaimer: The information provided in this article is for general informational purposes only and is not intended to be legal advice. The content provided does not create a lawyer-client relationship, and nothing in this article should be considered a substitute for professional legal advice. The information is based on general principles of law and may not reflect the most current legal developments or interpretations in your jurisdiction. Laws and regulations vary by jurisdiction, and the application and impact of laws can vary widely based on the specific facts and circumstances involved. You should consult with a qualified legal professional for advice regarding your specific situation.

[1] Nish Parikh, Understanding Bias In AI-Enabled Hiring (October 14, 2021), online: Forbes <>.

[2] Ibid.

[3] Zac Amos, How to Reduce the Effects of AI Bias in Hiring (May 2, 2023), online: Lever <> [Amos].

[4] Ibid.

[5] Emily Douglas, AI anti-bias laws: Are algorithms ‘inherently’ discriminatory? (April 13, 2023), online: Human Resources Director Canada <>.

[6] Amos, supra note 3.

[7] Ibid.

[8] Ibid.

[9] Jeffrey Dastin, Amazon scraps secret AI recruiting tool that showed bias against women (October 10, 2018), online: Thomson Reuters <>.

[10] Amos, supra note 3.

[11] Jeffrey S. Tenenbaum, Five Key Legal Issues to Consider When It Comes to Generative AI (April 27, 2023), online: ASAE <>.

[12] Ramsay v. Brantford (City), 2022 HRTO 1438.

[13] Ibid.

[14] Amos, supra note 3.

[15] Dataforce, 3 Ways to Minimize AI Bias (October 11, 2022), online: Dataforce <>.