By: Daniel Moholia

As artificial intelligence (“AI”) begins to revolutionize various industries, public authorities and private entities will increasingly be challenged to tackle the ethical concerns around its use. This blog post explores concerns about direct and indirect discrimination by AI systems and how governments and private industry have attempted to address them.

The main points of this blog post are: (a) governments are increasingly relying on AI to make decisions in governing; (b) ethical concerns with AI require double optimization, focusing on accuracy as well as respect for human rights and freedoms; and (c) private industry and governments, including the Canadian government, have begun grappling with how to test the fairness of AI systems.

The Give and Take of Using AI

The use of AI is promising yet may also have dire implications if it is not designed and implemented carefully. This is especially the case when public authorities rely on AI to determine resource distribution and to assess risk within the criminal justice system. A 2020 report to the Administrative Conference of the United States examined how 142 US federal agencies engage with AI. The report found that 45% of these agencies had actively explored or implemented AI in fulfilling their administrative mandates.[i]

The Goal of Double Optimization

When AI is used to assist in decision-making, it may have negative effects on the right to privacy and the freedom from discrimination of those affected by the decision. AI designers now focus on double optimization: building AI that is as accurate as possible while having minimal impact on rights and freedoms.[ii] One of the biggest ethical concerns with the use of AI is that automation will ingrain systemic discrimination in decision-making.
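
To make this concrete, here is a minimal Python sketch of what double optimization might look like: a model is scored not on accuracy alone but on accuracy minus a weighted penalty for unequal outcomes between groups. The data, the demographic-parity penalty, and the fairness_weight parameter are all invented for illustration; they are not drawn from the sources cited here.

```python
# Hypothetical sketch of "double optimization": score a model on accuracy
# minus a weighted penalty for unequal positive-prediction rates between
# two groups. All names and data are invented for illustration.

def accuracy(predictions, labels):
    """Fraction of correct binary predictions."""
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    return correct / len(labels)

def parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between groups A and B."""
    def rate(g):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(members) / len(members)
    return abs(rate("A") - rate("B"))

def double_objective(predictions, labels, groups, fairness_weight=1.0):
    """Higher is better: accuracy minus a weighted fairness penalty."""
    return accuracy(predictions, labels) - fairness_weight * parity_gap(predictions, groups)

labels = [1, 0, 1, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B"]
model_1 = [1, 0, 1, 0, 1, 0]  # perfectly accurate, but selects group A twice as often
model_2 = [1, 0, 1, 0, 1, 1]  # one error, yet equal positive rates across groups

for name, preds in [("model_1", model_1), ("model_2", model_2)]:
    print(name, round(double_objective(preds, labels, groups), 3))
# model_1 scores 0.667 (accuracy 1.0, gap 1/3); model_2 scores 0.833
# (accuracy 0.833, gap 0): the combined objective prefers the fairer model.
```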

Making AI Fair

Making AI fair in its application within public systems is a tricky endeavour. The first obstacle is clarifying what we mean by ‘fairness’. Arvind Narayanan, Associate Professor of Computer Science at Princeton University, has pointed out that there are as many as 21 different ways to define ‘fairness’ when evaluating the impacts of code.[iii] The Ethics Guidelines for Trustworthy AI, published by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG), defines ‘fairness’ in terms of avoiding direct and indirect discrimination as defined by law. This definition is especially mindful of discrimination against vulnerable groups in society.[iv]
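
Narayanan’s point is easy to demonstrate. The sketch below (with invented data) compares two widely discussed formal criteria, demographic parity (equal positive-prediction rates across groups) and equal opportunity (equal true-positive rates): the very same classifier can satisfy one and fail the other, so the choice of definition matters.

```python
# Two common formal fairness criteria applied to the same predictions.
# Data is invented; this only illustrates that the criteria can disagree.

def positive_rate(preds, groups, g):
    """Share of group g receiving a positive prediction (demographic parity)."""
    sel = [p for p, grp in zip(preds, groups) if grp == g]
    return sum(sel) / len(sel)

def true_positive_rate(preds, labels, groups, g):
    """Share of truly positive members of group g predicted positive (equal opportunity)."""
    sel = [p for p, y, grp in zip(preds, labels, groups) if grp == g and y == 1]
    return sum(sel) / len(sel)

# Base rates differ: half of group A but only a quarter of group B are truly positive.
labels = [1, 1, 0, 0, 1, 0, 0, 0]
groups = ["A"] * 4 + ["B"] * 4
preds = list(labels)  # a perfectly accurate classifier

for g in ("A", "B"):
    print(g, "positive rate:", positive_rate(preds, groups, g),
          "true positive rate:", true_positive_rate(preds, labels, groups, g))
# The perfect classifier satisfies equal opportunity (both TPRs are 1.0) yet
# fails demographic parity (0.5 vs 0.25): which system counts as 'fair'
# depends entirely on the definition chosen.
```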

Once clarity is achieved on the concept of ‘fairness’, criteria must be applied to evaluate fairness concerns at every stage of AI design and implementation. Bias can enter the system at various stages of design, including the definition of target variables and data sets, the selection of features, and the adoption of proxy variables. Bias may also enter at the implementation stage, through the human actors who use the AI.[v] Designers of AI can evaluate their algorithms and data sets with AI system audits that verify that specific rules are followed. Yet such audits are unlikely to be sufficient to weed out all discriminatory impacts, owing to the complex interaction of variables and the prospect of indirect discriminatory effects.[vi]
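
As an illustration of what such an audit might check, the following sketch tests one well-known rule, the ‘four-fifths’ disparate-impact ratio used in US employment-selection guidance, against invented decision data. Passing a single check of this kind says nothing about indirect discrimination through proxy variables, which is exactly why audits alone are insufficient.

```python
# Minimal sketch of one rule an AI system audit might verify: the
# "four-fifths" disparate-impact ratio. The threshold reflects US
# employment-selection guidance; the data is invented for illustration.

def selection_rate(decisions, groups, g):
    """Share of applicants in group g who receive a positive decision."""
    sel = [d for d, grp in zip(decisions, groups) if grp == g]
    return sum(sel) / len(sel)

def four_fifths_check(decisions, groups, protected="B", reference="A", threshold=0.8):
    """Flag disparate impact if the protected group's selection rate falls
    below `threshold` times the reference group's rate."""
    ratio = (selection_rate(decisions, groups, protected)
             / selection_rate(decisions, groups, reference))
    return ratio, ratio >= threshold

decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A"] * 4 + ["B"] * 4
ratio, passes = four_fifths_check(decisions, groups)
print(f"impact ratio = {ratio:.2f}, passes four-fifths rule: {passes}")
# impact ratio = 0.33 here (0.25 vs 0.75), so the audit would flag this system.
```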

While voluntary guidelines adopted by commercial actors are helpful, AI fairness is ultimately a legal issue that requires a measured legal response. Governments must define criteria for evaluating the fairness of AI systems and evaluate those systems before deploying them in decision-making. A popular approach is the algorithmic impact assessment, whereby governments draft, publish, and implement assessment frameworks.[vii] One example of such an assessment is included in the Canadian government’s Directive on Automated Decision-Making, issued in February 2019. The Directive outlines the government’s framework for conducting algorithmic impact assessments and its commitment to public transparency in the use of AI in decision-making.[viii]
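
Mechanically, assessments of this kind often score questionnaire answers and map the total to an impact level that triggers proportionate oversight requirements. The sketch below is purely hypothetical: the questions, weights, and thresholds are invented and do not reproduce the Directive’s actual Algorithmic Impact Assessment tool, which defines its own questionnaire and impact-level scale.

```python
# Purely hypothetical sketch of how an algorithmic impact assessment can
# work mechanically: answers are weighted, summed, and bucketed into an
# impact level. Questions, weights, and thresholds are all invented.

QUESTIONS = {
    "affects_liberty_or_legal_status": 4,
    "decision_is_fully_automated": 3,
    "uses_personal_data": 2,
    "outcome_is_reversible": -2,  # mitigations can lower the score
}

def impact_level(answers):
    """Sum the weights of 'yes' answers and bucket the score into levels I-IV."""
    score = sum(w for q, w in QUESTIONS.items() if answers.get(q))
    for threshold, level in [(7, "IV"), (5, "III"), (3, "II")]:
        if score >= threshold:
            return level
    return "I"

answers = {
    "affects_liberty_or_legal_status": True,
    "decision_is_fully_automated": True,
    "uses_personal_data": True,
    "outcome_is_reversible": False,
}
print("Impact level:", impact_level(answers))  # Impact level: IV (score 9)
```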

Conclusion

As AI becomes more important in private industry and public administration, industry leaders and governments will need to respond to ethical concerns about its use. The fairness of the design and implementation of AI systems is a growing area of interest. Many governments and intergovernmental organizations are already responding to fairness concerns with voluntary guidelines and mandatory directives.

For more information on the ethical concerns surrounding artificial intelligence, please see: “The Power of AI to Increase Access to Justice” (18 June 2020), online (blog): MyOpenCourt.

Disclaimer: The information provided in this post is for general informational purposes only and is not intended to be legal advice. The content provided does not create a lawyer-client relationship, and nothing in this post should be considered a substitute for professional legal advice. The information is based on general principles of law and may not reflect the most current legal developments or interpretations in your jurisdiction. Laws and regulations vary by jurisdiction, and the application and impact of laws can vary widely based on the specific facts and circumstances involved. You should consult with a qualified legal professional for advice regarding your specific situation.


[i] David Freeman Engstrom et al, “Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies” (Report submitted to the Administrative Conference of the United States, February 2020) at 5, 15-16, online (pdf): Stanford Law School <https://law.stanford.edu/> [https://perma.cc/AUJ8-WSYP].

[ii] Osonde Osoba and William Welser IV, “An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence” (2017) at 18, online (pdf): RAND Corporation <https://www.rand.org/> [https://perma.cc/GY5H-PQF6].

[iii] Arvind Narayanan, “Tutorial: 21 definitions of fairness and their politics” (23 February 2018), online: Princeton University <https://www.cs.princeton.edu/~arvindn/talks/> [https://perma.cc/HS6W-GUK4].

[iv] “Ethics Guidelines for Trustworthy AI”, European Commission (8 April 2019), online: <https://ec.europa.eu/> [https://perma.cc/4QPX-AL7L].

[v] Frederik Zuiderveen Borgesius, “Discrimination, artificial intelligence, and algorithmic decision-making” (Report submitted to the Anti-Discrimination Department of the Council of Europe, 2018) at 13, online (pdf): Directorate General on Human Rights and Rule of Law of the Council of Europe <https://www.coe.int/en/web/human-rights-rule-of-law/home> [https://perma.cc/Y9XU-DB3U].

[vi] Pauline T Kim, “Auditing Algorithms for Discrimination” (2017) 166 U Pa L Rev Online at 190-91, 195-96 [https://perma.cc/6W73-TSRQ].

[vii] Dillon Reisman et al, “Algorithmic Impact Assessments: A Practical Framework for Public Policy Accountability” (2018) at 4-5, online (pdf): AI Now Institute <https://ainowinstitute.org/> [https://perma.cc/3ZTW-N6ZR].

[viii] Canada, Treasury Board, Directive on Automated Decision-Making (Ottawa: Treasury Board, 2019).