By: Holly Grosdanis

With social media becoming an ever more prevalent part of our daily lives, the need to moderate discussion and content on social media platforms has also grown. Accordingly, social media companies have turned to content moderators, who screen online content to create safer online spaces for people to interact. Facebook, Twitter, YouTube, and TikTok each employ thousands of these content moderators to review content [1]. However, the practical challenge of screening an ever-growing volume of online content, together with the moral dilemma of deciding what to remove and what to publish, has made content moderation a contentious issue in recent years.

What is Content Moderation?

Content moderation is the process by which an online platform, often a social media company, screens and monitors user-generated content to ensure that it is appropriate. User-generated content (UGC) refers to any type of online subject matter, including text, pictures, videos, and memes, that is created by users rather than by a company, platform, or brand. Content is screened against platform-specific guidelines, or “community guidelines,” to determine whether it should be removed, flagged, or published to the online platform. In other words, when a user submits content to a website, that piece of content goes through a moderation process to ensure that it complies with the regulations of the website and is not illegal, inappropriate, or harassing.
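To make that remove/flag/publish decision concrete, here is a minimal, purely illustrative sketch in Python. The guideline checks (contains_banned_terms, looks_like_spam) and their scoring are hypothetical placeholders invented for this example, not any platform’s actual rules.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    """A piece of user-generated content submitted to the platform."""
    author: str
    text: str

# A guideline check inspects a post and returns a severity score:
#   0 = no issue, 1 = questionable (flag for review), 2 = clear violation (remove).
GuidelineCheck = Callable[[Post], int]

def contains_banned_terms(post: Post) -> int:
    """Hypothetical check against a small list of prohibited terms."""
    banned = {"example-slur", "example-threat"}  # placeholder terms
    return 2 if any(term in post.text.lower() for term in banned) else 0

def looks_like_spam(post: Post) -> int:
    """Hypothetical heuristic: heavy repetition is only flagged, not removed."""
    words = post.text.lower().split()
    if words and len(set(words)) < len(words) / 3:
        return 1
    return 0

def moderate(post: Post, checks: List[GuidelineCheck]) -> str:
    """Apply every guideline check and map the worst result to a decision."""
    worst = max((check(post) for check in checks), default=0)
    if worst >= 2:
        return "remove"
    if worst == 1:
        return "flag"  # held for human review
    return "publish"

if __name__ == "__main__":
    post = Post(author="user123", text="buy now buy now buy now buy now")
    print(moderate(post, [contains_banned_terms, looks_like_spam]))  # -> "flag"
```

Real moderation pipelines apply far more elaborate rules, but the basic structure is the same: every submission is evaluated against the community guidelines before it reaches (or stays on) the platform.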

Who are Content Moderators?

Content moderation can be done manually by humans or through automated moderation systems that use artificial intelligence (AI). Facebook spends billions of dollars reviewing millions of pieces of content every day and employs roughly 15,000 content moderators [2]. Although content moderators typically work full-time, Facebook relies heavily on independent contractors rather than direct employees to do the job. Many content moderators have reported struggling with mental health issues due to the emotionally taxing nature of the role [3]. Content moderators are often required to look at some of the most brutal and horrific content posted on the internet. In response, some companies are relying more on algorithms to review content. However, computers are not able to detect everything that falls outside the bounds of community guidelines, particularly the nuances of hate speech and misinformation.
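The sketch below illustrates one common way this hybrid human–machine arrangement is described: an automated classifier handles only the clear-cut cases, and anything ambiguous is escalated to a human moderator. The score_toxicity function here is a trivial stand-in (it just counts exclamation marks), not a real model or any company’s actual system, and the thresholds are arbitrary assumptions.

```python
from typing import Literal

Decision = Literal["auto_remove", "auto_publish", "human_review"]

def score_toxicity(text: str) -> float:
    """Stand-in for a machine-learning classifier that returns a probability
    (0.0 = clearly benign, 1.0 = clearly violating). A real system would call
    a trained model here; this placeholder only counts exclamation marks."""
    return min(1.0, text.count("!") / 10)

def route(text: str,
          remove_threshold: float = 0.95,
          publish_threshold: float = 0.05) -> Decision:
    """Route content based on classifier confidence. Only the clear-cut cases
    are handled automatically; everything ambiguous (nuanced hate speech,
    sarcasm, misinformation) is escalated to a human moderator."""
    score = score_toxicity(text)
    if score >= remove_threshold:
        return "auto_remove"
    if score <= publish_threshold:
        return "auto_publish"
    return "human_review"

if __name__ == "__main__":
    print(route("Have a nice day"))         # -> "auto_publish"
    print(route("This is outrageous!!!!"))  # -> "human_review"
```

The design choice this reflects is that automation reduces the volume of material humans must see, but the hardest judgment calls still land in the human review queue.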

How Do Social Media Companies Decide What Should Be Removed?

Recently, several social media companies have been accused of limiting free speech on their platforms. In Canada, freedom of expression is protected under the Canadian Charter of Rights and Freedoms. Yet this freedom is not absolute: it does not extend to expression that promotes hatred [4], violence as a form of expression [5], speech that constitutes defamation [6], or threats [7]. However, because social media platforms are owned by corporations and users voluntarily sign up and agree to their terms and conditions, social media companies are able to place additional constraints on speech and content.

This dilemma has prompted many to call on governments to take a more active role in social media regulation. Advocates for social media regulation argue that social media platforms have become such an integral part of our media and communication landscape that they require more regulation and corporate accountability than typical companies.

While much about the role of content moderation moving forward is still unclear, it’s evident that content moderation will only grow more important as our world becomes increasingly digital.

Disclaimer: The information provided in this article is for general informational purposes only and is not intended to be legal advice. The content provided does not create a solicitor-client relationship, and nothing in this article should be considered a substitute for professional legal advice. The information is based on general principles of law and may not reflect the most current legal developments or interpretations in your jurisdiction. Laws and regulations vary by jurisdiction, and the application and impact of laws can vary widely based on the specific facts and circumstances involved. You should consult with a qualified legal professional for advice regarding your specific situation.

[1] Katie Schoolov, “Why content moderation costs billions and is so tricky for Facebook, Twitter, YouTube and others,” CNBC, February 27, 2021.

[2] Casey Newton, “The Trauma Floor,” The Verge, February 25, 2019.

[3] Elizabeth Dwoskin, Jeanne Whalen and Regine Cabato, “Content moderators at YouTube, Facebook and Twitter see the worst of the web — and suffer silently,” The Washington Post, July 25, 2019.

[4] R v Keegstra, 1990 CarswellAlta 192 (SCC).

[5] Irwin Toy Ltd v Quebec (Attorney General), 1989 CarswellQue 115 (SCC) at para 43.

[6] King v Power, 2015 NLTD(G) 32 at para 28.

[7] R v Clement, [1994] 2 SCR 758.