By: Nicole Kinley
Concerns about the impact of misinformation and disinformation on elections have escalated globally. Although the two are virtually indistinguishable to the public, misinformation refers to false information shared without the intention of misleading, whereas disinformation involves false information shared with the deliberate intent to mislead. Such concerns have become particularly pronounced in the US, where the presidential election is less than 500 days away and features candidates like Donald Trump, the former US president whose propagation of election denialism incited an angry mob to attack the Capitol Building in Washington, D.C. Clearly, these concerns are not unfounded. They align with research demonstrating that the distribution of mis(dis)information tends to peak around election cycles, with the intent of undermining democratic principles. Additionally, it has been established that false news spreads more rapidly on social media than the truth.
These issues are not limited to the US. Nigeria, for instance, faced disinformation challenges during its 2023 presidential election. Several social media influencers came forward to reveal that they had been paid by political parties to spread disinformation, including the use of AI-generated audio to misrepresent one candidate’s political views. In Kenya, videos circulating on TikTok suggested that one of the presidential candidates intended to “take revenge” on specific ethnic groups.
Social media platforms have terms of service and guidelines that users must adhere to while using their services. As citizens increasingly rely on social media for political information to inform their voting decisions, the significance of these platforms’ policies against spreading false information has grown considerably. While Facebook and Twitter have reportedly tightened their policies, TikTok continues to face criticism regarding its apparent failure to filter large amounts of election mis(dis)information. An investigation by Global Witness and the Cybersecurity for Democracy (C4D) team at NYU Tandon revealed that TikTok approved 90% of the ads that were shown to contain outright false and misleading election mis(dis)information. The platform’s poor track record in combatting this type of information was further demonstrated during elections in France and Australia. US lawmakers are also calling for more information about TikTok’s operations due to concerns about mis(dis)information spreading through foreign interference; they believe the company’s ties to China could make it a national security threat. Some critics would consider this unsurprising given the platform’s relatively recent emergence and popularity compared to its rival social networking platforms, which have had more time to address how to protect their users from harmful content.
YouTube’s recent decision to reverse its policy on election mis(dis)information, which allows content suggesting fraud in US elections to remain on the platform, has been widely perceived as a step in the wrong direction. The platform concedes that while “removing this content does curb some mis(dis)information, it could also have the unintended effect of curtailing political speech without meaningfully reducing the risk of violence or other real-world harm”. This statement highlights the reluctance of leading social media platforms to censor posts, in order to avoid interfering with the right to freedom of expression and to uphold political debate in a functioning democratic society.
This reluctance is echoed by the Canadian government’s hesitation to impose more stringent controls on the spread of mis(dis)information, as it believes that “media freedom remains an important part of democratic societies and essential to the protection of human rights and fundamental freedoms”. While certain forms of mis(dis)information, such as false advertising, libel, and hate speech, are addressed by existing Canadian laws, the rules become less clear when it comes to the spread of other forms. In response to this challenge, there is a growing call for social media platforms to engage in self-regulation and limit the reach of mis(dis)information. This call advocates for the implementation of “new institutional mechanisms for more participative forms of regulation”, in tandem with legislative action to increase oversight over these platforms.
Self-regulation refers to the steps social media platforms take to pre-empt or supplement governmental rules and guidelines. This can include fact-checking and content review, algorithm adjustments, and using AI and machine learning to detect and remove harmful content. However, these approaches will require significant advancements. For example, on TikTok, identifying problematic language becomes challenging when the audio has music behind it, which is a characteristic component of much of the platform’s content.
By collectively addressing these challenges, social media platforms can more effectively work towards preserving the integrity of elections. Self-regulation is more likely to occur when all companies participate, as they may hesitate to implement changes individually due to concerns about incurring added costs that their competitors may not face. This work will be crucial in safeguarding the foundation of our democratic institutions for generations to come.
 Online disinformation (March 2023), online: Government of Canada <https://www.canada.ca/en/campaign/online-disinformation.html>.
 Soroush Vosoughi et al., “The spread of true and false news online” (2018) 359 Science 1146-1151. DOI:10.1126/science.aap9559.
 Chiagozie Nwonwu, Fauziyya Tukur & Yemisi Oyedepo, Nigeria elections 2023: How influencers are secretly paid by political parties (January 2023), online: BBC News <https://www.bbc.com/news/world-africa-63719505>.
 Vittoria Elliott, Disinfo and Hate Speech Flood TikTok Ahead of Kenya’s Elections (June 2022), online: Wired <https://www.wired.com/story/kenya-tiktok-election-disinformation-hate-speech/>.
 TikTok and Facebook fail to detect election disinformation in the US, while YouTube succeeds (October 2022), online: Global Witness <https://www.globalwitness.org/en/campaigns/digital-threats/tiktok-and-facebook-fail-detect-election-disinformation-us-while-youtube-succeeds/#:~:text=TikTok%20fared%20the%20worst%3B%20the,removing%20the%20problematic%20election%20ads.>.
 Tiffany Hsu, On TikTok, Election Misinformation Thrives Ahead of Midterms (August 2022), online: The New York Times <https://www.nytimes.com/2022/08/14/business/media/on-tiktok-election-misinformation.html>.
 The YouTube Team, An update on our approach to US election misinformation (June 2023), online: YouTube Official Blog <https://blog.youtube/inside-youtube/us-election-misinformation-update-2023/>.
 Freedom of expression and media freedom (May 2023), online: Government of Canada <https://www.international.gc.ca/world-monde/issues_development-enjeux_developpement/human_rights-droits_homme/freedom_expression_media-liberte_expression_medias.aspx?lang=eng>.
 Michael A. Cusumano, Annabelle Gawer & David B. Yoffie, “Social media companies should self-regulate. Now.” (2021) 15 Harvard Business Review.
 Supra note 4.
 Supra note 12.