By: Catherine Blair
Artificial Intelligence Can End Cyberbullying
In our digital age, cyberbullying has been facilitated by rapidly growing technology. Not only can cyberbullying be hurtful, but it can also be illegal. In Canada, for example, criminal harassment and defamatory libel are offences under the Criminal Code [1]. Statements posted on social media that cause someone to fear for their safety, and comments that spread harmful misinformation about another person, can carry serious legal consequences. As social media becomes a mainstream method of communication for youth, the digital era has created a vehicle for such hateful statements to be perpetuated and preserved on the Internet.
Automated Detection of Cyberbullying on Social Media
Automated text analysis can be used to identify and remove social media posts that contain cyberbullying. A rule-based approach identifies posts containing highly aggressive speech directed at another individual: text that combines pronouns, profanity, and a large number of capital letters can be flagged for removal by moderators [2].
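To make that rule concrete, here is a minimal sketch in Python. The keyword lists and the 50 per cent capitalization threshold are my own illustrative assumptions; the cited study does not prescribe a particular implementation.

```python
import re

# Illustrative keyword lists; a real system would use a much larger,
# curated lexicon (an assumption for this sketch).
PROFANITY = {"idiot", "loser", "stupid"}
SECOND_PERSON_PRONOUNS = {"you", "your", "you're", "u"}

def rule_based_flag(post: str, caps_threshold: float = 0.5) -> bool:
    """Flag a post that combines second-person pronouns, profanity,
    and heavy capitalization -- the signals described above."""
    words = re.findall(r"[A-Za-z']+", post)
    if not words:
        return False

    has_pronoun = any(w.lower() in SECOND_PERSON_PRONOUNS for w in words)
    has_profanity = any(w.lower() in PROFANITY for w in words)

    letters = [c for c in post if c.isalpha()]
    caps_ratio = sum(c.isupper() for c in letters) / len(letters) if letters else 0.0

    # Require all three signals so ordinary shouting or isolated profanity
    # is not flagged on its own.
    return has_pronoun and has_profanity and caps_ratio >= caps_threshold

if __name__ == "__main__":
    print(rule_based_flag("YOU ARE SUCH A LOSER"))    # flagged for a moderator
    print(rule_based_flag("Great game last night!"))  # not flagged
```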
Another approach uses machine learning to automatically identify and remove posts containing cyberbullying. Machine learning is a type of Artificial Intelligence that learns to recognize patterns in data [3]. Humans first manually label a set of statements, marking each one as either a cyberbullying offence or not. This labelled data is fed to the software, which begins to recognize the patterns that characterize cyberbullying statements. Eventually, the software can determine on its own whether new statements constitute cyberbullying. Because the software learns from examples, its accuracy improves as more labelled data becomes available.
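That workflow can be sketched with a small Python example using the scikit-learn library (my own choice of tooling; the article does not name one). The handful of hand-labelled statements is purely illustrative, whereas real moderation models are trained on many thousands of labelled posts.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: humans label example statements (1 = cyberbullying, 0 = not).
posts = [
    "nobody likes you, just leave",
    "you are worthless and everyone knows it",
    "go away, you pathetic loser",
    "had a great time at the lake today",
    "congrats on the new job!",
    "see everyone at practice tomorrow",
]
labels = [1, 1, 1, 0, 0, 0]

# Step 2: the model learns patterns that distinguish the two classes.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Step 3: the trained model classifies statements it has never seen.
new_posts = ["you are such a loser", "great to see you yesterday"]
for post, pred in zip(new_posts, model.predict(new_posts)):
    print(post, "->", "flag for review" if pred == 1 else "ok")
```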
Implementation on Social Media
Automated detection of cyberbullying has already been implemented on several social media platforms. META, the parent company of Facebook and Instagram, uses a machine learning technology known as DeepText to combat cyberbullying on its platforms. DeepText is “a learning-based text understanding engine that can comprehend, with near-human accuracy, the textual content of several thousand posts per second.” [4] DeepText can classify statements in more than 20 languages, making it usable across the platforms’ global user base. META was among the first companies to use machine learning to remove abusive language from its platforms at scale.
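DeepText itself is proprietary, so the following Python sketch only illustrates, in general terms, how a platform might wire any trained classifier into its posting pipeline: posts scoring above a threshold are held for human review, while the rest are published. The threshold, the data structures, and the stand-in scoring function are all assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str

def publish(post: Post) -> None:
    print(f"published: {post.text!r}")

def moderate(posts: List[Post],
             score_fn: Callable[[str], float],
             threshold: float = 0.8) -> List[Post]:
    """Hold posts whose abuse score exceeds the threshold for human review;
    publish everything else immediately."""
    review_queue = []
    for post in posts:
        if score_fn(post.text) >= threshold:
            review_queue.append(post)   # held back for moderators
        else:
            publish(post)               # normal path
    return review_queue

# Stand-in scorer for this sketch; a platform would call its trained model here.
def toy_score(text: str) -> float:
    return 0.9 if "loser" in text.lower() else 0.1

if __name__ == "__main__":
    queue = moderate(
        [Post("a", "YOU ARE SUCH A LOSER"), Post("b", "nice goal tonight!")],
        score_fn=toy_score,
    )
    print("awaiting review:", [p.text for p in queue])
```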
Takeaway
Automated detection of cyberbullying at the first instance is a promising approach to addressing, and ultimately ending, cyberbullying on social media. Although technology created this problem, advancements in technology such as Artificial Intelligence may also be its solution.
Disclaimer: The information provided in this article is for general informational purposes only and is not intended to be legal advice. The content provided does not create a solicitor-client relationship, and nothing in this article should be considered a substitute for professional legal advice. The information is based on general principles of law and may not reflect the most current legal developments or interpretations in your jurisdiction. Laws and regulations vary by jurisdiction, and the application and impact of laws can vary widely based on the specific facts and circumstances involved. You should consult a qualified legal professional for advice regarding your specific situation.
Sources Cited
[1] Criminal Code, RSC 1985, c C-46.
[2], [3] “Social Interactions and Automated Detection Tools in Cyberbullying”: https://www.researchgate.net/publication/314048394_Social_Interactions_and_Automated_Detection_Tools_in_Cyberbullying
[4] “Introducing DeepText: Facebook’s text understanding engine”: https://code.facebook.com/posts/181565595577955/introducing-deeptext-facebook-s-text-understanding-engine/
Image reference: https://www.webpurify.com/blog/40-statistics-about-cyberbullying-in-2021/