Preprint · Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Use of Large Language Model for Cyberbullying Detection

Version 1 : Received: 15 June 2023 / Approved: 15 June 2023 / Online: 15 June 2023 (05:35:25 CEST)

A peer-reviewed article of this Preprint also exists.

Ogunleye, B.; Dharmaraj, B. The Use of a Large Language Model for Cyberbullying Detection. Analytics 2023, 2, 694-707.

Abstract

The dominance of social media has added new channels through which perpetrators can bully their victims. Cyberbullying (CB) is now among the most prevalent phenomena in the cyber world and poses a severe threat to the mental and physical health of citizens. This creates a need for robust systems that detect and prevent bullying content on online forums, blogs, and social media platforms in order to manage its impact on society. Several machine learning (ML) algorithms have been proposed for this purpose, but their performance is inconsistent due to severe class imbalance and poor generalisation. In recent years, large language models (LLMs) such as BERT and RoBERTa have achieved state-of-the-art (SOTA) results on several natural language processing (NLP) tasks, yet they have not been applied extensively to cyberbullying detection. In this paper, we explore the use of these models for CB detection. We prepared a new dataset (D2) from existing studies (Formspring and Twitter). Our experimental results on datasets D1 and D2 show that RoBERTa outperformed the other models.
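To illustrate the kind of pipeline the abstract describes, below is a minimal sketch of fine-tuning RoBERTa as a binary cyberbullying classifier with the Hugging Face Transformers library. This is not the authors' implementation: the preprocessing rules, label scheme (0 = benign, 1 = bullying), and hyperparameters are illustrative assumptions.

```python
# Hedged sketch: fine-tuning roberta-base for binary cyberbullying
# detection. Preprocessing, labels, and hyperparameters are assumptions,
# not the paper's actual configuration.
import re


def clean_post(text: str) -> str:
    """Minimal social-media preprocessing: strip URLs, @mentions,
    and excess whitespace before tokenisation."""
    text = re.sub(r"https?://\S+", "", text)  # remove URLs
    text = re.sub(r"@\w+", "", text)          # remove @mentions
    return re.sub(r"\s+", " ", text).strip()  # collapse whitespace


def train_roberta(train_texts, train_labels, output_dir="cb-roberta"):
    """Fine-tune roberta-base as a 2-class sequence classifier
    (0 = benign, 1 = bullying). Requires transformers and torch."""
    import torch
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    tok = AutoTokenizer.from_pretrained("roberta-base")
    model = AutoModelForSequenceClassification.from_pretrained(
        "roberta-base", num_labels=2)

    # Tokenise the cleaned posts into fixed-length tensors.
    enc = tok([clean_post(t) for t in train_texts],
              truncation=True, padding=True, return_tensors="pt")

    class CBDataset(torch.utils.data.Dataset):
        def __len__(self):
            return len(train_labels)

        def __getitem__(self, i):
            item = {k: v[i] for k, v in enc.items()}
            item["labels"] = torch.tensor(train_labels[i])
            return item

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir=output_dir,
                               num_train_epochs=3,
                               per_device_train_batch_size=16),
        train_dataset=CBDataset(),
    )
    trainer.train()
    return trainer
```

In practice, a class-imbalance remedy such as class-weighted loss or oversampling of the bullying class would be layered on top of this sketch, since the abstract identifies imbalance as a key obstacle.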

Keywords

BERT; Cyberbullying; RoBERTa; Language Model; Machine learning; Online abuse; Natural language processing; NLP

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning


