Preprint Article Version 1 Preserved in Portico This version is not peer-reviewed

Artificial Intelligence and Online Hate Speech Moderation: A Risky Match?

Version 1 : Received: 15 March 2022 / Approved: 17 March 2022 / Online: 17 March 2022 (15:26:41 CET)

How to cite: Alkiviadou, N. Artificial Intelligence and Online Hate Speech Moderation: A Risky Match?. Preprints 2022, 2022030258 (doi: 10.20944/preprints202203.0258.v1).

Abstract

Artificial Intelligence is increasingly being used by social media platforms to tackle online hate speech. The sheer quantity of content, the speed at which it is produced, and the growing pressure States place on companies to remove hate speech quickly from their platforms have led to a tricky situation. This commentary argues that automated mechanisms, which may rely on biased datasets and be unable to pick up on the nuances of language, should not be left to moderate hate speech unsupervised, as this can lead to violations of freedom of expression and the right to non-discrimination.

Keywords

hate speech; artificial intelligence; social media platforms; content moderation; freedom of expression; non-discrimination

Subject

Social Sciences, Law
