Version 1
: Received: 15 March 2022 / Approved: 17 March 2022 / Online: 17 March 2022 (15:26:41 CET)
How to cite:
Alkiviadou, N. Artificial Intelligence and Online Hate Speech Moderation: A Risky Match?. Preprints 2022, 2022030258. https://doi.org/10.20944/preprints202203.0258.v1.
Abstract
Artificial Intelligence is increasingly being used by social media platforms to tackle online hate speech. The sheer quantity of content, the speed at which it is produced, and the mounting pressure that States place on companies to remove hate speech quickly from their platforms have created a tricky situation. This commentary argues that automated mechanisms, which may rely on biased datasets and be unable to pick up on the nuances of language, should not be left unattended to moderate hate speech, as this can lead to violations of freedom of expression and the right to non-discrimination.
Keywords
hate speech; artificial intelligence; social media platforms; content moderation; freedom of expression; non-discrimination
Subject
SOCIAL SCIENCES, Law
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.