Preprint Article Version 1 Preserved in Portico This version is not peer-reviewed

Fighting Deepfakes Using Body Language Analysis

Version 1 : Received: 15 March 2021 / Approved: 16 March 2021 / Online: 16 March 2021 (11:02:45 CET)
Version 2 : Received: 28 April 2021 / Approved: 28 April 2021 / Online: 28 April 2021 (12:02:00 CEST)

A peer-reviewed article of this Preprint also exists.

Yasrab, R.; Jiang, W.; Riaz, A. Fighting Deepfakes Using Body Language Analysis. Forecasting 2021, 3, 303-321.

Abstract

Recent improvements in deepfake creation have made deepfake videos more realistic. Open-source software has also made deepfake creation more accessible, lowering the barrier to entry. This poses a threat to public privacy: deepfake videos of world leaders, produced by actors with ulterior motives, could disrupt the order of individual countries and the world. Research into automated detection of deepfaked media is therefore essential for public safety. In this work, we propose the use of upper-body language analysis for deepfake detection. Specifically, a many-to-one LSTM network was designed and trained as a classification model for deepfake detection. Different models were trained with various hyper-parameters to build a final model with benchmark accuracy. We achieve 94.39% accuracy on a deepfake test set. The experimental results show that upper-body language can effectively provide identification and deepfake detection.
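The many-to-one classification described above can be sketched as follows: a sequence of per-frame pose features is fed through an LSTM, and only the final hidden state is mapped to a single real-vs-deepfake probability. This is a minimal illustrative sketch in NumPy, not the authors' implementation; the input size, hidden size, and feature encoding are all assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ManyToOneLSTM:
    """Minimal many-to-one LSTM binary classifier (illustrative sketch).

    Consumes a sequence of pose feature vectors (e.g. upper-body keypoint
    coordinates per frame) and emits a single deepfake probability from
    the final hidden state. Sizes and names are assumptions, not the
    paper's architecture.
    """

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        self.hidden_size = hidden_size
        # Stacked weights for the four gates: input, forget, cell, output.
        self.W = rng.standard_normal((4 * hidden_size, input_size)) * 0.1
        self.U = rng.standard_normal((4 * hidden_size, hidden_size)) * 0.1
        self.b = np.zeros(4 * hidden_size)
        # Read-out layer: last hidden state -> one logit.
        self.w_out = rng.standard_normal(hidden_size) * 0.1
        self.b_out = 0.0

    def forward(self, x):
        """x: (seq_len, input_size) array of per-frame pose features."""
        H = self.hidden_size
        h = np.zeros(H)
        c = np.zeros(H)
        for x_t in x:  # many-to-one: only the final h feeds the read-out
            z = self.W @ x_t + self.U @ h + self.b
            i = sigmoid(z[0:H])        # input gate
            f = sigmoid(z[H:2 * H])    # forget gate
            g = np.tanh(z[2 * H:3 * H])  # candidate cell state
            o = sigmoid(z[3 * H:4 * H])  # output gate
            c = f * c + i * g
            h = o * np.tanh(c)
        return sigmoid(self.w_out @ h + self.b_out)  # P(deepfake)

# Example: 30 frames, 16 hypothetical pose features per frame.
model = ManyToOneLSTM(input_size=16, hidden_size=32)
frames = np.random.default_rng(1).standard_normal((30, 16))
p = model.forward(frames)
```

In practice the network would be trained with a binary cross-entropy loss over labeled real/deepfake sequences; the sketch only shows the forward pass that makes the classifier "many-to-one".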

Keywords

Imaging; Machine learning; Deepfakes; Human pose estimation; Upper body language; World leaders; Deep learning; Computer vision; Recurrent Neural Networks (RNNs); Long Short-Term Memory (LSTM); Forecasting

Subject

Computer Science and Mathematics, Algebra and Number Theory
