Preprint Article Version 1 Preserved in Portico This version is not peer-reviewed

Using a Combination of Electroencephalographic and Acoustic Features to Accurately Predict Emotional Responses to Music

Version 1 : Received: 30 June 2021 / Approved: 1 July 2021 / Online: 1 July 2021 (11:12:19 CEST)

How to cite: Krish, D. Using a Combination of Electroencephalographic and Acoustic Features to Accurately Predict Emotional Responses to Music. Preprints 2021, 2021070014. https://doi.org/10.20944/preprints202107.0014.v1

Abstract

Music can evoke a wide variety of emotions in human listeners. Research has shown that treatment for depression and other mental health disorders is significantly more effective when complemented by music therapy. However, because each person experiences music-induced emotions differently, there is no systematic way to accurately predict how people will respond to different types of music at an individual level. In this experiment, a model is created to predict listeners' emotional responses to music from both their electroencephalographic (EEG) data and the acoustic features of the music. Recursive feature elimination (RFE) is used to extract the most relevant and best-performing features from the EEG and music, and a regression model is then fit that accurately correlates the listener's actual music-induced emotional responses with the model's predicted responses. With a mean correlation of r = 0.788, this model is significantly more accurate than previous works attempting to predict music-induced emotions (e.g., a 370% increase in accuracy compared to Daly et al. (2015)). The results of this regression fit suggest that accurately predicting how people respond to music from brain activity is possible. Furthermore, by testing this model on features extracted from any musical clip, the music most likely to evoke a happier and more pleasant emotional state in an individual can be determined. This may allow music therapy practitioners, as well as music listeners more broadly, to select music that will improve mood and mental health.
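The pipeline described in the abstract — recursive feature elimination over combined EEG and acoustic features, followed by a regression fit whose predictions are compared with actual responses via Pearson correlation — can be sketched as follows. This is a minimal illustrative sketch, not the author's implementation: it assumes scikit-learn, and the feature matrix and emotion ratings are synthetic stand-ins for real EEG band powers, acoustic descriptors, and listener responses.

```python
# Hypothetical sketch of the RFE + regression pipeline from the abstract.
# All data here is synthetic; real inputs would be EEG and acoustic features.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_eeg, n_acoustic = 200, 40, 20

# Each row: one music-listening trial; columns: EEG + acoustic features.
X = rng.normal(size=(n_trials, n_eeg + n_acoustic))
# Simulated emotional-response ratings depending on a sparse feature subset.
true_w = rng.normal(size=n_eeg + n_acoustic) * (rng.random(n_eeg + n_acoustic) < 0.2)
y = X @ true_w + 0.5 * rng.normal(size=n_trials)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# RFE recursively drops the least informative features (here, keeping 15).
selector = RFE(estimator=Ridge(alpha=1.0), n_features_to_select=15).fit(X_tr, y_tr)

# Fit a regression model on the selected features only.
model = Ridge(alpha=1.0).fit(X_tr[:, selector.support_], y_tr)
pred = model.predict(X_te[:, selector.support_])

# Pearson correlation between actual and predicted responses,
# the evaluation metric reported in the abstract.
r = np.corrcoef(y_te, pred)[0, 1]
```

The choice of ridge regression and the number of retained features are assumptions for illustration; the paper's reported r = 0.788 refers to its own model and real data, not this sketch.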

Keywords

EEG; music therapy; acoustic features; machine learning; emotional-response predictions

Subject

Medicine and Pharmacology, Neuroscience and Neurology
