Article
Preserved in Portico. This version is not peer-reviewed.
Hybrid Speech-Lexicon Emotion Analysis
Version 1: Received: 9 April 2024 / Approved: 10 April 2024 / Online: 10 April 2024 (10:58:12 CEST)
How to cite: Weber, M.; Nasir, W.; Rossi, S. Hybrid Speech-Lexicon Emotion Analysis. Preprints 2024, 2024040710. https://doi.org/10.20944/preprints202404.0710.v1
Abstract
This study examines Personal Narratives (PN): oral or written accounts of individual experiences that cover facts, events, people, and thoughts. Traditional emotion recognition and sentiment analysis operate at coarser granularities, such as the utterance or the document. Our research instead targets Emotion Carriers (EC): the specific segments within speech or text that convey the narrator's emotional state (e.g., "losing a parent", "decision-making moments"). Extracting ECs yields a richer representation of a user's emotional state, benefiting natural language understanding and the sophistication of dialogue models. While previous studies have relied on lexical features to identify ECs, we argue that the spoken narrative itself offers a more nuanced view of the context and emotional state. This paper explores the integration of speech and textual embeddings at the word level, using both early and late fusion, for improved EC detection in spoken narratives. We employ Residual Neural Networks (ResNet), first pre-trained on diverse speech emotion datasets and then fine-tuned for EC detection. Our experiments show that late fusion, in particular, significantly improves EC detection.
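The abstract does not specify the fusion architecture in detail; the following is a minimal sketch of the general distinction it draws between early fusion (concatenating word-aligned speech and text embeddings before a shared tagger) and late fusion (combining per-modality predictions). All module names, dimensions, and the BiLSTM encoder are illustrative assumptions, not the authors' ResNet-based implementation.

```python
import torch
import torch.nn as nn

# Hypothetical embedding sizes; the paper does not state these.
TEXT_DIM, SPEECH_DIM, HIDDEN = 768, 256, 128

class EarlyFusionTagger(nn.Module):
    """Early fusion: concatenate word-aligned text and speech
    embeddings, then tag each word as EC / non-EC."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.LSTM(TEXT_DIM + SPEECH_DIM, HIDDEN,
                               batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * HIDDEN, 2)  # EC vs. non-EC

    def forward(self, text_emb, speech_emb):
        # text_emb:   (batch, words, TEXT_DIM)
        # speech_emb: (batch, words, SPEECH_DIM), aligned to words
        fused = torch.cat([text_emb, speech_emb], dim=-1)
        hidden, _ = self.encoder(fused)
        return self.classifier(hidden)  # per-word logits

class LateFusionTagger(nn.Module):
    """Late fusion: score each modality independently and
    average the per-word logits."""
    def __init__(self):
        super().__init__()
        self.text_head = nn.Sequential(
            nn.Linear(TEXT_DIM, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, 2))
        self.speech_head = nn.Sequential(
            nn.Linear(SPEECH_DIM, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, 2))

    def forward(self, text_emb, speech_emb):
        return 0.5 * (self.text_head(text_emb)
                      + self.speech_head(speech_emb))
```

In this reading, late fusion keeps each modality's decision pathway separate until the final scores are combined, which is consistent with the paper's finding that late fusion is the stronger configuration.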
Keywords
Hybrid emotion detection; Textual analysis; Fusion techniques; Natural language understanding
Subject
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.