Preprint Article, Version 1 (Preserved in Portico). This version is not peer-reviewed.

Automatic Detection of Inconsistencies and Hierarchical Topic Classification for Open-Domain Chatbots

Version 1 : Received: 19 June 2023 / Approved: 20 June 2023 / Online: 22 June 2023 (10:23:26 CEST)

A peer-reviewed article of this Preprint also exists.

Rodríguez-Cantelar, M.; Estecha-Garitagoitia, M.; D’Haro, L.F.; Matía, F.; Córdoba, R. Automatic Detection of Inconsistencies and Hierarchical Topic Classification for Open-Domain Chatbots. Appl. Sci. 2023, 13, 9055.

Abstract

Current State-of-the-Art (SotA) chatbots can produce high-quality sentences, handle a wide range of conversation topics, and sustain longer interactions. Unfortunately, the generated responses depend strongly on the training data, the dialogue history and current turn used to guide the response, the internal decoding mechanisms, and the ranking strategies, among other factors. As a result, a chatbot may give different answers to semantically similar questions from users, which can be regarded as a form of hallucination or cause confusion in long-term interactions. In this research paper, we propose a novel methodology consisting of two main phases: a) hierarchical automatic detection of topics and subtopics in dialogue interactions using a Zero-Shot learning approach, and b) detection of inconsistent answers using K-Means and the Silhouette coefficient. To evaluate the efficacy of topic and subtopic detection, we used a subset of the DailyDialog dataset and real dialogue interactions gathered during the Alexa Socialbot Grand Challenge 5 (SGC5). The proposed approach detects up to 18 different topics and 102 subtopics. To detect inconsistencies, we manually generated multiple paraphrased questions and employed several pre-trained SotA chatbot models to generate responses. Our experimental results show a weighted F-1 value of 0.34 for topic detection and 0.78 for subtopic detection on DailyDialog, 81% and 62% accuracy for topic and subtopic classification on SGC5, and, for predicting the number of different responses, a mean squared error (MSE) of 3.4 when testing smaller generative models and 4.9 for recent large language models.
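The sketch below illustrates the two phases described in the abstract under stated assumptions: zero-shot topic/subtopic detection followed by K-Means clustering with the Silhouette coefficient to estimate how many distinct answers a chatbot gives to paraphrases of the same question. The model names (facebook/bart-large-mnli, all-MiniLM-L6-v2), the candidate label lists, and the k_max bound are illustrative assumptions; the paper's exact models, label taxonomy (18 topics, 102 subtopics), and hyperparameters are not specified in this abstract.

```python
# Minimal sketch, NOT the authors' exact pipeline.
# Phase 1: hierarchical zero-shot topic/subtopic detection with an NLI model.
# Phase 2: K-Means + Silhouette over response embeddings to estimate the number
#          of distinct answers (a proxy for inconsistency across paraphrases).
from transformers import pipeline
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Illustrative subset of the topic/subtopic taxonomy (assumed labels).
TOPICS = ["sports", "music", "travel"]
SUBTOPICS = {
    "sports": ["football", "tennis"],
    "music": ["rock", "jazz"],
    "travel": ["flights", "hotels"],
}

zsl = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def classify_turn(utterance: str) -> tuple[str, str]:
    """Cascade two zero-shot classifications: topic first, then its subtopics."""
    topic = zsl(utterance, candidate_labels=TOPICS)["labels"][0]
    subtopic = zsl(utterance, candidate_labels=SUBTOPICS[topic])["labels"][0]
    return topic, subtopic

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def estimate_num_answers(responses: list[str], k_max: int = 5) -> int:
    """Cluster response embeddings and pick k with the best Silhouette coefficient."""
    emb = encoder.encode(responses)
    best_k, best_score = 1, -1.0
    for k in range(2, min(k_max, len(responses) - 1) + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(emb)
        score = silhouette_score(emb, labels)
        if score > best_score:
            best_k, best_score = k, score
    return best_k  # best_k > 1 suggests inconsistent answers to the same question

if __name__ == "__main__":
    print(classify_turn("Did you watch the match at Wimbledon yesterday?"))
    paraphrased_answers = [
        "My favourite team is Real Madrid.",
        "I support Real Madrid.",
        "I don't really follow football.",
    ]
    print(estimate_num_answers(paraphrased_answers))
```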

Keywords

chatbots; inconsistent responses; zero-shot topic detection; clustering

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
