Preprint Article, Version 1 (preserved in Portico). This version is not peer-reviewed.

Semantic Relationship-based Embedding Models for Text Classification

Version 1 : Received: 19 October 2022 / Approved: 20 October 2022 / Online: 20 October 2022 (03:25:22 CEST)

A peer-reviewed article of this Preprint also exists.

Lezama-Sánchez, A.L.; Tovar Vidal, M.; Reyes-Ortiz, J.A. An Approach Based on Semantic Relationship Embeddings for Text Classification. Mathematics 2022, 10, 4161.

Abstract

Embedding models represent each word as a fixed-length vector of numbers. These models have been used in text classification tasks such as recommendation and question-answering systems. A semantic relationship links words whose combined meaning contributes a complete idea to a text. It is therefore hypothesized that an embedding model that incorporates semantic relationships will perform better on tasks that rely on them. This paper presents three embedding models based on semantic relationships extracted from Wikipedia and applies them to text classification. The synonymy, hyponymy, and hyperonymy relationships were considered because previous experiments have shown that they provide the most semantic knowledge. Lexical-syntactic patterns from the literature were implemented and applied to the Wikipedia corpus to obtain the semantic relationships it contains. The extracted relationships feed three different models: one based on synonymy, one on hyponymy-hyperonymy, and one combining the first two. A convolutional neural network was trained for text classification to evaluate the performance of each model. The results were evaluated with the precision, accuracy, recall, and F1-measure metrics. The best values, obtained with the second model, were an accuracy of 0.79 on the 20-Newsgroups corpus and an F1-measure and recall of 0.87 on the Reuters corpus.
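As a rough illustration of the relationship-extraction step described above, the sketch below applies a few Hearst-style lexical-syntactic patterns to raw sentences to pull out synonym and hyponym-hyperonym pairs. The specific regular expressions, the function name extract_relations, and the sample text are illustrative assumptions; the paper uses patterns drawn from the literature and applies them to the full Wikipedia corpus, and the resulting pairs are then used to build the embedding models.

```python
# Illustrative sketch only: the actual lexical-syntactic patterns used in the paper
# are taken from the literature and are not listed in the abstract, so the regexes
# below are common Hearst-style examples meant purely to show the extraction idea.
import re

# Hypothetical hyponym-hyperonym patterns ("X such as Y", "Y and other X").
HYPONYM_PATTERNS = [
    re.compile(r"(?P<hyper>\w+)\s+such as\s+(?P<hypo>\w+)", re.IGNORECASE),
    re.compile(r"(?P<hypo>\w+)\s+and other\s+(?P<hyper>\w+)", re.IGNORECASE),
]

# Hypothetical synonymy patterns ("X, also known as Y", "X, i.e. Y").
SYNONYM_PATTERNS = [
    re.compile(r"(?P<a>\w+),?\s+also known as\s+(?P<b>\w+)", re.IGNORECASE),
    re.compile(r"(?P<a>\w+),?\s+i\.e\.\s+(?P<b>\w+)", re.IGNORECASE),
]

def extract_relations(sentence):
    """Return (relation, word1, word2) triples found in one sentence."""
    triples = []
    for pattern in HYPONYM_PATTERNS:
        for m in pattern.finditer(sentence):
            triples.append(("hyponym-hyperonym", m.group("hypo"), m.group("hyper")))
    for pattern in SYNONYM_PATTERNS:
        for m in pattern.finditer(sentence):
            triples.append(("synonym", m.group("a"), m.group("b")))
    return triples

if __name__ == "__main__":
    text = "Fruits such as apples are sold here. Sodium chloride, also known as salt, is common."
    for sentence in text.split(". "):
        for triple in extract_relations(sentence):
            print(triple)
```

Run on the sample text, this prints one hyponym-hyperonym pair (apples, Fruits) and one synonym pair (chloride, salt); in practice the pairs extracted from Wikipedia would serve as training contexts for the relationship-based embeddings before the convolutional classifier is trained.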

Keywords

Deep Learning; Embedding Models; Semantic Relationships; Lexical-Syntactic Patterns; Convolutional Neural Networks

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
