Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Reversal of the Word-Sense Disambiguation Task Using Deep Learning Model

Version 1 : Received: 14 May 2024 / Approved: 15 May 2024 / Online: 15 May 2024 (13:03:29 CEST)

How to cite: Laukaitis, A. Reversal of the Word-Sense Disambiguation Task Using Deep Learning Model. Preprints 2024, 2024051045. https://doi.org/10.20944/preprints202405.1045.v1

Abstract

Word Sense Disambiguation (WSD) remains a persistent challenge within the Natural Language Processing (NLP) community. Although various NLP packages exist, the Lesk algorithm in the NLTK library, a widely used tool, demonstrates suboptimal accuracy. Conversely, deep neural networks offer higher classification accuracy, yet their practical utility is constrained by demanding memory requirements. This paper introduces a method that addresses WSD by optimizing memory usage without compromising state-of-the-art accuracy. The presented methodology facilitates the development of a WSD system that integrates seamlessly into NLP tasks, resembling the functionality offered by the NLTK library. Furthermore, this paper advocates treating the BERT language model as a gold standard, proposing modifications to manually annotated datasets and semantic dictionaries such as WordNet to enhance WSD accuracy. Empirical validation through a series of experiments establishes the effectiveness of the proposed method, which achieves state-of-the-art performance across multiple WSD datasets. This contribution represents an advancement in mitigating the challenges associated with WSD, offering a practical solution for integration into NLP applications.
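For readers unfamiliar with the Lesk baseline the abstract refers to, the sketch below illustrates the core idea of simplified Lesk: choose the sense whose dictionary gloss shares the most words with the surrounding context. The glosses and stopword list here are toy examples for illustration, not actual WordNet data, and this is not the paper's proposed method.

```python
# Minimal sketch of the simplified Lesk heuristic: score each candidate
# sense by the word overlap between its gloss and the target word's context.
# Toy glosses (illustrative only, not from WordNet):
TOY_GLOSSES = {
    "bank.n.01": "financial institution that accepts deposits and lends money",
    "bank.n.02": "sloping land beside a body of water such as a river",
}

STOPWORDS = {"a", "an", "the", "of", "to", "and", "that", "such", "as", "i",
             "we", "my", "on", "at"}

def simplified_lesk(context, glosses, stopwords=STOPWORDS):
    """Return the sense key whose gloss overlaps most with the context."""
    ctx = {w for w in context.lower().split() if w not in stopwords}

    def overlap(sense):
        gloss_words = set(glosses[sense].lower().split()) - stopwords
        return len(ctx & gloss_words)

    return max(glosses, key=overlap)

# "money" matches the financial gloss; "river" matches the riverbank gloss.
print(simplified_lesk("i deposited money at the bank", TOY_GLOSSES))
print(simplified_lesk("we sat on the grassy bank of the river", TOY_GLOSSES))
```

In NLTK itself the equivalent one-liner is `nltk.wsd.lesk(tokens, "bank")`, which computes overlap against real WordNet glosses; its modest accuracy on standard benchmarks is precisely the gap the paper's deep-learning approach targets.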

Keywords

word sense disambiguation; natural language processing; WordNet

Subject

Computer Science and Mathematics, Information Systems
