Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Advanced Machine Translation with Linguistic-Enhanced Transformer

Version 1 : Received: 4 December 2023 / Approved: 4 December 2023 / Online: 4 December 2023 (10:54:56 CET)

How to cite: Patricia, R.; Sadok, J.; Patel, R. Advanced Machine Translation with Linguistic-Enhanced Transformer. Preprints 2023, 2023120186. https://doi.org/10.20944/preprints202312.0186.v1

Abstract

Recent advances in neural language models, particularly attention-based architectures such as the Transformer, have substantially surpassed traditional methods across a range of natural language processing tasks. These models produce context-sensitive token representations by modeling relations among all positions in a sequence. However, augmenting them with explicit syntactic knowledge, such as part-of-speech tags, has been found to further improve their effectiveness, especially in low-resource settings. This study introduces the Linguistic-Enhanced Transformer (LET), which integrates multiple syntactic features and yields a notable gain in translation accuracy, improving by up to 1.99 BLEU points on subsets of the WMT '14 English-German dataset. Furthermore, this paper demonstrates that enriching BERT models with syntax-aware embeddings improves their performance on several GLUE benchmark tasks.
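The abstract does not specify how LET fuses syntactic features with token representations, but a common way to inject part-of-speech information into a Transformer is to sum a learned POS-tag embedding with the token embedding before the encoder. The sketch below illustrates that idea only; the function and parameter names (`make_syntax_aware_embeddings`, `tagset_size`) are hypothetical and not taken from the paper.

```python
import numpy as np

def make_syntax_aware_embeddings(vocab_size, tagset_size, d_model, rng):
    """Build an embedding function that sums token and POS-tag lookups.

    Two independent lookup tables are created: one indexed by token id,
    one indexed by POS-tag id. Summing the two lookups yields a single
    syntax-aware vector per token, which downstream attention layers
    would consume in place of the plain token embedding.
    """
    token_table = rng.normal(size=(vocab_size, d_model))
    tag_table = rng.normal(size=(tagset_size, d_model))

    def embed(token_ids, tag_ids):
        # Fancy indexing performs the lookups; shapes broadcast to
        # (batch, seq_len, d_model).
        return token_table[token_ids] + tag_table[tag_ids]

    return embed

# Usage: one sentence of 5 tokens, with (illustrative) universal POS tags.
rng = np.random.default_rng(0)
embed = make_syntax_aware_embeddings(vocab_size=10_000, tagset_size=17,
                                     d_model=512, rng=rng)
token_ids = np.array([[1, 5, 42, 7, 3]])
tag_ids = np.array([[0, 3, 1, 3, 2]])
vectors = embed(token_ids, tag_ids)
print(vectors.shape)  # (1, 5, 512)
```

Summation keeps the model dimension unchanged, so the rest of the Transformer needs no modification; concatenation followed by a projection is an equally plausible alternative the paper might use instead.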

Keywords

enhanced machine translation; linguistic syntax; transformer model

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning

