MDPI and ACS Style
Elizabeth, P.; Patel, R.; Jose, F. Enhancing Sentence Representation with Syntactically Structural Transformer. Preprints 2023, 2023120600. https://doi.org/10.20944/preprints202312.0600.v1
APA Style
Elizabeth, P., Patel, R., & Jose, F. (2023). Enhancing Sentence Representation with Syntactically Structural Transformer. Preprints. https://doi.org/10.20944/preprints202312.0600.v1
Chicago/Turabian Style
Elizabeth, P., Rodolfo Patel, and Fomuso Jose. 2023. "Enhancing Sentence Representation with Syntactically Structural Transformer." Preprints. https://doi.org/10.20944/preprints202312.0600.v1
Abstract
The evolution of sentence representation learning, especially with parse tree encoders, has shown remarkable progress. Traditional approaches predominantly rely on recursive encoding of tree structures, which impedes parallel processing. Additionally, these methods often overlook the significance of dependency tree arc labels. To overcome these limitations, we introduce the Syntax-Enhanced Transformer (SET), which incorporates a novel dual-attention mechanism integrating relation-focused attention alongside traditional self-attention. This design effectively encodes both the dependency relations and the spatial positional relationships within sentence dependency trees. Our approach incorporates syntactic information into the Transformer framework without compromising its inherent parallelizability. SET achieves performance superior or comparable to contemporary methods across various sentence representation tasks while significantly improving computational efficiency.
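The abstract describes the dual-attention design only at a high level, so the sketch below is a minimal, hypothetical PyTorch rendering of one SET-style layer. It assumes the relation-focused attention is realized as a learned additive bias over dependency-relation labels combined with ordinary scaled dot-product self-attention; all module and parameter names (DualAttentionLayer, rel_bias, rel_ids) are our own illustration, not the authors' code, and the sketch omits the spatial positional component the abstract also mentions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAttentionLayer(nn.Module):
    """Hypothetical sketch of a SET-style layer: standard self-attention
    plus a relation-focused term derived from dependency arc labels."""

    def __init__(self, d_model: int, num_relations: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # One learned scalar bias per dependency-relation label
        # (assumption: label 0 means "no arc between tokens i and j").
        self.rel_bias = nn.Embedding(num_relations, 1)
        self.out = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor, rel_ids: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        # rel_ids: (batch, seq, seq) integer relation label per token pair
        q, k, v = self.q(x), self.k(x), self.v(x)
        content = torch.matmul(q, k.transpose(-2, -1)) * self.scale
        relation = self.rel_bias(rel_ids).squeeze(-1)   # (batch, seq, seq)
        attn = F.softmax(content + relation, dim=-1)    # dual attention
        return self.out(torch.matmul(attn, v))


# Usage on a toy batch: 2 sentences of 5 tokens, 40 relation labels.
x = torch.randn(2, 5, 64)
rel_ids = torch.randint(0, 40, (2, 5, 5))
layer = DualAttentionLayer(d_model=64, num_relations=40)
print(layer(x, rel_ids).shape)  # torch.Size([2, 5, 64])
```

Because the relation term here is a simple embedding lookup added to the attention scores, the whole layer remains batched matrix arithmetic with no recursion over the tree, which is consistent with the abstract's claim that syntactic information can be injected without sacrificing the Transformer's parallelizability.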
Keywords
Syntactic tree; Transformer; Self-attention
Subject
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.