ARTICLE
doi:10.20944/preprints202401.2120.v1
Subject:
Computer Science And Mathematics,
Artificial Intelligence And Machine Learning
Keywords:
Deep Learning; Natural Language Processing; Transformer-based Network; Atinuke; Attention Mechanisms; Hyperparameter Tuning; Multi-Head Attention; Embeddings; Machine Learning; Pipelines; State-of-the-Art (SOTA) Results
Online: 31 January 2024 (02:55:47 CET)