Version 1: Received: 8 December 2023 / Approved: 11 December 2023 / Online: 11 December 2023 (09:19:12 CET)
How to cite:
Hari, A. Comparing Transformer and RNN Models in BCIs for Handwritten Text Decoding via Neural Signals. Preprints 2023, 2023120674. https://doi.org/10.20944/preprints202312.0674.v1
APA Style
Hari, A. (2023). Comparing Transformer and RNN Models in BCIs for Handwritten Text Decoding via Neural Signals. Preprints. https://doi.org/10.20944/preprints202312.0674.v1
Chicago/Turabian Style
Hari, A. 2023. "Comparing Transformer and RNN Models in BCIs for Handwritten Text Decoding via Neural Signals." Preprints. https://doi.org/10.20944/preprints202312.0674.v1
Abstract
The purpose of this study is to explore the use of a custom Transformer model in brain-computer interfaces (BCIs) that translate the neural activity produced when an individual with limited verbal and fine-motor skills attempts to handwrite. Previous studies have found that Transformers outperform recurrent neural networks (RNNs) in translation tasks, which resemble the task here of decoding neural signals into intended handwritten text. Given the known benefits of the Transformer architecture, the hypothesis was that a Transformer would show promise in the context of a BCI, as measured by the recorded metrics. The neural signals of a tetraplegic individual attempting to handwrite were provided by existing research. Four trials were conducted in which the model was trained on data with or without augmentation to separately minimize training loss and validation loss. When the Transformer-based BCI was compared with the original RNN BCI (the source of the data), the Transformer performed less favorably across all four trials. Although the results do not indicate that the Transformer model currently outperforms an RNN BCI, further testing of the model's capabilities is necessary before determining whether Transformers can enhance communication in this manner; such testing could include training with a larger and more suitable dataset and/or for longer, comparing training times between the RNN and the Transformer, and evaluating how the Transformer improves with an offline autocorrect feature.
Keywords
artificial intelligence; transformer; brain-computer interface; comparison; RNN; neural data
Subject
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.