Preprint Article, Version 1 (preserved in Portico; this version is not peer-reviewed)

Comparing Transformer and RNN Models in BCIs for Handwritten Text Decoding via Neural Signals

Version 1 : Received: 8 December 2023 / Approved: 11 December 2023 / Online: 11 December 2023 (09:19:12 CET)

How to cite: Hari, A. Comparing Transformer and RNN Models in BCIs for Handwritten Text Decoding via Neural Signals. Preprints 2023, 2023120674. https://doi.org/10.20944/preprints202312.0674.v1

Abstract

This study explores the use of a custom Transformer model in brain-computer interfaces (BCIs) that translate the neural activity produced when an individual with limited verbal and fine-motor skills attempts to handwrite. Previous studies have found that Transformers outperform recurrent neural networks (RNNs) on translation tasks, which resemble the task here of decoding neural signals into intended handwritten text. Given these known advantages, the hypothesis was that a Transformer would show promise in the context of a BCI, as measured by the recorded metrics. Neural signals recorded while a tetraplegic individual attempted to handwrite were obtained from existing research. Four trials were conducted, training the model on data with and without augmentation and, in each case, separately minimizing training loss and validation loss. When the Transformer-based BCI was compared with the original RNN-based BCI (the source of the data), the Transformer performed less favorably across all four trials. Although these results do not indicate that the Transformer currently outperforms an RNN BCI, further testing of the model's capabilities is necessary before determining whether Transformers can enhance communication in this manner: for example, training on a larger and more suitable dataset and/or for longer, comparing training times between the RNN and the Transformer, and evaluating how much an offline autocorrect feature improves the Transformer.
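As a rough illustration of the approach the abstract describes, the sketch below maps binned multichannel neural features to per-time-step character probabilities with a Transformer encoder. All names, dimensions, and the CTC-style training objective are illustrative assumptions, not details taken from the study.

# Hypothetical sketch: Transformer decoder from binned neural features
# to character probabilities. Layer sizes and the CTC objective are
# assumptions for illustration, not the preprint's actual architecture.
import torch
import torch.nn as nn

class NeuralTransformerDecoder(nn.Module):
    def __init__(self, n_channels=192, d_model=256, n_heads=4,
                 n_layers=4, n_chars=31, max_bins=2000):
        super().__init__()
        self.input_proj = nn.Linear(n_channels, d_model)   # embed each time bin
        self.pos_emb = nn.Parameter(torch.zeros(1, max_bins, d_model))  # learned positions
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_chars + 1)        # +1 for a CTC blank token

    def forward(self, x):
        # x: (batch, time_bins, n_channels) of binned spike features
        h = self.input_proj(x) + self.pos_emb[:, :x.size(1)]
        h = self.encoder(h)
        return self.head(h).log_softmax(-1)                # per-bin character log-probs

model = NeuralTransformerDecoder()
x = torch.randn(2, 500, 192)      # two trials, 500 time bins, 192 channels
log_probs = model(x)              # shape (2, 500, 32)
# Training would minimize e.g. nn.CTCLoss between log_probs and the intended
# character sequence, with separate train/validation splits to track the two
# losses the abstract reports.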

Keywords

artificial intelligence; transformer; brain-computer interface; comparison; RNN; neural data

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
