Version 1: Received: 10 July 2021 / Approved: 12 July 2021 / Online: 12 July 2021 (12:03:06 CEST)
How to cite:
Nosouhian, S.; Nosouhian, F.; Kazemi Khoshouei, A. A Review of Recurrent Neural Network Architecture for Sequence Learning: Comparison between LSTM and GRU. Preprints 2021, 2021070252. https://doi.org/10.20944/preprints202107.0252.v1
APA Style
Nosouhian, S., Nosouhian, F., & Kazemi Khoshouei, A. (2021). A Review of Recurrent Neural Network Architecture for Sequence Learning: Comparison between LSTM and GRU. Preprints. https://doi.org/10.20944/preprints202107.0252.v1
Chicago/Turabian Style
Nosouhian, S., Fereshteh Nosouhian, and Abbas Kazemi Khoshouei. 2021. "A Review of Recurrent Neural Network Architecture for Sequence Learning: Comparison between LSTM and GRU." Preprints. https://doi.org/10.20944/preprints202107.0252.v1
Abstract
Deep neural networks (DNNs) have had a major impact on machine learning by achieving human-level performance on real-world problems such as image processing and natural language processing (NLP). Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are two typical architectures widely used to solve such problems. Problems that depend on time sequences are generally very challenging, and RNN architectures have driven enormous improvements across a wide range of machine learning problems with sequential input. In this paper, different types of RNN architectures are compared, with a special focus on two well-known gated RNNs: Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU). We evaluated these models on a force-estimation task in pouring. Four models (multi-layer LSTM, multi-layer GRU, single-layer LSTM, and single-layer GRU) were created and trained. The results suggest that the multi-layer GRU outperformed the other three models.
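The preprint does not publish its code, but the four compared variants can be sketched minimally, e.g. in PyTorch. In this sketch the input dimension, hidden size, layer count for the "multi-layer" variants, and the last-step regression head are all assumptions for illustration, not the authors' settings.

import torch
import torch.nn as nn

# Hypothetical dimensions: sensor features in, scalar force estimate out.
INPUT_SIZE, HIDDEN_SIZE, OUTPUT_SIZE = 6, 64, 1

def make_model(cell: str, num_layers: int) -> nn.Module:
    rnn_cls = {"lstm": nn.LSTM, "gru": nn.GRU}[cell]

    class ForceEstimator(nn.Module):
        def __init__(self):
            super().__init__()
            self.rnn = rnn_cls(INPUT_SIZE, HIDDEN_SIZE,
                               num_layers=num_layers, batch_first=True)
            self.head = nn.Linear(HIDDEN_SIZE, OUTPUT_SIZE)

        def forward(self, x):              # x: (batch, time, features)
            out, _ = self.rnn(x)           # hidden state at every time step
            return self.head(out[:, -1])   # regress from the last step

    return ForceEstimator()

# The four variants compared in the paper (layer count of 3 is assumed).
models = {
    "single-layer LSTM": make_model("lstm", 1),
    "multi-layer LSTM":  make_model("lstm", 3),
    "single-layer GRU":  make_model("gru", 1),
    "multi-layer GRU":   make_model("gru", 3),
}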
Keywords
Recurrent neural network; Long short-term memory; Gated recurrent unit
Subject
Computer Science and Mathematics, Computer Science
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.