Version 1: Received: 24 March 2024 / Approved: 25 March 2024 / Online: 25 March 2024 (15:04:44 CET)
How to cite:
MDPI and ACS Style
Iacob, S.; Dambre, J. Exploiting Signal Propagation Delays to match Task Memory Requirements in Reservoir Computing. Preprints 2024, 2024031489. https://doi.org/10.20944/preprints202403.1489.v1
APA Style
Iacob, S., & Dambre, J. (2024). Exploiting Signal Propagation Delays to match Task Memory Requirements in Reservoir Computing. Preprints. https://doi.org/10.20944/preprints202403.1489.v1
Chicago/Turabian Style
Iacob, S., and Joni Dambre. 2024. "Exploiting Signal Propagation Delays to match Task Memory Requirements in Reservoir Computing." Preprints. https://doi.org/10.20944/preprints202403.1489.v1
Abstract
Recurrent neural networks (RNNs) transmit information over time through recurrent connections. In contrast, biological neural networks use many other temporal processing mechanisms. One such mechanism is the inter-neuron delay caused by varying axon properties. Recently, this feature was implemented in echo state networks (ESNs), a type of RNN, by assigning spatial locations to neurons and introducing distance-dependent inter-neuron delays. These delays were shown to significantly improve ESN task performance. However, it has so far remained unclear why these distance-based delay networks (DDNs) perform better than ESNs. In this paper, we show that optimizing inter-node delays matches the memory capacity of the network to the memory requirements of the task. As such, networks concentrate their memory capacity on the points in the past that carry the most information for the task at hand. Moreover, we show that DDNs have a higher total linear memory capacity than ESNs, with the same amount of non-linear processing power.
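To make the mechanism described in the abstract concrete, below is a minimal sketch (not the authors' implementation) of how distance-dependent delays can be grafted onto a standard ESN update in NumPy. The reservoir size, neuron positions, propagation speed, and weight scales are all illustrative assumptions; the essential idea is that each recurrent connection reads its source neuron's state from several timesteps in the past, in proportion to the distance between the two neurons.

```python
# Minimal sketch of a distance-based delay network (DDN), assuming random
# 2D neuron positions and an assumed propagation speed; all parameter
# values are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(seed=42)

N = 100                                        # reservoir size (assumed)
pos = rng.uniform(0.0, 1.0, size=(N, 2))       # random 2D neuron locations
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)

speed = 0.2                                    # assumed signal speed (units/step)
delay = 1 + np.rint(dist / speed).astype(int)  # >= 1 step, grows with distance
max_delay = int(delay.max())

W = rng.normal(0.0, 1.0, size=(N, N))          # recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # heuristic echo-state scaling
w_in = rng.uniform(-0.5, 0.5, size=N)          # input weights for a 1D input

def run_reservoir(u):
    """Drive the delayed reservoir with input sequence u; return all states."""
    hist = np.zeros((max_delay, N))            # buffer of the last max_delay states
    states = np.zeros((len(u), N))
    cols = np.arange(N)[None, :]               # column index for fancy indexing
    for t, u_t in enumerate(u):
        # delayed[i, j] = state of neuron j, delay[i, j] steps in the past
        delayed = hist[-delay, cols]
        x = np.tanh((W * delayed).sum(axis=1) + w_in * u_t)
        hist = np.roll(hist, -1, axis=0)       # drop the oldest state
        hist[-1] = x                           # append the newest state
        states[t] = x
        hist = hist.copy()                     # keep the buffer contiguous
    return states

states = run_reservoir(rng.normal(size=500))   # e.g. a white-noise input
```

With `delay` fixed to 1 everywhere, this reduces to the familiar ESN update x(t) = tanh(W x(t-1) + w_in u(t)). The linear memory capacity discussed in the abstract can then be estimated in the standard way, by training linear readouts to reconstruct delayed copies of the input from `states` and summing the squared correlations over lags.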
Subject: Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.