Preprint
Technical Note

The Gap between Deep Learning and the Nervous System

Submitted: 31 August 2023
Posted: 1 September 2023

Abstract
A remarkable strength of deep neural networks lies in their ability to discover patterns and representations in complex data. Deep learning models can learn and refine features at different levels of abstraction, enabling them to handle complex tasks in computer vision, natural language processing, and speech recognition. The ability to generalize from examples, coupled with the capacity to process vast amounts of data, empowers deep learning to achieve state-of-the-art results across a wide spectrum of applications. One argument for this ability is based on similarities between deep neural networks and the nervous system. In this report, we argue that despite the remarkable performance of deep learning, there are still gaps between deep learning and the nervous system that need to be closed to enable deep learning to perform tasks that currently only the nervous system can perform.
Subject: Engineering - Electrical and Electronic Engineering

1. Introduction

Deep neural networks exhibit a remarkable capacity to undertake tasks that only the nervous system is known to be capable of doing. Just as the human brain processes visual stimuli to perceive and recognize objects, deep learning models have demonstrated exceptional prowess in image classification tasks, effectively distinguishing intricate patterns and objects within images [1,2]. Moreover, akin to how the brain comprehends and generates language, neural networks have been harnessed for natural language processing tasks, enabling translation [3], sentiment analysis [4], and even the generation of coherent text [5]. In the realm of decision-making, deep neural networks have shown promise in making complex judgments by processing and evaluating a multitude of factors, mirroring the brain’s intricate web of interconnected neurons that contribute to cognitive processes [6]. These shared functionalities emphasize the parallelism between artificial and biological intelligence, underscoring the growing potential of deep neural networks to replicate and augment the remarkable capabilities of the nervous system.
Many researchers draw parallels between deep learning and the biological nervous system because of certain similarities in their functioning, which led to the development of the field of artificial neural networks (ANNs). A major reason is that deep learning models, particularly artificial neural networks, are built upon the concept of neurons. These artificial neurons are loosely modeled after biological neurons, with input connections, weighted synapses, and activation functions. This architectural similarity dates back to the seminal work of Rosenblatt [7], which introduced the perceptron and later gave rise to the multi-layer perceptron and more complex architectures. These more complex networks process information in a manner similar to the nervous system. In both deep learning and the brain, information is processed hierarchically: in deep neural networks, successive layers capture increasingly abstract features of the input data, and the brain likewise processes sensory information in hierarchical structures [8,9].
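To make the neuron analogy concrete, the following is a minimal sketch of a Rosenblatt-style perceptron in NumPy; the weight vector, bias, and step activation are illustrative choices, not drawn from any specific model discussed in this note.

```python
import numpy as np

def perceptron(x, w, b):
    """A single artificial neuron: weighted sum of inputs plus bias,
    passed through a step activation (Rosenblatt's original choice)."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Example: a hand-chosen weight vector that realizes logical AND.
w = np.array([1.0, 1.0])
b = -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", perceptron(np.array(x), w, b))
```

Stacking many such units into layers, and replacing the step function with differentiable activations, yields the multi-layer architectures described above.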
Moreover, deep neural networks exhibit behaviors akin to those of the intricate nervous system. Just as the human brain engages in learning from experiences and adjusts itself to novel information, deep learning models are meticulously crafted to acquire knowledge from vast datasets during the training process [10,11,12]. This learning is achieved by iteratively refining the connection strengths between individual neurons, a process that enhances their efficacy in accomplishing specific tasks. Consequently, deep learning networks possess the remarkable capability to autonomously discern and extract pertinent features from raw input data. In a manner mirroring this, the human brain is theorized to distill meaningful attributes from the sensory information it receives, which in turn contributes to the formation of perceptions and mental representations [13]. The intriguing parallel between the artificial and biological systems becomes evident, as artificial neural networks, guided by a shared training paradigm, demonstrate behaviors reminiscent of those exhibited by the nervous system [14,15]. This convergence in learning mechanisms further accentuates the resemblance between these two distinct yet interconnected realms of computational prowess.
Parallel processing stands out as a crucial commonality between deep neural networks and the intricate nervous system [16,17,18]. This parallel processing ability is a cornerstone that both deep learning models and the human brain share, enabling them to concurrently handle numerous units of information. In the realm of deep neural networks, this is frequently manifested through simultaneous computations across diverse nodes or processors, a strategy that mirrors the brain’s own parallel processing prowess. This convergence in processing methodology underscores the intriguing similarity in how these two distinct systems tackle complex information processing tasks.
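As a loose illustration of this parallelism, the sketch below (with arbitrary layer sizes) evaluates an entire layer of units in a single vectorized matrix operation rather than one unit at a time; modern accelerators execute such operations concurrently across many processing elements.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=128)            # input vector
W = rng.normal(size=(256, 128))     # weights of a 256-unit layer
b = np.zeros(256)

# All 256 units are evaluated in one parallelizable operation,
# loosely analogous to a population of neurons firing concurrently.
h = np.maximum(0.0, W @ x + b)      # ReLU layer
print(h.shape)                      # (256,)
```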

2. Divergence of Deep Learning from the Nervous System

The divergence between deep learning and the nervous system is underscored most prominently by biological complexity. A pivotal distinction lies in the extraordinary intricacy of the biological nervous system, which stands far beyond the complexity of any artificial neural network devised thus far. The brain’s operational dynamics encompass an astonishing array of distinct cell types, interwoven connections between neurons, elaborate chemical interactions, and a sophisticated web of feedback mechanisms that remain elusive to the current state of deep learning models [19,20]. The human brain, with its billions of neurons interconnected through synapses in intricate neural circuits, orchestrates an immensely intricate symphony of activity that gives rise to cognition, perception, emotion, and much more. These neural networks are not just simple connections; they involve an interplay of neurotransmitters and neuromodulators that dynamically regulates the flow of information, adapting the system’s responses to context and experience. Such levels of complexity contribute to the brain’s remarkable plasticity, enabling it to learn, unlearn, and rewire itself as circumstances change. In contrast, while artificial neural networks exhibit admirable capabilities in learning and pattern recognition, they lack the biological system’s richness in cellular diversity, interconnectivity, and chemical modulation. The underlying architecture of deep learning models, while inspired by neural networks, is substantially simplified to keep operations computationally feasible. As we seek to propel artificial intelligence toward greater sophistication, it is vital to acknowledge the vast chasm of complexity that separates current AI models from the intricate marvel that is the biological nervous system. This recognition should fuel not just ambition but also humility in our pursuit of AI systems that inch closer to the capabilities demonstrated by nature.
The dissimilarity in learning mechanisms constitutes another facet where deep learning and the brain markedly diverge. Despite the shared attribute of learning from data, the fundamental underpinnings of this process differ significantly between these two systems. In the realm of deep learning, the foundation rests upon gradient-based optimization techniques, a method that involves iteratively adjusting the parameters of a model to minimize the difference between predicted outcomes and actual data. On the other hand, the biological nervous system orchestrates its learning through a multifaceted landscape of mechanisms, prominently featuring various forms of plasticity. Notably, long-term potentiation and long-term depression are two integral processes within the brain’s repertoire of adaptability [21]. Long-term potentiation entails the strengthening of synaptic connections between neurons that frequently activate together, thereby enhancing the efficiency of information transmission. Conversely, long-term depression diminishes synaptic strength between neurons that rarely synchronize their firing, contributing to the fine-tuning of neural pathways. This contrast underscores the nuanced sophistication of the brain’s learning mechanisms. The brain’s plasticity [22,23] extends beyond simple parameter adjustments to intricate, context-dependent alterations in synaptic connections and neural firing patterns. The dynamic interplay of these mechanisms underlies the brain’s capacity to encode memories, refine skills, and adapt to novel situations. Deep learning, while making strides in emulating learning and recognition tasks, currently lacks the biological system’s intricacies. Acknowledging these distinctions is pivotal as we endeavor to advance artificial intelligence. By drawing inspiration from the brain’s complex adaptability and plasticity, we can strive to embed elements of such mechanisms into AI systems, possibly opening avenues for more robust, flexible, and human-like learning paradigms.
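The contrast can be made concrete with a toy example. The sketch below, under simplified and hypothetical assumptions (a single linear unit, a rate-based Hebbian rule as a crude stand-in for LTP/LTD), places a gradient-descent weight update next to a correlation-driven plasticity update; neither is meant as a faithful model of cortical learning.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=10)      # presynaptic activity / input features
w = rng.normal(size=10)      # synaptic weights
target = 1.0
lr = 0.05

# Deep learning: follow the gradient of a squared error.
y = w @ x
w_gd = w - lr * 2 * (y - target) * x   # gradient of (y - target)**2 w.r.t. w

# Hebbian-style plasticity: correlated pre/post activity strengthens a
# synapse (LTP-like); anti-correlated activity weakens it (LTD-like).
post = w @ x
w_hebb = w + lr * post * x

# The gradient step is driven by an explicit global error signal; the
# Hebbian step uses only locally available activity, with no objective.
print(np.round(w_gd - w, 3))
print(np.round(w_hebb - w, 3))
```

The key design difference is locality: the Hebbian update needs only quantities available at the synapse itself, whereas backpropagation requires error information to be transported across the whole network.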
The divergence in cognitive capabilities between these two domains is stark and noteworthy [24]. Deep learning models undoubtedly shine when applied to specific, tightly defined tasks, demonstrating exceptional aptitude in areas like image recognition and language translation. Nonetheless, a profound chasm separates their prowess from the expansive terrain of cognitive functions showcased by the human brain. Deep learning models excel within bounded contexts, leveraging massive amounts of data to make accurate predictions within their designated domains. Their accomplishments are a testament to the power of data-driven learning and pattern recognition. However, when it comes to the broader realm of cognitive functions, the gap becomes evident. The human brain, a marvel of evolution, exhibits a comprehensive general intelligence that encompasses an array of cognitive faculties, including but not limited to common sense reasoning, abstract thinking, creative problem-solving, emotional understanding, and social interaction. This contrast exemplifies the distinction between narrow AI and the broad spectrum of capabilities inherent to human cognition. While deep learning models are designed for specialized tasks, the brain’s capacity to effortlessly navigate diverse scenarios, learn from minimal data, and transfer knowledge across domains remains unmatched. The brain’s ability to perceive context, reason abstractly, and make intuitive leaps showcases the depth and breadth of its cognitive prowess. As the field of artificial intelligence advances, acknowledging these dissimilarities is paramount. While AI has made remarkable strides in replicating certain cognitive functions, achieving the multifaceted, flexible, and nuanced cognitive abilities of the human brain presents an ongoing challenge. As we chart the future of AI, bridging this gap will necessitate insights from neuroscience, innovative learning paradigms, and the harmonious integration of diverse AI approaches.
The human brain stands out for its exceptional energy efficiency, seamlessly executing intricate computations while consuming minimal power. This contrasts sharply with the resource-intensive nature of deep learning models, which demand substantial computational capabilities and energy input. This discrepancy underscores the disparity in efficiency between these two systems. The human brain, through its intricate network of neurons and synapses, achieves remarkable feats of information processing using a fraction of the energy consumed by modern computing systems. This energy efficiency has been fine-tuned over millions of years of evolution, resulting in an organ that performs an astonishing array of tasks while maintaining a low energy footprint [25,26]. On the other hand, deep learning models, while impressive in their ability to tackle complex tasks, often necessitate large-scale computing infrastructures and substantial energy resources to train and operate effectively [27]. These models involve massive numbers of computations, which can be power-hungry and environmentally taxing. This distinction is not only notable from a technological standpoint but also holds implications for the development of more sustainable artificial intelligence. As we strive to create AI systems that are not only powerful but also eco-friendly, drawing inspiration from the brain’s energy-efficient design becomes crucial. By emulating the brain’s principles of parallel processing, sparse coding, and adaptive learning, we could potentially pave the way for more energy-efficient AI models that align better with the natural world’s resource-efficient paradigm.
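As a rough illustration of the sparse-coding idea mentioned above, the hypothetical sketch below applies a k-winners-take-all rule so that only a small fraction of units stay active, loosely echoing the sparse firing observed in biological networks; the layer size and the value of k are arbitrary.

```python
import numpy as np

def k_winners_take_all(a, k):
    """Keep only the k largest activations; zero out the rest."""
    out = np.zeros_like(a)
    top = np.argsort(a)[-k:]         # indices of the k strongest units
    out[top] = a[top]
    return out

rng = np.random.default_rng(2)
activations = rng.normal(size=100)
sparse = k_winners_take_all(activations, k=5)

# Only 5% of units carry signal downstream; fewer active units means
# fewer downstream computations, the intuition behind energy savings.
print(np.count_nonzero(sparse))      # 5
```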

3. Future Research Directions

Based on the preceding discussions, it can be deduced that while certain conceptual congruences exist between deep learning models and the intricate biological nervous system, the current generation of artificial neural networks falls considerably short of emulating the intricate complexity, adaptability, and cognitive proficiencies inherent in the human brain. The landscape of deep learning comprises distinct merits and constraints that must be acknowledged. While drawing inspiration from the fundamental principles of biology can undoubtedly enrich the design of artificial systems, it remains crucial to conscientiously acknowledge the foundational disparities underpinning these two distinct domains. While deep learning indeed holds the potential to learn and generalize from data in ways reminiscent of neural processes, the intricacies of the human brain remain unparalleled in their capacity to learn, adapt, and execute intricate cognitive tasks. The brain’s plasticity, its seamless integration of sensory information, and its ability to reason and strategize surpass the current capabilities of artificial neural networks.
As we navigate the exciting realm of AI and its interaction with neuroscience, it is paramount to embrace both the promise and the limitations of deep learning. We must recognize that while we can integrate certain biological insights to improve AI systems, we are operating within a domain defined by different principles and mechanisms. Striving to create more powerful and efficient artificial systems should involve a comprehensive understanding of what sets the biological nervous system apart from its artificial counterparts. In this pursuit, interdisciplinary collaboration between AI and neuroscience can pave the way toward innovations that enrich both fields and ultimately push the boundaries of our collective understanding.
Making deep learning more similar to the biological nervous system is a complex and ambitious goal, but one that is necessary to bridge the gap between artificial neural networks and the intricacies of the human brain. While we have made significant progress in deep learning, several research directions remain to be explored in order to achieve greater biological similarity. Biological neural networks are highly sparse, with only a small fraction of connections active at any given time. Research into sparser neural architectures, where connections are dynamically pruned or activated, could lead to more efficient and brain-like networks. Neuromodulation and plasticity are two less explored directions. Implementing mechanisms like synaptic plasticity (the ability of synapses to strengthen or weaken over time) and neuromodulation (chemical signaling that adjusts neural behavior) can enable learning, adaptation, and memory formation in artificial networks more similar to those of the nervous system. Integrating such features could make models more adaptive and capable of lifelong learning.
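One simple way to experiment with the sparsity idea is magnitude-based pruning, sketched below under assumed matrix sizes and an assumed pruning fraction; real dynamic-sparsity methods also regrow connections over training, which this toy version omits.

```python
import numpy as np

def prune_by_magnitude(W, fraction):
    """Zero out the weakest `fraction` of weights, mimicking the
    removal of rarely used synaptic connections."""
    threshold = np.quantile(np.abs(W), fraction)
    mask = np.abs(W) >= threshold
    return W * mask, mask

rng = np.random.default_rng(3)
W = rng.normal(size=(64, 64))
W_sparse, mask = prune_by_magnitude(W, fraction=0.9)

# 90% of connections removed; only the strongest 10% remain active.
print(mask.mean())   # ~0.1
```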
Multi-sensory integration is a major reason the brain copes with complex situations far better than deep learning does. Humans perceive and understand the world through multiple senses, and integrating information from different modalities, such as vision, language, and touch, can lead to more robust and human-like AI systems. Exploring hardware implementations that mimic neural processing, such as neuromorphic computing, could greatly improve the energy efficiency of deep learning models and bring them closer to the brain’s capabilities. Additionally, biological neural networks process information over time, allowing for tasks like sequential processing, rhythm perception, and motor coordination. Designing models that can effectively capture and exploit such temporal dynamics is an important direction for brain-inspired AI.
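To illustrate the kind of temporal dynamics at stake, the sketch below simulates a single leaky integrate-and-fire neuron, a standard textbook abstraction used in much neuromorphic work; all constants (time step, leak time constant, threshold) are illustrative.

```python
import numpy as np

# Leaky integrate-and-fire neuron: the membrane potential integrates
# input current over time, leaks toward rest, and spikes at threshold.
dt, tau, v_thresh, v_reset = 1.0, 20.0, 1.0, 0.0

rng = np.random.default_rng(4)
current = rng.uniform(0.0, 0.12, size=200)   # input current over time

v, spikes = 0.0, []
for t, i_t in enumerate(current):
    v += dt * (-v / tau + i_t)   # leak plus integration of input
    if v >= v_thresh:            # spike, then reset the membrane
        spikes.append(t)
        v = v_reset

print(f"{len(spikes)} spikes at steps {spikes[:5]}...")
```

Unlike a stateless ReLU unit, this neuron’s output depends on the history of its inputs, which is exactly the property needed for sequential processing and timing-sensitive tasks.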
Finally, integrating emotional and affective components into AI systems could make interactions more natural and emotionally intelligent; this involves understanding emotional cues and responding appropriately. As AI systems become more brain-like, addressing ethical concerns and ensuring the responsible development of brain-inspired AI becomes crucial. This includes considerations about consciousness, autonomy, and the potential societal impacts of highly advanced AI. The journey to creating AI systems that closely resemble the biological nervous system is a multidisciplinary challenge involving neuroscience, computer science, cognitive psychology, and more. Progress in these research directions could lead to AI systems that are not only highly intelligent but also more human-like in their cognitive abilities and behavior. We have a long journey ahead to fulfill these goals, but we conclude that research in deep learning is far from over and that, for the foreseeable future, it should remain an active and exciting research area.

4. Conclusions

In our exploration, we delved into several parallels that exist between deep neural networks and the intricate nervous system. While these shared attributes offer valuable insights, it’s important to acknowledge that inherent distinctions between the two systems can contribute to the limitations encountered in deep learning. As researchers embark on this journey, it becomes imperative to not only recognize these disparities but also to actively work towards mitigating them. By discerning and understanding these nuanced differences, we can lay the foundation for bridging the gap between artificial and biological intelligence. A promising avenue lies in drawing inspiration from the latest discoveries in neuroscience. Embracing these insights can empower us to sculpt artificial systems that more closely mimic the intricacies of the nervous system, thereby propelling us towards the creation of more robust and versatile AI models. This convergence of scientific disciplines holds the potential to unlock new horizons in both our understanding of natural intelligence and the advancement of artificial intelligence.

References

1. Affonso, C.; Rossi, A.L.D.; Vieira, F.H.A.; de Leon Ferreira, A.C.P.; et al. Deep learning for biological image classification. Expert Systems with Applications 2017, 85, 114–122.
2. Cai, L.; Gao, J.; Zhao, D. A review of the application of deep learning in medical image classification and segmentation. Annals of Translational Medicine 2020, 8.
3. Yang, S.; Wang, Y.; Chu, X. A survey of deep learning techniques for neural machine translation. arXiv 2020, arXiv:2002.07526.
4. Dang, N.C.; Moreno-García, M.N.; De la Prieta, F. Sentiment analysis based on deep learning: A comparative study. Electronics 2020, 9, 483.
5. Iqbal, T.; Qureshi, S. The survey: Text generation models in deep learning. Journal of King Saud University-Computer and Information Sciences 2022, 34, 2515–2528.
6. Lapan, M. Deep Reinforcement Learning Hands-On: Apply modern RL methods, with deep Q-networks, value iteration, policy gradients, TRPO, AlphaGo Zero and more; Packt Publishing Ltd, 2018.
7. Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review 1958, 65, 386.
8. Chatterjee, N.; Sinha, S. Understanding the mind of a worm: Hierarchical network structure underlying nervous system function in C. elegans. Progress in Brain Research 2007, 168, 145–153.
9. Cleeremans, A.; McClelland, J.L. Learning the structure of event sequences. Journal of Experimental Psychology: General 1991, 120, 235.
10. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536.
11. LeCun, Y.; Touresky, D.; Hinton, G.; Sejnowski, T. A theoretical framework for back-propagation. In Proceedings of the 1988 Connectionist Models Summer School, San Mateo, CA, USA, 1988; Vol. 1, pp. 21–28.
12. Rumelhart, D.E.; McClelland, J.L.; PDP Research Group. Parallel distributed processing. Foundations 1988, 1.
13. Novak, D.; Mihelj, M.; Munih, M. A survey of methods for data fusion and system adaptation using autonomic nervous system responses in physiological computing. Interacting with Computers 2012, 24, 154–172.
14. Morgenstern, Y.; Rostami, M.; Purves, D. Properties of artificial networks evolved to contend with natural spectra. Proceedings of the National Academy of Sciences 2014, 111, 10868–10872.
15. Zhao, C.W.; Daley, M.J.; Pruszynski, J.A. Neural network models of the tactile system develop first-order units with spatially complex receptive fields. PLoS ONE 2018, 13, e0199196.
16. Cohen, J.D.; Dunbar, K.; McClelland, J.L. On the control of automatic processes: A parallel distributed processing account of the Stroop effect. Psychological Review 1990, 97, 332.
17. Young, E.D. Parallel processing in the nervous system: Evidence from sensory maps. Proceedings of the National Academy of Sciences 1998, 95, 933–934.
18. Mpitsos, G.J.; Cohan, C.S. Convergence in a distributed nervous system: Parallel processing and self-organization. Journal of Neurobiology 1986, 17, 517–545.
19. Sanes, D.H.; Reh, T.A.; Harris, W.A. Development of the Nervous System; Academic Press, 2011.
20. Cantile, C.; Youssef, S. Nervous system. In Jubb, Kennedy & Palmer’s Pathology of Domestic Animals: Volume 1.
21. Bliss, T.V.; Cooke, S.F. Long-term potentiation and long-term depression: A clinical perspective. Clinics 2011, 66, 3–17.
22. Cooke, S.F.; Bliss, T.V. Plasticity in the human central nervous system. Brain 2006, 129, 1659–1673.
23. Fields, R.D. A new mechanism of nervous system plasticity: Activity-dependent myelination. Nature Reviews Neuroscience 2015, 16, 756–767.
24. Chen, W.; An, J.; Li, R.; Li, W. Review on deep-learning-based cognitive computing. Acta Automatica Sinica 2017, 43, 1886–1897.
25. Sengupta, B.; Stemmler, M.B.; Friston, K.J. Information and efficiency in the nervous system—a synthesis. PLoS Computational Biology 2013, 9, e1003157.
26. Anthony, L.F.W.; Kanding, B.; Selvan, R. Carbontracker: Tracking and predicting the carbon footprint of training deep learning models. arXiv 2020, arXiv:2007.03051.
27. Panda, P.; Sengupta, A.; Roy, K. Conditional deep learning for energy-efficient and enhanced pattern recognition. In Proceedings of the 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE); IEEE, 2016; pp. 475–480.