Submitted: 25 April 2025
Posted: 28 April 2025
Abstract
Keywords:
1. Introduction
2. The Original Cognitive Architecture
3. Related Work
3.1. Current State-of-the-Art
3.2. Alternative Models
4. The Memory Model
4.1. Memory Model Levels
- (1) The lowest level is an n-gram structure consisting only of sets of links between every source concept that has been stored. The links describe all of the possible routes through the source concept sequences, but they are unweighted (a minimal code sketch of this link structure follows the list).
- (2) The middle level is an ontology that aggregates the source data through three phases, converting it from set-based sequences into type-based clusters.
- (3) The upper level is a combination of the functional properties of the brain, with whatever input and resulting conversions they produce being stored in the same memory substrate:
  - Experience to knowledge.
  - Knowledge to knowledge.
  - Knowledge to experience.
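To make the lowest level more concrete, the following sketch stores each source concept sequence as unweighted links between consecutive concepts, so that every stored sequence remains a traversable route. It is an illustration only, with hypothetical class and method names, and is not the implementation used in the paper.

```python
# Minimal sketch of the lower memory level: unweighted sets of links between
# consecutive source concepts. Names and data layout are illustrative only.
from collections import defaultdict


class LowerLevel:
    def __init__(self):
        # each concept maps to the set of concepts that have ever followed it
        self.links = defaultdict(set)

    def store(self, sequence):
        """Add the links of one source concept sequence (no counts are kept)."""
        for a, b in zip(sequence, sequence[1:]):
            self.links[a].add(b)

    def routes_from(self, concept):
        """Concepts reachable in one step from the given concept."""
        return self.links[concept]


memory = LowerLevel()
memory.store(["place", "the", "eggs", "into", "a", "saucepan"])
memory.store(["place", "the", "milk", "into", "a", "pan"])
print(memory.routes_from("the"))  # {'eggs', 'milk'}
```

Because only link sets are kept, the structure records which routes are possible but not how often they occur.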

4.2. Lower Memory Level
4.3. Middle Ontology Level
4.4. Ontology Tests
4.5. Unit of Work
5. The Neural Level
5.1. Function Identity
5.2. Function Structure
5.3. Index Types
- Unipolar Type: this has a list of index terms that is generally somewhat longer and is unordered. It can be matched against any sequence in the input set, but against only one sequence.
- Bipolar Type: this has a list of index terms and a related feature set. The index terms should match only one sequence, and some of the feature values should also match that sequence, in order, so that the order in the feature is repeated in the sequence. The remaining feature values can then match any other sequence, in any order.
- Pyramidal Type: this has a list of index terms and a related feature set, but the index terms are split over two specific sequences. Both the index terms and the related feature set should match the two specific sequences, and the matching should be ordered in both. A simplified sketch of these three matching rules is given after this list.
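The following sketch shows one way the three matching rules might be checked. It is a simplified reading of the descriptions above, not the implementation in the paper: the function names, the split of the bipolar feature into an ordered part and a free part, and the assumption that pyramidal index terms and feature values appear in a single interleaved order are all illustrative choices.

```python
# Hedged sketch of the three index-type matching rules; illustrative only.

def ordered_subsequence(terms, sequence):
    """True if all terms occur in the sequence in the given relative order."""
    it = iter(sequence)
    return all(term in it for term in terms)


def unipolar_match(index_terms, sequences):
    """Unordered index terms must all appear within a single sequence."""
    return any(set(index_terms) <= set(seq) for seq in sequences)


def bipolar_match(index_terms, ordered_part, free_part, sequences):
    """Index terms match one sequence, ordered_part repeats its order in that
    same sequence, and free_part may match any other sequence in any order."""
    for i, seq in enumerate(sequences):
        if set(index_terms) <= set(seq) and ordered_subsequence(ordered_part, seq):
            other_tokens = {t for j, s in enumerate(sequences) if j != i for t in s}
            if set(free_part) <= other_tokens:
                return True
    return False


def pyramidal_match(index_a, feature_a, index_b, feature_b, seq_a, seq_b):
    """Index terms and feature values are split over two specific sequences
    and must match in order within each of them."""
    return (ordered_subsequence(index_a + feature_a, seq_a)
            and ordered_subsequence(index_b + feature_b, seq_b))


sequences = [["place", "the", "eggs", "into", "a", "saucepan"],
             ["bring", "the", "water", "to", "a", "boil"]]
print(unipolar_match(["eggs", "saucepan"], sequences))                      # True
print(bipolar_match(["eggs"], ["place", "saucepan"], ["boil"], sequences))  # True
```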
6. Ordinal Learning
6.1. Ordinal Tests
7. Some Biological Comparisons
7.1. Gestalt Psychology
7.1.1. Numbers in the Gestalt Process
7.2. Animal Brain Function
7.2.1. Theory of Small Changes
8. Conclusions and Future Work
Appendix A. Upper Ontology Trees for Book Texts
| Clusters |
| thou |
| love, o, thy |
| romeo, shall |
| death, eye, hath |
| day, give, lady, make, one, out, up, well |
| go, good, here, ill, night, now |
| come, thee |
| man, more, tybalt |
| Clusters |
| dorothy |
| asked, came, see |
| city, emerald |
| great, oz |
| again, answered, away, before, down, made, now, shall, toto, up |
| scarecrow |
| lion, woodman |
| back, come, girl, go, green, head, heart, man, one, over, upon, very, witch |
| little, out, tin |
| Clusters |
| back, before, came |
| down, know |
| more, room, think, well |
| day, eye, face, found, matter, tell |
| upon |
| holmes, very |
| little, man, now |
| one |
| away, case, good, heard, house, much, nothing, quite, street, such, through, two, ye |
| go, here |
| come, hand, over, shall, time |
| asked, never |
| door, saw |
| mr, see |
| out, up |
| made, way |
| Clusters |
| answer, computer, man, question, think |
| machine |
| one |
| such |
Appendix B. Documents and Test Results for the Neural-Level Sorting
| Train File – Hard-Boiled Egg |
| Place eggs at the bottom of a pot and cover them with cold water. Bring the water to a boil, then remove the pot from the heat. Let the eggs sit in the hot water until hard-boiled. Remove the eggs from the pot and crack them against the counter and peel them with your fingers. |
| Train File – Panna Cotta |
| For the panna cotta, soak the gelatine leaves in a little cold water until soft. Place the milk, cream, vanilla pod and seeds and sugar into a pan and bring to a simmer. Remove the vanilla pod and discard. Squeeze the water out of the gelatine leaves, then add to the pan and take off the heat. Stir until the gelatine has dissolved. Divide the mixture among four ramekins and leave to cool. Place into the fridge for at least an hour, until set. For the sauce, place the sugar, water and cherry liqueur into a pan and bring to the boil. Reduce the heat and simmer until the sugar has dissolved. Take the pan off the heat and add half the raspberries. Using a hand blender, blend the sauce until smooth. Pass the sauce through a sieve into a bowl and stir in the remaining fruit. To serve, turn each panna cotta out onto a serving plate. Spoon over the sauce and garnish with a sprig of mint. Dust with icing sugar. |
| Test File – Hard-Boiled Egg and Panna Cotta |
| Remove the vanilla pod and discard. For the panna cotta, soak the gelatine leaves in a little cold water until soft. As soon as they are cooked drain off the hot water, then leave them in cold water until they are cool enough to handle. Squeeze the water out of the gelatine leaves, then add to the pan and take off the heat. Spoon over the sauce and garnish with a sprig of mint. Stir until the gelatine has dissolved. Place the eggs into a saucepan and add enough cold water to cover them by about 1cm. Pass the sauce through a sieve into a bowl and stir in the remaining fruit. Divide the mixture among four ramekins and leave to cool. Place into the fridge for at least an hour, until set. To peel them crack the shells all over on a hard surface, then peel the shell off starting at the wide end. For the sauce, place the sugar, water and cherry liqueur into a pan and bring to the boil. Place the milk, cream, vanilla pod and seeds and sugar into a pan and bring to a simmer. Reduce the heat and simmer until the sugar has dissolved. Take the pan off the heat and add half the raspberries. Using a hand blender, blend the sauce until smooth. Bring the water up to boil then turn to a simmer. To serve, turn each panna cotta out onto a serving plate. Dust with icing sugar. |
| Selected Sequences from the Hard-Boiled Egg Function |
| [place, the, eggs, into, a, saucepan, and, add, enough, cold, water, to, cover, them, by, about] [bring, the, water, up, to, boil, then, turn, to, a, simmer] [as, soon, as, they, are, cooked, drain, off, the, hot, water, then, leave, them, in, cold, water, until, they, are, cool, enough, to, handle] [to, peel, them, crack, the, shells, all, over, on, a, hard, surface, then, peel, the, shell, off, starting, at, the, wide, end] |
| Selected Sequences from the Panna Cotta Function |
| [for, the, panna, cotta, soak, the, gelatine, leaves, in, a, little, cold, water, until, soft] [place, the, milk, cream, vanilla, pod, and, seeds, and, sugar, into, a, pan, and, bring, to, the, boil] [remove, the, vanilla, pod, and, discard] [squeeze, the, water, out, of, the, gelatine, leaves, then, add, to, the, pan, and, take, off, the, heat] [stir, until, the, gelatine, has, dissolved] [divide, the, mixture, among, four, ramekins, and, leave, to, cool] [place, into, the, fridge, for, at, least, an, hour, until, set] [for, the, sauce, place, the, sugar, water, and, cherry, liqueur, into, a, pan, and, bring, to, the, boil] [reduce, the, heat, and, simmer, until, the, sugar, has, dissolved] [take, the, pan, off, the, heat, and, add, half, the, raspberries] [using, a, hand, blender, blend, the, sauce, until, smooth] [pass, the, sauce, through, a, sieve, into, a, bowl, and, stir, in, the, remaining, fruit] [to, serve, turn, each, panna, cotta, out, onto, a, serving, plate] [spoon, over, the, sauce, and, garnish, with, a, sprig, of, mint] [dust, with, icing, sugar] |
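As a rough illustration of the sorting task that this appendix documents, the sketch below assigns each sentence of the mixed test file to the recipe whose training vocabulary it overlaps most. This is a simple bag-of-words baseline for exposition only, not the neural index matching of Section 5, and the function names and tokenisation are assumptions.

```python
# Bag-of-words baseline for the Appendix B sorting task; illustrative only.
import re


def tokens(text):
    return re.findall(r"[a-z]+", text.lower())


def sort_sentences(test_text, train_texts):
    """Assign each test sentence to the training document it overlaps most."""
    vocab = {name: set(tokens(doc)) for name, doc in train_texts.items()}
    assigned = {name: [] for name in train_texts}
    for sentence in re.split(r"(?<=\.)\s+", test_text.strip()):
        words = set(tokens(sentence))
        best = max(vocab, key=lambda name: len(words & vocab[name]))
        assigned[best].append(sentence)
    return assigned


train = {
    "hard_boiled_egg": "Place eggs at the bottom of a pot and cover them with "
                       "cold water. Bring the water to a boil.",
    "panna_cotta": "For the panna cotta, soak the gelatine leaves in a little "
                   "cold water until soft. Remove the vanilla pod and discard.",
}
mixed = ("Remove the vanilla pod and discard. "
         "Bring the water up to boil then turn to a simmer.")
print(sort_sentences(mixed, train))
```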


