Submitted: 28 October 2024
Posted: 30 October 2024
Abstract
Keywords
1. Introduction
2. The Original Cognitive Architecture
3. Related Work
3.1. Current State-Of-The-Art
3.2. Alternative Models
4. The Memory Model
4.1. Memory Model Levels
- (1) The lowest level is an n-gram structure consisting only of sets of links between every source concept that has been stored. The links describe every possible route through the source concept sequences, but are unweighted (see the sketch after this list).
- (2) The middle level is an ontology that aggregates the source data through 3 phases, converting it from set-based sequences into type-based clusters.
- (3) The upper level combines the functional properties of the brain, with whatever input and resulting conversions they produce being stored in the same memory substrate. The conversions are:
  - Experience to knowledge.
  - Knowledge to knowledge.
  - Knowledge to experience.
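Below is a minimal sketch of the lowest level, assuming that concepts are word tokens and that storing a sequence adds an unweighted link between each adjacent concept pair. The class name `NGramLinks` and its methods are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

class NGramLinks:
    """Lowest memory level: unweighted links between stored source concepts."""

    def __init__(self):
        # concept -> set of concepts that have followed it in a stored sequence
        self.links = defaultdict(set)

    def store(self, sequence):
        """Store a source concept sequence as a set of unweighted links."""
        for a, b in zip(sequence, sequence[1:]):
            self.links[a].add(b)

    def routes(self, start, length):
        """Enumerate possible routes of the given length through the links."""
        if length == 0:
            yield [start]
            return
        for nxt in self.links[start]:
            for rest in self.routes(nxt, length - 1):
                yield [start] + rest

# Usage: two sequences sharing the concept 'water' create new cross routes.
mem = NGramLinks()
mem.store(["bring", "water", "boil"])
mem.store(["cold", "water", "cover"])
print(list(mem.routes("bring", 2)))  # routes via 'water' to 'boil' or 'cover'
```

Because the links are unweighted and shared, sequences that pass through a common concept automatically open new possible routes, which matches the description above of the links encoding every possible route.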
4.2. Lower Memory Level
4.3. Middle Ontology Level
4.4. Ontology Tests
5. The Neural Level
5.1. Function Identity
5.2. Function Structure
5.3. Index Types
- Unipolar Type: this has a list of index terms that is generally longer and is unordered. It can be matched with any sequence in the input set, but with only 1 sequence.
- Bipolar Type: this has a list of index terms and a related feature set. The index terms should match only 1 sequence, and some of the feature values should also match that sequence. This matching must be in order: the order in the feature set is repeated in the sequence. The rest of the feature values can then match any other sequence, in any order.
- Pyramidal Type: this has a list of index terms and a related feature set, but the index terms are split over 2 specific sequences. Both the index terms and the related feature set should match those 2 sequences, and the matching should be ordered in both. A sketch of all three matching rules follows this list.
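As a hedged illustration of the three matching rules, the sketch below assumes that sequences and index terms are lists of words, and it reads "some of the feature values" as a leading subset of the feature list. The helper and function names are hypothetical, not the paper's implementation.

```python
from typing import List, Optional, Sequence, Tuple

def contains_unordered(seq: Sequence[str], terms: List[str]) -> bool:
    """All terms occur somewhere in the sequence, in any order."""
    return all(t in seq for t in terms)

def contains_ordered(seq: Sequence[str], terms: List[str]) -> bool:
    """All terms occur in the sequence in the same relative order."""
    it = iter(seq)
    return all(t in it for t in terms)

def unipolar_match(index_terms, seqs) -> Optional[int]:
    """Unipolar: the unordered index terms must all fall inside 1 sequence."""
    for i, s in enumerate(seqs):
        if contains_unordered(s, index_terms):
            return i
    return None

def bipolar_match(index_terms, features, seqs) -> Optional[int]:
    """Bipolar: index terms match 1 sequence; a leading part of the feature
    set matches that sequence in order; the remaining feature values may
    match any other sequence, in any order."""
    for i, s in enumerate(seqs):
        if not contains_unordered(s, index_terms):
            continue
        k = 1  # find the longest feature prefix that matches s in order
        while k <= len(features) and contains_ordered(s, features[:k]):
            k += 1
        head, rest = features[:k - 1], features[k - 1:]
        others = [t for j, s2 in enumerate(seqs) if j != i for t in s2]
        if head and contains_unordered(others, rest):
            return i
    return None

def pyramidal_match(terms_a, terms_b, feats_a, feats_b, seqs) -> Optional[Tuple[int, int]]:
    """Pyramidal: index terms and features are split over 2 specific
    sequences, and both halves must match their sequence in order."""
    for i, s1 in enumerate(seqs):
        for j, s2 in enumerate(seqs):
            if i == j:
                continue
            if (contains_ordered(s1, terms_a) and contains_ordered(s2, terms_b)
                    and contains_ordered(s1, feats_a)
                    and contains_ordered(s2, feats_b)):
                return i, j
    return None
```

The split of the pyramidal feature set into two halves (`feats_a`, `feats_b`) is one reading of "match with 2 specific sequences"; the paper may divide the feature values differently.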
6. Ordinal Learning
6.1. Ordinal Tests
7. Some Biological Comparisons
7.1. Gestalt Psychology
7.1.1. Example of a Gestalt Process
7.2. Brain Evolution
7.3. Small Changes
8. Conclusions and Future Work
Appendix A – Upper Ontology Trees for Book Texts
| Clusters |
| --- |
| thou |
| love, o, thy |
| romeo, shall |
| death, eye, hath |
| day, give, lady, make, one, out, up, well |
| go, good, here, ill, night, now |
| come, thee |
| man, more, tybalt |
| Clusters |
| --- |
| dorothy |
| asked, came, see |
| city, emerald |
| great, oz |
| again, answered, away, before, down, made, now, shall, toto, up |
| scarecrow |
| lion, woodman |
| back, come, girl, go, green, head, heart, man, one, over, upon, very, witch |
| little, out, tin |
| Clusters |
| --- |
| back, before, came |
| down, know |
| more, room, think, well |
| day, eye, face, found, matter, tell |
| upon |
| holmes, very |
| little, man, now |
| one |
| away, case, good, heard, house, much, nothing, quite, street, such, through, two, ye |
| go, here |
| come, hand, over, shall, time |
| asked, never |
| door, saw |
| mr, see |
| out, up |
| made, way |
| Clusters |
| --- |
| answer, computer, man, question, think |
| machine |
| one |
| such |
Appendix B – Documents and Test Results for the Neural-Level Sorting
| Train File – Hard-Boiled Egg |
| --- |
| Place eggs at the bottom of a pot and cover them with cold water. Bring the water to a boil, then remove the pot from the heat. Let the eggs sit in the hot water until hard-boiled. Remove the eggs from the pot and crack them against the counter and peel them with your fingers. |
| Train File – Panna Cotta |
| --- |
| For the panna cotta, soak the gelatine leaves in a little cold water until soft. Place the milk, cream, vanilla pod and seeds and sugar into a pan and bring to a simmer. Remove the vanilla pod and discard. Squeeze the water out of the gelatine leaves, then add to the pan and take off the heat. Stir until the gelatine has dissolved. Divide the mixture among four ramekins and leave to cool. Place into the fridge for at least an hour, until set. For the sauce, place the sugar, water and cherry liqueur into a pan and bring to the boil. Reduce the heat and simmer until the sugar has dissolved. Take the pan off the heat and add half the raspberries. Using a hand blender, blend the sauce until smooth. Pass the sauce through a sieve into a bowl and stir in the remaining fruit. To serve, turn each panna cotta out onto a serving plate. Spoon over the sauce and garnish with a sprig of mint. Dust with icing sugar. |
| Test File – Hard-Boiled Egg and Panna Cotta |
| --- |
| Remove the vanilla pod and discard. For the panna cotta, soak the gelatine leaves in a little cold water until soft. As soon as they are cooked drain off the hot water, then leave them in cold water until they are cool enough to handle. Squeeze the water out of the gelatine leaves, then add to the pan and take off the heat. Spoon over the sauce and garnish with a sprig of mint. Stir until the gelatine has dissolved. Place the eggs into a saucepan and add enough cold water to cover them by about 1cm. Pass the sauce through a sieve into a bowl and stir in the remaining fruit. Divide the mixture among four ramekins and leave to cool. Place into the fridge for at least an hour, until set. To peel them crack the shells all over on a hard surface, then peel the shell off starting at the wide end. For the sauce, place the sugar, water and cherry liqueur into a pan and bring to the boil. Place the milk, cream, vanilla pod and seeds and sugar into a pan and bring to a simmer. Reduce the heat and simmer until the sugar has dissolved. Take the pan off the heat and add half the raspberries. Using a hand blender, blend the sauce until smooth. Bring the water up to boil then turn to a simmer. To serve, turn each panna cotta out onto a serving plate. Dust with icing sugar. |
| Selected Sequences from the Hard-Boiled Egg Function |
| --- |
| [place, the, eggs, into, a, saucepan, and, add, enough, cold, water, to, cover, them, by, about] [bring, the, water, up, to, boil, then, turn, to, a, simmer] [as, soon, as, they, are, cooked, drain, off, the, hot, water, then, leave, them, in, cold, water, until, they, are, cool, enough, to, handle] [to, peel, them, crack, the, shells, all, over, on, a, hard, surface, then, peel, the, shell, off, starting, at, the, wide, end] |
| Selected Sequences from the Panna Cotta Function |
| --- |
| [for, the, panna, cotta, soak, the, gelatine, leaves, in, a, little, cold, water, until, soft] [place, the, milk, cream, vanilla, pod, and, seeds, and, sugar, into, a, pan, and, bring, to, the, boil] [remove, the, vanilla, pod, and, discard] [squeeze, the, water, out, of, the, gelatine, leaves, then, add, to, the, pan, and, take, off, the, heat] [stir, until, the, gelatine, has, dissolved] [divide, the, mixture, among, four, ramekins, and, leave, to, cool] [place, into, the, fridge, for, at, least, an, hour, until, set] [for, the, sauce, place, the, sugar, water, and, cherry, liqueur, into, a, pan, and, bring, to, the, boil] [reduce, the, heat, and, simmer, until, the, sugar, has, dissolved] [take, the, pan, off, the, heat, and, add, half, the, raspberries] [using, a, hand, blender, blend, the, sauce, until, smooth] [pass, the, sauce, through, a, sieve, into, a, bowl, and, stir, in, the, remaining, fruit] [to, serve, turn, each, panna, cotta, out, onto, a, serving, plate] [spoon, over, the, sauce, and, garnish, with, a, sprig, of, mint] [dust, with, icing, sugar] |
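As a rough illustration of the sorting task in this appendix, the sketch below assigns each test sentence to whichever train file shares the most vocabulary with it. This is only a toy stand-in for the neural-level functions; `assign_sentences` and the overlap score are assumptions, not the paper's method.

```python
import re

def tokenize(text):
    """Lowercase word tokens, in the style of the bracketed sequences above."""
    return re.findall(r"[a-z]+", text.lower())

def assign_sentences(test_text, train_docs):
    """Assign each test sentence to the train file with the largest
    vocabulary overlap (a crude stand-in for the neural-level sorting)."""
    vocabs = {name: set(tokenize(doc)) for name, doc in train_docs.items()}
    result = {name: [] for name in train_docs}
    for sent in re.split(r"(?<=[.!?])\s+", test_text.strip()):
        words = set(tokenize(sent))
        best = max(vocabs, key=lambda n: len(words & vocabs[n]))
        result[best].append(sent)
    return result

# Usage with the files above (texts supplied as plain strings):
# sorted_out = assign_sentences(test_text, {"egg": egg_text, "panna": panna_text})
```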



