Version 1
: Received: 21 January 2024 / Approved: 23 January 2024 / Online: 23 January 2024 (09:56:23 CET)
Version 2
: Received: 29 January 2024 / Approved: 30 January 2024 / Online: 30 January 2024 (05:02:04 CET)
How to cite:
Monk, B.; Meyer, T.; Ngo, M. Language Models Do Not Possess the Uniquely Human Cognitive Ability of Relational Abstraction. Preprints 2024, 2024011681. https://doi.org/10.20944/preprints202401.1681.v2
APA Style
Monk, B., Meyer, T., & Ngo, M. (2024). Language Models Do Not Possess the Uniquely Human Cognitive Ability of Relational Abstraction. Preprints. https://doi.org/10.20944/preprints202401.1681.v2
Chicago/Turabian Style
Monk, B., Timothy Meyer, and Mary Ngo. 2024. "Language Models Do Not Possess the Uniquely Human Cognitive Ability of Relational Abstraction." Preprints. https://doi.org/10.20944/preprints202401.1681.v2
Abstract
Humans have a unique cognitive ability known as relational abstraction, which allows us to identify logical rules and patterns beyond basic perceptual characteristics. This ability represents a key difference between how humans and other animals learn and interact with the world. With current large language models rivaling human intelligence in many regards, we investigated whether relational abstraction emerges as a capability in various chat-completion models. We find that despite their impressive language processing skills, all tested language models failed the relational match-to-sample (RMTS) test, a benchmark for assessing relational abstraction. These results challenge the assumption that advanced language skills inherently confer the capacity for complex relational reasoning. The paper highlights the need for a broader evaluation of AI cognitive abilities, emphasizing that language proficiency alone may not be indicative of certain higher-order cognitive processes thought to be supported by language.
Keywords
Language models; LLM; human cognition; relational abstraction; comparative cognition; artificial intelligence; AI; AGI; benchmarks; natural language processing; generative AI
Subject
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright:
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.