Conception of Intelligence and Some Misconceptions Concerning Artificial Intelligence

Submitted: 22 May 2025; Posted: 23 May 2025


Abstract
Today's robots, imbued with artificial states of cognition, are nothing but intelligent machines without mindfulness. These systems are clever replicas of human agents, but they lack the sheer power of true human cognition and consciousness; they are simply "automata". We believe that mere intelligence is not akin to conscious awareness. Nothing yet matches the power of human creativity, thoughtfulness, and imagination, nor are these artificial beings capable of eliciting true human emotions, at least for the time being. In this paper, we undertake a critique of AI, clarifying its concepts by examining the myths and misconceptions surrounding artificial intelligent systems and the systems that run on AI. We attempt to demystify the false notions that cloud our perceptions of the potential of artificial general intelligence. Our thinking is aligned with the current goal of embodying machines with conscious behavior, grounded in the philosophical foundations of embodied capacities beyond learning and language processing. To this end, we present the views we deem relevant to the emerging confusions and rat races in the AI industry regarding the current state of development and design of machine consciousness.

1. Introduction

Artificial general intelligence (AGI) has already changed the general landscape of machine learning and artificial intelligence (Montemayor, 2024). Many technologies and operations are now becoming reliant on it. AI-based tools and systems are being adopted in many disciplines to explore their roles and functionalities, and deployed elsewhere for various other purposes to aid human welfare and wellbeing. So far, so good. But how far can it go? Today, real-world artificial beings are being conceived that could lead to further evolution in machine intelligence. But would that lead to machines becoming conscious? Would AI-based systems replace human teachers? Is an AI system's proficiency unlimited? Can such systems solve all human problems? These are some of the myths associated with the adoption of AI tools and their widespread implications, and they should be assessed against reality (Giray, 2024). Debunking myths about AI is a necessity, since AI now pervades almost every aspect of our lives (Bewersdorff et al., 2023). A great many half-truths circulate concerning the potential of AI-based tools, their applications, and the threats that follow the adoption of AI systems for human welfare and benefit.
Again, would AI, in the future, dominate the destiny of the world and control human fate (Zaman, 2024)? These are possibilities that stretch the boundaries of human imagination when such dreadful outcomes are envisioned. The prospect of fully conscious machines evolving to control our destiny and fate also strikes at the core of our existential anxieties (Davis, 2022; Zaman, 2024). In reality, it is only a far-fetched possibility, too incredible to believe. And yet we must progress towards any such possibility with great caution.
The foremost argument against such a dreadful future for humanity is this: AI-driven machines, including robots, are not in essence sentient beings. They are mindless bots; they have no feelings of their own; they cannot sense emotions or sensations; and they do not actually learn or remember, but are driven by programmed instructions whose outputs appear as emergent phenomena. It has simply not been possible to embody machines with the capacity to think on their own, nor do they possess any feelings of their own. The subjective world of experience and emotion remains intriguing to us and out of reach for machines: despite all attempts to model subjective states in machines, it has not worked out. Or has it? Let us discuss.
We can see with the mind's eye (Dennett, 1975), and we can imagine or even daydream, but we still cannot see the mind, which remains unobservable and elusive (except in action), or come to know what it really means to be conscious in a purely scientific sense. It is still difficult to crack the code of the wired brain and see the mind at work. We cannot trap the mind. The reason is that various operations run in unison (Dixey & Purser, 2023), and it is therefore difficult to pinpoint the exact seat the mind takes in the brain. But we can observe many of the mental responses generated as behaviors and actions. So the human mind, in essence, still eludes us, although many of its elements and functions in states of sanity and insanity have been extensively studied using the tools of psychology and neuroscience (Kandel and Squire, 2000; Kievit et al., 2011). Many scientific barriers have been broken down to understand the human mind in scientific terms. We are aware of different kinds of minds, and of their reflections through mental phenomena eliciting the myriad behaviors characteristic of a human being. But what about an "artificial mind" designed to give a machine true emotive behaviors? It has not been achieved yet. The current robots designed to function on algorithms are clever but mindless entities (Navon, 2024). These machines are encoded with the language of artificial consciousness based on Large Language Models, or in Chalmers's (2023) words, extended Large Language Models (henceforth LLMs). Their intelligence is derived from programming, which enables machines to reason, compute probabilities based on assigned weights, and generate responses. Their processing units are unlike neuronal centres: they are neuron-like units, but not neurons. Yet they are capable of processing large chunks of data and information faster than human beings, retaining it better, and correctly recalling stored information more quickly. Some attempts are under way to empower machines with capacities beyond language processing and text response generation (Chalmers, 2023). And yet they remain mindless, because they are very different from a true sentient being that has a mind of its own and its own subjective responses and feelings: the human mind. Human consciousness is multimodal in nature: the human mind can receive inputs from various sources and generate outputs both in mental formats (ideas and imaginings) and in physical formats (actions and behaviors), including those we identify as organic emotional responses.
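To make this concrete, consider a minimal sketch, in Python, of the kind of weighted probability computation we describe. The vocabulary and the raw scores (logits) below are our own inventions for illustration; a real LLM operates over tens of thousands of tokens and billions of parameters, but the underlying act of converting weights into probabilities and sampling a response is the same.

    import math
    import random

    def softmax(logits):
        """Convert raw scores into a probability distribution."""
        m = max(logits)                      # subtract max for numerical stability
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical vocabulary and logits a trained model might assign
    # after the prompt "The sky is" -- the numbers are made up.
    vocab = ["blue", "falling", "clear", "angry"]
    logits = [4.2, 0.3, 2.9, -1.0]

    probs = softmax(logits)
    next_token = random.choices(vocab, weights=probs, k=1)[0]
    print(dict(zip(vocab, [round(p, 3) for p in probs])), "->", next_token)

Nothing in this loop understands "sky" or "blue"; the response is a draw from a probability distribution fixed by the weights.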
Now, the human mind is like animal minds in some respects, differs from them in many others, and is nothing like robots in any respect. What is recognisable in a human mind is its brainpower, that is, the ability to think and to worry about its future. Worrying is a mode of thought. The latter feature is uncharacteristic of most animal minds, save some (even animals worry about their offspring's safety, and they do think about foraging when forage becomes scarce; see, for example, Gordon, 1989), and it is the very feature we have become eager to design into an "artificial mind" in a robot, characterising machine mental states. For the time being, machines do not have any mental states. All AI-based robots can be described, more or less, as Descartes' "automata" (see Hatfield, 2008).
In this paper, we endeavor to embark on a journey to design true minds having artificial emotional states, reflected as conscious intelligence systems (CIS) which, though artificial, might elicit proper behavioral responses far better than anything conceived to date. Our thinking is aligned with the current goal of embodying machines with conscious behavior, grounded in the philosophical foundations of embodied capacities beyond learning and language processing. To this end, we present the design of a model built on the principles of the metaphysics of noesis, which provides a meta-noetic foundation for the work. We tag this endeavor Artificial Noetic Consciousness (ANC).

2. The Artificial Mind: Artificial Consciousness

The most advanced robots of this day are endowed with the capacity to reason and act, and yet they are mindless entities. They mindlessly do their work according to what has been programmed and instructed, mostly driven by algorithms and in some part reliant on unsupervised learning modules that enable them to learn on their own. But without training to empower them, these machines are virtually useless artifacts. With learning and training they become clever and intelligent, but they remain mindless (Dennett, 1996; Navon, 2024).
With the help of deep technological innovations in particle physics providing the foundations for the most powerful particle accelerators (see Wille, 2000), scientists have been able to crack the code of subatomic physics, within whose realms lie many secrets and many surprises. The "tiniest" subatomic particles obtained after rigorous experiments with the particle accelerators at CERN—quarks and leptons (see Juilliar, 2024)—are ubiquitous, but "observable." Yet even with the best supercomputers at our disposal and the existing brain imaging techniques, the human mind still eludes us and remains unobservable. The elements of the mind, in the words of Daniel Dennett (1996), are woven in a complex fabric from many different kinds of strands. But what strands? And should such a mind ever be conceived in a machine, it would differ from that of a human being—but in how many respects?
In this paper, we shall discuss the elements of the mind in relation to robots, which are now being fashioned in the image of their creator, the human being. We call these entities "artificial conscious beings": intelligent and rational, but mindless. We will also outline the methods that could truly endow robots with their own states of awareness, rendering them sentient, responsive, and emotional machines. This is one of the most difficult and challenging tasks for machine designers to accomplish.

3. Machine Consciousness: A Technological Challenge?

We may revisit, and look intently for an answer to, the problem once discussed by Chella and Manzotti (2009): is machine consciousness a technological challenge? Can robots have conscious states of their own? Or will they be designed as intelligent and rational but mindless artificial beings? The problem of the minds of machines raises many questions but answers few of them. We must first understand the relationship between machine codes, algorithms, learning modules, and the processes that generate responses that are comparable to humans' but far less convincing to us.
Our ambitions concerning machine mental states must be stated clearly. Our concern about how the robots of the future will deal with human emotions—with human beings, after all—throws us back to the primordial soup containing the very ingredients of such a foundation. If such a foundation can help us construct an artificial mind capable in all respects of what we are capable of, it would, in essence, help dispel such needless doubts and worries. What remains is thus a possibility shrouded in technological challenge, and the answer would then lie within the "secret" of the method.
The "secret" method—the secret of embodying consciousness in machines—lies in the complex and integrated organisation of an "artificial brain," if that ever becomes a reality. What we encounter today as "robots" are not true sentient beings at all. Today, machines are programmed to think (compute), to make choices, and to produce acceptable responses that seem rational and, in many respects, intelligent. Indeed, today's smart robots are intelligent beings that can elicit certain behaviors as well. But where does such behavior originate? How does it originate? What are the fundamental principles that drive machines to elicit human-like responses and behaviors? The answer lies deep within the principles supporting "machine learning." Machine learning—including deep learning and Large Language Models (LLMs) and their extended versions—has made machines more capable of "understanding" what to do, how to react, and which means to use to achieve certain ends: generating responses, solving problems, answering questions, and so on. Of course, there must always be some means to attain specific ends as goals. Hence, machine learning is reactive and goal-oriented in nature; to some extent, it is teleology-driven.
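The teleological, means-to-ends character of machine learning can be illustrated with a toy reinforcement-learning sketch. The corridor world, reward, and hyperparameters below are our own illustrative inventions; the point is that the "goal" is fixed from outside and the "learning" is arithmetic adjustment toward it.

    import random

    # A minimal tabular Q-learning sketch on a toy 5-state corridor: the agent
    # is rewarded only for reaching state 4.
    N_STATES, GOAL = 5, 4
    ACTIONS = (-1, +1)                      # step left or right
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma = 0.5, 0.9                 # learning rate, discount factor

    for episode in range(200):
        s = 0
        while s != GOAL:
            a = random.choice(ACTIONS)      # explore at random (off-policy learning)
            s_next = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s_next == GOAL else 0.0
            # the "learning" is numeric adjustment toward an externally fixed goal
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in ACTIONS) - Q[(s, a)])
            s = s_next

    # the learned "policy": at every non-goal state, step right (+1) toward the reward
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)})

The agent ends up "pursuing" the goal, but the pursuit was designed into the reward signal; no end was chosen by the machine itself.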
And yet machines have not become thinking agents. They elicit behaviors without consciousness (Chella & Manzotti, 2009). So the question remains: under what conditions can consciousness emerge in intelligent machines? Many experts, including Lycan (1993), hold the view that if biological consciousness is the outcome of a living functional brain, then stringing neurons together in organised patterns could result in the emergence of consciousness. Neurons are the fibres of communication, since they transmit electrical potentials. The neural potential cannot be easily replicated in non-biological systems, although similarly large networks of artificial neurons constitute the brain behind machine intelligence. Whether such networks could generate conscious states in machines (or robots) is, in our view, a pointless debate—a debate inane in its very sense.
Constructing an artificial mind that elicits conscious behaviors in machines and gives them mental properties like those of humans—that is to say, generating "sense" out of "non-sensible" things from algorithms and information-processing units—is a matter of functionality; it relies upon the organisational unity of artificial elements of consciousness. It presents a technological challenge (Lee, 2006). It also implies the potential existence of a disembodied mind. Although much of it relies on learning and cognition, investigation into the real cause of the emergence of conscious states in machines is fraught with disagreement and bitter resentment among designers and proponents alike (see Dreyfus and Dreyfus, 1986; Moravec, 1999; Levy, 2009; Hildt, 2019). This raises another issue: what "tests" and techniques should be used to investigate the properties of machine mental states? The Turing test may serve the purpose, but it is no longer reliable or appropriate in the present context, for various objections to it have accumulated (see Copeland, 2004)—some sound, others with little or no merit whatsoever—and there is now growing evidence both for and against machine consciousness. One group believes that no true phenomenal conscious states like those of humans can be modelled in machines, and calls the enterprise a pseudoscience (see Garrido-Merchán, 2024); other groups have endeavoured to design machine minds with a view to embodying robots with conscious feelings of their own (see Krauss and Maier, 2020). Arguments exist both for and against the possibility of designing true conscious states in machines that would let robots have minds of their own and become mindful. The main hindrance seems to be a formidable technological challenge that puts the very feasibility in question (Lee, 2006; Aleksander, 2017).
Brain states do not "emerge": they are already there in an innate state and slowly unfold
The brain comes pre-wired at birth; conscious states slowly unravel in time as the child gains experience and learned behavior. Brain states do not, in that sense, emerge as has been thought before. They acquire an innate existence, pre-wired before birth, and conscious states are instantiated at birth. This is rationally a piece of scientific reasoning, considering that the architecture of the mind comes pre-designed on account of neurogenesis.
The entire emergence theory is a sort of booby trap for machine designers. The brain comes already wired, with its neural mechanisms in innate states. With growth, maturity, cognitive development, and learning, however, new behavioral states of the mind gradually evolve over time. This view nullifies our overreliance on the emergent theory of consciousness.
Let us state it clearly and concisely. We are not refuting the theory of emergence wholly, but in part. Complex systems do often generate complex properties found in no part alone. But the structure of the brain is innate, already determined by evolution and genetics acting in concert with the laws of natural dynamics. We know this because we are part of it. A cat cannot evolve into a mouse, nor a man into a tiger. Genetics (and mutational dynamics) determine what complex or simple things can evolve, and how complex or simple they can become. Now, is it possible to trace our mental structure back to pre-existing brain structures?
Counterpoint: what about prodigies who can solve higher calculus at the age of 5 without being trained in it by conventional methods of learning? That is not "developmental emergence" but "innate recurrence".
We must be cautious about claiming that our brain functions the way computers do. The causation of brain states—with the mind in incubation—is the biggest mystery, one that challenges science and philosophy alike.
First, there is no recursive calling of functions in the brain. The functions of the brain are stimulated by neural circuits, of which brain energetics is a special aspect controlling all the mechanisms that initiate voluntary, involuntary, and cognitive actions (motor and sensory commands). Even "feelings" are sensory, impulsive modes of the brain. Our brain has special control mechanisms to regulate impulses, but these sometimes go awry due to exogenous agents that deregulate the control mechanism: drugs, stimulants, psychedelics, depressants, and so on. A recursive function is optimally suited to a computer because it is algorithm-driven. There is no algorithm that drives neural functions. Neural functioning is spontaneous and teleological—and more than that, impulsive and need-driven.
Algorithms have their limitations. They are specific to particular functions and need recoding and upgrading before they can direct higher-order commands. Any emergence from recursive functioning is artificial and limited to a particular order of function. Recursive functions can be used, at best, to mimic, but not to create, emergent states of the mind.
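A trivial example shows what we mean by algorithm-driven recursion: a fixed rule with a fixed termination condition, deterministic down to the last call. Nothing about the brain's spontaneous, need-driven functioning resembles this.

    def factorial(n: int) -> int:
        """Algorithm-driven recursion: a fixed rule plus a fixed base case."""
        if n <= 1:          # explicit termination condition -- nothing "emerges" here
            return 1
        return n * factorial(n - 1)

    print(factorial(6))     # 720 -- the same input always yields the same output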
Hence, a machine mind modelled on recursive functions falls into a philosophical and practical trap: a booby trap. There is no unifying algorithm of the mind that could be designed to let consciousness emerge out of however complex a machine, for the reason that neural mechanisms can be neither explicitly decoded into a program nor reduced to symbolic functions.
Biological consciousness is non-algorithmic and spontaneous, and cannot be directly replicated in a computational architecture. The functional plasticity of the brain is varied, context-sensitive, and fluid. The idea that machine complexity will allow consciousness to emerge, given enough complexity, is shrouded in uncertainty, both philosophically and empirically (Searle, 1980; Chalmers, 1995a; Chalmers, 1995b).
The laws governing nature build, destroy, and rebuild; they evolve, degenerate, and regenerate. Brain cells do not exactly hold memories as patterns which can be recorded, decoded, or regenerated. In some respects they do, but in others the neural cells are "tuned" to detect patterns when they appear, or recur. An interplay of neuroendocrine processes and neurochemical transmitters is employed to create memories and store them.
The recursive law of emergence does not apply to the appearance of true human consciousness in machines. What machines do is raw computation, by all means, based on reasoning, logic, and statistical models employed to compute probabilities and match them accordingly to generate behaviors and responses. This is entirely different from the functioning of the human brain. There exist few reusable patterns in the brain, as research suggests. The ones we call canonical cortical microcircuits, responsible for recurrent neural activity (Capone et al., 2016; Yuste, 2018), are not like templates. The brain does not have fixed templates for cognition to operate upon and does not use any; the temporary firing of microcircuits merely gives us the notion that it has reusable patterns. The cognitive functions of the brain rely instead on context-dependent neural firing patterns (Tonegawa et al., 2018), where each context varies and evokes variable response patterns. Even Hebbian learning (Gerstner, 2011) has been questioned. Non-Hebbian learning mechanisms are now being revealed (Faress et al., 2024), such as inhibitory neural plasticity (Pang & Recanatesi, 2025), that counteract this theory of learning. Pang and Recanatesi (2025) describe a non-Hebbian code for episodic memory formation, well suited to encoding episodes reliant on path vectors. Of course, Hebbian plasticity strengthens the synaptic connections between repeatedly coactive neurons for memory formation, but it is not the only mechanism constituting the neural basis of memory formation.
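For readers unfamiliar with the Hebbian rule questioned here, a minimal sketch (learning rate and activity values are illustrative) shows its essence: the synaptic weight grows only when pre- and postsynaptic activity coincide.

    # A textbook Hebbian weight update -- "cells that fire together wire together."
    eta = 0.1                                   # learning rate (illustrative)

    def hebbian_update(w, pre, post):
        """Strengthen a synapse in proportion to coactivity of pre and post."""
        return w + eta * pre * post

    w = 0.2
    for pre, post in [(1.0, 1.0), (1.0, 1.0), (0.0, 1.0), (1.0, 0.0)]:
        w = hebbian_update(w, pre, post)
    print(round(w, 2))   # 0.4: only the coactive trials (1.0, 1.0) changed the weight

The rule's very simplicity is what the non-Hebbian findings cited above call into question as a complete account of memory.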
Neural plasticity does exist, but it is not always true that "cells that fire together will always do so". That oversimplifies what is in reality a dynamic function of the brain—call it "fleeting neurodynamics". Assuming canonical circuits would be highly misleading, since the exact timing, wiring, and contexts differ so widely between individuals that pinpointing a general-purpose circuit for everyone may be notional.
Human memories are not actually "woven" like computer memories, nor stored as data as they are in computers. The development of biological memory is impulsive and extemporaneous, encoded through neuroendocrine, neurochemical, and neuroelectrical processes. We can partly relate fractals to the development of memories, but that too has limitations, since memories do not proliferate like the branches of a tree. Memories are pure "subjective states" with a neural basis of origin. Memory formation is limited by our direct experiences, which become "knowledge" for the brain.
There are no repeats, no reduction of entropy, and no feedback loops that can be explicitly detected by the tools of neuroscience. (According to the Second Law of Thermodynamics, the total entropy of a closed system remains constant or increases. If the brain is considered an "open system", there is a local decrease of entropy in it due to metabolic effects, memory formation, the reorganisation of neural circuits, thinking and thought, learning, and so on. The brain can also lower entropy in another sense, that of Shannon's information theory: as more information is gained, uncertainty is reduced; pattern recognition saves energy and effort, reducing the workload of the brain and thereby its randomness and uncertainty.) Memories are stored as experience remembered in "subjective states" of qualia, as Daniel Dennett proposed. Only brain waves are discernible—evoked potentials that can be captured by brain imaging tools, including EEG. Nor can the amount of memory held in the brain be computed, since no parameters can be assigned with exactness in regard to cerebration.
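The Shannon sense of entropy reduction mentioned in the parenthesis can be stated exactly. A short sketch (with made-up probability distributions) shows how concentrating probability mass on a recognised pattern lowers the entropy, i.e., the uncertainty, measured in bits.

    import math

    def shannon_entropy(p):
        """H(X) = -sum p_i * log2(p_i), in bits; zero-probability terms contribute 0."""
        return -sum(pi * math.log2(pi) for pi in p if pi > 0)

    # Before learning: four outcomes look equally likely -> maximal uncertainty.
    print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits
    # After a pattern is recognised, probability mass concentrates -> less uncertainty.
    print(shannon_entropy([0.85, 0.05, 0.05, 0.05]))  # ~0.85 bits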
Using mathematical archetypes to study the emergence of brain states and qualia would be futile, since brains do not "compute" the way computers do. There is no algorithmic computation that a brain performs; what it does, literally, is sense and reckon—a mode of perceptive cognition at higher psychosomatic fields. The brain has different "fields" and "centres" that do the job computers do with processing units directed by algorithmic commands while processing information. The brain's electrical fields (Tucker et al., 1994), and the relationship of brain fields to learning, were first postulated by Gengerelli (1934); for a detailed account, the reader may refer to Nunez and Srinivasan (2006). Unlike a processor, however, the brain's fields cannot be disjunctively pinpointed with precision: sometimes here, sometimes there, and at other times a larger area evokes neural potentials. Neuroscience research is gradually revealing the nature of the brain's electrical fields.
If we align our goal of uncovering the nature of consciousness, and of how conscious states emerge within the brain, with the principles of neurophysiology—first understanding the brain fields, and then reconstructing brain states in machines by subjective means—we can do better in formulating strategies to evoke conscious states in machines. But for that, machines must have minds, not just electrical states. We need subjective tools, just as the mind has, supported by the objective structure of brain cells organised into a complex pattern.
The subjective states can only be ascertained by philosophical models guided by scientific reasoning, penetrating the mind to "see" how conscious functions take shape or "emerge" within the mental domain. We must not forget that things are not "scalable" in the mind.
Nor is taking cues from behaviourism enough to construct mental states, since even machines can behave without being conscious entities, which refutes B. F. Skinner's core principles. Neither would a complete decryption of the brain provide a complete understanding of consciousness. We cannot account for the higher-level properties of the mind using the simple tools of science and computation. It is not mere physical complexity that lies behind all mental phenomena; it is something beyond and higher, at the abstract, theoretical, and philosophical planes of understanding, that will help us penetrate the depths of the mind.

4. Conscious Thinking Machines

Today, most of the powers of human consciousness are, to a great degree, mechanistically reducible. These include computation, reasoning, decision-making, logical analysis, problem-solving, writing, drawing, and other mechanical tasks which machines can do with ease. The elements of the human mind which are not reducible are those related to subjective states and emotional feelings: intuition, thinking, imagination, and creative ingenuity. The subjective states of pain, pleasure, joy, ecstasy, and sorrow, and the deeper feelings of the mind, are beyond the reach of robotics, and will likely remain so for the time being. Another aspect, highlighted by Dennett (1994), is the replication of the biological functions that occur within the human body. No machine can replicate or attain the speed, accuracy, and compactness of the myriad biological processes of the brain and body, including gene activation, protein synthesis, hormonal secretion, and the biochemical pathways fundamental to the body. Seen the other way around, robots do not actually need these processes to become functional; yet some of them may be critical for consciousness to exist.
Indeed, the functions of intelligence have been achieved through high-level simulation based on algorithms and programmed instructions. Computers can learn from their surroundings using the principles of unsupervised learning, but it is not true learning at all. It is also touted that they can train themselves in many other respects that require reasoning, computation, and rational choice. But that is not true training either. They are simply loaded with data and information and assigned parameters with different weights, from which they compute probabilities and create meaningful sentences.
The fabric of artificial neural networks constituting the so-called "functional brain" of an AI agent enables an AI-based system to weigh actions among the choices it is given, matching assigned weights seamlessly. Unlike human beings, robots recognise things differently: they do not "perceive", and without perception there is no thinking, for these machines are mindless entities. But first we must understand what we mean by thinking. Whatever comes to our mind, or goes through it, is "thought". To think is to be aware of something—to be "conscious" of the thing being thought about. Perception is not thinking; it is a mode of observation, which even machines can now do seamlessly. For machines, perception is a mode of sensing.
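What we call machine "sensing" can be reduced to its simplest form, the single perceptron: a weighted sum of inputs passed through a threshold. The sensor readings, weights, and bias below are hypothetical; the point is that nothing here perceives, it only computes.

    # A single perceptron: "perception" reduced to a weighted sum and a threshold.
    def perceive(inputs, weights, bias):
        """Fire (1) if the weighted evidence crosses the threshold, else stay silent (0)."""
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 if activation > 0 else 0

    sensor_reading = [0.9, 0.2, 0.4]     # hypothetical feature values from a camera
    weights = [1.5, -0.8, 0.3]           # hypothetical learned weights
    print(perceive(sensor_reading, weights, bias=-0.5))  # 1: the unit "senses" a match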
Human reflective thinking, on the other hand, has value and is educative in nature. It reveals deeper insights following contemplation—a unique function of the human mind, uncharacteristic of any other animal. The noetic machinery of thinking is complex; although modern LLMs have a great reputation for generating intelligent responses, they lack this fundamental function of the mind. Reflective thinking is not just the consecutive ordering of ideas; it is the synthesis and bringing into the mind of related (and unrelated) information to generate plausible ideas. Computers can call up on demand what they have in their stores: data, information, and knowledge. They cannot call up (for they have no mind) what they have not previously imagined or perceived. They cannot even daydream, which is also a mode of thinking, according to John Dewey (1910).

5. Is Searching a Thinking Process?

Is searching a mode of thinking? Is it a thinking process, or an aspect of the human thinking machinery? We always search for something, and when it comes to searching for information and knowledge, the intellectual machinery of the mind becomes highly active. Do we not "think" when we are searching for something? It is now an ordinary function for any computer to search online and offline; even web portals have built-in algorithms that enable the search process to bring out the best results. But computers look for the best match—much as Alan Turing envisioned when a computer is given the task of searching. They compute probabilities and find the best match. This is not a true conscious searching mechanism like that of humans; it is just a function: computing. Indeed, highly advanced unthinking machines running smart programs can today pass many tests. This exemplifies one of the many "objections" to the Turing test.
Computing and searching are not synonymous with the conscious functions of the mind. A computer is directed to search, using appropriate algorithms, to find the best matches. There is no mental mechanism involved in such an action because, once again, computers are mindless machines. There is no "intuitive conceptualisation" that can be attributed to machines. It is not pure thinking, although when we search for something, we do think. Through learning and training, machines have become smart searching proxies. But they have not yet become "thinking machines," for they do not possess the thinking machinery of the mind—the cortex. The entire concept of machine thinking, including searching, is algorithm-driven, or based on multimodal methods of searching over programmed tasks.
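The best-match character of machine searching can be sketched in a few lines. The document vectors and query below are made-up term counts; a real search engine is vastly more elaborate, but its core act is the same scoring and ranking of matches, not conscious seeking.

    import math

    def cosine(u, v):
        """Similarity of two vectors: 1.0 means identical direction."""
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

    # Hypothetical document vectors (crude term counts) and a query vector.
    docs = {"doc_a": [3, 0, 1], "doc_b": [0, 2, 2], "doc_c": [1, 1, 0]}
    query = [2, 0, 1]

    best = max(docs, key=lambda name: cosine(docs[name], query))
    print(best)   # doc_a -- the "search" is nothing but scoring and ranking matches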

6. The Role of Intelligence in Machine Evolution

Intelligence plays a definite role in the evolution of the human race and other animals (Erdal and Whiten, 1996). Even plants elicit certain levels of intelligence (Trewavas, 2016), depending upon the variability of their behavior. The evolution of machine intelligence, however, was gradual at first and has become exponential over the last two to three decades. Machines are exclusively reliant on knowledge from data sets and on learning and training programs that imbue them with a high level of intelligence, which we call machine intelligence. This intelligence is hardwired into machines by means of machine learning, and the intelligence that results is what we call artificial intelligence. The role of intelligence in human evolution has been central to the progress, survival, and development of cultures and societies across nations from antiquity to this day. The same cannot be said of robots.
Today's intelligent robots are powered by AI tools, training, and language learning modules. The behavioral data necessary for modelling machine responses are customised and fed through learning modules, though some advanced ChatGPT-like programs have been endowed with the ability to learn from experience and context—a novel trait for machines. Machines do not have minds, and consequently they cannot reflect universal behavioral cues and patterns as humans do. They may possess a high level of intelligence, but they are not sensitive to social cues and social hierarchies, nor are they thoughtful about resource sharing. Reciprocity is not a trait imbued in machines as yet, for they are not emotional machines, lacking as they do subjective states and feelings. They have all that characterises the evolution of non-biological intelligence (Lee, 2006).
Charles Darwin rejected, in his theory of evolution, any role for intelligent designers in the emergence of complex and adaptive living creatures. Darwinian evolutionary processes support no role for "designers" in the emergence of complex behaviors in humans and animals. With machines, however, the picture is plainly different: the designers of machine minds are humans, who design robots with brains—artificial intelligent systems—and have made it possible to shape, reshape, and fashion machine intelligence for the production of more complex adaptive systems (Lee, 2006). This "emergence" of machine intelligence is the result of several evolutionary phases characterised by rapid and unprecedented developments. In that sense, one may argue that complex and adaptive systems can arise from post-Darwinian machine evolutionary processes. But one may ask: what is the role of intelligence in it?
Intelligence is the acumen of the intellect. It is a noetic trait of a living being, or even of an inanimate entity. Intelligence, according to Schlinger (2003), is a qualitative faculty or power of the mind. It enables or disables an entity to perform certain tasks. The evolutionary emergence of machine intelligence and complexity has relied entirely on human intelligence to design machine (mental) states based on the principles of reasoning, learning, thinking, and perceptive rationality. Need-based development is also one of the primary reasons machines are continuously developed to hold more power and elicit greater intelligence. Nature played no role in the evolution of machine intelligence, nor is it likely to play any such role, save for "learning" from animals about their modes of living, language, behaviors, and responses. Darwin's evolutionary doctrine thus does not hold in the context of the evolution of machine intelligence. If anything can reason well while being other than biologically conscious, it will be the robots of the future.

7. Free Will and Social Evolution of Machine Intelligence

The reification of human mental phenomena—the attempt to represent abstract thinking in material form (Schlinger, 2003)—is a nearly impossible task for any competent technical expert or programmer. We stress that the possibility of artificial consciousness is a myth with very little scientific grounding: it comes with great promises but fails to proceed beyond simple mimicry of human actions and behaviors. As stated earlier, intelligence is a trait, a power of the human mind, and it has been embodied in machines artificially to make them seem like human beings. Against the mind's astounding capabilities, this form of intelligence looks like a grain of sand in the sea. Artificial intelligence fails to reflect a true scientific understanding of human behavior.
It is believed that behaviors and actions designate intelligence: the intelligent behaviors elicited by human beings and animals not only depict the innate nature of their brainpower but also describe the state of their mental discretion. The social evolution of machine intelligence, if there has been any such thing, is roughly the result of changes in codes, programs, and algorithms restructured in reaction to changing human social contexts, to fit the contexts in which they arise. There is, consequently, no innate discretion or free will to be found in, or programmable into, computers, since they cannot think and decide on their own. Almost all machine decisions are made to order, based on intelligent instructions arising from statistical modeling and probability computation that create meaning from chaotic loads of data. The sensors attached to machines all contribute data for processing, so that machines can make the most optimal decisions within a very short span of time. Robots do not think when they make decisions; they react impromptu.
If free will is considered a product of evolution, then attempting to design it into machines and robots becomes futile. John McCarthy (2000) pondered this when he considered the possibility of empowering machines with free will. On his thinking, there may be some aspects of free will that could be designed into robots to make them more useful. The question of free will inevitably brings the ethical dimension into the scenario, and it offers an opportunity to design robots as moral machines. But then the question of faith arises: should robots have faith in morality? Merely imbuing machines with free will is unlikely to cover the full ethical perspective of designing moral machines. There is a difference between having choices, being conscious of the choices, and—yet again—having the freedom, or the "will", to choose freely. Such choices must be guided by moral norms or codes of conduct appropriate for a robot to function with ease. Hence the question may not find a suitable answer: should a robot be merely a structure (a tool, machine, or instrument) serving some purpose or, as Dennett (1978) posited, have an intentional stance of its own? Should robots be allowed to evolve socially, or should they be controlled by established social norms and guidelines? What about morality and moralism for robots? Should robots be guided by moral edicts, or be allowed to function within a prescribed ethical boundary?

8. Misconceptions About Machine Intelligence

In this section, we discuss the myths and misconceptions surrounding the evolution of machine intelligence and of the smart AI bots driven by algorithms and LLMs. A great deal of confusion is being generated, and we believe we are being deluded regarding the current and desired status of AI (Emmert-Streib, Yli-Harja & Dehmer, 2020). Nussbaum (2023) provides a comprehensive review that tries to debunk some myths and address potential misconceptions concerning the evolution of AI and AC. The myths and misconceptions regarding AI and AGI must be addressed with clarity (Bewersdorff et al., 2023). This is an attempt on that frontier: we, too, lay down some of the evolving myths and misconceptions which we believe must be elucidated in relation to the evolution, use, and applicability of AI-based tools.
  • First, AI systems are not consciously aware. We can take a philosophical stance, but that would not go far. An AI system is an architecture of text and data manipulation, fine-tuned to generate responses that look like human responses; in reality, that is all it is.
  • Second, intelligent chatbots respond to prompts given by users. They do accumulate context and contingent information, conditional on the variants of user responses.
  • They are excellent at recognising patterns and textual cues, and they excel at mimicking patterns encountered before. They are also generators of patterns that are "statistically predictable".
  • They are not really emergent systems. A system can only be said to emerge if it has a conscious will to materialise consciously what it has learned before, and to modulate its behaviors when it bumps into a new context.
  • AI-based machines learn nothing, although we call them learning machines. Despite being taught, they do not learn. So, they can’t evolve on their own, unless changes are brought about in their systems through human interventions.
  • They perceive nothing, are aware of nothing, do not evolve or remember, do not think or understand. They also do not develop awareness of their tasks. They simply follow recursive sets of rules, programmed routines, and neither do they develop any kind of symbolic awareness.
  • They are nothing more than statistical response generators, producing responses by computing over assigned weights to match probabilities.
  • They give an illusion of insight, but they are not intuitional machines nor do they have any insights. They may be understood as bootstrapping systems having no conceptions about their workings.
  • They are simply models of information processing and follow statistical, recursive rules; their intelligence is based on the information fed to them, on reinforcement learning modules, and on other modules of learning adopted to "teach" them how to respond and behave. One thing more: they never "learn" from you. They will tend to commit the same mistakes repeatedly unless their parameters are modulated or existing errors are rectified through input commands. This is the biggest misconception about machine learning and perception.
    • One may argue that unsupervised learning enables machines to learn on their own. But again, it is not true learning (see the sketch after this list).
  • All LLM-based AI bots and systems may have recursive intelligence, but that is not true intelligence. LLMs do not evolve either, for they do not have the capacity for introspection and thinking.
  • AI-based tools are statistically predictable machines that excel at computing and matching probabilities. They are not yet the "artificial agents" we dreamt of; they are simply tools that can interact with us within a boundary of contingencies designed for them.
  • They do not know, but in essence, are made to know.
  • They often repeat their responses (and behaviors) with the same effects, time after time.
  • Human evolution is conscious, machine evolution is insentient.
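As promised in the list above, here is a minimal sketch of what unsupervised "learning" amounts to: a k-means-style clustering of made-up one-dimensional data. The algorithm does detect the two clusters, but the detection is iterative averaging, with no understanding behind it.

    # Minimal k-means (1-D, k=2) on invented data: pattern detection without learning.
    data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]   # made-up observations
    centroids = [0.0, 10.0]                  # crude initial guesses

    for _ in range(10):                      # a few refinement passes suffice here
        clusters = [[], []]
        for x in data:                       # assign each point to its nearest centroid
            clusters[min((0, 1), key=lambda i: abs(x - centroids[i]))].append(x)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]

    print([round(c, 2) for c in centroids])  # ~[1.0, 8.07]: patterns found, nothing understood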
Now, there is a great difference between what an AI system is expected to do, what it is doing, and what we want it to do—grounded in how it is supposed to be doing what is intended for it. This opens the door to unending debates concerning their capabilities and skills, and then to the question of whether to trust them on the simple assumption that they are learning machines. We believe, therefore, that we are reaping the harvests of AI-based tools on technically false assumptions, which may prove dangerous going forward. If, however, we take them for what they are—tools that aid us in our work, learning, and various other activities—then, going forward, AI-based systems can become great aides-de-camp to which we can look for genuine help and assistance.

9. The Model of a True Artificial Mind

Many researchers strongly hold that any cognitive system which seeks to model human cognition and intelligence must incorporate human emotional aspects as well (Patnaik and Kallimani, 2017). Pfeifer (1988) was among the first proponents of machine consciousness based on artificial cognition and intelligence to discuss artificial emotion in terms of artificial-intelligence models of emotion. The approach depended on how knowledge could be represented to characterise emotions, but knowledge representation itself proved problematic for the AI designers of the past three decades. He proposed that machines could be given goals, plans, and complex knowledge structures that could serve as metaphors for understanding human emotions. This has today been partly realised with regard to the problem of knowledge representation: with the rapid pace of evolution in machine intelligence, AI developers seem to have understood the problem at its core.
Attempts are underway today to give machines expressive models that elicit emotional behaviors—but not true emotions. We, too, in our recent work (Mao & Chatterjee, 2025) have raised this issue of emotional states in machines, which we believe could be modeled if a philosophical approach to the problem is adopted. This could lead to the emergence of humanitarian machines—Homo Machina (Mao and Chatterjee, 2025)—or, as Montemayor (2024) has proposed, a humanitarian humanoid powered by artificial intelligence. Such powerful machines could have their own moral standards as well (Davis, 2022). That, in turn, raises the risk of robocentrism (Davis, 2022), which is a deep concern for policy makers and the public at large.
However, we must bear in mind the following aspects with regard to the limitations of AGI:
  • Today’s AI-based systems are not consciously aware of their existential states.
  • Although to many people it may seem that AGI-based machines possess some kind of mental states of their own, in reality they are not self-aware.
  • We also have the false notion that AI-based chatbots understand the context of a conversation, which in reality is not true. They call back and trace previous contexts and match them against new prompts, giving us the false notion that they understand the way we do (see the sketch after this list).
  • They actually compute probabilities based on algorithms and weights assigned to billions of parameters to generate responses that fit the context, which is entirely based on data processing.
  • They are pattern recognisers and pattern generators, not emergent thinkers. This pattern recognition is statistical, not intentional.
  • Because they are exposed to data sets, we believe they are "learning" entities. In reality, they do not learn autonomously; they are able to adjust their parameters based on improved inputs, in which human intervention plays a significant role.
  • Some of their responses seem very convincing; these, however, are the result of statistical models and data sets.
  • It is true that unsupervised models of machine learning do allow machines to detect patterns without direct supervision (Krauss and Maier, 2020). But that does not indicate that they are evolving and learning autonomously. Their rationality, reasoning, and intelligence are bounded by datasets, statistical models, and instructions, beyond which they falter.
  • They do not adapt or modify their behaviors; claims to the contrary by many proponents and developers are highly misleading. They are far from being "true" agents, because true agents have their own goals and intentional stances.
  • At every stage of their development and functioning, they require human intervention. Here, machine learning simply corresponds to the "optimisation" of responses through the correction of errors, and that, too, requires human intervention.
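The sketch promised in the list above illustrates why a chatbot appears to understand a conversation: on every turn, the entire prior exchange is re-sent to the model as one long prompt. The function model_generate here is a hypothetical stand-in for a real LLM call.

    # Why a chatbot seems to "remember": the history is literally re-sent each turn.
    def model_generate(prompt: str) -> str:
        """Hypothetical stand-in for a real LLM call."""
        return f"[response conditioned on {len(prompt)} characters of history]"

    history = []

    def chat(user_message: str) -> str:
        history.append(f"User: {user_message}")
        prompt = "\n".join(history)          # the "memory" is just string concatenation
        reply = model_generate(prompt)
        history.append(f"Assistant: {reply}")
        return reply

    print(chat("My name is Ada."))
    print(chat("What is my name?"))          # "understood" only because turn 1 is re-sent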
In view of the aforementioned limitations of artificial general intelligence, we propose a unique philosophical model of machine awareness which would truly enable machines to feel humanely and to think rationally. Unless the necessary subjective states are embodied in robots, they will never have the much-needed impulses to think, for thinking is closely knitted to feelings and emotions. Without being able to feel, it is hard for anyone to react, and it is the reaction to felt impulses that leads to thinking.
Now, the question of embodying machines with feeling is a difficult one. Many attempts have been made, and more are being designed, to inculcate the subjective states of qualia in machines (Haikonen, 2022). Haikonen (2022) believes that refined qualia states are necessary for AI systems to be meaningful. Our rational thinking points to the fact that the correlates of conscious awareness and of a mind's subjective states are biological and neural: the "organic correlates" comprising a vast network of neurons organised into a functional brain with many centres that initiate, coordinate, and control feelings, behaviors, and actions. This is the result of a prolonged evolutionary process that shaped the human brain and embodied it with the necessary qualities of mindful perception, feeling, cognition, and thinking. None of this corresponds to machines. So how can real qualia states emerge in machines programmed by algorithms?
The evolution and emergence of true consciousness in machines will require ideas from philosophy and psychology to become properly effective. We are not talking merely about a thinking machine, but about a feeling, emotive entity that could feel and understand what it is like to feel pain, pleasure, and touch; that could have emotive drives such as anger, rage, annoyance, and resentment; and that could possess the control mechanisms to regulate these emotional states.

10. Conclusion

This paper attempts to address some misconceptions concerning the evolution, applicability, and value of artificial intelligence in regard to designing conscious machines. It points out, with clear-cut reasons, why machine intelligence is at a nascent state and why further developments are needed before we could accept machine-embodied intelligence as an unfailing peer to human intelligence. Several advancements related to the emergence of chatbots and smart conversational agents have been questioned and their usability doubted. Further research on this frontier will bring out more of the myths in the story of AI evolution, which is necessary to distinguish the myths surrounding artificial intelligence from what we already have in reality. There is no doubt that AI-powered tools have found their usefulness in diverse sectors of the economy: healthcare, civil aviation, e-commerce, education, engineering, smart mobility, sensor technology, defence technology, data analysis, meteorology, law and crime detection, robotics, and the multimedia and entertainment industry, among others.
But what is more important for humanity is to assess the full implications and reach of AI, and its precision in making correct and rational decisions when it comes to dealing with human emotions and feelings. The ethical side of AI remains a big dilemma, and debates abound about its consequences for us going forward. AI being nothing but a technology, it would still be fanciful to consider such systems true peers of the human race. Refinement in design based on philosophical underpinnings is a much-needed step towards addressing the points raised above.
Simply put, by means of AI and AI-based tools, we are seeking the assistance of an intelligent but automated agent which has become a part of our everyday life. The full implications of AI are yet to be acknowledged, but it is becoming clear that this form of non-human, non-biological intelligence is closely associating and integrating with us—or rather, we would say, we with it.

References

  1. Aleksander, I. (2017). Partners of humans: a realistic assessment of the role of robots in the foreseeable future. Journal of Information Technology, 32(1), 1-9. [CrossRef]
  2. Bewersdorff, A., Zhai, X., Roberts, J., & Nerdel, C. (2023). Myths, mis-and preconceptions of artificial intelligence: A review of the literature. Computers and Education: Artificial Intelligence, 4, 100143. [CrossRef]
  3. Capone, F., Paolucci, M., Assenza, F., Brunelli, N., Ricci, L., Florio, L., & Di Lazzaro, V. (2016). Canonical cortical circuits: current evidence and theoretical implications. Neuroscience and Neuroeconomics, 1-8. [CrossRef]
  4. Chalmers, D. J. (1995a). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200-219.
  5. Chalmers, D. J. (1995b). Minds, machines, and mathematics. Psyche, 2(9), 117-18.
  6. Chalmers, D. J. (2023). Could a large language model be conscious?. arXiv preprint arXiv:2303.07103.
  7. Chella, A., & Manzotti, R. (2009). Machine consciousness: A manifesto for robotics. International Journal of Machine Consciousness, 1(01), 33-51. [CrossRef]
  8. Davis, M. (2022). An Exploration of the Emergence of Machine Consciousness and the Risk of Robocentrism. Journal of Artificial Intelligence and Consciousness, 9(03), 385-407. [CrossRef]
  9. Dennett, D. C. (1975). Brain writing and mind reading. Minnesota Studies in the Philosophy of Science, 7.
  10. Dennett, D. C. (1994). The practical requirements for making a conscious robot. Philosophical Transactions of the Royal Society of London. Series A: Physical and Engineering Sciences, 349(1689), 133-146.
  11. Dennett, D. C. (2008). Kinds of minds: Toward an understanding of consciousness. Basic Books. [CrossRef]
  12. Dixey, R., & Purser, R. E. (2023). Mindfulness Traps and the Entanglement of Self: An Inquiry into the Regime of Mind.
  13. Dreyfus, H., & Dreyfus, S. E. (1986). Mind over machine. Simon and Schuster.
  14. Emmert-Streib, F., Yli-Harja, O., & Dehmer, M. (2020). Artificial intelligence: A clarification of misconceptions, myths and desired status. Frontiers in artificial intelligence, 3, 524339. [CrossRef]
  15. Erdal, D., & Whiten, A. (1996). Egalitarianism and Machiavellian intelligence in human evolution. Modelling the early human mind, 139-50.
  16. Fang, T. (2024). A Philosophical Approach to Human-Centered Artificial Intelligence and 21st Century Technology: Is it Possible for a Machine to Ever Experience Emotions the Way We Can?. Available at SSRN 5084945.
  17. Garrido-Merchán, E. C. (2024). Machine Consciousness as Pseudoscience: The Myth of Conscious Machines. arXiv preprint arXiv:2405.07340.
  18. Gengerelli, J. A. (1934). Brain fields and the learning process. Psychological monographs, 45(4), i. [CrossRef]
  19. Gerstner, W. (2011). Hebbian learning and plasticity. From neuron to cognition via computational neuroscience, 0-25.
  20. Giray, L. (2024). Ten Myths about Artificial Intelligence in Education. Higher Learning Research Communications, 14(2), 1-12. [CrossRef]
  21. Gordon, D. M. (1989). Dynamics of task switching in harvester ants. Animal Behaviour, 38(2), 194-204. [CrossRef]
  22. Haikonen, P. O. (2022). Qualia, Consciousness and Artificial Intelligence. Journal of Artificial Intelligence and Consciousness, 9(03), 409-418.
  23. Hatfield, G. (2008). René Descartes.
  24. Hildt, E. (2019). Artificial intelligence: does consciousness matter?. Frontiers in psychology, 10, 1535. [CrossRef]
  25. Faress, I., Khalil, V., Hou, W.-H., Moreno, A., Andersen, N., Fonseca, R., Piriz, J., Capogna, M., & Nabavi, S. (2024). Non-Hebbian plasticity transforms transient experiences into lasting memories. eLife, 12, RP91421.
  26. Juilliar, R. (2024). CERN's impact goes way beyond tiny particles. Nature, 628, S1.
  27. Kandel, E. R., & Squire, L. R. (2000). Neuroscience: Breaking down scientific barriers to the study of brain and mind. Science, 290(5494), 1113-1120. [CrossRef]
  28. Kievit, R. A., Romeijn, J. W., Waldorp, L. J., Wicherts, J. M., Scholte, H. S., & Borsboom, D. (2011). Modeling mind and matter: Reductionism and psychological measurement in cognitive neuroscience. Psychological Inquiry, 22(2), 139-157. [CrossRef]
  29. Krauss, P., & Maier, A. (2020). Will we ever have conscious machines?. Frontiers in computational neuroscience, 14, 556544.
  30. Levy, D. (2009). The ethical treatment of artificially conscious robots. International Journal of Social Robotics, 1(3), 209-216. [CrossRef]
  31. Lycan, W. G. (1993). Consciousness Explained. The Philosophical Review, 102(3), 424-429.
  32. Mao, I., & Chatterjee, S. (2025). Minds out of Matter: Imperatives for Artificial Consciousness. Available at SSRN 5093218.
  33. McCarthy, J. (2000). Free will-even for robots. Journal of experimental & theoretical artificial intelligence, 12(3), 341-352.
  34. Montemayor, C. (2024). “Precis: The Prospect of a Humanitarian Artificial Intelligence.” World Scientific, 133-142. [CrossRef]
  35. Moravec, H. P. (1999). Robot: Mere machine to transcendent mind. Oxford University Press. [CrossRef]
  36. Navon, M. (2024). To make a mind—a primer on conscious robots. Theology and Science, 22(1), 221-241.
  37. Nunez, P. L., & Srinivasan, R. (2006). Electric fields of the brain: the neurophysics of EEG. Oxford university press.
  38. Nussbaum, F. G. A. (2023). Comprehensive Review of AI Myths and Misconceptions.
  39. Pang, R., & Recanatesi, S. (2025). A non-Hebbian code for episodic memory. Science Advances, 11(8), eado4112. [CrossRef]
  40. Patnaik, L. M., & Kallimani, J. S. (2017). Promises and limitations of conscious machines. Self, culture and consciousness: interdisciplinary convergences on knowing and being, 79-92.
  41. Schlinger, H. D. (2003). The myth of intelligence. Psychological Record, 53(1), 15-32.
  42. Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-424.
  43. Spector, L. (2006). Evolution of artificial intelligence. Artificial Intelligence, 170(18), 1251-1253.
  44. Tonegawa, S., Morrissey, M. D., & Kitamura, T. (2018). The role of engram cells in the systems consolidation of memory. Nature Reviews Neuroscience, 19(8), 485-498. [CrossRef]
  45. Trewavas, T. (2016). Plant intelligence: an overview. BioScience, 66(7), 542-551.
  46. Tucker, D. M., Liotti, M., Potts, G. F., Russell, G. S., & Posner, M. I. (1994). Spatiotemporal analysis of brain electrical fields. Human Brain Mapping, 1(2), 134-152. [CrossRef]
  47. Turing, A. (2004). Computing machinery and intelligence (1950). In B. J. Copeland (Ed.), The Essential Turing. Oxford University Press.
  48. Wille, K. (2000). The physics of particle accelerators: an introduction. Clarendon Press.
  49. Yuste, R. (2018). The cortical microcircuit as a recurrent neural network. Handbook of Brain Microcircuits, 2, 47-57.
  50. Zaman, B. U. (2024). Exploring the balance of power humans vs. artificial intelligence with some question. Authorea Preprints.