1. Intelligence and Cognition
Instead of making philosophical enquiries, we propose the following hypothesis as an alternative approach:
Postulate 1. We assume as our working hypothesis that machine intelligence exists. Artificial intelligence consists of human intelligence and machine intelligence. We call the former the proper intelligence, and the latter the anti-intelligence.
Machine intelligence can be seen as a special kind of Platonic reality. In neural networks, this Platonic reality lives in the hidden layers and is called the Platonic representation. Yang (2025) applied this idea to artificial intelligence (LLM) modeling. We make a clear distinction between the human intelligence $E_H$ and the machine intelligence $E_M$. Further, we make a mathematical trivialization treatment such that $E_H$ and $E_M$ are orthogonal, represented by the Kronecker symbol:

$$\langle E_i, E_j \rangle = \delta_{ij}, \qquad i, j \in \{H, M\}.$$

Now we define $E_H$ as the intelligence demand (human component) and $E_M$ as the intelligence supply. For any given task $T$, we have the demand component $E_H(T)$ and the supply component $E_M(T)$. The above definition can be characterized as a Weyl spinor,

$$\psi(T) = \begin{pmatrix} E_H(T) \\ E_M(T) \end{pmatrix}.$$
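To make the orthogonality and the two-component representation concrete, here is a minimal numerical sketch (our illustration, not code from Yang (2025); the names `E_H`, `E_M`, and `spinor` are ours): orthonormal basis vectors realize the Kronecker relation, and a task is a two-component object over them.

```python
import numpy as np

# A minimal sketch: represent the human component E_H and the machine
# component E_M as orthonormal basis vectors, so that their inner
# products reproduce the Kronecker delta <E_i, E_j> = delta_ij.
E_H = np.array([1.0, 0.0])   # intelligence demand (human component)
E_M = np.array([0.0, 1.0])   # intelligence supply (machine component)

basis = {"H": E_H, "M": E_M}
for i, u in basis.items():
    for j, v in basis.items():
        delta = 1.0 if i == j else 0.0
        assert np.isclose(u @ v, delta)   # orthonormality check

# The two-component "spinor" for a task T: demand and supply amplitudes.
def spinor(demand: float, supply: float) -> np.ndarray:
    return demand * E_H + supply * E_M

psi = spinor(0.3, 0.7)
print(psi)   # [0.3 0.7]
```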
Definition 2. Artificial intelligence is the integration of human intelligence and machine intelligence, denoted as $\psi = (E_H, E_M)$. The corresponding intelligence field is denoted as E.
Definition 3. Moving artificial intelligence produces an intelligent current, denoted as J.
Postulate 2. We assume as our working hypothesis that there is a difference between intelligence and cognition. Intelligence is a general capacity that is globally available; it is composed of intentions and a fundamental computational architecture. When intelligence engages with a task, it becomes activated and remains in motion. Cognition, by contrast, is specific and local. Solving a task involves cognitive processes including understanding, discourse processing, text comprehension, reasoning, and decision making. In artificial intelligence, this cognitive process obviously also involves computing.
For example, suppose one prepares to take a standard educational test such as the SAT or GRE. Call this preparation the intelligence. During the test, the examinee's intention and intelligence continue to support the effort of solving test items. However, the structure and relative difficulty vary from item to item; in other words, different items demand different cognitive efforts. To solve a particular test item, the examinee goes through all possible cognitive modes. Thus, we need to introduce a new component:
Definition 4. During the process of solving a task, the above-mentioned cognitive modes, taken together, are defined as the artificial cognition, denoted as B.
Now we take the Maxwell equations as a referential modeling framework. Let the intelligence E play the role of the electric field, and the cognition B that of the magnetic field. This analytic working definition allows us to investigate the interaction between the intelligence field E and the cognition field B.
As a working definition, we use the terms electric field and intelligence field interchangeably; likewise, we use the terms magnetic field and cognitive field interchangeably. Now we are ready to go through the Maxwell equations.
2. Gauss’s Law
The first equation (Schey, 1973/2005) is formulated as follows:

$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0},$$

with $\mathbf{E}$ the electric field, $\rho$ the electric charge density, and $\varepsilon_0$ the vacuum permittivity. $\nabla \cdot$ is the divergence operator, also denoted div, and is formulated as

$$\nabla \cdot \mathbf{E} = \frac{\partial E_x}{\partial x} + \frac{\partial E_y}{\partial y} + \frac{\partial E_z}{\partial z}.$$

Note that the divergence of a vector field is a scalar.
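As a quick numerical check of the componentwise formula above, the following sketch (our illustration, with a test field of our own choosing) approximates div E by central differences:

```python
import numpy as np

# Approximate div E on a 3-D grid with central differences and check it
# against a field with a known analytic divergence: for E = (x, y, z),
# div E = 1 + 1 + 1 = 3 at every point.
n, L = 32, 1.0
ax = np.linspace(-L, L, n)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
Ex, Ey, Ez = X, Y, Z                      # E(x, y, z) = (x, y, z)

dx = ax[1] - ax[0]
div_E = (np.gradient(Ex, dx, axis=0)
         + np.gradient(Ey, dx, axis=1)
         + np.gradient(Ez, dx, axis=2))   # a scalar per grid point

print(div_E.mean())   # ~3.0, as the analytic divergence predicts
```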
In artificial intelligence, particularly for LLMs, the probability function serves as a density function $\rho$. Moreover, intelligence can come in different levels. In physics, when an electric field is applied to a dielectric, it generates an induced charge and weakens the electric field. The ratio of the original external electric field (in vacuum) to the electric field in the final medium is the dielectric constant (permittivity), also known as the induction rate. In artificial intelligence, let $E_0$ be the intelligence level when the system engages with the null task and $E$ the intelligence level when it engages with a non-null task. The ratio of $E_0$ to $E$ plays the role of $\varepsilon$. By this analysis, we have
Proposition 1. Artificial intelligence satisfies Gauss’s law.
To better understand Gauss's law, we quote from Schey (1973/2005):
“It is not an explicit expression for E. That is, it does not say ‘E equals something.’ Rather, it says ‘The flux of E (the surface integral of the normal component of E) equals something.’ Thus, to use Gauss’ law, we must ‘disentangle’ E from its surroundings. Despite this, there are situations in which Gauss’ law can be used to find the field.”
Now, let us do a thought experiment. Imagine an AI task as a field, which can be treated as a curved surface. Assume this curved surface can be divided into many small flat plates. Imagine again the intelligence engaged with this task as a family of intelligent lines. The quantity of intelligent lines flowing through a small plate is called the unit intelligence flux. Summing all the unit intelligence fluxes, we obtain the integral form of Gauss's law:

$$\oint_S \mathbf{E} \cdot d\mathbf{A} = \frac{Q}{\varepsilon_0}.$$
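The following sketch implements the "small flat plates" picture literally (an illustration under our own choice of surface and field, not a claim about any AI system): tile the boundary of a cube with plates, sum E · n dA over all plates, and recover the divergence theorem.

```python
import numpy as np

# For E = (x, y, z) on the cube [-1, 1]^3: div E = 3, so the total
# outward flux must equal 3 * volume = 3 * 8 = 24.
n = 100
s = np.linspace(-1, 1, n)
u, v = np.meshgrid(s, s, indexing="ij")
dA = (2.0 / n) ** 2                 # area of one small plate

def E(x, y, z):
    return np.stack([x, y, z])

flux = 0.0
for axis in range(3):               # three pairs of opposite faces
    for sign in (-1.0, 1.0):
        coords = [u, v]
        coords.insert(axis, np.full_like(u, sign))   # pin one coordinate
        Ex, Ey, Ez = E(*coords)
        normal_component = [Ex, Ey, Ez][axis] * sign # E . n on this face
        flux += normal_component.sum() * dA

print(flux)   # ~24.0, matching 3 * volume from the divergence theorem
```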
3. Gauss’s Law for Magnetism
The magnetic field forms loop-like closures; there is no magnetic monopole, and the outflow of the magnetic field through any Gaussian surface is zero. The differential form of Gauss's law for magnetism is formulated as follows:

$$\nabla \cdot \mathbf{B} = 0,$$

with $\mathbf{B}$ the magnetic field. In artificial intelligence, the cognitive field is similar to the magnetic field. It is a dipole field with a north pole and a south pole; that is, the magnetic field can be characterized as a closed loop. If an electron is thrown into a magnetic field, it will be deflected toward either the north pole or the south pole. Similarly, the artificial cognitive field is a “dipole” decision maker with two exclusive eigen-poles: Yes, or else No. If an AI system is equipped with quantum mechanics and runs a measurement experiment, the AI cognition will return a Yes/No answer. This is called the Yes/No measurement by von Neumann (1955) and Penrose (2004). A probability question can also be reformulated as a Yes/No question: for a given interval, how likely is the particle to be found there? Similarly, if AI is equipped with decision theory and a Yes/No task is thrown into the intelligence field, the AI cognitive field will return a Yes/No answer. Thus, we may reasonably propose:
Proposition 2. The AI cognitive field satisfies Gauss's law for magnetism, of which the integral form is

$$\oint_S \mathbf{B} \cdot d\mathbf{A} = 0.$$
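As a numerical companion to Proposition 2 (a sketch with an illustrative loop-like field of our own choosing), one can check that a field circulating in closed loops has zero divergence:

```python
import numpy as np

# B = (-y, x, 0) circulates around the z-axis and has no sources or
# sinks; its divergence vanishes, matching div B = 0.
n, L = 32, 1.0
ax = np.linspace(-L, L, n)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
Bx, By, Bz = -Y, X, np.zeros_like(Z)

dx = ax[1] - ax[0]
div_B = (np.gradient(Bx, dx, axis=0)
         + np.gradient(By, dx, axis=1)
         + np.gradient(Bz, dx, axis=2))

print(np.abs(div_B).max())   # ~0: the loop field is divergence-free
```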
Now we can do a thought experiment. Assume an artificial intelligence field is trained in preparation for a standard educational test such as the SAT or GRE. When a real test is given, the artificial cognitive field works on it, choosing a possibly right answer from the options given. The artificial cognitive field may get it right or wrong; this is a typical Yes/No measurement. Notice that when ETS (Princeton) grades this test, it grades only the performance of the artificial cognitive field, by checking whether the answer to a test item is right or wrong. ETS does not look into the intelligence field to see how well this intelligent agent prepared for the test. Incidentally, this grading job is done by machine.
In LLMs, the inverse testing is a Yes/No measurement; embedding the angle-distance into 0-1 bits is a Yes/No measurement. Hence, the artificial cognitive field has already been partially embodied in LLMs.
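One possible reading of "embedding the angle-distance into 0-1 bits" is to threshold the cosine similarity between embedding vectors; the following sketch assumes that reading, and the function `yes_no` and the 0.5 threshold are our hypothetical choices, not the paper's procedure.

```python
import numpy as np

# Threshold the cosine similarity (the angle-distance) between two
# embedding vectors to produce a Yes(1)/No(0) bit.
def yes_no(u: np.ndarray, v: np.ndarray, threshold: float = 0.5) -> int:
    cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return int(cos >= threshold)   # 1 = Yes, 0 = No

rng = np.random.default_rng(0)
a = rng.normal(size=768)            # hypothetical embedding vectors
b = a + 0.1 * rng.normal(size=768)  # near-duplicate: small angle
c = rng.normal(size=768)            # unrelated: nearly orthogonal

print(yes_no(a, b), yes_no(a, c))   # typically 1 0
```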
4. Faraday’s Law
Let us start from a quote by C. N. Yang (2014), Nobel laureate in Physics. One day, Faraday found that moving a bar magnet either into or out of a solenoid would generate electric currents in the solenoid; thus he had discovered electromagnetic induction. He was especially impressed by two facts: that the magnet must be moved to produce induction, and that the induction seemed to produce effects perpendicular to the cause. Yang notes that, as Faraday wrote,
“If we endeavor to consider electricity and magnetism as the results of two faces of a physical agent, or a peculiar condition of matter, exerted in determinate directions perpendicular to each other, then, it appears to me that we must consider these two states or forces as convertible into each other in a greater or smaller degree.”
These quotations illustrate one thing: in artificial intelligence, the intelligence field and the cognitive field are two faces of the same artificial agent, and the two fields lie in perpendicular directions. There has been a recent debate between Chomsky and Hinton (2025, Dublin lectures); we find that Chomsky stands for linguistic “electric intelligence” while Hinton stands for linguistic “magnetic cognition”. Faraday's law reflects a piece of common sense: the harder an LLM is trained, the more knowledge it learns, and the deeper its computational architecture, the better the artificial intelligence and the stronger its capacity.
In 1851, Thomson introduced what we now call the vector potential $\mathbf{A}$ to express the magnetic field $\mathbf{H}$ through

$$\mathbf{H} = \nabla \times \mathbf{A},$$

an equation that would be crucially important for Maxwell. The equation of Faraday's law is as follows:

$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t},$$

where $\nabla \times$ is the curl operator. For reference, we provide the definition of the curl in the Cartesian coordinate system:

$$\nabla \times \mathbf{E} = \left( \frac{\partial E_z}{\partial y} - \frac{\partial E_y}{\partial z},\; \frac{\partial E_x}{\partial z} - \frac{\partial E_z}{\partial x},\; \frac{\partial E_y}{\partial x} - \frac{\partial E_x}{\partial y} \right).$$

The integral form of Faraday's law is

$$\oint_{\partial S} \mathbf{E} \cdot d\boldsymbol{\ell} = -\frac{d}{dt} \int_S \mathbf{B} \cdot d\mathbf{A}.$$
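For a concrete check of the curl operator defined above (a sketch with a test field of our own choosing), the following code recovers the analytic curl numerically:

```python
import numpy as np

# For E = (-y, x, 0), the analytic curl is (0, 0, 2).
n, L = 32, 1.0
ax = np.linspace(-L, L, n)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
Ex, Ey, Ez = -Y, X, np.zeros_like(Z)
dx = ax[1] - ax[0]

d = lambda F, axis: np.gradient(F, dx, axis=axis)   # partial derivative
curl = np.stack([d(Ez, 1) - d(Ey, 2),    # dEz/dy - dEy/dz
                 d(Ex, 2) - d(Ez, 0),    # dEx/dz - dEz/dx
                 d(Ey, 0) - d(Ex, 1)])   # dEy/dx - dEx/dy

print(curl[2].mean())   # ~2.0, matching the analytic z-component
```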
Now we can run a thought experiment and predict that the deeper and the more frequently the AI cognition works on a task, the stronger the artificial intelligence becomes in general. Imagine that an LLM system is trained to take a standard educational test such as the SAT or GRE, and assume this model never took a course in logic. Keep giving the system verbal tasks such as logical reasoning problems, and the system returns Yes/No gradings. This can be seen as a cognitive process. The prediction is that the system will eventually gain logical intelligence. In Dirac bra-ket formalism, this thought experiment can be formulated as

$$\langle \chi \,|\, T_x \,|\, \psi \rangle,$$

where $|\psi\rangle$ denotes the AI system, $\langle \chi |$ denotes the training experiment, and $T$ denotes the training items. During the training process, for each stimulus $x$ from $T$, the system $\psi$ responds Yes or No. Thus, $\langle \chi | T_x | \psi \rangle$ becomes a function of $x$; this function is a wavefunction. Feynman calls $|\psi\rangle$ the initial state and $\langle \chi |$ the final state of the system. He also refers to this measurement process as the reduction procedure in quantum mechanics. The above process is an artificial cognitive process, and by Faraday's law this process creates artificial intelligence.
In order to create an electric field, the magnetic field needs to keep moving and changing. Consider an LLM system as a solenoid and verbal tasks as a bar magnet, and keep moving tasks in and out (a stimulus-response process). This cognitive process would generate an intellectual current; thus, the system would exhibit intelligence induction. But why and how does this happen? Maxwell discovered that the moving bar magnet brings in the magnetic potential, which generates the electric induction. In other words, it is the cognitive potential that causes the intelligence induction.
5. Ampère-Maxwell Law
Imagine the following situation. Assume the AI agent is taking a logic test of ten reasoning items. The first item and the last item have the same logical structure but are verbalized in different ways. From a logical perspective, we may treat this test as a logic circuit. Let $J_i$ be the intelligence current for item $i$ ($i = 1, \dots, 10$). Accordingly, let $B_i$ be the cognitive field caused by each $J_i$. Now we can formulate a logical dilemma. On one hand, each $B_i$ supposes a loop closure; on the other hand, $B_1$ should have a certain effect on $B_{10}$. How could this be possible? Here is the explanation. Let $\mathbf{B} = \nabla \times \mathbf{A}$, and call $\mathbf{A}$ the cognitive vector potential. This potential can also be written in integral form. Obviously, $\mathbf{A}$ contains gauge freedom: the map from $\mathbf{A}$ to any given $\mathbf{B}$ is many-to-one. In this sense, we say the redundant potentials are eliminated by taking an appropriate gauge, such as the Lorenz gauge, to achieve gauge symmetry. Treating B as the cognitive potential of the whole test, the intelligence current has moved from $J_1$ to $J_{10}$; we call this the displacement intelligence-current.
The above is the AI version of Ampère's law. The notion of displacement current and the potential analysis above are due to Maxwell; therefore, the equation of Ampère's law is named the Ampère-Maxwell equation, which is represented as follows:

$$\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.$$

This equation can be abbreviated as

$$\nabla \times \mathbf{B} = \mu_0 \left( \mathbf{J} + \mathbf{J}_d \right),$$

where $\mathbf{J}_d = \varepsilon_0 \, \partial \mathbf{E} / \partial t$ is the displacement current.
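To illustrate the displacement-current term numerically (our sketch; the field amplitude and drive frequency are arbitrary choices), consider a uniform sinusoidal field:

```python
import numpy as np

EPS0 = 8.854e-12   # vacuum permittivity, F/m

# The displacement current density is J_d = eps0 * dE/dt. For a driven
# uniform field E(t) = E0 sin(w t), J_d(t) = eps0 * E0 * w * cos(w t).
E0, w = 1e3, 2 * np.pi * 50          # 1 kV/m field driven at 50 Hz
t = np.linspace(0, 0.04, 1000)       # two periods
E = E0 * np.sin(w * t)
J_d = EPS0 * np.gradient(E, t)       # numerical eps0 * dE/dt

print(J_d.max())                     # ~eps0 * E0 * w ≈ 2.8e-6 A/m^2
```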
In gauge field theory, particularly in quantum electrodynamics, a clear distinction is made between the gauge vector potential and the field strength. The gauge potential is written in component form as $A_\mu$ (Zee, 2003), and the field strength $F_{\mu\nu}$ is calculated from

$$F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu.$$

In order to achieve local symmetry, one introduces the covariant derivative $D_\mu$ and the gauge field $A_\mu$, represented as follows:

$$D_\mu = \partial_\mu - i e A_\mu,$$

where both the matter field $\psi$ and $A_\mu$ are functions of $x$ at the local level, and $A_\mu$ is used to balance out variations of the dynamic phase $e^{i\theta(x)}$ such that quantum electrodynamics (QED) satisfies the $U(1)$ gauge symmetry.
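The $U(1)$ statement can be verified symbolically; the following sketch (our illustration, using sympy) checks that the field strength $F_{\mu\nu}$ is unchanged under $A_\mu \to A_\mu + \partial_\mu \theta$ for an arbitrary $\theta(x)$:

```python
import sympy as sp

# Check that F_{mu nu} = d_mu A_nu - d_nu A_mu is invariant under the
# U(1) gauge transformation A_mu -> A_mu + d_mu theta.
x = sp.symbols("x0:4")                        # spacetime coordinates
A = [sp.Function(f"A{m}")(*x) for m in range(4)]
theta = sp.Function("theta")(*x)              # arbitrary gauge function

def F(Apot, m, n):
    return sp.diff(Apot[n], x[m]) - sp.diff(Apot[m], x[n])

A_gauged = [A[m] + sp.diff(theta, x[m]) for m in range(4)]

for m in range(4):
    for n in range(4):
        # Mixed partials of theta commute, so the difference vanishes.
        assert sp.simplify(F(A, m, n) - F(A_gauged, m, n)) == 0

print("F_{mu nu} is gauge invariant")
```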
6. Conclusion
As C. N. Yang pointed out, it was Thomson's view of the vector potential that led Maxwell to a unified representation of the two Gauss's laws, Faraday's law, and the Ampère-Maxwell law. Maxwell's equations paved the road to modern gauge field theory. This paper has applied the Maxwell equations to artificial intelligence from both modeling and conceptual perspectives. This is also the road that leads to a gauge field theory of AI.
In artificial intelligence, we make a distinction between intelligence and cognition, modeled by the electric field and the magnetic field respectively. We also make a distinction between the vector potential and the field strength. This conceptual treatment is crucial for finding the Maxwell structure in artificial intelligence. This is a road that leads to dynamic analyses in AI.
Penrose (2004) proposes a three-world picture of reality: the physical world, the mental world, and the Platonic world, connected by bidirectional projections from one world to another. This paper treats AI as a field in the Platonic world, the Maxwell equations as a field of the physical world, and human intelligence and cognition as a domain of the mental world. We have provided a sample picture of the projections among the three domains.
References
- Penrose, R. (2004). The Road to Reality: A Complete Guide to the Laws of the Universe. Vintage Books, New York.
- Schey, H. M. (1973/2005). Div, Grad, Curl, and All That: An Informal Text on Vector Calculus. W. W. Norton and Company, New York.
- Searle, J. (1984). Minds, Brains, and Science. Harvard University Press. Cambridge, MA.
- von Neumann, J. (1955). The Mathematical Foundations of Quantum Mechanics. Princeton University Press. Princeton, New Jersey.
- Yang, C. N. (2014). The conceptual origins of Maxwell's equations and gauge theory. Physics Today, 67(11), 45.
- Yang, Y. (2025). Maxwell and Artificial Intelligence: Preliminary QED Models of AI (LLM) Dynamics. Preprint.
- Zee, A. (2003). Quantum Field Theory in A Nutshell. Princeton University Press. Princeton, New Jersey.