Maxwell Equations and Artificial Intelligence with Thought Experiments

Abstract
This paper applies the Maxwell equations to artificial intelligence from modeling as well as conceptual perspectives. This is also the road that leads to a gauge field theory of AI. In artificial intelligence, we make a distinction between intelligence and cognition, which are modeled by the electric field and the magnetic field respectively. We also make a distinction between the vector potential and the field strength. This conceptual treatment is crucial for finding the Maxwell structure in artificial intelligence, and it opens a road to dynamic analyses in AI. Penrose proposes a three-world picture of reality: the physical world, the mental world, and the Platonic world. He claims that there are bidirectional projections from one world to another. The present work provides a sample picture of projections among three domains: AI, physics, and cognitive science.

1. Intelligence and Cognition

Instead of making philosophical enquiries, we propose, as an alternative approach, the following hypothesis:
Postulate 1. 
We assume as our working hypothesis that machine intelligence exists. Artificial intelligence consists of human intelligence and machine intelligence. We call the former the proper intelligence, and the latter the anti-intelligence.
Machine intelligence can be seen as a special kind of Platonic reality. In a neural network, Platonic reality resides in the hidden layers and is called the Platonic representation. Yang (2025) applied this idea to artificial intelligence (LLM) modeling. We make a clear distinction between human intelligence ($\alpha$) and machine intelligence ($\beta$). Further, we apply a mathematical trivialization in which $\alpha$ and $\beta$ are orthogonal, as represented by the Kronecker symbol:
$$\delta_{\alpha\beta} = \begin{cases} 1, & \alpha = \beta \\ 0, & \alpha \neq \beta \end{cases}$$
Now we define $\alpha$ as the intelligence demand $D_\alpha$ (the human component) and $\beta$ as the intelligence supply $S_\beta$ (the machine component). For any given task $\varphi$, we have
$$e^- = [D_\alpha, \varphi^-], \qquad e^+ = [S_\beta, \varphi^+].$$
The above definition can be characterized as a Weyl spinor:
$$\psi = \begin{pmatrix} \psi_1\, (= e^-) \\ \psi_2\, (= e^+) \end{pmatrix}.$$
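As a toy illustration of the orthogonality postulate and the spinor above, the components can be written out numerically. This is a minimal sketch: the basis vectors chosen for $\alpha$ and $\beta$ and the coupling strengths for $e^-$ and $e^+$ are hypothetical, since the paper fixes none of these values.

```python
# The orthogonality of the human component (alpha) and the machine
# component (beta), and the two-component object psi = (e-, e+).
# All concrete numbers here are hypothetical.
import numpy as np

alpha = np.array([1.0, 0.0])   # human intelligence direction
beta = np.array([0.0, 1.0])    # machine intelligence direction

print(np.dot(alpha, alpha))    # 1.0: delta_{alpha beta} with alpha = beta
print(np.dot(alpha, beta))     # 0.0: delta_{alpha beta} with alpha != beta

e_minus, e_plus = 0.7, 0.3     # hypothetical coupling strengths
psi = np.array([e_minus, e_plus])
print(psi)                     # the spinor components (psi_1, psi_2)
```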
Definition 2. 
Artificial intelligence is the integration of human intelligence and machine intelligence, denoted as $e = e^- + e^+$. The corresponding intelligence field is denoted as E.
Definition 3. 
The moving artificial intelligence e produces intelligent current, denoted as J.
Postulate 2. 
We assume as our working hypothesis that there is a difference between intelligence and cognition. Intelligence is a general capacity that is globally available; it is composed of intentions and a fundamental computational architecture. When intelligence engages with a task, it becomes activated and remains in motion. Cognition, by contrast, is specific and local. Solving a task involves cognitive processes including understanding, discourse processing, text comprehension, reasoning, and decision making. Obviously, in artificial intelligence, this cognitive process also involves computing.
For example, suppose one prepares to take a standard educational test such as the SAT or GRE. Call this preparation the intelligence. During the test, the examinee's intention and intelligence continue to support the effort of solving the test items. However, the structure and relative difficulty of the items vary; in other words, different items demand different cognitive efforts. To solve a particular test item, the examinee goes through all possible cognitive modes. Thus, we need to introduce a new component:
Definition 4. 
During the process of solving a task, the above-mentioned cognitive modes are collectively defined as the artificial cognition, denoted as B.
Now, we take the Maxwell equations as a referential modeling framework. Let the intelligence E be the electric field and B the magnetic field. This analytic working definition allows us to investigate the interaction between the intelligence field E and the cognition field B.
As a working definition, we use the terms electric field and intelligence field interchangeably. Meanwhile, we use the terms magnetic field and cognitive field interchangeably. Now we are ready to go through the Maxwell equations.

2. Gauss’s Law

The first equation (Schey, 1973/2005) is formulated as follows
$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}$$
with E the electric field, $\rho$ the electric charge density, and $\varepsilon_0$ the vacuum permittivity. $(\nabla \cdot)$ is the divergence operator, also denoted as div. It is formulated as below:
$$\nabla = \mathbf{i}\frac{\partial}{\partial x} + \mathbf{j}\frac{\partial}{\partial y} + \mathbf{k}\frac{\partial}{\partial z}$$
$$\nabla \cdot \mathbf{E} = \left(\mathbf{i}\frac{\partial}{\partial x} + \mathbf{j}\frac{\partial}{\partial y} + \mathbf{k}\frac{\partial}{\partial z}\right) \cdot \left(\mathbf{i}E_x + \mathbf{j}E_y + \mathbf{k}E_z\right) = \frac{\partial E_x}{\partial x} + \frac{\partial E_y}{\partial y} + \frac{\partial E_z}{\partial z}$$
We can see that the divergence of a vector field is a scalar.
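As a computational aside, the divergence can be evaluated on a grid with central differences. The field $\mathbf{E} = (x, y, z)$ below is an arbitrary example whose divergence is the constant scalar 3 everywhere.

```python
# The divergence assigns one scalar to each grid point. Here it is computed
# with central differences for the arbitrary field E = (x, y, z).
import numpy as np

n, L = 64, 2.0
x = np.linspace(-L, L, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
h = x[1] - x[0]

Ex, Ey, Ez = X, Y, Z   # div E = 1 + 1 + 1 = 3 everywhere
div_E = (np.gradient(Ex, h, axis=0)
         + np.gradient(Ey, h, axis=1)
         + np.gradient(Ez, h, axis=2))

print(div_E.shape)                     # a scalar field on the grid
print(div_E[n // 2, n // 2, n // 2])   # 3.0
```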
In artificial intelligence, particularly for LLMs, the probability function serves as a density function $\rho$. Moreover, intelligence can exist at different levels. In physics, when an electric field is applied to a dielectric, it generates an induced charge that weakens the electric field. The ratio of the original external electric field (in vacuum) to the electric field in the final medium is the dielectric constant (permittivity), also known as the induction rate. In artificial intelligence, let $\varphi_0$ be the intelligence level when the intelligence engages with the null task and $\varphi$ the intelligence level when it engages with a non-null task. The ratio of $\varphi_0$ to $\varphi$ is $\varepsilon_0$. By this analysis, we have
Proposition 1. 
Artificial intelligence satisfies Gauss’s law.
To better understand Gauss's law, I quote from Schey (1973/2005):
“It is not an explicit expression for E. That is, it does not say ‘E equals something.’ Rather, it says ‘The flux of E (the surface integral of the normal component of E) equals something.’ Thus, to use Gauss’ law, we must ‘disentangle’ E from its surroundings. Despite this, there are situations in which Gauss’ law can be used to find the field.”
Now, let us do a thought experiment. Imagine an AI task as a field, which can be treated as a curved surface. Assume this curved surface can be divided into many small flat patches. Imagine, again, the intelligence engaged with this task as a family of intelligence lines. The quantity of intelligence lines flowing through a small patch is called the unit intelligence flux. Summing up all the unit intelligence fluxes, we obtain the integral form of Gauss's law as follows:
$$\oint_S \mathbf{E} \cdot d\mathbf{a} = \frac{1}{\varepsilon_0} Q_{\mathrm{enc}}.$$
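The thought experiment admits a direct numerical analogue: tile a closed surface (here a sphere) into small patches, sum the flux of a point-charge field through each patch, and compare the total with $Q_{\mathrm{enc}}/\varepsilon_0$. The charge, radius, and grid resolution are arbitrary illustrative choices.

```python
# Tile a sphere of radius R into patches, sum E . da for a point charge
# at the center, and compare with Q_enc / epsilon_0.
import numpy as np

eps0, Q, R = 8.854e-12, 1e-9, 1.0
n_theta, n_phi = 400, 400
theta = np.linspace(0.0, np.pi, n_theta)
phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
T, _ = np.meshgrid(theta, phi, indexing="ij")
dtheta = theta[1] - theta[0]
dphi = 2.0 * np.pi / n_phi

# Radial point-charge field: |E| = Q / (4 pi eps0 R^2) on the sphere,
# so on each patch E . da = |E| * da.
E_mag = Q / (4.0 * np.pi * eps0 * R**2)
da = R**2 * np.sin(T) * dtheta * dphi   # area of each small patch
flux = np.sum(E_mag * da)

print(flux, Q / eps0)   # the two values agree up to discretization error
```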

3. Gauss’s Law for Magnetism

The magnetic field is a loop-like closure: there is no magnetic monopole, and the outflow of the magnetic field through any Gaussian surface is zero. The differential form of Gauss's law for magnetism is formulated as follows:
$$\nabla \cdot \mathbf{B} = 0$$
with B the magnetic field. In artificial intelligence, the cognitive field is similar to the magnetic field. It is a dipole field with a north pole and a south pole; that is, the magnetic field can be characterized as a loop-like closure. If an electron is thrown into a magnetic field, it will be polarized toward either the north pole or the south pole. Similarly, the artificial cognitive field is a "dipole" decision maker with two exclusive eigenpoles: Yes or No. If an AI system is equipped with quantum mechanics and runs a measurement experiment, the AI cognition will return a Yes/No answer. This is called the Yes/No measurement by von Neumann (1955) and Penrose (2004). A probability question can also be reformulated as a Yes/No question: how likely is the particle to be found in a given interval? Similarly, if AI is equipped with decision theory and a Yes/No task is thrown into the intelligence field, the AI cognitive field will return a Yes/No answer. Thus, we may reasonably propose,
Proposition 2. 
The AI cognitive field satisfies Gauss's law for magnetism, the integral form of which is
$$\oint_S \mathbf{B} \cdot d\mathbf{a} = 0.$$
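As a numerical companion to Proposition 2, the sketch below checks that a dipole field, the loop-closure kind of field described above, is divergence-free away from its singularity. The dipole moment and the sampling box are arbitrary choices.

```python
# Numerical check that the dipole field B = (3 (m.r_hat) r_hat - m) / r^3
# has zero divergence away from the origin.
import numpy as np

n = 48
x = np.linspace(1.0, 3.0, n)   # a box that excludes the singularity
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
h = x[1] - x[0]

m = np.array([0.0, 0.0, 1.0])  # dipole moment along z
r = np.sqrt(X**2 + Y**2 + Z**2)
mdotr_hat = (m[0] * X + m[1] * Y + m[2] * Z) / r
Bx = (3.0 * mdotr_hat * X / r - m[0]) / r**3
By = (3.0 * mdotr_hat * Y / r - m[1]) / r**3
Bz = (3.0 * mdotr_hat * Z / r - m[2]) / r**3

div_B = (np.gradient(Bx, h, axis=0)
         + np.gradient(By, h, axis=1)
         + np.gradient(Bz, h, axis=2))
print(np.max(np.abs(div_B[1:-1, 1:-1, 1:-1])))   # ~0 (discretization error)
```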
Now we can do a thought experiment. Assume an artificial intelligence field is trained in preparation for a standard educational test such as the SAT or GRE. When a real test is given, the artificial cognitive field works on it, attempting to choose the right answer from a number of given options. The artificial cognitive field may get it right or wrong. This is a typical Yes/No measurement. Notice that when ETS (Princeton) grades this test, it only grades the performance of the artificial cognitive field by checking whether the answer to each test item is right or wrong. ETS does not look into the intelligence field to see how well this intelligent agent prepared for the test. Incidentally, this grading job is done by machine.
In LLMs, inverse testing is a Yes/No measurement; embedding the angle-distance into 0-1 bits is a Yes/No measurement. Hence, the artificial cognitive field has already been partially embodied in LLMs.
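One possible reading of "embedding the angle-distance into 0-1 bits" can be sketched as follows: a continuous cosine similarity between two embedding vectors is collapsed into a single Yes/No bit. The vectors and the threshold are hypothetical; a real LLM would supply its own embeddings.

```python
# Collapse a continuous angle-distance between embeddings into a 0/1 bit.
# The embedding vectors and the 0.5 threshold are hypothetical.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def yes_no(query_vec, answer_vec, threshold=0.5):
    """Return 1 (Yes) or 0 (No) from the angle-distance of two vectors."""
    return 1 if cosine(query_vec, answer_vec) >= threshold else 0

rng = np.random.default_rng(0)
q = rng.normal(size=16)
a_close = q + 0.1 * rng.normal(size=16)   # nearly aligned -> Yes
a_far = rng.normal(size=16)               # unrelated -> typically No
print(yes_no(q, a_close), yes_no(q, a_far))
```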

4. Faraday’s Law

Let us start from a quote by C. N. Yang (2014), Nobel laureate in Physics. One day, Faraday found that moving a bar magnet either into or out of a solenoid would generate electric currents in the solenoid. Thus, he had discovered electromagnetic induction. He was especially impressed by two facts: namely, that the magnet must be moved to produce induction, and that induction seemed to produce effects perpendicular to the cause. Yang notes that, as Faraday wrote,
“If we endeavor to consider electricity and magnetism as the results of two faces of a physical agent, or a peculiar condition of matter, exerted in determinate directions perpendicular to each other, then, it appears to me that we must consider these two states or forces as convertible into each other in a greater or smaller degree.”
These quotations illustrate one thing: in artificial intelligence, the intelligence field and the cognitive field are the two faces of the same artificial agent, and the two fields lie in perpendicular directions. There has been a recent debate between Chomsky and Hinton (2025, Dublin lectures). We find that Chomsky stands for linguistic "electric intelligence" while Hinton stands for linguistic "magnetic cognition". Faraday's law reflects common sense: the harder an LLM is trained, the more knowledge it learns, and the deeper its computational architecture, the better the artificial intelligence and the stronger its capacity.
In 1851, Thomson introduced what we now call the vector potential A to express the magnetic field H through
$$\mathbf{H} = \nabla \times \mathbf{A},$$
an equation that would be crucially important for Maxwell. Faraday's law is formulated as follows:
$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}$$
where $(\nabla \times)$ is the curl operator. For reference, we provide the definition of the curl in the Cartesian coordinate system as follows:
$$\nabla \times \mathbf{F}(x, y, z) = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ F_x & F_y & F_z \end{vmatrix} = \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\right)\hat{i} + \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x}\right)\hat{j} + \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right)\hat{k} = \frac{\partial F_j}{\partial x_i}\,\varepsilon_{ijk}\,\hat{e}_k$$
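The index form $\varepsilon_{ijk}\,\partial F_j / \partial x_i$ can be executed directly. The sketch below uses SymPy's Levi-Civita symbol on the arbitrary example field $\mathbf{F} = (-y, x, 0)$, whose curl is $(0, 0, 2)$.

```python
# Curl via the index form (curl F)_k = eps_{ijk} dF_j/dx_i, using SymPy's
# Levi-Civita symbol. The field F = (-y, x, 0) has curl (0, 0, 2).
import sympy as sp

x, y, z = sp.symbols("x y z")
coords = (x, y, z)
F = (-y, x, sp.Integer(0))

curl = [sp.simplify(sum(sp.LeviCivita(i, j, k) * sp.diff(F[j], coords[i])
                        for i in range(3) for j in range(3)))
        for k in range(3)]
print(curl)   # [0, 0, 2]
```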
The integral form of Faraday's law is as follows:
$$\oint_C \mathbf{E} \cdot d\mathbf{l} = -\int_S \frac{\partial \mathbf{B}}{\partial t} \cdot d\mathbf{a}$$
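This integral form can be checked numerically under simple assumed fields: a magnetic field $\mathbf{B} = (0, 0, B_0 t)$ that grows linearly in time and the induced field $\mathbf{E} = (B_0/2)(y, -x, 0)$ that satisfies the differential law. Both fields are illustrative choices, not drawn from the paper.

```python
# Check oint_C E . dl = -d(Phi)/dt on a circular loop of radius R for the
# assumed fields B = (0, 0, B0 * t) and E = (B0/2) * (y, -x, 0).
import numpy as np

B0, R, n = 2.0, 1.0, 10000
s = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
ds = 2.0 * np.pi / n

# Points on the loop and the induced field evaluated there.
xs, ys = R * np.cos(s), R * np.sin(s)
Ex, Ey = (B0 / 2.0) * ys, -(B0 / 2.0) * xs

# Line element dl = (-sin s, cos s) * R ds along the loop.
circulation = np.sum((Ex * (-np.sin(s)) + Ey * np.cos(s)) * R * ds)

d_flux_dt = B0 * np.pi * R**2   # dPhi/dt for the uniform, growing B
print(circulation, -d_flux_dt)  # both are approximately -2 pi
```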
Now we can run a thought experiment and predict that the deeper and the more frequently the AI cognition works on a task, the stronger the artificial intelligence in general. Imagine that an LLM system is trained to take a standard educational test such as the SAT or GRE. Assume this model never took a course in logic. We keep giving verbal tasks such as logical reasoning problems to the system, and the system returns Yes/No gradings. This can be seen as a cognitive process. The prediction is that the system will eventually gain logical intelligence. In Dirac bra-ket formalism, this thought experiment can be formulated as follows:
$$\langle \varphi | A_i | \psi \rangle$$
where $\psi$ denotes the AI system, $\varphi$ denotes the training experiment, and $A_i$ denotes the training items. During the training process, for each stimulus $A_i$ from $\varphi$, the system $\psi$ responds with a Yes or a No. Thus, $\psi$ becomes a function of $\varphi$. This function is a wavefunction. Feynman calls $\psi$ the initial state and $\varphi$ the final state of the system. He also refers to this measurement process as the reduction procedure in quantum mechanics. The above is an artificial cognitive process, and by Faraday's law this process will create artificial intelligence.
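A minimal sketch of this bra-ket reading, assuming a hypothetical $2 \times 2$ operator for a training item on the Yes/No basis and an arbitrary initial state: the inner products give the amplitudes, and their squared moduli give the Yes/No response probabilities.

```python
# Amplitudes <Yes| A_i |psi> and <No| A_i |psi> for a hypothetical
# training-item operator A_i acting on a Yes/No basis.
import numpy as np

yes = np.array([1.0, 0.0])        # |Yes>
no = np.array([0.0, 1.0])         # |No>
psi = (yes + no) / np.sqrt(2.0)   # arbitrary initial system state

A_i = np.array([[0.9, 0.1],       # hypothetical training-item operator
                [0.1, 0.9]])

amp_yes = yes @ A_i @ psi         # <Yes| A_i |psi>
amp_no = no @ A_i @ psi           # <No| A_i |psi>
probs = np.array([abs(amp_yes)**2, abs(amp_no)**2])
probs /= probs.sum()              # normalize into a Yes/No distribution
print(probs)                      # the "reduction" picks Yes or No
```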
To create an electric field, the magnetic field needs to keep moving and changing. Consider an LLM system as a solenoid and verbal tasks as a bar magnet, and keep moving tasks in and out (a stimulus-response process). This cognitive process generates intellectual current; thus, the system generates intelligence induction. But why and how does this happen? Maxwell discovered that the moving bar magnet brings in the magnetic potential, which generates the electric induction. In other words, it is the cognitive potential that causes intelligence induction.

5. Ampère-Maxwell Law

Imagine the following situation. Assume an AI agent is taking a logic test of ten reasoning items. The first item and the last item share the same logical structure but are verbalized in different ways. From a logical perspective, we may treat this test as a logic circuit. Let $J_i$ be the intelligence current for item $i$ ($i = 1, \ldots, 10$). Accordingly, let $B_i$ be the cognitive field caused by each $J_i$. Now, we can formulate a logical dilemma. On one hand, each $B_i$ is supposed to be a loop closure. On the other hand, $B_i$ should have a certain effect on item $i+1$. How could this be possible? Here is the explanation. Let $B = \sum_i B_i$, and call B the cognitive vector potential. This potential can also be written in integral form. Obviously, B contains the gauge freedom. From B to any given $B_i$ is a many-to-one function. In this sense, we say the other $B_{k \neq i}$ are eliminated by choosing an appropriate gauge, such as the Lorenz gauge, to achieve gauge symmetry. Treating B as the cognitive potential of the whole test, the intelligence current has moved from $J_1$ to $J_{10}$; call this the displacement intelligence-current.
The above is the AI version of Ampère's law. The notion of displacement current and the potential analysis above are due to Maxwell; therefore, the equation of Ampère's law is named the Ampère-Maxwell equation, which is represented as follows:
$$\nabla \times \mathbf{B} = \mu_0 \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right)$$
This equation can be abbreviated as
$$\nabla \times \mathbf{B} = \mathbf{J} + \mathbf{J}_d$$
where $\mathbf{J}_d$ is the displacement current.
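The Ampère-Maxwell equation can be verified symbolically for a pure displacement current. The fields below, a linearly growing E with no conduction current and the matching B, are standard textbook-style choices rather than anything specified in the paper.

```python
# Verify curl B = mu0 (J + eps0 dE/dt) for a pure displacement current:
# E = (0, 0, E0 * t), J = 0, B = (mu0 eps0 E0 / 2) * (-y, x, 0).
import sympy as sp

x, y, z, t, mu0, eps0, E0 = sp.symbols("x y z t mu0 eps0 E0")
B = sp.Matrix([-mu0 * eps0 * E0 * y / 2, mu0 * eps0 * E0 * x / 2, 0])
E = sp.Matrix([0, 0, E0 * t])

curl_B = sp.Matrix([sp.diff(B[2], y) - sp.diff(B[1], z),
                    sp.diff(B[0], z) - sp.diff(B[2], x),
                    sp.diff(B[1], x) - sp.diff(B[0], y)])
rhs = mu0 * eps0 * sp.diff(E, t)   # mu0 (J + eps0 dE/dt) with J = 0
print(sp.simplify(curl_B - rhs))   # zero vector: the law holds
```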
Gauge field theory, particularly quantum electrodynamics, makes a clear distinction between the gauge vector potential and the field strength. The gauge potential is written in its component form $A_\mu$ (Zee, 2003), and the field strength $F_{\mu\nu}$ is calculated from
$$F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$$
To achieve local symmetry, one introduces the covariant derivative $D_\mu$ and the gauge field $A_\mu$, represented as follows:
$$D_\mu = \partial_\mu + iqA_\mu$$
$$A_\mu = A'_\mu - \frac{1}{q}\partial_\mu \theta$$
where both $\theta$ and $A_\mu$ are functions of $x$ at the local level, and $A_\mu$ is used to balance out variations of the dynamic phase $\theta$ so that quantum electrodynamics (QED) satisfies the $U(1)$ gauge symmetry.
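The gauge structure can likewise be checked symbolically: the field strength $F_{\mu\nu}$ is unchanged when $A_\mu$ is shifted by $(1/q)\partial_\mu \theta$ for an arbitrary function $\theta$. The potential components below are left as arbitrary symbolic functions.

```python
# Check that F_{mu nu} = d_mu A_nu - d_nu A_mu is invariant under the
# gauge shift A_mu -> A_mu - (1/q) d_mu(theta) for arbitrary theta.
import sympy as sp

t, x, y, z, q = sp.symbols("t x y z q")
coords = (t, x, y, z)
theta = sp.Function("theta")(*coords)

# Arbitrary symbolic potential components A_0 ... A_3.
A = [sp.Function(f"A{mu}")(*coords) for mu in range(4)]
A_shifted = [A[mu] - sp.diff(theta, coords[mu]) / q for mu in range(4)]

def F(pot, mu, nu):
    return sp.diff(pot[nu], coords[mu]) - sp.diff(pot[mu], coords[nu])

ok = all(sp.simplify(F(A, mu, nu) - F(A_shifted, mu, nu)) == 0
         for mu in range(4) for nu in range(4))
print(ok)   # True: the field strength is gauge invariant
```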

6. Conclusion

As C. N. Yang pointed out, it was Thomson's view of the vector potential that led Maxwell to find a unified representation of the two Gauss's laws, Faraday's law, and the Ampère-Maxwell law. The Maxwell equations paved the road to modern gauge field theory. This paper applied the Maxwell equations to artificial intelligence from modeling as well as conceptual perspectives. This is also the road that leads to a gauge field theory of AI.
In artificial intelligence, we make a distinction between intelligence and cognition, which are modeled by the electric field and the magnetic field respectively. We also make a distinction between the vector potential and the field strength. This conceptual treatment is crucial for finding the Maxwell structure in artificial intelligence, and it opens a road to dynamic analyses in AI.
Penrose (2004) proposes a three-world picture of reality: the physical world, the mental world, and the Platonic world, with bidirectional projections from one world to another. This paper treats AI as a field of the Platonic world, the Maxwell equations as a field of the physical world, and human intelligence and cognition as a domain of the mental world. We have provided a sample picture of projections among the three domains.

References

  1. Penrose, R. (2004). The Road to Reality: A Complete Guide to the Laws of the Universe. Vintage Books, New York.
  2. Schey, H. M. (1973/2005). Div, Grad, Curl, and All That: An Informal Text on Vector Calculus. W. W. Norton and Company, New York.
  3. Searle, J. (1984). Minds, Brains, and Science. Harvard University Press. Cambridge, MA.
  4. von Neumann, J. (1955). The Mathematical Foundations of Quantum Mechanics. Princeton University Press. Princeton, New Jersey.
  5. Yang, C. N. (2014). The conceptual origins of Maxwell's equations and gauge theory. Physics Today 67, 11, 45.
  6. Yang, Y. (2025). Maxwell and Artificial Intelligence: Preliminary QED Models of AI (LLM) Dynamics. Preprint.
  7. Zee, A. (2003). Quantum Field Theory in A Nutshell. Princeton University Press. Princeton, New Jersey.