Quantum Wave Probability Derives Thermodynamic Distributions

Abstract
Using the concept of quantum wave probability, combined with the identity principle, we can derive the Boltzmann distribution, the Fermi distribution, and the Bose distribution. Different distributions correspond to different conditions: the Boltzmann condition yields the Boltzmann distribution, the Fermi condition yields the Fermi distribution, and the Bose condition yields the Bose distribution. This demonstrates that the foundation of all three statistical distributions is quantum wave probability, and that all originate from quantum mechanics. The Boltzmann distribution is an independent quantum distribution, not simply a sparse limit of the Fermi or Bose distributions. The essence of the Boltzmann distribution is a uniform distribution; the Fermi and Bose distributions are deviations from the uniform distribution. Boltzmann entropy based on quantum wave probability can resolve the Gibbs paradox. Using this new approach, we can also derive the results of eigenstate thermalization: the equilibrium state is the state in which all eigenstates have the same temperature. We need to rethink the fundamentals of statistical mechanics.

1. Introduction

In classical statistical mechanics, Boltzmann entropy suffers from the problem known as the Gibbs paradox [1,2]. Gibbs attempted to resolve this paradox by assuming identical particles and dividing by $N!$. Although Gibbs's method is effective in mathematical calculations, it is actually not correct. The proof is as follows. The number of microscopic states calculated by the Boltzmann method is expressed by the following formula:
$$\Omega = \frac{N!}{\prod_i n_i!}\prod_i \varpi_i^{n_i}$$
The terms $N!$ and $n_i!$ are results obtained by exchanging distinguishable particles. $N!$ represents the increase in the number of microstates due to exchanging distinguishable particles. Within the same energy level, exchanging particles does not produce new microstates, so such exchanges lead to duplicate microstates, and these duplicates must be removed; this is why division by $n_i!$ is necessary. If the particles are identical, exchanging them does not generate new microstates, making it impossible to obtain the terms $N!$ and $n_i!$ at all. According to Gibbs's method, we assume identical particles and divide by $N!$ to resolve the Gibbs paradox. However, if the particles are identical, then although $N!$ is removed, exchanging particles no longer increases the number of microstates, so there is also no longer any reason to divide by $n_i!$: the $n_i!$ terms disappear as well. This means the Boltzmann distribution cannot be derived. Therefore, Gibbs's method can only be considered a mathematical trick and does not hold up logically in physics.
The popular approach is to treat the Boltzmann distribution as the sparse limit of the Bose or Fermi distributions, the so-called classical limit condition [3,4,5]. The sparse limit means that when particles are sparse, they can approximately be distinguished from one another and no longer behave as identical particles. But particles are fundamentally identical, so how could they become distinguishable in the sparse limit? Under what conditions are they identical, under what conditions are they distinguishable, and what governs the change? The physical origin of such a condition cannot be explained, and it contradicts the basic principles of quantum mechanics. Hence, this popular approach has a significant flaw.
Therefore, in quantum mechanics, correctly understanding the Boltzmann distribution and Boltzmann statistics is a major issue. This problem has always been ambiguous, difficult to comprehend, and has never been clearly resolved. How to clearly and self-consistently derive the Boltzmann distribution based on quantum mechanics is a very important question.
In previous papers, the author proposed the new concept of quantum wave probability [6]. By introducing this concept, the author derived a new concept of quantum wave entropy. The author found that, based on the concept of quantum wave probability, the Boltzmann distribution can be derived more simply and consistently, thus resolving these questions.
This paper provides a completely new perspective, new thinking, and new answers, inspiring people to rethink the fundamentals of statistical mechanics.

2. Quantum Wave Probability Derives the Boltzmann Distribution

In the previous paper, the author proposed an entirely new concept: quantum wave probability [6]. In quantum mechanics, the wavelength of a monochromatic particle wave represents a new distribution probability. Within a fixed spatial range, the smaller the wavelength, the higher the probability that the particle is excited within that range; conversely, the larger the wavelength, the lower the probability of particle excitation within that range. This new distribution property of particle probability has a probability density inversely proportional to the wavelength. The density of the quantum wave probability is expressed by formula (1.1).
$$dp = \varrho\,\frac{dr}{\lambda} \tag{1.1}$$
Here, $\varrho$ is a probability proportionality constant, $\lambda$ is the wavelength of the particle wave, and $dp$ represents the probability over a length $dr$. The numerical value of the proportionality constant $\varrho$ is $\pi$. To distinguish it from symbols commonly used in thermodynamics, the probability proportionality constant is denoted $\varrho$, as in the previous papers; please note this distinction.
For example, within a range of length L, the total probability of particle excitation is the integral of the probability density.
$$p = \int_0^L dp = \int_0^L \varrho\,\frac{dr}{\lambda} = \varrho\,\frac{L}{\lambda} \tag{1.2}$$
In fact, here the particle wavelength λ can be regarded as a unit size. One particle wavelength is equivalent to one unit length. A unit can actually be equivalently viewed as a phase space unit. This corresponds to the phase space cell in classical statistical mechanics. The particle wavelength is the unit length of a one-dimensional phase space cell.
Similar to the phase space cells in classical statistical mechanics, in three-dimensional space, the particle wavelengths along different dimensions are independent of each other, so the probability densities of particles in different dimensions are also independent. The probability density within a three-dimensional space unit is given by formula (1.3).
$$dp = \frac{\varrho\,dr_1}{\lambda_1}\,\frac{\varrho\,dr_2}{\lambda_2}\,\frac{\varrho\,dr_3}{\lambda_3} = \frac{\varrho^3\,dV}{\lambda_1\lambda_2\lambda_3} \tag{1.3}$$
We simplify the model by assuming that the particle has the same wavelength in the three dimensions, which means the momentum components in the three dimensions are the same, leading to formula (1.4).
$$dp = \frac{\varrho^3\,dV}{\lambda^3} \tag{1.4}$$
Formula (1.4) is completely analogous to the phase space of classical statistical mechanics. For a three-dimensional region of volume V, the total probability that a particle with wavelength λ is excited is given by formula (1.5).
$$p = \int_0^V dp = \int_0^V \frac{\varrho^3\,dV}{\lambda^3} = \frac{\varrho^3\,V}{\lambda^3} \tag{1.5}$$
Discussions in three-dimensional space are complicated and difficult to follow, so we simplify the problem by first discussing probability in one-dimensional space. The total excitation probability of a particle in a one-dimensional space of length L is given by formula (1.2). In formula (1.2), the particle's wavelength λ serves as a natural spatial unit boundary: each distance of length λ corresponds to one space cell. The total number of cells within the length L, denoted m, is given by formula (1.6); for a free particle in a monochromatic wave, the total excitation probability within the length L is numerically equal to this cell count. This result implies the assumption that only one particle can be excited in each cell, and that the excitation probability of a particle is the same in every cell.
$$m = \varrho\,\frac{L}{\lambda} \tag{1.6}$$
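As a quick numerical illustration of formula (1.6), the minimal sketch below computes the cell count m from the de Broglie relation λ = h/P (formula (1.26), introduced later in this section), with ϱ = π as stated above. The momentum and box length are purely illustrative values, not taken from the paper.

```python
import math

h = 6.62607015e-34   # Planck constant (J*s)
rho = math.pi        # probability proportionality constant, rho = pi (see text)

# Hypothetical example values: a particle of momentum P in a region of length L.
P = 1.0e-24          # momentum (kg*m/s), illustrative only
L = 1.0e-6           # one-dimensional region length (m), illustrative only

lam = h / P          # de Broglie wavelength, formula (1.26)
m = rho * L / lam    # number of cells, formula (1.6)

print(f"wavelength lambda = {lam:.3e} m, cell count m = {m:.1f}")
```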
Now suppose there are n free monochromatic-wave particles with the same wavelength λ. Each particle can be excited in each cell, meaning each particle has m possible excitations. Therefore, the total excitation probability of these n particles within the length L is given by formula (1.7).
$$\Omega = m \times m \times m \times \cdots \times m = m^n \tag{1.7}$$
In quantum mechanics, particles are identical, so these n particles are identical particles. These n particles have the same wavelength λ, which actually means they have the same energy and belong to the same energy level.
We assume a condition in which each cell is allowed to excite only one particle, so formula (1.7) contains duplicate probabilities: any probability that more than one particle is excited in one cell is a duplication. The total number of excited particles is n, so the total number of such duplications is n!. These duplicate probabilities need to be removed from formula (1.7).
For example, suppose there are 10 cells and, at a certain moment, 4 particles are randomly excited, with each cell able to excite only one particle. A situation in which more than one particle is excited in a single cell is an impossible outcome. If we calculate the total probability according to formula (1.7), the cases where more than one particle is excited in a single cell are duplicate probabilities. If 4 particles are excited in a single cell, the number of duplicates is 4, so the total probability must be divided by 4; if 3 particles, the number of duplicates is 3, so divide by 3; if 2 particles, the number of duplicates is 2, so divide by 2. Therefore, the total factor to be removed is 4! = 4 × 3 × 2 × 1.
Therefore, the actual total excitation probability is given by formula (1.8).
$$\Omega = \frac{m^n}{n!} \tag{1.8}$$
Formula (1.8) is exactly the same as the formula for the number of microstates at a given energy level in the Boltzmann distribution of classical statistical mechanics [3,4,5]. The excitation probability of a particle in m cells and the number of possible microstates of a particle in m phase cells are two different conceptual representations, but the two are mathematically completely equivalent.
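A minimal sketch of formula (1.8), reusing the 10-cell, 4-particle example worked through in the text:

```python
import math

m, n = 10, 4                      # the worked example: 10 cells, 4 excited particles
omega = m**n / math.factorial(n)  # total excitation probability, formula (1.8)
print(omega)                      # 10**4 / 4! = 416.666...
```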
However, in the Boltzmann distribution of classical statistical mechanics, the phase-space cell is introduced as a hypothetical condition, and the theory cannot explain its physical origin. Moreover, the Boltzmann distribution assumes that the phase-space cells of particles at all energy levels have the same size. From the derivation above, we can see that the physical origin of the phase-space cell actually lies in the wavelength of the particle's wave. Since the excitation probability of a particle is expressed by formula (1.1), there is a relationship between the excitation probability of a particle and its wavelength, and it is precisely formula (1.1) that gives rise to the phase-space cell. Furthermore, the derivation above also yields another result: particles at different energy levels have different momenta and different wavelengths, so particles at different energy levels have phase-space cells of different sizes. Therefore, starting from the quantum wave probability, we can use a simpler, clearer, and more self-consistent method to derive the Boltzmann distribution for identical particles. Thus, the foundation for the validity of the Boltzmann distribution in classical statistical mechanics originates from quantum mechanics. It is precisely because particles in quantum mechanics have wave properties, and the attribute of quantum wave probability, that the Boltzmann distribution and Boltzmann statistics exist.
In formula (1.8), because the duplications counted by n! are removed, the number of particles n must be less than the number of cells m, so n < m.
So we find that starting from the quantum wave probability, we can derive the Boltzmann distribution, and we can also understand its physical essence: for n particles, with n less than the number of cells m, each cell has a probability of particle excitation, and the number of excited particles in each cell does not exceed one. This can be referred to as the Boltzmann condition. This condition is essentially a uniformity condition: if a cell were allowed to excite more than one particle, or allowed a probability of exciting zero particles, the situation would be non-uniform.
We can also find that the conditions for the validity of the Boltzmann distribution do not necessarily require the total number of particles to be very sparse. As long as the Boltzmann condition is satisfied, the Boltzmann distribution holds. Therefore, the classical limit condition for transitioning from the Bose distribution (or Fermi distribution) to the Boltzmann distribution, also called the particle sparsity condition, is not a necessary condition. However, a sparse particle situation does satisfy the Boltzmann condition. A sparse particle situation is just one of the scenarios under the Boltzmann condition. In actual physical processes, other situations that satisfy the Boltzmann condition cannot be ruled out.
In classical statistical mechanics, the factor $1/n_i!$ in the Boltzmann distribution originates from the fact that exchanging distinguishable particles within the same energy level does not generate new microstates. In quantum mechanics, however, particles are identical, so this factor cannot arise from particle exchange; it comes instead from the restriction imposed by the Boltzmann condition, which allows only one particle in each cell. The origin of this factor is therefore completely different in quantum statistics than in classical statistics. In fact, the origin of this factor reflects the true nature of the Boltzmann distribution: the restriction is actually a constraint of uniform distribution, and the essence of the Boltzmann distribution is a form of uniform distribution.
The probability of particle distribution on a single energy level is given by formula (1.8), and the total probability of all particles over multiple energy levels is given by formula (1.9).
$$\Omega = \prod_i \frac{m_i^{n_i}}{n_i!} \tag{1.9}$$
Formula (1.9) represents the number of microstates of identical particles distributed according to the Boltzmann distribution across all energy levels.
Because particles are identical in quantum mechanics, exchanging particles between different energy levels does not produce new microstates, so there is no N! term.
Here we can also see why Gibbs's method is effective. In Boltzmann's original formula for the number of microscopic states, dividing by $N!$ yields exactly formula (1.9). However, Gibbs could not explain why the $n_i!$ terms should also exist. The $n_i!$ terms actually come from the Boltzmann condition, which is a uniformity condition; the restriction of uniformity produces the $n_i!$ terms.
Similarly, all particles and all energy levels satisfy the following constraints: at any given moment, the total number of excited particles N remains constant, and the total energy E of all excited particles remains constant.
$$N = \sum_i n_i \tag{1.10}$$
$$E = \sum_i n_i E_i \tag{1.11}$$
Using standard methods of statistical mechanics, we define entropy S, which is the Boltzmann entropy, satisfying the following formula (1.12).
$$S = \kappa_B \ln\Omega = \kappa_B \ln\!\Big(\prod_i \frac{m_i^{n_i}}{n_i!}\Big) \tag{1.12}$$
Similarly, the equilibrium state is the state of extremal entropy, satisfying condition (1.13).
$$\delta S = \kappa_B\,\delta(\ln\Omega) = 0 \tag{1.13}$$
$$\delta S = \kappa_B\,\delta\Big(\sum_i n_i\ln m_i - \sum_i \ln n_i!\Big)$$
Using Stirling's approximation $\ln n_i! \approx n_i\ln n_i - n_i$,
$$\delta S = \kappa_B\,\delta\Big(\sum_i n_i\ln m_i - \sum_i n_i\ln n_i + \sum_i n_i\Big)$$
For a multi-particle system, the total probability $m_i$ of each energy level is fixed; the only variables are the particle numbers $n_i$ excited in each energy level. So we get
$$\delta S = \kappa_B \sum_i \big(\delta n_i \ln m_i - \delta n_i \ln n_i\big) = \kappa_B \sum_i \ln\frac{m_i}{n_i}\,\delta n_i$$
Combining (1.10) and (1.11), according to the Lagrange multiplier method, we can obtain
$$\delta S - \kappa_B\,\alpha\sum_i \delta n_i - \kappa_B\,\beta\sum_i E_i\,\delta n_i = 0$$
$$\kappa_B \sum_i \ln\frac{m_i}{n_i}\,\delta n_i - \kappa_B\,\alpha\sum_i \delta n_i - \kappa_B\,\beta\sum_i E_i\,\delta n_i = 0$$
$$\sum_i\Big(\ln\frac{m_i}{n_i} - \alpha - \beta E_i\Big)\delta n_i = 0$$
So we get (1.14).
$$\ln\frac{m_i}{n_i} - \alpha - \beta E_i = 0 \tag{1.14}$$
We then derive the Boltzmann distribution (1.15).
$$n_i = m_i\,e^{-\alpha-\beta E_i} \tag{1.15}$$
The α in formula (1.15) is a constant. The constant β is related to temperature and satisfies formula (1.16).
$$\beta = \frac{1}{\kappa_B T} \tag{1.16}$$
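The stationarity claim behind formulas (1.13)–(1.15) can be checked numerically. The minimal sketch below evaluates the Lagrange function ln Ω − α Σnᵢ − β Σnᵢ Eᵢ (with Stirling's approximation) and confirms by finite differences that its gradient vanishes at nᵢ = mᵢ e^(−α−βEᵢ); all level data and multiplier values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative level data and multipliers (not from the paper).
m_i = np.array([50.0, 80.0, 120.0])   # cells per level
E_i = np.array([1.0, 2.0, 3.0])       # level energies
alpha, beta = 0.5, 0.7                # Lagrange multipliers

def F(n):
    # ln(Omega) via Stirling's approximation, minus the constraint terms
    ln_omega = np.sum(n * np.log(m_i) - n * np.log(n) + n)
    return ln_omega - alpha * np.sum(n) - beta * np.sum(n * E_i)

n_star = m_i * np.exp(-alpha - beta * E_i)   # candidate maximum, formula (1.15)

eps = 1e-6
grad = [(F(n_star + eps * np.eye(3)[k]) - F(n_star - eps * np.eye(3)[k])) / (2 * eps)
        for k in range(3)]
print(grad)   # every component is ~0: formula (1.15) is the stationary point
```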
Define the partition function Z, satisfying formula (1.17).
$$Z = \sum_i m_i\,e^{-\beta E_i} \tag{1.17}$$
Formula (1.10) is transformed into formula (1.18).
$$N = \sum_i n_i = \sum_i m_i\,e^{-\alpha-\beta E_i} = e^{-\alpha}\sum_i m_i\,e^{-\beta E_i} \tag{1.18}$$
So we get (1.19) and (1.20).
$$e^{-\alpha} = \frac{N}{Z} \tag{1.19}$$
$$\alpha = \ln Z - \ln N \tag{1.20}$$
By combining formulas (1.15) and (1.17), formula (1.21) can be obtained.
$$E = \sum_i n_iE_i = e^{-\alpha}\sum_i E_i\,m_i\,e^{-\beta E_i} = e^{-\alpha}\sum_i m_i\Big(-\frac{\partial e^{-\beta E_i}}{\partial\beta}\Big) = -e^{-\alpha}\,\frac{\partial}{\partial\beta}\Big(\sum_i m_i\,e^{-\beta E_i}\Big)$$
$$E = -\frac{N}{Z}\,\frac{\partial Z}{\partial\beta} = -N\,\frac{\partial\ln Z}{\partial\beta} \tag{1.21}$$
Formula (1.12) is converted into the following formula (1.22).
$$S = \kappa_B\ln\!\Big(\prod_i\frac{m_i^{n_i}}{n_i!}\Big) = \kappa_B\sum_i\ln\frac{m_i^{n_i}}{n_i!} = \kappa_B\sum_i\big(n_i\ln m_i - \ln n_i!\big) \tag{1.22}$$
From formula (1.15) we obtain
$$m_i = n_i\,e^{\alpha+\beta E_i}$$
$$\ln m_i = \ln\big(n_i\,e^{\alpha+\beta E_i}\big) = \alpha + \beta E_i + \ln n_i$$
$$\ln n_i! \approx n_i\ln n_i - n_i$$
Substituting these into formula (1.22), we get
$$S = \kappa_B\sum_i n_i\big(\alpha+\beta E_i+\ln n_i\big) - \kappa_B\sum_i\big(n_i\ln n_i - n_i\big) = \kappa_B\beta\sum_i n_iE_i + \kappa_B\alpha\sum_i n_i + \kappa_B\sum_i n_i \tag{1.23}$$
$$S = \kappa_B\beta E + \kappa_B\alpha N + \kappa_B N = \kappa_B\beta E + \kappa_B N(\ln Z - \ln N) + \kappa_B N \tag{1.24}$$
$$S = \kappa_B N\Big(\ln Z - \beta\frac{\partial\ln Z}{\partial\beta}\Big) - \kappa_B\big(N\ln N - N\big)$$
$$S = \kappa_B N\Big(\ln Z - \beta\frac{\partial\ln Z}{\partial\beta}\Big) - \kappa_B\ln N! \tag{1.25}$$
Formula (1.25) is the Boltzmann entropy. Compared with the Boltzmann entropy of classical statistical mechanics [3,4,5], formula (1.25) has an extra term $-\kappa_B\ln N!$. This additional term arises from the identity of particles in quantum mechanics. By deriving from the quantum wave probability, we obtain the $\ln N!$ term logically, without assuming its existence. Therefore, taking quantum wave probability as the foundation makes the entire theory simpler and more logically consistent.
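As a consistency check — a minimal sketch with illustrative level data, not a computation from the paper — the entropy computed directly from formula (1.12) (with Stirling's approximation for ln nᵢ!) should match the closed form (1.25):

```python
import numpy as np

kB = 1.0                              # Boltzmann constant set to 1 for the check
m_i = np.array([50.0, 80.0, 120.0])   # illustrative cell counts per level
E_i = np.array([1.0, 2.0, 3.0])       # illustrative level energies
beta, N = 0.7, 40.0                   # illustrative inverse temperature, particle number

Z = np.sum(m_i * np.exp(-beta * E_i))        # partition function, formula (1.17)
n_i = (N / Z) * m_i * np.exp(-beta * E_i)    # occupations, formulas (1.15), (1.19)

# Direct entropy from formula (1.12), with Stirling's approximation for ln(n_i!)
S_direct = kB * np.sum(n_i * np.log(m_i) - (n_i * np.log(n_i) - n_i))

# Closed form, formula (1.25): S = kB*N*(lnZ - beta*dlnZ/dbeta) - kB*ln(N!)
dlnZ_dbeta = -np.sum(m_i * E_i * np.exp(-beta * E_i)) / Z
S_closed = kB * N * (np.log(Z) - beta * dlnZ_dbeta) - kB * (N * np.log(N) - N)

print(S_direct, S_closed)   # the two values agree
```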
The above derivation is in the case of a simple one-dimensional space. For an ideal gas in three-dimensional space, the probability density within a three-dimensional space cell is given by formula (1.3).
In quantum mechanics, the particle wavelength satisfies the following formula (1.26).
$$\lambda = \frac{h}{P} \tag{1.26}$$
P is the momentum of the particle. So (1.3) changes to (1.27).
$$dp = \frac{\varrho^3\,P_1P_2P_3\,dV}{h^3} \tag{1.27}$$
For the case of the Boltzmann distribution, the particle distribution satisfies formula (1.15); converting it into differential form gives formula (1.28).
$$dn_i = dm_i\,e^{-\alpha-\beta E} = \frac{\varrho^3\,e^{-\alpha-\beta E}\,dV\,dP_1\,dP_2\,dP_3}{h^3} \tag{1.28}$$
The partition function formula (1.17) is then transformed into the integral formula (1.29).
$$Z = \int \frac{\varrho^3\,e^{-\beta E}\,dV\,dP_1\,dP_2\,dP_3}{h^3} \tag{1.29}$$
For an ideal gas in three-dimensional space, the energy E of each particle satisfies formula (1.30).
$$E = \frac{P^2}{2m} = \frac{P_1^2+P_2^2+P_3^2}{2m} \tag{1.30}$$
$$Z = \frac{\varrho^3}{h^3}\int e^{-\beta\frac{P_1^2}{2m}-\beta\frac{P_2^2}{2m}-\beta\frac{P_3^2}{2m}}\,dV\,dP_1\,dP_2\,dP_3$$
$$Z = \frac{\varrho^3\,V}{h^3}\int e^{-\beta\frac{P_1^2}{2m}}dP_1\int e^{-\beta\frac{P_2^2}{2m}}dP_2\int e^{-\beta\frac{P_3^2}{2m}}dP_3 \tag{1.31}$$
$$Z = \frac{\varrho^3\,V}{h^3}\Big(\frac{2\pi m}{\beta}\Big)^{3/2} \tag{1.32}$$
We thus derive the partition function of an ideal gas. The result differs from that of classical statistical mechanics only by the constant $\varrho^3$, the probability proportionality constant. In this derivation, however, there is no need to assume the existence of phase space or phase cells: formulas (1.27) and (1.29) are obtained entirely by taking differentials in three-dimensional space and three-dimensional momentum, independent of phase space. The differential form nevertheless brings effects equivalent to phase-space cells. Since we do not need to assume the existence of phase space, the theory has fewer assumptions and the derivation is simpler and more direct.
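The Gaussian momentum integral used to pass from formula (1.31) to (1.32) can be verified numerically. A minimal sketch, with illustrative β and mass in natural units:

```python
from scipy.integrate import quad
import numpy as np

beta, mass = 2.0, 1.5     # illustrative values (natural units)

# One momentum integral from formula (1.31); the full Z uses this factor cubed.
numeric, _ = quad(lambda P: np.exp(-beta * P**2 / (2 * mass)), -np.inf, np.inf)
analytic = np.sqrt(2 * np.pi * mass / beta)   # Gaussian integral in formula (1.32)
print(numeric, analytic)                      # both ~2.1708: the factors agree
```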
So we derive the result of the Boltzmann distribution from the quantum wave probability. This proves that the Boltzmann distribution actually originates from quantum mechanics. The foundation of Boltzmann statistics is actually quantum mechanics. Boltzmann statistics is also essentially a form of quantum statistics.
Combining formulas (1.23) and (1.28), the following result can be calculated and proven: when two ideal gases of the same type, at the same temperature, with the same number of particles and the same volume, are mixed together, the number of particles doubles, the volume doubles, the energy doubles, and the Boltzmann entropy also doubles. There is no Gibbs paradox [1,2].
According to formula (1.32), we obtain.
$$\ln Z = \ln\!\Big(\frac{\varrho^3\,V}{h^3}\Big(\frac{2\pi m}{\beta}\Big)^{3/2}\Big) = A + \ln V$$
$$A = \ln\!\Big(\frac{\varrho^3}{h^3}\Big(\frac{2\pi m}{\beta}\Big)^{3/2}\Big)$$
$$\frac{\partial\ln Z}{\partial\beta} = \frac{\partial A}{\partial\beta}$$
According to formula (1.24), before mixing, the Boltzmann entropy of the ideal gas is.
$$S_1 = \kappa_B N A + \kappa_B N\ln V - \kappa_B N\beta\frac{\partial A}{\partial\beta} - \kappa_B N\ln N + \kappa_B N$$
After mixing, the number of particles N → 2N and the volume V → 2V. The temperature remains unchanged, so A and ∂A/∂β remain unchanged. Therefore, the entropy of the ideal gas after mixing is
$$S = \kappa_B\,2N\Big(A + \ln 2V - \beta\frac{\partial A}{\partial\beta}\Big) - \kappa_B\big(2N\ln(2N) - 2N\big)$$
$$S = \kappa_B 2NA + \kappa_B 2N\ln 2 + \kappa_B 2N\ln V - \kappa_B 2N\beta\frac{\partial A}{\partial\beta} - \kappa_B 2N\ln N - \kappa_B 2N\ln 2 + \kappa_B 2N$$
$$S = \kappa_B 2NA + \kappa_B 2N\ln V - \kappa_B 2N\beta\frac{\partial A}{\partial\beta} - \kappa_B 2N\ln N + \kappa_B 2N$$
$$S = 2\Big(\kappa_B NA + \kappa_B N\ln V - \kappa_B N\beta\frac{\partial A}{\partial\beta} - \kappa_B N\ln N + \kappa_B N\Big) = 2S_1$$
Therefore, the Gibbs paradox does not exist.
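The doubling argument can be replayed numerically. In the sketch below, the values of A, ∂A/∂β, N, and V are arbitrary illustrative numbers, since the cancellation holds identically:

```python
import numpy as np

kB = 1.0
# A and dA/dbeta depend only on temperature, which is unchanged by the mixing,
# so any fixed numbers suffice for this check; these are illustrative.
A, beta, dA_dbeta = 3.2, 0.7, -1.1
N, V = 1.0e3, 2.5

def S(N, V):
    # Boltzmann entropy of the ideal gas, formulas (1.25) with ln Z = A + ln V
    return kB * N * (A + np.log(V) - beta * dA_dbeta) - kB * (N * np.log(N) - N)

print(S(2 * N, 2 * V), 2 * S(N, V))   # equal: entropy doubles, no Gibbs paradox
```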
From the derivation process above, we can also understand why the Boltzmann distribution in classical statistical mechanics is effective. In classical statistical mechanics, the number of microscopic states calculated using the Boltzmann method is expressed by the following formula (1.33).
$$\Omega = N!\prod_i\frac{m_i^{n_i}}{n_i!} \tag{1.33}$$
The total probability of particle excitation derived from quantum wave probability is formula (1.34).
$$\Omega = \prod_i\frac{m_i^{n_i}}{n_i!} \tag{1.34}$$
The difference between these two formulas is only the factor $N!$. For a given system, the total number of particles is fixed, so the $N!$ term is just a constant; mathematically, the two formulas differ only by a constant, and consequently the entropies differ only by a constant. Temperature is T = dE/dS. When calculating dS, this constant cancels and has no effect, so both formulas yield the same temperature; in the mathematical calculation of temperature, the two formulas are equivalent. However, when the entropy itself is calculated, the constant $N!$ does not cancel, so the two formulas give different results, and formula (1.33) gives rise to the Gibbs paradox. In fact, the physical meanings of formulas (1.33) and (1.34) are completely different. The factor $1/n_i!$ in formula (1.33) comes from removing the duplicate states produced by exchanging distinguishable particles. The factor $1/n_i!$ in formula (1.34) comes from the constraint of the uniformity condition: since only one particle may be excited in each cell, the probabilities of exciting more than one particle in a cell must be removed. The two removal operations use the same mathematical expression, which can be considered a remarkable coincidence, but their physical origins are completely different and their physical meanings entirely distinct. Formula (1.33) is not compatible with the quantum mechanical principle of identical particles; formula (1.34) is.
To summarize the derivation above: how does it differ from the derivation in classical statistical mechanics? First, the particles are completely identical. Second, there is no need to assume phase-space cells: because the quantum wave probability satisfies formula (1.2), the particle wavelength introduces a spatial unit equivalent to a phase cell. Not all particles have units of the same size; particles at different energy levels have different wavelengths and thus different unit sizes. Third, the probability that each unit excites a particle satisfies the Boltzmann condition. Fourth, there are no fixed, permanent particles: particles are randomly excited in the units, each unit has an excitation probability, and the excited particles quickly annihilate, with excitation and annihilation repeating continuously. At any given moment, however, the total number of excited particles satisfies the constraints of formulas (1.10) and (1.11). Deriving the Boltzmann distribution and Boltzmann entropy from the quantum wave probability starts entirely from the fundamental probabilities of quantum mechanics, and the whole derivation naturally conforms to the concepts and methods of quantum mechanics. This fully integrates the Boltzmann distribution and Boltzmann statistics into the theoretical framework of quantum mechanics.

3. Quantum Wave Probability Derives the Fermi Distribution and Bose Distribution

In the derivation of the Boltzmann distribution above, a key assumption was made: each particle has the same excitation probability in each cell. There are a total of m cells, and each particle has m possible excitations. Based on this condition, we obtained the total excitation probabilities given in formulas (1.7) and (1.8).
Now let us assume another condition. The total number of cells is m, still given by formula (1.6). We assume that at a certain moment, among these m cells, some cells may excite one particle while others excite zero particles. The total number of particles excited in these m cells at that moment is n, and these n particles are identical. The number of cells m is greater than the number of excited particles n, so n < m. Under these conditions, if we regard the m cells as m degenerate quantum states, this condition is exactly the Pauli exclusion principle: a quantum state may contain either one particle or zero particles. This assumption is therefore essentially the condition for the Fermi distribution. Under it, the total particle excitation probability for m cells and n particles is given by formula (2.1), which is precisely the number of microstates of the Fermi distribution [3,4,5]. We can refer to this assumption as the Fermi condition.
$$\Omega = C_m^n = \frac{m!}{n!\,(m-n)!} \tag{2.1}$$
Formula (2.1) is the excitation probability for a single energy level. The total excitation probability for all particles and all energy levels is given by formula (2.2).
$$\Omega = \prod_i \frac{m_i!}{n_i!\,(m_i-n_i)!} \tag{2.2}$$
By using the same derivation method as for the Boltzmann distribution above, the Fermi distribution formula (2.3) can be derived.
$$n_i = m_i\,\frac{1}{e^{\alpha+\beta E_i}+1} \tag{2.3}$$
Similarly, the α in formula (2.3) is a constant. The constant β is related to temperature and satisfies formula (1.16).
In deriving the Boltzmann distribution above, we assumed that at any given moment a cell can excite only one particle. What happens if we allow a cell to excite multiple particles, or none at all? We still regard the m cells as m degenerate quantum states, which is essentially the case of the Bose distribution. The number of particles that can be excited in one cell is unlimited, from a minimum of zero up to the total number of particles n. Under this assumption, the total excitation probability of m cells and n particles is given by formula (2.4), which corresponds to the number of microstates of the Bose distribution [3,4,5]. We can refer to this assumption as the Bose condition.
$$\Omega = C_{m+n-1}^{\,n} = \frac{(m+n-1)!}{n!\,(m-1)!} \tag{2.4}$$
Formula (2.4) is the excitation probability for a single energy level. The total excitation probability for all particles and all energy levels is given by formula (2.5).
$$\Omega = \prod_i \frac{(m_i+n_i-1)!}{n_i!\,(m_i-1)!} \tag{2.5}$$
Using the same derivation method, the Bose distribution (2.6) can be derived.
$$n_i = m_i\,\frac{1}{e^{\alpha+\beta E_i}-1} \tag{2.6}$$
Similarly, the α in formula (2.6) is a constant. The constant β is related to temperature and satisfies formula (1.16).
Through the above process, it is proven that based on quantum wave probability we can derive the Boltzmann, Fermi, and Bose distributions. All three statistical distribution models can be derived from quantum wave probability; their foundation is quantum mechanics. Like the Fermi and Bose distributions, the Boltzmann distribution is essentially a quantum distribution originating from quantum mechanics. Based on quantum wave probability, different distributions are derived under different conditions: the Boltzmann condition yields the Boltzmann distribution, the Fermi condition yields the Fermi distribution, and the Bose condition yields the Bose distribution.
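Numerically, the three distribution formulas (1.15), (2.3), and (2.6) can be compared directly for one level. The values below are illustrative, with α + βEᵢ > 0 so that all three occupations are positive:

```python
import numpy as np

m_i, E_i = 100.0, 2.0        # illustrative cell count and level energy
alpha, beta = 0.5, 0.7       # illustrative constants (alpha + beta*E_i > 0)

x = np.exp(alpha + beta * E_i)
n_boltzmann = m_i / x        # formula (1.15)
n_fermi     = m_i / (x + 1)  # formula (2.3)
n_bose      = m_i / (x - 1)  # formula (2.6)

# Fermi occupation lies below, Bose above, the uniform Boltzmann value.
print(n_fermi, n_boltzmann, n_bose)
```

The ordering printed here matches the deviation picture discussed below: the Fermi count falls under, and the Bose count rises over, the Boltzmann value.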
By comparing formulas (1.8) and (2.1), we can see that under the condition n << m, formula (2.1) can be approximated by formula (1.8), and the Fermi distribution transitions to the Boltzmann distribution. Similarly, under the condition n << m, formula (2.4) can also be approximated by formula (1.8), and the Bose distribution likewise transitions to the Boltzmann distribution. This is the conventional classical limit condition, also known as the sparse condition. Conventional statistical mechanics therefore regards Boltzmann statistics as the classical limit of quantum statistics. From the derivation above, however, we conclude that this view is incorrect. The Boltzmann distribution is itself a quantum distribution, completely independent of both the Fermi and Bose distributions. As long as the Boltzmann condition is satisfied, the Boltzmann distribution still holds even when n is close to m; it does not exist only under the condition n << m. The correct statement is that under the condition n << m, the three distributions converge to the same distribution.
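This convergence can be checked with the logarithms of the three per-level counts (1.8), (2.1), and (2.4); the (m, n) pairs below are illustrative:

```python
from math import lgamma, log

def ln_boltzmann(m, n):   # ln(m^n / n!), formula (1.8)
    return n * log(m) - lgamma(n + 1)

def ln_fermi(m, n):       # ln C(m, n), formula (2.1)
    return lgamma(m + 1) - lgamma(n + 1) - lgamma(m - n + 1)

def ln_bose(m, n):        # ln C(m+n-1, n), formula (2.4)
    return lgamma(m + n) - lgamma(n + 1) - lgamma(m)

for m, n in [(100, 50), (10_000, 50), (1_000_000, 50)]:
    print(m, ln_fermi(m, n), ln_boltzmann(m, n), ln_bose(m, n))
# As n/m -> 0 the three log-counts approach one another (the sparse limit).
```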
Let us observe the Boltzmann condition again. For n particles, with n less than the number of cells m, each cell has a probability of particle excitation, and the number of particles excited in each cell is at most one; each particle has a probability of being excited in each cell. This condition is essentially a uniformity condition. Therefore, the Boltzmann distribution is actually a kind of uniform distribution pattern.
In contrast, the Bose distribution allows all particles to be excited into a single cell; such a distribution pattern is obviously not uniform. Although Fermi statistics restricts each cell to at most one particle, it allows some cells to hold zero particles, so its uniformity is still not as high as that of the Boltzmann distribution. When the number of particles n is much smaller than the number of cells m, both the Bose and Fermi distributions are actually close to a uniform distribution, and thus can be approximated by the Boltzmann distribution.
In the existing theory of quantum mechanics, both Bose and Fermi distributions are related to the symmetry of the wave function. Particles with symmetric wave functions are bosons and follow the Bose distribution, while particles with antisymmetric wave functions are fermions and follow the Fermi distribution. This can be understood as the symmetry of the wave function causing an effect among particles. This effect disrupts the uniformity of particle distribution, causing the distribution of particles to deviate from uniform distribution, and thus deviate from the Boltzmann distribution. Bose and Fermi distributions are two different types of deviation effects. In quantum mechanics, under certain extreme conditions, there may be other types of deviation effects that could lead to new distribution patterns. The deviation effect from the uniform distribution is a topic worth further study.
In current quantum statistical mechanics there is a popular viewpoint: for the Fermi and Bose distributions, under sparse conditions the waves of different particles no longer overlap, the particles can be approximately regarded as distinguishable, and so they can be approximated by the Boltzmann distribution. But this is obviously contrary to the principle of identical particles. How can identical particles become distinguishable under sparse conditions? Where is the boundary between identical and distinguishable particles? This crucial question cannot be answered clearly, so the sparse-condition explanation contains serious logical contradictions; the biggest doubts and logical inconsistencies in existing statistical mechanics stem from this viewpoint. Now we can assert that it is incorrect. Particles in the Boltzmann distribution are also identical; even under sparse conditions they remain identical and indistinguishable. The actual truth is that under sparse conditions the interactions between particles become very small and the distribution of particles becomes nearly uniform, so the Fermi and Bose distributions approximately converge to the Boltzmann distribution. The key to the Boltzmann distribution lies in the absence of interactions between particles: the particles are approximately ideal single particles, with no aggregation or exclusion, and their distribution becomes very uniform. Particles following the Boltzmann distribution are ideal single particles, but they are still identical particles; an ideal single particle is one that has absolutely no influence on any other. By analogy with fermions and bosons, we refer to an ideal single particle as a Boltzmannon. Boltzmannons are quantized particles that follow the Boltzmann distribution.
Let us look further at the quantum wave probability. For a single particle, it is expressed by formulas (1.1) and (1.6), and from these two formulas no close relationship with statistical distributions is apparent. However, when many particles combine, the overall probability changes remarkably, and we have derived three statistical distribution patterns. In quantum mechanics, because particles are identical, the overall properties of combined particles are not simply the sum of the properties of the individual particles: identicalness changes the overall properties, similar to an emergent phenomenon. In quantum statistics, therefore, the identicalness of particles plays a key role. Because particles are identical, the total probability of multiple particles is no longer the sum of the individual probabilities, which changes the overall probability distribution. This is the physical origin of the thermodynamic properties of quantum multi-particle systems. Thus, we can regard quantum statistics as an emergent result of particle identicalness. Quantum statistics has two foundations: the first is quantum wave probability, and the second is particle identicalness. Based on these two fundamental concepts, we can simply and logically derive the three quantum distributions, thereby constructing the whole of quantum statistics. The key role of particle identicalness in quantum statistics is worth further in-depth study.

4. Eigenstate Thermalization

In the derivation from formula (1.8) to (1.15) above, we used the standard methods of statistical mechanics: we first calculated the total entropy S of the system, required that S satisfy condition (1.13), and then obtained the Boltzmann distribution formula (1.15). In fact, however, we can derive the Boltzmann distribution directly from formula (1.8).
First, consider an individual eigenstate. Assume all particles are in the same eigenstate: the energy of each particle is E, each particle has the same excitation probability m, and there are n particles. The actual total excitation probability is then given by formula (1.8), restated here as formula (3.1).
$$\Omega = \frac{m^n}{n!} \tag{3.1}$$
Using the standard methods of statistical mechanics, we again define an entropy S satisfying formula (3.2).
$$S = \kappa_B\ln\Omega = \kappa_B\ln\frac{m^n}{n!} \tag{3.2}$$
$$S = \kappa_B\big(n\ln m - \ln n!\big) = \kappa_B\big(n\ln m - n\ln n + n\big)$$
$$S = \kappa_B\,n\,\big(\ln m - \ln n + 1\big) \tag{3.3}$$
The total energy of these n particles is formula (3.4).
$$E_{all} = nE \tag{3.4}$$
Using the standard methods of statistical mechanics, we define the system temperature T by formula (3.5).
$$T = \frac{E_{all}}{S} = \frac{nE}{\kappa_B\,n\,(\ln m - \ln n + 1)} = \frac{E}{\kappa_B(\ln m - \ln n + 1)} \tag{3.5}$$
We define parameter β
$$\beta = \frac{1}{\kappa_B T} \tag{3.6}$$
From formulas (3.5) and (3.6) we get
$$n = m\,e^{-\beta E + 1} = e\,m\,e^{-\beta E} \tag{3.7}$$
This formula is very similar to formula (1.15), differing only by a constant factor. A single eigenstate satisfies formula (3.7).
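A round-trip check of formulas (3.5)–(3.7), with illustrative eigenstate data: starting from the occupation (3.7), recomputing the temperature from (3.5) must return the assumed β.

```python
import numpy as np

kB = 1.0
m, E, beta = 500.0, 2.0, 0.8      # illustrative eigenstate data

n = np.e * m * np.exp(-beta * E)  # occupation from formula (3.7)

# Recompute the eigenstate temperature from formula (3.5)
T = E / (kB * (np.log(m) - np.log(n) + 1))
print(1.0 / (kB * T), beta)       # recovers beta, consistent with formula (3.6)
```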
Now let's deal with the entire system. The system contains many eigenstates. The total number of particles in all eigenstates is given by equation (3.8).
$$N = \sum n = \sum e\,m\,e^{-\beta E} \tag{3.8}$$
For ease of distinction, we use subscripts to label different eigenstates, giving formula (3.9).
$$N = \sum_i n_i = \sum_i e\,m_i\,e^{-\beta_i E_i} \tag{3.9}$$
Because the temperatures of different eigenstates may vary, different eigenstates may have different parameters.
$$\beta_i = \frac{1}{\kappa_B T_i} \tag{3.10}$$
Formula (3.9) is not normalized. We define a normalization parameter X that satisfies formula (3.11).
$$N = e\,N\,X\sum_i m_i\,e^{-\beta_i E_i} \tag{3.11}$$
So we get
$$X = \frac{1}{e\sum_i m_i\,e^{-\beta_i E_i}} \tag{3.12}$$
We define a function Z that satisfies formula (3.13).
$$Z = e\sum_i m_i\,e^{-\beta_i E_i} \tag{3.13}$$
So we get
$$X = \frac{1}{Z} \tag{3.14}$$
From formulas (3.8) and (3.11), we obtain formula (3.15).
$$e\,N\,X\sum_i m_i\,e^{-\beta_i E_i} = N = \sum_i n_i \tag{3.15}$$
So we get
$$n_i = \frac{N}{Z}\,m_i\,e^{-\beta_i E_i} \tag{3.16}$$
We define a new parameter α that satisfies formula (3.17).
$$e^{-\alpha} = \frac{N}{Z} \tag{3.17}$$
So formula (3.16) becomes formula (3.18).
$$n_i = m_i\,e^{-\alpha-\beta_i E_i} \tag{3.18}$$
Similarly, formula (3.15) becomes formula (3.19).
$$N = \sum_i n_i = \sum_i m_i\,e^{-\alpha-\beta_i E_i} \tag{3.19}$$
The derivation here is clearly completely different from the derivation from formula (1.8) to (1.15). We do not need to calculate the total entropy of particles at all energy levels, nor set the variational condition δS = 0 for the total entropy, nor use the method of Lagrange multipliers. We only need to define a temperature for each energy level using formula (3.5), and formulas (3.18) and (3.19) follow.
Please note that in the derivation above, different energy levels may have different temperatures and different parameters β i . Therefore, this derivation is a general approach and is not limited to equilibrium states. It also applies to non-equilibrium states.
If we define the situation where all eigenstates have the same temperature as the equilibrium state of the system, then in the equilibrium state, all energy levels have the same temperature and the same parameter β . Clearly, under the condition of equilibrium state, formula (3.18) transforms into formula (3.20), which is the Boltzmann distribution formula (1.15). Therefore, we have derived the result of the Boltzmann distribution.
$$n_i = m_i\,e^{-\alpha-\beta E_i} \tag{3.20}$$
The derivation of formulas (3.1) to (3.19) is completely different from the method used in Section 2 to derive the Boltzmann distribution, yet it arrives at the same result. The only difference is that the partition function Z here, given by formula (3.13), differs from formula (1.17) by a constant factor of e. The constant e in formulas (3.7), (3.8), and (3.9) can be absorbed into the function Z without affecting the derivation of the Boltzmann distribution at all. We can also see that the partition function Z is actually just a normalization parameter.
Obviously, the derivation of formulas (3.1) to (3.19) is simpler and more direct. Moreover, the entire derivation is not restricted by the condition of equilibrium: this method applies to both equilibrium and non-equilibrium states.
An equilibrium state is a system state in which all eigenstates have the same temperature; a non-equilibrium state is one in which different eigenstates have different temperatures. For a non-equilibrium state, we can independently calculate the temperature and entropy of each eigenstate, with each eigenstate having independent thermodynamic properties. When all eigenstates have the same temperature, the system is in the equilibrium state.
The temperature of the eigenstate is expressed by formula (3.5), which is formula (3.21).
$$T_i = \frac{E_i}{\kappa_B(\ln m_i - \ln n_i + 1)} \tag{3.21}$$
For the one-dimensional case,
$$m_i = \varrho\,\frac{L}{\lambda} = \frac{\varrho\,P\,L}{h} = \frac{\varrho\,L}{h}\sqrt{2M_iE_i}$$
For the three-dimensional case,
$$m_i = \frac{\varrho^3\,L_1L_2L_3}{\lambda_1\lambda_2\lambda_3} = \frac{\varrho^3\,P_1P_2P_3\,L_1L_2L_3}{h^3} = \frac{\varrho^3\,L_1L_2L_3}{h^3}\,(2M_i)^{3/2}\sqrt{E_{i1}E_{i2}E_{i3}}$$
Therefore, the temperature of an eigenstate is determined only by the energy and the particle number of that eigenstate. Thus, for a system to reach equilibrium, the temperatures of all eigenstates must be the same: formula (3.21) must yield the same result for every eigenstate. This is a very strong constraint. If the temperature of a particular eigenstate cannot evolve to match that of the other eigenstates, then this eigenstate cannot reach the final equilibrium state.
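The equal-temperature criterion can be made concrete. In the sketch below (illustrative eigenstate data, not taken from the paper), occupations of the form (3.7) with one common β give identical eigenstate temperatures under formula (3.21), while arbitrary occupations generally do not:

```python
import numpy as np

kB = 1.0
# Illustrative set of eigenstates: energies E_i and cell counts m_i.
E_i = np.array([1.0, 2.0, 3.0])
m_i = np.array([300.0, 500.0, 900.0])

def T_eigen(n_i):
    # Eigenstate temperature, formula (3.21)
    return E_i / (kB * (np.log(m_i) - np.log(n_i) + 1))

# Occupations of the form (3.7) with a common beta give equal temperatures...
beta = 0.9
print(T_eigen(np.e * m_i * np.exp(-beta * E_i)))   # all equal to 1/beta

# ...while arbitrary occupations do not (a non-equilibrium state).
print(T_eigen(np.array([40.0, 60.0, 10.0])))
```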
This new approach calculates the entropy and temperature of each eigenstate independently, and the thermodynamic properties of each eigenstate evolve independently. If the temperatures of all eigenstates reach the same value, the entire system reaches the equilibrium state, thereby yielding the Boltzmann distribution. This new approach thus clearly derives the result of eigenstate thermalization [7]: using it, we can derive eigenstate thermalization and the Boltzmann distribution simultaneously, and can therefore theoretically demonstrate the validity of eigenstate thermalization.
Formula (3.21) can serve as a criterion for determining whether an eigenstate can thermalize to reach the equilibrium state. Whether examined from a local or a global perspective, if the temperatures of the eigenstates can become the same, or approximately the same, we can consider that the equilibrium state has been reached. If the temperature differences between eigenstates are large and cannot be treated as approximately equal, the equilibrium state cannot be achieved.
If a system does not satisfy the condition that all eigenstates have the same temperature, it cannot reach an equilibrium state. According to the principles of quantum mechanics, we can still define an average temperature. This average temperature is represented by formula (3.22).
$$T = \frac{\sum_i \varphi_i^{*}\,T_i\,\varphi_i}{\sum_i \varphi_i^{*}\,\varphi_i} \tag{3.22}$$
Here a hypothesis is proposed: the temperature actually measured in a real system is this average temperature, rather than the temperature of an ideal equilibrium state. The equilibrium state in the real world is actually an approximate equilibrium state; the ideal conditions for equilibrium are difficult to achieve and almost never occur. This topic warrants further in-depth study.
This paper has only introduced the basic concepts and procedures of this new approach; many issues concerning it still require further in-depth study.

5. Conclusions

From the derivations above, we obtain the following result. Using the concept of quantum wave probability, combined with the identity principle of particles, we can derive the Boltzmann distribution, the Fermi distribution, and the Bose distribution. Different distributions correspond to different conditions: using the Boltzmann condition we derive the Boltzmann distribution, using the Fermi condition the Fermi distribution, and using the Bose condition the Bose distribution. This proves that the basis of these three statistical distribution patterns is quantum wave probability, and that all three originate from quantum mechanics. The Boltzmann distribution is an independent quantum distribution and is not merely a sparse limit of the Fermi and Bose distributions. The essence of the Boltzmann distribution is a uniform distribution; the Fermi and Bose distributions represent deviations from the uniform distribution, and in the sparse limit the three distributions converge to the same distribution pattern. Based on the Boltzmann distribution, we calculated the Boltzmann entropy, computed the entropy of an ideal gas, and resolved the Gibbs paradox. This demonstrates that quantum wave probability has very important physical significance and is well worth further in-depth study.
Based on quantum wave probability and the principle of identical particles, we can develop a completely new thermodynamic approach. In this approach, the entropy and temperature of each eigenstate are defined independently, and the thermodynamic properties of each eigenstate are calculated individually. The equilibrium state of the system is reached when the temperatures of all eigenstates become the same. In this way we can demonstrate the validity of eigenstate thermalization. This new approach is completely different from the traditional methods of statistical mechanics.
The discussion in this paper invites us to rethink the fundamentals of statistical mechanics. The foundations of statistical mechanics are not entirely problem-free, and we can still study them in depth from a completely new perspective.

References

  1. Van Kampen, N.G. The Gibbs Paradox. In Essays in Theoretical Physics in Honour of Dirk ter Haar; Parry, W.E., Ed.; Pergamon Press: Oxford, UK, 1984.
  2. Gibbs, J.W. Elementary Principles in Statistical Mechanics; Yale University Press: New Haven, CT, USA, 1902.
  3. Chandler, D. Introduction to Modern Statistical Mechanics; Oxford University Press: New York, NY, USA, 1987; ISBN 978-0195042771.
  4. Pathria, R.K.; Beale, P.D. Statistical Mechanics, 4th ed.; Elsevier: Amsterdam, The Netherlands, 2021; ISBN 978-0-08-102692-2.
  5. Feynman, R.P.; Leighton, R.B.; Sands, M. The Feynman Lectures on Physics, Vols. I–III; ISBN 9787506272476; ISBN 9787506272483; ISBN 9787506272490.
  6. Li, X.L. Quantum Wave Entropy. Journal of High Energy Physics, Gravitation and Cosmology 2025, 11, 316–330.
  7. Deutsch, J.M. Quantum statistical mechanics in a closed system. Phys. Rev. A 1991, 43, 2046.