2. Deriving the Boltzmann Distribution from Quantum Wave Probability
In the previous paper, the author proposed an entirely new concept of quantum wave probability [6]. In quantum mechanics, the wavelength of a monochromatic particle wave defines a new probability distribution. Within a fixed spatial range, the smaller the wavelength, the higher the probability that the particle will be excited within that range; conversely, the larger the wavelength, the lower the probability of particle excitation within that range. Under this new distribution property, the probability density is inversely proportional to the wavelength. The density of the quantum wave probability is expressed by the following formula (1.1).
In formula (1.1), the quantities are a probability proportionality constant, the wavelength λ of the particle wave, and the probability density over a length dr. The numerical value of the proportionality constant is π. In order to distinguish it from symbols commonly used in thermodynamics, previous papers denote the probability proportionality constant by a dedicated symbol; please keep this distinction in mind.
For example, within a range of length L, the total probability of particle excitation is the integral of the probability density.
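Although the formulas themselves are not reproduced here, the prose fixes their form up to notation. A hedged reconstruction, writing $A_w$ as a placeholder for the probability proportionality constant (the original uses its own symbol), is:

```latex
% Reconstruction consistent with the surrounding prose; A_w is a placeholder
% for the probability proportionality constant (the original notation differs).
\mathrm{d}P = \frac{A_w}{\lambda}\,\mathrm{d}r \qquad \text{(cf. 1.1)}
\qquad\qquad
P = \int_0^L \frac{A_w}{\lambda}\,\mathrm{d}r = \frac{A_w L}{\lambda} \qquad \text{(cf. 1.2)}
```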
In fact, here the particle wavelength can be regarded as a unit size. One particle wavelength is equivalent to one unit length. A unit can actually be equivalently viewed as a phase space unit. This corresponds to the phase space cell in classical statistical mechanics. The particle wavelength is the unit length of a one-dimensional phase space cell.
Similar to the phase space cells in classical statistical mechanics, in three-dimensional space, the particle wavelengths along different dimensions are independent of each other, so the probability densities of particles in different dimensions are also independent. The probability density within a three-dimensional space unit is given by formula (1.3).
We simplify the model by assuming that the particle has the same wavelength in the three dimensions, which means the momentum components in the three dimensions are the same, leading to formula (1.4).
Formula (1.4) is completely analogous to the phase space in classical statistical mechanics. For a 3-dimensional space region of volume V, the total probability of a particle excited with wavelength λ is given by formula (1.5).
Discussions in three-dimensional space are complicated and difficult to follow, so we simplify the problem by first discussing probability in one-dimensional space. The total excitation probability of a particle in a one-dimensional space of length L is given by formula (1.2). In formula (1.2), the particle's wavelength λ serves as a natural boundary of space units: each distance of length λ corresponds to one space cell. The total number of cells within the length L, denoted m, is given by formula (1.6). For a free particle in a monochromatic wave, the total excitation probability within the length L is then expressed through formula (1.6). This result implies the assumption that only one particle can be excited in each cell, and that the excitation probability of a particle is the same in every cell.
Now suppose there are n free particles of monochromatic waves with the same wavelength λ. Each particle can be excited in each cell, meaning each particle has m excitation probabilities. Therefore, the total excitation probability of these n particles within the length L is given by formula (1.7).
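Reading each wavelength as one cell and each of the n particles as free to be excited in any of the m cells, formulas (1.6) and (1.7) presumably take the following form (a hedged reconstruction, since the formulas are not reproduced here):

```latex
% Reconstruction consistent with the prose: each wavelength spans one cell,
m = \frac{L}{\lambda} \qquad \text{(cf. 1.6)}
% and each of the n particles can be excited in any of the m cells,
P_{\mathrm{total}} = m^{\,n} \qquad \text{(cf. 1.7)}
```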
In quantum mechanics, particles are identical, so these n particles are identical particles. These n particles have the same wavelength λ, which actually means they have the same energy and belong to the same energy level.
We assume a condition in which each cell is allowed to excite only one particle, so formula (1.7) contains duplicate probabilities. Any probability in which more than one particle is excited in a single cell is a duplicate probability. The total number of excited particles is n, so the duplicate probabilities in which more than one particle is excited in one cell amount to a factor of n!. These duplicate probabilities need to be removed from formula (1.7).
For example, suppose there are 10 cells, and at a certain moment 4 particles are randomly excited, with each cell able to excite only one particle. A situation in which more than one particle is excited in a single cell is not allowed, so if we calculate the total probability according to formula (1.7), the probability that more than one particle is excited in a single cell is duplicate probability. If 4 particles are excited in a single cell, the number of duplicates is 4, so the total probability must be divided by 4; if 3 particles are excited in a single cell, the number of duplicates is 3, so the total probability must be divided by 3; if 2 particles are excited in a single cell, the number of duplicates is 2, so the total probability must be divided by 2. Therefore, the total factor that must be removed is 4! = 4 × 3 × 2 = 24.
Therefore, the actual total excitation probability is given by formula (1.8).
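The counting in this example can be checked numerically. Assuming formula (1.8) has the Boltzmann form m^n/n! (an assumption, since the formula itself is not reproduced here), the sketch below compares it with the exact count of one-particle-per-cell configurations of identical particles, the binomial coefficient C(m, n); the two converge when n is much smaller than m:

```python
from math import comb, factorial

def duplicate_removed_count(m, n):
    """Total probability in the assumed form of (1.8): m^n raw placements
    with the n! duplicate probabilities divided out."""
    return m ** n / factorial(n)

def one_per_cell_count(m, n):
    """Exact number of ways to excite n identical particles in n distinct
    cells chosen from m: the binomial coefficient C(m, n)."""
    return comb(m, n)

# Worked example from the text: m = 10 cells, n = 4 particles.
approx_small = duplicate_removed_count(10, 4)   # 10^4 / 4!
exact_small = one_per_cell_count(10, 4)         # C(10, 4) = 210

# Boltzmann regime n << m: the two counts agree to high accuracy.
approx_large = duplicate_removed_count(10 ** 6, 4)
exact_large = one_per_cell_count(10 ** 6, 4)
rel_gap = abs(approx_large - exact_large) / exact_large
```

For m = 10, n = 4 the two counts still differ visibly (about 416.7 vs 210), illustrating that dividing by n! is exact only in the dilute regime, consistent with the condition n < m noted later in this section.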
Formula (1.8) is exactly the same as the formula for the number of microstates at a given energy level in the Boltzmann distribution of classical statistical mechanics [3, 4, 5]. The excitation probability of a particle in m cells and the possible number of microstates of a particle in m phase cells are two different conceptual representations, but the two representations are mathematically completely equivalent.
However, in the Boltzmann distribution of classical statistical mechanics, the phase space cell is introduced as a hypothetical condition, and the theory cannot explain the physical origin of this phase space cell. Moreover, in the Boltzmann distribution it is assumed that the phase space cells of particles at all energy levels have the same size. From the derivation above, we can see that the phase space cell actually originates physically from the wavelength of the particle's wave. Since the excitation probability of particles is expressed by formula (1.1), there is a relationship between the excitation probability of particles and their wavelength, and it is precisely formula (1.1) that gives rise to the existence of the phase space cell. Furthermore, the derivation above also yields another result: particles at different energy levels have different momenta and different wavelengths, so particles at different energy levels have different phase space cell sizes. Therefore, we find that, starting from the quantum wave probability, we can derive the Boltzmann distribution for identical particles in a simpler, clearer, and more self-consistent way. Thus, the foundation for the validity of the Boltzmann distribution in classical statistical mechanics originates from quantum mechanics. It is precisely because particles in quantum mechanics have wave properties, and hence the attribute of quantum wave probability, that the Boltzmann distribution and Boltzmann statistics exist.
In formula (1.8), because the n! duplicate probabilities are divided out, the number of particles n must be less than the number of cells m, so n < m.
So we find that starting from the quantum wave probability, we can derive the Boltzmann distribution. At the same time, we can also understand the physical essence of the Boltzmann distribution: for n particles, with n less than the number of cells m, each cell has a probability of particle excitation, and the number of excited particles in each cell does not exceed one. This can be referred to as the Boltzmann condition. This condition is essentially a uniform condition. If a cell allows the excitation of more than one particle, or if a cell allows the probability of exciting zero particles, then this situation is considered non-uniform.
We can also find that the conditions for the validity of the Boltzmann distribution do not necessarily require the total number of particles to be very sparse. As long as the Boltzmann condition is satisfied, the Boltzmann distribution holds. Therefore, the classical limit condition for transitioning from the Bose distribution (or Fermi distribution) to the Boltzmann distribution, also called the particle sparsity condition, is not a necessary condition. However, a sparse particle situation does satisfy the Boltzmann condition. A sparse particle situation is just one of the scenarios under the Boltzmann condition. In actual physical processes, other situations that satisfy the Boltzmann condition cannot be ruled out.
In classical statistical mechanics, the factor of 1/n! in the Boltzmann distribution originates from the fact that exchanging distinguishable particles within the same energy level does not generate new microstates. However, in quantum mechanics particles are identical, so this factor cannot arise from particle exchange; it comes instead from the restriction imposed by the Boltzmann condition, which allows only one particle to occupy each cell. Therefore, the origin of this factor is completely different in quantum statistics compared to classical statistics. In fact, the origin of this factor is related to the true nature of the Boltzmann distribution. We find that this restriction is actually a constraint of uniform distribution: the essence of the Boltzmann distribution is a form of uniform distribution.
The probability of particle distribution on a single energy level is given by formula (1.8), and the total probability of all particles over multiple energy levels is given by formula (1.9).
Formula (1.9) represents the number of microstates of identical particles distributed according to the Boltzmann distribution across all energy levels.
Because particles are identical in quantum mechanics, exchanging particles between different energy levels does not produce new microstates, so there is no N! term.
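If each level contributes a single-level factor of the form just discussed, the multi-level total probability (1.9) presumably reads as follows (a hedged reconstruction):

```latex
% Hedged reconstruction, assuming the single-level form (1.8) is m^n/n!:
W = \prod_i \frac{m_i^{\,n_i}}{n_i!} \qquad \text{(cf. 1.9)}
% m_i: cells at level i;  n_i: particles excited at level i.
% No overall N! factor appears, because the particles are identical.
```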
Here, we can also see why Gibbs's method is effective. In Boltzmann's original formula for the number of microscopic states, after dividing by N!, we exactly obtain formula (1.9). However, Gibbs could not explain why the 1/N! term should exist. It can be seen that the 1/N! term actually comes from the Boltzmann condition. The Boltzmann condition is a uniformity condition, and the restriction of uniformity brings about the 1/N! term.
Similarly, all particles and all energy levels satisfy the following conditions. At any given moment, the total number of excited particles N remains unchanged and is a constant. The total energy E of all excited particles remains unchanged and is a constant.
Using standard methods of statistical mechanics, we define entropy S, which is the Boltzmann entropy, satisfying the following formula (1.12).
Similarly, the equilibrium state is the state of maximum entropy, satisfying condition (1.13).
For a multi-particle system, the total probability of each energy level is a fixed value; the only variable is the number of particles excited in each energy level. So we obtain the following.
Combining (1.10) and (1.11), according to the Lagrange multiplier method, we can obtain
We then derive the Boltzmann distribution (1.15).
The α in formula (1.15) is a constant. The constant β is related to temperature and satisfies formula (1.16).
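The step to (1.15) is the standard Lagrange-multiplier calculation. A sketch, under the assumption that the total probability has the product form $W=\prod_i m_i^{n_i}/n_i!$ with level quantities $m_i$, $n_i$, $\varepsilon_i$, is:

```latex
% Sketch assuming W = \prod_i m_i^{n_i}/n_i!  (Stirling: \ln n! \approx n\ln n - n):
\ln W \approx \sum_i \Bigl( n_i \ln \frac{m_i}{n_i} + n_i \Bigr)
% Extremize subject to \sum_i n_i = N and \sum_i n_i \varepsilon_i = E
% with multipliers \alpha, \beta:
\frac{\partial}{\partial n_i}\Bigl[\ln W - \alpha \sum_j n_j - \beta \sum_j n_j \varepsilon_j\Bigr]
  = \ln \frac{m_i}{n_i} - \alpha - \beta \varepsilon_i = 0
\;\Longrightarrow\; n_i = m_i\, e^{-\alpha - \beta \varepsilon_i} \qquad \text{(cf. 1.15)}
```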
Define the partition function Z, satisfying formula (1.17).
Formula (1.10) is transformed into formula (1.18).
By combining formulas (1.15) and (1.17), formula (1.21) can be obtained.
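As a numerical illustration of the relations (1.15), (1.17), and (1.18), the sketch below evaluates the occupation numbers for a hypothetical three-level system; the energies, cell counts, temperature, and particle number are invented for the example:

```python
from math import exp

# Hypothetical level scheme (illustration only): energies eps_i and
# cell counts m_i, the latter playing the role of degeneracies.
eps = [0.0, 1.0, 2.0]
m = [5, 10, 15]
beta = 1.0    # beta = 1/(kT), in units with k = 1
N = 1000.0    # total particle number, constraint (1.10)

# Partition function in the form suggested by (1.17): Z = sum_i m_i e^{-beta eps_i}
Z = sum(mi * exp(-beta * ei) for mi, ei in zip(m, eps))

# Boltzmann occupations n_i = m_i e^{-alpha - beta eps_i}, with e^{-alpha} = N/Z
# as suggested by (1.15) and (1.18).
n = [N * mi * exp(-beta * ei) / Z for mi, ei in zip(m, eps)]

total = sum(n)                                # should recover N, i.e. (1.10)
E = sum(ni * ei for ni, ei in zip(n, eps))    # total energy, i.e. (1.11)
```

Here the occupations sum back to N exactly, and in this example they decrease with energy, since m_i e^{-βε_i} is largest for the ground level.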
Formula (1.12) is converted into the following formula.
The following can be obtained from formula (1.15).
Substituting it into formula (1.22) above, we obtain the following result.
Formula (1.25) is the Boltzmann entropy. Compared with the Boltzmann entropy in classical statistical mechanics [3, 4, 5], formula (1.25) has an extra term. This additional term arises from the identity of particles in quantum mechanics. By deriving it from the quantum wave probability, we obtain this term logically, without assuming its existence. Therefore, by taking quantum wave probability as the foundation, the entire theory becomes simpler and more logically consistent.
The above derivation is in the case of a simple one-dimensional space. For an ideal gas in three-dimensional space, the probability density within a three-dimensional space cell is given by formula (1.3).
In quantum mechanics, the particle wavelength satisfies the following formula (1.26).
Here P is the momentum of the particle, so formula (1.3) becomes formula (1.27).
For the case of the Boltzmann distribution, the particle distribution satisfies formula (1.15). Convert it into differential form.
The partition function formula (1.17) is then transformed into the integral formula (1.29).
For an ideal gas in three-dimensional space, the energy E of each particle satisfies formula (1.30).
We thus derive the partition function of an ideal gas. The result differs from that in classical statistical mechanics only by a constant factor, namely the probability proportionality constant. However, in this derivation there is no need to assume the existence of phase space or phase cells. Formulas (1.27) and (1.29) are entirely results obtained by taking differentials in 3-dimensional space and 3-dimensional momentum, independent of phase space; yet the differential form brings effects equivalent to phase space cells. Since we do not need to assume the existence of phase space, the theory has fewer assumptions and the derivation is simpler and more direct.
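The momentum integral behind (1.29) and (1.30) can be checked numerically. In units where the particle mass and kT are both 1, the integral of 4πp² exp(−βp²/2m) over all p should equal (2πmkT)^{3/2} = (2π)^{3/2}:

```python
from math import exp, pi

# Check the Maxwell-Boltzmann momentum integral in units with m = kT = 1,
# so beta * p^2 / (2m) = p^2 / 2:
#   integral_0^inf 4*pi*p^2 * exp(-p^2/2) dp  =  (2*pi)^(3/2)
analytic = (2.0 * pi) ** 1.5

# Trapezoidal rule on [0, 20]; the integrand is negligible beyond p ~ 20.
steps = 200_000
upper = 20.0
dp = upper / steps
numeric = 0.0
for i in range(steps + 1):
    p = i * dp
    f = 4.0 * pi * p * p * exp(-p * p / 2.0)
    numeric += (0.5 if i in (0, steps) else 1.0) * f * dp

rel_err = abs(numeric - analytic) / analytic
```

The numeric and analytic values agree to well within one part in a million, confirming that the Gaussian momentum integral reproduces the (2πmkT)^{3/2} factor of the ideal-gas partition function.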
So we derive the result of the Boltzmann distribution from the quantum wave probability. This proves that the Boltzmann distribution actually originates from quantum mechanics. The foundation of Boltzmann statistics is actually quantum mechanics. Boltzmann statistics is also essentially a form of quantum statistics.
Combining formulas (1.23) and (1.28), a result can be calculated and proven: when two ideal gases of the same type, at the same temperature, with the same number of particles, and with the same volume are mixed together, the number of particles doubles, the volume doubles, the energy doubles, and the Boltzmann entropy also doubles, meaning there is no Gibbs paradox [1, 2].
According to formula (1.32), we obtain the following.
According to formula (1.24), before mixing, the Boltzmann entropy of the ideal gas is as follows.
After mixing, the number of particles N → 2N and the volume V → 2V. The temperature remains unchanged, so A remains unchanged and the other temperature-dependent factors remain unchanged. Therefore, we obtain the entropy of the ideal gas after mixing.
Therefore, the Gibbs paradox does not exist.
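This doubling can be verified with the standard Sackur-Tetrode form of the ideal-gas entropy, which shares the structure of (1.25) after Stirling's approximation. The constant c below is a stand-in bundling the mass, Planck-constant, and probability-constant factors (its value cancels in the comparison):

```python
from math import log

def entropy_over_k(N, V, T, c=1.0e25):
    """Ideal-gas entropy (divided by k) in the standard Sackur-Tetrode form,
    S/k = N * [ln(c*V/N) + 1.5*ln(T) + 2.5].
    The 1/N! division turns N*ln(V) into N*ln(V/N) + N, matching the
    structure of (1.25) under Stirling's approximation. The constant c
    is a hypothetical stand-in; it cancels in the comparison below."""
    return N * (log(c * V / N) + 1.5 * log(T) + 2.5)

# Two identical gases (same N, V, T) mixed: N -> 2N, V -> 2V, T unchanged.
S_before = 2 * entropy_over_k(1.0e23, 1.0, 300.0)
S_after = entropy_over_k(2.0e23, 2.0, 300.0)
doubling_gap = abs(S_after - S_before)
```

Because V/N is unchanged by the mixing, the entropy is strictly extensive and the gap vanishes (up to rounding): mixing identical gases adds no entropy.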
From the derivation process above, we can also understand why the Boltzmann distribution in classical statistical mechanics is effective. In classical statistical mechanics, the number of microscopic states calculated using the Boltzmann method is expressed by the following formula (1.33).
The total probability of particle excitation derived from quantum wave probability is formula (1.34).
The difference between these two formulas is only a factor of N!. For a given system, the total number of particles is fixed, so the N! term is just a constant. Mathematically, therefore, the two formulas differ only by a constant, and consequently the entropies differ only by a constant. Temperature is T = dE/dS; when calculating dS, this constant cancels out and has no effect, so the same temperature is obtained. Thus, in the mathematical calculation of temperature, the two formulas yield the same value and have the same effect. However, when calculating the entropy itself, the constant N! does not cancel, so the results differ. Hence, formula (1.33) gives rise to the Gibbs paradox. In fact, the physical meanings of formulas (1.33) and (1.34) are completely different. The 1/n! factor in formula (1.33) comes from removing the duplicated states caused by exchanging distinguishable particles. The 1/n! factor in formula (1.34) comes from the constraint of the uniformity condition: since only one particle is allowed to be excited in each cell, the probability of exciting more than one particle in a cell must be removed. The two removal operations use the same mathematical expression, which can be considered a remarkable coincidence, but the physical origins of the two removed terms are completely different, and their physical meanings are entirely distinct. Formula (1.33) is not compatible with the quantum mechanical principle of identical particles, whereas formula (1.34) is.
To summarize the derivation process above, what are the differences compared with the derivation in classical statistical mechanics? First, particles are completely identical. Second, there is no need to assume phase space cells. Because the quantum wave probabilities satisfy formula (1.2), the particle wavelength introduces a space unit equivalent to a phase cell. Not all particles have the same size unit; particles at different energy levels have different wavelengths and thus have different unit sizes. Third, the probability that each unit excites a particle satisfies the Boltzmann condition. Fourth, there are no fixed, permanent particles. Particles are randomly excited in the unit, each unit has an excitation probability, and then these particles quickly annihilate. The excitation and annihilation continue repeatedly. However, at any given moment, the total number of excited particles satisfies the constraints of formulas (1.10) and (1.11). Deriving the Boltzmann distribution and Boltzmann entropy based on quantum wave probability starts entirely from the fundamental probabilities of quantum mechanics, and the entire derivation naturally conforms to the concepts and methods of quantum mechanics. This fully integrates the Boltzmann distribution and Boltzmann statistics into the theoretical framework of quantum mechanics.