3. Entropy in Complex Systems: An Examination of Solids
The connection between thermodynamic entropy and degrees of freedom has been established for the classical ideal gas [10] via physical interpretation of the function $W = T^{3N/2}$, which arises because the dimensionless change in entropy at constant volume in a system with constant-volume heat capacity $C_V$ is,

$$\frac{\Delta S}{k_B} = \int_{T_1}^{T_2}\frac{C_V}{k_B}\,\frac{dT}{T} = \frac{C_V}{k_B}\ln\!\left(\frac{T_2}{T_1}\right) \qquad (19)$$
The restriction to constant volume is important because in an ideal gas the quantity $2C_V/k_B$ is the number of degrees of freedom. Comparison with the Boltzmann entropy, $S = k_B\ln W$, yields $W = T^{3N/2}$, which has been interpreted as the total number of ways of arranging the energy among the degrees of freedom in the system. This interpretation arises from the property of the Maxwellian distribution that it is the product of three independent Gaussian distributions of the velocity in each of the x, y and z directions. It was argued that each particle has access to, in effect, $T^{1/2}$ velocity states in each of the three directions, making a total of $T^{3N/2}$ states for N particles across the three dimensions.
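To make that comparison explicit for a monatomic ideal gas, for which $C_V = \tfrac{3}{2}Nk_B$ so that $2C_V/k_B = 3N$ is the number of translational degrees of freedom, equating equation (19) to the Boltzmann form gives, to within a multiplicative constant,

$$\frac{S}{k_B} = \ln W \;\Rightarrow\; W \propto T^{C_V/k_B} = T^{3N/2}.$$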
The restriction to constant volume matters because it means that no energy is supplied to the system to do work against the external pressure, so any energy that is supplied is distributed among the degrees of freedom. In a solid, a small amount of work is done against the external pressure as the solid expands, but the volume change is usually so small that this contribution to the heat supplied can be neglected. The difference between the constant-volume and constant-pressure heat capacities therefore arises predominantly from work done against the internal cohesive forces. Unlike in a gas, the excess heat supplied at constant pressure over that supplied at constant volume remains within the solid, and the heat supplied at constant pressure is equivalent to the change in internal energy of the solid. Use of the constant-volume heat capacity in the calculation of the total thermodynamic entropy would therefore underestimate the total entropy. In consequence, the change in entropy in equation (19) has to be modified to include the constant-pressure heat capacity and, as this is dependent on temperature, it must be included in the integration over the temperature range.
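In equation form, this modification places the temperature-dependent constant-pressure heat capacity under the integral of equation (19):

$$\frac{\Delta S}{k_B} = \int_{T_1}^{T_2}\frac{C_p(T)}{k_B}\,\frac{dT}{T}.$$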
Unlike in an ideal gas, the active degrees of freedom within a solid are not so readily identifiable. The heat capacity of a solid can be understood as arising from the excitation of quantized oscillations of the lattice, as described first by Einstein and later by Debye. The Debye model itself is strictly a constant-volume heat capacity, as the quantized levels are assumed to remain unchanged over the entire temperature range of interest, but relaxation of the lattice would not allow for this. Even without this difficulty, the Debye model does not lend itself to an identification of activated degrees of freedom. At high temperatures, the heat capacity is assumed to approach the Dulong-Petit limit of $3Nk_B$, which is imposed as an upper limit and corresponds to the classical model of six degrees of freedom per atom, with the kinetic and potential energies each characterised by three degrees of freedom. At very low temperatures, the heat capacity varies as $T^3$ and tends to zero as $T$ tends to zero. Whereas in the classical view of a solid the energy of an oscillation can steadily decrease simply through a reduction in the amplitude, the energy of a quantized oscillator cannot be continuously reduced. The decrease in heat capacity means that vibrational energy is partitioned among fewer and fewer atoms as the temperature is lowered, thereby implying that degrees of freedom are de-activated. Quantifying this, or its inverse, the activation of degrees of freedom as the temperature is raised, is the difficulty.
The phonon spectrum of a crystalline solid always contains so-called “acoustic mode” vibrations, regardless of the solid. Despite the name, the frequency of these phonons can extend into the GHz range and beyond at the edge of the Brillouin zone, where the wave vector is $q = \pi/a$, with $a$ being the lattice spacing. Close to the centre of the Brillouin zone the frequency tends to zero at very small wavevectors, and these acoustic-mode phonons represent travelling waves with relatively long wavelengths moving at the velocity of sound. These are the modes that are excited at very low temperatures and for which the solid acts as a continuum [11]. Therefore, these vibrations do not correspond to the classical picture of atoms vibrating randomly relative to their neighbours, for which there are six degrees of freedom, but constitute a collective motion of a large number of atoms. The average energy per atom is probably very small and it is not clear just how many effective degrees of freedom are active. By contrast, at the edge of the Brillouin zone $d\omega/dq \approx 0$, meaning that the group velocity is close to, or could even be, zero. These are not travelling waves but isolated vibrations. Some materials also contain higher-frequency phonon modes, called optical-mode phonons because the frequency extends into the THz range and beyond and the phonons themselves can be excited by optical radiation.
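In terms of the acoustic-branch dispersion relation $\omega(q)$, the two limits just described are,

$$\omega \simeq v_s\,q \quad (q \to 0), \qquad v_g = \frac{d\omega}{dq} \to 0 \quad \left(q \to \frac{\pi}{a}\right),$$

where $v_s$ is the velocity of sound.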
The detailed phonon structure for any given solid can be quite complex and the Debye theory, being general in nature, does not take this detail into account. Rather, it represents the internal energy as the integral over a continuous range of phonon frequencies with populations given by Bose-Einstein statistics and an arbitrary cut-off for the upper frequency [11]. The specific heat is then derived from the variation of internal energy with temperature. Despite the wide acceptance of the Debye theory, it is also recognised that it does not constitute a complete description of the specific heats. Instead of the $(\theta_E/T)^2 e^{-\theta_E/T}$ dependence of the specific heat derived by Einstein at very low temperatures, the Debye model gives $C_V \propto T^3$. All this is well known and forms the staple of undergraduate courses in this topic, but what is perhaps not so well known is that, according to Blackman [11] (p. 24), “The experimental evidence for the existence of a $T^3$ region is, on the whole, weak…”. Moreover, the Debye theory gives rise to a single characteristic temperature, the Debye temperature, for a given solid, which should define the heat capacity over the whole temperature range but in fact does not. Low-temperature heat capacities generally require a different Debye temperature from high-temperature values.
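For reference, the standard Debye form underlying this discussion is, in the usual notation with $\theta_D$ the Debye temperature,

$$U = 9Nk_BT\left(\frac{T}{\theta_D}\right)^{3}\int_0^{\theta_D/T}\frac{x^3}{e^x - 1}\,dx, \qquad C_V = \frac{\partial U}{\partial T},$$

which tends to the Dulong-Petit limit $C_V \to 3Nk_B$ for $T \gg \theta_D$ and to $C_V \to \frac{12\pi^4}{5}Nk_B\left(\frac{T}{\theta_D}\right)^3$ for $T \ll \theta_D$.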
This discussion of the inadequacies of the Debye model is important because, whilst it shows that in general the heat capacity of a solid can be understood as arising from the excitation of quantized oscillations, a complete, accurate and general picture is not available. Moreover, in practice, the high-temperature specific heats of solids often exceed $3Nk_B$, making the quantification of the number of active degrees of freedom even more difficult. This deviation might possibly be explained by the difference between the constant-volume and constant-pressure heat capacities, but in [5] the author presented data for the molar heat capacity of silicon over the whole temperature range from 0 K to the melting point [12]. Close to the melting point (1687 K) the constant-pressure heat capacity is 28.93 J mol⁻¹ K⁻¹, which, assuming $\tfrac{1}{2}k_BT$ per degree of freedom, is equivalent to 7 degrees of freedom per atom. However, data for the constant-volume heat capacity [13] was also presented, which showed that $C_V$ is already 25 J mol⁻¹ K⁻¹ at 800 K. This equates to six degrees of freedom per atom ($3N_Ak_B$, $N_A$ being Avogadro’s number, corresponds to 24.9 J mol⁻¹ K⁻¹), but as data is available only up to 800 K [13] it is not clear whether the constant-volume heat capacity continues to increase with temperature, thereby exceeding $3N_Ak_B$, or flattens off. Clearly, the constant-pressure heat capacity does not directly represent the active degrees of freedom, because there is no classical theory which allows for 7 degrees of freedom per atom, but the constant-volume heat capacity does not properly account for the total change in internal energy and hence the entropy change. This means that the function $W = T^{3N/2}$, which gives the number of ways of distributing the energy among the degrees of freedom in an ideal gas, has no directly comparable meaning in a solid, even allowing for a change from $C_V$ to $C_p$.
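The arithmetic behind these figures is easily checked. A minimal sketch, using only the values quoted above and the gas constant $R = N_Ak_B$:

```python
# Degrees of freedom per atom implied by a molar heat capacity C,
# assuming (1/2)k_B*T per degree of freedom: beta = 2*C/(N_A*k_B) = 2*C/R.
R = 8.314  # gas constant N_A*k_B in J mol^-1 K^-1

for label, C in [("Si, C_p near the melting point (1687 K)", 28.93),
                 ("Si, C_v at 800 K", 25.0)]:
    print(f"{label}: {2 * C / R:.2f} degrees of freedom per atom")

# Dulong-Petit reference: six degrees of freedom corresponds to 3*N_A*k_B
print(f"3*N_A*k_B = {3 * R:.1f} J mol^-1 K^-1")
```

This reproduces the values of 7 (6.96), 6 (6.01) and 24.9 J mol⁻¹ K⁻¹ quoted in the text.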
Nonetheless, we can suppose that there exist at any given temperature $\beta(T)$ active degrees of freedom, each associated with an average of $\tfrac{1}{2}k_BT$ of energy. Then, for one mole of substance, the internal energy can be written as,

$$U = \frac{1}{2}\beta(T)\,k_BT \qquad (20)$$
The meaning of this is perhaps not immediately apparent, but it is equivalent to taking an average heat capacity. For simplicity, let,

$$T_B = k_BT \qquad (21)$$

$T_B$ has the units of Joules, but it can be considered a temperature if we effectively rescale the absolute temperature so that the heat capacity becomes dimensionless. If the molar heat capacity at constant volume is $C(T)$, then for 1 mole,

$$U = \frac{1}{2}\beta(T)\,T_B = \int_0^{T_B}\frac{C(T_B')}{k_B}\,dT_B' \qquad (22)$$

By definition, the internal energy at some value of $T_B$ is,

$$U = \left\langle\frac{C}{k_B}\right\rangle T_B \qquad (23)$$

The angular brackets denote an average. Therefore,

$$\left\langle\frac{C}{k_B}\right\rangle = \frac{1}{T_B}\int_0^{T_B}\frac{C(T_B')}{k_B}\,dT_B' \qquad (24)$$

or, equivalently, since $dT_B = k_B\,dT$,

$$\left\langle\frac{C}{k_B}\right\rangle = \frac{1}{T}\int_0^{T}\frac{C(T')}{k_B}\,dT' \qquad (25)$$
This is equivalent to the geometrical transformation indicated in
Figure 3a,b.
By comparison with equation (22),

$$\beta(T) = 2\left\langle\frac{C}{k_B}\right\rangle \qquad (26)$$

By definition, for simple solids for which the heat capacity increases with temperature, $\left\langle C\right\rangle < C(T)$, so we can expect the number of active degrees of freedom to be less than would be indicated by the heat capacity alone. Making use of equations (22) and (26) together, we have,

$$\beta(T) = \frac{2U}{T_B} = 2\left\langle\frac{C}{k_B}\right\rangle \le \frac{2C(T)}{k_B} \qquad (27)$$

In other words, for a heat capacity that varies with temperature, the effective number of degrees of freedom is less than twice the dimensionless heat capacity, becoming equal only when all the degrees of freedom have been activated and $\left\langle C\right\rangle = C$. This is true regardless of the system, whether solid, liquid or gas. It should be noted, however, that equation (27) does not account for latent heat and therefore implies a restriction to the solid state, or at least to ideal systems in which phase changes do not occur. This includes phase changes within the solid state and therefore implies a simple solid, such as one that is described by the Debye theory of the heat capacity over the entire temperature range to melting.
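Equations (24)-(27) translate directly into a numerical procedure for tabulated heat-capacity data. A minimal sketch, assuming a toy Debye-like curve purely for illustration (real use would substitute the tabulated $C(T)$ behind Figure 4):

```python
import numpy as np

R = 8.314  # N_A*k_B in J mol^-1 K^-1

# Toy molar heat capacity rising as ~T^3 to the Dulong-Petit limit 3R;
# illustrative only -- substitute tabulated C(T) for a real solid.
T = np.linspace(0.5, 1200.0, 2400)
theta = 300.0
C = 3.0 * R * (T / np.hypot(T, theta))**3

# Equation (25), per atom: <C/(N_A*k_B)> = (1/T) * integral_0^T (C/R) dT'
dT = np.diff(T)
integral = np.concatenate(([0.0], np.cumsum(0.5 * (C[1:] + C[:-1]) / R * dT)))
mean_C = integral / T  # contribution below T[0] is negligible for this curve

beta = 2.0 * mean_C    # equation (26): active degrees of freedom per atom
bound = 2.0 * C / R    # equation (27): beta can never exceed 2C/k_B

assert np.all(beta <= bound + 1e-9)
print(f"at {T[-1]:.0f} K: beta = {beta[-1]:.2f} < 2C/k_B = {bound[-1]:.2f}")
```

The assertion expresses equation (27): because the toy curve increases monotonically with temperature, the mean always lies below the instantaneous value.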
Figure 4a shows the reduced heat capacity of five such solids based on the constant-pressure heat capacity. As discussed in relation to equation (19), the heat capacity at constant pressure gives the internal energy. Included in Figure 4a is data for silicon and, for comparison, germanium as a similar semiconducting material, though the available data is restricted in its temperature range [14]. The reduced heat capacity is the dimensionless heat capacity of equation (22) normalized to Avogadro’s number, and a reduced heat capacity of 3 would therefore correspond to the classical Dulong-Petit limit of a molar heat capacity of $3N_Ak_B$. In all cases, the reduced heat capacity exceeds 3 at a temperature close to the Debye temperature, which might well be a consequence of using the constant-pressure rather than the constant-volume heat capacity.
Figure 4b shows the mean reduced heat capacity (equation (24)) of the four solids for which heat capacity data down to $T = 0$ is readily available.
There is nothing particular about the elements represented in
Figure 4 other than that they have moderate to low melting points. They were chosen because an internet search produced a complete range of data for each of the three metals, aluminium [
15], indium [
16] and lead [
17]. It is noticeable that, with the exception of lead, all have a mean reduced heat capacity below three. Even for lead, the discrepancy is less than 3%, with the maximum value being 3.08. Arblaster claims an accuracy of 0.1% in his data for temperatures exceeding 210 K [
17], so it is not clear what the cause of this excess might be.
Figure 4a shows very clearly, however, that the heat capacity is not a direct representation of the active degrees of freedom. By definition, from equations (20) and (26), the mean heat capacity is a direct representation of the number of effective degrees of freedom, and it is clear that even up to the melting point, the small excess with lead notwithstanding, there is less than $3k_BT$ of energy associated with each constituent atom.
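Explicitly, combining equations (20) and (26), the average energy per atom is,

$$\frac{U}{N_A} = \frac{\beta(T)}{2N_A}\,k_BT = \left\langle\frac{C}{N_Ak_B}\right\rangle k_BT,$$

so a mean reduced heat capacity below 3 corresponds directly to less than $3k_BT$ of energy per atom.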
The mean heat capacity can now be used to attach a physical meaning to the thermodynamic entropy. From equation (20), the change in internal energy is,

$$dU = \frac{1}{2}\beta\,dT_B + \frac{1}{2}T_B\,d\beta \qquad (28)$$

Under the assumption that the internal energy is partitioned equally among active degrees of freedom, the change in internal energy comprises two components: the change in the average energy among the degrees of freedom already activated and the distribution of some of the additional energy into newly activated degrees of freedom, each of which contains an average energy $\tfrac{1}{2}T_B$. Both of these terms contribute to the entropy. However, as the change in internal energy is written in terms of $T_B$, it is necessary to divide the entropy by $k_B$ to give the dimensionless quantity,

$$\frac{dS}{k_B} = \frac{dU}{T_B} \qquad (29)$$
Upon substitution of equation (28) into equation (29), the dimensionless entropy change is,

$$\frac{dS}{k_B} = \frac{\beta}{2}\,\frac{dT_B}{T_B} + \frac{d\beta}{2} \qquad (30)$$

Here we make use of the fact that $T_B = k_BT$, so that $dT_B/T_B = dT/T$ and,

$$\frac{dS}{k_B} = \frac{\beta}{2}\,\frac{dT}{T} + \frac{d\beta}{2} \qquad (31)$$

It is notable that the change in entropy is still given as a function of the absolute temperature in Kelvins, which allows for a ready interpretation in terms of the active degrees of freedom, as expressed in equation (33) below.
The first term on the right in equation (31) can be interpreted as the change in the function $W = T^{\beta/2}$, which has already been defined for a classical ideal gas as the number of arrangements by direct comparison of the thermodynamic entropy and Boltzmann’s entropy [10]. Straightforward differentiation of $\ln W$ with respect to $\ln T$, holding $\beta$ constant, yields,

$$\frac{d\ln W}{d\ln T} = \frac{\beta}{2} \qquad (32)$$

so that the first term of equation (31) is simply $d\ln W$ and,

$$\frac{dS}{k_B} = d\ln W + \frac{d\beta}{2} \qquad (33)$$
The change in thermodynamic entropy therefore represents the fractional change in the number of arrangements or, equivalently, the change in the Boltzmann entropy, plus the addition of new degrees of freedom. Integrating equation (31) by parts to get the total entropy at some temperature $T_1$, we find,

$$\frac{S(T_1)}{k_B} = \left[\frac{\beta}{2}\ln T\right]_0^{T_1} - \frac{1}{2}\int_0^{T_1}\ln T\,d\beta + \frac{1}{2}\int_0^{T_1}d\beta \qquad (34)$$

At $T = 0$, $\beta = 0$ while $\ln T \to -\infty$, but $\beta$ vanishes faster than $\ln T$ diverges, so that $\beta\ln T \to 0$ and the lower limit is also zero. Therefore,

$$\frac{S(T_1)}{k_B} = \frac{\beta(T_1)}{2}\ln T_1 - \frac{1}{2}\int_0^{T_1}\left(\ln T - 1\right)d\beta \qquad (35)$$
The first term on the RHS is recognizable as the Boltzmann entropy, $S_B/k_B = \ln W = \frac{\beta}{2}\ln T$. In the second term on the RHS, $\ln T$ is a positive number for $T > 1$, and greater than unity for $T > e \approx 2.72$ K. Therefore, for systems in which the heat capacity varies with temperature the thermodynamic entropy at any given temperature above approximately 3 K is less than the Boltzmann entropy, for the simple reason that the number of arrangements at any given temperature depends only on the number of degrees of freedom active at that temperature, whereas the thermodynamic entropy, being given by an integral over all temperatures, accounts for the fact that the number of degrees of freedom has changed over the temperature range. This relationship is illustrated in
Figure 5.
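Equation (35) lends itself to a direct numerical comparison with the Boltzmann entropy. The sketch below assumes a smooth, Debye-like toy form for $\beta(T)$ (illustrative only, not fitted to any material) and evaluates both $\ln W$ and $S/k_B$:

```python
import numpy as np

# Toy beta(T): smooth activation of up to 6 degrees of freedom per atom.
# Illustrative only; a real calculation would use beta from equation (26).
T = np.linspace(0.01, 1000.0, 100000)  # start above 0 to avoid log(0)
theta = 300.0
beta = 6.0 * (T / np.hypot(T, theta))**3

# Boltzmann entropy per atom in units of k_B: ln W = (beta/2) ln T
lnW = 0.5 * beta * np.log(T)

# Equation (35): S/k_B = (beta/2) ln T - (1/2) * integral (ln T' - 1) d(beta)
dbeta = np.diff(beta)
mid_lnT = 0.5 * (np.log(T[:-1]) + np.log(T[1:]))
S = lnW - 0.5 * np.concatenate(([0.0], np.cumsum((mid_lnT - 1.0) * dbeta)))

i = np.searchsorted(T, 300.0)
print(f"T = {T[i]:.0f} K: ln W = {lnW[i]:.2f}, S/k_B = {S[i]:.2f}")
# Above ~3 K, S/k_B lies below ln W, as argued in the text.
```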