A Contextual Foundation for Mechanics, Thermodynamics, and Evolution

Abstract: The prevailing interpretations of physics are based on deeply entrenched assumptions rooted in classical mechanics. Logical implications include: the denial of entropy and irreversible change as fundamental physical properties; the inability to explain random quantum measurements or nonlocality without untestable metaphysical implications; and the inability to define complexity or explain its evolution. We propose a conceptual model based on empirically justifiable assumptions. The WYSIWYG Conceptual Model (WCM) assumes no hidden properties: “What You can See Is What You Get.” The WCM defines a system’s state in the context of its actual ambient background, and it extends existing models of physical reality by defining entropy and exergy as objective contextual properties of state. The WCM establishes the irreversible production of entropy and the Second Law of thermodynamics as a fundamental law of physics. It defines a dissipative system’s measurable rate of internal work as an objective measure of the stability of its dissipative process. A dissipative system can follow either of two paths toward higher stability: it can 1) increase its rate of exergy supply or 2) better utilize existing exergy supplies to increase its internal work rate and functional complexity. These paths guide the evolution of both living and non-living systems.


Introduction
Physics has had a foundational crisis since the early Twentieth Century, when classical mechanics ceded its supremacy to quantum mechanics and relativity as fundamental descriptions of physics. Five conceptual problems that highlight this crisis are: 1) the problem of time, 2) the problem of measurement, 3) the problem of quantum randomness, 4) the problem of nonlocality, and 5) the problem of evolving complexity.

The Problem of Time
Perhaps the most fundamental conceptual issue facing physics concerns the nature of time [1][2][3][4]. Relativity describes time as a dimension of spacetime, and like the three dimensions of space, time has no preferred direction. Change within spacetime is reversible and deterministic. Reversibility means that there is no fundamental arrow of time, and determinism means that the future is determined by the present. The future, as well as the past, is set in stone.
Determinism is a logical consequence of classical mechanics. Classical mechanics defines the microstate, which expresses everything that is measurable and knowable about a system, by perfect measurement in the absence of thermal noise. Perfect measurement reveals (in principle) the precise positions and motions of a system's parts and the forces acting on them. Classical mechanics further assumes that the microstate completely specifies the system's underlying physical state. Application of the equations of motion to a precisely defined state determines all future states.
Determinism does not by itself imply reversibility, however. Newton's laws of mechanics do not include conservation of energy. Newtonian mechanics did not recognize heat as energy, and it accommodates the irreversible dissipation and loss of mechanical energy by non-conservative forces, such as friction.
William Rowan Hamilton reformulated classical mechanics in 1832. Following Joseph-Louis Lagrange's earlier work, he resolved a system into its elementary particles. An elementary particle has mass, but no internal energy. With no internal energy, an elementary particle's total energy equals its mechanical energy, defined by the sum of kinetic and potential energies. Mechanical energy is quantified by its potential to do work, so conservation of total energy means conservation of work potential. The conservation of work potential, along with determinism, implies that we could, in principle, reverse the motions of a system's particles and reverse its evolution without external work. This is the definition of reversibility. Hamiltonian mechanics went beyond Newton's empirical laws of mechanics by eliminating non-conservative forces, and it thereby established reversibility as fundamental.
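The reversibility described above can be made concrete with a small numerical sketch (illustrative, not from the paper; the oscillator, mass, and step size are arbitrary choices). A velocity-Verlet integrator for a conservative force is a time-symmetric map, so reversing a particle's velocity and re-running the same dynamics recovers the initial state without external work:

```python
# Sketch (not from the paper): velocity-Verlet integration of a harmonic
# oscillator, run forward, velocity-reversed, and run forward again.
# Recovering the initial state illustrates the reversibility of
# conservative Hamiltonian dynamics.

def verlet(x, v, steps, dt=1e-3, k=1.0, m=1.0):
    """Velocity-Verlet integrator for the conservative force F = -k x."""
    a = -k * x / m
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt
        a_new = -k * x / m
        v += 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = verlet(x0, v0, 5000)       # evolve forward
x2, v2 = verlet(x1, -v1, 5000)      # reverse the motions, evolve again
print(abs(x2 - x0), abs(-v2 - v0))  # both are ~0: initial state recovered
```

The reversal costs no work because the force is conservative; only the sign of the velocities is flipped.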
A system's physical state may be reversible, but reversibility does not by itself imply equal probabilities for directions of change. Physics acknowledges the empirical arrow of time, as recorded by the irreversible increase in entropy. Boltzmann defined entropy by a system's disorder, which he related to the number of accessible microstates consistent with its statistical mechanical macrostate description. The macrostate is defined by coarse graining of the physical microstate, due to thermal noise and imperfect measurement. He described the increase in entropy as the statistical tendency for large numbers of initially ordered particles to disperse and become increasingly disordered. The particles' dispersal could be reversed, in principle, however, resulting in a decrease in entropy. This is the idea raised by Maxwell's Demon [5], who could reverse the increase in entropy without violating any fundamental laws of physics. Physics regards entropy as an emergent property of a macrostate and its imperfect description, and not as a fundamental property of physics. It likewise regards the Second Law of thermodynamics as an empirical phenomenon, and not as a fundamental law of physics.
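Boltzmann's counting argument can be illustrated with a minimal sketch (illustrative, not from the paper): for N particles split between two halves of a box, the multiplicity W(n) = C(N, n) counts the microstates consistent with the macrostate "n particles on the left," and S = k ln W is maximal for the dispersed, disordered macrostate:

```python
# Sketch (illustrative, not from the paper): Boltzmann's S = k ln W for
# N distinguishable particles split between two halves of a box, with
# W(n) = C(N, n) the number of microstates having n particles on the left.
from math import comb, log

N = 100
S = {n: log(comb(N, n)) for n in range(N + 1)}  # entropy in units of k_B

# Entropy is maximal for the uniform (most disordered) macrostate ...
assert max(S, key=S.get) == N // 2
# ... while an ordered macrostate (all particles on one side) has S = 0.
print(S[0], S[N // 2])  # 0.0 vs ~66.8
```

The dispersal toward the even split is overwhelmingly probable but not mandatory, which is exactly the statistical (rather than fundamental) character of the Second Law described above.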
With the discovery of quantum phenomena in the early twentieth century, it became clear that the laws of classical mechanics break down for very small particles, and a new theory was needed. Quantum mechanics describes the quantum state by the Schrödinger wavefunction. The wavefunction expresses everything that is measurable and knowable about a system, and it therefore defines the quantum mechanical microstate. The quantum mechanical microstate, like the Hamiltonian classical mechanical microstate, is both deterministic and reversible.
The determinism and reversibility of the wavefunction is a fact of its formulation. Whether or not the underlying physical state is deterministic and reversible, however, is a matter of interpretation and debate. Prevailing interpretations of objective quantum reality accept a key conclusion of Hamiltonian classical mechanics, that the fundamental forces of physics are conservative. This eliminates objective randomness and implies that an isolated quantum system's physical state is both deterministic and reversible.
Individual quantum measurements are inherently random, however. The empirical randomness of quantum mechanics is often attributed to external perturbations from the system's surroundings. External perturbations can include, but are not limited to, observation and external measurement. An external perturbation causes decoherence and physical collapse of a superposed state [6]. Prevailing interpretations of quantum mechanics assert that the physical state, as it exists in isolation, unperturbed, and unobserved, is deterministic and reversible.
Given the isolation of the universe, it has no external perturbations, implying that its evolution is deterministic and reversible. Its physical state is described as a static block in 4D spacetime spanning the past, present and future. This is the block model of the universe, and it provides the conceptual foundation for Eternalism [2,7,8], in which all time, past and future, exist and are equally real. Reconciling the empirical arrow of time with a fundamentally reversible universe is the unresolved conceptual problem of time.

The Problem of Measurement
Multiple measurements on an ensemble of identically prepared radioactive particles reveal a statistical mix of decayed and undecayed microstates. Individual measurements are described as eigenfunctions of a quantum operator corresponding to an observable property or property set. An eigenfunction describes the definite measurable properties of a system's eigenstate, subsequent to a measurement. Quantum mechanics describes a system, as it exists prior to measurement, by a superposed wavefunction comprising a statistical superposition of individual measurement results: Ψ = Σi ciψi. The ψi are measurable eigenstates, Ψ is the superposed wavefunction, and the ci's are complex weighting factors based on quantum-state tomography [9]. This uses statistical measurement results for an ensemble of identically prepared systems and the Born rule to reconstruct the system's microstate as it existed in isolation prior to measurement or observation. After its preparation but prior to its measurement, a radioisotope is described by a superposed wavefunction and as a superposition of its potentially measurable states.
If the superposed wavefunction is just an empirical approximation of the system's actual physical state, then the collapse of a superposed wavefunction to a mixture of definite microstates would simply reflect a change in the system's description to a new wavefunction, based on new information acquired at observation. This describes Max Born's statistical interpretation of the wavefunction. However, the Copenhagen Interpretation (CI), which emerged during the late 1920s, considered the wavefunction and quantum microstate as a complete specification of the physical state.
Erwin Schrödinger tried to highlight the absurdity of equating a superposed wavefunction with a system's underlying physical state. He considered a system comprising a radioisotope and its measurement apparatus. For added drama, Schrödinger used a cat for the measurement device. If the particle decays, the cat dies. Between preparation and observation, the radioisotope and its measurement apparatus remain isolated and unobserved.
At preparation, the cat is known to be alive. At observation sometime later, we either find a dead cat, indicating that the particle decayed, or we find it alive, indicating no decay. We can predict the probability of the outcome, but the outcome of any individual measurement is random.
Equating the physical state with a wavefunction implies that it evolves deterministically, as long as the system remains isolated, unperturbed and unobserved. The CI asserts that, prior to observation and while isolated, the radioisotope and cat deterministically evolve to a physically superposed state of undecayed-decayed and live-dead. Random collapse of the physically superposed state to a definite state of dead cat or live cat occurs only when the system's isolation is violated at observation. Schrödinger rejected the possibility of superposed cats and the Copenhagen Interpretation, and he proposed his experiment to illustrate the absurdity of its implications.
Hugh Everett proposed an alternative interpretation that avoids the possibility of superposed cats. In essence, his Many Worlds Interpretation (MWI) [10] says that everything that can happen does happen in separate branches of an exponentially branching universe. Even we, as observers, are split. Each of our split selves observes our own particular branch and sees only a single outcome. We perceive random wavefunction collapse, but from the objective perspective of the universe as a whole, there is no random selection, and the universe evolves deterministically. The MWI trades the possibility of superposed cats for an exponentially branching universe instead.
The CI and MWI both consider the wavefunction as a complete description of the physical state, as it exists isolated and unobserved, and both are consistent with observations. Despite their untestable and aesthetically distasteful metaphysical implications, they both rated well in a survey at a quantum mechanics conference [11]. The measurement problem, and the role of the observer in triggering the apparent randomness of observed measurement results, nevertheless remain unresolved conceptual problems of quantum mechanics [12].

The Problem of Nonlocality
Closely related to the measurement problem is the unresolved problem of quantum nonlocality [13]. Einstein, Podolsky and Rosen (EPR) raised the issue of nonlocality in an article they published in 1935 [14]. They argued that if the wavefunction description of a system's state is complete, then a pair of entangled particles, emitted from a common source, exists in an indefinite superposition of measurable states prior to their measurement. Quantum mechanics predicts that if the particles are measured using parallel detectors, then the outcomes of measurement would be random but strictly correlated, even if measurements are simultaneous and spatially separated. EPR argued that this would violate relativity and the requirement of locality, which prohibits superluminal propagation of effects. EPR suggested that there are hidden properties, inherited from the particles' common source, and that they carry information to determine the measurement results. They concluded that quantum mechanics is therefore incomplete. Hidden variables are unknown and unknowable, so the correlated measurement results only appear random.
However, in 1964, John Bell devised a statistical test for hidden variables, based on the statistics of measurements using randomly oriented analyzers [13]. Numerous experiments have since demonstrated that the statistics of multiple measurements violate Bell's test [15,16]. The results and Bell's theorem indicate that if hidden variables do exist, they must themselves be nonlocal, and that the assumption of local realism is inconsistent with quantum measurements [17]. The de Broglie-Bohm Interpretation and its variants maintain physical determinism by positing the existence of nonlocal hidden variables, consistent with Bell's theorem. Nonlocality cannot be used to transmit signals superluminally, so there is no empirical conflict with relativity, but there is no explanation for the coexistence of nonlocality with relativity. This is the problem of nonlocality, and it poses a significant conceptual problem [13,14].
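The statistics behind Bell's test can be sketched with the CHSH correlator (an illustrative calculation, not from the paper). Using the quantum singlet correlation E(a, b) = −cos(a − b) and standard detector angles, the CHSH combination reaches 2√2, exceeding the bound of 2 that any local hidden-variable model must obey:

```python
# Sketch (illustrative, not from the paper): the CHSH form of Bell's test.
# For an entangled singlet pair, quantum mechanics predicts the correlation
# E(a, b) = -cos(a - b) between detectors at angles a and b. Any local
# hidden-variable model satisfies |S| <= 2; quantum mechanics reaches
# 2*sqrt(2), the violation demonstrated by the experiments cited above.
from math import cos, pi, sqrt

def E(a, b):
    return -cos(a - b)

a, a2, b, b2 = 0.0, pi / 2, pi / 4, 3 * pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S), 2 * sqrt(2))  # 2.828... exceeds the classical bound of 2
```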
There is a loophole in Bell's theorem, however. Bell's theorem implicitly assumes that measurement settings for the entangled particles are strictly uncorrelated. As Bell himself noted, the conclusion of nonlocality is avoided if there is "absolute determinism in the universe [and] the complete absence of free will. Suppose the world is super-deterministic, … the difficulty disappears. There is no need for a faster than light signal to tell particle A what measurement has been carried out on particle B, because the universe, including particle A, already "knows" what that measurement, and its outcome, will be." [18].

The Problem of Quantum Randomness
Superdeterminism is the idea that the universe's past, present, and future are uniquely determined by its initial state. Superdeterminism is a logical consequence of classical mechanics, as famously expressed by Laplace's demon [19]. Classical statistical mechanics regards empirical randomness, as measured by a system's entropy, as a measure of incomplete information and imperfect measurement.
Superdeterminism of quantum mechanics is similarly based on the idea that a system's past, present, and future are uniquely and completely determined by its initial state. In the case of quantum mechanics, however, superdeterminism presumes the existence of hidden variables [20]. Even if the underlying physical state is deterministic and hidden quantum variables do exist, they are not just unknown, they are inherently unknowable. This means that the quantum microstate is inherently random, even if the physical state is deterministic.
Random fluctuations of hidden variables are explicitly invoked for stochastic interpretations of quantum mechanics. For stochastic quantum mechanics, the source of randomness is unspecified, and for stochastic electrodynamics, randomness is modeled as zero-point field fluctuations [21]. However, whether random quantum fluctuations are ontic, meaning that physical reality itself is objectively random, or epistemic, meaning that they are simply unpredictable, is inherently unobservable and unknowable. The objective randomness of the physical state is a matter of assumption and an unresolved question of quantum reality.
Superdeterminism is consistent with empirical randomness, but the idea that the course of the universe's evolution, including our own thoughts and choices, is determined by its initial state is so aesthetically distasteful that many physicists either ignore superdeterminism or reject it outright. The costs of rejecting superdeterminism and asserting fundamental randomness, however, are 1) accepting nonlocality and reconciling it with relativity, and 2) reconciling randomness with the fundamental deterministic laws of physics.

The Problem of Evolving Complexity
The empirical arrow of increasing entropy is not the only arrow of time. The universe has evolved from a nearly homogeneous state following the Big Bang to its current state of extraordinary diversity and complexity. In our own corner of the universe, prebiotic chemicals evolved into self-replicating forms as early as 0.1 billion years after formation of the Earth's oceans [22]. Diverse ecosystems have subsequently spread throughout the oceans and across the globe. Much more recently, humans have evolved culture, technology, and a global economy. These trends exemplify the empirical arrow of evolving complexity.
The evolution of complexity is not limited to biology or self-replicating systems. Ilya Prigogine documented the spontaneous self-organization of dissipative structures within numerous far-from-equilibrium systems [23]. Self-organized dissipative structures can be as simple as convection currents and resonances, or as complex as life. The organization of stars into galaxies, galaxy groups, superclusters, and filaments describes organized structures at the largest scales [24].
Self-organization and local reductions of entropy are sustained by the throughput and dissipation of energy, which invariably leads to an overall production of entropy. Even though self-organization does not violate any laws of physics or thermodynamics, there is no general explanation for the spontaneous self-organization of dissipative structures. The Santa Fe Institute, an organization founded in 1984 to foster the pursuit of complexity science, concluded in a 1993 workshop that complexity arises in many disparate types of systems, and that there likely can be no unified theory of complexity. In a 2014 retrospective, David Pines, one of SFI's cofounders, acknowledged that the dream of a unifying theory of complexity remains elusive [25]. Investigations of self-organization resort to computer simulations of specific cases, and a general theory of self-organization remains unrealized.
In the absence of some principle and mechanism of selection, the probability that random fluctuations could lead to the spontaneous self-organization of dissipative structures would be astronomically remote. Alternatively, if self-organization is not spontaneous, then the current state of complexity must have been deterministically encoded in an extraordinarily improbable and unexplained initial state of the universe. In either case, the failure to explain the universe's state of extraordinary complexity motivates both supernatural explanations and the anthropic principle, in which our universe is a rare statistical outlier within a much larger multiverse [26]. The failure to explain the empirically documented tendency of far-from-equilibrium systems to self-organize expresses the problem of evolving complexity.

We Need a Better Conceptual Model
A conceptual model is an interpretation of physical reality. It seeks to explain empirical observations in terms of its physical model of reality. In particular, a conceptual model of physical reality should explain the empirical arrow of time, the randomness of measurements and macroscopic fluctuations, the superluminal correlation of measurements, and the empirical evolution of complexity.
The nature of a system's physical state, while it is isolated and unobserved, cannot be resolved experimentally. It is strictly a matter of the assumptions by which experimental results are obtained and interpreted. The interpretations mentioned in the preceding sections all define state properties with respect to a noise-free reference state at absolute zero. Logical implications are the conservation of work potential, and the determinism and reversibility of states while they exist isolated, unperturbed and unobserved. In addition, properties can be transformed via an information-preserving Galilean or Lorentzian transformation, meaning that the state is independent of the particular reference frame used, as long as it is at absolute zero and noise-free. An objective physical state is therefore non-contextual. We refer to any such interpretation as a Non-Contextual Model (NCM) of physical reality. Its key implications are the determinism and reversibility of isolated physical states.
NCMs are consistent with empirical observations, but as described in the preceding sections, they have untestable and aesthetically distasteful metaphysical implications. And with no fundamental arrow of time, they all but ignore the arrow of evolving complexity, which remains an empirical fact in need of a physical explanation.
Not all interpretations of quantum mechanics adhere to the NCM assumptions. The Consistent Histories Interpretation [27], for example, asserts that physical states are defined by eigenstates, which are contextually defined by the system's measurement framework. It also abandons the strict objectivity of physical reality by abandoning Unicity. In Quantum Bayesianism [28], the state is contextually defined and updated by an observer's information. The Von Neumann-Wigner interpretation attributes the physical collapse of the wavefunction to consciousness of an observation event [29]. Contextual interpretations are motivated by efforts to resolve the outstanding conceptual problems of quantum mechanics, but they typically define context by an observer or its choices. They therefore typically come at the cost of abandoning or simply ignoring objective physical reality.
Any viable interpretation of quantum mechanics necessarily makes predictions consistent with observations. This raises the question of what difference any particular interpretation really makes. There has been a strong sentiment among some physicists to dismiss the philosophy of science. Richard Feynman is credited with saying: "The philosophy of science is as useful to scientists as ornithology is to birds." Efforts to understand the meaning of quantum mechanics are countered with the edict: "Shut up and calculate!" [30].
We take a different position. Seeking an objective interpretation of physical reality is more than an idle intellectual exercise; it has real-world consequences. The universe is not a static block in spacetime, unchanging for eternity. Recognizing the objective reality of irreversible dissipative processes and explaining their behavior in terms of fundamental physical principles is essential if we want to understand how nature works. To advance physics beyond its current focus on states and to understand the dynamics of the complex systems confronting us, we need a conceptual model that embraces irreversible dissipative processes and defines a general principle of spontaneous self-organization of dissipative structures. This requires nothing less than a major shift in our interpretation of physical reality.

The WCM Interpretation of State
The WYSIWYG conceptual model (WCM) is an alternative to NCM interpretations of physical reality. Its paramount premise is that there are no hidden or unmeasurable properties: What You can See Is What You Get. Like any conceptual model, the WCM is an axiomatic system based on 1) empirical physical facts, 2) a definition of perfect measurement, and 3) basic assumptions. The WCM accepts as true the empirically validated fundamental laws of physics. These include:
• Empirical laws of conservation (energy, momentum, spin/angular momentum, etc.)
• Empirical laws of motion
• Empirical laws of interaction (e.g., the Law of gravitation)
Empirical laws are well documented and accepted as facts, but they are valid only within the domains of their empirical validation, and they are subject to revision or replacement to reflect new observations. A conceptual model is likewise valid only within its domain of definition and is always subject to revision.

The Postulates of State
In addition to the empirical facts and laws of physics, the WCM's interpretation of physical state is based on the following postulates and definitions:

Postulate 1:
There are no unobservable "hidden" variables. Physical properties of state are measurable, and perfect measurement completely describes a system's physical state.

Postulate 2:
The Zeroth Law of Thermodynamics is fundamental, and this establishes temperature as a measurable property. The Zeroth Law also establishes that absolute zero temperature can never be attained.

Definition 1:
A system's total energy, E, equals the system's potential work as measured on the surroundings in the limit of absolute zero.

Definition 2:
A system's ambient temperature, Ta, equals the positive temperature of the system's ambient surroundings, with which it interacts or potentially interacts.

Definition 3:
A system's exergy, X, is defined by its potential work as measured at the ambient surroundings.

Definition 4:
A system is in its ground state when its temperature equals the ambient temperature, and its exergy equals zero. The ground state is in equilibrium with its ambient surroundings.

Definition 5:
A system's ground-state energy Qgs is the potential work of the ground state as measured on the surroundings in the limit of absolute zero.

Definition 6: System energy is defined by Esys = E-Qgs.
Definition 7: A system's ambient heat is defined by Q = Esys-X.

Definition 8:
A system's WCM entropy is defined by SWCM = Q/Ta.

Definition 9:
A process is reversible if and only if it produces no entropy.
Definition 10: Perfect measurement is a reversible transformation from a system's initial state to its ground state.
Postulate 3: Any irreversible transition produces entropy and increases the total entropy of the system and its surroundings.
The postulates, definitions, and the empirically validated fundamental laws of physics provide the logical foundation for the WCM and its physical explanations of empirical facts.
Postulate 1 is a statement about the WCM's interpretation of physical reality. Postulate 1 defines physical reality by perfect measurement. The microstate, which expresses everything measurable and knowable about a system, is therefore a complete description of the physical ontological state. "State," without qualification, will refer to both the microstate and the physical state.
Postulate 2 establishes temperature as a measurable property. The Zeroth Law of thermodynamics defines thermal equilibrium and a system's temperature by the measured temperature of the surroundings with which it is thermally equilibrated. Temperature can be measured by a conventional thermometer or by the energy spectrum of ambient photons in the surroundings.
Postulate 2 also says that absolute zero temperature can be approached, but it is an unattainable idealization. No system is perfectly isolated from its surroundings, and all systems have a potential to interact with their surroundings at a positive ambient temperature. The universe, by definition, has no physical surroundings, but it does have an ambient boundary defined by its vacuum state. The cosmic background radiation at 2.7 kelvin permeates the vacuum state's quantum electrodynamic field. This defines an ambient temperature for the universe, relevant for exchanges of heat or photons.
Postulate 2 enables definitions of ambient temperature (Ta), ground state and ground-state energy, system energy, exergy, and ambient heat (Definitions 2-7) as contextual properties of state. They are related to total energy (Definition 1) by:

E = Qgs + Esys = Qgs + X + Q. (2)

The total energy is independent of the ambient temperature, and it is non-contextual. A system's ground-state energy, Qgs, is the positive energy of the ambient ground-state reference (Definition 4), and the difference between total energy and ground-state energy is the system energy, Esys. The WCM partitions system energy into exergy (X) and ambient heat (Q). Exergy includes the kinetic and potential energies acting on a system's resolvable parts, and the internal potential energy of those parts. Ambient heat is the randomized kinetic energy of those resolvable parts. It is heat at the ambient temperature, and it has zero potential for work. The ground state is contextually defined at the ambient temperature, and the other energy components are contextually defined relative to the ambient ground state. The ground state has positive energy, but by definition, it has zero exergy, zero ambient heat, and zero system energy.
When combined with the Law of Conservation of energy, we can rewrite equation (2) in differential form as:

dE = dQgs + dEsys = dQgs + dX + dQ = 0. (3)

If the ambient temperature is fixed, then dQgs equals zero, and equation (3) expresses the conservation of energy during irreversible dissipation of exergy to ambient heat (dX = −dQ). Equation (3) also expresses conservation of energy during changes in the ambient surroundings. A change in the ambient surroundings changes the ground-state energy and redistributes the system energy, but in the limit of perfect isolation, a system's total energy does not change. Postulate 2 also enables Definitions 8 to 10. Definition 8 defines entropy by S = Q/Ta. Like ambient heat, entropy is a physical property of state, contextually defined relative to a system's ground state. The ground state, by definition, has zero entropy. Definition 9 defines reversibility by zero entropy production. Definition 10 defines perfect measurement as a reversible transformation between a system and its ground state.
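As a numerical illustration of the energy partition E = Qgs + Esys = Qgs + X + Q and the entropy definition S = Q/Ta (the numbers below are made up for the sketch; nothing here is from the paper):

```python
# Sketch with made-up numbers (not from the paper): the WCM energy
# partition of equation (2) and the WCM entropy of Definition 8.

E_total = 100.0  # total energy, Definition 1 (arbitrary units)
Q_gs    = 70.0   # ground-state energy, Definition 5
X       = 12.0   # exergy, Definition 3
Ta      = 300.0  # ambient temperature, Definition 2 (kelvin)

E_sys = E_total - Q_gs  # system energy, Definition 6
Q     = E_sys - X       # ambient heat, Definition 7
S     = Q / Ta          # WCM entropy, Definition 8

assert E_total == Q_gs + X + Q  # equation (2) balances
print(E_sys, Q, S)              # 30.0 18.0 0.06

# At fixed ambient temperature, dissipation converts exergy to ambient
# heat (dX = -dQ), leaving the total energy unchanged while S increases.
dX = -5.0
X, Q = X + dX, Q - dX
assert E_total == Q_gs + X + Q  # total energy is conserved
assert Q / Ta > S               # entropy rises irreversibly
```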
Perfect measurement (Figure 1) involves an ambient agent or device to record the system's changes during measurement. By recording the transformation process and then reversing it, a perfect ambient observer can reverse the process and restore the system to its initial state. If the same change in state occurs without a record of the changes, then information on the change is lost and entropy increases. The initial state cannot be restored, and the process is not reversible.
Figure 1. The WCM defines a system's state relative to its equilibrium ground state and within the context of its ambient surroundings. Perfect measurement is a reversible transformation from the system's state to its ground-state reference in equilibrium with its ambient surroundings. Perfect reversible measurement involves an ambient observer or measurement device to record the process of physical change in state. Reversing the process restores the initial premeasurement state.
The possibility of perfect reversible measurement is necessary for the definition of a microstate, but reversible measurement is not always possible. The Quantum Zeno effect shows that a continuously measured (and measurable) state does not change irreversibly [31]. The contrapositive of this is equally true; an irreversibly changing system is not continuously measurable. A system can be continuously measurable and reversible between irreversible transitions, and it exists as a state. But during transition a system is not continuously and reversibly measurable, and it therefore does not exist as a WCM state. It is in irreversible transition between states.
Postulate 3 is a generalization of thermodynamics' Second Law. It states that any irreversible transition produces entropy and increases the total entropy of the system and its surroundings. A reversible process has zero entropy production (Definition 9), and no process can cause a reduction of entropy:

dStotal = dSsys + dSsurr ≥ 0, (4)

where equality and the maximization of entropy apply to reversible interactions. The Second Law of thermodynamics has been thoroughly validated by empirical observations. It has not been regarded as a fundamental law of physics, however, because entropy is not a fundamental property of mechanics. Definition 8, and the postulates and definitions leading up to it, establish entropy as a physical property of state and Postulate 3 as a fundamental law of physics within the WCM. The WCM's postulates and definitions are firmly based on thoroughly validated empirical facts. The WCM is consistent with empirical observations, and it extends the objective definition of state to systems as they exist in the context of their actual ambient surroundings at positive temperatures. It defines irreversible transitions as transitions to states of higher entropy, thereby establishing the arrow of time as a fundamental law of physics.

Classical Entropy, Dissipation, and Refinement
The WCM entropy is a function of three independent variables: system temperature, ambient temperature, and a reaction progress variable, zeta (ζ). Zeta indexes the exchange of ambient heat with the surroundings during an isothermal process at the ambient temperature. The WCM resolves entropy into two components, the ambient entropy, Samb, and the entropy of refinement, Sref. These are defined in (5) and illustrated in Figure 2:

SWCM = Samb + Sref,  where  Samb = Q/Ta + ∫Ta^Tsys (Cv(T)/T) dT  and  Sref = −∫Ta^Ta,new (Cv(T)/T) dT.   (5)

Cv(T) is the temperature-dependent volumetric heat capacity, which the WCM defines as Cv(T) = dQ/dT. The negative sign for Sref is because WCM entropy is defined relative to the ambient ground state, and as Ta,new declines, Sref increases. The ambient entropy (Samb, horizontal vector in Figure 2) is the change in entropy as the system is reversibly driven from its ground state with zero exergy and entropy to a positive exergy state at fixed ambient temperature. As detailed in equation (5), this change is resolved into 1) exchanges of ambient heat as the system progresses from ζ=0 to ζ=1 at fixed temperature Ta, followed by 2) exchanges of ambient heat as the system progresses from Ta to Tsys at fixed ζ=1. Exergy increases during the process, but entropy and temperature can increase or decrease. In the limit of absolute zero ambient temperature, Q and Samb equal zero.
The entropy of thermal refinement (Sref, vertical vector in Figure 2) is a consequence of a change in the ambient reference, before the system readjusts to the change. For a system thermally equilibrated at Ta, as Ta,new approaches absolute zero, both Sref and SWCM approach the Third Law entropy of thermodynamics, S = ∫0^Ta (Cv(T)/T) dT, which also equals the classical statistical mechanical entropy. We therefore recognize the WCM entropy as a generalization of the classical statistical mechanical entropy, applicable to real systems defined with respect to a positive ambient temperature.
The WCM recognizes two distinct paths for entropy production. The first path of increasing entropy is by refinement. The concept of quantum refinement was introduced by Robert Griffiths in his Consistent Histories Interpretation [27]. Refinement results from a change in measurement framework when a single projector (potential measurement) is replaced with multiple compatible measurement possibilities. Here, we consider thermal refinement. If a system is initially prepared at equilibrium in its ground state, its entropy and exergy are zero. Differentiating entropy (5) with respect to Ta,new at fixed Ta and Tsys shows that the increase in the entropy of thermal refinement is given by:

dSref = −(Cv(Ta,new)/Ta,new) dTa,new.   (6)

A decline in ambient temperature increases the entropy of refinement. It also shifts energy from ground-state energy to system energy, and from ambient heat to exergy. These effects combine to increase the system's exergy. A positive exergy provides the drive for the second path of entropy production, as the system seeks to reestablish equilibrium with its new ambient surroundings and restore zero exergy. This is the path of dissipation:

dSQ = −dX/Ta,   (7)

where dSQ is the production of entropy due to dissipation of exergy (−dX) at a fixed ambient temperature.
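For a system with constant heat capacity, the two entropy-production paths can be checked numerically: a drop in the ambient reference from Ta to Ta,new creates both refinement entropy and a positive exergy, and fully dissipating that exergy at the fixed new ambient temperature reproduces the classical total entropy production for free relaxation between the two temperatures. The following is a minimal sketch, assuming a constant Cv and illustrative numerical values:

```python
import math

def refinement_entropy(Cv, Ta, Ta_new):
    # Entropy of refinement gained when the ambient reference drops from
    # Ta to Ta_new: the integral of Cv/T dT, with constant Cv assumed.
    return Cv * math.log(Ta / Ta_new)

def exergy(Cv, Tsys, Ta):
    # Exergy of a body at Tsys relative to ambient Ta (constant Cv, fixed volume):
    # X = Cv*(Tsys - Ta) - Ta*Cv*ln(Tsys/Ta)
    return Cv * (Tsys - Ta) - Ta * Cv * math.log(Tsys / Ta)

Cv, Ta, Ta_new = 1.0, 500.0, 300.0
S_ref = refinement_entropy(Cv, Ta, Ta_new)  # entropy created by refinement
X = exergy(Cv, Ta, Ta_new)                  # exergy created by the ambient drop

# Dissipating X at the fixed ambient Ta_new produces entropy X/Ta_new, which
# matches the textbook entropy production for free relaxation from Ta to Ta_new:
S_diss = Cv * (Ta - Ta_new) / Ta_new + Cv * math.log(Ta_new / Ta)
print(S_ref, X, X / Ta_new, S_diss)
```

Both S_ref and S_diss are positive, illustrating that refinement and dissipation each increase total entropy.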
The thermodynamic free energy, F, is closely related to exergy, but it is defined relative to the system's local, and generally variable, temperature, and it is not a contextual property of state. The state of ultimate stability is the equilibrium ground state, defined by thermal equilibrium with the ambient surroundings and zero exergy and free energy. A system can also exist as a metastable state with positive exergy. A metastable state is non-equilibrium, and it has a potential to irreversibly dissipate its exergy and approach equilibrium. The thermodynamic stability of nonequilibrium metastable states has been investigated by Glansdorff and Prigogine [32], among others.
Refinement, commonly associated with a declining ambient temperature (6), and dissipation at a fixed ambient temperature (7) describe two distinct paths leading to the irreversible production of entropy and the thermodynamic arrow of time.

Von Neumann Entropy and Actualization
The entropy for a quantum system is given by the von Neumann entropy [33]. It can be expressed by:

SvN = −Σi ηi log(ηi),   (8)

where ηi is the probability of eigenstate i following the collapse of a pure (zero-entropy) superposed quantum state to one of its eigenstates. Entropy expresses the uncertainty of the state after wavefunction collapse. After observation of the collapsed state, there is no uncertainty, and the system again exists as a pure state with zero entropy. Interpreted this way, the von Neumann entropy, like the classical statistical mechanical entropy, is a subjective measure of uncertainty of a system's actual state. The WCM, in contrast, interprets the von Neumann entropy as an objective property of a quantum state.
To illustrate the WCM interpretation of von Neumann entropy, we consider the classic example of a quantum particle confined to a one-dimensional infinite-potential well. The physical system is illustrated in Figure 3A, and the quantized energy levels for the particle are illustrated in Figure 3B. The energies for the particle's eigenfunctions are given by [34]:

En = n²h²/(8mL²),   (9)

where n is the quantum number, h is Planck's constant, m is the particle's mass, and L is the length of the one-dimensional configuration space over which the wavefunction is defined. From Postulate 2 and the definition of Boltzmann's constant, kB, each energy level can be assigned a temperature by:

Tn = En/kB.   (10)

Figure 3C shows the probability distribution function (PDF) for each energy level. The PDFs are defined over the configuration space of positions. Positions are statistically and irreversibly measured in the limit of zero thermal noise and infinite resolution. Each PDF corresponds to the particle's energy eigenfunction, except that the coefficients for its position eigenstates are squared to reflect each position's statistical probability, in accordance with the Born rule. The PDF outside the well is zero, and the area under each PDF within the interval is one, meaning that a measurement of the particle's position would find it somewhere within the interval L with 100% probability. PDF3 describes the statistical results of position measurements for a particle with energy E3. PDF3 has three equal humps spanning the particle's configuration space. This means that measurements would find the particle's position statistically distributed evenly among three distinct intervals, with zero probability at the intervals' edges.
If the particle's energy is E3, then the particle's temperature equals T3. If the ambient temperature also equals T3, then the particle is in its equilibrium ground state. From Postulate 1 (no hidden variables) and the definition of perfect measurement (Figure 1), PDF3 expresses everything that is measurable and knowable about the particle. PDF3 therefore defines the particle as a single three-humped microstate spanning the particle's configuration space. With a single available microstate and eigenfunction, the particle's von Neumann entropy is trivially equal to zero.
We next consider the case in which the particle's total energy is again E3, but we lower the ambient temperature to T1. Reducing the particle's ambient temperature immediately reduces its ground-state energy to E1. Lowering the ambient temperature also fine-grains the particle's configuration space into three microstate potentialities, each spanning a length of L/3 with quantum number n=1. From equation (9), the total energy is still E3, and PDF3 still describes the statistical results of position measurements, but it no longer describes a single microstate.
With its lowered ambient temperature, the particle is metastable. It has a positive exergy, expressing its potential to transition to a new equilibrium ground state, and it has a positive entropy, expressing the objective randomness of selecting and actualizing one of its potential microstates. If each potential microstate has equal probability of being actualized, then η1 = η2 = η3 = ⅓, and from (8), SvN = log(3). Once the system does randomly actualize a new microstate, the particle is confined to a single L/3 interval, and the system returns to a definite physical state of zero entropy, whether that interval is known or not. The von Neumann entropy is therefore an objective property of state within the WCM.
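The numbers in this example are easy to verify. The sketch below evaluates the square-well energies of (9), the assigned temperatures of (10), and the von Neumann entropy of the three equally probable refined microstates; the particle mass (an electron) and the well length (1 nm) are arbitrary illustrative choices:

```python
import math

h  = 6.62607015e-34    # Planck's constant (J*s)
kB = 1.380649e-23      # Boltzmann's constant (J/K)
m  = 9.1093837015e-31  # electron mass (kg), an illustrative choice
L  = 1e-9              # 1 nm well, an illustrative choice

def E(n):
    # Square-well energy eigenvalues, eq. (9): E_n = n^2 h^2 / (8 m L^2)
    return n**2 * h**2 / (8 * m * L**2)

def T(n):
    # Temperature assigned to each energy level, eq. (10): T_n = E_n / kB
    return E(n) / kB

# With the particle at E3 and the ambient lowered to T1, the configuration
# space is refined into three equally probable L/3 microstates:
eta = [1/3, 1/3, 1/3]
S_vN = -sum(p * math.log(p) for p in eta)  # von Neumann entropy, eq. (8)
print(E(3) / E(1), S_vN)                   # E3 = 9*E1; S_vN = log(3)
```

The entropy log(3) is independent of the particle's mass and the well's length; only the equal probabilities of the three refined microstates matter.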
The objective randomness of actualizations is key to resolving the measurement problem. The radioisotope in Schrödinger's cat experiment exists as a metastable state with respect to ambient surroundings much cooler than the surroundings of its creation. The particle can momentarily exist in spontaneous transition between measurable states, but during the irreversible transition of actualization, it is not reversibly measurable, and it does not exist as a WCM state. At no time does the particle or its entangled cat exist as part of a physically superposed state. There is only the irreversible and random transition of the system from an initial metastable state to a new state of higher stability. The measurement problem thereby vanishes.

The Classical Mechanical State
To illustrate the WCM's contextual interpretation of a classical mechanical state, we consider an ideal classical gas prepared at equilibrium with ambient surroundings at 500K. Measurement of the gas's temperature, pressure, and volume determines its total energy and defines its thermodynamic macrostate (Table 1, top row, left column). The thermodynamic macrostate and total energy do not depend on the ambient temperature, and they are non-contextual. The gas's WCM microstate, in contrast, is contextually defined by the thermodynamic macrostate plus the system's contextually defined WCM entropy and energy components, listed in the bottom five rows of Table 1. By Postulate 1, the WCM microstate completely defines the classical mechanical physical state. Ei is the total energy of the quantum mechanical eigenstate ψi.
(1) Contextuality only considers ambient temperature, not relative velocity, which would affect kinetic energy.
(2) Cooling to the ambient temperature brings the system to its ambient ground state, so Samb=0.
Simply reducing the system's ambient temperature immediately changes the gas's contextual properties, as shown in Table 1. If we insulate the gas and lower its ambient temperature from 500K to 300K, its system energy and exergy increase. The gas's state is metastable; it has a potential to approach its new ambient ground state at 300K, but the process is suspended by the insulation.
Perfect reversible measurement of the metastable gas from the new ambient temperature involves extracting and reserving energy (e.g., by employing a reversible heat engine and thermal reservoir) until the gas reaches 300K. The process exchanges entropy between the gas and its ambient surroundings, but it is reversible, and no entropy is produced. Reversing the measurement process uses the reserved energy to pump ambient heat back, restoring the gas's 500K state as it contextually exists at the 300K ambient temperature.

The Quantum Mechanical State
To illustrate the WCM's contextual interpretation of the quantum mechanical state, we switch from describing a gas of essentially inert particles to describing the particles' internal states as they interact with ambient (black body) photons. We consider an ensemble of hydrogen atoms prepared in equilibrium with ambient photons at 6000K, as it exists with respect to a 300K ambient temperature. The 300K ambient temperature is sufficiently low that the hydrogen atoms' energy levels are measurable. The 6000K preparation temperature is below the range for ionization of hydrogen (7,000-10,000K), but high enough that multiple energy levels are occupied, so the atom's energy state is described as a superposed wavefunction.
Measurements at 300K would reveal a statistical distribution of discrete and measurable eigenstates, ψi, as expressed by the ensemble's superposed wavefunction, Ψ (Table 1, top row, right). Individual eigenstate energies are quantized and independent of temperature, but their complex weighting coefficients, ci, do depend on the ensemble's temperature of equilibration. The superposed wavefunction's time-averaged energy consequently depends on the system temperature, but it is independent of the ambient temperature, and it is therefore non-contextual. The bottom five rows of Table 1 define time-averaged contextual quantum properties with respect to the ambient surroundings. Postulate 1 and the definition of perfect measurement establish these contextual properties as objective properties of state.
The Copenhagen Interpretation interprets a superposed wavefunction as a microstate and a complete description of the physical state. In contrast, the WCM generally interprets the wavefunction as a macrostate and an incomplete description of the system's instantaneous physical state, as it randomly fluctuates amongst its measurable energy states. The macrostate's average energy is 〈E(T)〉 = Σi |ci(T)|² Ei, where the |ci(T)|² are the probabilities that a hydrogen atom exists in the measurable eigenstate ψi at any given instant (independent of actual measurement), and Ei is its energy. The WCM's physical state at any instant is completely defined by the wavefunction macrostate together with the instantaneous measurable values for the contextual properties listed and defined in Table 1.
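As a numerical illustration of the macrostate average, the sketch below evaluates 〈E〉 = Σi |ci|² Ei for hydrogen's first three levels. The weighting probabilities here are invented purely for illustration and are not derived from any particular equilibration temperature:

```python
# Hydrogen eigenstate energies (eV): E_n = -13.6 / n^2
E = {n: -13.6 / n**2 for n in (1, 2, 3)}

# Illustrative probabilities |c_i|^2 for a superposed wavefunction; the
# actual values would depend on the ensemble's equilibration temperature.
c2 = {1: 0.90, 2: 0.07, 3: 0.03}
assert abs(sum(c2.values()) - 1.0) < 1e-12  # probabilities sum to one

# Time-averaged (non-contextual) macrostate energy: <E> = sum |c_i|^2 * E_i
E_avg = sum(c2[n] * E[n] for n in c2)
print(E_avg)
```

The average lies between the most and least probable eigenstate energies, as a probability-weighted mean must.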
A superposed wavefunction is generally an incomplete macrostate description of the randomly fluctuating physical state. For an ambient temperature of 6000K, however, the hydrogen ensemble is in its ambient ground state, all contextual properties equal zero, and ambient fluctuations are not a measurable property. The WCM therefore models the ground-state's superposed wavefunction as a complete description of the system as it exists in equilibrium with the ambient surroundings. The CI and WCM both interpret the wavefunction for a system in its ground state as complete, and the WCM recognizes the Copenhagen interpretation of the wavefunction as a special case for equilibrium systems.
The wavefunction macrostate has a fixed time-averaged energy, but if its energy does not correspond to an energy eigenfunction, the system randomly fluctuates amongst its energy eigenstates. If the system has a positive exergy and is metastable, then the fluctuations are measurable. In their book, The End of Certainty, Prigogine and Stengers document mechanical instabilities that can amplify quantum fluctuations to macroscopic and measurable scales [35,36]. Their work, interpreted within the WCM, establishes a physical explanation for objective macroscopic randomness and novelty. Randomness is not sourced in the ambient surroundings; it is internal and intrinsic to a metastable quantum system fluctuating among its energy eigenstates. This resolves the question of objective randomness.

The Two Components of System Time
The WCM recognizes two fundamental and distinct components of time. Thermodynamic time describes the irreversible production of entropy. Mechanical time describes the reversible and deterministic change of a system during intervals between transitions, while it exists as a state.
Thermodynamic time records a system's irreversible production of entropy due either to the dissipation of exergy or to refinement. The exergy for a first-order kinetic system is given by:

X(tq) = Xo exp(−λtq),   (11)

where Xo is the initial exergy, λ is a dissipation rate constant, and tq is the real-valued thermodynamic component of system time. Equation (11) describes, for example, the dissipation of exergy during macroscopic radioactive decay. At time zero, the system's exergy equals its initial exergy, Xo, and as time advances toward infinity, the system approaches zero exergy at equilibrium. Thermodynamic time is an objective contextual property of state and a logical consequence of the WCM postulates of state. As a contextual property, it is incompatible with, and ignored by, NCM interpretations.

Mechanical time in relativity is defined by a coordinate on the time axis in 4D spacetime. Mechanical time is conventionally defined as a real-valued coordinate, but this is merely a matter of convention. The WCM adopts a different convention, replacing the notation for real-valued time t with the mathematically equal −i(itₘ), where i is the square root of negative one and itₘ is the coordinate of imaginary mechanical time. The WCM changes mechanical time to an imaginary parameter, but it leaves all equations of mechanics unchanged. For example, the WCM expresses the time-dependent quantum wavefunction for an isolated (fixed energy) and metastable (non-reactive) quantum system by:

Ψ(x,tₘ) = ψ(x) exp(−E(itₘ)/ħ).   (12)

Except for the change in the function's argument for time, equation (12) is identical to the conventional expression for the system's time-dependent wavefunction.

System Time: Equation (11) describes the continuous dissipation for a many-particle thermodynamic system. In the quantum limit, dissipation by an unstable positive-exergy particle is discontinuous. Periods with no dissipation mark intervals during which the particle exists as a well-defined and reversibly measurable metastable state.
Its time evolution is indexed by a reversible time coordinate. At some point, however, the particle irreversibly transitions and actualizes a more stable state. An irreversible transition marks an interval of entropy production and an irreversible advance in thermodynamic time. A metastable particle therefore requires both mechanical time and thermodynamic time to describe its behavior. The WCM recognizes system time as a complex property of state, comprising both real-valued thermodynamic time and imaginary mechanical time. System time is represented by a point on the complex system-time plane (Figure 4A). A change over imaginary mechanical time (vertical axis) conserves exergy and describes the reversible and deterministic changes between irreversible transitions or measurements, within a single instant of thermodynamic time. A change over real thermodynamic time (horizontal axis) describes an irreversible and random transition to a more stable state and the production of entropy.

Figure 4A shows the complex system-time plane, spanned by real-valued thermodynamic time (horizontal axis) and imaginary mechanical time (vertical axis). Figure 4B shows the irreversible advance in an observer's reference time during changes in system time. Δtr1 and Δtr2 are advances in reference time during irreversible transitions. The intervals between transitions mark the advance of reference time during reversible changes in mechanical time.
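The first-order dissipation of equation (11) is straightforward to evaluate over thermodynamic time. The sketch below sets the rate constant from carbon-14's 5730-year half-life purely as an illustration:

```python
import math

def exergy_decay(X0, lam, tq):
    # Eq. (11): first-order dissipation of exergy over thermodynamic time tq
    return X0 * math.exp(-lam * tq)

X0 = 100.0                      # initial exergy (arbitrary units)
lam = math.log(2) / 5730.0      # rate constant from a 5730-year half-life (C-14)

print(exergy_decay(X0, lam, 0.0))     # at tq = 0: full initial exergy, 100.0
print(exergy_decay(X0, lam, 5730.0))  # after one half-life: 50.0
```

As tq grows without bound, the exergy decays toward zero and the system approaches equilibrium, matching the description of equation (11) above.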

Reference Time
System time, whether it proceeds reversibly or irreversibly, is empirically measured by the advance of reference time, tr, as recorded by an observer's reference clock ( Figure 4B). Counting oscillations and recording memories is an inherently irreversible process [37]. Reference time provides the empirical time scale across which a system's events are recorded, and it marks the continuous and irreversible "flow" of an observer's time.

Space, Time, and Nonlocality
We can now consider how quantum nonlocality coexists with relativity. We first consider simultaneous measurements of entangled photons at points A and B by Alice and Bob (Figure 5), using parallel vertically polarized filters. When the entangled photon pair is produced, it has zero spin and no net polarization. Experiments show that if Bob measures a vertically polarized photon, then Alice measures a horizontally polarized photon, even when the measurements are physically separated and simultaneous. Individual measurement results are random, but they are strictly and instantaneously correlated: if the polarizers at A and B are parallel, then the measurement at one ensures that the measurement at the other is anticorrelated. The instantaneous correlation of physically separated measurements at points A and B, outside of each other's light cones, graphically illustrates the nonlocality of the photon pair's correlated measurements. Einstein famously referred to nonlocal correlations as spooky action at a distance.
As discussed in section 1.3, any interpretation that denies superdeterminism must accommodate nonlocality. The WCM accommodates nonlocality and explains its coexistence with relativity by recognizing system time as comprising both thermodynamic time and mechanical time. Measurement of the entangled photon pair is intrinsically irreversible. The first measurement, whether at A or B, irreversibly actualizes the photon pair's transition to photons with random, but definite and anticorrelated, polarizations. At point B, Bob reversibly records his definite measurement result and transmits it to Alice via a signal photon polarized with the orientation that he measured. Alice reversibly records her photon's measurement result, and based on her results, she knows the orientation of Bob's entangled photon and his signal photon. The light cone and Bob's signal photon both reach Alice at point C. Knowing the signal photon's orientation, Alice reversibly measures it and confirms the correlation of their results.
Alice's observation of her measurement at A, Bob's observation and transmission of his measurement at B, and Alice's measurement of Bob's signal photon at C are all reversibly conducted over mechanical time. Reversibility means no entropy production. With no change in entropy, mechanical time is not just reversible; it is also time symmetrical. With time symmetry, asserting that an initial event causes a future event and asserting that a future event causes the initial event are equally valid. This expresses the idea of retrocausality [38,39], and it exists over time-symmetrical mechanical time. The time-symmetry of recording and transmitting the measurement results creates a deterministic chain of causality and retrocausality within a single instant of thermodynamic time, represented in Figure 5 by A↔C↔B. The photons at A and B are entangled by virtue of the deterministic link connecting them. No hidden variables or spooky action is required to explain the deterministic and nonlocal correlation of measurements at A and B.
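The parallel-polarizer statistics described above are simple to mimic numerically. The following is a toy sketch, not a derivation: each measurement event actualizes a definite, random polarization for Alice's photon and the strictly anticorrelated polarization for Bob's:

```python
import random

def actualize_pair(rng):
    # On (irreversible) measurement, the pair actualizes definite,
    # individually random but strictly anticorrelated polarizations.
    alice = rng.choice(("V", "H"))
    bob = "H" if alice == "V" else "V"
    return alice, bob

rng = random.Random(42)  # fixed seed for reproducibility
trials = [actualize_pair(rng) for _ in range(10_000)]

# With parallel analyzers the individual results are random...
frac_alice_V = sum(a == "V" for a, _ in trials) / len(trials)
# ...but perfectly anticorrelated, pair by pair:
anticorrelated = all(a != b for a, b in trials)
print(frac_alice_V, anticorrelated)
```

The fraction of vertical results hovers near one half, while the pairwise anticorrelation is exact, mirroring the experimental pattern for parallel polarizers.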
We next consider measurements using randomly oriented polarizers. If Bob's and Alice's analyzers are no longer parallel, the system's contextual framework changes. Alice cannot know Bob's result based on her own results, and she cannot reversibly measure his signal photon. The time-symmetry link of causality and retrocausality connecting their measurement results is broken, and the photons at A and B are no longer entangled. The observed statistics of measurement results reflect local measurements on photons with definite anticorrelated but random polarizations, consistent with Bell's theorem. Again, no hidden variables or spooky action is required to explain the observed results.
The right-hand axis of Figure 5 shows the record of Alice's measurement events at points A and C, as measured by her reference clock. Even when the events at A, B, and C are correlated within an instant of thermodynamic time, the reference clock continues to mark the irreversible passage of reference time. Alice experiences the irreversible passage of time between recording her measurement at time trA and her recording of Bob's measurement at trC. The irreversible flow of an observer's reference time and the empirical constraints of relativity preclude the superluminal exchange of information between observers across their reference time. The WCM thus explains the mechanical details of nonlocality and shows how relativity and quantum nonlocality compatibly coexist without spooky action or hidden variables.

The WCM Interpretation of Process
Up to this point, we have focused on states and their transitions. We have described the relative stabilities of states and the potential for a metastable state to transition irreversibly to a state of lower exergy. In this section, we shift from the description of systems' physical states and transitions to the description of systems' processes of dissipation. The WCM establishes a contextual foundation for defining nonequilibrium dissipative processes and their self-organization into increasingly complex dissipative structures, driven by the dissipation of exergy. Figure 6 extends the WCM model for states (Figure 1) to a stationary dissipative system. The model assumes a stationary but nonequilibrium environment: the system's ambient surroundings include one or more sources of exergy or high-exergy material components. A system with stationary exergy sources and an environment for wastes will converge over time to a stationary process of dissipation. The system is stationary, but it is not microscopically static, and it is not an actual state, as its components are in constant flux, dissipating exergy. We refer to a stationary dissipative system as a homeostate.

The Dissipative Homeostate
For a near-equilibrium system, energy flow is proportional to gradients. Fourier's Law, Fick's Law, and Ohm's Law express the linearity of fluxes and gradients in potential for heat conduction, chemical diffusion, and electrical flow, respectively. Linearity defines the near-equilibrium regime. A near-equilibrium system converges to a unique steady-state dissipative process consistent with its steady-state boundary constraints and conservation laws.

Figure 6. Homeostate Model. The system's nonequilibrium surroundings include exergy source(s), either directly (e.g., sunlight) or indirectly, via high-exergy components. Ambient resources in the surroundings are also freely available to the system for processing and discharge. At homeostasis, time-averaged inputs of materials and energy equal outputs.
Far from equilibrium, linearity breaks down. At a critical temperature gradient, for example, heat flow dramatically increases as a fluid spontaneously reorganizes itself from conduction to convection. Non-linearity can allow multiple dissipative solutions and multiple homeostates, all consistent with the system's boundary constraints and conservation laws.
We can express a homeostate as a network of links, pods, and dissipative nodes. Figure 7 shows the dissipative network for the Brusselator reaction [23]. The Brusselator is a theoretical model for an autocatalytic chemical reaction exhibiting oscillations. It has two sources, one for component 1 in state A and one for component 2 in state B. Links are the pathways for energy and components to flow from external sources, through the system, and back to the surroundings. Pods provide transient storage capacity for components to accommodate fluctuations in flow rates. Nodes represent irreversible transitions of components from one state to another. All dissipation within the system is assigned to nodes. Elementary nodes and transitions are contextually defined by perfect measurement at the ambient temperature, and they have no internal details. A node's exergy is not necessarily fully dissipated, however; in Figure 7, some of the exergy is transferred to node 3A via an energy link (wavy arrow). Node 3A is an endergonic transition. An endergonic node utilizes an exergy supply to lift a component "uphill" to a higher-exergy state. Node 3A lifts component 1 from state X to state Y.
Transition rates, based on simple reaction rate theory, are shown by the arrowhead expressions in the figure. The kinetic rate coefficient for the forward direction is set to unity, and the reverse direction, assumed to be much slower, is set to zero. So, for example, the transition rate for R3 (B+X→D+Y) is simply BX. Reaction R2 is an autocatalytic transition of component 1 from state Y to state X; autocatalysis means that the product participates in the reaction that produces it. For reaction R2 (Y+2X→3X), the transition rate is X²Y, making the Brusselator non-linear.
Setting the net production rates of X and Y to zero yields the steady-state concentrations X = A and Y = B/A. For B > 1 + A², the steady-state homeostate is unstable to perturbations. Any perturbation from the steady state sends the system on a transient path that converges to a stationary periodic homeostate, in which the concentrations of X and Y cycle. Steady-state and periodic homeostates are graphically illustrated in Figure 8 as attractors [40]. The steady-state homeostate is represented by the fixed-point attractor, and the oscillating homeostate is represented by the limit-cycle attractor. A homeostate can also be chaotic, represented by a strange attractor. In all cases, an attractor is a fixed and bounded trajectory in state-space, and it represents a stationary homeostate.
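The Brusselator's instability and limit cycle can be reproduced with a few lines of numerical integration. The sketch below uses forward-Euler stepping (the step size, parameters A=1 and B=3 satisfying B > 1 + A², and initial perturbation are illustrative choices):

```python
# Euler integration of the Brusselator rate equations:
#   dX/dt = A - (B+1)*X + X^2*Y
#   dY/dt = B*X - X^2*Y
A, B = 1.0, 3.0          # B > 1 + A^2, so the fixed point X=A, Y=B/A is unstable
X, Y = A + 0.01, B / A   # small perturbation away from the steady state
dt, steps = 0.001, 60_000

trace = []
for i in range(steps):
    dX = A - (B + 1) * X + X * X * Y
    dY = B * X - X * X * Y
    X += dX * dt
    Y += dY * dt
    if i > steps // 2:   # record X only after the initial transient
        trace.append(X)

# The perturbation grows and the concentrations settle onto a limit cycle:
# X keeps oscillating with a spread well away from the fixed point X = A.
print(min(trace), max(trace))
```

Setting the initial condition exactly at X = A, Y = B/A makes both derivatives vanish, confirming the steady state; the perturbed run demonstrates its instability and the convergence to the periodic homeostate.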

The Constructive Power of Dissipation
Dissipation can be much more than the dissipation of exergy into waste heat; it is the driver for all change, constructive as well as destructive. Nature is replete with examples in which systems become increasingly organized and evolve toward higher organization [23]. These changes occur within far-from-equilibrium systems that are sustained by external exergy sources.
Lord Kelvin recognized the constructive power of dissipation in an article he wrote in 1862 [41]. He began by describing heat death, when all directed activity ceases, as the inevitable end-result of dissipation within a finite universe. He then proceeded to express a much deeper and overlooked idea. Backing off on the inevitability of heat death, he continued that the universe is in a state of "endless progress…involving the transformation of potential energy into palpable motion and hence into heat." In essence, he asserted that a system tends to defer dissipation by first utilizing exergy for palpable work, before eventually dissipating it into heat.
When Lord Kelvin stated this idea, classical mechanics was well entrenched in physical thought. Kelvin's idea was incompatible with classical mechanics, so it never gained a foothold and was ignored. His idea is fully compatible with the WCM, however, and we formalize his insight with Postulate 4 and Definition 11.

Definition 11: The internal work rate, 〈Ẇ〉, is the measurable time-averaged rate of exergy increases of a system's components.

Postulate 4 (Kelvin Selection Principle):
Of the multiple paths by which a system can take exergy from a source and dissipate it to the surroundings, the path that maximizes the system's rate of internal work is the most stable.
The Kelvin Selection Principle (KSP) is analogous to the Second Law of thermodynamics, but whereas the Second Law describes the selection and relative stability of states based on entropy, the KSP describes the selection and relative stabilities of dissipative processes based on internal work. A system's internal work is contextually defined by measurable increases in exergy.
Postulate 4 states that the available pathway with the highest rate of internal work is the most stable. A familiar real-world illustration of the KSP is the stability of convection over conduction. Heat added to the base of a convecting liquid does the work of thermal expansion needed to maintain the fluid's density gradients; this is the internal work on the liquid necessary to sustain convective flow. Heat added to a static fluid, in contrast, is completely dissipated by conductive heat flow, without doing any measurable work. The KSP therefore says that if boundary and system constraints allow both convection and conduction, convection is the more stable homeostate. Observations invariably show this to be the case.
A more revealing illustration of the KSP is the Miller-Urey experiment. Stanley Miller and Harold Urey took a mixture of gases, which they believed represented Earth's primordial atmosphere, and they stimulated it with electrical sparks to simulate lightning [42]. When they analyzed the system afterward, they found that the gas molecules had reorganized themselves into a variety of amino acids. The gas mixture started in a low-exergy near-equilibrium state and it ended up in a high-exergy far-from-equilibrium state. The sparks added exergy to the gas mixture, but instead of directly dissipating the exergy to heat, the gas mixture deferred dissipation and utilized it to do work of creating high-exergy amino acids.
In the case of convection, perturbation analysis shows that, given a heat source and an unstable temperature gradient, a random perturbation will send the fluid down any one of many deterministic paths, all leading to convection. Starting with an equilibrium mixture of gases, however, producing amino acids by random selection would seem extraordinarily unlikely. Yet, the Miller-Urey experiment is repeatable with similar results each time. The KSP offers an alternate explanation, in which amino acid synthesis occurs through a sequence of incremental steps. Each step selects from multiple possibilities, not based on random selection, but instead guided by the KSP. The end-result of successive increments of internal work is the creation of high-exergy amino acids.
The WCM recognizes two common paths toward a higher internal work rate: 1) increasing the net rate of exergy supply (RXS) and 2) increasing functional complexity. The rate of exergy supply is given by:

RXS = Ẋ = Σᵢ Jᵢ X̄ᵢ = ⟨Q̇⟩ (13)

where Ẋ and Jᵢ are the rates of exergy and component flows into or out of the system and X̄ᵢ is the specific exergy of component i. The last equality states that for a stationary homeostate, the rate of net exergy supply equals the average rate of dissipation ⟨Q̇⟩.
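As a minimal numerical sketch of this tally, the net rate of exergy supply in equation (13) is the sum of component flow rates weighted by their specific exergies. All flow values below are hypothetical.

```python
# Net rate of exergy supply (RXS) for a stationary homeostate,
# per equation (13): RXS = sum_i J_i * Xbar_i. Values are hypothetical.
flows = [
    # (component, J_i [mol/s], Xbar_i [J/mol]); positive J = inflow
    ("fuel in",     +2.0, 50.0),
    ("product out", -1.5, 20.0),
    ("waste out",   -0.5, 10.0),
]

rxs = sum(J * Xbar for _, J, Xbar in flows)

# For a stationary homeostate this net supply equals the average
# rate of dissipation <Q-dot>.
print(f"RXS = {rxs:.1f} W")  # → RXS = 65.0 W
```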
Perhaps the simplest way to increase RXS is by expanding or replicating a dissipative process, given an expandable exergy source. Given sufficient resources, this path pushes a dissipative system or population of dissipators to expand, thereby increasing both its net RXS and internal work rate.
The second path toward a higher rate of internal work is increasing functional complexity. The term complexity is commonly associated with the amount of information required to specify a system's state. We are not interested here in the complexity of a state, however, but in the complexity of a system's dissipative process. If, as proposed by Lord Kelvin, a system defers dissipation by doing palpable work on some other dissipative system, then that second system could likewise defer dissipation. The recursive deferral of dissipation to sustain other dissipative systems leads to an expanding network of dissipative nodes of increasing interconnectedness and organization. This idea precisely expresses the concept of functional complexity. We formally define functional complexity as the ratio of internal work (Definition 11) to net exergy supply (equation 13):

Definition 12: Functional Complexity is defined by the ratio Ẇint / RXS, the rate of internal work divided by the net rate of exergy supply.
Functional complexity is a measurable and well-defined property of a homeostate's dissipative process.
For a single pass of exergy and a single endergonic node, functional complexity can approach unity for perfect efficiency and no dissipation. However, a homeostate can increase its functional complexity well beyond unity by reprocessing and recycling exergy via feedback loops or by sustaining a network of endergonic nodes. Figure 9 illustrates a simple feedback loop resulting in functional complexity greater than unity. Feedback loops are ubiquitous within biological systems, from cells to ecosystems, leading to higher functional complexity. Figure 9 could also illustrate a high-exergy system or organism sustained by a low-exergy environment and stabilized by the KSP.
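A minimal sketch of how a feedback loop pushes functional complexity past unity, assuming a simple geometric-series model of recycling: each pass converts a fixed fraction of processed exergy into internal work, and a fixed fraction is fed back for another pass. The function name and parameters are ours, not the paper's.

```python
# Functional complexity C = W_int / RXS (Definition 12) for a single
# endergonic node with a feedback loop. Hypothetical efficiencies.
def functional_complexity(rxs, efficiency, recycle_fraction):
    """Functional complexity from repeated passes of the same exergy.

    Each pass converts `efficiency` of the processed exergy into
    internal work; `recycle_fraction` of the processed exergy is fed
    back for another pass, so the total processed exergy is the
    geometric series rxs / (1 - recycle_fraction).
    """
    processed = rxs / (1.0 - recycle_fraction)
    w_int = efficiency * processed
    return w_int / rxs

# Single pass: C stays below unity (bounded by efficiency).
print(functional_complexity(10.0, 0.9, 0.0))   # → 0.9
# Feedback loop recycling half the exergy: C exceeds unity.
print(functional_complexity(10.0, 0.9, 0.5))   # → 1.8
```

As the recycle fraction approaches 1 (perfect recycling), the functional complexity grows without bound, consistent with the later claim that recycling has "no upper bound in the limit of perfect efficiency."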

Whirlpools and Entropy Production
We commonly observe whirlpools, indicating they can be more stable than radial flow of water directly toward the drain. A whirlpool provides an important counterexample to proposals that processes are stabilized by maximizing the rate of dissipation or entropy production. The Maximum Entropy Production Principle (MEPP) and related proposals have had success in a number of areas, but they are not universally applicable [43][44][45].
The MEPP is equivalent to maximizing the rates of net exergy supply and dissipation. This is one of the paths toward maximizing internal work rate, but it ignores functional complexity. The centrifugal force of a whirlpool's circulation lowers the water level and pressure over the drain. This actually reduces the rate of water discharge. A stationary whirlpool therefore has lower rates of water and exergy supply, a lower rate of dissipation (from equation 13), and a lower rate of entropy production. The key to a whirlpool's stability turns out to be its higher functional complexity.
To model a whirlpool's functional complexity, we consider a 50 cm diameter cylindrical container of water partitioned into concentric shells radiating outward from a central drain (Figure 10). Water in each shell is modeled with uniform pressure and kinetic energy.
Water in the outermost shell is maintained at a constant 20 cm depth. The rate of drainage is proportional to the square root of the pressure of water over the drain, taken as the height of the water column in the central core. At each interface, a component of water transitions to a zone of lower pressure, higher velocity, and higher kinetic energy. This applies to both the radial-flow and whirlpool models. The surface water contour for the whirlpool is determined by conserving angular momentum and balancing hydrostatic and centrifugal forces. Each concentric shell is a single zone with uniform pressure (water elevation) and fluid speed; arrows in the figures illustrate fluid flow directions only. Speed is constant within each zone, but the radial speed increases inward in both cases because water is incompressible. In addition, conservation of angular momentum requires that the whirlpool's rotational velocity is inversely proportional to the radial distance, so it too increases inward. Figure 11 shows a detail of the network model for the transition between two zones in the whirlpool model.

Figure 11. Paired Nodes at Whirlpool Zonal Interface. As water flows across a zonal interface, it undergoes both a decline in pressure and an increase in velocity and kinetic energy. An elementary node represents only a single transition, so each interface has two nodes. The first node is exergonic: it transfers exergy at a rate Ẋ = JΔP to endergonic node 2. Node 2 uses this exergy for the internal work of accelerating the water, given by Ẇ = Ẋ − Q̇ = ½ΔV²J, where Q̇ is the rate of dissipation by node 2. The internal work for the system is summed over all interfaces.
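The concentric-shell comparison can be sketched numerically. The shell count, discharge rates, and rim swirl speed below are illustrative assumptions of ours; the sketch shows the qualitative contrast, not Table 2's exact values.

```python
import math

# Sketch of the concentric-shell model (Figures 10-11): each zonal
# interface does internal work of accelerating the water at a rate
# W = 1/2 * dV^2 * J, summed over all interfaces. Parameters assumed.
rho, g = 1000.0, 9.81              # water density (kg/m^3), gravity (m/s^2)
R, h0, r_core = 0.25, 0.20, 0.01   # rim radius, rim depth, core radius (m)
radii = [R * (r_core / R) ** (i / 20) for i in range(21)]  # shells, rim -> core

def internal_work_rate(J, speeds):
    """Sum 1/2 * (V_in^2 - V_out^2) * J over all zonal interfaces."""
    return sum(0.5 * (v_in**2 - v_out**2) * J
               for v_out, v_in in zip(speeds, speeds[1:]))

# Radial flow: speed set by incompressibility, V = Q / (2*pi*r*h0).
Q_radial = 1.0e-4                  # volumetric discharge (m^3/s), assumed
v_radial = [Q_radial / (2 * math.pi * r * h0) for r in radii]
W_radial = internal_work_rate(rho * Q_radial, v_radial)

# Whirlpool: conserved angular momentum gives V = V_rim * R / r, which
# dominates near the core; discharge is lower because the vortex
# depresses the water column over the drain.
Q_whirl, v_rim = 0.8e-4, 0.05      # assumed
v_whirl = [v_rim * R / r for r in radii]
W_whirl = internal_work_rate(rho * Q_whirl, v_whirl)

# Functional complexity (Definition 12): internal work rate / RXS,
# with RXS taken here as the gravitational exergy flux rho*g*h0*Q.
C_radial = W_radial / (rho * g * h0 * Q_radial)
C_whirl = W_whirl / (rho * g * h0 * Q_whirl)
print(f"radial:    W_int = {W_radial:.2e} W, C = {C_radial:.2e}")
print(f"whirlpool: W_int = {W_whirl:.2e} W, C = {C_whirl:.2e}")
```

Even with these rough numbers, the whirlpool's internal work rate and functional complexity exceed the radial flow's by orders of magnitude despite its lower discharge, which is the pattern the KSP predicts.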
For radial flow, the surface water contour is determined by energy conservation, with increasing kinetic energy toward the drain offset by lower potential energy and water column height. Viscous dissipation outside of the central core is negligible for radial flow and is ignored.

Figure 12 shows profiles for the radial-flow and whirlpool homeostate models. For radial flow (solid lines), the drainward decline in water level and the increase in kinetic energy are imperceptible at the plotted scale. For the whirlpool (dashed lines), the figure shows a dramatic decline in water pressure and height near the drain; this profile represents the whirlpool's shape. There is a correspondingly large increase in water velocity and kinetic energy as water approaches the drain, and Figure 12 clearly shows that this increase is much greater for the whirlpool than for radial flow.

Table 2 summarizes the results of the steady-state radial and whirlpool models. The table shows that the flow rate is higher for radial flow, but the internal work rate is 4,000 times higher and the functional complexity is 5,000 times higher for the whirlpool. According to the KSP, the whirlpool should be more stable than radial flow, despite its lower rates of exergy supply, dissipation, and entropy production. The common emergence of whirlpools in draining water empirically documents the relative stability of whirlpools over radial flow, as predicted by the KSP. The whirlpool's stability provides an important counterexample to the idea that "faster is better." It falsifies the maximum entropy production principle [43], which asserts that the rate of exergy dissipation and entropy production always tends to be maximized.

Table 2 notation: Ẋout = kinetic exergy of water exiting a 1 cm diameter drain with area Adrain; ρ = fluid density (1,000 kg/m³); h₀ = water depth at perimeter (20 cm); Ẇᵢ = ½ΔVᵢ²Jᵢ = rate of increase in kinetic energy of water at interface i (Figure 11); Ẇint = internal work of accelerating water from the perimeter zone to the core zone.

Figure 8 showed two homeostates for the Brusselator: a steady-state homeostate (point attractor) and a periodic homeostate (limit cycle attractor). Any perturbation from the steady-state homeostate sends the system to the more stable oscillating homeostate. The Brusselator's steady-state and cycling homeostates have identical rates of dissipation and net exergy extraction. They differ, however, in their internal work. For the steady-state homeostate, the concentrations of X and Y are fixed: there is never a measurable transfer of component 1 "uphill" from state X to Y, and endergonic node 3A does no measurable work on the system. For the oscillating mode, however, component 1 periodically accumulates in and is released from the X and Y pods. During net transfer from X to Y, endergonic node 3B does internal work of lifting component 1 to higher exergy. The oscillating homeostate therefore has a higher rate of internal work than the steady-state mode. The KSP asserts that the oscillating mode is more stable, in agreement with perturbation analysis (Figure 8). With a fixed RXS (equation 13), Definition 12 shows that the oscillating homeostate also has higher functional complexity.

Oscillations and Synchronization
We can generalize this conclusion and assert that an oscillating homeostate is more stable than a steady-state homeostate, other differences being negligible. The spontaneous emergence of resonance, commonly observed in mechanical or fluid-mechanical systems far from equilibrium, illustrates spontaneous cycling and the arrow of increasing functional complexity.
Systems of linked oscillators often synchronize in a process known as entrainment. Christiaan Huygens, the inventor of the pendulum clock, first documented this in 1666, when two of his pendulum clocks mounted on a common support spontaneously synchronized [46]. Oscillations and synchrony are ubiquitous in biological systems, human behaviors and institutions, astrophysics, and quantum mechanics [47]. Figures 13 and 14 describe the synchronization of a network of linked oscillators. As detailed in the figure captions, the analysis shows that a network of linked oscillators increases its rate of internal work when all oscillators synchronize. The KSP therefore predicts that networks of coupled oscillators are stabilized by synchronization, explaining the spontaneous emergence of oscillations and synchrony independently of a system's specific physical details.

When oscillators are not synchronized, some exergy is lost to diffusive leakage between adjacent oscillators. When all oscillators synchronize, after 1,500 time steps, concentrations are equal, there is no diffusive loss, and the pods' rate of exergy accumulation is maximized and equal to the exergy input (except at pod discharge). The rate of internal work equals the time-averaged accumulation of exergy by the pods, and it is maximized by synchronization.
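The entrainment of linked oscillators can be reproduced with the standard Kuramoto model of coupled phase oscillators. This is a conventional minimal stand-in for the pod network of Figures 13 and 14, not the paper's own model; all parameters below are illustrative.

```python
import math
import random

# Minimal Kuramoto model: N phase oscillators with slightly different
# natural frequencies, coupled all-to-all with strength K.
random.seed(1)
N, K, dt, steps = 20, 2.0, 0.05, 2000
omega = [random.gauss(1.0, 0.1) for _ in range(N)]       # natural frequencies
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]  # initial phases

def order_parameter(phases):
    """|r| = 1 means full synchrony; near 0 means incoherence."""
    re = sum(math.cos(t) for t in phases) / len(phases)
    im = sum(math.sin(t) for t in phases) / len(phases)
    return math.hypot(re, im)

r0 = order_parameter(theta)
for _ in range(steps):
    # d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    coupling = [K * sum(math.sin(tj - ti) for tj in theta) / N
                for ti in theta]
    theta = [t + (w + c) * dt for t, w, c in zip(theta, omega, coupling)]
r1 = order_parameter(theta)

print(f"order parameter: {r0:.2f} -> {r1:.2f}")  # rises toward 1 as the
# oscillators entrain, independent of the oscillators' physical details
```

The coupling term drives phase differences toward zero, so the order parameter climbs toward unity regardless of the oscillators' physical substrate, in line with the KSP's substrate-independent prediction.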

Cosmological Implications
Ellis and Drossel promote the evolving block universe (EBU) model to resolve indeterminism and the origin of time's arrows [3]. Whereas the block model of the universe is a static block in 4D spacetime spanning all past and future since the Big Bang, the EBU is bounded at "now" by the unfolding of the present from an indeterminate future.
Ellis and Drossel attribute the indeterminate unfolding of "now" to quantum randomness. However, given that the wavefunction is a deterministic function, there is a conflict between two of their statements in a related article [48]:
1. "The view taken here is close to the Copenhagen interpretation: it assumes firstly that the wave function is real (i.e., descriptive and objective rather than epistemic)."
2. "Our basic view is that measurement is truly a stochastic, nonunitary process … a 'wave function collapse.'"
The first statement extends the determinism of the wavefunction to physical states. The second (correctly) acknowledges fundamental physical randomness associated with measurement, which contradicts the first. Failure to reconcile this contradiction explicitly undermines the EBU's logical consistency.

Ellis and Drossel nevertheless recognize a key limitation on determinism [3]: "when quantum effects are not dominant and suitable classical equations of state are given, as at times after the end of [cosmic] inflation, outcomes will be unique [i.e., determined]. However, this raises again the above-mentioned problem that deterministic equations require infinite precision." As they point out, determinism depends on infinite precision. The WCM's positive ambient temperature, however, leads directly to coarse graining of configuration space and therefore to a breakdown of determinism. Coarse graining yields indeterminate outcomes, but it does not establish the direction of indeterminate change or of time's arrow. To establish that direction, Ellis and Drossel invoke cosmic expansion and a past condition [3]: "A global Direction of Time arises from the cosmological context of an expanding and evolving universe; This overall context, and particularly the associated decrease of temperature of interacting matter and radiation with time, together with a Past Condition of low initial entropy, leads to local Arrows of Time."
David Albert introduced the Past Hypothesis, stating: "We make the cosmological posit that the universe began in an extremely tiny section of its available phase space," i.e., in an initial state of low entropy (quoted in [3]). This implies an extremely well-tuned past condition. The WCM has a different take on the past hypothesis. The universe started in a state of near-zero entropy, but we model it at or near its ground state, in equilibrium with the intensely hot ambient temperature of its formation. As an equilibrium system, the universe's initial state spanned its configuration space. There was only a single possible initial state, so no fine tuning was required.
Expansion and a declining ambient temperature subsequent to the Big Bang led to the refinement of the universe's configuration space and to the indeterminate unfolding of its future by random actualizations of newly refined and contextually defined potentialities. The universe has evolved from an initially hot state of near-equilibrium and zero entropy to its current state of high exergy and high entropy with respect to its cold ambient microwave background. Its positive exergy drives irreversible processes and the emergence of dissipative structures. The universe will continue to produce and dissipate exergy, without ever reaching an equilibrium state of zero-exergy heat death, as long as it continues to expand and its ambient temperature continues to decline.

Origin and Evolution of Life
The WCM sheds light on another perplexing question: the origin of life. Once self-replicating autocatalytic systems exist, Darwinian natural selection of random mutations can propagate beneficial mutations and guide evolution toward higher fitness, but it cannot account for the origin of life or for the evolution of complexity within non-replicating systems.
The Kelvin Selection Principle constitutes a fundamental law of evolution that applies to both replicating and non-replicating systems. The KSP can apply to a system of simple organic compounds that is open to exergy sources (e.g., Miller-Urey experiment), by progressively selecting dissipative processes and guiding the system toward higher functional complexity. By continually seeking to defer dissipation to do internal work on the system, the KSP guides a system toward expanding networks of links and feedback loops to recycle exergy. Over time, self-organization can lead to autocatalytic and self-replicating networks.
Once life is established, the KSP guides its evolution through an interplay between Darwinian competition to increase fitness and cooperation to sustain and increase functional complexity. Darwinian competition predominates when a species' dissipative cost of competition is small relative to the increase in the rate of internal work by expanding its exergy supply. In this case, the KSP provides a competitive drive for a species to expand up to the carrying capacity of its environment.
Cooperation predominates if resources are inelastic or the dissipative cost of competition is too high. When a rainforest's canopy completely covers the forest, for example, its solar resource base is inelastic to further gains. Over its fifty-million-year period of relative stability in its environment and net exergy supply, the Amazon rainforest has continued to evolve by developing highly complex webs of interaction for recycling components and exergy [49]. Ecological nutrient cycling [50] involves repeated utilization and renewal of the nutrients' specific exergy. From Definition 12, this increases the system's functional complexity factor. By recursively deferring dissipation and recycling exergy, a dissipative system can increase its rate of internal work by increasing its functional complexity, with no upper bound in the limit of perfect efficiency.

Summary and Conclusions
The WCM and existing non-contextual models (NCMs) start with the same empirical facts, but they differ in the assumptions by which they interpret those facts. Hidden-variables theories of quantum mechanics follow classical mechanics and define the physical state in terms of precise coordinates of position and momentum. The implications are a noise-free ambient reference at absolute zero, zero entropy, superdeterminism, and non-contextuality, as a consequence of information-preserving Galilean or Lorentz transformations on reference states. The Copenhagen and Many Worlds interpretations assume that the wavefunction is a complete description of the physical state. The implications are an ambient reference in equilibrium with the system, zero entropy, and non-contextuality, since the system's ambient surroundings are reflexively defined with respect to the system itself. The CI, MWI, and hidden-variables theories are thus based on two extreme special cases of ambient surroundings, both of which result in non-contextual zero-entropy states.
By abandoning non-contextuality, the WCM accommodates the CI, MWI, and hidden variable theories as special cases, and it fills the gap between the extremes in their assumed ambient surroundings. The WCM assumes a contextually defined ambient surroundings at the temperature of the system's actual surroundings. This allows the WCM to accommodate non-equilibrium metastable states with positive entropy, positive exergy, and the potential for irreversible change.
WCM establishes the irreversible production of entropy as a fundamental law of physics. The dimension of irreversible time breaks the time-symmetry of non-contextual zero-entropy states. It expands physics beyond the investigation of static states and their transitions to the investigation of irreversible dissipative processes. It establishes the Kelvin Selection Principle as a law of evolution, applicable to both living and non-living systems. The KSP selects dissipative processes of higher internal work rate and stability, leading to the spontaneous emergence of dissipative structures and increasing functional complexity.
The WCM is an improvement over non-contextual interpretations of physics, but it is still just a model of physical reality. Quantum randomness within a metastable state can be amplified to macroscopic scales [35]; it is measurable, and it is a key element of the WCM. Ambient fluctuations, on the other hand, have zero exergy, are not measurable, and are not recognized by the model. The WCM models a system's ambient surroundings as an equilibrium, defining a single unique ambient temperature, and it defines states and homeostates with respect to a fixed environment. Reality is more complex, however. In a non-equilibrium and finite universe, the surroundings are neither in equilibrium nor constant over time. In addition, we have considered only ambient temperatures associated with photon exchanges, but the quantum electrodynamic field is only one of several fields that permeate the quantum vacuum. Each field may have its own ambient temperature governing field excitations and particle interactions.
Like any model, the WCM is a simplification of reality, but it is a key step forward from non-contextual models of physical reality, and it is a good model. A model is "good" if it is consistent with empirical observations; precise; parsimonious in its assumptions; explanatorily broad; falsifiable; and if it promotes scientific progress [51]. The WCM, like all viable interpretations of physics, satisfies the consistency requirement. It is parsimonious in assuming no hidden variables and in not assuming non-contextuality, which cannot be empirically justified by observations.
The WCM explanations are broad and falsifiable. It explains the thermodynamic arrow of time and the evolution of complexity as fundamental physical responses of non-equilibrium systems, without invoking an exceptional and unexplained initial state or improbable accident. It explains the empirical randomness of quantum measurements and the coexistence of nonlocality and relativity in terms of fundamental principles, without invoking empirically consistent but implausible and untestable metaphysical implications. And it explains the evolution of open dissipative systems toward higher functional complexity and stability, which is manifested across the cosmos at all scales. The WCM's implications and explanations are well documented and thoroughly validated by empirical observations.
The WCM also suggests new avenues of scientific progress. It extends the scope of physics from its traditional focus on states to irreversible dissipative processes, thereby opening new avenues of investigation within fields previously regarded as too high-level or too complex for fundamental physical analysis. The WCM discretizes configuration space, but its quantization of space is contextual and does not conflict with relativistic length contraction; this contextual discretization is compatible with relativity and might provide a new pathway to explore quantum gravity.
By all measures, the WCM is a good interpretation of physical reality and a viable response to Ellis and Drossel's challenge [3]: "the challenge is to find some alternative proposal [to the Evolving Block Universe] that does justice to the fact that time does indeed pass at the macro scale, and hence it must be represented by something like the EBU structure presented here. To deny that time passes is to close one's mind to a vast body of evidence in a spectacular way."