Preprint Article

This version is not peer-reviewed.

The Non-Ordinary Laws of Physics Describing Life

Submitted: 31 March 2025
Posted: 14 April 2025


Abstract
The question of whether the same laws of science can describe living and non-living matter has been debated since thermodynamics was founded. We show that E. Schrödinger's and R. P. Feynman's predictions were correct: not a new interaction (and the force it generates), but the interaction of two scientific disciplines is responsible for the reciprocal relations of thermodynamics, which serve as a foundation for the processes of life. The thermodynamic and the electrical interaction speeds differ by several orders of magnitude, and classical physics is not prepared to handle such a case. We develop a mathematical method, based on a physical transformation, for deriving the interrelations of the time derivatives of the interactions, essentially as Einstein did. We provide exact descriptions for charge-related transport processes needed in many fields. Based on assumptions touching fundamental principles of science, we explain why former attempts at describing life failed and why non-ordinary laws are needed. We derive the presumed non-ordinary (non-disciplinary) laws of science describing life.
Subject: Physical Sciences - Other
"Though this be madness, yet there is method in’t."
Shakespeare: Hamlet: Act 2 Scene 2

1. Introduction

The existence of life is still a mystery for science. As E. Schrödinger formulated, "the construction [of living matter] is different from anything we have yet tested in the physical laboratory ... it is working in a manner that cannot be reduced to the ordinary laws of physics" [1]. He inserted the highlighted word out of his strong conviction: "And that not on the ground that there is any 'new force' or what not, directing the behaviour of the single atoms within a living organism, but because the construction is different from anything we have yet tested in the physical laboratory." He attempted to find those laws in a disciplinary way. Maybe R. P. Feynman was right in saying that "We make no apologies for making these excursions into other fields, because the separation of fields, as we have emphasised, is merely a human convenience, and an unnatural thing. Nature is not interested in our separations, and many of the interesting phenomena bridge the gaps between fields." Maybe we must make 'excursions' to find those non-ordinary (in other words, non-disciplinary) laws that describe life? Or, as Schrödinger implicitly suggested, must we revisit the approximations leading to classical physics (where we derive the well-known 'ordinary' laws in the approximation appropriate for the construction that non-living matter represents)? Maybe the "different construction" that living matter represents needs different approximations, and then laws based on the same first principles, in a different approximation (in a non-disciplinary approach), can describe life? We scrutinize the "construction" and its "working" in a non-disciplinary way, using non-ordinary approximations and abstractions. We show that life is not against the laws of science; it is only against discussing it in terms of a single science discipline.

2. Laws of Motion

Science's laws about the separate interactions of masses and charges are based on abstractions, which enable and require approximations and omissions. Although we understand that the speeds of electrical and gravitational interactions are finite, we can use the 'instant interaction' approximation in classical physics: the effect of the first particle reaches the second particle at the same instant as the reverse effect, so no time-dependent term appears in the mathematical formulation. However, this is not the case in electrodiffusion, where the mass transfer is significantly slower than the propagation speed of the electromagnetic field.
From a physical point of view, ionic solutions are confined to a well-defined volume, with no interaction with the rest of the world. What complicates matters is that their volume is finite, so we must adapt the corresponding laws to the case of finite resources. At the microscopic level, on the one hand, we use the abstraction that they consist of charge-less and size-less simple balls with mass, which have thermal (kinetic) energy and collide with each other, as thermodynamics excellently describes. On the other hand, we use another abstraction, that of mass-less and size-less charged points with mutual repulsion. At the macroscopic level, we use the abstraction that the respective volume is filled with a continuous medium with uniformly distributed macroscopic parameters such as temperature, pressure, concentration, and potential.
One can draw a parallel between describing how ions change their position and Newton's laws of motion, which relate an object's motion to the forces acting on it. The first and third laws are static; the second is dynamic. We can translate the first law to ions: without external invasion, their volume at rest will remain at rest. The third law, for a volume of ions, essentially states that in a resting state the electric and thermodynamic forces are equal at every point; the Nernst-Planck electrodiffusion equation (without transport) expresses this. The second law, in mechanics, expresses the time course of the object: the time derivative of its position. Notice that in this case we make one abstraction: the object (the carrier) has one attribute, its mass. (Recall how important it was for the theory of relativity that the inertial mass and the gravitational mass were identical; i.e., they could be described by one abstraction.)
For ions, we have two abstractions and two attributes, 'charge' and 'mass', and the two forces act on the two attributes, which science classified as belonging to different disciplines. We cannot easily express how the electric and thermodynamic forces will change the object's position because those forces act differently on different attributes. No time derivatives are known, only position derivatives. Due to this hiatus, physics (and consequently physiology) cannot describe electrochemical processes: the second law of motion for electrodiffusion is missing. As a consequence of the instant-interaction assumption, classical science has no mechanism for handling the case when two different force fields (gradients) with different propagation speeds act on an object, and two different abstractions (charge and mass), belonging to different science disciplines, translate the force into acceleration.
When describing processes (i.e., dynamical systems), we must have one or more equations of motion: how the time derivatives of the fundamental entities change as a function of the fundamental entities. In classical science, we have only one fundamental entity, the position, and the driving forces also depend only on the position. The (Newtonian) laws of motion do not depend on a time derivative. In the Einsteinian world, speed explicitly appears when describing the interrelation of the fundamental entities mass, position, and time. Actually, a second entity (time, next to position) appears, and the speed connects them.
In our 'extraordinary' science, we have two abstractions; correspondingly, we have two laws of motion, one for each of the two disciplines or abstractions. In our laws of motion (see Eq.(5) and Eq.(8)), we also have an explicit speed dependence in describing the interrelation of concentration and potential. Actually, we are thinking in terms of two entities (concentration and potential), but both are parametrized by the position x. In line with the Einsteinian case, time appears explicitly, and the speed connects those entities. However, the different interaction speeds act on the coordinates differently; that is, the effects of the changed entities cannot be separated. This is why we need 'extraordinary' laws. (Given that the thermodynamic speed is always lower than the electrical (limiting) speed by orders of magnitude, we neglect transforming the time.)
In all cases, the law has the form of a differential equation; i.e., we can derive the fundamental entities by integration. In 'ordinary' science, we have a single-abstraction interaction, so we have one law of motion, and an analytical solution is possible. In the non-ordinary (non-disciplinary) case, we have a dual-abstraction interaction, and only a numerical solution is possible.

3. Steady State

In volumes containing ions, the ions experience two effects through those two abstractions. When an invasion (an external perturbation) of the volume happens, the electric potential, pressure, temperature, or concentration changes locally, and dynamic changes begin to restore the balanced steady state. When the invasion persists, the system finds another steady state. The observer experiences that changing one macroscopic parameter of the system causes an unexpected (and, classically, unexplainable) local change in another macroscopic parameter. The microscopic world maps the changes from one abstraction to the other: experimentally, from the world of the electric abstraction to the world of the thermodynamic abstraction and vice versa. Theoretically, we can map the macroscopic electrical and thermodynamic parameters onto each other exactly, using microscopic universal constants.
The phenomenon called 'electrodiffusion' means that when a potential gradient is created in a volume with ions (while its thermodynamic parameters, such as its volume and temperature, are constant), it creates a concentration gradient. Conversely, a created concentration gradient creates a potential gradient. Two driving forces act on the ions: a thermodynamic and an electrical one. In a steady state, the two driving forces are equal at every spatial point of the segment, and the ions will not move. We can describe the equilibrium state (the mutual dependence of the spatial gradients of the electrical and thermodynamic fields on each other) using the Nernst-Planck electrodiffusion equation
\[
\frac{d}{dx} V_m(x) = \frac{RT}{qF}\,\frac{1}{C_k(x)}\,\frac{d}{dx} C_k(x) \tag{1}
\]
In good textbooks (see, for example, [2], Eq. (11.28)), its derivation is detailed exhaustively. In the equation, x is the spatial variable along the direction of the changed invasion parameter, R is the gas constant, F is Faraday's constant, T is the temperature, q is the valence of the ion, V_m(x) is the potential, and C_k(x) is the concentration of the ion species. In simple words, it states that a change in the concentration of ions creates a change in the electric field (and vice versa), and in a stationary state they remain unchanged. However, in classical science, there is no way to take the field's propagation speed into account.
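As an illustration, the equilibrium relation Eq.(1) can be checked numerically: with the sign convention exactly as written above, a Boltzmann-type concentration profile satisfies it identically. The potential profile and all numerical values in the sketch below are our illustrative assumptions, not values taken from the text.

```python
import numpy as np

# Numerical check of the equilibrium Nernst-Planck relation, Eq.(1):
#     dV_m/dx = (R*T)/(q*F) * (1/C_k) * dC_k/dx
# With Eq.(1)'s sign convention, the Boltzmann-type profile
# C(x) = C0 * exp(q*F*V(x)/(R*T)) satisfies the relation identically.
R, T, F, q = 8.314, 300.0, 96485.0, 1.0   # SI units; T assumed ~300 K
x = np.linspace(0.0, 1e-3, 2001)          # 1 mm segment
V = 0.05 * np.sin(2 * np.pi * x / 1e-3)   # assumed potential profile [V]
C = 1.0 * np.exp(q * F * V / (R * T))     # Boltzmann concentration profile

lhs = np.gradient(V, x)                            # dV/dx
rhs = (R * T) / (q * F) * np.gradient(C, x) / C    # RT/(qF) * (1/C) * dC/dx
err = np.max(np.abs(lhs - rhs)[1:-1])     # interior points (central differences)
print(err)  # small next to |dV/dx| ~ 3e2 V/m: discretization error only
```

The agreement holds up to finite-difference error; any profile of the Boltzmann form passes the same check.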
It is one of the rare cases where the starting point was wrong but the conclusion was correct. The equation is a rearranged flux equation, in which an identical speed was assumed for all interactions. That identical speed was calculated as a "mean-field" speed, where the "mean" stands for some average of interaction speeds differing by several orders of magnitude, which is not suitable for describing a flux. However, in an equilibrium state, the actual value of both interaction speeds is zero, so they do have the same value.

4. Time Derivatives

Eq.(1) describes a stationary state with no ionic movement. Deriving a time course (time derivatives) from the position derivatives is impossible in a strict mathematical sense: the interaction is instant. However, we can provide it using physical principles. We consider the electric ion current as represented by a viscous charged fluid [3]. As expected, selecting the speed (i.e., calculating the appropriate value of the macroscopic speed, see Eq.(9)) plays a key role, especially since we are at the boundaries of physics abstractions; furthermore, we are mixing microscopic and macroscopic notions.
In classical physics, because of the lack of time-dependent terms in the expressions, changes are described by position-dependent terms (position derivatives), both for electromagnetic and for electrodiffusional interactions. In classical ('instant interaction') science, the time derivatives either are not interpreted or can be derived through an externally given joint interaction speed. As explained, we can extend the idea to enormously different speeds and derive time derivatives if we consider the faster interaction to be instant.
In timeless classical physics, there is no explicit dependence on time: everything happens simultaneously. In a resting state, the Maxwell equations follow from the conservation of energy. One form of energy transforms into another, and the system arrives at another balanced state. The carriers of the force fields are continuous, so one can make infinitesimal changes in the driving forces; they do not change the system's energy. If one gradient changes, the other automatically (per definitionem) changes in the opposite direction. In other words, the driving forces are permanently balanced; the magnetic and electric forces act instantly ("simultaneously") and are always of opposite sign. A time derivative cannot be interpreted: everything happens at the same time, i.e., at the same space-time point (in the classical interpretation, the time is the same at every point).
In an electrodiffusional process, we start from the same point of view. We assume that the thermodynamic and electrical driving forces are equal in equilibrium; that assumption results in the Nernst-Planck equation. On one side, we use a macroscopic parameter, the potential; on the other, we use another macroscopic parameter, the concentration. The equation bridges those macroscopic parameters by using universal constants from the microscopic world. However, unlike in electromagnetism, we cannot make infinitesimally small changes in the gradient, since the carrier of the force fields is "atomic". Furthermore, when moving a carrier infinitesimally (changing only its position coordinates), the changes in the electric and thermodynamic gradients do not result in a new balanced state. The ion's charge has an immediate effect on the volume, but the ion's mass has a delayed effect. An infinitesimally small change in position results in an infinitesimally small increase in the energy of the system, given that moving a carrier changes the potential and the concentration in the same direction, while we did not consider that the time changes. In the Newtonian world, everything happens at the same time, so we cannot handle instant and finite interaction speeds simultaneously. The infinitesimally small change disappears only when the slower interaction reaches the other carriers in the volume. When interaction speeds differ, energy conservation is valid only if we use space-time.
Fortunately, we can derive the infinitesimally small change when the time and space (position) coordinates are connected, essentially in the same way as in the special theory of relativity. Let us assume that the gradients act on the mass and the charge, but the ion's effect on the gradients is negligible. According to the principle of relativity, the phenomena must remain the same in a reference frame moving with a constant speed relative to the first one; we choose the frame that moves together with the ion. In the second frame, no ionic movement occurs along the direction of motion. In line with the fact that the speed of light is independent of the reference frame, we assume that the higher interaction speed remains the same in both systems: it is instant. The observers in both reference frames must see that the system is balanced. The difference is that in the first frame the system is statically balanced (no change in the gradients, but the ion is moving), and in the second one it is dynamically balanced (the gradients change to keep the ion at rest). The gradients the moving ion experiences are the ones that the standing ion experiences at another time (depending on its speed). In this way, we can provide the needed time course of the process.
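To first order in dt, the argument above amounts to forming a convective (material) derivative; written out as a sketch, using the notation of Eq.(1):

```latex
% The gradients seen by the co-moving observer at time t+dt are those
% seen by the resting observer at the shifted position x + v(x)dt:
V_m\big(x + v(x)\,dt\big) \approx V_m(x) + v(x)\,dt\,\frac{dV_m(x)}{dx},
\qquad\text{hence}\qquad
\frac{dV_m(x)}{dt}
  = \lim_{dt \to 0}\frac{V_m\big(x + v(x)\,dt\big) - V_m(x)}{dt}
  = v(x)\,\frac{dV_m(x)}{dx}.
```

The same first-order expansion, applied to C_k with the opposite shift x - v(x) dt, gives the concentration counterpart.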
Compared with the electromagnetic case, we see crucial differences. First, the propagation speed of the mass is millions of times slower than that of the charge. Second, the moving ion simultaneously represents mass transport and charge transport. Third, when deriving the position derivatives, we start from the assumption that there is no movement (in other words, no explicit dependence on time): the effects of the electric and magnetic driving forces are equal, whatever time is needed to reach that balanced state. In contrast, in electrodiffusion, the velocity changes the concentration gradient and, simultaneously, the potential gradient.
We assume that equation (1) is valid at a given time t. At time t + dt, in another steady state, the two interactions manifest at different times: we have
\[
\frac{d}{dx} V_m\big(x + v(x)\,dt\big) = \frac{RT}{qF}\,\frac{1}{C_k(x)}\,\frac{d}{dx} C_k(x) \tag{2}
\]
or, equivalently, it can be expressed as
\[
\frac{d}{dx} C_k\big(x - v(x)\,dt\big) = \frac{qF}{RT}\,C_k(x)\,\frac{d}{dx} V_m(x) \tag{3}
\]
The concentration at position x determines the potential (apart from an integration constant) at that position:
\[
dV_m(x) = dx\,\frac{d}{dx} V_m(x) = dx\,\frac{RT}{qF}\,\frac{1}{C_k(x)}\,\frac{d}{dx} C_k(x) \tag{4}
\]
so (and here the constant disappears) the time derivative is
\[
\frac{d}{dt} V_m(x) = v(x)\,\frac{d}{dx} V_m(x) = v(x)\,\frac{RT}{qF}\,\frac{1}{C_k(x)}\,\frac{d}{dx} C_k(x) \tag{5}
\]
or
\[
\frac{d}{dt} V(x) = D\,\frac{R}{F\,C(x)}\,\frac{dC}{dx}\,\frac{RT}{qF}\,\frac{1}{C(x)}\,\frac{d}{dx} C_k(x) \tag{6}
\]
Similarly, at time t - dt, in another steady state, we have
\[
dC_k\big(x - v(x)\,dt\big) = dx\,\frac{d}{dx} C_k(x) = dx\,\frac{qF}{RT}\,C_k(x)\,\frac{d}{dx} V_m(x) \tag{7}
\]
\[
\frac{d}{dt} C_k(x) = v(x)\,\frac{d}{dx} C_k(x) = v(x)\,\frac{qF}{RT}\,C_k(x)\,\frac{d}{dx} V_m(x) \tag{8}
\]
We expressed the dependence of the gradients on each other using the ion's speed v as an intermediate variable, which can be expressed through the Stokes-Einstein relation as
\[
v = D\,\frac{R}{F\,C(x)}\,\frac{dC}{dx} \tag{9}
\]
After simplifying the expression, we arrive at
\[
\frac{dV}{dt} = \frac{T R^2}{q F^2}\,D\,\frac{d^2 C}{dx^2} \tag{10}
\]
For practical calculations, the voltage's time derivative can be calculated directly from the input current, which takes the current-production mechanism into account.
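As a minimal numerical sketch of the result just derived, dV/dt = (T R^2)/(q F^2) D d^2C/dx^2, the voltage's time derivative can be evaluated from a sampled concentration profile by finite differences. The Gaussian pulse and all parameter values below are our illustrative assumptions.

```python
import numpy as np

# Sketch: evaluating dV/dt = (T*R^2)/(q*F^2) * D * d2C/dx2 by finite
# differences on a sampled concentration profile (illustrative values only).
R, T, F, q = 8.314, 300.0, 96485.0, 1.0    # SI units; T assumed ~300 K
D = 1.33e-9                                # m^2/s, a typical small-ion diffusivity
x = np.linspace(-1e-4, 1e-4, 1001)         # 0.2 mm segment
C = 1.0 + 0.1 * np.exp(-(x / 2e-5) ** 2)   # assumed concentration pulse [mol/m^3]

coef = (T * R ** 2) / (q * F ** 2)         # universal-constant factor, ~2.23e-6
d2Cdx2 = np.gradient(np.gradient(C, x), x) # second spatial derivative of C
dVdt = coef * D * d2Cdx2                   # pointwise voltage time derivative
# The pulse's center (d2C/dx2 < 0) discharges while the tails charge.
```

The sign pattern of dVdt mirrors that of the pulse's curvature, as the relation requires.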

5. Fick’s Law

Given that
\[
\frac{dC}{dt} = D\,\frac{d^2 C}{dx^2} \tag{11}
\]
expresses Fick’s Second Law of Diffusion, we can derive the ratio between the electric and thermodynamic temporal gradients. Using the values of universal constants
\[
\frac{dV}{dt} = 2.23\times 10^{-6}\,D\,\frac{d^2 C}{dx^2} = 2.23\times 10^{-6}\,\frac{dC}{dt} \tag{12}
\]
We note that non-dedicated experimental results (measuring concentration invasion and voltage invasion) yield an experimental value of about $2\times 10^{-6}$.
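The numerical factor above can be reproduced from the universal constants; the sketch assumes room temperature, T ≈ 300 K (our assumption, since the temperature used is not stated in the text).

```python
# Ratio of the electric to the thermodynamic temporal gradient:
# the universal-constant factor T*R^2/(q*F^2), in SI units.
R = 8.314       # J/(mol*K), gas constant
F = 96485.0     # C/mol, Faraday's constant
T = 300.0       # K, assumed room temperature
q = 1.0         # valence of a monovalent ion

ratio = T * R ** 2 / (q * F ** 2)
print(ratio)    # ~2.23e-06, close to the experimental ~2e-06
```

The computed value lands within a few percent of the quoted experimental ratio.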

6. Summary

Onsager's reciprocal thermodynamical relations earned a Nobel prize, but their mathematical handling has not yet been solved. In physics, interactions of different fields are usually handled in a "mean-field approximation". However, in the case of electrodiffusion, one would need to average quantities deviating by six orders of magnitude, resulting in very inaccurate transport equations. Our approach to handling the speeds results in an exact derivation of the time courses of the concentration and the voltage in electrodiffusion processes, which are needed for many practical transport processes based on electrodiffusion. By calculating the ratio of the coefficients of electrodiffusion and diffusion, we provide a way to determine electrodiffusional diffusion and/or viscosity parameters that are not directly accessible due to experimental difficulties.
Those equations are also the laws of motion for biological processes, that is, for life. Among others, they explain how, in neuronal processes, an ion packet moves without an external potential on the surface of the membrane and along the axon. Using them, one can quantitatively describe at which parameter combination a system composed of non-living components (such as a closed volume having two segments with largely differing electrolyte concentrations, separated by a semipermeable membrane with ion channels, plus a current drain (such as the AIS) and current sources (such as synapses)) shows the symptoms that biology calls an "action potential".

Appendix A. Features for Describing Life

In his very accurately formulated question, "How can the events in space and time which take place within the spatial boundary of a living organism be accounted for by physics and chemistry?", Schrödinger focused on (at least) these significant points:
  • 'events': Unlike non-living matter, living matter is dynamic, changing autonomously by its internal laws; we must think about it differently, including making hypotheses and testing them in the lab. Those laws are extraordinary 'because the construction is different', but their principles must not differ from the ones we already know. Processes happen inside; we can observe some characteristic (time-space) points.
  • 'space and time': Those characteristic points are significant changes resulting from processes that have material carriers, which change their positions autonomously with finite speed, so (unlike in classical mechanics) the events also have the characteristic 'time' in addition to their 'position'. In biology, the spatiotemporal behavior is implemented by slow ion currents. Meticulous observations must describe the events using special 'space-time' coordinates (to distinguish them from the ones used in the theories of relativity, we call them 'time-space' coordinates). In other words, instead of 'moments', we must consider 'periods'.
  • 'within the spatial boundary': We derive the laws of physics for stand-alone systems, in the sense that the considered system is infinitely far from the rest of the world, and also in the sense that the changes we cause or observe do not significantly change the external world, so its idealized disturbing effect will not change the system.
  • 'accounted for by physics' [by extraordinary laws]: We are used to abstracting and testing a static attribute and deriving the 'ordinary' laws of motion for the 'net' interactions. In the case of physiology, nature prevents us from testing 'net' interactions. We must understand that some interactions are non-separable, and we must derive 'extraordinary' laws. The forces are not unknown, but the known 'ordinary' laws of motion of physics are about single-speed interactions.
  • 'living matter': To describe its dynamic behavior, we must introduce a dynamic description.
  • 'yet tested in the physical laboratory' [including physiological ones]: We need to test those 'constructions' in laboratories, in their true environment, and in a 'working state'. As we did with non-living matter, we need to develop and gradually refine the testing methods and hypotheses. Moreover, we must not forget that our commonly used methods refer to 'states' (i.e., moments); this time, we test 'processes'.
A common fallacy in biology is that physics cannot underpin the operation of living matter, citing E. Schrödinger. However, the claim falsifies his opinion by omitting the essential word 'ordinary'. Schrödinger wanted to emphasize the opposite: there is no new force (no unknown new interaction), but studying living matter needs different testing methods in the physical laboratory. He suggested answering the question "Is life based on the laws of physics?" affirmatively, but expected the appropriate forms of the physical laws describing the 'extraordinary' (in our reading: non-disciplinary, in the sense of 'classical physics') behavior of living matter to be invented.
No doubt, the basic notions and terms need to be interpreted precisely for living matter, far beyond the level we are used to. We need a more careful, multi-disciplinary analysis to do so. However, after that pinpointing, we can interpret and explain the features of living matter. As we discuss, biophysics translated the corresponding technical terms from the theory and practice of physics' major disciplines, mainly from electricity, which were worked out for homogeneous, isotropic, structureless metals and for strictly pair-wise interactions with a single (actually, 'instant') interaction speed, to structured, non-homogeneous, non-isotropic material mixtures and multiple interaction speeds. Those notions do not always keep their meaning unchanged, and how much they do depends on the actual conditions. We need to use the appropriate abstractions and approximations for the phenomena, depending on the level needed in the given cooperation of objects and interactions.
Science's first principles could serve as a firm base for all its disciplines. As we discuss, its disciplines use abstractions based on limited-validity approximations derived from the same first principles. However, the approximations can be, and are, different for biology and physics (this is why they are separate disciplines). In physics, some processes we observe are fast enough that we can use the approximation that they are essentially jumps between states. In some cases, this approach can be more or less successful. For slower, well-observable processes, we have the laws of motion that describe how the processes happen under the effect of some driving force. We have also experienced that nature is not necessarily linear (in the sense that it depends only on the space coordinates and not on their derivatives), i.e., describable by "nice" mathematical formulas. A century ago, A. Einstein recognized that the approximations I. Newton introduced two centuries earlier are not sufficiently accurate for describing the movement of bodies at high speeds. In other words, a new paradigm, the constancy of the speed of light, had to be introduced; it caused a revolution in physics and led to the birth of the "modern physics" disciplines.
Life, including the brain's operation, is dynamic. As Schrödinger formulated, the "construction of living matter" differs from the one science is used to testing in its labs. The scientific abstraction based on "states" (i.e., on instant changes) fails in the case of biology, where "processes" happen (i.e., the changes are much slower). The commonly used measuring methods, such as clamping, patching, and freezing, reduce life to states (and, correspondingly, the related theories describe states with perturbations [4]). It was forgotten that using feedback to stabilize an autonomously working electrical system means introducing foreign currents and, in this way, falsifying its operation. On the one hand, this technology fixes the cell in some well-defined static state and enables us to observe a static anatomic picture. On the other, it eliminates the dynamic processes, i.e., hides forever the essence of life: that the cell exists in continuous change governed by laws of motion.
We derive the needed 'extraordinary laws' by using the same first principles as the 'ordinary laws', but we make the abstractions with the approximations valid for living matter. As discussed, those 'ordinary' laws were derived for strictly pair-wise interactions at very high speeds and only for a single abstraction. In biology, we observe interactions at a million times smaller speed in inhomogeneous, non-isotropic, structured material. Biology does not have the conditions for which we derived the ordinary laws of physics. We show that the ordinary laws are also the result of approximations (including omissions), and, by using the appropriate approximations for the biological cases, we can derive those 'extraordinary' laws of physics. These laws are more complex to derive, and we need to use several stages (with the approximations changing from stage to stage) instead of the single stage of the 'ordinary' laws. However, ordinary and extraordinary laws follow the same principles.

Appendix B. Ways to Consider Speed

Considering the role of time, space, and matter is the subject of endless debates in science. Using finite interaction speeds goes against "nice and classical physics" with its nice mathematical formulas, but omitting the different speeds has misled, and may still mislead, research in several fields. Biology produces situations where the complexity of the phenomena and the needed carefulness meet those required in cosmology. The difference is that, in biology, the phenomena's consequences are immediate and can be studied experimentally.
To describe the related phenomena, we must scrutinize, case by case, which interactions are significant and which interaction(s) can be omitted, instead of setting up ad-hoc models that contradict each other when used outside their narrow range of validity. To provide a correct physics-based description, we must understand the corresponding behavior of living material, including that it works with slow ion currents in electrically active, non-isotropic, structured materials, and consequently its temporal behavior (the speeds of the interactions) matters. We must consider macroscopic and microscopic phenomena at the same time, as well as different science fields and their interplay. "Living complex systems in particular create high-ordered functionalities by pairing up low-ordered complementary processes, e.g., one process to build and the other to correct" [5]. We need to double-check the validity of our abstractions.
Galileo said, "Mathematics is the language in which God has written the universe". However, it is not certain that, when we attempt to read a piece of the universe, we use the right piece of the language, or even that humans have already invented the needed piece. For example, mathematical calculus (integral and differential) was invented mainly for the practical needs of analyzing the spatial motion of celestial bodies, and only after the laws of their motion had been invented. Similarly, Minkowski's mathematical theory proliferated widely [6] only after the special theory of relativity had been invented. Although the mathematical description was developed earlier, there was no practical need to apply it. The classical laws of motion remained valid only until more meticulous observations became available that required considering speed and acceleration (the time derivatives of position) in addition to the position dependence. Newton's static laws remained valid, but for the dynamic description, we must revisit the second law of motion.
Also, we must not forget that "mathematics is not just a language. Mathematics is a language plus reasoning. It's like a language plus logic. Mathematics is a tool for reasoning." (Richard P. Feynman) Mathematical formulas work with numbers, but mathematical theorems and statements begin with "If ... then". They have their range of validity, even when they describe nature. It is possible to use the classical equations of motion for calculating forces and times that would speed up bodies above the speed of light. However, in that case, mathematics is applied to an inappropriate approximation of nature. Different physical approximations (which call for different mathematical handling) are to be used when approaching the speed of light. A mathematical formula, without naming which interactions it describes and under which conditions and approximations it can be applied, is just numbers without meaning. It surely describes something, but only eventually describes what we studied. Galileo made measurements with objects having friction, but his careful analysis extrapolated his results to the abstraction that no friction was present. We know his name because he made meticulous abstractions and omissions (and, mainly, recognized the need to do so!) instead of publishing a vast amount of half-understood measured data.
Science, unfortunately, is separated into classical and modern science based on whether the theoretical description assumes an infinitely fast interaction (the Newtonian model) or acknowledges a finite interaction speed (the Einsteinian model). However, the finite interaction speed is erroneously associated with the speed of light and with frames of reference moving relative to each other at speeds approaching the speed of light. Assuming that the interaction speed is finite is sufficient to build up the special theory of relativity [7] (using the speed of light as the value of its external parameter). Still, the Minkowski mathematics [8] behind the special theory of relativity works with any speed parameter c. The same mathematics describes technical [9] and biological [10] computing systems, where there are no moving reference frames, but the finite interaction speed has noticeable effects on the operation of the system.
There exist attempts to interpret the task of transporting ions under the effect of several interactions with different speeds (for a review, see [11]). However, "a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential" actually means averaging gradients propagating with speeds of about 10^8 m/s (the electromagnetic interaction) and 10^1 m/s (the ionic current), respectively; no way of averaging is appropriate for quantities differing by seven orders of magnitude. The computational methods need position-dependent diffusion coefficient profiles and, in addition, are generally quite limited for confined regions such as ion channels. For this reason, they share issues, limitations, and high computational complexity; furthermore, biophysics [2] explains that "while diffusion is like a hopping flea, electrodiffusion is like a flea that is hopping in a breeze". That sentence is, in effect, the complete mathematical description available for the state change: lacking the notion of a non-infinite interaction speed, the theory cannot say more. It treats the process as a momentary "hop" between two states, although it admits that there are longer and much shorter moments. Classical theory has no means of handling non-infinite interaction speeds. This deficiency is a significant obstacle, among others, to comprehending how electrochemical charge handling implements neuronal computation and information transfer and, furthermore, life itself.
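The disparity can be made concrete with a back-of-the-envelope comparison (a minimal sketch; the channel length and both speed values are illustrative, order-of-magnitude assumptions taken from the text):

```python
# Compare traversal times of the two interactions across a nanometer-scale
# ion channel, illustrating why averaging their gradients is inappropriate.
L = 5e-9            # assumed channel length, m (illustrative)
v_em = 1e8          # propagation speed of the electric field, m/s
v_ion = 1e1         # apparent speed of the ionic current, m/s

t_em = L / v_em     # time for the field gradient to traverse the channel
t_ion = L / v_ion   # time for the ionic stream to traverse the channel

print(f"field: {t_em:.1e} s")    # ~5e-17 s
print(f"ions:  {t_ion:.1e} s")   # ~5e-10 s
print(f"ratio: {t_ion / t_em:.0e}")  # seven orders of magnitude
```

Any single "mean" timescale necessarily misrepresents at least one of the two processes by several orders of magnitude.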
Another primary source of confusion is that the phenomena happen in a limited region of space, and we study processes (in a period instead of a moment) where the environment is not "infinitely far" from the studied object and the studied process interacts with its environment. We must consider that the resources are finite.

Appendix C. Abstractions & Approximations

To describe a well-defined range of phenomena, we use approximations and omissions, and we create abstractions which can then be described by known laws using the universal language of mathematics. We use the abstractions "charge" and "charge carrier" for electrons, protons, ions, etc., and we can describe the electricity-related abstract features of the carriers. We must not forget, however, that those laws have been derived for abstractions based on approximations and omissions, so they also have their range of validity. To apply laws from different fields of science, we must scrutinize whether all the laws we use are applied within their range of validity.
Classical physics is based on the Newtonian idea that space and time are absolute, so everything happens simultaneously; moreover, all interactions (and their observation) have the same speed. Consequently, when objects interact, the interaction must be instantaneous; in other words, the interaction speed is infinitely large. Furthermore, electromagnetic waves with the same high (logically, infinitely high) speed inform the observer. This self-consistent abstraction enables us to provide a "nice" mathematical description of nature for various phenomena: classical science. In the first year of college, we learned that this idea results in "nice" inverse-square dependencies: Kepler’s and Coulomb’s Laws. We discussed that the macroscopic phenomenon "current" is implemented at the microscopic level by transferring (in different forms) "atomic" charge, that the movement of charges has no effect on the environment, and that without charge (and without atomic charge carriers) neither potential nor current exists. In the following year, we learned that the speed of light is finite and that solids show a macroscopic behavior, "resistance", against forwarding microscopic charges.
Biology, predominantly neuronal operation, provides examples where wrong omissions in complex processes lead to entirely wrong results. In those cases, some initial resemblance exists between the theoretical predictions and the observed phenomena, but success in simple cases provides no guarantee that the model was appropriate. As correctly assessed, "the success of the equations is no evidence in favour of the mechanism" [12]. Finally, all laws are approximations, and the accuracy with which their predictions can be verified is limited; several theories can describe the same phenomenon with the required accuracy. The best-known laws (from Newton, Coulomb, Kirchhoff, etc.) are also approximations. They have their range of validity, although that is often forgotten.
One such neuralgic point of omissions and approximations is the vastly different interaction speeds; furthermore, where speed is considered at all, the same speed is assumed for all interactions. The laws are abstract also in that the objects in the laws of physics have, say, either mass or electric charge, but not both. The researcher’s task is to decide which combination of laws can be applied under the given conditions. For example, one can assume in most cases that speeds sum up linearly, except at very high speeds. Biology provides excellent case studies where different interactions shape the phenomenon and special care must be exercised.
Neuronal operation is at the boundary where, sometimes within the same phenomenon, one interaction can be interpreted at the macroscopic level while another must already be interpreted at the microscopic level. Furthermore, a series of stages (instead of a single state) and processes (instead of stages) describe the subject under study. Finally, we must consider that the processes happen in a finite volume.

Appendix D. Speed

The role of speed and time, particularly in the context of an object’s changing location over time, has long held a mystique in the realm of scientific discovery (and has recently become mysterious again in cosmology). This intrigue can be traced back to historical debates, such as Zeno’s paradoxes. The acknowledgment that an object’s speed of movement can influence our observations has sparked significant scientific discourse over the years.
It has been a long-standing mystery that interactions with different speeds play their role simultaneously. The issue forces researchers to give non-scientific explanations to everyday phenomena only because they routinely assume that the interactions have the same speed, and they use the laws about strictly pair-wise interactions. They have no choice: there is no formalism to handle non-equal speeds.
We need different abstractions (finite-speed interaction, as in modern physics) for different phenomena, and those abstractions require a different mathematical handling, which is not as simple and friendly. The speeds of observation and of the propagation of electric fields remain the same in biology, so it is easy to extrapolate, mistakenly, that all interactions have infinitely large interaction speeds. However, slow interaction speeds also exist; furthermore, different interaction speeds can intermix in the same phenomenon. Neglecting that effect forces one to assume fake mechanisms and effects to explain details that are naturally explained by assuming finite interaction speeds and their combinations.

Appendix D.1. Speed of Light

In 1676, the Danish astronomer Ole Roemer made meticulous observations of Jupiter’s moon Io and concluded not only that the speed of light is finite, but measured its value with reasonable accuracy. Roemer never published a formal description of his method, possibly because his superiors, Cassini and Picard, opposed his ideas, although Cassini knew Roemer’s idea and the measurement data.
However, the theory of finite speed quickly gained support among other natural philosophers of the period, such as Christiaan Huygens and Isaac Newton. Although Newton surely knew that the observation speed was finite, in his "Philosophiae Naturalis Principia Mathematica" [13], published in 1687, he decided to refer to observations that happened "at the same time", despite knowing that what we observe at the same time happens at different times. Using instant interaction results in "nice" mathematical laws and enables us to describe most of nature’s experiences with sufficient accuracy.
Einstein, in 1905, discovered [14] that the speed of observation (in moving reference frames) may play a decisive role in interpreting scientific phenomena. The results he derived using Minkowski coordinates [8] were counter-intuitive, with many unexpected consequences. Instead of introducing improvements or corrections to the existing classical principles and methods, he introduced a new principle: the finite (limiting) interaction speed. "The disciplinary analysis of the reception of Minkowski’s Cologne lecture reveals an overwhelmingly positive response on the part of mathematicians, and a decidedly mixed reaction on the part of physicists" [15]; that reception has since turned into its exact opposite. Today, physics generally accepts the description, that is, the existence of a finite interaction speed (resulting in the birth of a series of modern science disciplines). However, other science disciplines, including biology and computing science, refute (or at least do not use) it despite its evident effects.

Appendix D.2. Speed in Neuroscience

Helmholtz, in 1850, sent a short report to the Academy [16]: "I have found that a measurable time passes when the stimulus exerted by a momentary electric current on the hip plexus (Hüftgeflecht) of a frog propagates itself to the nerves of the thigh and enters the calf muscle." His teacher "had thought that the speed of nervous conduction might be in excess of the speed of light and could probably never be measured. Helmholtz’s father, on hearing of the experiment and the surprisingly slow measured speed, wrote to his son that he would as soon believe this result as that one can see the light of a star that burned out a million years ago" [17].
With the development of measurement technology, it became evident that finite speed is a general feature of the "nervous connection". (Somehow, "the speed of nervous conduction" has been renamed to "conduction velocity", neglecting the clear distinction physics makes between the two wordings.) Experimental research also quickly (re-)discovered that those wires forward signals in a particular way: the speed of the potential wave is finite. Furthermore, the axons are not equipotential during transmission. Although its structure is practically identical to that of axons, biology assumes that, unlike an axon, the membrane remains equipotential during its operation, although the evidence shows the opposite: "the action potential spreads as a traveling wave from the initial site of depolarization to involve the entire plasma membrane" [18].

Appendix D.3. Finite-Speed Interactions

When speaking about speed, especially the speed of charged objects inside biological objects, one needs to consider microscopic and macroscopic levels of understanding. On the boundary of the two levels, we must distinguish between different kinds of speeds, among others (in units of m/s): the propagation speed of the electric field (aka potential gradient), about 10^8; the speed of thermal motion and potential-accelerated motion, about 10^5; the apparent speed of current (the potential-assisted speed of a macroscopic stream, both in metals and electrolytes, mainly due to the repulsion of nearby ions in the stream), about 10^1; the diffusion speed of electrons in a wire, about 10^-4; and the drift speed of the individual carriers in aqueous solutions, about 10^-7. Fortunately, in most (but not all) cases, different mechanisms (such as the Grotthuss mechanism or the free-electron model; for a review, see [19]) at the level of the microscopic structure help to create the illusion of a high macroscopic propagation speed (a million times higher than that of its microscopic carriers). The same carrier can have macroscopic speeds differing by orders of magnitude, depending on the context; see the biological example of ion channels. When more than one of those speeds plays a role in the phenomenon we study, we must carefully consider its context and be prepared to handle fast and slow effects, and their mixing.
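The gap between the macroscopic signal speed and the microscopic carrier speed can be checked with the textbook drift-speed estimate (a sketch; the wire cross-section and current are assumed, illustrative values):

```python
# Drift speed of conduction electrons in a copper wire: v = I / (n * q * A).
# While the macroscopic signal propagates near the speed of light (~1e8 m/s
# in a cable), the carriers themselves crawl at ~1e-4 m/s.
n = 8.5e28     # free-electron density of copper, 1/m^3
q = 1.602e-19  # elementary charge, C
A = 1e-6       # assumed wire cross-section, m^2 (1 mm^2)
I = 1.0        # assumed current, A

v_drift = I / (n * q * A)
print(f"drift speed: {v_drift:.1e} m/s")  # ~7.3e-05 m/s
```

The twelve-order-of-magnitude gap between carrier speed and signal speed is exactly the "illusion" of high macroscopic propagation speed discussed above.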
When an object can interact with another in a way abstracted by science as more than one interaction type, we need to find the relation (the ’extraordinary’ law). Such a famous case is electricity and magnetism. Their interrelation is defined by the Maxwell equations: how an electrical field creates a magnetic one and vice versa (notice that the law is about their space derivatives, aka space gradients instead of the entities themselves). While we understand that the speeds of electromagnetic and gravitational interactions are finite, we can use the ’instant interaction’ approximation in classical physics because one effect of the first particle reaches the second particle simultaneously with the other effect, leading to the absence of a time-dependent term in the mathematical formulation.
An apparently similar case is found in electrodiffusion, where ions can be abstracted as mass and charge, one belonging to thermodynamics and the other to electricity. There is, however, an essential difference between the cases: the interaction speeds are the same in the first one (moreover, in the spirit of classical physics, the interactions are instant) but differ by several orders of magnitude in the second one. Of course, the Maxwell equations can be nicely solved, and also modeled for biology, if one introduces [20] the assumption that the axial currents have the same speed as the electric and magnetic waves (despite their measured value of about 20 m/s), and furthermore the longitudinal current is defined(?) to have no attenuation. It is likely also defined that the current needs no driving force; this is why the positive and negative ions flow in the same direction. It is indeed a novel paradigm leading to "(mis)understanding cell interactions", but it describes some alternative nature.

Appendix D.4. Speed in Laws of Science

The famous Coulomb’s Law is expressed as
F_{Q_1}(t) = k \frac{Q_2}{r^2}
where r is a space-time distance; in the Newtonian approximation, time is identical at all places, so we are used to omitting it and using the commonly known space coordinate instead. Considering the finite field propagation speed requires revisiting the fundamental physical laws: Coulomb’s Law (in a Lorentz-transformed form, at zero speed) should be written as
F_{Q_1}(t) = k \frac{Q_2}{r^2(t - r/c)}
The electrostatic field that charge Q_1 experiences, due to the finite propagation speed c of the electric field (or interaction), corresponds to the one that Q_2, at a distance r, generated r/c time ago (k is the constant describing the electric interaction). The effect of delayed interaction has experimental proof in the case of gravitational interaction (gravitational waves). This term plays no role if the two charges do not change their position; similarly to the special theory of relativity, only relative movement leads to complications.
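The retarded time t - r/c can be sketched numerically: for a moving charge, it is the solution of an implicit equation, which a simple fixed-point iteration finds (a minimal sketch; the field speed, positions, and velocity are illustrative assumptions):

```python
# Solve for the retarded time t_r satisfying  t - t_r = |x2(t_r) - x1| / c,
# i.e., the field Q1 feels at time t was generated by Q2 at the earlier t_r.
def retarded_time(t, x1, x2_of_t, c, iters=50):
    """Fixed-point iteration for the retarded emission time."""
    t_r = t
    for _ in range(iters):
        t_r = t - abs(x2_of_t(t_r) - x1) / c
    return t_r

c = 1e8                       # assumed field propagation speed, m/s
x1 = 0.0                      # observer charge Q1 at the origin
x2 = lambda t: 1.0 + 10.0*t   # Q2 starts 1 m away, receding at 10 m/s

t_r = retarded_time(1.0, x1, x2, c)
r_then = abs(x2(t_r) - x1)    # the distance entering Coulomb's Law
print(t_r, r_then)            # t_r slightly below 1.0; r_then close to 11 m
```

As the text notes, for static charges t_r collapses to t and the classical form is recovered; the correction matters only when the sources move.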
This speed term brings to mind another law from classical physics: Kirchhoff’s junction rule. Given that the law expresses charge conservation and the current is defined in differential form as dQ/dt, the law is exact at any point of the circuit. However, for an extended object, it is only valid in the ’instant interaction’ approximation that classical physics uses, not for biology. For extended biological objects, an input current arriving with finite speed leaves the object later (on the order of msec). The law is invalid for an extended object unless we use time-space coordinates. In addition, when charges are "created" inside biological objects (ions diffuse into the junction; see the role of ion channels in the wall of membranes), this vitiates the law. Using the wrong definition of current amounts to assuming ’instant interaction’, that is, that neural signals propagate with the speed of light. The currents (and the voltages), measured at two points in space-time, differ. Consequently, for extended objects (such as a line-like finite-size neuron), the rule is valid only with a time delay
I_{out}(t) = I_{in}(t - \Delta t)
The time delay in biology is in the 1 msec range. We must not describe the axon or the membrane with the non-differential form of the Kirchhoff equation: the input and the output currents flow at different times (the charge carriers need time to travel from input to output); only the differential-equation form expresses charge conservation (furthermore, in the case of "producing" ions, even the differential form is invalid). Studying electric phenomena in structured media, such as biological cells, needs much care. We must not apply laws derived under entirely different conditions (mainly for metals).
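The delayed junction rule can be illustrated with the orders of magnitude quoted above (a sketch; the object length and signal speed are assumed, order-of-magnitude values):

```python
# Delay between input and output current for an extended biological object:
# I_out(t) = I_in(t - dt), with dt = L / v.
L = 1e-2    # assumed length of the extended object, m (1 cm)
v = 1e1     # assumed signal propagation speed, m/s

dt = L / v
print(f"delay: {dt*1e3:.1f} ms")  # 1.0 ms, the msec range quoted above

def i_out(i_in, t):
    """Output current is the input current evaluated dt earlier."""
    return i_in(t - dt)
```

At any single instant, the instantaneous junction rule I_out(t) = I_in(t) fails by whatever the input changed during the millisecond of transit.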

Appendix E. Mixing Interaction Speeds

Physics notoriously suffers from the lack of a means of handling different simultaneous interactions; facing such a case leads to misunderstandings, debates, and causality problems. A famous case is the interaction speed of entanglement. E. Schrödinger introduced his famous law of motion in quantum mechanics entirely analogously to how I. Newton introduced his second law of motion. Similarly to the Newtonian ’absolute time’, the quantum mechanical interaction is supposed to be ’instant’ (this is the price of having ’nice’ equations in classical and quantum mechanics), i.e., its speed is supposed to be infinitely high. However, by that time it was already known that the speed of the electric interaction (the propagation of electromagnetic waves) is finite, so if an object has both a quantum mechanical interaction (aka entanglement) and an electrical interaction, the corresponding forces start simultaneously but arrive at the other object at different times. The entanglement arrives instantly; the electromagnetic effect arrives at a time we can calculate from the interaction speed and the objects’ spatial distance. This speed difference leads to causality problems: the effects of the two interactions of photons entangled earlier in an exploded supernova should be measured at two different times, implying the "spooky action at a distance" A. Einstein complained about. Moreover, it leads to contradictions such as the Einstein-Podolsky-Rosen paradox. The issue is rooted in the improper handling of mixed interaction speeds: the Schrödinger equation introduces an infinitely large interaction speed, while the EM interaction has a finite speed.
The confusion and question marks in connection with describing life by science mostly arise from the interpretation of the notion ’speed’ in physics. When discussing the underlying physical laws, we go back to the very basic physical notions instead of taking over the approximations and abstractions used in classical physics for non-living matter and less complex interactions. As we have often emphasized, in all fields of science we construct laws and conclusions based on somewhat simplified abstractions of nature. Considering speed dependence is what distinguishes the Newtonian and Einsteinian worlds. Similarly to the effect of speed in the theory of special relativity, we can be prepared for some counter-intuitive experiences in physiology, as expected: "we must be prepared to find it [the living matter] working in a manner that cannot be reduced to the ordinary laws of physics" [1].
Light is an electromagnetic wave with a vast but finite propagation speed. At the same time, it is the propagation speed of the electric (and magnetic and gravitational) interaction force fields as well. Science uses ’instant’ in the sense that an interaction is much faster than the process under study; we consider the fast interaction instant. The approach of classical science is based on the oversimplified approximation that the interaction speed is always much higher than the speed of the changes it causes and that the processes can always be described by a single stage. In our approach, for biology, we put together a series of stages to describe the observed complex phenomena, where the stages provide input and output for each other, involve more than one interaction speed, and use per-stage-valid approximations. We simplify the approximations by omitting the less significant interactions and introduce ideas for accounting for the different interaction speeds. This way, we reduce the problem to a case that science can describe mathematically. This procedure is fundamentally different from applying mathematical equations derived for an abstracted case of science to a complex biological phenomenon without validating that the appropriate formalism is used.

Appendix F. Thermodynamics

We can handle atomicity in different abstractions, as charge-less or mass-less points, and can derive laws for a single interaction; see Newton’s law and Coulomb’s law. ’Physical points’ (having both charge and mass) are subject to two underlying interactions; correspondingly, they have less simple laws of forces and motions. The macroscopic features (such as pressure, temperature, potential, and concentration) of systems of physical points are interpreted as statistical quantities, and the scientific discipline of thermodynamics discusses their laws. Its notions drastically differ from those of the classical fields. Here, ’temperature’ is a generalization: a homogeneous distribution means that the physical quantities (such as momentum and energy) have a well-established distribution instead of uniform parameter values. At the same time (in infinitely large volumes), the macroscopic parameters ’concentration’ and ’potential’ (notice that they are based on the single-interaction abstractions ’mass’ and ’charge’, respectively) are simple densities; however, to interpret them, a large number of particles must be considered. For the more careful experimenter, it is evident that this homogeneity is dynamic: the particles’ movement changes it continuously, and it is constant only as a statistical average.
The distribution, however, can be calculated for charge-less and size-less ’heavy points’ only. The interference of the forces, and their effect on two different features of the atomic particles, leads to unusual disciplinary consequences. For discovering the reciprocal relations in thermodynamics, Lars Onsager was awarded the 1968 Nobel Prize in Chemistry. The presentation speech referred to his result as "Onsager’s reciprocal relations represent a further law making a thermodynamic study of irreversible processes possible". In that sense, we provide mathematical equations of the fourth law of motion in thermodynamics. The experimental verification [21] of that law mentions "the well-known difficulty of carrying out these experiments". We can overcome that experimental difficulty using our mathematical relations between electrical and chemical diffusion. The significance of our Eq. (12) is that one can derive the speed of electrodiffusion in electrolytes, which is otherwise not measurable ("hopping in a breeze" [2]: we would have to measure potential changes over distances of the size of the electrodes, with picosecond resolution, while the electrolytic electrodes cause nearly msec delays).
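For reference, the reciprocal relations in question take the standard textbook form (a schematic statement of Onsager's result; the generic fluxes and forces shown here are not the paper's specific Eq. (12)):

```latex
J_i = \sum_j L_{ij} X_j , \qquad L_{ij} = L_{ji} ,
```

where the $J_i$ are thermodynamic fluxes (e.g., particle and charge currents), the $X_j$ are the conjugate forces (e.g., concentration and potential gradients), and the symmetry of the coefficient matrix $L$ is what couples the electrical and diffusive transport channels to each other.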
In our research, the key point is that life (including neural processes) is based mainly on electrodiffusional processes. The contradictions and duality arise mainly from the enormously different interaction speeds of the electric and diffusion processes. In our approach, we divide ion movements into three stages based on the speed of the dominating electric interaction. We introduce diffusion (or potential-less), potential-assisted (based on mutual repulsion only), and potential-accelerated (internal voltage on biological components accelerates the ions) speeds. In some cases, the diffusion and electric processes follow each other in separate phases, so in some phases they can be better approximated as a "net" electrical system, combining "fast" and "slow" currents. We show that the processes can be staged in such a way that, in addition to the dominant interaction, only one more significant interaction remains in a stage, and we can work out a physics-based approximation that a mathematical formalism can describe.
We are at the boundary of the microscopic and macroscopic worlds and must consider different interactions with different speeds. More than one abstraction must be used to describe phenomena that are neither purely microscopic nor macroscopic but show the behavior of both worlds; furthermore, they change their behavior during the process under study. The inappropriate handling of mixed interaction speeds leads to ’extraordinary’ behavior, and one can conclude ’extraordinary’ laws when using the appropriate approximation(s). We need more careful handling (and more ’extraordinary’ laws) if we consider the interactions in a finite volume, with strongly different conditions on its boundaries. We need to conduct case studies and apply per-case approximations to describe such phenomena. It is important to remember that we are dealing with a mixture of macroscopic and microscopic descriptions; this understanding is a crucial aspect of our research.

References

  1. Schrödinger, E. Is life based on the laws of physics? In What is Life?; Canto; Cambridge University Press, 1992; pp. 76–85.
  2. Koch, C. Biophysics of Computation; Oxford University Press: New York; Oxford, 1999. [Google Scholar]
  3. Forcella, D.; Zaanen, J.; Valentinis, D.; van der Marel, D. Electromagnetic properties of viscous charged fluids. Phys. Rev. B 2014, 90, 035143. [Google Scholar] [CrossRef]
  4. Maass, W., Natschläger, T., Markram, H. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation 2002, 14(11), 2531–2560. [Google Scholar] [CrossRef] [PubMed]
  5. Podobnik, B.; Jusup, M.; Tiganj, Z.; Wang, W.-X.; Buldú, J.M.; Stanley, H.E. Biological conservation law as an emerging functionality in dynamical neuronal networks. Proc. Natl. Acad. Sci. USA 2017, 114(45), 11826–11831. [Google Scholar]
  6. Pyenson, L. Hermann Minkowski and Einstein’s special theory of relativity. Archive for History of Exact Sciences 1977, 17, 71–95. [Google Scholar] [CrossRef]
  7. Das, A. The Special Theory of Relativity: a Mathematical Exposition, 1st ed.; Springer, 1993. [Google Scholar]
  8. Hermann Minkowski: Die Grundgleichungen für die electromagnetischen Vorgänge in bewegten Körpern. Nachrichten von der Königlichen Gesellschaft der Wissenschaften zu Göttingen (in German), 53–111 (1908).
  9. Végh, J. Revising the classic computing paradigm and its technological implementations. Informatics 2021, 8(4). [Google Scholar] [CrossRef]
  10. Végh, J.; Berki, Á.J. On the Role of Speed in Technological and Biological Information Transfer for Computations. Acta Biotheoretica 2022, 70(4), 26. [Google Scholar] [CrossRef] [PubMed]
  11. Zheng, Q.; Wei, G.W. Poisson-Boltzmann-Nernst-Planck model. J. Chem. Phys. 2011, 134, 194101. [Google Scholar]
  12. Hodgkin, A.L.; Huxley, A.F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 1952, 117, 500–544. [Google Scholar] [CrossRef] [PubMed]
  13. Newton, I. Philosophiae Naturalis Principia Mathematica. Available online: https://www.britannica.com/topic/Principia.
  14. Einstein, A. On the Electrodynamics of Moving Bodies. Annalen der Physik (in German) 1905, 10(17), 891–921. [Google Scholar] [CrossRef]
  15. Walter, S. Hermann Minkowski and the scandal of spacetime. ESI News 2008, 1(3), 6–8. [Google Scholar]
  16. Schmidgen, H. Of frogs and men: the origins of psychophysiological time experiments, 1850–1865.
  17. The Rise of Experimental Psychology (1850).
  18. Alberts, B., Johnson, A., Lewis, J., al. Molecular Biology of the Cell, 4th ed; Garland Science: New York, 2002. [Google Scholar]
  19. Popov, I., Zhu, Z., Young-Gonzales, A.R.e.a. Search for a Grotthuss mechanism through the observation of proton transfer. Commun Chem 2023, 6. [Google Scholar]
  20. Isakovic, J., Dobbs-Dixon, I., Chaudhury, D.; et al. Modeling of inhomogeneous electromagnetic fields in the nervous system: a novel paradigm in understanding cell interactions, disease etiology and therapy. Sci Rep 2018, 8. [Google Scholar]
  21. Miller, D.G. Thermodynamics of Irreversible Processes: The Experimental Verification of the Onsager Reciprocal Relations. Technical Report, Contract No. W-7405-eng-48; University of California, Lawrence Radiation Laboratory: Livermore, California, 1959.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.
Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.
