Preprint Article — this version is not peer-reviewed; a peer-reviewed article of this preprint also exists.

Algorithm for Describing Neuronal Electric Operation

Submitted: 14 June 2025
Posted: 16 June 2025


Abstract
In the past decades, the development of neuroanatomy and neurophysiology has uncovered many new details of the neuron’s electric operation that require modifying its theoretical model. The development of computing technology enables us to consider the fine details the new model requires, but it necessitates a different approach. As was long suspected, the faithful simulation of biological processes requires accurately mapping biological time to technical computing time. Therefore, the paper focuses on time handling in biology-targeting computations. However, the operation of biology and the physical/mathematical processes in living matter are unusual from the point of view of algorithmic description. Furthermore, the way technical computing works prevents achieving the accuracy needed to reproduce biological operations using computer programs. We also touch on the question of simulating the operation of networks of neurons, contrasted with the operation of spiking artificial neural networks. On the one hand, we use an updated theoretical model that treats neuronal current as charged ions (and so considers thermodynamic effects) and opens the way to explaining the mechanical, optical, and other phenomena that accompany the electric operation. On the other hand, we apply a tool developed for achieving extreme accuracy in simulating high-speed electronic circuits. The algorithm that applies this model and the unusual programming method provides new insights into both neuronal operation and its technical implementation.

1. Introduction

On the occasion of the anniversary of Erwin Schrödinger’s famous book [1], we may recall that "The construction [of living matter] is different from anything we have yet tested in the physical laboratory" and that "It is working in a manner that cannot be reduced to the ordinary laws of physics". Even if we find the ’non-ordinary’ physical laws [2] that describe the phenomena of life, it is by no means certain that we will also have the mathematical methods and algorithms to describe them quantitatively. When Newton discovered that the complicated description of celestial mechanics and the Ptolemaic worldview had to be replaced by the law of universal gravitation, he had to create a new mathematical procedure (an algorithm), which served as the basis for mathematical analysis, in order to make it practical.
Similarly, the theory of neural information processing needs a revision of our notions of information and its processing [3,4,5]. A decade ago, the goal was set: “The Human Brain Project (https://www.humanbrainproject.eu/en/) should lay ...a new model of ...brain research, ...to achieve a new understanding of the brain ...and new brain-like computing technologies.” However, as the summary reports witness [6,7,8], the theory-related advances and tools in the field are confined to providing new graphical interfaces and user-oriented libraries for the decades-old theoretical understanding of neuronal operation, almost entirely neglecting the discoveries of experimental research and the knowledge acquired in that period.
We emphasize that the first principles of science are the same, but the different "construction" of living matter needs different – ’non-ordinary’ – approximations and laws. If one wants to describe the phenomena of life’s processes underlying the functioning of the brain, one must perform a ’non-ordinary’ analysis when developing the abstraction of the processes happening in living matter, and furthermore develop ’non-ordinary’ computing methods, including a somewhat unusual mathematical handling of nature’s phenomena. We formulate problems, provide their numerical solutions, and open the way for mathematics to provide analytical solutions. Our procedure still meets the requirement given by Feynman [9]: "an effective procedure is a set of rules telling you, moment by moment, what to do to achieve a particular end; it is an algorithm."
We have known from the beginning that the brain uses "the language of the brain, not the language of mathematics" [10] and that "whatever the system [of the brain] is, it cannot fail to differ from what we consciously and explicitly consider mathematics" [11], adding that maybe the appropriate mathematical methods have not yet been invented. After many decades and the conclusion of grandiose projects [6,7], we must admit that "Yet for the most part, we still do not understand the brain’s underlying computational logic" [6] and that "We so far lack principles to understand rigorously how computation is done" in living, or active, matter [12]. We also note that we lack principles of computation in large-scale complex computing systems as well [13], especially when using them to imitate neuronal operation.
As [14] classified, "the term neuromorphic encompasses at least three broad communities of researchers, distinguished by whether they aim to emulate neural function (reverse-engineer the brain), simulate neural networks (develop new computational approaches), or engineer new classes of electronic device." We address the first class, touch the second indirectly, and discuss the third in [15]. Many methods and ideas, including some half-understood and misleading ones, are used to imitate the brain’s operation. Unfortunately, all these communities inherited the wrong ideas mentioned above; see [15]. We aim to provide an algorithmic-level description of the genuine processes of neuronal operation (see [15], Section 4) for these fields, facilitating a better understanding. Mainly because "the operation of our brain differs vastly from that of human-made computing systems, both in terms of topology and in the way it processes information" [16], a precise description of how one operation maps to the other is needed.
At the level of abstraction we use, one does not need to consider all biological details since "despite the extraordinary diversity and complexity of neuronal morphology and synaptic connectivity, the nervous systems adopt a number of basic principles" [17], and we proceed along those basic principles, emphasizing the need for precise timing. Although we discuss the operation of single neurons here, we must not forget that "what makes the brain a remarkable information processing organ is not the complexity of its neurons but the fact that it has many elements interconnected in a variety of complex ways" [18]. We must implement the operation and cooperation of the fundamental pieces with great accuracy: any inaccuracy in the biological model or its mapping to computing can lead to far-reaching consequences in their complex interaction. Physiology and neuroscience discovered that "timing of spike matters", giving a way to interpret Hebb’s learning rule [19,20], which usually remains outside the scope of mathematics when it attempts to imitate biological learning. A vital part of what happens in a biological neuron is the temporal coordination of events [21], not only during learning. Although it is not widely known and used, technical computing also has temporal behavior [22,23]. The two types of temporal behavior, however, differ drastically [15]. A misinterpretation of those time courses leads to poor results when attempting to imitate one type of computing with another.

2. Events and Time

2.1. Schrödinger’s Question

In his very accurately formulated question, "How can the events in space and time which take place within the spatial boundary of a living organism be accounted for by physics and chemistry?", Schrödinger also pointed to the importance of time handling in understanding neuronal operation, so it must have a central place in our algorithmic handling. He focused on (at least) these significant points:
  • ’events’: Unlike non-living matter, living matter is dynamic, changing autonomously by its internal laws; we must think differently about it, including making hypotheses and testing them in the labs (including with computing methods). Processes (and not only jumps) happen inside it, and we can observe some characteristic points.
  • ’space and time’: Those characteristic points are significant changes resulting from processes that have material carriers, which change their positions with finite speed, so (unlike in classical science) the events also have the characteristic ’time’ in addition to their ’position’. In biology, the spatiotemporal behavior is implemented by slow ion currents. In other words, instead of ’moments’, sometimes we must consider ’periods’, and in the interest of mathematical description, we imitate the slow processes by closely matching ’instant’ processes.
  • ’living organism’: To describe its dynamic behavior, we must introduce a dynamic description.
  • ’within the spatial boundary’: Laws of physics are usually derived for stand-alone systems, in the sense that the considered system is infinitely far from the rest of the world; also, in the sense that the changes we observe do not significantly change the external world, so its idealized disturbing effect will not change it. In biology, we must consider changing resources.
  • ’accounted for by physics’ [by extraordinary laws]: We are accustomed to abstracting and testing a static attribute, and we derive the ’ordinary’ laws of motion for the ’net’ interactions. In the case of physiology, nature prevents us from testing ’net’ interactions. We must understand that some interactions are non-separable, and we must derive ’non-ordinary’ laws [1,2]. The forces are not unknown, but the known ’ordinary’ laws of motion of physics are about single-speed interactions.
  • ’yet tested in the physical laboratory’ [including physiological ones]: We need to test those ’constructions’ in laboratories, in their actual environment, and in ’working state’. As we did with non-living matter, we need to develop and gradually refine the testing methods and the hypotheses. Moreover, we must not forget that our methods refer to ’states’, while this time we test ’processes’. Not only in measuring them but also in handling them computationally, we need slightly different algorithms.

2.2. Notion and Time of Event

The central idea of simulation is the concept of the "event". We use events almost in the classic sense that something happens at some given time. In biology, "A signal is a physical event that, to the receiver, was not bound to happen at the time or in the way it did" [24]. Similarly, we "define an elementary operation of the brain as a single synaptic event" [25], which must have an accompanying time. Notice that the time in a system working with temporal arguments, whether biological or technical, is per definitionem the time of a received rather than a sent event. Signal propagation in biological systems is slow; we need to distinguish the time when something happens in one component from the time when another component receives a message about what happened. In biological systems, the period between these events is much longer than the period during which the objects compute the information content of the messages; this is why von Neumann said [26] that it would be ’unsound’ to apply his mathematical theory to neural computing. The sending system must know its downstream partners (i.e., which nodes require the result it sends) and the propagation delay added per message to the time of sending. In the case of technical imitation, one must add an appropriately calculated delay to the time of sending. Biological systems employ various mechanisms for synchronization to compensate for the slow signal propagation speed. In technical systems, one must follow the "proper sequencing" principle of computation [15,23,26].
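The convention above (events carry the time of reception, computed as sending time plus a per-connection propagation delay) can be sketched as follows. This is a minimal illustration in C++, not the paper's implementation; all names (`Spike`, `UpstreamNeuron`, `delay_ms`) are ours.

```cpp
#include <map>
#include <vector>

// Hypothetical sketch: an upstream neuron stores one conduction delay per
// downstream connection (the "temporal length" of that axon branch) and
// stamps every outgoing spike with the *arrival* time at the receiver.
struct Spike {
    int    target;        // id of the downstream neuron
    double arrival_time;  // simulated (biological) time of reception, in ms
};

struct UpstreamNeuron {
    std::map<int, double> delay_ms;  // per-connection conduction delay

    // Emitting at time t_send produces one spike per connection; the event
    // time attached is the send time plus that connection's delay.
    std::vector<Spike> fire(double t_send) const {
        std::vector<Spike> out;
        for (const auto& [target, d] : delay_ms)
            out.push_back({target, t_send + d});
        return out;
    }
};
```

The receiver thus never needs to know the sender's clock; the per-connection delay table is the simulated counterpart of the axon's temporal length.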
When simulating biological time in technical systems, we must imitate the transmission time by an appropriate delay time. The upstream neuron must maintain the needed delay times (the temporal length of the axon) per connection. The receiver has no way to distinguish its inputs, as triggering its output spike works on a "first come, first served" basis. The incoming spikes must be received strictly in the order of their arrival time. Exceeding the neuron’s membrane threshold voltage renders the still-unprocessed inputs obsolete: the neuron closes its inputs. By exceeding the threshold, it has prepared its result: how quickly the membrane received the charge it needed. Making an accurate "digital twin" of the anatomic structure of the brain without attaching the temporal length to the spatial length of the axons (the conduction velocities may be very different) is not sufficient, as the failure of the whole-brain simulations witnessed [7,27].
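The "first come, first served" rule and the closing of inputs at threshold can be illustrated with a short sketch (our own simplification, not the authors' code; charge summation stands in for the real membrane dynamics):

```cpp
#include <algorithm>
#include <vector>

// Illustrative sketch: a receiving neuron processes incoming spikes strictly
// in arrival-time order; once the summed contribution exceeds the membrane
// threshold, it "closes its inputs" and the remaining spikes become obsolete.
struct Input { double arrival_time; double charge; };

// Returns the time at which the threshold was exceeded, or a negative value
// if it was never reached. `ignored` counts the obsolete (unprocessed) inputs.
double integrate_first_come(std::vector<Input> inputs,
                            double threshold, int& ignored) {
    std::sort(inputs.begin(), inputs.end(),
              [](const Input& a, const Input& b) {
                  return a.arrival_time < b.arrival_time;
              });
    double v = 0.0;
    for (std::size_t i = 0; i < inputs.size(); ++i) {
        v += inputs[i].charge;
        if (v >= threshold) {               // inputs close here
            ignored = static_cast<int>(inputs.size() - i - 1);
            return inputs[i].arrival_time;  // "how quickly" is the result
        }
    }
    ignored = 0;
    return -1.0;
}
```

Note that the neuron's result is a time, not a value: the moment the threshold was crossed.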
In addition, we know that the propagation time is modulated by the receiving neurons, per synaptic input [21], with the goal of implementing neuronal-level learning. Furthermore, a membrane maintains a per-synaptic-input neuronal memory by maintaining a voltage level above its resting potential. Because of those temporal changes, the result of the neuronal computation (the time of the output spike) is sensitive to correct time handling. "The timing of spikes is important with a precision roughly two orders of magnitude greater than the temporal dynamics of the stimulus" [28]. One must prepare the systems targeting to imitate biological networks to handle the simulated time with such accuracy. That means that when simulating a system with a firing rate of 500 Hz, the system must be prepared to have a time resolution below 20 μs (independently of the assumed coding type). As Figure 3 shows, the non-excitable period of a neuron is shorter than 1 ms, and to integrate the sharp gradient, an integration step of about 2 μs may be needed. Precise time measurement (in the sense that time is measured by its effect) at this scale can be carried out only by using special hardware devices; the "time slices" that ordinary computers use are at least an order of magnitude larger. By economizing on measuring time, one derives very inaccurate temporal dependencies; see, for example, our Figure 4.
For artificial neurons, the needed time sensitivity should be derived from the technical implementation and depends strongly on the coding/decoding method. In our timing analysis, we check the parameters that influence the achievable resolution. It was known from the beginning that using von Neumann’s stored program concept converts the logical dependence of actions into temporal dependence [29]. One must consider that biological computing is inherently parallel, while technical computing is serial (or, in the best case, parallelized sequential), breaking apart events that happen simultaneously.

2.3. Time

2.3.1. Time in Technical Computing

In technical computing, an event is an electric signal that denotes the beginning or end of an elementary operation, or signals transferring control to another place in the program, or that the program’s control has reached a specific instruction in memory. It has a particular ’wall-clock’ time, a ’processor time’, and a ’pseudo-biological time’ (in the library and language we use [30], it is called ’simulation time’). The last one (provided that the computer program is accurate and faithfully reproduces neuronal operation) corresponds to biological time, and this paper is about controlling the simulation time of neuronal operation. The other two depend on many factors, and it is doubtful whether they can be uniquely and proportionally mapped to the simulation time. Not only do the times of elementary operations map to each other with factors differing by several orders of magnitude, but they also include the subtleties of computer operation (system calls, Input/Output (I/O) operations, and bus arbitration times) and the scheduling of computer resources among the simulated biological resources. In addition, the latter two times depend on the type of workload the simulation represents [31,32,33,34,35]. The scaling of computing performance (and, as a consequence, the mapping of computing-related time to biology-related time) is strongly nonlinear.
The notion of time is vital for both biological and electronic computing [21,36]. However, when simulating biological objects with electronic computers, the two notions of time are not identical and, what is worse, not even proportional. The way computers work [37] destroys even the sequence of events. For this reason, a pseudo-time is used, which we call ’simulated time’. The biological processes are divided into segments of varying sizes. At the characteristic points, the period for the biological process is set, and its length is transferred to the scheduler of the engine (importantly, this scheduler sits on top of the scheduler of the operating system and works independently of it). The information includes a callback function that determines the activity to be executed when the time arrives. Many biological processes run simultaneously, and they individually communicate with the scheduler. In this way, the scheduler knows, in chronological order, which biological process wants to use the processor. The scheduler maintains the simulated time as multiples of a ’time resolution’. In this way, the continuous simulated time is mapped to discrete time steps. The time between those discrete steps is considered to be the same.
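The scheduler described above can be sketched as a discrete-event loop over a chronologically ordered queue of callbacks. This is a minimal, self-contained C++ sketch of the semantics only (names such as `Scheduler` and `schedule` are ours; the actual engine is SystemC's):

```cpp
#include <cmath>
#include <functional>
#include <queue>
#include <vector>

// Minimal sketch of a user-level discrete-event scheduler: biological
// processes register callbacks for a given simulated time; the scheduler
// keeps the events in chronological order and rounds every requested time
// to a multiple of the time resolution (continuous -> discrete mapping).
class Scheduler {
    struct Event {
        long tick;                     // simulated time / resolution
        std::function<void()> action;  // callback run when its time arrives
        bool operator>(const Event& o) const { return tick > o.tick; }
    };
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> q;
    double resolution_;  // e.g. 1e-6 s (1 us)
    long   now_ = 0;     // current simulated time, in ticks
public:
    explicit Scheduler(double resolution) : resolution_(resolution) {}
    void schedule(double t, std::function<void()> action) {
        q.push({std::lround(t / resolution_), std::move(action)});
    }
    void run() {                       // drain the queue in time order
        while (!q.empty()) {
            Event e = q.top(); q.pop();
            now_ = e.tick;             // advance simulated time
            e.action();
        }
    }
    double now() const { return now_ * resolution_; }
};
```

The essential point is that simulated time only advances when the queue says so; how long the processor takes to execute a callback is invisible on the simulated time scale.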
When comparing continuous time values of any kind, it is practical to define a tolerance (called "time resolution"), a period within which two values are considered equal. A shorter period makes the computation more accurate, but it requires more computing capacity. The time course of the considered biological quantities may also require a varying time resolution for the simulation. In this sense, the per-system "grid time" is also a kind of time resolution. In the simulator, we use a per-neuron, varying "heartbeat time", which is close to the actual time resolution of the simulated biological system, and a (much) smaller technical time resolution used by the simulator’s scheduler.
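The tolerance-based equality of times amounts to a one-line helper (an assumption-level illustration, not library code):

```cpp
#include <cmath>

// Two continuous time values are treated as equal when they fall within the
// chosen time resolution. A smaller resolution means stricter equality and,
// in a simulation, more computing work.
bool same_time(double t1, double t2, double resolution) {
    return std::fabs(t1 - t2) < resolution;
}
```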

2.3.2. Time Scales

The basic issue with simulating biology by technological means is that three time scales exist, and they are connected only by events (i.e., by non-temporal characteristics of what happens). We have the wall-clock time, that is, how much time passes between the events on the wall clock; the processor time needed to imitate the operation resulting in the events; and the time the genuine biological system needs to operate between those events. The actual computer processing time is not directly measurable, so a proportionality (implying a homogeneous workload and uniform resource utilization) between the processing and wall-clock times is usually assumed. After that, the wall-clock time is used as ’the time’ or ’the computing time’, with its empirical ratio to the likewise empirical biological time. The lengths of the computing and biological operation times are not proportional at all. Their ratio depends on several factors, including the technical parameters and architecture of the computing system, as well as the complexity and detail of the computation.
This empirical ratio is only an integral quantity (a long-term average); it cannot be used directly to align the simulated events. Since the biological neurons communicate with each other, and at a given communication time the equivalent computing time may be different, the time scales of the technical neurons must be aligned by some method, at least approximately. The oldest method (see Algorithm 1) is to use "time slices" (also called "grid time"), i.e., to divide the biological time into slices and to perform the simulating calculations only for that slice period in all neurons, then stop. That is, at these synchronization points, all the neuronal computations stop, send the results of their computations to their fellow neurons, and receive their fellows’ similar data.
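The skeleton of the time-slice scheme can be sketched as follows (our simplification of the idea behind Algorithm 1; the neuron's internals are reduced to a counter, and `GridNeuron`/`run_grid` are our names):

```cpp
#include <vector>

// Sketch of the classic "time slice" (grid time) scheme: every neuron
// integrates one slice, then all neurons synchronize (exchange spikes)
// before the next slice starts.
struct GridNeuron {
    int slices_done = 0;
    void integrate_slice() { ++slices_done; }  // stand-in for the real solver
    void exchange_spikes() { /* deliver/receive buffered spikes here */ }
};

// Run `n_slices` slices of biological length `dt` over the whole population.
double run_grid(std::vector<GridNeuron>& pop, int n_slices, double dt) {
    double t = 0.0;
    for (int s = 0; s < n_slices; ++s) {
        for (auto& n : pop) n.integrate_slice();  // all compute one slice
        for (auto& n : pop) n.exchange_spikes();  // then all synchronize
        t += dt;
    }
    return t;  // total simulated biological time
}
```

The synchronization point at the end of every slice is exactly where all neuronal computations stop and exchange their data.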
The ratio of computing to biological times is called real-time performance (i.e., how closely the computing time matches the biological time). To overcome the limitation stemming from the computing workload [31,32,33,34,35], typically a 1 ms ’integration time step’ (measured on the biological scale) is used. To study biological processes with finer time resolution, one can also use a smaller step time. That smaller step time, however, disproportionately prolongs the computing time (slows down the simulation). Furthermore, as Figure 3 shows, the gradients change steeply, in some phases needing integration step sizes near 1 μs. Figure 4 depicts the effect of the time resolution on the accuracy of deriving a phase diagram.
Natively, computing can only account for the processing time after the fact, but cannot control it. It is hard to find an integration time step that does not waste computing time and still provides sufficiently accurate results.

2.3.3. Aligning the Time Scales

The deviation and non-linearity between the simulation’s wall-clock time and the simulated biological time have been observed, and various methods for mitigating their effects were introduced. The simplest one is to compare the total simulated time to the total wall-clock time and assume that, on average, a constant factor relates the biological time to the simulation time. This is, however, not valid for the individual events: they can change their distance and even their order; the time course of the events may differ drastically. The usual (although high-speed) serial bus makes the transfer times nondeterministic [15,23]. In vast technical computing systems, performing all scheduled transfers may require up to dozens of minutes, as shown in Figure 1 of [38]. In other words, the transfer time varies in a quasi-random way between a few nanoseconds and several minutes, so in the worst case, a period between two events that is milliseconds long on the biological time scale can amount to minutes on the computing time scale. To avoid practically never-ending computations, "spikes are processed as they come in and are dropped if the receiving process is busy over several delivery cycles" [39]. These are clear signs that, mainly due to the differing operating principles, vast technical systems cannot handle the amount and/or density of events that biological simulations need. The presently available serial systems do not enable that performance to be reached [33]. "The idea of using the popular shared bus to implement the communication medium is no longer acceptable, mainly due to its high contention." [40]

2.3.4. Time Resolution

The way we perform the simulation is that we define events (such as the beginning or end of computing, receiving an input, etc.) and perform the actions that happen at the same (biological) time. The "same time" in this context means that the simulated times are within a so-called "time resolution". The biological actions are implemented as a kind of callback function that gets called when the corresponding (simulated) time arrives. Choosing a smaller time resolution results in slightly more accurate results at the price of much more computing time. Our experience shows that, given that the time derivative of the rush-in voltage is a very steep function (see Section 5.4), made even steeper by adding the Axon Initial Segment (AIS) current-related derivative, the step should be as low as 2 μs to keep the per-step change of the membrane voltage V_M around 1 mV (compare this to the 1 ms integration step commonly used by neuronal simulators [39]).
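The constraint that V_M should change by no more than about 1 mV per step suggests bounding the step size by dV_max / |dV_M/dt|, clamped between the scheduler's resolution and a relaxed heartbeat. The following sketch is illustrative only; the function name and default parameters are our assumptions, chosen to match the figures quoted above (1 μs resolution, 1 ms default step):

```cpp
#include <algorithm>
#include <cmath>

// Illustrative step-size bound: from the constraint that the membrane
// voltage should not change by more than dV_max (~1 mV) within one step,
// the step can be bounded by dV_max / |dV/dt|, clamped between the
// scheduler's time resolution and the relaxed default heartbeat.
double step_for_gradient(double dVdt,            // current |dV_M/dt|, in V/s
                         double dV_max = 1e-3,   // ~1 mV allowed per step
                         double min_step = 1e-6, // time resolution, 1 us
                         double max_step = 1e-3) // relaxed heartbeat, 1 ms
{
    if (std::fabs(dVdt) < 1e-12) return max_step;  // flat: take the big step
    double step = dV_max / std::fabs(dVdt);
    return std::clamp(step, min_step, max_step);
}
```

With this rule, a rush-in gradient of 500 V/s yields exactly the 2 μs step quoted above.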

2.3.5. Time Stamping

When using timestamping (i.e., attaching the wall-clock times of the happenings to the events), there are two bad choices. Option one is for neurons to have a (biological) sending-time-ordered input queue and to begin processing when all partner neurons have sent their messages. (The neurons form a linear chain, but the computing scheduler does not know about that chaining.) That needs a synchrony signal, leading to severe performance loss. Option two is processing the messages as soon as they arrive, which makes it possible to give feedback to a neuron when it processes a message with a timestamp referring to a biologically earlier time but received physically later. In all cases, it is worth considering whether their effect exceeds some tolerance level compared to the last state; in this way, one can also mitigate the need for communication.
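Option two (process on arrival, flag biologically-earlier-but-physically-later messages) can be sketched as follows; the types and names are our illustration, not the paper's code:

```cpp
// Sketch of "option two": messages are processed as they arrive, and a
// message whose (biological) timestamp precedes the last processed one is
// flagged, so corrective feedback could be issued to the sender.
struct Message { double bio_timestamp; };

class OnArrivalProcessor {
    double last_processed_ = -1.0;   // biological time of last accepted msg
public:
    int out_of_order = 0;  // physically-late, biologically-early messages
    void process(const Message& m) {
        if (m.bio_timestamp < last_processed_)
            ++out_of_order;          // feedback would be issued here
        else
            last_processed_ = m.bio_timestamp;
    }
};
```

A tolerance check (as in the last sentence above) could be added so that only out-of-order messages whose effect exceeds the tolerance trigger feedback.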
When training an Artificial Neural Network (ANN), one starts by showing an input, and the system begins to work. It uses the synaptic weights valid before that input was shown. Its initial weights may be randomized or correspond to the previous input data. The system sends correct signals, but a receiver processes a signal only after it is physically delivered (whether or not the message envelope contains a timestamp). It may start to adjust its weights toward a state that is not yet defined (in the case of a cyclic operation, toward the state in the previous cycle; this is fine in continuously working biological systems, but not when a technical system has just started to work, and especially not when upgrading with a new piece of information, leading to ’catastrophic forgetting’). In their operation (essentially an iteration), without synchronization, the actors in most cases use wrong input signals; they surely adjust their weights to false signals initially, and with significant time delay at later stages. Neglecting networks’ temporal behavior, in a lucky case, the system will converge, but painfully slowly. Or not at all.
Synchronization is a must, even in ANNs. Care must be taken when using accelerators, feedback, and recurrent networks. Time matters; furthermore, the computing theory is valid only for linear sequential computing. Providing faster feedback in order to compute neuronal results faster cannot help: the received feedback delivers state variables that were valid long ago. In biology, spiking is also a ’look at me’ signal: the feedback shall be addressed to the neuron that caused the change (see Hebbian learning). Without considering the spiking time (organizing neuron polling in a software cycle), neurons receive feedback about ’the effect of all fellow neurons, including me’, and adjust all weights of neurons using the same metric.

2.3.6. Simulating Time

The software we use is SystemC, a special C++-based library with a user-level scheduler [30,41]. The primary field of application of the software is preparing electronic designs, so some formal elements must be considered. Those elements are typically confined to low-level modules, and the user-accessible modules resemble standard C++ modules, although their names and descriptions may reflect their specialties.
In the SystemC engine, there exists a time resolution, a small period within which all events ’happen at the same time’. To use all its features, the complete Reference Guide [41] may be needed for developing the code, but when using the well-written core of the package, it is usually sufficient to study the textbook [30]. Our simulator has a time resolution of 1 μs.
When the next element of the event queue follows (see Section 4.1.1), the scheduler advances the simulated time to the requested action time of the actual item. If more than one action is scheduled at the same time, all those actions are performed, in an arbitrary order. At the end of the period, the callback function notifies the biological process that the requested timing period is over (meaning that the requested computing activity was performed), and the scheduler takes the following item in the queue.
In this way, the simulated biological objects work on the same time scale. Through the elapsed time and/or by notifying each other, they can cooperate. Although the period of the simulated time and the period during which the computer works out the simulated task are vastly disproportionate, the simulation is perfectly timed. The simulated process requests that its phases be scheduled at the biologically correct time, and they are executed at a processor time when the processor has free activity time.

3. Spiking and Information

One of the most eye-catching features (also historically) of biological neurons is that they receive and issue intense charge pulses called spikes. It has been known from the beginning that "There is, however, another parameter of our neuronal signal. The time interval between the successive impulses can vary" [24]. Although the role of time has been confirmed in several studies, and there is general agreement that time is at the heart of communication inside systems of neurons, there is no agreed opinion about how time is coded. It has even been questioned whether ’coding’ is an acceptable metaphor for neurons at all [42].

3.1. Information Coding

We must distinguish "coding" at the perception level from "coding" at the neural communication level. "Information that has been coded [at perception level] must at some point be decoded also. One suspects, then, that somewhere within the nervous system there is another interface, or boundary, but not necessarily a geometrical surface, where `code’ becomes `image’” [43]. We use "coding" at the neuronal level, that is, for how the spike pulses convey information. Given that the information is temporally and spatially distributed, and moreover most of the preconditions for the applicability of Shannon’s communication theory [44] are not fulfilled, it is unrealistic to expect a quantitative relation between spikes and the information they deliver.
The information must be coded (and is converted to a signal after coding), and if so, it must be correspondingly decoded and converted back to information. Shannon [46] also warned that this coding/decoding must not be confused with the observation that "the human being acts in some situations like an ideal decoder, this is an experimental and not a mathematical fact". Notice that a human observer sees a signal, not the coded information. Up to this point, we can assume that a presynaptic neuron produces information and the system transfers it to the postsynaptic neuron using an axon. Notice also that Shannon’s model requires cutting a neuron logically into a receiver (the synapses and the membrane) and a transmitter (the hillock and AIS) unit, in line with how von Neumann introduced input and output sections.
In neuroscience, "the interpretational gamut implies hundreds, perhaps thousands, of different possible neuronal versions of Shannon’s general communication system" [45]. As we discussed in [3], neuronal communication does not follow Shannon’s model of communication. In this way, some "information loss" surely takes place. If we consider that the Inter-Spike Interval (ISI) carries information and that a neuron takes into consideration in its "computing" only the spikes that arrive within an appropriate time window [15], some information is lost again. Even the result of the computation, a single output spike, may be issued before either of the input spikes has been entirely delivered – one more reason why we should revisit the notion of information.

3.2. Spiking

Similarly, definitions such as “Multiple neuromorphic systems use Spiking Neural Networks (SNN) to perform computation in a way that is inspired by concepts learned about the human brain” [47] and “SNNs are artificial networks made up of neurons that fire a pulse, or spike, once the accumulated value of the inputs to the neuron exceeds a threshold” [47] are misleading. That review “looks at how each of these systems solved the challenges of forming packets with spiking information and how these packets are routed within the system”. In other words, the technically needed “servo” mechanism, a well-observable symptom of neural operation, inspired the “idea of spiking networks”. The idea lacks knowledge of neurons’ information handling and replaces the information content distributed in space and time with well-located timestamps in packets, the continuous analog signal transfer with discrete packet delivery, the dedicated wiring of neural systems with bus-centered transfer involving arbitration and packet routing, and the continuous biological operating time with a mixture of digital quanta of computer processing plus transfer time. Even the order of the timestamps conveyed in spikes (not to mention the information they deliver) does not resemble biology; furthermore, due to the vast need for communication, the overwhelming majority of the generated events must be killed [39] without delivering and processing them, to provide a reasonable operating time and to avoid the collapse [48] of the communication system. In the remaining aspects, the system may provide an excellent imitation of the brain.
In addition to wasting valuable time contending for the exclusive use of the single serial bus, utilization of the bit width (and package size) is also inefficient. "Temporal precision encodes the information in the spikes and is estimated to need up to three bits/spike" [49]. The time-unaware timestamping of artificial spikes does not encode temporal precision, given that the message comprises the sending time coordinate of the event instead of the time relative to a local synchrony signal (a base frequency); furthermore, the conduction time is not included in the message. Time stamping misses the biological method of coding information [3].

3.3. Neuronal Learning

The discussed operation (from a distance) resembles actual neural operation and is essentially data-controlled. Biological neurons learn autonomously: they adjust their synaptic sensitivity based on the timing relations of the data arriving at their inputs. On the contrary, one must train artificial neurons, typically using the back-propagation method; see [50] and our figures about bus transfer in [15]. During that operation, the program (which works in instruction-driven mode) supervises the learning process, performs computations, and sends its results through the network and the bus in opposite directions. In the figure, the directions of data transmission are opposite for the ’forward’ and ’backward’ passes, while time is unidirectional. One can operate in the ’backward’ direction only when the computation in the ’forward’ direction has already terminated (the time windows cannot overlap, and there is no transfer time window between them). Furthermore, the above restriction is also valid for the computations inside the layers.
However, the case of ANNs is different. In an ANN, the bus delivers the signals, and the interconnection type and sequence depend on many factors (ranging from the kind of task to the actual inputs). During the training of ANNs, their feedback complicates the case. The only fixed aspect of timing is that a neuronal input inevitably arrives only after a partner has produced it. However, the time ordering of delivered events may differ from the assumed one: it depends on the technical parameters of delivery rather than the logic that generates them.

3.4. Information Density

As discussed in [3], when sending messages with finite speed, the message comprises temporal and spatial components. The statement applies to both technical and biological computing. In technical computing, the temporal contribution is unintended and unwanted, so it is suppressed as much as possible. In biological computing, the arrival of a spike is only a synchronization signal (in line with Shannon: "if a source can produce only one particular message, its entropy is zero" [44]). Deriving experimental conclusions is hard: "These observations on the encoding of naturalistic stimuli cannot be understood by extrapolation from quasi-static experiments, nor do such experiments provide any hint of the timing and counting accuracy that the brain can achieve." [51] In biological systems, evidence shows that more than one bit (some guesses, see [49], ch. 6, suggest 3 bits) is transferred in a spike (modeled as a bit train). When the number of bits in a message can change seriously (and gets significantly higher than one or two bits), the proportionality between the number of spikes and the conveyed information ceases.
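Shannon's point quoted above can be made concrete in a few lines of Python. The eight-valued ISI alphabet below is a hypothetical example, chosen only because it yields the roughly three bits/spike figure quoted from [49]; it is not a claim about actual neuronal coding.

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy H = -sum(p * log2 p), in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A source that can produce only one particular message has zero entropy,
# exactly as the Shannon quote above states.
assert entropy_bits([1.0]) == 0.0

# If a spike could encode one of eight equally likely ISI values, it would
# carry 3 bits -- the order of magnitude of the guess cited from [49].
print(entropy_bits([1 / 8] * 8))  # → 3.0
```

The example also shows why spike counting is a wrong merit function: once the per-spike entropy departs from a fixed constant, the spike count alone no longer measures the conveyed information.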
An unwanted synergy between computing and neuroscience is the hypothesis that information transfer and the phenomenon of ’spiking’ (sending short identical pulses) are closely related. A study [51] investigated the connection between spike firing patterns and the visual information they convey. They found that "constant-velocity motion produces irregular spike firing patterns, and spike counts typically have a variance comparable to the mean". In statistics, a large variance indicates that the numbers in a set are far from the mean and from one another. In other words, in the actual case of a constant velocity, there is a very weak (if any) dependence between the number of spikes and the motion, clearly showing that some other parameter of the transferred signal delivers the information. The experimenters noticed this, and they looked for some other, "more natural" case: "But more natural, time-dependent input signals yield patterns of spikes that are much more reproducible". Simulating nature-defined encoding is a real challenge, but human encoding is also made hard by human-made technical systems. Presumably, in the "much more reproducible" scenario, the number of bits per spike in the analog component is much smaller than in the "less natural" case, so using a wrong merit function for the information meets the experimenters’ (false) expectation much better.
The fallacy of selecting a wrong merit function leads directly to the conclusion [52] that "energy efficiency [of neural transfer] falls by well over 90%" (many other similar examples could be cited, but this one shows the mistake explicitly). By defining their measuring method as "We calculated the firing rate of the spiking neuron model" and measuring "spikes/s" [51,53] (assuming that the neuron is a repeater and the spike carries on/off information), they drew a false parallel with electronic circuits.

4. Technical Aspects

One can separate the subject of simulating neural networks into two independent segments: what model is behind producing spikes (and what information the spikes deliver), and how the system (principally and technically) takes the delivered information into account.
The numerous, substantial differences between technical and biological computing make the simulation of neural operation by technical means a real challenge. A fundamental issue is that the biological time of the events is not directly proportional to the computer processing time. Given that the processing time comprises computing time plus transfer time, timestamping cannot provide a solution for time handling. The timestamp records when an event happens in the computing process instead of its biological time. Given that several neurons share the computing resources and the computer’s execution is sequential, events happening at a biologically identical time will generate timestamps at technically different times. Furthermore, for the same reason, the generated events will be considered again at different times, with a (technically random) delay compared to the time of generating those events and to each other. Furthermore, the propagation time through the axons cannot be included. These effects are late consequences of omitting the transfer time in computing science.
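The timestamping problem described above can be demonstrated with a minimal, hypothetical Python sketch: three events that share one biological time receive three different technical timestamps, merely because the sequential simulator processes them one after another. The per-event delay is a stand-in for the per-neuron computing work.

```python
import time

# Three neurons "fire" at the same biological time (5.0 ms), but a
# sequential simulator stamps them at different wall-clock times.
biological_time = 5.0  # ms, identical for all three events

wallclock_stamps = []
for neuron_id in range(3):
    time.sleep(0.001)  # stand-in for the per-neuron computing work
    wallclock_stamps.append((neuron_id, biological_time, time.perf_counter()))

# The biological times agree; the technical timestamps do not.
assert len({bio for _, bio, _ in wallclock_stamps}) == 1
assert len({wc for _, _, wc in wallclock_stamps}) == 3
```

Any consumer that orders events by these wall-clock stamps therefore imposes a (technically random) ordering on biologically simultaneous events.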
The central idea of simulation is the concept of "event". We use the term ’event’ in the everyday sense, referring to something that happens at a specific time. In technical computing, an event is an electric signal that marks the beginning or end of an elementary operation or signals transferring control to another place in the program. In biology, "A signal is a physical event that, to the receiver, was not bound to happen at the time or in the way it did." [24] Similarly, we "define an elementary operation of the brain as a single synaptic event" [54].
The basic issue with simulating biology by technology is that two time scales are used, and they are connected by events only. The lengths of the computing and biological operations are not proportional at all. Furthermore, neuronal operations happen simultaneously, while technical operations work in a sequential way (or maybe in a parallelized sequential way) that breaks the simultaneity of events.
The way we perform simulation is that we define events (such as the beginning or end of computing, receiving an input, etc.) and perform the actions that happen at the same (biological) time. "Same time" in this context means that the simulated times are within a so-called "time resolution". The biological actions are implemented as a kind of callback function that is called when the corresponding (simulated) time arrives. Choosing a smaller time resolution results in slightly more accurate results at the price of much more computing time.
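The described scheme can be sketched as follows. The function names and the 0.1 ms resolution are illustrative assumptions, not our implementation: callbacks registered for a biological time fire together when their times fall within one "time resolution" bin.

```python
# Minimal sketch of resolution-binned callback dispatch (illustrative).
TIME_RESOLUTION = 0.1  # ms; a smaller value costs more computing time

def run(events, duration):
    """events: list of (biological_time_ms, callback) pairs."""
    t = 0.0
    while t < duration:
        for when, callback in events:
            # "same time" means: within the current resolution bin
            if t <= when < t + TIME_RESOLUTION:
                callback(when)
        t += TIME_RESOLUTION

fired = []
run([(0.25, fired.append), (0.26, fired.append), (0.9, fired.append)], 1.0)
print(fired)  # the 0.25 and 0.26 events land in the same bin
```

Note that the 0.25 ms and 0.26 ms events are treated as simultaneous at this resolution; only a smaller `TIME_RESOLUTION` would separate them, at a proportionally higher computing cost.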

4.1. Neural Connectivity

It is typical that several spikes arrive at a neuron, and only one spike is produced (although it branches toward several downstream neurons): “A single neuron may receive inputs from up to 15,000–20,000 neurons and may transmit a signal to 40,000–60,000 other neurons” [55]. “In the terminology of communication theory and information theory, [a neuron] is a multiaccess, partially degraded broadcast channel that performs computations on data received at thousands of input terminals and transmits information to thousands of output terminals by means of a time-continuous version of pulse position. Moreover, [a neuron] engages in an extreme form of network coding; it does not store or forward the information it receives but rather fastidiously computes a certain functional of the union of all its input spike trains which it then conveys to a multiplicity of select recipients" [56].
In small-scale artificial neuronal networks, all neurons in a layer receive input from the neurons of the upstream layer and provide input for the downstream layer. In an extensive system, this method requires an enormous amount of communication actions.
Algorithm 1 The basic clock-driven algorithm [57], Figure 1
  • t ← 0
  • while t < duration do
  •     for every neuron do
  •         // State updates
  •         Process incoming spikes
  •         Advance neuron dynamics
  •     end for
  •     for every neuron do
  •         // Propagation of spikes
  •         if V_M > V_threshold then
  •            reset neuron
  •            for every connection do
  •                send spike
  •            end for
  •         end if
  •     end for
  •     t ← t + dt // Advance to next time by fixed grid time
  • end while
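As an illustration, the clock-driven loop of Algorithm 1 can be written as runnable Python. The leaky integrate-and-fire update and all numeric parameters below are illustrative assumptions, not the cited implementation; the point is the structure: every neuron is updated on every fixed grid step, whether or not anything happened.

```python
# Runnable sketch of the clock-driven algorithm (illustrative parameters).
import random

random.seed(0)                           # deterministic toy input
N, DT, DURATION = 5, 0.1, 10.0           # neurons, grid step (ms), run (ms)
V_TH, V_RESET, LEAK = 1.0, 0.0, 0.95
v = [0.0] * N                            # membrane potentials
inbox = [0.0] * N                        # charge arriving this grid step
spikes = []

t = 0.0
while t < DURATION:
    for i in range(N):                   # state updates
        v[i] = v[i] * LEAK + inbox[i] + random.uniform(0.0, 0.2)
        inbox[i] = 0.0
    for i in range(N):                   # propagation of spikes
        if v[i] > V_TH:
            v[i] = V_RESET               # reset neuron
            spikes.append((t, i))
            for j in range(N):           # send spike to every connection
                if j != i:
                    inbox[j] += 0.1
    t += DT                              # advance by the fixed grid time

print(len(spikes), "spikes, all quantized to the fixed time grid")
```

Notice that all spike times are multiples of `DT`: the grid time is also the best achievable time resolution, which is exactly the limitation discussed later in Section 4.2.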
Algorithm 2 The basic event-driven algorithm with instantaneous synaptic interactions [57], Figure 2
  • while queue not empty and t < duration do
  •     Extract event with lowest timing
  •     (event = timing t, target i, weight w)
  •     Compute state of neuron i at time t
  •     Update state of neuron i (+w)
  •     if V_M > V_threshold then
  •         for each connection i → j do
  •            Insert event in the queue
  •         end for
  •         Reset neuron i
  •     end if
  • end while
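A runnable Python counterpart of Algorithm 2, using `heapq` as the priority queue. The exponential decay model, the conduction delay, and the weights are illustrative assumptions; the structure mirrors the pseudocode: extract the earliest event, compute the target's state at that time, and insert new events on firing.

```python
# Runnable sketch of the event-driven algorithm (illustrative parameters).
import heapq
import math

N, DURATION, V_TH, W, TAU = 4, 50.0, 1.0, 0.4, 5.0
DELAY = 1.0                               # assumed conduction delay (ms)
v = [0.0] * N                             # membrane potentials
last_update = [0.0] * N                   # time of last state computation
# seed events: (timing t, target i, weight w)
queue = [(0.5, i, 0.6) for i in range(N)] + [(0.7, i, 0.6) for i in range(N)]
heapq.heapify(queue)
spikes = []

while queue and queue[0][0] < DURATION:
    t, i, w = heapq.heappop(queue)        # extract event with lowest timing
    # compute state of neuron i at time t (explicit solution: exponential decay)
    v[i] *= math.exp(-(t - last_update[i]) / TAU)
    last_update[i] = t
    v[i] += w                             # update state of neuron i (+w)
    if v[i] > V_TH:
        for j in range(N):                # insert event for each connection
            if j != i:
                heapq.heappush(queue, (t + DELAY, j, W))
        v[i] = 0.0                        # reset neuron i
        spikes.append((t, i))

print(len(spikes), "spikes, processed strictly in event order")
```

The decay line is the "explicit solution of the differential equations" that the review [57] notes event-driven algorithms implicitly assume; without such a closed form, the state at an arbitrary event time cannot be computed directly.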
The blue lines inserted into Algorithm 3 illustrate the usually neglected yet necessary and time-consuming actions that were not considered in the algorithm design. When calculating algorithmic time, they are considered zero-time contributions, but in real computing systems they can become dominant. In small-scale (toy) systems, they can be neglected. As the system grows, performance saturates, as measurements [35,58] and theory [31,33] have shown.

4.1.1. Queue Handling

Attempts to simulate spiking biological neural networks started early, and different types of simulation strategies and algorithms have been implemented; for an excellent review, see [57]. As experienced, the precision of those simulations sensitively depends on the applied strategies, particularly in cases where plasticity depends on the exact timing of the spikes. It was found that the appropriateness (or applicability) of a method or strategy sensitively depends on the given task. The analysis method can be applied to artificial spiking neural networks, too. The paper discusses the basic algorithms used to compute neural networks (notice that computing the individual neurons according to different models is not detailed).
We detail the calculation algorithm in one single node of the network, as it is the vital component of the correctness of the biological network. Given that "overall, it appears that the crucial component in general event-driven algorithms is the queue management" [57], we used the queue of SystemC [30,41]. This way, we use at least one time event and at most one event per neuron in the queue. Furthermore, as [57] notes, "event-driven algorithms implicitly assume that we can calculate the state of a neuron at any given time, i.e., we have an explicit solution of the differential equations". Our approach is distinct: we employ a heartbeat-based calculation method, enabling us to determine a neuron’s state at any given time. Our method is "spiking" in the sense that a neuron sends a message only when the calculation is complete (in this sense, it reduces communication to the absolute minimum). However, it is time-aware (it works on the actual biological time) at the price of more scheduling. Of course, it can also utilize single-shot operating neurons that correspond to step-like changes rather than slow processes. We note the following issues with the published algorithms:
  • the two latter algorithms comprise a deadlock, as all neurons expect the others to compute their inputs (or they work with values calculated in previous cycles, mixing "this" and "previous" values)
  • after processing a spike initiation, the membrane potential is reset, excluding the important role of local neuronal memory (and also learning)
  • the algorithms are optimized for single-thread processing by applying a single event queue
Algorithm 3 The basic event-driven algorithm with non-instantaneous synaptic interactions [57], Figure 3
  • for every neuron i do
  •     Compute timing of next spike
  •     Insert event in priority queue
  •     Queue handling and I/O
  • end for
  • while queue not empty and t < duration do
  •     Queue handling and I/O
  •     Extract event with lowest timing
  •     (event = timing t, neuron i)
  •     Schedule neuron
  •     Compute state of neuron i at time t
  •     Reset membrane potential
  •     Compute timing of next spike
  •     Insert event in queue
  •     Queue handling and I/O
  •     for every connection i → j do
  •         Compute state of neuron j at time t
  •         Schedule neuron
  •         Change state with weight w(i,j)
  •         Compute timing of next spike
  •         Insert/change/suppress event in the queue
  •         Queue handling and I/O
  •     end for
  • end while
We must consider the conclusion [57] that "these results support the argument that the speed of neuronal simulations should not be the sole criterion for the evaluation of simulation tools, but must complement an evaluation of their exactness." Following [57], we can assume that enqueue/dequeue operations on events can be performed relatively quickly; however, queue management remains a crucial component in general event-driven algorithms. Moreover, as the number of events in the queue grows, the hit ratio of finding an event in the cache quickly decreases. Given that the memory addresses of neurons found in the queue are "random" for the cache, data locality is no longer effective, and practically the processor can access all neuron data only after a cache miss. The I/O operations are protected-mode instructions, so a single I/O command must be accompanied by context switchings, there and back, with an offset of about 20,000 machine instructions [59,60]. This offset is why "artificial intelligence, ...it’s the most disruptive workload from an I/O pattern perspective."1

4.1.2. Limiting Computing Time

One must drop some result/feedback events because of long queuing to provide seemingly higher performance in excessive systems. The feedback the neuron receives establishes a logical dependence that the physical implementation of the computing system converts into a temporal dependence [22]. The feedback arrives later, so those messages stand at the end of the queue. Because of this, it is highly probable that they are dropped if the receiving process is busy over several delivery cycles [39]. In excessive systems, undefined inputs may establish the feedback, and the system may neglect (or maybe correct) the feedback. Another danger is that simulating an asynchronous system on one that uses at least one centrally synchronized component introduces a hidden clock signal [33], which degrades the system’s computing efficiency by orders of magnitude.
Paper [61] provides an excellent experimental proof of the statements above: "Yet the task of training such networks remains a challenging optimization problem. Several related problems arise: very long training time (several weeks on modern computers, for some problems), the potential for over-fitting (whereby the learned function is too specific to the training data and generalizes poorly to unseen data), and more technically, the vanishing gradient problem." "The immediate effect of activating fewer units is that propagating information through the network will be faster, both at training and test time." The intention to compute feedback faster has its price: "As λ_s increases, the running time decreases, but so does performance." Introducing the spatiotemporal behavior of ANNs improved the efficacy of video analysis significantly [62]. Investigations in the time domain directly confirmed the role of time (mismatching): "The CNN models are more sensitive to low-frequency channels than high-frequency channels" [63]. The feedback can follow slow changes with less difficulty than faster changes.

4.1.3. Sharing Processing Units

To speed up the overall execution, several threads and several cores share the task. The different computing times in the cores and threads are unrelated to the biological scale, so the programmer divides the biological time scale into relatively short periods and forcefully stops the simulation in a given thread when the activity up to that biological time has been simulated entirely. That means that the threads must wait for each other at the "meeting points"; furthermore, they must mutually inform each other. This is accomplished by sending the relevant calculated result(s) to each other and receiving the results of the other computing units as input parameters. This idea is the exact equivalent of the central clocking the processors use. Typically, a 1 ms "grid time" is used. As we discussed [33], the final result is the same as using a very high-performance processor under the control of a very low-frequency (1 kHz) central clock. As experienced, the operating characteristics correspond to the theoretical expectations [31,32]. The system can load the memory with content in linear time [64] (in this sense, as designed, it "can simulate one billion neurons"). However, when those simulated neurons need to communicate, fewer than 80,000 neurons can be operated. It means that, at this special workload, the computing efficiency is less than 10^-4. It is well understood theoretically [31,32] (the first observation and theoretical description is more than three decades old [65]), and it was also experimentally confirmed [34,58], that for such workload types only a couple of dozen processors can be used efficiently. It is also known that limiting the number of communication messages, for example for real-time operation [63], decreases operation quality. In vast systems, collecting the data from all participating processors takes dozens of minutes [38].
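The "meeting point" mechanism described above behaves like a thread barrier: the fastest thread always waits for the slowest, exactly as if a central clock were ticking at the grid-time frequency. A minimal sketch, with arbitrary thread and period counts:

```python
# Minimal sketch of grid-time synchronization via a barrier (illustrative).
import threading

N_THREADS, N_PERIODS = 4, 3
barrier = threading.Barrier(N_THREADS)
log = []
log_lock = threading.Lock()

def worker(thread_id):
    for period in range(N_PERIODS):
        # ... simulate this thread's neurons up to (period + 1) * grid time ...
        with log_lock:
            log.append((period, thread_id))
        barrier.wait()        # no thread proceeds until all have arrived

threads = [threading.Thread(target=worker, args=(k,)) for k in range(N_THREADS)]
for th in threads:
    th.start()
for th in threads:
    th.join()

# All entries for period p precede all entries for period p + 1: the
# fastest thread always waits for the slowest, like a central clock.
assert [p for p, _ in log] == sorted(p for p, _ in log)
```

The barrier enforces exactly the low-frequency "hidden clock" discussed in [33]: per-period progress is bounded by the slowest participant, regardless of how fast the other cores are.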
After finding the data address, the instruction address must also be set appropriately (it is not at all sure that all neurons execute the same code at all times). Usually, several neurons share a processor core. As [39] estimated, about 10% of the computing capacity goes to administration, provided that once the core was scheduled, all neurons assigned to it make their computation without needing system-level rescheduling. This means that the relative non-payload contribution can be as good as 10/1000% if all cores are scheduled sequentially, but it can be much worse if only one payload activity follows the expensive rescheduling operation.
Experience shows that "small differences in spike times can accumulate and lead to severe delays or even cancellation of spikes, depending on the simulation strategy utilized or the temporal resolution within clock-driven strategies used." [57]

4.1.4. Pruning Connections

Communication, anyhow, drastically decreases the efficiency of computation in large-size neural networks (including brain simulation and ANNs). Grid-time-based computing results in sending and receiving messages irrespective of whether some significant change happened in the state of the neuron. One may conclude that the number of communication actions must be reduced, as must the number of epochs (needing non-payload communication, including scheduling) leading to communication. The most evident manifestation of biological operation, sending spikes to the fellow neurons, inspired the creation of "spiking neural networks". The "spiking neural networks" target reducing messaging by limiting communication to one single message that is sent by a neuron when it "fires". The issue with the idea is the content of the message: the information the spike delivers is the time when the effect of the environment (the fellow neurons) causes the neuron to fire. The time is the ’clock time’ that the sender stamps into the message, and the receiver may process the message according to the time that is stamped into it.
It is known in neuroscience that although a neuron may have several tens of thousands of input axons, only about a dozen of them participate in integrating charge; see Eq. (7). Biology also applies pruning during the development of individuals [66,67].
In technical systems, pruning can be applied in a random way, based on empirical utilization statistics of the connection line, or using predefined models (such as a Large Language Model (LLM)) based on theoretical assumptions about the targeted utilization. The idea of spiking networks is that only the neurons that have collected enough charge send a notification (a spike) to their downstream neurons. This idea reduces the communication traffic considerably, but has ’side effects’ in locality (as discussed). If the spikes carry timing information, the neuron can finish charge integration when sufficient charge has arrived to raise the membrane’s voltage above the threshold level. This idea assumes that at the time of summing, all input spikes to be considered by a neuron are available as a time-ordered queue, so one can decide when the charge integration shall be started and finished. After that time, at a fixed-time delivery offset, the neuron sends a message to its downstream neurons. The arrival time at the downstream neurons is composed of the time of the event plus the time of delivery plus the transfer time per connection line.
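The assumption criticized above (that all input spikes are already available as a time-ordered queue) can be stated in a few lines; the threshold and the fixed delivery offset are illustrative values:

```python
# Illustrative sketch of threshold-terminated charge integration over a
# time-ordered input queue (parameter values are assumptions).
V_TH = 1.0
DELIVERY_OFFSET = 0.3                    # ms, assumed fixed delivery delay

def integrate(time_ordered_spikes):
    """spikes: list of (arrival_time_ms, charge), already time-ordered."""
    charge = 0.0
    for t, q in time_ordered_spikes:
        charge += q
        if charge >= V_TH:               # integration can finish here;
            return t + DELIVERY_OFFSET   # later spikes are never needed
    return None                          # threshold never reached

spikes = [(0.1, 0.4), (0.5, 0.3), (0.8, 0.5), (1.2, 0.4)]
print(integrate(spikes))  # fires after the third spike; the fourth is unused
```

The sketch makes the hidden requirement explicit: the decision "when to stop integrating" is only well defined if the queue is complete and time-ordered at summing time, which a distributed technical system cannot guarantee.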
The time values are on the ’clock time’ scale: the computer prepares the timestamp when it executes the operation that caused the membrane’s threshold to be exceeded. Given that that time depends on the thread scheduler (and up to 1000 neurons per core), in the case of a clock-based algorithm, the neurons send messages (spikes) at approximately the same time offset. When running an event-based algorithm, the non-payload activity is no longer uniform, strongly increasing the dispersion of spiking times and so degrading the temporal precision of transferring neuronal information.
When imitating biological processes, one needs to consider both the time at which the event can be "seen" in wet neurobiology (the biological time) and the time duration that the computer processor needs to deliver the result corresponding to the biological event (the computing time). Computing objects intending to imitate biological systems need to be aware of both time scales. As discussed above in connection with the serial bus, the technical implementation may introduce enormously low payload computing efficiency and considerably distort the time relations between computing and data delivery times.
The passing of time is measured by counting some periodic events, such as clock periods in computing systems or spiking events in biological systems. In this event-based world, everything that happens in the periods between those events happens "at the same time". However, the technical implementation (including measuring biological processes) may introduce another, unintended granularity. Biological neurons perform analog integration; the technological implementations perform "step-wise" digital integration. This step involves (mostly) losing phase information. Furthermore, as detailed in [33], it introduces severe payload performance limits for neuronal operations.
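The loss of phase information can be illustrated directly: two input trains whose arrivals differ only in their sub-step phase become identical after quantization to the grid step. The 1 ms step below is an illustrative assumption.

```python
# Sketch of phase-information loss under grid-step quantization.
DT = 1.0  # ms, assumed grid step

def quantize(arrival_times, dt=DT):
    """Assign each analog arrival time to the grid bin it falls into."""
    return [int(t // dt) for t in arrival_times]

early_in_step = [2.05, 3.10]   # arrivals early within their grid steps
late_in_step = [2.95, 3.90]    # arrivals late within the same grid steps

# Different analog phases, identical digital representation:
assert quantize(early_in_step) == quantize(late_in_step) == [2, 3]
```

Within one grid step, an almost 1 ms phase difference (here 0.9 ms) is erased entirely, which matters wherever the conveyed information is carried by precise timing rather than by spike counts.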
Also an issue is that the artificial neurons work "at the same time" (biological time), yet they are scheduled essentially in a random way on the wall-clock time scale. Since the neurons integrate "signals from the past", this method is not suitable for simulating the effects of spikes with a decay time comparable to the length of the grid time. In addition, this grid time is "making all cores likely to send spikes at the same time", and "[SpiNNaker] does not cope well with all the traffic occurring within a short time window within the time step" [39]. This is why the designers of the hardware (HW) simulator [68] say that the events arrive "more or less" in the same order as they should. The difference in temporal behavior is also noticed by [39]: "The spike times from the cortical microcircuit model obtained with different simulation engines can only be compared in a statistical sense".
In ANNs, the signals are delivered via a bus, and the interconnection type and sequence depend on many factors (ranging from the kind of task to the actual inputs). During the training of ANNs, their feedback complicates the case. The only fixed thing in timing is that a neuronal input inevitably arrives only after a partner has produced it (which also needs time). The time ordering of delivered events, however, is not certain: it depends on the technical parameters of delivery rather than on the logic that generates them. Time stamping cannot help much. There are two bad choices. Option one is that neurons have a (biological) sending-time-ordered input queue and begin processing when all partner neurons have sent their messages. That needs a synchronization signal and leads to severe performance loss. Option two is that they have a (physical) arrival-time-ordered queue and process the messages as soon as they arrive. This technical solution makes it possible to give feedback to a neuron that fired later (according to its timestamp) and to set a new neuronal variable state, which is a "future state", when processing a message received physically later but with a timestamp referring to a biologically earlier time. A third, maybe better, option would be to maintain a biological-time-ordered queue (the method SystemC follows) and either, in some time slots (much shorter than the commonly used "grid time"), send out output and feedback, or individually process the received events and send back feedback and output immediately. In both cases, it is worth considering whether their effect is significant (exceeds some tolerance level compared to the last state), mitigating the need for communication in this way as well.
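The difference between the orderings discussed above can be shown in a few lines; the arrival times and timestamps are hypothetical values chosen so that the two orders disagree:

```python
# Sketch: physical arrival order vs. biological (stamped) order.
import heapq

# (physical_arrival_ms, biological_timestamp_ms, payload) -- hypothetical
arrivals = [(10.2, 5.0, "spike A"), (10.5, 4.0, "spike B")]

# Option two: process in physical arrival order.
arrival_order = [msg for _, _, msg in sorted(arrivals)]

# Third option: keep a biological-time-ordered queue instead.
bio_queue = [(bio, msg) for _, bio, msg in arrivals]
heapq.heapify(bio_queue)
bio_order = [heapq.heappop(bio_queue)[1] for _ in range(len(bio_queue))]

assert arrival_order == ["spike A", "spike B"]
assert bio_order == ["spike B", "spike A"]   # the two orders disagree
```

Processing in arrival order here means acting on the biologically later "spike A" first, i.e., computing a "future state" before the biologically earlier "spike B" has been handled.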
In excessive systems, some result/feedback events must be dropped because of long queuing, to provide seemingly higher performance. The logical dependence, namely that the feedback is computed from the results of the neuron that receives the feedback, is converted by the physical implementation of the computing system into a time dependence [22]. Because of this time sequence, the feedback messages will arrive at the neuron later (even if at the same biological time, according to the timestamp they carry), so they stand at the end of the queue. Because of this, they probably "are dropped if the receiving process is busy over several delivery cycles" [39]. In excessive systems, undefined inputs may establish the feedback, and the system may disregard (or possibly correct) the feedback.
An excellent "experimental proof" of the claims above is provided in [61]. In the words of that paper: "Yet the task of training such networks remains a challenging optimization problem. Several related problems arise: very long training time (several weeks on modern computers, for some problems), the potential for over-fitting (whereby the learned function is too specific to the training data and generalizes poorly to unseen data), and more technically, the vanishing gradient problem". "The immediate effect of activating fewer units is that propagating information through the network will be faster, both at training as well as at test time." This effect also means that the computed feedback, based maybe on undefined inputs, reaches the previous layer’s neurons faster. A natural consequence is that (see their Figure 5): "As λ_s increases, the running time decreases, but so does performance." Similarly, introducing the spatio-temporal behavior of ANNs, even in its simple form, using separated (i.e., not connected in the way proposed in [22]) time and space contributions to describe them, significantly improved the efficacy of video analysis [62].
The role of time (mismatching) has been confirmed directly via investigations in the time domain. "The CNN models are more sensitive to low-frequency channels than high-frequency channels" [63]: the feedback can follow slow changes with less difficulty than faster changes.

4.2. Hardware/Software Limitations

The processor time is shared between the non-payload tasks, such as the operating system’s tasks [69], including context switching [59,60] as well as scheduling computing tasks and simulated units, and the payload tasks, which involve computing for simulating neuronal activity. For an illustration, we use a flagship project [39]. As published, approximately 10% of the time is spent on non-payload activity. As the calculation in [32] details, this time share and the number of cores mean a computing efficacy of 10^-5; in other words, out of the 10^6 cores, only 10 can contribute full-value computing performance. This result is in line with the finding that (at slightly less demanding workloads) only a few dozen processors (out of several hundreds of thousands) can be effectively used for computation [34,35], similarly to the case of large neural networks [31,58].
Based on theoretical expectations such as "the naïve conversion between Operations Per Second (OPS) and Synaptic Operations Per Second (SOPS) as follows: 1 SOPS ≈ 2 OPS" [70], and the ratio of their operating times (1 ns vs 1 ms), the design assigned 1000 neurons to a processor core [39]. That is, in addition to thread scheduling (by the operating system), additional neuron scheduling must also take place (and take time) in the computing thread(s). In this way, much less than 1 µs (on another scale: much less than one thousand machine instructions) can go toward calculating the 1 ms activity of a neuron (a detailed consideration of the required computing cycles is given in [39]). If some neurons cannot finish the required flow of calculations, or not all simulated neurons can have a time slice, or, furthermore, the computed result cannot be processed, the whole computation in the timeslot is (almost) useless. This is why, according to [39], "Each of these cores also simulates 80 units per core" (out of the designed 1000 units): when reducing the number of required context switchings, the core also has some time to perform payload calculations. The limitation is hard: "As a consequence of the combination of required computation step size and large numbers of inputs, the simulation has to be slowed down compared to real-time. Reducing the number of neurons to be processed on each core, which we presently cannot set to fewer than 80, may contribute to faster simulation" [39].
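The figures quoted above imply a simple back-of-the-envelope budget; the 1 ns per machine instruction figure is an assumption used only for illustration:

```python
# Back-of-the-envelope time budget per neuron (illustrative arithmetic).
BIO_STEP_NS = 1_000_000      # 1 ms of biological time, expressed in ns
NEURONS_PER_CORE = 1000      # the designed sharing ratio quoted from [39]
INSTR_TIME_NS = 1            # ~1 ns per machine instruction (assumed)

budget_ns = BIO_STEP_NS // NEURONS_PER_CORE
instructions = budget_ns // INSTR_TIME_NS

assert budget_ns == 1000     # at most 1 microsecond per neuron
assert instructions == 1000  # roughly one thousand machine instructions
print(f"{instructions} instructions to simulate 1 ms of one neuron")
```

This budget must also cover neuron scheduling inside the thread, which is why reducing the sharing ratio to 80 units per core, as [39] reports, leaves the core measurably more time for payload calculation.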
One of the significant drawbacks of the "grid time" method is that the time resolution (the "sampling rate") cannot be shorter than the "grid time"; the "computing neurons" always send messages at that time, whether or not they have a new state; the event is quantized; and at the end of the "time grid" periods, "communication bursts" [48] happen: the achievable performance is severely limited [33]. Especially the intense communication needed in vast systems can lead to communication collapse [48], limiting the number of collaborating units [39,71], pruning the connections by HW/software (SW) methods [67,72,73], or simply omitting some events to speed up processing [39].

5. Biological Computing

Biological computing was created by nature without a user’s guide attached, so our basic difficulty is that we "want to design systems; neuroscientists want to analyze an existing one they little understand" [74]. We must never forget, especially when experimenting to imitate neuronal learning using recurrence and feedback: "Neurons ensure the directional propagation of signals throughout the nervous system. The functional asymmetry of neurons is supported by cellular compartmentation: the cell body and dendrites (somatodendritic compartment) receive synaptic inputs, and the axon propagates the action potentials that trigger synaptic release toward target cells." [75]
A neuron operates in tight cooperation with its environment (its fellow neurons, with outputs distributed in space and time). We must never forget that "the basic structural units of the nervous system are individual neurons" [17], but neurons "are linked together by dynamically changing constellations of synaptic weights" and "cell assemblies are best understood in light of their output product" [3,76]. A neuron receives multiple inputs at different times (at different offset times from the different upstream neurons) and in different stages. To understand the operation of their assemblies and higher levels, we must provide an accurate description of the operation of single neurons. We must recall the functional asymmetry as well as the networked nature (slow, distributed processing) when discussing the biological relevance of a method such as backpropagation.
We generalized computing [23] and its closely integrated communication [21]. Moreover, we extended the idea to biological operations. We abstract the operation as shown in Figure 1, present its mathematical description in Section 5.4, construct its state-diagram-like summary in Figure 2, describe some details of the algorithm in Section 5.5, and present the results of the calculation in Figure 3.

5.1. Conceptual operation

We describe neuronal operation using its conceptual operating graph in Figure 1. We subdivide the neuron's operation into three stages (green, red, and blue sections of the broken diagram line), in line with the state machine in Figure 2. In the description, we use some biological notions, but we omit the unnecessary biological details.
Our model incorporates the latest discoveries, including those related to the AIS. The neuron's operation resembles a serial $RC$ oscillator [77]. It has a stage variable (the membrane potential) and a regulatory threshold value. Crossing the membrane's voltage threshold value upward and downward causes a stage transition from "Computing" to "Delivering" and from "Delivering" to "Relaxing", respectively. In the absence of an external current contribution, the length of the period "Delivering" is fixed (defined by physiological parameters); the length of "Computing" depends on the activity of the upstream neurons (and, furthermore, on the gating due to the membrane's voltage). Due to the finite signal speed, we discuss all operations in the neuron's own "local time".
Figure 1. The conceptual graph of the action potential.
We consider that the neuron is in a resting ground state [78]; due to an external perturbation, it passes into a transient state, and, by expelling the obsolete ions, returns to its resting state. We start in the 'Relaxing' stage (a steady state, with the membrane's voltage at its resting value). Everything is balanced, and the synaptic inputs are enabled. No currents flow (neither input nor output); since all components are at the same potential, an output current has no driving force (there is no (significant) "leaking current" [79]).
When the membrane’s voltage decreases below the threshold value, the axonal inputs are re-opened, which may mean an instant passing to stage "Computing" again. The current stops only when the charge on the membrane disappears (the driving force terminates), so the current may change continuously, changing the voltage on the circuit’s output. The time of the end of the operation is ill-defined, and so is the value of the membrane’s voltage when the next axonal input arrives. The residual potential acts as a (time-dependent) memory with about a millisecond lifetime; see Figure 1. Notice that this is the way nature implements Analog In-Memory Computing (AIMC).
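The short-lived analog memory role of the residual potential can be sketched as an exponential decay toward the resting value. The function name and the 1 ms time constant below are our assumptions, chosen to match the millisecond-scale lifetime mentioned above:

```python
import math

def residual_potential(v0_mV: float, t_ms: float, tau_ms: float = 1.0) -> float:
    """Membrane potential above rest, decaying toward 0 (the resting value).

    tau_ms ~ 1 ms models the msec-scale memory lifetime; v0_mV is the
    residue left at the end of the previous cycle (illustrative values).
    """
    return v0_mV * math.exp(-t_ms / tau_ms)

# A 5 mV residue still biases the next computation after 0.5 ms,
# but has essentially vanished after several time constants.
print(round(residual_potential(5.0, 0.5), 3))   # ≈ 3.033 mV
print(round(residual_potential(5.0, 5.0), 3))   # ≈ 0.034 mV
```

The value read by the next computing cycle thus depends on *when* the next axonal input arrives, which is exactly the time-dependent memory the text describes.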

5.2. Stage Machine

Figure 2 illustrates our abstract view of a neuron, in this case, as a "stage machine". The double circles are stages (states with event-defined periods and internal processing) connected by bent arrows representing instant stage transitions. At the same time, at some other abstraction level, we consider them processes that have a temporal course with their event handling. Fundamentally, the system circulates along the blue pathway and maintains its state (described by a single stage variable, the membrane potential) using the black loops. However, sometimes it takes the less common red pathways. It receives its inputs cooperatively (it controls the accepted amount of external input from the upstream neurons by gating them through its stage variable). Furthermore, it actively communicates the time of its state change (not its state, as assumed in the so-called neural information theory) toward the downstream neurons.
In any stage, a "leaking current" (see Eq. (3)) changing the stage variable is present; the continuous change (the presence of a voltage gradient) is fundamental for a biological computing system. This current is proportional to the stage variable (membrane current); it is not identical to the fixed-size "leaking current" assumed in the Hodgkin-Huxley model [79]. The event that is issued when stage "Computing" ends and "Delivering" begins separates two physically different operating modes: inputting payload signals for computing and inputting "servo" ions for transmitting (signal transmission to fellow neurons begins and happens in parallel with further computation(s)).
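The stage transitions described above can be condensed into a minimal, illustrative sketch. The class and method names, the unit threshold, and the residual-potential value are our assumptions, and the 'Failed' and 'Synchronize' transitions of Section 5.2.4 are omitted for brevity:

```python
from enum import Enum

class Stage(Enum):
    RELAXING = "Relaxing"
    COMPUTING = "Computing"
    DELIVERING = "Delivering"

class Neuron:
    """Minimal stage machine: threshold crossings drive the transitions."""

    def __init__(self, threshold: float = 1.0):
        self.stage = Stage.RELAXING
        self.v = 0.0                  # stage variable: membrane potential above rest
        self.threshold = threshold
        self.synapses_enabled = True

    def on_axonal_input(self, dv: float) -> None:
        if not self.synapses_enabled:
            return                    # inputs are gated off while Delivering
        if self.stage is Stage.RELAXING:
            self.stage = Stage.COMPUTING   # first input starts the local time
        self.v += dv
        if self.v >= self.threshold:       # upward crossing: start Delivering
            self.stage = Stage.DELIVERING
            self.synapses_enabled = False

    def on_delivery_done(self) -> None:
        """Fixed-length Delivering ends: downward crossing re-opens inputs."""
        self.v = 0.3                  # residual potential (illustrative value)
        self.stage = Stage.RELAXING
        self.synapses_enabled = True

n = Neuron()
n.on_axonal_input(0.4)
n.on_axonal_input(0.7)       # cumulative 1.1 crosses the unit threshold
print(n.stage.value)         # Delivering
n.on_delivery_done()
print(n.stage.value, n.v)    # Relaxing 0.3
```

The sketch only captures the blue pathway of Figure 2; in the real model, the stage variable also changes continuously between inputs due to the "leaking current".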
Figure 2. The model of neuron as an abstract state machine.
The neuron receives its inputs as ’Axonal inputs’. For the first input in the ’Relaxing’ stage, the neuron enters the ’Computing’ stage. The time of this event becomes the base time used for calculating the neuron’s "local time". Notice that to produce the result, the neuron cooperates with upstream neurons (the neuron gates its input currents). One of the upstream neurons opens computing, and the receiving neuron terminates it.
The figure also reveals one of the secrets of the marvelous efficiency of neuronal computing. The dynamic weighting of the synaptic inputs, plus adding the local memory content, happens in an analog way, per synapse, and the summing happens at the jointly used membrane. The synaptic weights are not stored for an extended period. It is more than a brave attempt to attribute (in the sense of technical computing) a storage capacity to this time-dependent weight and a computing capacity to the neuron's time measurement.

5.2.1. Stage ’Relaxing’

Initially, a neuron is in the 'Relaxing' stage, which is the ground state of its operation. In this stage, the neuron (re-)opens its synaptic gates. (We also introduce a 'Sleeping' (or 'Standby') helper stage, which can be imagined as a low-power mode [80] in electronics, a state-maintenance mode in biological computing, or "creating the neuron" in biology: a "no payload activity" stage.) The stage transition from "Sleeping" also resets the internal stage variable (the membrane potential) to the value of the resting potential. In biology, a "leaking" background activity takes place: it changes (among others) the stage variable toward the system's "no activity" value.
The ion channels generating an intense membrane current are closed. However, the membrane's potential may differ from the resting potential. The membrane voltage plays the role of an accumulator (with a time-dependent content): a non-zero initial value acts as a short-term memory in the subsequent computing cycle. A new computation begins (the neuron passes to the stage 'Computing') when a new axonal input arrives. Given that the computation is analog, a current flows through the AIS, and the result is the length of the period needed to reach the threshold value. The local time is reset when a new computing cycle begins, but not when the resting potential is eventually reached. Notice that the same stage control variable plays many roles: the input pulse writes a value into the memory (the synaptic inputs generate voltage increment contributions that decay with time, so the different decay times set a per-channel memory value while, simultaneously, the weighted sum is calculated).

5.2.2. Stage ’Computing’

While computing, a current flows out from the neuronal condenser, so the arrival time of its neural input charge packets (spikes) matters. All charges arriving when the time window is open increase or decrease the membrane’s potential. The neuron has memory-like states [81] (implemented by different biophysical mechanisms); the computation can be restarted, and its result also depends on the actual neural environment and neuronal sensitivity. Although the operation of neurons can be described when all factors are known, because of the enormous number of factors and their time dependence (including ’random’ spikes), it is much less deterministic (however, not random!) than the technical computation.
The inputs considered in the computation (those arriving within the time window), their weights, and their arrival times change dynamically between the operations. On the one hand, this change makes it hard to calculate the result; on the other (accompanied by the learning mechanism), it enables the implementation of higher-order neural functionality, such as redundancy, rehabilitation, intuition, association, etc. Constructing solid electrolytes enables the creation of artificial synapses [82]; such devices, with the perspective of a thousand times faster neurons, bring many biological facilities within reach and provide a way of getting closer to the biological operation.
The external signal triggers a stage change and simultaneously contributes to the value of the internal stage variable (the membrane voltage). During regular operation, when the stage variable reaches a critical value (the threshold potential), the system generates an event, passes to the "Delivering" stage, and "flushes" the collected charge. In that stage, it sends a signal toward the environment (to the other neurons connected to its axon). After that period, it passes to the "Relaxing" stage without resetting the stage variable. From this event on, the "leaking" and the input pulses from the upstream neurons contribute to its stage variable.

5.2.3. Stage ’Delivering’

In this stage, the result is ready: the time between the arrival of the first synaptic input and reaching the membrane's threshold voltage has already been measured. No more input is desirable, so the neuron closes its input synapses. Simultaneously, the neuron starts its 'servo' mechanism: it opens its ion channels, and an intense ion current starts to charge the membrane. The voltage on the membrane quickly rises, and it takes a short time until its peak voltage is reached. The increased membrane voltage drives an outward current, and the membrane voltage gradually decays. When the voltage drops below the threshold voltage, the neuron re-opens its synaptic inputs and passes to the stage 'Relaxing': it is ready for the next operation. The signal transmission to downstream neurons begins in the "Delivering" stage and happens in parallel with the subsequent "Relaxing" (and maybe "Computing") stages.
The process "Delivering" operates as an independent subsystem ("Firing" ): it happens simultaneously with the process of "Relaxing", which, after some period usually passes to the next "Computing" stage. The stages "Computing" and "Delivering" mutually block each other, and the I/O operations happen in parallel with them. They have temporal lengths, and they must follow in the well-defined order "Computing"⇒" Delivering"⇒"Relaxing" (a "proper sequencing" [26]). Stage "Delivering" has a fixed period, stage "Computing" has a variable period (depends mainly on the upstream neurons), and the total length of the computing period equals their sum. The (physiologically defined) length of the "Delivering" period limits the neuron’s firing rate; the length of "Computing" is usually much shorter.

5.2.4. Extra Stages

There are two more possible stage transitions from the stage ’Computing’. First, the stage variable (due to "leaking") may approach its initial value (the resting potential), and the system passes to the "Relaxing" stage; in this case, we consider that the excitation "Failed". This happens when the leaking is more intense than the input current pulses (the input firing rate is too low, or a random firing event started the computing). Second, an external pulse [83] “Synchronize” may have the effect of forcing the system (independently from the value of the stage variable) to pass instantly to the "Delivering" stage and, after that, to the "Relaxing" stage. (When several neurons share that input signal, they will go to the "Relaxing" stage simultaneously: they get synchronized, a simple way of synchronizing low-precision asynchronous oscillators.)

5.2.5. Synaptic Control

As discussed, controlling the operation of its synapses is a fundamental aspect of neuronal operation. It is a gating mechanism that implements an 'autonomous cooperation' with the upstream neurons. The neuron's gating uses a 'downhill method': while the membrane's potential is above that of the axonal arbor (at the synaptic terminals), the charges cannot enter the membrane. When the membrane's voltage exceeds the threshold voltage, the synaptic inputs stop and restart only when the voltage drops below that threshold again; see the top inset of Figure 3. The synaptic gating makes interpreting neural information and entropy [3] hard, to say the least.

5.2.6. Timed Cooperation of Neurons

The length of the axons between the neurons and the conduction velocity of the neural signals entirely define the time of the data transfer (in the msec range); all connections are direct. The transferred signal starts the subsequent computation as well (asynchronous mode). "A preconfigured, strongly connected minority of fast-firing neurons form the backbone of brain connectivity and serve as an ever-ready, fast-acting system. However, the full performance of the brain also depends on the activity of very large numbers of weakly connected and slow-firing majority of neurons." [84]
In biology, the inter-neuronal transfer time is much longer than the neuronal processing time. Within the latter, the internal delivery time ("forming a spike") is much longer than the computing (collecting charge from the presynaptic neurons) itself. Neglecting the computing time next to the transmission time would be a better approximation than the opposite one, which is included in the theory. The arguments and the result are temporal; the neural processing is analog and event-driven [43].
Synaptic (or artificial) charges arriving when the time window is open increase or decrease the membrane's potential. A neuron has memory-like states [81], see also Figure 1 (implemented by different biophysical mechanisms); the computation can be restarted, and its result also depends on the actual neural environment. When all factors are known, the operation can be described. However, because of the enormous number of factors, their time dependence, the finite size of the neuron's components, and the finite speed of ions (furthermore, their interaction while traveling on the dendrites), it is much less deterministic (however, not random!) than the technical computation. We learned that the neuron's behavior can hardly be described mathematically or modelled electronically. The inputs considered in the computation (those arriving within the time window), their weights, and their arrival times change between the operations, making it hard to calculate the result on the one hand. On the other hand (accompanied by the learning mechanism), this enables the implementation of higher-order neural functionality, such as intuition, association, redundancy, and rehabilitation.

5.3. Classic Stages

We can map our 'modern' (dynamic) stages to the 'classic' (static) stages and see why defining the length of the Action Potential is problematic. The slow current affects the apparent boundary between our "Delivering" and "Relaxing" stages. Classical physiology sees the difference and distinguishes 'absolute' and 'relative' refractory periods with a smeared boundary between them. Furthermore, it defines the length of the spike by some characteristic point, such as reaching the resting value for the first or second time or reaching the maximum polarization/hyperpolarization. Our derivation of the stages (see Figure 1) defines clear-cut breakpoints between them.
We can define the spike length as the sum of the variable-size period "Computing" and the fixed-size period "Delivering". The "absolute refractory" period is the period during which the neuron membrane's voltage keeps the gates of the synaptic inputs closed (the membrane voltage is above their threshold). That period is apparently extended (and interpreted as a "relative refractory" period) by the period when, although the gating is re-enabled, the slow current has not yet arrived at the Axon Initial Segment, where it contributes to the measured AP; see Figure 1. Only one refractory period exists, plus the effect of the slow current.

5.4. Mathematics of Spiking 

The so-called equivalent circuit of the neuron is a differentiator-type $RC$ oscillator circuit [15,85,86], as proved by direct neuroanatomical evidence [75,87,88,89,90] (it is erroneously claimed in biophysics that it is an integrator-type oscillator). The synaptic currents can increase the membrane's voltage above its threshold; then an intense current starts that suddenly increases the membrane's voltage to a peak value, after which the condenser discharges. The circuit produces a damped oscillation, and after a longer time, the membrane voltage returns to its resting value. While the membrane's voltage is above its threshold value, the synaptic inputs are inactive (the analog inputs are gated).
Although using an "equivalent circuit" is not perfect for biological circuits, in our model, the neuron is represented as a constant capacity condenser C M and a constant resistance R M . The charge carriers are slow (typically positive) ions, which are about 50,000 times heavier than electrons (furthermore, they cannot form a "free ion cloud"), while the electrostatic force between the carriers is the same as in the case of electrons. Due to the slow ions, the "instant interaction" model used in the theory of electricity cannot be used. The ions can move only using the “downhill” method (toward decreasing potential), but the potential must be interpreted as the sum of the electric and thermodynamic potentials.
Concerning Figure 1, given that slow ion currents deliver the charge to the membrane's surface, the charge propagation takes time, so after the charge-up process, the synaptic inputs reach the output of the circuit about tenths of a millisecond later. Their arrival (the voltage gradients they produce) provides the input for the next computing cycle. A potential that resides above the resting potential implements a neuron-level memory.
Although a neuronal circuit is implemented with ionic currents and many other differences, its operation can be aligned (to some measure) with the instantaneous currents of physical electronics when using current generators of a special form. The input currents in a biological circuit are the sum of a large rush-in current through the membrane in an extremely short time and much smaller gated synaptic currents arriving at different times through the synapses. Those currents generate voltage gradients, and the output voltage measured on the output resistor is
$$V_{out}^{Differentiator} = RC\,\frac{dV_{in}}{dt} \tag{1}$$
The input voltage gradients are voltage-gated; the logical condition is
$$\mathrm{bool}\ I_{Synaptic,\,i}^{Enabled} = \left(V_{Membrane\ at\ position\ i} < V_{threshold}\right) \tag{2}$$
(the temporal gating, see also Eq. (7) and Figure 3, is implemented through voltage gating). The resulting gradient is the sum of the input gradients that the currents generate, plus the voltage gradient that the output current generates through the AIS [75,90]. The latter term can be described as
$$\frac{d}{dt}V_{OUT}^{AIS} = \frac{1}{C}\,\frac{V_{internal}-V_{external}}{R} \tag{3}$$
That is
$$\frac{d}{dt}V_{in} = \frac{d}{dt}V_{IN,\,Gated}^{Component} - \frac{d}{dt}V_{OUT}^{AIS} \tag{4}$$
The input currents (although arising for slightly differing physical reasons for the different current types) are usually described by the analytic form
$$I_{in} = I_0\left(1-e^{-t/\alpha}\right)e^{-t/\beta} \tag{5}$$
where $I_0$ is a current amplitude per input, and $\alpha$ and $\beta$ are timing constants for the in- and outflow of the given contributing current, respectively. They are (due to geometrical reasons) approximately similar for the synaptic inputs and differ from the constants valid for the rush-in current. The corresponding voltage derivative is
$$\frac{dV_{in}}{dt} = \frac{I_0}{C}\left[\frac{1}{\alpha}\,e^{-t/\alpha}\,e^{-t/\beta} - \frac{1}{\beta}\,e^{-t/\beta}\left(1-e^{-t/\alpha}\right)\right] \tag{6}$$
To implement such an analog circuit with a conventional electronic circuit, voltage-gated current injectors (with time constants of around 1 ms) are needed. Notice that the currents are interpreted only within the ranges described: the input (synaptic) currents are interpreted only in "Computing"; the rush-in current flows only from the beginning of "Delivering", while the outward current (see Eq. (3)) is persistent but depends on the membrane's potential.
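The rise-then-decay shape of Eq. (5) can be checked numerically. In the sketch below, the amplitude and the time constants are illustrative assumptions, not physiological values:

```python
import math

def i_in(t_ms: float, i0: float = 1.0, alpha: float = 0.2, beta: float = 1.0) -> float:
    """Eq. (5): I_in = I0 * (1 - exp(-t/alpha)) * exp(-t/beta).

    alpha (inflow) and beta (outflow) are illustrative time constants in ms.
    """
    return i0 * (1.0 - math.exp(-t_ms / alpha)) * math.exp(-t_ms / beta)

# The current starts at zero, rises while the inflow term dominates,
# then decays toward zero with the outflow time constant.
samples = [round(i_in(t), 3) for t in (0.0, 0.2, 0.5, 2.0, 5.0)]
print(samples)
```

With $\alpha \ll \beta$, the waveform rises quickly and decays slowly, which is the generic shape of both the synaptic and the rush-in contributions; only the constants differ.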
In the simplest case, the resulting voltage derivative comprises only the rush-in contribution described by Eq (6) and the AIS contribution described by Eq. (3). The gating implements a dynamically changing temporal window and dynamically changing synaptic weights. The appearance of the first arriving synaptic gradient starts the “Computing” stage. The appearance of the rush-in gradient starts the “Delivering” stage. The charge collected in that temporal window is
$$Q(t) = \sum_i \int_{t_{0,i}}^{\,t\le t_{thr}} I_{syn,\,i}(t)\,dt \tag{7}$$
The total charge is integrated in a mathematically entirely unusual way: the upper limit depends on the result. Notice that artificial neural networks (including "spiking neural networks") entirely overlook this point: the integration is continuously performed (all summands are used), and the information reduces to a logical variable. Also, it is not considered that the AIS current decreases the membrane’s voltage and charge between incoming spikes.
This omission also has implications for the algorithm. The only operation that a neuron can perform is that unique integration. It is implemented in biology by gating the inputs of the computing units, as shown in Eq.(2). In the "Computing" stage, the synaptic inputs are open, while in the "Delivering" stage, the synaptic inputs are closed (the inputs are ineffective). It produces multiple outputs (in the sense that the signal may branch along its path) in the form of a signal with a temporal course.
Although it has many (several thousand) inputs, a neuron uses only about a dozen of them in a single computing operation, appropriately and dynamically weighting and gating its inputs [84,91].
The result of the computation is the reciprocal of the weighted sum, and it is represented by the time when a spike is sent out (when the charge on the membrane's fixed capacity leads to reaching the threshold). The neuron (through the integration time window) selects which of its upstream neurons can contribute to the result. The individual contributions are a bargain between the sender and the receiver: the neuron decides when a contributing current terminates, but the upstream neuron decides when it begins. The beginning of the time window $t_0$ is set by the arrival of a spike from one of the upstream neurons, and the window is closed by the neuron when the charge integration results in a voltage exceeding the threshold. The computation (a weighted summation) is performed on the fly, excluding the obsolete operands. (The idea of AIMC [92] almost precisely reproduces the charge summation, except for its timing.)
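The threshold-terminated integration of Eq. (7), whose upper limit is itself the result, can be sketched as an accumulation loop that stops when the threshold is crossed; the returned time-to-threshold plays the role of the neuron's output. The time step, charge-packet size, leak constant, and threshold below are illustrative assumptions, and the spike currents are idealized as instantaneous packets:

```python
def time_to_threshold(spike_times_ms, threshold=1.0, dq=0.4, leak=0.2, dt=0.01):
    """Accumulate charge from arrived spikes, minus a leak proportional to the
    accumulated value (standing in for the AIS outward current), until the
    threshold is crossed. Returns the crossing time (the 'result'), or None
    if the input was too weak (the 'Failed' case). All constants illustrative.
    """
    spike_steps = sorted(round(s / dt) for s in spike_times_ms)
    q = 0.0
    for k in range(2000):                   # give up after 20 ms
        while spike_steps and spike_steps[0] <= k:
            q += dq                         # a spike's charge packet lands
            spike_steps.pop(0)
        q -= leak * q * dt                  # leak decreases charge between spikes
        if q >= threshold:
            return round(k * dt, 2)         # later spikes are excluded
    return None

print(time_to_threshold([0.0, 0.3, 0.6]))   # → 0.6 (third spike crosses threshold)
print(time_to_threshold([0.0, 15.0]))       # → None (too sparse: the leak wins)
```

Note the two properties the text emphasizes: the integration's upper limit is determined by the result itself (spikes after the crossing are excluded from the operands), and a too-low input rate lets the leak win, reproducing the "Failed" transition of Section 5.2.4.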
Figure 3 (produced by our simulator [86]) shows how the described physical processes control the neuron's operation. In the middle inset, when the membrane's surface potential (see Eq. (1)) increases above its threshold potential due to three step-like excitations, the rush-in ion channels open instantly (more details in [15,86]) and create an exponentially decreasing, step-like voltage derivative that charges up the membrane. The step-like imitated synaptic inputs resemble the real ones: the synaptic inputs produce smaller, rush-in-like voltage gradient contributions. When integrated as Eq. (1) describes, the resulting voltage gradient produces the Action Potential (AP) shown in the lower inset. Crossing the threshold level controls the gating signal, as shown in the top inset.
Notice how nature implemented "in-memory computing" by weighted parallel analog summing. The idea of AIMC [92] quite precisely reproduces it, except that it uses a "time of programming" instead of the "time of arrival" the biological system uses. The "time of programming" depends on technical transfer characteristics. In general, that time is on the scale of computing time instead of simulated time.
Notice nature's principles of reducing operands and noise: the less important inputs are omitted by timing. The rest of the signals are integrated over time. The price paid is a restricted computing operation: the processor can perform only one single type of operation. Its arguments, its result, and the neuronal-level memory are all time-dependent. Furthermore, the operation excludes all but (typically) a dozen neurons from the game. It is not at all necessary that the same operands, with the same dynamic weights, are included in consecutive operations. This establishes a chaotic-like operation [93] at the level of neuronal assemblies [51,76,94], but it also enables learning, association, rehabilitation, and similar functionalities. Since the signals' shape is identical, integrating them in time means multiplying them by a weight (depending non-linearly on time), and the deviation from the steady-state potential represents a (time-dependent) memory. The significant operations, the dynamic weighting and the dynamic summing, are performed strictly in parallel. There are attempts to use analog or digital accelerators (see [92,95]), but they work at the expense of degrading either the precision or the computing efficiency. Essentially, this is why, from the point of view of technical computation, "we are between about 7 and 9 orders of magnitude removed from the efficiency of mammalian cortex" [39].

5.5. Algorithm for the operation

As can be concluded from Figure 1 and Figure 2, the computation and the event control heavily depend on the stage of the operation, although the underlying logic is the same. When calculation takes place, at the 'heartbeat' times the system updates the neuron's state and also changes its stage if needed.
Algorithm 4 The main computation
    // Calculate stage-dependent contributions
    if Stage "Relaxing" then
        if RelaxingStopped then
            dV/dt_RushIn ← 0
        else
            dV/dt_RushIn ← Eq. (6) for the rush-in current at time t
        end if
    end if
    if Stage "Computing" then
        dV/dt_RushIn ← 0
        dV/dt_Input ← Eq. (6) for the input currents at time t
    end if
    if Stage "Delivering" then
        SynapsesEnabled ← (V_Membrane < V_Threshold)
        dV/dt_RushIn ← Eq. (6) for the rush-in current at time t
        dV/dt_Input ← Eq. (6) for the input currents at time t, according to Eq. (4)
    end if
    dV/dt_Resulting ← Eq. (4)
    dV/dt_AIS ← Eq. (3) for the output current at time t
    V_Membrane ← V_Membrane + dV/dt_Resulting · dt, from Eq. (1)
    Adjust heartbeat time and set new time
Algorithm 5 The heartbeat computation
    if Stage "Relaxing" then
        if ReceivedSynapticInput then
            Send 'BeginComputing'
        end if
        if dV/dt_Membrane < 0 then
            if abs(V_Membrane) < Allowed then
                V_Membrane ← 0
                RelaxingStopped ← true
            end if
        end if
    end if
    if Stage "Computing" then
        if V_Membrane < 0 then
            Send 'Failed'
        end if
    else
        if V_Membrane ≥ V_Threshold then
            SynapsesEnabled ← false
            Send 'DeliveringBegin'
        end if
    end if
    if Stage "Delivering" then
        if V_Membrane < V_Threshold then
            RelaxingStopped ← true
            SynapsesEnabled ← true
            Start 'RelaxingBegin'
        end if
    end if
    Do main computing in Algorithm (4)
    Send new heartbeat
Figure 3 (produced by our simulator [86]) shows how the described physical processes control the neuron's operation. In the middle inset, when the membrane's surface potential increases above its threshold potential due to three step-like excitations opening the ion channels, Na$^+$ ions rush in instantly and create an exponentially decreasing, step-like voltage derivative that charges up the membrane. The step-like imitated synaptic inputs resemble the real ones: the incoming Post-Synaptic Potential (PSP)s produce smaller, rush-in-like voltage gradient contributions. The charge creates a thin surface-layer current that can flow out through the AIS. This outward current is negative and proportional to the membrane potential above its resting potential. In the beginning, the rushed-in current (and correspondingly, its potential gradient contribution) is much higher than the current flowing out through the AIS, so for a while, the membrane's potential (and so the AIS current) grows. When they become equal, the AP reaches its top potential value. Later, the rush-in current gets exhausted, its potential-generating power drops below that of the AIS current, the resulting potential gradient changes its sign, and the membrane potential starts to decrease.
Figure 3. How the physical processes describe the membrane’s operation. The rushed-in Na+ ions instantly increase the charge on the membrane, and then the membrane’s capacitance discharges (producing an exponentially decaying voltage derivative). The ions are created at different positions on the membrane, so they need different times to reach the AIS, where the current produces a peaking voltage derivative. The resulting voltage derivative, the sum of the two derivatives (the AIS current is outward), drives the oscillator. Its integration produces the membrane potential. When the membrane potential crosses the threshold voltage, it switches the synaptic currents off or on.
In the previous period, the rush-in charge was stored on the membrane. Now, when the potential gradient reverses, the driving force starts to decrease the charge in the layer on the membrane, which, per definitionem, means a reversed current, without any foreign ionic current through the AIS. This is a fundamental difference between the static picture that Hodgkin and Huxley hypothesized [79] and that biology still uses, and the dynamic one that describes its behavior. Microscope resolution enabled the AIS to be noticed only two decades after their time; its structure could be studied only three more decades later [75]. Even in the past two decades, it has not been built into the theory of neuronal circuits. The correct equivalent electric circuit of a neuron is a serial, instead of a parallel, oscillator (see [15,86]), and its output voltage is defined dynamically by its voltage gradients instead of static currents (as physiology erroneously assumes). In the static picture, the oscillator is only a bit player, while in the time-aware (dynamic) picture, it is the star.
Notice also that only the resulting d V d t , the Action Potential Time Derivative (APTD), disappears with time. Its two terms are connected through the membrane potential. As long as the membrane’s potential is above the resting value, a current of varying size and sign flows, and the output and input currents are not necessarily equal: the capacitive current changes the rules of the game.
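As a toy numerical illustration of these two competing derivative terms, the membrane potential above rest can be integrated from the sum of an exponentially decaying rush-in gradient and an outward AIS term proportional to the potential. All constants (`A_NA`, `TAU_NA`, `G_AIS`) are arbitrary illustrative values, not physiological fits, and the explicit Euler scheme below is a sketch, not the simulator's integrator.

```python
# Toy integration of dV/dt = (Na+ rush-in term) + (outward AIS term).
# All constants are illustrative assumptions, not physiological values.
import math

DT = 0.01       # ms, integration step
TAU_NA = 0.5    # ms, decay constant of the Na+ rush-in gradient
G_AIS = 1.0     # 1/ms, coefficient of the outward AIS term
A_NA = 400.0    # mV/ms, initial rush-in gradient amplitude

def simulate(t_end=5.0):
    v = 0.0          # membrane potential above resting (mV)
    trace = []       # (time, potential, gradient) samples
    t = 0.0
    while t < t_end:
        rush_in = A_NA * math.exp(-t / TAU_NA)   # decaying inward contribution
        ais_out = -G_AIS * v                     # outward, proportional to V
        dv_dt = rush_in + ais_out                # resulting APTD
        v += dv_dt * DT
        trace.append((t, v, dv_dt))
        t += DT
    return trace

trace = simulate()
peak_t, peak_v, _ = max(trace, key=lambda p: p[1])
```

The gradient starts large and positive, crosses zero where the two terms balance (the AP peak), then turns negative while the membrane discharges, matching the qualitative course described above.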
Figure 4. The diagram describing the Action Potential in the V vs dV/dt phase diagram. A parametric figure showing the correspondence between the membrane’s potential, see Eq. (1), and its gradient, see Eq. (3). The theory’s diagram line perfectly describes the experimental data (see [98], Figure 2.d) where the distance between the sampling data points is adequate.
The top inset shows how the membrane potential controls the synaptic inputs. The ions from the neuronal arbor [96,97] can pass to the membrane using the ’downhill’ method, but they cannot do so while the membrane’s potential is above the threshold. The upper diagram line shows how this gating changes as a function of time.

5.6. Operating Diagrams

Figure 4 (produced by our simulator [86]) shows the phase diagram between the membrane’s potential and its voltage gradient, compared with the measured diagram from [98], Figure 2.d. In this parametric figure, the time parameter is derived from both the experimental data and the simulated theoretical data. The sampling intervals are not equidistant. On the blue experimental line, the measuring times are marked by the joints of the broken line segments. On the red theoretical diagram line, the marks designate the points where the simulated value was calculated (the time step adapts to the actual gradient). As discussed, improper measurement periods act as averaging over improperly selected time intervals. On the right side of the figure, the broken line, although roughly, properly follows the time course of the theoretical diagram line; there the measurement periods are moderately above the optimal, around tenths of a millisecond. However, this is not the case in the middle of the figure. Here, the time distance of the sampling points is about 2 μs, and the rush-in of the Na+ ions causes a drastically different gradient value. This fine time resolution enables us to follow the effect, while the "long" measurement period only provides an improper average and hides the effect that drives neuronal operation. The effect clearly demonstrates the need for a gradient-aware algorithm and integration step; furthermore, it proves that voltage gradients control neuronal operation.
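The role of gradient-aware time steps can be illustrated with a simple adaptive sampler. The step-control rule below (shrink the step in proportion to |dV/dt|) is our own illustrative choice, not the algorithm used by the simulator, and the transient is a synthetic Gaussian pulse standing in for the Na+ rush-in.

```python
# Sketch: fixed-step vs gradient-aware sampling of a fast transient.
# The signal shape and the step-control rule are illustrative assumptions.
import math

def dv_dt(t):
    """A fast rush-in-like gradient: large and brief around t = 1.03 ms."""
    return 400.0 * math.exp(-((t - 1.03) / 0.02) ** 2)

def sample_fixed(dt, t_end=2.0):
    """Sample with a constant 'long' measurement period."""
    ts, t = [], 0.0
    while t <= t_end:
        ts.append(t)
        t += dt
    return ts

def sample_adaptive(t_end=2.0, dt_max=0.1, dt_min=0.001, g_ref=5.0):
    """Shrink the step wherever the gradient is steep."""
    ts, t = [], 0.0
    while t <= t_end:
        ts.append(t)
        g = abs(dv_dt(t))
        dt = max(dt_min, dt_max / (1.0 + g / g_ref))
        t += dt
    return ts

fixed = sample_fixed(0.1)        # coarse, equidistant sampling
adaptive = sample_adaptive()     # clusters points inside the transient
peak_fixed = max(dv_dt(t) for t in fixed)
peak_adaptive = max(dv_dt(t) for t in adaptive)
```

With the fixed 0.1 ms period the sampled peak stays an order of magnitude below the true 400 mV/ms transient, while the adaptive sampler resolves it, illustrating why a fixed "long" sampling period averages the rush-in away.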

6. Summary

The brain, with its information processing capability, and ultimately the mind, with its consciousness and behavior, are still among the great mysteries of science. From the point of view of understanding neural computing, the large, coordinated, multi-year projects have failed. The Human Brain Project aimed at a new understanding of the brain, but only new graphical user interfaces and libraries [8] were prepared for the many-decades-old understanding. As the BRAIN Initiative summarized, "Yet for the most part, we still do not understand the brain’s underlying computational logic" [6]. The "new understanding" encompasses incorporating the anatomical and physiological discoveries of the past few decades into the model, along with the correct (physically based) electrical and thermodynamic descriptions of the processes, their mathematical descriptions, and their algorithmic implementations. We provided an exact implementation of the physically correct model, which is based on the first principles of science, uses well-established mathematical differential equations (without any empirical functions), and uses a tool developed for simulating the temporally correct operation of high-speed electronic circuits. This combination faithfully reproduces the electric operation of a single neuron, opening the path toward modeling cooperating neurons.

References

  1. Schrödinger, E. IS LIFE BASED ON THE LAWS OF PHYSICS? In What is Life?: With Mind and Matter and Autobiographical Sketches; Cambridge University Press: Canto, 1992; pp. 76–85. [Google Scholar]
  2. Végh, J. The non-ordinary laws of physics describing life. Physica A: Statistical Mechanics and its Applications 2025, 1, in review. [CrossRef]
  3. Végh, J.; Berki, Á.J. Towards generalizing the information theory for neural communication. Entropy 2022, 24, 1086. [Google Scholar] [CrossRef]
  4. Drukarch, B.; Wilhelmus, M. Thinking about the nerve impulse: A critical analysis of the electricity-centered conception of nerve excitability. Progress in Neurobiology 2018, 169, 172–185. [Google Scholar] [CrossRef]
  5. Drukarch, B.; Wilhelmus, M. Thinking about the action potential: the nerve signal as a window to the physical principles guiding neuronal excitability. Frontiers in Cellular Neuroscience 2023, 17. [Google Scholar] [CrossRef]
  6. Ngai, J. BRAIN @ 10: A decade of innovation. Neuron 2024, 112. [Google Scholar] [CrossRef]
  7. Human Brain Project. A closer look at scientific advances, 2023.
  8. Human Brain Project. New HBP-book: A user’s guide to the tools of digital neuroscience, 2023.
  9. Feynman, R.P. Feynman Lectures on Computation; CRC Press, 2018. [Google Scholar]
  10. von Neumann, J. The Computer and the Brain; Yale University Press: New Haven, 1958. [Google Scholar]
  11. Abbott, L.; Sejnowski, T.J. Neural Codes and Distributed Representations; MIT Press: Cambridge, MA, 1999. [Google Scholar]
  12. Chu, D.; Prokopenko, M.; Ray, J.C. Computation by natural systems. Interface Focus 2018, 8. [Google Scholar] [CrossRef]
  13. Végh, J.; Berki, A.J. Do we know the operating principles of our computers better than those of our brain? In Proceedings of the 2020 International Conference on Computational Science and Computational Intelligence (CSCI); 2020; pp. 668–674. [Google Scholar] [CrossRef]
  14. Mehonic, A.; Kenyon, A.J. Brain-inspired computing needs a master plan. Nature 2022, 604, 255–260. [Google Scholar] [CrossRef]
  15. Végh, J. On implementing technomorph biology for inefficient computing. Applied Sciences 2025, 1. [Google Scholar] [CrossRef]
  16. Markovic, D.; Mizrahi, A.; Querlioz, D.; Grollier, J. Physics for neuromorphic computing. Nature Reviews Physics 2020, 2, 499–510. [Google Scholar] [CrossRef]
  17. Johnston, D.; Wu, S.M.S. Foundations of Cellular Neurophysiology; Massachusetts Institute of Technology: Cambridge, Massachusetts and London, England, 1995.
  18. Kandel, E.R.; Schwartz, J.H.; Jessell, T.M.; Siegelbaum, S.A.; Hudspeth, A.J. Principles of Neural Science, 5 ed.; The McGraw-Hill Medical: New York Chicago etc., 2013.
  19. Hebb, D. The Organization of Behavior; Wiley and Sons: New York, 1949. [Google Scholar]
  20. Caporale, N.; Dan, Y. Spike Timing–Dependent Plasticity: A Hebbian Learning Rule. Annual Review of Neuroscience 2008, 31, 25–46. [Google Scholar] [CrossRef] [PubMed]
  21. Végh, J.; Berki, Á.J. On the Role of Speed in Technological and Biological Information Transfer for Computations. Acta Biotheoretica 2022, 70, 26. [Google Scholar] [CrossRef]
  22. Végh, J. Introducing Temporal Behavior to Computing Science. In Proceedings of the Advances in Software Engineering, Education, and e-Learning; Arabnia, H.R., Deligiannidis, L., Tinetti, F.G., Tran, Q.N., Eds.; Springer International Publishing, 2021; pp. 471–491. [Google Scholar]
  23. Végh, J. Revising the Classic Computing Paradigm and Its Technological Implementations. Informatics 2021, 8. [Google Scholar] [CrossRef]
  24. MacKay, D.M.; McCulloch, W.S. The limiting information capacity of a neuronal link. The bulletin of mathematical biophysics 1952, 14, 127–135. [Google Scholar] [CrossRef]
  25. Sejnowski, T.J. The Computer and the Brain Revisited. IEEE Annals of the History of Computing 1989, 11, 197–201. [Google Scholar] [CrossRef]
  26. von Neumann, J. First draft of a report on the EDVAC. IEEE Annals of the History of Computing 1993, 15, 27–75. [Google Scholar] [CrossRef]
  27. Nature. Documentary follows implosion of billion-euro brain project. Nature 2020, 588, 215–216. [Google Scholar] [CrossRef]
  28. Nemenman, I.; Lewen, G.D.; Bialek, W.; de Ruyter van Steveninck, R.R. Neural Coding of Natural Stimuli: Information at Sub-Millisecond Resolution. PLOS Computational Biology 2008, 4, 1–12. [Google Scholar] [CrossRef]
  29. Backus, J. Can Programming Languages Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs. Communications of the ACM 1978, 21, 613–641. [Google Scholar] [CrossRef]
  30. Black, C.D.; Donovan, J.; Bunton, B.; Keist, A. SystemC: From the Ground Up, second ed.; Springer: New York, 2010. [Google Scholar]
  31. Végh, J. Which scaling rule applies to Artificial Neural Networks. Neural Computing and Applications 2021, 33, 16847–16864. [Google Scholar] [CrossRef]
  32. Végh, J. Finally, how many efficiencies the supercomputers have? The Journal of Supercomputing 2020, 76, 9430–9455. [Google Scholar] [CrossRef]
  33. Végh, J. How Amdahl’s Law limits performance of large artificial neural networks. Brain Informatics 2019, 6, 1–11. [Google Scholar] [CrossRef]
  34. D’Angelo, G.; Rampone, S. Towards a HPC-oriented parallel implementation of a learning algorithm for bioinformatics applications. BMC Bioinformatics 2014, 15. [Google Scholar] [CrossRef]
  35. D’Angelo, G.; Palmieri, F. Network traffic classification using deep convolutional recurrent autoencoder neural networks for spatial–temporal features extraction. Journal of Network and Computer Applications 2021, 173, 102890. [Google Scholar] [CrossRef]
  36. Végh, J. Why do we need to Introduce Temporal Behavior in both Modern Science and Modern Computing. Global Journal of Computer Science and Technology: Hardware & Computation 2020, 20/1, 13–29. [Google Scholar]
  37. Végh, J. von Neumann’s missing "Second Draft": what it should contain. In Proceedings of the 2020 International Conference on Computational Science and Computational Intelligence (CSCI’20), Las Vegas, NV, USA, 16–18 December 2020; IEEE Computer Society, 2020; pp. 1260–1264. [CrossRef]
  38. hpcwire.com. TOP500: Exascale Is Officially Here with Debut of Frontier. Available online: https://www.hpcwire.com/2022/05/30/top500-exascale-is-officially-here-with-debut-of-frontier/ (accessed on 10 September 2023).
  39. van Albada, S.J.; Rowley, A.G.; Senk, J.; Hopkins, M.; Schmidt, M.; Stokes, A.B.; Lester, D.R.; Diesmann, M.; Furber, S.B. Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model. Frontiers in Neuroscience 2018, 12, 291. [Google Scholar] [CrossRef]
  40. de Macedo Mourelle, L.; Nedjah, N.; Pessanha, F.G. Reconfigurable and Adaptive Computing: Theory and Applications; CRC press, 2016; chapter 5: Interprocess Communication via Crossbar for Shared Memory Systems-on-chip. [CrossRef]
  41. IEEE/Accellera. Systems initiative. 2017. Available online: http://www.accellera.org/downloads/standards/systemc.
  42. Brette, R. Is coding a relevant metaphor for the brain? The Behavioral and brain sciences 2018, 42, e215. [Google Scholar] [CrossRef]
  43. Somjen, G. Sensory Coding in the Mammalian Nervous System; Meredith Corporation: New York, 1972. [CrossRef]
  44. Shannon, C.E. A mathematical theory of communication. The Bell System Technical Journal 1948, 27, 379–423. [Google Scholar] [CrossRef]
  45. Nizami, L. Information theory is abused in neuroscience. Cybernetics & Human Knowing 2019, 26, 47–97. [Google Scholar]
  46. Shannon, C.E. The Bandwagon. IRE Transactions in Information Theory 1956, 2, 3. [Google Scholar] [CrossRef]
  47. Young, A.R.; Dean, M.E.; Plank, J.S.; Rose, G.S. A Review of Spiking Neuromorphic Hardware Communication Systems. IEEE Access 2019, 7, 135606–135620. [CrossRef]
  48. Moradi, S.; Manohar, R. The impact of on-chip communication on memory technologies for neuromorphic systems. Journal of Physics D: Applied Physics 2018, 52, 014003. [Google Scholar] [CrossRef]
  49. Stone, J.V. Principles of Neural Information Theory; Sebtel Press: Sheffield, UK, 2018. [Google Scholar]
  50. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  51. de Ruyter van Steveninck, R.R.; Lewen, G.D.; Strong, S.P.; Koberle, R.; Bialek, W. Reproducibility and Variability in Neural Spike Trains. Science 1997, 275, 1805–1808. [Google Scholar] [CrossRef]
  52. Sengupta, B.; Laughlin, S.; Niven, J. Consequences of Converting Graded to Action Potentials upon Neural Information Coding and Energy Efficiency. PLoS Comput Biol 2014, 1. [Google Scholar] [CrossRef]
  53. Strong, S.P.; Koberle, R.; de Ruyter van Steveninck, R.R.; Bialek, W. Entropy and Information in Neural Spike Trains. Phys. Rev. Lett. 1998, 80, 197–200. [Google Scholar] [CrossRef]
  54. Sejnowski, T.J. The Computer and the Brain Revisited. IEEE Annals of the History of Computing 1989, 11, 197–201. [Google Scholar] [CrossRef]
  55. Gordon, S., Ed. The Synaptic Organization of the Brain, 5 ed.; Oxford Academic, New York, 2005. [CrossRef]
  56. Berger, T.; Levy, W.B. A Mathematical Theory of Energy Efficient Neural Computation and Communication. IEEE Transactions on Information Theory 2010, 56, 852–874. [Google Scholar] [CrossRef]
  57. Brette, R.; et al. Simulation of networks of spiking neurons: a review of tools and strategies. J. Comput. Neurosci. 2007, 23, 349–398. [CrossRef]
  58. Keuper, J.; Pfreundt, F.J. Distributed Training of Deep Neural Networks: Theoretical and Practical Limits of Parallel Scalability. In Proceedings of the 2nd Workshop on Machine Learning in HPC Environments (MLHPC). IEEE; 2016; pp. 1469–1476. [Google Scholar] [CrossRef]
  59. Tsafrir, D. The Context-switch Overhead Inflicted by Hardware Interrupts (and the Enigma of Do-nothing Loops). In Proceedings of the 2007 Workshop on Experimental Computer Science, San Diego, California; New York, NY, USA, 2007.
  60. David, F.M.; Carlyle, J.C.; Campbell, R.H. Context Switch Overheads for Linux on ARM Platforms. In Proceedings of the 2007 Workshop on Experimental Computer Science, San Diego, California; New York, NY, USA, 2007. [CrossRef]
  61. Bengio, E.; Bacon, P.L.; Pineau, J.; Precu, D. Conditional Computation in Neural Networks for faster models. Available online: https://arxiv.org/pdf/1511.06297 (accessed on 30 August 2023).
  62. Xie, S.; Sun, C.; Huang, J.; Tu, Z.; Murphy, K. Rethinking Spatiotemporal Feature Learning: Speed-Accuracy Trade-offs in Video Classification. In Proceedings of the Computer Vision – ECCV 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; 2018; pp. 318–335. [Google Scholar]
  63. Xu, K.; Qin, M.; Sun, F.; Wang, Y.; Chen, Y.K.; Ren, F. Learning in the Frequency Domain. 2020. [CrossRef]
  64. Kunkel, S.; Schmidt, M.; Eppler, J.M.; Plesser, H.E.; Masumoto, G.; Igarashi, J.; Ishii, S.; Fukai, T.; Morrison, A.; Diesmann, M.; et al. Spiking network simulation code for petascale computers. Frontiers in Neuroinformatics 2014, 8, 78. [Google Scholar] [CrossRef]
  65. Singh, J.P.; Hennessy, J.L.; Gupta, A. Scaling Parallel Programs for Multiprocessors: Methodology and Examples. Computer 1993, 26, 42–50. [Google Scholar] [CrossRef]
  66. Lowel, S.; Singer, W. Selection of intrinsic horizontal connections in the visual cortex by correlated neuronal activity. Science 1992, 255, 209–212. [Google Scholar] [CrossRef]
  67. Iranmehr, E.; Shouraki, S.B.; Faraji, M.M.; Bagheri, N.; Linares-Barranco, B. Bio-Inspired Evolutionary Model of Spiking Neural Networks in Ionic Liquid Space. Frontiers in Neuroscience 2019, 13, 1085. [Google Scholar] [CrossRef]
  68. Furber, S.B.; Lester, D.R.; Plana, L.A.; Garside, J.D.; Painkras, E.; Temple, S.; Brown, A.D. Overview of the SpiNNaker System Architecture. IEEE Transactions on Computers 2013, 62, 2454–2467. [Google Scholar] [CrossRef]
  69. Ousterhout, J.K. Why Aren’t Operating Systems Getting Faster As Fast As Hardware? 1990. Available online: http://www.stanford.edu/~ouster/cgi-bin/papers/osfaster.pdf (accessed on 10 September 2023).
  70. Kendall, J.D.; Kumar, S. The building blocks of a brain-inspired computer. Appl. Phys. Rev. 2020, 7, 011305. [Google Scholar] [CrossRef]
  71. TOP500. Top500 list of supercomputers. 2021. Available online: https://www.top500.org/lists/top500/ (accessed on 24 October 2021).
  72. Han, S.; Pool, J.; Tran, J.; Dally, W.J. Learning both Weights and Connections for Efficient Neural Networks. 2015. Available online: https://arxiv.org/pdf/1506.02626.pdf.
  73. Liu, C.; Bellec, G.; Vogginger, B.; Kappel, D.; Partzsch, J.; Neumärker, F.; Höppner, S.; Maass, W.; Furber, S.B.; Legenstein, R.; et al. Memory-Efficient Deep Learning on a SpiNNaker 2 Prototype. Frontiers in Neuroscience 2018, 12, 840. [Google Scholar] [CrossRef]
  74. Johnson, D.H., Information theory and neuroscience: Why is the intersection so small? In 2008 IEEE Information Theory Workshop; IEEE, 2008; pp. 104–108. [CrossRef]
  75. Leterrier, C. The Axon Initial Segment: An Updated Viewpoint. Journal of Neuroscience 2018, 38, 2135–2145. [Google Scholar] [CrossRef]
  76. Buzsáki, G. Neural syntax: cell assemblies, synapsembles, and readers. Neuron 2010, 68, 362–85. [Google Scholar] [CrossRef]
  77. Végh, J. Why does the membrane potential of biological neuron develop and remain stable? The Journal of Membrane Biology 2025, 1. [Google Scholar]
  78. Levenstein, D.; Girardeau, G.; Gornet, J.; Grosmark, A.; Huszar, R.; Peyrache, A.; Senzai, Y.; Watson, B.; Rinzel, J.; Buzsáki, G. Distinct ground state and activated state modes of spiking in forebrain neurons. bioRxiv. 2021. Available online: https://www.biorxiv.org/content/10.1101/2021.09.20.461152v3.full.pdf.
  79. Hodgkin, A.L.; Huxley, A.F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 1952, 117, 500–544. [Google Scholar] [CrossRef]
  80. Tschanz, J.W.; Narendra, S.; Ye, Y.; Bloechel, B.; Borkar, S.; De, V. Dynamic sleep transistor and body bias for active leakage power control of microprocessors. IEEE Journal of Solid State Circuits 2003, 38, 1838–1845. [Google Scholar] [CrossRef]
  81. Susi, G.; Garcés, P.; Paracone, E.; Cristini, A.; Salerno, M.; Maestú, F.; Pereda, E. FNS allows efficient event-driven spiking neural network simulations based on a neuron model supporting spike latency. Nature Scientific Reports 2021, 11. [Google Scholar] [CrossRef]
  82. Onen, M.; Emond, N.; Wang, B.; Zhang, D.; Ross, F.M.; Li, J.; Yildiz, B.; del Alamo, J.A. Nanosecond protonic programmable resistors for analog deep learning. Science 2022, 377, 539–543. [Google Scholar] [CrossRef]
  83. Losonczy, A.; Magee, J. Integrative properties of radial oblique dendrites in hippocampal CA1 pyramidal neurons. Neuron 2006, 50, 291–307. [Google Scholar] [CrossRef]
  84. Buzsáki, G.; Mizuseki, K. The log-dynamic brain: how skewed distributions affect network operations. Nature Reviews Neuroscience 2014, 15, 264–278. [Google Scholar] [CrossRef] [PubMed]
  85. Végh, J. Physics-based electric operation and control of biological neurons. Biological Cybernetics 2025, 1. [Google Scholar]
  86. Végh, J. Dynamic Abstract Neural Computing with Electronic Simulation. 2025. Available online: https://jvegh.github.io/DANCES/ (accessed on 18 April 2025).
  87. Kole, M.H.P.; Ilschner, S.U.; Kampa, B.M.; Williams, S.R.; Ruben, P.C.; Stuart, G.J. Action potential generation requires a high sodium channel density in the axon initial segment. Nature Neuroscience 2008, 11, 178–186. [Google Scholar] [CrossRef]
  88. Rasband, M. The axon initial segment and the maintenance of neuronal polarity. Nat Rev Neurosci 2010, 11, 552–562. [Google Scholar] [CrossRef]
  89. Kole, M.; Stuart, G. Signal Processing in the Axon Initial Segment. Neuron 2012, 73, 235–247. [Google Scholar] [CrossRef] [PubMed]
  90. Huang, C.Y.M.; Rasband, M.N. Axon initial segments: structure, function, and disease. Annals of the New York Academy of Sciences 2018, 1420. [Google Scholar] [CrossRef] [PubMed]
  91. Végh, J.; Berki, A.J. Revisiting neural information, computing and linking capacity. Mathematical Biology and Engineering 2023, 20, 12380–12403. [Google Scholar] [CrossRef]
  92. Antolini, A.; Lico, A.; Zavalloni, F.; Scarselli, E.F.; Gnudi, A.; Torres, M.L.; Canegallo, R.; Pasotti, M. A Readout Scheme for PCM-Based Analog In-Memory Computing With Drift Compensation Through Reference Conductance Tracking. IEEE Open Journal of the Solid-State Circuits Society 2024, 4, 69–82. [Google Scholar] [CrossRef]
  93. Alonso, L.M.; Magnasco, M.O. Complex spatiotemporal behavior and coherent excitations in critically-coupled chains of neural circuits. Chaos: An Interdisciplinary Journal of Nonlinear Science 2018, 28, 093102. [Google Scholar] [CrossRef]
  94. Li, M.; Tsien, J.Z. Neural Code-Neural Self-information Theory on How Cell-Assembly Code Rises from Spike Time and Neuronal Variability. Frontiers in Cellular Neuroscience 2017, 11. [Google Scholar] [CrossRef] [PubMed]
  95. Kneip, A.; Lefebvre, M.; Verecken, J.; Bol, D. IMPACT: A 1-to-4b 813-TOPS/W 22-nm FD-SOI Compute-in-Memory CNN Accelerator Featuring a 4.2-POPS/W 146-TOPS/mm2 CIM-SRAM With Multi-Bit Analog Batch-Normalization. IEEE Journal of Solid-State Circuits 2023, 58, 1871–1884. [Google Scholar] [CrossRef]
  96. Goikolea-Vives, A.; Stolp, H. Connecting the Neurobiology of Developmental Brain Injury: Neuronal Arborisation as a Regulator of Dysfunction and Potential Therapeutic Target. Int J Mol Sci 2021, 15. [Google Scholar] [CrossRef] [PubMed]
  97. Hasegawa, K.; Kuwako, K.-i. Molecular mechanisms regulating the spatial configuration of neurites. Seminars in Cell & Developmental Biology 2022, 129, 103–114. [Google Scholar] [CrossRef]
  98. Bean, B. The action potential in mammalian central neurons. Nature Reviews Neuroscience 2007, 8. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.
