Preprint
Article

This version is not peer-reviewed.

Front Running Simulation: A Digital Twin Framework for Real-Time Replication, Prediction, and Goal Navigation

Submitted:

16 April 2026

Posted:

20 April 2026


Abstract
Front Running Simulation (FRS) is a Digital Twin–enabled capability that continuously replicates the current state of physical systems, predicts probable future states, and identifies actions to navigate to defined goals while minimizing the expenditure of scarce physical resources. Unlike traditional simulation, which operates offline with predefined initial conditions, FRS is continuously synchronized with reality through Digital Twin Instances (DTIs), allowing forward-looking simulation from the present state. FRS is based on three core activities—replication, prediction, and navigation—supported by data, Models of Reality (MoR), simulation, and information. Data enables replication; simulation, based on MoRs, enables prediction through causal and probabilistic methods; and information enables goal-directed action selection. The integration of Digital Twin Aggregates (DTAs) and Artificial Intelligence introduces Bayesian, data-driven prediction that complements physics-based simulation. This hybrid approach combines exploration of possible futures with rapid identification of probable outcomes. As the examples demonstrate, FRS shifts the focus from only adverse event avoidance to goal attainment under constraints, enabling proactive, information-driven decision-making. It provides a unifying Digital Twin FRS framework for Models of Reality, data, simulation, information, and AI to improve operational efficiency and effectiveness in complex systems.

1. Introduction

For all of history, humans have had a keen interest in predicting the future. At the very fringes of credibility, we’ve had shamans and fortune tellers who professed to foretell the future in the entrails of chickens and the positions of stars. In ancient Greece, the Oracle of Delphi was reputed to tell the future, albeit in ambiguous utterances. Crystal balls and tarot cards became all the rage in the Middle Ages. Humans looked for any method, no matter how implausible, to deal with the uncertainty of the future, in hopes that predicting it would let them head off the adverse events that stood between them and their goals.
So why is it that humans are so interested in having a crystal ball for the future? The main reason is that most of human existence, and almost all its progress, is driven by goal seeking while attempting to minimize the expenditure of scarce physical resources. For most of human history, that goal seeking was simply survival.
These attempts at predicting the future were meant to augment what humans naturally do. Humans constantly attempt to realize their goals by replicating the current state of reality, predicting the possibilities that can happen, and then taking actions to navigate (Navigate is an especially appropriate word. Unlike replicate and predict, navigate represents not only an effort but an outcome. It is defined in the Cambridge dictionary as “to successfully find a way from one place to another.” https://dictionary.cambridge.org/us/dictionary/english/navigate) from the present into a future that realizes their goals. Humans do this for short-term goals such as immediate survival and, now, for long-term goals such as retirement.
For example, our goal as we drive is to arrive at our destination safely, avoiding accidents. To do so, we continually engage in replication, prediction, and navigation with our goal of arriving at our destination safely. When we approach an intersection, we assess the velocity of a vehicle approaching perpendicular to us and predict whether that vehicle poses an accident threat to us in the intersection. We do this on a continuous basis. Based on our continuous predictions that the vehicle is slowing and will stop in time, we simply continue through the intersection.
However, if our prediction, based on the vehicle’s rate of progression, is that the vehicle will be unable to stop, we use the information that we have about driving to navigate so as to avoid an accident. Based on our mental simulation of the probabilities of which actions will produce which outcomes, we will take such actions as slamming on our brakes to stop before the intersection or accelerating to get through the intersection before the arrival of the oncoming vehicle.
We do this replication of the current environment, prediction of what we think is going to happen, and, using probabilities of the success of different actions, navigation to our goal continually throughout our day. The issue, given all the traffic accidents that actually happen, is that the human brain is extremely limited. This limitation applies to replicating the current environment, predicting what is going to occur, and estimating the probability that actions will lead to successful navigation. Humans historically considered themselves fortunate just to obtain their goals. The minimization of physical resources was an unusual bonus.
With the advent of computers, we now have the opportunity to have a crystal ball that foresees the future so that we can obtain our goals while minimizing physical resources. As described above, this requires that we engage in replication, prediction, and navigation. This is the purpose of Front Running Simulation (FRS).
Unlike the usual simulation models, which run unconnected to time in the physical world, FRS is conceptualized to perform like the driving example. FRS is performed on a continual loop, replicating the current state, predicting future states, and providing data and information to navigate to goals. The cadence of its loop depends on the rate of change. For the driving example above, FRS would be running constantly. In production facilities, a cadence of every fifteen (15) minutes would most likely suffice.
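The continual loop described above can be sketched as follows. This is a minimal illustration, not a published FRS specification: `replicate`, `predict`, and `navigate` are hypothetical callables standing in for the data, simulation, and information elements discussed later in this paper.

```python
def frs_loop(replicate, predict, navigate, goal, steps):
    """Hypothetical FRS Overwatch loop: replicate the current state,
    predict forward, and derive a navigation action toward the goal.
    Names and signatures are illustrative assumptions."""
    actions_log = []
    for _ in range(steps):
        state = replicate()                # sync virtual state with physical reality
        futures = predict(state)           # simulate probable future states
        action = navigate(futures, goal)   # select an action that steers toward the goal
        actions_log.append(action)
        # A real deployment would sleep here for the loop cadence:
        # effectively continuous for driving, roughly every 15 minutes
        # for a production facility.
    return actions_log
```

The cadence is deliberately left to the caller, since, as noted above, it is dictated entirely by the rate of change of the physical environment.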
In the remainder of this paper, I will discuss the capabilities and new 21st century or, as I have contended, Third Millennium [1] digital/virtual (This is a continued example of my use of virtual and digital as synonymous, which I will continue in this paper. There actually is no true virtual space. The virtual representation is always instantiated in atom-based physical material. In humans, it is in the carbon-based matter of the brain. In digital computers, it’s in the silicon-based matter of digital processor and memory components. “Digital” and “virtual” do share the characteristic that they are both intangible representations, so their interchangeability is warranted.) concepts that we are bringing to bear, such as Digital Twins with Front Running Simulation. FRS will provide us with the data that we need to understand and replicate our environment, simulation for the prediction of future events, and information to navigate to our goals. With the computing capability that we now have available, or soon will, we will possess the ability to at least probabilistically understand what is occurring, especially adverse events, and prevent or mitigate their effects. This will be a quantum leap from our current capabilities.

2. The Digital Twin Model

The Digital Twin Model is the basis for Front Running Simulation. The commonly accepted Digital Twin Model is shown in Figure 1 [2]. The Digital Twin Model is used across a wide variety of disciplines and industries [3]. It consists of three components: physical objects in the physical environment, digital objects in a digital environment, and a persistent two-way connection between the physical and digital. The persistent two-way connection is the particular feature that FRS relies on. Data is sent from the physical to the digital to provide replication on a real-time or periodic basis, depending on the use case. Prediction via simulation is performed on the right-hand digital side, and the resulting information and data are returned to the physical environment so that the physical system can continually navigate to the desired goal.
There are three types of Digital Twins [4]. Of the three types of Digital Twins, Digital Twin Prototype (DTP), Digital Twin Instance (DTI), and Digital Twin Aggregate (DTA), the DTI and the DTA are the most relevant to FRS. The DTI is created when its Physical Twin Instance is manufactured. The DTI remains tethered to its Physical Twin Instance for its remaining life.
Driven by use cases, the DTI contains the longitudinal data of its Physical Twin counterpart. Examples of the types of data the DTI can collect are: “the system’s own identity, the location of a system and/or its components, the change in velocity and acceleration, state changes, both discrete and continuous, from one state to another, such as off to on, not-triggered to triggered, heat gradients over time, the forces that are acting on the system, such as heat, air pressure, and gravitational forces, and the presence of sound, light, and a wide spectrum of electro-magnetic waves, the presence of other objects and their mass, shape, and relative speed and direction in relationship to our artifact.” [5].
Unlike the Physical Twin Instance, the DTI remains available and useful after its Physical Twin Instance is retired from service. The DTI replicates its corresponding Physical Twin, and the DTI prediction is specific to that Physical Twin.
Within the Digital Twin Model, the Digital Twin Aggregate (DTA) represents the population-level instantiation of Digital Twin Instances (DTIs), capturing longitudinal and latitudinal data across many similar assets, processes, or systems over time. Unlike a single DTI, which reflects the state and behavior of an individual entity, the DTA embodies the accumulated experiential history of the entire class, including operational conditions, interventions, and outcomes.
In the context of Front Running Simulation (FRS), the DTA serves as a probabilistic and empirical resource that enables Bayesian-informed prediction by leveraging correlations and patterns derived from this population history. This allows FRS not only to simulate causally modeled futures within a specific DTI, but also to augment those simulations with statistically grounded Bayesian likelihoods of outcomes and candidate actions observed across the aggregate. As the scale, diversity, and temporal depth of the DTA increase, the fidelity of these probability estimates improves, thereby enhancing FRS’s ability to recommend actions that achieve task goals while minimizing wasted physical resources under time-constrained or information-limited conditions.
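As a toy illustration of this Bayesian-informed prediction, not taken from the paper, population-level outcome counts from the DTA can serve as a prior over an action’s success probability, which each DTI’s own history then updates. The conjugate Beta-Bernoulli form and the uniform Beta(1, 1) starting point are my assumptions for the sketch.

```python
def dta_posterior(pop_successes, pop_failures, dti_successes, dti_failures):
    """Illustrative conjugate Beta-Bernoulli update: the DTA's population
    history of an action's outcomes acts as the prior over its success
    probability, and an individual DTI's own outcomes update it to a
    posterior. Beta(1, 1) uniform prior is an assumption."""
    alpha = 1 + pop_successes + dti_successes   # prior + observed successes
    beta = 1 + pop_failures + dti_failures      # prior + observed failures
    posterior_mean = alpha / (alpha + beta)     # expected success probability
    return alpha, beta, posterior_mean
```

With a large, relevant population the prior is sharp, so even a young DTI with little history of its own inherits a well-grounded estimate, which mirrors the fidelity claim above.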
Figure 2 represents my current Digital Twin Model, which is a throwback to its original introduction in 2002 [6]. The physical objects in this environment are replicated in the large virtual space on the right-hand side.
The forward prediction and navigation are done in the lower modules, which inherit the same initial conditions, namely the replication of their Physical Twin Instance. However, the different simulations will have different assumptions as to how future events will affect them and will predict different outcomes. Different actions will be run against these different scenarios to find the action that will best obtain the goal being sought.
While there is only one replication space shown in the model, its state can be captured and saved at a set cadence for future use, analysis, and traceability. That is unlike its physical counterpart, where once the state changes, the old state is lost, as are all past states in the physical world.

3. Introduction of Front Running Simulation

While FRS is a concept of general applicability, I introduced it as particularly applicable to production or manufacturing facilities. These facilities are intended to be highly deterministic in order to be both efficient and effective. Their intent is to plan production to produce specified products using the minimum of resources. There is even a common term for this – lean manufacturing. Any deviation from the lean manufacturing plan, such as an equipment malfunction, is considered an adverse event to be avoided or mitigated as quickly as possible. Because of a lack of predictive capability, these adverse events are almost always dealt with on an inefficient, reactive basis after the adverse event occurs.
My first description of Front Running Simulation was in a Whitepaper series that I did in 2017. The paper was called “Driving Digital Continuity in Manufacturing” [7]. I continued to refer to FRS in my work over the ensuing years, most recently here [8].
It is instructive to reproduce the original FRS description as a starting point here.
“Instead of simply a factory replication showing what is currently occurring on the factory floor, we would run simulations of the next few seconds, few minutes, and/or few hours. We would utilize the current state of the factory data at any specific point in time and simulate from that point on a continuous basis.
This means that the simulation would run in front of the actual factory, providing a window on what would happen to the factory in the immediate future. Using this front running simulation, we could provide OverWatch of the factory and step in to adjust or even shut down intelligent equipment if the simulations indicated a potential problem developing.
This would not only allow us to see what actually is occurring on the factory floor at all points in time but predict problems in the future based on the current and actual states of the factory floor.
Front running would be especially useful during the manufacturing ramp up of a new product. Production could be simulated forward from each step of the Bill of Process (BoP) utilizing actual information from the steps already performed. When the front running simulation showed that the future processes were not going to produce the product as desired, the BoP could be adjusted right then and there. This would reduce the number of bad builds and compress the time to move through ramp up to full quality production. This is just one example of digital continuity between Product Engineering, Industrial Engineering and Manufacturing Operations.
This integration of virtual and physical activities requires complete fidelity and timelessness of digital continuity. Having machines communicate with each other and having humans work alongside robots, i.e., cobotics, requires this kind of digital continuity. IIoT on the factory floor cannot safely exist without this capability.”
The main characteristics of the original view of FRS are:
  • Goal seeking replication, prediction, and navigation
  • Continuous Overwatch of factory operations, matching planned operations against predicted operations
  • Continuous replication of the current physical state as the simulation starting condition
  • Continual simulations of predicted future states including adverse events and deviations
  • Continual simulation predicted future operational outcomes of actions to navigate to goals
  • Temporal horizon and update cadence determined by operational dynamics and use cases
This was a reasonable first perspective on a new paradigm. Up until then, simulation had been viewed as a planning and exploration tool, untethered from actual reality. Computer simulations were established with user-defined initial conditions and executed to produce outputs based on programmed assumptions. Their usual use was as a planning tool for development activities. Once the simulated entity went into operation, the simulation was put away.
This was primarily because the computing capability to run simulations in real time, let alone faster than real time, simply did not exist. However, by 2017 when I introduced FRS, it could be foreseen by plotting Moore’s Law that the computing capability FRS would require was on the near horizon.

Front Running Simulation in Overwatch Mode

Figure 3 is the current Front Running Simulation Model, as I have historically presented it. It is effectively the standard Digital Twin Model presented vertically and advanced over time. This is the FRS Model in what I called Overwatch mode, running on a continual loop whose cadence is dictated by the rate of change in its environment. While it is engaged in monitoring activity, as is common in factories, it is intended to go beyond that: to predict future factory states and, if deviations or disruptions are predicted, take actions to bring the factory back into steady-state operation to meet its planned goals.
Consistent with the Digital Twin Model, this figure is divided into two environments. The bottom part is the environment of our physical world with its physical objects. It is completely constrained by the laws of our physical universe. Time moves in one direction and has a cadence that we have no control over. The only way to move into the future is to wait for the passage of time. Only one possibility will happen. All other possibilities are collapsed into an actuality.
The upper part of the figure is the virtual environment with Digital Twin Instances (DTIs) that correspond to their physical object counterparts. In this virtual environment we are only constrained by the Models of Reality that we choose to institute in this environment. We need to follow the rules of the physical universe in terms of cause and effect. However, we can unconstrain time, meaning that we can control the speed of time, which allows for prediction.
However, the environment at every time x is dictated by the reality we have captured from the physical world in the form of IoT sensing and resulting data. In order to predict the future, we need to replicate and sync that physical reality state in the virtual environment. We do that in the same way humans do by sensing the physical world and recreating it in a virtual one.
In FRS at every time x+y, what we want to do is predict the future states of the physical environment and its DTIs. This is why replication is so important: if we do not have the same state of reality as it exists in the physical environment, we will not be able to accurately predict what will happen in the future. At every time x, we want to predict what will happen based on the state that we start with and the facts and assumptions that we make about how reality behaves.
Since we may not be perfectly accurate in our understanding of reality, the result will be probabilities of the different future states that may occur. While we need to be aware of the potential for deviations, if we have planned our production processes properly and executed according to that plan, we will see a prediction of what we had planned match what then occurs.
Obviously, one thing that FRS must continue to do in Overwatch mode is to look at its predictions and see how well they match what then occurs in reality. This will allow us to modify the facts and assumptions that we’ve been making, or add additional data collection, so that our projections become increasingly accurate in modeling the actual physical world.
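This prediction-versus-reality check can be sketched very simply. The function below is an illustration under my own assumptions, not part of any FRS specification; in particular, the 10% relative-error threshold for flagging recalibration is an arbitrary placeholder.

```python
def overwatch_residuals(predicted, observed):
    """Sketch of the Overwatch check: compare each prediction with what
    then occurred, and flag when the mean error suggests the Model of
    Reality or its assumptions need revision. The 10% relative-error
    threshold is an illustrative assumption."""
    residuals = [abs(p - o) for p, o in zip(predicted, observed)]
    mean_error = sum(residuals) / len(residuals)
    observed_scale = sum(abs(o) for o in observed) / len(observed)
    needs_recalibration = mean_error > 0.1 * observed_scale
    return mean_error, needs_recalibration
```

When the flag trips, the appropriate response in FRS terms is to revisit the facts and assumptions, or add data collection, rather than to simply distrust the simulation wholesale.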

Front Running Simulation and Goal Deviations

There are a couple of points to make with regard to deviations from our plans that have the potential to prevent us from meeting our goals. While we would certainly like to predict and prevent these deviations from occurring, especially if they are partial or total equipment failures, our primary concern is meeting our goals while using the minimum of necessary resources.
If we cannot predict that a deviation is going to occur and one does occur, then our task is to take actions so that it does not prevent us from meeting our goals. In many cases, the cost of preventing a deviation may be far higher than letting it occur and taking other actions that prevent the problem from affecting the planned attainment of our goals. We need to remember that our primary focus is not preventing adverse events or deviations but obtaining our goals with a minimization of scarce physical resources.
Figure 4 represents the FRS model in predicting deviations or adverse conditions. Where we have an issue is when we start to see adverse events, actual or predicted, that are not what we have planned for in the production facility. We need to assess the probability that the future events flowing from that deviation will be detrimental to obtaining our goal. At that point, we need to look for actions that have probabilities of getting us back to achieving our goals.
Most importantly, we will look at which actions maximize the probability of obtaining our goals and, secondarily, attempt to minimize the amount of scarce resources expended in obtaining those goals. We create or use information to select the action that we determine matches our risk profile. We will then execute those actions so as to appropriately navigate to avoid the deviations, mitigate them, or even negate their effect and thereby obtain our goals.
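The selection rule just described, goal probability first, resource cost second, subject to a risk profile, can be sketched as follows. This is a hypothetical illustration; the candidate structure and fallback behavior are my assumptions, not a published FRS algorithm.

```python
def select_action(candidates, risk_threshold):
    """Hypothetical FRS action selector. `candidates` maps an action name
    to (goal_probability, resource_cost). Keep only actions whose goal
    probability meets the risk threshold, then pick the cheapest; if none
    qualify, fall back to maximizing goal probability regardless of cost."""
    viable = {a: (p, c) for a, (p, c) in candidates.items() if p >= risk_threshold}
    if not viable:
        # No action meets the risk profile: goal attainment dominates,
        # so spend more resources if that is what success requires.
        return max(candidates, key=lambda a: candidates[a][0])
    return min(viable, key=lambda a: viable[a][1])
```

Note how the fallback branch encodes the point made later in this section: FRS may recommend a more resource-expensive action when the minimum-resource option does not meet the risk threshold for success.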
We do that first in the virtual space, where there is almost zero cost if we are mistaken in thinking that we have found a solution. When we are confident in our solution in virtual space, we then move to physical space. This requires that we execute the solution successfully so that our navigation will result in the physical world reproducing the virtual world result.
This is in essence what FRS is intended to do: replicate the physical world, predict future events, look for deviations that will prevent us from reaching our goal, and then select actions that will allow us to navigate back to successfully obtaining our goals.
There are two critical elements here. It is one thing to predict the future that our crystal ball shows us. It is another to have the capability to physically execute a solution. An accurate prediction of a solution is useless without the ability to successfully perform the actions needed to obtain the physical results.
The primary purpose of FRS is to obtain its goals. FRS replicates, predicts future states, and then provides action-oriented information to navigate to those goals. Its secondary purpose is to minimize the use of scarce physical resources. Given its primary goal, FRS may recommend actions that use more resources than the minimum if the probability of obtaining its goals with the minimum resources does not meet the risk threshold for success. While avoiding adverse events, such as machine failures, would appear to be highly desirable, FRS will recommend that only if the action for that avoidance is more resource-effective than letting the adverse event occur and mitigating it.

4. FRS Operational Substrate

Front Running Simulation requires an operational substrate to support replication, prediction, and navigation. Figure 5 is a visual representation of their functions. The operational substrate is composed of Models of Reality (MoR), data, simulation models, and information. These elements do not merely precede FRS as prerequisites; they form the active medium through which replicated reality is continually projected forward, allowing operational goals to be pursued while minimizing the expenditure of scarce physical resources. Data enables replication; simulation, grounded in MoRs, enables prediction; and information enables goal-directed navigation. To understand how FRS replicates, predicts, and navigates, we need to understand the specific function that each of these operational substrate elements performs.

Models of Reality

Models of Reality are structured representations of how the world operates. As a result, MoRs are integral to all three activities of replication, prediction, and navigation. We rely on MoRs to provide a regularity that we can count on [9]: we expect reality to produce the same results from the same causes. As humans, we develop MoRs naturally. The most basic MoRs are biological and ontological. We develop many of these MoRs soon after birth [10,11,12]. We develop a sense of space and spatial orientation, object permanence, causality, motion continuance, and a direction of time.
We expand those MoRs as we interact with the physical world. Almost every child learns by experience that something hot will cause pain, no matter how many times they may have been told so before they touch a hot object. Children hopefully learn, from the permanence of objects and from their parents, not to walk into traffic, along with other experiential lessons involving safety.
We acquire more formal MoRs through education; these exist as evidence-based scientific abstractions. We acquire formulas and models in science and engineering that move from general abstractions into precise calculations. We also find MoRs in statistics and correlations whose cause and effect we may not fully understand, but which provide us with probabilistic regularity.
These MoRs provide the structure and theory for why we can predict the future with, at a minimum, a directional predictive capability and, at a maximum, almost absolute certainty. This makes FRS feasible.

Data

Data is the substrate element that enables replication. Data are facts about the state of reality [13]. In order to replicate the aspects of physical objects and their environment that pertain to our use cases, we need to collect, process, and organize data in the digital realm.
Replication determines the current understanding of the state of the physical world. It is the starting point from which all future actions will progress. Without a full and complete understanding of where we are, we will not be able to predict what the future states will be, nor will we be able to navigate to our goal.
Replication is usually more than capturing data at a single point in time. Replication implies duplicating something in a way that is understandable, not merely observable. A single data point capture represents a fact about reality at a moment, but in isolation it is often insufficient to determine meaning or state.
It is only when data are captured longitudinally—across time—that they can be evaluated against our Models of Reality. Through temporal continuity, patterns emerge that allow us to determine whether observed behavior is consistent with known structure, physics, or process dynamics. In this sense, replication necessarily involves model-constrained sense-making, where data acquire meaning by fitting within an existing representation of how reality actually behaves.
Until fairly recently, the collection of data was a human-mediated process. Humans observed, interpreted, and recorded the data of the physical world. That severely limited the quantity and quality of the data available. Humans lived in a small bubble of potential data that was readily accessible to them. The limited data availability was not a constraint because human brain computing power was also limited.
The Third Millennium brought computer-based capture of data, commonly referred to as IoT, which enabled massive amounts of objective data to be collected, processed, and organized without human mediation. While this expanded the bubble of potentially accessible data by orders of magnitude [14], it would be useless to humans without concomitant access to the computing capability needed to reduce that data to a useful form. Digital Twin FRS provides that capability.
The problem is that the universe presents us with an infinite number of facts about reality. However, data is highly granular, so we only need to collect the data needed for our use cases. We do not need to replicate the entirety of a physical object in data, just the attributes of the object state we require, such as only outer mold lines (OMLs) and nothing internal. We need to use our MoRs to determine which facts are relevant for the predictions we need to make. Selecting the data that is relevant to our needs is often an iterative process.
The characteristics that facts need to have for FRS are that the data are [8]:
  • Selected for purpose
  • Contextually relevant
  • Objective
  • Complete
  • Accurate
  • Interoperable
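A few of these characteristics lend themselves to automated checks at ingestion time. The sketch below is purely illustrative, with hypothetical field names; it tests completeness (required fields present and non-null) and selection for purpose (no fields beyond those chosen for the use case).

```python
def replication_ready(record, required_fields):
    """Illustrative ingestion check against two of the data characteristics
    above. `record` is a dict of sensed values; `required_fields` is the
    set of fields selected for the use case. Field names are hypothetical."""
    missing = [f for f in required_fields if record.get(f) is None]
    extraneous = [f for f in record if f not in required_fields]
    return not missing and not extraneous
```

Characteristics such as objectivity and accuracy cannot be checked this mechanically; they depend on the sensing chain and on validation against the Models of Reality discussed above.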

Simulation

Simulation is the substrate element that enables prediction. There is a common perspective that it was the development of computers that enabled simulation. While that is true of digital simulation, simulation itself is as old as human existence, as I have described in the case of prehistoric man producing simulations of hunting for food with his tribe [15]. Humans today constantly perform complex simulations similar to FRS, as my driving example in the introduction demonstrates. Simulation is defined as one process that imitates another process [16]. Processes are, by their very nature, time-evolved. By that description, simulation is a foundational aspect of human thinking.
The invention of computers in the last half of the 20th century was a watershed event for simulation. It marked the first time in history that the computing behind time-evolved simulation was performed by something other than a human brain. Because of the limited computing capabilities of early computers, these simulations were relatively simple iterative mathematical calculations, with output being reams of paper with numbers that needed to be deciphered and visualized by humans [17].
Fast forward to today. Computing power has increased approximately 100 billion times since the mid 1970s (The operations of the first supercomputer, the Illiac IV, reported to me in the late 1970s. Its computing speed was theoretically 25 megaFLOPS. The current fastest supercomputer, El Capitan, has a speed of 2.8 exaFLOPS). This is a discontinuous scale shift. Computer-based simulations have become highly complex, and output can be visually photorealistic video. The modeling of physical objects with their environments closely mirrors the behavior of their real-world counterparts under comparable forces. This enables the transition from physical-world trial-and-error with substantial waste to goal-directed navigation in the virtual world with minimal waste of scarce physical resources.
Simulations are about modeling and predicting possible future outcomes. There are two approaches that simulation uses: causation and correlation. FRS is intended to be a fusion of these two methods, physics and big data [18]. For causation, there are well defined and understood inputs. We apply functions to those inputs to produce well defined outputs that have an acceptable margin of error. We have almost complete confidence that in the physical world those inputs will always result in those outputs.
For correlations, there are no explicitly defined inputs or governing functions. Instead, we observe multiple variables over time and identify statistical probability distributions [19]. Outputs are not produced by applying known functions, but by extrapolating from historically observed relationships. The behavior is probabilistic rather than deterministic, with confidence expressed as likelihoods rather than fixed margins of error. The strength of the correlations will determine whether we can rely on them for prediction.
The issue with using correlations in FRS simulations is that these simulations can have multiple variables. Naively sampling correlated variables as though they were independent, when they may not be, will produce distorted outcome distributions. The use of Monte Carlo techniques will help in addressing this issue [20,21]. Monte Carlo simulation can incorporate correlated input variables by sampling from joint distributions that preserve observed statistical relationships.
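A minimal sketch of such correlated sampling, assuming two normally distributed inputs with an observed correlation rho, uses a 2x2 Cholesky factor to combine independent standard normals so that the drawn pairs preserve the correlation rather than being sampled independently:

```python
import random

def correlated_samples(mu_x, sd_x, mu_y, sd_y, rho, n, seed=0):
    """Monte Carlo draws of two correlated normal inputs. The 2x2 Cholesky
    factor [[1, 0], [rho, sqrt(1 - rho^2)]] mixes independent standard
    normals z1, z2 so the resulting pairs preserve correlation rho."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        x = mu_x + sd_x * z1
        y = mu_y + sd_y * (rho * z1 + (1 - rho ** 2) ** 0.5 * z2)
        pairs.append((x, y))
    return pairs
```

For more than two variables, the same idea generalizes to the Cholesky factor of the full covariance matrix; copula methods extend it to non-normal marginals.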
Because of the DTA, the probabilities are Bayesian in nature. In traditional Monte Carlo, probability distributions are assumed, estimated from limited historical data, and often static over time. FRS Monte Carlo, with its use of the DTA, is empirically grounded and continuously updated. Under conditions of relevance, consistency, and proper modeling, the larger the population and the more experience over time of the DTIs collected in the DTA, the more accurate the Bayesian probabilities become. This ensures that simulated scenarios remain consistent with empirically observed system behavior, which is critical for FRS.

AI Based Evolution of Front Running Simulation

Since the introduction of FRS, there has been a major technological advancement that enables the evolution of FRS into a much more powerful capability. That technological advancement is Artificial Intelligence (AI). AI adds a new dimension of predictive capability to FRS, augmenting simulation based on causal and correlative prediction.
Technically, AI is not simulation, as it does not imitate one process by another. AI enables behavioral prediction based on big data and Bayesian inference. FRS has now evolved to include an additional capability: using the longitudinal behavioral data accumulated by the DTA not to simulate but to infer probable future outcomes. This does not replace simulation but augments it. DTA AI predictions, which are also Bayesian based [22], may recommend actions whose causal or correlative mechanisms are not fully understood but whose historical correlations with successful outcomes are statistically supported.
It is risky to rely on this black-box prediction versus white-box simulation. However, in a hybrid form using both methods, DTA AI prediction can triangulate with the simulation: in the best case reinforcing the simulation result, and in the worst case prompting further questioning of the simulation results.
Because DTA AI is effectively instantaneous, it may provide a necessary action when simulation would be too slow to provide one. While that is risky, in a situation where an action is immediately required, DTA AI may be the only way to provide one. This makes the DTA much more powerful: DTA AI effectively becomes a prior distribution generator. Simulation explores possible futures, while AI, based on the DTA, ranks probable futures from historical data.
It’s important to note that DTA-based AI predictions should not replace simulations. A hybrid fusion of simulation and DTA AI prediction greatly enhances FRS’s capabilities. FRS becomes the unifying architecture between classical engineering simulations, modern AI systems, and Digital Twin theory. FRS now encompasses both how physical systems should perform, causally or probabilistically, and how physical systems have actually performed over time. This is a huge leap in strengthening FRS as a predictive capability.
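The hybrid fusion described here can be sketched in a few lines: the DTA-derived AI ranking acts as a prior over candidate futures, the simulation supplies likelihoods for the currently replicated evidence, and Bayes' rule combines them. All probabilities below are assumed, illustrative values, not outputs of any real DTA or simulation.

```python
import numpy as np

# Candidate futures for the remainder of a shift (illustrative).
outcomes = ["on_target", "minor_delay", "bottleneck"]

# DTA AI: instantaneous ranking of probable futures, learned from the
# historical behavior of the DTI population (assumed values).
dta_prior = np.array([0.70, 0.20, 0.10])

# Simulation: likelihood of the currently replicated sensor evidence
# under each candidate future (assumed values from simulation runs).
sim_likelihood = np.array([0.10, 0.40, 0.80])

# Bayesian fusion: posterior is proportional to prior times likelihood.
posterior = dta_prior * sim_likelihood
posterior /= posterior.sum()

for name, p in zip(outcomes, posterior):
    print(f"{name:12s} {p:.3f}")
```

In this sketch the simulation evidence sharply raises the posterior probability of the bottleneck future relative to its historical prior, which is the "triangulation" role described above: the two methods either reinforce each other or flag a disagreement worth investigating.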

Information

Information is the substrate element that enables us to navigate to achieve our goals. There is almost no consensus, even among academics, as to what information is [23]. I have long proposed that the relevant perspective is to focus on what information does [24]. With that focus, I have contended that information is a potential replacement for the scarce physical resources of time, energy, and material.
Information is the selection of navigable actions to achieve defined goals with minimal wasted physical resources. These actions are derived from the relevant data representing the current replicated state, guided by Models of Reality and predicted through simulation. Information enables us to determine and perform goal-oriented tasks while minimizing the expenditure of scarce physical resources needed to perform those tasks successfully. The physical resources we have at our disposal are time, both labor hours and elapsed time, energy, and materials.
Figure 6 displays bars that represent the total cost expenditures for performing a hypothetical goal-oriented task under different conditions. The left side represents performing the task using trial and error. We can divide that task into two categories of resource usage: the optimum minimum expenditure of scarce physical resources and the remaining scarce physical resource usage we actually expend in performing the task. This latter category is considered to be wasted resources.
The first category of physical resource usage, which is the lower part of the bar, is the minimum expenditure of physical resources that if we were omniscient and omnipotent that we would need to perform the physical task. This category is the minimum of resources we would utilize to successfully complete the task if we knew the actions that we needed to take and that we could execute those actions perfectly. This category is always subject to constraints of what we will do (moral) and what we can do (physical and legal).
Because we are neither omniscient nor omnipotent, the upper part of the bar is the remainder of the physical resources that we actually use to perform the task. These are, by definition, wasted physical resources. Because we are not omnipotent, we may know what we need to do but simply don’t have the technical capability to execute.
We are also not omniscient, so more commonly we do not know what to do. We historically have used trial and error to find the actions that work, a process in which every failed attempt wastes physical resources. This is called the Edisonian method because Edison made 10,000 attempts to discover the correct elements needed for the light bulb [25]. We often have determined the information that produced successful actions, only to fail to capture it or to keep it only as tacit information (this is more commonly called tacit knowledge; as I have described elsewhere, knowledge is a repository in which information and data reside, and "knowledge" is a metonymic substitution for the data and potential information residing in a knowledge repository) that resided in one individual and often disappeared when that individual died. Tacit information still results in physical resource waste due to its ill-defined, obscure, and intuitive nature [26].
In order to quantify and consolidate these different types of physical resources, we must employ a cost or payoff function [27]. Here we are using a financial cost function that transforms the physical resources into monetary costs. The entire cost of the task is C(t,e,m). The cost of the optimum usage is Co(t,e,m). The cost of the remaining or wasted resources is Cw(t,e,m).
The right side of Figure 6 shows the role of information. The minimum expenditure of physical resources to perform the task efficiently and effectively does not change. However, information can substitute or replace the wasted resources, as shown by the upper part of the bar. We said above that if we “knew” the actions we needed to take or not take, that is what we would do. The use of information is how we know what actions to take and not to take.
However, we are still neither omniscient nor omnipotent, so we most likely need to build in a contingency buffer. The information of the actions that will result in the minimum of physical resource usage may have a lower probability of success than actions that result in slightly higher expenditures. So, we may select the latter actions which have a higher probability of success. This results in a contingency expenditure of physical resources, represented by the middle bar on the right. The cost of the contingency usage is Cc(t,e,m).
The issue we have with information is how to cost it. We do not have units of information as we have for physical resources. Despite having no unit of measurement, information has a cost. While we cannot measure information in units, we can quantify the hardware, software, and labor costs required to develop it.
The condition under which this substitution of wasted physical resources by information holds true is indicated by the formula C(I) < ΣCw(t,e,m), where Cw is the cost of the wasted resources in the upper left bar. This formula states that replacing wasted resources with information is beneficial only if the cost of the information is less than the total cost of the wasted resources across all the times the task is performed.
Figure 6 shows information replacing all the wasted resources. This is the ideal, and it probably does not happen except in fully automated tasks. However, since the potential for wasting resources is infinite (the data that a perpetual motion machine is impossible still has not stopped the waste of resources trying to invent one, and the data that the earth revolves around the sun, and not vice versa, was available for hundreds of years without stopping the waste of countless hours calculating orbits according to the Ptolemaic theory; in both examples, the action of the information that replaces the wasted resources is simply "stop", and if a task is impossible to accomplish, the entire bar for the task is red, i.e., all physical resources are wasted resources), information can and has substituted for task-wasted resources.
It is also important to understand that information is a non-rival good [28]. Unlike physical resources, information can be used over and over again without diminishing it. It is an asset, not an expense like the wasted physical resources it replaces. Once created, the cost of information used in a task is the fixed cost of the computing/communication infrastructure. However, for this to be the case, the information needs to be captured, organized, and reused as potential information in a knowledge repository [8].
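A minimal sketch of the substitution condition C(I) < ΣCw(t,e,m) follows, using assumed dollar figures: because information is a non-rival asset, its one-time cost is amortized over every execution of the task, while waste recurs each time the task is performed.

```python
# Hypothetical sketch of the substitution test C(I) < sum of Cw(t, e, m).
# All figures are assumed monetary values from the document's cost
# function: information cost is fixed (non-rival), waste recurs per run.

def information_pays_off(c_info: float, c_waste_per_run: float, runs: int) -> bool:
    """True when the one-time information cost is below the total
    waste it replaces across all executions of the task."""
    return c_info < c_waste_per_run * runs

# Assumed figures: $50,000 to develop the information, and $800 of
# wasted time/energy/material avoided per task execution.
c_info, c_waste = 50_000.0, 800.0

print(information_pays_off(c_info, c_waste, runs=10))    # → False (too few runs)
print(information_pays_off(c_info, c_waste, runs=100))   # → True (cost amortized)
```

The break-even point in this example is 63 runs; beyond it, every further execution of the task is pure resource savings, which is why capture and reuse in a knowledge repository matter.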
This idea of using information as a replacement for wasted resources is what enables FRS to navigate in obtaining goals. FRS would be useful if all it did was replicate and predict, leaving humans to figure out what actions to employ. With information created ad hoc or retrieved from a knowledge repository [8], FRS can assess the deviation gap between the current predicted state and the goal to more exhaustively, efficiently, and effectively create and search the solution space, recommending actions to obtain the task goals or sending commands to implement actions. The examples below illustrate this.

5. HITL and HOOTL

I have not previously dealt with the topic of Humans-In-The-Loop (HITL), or Human-Out-Of-The-Loop (HOOTL) with respect to the Digital Twins. The Digital Twin is equally applicable to both. The requirement for a Digital Twin is that data and information come from the digital realm to the physical one. That data and information may go to a human, so that the human can assess and make decisions as to the actions that need to be taken.
Similarly, information in the form of commands may be sent to the physical twin instance itself in order to command action. For autonomous objects, data and information relevant to the specific situation being encountered may be sent to the object, which will make its own determination as to the actions it will take. I have pointed out that Digital Twins can be of great value to humans in the loop by helping guard against biases such as confirmation and confidence bias [29].
I have long proposed the idea of what I called cued availability [24], which foreshadowed the ability of AI. Humans out of the loop will usually be required when the time frames for response exceed the ability of humans to understand and respond, such as in the operation of nuclear reactors. However, the risk of a highly inappropriate response that a human would never perform increases dramatically. FRS will need guardrails to prevent that.

6. Front Running Simulation Examples

Here are two hypothetical but representative examples of FRS in deterministic production facilities. The first is a discrete-production automotive factory; the second is a continuous-production petroleum refinery. In both, production encounters a deviation or adverse event during the shift.

Automotive Discrete Manufacturing Welding Line

An automotive body-in-white welding line is running a high-volume shift build. It has clearly defined task goals for the day and a manufacturing process that details each step of the product build.
The manufacturing process is designed and validated to meet those task goals. The task goals are:
  • Achieve 1,040 units by end of shift
  • Maintain acceptable weld quality thresholds
  • Minimize overtime, energy spikes, and downstream rework
  • Avoid unnecessary line stoppages that waste scarce physical resources
The plant uses FRS that runs every fifteen minutes throughout the day, predicting operational results through the remainder of the shift. FRS uses replicated, real-time state data from all of its stations.
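The periodic FRS cycle described here can be sketched structurally as replicate, predict, navigate. Every function, threshold, and number below is a hypothetical placeholder for the plant's real DTI data feeds and simulation models, not an actual FRS implementation.

```python
# Structural sketch of one FRS cycle: replicate current state, predict
# the remainder of the shift, and navigate only when a predicted risk
# crosses a threshold. All values are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class ShiftForecast:
    bottleneck_probability: float
    predicted_units: int

def replicate() -> dict:
    # In a real deployment this would pull live DTI state from all stations.
    return {"cycle_delay_s": 3.8, "buffer_fill": 0.35}

def predict(state: dict) -> ShiftForecast:
    # Stand-in for causal plus Monte Carlo simulation to end of shift.
    risk = min(1.0, state["cycle_delay_s"] * 0.15 + state["buffer_fill"] * 0.2)
    return ShiftForecast(bottleneck_probability=risk, predicted_units=1015)

def navigate(forecast: ShiftForecast, target_units: int = 1040) -> str:
    if forecast.bottleneck_probability > 0.5 or forecast.predicted_units < target_units:
        return "generate candidate action options"
    return "no action required"

forecast = predict(replicate())
print(navigate(forecast))
```

In practice this cycle would be scheduled every fifteen minutes; the sketch only shows the control flow by which a run escalates from monitoring to generating candidate actions.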
At 8:10 am, there is an initial process deviation, or glitch. A robotic welding cell begins taking 3.8 seconds longer per cycle due to an intermittent torch positioning lag, a slight axis vibration, and extra confirmation passes on weld seams.
However, the quality remains acceptable, so the line does not stop. FRS notes this deviation and incorporates it into its simulation. Since the deviation is small, the manufacturing process remains within limits for the next few hours in the simulation. However, FRS starts predicting higher probability of deviations.
By the 10:00 am FRS run, FRS predicts with almost 100% probability that by 1:30 p.m. the upstream buffers will begin filling, conveyor dwell time will increase, and downstream inspection stations will idle intermittently. By late afternoon, the predicted queue surge creates a localized bottleneck.
This prediction triggers FRS to generate several candidate information action paths, which are evaluated against the probability of not meeting daily production goals through additional energy use, labor impact, and the risk of future disruptions. FRS creates four options.
The first option is an immediate robot shutdown to replace the torch assembly and eliminate the delay. The second option is to speed up the conveyor by 4%. The third option is to micro-rebalance the workload: shift one minor weld operation to an adjacent robot with available slack, extend the lunch micro-maintenance window by three minutes to recalibrate the axes, and adjust the buffer logic so upstream robots insert controlled spacing rather than continuous flow. The fourth option is to accept the backlog and add an overtime inspection crew.
FRS evaluates these options. The first option, an immediate robot shutdown, prevents the later bottleneck but causes a 12-minute hard stop, resulting in lost production and an energy surge on restart.
The second option, increasing the conveyor speed by 4%, offsets the delay but raises weld heat variance and energy consumption, which increases the probability of downstream rework.
With the third option, micro-rebalancing the workload, the bottleneck probability drops from 62% to 8%. The daily build target remains achievable, and no full stop is required.
The fourth option, allowing the bottleneck and remediating later, still obtains the goal but has the highest labor and energy cost expenditures.
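The evaluation of the four options can be sketched as a minimum-expected-cost selection. The bottleneck probabilities loosely follow the narrative (62% unmitigated, 8% under micro-rebalancing); the dollar figures and the bottleneck penalty are invented purely for illustration.

```python
# Hypothetical scoring of the four candidate action paths. Selection
# rule: minimum expected resource cost among options that keep the
# daily 1,040-unit build target achievable. All figures are assumed.
options = [
    # (name, bottleneck probability after action, extra cost in $, meets target?)
    ("immediate robot shutdown",  0.05, 9_000, True),   # 12-min hard stop priced in
    ("speed conveyor +4%",        0.30, 6_500, True),   # rework risk priced in
    ("micro-rebalance workload",  0.08, 1_200, True),
    ("accept backlog + overtime", 0.62, 7_800, True),
]

def expected_cost(p_bottleneck: float, extra_cost: float,
                  bottleneck_penalty: float = 15_000) -> float:
    # Expected cost = direct cost of the action plus the penalty of a
    # bottleneck, weighted by its residual probability.
    return extra_cost + p_bottleneck * bottleneck_penalty

feasible = [o for o in options if o[3]]
best = min(feasible, key=lambda o: expected_cost(o[1], o[2]))
print(best[0])   # → micro-rebalance workload
```

Under these assumed figures the micro-rebalance option dominates, matching the recommendation in the narrative: it carries both a low residual bottleneck probability and a low direct cost.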
With a human in the loop (HITL), FRS recommends the third option: micro-rebalance the workload. The plant manager reviews the four options and concurs.
As the workday actually plays out under option three, morning production continues uninterrupted, the lunch micro-maintenance recalibrates the robot, and line pacing adjustments prevent queue buildup.
At the end of the shift, the daily target is met, no overtime is required, and energy usage remains within tolerance. The welding glitch was not avoided, but FRS achieved the goal state with nearly minimum waste.

Continuous Production Oil Refinery Facility

This example is a crude oil refinery consisting of distillation columns, heat exchangers, compressors, and catalytic cracking units. The refinery has a well-defined continuous manufacturing process with these task goals:
  • Maintain throughput at 92% of maximum capacity
  • Maximize yield of high-value middle distillates
  • Stay within safety and emissions constraints
  • Minimize energy consumption and flaring
The plant uses FRS that runs every 15 minutes throughout the day, predicting operational results through the remainder of the shift. FRS uses replicated, real-time state data from all its refinery stations.
At 9:12 am, there is an initial deviation. A subtle disturbance occurs: the feedstock density drifts slightly heavier than forecast, a heat exchanger shows increased fouling resistance, and the column bottom temperature begins inching upward. The operators see only a minor deviation; nothing requires immediate intervention, and the process remains within safe limits.
However, on its 10:15 am run, FRS begins to show a real risk probability for later in the day. In that simulation prediction, the reboiler duty slowly increases to maintain separation, and energy consumption climbs. At around a predicted 2:40 p.m., overhead pressure margins shrink. If this is left unchecked, the operators will need to reduce throughput or flare gases. Based on this 10:15 FRS run, candidate information action options are developed and evaluated.
The first option is immediate aggressive stabilization. That consists of aggressive increased cooling, adjusting reflux sharply, and reducing feed rate temporarily. This option eliminates the deviation quickly, but spikes energy consumption and reduces valuable distillate yield.
The second option is a slow throughput reduction: lowering the feed rate by 5% and maintaining conservative operating margins. This option guarantees safety but misses daily production goals and reduces revenue yield.
The third option is a preemptive maintenance shutdown: bringing the unit down immediately for exchanger cleaning. This option eliminates the drift entirely but causes massive lost throughput, startup fuel burn, and catalyst stress.
Option four proposes slightly increased side-draw flow to redistribute heat load, adjusting the reflux ratio incrementally rather than aggressively, delaying the heat exchanger cleaning until the scheduled maintenance window, and accepting a small temporary rise in bottom temperature. This proposal results in energy consumption remaining within the target envelope, the bottleneck probability dropping from 54% to 11%, and the desired product yields remaining within tolerance.
The refinery operators choose the fourth option, which means that the plant continues operating with a slight deviation. The adjustments redistribute energy rather than eliminating the adverse event or deviation. No flaring occurs later in the day, and production targets are met with minimal energy use. The deviation was not eliminated. It was navigated to a successful conclusion that met the goals.
These examples show that FRS is not merely deviation or adverse-event avoidance. It is goal-directed navigation, grounded in data-driven replicated reality and predicted by simulation. While avoiding predicted anomalies is desirable wherever possible, the overarching responsibility of FRS is to meet the task goals with information.

7. Conclusion

Front Running Simulation (FRS) represents a fundamental advancement in how we approach goal-oriented activity in complex physical systems. While humans have always engaged in rudimentary forms of replication, prediction, and navigation, FRS elevates this capability to a level of scale, speed, and rigor that was previously unattainable. By continuously synchronizing with the current state of reality and projecting forward into probabilistic future states, FRS provides an operational “crystal ball” grounded not in speculation, but in data, Models of Reality (MoR), simulation, and information.
The key contribution of FRS is not merely improved prediction, but goal-directed navigation under constraints. Traditional simulation has largely been used for planning and exploration, detached from real-time operations. In contrast, FRS is tethered to reality, continuously resetting its initial conditions to the present state and evaluating future trajectories in the context of achieving defined task goals while minimizing scarce physical resources. This reframes simulation from a passive analytical tool into an active operational capability.
The integration of Digital Twins—particularly the Digital Twin Instance (DTI) and Digital Twin Aggregate (DTA)—provides the structural foundation for FRS. DTIs enable high-fidelity replication of current state, while DTAs introduce a longitudinal, population-based learning capability. With the incorporation of AI, especially Bayesian-informed inference derived from DTA data, FRS evolves into a hybrid predictive system. In this hybrid model, causal and Monte Carlo–based correlative simulation explores possible futures, while AI ranks probable futures based on observed behavior. This fusion strengthens both predictive accuracy and decision responsiveness, particularly under time-constrained conditions.
Critically, FRS shifts the focus from adverse event and deviation avoidance to goal attainment under resource constraints. As demonstrated in the examples, the optimal course of action is not always the elimination of a deviation. Instead, FRS evaluates whether deviations materially impact the ability to achieve goals and identifies actions that maximize the probability of success with minimal expenditure of time, energy, and materials. This aligns directly with the principle that information serves as a replacement for wasted scarce physical resources.
FRS also introduces important considerations regarding execution. The value of prediction is contingent on the ability to act. Accurate foresight without executable capability has no operational benefit. Therefore, FRS must be tightly coupled not only with sensing and simulation, but also with the physical and organizational ability to implement selected actions. This reinforces the importance of aligning FRS outputs with real-world constraints, including human decision-making (HITL), autonomous systems (HOOTL), and necessary guardrails to eliminate or at least mitigate inappropriate actions.
Finally, Digital Twin FRS should be understood not as a standalone technology, but as an integrative architecture. It unifies MoR based simulation, data, information, and now AI principles into a coherent framework for operating in the Third Millennium—an era defined by virtualization, computation, and data-driven decision-making. As computing capability continues to expand and data availability increases, FRS will become increasingly precise, adaptive, and indispensable.
In summary, Front Running Simulation transforms how we engage with the future:
  • From reactive to proactive
  • From isolated simulation to continuous synchronization with reality
  • From resource-intensive trial-and-error to information-driven efficiency
FRS enables us to move from a world constrained solely by physical trial and error to one where we can virtually explore, evaluate, and select actions before committing physical resources, thereby achieving our goals more effectively and efficiently than ever before.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Grieves, M., Forward to Human-Centered Metaverse, in Human-Centered Metaverse: Concepts, Methods, Applications, C. Nam, D. Song, and H. Jeong, Editors. 2024, Elsevier. p. 400.
  2. Tao, F. Digital twin in industry: state-of-the-art  . IEEE Transactions on Industrial Informatics 2018, 15(4), 2405–2415. [Google Scholar] [CrossRef]
  3. Wooley, A.; Dimson, G.; Bitencourt, J. Digital Twins Across Domains: A Cross-Industry Umbrella Review of Systematic Literature Reviews. Systems Engineering 2026, p. e70049. [Google Scholar] [CrossRef]
  4. Grieves, M. and J. Vickers, Digital Twin: Mitigating Unpredictable, Undesirable Emergent Behavior in Complex Systems, in Trans-Disciplinary Perspectives on System Complexity, F.-J. Kahlen, S. Flumerfelt, and A. Alves, Editors. 2017, Springer: Switzerland. p. 85–114.
  5. Grieves, M. Virtually Intelligent Product Systems: Digital and Physical Twins, in Complex Systems Engineering: Theory and Practice, S. Flumerfelt, et al., Editors. 2019, American Institute of Aeronautics and Astronautics. p. 175–200.
  6. Grieves, M. Completing the Cycle: Using PLM Information in the Sales and Service Functions [Slides]. in SME Management Forum. 2002. Troy, MI.
  7. Grieves, M. Driving Digital Continuity in Manufacturing. 2017; Available from: https://research.fit.edu/media/site-specific/researchfitedu/camid/documents/MWG-Digital-Continuity-Whitepaper-copy-(002).pdf.
  8. Grieves, M. Data Driven Digital Twins, in Research Handbook on Digital Data: Interdisciplinary Perspectives, A. Aaltonen, K. Lyytinen, and M. Stelmaszak, Editors. 2026, Edward Elgar Publishing: Northampton, MA. p. 89–107.
  9. Popper, K. Conjectures and Refutations: The Growth of Scientific Knowledge. 2014: Routledge.
  10. Spelke, E.S. Origins of knowledge  . Psychological review 1992, 99, 605. [Google Scholar] [CrossRef] [PubMed]
  11. Baillargeon, R. Physical reasoning in infancy  . In The cognitive neurosciences; 1995; pp. 181–204. [Google Scholar]
  12. Baillargeon, R. Object permanence in 3½-and 4½-month-old infants  . Developmental psychology 1987, 23(5), 655. [Google Scholar] [CrossRef]
  13. Grieves, M. DIKW As a General and Digital Twin Action Framework: Data, Information, Knowledge, and Wisdom. Knowledge 2024, 4(2), 120–140. [Google Scholar] [CrossRef]
  14. Rydning, D.R.-J.G.-J.; Reinsel, J.; Gantz, J. The digitization of the world from edge to core. Framingham: International Data Corporation, 2018. 16: p. 1–28.
  15. Grieves, M.; Hua, E. Defining, Exploring, and Simulating the Digital Twin Metaverses, in Digital Twins, Simulation, and Metaverse: Driving Efficiency and Effectiveness in the Physical World through Simulation in the Virtual Worlds, M. Grieves and E. Hua, Editors. 2024, Springer.
  16. Hartmann, S. The world as a process: Simulations in the natural and social sciences, in Modelling and simulation in the social sciences from the philosophy of science point of view. 1996, Springer. p. 77–100.
  17. Schriber, T.J. Simulation using GPSS. 1974, New York: Wiley. xv, 533 p.
  18. Guo, Y. Digital twins for electro-physical, chemical, and photonic processes. CIRP Annals 2023, 72(2), 593–619. [Google Scholar] [CrossRef]
  19. Pearl, J. Statistics and causal inference: A review. Test 2003, 12, 281–345. [Google Scholar] [CrossRef]
  20. Chen, D. Digital twin for federated analytics using a Bayesian approach  . IEEE Internet of Things Journal 2021, 8(22), 16301–16312. [Google Scholar] [CrossRef]
  21. Khatun, Z. Hybrid Digital Twin and Monte Carlo Simulation For Reliability Of Electrified Manufacturing Lines With High Power Electronics. International Journal of Scientific Interdisciplinary Research 2025. 6, 2, 143–194. [Google Scholar] [CrossRef]
  22. Korb, K.B.; Nicholson, A.E. Bayesian artificial intelligence. 2010: CRC press.
  23. Zins, C. What is the meaning of “data”, “information”, and “knowledge”? 2009.
  24. Grieves, M. Product Lifecycle Management: Driving the Next Generation of Lean Thinking. 2006, New York: McGraw-Hill. 319.
  25. Wills, I. The Edisonian Method: Trial and Error, in Thomas Edison: Success and Innovation through Failure, I. Wills, Editor. 2019, Springer International Publishing: Cham. p. 203–222.
  26. Cortada, J.W. Boundaries between explicit and tacit knowledge: data’s world. Research Handbook on Digital Data: Interdisciplinary Perspectives 2026, 20.
  27. Simon, H.A. A behavioral model of rational choice  . The quarterly journal of economics 1955, 99–118. [Google Scholar] [CrossRef]
  28. Benkler, Y. The wealth of networks : how social production transforms markets and freedom. 2006, New Haven Conn.: Yale University Press. xii, 515 pages.
  29. Kahneman, D. Thinking, fast and slow. 1st ed. 2011, New York: Farrar, Straus and Giroux. 499 p.
Figure 1. Digital Twin Model.
Figure 2. Digital Twin Model 2026.
Figure 3. Front Running Simulation (FRS).
Figure 4. Front Running Simulation (FRS).
Figure 5. Front Running Simulation (FRS) Substrate.
Figure 6. Information as Task Wasted Time, Energy, Material Substitute.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.
Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.