1. Introduction
The theory of deterministic pursuit-evasion games can be attributed almost single-handedly to Isaacs in the 1950s [1,2]. Isaacs first considered differential games as two-player zero-sum games; one early application was the formulation of missile guidance systems during his time with the RAND Corporation. Shortly thereafter, Kalman, among others, initiated the linear quadratic regulator (LQR) and linear quadratic tracker (LQT) in the continuous and discrete cases (see [3,4,5,6]). Since then, pursuit-evasion games and optimal control have been closely related, each playing a fundamental role in control engineering and economics. One breakout paper combining these concepts was written by Ho, Bryson, and Baron, who studied linear quadratic pursuit-evasion games (LQPEG) as regulator problems [7,8]. In particular, this work included a three-dimensional target interception problem. A number of papers have since extended these results in the continuous and discrete cases. One issue researchers have faced is the discrete nature of these mixed strategies.
In 1988, Stefan Hilger initiated the theory of dynamic equations on time scales, which seeks to unify and extend discrete and continuous analysis [9]. As a result, we can generalize a process to account for both cases, or any combination of the two, provided we restrict ourselves to closed, nonempty subsets of the reals (a time scale). From a numerical viewpoint, this theory can be thought of as a generalized sampling technique that allows a researcher to evaluate processes with continuous, discrete, or uneven measurements. Since its inception, this area of mathematics has gained a great deal of international attention. Researchers have since found applications of time scales in heat transfer, population dynamics, and economics. For a more in-depth study of time scales, see Bohner and Peterson’s books [10,11].
There have been a number of researchers who have sought to combine this field with the theory of control. Several authors have contributed to generalizing the basic notions of controllability and observability (see [12,13,14,15,16]). Bohner first provided the conditions for optimality for dynamic control processes in [17]. DaCunha unified Lyapunov and Floquet theory in his dissertation [18]. Hilscher and Zeidan have studied optimal control for symplectic systems [19]. Additional contributions can be found in [20,21,22,23,24,25], among several others.
In this paper, we study a natural extension of the LQR and LQT previously generalized to dynamic equations on time scales (see [26,27]). Here, we consider a pair of separable dynamic systems (1.1), one for each player's state and control, where the subscripts P and E stand for the pursuer and the evader, respectively. The pursuing state seeks to intercept the evading state at the final time, while the latter state seeks to do the opposite. For simplicity, we make the following assumptions. First, we assume the given systems are linear time-invariant (although the strategies for the time-varying case can be determined in a similar fashion). Second, we assume that both states are controllable and are evaluated on the same time scale. Finally, we assume our state equations are associated with the quadratic cost functional (1.2), with appropriate symmetry, diagonality, and sign-definiteness assumptions on the weighting matrices. Note that the goal of the pursuing state is to minimize (1.2) while the evading state seeks to maximize it. Since these states represent opposing players, evaluating this cost can be thought of as a minimax problem.
The pursuit-evasion framework remains an active area across multiple disciplines, as found in [28,29,30,31,32,33,34]. It should be noted that there have been other excursions in combining dynamic games with time scales calculus. Libich and Stehlík introduced macroeconomic policy games on time scales with inefficient equilibria in [35]. Martins and Torres considered multiplayer games in which each player sought to minimize a shared cost functional. Mozhegova and Petrov introduced a simple pursuit problem in [36] and a dynamic analogue to the “Cossacks-robbers” game in [37]. Minh and Phuong have previously studied linear pursuit-evasion games on time scales in [38]. However, these results do not include a regulator/saddle-point framework, nor are they as complete as those presented in this manuscript.
The organization of this paper is as follows. Section 2 presents core definitions and concepts of the time scales calculus. In Section 3, we offer the variational properties needed so that an optimal strategy exists. In Section 4, we seek a mixed strategy when the final states are both fixed; in this setting, we can rewrite our cost functional (1.2) in terms of the difference in Gramians of each system. In Section 5, we find a pair of controls in terms of an extended state. In Section 6, we offer some examples, including a numerical result. Finally, we provide some concluding remarks and future plans in Section 7.
2. Time Scales Preliminaries
Here we offer a brief introduction to the theory of dynamic equations on time scales. For a more in-depth study of time scales, see Bohner and Peterson’s books [10,11].
Definition 1. A time scale $\mathbb{T}$ is an arbitrary nonempty closed subset of the real numbers. We let $\mathbb{T}^{\kappa}=\mathbb{T}\setminus\{\max\mathbb{T}\}$ if a left-scattered maximum exists; otherwise $\mathbb{T}^{\kappa}=\mathbb{T}$.
Example 2. The most common examples of time scales are $\mathbb{R}$, $\mathbb{Z}$, $h\mathbb{Z}$ for $h>0$, and $q^{\mathbb{N}_0}$ for $q>1$.
Definition 3. We define the forward jump operator $\sigma$ and the graininess function $\mu$ by
$$\sigma(t):=\inf\{s\in\mathbb{T}:s>t\}\quad\text{and}\quad \mu(t):=\sigma(t)-t.$$
Definition 4. For any function $f:\mathbb{T}\to\mathbb{R}$, we define the function $f^{\sigma}$ by $f^{\sigma}(t):=f(\sigma(t))$.
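On a finite, isolated time scale, these operators can be computed directly. The sketch below is an illustration (the grid `T` and helper names are our own choices, not taken from the paper):

```python
# A finite, isolated time scale: a closed, nonempty subset of the reals.
T = [0.0, 0.1, 0.3, 0.6, 1.0]

def sigma(t):
    """Forward jump operator: the nearest point of T strictly to the right
    of t; by convention sigma(t) = t at the maximum of T."""
    right = [s for s in T if s > t]
    return min(right) if right else t

def mu(t):
    """Graininess function: mu(t) = sigma(t) - t."""
    return sigma(t) - t

def f_sigma(f, t):
    """The composition f^sigma(t) = f(sigma(t)) of Definition 4."""
    return f(sigma(t))
```

For example, `sigma(0.1)` returns `0.3` and `sigma(1.0)` returns `1.0`, so every point of this grid except the maximum is right-scattered.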
Next, we define the delta (or Hilger) derivative as follows.
Definition 5. Assume $f:\mathbb{T}\to\mathbb{R}$ and let $t\in\mathbb{T}^{\kappa}$. The delta derivative $f^{\Delta}(t)$ is the number (when it exists) such that given any $\varepsilon>0$, there is a neighborhood $U$ of $t$ such that
$$\left|f(\sigma(t))-f(s)-f^{\Delta}(t)\,(\sigma(t)-s)\right|\le\varepsilon\,|\sigma(t)-s|\quad\text{for all } s\in U.$$
In the next two theorems, we consider some properties of the delta derivative.
Theorem 6 (See Theorem 1.16 [10]). Suppose $f:\mathbb{T}\to\mathbb{R}$ is a function and let $t\in\mathbb{T}^{\kappa}$. Then we have the following:
- a. If f is differentiable at t, then f is continuous at t.
- b. If f is continuous at t, where t is right-scattered, then f is differentiable at t and $f^{\Delta}(t)=\frac{f(\sigma(t))-f(t)}{\mu(t)}$.
- c. If f is differentiable at t, where t is right-dense, then $f^{\Delta}(t)=\lim_{s\to t}\frac{f(t)-f(s)}{t-s}$.
- d. If f is differentiable at t, then $f(\sigma(t))=f(t)+\mu(t)f^{\Delta}(t)$. (2.1)

Note that (2.1) is sometimes called the “simple useful formula.”
Example 7. Note the following examples.
- a. When $\mathbb{T}=\mathbb{R}$, then (if the limit exists) $f^{\Delta}(t)=\lim_{s\to t}\frac{f(t)-f(s)}{t-s}=f'(t)$.
- b. When $\mathbb{T}=\mathbb{Z}$, then $f^{\Delta}(t)=f(t+1)-f(t)=\Delta f(t)$.
- c. When $\mathbb{T}=h\mathbb{Z}$ for $h>0$, then $f^{\Delta}(t)=\frac{f(t+h)-f(t)}{h}$.
- d. When $\mathbb{T}=q^{\mathbb{N}_0}$ for $q>1$, then $f^{\Delta}(t)=\frac{f(qt)-f(t)}{(q-1)t}$.
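At a right-scattered point, the delta derivative is just the difference quotient of Theorem 6(b), which is easy to evaluate numerically. A minimal sketch; the test function and step sizes are illustrative choices of ours:

```python
# Delta derivative at a right-scattered point t (mu(t) > 0):
# f^Delta(t) = (f(sigma(t)) - f(t)) / mu(t).
def delta_derivative(f, t, sigma_t):
    mu_t = sigma_t - t
    if mu_t == 0:
        raise ValueError("t is right-dense; take a limit instead")
    return (f(sigma_t) - f(t)) / mu_t

# On T = hZ with h = 0.5, sigma(t) = t + h, so for f(t) = t^2 we get
# f^Delta(t) = 2t + h; at t = 2 this is 4.5.
val_h = delta_derivative(lambda t: t**2, 2.0, 2.0 + 0.5)

# On a q-type grid with sigma(1) = 2, f(t) = t^2 gives (4 - 1)/(2 - 1) = 3.
val_q = delta_derivative(lambda t: t**2, 1.0, 2.0)
```

Both values agree with the closed forms in Example 7 (c) and (d).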
Next we consider the linearity property as well as the product rules.
Theorem 8 (See Theorem 1.20 [10]). Let $f,g:\mathbb{T}\to\mathbb{R}$ be differentiable at $t\in\mathbb{T}^{\kappa}$. Then we have the following:
- a. For any constants α and β, the sum $\alpha f+\beta g$ is differentiable at t with $(\alpha f+\beta g)^{\Delta}(t)=\alpha f^{\Delta}(t)+\beta g^{\Delta}(t)$.
- b. The product $fg$ is differentiable at t with $(fg)^{\Delta}(t)=f^{\Delta}(t)g(t)+f^{\sigma}(t)g^{\Delta}(t)=f(t)g^{\Delta}(t)+f^{\Delta}(t)g^{\sigma}(t)$.
Definition 9. A function $f:\mathbb{T}\to\mathbb{R}$ is said to be rd-continuous on $\mathbb{T}$ when f is continuous at right-dense points and has finite left-sided limits at left-dense points. The class of rd-continuous functions is denoted by $C_{\mathrm{rd}}$. The set of functions that are differentiable and whose derivative is rd-continuous is denoted by $C^{1}_{\mathrm{rd}}$.
Theorem 10 (See Theorem 1.74 [10]). Any rd-continuous function f has an antiderivative F, i.e., $F^{\Delta}=f$ on $\mathbb{T}^{\kappa}$.
Definition 11. Let $f\in C_{\mathrm{rd}}$ and let F be any function such that $F^{\Delta}(t)=f(t)$ for all $t\in\mathbb{T}^{\kappa}$. Then the Cauchy integral of f is defined by
$$\int_a^b f(t)\,\Delta t=F(b)-F(a)\quad\text{for all } a,b\in\mathbb{T}.$$
Example 12. Let $a,b\in\mathbb{T}$ with $a<b$ and assume that $f\in C_{\mathrm{rd}}$.
- a. When $\mathbb{T}=\mathbb{R}$, then $\int_a^b f(t)\,\Delta t=\int_a^b f(t)\,dt$.
- b. When $\mathbb{T}=\mathbb{Z}$, then $\int_a^b f(t)\,\Delta t=\sum_{t=a}^{b-1}f(t)$.
- c. When $\mathbb{T}=h\mathbb{Z}$ for $h>0$, then $\int_a^b f(t)\,\Delta t=\sum_{k=a/h}^{b/h-1}f(kh)\,h$.
- d. When $\mathbb{T}=q^{\mathbb{N}_0}$ for $q>1$, then $\int_a^b f(t)\,\Delta t=\sum_{t\in[a,b)\cap\mathbb{T}}f(t)\,(q-1)t$.
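On an isolated time scale, the cases above share a common pattern: the delta integral is the sum of $f(t)\,\mu(t)$ over the grid points in $[a,b)$. The helper below is an illustrative sketch of ours, not code from the paper:

```python
# Delta integral over an isolated time scale T: sum of f(t) * mu(t) for
# grid points t in [a, b), where mu(t) = sigma(t) - t.
def delta_integral(f, T, a, b):
    pts = sorted(T)
    total = 0.0
    for t, t_next in zip(pts, pts[1:]):
        if a <= t < b:
            total += f(t) * (t_next - t)   # mu(t) = sigma(t) - t
    return total

# On T = Z this is an ordinary sum: int_0^4 t Delta t = 0 + 1 + 2 + 3 = 6.
T = list(range(6))
val = delta_integral(lambda t: t, T, 0, 4)
```

The same function handles uneven grids, since each term is weighted by the local graininess.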
Next, we present the matrix exponential and some of its properties.
Definition 13. An $n\times n$ matrix-valued function A on $\mathbb{T}$ is rd-continuous if each of its entries is rd-continuous. Furthermore, A is said to be regressive (we write $A\in\mathcal{R}$) if
$$I+\mu(t)A(t)\ \text{is invertible for all } t\in\mathbb{T}^{\kappa}.$$
Theorem 14 (See Theorem 5.8 [10]). Suppose that A is regressive and rd-continuous. Then the initial value problem
$$X^{\Delta}=A(t)X,\qquad X(t_0)=I,$$
where I is the identity matrix, has a unique matrix-valued solution X.
Definition 15. The solution X from Theorem 14 is called the matrix exponential function on $\mathbb{T}$ and is denoted by $e_A(\cdot,t_0)$.
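On an isolated time scale, the IVP of Theorem 14 steps forward as $X(\sigma(t))=(I+\mu(t)A)X(t)$, so the matrix exponential is a product of such factors. The sketch below (constant $A$ and an illustrative uneven grid of our own) also checks the two-parameter semigroup property $e_A(t,r)\,e_A(r,s)=e_A(t,s)$:

```python
import numpy as np

def matrix_exponential(A, T, i0, i1):
    """e_A(T[i1], T[i0]) for a constant matrix A on an isolated time scale:
    X^Delta = A X, X(t0) = I steps as X(sigma(s)) = (I + mu(s) A) X(s),
    so the solution is a product of (I + mu(s) A) factors."""
    n = A.shape[0]
    X = np.eye(n)
    for k in range(i0, i1):
        mu = T[k + 1] - T[k]              # graininess at T[k]
        X = (np.eye(n) + mu * A) @ X      # regressivity: I + mu*A invertible
    return X

# Semigroup check on an uneven grid: e_A(t, r) e_A(r, s) = e_A(t, s).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
T = [0.0, 0.2, 0.5, 0.9, 1.0]
lhs = matrix_exponential(A, T, 2, 4) @ matrix_exponential(A, T, 0, 2)
rhs = matrix_exponential(A, T, 0, 4)
```

The empty product convention gives `matrix_exponential(A, T, i, i)` equal to the identity, matching $e_A(t,t)=I$.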
Theorem 16 (See Theorem 5.21 [10]). Let A be regressive and rd-continuous. Then for $r,s,t\in\mathbb{T}$,
- a. $e_0(t,s)\equiv I$, hence $e_A(t,t)\equiv I$,
- b. $e_A(\sigma(t),s)=(I+\mu(t)A(t))\,e_A(t,s)$,
- c. $e_A^{-1}(t,s)=e^{*}_{\ominus A^{*}}(t,s)$,
- d. $e_A(t,s)=e_A^{-1}(s,t)=e^{*}_{\ominus A^{*}}(s,t)$,
- e. $e_A(t,s)\,e_A(s,r)=e_A(t,r)$.
Next we give the solution (state response) to our linear system using variation of parameters.
Theorem 17 (See Theorem 5.24 [10]). Let $A\in\mathcal{R}$ be an $n\times n$ matrix-valued function on $\mathbb{T}$ and suppose that $f:\mathbb{T}\to\mathbb{R}^{n}$ is rd-continuous. Let $t_0\in\mathbb{T}$ and $x_0\in\mathbb{R}^{n}$. Then the solution of the initial value problem
$$x^{\Delta}=A(t)x+f(t),\qquad x(t_0)=x_0,$$
is given by
$$x(t)=e_A(t,t_0)\,x_0+\int_{t_0}^{t}e_A(t,\sigma(s))\,f(s)\,\Delta s.$$
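On an isolated time scale, the variation-of-parameters formula collapses to a one-step recursion, which is how one simulates the state response in practice. A hedged sketch with an illustrative scalar example of our own:

```python
import numpy as np

def state_response(A, f, T, x0):
    """Solution of x^Delta = A x + f(t), x(T[0]) = x0 on an isolated time
    scale: the variation-of-parameters formula reduces to the recursion
    x(sigma(t)) = (I + mu(t) A) x(t) + mu(t) f(t)."""
    xs = [np.asarray(x0, dtype=float)]
    for t, t_next in zip(T, T[1:]):
        mu = t_next - t
        x = xs[-1]
        xs.append(x + mu * (A @ x + f(t)))
    return xs

# Sanity check on T = Z: x^Delta = x + 1 with x(0) = 0 has the closed
# form x(t) = 2^t - 1, so x(3) = 7.
A = np.array([[1.0]])
xs = state_response(A, lambda t: np.array([1.0]), [0, 1, 2, 3], [0.0])
```

The same routine handles uneven grids, since each step uses the local graininess `mu`.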
3. Optimization of Linear Systems on Time Scales
In this section, we make use of variational methods on time scales as introduced by Bohner in [17]. First, note that the state equations in (1.1) are uncoupled. For convenience, we rewrite (1.1) as (3.1), where z represents an extended state built from the pursuer and evader states, with correspondingly partitioned system matrices. Associated with (3.1) is the quadratic cost functional (3.2). To minimize (3.2), we introduce the augmented cost functional, where the so-called Hamiltonian H is given by (3.3) and λ represents a multiplier to be determined later.
Remark 18. Our treatment of (1.1) differs from the argument used by Ho, Bryson, and Baron in [7]. In their paper, they appealed to state estimates of the pursuer and evader to evaluate the cost functional. Their motivation stems from the fact that, in studying pursuing and evading missiles, they considered the difference in altitude negligible. As a result of our rewriting of (1.1), we are not required to make such a restriction.
Next, we provide necessary conditions for an optimal control. Throughout, we assume that condition (3.4) holds.
Lemma 19. Let (3.2) be the cost functional associated with (3.1). Assume (3.4) holds. Then the first variation of the augmented cost functional is zero provided that z, λ, u, and v satisfy the necessary conditions (3.5).
Proof. Computing the first variation of the augmented cost functional and rearranging terms, we set each coefficient of the independent increments equal to zero in order that the first variation vanish. This yields the necessary conditions for a minimum of (3.2). Using the Hamiltonian (3.3), we obtain the state and costate equations and, similarly, the stationary conditions. This concludes the proof. □
Remark 20. We note that z, λ, u, and v solve (3.5) if and only if they solve (3.6), whose coefficient matrix involves a “mixing term.” Throughout this paper, we assume that this coefficient matrix is regressive. As a result, we can determine an optimal strategy if we know the value of the costate.
Finally, we give the sufficient conditions for a local optimal control.
Lemma 21. Let (3.2) be the cost functional associated with (3.1). Assume (3.4) holds. Then the second variation is positive provided that the increments satisfy the stated constraints, where the final state is fixed.
Proof. Taking the second derivative of the augmented cost functional, and assuming the increments satisfy the constraint in the statement, the second variation is given by (3.7). By the sign-definiteness assumptions on the weighting matrices, and since the final state is fixed, (3.7) is guaranteed to be positive. □
Definition 22. The pair of strategies is a saddle point to the system (3.1) associated with the cost (3.2) provided the saddle inequalities hold. Here, the stationary conditions needed to ensure a saddle are those above (see [39]). For our purposes, this pair corresponds to the situation in which neither player wishes to deviate from this compromise without being penalized by the other player. It should be understood that this compromise occurs under the natural caveat that the pursuer and evader belong to the same time scale. In this paper, we do not claim that this saddle point must be unique.
4. Fixed Final States Case
In this section, we seek an optimal strategy when the final states are fixed. In this setting, we write the equations for the pursuer and evader separately: we consider the state and costate equations (4.1) for the pursuer and (4.2) for the evader, associated with the cost functional (4.3).
Definition 23. The initial state difference is the difference between the zero-input pursuing and evading states.
Next, we determine an open-loop strategy for both players. Note that the following theorem mirrors Kalman’s generalized controllability criterion as found in Theorem 3.2 of [16].
Theorem 24. Suppose that the pursuer’s state and costate solve (4.1) while the evader’s satisfy (4.2). Let the Gramians for the pursuer and evader, (4.4) and (4.5) respectively, be such that their difference is invertible. Then u and v can be rewritten as (4.6) and (4.7), respectively.
Proof. Solving (4.1) for the costate, and using (2.1) and (3.5a), the state equation becomes (4.8). Solving (4.8) with Theorem 17 at the final time, and writing the final state for the pursuer similarly, we take the difference in the final states and rearrange to obtain (4.9). Finally, plugging the costate into (3.6c) and using (4.9) yields the expression for u. The equation for v can be shown similarly. This concludes the proof. □
Next, we determine the optimal cost.
Theorem 25. If u and v are given by (4.6) and (4.7), respectively, then the cost functional (4.3) can be rewritten in terms of the difference of the Gramians (4.4) and (4.5).
Proof. Plugging (4.6), (4.7), and (4.9) into (4.3), and using the Gramians (4.4) and (4.5), the cost collapses to a quadratic form in the initial state difference. Since this quadratic form is symmetric, we can pull out common factors on the left and right to obtain our result. □
Remark 26. Suppose that the pursuer wants to use a strategy u that intercepts the evader (using strategy v) with minimal energy. From the classical definition of controllability, the pursuer captures the evader when the pursuer is “more controllable” than the evader; a sufficient condition for the pursuing state to intercept the evader is that the difference of the Gramians be positive definite. As a result, this relationship is preserved in the unification of pursuit-evasion to dynamic equations on time scales.
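The capture condition of Remark 26 can be tested numerically. The sketch below assumes an isolated time scale and Gramians of the familiar controllability pattern (a transition matrix to the final time, weighted by $R^{-1}$ and the graininess); this pattern is an assumption of ours, not quoted from (4.4) and (4.5), and the grid and system matrices are illustrative:

```python
import numpy as np

def transition(A, T, i, j):
    """Phi(T[j], T[i]) for constant A on an isolated time scale: a product
    of (I + mu * A) factors (regressivity assumed)."""
    X = np.eye(A.shape[0])
    for k in range(i, j):
        X = (np.eye(A.shape[0]) + (T[k + 1] - T[k]) * A) @ X
    return X

def gramian(A, B, Rinv, T):
    """Controllability-Gramian-type matrix: the sum over grid points of
    Phi(t_f, sigma(t)) B R^{-1} B^T Phi(t_f, sigma(t))^T mu(t)."""
    n = A.shape[0]
    W = np.zeros((n, n))
    for i in range(len(T) - 1):
        mu = T[i + 1] - T[i]
        Phi = transition(A, T, i + 1, len(T) - 1)
        W += Phi @ B @ Rinv @ B.T @ Phi.T * mu
    return W

T = [0.0, 0.25, 0.6, 1.0]                # illustrative uneven grid
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B_P = np.array([[0.0], [1.0]])           # pursuer actuation
B_E = np.array([[0.0], [0.5]])           # weaker evader actuation
W_P = gramian(A, B_P, np.eye(1), T)
W_E = gramian(A, B_E, np.eye(1), T)
# Pursuer "more controllable": W_P - W_E positive definite.
captures = bool(np.linalg.eigvalsh(W_P - W_E).min() > 0)
```

Here `captures` is `True`: since the evader's input matrix is a scaled-down copy of the pursuer's, $W_E = 0.25\,W_P$, so the difference inherits the pursuer Gramian's positive definiteness.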
5. Free Final States Case
In this section, we develop an optimal control law in the form of state feedback. In considering the boundary conditions, note that the initial state is known (so its variation is zero) while the final state is free (so its variation is arbitrary). Thus the coefficient on the final-state variation must be zero. This gives the terminal condition (5.1) on the costate.
Remark 27. Now, in order to solve this two-point boundary value problem, we make the assumption that z and λ satisfy the relation (5.2) for all t. This condition (5.2) is called a “sweep condition,” a term used by Bryson and Ho in [8]. Given the terminal condition (5.1), it is natural to assume that (5.2) holds at the final time as well.
Theorem 28. Assume that S solves the Riccati equation (5.3). If the state satisfies (5.4) and λ is given by (5.2), then (5.5) holds.
Proof. Since λ is as given in (5.2), we may use the product rule, (5.3), (5.4), and (2.1) to arrive at (5.5), as desired. □
Next we offer an alternative form of our Riccati equation.
Lemma 29. If the closed-loop coefficient matrix is regressive, then S solves (5.3) if and only if it solves (5.6).
Proof. Note that, by (2.1), $S^{\sigma}=S+\mu S^{\Delta}$. Plugging this identity into (5.3) yields (5.6). □
Next we define our Kalman gains as follows.
Definition 30. Let the closed-loop coefficient matrix be regressive. Then the matrix-valued functions defined in (5.7) and (5.8) are called the pursuer feedback gain and evader feedback gain, respectively.
Theorem 31. Let the closed-loop coefficient matrix be regressive and suppose that z and λ solve (4.8) such that (5.2) holds. Then the feedback law (5.9) holds.
Proof. Using (3.6), (5.2), and (2.1), we obtain an equation for the controls in terms of the state. Combining like terms, multiplying both sides by the appropriate inverse, and rearranging terms, (5.9) follows using (5.7) and (5.8). □
Next we rewrite our extended state equation under the influence of the pursuit-evasion control laws. This yields the closed-loop plant (5.10), which can be used to find an optimal trajectory for any given initial condition.
Lemma 32. If the closed-loop coefficient matrix is regressive and S is symmetric, then the identity (5.11) holds. Moreover, both sides of (5.11) can be written in terms of the feedback gains (5.7) and (5.8).
Proof. We can use (5.7) and (5.8) to rewrite the left-hand side of (5.11); the right-hand side of (5.11) can be written in the same form using (5.7) and (5.8) again. Thus, (5.11) holds. □
Now we rewrite the Riccati equation (5.6) in so-called (generalized) Joseph stabilized form (see [39]).
Theorem 33. If the closed-loop coefficient matrix is regressive and S is symmetric, then S solves the Riccati equation (5.6) if and only if it solves the Joseph stabilized form (5.12).
Proof. The statement follows directly from Lemma 32. □
Finally, we rewrite the cost.
Theorem 34. Suppose that S solves (5.12). If z satisfies (5.10) and u and v satisfy (5.9), then the cost functional (3.2) can be rewritten as (5.13).
Proof. First note that we may use the product rule, (2.1), and (5.10) to find (5.14). Using this and (5.9) in (3.2), and then applying (5.14) and (5.12), the cost functional can be rewritten as (5.13). This concludes the proof. □
From Theorem 34, if the current state and S are known, we can determine the optimal cost before we apply the optimal control or even calculate it. The table below summarizes our results.
Table 5.1. The LQPEG on an arbitrary time scale.

| System: | (3.1) |
| Cost: | (3.2) |
| Mixing Term: | see Remark 20 |
| Pursuer Feedback: | (5.7) |
| Evader Feedback: | (5.8) |
| Riccati Equation: | (5.6), Joseph form (5.12) |
6. Examples
Example 35 (The Continuous LQPEG). Let $\mathbb{T}=\mathbb{R}$ and consider the system (1.1) associated with the cost functional (3.2) (observe part (a) of Examples 7 and 12). Then the state, costate, and stationary equations (3.6) and the pursuer-evader feedback gains (5.7) and (5.8) reduce to their familiar continuous forms. Likewise, the pursuer-evader law (5.9), the closed-loop plant (5.10), and the closed-loop Riccati equation (5.12) reduce to their differential counterparts, while the optimal cost is given by (5.13).
Example 36 (The h-difference LQPEG). Let $\mathbb{T}=h\mathbb{Z}$ with $h>0$ and consider the system (1.1). By observing Example 7 (c) and introducing the corresponding shift, we can rewrite the system as an h-difference equation, and the associated cost functional takes the form of an h-weighted sum (observe Example 12 (c)). Then the state, costate, and stationary equations (3.6), the pursuer and evader feedback gains (5.7) and (5.8), the pursuer-evader law (5.9), and the closed-loop plant (5.10) all take their h-difference forms. Similarly, the closed-loop Riccati equation (5.12) becomes an h-difference equation, while the optimal cost is given by (5.13).
Example 37 (The q-difference LQPEG). Let $\mathbb{T}=q^{\mathbb{N}_0}$ with $q>1$ and consider the system (1.1). By observing Example 7 (d) and introducing the corresponding shift, we can rewrite the system as a q-difference equation, while the associated cost functional becomes a q-weighted sum (observe Example 12 (d)). Then the state, costate, and stationary equations (3.6), the pursuer and evader feedback gains (5.7) and (5.8), the pursuer-evader law (5.9), and the closed-loop plant (5.10) all take their q-difference forms. Finally, the closed-loop Riccati equation (5.12) becomes a q-difference equation, while the optimal cost is given by (5.13).
Example 38. In this last example, we provide a numerical example of the LQPEG. In this setting, we sample a two-dimensional pursuer and evader on the same discrete, but uneven, time scale. We consider a theoretical linear dynamic system in which the first component of each player's state represents its position while the second corresponds to its velocity. For simplicity, only the position is observed, and the weights in (3.2) are chosen accordingly. The plots of the pursuer's and evader's positions are given in Figure 6.1 below.
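The grid, weights, and gains of Example 38 are not reproduced above, so the following is only a hedged re-creation of its setup: two double-integrator players (position and velocity, with only position fed back) stepped on an uneven discrete grid, with a hand-tuned pursuit feedback standing in for the Riccati-based gains of Section 5.

```python
import numpy as np

# Illustrative discrete, uneven time scale (not the grid from the paper).
T = [0.0, 0.2, 0.5, 0.7, 1.1, 1.4, 2.0]

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # x^Delta = A x + B u
B = np.array([[0.0], [1.0]])

xP = np.array([0.0, 0.0])                # pursuer starts at rest at 0
xE = np.array([2.0, 0.0])                # evader starts ahead at 2

gap0 = abs(xE[0] - xP[0])
for t, t_next in zip(T, T[1:]):
    mu = t_next - t                      # local graininess
    u = 4.0 * (xE[0] - xP[0]) - 3.0 * xP[1]   # hand-tuned pursuit feedback
    v = 0.2                                    # constant evasive input
    xP = xP + mu * (A @ xP + B.flatten() * u)  # one time-scale step each
    xE = xE + mu * (A @ xE + B.flatten() * v)
gap = abs(xE[0] - xP[0])
closing = bool(gap < gap0)
```

On this illustrative grid the pursuer closes most of the initial gap by the final time, even though the steps are uneven.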
7. Concluding Remarks and Future Work
In this project, we have established the LQPEG where the pursuer and evader belong to the same arbitrary time scale. One potential application of this work is when the evader is represented by a drone and the pursuer by a missile-guidance system whose corresponding signals are unevenly sampled. Here, the cost in part represents the wear and tear on the drone. A saddle point in this setting would represent a “live and let live” arrangement, in which the drone is allowed to spy briefly on the missile-guidance system and return home, but is not given the opportunity to preserve enough of its battery to outstay its welcome. Similarly, in finance the pursuer and evader can represent competing companies, where a saddle point would correspond to an effort to coexist, avoiding a hostile takeover or unnecessarily expended resources. We have sidestepped the setting where the pursuer and evader each belong to their own time scale; however, these time scales can be merged using a sample-and-hold method as found in [40,41].
One potential extension of this work is the introduction of additional pursuers. In this setting, the cost must be adjusted to account for the closest pursuer, which can vary over the time scale. A second potential extension is to consider the setting in which one player is subject to a delay. Here, both players can still belong to the same time scale; however, the delay allows one player to act after the other, perhaps with some knowledge of the opposing player's strategy. Finally, a third possible approach is to study such games in a stochastic setting. Here, we can discretize each player's stochastic linear time-invariant system to a dynamic system on an isolated time scale, as found in [40,42]. However, the usual separability property is not preserved in this setting.
Author Contributions
D. Funk and R. Williams contributed to the analysis and writing/editing of the manuscript as well as the numerical example. N. Wintz contributed to the project conceptualization/analysis, writing/editing, and the funding of the project. All authors have read and agreed to the published version of the manuscript.
Funding
This project was partially supported by the National Science Foundation grant DMS-2150226, the NASA West Virginia Space Grant Consortium, training grant #80NSSC20M0055, and the NASA Missouri Space Grant Consortium grant #80NSSC20M0100.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Acknowledgments
The authors would like to thank Matthias Baur and Tom Cuchta for the use of their time scales Python package in producing the last example.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| LQR | linear quadratic regulator |
| LQT | linear quadratic tracker |
| LQPEG | linear quadratic pursuit-evasion games |
References
- Isaacs, R. Differential Games I, II, III, IV; The RAND Corp. Memo. RM-1391, RM-1399, RM-1411, RM-1486, 1954-1956.
- Isaacs, R. Differential games. A mathematical theory with applications to warfare and pursuit, control and optimization; John Wiley & Sons Inc.: New York, 1965; pp. xvii+384. [CrossRef]
- Kalman, R.E. Contributions to the theory of optimal control. Bol. Soc. Mat. Mexicana (2) 1960, 5, 102–119.
- Kalman, R.E. When is a linear control system optimal? Trans. ASME Ser. D. J. Basic Engineering 1964, 86, 81–90.
- Kalman, R.E.; Koepcke, R.W. Optimal synthesis of linear sampling control systems using generalized performance indexes. Trans. ASME Ser. D. J. Basic Engineering 1958, 80, 1820–1826. [CrossRef]
- Kalman, R.E. The theory of optimal control and the calculus of variations. In Mathematical optimization techniques; Univ. California Press: Berkeley, Calif., 1963; pp. 309–331.
- Ho, Y.C.; Bryson, Jr., A.E.; Baron, S. Differential games and optimal pursuit-evasion strategies. IEEE Trans. Automatic Control 1965, AC-10, 385–389. [CrossRef]
- Bryson, Jr., A.E.; Ho, Y.C. Applied optimal control; Hemisphere Publishing Corp. Washington, D. C., 1975; pp. xiv+481. Optimization, estimation, and control, Revised printing.
- Hilger, S. Ein Maßkettenkalkül mit Anwendung auf Zentrumsmannigfaltigkeiten; PhD Thesis, Universität Würzburg, 1988.
- Bohner, M.; Peterson, A. Dynamic equations on time scales; Birkhäuser Boston Inc.: Boston, MA, 2001; pp. x+358.
- Bohner, M.; Peterson, A., Eds. Advances in dynamic equations on time scales; Birkhäuser Boston Inc.: Boston, MA, 2003; pp. xii+348.
- Bartosiewicz, Z.; Pawłuszewicz, E. Linear control systems on time scales: unification of continuous and discrete. Proceedings of the 10th IEEE International Conference on Methods and Models in Automation and Robotics MMAR’04 2004, pp. 263–266.
- Bartosiewicz, Z.; Pawłuszewicz, E. Realizations of linear control systems on time scales. Control Cybernet. 2006, 35, 769–786. [CrossRef]
- Davis, J.; Gravagne, I.; Jackson, B.; Marks II, R. Controllability, observability, realizability, and stability of dynamic linear systems. Electron. J. Diff. Eqns 2009, 2009, 1–32.
- Fausett, L.V.; Murty, K.N. Controllability, observability and realizability criteria on time scale dynamical systems. Nonlinear Stud. 2004, 11, 627–638.
- Bohner, M.; Wintz, N. Controllability and observability of time-invariant linear dynamic systems. Mathematica Bohemica 2012, 137, 149–163. [CrossRef]
- Bohner, M. Calculus of variations on time scales. Dynam. Systems Appl. 2004, 13, 339–349. [CrossRef]
- DaCunha, J.J. Lyapunov Stability and Floquet Theory for Nonautonomous Linear Dynamic Systems on Time Scales; PhD Thesis, Baylor University, 2004.
- Hilscher, R.; Zeidan, V. Weak maximum principle and accessory problem for control problems on time scales. Nonlinear Anal. 2009, 70, 3209–3226. [CrossRef]
- Bettiol, P.; Bourdin, L. Pontryagin maximum principle for state constrained optimal sampled-data control problems on time scales. ESAIM: Control, Optimisation and Calculus of Variations 2021, 27. [CrossRef]
- Zhu, Y.; Jia, G. Linear Feedback of Mean-Field Stochastic Linear Quadratic Optimal Control Problems on Time Scales. Mathematical Problems in Engineering 2020, 2020, 8051918. [CrossRef]
- Ren, Q.Y.; Sun, J.P.; et al. Optimality necessary conditions for an optimal control problem on time scales. AIMS Math. 2021, 6, 5639–5646. [CrossRef]
- Zhu, Y.; Jia, G. Stochastic Linear Quadratic Control Problem on Time Scales. Discrete Dynamics in Nature and Society 2021, 2021, 5743014. [CrossRef]
- Poulsen, D.; Defoort, M.; Djemai, M. Mean Square Consensus of Double-Integrator Multi-Agent Systems under Intermittent Control: A Stochastic Time Scale Approach. Journal of the Franklin Institute 2019, 356. [CrossRef]
- Duque, C.; Leiva, H. CONTROLLABILITY OF A SEMILINEAR NEUTRAL DYNAMIC EQUATION ON TIME SCALES WITH IMPULSES AND NONLOCAL CONDITIONS. TWMS Journal of Applied and Engineering Mathematics 2023, 13, 975–989.
- Bohner, M.; Wintz, N. The linear quadratic regulator on time scales. Int. J. Difference Equ 2010, 5, 149–174.
- Bohner, M.; Wintz, N. The linear quadratic tracker on time scales. International Journal of Dynamical Systems and Differential Equations 2011, 3, 423–447. [CrossRef]
- Mu, Z.; Jie, T.; Zhou, Z.; Yu, J.; Cao, L. A survey of the pursuit–evasion problem in swarm intelligence. Frontiers of Information Technology & Electronic Engineering 2023, 24, 1093–1116. [CrossRef]
- Sun, Z.; Sun, H.; Li, P.; Zou, J. Cooperative strategy for pursuit-evasion problem in the presence of static and dynamic obstacles. Ocean Engineering 2023, 279, 114476. [CrossRef]
- Chen, N.; Li, L.; Mao, W. Equilibrium Strategy of the Pursuit-Evasion Game in Three-Dimensional Space. IEEE/CAA Journal of Automatica Sinica 2024, 11, 446–458. [CrossRef]
- Venigalla, C.; Scheeres, D.J. Delta-V-Based Analysis of Spacecraft Pursuit–Evasion Games. Journal of Guidance, Control, and Dynamics 2021, 44, 1961–1971. [CrossRef]
- Feng, Y.; Dai, L.; Gao, J.; Cheng, G. Uncertain pursuit-evasion game. Soft Comput. 2020, 24, 2425–2429. [CrossRef]
- Bertram, J.; Wei, P., An Efficient Algorithm for Multiple-Pursuer-Multiple-Evader Pursuit/Evasion Game; AIAA Scitech 2021 Forum, 2021; [https://arc.aiaa.org/doi/pdf/10.2514/6.2021-1862]. [CrossRef]
- Ye, D.; Shi, M.; Sun, Z. Satellite proximate pursuit-evasion game with different thrust configurations. Aerospace Science and Technology 2020, 99, 105715. [CrossRef]
- Libich, J.; Stehlík, P. Macroeconomic games on time scales. Dynamic Systems and Applications 2008, 5, 274–278.
- Petrov, N.; Mozhegova, E. On a simple pursuit problem on time scales of two coordinated evaders. Chelyabinsk Journal of Physics and Mathematics 2022, 7, 277–286. [CrossRef]
- Mozhegova, E.; Petrov, N. The differential game “Cossacks–robbers” on time scales. Izvestiya Instituta Matematiki i Informatiki Udmurtskogo Gosudarstvennogo Universiteta 2023, 62, 56–70. [CrossRef]
- Minh, V.D.; Phuong, B.L. LINEAR PURSUIT GAMES ON TIME SCALES WITH DELAY IN INFORMATION AND GEOMETRICAL CONSTRAINTS. TNU Journal of Science and Technology 2019, 200, 11–17.
- Lewis, F.L.; Vrabie, D.L.; Syrmos, V.L. Optimal Control; John Wiley & Sons, Inc., 2012. [CrossRef]
- Poulsen, D.; Davis, J.; Gravagne, I. Optimal Control on Stochastic Time Scales. IFAC-PapersOnLine 2017, 50, 14861–14866. [CrossRef]
- Siegmund, S.; Stehlik, P. Time scale-induced asynchronous discrete dynamical systems. Discrete & Continuous Dynamical Systems - B 2017, 22. [CrossRef]
- Poulsen, D.; Wintz, N. The Kalman filter on stochastic time scales. Nonlinear Analysis: Hybrid Systems 2019, 33, 151–161. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).