Preprint · Article · This version is not peer-reviewed

Optimal Control of Impulsive Systems Under State, Control, and Terminal Constraints

Submitted: 08 December 2025
Posted: 29 December 2025


Abstract
We establish a version of Pontryagin’s maximum principle for optimal control problems with impulses and phase constraints. Using the Dubovitskii-Milyutin theory, we construct a conic variational framework that handles impulsive dynamics and general state constraints. The main difficulty lies in working with piecewise continuous functions, required by the impulsive nature of the system. This setting also demands an extension of the classical result on the existence of nonnegative Borel measures, which leads to an adjoint equation formulated as a Stieltjes integral. Theoretical results are illustrated with examples, and key results by I. Girsanov are extended to the impulsive context.
Optimal control problems involving impulsive dynamics and state constraints present structural difficulties that do not arise in classical smooth systems. Impulses force the state to lie in the space of piecewise continuous functions, breaking the continuity assumptions under which standard variational tools operate. In particular, the analysis of constraints along the trajectory requires extending classical results—such as the representation of dual cones by nonnegative Borel measures—to domains partitioned by the impulse instants. Consequently, the adjoint equation naturally appears as a Volterra-type integral equation with a Stieltjes term associated with such a measure.
The goal of this work is to establish a version of Pontryagin’s Maximum Principle for impulsive systems subject to state constraints. Our approach relies on the Dubovitskii–Milyutin conic framework, which provides precise approximations of the objective functional, the impulsive dynamics, and the inequality constraints. This geometric structure allows us to formulate necessary optimality conditions even in the presence of multiple sources of nonsmoothness: discontinuous trajectories, admissible cones shaped by impulses, and active state constraints.
A distinctive contribution of this paper is the adaptation and extension of several foundational results of Girsanov [3] to the impulsive setting. Unlike many existing treatments of impulsive control—where state constraints are either avoided or handled under strong regularity assumptions—our formulation requires no additional restrictions on the trajectory beyond the natural admissibility conditions. This yields a unified variational framework capable of handling both the impulsive jumps and the state limitations in a rigorous and transparent manner.
The Dubovitskii–Milyutin theory has long been recognized as a powerful tool in optimal control (see [1,2,4,6,9,11,14]), including applications to impulsive systems without state constraints [11]. More recently, in [12], this framework was applied to optimal control problems governed by integral equations with both state and control constraints, though without impulses. Additional developments and applications can be found in [15,16,18], further illustrating the versatility and strength of conic methods in optimization.
To demonstrate the breadth of applicability of our main result, we conclude with two representative examples: a controlled epidemiological SIR model and a Bolza-type economic problem involving consumption and investment.

1. Framework of the Optimization Problem and Main Statements

This section introduces the optimal control model considered in this work. The setting describes the evolution of a state variable subject to continuous dynamics, impulsive effects, terminal constraints, pointwise state restrictions, and admissible control values. The problem below serves as the foundation for all subsequent analysis.
Problem 1.
$$F(z,v) = F_0(z(0), z(T)) + \int_0^T L(z(t), v(t), t)\,dt \longrightarrow \min_{\mathrm{loc}},$$
$$(z,v) \in E = PC([0,T]; \mathbb{R}^m) \times L_\infty([0,T]; \mathbb{R}),$$
$$\dot{z}(t) = H(z(t), v(t), t), \qquad z(0) = z_*,$$
$$C_j(z(T)) = 0, \qquad j = 1, \dots, r,$$
$$z(\sigma_k^+) = z(\sigma_k) + \Gamma_k(z(\sigma_k)), \qquad k = 1, \dots, N,$$
$$v(t) \in U, \qquad \text{a.e. } t \in [0,T],$$
$$\phi(z(t), t) \le 0, \qquad t \in [0,T].$$
Here $\min_{\mathrm{loc}}$ indicates that the functional $F$ attains a local minimum at $(z, v)$.
The dimensions $m, N$ and the horizon $T > 0$ are fixed. The mappings involved in the model are:
$$H : \mathbb{R}^m \times \mathbb{R} \times [0,T] \to \mathbb{R}^m, \qquad L : \mathbb{R}^m \times \mathbb{R} \times [0,T] \to \mathbb{R},$$
$$\Gamma_k : \mathbb{R}^m \to \mathbb{R}^m, \ k = 1, \dots, N, \qquad C_j : \mathbb{R}^m \to \mathbb{R}, \ j = 1, \dots, r,$$
$$\phi : \mathbb{R}^m \times [0,T] \to \mathbb{R}, \qquad F_0 : \mathbb{R}^m \times \mathbb{R}^m \to \mathbb{R}.$$
The set of partition points
$$0 = \sigma_0 < \sigma_1 < \cdots < \sigma_N < T, \qquad \sigma_{N+1} := T,$$
induces the subintervals
$$I_1 = [0, \sigma_1], \qquad I_{k+1} = (\sigma_k, \sigma_{k+1}], \quad k = 1, \dots, N.$$
The space $PC([0,T]; \mathbb{R}^m)$ consists of all functions that are continuous on each $I_k$ and possess right limits at the nodes $\sigma_k$; it is endowed with the supremum norm
$$\|z\|_\infty = \sup_{t \in [0,T]} \|z(t)\|.$$
For $z \in PC$ we define the regularized representative on each closed interval $\overline{I_k}$,
$$\tilde{z}_k(t) = \begin{cases} z(t), & t \in I_k, \\ z(\sigma_{k-1}^+), & t = \sigma_{k-1}. \end{cases}$$
The control space $L_\infty([0,T]; \mathbb{R})$ has the usual essential supremum norm.

Assumptions

a)
All functions $F_0, L, H, C_j, \Gamma_k$ are continuous, and the partial derivatives $F_0'$, $L_z$, $L_v$, $H_z$, $H_v$, $\Gamma_k'$, $C_j'$ exist and are continuous.
b)
The control constraint set U R is nonempty, closed, and convex, with a nonempty interior.
c)
The state constraint function $\phi$ is continuous and satisfies
$$\phi(z_*, 0) < 0, \qquad \phi(z(T), T) < 0,$$
and its derivative $\phi_z$ is continuous and nonzero on the boundary where $\phi(\cdot, t) = 0$.
d)
The impulsive mappings satisfy a uniform contraction-type bound:
$$\|\Gamma_k'(z)\| \le \rho_k < 1, \qquad z \in \mathbb{R}^m, \quad k = 1, \dots, N.$$
e)
At the terminal point of the nominal solution $(z, v)$, let $C = (C_1, C_2, \dots, C_r)$ and let
$$\Lambda = C'(z(T)) \in \mathbb{R}^{r \times m}$$
denote the Jacobian matrix of the terminal constraint. We assume full row rank:
$$\operatorname{rank}(\Lambda) = r.$$
Remark 1.
The condition $\operatorname{rank}(\Lambda) = r$ is equivalent to the surjectivity of $\Lambda : \mathbb{R}^m \to \mathbb{R}^r$, which ensures the existence of the right inverse $\Lambda^+ = \Lambda^\top (\Lambda \Lambda^\top)^{-1}$. Thus the equation $\Lambda z(T) = b$ has the solution $z(T) = \Lambda^\top (\Lambda \Lambda^\top)^{-1} b$ for every $b \in \mathbb{R}^r$.
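In finite dimensions the right inverse of Remark 1 can be checked directly. The sketch below uses a hypothetical full-row-rank matrix (not data from the paper) to verify that $\Lambda^+ = \Lambda^\top(\Lambda\Lambda^\top)^{-1}$ is indeed a right inverse:

```python
import numpy as np

# Hypothetical full-row-rank terminal Jacobian: r = 2 constraints, m = 4 states.
Lam = np.array([[1.0, 0.0, 2.0, -1.0],
                [0.0, 1.0, 1.0,  3.0]])

# Right inverse Lam^+ = Lam^T (Lam Lam^T)^{-1}, which exists since rank(Lam) = r.
Lam_plus = Lam.T @ np.linalg.inv(Lam @ Lam.T)

# For any b, z_T = Lam^+ b is one solution of Lam z_T = b.
b = np.array([0.5, -2.0])
z_T = Lam_plus @ b
```

The same computation is available as `numpy.linalg.pinv(Lam)` when $\Lambda$ has full row rank.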
Remark 2.
Let $(z, v)$ be a feasible pair, and consider the linearized continuous-time system
$$\dot{y}(t) = H_z(z(t), v(t), t)\, y(t) + H_v(z(t), v(t), t)\, w(t).$$
If this system is controllable and the bounds in assumption (d) hold, then the corresponding impulsive linear variational system
$$\dot{y}(t) = H_z(z(t), v(t), t)\, y(t) + H_v(z(t), v(t), t)\, w(t), \qquad t \ne \sigma_k,$$
$$y(\sigma_k^+) = \Gamma_k'(z(\sigma_k))\, y(\sigma_k) + \beta_k,$$
is also controllable.
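The jump structure of the variational system can be made concrete with a small simulation. The matrices, impulse instants, jump data, and controls below are illustrative assumptions only. Since the system is affine in $(w, \beta)$, the control-to-state map at time $T$ is affine, which the sketch verifies:

```python
import numpy as np

# Minimal sketch (all data hypothetical) of the impulsive variational system
#   y'(t) = A(t) y + B(t) w,   y(sigma_k^+) = G_k y(sigma_k) + beta_k.
T, N_steps = 1.0, 1000
dt = T / N_steps
A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # stand-in for H_z along the nominal pair
B = np.array([[0.0], [1.0]])              # stand-in for H_v along the nominal pair
sigma = [0.4, 0.7]                        # impulse instants
G = [0.5 * np.eye(2), 0.8 * np.eye(2)]    # Gamma_k' (contractive, as in assumption (d))
beta = [np.array([0.1, 0.0]), np.array([0.0, -0.2])]

def simulate(w):
    """Forward Euler with state resets at the impulse instants."""
    y = np.zeros(2)
    k = 0
    for i in range(N_steps):
        t = i * dt
        if k < len(sigma) and t >= sigma[k]:
            y = G[k] @ y + beta[k]        # impulsive jump
            k += 1
        y = y + dt * (A @ y + B @ np.array([w(t)]))
    return y

y_free = simulate(lambda t: 0.0)          # response to jumps alone
```

Subtracting the uncontrolled response isolates the linear part of the map $w \mapsto y(T)$, which superposes, mirroring the linearity used in the controllability argument.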
Theorem 1.
Assume that hypotheses (a)–(e) are satisfied for an optimal pair $(z, v) \in E$ of Problem 1. Then the following assertions hold.
1. Existence of a state-constraint measure. There exists a nonnegative Borel measure $\nu$ on $[0,T]$ whose support is contained in the active constraint set
$$A = \{\, t \in [0,T] \ : \ \phi(z(t), t) = 0 \,\}.$$
2. Adjoint equation. There exist a scalar $\alpha \ge 0$ and a function $p \in L_1([0,T]; \mathbb{R}^m)$, not simultaneously zero with $\alpha$, satisfying the backward integral relation
$$p(t) = \int_t^T \big[ H_z^\top(z(\sigma), v(\sigma), \sigma)\, p(\sigma) - \alpha L_z(z(\sigma), v(\sigma), \sigma) \big]\, d\sigma + \Lambda^\top \xi - \alpha \nabla_2 F_0(z(0), z(T)) - \int_t^T \phi_z(z(\sigma), \sigma)\, d\nu(\sigma),$$
for some vector $\xi \in \mathbb{R}^r$.
3. Pontryagin's maximum principle. For almost every $t \in [0,T]$ and all $w \in U$,
$$\big\langle -H_v^\top(z(t), v(t), t)\, p(t) + \alpha L_v(z(t), v(t), t),\ w - v(t) \big\rangle \ge 0.$$
Theorem 2.
Assume that structural conditions (a)–(e) hold, and consider the terminally constrained problem in which the path restriction $\phi(z(t), t) \le 0$ is replaced by the endpoint inequality
$$\phi(z(T), T) \le 0.$$
Let $(z, v) \in E$ be an optimal pair, and suppose that
$$\phi_z(z(T), T) \ne 0.$$
Then there exist nonnegative multipliers $\alpha \ge 0$, $\rho \ge 0$ and an adjoint function $p \in C([0,T]; \mathbb{R}^m)$, not simultaneously zero with $\alpha$, such that the relations below are satisfied.
1. Adjoint dynamics. The function $p(\cdot)$ satisfies
$$\dot{p}(t) = -H_z^\top(z(t), v(t), t)\, p(t) + \alpha L_z(z(t), v(t), t) + \rho\, \phi_z(z(T), T), \qquad 0 < t < T,$$
together with the transversality condition
$$p(T) = \Lambda^\top \gamma - \alpha \nabla_2 F_0(z(0), z(T)),$$
for some vector $\gamma \in \mathbb{R}^r$, where $\Lambda := C'(z(T))$ denotes the Jacobian matrix of the terminal constraint.
2. Pontryagin's maximum principle. For almost every $t \in [0,T]$ and every $w \in U$,
$$\big\langle -H_v^\top(z(t), v(t), t)\, p(t) + \alpha L_v(z(t), v(t), t),\ w - v(t) \big\rangle \ge 0.$$

2. Preliminaries

This section gathers the analytical tools required for the variational approach used later in the article. Our presentation follows the classical Dubovitskii–Milyutin framework, but we adapt the exposition to the structural features of optimal control problems with impulsive dynamics and state constraints. Only the ingredients relevant for the derivation of the necessary conditions are included here; a comprehensive treatment can be found in [3,9,11].

2.1. Cones, Dual Cones, and Separation Mechanisms

Let $E$ be a locally convex topological vector space with dual $E^*$. A subset $K \subset E$ is called a cone when $\lambda K = K$ for every $\lambda > 0$. The corresponding dual cone is
$$K^+ := \{\, \ell \in E^* \ : \ \ell(x) \ge 0 \ \text{for all } x \in K \,\}.$$
A fundamental operation is the passage to dual cones under intersections. For any family of convex, weakly closed cones $\{K_\alpha\}_{\alpha \in A} \subset E$, one has
$$\Big( \bigcap_{\alpha \in A} K_\alpha \Big)^+ = \overline{\sum_{\alpha \in A} K_\alpha^+}^{\,w^*}.$$
If a cone $K$ has nonempty interior relative to a linear subspace $L$, then
$$(K \cap L)^+ = K^+ + L^+.$$
For a finite family of open convex cones $K_1, \dots, K_m$ with nonempty intersection,
$$\Big( \bigcap_{i=1}^m K_i \Big)^+ = \sum_{i=1}^m K_i^+.$$
Finally, if $K_1, \dots, K_{m+1}$ are convex cones with apex at the origin, and the first $m$ are open, then
$$\bigcap_{i=1}^{m+1} K_i = \emptyset \iff \text{there exist } \ell_i \in K_i^+ \ (i = 1, \dots, m+1), \text{ not all zero, such that } \sum_{i=1}^{m+1} \ell_i = 0.$$
These separation properties constitute the abstract backbone of the Euler–Lagrange equation used later.
These separation properties constitute the abstract backbone of the Euler–Lagrange equation used later.

2.2. Abstract Euler–Lagrange Principle

Consider an optimization problem of the form
$$F(x) \to \min_{\mathrm{loc}} \quad \text{s.t.} \quad x \in Q_i, \ i = 1, \dots, m+1,$$
where $F : E \to \mathbb{R}$ is the objective and the sets $Q_i$ encode the constraints. Typically, $Q_1, \dots, Q_m$ correspond to inequality constraints (with nonempty interiors), and $Q_{m+1}$ represents equality constraints.
Let $x$ be a local minimizer and denote by
- $K_0$ the decay cone of $F$ at $x$,
- $K_i$ the admissible cone to $Q_i$ at $x$ ($i = 1, \dots, m$),
- $K_{m+1}$ the tangent cone to $Q_{m+1}$ at $x$.
Theorem 3
(Dubovitskii–Milyutin). If each $K_i$ is convex, then there exist dual elements $\ell_i \in K_i^+$, not all zero, such that
$$\ell_0 + \ell_1 + \cdots + \ell_{m+1} = 0.$$
This identity is the abstract Euler–Lagrange equation.
The equation above will later be instantiated in the setting of impulsive optimal control, where each dual element is identified with a co-state, a multiplier, or a measure depending on which cone generates it.

2.3. Decay and Admissible Directions

A direction $h \in E$ is a decay direction for the functional $F$ at $x$ if small perturbations of the form $x + \varepsilon h$ decrease $F$ to first order. When the directional derivative
$$F'(x; h) := \lim_{\varepsilon \downarrow 0} \frac{F(x + \varepsilon h) - F(x)}{\varepsilon}$$
exists, this condition is equivalent to $F'(x; h) < 0$. Thus the decay cone is
$$K_d(F, x) = \{\, h \in E \ : \ F'(x; h) < 0 \,\}.$$
Admissible directions for a constraint set $Q$ are those perturbations that keep the perturbed point inside $Q$ to first order. The admissible cone at $x \in Q$ is
$$K_a(Q, x) = \{\, h \ : \ x + \varepsilon h \in Q \ \text{for all sufficiently small } \varepsilon > 0 \,\}.$$
Theorem 4. (See [3].) If $Q$ is a convex set with $\operatorname{int}(Q) \ne \emptyset$, then
$$K_a(Q, x) = \{\, h \in E \ : \ h = \lambda (\bar{x} - x), \ \bar{x} \in \operatorname{int}(Q), \ \lambda > 0 \,\}.$$

2.4. Tangent Directions and Lyusternik’s Principle

A vector $h$ is tangent to a set $Q$ at $x$ if one can find perturbations of the form
$$x + \varepsilon h + o(\varepsilon)$$
that remain in $Q$. The set of all such directions is the tangent cone $K_T(Q, x)$.
Lyusternik’s theorem provides an efficient way to compute tangent cones for equality-constrained sets of the form Q = { x : P ( x ) = 0 } .
Theorem 5
(Lyusternik). Let $P : E_1 \to E_2$ be Fréchet differentiable at $x$, and suppose that $P'(x)$ is surjective. Then
$$K_T(Q, x) = \ker P'(x).$$
This result will later be used to analyze the linearized dynamics of the impulsive control system.
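In finite dimensions, Lyusternik's theorem says the tangent space to $\{x : P(x) = 0\}$ is the kernel of the Jacobian $P'(x)$. A sketch with a hypothetical $2 \times 4$ Jacobian, computing an orthonormal basis of the kernel from the SVD:

```python
import numpy as np

# Hypothetical surjective derivative P'(x): a 2 x 4 Jacobian of full row rank.
J = np.array([[1.0, 2.0, 0.0, -1.0],
              [0.0, 1.0, 1.0,  1.0]])

# The right singular vectors with zero singular value span ker P'(x) = K_T(Q, x).
_, s, Vt = np.linalg.svd(J)
rank = int(np.sum(s > 1e-12))
kernel_basis = Vt[rank:].T    # m x (m - r) matrix whose columns span the kernel
```

Every tangent direction is a linear combination of the columns of `kernel_basis`, which is exactly the finite-dimensional analogue of the kernel characterization used for the linearized impulsive system below.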

2.5. Dual Characterizations for Several Cones

We now list three duality results needed to construct the functionals in the dual cones of the approximating cones for the optimal control problem.
Proposition 1.
Let $E = PC([0,T]; \mathbb{R}^m)$, and let $\Lambda$ be an $r \times m$ matrix with $r \le m$ and $\operatorname{rank}(\Lambda) = r$. Consider the cone
$$K = \{\, x \in E \ : \ \Lambda x(T) = 0 \,\}.$$
Then $\ell \in K^+$ if and only if there is $a \in \mathbb{R}^r$ such that
$$\ell(x) = \langle a, \Lambda x(T) \rangle = \langle \Lambda^\top a, x(T) \rangle \qquad (x \in E).$$
Proposition 2. (See [11].) Let $A, \Gamma_k : [0,T] \to \mathbb{R}^{m \times m}$, $k = 1, \dots, N$, and $B : [0,T] \to \mathbb{R}^{m \times l}$ be measurable and bounded functions. Suppose the following impulsive linear system is controllable on $[0,T]$ for any $\bar{b} = (\bar{b}_1, \bar{b}_2, \dots, \bar{b}_N) \in (\mathbb{R}^m)^N$:
$$\dot{x}(t) = A(t) x(t) + B(t) v(t), \qquad t \in (0,T], \ t \ne \sigma_k,$$
$$x(\sigma_k^+) = \Gamma_k(\sigma_k) x(\sigma_k) + \bar{b}_k, \qquad k = 1, \dots, N,$$
where $(x, v) \in E := PC([0,T]; \mathbb{R}^m) \times L_\infty([0,T]; \mathbb{R})$. Define the cone
$$L_1 := \big\{\, (x, v) \in E \ : \ x(T) = 0, \ x(\sigma_k^+) - \Gamma_k(\sigma_k) x(\sigma_k) = 0, \ k = 1, \dots, N \,\big\}.$$
Then $\dim L_1^+ = m(N+1)$, and for every $\ell \in L_1^+$ there is $a \in \mathbb{R}^{m(N+1)}$ such that
$$\ell(x, v) = \big\langle a, \big( x(T),\ x(\sigma_1^+) - \Gamma_1(\sigma_1) x(\sigma_1),\ \dots,\ x(\sigma_N^+) - \Gamma_N(\sigma_N) x(\sigma_N) \big) \big\rangle, \qquad (x, v) \in E.$$
A modification of example 10.3 of [3], leads us to the following result.
Proposition 3.
Let $E_1 = PC([0,T]; \mathbb{R}^m)$, let $A \subset [0,T]$ be closed, and let $h \in E_1$ satisfy $h(t) \ne 0$ for $t \in A$. Define the cone
$$K := \{\, x \in E_1 \ : \ \langle h(t), x(t) \rangle \ge 0, \ t \in A \,\}.$$
Then, for every $\ell \in K^+$, there exists a nonnegative Borel measure $\nu$ on $[0,T]$, with support in $A$, such that
$$\ell(x) = \int_A \langle h(t), x(t) \rangle\, d\nu(t) \qquad (x \in E_1).$$
Proof. 
Let $\ell \in K^+$ and consider the cones
$$K_k := \{\, x \in E_1 \ : \ \langle \tilde{h}_k(t), \tilde{x}_k(t) \rangle \ge 0, \ t \in \overline{J}_k \cap A \,\}, \qquad k = 1, \dots, N,$$
where $\tilde{h}_k, \tilde{x}_k$ denote the regularized representatives on the closed subintervals $\overline{J}_k$. Since $A = \bigcup_{k=1}^N (\overline{J}_k \cap A)$, we get $K = \bigcap_{k=1}^N K_k$. Then, from the preliminaries,
$$K^+ = \sum_{k=1}^N K_k^+ \qquad \text{and} \qquad \ell(x) = \sum_{k=1}^N \ell_k(x), \quad \ell_k \in K_k^+.$$
Hence, from Example 10.3 in [3], there is a nonnegative Borel measure $\nu_k$ on $[0,T]$ with support in $\overline{J}_k \cap A$ such that
$$\ell_k(x) = \int_{\overline{J}_k \cap A} \langle h(t), x(t) \rangle\, d\nu_k(t) \qquad (x \in E_1).$$
Therefore,
$$\ell(x) = \sum_{k=1}^N \int_{\overline{J}_k \cap A} \langle h(t), x(t) \rangle\, d\nu_k(t) = \int_A \langle h(t), x(t) \rangle\, d\nu(t)$$
if we put $\nu = \sum_{k=1}^N \chi_{\overline{J}_k \cap A}\, \nu_k$, where $\chi_{\overline{J}_k \cap A}$ is the characteristic function of the set $\overline{J}_k \cap A$. Hence
$$\ell(x) = \int_A \langle h(t), x(t) \rangle\, d\nu(t) \qquad (x \in E_1),$$
and $\nu$ is a nonnegative Borel measure with support in $A$. □
We close this subsection with a significant example concerning support functionals.
Example 6.
Let $U \subset \mathbb{R}$, let $Q := \{\, v \in L_\infty([0,T]; \mathbb{R}) \ : \ v(t) \in U \ \text{a.e. } t \in [0,T] \,\}$, and consider $v \in Q$, $a \in L_\infty([0,T]; \mathbb{R})$ and $\ell : L_\infty([0,T]; \mathbb{R}) \to \mathbb{R}$ defined by
$$\ell(\bar{v}) := \int_0^T \langle a(t), \bar{v}(t) \rangle\, dt, \qquad \bar{v} \in L_\infty([0,T]; \mathbb{R}).$$
Suppose that $\ell(\bar{v}) \ge \ell(v)$ for all $\bar{v} \in Q$. Then for all $\omega \in U$ and almost all $t \in [0,T]$,
$$\langle a(t), \omega - v(t) \rangle \ge 0.$$
For details of this example see [3].

3. Proof of the Main Theorem 1

Proof. 
Let $F : E \to \mathbb{R}$ be the objective functional defined by
$$F(z, v) = F_0(z(0), z(T)) + \int_0^T L(z(t), v(t), t)\, dt.$$
Let
$$Q := Q_1 \cap Q_2 \cap Q_3,$$
where each subset of feasible pairs $(z, v) \in E$ is given by:
- $Q_1$: pairs satisfying the continuous dynamics, the impulsive conditions, and the terminal equality constraints,
- $Q_2$: pairs respecting the control constraint $v(t) \in U$ a.e.,
- $Q_3$: pairs satisfying the state constraint $\phi(z(t), t) \le 0$.
Thus the optimal control problem is equivalent to
$$F(z, v) \to \min_{\mathrm{loc}}, \qquad (z, v) \in Q.$$
(a) Decay cone of the functional F
Define the decay cone of $F$ at the optimal point $(z, v)$ as
$$K_0 := K_d(F, (z, v)) = \{\, (h, w) \in E \ : \ F'(z, v)(h, w) < 0 \,\}.$$
Assume temporarily that $K_0 \ne \emptyset$. Its dual cone is
$$K_0^+ = \{\, -\alpha F'(z, v) \ : \ \alpha \ge 0 \,\}.$$
Thus, for every $\ell_0 \in K_0^+$, there exists $\alpha \ge 0$ such that for all $(h, w) \in E$:
$$\ell_0(h, w) = -\alpha \big[ \nabla_1 F_0(z(0), z(T))\, h(0) + \nabla_2 F_0(z(0), z(T))\, h(T) \big] - \alpha \int_0^T \big[ L_z(z(t), v(t), t)\, h(t) + L_v(z(t), v(t), t)\, w(t) \big]\, dt.$$
(b) Analysis of the Constraint Q 1
We now compute the tangent cone to $Q_1$ at the optimal point $(z, v)$, where $Q_1$ encodes the continuous dynamics, the impulsive relations, and the terminal constraints. Define
$$K_1 := K_T(Q_1, (z, v)).$$
To apply Lyusternik's theorem, we introduce the product space
$$E_1 = PC([0,T]; \mathbb{R}^m) \times L_\infty([0,T]; \mathbb{R}),$$
and the space collecting the residuals of the dynamics, impulse constraints, and terminal condition:
$$E_2 = PC([0,T]; \mathbb{R}^m) \times \mathbb{R}^{mN} \times \mathbb{R}^r.$$
Define the operator $\Pi : E_1 \to E_2$ by
$$\Pi(z, v) = \big( \Pi_c(z, v),\ \Pi_{\mathrm{imp}}(z, v),\ C(z(T)) \big),$$
where
$$\Pi_c(z, v)(t) := z(t) - z_* - \int_0^t H(z(s), v(s), s)\, ds,$$
$$\Pi_{\mathrm{imp}}(z, v) := \big( z(\sigma_1^+) - \Gamma_1(z(\sigma_1)),\ \dots,\ z(\sigma_N^+) - \Gamma_N(z(\sigma_N)) \big).$$
Thus $Q_1 = \Pi^{-1}(0)$.
The Fréchet derivative at the nominal point $(z, v)$ is
$$\Pi'(z, v)(h, w) = \big( \Pi_c'(h, w),\ \Pi_{\mathrm{imp}}'(h, w),\ \Lambda h(T) \big),$$
where $\Lambda = C'(z(T))$, and
$$\Pi_c'(h, w)(t) = h(t) - \int_0^t \big[ H_z(z(s), v(s), s)\, h(s) + H_v(z(s), v(s), s)\, w(s) \big]\, ds,$$
$$\Pi_{\mathrm{imp}}'(h, w) = \big( h(\sigma_1^+) - \Gamma_1'(z(\sigma_1))\, h(\sigma_1),\ \dots,\ h(\sigma_N^+) - \Gamma_N'(z(\sigma_N))\, h(\sigma_N) \big).$$
To apply Theorem 5, we must check the surjectivity of $\Pi'(z, v)$. Given arbitrary data
$$\big( a(\cdot),\ b_1, \dots, b_N,\ z_1 \big) \in E_2,$$
we must find a pair $(h, w) \in E_1$ satisfying
$$\Pi'(z, v)(h, w) = \big( a(\cdot),\ b_1, \dots, b_N,\ z_1 \big).$$
First solve the Volterra equation
$$q(t) = a(t) + \int_0^t H_z(z(s), v(s), s)\, q(s)\, ds,$$
which admits a unique solution $q \in PC([0,T]; \mathbb{R}^m)$.
Let the impulsive mismatches be
$$\beta_k := b_k - q(\sigma_k^+) + \Gamma_k'(z(\sigma_k))\, q(\sigma_k).$$
Assuming (as required in the theorem) that the linear variational impulsive system is controllable, there exists a control correction $w(\cdot)$ such that the corresponding solution $y(t)$ of the linear system
$$\dot{y}(t) = H_z(z(t), v(t), t)\, y(t) + H_v(z(t), v(t), t)\, w(t),$$
with impulses
$$y(\sigma_k^+) = \Gamma_k'(z(\sigma_k))\, y(\sigma_k) + \beta_k,$$
satisfies
$$y(0) = 0, \qquad y(T) = \Lambda^\top (\Lambda \Lambda^\top)^{-1} z_1 - q(T).$$
Define $h := y + q$. Then one verifies that
$$\Pi_c'(h, w) = a(\cdot), \qquad \Pi_{\mathrm{imp}}'(h, w) = (b_1, \dots, b_N), \qquad \Lambda h(T) = z_1.$$
Hence $\Pi'(z, v)$ is surjective. By Lyusternik's theorem,
$$K_1 = \ker \Pi'(z, v).$$
That is, $(h, w) \in K_1$ if and only if
$$\dot{h}(t) = H_z(z(t), v(t), t)\, h(t) + H_v(z(t), v(t), t)\, w(t),$$
$$h(\sigma_k^+) = \Gamma_k'(z(\sigma_k))\, h(\sigma_k), \quad k = 1, \dots, N, \qquad \Lambda h(T) = 0.$$
Define the linear subspaces
$$L_1 = \big\{\, (h, w) \in E_1 \ : \ \text{the homogeneous linear impulsive system above holds} \,\big\}, \qquad L_2 = \{\, (h, w) \ : \ \Lambda h(T) = 0 \,\}.$$
Then $K_1 = L_1 \cap L_2$, and the dual cone is
$$K_1^+ = L_1^+ + L_2^+.$$
Since $L_1$ is a linear subspace, any functional $\ell_{11} \in L_1^+$ vanishes on all $(h, w)$ that satisfy the linear dynamics. Moreover, $\ell_{12} \in L_2^+$ if and only if there is $\xi \in \mathbb{R}^r$ such that
$$\ell_{12}(h, w) = \langle \xi, \Lambda h(T) \rangle = \langle \Lambda^\top \xi, h(T) \rangle.$$
Thus every dual element in $K_1^+$ is the sum of a terminal multiplier term and a functional annihilating all homogeneous state-control variations.
(c)Analysis of the Control Constraint Q 2
The set of admissible controls is
$$Q_2' := \{\, v \in L_\infty([0,T]; \mathbb{R}) \ : \ v(t) \in U \ \text{for a.e. } t \in [0,T] \,\},$$
where the control constraint set $U \subset \mathbb{R}$ is, by hypothesis, nonempty, convex, closed, and satisfies $\operatorname{int}(U) \ne \emptyset$. We define
$$Q_2 := PC([0,T]; \mathbb{R}^m) \times Q_2'.$$
Because $U$ is convex with nonempty interior, both $Q_2'$ and $Q_2$ are convex and closed subsets of their respective spaces, and both have nonempty interior.
Let
$$K_2 := K_a(Q_2, (z, v))$$
be the admissible cone at the optimal pair. Since the state variable $z(\cdot)$ is unrestricted in $Q_2$, the cone splits as
$$K_2 = PC([0,T]; \mathbb{R}^m) \times K_2',$$
where $K_2' := K_a(Q_2', v)$ is the admissible cone of the control constraint. Consequently, any dual element $\ell_2 \in K_2^+$ acts only on the control component, i.e.,
$$\ell_2(h, w) = \ell_2'(w), \qquad \ell_2' \in (K_2')^+.$$
By Theorem 4, every functional $\ell_2' \in (K_2')^+$ is a support functional of $Q_2'$ at $v$. Thus,
$$\ell_2'(w - v) \ge 0 \qquad \text{for all } w \in Q_2'.$$
In other words, $\ell_2'$ defines a supporting hyperplane to the control constraint set $Q_2'$ at the point $v$. This property will later yield the Pontryagin maximum principle.
(d) Analysis of the State Constraint Q 3
We introduce the auxiliary functional
$$\Phi : PC([0,T]; \mathbb{R}^m) \to \mathbb{R}, \qquad \Phi(z) := \max_{k = 1, \dots, N} \ \max_{t \in \overline{I_k}} \ \phi(\tilde{z}_k(t), t),$$
which is continuous by construction. Following the arguments of Examples 7.4–7.5 in [3], its directional derivative at $z$ satisfies
$$\Phi'(z; h) = \max_{k \in I} \ \max_{t \in A_k} \ \langle \phi_z(z(t), t), h(t) \rangle, \qquad h \in PC([0,T]; \mathbb{R}^m),$$
where
$$I = \Big\{\, k \ : \ \max_{t \in \overline{I_k}} \phi(\tilde{z}_k(t), t) = \max_j \max_{t \in \overline{I_j}} \phi(\tilde{z}_j(t), t) \,\Big\}, \qquad A_k = \{\, t \in \overline{I_k} \ : \ \phi(\tilde{z}_k(t), t) = \Phi(z) \,\}.$$
Since along the optimal process $\phi(z(t), t) \le 0$, we have
$$A_k = \{\, t \in \overline{I_k} \ : \ \phi(z(t), t) = 0 \,\}.$$
Thus the active set of times is
$$A = \bigcup_{k=1}^N A_k = \{\, t \in [0,T] \ : \ \phi(z(t), t) = 0 \,\}.$$
On the other hand,
$$K_3 = K_a(Q_3, (z, v)) \supset K_d(\Phi, z) =: K_d,$$
and
$$K_d = \{\, (h, w) \in E \ : \ \Phi'(z; h) < 0 \,\} = \{\, (h, w) \in E \ : \ \langle \phi_z(z(t), t), h(t) \rangle < 0 \ \text{for all } t \in A \,\}.$$
From this inclusion we obtain
$$K_3^+ \subset K_d^+.$$
Therefore, by Proposition 3 (applied with $h = -\phi_z$), for every dual element $\ell_3 \in K_3^+$ there exists a nonnegative Borel measure $\nu$ on $[0,T]$ such that
$$\ell_3(\bar{z}, \bar{v}) = -\int_0^T \langle \phi_z(z(t), t), \bar{z}(t) \rangle\, d\nu(t), \qquad (\bar{z}, \bar{v}) \in E,$$
and the measure is supported on the active set:
$$\operatorname{supp}(\nu) \subset A = \{\, t \in [0,T] \ : \ \phi(z(t), t) = 0 \,\}.$$
The Pontryagin maximum principle.
It is easy to see that $K_0, K_1, K_2, K_3$ are convex cones. Hence, by Theorem 3, there exist functionals $\ell_i \in K_i^+$, $i = 0, 1, 2, 3$, not all zero, such that
$$\ell_0 + \ell_1 + \ell_2 + \ell_3 = 0.$$
Writing $\ell_1 = \ell_{11} + \ell_{12}$ with $\ell_{12}(\bar{z}, \bar{v}) = \langle a, \Lambda \bar{z}(T) \rangle$, the Euler–Lagrange identity becomes, for all $(\bar{z}, \bar{v}) \in E$,
$$-\alpha \big[ \nabla_1 F_0(z(0), z(T))\, \bar{z}(0) + \nabla_2 F_0(z(0), z(T))\, \bar{z}(T) \big] - \alpha \int_0^T \big[ L_z(z(t), v(t), t)\, \bar{z}(t) + L_v(z(t), v(t), t)\, \bar{v}(t) \big]\, dt + \ell_{11}(\bar{z}, \bar{v}) + \langle a, \Lambda \bar{z}(T) \rangle + \ell_2'(\bar{v}) - \int_0^T \langle \phi_z(z(t), t), \bar{z}(t) \rangle\, d\nu(t) = 0,$$
where
- $\ell_{11} \in L_1^+$ corresponds to the dynamics constraint,
- $\ell_2' \in (K_2')^+$ corresponds to the control constraint set $Q_2'$,
- $\nu$ is the nonnegative Borel measure associated with the path constraint $\phi(z(t), t) \le 0$,
- $\alpha \ge 0$ is the scalar multiplier of the cost functional,
- and $\Lambda = C'(z(T))$ is the Jacobian of the terminal constraint.
For every $\bar{v} \in L_\infty([0,T]; \mathbb{R})$, the linearized impulsive system admits a solution $\bar{z} \in PC([0,T]; \mathbb{R}^m)$ with $\bar{z}(0) = 0$. For such pairs $(\bar{z}, \bar{v}) \in L_1$, and therefore
$$\ell_{11}(\bar{z}, \bar{v}) = 0.$$
Hence, the Euler–Lagrange identity reduces to
$$\ell_2'(\bar{v}) = \alpha \int_0^T L_z(z(t), v(t), t)\, \bar{z}(t)\, dt + \alpha \int_0^T L_v(z(t), v(t), t)\, \bar{v}(t)\, dt + \alpha \nabla_2 F_0(z(0), z(T))\, \bar{z}(T) - \langle a, \Lambda \bar{z}(T) \rangle + \int_0^T \langle \phi_z(z(t), t), \bar{z}(t) \rangle\, d\nu(t),$$
for all such $(\bar{z}, \bar{v}) \in E$.
Let $p$ solve the adjoint integral equation (with $\xi := a$):
$$p(t) = \int_t^T \big[ H_z^\top(z(\sigma), v(\sigma), \sigma)\, p(\sigma) - \alpha L_z(z(\sigma), v(\sigma), \sigma) \big]\, d\sigma + \Lambda^\top \xi - \alpha \nabla_2 F_0(z(0), z(T)) - \int_t^T \phi_z(z(\sigma), \sigma)\, d\nu(\sigma).$$
This is a second-kind Volterra equation with a unique solution $p \in L_1([0,T]; \mathbb{R}^m)$ (see [7]). Multiplying both sides of this equation by $\dot{\bar{z}}$ and integrating over $[0,T]$, we obtain the following:
$$\int_0^T \langle \dot{\bar{z}}(t), p(t) \rangle\, dt = \int_0^T \Big\langle \dot{\bar{z}}(t), \int_t^T \big[ H_z^\top p(\tau) - \alpha L_z \big]\, d\tau \Big\rangle\, dt + \langle \xi, \Lambda \bar{z}(T) \rangle - \alpha \nabla_2 F_0(z(0), z(T))\, \bar{z}(T) - \int_0^T \Big\langle \dot{\bar{z}}(t), \int_t^T \phi_z(z(\tau), \tau)\, d\nu(\tau) \Big\rangle\, dt.$$
Given that $\dot{\bar{z}}(t) = H_z(z(t), v(t), t)\, \bar{z}(t) + H_v(z(t), v(t), t)\, \bar{v}(t)$ and $\bar{z}(0) = 0$, we get
$$\langle \dot{\bar{z}}(t) - H_v(z(t), v(t), t)\, \bar{v}(t), p(t) \rangle = \langle \bar{z}(t), H_z^\top(z(t), v(t), t)\, p(t) \rangle.$$
Using Stieltjes integration by parts, an interchange of the order of integration, and the fact that $\nu(\{0\}) = \nu(\{T\}) = 0$, we obtain
$$\int_0^T \Big\langle \dot{\bar{z}}(t), \int_t^T \phi_z(z(\tau), \tau)\, d\nu(\tau) \Big\rangle\, dt = \int_0^T \langle \phi_z(z(t), t), \bar{z}(t) \rangle\, d\nu(t),$$
and similarly
$$\int_0^T \Big\langle \dot{\bar{z}}(t), \int_t^T \big[ H_z^\top p(\tau) - \alpha L_z \big]\, d\tau \Big\rangle\, dt = \int_0^T \langle \bar{z}(t), H_z^\top(z(t), v(t), t)\, p(t) \rangle\, dt - \alpha \int_0^T L_z(z(t), v(t), t)\, \bar{z}(t)\, dt.$$
Combining the previous identities yields
$$\alpha \int_0^T L_z(z(t), v(t), t)\, \bar{z}(t)\, dt + \int_0^T \langle \phi_z(z(t), t), \bar{z}(t) \rangle\, d\nu(t) - \langle \xi, \Lambda \bar{z}(T) \rangle + \alpha \nabla_2 F_0(z(0), z(T))\, \bar{z}(T) = -\int_0^T \langle H_v^\top(z(t), v(t), t)\, p(t), \bar{v}(t) \rangle\, dt.$$
Then the support functional $\ell_2'$ becomes
$$\ell_2'(\bar{v}) = \int_0^T \big\langle -H_v^\top(z(t), v(t), t)\, p(t) + \alpha L_v(z(t), v(t), t),\ \bar{v}(t) \big\rangle\, dt.$$
Since $\ell_2' \in (K_2')^+$ is a support functional of $Q_2'$ at $v$, Example 6 yields, for almost all $t \in [0,T]$ and all $\omega \in U$,
$$\big\langle -H_v^\top(z(t), v(t), t)\, p(t) + \alpha L_v(z(t), v(t), t),\ \omega - v(t) \big\rangle \ge 0.$$
To exclude the trivial case, suppose $\alpha = 0$ and $p \equiv 0$. Then $\Lambda^\top \xi = p(T) = 0$, and the full row rank of $\Lambda$ gives $\xi = 0$, hence
$$\ell_{12}(\bar{z}, \bar{v}) = \langle \Lambda^\top \xi, \bar{z}(T) \rangle = 0.$$
The adjoint equation then forces $\nu = 0$, so $\ell_3 = 0$; the reduced Euler–Lagrange identity gives $\ell_2' = 0$; and consequently $\ell_1 = \ell_{11} + \ell_{12} = 0$. Thus all multipliers vanish, contradicting Theorem 3.
Finally, standard arguments show that neither the temporary assumption $K_0 \ne \emptyset$ nor the controllability of the linearized system is essential: in either degenerate case the assertions of the theorem hold with a suitable (possibly degenerate) choice of multipliers.
Hence, the proof of Theorem 1 is complete. □
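The unique solvability of second-kind Volterra equations of the type used for the adjoint variable can be illustrated numerically. The scalar kernel and forcing below are illustrative stand-ins (not the paper's data); for this bounded kernel the Picard iteration contracts in the supremum norm:

```python
import numpy as np

# Numerical sketch of the second-kind Volterra structure
#   p(t) = \int_t^T K(s) p(s) ds + g(t),
# solved by Picard iteration (hypothetical scalar data).
T, n = 1.0, 400
t = np.linspace(0.0, T, n + 1)
dt = T / n
K = 0.5 + 0.1 * t          # stand-in for the kernel H_z^T along the nominal pair
g = np.cos(t)              # stand-in for the multiplier/measure terms

p = np.zeros(n + 1)
for _ in range(100):                               # Picard iteration
    tail = np.cumsum((K * p)[::-1])[::-1] * dt     # rectangle rule for \int_t^T K p ds
    p_new = tail + g
    if np.max(np.abs(p_new - p)) < 1e-12:
        p = p_new
        break
    p = p_new
```

At $t = T$ the integral term vanishes, so the computed solution satisfies $p(T) \approx g(T)$, mirroring the terminal condition read off from the adjoint relation.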
Sufficient Condition of Optimality
Under certain additional conditions, the necessary optimality condition established in Theorem 1 (the maximum principle) is also sufficient. In fact, consider the particular case of Problem 1 in which the differential equation is linear.
Problem 2.
$$F_0(z(0), z(T)) + \int_0^T L(z(t), v(t), t)\, dt \longrightarrow \min_{\mathrm{loc}},$$
$$(z, v) \in PC([0,T]; \mathbb{R}^m) \times L_\infty([0,T]; \mathbb{R}),$$
$$\dot{z}(t) = A(t) z(t) + B(t) v(t), \qquad z(0) = z_0,$$
$$\Lambda z(T) = 0,$$
$$z(\sigma_k^+) = z(\sigma_k) + \Gamma_k(z(\sigma_k)), \qquad k = 1, \dots, N,$$
$$v(t) \in U, \qquad t \in [0,T],$$
$$\phi(z(t), t) \le 0, \qquad t \in [0,T].$$
Here:
  • $z(\cdot) \in PC([0,T]; \mathbb{R}^m)$ is the piecewise absolutely continuous state trajectory.
  • $A(\cdot)$ and $B(\cdot)$ are continuous matrix-valued functions.
  • $\Gamma_k : \mathbb{R}^m \to \mathbb{R}^m$ are the impulse mappings at times $\sigma_k$.
  • $\Lambda \in \mathbb{R}^{r \times m}$ is a full-rank matrix defining the terminal constraints.
  • $\phi(z, t) \le 0$ is the state constraint, with active set
$$A = \{\, t \in [0,T] \ : \ \phi(z(t), t) = 0 \,\}.$$
Let $(z, v)$ be a feasible pair for the constraints of Problem 2.
The proof of the following theorem proceeds in the same way as the proof of Theorem 8 in [13].
Theorem 7. Assume that conditions (a)–(e) of Theorem 1 (the Maximum Principle) hold for the pair $(z, v)$.
In addition, suppose:
I)
The linear system
$$\dot{z}(t) = A(t) z(t) + B(t) v(t)$$
is controllable.
II)
There exists a control $\tilde{v} \in L_\infty([0,T]; \mathbb{R})$ such that
$$\tilde{v}(t) \in \operatorname{int}(U), \qquad t \in [0,T].$$
III)
Let $\tilde{z}$ denote the corresponding trajectory of the linear system associated with $\tilde{v}$. Then
$$\tilde{z}(T) = z_1 \qquad \text{and} \qquad \phi(\tilde{z}(t), t) < 0, \quad t \in [0,T].$$
IV)
The functions $F_0(\cdot, \cdot)$, $L(\cdot, \cdot, t)$, and $\phi(\cdot, t)$ are convex in the state and control variables.
Then the pair $(z, v)$ is a global minimizer of Problem 2.
Mathematical Models
In this section we illustrate the applicability of the Maximum Principle obtained in Theorem 1 by analyzing a representative class of optimal control problems with impulsive dynamics. The following example, inspired by classical epidemic models, highlights how the abstract conditions of the theorem translate into a concrete and computable optimality system.
Optimal Vaccination Strategy in an Impulsive SIR Model
We consider a population undergoing an epidemic spread, where vaccination acts as the control strategy capable of mitigating the propagation of the infection. The state vector is
$$z(t) = \begin{pmatrix} S(t) \\ I(t) \\ R(t) \end{pmatrix},$$
where:
  • $S(t)$ is the number of susceptible individuals,
  • $I(t)$ is the number of infectious individuals,
  • $R(t)$ represents recovered individuals, permanently removed from the pool of susceptibles.
The dynamics follow the standard nonlinear SIR interactions, with the control acting on the exchange between the susceptible and infectious classes:
$$\dot{z}(t) = H(z(t), v(t), t) = \begin{pmatrix} -r\, S(t) I(t) + v(t) \\ r\, S(t) I(t) - \gamma I(t) - v(t) \\ \gamma I(t) \end{pmatrix}, \qquad t \in [0,T],$$
where $r > 0$ is the transmission parameter and $\gamma > 0$ is the recovery rate. The vaccination rate $v(t)$ acts as the control and is restricted by
$$v(t) \in [0, e], \qquad \text{a.e. } t \in [0,T].$$
Impulsive sanitary interventions are incorporated at predetermined instants $\{\sigma_k\}_{k=1}^N$:
$$z(\sigma_k^+) = z(\sigma_k) + \Gamma_k(z(\sigma_k)),$$
with
$$\Gamma_k(z) = \begin{pmatrix} -\rho_{s,k}\, S \\ -\rho_{i,k}\, I \\ \rho_{r,k} \end{pmatrix}, \qquad |\rho_{*,k}| < 1,$$
representing small abrupt changes due to external containment policies.
The cost functional penalizes both the final number of infected individuals and the cumulative vaccination effort:
$$F_0(z(0), z(T)) = \frac{\alpha}{2}\, I(T), \qquad L(z, v, t) = \frac{1}{2}\, v^2(t), \qquad \alpha > 0.$$
The optimal control problem reads:
$$\frac{\alpha}{2}\, I(T) + \int_0^T \frac{1}{2}\, v^2(t)\, dt \longrightarrow \min_{\mathrm{loc}},$$
subject to the above dynamics, impulses, the initial condition $z(0) = z_0$, and the control constraint $v(t) \in [0, e]$.
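A forward simulation makes the impulsive trajectory concrete. The sketch below integrates an uncontrolled ($v \equiv 0$) SIR system with jump corrections at the impulse instants; all parameter values are illustrative assumptions, not data from the example:

```python
import numpy as np

# Illustrative parameters (assumptions): transmission r, recovery gamma, horizon T.
r, gamma, T = 0.5, 0.2, 10.0
n = 5000
dt = T / n
sigma = [3.0, 6.0]                            # impulse instants sigma_k
rho = [(0.1, 0.2, 0.05), (0.1, 0.1, 0.05)]    # (rho_s, rho_i, rho_r) per impulse

def rhs(z):
    # Uncontrolled SIR vector field (the control v in [0, e] is omitted here).
    S, I, R = z
    return np.array([-r * S * I, r * S * I - gamma * I, gamma * I])

def simulate(with_impulses=True):
    z = np.array([0.9, 0.1, 0.0])             # initial (S, I, R) fractions
    k = 0
    for i in range(n):
        t = i * dt
        if with_impulses and k < len(sigma) and t >= sigma[k]:
            rs, ri, rr = rho[k]
            S, I, R = z
            z = z + np.array([-rs * S, -ri * I, rr])   # jump Gamma_k(z(sigma_k))
            k += 1
        z = z + dt * rhs(z)                    # forward Euler step
    return z

z_T = simulate()
```

Between impulses the flow conserves $S + I + R$, while each jump $\Gamma_k$ produces the abrupt population corrections described above.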
Adjoint equation. Since the terminal state is free, $\xi = 0$. Hence the adjoint variable $p(\cdot)$ satisfies
$$\dot{p}(t) = -H_z^\top(z(t), v(t), t)\, p(t), \qquad p(T) = -\nabla_2 F_0(z(0), z(T)) = \begin{pmatrix} 0 \\ -\alpha/2 \\ 0 \end{pmatrix}.$$
The Jacobian matrix of $H$ with respect to the state variables is
$$H_z(z, v, t) = \begin{pmatrix} -r I & -r S & 0 \\ r I & r S - \gamma & 0 \\ 0 & \gamma & 0 \end{pmatrix},$$
and therefore the adjoint dynamics become
$$\dot{p}(t) = \begin{pmatrix} r I(t) \big( p_1(t) - p_2(t) \big) \\ r S(t) \big( p_1(t) - p_2(t) \big) + \gamma \big( p_2(t) - p_3(t) \big) \\ 0 \end{pmatrix}.$$
Maximum condition. Since
$$H_v(z, v, t) = \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}, \qquad L_v(z(t), v(t), t) = v(t),$$
the maximum principle of Theorem 1 becomes:
$$\big\langle p_2(t) - p_1(t) + v(t),\ \omega - v(t) \big\rangle \ge 0, \qquad \omega \in [0, e], \ \text{a.e. } t \in [0,T].$$
This is equivalent to
$$\max_{\omega \in [0, e]} \big( p_1(t) - p_2(t) - v(t) \big)\, \omega = \big( p_1(t) - p_2(t) - v(t) \big)\, v(t).$$
Then the optimal control is given by
$$v(t) = \begin{cases} 0, & p_1(t) - p_2(t) < 0, \\ p_1(t) - p_2(t), & 0 \le p_1(t) - p_2(t) \le e, \\ e, & p_1(t) - p_2(t) > e. \end{cases}$$
Since $p_2(T) = -\alpha/2$ and $p_1(T) = 0$, we obtain $p_1(T) - p_2(T) = \alpha/2$. Thus, whenever $e < \alpha/2$, the optimal control saturates near the terminal time:
$$v(t) = e \qquad \text{for } t \approx T.$$
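The projection structure of this law can be checked pointwise: $v^*(t) = \operatorname{clip}(p_1 - p_2,\, 0,\, e)$ satisfies the variational inequality $\langle p_2 - p_1 + v^*,\ \omega - v^* \rangle \ge 0$ for every $\omega \in [0, e]$. A small numerical sketch (the sampled values of $p_1, p_2$ are hypothetical):

```python
import numpy as np

def v_opt(p1, p2, e):
    """Optimal control from the maximum condition: projection of p1 - p2 onto [0, e]."""
    return float(np.clip(p1 - p2, 0.0, e))

def vi_residual(p1, p2, e, n_grid=101):
    """Minimum over omega in [0, e] of (p2 - p1 + v*) * (omega - v*); should be >= 0."""
    v = v_opt(p1, p2, e)
    omegas = np.linspace(0.0, e, n_grid)
    return float(np.min((p2 - p1 + v) * (omegas - v)))
```

In each of the three regimes (negative switching function, interior value, saturation at $e$) the residual is nonnegative, which is exactly the pointwise maximum condition.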
Remark 3. A natural epidemiological requirement is to prevent the infectious class from exceeding the susceptible population. This motivates the state constraint
$$\phi(z(t), t) = I(t) - S(t) \le 0.$$
Under this additional restriction, the associated multiplier is a nonnegative Borel measure $\nu$ supported on the contact set $\{\, t \ : \ I(t) = S(t) \,\}$, as stated in Theorem 1, and the adjoint equation acquires an additional forcing term:
$$p(t) = \int_t^T H_z^\top(z(\tau), v(\tau), \tau)\, p(\tau)\, d\tau + \begin{pmatrix} 0 \\ -\frac{\alpha}{2} \\ 0 \end{pmatrix} - \int_t^T \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} d\nu(\tau) = \int_t^T H_z^\top(z(\tau), v(\tau), \tau)\, p(\tau)\, d\tau + \begin{pmatrix} 0 \\ -\frac{\alpha}{2} \\ 0 \end{pmatrix} + \nu([t, T]) \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}.$$
The maximum condition remains unchanged.
Optimal Control at the Onset of a New Viral Outbreak
We now examine a second epidemic model, which describes the early phase of a newly emerging viral outbreak. The model follows the classical SIR structure but incorporates vaccination as a bounded control, together with impulsive effects that reflect sudden population-level interventions. Our analysis reformulates the system within the framework of Theorem 1, allowing us to obtain the optimality system in a unified notation.
The state vector is
z ( t ) = S ( t ) I ( t ) R ( t ) ,
where S, I, and R denote susceptible, infectious, and recovered individuals, respectively. The transmission and recovery parameters are ρ > 0 and γ > 0 . The vaccination imNut v ( t ) satisfies
0 v ( t ) 1 , a . e . t [ 0 , T ] .
The controlled dynamics are
$$\dot z(t)=H(z(t),v(t),t)=\begin{pmatrix}-\rho(1-v(t))S(t)I(t)\\ \rho(1-v(t))S(t)I(t)-\gamma I(t)\\ \gamma I(t)\end{pmatrix},\qquad z(0)=z_0.$$
Impulsive interventions appear at instants $\{\sigma_k\}_{k=1}^{N}$:
$$z(\sigma_k^{+})=z(\sigma_k)+\Gamma_k(z(\sigma_k)),$$
with
$$\Gamma_k(z)=\begin{pmatrix}\rho_{s,k}\,S\\ \rho_{i,k}\,I\\ \rho_{r,k}\end{pmatrix},\qquad |\rho_{*,k}|\le 1,$$
representing small corrections from sudden policies.
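Applying such a jump is a one-line state update. In the sketch below the coefficient values are hypothetical, and the third component is treated as an additive shift, as in the formula above.

```python
def apply_impulse(z, rho_s, rho_i, rho_r):
    """State jump z(sigma_k^+) = z(sigma_k) + Gamma_k(z(sigma_k))."""
    S, I, R = z
    return (S + rho_s * S, I + rho_i * I, R + rho_r)

# Example: an intervention removing a quarter of S and half of I,
# and shifting a fixed amount 0.5 into R (illustrative values, |rho| <= 1).
z_plus = apply_impulse((100.0, 10.0, 5.0), -0.25, -0.5, 0.5)
```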
The performance index balances the reduction of susceptibles with the overall intervention cost:
$$F_0(z(0),z(T))=S(0)-S(T).$$
We therefore seek
$$S(0)-S(T)+\int_0^T L(z(t),v(t),t)\,dt\ \longrightarrow\ \min_{\mathrm{loc}}.$$
Adjoint dynamics
Since no terminal constraint of the form C z ( T ) = 0 is imposed, the adjoint terminal condition is simply
$$p(T)=-\nabla_2F_0(z(0),z(T))=-\begin{pmatrix}-1\\0\\0\end{pmatrix}=\begin{pmatrix}1\\0\\0\end{pmatrix}.$$
The Jacobian $H_z(z(t),v(t),t)$ equals
$$\begin{pmatrix}-\rho(1-v)I & -\rho(1-v)S & 0\\ \rho(1-v)I & \rho(1-v)S-\gamma & 0\\ 0 & \gamma & 0\end{pmatrix}.$$
Thus the adjoint system in Theorem 1 becomes
$$\dot p(t)=-H_z^{*}(z(t),v(t),t)\,p(t)=\begin{pmatrix}\rho(1-v)I\,(p_1-p_2)\\ \rho(1-v)S\,(p_1-p_2)+\gamma(p_2-p_3)\\ 0\end{pmatrix}.$$
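This adjoint system can be integrated backward from the terminal value $p(T)=(1,0,0)^{*}$ derived above. The sketch below freezes $S$, $I$ and $v$ (a simplification for illustration; in the actual problem they vary along the trajectory) and uses an explicit Euler sweep; all numerical values and the function names are hypothetical.

```python
def adjoint_rhs(p, S, I, v, rho, gamma):
    """Right-hand side of the adjoint ODE p' = -H_z^* p for the SIR model."""
    p1, p2, p3 = p
    return (rho * (1 - v) * I * (p1 - p2),
            rho * (1 - v) * S * (p1 - p2) + gamma * (p2 - p3),
            0.0)

def sweep_backward(pT, S, I, v, rho, gamma, T, n):
    """Euler steps from t = T down to t = 0: p(t - h) ~ p(t) - h * p'(t)."""
    h = T / n
    p = pT
    for _ in range(n):
        dp = adjoint_rhs(p, S, I, v, rho, gamma)
        p = tuple(pi - h * di for pi, di in zip(p, dp))
    return p  # approximation of p(0)

# Terminal condition p(T) = (1, 0, 0); illustrative frozen state and parameters.
p0 = sweep_backward((1.0, 0.0, 0.0), S=0.8, I=0.1, v=0.0,
                    rho=0.5, gamma=0.2, T=5.0, n=500)
```

Note that the third adjoint component stays identically zero, as the last row of the system predicts.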
Maximum condition
The gradient of H with respect to the control is
$$H_v(z,v,t)=\begin{pmatrix}\rho S I\\ -\rho S I\\ 0\end{pmatrix}.$$
Hence the Pontryagin inequality from Theorem 1 reads
$$\big\langle H_v^{*}(z,v,t)\,p(t)-L_v(z,v,t),\ \omega-v(t)\big\rangle\le 0.$$
Then, for all $\omega\in[0,1]$ we get
$$\big\langle \rho S(t)I(t)\,(p_1(t)-p_2(t))-L_v(z(t),v(t),t),\ \omega-v(t)\big\rangle\le 0,\qquad \text{a.e. } t\in[0,T].$$
This is equivalent to:
$$\max_{\omega\in[0,1]}\big(\rho S(t)I(t)(p_1(t)-p_2(t))-L_v(z(t),v(t),t)\big)\,\omega=\big(\rho S(t)I(t)(p_1(t)-p_2(t))-L_v(z(t),v(t),t)\big)\,v(t).$$
Different choices of L yield different explicit control laws:
i) If
$$L(z,v,t)=c\,v,\qquad c>0,$$
then $L_v=c$ and
$$v(t)=\begin{cases}0, & \rho S I\,(p_1-p_2)-c<0,\\ 1, & \rho S I\,(p_1-p_2)-c\ge 0.\end{cases}$$
ii) If
$$L(z,v,t)=\tfrac{1}{2}v^{2},$$
then $L_v=v$ and we obtain
$$v(t)=\begin{cases}0, & \rho S I\,(p_1-p_2)<0,\\ \rho S I\,(p_1-p_2), & 0\le \rho S I\,(p_1-p_2)\le 1,\\ 1, & \rho S I\,(p_1-p_2)>1.\end{cases}$$
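Both laws reduce to simple switching or clamping rules on the quantity $\rho S I(p_1-p_2)$. The following sketch encodes them directly; the function names and test values are ours, chosen only for illustration.

```python
def v_bang_bang(S, I, p1, p2, rho, c):
    """Case (i), L = c*v: switch on the sign of rho*S*I*(p1 - p2) - c."""
    return 1.0 if rho * S * I * (p1 - p2) - c >= 0 else 0.0

def v_clamped(S, I, p1, p2, rho):
    """Case (ii), L = v^2/2: clamp rho*S*I*(p1 - p2) to [0, 1]."""
    return min(max(rho * S * I * (p1 - p2), 0.0), 1.0)
```

Evaluating these rules pointwise along a state/adjoint pair gives the control at each time step.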
The model therefore fits naturally into the impulsive optimal control framework of Theorem 1, and the optimal strategy emerges directly from the structure of the adjoint dynamics.
A Bolza Problem in Economics
We conclude this section with a classical intertemporal consumption model, which fits naturally into the impulsive optimal control framework developed earlier. An economic agent receives an income flow r ( t ) on the interval [ 0 , T ] , and at each instant allocates a quantity v ( t ) to consumption, while the remaining wealth accumulates at a constant interest rate σ > 0 . The control is constrained by
$$0\le v(t)\le \rho,\qquad \text{a.e. } t\in[0,T].$$
The state variable $z(t)\in\mathbb{R}$ represents the wealth level and evolves according to
$$\dot z(t)=H(z(t),v(t),t)=r(t)+\sigma z(t)-v(t),\qquad z(0)=z_0.$$
The performance index is of Bolza type:
$$F_0(z(0),z(T))=0,\qquad L(z,v,t)=-e^{-\alpha t}\ln v,$$
so the objective becomes
$$\int_0^T -\,e^{-\alpha t}\ln(v(t))\,dt\ \longrightarrow\ \min,$$
equivalently,
$$\int_0^T e^{-\alpha t}\ln(v(t))\,dt\ \longrightarrow\ \max.$$
A natural feasibility requirement is that wealth remains nonnegative at the final time. This is encoded as a state constraint:
$$\phi(z(T),T)=z(T)\ge 0.$$
Thus the admissible set for z and v is
$$Q_\phi=\{(z,v): z(T)\ge 0\},\qquad U=[0,\rho].$$
Let ( z , v ) denote an optimal pair. We distinguish two structurally different cases, which correspond to the activation or nonactivation of the terminal constraint z ( T ) 0 .
a)
Suppose that $z(T)>0$. Then $Q_\phi=\{(z,v)\in E: z(T)>0\}$ is an open set, which implies that the dual cone of admissible directions reduces to the zero functional. Therefore, $\xi=0$ in Theorem 1. Hence, the adjoint equation is given by
$$\dot p(t)=-H_z^{*}(z(t),v(t),t)\,p(t)-\beta_0,\qquad p(T)=0,$$
with $\beta_0\ge 0$; that is,
$$\dot p(t)=-\sigma p(t)-\beta_0,\qquad p(T)=0.$$
Then, $p(t)=\dfrac{\beta_0}{\sigma}\big(e^{\sigma(T-t)}-1\big)$. Applying the Pontryagin maximum principle, for all $\omega\in U=[0,\rho]$, we get that
$$\Big\langle -p(t)+\frac{e^{-\alpha t}}{v(t)},\ \omega-v(t)\Big\rangle\le 0,\qquad \text{a.e. } t\in[0,T].$$
This is equivalent to:
$$\max_{\omega\in[0,\rho]}\Big(\frac{e^{-\alpha t}}{v(t)}-p(t)\Big)\omega=\Big(\frac{e^{-\alpha t}}{v(t)}-p(t)\Big)v(t).$$
Then, the optimal control is given by
$$v(t)=\begin{cases}0, & \text{if } \dfrac{e^{-\alpha t}}{v(t)}-p(t)\le 0,\\[4pt] \rho, & \text{if } \dfrac{e^{-\alpha t}}{v(t)}-p(t)>0.\end{cases}$$
Note that, if $\beta_0=0$, then $p\equiv 0$ and $v(t)=\rho$ for all $t\in[0,T]$.
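The closed form $p(t)=\frac{\beta_0}{\sigma}\big(e^{\sigma(T-t)}-1\big)$ can be sanity-checked by finite differences against $\dot p=-\sigma p-\beta_0$, $p(T)=0$. In the sketch below we write $\sigma$ for the interest rate from the dynamics; the parameter values are hypothetical.

```python
import math

def p(t, T, sigma, beta0):
    """Candidate adjoint: p(t) = (beta0 / sigma) * (exp(sigma * (T - t)) - 1)."""
    return beta0 / sigma * (math.exp(sigma * (T - t)) - 1.0)

T, sigma, beta0 = 1.0, 0.05, 0.3
t, h = 0.4, 1e-6
# Centred finite-difference approximation of p'(t).
dp = (p(t + h, T, sigma, beta0) - p(t - h, T, sigma, beta0)) / (2 * h)
residual = dp - (-sigma * p(t, T, sigma, beta0) - beta0)  # should be ~0
```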
b)
Suppose that $z(T)=0$, $\beta_0=0$, $\alpha_0=1$, $\xi=1$ and $\rho\ge 1$. Then
$$\dot p(t)=-\sigma p(t),\qquad p(T)=1,$$
and hence $p(t)=e^{\sigma(T-t)}$. The maximum condition then reads:
$$\max_{\omega\in[0,\rho]}\Big(\frac{e^{-\alpha t}}{v(t)}-e^{\sigma(T-t)}\Big)\omega=\Big(\frac{e^{-\alpha t}}{v(t)}-e^{\sigma(T-t)}\Big)v(t).$$
Then, the optimal control is given by
$$v(t)=e^{-\alpha t-\sigma(T-t)},\qquad t\in[0,T].$$
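Since $\alpha,\sigma>0$, the exponent $-\alpha t-\sigma(T-t)$ is nonpositive on $[0,T]$, so this control never exceeds $1$ and is therefore admissible under the assumption $\rho\ge 1$ made in case b). A quick numerical check, with hypothetical parameter values and $\sigma$ again denoting the interest rate:

```python
import math

def v_opt(t, T, alpha, sigma):
    """Case (b) candidate control: v(t) = exp(-alpha*t - sigma*(T - t))."""
    return math.exp(-alpha * t - sigma * (T - t))

T, alpha, sigma, rho = 1.0, 0.1, 0.05, 1.0
grid = [i * T / 100 for i in range(101)]
# The exponent is nonpositive on the whole grid, so 0 < v(t) <= rho.
admissible = all(0.0 < v_opt(t, T, alpha, sigma) <= rho for t in grid)
```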
Conclusions
In this work we have developed a comprehensive extension of Pontryagin’s maximum principle to optimal control problems that involve both impulsive actions and state constraints. The impulsive nature of the dynamics required us to work in spaces of piecewise continuous functions, where many classical tools of functional analysis and measure theory do not apply directly. By constructing an appropriate conic variational framework, based on the Dubovitskii–Milyutin theory, we were able to overcome these difficulties and produce a unified set of necessary conditions for optimality.
A key contribution of the paper is the adaptation of the classical existence theorem for nonnegative Borel measures to settings in which the underlying domain is divided by impulse times. This extension leads naturally to an adjoint relation formulated as a Stieltjes integral, providing an appropriate dual description of variational information in impulsive systems. Our results also generalize and complement fundamental ideas originally developed by I. Girsanov, showing that his framework remains applicable when extended to piecewise continuous trajectories and impulsive constraints.
The conic approach adopted here proves flexible enough to incorporate general state constraints without imposing restrictive conditions on the admissible trajectories. Thus, our formulation bridges a gap in the literature, where the combined treatment of impulses and state inequalities has received comparatively little attention. Moreover, our results extend previous studies that either excluded state constraints or relied on more rigid assumptions.
Finally, the applicability of the theory was demonstrated through illustrative models: modified SIR epidemiological systems and a classical Bolza problem in economics. These examples highlight the range and effectiveness of the proposed method, and suggest several directions for further research, including sufficiency conditions, numerical implementations, and the study of impulsive systems with more complex constraint structures.

Funding

This research was supported by Yachay Tech.

References

  1. Coronel, A.; Huancas, F.; Lozada, E.; Rojas-Medar, M. The Dubovitskii and Milyutin methodology applied to an optimal control problem originating in an ecological system. Mathematics 2021, 9, 479.
  2. Dubovitskii, A. Ya.; Milyutin, A. A. Extremum problems in the presence of restrictions. USSR Computational Mathematics and Mathematical Physics 1965, 5(3), 1–80.
  3. Girsanov, I. V. Lectures on Mathematical Theory of Extremum Problems; Springer: Berlin, 1972.
  4. Halkin, H. A satisfactory treatment of equality and operator constraints in the Dubovitskii–Milyutin optimization formalism. Journal of Optimization Theory and Applications 1970, 6(2), 138–149.
  5. Ioffe, A. D.; Tikhomirov, V. M. Theory of Extremal Problems; North-Holland: Amsterdam, 1979.
  6. Khan, A. A.; Tammer, C. Generalized Dubovitskii–Milyutin approach in set-valued optimization. Vietnam Journal of Mathematics 2012, 40(2–3), 285–304.
  7. Kolmogórov, A. N.; Fomin, S. V. Elementos de la teoría de funciones y del análisis funcional; Editorial Mir: Moscú, 1975.
  8. Lee, E. B.; Markus, L. Foundations of Optimal Control Theory; Wiley: New York, 1967.
  9. Leiva, H.; Tapia-Riera, G.; Romero-Leiton, J. P.; Duque, C. Mixed cost function and state constraints optimal control problems. AppliedMath 2025, 5, 46.
  10. Camacho, O.; Castillo, R.; Leiva, H. Optimal control governed by impulsive neutral differential equations. Results in Control and Optimization 2024, 17, 100505.
  11. Leiva, H. Pontryagin's maximum principle for optimal control problems governed by nonlinear impulsive differential equations. Journal of Mathematics and Applications 2023, 46, 111–164.
  12. Leiva, H. Pontryagin's Maximum Principle for Optimal Control Problems Governed by Integral Equations with State and Control Constraints. Symmetry 2025, 17, 288.
  13. Leiva, H.; Cabada, D.; Gallo, R. Roughness of the controllability for time-varying systems under the influence of impulses, delay, and non-local conditions. Nonautonomous Dynamical Systems 2020, 7(1), 126–139.
  14. Leung, S. F. An economic application of the Dubovitskii–Milyutin version of the maximum principle. Optimal Control Applications and Methods 2007, 28(6), 435–449.
  15. Bourdin, L.; Trélat, E. Pontryagin maximum principle for finite-dimensional nonlinear optimal control problems on time scales. SIAM Journal on Control and Optimization 2013, 51, 3781–3813.
  16. Samylovskiy, I. Time-optimal trajectories for a trolley-like system with state constraint. IOP Conference Series: Materials Science and Engineering 2020, 747(1).
  17. Smirnova, A.; Ye, X. On optimal control at the onset of a new viral outbreak. Infectious Disease Modelling 2024, 9(4), 995–1006.
  18. Sun, B.; Wu, M.-X. Optimal control of age-structured population dynamics for spread of universally fatal diseases. Applicable Analysis 2013, 92(5), 901–921.
  19. Trélat, E. Contrôle optimal: théorie et applications; Vuibert: Paris, 2012.