Optimal control problems involving impulsive dynamics and state constraints present structural difficulties that do not arise in classical smooth systems. Impulses force the state to lie in the space of piecewise continuous functions, breaking the continuity assumptions under which standard variational tools operate. In particular, the analysis of constraints along the trajectory requires extending classical results—such as the representation of dual cones by nonnegative Borel measures—to domains partitioned by the impulse instants. Consequently, the adjoint equation naturally appears as a Volterra-type integral equation with a Stieltjes term associated with such a measure.
The goal of this work is to establish a version of Pontryagin’s Maximum Principle for impulsive systems subject to state constraints. Our approach relies on the Dubovitskii–Milyutin conic framework, which provides precise approximations of the objective functional, the impulsive dynamics, and the inequality constraints. This geometric structure allows us to formulate necessary optimality conditions even in the presence of multiple sources of nonsmoothness: discontinuous trajectories, admissible cones shaped by impulses, and active state constraints.
To demonstrate the breadth of applicability of our main result, we conclude with two representative examples: a controlled epidemiological SIR model and a Bolza-type economic problem involving consumption and investment.
3. Proof of Theorem 1
Proof. Let
be the objective functional defined by
Let
where each subset of feasible pairs
is given by:
- : pairs satisfying the continuous dynamics and the impulsive conditions,
- : pairs respecting the control constraint a.e.,
- : pairs satisfying the state constraint.
Thus the optimal control problem is equivalent to
(a) Decay cone of the functional
Define the
decay cone of
at the optimal point
as
Assume temporarily that
. Its dual cone is
Thus, for every
, there exists
such that for all
:
(b) Analysis of the Constraint
We now compute the tangent cone to
at the optimal point
, where
encodes the continuous dynamics () and the impulsive relations (). Define
To apply Lyusternik’s theorem, we introduce the product space
and the space collecting the residuals of the dynamics, impulse constraints, and terminal condition:
Define the operator
by
where
Thus .
The Fréchet derivative at the nominal point
is
where
, and
To apply Theorem 5, we must check the surjectivity of
. Given arbitrary data
we must find a pair
satisfying
First solve the Volterra equation
which admits a unique solution
.
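Although the data of the Volterra equation above are abstract, the second-kind structure is what makes unique solvability natural: after discretization the system is lower triangular and can be solved by a forward march. The sketch below illustrates this with a made-up kernel and forcing term, not the ones appearing in the proof:

```python
import numpy as np

def solve_volterra_2nd_kind(K, f, t_grid):
    """Trapezoidal scheme for x(t) = f(t) + int_0^t K(t,s) x(s) ds.
    The discretized system is lower triangular, so we march forward
    in time -- this is why a second-kind Volterra equation admits a
    unique solution.  Kernel and forcing here are illustrative only."""
    n = len(t_grid)
    h = t_grid[1] - t_grid[0]          # uniform grid assumed
    x = np.empty(n)
    x[0] = f(t_grid[0])
    for i in range(1, n):
        acc = 0.5 * K(t_grid[i], t_grid[0]) * x[0]
        acc += sum(K(t_grid[i], t_grid[j]) * x[j] for j in range(1, i))
        # implicit diagonal term moved to the left-hand side
        x[i] = (f(t_grid[i]) + h * acc) / (1.0 - 0.5 * h * K(t_grid[i], t_grid[i]))
    return x

# Sanity check with a known solution: K = 1, f = 1 gives x(t) = e^t
t = np.linspace(0.0, 1.0, 201)
x = solve_volterra_2nd_kind(lambda t_, s_: 1.0, lambda t_: 1.0, t)
```

With a fine enough grid the scheme converges at second order, so `x[-1]` approximates e.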
Let the impulsive mismatches be
Assuming (as required in the theorem) that the linear variational impulsive system is controllable, there exists a control correction
such that the corresponding solution
of the linear system
with impulses
satisfies
Hence
is surjective. By Lyusternik’s theorem,
That is,
if and only if
Define the linear subspaces
Since
is a linear subspace, any functional
vanishes on all
that satisfy the linear dynamics. Then
if and only if there is
such that
Thus every dual element in is the sum of a terminal multiplier term and a functional annihilating all homogeneous state-control variations.
(c) Analysis of the Control Constraint
The set of admissible controls is
where the control constraint set
is, by hypothesis, nonempty, convex, closed, and satisfies
.
Because is convex with nonempty interior, both and are convex and closed subsets of their respective spaces, and both have nonempty interior.
Let
be the admissible cone at the optimal pair. Since the state variable
is unrestricted in
, the cone splits as
where
is the admissible cone of the control constraint.
Consequently, any dual element
acts only on the control component, i.e.,
By Theorem 4, every functional
is a
support functional of
at
. Thus,
In other words, defines the supporting hyperplane to the control constraint set at the point . This property will later yield Pontryagin's Maximum Principle.
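As a concrete finite-dimensional illustration of the support-functional property: a linear functional supports a convex set exactly at the points that maximize it over the set, which is the mechanism behind the maximum condition. The box-shaped control set below is an assumption for the illustration, not the set of the theorem:

```python
import numpy as np

def box_maximizer(lmbda, lo, hi):
    """Point of the box [lo, hi]^n at which the linear functional
    u -> <lmbda, u> attains its maximum.  A functional supports a
    convex set precisely at its maximizers, which is how the dual-cone
    element singles out the optimal control.  (The box is an
    illustrative stand-in for the abstract control set.)"""
    return np.where(np.asarray(lmbda, dtype=float) > 0, hi, lo)

lam = np.array([2.0, -1.0, 0.5])
u_star = box_maximizer(lam, 0.0, 1.0)          # -> [1., 0., 1.]

# Supporting-hyperplane check: <lam, u> <= <lam, u_star> on the whole box
rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=(1000, 3))
gap = samples @ lam - lam @ u_star             # should be <= 0 everywhere
```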
(d) Analysis of the State Constraint
We introduce the auxiliary functional
which is continuous by construction. Following the arguments of Examples 7.4–7.5 in [3], its directional derivative at
satisfies
where:
Since along the optimal process
, we have
Thus the active set of times is
Therefore, by Proposition 3, for every dual element
there exists a nonnegative Borel measure
such that
and the measure is supported on the active set:
(e) Pontryagin's Maximum Principle
It is easy to see that
are convex cones. Hence, by Theorem 3, there exist functionals
for
, not all zero, such that
The Euler–Lagrange identity (10) now becomes
where
- corresponds to the dynamics constraint,
- corresponds to the control constraint set ,
- corresponds to the path constraint ,
- is the Borel measure associated with that constraint,
- is the scalar multiplier of the cost functional,
- is the Jacobian of the terminal constraint.
For every
, the linearized system admits a solution
with
. Thus
, and therefore
Hence, the Euler–Lagrange identity reduces to
for all
.
Let p solve the adjoint integral equation:
This is a second-kind Volterra equation with unique solution
(see [7]). Multiplying both sides of this equation by
and integrating over
we obtain the following:
Given that
and
, we get:
Hence, the above becomes:
Using Stieltjes integration by parts and the fact that
, we obtain:
Then, the support functional
becomes:
Since
, for almost all
and all
, we have
To exclude the trivial case, assume
and
. Then
, hence
The adjoint equation implies
. Equation (12) gives
. Thus
, contradicting Theorem 3.
Finally, one can show that neither the temporary assumption made in step (a) nor the controllability of the linearized system is essential, which completes the proof.
Hence, the proof of Theorem 1 is complete. □
Sufficient Condition for Optimality
Under certain additional conditions, the necessary optimality condition proved in Theorem 1 (the Maximum Principle) is also sufficient. In fact, let us consider the particular case of Problem 1 in which the differential equation is linear.
Here:
- is the piecewise absolutely continuous state trajectory;
- and are continuous matrix-valued functions;
- are the impulse mappings at times ;
- is a full-rank matrix defining the terminal constraints;
- is the state constraint, compatible with the set
Let be a feasible pair satisfying (14)–(18).
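For intuition, the linear impulsive dynamics (14)–(18) can be propagated numerically. The sketch below assumes additive state jumps x(τ⁺) = x(τ⁻) + g at the impulse instants, which is only one possible form of the impulse mappings:

```python
import numpy as np

def propagate_impulsive_linear(A, B, u, x0, t_grid, jumps):
    """Forward-Euler propagation of x' = A(t)x + B(t)u(t) with additive
    state jumps x(tau+) = x(tau-) + g at the impulse instants.
    (Sketch only: the additive jump form is an assumption; the impulse
    mappings of the theorem may be more general.)"""
    x = np.asarray(x0, dtype=float).copy()
    traj = [x.copy()]
    for k in range(len(t_grid) - 1):
        t, h = t_grid[k], t_grid[k + 1] - t_grid[k]
        x = x + h * (A(t) @ x + B(t) @ u(t))
        for tau, g in jumps:                    # apply a jump if an impulse
            if t < tau <= t_grid[k + 1]:        # instant falls in this step
                x = x + g
        traj.append(x.copy())
    return np.array(traj)

# Scalar example: x' = u with u = 1, plus a jump of +0.5 at t = 0.5
t = np.linspace(0.0, 1.0, 101)
traj = propagate_impulsive_linear(
    lambda t_: np.zeros((1, 1)), lambda t_: np.eye(1),
    lambda t_: np.ones(1), [0.0], t, [(0.5, np.array([0.5]))])
```

The final state collects the integrated control effect plus the impulse, here 1.0 + 0.5.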
The proof of the following theorem proceeds in the same way as the proof of Theorem 8 in [13].
Theorem 7. Assume that the conditions of Theorem 1 (the Maximum Principle) hold for the pair .
In addition, suppose:
- I) is controllable.
- II) There exists a control such that
- III) Let denote the corresponding trajectory of the linear system associated with . Then,
- IV) The functions , , and are convex in the state and control variables.
Then the pair is a global minimizer of Problem 2.
Mathematical Models
In this section we illustrate the applicability of the Maximum Principle obtained in Theorem 1 by analyzing a representative class of optimal control problems with impulsive dynamics. The following example, inspired by classical epidemic models, highlights how the abstract conditions of the theorem translate into a concrete and computable optimality system.
Optimal Vaccination Strategy in an Impulsive SIR Model
We consider a population undergoing an epidemic spread, where vaccination acts as the control strategy capable of mitigating the propagation of the infection. The state vector is
where:
is the number of susceptible individuals,
is the number of infectious individuals,
represents recovered individuals, permanently removed from the pool of susceptibles.
The dynamics follow the standard nonlinear SIR interactions with vaccination:
where
is the transmission parameter and
is the recovery rate. The vaccination rate
acts as the control and is restricted by
Impulsive sanitary interventions are incorporated at predetermined instants
:
with
representing small abrupt changes due to external containment policies.
The cost functional penalizes both the final number of infected individuals and the cumulative vaccination effort:
The optimal control problem reads:
subject to the above dynamics, impulses, the initial condition
, and the control constraint
.
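Before deriving the optimality system, the impulsive SIR dynamics can be simulated directly. The right-hand side below is the usual "SIR with vaccination" vector field (S' = −βSI − vS, I' = βSI − γI, R' = γI + vS); the exact right-hand side of the model above may differ, and all numerical values are illustrative:

```python
import numpy as np

def simulate_sir(beta, gamma, v, x0, t_grid, impulses):
    """Euler simulation of an impulsive SIR model with vaccination rate
    v(t).  The vector field S' = -beta*S*I - v*S, I' = beta*S*I - gamma*I,
    R' = gamma*I + v*S is the common choice for 'SIR with vaccination';
    treat this as a sketch, not the paper's exact model.  `impulses`
    lists pairs (tau, (dS, dI, dR)) of additive sanitary-intervention
    jumps at predetermined instants tau."""
    S, I, R = x0
    traj = [(S, I, R)]
    for k in range(len(t_grid) - 1):
        t, h = t_grid[k], t_grid[k + 1] - t_grid[k]
        vt = v(t)
        dS = -beta * S * I - vt * S
        dI = beta * S * I - gamma * I
        dR = gamma * I + vt * S
        S, I, R = S + h * dS, I + h * dI, R + h * dR
        for tau, (jS, jI, jR) in impulses:      # abrupt containment jumps
            if t < tau <= t_grid[k + 1]:
                S, I, R = S + jS, I + jI, R + jR
        traj.append((S, I, R))
    return np.array(traj)

t = np.linspace(0.0, 30.0, 3001)
traj = simulate_sir(beta=0.5, gamma=0.2, v=lambda t_: 0.05,
                    x0=(0.99, 0.01, 0.0), t_grid=t,
                    impulses=[(10.0, (-0.05, 0.0, 0.05))])
```

Note that the vector field and the chosen impulse both conserve the total population S + I + R, which gives a quick consistency check on the simulation.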
Adjoint equation. Since the terminal state is free, we have
. Hence the adjoint variable
satisfies
The Jacobian matrix of H with respect to the state variables is
and therefore the adjoint dynamics become
Maximum condition. Since
the maximum principle of Theorem 1 becomes:
Then, the optimal control is given by
Since
and
, we obtain
. Thus, whenever
, the optimal control saturates near the terminal time:
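If the running cost is linear in the vaccination effort (consistent with the saturation behavior just noted), the pointwise maximization of the Hamiltonian yields a bang-bang law driven by a switching function built from the adjoint variables. A minimal sketch, with a hypothetical switching function standing in for the adjoint-based one:

```python
def bang_bang_control(switching, v_max):
    """Bang-bang law: with a running cost linear in the control, the
    Hamiltonian is affine in v, so its maximizer over [0, v_max] sits at
    an endpoint selected by the sign of the switching function.  The
    switching function used below is hypothetical; in the theorem it is
    built from the adjoint variables."""
    return lambda t: v_max if switching(t) > 0 else 0.0

# Hypothetical switching function that turns positive after t = 2
v_opt = bang_bang_control(lambda t: t - 2.0, v_max=0.8)
```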
Remark 3. A natural epidemiological requirement is to prevent the infectious class from exceeding the susceptible population. This motivates the state constraint
Under this additional restriction, the multiplier associated with L is a nonnegative Borel measure ν supported on the contact set, as stated in Theorem 1; the adjoint equation thus acquires an additional forcing term driven by ν.
The maximum condition remains unchanged.
Optimal Control at the Onset of a New Viral Outbreak
We now examine a second epidemic model, which describes the early phase of a newly emerging viral outbreak. The model follows the classical SIR structure but incorporates vaccination as a bounded control, together with impulsive effects that reflect sudden population-level interventions. Our analysis reformulates the system within the framework of Theorem 1, allowing us to obtain the optimality system in a unified notation.
The state vector is
where S, I, and R denote susceptible, infectious, and recovered individuals, respectively. The transmission and recovery parameters are
and
. The vaccination input
satisfies
The controlled dynamics are
Impulsive interventions appear at instants
:
with
representing small corrections from sudden policies.
The performance index balances the reduction of susceptibles with the overall intervention cost:
Adjoint dynamics
Since no terminal constraint of the form
is imposed, the adjoint terminal condition is simply
The Jacobian
equals
Thus the adjoint system in Theorem 1 becomes
Maximum condition
The gradient of H with respect to the control is
Hence the Pontryagin inequality from Theorem 1 reads
Then, for all
we get
Different choices of L yield different explicit control laws:
- i)
- ii)
If
then
and we get that
The model therefore fits naturally into the impulsive optimal control framework of Theorem 1, and the optimal strategy emerges directly from the structure of the adjoint dynamics.
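For the quadratic-cost case among the control laws above, pointwise Hamiltonian maximization produces a clipped (saturated) feedback form. A hedged sketch, where sigma stands for the adjoint-dependent coefficient of the control in the Hamiltonian and c is an illustrative cost weight:

```python
def clipped_control(sigma, c, u_max):
    """Quadratic-cost case: with a running cost L = (c/2) u^2, pointwise
    Hamiltonian maximization gives the interior candidate sigma(t)/c,
    projected onto the admissible interval [0, u_max].  Both sigma and c
    are illustrative placeholders for the adjoint-dependent quantities."""
    return lambda t: min(max(sigma(t) / c, 0.0), u_max)

# Hypothetical decreasing coefficient: saturated early, interior later,
# zero once the coefficient turns negative
u_opt = clipped_control(lambda t: 1.0 - t, c=2.0, u_max=0.4)
```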
A Bolza Problem in Economics
We conclude this section with a classical intertemporal consumption model, which fits naturally into the impulsive optimal control framework developed earlier. An economic agent receives an income flow
on the interval
, and at each instant allocates a quantity
to consumption, while the remaining wealth accumulates at a constant interest rate
. The control is constrained by
The state variable
represents the wealth level and evolves according to
The performance index is of Bolza type:
so the objective becomes
equivalently,
A natural feasibility requirement is that wealth remains nonnegative at the final time. This is encoded as a state constraint:
Thus the admissible set for z and v is
Let denote an optimal pair. We distinguish two structurally different cases, which correspond to the activation or nonactivation of the terminal constraint .
- a) Suppose that
. Then,
is an open set, which implies that the dual cone of admissible directions reduces to the zero functional. Therefore,
in Theorem 1. Hence, the adjoint equation is given by
with
, i.e.,
Then,
. Applying the Pontryagin Maximum Principle, for all
, we get that
Then, the optimal control is given by
Note that, if , then .
- b) Suppose that
. Then
and
. This is equivalent to:
Then, the optimal control is given by
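Both cases can be explored numerically by integrating the wealth dynamics under a candidate consumption policy and checking the terminal constraint z(T) ≥ 0. The additive form of the dynamics follows the verbal description above; the income and consumption profiles are illustrative:

```python
import numpy as np

def wealth_trajectory(r, income, consume, z0, t_grid):
    """Euler integration of the wealth dynamics z' = r z + w(t) - v(t),
    with w the income flow and v the chosen consumption.  The additive
    form matches the verbal description in the text; the particular
    income and consumption profiles used below are illustrative."""
    z = float(z0)
    traj = [z]
    for k in range(len(t_grid) - 1):
        t, h = t_grid[k], t_grid[k + 1] - t_grid[k]
        z = z + h * (r * z + income(t) - consume(t))
        traj.append(z)
    return np.array(traj)

# Constant income 1.0, constant consumption 0.8, interest rate 3%
t = np.linspace(0.0, 1.0, 1001)
z = wealth_trajectory(r=0.03, income=lambda t_: 1.0,
                      consume=lambda t_: 0.8, z0=0.0, t_grid=t)
feasible = z[-1] >= 0.0   # terminal state constraint z(T) >= 0
```

With these profiles the agent saves a constant net flow of 0.2, so terminal wealth is strictly positive and case a) applies.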
Conclusions
In this work we have developed a comprehensive extension of Pontryagin’s maximum principle to optimal control problems that involve both impulsive actions and state constraints. The impulsive nature of the dynamics required us to work in spaces of piecewise continuous functions, where many classical tools of functional analysis and measure theory do not apply directly. By constructing an appropriate conic variational framework, based on the Dubovitskii–Milyutin theory, we were able to overcome these difficulties and produce a unified set of necessary conditions for optimality.
A key contribution of the paper is the adaptation of the classical existence theorem for nonnegative Borel measures to settings in which the underlying domain is divided by impulse times. This extension leads naturally to an adjoint relation formulated as a Stieltjes integral, providing an appropriate dual description of variational information in impulsive systems. Our results also generalize and complement fundamental ideas originally developed by I. Girsanov, showing that his framework remains applicable when extended to piecewise continuous trajectories and impulsive constraints.
The conic approach adopted here proves flexible enough to incorporate general state constraints without imposing restrictive conditions on the admissible trajectories. Thus, our formulation bridges a gap in the literature, where the combined treatment of impulses and state inequalities has received comparatively little attention. Moreover, our results extend previous studies that either excluded state constraints or relied on more rigid assumptions.
Finally, the applicability of the theory was demonstrated through illustrative models: modified SIR epidemiological systems and the classical Bolza economic problem. These examples highlight the range and effectiveness of the proposed method, and suggest several directions for further research, including sufficiency conditions, numerical implementations, and the study of impulsive systems with more complex constraint structures.