Pontryagin’s Maximum Principle for Optimal Control Problems Governed by Integral Equations with State and Control Constraints

Preprint article; this version is not peer-reviewed. A peer-reviewed article of this preprint also exists. Submitted: 01 September 2025. Posted: 02 September 2025.
Abstract
This work presents a new characterization of the controllability of linear control systems governed by Volterra integral equations. This result, established through a novel lemma, constitutes a fundamental contribution to the theory of integral equations and opens new avenues for future research. As an application, we prove that the classical assumption regarding the controllability of the variational linear equation around an optimal pair is, in fact, superfluous. Building upon this result, we extend Pontryagin’s Maximum Principle to a broad class of optimal control problems governed by Volterra-type integral equations. The formulation incorporates general constraints on both control and state variables, including fixed terminal constraints and time-dependent state restrictions. The cost functional consists of a terminal term and an integral term that may depend on the state. Using the Dubovitskii–Milyutin framework, we construct conic approximations to the cost functional, the integral dynamics, and the constraints, deriving necessary optimality conditions under minimal regularity assumptions. Two types of optimality system are established: one involving a classical adjoint equation when only terminal constraints are imposed, and another involving a Stieltjes integral with respect to a non-negative Borel measure in the presence of time-dependent state constraints. In the particular case where the Volterra system reduces to a differential system, our results recover the classical Pontryagin Maximum Principle. To illustrate the practical implications of our findings, we present an example related to the optimal control of an epidemic SIR model.

1. Introduction

In recent decades, a considerable body of work has been devoted to the study of optimal control problems governed by integral equations, both of Fredholm and Volterra types. Classical results have addressed linear and nonlinear settings, with various forms of cost functionals and state-control constraints (see, for instance, [1,2,3,9,16,20]). Specific applications have arisen in population dynamics, viscoelastic systems, and epidemic models, often assuming either linear dynamics or simplified forms of the kernel. However, most of these studies rely on structural assumptions or do not fully leverage the general framework provided by Dubovitskii–Milyutin theory.
This work begins with a new lemma that characterizes the controllability of linear control systems governed by Volterra integral equations. This result, which we regard as significant in its own right, represents a novel contribution to the theory of integral equations. Furthermore, it allows us to eliminate the common assumption regarding the controllability of the variational linear equation around an optimal pair (4).
Several recent contributions have established maximum principles for control problems involving Volterra-type integral equations. In [17], systems with unilateral constraints are studied, with emphasis on existence results in linear–quadratic frameworks. In [4], nonlinear Volterra equations are examined through discrete-time approximations and modified Hamiltonian techniques. A maximum principle for systems with singular kernels and terminal state constraints, incorporating fractional Caputo dynamics, is presented in [18]. A rigorous analysis based on Dubovitskii–Milyutin theory under smooth nonlinearities and mixed control constraints is provided in [6]. A numerical collocation method employing Genocchi polynomials for weakly singular kernels is developed in [7].
While each of these works provides valuable insights, most are limited by structural assumptions on the kernel, simplified dynamics, or restrictive cost functionals. In contrast, our work establishes a general maximum principle for nonlinear Volterra systems subject to both terminal and time-dependent state constraints. Building upon our controllability result, we derive necessary optimality conditions by means of conic approximations within the Dubovitskii–Milyutin framework.
When only terminal constraints are imposed, the resulting adjoint equation takes the form of a modified Volterra integral equation. In the presence of time-dependent state constraints, the adjoint equation involves a Stieltjes integral with respect to a nonnegative Borel measure concentrated on the set of active constraints. This structure captures the additional complexity introduced by such constraints. Notably, when the Volterra kernel reduces to a differential equation, our results recover the classical Pontryagin Maximum Principle. Thus, our findings not only unify but also extend the existing theory for optimal control problems governed by integral and differential equation systems.
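As a quick sketch of this reduction (under the assumption, made here only for illustration, that the kernel does not depend on its first argument), suppose $K(t,s,x,u) = f(s,x,u)$ and $p(t) \equiv x_0$. Differentiating the Volterra equation then recovers a classical control system:

```latex
% Reduction of the Volterra dynamics to a differential system
% (illustrative; assumes K(t,s,x,u) = f(s,x,u) and p(t) \equiv x_0).
x(t) = x_0 + \int_0^t f(s, x(s), u(s))\,ds
\quad\Longrightarrow\quad
\dot{x}(t) = f(t, x(t), u(t)), \qquad x(0) = x_0.
```

In this case the partial derivatives $K_{x,t}$ and $K_{u,t}$ vanish identically, so the integral terms of the adjoint equation disappear and the optimality system collapses to the classical costate equation and maximum condition.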
The Dubovitskii–Milyutin approach has proven effective in a wide range of optimal control problems, as demonstrated in works such as [5,11,12,13]. In particular, [13] explores impulsive control problems in the absence of state constraints.

2. Setting of the Problem and the Main Results

With this background, we now introduce the optimal control problem addressed in this work. Our objective is to derive necessary conditions of optimality, in the spirit of Pontryagin’s Maximum Principle, for a system whose dynamics are governed by a nonlinear Volterra integral equation, subject to general constraints on both control and state variables, including fixed terminal constraints and time-dependent state restrictions. The cost functional consists of a terminal term and an integral term that may depend on the state. The general formulation is as follows:
Problem 2.1.
$$h(x(T)) + \int_0^T \varphi(x(t),u(t),t)\,dt \to \operatorname{loc\,min},$$
$$(x,u) \in E = C_n[0,T] \times L_r^{\infty}[0,T],$$
$$x(t) = p(t) + \int_0^t K(t,s,x(s),u(s))\,ds,$$
$$x(T) = x_1; \qquad x_0,\, x_1 \in \mathbb{R}^n,$$
$$u(t) \in \Omega \quad \text{for almost every } t \in [0,T],$$
$$g(x(t),t) \le 0, \quad t \in [0,T].$$
Here, $x(t) \in \mathbb{R}^n$ denotes the state vector and $u(t) \in \mathbb{R}^r$ the control vector, which is measurable and takes values in a given admissible control set $\Omega \subseteq \mathbb{R}^r$. The pair $(x,u)$ belongs to the product space $E = C_n[0,T] \times L_r^{\infty}[0,T]$, with $C_n[0,T] = C([0,T];\mathbb{R}^n)$ and $L_r^{\infty}[0,T] = L^{\infty}([0,T];\mathbb{R}^r)$; that is, the state function $x(\cdot)$ is continuous and the control $u(\cdot)$ is essentially bounded. The function $h:\mathbb{R}^n \to \mathbb{R}$ represents the terminal cost, $g:\mathbb{R}^n \times [0,T] \to \mathbb{R}$ the state restriction function, and $\varphi:\mathbb{R}^n \times \mathbb{R}^r \times [0,T] \to \mathbb{R}$ the running cost, which depends on the state $x(t)$, the control $u(t)$, and time $t$. The vector-valued function $p:[0,T] \to \mathbb{R}^n$ is given, and the nonlinear kernel $K:[0,T] \times [0,T] \times \mathbb{R}^n \times \mathbb{R}^r \to \mathbb{R}^n$ defines the Volterra-type dynamics.
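For intuition about the state equation alone (independently of the optimality analysis), it can be solved numerically by successive approximations. The following sketch, with a hypothetical scalar kernel and a fixed control, is illustrative only and is not part of the paper's results:

```python
import numpy as np

# Picard (successive-approximation) solver for the scalar Volterra equation
#   x(t) = p(t) + \int_0^t K(t, s, x(s), u(s)) ds
# on a uniform grid, using the trapezoidal rule for the integral.
def solve_volterra(p, K, u, T=1.0, n=200, iters=60, tol=1e-12):
    t = np.linspace(0.0, T, n + 1)
    x = p(t).copy()                       # initial guess: x_0 = p
    for _ in range(iters):
        x_new = np.empty_like(x)
        for i, ti in enumerate(t):
            s = t[: i + 1]
            x_new[i] = p(ti) + np.trapz(K(ti, s, x[: i + 1], u(s)), s)
        if np.max(np.abs(x_new - x)) < tol:
            return t, x_new
        x = x_new
    return t, x

# Toy data: K(t,s,x,u) = -x + u, p = 1, u = 0, i.e. x'(t) = -x, x(0) = 1,
# whose exact solution is x(t) = exp(-t).
p = lambda t: np.ones_like(np.asarray(t, dtype=float))
u = lambda s: np.zeros_like(np.asarray(s, dtype=float))
K = lambda t, s, x, v: -x + v
t, x = solve_volterra(p, K, u)
print(abs(x[-1] - np.exp(-1.0)) < 1e-3)   # True
```

On $[0,T]$ with a Lipschitz kernel, the Picard iteration is a contraction after finitely many steps, which is why a modest number of sweeps suffices here.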
We now state the hypotheses of this work.
Hypotheses
a)
$\varphi$, $K$, $h$, $\varphi_x$, $\varphi_u$, $K_x$, $K_u$, $K_{x,t}$, $K_{u,t}$, $h'$ are continuous functions on compact subsets of $\mathbb{R}^n \times \mathbb{R}^r \times [0,T]$, and $p \in L_n^{\infty}[0,T]$ with $p(0) \in \mathbb{R}^n$.
b)
$\Omega \subseteq \mathbb{R}^r$ is convex and closed, with $\operatorname{int}(\Omega) \neq \emptyset$.
c)
$g$ is a continuous function with $g(x_0,0) < 0$ and $g(x_1,T) < 0$, where $x_0, x_1 \in \mathbb{R}^n$ are fixed. Moreover, $g$ has a continuous derivative $g_x$ with respect to its first variable, such that $g_x(x,t) \neq 0$ whenever $g(x,t) = 0$.
We can now establish the more general version of the maximum principle. In this case, the corresponding adjoint equation involves a Stieltjes integral with respect to a non-negative Borel measure. This extension accounts for the effect of the state constraints at each time instant along the trajectory.
Theorem 1.
Let us suppose that conditions a)–c) are fulfilled for $(x^*,u^*) \in E$, a solution of Problem 2.1.

$\alpha$) Then there exists a non-negative Borel measure $\mu$ on $[0,T]$ with support in
$$R := \{t \in [0,T] \,/\, g(x^*(t),t) = 0\}.$$

$\beta$) There exist $\lambda_0 \ge 0$ and a function $\psi \in L_1^n[0,T] = L_1([0,T];\mathbb{R}^n)$ such that $\lambda_0$ and $\psi$ are not simultaneously zero, and $\psi$ is a solution of the integral equation
$$\psi(t) = \int_t^T \Big[\int_\tau^T K_{x,\tau}^*(s,\tau,x^*(\tau),u^*(\tau))\psi(s)\,ds - \lambda_0 \varphi_x(x^*(\tau),u^*(\tau),\tau)\Big]\,d\tau + \int_t^T K_x^*(\tau,\tau,x^*(\tau),u^*(\tau))\psi(\tau)\,d\tau - \int_t^T g_x^*(x^*(\tau),\tau)\,d\mu(\tau) + a - \lambda_0 \nabla h(x^*(T))$$
for some $a \in \mathbb{R}^n$; moreover, for all $U \in \Omega$ and almost all $t \in [0,T]$,
$$\Big\langle -\int_t^T K_{u,t}^*(s,t,x^*(t),u^*(t))\psi(s)\,ds - K_u^*(t,t,x^*(t),u^*(t))\psi(t) + \lambda_0 \varphi_u(x^*(t),u^*(t),t),\; U - u^*(t)\Big\rangle \ge 0.$$
Remark 1.
If condition (6) is replaced by the following terminal state constraint
$$g(x(T),T) \le 0,$$
then a simpler version of the maximum principle can be established. In this case, the corresponding adjoint equation does not involve a Stieltjes integral with respect to a nonnegative Borel measure, as the state constraint only acts at the terminal time.
Theorem 2.
Let us suppose that conditions a), b) and (10) are fulfilled, and suppose that $g$ has a continuous derivative $g_x$ with respect to its first variable such that $g_x(x^*(T),T) \neq 0$. Let $(x^*,u^*) \in E$ be a solution of Problem 2.1.

Then there exist $\lambda_0 \ge 0$, $\mu_0 \ge 0$ and a function $\psi \in C^1([0,T];\mathbb{R}^n)$ such that $\lambda_0$ and $\psi$ are not simultaneously zero, and $\psi$ is a solution of the following system:
$$\dot{\psi}(t) = -\int_t^T K_{x,t}^*(s,t,x^*(t),u^*(t))\psi(s)\,ds + \lambda_0 \varphi_x(x^*(t),u^*(t),t) - K_x^*(t,t,x^*(t),u^*(t))\psi(t) + \mu_0\, g_x^*(x^*(T),T),$$
$$\psi(T) = a - \lambda_0 \nabla h(x^*(T)),$$
for some $a \in \mathbb{R}^n$; and also, for all $U \in \Omega$ and almost all $t \in [0,T]$,
$$\Big\langle -\int_t^T K_{u,t}^*(s,t,x^*(t),u^*(t))\psi(s)\,ds - K_u^*(t,t,x^*(t),u^*(t))\psi(t) + \lambda_0 \varphi_u(x^*(t),u^*(t),t),\; U - u^*(t)\Big\rangle \ge 0.$$
Corollary 1.
Let us suppose that conditions a)–c) are fulfilled for $(x^*,u^*) \in E$, a solution of Problem 2.1, and that
$$K_{x,t}(s,t,x^*(t),u^*(t)) = K_{u,t}(s,t,x^*(t),u^*(t)) = 0, \qquad s,t \in [0,T].$$

$\alpha$) Then there exists a non-negative Borel measure $\mu$ on $[0,T]$ with support in
$$R := \{t \in [0,T] \,/\, g(x^*(t),t) = 0\}.$$

$\beta$) There exist $\lambda_0 \ge 0$ and a function $\psi \in L_1^n[0,T] = L_1([0,T];\mathbb{R}^n)$ such that $\lambda_0$ and $\psi$ are not simultaneously zero, and $\psi$ is a solution of the integral equation
$$\psi(t) = \int_t^T K_x^*(\tau,\tau,x^*(\tau),u^*(\tau))\psi(\tau)\,d\tau - \lambda_0\int_t^T \varphi_x(x^*(\tau),u^*(\tau),\tau)\,d\tau - \int_t^T g_x^*(x^*(\tau),\tau)\,d\mu(\tau) + a - \lambda_0 \nabla h(x^*(T))$$
for some $a \in \mathbb{R}^n$; moreover, for all $U \in \Omega$ and almost all $t \in [0,T]$,
$$\Big\langle -K_u^*(t,t,x^*(t),u^*(t))\psi(t) + \lambda_0 \varphi_u(x^*(t),u^*(t),t),\; U - u^*(t)\Big\rangle \ge 0.$$

3. Preliminary Results

In this section, we present a lemma that characterizes the controllability of linear control systems governed by Volterra integral equations. This result is significant in its own right and represents a novel contribution to the theory of integral equations. Moreover, it allows us to prove that the commonly assumed controllability of the variational linear equation around an optimal pair is, in fact, superfluous.
We then summarize the main results of the Dubovitskii–Milyutin (DM) theory. We formulate the general optimization problem with constraints and construct approximation cones for both the objective function and the constraints. The optimality condition, expressed through the Euler–Lagrange (EL) equation, is presented in terms of the dual of these cones. The fundamental results rest on well-established principles of DM theory, with detailed proofs available in [8,11,12,13].

3.1. Controllability of Linear Volterra Equation

In this subsection, we present a lemma that characterizes the controllability of the following linear control system; as usual, the system is said to be controllable when the set of terminal states $x(T)$ reachable by controls $u \in L_r^{\infty}[0,T]$ is all of $\mathbb{R}^n$:
$$x(t) = \int_0^t [A(t,s)x(s) + B(t,s)u(s)]\,ds,$$
where $x(t) \in \mathbb{R}^n$ denotes the state variable and $u(t) \in \mathbb{R}^r$ is the control function. The matrix $A(t,s)$ is assumed to be $n \times n$, and $B(t,s)$ is assumed to be $n \times r$, with all entries belonging to $C^1(\Delta;\mathbb{R})$, where $\Delta = \{(t,s) \in [0,T]^2 : 0 \le s \le t \le T\}$. That is, $A$ and $B$ are continuously differentiable in both variables over the triangular domain $\Delta$.
Lemma 1.
System (16) is not controllable if, and only if, there exists a function $\psi \in C([0,T];\mathbb{R}^n)$ with $\psi(T) \neq 0$ such that
$$\dot{\psi}(t) = -\int_t^T A_t^*(s,t)\psi(s)\,ds - A^*(t,t)\psi(t), \qquad t \in [0,T],$$
and
$$\int_t^T B_t^*(s,t)\psi(s)\,ds + B^*(t,t)\psi(t) = 0$$
for all $t \in [0,T]$.
Proof. 
Assume that system (16) is not controllable. Then the set $D \subseteq \mathbb{R}^n$ defined by
$$D = \{x(T) : x \text{ is a solution of (16) for some } u \in L_r^{\infty}[0,T]\}$$
is a linear subspace such that $D \neq \mathbb{R}^n$. Hence, there exists $a \in \mathbb{R}^n$ with $a \neq 0$ and
$$\langle a, x(T)\rangle = 0, \qquad x(T) \in D.$$
Let $\psi$ be a solution of the following system:
$$\dot{\psi}(t) = -\int_t^T A_t^*(s,t)\psi(s)\,ds - A^*(t,t)\psi(t), \quad t \in [0,T]; \qquad \psi(T) = a.$$
Now, let $u \in L_r^{\infty}[0,T]$ be arbitrary, and let $x(t)$ be the corresponding solution of (16). Notice that
$$\int_0^T \langle x(t), \dot{\psi}(t)\rangle\,dt = -\int_0^T \Big\langle x(t), \int_t^T A_t^*(s,t)\psi(s)\,ds\Big\rangle dt - \int_0^T \langle x(t), A^*(t,t)\psi(t)\rangle\,dt = -\int_0^T \int_t^T \langle A_t(s,t)x(t), \psi(s)\rangle\,ds\,dt - \int_0^T \langle A(t,t)x(t), \psi(t)\rangle\,dt = -\int_0^T \Big\langle \int_0^t A_t(t,s)x(s)\,ds, \psi(t)\Big\rangle dt - \int_0^T \langle A(t,t)x(t), \psi(t)\rangle\,dt = -\int_0^T \Big\langle \frac{d}{dt}\int_0^t A(t,s)x(s)\,ds, \psi(t)\Big\rangle dt.$$
On the other hand,
$$\int_0^T \langle \dot{x}(t), \psi(t)\rangle\,dt = \int_0^T \Big\langle \frac{d}{dt}\int_0^t A(t,s)x(s)\,ds, \psi(t)\Big\rangle dt + \int_0^T \Big\langle \frac{d}{dt}\int_0^t B(t,s)u(s)\,ds, \psi(t)\Big\rangle dt.$$
Now, integrating by parts and using $\psi(T) = a$, $x(0) = 0$ and $\langle a, x(T)\rangle = 0$, we get that
$$\int_0^T \langle x(t), \dot{\psi}(t)\rangle\,dt = \langle \psi(T), x(T)\rangle - \langle \psi(0), x(0)\rangle - \int_0^T \langle \psi(t), \dot{x}(t)\rangle\,dt = -\int_0^T \Big\langle \frac{d}{dt}\int_0^t A(t,s)x(s)\,ds, \psi(t)\Big\rangle dt - \int_0^T \Big\langle \frac{d}{dt}\int_0^t B(t,s)u(s)\,ds, \psi(t)\Big\rangle dt.$$
Hence, comparing the two expressions for $\int_0^T \langle x(t),\dot{\psi}(t)\rangle\,dt$, we have that
$$0 = \int_0^T \Big\langle \frac{d}{dt}\int_0^t B(t,s)u(s)\,ds, \psi(t)\Big\rangle dt = \int_0^T \Big\langle \int_0^t B_t(t,s)u(s)\,ds, \psi(t)\Big\rangle dt + \int_0^T \langle B(t,t)u(t), \psi(t)\rangle\,dt = \int_0^T \Big\langle \int_t^T B_t^*(s,t)\psi(s)\,ds, u(t)\Big\rangle dt + \int_0^T \langle B^*(t,t)\psi(t), u(t)\rangle\,dt = \int_0^T \Big\langle \int_t^T B_t^*(s,t)\psi(s)\,ds + B^*(t,t)\psi(t), u(t)\Big\rangle dt,$$
for all $u \in L_r^{\infty}[0,T]$. Therefore, we conclude that
$$\int_t^T B_t^*(s,t)\psi(s)\,ds + B^*(t,t)\psi(t) = 0, \qquad t \in [0,T].$$
Conversely, assume that there exists such a $\psi$ with $\psi(T) \neq 0$ satisfying (17) and (18). Let $x$ be any solution of (16); then, by the same computations as above and integration by parts, we obtain that
$$0 = \int_0^T \Big\langle \int_t^T B_t^*(s,t)\psi(s)\,ds + B^*(t,t)\psi(t), u(t)\Big\rangle dt = \int_0^T \Big\langle \frac{d}{dt}\int_0^t B(t,s)u(s)\,ds, \psi(t)\Big\rangle dt = \int_0^T \langle \dot{x}(t), \psi(t)\rangle\,dt - \int_0^T \Big\langle \frac{d}{dt}\int_0^t A(t,s)x(s)\,ds, \psi(t)\Big\rangle dt = \int_0^T \langle \dot{x}(t), \psi(t)\rangle\,dt + \int_0^T \langle x(t), \dot{\psi}(t)\rangle\,dt = \langle \psi(T), x(T)\rangle - \langle \psi(0), x(0)\rangle = \langle \psi(T), x(T)\rangle.$$
Then $D$ is orthogonal to the nonzero vector $\psi(T)$; therefore $D \neq \mathbb{R}^n$, and system (16) is not controllable. □
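Lemma 1 can be probed numerically: after discretization, the reachable set $D$ becomes the range of a finite-dimensional linear map $u \mapsto x(T)$, and controllability corresponds to that map having full rank. The following sketch, with hypothetical matrices $A$ and $B$ chosen purely for illustration, is not from the paper:

```python
import numpy as np

# Discretize x(t) = \int_0^t [A(t,s)x(s) + B(t,s)u(s)] ds on a uniform grid
# and assemble the linear map u -> x(T); full rank <=> discretized controllability.
def terminal_map(A, B, dim, m, T=1.0, n=50):
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    cols = []
    for j0 in range(n + 1):            # canonical (impulse) controls
        for k in range(m):
            u = np.zeros((n + 1, m)); u[j0, k] = 1.0
            x = np.zeros((n + 1, dim))
            for i in range(1, n + 1):  # left rectangle rule on [0, t_i]
                acc = np.zeros(dim)
                for j in range(i):
                    acc += A(t[i], t[j]) @ x[j] + B(t[i], t[j]) @ u[j]
                x[i] = h * acc
            cols.append(x[n])
    return np.array(cols).T            # dim x (m*(n+1)) matrix; its range approximates D

# Example: A = 0 and B(t,s) = [[1],[s]]; the reachable directions [1, s]^T
# span R^2, so the discretized system is controllable.
A = lambda t, s: np.zeros((2, 2))
B = lambda t, s: np.array([[1.0], [s]])
M = terminal_map(A, B, dim=2, m=1)
print(np.linalg.matrix_rank(M, tol=1e-8))   # 2
```

A rank below `dim` would instead signal a nonzero vector orthogonal to all reachable terminal states, which is exactly the vector $\psi(T)$ produced by the lemma.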

3.2. Cones, Dual Cones and DM Theorem

Let E be a locally convex topological vector space, and E * its dual (continuous linear functionals on E).
A subset $\Lambda \subseteq E$ is a cone with apex at zero if it satisfies $\beta\Lambda = \Lambda$ for all $\beta > 0$.
The dual cone of $\Lambda$ is
$$\Lambda^+ = \{L \in E^* : L(x) \ge 0,\ \forall x \in \Lambda\}.$$
If $\{\Lambda_\alpha \subseteq E : \alpha \in A\}$ are convex, $w$-closed cones, then
$$\Big(\bigcap_{\alpha \in A} \Lambda_\alpha\Big)^+ = \overline{\sum_{\alpha \in A} \Lambda_\alpha^+},$$
where the closure is taken in the w * -topology. (See [8]).
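As a concrete finite-dimensional illustration of these definitions (a toy example, not from the paper), take $E = \mathbb{R}^2$ and $\Lambda$ the non-negative orthant; a functional $L(x) = \langle c, x\rangle$ belongs to $\Lambda^+$ exactly when $c \ge 0$ componentwise, which can be checked by sampling:

```python
import numpy as np

# Λ = nonnegative orthant in R^2; L(x) = <c, x> lies in the dual cone
# Λ+ = {L : L(x) >= 0 for all x in Λ} iff both components of c are >= 0.
def in_dual_cone(c, samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    xs = rng.random((samples, 2))      # random points of Λ
    return bool(np.all(xs @ c >= 0))   # sampled check of L(x) >= 0 on Λ

print(in_dual_cone(np.array([1.0, 2.0])))    # True
print(in_dual_cone(np.array([1.0, -2.0])))   # False
```

In the infinite-dimensional setting of the paper, the role of the coefficient vector $c$ is played by measures and integral kernels, but the defining inequality $L(x) \ge 0$ on the cone is the same.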
Lemma 2.
Let $\Lambda_1, \Lambda_2, \ldots, \Lambda_n \subseteq E$ be open convex cones such that
$$\bigcap_{i=1}^n \Lambda_i \neq \emptyset.$$
Then
$$\Big(\bigcap_{i=1}^n \Lambda_i\Big)^+ = \sum_{i=1}^n \Lambda_i^+.$$
Lemma 3.
(DM). Let $\Lambda_1, \Lambda_2, \ldots, \Lambda_{n+1} \subseteq E$ be convex cones with apex at zero, with $\Lambda_1, \Lambda_2, \ldots, \Lambda_n$ open. Then
$$\bigcap_{i=1}^{n+1} \Lambda_i = \emptyset$$
if and only if there are $L_i \in \Lambda_i^+$ $(i = 1,2,\ldots,n+1)$, not all zero, such that
$$L_1 + L_2 + \cdots + L_n + L_{n+1} = 0.$$

3.3. The Abstract Euler–Lagrange (EL) Equation

Let us consider a function $L: E \to \mathbb{R}$ and sets $\mho_i \subseteq E$ $(i = 1,2,\ldots,n+1)$ such that $\operatorname{int}(\mho_i) \neq \emptyset$ $(i = 1,2,\ldots,n)$. Set the following problem:
$$L(x) \to \operatorname{loc\,min}, \qquad x \in \bigcap_{i=1}^{n+1} \mho_i.$$
Remark 2.
The sets $\mho_i$ $(i = 1,2,\ldots,n)$ are usually given by restrictions of inequality type, and $\mho_{n+1}$ by restrictions of equality type; in general, $\operatorname{int}(\mho_{n+1}) = \emptyset$.
Theorem 3.
(DM). Let $x^*$ be a solution of problem (20), and assume that:

a) $\Lambda_0$ is the decay cone of $L$ at $x^*$.

b) $\Lambda_i$ are the admissible cones to $\mho_i$ at $x^*$ $(i = 1,2,\ldots,n)$.

c) $\Lambda_{n+1}$ is the tangent cone to $\mho_{n+1}$ at $x^*$.

If $\Lambda_i$ $(i = 0,1,2,\ldots,n+1)$ are convex, then there exist functionals $L_i \in \Lambda_i^+$ $(i = 0,1,\ldots,n+1)$, not all zero, such that
$$L_0 + L_1 + \cdots + L_{n+1} = 0.$$
Remark 3.
Equation (21) is called the Abstract Euler–Lagrange (EL) Equation. The definitions of the decay, admissible and tangent cones can be found in [8].
Plan to apply the DM Theorem to specific problems:
i)
Identify the decay vectors.
ii)
Identify the admissible vectors.
iii)
Identify the tangent vectors.
iv)
Construct the dual cones.

3.4. Important Results

Now, we explicitly compute the cones of decay, admissible and tangent vectors in some cases. For $x^* \in E$, we denote by $\Lambda_d = d(L, x^*)$ the cone of decay directions.
Theorem 4.
(See [8, pg 48]). If $E$ is a Banach space and $L$ is Fréchet differentiable at $x^* \in E$, then
$$d(L, x^*) = \{h \in E \,/\, L'(x^*)h < 0\},$$
where $L'(x^*)$ is the Fréchet derivative of $L$ at $x^*$.
Theorem 5.
(See [8]). Let $L: E \to \mathbb{R}$ be a continuous and convex function on a topological linear space $E$, and let $x^* \in E$. Then $L$ has a directional derivative in every direction at $x^*$, and we also have that:

a)
$$L'(x^*, h) = \inf\Big\{\frac{L(x^* + \varepsilon h) - L(x^*)}{\varepsilon} \,/\, \varepsilon \in \mathbb{R}_+\Big\},$$

b)
$$d(L, x^*) = \{h \in E \,/\, L'(x^*, h) < 0\}.$$
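A one-dimensional sanity check of formula a) (a toy computation, not from the paper): for the convex function $L(x) = |x|$ the difference quotient is non-decreasing in $\varepsilon$, so the minimum over a list of step sizes recovers the directional derivative, and formula b) then identifies the decay directions:

```python
# Directional derivative of a convex function via the infimum of difference
# quotients (formula a)), approximated over a fixed list of step sizes.
def dir_deriv(L, x, h, eps_list=(2.0, 1.0, 0.5, 0.25, 1e-3, 1e-6)):
    return min((L(x + e * h) - L(x)) / e for e in eps_list)

L = abs                                              # convex on R
assert abs(dir_deriv(L, 1.0, -1.0) + 1.0) < 1e-9     # L'(1; -1) = -1 < 0: a decay direction
assert abs(dir_deriv(L, 0.0,  1.0) - 1.0) < 1e-9     # kink at 0: L'(0; 1) = 1
assert abs(dir_deriv(L, 0.0, -1.0) - 1.0) < 1e-9     # L'(0; -1) = 1: no decay direction at 0
```

The last two assertions show why the decay cone can be empty at a nonsmooth minimizer: at $x^* = 0$ every direction has a nonnegative directional derivative.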
The cone of admissible directions to $\mho$ at $x^*$ will be denoted by $\Lambda_a = a(\mho, x^*)$.
Theorem 6.
(See [8]). If $\mho$ is an arbitrary convex set with $\operatorname{int}(\mho) \neq \emptyset$, then
$$\Lambda_a = \{h \in E \,/\, h = \lambda(x - x^*),\ x \in \operatorname{int}(\mho),\ \lambda \in \mathbb{R}_+\}.$$
The set of all tangent vectors to $\mho$ at $x^*$ is a cone with apex at zero, which will be denoted by $T := T(\mho, x^*)$ and called the tangent cone.
In this part, we highlight the Lyusternik Theorem, a potent tool for computing the cone of tangent vectors. This theorem is crucial to our analysis, as it facilitates the determination of the tangent cone at a given point.
Theorem 7.
(Lyusternik; see [8]). Let $E_1, E_2$ be Banach spaces, and suppose that:

a) $x^* \in E_1$ and $P: E_1 \to E_2$ is Fréchet differentiable at $x^*$.

b) $P'(x^*): E_1 \to E_2$ is surjective.

Then the cone of tangent vectors $T$ to the set $\mho := \{x \in E_1 \,/\, P(x) = 0\}$ at the point $x^*$ is given by
$$T = \operatorname{Ker} P'(x^*).$$
The proof of this theorem, which is not simple, can be found in [10] and [19].
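A finite-dimensional illustration of Lyusternik's theorem (a toy example, not from the paper): for $P(x) = \|x\|^2 - 1$ on $E_1 = \mathbb{R}^2$, the derivative at $x^* = (1,0)$ is surjective onto $\mathbb{R}$, so the tangent cone to the unit circle at $x^*$ is $\operatorname{Ker} P'(x^*)$, the vertical line. Along tangent directions the constraint is violated only to second order:

```python
import numpy as np

# P(x) = |x|^2 - 1; the constraint set {P = 0} is the unit circle.
P  = lambda x: float(x @ x - 1.0)
dP = lambda x: 2.0 * x                   # Frechet derivative (gradient)

x_star = np.array([1.0, 0.0])
h_tan  = np.array([0.0, 1.0])            # in Ker P'(x*): a tangent direction
h_out  = np.array([1.0, 0.0])            # not in the kernel

assert dP(x_star) @ h_tan == 0.0         # P'(x*) h_tan = 0
eps = 1e-4
print(abs(P(x_star + eps * h_tan)))      # O(eps^2): stays near the circle
print(abs(P(x_star + eps * h_out)))      # O(eps):   leaves the circle
```

This is the same mechanism used in the proof of the main theorem: the tangent cone to the Volterra constraint set is computed as the kernel of the derivative of the constraint operator, provided the linearized (variational) system is surjective, i.e. controllable.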

4. Proof of the Main Theorem 1

Proof. 
Let $\bar{L}: E \to \mathbb{R}$ be the function defined by
$$\bar{L}(x,u) = h(x(T)) + \int_0^T \varphi(x(t),u(t),t)\,dt,$$
and let $\mho := \mho_1 \cap \mho_2 \cap \mho_3$, where $\mho_1, \mho_2, \mho_3$ are given by the sets of pairs $(x,u) \in E$ which satisfy (3)-(4), (5) and (6), respectively.
Then, Problem 2.1 is equivalent to
$$\bar{L}(x,u) \to \operatorname{loc\,min}, \qquad (x,u) \in \mho.$$
a)
Analysis of the function $\bar{L}$.

Let $\Lambda_0 := d(\bar{L}, (x^*,u^*))$ represent the decay cone of $\bar{L}$ at the point $(x^*,u^*)$. According to Theorem 4,
$$\Lambda_0 = \{(x,u) \in E : \bar{L}'(x^*,u^*)(x,u) < 0\}.$$
If $\Lambda_0 \neq \emptyset$, it follows that
$$\Lambda_0^+ = \{-\lambda_0 \bar{L}'(x^*,u^*) : \lambda_0 \ge 0\}.$$
By Example 9.2 in [8, pg 62], the derivative $\bar{L}'(x^*,u^*)$ is expressed as
$$\bar{L}'(x^*,u^*)(x,u) = h'(x^*(T))x(T) + \int_0^T [\varphi_x(x^*,u^*,t)x(t) + \varphi_u(x^*,u^*,t)u(t)]\,dt, \qquad (x,u) \in E.$$
Thus, for any $L_0 \in \Lambda_0^+$, there exists $\lambda_0 \ge 0$ such that
$$L_0(x,u) = -\lambda_0\Big(h'(x^*(T))x(T) + \int_0^T [\varphi_x(x^*,u^*,t)x(t) + \varphi_u(x^*,u^*,t)u(t)]\,dt\Big), \qquad (x,u) \in E.$$
b)
Analysis of the restriction $\mho_1$.

Let us determine the tangent cone to $\mho_1$ at the point $(x^*,u^*)$:
$$\Lambda_1 := T(\mho_1, (x^*,u^*)).$$
Assume, for a moment, that the variational integral equation
$$y(t) = \int_0^t [K_x(t,s,x^*(s),u^*(s))y(s) + K_u(t,s,x^*(s),u^*(s))u(s)]\,ds$$
is controllable. Then, applying Theorem 7, we get that (see [5,8,12,13])
$$\Lambda_1 = \Big\{(x,u) \in E \,/\, x(t) = \int_0^t [K_x(t,s,x^*(s),u^*(s))x(s) + K_u(t,s,x^*(s),u^*(s))u(s)]\,ds,\ x(T) = 0,\ t \in [0,T]\Big\}.$$
Now, let us calculate $\Lambda_1^+$. To do so, we shall consider the following linear spaces:
$$L_1 := \Big\{(x,u) \in E \,/\, x(t) = \int_0^t [K_x(t,s,x^*(s),u^*(s))x(s) + K_u(t,s,x^*(s),u^*(s))u(s)]\,ds\Big\}, \qquad L_2 := \{(x,u) \in E \,/\, x(T) = 0\}.$$
Hence
$$\Lambda_1 = L_1 \cap L_2.$$
Then, by Proposition 2.40 from [13], we have that $L_{12} \in L_2^+$ if, and only if, there exists $a \in \mathbb{R}^n$ such that
$$L_{12}(x,u) = \langle a, x(T)\rangle, \qquad (x,u) \in E.$$
Moreover, by Lemma 2.5 from [13], it follows that $L_1^+ + L_2^+$ is $w^*$-closed; then, by the properties of cones, we obtain that
$$\Lambda_1^+ = L_1^+ + L_2^+.$$
Therefore, $L_1 \in \Lambda_1^+$ if, and only if, $L_1 = L_{11} + L_{12}$ with $L_{11} \in L_1^+$ and $L_{12} \in L_2^+$.
c)
Analysis of the restriction $\mho_2$.

Define the set
$$\tilde{\mho}_2 := \{u \in L_r^{\infty}[0,T] : u(t) \in \Omega,\ t \in [0,T],\ \text{a.e.}\}.$$
Then $\mho_2 = C_n[0,T] \times \tilde{\mho}_2$. Given that $\Omega$ is convex and closed with $\operatorname{int}(\Omega) \neq \emptyset$, the following hold:

1. $\tilde{\mho}_2$ and $\mho_2$ are closed and convex.
2. $\operatorname{int}(\tilde{\mho}_2) \neq \emptyset$ and $\operatorname{int}(\mho_2) \neq \emptyset$.

Let $\Lambda_2$ be the admissible cone to $\mho_2$ at $(x^*,u^*) \in \mho_2$. Then
$$\Lambda_2 = C_n[0,T] \times \tilde{\Lambda}_2,$$
where $\tilde{\Lambda}_2$ is the admissible cone to $\tilde{\mho}_2$ at $u^* \in \tilde{\mho}_2$.
For any $L_2 \in \Lambda_2^+$, there exists $\tilde{L}_2 \in \tilde{\Lambda}_2^+$ such that
$$L_2 = (0, \tilde{L}_2).$$
By Theorem 6, $\tilde{L}_2$ is a support of $\tilde{\mho}_2$ at $u^*$.
d)
Analysis of the restriction $\mho_3$.

Let us define the function
$$l: C_n[0,T] \to \mathbb{R}, \qquad l(x) = \max_{t \in [0,T]} g(x(t),t).$$
Then, by Example 7.5 from [8, pg 52], we have that
$$l'(x^*; h) = \max_{t \in R} g_x(x^*(t),t)h(t), \qquad h \in C_n[0,T],$$
where
$$R = \{t \in [0,T] \,/\, g(x^*(t),t) = l(x^*)\}.$$
Then we obtain that
$$R = \{t \in [0,T] \,/\, g(x^*(t),t) = 0\}.$$
On the other hand,
$$\Lambda_3 = a(\mho_3, (x^*,u^*)) \supseteq d(l, (x^*,u^*)) =: \Lambda_d.$$
But, by Theorem 5, we obtain that
$$\Lambda_d = \{(h,u) \in E \,/\, l'(x^*, h) < 0\} = \{(h,u) \in E \,/\, g_x(x^*(t),t)h(t) < 0\ (t \in R)\}.$$
From (23), we have that
$$\Lambda_3^+ \subseteq \Lambda_d^+.$$
Then, by Example 10.3 [8, pg 73], we have that for every $L \in \Lambda_3^+$ there is a non-negative Borel measure $\mu$ on $[0,T]$ such that
$$L(x,u) = -\int_0^T g_x(x^*(t),t)x(t)\,d\mu(t), \qquad (x,u) \in E,$$
and $\mu$ has support in
$$R = \{t \in [0,T] \,/\, g(x^*(t),t) = 0\}.$$
e)
Euler–Lagrange equation.

It is evident that $\Lambda_0, \Lambda_1, \Lambda_2, \Lambda_3$ are convex cones. Then, by Theorem 3, there exist functionals $L_i \in \Lambda_i^+$ $(i = 0,1,2,3)$, not all zero, such that
$$L_0 + L_1 + L_2 + L_3 = 0.$$
Equation (24) can be expressed as follows:
$$-\lambda_0 h'(x^*(T))x(T) - \lambda_0 \int_0^T [\varphi_x(x^*,u^*,t)x(t) + \varphi_u(x^*,u^*,t)u(t)]\,dt + L_{11}(x,u) + \langle a, x(T)\rangle + L_3(x,u) + \tilde{L}_2(u) = 0, \qquad (x,u) \in E.$$
Now, for every $u \in L_r^{\infty}[0,T]$ there exists $x \in C_n[0,T]$, a solution of the variational equation with $x(0) = 0$. Then $(x,u) \in L_1$, and therefore $L_{11}(x,u) = 0$. Hence, equation (24) can be written as follows:
$$\tilde{L}_2(u) = \lambda_0 \int_0^T \varphi_x(x^*,u^*,t)x(t)\,dt + \lambda_0 \int_0^T \varphi_u(x^*,u^*,t)u(t)\,dt - \langle a, x(T)\rangle + \lambda_0 h'(x^*(T))x(T) + \int_0^T g_x(x^*(t),t)x(t)\,d\mu(t).$$
Let $\psi$ be the solution of the adjoint equation (14), which means
$$\psi(t) = \int_t^T \Big[\int_\tau^T K_{x,\tau}^*(s,\tau,x^*(\tau),u^*(\tau))\psi(s)\,ds - \lambda_0 \varphi_x(x^*(\tau),u^*(\tau),\tau)\Big]\,d\tau + \int_t^T K_x^*(\tau,\tau,x^*(\tau),u^*(\tau))\psi(\tau)\,d\tau - \int_t^T g_x^*(x^*(\tau),\tau)\,d\mu(\tau) + a - \lambda_0 \nabla h(x^*(T)).$$
This is a Volterra-type integral equation of the second kind, which admits a unique solution $\psi \in L_1^n[0,T]$ (see [15, pg 519]). Multiplying both sides of the foregoing equation by $\dot{x}$ and integrating from $0$ to $T$, we obtain
$$\int_0^T \langle \psi(t), \dot{x}(t)\rangle\,dt = \int_0^T \Big\langle \int_t^T \int_\tau^T K_{x,\tau}^*(s,\tau,x^*(\tau),u^*(\tau))\psi(s)\,ds\,d\tau, \dot{x}(t)\Big\rangle dt + \int_0^T \Big\langle \int_t^T K_x^*(\tau,\tau,x^*(\tau),u^*(\tau))\psi(\tau)\,d\tau, \dot{x}(t)\Big\rangle dt - \lambda_0 \int_0^T \Big\langle \int_t^T \varphi_x(x^*(\tau),u^*(\tau),\tau)\,d\tau, \dot{x}(t)\Big\rangle dt - \int_0^T \Big\langle \int_t^T g_x^*(x^*(\tau),\tau)\,d\mu(\tau), \dot{x}(t)\Big\rangle dt + \langle a, x(T)\rangle - \lambda_0 h'(x^*(T))x(T).$$
The first term of (26) can be rewritten, after integration by parts and changing the order of integration, as
$$\int_0^T \Big\langle \int_t^T \int_\tau^T K_{x,\tau}^*(s,\tau,x^*(\tau),u^*(\tau))\psi(s)\,ds\,d\tau, \dot{x}(t)\Big\rangle dt = \int_0^T \Big\langle \int_t^T K_{x,t}^*(s,t,x^*(t),u^*(t))\psi(s)\,ds, x(t)\Big\rangle dt = \int_0^T \Big\langle \int_0^t K_{x,t}(t,\tau,x^*(\tau),u^*(\tau))x(\tau)\,d\tau, \psi(t)\Big\rangle dt.$$
Similarly, the second term of (26) can be written as
$$\int_0^T \Big\langle \int_t^T K_x^*(\tau,\tau,x^*(\tau),u^*(\tau))\psi(\tau)\,d\tau, \dot{x}(t)\Big\rangle dt = \int_0^T \langle K_x^*(t,t,x^*(t),u^*(t))\psi(t), x(t)\rangle\,dt = \int_0^T \langle \psi(t), K_x(t,t,x^*(t),u^*(t))x(t)\rangle\,dt.$$
Likewise, we have that for the third term
$$\lambda_0 \int_0^T \Big\langle \int_t^T \varphi_x(x^*(\tau),u^*(\tau),\tau)\,d\tau, \dot{x}(t)\Big\rangle dt = \lambda_0 \int_0^T \varphi_x(x^*(t),u^*(t),t)x(t)\,dt.$$
The fourth term on the right-hand side can be simplified by using integration by parts for the Stieltjes integral, together with the conditions $g(x_0,0) < 0$ and $g(x_1,T) < 0$. Specifically, since $0 \notin R$ and $T \notin R$, it follows that $\mu(\{0\}) = \mu(\{T\}) = 0$, and hence
$$\int_0^T \Big\langle \dot{x}(t), \int_t^T g_x^*(x^*(\tau),\tau)\,d\mu(\tau)\Big\rangle dt = \int_0^T g_x(x^*(t),t)x(t)\,d\mu(t).$$
Then, by (27), (28), (29) and (30), we have that
$$\int_0^T \langle \psi(t), \dot{x}(t)\rangle\,dt = \int_0^T \Big\langle \int_0^t K_{x,t}(t,\tau,x^*(\tau),u^*(\tau))x(\tau)\,d\tau, \psi(t)\Big\rangle dt + \int_0^T \langle \psi(t), K_x(t,t,x^*(t),u^*(t))x(t)\rangle\,dt - \lambda_0 \int_0^T \varphi_x(x^*(t),u^*(t),t)x(t)\,dt - \int_0^T g_x(x^*(t),t)x(t)\,d\mu(t) + \langle a, x(T)\rangle - \lambda_0 h'(x^*(T))x(T).$$
Then, rewriting the last equality, we have
$$\lambda_0 \int_0^T \varphi_x(x^*(t),u^*(t),t)x(t)\,dt + \int_0^T g_x(x^*(t),t)x(t)\,d\mu(t) - \langle a, x(T)\rangle + \lambda_0 h'(x^*(T))x(T) = \int_0^T \Big\langle \int_0^t K_{x,t}(t,\tau,x^*(\tau),u^*(\tau))x(\tau)\,d\tau, \psi(t)\Big\rangle dt + \int_0^T \langle \psi(t), K_x(t,t,x^*(t),u^*(t))x(t)\rangle\,dt - \int_0^T \langle \psi(t), \dot{x}(t)\rangle\,dt.$$
Since $x(t)$ satisfies the variational equation (4), differentiating it gives $\dot{x}(t) = K_x(t,t,x^*(t),u^*(t))x(t) + \int_0^t K_{x,t}(t,\tau,x^*(\tau),u^*(\tau))x(\tau)\,d\tau + K_u(t,t,x^*(t),u^*(t))u(t) + \int_0^t K_{u,t}(t,\tau,x^*(\tau),u^*(\tau))u(\tau)\,d\tau$, and therefore
$$-\int_0^T \langle \psi(t), \dot{x}(t)\rangle\,dt + \int_0^T \Big\langle \int_0^t K_{x,t}(t,\tau,x^*(\tau),u^*(\tau))x(\tau)\,d\tau, \psi(t)\Big\rangle dt + \int_0^T \langle \psi(t), K_x(t,t,x^*(t),u^*(t))x(t)\rangle\,dt = -\int_0^T \Big\langle \int_0^t K_{u,t}(t,\tau,x^*(\tau),u^*(\tau))u(\tau)\,d\tau, \psi(t)\Big\rangle dt - \int_0^T \langle \psi(t), K_u(t,t,x^*(t),u^*(t))u(t)\rangle\,dt.$$
Then, we have
$$\lambda_0 \int_0^T \varphi_x(x^*(t),u^*(t),t)x(t)\,dt + \int_0^T g_x(x^*(t),t)x(t)\,d\mu(t) - \langle a, x(T)\rangle + \lambda_0 h'(x^*(T))x(T) = -\int_0^T \Big\langle \int_0^t K_{u,t}(t,\tau,x^*(\tau),u^*(\tau))u(\tau)\,d\tau, \psi(t)\Big\rangle dt - \int_0^T \langle \psi(t), K_u(t,t,x^*(t),u^*(t))u(t)\rangle\,dt = -\int_0^T \Big\langle \int_t^T K_{u,t}^*(s,t,x^*(t),u^*(t))\psi(s)\,ds, u(t)\Big\rangle dt - \int_0^T \langle K_u^*(t,t,x^*(t),u^*(t))\psi(t), u(t)\rangle\,dt = -\int_0^T \Big\langle \int_t^T K_{u,t}^*(s,t,x^*(t),u^*(t))\psi(s)\,ds + K_u^*(t,t,x^*(t),u^*(t))\psi(t), u(t)\Big\rangle dt.$$
Then, by equation (25), we obtain that
$$\tilde{L}_2(u) = \int_0^T \Big\langle -\int_t^T K_{u,t}^*(s,t,x^*(t),u^*(t))\psi(s)\,ds - K_u^*(t,t,x^*(t),u^*(t))\psi(t) + \lambda_0 \varphi_u(x^*(t),u^*(t),t), u(t)\Big\rangle dt,$$
for all $u \in L_r^{\infty}[0,T]$. Since $\tilde{L}_2$ is a support of $\tilde{\mho}_2$ at the point $u^* \in \tilde{\mho}_2$, from Example 10.5 [8, pg 76] it follows that
$$\Big\langle -\int_t^T K_{u,t}^*(s,t,x^*(t),u^*(t))\psi(s)\,ds - K_u^*(t,t,x^*(t),u^*(t))\psi(t) + \lambda_0 \varphi_u(x^*(t),u^*(t),t), U - u^*(t)\Big\rangle \ge 0,$$
for all U Ω and almost all t [ 0 , T ] .
Now we will demonstrate that the case $\lambda_0 = 0$, $\psi = 0$ is not admissible. In fact, if $\lambda_0 = 0$ and $\psi = 0$, then $\psi(T) = a = 0$. Thus
$$L_{12}(x,u) = \langle a, x(T)\rangle = 0, \qquad (x,u) \in E,$$
which implies that $L_{12} \equiv 0$. Hence, from equation (14) and the fact that $\lambda_0 = 0$, we obtain
$$\int_t^T g_x^*(x^*(\tau),\tau)\,d\mu(\tau) = 0, \qquad t \in [0,T],$$
which leads to $L_3 = 0$. Additionally, from equation (25) we have that $\tilde{L}_2(u) = 0$ for all $u \in L_r^{\infty}[0,T]$; then, from equation (24), it follows that $L_{11} = 0$, whence
$$L_1 = L_{11} + L_{12} = 0.$$
Thus all the functionals vanish, which contradicts the statement of Theorem 3.
At this stage, we have used two supplementary assumptions: first, we assumed that $\Lambda_0 \neq \emptyset$; second, we supposed that the system
$$x(t) = \int_0^t \big[K_x(t,s,x^*(s),u^*(s))x(s) + K_u(t,s,x^*(s),u^*(s))u(s)\big]\,ds$$
is controllable.
We will now show that these assumptions are unnecessary. Indeed, if $\Lambda_0 = \emptyset$, then, by the definition of $\Lambda_0$ and the linearity of $\bar{L}'(x^*,u^*)$,
$$h'(x^*(T))x(T) + \int_0^T [\varphi_x(x^*(t),u^*(t),t)x(t) + \varphi_u(x^*(t),u^*(t),t)u(t)]\,dt = 0, \qquad (x,u) \in E.$$
Let us take $\lambda_0 = 1$, $\mu = 0$, $a = 0$, and let $\psi$ be the corresponding solution of the adjoint equation (14). Then, from equation (32), we get
$$h'(x^*(T))x(T) + \int_0^T \varphi_x(x^*(t),u^*(t),t)x(t)\,dt = -\int_0^T \Big\langle \int_t^T K_{u,t}^*(s,t,x^*(t),u^*(t))\psi(s)\,ds + K_u^*(t,t,x^*(t),u^*(t))\psi(t), u(t)\Big\rangle dt,$$
for all $(x,u)$ such that $x$ is a solution of equation (4). Therefore, for all $u \in L_r^{\infty}[0,T]$, we have that
$$\int_0^T \Big\langle -\int_t^T K_{u,t}^*(s,t,x^*(t),u^*(t))\psi(s)\,ds - K_u^*(t,t,x^*(t),u^*(t))\psi(t) + \varphi_u(x^*(t),u^*(t),t), u(t)\Big\rangle dt = 0,$$
which leads to the conclusion that
$$\Big\langle -\int_t^T K_{u,t}^*(s,t,x^*(t),u^*(t))\psi(s)\,ds - K_u^*(t,t,x^*(t),u^*(t))\psi(t) + \varphi_u(x^*(t),u^*(t),t), U - u^*(t)\Big\rangle = 0,$$
for all $U \in \Omega$ and almost every $t \in [0,T]$; in particular, the maximum condition holds with $\lambda_0 = 1$.
Now, assume that system (4) is not controllable. Then, by Lemma 1, there exists a nontrivial function $\psi \in C_n[0,T]$ satisfying
$$\dot{\psi}(t) = -\int_t^T K_{x,t}^*(s,t,x^*(t),u^*(t))\psi(s)\,ds - K_x^*(t,t,x^*(t),u^*(t))\psi(t),$$
such that, for all $t \in [0,T]$, it holds that
$$\int_t^T K_{u,t}^*(s,t,x^*(t),u^*(t))\psi(s)\,ds + K_u^*(t,t,x^*(t),u^*(t))\psi(t) = 0.$$
By taking $\lambda_0 = 0$, $\mu = 0$ and $a = \psi(T)$, we find that $\psi$ solves equation (14), and therefore
$$\Big\langle -\int_t^T K_{u,t}^*(s,t,x^*(t),u^*(t))\psi(s)\,ds - K_u^*(t,t,x^*(t),u^*(t))\psi(t), U - u^*(t)\Big\rangle \ge 0,$$
for all $U \in \Omega$ and almost every $t \in [0,T]$.
Therefore, the proof of Theorem 1 is now complete. □

5. Sufficient Condition for Optimality

The necessary condition for optimality presented in Theorem 1 (the Maximum Principle) is, under certain additional conditions, also sufficient. Specifically, let us examine the particular case of Problem 2.1 in which the integral equation is linear.
Problem 5.1.
$$h(x(T)) + \int_0^T \varphi(x(t),u(t),t)\,dt \to \operatorname{loc\,min},$$
$$(x,u) \in E := C_n[0,T] \times L_r^{\infty}[0,T],$$
$$x(t) = \int_0^t [A(t,s)x(s) + B(t,s)u(s)]\,ds,$$
$$x(T) = x_1; \qquad x_1 \in \mathbb{R}^n,$$
$$u(t) \in \Omega, \quad t \in [0,T],$$
$$g(x(t),t) \le 0, \quad t \in [0,T],$$
where $x(t) \in \mathbb{R}^n$ denotes the state variable and $u(t) \in \mathbb{R}^r$ is the control function. The matrix $A(t,s)$ is assumed to be $n \times n$, and $B(t,s)$ is assumed to be $n \times r$, with all entries belonging to $C^1(\Delta;\mathbb{R})$, where $\Delta = \{(t,s) \in [0,T]^2 : 0 \le s \le t \le T\}$; that is, $A$ and $B$ are continuously differentiable in both variables over the triangular domain $\Delta$. Let $(x^*,u^*) \in E$ satisfy conditions (37)–(40).
Theorem 8.
Let us suppose that conditions a)–c), $\alpha$) and $\beta$) from Theorem 1 are satisfied.
Furthermore, let us assume the following:

A) The system (37) is controllable.

B) There exists $\tilde{u} \in L_r^{\infty}[0,T]$ such that $\tilde{u}(t) \in \operatorname{int}(\Omega)$, $t \in [0,T]$.

C) The solution $\tilde{x}$ of equation (37) corresponding to $\tilde{u}$ satisfies $\tilde{x}(T) = x_1$ and $g(\tilde{x}(t),t) < 0$, $t \in [0,T]$.

D) $h$, $g$, $\varphi$ are convex functions in their first two variables.

Then $(x^*,u^*)$ is a global solution of Problem 5.1.
Proof. 
Let us define the function $\bar{L}: E \to \mathbb{R}$ as follows:
$$\bar{L}(x,u) = h(x(T)) + \int_0^T \varphi(x(t),u(t),t)\,dt.$$
Let us consider $\mho := \mho_1 \cap \mho_2 \cap \mho_3$, where $\mho_1$ is given by (37)–(38), $\mho_2$ by (39), and $\mho_3$ by (40), as in Theorem 1.
Then, Problem 5.1 is equivalent to:
L ¯ ( x , u ) l o c min , ( x , u ) .
It is clear that the $\mho_i$ $(i = 1,2,3)$ are convex sets; from condition (D) we have that $\bar{L}$ is convex, and from conditions (B)–(C) that $(\tilde{x},\tilde{u}) \in \operatorname{int}(\mho_2) \cap \operatorname{int}(\mho_3) \cap \mho_1$.
Thus, by Theorem 2.17 from [13] it follows that:
$(x^*,u^*)$ is a minimum point of $\bar{L}$ on $\mho$ if, and only if, there are $L_i \in \Lambda_i^+$ $(i = 0,1,2,3)$, not all zero, such that
$$L_0 + L_1 + L_2 + L_3 = 0.$$
Here, the $\Lambda_i$ $(i = 0,1,2,3)$ are the approximation cones defined as in Theorem 1.
Let $\Lambda_3 = a(\mho_3, (x^*,u^*))$ be the admissible cone to $\mho_3$ at the point $(x^*,u^*)$. Then
$$\Lambda_3 \supseteq d(l, (x^*,u^*)) =: \Lambda_d,$$
where
$$\Lambda_d := \{(x,u) \in E \,/\, g_x(x^*(t),t)x(t) < 0\ (t \in R)\}$$
and
$$R := \{t \in [0,T] \,/\, g(x^*(t),t) = 0\}.$$
Then, by the properties of dual cones, we have that
$$\Lambda_3^+ \subseteq \Lambda_d^+.$$
So each $L \in \Lambda_d^+$ has the following form:
$$L(x,u) = -\int_0^T g_x(x^*(t),t)x(t)\,d\mu(t), \qquad (x,u) \in E.$$
Here $\mu$ is a non-negative Borel measure with support in $R$.
Now, assume that the Maximum Principle of Theorem 1 is satisfied. This implies the existence of $\lambda_0 \ge 0$, $a \in \mathbb{R}^n$, and a non-negative Borel measure $\mu$ supported on $R$. Additionally, there exists a function $\psi \in L_1^n[0,T]$ that satisfies the following integral equation:
$$\psi(t) = \int_t^T \Big[\int_\tau^T A_t^*(s,\tau)\,\psi(s)\,ds + \lambda_0\,\varphi_x(x(\tau),u(\tau),\tau)\Big]\,d\tau + \int_t^T A^*(\tau,\tau)\,\psi(\tau)\,d\tau + \int_t^T g_x^*(x(\tau),\tau)\,d\mu(\tau) - a + \lambda_0\,\nabla h(x(T)), \tag{41}$$
where $\lambda_0$ and $\psi$ are not simultaneously zero, and for every $U \in \Omega$ and almost every $t \in [0,T]$ the following holds:
$$\Big\langle \int_t^T B_t^*(s,t)\,\psi(s)\,ds + B^*(t,t)\,\psi(t) + \lambda_0\,\varphi_u(x(t),u(t),t),\; U - u(t)\Big\rangle \ge 0. \tag{42}$$
To prove the theorem, it is enough to show that there exist $L_i \in K_i^+$ $(i = 0,1,2,3)$, not all zero, such that $L_0 + L_1 + L_2 + L_3 = 0$. To this end, we define the set
$$\mho_2 = \{u \in L^{\infty}_r[0,T] : u(t) \in \Omega,\ t \in [0,T] \ \text{a.e.}\}$$
and the functionals
$$L_2 : L^{\infty}_r[0,T] \to \mathbb{R}, \qquad L_2(u) = \int_0^T \Big\langle \int_t^T B_t^*(s,t)\,\psi(s)\,ds + B^*(t,t)\,\psi(t) + \lambda_0\,\varphi_u(x(t),u(t),t),\; u(t)\Big\rangle\,dt, \qquad \mathbf{L}_2 := (0, L_2).$$
Then, from (42), we get that
$$L_2(\bar u) \ge L_2(u) \qquad (\bar u \in \mho_2).$$
So, $L_2$ is a support of $\mho_2$ at $u$. Hence $\mathbf{L}_2 = (0, L_2) \in K_2^+$. Let us define the functional $L_{11} : E \to \mathbb{R}$ as follows
$$L_{11}(x,u) = \lambda_0\,\nabla h(x(T))\,x(T) + \lambda_0 \int_0^T \big[\varphi_x(x(t),u(t),t)\,x(t) + \varphi_u(x(t),u(t),t)\,u(t)\big]\,dt - L_2(u) - \langle a, x(T)\rangle + \int_0^T g_x(x(t),t)\,x(t)\,d\mu(t).$$
Now, we will see that $L_{11} \in \mathcal{L}_1^+$, where
$$\mathcal{L}_1 = \Big\{ (x,u) \in E : x(t) = \int_0^t \big[A(t,\tau)\,x(\tau) + B(t,\tau)\,u(\tau)\big]\,d\tau \ (t \in [0,T]) \Big\},$$
as in Theorem 1. In fact, suppose that $(x,u) \in \mathcal{L}_1$; then, multiplying both sides of equation (41) by $\dot x$ and integrating from $0$ to $T$, we obtain
$$\lambda_0 \int_0^T \varphi_x(x(t),u(t),t)\,x(t)\,dt + \int_0^T g_x(x(t),t)\,x(t)\,d\mu(t) - \langle a, x(T)\rangle + \lambda_0\,\nabla h(x(T))\,x(T) = \int_0^T \Big\langle \int_t^T B_t^*(s,t)\,\psi(s)\,ds + B^*(t,t)\,\psi(t),\; u(t)\Big\rangle\,dt.$$
Then
$$L_{11}(x,u) = -L_2(u) + \int_0^T \Big\langle \int_t^T B_t^*(s,t)\,\psi(s)\,ds + B^*(t,t)\,\psi(t),\; u(t)\Big\rangle\,dt + \lambda_0 \int_0^T \varphi_u(x(t),u(t),t)\,u(t)\,dt.$$
Therefore,
$$L_{11}(x,u) = -L_2(u) + L_2(u) = 0.$$
Thus $L_{11} \in \mathcal{L}_1^+$.
Next, we introduce the functionals $L_0, L_1, L_3 : E \to \mathbb{R}$ by
$$L_0(x,u) = -\lambda_0 \Big\{ \nabla h(x(T))\,x(T) + \int_0^T \big[\varphi_x(x(t),u(t),t)\,x(t) + \varphi_u(x(t),u(t),t)\,u(t)\big]\,dt \Big\},$$
$$L_1(x,u) = L_{11}(x,u) + \langle a, x(T)\rangle, \qquad L_3(x,u) = -\int_0^T g_x(x(t),t)\,x(t)\,d\mu(t).$$
Then $L_0 \in K_0^+$, $L_1 \in K_1^+$, $\mathbf{L}_2 \in K_2^+$, $L_3 \in K_3^+$, and also
$$L_0 + L_1 + \mathbf{L}_2 + L_3 = 0,$$
and not all of these functionals are zero, because by hypothesis $\lambda_0$ and $\psi$ are not simultaneously zero.
From the convexity conditions, the global minimality of $(x,u)$ follows. $\square$

6. A Mathematical Model

In this section, we present an important real-life model to which our results can be applied; in the next section, we present an open problem.

6.1. Optimal Control in Epidemic: SIR Model

Let us consider a population affected by an epidemic that we seek to stop through vaccination. We define the following variables:
  • I ( t ) , the number of infectious individuals who can infect others;
  • S ( t ) , the number of non-infectious individuals, but who are susceptible to infection;
  • R ( t ) , the number of individuals who have recovered and are no longer part of the susceptible population.
Let $r > 0$ be the infection rate, $\gamma > 0$ the recovery rate, and $u(t)$ the vaccination rate. The control $u(t)$ satisfies the constraint $0 \le u(t) \le e$. The optimal control problem for the SIR model is given by
$$\frac{\alpha}{2}\, I(T) + \int_0^T \frac{1}{2}\, u(t)^2\,dt \longrightarrow \operatorname{loc\,min},$$
$$S(t) = S_0(t) + \int_0^t \big(-r\,S(\tau)\,I(\tau) + u(\tau)\big)\,d\tau,$$
$$I(t) = I_0(t) + \int_0^t \big(r\,S(\tau)\,I(\tau) - \gamma I(\tau) - u(\tau)\big)\,d\tau,$$
$$R(t) = R_0(t) + \int_0^t \gamma I(\tau)\,d\tau,$$
$$g(x(t),t) \le 0, \quad t \in [0,T],$$
$$u(t) \in [0,e], \quad t \in [0,T] \ \text{a.e.},$$
where α > 0 . In contrast with classical formulations using differential equations, we model the epidemic process through an integral representation. This approach naturally accounts for the cumulative nature of epidemic transitions and facilitates numerical implementation.
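As a quick numerical illustration (not taken from the paper), the integral SIR dynamics above can be simulated by discretizing the Volterra equations with a left-rectangle quadrature rule. All parameter values below are hypothetical, and the sign convention for $u$ follows the system as written above.

```python
# Illustrative sketch, assumed parameter values: forward simulation of the
# integral SIR system by a left-rectangle (Euler) quadrature rule.

def simulate_sir(r=0.5, gamma=0.2, u_rate=0.0, T=30.0, n=3000,
                 S0=0.99, I0=0.01, R0=0.0):
    """Solve S, I, R from x(t) = x0 + int_0^t K(t,s,x(s),u(s)) ds."""
    dt = T / n
    S, I, R = [S0], [I0], [R0]
    for k in range(n):
        s, i = S[-1], I[-1]
        u = u_rate                       # constant control value in [0, e]
        # kernel K for the integral SIR dynamics (paper's sign convention)
        dS = -r * s * i + u
        dI = r * s * i - gamma * i - u
        dR = gamma * i
        S.append(s + dt * dS)
        I.append(i + dt * dI)
        R.append(R[-1] + dt * dR)
    return S, I, R
```

With `u_rate = 0` the total population $S + I + R$ is exactly conserved by this scheme, which provides a quick sanity check on the discretization.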
The functions S 0 ( t ) , I 0 ( t ) , and R 0 ( t ) should not be interpreted merely as constant initial values. Instead, they may represent either:
  • pre-processed or interpolated empirical data on the susceptible, infected, and recovered subpopulations prior to the onset of control,
  • accumulated trajectories obtained from previous simulation stages or observational time series, or
  • non-constant baseline trajectories reflecting uncertainty or memory effects.
In epidemic models such as the SIR system, various forms of time-dependent state constraints of the form
$$g(S(t), I(t), R(t), t) \le 0$$
may arise to reflect practical limitations, policy objectives, or ethical considerations. Below, we describe several representative types of such constraints:
  • Healthcare Capacity Constraints:
    - Limit on active infections: $g(S,I,R,t) = I(t) - I_{\max} \le 0$
    - Maximum proportion infected: $g(S,I,R,t) = \dfrac{I(t)}{S(t)+I(t)+R(t)} - \alpha \le 0$
  • Containment or Safety Constraints:
    - Infected individuals less than susceptible: $g(S,I,R,t) = I(t) - S(t) \le 0$
    - Infection rate constraint: $g(S,I,R,t) = I(t) - \delta \le 0$
  • Final-Time Constraints:
    - Minimum number of recovered individuals at final time: $g(S,I,R,T) = R_{\text{target}} - R(T) \le 0$
    - Near eradication at the final time: $g(S,I,R,T) = I(T) - \varepsilon \le 0$
  • Mixed Variable Constraints:
    - Combined restriction on susceptible and recovered populations: $g(S,I,R,t) = S(t) - R(t) - \beta I(t) \le 0$
    - Preservation of a minimum level of susceptibles: $g(S,I,R,t) = \sigma - S(t) \le 0$
  • Cost or Resource-Based Constraints:
    - Indirect economic/social cost limitation: $g(S,I,R,t) = c_1 S(t) + c_2 I(t) + c_3 R(t) - B \le 0$
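To make the sign convention concrete, constraint types of this form can be encoded programmatically as functions $g$, with feasibility meaning $g \le 0$ along the trajectory. The helper names and parameter values below are illustrative assumptions, not from the paper:

```python
# Hypothetical helpers expressing two of the constraint types above,
# with the convention that feasibility means g(...) <= 0.

def g_capacity(S, I, R, t, I_max=0.1):
    """Healthcare capacity: active infections stay below I_max."""
    return I - I_max

def g_min_susceptibles(S, I, R, t, sigma=0.2):
    """Preserve a minimum level sigma of susceptibles."""
    return sigma - S

def feasible(traj, g, **params):
    """Check g(S(t), I(t), R(t), t) <= 0 along a sampled trajectory,
    given as a list of (t, S, I, R) tuples."""
    return all(g(S, I, R, t, **params) <= 0 for (t, S, I, R) in traj)
```

A constraint is then verified pointwise on a sampled trajectory, mirroring the pointwise inequality $g(x(t),t) \le 0$ for $t \in [0,T]$.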
This formulation allows greater flexibility in modeling and aligns well with the structure of optimal control theory in Banach spaces, where integral equations provide a natural framework for weak formulations and the application of variational methods.
The goal is to determine an optimal vaccination strategy that minimizes the above cost functional over a fixed time horizon $T$.
Using the following notation, we can write this problem in the abstract formulation of Theorem 1:
$$x = \begin{pmatrix} S\\ I\\ R \end{pmatrix}, \qquad K(t,s,x,u) = \begin{pmatrix} -r\,S\,I + u\\ r\,S\,I - \gamma I - u\\ \gamma I \end{pmatrix}, \qquad h(x) = \frac{\alpha}{2}\, I \quad \text{and} \quad \varphi(x,u,t) = \frac{1}{2}\, u^2.$$
$$x(t) = p(t) + \int_0^t K(t,s,x(s),u(s))\,ds = p(t) + \int_0^t \begin{pmatrix} -r\,S(s)\,I(s) + u(s)\\ r\,S(s)\,I(s) - \gamma I(s) - u(s)\\ \gamma I(s) \end{pmatrix} ds, \qquad p(t) = \begin{pmatrix} S_0(t)\\ I_0(t)\\ R_0(t) \end{pmatrix},$$
where $(x,u) \in E = C([0,T];\mathbb{R}^3) \times L^{\infty}([0,T];\mathbb{R})$.
Therefore, the adjoint equation becomes:
$$\psi(t) = \int_t^T K_x^*(\tau,\tau,x(\tau),u(\tau))\,\psi(\tau)\,d\tau + \nabla h(x(T)) + \int_t^T g_x^*(x(\tau),\tau)\,d\mu(\tau)$$
$$= \int_t^T \begin{pmatrix} -r\,I(\tau) & r\,I(\tau) & 0\\ -r\,S(\tau) & r\,S(\tau) - \gamma & \gamma\\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} \psi_1(\tau)\\ \psi_2(\tau)\\ \psi_3(\tau) \end{pmatrix} d\tau + \begin{pmatrix} 0\\ \alpha/2\\ 0 \end{pmatrix} + \int_t^T g_x^*(x(\tau),\tau)\,d\mu(\tau).$$
Since there is no condition on $x(T)$, we have $a = 0$. For simplicity in the calculations, we also set $\lambda_0 = 1$. Now, we proceed to apply Pontryagin's maximum principle. We have
$$K_u^*(t,t,x(t),u(t))\,\psi(t) = \begin{pmatrix} 1 & -1 & 0 \end{pmatrix} \begin{pmatrix} \psi_1(t)\\ \psi_2(t)\\ \psi_3(t) \end{pmatrix} = \psi_1(t) - \psi_2(t),$$
and $\varphi_u(x(t),u(t),t) = u(t)$. Then, for all $U \in \Omega$, we get
$$\big\langle \psi_1(t) - \psi_2(t) + u(t),\; U - u(t)\big\rangle \ge 0, \qquad t \in [0,T] \ \text{a.e.}$$
This is equivalent to:
$$\max_{U \in [0,e]} \big(\psi_2(t) - \psi_1(t) - u(t)\big)\,U = \big(\psi_2(t) - \psi_1(t) - u(t)\big)\,u(t).$$
Then, the optimal control is given by
$$u(t) = \begin{cases} 0, & \text{if } \psi_2(t) - \psi_1(t) < 0,\\ \psi_2(t) - \psi_1(t), & \text{if } 0 \le \psi_2(t) - \psi_1(t) \le e,\\ e, & \text{if } \psi_2(t) - \psi_1(t) > e. \end{cases}$$
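For completeness, the passage from the pointwise maximization to the control law is a standard projection argument, sketched here under the sign convention used in this example, with $m$ denoting the switching quantity $\psi_2(t) - \psi_1(t)$:

```latex
% Projection argument behind the control law.
\text{Fix } t \text{ and set } m := \psi_2(t) - \psi_1(t). \text{ The condition}
\max_{U \in [0,e]} (m - u(t))\,U = (m - u(t))\,u(t)
\text{means that } (m - u(t))\,(U - u(t)) \le 0 \text{ for all } U \in [0,e],
\text{i.e. } u(t) = \operatorname{proj}_{[0,e]}(m) =
\begin{cases}
0, & m < 0,\\
m, & 0 \le m \le e,\\
e, & m > e.
\end{cases}
```

That is, the optimal control is the pointwise projection of the switching quantity onto the admissible interval $[0,e]$.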
At the final time $T$, we have $\psi_2(T) - \psi_1(T) = \alpha/2$. Hence, if $2e < \alpha$, then $u(t) = e$ near the final time $T$.
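An exploratory numerical sketch (an assumption on our part, not the paper's algorithm) is a forward-backward sweep for this example without state constraints ($\mu = 0$): integrate the state forward, integrate the adjoint backward from $\nabla h(x(T))$, and update the control by the projection formula. All parameter values are hypothetical.

```python
# Forward-backward sweep sketch for the SIR example, mu = 0 assumed.

def clip(v, lo, hi):
    """Project v onto the interval [lo, hi]."""
    return max(lo, min(hi, v))

def sweep(r=0.5, gamma=0.2, e=0.1, alpha=1.0, T=20.0, n=2000,
          S0=0.99, I0=0.01, iters=10, relax=0.5):
    dt = T / n
    u = [0.0] * (n + 1)
    for _ in range(iters):
        # forward pass: Euler quadrature of the integral SIR dynamics,
        # with a crude positivity clamp for numerical robustness
        S, I = [S0], [I0]
        for k in range(n):
            s, i = S[-1], I[-1]
            S.append(max(0.0, s + dt * (-r * s * i + u[k])))
            I.append(max(0.0, i + dt * (r * s * i - gamma * i - u[k])))
        # backward pass: adjoint with terminal value grad h = (0, alpha/2, 0);
        # psi3 vanishes identically, so it is omitted
        q1, q2 = [0.0], [alpha / 2.0]   # psi1, psi2 at t = T, built backwards
        for k in range(n, 0, -1):
            s, i = S[k], I[k]
            d1 = -r * i * q1[-1] + r * i * q2[-1]
            d2 = -r * s * q1[-1] + (r * s - gamma) * q2[-1]
            q1.append(q1[-1] + dt * d1)
            q2.append(q2[-1] + dt * d2)
        p1, p2 = q1[::-1], q2[::-1]
        # control update: u = proj_[0,e](psi2 - psi1), with relaxation
        u = [(1.0 - relax) * uk + relax * clip(p2[k] - p1[k], 0.0, e)
             for k, uk in enumerate(u)]
    return u, S, I
```

The relaxation step damps oscillations between successive sweeps; convergence is not guaranteed in general, so this should be read only as a starting point for experimentation.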

7. Open Problem

In this section, we present an open problem that offers a promising avenue for future research and could even serve as the basis for a doctoral thesis. This problem addresses an optimal control scenario that simultaneously incorporates impulses and constraints on a state variable. Our main objective is to analyze the following optimal control problem, which we plan to explore in future work:
Problem 7.1.
Find a control function $u \in L^{\infty}([0,T];\mathbb{R}^r)$ and a trajectory $x \in PW([0,T];\mathbb{R}^n)$ that minimize the cost functional
$$h(x(T)) + \int_0^T \varphi(x(t),u(t),t)\,dt \longrightarrow \operatorname{loc\,min},$$
Subject to the following conditions:
  • Volterra-type integral state dynamics:
    $x(t) = p(t) + \int_0^t K(t,s,x(s),u(s))\,ds,$
    where K is a suitable kernel with regularity and growth conditions to be specified.
  • Terminal equality constraints:
    $G_i(x(T)) = 0, \quad i = 1, 2, \ldots, q \le n.$
  • Impulsive state equation:
    $x(t_k^+) = x(t_k) + J_k(x(t_k)), \quad k = 1, 2, \ldots, p,$
    where J k are given jump functions and { t k } [ 0 , T ] are prescribed impulse times.
  • Control constraints:
    $u(t) \in \Omega \subset \mathbb{R}^r, \quad \text{for a.e. } t \in [0,T],$
    with Ω a compact and convex set.
  • State constraints (parametrized):
    $g(x(t),t) \le 0, \quad t \in [0,T].$

8. Conclusion and Final Remarks

This paper has presented a new characterization of the controllability of linear control systems governed by Volterra integral equations, obtained through a novel lemma of independent theoretical interest. Building upon this result, we showed that the classical assumption concerning the controllability of the variational linear equation around an optimal pair is not necessary.
Within this framework, we extended Pontryagin’s Maximum Principle to a broad class of optimal control problems governed by Volterra-type integral equations, incorporating both terminal and time-dependent state constraints. Using Dubovitskii–Milyutin theory, we derived necessary conditions for optimality under minimal regularity assumptions. Two formulations of the adjoint system were established: one involving a Volterra adjoint equation in the presence of only terminal constraints, and another involving a Stieltjes integral with respect to a nonnegative Borel measure when state constraints are present.
In the special case where the Volterra kernel reduces to a differential equation, our results recover the classical Pontryagin Maximum Principle, thereby unifying and extending the existing theory for systems governed by integral and differential equations. The application to the optimal control of an epidemic SIR model illustrates the practical significance of our theoretical contributions.
Finally, we have outlined an open problem that naturally arises from this work, providing a direction for further investigation. Addressing such questions could broaden the scope of the Dubovitskii–Milyutin framework and foster new applications in both theory and practice.

Author Contributions

Hugo Leiva: Writing the original draft, review and editing, research, formal analysis, conceptualization. Marcial Valero: Writing, review and editing, research, formal analysis. All authors read and approved the final manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Data Availability Statement

This study is theoretical and does not involve any datasets. All results are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Athanassov, Z.S.; Vitanov, N.K. Optimal control of systems governed by Volterra integral equations. J. Optim. Theory Appl. 1996, 90, 177–192. [Google Scholar]
  2. Balakrishnan, A.V. Applied Functional Analysis; Springer: New York, NY, USA, 1976. [Google Scholar]
  3. Bashirov, A.E. Necessary Conditions for Optimality for Control Problems Governed by Volterra Integral Equations; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1994. [Google Scholar]
  4. Belbas, S.A. A new method for optimal control of Volterra integral equations. Appl. Math. Comput. 2007, 189, 1902–1915. [Google Scholar] [CrossRef]
  5. Camacho, O.; Castillo, R.; Leiva, H. Optimal control governed impulsive neutral differential equations. Results Control Optim. 2024, 17, 100505. [Google Scholar] [CrossRef]
  6. Dmitruk, A.V.; Osmolovskii, N.P. Necessary conditions in optimal control problems with integral equations of Volterra type. Inst. Control Sci., Russ. Acad. Sci., Moscow, Russia, Technical Report.
  7. Ebrahimzadeh, A.; Hashemizadeh, E. Optimal control of non-linear Volterra integral equations with weakly singular kernels based on Genocchi polynomials and collocation method. J. Nonlinear Math. Phys. 2023, 30, 1758–1773. [Google Scholar] [CrossRef]
  8. Girsanov, I.V. Lectures on Mathematical Theory of Extremum Problems; Springer: New York, NY, USA, 1972. [Google Scholar]
  9. Goh, B.S. Necessary conditions for optimal control problems with Volterra integral equations. SIAM J. Control Optim. 1980, 18, 85–99. [Google Scholar]
  10. Ioffe, A.D.; Tihomirov, V.M. Theory of Extremal Problems; Springer: Berlin/Heidelberg, Germany, 1979. [Google Scholar]
  11. Leiva, H.; Tapia-Riera, G.; Romero-Leiton, J.P.; Duque, C. Mixed Cost Function and State Constrains Optimal Control Problems. Appl. Math. 2025, 5, 46. [Google Scholar] [CrossRef]
  12. Leiva, H.; Cabada, D.; Gallo, R. Roughness of the controllability for time varying systems under the influence of impulses, delay, and non-local conditions. Nonauton. Dyn. Syst. 2020, 7, 126–139. [Google Scholar] [CrossRef]
  13. Leiva, H. Pontryagin’s maximum principle for optimal control problems governed by nonlinear impulsive differential equations. J. Math. Appl. 2023, 46, 111–164. [Google Scholar]
  14. Lee, E.B.; Markus, L. Foundations of Optimal Control Theory; Wiley: New York, NY, USA, 1967. [Google Scholar]
  15. Kolmogorov, A.N.; Fomin, S.V. Elementos de la Teoría de Funciones y de Análisis Funcional; Editorial Mir: Moscú, Rusia, 1975. [Google Scholar]
  16. Malinowska, A.B.; Torres, D.F.M. Optimal control problems governed by Volterra integral equations on time scales. J. Math. Anal. Appl. 2013, 399, 508–518. [Google Scholar]
  17. Medhin, N.G. Optimal processes governed by integral equations with unilateral constraint. J. Math. Anal. Appl. 1988, 129, 269–283. [Google Scholar] [CrossRef]
  18. Moon, J. A Pontryagin maximum principle for terminal state-constrained optimal control problems of Volterra integral equations with singular kernels. AIMS Math. 2023, 8, 22924–22943. [Google Scholar] [CrossRef]
  19. Lyusternik, L.A.; Sobolev, V.I. Elements of Functional Analysis; Nauka: Moscow, Russia, 1965. [Google Scholar]
  20. Tröltzsch, F. Optimal Control of Partial Differential Equations: Theory, Methods and Applications; American Mathematical Society: Providence, RI, USA, 2005. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.