1. Introduction
In recent decades, a considerable body of work has been devoted to the study of optimal control problems governed by integral equations, both of Fredholm and Volterra types. Classical results have addressed linear and nonlinear settings, with various forms of cost functionals and state-control constraints (see, for instance, [1,2,3,9,16,20]). Specific applications have arisen in population dynamics, viscoelastic systems, and epidemic models, often assuming either linear dynamics or simplified forms of the kernel. However, most of these studies rely on structural assumptions or do not fully leverage the general framework provided by Dubovitskii–Milyutin theory.
This work begins with a new lemma that characterizes the controllability of linear control systems governed by Volterra integral equations. This result, which we regard as significant in its own right, represents a novel contribution to the theory of integral equations. Furthermore, it allows us to eliminate the common assumption regarding the controllability of the variational linear equation around an optimal pair (4).
Several recent contributions have established maximum principles for control problems involving Volterra-type integral equations. In [17], systems with unilateral constraints are studied, with emphasis on existence results in linear–quadratic frameworks. In [4], nonlinear Volterra equations are examined through discrete-time approximations and modified Hamiltonian techniques. A maximum principle for systems with singular kernels and terminal state constraints, incorporating fractional Caputo dynamics, is presented in [18]. A rigorous analysis based on Dubovitskii–Milyutin theory under smooth nonlinearities and mixed control constraints is provided in [6]. A numerical collocation method employing Genocchi polynomials for weakly singular kernels is developed in [7].
While each of these works provides valuable insights, most are limited by structural assumptions on the kernel, simplified dynamics, or restrictive cost functionals. In contrast, our work establishes a general maximum principle for nonlinear Volterra systems subject to both terminal and time-dependent state constraints. Building upon our controllability result, we derive necessary optimality conditions by means of conic approximations within the Dubovitskii–Milyutin framework.
When only terminal constraints are imposed, the resulting adjoint equation takes the form of a modified Volterra integral equation. In the presence of time-dependent state constraints, the adjoint equation involves a Stieltjes integral with respect to a nonnegative Borel measure concentrated on the set of active constraints. This structure captures the additional complexity introduced by such constraints. Notably, when the Volterra kernel reduces to a differential equation, our results recover the classical Pontryagin Maximum Principle. Thus, our findings not only unify but also extend the existing theory for optimal control problems governed by integral and differential equation systems.
The Dubovitskii–Milyutin approach has proven effective in a wide range of optimal control problems, as demonstrated in works such as [5,11,12,13]. In particular, [13] explores impulsive control problems in the absence of state constraints.
2. Setting of the Problem and the Main Results
With this background, we now introduce the optimal control problem addressed in this work. Our goal is to derive necessary conditions of optimality in the spirit of Pontryagin's Maximum Principle for a system whose dynamics are governed by a nonlinear Volterra integral equation, with general constraints on both control and state variables, including fixed terminal constraints and time-dependent state restrictions. The cost functional consists of a terminal term and an integral term that may depend on the state. The general formulation is as follows:
Here, denotes the state vector and the control vector, which is measurable and takes values in a given admissible control set . The pair belongs to the product space , with , . That is to say, the state function is continuous and the control is essentially bounded. The function represents the terminal cost, the state restriction function, and the running cost, which depends on the state function , the control , and time t. The vector-valued function is given, and the nonlinear kernel defines the Volterra-type dynamics.
Now, we continue with the hypotheses of this work.
Hypotheses
- a)
, K, h, are continuous functions on compact subsets of and with .
- b)
is convex and closed with
- c)
g is a continuous function and , where are fixed. Moreover, g has a continuous derivative with respect to its first variable such that when
Now, the more complex version of the maximum principle can be established. In this case, the corresponding adjoint equation involves a Stieltjes integral with respect to a nonnegative Borel measure. This extension accounts for the effect of the state constraints at each time instant along the trajectory.
Theorem 1. Let us suppose that conditions a)–c) are fulfilled for , a solution of Problem 2.1.
Then, there exists a non-negative Borel measure μ on with support in
There exist and a function ψ such that and ψ are not simultaneously zero, and ψ is a solution of the integral equation
for some ; and also, for all and almost all , it follows that
Remark 1.
If condition (6) is replaced by the following terminal state constraint
then a simpler version of the maximum principle can be established. In this case, the corresponding adjoint equation does not involve a Stieltjes integral with respect to a nonnegative Borel measure, as the state constraint only acts at the terminal time.
Theorem 2. Let us suppose that conditions a), b) and (10) are fulfilled, and suppose that g has a continuous derivative with respect to its first variable such that . Let be a solution of Problem 2.1.
Then, there exist , and a function ψ such that and ψ are not simultaneously zero, and ψ is a solution of the following system:
and also, for all and almost all , it follows that
Corollary 1.
Let us suppose that conditions a)–c) are fulfilled for , a solution of Problem 2.1, and
Then, there exists a non-negative Borel measure μ on with support in
There exist and a function ψ such that and ψ are not simultaneously zero, and ψ is a solution of the integral equation
for some ; and also, for all and almost all , it follows that
3. Preliminary Results
In this section, we present a lemma that characterizes the controllability of linear control systems governed by Volterra integral equations. This result is significant in its own right and represents a novel contribution to the theory of integral equations. Moreover, it allows us to prove that the commonly assumed controllability of the variational linear equation around an optimal pair is, in fact, superfluous.
We then summarize the main results of the Dubovitskii–Milyutin (DM) theory. We formulate the general optimization problem with constraints and construct approximation cones for both the objective function and the constraints. The optimality condition, expressed through the Euler–Lagrange (EL) equation, is presented in terms of the dual of these cones. The fundamental results rest on well-established principles of DM theory, with detailed proofs available in [8,11,12,13].
3.1. Controllability of Linear Volterra Equation
In this subsection, we present a lemma that characterizes the controllability of the following linear control system:
where x denotes the state variable and u is the control function. The matrices A and B have all entries belonging to C¹(Δ); that is, A and B are continuously differentiable in both variables over the triangular domain Δ = {(t, s) : 0 ≤ s ≤ t ≤ T}.
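Since the displayed equations did not survive typesetting, it may help to record the standard form of a linear controlled Volterra system. The display below is a reconstruction from the surrounding description (state x, control u, kernels A and B on the triangle Δ) and should be read as our notational assumption, not a verbatim copy of system (16).

```latex
% Reconstruction (assumed notation) of the linear controlled Volterra
% system: x is the state, u the control, z a given function, and A, B
% are C^1 matrix kernels on \Delta = \{(t,s) : 0 \le s \le t \le T\}.
\[
  x(t) = z(t) + \int_{0}^{t} A(t,s)\,x(s)\,ds
              + \int_{0}^{t} B(t,s)\,u(s)\,ds ,
  \qquad t \in [0,T].
\]
```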
Lemma 1.
System (16) is not controllable if, and only if, there exists a nonzero function ψ satisfying conditions (17) and (18).
Proof. Assume that system (16) is not controllable; then the set D defined by
is a linear subspace such that . Then, there exists such that and
Let ψ be a solution of the following system
Now, let be arbitrary, and let x be the corresponding solution associated to (16). Notice that
Now, by integration by parts and since , we get that
Hence, we have that for all . Therefore, we conclude that
Conversely, assume that there exists such ψ with satisfying (17) and (18). Now, let x be any solution of (16); then, by the same computations as above and integration by parts, we obtain that
Then, D admits ψ as a perpendicular vector. Therefore, we conclude that D is not dense, i.e.,
Hence, system (16) is not controllable. □
3.2. Cones, Dual Cones and DM Theorem
Let E be a locally convex topological vector space, and E* its dual (the space of continuous linear functionals on E).
A subset K ⊆ E is a cone with apex at zero if λx ∈ K for all x ∈ K and all λ > 0.
If are convex, w-closed cones, then:
where the closure is taken in the w-topology (see [8]).
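For completeness, we recall the standard definitions behind these statements (the symbols are our notation; see [8]):

```latex
% Cone with apex at zero and its dual cone (standard definitions):
\[
  K \subseteq E \ \text{is a cone with apex at } 0
  \quad\Longleftrightarrow\quad
  \lambda x \in K \ \ \forall\, x \in K,\ \forall\, \lambda > 0,
\]
\[
  K^{*} = \{\, f \in E^{*} \;:\; f(x) \ge 0 \ \ \forall\, x \in K \,\}.
\]
```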
Lemma 2.
Let be open convex cones such that
Lemma 3.
(DM). Let be convex cones with apex at zero, with open. Then the intersection is empty if and only if there are (), not all zero, such that
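In its standard form (the symbols below are our notation, not the paper's), the Dubovitskii–Milyutin lemma states:

```latex
% Dubovitskii--Milyutin lemma (standard statement, notation ours):
% K_1,\dots,K_n are open convex cones and K_{n+1} is a convex cone,
% all with apex at zero.
\[
  \bigcap_{i=1}^{n+1} K_i = \varnothing
  \quad\Longleftrightarrow\quad
  \exists\, f_i \in K_i^{*},\ i=1,\dots,n+1,\ \text{not all zero, with }
  f_1 + f_2 + \cdots + f_{n+1} = 0 .
\]
```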
3.3. The Abstract Euler–Lagrange (EL) Equation
Let us consider a function and sets () such that (). Set the following problem
Remark 2. The sets () are usually given by restrictions of inequality type, and by restrictions of equality type; in general,
Theorem 3.
(DM). Let be a solution of problem (20), and assume that:
- a) is the decay cone of at ;
- b) are the admissible cones to at ();
- c) is the tangent cone to at .
If these cones are convex, then there exist functionals , not all zero, such that
Remark 3.
Equation (21) is called the Abstract Euler–Lagrange (EL) Equation. The definitions of decay, admissible, and tangent cones can be found in [8].
Plan to apply the DM Theorem to specific problems:
- i) Identify the decay vectors.
- ii) Identify the admissible vectors.
- iii) Identify the tangent vectors.
- iv) Construct the dual cones.
3.4. Important Results
Now, we explicitly compute the cones of decay, admissible, and tangent vectors in some cases. For , we denote by the cone of decay directions.
Theorem 4.
(See [8, p. 48]). If E is a Banach space and is Fréchet differentiable at , then
where is the Fréchet derivative of at .
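Assuming the usual notation of [8] (f for the functional and f'(x₀) for its Fréchet derivative), the conclusion of Theorem 4 is the familiar description of the decay cone:

```latex
% Decay (descent) cone of a Fr\'echet-differentiable functional f
% at x_0, as in [8, p. 48] (notation assumed):
\[
  DC(f, x_0) = \{\, h \in E \;:\; f'(x_0)\,h < 0 \,\}.
\]
```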
Theorem 5.
(See [8]). Let be a continuous and convex function on a topological linear space E, and let . Then has a directional derivative in all directions at , and we also have that:
- a)
- b)
The admissible cone to will be denoted by .
Theorem 6.
(See [8]). If ℧ is an arbitrary convex set with then
The set of all the tangent vectors to ℧ at is a cone with apex at zero, which will be denoted by , and it will be called tangent cone.
In this part, we highlight the Lyusternik Theorem, a powerful tool for computing the cone of tangent vectors. This theorem is crucial to our analysis, as it facilitates the determination of the tangent vectors at a given point.
Theorem 7. (Lyusternik; see [8]). Let be Banach spaces, and suppose that:
- a) is Fréchet differentiable at ;
- b) is surjective.
Then, the cone of tangent vectors to the set at the point is given by
The proof of this theorem, which is not simple, can be found in [10] and [19].
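With P denoting the constraint map (a name we introduce here for readability), the Lyusternik theorem yields the tangent cone as the kernel of the derivative:

```latex
% Lyusternik theorem (assumed notation): if P(x_0) = 0 and P'(x_0)
% is surjective, the tangent cone to \{x : P(x) = 0\} at x_0 is
\[
  TC\bigl(\{x : P(x)=0\},\, x_0\bigr) = \ker P'(x_0)
  = \{\, h \;:\; P'(x_0)\,h = 0 \,\}.
\]
```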
4. Proof of the Main Theorem 1
Proof. Let be a function defined as follows
and let
where are given by the sets of pairs
which satisfy (3)–(4), (5) and (6), respectively.
Then, Problem 2.1 is equivalent to
a) Analysis of the Function
Let represent the decay cone of at the point . According to Theorem 4:
If , it follows that:
By Example 9.2 in [8, p. 62], the derivative is expressed as:
Thus, for any , there exists such that:
b) Analysis of the Restriction
Let us determine the tangent cone to at the point . Assume, for a moment, that the Variational Integral Equation is controllable; then, applying Theorem 7, we get that (see [5,8,12,13])
Now, let us calculate . To do so, we shall consider the following linear spaces
Then, by Proposition 2.40 from [13], we have that if, and only if, there exists such that
Moreover, by Lemma 2.5 from [13], it follows that is closed; then, by cone properties, we obtain that
Therefore, if, and only if, .
c) Analysis of Restriction
Then . Given that is convex, closed, and , the following hold:
1. and are closed and convex.
2. and .
Let be the admissible cone to at . Then
where is the admissible cone to at .
For any , there exists such that:
By Theorem 6, is a support of at .
d) Analysis of Restriction
Let us define the function as follows
Then, by Example 7.5 from [8, p. 52], we have that
where
But, by Theorem 5, we obtain that
Then, by Example 10.3 [8, p. 73], we have that for all there is a non-negative Borel measure on such that and has support in
e) Euler–Lagrange Equation.
It is evident that are convex cones. Then, by Theorem 3, there exist functionals , not all zero, such that
Equation (24) can be expressed as follows
Now, for all there exists , a solution of equation () with . Then , and therefore
Hence, equation (24) can be written as follows:
Let be the solution of the adjoint equation (14), which means
This equation is a second-order Volterra-type equation, which admits a unique solution (see [15, p. 519]). Then, multiplying both sides of the foregoing equation by x and integrating from 0 to T, we obtain
The first term of (26) can be rewritten, after integration by parts and changing the order of integration, as:
Similarly, the second term of (26) can be written as:
Likewise, for the third term we have that
The fourth term on the right-hand side can be simplified by using integration by parts for the Stieltjes integral, together with the conditions and . Specifically, since and , it follows that .
Then, by (27), (28), (29) and (30), we have that
Rewriting the last equality, we have
Since satisfies the variational equation (4), we have that
Then, by equation (25), we obtain that
for all . Since is a support of at the point , from Example 10.5 [8, p. 76] it follows that
for all and almost all .
Now, we will demonstrate that the case is not admissible. In fact, if then
Thus, which implies that
Hence, from equation (14) and the fact that , we obtain
which leads to
Additionally, from equation (25) we have that
then, from equation (24), it follows that
where
which contradicts the statement of Theorem 3.
At this stage, we have introduced two supplementary assumptions: first, we assumed that ; second, we supposed that the system is controllable.
We will now show that these assumptions are unnecessary. Indeed, if , then by the definition of , we have
Let us consider
Then, from equation (32), we get
for all such that x is a solution of equation (4). Therefore, for , we have that:
which leads to the conclusion that
for all and almost every .
Now, assume that system (4) is not controllable. Then, by Lemma 1, there exists a nontrivial function satisfying such that, for all , it holds that
By taking , we find that solves equation (14), and therefore
for all and almost every .
Therefore, the proof of Theorem 1 is now complete. □
5. Sufficient Condition for Optimality
The necessary condition for optimality presented in Theorem 1 (the Maximum Principle) is, under certain additional conditions, also sufficient. Specifically, let us examine the particular case of Problem 2.1 in which the dynamics are linear.
Problem 5.1.
where denotes the state variable and is the control function. The matrices A and B have all entries belonging to C¹(Δ); that is, A and B are continuously differentiable in both variables over the triangular domain Δ. Let be a pair satisfying conditions (37)–(40).
Theorem 8. Let us suppose that conditions and from Theorem 1 are satisfied. Furthermore, let us assume the following:
- A) The system (37) is controllable.
- B) There exists such that
- C) The corresponding solutions to of equation (37) satisfy and
- D) are convex functions in their first two variables.
Then is a global solution of Problem 5.1.
Proof. Let us define the function as follows
Let us consider
where is given by (37)–(38), by (39), and by (40), as in Theorem 1.
Then, Problem 5.1 is equivalent to:
It is clear that are convex sets, and from the conditions we have that is convex, and
Thus, by Theorem 2.17 from [13], it follows that is a minimum point of on ℧ if, and only if, there are , not all zero, such that
Here, are the approximation cones defined as in Theorem 1.
Let be the admissible cone to at the point . Then
where and
Then, by dual cone properties, we have that
So, each has the following form
Here, is a non-negative Borel measure with support on
Now, assume that the Maximum Principle of Theorem 1 is satisfied. This implies the existence of , , and non-negative Borel measures supported on R. Additionally, there exists a function that satisfies the following integral equation:
where both and are non-zero, and for every and almost every , the following holds
To demonstrate the theorem, it is enough to show that there exist , not all zero, such that
for which we define the following set
and functionals
Then, from (42), we get that
So, is a support of at . Hence
Let us define the functional as follows
Now, we will see that where is as in Theorem 1. In fact, suppose that ; then, multiplying both sides of equation (41) by and integrating from 0 to T, we obtain that
Therefore,
Thus
Next, we will introduce the following functionals by
Then
and also
and not all these functionals are zero, because, by hypothesis, and are not simultaneously zero.
From the convexity conditions, the global minimality of follows. □
6. A Mathematical Model
In this section, we present an important real-life model where our results can be applied; in the next section, we present an open problem.
6.1. Optimal Control in Epidemic: SIR Model
Let us consider a population affected by an epidemic that we seek to stop through vaccination. We define the following variables:
I(t), the number of infectious individuals, who can infect others;
S(t), the number of non-infectious individuals who are susceptible to infection;
R(t), the number of individuals who have recovered and are no longer part of the susceptible population.
Let β be the infection rate, γ the recovery rate, and u the vaccination rate (the control), which satisfies a pointwise constraint. The optimal control problem for the SIR model system is given by
where
. In contrast with classical formulations using differential equations, we model the epidemic process through an integral representation. This approach naturally accounts for the cumulative nature of epidemic transitions and facilitates numerical implementation.
The functions , , and should not be interpreted merely as constant initial values. Instead, they may represent either:
pre-processed or interpolated empirical data on the susceptible, infected, and recovered subpopulations prior to the onset of control,
accumulated trajectories obtained from previous simulation stages or observational time series, or
non-constant baseline trajectories reflecting uncertainty or memory effects.
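To illustrate the integral formulation numerically, the following sketch advances an SIR system written in Volterra integral form by simple left-rectangle quadrature of the accumulated transition terms. The parameter names `beta`, `gamma`, the constant control `u`, and the constant baselines are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Illustrative sketch (not the paper's exact model): an SIR epidemic in
# integral form, S(t) = S0 + int_0^t(...), advanced by left-rectangle
# quadrature. beta, gamma, u and the baselines are assumed values.
def simulate_sir_integral(beta, gamma, u, S0, I0, R0, T=30.0, n=3000):
    dt = T / n
    S = np.empty(n + 1); I = np.empty(n + 1); R = np.empty(n + 1)
    S[0], I[0], R[0] = S0, I0, R0
    # running quadrature sums approximating the integrals from 0 to t
    accS = accI = accR = 0.0
    for k in range(n):
        accS += (-beta * S[k] * I[k] - u * S[k]) * dt    # infections and vaccinations leave S
        accI += (beta * S[k] * I[k] - gamma * I[k]) * dt  # infections in, recoveries out
        accR += (gamma * I[k] + u * S[k]) * dt            # recovered and vaccinated enter R
        S[k + 1] = S0 + accS
        I[k + 1] = I0 + accI
        R[k + 1] = R0 + accR
    return S, I, R

S, I, R = simulate_sir_integral(beta=0.5, gamma=0.2, u=0.05,
                                S0=0.95, I0=0.05, R0=0.0)
# The three integral increments cancel, so the total population is conserved.
print(round(S[-1] + I[-1] + R[-1], 6))  # 1.0
```

The cumulative sums `accS`, `accI`, `accR` play the role of the integral terms, which is precisely why the integral representation "naturally accounts for the cumulative nature of epidemic transitions" mentioned above.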
In epidemic models such as the SIR system, various forms of time-dependent state constraints of the form
may arise to reflect practical limitations, policy objectives, or ethical considerations. Below, we describe several representative types of such constraints:
- Healthcare Capacity Constraints:
  - Limit on active infections:
  - Maximum proportion infected:
- Containment or Safety Constraints:
  - Infected individuals less than susceptible:
  - Infection rate constraint:
- Final-Time Constraints:
  - Minimum number of recovered individuals at the final time:
  - Near eradication at the final time:
- Mixed Variable Constraints:
  - Combined restriction on susceptible and recovered populations:
  - Preservation of a minimum level of susceptibles:
- Cost or Resource-Based Constraints:
  - Indirect economic/social cost limitation:
This formulation allows greater flexibility in modeling and aligns well with the structure of optimal control theory in Banach spaces, where integral equations provide a natural framework for weak formulations and the application of variational methods.
The goal is to determine an optimal vaccination strategy in order to minimize the above cost function on a fixed time T.
Using the following notation, we can write this problem in the abstract formulation of Theorem 1, where .
Therefore, the adjoint equation becomes the following equation:
Since there is no condition on , we have . For simplicity in calculations, we also set . Now, we proceed to describe Pontryagin's maximum principle: and . Then, for all we get
This is equivalent to:
Then, the optimal control is given by
At the final time T, we have that . Hence, if , then near the final time T.
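Although the explicit formulas above did not survive typesetting, the qualitative conclusion, a vaccination control that switches off when its switching function vanishes near the final time, can be sketched as follows. The names `sigma` and `u_max` are illustrative assumptions, not quantities defined in the paper.

```python
import numpy as np

# Hypothetical sketch of a bang-bang vaccination policy: the control takes
# its maximum value while a switching function sigma(t) (built from the
# adjoint state) is positive, and switches off once sigma vanishes.
def bang_bang_control(sigma, u_max):
    sigma = np.asarray(sigma, dtype=float)
    return np.where(sigma > 0.0, u_max, 0.0)

# Toy switching function vanishing at the final time T = 1, mimicking an
# adjoint component with a zero terminal condition.
t = np.linspace(0.0, 1.0, 11)
u = bang_bang_control(1.0 - t, u_max=0.8)
print(u[0], u[-1])  # control active early, off at the final time: 0.8 0.0
```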
7. Open Problem
In this section, we present an open problem that offers a promising avenue for future research and could even serve as the basis for a doctoral thesis. This problem addresses an optimal control scenario that simultaneously incorporates impulses and constraints on a state variable. Our main objective is to analyze the following optimal control problem, which we plan to explore in future work:
Problem 7.1.
Find a control function and a trajectory that minimize the cost functional
Subject to the following conditions:
- Volterra-type integral state dynamics:
where K is a suitable kernel with regularity and growth conditions to be specified.
- Terminal equality constraints:
- Impulsive state equation:
where are given jump functions and are prescribed impulse times.
- Control constraints: with Ω a compact and convex set.
- State constraints (parametrized):
8. Conclusion and Final Remarks
This paper has presented a new characterization of the controllability of linear control systems governed by Volterra integral equations, obtained through a novel lemma of independent theoretical interest. Building upon this result, we showed that the classical assumption concerning the controllability of the variational linear equation around an optimal pair is not necessary.
Within this framework, we extended Pontryagin’s Maximum Principle to a broad class of optimal control problems governed by Volterra-type integral equations, incorporating both terminal and time-dependent state constraints. Using Dubovitskii–Milyutin theory, we derived necessary conditions for optimality under minimal regularity assumptions. Two formulations of the adjoint system were established: one involving a Volterra adjoint equation in the presence of only terminal constraints, and another involving a Stieltjes integral with respect to a nonnegative Borel measure when state constraints are present.
In the special case where the Volterra kernel reduces to a differential equation, our results recover the classical Pontryagin Maximum Principle, thereby unifying and extending the existing theory for systems governed by integral and differential equations. The application to the optimal control of an epidemic SIR model illustrates the practical significance of our theoretical contributions.
Finally, we have outlined an open problem that naturally arises from this work, providing a direction for further investigation. Addressing such questions could broaden the scope of the Dubovitskii–Milyutin framework and foster new applications in both theory and practice.
Author Contributions
Hugo Leiva: Writing the original draft, review and editing, research, formal analysis, conceptualization. Marcial Valero: Writing, review and editing, research, formal analysis. All authors read and approved the final manuscript.
Funding
This research received no external funding.
Informed Consent Statement
Not applicable.
Data Availability Statement
Data Availability Statement: This study is theoretical and does not involve any datasets. All results are contained within the article.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Athanassov, Z.S.; Vitanov, N.K. Optimal control of systems governed by Volterra integral equations. J. Optim. Theory Appl. 1996, 90, 177–192.
- Balakrishnan, A.V. Applied Functional Analysis; Springer: New York, NY, USA, 1976.
- Bashirov, A.E. Necessary Conditions for Optimality for Control Problems Governed by Volterra Integral Equations; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1994.
- Belbas, S.A. A new method for optimal control of Volterra integral equations. Appl. Math. Comput. 2007, 189, 1902–1915.
- Camacho, O.; Castillo, R.; Leiva, H. Optimal control governed impulsive neutral differential equations. Results Control Optim. 2024, 17, 100505.
- Dmitruk, A.V.; Osmolovskii, N.P. Necessary conditions in optimal control problems with integral equations of Volterra type. Technical Report; Institute of Control Sciences, Russian Academy of Sciences: Moscow, Russia.
- Ebrahimzadeh, A.; Hashemizadeh, E. Optimal control of non-linear Volterra integral equations with weakly singular kernels based on Genocchi polynomials and collocation method. J. Nonlinear Math. Phys. 2023, 30, 1758–1773.
- Girsanov, I.V. Lectures on Mathematical Theory of Extremum Problems; Springer: New York, NY, USA, 1972.
- Goh, B.S. Necessary conditions for optimal control problems with Volterra integral equations. SIAM J. Control Optim. 1980, 18, 85–99.
- Ioffe, A.D.; Tihomirov, V.M. Theory of Extremal Problems; Springer: Berlin/Heidelberg, Germany, 1979.
- Leiva, H.; Tapia-Riera, G.; Romero-Leiton, J.P.; Duque, C. Mixed Cost Function and State Constrains Optimal Control Problems. Appl. Math. 2025, 5, 46.
- Leiva, H.; Cabada, D.; Gallo, R. Roughness of the controllability for time varying systems under the influence of impulses, delay, and non-local conditions. Nonauton. Dyn. Syst. 2020, 7, 126–139.
- Leiva, H. Pontryagin's maximum principle for optimal control problems governed by nonlinear impulsive differential equations. J. Math. Appl. 2023, 46, 111–164.
- Lee, E.B.; Markus, L. Foundations of Optimal Control Theory; Wiley: New York, NY, USA, 1967.
- Kolmogorov, A.N.; Fomin, S.V. Elementos de la Teoría de Funciones y de Análisis Funcional; Editorial Mir: Moscow, Russia, 1975.
- Malinowska, A.B.; Torres, D.F.M. Optimal control problems governed by Volterra integral equations on time scales. J. Math. Anal. Appl. 2013, 399, 508–518.
- Medhin, N.G. Optimal processes governed by integral equations with unilateral constraint. J. Math. Anal. Appl. 1988, 129, 269–283.
- Moon, J. A Pontryagin maximum principle for terminal state-constrained optimal control problems of Volterra integral equations with singular kernels. AIMS Math. 2023, 8, 22924–22943.
- Lyusternik, L.A.; Sobolev, V.I. Elements of Functional Analysis; Nauka: Moscow, Russia, 1965.
- Tröltzsch, F. Optimal Control of Partial Differential Equations: Theory, Methods and Applications; American Mathematical Society: Providence, RI, USA, 2005.