Preprint
Article

This version is not peer-reviewed.

Conjugate Descent-Type Algorithm for Interval-valued Multiobjective Optimization Problems

Submitted: 18 August 2025. Posted: 18 August 2025.


Abstract
This paper deals with a class of interval-valued multiobjective optimization problems (IVMOPs). The conjugate-descent (CD) direction is defined to introduce the CD-type conjugate direction algorithm for IVMOPs. We perform the convergence analysis and establish that the algorithm exhibits a linear rate of convergence under certain suitable assumptions. We further investigate its worst-case complexity. Several numerical examples, including a large-scale problem, are provided to demonstrate the efficacy of the proposed algorithm. To the best of our knowledge, the CD-type conjugate direction algorithm is introduced for the first time to solve a class of IVMOPs.

1. Introduction

Multiobjective optimization problems (MOPs) are concerned with the simultaneous optimization of two or more mutually conflicting objective functions. Several authors have explored MOPs in different frameworks (see [1,2,3,4,5,6,7]). In fields like engineering and management sciences (see [8,9]), problems frequently involve imprecise or uncertain data. Such uncertainty may arise from factors including unknown future events, measurement or manufacturing errors, or incomplete information during model formulation [10,11]. In these situations, uncertain parameters or objective functions are often modeled using intervals. When the uncertainties in the objective functions of MOPs are expressed as intervals, the resulting formulations are known as interval-valued multiobjective optimization problems (IVMOPs). Various methods have been developed to handle IVMOPs. For instance, a class of IVMOPs has been studied in [13] by transforming them into their corresponding deterministic MOPs and establishing the relationships between the solutions of IVMOPs and MOPs. In [14], Newton's method has been proposed to solve IVMOPs, assuming the objective functions possess twice continuous generalized Hukuhara (gH)-differentiability with a positive definite gH-Hessian. Subsequently, the quasi-Newton method for IVMOPs has been introduced in [15]. For recent developments and an updated survey of IVMOPs, we refer the reader to [16,17,18].
The conjugate direction method serves as a powerful tool in optimization, extensively applied to systems of linear equations and diverse optimization problems. Numerous real-world challenges across different fields of modern research, including inverse engineering problems [19], electromagnetic scattering problems [20], and geophysical inversion problems [21], have been successfully addressed by employing this method. Hestenes and Stiefel [22] first introduced the conjugate direction method to solve linear systems. Subsequently, several researchers have introduced various versions of conjugate direction methods to solve single-objective as well as multiobjective optimization problems (see [23,24,25,26]). For instance, Fletcher [27] developed the conjugate-descent (CD) method to solve scalar-objective optimization problems. Later, Pérez and Prudente [28] introduced the CD conjugate direction algorithm to solve a class of MOPs. From the above discussion, it is evident that the CD-type conjugate direction method has not yet been introduced to solve IVMOPs. The main aim of this article is to bridge the aforementioned research gap.
Motivated by the works of [27,28], in this article, we investigate a class of IVMOPs and introduce a CD-type conjugate direction algorithm for solving them. We perform the convergence analysis and establish the linear order of convergence of the sequence obtained from the algorithm. Moreover, we investigate the worst-case complexity of the proposed algorithm. Finally, we furnish several numerical examples, including a large-scale problem, to illustrate the efficacy of our proposed algorithm.
The key contributions and unique aspects of this article are threefold. Firstly, the results established in this paper generalize several well-known algorithms from the literature. In particular, we generalize the work of Pérez and Prudente [28] on the CD method from MOPs to IVMOPs. Furthermore, we generalize the results of Fletcher [27] for real-valued optimization problems to IVMOPs. Secondly, if the conjugate parameter in the proposed algorithm is set to zero and if every component of the objective function of the IVMOP is a real-valued function rather than an interval-valued function, the proposed algorithm coincides with the steepest descent-type algorithm for MOPs introduced by Fliege and Svaiter [29]. Thirdly, it is imperative to note that the proposed CD algorithm is applicable to any IVMOP whose objective function possesses continuous gH-differentiability, whereas the Newton and quasi-Newton methods proposed in [14,15] require the objective function of the IVMOP to be twice continuously gH-differentiable. In view of the above fact, our proposed algorithm can be applied to a wider class of optimization problems as compared to the algorithms introduced in [14,15].
We design the remainder of this article as follows: Section 2 presents some fundamental definitions and key results. In Section 3, we consider an IVMOP and introduce a CD-type conjugate direction algorithm for solving it. We perform the convergence analysis and establish the linear order of convergence of the sequence obtained from the proposed algorithm. We further investigate the worst-case complexity of the algorithm. To demonstrate the efficacy of the proposed algorithm, we present several numerical examples in Section 4. Finally, Section 5 presents our conclusions and outlines future research directions.

2. Preliminaries

The notations $\mathbb{N}$ and $\mathbb{R}^m$ are employed to represent the set of natural numbers and the Euclidean space of dimension $m$, respectively, throughout this article. Let $\hat{c} \in \mathbb{R}^m$ and $r > 0$. Then, $B(\hat{c}, r)$ and $\overline{B}(\hat{c}, r)$ represent the open and closed balls of radius $r$ centered at $\hat{c}$, respectively. Let $I_r := \{1, \ldots, r\}$ $(r \in \mathbb{N})$, and let $\mathbb{I}_m$ denote the $m \times m$ $(m \in \mathbb{N})$ identity matrix. For $Y, Z \in \mathbb{R}^{m \times m}$, $Z \preceq Y$ represents the positive semidefiniteness of $(Y - Z)$.
Let $\hat{c} := (\hat{c}_1, \ldots, \hat{c}_m)$, $\tilde{v} := (\tilde{v}_1, \ldots, \tilde{v}_m) \in \mathbb{R}^m$. Then, the notations mentioned below will be employed:
$\hat{c} \leqq \tilde{v} \iff \hat{c}_j \leq \tilde{v}_j$ for all $j \in I_m$; $\qquad \hat{c} < \tilde{v} \iff \hat{c}_j < \tilde{v}_j$ for all $j \in I_m$.
Let $\hat{\Theta} : \mathbb{R}^m \to \mathbb{R}$. For any $\hat{c} := (\hat{c}_1, \ldots, \hat{c}_m) \in \mathbb{R}^m$, the symbols $\frac{\partial^+ \hat{\Theta}}{\partial \hat{c}_j}(\hat{c})$ and $\frac{\partial^- \hat{\Theta}}{\partial \hat{c}_j}(\hat{c})$ are defined as follows:
$\frac{\partial^+ \hat{\Theta}}{\partial \hat{c}_j}(\hat{c}) := \lim_{h \to 0^+} \frac{\hat{\Theta}(\hat{c}_1, \ldots, \hat{c}_j + h, \ldots, \hat{c}_m) - \hat{\Theta}(\hat{c}_1, \ldots, \hat{c}_j, \ldots, \hat{c}_m)}{h}$, and
$\frac{\partial^- \hat{\Theta}}{\partial \hat{c}_j}(\hat{c}) := \lim_{h \to 0^-} \frac{\hat{\Theta}(\hat{c}_1, \ldots, \hat{c}_j + h, \ldots, \hat{c}_m) - \hat{\Theta}(\hat{c}_1, \ldots, \hat{c}_j, \ldots, \hat{c}_m)}{h}$,
provided that the above limits exist.
Assume that for each $j \in I_m$ and $\hat{c} \in \mathbb{R}^m$, $\frac{\partial^+ \hat{\Theta}}{\partial \hat{c}_j}(\hat{c})$ and $\frac{\partial^- \hat{\Theta}}{\partial \hat{c}_j}(\hat{c})$ exist. Then, we define:
$\nabla^- \hat{\Theta}(\hat{c}) := \left( \frac{\partial^- \hat{\Theta}}{\partial \hat{c}_1}(\hat{c}), \ldots, \frac{\partial^- \hat{\Theta}}{\partial \hat{c}_m}(\hat{c}) \right)$, $\qquad \nabla^+ \hat{\Theta}(\hat{c}) := \left( \frac{\partial^+ \hat{\Theta}}{\partial \hat{c}_1}(\hat{c}), \ldots, \frac{\partial^+ \hat{\Theta}}{\partial \hat{c}_m}(\hat{c}) \right)$.
Let $J(\mathbb{R})$ and $J^-(\mathbb{R})$ denote the sets
$\{ [\underline{a}, \overline{a}] : \underline{a}, \overline{a} \in \mathbb{R},\ \underline{a} \leq \overline{a} \}$ and $\{ [\underline{a}, \overline{a}] : \underline{a}, \overline{a} \in \mathbb{R},\ \underline{a} \leq \overline{a},\ \overline{a} < 0 \}$,
respectively.
Let $\tilde{w}, \tilde{y} \in \mathbb{R}$. Then, $\tilde{w} \vee \tilde{y}$ represents the following interval:
$\tilde{w} \vee \tilde{y} := [\min\{\tilde{w}, \tilde{y}\}, \max\{\tilde{w}, \tilde{y}\}]$.
If $\underline{a} = \overline{a}$, then $\hat{A} := [\underline{a}, \overline{a}] \in J(\mathbb{R})$ is called a degenerate interval.
Let $\hat{A} := [\underline{a}, \overline{a}]$, $\hat{B} := [\underline{b}, \overline{b}] \in J(\mathbb{R})$ and $\varsigma \in \mathbb{R}$. We define the subsequent algebraic operations (see [14]):
$\varsigma \hat{A} := \begin{cases} [\varsigma \underline{a}, \varsigma \overline{a}], & \varsigma \geq 0, \\ [\varsigma \overline{a}, \varsigma \underline{a}], & \varsigma < 0, \end{cases}$ $\qquad \hat{A} \oplus \hat{B} := [\underline{a} + \underline{b}, \overline{a} + \overline{b}]$, $\qquad \hat{A} \ominus \hat{B} := [\underline{a} - \overline{b}, \overline{a} - \underline{b}]$.
The following definition of interval norm is from [14].
Definition 1. 
Let $\hat{A} := [\underline{a}, \overline{a}] \in J(\mathbb{R})$. The interval norm $\|\hat{A}\|_I$ is defined below:
$\|\hat{A}\|_I := \max\{ |\underline{a}|, |\overline{a}| \}$.
For $\hat{A} := [\underline{a}, \overline{a}]$, $\hat{B} := [\underline{b}, \overline{b}] \in J(\mathbb{R})$, the following order relations will be employed:
$\hat{A} \leq_{LU} \hat{B} \iff \underline{a} \leq \underline{b}$ and $\overline{a} \leq \overline{b}$; $\quad \hat{A} <_{LU} \hat{B} \iff \underline{a} < \underline{b}$ and $\overline{a} < \overline{b}$; $\quad \hat{A} \preceq_{LU} \hat{B} \iff \hat{A} \leq_{LU} \hat{B}$ and $\hat{A} \neq \hat{B}$.
Let $\hat{S} := (\hat{S}_1, \ldots, \hat{S}_n)$, $\hat{P} := (\hat{P}_1, \ldots, \hat{P}_n) \in J(\mathbb{R})^n$. The order relations $\preceq^n$ and $\prec^n$ on $\hat{S}$ and $\hat{P}$ are defined below:
$\hat{S} \preceq^n \hat{P} \iff \hat{S}_i \leq_{LU} \hat{P}_i$ for all $i \in I_n$; $\qquad \hat{S} \prec^n \hat{P} \iff \hat{S}_i <_{LU} \hat{P}_i$ for all $i \in I_n$.
The following definitions are from [15].
Definition 2. 
Let $\hat{A} := [\underline{a}, \overline{a}]$ and $\hat{B} := [\underline{b}, \overline{b}]$ be arbitrary intervals in $J(\mathbb{R})$. The generalized Hukuhara difference of $\hat{A}$ and $\hat{B}$ is given by:
$\hat{A} \ominus_{gH} \hat{B} := (\underline{a} - \underline{b}) \vee (\overline{a} - \overline{b})$.
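To fix ideas, the interval arithmetic of this section can be realized in a few lines of code. The following Python sketch is our own illustration (not part of the paper): it represents an interval $[\underline{a}, \overline{a}]$ as a pair and implements $\oplus$, scalar multiplication, $\ominus_{gH}$, the interval norm $\|\cdot\|_I$, and the order $\leq_{LU}$; all helper names are assumptions.

```python
# Minimal sketch of the interval arithmetic used in this section
# (illustrative helper names; not the authors' code).
from typing import NamedTuple

class Interval(NamedTuple):
    lo: float   # lower endpoint (requires lo <= up)
    up: float   # upper endpoint

def add(A: Interval, B: Interval) -> Interval:          # A ⊕ B
    return Interval(A.lo + B.lo, A.up + B.up)

def scale(s: float, A: Interval) -> Interval:           # ςA, sign-dependent
    return Interval(s * A.lo, s * A.up) if s >= 0 else Interval(s * A.up, s * A.lo)

def gh_diff(A: Interval, B: Interval) -> Interval:      # A ⊖_gH B = (a̲−b̲) ∨ (ā−b̄)
    d1, d2 = A.lo - B.lo, A.up - B.up
    return Interval(min(d1, d2), max(d1, d2))

def inorm(A: Interval) -> float:                        # ‖A‖_I = max{|a̲|, |ā|}
    return max(abs(A.lo), abs(A.up))

def leq_LU(A: Interval, B: Interval) -> bool:           # A ≤_LU B
    return A.lo <= B.lo and A.up <= B.up
```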
The interval-valued function $\chi : \mathbb{R}^m \to J(\mathbb{R})$ is defined below:
$\chi(\hat{c}) := [\underline{\chi}(\hat{c}), \overline{\chi}(\hat{c})]$, for all $\hat{c} \in \mathbb{R}^m$.
Definition 3. 
The function χ is called continuous at $\hat{z} \in \mathbb{R}^m$ provided for any $\epsilon > 0$, some $\delta > 0$ exists satisfying:
$\|\hat{c} - \hat{z}\| < \delta \implies \|\chi(\hat{c}) \ominus_{gH} \chi(\hat{z})\|_I < \epsilon$.
The following definition of gH-Lipschitz continuity is from [32].
Definition 4. 
The function χ is called gH-Lipschitz continuous on $\mathbb{R}^m$ provided some constant $\tilde{C} > 0$ (the Lipschitz constant) exists, satisfying:
$\|\chi(\hat{c}) \ominus_{gH} \chi(\tilde{v})\|_I \leq \tilde{C}\|\hat{c} - \tilde{v}\|$, for all $\hat{c}, \tilde{v} \in \mathbb{R}^m$.
Remark 1. 
Let $\hat{\lambda} : \mathbb{R}^m \to \mathbb{R}$ be defined below:
$\hat{\lambda}(\hat{c}) := \underline{\chi}(\hat{c}) + \overline{\chi}(\hat{c})$, for all $\hat{c} \in \mathbb{R}^m$. (1)
In view of Definition 4, we obtain that if χ is gH-Lipschitz continuous on $\mathbb{R}^m$ with Lipschitz constant $\tilde{C} > 0$, then $\hat{\lambda}$ is Lipschitz continuous on $\mathbb{R}^m$ with Lipschitz constant $2\tilde{C}$.
The following definitions and theorem are from [31].
Definition 5. 
Let $\hat{z}, \hat{\nu} \in \mathbb{R}^m$. If the limit
$\lim_{h \to 0^+} \frac{\chi(\hat{z} + h\hat{\nu}) \ominus_{gH} \chi(\hat{z})}{h}$
exists, then we say that χ possesses the gH-directional derivative at $\hat{z}$ along the direction $\hat{\nu}$, denoted by $D_{gH}\chi(\hat{z}, \hat{\nu})$.
Definition 6. 
Let the functions $\hat{\Theta}_1 : \mathbb{R}^m \to \mathbb{R}$ and $\hat{\Theta}_2 : \mathbb{R}^m \to \mathbb{R}$ be given by:
$\hat{\Theta}_1(\hat{c}) := \frac{\overline{\chi}(\hat{c}) + \underline{\chi}(\hat{c})}{2}$, $\qquad \hat{\Theta}_2(\hat{c}) := \frac{\overline{\chi}(\hat{c}) - \underline{\chi}(\hat{c})}{2}$, for all $\hat{c} \in \mathbb{R}^m$.
The function χ is said to be gH-differentiable at $\hat{z} \in \mathbb{R}^m$ if there exist $\tilde{w}_1 := (\tilde{w}_1^{(1)}, \ldots, \tilde{w}_1^{(m)})$, $\tilde{w}_2 := (\tilde{w}_2^{(1)}, \ldots, \tilde{w}_2^{(m)}) \in \mathbb{R}^m$, and error functions $G_1, G_2 : \mathbb{R}^m \to \mathbb{R}$ such that $\lim_{\hat{c} \to 0} G_1(\hat{c}) = \lim_{\hat{c} \to 0} G_2(\hat{c}) = 0$, and for all $\hat{c} \neq 0$ the following hold:
$\hat{\Theta}_1(\hat{z} + \hat{c}) - \hat{\Theta}_1(\hat{z}) = \sum_{j=1}^{m} \tilde{w}_1^{(j)} \hat{c}_j + \|\hat{c}\| G_1(\hat{c})$,
and
$\hat{\Theta}_2(\hat{z} + \hat{c}) - \hat{\Theta}_2(\hat{z}) = \sum_{j=1}^{m} \tilde{w}_2^{(j)} \hat{c}_j + \|\hat{c}\| G_2(\hat{c})$.
χ is said to be gH-differentiable on $\mathbb{R}^m$ provided it is gH-differentiable at every $\hat{z} \in \mathbb{R}^m$.
Theorem 1. 
If χ possesses gH-differentiability at $\hat{z} \in \mathbb{R}^m$, then for all $\hat{\nu} \in \mathbb{R}^m$, one of the following statements holds:
(i)
$\nabla \underline{\chi}(\hat{z})$ and $\nabla \overline{\chi}(\hat{z})$ both exist and
$D_{gH}\chi(\hat{z}, \hat{\nu}) = \langle \nabla \underline{\chi}(\hat{z}), \hat{\nu} \rangle \vee \langle \nabla \overline{\chi}(\hat{z}), \hat{\nu} \rangle$.
(ii)
$\nabla^- \underline{\chi}(\hat{z})$, $\nabla^- \overline{\chi}(\hat{z})$, $\nabla^+ \underline{\chi}(\hat{z})$ and $\nabla^+ \overline{\chi}(\hat{z})$ exist and satisfy:
$\langle \nabla^- \underline{\chi}(\hat{z}), \hat{\nu} \rangle = \langle \nabla^+ \overline{\chi}(\hat{z}), \hat{\nu} \rangle$, $\qquad \langle \nabla^- \overline{\chi}(\hat{z}), \hat{\nu} \rangle = \langle \nabla^+ \underline{\chi}(\hat{z}), \hat{\nu} \rangle$.
Moreover,
$D_{gH}\chi(\hat{z}, \hat{\nu}) = \langle \nabla^- \underline{\chi}(\hat{z}), \hat{\nu} \rangle \vee \langle \nabla^- \overline{\chi}(\hat{z}), \hat{\nu} \rangle$.
The following definition is from [32].
Definition 7. 
Let χ be gH-differentiable on $\mathbb{R}^m$. The gH-gradient of χ at $\hat{c} \in \mathbb{R}^m$ is given below:
$\nabla_{gH}\chi(\hat{c}) := (D_{gH}\chi(\hat{c}; e_1), \ldots, D_{gH}\chi(\hat{c}; e_m))$,
where $e_j$ $(j \in I_m)$ is the $j$-th canonical direction in $\mathbb{R}^m$.
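Under Definitions 5 and 7, the gH-directional derivative and the gH-gradient can be approximated numerically by finite differences. The sketch below is our own illustration and reuses the Interval helpers sketched earlier; chi is any user-supplied interval-valued function, and the small step h only approximates the limit.

```python
import numpy as np

def gh_directional(chi, z, nu, h=1e-6):
    # (χ(z + hν) ⊖_gH χ(z)) / h — a finite-difference proxy for Definition 5
    return scale(1.0 / h, gh_diff(chi(z + h * nu), chi(z)))

def gh_gradient(chi, c):
    # componentwise gH-directional derivatives along the canonical directions e_j
    return [gh_directional(chi, c, e) for e in np.eye(len(c))]
```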
The subsequent proposition is from [14].
Proposition 1. 
If χ possesses $k$-times gH-differentiability at $\hat{z} \in \mathbb{R}^m$, then $\hat{\lambda} : \mathbb{R}^m \to \mathbb{R}$, defined in (1), also possesses $k$-times differentiability at $\hat{z}$.
Now, we define the interval-valued vector function $\Gamma : \mathbb{R}^m \to J(\mathbb{R})^n$ as follows:
$\Gamma(\hat{c}) := (\Gamma_1(\hat{c}), \ldots, \Gamma_n(\hat{c}))$, for all $\hat{c} \in \mathbb{R}^m$,
where each $\Gamma_i : \mathbb{R}^m \to J(\mathbb{R})$ $(i \in I_n)$ is given below:
$\Gamma_i(\hat{c}) := [\underline{\Gamma}_i(\hat{c}), \overline{\Gamma}_i(\hat{c})]$, for all $\hat{c} \in \mathbb{R}^m$.
The definitions from [16] play a significant role in our forthcoming discussions.
Definition 8. 
Let $\Gamma : \mathbb{R}^m \to J(\mathbb{R})^n$ and $\hat{z} \in \mathbb{R}^m$. Suppose each component $\Gamma_i$ $(i \in I_n)$ possesses gH-directional differentiability at $\hat{z}$. Then, $\hat{z}$ is a critical point of Γ provided no $\hat{\nu} \in \mathbb{R}^m$ exists satisfying:
$D_{gH}\Gamma(\hat{z}, \hat{\nu}) \in (J^-(\mathbb{R}))^n$,
where $D_{gH}\Gamma(\hat{z}, \hat{\nu}) := (D_{gH}\Gamma_1(\hat{z}, \hat{\nu}), \ldots, D_{gH}\Gamma_n(\hat{z}, \hat{\nu}))$.
Definition 9. 
A vector $\hat{\nu} \in \mathbb{R}^m$ is called a descent direction of Γ at a point $\hat{z} \in \mathbb{R}^m$ provided some $\varsigma > 0$ exists, satisfying:
$\Gamma(\hat{z} + t\hat{\nu}) \prec^n \Gamma(\hat{z})$, for all $t \in (0, \varsigma]$.

3. CD-Type Conjugate Direction Method for IVMOP

This section is devoted to introducing the CD-type conjugate direction algorithm for solving an IVMOP. We perform the convergence analysis and establish the linear rate of convergence of the sequence obtained from the proposed algorithm. We further investigate the worst-case complexity of the proposed algorithm under certain mild assumptions.
Consider the IVMOP, given below:
(IVMOP) $\quad$ Minimize $\Gamma(\hat{c}) := (\Gamma_1(\hat{c}), \ldots, \Gamma_n(\hat{c}))$, subject to $\hat{c} \in \mathbb{R}^m$,
where $\Gamma_i : \mathbb{R}^m \to J(\mathbb{R})$ $(i \in I_n)$. Unless specified otherwise, we assume that $\Gamma_i$ $(i \in I_n)$ are continuously gH-differentiable functions.
The following definitions are from [14].
Definition 10. 
An element $\hat{z} \in \mathbb{R}^m$ is called an effective (respectively, weak effective) solution of IVMOP provided no $\hat{c} \in \mathbb{R}^m$ exists satisfying:
$\Gamma(\hat{c}) \preceq^n \Gamma(\hat{z})$ and $\Gamma(\hat{c}) \neq \Gamma(\hat{z})$ $\quad$ (respectively, $\Gamma(\hat{c}) \prec^n \Gamma(\hat{z})$).
Throughout the remainder of this article, $\mathcal{S}$ denotes the set of all critical points of Γ.
In light of Definition 8, it follows that if $\hat{c} \notin \mathcal{S}$, then there exists $\hat{\nu} \in \mathbb{R}^m$ satisfying:
$D_{gH}\Gamma_i(\hat{c}, \hat{\nu}) <_{LU} [0, 0]$, for all $i \in I_n$.
To determine the descent direction at $\hat{c} \notin \mathcal{S}$, we consider the following problem from [16]:
$(P)_{\hat{c}}$: $\quad$ Minimize $\psi(\hat{\sigma}, \hat{\nu}) := \hat{\sigma}$, subject to $D_{gH}\Gamma_i(\hat{c}, \hat{\nu}) \oplus \frac{1}{2}[\|\hat{\nu}\|^2, \|\hat{\nu}\|^2] \leq_{LU} [\hat{\sigma}, \hat{\sigma}]$, $\hat{\nu} \in \mathbb{R}^m$, $i \in I_n$,
where $\psi : \mathbb{R} \times \mathbb{R}^m \to \mathbb{R}$; the uniqueness of the solution of $(P)_{\hat{c}}$ is straightforward.
For each $\hat{c} \in \mathbb{R}^m$, let $K_{\hat{c}} \subseteq \mathbb{R} \times \mathbb{R}^m$ denote the feasible set of problem $(P)_{\hat{c}}$, and let $(\hat{\sigma}_{\hat{c}}, \hat{\nu}_{\hat{c}}) \in K_{\hat{c}}$ denote an arbitrary feasible point. We then define the functions $\hat{\omega} : \mathbb{R}^m \to \mathbb{R}^m$ and $\xi : \mathbb{R}^m \to \mathbb{R}$ as follows:
$(\xi(\hat{c}), \hat{\omega}(\hat{c})) := \arg\min_{(\hat{\sigma}_{\hat{c}}, \hat{\nu}_{\hat{c}}) \in K_{\hat{c}}} \psi(\hat{\sigma}_{\hat{c}}, \hat{\nu}_{\hat{c}})$.
Henceforth, for any $\hat{c} \in \mathbb{R}^m$, the optimal solution of $(P)_{\hat{c}}$ will be signified by $(\xi(\hat{c}), \hat{\omega}(\hat{c}))$.
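For illustration, $(P)_{\hat{c}}$ can be solved with an off-the-shelf NLP solver. The sketch below is our own assumption (the paper reports a MATLAB implementation) and uses SciPy with variables $x = (\hat{\sigma}, \hat{\nu})$: by Theorem 1, both endpoints of $D_{gH}\Gamma_i(\hat{c}, \hat{\nu})$ are among the inner products $\nabla\underline{\Gamma}_i(\hat{c})^T\hat{\nu}$ and $\nabla\overline{\Gamma}_i(\hat{c})^T\hat{\nu}$, so the $\leq_{LU}$ constraint reduces to $2n$ smooth scalar constraints.

```python
import numpy as np
from scipy.optimize import minimize

def solve_subproblem(grads_lo, grads_up, m):
    # grads_lo[i], grads_up[i] hold ∇Γ̲_i(ĉ) and ∇Γ̄_i(ĉ); variables x = (σ, ν)
    cons = [{'type': 'ineq',
             # σ − gᵀν − ½‖ν‖² ≥ 0  ⇔  gᵀν + ½‖ν‖² ≤ σ
             'fun': lambda x, g=g: x[0] - g @ x[1:] - 0.5 * (x[1:] @ x[1:])}
            for g in list(grads_lo) + list(grads_up)]
    res = minimize(lambda x: x[0], x0=np.zeros(m + 1), constraints=cons)
    return res.x[0], res.x[1:]    # numerical approximations of ξ(ĉ) and ω̂(ĉ)
```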
The lemmas provided below are from Upadhyay et al. [16].
Lemma 1.  
Let $\hat{z} \in \mathbb{R}^m$. Then, the following properties hold:
(i)
$\xi(\hat{z}) = 0$, provided $\hat{z} \in \mathcal{S}$.
(ii)
$\xi(\hat{z}) < 0$, provided $\hat{z} \notin \mathcal{S}$.
Lemma 2. 
If $\Gamma_i$ $(i \in I_n)$ are locally strongly convex (respectively, locally convex) at $\hat{z} \in \mathcal{S}$, then $\hat{z}$ is a locally effective (respectively, locally weak effective) solution of IVMOP.
Lemma 3. 
If $\hat{z} \notin \mathcal{S}$, then $\hat{\omega}(\hat{z})$ is a descent direction of Γ at $\hat{z}$.
We define $\Lambda : \mathbb{R}^m \times \mathbb{R}^m \to \mathbb{R}$ as follows:
$\Lambda(\hat{c}, \hat{\nu}) := \max_{i \in I_n} \overline{D}_{gH}\Gamma_i(\hat{c}, \hat{\nu})$, for all $(\hat{c}, \hat{\nu}) \in \mathbb{R}^m \times \mathbb{R}^m$, (3)
where for each $i \in I_n$, $\overline{D}_{gH}\Gamma_i(\hat{c}, \hat{\nu})$ denotes the upper bound of the interval
$D_{gH}\Gamma_i(\hat{c}, \hat{\nu}) := [\underline{D}_{gH}\Gamma_i(\hat{c}, \hat{\nu}), \overline{D}_{gH}\Gamma_i(\hat{c}, \hat{\nu})]$.
The lemma provided below relates Λ ( c ^ , ν ^ ) to ξ ( c ^ ) .
Lemma 4. 
For every $\hat{c} \in \mathbb{R}^m$, the following relation holds:
$\xi(\hat{c}) = \Lambda(\hat{c}, \hat{\omega}(\hat{c}))$.
Proof. 
Since $(\xi(\hat{c}), \hat{\omega}(\hat{c})) \in K_{\hat{c}}$, the following holds:
$D_{gH}\Gamma_i(\hat{c}, \hat{\omega}(\hat{c})) \oplus \frac{1}{2}[\|\hat{\omega}(\hat{c})\|^2, \|\hat{\omega}(\hat{c})\|^2] \leq_{LU} [\xi(\hat{c}), \xi(\hat{c})]$, for all $i \in I_n$.
This implies that:
$\overline{D}_{gH}\Gamma_i(\hat{c}, \hat{\omega}(\hat{c})) + \frac{1}{2}\|\hat{\omega}(\hat{c})\|^2 \leq \xi(\hat{c})$, for all $i \in I_n$. (4)
Using (3) and (4), we obtain:
$\Lambda(\hat{c}, \hat{\omega}(\hat{c})) \leq \xi(\hat{c})$. (5)
If $\Lambda(\hat{c}, \hat{\omega}(\hat{c})) < \xi(\hat{c})$, then some $\hat{\varphi} \in (0, 1)$ exists, satisfying:
$\Lambda(\hat{c}, \hat{\varphi}\hat{\omega}(\hat{c})) + \frac{1}{2}\|\hat{\varphi}\hat{\omega}(\hat{c})\|^2 < \xi(\hat{c})$. (6)
Define
$\hat{\sigma} := \Lambda(\hat{c}, \hat{\varphi}\hat{\omega}(\hat{c})) + \frac{1}{2}\|\hat{\varphi}\hat{\omega}(\hat{c})\|^2$.
Then, from (3) and (6), we obtain:
$D_{gH}\Gamma_i(\hat{c}, \hat{\varphi}\hat{\omega}(\hat{c})) \oplus \frac{1}{2}[\|\hat{\varphi}\hat{\omega}(\hat{c})\|^2, \|\hat{\varphi}\hat{\omega}(\hat{c})\|^2] \leq_{LU} [\hat{\sigma}, \hat{\sigma}] <_{LU} [\xi(\hat{c}), \xi(\hat{c})]$, for all $i \in I_n$.
This implies
$(\hat{\sigma}, \hat{\varphi}\hat{\omega}(\hat{c})) \in K_{\hat{c}}$ and $\hat{\sigma} < \xi(\hat{c})$,
contradicting the assumption that $(\xi(\hat{c}), \hat{\omega}(\hat{c}))$ is a solution of $(P)_{\hat{c}}$. Therefore,
$\Lambda(\hat{c}, \hat{\omega}(\hat{c})) \geq \xi(\hat{c})$. (8)
Hence, from (5) and (8), we have
$\Lambda(\hat{c}, \hat{\omega}(\hat{c})) = \xi(\hat{c})$.
This completes the proof.    □
Let $s \in \mathbb{N} \cup \{0\}$ and $\{\hat{c}^{(r)} : 0 \leq r \leq s\} \subseteq \mathbb{R}^m$. If $\hat{c}^{(s)} \notin \mathcal{S}$, then the conjugate-descent (CD) direction at $\hat{c}^{(s)}$ is defined as follows:
$\hat{\nu}^{(s)} := \begin{cases} \hat{\omega}(\hat{c}^{(s)}), & \text{if } s = 0, \\ \hat{\omega}(\hat{c}^{(s)}) + \beta_s \hat{\nu}^{(s-1)}, & \text{if } s \geq 1, \end{cases}$ (9)
where for $s \geq 1$, $\beta_s$ is given by
$\beta_s := \begin{cases} 0, & \text{if } \Lambda(\hat{c}^{(s-1)}, \hat{\nu}^{(s-1)}) = 0, \\ \dfrac{\Lambda(\hat{c}^{(s)}, \hat{\omega}(\hat{c}^{(s)}))}{\Lambda(\hat{c}^{(s-1)}, \hat{\nu}^{(s-1)})}, & \text{otherwise}. \end{cases}$ (10)
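A direct transcription of (9) and (10) is given below, as a Python sketch under our own naming. $\Lambda$ is evaluated from the gradient endpoints as in (3) and Theorem 1; the full safeguard of Theorem 2 below, which additionally caps $\beta_s$ when $\Lambda(\hat{c}^{(s)}, \hat{\nu}^{(s-1)}) > 0$, is omitted here for brevity.

```python
def Lambda(grads_lo, grads_up, nu):
    # Λ(ĉ, ν) = max_i D̄_gH Γ_i(ĉ, ν); the upper endpoint of each D_gH Γ_i is
    # the larger of the two inner products (Theorem 1)
    return max(max(gl @ nu, gu @ nu) for gl, gu in zip(grads_lo, grads_up))

def cd_direction(omega, nu_prev, Lam_prev, Lam_omega):
    # (9)-(10): Lam_omega = Λ(ĉ^(s), ω̂(ĉ^(s))), Lam_prev = Λ(ĉ^(s−1), ν̂^(s−1))
    if nu_prev is None:                       # s = 0
        return omega
    beta = 0.0 if Lam_prev == 0 else Lam_omega / Lam_prev
    return omega + max(beta, 0.0) * nu_prev   # β_s ≥ 0, as required by (11)
```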
Remark 2. 
When each $\Gamma_i : \mathbb{R}^m \to \mathbb{R}$ $(i \in I_n)$ is real-valued, Equation (9) coincides with the conjugate-descent direction for MOPs introduced by Pérez and Prudente [28]. Hence, the direction $\hat{\nu}^{(s)}$ defined in (9) generalizes the classical CD direction from MOPs to IVMOPs.
The next theorem demonstrates that, under suitable hypotheses, the direction $\hat{\nu}^{(s)}$ defined in (9) at any $\hat{c}^{(s)} \notin \mathcal{S}$ is a descent direction.
Theorem 2. 
Let $s \geq 1$ and suppose that $\hat{c}^{(s)} \notin \mathcal{S}$. If the parameter $\beta_s$ satisfies the following conditions:
$\beta_s \in \begin{cases} [0, \infty), & \text{if } \Lambda(\hat{c}^{(s)}, \hat{\nu}^{(s-1)}) \leq 0, \\ \left[0, \: -\mu\dfrac{\Lambda(\hat{c}^{(s)}, \hat{\omega}(\hat{c}^{(s)}))}{\Lambda(\hat{c}^{(s)}, \hat{\nu}^{(s-1)})}\right], & \text{if } \Lambda(\hat{c}^{(s)}, \hat{\nu}^{(s-1)}) > 0, \end{cases}$ (11)
where the function Λ is defined in (3) and $\mu \in (0, 1)$, then $\hat{\nu}^{(s)}$ is a descent direction of Γ at $\hat{c}^{(s)}$.
Proof. 
Given that each function $\Gamma_i$ $(i \in I_n)$ is continuously gH-differentiable, to show that $\hat{\nu}^{(s)}$ is a descent direction it suffices to show that
$D_{gH}\Gamma_i(\hat{c}^{(s)}, \hat{\nu}^{(s)}) <_{LU} [0, 0]$, for all $i \in I_n$.
Let $i \in I_n$ be fixed. Consider the expression:
$D_{gH}\Gamma_i(\hat{c}^{(s)}, \hat{\nu}^{(s)}) = D_{gH}\Gamma_i(\hat{c}^{(s)}, \hat{\omega}(\hat{c}^{(s)}) + \beta_s\hat{\nu}^{(s-1)}) = \nabla^+\underline{\Gamma}_i(\hat{c}^{(s)})^T(\hat{\omega}(\hat{c}^{(s)}) + \beta_s\hat{\nu}^{(s-1)}) \vee \nabla^+\overline{\Gamma}_i(\hat{c}^{(s)})^T(\hat{\omega}(\hat{c}^{(s)}) + \beta_s\hat{\nu}^{(s-1)})$. (12)
Expanding the first term, we get:
$\nabla^+\underline{\Gamma}_i(\hat{c}^{(s)})^T(\hat{\omega}(\hat{c}^{(s)}) + \beta_s\hat{\nu}^{(s-1)}) = \nabla^+\underline{\Gamma}_i(\hat{c}^{(s)})^T\hat{\omega}(\hat{c}^{(s)}) + \beta_s\nabla^+\underline{\Gamma}_i(\hat{c}^{(s)})^T\hat{\nu}^{(s-1)}$. (13)
Since (11) implies $\beta_s \geq 0$, using (3) and (13), we have:
$\nabla^+\underline{\Gamma}_i(\hat{c}^{(s)})^T(\hat{\omega}(\hat{c}^{(s)}) + \beta_s\hat{\nu}^{(s-1)}) \leq \Lambda(\hat{c}^{(s)}, \hat{\omega}(\hat{c}^{(s)})) + \beta_s\Lambda(\hat{c}^{(s)}, \hat{\nu}^{(s-1)})$. (14)
Now, we consider the two possible cases:
Case 1: $\Lambda(\hat{c}^{(s)}, \hat{\nu}^{(s-1)}) \leq 0$. Using Equation (14), it follows that
$\nabla^+\underline{\Gamma}_i(\hat{c}^{(s)})^T(\hat{\omega}(\hat{c}^{(s)}) + \beta_s\hat{\nu}^{(s-1)}) \leq \Lambda(\hat{c}^{(s)}, \hat{\omega}(\hat{c}^{(s)}))$. (15)
Since $\hat{c}^{(s)} \notin \mathcal{S}$, from Lemma 1 we have
$\xi(\hat{c}^{(s)}) < 0$. (16)
Also, from Lemma 4, we get:
$\Lambda(\hat{c}^{(s)}, \hat{\omega}(\hat{c}^{(s)})) = \xi(\hat{c}^{(s)})$. (17)
Therefore, combining (15), (16) and (17), we obtain:
$\nabla^+\underline{\Gamma}_i(\hat{c}^{(s)})^T(\hat{\omega}(\hat{c}^{(s)}) + \beta_s\hat{\nu}^{(s-1)}) \leq \Lambda(\hat{c}^{(s)}, \hat{\omega}(\hat{c}^{(s)})) = \xi(\hat{c}^{(s)}) < 0$. (18)
Case 2: $\Lambda(\hat{c}^{(s)}, \hat{\nu}^{(s-1)}) > 0$. Using (11) and Equation (14), it follows that
$\nabla^+\underline{\Gamma}_i(\hat{c}^{(s)})^T(\hat{\omega}(\hat{c}^{(s)}) + \beta_s\hat{\nu}^{(s-1)}) \leq \Lambda(\hat{c}^{(s)}, \hat{\omega}(\hat{c}^{(s)})) - \mu\frac{\Lambda(\hat{c}^{(s)}, \hat{\omega}(\hat{c}^{(s)}))}{\Lambda(\hat{c}^{(s)}, \hat{\nu}^{(s-1)})}\Lambda(\hat{c}^{(s)}, \hat{\nu}^{(s-1)}) = (1 - \mu)\Lambda(\hat{c}^{(s)}, \hat{\omega}(\hat{c}^{(s)}))$. (19)
Since $\mu \in (0, 1)$, combining (16), (17) and (19), we obtain:
$\nabla^+\underline{\Gamma}_i(\hat{c}^{(s)})^T(\hat{\omega}(\hat{c}^{(s)}) + \beta_s\hat{\nu}^{(s-1)}) \leq (1 - \mu)\Lambda(\hat{c}^{(s)}, \hat{\omega}(\hat{c}^{(s)})) = (1 - \mu)\xi(\hat{c}^{(s)}) < 0$. (20)
Using (14), (18) and (20), it follows that
$\nabla^+\underline{\Gamma}_i(\hat{c}^{(s)})^T(\hat{\omega}(\hat{c}^{(s)}) + \beta_s\hat{\nu}^{(s-1)}) < 0$. (21)
Similarly, it can be shown that
$\nabla^+\overline{\Gamma}_i(\hat{c}^{(s)})^T(\hat{\omega}(\hat{c}^{(s)}) + \beta_s\hat{\nu}^{(s-1)}) \leq (1 - \mu)\Lambda(\hat{c}^{(s)}, \hat{\omega}(\hat{c}^{(s)})) = (1 - \mu)\xi(\hat{c}^{(s)}) < 0$. (22)
Thus, using (12), (21) and (22), we conclude
$D_{gH}\Gamma_i(\hat{c}^{(s)}, \hat{\nu}^{(s)}) <_{LU} [0, 0]$, for all $i \in I_n$.
This completes the proof.    □
We define the functions $\hat{\lambda}_i : \mathbb{R}^m \to \mathbb{R}$ $(i \in I_n)$ as follows:
$\hat{\lambda}_i(u) := \underline{\Gamma}_i(u) + \overline{\Gamma}_i(u)$, for all $u \in \mathbb{R}^m$. (23)
The functions $\hat{\lambda}_i$ $(i \in I_n)$, as defined in (23), are employed in the subsequent analysis.
Remark 3. 
In light of Theorem 2, if $\hat{c}^{(s)} \notin \mathcal{S}$ and $\beta_s$ satisfies (11), then the direction $\hat{\nu}^{(s)}$, as defined by (9), serves as a descent direction at $\hat{c}^{(s)}$. Therefore,
$D_{gH}\Gamma_i(\hat{c}^{(s)}, \hat{\nu}^{(s)}) \leq_{LU} [0, 0]$, for all $i \in I_n$. (24)
Moreover, combining (20) and (22), we have:
$D_{gH}\Gamma_i(\hat{c}^{(s)}, \hat{\nu}^{(s)}) \leq_{LU} (1 - \mu)[\xi(\hat{c}^{(s)}), \xi(\hat{c}^{(s)})]$, for all $i \in I_n$.
It follows that
$\nabla\hat{\lambda}_i(\hat{c}^{(s)})^T\hat{\nu}^{(s)} \leq 2(1 - \mu)\xi(\hat{c}^{(s)})$, for all $i \in I_n$.
The following Armijo-like line search for Γ is from [32].
Let $\hat{c}, \hat{\nu} \in \mathbb{R}^m$ be such that $\hat{\nu}$ defines a descent direction of Γ at $\hat{c}$. For a given $\theta \in (0, 1)$, a step length $t > 0$ is called admissible provided it satisfies the subsequent condition:
$\Gamma_i(\hat{c} + t\hat{\nu}) \ominus_{gH} \Gamma_i(\hat{c}) \leq_{LU} (\theta t)D_{gH}\Gamma_i(\hat{c}, \hat{\nu})$, for all $i \in I_n$. (25)
Remark 4. 
Let $t > 0$ satisfy the Armijo condition (25) for Γ at $\hat{c}$ along direction $\hat{\nu}$ with parameter θ. Then, the functions $\hat{\lambda}_i$ $(i \in I_n)$ satisfy:
$\hat{\lambda}_i(\hat{c} + t\hat{\nu}) - \hat{\lambda}_i(\hat{c}) \leq (\theta t)\nabla\hat{\lambda}_i(\hat{c})^T\hat{\nu}$, for all $i \in I_n$. (26)
For the numerical implementation of the Armijo-like search method, we incorporate it with a backtracking strategy. The line search is initialized with $t = 1$ and proceeds as follows (a code sketch follows the steps below):
(a)
If the Armijo condition in Equation (25) holds for the current t, accept t as the step length.
(b)
Else, set t : = t / 2 and return to Step (a).
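The sketch below illustrates this backtracking rule, built on the interval helpers above. Here `Gamma(c)`, returning the list of objective intervals, and `D(c, nu)`, returning the intervals $D_{gH}\Gamma_i(\hat{c}, \hat{\nu})$, are assumed supplied by the caller; for a genuine descent direction the loop terminates by Theorem 2.

```python
def armijo_step(Gamma, D, c, nu, theta=0.5):
    t = 1.0
    derivs = D(c, nu)   # intervals D_gH Γ_i(c, ν), fixed during the search
    base = Gamma(c)
    # test condition (25): Γ_i(c + tν) ⊖_gH Γ_i(c) ≤_LU (θt) D_gH Γ_i(c, ν), all i
    while not all(leq_LU(gh_diff(gn, g0), scale(theta * t, d))
                  for gn, g0, d in zip(Gamma(c + t * nu), base, derivs)):
        t /= 2.0        # step (b): halve t and retest
    return t
```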
We now present a CD-type conjugate direction algorithm for IVMOPs.
Algorithm 1 CD-type conjugate direction algorithm for solving IVMOP
1: Let $\theta \in (0, 1)$, $\hat{c}^{(0)} \in \mathbb{R}^m$, $\epsilon > 0$, and $s = 0$.
2: Solve $(P)_{\hat{c}^{(s)}}$ to find $\xi(\hat{c}^{(s)})$ and $\hat{\omega}(\hat{c}^{(s)})$.
3: If $|\xi(\hat{c}^{(s)})| < \epsilon$, stop; else, go to Step 4.
4: Compute $\hat{\nu}^{(s)}$ using (9), (10) and Theorem 2.
5: Select the largest step size $\gamma_s \in \{2^{-p} : p \in \mathbb{N} \cup \{0\}\}$ that satisfies (25).
6: Update the next iterate using $\hat{c}^{(s+1)} := \hat{c}^{(s)} + \gamma_s\hat{\nu}^{(s)}$. Then, replace $s$ by $s + 1$ and return to Step 2.
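Wiring the sketches above together gives the following end-to-end illustration of Algorithm 1 (again a hedged sketch of our own, not the authors' MATLAB implementation; `grads(c)` is assumed to return the endpoint gradients $\nabla\underline{\Gamma}_i(c)$ and $\nabla\overline{\Gamma}_i(c)$).

```python
def cd_algorithm(c0, grads, Gamma, D, eps=1e-4, theta=0.5, max_iter=1000):
    c, nu_prev, Lam_prev = np.asarray(c0, dtype=float), None, None
    for s in range(max_iter):
        gl, gu = grads(c)                                  # Step 2 data
        xi, omega = solve_subproblem(gl, gu, len(c))
        if abs(xi) < eps:                                  # Step 3: stop test
            return c
        nu = cd_direction(omega, nu_prev, Lam_prev,
                          Lambda(gl, gu, omega))           # Step 4: (9)-(10)
        t = armijo_step(Gamma, D, c, nu, theta)            # Step 5: rule (25)
        Lam_prev = Lambda(gl, gu, nu)   # Λ(ĉ^(s), ν̂^(s)), denominator of β_{s+1}
        c, nu_prev = c + t * nu, nu                        # Step 6: update
    return c
```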
Remark 5. 
If we set $\beta_s = 0$ in (9) and each $\Gamma_i : \mathbb{R}^m \to \mathbb{R}$ $(i \in I_n)$ is real-valued, then Algorithm 1 coincides with the steepest descent method for MOPs developed in [29].
Observe that if Algorithm 1 terminates after a finite number of iterations, its final iterate is an approximate critical point. Consequently, it is relevant to perform the convergence analysis of Algorithm 1 for the case of an infinite sequence, that is, $\hat{c}^{(s)} \notin \mathcal{S}$ for all $s \in \mathbb{N} \cup \{0\}$.
The theorem, provided below, establishes the convergence analysis of the sequence obtained from Algorithm 1.
Theorem 3. 
Suppose the set
$\mathcal{W} := \{\hat{c} \in \mathbb{R}^m : \Gamma(\hat{c}) \preceq^n \Gamma(\hat{c}^{(0)})\}$
is bounded. Then, every accumulation point of $(\hat{c}^{(s)})_{s \in \mathbb{N} \cup \{0\}}$ belongs to the set $\mathcal{S}$.
Proof. 
In view of (25), the following inequality is satisfied for all $s \in \mathbb{N} \cup \{0\}$:
$\Gamma_i(\hat{c}^{(s+1)}) \ominus_{gH} \Gamma_i(\hat{c}^{(s)}) \leq_{LU} (\theta\gamma_s)D_{gH}\Gamma_i(\hat{c}^{(s)}, \hat{\nu}^{(s)})$, for all $i \in I_n$. (27)
Therefore, from Remark 3, we obtain
$\Gamma_i(\hat{c}^{(s+1)}) \ominus_{gH} \Gamma_i(\hat{c}^{(s)}) \leq_{LU} [0, 0]$, for all $s \in \mathbb{N} \cup \{0\}$ and all $i \in I_n$.
This implies
$\Gamma_i(\hat{c}^{(s+1)}) \leq_{LU} \Gamma_i(\hat{c}^{(s)})$, for all $s \in \mathbb{N} \cup \{0\}$ and all $i \in I_n$. (28)
Hence, $(\hat{c}^{(s)})_{s \in \mathbb{N} \cup \{0\}} \subseteq \mathcal{W}$. From the given hypotheses, $\mathcal{W}$ is bounded, which gives the boundedness of the sequence $(\hat{c}^{(s)})_{s \in \mathbb{N} \cup \{0\}}$. This ensures that it has at least one accumulation point, say $\hat{z}$. We prove that $\hat{z} \in \mathcal{S}$.
Now, using (25), we have
$\Gamma_i(\hat{c}^{(s+1)}) \ominus_{gH} \Gamma_i(\hat{c}^{(s)}) \leq_{LU} (\theta\gamma_s)D_{gH}\Gamma_i(\hat{c}^{(s)}, \hat{\nu}^{(s)}) \leq_{LU} [0, 0]$, for all $i \in I_n$. (29)
Since, from (28), $(\Gamma_i(\hat{c}^{(s)}))_{s \in \mathbb{N} \cup \{0\}}$ $(i \in I_n)$ is a non-increasing bounded sequence and $\theta \in (0, 1)$, using (29) we obtain
$\lim_{s \to \infty} \gamma_s D_{gH}\Gamma_i(\hat{c}^{(s)}, \hat{\nu}^{(s)}) = [0, 0]$, for all $i \in I_n$. (30)
Since $\gamma_s \in (0, 1]$, $\limsup_{s \to \infty}\gamma_s$ exists. Here, two cases may arise:
Case 1: $\limsup_{s \to \infty}\gamma_s > 0$. Since $\hat{z}$ is an accumulation point of $(\hat{c}^{(s)})_{s \in \mathbb{N} \cup \{0\}}$ and $\limsup_{s \to \infty}\gamma_s > 0$, a subsequence $(\hat{c}^{(s_r)})_{r \in \mathbb{N} \cup \{0\}}$ of $(\hat{c}^{(s)})_{s \in \mathbb{N} \cup \{0\}}$ exists, satisfying:
$\lim_{r \to \infty} D_{gH}\Gamma_i(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r)}) = [0, 0]$, for all $i \in I_n$, and $\lim_{r \to \infty}\hat{c}^{(s_r)} = \hat{z}$. (31)
We want to prove $\hat{z} \in \mathcal{S}$. On the contrary, assume that $\hat{z} \notin \mathcal{S}$. Therefore, there are $\tilde{\nu} \in \mathbb{R}^m$ and $[\hat{b}_1, \hat{b}_2] <_{LU} [0, 0]$ such that
$D_{gH}\Gamma_i(\hat{z}, \tilde{\nu}) <_{LU} [\hat{b}_1, \hat{b}_2]$, for all $i \in I_n$. (32)
Because $D_{gH}\Gamma_i$ is continuous for all $i \in I_n$, and $(\hat{c}^{(s_r)})_{r \in \mathbb{N} \cup \{0\}}$ converges to $\hat{z}$, therefore from (32) there exists $n_1 \in \mathbb{N}$ such that for all $r \geq n_1$ we have
$D_{gH}\Gamma_i(\hat{c}^{(s_r)}, \tilde{\nu}) <_{LU} [\hat{b}_1, \hat{b}_2]$, for all $i \in I_n$. (33)
Taking $\hat{b} = \max\{\hat{b}_1, \hat{b}_2\}$ and using (33), we obtain
$D_{gH}\Gamma_i(\hat{c}^{(s_r)}, \tilde{\nu}) <_{LU} [\hat{b}, \hat{b}] <_{LU} [0, 0]$, for all $i \in I_n$ and $r \geq n_1$. (34)
From (34), some $\hat{\nu} \in \mathbb{R}^m$ exists such that
$D_{gH}\Gamma_i(\hat{c}^{(s_r)}, \hat{\nu}) \oplus \frac{1}{2}[\|\hat{\nu}\|^2, \|\hat{\nu}\|^2] <_{LU} [\hat{b}, \hat{b}] <_{LU} [0, 0]$, for all $i \in I_n$ and $r \geq n_1$. (35)
From (35), we obtain $(\hat{b}, \hat{\nu}) \in K_{\hat{c}^{(s_r)}}$. Therefore, due to the fact that $(\xi(\hat{c}^{(s_r)}), \hat{\omega}(\hat{c}^{(s_r)}))$ is a solution of $(P)_{\hat{c}^{(s_r)}}$, we obtain
$\xi(\hat{c}^{(s_r)}) \leq \hat{b}$, for all $r \geq n_1$. (36)
Since $(\xi(\hat{c}^{(s_r)}), \hat{\omega}(\hat{c}^{(s_r)}))$ is a solution of $(P)_{\hat{c}^{(s_r)}}$, from (36) we obtain
$D_{gH}\Gamma_i(\hat{c}^{(s_r)}, \hat{\omega}(\hat{c}^{(s_r)})) <_{LU} [\hat{b}, \hat{b}] <_{LU} [0, 0]$, for all $i \in I_n$ and $r \geq n_1$. (37)
For all $i \in I_n$ and $r \geq n_1$, we have
$D_{gH}\Gamma_i(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r)}) = D_{gH}\Gamma_i(\hat{c}^{(s_r)}, \hat{\omega}(\hat{c}^{(s_r)}) + \beta_{s_r}\hat{\nu}^{(s_r-1)}) = \nabla^+\underline{\Gamma}_i(\hat{c}^{(s_r)})^T(\hat{\omega}(\hat{c}^{(s_r)}) + \beta_{s_r}\hat{\nu}^{(s_r-1)}) \vee \nabla^+\overline{\Gamma}_i(\hat{c}^{(s_r)})^T(\hat{\omega}(\hat{c}^{(s_r)}) + \beta_{s_r}\hat{\nu}^{(s_r-1)})$. (38)
Therefore, from (37) and (38), for all $i \in I_n$ and $r \geq n_1$, we obtain
$D_{gH}\Gamma_i(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r)}) \leq_{LU} [\xi(\hat{c}^{(s_r)}), \xi(\hat{c}^{(s_r)})] \oplus \beta_{s_r}D_{gH}\Gamma_i(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r-1)})$. (39)
Now, consider $\beta_{s_r}D_{gH}\Gamma_i(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r-1)})$. If $\beta_{s_r} = 0$, then we have
$\beta_{s_r}D_{gH}\Gamma_i(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r-1)}) = [0, 0]$. (40)
Therefore, from (39) and (40), we obtain:
$D_{gH}\Gamma_i(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r)}) \leq_{LU} [\xi(\hat{c}^{(s_r)}), \xi(\hat{c}^{(s_r)})]$. (41)
If $\beta_{s_r} > 0$, then using (3) we have
$\beta_{s_r}D_{gH}\Gamma_i(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r-1)}) \leq_{LU} \beta_{s_r}[\Lambda(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r-1)}), \Lambda(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r-1)})]$. (42)
Now, if $\Lambda(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r-1)}) \leq 0$, then from (39) and (42), we obtain:
$D_{gH}\Gamma_i(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r)}) \leq_{LU} [\xi(\hat{c}^{(s_r)}), \xi(\hat{c}^{(s_r)})]$. (43)
Now, if $\Lambda(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r-1)}) > 0$, then using (11) and (42), we obtain:
$\beta_{s_r}D_{gH}\Gamma_i(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r-1)}) \leq_{LU} -\mu\frac{\Lambda(\hat{c}^{(s_r)}, \hat{\omega}(\hat{c}^{(s_r)}))}{\Lambda(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r-1)})}[\Lambda(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r-1)}), \Lambda(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r-1)})] = -\mu[\Lambda(\hat{c}^{(s_r)}, \hat{\omega}(\hat{c}^{(s_r)})), \Lambda(\hat{c}^{(s_r)}, \hat{\omega}(\hat{c}^{(s_r)}))] = -\mu[\xi(\hat{c}^{(s_r)}), \xi(\hat{c}^{(s_r)})]$. (44)
Therefore, using (39) and (44), we obtain:
$D_{gH}\Gamma_i(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r)}) \leq_{LU} (1 - \mu)[\xi(\hat{c}^{(s_r)}), \xi(\hat{c}^{(s_r)})]$. (45)
Therefore, from (41), (43) and (45), for all $i \in I_n$ and $r \geq n_1$, we have
$D_{gH}\Gamma_i(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r)}) \leq_{LU} [\xi(\hat{c}^{(s_r)}), \xi(\hat{c}^{(s_r)})]$, or $D_{gH}\Gamma_i(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r)}) \leq_{LU} (1 - \mu)[\xi(\hat{c}^{(s_r)}), \xi(\hat{c}^{(s_r)})]$. (46)
Therefore, using (36), we obtain:
$D_{gH}\Gamma_i(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r)}) \leq_{LU} [\hat{b}, \hat{b}]$, or $D_{gH}\Gamma_i(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r)}) \leq_{LU} (1 - \mu)[\hat{b}, \hat{b}]$. (47)
From (34), we have $\hat{b} < 0$, and since $\mu \in (0, 1)$, therefore $\hat{b} < (1 - \mu)\hat{b} < 0$; hence, from (47), we obtain:
$D_{gH}\Gamma_i(\hat{c}^{(s_r)}, \hat{\nu}^{(s_r)}) \leq_{LU} [(1 - \mu)\hat{b}, (1 - \mu)\hat{b}]$, for all $i \in I_n$ and $r \geq n_1$, (48)
which contradicts Equation (31).
Case 2: $\limsup_{s \to \infty}\gamma_s = 0$.
This implies $\lim_{s \to \infty}\gamma_s = 0$. Let $p \in \mathbb{N}$ be fixed. Then, some $s_m \in \mathbb{N}$ exists, satisfying:
$\gamma_s < \frac{1}{2^p}$, for all $s \geq s_m$.
Therefore, for $t = \frac{1}{2^p}$, condition (25) does not hold. That is, the following fails for all $s \geq s_m$:
$\Gamma_i(\hat{c}^{(s)} + \frac{1}{2^p}\hat{\nu}^{(s)}) \ominus_{gH} \Gamma_i(\hat{c}^{(s)}) \leq_{LU} \frac{\theta}{2^p}D_{gH}\Gamma_i(\hat{c}^{(s)}, \hat{\nu}^{(s)})$, for all $i \in I_n$.
Hence, passing to the limit as $p \to \infty$ along a suitable subsequence, there exists $i \in I_n$ such that, for all sufficiently large $s$,
$D_{gH}\Gamma_i(\hat{c}^{(s)}, \hat{\nu}^{(s)}) \geq_{LU} \theta D_{gH}\Gamma_i(\hat{c}^{(s)}, \hat{\nu}^{(s)})$,
which yields a contradiction.    □
The subsequent lemma is from [32].
Lemma 5. 
Assume that each function $\Gamma_i$ $(i \in I_n)$ possesses twice continuous gH-differentiability and that its gH-gradient $\nabla_{gH}\Gamma_i$ possesses gH-Lipschitz continuity with Lipschitz constant $\tilde{C}$ on $\mathbb{R}^m$. Then, the functions $\hat{\lambda}_i$ are twice continuously differentiable and the gradients $\nabla\hat{\lambda}_i$ are Lipschitz continuous on $\mathbb{R}^m$, with Lipschitz constant $2\tilde{C}$.
The next theorem establishes that the sequence obtained from Algorithm 1 exhibits linear order convergence.
Theorem 4. 
Let $(\hat{c}^{(s)})_{s \in \mathbb{N} \cup \{0\}}$ be the sequence obtained from Algorithm 1, and assume the set
$\mathcal{W} := \{\hat{c} \in \mathbb{R}^m : \Gamma(\hat{c}) \preceq^n \Gamma(\hat{c}^{(0)})\}$
is bounded. Assume that each $\Gamma_i$ $(i \in I_n)$ possesses twice continuous gH-differentiability on $\mathbb{R}^m$, and its gH-gradient $\nabla_{gH}\Gamma_i$ $(i \in I_n)$ possesses gH-Lipschitz continuity on $\mathbb{R}^m$ with Lipschitz constant $\tilde{C} > 0$. Further, let $\nabla^2\hat{\lambda}_i \succeq a\mathbb{I}_m$ $(i \in I_n)$ for some $a > 0$, and define $\hat{\varrho} := \sqrt{4\tilde{C}/a} < 1$. Then, the sequence $(\hat{c}^{(s)})_{s \in \mathbb{N} \cup \{0\}}$ converges linearly.
Proof. 
In view of the boundedness of $\mathcal{W}$ and Theorem 3, the sequence $(\hat{c}^{(s)})_{s \in \mathbb{N} \cup \{0\}}$ obtained from Algorithm 1 converges to a critical point of Γ, say $\hat{z}$. Since the functions $\Gamma_i$ $(i \in I_n)$ possess twice continuous gH-differentiability, Lemma 5 implies that the functions $\hat{\lambda}_i$ $(i \in I_n)$ are twice continuously differentiable. Therefore, by employing the second-order Taylor expansion for each $\hat{\lambda}_i$ with $i \in I_n$ (see [30]), we obtain:
$\hat{\lambda}_i(\hat{c}^{(s+1)}) = \hat{\lambda}_i(\hat{z}) + \nabla\hat{\lambda}_i(\hat{z})^T(\hat{c}^{(s+1)} - \hat{z}) + \frac{1}{2}(\hat{c}^{(s+1)} - \hat{z})^T\nabla^2\hat{\lambda}_i(\hat{z} + \theta_1(\hat{c}^{(s+1)} - \hat{z}))(\hat{c}^{(s+1)} - \hat{z})$, (49)
for some $\theta_1 \in (0, 1)$. Since $\nabla^2\hat{\lambda}_i \succeq a\mathbb{I}_m$, it follows from (49) that:
$\frac{a}{2}\|\hat{c}^{(s+1)} - \hat{z}\|^2 \leq \hat{\lambda}_i(\hat{c}^{(s+1)}) - \hat{\lambda}_i(\hat{z}) - \nabla\hat{\lambda}_i(\hat{z})^T(\hat{c}^{(s+1)} - \hat{z})$, for all $i \in I_n$. (50)
Using (28), we have
$\hat{\lambda}_i(\hat{z}) \leq \hat{\lambda}_i(\hat{c}^{(s+1)}) \leq \hat{\lambda}_i(\hat{c}^{(s)})$, for all $i \in I_n$. (51)
Combining (50) with (51), we get
$\frac{a}{2}\|\hat{c}^{(s+1)} - \hat{z}\|^2 \leq \hat{\lambda}_i(\hat{c}^{(s)}) - \hat{\lambda}_i(\hat{z}) - \nabla\hat{\lambda}_i(\hat{z})^T(\hat{c}^{(s+1)} - \hat{z})$, for all $i \in I_n$. (52)
Utilizing the mean value theorem (see [30]) on the right-hand side of (52) leads to the following:
$\frac{a}{2}\|\hat{c}^{(s+1)} - \hat{z}\|^2 \leq \nabla\hat{\lambda}_i(\hat{z} + \theta_2(\hat{c}^{(s)} - \hat{z}))^T(\hat{c}^{(s)} - \hat{z}) - \nabla\hat{\lambda}_i(\hat{z})^T(\hat{c}^{(s)} - \hat{z}) - \nabla\hat{\lambda}_i(\hat{z})^T(\hat{c}^{(s+1)} - \hat{c}^{(s)})$, (53)
for some $\theta_2 \in (0, 1)$. Given that $\nabla_{gH}\Gamma_i$ $(i \in I_n)$ are gH-Lipschitz continuous with Lipschitz constant $\tilde{C} > 0$, Lemma 5 implies that the gradients $\nabla\hat{\lambda}_i$ $(i \in I_n)$ are Lipschitz continuous with constant $2\tilde{C}$. Hence, from (53), we obtain
$\frac{a}{2}\|\hat{c}^{(s+1)} - \hat{z}\|^2 \leq 2\tilde{C}\|\hat{c}^{(s)} - \hat{z}\|^2 + \nabla\hat{\lambda}_i(\hat{z})^T(\hat{c}^{(s)} - \hat{c}^{(s+1)})$, for all $i \in I_n$. (54)
As $\hat{z}$ is a critical point, there exists $i \in I_n$ such that the following inequality holds:
$\nabla\hat{\lambda}_i(\hat{z})^T(\hat{c}^{(s)} - \hat{c}^{(s+1)}) \leq 0$. (55)
By combining (54) and (55), we obtain
$\|\hat{c}^{(s+1)} - \hat{z}\| \leq \hat{\varrho}\|\hat{c}^{(s)} - \hat{z}\|$, (56)
where $\hat{\varrho} = \sqrt{4\tilde{C}/a}$. Since $\hat{\varrho} < 1$, it follows from inequality (56) that the sequence $(\hat{c}^{(s)})_{s \in \mathbb{N} \cup \{0\}}$ converges linearly. This completes the proof.    □
The following lemma plays a pivotal role in our subsequent discussions.
Lemma 6. 
Suppose that $\nabla_{gH}\Gamma_i$ $(i \in I_n)$ possess gH-Lipschitz continuity with Lipschitz constant $\tilde{C}$, and assume some $\kappa > 0$ exists, satisfying:
$\|\hat{\nu}^{(s)}\| \leq \kappa\|\hat{\omega}(\hat{c}^{(s)})\|$, for all $s \in \mathbb{N} \cup \{0\}$.
Then, every step length $\gamma^{(s)}$ computed in Algorithm 1 satisfies
$\gamma^{(s)} > \frac{1 - \theta}{4\tilde{C}\kappa^2}$, for all $s \in \mathbb{N} \cup \{0\}$.
Proof. 
In view of Remark 4 and the fact that $\gamma^{(s)}$ is the largest admissible step length, there exists $i \in I_n$ such that:
$\hat{\lambda}_i(\hat{c}^{(s)} + 2\gamma^{(s)}\hat{\nu}^{(s)}) - \hat{\lambda}_i(\hat{c}^{(s)}) > 2\theta\gamma^{(s)}\nabla\hat{\lambda}_i(\hat{c}^{(s)})^T\hat{\nu}^{(s)}$, for all $s \in \mathbb{N} \cup \{0\}$. (57)
By Lemma 5 and the gH-Lipschitz continuity of $\nabla_{gH}\Gamma_i$ $(i \in I_n)$ with Lipschitz constant $\tilde{C} > 0$, we obtain that $\nabla\hat{\lambda}_i$ $(i \in I_n)$ is Lipschitz continuous with Lipschitz constant $2\tilde{C}$. Consequently, the following inequality holds:
$\hat{\lambda}_i(\hat{c}^{(s)} + 2\gamma^{(s)}\hat{\nu}^{(s)}) - \hat{\lambda}_i(\hat{c}^{(s)}) \leq 2\gamma^{(s)}\nabla\hat{\lambda}_i(\hat{c}^{(s)})^T\hat{\nu}^{(s)} + 4\tilde{C}(\gamma^{(s)})^2\|\hat{\nu}^{(s)}\|^2$, for all $s \in \mathbb{N} \cup \{0\}$. (58)
As $\gamma^{(s)} > 0$ for all $s \in \mathbb{N} \cup \{0\}$, (57) and (58) yield:
$2\theta\nabla\hat{\lambda}_i(\hat{c}^{(s)})^T\hat{\nu}^{(s)} < 2\nabla\hat{\lambda}_i(\hat{c}^{(s)})^T\hat{\nu}^{(s)} + 4\tilde{C}\gamma^{(s)}\|\hat{\nu}^{(s)}\|^2$, for all $s \in \mathbb{N} \cup \{0\}$. (59)
Substituting $\|\hat{\nu}^{(s)}\| \leq \kappa\|\hat{\omega}(\hat{c}^{(s)})\|$ into (59) yields:
$\theta\nabla\hat{\lambda}_i(\hat{c}^{(s)})^T\hat{\nu}^{(s)} < \nabla\hat{\lambda}_i(\hat{c}^{(s)})^T\hat{\nu}^{(s)} + 2\tilde{C}\gamma^{(s)}\kappa^2\|\hat{\omega}(\hat{c}^{(s)})\|^2$, for all $s \in \mathbb{N} \cup \{0\}$.
Equivalently,
$-(1 - \theta)\nabla\hat{\lambda}_i(\hat{c}^{(s)})^T\hat{\nu}^{(s)} < 2\tilde{C}\gamma^{(s)}\kappa^2\|\hat{\omega}(\hat{c}^{(s)})\|^2$, for all $s \in \mathbb{N} \cup \{0\}$. (60)
Since $(\xi(\hat{c}^{(s)}), \hat{\omega}(\hat{c}^{(s)})) \in K_{\hat{c}^{(s)}}$ and $\xi(\hat{c}^{(s)}) \leq 0$, it follows that
$\nabla\hat{\lambda}_i(\hat{c}^{(s)})^T\hat{\omega}(\hat{c}^{(s)}) \leq -\frac{1}{2}\|\hat{\omega}(\hat{c}^{(s)})\|^2$, for all $s \in \mathbb{N} \cup \{0\}$. (61)
Furthermore, by Remark 3, the following inequality holds:
$\nabla\hat{\lambda}_i(\hat{c}^{(s)})^T\hat{\nu}^{(s)} \leq -\frac{1}{2}\|\hat{\omega}(\hat{c}^{(s)})\|^2$, for all $s \in \mathbb{N} \cup \{0\}$. (62)
Combining (60) and (62) and using $1 - \theta > 0$, we obtain:
$\frac{1 - \theta}{2}\|\hat{\omega}(\hat{c}^{(s)})\|^2 \leq -(1 - \theta)\nabla\hat{\lambda}_i(\hat{c}^{(s)})^T\hat{\nu}^{(s)} < 2\tilde{C}\gamma^{(s)}\kappa^2\|\hat{\omega}(\hat{c}^{(s)})\|^2$, for all $s \in \mathbb{N} \cup \{0\}$. (63)
Since $\hat{\omega}(\hat{c}^{(s)}) \neq 0$ for every $s \in \mathbb{N} \cup \{0\}$, (63) implies:
$\gamma^{(s)} > \frac{1 - \theta}{4\tilde{C}\kappa^2}$, for all $s \in \mathbb{N} \cup \{0\}$.
This completes the proof.    □
In the forthcoming theorem, the worst-case complexity of the sequence obtained from Algorithm 1 is investigated.
Theorem 5. 
Let every assumption of Theorem 4, along with the assumptions of Lemma 6, be satisfied. Let $(\hat{c}^{(s)})_{s \in \mathbb{N} \cup \{0\}}$ be the sequence obtained from Algorithm 1. If, for any $\epsilon > 0$, the algorithm requires at most $s_{\max}$ iterations to obtain an iterate $\hat{c}^{(s)}$ such that $|\xi(\hat{c}^{(s)})| < \epsilon$, then
$s_{\max} \in O(\log(1/\epsilon))$.
Proof. 
Given that the sequence $(\hat{c}^{(s)})_{s \in \mathbb{N} \cup \{0\}}$ is obtained from Algorithm 1 and the set $\mathcal{W}$ is bounded, Theorem 3 implies that the sequence converges to $\hat{z} \in \mathcal{S}$. Further, since the functions $\Gamma_i$ $(i \in I_n)$ possess twice continuous gH-differentiability, Lemma 5 implies that $\hat{\lambda}_i$ $(i \in I_n)$ also possess twice continuous differentiability. Given the boundedness of $\mathcal{W}$, some $r > 0$ exists, satisfying:
$\mathcal{W} \subseteq B(\hat{z}, r)$.
Furthermore, since $\nabla\hat{\lambda}_i$ $(i \in I_n)$ are continuous on the compact set $\overline{B}(\hat{z}, 3r) \subseteq \mathbb{R}^m$, some $L > 0$ exists, satisfying:
$\|\nabla\hat{\lambda}_i(\hat{c})\| \leq L$, for all $i \in I_n$ and all $\hat{c} \in \overline{B}(\hat{z}, 3r)$. (64)
As $\gamma^{(s)}$ represents the step size in Algorithm 1, Remark 4 implies that, for each $i \in I_n$, the following inequality holds:
$\hat{\lambda}_i(\hat{c}^{(s)} + \gamma^{(s)}\hat{\nu}^{(s)}) - \hat{\lambda}_i(\hat{c}^{(s)}) \leq \theta\gamma^{(s)}\nabla\hat{\lambda}_i(\hat{c}^{(s)})^T\hat{\nu}^{(s)}$. (65)
Since $\hat{c}^{(s+1)} = \hat{c}^{(s)} + \gamma^{(s)}\hat{\nu}^{(s)}$, using Remark 3 we obtain:
$\hat{\lambda}_i(\hat{c}^{(s+1)}) - \hat{\lambda}_i(\hat{c}^{(s)}) \leq 2\theta(1 - \mu)\gamma^{(s)}\xi(\hat{c}^{(s)})$, for all $i \in I_n$, (66)
where $\mu \in (0, 1)$. Equivalently,
$-2\theta(1 - \mu)\gamma^{(s)}\xi(\hat{c}^{(s)}) \leq \hat{\lambda}_i(\hat{c}^{(s)}) - \hat{\lambda}_i(\hat{c}^{(s+1)})$. (67)
Since $\xi(\hat{c}^{(s)}) \leq 0$ for all $s \in \mathbb{N} \cup \{0\}$, it follows from (67) that:
$2\theta(1 - \mu)\gamma^{(s)}|\xi(\hat{c}^{(s)})| \leq \hat{\lambda}_i(\hat{c}^{(s)}) - \hat{\lambda}_i(\hat{c}^{(s+1)})$. (68)
By applying the mean value theorem (see [30]) on the right-hand side of (68), we obtain
$2\theta(1 - \mu)\gamma^{(s)}|\xi(\hat{c}^{(s)})| \leq \nabla\hat{\lambda}_i(\hat{c}^{(s+1)} + \theta_3(\hat{c}^{(s)} - \hat{c}^{(s+1)}))^T(\hat{c}^{(s)} - \hat{c}^{(s+1)})$, (69)
for some $\theta_3 \in (0, 1)$. Since $\hat{c}^{(s)}, \hat{c}^{(s+1)} \in \mathcal{W} \subseteq B(\hat{z}, r)$, it follows that
$\hat{c}^{(s+1)} + \theta_3(\hat{c}^{(s)} - \hat{c}^{(s+1)}) \in \overline{B}(\hat{z}, 3r)$. (70)
Utilizing (69), (64) and (70), we derive the following:
$2\theta(1 - \mu)\gamma^{(s)}|\xi(\hat{c}^{(s)})| \leq L\|\hat{c}^{(s+1)} - \hat{c}^{(s)}\| \leq L(\|\hat{c}^{(s+1)} - \hat{z}\| + \|\hat{c}^{(s)} - \hat{z}\|)$. (71)
Moreover, by invoking Theorem 4 and (71), we arrive at the following inequality:
$2\theta(1 - \mu)\gamma^{(s)}|\xi(\hat{c}^{(s)})| \leq L\hat{\varrho}^s(\hat{\varrho} + 1)\|\hat{c}^{(0)} - \hat{z}\|$. (72)
Now, applying Lemma 6 to (72), it follows that:
$|\xi(\hat{c}^{(s)})| < \frac{2\tilde{C}\kappa^2}{1 - \theta} \cdot \frac{L\hat{\varrho}^s(\hat{\varrho} + 1)}{\theta(1 - \mu)}\|\hat{c}^{(0)} - \hat{z}\|$. (73)
Assume now that for the first $s_\epsilon$ iterations the condition $|\xi(\hat{c}^{(s)})| \geq \epsilon$ holds. Then, using (73), we obtain:
$\epsilon < \frac{2\tilde{C}\kappa^2}{1 - \theta} \cdot \frac{L\hat{\varrho}^{s_\epsilon}(\hat{\varrho} + 1)}{\theta(1 - \mu)}\|\hat{c}^{(0)} - \hat{z}\|$, that is, $\frac{\theta(1 - \mu)(1 - \theta)}{2\tilde{C}\kappa^2 L(\hat{\varrho} + 1)\|\hat{c}^{(0)} - \hat{z}\|}\,\epsilon < \hat{\varrho}^{s_\epsilon}$.
By taking the logarithm of both sides of the above inequality, we obtain:
$s_\epsilon\log\frac{1}{\hat{\varrho}} < \log\left(\frac{2\tilde{C}\kappa^2 L(\hat{\varrho} + 1)\|\hat{c}^{(0)} - \hat{z}\|}{\theta(1 - \mu)(1 - \theta)} \cdot \frac{1}{\epsilon}\right)$, that is, $s_\epsilon < \frac{1}{\log(1/\hat{\varrho})}\log\left(\frac{2\tilde{C}\kappa^2 L(\hat{\varrho} + 1)\|\hat{c}^{(0)} - \hat{z}\|}{\theta(1 - \mu)(1 - \theta)} \cdot \frac{1}{\epsilon}\right)$.
This implies
$s_\epsilon \in O(\log(1/\epsilon))$.
This completes the proof.    □

4. Numerical Experiments

An example involving a locally convex IVMOP is presented below to illustrate the significance of Algorithm 1.
Example 1. 
Examine the following IVMOP:
(Q1) $\quad$ Minimize $(\Gamma_1(\hat{c}_1, \hat{c}_2, \hat{c}_3), \Gamma_2(\hat{c}_1, \hat{c}_2, \hat{c}_3))$, subject to $(\hat{c}_1, \hat{c}_2, \hat{c}_3) \in \mathbb{R}^3$,
where $\Gamma_i : \mathbb{R}^3 \to J(\mathbb{R})$ $(i = 1, 2)$ are defined as follows:
$\Gamma_1(\hat{c}_1, \hat{c}_2, \hat{c}_3) := [\hat{c}_1^3 + \hat{c}_2^2 + \hat{c}_3^3, \: \hat{c}_1\hat{c}_2^2 + e^{\hat{c}_2} + \hat{c}_3]$, $\qquad \Gamma_2(\hat{c}_1, \hat{c}_2, \hat{c}_3) := [(\hat{c}_1 - 1)^2 + (\hat{c}_2 - 1)^2 + (\hat{c}_3 - 1)^2, \: (\hat{c}_1 + 1)^2 + (\hat{c}_2 + 1)^2 + (\hat{c}_3 + 1)^2]$.
It can be verified that $(1, 1, 1)$ is a critical point of Γ in (Q1). Moreover, since $\Gamma_1$ and $\Gamma_2$ are locally convex at $(1, 1, 1)$, Lemma 2 implies that $(1, 1, 1)$ is a locally weak effective solution of (Q1). We solve (Q1) using Algorithm 1, starting from the initial point $(14, 17, 11)$ with the termination condition $|\xi(\hat{c}^{(s)})| < \epsilon = 0.0001$. The computational results of Algorithm 1 are presented in Table 1.
The result shown at iteration $s = 16$ of Table 1 confirms that the sequence converges to $(1, 1, 1)$, which is a locally weak effective solution of (Q1).
It is important to highlight that a locally weak effective solution of an IVMOP is not necessarily an isolated point. Nevertheless, when Algorithm 1 is executed from a specific starting point, it may converge to one such solution. To generate a set of approximate locally weak effective solutions in Example 1, a multi-start strategy is adopted. Specifically, 50 uniformly distributed random initial points are generated using MATLAB's rand function. Algorithm 1 is then independently executed from each of these initial points, with the termination condition defined as $|\xi(\hat{c}^{(s)})| < \epsilon = 0.0001$. The iterative sequences generated from these initializations are visualized in Figure 1.
Remark 6. 
It is important to note that, in (Q1), $\Gamma_1$ is non-convex, even though it possesses twice continuous gH-differentiability. As a result, Newton's method for IVMOPs proposed in [14], which requires a positive definite gH-Hessian, is not applicable to solve (Q1). Nevertheless, Example 1 illustrates that Algorithm 1 solves (Q1) effectively.
It is significant to note that the Newton and quasi-Newton methods proposed in [14,15] are applicable to a specific class of IVMOPs, where the objective functions possess twice continuous gH-differentiability. In contrast, Algorithm 1 only needs $\Gamma_i$ $(i \in I_n)$ to be continuously gH-differentiable. Consequently, Algorithm 1 can be applied to a broader class of IVMOPs than the methods proposed in [14,15]. This is demonstrated in the following example.
Example 2. 
Examine the following IVMOP:
(Q2) $\quad$ Minimize $(\Gamma_1(\hat{c}_1, \hat{c}_2), \Gamma_2(\hat{c}_1, \hat{c}_2))$, subject to $(\hat{c}_1, \hat{c}_2) \in \mathbb{R}^2$,
where $\Gamma_1 : \mathbb{R}^2 \to J(\mathbb{R})$ and $\Gamma_2 : \mathbb{R}^2 \to J(\mathbb{R})$ are defined as follows:
$\Gamma_1(\hat{c}_1, \hat{c}_2) := [\hat{c}_2|\hat{c}_2| + (\hat{c}_1 + 1)^2, \: (\hat{c}_1 + 2)^2 + (\hat{c}_2 - 1)^2]$,
and
$\Gamma_2(\hat{c}_1, \hat{c}_2) := [(\hat{c}_1 - 3)^2 + \sin^2\hat{c}_2, \: \cos^2\hat{c}_1 + (\hat{c}_2 - 4)^2]$.
Evidently, $\Gamma_1$ is continuously gH-differentiable; however, it does not possess twice continuous gH-differentiability. Consequently, Newton's method introduced in [14] is not applicable for solving (Q2). Nevertheless, (Q2) can be solved by employing Algorithm 1, implemented in MATLAB. The initial point and the stopping criterion are set to $(5, 4)$ and $|\xi(\hat{c}^{(s)})| < \epsilon = 0.0001$, respectively. The computational results obtained using Algorithm 1 are presented in Table 2.
As indicated at iteration $s = 36$ of Table 2, Algorithm 1 generates a sequence that converges to the approximate critical point $(1.0282, 0.005612)$ of the objective function in (Q2).
In the next example we investigate a large-scale IVMOP.
Example 3. 
Examine the following IVMOP:
( Q 3 ) Minimize Γ 1 ( c ^ 1 , , c ^ m ) , Γ 2 ( c ^ 1 , , c ^ m ) , subject to ( c ^ 1 , , c ^ m ) R m ,
where Γ i : R m J ( R ) ( i = 1 , 2 ) is defined as follows:
Γ 1 ( c ^ 1 , , c ^ m ) : = i = 1 m c ^ i sin c ^ i 2 i = 1 m c ^ i + cos c ^ i 5 , Γ 2 ( c ^ 1 , , c ^ m ) : = i = 1 m log ( 1 + c ^ i ) 3 + c ^ i 4 i = 1 m c ^ i + 1 2 .
A random initial point, generated using MATLAB's rand(n,1) function, is used to initialize Algorithm 1. The termination criterion is defined as $|\xi(\hat{c}^{(s)})| < \epsilon = 0.0001$. Table 3 reports the iteration counts and the associated computation times required by Algorithm 1 to solve (Q3) for different values of $n$.

5. Conclusions and Future Research Directions

In this article, we investigated a class of IVMOPs. We defined the conjugate-descent (CD) direction to introduce a CD-type conjugate direction algorithm for IVMOPs. We performed the convergence analysis and established the linear rate of convergence of the sequence obtained from the algorithm, under appropriate assumptions. Moreover, the worst-case complexity of the proposed algorithm has been investigated.
The results established in this paper generalize several significant results existing in the literature. In particular, the results derived in this paper generalize the corresponding results of Pérez and Prudente [28] on the CD conjugate direction algorithm from MOPs to a more general class of optimization problems, namely, IVMOPs. Moreover, if the conjugate parameter in the proposed algorithm is set to zero, and if every component of the objective function of the IVMOP is a real-valued function rather than an interval-valued function, then the proposed algorithm coincides with the steepest descent-type method for MOPs introduced by Fliege and Svaiter [29]. We have conducted numerical experiments to illustrate the fact that the proposed algorithm is applicable to a broader class of optimization problems as compared to the methods existing in the literature (see Upadhyay et al. [14,15]).
The findings of this research open several avenues for future exploration. On one hand, the results of this paper could be further generalized to the class of nonsmooth IVMOPs. On the other hand, given the advantages of developing optimization algorithms in the framework of Riemannian manifolds (see [33]), the results of this paper could be generalized to the Riemannian manifold setting.

Author Contributions

Conceptualization, B.B.U. and R.K.P.; methodology, B.B.U., R.K.P.; validation, B.B.U., R.K.P., S.P., and I.M.S.-M.; formal analysis, B.B.U., R.K.P., and S.P.; writing-review and editing, B.B.U., R.K.P., and S.P.; supervision, B.B.U.

Funding

The first author gratefully acknowledges financial support from the University Grants Commission, New Delhi, India (UGC-Ref. No.: 1213/(CSIR-UGC NET DEC 2017)). The third author extends gratitude to the Ministry of Education, Government of India, for their financial support through the Prime Minister Research Fellowship (PMRF), granted under PMRF ID-2703573.

Data Availability Statement

The authors confirm that no data, text, or theories from others are used in this paper without proper acknowledgement.

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive suggestions, which have substantially improved the paper in its present form.

Conflicts of Interest

The authors confirm that there are no actual or potential conflicts of interest related to this article.

References

1. Miettinen, K. Nonlinear Multiobjective Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1999.
2. Diao, X.; Li, H.; Zeng, S.; Tam, V.W.; Guo, H. A Pareto multi-objective optimization approach for solving time-cost-quality tradeoff problems. Technol. Econ. Dev. Econ. 2011, 17, 22–41.
3. Guillén-Gosálbez, G. A novel MILP-based objective reduction method for multi-objective optimization: Application to environmental problems. Comput. Chem. Eng. 2011, 35, 1469–1477.
4. Bento, G.C.; Melo, J.G. Subgradient method for convex feasibility on Riemannian manifolds. J. Optim. Theory Appl. 2012, 152, 773–785.
5. Upadhyay, B.B.; Stancu-Minasian, I.M.; Mishra, P.; Mohapatra, R.N. On generalized vector variational inequalities and nonsmooth vector optimization problems on Hadamard manifolds involving geodesic approximate convexity. Adv. Nonlinear Var. Inequal. 2022, 25, 1–25.
6. Upadhyay, B.B.; Singh, S.K.; Stancu-Minasian, I.M.; Rusu-Stancu, A.M. Robust optimality and duality for nonsmooth multiobjective programming problems with vanishing constraints under data uncertainty. Algorithms 2024, 17, 482.
7. Upadhyay, B.B.; Poddar, S.; Yao, J.C.; Zhao, X. Inexact proximal point method with a Bregman regularization for quasiconvex multiobjective optimization problems via limiting subdifferentials. Ann. Oper. Res. 2025, 345, 417–466.
8. Qiu, D.; Jin, X.; Xiang, L. On solving interval-valued optimization problems with TOPSIS decision model. Eng. Lett. 2022, 30, 1101–1106.
9. Lanbaran, N.M.; Celik, E.; Yiğider, M. Evaluation of investment opportunities with interval-valued fuzzy TOPSIS method. Appl. Math. Nonlinear Sci. 2020, 5, 461–474.
10. Beer, M.; Ferson, S.; Kreinovich, V. Imprecise probabilities in engineering analyses. Mech. Syst. Signal Process. 2013, 37, 4–29.
11. Chaudhuri, A.; Lam, R.; Willcox, K. Multifidelity uncertainty propagation via adaptive surrogates in coupled multidisciplinary systems. AIAA J. 2018, 56, 235–249.
12. Zhang, J.; Li, S. The portfolio selection problem with random interval-valued return rates. Int. J. Innov. Comput. Inf. Control 2009, 5, 2847–2856.
13. Kumar, P.; Bhurjee, A.K. Multi-objective enhanced interval optimization problem. Ann. Oper. Res. 2022, 311, 1035–1050.
14. Upadhyay, B.B.; Pandey, R.K.; Liao, S. Newton's method for interval-valued multiobjective optimization problem. J. Ind. Manag. Optim. 2024, 20, 1633–1661.
15. Upadhyay, B.B.; Pandey, R.K.; Pan, J.; Zeng, S. Quasi-Newton algorithms for solving interval-valued multiobjective optimization problems by using their certain equivalence. J. Comput. Appl. Math. 2024, 438, 115550.
16. Upadhyay, B.B.; Pandey, R.K.; Zeng, S. A generalization of generalized Hukuhara Newton's method for interval-valued multiobjective optimization problems. Fuzzy Sets Syst. 2024, 492, 109066.
17. Zhang, Z.; Wang, X.; Lu, J. Multi-objective immune genetic algorithm solving nonlinear interval-valued programming. Eng. Appl. Artif. Intell. 2018, 67, 235–245.
18. Upadhyay, B.B.; Pandey, R.K.; Zeng, S.; Singh, S.K. On conjugate direction-type method for interval-valued multiobjective quadratic optimization problems. Numer. Algorithms 2024.
19. Pandey, V.; Bekele, A.; Ahmed, G.M.S.; Kanu, N.J. An application of conjugate gradient technique for determination of thermal conductivity as an inverse engineering problem. Mater. Today Proc. 2021, 47, 3082–3087.
20. Sarkar, T.; Rao, S. The application of the conjugate gradient method for the solution of electromagnetic scattering from arbitrarily oriented wire antennas. IEEE Trans. Antennas Propag. 1984, 32, 398–403.
21. Frank, M.S.; Balanis, C.A. A conjugate direction method for geophysical inversion problems. IEEE Trans. Geosci. Remote Sens. 1987, 25, 691–701.
22. Hestenes, M.R.; Stiefel, E. Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 1952, 49, 409–436.
23. Fletcher, R.; Reeves, C.M. Function minimization by conjugate gradients. Comput. J. 1964, 7, 149–154.
24. Polak, E.; Ribière, G. Note sur la convergence de méthodes de directions conjuguées. Rev. Fr. Inform. Recherche Opér. 1969, 3, 35–43.
25. Khoda, K.M.; Liu, Y.; Storey, C. Generalized Polak–Ribière algorithm. J. Optim. Theory Appl. 1992, 75, 345–354.
26. Zhang, L.; Zhou, W.J.; Li, D.H. A descent modified Polak–Ribière–Polyak conjugate gradient method and its global convergence. IMA J. Numer. Anal. 2006, 26, 629–640.
27. Fletcher, R. Practical Methods of Optimization, Vol. 1: Unconstrained Optimization; John Wiley: New York, NY, USA, 1980.
28. Pérez, L.R.; Prudente, L.F. Nonlinear conjugate gradient methods for vector optimization. SIAM J. Optim. 2018, 28, 2690–2720.
29. Fliege, J.; Svaiter, B.F. Steepest descent methods for multicriteria optimization. Math. Methods Oper. Res. 2000, 51, 479–494.
30. Apostol, T.M. Multi-Variable Calculus and Linear Algebra with Applications to Differential Equations and Probability; Wiley: New York, NY, USA, 1969.
31. Stefanini, L.; Bede, B. Generalized Hukuhara differentiability of interval-valued functions and interval differential equations. Nonlinear Anal. 2009, 71, 1311–1328.
32. Pandey, R.K.; Upadhyay, B.B.; Poddar, S.; Stancu-Minasian, I.M. Hestenes–Stiefel-type conjugate direction algorithm for interval-valued multiobjective optimization problems. Algorithms 2025, 18, 381.
33. Upadhyay, B.B.; Poddar, S.; Ferreira, O.P.; Zhao, X.; Yao, J.C. An inexact proximal point method with quasi-distance for quasiconvex multiobjective optimization problems on Riemannian manifolds. Numer. Algorithms 2025, 1–51.
Figure 1. Approximate locally weak effective solutions of (Q1) obtained from 50 uniformly random initial points.
Table 1. Computational results obtained from Algorithm 1 for solving (Q1).

s | $\hat{c}^{(s)}$ | $\hat{\nu}^{(s)}$ | $\gamma^{(s)}$ | $|\xi(\hat{c}^{(s)})|$
0 | (14, 17, 11) | (-26, -32, -20) | 0.03125 | 1050
1 | (13.188, 16, 10.375) | (-24.375, -30, -18.75) | 0.03125 | 922.85
2 | (12.426, 15.063, 9.7891) | (-22.852, -28.125, -17.578) | 0.03125 | 811.1
3 | (11.712, 14.184, 9.2397) | (-21.423, -26.367, -16.479) | 0.03125 | 712.88
4 | (11.042, 13.36, 8.7248) | (-20.084, -24.719, -15.45) | 0.03125 | 626.56
5 | (10.415, 12.587, 8.242) | (-18.829, -23.174, -14.484) | 0.03125 | 550.68
6 | (9.8261, 11.863, 7.7893) | (-17.652, -21.726, -13.579) | 0.0625 | 484
7 | (8.7229, 10.505, 6.9407) | (-15.446, -19.01, -11.881) | 0.0625 | 370.56
8 | (7.7575, 9.3169, 6.1981) | (-13.515, -16.634, -10.396) | 0.0625 | 283.71
9 | (6.9128, 8.2773, 5.5483) | (-11.826, -14.555, -9.0967) | 0.0625 | 217.22
10 | (6.1737, 7.3677, 4.9798) | (-10.347, -12.735, -7.9596) | 0.0625 | 166.31
11 | (5.527, 6.5717, 4.4823) | (-9.054, -11.143, -6.9646) | 0.0625 | 127.33
12 | (4.9611, 5.8752, 4.047) | (-7.9223, -9.7505, -6.094) | 0.125 | 97.486
13 | (3.9708, 4.6564, 3.2853) | (-5.9417, -7.3129, -4.5705) | 0.125 | 54.836
14 | (3.2281, 3.7423, 2.714) | (-4.4563, -5.4846, -3.4279) | 0.25 | 30.845
15 | (2.1141, 2.3712, 1.857) | (-2.2281, -2.7423, -1.714) | 0.5 | 7.7113
16 | (1, 1, 1) | (-0.0005, -0.0007, -0.0008) | 4.5475e-13 | 1e-06
Table 2. Computational results obtained from Algorithm 1 for solving (Q2).

s | $\hat{c}^{(s)}$ | $\hat{\nu}^{(s)}$ | $\gamma^{(s)}$ | $|\xi(\hat{c}^{(s)})|$
0 | (5, 4) | (-3, -0.80099) | 0.125 | 4.5
1 | (4.625, 3.8999) | (-2.625, -0.801) | 0.125 | 3.4453
2 | (4.2969, 3.7998) | (-2.2969, -0.80078) | 0.125 | 2.6378
3 | (4.0098, 3.6997) | (-2.0098, -0.80099) | 0.125 | 2.0196
4 | (3.7585, 3.5995) | (-1.7585, -0.80099) | 0.125 | 1.5462
5 | (3.5387, 3.4994) | (-1.5387, -0.80078) | 0.125 | 1.1838
6 | (3.3464, 3.3993) | (-1.3464, -0.80099) | 0.125 | 0.90638
7 | (3.1781, 3.2992) | (-1.1781, -0.80099) | 0.125 | 0.69394
8 | (3.0308, 3.1991) | (-1.0308, -0.4007) | 0.25 | 0.5313
9 | (2.7731, 3.0989) | (-0.88656, -0.4007) | 0.25 | 0.39299
10 | (2.5515, 2.9987) | (-0.77574, -0.4007) | 0.25 | 0.30089
11 | (2.3575, 2.8985) | (-0.67877, -0.4007) | 0.25 | 0.23037
12 | (2.1879, 2.7984) | (-0.59393, -0.40057) | 0.25 | 0.17637
13 | (2.0394, 2.6982) | (-0.51969, -0.40057) | 0.25 | 0.13504
14 | (1.9094, 2.5981) | (-0.45473, -0.40057) | 0.25 | 0.10339
15 | (1.7958, 2.4979) | (-0.39789, -0.4007) | 0.25 | 0.079155
16 | (1.6963, 2.3978) | (-0.34815, -0.40057) | 0.25 | 0.060603
17 | (1.6093, 2.2976) | (-0.30463, -0.40057) | 0.25 | 0.046399
18 | (1.5331, 2.1975) | (-0.26655, -0.40057) | 0.25 | 0.035524
19 | (1.4665, 2.0973) | (-0.23323, -0.40064) | 0.25 | 0.027198
20 | (1.4082, 1.9972) | (-0.20408, -0.4007) | 0.25 | 0.020823
21 | (1.3571, 1.897) | (-0.17857, -0.4007) | 0.25 | 0.015942
22 | (1.3125, 1.7968) | (-0.15625, -0.4007) | 0.25 | 0.012206
23 | (1.2734, 1.6967) | (-0.13671, -0.40044) | 0.25 | 0.0093452
24 | (1.2392, 1.5965) | (-0.11963, -0.4007) | 0.25 | 0.0071544
25 | (1.2093, 1.4964) | (-0.10468, -0.4007) | 0.25 | 0.0054774
26 | (1.1402, 1.1959) | (-0.070122, -0.40031) | 0.25 | 0.0024582
27 | (1.1227, 1.0958) | (-0.061357, -0.40031) | 0.25 | 0.001882
28 | (1.1074, 0.99575) | (-0.053688, -0.40031) | 0.25 | 0.0014409
29 | (1.0939, 0.89567) | (-0.046987, -0.4006) | 0.25 | 0.0011028
30 | (1.0629, 0.59522) | (-0.031483, -0.4006) | 0.25 | 0.0004945
31 | (1.0551, 0.49507) | (-0.027557, -0.40071) | 0.25 | 0.0003783
32 | (1.0482, 0.3949) | (-0.02409, -0.40031) | 0.25 | 0.00028989
33 | (1.0421, 0.29482) | (-0.021102, -0.40057) | 0.25 | 0.00022154
34 | (1.0369, 0.19468) | (-0.018469, -0.40059) | 0.25 | 0.00016944
35 | (1.0323, 0.094529) | (-0.016166, -0.40056) | 0.25 | 0.00012954
36 | (1.0282, 0.005612) | (-0.01415, -0.39944) | 0.25 | 9.8987e-05
Table 3. Computational results of Algorithm 1 for solving (Q3).

n | Number of iterations | Computation time (in seconds)
100 | 2183 | 224.9
130 | 2611 | 3225
160 | 4518 | 6337
190 | 3422 | 5442
220 | 3667 | 6750
250 | 3896 | 7889
280 | 2363 | 5705
310 | 2460 | 7059
340 | 3549 | 1062
370 | 4325 | 1575.2
400 | 3004 | 1314.6