Hestenes-Stiefel-Type Conjugate Direction Algorithm for Interval-Valued Multiobjective Optimization Problems


Submitted: 17 May 2025. Posted: 20 May 2025.


Abstract
This article investigates a class of interval-valued multiobjective optimization problems (IVMOPs, in short). We define the Hestenes-Stiefel (HS, in short) direction for the objective function of IVMOPs and establish that it has a descent property at noncritical points. An Armijo-like line search is employed to determine an appropriate step size. We present an HS-type conjugate direction algorithm for IVMOPs and establish the convergence of the sequence generated by the algorithm. Moreover, we furnish several numerical examples, including convex, locally convex, and large-scale IVMOPs, to demonstrate the effectiveness of our proposed algorithm and solve them by employing MATLAB R2024a. To the best of our knowledge, the HS-type conjugate direction method has not yet been explored for the class of IVMOPs.

1. Introduction

Multiobjective optimization problems (MOPs, in short) involve the simultaneous optimization of two or more conflicting objective functions. Vilfredo Pareto [1] introduced the concept of Pareto optimality in the context of economic systems. A solution is called Pareto optimal or efficient, if none of the objective functions can be improved without deteriorating some of the other objective values (see, [2]). MOPs arise in scenarios where trade-offs are required, such as balancing cost and quality in business operations or improving efficiency while reducing environmental impact in engineering (see, [3,4]). As a consequence, various techniques and algorithms have been proposed to solve MOPs in different frameworks (see, [5,6,7]). For a more detailed discussion on MOPs, we refer to [2,8] and the references cited therein.
In many real-world problems arising in engineering, science, and related fields, we often encounter data that are imprecise or uncertain. This uncertainty can arise from various factors such as unknown future developments, measurement or manufacturing errors, or incomplete information in model development (see, [9,10]). In several optimization problems, the presence of imprecise data or uncertainty in the parameters or the objective functions is modeled as intervals. These optimization problems are termed interval-valued optimization problems (IVOPs, in short). IVOPs have significant applications in various fields, including management, economics and engineering (see, [11,12,13]). The foundational work on IVOPs is attributed to Ishibuchi and Tanaka [14], who investigated IVOPs by transforming them into corresponding deterministic MOPs. Wu [15] derived optimality conditions for a class of constrained IVOPs, employing the notion of Hukuhara differentiability and assuming the convexity hypothesis on the objective and constraint functions. Moreover, Wu [16] developed Karush-Kuhn-Tucker-type optimality conditions for IVOPs and derived strong duality theorems that connect the primal problems with their associated dual problems. Bhurjee and Panda [17] explored IVOPs by defining interval-valued functions in parametric forms. More recently, Roy et al. [18] proposed a gradient based descent line search technique employing the notion of generalized Hukuhara (gH, in short) differentiability to solve IVOPs.
In the theory of mathematical optimization, if the uncertainties involved in the objective functions of MOPs are represented as intervals, the resulting problems are referred to as interval-valued multiobjective optimization problems (IVMOPs, in short). IVMOPs frequently arise in diverse fields such as transportation, economics, and business administration [19,20]. As a result, several researchers have developed various methods to solve IVMOPs. For instance, Kumar and Bhurjee [21] studied IVMOPs by transforming them into their corresponding deterministic MOPs and establishing the relationships between the solutions of IVMOPs and MOPs. Upadhyay et al. [22] introduced Newton’s method for IVMOPs and established the quadratic convergence of the sequence generated by Newton’s method under suitable assumptions. Subsequently, Upadhyay et al. [23] developed quasi-Newton methods for IVMOPs and demonstrated their efficacy in solving both convex and non-convex IVMOPs. For a more comprehensive and updated survey on IVMOPs, see [24,25,26,27] and the references cited therein.
It is well-known that the conjugate direction method is a powerful optimization technique that can be widely employed for solving systems of linear equations and optimization problems (see, [28,29]). It has been observed that conjugate direction methods are effective for several real-life optimization problems emerging in different fields of modern research, such as electromagnetic scattering [30], inverse engineering [31], and geophysical inversion [32]. The conjugate direction method was first introduced by Hestenes and Stiefel [28], who developed the conjugate gradient method to solve a system of linear equations. Subsequently, Pérez and Prudente [29] introduced the HS-type conjugate direction algorithm for MOPs by employing an inexact line search. Wang et al. [33] introduced an HS-type conjugate direction algorithm for MOPs without employing line search techniques, and established the global convergence of the proposed method under suitable conditions. Recently, based on the memoryless Broyden-Fletcher-Goldfarb-Shanno update, Khoshsimaye-Bargard and Ashrafi [34] presented a convex hybridization of the Hestenes-Stiefel and Dai-Yuan conjugate parameters. From the above discussion, it is evident that HS-type conjugate direction methods have been developed to solve single-objective problems as well as MOPs. However, there is no research paper available in the literature that has explored the HS-type conjugate direction method for IVMOPs. The aim of this article is to fill the aforementioned research gaps by developing the HS-type conjugate direction method for a class of IVMOPs.
Motivated by the works of [28,29,33], in this paper, we investigate a class of IVMOPs and define the HS-direction for the objective function of IVMOPs. A descent direction property of the HS-direction is established at noncritical points. To determine an appropriate step size, we employ an Armijo-like line search. Moreover, an HS-type conjugate direction algorithm for IVMOPs is presented, and the convergence of this algorithm is established. Finally, the efficiency of the proposed method is demonstrated by solving various numerical problems employing MATLAB R2024a.
The primary contributions and novel aspects of this article are threefold. Firstly, the results presented in this paper generalize several significant results from the existing literature. Specifically, we generalize the results established by Pérez and Prudente [29] on the HS-type method from MOPs to a more general class of optimization problems, namely, IVMOPs. Secondly, if the conjugate parameter is set to zero and if every component of the objective function of the IVMOP is a real-valued function rather than an interval-valued function, then the proposed algorithm reduces to the steepest descent algorithm for MOPs, as introduced by Fliege and Svaiter [5]. Thirdly, it is imperative to note that the proposed HS-type algorithm is applicable to any IVMOP whose objective function is continuously gH-differentiable. However, Newton's and quasi-Newton methods proposed by Upadhyay et al. [22,23] require the objective functions of the IVMOPs to be twice continuously gH-differentiable. In view of this fact, our proposed algorithm can be applied to a broader class of optimization problems compared to the algorithms introduced by Upadhyay et al. [22,23].
The rest of the paper is structured as follows. In Section 2, we discuss some mathematical preliminaries that will be employed in the sequel. Section 3 presents an HS-type conjugate direction algorithm for IVMOPs, along with a convergence analysis of the sequence generated by the algorithm. In Section 4, we demonstrate the efficiency of the proposed algorithm by solving several numerical examples via MATLAB R2024a. Finally, in Section 5, we provide our conclusions as well as future research directions.

2. Preliminaries

Throughout this article, the symbol $\mathbb{N}$ denotes the set of all natural numbers. For $n \in \mathbb{N}$, the symbol $\mathbb{R}^n$ refers to the $n$-dimensional Euclidean space. The symbol $\mathbb{R}_{-}$ refers to the collection of all negative real numbers. The symbol $\emptyset$ is employed to denote the empty set. For any non-empty set $X$ and $m \in \mathbb{N}$, the notation $X^m$ represents the Cartesian product defined as
$$X^m := X \times \cdots \times X \quad (m \text{ times}).$$
Let $y, u \in \mathbb{R}^n$. Then the symbol $\langle y, u \rangle$ is defined as
$$\langle y, u \rangle := \sum_{k=1}^{n} y_k u_k.$$
For $u \in \mathbb{R}^n$, the symbol $\|u\|$ is defined as
$$\|u\| := \sqrt{\langle u, u \rangle}.$$
Let $n, m \in \mathbb{N}$. Then, the notations $I_n$ and $I_m$ are used to represent the following sets:
$$I_n := \{1, \ldots, n\}, \qquad I_m := \{1, \ldots, m\}.$$
Let $y, u \in \mathbb{R}^n$. The following notations are employed throughout this article:
$$y \leqq u \iff y_k \leq u_k \ \text{for all } k \in I_n, \qquad y < u \iff y_k < u_k \ \text{for all } k \in I_n, \qquad y \leq u \iff y \leqq u \ \text{and } y \neq u.$$
For $u \in \mathbb{R}^n$ and $G : \mathbb{R}^n \to \mathbb{R}$, we define
$$\frac{\partial^{+} G}{\partial u_k}(u) := \lim_{\beta \to 0^{+}} \frac{G(u_1, \ldots, u_k + \beta, \ldots, u_n) - G(u_1, \ldots, u_k, \ldots, u_n)}{\beta}, \tag{1}$$
and
$$\frac{\partial^{-} G}{\partial u_k}(u) := \lim_{\beta \to 0^{-}} \frac{G(u_1, \ldots, u_k + \beta, \ldots, u_n) - G(u_1, \ldots, u_k, \ldots, u_n)}{\beta}, \tag{2}$$
which denote the one-sided right and left $k$-th partial derivatives of $G$ at the point $u$, respectively, provided that the limits in (1) and (2) exist.
If for every $k \in I_n$ and $\bar{u} \in \mathbb{R}^n$, $\frac{\partial^{+} G}{\partial u_k}(\bar{u})$ and $\frac{\partial^{-} G}{\partial u_k}(\bar{u})$ exist, then we define
$$\nabla^{-} G(\bar{u}) := \left( \frac{\partial^{-} G}{\partial u_1}(\bar{u}), \ldots, \frac{\partial^{-} G}{\partial u_n}(\bar{u}) \right), \qquad \nabla^{+} G(\bar{u}) := \left( \frac{\partial^{+} G}{\partial u_1}(\bar{u}), \ldots, \frac{\partial^{+} G}{\partial u_n}(\bar{u}) \right).$$
The symbols $\mathcal{C}(\mathbb{R})$ and $\mathcal{C}_{-}(\mathbb{R})$ are used to denote the following sets:
$$\mathcal{C}(\mathbb{R}) := \left\{ [\underline{b}, \overline{b}] : \underline{b}, \overline{b} \in \mathbb{R},\ \underline{b} \leq \overline{b} \right\}, \qquad \mathcal{C}_{-}(\mathbb{R}) := \left\{ [\underline{b}, \overline{b}] : \underline{b}, \overline{b} \in \mathbb{R},\ \underline{b} \leq \overline{b},\ \overline{b} < 0 \right\}.$$
Let $p, q \in \mathbb{R}$. The symbol $p \vee q$ represents the following interval:
$$p \vee q := \left[ \min\{p, q\},\ \max\{p, q\} \right].$$
The interval $X := [\underline{b}, \overline{b}] \in \mathcal{C}(\mathbb{R})$ is referred to as a degenerate interval if and only if $\underline{b} = \overline{b}$.
Let $X := [\underline{p}, \overline{p}]$, $Z := [\underline{q}, \overline{q}] \in \mathcal{C}(\mathbb{R})$, and $\xi \in \mathbb{R}$. Corresponding to $X$, $Z$, and $\xi$, we define the following algebraic operations (see, [22]):
$$X \oplus Z := [\underline{p} + \underline{q},\ \overline{p} + \overline{q}], \qquad X \ominus Z := [\underline{p} - \overline{q},\ \overline{p} - \underline{q}], \qquad \xi X := \begin{cases} [\xi \underline{p},\ \xi \overline{p}], & \xi \geq 0, \\ [\xi \overline{p},\ \xi \underline{p}], & \xi < 0. \end{cases}$$
The subsequent definition is from [35].
Definition 1.
Consider an arbitrary interval $E := [\underline{e}, \overline{e}] \in \mathcal{C}(\mathbb{R})$. Then, the symbol $\|E\|_{I}$ represents the norm of $E$ and is defined as follows:
$$\|E\|_{I} := \max\{ |\underline{e}|, |\overline{e}| \}.$$
For $X := [\underline{p}, \overline{p}]$ and $Z := [\underline{q}, \overline{q}] \in \mathcal{C}(\mathbb{R})$, we adopt the following notations throughout the article:
$$X \leq_{LU} Z \iff \underline{p} \leq \underline{q} \ \text{and } \overline{p} \leq \overline{q}, \qquad X <_{LU} Z \iff \underline{p} < \underline{q} \ \text{and } \overline{p} < \overline{q}, \qquad X \lneq_{LU} Z \iff X \leq_{LU} Z \ \text{and } X \neq Z.$$
Let $E = (E_1, \ldots, E_m)$, $R = (R_1, \ldots, R_m) \in \mathcal{C}(\mathbb{R})^m$. The ordered relations between $E$ and $R$ are described as follows:
$$E \leq_{m} R \iff E_k \leq_{LU} R_k \ \text{for all } k \in I_m, \qquad E <_{m} R \iff E_k <_{LU} R_k \ \text{for all } k \in I_m.$$
The following definition is from [35].
Definition 2.
For arbitrary intervals $X := [\underline{p}, \overline{p}]$ and $Z := [\underline{q}, \overline{q}] \in \mathcal{C}(\mathbb{R})$, the symbol $X \ominus_{gH} Z$ represents the gH-difference between $X$ and $Z$ and is defined as
$$X \ominus_{gH} Z := (\underline{p} - \underline{q}) \vee (\overline{p} - \overline{q}) = \left[ \min\{\underline{p} - \underline{q},\ \overline{p} - \overline{q}\},\ \max\{\underline{p} - \underline{q},\ \overline{p} - \overline{q}\} \right].$$
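The interval arithmetic recalled above can be encoded in a few lines. The following is a minimal Python sketch of our own (the class and method names are our assumptions, not notation from the paper): elements of $\mathcal{C}(\mathbb{R})$ as pairs, the operations $\oplus$, $\ominus$, scalar multiplication, the gH-difference, the $I$-norm, and the LU order.

```python
# A minimal sketch (ours) of C(R) with the operations recalled above.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    up: float

    def __post_init__(self):
        assert self.lo <= self.up, "an interval needs lo <= up"

    def __add__(self, other):                 # X (+) Z
        return Interval(self.lo + other.lo, self.up + other.up)

    def __sub__(self, other):                 # X (-) Z (Minkowski difference)
        return Interval(self.lo - other.up, self.up - other.lo)

    def scale(self, xi):                      # xi * X
        a, b = xi * self.lo, xi * self.up
        return Interval(min(a, b), max(a, b))

    def gh_minus(self, other):                # X (-)_gH Z
        a, b = self.lo - other.lo, self.up - other.up
        return Interval(min(a, b), max(a, b))

    def norm_I(self):                         # ||X||_I
        return max(abs(self.lo), abs(self.up))

    def leq_LU(self, other):                  # X <=_LU Z
        return self.lo <= other.lo and self.up <= other.up

    def lt_LU(self, other):                   # X <_LU Z
        return self.lo < other.lo and self.up < other.up

# example: [1, 2] (-)_gH [0, 4] = [min(1, -2), max(1, -2)] = [-2, 1]
print(Interval(1, 2).gh_minus(Interval(0, 4)))
```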
The notion of continuity for $G : \mathbb{R}^n \to \mathcal{C}(\mathbb{R})$ is recalled in the subsequent definition (see, [35]).
Definition 3.
The function $G : \mathbb{R}^n \to \mathcal{C}(\mathbb{R})$ is termed a continuous function at a point $\bar{u} \in \mathbb{R}^n$ if for any $\epsilon > 0$, there exists some $\delta_{\epsilon} > 0$ such that for any $u \in \mathbb{R}^n$ satisfying $\|u - \bar{u}\| < \delta_{\epsilon}$, the following inequality holds:
$$\left\| G(u) \ominus_{gH} G(\bar{u}) \right\|_{I} < \epsilon.$$
The subsequent definition is from Upadhyay et al. [23].
Definition 4.
The function $G : \mathbb{R}^n \to \mathcal{C}(\mathbb{R})$ is said to be a convex function if, for all $u_1, u_2 \in \mathbb{R}^n$ and any $\beta \in [0, 1]$, the following inequality holds:
$$G\big( (1 - \beta) u_1 + \beta u_2 \big) \leq_{LU} (1 - \beta) G(u_1) \oplus \beta G(u_2).$$
If there exist some $\gamma > 0$ and a convex function $\Theta : \mathbb{R}^n \to \mathcal{C}(\mathbb{R})$ such that the following condition holds:
$$G(u) = \Theta(u) \oplus \frac{1}{2} \|u\|^2 [\gamma, \gamma], \quad \text{for all } u \in \mathbb{R}^n,$$
then $G$ is said to be a strongly convex function on $\mathbb{R}^n$. Moreover, $G$ is said to be locally convex (respectively, locally strongly convex) at a point $\bar{u} \in \mathbb{R}^n$ if there exists a neighborhood $V$ of $\bar{u}$ such that the restriction of $G$ to $V$ is convex (respectively, strongly convex).
The subsequent definition is from [36].
Definition 5.
Let $\bar{u} \in \mathbb{R}^n$ and $d \in \mathbb{R}^n$ with $d \neq 0$. The gH-directional derivative of $G : \mathbb{R}^n \to \mathcal{C}(\mathbb{R})$ at $\bar{u}$ in the direction $d$ is defined as
$$D_{gH} G(\bar{u}; d) := \lim_{\beta \to 0^{+}} \frac{G(\bar{u} + \beta d) \ominus_{gH} G(\bar{u})}{\beta},$$
provided that the above limit exists.
The subsequent definition is also from [36].
Definition 6.
Let $\bar{u} \in \mathbb{R}^n$ and let $G : \mathbb{R}^n \to \mathcal{C}(\mathbb{R})$ be defined as
$$G(u) := [\underline{G}(u), \overline{G}(u)], \quad \text{for all } u \in \mathbb{R}^n.$$
Let the functions $G_1, G_2 : \mathbb{R}^n \to \mathbb{R}$ be defined as follows:
$$G_1(u) := \frac{\overline{G}(u) + \underline{G}(u)}{2}, \qquad G_2(u) := \frac{\overline{G}(u) - \underline{G}(u)}{2}, \quad \text{for all } u \in \mathbb{R}^n.$$
The mapping $G$ is said to be gH-differentiable at $\bar{u}$ if there exist vectors $w_1, w_2 \in \mathbb{R}^n$ with $w_1 := \big( w_1^{(1)}, \ldots, w_1^{(n)} \big)$ and $w_2 := \big( w_2^{(1)}, \ldots, w_2^{(n)} \big)$, and error functions $F_1, F_2 : \mathbb{R}^n \to \mathbb{R}$ such that $\lim_{z \to 0} F_1(z) = \lim_{z \to 0} F_2(z) = 0$ and, for all $z \neq 0$, the following hold:
$$G_1(\bar{u} + z) - G_1(\bar{u}) = \sum_{k=1}^{n} w_1^{(k)} z_k + \|z\| F_1(z),$$
and
$$G_2(\bar{u} + z) - G_2(\bar{u}) = \sum_{k=1}^{n} w_2^{(k)} z_k + \|z\| F_2(z).$$
If $G$ is gH-differentiable at every element $\bar{u} \in \mathbb{R}^n$, then $G$ is said to be gH-differentiable on $\mathbb{R}^n$.
The proof of the following theorem can be established using Theorem 5 and Propositions 9 and 11 from [36].
Theorem 1.
Let $\bar{u} \in \mathbb{R}^n$ and let $G : \mathbb{R}^n \to \mathcal{C}(\mathbb{R})$ be defined as
$$G(u) := [\underline{G}(u), \overline{G}(u)], \quad \text{for all } u \in \mathbb{R}^n.$$
If $G$ is gH-differentiable at $\bar{u} \in \mathbb{R}^n$, then for any $d \in \mathbb{R}^n$, one of the following conditions is fulfilled:
(i) The gradients $\nabla \underline{G}(\bar{u})$ and $\nabla \overline{G}(\bar{u})$ exist, and
$$D_{gH} G(\bar{u}; d) = \langle \nabla \underline{G}(\bar{u}), d \rangle \vee \langle \nabla \overline{G}(\bar{u}), d \rangle.$$
(ii) The one-sided gradients $\nabla^{-} \underline{G}(\bar{u})$, $\nabla^{-} \overline{G}(\bar{u})$, $\nabla^{+} \underline{G}(\bar{u})$, and $\nabla^{+} \overline{G}(\bar{u})$ exist and satisfy
$$\nabla^{-} \underline{G}(\bar{u}) = \nabla^{+} \overline{G}(\bar{u}), \qquad \nabla^{-} \overline{G}(\bar{u}) = \nabla^{+} \underline{G}(\bar{u}).$$
Moreover,
$$D_{gH} G(\bar{u}; d) = \langle \nabla^{-} \underline{G}(\bar{u}), d \rangle \vee \langle \nabla^{-} \overline{G}(\bar{u}), d \rangle,$$
or, equivalently,
$$D_{gH} G(\bar{u}; d) = \langle \nabla^{+} \underline{G}(\bar{u}), d \rangle \vee \langle \nabla^{+} \overline{G}(\bar{u}), d \rangle.$$
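Case (i) of Theorem 1 says that, whenever the endpoint gradients exist, the gH-directional derivative is simply the interval spanned by two inner products. The following short Python sketch (ours; the function name and signature are our assumptions) illustrates this computation.

```python
# A small sketch (ours) of case (i) of Theorem 1.
import numpy as np

def gh_directional_derivative(grad_lo, grad_up, d):
    """grad_lo, grad_up: gradients of the endpoint functions at u_bar; d: direction."""
    a = float(np.dot(grad_lo, d))
    b = float(np.dot(grad_up, d))
    return (min(a, b), max(a, b))   # the interval [lower, upper] of D_gH G(u_bar; d)

# example with G(u) = [||u||^2, ||u||^2 + 1] at u_bar = (1, 2) and d = (1, 0):
# both endpoint gradients equal (2, 4), so D_gH G(u_bar; d) = [2, 2]
print(gh_directional_derivative(np.array([2.0, 4.0]), np.array([2.0, 4.0]), np.array([1.0, 0.0])))
```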
We define the interval-valued vector function $H : \mathbb{R}^n \to \mathcal{C}(\mathbb{R})^m$ as follows:
$$H(u) := \big( H_1(u), \ldots, H_m(u) \big), \quad \text{for all } u \in \mathbb{R}^n,$$
where $H_k : \mathbb{R}^n \to \mathcal{C}(\mathbb{R})$ $(k \in I_m)$ are interval-valued functions.
The subsequent two definitions are from [26].
Definition 7.
Let $H : \mathbb{R}^n \to \mathcal{C}(\mathbb{R})^m$ and $\bar{u} \in \mathbb{R}^n$. Suppose that every component of $H$ possesses gH-directional derivatives. Then, $\bar{u}$ is called a critical point of $H$, provided that there does not exist any $d \in \mathbb{R}^n$ satisfying
$$D_{gH} H(\bar{u}, d) \in \big( \mathcal{C}_{-}(\mathbb{R}) \big)^m,$$
where $D_{gH} H(\bar{u}, d) := \big( D_{gH} H_1(\bar{u}, d), \ldots, D_{gH} H_m(\bar{u}, d) \big)$.
Definition 8.
An element $d \in \mathbb{R}^n$ is referred to as a descent direction of $H : \mathbb{R}^n \to \mathcal{C}(\mathbb{R})^m$ at a point $\bar{u} \in \mathbb{R}^n$, provided that there exists some $\beta > 0$ satisfying
$$H(\bar{u} + t d) <_{m} H(\bar{u}), \quad \text{for all } t \in (0, \beta].$$
Definition 9.
Let $\bar{u} \in \mathbb{R}^n$. Then, the function $H : \mathbb{R}^n \to \mathcal{C}(\mathbb{R})^m$ is said to be continuously gH-differentiable at $\bar{u}$ if every component of $H$ is continuously gH-differentiable at $\bar{u}$.
Remark 1.
In view of Definitions 5 and 8, it follows that if $H : \mathbb{R}^n \to \mathcal{C}(\mathbb{R})^m$ is continuously gH-differentiable at $\bar{u} \in \mathbb{R}^n$ and if $d \in \mathbb{R}^n$ is a descent direction of $H$ at $\bar{u}$, then
$$D_{gH} H_k(\bar{u}, d) \leq_{LU} [0, 0], \quad \text{for all } k \in I_m.$$

3. HS-Type Conjugate Direction Method for IVMOP

In this section, we present an HS-type conjugate direction method for IVMOPs. Moreover, we establish the convergence of the sequence generated by the above method.
Consider the following IVMOP:
$$\text{(IVMOP)} \qquad \text{Minimize } H(u) := \big( H_1(u), \ldots, H_m(u) \big), \quad \text{subject to } u \in \mathbb{R}^n,$$
where the functions $H_k : \mathbb{R}^n \to \mathcal{C}(\mathbb{R})$ $(k \in I_m)$ are defined as
$$H_k(u) := \big[ \underline{H}_k(u), \overline{H}_k(u) \big], \quad \text{for all } u \in \mathbb{R}^n.$$
The functions $H_k$ $(k \in I_m)$ are assumed to be continuously gH-differentiable unless otherwise specified.
The notions of effective and weak effective solutions for IVMOP are recalled in the subsequent definition (see, [22]).
Definition 10.
A point $\bar{u} \in \mathbb{R}^n$ is referred to as an effective solution of the IVMOP if there is no other point $u \in \mathbb{R}^n$ such that
$$H(u) \leq_{m} H(\bar{u}) \quad \text{and} \quad H(u) \neq H(\bar{u}).$$
Similarly, a point $\bar{u} \in \mathbb{R}^n$ is referred to as a weak effective solution of the IVMOP provided that there is no other point $u \in \mathbb{R}^n$ for which
$$H(u) <_{m} H(\bar{u}).$$
In the rest of the article, we employ $P$ to represent the set of all critical points of $H$.
Let $\bar{u} \in \mathbb{R}^n$. In order to determine a descent direction for the objective function $H$ of IVMOP, we consider the following scalar optimization problem with interval-valued constraints (see, Upadhyay et al. [26]):
$$(P)_{\bar{u}} \qquad \text{Minimize } \varphi(\alpha, d) := \alpha, \quad \text{subject to } D_{gH} H_k(\bar{u}, d) \oplus \frac{1}{2} \big[ \|d\|^2, \|d\|^2 \big] \leq_{LU} [\alpha, \alpha], \quad k \in I_m,$$
where $\varphi : \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}$ is a real-valued function. It can be shown that the problem $(P)_{\bar{u}}$ has a unique solution.
Any feasible point of the problem $(P)_{\bar{u}}$ is represented as $(\alpha_{\bar{u}}, d_{\bar{u}})$, where $\alpha_{\bar{u}} \in \mathbb{R}$ and $d_{\bar{u}} \in \mathbb{R}^n$. Let $K_{\bar{u}} \subseteq \mathbb{R}^{n+1}$ denote the feasible set of $(P)_{\bar{u}}$. We consider the functions $d : \mathbb{R}^n \to \mathbb{R}^n$ and $\alpha : \mathbb{R}^n \to \mathbb{R}$, which are defined as follows:
$$\big( \alpha(\bar{u}), d(\bar{u}) \big) := \arg\min_{(\alpha_{\bar{u}}, d_{\bar{u}}) \in K_{\bar{u}}} \varphi(\alpha_{\bar{u}}, d_{\bar{u}}).$$
From now onwards, for any $u \in \mathbb{R}^n$, the notation $(\alpha(u), d(u))$ will be used to represent the optimal solution of the problem $(P)_{u}$.
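Numerically, $(P)_{\bar{u}}$ can be treated as a smooth constrained problem in the variables $(\alpha, d)$ once the gH-directional derivatives are expressed through the endpoint gradients (case (i) of Theorem 1). The following Python sketch is our own illustration under that assumption; the use of SciPy's SLSQP routine and all names are our choices, not the authors' implementation.

```python
# A hedged sketch (ours) of the subproblem (P)_u: minimize alpha over (alpha, d)
# subject to <grad, d> + ||d||^2 / 2 <= alpha for every endpoint gradient.
import numpy as np
from scipy.optimize import minimize

def solve_subproblem(grads_lo, grads_up, n):
    """grads_lo, grads_up: lists of endpoint gradients of H_1, ..., H_m at u."""
    constraints = []
    for g in list(grads_lo) + list(grads_up):
        g = np.asarray(g, dtype=float)
        # SLSQP expects fun(x) >= 0, i.e. alpha - <g, d> - ||d||^2 / 2 >= 0, x = (alpha, d)
        constraints.append({
            "type": "ineq",
            "fun": lambda x, g=g: x[0] - g @ x[1:] - 0.5 * np.dot(x[1:], x[1:]),
        })
    res = minimize(lambda x: x[0], np.zeros(n + 1), constraints=constraints, method="SLSQP")
    return res.x[0], res.x[1:]            # alpha(u), d(u)

# example: m = 2, n = 2, endpoint gradients (2, 0), (2, 2) for H_1 and (0, 2), (1, 1) for H_2
alpha_u, d_u = solve_subproblem([(2, 0), (0, 2)], [(2, 2), (1, 1)], n=2)
print(alpha_u, d_u)
```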
In the subsequent discussions, we utilize the following lemmas from Upadhyay et al. [26].
Lemma 1.
Let $\bar{u} \in \mathbb{R}^n$. If $\bar{u} \notin P$, then $d(\bar{u})$ is a descent direction at $\bar{u}$ for $H$.
Lemma 2.
For $\bar{u} \in \mathbb{R}^n$, the following properties hold:
(i) If $\bar{u} \in P$, then $d(\bar{u}) = 0 \in \mathbb{R}^n$ and $\alpha(\bar{u}) = 0$.
(ii) If $\bar{u} \notin P$, then $\alpha(\bar{u}) < 0$.
Remark 2.
From Lemma 1, it follows that if $\bar{u} \notin P$, then the optimal solution of $(P)_{\bar{u}}$ yields a descent direction. Furthermore, from Lemma 2, it can be inferred that the value of $\alpha(\bar{u})$ can be utilized to determine whether $\bar{u} \in P$ or not. Specifically, for any given point $\bar{u} \in \mathbb{R}^n$, if $\alpha(\bar{u}) = 0$, then $\bar{u} \in P$. Otherwise, $\bar{u} \notin P$, and in this case, $d(\bar{u})$ serves as a descent direction at $\bar{u}$ for $H$.
We recall the following result from Upadhyay et al. [26].
Lemma 3.
Let $\bar{u} \in P$. If the functions $H_k$ $(k \in I_m)$ are locally convex (respectively, locally strongly convex) at $\bar{u}$, then $\bar{u}$ is a locally weak effective (respectively, locally effective) solution of IVMOP.
The proof of the following lemma follows on the lines of the proof of Lemma 3.
Lemma 4.
Let $\bar{u} \in P$. If the functions $H_k$ $(k \in I_m)$ are strongly convex on $\mathbb{R}^n$, then $\bar{u}$ is an effective solution of IVMOP.
To introduce the Hestenes-Stiefel-type direction for IVMOP, we define a function $\tilde{\Phi} : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ as follows:
$$\tilde{\Phi}(u, d) := \max_{k \in I_m} \overline{D}_{gH} H_k(u, d), \quad \text{for all } (u, d) \in \mathbb{R}^n \times \mathbb{R}^n, \tag{3}$$
where
$$D_{gH} H_k(u, d) := \big[ \underline{D}_{gH} H_k(u, d),\ \overline{D}_{gH} H_k(u, d) \big], \quad \text{for all } (u, d) \in \mathbb{R}^n \times \mathbb{R}^n.$$
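In words, $\tilde{\Phi}(u, d)$ is the largest upper endpoint of the componentwise gH-directional derivatives. A small Python helper of our own (assuming, as above, that the endpoint gradients are available) is:

```python
# A sketch (ours) of the scalarization (3).
import numpy as np

def phi_tilde(grads_lo, grads_up, d):
    """grads_lo[k], grads_up[k]: endpoint gradients of H_k at u; d: direction."""
    uppers = [max(float(np.dot(gl, d)), float(np.dot(gu, d)))
              for gl, gu in zip(grads_lo, grads_up)]
    return max(uppers)

# example: two components with the endpoint gradients below and d = (1, -1)
print(phi_tilde([(2, 0), (0, 2)], [(2, 2), (1, 1)], np.array([1.0, -1.0])))  # 2.0
```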
In the following lemma, we establish the relationship between the critical points of $H$ and the function $\tilde{\Phi}$.
Lemma 5.
Let $\tilde{\Phi} : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ be defined by (3), and let $\bar{u} \in \mathbb{R}^n$. Then, $\bar{u}$ is a critical point of $H$ if and only if
$$\tilde{\Phi}(\bar{u}, d) \geq 0, \quad \text{for all } d \in \mathbb{R}^n.$$
Proof.
Let $\bar{u}$ be a critical point of $H$. Then, by Definition 7, for every $d \in \mathbb{R}^n$ there exists $k \in I_m$ such that
$$D_{gH} H_k(\bar{u}, d) \notin \mathcal{C}_{-}(\mathbb{R}).$$
Consequently, it follows that $\overline{D}_{gH} H_k(\bar{u}, d) \geq 0$, which implies
$$\tilde{\Phi}(\bar{u}, d) \geq 0.$$
Conversely, suppose that
$$\tilde{\Phi}(\bar{u}, d) \geq 0, \quad \text{for all } d \in \mathbb{R}^n.$$
Then, for any $d \in \mathbb{R}^n$ there exists $k \in I_m$ such that
$$\tilde{\Phi}(\bar{u}, d) = \overline{D}_{gH} H_k(\bar{u}, d) \geq 0.$$
This further implies that, for any $d \in \mathbb{R}^n$,
$$D_{gH} H(\bar{u}, d) \notin \big( \mathcal{C}_{-}(\mathbb{R}) \big)^m.$$
Therefore, it follows that $\bar{u}$ is a critical point of $H$. This completes the proof.    □
We establish the following lemma, which will be used in the sequel.
Lemma 6.
Let $\bar{u} \in \mathbb{R}^n$ and let $(\alpha(\bar{u}), d(\bar{u}))$ be the optimal solution of the problem $(P)_{\bar{u}}$. Then
$$\alpha(\bar{u}) = \tilde{\Phi}\big( \bar{u}, d(\bar{u}) \big).$$
Proof.
Since $(\alpha(\bar{u}), d(\bar{u})) \in K_{\bar{u}}$, we have
$$D_{gH} H_k\big( \bar{u}, d(\bar{u}) \big) \leq_{LU} [\alpha(\bar{u}), \alpha(\bar{u})], \quad \text{for all } k \in I_m. \tag{4}$$
From (4), we obtain
$$\overline{D}_{gH} H_k\big( \bar{u}, d(\bar{u}) \big) \leq \alpha(\bar{u}), \quad \text{for all } k \in I_m.$$
Consequently,
$$\tilde{\Phi}\big( \bar{u}, d(\bar{u}) \big) \leq \alpha(\bar{u}). \tag{5}$$
Let us define $\alpha'_{\bar{u}} := \tilde{\Phi}\big( \bar{u}, d(\bar{u}) \big)$. Then, we have
$$\underline{D}_{gH} H_k\big( \bar{u}, d(\bar{u}) \big) \leq \overline{D}_{gH} H_k\big( \bar{u}, d(\bar{u}) \big) \leq \tilde{\Phi}\big( \bar{u}, d(\bar{u}) \big) = \alpha'_{\bar{u}}, \quad \text{for all } k \in I_m.$$
Therefore, we obtain
$$D_{gH} H_k\big( \bar{u}, d(\bar{u}) \big) \leq_{LU} [\alpha'_{\bar{u}}, \alpha'_{\bar{u}}], \quad \text{for all } k \in I_m.$$
This implies that $(\alpha'_{\bar{u}}, d(\bar{u})) \in K_{\bar{u}}$. Since $(\alpha(\bar{u}), d(\bar{u}))$ is the optimal solution of the problem $(P)_{\bar{u}}$, we obtain
$$\alpha(\bar{u}) \leq \alpha'_{\bar{u}} = \tilde{\Phi}\big( \bar{u}, d(\bar{u}) \big). \tag{6}$$
Combining (5) and (6), we conclude
$$\alpha(\bar{u}) = \tilde{\Phi}\big( \bar{u}, d(\bar{u}) \big).$$
This completes the proof.    □
Let $s \in \{0, 1, 2, \ldots\}$ be fixed and let $\{u^{(r)}\}_{0 \leq r \leq s} \subseteq \mathbb{R}^n$. We now introduce a Hestenes-Stiefel direction (HS-direction, in short) $w_{s}^{HS}$ at $u^{(s)}$:
$$w_{s}^{HS} := \begin{cases} d\big( u^{(s)} \big), & \text{if } s = 0, \\ d\big( u^{(s)} \big) + \beta_{s}^{HS}\, w_{s-1}^{HS}, & \text{if } s \geq 1, \end{cases} \tag{7}$$
where $w_{s-1}^{HS}$ represents the HS-direction at the $(s-1)$-th step, and for $s \geq 1$, $\beta_{s}^{HS}$ is defined as
$$\beta_{s}^{HS} := \frac{-\tilde{\Phi}\big( u^{(s)}, d(u^{(s)}) \big) + \tilde{\Phi}\big( u^{(s-1)}, d(u^{(s-1)}) \big)}{\tilde{\Phi}\big( u^{(s)}, w_{s-1}^{HS} \big) - \tilde{\Phi}\big( u^{(s-1)}, w_{s-1}^{HS} \big)}. \tag{8}$$
Remark 3.
If every component of the objective function $H$ of the IVMOP is a real-valued function rather than an interval-valued function, that is, $H : \mathbb{R}^n \to \mathbb{R}^m$, then Equation (7) reduces to the HS-direction defined for vector-valued functions, as considered by Pérez and Prudente [29]. As a result, the parameter $\beta_{s}^{HS}$ introduced in (8) extends the HS-direction from MOPs to IVMOPs, which belong to a broader class of optimization problems. Moreover, when $m = 1$, Equation (7) further reduces to the classical HS-direction for a real-valued function, defined by Hestenes and Stiefel [28].
It can be observed that $\beta_{s}^{HS}$, defined in (8), becomes undefined when
$$\tilde{\Phi}\big( u^{(s)}, w_{s-1}^{HS} \big) = \tilde{\Phi}\big( u^{(s-1)}, w_{s-1}^{HS} \big),$$
and the direction $w_{s}^{HS}$ defined in Equation (7) may not provide a descent direction. Therefore, to address this issue, we adopt an approach similar to that proposed by Gilbert and Nocedal [37] and Pérez and Prudente [29]. Hence, we define $\beta_{s}$ and $w^{(s)}$ as follows:
$$\beta_{s} := \begin{cases} 0, & \text{if } \beta_{s}^{HS} < 0 \text{ or } \beta_{s}^{HS} \text{ is undefined or } s = 0, \\[4pt] 0, & \text{if } \tilde{\Phi}\big( u^{(s-1)}, d(u^{(s-1)}) \big) > \tilde{\Phi}\big( u^{(s)}, d(u^{(s)}) \big), \\[4pt] \dfrac{-\tilde{\Phi}\big( u^{(s)}, d(u^{(s)}) \big) + \tilde{\Phi}\big( u^{(s-1)}, d(u^{(s-1)}) \big)}{\tilde{\Phi}\big( u^{(s)}, w^{(s-1)} \big) - \tilde{\Phi}\big( u^{(s-1)}, w^{(s-1)} \big)}, & \text{otherwise}, \end{cases} \tag{9}$$
and
$$w^{(s)} := d\big( u^{(s)} \big) + \beta_{s}\, w^{(s-1)}. \tag{10}$$
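The safeguard in (9) is easy to implement once the four $\tilde{\Phi}$ values are available. The following Python sketch is ours (argument names are our assumptions, and the $\tilde{\Phi}$ values are assumed to be computed as in the helper above).

```python
# A sketch (ours) of the safeguarded parameter (9) and the update (10).
import numpy as np

def beta_safeguarded(phi_s_d, phi_prev_d, phi_s_w_prev, phi_prev_w_prev, s):
    """phi_s_d      = Phi_tilde(u^(s),   d(u^(s)))
       phi_prev_d   = Phi_tilde(u^(s-1), d(u^(s-1)))
       phi_s_w_prev = Phi_tilde(u^(s),   w^(s-1));  phi_prev_w_prev analogous."""
    if s == 0:
        return 0.0
    denom = phi_s_w_prev - phi_prev_w_prev
    if denom == 0.0:                      # beta_s^HS undefined
        return 0.0
    beta_hs = (-phi_s_d + phi_prev_d) / denom
    if beta_hs < 0.0:
        return 0.0
    if phi_prev_d > phi_s_d:              # second branch of (9)
        return 0.0
    return beta_hs

def next_direction(d_s, w_prev, beta_s):  # w^(s) = d(u^(s)) + beta_s * w^(s-1), cf. (10)
    return np.asarray(d_s) + beta_s * np.asarray(w_prev)
```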
In the following lemma, we establish an inequality that relates the gH-directional derivative of $H$ at the point $u^{(s)}$ in the direction $w^{(s-1)}$ to $\beta_{s}$.
Lemma 7.
Let $u^{(s)}$ and $u^{(s-1)}$ be noncritical points of $H$. Suppose that $w^{(s-1)}$ represents a descent direction of $H$ at the point $u^{(s-1)}$. Then, we have
$$\beta_{s}\, D_{gH} H_k\big( u^{(s)}, w^{(s-1)} \big) \leq_{LU} [0, 0], \quad \text{for all } k \in I_m. \tag{11}$$
Proof.
In view of (9), it follows that $\beta_{s} \geq 0$. Now, the following two possible cases may arise:
Case 1: If $\beta_{s} = 0$, then
$$\beta_{s}\, D_{gH} H_k\big( u^{(s)}, w^{(s-1)} \big) = [0, 0], \quad \text{for all } k \in I_m.$$
Therefore, the inequality in (11) is satisfied.
Case 2: Let $\beta_{s} > 0$. Our aim is to prove that
$$\beta_{s}\, D_{gH} H_k\big( u^{(s)}, w^{(s-1)} \big) \leq_{LU} [0, 0], \quad \text{for all } k \in I_m.$$
Since $\beta_{s} > 0$, it suffices to show that
$$D_{gH} H_k\big( u^{(s)}, w^{(s-1)} \big) \leq_{LU} [0, 0], \quad \text{for all } k \in I_m.$$
On the contrary, assume that there exists $k \in I_m$ such that
$$D_{gH} H_k\big( u^{(s)}, w^{(s-1)} \big) \nleq_{LU} [0, 0].$$
This implies that
$$\tilde{\Phi}\big( u^{(s)}, w^{(s-1)} \big) > 0. \tag{13}$$
Since $w^{(s-1)}$ is a descent direction of $H$ at the point $u^{(s-1)}$, we have
$$D_{gH} H_k\big( u^{(s-1)}, w^{(s-1)} \big) \leq_{LU} [0, 0], \quad \text{for all } k \in I_m.$$
This implies that
$$\tilde{\Phi}\big( u^{(s-1)}, w^{(s-1)} \big) \leq 0. \tag{14}$$
From (13) and (14), we obtain
$$\tilde{\Phi}\big( u^{(s)}, w^{(s-1)} \big) - \tilde{\Phi}\big( u^{(s-1)}, w^{(s-1)} \big) > 0. \tag{15}$$
Now, if
$$\tilde{\Phi}\big( u^{(s-1)}, d(u^{(s-1)}) \big) > \tilde{\Phi}\big( u^{(s)}, d(u^{(s)}) \big),$$
then from (9), we obtain
$$\beta_{s} = 0,$$
which contradicts the assumption that $\beta_{s} > 0$. On the other hand, if
$$\tilde{\Phi}\big( u^{(s-1)}, d(u^{(s-1)}) \big) \leq \tilde{\Phi}\big( u^{(s)}, d(u^{(s)}) \big),$$
then
$$\tilde{\Phi}\big( u^{(s-1)}, d(u^{(s-1)}) \big) - \tilde{\Phi}\big( u^{(s)}, d(u^{(s)}) \big) \leq 0.$$
Using (15) and (9), we obtain
$$\beta_{s} \leq 0,$$
which is a contradiction. This completes the proof.    □
Notably, for $s = 0$, the direction $w^{(s)}$, as defined in Equation (10), coincides with $d(u^{(s)})$. Thus, by Lemma 1, we conclude that $w^{(s)}$ serves as a descent direction at $u^{(s)}$ for $s = 0$. In the following theorem, we establish that $w^{(s)}$ also serves as a descent direction at $u^{(s)}$ for $s \geq 1$, under appropriate assumptions.
Theorem 2.
Let $u^{(r)}$, $r \in \{0, 1, \ldots, s\}$, be noncritical points of $H$. Suppose that $w^{(r)}$, as defined in Equation (10), serves as a descent direction of $H$ at $u^{(r)}$ for all $r \in \{0, 1, \ldots, s-1\}$. Then, $w^{(s)}$ serves as a descent direction at $u^{(s)}$ for the function $H$.
Proof.
Since the functions $H_k$ $(k \in I_m)$ are continuously gH-differentiable, to prove that $w^{(s)}$ is a descent direction at $u^{(s)}$ it is sufficient to show that
$$D_{gH} H_k\big( u^{(s)}, w^{(s)} \big) <_{LU} [0, 0], \quad \text{for all } k \in I_m.$$
Let $s \geq 1$ be fixed. From Theorem 1, we have
$$D_{gH} H_k\big( u^{(s)}, w^{(s)} \big) = D_{gH} H_k\big( u^{(s)}, d(u^{(s)}) + \beta_{s} w^{(s-1)} \big) = \nabla^{+} \underline{H}_k\big( u^{(s)} \big)^{T} \big( d(u^{(s)}) + \beta_{s} w^{(s-1)} \big) \vee \nabla^{+} \overline{H}_k\big( u^{(s)} \big)^{T} \big( d(u^{(s)}) + \beta_{s} w^{(s-1)} \big), \quad \text{for all } k \in I_m. \tag{16}$$
Consider
$$\nabla^{+} \underline{H}_k\big( u^{(s)} \big)^{T} \big( d(u^{(s)}) + \beta_{s} w^{(s-1)} \big) = \nabla^{+} \underline{H}_k\big( u^{(s)} \big)^{T} d(u^{(s)}) + \beta_{s} \nabla^{+} \underline{H}_k\big( u^{(s)} \big)^{T} w^{(s-1)} \leq \alpha\big( u^{(s)} \big) + \beta_{s} \overline{D}_{gH} H_k\big( u^{(s)}, w^{(s-1)} \big), \quad \text{for all } k \in I_m, \tag{17}$$
where the last inequality follows from the feasibility of $\big( \alpha(u^{(s)}), d(u^{(s)}) \big)$ for $(P)_{u^{(s)}}$ and the definition of $\overline{D}_{gH} H_k$. Therefore, from (17) and Lemmas 2 and 7, we obtain
$$\nabla^{+} \underline{H}_k\big( u^{(s)} \big)^{T} \big( d(u^{(s)}) + \beta_{s} w^{(s-1)} \big) < 0, \quad \text{for all } k \in I_m. \tag{18}$$
Similarly, we can prove that
$$\nabla^{+} \overline{H}_k\big( u^{(s)} \big)^{T} \big( d(u^{(s)}) + \beta_{s} w^{(s-1)} \big) < 0, \quad \text{for all } k \in I_m. \tag{19}$$
Therefore, from (16), (18), and (19), we conclude that
$$D_{gH} H_k\big( u^{(s)}, w^{(s)} \big) <_{LU} [0, 0], \quad \text{for all } k \in I_m.$$
This completes the proof.    □
Now, we introduce an Armijo-like line search method for the objective function $H$ of IVMOP.
Consider $u, w \in \mathbb{R}^n$ such that $w$ is a descent direction at $u$ for the function $H$. Let $\gamma \in (0, 1)$. A step length $t$ is acceptable if it satisfies the following condition:
$$H_k(u + t w) \ominus_{gH} H_k(u) \leq_{LU} (\gamma t)\, D_{gH} H_k(u, w), \quad \text{for all } k \in I_m. \tag{21}$$
Remark 4.
If every component of the objective function $H$ of the IVMOP is a real-valued function rather than an interval-valued function, that is, $H : \mathbb{R}^n \to \mathbb{R}^m$, then (21) reduces to the following Armijo-like line search, defined by Fliege and Svaiter [5]:
$$H(u + t w) \leqq H(u) + \gamma t\, J H(u)\, w,$$
where $J H(u)$ represents the Jacobian of $H$ at $u$.
In the next lemma, we prove the existence of a step length $t$ satisfying (21) for a given $\gamma \in (0, 1)$.
Lemma 8.
If $H$ is gH-differentiable and $D_{gH} H_k(u, w) <_{LU} [0, 0]$ for each $k \in I_m$, then for a given $\gamma \in (0, 1)$ there exists $\hat{t} > 0$ such that
$$H_k(u + t w) \ominus_{gH} H_k(u) \leq_{LU} (\gamma t)\, D_{gH} H_k(u, w), \quad \text{for all } k \in I_m \text{ and all } t \in (0, \hat{t}).$$
Proof.
Let $k \in I_m$ be fixed. By the definition of the gH-directional derivative of $H_k$, there exists a function $\tau_k$ taking values in $\mathcal{C}(\mathbb{R})$ such that
$$H_k(u + t w) \ominus_{gH} H_k(u) = t\, D_{gH} H_k(u, w) \oplus t\, \tau_k(t), \tag{22}$$
where $\tau_k(t) \to [0, 0]$ as $t \to 0$. Let us write $\tau_k(t) = [\underline{\epsilon}_k(t), \overline{\epsilon}_k(t)]$.
From (22) and Definition 2, the following two cases may arise:
Case 1:
$$\underline{H}_k(u + t w) - \underline{H}_k(u) = t\, \underline{D}_{gH} H_k(u, w) + t\, \underline{\epsilon}_k(t), \qquad \overline{H}_k(u + t w) - \overline{H}_k(u) = t\, \overline{D}_{gH} H_k(u, w) + t\, \overline{\epsilon}_k(t). \tag{23}$$
Since $D_{gH} H_k(u, w) <_{LU} [0, 0]$, we have $\underline{D}_{gH} H_k(u, w) < 0$ and $\overline{D}_{gH} H_k(u, w) < 0$. Define
$$\epsilon := \max\big\{ -(1 - \gamma)\, \underline{D}_{gH} H_k(u, w),\ -(1 - \gamma)\, \overline{D}_{gH} H_k(u, w) \big\} > 0. \tag{24}$$
Since $\tau_k(t) \to [0, 0]$ as $t \to 0$, there exists $\hat{t}_k > 0$ such that
$$\| \tau_k(t) \|_{I} \leq \epsilon, \quad \text{for all } t \in (0, \hat{t}_k). \tag{25}$$
Substituting (24) and (25) in (23), we have
$$\underline{H}_k(u + t w) - \underline{H}_k(u) - t\, \underline{D}_{gH} H_k(u, w) \leq -t (1 - \gamma)\, \underline{D}_{gH} H_k(u, w), \qquad \overline{H}_k(u + t w) - \overline{H}_k(u) - t\, \overline{D}_{gH} H_k(u, w) \leq -t (1 - \gamma)\, \overline{D}_{gH} H_k(u, w), \quad \text{for all } t \in (0, \hat{t}_k).$$
This implies that
$$H_k(u + t w) \ominus_{gH} H_k(u) \leq_{LU} (\gamma t)\, D_{gH} H_k(u, w), \quad \text{for all } t \in (0, \hat{t}_k). \tag{26}$$
Case 2:
$$\overline{H}_k(u + t w) - \overline{H}_k(u) = t\, \underline{D}_{gH} H_k(u, w) + t\, \underline{\epsilon}_k(t), \qquad \underline{H}_k(u + t w) - \underline{H}_k(u) = t\, \overline{D}_{gH} H_k(u, w) + t\, \overline{\epsilon}_k(t).$$
On the lines of the proof of Case 1, it can be shown that the inequality in (26) holds for all $t \in (0, \hat{t}_k)$ for some $\hat{t}_k > 0$.
Since $k \in I_m$ was arbitrary, for each $k \in I_m$ there exists $\hat{t}_k > 0$ such that (26) holds. Setting $\hat{t} := \min\{ \hat{t}_1, \ldots, \hat{t}_m \}$, we have
$$H_k(u + t w) \ominus_{gH} H_k(u) \leq_{LU} (\gamma t)\, D_{gH} H_k(u, w), \quad \text{for all } k \in I_m \text{ and all } t \in (0, \hat{t}).$$
This completes the proof.    □
Remark 5.
From Lemma 8, it follows that for $u \in \mathbb{R}^n$, if $D_{gH} H_k(u, w) <_{LU} [0, 0]$ $(k \in I_m)$, then there exists some $\hat{t} > 0$ such that (21) holds for all $t \in (0, \hat{t})$. To compute the step length $t$ numerically, we employ the following steps:
We start with $p = 0$ and check whether (21) holds for $t = \frac{1}{2^p}$.
(a) If the inequality in (21) is satisfied, we take $t = \frac{1}{2^p}$ as the step length.
(b) Otherwise, we set $p := p + 1$ and update $t = \frac{1}{2^p}$, repeating the process until (21) is satisfied.
Since some $\hat{t} > 0$ exists such that (21) holds for all $t \in (0, \hat{t})$, and the sequence $\big( \frac{1}{2^p} \big)_{p \in \mathbb{N} \cup \{0\}}$ converges to $0$, the above process terminates after a finite number of iterations.
Thus, at any $u \in \mathbb{R}^n$ with $D_{gH} H_k(u, w) <_{LU} [0, 0]$ $(k \in I_m)$, we can choose $\eta$ as the largest $t$ from the set
$$\Big\{ \frac{1}{2^p} : p \in \mathbb{N} \cup \{0\} \Big\}$$
such that (21) is satisfied.
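The backtracking described in Remark 5 can be sketched in a few lines of Python. The following is our own illustration (function names and signatures are our assumptions); it compares the gH-difference on the left of (21) with the scaled gH-directional derivative on the right in the LU order.

```python
# A sketch (ours) of the step-size rule in Remark 5: try t = 1/2^p, p = 0, 1, 2, ...
import numpy as np

def gh_minus(X, Z):
    a, b = X[0] - Z[0], X[1] - Z[1]
    return (min(a, b), max(a, b))

def leq_LU(X, Z):
    return X[0] <= Z[0] and X[1] <= Z[1]

def armijo_like_step(H_list, D_list, u, w, gamma=0.1, max_halvings=60):
    """H_list[k](x) -> (lo, up) of H_k(x);  D_list[k] = (lo, up) of D_gH H_k(u, w)."""
    for p in range(max_halvings + 1):
        t = 0.5 ** p
        ok = True
        for Hk, Dk in zip(H_list, D_list):
            lhs = gh_minus(Hk(u + t * np.asarray(w)), Hk(u))
            rhs = (gamma * t * Dk[0], gamma * t * Dk[1])
            if not leq_LU(lhs, rhs):
                ok = False
                break
        if ok:
            return t
    return 0.5 ** max_halvings   # fallback if no acceptable t was found numerically

# example: one component H(x) = [||x||^2, ||x||^2 + 1]; at u = (1, 0), w = (-1, 0),
# D_gH H(u, w) = (-2, -2), so t = 1 is already acceptable for gamma = 0.1
H = [lambda x: (float(x @ x), float(x @ x) + 1.0)]
print(armijo_like_step(H, [(-2.0, -2.0)], np.array([1.0, 0.0]), np.array([-1.0, 0.0])))
```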
Now, we present the HS-type conjugate direction algorithm for IVMOP.
Remark 6.
It is worth noting that if $\beta_{s}$ in (10) is set to zero and if every component of the objective function $H$ of the IVMOP is a real-valued function rather than an interval-valued function, that is, $H : \mathbb{R}^n \to \mathbb{R}^m$, then Algorithm 1 reduces to the steepest descent algorithm for MOPs, as proposed by Fliege and Svaiter [5].
Algorithm 1 HS-Type Conjugate Direction Algorithm for IVMOP
1: Let $\gamma \in (0, 1)$, an initial point $u^{(0)} \in \mathbb{R}^n$, $\epsilon > 0$, and set $s = 0$.
2: Solve the optimization problem $(P)_{u^{(s)}}$ and obtain the values of $\alpha(u^{(s)})$ and $d(u^{(s)})$.
3: If $|\alpha(u^{(s)})| < \epsilon$, then stop. Otherwise, proceed to the next step.
4: Calculate $w^{(s)}$ using (10).
5: Select $\eta^{(s)}$ as the largest value of $t \in \big\{ \frac{1}{2^p} : p \in \mathbb{N} \cup \{0\} \big\}$ that satisfies (21). Update the iterate as follows:
$$u^{(s+1)} := u^{(s)} + \eta^{(s)} w^{(s)}.$$
6: Set $s := s + 1$, and go to Step 2.
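To make the overall flow of Algorithm 1 concrete, the following self-contained Python sketch assembles the pieces illustrated earlier on a small convex instance of our own (constant interval widths), using SciPy's SLSQP routine for the subproblem. It is a hedged illustration under our assumptions, not the authors' MATLAB implementation, and the test problem, tolerances, and names are ours.

```python
# A compact sketch (ours) of Algorithm 1 on the illustrative IVMOP
# H_1(u) = [||u - a||^2, ||u - a||^2 + 1], H_2(u) = [||u - b||^2, ||u - b||^2 + 2].
import numpy as np
from scipy.optimize import minimize

a, b = np.array([0.0, 0.0]), np.array([1.0, 1.0])
lowers = [lambda u: float((u - a) @ (u - a)), lambda u: float((u - b) @ (u - b))]
grads_lo = [lambda u: 2.0 * (u - a), lambda u: 2.0 * (u - b)]
grads_up = grads_lo                        # upper endpoints differ only by constants

def dgh(u, v):                             # endpoint pairs of D_gH H_k(u, v)
    return [(min(float(gl(u) @ v), float(gu(u) @ v)),
             max(float(gl(u) @ v), float(gu(u) @ v)))
            for gl, gu in zip(grads_lo, grads_up)]

def phi_tilde(u, v):                       # scalarization (3)
    return max(up for (_, up) in dgh(u, v))

def subproblem(u, n=2):                    # problem (P)_u solved with SLSQP
    cons = [{"type": "ineq",
             "fun": lambda x, g=g: x[0] - float(g(u) @ x[1:]) - 0.5 * float(x[1:] @ x[1:])}
            for g in list(grads_lo) + list(grads_up)]
    res = minimize(lambda x: x[0], np.zeros(n + 1), constraints=cons, method="SLSQP")
    return res.x[0], res.x[1:]             # alpha(u), d(u)

def armijo(u, w, gamma=0.1):               # largest t = 1/2^p satisfying (21)
    D = dgh(u, w)
    for p in range(60):
        t = 0.5 ** p
        # the constant widths cancel in the gH-difference, so only the lower
        # endpoint functions appear on the left-hand side of (21) here
        if all(f(u + t * w) - f(u) <= gamma * t * lo and
               f(u + t * w) - f(u) <= gamma * t * up
               for f, (lo, up) in zip(lowers, D)):
            return t
    return 0.5 ** 59

u = np.array([3.0, -2.0])
w_prev, phi_prev_d, phi_prev_w = None, 0.0, 0.0
for s in range(200):
    alpha_u, d_u = subproblem(u)           # Step 2
    if abs(alpha_u) < 1e-6:                # Step 3: approximate critical point found
        break
    phi_d = phi_tilde(u, d_u)
    beta = 0.0
    if s >= 1:                             # safeguarded parameter (9)
        denom = phi_tilde(u, w_prev) - phi_prev_w
        beta_hs = (-phi_d + phi_prev_d) / denom if denom != 0.0 else -1.0
        if beta_hs >= 0.0 and phi_prev_d <= phi_d:
            beta = beta_hs
    w = d_u + (beta * w_prev if w_prev is not None else 0.0)   # update (10)
    t = armijo(u, w)                       # Step 5
    phi_prev_d, phi_prev_w, w_prev = phi_d, phi_tilde(u, w), w
    u = u + t * w
print("iterations:", s, "point:", u, "alpha:", alpha_u)
```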
It is worthwhile to note that if, at some iteration $s$, Algorithm 1 does not reach Step 4, that is, if it terminates at Step 3, then in view of Lemma 2, $u^{(s)}$ is an approximate critical point of $H$. In other words, if Algorithm 1 terminates after a finite number of iterations, then the last iterate is an approximate critical point of $H$. Therefore, in the convergence analysis below, we assume that Algorithm 1 generates an infinite sequence, that is, $\alpha(u^{(s)}) \neq 0$ for all $s \in \mathbb{N}$. Consequently, we have $\alpha(u^{(s)}) < 0$, and $w^{(s)}$ serves as a descent direction at $u^{(s)}$ for all $s \in \mathbb{N}$.
In the following theorem, we establish the convergence of the sequence generated by Algorithm 1.
Theorem 3.
Let $\big( u^{(s)} \big)_{s \in \mathbb{N}}$ be an infinite sequence generated by Algorithm 1. Suppose that the set
$$T := \big\{ u \in \mathbb{R}^n : H(u) \leq_{m} H(u^{(0)}) \big\}$$
is bounded. Under these assumptions, every accumulation point of the sequence $\big( u^{(s)} \big)_{s \in \mathbb{N}}$ is a critical point of the objective function of IVMOP.
Proof.
From (21), for all $s \in \{0, 1, 2, \ldots\}$, we have
$$H_k\big( u^{(s+1)} \big) \ominus_{gH} H_k\big( u^{(s)} \big) \leq_{LU} \big( \gamma \eta^{(s)} \big)\, D_{gH} H_k\big( u^{(s)}, w^{(s)} \big), \quad \text{for all } k \in I_m. \tag{28}$$
Using Remark 1, for all $s \in \{0, 1, 2, \ldots\}$, we obtain
$$H_k\big( u^{(s+1)} \big) \ominus_{gH} H_k\big( u^{(s)} \big) \leq_{LU} \big( \gamma \eta^{(s)} \big)\, D_{gH} H_k\big( u^{(s)}, w^{(s)} \big) \leq_{LU} [0, 0], \quad \text{for all } k \in I_m.$$
This implies that
$$H_k\big( u^{(s+1)} \big) \leq_{LU} H_k\big( u^{(s)} \big), \quad \text{for all } s \in \{0, 1, 2, \ldots\} \text{ and all } k \in I_m. \tag{29}$$
From (29), the sequence $\big( u^{(s)} \big)_{s \in \mathbb{N}}$ lies in $T$, which is a bounded subset of $\mathbb{R}^n$. As a result, the sequence $\big( u^{(s)} \big)_{s \in \mathbb{N}}$ is also bounded. Hence, it possesses at least one accumulation point, say $\bar{u}$. We claim that $\bar{u}$ is a critical point of the objective function of IVMOP.
Indeed, as $\big( u^{(s)} \big)_{s \in \mathbb{N}}$ is a bounded sequence in $\mathbb{R}^n$ and, for all $k \in I_m$, $H_k$ is gH-continuous on $\mathbb{R}^n$, it follows from (29) that, for all $k \in I_m$, the sequence $\big( H_k(u^{(s)}) \big)_{s \in \mathbb{N}}$ is non-increasing and bounded. Consequently, from (28), it follows that
$$\lim_{s \to \infty} \eta^{(s)}\, D_{gH} H_k\big( u^{(s)}, w^{(s)} \big) = [0, 0], \quad \text{for all } k \in I_m. \tag{30}$$
Since $\eta^{(s)} \in (0, 1]$ for all $s \in \mathbb{N}$, the value $\limsup_{s \to \infty} \eta^{(s)}$ exists. Therefore, the following two possible cases may arise:
Case 1: Let $\limsup_{s \to \infty} \eta^{(s)} > 0$. Employing (30) and taking into account the fact that $\bar{u}$ is an accumulation point of the sequence $\big( u^{(s)} \big)_{s \in \mathbb{N}}$, there exist subsequences $\big( u^{(s_j)} \big)_{j \in \mathbb{N}}$ and $\big( \eta^{(s_j)} \big)_{j \in \mathbb{N}}$ of $\big( u^{(s)} \big)_{s \in \mathbb{N}}$ and $\big( \eta^{(s)} \big)_{s \in \mathbb{N}}$, respectively, such that
$$\lim_{j \to \infty} u^{(s_j)} = \bar{u}, \qquad \lim_{j \to \infty} \eta^{(s_j)} = \limsup_{s \to \infty} \eta^{(s)} > 0,$$
and
$$\lim_{j \to \infty} D_{gH} H_k\big( u^{(s_j)}, w^{(s_j)} \big) = [0, 0], \quad \text{for all } k \in I_m. \tag{31}$$
Our aim is to show that $\bar{u}$ is a critical point of $H$. On the contrary, assume that $\bar{u}$ is a noncritical point of $H$. This implies that there exists $d \in \mathbb{R}^n$ such that
$$D_{gH} H_k(\bar{u}, d) <_{LU} [0, 0], \quad \text{for all } k \in I_m.$$
Since $H$ is continuously gH-differentiable, there exist $\epsilon > 0$ and $\delta > 0$ such that
$$D_{gH} H_k(u, d) <_{LU} [-\epsilon, -\epsilon], \quad \text{for all } k \in I_m \text{ and all } u \in B_{\delta}(\bar{u}), \tag{33}$$
where $B_{\delta}(\bar{u})$ represents an open ball of radius $\delta$ centered at $\bar{u}$.
Since $u^{(s_j)} \to \bar{u}$ as $j \to \infty$, using (33), there exists $n_0 \in \mathbb{N}$ such that
$$D_{gH} H_k\big( u^{(s_j)}, d \big) <_{LU} [-\epsilon, -\epsilon], \quad \text{for all } k \in I_m \text{ and all } j \geq n_0. \tag{34}$$
Now, for every $j \geq n_0$, by defining $\alpha'_{s_j} := \tilde{\Phi}\big( u^{(s_j)}, d \big)$, we get
$$D_{gH} H_k\big( u^{(s_j)}, d \big) \leq_{LU} [\alpha'_{s_j}, \alpha'_{s_j}], \quad \text{for all } k \in I_m. \tag{35}$$
This implies that, for every $j \geq n_0$,
$$\big( \alpha'_{s_j}, d \big) \in K_{u^{(s_j)}}.$$
Using (34) and (35), for all $j \geq n_0$, we obtain
$$\alpha\big( u^{(s_j)} \big) \leq \alpha'_{s_j} < -\epsilon.$$
This implies that, for all $j \geq n_0$, we get
$$D_{gH} H_k\big( u^{(s_j)}, d(u^{(s_j)}) \big) <_{LU} [-\epsilon, -\epsilon], \quad \text{for all } k \in I_m. \tag{36}$$
Now, for all $j \geq n_0$ and all $k \in I_m$, we consider
$$D_{gH} H_k\big( u^{(s_j)}, w^{(s_j)} \big) = D_{gH} H_k\big( u^{(s_j)}, d(u^{(s_j)}) + \beta_{s_j} w^{(s_j - 1)} \big) = \nabla^{+} \underline{H}_k\big( u^{(s_j)} \big)^{T} \big( d(u^{(s_j)}) + \beta_{s_j} w^{(s_j - 1)} \big) \vee \nabla^{+} \overline{H}_k\big( u^{(s_j)} \big)^{T} \big( d(u^{(s_j)}) + \beta_{s_j} w^{(s_j - 1)} \big).$$
Therefore, using (36), for all $j \geq n_0$ and all $k \in I_m$, we conclude that
$$D_{gH} H_k\big( u^{(s_j)}, w^{(s_j)} \big) <_{LU} [-\epsilon, -\epsilon] \oplus \beta_{s_j}\, D_{gH} H_k\big( u^{(s_j)}, w^{(s_j - 1)} \big).$$
Now, using Lemma 7, for all $j \geq n_0$ and all $k \in I_m$, we obtain
$$D_{gH} H_k\big( u^{(s_j)}, w^{(s_j)} \big) <_{LU} [-\epsilon, -\epsilon].$$
This leads to a contradiction with Equation (31).
Case 2: Let $\limsup_{s \to \infty} \eta^{(s)} = 0$. Since $\eta^{(s)} > 0$ for all $s \in \mathbb{N}$, we get
$$\lim_{s \to \infty} \eta^{(s)} = 0.$$
Now, for every $p \in \mathbb{N}$, there exists $n_p \in \mathbb{N}$ such that
$$\eta^{(s)} < \frac{1}{2^p}, \quad \text{for all } s \geq n_p.$$
Therefore, for $t = \frac{1}{2^p}$, the condition (21) is not satisfied; that is, for all $s_j \geq n_p$ there exists $k \in I_m$ such that
$$H_k\Big( u^{(s_j)} + \frac{1}{2^p} w^{(s_j)} \Big) \ominus_{gH} H_k\big( u^{(s_j)} \big) \nleq_{LU} \frac{\gamma}{2^p}\, D_{gH} H_k\big( u^{(s_j)}, w^{(s_j)} \big). \tag{38}$$
Letting $p \to \infty$ (along a suitable subsequence, if necessary) in both sides of the inequality in (38), there exists $k \in I_m$ such that
$$D_{gH} H_k\big( u^{(s_j)}, w^{(s_j)} \big) \nleq_{LU} \gamma\, D_{gH} H_k\big( u^{(s_j)}, w^{(s_j)} \big), \quad s_j \geq n_p,$$
which leads to a contradiction. This completes the proof.    □

4. Numerical Experiments

In this section, we furnish several numerical examples to illustrate the effectiveness of Algorithm 1 and solve them by employing MATLAB R2024a.
In the following, we provide an example of a strongly convex IVMOP to illustrate the significance of Algorithm 1.
Example 1.
Consider the following problem (P1) which belongs to the class of IVMOPs.
( P 1 ) Minimize H 1 ( u 1 , u 2 , u 3 ) , H 2 ( u 1 , u 2 , u 3 ) , subject to ( u 1 , u 2 , u 3 ) R 3 ,
where H k : R 3 C ( R ) ( k = 1 , 2 ) are defined as follows:
H 1 ( u 1 , u 2 , u 3 ) : = u 1 1 2 + u 2 2 + ( u 3 1 ) 2 u 1 2 + ( u 2 2 ) 2 + ( u 3 3 ) 2 , H 2 ( u 1 , u 2 , u 3 ) : = e u 1 + e u 2 + e u 3 u 1 2 + u 2 2 + u 3 2 .
Evidently, ( 0 , 2 , 3 ) is a critical point of the objective function of (P1). Since the components of the objective function in (P1) are strongly convex, it follows from Lemma 4 that ( 0 , 2 , 3 ) is an effective solution of (P1).
Now, we employ Algorithm 1 to solve (P1), with an initial point ( 14 , 17 , 11 ) . The stopping criterion is defined as | α ( u ( s ) ) | < ϵ = 10 4 . The numerical results for Algorithm 1 are shown in Table 1.
From Step 9 of Table 1, we conclude that the sequence converges to an approximate effective solution ( 2 . 5519 × 10 8 , 2 , 3 ) of (P1).
In Example 2, we consider a locally convex IVMOP to demonstrate the effectiveness of Algorithm 1.
Example 2.
Consider the following problem (P2), which belongs to the class of IVMOPs:
$$\text{(P2)} \qquad \text{Minimize } \big( H_1(u_1, u_2),\ H_2(u_1, u_2) \big), \quad \text{subject to } (u_1, u_2) \in \mathbb{R}^2,$$
where $H_k : \mathbb{R}^2 \to \mathcal{C}(\mathbb{R})$ $(k = 1, 2)$ are defined as follows:
$$H_1(u_1, u_2) := \Big[ \tfrac{1}{2}\big( (u_1 - 1)^2 + (u_2 - 1)^2 \big),\ \tfrac{1}{2}(u_1 - 1)^3 - u_1 u_2^2 + e^{u_1 + u_2} \Big], \qquad H_2(u_1, u_2) := \big[ (u_1 - 1)^2 + (u_2 - 1)^2,\ (u_1 + 1)^2 + (u_2 + 1)^2 \big].$$
It is evident that $(1, 1)$ is a critical point of the objective function of (P2). Since the components of the objective function in (P2) are locally convex at $(1, 1)$, it follows from Lemma 3 that $(1, 1)$ is a locally weak effective solution of (P2).
Now, we employ Algorithm 1 to solve (P2), with the initial starting point $(-4, 12)$. The stopping criterion is defined as $|\alpha(u^{(s)})| < \epsilon = 10^{-4}$. The numerical results for Algorithm 1 are shown in Table 2.
From Step 3 of Table 2, we conclude that the sequence converges to the locally weak effective solution $(1, 1)$ of (P2).
It is worth noting that the locally weak effective solutions of an IVMOP are, in general, not isolated points. However, applying Algorithm 1 with a given initial point leads to only one such locally weak effective solution. To generate an approximate locally weak effective solution set, we employ a multi-start approach. Specifically, we generate 100 uniformly distributed random initial points and subsequently execute Algorithm 1 starting from each of these points. In view of the above fact, in Example 2 we generate a set of approximate locally weak effective solutions by selecting 100 uniformly distributed random initial points in the domain $[0, 10] \times [0, 10]$ using the “rand” function of MATLAB R2024a. We use $|\alpha(u^{(s)})| < \epsilon = 10^{-4}$ or a maximum of 1000 iterations as the stopping criterion. The sequences generated from these points are illustrated in Figure 1.
In view of the works of Upadhyay et al. [22,23], it can be observed that Newton's and quasi-Newton methods are applicable to certain classes of IVMOPs in which the objective functions are twice continuously gH-differentiable. In contrast, our proposed algorithm requires only the continuous gH-differentiability of the components of the objective function. In view of this, Algorithm 1 can be applied to a broader class of IVMOPs compared to the algorithms proposed by Upadhyay et al. [22,23]. To demonstrate this, we consider an IVMOP in which the first component of the objective function is not twice continuously gH-differentiable.
Example 3.
Consider the following problem (P3), which belongs to the class of IVMOPs:
$$\text{(P3)} \qquad \text{Minimize } \big( H_1(u_1, u_2),\ H_2(u_1, u_2) \big), \quad \text{subject to } (u_1, u_2) \in \mathbb{R}^2,$$
where $H_1 : \mathbb{R}^2 \to \mathcal{C}(\mathbb{R})$ and $H_2 : \mathbb{R}^2 \to \mathcal{C}(\mathbb{R})$ are defined as follows:
$$H_1(u_1, u_2) := \begin{cases} \left[ \dfrac{(u_1 - 3)^2}{2} + u_1,\ \dfrac{(u_1 - 3)^2}{2} + u_1 + u_2^2 \right], & \text{if } u_1 \geq 3, \\[10pt] \left[ \dfrac{u_1^2}{4} - \dfrac{u_1}{2} + \dfrac{9}{4},\ \dfrac{u_1^2}{4} - \dfrac{u_1}{2} + \dfrac{9}{4} + u_2^2 \right], & \text{if } u_1 < 3, \end{cases}$$
and
$$H_2(u_1, u_2) := \big[ (u_1 - 3)^2 + u_2^2,\ u_1^2 + (u_2 - 4)^2 \big].$$
It can be verified that $H_1$ is gH-differentiable but not twice gH-differentiable. As a result, Newton's and quasi-Newton methods proposed by Upadhyay et al. [22,23] cannot be applied to solve (P3).
We solve (P3) by employing Algorithm 1 and MATLAB R2024a. We initialize the algorithm at the starting point $(9, 4)$ and define the stopping criterion as $|\alpha(u^{(s)})| < \epsilon = 10^{-4}$. The numerical results for Algorithm 1 are shown in Table 3.
Therefore, in view of Step 17 in Table 3, the sequence generated by Algorithm 1 converges to an approximate critical point $(1.0022, 3.5864)$ of the objective function of (P3).
In the following example, we apply Algorithm 1 employing MATLAB to solve a large-scale IVMOP for different values of n .
Example 4.
Consider the following problem (P4), which belongs to the class of IVMOPs:
$$\text{(P4)} \qquad \text{Minimize } \big( H_1(u_1, u_2, \ldots, u_n),\ H_2(u_1, u_2, \ldots, u_n) \big), \quad \text{subject to } (u_1, \ldots, u_n) \in \mathbb{R}^n,$$
where $H_k : \mathbb{R}^n \to \mathcal{C}(\mathbb{R})$ $(k = 1, 2)$ are defined as follows:
$$H_1(u_1, u_2, \ldots, u_n) := \left[ \sum_{k=1}^{n} (u_k - 1)^2,\ \sum_{k=1}^{n} (u_k + 4)^5 \right], \qquad H_2(u_1, u_2, \ldots, u_n) := \left[ \sum_{k=1}^{n} \big( u_k^3 + u_k^4 \big),\ \sum_{k=1}^{n} (u_k + 1)^2 \right].$$
We consider a random point, obtained using the built-in MATLAB R2024a function rand(n,1), as the initial point of Algorithm 1. We define the stopping criterion as $|\alpha(u^{(s)})| < \epsilon = 10^{-4}$ or reaching a maximum of 5000 iterations. Table 4 presents the number of iterations and the computational times required to solve (P4) using Algorithm 1 for various values of $n$, starting from randomly generated initial points.
The computations were carried out on an Ubuntu system with the following specifications: Memory: 128.0 GiB, Processor: Intel® Xeon® Gold 5415+ (32 cores), and OS type: 64-bit.

5. Conclusions and Future Research Directions

In this article, we have investigated a class of unconstrained IVMOPs. We have defined the HS-direction for IVMOPs and established its descent property at noncritical points. To ensure efficient step size selection, we have employed an Armijo-like line search. Furthermore, we have proposed an HS-type conjugate direction algorithm for IVMOPs and derived the convergence of the sequence generated by the proposed algorithm. Finally, the efficiency of the proposed algorithm has been demonstrated through numerical experiments on convex, locally convex, and large-scale IVMOPs via MATLAB R2024a.
The results established in this article generalize several significant results existing in the literature. Specifically, we have extended the work of Pérez and Prudente [29] on the HS-type conjugate direction method for MOPs to a more general class of optimization problems, namely, IVMOPs. Moreover, it is worth noting that if the conjugate parameter is set to zero and if every component of the objective function of the IVMOP is a real-valued function rather than an interval-valued function, then the proposed algorithm reduces to the steepest descent algorithm for MOPs, as introduced by Fliege and Svaiter [5]. Furthermore, it is imperative to note that the proposed HS-type algorithm is applicable to any IVMOP whose objective function is continuously gH-differentiable. However, Newton's method proposed by Upadhyay et al. [22] requires the objective function of the IVMOP to be twice continuously gH-differentiable. In view of this, our proposed algorithm can be applied to a broader class of optimization problems compared to the algorithms introduced by Upadhyay et al. [22].
It has been observed that all components of the objective function in the considered IVMOP are assumed to be continuously gH-differentiable. Consequently, the findings of this paper are not applicable when the objective functions involved do not satisfy this requirement, which can be considered a limitation of this paper. Moreover, the present work does not explore convergence analysis under alternative line search techniques, such as the Wolfe and Goldstein line searches.
The results presented in this article leave numerous avenues for future research. An important direction for future research is the exploration of hybrid approaches that integrate a convex hybridization of different conjugate direction methods (see, [34]). Another key direction is investigating the conjugate direction method for IVMOPs without employing any line search techniques (see, [38]).

Author Contributions

Conceptualization, B.B.U. and R.K.P.; methodology, B.B.U., R.K.P.; validation, B.B.U., R.K.P., and I.M.S.-M.; formal analysis, B.B.U., R.K.P., and S.P.; writing-review and editing, B.B.U., R.K.P., and S.P.; supervision, B.B.U.

Data Availability Statement

The authors confirm that no data, text, or theories from others are used in this paper without proper acknowledgement.

Acknowledgments

The first author receives financial support from the University Grants Commission, New Delhi, India (UGC-Ref. No.: 1213/(CSIR-UGC NET DEC 2017)). The third author extends gratitude to the Ministry of Education, Government of India, for their financial support through the Prime Minister Research Fellowship (PMRF), granted under PMRF ID-2703573.

Conflicts of Interest

The authors confirm that there are no actual or potential conflicts of interest related to this article.

References

  1. Pareto, V. Manuale di Economia Politica; Societa Editrice: Milano, Italy, 1906.
  2. Miettinen, K. Nonlinear Multiobjective Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1999.
  3. Diao, X.; Li, H.; Zeng, S.; Tam, V.W.; Gho, H. A Pareto multi-objective optimization approach for solving time-cost-quality tradeoff problems. Technol. Econ. Dev. Econ. 2011, 17, 22–41.
  4. Guillén-Gosálbez, G. A novel MILP-based objective reduction method for multi-objective optimization: Application to environmental problems. Comput. Chem. Eng. 2011, 35, 1469–1477.
  5. Fliege, J.; Svaiter, B.F. Steepest descent methods for multicriteria optimization. Math. Methods Oper. Res. 2000, 51, 479–494.
  6. Bento, G.C.; Melo, J.G. Subgradient method for convex feasibility on Riemannian manifolds. J. Optim. Theory Appl. 2012, 152, 773–785.
  7. Upadhyay, B.B.; Singh, S.K.; Stancu-Minasian, I.M.; Rusu-Stancu, A.M. Robust optimality and duality for nonsmooth multiobjective programming problems with vanishing constraints under data uncertainty. Algorithms 2024, 17, 482.
  8. Ehrgott, M. Multicriteria Optimization; Springer: Berlin/Heidelberg, Germany 2005.
  9. Beer, M.; Ferson, S.; Kreinovich, V. Imprecise probabilities in engineering analyses. Mech. Syst. Signal Process. 2013, 37, 4–29.
  10. Chaudhuri, A.; Lam, R.; Willcox, K. Multifidelity uncertainty propagation via adaptive surrogates in coupled multidisciplinary systems. AIAA J. 2018, 56, 235–249.
  11. Qiu, D.; Jin, X.; Xiang, L. On solving interval-valued optimization problems with TOPSIS decision model. Eng. Lett. 2022, 30, 1101–1106.
  12. Lanbaran, N.M.; Celik, E.; Yiğider, M. Evaluation of investment opportunities with interval-valued fuzzy TOPSIS method. Appl. Math. Nonlinear Sci. 2020, 5, 461–474.
  13. Moore, R.E. Method and Applications of Interval Analysis; SIAM: Philadelphia, 1979.
  14. Ishibuchi, H.; Tanaka, H. Multiobjective programming in optimization of the interval objective function. European J. Oper. Res. 1990, 48, 219–225.
  15. Wu, H.-C. The Karush-Kuhn-Tucker optimality conditions in an optimization problem with interval-valued objective function. European J. Oper. Res. 2007, 176, 46–59.
  16. Wu, H.-C. On interval-valued nonlinear programming problems. J. Math. Anal. Appl. 2008, 338, 299–316.
  17. Bhurjee, A.K.; Panda, G. Efficient solution of interval optimization problem. Math. Methods Oper. Res. 2012, 76, 273–288.
  18. Roy, P.; Panda, G.; Qiu, D. Gradient-based descent line search to solve interval-valued optimization problems under gH-differentiability with application to finance. J. Comput. Appl. Math. 2024, 436, 115402.
  19. Maity, G.; Roy, S.K.; Verdegay, J.L. Time variant multi-objective interval-valued transportation problem in sustainable development. Sustainability 2019, 11, 6161.
  20. Zhang, J.; Li, S. The portfolio selection problem with random interval-valued return rates. Int. J. Innov. Comput. Inf. Control 2009, 5, 2847–2856.
  21. Kumar, P.; Bhurjee, A.K. Multi-objective enhanced interval optimization problem. Ann. Oper. Res. 2022, 311, 1035–1050.
  22. Upadhyay, B.B.; Pandey, R.K.; Liao, S. Newton’s method for interval-valued multiobjective optimization problem. J. Ind. Manag. Optim. 2024, 20, 1633–1661.
  23. Upadhyay, B.B.; Pandey, R.K.; Pan, J.; Zeng, S. Quasi-Newton algorithms for solving interval-valued multiobjective optimization problems by using their certain equivalence. J. Comput. Appl. Math. 2024, 438, 115550.
  24. Upadhyay, B.B.; Li, L.; Mishra, P. Nonsmooth interval-valued multiobjective optimization problems and generalized variational inequalities on Hadamard manifolds. Appl. Set-Valued Anal. Optim. 2023, 5, 69–84.
  25. Luo, S.; Guo, X. Multi-objective optimization of multi-microgrid power dispatch under uncertainties using interval optimization. J. Ind. Manag. Optim. 2023, 19, 823–851.
  26. Upadhyay, B.B.; Pandey, R.K.; Zeng, S. A generalization of generalized Hukuhara Newton’s method for interval-valued multiobjective optimization problems. Fuzzy Sets Syst. 2024, 492, 109066.
  27. Upadhyay, B.B.; Pandey, R.K.; Zeng, S.; Singh, S.K. On conjugate direction-type method for interval-valued multiobjective quadratic optimization problems. Numer. Algorithms 2024. [CrossRef]
  28. Hestenes, M.R.; Stiefel, E. Methods of conjugate gradients for solving linear systems. J. Res. Nat. Bur. Standards 1952, 49, 409–436.
  29. Pérez, L.R.; Prudente, L.F. Nonlinear conjugate gradient methods for vector optimization. SIAM J. Optim. 2018, 28, 2690–2720.
  30. Sarkar, T.; Rao, S. The application of the conjugate gradient method for the solution of electromagnetic scattering from arbitrarily oriented wire antennas. IEEE Trans. Antennas Propag. 1984, 32, 398–403.
  31. Pandey, V.; Bekele, A.; Ahmed, G.M.S.; Kanu, N.J. An application of conjugate gradient technique for determination of thermal conductivity as an inverse engineering problem. Mater. Today Proc. 2021, 47, 3082–3087.
  32. Frank, M.S.; Balanis, C.A. A conjugate direction method for geophysical inversion problems. IEEE Trans. Geosci. Remote Sens. 2007, 25, 691–701.
  33. Wang, C.; Zhao, Y.; Tang, L.; Yang, X. Conjugate gradient methods without line search for multiobjective optimization. arXiv preprint 2023, arXiv:2312.02461. Available online: https://arxiv.org/abs/2312.02461.
  34. Khoshsimaye-Bargard, M.; Ashrafi, A. A projected hybridization of the Hestenes-Stiefel and Dai-Yuan conjugate gradient methods with application to nonnegative matrix factorization. J. Appl. Math. Comput. 2025, 71, 551-571.
  35. Stefanini, L.; Bede, B. Generalized Hukuhara differentiability of interval-valued functions and interval differential equations. Nonlinear Anal. 2009, 71, 1311–1328.
  36. Stefanini, L.; Arana-Jiménez, M. Karush-Kuhn-Tucker conditions for interval and fuzzy optimization in several variables under total and directional generalized differentiability. Fuzzy Sets Syst. 2019, 362, 1–34.
  37. Gilbert, J.C.; Nocedal, J. Global convergence properties of conjugate gradient methods for optimization. SIAM J. Optim. 1992, 2, 21–42.
  38. Nazareth, L. A conjugate direction algorithm without line searches. J. Optim. Theory Appl. 1977, 23, 373–387.
Figure 1. Approximate locally weak effective solutions generated from 100 uniformly distributed random initial points.
Table 1. Sequence generated by Algorithm 1 for the problem (P1).
$s$      $u^{(s)}$                      $w^{(s)}$                          $\eta^{(s)}$      $|\alpha(u^{(s)})|$
0      (14, 17, 11)                  (-28, -30, -16)                   0.0625      970
1      (12.25, 15.125, 10)           (-24.5, -26.25, -14)              0.0625      742.66
2      (10.719, 13.484, 9.125)       (-21.437, -22.969, -12.25)        0.0625      568.6
3      (9.3789, 12.049, 8.3594)      (-18.758, -20.098, -10.719)       0.0625      435.33
4      (8.2065, 10.793, 7.6895)      (-16.413, -17.585, -9.3789)       0.125       333.3
5      (6.1549, 8.5945, 6.5171)      (-12.31, -13.189, -7.0342)        0.125       187.48
6      (4.6162, 6.9459, 5.6378)      (-9.2324, -9.8918, -5.2756)       0.125       105.46
7      (3.4621, 5.7094, 4.9784)      (-6.9243, -7.4189, -3.9567)       0.25        59.32
8      (1.7311, 3.8547, 3.9892)      (-3.4621, -3.7094, -1.9784)       0.5         14.83
9      (2.5519 × 10^{-8}, 2, 3)      (0.00011572, 0.00062104, 0.00096434)      0      4.0092 × 10^{-8}
Table 2. Sequence generated by Algorithm 1 for the problem (P2).
$s$      $u^{(s)}$                $w^{(s)}$                       $\eta^{(s)}$      $|\alpha(u^{(s)})|$
0      (-4, 12)                (10, -22)                      0.125      292
1      (-2.75, 9.25)           (7.5, -16.5)                   0.25       164.25
2      (-0.875, 5.125)         (3.75, -8.25)                  0.5        41.062
3      (1, 1)                  (0.00099217, 0.00099217)       0          1.2 × 10^{-6}
Table 3. Sequence generated by Algorithm 1 for the problem (P3).
$s$      $u^{(s)}$                $w^{(s)}$                          $\eta^{(s)}$      $|\alpha(u^{(s)})|$
0      (9, 4)                  (-7, -0.00099728)                 1        24.5
1      (2, 3.999)              (-0.4555, -0.14238)               0.5      0.11387
2      (1.7723, 3.9278)        (-0.34145, -0.12352)              0.5      0.06592
3      (1.6015, 3.8661)        (-0.25917, -0.10383)              0.5      0.038973
4      (1.4719, 3.8141)        (-0.19892, -0.085853)             0.5      0.023469
5      (1.3725, 3.7712)        (-0.15416, -0.070341)             0.5      0.014354
6      (1.2954, 3.736)         (-0.12042, -0.057329)             0.5      0.008892
7      (1.2352, 3.7074)        (-0.094691, -0.046586)            0.5      0.0055667
8      (1.1878, 3.6841)        (-0.074865, -0.03779)             1        0.0035148
9      (1.113, 3.6463)         (-0.044284, -0.023278)            1        0.0012498
10     (1.0687, 3.623)         (-0.026646, -0.014336)            1        0.00045745
11     (1.0421, 3.6087)        (-0.016225, -0.0089043)           1        0.00017037
12     (1.0258, 3.5998)        (-0.0099184, -0.0054681)          1        6.4004 × 10^{-5}
13     (1.0159, 3.5943)        (-0.0061017, -0.0034133)          1        2.4231 × 10^{-5}
14     (1.0098, 3.5909)        (-0.0038132, -0.002264)           1        8.5523 × 10^{-6}
15     (1.006, 3.5886)         (-0.0023159, -0.0013489)          1        3.2717 × 10^{-6}
16     (1.0037, 3.5873)        (-0.0014456, -0.00089985)         1        1.1303 × 10^{-6}
17     (1.0022, 3.5864)        (-0.00091828, -0.00065009)        1        3.1304 × 10^{-7}
Table 4. The numerical results of Algorithm 1 for the problem (P4).
$n$      Number of iterations      Computation time (in seconds)
100 4498 414.3
120 4336 436.7
140 4641 468.8
160 4895 525.6
180 4880 586.8
200 4619 628.2
220 4886 727.7
240 4686 810.5
260 4707 896.6
280 5000 1044.4
300 5000 1148.0