Optimality Conditions Under a New Constraint Qualification for Nonconvex Optimization Problems

Preprint (not peer-reviewed). Submitted: 22 November 2025; posted: 26 November 2025.
Abstract
The aim of this paper is to give necessary and sufficient optimality conditions for nonconvex optimization problems whose constraint inequalities are both nonpositive and nonnegative, and whose objective and constraint functions are tangentially convex but not necessarily convex. We do this by first introducing a novel constraint qualification, called “tangential nearly convexity” ((TNC), for short). Next, by using the cone of tangential subdifferentials together with this novel constraint qualification, we show that the Karush-Kuhn-Tucker (KKT) conditions are necessary and sufficient for optimality. Several examples are presented to clarify the novel constraint qualification and to compare it with other well-known constraint qualifications.

1. Introduction

As a general rule, to present a checkable necessary and sufficient optimality condition for a constrained optimization problem, one needs to assume certain properties of the constraint system, called constraint qualifications. The subject of constraint qualifications is of significant importance in optimization. Indeed, constraint qualifications are cornerstones for the study of convex and nonconvex problems, and they guarantee necessary and sufficient conditions for optimality. Since constraint qualifications play an important role in optimization and mathematical programming, there has been fruitful work on this topic, and several types of constraint qualifications (involving epigraphs or subdifferentials) have been extensively studied and used in various branches of optimization and mathematical programming. Moreover, constraint qualifications have been used to study Fenchel duality and formulas for subdifferentials of convex functions. It is worth noting that nonconvex functions frequently appear in mathematical programming, and several types of normal cones and subdifferentials for the nonconvex case have been extensively studied in optimization and its applications [1,2,3,7,13,14,15,16,18,19,20,21,22,23,24,25,27,28,30,32,33,34,35,36,38,40]. It is therefore natural to further study constraint qualifications via normal cones and subdifferentials for nonconvex and nonsmooth inequalities. To this end, we consider the following nonconvex nonsmooth constrained optimization problem:
$(P)\qquad \min f(x) \quad \text{s.t.} \quad g_j(x) \le 0,\ j = 1,\dots,m; \quad g_j(x) \ge 0,\ j = m+1,\dots,m+l; \quad x \in \mathbb{R}^n,\ m, l \in \mathbb{N},$ (1)
where the feasible set $F$ is defined by:
$F := C \cap K,$
with $K$ defined as follows:
$K := \{x \in \mathbb{R}^n : g_j(x) \le 0,\ j = 1,\dots,m;\ g_j(x) \ge 0,\ j = m+1,\dots,m+l\},$
and $C$ is a nonempty subset of $\mathbb{R}^n$ such that $C \cap K \ne \emptyset$. Also, $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is the objective function and $g_j : \mathbb{R}^n \to \mathbb{R}$ $(j = 1,\dots,m+l)$ are the constraint functions.
In fact, establishing optimality conditions for the optimization Problem $(P)$ is fundamental in both theory and practice. Of course, an optimality condition is preferable when it is both necessary and sufficient, but such conditions are typically valid only under certain assumptions on the optimization Problem $(P)$, for example, differentiability, smoothness or convexity. Thus, in the past decades, a great deal of attention has been given to investigating optimality conditions for scalar optimization problems (see, for example, [1,3,7,14,15,19,20,21,29,30,31,35,38]).
The results mentioned above have motivated us to investigate the class of tangentially convex functions, which are not necessarily convex or smooth. It should be noted that only a few works have dealt with optimization problems with such constraint functions [15,30,35,38]. In this paper, we remove the convexity of the feasible set and of the objective function. We consider a nonconvex nonsmooth optimization problem with both nonpositive and nonnegative constraint inequalities, called the Problem $(P)$, whose objective function is tangentially convex and whose active constraint functions are tangentially convex at a given feasible point, but not necessarily convex or differentiable; moreover, the feasible set is not convex. Our aim is to present a condition on a nonconvex feasible set defined by tangentially convex functions and to provide a novel constraint qualification that guarantees the Karush-Kuhn-Tucker (KKT) conditions are necessary and sufficient for optimality of the Problem $(P)$. It is worth noting that in all of the works mentioned above the objective function and the constraint functions are differentiable or convex, the feasible set is convex and the constraint inequalities are only nonpositive, whereas in the present paper all functions are nonconvex and nonsmooth, the feasible set is not convex, and the constraint inequalities are both nonpositive and nonnegative. Some examples are presented to clarify the novel constraint qualification and to compare it with other well-known constraint qualifications. Consequently, our results recapture the corresponding known results of [7,8,9,14,15,26,30,35,38].
Note that the Karush-Kuhn-Tucker (KKT) conditions [10,12] are among the most common optimality conditions, and that establishing the (KKT) optimality conditions depends on the representation of the set of feasible points. Thus, the (KKT) conditions and constraint qualifications play a key role in the study of optimization problems. Recently, the (KKT) conditions and constraint qualifications have been studied by many authors for convex and nonconvex scalar optimization problems: [1,2,3,7,13,14,15,16,18,19,20,21,23,24,30].
The presentation of the paper is as follows. Some definitions, basic facts and important auxiliary results related to nonconvex and nonsmooth analysis are presented in Section 2. In Section 3, we present a novel constraint qualification, called “tangential nearly convexity” ((TNC), for short), and compare it with other well-known constraint qualifications. Moreover, sufficient conditions for pseudoconvexity of tangentially convex functions are given. Finally, necessary and sufficient optimality conditions for the nonconvex nonsmooth optimization Problem $(P)$ with nonnegative and nonpositive constraints, where the objective function and the constraint functions are tangentially convex, are presented in Section 4. We also give several examples to clarify the novel constraint qualification and to compare it with other well-known constraint qualifications.

2. Preliminaries

In this section, we gather some basic definitions, notations and results related to nonconvex and nonsmooth analysis (see [3,11]) that will be used in the sequel. Our notation is basically standard. Throughout this paper, we denote the $n$-dimensional Euclidean space by $\mathbb{R}^n$, and the inner product of two vectors $x, y \in \mathbb{R}^n$ by $\langle x, y \rangle$. We use the Euclidean norm, i.e., $\|x\| := \sqrt{\langle x, x \rangle}$ for each $x \in \mathbb{R}^n$. The closed unit ball and the set of positive integers are denoted by $B$ and $\mathbb{N}$, respectively.
Let $D$ be a non-empty subset of $\mathbb{R}^n$. We denote the convex hull, the conical hull, the cone generated by $D$ and the closure of $D$ by $\mathrm{conv}(D)$, $\mathrm{conic}(D)$, $\mathrm{cone}(D)$ and $\mathrm{cl}\, D$, respectively.
We define the negative polar cone (dual cone) of a set $W \subseteq \mathbb{R}^n$ [4,12] by:
$W^- := \{\lambda \in \mathbb{R}^n : \langle \lambda, y \rangle \le 0,\ \forall y \in W\}.$
For a point $\bar x \in W$, we consider the tangent cone $T_W(\bar x)$ of $W$ at $\bar x$ and the normal cone $N_W(\bar x) := (T_W(\bar x))^-$ of $W$ at $\bar x$ (for more details, see [3,10]). Also, we refer the reader to [31,39] for the definition of a tangentially convex function: recall that $g$ is said to be tangentially convex at $\bar x$ if the directional derivative $g'(\bar x, d)$ exists, is finite for every $d \in \mathbb{R}^n$, and is a convex function of $d$. The concept of a tangentially convex function suggests the following definition of subdifferentials.
Definition 1. 
[6,31,39] The tangential subdifferential of a function $g : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ at a point $\bar x \in \mathrm{dom}\, g$ is defined as follows:
$\partial_T g(\bar x) := \{v \in \mathbb{R}^n : \langle v, d \rangle \le g'(\bar x, d),\ \forall d \in \mathbb{R}^n\},$
where $g'(\bar x, d)$ denotes the directional derivative of the function $g$ at the point $\bar x$ in the direction $d$.
For a function $g : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ which is tangentially convex at the point $\bar x$, one has
$g'(\bar x, d) = \max_{v \in \partial_T g(\bar x)} \langle v, d \rangle, \quad \forall d \in \mathbb{R}^n.$ (3)
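To make formula (3) concrete, the following small Python script numerically checks it for the tangentially convex function $g(x) = |x|$ at $\bar x = 0$, where $\partial_T g(0) = [-1, 1]$; the function, the point and the sampling are our illustrative choices, not taken from the sources cited above.

    import numpy as np

    # Check g'(0; d) = max_{v in the tangential subdifferential} <v, d>
    # for g(x) = |x|, whose tangential subdifferential at 0 is [-1, 1].
    def dir_deriv(g, x, d, t=1e-8):
        # One-sided difference approximation of g'(x; d).
        return (g(x + t * d) - g(x)) / t

    g = abs
    subdiff = np.linspace(-1.0, 1.0, 201)  # dense sample of [-1, 1]

    for d in (-2.0, -0.5, 1.0, 3.0):
        lhs = dir_deriv(g, 0.0, d)         # g'(0; d) = |d|
        rhs = max(subdiff * d)             # max over the sampled subdifferential
        assert abs(lhs - rhs) < 1e-6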
We now present the following proposition for the large class of tangentially convex functions, which is essential for presenting optimality conditions (see Lemma 4).
Proposition 1. 
Let $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ be tangentially convex and continuous in a neighborhood of a point $\bar x \in \mathrm{dom}\, f$. Then the following assertions hold:
(i)
For each $y \in \mathbb{R}^n$, the directional derivative function $f'(\cdot\,, \cdot)$ is continuous at the point $(\bar x, y)$.
(ii)
The set-valued function $\partial_T f(\cdot)$ is upper semicontinuous at the point $\bar x$.
Proof. (i). Let $y \in \mathbb{R}^n$ be arbitrary, and let $\{x_k\}_{k \ge 1}$ and $\{y_k\}_{k \ge 1}$ be sequences in $\mathbb{R}^n$ such that $x_k \to \bar x$ and $y_k \to y$. Since $f$ is continuous in a neighborhood of $\bar x$, for each $\alpha > 0$ we have
$\lim_{k \to \infty} \dfrac{f(x_k + \alpha y_k) - f(x_k)}{\alpha} = \dfrac{f(\bar x + \alpha y) - f(\bar x)}{\alpha}.$ (4)
Let $\epsilon > 0$ be given. Then, in view of (4), we get
$\dfrac{f(\bar x + \alpha y) - f(\bar x)}{\alpha} - \epsilon < \dfrac{f(x_k + \alpha y_k) - f(x_k)}{\alpha} < \dfrac{f(\bar x + \alpha y) - f(\bar x)}{\alpha} + \epsilon,$ (5)
for all $\alpha > 0$ and all sufficiently large $k \in \mathbb{N}$. Now, since $f$ is tangentially convex in a neighborhood of $\bar x$, by letting $\alpha \to 0^+$ in (5), it follows that
$f'(\bar x, y) - \epsilon \le f'(x_k, y_k) \le f'(\bar x, y) + \epsilon,$
for all sufficiently large $k \in \mathbb{N}$. Hence,
$\lim_{k \to \infty} f'(x_k, y_k) = f'(\bar x, y),$
which implies that $f'(\cdot\,, \cdot)$ is continuous at the point $(\bar x, y)$.
(ii). Let $\{x_k\}_{k \ge 1} \subseteq \mathbb{R}^n$ be any sequence converging to $\bar x$, and let $d_k \in \partial_T f(x_k)$ for all $k \ge 1$. We claim that $\{d_k\}_{k \ge 1}$ is bounded. Assume on the contrary, without loss of generality, that $\|d_k\| \to \infty$. Put
$z_k := \dfrac{d_k}{\|d_k\|}, \quad k \ge 1.$
Since $d_k \in \partial_T f(x_k)$ for all $k \ge 1$, in view of Definition 1 one has
$\|d_k\| = \langle d_k, z_k \rangle \le f'(x_k, z_k), \quad \forall k \ge 1.$ (6)
On the other hand, since $\{z_k\}_{k \ge 1}$ is bounded in $\mathbb{R}^n$, it has a convergent subsequence. We may assume without loss of generality that $\{z_k\}_{k \ge 1}$ converges to some point $\bar y \in \mathbb{R}^n$. In view of part (i), letting $k \to \infty$ in (6), we conclude that $f'(\bar x, \bar y) = +\infty$, which contradicts the tangential convexity of $f$ at the point $\bar x$. Therefore, the claim is true, and so $\{d_k\}_{k \ge 1}$ is bounded. Now, let $y \in \mathbb{R}^n$ be arbitrary. Then
$\langle d_k, y \rangle \le f'(x_k, y), \quad \forall k \ge 1.$ (7)
Let $d \in \mathbb{R}^n$ be a limit point of $\{d_k\}_{k \ge 1}$. By taking limits along the relevant subsequences in the inequality (7) and by using part (i), it follows that
$\langle d, y \rangle \le f'(\bar x, y), \quad \forall y \in \mathbb{R}^n.$
Hence, $d \in \partial_T f(\bar x)$, and by [10], Proposition 4.1.2, the set-valued function $\partial_T f(\cdot)$ is upper semicontinuous at the point $\bar x$. □

3. Tangential Nearly Convexity

In this section, we present a novel constraint qualification, called “tangential nearly convexity” ((TNC), for short), and compare it with the constraint qualification called “tangential constraint qualification” ((TCQ), for short) in [8] and with other well-known constraint qualifications. To this end, we consider the nonconvex nonsmooth constrained optimization Problem $(P)$ given by (1). Moreover, we assume that the functions $f, g_j$ $(j = 1,\dots,m+l)$ are tangentially convex at a given point $\bar x \in F$. Now, put
$I(\bar x) := \{j \in \{1,\dots,m\} : g_j(\bar x) = 0\},$ (8)
and
$J := \{j \in \{m+1,\dots,m+l\} : g_j(x) = 0,\ \forall x \in C\}.$ (9)
Now, we define
$M(\bar x) := \mathrm{conic}\Big(\bigcup_{j \in I(\bar x) \cup J} \partial_T g_j(\bar x)\Big) = \Big\{\sum_{j \in I(\bar x) \cup J} \lambda_j v_j : v_j \in \partial_T g_j(\bar x),\ \lambda_j \ge 0,\ j \in I(\bar x) \cup J\Big\}.$ (10)
Clearly, $M(\bar x)$ is a convex cone. However, $M(\bar x)$ is not necessarily closed. Let $S$ be the set of optimal solutions of the Problem $(P)$, i.e.,
$S := \{x \in F : f(x) = \min_{y \in F} f(y)\}.$ (11)
We refer the reader to [6] for the definitions of nonsmooth versions of the well-known constraint qualifications. It is worth noting that the nonsmooth Guignard’s constraint qualification ((NGCQ), for short) is the weakest among them (see [6]).
We now give the following novel constraint qualification, which plays a crucial role throughout the paper.
Definition 2. (Tangential Nearly Convexity ((TNC), for short)) Let $K \subseteq \mathbb{R}^n$, and let $x \in K$ be arbitrary. Then, $K$ is said to be “tangentially nearly convex” at the point $x$ (equivalently, the “tangential nearly convexity” holds at the point $x$) if, for each $y \in K$, there exist sequences $\{y_k\}_{k \ge 1} \subseteq \mathbb{R}^n$ and $\{t_k\}_{k \ge 1} \subseteq \mathbb{R}_{++}$ such that $y_k \to y$ and $t_k \to 0^+$ and that $x + t_k(y_k - x) \in K$ for all sufficiently large $k \in \mathbb{N}$. Moreover, $K$ is said to be tangentially nearly convex (equivalently, the tangential nearly convexity holds) if $K$ is tangentially nearly convex at each of its points.
Remark 1. 
It is obvious that if $K \subseteq \mathbb{R}^n$ is nearly convex at a point $x \in K$ (we refer the reader to [17,26] for the definition of a nearly convex set), then $K$ is tangentially nearly convex at $x$. However, the converse statement is not necessarily true, as the following example shows.
Example 1. 
Let the set $K \subseteq \mathbb{R}^2$ be given as follows:
$K := \{(x_1, x_2) \in \mathbb{R}^2 : x_2 \ge x_1^2\} \cup \{(1, 0)\}.$
It is clear that the set $K$ is not nearly convex at the point $\bar x := (0, 0) \in K$, because for $x = (1, 0) \in K$ and for all $0 < t < 1$, one has
$\bar x + t(x - \bar x) = (t, 0) \notin K.$
But it is easy to see that $K$ is tangentially nearly convex at the point $\bar x$. Indeed, let $x \in K$ be arbitrary. Consider two possible cases:
  • Case (1): Let $x \ne (1, 0)$. In this case, one can easily see, for any $0 < t < 1$, that
    $\bar x + t(x - \bar x) = tx \in K.$
  • Case (2): Let $x = (1, 0)$. Define the sequences $\{x_m\}_{m \ge 1} \subseteq \mathbb{R}^2$ and $\{t_m\}_{m \ge 1} \subseteq \mathbb{R}_{++}$ by $x_m := (1, \frac{1}{m})$ and $t_m := \frac{1}{m}$, $m = 1, 2, \dots$. Then, $x_m \to x$ and $t_m \to 0^+$, and moreover,
    $\bar x + t_m(x_m - \bar x) = t_m x_m \in K, \quad \forall m \ge 1.$
Therefore, in both cases we have shown, for each $x \in K$, that there exist sequences $\{x_m\}_{m \ge 1} \subseteq \mathbb{R}^2$ and $\{t_m\}_{m \ge 1} \subseteq \mathbb{R}_{++}$ such that $x_m \to x$ and $t_m \to 0^+$ and that
$\bar x + t_m(x_m - \bar x) = t_m x_m \in K, \quad \forall m \ge 1.$
Hence, $K$ is “tangentially nearly convex” at the point $\bar x = (0, 0)$.
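The construction in Case (2) can also be verified numerically; the following sketch (under the description of $K$ given above) checks that the points $\bar x + t_m(x_m - \bar x) = (\frac{1}{m}, \frac{1}{m^2})$ satisfy $x_2 \ge x_1^2$ and hence lie in $K$.

    import numpy as np

    # Example 1, Case (2): x_bar = (0, 0), x_m = (1, 1/m), t_m = 1/m.
    for m in range(1, 100):
        x_m = np.array([1.0, 1.0 / m])    # x_m -> x = (1, 0)
        t_m = 1.0 / m                     # t_m -> 0+
        p = t_m * x_m                     # x_bar + t_m (x_m - x_bar) = (1/m, 1/m^2)
        assert p[1] >= p[0] ** 2 - 1e-12  # p lies in {x2 >= x1^2}, a subset of K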
The following lemma plays a crucial role in proving the main results.
Lemma 1. 
Consider the optimization Problem $(P)$. Assume that $\bar x \in F$ is arbitrary and that the tangential constraint qualification (TCQ) holds at the point $\bar x$. Then the following assertions hold:
(i)
The “tangential nearly convexity” (TNC) holds at the point $\bar x$.
(ii)
$T_F(\bar x) = \mathrm{cl}(\mathrm{cone}(C - \bar x)) = \mathrm{cl}(\mathrm{cone}(F - \bar x))$. Hence, $T_F(\bar x)$ is convex if $C$ is convex.
(iii)
$(T_F(\bar x))^- = (C - \bar x)^- = (F - \bar x)^-$.
Proof. (i). Let $z \in F$ be arbitrary. Since the tangential constraint qualification (TCQ) holds at the point $\bar x$, one has
$z - \bar x \in F - \bar x = (C \cap K) - \bar x \subseteq C - \bar x \subseteq T_F(\bar x).$
So, $z - \bar x \in T_F(\bar x)$. Therefore, by the definition of the tangent cone, there exist sequences $\{x_k\}_{k \ge 1} \subseteq F$ and $\{t_k\}_{k \ge 1} \subseteq \mathbb{R}_{++}$ such that $x_k \to \bar x$ and $t_k \to 0^+$ and that
$\dfrac{x_k - \bar x}{t_k} \to z - \bar x, \quad \text{as } k \to \infty.$ (12)
Now, for each $k \ge 1$, define $z_k := \bar x + \frac{x_k - \bar x}{t_k}$. Then, by (12), $z_k \to z$ as $k \to \infty$. Moreover, one has
$\bar x + t_k(z_k - \bar x) = x_k \in F, \quad \forall k \ge 1,$
which implies that the “tangential nearly convexity” holds at the point $\bar x$.
(ii). Since the tangential constraint qualification (TCQ) holds at the point $\bar x$ and $F = C \cap K$, it is easy to show that
$C - \bar x \subseteq T_F(\bar x) \subseteq \mathrm{cl}(\mathrm{cone}(F - \bar x)) \subseteq \mathrm{cl}(\mathrm{cone}(C - \bar x)).$
Since $T_F(\bar x)$ is a closed cone, we conclude that
$\mathrm{cl}(\mathrm{cone}(C - \bar x)) \subseteq T_F(\bar x) \subseteq \mathrm{cl}(\mathrm{cone}(F - \bar x)) \subseteq \mathrm{cl}(\mathrm{cone}(C - \bar x)).$
This implies that
$T_F(\bar x) = \mathrm{cl}(\mathrm{cone}(C - \bar x)) = \mathrm{cl}(\mathrm{cone}(F - \bar x)),$
which completes the proof of assertion (ii).
(iii). This is an immediate consequence of assertion (ii). □
By the following example, we illustrate Lemma 1.
Example 2. 
Let $K := \mathrm{epi}\, g$, where the function $g : \mathbb{R} \to \mathbb{R}$ and the epigraph of $g$ are defined by:
$g(x) := \cos x, \quad x \in \mathbb{R},$
and
$\mathrm{epi}\, g := \{(x, \lambda) \in \mathbb{R} \times \mathbb{R} : g(x) \le \lambda\},$
respectively. Let
$C := \{(x_1, x_2) \in \mathbb{R}^2 : x_2 \ge -1\}.$
Therefore, we have $F := C \cap K = K$. Let $\bar x := (\pi, -1) \in F$. It is obvious that $F$ is not nearly convex at the point $\bar x$, because for $x = (-\pi, -1) \in F$, one has
$\bar x + t(x - \bar x) \notin K, \quad \forall\, 0 < t < 1.$
Moreover, we have
$C - \bar x = T_F(\bar x) = \{(x_1, x_2) \in \mathbb{R}^2 : x_2 \ge 0\}.$
Therefore, the tangential constraint qualification (TCQ) holds at the point $\bar x$. Thus, in view of Lemma 1(i), the “tangential nearly convexity” holds at the point $\bar x$. Furthermore, assertions (ii) and (iii) of Lemma 1 also hold.
By the following example, we now show that the “tangential nearly convexity” (TNC) does not imply that the “tangential constraint qualification” (TCQ) holds, while the converse implication always holds (see Lemma 1(i)).
Example 3. 
Let $K \subseteq \mathbb{R}^2$ be defined by:
$K := \{(x_1, x_2) \in \mathbb{R}^2 : 2x_1^2 - x_2 \le 0\},$
and let $C := \mathbb{R}^2$. Then, $F := C \cap K = K$. It is clear that $F$ is convex. Now, let $\bar x := (0, 0) \in F$. One can easily see that
$T_F(\bar x) = \{(d_1, d_2) \in \mathbb{R}^2 : d_2 \ge 0\} \subsetneq \mathbb{R}^2 = C - \bar x.$
Hence, the “tangential constraint qualification” (TCQ) does not hold at the point $\bar x$, while the “tangential nearly convexity” (TNC) holds at the point $\bar x$, because $F$ is convex.
The following example shows that the constraint qualification (TCQ) holds, but the nonsmooth Guignard’s constraint qualification (NGCQ) does not hold.
Example 4.
Let
$K := \{(x_1, x_2) \in \mathbb{R}^2 : g_1(x_1, x_2) := -x_1 + |x_2| \le 0,\ g_2(x_1, x_2) := 2x_1^2 + 2x_2^2 + 1 \ge 0\},$
and $C := [0, +\infty) \times \{0\}$. Then, $F = C \cap K = C$, which is a convex set. So, $T_F(\bar x) = [0, +\infty) \times \{0\}$, where $\bar x := (0, 0) \in F$. Moreover, we have $g_1(\bar x) = 0$, and so, in view of (8), (9) and the definition of $K$, one has $I(\bar x) = \{1\}$ and $J = \emptyset$. Also, $g_1'(\bar x, t) = |t_2| - t_1$ and $g_2'(\bar x, t) = 0$ for all $t := (t_1, t_2) \in \mathbb{R}^2$, and thus $g_1$ and $g_2$ are tangentially convex at the point $\bar x$. Then, $\partial_T g_1(\bar x) = \mathrm{conv}\{(-1, 1), (-1, -1)\}$ and $\partial_T g_2(\bar x) = \{(0, 0)\}$. Hence, by (10), we have
$M(\bar x) = \{(x_1, x_2) \in \mathbb{R}^2 : x_1 \le 0,\ x_1 \le x_2 \le -x_1\}.$
Thus,
$M(\bar x)^- = \{(x_1, x_2) \in \mathbb{R}^2 : x_1 \ge 0,\ -x_1 \le x_2 \le x_1\}.$
Clearly, $C - \bar x = T_F(\bar x)$, $M(\bar x)^- \ne T_F(\bar x)$ and $M(\bar x)^- \not\subseteq \mathrm{cl}(\mathrm{conv}(T_F(\bar x)))$. Therefore, the constraint qualification (TCQ) holds at the point $\bar x$, while the nonsmooth Guignard’s constraint qualification (NGCQ) does not hold at the point $\bar x$.
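The failure of (NGCQ) in Example 4 can be checked with a short computation; the sketch below (using the sets computed above) exhibits the point $(1, 1) \in M(\bar x)^-$ that does not belong to $\mathrm{cl}(\mathrm{conv}(T_F(\bar x))) = [0, +\infty) \times \{0\}$.

    import numpy as np

    # Generators of the tangential subdifferential of g_1 at x_bar in Example 4:
    # conv{(-1, 1), (-1, -1)}; their conic hull is M(x_bar).
    w1, w2 = np.array([-1.0, 1.0]), np.array([-1.0, -1.0])

    p = np.array([1.0, 1.0])
    assert p @ w1 <= 0 and p @ w2 <= 0  # p is in the polar cone of M(x_bar)
    assert p[1] != 0.0                  # but p is not in [0, +inf) x {0}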
The following example shows that the constraint qualification (TCQ) does not hold, while the nonsmooth Guignard’s constraint qualification (NGCQ) holds.
Example 5.
Let $K := \{(x_1, x_2) \in \mathbb{R}^2 : g_j(x_1, x_2) \le 0,\ j = 1, 2;\ g_3(x_1, x_2) \ge 0\}$ with $g_1(x_1, x_2) := 1 - (x_1 - 5)^2 - 2x_2^2$, $g_2(x_1, x_2) := |x_2| - x_1$ and $g_3(x_1, x_2) := 2x_1^2 + 3$ for all $(x_1, x_2) \in \mathbb{R}^2$. Let $C := \mathbb{R}_+ \times \mathbb{R}$. We see that $F := C \cap K$ is a nonconvex set. It is easy to check that $g_1(\bar x) = -24 \le 0$, $g_2(\bar x) = 0$ and $g_3(\bar x) = 3 \ge 0$, where $\bar x := (0, 0) \in F$. Thus, in view of (8), (9) and the definition of $K$, one has $I(\bar x) = \{2\}$ and $J = \emptyset$. The functions $g_1$, $g_2$ and $g_3$ are tangentially convex at the point $\bar x$ because $g_1'(\bar x, t) = 10 t_1$, $g_2'(\bar x, t) = |t_2| - t_1$ and $g_3'(\bar x, t) = 0$ for all $t := (t_1, t_2) \in \mathbb{R}^2$. Moreover, we obtain that $\partial_T g_2(\bar x) = \mathrm{conv}\{(-1, 1), (-1, -1)\}$, and so, by (10),
$M(\bar x) = \{(x_1, x_2) \in \mathbb{R}^2 : |x_2| + x_1 \le 0\}.$
Therefore,
$M(\bar x)^- = \{(x_1, x_2) \in \mathbb{R}^2 : |x_2| \le x_1\}.$
It is easy to show that $T_F(\bar x) = \{(v_1, v_2) \in \mathbb{R}^2 : |v_2| \le v_1\}$, and hence, $C - \bar x \not\subseteq T_F(\bar x)$. Thus, the constraint qualification (TCQ) does not hold at the point $\bar x$, while $T_F(\bar x) = M(\bar x)^-$, and so, $M(\bar x)^- = \mathrm{cl}(\mathrm{conv}(T_F(\bar x)))$ because $T_F(\bar x)$ is closed and convex. Thus, the nonsmooth constraint qualification (NGCQ) holds at the point $\bar x$.
Remark 2. 
In view of Example 3, one can easily see that the “tangential nearly convexity” (TNC) is weaker than the constraint qualification (TCQ) (note that, in view of Lemma 1(i), (TCQ) always implies (TNC)). Therefore, as a consequence, it should be noted that, in addition to the ease of using the constraint qualification (TNC), an important advantage of (TNC) is that it is a constraint qualification under which the (KKT) conditions are “necessary and sufficient” for optimality of the nonconvex nonsmooth optimization Problem $(P)$ without any further assumption (see Theorem 3 and Theorem 4 in Section 4), while the nonsmooth constraint qualification (NGCQ) (which is weaker than the other well-known nonsmooth constraint qualifications, see [6]), together with a further assumption (closedness of the convex cone $M(\bar x)$), implies that the (KKT) conditions are only “necessary” for optimality of the Problem $(P)$ (see [7,14]).
In the following, we give a characterization of the novel constraint qualification (TNC), which will be used in Section 4.
Lemma 2. 
Let $D \subseteq \mathbb{R}^n$ be a set, and let $\bar x \in D$ be arbitrary. Then, the “tangential nearly convexity” (TNC) holds at the point $\bar x$ if and only if $T_D(\bar x) = \mathrm{cl}(\mathrm{cone}(D - \bar x))$. As a consequence, we have $(T_D(\bar x))^- = (D - \bar x)^-$.
Proof.
($\Rightarrow$) By the definition of the tangent cone, we always have
$T_D(\bar x) \subseteq \mathrm{cl}(\mathrm{cone}(D - \bar x)).$ (13)
Now, we show that
$\mathrm{cl}(\mathrm{cone}(D - \bar x)) \subseteq T_D(\bar x).$
To this end, let $z \in D$ be arbitrary. Since, by the hypothesis, the “tangential nearly convexity” holds at the point $\bar x$, there exist sequences $\{z_k\}_{k \ge 1} \subseteq \mathbb{R}^n$ and $\{t_k\}_{k \ge 1} \subseteq \mathbb{R}_{++}$ such that $z_k \to z$ and $t_k \to 0^+$ as $k \to \infty$ and that
$\bar x + t_k(z_k - \bar x) \in D,$
for all sufficiently large $k \in \mathbb{N}$. Set:
$x_k := \bar x + t_k(z_k - \bar x), \quad k = 1, 2, \dots.$
Thus, $x_k \in D$ for all sufficiently large $k \in \mathbb{N}$. Also, one has
$x_k \to \bar x, \quad \text{and} \quad \dfrac{x_k - \bar x}{t_k} \to z - \bar x, \quad \text{as } k \to \infty.$
This, together with the definition of the tangent cone, implies that $z - \bar x \in T_D(\bar x)$. Then, $D - \bar x \subseteq T_D(\bar x)$. Since $T_D(\bar x)$ is a closed cone, we conclude that
$\mathrm{cl}(\mathrm{cone}(D - \bar x)) \subseteq T_D(\bar x).$
Hence,
$\mathrm{cl}(\mathrm{cone}(D - \bar x)) = T_D(\bar x).$
Consequently, in view of (13), one can easily see that $(T_D(\bar x))^- = (D - \bar x)^-$.
($\Leftarrow$) Let $z \in D$ be arbitrary. Since, by the hypothesis, $T_D(\bar x) = \mathrm{cl}(\mathrm{cone}(D - \bar x))$, we have $z - \bar x \in T_D(\bar x)$. Thus, by the definition of the tangent cone, there exist sequences $\{x_k\}_{k \ge 1} \subseteq D$ and $\{t_k\}_{k \ge 1} \subseteq \mathbb{R}_{++}$ such that $x_k \to \bar x$ and $t_k \to 0^+$ and that
$\dfrac{x_k - \bar x}{t_k} \to z - \bar x, \quad \text{as } k \to \infty.$
Now, for each $k \in \mathbb{N}$, put:
$z_k := \bar x + \dfrac{x_k - \bar x}{t_k}.$
Therefore, $z_k \in \mathbb{R}^n$ $(k \in \mathbb{N})$, and
$z_k \to z, \quad \text{as } k \to \infty,$
and moreover,
$\bar x + t_k(z_k - \bar x) = x_k \in D, \quad \forall k \in \mathbb{N}.$
Hence, in view of Definition 2, the “tangential nearly convexity” (TNC) holds at the point $\bar x$, which completes the proof. □
The following lemma was proved in [4]; we state it here without proof for easy reference.
Lemma 3.
[4] Let $x, y \in \mathbb{R}^n$ be given. Then,
$\langle x, y \rangle < 0 \ \Longrightarrow \ \|x\| < \|x - \alpha y\|, \quad \forall \alpha > 0.$
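As a quick sanity check of the direction of Lemma 3 used below (in the proof of Theorem 1), the following sketch verifies on random data that $\langle x, y \rangle < 0$ forces $\|x\| < \|x - \alpha y\|$ for several values of $\alpha > 0$.

    import numpy as np

    rng = np.random.default_rng(0)

    for _ in range(10_000):
        x, y = rng.normal(size=3), rng.normal(size=3)
        if x @ y < 0:
            for a in (1e-3, 0.1, 1.0, 10.0):
                # ||x - a*y||^2 = ||x||^2 - 2a<x, y> + a^2 ||y||^2 > ||x||^2
                assert np.linalg.norm(x) < np.linalg.norm(x - a * y)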
Now, by using Lemma 3, we give a characterization of the convexity of a closed set in $\mathbb{R}^n$ in terms of the novel constraint qualification (TNC).
Theorem 1. 
Let $D$ be a nonempty closed subset of $\mathbb{R}^n$. Then, $D$ is convex if and only if the “tangential nearly convexity” (TNC) holds at each point of $D$.
Proof.
($\Rightarrow$) If $D$ is convex, then it is clear that the “tangential nearly convexity” holds at each point of $D$.
($\Leftarrow$) Suppose that the “tangential nearly convexity” holds at each point of $D$. Assume on the contrary that $D$ is not convex. Then there exist $x, y \in D$ and $0 < \lambda < 1$ such that $(1 - \lambda)x + \lambda y \notin D$. Put:
$z_0 := (1 - \lambda)x + \lambda y.$ (14)
So, $z_0 \notin D$. Now, let $r := \|x - z_0\|$, and consider the following optimization problem:
$(P_1) \qquad \min \|z - z_0\| \quad \text{s.t.} \quad z \in D \cap B(z_0, r),$
where $B(z_0, r)$ is the closed ball in $\mathbb{R}^n$ with center $z_0$ and radius $r > 0$. Let $z^* \in D \cap B(z_0, r)$ be an optimal solution of the Problem $(P_1)$ (note that since $D$ is closed, $D \cap B(z_0, r)$ is a nonempty compact subset of $\mathbb{R}^n$, and hence, by the continuity of the norm function $\|\cdot - z_0\|$ on $D \cap B(z_0, r)$, such a $z^*$ exists). We now claim that either $\langle z^* - z_0, x - z^* \rangle < 0$ or $\langle z^* - z_0, y - z^* \rangle < 0$. Suppose not, i.e.,
$\langle z^* - z_0, x - z^* \rangle \ge 0, \quad \text{and} \quad \langle z^* - z_0, y - z^* \rangle \ge 0.$
Therefore,
$0 \le \|z^* - z_0\|^2 = \langle z^* - z_0, z^* - z_0 \rangle = -\langle z^* - z_0, z_0 - z^* \rangle = -\langle z^* - z_0, (1 - \lambda)(x - z^*) + \lambda(y - z^*) \rangle = -(1 - \lambda)\langle z^* - z_0, x - z^* \rangle - \lambda \langle z^* - z_0, y - z^* \rangle \le 0.$
Hence, $z_0 = z^* \in D$, which is a contradiction because $z_0 \notin D$. Now, we assume without loss of generality that $\langle z^* - z_0, y - z^* \rangle < 0$. Since, by the hypothesis, the “tangential nearly convexity” holds at the point $z^*$, for the above $y \in D$ (see (14)) there exist sequences $\{y_k\}_{k \ge 1} \subseteq \mathbb{R}^n$ and $\{t_k\}_{k \ge 1} \subseteq \mathbb{R}_{++}$ with $y_k \to y$ and $t_k \to 0^+$ such that
$z^* + t_k(y_k - z^*) \in D, \quad \text{for all sufficiently large } k \in \mathbb{N}.$
Moreover, since $\langle z^* - z_0, y - z^* \rangle < 0$, it is not difficult to show that
$\langle z^* + t_k(y_k - z^*) - z_0,\ y_k - z^* \rangle < 0,$
for all sufficiently large $k \in \mathbb{N}$. Therefore, this together with Lemma 3 implies that
$\|z^* + t_k(y_k - z^*) - z_0\| < \|z^* - z_0\|, \quad \text{for all sufficiently large } k \in \mathbb{N}.$
This contradicts the facts that $z^* + t_k(y_k - z^*) \in D$ for all sufficiently large $k \in \mathbb{N}$ and that $z^*$ is an optimal solution of the Problem $(P_1)$. Hence, $D$ is convex, and the proof is complete. □
We conclude this section by presenting a sufficient condition for pseudoconvexity of tangentially convex functions. We refer the reader to [35] for the definition of pseudoconvexity of tangentially convex functions.
Theorem 2. 
Let $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ be a tangentially convex function at the point $\bar x \in \mathrm{dom}\, f$. Moreover, assume that the strict lower level set $L_<(f; f(\bar x)) := \{x \in \mathbb{R}^n : f(x) < f(\bar x)\}$ of $f$ (at the point $f(\bar x)$) is an open convex set. If $0 \notin \partial_T f(\bar x)$, then $f$ is pseudoconvex at the point $\bar x$.
Proof.
Let $x \in \mathbb{R}^n$ be such that $f(x) < f(\bar x)$. In view of the definition of pseudoconvexity (see [35]), it is enough to show that $f'(\bar x, x - \bar x) < 0$. To this end, since $f(x) < f(\bar x)$, we have $x \in L_<(f; f(\bar x))$. On the other hand, since $0 \notin \partial_T f(\bar x)$, there exists $d_0 \in \mathbb{R}^n$ such that
$f(\bar x + \tfrac{1}{n} d_0) < f(\bar x), \quad \text{for all sufficiently large } n \in \mathbb{N}.$
Now, for each such $n \in \mathbb{N}$, set $x_n := \bar x + \frac{1}{n} d_0$. Therefore, $x_n \to \bar x$ and $x_n \in L_<(f; f(\bar x))$. It follows that $\bar x \in \mathrm{cl}\, L_<(f; f(\bar x))$. Since, by the hypothesis, $L_<(f; f(\bar x))$ is open and convex, we obtain that
$(\bar x, x] \subseteq L_<(f; f(\bar x)),$
which implies that $f'(\bar x, x - \bar x) \le 0$. Now, we show that $f'(\bar x, x - \bar x) < 0$. Assume, if possible, that $f'(\bar x, x - \bar x) = 0$. Therefore, in view of (3), there exists $\bar v \in \partial_T f(\bar x)$ such that $\langle \bar v, x - \bar x \rangle = 0$. Let $h \in \mathbb{R}^n$ be arbitrary. Since $x \in L_<(f; f(\bar x))$ and, by the hypothesis, $L_<(f; f(\bar x))$ is open, it follows that
$x + th \in L_<(f; f(\bar x)),$
for all sufficiently small $t > 0$. By using an argument similar to the above, we conclude that
$f'(\bar x, x + th - \bar x) \le 0.$
Hence, in view of (3), since $\langle \bar v, x - \bar x \rangle = 0$, one has
$\langle \bar v, h \rangle \le 0, \quad \forall h \in \mathbb{R}^n.$
This implies that $\bar v = 0$, which is a contradiction because $0 \notin \partial_T f(\bar x)$. Thus, $f'(\bar x, x - \bar x) < 0$, and the proof is complete. □
By the following example, we illustrate Theorem 2.
Example 6. 
Let $f : \mathbb{R} \to \mathbb{R}$ be the differentiable function defined by:
$f(x) := \begin{cases} -x^3, & \text{if } x \in (-1, +\infty), \\ -\frac{3}{2}(x + 1)(x + 2)(x + 3) + 1, & \text{if } x \in (-\infty, -1], \end{cases}$
and let $\bar x := 4$. Indeed, since $f'(\bar x, x - \bar x) = \langle f'(\bar x), x - \bar x \rangle$, it is not difficult to show that
$\forall x \in \mathbb{R}, \quad f'(\bar x, x - \bar x) \ge 0 \implies f(x) \ge f(\bar x).$
Therefore, by the definition, $f$ is pseudoconvex at the point $\bar x$. On the other hand, one can easily check that all the hypotheses of Theorem 2 hold at the point $\bar x$, and hence, by using Theorem 2, $f$ is pseudoconvex at the point $\bar x$.
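The pseudoconvexity claim in Example 6 can also be verified numerically; the following sketch checks on a grid, assuming the piecewise formula for $f$ displayed above, that $f(x) < f(\bar x)$ implies $f'(\bar x)(x - \bar x) < 0$ at $\bar x = 4$.

    import numpy as np

    # Example 6: f(x) = -x^3 on (-1, +inf),
    #            f(x) = -(3/2)(x+1)(x+2)(x+3) + 1 on (-inf, -1].
    def f(x):
        if x > -1:
            return -x ** 3
        return -1.5 * (x + 1) * (x + 2) * (x + 3) + 1.0

    x_bar = 4.0
    df_bar = -3 * x_bar ** 2                 # f'(4) = -48, so 0 is not in {f'(4)}

    # Pseudoconvexity at x_bar: f(x) < f(x_bar) implies f'(x_bar)(x - x_bar) < 0.
    for x in np.linspace(-50.0, 50.0, 100_001):
        if f(x) < f(x_bar):
            assert df_bar * (x - x_bar) < 0  # equivalently, x > 4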
The following lemma also plays a crucial role in the proofs of the main results.
Lemma 4. 
Consider the optimization Problem $(P)$. Let $\bar x \in S$ be an optimal solution of the Problem $(P)$. If the objective function $f$ is tangentially convex and continuous at the point $\bar x$, then
$T_F(\bar x) \cap \{d \in \mathbb{R}^n : f'(\bar x, d) < 0\} = \emptyset,$
where $S$ is defined by (11).
Proof.
By Proposition 1(ii) and applying [37], Proposition 10, we conclude that $f$ is locally Lipschitz at the point $\bar x$, because $f$ is tangentially convex and continuous at the point $\bar x$. Now, in view of [7], Lemma 2.5, the result is obtained. □

4. Necessary and Sufficient Optimality Conditions for the Problem ( P )

In this section, under the “tangential nearly convexity” (TNC) of the feasible set $F$, we present necessary and sufficient optimality conditions for the Problem $(P)$. Moreover, by using the constraint qualification (TCQ), we show that the Karush-Kuhn-Tucker (KKT) conditions are necessary and sufficient for optimality of the Problem $(P)$. Finally, we give examples to illustrate the results of this section.
Theorem 3. 
(Necessary Optimality Conditions) Consider the optimization Problem $(P)$. Let $\bar x \in S$ be an optimal solution of the Problem $(P)$. Assume that the “tangential nearly convexity” (TNC) holds at the point $\bar x$ and that the objective function $f$ is tangentially convex and continuous at the point $\bar x$. Moreover, suppose that $T_F(\bar x)$ is convex. Then, $0 \in \partial_T f(\bar x) + M(\bar x) + (F - \bar x)^-$, where $M(\bar x)$ and $S$ are defined by (10) and (11), respectively.
Proof.
Let $d \in T_F(\bar x)$ be arbitrary. Since $\bar x \in S$ is an optimal solution of the Problem $(P)$ and the objective function $f$ is tangentially convex and continuous at the point $\bar x$, it follows from Lemma 4 that
$f'(\bar x, d) \ge 0, \quad \forall d \in T_F(\bar x).$
Therefore, we conclude from (3) that
$\sup_{v \in \partial_T f(\bar x)} \langle v, d \rangle \ge 0, \quad \forall d \in T_F(\bar x).$
Hence,
$\inf_{d \in T_F(\bar x) \cap B} \ \sup_{v \in \partial_T f(\bar x)} \langle v, d \rangle \ge 0,$
where $B$ is the closed unit ball of $\mathbb{R}^n$. Since, by the hypothesis, $T_F(\bar x)$ is a closed convex cone (note that $T_F(\bar x)$ is always a closed cone) and $\partial_T f(\bar x)$ is a compact convex set, by applying the saddle point theorem [10], Proposition 2.6.9, one has
$\sup_{v \in \partial_T f(\bar x)} \ \inf_{d \in T_F(\bar x) \cap B} \langle v, d \rangle \ge 0.$
This implies that there exists $\bar v \in \partial_T f(\bar x)$ such that
$\langle \bar v, d \rangle \ge 0, \quad \forall d \in T_F(\bar x) \cap B.$
Since $T_F(\bar x)$ is a convex cone, it follows that
$\langle \bar v, d \rangle \ge 0, \quad \forall d \in T_F(\bar x),$
which implies that
$-\bar v \in (T_F(\bar x))^-.$ (17)
On the other hand, by the hypothesis, the “tangential nearly convexity” (TNC) holds at the point $\bar x$, so, in view of Lemma 2, $(T_F(\bar x))^- = (F - \bar x)^-$. Thus, it follows from (17) that
$-\bar v \in (F - \bar x)^-,$
or equivalently,
$0 \in \partial_T f(\bar x) + (F - \bar x)^-.$ (19)
Since $0 \in M(\bar x)$, we conclude from (19) that
$0 \in \partial_T f(\bar x) + M(\bar x) + (F - \bar x)^-,$
and the proof is complete. □
It should be noted that in the following corollary (Corollary 1), the assumption of local Lipschitz continuity of the objective function $f$, which was used in [8] for nonconvex optimization problems with nonpositive constraints under the constraint qualification (TCQ), is relaxed to continuity of $f$.
Corollary 1. 
(Necessary Optimality Conditions) Consider the optimization Problem $(P)$. Let $\bar x \in S$ be an optimal solution of the Problem $(P)$. Assume that $C$ is a convex set and that the constraint qualification (TCQ) holds at the point $\bar x$. Moreover, suppose that the objective function $f$ is tangentially convex and continuous at $\bar x$. Then, $0 \in \partial_T f(\bar x) + M(\bar x) + (C - \bar x)^-$ (Karush-Kuhn-Tucker (KKT) conditions).
Proof.
Since $C$ is a convex set and the constraint qualification (TCQ) holds at the point $\bar x$, in view of Lemma 1 (assertions (i) and (ii)), all the hypotheses of Theorem 3 hold. Therefore, by Theorem 3, we have
$0 \in \partial_T f(\bar x) + M(\bar x) + (F - \bar x)^-.$ (20)
But, by Lemma 1 (assertion (iii)), one has $(F - \bar x)^- = (C - \bar x)^-$. Hence, it follows from (20) that
$0 \in \partial_T f(\bar x) + M(\bar x) + (C - \bar x)^-,$
which completes the proof. □
Theorem 4. 
(Sufficient Optimality Conditions) Consider the optimization Problem $(P)$, where the constraint functions $g_j$, $j \in I(\bar x) \cup J$, are tangentially convex and locally Lipschitz at the point $\bar x \in F$. Assume that the “tangential nearly convexity” (TNC) holds at the point $\bar x$, and that the objective function $f$ is tangentially convex and pseudoconvex at $\bar x$. If $0 \in \partial_T f(\bar x) + M(\bar x) + (F - \bar x)^-$, then $\bar x$ is an optimal solution of the Problem $(P)$, i.e., $\bar x \in S$.
Proof.
Let $z \in F$ be arbitrary. Since the “tangential nearly convexity” (TNC) holds at the point $\bar x$, there exist sequences $\{z_k\}_{k \ge 1} \subseteq \mathbb{R}^n$ and $\{t_k\}_{k \ge 1} \subseteq \mathbb{R}_{++}$ such that $z_k \to z$ and $t_k \to 0^+$ and that
$\bar x + t_k(z_k - \bar x) \in F, \quad \text{for all sufficiently large } k \in \mathbb{N}.$ (21)
Since the functions $g_j$, $j \in I(\bar x) \cup J$, are tangentially convex and locally Lipschitz at the point $\bar x$, it follows from (21) that
$g_j'(\bar x, z - \bar x) = \lim_{l \to \infty} g_j'(\bar x, z_l - \bar x) = \lim_{l \to \infty} \lim_{k \to \infty} \dfrac{g_j(\bar x + t_k(z_l - \bar x)) - g_j(\bar x)}{t_k} = \lim_{k \to \infty} \dfrac{g_j(\bar x + t_k(z_k - \bar x)) - g_j(\bar x)}{t_k} \le 0, \quad \forall j \in I(\bar x).$ (22)
Similarly, and by using (9), one has
$g_j'(\bar x, z - \bar x) = 0, \quad \forall j \in J.$ (23)
On the other hand, since $0 \in \partial_T f(\bar x) + M(\bar x) + (F - \bar x)^-$, there exists $u \in (F - \bar x)^-$ and, for each $j \in I(\bar x) \cup J$, there exist $\lambda_j \ge 0$ and $v_j \in \partial_T g_j(\bar x)$ such that
$-\sum_{j \in I(\bar x) \cup J} \lambda_j v_j - u \in \partial_T f(\bar x).$
Now, by the tangential convexity of $f$ at the point $\bar x$, in view of (22), (23), Definition 1 and the fact that $u \in (F - \bar x)^-$, we conclude that
$f'(\bar x, z - \bar x) \ge \Big\langle -\sum_{j \in I(\bar x)} \lambda_j v_j - \sum_{j \in J} \lambda_j v_j - u,\ z - \bar x \Big\rangle \ge \Big\langle -\sum_{j \in I(\bar x)} \lambda_j v_j - \sum_{j \in J} \lambda_j v_j,\ z - \bar x \Big\rangle \ge -\sum_{j \in I(\bar x)} \lambda_j g_j'(\bar x, z - \bar x) - \sum_{j \in J} \lambda_j g_j'(\bar x, z - \bar x) \ge 0, \quad \forall z \in F.$
Now, by the pseudoconvexity of $f$ at the point $\bar x$, we deduce that
$f(z) \ge f(\bar x), \quad \forall z \in F.$
Hence, $\bar x$ is an optimal solution of the Problem $(P)$. □
Corollary 2. 
(Sufficient Optimality Conditions) Consider the optimization Problem $(P)$, where the constraint functions $g_j$, $j \in I(\bar x) \cup J$, are tangentially convex and locally Lipschitz at the point $\bar x \in F$. Assume that the constraint qualification (TCQ) holds at $\bar x$, and that the objective function $f$ is tangentially convex and pseudoconvex at $\bar x$. If $0 \in \partial_T f(\bar x) + M(\bar x) + (C - \bar x)^-$ (Karush-Kuhn-Tucker (KKT) conditions), then $\bar x$ is an optimal solution of the Problem $(P)$.
Proof.
Since the constraint qualification (TCQ) holds at the point $\bar x$, in view of Lemma 1 ((i) and (iii)), the “tangential nearly convexity” (TNC) holds at the point $\bar x$ and $(C - \bar x)^- = (F - \bar x)^-$. Therefore, the result follows from Theorem 4. □
By the following examples, we illustrate Theorem 3 and Theorem 4 and their corollaries.
Example 7. 
Consider the following nonconvex nonsmooth constrained optimization problem:
$(P_2) \qquad \min f(x_1, x_2) := |x_1| + x_2^3 \quad \text{s.t.} \quad g_1(x_1, x_2) := -3x_1^2 \cos x_1 - x_2 \le 0, \quad g_2(x_1, x_2) := x_1^2 + (x_2 + 2)^2 - 1 \ge 0, \quad (x_1, x_2) \in \mathbb{R}^2.$
Let $C := ([-1, 0] \times \{0\}) \cup (\{0\} \times \mathbb{R}_+)$. So, the feasible set $F$ of the Problem $(P_2)$ is given by $F := C \cap K = C$, where $K$ is defined by:
$K := \{(x_1, x_2) \in \mathbb{R}^2 : g_1(x_1, x_2) \le 0,\ g_2(x_1, x_2) \ge 0\}.$
Obviously, $\bar x := (0, 0) \in F$ is the unique optimal solution of the Problem $(P_2)$. It is clear that the “tangential nearly convexity” (TNC) holds at the point $\bar x$, while $F$ is not convex. Also,
$C - \bar x = C \subseteq (\mathbb{R}_- \times \{0\}) \cup (\{0\} \times \mathbb{R}_+) = T_F(\bar x),$
and
$(C - \bar x)^- = \mathbb{R}_+ \times \mathbb{R}_-.$ (24)
Therefore, the constraint qualification (TCQ) holds at the point $\bar x$. It is not difficult to show that
$\partial_T f(\bar x) = [-1, 1] \times \{0\}.$ (25)
So, in view of (24) and (25), we observe that
$0 \in \partial_T f(\bar x) + (C - \bar x)^-.$ (26)
Since $0 \in M(\bar x)$, it follows from (26) that
$0 \in \partial_T f(\bar x) + M(\bar x) + (C - \bar x)^-.$ (27)
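An explicit certificate for (27) is easy to write down: take $v = (-1, 0) \in \partial_T f(\bar x)$, $u = (1, 0) \in (C - \bar x)^-$ and the zero element of $M(\bar x)$. The following sketch records this check, using the sets (24) and (25) computed above.

    import numpy as np

    v = np.array([-1.0, 0.0])            # v in [-1, 1] x {0}, by (25)
    u = np.array([1.0, 0.0])             # u in R_+ x R_-, by (24)

    assert -1.0 <= v[0] <= 1.0 and v[1] == 0.0
    assert u[0] >= 0.0 and u[1] <= 0.0
    assert np.all(v + u == 0.0)          # 0 = v + 0 + u, so (27) holds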
Example 8. 
Consider the following nonconvex nonsmooth constrained optimization problem:
$(P_3) \qquad \min f(x_1, x_2) \quad \text{s.t.} \quad g(x_1, x_2) = 0, \quad (x_1, x_2) \in \mathbb{R}^2,$
where the functions $f : \mathbb{R}^2 \to \mathbb{R}$ and $g : \mathbb{R}^2 \to \mathbb{R}$ are defined by:
$f(x_1, x_2) := \begin{cases} 0, & x_1 = x_2^2, \\ x_2, & \text{otherwise}, \end{cases} \quad (x_1, x_2) \in \mathbb{R}^2,$
and
$g(x_1, x_2) := x_1 - x_2^2, \quad (x_1, x_2) \in \mathbb{R}^2.$
Let $C := \mathbb{R}^2$. Therefore, the feasible set $F$ of the Problem $(P_3)$ is given by $F := C \cap K = K$, where $K$ is defined by:
$K := \{(x_1, x_2) \in \mathbb{R}^2 : g(x_1, x_2) = 0\}.$
It is clear that $\bar x := (0, 0) \in F$ is an optimal solution of the Problem $(P_3)$. By a simple calculation, it can be seen that $(0, 0) \notin \partial_T f(\bar x) + M(\bar x) + (F - \bar x)^-$. The reason is that neither the “tangential nearly convexity” (TNC) nor the constraint qualification (TCQ) holds at the point $\bar x$.
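The “simple calculation” can be sketched as follows, under the formulas above: off the parabola one gets $\partial_T f(\bar x) = \{(0, 1)\}$ and $\partial_T g(\bar x) = \{(1, 0)\}$, so $M(\bar x) = \mathbb{R}_+ \times \{0\}$, while the polar-cone inequality $\lambda_1 s^2 + \lambda_2 s \le 0$ for all $s \in \mathbb{R}$ forces $(F - \bar x)^- = \mathbb{R}_- \times \{0\}$; hence every element of the sum has second coordinate $1$, and the sum cannot contain $(0, 0)$. A small numerical probe of the polar cone of $F - \bar x = \{(s^2, s) : s \in \mathbb{R}\}$:

    import numpy as np

    s = np.linspace(-10.0, 10.0, 2001)
    F_minus_xbar = np.stack([s ** 2, s], axis=1)  # the parabola x1 = x2^2

    lam = np.array([-3.0, 0.0])                   # a point of R_- x {0}
    assert np.all(F_minus_xbar @ lam <= 0)        # lam is in the polar cone

    lam_bad = np.array([-3.0, 0.5])               # nonzero second coordinate
    assert np.any(F_minus_xbar @ lam_bad > 0)     # lam_bad is not in the polar cone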

Acknowledgments

The second author was partially supported by Mahani Mathematical Research Center, Shahid Bahonar University of Kerman, Iran [grant no: 1403/4865].

Data Availability Statement

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

Conflicts of Interest

Not applicable; the authors declare no conflicts of interest.

References

  1. Allevi, E., Martínez-Legaz, J.-E., & Riccardi, R. (2020). Optimality conditions for convex problems on intersections of non necessarily convex sets, J. Global Optim., 77(1), 143-155. [CrossRef]
  2. Ansari, Q.H., Kobis, E., & Yao, J.-C. (2018). Vector Variational Inequalities and Vector Optimization: Theory and Applications, Springer, Berlin. [CrossRef]
  3. Bagirov, A., Karmitsa, N., & Makela, M.M. (2014). Introduction to Nonsmooth Optimization: Theory, Practice and Software, Springer, New York. [CrossRef]
  4. Bauschke, H.H., & Combettes, P.L. (2017). Convex Analysis and Monotone Operator Theory in Hilbert Spaces, Second Edition, Springer, New York.
  5. Bazaraa, M.S., Sherali, H.D., & Shetty, C.M. (2006). Nonlinear Programming, Wiley, New York.
  6. Bazargan, F., & Mohebi, H. (2022). Nonsmooth constraint qualifications for nonconvex inequality systems, Numer. Funct. Anal. Optim., 43(14), 1617-1646. [CrossRef]
  7. Bazargan, F., & Mohebi, H. (2020). New qualification condition for convex optimization without convex representation, Optim. Lett. [CrossRef]
  8. Bazargan, F., & Mohebi, H. (2022). A new constraint qualification for optimality of nonconvex nonsmooth optimization problems, Filomat, 36(12), 4041–4054.
  9. Beni-Asad, M., & Mohebi, H. (2023). Characterizations of the solution set for tangentially convex optimization problems, Optim. Lett., 17, 1027–1048. [CrossRef]
  10. Bertsekas, D., Nedic, A., & Ozdaglar, A.E. (2003). Convex Analysis and Optimization, Athena Scientific, Belmont, Massachusetts.
  11. Borwein, J.M., & Lewis, A.S. (2006). Convex Analysis and Nonlinear Optimization, Theory and Examples, Springer, New York.
  12. Boyd, S., & Vandenberghe, L. (2004). Convex Optimization, Cambridge University Press, Cambridge.
  13. Burke, J.V., & Ferris, M.C. (1991). Characterization of the solution sets of convex programs, Oper. Res. Lett., 10, 57-60. [CrossRef]
  14. Chieu, N.H., Jeyakumar, V., Li, G., & Mohebi, H. (2018). Constraint qualifications for convex optimization without convexity of constraints: new connections and applications to best approximation, Eur. J. Oper. Res. 265, 19-25. [CrossRef]
  15. Dutta, J., & Lalitha, C.S. (2013). Optimality conditions in convex optimization revisited, Optim. Lett., 7, 221-229. [CrossRef]
  16. Dinh, N., Jeyakumar, V., & Lee, G.M. (2006). Lagrange multiplier characterizations of solution sets of constrained pseudolinear optimization problems, Optim., 55, 241-250. [CrossRef]
  17. Ghafari, N., & Mohebi, H. (2021). Optimality conditions for nonconvex problems over nearly convex feasible sets, Arab. J. Math. [CrossRef]
  18. Goberna, M.A., Guerra-Vazquez, F., & Todorov, M.I. (2016). Constraint qualifications in convex vector semi-infinite optimization, Eur. J. Oper. Res., 249, 32-40.
  19. Hoheisel, T., & Kanzow, C. (2009). On the Abadie and Guignard constraint qualifications for mathematical programs with vanishing constraints, Optim., 58, 431-448. [CrossRef]
  20. Ho, Q. (2017). Necessary and sufficient KKT optimality conditions in non-convex optimization, Optim. Lett., 11, 41-46. [CrossRef]
  21. Hosseini, S., Mordukhovich, B.S., & Uschmajew, A. (2019). Nonsmooth Optimization and Its Applications, International Series of Numerical Mathematics, Springer, Switzerland AG., 170.
  22. Jahn, J. (2011). Vector Optimization: Theory, Applications and Extensions, Second Edition, Springer, Berlin Heidelberg.
  23. Jahn, J. (2017). Karush-Kuhn-Tucker conditions in set optimization, J. Optim. Theory Appl., 172, 707-725. [CrossRef]
  24. Jeyakumar, V., Lee, G.M., & Dinh, N. (2004). Lagrange multiplier conditions characterizing the optimal solution sets of cone-constrained convex programs, J. Optim. Theory Appl., 123, 83-103. [CrossRef]
  25. Jeyakumar, V., Lee, G.M., & Dinh, N. (2006). Characterizations of solution sets of convex vector minimization problems, Eur. J. Oper. Res., 174, 1380-1395. [CrossRef]
  26. Jeyakumar, V., & Mohebi, H. (2019). Characterizing best approximation from a convex set without convex representation, J. Approx. Theory, 239, 113-127. [CrossRef]
  27. Kobis, E., Tammer, C., & Yao, J.-C. (2017). Optimality conditions for set-valued optimization problems based on set approach and applications in uncertain optimization, J. Nonlinear Convex Anal., 18(6), 1001-1014.
  28. Kong, X., Zhang, Y., & Yu, G. (2018). Optimality and duality in set-valued optimization utilizing limit sets, Open Math., 16, 1128-1139. [CrossRef]
  29. Komiya, H. (1988). Elementary proof for Sion’s minimax theorem, Kodai Math. J., 11, 5-7. [CrossRef]
  30. Lasserre, J.B. (2010). On representations of the feasible set in convex optimization, Optim. Lett., 4, 1-5. [CrossRef]
  31. Lemaréchal, C. (1986). An introduction to the theory of nonsmooth optimization, Optim., 17, 827-858. [CrossRef]
  32. Liao, J.G., & Du, T.S. (2016). On some characterizations of sub-b-s-convex functions, Filomat, 30(14), 3885-3895.
  33. Liao, J.G., & Du, T.S. (2017). Optimality conditions in sub-(b;m)-convex programming, Politehn. Univ. Bucharest Sci. Bull. Ser. A Appl. Math. Phys., 79(2), 95-106.
  34. Mangasarian, O.L. (1988). A simple characterization of solution sets of convex programs, Oper. Res. Lett., 7, 21-26. [CrossRef]
  35. Martínez-Legaz, J.-E. (2015). Optimality conditions for pseudoconvex minimization over convex sets defined by tangentially convex constraints, Optim. Lett., 9, 1017-1023. [CrossRef]
  36. Martínez-Legaz, J.-E. (1983). A Generalized Concept of Conjugation, Lecture Notes in Pure and Appl. Math., 86, 45-59.
  37. Martínez-Legaz, J.-E. (2023). A mean value theorem for tangentially convex functions, Set-Valued Variational Anal., 31(2), 1-13. [CrossRef]
  38. Mashkoorzadeh, F., Movahedian, N., & Nobakhtian, S. (2019). Optimality conditions for nonconvex constrained optimization problems, Numer. Funct. Anal. Optim., 40, 1918-1938. [CrossRef]
  39. Pshenichnyi, B.N. (1971). Necessary Conditions for an Extremum, Marcel Dekker Inc, New York.
  40. Zhou, Z., Yang, X., & Qiu, Q. (2018). Optimality conditions of the set-valued optimization problem with generalized cone convex set-valued maps characterized by contingent epiderivative, Acta Math. Appl. Sin. Engl. Ser., 34, 11-18. [CrossRef]