1. Introduction
As a general rule, to present a checkable necessary and sufficient optimality condition for a constrained optimization problem, one needs to assume certain properties of the constraint system, called constraint qualifications. The subject of constraint qualifications is of significant importance in optimization: constraint qualifications are cornerstones for the study of convex and nonconvex problems, and they guarantee necessary and sufficient conditions for optimality. Since constraint qualifications play such an important role in optimization and mathematical programming, there has been fruitful work on this topic, and several types of constraint qualifications (involving epigraphs or subdifferentials) have been extensively studied and used in various areas of optimization and mathematical programming; for instance, constraint qualifications have been used to study Fenchel duality and formulas for the subdifferentials of convex functions. It is worth noting that nonconvex functions frequently appear in mathematical programming, and several types of normal cones and subdifferentials for the nonconvex case have been extensively studied in optimization and its applications [1,2,3,7,13,14,15,16,18,19,20,21,22,23,24,25,27,28,30,32,33,34,35,36,38,40]. It is therefore natural to further study constraint qualifications via normal cones and subdifferentials for nonconvex and nonsmooth inequalities. Accordingly, we consider the following nonconvex nonsmooth constrained optimization problem:
with the feasible set F defined by:
where
K is defined as follows:
and
C is a nonempty subset of
such that
Also,
is the objective function and
are the constraint functions.
In fact, establishing optimality conditions for the optimization Problem
is one of the fundamental tasks in both the theory and the practice of optimization. It is, of course, preferable that an optimality condition be both necessary and sufficient, but such conditions are valid only under certain assumptions on the optimization Problem
such as differentiability, smoothness, or convexity. Thus, in the past decades, a great deal of attention has been devoted to optimality conditions for scalar optimization problems (see, for example, [1,3,7,14,15,19,20,21,29,30,31,35,38]).
These results have motivated us to investigate the class of tangentially convex functions, which are not necessarily convex or smooth. It should be noted that only a few works have dealt with optimization problems with such constraint functions [15,30,35,38]. In this paper, we remove the convexity of the feasible set and of the objective function. We consider a nonconvex nonsmooth optimization problem whose constraint inequalities are both nonpositive and nonnegative, called the Problem
whose objective function is tangentially convex and whose active constraint functions are tangentially convex at a given feasible point, but not necessarily convex or differentiable; moreover, the feasible set is not convex. Our aim is to present a condition on a nonconvex feasible set defined by tangentially convex functions and to provide a novel constraint qualification that guarantees that the Karush-Kuhn-Tucker (KKT) conditions are necessary and sufficient for optimality of the Problem
It is worth noting that in all the works mentioned above the objective function and the constraint functions are differentiable or convex, the feasible set is convex, and the constraint inequalities are only nonpositive, whereas in the present paper all functions are nonconvex and nonsmooth, the feasible set is not convex, and the constraint inequalities are both nonpositive and nonnegative. Several examples are presented to clarify the novel constraint qualification and to compare it with other well-known constraint qualifications. Consequently, our results recapture the corresponding known results of [7,8,9,14,15,26,30,35,38].
Note that the Karush-Kuhn-Tucker (KKT) conditions [10,12] are among the most common optimality conditions, and that establishing the KKT optimality conditions depends on the representation of the set of feasible points. Thus, the KKT conditions and constraint qualifications play a key role in the study of optimization problems. Recently, the KKT conditions and constraint qualifications have been studied by many authors for convex and nonconvex scalar optimization problems: [1,2,3,7,13,14,15,16,18,19,20,21,23,24,30].
The presentation of the paper is as follows. Some definitions, basic facts, and important auxiliary results related to nonconvex and nonsmooth analysis are presented in Section 2. In Section 3, we present a novel constraint qualification, called "tangential nearly convexity" ((TNC), in short), and compare it with other well-known constraint qualifications. Moreover, sufficient conditions for pseudoconvexity of tangentially convex functions are given. Finally, necessary and sufficient optimality conditions for the nonconvex nonsmooth optimization Problem
with nonnegative and nonpositive constraints are presented in Section 4, where the objective function and the constraint functions are tangentially convex. We also give several examples to clarify the novel constraint qualification and to compare it with other well-known constraint qualifications.
2. Preliminaries
In this section, we gather some basic definitions, notation, and results related to nonconvex and nonsmooth analysis (see [3,11]) that will be used in the sequel. Our notation is basically standard. Throughout this paper, we denote the
n-dimensional Euclidean space by
, and the inner product of two vectors
by
. We use the Euclidean norm, i.e.,
for each
The closed unit ball and the set of positive integers are denoted by
and
, respectively.
Let D be a nonempty subset of We denote the convex hull, the conical hull, the cone generated by D, and the closure of D by , and , respectively.
We define the negative polar cone (dual cone) of a set
[4,12] by:
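Written out, the standard negative polar cone of a set $D \subseteq \mathbb{R}^n$ (following [4,12]; the paper's stripped formula presumably matches this) is:

```latex
D^{-} := \{\, v \in \mathbb{R}^n : \langle v, d \rangle \le 0 \ \text{ for all } d \in D \,\}.
```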
For a point
we consider the tangent cone
of
W at
, and the normal cone
to
W at
, respectively (for more details, see [3,10]). Also, we refer the reader to [31,39] for the definition of a tangentially convex function. The concept of a tangentially convex function suggests the following definition of a subdifferential.
Definition 1.
[6,31,39] The tangential subdifferential of a function at a point is defined as follows:
where denotes the directional derivative of the function g at the point in the direction of
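Spelled out, the two standard objects in this definition (following [6,31,39]; the paper's stripped formulas presumably match these) are the one-sided directional derivative and the tangential subdifferential:

```latex
g'(\bar{x}; d) := \lim_{t \downarrow 0} \frac{g(\bar{x} + t\,d) - g(\bar{x})}{t},
\qquad
\partial_T g(\bar{x}) := \bigl\{\, v \in \mathbb{R}^n : \langle v, d \rangle \le g'(\bar{x}; d) \ \text{ for all } d \in \mathbb{R}^n \,\bigr\}.
```

Here $g$ is tangentially convex at $\bar{x}$ when $g'(\bar{x}; \cdot)$ exists, is finite, and is convex on $\mathbb{R}^n$; in that case $\partial_T g(\bar{x})$ is a nonempty compact convex set and $g'(\bar{x}; d) = \max\{\langle v, d \rangle : v \in \partial_T g(\bar{x})\}$.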
For a function
which is tangentially convex at the point
one has
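As a concrete illustration (not taken from the paper): the function $f(x) = |x|$ is tangentially convex at $0$ although it is not differentiable there, since $f'(0; d) = |d|$ exists and is convex in $d$ (with tangential subdifferential $[-1,1]$). A minimal numerical sketch:

```python
def dir_deriv(f, x, d, t=1e-8):
    """One-sided directional derivative f'(x; d), approximated by
    a small step t > 0 in (f(x + t*d) - f(x)) / t."""
    return (f(x + t * d) - f(x)) / t

f = abs  # nondifferentiable at 0, but tangentially convex there

# f'(0; d) agrees with |d| in every direction d.
for d in (-2.0, -1.0, 1.0, 3.0):
    assert abs(dir_deriv(f, 0.0, d) - abs(d)) < 1e-6

# d -> f'(0; d) is convex: check a midpoint inequality on a sample pair.
d1, d2 = -1.0, 3.0
mid = dir_deriv(f, 0.0, 0.5 * (d1 + d2))
assert mid <= 0.5 * (dir_deriv(f, 0.0, d1) + dir_deriv(f, 0.0, d2)) + 1e-9
```

By contrast, $f(x) = -|x|$ has $f'(0; d) = -|d|$, which is not convex in $d$, so that function fails tangential convexity at $0$.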
We now present the following proposition for the large class of tangentially convex functions, which is essential for presenting optimality conditions (see Lemma 4).
Proposition 1. Let be tangentially convex and continuous in a neighborhood of a point Then the following assertions hold:
-
(i)
For each , the directional derivative function is continuous at the point .
-
(ii)
The set-valued function is upper semicontinuous at the point
Proof. Let
be arbitrary, and let
and
be sequences in
such that
and
Since
f is continuous in a neighborhood of
then, for all
we have
Let
be given. Then, in view of (4), we get
for all
and all sufficiently large
Now, since
f is tangentially convex in a neighborhood of
by letting
in (5), it follows that
for all sufficiently large
Hence,
which implies that
is continuous at the point
Let
be any sequence converging to
and let
for all
We claim that
is bounded. Assume on the contrary without loss of generality that
Put
Since
for all
thus in view of Definition 1, one has
On the other hand, since
is bounded in
so it has a convergent subsequence. We may assume without loss of generality that
converges to some point
In view of part
and letting
in (6), we conclude that
which contradicts the tangential convexity of
f at the point
Therefore, the claim is true, and so,
is bounded. Now, let
be arbitrary. Then
Let
be a limit point of
By taking limit along the relevant subsequences in the inequality (7) and by using part
it follows that
Hence,
and by [10], Proposition 4.1.2, the set-valued function
is upper semicontinuous at the point
□
3. Tangential Nearly Convexity
In this section, we present a novel constraint qualification, called “tangential nearly convexity“
in short), and compare it with the constraint qualification called “tangentially constraint qualification“
in short) in [8] and with other well-known constraint qualifications. To this end, we consider the nonconvex nonsmooth constrained optimization Problem
which is given by (1). Moreover, we assume that the functions
are tangentially convex at a given point
Now, put
and
Now, we define
Clearly,
is a convex cone. However,
is not necessarily closed. Let
S be the optimal solutions set of the Problem
i.e.,
We refer the reader to [6] for the definitions of nonsmooth versions of the well-known constraint qualifications. It is worth noting that the nonsmooth Guignard’s constraint qualification ((NGCQ), in short) is the weakest among them (see [6]).
We now give the following novel constraint qualification which has a crucial role throughout the paper.
Definition 2. (Tangential Nearly Convexity in short Let and let be arbitrary. Then, K is said to be “tangentially nearly convex“ at the point x or, equivalently, the “tangential nearly convexity“ holds at the point x if, for each there exist sequences and such that and and that for all sufficiently large Moreover, K is said to be tangentially nearly convex or, equivalently, the tangential nearly convexity holds at each point of if K is tangentially nearly convex at each of its points.
Remark 1. It is obvious that if is nearly convex at a point (we refer the reader to [17,26] for the definition of a nearly convex set) then K is tangentially nearly convex at However, the converse statement is not necessarily true, as the following example shows.
Example 1.
Let the set be given as follows:
It is clear that the set K is not nearly convex at the point because for and, for all one has
But, it is easy to see that K is tangentially nearly convex at the point Indeed, let be arbitrary. Consider two possible cases:
Case Let In this case, one can easily see, for any that
Case Let Define the sequences and by and Then, and , and moreover
Therefore, in both cases, we have shown, for each that there exist sequences and such that and and that
Hence, K is “tangentially nearly convex“ at the point
The following lemma plays a crucial role in proving the main results.
Lemma 1. Consider the optimization Problem Assume that is arbitrary and the tangentially constraint qualification holds at the point Then the following assertions hold:
-
(i)
The “tangential nearly convexity“ holds at the point
-
(ii)
Hence, is convex, if is convex.
-
(iii)
Proof. Let
be arbitrary. Since the tangentially constraint qualification
holds at the point
one has
So,
Therefore, by the definition of the tangent cone, there exist sequences
and
such that
and
and that
Now, for each
define
Then, by (12),
as
Moreover, one has
which implies that the “tangential nearly convexity“ holds at the point
Since the tangentially constraint qualification
holds at the point
and the fact that
it is easy to show that
Since
is a closed cone, we conclude that
This implies that
which completes the proof of the assertion
This is an immediate consequence of the assertion □
By the following example, we illustrate Lemma 1.
Example 2.
Let where the function and the epigraph of g are defined by:
respectively. Let
Therefore, we have Let It is obvious that F is not nearly convex at the point because for , one has
Moreover, we have
Therefore, the tangentially constraint qualification holds at the point Thus, in view of Lemma 1 the “tangential nearly convexity“ holds at the point Furthermore, the assertions and in Lemma 1 also hold.
By the following example, we now show that the “tangential nearly convexity“ does not imply that the “tangentially constraint qualification“ holds, while the converse statement always holds (see Lemma 1
Example 3.
Let be defined by:
and let Then, It is clear that F is convex. Now, let One can easily see that
Hence, the “tangentially constraint qualification“ does not hold at the point while the “tangential nearly convexity“ holds at the point because F is convex.
The following example shows that the constraint qualification holds, but the nonsmooth Guignard’s constraint qualification does not hold.
Example 4.
Let
and Then, which is a convex set. So, whenever Moreover, we have and so, in view of and the definition of one has and Also, and for all and thus, and are tangentially convex at the point Then, and Hence, by (10 we have
Thus,
Clearly, and Therefore, the constraint qualification holds at the point while the nonsmooth Guignard’s constraint qualification does not hold at the point
The following example shows that the constraint qualification does not hold, while the nonsmooth Guignard’s constraint qualification holds.
Example 5.
Let with and for all Let We see that is a nonconvex set. It is easy to check that and whenever Thus, in view of and the definition of one has and The functions and are tangentially convex at the point because and for all Moreover, we obtain that and so, by (10
Therefore,
It is easy to show that and hence, Thus, the constraint qualification does not hold at the point while and so, because is closed and convex. Thus the nonsmooth constraint qualification holds at the point
Remark 2. In view of Example 3, one can easily see that the “tangential nearly convexity“ is weaker than the constraint qualification (note that, in view of Lemma 1 we always observe that implies Therefore, as a consequence, it should be noted that, in addition to the ease of using the constraint qualification an important advantage of is that it is a constraint qualification under which conditions are “necessary and sufficient“ for optimality of the nonconvex nonsmooth optimization Problem without any further assumption (see Theorem 3 and Theorem 4 in Section 4 while the nonsmooth constraint qualification (which is weaker than the other well-known nonsmooth constraint qualifications, see [6]) together with a further assumption (closedness of the convex cone implies that conditions are only “necessary“ for optimality of the Problem (see [7,14]
In the following, we give a characterization of the novel “constraint qualification which will be used in Section 4.
Lemma 2. Let be a set, and let be arbitrary. Then, the “tangential nearly convexity“ holds at the point if and only if As a consequence, we have
Proof.
By the definition of the tangent cone, we always have
Now, we show that
To this end, let
be arbitrary. Since, by the hypothesis, the “tangential nearly convexity“ holds at the point
it follows that there exist sequences
and
such that
and
as
and that
for all sufficiently large
Set:
Thus,
for all sufficiently large
Also, one has
This together with the above implies that
Then,
Since
is a closed cone, we conclude that
Hence,
Consequently, in view of (13), one can easily see that
Let
be arbitrary. Since, by the hypothesis,
so,
Thus, by the definition of the tangent cone, there exist sequences
and
such that
and
and that
Now, for each
put:
Therefore,
and
and that
Hence, in view of Definition 2, the “tangential nearly convexity“
holds at the point
which completes the proof. □
The following lemma was proved in [4]; we state it here without proof for ease of reference.
Lemma 3.
[4] Let be given. Then,
Now, by using Lemma 3, we give a characterization of the convexity of a closed set in
with respect to the “novel constraint qualification
”.
Theorem 1. Let D be a nonempty closed subset of Then, D is convex if and only if the “tangential nearly convexity“ holds at each point of
Proof.
If D is convex, then it is clear that the “tangential nearly convexity“ holds at each point of
Suppose that the “tangential nearly convexity“ holds at each point of
Assume on the contrary that
D is not convex. Then there exist
and
such that
Put:
So,
Now, let
and consider the following optimization problem:
where
is the closed ball in
with the center at
and the radius
Let
be an optimal solution of the Problem
(note that since
D is closed, it follows that
is a nonempty compact subset of
and hence, by the continuity of the norm function
on
such
exists). We now claim that either the inner product
or the inner product
Suppose not, i.e.,
Therefore,
Hence,
which is a contradiction because
Now, we assume without loss of generality that
Since, by the hypothesis, the “tangential nearly convexity“ holds at the point
, so for the above
(see (14)) there exist sequences
and
with
and
such that
Moreover, since
it is not difficult to show that
for all sufficiently large
Therefore, this together with Lemma 3 implies that
This is a contradiction with the fact that
for all sufficiently large
and
is an optimal solution of the Problem
Hence,
D is convex, and the proof is complete. □
We conclude this section by presenting a sufficient condition for pseudoconvexity of tangentially convex functions. We refer the reader to [35] for the definition of pseudoconvexity of tangentially convex functions.
Theorem 2. Let be a tangentially convex function at the point Moreover, we assume that the strictly lower level set of f (at the point is an open convex set. If then, f is pseudoconvex at the point
Proof. Let
be such that
In view of the definition of pseudoconvexity, it is enough to show that
To this end, since
we have
On the other hand, since
thus there exists
such that
Now, for each
set:
Therefore,
and
It follows that
Since, by the hypothesis,
is convex, we obtain that
which implies that
Now, we show that
Assume if possible that
Therefore, in view of (3), there exists
such that
Let
be arbitrary. Since
and, by the hypothesis,
is open, it follows that
for all sufficiently small
By using an argument similar to the above, we conclude that
Hence, in view of (3), since
one has,
This implies that
which is a contradiction because
Thus,
and the proof is complete. □
By the following example, we illustrate Theorem 2.
Example 6.
Let be a differentiable function defined by:
and let Indeed, since so it is not difficult to show that
Therefore, by the definition, f is pseudoconvex at the point But, On the other hand, one can easily check that all the hypotheses of Theorem 2 at the point hold, and hence, by using Theorem 2, f is pseudoconvex at the point
The following lemma plays a crucial role in the proof of the main results.
Lemma 4.
Consider the optimization Problem Let be an optimal solution of the Problem If the objective function f is tangentially convex and continuous at the point then
where S is defined by
Proof. By Proposition 1
and applying [37], Proposition 10, we conclude that
f is locally Lipschitz at the point
because
f is tangentially convex and continuous at the point
Now, in view of [7], Lemma 2.5, the result is obtained. □
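For orientation, first-order necessary conditions of this kind classically state that the directional derivative of the objective is nonnegative in every tangent direction to the feasible set; a sketch of the standard form (which may differ in details from the paper's stripped statement) is:

```latex
\bar{x} \in S \;\Longrightarrow\; f'(\bar{x}; v) \ge 0 \quad \text{for all } v \in T_F(\bar{x}),
```

where $T_F(\bar{x})$ denotes the tangent cone to the feasible set $F$ at $\bar{x}$.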
4. Necessary and Sufficient Optimality Conditions for the Problem
In this section, under the “tangential nearly convexity“ (TNC) of the feasible set we present necessary and sufficient optimality conditions for the Problem Moreover, by using the constraint qualification (TCQ), we show that the Karush-Kuhn-Tucker (KKT) conditions are necessary and sufficient for optimality of the Problem Finally, we give examples to illustrate the results of this section.
Theorem 3.
(Necessary Optimality Conditions) Consider the optimization Problem Let be an optimal solution of the Problem Assume that the “tangential nearly convexity“ holds at the point and that the objective function f is tangentially convex and continuous at the point Moreover, suppose that is convex. Then, where and S are defined by and respectively.
Proof. Let
be arbitrary. Since
is an optimal solution of the Problem
and the objective function
f is tangentially convex and continuous at the point
it follows from Lemma 4 that
Therefore, we conclude from (3) that
Hence,
where
is the closed unit ball of
Since, by the hypothesis,
is a closed convex cone (note that
is always a closed cone) and
is a compact convex set, by applying the saddle point theorem [
10], Proposition 2.6.9, one has
This implies that there exists
such that
Since
is a convex cone, it follows that
which implies that
On the other hand, by the hypothesis, the “tangential nearly convexity“
holds at the point
so in view of Lemma 2,
Thus, it follows from (17) that
or equivalently,
Since
we conclude from (19) that
and the proof is complete. □
It should be noted that in the following (Corollary 1), the assumption of “locally Lipschitz continuity“ of the objective function
f, which was used in [8] for nonconvex optimization problems with nonpositive constraints under the constraint qualification (TCQ), reduces to the continuity of
Corollary 1.
(Necessary Optimality Conditions) Consider the optimization Problem Let be an optimal solution of the Problem Assume that C is a convex set and the constraint qualification holds at the point Moreover, suppose that the objective function f is tangentially convex and continuous at Then, (Karush-Kuhn-Tucker (KKT) conditions
Proof. Since
C is a convex set and the constraint qualification
holds at the point
then in view of Lemma 1 (assertions
and
all hypotheses of Theorem 3 hold. Therefore, by Theorem 3, we have
But, by Lemma 1 (the assertion
one has
Hence, it follows from (20) that
which completes the proof. □
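For orientation, in the tangentially convex setting the Karush-Kuhn-Tucker conditions referred to above are classically written as follows (a sketch based on the cited literature, e.g. [8,35], for nonpositive constraints $g_i \le 0$; the exact multiplier conditions for this paper's mixed nonpositive and nonnegative constraints depend on its stripped index sets):

```latex
0 \in \partial_T f(\bar{x}) + \sum_{i \in A(\bar{x})} \lambda_i\, \partial_T g_i(\bar{x}),
\qquad \lambda_i \ge 0 \ \ (i \in A(\bar{x})),
```

where $A(\bar{x}) := \{\, i : g_i(\bar{x}) = 0 \,\}$ is the set of active indices at $\bar{x}$.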
Theorem 4.
(Sufficient Optimality Conditions) Consider the optimization Problem with constraint functions that are tangentially convex and locally Lipschitz at the point Assume that the “tangential nearly convexity“ holds at the point and that the objective function f is tangentially convex and pseudoconvex at If then is an optimal solution of the Problem i.e.,
Proof. Let
be arbitrary. Since the “tangential nearly convexity“
holds at the point
there exist sequences
and
such that
and
and that
Since the functions
are tangentially convex and locally Lipschitz at the point
it follows from (21) that
Similarly and by using (9), one has
On the other hand, since
there exists
and moreover, for each
there exist
and
such that
Now, by the tangential convexity of
f at the point
and in view of (22), (23) and Definition 1 and the fact that
we conclude that
Now, by pseudoconvexity of
f at the point
we deduce that
Hence,
is an optimal solution of the Problem
□
Corollary 2.
(Sufficient Optimality Conditions) Consider the optimization Problem with constraint functions that are tangentially convex and locally Lipschitz at the point Assume that the constraint qualification holds at and that the objective function f is tangentially convex and pseudoconvex at If (Karush-Kuhn-Tucker (KKT) conditions then is an optimal solution of the Problem
Proof. Since the constraint qualification holds at the point in view of Lemma 1 the “tangential nearly convexity“ holds at the point and Therefore, the result follows from Theorem 4. □
By the following examples, we illustrate Theorem 3 and Theorem 4 and their corollaries.
Example 7.
Consider the following nonconvex nonsmooth constrained optimization problem:
Let So, the feasible set F of the Problem is given by where K is defined by:
Obviously, is the unique optimal solution of the Problem It is clear that the “tangential nearly convexity“ holds at the point while F is not convex. Also,
and
Therefore, the constraint qualification holds at the point It is not difficult to show that
So, in view of and we observe that
Since it follows from that
Example 8.
Consider the following nonconvex nonsmooth constrained optimization problem:
where the functions and are defined by:
and
Let Therefore, the feasible set F of the Problem is given by where K is defined by:
It is clear that is an optimal solution of the Problem By a simple calculation, it can be seen that The reason is that neither the “tangential nearly convexity“ holds at the point nor the constraint qualification holds at the point
Acknowledgments
The second author was partially supported by Mahani Mathematical Research Center, Shahid Bahonar University of Kerman, Iran [grant no: 1403/4865].
Data Availability Statement
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
Conflicts of Interest
The authors declare that there is no conflict of interest.
References
- Allevi, E., Martínez-Legaz, J.-E., & Riccardi, R. (2020). Optimality conditions for convex problems on intersections of non necessarily convex sets, J. Global Optim., 77(1), 143-155. [CrossRef]
- Ansari, Q.H., Kobis, E., & Yao, J.-C. (2018). Vector Variational Inequalities and Vector Optimization: Theory and Applications, Springer, Berlin. [CrossRef]
- Bagirov, A., Karmitsa, N., & Makela, M.M. (2014). Introduction to Nonsmooth Optimization: Theory, Practice and Software, Springer, New York. [CrossRef]
- Bauschke, H.H., & Combettes, P.L. (2017). Convex Analysis and Monotone Operator Theory in Hilbert Spaces, Second Edition, Springer, New York.
- Bazaraa, M.S., Sherali, H.D., & Shetty, C.M. (2006). Nonlinear Programming, Wiley, New York.
- Bazargan, F., & Mohebi, H. (2022). Nonsmooth constraint qualifications for nonconvex inequality systems, Numer. Funct. Anal. Optim., 43(14), 1617-1646. [CrossRef]
- Bazargan, F., & Mohebi, H. (2020). New qualification condition for convex optimization without convex representation, Optim. Lett. [CrossRef]
- Bazargan, F., & Mohebi, H. (2022). A new constraint qualification for optimality of nonconvex nonsmooth optimization problems, Filomat, 36(12), 4041–4054.
- Beni-Asad, M., & Mohebi, H. (2023). Characterizations of the solution set for tangentially convex optimization problems, Optim. Lett., 17, 1027–1048. [CrossRef]
- Bertsekas, D., Nedic, A., & Ozdaglar, A.E. (2003). Convex Analysis and Optimization, Athena Scientific, Belmont, Massachusetts.
- Borwein, J.M., & Lewis, A.S. (2006). Convex Analysis and Nonlinear Optimization, Theory and Examples, Springer, New York.
- Boyd, S., & Vandenberghe, L. (2004). Convex Optimization, Cambridge University Press, Cambridge.
- Burke, J.V., & Ferris, M.C. (1991). Characterization of the solution sets of convex programs, Oper. Res. Lett., 10, 57-60. [CrossRef]
- Chieu, N.H., Jeyakumar, V., Li, G., & Mohebi, H. (2018). Constraint qualifications for convex optimization without convexity of constraints: new connections and applications to best approximation, Eur. J. Oper. Res. 265, 19-25. [CrossRef]
- Dutta, J., & Lalitha, C.S. (2013). Optimality conditions in convex optimization revisited, Optim. Lett., 7, 221-229. [CrossRef]
- Dinh, N., Jeyakumar, V., & Lee, G.M. (2006). Lagrange multiplier characterizations of solution sets of constrained pseudolinear optimization problems, Optim., 55, 241-250. [CrossRef]
- Ghafari, N., & Mohebi, H. (2021). Optimality conditions for nonconvex problems over nearly convex feasible sets, Arab. J. Math. [CrossRef]
- Goberna, M.A., Guerra-Vazquez, F., & Todorov, M.I. (2016). Constraint qualifications in convex vector semi-infinite optimization, Eur. J. Oper. Res., 249, 32-40.
- Hoheisel, T., & Kanzow, C. (2009). On the Abadie and Guignard constraint qualifications for mathematical programs with vanishing constraints, Optim., 58, 431-448. [CrossRef]
- Ho, Q. (2017). Necessary and sufficient KKT optimality conditions in non-convex optimization, Optim. Lett., 11, 41-46. [CrossRef]
- Hosseini, S., Mordukhovich, B.S., & Uschmajew, A. (2019). Nonsmooth Optimization and Its Applications, International Series of Numerical Mathematics, Springer, Switzerland AG., 170.
- Jahn, J. (2011). Vector Optimization: Theory, Applications and Extensions, Second Edition, Springer, Berlin Heidelberg.
- Jahn, J. (2017). Karush-Kuhn-Tucker conditions in set optimization, J. Optim. Theory Appl., 172, 707-725. [CrossRef]
- Jeyakumar, V., Lee, G.M., & Dinh, N. (2004). Lagrange multiplier conditions characterizing the optimal solution sets of cone-constrained convex programs, J. Optim. Theory Appl., 123, 83-103. [CrossRef]
- Jeyakumar, V., Lee, G.M., & Dinh, N. (2006). Characterizations of solution sets of convex vector minimization problems, Eur. J. Oper. Res., 174, 1380-1395. [CrossRef]
- Jeyakumar, V., & Mohebi, H. (2019). Characterizing best approximation from a convex set without convex representation, J. Approx. Theory, 239, 113-127. [CrossRef]
- Kobis, E., Tammer, C., & Yao, J.-C. (2017). Optimality conditions for set-valued optimization problems based on set approach and applications in uncertain optimization, J. Nonlinear Convex Anal., 18(6), 1001-1014.
- Kong, X., Zhang, Y., & Yu, G. (2018). Optimality and duality in set-valued optimization utilizing limit sets, Open Math., 16, 1128-1139. [CrossRef]
- Komiya, H. (1988). Elementary proof for Sion’s minimax theorem, Kodai Math. J., 11, 5-7. [CrossRef]
- Lasserre, J.B. (2010). On representations of the feasible set in convex optimization, Optim. Lett., 4, 1-5. [CrossRef]
- Lemaréchal, C. (1986). An introduction to the theory of nonsmooth optimization, Optim., 17, 827-858. [CrossRef]
- Liao, J.G., & Du, T.S. (2016). On some characterizations of sub-b-s-convex functions, Filomat, 30(14), 3885-3895.
- Liao, J.G., & Du, T.S. (2017). Optimality conditions in sub-(b;m)-convex programming, Politehn. Univ. Bucharest Sci. Bull. Ser. A Appl. Math. Phys., 79(2), 95-106.
- Mangasarian, O.L. (1988). A simple characterization of solution sets of convex programs, Oper. Res. Lett., 7, 21-26. [CrossRef]
- Martínez-Legaz, J.-E. (2015). Optimality conditions for pseudoconvex minimization over convex sets defined by tangentially convex constraints, Optim. Lett., 9, 1017-1023. [CrossRef]
- Martínez-Legaz, J.-E. (1983). A Generalized Concept of Conjugation, Lecture Notes in Pure and Appl. Math., 86, 45-59.
- Martínez-Legaz, J.-E. (2023). A mean value theorem for tangentially convex functions, Set-Valued Variational Anal., 31(2), 1-13. [CrossRef]
- Mashkoorzadeh, F., Movahedian, N., & Nobakhtian, S. (2019). Optimality conditions for nonconvex constrained optimization problems, Numer. Funct. Anal. Optim., 40, 1918-1938. [CrossRef]
- Pshenichnyi, B.N. (1971). Necessary Conditions for an Extremum, Marcel Dekker Inc, New York.
- Zhou, Z., Yang, X., & Qiu, Q. (2018). Optimality conditions of the set-valued optimization problem with generalized cone convex set-valued maps characterized by contingent epiderivative, Acta Math. Appl. Sin. Engl. Ser., 34, 11-18. [CrossRef]
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).