Second-Order Optimality Conditions: An Extension to Hadamard Manifolds

This work presents a study of necessary and sufficient optimality conditions for scalar optimization problems on Hadamard manifolds. In this geometric context, we obtain and present new function types characterized by the property that all their second-order stationary points are global minima. To do so, we extend the concept of convexity in Euclidean space to the more general notion of invexity on Hadamard manifolds. This is done by employing the notions of second-order directional derivative, second-order pseudoinvex functions and the second-order Karush-Kuhn-Tucker (KKT) pseudoinvex problem. We prove that every second-order stationary point is a global minimum if and only if the problem is second-order pseudoinvex (for unconstrained scalar optimization) or second-order KKT-pseudoinvex (for constrained scalar optimization). This result has not been presented in the literature before. Finally, examples of these new characterizations are provided in the context of "Higgs boson like" potentials, among others.


Introduction
This article concerns itself with several questions, the first of which is why one should study the optimality conditions of scalar mathematical programming problems. Optimality conditions are a crucial asset for solving optimization problems, which constitute some of the most ubiquitous types of problems across scientific disciplines. Furthermore, optimality conditions and their associated optimal points play a vital role in activities as interdisciplinary as finding the best-fit parameters of a model given a set of data.
The second, and most important, of these questions is the generalization and use of notions of convexity beyond standard Euclidean geometries. Convexity conditions and their various extensions are best understood through the properties they bring to an optimization problem. Ensuring convexity in an optimization problem guarantees that every local optimum found is indeed a global optimum. Other classes, such as pseudoconvex functions, ensure that every critical point is also an optimum. However, this latter property does not fully characterize pseudoconvex functions, as it is also shared by other types of functions. For example, the invex functions, first described by Hanson [1], are defined as those functions whose critical and optimal points coincide. This equivalence between critical and optimal points makes it possible to design practical numerical methods and algorithms to find solutions to problems satisfying this condition. Moreover, replacing the vector x − y in the definition of a convex function by a certain function or kernel η(x, y) makes the invexity concept much more flexible and applicable to other fields, such as fuzzy or interval-valued environments [2,3].
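Concretely, Hanson's definition can be stated as follows: a differentiable function $\theta : R^n \to R$ is invex with respect to a kernel $\eta$ when, for all $x, y$,

```latex
\theta(x) - \theta(y) \ge \langle \nabla \theta(y), \eta(x, y) \rangle ,
```

with the convex case recovered by the choice $\eta(x, y) = x - y$. Any stationary point $\nabla \theta(y) = 0$ then immediately satisfies $\theta(x) \ge \theta(y)$ for all $x$, i.e. it is a global minimum.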
In scalar mathematical programming problems with constraints, the invexity of the objective function is not sufficient to guarantee that a Karush-Kuhn-Tucker (KKT) stationary point is an optimum; it is also necessary to impose conditions on the constraints of the problem. Following these ideas, Martin [4] first defined the KT-invex functions. It is also well known that the first-order KKT necessary conditions only provide candidate solutions, the KKT stationary points, while the second-order necessary conditions discard some of these points in the search for the optimum. The importance of these second-order conditions must be stressed, since optima cannot be found by exclusive use of the first-order classical optimality conditions. An example of this is illustrated in Ginchev and Ivanov [5].
The third question concerns geometries that are not built upon straight lines. Since the role of straight lines in Euclidean space is played by geodesics on manifolds, translating a problem to a non-Euclidean setting partially amounts to replacing linear segments by geodesic arcs.
Studying optimization problems on Hadamard manifolds offers great advantages since, in general, solving a nonconvex constrained problem on $R^n$ with the Euclidean metric can be rephrased as solving an unconstrained convex minimization problem on a Hadamard manifold feasible set with an appropriate metric [6]. In this way we can take advantage of the convexity induced by the geometry.
Beyond the field of mathematics, Hadamard manifolds have attracted a substantial amount of interest. For example, in medicine, Hadamard manifolds are used in medical imaging analysis to quantify tumor growth and consequently infer disease progression. Furthermore, problems related to stereo vision processing [7] or to machine learning and computer vision cannot be solved in classical spaces and require Hadamard manifolds for their modelling [8].
In recent times, several authors have studied these topics. Invexity on Riemannian manifolds and its relationship with monotonicity were studied by Barani and Pouryayevali [9], and Mond-Weir dual problems for optimization problems under invexity assumptions were obtained on Hadamard manifolds by Zhou and Huang [10]. More closely related to our study, Ginchev and Ivanov [5] obtained second-order optimality conditions of KKT type for a problem with inequality constraints using pseudoconvex functions in Euclidean spaces. In Ivanov [11], the author uses second-order Fréchet differentiable functions in $R^n$ and defines second-order KT-pseudoconvexity to prove that each second-order KT point is a global minimum. Two years later, the same author extended some of these results to invex functions in [12], but always in Euclidean spaces.
In Ruiz-Garzón et al. [13] we obtained first-order optimality conditions for scalar and vector optimization problems on Riemannian manifolds, but not second-order conditions, which are the subject of this article. Also, in Ruiz-Garzón et al. [14] we proved the existence of optimality conditions of KKT type for constrained vector optimization problems on Hadamard manifolds as a particular case of equilibrium vector problems with constraints.
Motivated by Ivanov's work mentioned previously, the objective of this paper is to extend the second-order optimality conditions obtained in Euclidean spaces to geometries beyond $R^n$, particularly to Hadamard manifolds, thereby extending the concept of KT-invexity to Hadamard manifolds. We present necessary and sufficient optimality conditions for both unconstrained and constrained scalar optimization problems on Hadamard manifolds, looking for the function types for which the second-order critical points and the global minimum points coincide.

Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 14 July 2020 doi:10.20944/preprints202007.0310.v1
Next, we outline the different sections of this article. In Section 2, we recall concepts related to Hadamard manifolds and define the second-order directional derivative of a function on Hadamard manifolds. In Section 3, we prove that second-order invex and second-order pseudoinvex functions coincide and are characterized by the fact that all second-order critical points are global optima for the unconstrained optimization problem. In Section 4 we extend the previous results to the constrained optimization problem by defining new concepts, such as the second-order KKT stationary point and the second-order KKT-pseudoinvex problem, to characterize the global optima of these problems on Hadamard spaces. We end with some conclusions and references.

Preliminaries
Let $M$ be an $n$-dimensional differentiable manifold. We denote by $T_xM$ the tangent space of $M$ at $x$, which is a real vector space of the same dimension as $M$. The collection of all tangent spaces of $M$, denoted by $TM = \bigcup_{x \in M} T_xM$, is the tangent bundle of $M$.
Given a piecewise $C^1$ curve $\alpha : [a, b] \to M$, we denote the norm on $T_xM$ by $\|\cdot\|_x$ and the length of $\alpha$ by $L(\alpha) = \int_a^b \|\alpha'(t)\|_{\alpha(t)}\, dt$. For any two points $x, y \in M$, the distance $d$ between them is defined as $d(x, y) = \inf\{L(\alpha) \mid \alpha \text{ is a piecewise } C^1 \text{ curve joining } x \text{ and } y\}$.
Furthermore, any curve $\alpha$ joining $x$ and $y$ in $M$ such that $L(\alpha) = d(x, y)$ is a geodesic. We will assume that the manifold is complete; in other words, any two points $x$ and $y$ can be connected by a geodesic whose length is $d(x, y)$.
On differentiable manifolds it is possible to define the derivatives of curves on the manifold; the derivative of a curve at a point $x$ lies in the vector space $T_xM$. We define the Riemannian exponential map $\exp_x : T_xM \to M$ by $\exp_x(v) = \alpha_v(1)$, where $\alpha_v$ is the geodesic with $\alpha_v(0) = x$ and $\alpha_v'(0) = v$. With the goal of generalizing the concept of convex set to Hadamard manifolds, Bangert [15] proposes the following definition: we say that a subset $S_1$ of $M$ is totally convex if $S_1$ contains every geodesic $\alpha_{x,y}$ of $M$ whose endpoints $x$ and $y$ belong to $S_1$.
It is well known that a simply connected complete Riemannian manifold of non-positive sectional curvature is called a Hadamard manifold. There, the geodesic between any two points is unique. In addition, the exponential map at each point of the manifold $M$ is a global diffeomorphism; it is defined on the whole tangent space, and the inverse map $\exp_x^{-1} : M \to T_xM$ is well defined. Let $\eta : M \times M \to TM$ be a function defined on the product manifold such that $\eta(x, y) \in T_yM$, $\forall x, y \in M$. Now, to achieve the objectives of this article, we need adequate concepts of differential and second-order differential on Hadamard manifolds:

Definition 2.2 Let $M$ be a Hadamard manifold. We define the differential of the function $\theta : M \to R$ at the point $\bar{x}$ along the direction $\eta(x, \bar{x})$ as $d\theta_{\bar{x}}(\eta(x, \bar{x})) = \langle \operatorname{grad} \theta_{\bar{x}}, \eta(x, \bar{x}) \rangle$ for all $\eta(x, \bar{x})$ in $T_{\bar{x}}M$, where $\operatorname{grad} \theta_{\bar{x}}$ is the gradient of the function $\theta$ at $\bar{x}$.
Remark 2.1 The differential of $\theta$ at $\bar{x}$ along $\eta(x, \bar{x})$ is analogous to the directional derivative in Euclidean space.
If we wish to study second-order optimality conditions, we need to take a step forward. Thus, we propose the following definition of second-order directional derivative on Hadamard manifolds:

Definition 2.3 Let $M$ be a Hadamard manifold. A mapping $\theta : M \to R$ is said to have a second-order directional derivative at the point $\bar{x}$ along the direction $\xi(x, \bar{x}) \in T_{\bar{x}}M$ if and only if the limit
$$\theta''(\bar{x}; \xi(x, \bar{x})) = \lim_{t \to 0^+} \frac{2}{t^2} \left[ \theta\big(\exp_{\bar{x}}(t\, \xi(x, \bar{x}))\big) - \theta(\bar{x}) - t\, d\theta_{\bar{x}}(\xi(x, \bar{x})) \right]$$
exists and is finite. The function $\theta$ is called second-order directionally differentiable on $M$ if the second-order directional derivative exists at each point of $M$ and along every direction $\xi(x, \bar{x}) \in T_{\bar{x}}M$.
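In the Euclidean special case $M = R^n$, where $\exp_{\bar{x}}(t\xi) = \bar{x} + t\xi$, this limit reduces to the classical second-order directional derivative and can be checked numerically. The sketch below is only an illustration under that assumption; the test function $\theta(x) = x^4$ and the evaluation point are chosen purely for the example:

```python
def second_order_dd(theta, dtheta, x_bar, xi, t=1e-4):
    """Difference quotient for the second-order directional derivative in the
    Euclidean case M = R, where exp_x(t*xi) = x + t*xi."""
    return (2.0 / t**2) * (theta(x_bar + t * xi)
                           - theta(x_bar)
                           - t * dtheta(x_bar) * xi)

theta = lambda x: x**4          # sample objective
dtheta = lambda x: 4 * x**3     # its first derivative
approx = second_order_dd(theta, dtheta, x_bar=1.0, xi=1.0)
print(abs(approx - 12.0) < 1e-2)  # theta''(1) * xi^2 = 12, so this prints True
```

For small $t$ the quotient approaches the second derivative scaled by $\xi^2$, which is the behaviour the manifold definition mimics via the exponential map.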

Applications to Scalar Optimization Problems
In this section we characterize the functions on Hadamard manifolds whose critical points are global optima, in the context of scalar optimization problems.

Unconstrained Case
We start by considering the Unconstrained Scalar Optimization Problem
$$\min\ \theta(x) \quad \text{subject to } x \in M,$$
where $M$ is a Hadamard manifold and $\theta : M \to R$ is a differentiable function.
Let us now introduce the notion of invexity of a function on Hadamard manifolds, guided by the concept of convexity on linear spaces. Note that if $M = R^n$ we recover the classic and well-known invexity definition given by Hanson [1]; in that case the inverse exponential map simplifies to the familiar form $\exp_x^{-1}(\bar{x}) = \bar{x} - x$. Having introduced the concept of invexity, we can now discuss the existence of critical points. For this purpose, we adopt the definition given by Ruiz-Garzón et al. [13]. This result is a generalization to Riemannian manifolds of a result achieved by Craven and Glover [16].
Given these definitions, we are in a position to tackle one of the objectives of our work. Thus, we now propose a generalization of the concept of 2-invexity, given by Ivanov [12] for Fréchet differentiable functions in finite-dimensional Euclidean space $R^n$, to Hadamard manifolds $M$.
Example 3.1 Consider $M = \{x \in R : x > 0\}$, where $v$ is a unit vector, and let $\exp_x(tv) = x e^{(v/x)t}$ and $\exp_x^{-1}(\bar{x}) = x \ln(\bar{x}/x)$ be the Riemannian exponential and its inverse map. Let $\theta : M \to R$ be a convex function; it is therefore 2-invex.
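These maps can be checked directly in code. The following sketch (illustrative only; the sample points are arbitrary) verifies that the stated exponential and inverse exponential maps on the positive reals are mutually inverse:

```python
import math

def exp_map(x, v):
    """Riemannian exponential on M = {x > 0}: exp_x(v) = x * e^(v/x)."""
    return x * math.exp(v / x)

def exp_inv(x, x_bar):
    """Inverse exponential map: exp_x^{-1}(x_bar) = x * ln(x_bar / x)."""
    return x * math.log(x_bar / x)

x, x_bar = 2.0, 5.0
v = exp_inv(x, x_bar)                     # tangent vector at x pointing to x_bar
print(abs(exp_map(x, v) - x_bar) < 1e-9)  # prints True: the maps invert each other
```

Indeed, $\exp_x(x \ln(\bar{x}/x)) = x\, e^{\ln(\bar{x}/x)} = \bar{x}$, exactly as the numerical check confirms.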
Furthermore, we can similarly extend the concept of stationary point.

Proof We argue by reductio ad absurdum. Suppose that $\theta$ is 2-invex, that $\bar{x} \in M$ is a 2-CP, and that $\bar{x}$ is not a global minimum. Thus it follows that $d\theta_{\bar{x}}(\eta(x, \bar{x})) = 0$; moreover, for some $\xi(x, \bar{x}) \in T_{\bar{x}}M$ the derivative $\theta''(\bar{x}; \xi(x, \bar{x}))$ exists, and there is $x \in M$ with $\theta(x) < \theta(\bar{x})$.
By the 2-invexity of $\theta$ there exist $\eta(x, \bar{x}) \in T_{\bar{x}}M$ and $\xi(x, \bar{x}) \in T_{\bar{x}}M$ such that $\theta''(\bar{x}; \xi(x, \bar{x}))$ exists, which contradicts (1). Now we prove the sufficient condition. Supposing that each 2-CP is a global minimum, we need to prove that there exist $\eta(x, \bar{x}), \xi(x, \bar{x}) \in T_{\bar{x}}M$ such that the derivative $\theta''(\bar{x}; \xi(x, \bar{x}))$ exists and (2) holds. This is ensured by defining, for example, $\eta(x, \bar{x}) = t \operatorname{grad} \theta(\bar{x})$, where $t$ is an arbitrary positive real, and $\xi(x, \bar{x}) = 0$; then (2) holds and $\theta$ is 2-invex.
We consider now an example of a possible application of the previous characterization in Hadamard manifolds.
Example 3.2 Let us consider the following Unconstrained Scalar Optimization Problem. The objective function corresponds to the Ricker wavelet, usually referred to as the Mexican hat wavelet (see fig. 1). The Ricker wavelet is a function with many applications in physics, such as the modelling of seismic data. Furthermore, this wavelet can be understood as the cross section of a Higgs-Anderson potential, a potential of great relevance in explaining the inner workings of the Higgs boson and other modern topics of condensed matter physics. Note that the tails of the original Higgs-Anderson potential diverge to infinity, while in our example they converge to a constant value. Moreover, let us consider the set $\Omega = \{p = (p_1, p_2) \in R^2 : p_2 > 0\}$, with the metric chosen to be the $2 \times 2$ matrix $G(p) = (g_{ij}(p))$ defined by $g_{11}(p) = g_{22}(p) = 1/p_2^2$, $g_{12}(p) = g_{21}(p) = 0$.
Endowing $\Omega$ with this Riemannian metric, we can define the inner product of two tangent vectors $u$ and $v$ at $p$ as $\langle u, v \rangle_p = \langle G(p)u, v \rangle$. The gradient is then given by $\operatorname{grad} \theta(p) = G(p)^{-1} \nabla \theta(p)$. Thus, we obtain a complete Hadamard manifold $H^2$, the upper half-plane model of hyperbolic space.
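As a numerical illustration of this construction (a sketch only: the concrete objective below, a Ricker-type profile in the first coordinate, is a hypothetical stand-in chosen for the example), the Riemannian gradient on $H^2$ is simply the Euclidean gradient rescaled by $G(p)^{-1} = p_2^2 I$:

```python
import numpy as np

def euclidean_grad(theta, p, h=1e-6):
    """Central-difference approximation of the Euclidean gradient of theta at p."""
    grad = np.zeros_like(p)
    for i in range(p.size):
        step = np.zeros_like(p)
        step[i] = h
        grad[i] = (theta(p + step) - theta(p - step)) / (2 * h)
    return grad

def riemannian_grad(theta, p):
    """grad theta(p) = G(p)^{-1} * nabla theta(p), with G(p) = (1/p2^2) * I on H^2."""
    return (p[1] ** 2) * euclidean_grad(theta, p)

# Hypothetical Ricker-type objective depending on the first coordinate only.
theta = lambda p: (1.0 - p[0] ** 2) * np.exp(-p[0] ** 2 / 2.0)

p = np.array([1.0, 2.0])          # a point of the upper half-plane (p2 > 0)
print(riemannian_grad(theta, p))  # Euclidean gradient rescaled by p2^2 = 4
```

The rescaling shows how the hyperbolic metric reweights descent directions: the same Euclidean slope counts for more the farther the point lies from the boundary $p_2 = 0$.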
Thus, we can prove that the function $\theta$ is 2-invex at $\bar{p}$, but not invex, by considering $\eta(p, \bar{p}) = \xi(p, \bar{p}) = 3\bar{p} - p$ and $\eta(p, \bar{p}) = (0, 1)$.

In generalized convexity theory, it is well known that pseudoconvex functions are a generalization of convex functions in Euclidean spaces and ensure that all critical points are optimal. In the same way, we now concern ourselves with extending the 2-invex functions to the 2-pseudoinvex functions on Hadamard manifolds.

Definition 3.5 Let $M$ be a Hadamard manifold, and let $\theta : M \to R$ be differentiable and second-order directionally differentiable at any $\bar{x} \in M$ and along every direction. The function $\theta$ is said to be 2-pseudoinvex (2-PIX) at $\bar{x} \in M$ if there exist $\eta(x, \bar{x}), \xi(x, \bar{x}) \in T_{\bar{x}}M$ such that, whenever $\theta(x) < \theta(\bar{x})$, either $d\theta_{\bar{x}}(\eta(x, \bar{x})) < 0$, or $d\theta_{\bar{x}}(\eta(x, \bar{x})) = 0$ and $\theta''(\bar{x}; \xi(x, \bar{x})) < 0$.

We will now analyze the relationship between 2-IX and 2-PIX functions.
Proof Let $\bar{x}, x \in M$ be two points such that $\theta(x) < \theta(\bar{x})$.
We can go one step further: Theorem 3.4 Let $M$ be a Hadamard manifold, and let $\theta : M \to R$ be differentiable and second-order directionally differentiable at every $\bar{x} \in M$ along every direction. Then $\theta$ is 2-invex at $\bar{x} \in M$ if and only if it is 2-pseudoinvex at $\bar{x} \in M$. Proof On the one hand, from Theorem 3.3, 2-pseudoinvexity implies 2-invexity. On the other hand, it is well known that 2-invexity implies 2-pseudoinvexity.
Joining Theorems 3.2 and 3.4 we reach the following conclusion: Corollary 3.1 Let $M$ be a Hadamard manifold, and let $\theta : M \to R$ be differentiable and second-order directionally differentiable at every $\bar{x} \in M$ along every direction. The function $\theta$ is 2-pseudoinvex at $\bar{x} \in M$ if and only if each 2-CP is a global minimum of $\theta$ on $M$.
In summary, we have that 2-invexity, 2-pseudoinvexity and the property that every 2-CP is a global minimum are equivalent. The previous results extend Theorems 2.12 and 2.14 obtained by Ivanov in [12] from an environment of convexity to the more general environment of invexity, and generalize these notions from Euclidean space to Hadamard manifolds. Therefore, 2-pseudoinvexity coincides with 2-invexity in the same way that pseudoinvexity coincides with invexity, as demonstrated by Craven and Glover [16] in Euclidean space.

Constrained Case
We consider the Constrained Scalar Optimization Problem (CSOP) of the form
$$\min\ \theta(x) \quad \text{subject to } g_j(x) \le 0,\ j = 1, 2, \ldots, m,\ x \in M,$$
where $M$ is a Hadamard manifold and $\theta, g_j : M \to R$, $j = 1, 2, \ldots, m$, are differentiable functions. Let us consider the feasible set $S_1 = \{x \in M \mid g_j(x) \le 0,\ j = 1, 2, \ldots, m\}$ and let $I(\bar{x})$ be the set of active constraints.
Similarly to the unconstrained case, our aim is to find the kinds of functions on Hadamard manifolds for which the Karush-Kuhn-Tucker points and the optima coincide. For this purpose, let us consider quasiinvex functions. These are a generalization of quasiconvex functions, a class that shares most of its properties with convex and pseudoconvex functions. However, in contrast with pseudoconvex functions, the critical points of quasiconvex functions may not be optimal.
We employ the following definition of critical direction, which in the constrained case plays the role of the concept of critical point explored in the unconstrained case: a direction $\eta(x, \bar{x}) \in T_{\bar{x}}M$ is critical at $\bar{x}$ if $d\theta_{\bar{x}}(\eta(x, \bar{x})) \le 0$ and $dg_{j\bar{x}}(\eta(x, \bar{x})) \le 0$ for all $j \in I(\bar{x})$.
Note that the last two conditions have been added to the classic KKT conditions. Now, we need to introduce some new concepts that allow us to relate stationary and optimal points in the Constrained Scalar Optimization Problem (CSOP), since the invexity of the objective function by itself does not guarantee the identification of stationary and optimal points. Thus, our intention is to extend the class of KT-invex functions created by Martin [4] to generalized invexity on Hadamard manifolds. In order to do so, let us set the following definitions, where $I(\bar{x}) = \{j = 1, \ldots, m : g_j(\bar{x}) = 0\}$ denotes the set of active constraints at $\bar{x}$.
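For comparison, the classical first-order KKT conditions, to which the definitions above append their second-order requirements, can be written on a Hadamard manifold in the following standard form (a sketch stated under the usual assumption of nonnegative multipliers $\mu_j$):

```latex
\operatorname{grad} \theta(\bar{x}) + \sum_{j=1}^{m} \mu_j \operatorname{grad} g_j(\bar{x}) = 0,
\qquad \mu_j \, g_j(\bar{x}) = 0, \qquad \mu_j \ge 0, \quad j = 1, \ldots, m.
```

A 2-KKT point additionally constrains the second-order behaviour of the Lagrangian $L$ along critical directions, which is what allows non-optimal first-order stationary points to be discarded.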
We can now obtain the sufficient condition for global optimality: Theorem 3.5 Let $M$ be a Hadamard manifold. Suppose that the functions $\theta, g_j : S_1 \subseteq M \to R$, $j = 1, 2, \ldots, m$, are differentiable and second-order directionally differentiable at any $\bar{x} \in S_1$ along every critical direction. Furthermore, assume that the CSOP is a 2-KKT pseudoinvex problem. Then each 2-KKT point is a global minimum.
Proof Suppose that the CSOP is a 2-KKT pseudoinvex problem and that $\bar{x} \in S_1$ is a 2-KKT point. To prove that $\bar{x}$ is a global minimum, assume the opposite: there is $x \in S_1$ with $\theta(x) < \theta(\bar{x})$. Thanks to the 2-KKT-pseudoinvexity we have that $d\theta_{\bar{x}}(\eta(x, \bar{x})) \le 0$ and $dg_{j\bar{x}}(\eta(x, \bar{x})) \le 0$, $j \in I(\bar{x})$; that is, the direction $\eta(x, \bar{x})$ is critical.
From the 2-KKT-pseudoinvexity of the CSOP we have that $\theta''(\bar{x}; \eta(x, \bar{x})) < 0$ and $g_j''(\bar{x}; \eta(x, \bar{x})) \le 0$ for all $j \in I(\bar{x})$ with $\mu_j \ge 0$. Thus, on the one hand we get $L''(\bar{x}; \eta(x, \bar{x})) < 0$, and on the other hand we get expression (7), a contradiction. Now, we obtain a necessary condition for optimality.
Proof Given that each 2-KKT stationary point is a global minimum, we prove that the CSOP is 2-KKT pseudoinvex. Given two points $x, \bar{x} \in S_1$ with $\theta(x) < \theta(\bar{x})$, the relationships (8)-(11) must be checked. Step 1. By the quasiinvexity of $\theta$ at $\bar{x} \in S_1$, $d\theta_{\bar{x}}(\eta(x, \bar{x})) \le 0$ holds, and therefore expression (8) is verified.
Step 3. We need to prove that $dg_{j\bar{x}}(\eta(x, \bar{x})) \le 0$. But this is a consequence of quasiinvexity; hence expression (10) holds.
Hence, we arrive at the most important outcome of this section, which we present in the following corollary: Corollary 3.2 Let $M$ be a Hadamard manifold. Suppose that the functions $\theta, g_j : S_1 \subseteq M \to R$, $j = 1, 2, \ldots, m$, are differentiable and second-order directionally differentiable at any $\bar{x} \in S_1$ along every critical direction $\eta(x, \bar{x}) \in T_{\bar{x}}M$, and that they are quasiinvex at $\bar{x} \in S_1$ with respect to $\eta$ not identically zero. Then each 2-KKT point is a global minimum if and only if the problem CSOP is 2-KKT pseudoinvex.
Therefore, under a quasiinvexity environment, the 2-KKT points and the global minima of the CSOP coincide exactly when the problem is 2-KKT pseudoinvex. One possible way forward is the extension of these methods to a computational environment, i.e. to the numerical computation of optimal points on Hadamard manifolds.