Computational Algebraic Geometry and Quantum Mechanics: An Initiative toward Post-Contemporary Quantum Chemistry

A new framework in quantum chemistry has been proposed recently ("An approach to first principles electronic structure calculation by symbolic-numeric computation" by A. Kikuchi). It is based on the modern technique of computational algebraic geometry, viz. the symbolic computation of polynomial systems. Although this framework belongs to molecular orbital theory, it fully adopts the symbolic method. The analytic integrals in the secular equations are approximated by polynomials. The indeterminate variables of the polynomials represent the wave-functions and other parameters for the optimization, such as atomic positions and contraction coefficients of atomic orbitals. The symbolic computation then digests and decomposes the polynomials into a tame form of a set of equations, to which numerical computations are easily applied. The key technique is Gröbner basis theory, by which one can investigate the electronic structure by unraveling the entangled relations of the involved variables. In this article, we first demonstrate the featured result of this new theory. Next, we expound the mathematical basics concerning computational algebraic geometry which are required in our study. We will see how highly abstract ideas of polynomial algebra can be applied to the solution of definite problems in quantum mechanics. We solve simple problems of "quantum chemistry in algebraic variety" by means of the algebraic approach. Finally, we review several topics related to polynomial computation, whereby we shall have an outlook for the future direction of the research.

∗akihito kikuchi@gakushikai.jp (The corresponding author; a visiting researcher in IRCQE)

Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 26 December 2019. © 2019 by the author(s). Distributed under a Creative Commons CC BY license.
Keywords— quantum mechanics; algebraic geometry; commutative algebra; Gröbner basis; primary ideal decomposition; eigenvalue problem in quantum mechanics; molecular orbital theory; quantum chemistry; quantum chemistry in algebraic variety; first principles electronic structure calculation; symbolic computation; symbolic-numeric solving; Hartree-Fock theory; Taylor series; polynomial approximation; algebraic molecular orbital theory.


Introduction

Dear Readers,
If you are researchers or students with expertise in physics or chemistry, you might have heard of "algebraic geometry" or "commutative algebra". Perhaps you have heard only these words and have no definite idea about them, because these topics are taught in departments of mathematics, not in those of physics and chemistry. You might have heard of advanced regions of theoretical physics, such as super-string theory, the matrix model, etc., where researchers seek the secret of the universe by means of esoteric theories of mathematics, with the motto "algebraic geometry and quantum mechanics", and you might despair at imagining the endurance required to arrive at the foremost front of such study... However, algebraic geometry originated from a rather primitive region of mathematics: at bottom, it simply asserts that algebraic geometry is the study of polynomial systems. And polynomials are ubiquitous in every branch of physics. If you attend a lecture on elementary quantum mechanics, or if you study quantum chemistry, you always encounter secular equations in order to compute the energy spectrum. Such equations are actually given by polynomial systems, although you solve them through linear algebra. Indeed, linear algebra is so powerful that you have almost forgotten that you are laboring with polynomial algebraic equations.
Be courageous! Let us have a small tour on the sea of QUANTUM MECHANICS with the chart of ALGEBRAIC GEOMETRY. Your voyage shall not be lost in a stormy and misty mare incognitum. Having chartered a cruise ship, the "COMMUTATIVE ALGEBRA", we sail from a celebrated seaport named "MOLECULAR ORBITAL THEORY".
The molecular orbital theory [1] computes the quantum eigenstates from the secular equation

H Ψ = E S Ψ.

In these expressions, H is the Hamiltonian (Fock) matrix; the vector Ψ represents the coefficients in the LCAO wave-function φ = Σ_i Ψ_i χ_i (with atomic basis functions χ_i); E is the energy spectrum. The Fock matrix H and the overlap matrix S are, in theory, computed symbolically and represented by analytic formulas with respect to the atomic coordinates and other parameters included in the Gaussian- or Slater-type localized atomic basis; in practice, they are given numerically, and the equation is solved by means of linear algebra. In contrast to this practice, it is demonstrated by Kikuchi [2] that there is a possibility of symbolic-numeric computation of molecular orbital theory which can go without linear algebra: the secular equation is given in analytic form and approximated by a polynomial system, which is processed by computational algebraic geometry. The symbolic computation reconstructs and decomposes the polynomial equations into a more tractable and simpler form, to which the numerical solution of polynomial equations is applied for the purpose of obtaining the quantum eigenstates. The key techniques are Gröbner basis theory and the triangulation of polynomial sets. Let us review the exemplary computation of the hydrogen molecule in [2].
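As a toy illustration of treating a secular equation as a polynomial system rather than a linear-algebra problem, one may hand a small overlap-weighted eigenproblem directly to a polynomial solver. The 2x2 matrix entries below are illustrative numbers chosen by us (not taken from [2]), and sympy is used as one possible tool:

```python
import sympy as sp

E, c1, c2 = sp.symbols('E c1 c2', real=True)
# Illustrative 2x2 secular problem (numbers are NOT from the paper):
alpha, beta, s = -1, sp.Rational(-1, 2), sp.Rational(1, 4)

# (H - E*S)c = 0, together with the S-normalization <c|S|c> = 1,
# is a polynomial system in the unknowns E, c1, c2.
eqs = [
    (alpha - E)*c1 + (beta - E*s)*c2,
    (beta - E*s)*c1 + (alpha - E)*c2,
    c1**2 + c2**2 + 2*s*c1*c2 - 1,
]
sols = sp.solve(eqs, [E, c1, c2], dict=True)
energies = sorted({sol[E] for sol in sols})
print(energies)   # bonding and antibonding levels: [-6/5, -2/3]
```

The solver returns the energies and the (normalized) coefficient vectors simultaneously, without an explicit diagonalization step.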
Let φ_i, Z_a, R_a be the wavefunctions, the atomic charges, and the atomic positions. The total energy functional of Hartree-Fock theory is given in terms of these quantities, and all of the integrals involved in the energy functional are analytically computed. (As for the analytic forms of the integrals, see the supplement of [2].) Then, with respect to the inter-atomic distance R_AB, the Taylor expansion of the energy functional is computed up to degree four, at the center R_AB = 7/5. The polynomial approximation is given by

OMEGA = (3571 - 1580*a^2 - 3075*a*b - 1580*b^2 - 1580*c^2 + 625*a^2*c^2 + 1243*a*b*c^2 + 620*b^2*c^2 - 3075*c*d + 1243*a^2*c*d + 2506*a*b*c*d + 1243*b^2*c*d - ... - 86*a*b*c*d*r^4 - 17*b^2*c*d*r^4 + 12*d^2*r^4 - 4*a^2*d^2*r^4 - 17*a*b*d^2*r^4 + 13*a*b*ev*r^4 + 13*c*d*ew*r^4)/1000

(the middle terms are omitted here), where the inter-atomic distance R_AB is represented by r (for the convenience of polynomial processing).
The equations used in the study are quite lengthy, so we show only part of them here. We give the exact ones in the appendix (supplementary material): the energy functional in Appendix A; the secular equations in Appendix B; the Gröbner bases in Appendix C; the triangulation in Appendix D.
In order to reduce the computational cost, the numerical coefficients are represented by fractions, obtained by the truncation of decimal numbers. We make a change of variables to (s, t, u, v, ...), and the entries of the resulting polynomial system S have forms such as

... + 992*t*u*v*r^2 - 160*t*u*v*r + 40*t*u*v = 0

and, from S[3],

156*s^2*u*r^4 - 1068*s^2*u*r^3 + 2248*s^2*u*r^2 - 80*s^2*u*r + ...

The triangular decomposition of the Gröbner basis is computed, which contains five decomposed sets of equations T[1], ..., T[5]. Here only the skeleton is presented, while the details are given in the appendix. Observe that one decomposed set includes seven entries; from the first entry to the last, the seven variables are added one by one, in the order of r, ew, ev, v, u, t, s, in the arrangement of a triangle. Now we can solve the equations by determining the unknown variables one by one. As a result, the triangular decomposition yields four subsets of the solutions of the equations: the possible electronic configurations are exhausted, as is shown in Table 1.

This is one of the featured results in [2]. The author of that work demonstrated the procedure of the computation in a factual way, but he did not explain the underlying mathematical theory so minutely. Consequently, it is beneficial for us to grasp some prerequisites of commutative algebra and algebraic geometry, because these theories are still strange to a greater part of physicists and chemists. In the following sections, we review concepts of commutative algebra and algebraic geometry which are utilized in this sort of computation. Next, we learn about Gröbner bases. We will find that the "primary ideal decomposition" of commutative algebra surrogates the eigenvalue problem of linear algebra. Then we apply our knowledge to solve simple problems of molecular orbital theory from the standpoint of polynomial algebra. In the end, we take a look at related topics which shall enrich the molecular orbital theory with a taste of polynomial algebra, such as "polynomial optimization" and "quantifier elimination".
The employment of these methods will indicate the course of future studies.
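The one-by-one determination of unknowns enabled by a triangular decomposition can be mimicked on a small scale with a lexicographic Gröbner basis. The two-variable system below is a made-up stand-in for the secular equations of the text, and sympy is an assumed tool:

```python
import sympy as sp

x, y = sp.symbols('x y')
# A made-up two-variable polynomial system (a stand-in for the
# secular equations of the text, not the actual ones from [2]).
F = [x**2 + y**2 - 5, x - y - 1]

# Under the lexicographic order the Groebner basis is "triangular":
# its last element involves y alone, and x follows by substitution.
G = sp.groebner(F, x, y, order='lex')
univariate = G.exprs[-1]          # a polynomial in y only
roots_y = sorted(sp.solve(univariate, y))
points = [(sp.solve(G.exprs[0].subs(y, ry), x)[0], ry) for ry in roots_y]
print(points)   # [(-1, -2), (2, 1)]
```

Solving the univariate element first and back-substituting reproduces, in miniature, the "arrangement of a triangle" described above.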

Basics of Commutative Algebra and Algebraic Geometry
Our handy tool is the polynomial, and our chief concern is how to solve a set of polynomial equations. Such topics are the themes of commutative algebra. If we would like to take a geometrical view, the task lies also in algebraic geometry. For this reason, in this section, we shall review mathematical definitions and examples related to the theory of polynomials.
N. B.: The readers should keep in mind that the chosen topics are mere "thumbnails" of the concepts of profound mathematics. As for proofs, the readers should study from more rigorous sources; for instance, for commutative algebra, the book by Eisenbud [3] or the book by Reid [4]; for algebraic geometry, the book by Perrin [5]; for Gröbner bases, the works by Cox, Little, and O'Shea [6,7], the book by Becker and Weispfenning [8], or the book by Ene and Herzog [9]. The degree of a polynomial is the maximum of the total degrees of its monomials. A homogeneous polynomial is a polynomial in which every non-zero monomial has the same degree, such as a polynomial in four variables all of whose monomials have total degree three.

Ideal
Definition 4.1 An ideal I in the polynomial ring S is a set of polynomials which is closed under these operations: f + g ∈ I for f, g ∈ I, and h·f ∈ I for h ∈ S, f ∈ I. We usually denote an ideal by its generators, such as I = (f_1, ..., f_m). The sum and the product of two ideals are defined by I + J = {f + g : f ∈ I, g ∈ J} and I·J = ({f·g : f ∈ I, g ∈ J}). The ideal quotient is defined to be the ideal I : J = {f ∈ S : f·J ⊂ I}. For two ideals I and m, the saturation is defined by I : m^∞ = ∪_i (I : m^i), where I : m^i ⊂ I : m^j for i < j. (In many cases, we do not have to consider the union of an infinite number of ideal quotients, as the extension by union with I : m^i shall "saturate" and stop growing at some finite i, if the ring is Noetherian, as will be discussed later.) The radical of an ideal I is defined by √I = {f ∈ S : f^n ∈ I for some n > 0}. Also, these properties of the affine algebraic set are notable.
• The empty set and the whole affine space are affine algebraic sets.
• The intersection of affine algebraic sets is also an affine algebraic set: V(I) ∩ V(J) = V(I + J).
• The finite union of affine algebraic sets is also an affine algebraic set: V(I) ∪ V(J) = V(I·J).

The interpretation of the saturation I : J^∞ in algebraic geometry is this: V(I : J^∞) is the closure (in the sense of topology) of the complement V(I) \ V(J). This is a consequence of the famous theorem of Hilbert (the Nullstellensatz), which we will learn later.
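The saturation can be computed with a Gröbner basis by a standard elimination trick: I : f^∞ = (I + (1 - t·f)) ∩ k[x, y], where the auxiliary variable t is eliminated by a lexicographic order. A minimal sketch (sympy assumed; the ideal (x·y) is our own illustrative choice):

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
# V((x*y)) is the union of the two coordinate axes.  Saturating by x
# removes the locus {x = 0} and takes the Zariski closure: only the
# x-axis, i.e. the ideal (y), should remain.
I = [x*y]
f = x
# Standard trick: adjoin 1 - t*f and eliminate t (lex order, t largest).
G = sp.groebner(I + [1 - t*f], t, x, y, order='lex')
saturation = [g for g in G.exprs if t not in g.free_symbols]
print(saturation)   # [y]
```

The basis elements free of t generate the saturation, in agreement with the geometric picture of "closure of the complement".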

Residue class ring or quotient ring
We can define "residue class rings". Let I ⊂ R be an ideal in a ring R, and let f be an element of R. The set

f + I := {f + h : h ∈ I}

is "the residue class of f modulo I"; f is a representative of the residue class f + I. We denote the set of residue classes modulo I by R/I. It also has a ring structure through addition and multiplication. Several resources use the term "quotient ring" or "factor ring" for the same mathematical object.
In general, an ideal depicts geometric objects, which might be discrete points or might be connected and lie in several dimensions.
For example, in R = k[x] with I = (x^2 - 2), the class f(x) + I is the representative of f(x) when it is mapped into the residue class ring. We might assume that x̄ (the representative of x in the residue class ring) is not an undetermined variable, but a certain number α (outside of the rationals) such that α^2 = 2. Likewise, in a residue class ring of two variables, we assume that the representatives x̄ and ȳ of x and y are not undetermined variables, but a pair of numbers α and β satisfying the relations which generate the ideal.
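Canonical representatives of residue classes can be computed as polynomial remainders. A minimal sympy sketch in Q[x]/(x^2 - 2), where the class of x behaves like the number α with α^2 = 2:

```python
import sympy as sp
from sympy import reduced

x = sp.symbols('x')
I = [x**2 - 2]                      # the ideal (x^2 - 2) in Q[x]
# The remainder of division by x^2 - 2 is the canonical (degree < 2)
# representative of the residue class in Q[x]/(x^2 - 2).
_, r1 = reduced(x**2, I, x)
_, r2 = reduced(x**3 + x + 1, I, x)
print(r1)   # 2       -- the class of x squares to 2, like sqrt(2)
print(r2)   # 3*x + 1
```

In a multivariate setting the same computation is done modulo a Gröbner basis of the ideal, which makes the remainder unique.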

Prime and primary ideal
There are two fundamental types of ideal: the prime ideal and the primary ideal. An ideal I is prime if f·g ∈ I implies f ∈ I or g ∈ I; it is primary if f·g ∈ I implies f ∈ I or g^n ∈ I for some n. For an affine algebraic set, these two properties are equivalent:

• The ideal I is a prime ideal.
• The affine algebraic set V(I) is irreducible.
Example 4.17 For a non-prime ideal, for instance I = (x·y), the affine algebraic set V(I) is the union of two components, V((x)) and V((y)), and hence is not irreducible.

Dimension and Krull dimension
Let X be a topological space. The dimension of X is the maximum of the lengths of the chains of irreducible closed subsets of X, where a chain is a relation of strict inclusions X_0 ⊂ X_1 ⊂ ... ⊂ X_n. We have seen that for a prime ideal P, the affine algebraic set V(P) is an irreducible closed subset. Hence we define a kind of dimension related to prime ideals: the Krull dimension of a ring is the maximum length n of a chain of prime ideals p_0 ⊂ p_1 ⊂ ... ⊂ p_n. Recall that for two prime ideals such that p ⊂ q, we have V(q) ⊂ V(p); the inclusion is reversed.

Example 4.22 We have dim k[x_1, x_2, ..., x_n] = n, because the maximal chain of prime ideals is given by (0) ⊂ (x_1) ⊂ (x_1, x_2) ⊂ ... ⊂ (x_1, ..., x_n).

One often refers to the Krull dimension of the residue class ring R/I as "the dimension of the ideal I". The smallest prime ideal p_0 is I itself (when I is prime) or a minimal prime ideal containing I. In a statement such as dim S/(f) = dim S - 1 for a polynomial ring S, we assume that f is not a zero divisor, i.e. there is no element g ∈ S such that g·f = 0, and that it is not invertible, namely, there is no element g ∈ S such that g·f = 1. These examples seem to be trivial but demand a certain amount of technical proof to show the validity of the statements. (See the argument in [5].)

Zero-dimensional ideal
As we have seen, an ideal I in a ring R[x_1, x_2, ..., x_n] is defined by a set of polynomials. The points (x_1, x_2, ..., x_n) which are the common zeros of the generating polynomials determine the affine algebraic set V(I). If the affine algebraic set V(I) is a discrete set of points, the ideal I is said to be "zero-dimensional". If we have to solve a set of polynomial equations, we have to work with a zero-dimensional ideal, where we shall find the solution as a discrete set of points. It is fairly difficult to find the zero-set in general. Hence it is important to foresee whether an ideal is zero-dimensional or not. The criterion for an ideal to be zero-dimensional is given by Gröbner basis theory, which we shall see later.

Nullstellensatz
Assume that k is an algebraically closed field. Then Hilbert's Nullstellensatz states that, for an ideal J ⊂ k[x_1, ..., x_n], the ideal of the affine algebraic set V(J) is the radical: I(V(J)) = √J. In particular, V(J) is empty if and only if J contains 1.

Spectrum of ring and Zariski topology
As the set of polynomials define the geometric objects, we can adopt a view of geometry. In this section, we see a bit of it.
We say that a variety is irreducible when it is not empty and not the union of two proper sub-varieties; namely, X = X_1 ∪ X_2 implies X = X_1 or X = X_2. For a prime ideal P in a ring S, V(P) is irreducible.
For a ring A, we define the ring spectrum Spec(A) as the set of all prime ideals of A. The Zariski topology of a variety X is defined by taking the sub-varieties Y ⊂ X to be the closed sets, whereby the union and the intersection of a family of varieties are also closed sets. For an algebraically closed field K, these two statements are valid.
(i) Any decreasing chain V 1 ⊃V 2 ⊃... of varieties in K n eventually terminates.
(ii) Hence any non-empty family of varieties in K^n has a minimal element.
A variety which satisfies the descending chain condition for closed subsets is said to be Noetherian. Observe that the chain for Noetherian varieties is descending, while the chain for ideals is ascending in the definition of a Noetherian ring.
We define another type of topology on Spec(A): the Zariski topology of Spec(A), in which the closed sets are of the form V(I) = {p ∈ Spec(A) : p ⊃ I} for ideals I ⊂ A. The two types of Zariski topology, for varieties and for Spec(A), have similar properties. The comparison is given in Chapter 5 of the book of Reid [4].

Unique factorization domain
The ring of integers and the polynomial rings over a field are UFDs (unique factorization domains); hence any element of these rings has a unique factorization into irreducible factors. However, there are many rings which are not unique factorization domains. A standard example is Z[√-5], which is not a UFD, since it permits two different factorizations of one element: 6 = 2 · 3 = (1 + √-5)(1 - √-5).

Completion: formal description of Taylor expansion
In the computation of molecular orbital theory through computer algebra, as presented in the introduction, we approximate the energy functional by polynomials. We simply replace the transcendental functions in the energy functional (such as exp(Ax) or erf(Bx)) with the corresponding formal power series, and we truncate these series at a certain degree. For this purpose, we execute the Taylor expansion of the variable at a certain center point of the atomic distance. If we increase the maximum degree of the Taylor expansion toward infinity, the computation converges to that of the exact analytic energy functional. Such a circumstance can be represented in the formal language of mathematics.
We need several definitions in order to present the formal description of Taylor expansion.
For a ring S, a "filtration" by the powers of a proper ideal I is written as follows: S ⊃ I ⊃ I^2 ⊃ I^3 ⊃ .... The sequence is given by the inclusion relation. The filtration determines the Krull topology, or I-adic topology. An "inverse system" is a set of algebraic objects (A_j)_{j ∈ J}, directed by the index set J with an order ≤. In the inverse system, we have a family of maps (homomorphisms) f_{ij} : A_j → A_i for all i ≤ j, from the larger index j to the smaller i. The maps satisfy f_{ii} = id and f_{ik} = f_{ij} ∘ f_{jk} for i ≤ j ≤ k. As a concrete case, let F_i S be the ideal in S generated by the monomials of degree i. The inclusion is given in the sense of ideals, so that the ideals generated by monomials of lower degrees contain those generated by monomials of higher degrees. We set A_i = S / F_i S, the quotient rings. Hence the entries of A_i are finite polynomials, in which the monomials in F_i S are nullified, and A_i is represented by the set of polynomials up to degree i - 1. The operation of the map from S / F_j S to S / F_i S (for i ≤ j) is to drop the monomials of higher degrees and to shorten polynomials. In the case of the polynomial approximation by means of Taylor expansion, this map is the projection from finer to coarser approximations.
The inverse system can be glued together into a mathematical construction which is called the "inverse limit" (or "projective limit"), denoted A = lim← A_j. The inverse limit A has the natural projections π_i : A → A_i, compatible with the maps of the inverse system. The inverse limit is a "completion" of the ring S when it is taken for the quotient rings in the following way: Ŝ = lim← S / I^i. For example, the ring of formal power series k[[x]] = lim← k[x]/(x^i) represents the inverse limit of "glued" Taylor expansions. In the algebraic formulation of molecular orbital theory, by means of the inverse limit, we can bundle the different levels of polynomial approximation with respect to the maximum polynomial degrees. Hence the natural map means the model selection.
Such a mathematical formality might appear only to complicate the matter in practice, but it is important in that it introduces a neat "topology" in theory. The topology is constructed as follows: an object (i.e. a polynomial) x in the ring S has nested (or concentric) neighborhoods; the open basis of the neighborhoods is generated by the powers of a proper ideal I ⊂ S and represented as x + I^n for x ∈ S. We say "open basis" in the sense of topology. If the reader is unfamiliar with topology, simply imagine that the polynomials around a polynomial x are sieved into different classes, which are represented by the above form. The powers of the ideal I serve as the indicator of the distance between polynomials. In the terminology of topology, the completion makes a "complete" topological space. If we consider the inverse system of Taylor expansions at a point X, the formal neighborhoods of X should be small enough that the Taylor series are convergent.

Example 4.31 Consider the maps from k[x]/(x^j) to k[x]/(x^i) for i ≤ j, and the corresponding inverse system. The ring k[x] has the open basis of neighborhoods of 0 given by the sets of the form (x^i); the extension to the multivariate case k[x_1, ..., x_n] is similar. We may interpret the neighborhoods of 0 in a slightly different view. Any polynomial has an image in each of the quotient rings k[x]/(x^i). Hence there are different classes of polynomials around 0, which are represented by a nilpotent ε, such as {a·ε : ε^2 = 0}, {a·ε + b·ε^2 : ε^3 = 0}, and so on. In other words, the choice of the neighborhoods of 0 is the choice of the tolerable threshold, above which the monomials are admittedly zero.

Localization
Let R = k[x_1, ..., x_n] be a ring of polynomial functions. We can consider the rational function (or fraction) f/g. This sort of fraction is determined locally, on a subset X of the affine space such that g(X) ≠ 0. For a fixed point a = (a_1, ..., a_n) ∈ X, we define the subset S of R by S = {s ∈ R : s(a) ≠ 0}. Then, for a pair (f, s) ∈ R × S, the fraction f/s is well-defined, although it is a "local function". The set of fractions must be closed under multiplication and addition. Moreover, an equivalence relation between two pairs in R × S is required. In commutative algebra, "localization" is defined in a more general way.

Definition 4.9 Let R be a ring.
For R and its multiplicatively closed subset S, the equivalence relation between two elements (a, s) and (a', s') of R × S is given by: (a, s) ~ (a', s') if and only if u(a s' - a' s) = 0 for some u ∈ S. The set of equivalence classes, denoted S^{-1}R, is the localization of R at the multiplicatively closed subset S. It is a ring with the addition and multiplication

a/s + a'/s' = (a s' + a' s)/(s s'),   (a/s) · (a'/s') = (a a')/(s s').
Example 4.32 Let P be a prime ideal in a ring R. As the set S = R \ P is multiplicatively closed, we can localize R by S. The result is usually denoted by R_P and called the localization of R at the prime ideal P. This localization has exactly one maximal ideal, P R_P. In particular, the localization of a ring of polynomial functions at the maximal ideal of a point a consists of the fractions well-defined around a.

Example 4.33 Let R be a commutative ring and let f be an element of R; we can localize R at the multiplicatively closed set S = {1, f, f^2, ...}. If f is nilpotent, then 0 ∈ S, and every fraction collapses: y/1 and 0/1 become equivalent as elements of S^{-1}R.

An element s ∈ S of a ring extension R ⊂ S is integral over R when it is a root of a monic polynomial equation s^n + r_{n-1} s^{n-1} + ... + r_0 = 0 with coefficients r_i ∈ R. If every element in S is integral over R, then S is integral over R. We say that S is the integral extension of R.
Then these statements hold: a succession of integral extensions is again an integral extension. If S is integral over R and T is integral over S, then T is integral over R.
For instance, the extension k[x] ⊂ k[x, y]/(x·y - 1) is not an integral extension: y = 1/x satisfies no monic polynomial equation over k[x]. From the geometrical viewpoint, the correspondence given by this map (by inverting the direction) is the projection of the hyperbola x·y = 1 onto the x-axis. An ideal I representing a simple secular equation may likewise fail to give an integral extension in one basis, while it can be represented by another basis which does. This is an example of the resolution of singularity. This sort of procedure lies within the broader concept of "normalization". If S is an integral domain, its normalization S̄ is the integral closure of S in the quotient field (the field of fractions) of S.

Example 4.39
One can prove that every UFD is normal.
Definition 4.12 (Normalization) Let R be a commutative ring with identity. The normalization of R is the set of elements in the field of fractions of R which satisfy a monic polynomial equation with coefficients in R.

Example 4.40
Observe that, in the above example, t is in the normalization, and the equation t^2 - x - 1 = 0 is a monic polynomial equation which guarantees that t is integral over S. In fact, in order to prove that this resolution of singularity is literally the normalization, it is necessary to carry out the argument in detail, so we omit it now. Similarly, for the curve parametrized by x = t^3 and y = t^2, the coordinate ring is not normal.

Example 4.42
In the above example of the resolution of singularity, t is contained in the normalization of the coordinate ring, whereby the target of π (the product of quotient rings) is the normalization.
As we shall see in section 6, the primary ideal decomposition is a substitute for the solution of the eigenvalue problem. It is not so surprising that we meet the decomposition of an ideal again in the desingularization of an algebraic variety. The secular equation is given as a set of polynomial equations f_i = Σ_j (H_{ij} - ε S_{ij}) ψ_j = 0. Then we have to find ε which gives a non-zero solution ψ of the matrix equation (H - ε S) ψ = 0. Observe that the matrix on the left-hand side is the Jacobian matrix (∂f_i / ∂ψ_j). The singular locus of the affine algebraic set defined by {f_i} is determined by the rank condition of the Jacobian matrix, namely by the condition that the Jacobian matrix is not of full rank. The desingularization is achieved by normalization, and the latter results in the decomposition of an ideal, by the algorithm of Decker.
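The rank condition on the Jacobian can be checked mechanically. A small sympy sketch with the cuspidal curve y^2 - x^3 (our own illustrative example, not the secular equation of the text): the singular locus consists of the points where the defining polynomial and all entries of its Jacobian vanish.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = y**2 - x**3                      # the cuspidal curve V(f)
# For a hypersurface, "Jacobian not of full rank" means the gradient
# vanishes; the singular locus solves f = df/dx = df/dy = 0.
jac = [sp.diff(f, v) for v in (x, y)]
sing = sp.solve([f] + jac, [x, y], dict=True)
print(sing)   # [{x: 0, y: 0}] -- the cusp at the origin
```

For a system of several polynomials, one would instead impose the vanishing of the appropriate minors of the Jacobian matrix.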
It is not so difficult to trace the algorithm of normalization given in [10,11].
We need some definitions. Let A = R/I for a polynomial ring R and an ideal I, and let the Jacobian ideal be generated by the appropriate minors of the Jacobian matrix of the generators of I. Then the singular locus of A is given by the vanishing of I together with the Jacobian ideal. We execute the computation by following these steps.

• Compute the singular locus of A. Let J be a radical ideal such that V(J) contains this singular locus.
• Choose a non-zero divisor g ∈ J and compute gJ : J. The homomorphisms from J to J, denoted Hom(J, J), can be represented as (1/g)(gJ : J), into which A embeds.
• There are also linear (and quadratic) relations between the generators of gJ : J, which give a presentation of the enlarged ring A' ⊇ A; the procedure is repeated for A' until the ring becomes normal.

This ring A' is normal. Indeed, after some algebra (in fact, by means of computer algebra), we can check that the singular locus of V(A') is an empty set. (We check the emptiness by the computation of a Gröbner basis, which should be generated by (1).) For this reason, for a radical ideal J which contains the singular locus, we have Hom(J, J) = A', and the criterion for normality is satisfied.
The ideal I is represented in two ways by the substitution for T, yielding ideals I_a and I_b in the variables x, y, and so on. As I_a is trivial, we adopt I_b for the purpose of normalization. In addition, when we introduce the variable T, we implicitly assume that x ≠ 0. When x = 0, the ideal I must be treated separately. Then the normalization of A is given by two rings. Compare this result to the example of primary ideal decomposition with the same ideal I, which we will compute in section 6. We observe that the normalization has done the decomposition of the ideal, albeit imperfectly.

Hensel's lemma
Consider the problem of solving a polynomial equation f(x) = 0 modulo the successive powers 3, 3^2, 3^3, ...: the sequence of approximate roots gives the 3-adic solution of the equation. Observe the similarity with the Newton method in numerical analysis: the p-adic solution is constructed by the iterative improvement of an approximate root. This is Hensel's lemma, stated as follows.
In the lemma, there is a condition f′(a) ≢ 0 (mod p). From the above example, it might seem that we should require f′(a_n) ≢ 0 (mod p) for the changing approximants a_n. In fact, by the construction, f′(a_n) ≡ f′(a) (mod p); hence we assume that condition only for the starting point a.
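Hensel lifting can be carried out with elementary modular arithmetic. A minimal sketch (the polynomial x^2 - 2 and the prime 7 are our illustrative choices); note that the condition f′(a) ≢ 0 (mod p) is checked only once, at the starting approximation:

```python
def hensel_lift(f, df, a, p, k):
    """Lift a root of f modulo p to a root modulo p**k, one power at a time."""
    assert f(a) % p == 0 and df(a) % p != 0   # Hensel's condition at the start
    pk = p
    for _ in range(k - 1):
        pk *= p
        # Newton-like step: a <- a - f(a)/f'(a) (mod pk); f'(a) stays a unit.
        a = (a - f(a) * pow(df(a), -1, pk)) % pk
    return a

# 3 is a root of x^2 - 2 modulo 7 (since 9 = 2 mod 7); lift it to modulo 7^5.
root = hensel_lift(lambda v: v**2 - 2, lambda v: 2*v, 3, 7, 5)
print((root**2 - 2) % 7**5)   # 0: a 7-adic square root of 2, to five digits
```

Each pass through the loop improves the root by one 7-adic digit, exactly as in the iteration described above.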
In commutative algebra there is a related idea, called "linear Hensel lifting", given by the following theorem.
The computation proceeds modulo the successive powers of the ideal, and we obtain the factors (g_l, h_l) in an iterative way. As usual, we simply write the roots of f(x) = 0.

Real algebraic geometry
The computation of the hydrogen molecule, presented in the introduction, is done over the real numbers, in the algebraic set defined by the polynomial equations (A). On the other hand, we can add extra constraints in the form of polynomial inequalities (B). Hence it is important to study the existence of solutions of (A) under (B).
We now review several results from real algebraic geometry. (As for the rigorous theory, see the book by Bochnak et al. [12] or the book by Lasserre [13].) These two statements are equivalent for an affine algebraic variety X: (i) X is defined by polynomial equations with real coefficients; (ii) X = X̄ (where the bar denotes complex conjugation). Then we define the real affine algebraic variety and the real ideal.

Definition 4.13
The set of real points V(I) ∩ R^n is called a real affine algebraic variety.
Then it holds that I(V_R(J)) = √J; that is to say, for a real ideal J, the radical ideal √J (by the standard definition of commutative algebra) is equal to the ideal of the real points of V(J). A related notion is the "quadratic module" generated by polynomials g_1, ..., g_m: the set of polynomials of the form σ_0 + σ_1 g_1 + ... + σ_m g_m, where the σ_i are sums of squares. From this theorem, one can derive the following theorem, also.
Namely, the solvability of a system of polynomial equations and inequalities can be certified by an identity among quantities D, f, g, h constructed from sums of squares and the quadratic module; the original problem is equivalent to the existence of such a representation.

Noether normalization
Let R ⊂ S be a ring extension. Remember how an element s ∈ S can be integral over R: the equation s^n + r_{n-1} s^{n-1} + ... + r_0 = 0, with r_i ∈ R, is called an integral equation for s over R. If every element s ∈ S is integral over R, we say that S is integral over R; we also say that R ⊂ S is an integral extension. In the proof of the Noether normalization lemma, one makes a change of variables x_i → x_i + x_1^{c_i} with suitable exponents c_i; after the change of variables, the term of the highest degree with respect to x_1 is given by a non-zero constant multiple of a pure power of x_1.

Differential Galois theory
One of the important ideas connecting algebraic geometry, differential algebra, and quantum mechanics is differential Galois theory. There arises a question: under what circumstances can one express the solution of a differential equation using exponentials and integrations, and by adjoining algebraic elements? We proceed step by step, adding more elements which are constructed over the elements already present in the computation. An analytic solution given in this way is called a Liouvillian solution, although it is not always possible to construct one. The solvability condition is given by differential Galois theory [14][15][16].
As an application to the eigenvalue problem in quantum mechanics, we can consider the one-dimensional Schrödinger equation (-d^2/dx^2 + P_{2n}(x)) ψ = E ψ with an even-degree monic polynomial potential P_{2n}(x). In the last representation of P_{2n}, the polynomial is given by completing squares. Let us write the solution in a Liouvillian form, as the product of a polynomial prefactor and an exponential factor.
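A minimal sanity check of the Liouvillian form, using the harmonic potential P_2(x) = x^2 (the simplest even-degree monic case; this choice is ours, not the text's): the ansatz ψ = exp(-x^2/2) solves -ψ″ + x^2 ψ = E ψ with E = 1.

```python
import sympy as sp

x = sp.symbols('x')
# Ground state of the harmonic potential P_2(x) = x^2: a Liouvillian
# solution (an exponential of an integral of a rational function).
psi = sp.exp(-x**2/2)
E = 1
residual = sp.simplify(-sp.diff(psi, x, 2) + x**2*psi - E*psi)
print(residual)   # 0
```

For higher-degree potentials the exponent becomes the integral of a higher-degree polynomial, and the existence of such a closed form is exactly what differential Galois theory decides.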

Other important concepts of commutative algebra
It is advisable that the readers consult textbooks and grasp the concepts behind more advanced technical terms, such as "module", "free module", "regular local ring", "valuation", "Artinian ring", "affine algebraic variety", and so on. The concepts of homological algebra may also be necessary, such as "functor", "exact sequence", "resolution", "projective", "injective", "Betti number", and so on. Indeed, these concepts appear frequently in the research articles.

Gröbner Basis Theory
Gröbner basis theory is the key technique of commutative algebra and algebraic geometry [17][18][19]. The idea was first proposed by Bruno Buchberger, and it is named after his advisor, Wolfgang Gröbner.

The monomial orders
In the one-variable case, we implicitly use a "monomial order" through the ordering by degree: 1 < x < x^2 < x^3 < .... In the multivariate case, we encounter monomials of the form x^i y^j z^k. What should be the appropriate ordering of the monomials? In fact, there are several ways to set the order.
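sympy (one possible tool) lets one compute the Gröbner basis of the same ideal under different monomial orders; the bases differ, although they generate the same ideal. The two generators below are made-up:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
F = [x**2 + y, x*y + z]              # made-up generators

# The Groebner basis depends on the monomial order ...
G_lex = sp.groebner(F, x, y, z, order='lex')
G_grevlex = sp.groebner(F, x, y, z, order='grevlex')
print(list(G_lex))
print(list(G_grevlex))

# ... but both describe the same ideal: each basis contains the other.
same = all(G_lex.contains(g) for g in G_grevlex.exprs) and \
       all(G_grevlex.contains(g) for g in G_lex.exprs)
print(same)   # True
```

The choice of order thus changes the presentation of the ideal, not the ideal itself, which is why one is free to pick the order that is cheapest to compute with.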
The initial ideal in_<(I) is the monomial ideal generated by the initial terms of the polynomials in I: in_<(I) = ( in_<(f) : f ∈ I ). This ideal is a useful tool in the theory of Gröbner bases.

Definition of Gröbner basis
The Gröbner basis is defined now. We can represent a polynomial f by means of a sequence of polynomials u_1, u_2, ..., u_m ∈ I in the following way (the standard expression): f = h_1 u_1 + h_2 u_2 + ... + h_m u_m + r, where no monomial of the remainder r is divisible by any of the initial terms in_<(u_i). A set G = {u_1, ..., u_m} ⊂ I is a Gröbner basis of I when the initial terms of its elements generate the initial ideal: in_<(I) = (in_<(u_1), ..., in_<(u_m)).
One can prove that G is a Gröbner basis if and only if every s-polynomial of a pair of its elements reduces to zero with respect to G (Buchberger's criterion). The computational steps of Buchberger's algorithm are as follows.
Step-0 Let G be the generating set of ideal I .
(A note on the orders: the difference between the lexicographic order and the reverse lexicographic order is apparent; it is simply a reversal. The difference between the lexicographic order and the degree reverse lexicographic order is subtle; it may be grasped through these phrases of Ene and Herzog [9]: ...in the lexicographic order, u > v if and only if u has "more from the beginning" than v; in the degree reverse lexicographic order, u > v if and only if u has "less from the end" than v...)

Step-1 For every pair of polynomials p, q in the generating set G, compute spoly(p, q) and its remainder r with respect to G.

Step-2 If all spoly(p, q) reduce to zero with respect to G, we have already obtained a Gröbner basis. If there is a non-zero remainder r ≠ 0, add r to G and repeat from Step-1.

The computation terminates after finite steps.
Let us see examples. For a set of generators Ĝ, we compute the s-polynomials of the pairs and find that they reduce to zero with respect to Ĝ. Thus we can choose Ĝ as the Gröbner basis, and indeed this ideal satisfies the required property.
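A concrete run of Buchberger's procedure, with a made-up ideal (x^2 - y, x·y - 1) under the lexicographic order (sympy assumed):

```python
import sympy as sp

x, y = sp.symbols('x y')
f1, f2 = x**2 - y, x*y - 1           # made-up generators, lex order x > y

# The initial terms are x^2 and x*y; their lcm is x^2*y, so
#   spoly(f1, f2) = y*f1 - x*f2 = x - y^2,
# which does NOT reduce to zero modulo {f1, f2}: Buchberger adds it.
s = sp.expand(y*f1 - x*f2)
print(s)                              # x - y**2

# Iterating the procedure until all s-polynomials reduce to zero
# yields the (reduced) Groebner basis:
G = sp.groebner([f1, f2], x, y, order='lex')
print(list(G))                        # the basis {x - y**2, y**3 - 1}
```

The new element x - y^2, once adjoined, dominates the original generators, and the loop terminates with a triangular basis.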

Example 5.4 Consider an ideal whose Gröbner basis contains a non-zero constant; such a basis never vanishes, and the corresponding set of equations has no solution.

In the above algorithm, the generated Gröbner basis is not necessarily properly "reduced", in the terminology of several contexts. A reduced Gröbner basis (g_1, ..., g_n) should have the following properties [23]:

• The leading coefficient of each g_i is 1.
• For all i ≠ j, the monomials of g_j are not divisible by in_<(g_i).

Gröbner basis of zero-dimensional ideal
The following statements are equivalent for an ideal I in S = K[x_1, ..., x_n] (we quote only the items used below; the full list of six is in the references):

3. For each i (1 ≤ i ≤ n), the Gröbner basis of I contains a polynomial whose initial term is a pure power x_i^{ν_i}.

5. The set of monomials outside the initial ideal in_<(I) (the "standard monomials") is finite.

6. S/I is a K-vector space of finite dimension.
Statement 3) enables us to detect a zero-dimensional ideal from its Gröbner basis: the latter must contain, for each 1 ≤ i ≤ n, a polynomial whose initial term is of the form x_i^{ν_i}.
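Statement 3) can be checked directly: in the lex Gröbner basis of the made-up zero-dimensional system below, a pure power of every variable occurs among the initial terms, and the solution set is accordingly finite (sympy assumed):

```python
import sympy as sp

x, y = sp.symbols('x y')
F = [x**2 + y**2 - 5, x - y - 1]     # made-up zero-dimensional system

G = sp.groebner(F, x, y, order='lex')
lead = [sp.LM(g, x, y, order='lex') for g in G.exprs]
print(lead)            # [x, y**2]: pure powers of x and of y occur
sols = sp.solve(F, [x, y], dict=True)
print(len(sols))       # 2 solutions -- a finite (discrete) set
```

Had some variable been missing from the list of pure powers, the variety would contain a positive-dimensional component and the equations could not be solved as a discrete point set.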
The features of zero-dimensional ideals given by statements 5) and 6) will be useful for solving polynomial equations by Stickelberger's theorem, as is explained in section 5.12.

A syzygy is a module, in other words, a kind of vector space generated by vectors whose entries are polynomials. We can compute the second syzygy in the vector generators of the first syzygy and, likewise, the higher syzygies, too. The successive computation of higher syzygies enables us to construct the "resolution" of a module M over a Noetherian ring; the computation terminates after a certain number of steps, so that we do not find any non-trivial syzygy in the last step [23].

Church-Rosser property
The reduction is a binary relation from one object to another, like an arrow going in one direction. We often call it a rewriting process.

Definition 5.5 A binary relation (denoted by the symbol →) has the Church-Rosser property if, whenever P and Q are connected by a path of arrows, P and Q have a common destination R under the relation.

Hence we can define Gröbner bases in another way: a basis is a Gröbner basis if and only if the reduction with respect to the basis has the Church-Rosser property, as is illustrated in Figure 3.
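The Church-Rosser property can be observed concretely: dividing by the elements of a Gröbner basis in two different orders must give the same normal form. A sketch in SymPy (our own toy ideal):

```python
from sympy import symbols, groebner, reduced, expand

x, y = symbols('x y')
G = groebner([x**2 - y, y**2 - x], x, y, order='lex')

f = expand(x**3*y + x*y**2 + x + y)

# Reduce f using the basis polynomials in two different orders.
_, r1 = reduced(f, list(G.exprs), x, y, order='lex')
_, r2 = reduced(f, list(reversed(G.exprs)), x, y, order='lex')
print(r1 == r2)  # True: with a Groebner basis the normal form is unique
```

For an arbitrary (non-Gröbner) generating set, the two remainders would in general differ; this confluence is exactly the Church-Rosser property.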

Complexity of the Gröbner basis algorithm
The original algorithm by Buchberger, as presented here, is not very efficient. It produces a lot of useless pairs of polynomials (p, q), since such pairs give rise to s-polynomials spoly(p, q) which immediately reduce to zero or produce redundant polynomials. There are many studies on the upper bound of the complexity of Gröbner bases, such as the Hermann bound [24], the Dubé bound [25], Wiesinger's theorem [26], and so on. Those bounds are determined by several factors: (1) the number of variables, (2) the number of polynomials in the ideal, (3) the number of possible s-polynomials, (4) the maximum degrees of the polynomials, etc. In the worst case, the complexity is doubly exponential in the number of variables [25,27-30]. However, the computation of the Gröbner basis is equivalent to the Gaussian elimination of a large matrix (the row reduction of the Macaulay matrix) [17,29,31,32]. For the case of homogeneous polynomial ideals, the complexity of the Gaussian elimination method is bounded by an expression in ω (the exponent of matrix multiplication), m (the number of polynomials), n (the dimension of the ring), and D (the maximum degree of the polynomials) [33]. In order to improve efficiency, one can employ more refined methods, such as Faugère's F4 and F5. Indeed the efficiency of F5 [34,35] outperforms the row reduction of the Macaulay matrix [33].
In fact, the efficiency of the algorithm is highly dependent on the chosen monomial order. The lexicographic order is convenient for theoretical study, but it consumes a considerable quantity of computational resources. Thus one often computes the Gröbner basis in another monomial order (say, the degree reverse lexicographic order) in order to facilitate the computation, and then converts the computed result into the lexicographic order by means of the FGLM (Faugère, Gianni, Lazard, Mora) algorithm [36].
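A minimal sketch of this two-step strategy with SymPy, whose `GroebnerBasis.fglm()` implements the conversion for zero-dimensional ideals (the system F is an illustrative example, not one from the paper):

```python
from sympy import symbols, groebner

x, y = symbols('x y')
F = [x**2 - 3*y - x + 1, y**2 - 2*x + y - 1]

# Compute in the cheap degree-reverse-lexicographic order first ...
G_grevlex = groebner(F, x, y, order='grevlex')
# ... then convert to the expensive lexicographic order with FGLM.
G_lex = G_grevlex.fglm('lex')
print(G_lex.exprs)
```

The converted basis coincides with the one obtained by running the lexicographic computation directly, but the grevlex-then-FGLM route is usually much cheaper on larger systems.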
There is another problem in Gröbner basis generation which becomes apparent in practice. Buchberger's algorithm applies the operations of addition, subtraction, multiplication, and division to the polynomial system. This often causes a great discrepancy in the degrees of the generated polynomials and in the numerical scale of the coefficients of the final result. In the computation presented in the introduction of this article, one can observe such a tendency. Thus, for practical purposes, one must utilize several tricks to keep the polynomials as "slim" as possible [37-39].

Gröbner basis for modules
Consider a matrix whose entries are polynomials, for which one might ask for the linear dependence of its rows. To treat such questions, we define monomial orders on modules. In a similar way as in the polynomial case, we choose the leading terms of the elements of a module; we also compute the s-polynomials and the remainders in order to obtain the Gröbner basis. The slight difference is that the s-polynomial of two elements is formed only when their initial terms involve the same component of the module.

Application of Gröbner basis
The Gröbner basis is a convenient tool for actually computing various mathematical objects in commutative algebra. A typical application is the ideal membership test: a polynomial f lies in an ideal I if and only if the remainder of f with respect to a Gröbner basis of I is zero. Hence, if the ideal Î defined in example 5.12 has the Gröbner basis {1}, then f ∈ I.
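A hedged sketch of the membership test in SymPy (the generators and the polynomial f are our own examples):

```python
from sympy import symbols, groebner, reduced, expand

x, y = symbols('x y')
g1, g2 = x**2 + y, x*y - 1
G = groebner([g1, g2], x, y, order='lex')

# f is built as a combination of the generators, so f lies in the ideal.
f = expand(y*g1 + x*g2 + g1*g2)
_, r = reduced(f, list(G.exprs), x, y, order='lex')
print(r)  # 0: a zero remainder certifies membership

# If the Groebner basis is {1}, the ideal is the whole ring,
# and every polynomial trivially belongs to it.
print(groebner([x, x + 1], x, y).exprs)  # [1]
```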

Gröbner basis algorithm in a different light
Buchberger's algorithm is actually an elimination process in large matrices. It is the way of Faugère to do the row reduction in a large matrix. Let us see how it works.
The matrix on the right-hand side is the so-called Macaulay matrix of the second order. Its row reduction yields new polynomials, which we append to the basis G. We then compute the third-order Macaulay matrix and row-reduce it; likewise the fourth-order Macaulay matrix. As the resulting s-polynomials reduce to zero with respect to G, it is proved that G is a Gröbner basis.
In the above example, we process the computation carefully so that unnecessary polynomials are removed as soon as they appear; we also limit the computation to Macaulay matrices that are as small as possible, although these matrices might be embedded into a very large one. In effect, the row reduction is done numerically, not symbolically, in a very large but sparse matrix. The algorithms F4 and F5 by Faugère adopt these policies in a systematic way, so that these algorithms are among the most efficient methods to generate Gröbner bases.
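The Macaulay-matrix point of view can be sketched on a toy pair of polynomials (our own example; SymPy's `rref()` plays the role of the row reduction):

```python
from sympy import symbols, Matrix, Poly

x, y = symbols('x y')
f1, f2 = x**2 - y, x*y - 1

# All monomials of total degree <= 3, listed in descending lex order.
monos = sorted(((i, j) for i in range(4) for j in range(4) if i + j <= 3),
               reverse=True)

# Rows of the third-order Macaulay matrix: each generator multiplied
# by the monomial shifts 1, x, y (keeping the total degree <= 3).
rows = []
for f in (f1, f2):
    for s in (1, x, y):
        p = Poly(s*f, x, y)
        rows.append([p.coeff_monomial(m) for m in monos])
M = Matrix(rows)

R, pivots = M.rref()   # Gaussian elimination on the Macaulay matrix
print(len(pivots))

# The reduced s-polynomial of f1 and f2, namely x - y**2, already lies
# in the row space of the Macaulay matrix:
v = Matrix([[Poly(x - y**2, x, y).coeff_monomial(m) for m in monos]])
print(M.col_join(v).rank() == M.rank())  # True
```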

Stickelberger's theorem
In [2], an alternative numerical method, instead of the Gröbner basis, is also used. This method is based on Stickelberger's theorem. In general, for a zero-dimensional ideal I ⊂ R = k[x_1, ..., x_n], R/I is a k-vector space of finite dimension; that is to say, the vector space is spanned by a set of monomials, and the product of two monomials is also represented by a linear combination of the monomial basis. Therefore, according to the assertion of the theorem, the multiplication by a monomial on the monomial basis can be regarded as the operation of a matrix over k, and the eigenvalues of the matrix give the numerical values of the corresponding monomial at the points of V(I). The transformations by y and e are given as follows.
The transformation matrices M_y (by y) and M_e (by e) are those on the right-hand side of the two equations. It is easily checked that there are four eigenvectors, common to both M_y and M_e, which lie in R/I; they give us the values of y and e at V(I). As for the value of x, we easily compute it from the polynomial relations of the ideal I.
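A small illustration of Stickelberger's theorem (the ideal (x² − 2, y² − 3) and the helper `mult_matrix` are our own toy example, not the system of [2]): we build the multiplication matrix on the monomial basis of R/I and read the coordinates of V(I) from its eigenvalues.

```python
import numpy as np
from sympy import symbols, groebner, reduced, Poly

x, y = symbols('x y')
G = groebner([x**2 - 2, y**2 - 3], x, y, order='lex')
basis = [1, x, y, x*y]  # standard monomials spanning R/I

def mult_matrix(m):
    """Matrix of multiplication by m acting on R/I in the monomial basis."""
    cols = []
    for b in basis:
        _, r = reduced(m*b, list(G.exprs), x, y, order='lex')
        p = Poly(r, x, y)
        cols.append([float(p.coeff_monomial(Poly(bm, x, y).monoms()[0]))
                     for bm in basis])
    return np.array(cols).T

Mx = mult_matrix(x)
# Stickelberger: the eigenvalues of M_x are the x-coordinates of V(I).
print(sorted(np.linalg.eigvals(Mx).real))  # approx [-1.414, -1.414, 1.414, 1.414]
```

Each point of V(I) here has x = ±√2, each value occurring twice (once for every choice y = ±√3), which is exactly what the eigenvalue multiplicities report.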

Algorithm of computing Krull dimension
As we have seen, a zero-dimensional ideal is detected immediately after the computation of its Gröbner basis. However, it takes a little more algebra to compute the non-zero dimension of an ideal [40].

Algorithm for the Decomposition of the ideal as eigenvalue solver
In the example of the hydrogen molecule in section 3, we have seen that the polynomial secular equation is brought into a triangular form, in which the relations among the variables are clarified, almost to the point of representing the roots themselves. The mathematical foundation of such computations is the primary ideal decomposition.

Primary ideal decomposition
There are two special sorts of ideals: prime ideals and primary ideals, as we have seen in the previous section. One can comprehend these sorts of ideals by analogy with elementary number theory. An integer is decomposed into a product of prime powers, n = p_1^{e_1} p_2^{e_2} ... p_k^{e_k}. An ideal in a commutative ring, likewise, can be decomposed as the intersection of primary ideals [3]: I = Q_1 ∩ Q_2 ∩ ... ∩ Q_k. One of the most elementary procedures for primary ideal decomposition is summarized as follows [41]. Let I be an ideal in a Noetherian ring R, and let r, s ∈ R be elements such that r ∉ I, s ∉ I, and rs ∈ I.
1. Find n such that I : r^n = I : r^∞.
2. Then I splits as I = I_1 ∩ I_2, where I_1 = I : r^n and I_2 = I + (r^n).
3. The ideals I_1 and I_2 are larger than I. The decomposition process must be repeated for each of them. By choosing some proper r, we can decompose I_1 and I_2, hence I, into primary ideals.

We should notice that the ideal J is the secular equation of the diatomic system. In fact, the above example is one of the simplest cases which we can compute manually. The general algorithm for ideal decomposition was first proposed by Eisenbud [42]; practical algorithms were developed by Gianni, Trager, and Zacharias [43], by Shimoyama and Yokoyama [44], by Möller [45], and by Lazard. (As for semi-algebraic sets, one can perform a similar decomposition, as is studied by Chen and Davenport [46].) A comparison of the algorithms is given in [10].
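Step 1 hinges on the saturation I : r^∞, which can be computed by a Gröbner basis with an auxiliary variable t (the Rabinowitsch-style trick of adjoining 1 − t·r and eliminating t); the ideal (x²y) with r = x below is our own minimal example:

```python
from sympy import symbols, groebner

x, y, t = symbols('x y t')

# I = (x**2 * y).  The saturation I : x^oo is computed by
# adjoining 1 - t*x and eliminating t (lex order with t first).
G = groebner([x**2 * y, 1 - t*x], t, x, y, order='lex')
sat = [g for g in G.exprs if not g.has(t)]
print(sat)  # [y]  -- so I : x^oo = (y)
```

Indeed t²·(x²y) ≡ y modulo 1 − t·x, so y survives the elimination; the saturation strips off the x²-component of the ideal.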

Algorithm of Gianni, Trager, and Zacharias
In this section, we review the primary ideal decomposition algorithm of Gianni, Trager, and Zacharias (the GTZ algorithm) [18,40]. In the algorithm which we have seen in the previous section, we have to search for a "key polynomial" by trial and error in order to decompose an ideal. In contrast, the GTZ algorithm enables us to find such a key in a more rational way; in the zero-dimensional case, the algorithm can be applied directly.

Let us review the triangular decomposition algorithm by Möller [45]. The algorithm is based upon several lemmas.
If B is an ideal such that B ⊂ A and if m ∈ ℕ is sufficiently large, the affine algebraic set V(B) decomposes accordingly. (The definition of V(B) in this theorem, given in [45], is slightly different from the conventional one; in this section only, we use that definition.) The computation produces ideals in the smaller ring k[x_1, ..., x_{n-1}], and in this ring we have the set of ideals which are to be processed again.
Iteratively we apply the algorithm to the set of ideals and drop the variables one by one; the process terminates after finitely many steps.
Let us consider the problem of decomposing an ideal (f_1, f_2, f_3, f_4) as an intersection of ideals.

Simple Example of The Molecular Orbital Method by Means of Algebraic Geometry
Let us execute a molecular orbital computation by means of algebraic geometry. The example is a hexagonal molecule, like benzene, where an s-orbital is located at each atom and interacts only with its nearest neighbors, as is depicted in Figure 4.
The Gröbner basis is given in Table 2. The first entry of the list in Table 2 shows the relation between the energy e and the hopping integral T. Then the polynomials involving the other variables (c6, c5, ..., c1) appear in succession. This is an example of variable elimination.
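As a cross-check of the relation between e and T (using the characteristic polynomial rather than the Gröbner basis; the construction below is our own sketch), the secular determinant of the hexagonal ring factors into the well-known Hückel levels e = ±2T and the doubly degenerate e = ±T:

```python
from sympy import symbols, Matrix, expand

e, T = symbols('e T')

# Hueckel matrix of the hexagonal ring: hopping T between nearest
# neighbours on a six-membered cycle (sites 0 and 5 are adjacent).
H = Matrix(6, 6, lambda i, j: T if abs(i - j) in (1, 5) else 0)

p = H.charpoly(e).as_expr()
target = (e - 2*T)*(e + 2*T)*(e - T)**2*(e + T)**2
print(expand(p - target))  # 0: the secular determinant factors completely
```

The single polynomial in e and T produced by the elimination order must vanish exactly on these six energy levels.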

Outlook for Related Topics
We have seen how computational algebraic geometry can be applied to molecular orbital theory, insofar as the equations can be represented by polynomials. In this section, we review related topics on symbolic computation by means of polynomials.

Polynomial optimization
Polynomial optimization is a numerical method which solves optimization problems given by polynomial equations and inequalities [47-51]. The problem is stated as follows: maximize a multivariate polynomial cost function g(x_1, x_2, x_3) subject to polynomial constraints. In other words, the cost function g is to be maximized in the semi-algebraic set defined by f_1, f_2, and f_3, as in Figure 5.
The key idea is to replace the monomials x_1^i x_2^j with variables y_ij. The variables should satisfy relations such as y_{p,q} y_{r,s} = y_{p+r,q+s} and y_{0,0} = 1. Let us define the first-order moment matrix in the following style. By means of the formalism of polynomial optimization, the problem is restated as follows: the constraint on the moment matrix comes from the semi-positive-definiteness of the quadratic form. Hereafter we denote the semi-positive definiteness of a matrix M as M ⪰ 0.
The optimization problem is solved by the standard methods of semidefinite programming. We can formulate the dual problem as well. In general, it is not guaranteed that the use of the first-order moment matrix suffices to give the correct answer; indeed, the exact formulation would require matrices of infinite dimension, which include the moments of all higher orders. In addition, in the above formulation, there is no constraint enforcing the condition y_{p,q} y_{r,s} = y_{p+r,q+s}. We only expect that the moment matrix is reduced to one of a lower, suitable rank at the optimum. Actually, the above procedure is an approximation, and the accuracy is improved by the use of larger matrices. In general, a global polynomial optimization problem is stated with constraint polynomials g_i(x); assume that the degree of g_i(x) is 2d_i - 1 or 2d_i. In correspondence to the above global polynomial problem, the higher-order moment matrices are computed likewise.
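A minimal numerical sketch of the first-order moment matrix (the function `moment_matrix_1` is our own illustration): for the moments of a Dirac measure at a feasible point, the matrix is positive semidefinite and has rank one, which is exactly the rank condition expected at the optimum.

```python
import numpy as np

def moment_matrix_1(point):
    """First-order moment matrix M_1(y) of the Dirac measure at (a, b):
    y_ij is the moment of x1**i * x2**j, indexed by monomials 1, x1, x2."""
    a, b = point
    y = lambda i, j: a**i * b**j
    return np.array([[y(0, 0), y(1, 0), y(0, 1)],
                     [y(1, 0), y(2, 0), y(1, 1)],
                     [y(0, 1), y(1, 1), y(0, 2)]])

M = moment_matrix_1((0.5, -1.0))
print(np.linalg.eigvalsh(M))     # all eigenvalues nonnegative: M is PSD
print(np.linalg.matrix_rank(M))  # 1: a point measure gives a rank-one matrix
```

The semidefinite relaxation keeps only the linear structure (symmetry and M ⪰ 0) and drops the nonlinear rank-one condition; this is the source of the approximation discussed above.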
The localizing matrices are computed for the polynomials 1 f , 2 f and 3 f in the example problem.
Observe that these localizing matrices include the entries of the second-order moment matrix and do not contain superfluous ones. We can optimize the given problem concisely with the use of the localizing matrices M(f_i y).
As is demonstrated in [2], once the molecular orbital theory has been built over the polynomial system on the ground of algebraic geometry and commutative algebra, it enables us to join the equations of conventional quantum mechanics and the extra constraints into one set of polynomial equations. We solve the problem through the collaboration of numerical and symbolic methods. The computational scheme is basically to determine the affine algebraic set described by the set of equations. However, it is somewhat inconvenient that we may use only equations for the general optimization problem. It seems that polynomial optimization could overcome such limitations because it can deal with constraints given by inequalities in a semi-algebraic set. The mathematical ground is "real algebraic geometry", which covers a wide range of interests.

Quantifier elimination
When we deal with equations and inequalities of polynomials, we often ask ourselves about the existence of solutions. For example, under what condition does a quadratic equation have roots? "Quantifier elimination" (QE) is the computational process that answers this sort of question: when the question is given as ∃x ∈ ℝ (x² + bx + c = 0), the computer eliminates the quantifier (∃) and returns the simplified form b² − 4c ≥ 0 as the answer. In fact, there are general theories for executing this sort of simplification, proposed by several mathematicians, such as Fourier, Motzkin, and Tarski [52,53], or Presburger arithmetic. But those algorithms are not practical. Afterward, more practical algorithms by Collins and by others have come into use, called "Cylindrical Algebraic Decomposition" (CAD). The theoretical grounds of the standard algorithm of Cylindrical Algebraic Decomposition are given in [46,54-58]. There are several computer packages in which this algorithm is implemented, such as QEPCAD [59], Mathematica [60], Maple [61], and Reduce [62].
Since the CAD algorithm is evidently related to the theme of this article, let us review the computational procedure in this section. Let us consider a simpler example of quantifier elimination: ∃x (x² + ax + 1 = 0).
We can simplify the above prenex form into one without the quantifier, namely a² − 4 ≥ 0. The determinant of the matrix on the left-hand side is the discriminant of f(a, x). If it is zero, the matrix equation permits a non-zero vector solution. Hence we project the polynomial in one ring onto a polynomial in another ring of lower dimension (the projection step). We now analyse the projected polynomial (the discriminant) in order to inquire after the existence of the root. As the discriminant is a² − 4, the real line of a is decomposed into cells by its roots a = ±2. For the other cylinders, we construct the cells likewise, as is illustrated in Figure 6. These cells lie in the a-x plane. In the general multivariate case of CAD, the algorithm executes the multi-step projection (from n variables down to one variable) and lifting (from an axis to the whole space).
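This example can be replayed with SymPy's `discriminant()` (our own sketch; SymPy does not perform full CAD, but the one-variable projection step is visible):

```python
from sympy import symbols, discriminant, solveset, S

a, x = symbols('a x', real=True)

f = x**2 + a*x + 1
d = discriminant(f, x)   # projection step: eliminate x
print(d)  # a**2 - 4

# The quantifier-free condition for "exists real x with f(a, x) = 0":
print(solveset(d >= 0, a, S.Reals))
```

The solution set of the projected inequality is the union of the two unbounded cells a ≤ −2 and a ≥ 2, matching the cell decomposition described above.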
There are examples of QE with the taste of molecular orbital theory as well. We now list software packages for symbolic computation.

• CoCoA: Computer algebra system [69].
• GAP: Software for computational discrete algebra. The chief aim of the package is to execute computations in group theory, but it can also process polynomials [70]. As for the application of the GAP software in material physics, see the work of Kikuchi [71] and the article by Kikuchi and Kikuchi [72].
• Mathematica: Technical computing system for almost all fields of science and industry [60].
• Maple: Symbolic computation software for general purposes. The primary ideal decomposition is implemented in the "Regular Chains" package [61].
• SINGULAR: Computer algebra system [76]. It contains various libraries for solving problems in algebraic geometry. It also contains the extension package "Plural" for non-commutative algebra. One can learn how to compute with Singular from introductory textbooks [40,77].
• As for quantifier elimination, there are also available packages.
• QEPCAD: Free software which performs quantifier elimination by means of Partial Cylindrical Algebraic Decomposition [59].

• Maple
• There is a platform system which bundles several free software packages in mathematical science.
• SageMath: Open-Source Mathematical Software System. From this platform one can utilize a lot of software packages for both numerical and symbolic computation (GAP, Maxima, and Singular among them) [78].
• There are a lot of research centers for symbolic computation. Many computer algebra systems are the products of studies over long years at several universities. One can find software implementations of the latest research of INRIA in France:
• INRIA: l'institut national de recherche dédié aux sciences du numérique. Research programs at the interplay of algebra, geometry, and computer science are ongoing [79].

Summary
Dear readers, our voyage has now completed its planned course. What do you think about the connection between quantum mechanics and algebraic geometry? We have found it even in the familiar region of quantum chemistry. Algebraic geometry is not a language of another sphere; it is a tool to solve actual problems with quantitative precision. Is it interesting to you? Or have you judged coldly that it is a mere plaything?
In this article, we have first demonstrated a model case of the symbolic-numeric computation of molecular orbital theory with the taste of algebraic geometry. Then we have expounded the related mathematical concepts of commutative algebra and algebraic geometry with simple examples. In addition, we have introduced several mathematical and computational methods which may be connected to this post-contemporary theory of quantum chemistry. The two main principles of the theory are to represent the involved equations by polynomials and to process the polynomials by computer algebra. Then one can analyze the polynomial equations and unravel the dependence among the variables.

In the traditional molecular orbital theory, the principal mathematical tool is linear algebra. Indeed, it is a subset of commutative algebra. For instance, the diagonalization of the matrix and the orthonormality of the eigenstates become comprehensible in a wider context: the primary ideal decomposition. And the latter has a close relation to other fundamental ideas in algebraic geometry: the resolution of singularities and the normalization of varieties. Besides, the polynomial approximation (inevitable in our theory) should ideally be embedded into the "completion" of commutative rings; we carry out this approximation with a finite number of symbols for the sake of facile computation with limited resources. Without doubt, there are a lot of other concepts in commutative algebra and algebraic geometry which would be embodied in the application of computational quantum mechanics. Ask, and it will be given to you...

It will be a felicity for us, the authors of this article, if you enjoy every bite of the "quantum chemistry with the view toward algebraic geometry" and if this article stirs your curiosity; we heartily greet your participation in the research of this new theme.