Preprint
Article

This version is not peer-reviewed.

Kernel Principal Component Analysis for Allen–Cahn Equation

A peer-reviewed article of this preprint also exists.

Submitted: 10 October 2024

Posted: 10 October 2024


Abstract
Several researchers have analyzed efficient computational methods that maintain the accuracy of the Allen-Cahn (AC) equation and its energy stability. This article presents a reduced order modeling technique based on kernel principal component analysis (KPCA), a nonlinear variant of classical principal component analysis (PCA). KPCA is applied to the data matrix built from the discrete solution vectors of the AC equation. To obtain the discrete solutions, finite differences are employed for the spatial discretization, while Kahan's method is used for the time integration. The back-mapping from the reduced space is handled by a non-iterative formula rooted in the concept of the multidimensional scaling (MDS) method. Using KPCA, we show that the reduced order approximations preserve the energy dissipation structure. The accuracy of the reduced solutions obtained by linear PCA and KPCA, the preservation of invariants, and the computational speed-ups are demonstrated for one-, two-, and three-dimensional AC equations.

1. Introduction

Differential equations, particularly partial differential equations (PDEs), are commonly used to mathematically represent real-life problems in different scientific fields. Among them, the Allen-Cahn (AC) equation is a popular nonlinear PDE for modeling phase transition problems. The AC equation was first presented in [1] to simulate the movement of antiphase boundaries in crystalline solids. It is commonly applied to represent different natural phenomena and has become a fundamental equation of the diffuse interface approach used to investigate phase transitions and interfacial dynamics in various fields such as materials science [2], image analysis [3], fluid dynamics [4] and mean curvature flow [5]. In materials science, it is used to simulate phase transformations in polymer, metal and ceramic materials [6]. The AC equation is further used in multi-reconstruction from point clouds [7], in predicting the evolution of microstructures in various materials, and in biophysics [8].
The AC equation is defined by

$$u_t = \mu^2 \Delta u - f(u), \qquad (x,t) \in \Omega \times (0,T], \qquad (1)$$

and explains the movement of the anti-phase boundaries in a binary metal alloy at a fixed temperature. In equation (1), $u(x,t)$ denotes the concentration of one metallic component of the alloy, while the positive parameter $\mu$ represents the interfacial width, which is small compared to the laboratory length scale. We examine the AC equation (1) on a domain $\Omega \subset \mathbb{R}^d$ ($d = 1,2,3$), enforced with homogeneous Neumann or periodic boundary conditions, and with the double-well potential $F(u) = (u^2-1)^2/4$. The nonlinear term in (1) is given as $f(u) = F'(u) = u^3 - u$, and the free energy functional of the AC equation is defined by

$$E(u) = \int_{\Omega} \left( \frac{\mu^2}{2} |\nabla u|^2 + F(u) \right) dx. \qquad (2)$$
In this form, the AC equation (1) is a dissipative gradient system

$$u_t = -\frac{\delta E}{\delta u},$$

where $\delta E/\delta u$ represents the variational derivative of the free energy. In other words, the AC equation satisfies a nonlinear stability condition: the free energy functional $E(u)$ decreases over time, i.e., $E(u(x,t_n)) \le E(u(x,t_m))$ for $t_n > t_m$.
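The dissipation can be verified directly; the following one-line computation (assuming sufficient smoothness and vanishing boundary terms under the stated boundary conditions) shows why the gradient structure forces the energy to decay:

$$\frac{d}{dt} E(u) = \int_{\Omega} \frac{\delta E}{\delta u}\, u_t \, dx = -\int_{\Omega} (u_t)^2 \, dx \le 0.$$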
Dimensionality reduction for dynamical systems is a crucial process aimed at simplifying complex models while retaining the essential features of their behavior. In many real-world applications, such as fluid dynamics, climate modeling, and biological systems, the state space can be extremely high-dimensional, making simulations computationally expensive and difficult to analyze. Intrusive and data-driven reduced order modeling (ROM) are two distinct approaches used for dimensionality reduction. Intrusive ROM relies on integrating reduced models directly into the numerical algorithms of the original system, often requiring detailed knowledge of the governing equations. In contrast, data-driven ROM leverages discrete data to construct models, utilizing techniques like machine learning and statistical analysis to capture system behavior without explicit knowledge of the underlying physics. Among the latter, principal component analysis (PCA) is a robust statistical method used for reducing dimensionality and visualizing data across fields like image processing, genetics, finance, and social sciences. It transforms original variables into principal components that maximize variance, aiming to create a lower-dimensional representation while retaining the original variability [9,10]. However, PCA's effectiveness diminishes when the data exhibit nonlinear structure, since such data cannot be captured well by a linear subspace decomposition.
Recently, kernel principal component analysis (KPCA) has emerged as an advanced alternative; it is an extension of traditional PCA that enables the analysis of nonlinear relationships within high-dimensional data. By employing a kernel function, KPCA implicitly maps the original data into a higher-dimensional feature space where linear separability is often achieved, allowing for more effective dimensionality reduction and feature extraction. This method excels in capturing complex patterns that conventional PCA might overlook, making it particularly useful in exploratory data analysis, pattern recognition, face recognition, and manifold learning across fields like computer vision, bio-informatics, broad learning systems, and signal processing [11,12,13,14,15,16,17,18]. However, KPCA faces challenges in interpreting results in the original input space (the so-called pre-image problem) and in the computational cost associated with kernel evaluations; multidimensional scaling (MDS) is a notable approach for the pre-image problem [14,19,20]. MDS allows for the computation of the pre-image by employing various distance metrics between the data vectors. Despite these challenges, KPCA remains a powerful tool for exploratory data analysis and is widely utilized in applications.
This paper explores the application of KPCA to data derived from discrete solution vectors of the AC equation, a novel approach in ROM that preserves the solution accuracy and the energy dissipation property of the AC equation. In the literature, there are only a few papers dealing with ROM for the AC equation. An energy stable ROM using finite differences and convex splitting is derived for the AC equation in [21], and a nonlinear POD/Galerkin ROM is applied in [22]. In [23], the authors derive an energy stable ROM for the AC equation using discontinuous Galerkin and average vector field methods, utilizing proper orthogonal decomposition-greedy adaptive sampling and the discrete empirical interpolation method.
In the sequel, the paper outlines the AC equation's mathematical formulation with space/time discretization in Section 2, details the KPCA process in Section 3, showcases results related to reduced solution accuracy and computational efficiency for the one-, two- and three-dimensional AC equations in Section 4, and concludes with key findings.

2. Discrete Data from AC Equation

Here, we present the spatio-temporal (full) discrete representation of the AC equation based on its free energy (2), where the solution vectors are utilized to construct the data set for further analysis.
The AC equation is first discretized in space. With this goal in mind, we apply finite difference discretization to the second order differential (Laplace) operator $\Delta$ in the AC equation utilizing a tensor framework. Let the matrix $A \in \mathbb{R}^{m\times m}$ denote the discrete Laplace operator, where $m$ stands for the total degrees of freedom on the $d$-dimensional spatial mesh ($d = 1,2,3$). Let also the matrix $D_i$ stand, under periodic boundary conditions, for the discretization of the Laplace operator by finite differences on a one-dimensional spatial interval consisting of $m_i$ grid points with the mesh size $\Delta x_i$, given by
$$D_i := \frac{1}{\Delta x_i^2}
\begin{pmatrix}
-2 & 1 & & & 1\\
1 & -2 & 1 & & \\
 & \ddots & \ddots & \ddots & \\
 & & 1 & -2 & 1\\
1 & & & 1 & -2
\end{pmatrix} \in \mathbb{R}^{m_i \times m_i}.$$
Firstly, consider a one-dimensional domain $\Omega \subset \mathbb{R}$ that is partitioned into $m_1$ uniform subintervals with the mesh size $\Delta x_1$. On this grid, we simply have $A = D_1$ and $m = m_1$. On a two-dimensional rectangular domain $\Omega \subset \mathbb{R}^2$ that is partitioned into $m_1$ and $m_2$ uniform subintervals with the mesh sizes $\Delta x_1$ and $\Delta x_2$ in the $x_1$ and $x_2$ coordinates, respectively, we can take the matrix $A$ of the discrete Laplace operator as
$$A = I_{m_2} \otimes D_1 + D_2 \otimes I_{m_1}.$$
On a three-dimensional cuboid $\Omega \subset \mathbb{R}^3$, on the other hand, let the domain be partitioned into $m_1$, $m_2$ and $m_3$ uniform subintervals with the mesh sizes $\Delta x_1$, $\Delta x_2$ and $\Delta x_3$ in the $x_1$, $x_2$ and $x_3$ coordinates, respectively. Then, the matrix $A$ of the discrete Laplace operator is of the form
$$A = I_{m_3} \otimes I_{m_2} \otimes D_1 + I_{m_3} \otimes D_2 \otimes I_{m_1} + D_3 \otimes I_{m_2} \otimes I_{m_1}.$$
In the above setting, $\otimes$ denotes the Kronecker product and $I_{m_i}$ is the identity matrix of size $m_i$, $i = 1,2,3$. We note that the matrix $D_i$, and hence the matrix $A$, can be easily formed under the homogeneous Neumann boundary condition; all we need is to set the upper-right and bottom-left corner entries of the matrix $D_i$ to zero, and to take $m_i := m_i + 1$.
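As an illustration of the tensor construction above, the following Python sketch assembles the matrices $D_i$ and $A$ with sparse Kronecker products; the function names and the periodic/Neumann switch are ours and not part of the original text.

```python
import numpy as np
import scipy.sparse as sp

def lap1d(m, dx, periodic=True):
    """One-dimensional finite-difference Laplacian D_i on m grid points."""
    D = sp.diags([np.ones(m - 1), -2.0 * np.ones(m), np.ones(m - 1)],
                 offsets=[-1, 0, 1], format="lil")
    if periodic:
        # wrap-around coupling between the first and last grid points
        D[0, m - 1] = 1.0
        D[m - 1, 0] = 1.0
    return sp.csr_matrix(D) / dx**2

def lap_nd(ms, dxs, periodic=True):
    """d-dimensional Laplacian A built from the 1D blocks via Kronecker products."""
    Ds = [lap1d(m, dx, periodic) for m, dx in zip(ms, dxs)]
    Is = [sp.identity(m, format="csr") for m in ms]
    if len(ms) == 1:
        return Ds[0]
    if len(ms) == 2:
        return sp.kron(Is[1], Ds[0]) + sp.kron(Ds[1], Is[0])
    return (sp.kron(Is[2], sp.kron(Is[1], Ds[0]))
            + sp.kron(Is[2], sp.kron(Ds[1], Is[0]))
            + sp.kron(Ds[2], sp.kron(Is[1], Is[0])))

# Example: 2D grid with 64 x 64 points on [0, 2*pi]^2 (periodic)
A = lap_nd([64, 64], [2 * np.pi / 64, 2 * np.pi / 64])
```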
Next, we define the time-dependent semi-discrete solution vector $u(t): [0,T] \to \mathbb{R}^m$ as

$$u(t) = (u_1(t), \ldots, u_m(t))^T,$$

with $u_i(t)$ representing the approximate solution at time $t$ and at the $i$-th grid node of the partitioned mesh of the domain $\Omega \subset \mathbb{R}^d$, in an appropriate ordering. After spatial discretization, we obtain the $m$-dimensional semi-discrete (dynamical) system
$$\dot{u} = \mu^2 A u - f(u), \qquad (3)$$

where $\dot{u}$ denotes ordinary differentiation with respect to the time variable $t$, and the $m$-dimensional vector $f$ is such that $f_i(u) = u_i^3 - u_i$, $i = 1, \ldots, m$.
The full discrete system for the AC equation is derived by applying a temporal integration technique to the semi-discrete system (3). Since effective non-intrusive dimension reduction methods rely on accurate and reliable data, and the data set used in this paper is composed of the discrete solution vectors of the full discrete system of the AC equation, it is crucial to use a temporal integration that is both accurate and preserves the energy dissipation property of the AC equation. An energy-stable scheme is a numerical method that maintains the energy dissipation/conservation of a gradient/Hamiltonian flow at the discrete level. Extensive literature exists on gradient stable schemes for the AC equation. In this work, we utilize Kahan's method [24], which is an advanced numerical method designed to enhance the accuracy of time integration for dynamical systems. The Kahan time integrator achieves higher precision compared to traditional integration methods, such as the standard Runge-Kutta or Euler methods. It is a second order, time-reversible, and linearly implicit technique for ODEs [24].
For the full discrete system, we partition the time interval $[0,T]$ into $n-1$ equal parts with the time step-size $\Delta t = T/(n-1)$, creating the discrete time points $t_j = j\Delta t$, $j = 0, \ldots, n-1$. The solution vector at time $t_j$ is denoted by $u^j = u(t_j) \in \mathbb{R}^m$. Then, the application of Kahan's method leads to the full discrete problem: given $u^0 \in \mathbb{R}^m$ by the initial condition, compute $u^{j+1}$ from the linear system

$$\left( I_m - \frac{\Delta t}{2} J_r(u^j) \right) (u^{j+1} - u^j) = \Delta t \, r(u^j), \qquad j = 0, 1, \ldots, n-2, \qquad (4)$$

where $r(u) := \mu^2 A u + u - u^3$ is the right-hand side vector of the dynamical system (3), and $J_r(u)$ is its Jacobian matrix. Under the given discretization, the discrete form of the energy $E$ reads as
$$E(u) = \sum_{i=1}^{m} \left( \frac{\mu^2}{2} \big( (D_f u)_i \big)^2 + \frac{1}{4} (u_i^2 - 1)^2 \right), \qquad (5)$$

where the matrix $D_f \in \mathbb{R}^{m\times m}$ mimics first order forward finite difference differentiation.
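A minimal sketch of the time stepping (4), under the reconstruction above where $r(u) = \mu^2 A u + u - u^3$ and $J_r(u) = \mu^2 A + I - 3\,\mathrm{diag}(u^2)$; the function name and the snapshot layout (solutions stored as columns) are our own choices, not part of the original text.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_ac(A, u0, mu, dt, n_steps):
    """Linearly implicit (Kahan-type) stepping for du/dt = mu^2*A*u + u - u^3.

    Returns the snapshot matrix whose columns are u^0, ..., u^{n_steps-1}."""
    m = u0.size
    I = sp.identity(m, format="csr")
    u = u0.copy()
    snapshots = [u.copy()]
    for _ in range(n_steps - 1):
        r = mu**2 * (A @ u) + u - u**3                # r(u^j)
        Jr = mu**2 * A + I - 3.0 * sp.diags(u**2)     # Jacobian of r at u^j
        du = spla.spsolve(I - 0.5 * dt * Jr, dt * r)  # one sparse linear solve per step
        u = u + du
        snapshots.append(u.copy())
    return np.column_stack(snapshots)
```

Since the system matrix changes with $u^j$, one sparse factorization (or iterative solve) is performed per time step, which is the price of the linearly implicit structure.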

3. Reduced Order Model

In this part, we will cover the linear/nonlinear ROM formulation of the AC equation. First, we briefly outline the conventional PCA method, followed by an explanation of KPCA, the nonlinear counterpart of PCA. The process involves two steps: starting with vectors from the column space of a data matrix, known as the input space, moving to the reduced space via projection, and then mapping back from the reduced space to the input space to get the reduced approximation. Because the ROM approach we are examining relies on given data, we will focus in the following sections on the data matrix
$$U = [u^1 \; \cdots \; u^n] \in \mathbb{R}^{m\times n}, \qquad (6)$$

where the column vectors represent the solution vectors of the AC equation derived from the full discrete system (4). To simplify the notation, we start the superscript of the snapshot vectors in equation (6) at 1 instead of 0. This means that the column vector $u^i \in \mathbb{R}^m$ in equation (6) corresponds to the solution vector $u^{i-1} \in \mathbb{R}^m$, $i = 1, \ldots, n$, of the AC equation derived from the system (4).

3.1. Linear Dimension Reduction (PCA)

In the case of a data matrix (6) containing the solution vectors of the AC equation, PCA aims to find a linear model of a reduced dimension $k \ll m$ which can accurately represent the variance of the vectors $u^i$. We assume that the matrix $U$ has zero column sum; otherwise, we can achieve this by subtracting the average column $\bar{u} = (1/n)\sum_{i=1}^n u^i$ from each column. The covariance matrix $C$ of the matrix $U$ is defined by

$$C = \sum_{i=1}^{n} u^i (u^i)^T = U U^T \in \mathbb{R}^{m\times m},$$
whose diagonalization is given by
$$C = P \Lambda P^T,$$

where the column vectors of the orthogonal matrix $P = [p^1 \; \cdots \; p^m] \in \mathbb{R}^{m\times m}$ represent the eigenvectors related to the (sorted) eigenvalues $\lambda_1 > \cdots > \lambda_m$ lined up on the diagonal entries of the diagonal matrix $\Lambda \in \mathbb{R}^{m\times m}$. Next, the eigenvectors $\{p^1, \ldots, p^k\} \subset \mathbb{R}^m$ of the covariance matrix $C$ with the $k$ largest eigenvalues $\lambda_1 > \cdots > \lambda_k$ are considered as the basis of the $k$-dimensional linear subspace in a straightforward manner [17]. Ultimately, any vector $u^* \in \mathbb{R}^m$ in the input space can be approximately expressed by the pre-image $\hat{u}^* \in \mathbb{R}^m$ as a linear combination of the eigenvectors
$$u^* \approx \hat{u}^* = \sum_{i=1}^{k} z_i^* p^i = P_k z^*, \qquad (7)$$

with the matrix $P_k = [p^1 \; \cdots \; p^k] \in \mathbb{R}^{m\times k}$ being composed of the first $k$ columns of the orthogonal matrix $P$. The real coefficients $z_i^*$ are the components of the projection vector $z^* = [z_1^*, \ldots, z_k^*]^T \in \mathbb{R}^k$, obtained by projecting $u^*$ onto the smaller linear subspace. The $k \ll m$ dimensional vector $z^* = P_k^T u^*$ lies in the reduced space, and is the projection of the $m$-dimensional vector $u^*$ from the input space [25].
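The linear reduction and reconstruction of this subsection can be summarized by the short sketch below. We compute the leading eigenvectors of $C = UU^T$ through the thin SVD of the centered snapshot matrix, an equivalent and numerically more convenient route; adding the mean back in the reconstruction reflects the centering step. Function and variable names are illustrative.

```python
import numpy as np

def pca_reduce_reconstruct(U, u_star, k):
    """Project u_star onto the k leading principal directions of U and map back (Eq. (7))."""
    u_bar = U.mean(axis=1)
    Uc = U - u_bar[:, None]                    # centered snapshots (zero column sum)
    P, _, _ = np.linalg.svd(Uc, full_matrices=False)
    Pk = P[:, :k]                              # basis of the k-dimensional subspace
    z_star = Pk.T @ (u_star - u_bar)           # reduced coordinates z*
    u_hat = u_bar + Pk @ z_star                # pre-image / reduced approximation
    return u_hat, z_star
```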

3.2. Nonlinear Dimension Reduction (KPCA)

PCA is restricted to reducing dimensionality in a linear manner, and standard PCA may be inefficient when confronted with data featuring intricate structures that cannot be accurately represented within a linear subspace. KPCA provides a remedy by extending linear PCA to nonlinear dimensionality reduction [12,14,16].
In KPCA, one maps the vectors from the $m$-dimensional input space to a larger $M \ge m$ dimensional (potentially infinite-dimensional) space known as the feature space, through a nonlinear map $\Phi(\cdot): \mathbb{R}^m \to \mathbb{R}^M$. Then, traditional PCA is subsequently applied to the vectors within this feature space. To this end, we denote by $\tilde{U}$ the transformed data matrix

$$\tilde{U} = [\Phi(u^1) \; \cdots \; \Phi(u^n)] \in \mathbb{R}^{M\times n},$$
where the columns consist of the transformed vectors $\Phi(u^l) \in \mathbb{R}^M$ corresponding to the input space vectors $u^l$. In most cases, the matrix $\tilde{U}$, associated with the arbitrary map $\Phi(\cdot)$, does not necessarily possess zero column sum for PCA. Subtracting the mean $\bar{\Phi} = (1/n)\sum_{i=1}^n \Phi(u^i)$ from each column results in a data matrix

$$\bar{\tilde{U}} = [\tilde{\Phi}(u^1) \; \cdots \; \tilde{\Phi}(u^n)] = \tilde{U} H \in \mathbb{R}^{M\times n},$$

with zero column sum, where the columns are $\tilde{\Phi}(u^l) = \Phi(u^l) - \bar{\Phi}$, and $H$ is the centering matrix defined by

$$H = I_n - \frac{1}{n} \mathbf{1} \mathbf{1}^T \in \mathbb{R}^{n\times n}, \qquad (8)$$
where $I_n$ is the $n$-dimensional identity matrix and $\mathbf{1} = [1, \ldots, 1]^T \in \mathbb{R}^n$ is the vector of ones. Next, we apply the standard PCA steps described earlier to the data matrix $\bar{\tilde{U}}$, whose columns live in the feature space. This involves identifying the eigenvectors of the covariance matrix

$$\tilde{C} = \sum_{l=1}^{n} \tilde{\Phi}(u^l) \tilde{\Phi}(u^l)^T = \bar{\tilde{U}} \bar{\tilde{U}}^T \in \mathbb{R}^{M\times M}. \qquad (9)$$
At this point, KPCA has two significant disadvantages. First, the dimension $M$ could be so large that it becomes practically impossible to compute the eigenvectors of the $M$-dimensional covariance matrix $\tilde{C}$. Second, the nonlinear map $\Phi(\cdot)$ is often not explicitly available. To tackle these problems, the kernel trick is utilized [16,17]. To understand this technique, consider the eigenvalue problem
$$\tilde{C} v^i = \tilde{\lambda}_i v^i, \qquad i = 1, \ldots, M, \qquad (10)$$

where $\{\tilde{\lambda}_i, v^i\}$ represent the eigenpairs of the covariance matrix $\tilde{C}$. It is important to note that, by definition, the eigenvectors associated with nonzero eigenvalues lie in the column space of the centered data matrix $\bar{\tilde{U}} = [\tilde{\Phi}(u^1) \; \cdots \; \tilde{\Phi}(u^n)]$. Consequently, for each eigenvector $v^i$ there are real coefficients $a_{ij}$ that satisfy the linear combination

$$v^i = \sum_{j=1}^{n} a_{ij} \tilde{\Phi}(u^j), \qquad i = 1, \ldots, M. \qquad (11)$$
Substituting the expansion (11) and the identity (9) into the eigenvalue problem (10), we get

$$\sum_{l=1}^{n} \tilde{\Phi}(u^l) \sum_{j=1}^{n} a_{ij} \tilde{\Phi}(u^l)^T \tilde{\Phi}(u^j) = \tilde{\lambda}_i \sum_{j=1}^{n} a_{ij} \tilde{\Phi}(u^j).$$

Since all eigenvectors lie in the span of the transformed vectors $\{\tilde{\Phi}(u^s)\}_{s=1}^{n}$, we can consider the equivalent equations obtained by projecting onto the vectors $\tilde{\Phi}(u^s)$, $s = 1, \ldots, n$, resulting in

$$\sum_{l=1}^{n} \tilde{\Phi}(u^s)^T \tilde{\Phi}(u^l) \sum_{j=1}^{n} a_{ij} \tilde{\Phi}(u^l)^T \tilde{\Phi}(u^j) = \tilde{\lambda}_i \sum_{j=1}^{n} a_{ij} \tilde{\Phi}(u^s)^T \tilde{\Phi}(u^j). \qquad (12)$$
At this stage, a kernel function $\kappa(\cdot,\cdot): \mathbb{R}^m \times \mathbb{R}^m \to \mathbb{R}$ is defined so that

$$\kappa(u^s, u^l) = \langle \Phi(u^s), \Phi(u^l) \rangle = \Phi(u^s)^T \Phi(u^l), \qquad s,l = 1, \ldots, n,$$

with the goal of expressing the Euclidean inner products $\Phi(u^s)^T \Phi(u^l)$ of the non-centered transformed vectors in the feature space in terms of the input space vectors. Different types of kernel functions are utilized in various studies, including linear, polynomial, and Gaussian kernels [16,25]. Here, we employ the Gaussian kernel defined for any vectors $x, y$ by

$$\kappa(x,y) = \exp\left( -\frac{\|x-y\|^2}{2\sigma^2} \right),$$

where $\|\cdot\|$ represents the Euclidean norm and $\sigma$ is a width parameter.
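For the snapshot matrix $U$, the Gaussian kernel entries $K_{ij} = \kappa(u^i, u^j)$ can be evaluated at once from pairwise squared distances; the following vectorized Python sketch (names ours) does this, together with the kernel vector of a test point used later.

```python
import numpy as np

def gaussian_kernel_matrix(U, sigma):
    """K_ij = exp(-||u^i - u^j||^2 / (2 sigma^2)) for the columns u^i of U."""
    sq = np.sum(U**2, axis=0)                               # squared norms of the columns
    D2 = sq[:, None] + sq[None, :] - 2.0 * (U.T @ U)        # pairwise squared distances
    return np.exp(-np.maximum(D2, 0.0) / (2.0 * sigma**2))  # clip tiny negatives from round-off

def gaussian_kernel_vector(U, u_star, sigma):
    """Kernel vector k_{u*} with entries kappa(u*, u^j)."""
    d2 = np.sum((U - u_star[:, None])**2, axis=0)
    return np.exp(-d2 / (2.0 * sigma**2))
```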
In addition, to express the Euclidean inner products $\tilde{\Phi}(\cdot)^T \tilde{\Phi}(\cdot)$ of the centered transformed vectors in the feature space, we utilize the relation [14]

$$\tilde{\kappa}(u^s, u^l) = \kappa(u^s, u^l) - \frac{1}{n} \mathbf{1}^T k_{u^s} - \frac{1}{n} \mathbf{1}^T k_{u^l} + \frac{1}{n^2} \mathbf{1}^T K \mathbf{1},$$

where the kernel matrix $K \in \mathbb{R}^{n\times n}$ and the vector $k_u \in \mathbb{R}^n$ are defined as

$$K_{ij} = \kappa(u^i, u^j), \qquad k_u = \big( \kappa(u, u^1), \ldots, \kappa(u, u^n) \big)^T.$$
We note that it is not necessary to compute all the vectors $k_{u^i}$, as they are simply the $i$-th columns of the symmetric kernel matrix $K$. The use of the kernel function turns equation (12) into the equivalent equation
$$\sum_{l=1}^{n} \tilde{\kappa}(u^s, u^l) \sum_{j=1}^{n} a_{ij} \tilde{\kappa}(u^l, u^j) = \tilde{\lambda}_i \sum_{j=1}^{n} a_{ij} \tilde{\kappa}(u^s, u^j). \qquad (13)$$

The equation (13) can be represented in matrix-vector form as

$$\tilde{K}^2 a^i = \tilde{\lambda}_i \tilde{K} a^i, \quad \text{or} \quad \tilde{K} a^i = \tilde{\lambda}_i a^i, \qquad i = 1, \ldots, n, \qquad (14)$$
for the coefficient vectors $a^i = (a_{i1}, \ldots, a_{in})^T \in \mathbb{R}^n$, where $\tilde{K} = H K H$ and $H$ is the centering matrix given in (8). Ultimately, for any vector $u^* \in \mathbb{R}^m$ in the input space, its transformed vector $\tilde{\Phi}(u^*) \in \mathbb{R}^M$ in the feature space can be considered. The projection vector $z^* = (z_1^*, \ldots, z_k^*)^T \in \mathbb{R}^k$ represents the projection of $\Phi(u^*)$ onto the reduced $k$-dimensional space ($k \ll m \le M$) spanned by the eigenvectors $\{v^1, \ldots, v^k\}$ associated with the largest $k$ eigenvalues $\{\tilde{\lambda}_1, \ldots, \tilde{\lambda}_k\}$. After calculating the coefficients $a^i$ from the eigenvalue problem (14) and utilizing the expansion (11) of the eigenvectors $v^i$, the entries $z_i^*$ of the projection vector can be determined using the kernel function as

$$z_i^* = \tilde{\Phi}(u^*)^T v^i = \sum_{j=1}^{n} a_{ij} \tilde{\kappa}(u^*, u^j), \qquad i = 1, \ldots, k.$$
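Putting the pieces together, a compact sketch of the reduced-space mapping reads as follows: it centers the kernel matrix as $\tilde{K} = HKH$, solves the eigenvalue problem (14), and evaluates the projection coefficients $z_i^*$. The normalization of the coefficient vectors (so that the feature-space eigenvectors $v^i$ have unit norm, $\tilde{\lambda}_i \, (a^i)^T a^i = 1$) is a standard KPCA step that the text does not spell out; all names are illustrative.

```python
import numpy as np

def kpca_basis(K, k):
    """Centered kernel matrix, its k leading eigenpairs, and normalized coefficients a_i."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    K_tilde = H @ K @ H                              # Eq. (14): K_tilde a_i = lambda_i a_i
    lam, A = np.linalg.eigh(K_tilde)                 # ascending order
    lam, A = lam[::-1][:k], A[:, ::-1][:, :k]        # keep the k largest eigenpairs
    A = A / np.sqrt(np.maximum(lam, 1e-14))          # normalize v_i in feature space
    return K_tilde, lam, A

def kpca_project(K, k_star, A):
    """Reduced coordinates z_i^* = sum_j a_ij * kappa_tilde(u^*, u^j)."""
    n = K.shape[0]
    one = np.ones(n)
    # centered kernel values kappa_tilde(u^*, u^j), following the centering relation above
    k_tilde = (k_star - k_star.sum() / n
               - K @ one / n + one @ K @ one / n**2)
    return A.T @ k_tilde
```

With `K = gaussian_kernel_matrix(U, sigma)` and `k_star = gaussian_kernel_vector(U, u_star, sigma)` from the previous sketch, `z_star = kpca_project(K, k_star, A)` gives the reduced coordinates used below.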
In standard PCA, the pre-image $\hat{u}^* \in \mathbb{R}^m$ of any vector $u^* \in \mathbb{R}^m$ in the input space is obtained directly from equation (7). KPCA, however, does not follow this pattern. Denote by $P_k \Phi(u^*)$ the projection of $\Phi(u^*)$ onto the reduced space spanned by the eigenvectors $\{v^1, \ldots, v^k\}$, namely

$$P_k \Phi(u^*) = \sum_{i=1}^{k} z_i^* v^i + \bar{\Phi}.$$
Next, we seek an approximate pre-image $\hat{u}^*$ such that the transformed vector $\Phi(\hat{u}^*)$ is closest to the projected vector $P_k \Phi(u^*)$. This amounts to minimizing the objective functional $\rho(\hat{u}^*) = \|\Phi(\hat{u}^*) - P_k \Phi(u^*)\|^2$ in the least-squares sense. Imposing $\nabla_{\hat{u}^*} \rho = 0$, we obtain the following equation to be solved

$$\hat{u}^* = \frac{\sum_{i=1}^{k} \tilde{\gamma}_i \exp\left( -\frac{\|\hat{u}^* - u^i\|^2}{2\sigma^2} \right) u^i}{\sum_{i=1}^{k} \tilde{\gamma}_i \exp\left( -\frac{\|\hat{u}^* - u^i\|^2}{2\sigma^2} \right)}, \qquad (15)$$
where we set

$$\tilde{\gamma}_i = \gamma_i + \frac{1}{n} \left( 1 - \sum_{j=1}^{n} \gamma_j \right), \qquad \gamma_i = \sum_{l=1}^{k} z_l^* a_{li}.$$
The implicit equation (15) can be solved by fixed-point iteration or other Newton-type nonlinear iterative solvers. However, such nonlinear iterations can be unstable and depend highly on the initial estimate [14,25]. In the KPCA method presented in this study, the pre-image is instead calculated using a non-iterative approach [14], which relies on the relation between the distances of vectors in the input space and the distances of the corresponding vectors in the feature space, exploiting that the Gaussian kernel is a function of the squared Euclidean distance. By utilizing the MDS concept [14,19], we approximately recover a pre-image $\hat{u}^*$ for which the distances between $u^*$ and each input vector $u^i$ are consistent with the distances between the projected vector $P_k \Phi(u^*)$ and each transformed feature vector $\Phi(u^i)$, see Figure 1. To achieve this goal, let
$$d_{ij}^2 := d^2(u^i, u^j) = \|u^i - u^j\|^2, \qquad \tilde{d}_{ij}^2 := \tilde{d}^2(\Phi(u^i), \Phi(u^j)) = \|\Phi(u^i) - \Phi(u^j)\|^2,$$

represent the squared distances between the input space vectors and between their transformed feature space vectors, respectively. Writing the Gaussian kernel as $f(d_{ij}^2) = \exp(-d^2(u^i,u^j)/2\sigma^2)$, we can establish a relation between the input space distance and the feature space distance, together with its inverse map, as [14]

$$f(d_{ij}^2) = \frac{1}{2} \left( K_{ii} + K_{jj} - \tilde{d}_{ij}^2 \right), \qquad d_{ij}^2 = -2\sigma^2 \ln\left( \frac{K_{ii} + K_{jj} - \tilde{d}_{ij}^2}{2} \right). \qquad (16)$$
In addition, it can be shown using the kernel matrix that the feature space distance between the projected vector $P_k \Phi(u^*)$ and a transformed vector $\Phi(u^i)$ is

$$\begin{aligned}
\tilde{d}^2(\Phi(u^i), P_k \Phi(u^*)) &= \|\Phi(u^i) - P_k \Phi(u^*)\|^2 = \|\Phi(u^i)\|^2 + \|P_k \Phi(u^*)\|^2 - 2\, P_k \Phi(u^*)^T \Phi(u^i) \\
&= \left( k_{u^*} + \tfrac{1}{n} K \mathbf{1} - 2 k_{u^i} \right)^T H C_a H \left( k_{u^*} - \tfrac{1}{n} K \mathbf{1} \right) + \tfrac{1}{n^2} \mathbf{1}^T K \mathbf{1} + K_{ii} - \tfrac{2}{n} \mathbf{1}^T k_{u^*},
\end{aligned} \qquad (17)$$

for the matrix

$$C_a = \sum_{j=1}^{k} \frac{1}{\tilde{\lambda}_j} a^j (a^j)^T.$$
Ultimately, by taking $P_k \Phi(u^*) \approx \Phi(\hat{u}^*)$ and utilizing the metric relation (16) within equation (15), we can derive the non-iterative solution formula [14]

$$\hat{u}^* = \frac{\sum_{i=1}^{k} \tilde{\gamma}_i \exp\left( -\frac{\|\hat{u}^* - u^i\|^2}{2\sigma^2} \right) u^i}{\sum_{i=1}^{k} \tilde{\gamma}_i \exp\left( -\frac{\|\hat{u}^* - u^i\|^2}{2\sigma^2} \right)}
= \frac{\sum_{i=1}^{k} \tilde{\gamma}_i \, \frac{1}{2}\big( 2 - \tilde{d}^2(\Phi(\hat{u}^*), \Phi(u^i)) \big) \, u^i}{\sum_{i=1}^{k} \tilde{\gamma}_i \, \frac{1}{2}\big( 2 - \tilde{d}^2(\Phi(\hat{u}^*), \Phi(u^i)) \big)}
\approx \frac{\sum_{i=1}^{k} \tilde{\gamma}_i \, \frac{1}{2}\big( 2 - \tilde{d}^2(P_k \Phi(u^*), \Phi(u^i)) \big) \, u^i}{\sum_{i=1}^{k} \tilde{\gamma}_i \, \frac{1}{2}\big( 2 - \tilde{d}^2(P_k \Phi(u^*), \Phi(u^i)) \big)}, \qquad (18)$$

where the distance $\|P_k \Phi(u^*) - \Phi(u^i)\|^2$ in the formula is determined through relation (17). Furthermore, the $k$ vectors $u^i$ appearing in (18) are chosen as the $k$ input space vectors among the $n$ snapshots that are closest to the given vector $u^*$ [14,25].
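A sketch of the non-iterative back-mapping (18) under the reconstruction above: it evaluates the feature-space distances of (17), converts them through the Gaussian-kernel relation (16) (with $K_{ii} = 1$), and forms the weighted average over the selected snapshots. The helper names and the use of the outputs of the previously sketched `kpca_basis`/`kpca_project` are our own; this is an illustration of the MDS-based formula, not the authors' reference implementation.

```python
import numpy as np

def kpca_preimage(U, K, A, lam, z_star, k_star, n_neighbors=None):
    """Non-iterative pre-image of Eq. (18) for the Gaussian kernel."""
    n = K.shape[0]
    one = np.ones(n)
    Ca = (A / lam) @ A.T                       # C_a = sum_j a_j a_j^T / lambda_j
    H = np.eye(n) - np.ones((n, n)) / n
    gamma = A @ z_star                         # gamma_i = sum_l z_l^* a_{li}
    gamma_t = gamma + (1.0 - gamma.sum()) / n  # gamma_tilde_i, cf. the formula below (15)

    # feature-space squared distances d~^2(Phi(u^i), P_k Phi(u^*)), Eq. (17)
    left = (k_star + K @ one / n)[:, None] - 2.0 * K   # column i: k_{u*} + K1/n - 2 k_{u^i}
    d2_feat = (left.T @ (H @ Ca @ H @ (k_star - K @ one / n))
               + one @ K @ one / n**2 + np.diag(K) - 2.0 * k_star.sum() / n)

    w = gamma_t * 0.5 * (2.0 - d2_feat)        # kernel values recovered via relation (16)
    if n_neighbors is not None:                # optionally restrict to the closest snapshots
        idx = np.argsort(d2_feat)[:n_neighbors]
        return (U[:, idx] @ w[idx]) / w[idx].sum()
    return (U @ w) / w.sum()
```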

4. Numerical Results

In this section, we investigate how well the KPCA nonlinear dimensionality reduction method performs on one-, two- and three-dimensional AC equations. In each example, we construct the data matrix $U$ whose columns are the discrete solution vectors of the given AC equation, obtained by the full discrete system discussed in Section 2. In order to measure the accuracy of a reduced approximation $\hat{u}^*$ to a fixed full solution $u^*$, we use the relative absolute error defined by

$$\|u^* - \hat{u}^*\|_R = \frac{\|u^* - \hat{u}^*\|}{\|u^*\|}.$$

4.1. One-Dimensional Problem

The AC equation (1) will be numerically modeled over the one-dimensional domain $\Omega = [0, 2\pi]$ with periodic boundary conditions, and with the target time $T = 600$. The inter-facial width is set as $\mu = 0.16$, and the initial phase is

$$u(x,0) = 0.8 + \sin(x).$$
The data matrix $U$ is created from the discrete solutions at the time instances $t_k$ of the full space-time discrete system, with the mesh sizes $\Delta x = \pi/100$ and $\Delta t = 0.5$ [6]. The resulting data matrix is $U = [u^1 \; \cdots \; u^{1200}] \in \mathbb{R}^{200\times 1200}$.
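For concreteness, the 1D snapshot matrix of this example could be assembled with the sketches from Sections 2 and 3 roughly as follows; the grid construction, the previously introduced helper names, and the choice of the kernel width are ours and only illustrative.

```python
import numpy as np
# uses lap_nd, solve_ac, gaussian_kernel_matrix from the sketches above

m, T, dt, mu = 200, 600.0, 0.5, 0.16
x = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)   # periodic grid on [0, 2*pi]
A = lap_nd([m], [2.0 * np.pi / m])                      # discrete Laplacian, Section 2
u0 = 0.8 + np.sin(x)                                    # initial phase
U = solve_ac(A, u0, mu, dt, n_steps=int(T / dt))        # snapshot matrix, 200 x 1200

sigma = 5.0                                             # kernel width (illustrative choice)
K = gaussian_kernel_matrix(U, sigma)
```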
Figure 2 displays the plots of the full solutions at $t = 0, 250, 400, 500$, together with the solution trajectory over the time span $[0, 600]$. The AC equation with the potential function $F(u) = (u^2-1)^2/4$ has one unstable ($u = 0$) and two stable ($u = \pm 1$) equilibrium points. The solutions undergo phase separation, settling into regions close to the two stable equilibria. The interfaces separating these two stable phases then move very slowly over extended periods, a behavior known as meta-stability. Figure 2, left, shows this state of meta-stability.
Table 1 compares the PCA and KPCA methods in terms of accuracy and computational efficiency. For this comparison, the input space solution vector $u^* = u^{100}$ corresponding to the solution at time $t = 50$ is used. Results are presented for reduced dimensions $k = 1, \ldots, 6$. The relative absolute errors between the full solution $u^*$ and the reduced approximate solution $\hat{u}^*$ are shown when PCA and KPCA are used. When using the same number of reduced bases, more accurate results are obtained with KPCA.
On the other hand, in terms of computational efficiency, Table 2 shows the solution times required by the KPCA method using the formula (15), which is solved by fixed point iteration, and by the method using the algebraic equation (18). The first two columns show the errors between the full and reduced solutions obtained with both formulae, while the last two columns show the processing time required to generate the pre-images. It can be seen that the errors obtained with the solutions of the nonlinear equation (15) solved with iteration (2-3 iterations) and with the non-iterative formula (18) are almost the same, which shows that the solutions obtained without iteration are acceptable in terms of precision. The last two columns show that the solution time required by the KPCA method without iteration is much less than the solution time required with iteration, i.e., the non-iterative KPCA method is quite fast.
Figure 3 shows the reduced solutions together with the full solutions at times $t = 250$ and $t = 500$. For the reduced dimension $k = 2$, it can be seen that the full and reduced solutions coincide at both times. The full and reduced phase trajectories are given in Figure 4. From the figures, it is observed that the full and reduced solutions coincide, and hence show the same phase transition behavior.
The energy plots obtained with the full solutions and with the pre-images are given in Figure 5. It can be seen that the same energy decay behavior is observed in both plots.

4.2. Two-Dimensional Problem

The AC equation (1) will be numerically modeled over the two-dimensional domain $\Omega = [0, 2\pi]^2$ with periodic boundary conditions, and with the target time $T = 5$. The inter-facial width is set as $\mu = 0.05$, and the initial condition is

$$u(x_1, x_2, 0) = 2\, e^{\sin(x_1) + \sin(x_2) - 2} + 2.2\, e^{-\sin(x_1) - \sin(x_2) - 2} - 1.$$
The data matrix $U$ is created using the discrete solutions with the mesh sizes $\Delta x_1 = \Delta x_2 = \pi/32$ and $\Delta t = 0.01$ [6]. The discretized data matrix for this setting is formed as $U = [u^1 \; \cdots \; u^{500}] \in \mathbb{R}^{4096\times 500}$.
Table 3 compares the PCA and KPCA methods in terms of accuracy and computational efficiency. For this comparison, the input space solution vector $u^* = u^{100}$ corresponding to the solution at time $t = 1$ is used. Results are presented for reduced dimensions $k = 1, \ldots, 6$. The relative absolute errors between the full solution $u^*$ and the reduced approximate solution $\hat{u}^*$ are shown when PCA and KPCA are used. When using the same number of bases, more accurate results are obtained with KPCA.
In terms of computational efficiency, Table 4 shows the solution times required by the KPCA method using the formula (15) solved by fixed point iteration and by the method using the algebraic equation (18). The first two columns show the errors between the full and reduced solutions obtained with both formulas, while the last two columns show the processing time required to generate the pre-images. It can be seen that the errors obtained with the solutions of the nonlinear equation solved with iteration (3 iterations) and without iteration are almost the same, which shows that the solutions obtained without iteration are acceptable in terms of precision. The last two columns show that the solution time required by the KPCA method without iteration is considerably less than the solution time required with iteration.
Figure 6 shows the initial profile and profiles of the reduced solution together with the full solution at the final time t = 5 . For the reduced dimension k = 2 , the full and reduced solutions are similar. This means that the reduced solutions have the same behavior as the full one.
On the other hand, the energy plots obtained with the full solutions and with the pre-images are given in Figure 7. Here, it is clearly seen that the same energy decay behavior occurs in both plots, similar to the one-dimensional AC equation.

4.3. Three-Dimensional Problem

In this last example, the scaled AC equation [6,26,27]
$$u_t = \Delta u - \frac{1}{\mu^2} f(u),$$
will be numerically modeled over the three-dimensional domain $\Omega = [0,1]^3$ with homogeneous Neumann boundary conditions, and with the target time $T = 0.035$. The inter-facial width is set as $\mu = 0.1$, and the initial condition is
$$u(x_1, x_2, x_3, 0) = \tanh\left( \frac{R_0 - \sqrt{(x_1 - 0.5)^2 + (x_2 - 0.5)^2 + (x_3 - 0.5)^2}}{\sqrt{2}\,\mu} \right).$$
Here, the given parameter $R_0$ represents the initial radius of the spherical zero isosurface, and at time $t$, the exact radius $R$ of the sphere follows from the formula $R(t) = \sqrt{R_0^2 - 4t}$ [26]. This indicates a decrease in the radius of the spherical zero isosurface over time. In the numerical simulation, we take the initial radius as $R_0 = 0.4$.
For the discrete solution matrix $U$, we apply the mesh sizes $\Delta x_1 = \Delta x_2 = \Delta x_3 = 0.05$ and $\Delta t = 0.0001$, i.e., the discretized data matrix is $U = [u^1 \; \cdots \; u^{350}] \in \mathbb{R}^{9261\times 350}$. Table 5 compares the PCA and KPCA methods in terms of accuracy and computational efficiency. For this comparison, the input space solution vector $u^* = u^{100}$ corresponding to the solution at time $t = 0.01$ is used. Results are presented for reduced dimensions $k = 1, \ldots, 6$. The relative absolute errors between the full solution $u^*$ and the reduced approximate solution $\hat{u}^*$ are shown when PCA and KPCA are used. When using the same number of bases, more accurate results are obtained with KPCA.
In terms of computational efficiency, Table 6 shows the solution times required by the KPCA method with and without iteration. It can be seen that the errors obtained with the solutions of the nonlinear equation solved with iteration (2 iterations) and without iteration are almost the same, similar to the one- and two-dimensional cases, and that the KPCA method without iteration is considerably faster.
In Figure 8, left, the energy plots obtained with the full and reduced solutions are given. Again, it is clearly seen that the same energy decay behavior occurs for both the full and reduced solutions. Figure 9 shows the profiles of the zero isosurface obtained by the full and reduced solutions at times $t = 0.02, 0.03, 0.035$. For the reduced dimension $k = 2$, the full and reduced profiles are highly similar, and the radius of the spherical zero isosurface decreases as time progresses. The decrease of the exact and numerical radii is also demonstrated in Figure 8, right.

5. Discussion

In this article, we propose a nonlinear dimensionality reduction method that maintains accuracy and preserves the energy decay property of the one-, two-, and three-dimensional AC equations. The dimensionality reduction method is non-intrusive and relies on KPCA. The KPCA utilizes the MDS method and uses the input space and feature space distance metrics to develop a non-iterative algorithm for reconstructing pre-images. The numerical test examples show the accuracy of the reduced solutions and the preservation of the energy dissipation structure. Comparisons are made between traditional PCA and KPCA, as well as between the iterative and non-iterative pre-image schemes. In our future research, we aim to explore various types of kernel functions that may offer more effective performance compared to the Gaussian kernel.

Author Contributions

Conceptualization, Y.Ç. and M.U.; methodology, Y.Ç. and M.U.; software, M.U.; validation, Y.Ç. and M.U.; formal analysis, Y.Ç. and M.U.; investigation, Y.Ç. and M.U.; resources, Y.Ç. and M.U.; data curation, M.U.; writing—original draft preparation, Y.Ç.; writing—review and editing, M.U.; visualization, Y.Ç. and M.U.; supervision, M.U.; project administration, Y.Ç. and M.U.; funding acquisition, Y.Ç. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data sharing is not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Allen, S.M.; Cahn, J.W. A microscopic theory for antiphase boundary motion and its application to antiphase domain coarsening. Acta Metallurgica 1979, 27, 1085–1095.
2. Chen, L.Q. Phase-field models for microstructure evolution. Annual Review of Materials Research 2002, 32, 113–140.
3. Beneš, M.; Chalupecký, V.; Mikula, K. Geometrical image segmentation by the Allen-Cahn equation. Applied Numerical Mathematics 2004, 51, 187–205.
4. Yang, X.; Feng, J.J.; Liu, C.; Shen, J. Numerical simulations of jet pinching-off and drop formation using an energetic variational phase-field method. Journal of Computational Physics 2006, 218, 417–428.
5. Feng, X.; Prohl, A. Numerical analysis of the Allen-Cahn equation and approximation for mean curvature flows. Numerische Mathematik 2003, 94, 33–65.
6. Uzunca, M.; Karasözen, B. Linearly implicit methods for Allen-Cahn equation. Applied Mathematics and Computation 2023, 450.
7. Wang, J.; Shi, Z. Multi-Reconstruction from Points Cloud by Using a Modified Vector-Valued Allen–Cahn Equation. Mathematics 2021, 9, 1326.
8. Haq, M.U.; Haq, S.; Ali, I.; Ebadi, M.J. Approximate Solution of PHI-Four and Allen–Cahn Equations Using Non-Polynomial Spline Technique. Mathematics 2024, 12, 798.
9. Jackson, B.B. Multivariate Data Analysis: An Introduction; Richard D. Irwin: Homewood, IL, USA, 1983.
10. Johnson, R.; Wichern, D. Applied Multivariate Statistical Analysis; Prentice-Hall: New Jersey, USA, 2007.
11. Hached, M.; Jbilou, K.; Koukouvinos, C.; Mitrouli, M. A Multidimensional Principal Component Analysis via the C-Product Golub–Kahan–SVD for Classification and Face Recognition. Mathematics 2021, 9, 1249.
12. González, A.G.; Huerta, A.; Zlotnik, S.; Díez, P. A kernel Principal Component Analysis (kPCA) digest with a new backward mapping (pre-image reconstruction) strategy. Universitat Politècnica de Catalunya, Barcelona.
13. Mika, S.; Schölkopf, B.; Smola, A.; Müller, K.R.; Scholz, M.; Rätsch, G. Kernel PCA and De-Noising in Feature Spaces. In Advances in Neural Information Processing Systems; Kearns, M., Solla, S., Cohn, D., Eds.; MIT Press, 1998; Vol. 11.
14. Rathi, Y.; Dambreville, S.; Tannenbaum, A. Statistical Shape Analysis using Kernel PCA. Proceedings of SPIE 2006, 6064, 1–8.
15. Schölkopf, B.; Smola, A. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond; Adaptive Computation and Machine Learning; MIT Press: Cambridge, MA, USA, 2002.
16. Schölkopf, B.; Smola, A.; Müller, K.R. Nonlinear Component Analysis as a Kernel Eigenvalue Problem. Neural Computation 1998, 10, 1299–1319.
17. Wang, Q. Kernel Principal Component Analysis and its Applications in Face Recognition and Active Shape Models. Computer Vision and Pattern Recognition 2012, 1–8.
18. Zhang, Q.; Ying, Z.; Zhou, J.; Sun, J.; Zhang, B. Broad Learning Model with a Dual Feature Extraction Strategy for Classification. Mathematics 2023, 11, 4087.
19. Cox, T.F.; Cox, M.A.A. Multidimensional Scaling; Monographs on Statistics and Applied Probability; Chapman & Hall: London, UK, 2001.
20. Williams, C.K.I. On a connection between kernel PCA and metric multidimensional scaling. In Advances in Neural Information Processing Systems 13; Leen, T., Dietterich, T., Tresp, V., Eds.; MIT Press: Cambridge, MA, USA, 2001.
21. Song, H.; Jiang, L.; Li, Q. A reduced order method for Allen–Cahn equations. Journal of Computational and Applied Mathematics 2016, 292, 213–229.
22. Kalashnikova, I.; Barone, M.F. Efficient non-linear proper orthogonal decomposition/Galerkin reduced order models with stable penalty enforcement of boundary conditions. International Journal for Numerical Methods in Engineering 2012, 90, 1337–1362.
23. Uzunca, M.; Karasözen, B. Energy Stable Model Order Reduction for the Allen–Cahn Equation. In Model Reduction of Parametrized Systems; Benner, P., Ohlberger, M., Patera, A., Rozza, G., Urban, K., Eds.; Springer International Publishing: Cham, 2017; pp. 403–419.
24. Kahan, W.; Li, R.C. Unconventional Schemes for a Class of Ordinary Differential Equations. Journal of Computational Physics 1997, 316–331.
25. Çakır, Y.; Uzunca, M. Nonlinear Reduced Order Modelling for Korteweg-de Vries Equation. International Journal of Informatics and Applied Mathematics 2024, 7, 57–72.
26. Li, Y.; Lee, H.G.; Jeong, D.; Kim, J. An unconditionally stable hybrid numerical method for solving the Allen-Cahn equation. Computers and Mathematics with Applications 2010, 60, 1591–1606.
27. Karasözen, B.; Uzunca, M.; Sariaydin-Filibelioğlu, A.; Yücel, H. Energy Stable Discontinuous Galerkin Finite Element Method for the Allen-Cahn Equation. International Journal of Computational Methods 2018, 15, 1850013.
Figure 1. Reconstruction with KPCA.
Figure 2. 1D AC: Phase trajectory (left) and phase plots (right) at times t = 0 , 250 , 400 , 500 .
Figure 3. 1D AC: Full/reduced order solutions profiles at t = 250 , 500 for k = 2 .
Figure 4. 1D AC: Full/reduced phase trajectories for k = 2 .
Figure 5. 1D AC: Energy plots by full/reduced order solutions for k = 2 .
Figure 6. 2D AC: Full/reduced solution profiles for k = 2 .
Figure 7. 2D AC: Energy plots by full/reduced order solutions for k = 2 .
Figure 8. 3D AC: Energy plots (left) and radii of the spherical zero isosurface (right) by full/reduced solutions for k = 2 .
Figure 9. 3D AC: Snapshots of the zero isosurface of full (top) and reduced (bottom) solutions for k = 2 .
Table 1. 1D AC: Relative absolute errors $\|u^* - \hat{u}^*\|_R$ for different values of $k$.
k 1 2 3 4 5 6
PCA 3.09e-01 1.28e-01 1.50e-02 1.55e-02 2.34e-03 2.41e-03
KPCA 2.56e-04 1.71e-05 1.88e-05 2.28e-04 2.71e-04 5.14e-04
Table 2. 1D AC: Relative absolute errors and CPU times for different values of $k$.
	$\|u^* - \hat{u}^*\|_R$	CPU Time (s)
k	With Iteration	Without Iteration	With Iteration	Without Iteration
1 2.56e-04 2.56e-04 3.7850 0.3690
2 1.71e-05 1.28e-04 3.9650 0.3800
3 1.88e-05 2.56e-04 4.2110 0.4115
4 2.28e-04 1.29e-04 4.0152 0.4025
5 2.71e-04 1.09e-03 4.6500 0.4780
6 5.14e-04 4.92e-03 4.7968 0.4975
Table 3. 2D AC: Relative absolute errors $\|u^* - \hat{u}^*\|_R$ for different values of $k$.
k 1 2 3 4 5 6
PCA 2.80e-01 8.22e-02 4.78e-02 2.13e-02 6.53e-03 3.54e-03
KPCA 2.50e-03 1.49e-03 3.28e-03 1.92e-03 2.45e-03 1.37e-03
Table 4. 2D AC: Relative absolute errors and CPU times for different values of $k$.
	$\|u^* - \hat{u}^*\|_R$	CPU Time (s)
k	With Iteration	Without Iteration	With Iteration	Without Iteration
1 2.50e-03 3.80e-03 12.2090 1.4310
2 1.49e-03 1.89e-03 11.7000 1.6470
3 3.28e-03 3.77e-03 12.2340 1.5070
4 1.92e-03 1.82e-03 12.6700 1.7160
5 2.45e-03 3.66e-03 13.2030 1.8110
6 1.37e-03 1.68e-03 13.5700 1.8230
Table 5. 3D AC: Relative absolute errors $\|u^* - \hat{u}^*\|_R$ for different values of $k$.
k 1 2 3 4 5 6
PCA 7.06e-02 8.55e-02 1.38e-02 1.16e-02 2.78e-03 4.85e-04
KPCA 4.57e-03 2.25e-03 4.83e-03 2.03e-03 6.68e-03 5.37e-03
Table 6. 3D AC: Relative absolute errors and CPU times for different values of $k$.
	$\|u^* - \hat{u}^*\|_R$	CPU Time (s)
k	With Iteration	Without Iteration	With Iteration	Without Iteration
1 4.57e-03 3.69e-03 12.2990 2.1780
2 2.25e-03 2.47e-03 13.4760 2.1510
3 4.83e-03 4.74e-03 14.1110 2.0270
4 2.03e-03 2.17e-03 14.1600 2.7510
5 6.68e-03 6.55e-03 15.2780 2.5950
6 5.37e-03 5.30e-03 15.1340 2.9640
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.