Geometric Neural Ordinary Differential Equations: From Manifolds to Lie Groups

A peer-reviewed article of this preprint also exists.

Submitted: 05 June 2025; Posted: 12 June 2025

Abstract
Neural ordinary differential equations (neural ODEs) are a well-established tool for optimizing the parameters of dynamical systems, with applications in image classification, optimal control, and physics learning. Although dynamical systems of interest often evolve on Lie groups and more general differentiable manifolds, theoretical results on neural ODEs are frequently phrased on $\mathbb{R}^n$. We collect recent results on neural ODEs on manifolds, and present a unifying derivation of various results that serves as a tutorial for extending existing methods to differentiable manifolds. We also extend the results to the recent class of neural ODEs on Lie groups, highlighting a non-trivial extension of manifold neural ODEs that exploits the Lie group structure.

1. Introduction

Ordinary differential equations (ODEs) are ubiquitous in the engineering sciences, from modeling and control of simple physical systems like pendulums and mass-spring-dampers, or more complicated robotic arms and drones, to the description of high-dimensional spatial discretizations of distributed systems, such as fluid flows, chemical reactions or quantum oscillators. Neural ordinary differential equations (neural ODEs) [1,2] are ODEs parameterized by neural networks. Given a state $x$, and parameters $\theta$ representing the weights and biases of a neural network, a neural ODE reads:
$\dot{x} = f_\theta(x,t), \qquad x(0) = x_0.$
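To make the construction concrete, the following minimal sketch shows one way to implement such a parameterized vector field and its flow in Python; the use of PyTorch with the torchdiffeq solver, the two-layer network, and all dimensions are illustrative assumptions rather than part of the formulation above.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # any ODE solver with an (f, x0, t) interface works


class NeuralODEField(nn.Module):
    """Parameterized vector field f_theta(x, t) on R^n (here n = 2)."""

    def __init__(self, n, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n + 1, hidden), nn.Tanh(), nn.Linear(hidden, n)
        )

    def forward(self, t, x):
        # Concatenate time to the state so the field may be non-autonomous.
        t_col = t.expand(x.shape[:-1] + (1,))
        return self.net(torch.cat([x, t_col], dim=-1))


f_theta = NeuralODEField(n=2)
x0 = torch.randn(2)                  # initial condition x(0) = x_0
ts = torch.linspace(0.0, 1.0, 11)    # integration times
x_traj = odeint(f_theta, x0, ts)     # flow of the neural ODE evaluated at ts
```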
First introduced by [1] as the continuum limit of recurrent neural networks, the number of applications of neural ordinary differential equations quickly exploded beyond simple classification tasks: learning highly nonlinear dynamics of multi-physical systems from sparse data [3,4,5], optimal control of nonlinear systems [6], medical imaging [7] and real-time handling of irregular time-series [8], to name but a few. Discontinuous state-transitions and dynamics [9,10], time-dependent parameters [11], augmented neural ODEs [12] and physics-preserving formulations [13,14] present further extensions that increase the expressivity of neural ODEs.
However, these methods are typically phrased for states $x\in\mathbb{R}^n$. For many physical systems of interest, such as robot arms, humanoid robots and drones, the state lives on differentiable manifolds and Lie groups [15,16]. More generally, the manifold hypothesis in machine learning raises the expectation that many high-dimensional data-sets evolve on intrinsically lower-dimensional, albeit more complicated manifolds [17]. Neural ODEs on manifolds [18,19] were a significant step toward addressing this gap, providing the first optimization methods for neural ODEs on manifolds. Yet, the general tools and approaches available on $\mathbb{R}^n$, such as including running costs, augmented states, time-dependent parameters, control inputs or discontinuous state-transitions, are rarely addressed in a manifold context. Similar issues persist in a Lie group context, where neural ODEs on Lie groups [20,21] were formalized.
Our goal is to extend more of the methods for neural ODEs on $\mathbb{R}^n$ to arbitrary manifolds, and in particular Lie groups. To this end, we present a systematic approach to the design of neural ODEs on manifolds and Lie groups. Specifically, our contributions are:
  • Systematic derivation of neural ODEs on manifolds and Lie groups, highlighting differences and equivalences of various approaches; for an overview, see also Table 1;
  • Summarizing the state of the art on manifold and Lie group neural ODEs, by formalizing the notion of extrinsic and intrinsic neural ODEs;
  • A tutorial-like introduction, to assist the reader in implementing various neural ODE methods on manifolds and Lie groups, presenting coordinate expressions alongside geometric notation.
The remainder of the article is organized as follows. A brief state-of-the-art on neural ODEs concludes this introduction. Section 2 provides background on differentiable manifolds, Lie groups, and the coordinate-free adjoint method. Section 3 describes neural ODEs on manifolds, and derives parameter updates via the adjoint method for various common architectures and cost-functions, including time-dependent parameters, augmented neural ODEs, running costs, and intermediate cost-terms. Section 4 describes neural ODEs on matrix Lie groups, explaining merits of treating Lie groups separately from general differentiable manifolds. Both Section 3 and Section 4 also classify methods into extrinsic and intrinsic approaches. We conclude with a discussion in Section 5, highlighting advantages, disadvantages, challenges and promises of the presented material. The Appendix includes background on Hamiltonian systems, which appear when transforming the adjoint method into a form that is unique to Lie groups.

1.1. Literature review

For a general introduction to neural ODEs, see [25]. Neural ODEs on $\mathbb{R}^n$ with fixed parameters were first introduced by [1], and parameter optimization via the adjoint method allowed for intermittent and final cost terms on each trajectory. The generalized adjoint method [2] also allows for running cost terms. Memory-efficient checkpointing is introduced in [26] to address stability issues of adjoint methods. Augmented neural ODEs [12] introduced augmented state-spaces to allow neural ODEs to express arbitrary diffeomorphisms. Time-varying parameters were introduced by [11], with similar benefits to augmented neural ODEs. Neural ODEs with discrete transitions were formulated in [9,10], with [9] also learning event-triggered transitions common in engineering applications. Neural controlled differential equations (CDEs) were introduced in [27] for handling irregular time-series, with parameter updates reapplying the adjoint method [1]. Neural stochastic differential equations (SDEs) were introduced in [28], relying on a stochastic variant of the adjoint method for the parameter update. The previously mentioned literature phrases the dynamics of neural ODEs on $\mathbb{R}^n$.
Neural ODEs on manifolds were first introduced by [22], including an adjoint method on manifolds for final cost terms and an application to continuous normalizing flows on Riemannian manifolds, but embedding the manifolds into $\mathbb{R}^N$. Neural ODEs on Riemannian manifolds are expressed in local exponential charts in [18], avoiding an embedding into $\mathbb{R}^N$, and considering final cost terms in the optimization. Charts for unknown, nontrivial latent manifolds, and dynamics in local charts, are learned from high-dimensional data in [29], including also discretized solutions to partial differential equations. Parameterized equivariant neural ODEs on manifolds are constructed in [23], also commenting on state augmentation to express arbitrary (equivariant) flows on manifolds.
Neural ODEs on Lie groups were first introduced in [30] on the Lie group $SE(3)$, to learn the port-Hamiltonian dynamics of a drone from experiment, expressing group elements in an embedding space $\mathbb{R}^{12}$. The approach was formalized to port-Hamiltonian systems on arbitrary matrix Lie groups in [20], embedding $m\times m$ matrices in $\mathbb{R}^{m^2}$.
Neural ODEs on $SE(3)$ were phrased in local exponential charts in [24] to optimize a controller for a rigid body, using a chart-based adjoint method in local exponential charts. An alternative, Lie algebra based adjoint method on general Lie groups was introduced in [21], foregoing Lie group specific numerical issues of applying the adjoint method in local charts.

1.2. Notation

For a complete introduction to differential geometry see, e.g., [31], and for Lie group theory see [32].
Calligraphic letters $\mathcal{M}, \mathcal{N}$ denote smooth manifolds. For conceptual clarity, the reader may think of these manifolds as embedded in a high-dimensional $\mathbb{R}^N$, e.g., $\mathcal{M}\subset\mathbb{R}^N$. The set $C^\infty(\mathcal{M},\mathcal{N})$ contains smooth functions between $\mathcal{M}$ and $\mathcal{N}$, and we define $C^\infty(\mathcal{M}) := C^\infty(\mathcal{M},\mathbb{R})$.
The tangent space at $x\in\mathcal{M}$ is $T_x\mathcal{M}$ and the cotangent space is $T_x^*\mathcal{M}$. The tangent bundle of $\mathcal{M}$ is $T\mathcal{M}$, and the cotangent bundle of $\mathcal{M}$ is $T^*\mathcal{M}$. Then $\mathfrak{X}(\mathcal{M})$ denotes the set of vector fields over $\mathcal{M}$, and $\Omega^k(\mathcal{M})$ denotes the set of $k$-forms, where $\Omega^1(\mathcal{M})$ are co-vector fields, and $\Omega^0(\mathcal{M}) = C^\infty(\mathcal{M})$ are smooth functions $V:\mathcal{M}\to\mathbb{R}$. The exterior derivative is denoted as $\mathrm{d}:\Omega^k(\mathcal{M})\to\Omega^{k+1}(\mathcal{M})$. For functions $V\in C^\infty(\mathcal{M}\times\mathcal{N},\mathbb{R})$, with $x\in\mathcal{M}$, $y\in\mathcal{N}$, we denote by $\mathrm{d}_x V(y)\in T_x^*\mathcal{M}$ the partial differential at $x\in\mathcal{M}$. Curves $x:\mathbb{R}\to\mathcal{M}$ are denoted as $x(t)$, and their tangent vectors are denoted as $\dot{x}\in T_{x(t)}\mathcal{M}$.
A Lie group is denoted by $G$, its elements by $g, h$. The group identity is $e\in G$, and $I$ denotes the identity matrix. The Lie algebra of $G$ is $\mathfrak{g}$, and its dual is $\mathfrak{g}^*$. Letters $\tilde{A}, \tilde{B}$ denote vectors in the Lie algebra, while letters $A, B$ denote vectors in $\mathbb{R}^n$.
In coordinate expressions, lower indices are covariant and upper indices are contravariant components of tensors. For example, for a $(0,2)$-tensor $M$, the components $M_{ij}$ are covariant, and for non-degenerate $M$, the components of its inverse $M^{-1}$ are $M^{ij}$, which are contravariant. We use the Einstein summation convention $a_i b^i := \sum_i a_i b^i$, i.e., the product of variables with repeated lower and upper indices implies a sum.
Denoting by $W$ a topological space, by $D$ the Borel $\sigma$-algebra and by $P: D\to[0,1]$ a probability measure, the tuple $(W, D, P)$ denotes a probability space. Given a vector space $L$ and a random variable $C: W\to L$, the expectation of $C$ w.r.t. $P$ is $\mathbb{E}_{w\sim P}(C) := \int_W C(w)\,\mathrm{d}P(w)$.

2. Background

2.1. Smooth Manifolds

Given an $n$-dimensional manifold $\mathcal{M}$, with $U\subseteq\mathcal{M}$ an open set and $\psi: U\to\mathbb{R}^n$ a homeomorphism, we call $(U,\psi)$ a chart and we denote the coordinates of $x\in U$ as
$(q^1,\dots,q^n) := \psi(x), \qquad x\in U\subseteq\mathcal{M}.$
Smooth manifolds admit charts $(U_1,\psi_1)$ and $(U_2,\psi_2)$ with smooth transition maps $\psi_{21} = \psi_2\circ\psi_1^{-1}$ defined on the intersection $U_1\cap U_2$, and a collection $\mathcal{A}$ of charts $(U,\psi)$ with smooth transition maps is called a (smooth) atlas. A vector field $f\in\mathfrak{X}(\mathcal{M})$ associates to any point $x\in\mathcal{M}$ a vector in $T_x\mathcal{M}$. It defines a dynamic system
$\dot{x} = f(x); \qquad x(0) = x_0,$
and we denote the solution of (3) by the flow operator
$\Psi_f^t:\mathcal{M}\to\mathcal{M}; \qquad \Psi_f^t(x_0) := x(t).$
For a real valued function V C ( M ) , its differential is the covector field
$\mathrm{d}V\in\Omega^1(\mathcal{M}); \qquad \mathrm{d}V = [\mathrm{d}V]_i\,\mathrm{d}q^i.$
Let additionally $\mathcal{N}$ be a smooth manifold and $\varphi:\mathcal{N}\to\mathcal{M}$ a smooth map, with $(U,\psi)$ and $(\bar{U},\bar{\psi})$ appropriate charts of $\mathcal{M}$ and $\mathcal{N}$, respectively. Then the pullback of $\mathrm{d}V$ via $\varphi$ is
$\varphi^*\mathrm{d}V\in\Omega^1(\mathcal{N}); \qquad \varphi^*\mathrm{d}V := \mathrm{d}(V\circ\varphi) = \Big[\tfrac{\partial\varphi^j}{\partial\bar{q}^i}\Big][\mathrm{d}V]_j\,\mathrm{d}\bar{q}^i.$
With a Riemannian metric $M$ (i.e., a symmetric, non-degenerate $(0,2)$ tensor field) on $\mathcal{M}$, the gradient of $V$ is a uniquely defined vector field $\nabla V\in\mathfrak{X}(\mathcal{M})$ given by
$\nabla V := M^{-1}\mathrm{d}V = M^{ij}[\mathrm{d}V]_j\,\partial_i.$
When $\mathcal{M}=\mathbb{R}^n$, we assume that $M$ is the Euclidean metric, and pick coordinates such that the components of the gradient and the differential are the same. Finally, we define the Lie derivative of 1-forms, which differentiates $\omega\in\Omega^1(\mathcal{M})$ along a vector field $f\in\mathfrak{X}(\mathcal{M})$ and returns $\mathcal{L}_f\omega\in\Omega^1(\mathcal{M})$:
$\mathcal{L}_f\omega := \frac{\partial}{\partial t}\big(\Psi_f^{t\,*}\omega\big)\Big|_{t=0} = \omega_j\,(\partial_i f^j)\,\mathrm{d}q^i + (\partial_j\omega_i)\,f^j\,\mathrm{d}q^i.$

2.2. Lie groups

Lie groups are smooth manifolds with a compatible group structure. We consider real matrix Lie groups $G\subseteq GL(m,\mathbb{R})$, i.e., subgroups of the general linear group
$GL(m,\mathbb{R}) := \{g\in\mathbb{R}^{m\times m} \mid \det(g)\neq 0\}.$
For $g,h\in G$ the left and right translations by $h$ are, respectively, the matrix multiplications
$L_h(g) := hg,$
$R_h(g) := gh.$
The Lie algebra of $G$ is the vector space $\mathfrak{g}\subseteq\mathfrak{gl}(m,\mathbb{R})$, with $\mathfrak{gl}(m,\mathbb{R}) = \mathbb{R}^{m\times m}$ the Lie algebra of $GL(m,\mathbb{R})$.
Define a basis $E := \{\tilde{E}_1,\dots,\tilde{E}_n\}$ of $\mathfrak{g}$ with $\tilde{E}_i\in\mathbb{R}^{m\times m}$, and define the (invertible, linear) map $\Lambda:\mathbb{R}^n\to\mathfrak{g}$ as¹
$\Lambda:\mathbb{R}^n\to\mathfrak{g}; \qquad (A^1,\dots,A^n)\mapsto \sum_i A^i\,\tilde{E}_i.$
The dual of $\mathfrak{g}$ is denoted $\mathfrak{g}^*$, and given the map $\Lambda$ we call $\Lambda^*:\mathfrak{g}^*\to\mathbb{R}^n$ its dual. For $\tilde{A},\tilde{B}\in\mathfrak{g}$ the small adjoint $\mathrm{ad}_{\tilde{A}}(\tilde{B})$ is a bilinear map, and the large Adjoint $\mathrm{Ad}_g(\tilde{A})$ is a linear map:
$\mathrm{ad}:\mathfrak{g}\times\mathfrak{g}\to\mathfrak{g}; \qquad \mathrm{ad}_{\tilde{A}}(\tilde{B}) = \tilde{A}\tilde{B} - \tilde{B}\tilde{A},$
$\mathrm{Ad}: G\times\mathfrak{g}\to\mathfrak{g}; \qquad \mathrm{Ad}_g(\tilde{A}) = g\tilde{A}g^{-1}.$
In the remainder of the article, we exclusively use the adjoint representation $\mathrm{ad}_A:\mathbb{R}^n\to\mathbb{R}^n$, written without a tilde in the subscript $A$, and the Adjoint representation $\mathrm{Ad}_g:\mathbb{R}^n\to\mathbb{R}^n$, which are obtained as
$\mathrm{ad}_A := \Lambda^{-1}\circ\mathrm{ad}_{\Lambda(A)}\circ\Lambda(\cdot),$
$\mathrm{Ad}_g := \Lambda^{-1}\circ\mathrm{Ad}_g\circ\Lambda(\cdot).$
The exponential map $\exp:\mathfrak{g}\to G$ is a local diffeomorphism given by the matrix exponential [32] (Chapter 3.7):
$\exp(\tilde{A}) := \sum_{n=0}^{\infty}\frac{1}{n!}\tilde{A}^n.$
Its inverse $\log: U_{\log}\to\mathfrak{g}$ is given by the matrix logarithm, and it is well-defined on a subset $U_{\log}\subseteq G$ [32] (Chapter 2.3):
$\log(g) = \sum_{n=1}^{\infty}(-1)^{n+1}\frac{(g-I)^n}{n}.$
Often, the infinite sums in (17) and (18) can further be reduced to finite sums in $m$ terms, by use of the Cayley-Hamilton theorem [34]. A chart $(U_h,\psi_h)$ on $G$ that assigns zero coordinates to $h\in G$ can be defined using (18) and (12):
$U_h = \{hg \mid g\in U_{\log}\},$
$\psi_h: U_h\to\mathbb{R}^n; \qquad g\mapsto \Lambda^{-1}\circ\log(h^{-1}g),$
$\psi_h^{-1}:\mathbb{R}^n\to G; \qquad q\mapsto h\,\exp\circ\Lambda(q).$
The chart $(U_h,\psi_h)$ is called an exponential chart, and a collection $\mathcal{A}$ of exponential charts $(U_h,\psi_h)$ that cover the manifold is called an exponential atlas.
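As a concrete illustration, the following sketch implements the maps $\Lambda$, $\exp$, $\log$ and the exponential chart $(U_h,\psi_h)$ for the rotation group $G = SO(3)$; the choice of basis for $\mathfrak{so}(3)$ and the use of SciPy's matrix exponential and logarithm are assumptions made for this example.

```python
import numpy as np
from scipy.linalg import expm, logm  # matrix exponential and logarithm

# Basis E_1, E_2, E_3 of so(3) (skew-symmetric matrices); a conventional choice.
E = np.array([
    [[0, 0, 0], [0, 0, -1], [0, 1, 0]],
    [[0, 0, 1], [0, 0, 0], [-1, 0, 0]],
    [[0, -1, 0], [1, 0, 0], [0, 0, 0]],
], dtype=float)


def Lambda(A):
    """Lambda: R^3 -> so(3), A^i E_i (the 'hat' map)."""
    return np.einsum('i,ijk->jk', A, E)


def Lambda_inv(A_tilde):
    """Inverse of Lambda (the 'vee' map) for so(3)."""
    return np.array([A_tilde[2, 1], A_tilde[0, 2], A_tilde[1, 0]])


def chart(h, g):
    """Exponential chart psi_h(g) = Lambda^{-1}(log(h^{-1} g)) in R^3."""
    return Lambda_inv(np.real(logm(h.T @ g)))  # h^{-1} = h^T on SO(3)


def chart_inv(h, q):
    """Inverse chart: q -> h exp(Lambda(q))."""
    return h @ expm(Lambda(q))


h = np.eye(3)
q = np.array([0.1, -0.2, 0.3])
g = chart_inv(h, q)
assert np.allclose(chart(h, g), q)  # round-trip through the chart
```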
The differential of a function $V\in C^\infty(G,\mathbb{R})$ is the co-vector field $\mathrm{d}V\in\Omega^1(G)$ (see also Equation (5)). For any given $g\in G$ we further transform the co-vector $\mathrm{d}V(g)\in T_g^*G$ to a left-trivialized differential, which collects the components of the gradient expressed in $\mathfrak{g}^*$:
$\mathrm{d}_g^L V := \Lambda^* L_g^*\,\mathrm{d}V(g) = \frac{\partial}{\partial\delta}\,V\big(g(I+\Lambda(\delta))\big)\Big|_{\delta=0}\in\mathbb{R}^n.$
For a derivation of this coordinate expression, see [21] (Sec. 3).

2.3. Gradient over a flow

We will be interested in computing the gradient of functions with respect to the initial state of a flow. The adjoint sensitivity equations are a set of differential equations that achieve this. In the following, we show a derivation of the adjoint sensitivity on manifolds [21]. Given a function $C:\mathcal{M}\to\mathbb{R}$, a vector field $f\in\mathfrak{X}(\mathcal{M})$, the associated flow $\Psi_f^t:\mathcal{M}\to\mathcal{M}$, and a final time $T\in\mathbb{R}$, the goal of the adjoint sensitivity method on manifolds is to compute the gradient
$\mathrm{d}\big(C\circ\Psi_f^T\big)(x_0).$
In the adjoint method we define a co-state $\lambda(t) = \mathrm{d}\big(C\circ\Psi_f^{T-t}\big)\big|_{x(t)}\in T^*_{x(t)}\mathcal{M}$, which represents the differential of $C\big(x(T)\big)$ with respect to $x(t)$. The adjoint sensitivity method describes its dynamics, which are integrated backwards in time from the known final condition $\lambda(T) = \mathrm{d}C\big|_{x(T)}$; see also Figure 1. The adjoint sensitivity method is stated in Theorem 1.
Theorem 1 
(Adjoint sensitivity on manifolds). The gradient of a function $C\circ\Psi_f^T$ is
$\mathrm{d}\big(C\circ\Psi_f^T\big)(x_0) = \lambda(0),$
where $\lambda(t)\in T^*_{x(t)}\mathcal{M}$ is the co-state. In a local chart $(U,\psi)$ of $\mathcal{M}$ with induced coordinates on $T^*U$, $x(t)$ and $\lambda(t)$ satisfy the dynamics
$\dot{q}^j = f^j(q), \qquad q(0) = \psi(x_0),$
$\dot{\lambda}_i = -\lambda_j\,\partial_i f^j(q), \qquad \lambda_i(T) = [\mathrm{d}C]_i\big|_{x(T)}.$
Proof. 
Define the co-state $\lambda(t)\in T^*_{x(t)}\mathcal{M}$ as
$\lambda(t) := \big(\Psi_f^{T-t}\big)^*\,\mathrm{d}C\big|_{x(T)}.$
Then Equation (23) is recovered by application of Equation (6):
$\lambda(0) = \big(\Psi_f^T\big)^*\,\mathrm{d}C\big|_{x(T)} = \mathrm{d}\big(C\circ\Psi_f^T\big)(x_0).$
A derivation of the dynamics governing λ ( t ) constitutes the remainder of this proof. By definition of λ ( t ) and the Lie derivative (8), we have that L f λ ( t ) = 0 :
$\mathcal{L}_f\lambda(t) = \frac{\mathrm{d}}{\mathrm{d}s}\big(\Psi_f^s\big)^*\lambda(t+s)\Big|_{s=0} = \frac{\mathrm{d}}{\mathrm{d}s}\lambda(t) = 0.$
If we further treat λ as a 1-form λ Ω 1 ( M ) (denoted as λ , by an abuse of notation), we obtain:
$\mathcal{L}_f\lambda = \lambda_j\,(\partial_i f^j)\,\mathrm{d}q^i + (\partial_j\lambda_i)\,f^j\,\mathrm{d}q^i = 0.$
The components satisfy the partial differential equation
$\lambda_j\,\partial_i f^j + f^j\,\partial_j\lambda_i = 0.$
Impose that $\lambda(t) = \lambda\circ\Psi_f^t(x_0)$ (this defines the 1-form $\lambda$ along $x(t)$); then
$\dot{\lambda}_i = (\partial_j\lambda_i)\,\dot{q}^j = (\partial_j\lambda_i)\,f^j.$
Combining Equations (29) and (30) leads to the co-state dynamics of Theorem 1:
$\dot{\lambda}_i = -\lambda_j\,\partial_i f^j.$
Expanding the final condition $\lambda(T) = \mathrm{d}C\big|_{x(T)}$ in local coordinates (see Equation (5)) gives
$\lambda(T) = [\mathrm{d}C]_i\big|_{x(T)}\,\mathrm{d}q^i = \lambda_i(T)\,\mathrm{d}q^i \;\Longrightarrow\; \lambda_i(T) = [\mathrm{d}C]_i\big|_{x(T)}. \qquad\square$
A fact that will become useful in Section 4 is that equations (24) and (25) have a Hamiltonian form. Define the control Hamiltonian $H_c: T^*\mathcal{M}\to\mathbb{R}$ as
$H_c(x,\lambda) = \lambda\,f(x,t) = \lambda_i\,f^i(q,t).$
Then Equation (24) and Equation (25), respectively, of Theorem 1 follow as the Hamiltonian equations on $T^*\mathcal{M}$:
$\dot{q}^j = \frac{\partial H_c}{\partial\lambda_j} = f^j(q,t),$
$\dot{\lambda}_i = -\frac{\partial H_c}{\partial q^i} = -\lambda_j\,\partial_i f^j(q,t).$
For background on Hamilton’s equations, see also Appendix A.1.
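As a numerical illustration of Theorem 1 in the special case of a single global chart ($\mathcal{M}=\mathbb{R}^2$), the following sketch integrates the state forward, the co-state backward, and compares $\lambda(0)$ against a finite-difference estimate; the vector field and cost used here are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative vector field f and final cost C on R^2 (single global chart).
def f(x):
    return np.array([x[1], -np.sin(x[0])])

def df_dx(x):
    """Jacobian with entries [j, i] = d f^j / d x^i."""
    return np.array([[0.0, 1.0], [-np.cos(x[0]), 0.0]])

def C(x):
    return 0.5 * float(x @ x)

def dC(x):
    return x

T, x0 = 2.0, np.array([0.5, 0.0])

# Forward pass: integrate the state x(t) and keep a dense interpolant.
fwd = solve_ivp(lambda t, x: f(x), (0.0, T), x0, dense_output=True, rtol=1e-9, atol=1e-9)

# Backward pass: lambda_dot_i = -lambda_j d_i f^j, from lambda(T) = dC|_{x(T)}.
def costate_rhs(t, lam):
    return -df_dx(fwd.sol(t)).T @ lam

bwd = solve_ivp(costate_rhs, (T, 0.0), dC(fwd.sol(T)), rtol=1e-9, atol=1e-9)
grad = bwd.y[:, -1]   # lambda(0), the differential of C(x(T)) w.r.t. x(0)

# Finite-difference check of the same gradient.
eps, fd = 1e-6, np.zeros(2)
for i in range(2):
    xp = x0.copy()
    xp[i] += eps
    x_T = solve_ivp(lambda t, x: f(x), (0.0, T), xp, rtol=1e-9, atol=1e-9).y[:, -1]
    fd[i] = (C(x_T) - C(fwd.y[:, -1])) / eps
print(grad, fd)       # the two estimates should agree to roughly 1e-5
```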

3. Neural ODEs on Manifolds

A neural ODE on a manifold is an NN-parameterized vector field in $\mathfrak{X}(\mathcal{M})$, or, including time dependence, an NN-parameterized vector field in $\mathfrak{X}(\mathcal{M}\times\mathbb{R})$, with $t$ in the $\mathbb{R}$ slot and $\dot{t}=1$. Given parameters $\theta\in\mathbb{R}^{n_\theta}$, we denote this parameterized vector field as $f_\theta(x,t) := f(x,t,\theta)$. This results in the dynamic system
$\dot{x} = f_\theta(x,t), \qquad x(0) = x_0.$
The key idea of neural ODEs is to tackle various flow approximation tasks by optimizing the parameters with respect to a to-be-specified optimization problem. Denote a finite time horizon $T$ and intermittent times $T_1, T_2, \dots < T$. Denote a general trajectory cost by
$C_{f_\theta}^T(x_0,\theta) = F\big(\theta,\Psi_{f_\theta}^{T_1}(x_0),\Psi_{f_\theta}^{T_2}(x_0),\dots,\Psi_{f_\theta}^{T}(x_0)\big) + \int_0^T r\big(\Psi_{f_\theta}^s(x_0),s\big)\,\mathrm{d}s,$
with intermittent and final cost term $F$, and running cost $r$. Given a probability space $(\mathcal{M},D,P)$, we define the total cost as
$J(\theta) := \mathbb{E}_{x_0\sim P}\,C_{f_\theta}^T(x_0,\theta).$
The minimization problem takes the form
$\min_\theta J(\theta).$
Note that (39) is not subject to any dynamic constraint: the flow already appears explicitly in the cost $C_{f_\theta}^T$.
Normally, the optimization problem is solved by means of a stochastic gradient descent algorithm [35]. In this, a batch of $N$ initial conditions $x_i$ is sampled from the probability distribution corresponding to the probability measure $P$. Writing $C_i = C_{f_\theta}^T(x_i,\theta)$, the parameter gradient $\nabla_\theta J(\theta)$ is approximated as
$\nabla_\theta J(\theta) = \mathbb{E}_{x_0\sim P}\big[\nabla_\theta C_{f_\theta}^T(x_0)\big] \approx \frac{1}{N}\sum_{i=1}^{N}\nabla_\theta C_i.$
In this section, we show how to optimize the parameters $\theta$ for various choices of neural ODEs and cost functions, with (37) the most general case of a cost, and highlight similarities in the various derivations. In the following, the gradient $\nabla_\theta C_i$ is computed via the adjoint method on manifolds, for various scenarios. The advantage of the adjoint method over, e.g., automatic differentiation of $C_i$ or back-propagation through an ODE solver is that it has a constant memory cost with respect to the network depth $T$.
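A minimal sketch of one such stochastic gradient step is shown below; the helper functions grad_C (returning $\nabla_\theta C_i$, e.g., computed with the adjoint method) and sample_x0 (drawing an initial condition from $P$) are hypothetical placeholders introduced only for this example.

```python
import numpy as np


def sgd_step(theta, grad_C, sample_x0, batch_size=32, lr=1e-3):
    """One stochastic gradient step on J(theta) = E_{x0 ~ P}[C(x0, theta)].

    grad_C(theta, x0) is assumed to return the per-trajectory gradient
    grad_theta C, and sample_x0() draws an initial condition from P.
    """
    grad = np.zeros_like(theta)
    for _ in range(batch_size):
        grad += grad_C(theta, sample_x0())
    grad /= batch_size          # Monte-Carlo estimate of grad_theta J, Eq. (40)
    return theta - lr * grad    # plain SGD update; Adam etc. work analogously
```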

3.1. Constant parameters, running and final cost

Here we consider neural ODEs of the form (36), with constant parameters θ , and cost functions of the form
$C_{f_\theta}^T(x_0,\theta) = F\big(\Psi_{f_\theta}^T(x_0),\theta\big) + \int_0^T r\big(\Psi_{f_\theta}^s(x_0),\theta,s\big)\,\mathrm{d}s,$
with a final cost term $F$ and a running cost term $r$. This generalizes [1,2] to manifolds. Compared to existing manifold methods for neural ODEs [18,19], the running cost is new.
The components of the parameter gradient $\nabla_\theta C_{f_\theta}^T(x_0,t_0)$, with $\theta\in\mathbb{R}^{n_\theta}$, are then computed by Theorem 2 (see also [21]):
Theorem 2 
(Generalized Adjoint Method on Manifolds). Given the dynamics (36) and the cost (41), the components of the parameter gradient $\nabla_\theta C_{f_\theta}^T(x_0,t_0)$, with $\theta\in\mathbb{R}^{n_\theta}$, are computed by
$\nabla_\theta C_{f_\theta}^T(x_0,t_0) = \nabla_\theta F\big(x(T),\theta\big) + \int_0^T \nabla_\theta\Big[\lambda_j\,f_\theta^j\big(q(s)\big) + r\big(q(s),\theta,s\big)\Big]\,\mathrm{d}s,$
where the state $x(s)\in\mathcal{M}$ and co-state $\lambda(s)\in T^*_{x(s)}\mathcal{M}$ satisfy, in a local chart $(U,\psi)$ with $q(t) = \psi(x(t))$, $\lambda(t) = \lambda_i(t)\,\mathrm{d}q^i$:
$\dot{q}^j = f_\theta^j(q,t), \qquad q(0) = \psi(x_0), \quad t(0) = t_0,$
$\dot{\lambda}_i = -\lambda_j\,\partial_i f_\theta^j(q,t) - \partial_i r, \qquad \lambda_i(T) = [\mathrm{d}F]_i\big|_{x(T),\theta}.$
Proof. 
Define the augmented state space as $\mathcal{M}_{\mathrm{aug}} = \mathcal{M}\times\mathbb{R}^{n_\theta}\times\mathbb{R}\times\mathbb{R}$, to include the original state $x\in\mathcal{M}$, parameters $\theta\in\mathbb{R}^{n_\theta}$, accumulated running cost $L\in\mathbb{R}$ and time $t\in\mathbb{R}$ in the augmented state $x_{\mathrm{aug}} := (x,\theta,L,t)\in\mathcal{M}_{\mathrm{aug}}$. In addition, define the augmented dynamics $f_{\mathrm{aug}}\in\mathfrak{X}(\mathcal{M}_{\mathrm{aug}})$ as
$\dot{x}_{\mathrm{aug}} = f_{\mathrm{aug}}(x_{\mathrm{aug}}) = \begin{pmatrix} f_\theta(x,t) \\ 0 \\ r(x,\theta,t) \\ 1 \end{pmatrix}, \qquad x_{\mathrm{aug}}(0) = x_{\mathrm{aug},0} := \begin{pmatrix} x_0 \\ \theta \\ 0 \\ t_0 \end{pmatrix}.$
This is an autonomous system with final state $x_{\mathrm{aug}}(T) = \big(x(T),\,\theta,\,\int_0^T r(x,\theta,s)\,\mathrm{d}s,\,T\big)$. Next, define the cost $C_{\mathrm{aug}}:\mathcal{M}_{\mathrm{aug}}\to\mathbb{R}$ on the augmented space:
$C_{\mathrm{aug}}(x_{\mathrm{aug}}) = F(x,\theta) + L.$
Then Equation (41) can be rewritten as the evaluation of a terminal cost $C_{\mathrm{aug}}\big(x_{\mathrm{aug}}(T)\big)$:
$C_{f_\theta}^T(x_0) = \big(C_{\mathrm{aug}}\circ\Psi_{f_{\mathrm{aug}}}^T\big)(x_{\mathrm{aug},0}).$
By Theorem 1, the gradient of $C_{\mathrm{aug}}\circ\Psi_{f_{\mathrm{aug}}}^T$ is given by
$\mathrm{d}\big(C_{\mathrm{aug}}\circ\Psi_{f_{\mathrm{aug}}}^T\big)(x_{\mathrm{aug},0}) = \lambda(0),$
and by Equation (25), the components of $\lambda(s)$ satisfy
$\dot{\lambda}_i = -\lambda_j\,\partial_i f_{\mathrm{aug}}^j, \qquad \lambda_i(T) = [\mathrm{d}C_{\mathrm{aug}}]_i\big|_{x_{\mathrm{aug}}(T)}.$
Split the co-state into $\lambda_x,\lambda_\theta,\lambda_L,\lambda_t$; then their components' dynamics are:
$\dot{\lambda}_{x,i} = -\partial_i\big[\lambda_{x,j}\,f_\theta^j(q,t) + \lambda_L\,r(q,\theta,t)\big], \qquad \lambda_x(T) = \mathrm{d}F\big|_{(x(T),\theta)},$
$\dot{\lambda}_{\theta,i} = -\partial_{\theta^i}\big[\lambda_{x,j}\,f_\theta^j(q,t) + \lambda_L\,r(q,\theta,t)\big], \qquad \lambda_\theta(T) = \nabla_\theta F\big(x(T),\theta\big),$
$\dot{\lambda}_L = 0, \qquad \lambda_L(T) = \partial_L C_{\mathrm{aug}}\big(x_{\mathrm{aug}}(T)\big) = 1,$
$\dot{\lambda}_t = -\partial_t\big[\lambda_{x,j}\,f_\theta^j(q,t) + \lambda_L\,r(q,\theta,t)\big], \qquad \lambda_t(T) = \partial_t C_{\mathrm{aug}}\big(x_{\mathrm{aug}}(T)\big) = 0.$
The component $\lambda_L = 1$ is constant, so Equation (50) coincides with (44). Integrating (51) from $s=0$ to $s=T$ recovers Equation (42). $\lambda_t$ does not appear in any of the other equations, such that Equation (53) may be ignored. □
In summary, the above proof depended on identifying a suitable augmented manifold $\mathcal{M}_{\mathrm{aug}}$, with the goal that the augmented dynamics $f_{\mathrm{aug}}\in\mathfrak{X}(\mathcal{M}_{\mathrm{aug}})$ are autonomous, and that the cost function $C_{\mathrm{aug}}:\mathcal{M}_{\mathrm{aug}}\to\mathbb{R}$ on the augmented manifold rephrases the cost (41) as a final cost $C_{\mathrm{aug}}\big(x_{\mathrm{aug}}(T)\big)$, which allows us to apply Theorem 1 and express any gradients of the original cost. In later sections (Section 3.2), this process will be the main technical tool for generalizations of Theorem 2. The next sections describe common special cases of (36) and Theorem 2.
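The following sketch illustrates the augmentation used in the proof on a single chart, stacking $(x,\theta,L,t)$ into one state vector and integrating the autonomous augmented dynamics forward; the chosen dynamics, running cost, and dimensions are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

n, n_theta = 2, 3   # illustrative state and parameter dimensions


def f_theta(x, t, theta):
    # an assumed parameterized vector field on R^2
    return np.array([x[1], -theta[0] * np.sin(x[0]) - theta[1] * x[1] + theta[2]])


def r(x, theta, t):
    # an assumed running cost
    return 0.5 * float(x @ x)


def f_aug(s, x_aug):
    """Autonomous augmented dynamics: d/dt of (x, theta, L, t)."""
    x, theta, t = x_aug[:n], x_aug[n:n + n_theta], x_aug[-1]
    return np.concatenate([f_theta(x, t, theta),   # x_dot = f_theta(x, t)
                           np.zeros(n_theta),       # theta_dot = 0
                           [r(x, theta, t)],        # L_dot = r(x, theta, t)
                           [1.0]])                  # t_dot = 1


x0, theta0, T = np.array([0.5, 0.0]), np.array([1.0, 0.1, 0.0]), 2.0
x_aug0 = np.concatenate([x0, theta0, [0.0], [0.0]])
sol = solve_ivp(f_aug, (0.0, T), x_aug0, rtol=1e-8)
x_T, running_cost = sol.y[:n, -1], sol.y[-2, -1]   # x(T) and the accumulated integral of r
```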

3.1.1. Vanilla neural ODEs and extrinsic neural ODEs on manifolds

The case of neural ODEs on $\mathbb{R}^n$ (e.g., [1,2]) is obtained by setting $\mathcal{M}=\mathbb{R}^n$. In this case, one global chart can be used to represent all quantities.
This overlaps with extrinsic neural ODEs on manifolds (described, for instance, in [22]), which optimize the neural ODE on an embedding space $\mathbb{R}^N$. We denote this embedding as $\iota:\mathcal{M}\to\mathbb{R}^N$, and let $x\in\mathcal{M}$ and $y\in\mathbb{R}^N$. Optimizing the neural ODE on $\mathbb{R}^N$ requires extending the dynamics $f_\theta(x,t)\in\mathfrak{X}(\mathcal{M})$ to a vector field $f_\theta(y,t)\in\mathfrak{X}(\mathbb{R}^N)$, such that
$\iota_*\,f_\theta(x,t) = f_\theta\big(\iota(x),t\big).$
The dynamics $f_\theta(y,t)$ are then used in Theorem 2, and the co-state also lives in $T^*\mathbb{R}^N$.
As shown in [22], the resulting parameter gradients are equivalent to those resulting from an application in local charts, as long as it can be guaranteed that the integral curves of $f(y,t)\in\mathfrak{X}(\mathbb{R}^N)$ remain within $\iota(\mathcal{M})\subset\mathbb{R}^N$, i.e., the integration is geometry preserving.
A strong upside of an extrinsic formulation is that existing neural ODE packages (e.g., [36]) can be applied directly. Possible downsides of extrinsic neural ODEs are that finding $f(y,t)\in\mathfrak{X}(\mathbb{R}^N)$ may not be immediate, that a geometry-preserving integration has to be guaranteed separately, and that $N$ is larger than the intrinsic dimension $n=\dim\mathcal{M}$, leading to computational overhead.
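As an illustration of the extrinsic approach, the sketch below builds a parameterized ambient vector field on $\mathbb{R}^3$ whose values are projected onto the tangent spaces of the embedded sphere $S^2$, so that exact integral curves remain on $\iota(\mathcal{M})$; the network architecture and the projection-based construction of the lift are assumptions made for this example.

```python
import torch
import torch.nn as nn


class ExtrinsicSphereField(nn.Module):
    """Ambient field on R^3 whose values are projected onto the tangent space of S^2."""

    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, hidden), nn.Tanh(), nn.Linear(hidden, 3))

    def forward(self, t, y):
        v = self.net(y)                               # unconstrained ambient value
        # Tangential projection: remove the radial component, so that
        # d/dt |y|^2 = 2 <y, f(y)> = 0 along exact solutions.
        y_hat = y / y.norm(dim=-1, keepdim=True)
        return v - (v * y_hat).sum(-1, keepdim=True) * y_hat
```

In practice, numerical integration still drifts off the embedded manifold, so such a lift is typically combined with a geometry-preserving integrator or an occasional re-projection step, as discussed above.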

3.1.2. Intrinsic neural ODEs on manifolds

The intrinsic case of neural ODEs on manifolds [18] is described by integrating the dynamics in local charts. Given a chart transition from a chart $(U_1,\psi_1)$ to a chart $(U_2,\psi_2)$, the chart transitions of the state and co-state components ($q_1^i, \lambda_{1,i}$ and $q_2^i, \lambda_{2,i}$, respectively) are given by
$q_2^i = \psi_2^i\circ\psi_1^{-1}(q_1),$
$\lambda_{2,i} = A_i^j\,\lambda_{1,j},$
with $A_i^j = \partial_i\big(\psi_1^j\circ\psi_2^{-1}\big)$. The advantage of intrinsic neural ODEs on manifolds over extrinsic neural ODEs on manifolds is that the dimension of the resulting equations is as low as possible for the given manifold. A disadvantage lies in having to determine charts and chart-switching procedures. In available state-of-the-art packages for neural ODEs, these are phrased as discontinuous dynamics with state transitions $\psi_2\circ\psi_1^{-1}:\mathbb{R}^n\to\mathbb{R}^n$. For details on chart-switching methods see [18,21,29].
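A chart switch of this kind can be implemented generically once the transition maps are available; the sketch below transforms state and co-state components according to Equations (55) and (56), using automatic differentiation for the Jacobian $A_i^j$. The transition maps t12 and t21 are assumed to be given as differentiable functions.

```python
import torch


def switch_chart(q1, lam1, t12, t21):
    """Transform state/co-state components from chart (U1, psi1) to (U2, psi2).

    t12 = psi_2 o psi_1^{-1} and t21 = psi_1 o psi_2^{-1} are the transition
    maps, assumed given as differentiable functions R^n -> R^n (torch tensors).
    """
    q2 = t12(q1)                                          # state transition, Eq. (55)
    # Covector components transform with the Jacobian of the inverse transition:
    # lam_{2,i} = (d q_1^j / d q_2^i) lam_{1,j}, cf. Eq. (56).
    J = torch.autograd.functional.jacobian(t21, q2)       # J[j, i] = d q_1^j / d q_2^i
    lam2 = J.T @ lam1
    return q2, lam2
```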

3.2. Extensions

The proof of Theorem 2 depended on identifying a suitable augmented manifold $\mathcal{M}_{\mathrm{aug}}$, autonomous augmented dynamics $f_{\mathrm{aug}}\in\mathfrak{X}(\mathcal{M}_{\mathrm{aug}})$ and an augmented cost function $C_{\mathrm{aug}}:\mathcal{M}_{\mathrm{aug}}\to\mathbb{R}$ that rephrases the cost (41) as a final cost $C_{\mathrm{aug}}\big(x_{\mathrm{aug}}(T)\big)$, to apply Theorem 1. This approach generalizes to various other scenarios, including different cost terms, augmented neural ODEs on manifolds and time-dependent parameters, presented in the following.

3.2.1. Nonlinear and intermittent cost terms

We consider here the case of neural ODEs on manifolds of the form (36) with cost (37). For the final and intermittent cost term $F_\theta:\mathcal{M}\times\mathcal{M}\times\dots\times\mathcal{M}\to\mathbb{R}$ we denote by $\mathrm{d}_k F_\theta\in T_x^*\mathcal{M}$ the differential w.r.t. the $k$-th slot, and denote $\theta$ as a subscript to avoid confusion. The components of $\mathrm{d}_k F_\theta$ will be denoted $[\mathrm{d}_k F_\theta]_i$. In this case, the parameter gradient is determined by repeated application of Theorem 2:
Theorem 3 
(Generalized Adjoint Method on Manifolds). Given the dynamics (36) and the cost (37), the components of the parameter gradient $\nabla_\theta C_{f_\theta}^T(x_0,t_0)$, with $\theta\in\mathbb{R}^{n_\theta}$, are computed by
$\nabla_\theta C_{f_\theta}^T(x_0,t_0) = \nabla_\theta F_\theta\big(\theta,x(T_1),x(T_2),\dots,x(T)\big) + \int_0^T \nabla_\theta\Big[\lambda_j\,f_\theta^j\big(q(s)\big) + r\big(q(s),\theta,s\big)\Big]\,\mathrm{d}s,$
where the state $x(s)\in\mathcal{M}$ satisfies (43) and the co-state $\lambda(s)\in T^*_{x(s)}\mathcal{M}$ satisfies dynamics with discrete updates at times $T_1,\dots,T_{N-1}$ given by
$\dot{\lambda}_i = -\partial_i\big[\lambda_j\,f_\theta^j(q,t) + r(q,\theta,t)\big], \qquad \lambda_i(T) = [\mathrm{d}_N F_\theta]_i\big(x(T_1),\dots,x(T)\big),$
$\lambda_i(T_k^-) = \lambda_i(T_k^+) + [\mathrm{d}_k F_\theta]_i\big(x(T_1),\dots,x(T)\big),$
with $T_k^-$ the instant after a discrete update at time $T_k$ (recall that the co-state dynamics are integrated backwards, so $T_k^- < T_k < T_k^+$ in the order of integration), and $T_k^+$ the instant before.
Proof. 
We introduce an augmented manifold $\mathcal{M}_{\mathrm{aug}} = \mathcal{M}\times\dots\times\mathcal{M}\times\mathbb{R}^{n_\theta}\times\mathbb{R}\times\mathbb{R}$, to include $N$ copies of the original state $x\in\mathcal{M}$, parameters $\theta\in\mathbb{R}^{n_\theta}$, accumulated running cost $L\in\mathbb{R}$ and time $t\in\mathbb{R}$ in the augmented state $x_{\mathrm{aug}} := (x_1,\dots,x_N,\theta,L,t)\in\mathcal{M}_{\mathrm{aug}}$. Let
$\varrho_{T_i}(t) = \begin{cases} 1 & t\leq T_i \\ 0 & t > T_i \end{cases},$
and define the augmented dynamics f aug ( M ) as
$\dot{x}_{\mathrm{aug}} = f_{\mathrm{aug}}(x_{\mathrm{aug}}) = \begin{pmatrix} \varrho_{T_1}(t)\,f_\theta(x_1,t) \\ \vdots \\ \varrho_{T_{N-1}}(t)\,f_\theta(x_{N-1},t) \\ f_\theta(x_N,t) \\ 0 \\ r(x_N,\theta,t) \\ 1 \end{pmatrix}, \qquad x_{\mathrm{aug}}(0) = x_{\mathrm{aug},0} := \begin{pmatrix} x_0 \\ \vdots \\ x_0 \\ \theta \\ 0 \\ t_0 \end{pmatrix}.$
This is an autonomous system with final state
$x_{\mathrm{aug}}(T) = \Big(x(T_1),\dots,x(T_{N-1}),\,x(T),\,\theta,\,\int_0^T r(x,\theta,s)\,\mathrm{d}s,\,T\Big).$
Next, define the cost C aug : M R on the augmented space:
$C_{\mathrm{aug}}(x_{\mathrm{aug}}) = F_\theta(x_1,\dots,x_N) + L.$
Then Equation (37) can be rewritten as the evaluation of a terminal cost $C_{\mathrm{aug}}\big(x_{\mathrm{aug}}(T)\big)$:
$C_{f_\theta}^T(x_0) = \big(C_{\mathrm{aug}}\circ\Psi_{f_{\mathrm{aug}}}^T\big)(x_{\mathrm{aug},0}).$
Apply Equation (25), and split the co-state into $\lambda_1,\dots,\lambda_N,\lambda_\theta,\lambda_L,\lambda_t$; then their components' dynamics are:
$\dot{\lambda}_{1,i} = -\partial_i\big[\lambda_{1,j}\,\varrho_{T_1}(t)\,f_\theta^j(q_1,t)\big], \qquad \lambda_1(T) = \mathrm{d}_1 F_\theta\big(x(T_1),\dots,x(T)\big),$
$\dot{\lambda}_{N,i} = -\partial_i\big[\lambda_{N,j}\,f_\theta^j(q_N,t) + \lambda_L\,r(q_N,\theta,t)\big], \qquad \lambda_N(T) = \mathrm{d}_N F_\theta\big(x(T_1),\dots,x(T)\big),$
$\dot{\lambda}_{\theta,i} = -\partial_{\theta^i}\big[\lambda_{1,j}\,\varrho_{T_1}(t)\,f_\theta^j(q_1,t) + \dots + \lambda_{N,j}\,f_\theta^j(q_N,t) + \lambda_L\,r(q_N,\theta,t)\big], \qquad \lambda_\theta(T) = \nabla_\theta F_\theta\big(x(T_1),\dots,x(T)\big).$
We excluded the dynamics of $\lambda_t$, which does not appear in any of the other equations, and the constant $\lambda_L = 1$. Finally, define the cumulative co-state
$\lambda = \varrho_{T_1}(t)\,\lambda_1 + \dots + \varrho_{T_{N-1}}(t)\,\lambda_{N-1} + \lambda_N.$
Its dynamics at $t\in[0,T]\setminus\{T_1,\dots,T_{N-1}\}$ are given by the sum of (65) to (66), letting $q = q_N$:
$\dot{\lambda}_i = \dot{\lambda}_{1,i} + \dots + \dot{\lambda}_{N,i} = -\partial_i\big[\lambda_j\,f_\theta^j(q,t) + r(q,\theta,t)\big], \qquad \lambda(T) = \mathrm{d}_N F_\theta\big(x(T_1),\dots,x(T)\big),$
with discrete jumps (58) accounting for the final conditions of $\lambda_1,\dots,\lambda_N$, and the dynamics of $\lambda_\theta$ can be rewritten as
$\dot{\lambda}_{\theta,i} = -\partial_{\theta^i}\big[\lambda_j\,f_\theta^j(q,t) + r(q,\theta,t)\big]; \qquad \lambda_\theta(T) = \nabla_\theta F_\theta\big(x(T_1),\dots,x(T)\big).$
Integrating this from s = 0 to s = T recovers Equation (57). □
Cost terms of this form are interesting for the optimization of, e.g., periodic orbits [37] or trajectories on manifolds, where conditions at multiple checkpoints $\Psi_{f_\theta}^{T_i}(x_0)$ may appear in the cost.

3.2.2. Augmented neural ODEs on manifolds and time-dependent parameters

With state $x\in\mathcal{M}$, augmented state $\alpha\in\mathcal{N}$ (not to be confused with $x_{\mathrm{aug}}\in\mathcal{M}_{\mathrm{aug}}$), and a parameterized map $\varphi_\theta:\mathcal{M}\to\mathcal{N}$, augmented neural ODEs on manifolds are neural ODEs on the manifold $\mathcal{M}\times\mathcal{N}$ of the form
$\begin{pmatrix}\dot{x} \\ \dot{\alpha}\end{pmatrix} = \begin{pmatrix} f_\theta(x,\alpha) \\ g_\theta(x,\alpha)\end{pmatrix}; \qquad \begin{pmatrix} x(0) \\ \alpha(0)\end{pmatrix} = \begin{pmatrix} x_0 \\ \varphi_\theta(x_0)\end{pmatrix}.$
Time t is not included explicitly in these dynamics, since it can be included in α . This case also includes the scenario of time-dependent parameters θ ¯ ( t ) as part of α . As the trajectory cost, we take a final cost
$C_{f_\theta,g_\theta}^T(x_0,\theta) = F\big(\Psi_{f_\theta,g_\theta}^T\big(x_0,\varphi_\theta(x_0)\big),\theta\big).$
Theorem 4 
(Adjoint Method for Augmented Neural ODEs on Manifolds). Given the dynamics (73) and the cost (74), the components of the parameter gradient $\nabla_\theta C_{f_\theta,g_\theta}^T$, with $\theta\in\mathbb{R}^{n_\theta}$, are computed by
$\nabla_\theta C_{f_\theta,g_\theta}^T\big(x_0,\varphi_\theta(x_0)\big) = \nabla_\theta F\big(x(T),\alpha(T),\theta\big) + \big[\nabla_\theta\varphi_\theta^j(x_0)\big]\,\lambda_{\alpha,j}(0) + \int_0^T \nabla_\theta\Big[\lambda_{x,j}\,f_\theta^j\big(q(s)\big) + \lambda_{\alpha,j}\,g_\theta^j\big(q(s)\big)\Big]\,\mathrm{d}s,$
where the states $x(s)\in\mathcal{M}$, $\alpha(s)\in\mathcal{N}$ satisfy (73) and the co-states $\lambda_x(s)\in T^*_{x(s)}\mathcal{M}$, $\lambda_\alpha(s)\in T^*_{\alpha(s)}\mathcal{N}$ satisfy, in a local chart $(U,\psi)$ on $\mathcal{M}$ and $(\bar{U},\bar{\psi})$ on $\mathcal{N}$:
$\dot{\lambda}_{x,i} = -\partial_i\big[\lambda_{x,j}\,f_\theta^j(q,\bar{q},t) + \lambda_{\alpha,j}\,g_\theta^j(q,\bar{q},t)\big], \qquad \lambda_{x,i}(T) = [\mathrm{d}F]_i\big|_{x(T),\alpha(T),\theta},$
$\dot{\lambda}_{\alpha,i} = -\bar{\partial}_i\big[\lambda_{x,j}\,f_\theta^j(q,\bar{q},t) + \lambda_{\alpha,j}\,g_\theta^j(q,\bar{q},t)\big], \qquad \lambda_{\alpha,i}(T) = [\mathrm{d}F]_{\bar{i}}\big|_{x(T),\alpha(T),\theta}.$
Proof. 
Define the augmented state space as $\mathcal{M}_{\mathrm{aug}} = \mathcal{M}\times\mathcal{N}\times\mathbb{R}^{n_\theta}$, to include the states $x\in\mathcal{M}$, $\alpha\in\mathcal{N}$ and parameters $\theta\in\mathbb{R}^{n_\theta}$ in the augmented state $x_{\mathrm{aug}} := (x,\alpha,\theta)\in\mathcal{M}_{\mathrm{aug}}$. In addition, define the augmented dynamics $f_{\mathrm{aug}}\in\mathfrak{X}(\mathcal{M}_{\mathrm{aug}})$ as
$\dot{x}_{\mathrm{aug}} = f_{\mathrm{aug}}(x_{\mathrm{aug}}) = \begin{pmatrix} f_\theta(x,\alpha) \\ g_\theta(x,\alpha) \\ 0 \end{pmatrix}, \qquad x_{\mathrm{aug}}(0) = x_{\mathrm{aug},0} := \begin{pmatrix} x_0 \\ \varphi_\theta(x_0) \\ \theta \end{pmatrix}.$
This is an autonomous system with final state $x_{\mathrm{aug}}(T) = \big(x(T),\alpha(T),\theta\big)$. Next, define the cost $C_{\mathrm{aug}}:\mathcal{M}_{\mathrm{aug}}\to\mathbb{R}$ on the augmented space:
$C_{\mathrm{aug}}(x_{\mathrm{aug}}) = F(x,\alpha,\theta).$
Then Equation (74) can be rewritten as the evaluation of a terminal cost $C_{\mathrm{aug}}\big(x_{\mathrm{aug}}(T)\big)$. The gradient of $C_{\mathrm{aug}}\circ\Psi_{f_{\mathrm{aug}}}^T$ is given by an application of Equation (25). Split the co-state into $\lambda_x,\lambda_\alpha,\lambda_\theta$; then their components' dynamics are:
$\dot{\lambda}_{x,i} = -\partial_i\big[\lambda_{x,j}\,f_\theta^j(q,\bar{q},t) + \lambda_{\alpha,j}\,g_\theta^j(q,\bar{q},t)\big], \qquad \lambda_x(T) = \mathrm{d}F\big|_{(x(T),\alpha(T),\theta)},$
$\dot{\lambda}_{\alpha,i} = -\bar{\partial}_i\big[\lambda_{x,j}\,f_\theta^j(q,\bar{q},t) + \lambda_{\alpha,j}\,g_\theta^j(q,\bar{q},t)\big], \qquad \lambda_{\alpha,i}(T) = [\mathrm{d}F]_{\bar{i}}\big|_{x(T),\alpha(T),\theta},$
$\dot{\lambda}_{\theta,i} = -\partial_{\theta^i}\big[\lambda_{x,j}\,f_\theta^j(q,\bar{q},t) + \lambda_{\alpha,j}\,g_\theta^j(q,\bar{q},t)\big], \qquad \lambda_\theta(T) = \nabla_\theta F\big(x(T),\alpha(T),\theta\big).$
Since α ( 0 ) = φ θ ( x 0 ) also depends on θ , the total gradient of the cost w.r.t. θ is given by
$\nabla_{\theta^i} C_{f_\theta,g_\theta}^T\big(x_0,\varphi_\theta(x_0)\big) = \lambda_{\theta,i}(0) + \big[\partial_{\theta^i}\varphi_\theta^j(x_0)\big]\,\lambda_{\alpha,j}(0).$
This recovers Equation (75). □
A further, degenerate application of Theorem 4 is obtained by removing $x$. Then both the dynamics $g_\theta(\alpha)$ and the initial condition $\alpha(0) = \varphi_\theta$ are parameterized by $\theta$, allowing joint optimization of parameters and initial condition. This is interesting for joint optimization and numerical continuation, e.g., [37].

4. Neural ODEs on Lie Groups

Just as a neural ODE on a manifold is an NN-parameterized vector field in $\mathfrak{X}(\mathcal{M})$ (or, including time, in $\mathfrak{X}(\mathcal{M}\times\mathbb{R})$), a neural ODE on a Lie group can be seen as a parameterized vector field in $\mathfrak{X}(G)$ (or $\mathfrak{X}(G\times\mathbb{R})$, respectively). Similarly to Equation (36), this results in the dynamic system
$\dot{g} = f_\theta(g,t), \qquad g(0) = g_0.$
Yet, Lie groups offer more structure than manifolds: the Lie algebra $\mathfrak{g}$ provides a canonical space to represent tangent vectors, and its dual $\mathfrak{g}^*$ provides a canonical space to represent the co-state. Similarly, canonical (exponential) charts offer structure for integrating dynamic systems [38]. Frequently, dynamics on a Lie group induce dynamics on a manifold $\mathcal{M}$: by means of an action
$\Phi: G\times\mathcal{M}\to\mathcal{M}; \qquad (g,x)\mapsto\Phi(g,x),$
evolutions $g(t)$ induce evolutions $x(t) = \Phi\big(g(t),x_0\big)$ on $\mathcal{M}$. This makes neural ODEs on Lie groups interesting in their own right.
In this section, we describe optimizing (39) for the cost
$C_{f_\theta}^T(g_0,\theta) = F\big(\Psi_{f_\theta}^T(g_0),\theta\big) + \int_0^T r\big(\Psi_{f_\theta}^s(g_0),\theta,s\big)\,\mathrm{d}s,$
with a final cost term F and a running cost term r. We highlight the extrinsic approach, and two intrinsic approaches, where one of the latter is peculiar to Lie groups.

4.1. Extrinsic neural ODEs on Lie groups

The extrinsic formulation of neural ODEs on Lie groups was first introduced by [20], and applies ideas of [19] (see also Section 3.1.1). Given $G\subseteq GL(m,\mathbb{R})$, this formulation treats the dynamic system (84) as a dynamic system on $\mathbb{R}^{m^2}$. Denote by $\mathrm{vec}:\mathbb{R}^{m\times m}\to\mathbb{R}^{m^2}$ an invertible map that stacks the components of an input matrix into a component vector², and let $\mathrm{proj}_G:\mathbb{R}^{m\times m}\to G$ be a projection onto $G\subset\mathbb{R}^{m\times m}$. Further denote $A_y = \mathrm{vec}^{-1}(y)$ and $g_y = \mathrm{proj}_G(A_y)$. A lift $f_\theta(y,t)$ can then be defined as
$f_\theta(y,t) = \mathrm{vec}\big(A_y\,g_y^{-1}\,f(g_y,\theta,t)\big).$
As was the case for extrinsic neural ODEs on manifolds, the cost gradient resulting from this optimization is well-defined and equivalent to any intrinsically defined procedure. However, the dimension $m^2$ of the vectorization can be significantly larger than the intrinsic dimension of the Lie group.
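The sketch below spells out the lift (87) for $G=SO(3)\subset GL(3,\mathbb{R})$, with $\mathrm{vec}$ stacking the nine matrix entries and $\mathrm{proj}_G$ realized as a nearest-rotation projection; these concrete choices, and the function handle f for the group dynamics, are assumptions made for this example.

```python
import numpy as np


def vec(A):
    """Stack a 3x3 matrix into R^9 (column-major; the ordering is a convention)."""
    return A.reshape(-1, order='F')


def unvec(y):
    """Inverse of vec."""
    return y.reshape(3, 3, order='F')


def proj_SO3(A):
    """Projection of a 3x3 matrix onto SO(3) (nearest rotation, via SVD)."""
    U, _, Vt = np.linalg.svd(A)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ D @ Vt


def lift(f, y, theta, t):
    """Extrinsic lift of f(g, theta, t) in T_g SO(3) to a vector field on R^9, cf. Eq. (87)."""
    A_y = unvec(y)
    g_y = proj_SO3(A_y)
    return vec(A_y @ g_y.T @ f(g_y, theta, t))   # g_y^{-1} = g_y^T on SO(3)
```

Note that the lifted system evolves in $\mathbb{R}^9$ although $\dim SO(3)=3$, which is exactly the dimensional overhead discussed above.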

4.2. Intrinsic neural ODEs on Lie groups

Theorem 2 directly applies to the optimization of neural ODEs on Lie groups, given the local exponential charts (19)–(21) on $G$. This does not make full use of the available structure on Lie groups. Frequently, dynamical systems are of a left-invariant form (88) or a right-invariant form (89):
$\dot{g} = g\,\Lambda\big(\rho_\theta^L(g,t)\big),$
$\dot{g} = \Lambda\big(\rho_\theta^R(g,t)\big)\,g.$
Denote by $K(q): T_q\mathbb{R}^n\to\mathbb{R}^n$ the derivative of the exponential map (see [21] for details). Then the chart representatives $f_\theta^i$ in a local exponential chart $(U_h,\psi_h)$ are
$f_\theta^{L,i}(q,t) = (K^{-1})_j^i(q)\,\rho_\theta^{L,j}\big(\psi_h^{-1}(q)\big),$
$f_\theta^{R,i}(q,t) = (K^{-1})_j^i(q)\,\big[\mathrm{Ad}_{\psi_h^{-1}(q)}\,\rho_\theta^{R}\big(\psi_h^{-1}(q)\big)\big]^j.$
Application of Theorem 2 then requires computing $\partial_j f_\theta^{L,i}(q,t)$ or $\partial_j f_\theta^{R,i}(q,t)$. But this leads to significant computational overhead due to differentiation of the terms $(K^{-1})_j^i(q)$ (see [21]). Instead of applying Theorem 2, i.e., expressing dynamics in local charts, the dynamics can also be expressed at the Lie algebra $\mathfrak{g}$. Theorem 1 has a Hamiltonian form, which can be directly transformed into Hamiltonian equations on a Lie group (see also Appendix A.1). Applying this reasoning to Theorem 2, we arrive at the following form, which foregoes differentiating $(K^{-1})_j^i(q)$:
Theorem 5 
(Left Generalized Adjoint Method on Matrix Lie Groups). Given are the dynamics (88) and the cost (86), or the dynamics (89) with $\rho_\theta^L(g,t) = \mathrm{Ad}_{g^{-1}}\,\rho_\theta^R(g,t)$. Then the parameter gradient $\nabla_\theta C_{f_\theta}^T(g_0)$ of the cost is given by the integral equation
$\nabla_\theta C_{f_\theta}^T(g_0) = \nabla_\theta F\big(g(T),\theta\big) + \int_0^T \nabla_\theta\Big[\lambda_g^\top\rho_\theta^L(g,s) + r(g,\theta,s)\Big]\,\mathrm{d}s,$
where the state g ( t ) G and co-state λ g ( t ) R n are the solutions of the system of equations
$\dot{g} = f_\theta(g,t), \qquad g(0) = g_0,$
$\dot{\lambda}_g = -\mathrm{d}_g^L\big[\lambda_g^\top\rho_\theta^L(g,s) + r(g,\theta,s)\big] + \mathrm{ad}^*_{\rho_\theta^L(g,t)}\,\lambda_g, \qquad \lambda_g(T) = \mathrm{d}_g^L F\big(g(T),\theta\big).$
Proof. 
This is proven in two steps. First, define the time-and-parameter-dependent control Hamiltonian $H_c: T^*\mathcal{M}\times\mathbb{R}^{n_\theta}\times\mathbb{R}\to\mathbb{R}$ as
$H_c(x,\lambda,\theta,t) = \lambda\,f_\theta(x,t) + r(x,\theta,t) = \lambda_i\,f_\theta^i(q,t) + r(q,\theta,t).$
The equations for the state and co-state dynamics, (43) and (44) respectively, of Theorem 2 follow as the Hamiltonian equations on $T^*\mathcal{M}$:
$\dot{q}^j = \frac{\partial H_c}{\partial\lambda_j} = f_\theta^j(q,t),$
$\dot{\lambda}_i = -\frac{\partial H_c}{\partial q^i} = -\lambda_j\,\partial_i f_\theta^j(q,t) - \partial_i r.$
And the integral equation (42) reads
$\nabla_\theta C_{f_\theta}^T(x_0,t_0) = \nabla_\theta F\big(x(T),\theta\big) + \int_0^T \nabla_\theta H_c\,\mathrm{d}t.$
Second, rewrite the control Hamiltonian (95) on a Lie group $G$, i.e., $H_c: T^*G\times\mathbb{R}^{n_\theta}\times\mathbb{R}\to\mathbb{R}$. By substituting $\lambda_g(t) = \Lambda^*L_g^*\lambda(t)$ (see also Equation (A6)), this induces $H_c: G\times\mathfrak{g}^*\times\mathbb{R}^{n_\theta}\times\mathbb{R}\to\mathbb{R}$:
$H_c(g,\lambda_g,\theta,t) = \lambda_g^\top\rho_\theta^L(g,t) + r(g,\theta,t).$
Finally, Hamilton's equations (96) and (97) are rewritten in their form on a matrix Lie group by means of (A7) and (A8), which recovers Equations (93) and (94):
$\dot{g} = g\,\Lambda\Big(\frac{\partial H_c}{\partial\lambda_g}\Big),$
$\dot{\lambda}_g = -\mathrm{d}_g^L H_c + \mathrm{ad}^*_{\partial H_c/\partial\lambda_g}\,\lambda_g.$
To find the final condition for $\lambda_g$, use that $\lambda_g(t) = \Lambda^*L_g^*\lambda(t)$:
$\lambda_g(T) = \Lambda^*L_g^*\lambda(T) = \Lambda^*L_g^*\,\mathrm{d}F\big(g(T),\theta\big) = \mathrm{d}_g^L F\big(g(T),\theta\big). \qquad\square$
Similar equations also hold on abstract (non-matrix) Lie groups; see [21]. Compared to the extrinsic method of Section 4.1, Theorem 5 has the advantage that the dimension of the co-state $\lambda_g$ is as low as possible. Compared to the chart-based approach on Lie groups, Theorem 5 foregoes differentiating through the terms $(K^{-1})_j^i(q)$, avoiding overhead. Compared to a chart-based approach on manifolds, the choice of charts is also canonical on Lie groups. Although the Lie group approach foregoes many of the pitfalls of intrinsic neural ODEs on manifolds, implementation in existing neural ODE packages is currently cumbersome: the adjoint-sensitivity equations (94) have a non-standard form, requiring an adapted dynamics of the co-state $\lambda_g$, but these equations are rarely intended for modification in existing packages. Packages for geometry-preserving integrators on Lie groups, such as [38], are also not readily available for arbitrary Lie groups.
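To indicate how Theorem 5 can be used numerically, the sketch below integrates the left-invariant dynamics forward with a Lie-Euler step and the co-state equation backward with an explicit Euler step on $G=SO(3)$; the step sizes, the finite-difference approximation of the left-trivialized gradient (22), and the user-supplied functions rho, F and r are all illustrative assumptions, not part of the theorem.

```python
import numpy as np
from scipy.linalg import expm


def Lambda(A):
    """so(3) hat map: Lambda(A) b = A x b (redefined here for self-containment)."""
    return np.array([[0.0, -A[2], A[1]],
                     [A[2], 0.0, -A[0]],
                     [-A[1], A[0], 0.0]])


def ad_star(A, lam):
    """Co-adjoint (ad*_A lam)_i = lam_j (ad_A)^j_i; on so(3), ad_A = Lambda(A)."""
    return Lambda(A).T @ lam


def left_grad(V, g, eps=1e-6):
    """Finite-difference approximation of the left-trivialized gradient, Eq. (22)."""
    grad = np.zeros(3)
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        grad[i] = (V(g @ expm(Lambda(d))) - V(g)) / eps
    return grad


def adjoint_lie_so3(rho, F, r, g0, theta, T, N=200):
    """Forward/backward pass of Theorem 5 on SO(3) (Lie-Euler / explicit Euler)."""
    dt, gs = T / N, [g0]
    for k in range(N):                                   # forward: g_dot = g Lambda(rho)
        g = gs[-1]
        gs.append(g @ expm(Lambda(dt * rho(g, theta, k * dt))))
    lam = left_grad(lambda g_: F(g_, theta), gs[-1])     # lambda_g(T) = d^L_g F
    for k in range(N, 0, -1):                            # backward co-state integration
        g, t = gs[k], k * dt
        rho_k = rho(g, theta, t)
        H = lambda g_: lam @ rho(g_, theta, t) + r(g_, theta, t)
        lam = lam + dt * (left_grad(H, g) - ad_star(rho_k, lam))
    return gs, lam                                       # lam approximates lambda_g(0)
```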

4.3. Extensions

The proof of Theorem 5 relied on finding a control-Hamiltonian formulation for Theorem 2. This approach generalizes to methods in Section 3.2, which rely on the use of Theorem 1. That is because Theorem 1 itself has a Hamiltonian form ([19,21]).

5. Discussion

We discuss advantages and disadvantages of the main flavors of the presented formulations for manifold neural ODEs, expanding on the previous sections. We focus on extrinsic (embedding dynamics in $\mathbb{R}^N$) and intrinsic (integrating in local charts) formulations. Summarizing prior comments:
  • the extrinsic formulation is readily implemented if the low-dimensional manifold $\mathcal{M}$ and an embedding into $\mathbb{R}^N$ are known. This comes at the possible cost of geometric inexactness and a higher dimension of the co-state and sensitivity equations;
  • the co-state in the intrinsic formulation has a generally lower dimension, which reduces the dimension of the sensitivity equations. The chart-based formulation also guarantees geometrically exact integration of dynamics. This comes at the mild cost of having to define local charts and chart-transitions.
This dimensionality reduction is unlikely to have a high impact when the manifold $\mathcal{M}$ is known and low-dimensional, e.g., for the sphere $\mathcal{M}=S^2$ or similar manifolds. However, when applying the manifold hypothesis to high-dimensional data, there might be non-trivial latent manifolds for which an embedding is not immediate, and where the latent manifold is of much lower dimension than the embedding data manifold. Then the intrinsic method becomes difficult to avoid. If geometric exactness of the integration is desired, local charts need to be defined also for the extrinsic approach, in which case the intrinsic approach may offer further advantages.
In order to derive neural ODEs on Lie groups, three approaches were possible: the extrinsic and intrinsic formulations on manifolds directly carry over to matrix Lie groups, embedding $G\subseteq GL(m,\mathbb{R})$ in $\mathbb{R}^{m^2}$ or using local exponential charts, respectively. A third option is a novel intrinsic method for neural ODEs on matrix Lie groups, which makes full use of the Lie group structure by phrasing the dynamics on $\mathfrak{g}$ (as is more common on Lie groups) and the co-state on $\mathfrak{g}^*$, avoiding the difficulties of the chart-based formalism in differentiating extra terms.
Summarizing prior comments on advantages and disadvantages of these flavors:
  • the extrinsic formulation on matrix Lie groups can come at a much higher cost than that on manifolds, since the intrinsic dimension of $G$ can be much lower than $m^2$, leading to a higher dimension of the co-state and sensitivity equations. Geometrically exact integration procedures are, however, more readily available for matrix Lie groups, integrating $\dot{g}$ in local exponential charts;
  • the chart-based formulation on matrix Lie groups struggles when dynamics are not naturally phrased in local charts. This is common: dynamics are often more naturally phrased on $\mathfrak{g}$. This is alleviated by the algebra-based formulation on matrix Lie groups. Both are intrinsic approaches that feature co-state dynamics of dimension as low as possible. However, the algebra-based approach still lacks a readily available software implementation.
The authors believe that the algebra-based formulation is more convenient, in principle, and consider software implementations of the algebra-based approach as possible future work.
In summary, we presented a unified, geometric approach to extend various methods for neural ODEs on $\mathbb{R}^n$ to neural ODEs on manifolds and Lie groups. Optimization of neural ODEs on manifolds was based on the adjoint method on manifolds. Given a novel cost function $C$ and neural ODE architecture $f$, the strategy for presenting the results in a unified fashion was to identify a suitable augmented manifold $\mathcal{M}_{\mathrm{aug}}$, augmented dynamics $f_{\mathrm{aug}}\in\mathfrak{X}(\mathcal{M}_{\mathrm{aug}})$, and cost $C_{\mathrm{aug}}:\mathcal{M}_{\mathrm{aug}}\to\mathbb{R}$ such that the original cost function can be rephrased as $C = C_{\mathrm{aug}}\circ\Psi_{f_{\mathrm{aug}}}^T$. The derivation of the optimization of intrinsic neural ODEs on Lie groups was in turn based on finding a Hamiltonian formulation of the adjoint method on manifolds, and subsequently transforming it into the Hamiltonian equations on a matrix Lie group.

Author Contributions

Conceptualization, Y.P.W.; methodology, Y.P.W.; software, Y.P.W.; validation, Y.P.W.; formal analysis, Y.P.W.; investigation, Y.P.W.; resources, S.S.; data curation, Y.P.W.; writing—original draft preparation, Y.P.W.; writing—review and editing, Y.P.W., F.C., S.S.; visualization, Y.P.W.; supervision, F.C., S.S.; project administration, S.S.; funding acquisition, S.S. All authors have read and agreed to the published version of the manuscript.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Additional Material

Appendix A.1. Hamiltonian dynamics on Lie groups

We briefly review Hamiltonian systems on manifolds and matrix Lie groups (see also [21]).
Given a manifold $Q$ with coordinate maps $q^i: Q\to\mathbb{R}$ and momenta $p_i$ in the basis $\mathrm{d}q^i$ on $T_q^*Q$, we define the symplectic form $\omega\in\Omega^2(T^*Q)$ as
$\omega = \mathrm{d}p_i\wedge\mathrm{d}q^i.$
Let $Y\in\mathfrak{X}(T^*Q)$; then a Hamiltonian $H\in C^\infty(T^*Q,\mathbb{R})$ implicitly defines a unique vector field $X_H\in\mathfrak{X}(T^*Q)$ by
$\mathrm{d}H(Y) = \omega(X_H,Y).$
In coordinates, X H has the components
$\dot{q}^i = \frac{\partial H}{\partial p_i},$
$\dot{p}_i = -\frac{\partial H}{\partial q^i}.$
On a Lie group $G$, the group structure allows the identification $T^*G\simeq G\times\mathfrak{g}^*\simeq G\times\mathbb{R}^n$, e.g., using the pullback $L_g^*: T_g^*G\to\mathfrak{g}^*$ of the left-translation map $L_g: G\to G$, and $\Lambda^*:\mathfrak{g}^*\to\mathbb{R}^n$, to define $P_g\in\mathbb{R}^n$ as
$P_g = \Lambda^*L_g^*P.$
Then the left Hamiltonian $H^L: G\times\mathfrak{g}^*\to\mathbb{R}$ is defined in terms of $H: T^*G\to\mathbb{R}$ as
$H^L(g,P_g) = H(g,P).$
For a matrix Lie group the left Hamiltonian equations read:
$\dot{g} = g\,\Lambda\Big(\frac{\partial H^L}{\partial P_g}\Big),$
$\dot{P}_g = -\mathrm{d}_g^L H^L + \mathrm{ad}^*_{\partial H^L/\partial P_g}\,P_g,$
with $\Lambda:\mathbb{R}^n\to\mathfrak{g}$ as in (12) and $\mathrm{d}_g^L H\in\mathbb{R}^n$ as in (22).

References

  1. Chen, R.T.Q.; Rubanova, Y.; Bettencourt, J.; Duvenaud, D. Neural Ordinary Differential Equations. Advances in Neural Information Processing Systems, 2018. arXiv:1806.07366. [Google Scholar]
  2. Massaroli, S.; Poli, M.; Park, J.; Yamashita, A.; Asama, H. Dissecting neural ODEs. Advances in Neural Information Processing Systems, 2020. [Google Scholar]
  3. Zakwan, M.; Natale, L.D.; Svetozarevic, B.; Heer, P.; Jones, C.; Trecate, G.F. Physically Consistent Neural ODEs for Learning Multi-Physics Systems*. IFAC-PapersOnLine 2023, 56, 5855–5860. [Google Scholar] [CrossRef]
  4. Sholokhov, A.; Liu, Y.; Mansour, H.; Nabi, S. Physics-informed neural ODE (PINODE): embedding physics into models using collocation points. Scientific Reports 2023, 13, 1–13. [Google Scholar] [CrossRef] [PubMed]
  5. Ghanem, P.; Demirkaya, A.; Imbiriba, T.; Ramezani, A.; Danziger, Z.; Erdogmus, D. Learning Physics Informed Neural ODEs With Partial Measurements. AAAI-25, 2025. arXiv:cs.LG/2412.08681.
  6. Massaroli, S.; Poli, M.; Califano, F.; Park, J.; Yamashita, A.; Asama, H. Optimal Energy Shaping via Neural Approximators. SIAM Journal on Applied Dynamical Systems 2022, 21, 2126–2147. [Google Scholar] [CrossRef]
  7. Niu, H.; Zhou, Y.; Yan, X.; Wu, J.; Shen, Y.; Yi, Z.; Hu, J. On the applications of neural ordinary differential equations in medical image analysis. Artificial Intelligence Review 2024, 57, 1–32. [Google Scholar] [CrossRef]
  8. Oh, Y.; Kam, S.; Lee, J.; Lim, D.Y.; Kim, S.; Bui, A.A.T. Comprehensive Review of Neural Differential Equations for Time Series Analysis 2025.
  9. Poli, M.; Massaroli, S.; Yamashita, A.; Asama, H.; Garg, A.; et al. Neural Hybrid Automata: Learning Dynamics with Multiple Modes and Stochastic Transitions, 2021.
  10. Chen, R.T.Q.; Amos, B.; Nickel, M. Learning Neural Event Functions for Ordinary Differential Equations 2021.
  11. Davis, J.Q.; Choromanski, K.; Varley, J.; Lee, H.; Slotine, J.J.; Likhosherstov, V.; Weller, A.; Makadia, A.; Sindhwani, V. Time Dependence in Non-Autonomous Neural ODEs, 2020. arXiv:cs.LG/2005.01906.
  12. Dupont, E.; Doucet, A.; Teh, Y.W. Augmented Neural ODEs 2019. arXiv:stat.ML/1904.01681].
  13. Chu, H.; Miyatake, Y.; Cui, W.; Wei, S.; Furihata, D. Structure-Preserving Physics-Informed Neural Networks With Energy or Lyapunov Structure, 2024. arXiv:cs.LG/2401.04986.
  14. Kütük, M.; Yücel, H. Energy dissipation preserving physics informed neural network for Allen–Cahn equations. Journal of Computational Science 2025, 87, 102577. [Google Scholar] [CrossRef]
  15. Bullo, F.; Murray, R.M. Tracking for fully actuated mechanical systems: a geometric framework. Automatica 1999, 35, 17–34. [Google Scholar] [CrossRef]
  16. Marsden, J.E.; Ratiu, T.S. Introduction to Mechanics and Symmetry; Vol. 17, Springer New York, 1999. [CrossRef]
  17. Whiteley, N.; Gray, A.; Rubin-Delanchy, P. Statistical exploration of the Manifold Hypothesis 2025. arXiv:stat.ME/2208.11665].
  18. Lou, A.; Lim, D.; Katsman, I.; Huang, L.; Jiang, Q.; Lim, S.N.; De Sa, C. Neural Manifold Ordinary Differential Equations. Advances in Neural Information Processing Systems, 2020. [Google Scholar]
  19. Falorsi, L.; Davidson, T.R.; Forré, P.; et al. Reparameterizing Distributions on Lie Groups. Proceedings of Machine Learning Research, vol. 89, 2019. arXiv:1903.02958.
  20. Duong, T.; Altawaitan, A.; Stanley, J.; Atanasov, N. Port-Hamiltonian Neural ODE Networks on Lie Groups for Robot Dynamics Learning and Control. IEEE Transactions on Robotics 2024, 40, 3695–3715. [Google Scholar] [CrossRef]
  21. Wotte, Y.P.; Califano, F.; Stramigioli, S. Optimal potential shaping on SE(3) via neural ordinary differential equations on Lie groups. The International Journal of Robotics Research 2024, 43, 2221–2244. [Google Scholar] [CrossRef]
  22. Falorsi, L.; Forré, P. Neural Ordinary Differential Equations on Manifolds, 2020. arXiv:2006.06663.
  23. Andersdotter, E.; Persson, D.; Ohlsson, F. Equivariant Manifold Neural ODEs and Differential Invariants 2024.
  24. Wotte, Y. Optimal Potential Energy Shaping on SE(3) via Neural Approximators. University of Twente Archive 2021. [Google Scholar]
  25. Pau, B.S. An introduction to neural ordinary differential equations 2024.
  26. Gholami, A.; Keutzer, K.; Biros, G. ANODE: Unconditionally Accurate Memory-Efficient Gradients for Neural ODEs 2019. arXiv:cs.LG/1902.10298].
  27. Kidger, P.; Morrill, J.; Foster, J.; Lyons, T.J. Neural Controlled Differential Equations for Irregular Time Series. Advances in Neural Information Processing Systems, 2020. arXiv:2005.08926. [Google Scholar]
  28. Li, X.; Wong, T.K.L.; Chen, R.T.Q.; Duvenaud, D. Scalable Gradients for Stochastic Differential Equations 2020. arXiv:cs.LG/2001.01328].
  29. Floryan, D.; Graham, M.D. Data-driven discovery of intrinsic dynamics. Nature Machine Intelligence 2022, 4, 1113–1120. [Google Scholar] [CrossRef]
  30. Duong, T.; Atanasov, N. Hamiltonian-based Neural ODE Networks on the SE(3) Manifold For Dynamics Learning and Control, 2021. arXiv:2106.12782.
  31. Isham, C.J. Modern Differential Geometry for Physicists; 1999. [CrossRef]
  32. Hall, B.C. Lie Groups, Lie Algebras, and Representations: An Elementary Introduction; Graduate Texts in Mathematics (GTM, volume 222), Springer, 2015.
  33. Solà, J.; Deray, J.; Atchuthan, D. A micro Lie theory for state estimation in robotics, 2021. arXiv:cs.RO/1812.01537.
  34. Visser, M.; Stramigioli, S.; Heemskerk, C. Cayley-Hamilton for roboticists. IEEE International Conference on Intelligent Robots and Systems 2006, 1, 4187–4192. [Google Scholar] [CrossRef]
  35. Robbins, H.; Monro, S. A Stochastic Approximation Method. The Annals of Mathematical Statistics 1951, 22, 400–407. [Google Scholar] [CrossRef]
  36. Poli, M.; Massaroli, S.; Yamashita, A.; Asama, H.; Park, J. TorchDyn: A Neural Differential Equations Library, 2020. arXiv:2009.09346.
  37. Wotte, Y.P.; Dummer, S.; Botteghi, N.; Brune, C.; Stramigioli, S.; Califano, F. Discovering efficient periodic behaviors in mechanical systems via neural approximators. Optimal Control Applications and Methods 2023, 44, 3052–3079. [Google Scholar] [CrossRef]
  38. Munthe-Kaas, H. High order Runge-Kutta methods on manifolds. Applied Numerical Mathematics 1999, 29, 115–127. [Google Scholar] [CrossRef]
1. Equivalently (e.g., [33]), $\Lambda$ and $\Lambda^{-1}$ are often denoted as the "hat" operator $(\cdot)^\wedge:\mathbb{R}^n\to\mathbb{R}^{m\times m}$ and the "vee" operator $(\cdot)^\vee:\mathbb{R}^{m\times m}\to\mathbb{R}^n$, respectively.
2. In canonical coordinates on $\mathbb{R}^{m\times m}$ and $\mathbb{R}^{m^2}$, though this choice is not required.
Figure 1. (a) The problem of computing the gradient over a flow, highlighting the cotangent spaces $\mathrm{d}C\big|_{x(T)}\in T^*_{x(T)}\mathcal{M}$ and $\mathrm{d}\big(C\circ\Psi_f^T\big)(x_0) = \big(\Psi_f^T\big)^*\mathrm{d}C\big|_{x(T)}\in T^*_{x_0}\mathcal{M}$. (b) In the adjoint method we set $\lambda(t) = \mathrm{d}\big(C\circ\Psi_f^{T-t}\big)\big|_{x(t)}$, whose dynamics are uniquely determined by the property $\mathcal{L}_f\lambda = 0$, allowing us to find $\lambda(0) = \mathrm{d}\big(C\circ\Psi_f^T\big)(x_0)$ by integrating $\dot{\lambda}$ backwards from $\lambda(T) = \mathrm{d}C\big|_{x(T)}$.
Table 1. Summary of neural ODEs on manifolds and Lie groups presented in this article.
Name of neural ODE | Subtype | Trajectory cost | Subsection | Originally introduced in
Neural ODEs on manifolds (Section 3) | Extrinsic | Running and final cost | Section 3.1.1 | Final cost [22], running cost [21]
Neural ODEs on manifolds (Section 3) | Intrinsic | Running and final cost, intermittent cost | Section 3.1.2 | Final cost [18], running cost [21], intermittent cost (this work)
Neural ODEs on manifolds (Section 3) | Augmented, time-dependent parameters | Final cost | Section 3.2.2 | Augmenting $\mathcal{M}$ to $T\mathcal{M}$ [23], augmenting $\mathcal{M}$ to $\mathcal{M}\times\mathcal{N}$ (this work)
Neural ODEs on Lie groups (Section 4) | Extrinsic | Final cost and intermittent cost | Section 4.1 | [20]
Neural ODEs on Lie groups (Section 4) | Intrinsic, dynamics in local charts | Running and final cost | Section 4.2 | [21,24]
Neural ODEs on Lie groups (Section 4) | Intrinsic, dynamics at the Lie algebra | Running and final cost | Section 4.2 | [21]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.