Preprint
Article

This version is not peer-reviewed.

The Optimal Frequency Control Problem of a Nonlinear Oscillator

  † These authors contributed equally to this work.

Submitted: 23 November 2025

Posted: 24 November 2025


Abstract
We study a minimum-time (time-optimal) control problem for a nonlinear pendulum-type oscillator, in which the control input is the system’s natural frequency constrained to a prescribed interval. The objective is to transfer the oscillator from a given initial state to a prescribed terminal state in the shortest possible time. Our approach combines Pontryagin’s maximum principle with Bellman’s principle of optimality. First, we decompose the original problem into a sequence of auxiliary problems, each corresponding to a single semi-oscillation. For every such subproblem, we obtain a complete analytical solution by applying Pontryagin’s maximum principle. These results allow us to reduce the global problem of minimizing the transfer time between the prescribed states to a finite-dimensional optimization problem over a sequence of intermediate amplitudes, which is then solved numerically by dynamic programming. Numerical experiments reveal characteristic features of optimal trajectories in the nonlinear regime, including a non-periodic switching structure, non-uniform semi-oscillation durations, and significant deviations from the behavior of the corresponding linearized system. The proposed framework provides a basis for the synthesis of fast oscillatory regimes in systems with controllable frequency, such as pendulum and crane systems and robotic manipulators.

1. Introduction

Minimizing the duration of transient processes in mechanical and oscillatory systems is one of the fundamental problems of modern control theory. Classical optimal control methods, developed within the framework of Pontryagin’s maximum principle and dynamic programming [1,2,3,4], provide a rigorous theoretical foundation for studying the structure of optimal controls. However, their direct application to nonlinear oscillators with bounded parametric control is highly nontrivial.
A large class of contemporary applied problems is associated with swing suppression and fast load transportation in crane systems. In recent years, a wide range of highly efficient command-shaping and active vibration suppression methods has been developed. In particular, [5,6] propose optimization-based input-shaping and MPC-based control algorithms for overhead cranes. Enhanced swing-suppression strategies, such as Negative Zero-Vibration schemes and low-pass-filter-based methods, are studied in [7,8]. Phase-planning and optimal anti-sway methods for tower and container cranes are presented in [9,10,11]. These results show that controlling the frequency or parameters of the oscillatory dynamics is a key mechanism for accelerating motion while maintaining stability.
In parallel, there has been active development in the analytical and semi-analytical study of nonlinear oscillators. Methods for estimating periods and frequencies, models with fractal–fractional operators, and various refinements of He’s frequency formulation are analyzed in [12,13,14]. Robust and optimal control methods for fractional nonlinear models are presented in [15], while modern variational and Hamiltonian-based computational techniques are discussed in [16]. Despite their high accuracy in describing the dynamics, these approaches generally do not yield a fully analytical solution to the minimum-time problem for nonlinear pendulums with a bounded controllable frequency.
Significant progress has also been achieved in dynamic programming and adaptive dynamic programming (ADP), including value-iteration and constrained-cost schemes for nonlinear systems [17,18]. Applications of ADP to systems with state delays are presented in [19]. Classical works on ADP and reinforcement-learning-based optimal control [20,21] provide powerful numerical tools for solving constrained optimal control problems; however, they do not supply explicit analytical switching conditions either.
Modern optimal control techniques are particularly in demand in power systems, mechatronics, and space engineering. Applications of Neural ODE methods to frequency stabilization in power systems are investigated in [22]; nonlinear frequency regulation in microgrids is studied in [23]; and swing suppression in flexible space structures is addressed in [24]. These directions demonstrate a growing interest in controlling oscillator-like parameters in complex real-world systems.
For linear harmonic systems with parametric excitation and viscous friction, rigorous analytical solutions for the structure of optimal controls have been obtained in [25,26]. However, linear models do not capture all the characteristic features inherent to pendulum-type systems.
Thus, there is a gap between, on the one hand, the highly developed swing-suppression methods for quasi-linear models [5,6,7,8], analytical and semi-empirical techniques for nonlinear oscillators [12,13,14,15,16], and numerical ADP/MPC-based methods [17,18,21,22], and, on the other hand, the absence of a strict analytical solution to the minimum-time control problem for a nonlinear pendulum when the frequency acts as the control parameter.
The aim of this work is to help fill this gap. We consider a nonlinear pendulum-type oscillator whose natural frequency, varying within a prescribed range, serves as the control input, and we study the problem of minimizing the transfer time between two rest states. Based on Pontryagin’s maximum principle and Bellman’s principle of optimality, we rigorously decompose the motion into semi-oscillations, show that the optimal control on each semi-oscillation is bang–bang with at most two switchings, derive analytical formulas for the semi-oscillation duration and switching conditions, and finally reduce the global problem to a finite-dimensional optimization problem.
The structure of the paper is as follows. Section 2 formulates the problem statement. Section 3 is devoted to deriving the structure of the optimal control on a single semi-oscillation. Section 4 investigates the conditions for the existence of a solution and provides analytical expressions for the optimal time. Section 5 constructs the global minimum-time trajectory using Bellman’s principle and presents numerical results together with a comparison to the linear system. The conclusions discuss possible directions for further development of the model.

2. Problem Statement

We consider a nonlinear pendulum-type oscillator with a time-varying natural frequency ω ( t ) serving as the control input. Its dynamics are described by
$$\ddot{x}(t) + \omega^2(t)\,\sin x(t) = 0, \qquad t \in [0, T],$$
where x ( t ) is the angular displacement (state coordinate), x ˙ ( t ) is the angular velocity, and T > 0 is the (unknown) final time of motion.
The control input is the frequency function ω ( t ) , which is subject to the bounds
$$0 < \omega_0 \le \omega(t) \le \omega_1, \qquad t \in [0, T],$$
where ω 0 and ω 1 are fixed constants.
The initial and terminal states are specified by
$$x(0) = x_0, \quad \dot{x}(0) = 0, \qquad x(T) = x_T, \quad \dot{x}(T) = 0,$$
where x_0 ≠ 0 and x_T ≠ 0. The point (x, ẋ) = (0, 0) is an equilibrium of system (1) for any admissible frequency ω(t), since the control enters the equation multiplicatively. Hence a direct transfer through the equilibrium state (0, 0) is impossible and is not considered.
The goal of the study is to transfer the system from the initial state to the terminal state, defined in (3), in minimal time, subject to the constraint (2). The quantities to be determined are the optimal terminal time T and the optimal control ω ( t ) . Collecting all conditions together, we arrive at the following minimum-time optimal control problem:
$$\begin{aligned}
&\ddot{x} + \omega^2 \sin x = 0, \quad 0 \le t \le T,\\
&x(0) = x_0 \ne 0, \quad \dot{x}(0) = 0, \qquad x(T) = x_T \ne 0, \quad \dot{x}(T) = 0,\\
&0 < \omega_0 \le \omega(t) \le \omega_1, \quad 0 \le t \le T,\\
&T \to \min_{\omega(t)}.
\end{aligned}$$
In what follows, we restrict attention to solutions satisfying
$$|x(t)| < \pi, \qquad t \in [0, T].$$
This assumption rules out phase wrapping (slipping), so that the equilibrium position remains at x = 0 .
To solve problem (4), we adopt an approach analogous to that used in [25,26] for optimal control of a linear oscillator. Exploiting the oscillatory nature of the optimal trajectory, the optimal control problem is decomposed into a sequence of similar subproblems, each corresponding to a single semi-oscillation of the optimal trajectory. First, using Pontryagin’s maximum principle [1], we solve the problem for one semi-oscillation. Then, by applying the dynamic programming method [4], we obtain a solution to the global problem.

3. Optimal Control on a Single Semi-Oscillation

Exploiting the oscillatory nature of the trajectories of system (4) for any admissible control and the symmetry with respect to the origin in the phase plane, we decompose the global motion into separate semi-oscillations. By Bellman’s principle of optimality, each semi-oscillation of a globally optimal trajectory must itself be optimal for an appropriately posed two-point boundary-value problem.
In this section, we introduce such an auxiliary problem corresponding to a single semi-oscillation and formulate it as a minimum-time optimal control problem.
We consider a motion that starts at rest at a positive angular displacement and ends at rest at a negative angular displacement, with strictly decreasing angle along the way. This corresponds to one semi-oscillation of the pendulum-like system and leads to the following auxiliary minimum-time optimal control problem:
$$\begin{aligned}
&\ddot{x}(t) + \omega^2(t) \sin x(t) = 0, \quad 0 \le t \le T,\\
&x(0) = A > 0, \quad \dot{x}(0) = 0, \qquad x(T) = B < 0, \quad \dot{x}(T) = 0,\\
&\dot{x}(t) < 0, \quad t \in (0, T),\\
&0 < \omega_0 \le \omega(t) \le \omega_1, \quad 0 \le t \le T,\\
&T \to \min_{\omega(t)}.
\end{aligned}$$
The monotonicity condition x ˙ ( t ) < 0 ensures that x ( t ) is strictly decreasing on ( 0 , T ) and indeed describes a single semi-oscillation from the amplitude A to the amplitude B. In other words, the trajectory passes from a right-hand rest position x = A > 0 to a left-hand rest position x = B < 0 without any additional turning points.
The boundary conditions in (5) prescribe both the coordinate and the velocity at the endpoints (here, zero velocity). They play a crucial role for two reasons:
  • They guarantee that individual semi-oscillations can be smoothly concatenated into a single global trajectory of problem (4), with continuity of both state and velocity at the junction points.
  • They allow us to invoke Bellman’s principle of optimality for the original problem (4): if a trajectory solves (4) optimally, then each of its semi-oscillations must be optimal for the corresponding auxiliary problem (5).
Thus, understanding the optimal control for the auxiliary problem (5) is a central step in solving the global minimum-time problem (4). In the next section, we apply Pontryagin’s maximum principle to (5), derive the structure of the optimal control on a single semi-oscillation, and determine the admissible bang–bang switching patterns.

4. Application of Pontryagin’s Maximum Principle to the Single Semi-Oscillation Problem

We apply Pontryagin’s maximum principle [1] to problem (5). To this end, we introduce the notation x ˙ ( t ) = v ( t ) and rewrite (5) as a system of first-order differential equations:
$$\begin{aligned}
&\dot{x}(t) = v(t), \qquad \dot{v}(t) = -\omega^2(t) \sin x(t),\\
&x(0) = A > 0, \quad v(0) = 0, \qquad x(T) = B < 0, \quad v(T) = 0,\\
&v(t) < 0, \quad t \in (0, T),\\
&0 < \omega_0 \le \omega(t) \le \omega_1, \quad 0 \le t \le T,\\
&T \to \min_{\omega(t)}.
\end{aligned}$$
We write the Pontryagin function:
$$H(\psi_1, \psi_2, x, v, \omega) = \psi_1 v - \psi_2\, \omega^2 \sin x$$
and denote its supremum over the admissible controls:
$$M(\psi_1, \psi_2, x, v) = \sup_{\omega \in [\omega_0, \omega_1]} H(\psi_1, \psi_2, x, v, \omega).$$
According to the maximum principle, if x(t), v(t), and ω(t) constitute a solution to the optimal control problem (6), then the following three conditions are satisfied:
(I)
There exist continuous functions ψ 1 ( t ) and ψ 2 ( t ) , which never simultaneously become zero and are solutions to the adjoint system:
$$\dot{\psi}_1(t) = -\frac{\partial H}{\partial x} = \psi_2(t)\, \omega^2(t) \cos x(t), \qquad \dot{\psi}_2(t) = -\frac{\partial H}{\partial v} = -\psi_1(t).$$
(II)
For any t ∈ [0, T], the maximum condition is satisfied:
$$H\big( \psi_1(t), \psi_2(t), x(t), v(t), \omega(t) \big) = M\big( \psi_1(t), \psi_2(t), x(t), v(t) \big).$$
(III)
For any t ∈ [0, T], the following inequality holds:
$$M\big( \psi_1(t), \psi_2(t), x(t), v(t) \big) \ge 0.$$
From condition (8) for the maximum of the function H, the optimal control is obtained in the form
$$\omega(t) = \begin{cases}
\omega_1, & \psi_2(t) \sin x(t) < 0,\\
\omega_0, & \psi_2(t) \sin x(t) > 0,\\
\text{singular}, & \psi_2(t) \sin x(t) \equiv 0.
\end{cases}$$
Let us show that the singular case in formula (9), i.e., ψ_2(t) sin x(t) ≡ 0 on an interval of nonzero length, is impossible. Assume the contrary: suppose there exists a time interval on which ψ_2(t) sin x(t) ≡ 0. On such an interval, the maximum condition would not determine the value of the optimal control.
By the continuity of the functions ψ_2(t) and x(t), either ψ_2(t) ≡ 0 on some subinterval, or x(t) ≡ 0 on some subinterval.
If ψ_2(t) ≡ 0, then ψ̇_2(t) ≡ 0 as well, and the second equation of the adjoint system (7) yields ψ_1(t) ≡ 0. Thus ψ_1(t) ≡ ψ_2(t) ≡ 0, which contradicts condition (I) of the maximum principle.
If x(t) ≡ 0, then v(t) = ẋ(t) ≡ 0, which contradicts the condition v(t) < 0 for t ∈ (0, T).
This reasoning yields the following statement:
The optimal control ω(t) takes only the two values ω_1 and ω_0, determined by the sign of the product ψ_2(t) sin x(t). The case in which this product vanishes at a single point, or at finitely many points, may be disregarded, since the control value at finitely many points has no effect on the trajectory of the controlled system.
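To make the selection rule concrete, the following sketch (function names are ours, not from the paper) evaluates the bang-bang law (9) and verifies it against a brute-force comparison of the Pontryagin function at the two admissible frequency values:

```python
import numpy as np

def omega_opt(psi2, x, w0, w1):
    """Bang-bang law (9): omega_1 when psi_2*sin(x) < 0, omega_0 when
    psi_2*sin(x) > 0 (the singular case is excluded in the text)."""
    return w1 if psi2 * np.sin(x) < 0.0 else w0

def pontryagin_H(psi1, psi2, x, v, w):
    """Pontryagin function H = psi_1 * v - psi_2 * omega^2 * sin(x)."""
    return psi1 * v - psi2 * w**2 * np.sin(x)

# brute-force check of the maximum condition (II) at random points
rng = np.random.default_rng(0)
for _ in range(1000):
    psi1, psi2, x, v = rng.uniform(-2.0, 2.0, size=4)
    w_star = omega_opt(psi2, x, 0.85, 1.0)
    w_alt = 0.85 if w_star == 1.0 else 1.0
    assert pontryagin_H(psi1, psi2, x, v, w_star) >= \
           pontryagin_H(psi1, psi2, x, v, w_alt) - 1e-12
```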
Now, we consider condition (III). It is most informative at t = 0 and t = T.
At t = 0 , the condition is expressed as
$$M\big( \psi_1(0), \psi_2(0), x(0), v(0) \big) = H\big( \psi_1(0), \psi_2(0), x(0), v(0), \omega(0) \big) = \psi_1(0)\, v(0) - \psi_2(0)\, \omega^2(0) \sin x(0) \ge 0.$$
At t = T , the condition becomes
$$M\big( \psi_1(T), \psi_2(T), x(T), v(T) \big) = H\big( \psi_1(T), \psi_2(T), x(T), v(T), \omega(T) \big) = \psi_1(T)\, v(T) - \psi_2(T)\, \omega^2(T) \sin x(T) \ge 0.$$
Given the boundary conditions v(0) = v(T) = 0, and considering that the control ω(t) is always positive, with 0 < x(0) < π and −π < x(T) < 0, the following additional conditions are derived from (10) and (11):
$$\psi_2(0) \le 0, \qquad \psi_2(T) \ge 0.$$
We now investigate the number of possible switching points of the optimal control. From formula (9) we see that a switching is only possible at points where either x ( t ) = 0 or ψ 2 ( t ) = 0 . On a single semi-oscillation, the trajectory x ( t ) crosses zero exactly once. We therefore study the function ψ 2 ( t ) and determine the maximal possible number of its zeros on the interval [ 0 , T ] . Combining system (5) with the adjoint system (7), we obtain the following system of two differential equations, which governs the adjoint variable ψ 2 ( t ) :
$$\ddot{x}(t) + \omega^2(t) \sin x(t) = 0, \qquad \ddot{\psi}_2(t) + \psi_2(t)\, \omega^2(t) \cos x(t) = 0.$$
We will show that, on any interval where x ( t ) keeps a constant sign, the function ψ 2 ( t ) can have at most one zero. Assume the contrary, and suppose that there exist at least two points at which ψ 2 ( t ) vanishes (see Figure 1). In the absence of switchings, the control is constant, and the solution of the first equation of system (13) is a periodic function. Consider an interval of length equal to half of this period, bounded by two consecutive zeros of x ( t ) (for the auxiliary optimal control problem, we are in fact interested in an even shorter interval, corresponding to one quarter of the period). Note that the second equation of system (13) can also be regarded as an oscillation equation with a time-varying frequency determined by x ( t ) . Accordingly, the minimal distance between consecutive zeros of ψ 2 ( t ) is achieved at the maximal frequency, that is, at the maximal value of the coefficient cos x ( t ) (see Sturm–Picone comparison theorem [27]), which corresponds to the minimal possible value of | x ( t ) | . This implies that, in order to attain the minimal distance between zeros of ψ 2 ( t ) , its first zero must coincide with a zero of x ( t ) , as shown in Figure 1. This configuration corresponds to the following Cauchy problem:
$$\begin{aligned}
&\ddot{x}(t) + \omega^2(t) \sin x(t) = 0, \qquad \ddot{\psi}_2(t) + \psi_2(t)\, \omega^2(t) \cos x(t) = 0,\\
&x(0) = 0, \quad \dot{x}(0) = v_0 > 0, \qquad \psi_2(0) = 0, \quad \dot{\psi}_2(0) = 1.
\end{aligned}$$
Note that the condition v 0 > 0 follows from the symmetry of the first equation with respect to the point x = 0 and the evenness of the function cos x , while the condition ψ ˙ 2 ( 0 ) = 1 follows from the possibility of normalizing the adjoint variable.
We also note that we are interested in an interval where the control keeps a constant sign. In this case, by the time rescaling t = τ / ω one can eliminate ω from the equations, so we may assume ω = 1 . Hence system (14) depends only on a single unknown parameter v 0 . We now study this dependence.
Let T x denote the first nonzero instant such that x ( T x ) = 0 , and let T ψ 2 denote the first nonzero instant such that ψ 2 ( T ψ 2 ) = 0 (see Figure 1). We compute these quantities as functions of v 0 by numerically solving the Cauchy problem (14) and plotting the resulting dependencies.
The numerical results in Figure 2 show that the configuration depicted in Figure 1 is impossible, since the distance between consecutive zeros of ψ_2(t) is always larger than the distance between consecutive zeros of x(t). Moreover, both distances tend to π as v_0 → 0, because in this limit the dynamics approach those of the corresponding linear system, for which these distances are equal [25,26]. For larger values of v_0 (specifically v_0 > 2) one observes a transition from oscillatory to rotational motion, which is incompatible with the constraint |x(t)| < π.
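The computation behind Figure 2 can be reproduced, under our choice of integrator and step size, by integrating the Cauchy problem (14) with ω = 1 and locating the first zeros of x(t) and ψ_2(t):

```python
import numpy as np

def first_zeros(v0, dt=1e-3, t_max=10.0):
    """Integrate the Cauchy problem (14) with omega = 1 by classical RK4 and
    return (T_x, T_psi2), the first positive zeros of x(t) and psi_2(t).
    State y = (x, x', psi_2, psi_2')."""
    def f(y):
        x, v, p, q = y
        return np.array([v, -np.sin(x), q, -p * np.cos(x)])
    y = np.array([0.0, v0, 0.0, 1.0])
    t, Tx, Tp = 0.0, None, None
    while t < t_max and (Tx is None or Tp is None):
        k1 = f(y); k2 = f(y + 0.5*dt*k1); k3 = f(y + 0.5*dt*k2); k4 = f(y + dt*k3)
        y_new = y + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
        if Tx is None and y[0] > 0.0 and y_new[0] <= 0.0:
            Tx = t + dt * y[0] / (y[0] - y_new[0])   # linear interpolation
        if Tp is None and y[2] > 0.0 and y_new[2] <= 0.0:
            Tp = t + dt * y[2] / (y[2] - y_new[2])
        y, t = y_new, t + dt
    return Tx, Tp
```

For a moderate initial velocity the run confirms the ordering claimed in the text: the first zero of ψ_2(t) lies beyond the first zero of x(t).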
Thus, we have shown that in problem (5) the optimal control can have at most one switching in each region where x ( t ) keeps a constant sign, and therefore at most three switchings in total (two at zeros of ψ 2 ( t ) and one at the zero of x ( t ) ).
Furthermore, the case of three switchings is also impossible: in that situation we would have ψ_2(0) < 0 and ψ_2(T) > 0, and from conditions (9) and (12) it follows that the optimal control on the initial and terminal segments must take the same value ω_1, which contradicts the odd number of switchings.
As a result, taking into account conditions (12), we conclude that the optimal control on a single semi-oscillation can only have one of the patterns shown in Figure 3. The remaining three control types, which do not satisfy all the conditions of the maximum principle, are depicted in Figure 4. Note that this selection of admissible control patterns coincides with the problem studied in [25,26], since the signs of x ( t ) and sin x ( t ) coincide on the interval ( π , π ) .
We have determined the structure of the optimal control on a single semi-oscillation. It remains to identify for which boundary conditions problem (5) admits a solution, and which control type corresponds to the given boundary values.

5. Existence of Solution for One Semi-Oscillation

Let us consider the question of the existence of a solution for one semi-oscillation, i.e., for which boundary values there exists a control that transfers the system from the initial state to the final state in one semi-oscillation.
We transform the first equation of problem (5), taking into account that the control ω(t) is a piecewise constant function. On an interval where the control is constant, we multiply the equation by ẋ(t):
$$\dot{x}\,\ddot{x} + \omega^2 \dot{x} \sin x = 0.$$
Noting that $\frac{d(\dot{x}^2)}{dt} = 2\dot{x}\ddot{x}$ and $\frac{d(\cos x)}{dt} = -\dot{x}\sin x$, we obtain the equation:
$$\frac{1}{2}\,\frac{d(\dot{x})^2}{dt} - \omega^2\, \frac{d\cos x}{dt} = 0.$$
Integrating the last equation with ω = const , we have:
$$(\dot{x})^2 = 2\omega^2 (h + \cos x), \qquad h = \text{const}.$$
Integrating the obtained equation once more on any interval [ t 1 , t 2 ] where the control is constant, we arrive at an expression for time in terms of an elliptic integral (see [28]):
$$t_2 - t_1 = -\frac{1}{\sqrt{2}\,\omega} \int_{x(t_1)}^{x(t_2)} \frac{dx}{\sqrt{\cos x + h}}.$$
The minus sign appears because the function x(t) is decreasing. Note also that x(t) enters as the upper limit of an elliptic integral, so it cannot be expressed analytically in closed form.
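As an illustration, the integral in (16) with h = −cos A (a segment starting from rest at amplitude A and ending at x = 0) can be evaluated with numpy alone: the standard substitution sin(x/2) = sin(A/2) sin θ removes the endpoint singularity and reduces it to the complete elliptic integral K(sin(A/2)). The function name below is ours, not from the paper:

```python
import numpy as np

def quarter_period(A, omega, n=20000):
    """Time to move from rest at amplitude A to x = 0 under constant omega,
    i.e. (1/(sqrt(2)*omega)) * int_0^A dx / sqrt(cos x - cos A).
    The substitution sin(x/2) = sin(A/2)*sin(theta) removes the endpoint
    singularity and gives K(sin(A/2)) / omega, evaluated here by the
    midpoint rule on a smooth integrand."""
    k = np.sin(A / 2.0)
    theta = (np.arange(n) + 0.5) * (np.pi / 2.0) / n
    integrand = 1.0 / np.sqrt(1.0 - (k * np.sin(theta))**2)
    return integrand.sum() * (np.pi / 2.0) / n / omega
```

In the small-amplitude limit this reproduces the linear quarter period π/(2ω), and it grows monotonically with A, as expected for the pendulum.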
The constant h in (15) is determined from the differentiability condition of the function x ( t ) for t [ 0 , T ] and the satisfaction of the boundary conditions.
Now, based on what has been said, we determine the missing parameter values (the constant in (15), the switching points ξ , τ ) for the optimal control and optimal trajectory. Let us first consider control type 4 from Figure 3, which also includes type 3 when ξ = τ and type 5 when ξ = T . In this case, the optimal control ω ( t ) has the form:
$$\omega(t) = \begin{cases}
\omega_1, & 0 \le t \le \tau,\\
\omega_0, & \tau \le t \le \xi,\\
\omega_1, & \xi \le t \le T,
\end{cases}$$
where the switching points τ and ξ are unknown; it is only known that x ( τ ) = 0 .
Next, using the form of control (17) and the boundary conditions from (5), we determine the values of the constant h in equation (15) on each interval where the control is constant:
  • On the interval [ 0 , τ ] we have:
    $$x(0) = A, \quad \dot{x}(0) = 0, \quad (\dot{x}(0))^2 = 2\omega_1^2 \big( h_1 + \cos x(0) \big) = 0 \;\Longrightarrow\; h_1 = -\cos A.$$
  • On the interval [ ξ , T ] we have:
    $$x(T) = B, \quad \dot{x}(T) = 0, \quad (\dot{x}(T))^2 = 2\omega_1^2 \big( h_3 + \cos x(T) \big) = 0 \;\Longrightarrow\; h_3 = -\cos B.$$
  • On the interval [ τ , ξ ] , using the condition x ( τ ) = 0 , and the continuity and differentiability of the function x ( t ) at the switching points, we have the system:
    $$\begin{aligned}
    2\omega_1^2 (1 - \cos A) &= 2\omega_0^2 (1 + h_2) && \text{at the point } t = \tau,\\
    2\omega_0^2 \big( \cos x(\xi) + h_2 \big) &= 2\omega_1^2 \big( \cos x(\xi) - \cos B \big) && \text{at the point } t = \xi.
    \end{aligned}$$
    Denoting γ = ω_1/ω_0 > 1, we write the solution of system (18) in the form:
    $$h_2 = \gamma^2 (1 - \cos A) - 1, \qquad \cos x(\xi) = 1 + \frac{\gamma^2}{\gamma^2 - 1} \big( \cos B - \cos A \big).$$
    Note that the switching moment ξ is defined implicitly by the second equation of system (19).
Similarly, we consider control type 2 from Figure 3, which includes control type 1 when ξ = 0 . We obtain equations for the constant h 2 and the switching point ξ :
$$h_2 = \gamma^2 (1 - \cos B) - 1, \qquad \cos x(\xi) = 1 + \frac{\gamma^2}{\gamma^2 - 1} \big( \cos A - \cos B \big).$$
We express B ( ξ ) from (20) and (19) as a function of the variable ξ :
$$B(\xi) = \begin{cases}
-\arccos\left[ \cos A + \dfrac{\gamma^2 - 1}{\gamma^2} \big( 1 - \cos x(\xi) \big) \right], & \xi \le \tau \ \ \text{(from (20))},\\[6pt]
-\arccos\left[ \cos A - \dfrac{\gamma^2 - 1}{\gamma^2} \big( 1 - \cos x(\xi) \big) \right], & \tau \le \xi \ \ \text{(from (19))}.
\end{cases}$$
For ξ ≤ τ, the function x(ξ) decreases from A to 0, and |B(ξ)| increases monotonically for ξ ∈ [0, τ]. For ξ ≥ τ, the function x(ξ) decreases from 0 to B, and |B(ξ)| continues to increase monotonically (see Figure 5).
From the monotonicity of the function B(ξ), we obtain that for a fixed value of A, the minimum value of |B| is achieved with control type 1 from Figure 3 and equals
$$B_{\min}(A) = 2 \arcsin\left( \frac{1}{\gamma} \sin \frac{A}{2} \right).$$
The maximum value of |B| is achieved with control type 5 from Figure 3 and equals
$$B_{\max}(A) = \begin{cases}
2 \arcsin\left( \gamma \sin \dfrac{A}{2} \right), & \gamma \sin \dfrac{A}{2} \le 1,\\[6pt]
\pi, & \gamma \sin \dfrac{A}{2} > 1.
\end{cases}$$
The case γ sin(A/2) > 1 corresponds to a situation where phase slip is possible and is not considered in this work. Thus, the optimal control problem for one semi-oscillation has a solution in the case
$$B_{\min}(A) \le |B| \le B_{\max}(A).$$
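The bounds (21) are cheap to evaluate numerically; the following helper (ours, not from the paper) returns them as magnitudes and can be used to test solvability of the semi-oscillation problem:

```python
import numpy as np

def B_bounds(A, w0, w1):
    """Reachable amplitude bounds of condition (21) for one semi-oscillation,
    returned as magnitudes: B_min corresponds to control type 1
    (omega_0 first), B_max to type 5 (omega_1 first)."""
    g = w1 / w0                       # gamma > 1
    s = np.sin(A / 2.0)
    b_min = 2.0 * np.arcsin(s / g)
    b_max = 2.0 * np.arcsin(g * s) if g * s <= 1.0 else np.pi
    return b_min, b_max
```

Both bounds express the same energy balance ω² sin²(|B|/2) = const across the two constant-frequency arcs, with the fast frequency applied on one side or the other.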
Figure 6 shows examples of the domains of parameter values A and B satisfying condition (21) for different values of ω 0 .

6. Solution of the Optimal Control Problem for One Semi-Oscillation

In the previous section, constraints on the boundary conditions were obtained under which the system is controllable for one semi-oscillation. From the monotonicity of the function B ( ξ ) , shown in the previous section, we obtain that the type of optimal control and the switching moments are uniquely determined by the boundary conditions.
For the case |B| ≥ A, we have control type 3, 4, or 5 from Figure 3. In this case, from formulas (16) and (19), we obtain the solution of the optimal control problem (5):
$$\begin{aligned}
&\tau = \frac{1}{\sqrt{2}\,\omega_1} \int_0^A \frac{dx}{\sqrt{\cos x - \cos A}} \quad \text{(first switching point)},\\
&h_2 = \gamma^2 (1 - \cos A) - 1, \qquad x(\xi) = -\arccos\left[ 1 + \frac{\gamma^2}{\gamma^2 - 1} \big( \cos B - \cos A \big) \right],\\
&\xi = \tau + \frac{1}{\sqrt{2}\,\omega_0} \int_{x(\xi)}^{0} \frac{dx}{\sqrt{\cos x + h_2}} \quad \text{(second switching point)}.
\end{aligned}$$
Optimal control:
$$\omega(t) = \begin{cases}
\omega_1, & 0 \le t \le \tau,\\
\omega_0, & \tau \le t \le \xi,\\
\omega_1, & \xi \le t \le T,
\end{cases}$$
where τ , ξ , h 2 are determined by formulas (22), and the time T is determined by the formula:
$$T(A, B) = \frac{1}{\sqrt{2}\,\omega_1} \int_0^A \frac{dx}{\sqrt{\cos x - \cos A}} + \frac{1}{\sqrt{2}\,\omega_0} \int_{x(\xi)}^{0} \frac{dx}{\sqrt{\cos x + h_2}} + \frac{1}{\sqrt{2}\,\omega_1} \int_{B}^{x(\xi)} \frac{dx}{\sqrt{\cos x - \cos B}}.$$
Note that the switching moment ξ is defined implicitly by the last equation in (22), and the optimal trajectory x ( t ) can be found, for example, by numerical integration of the differential equation in (4).
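Under our reading of formulas (22)–(24) (the reconstructed integration limits are our assumption), the optimal time T(A, B) for |B| ≥ A can be evaluated with numpy alone; the two singular endpoint integrals are regularized by the substitution sin(x/2) = k sin θ:

```python
import numpy as np

def ell(k, th0, th1, n=20000):
    """Midpoint rule for int_{th0}^{th1} dtheta / sqrt(1 - k^2 sin^2 theta)."""
    th = th0 + (np.arange(n) + 0.5) * (th1 - th0) / n
    return np.sum(1.0 / np.sqrt(1.0 - (k * np.sin(th))**2)) * (th1 - th0) / n

def semi_osc_time(A, B, w0, w1, n=20000):
    """T(A, B) of formula (24) for |B| >= A (control omega_1, omega_0, omega_1)."""
    g = w1 / w0
    b = abs(B)
    h2 = g**2 * (1.0 - np.cos(A)) - 1.0
    cos_xi = 1.0 + g**2 / (g**2 - 1.0) * (np.cos(b) - np.cos(A))
    c = np.arccos(np.clip(cos_xi, -1.0, 1.0))          # |x(xi)|
    # leg 1: A -> 0 at omega_1 (complete elliptic integral)
    t1 = ell(np.sin(A / 2.0), 0.0, np.pi / 2.0, n) / w1
    # leg 2: 0 -> -|x(xi)| at omega_0 (no endpoint singularity)
    x = (np.arange(n) + 0.5) * c / n
    t2 = np.sum(1.0 / np.sqrt(np.cos(x) + h2)) * c / n / (np.sqrt(2.0) * w0)
    # leg 3: -|x(xi)| -> B at omega_1 (incomplete elliptic integral)
    k = np.sin(b / 2.0)
    th_c = np.arcsin(np.sin(c / 2.0) / k) if k > 0.0 else 0.0
    t3 = ell(k, th_c, np.pi / 2.0, n) / w1
    return t1 + t2 + t3
```

For |B| = A the middle leg vanishes and the result collapses to the half period at the fast frequency, 2K(sin(A/2))/ω_1, a convenient consistency check; in the small-amplitude limit it approaches the linear half period π/ω_1.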
For the case |B| < A, we have control type 1 or 2 from Figure 3. From formulas (16) and (20), we obtain the solution of problem (5):
$$\begin{aligned}
&h_2 = \gamma^2 (1 - \cos B) - 1, \qquad x(\xi) = \arccos\left[ 1 + \frac{\gamma^2}{\gamma^2 - 1} \big( \cos A - \cos B \big) \right],\\
&\xi = \frac{1}{\sqrt{2}\,\omega_1} \int_{x(\xi)}^{A} \frac{dx}{\sqrt{\cos x - \cos A}} \quad \text{(first switching point)},\\
&\tau = T - \frac{1}{\sqrt{2}\,\omega_1} \int_{B}^{0} \frac{dx}{\sqrt{\cos x - \cos B}} \quad \text{(second switching point)},
\end{aligned}$$
where T is the optimal time, determined by the formula:
$$T(A, B) = \frac{1}{\sqrt{2}\,\omega_1} \int_{x(\xi)}^{A} \frac{dx}{\sqrt{\cos x - \cos A}} + \frac{1}{\sqrt{2}\,\omega_0} \int_{0}^{x(\xi)} \frac{dx}{\sqrt{\cos x + h_2}} + \frac{1}{\sqrt{2}\,\omega_1} \int_{B}^{0} \frac{dx}{\sqrt{\cos x - \cos B}}.$$
The optimal control is given by the formula:
$$\omega(t) = \begin{cases}
\omega_1, & 0 \le t \le \xi,\\
\omega_0, & \xi \le t \le \tau,\\
\omega_1, & \tau \le t \le T.
\end{cases}$$
Figure 7 presents a graph of the optimal time function for one semi-oscillation T ( A , B ) , given by formulas (24) and (26), for all values of A and B satisfying condition (21).

7. Solution to the Main Optimal Control Problem

Let x ( t ) be an arbitrary trajectory (not necessarily optimal) that fulfills the equation and boundary conditions of the main problem (4) and is composed of an unknown number n of semi-oscillations (Figure 8). The time moments at which the derivative becomes zero are denoted by t i , with the corresponding amplitudes given by x i = x ( t i ) .
According to Bellman's principle of optimality, on each semi-oscillation interval [t_i, t_{i+1}] the trajectory must be optimal, i.e., a solution to problem (5). We can therefore write the total time as the sum of the optimal times of the individual semi-oscillations, using formulas (24) and (26) from the previous section:
$$T = \sum_{i=0}^{n-1} T\big( |x_i|, |x_{i+1}| \big).$$
Thus, the problem reduces to an optimization where the objective is to find the quantity of semi-oscillations n and the intermediate amplitude values x i ( i = 1 , , n 1 ) that yield the minimal total time, while simultaneously satisfying constraints
$$(x_i, x_{i+1}) \in D, \qquad i = 0, \ldots, n-1,$$
where
$$D = \left\{ (A, B) \in \mathbb{R}^2 : A B < 0, \ \ 2 \arcsin\left( \frac{1}{\gamma} \sin \frac{|A|}{2} \right) \le |B| \le 2 \arcsin\left( \gamma \sin \frac{|A|}{2} \right) \right\},$$
which guarantee the existence of a trajectory on each of the semi-oscillations (see formula (21)).
Thus, the optimal control problem (4) is reduced to the finite-dimensional minimization problem (28)–(29). Owing to the complexity of the resulting expressions, this minimization problem is addressed numerically by dynamic programming.
Let us fix the initial state x 0 and solve the problem for various terminal states x T . We introduce the so-called Bellman function V ( x T ) , which is equal to the optimal time in problem (4). To find this function, we will apply the dynamic programming method and consider the following iterative process. Let us define the function
$$V_0(x_T) = \begin{cases}
0, & x_T = x_0,\\
\text{undefined}, & x_T \ne x_0.
\end{cases}$$
This initial function defines the optimal time for zero semi-oscillations. Next, for k = 1, 2, …, we define the iterative process by the formula
$$V_k(x_T) = \begin{cases}
\min\Big\{ V_{k-1}(x_T),\ \min\limits_{x \in X_k(x_T)} \big[ V_{k-1}(x) + T(|x|, |x_T|) \big] \Big\}, & \text{if } V_{k-1}(x_T) \text{ is defined or } X_k(x_T) \ne \varnothing,\\
\text{undefined}, & \text{otherwise},
\end{cases}$$
where $X_k(x_T) = \{ x \in \mathbb{R} : (x, x_T) \in D,\ V_{k-1}(x) \text{ is defined} \}$, and an undefined term is omitted from the outer minimum.
The function V k ( x T ) defines the optimal time V ( x T ) in problem (4) in no more than k semi-oscillations.
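The steps above can be sketched on a uniform grid. In this illustration the semi-oscillation time is a crude constant placeholder (roughly the linear half-period) rather than the elliptic-integral formulas (24)/(26), so the resulting values are only indicative; np.inf encodes "undefined":

```python
import numpy as np

w0, w1 = 0.85, 1.0            # frequency bounds of Section 8
gamma = w1 / w0
x0 = 0.5                      # fixed initial state
grid = np.linspace(-np.pi + 0.01, np.pi - 0.01, 201)

def feasible(a, b):
    """(a, b) in the set D of (29): opposite signs plus the amplitude
    bounds of condition (21)."""
    if a * b >= 0.0:
        return False
    s = np.sin(abs(a) / 2.0)
    return 2.0*np.arcsin(s/gamma) <= abs(b) <= 2.0*np.arcsin(min(gamma*s, 1.0))

def t_semi(a, b):
    """Placeholder semi-oscillation time (a constant near the linear
    half-period); the paper uses the elliptic formulas (24)/(26) here."""
    return np.pi / np.sqrt(w0 * w1)

V = np.full(grid.size, np.inf)            # np.inf encodes 'undefined'
i0 = int(np.argmin(np.abs(grid - x0)))
V[i0] = 0.0                               # V_0 of Eq. (30)

for _ in range(10):                       # ten iterations, as in Section 8
    V_new = V.copy()
    for j, xT in enumerate(grid):
        cand = [V[i] + t_semi(x, xT) for i, x in enumerate(grid)
                if np.isfinite(V[i]) and feasible(x, xT)]
        if cand:
            V_new[j] = min(V_new[j], min(cand))
    V = V_new
```

Each iteration enlarges the set of grid points with a finite value, mirroring the growth of the reachable set with the allowed number of semi-oscillations.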

8. Numerical Calculations and Comparison with the Linear Case

We consider the numerical implementation of the iterative method defined by formula (30). The calculations were performed in Python using the standard libraries numpy and matplotlib. The following parameter values were used: ω_0 = 0.85, ω_1 = 1, along with a fixed initial state x_0 = 0.5. The results of the first ten iterations are shown in Figure 9. The values of the function V_k(x_T) were approximated by their values at the nodes of a uniform grid on the interval x_T ∈ (−π, π).
Using the dynamic programming method, we can also construct a family of optimal trajectories emanating from a given initial point x 0 and terminating at all possible final values x T , for a prescribed maximum number of semi-oscillations. This is because, at each step of the iterative scheme (30), in addition to the value of the Bellman function, the minimization procedure actually also yields the optimal control (its type and switching times) on the current k-th semi-oscillation.
The computational results for at most three semi-oscillations are shown in Figure 10 (trajectories as functions of time) and Figure 11 (phase portrait of the trajectories). Here, the segments of the trajectories corresponding to the control ω 1 are highlighted in red, and those corresponding to the control value ω 0 are highlighted in blue. In the phase portrait (Figure 11), arrows additionally indicate the direction of motion corresponding to increasing time t.
These figures illustrate the overall behavior of the optimal trajectories and the regions of constant control in the phase space.
Next, in order to study in more detail the properties of the optimal control in problem (4), we consider three additional examples for prescribed initial and terminal states.
In the first example we choose x 0 = 1.5 , x T = 1.6 , ω 0 = 0.85 , ω 1 = 1 . The optimal control and the corresponding optimal trajectory are shown in Figure 12. Note that the types of control on the first and second semi-oscillations do not coincide. On the first semi-oscillation we have a type 2 control, while on the second we have a type 4 control (see Figure 3). The amplitudes of the semi-oscillations also vary non-monotonically. Here we observe a substantial difference from the linear case [25,26], where the control was a periodic function and the amplitudes of the semi-oscillations formed a geometric progression.
In the second example we choose x 0 = 0.5 , x T = 0.35 , ω 0 = 0.85 , ω 1 = 1 . The optimal control and the corresponding optimal trajectory are shown in Figure 13. In this case, for relatively small values of the variable x we have sin x x , and the optimal control, although not strictly periodic, is close to periodic.
In the last example we choose x 0 = 1.5 , x T = 1 , ω 0 = 0.85 , ω 1 = 1 . The optimal control and the corresponding optimal trajectory are shown in Figure 14. In this case, for relatively large values of the variable x, the optimal control strongly deviates from periodic behavior. There is a noticeable difference in the duration of individual semi-oscillations.

9. Conclusions

In this work, we solved the minimum-time optimal control problem for a nonlinear pendulum-type oscillator, where the control parameter is its natural frequency ω ( t ) constrained to the interval [ ω 0 , ω 1 ] . The objective was to transfer the system from one arbitrary rest state to another in the shortest possible time.
By applying Pontryagin’s maximum principle and Bellman’s principle of optimality, we decomposed the original problem into a sequence of similar subproblems corresponding to single semi-oscillations. For each such subproblem, it was rigorously shown that the optimal control is of relay (bang–bang) type and contains no more than two switchings. This, in turn, reduces the dynamics on each semi-oscillation to three segments with constant control.
A key analytical result is the derivation of expressions for the switching times and the total duration of a semi-oscillation in terms of elliptic integrals. This made it possible to reduce the original infinite-dimensional optimal control problem to a finite-dimensional minimization problem, namely the search for an optimal sequence of intermediate semi-oscillation amplitudes. The resulting finite-dimensional problem was solved numerically using dynamic programming, which enabled us to construct the Bellman function V ( x T ) (the optimal-time function) and, simultaneously, to recover the optimal control in the original problem.
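The value-iteration skeleton behind this reduction can be sketched as follows. Here `semi_osc_time` is a crude placeholder (an average of constant-frequency half-periods) standing in for the paper's elliptic-integral expression T(A, B), and the sketch deliberately ignores the feasibility region of (A, B) pairs (Figure 6) and the sign alternation of successive semi-oscillations; only the dynamic-programming structure is faithful.

```python
import math

def _half_period(amp, w):
    """Half-period of x'' + w^2 sin x = 0 at rest amplitude amp,
    via the arithmetic-geometric mean evaluation of K(k)."""
    k = math.sin(amp / 2.0)
    a, g = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - g) > 1e-14:
        a, g = (a + g) / 2.0, math.sqrt(a * g)
    return math.pi / (a * w)

def semi_osc_time(a, b, w0=0.85, w1=1.0):
    """Placeholder for the paper's elliptic-integral duration T(A, B)."""
    return 0.5 * (_half_period(a, w1) + _half_period(b, w0))

def bellman(x0, x_target, grid, max_semi=10):
    """Value iteration over sequences of intermediate rest amplitudes:
    V_k(B) = min_A [ V_{k-1}(A) + T(A, B) ]; answer: min over k <= max_semi."""
    INF = float("inf")
    V = {a: INF for a in grid}
    V[x0] = 0.0
    best = INF
    for _ in range(max_semi):
        V = {b: min(V[a] + semi_osc_time(a, b) for a in grid) for b in grid}
        best = min(best, V[x_target])
    return best

grid = [round(0.3 + 0.05 * i, 2) for i in range(25)]  # rest amplitudes 0.30..1.50
print(bellman(1.5, 0.35, grid, max_semi=3))
```

With the true T(A, B) in place of the placeholder and the feasibility constraints enforced, the same loop reproduces the construction of the Bellman function V(x_T) described above.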
Numerical examples confirmed the effectiveness of the proposed approach. A particularly important outcome is the demonstration of qualitative differences from the analogous linear problem (where sin x ≈ x). In the nonlinear case, the optimal control is generally non-periodic: as the computations show, the durations of semi-oscillations and the control structure (switching type) may vary irregularly with the current amplitude, reflecting the intrinsic nonlinearity of sin x. From a numerical perspective, it is natural to compare the proposed dynamic programming scheme with state-of-the-art direct optimal control and NMPC solvers [29,30,31].
Overall, the paper provides a structurally explicit and computationally efficient solution to the minimum-time frequency control problem for a nonlinear oscillator, combining PMP-based switching analysis with elliptic-integral timing and dynamic programming for multi–semi-oscillation transfers. This work lays a theoretical foundation for further research. The most promising directions include:
  • Incorporating dissipative forces into the model, primarily Coulomb and viscous friction, which will complicate the Hamiltonian but bring the model significantly closer to real physical systems.
  • Introducing constraints on the rate of change of the control (slew-rate constraints), i.e., | ω ˙ ( t ) | K , which is more realistic from an engineering point of view.
  • Extending the proposed approach to systems with multiple degrees of freedom, such as spherical pendulums or models of robotic manipulators.

Author Contributions

Conceptualization, V.T.; investigation, D.K., V.I., and V.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The code developed for this paper is available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Nomenclature

t: time variable
x(t), ẋ(t), ẍ(t): coordinate, velocity, and acceleration
x(0) = x0, ẋ(0) = 0, x(T) = xT, ẋ(T) = 0: boundary conditions
ω(t): control function (the natural frequency)
ω0, ω1: lower and upper limits of the control function
T: optimal time
ξ, τ: control switching points
A, B: boundary conditions (amplitudes) for one semi-oscillation

References

  1. Pontryagin, L.S. The Mathematical Theory of Optimal Processes; Routledge: London, UK, 1987. [CrossRef]
  2. Athans, M.; Falb, P. Optimal Control; McGraw–Hill: New York, NY, USA, 1966.
  3. Bryson, A.E.; Ho, Y.-C. Applied Optimal Control; Hemisphere: Washington, DC, USA, 1975. [CrossRef]
  4. Bellman, R. Dynamic Programming; Princeton University Press: Princeton, NJ, USA, 1957.
  5. Tang, W.; Ma, R.; Wang, W.; Gao, H. Optimization-Based Input-Shaping Swing Control of Overhead Cranes. Appl. Sci. 2023, 13, 9637. [CrossRef]
  6. Tang, W.; Zhao, E.; Sun, L.; Gao, H. An Active Swing Suppression Control Scheme of Overhead Cranes Based on Input-Shaping Model Predictive Control. Syst. Sci. Control Eng. 2023, 11, 2188401. [CrossRef]
  7. Cao, X.-H.; Meng, C.; Zhou, Y.; Zhu, M. An Improved Negative Zero-Vibration Anti-Swing Control Strategy for Grab Ship Unloader based on elastic wire rope model. Mech. Ind. 2021, 22, 45. [CrossRef]
  8. Wu, Q.; Wang, X.; Hua, L.; Xia, M. Improved Time-Optimal Anti-Swing Control for Double-Pendulum Systems. Mech. Syst. Signal Process. 2021, 151, 107444. [CrossRef]
  9. Huang, W.; Niu, W.; Zhou, X.; Gu, W. Anti-Sway Control of Variable Rope-Length Container Crane Based on Phase Plane Trajectory Planning. J. Vib. Control 2024, 30, 1227–1240. [CrossRef]
  10. Ouyang, H.; Tian, Z.; Yu, L.; Zhang, G. Motion Planning for Payload Swing Suppression in Tower Cranes. J. Frankl. Inst. 2020, 357, 8299–8320. [CrossRef]
  11. Li, G.; Ma, X.; Li, Z.; Li, Y. Optimal Trajectory Planning for Double-Pendulum Tower Cranes. IEEE/ASME Trans. Mechatron. 2023, 28, 919–932. [CrossRef]
  12. Tian, D.; Liu, Z. Period/Frequency Estimation of Nonlinear Oscillators. J. Low Freq. Noise Vib. Act. Control 2019. [CrossRef]
  13. He, C.-H. A Complement to Nonlinear Oscillator Frequency Estimation. J. Low Freq. Noise Vib. Act. Control 2019. [CrossRef]
  14. Zhang, L.; Gepreel, K.; Yu, J. He’s Frequency Formulation for Fractal–Fractional Nonlinear Oscillators. Front. Phys. 2025, 13, 1542758. [CrossRef]
  15. Liu, C.; Zhou, T.; Gong, Z.; Yi, X.; Teo, K.L.; Wang, S. Robust Optimal Control of Fractional Nonlinear Systems. Chaos Solitons Fractals 2023, 175, 113964. [CrossRef]
  16. He, J.-H. Variational Iteration Method–A Kind of Non-Linear Analytical Technique: Some Examples. Int. J. Nonlin. Mech. 1999, 34, 699–708. [CrossRef]
  17. Wei, Q.; Liu, D.; Lin, H. Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems. IEEE Trans. Cybern. 2016, 46, 840–853. [CrossRef]
  18. Wei, Q.; Li, T. Constrained Adaptive Dynamic Programming. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 3251–3264. [CrossRef]
  19. Wang, J.; Zhang, P.; Wang, Y.; Ji, Z. Adaptive Dynamic Programming for Nonlinear Systems with Input Delay. Nonlinear Dyn. 2023, 111, 19133–19149. [CrossRef]
  20. Lewis, F.L.; Vrabie, D.; Syrmos, V. Optimal Control and Reinforcement Learning; Wiley: Hoboken, NJ, USA, 2012. [CrossRef]
  21. Bertsekas, D. Dynamic Programming and Optimal Control, 4th ed.; Athena Scientific: Belmont, MA, USA, 2017.
  22. Gao, S.; Liu, E.; Wu, Z.; Li, J.; Zhang, M. Neural ODE-Based Frequency Stability Assessment and Control of Energy Storage Systems. Appl. Sci. 2025, 15, 12048. [CrossRef]
  23. Shah, A.A.; Han, X.; Armghan, H.; Almani, A.A. A Nonlinear Integral Backstepping Controller to Regulate the Voltage and Frequency of an Islanded Microgrid Inverter. Electronics 2021, 10, 660. [CrossRef]
  24. Meng, D.; Lu, W.; Xu, W.; She, Y.; Wang, X.; Liang, B.; Yuan, B. Vibration Suppression Control of Free-Floating Space Robots with Flexible Appendages for Autonomous Target Capturing. Acta Astronaut. 2018, 151, 904–918. [CrossRef]
  25. Kamzolkin, D.; Ilyutko, V.; Ternovski, V. Optimal Control of a Harmonic Oscillator with Parametric Excitation. Mathematics 2024, 12, 3981. [CrossRef]
  26. Kamzolkin, D.; Ternovski, V. Time-Optimal Motions of Systems with Viscous Friction. Mathematics 2024, 12, 1485. [CrossRef]
  27. Diaz, J.B.; McLaughlin, J.R. Sturm Comparison Theorems for Ordinary and Partial Differential Equations. Bull. Amer. Math. Soc. 1969, 75, 335–339.
  28. Abramowitz, M.; Stegun, I.A., Eds. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables; National Bureau of Standards: Washington, DC, USA, 1964.
  29. Betts, J.T. Practical Methods for Optimal Control and Estimation Using Nonlinear Programming, 2nd ed.; SIAM: Philadelphia, PA, USA, 2010. [CrossRef]
  30. Ross, I.M.; Fahroo, F. Pseudospectral Knotting Methods for Solving Optimal Control Problems. J. Guid. Control Dyn. 2004, 27, 397–405. [CrossRef]
  31. Diehl, M.; Ferreau, H.J.; Haverbeke, N. Efficient Numerical Methods for Nonlinear MPC and Moving Horizon Estimation. In Nonlinear Model Predictive Control; Findeisen, R., Allgöwer, F., Biegler, L.T., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 391–417. [CrossRef]
Figure 1. Hypothetical solution to the equations (14), representing the zeros of functions x ( t ) and ψ 2 ( t ) (red dots). (We have proved that such an arrangement of zeros of the two functions is impossible).
Figure 2. Distances between zeros of the functions x ( t ) and ψ 2 ( t ) as functions of the parameter v 0 .
Figure 3. (1)-(5) – all possible variants of optimal control encountered in problem (5). Here x ( τ ) = 0 .
Figure 4. (1)-(3) – nonoptimal control types encountered while applying PMP to problem (5). Here x ( τ ) = 0 .
Figure 5. Plot of the function B ( ξ ) . Here, A = 1 , ω 0 = 0.5 , ω 1 = 1 , and the number of semi-oscillations is one.
Figure 6. Domains of values A and B (aquamarine color) for which problem (5) has a solution. (a) ω 0 = 0.9 , ω 1 = 1 , (b) ω 0 = 0.5 , ω 1 = 1 , (c) ω 0 = 0.1 , ω 1 = 1 .
Figure 7. Plot of optimal process time of one semi-oscillation T ( A , B ) versus initial and final amplitudes. Here ω 0 = 0.5 , ω 1 = 1 .
Figure 8. Decomposition of the trajectory x ( t ) of the controlled system (4) into n semi-oscillations.
Figure 9. Plot of the optimal time T in problem (4) versus the terminal state x T . Here, x 0 = 0.5 , ω 0 = 0.85 , ω 1 = 1 , and the number of semi-oscillations is at most 10.
Figure 10. Family of optimal trajectories starting from the given initial point x 0 = 0.5 . The case of all possible terminal states reachable in no more than three semi-oscillations. Here ω 0 = 0.85 , ω 1 = 1 . Red corresponds to the control value ω 1 , blue to the control value ω 0 . Green dots indicate the initial and various terminal states.
Figure 11. Phase portrait of optimal trajectories starting from the given initial point x 0 = 0.5 . The case of all possible terminal states reachable in no more than three semi-oscillations. Here ω 0 = 0.85 , ω 1 = 1 . Red corresponds to the control value ω 1 , blue to the control value ω 0 . Green dots indicate the initial and various terminal states.
Figure 12. Plots of the optimal trajectory and the optimal control. Here x 0 = 1.5 , x T = 1.6 , T 7.36 , ω 0 = 0.85 , ω 1 = 1 , and the number of semi-oscillations is 2. For such close initial and terminal conditions, one can obtain different types of control on the semi-oscillations, which is not typical for the linear case.
Figure 13. Plots of the optimal trajectory and the optimal control. Here x 0 = 0.5 , x T = 0.35 , T 9.83 , ω 0 = 0.85 , ω 1 = 1 , and the number of semi-oscillations is 3. For small values of the variable x, the optimal control is close to periodic: t 1 3.29 , t 2 t 1 3.27 , t 3 t 2 3.27 .
Figure 14. Plots of the optimal trajectory and the optimal control. Here x 0 = 1.5 , x T = 1 , T 10.73 , ω 0 = 0.85 , ω 1 = 1 , and the number of semi-oscillations is 3. For large values of the variable x, the dynamics of the nonlinear system differ from the linear case, and the optimal control is far from periodic: t 1 3.72 , t 2 t 1 3.56 , t 3 t 2 3.45 .
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.