Preprint
Article

Finite Time Synchronization for Stochastic Fractional-Order Memristive BAM Neural Networks with Multiple Delays

This version is not peer-reviewed.

Submitted:

18 August 2023

Posted:

22 August 2023


A peer-reviewed article of this preprint also exists.

Abstract
This paper studies the finite-time synchronization problem of fractional-order stochastic memristive bidirectional associative memory neural networks (MBAMNNs) with discontinuous jumps. A novel criterion for finite-time synchronization is obtained by utilizing the properties of quadratic fractional-order Gronwall inequality with time delay and the comparison principle. This criterion provides a new approach to analyze the finite-time synchronization problem of neural networks with stochasticity. Finally, numerical simulations are provided to demonstrate the effectiveness and superiority of the obtained results.
Keywords: 

1. Introduction

Artificial intelligence has been an active field of research, and neural networks have emerged as a prominent branch due to their intelligence characteristics and potential for real-world applications. Neural networks have revolutionized the field of artificial intelligence by enabling computers to process and analyze large volumes of complex data with remarkable accuracy. These models are inspired by the structure and processes of the brain, in which interconnected neurons communicate using electrical signals. Neural networks are characterized by their remarkable ability to learn from data and improve their performance over time without being explicitly programmed. They have evolved over time, with various models developed to address different types of problems, for example Cohen-Grossberg neural networks [1], Hopfield neural networks [2] and cellular neural networks [3].
Kosko’s bidirectional associative memory neural networks (BAMNNs) are a noteworthy extension of traditional single-layer neural networks [4]. The BAMNNs consist of two layers of neurons that are not interconnected within their own layer. In contrast, the neurons in different layers are fully connected, allowing bidirectional information flow between the two layers. This unique structure enables the BAMNNs to function as both input and output layers, providing powerful information storage and associative memory capabilities. In signal processing, BAMNNs can be used to filter signals or extract features, while in pattern recognition they can classify images or recognize speech. In optimization, BAMNNs can be used to identify the optimal solution, while in automatic control they can be used to regulate or stabilize a system. The progress of artificial intelligence and the evolution of neural networks have created novel opportunities to tackle intricate issues in diverse domains, paving the way for problem-solving approaches that were previously unattainable. The BAMNNs’ unique architecture and capabilities make them a powerful tool for engineering applications, and it is anticipated that this technology will maintain its importance in future research and development and continue to make substantial contributions to various fields.
Due to the restriction of network size and synaptic elements, the functions of artificial neural networks are greatly limited. If the common connection weights and the self-feedback connection weights of BAMNNs are implemented by memristors [5], then the model can be built in a circuit. The memristor [6,7] is a nonlinear two-terminal circuit element in electronic circuit theory. Its unique features have led to its widespread use and potential application in a variety of fields, including artificial intelligence, data storage, and neuromorphic computing. Adding memristors to neural networks makes it possible for artificial neural networks to simulate the human brain in circuitry, which makes research on memristive neural networks more meaningful. Therefore, the resistors in traditional neural networks are replaced by memristors, forming the memristive BAMNNs (MBAMNNs). Compared with traditional neural networks, memristive neural networks have stronger learning and associative memory abilities, allowing more efficient processing and storage of information and thereby improving the efficiency and accuracy of artificial intelligence. Additionally, due to the nonlinear characteristics of memristors and their applications in circuits, memristive neural networks also have lower energy consumption and higher speed. Therefore, the development of memristive neural networks has broad application prospects in the field of artificial intelligence.
However, due to the limitation of amplifier conversion speed, the phenomenon of time delay in neural network systems is inevitable. Research indicates that the presence of time delay is a significant factor contributing to complex dynamic behaviors such as system instability and chaos [8]. To enhance the versatility and efficiency of BAMNNs, Ding and Huang [9] developed a novel BAMNNs model in 2006. The focus of their investigation was on analyzing the global exponential stability of the equilibrium point and studying its characteristics in this model. The study of time-delayed BAMNNs has since been advanced considerably [10,11,12,13].
Fractional calculus extends the traditional differentiation and integration operations to non-integer orders [14], and has been introduced into neural networks to capture the characteristics of memory and inheritance [15,16,17]. The emergence of fractional-order calculus has spurred the development of neural networks [6,7,18,19], which have found applications in diverse areas, including signal detection, fault diagnosis, optimization analysis, associative memory, and risk assessment. Fractional-order memristive neural networks (FMNNs) are a specific type of fractional-order neural network that has been widely studied for its stability properties. For instance, scholars have investigated the asymptotic stability of FMNNs with delay using Caputo fractional differentiation and the properties of Filippov solutions [20], and have also investigated it by leveraging the properties of Filippov solutions and the Leibniz theorem [21].
As one of the significant research directions in the nonlinear systems field, synchronization includes quasi-consistent synchronization [22], projective synchronization [23], full synchronization [24], Mittag-Leffler synchronization [25], global synchronization [26] and many other types. Synchronization is widely used in cryptography [27], image encryption [28] and secure communication [29]. In engineering applications, it is desirable to realize synchronization as soon as possible, so the concept of finite-time synchronization was proposed. Due to its ability to achieve faster convergence in network systems, finite-time synchronization has become a crucial aspect in developing effective control strategies for realizing system stability or synchronization [30,31].
This paper addresses the challenge of achieving finite-time synchronization. The definition of finite-time synchronization used in this article requires that the synchronization error be kept within a certain range over a finite time interval. However, dealing with the time-delay term in this context is challenging. Previous studies have utilized the Hölder inequality [32] and the generalized Gronwall inequality [33,34] to address the finite-time synchronization problem of fractional-order time-delay neural networks, providing valuable insights into the problem. This paper instead proposes a new criterion based on the quadratic fractional-order Gronwall inequality with time delay and the comparison principle, offering a fresh perspective on the problem.
This paper presents significant contributions towards the study of finite-time synchronization in fractional-order stochastic MBAMNNs with time delay. The key contributions are as follows:
(1) We improved Lemma 2 in [34] by deriving a quadratic fractional-order Gronwall inequality with time delay, which is a crucial tool for analyzing the finite-time synchronization problem in stochastic neural networks.
(2) A novel criterion for achieving finite-time synchronization is proposed, which allows for the computation of the required synchronization time T. This criterion provides a new approach to analyze finite-time synchronization and has the potential to be widely applicable in the field of neural network research.

The paper is structured as follows: Section 2 introduces relevant concepts and presents the neural network model used in this study. Section 3 proposes a novel quadratic fractional-order Gronwall inequality that takes time delay into account. This inequality is useful for studying the finite-time synchronization problem in fractional-order stochastic MBAMNNs with time delay, and by utilizing differential inclusion and set-valued mapping theory, a new criterion for determining the required time T for finite-time synchronization is derived. Section 4 provides a numerical example that demonstrates the effectiveness of the proposed results. Finally, suggestions for future research are presented.

2. Preliminaries and model

This section provides an overview of the necessary preliminaries related to fractional-order derivatives and the model of fractional-order stochastic MBAMNNs. We begin by introducing the fundamental concepts related to fractional-order derivatives and then move on to describe the fractional-order stochastic MBAMNNs model. And the definition of finite-time synchronization is provided.

2.1. Preliminaries

Notations: Let $N$ and $R$ denote the sets of positive integers and real numbers, respectively. The norm of a vector $e_x(t)=(e_{x,1}(t),e_{x,2}(t),\ldots,e_{x,n}(t))\in C([0,+\infty),R^{n})$ is given by $\|e_x(t)\|=\sum_{\kappa=1}^{n}|e_{x,\kappa}(t)|$. Similarly, the norm of a vector $e_y(t)=(e_{y,1}(t),e_{y,2}(t),\ldots,e_{y,m}(t))\in C([0,+\infty),R^{m})$ is defined as $\|e_y(t)\|=\sum_{\iota=1}^{m}|e_{y,\iota}(t)|$, $n,m\in N$. The induced norm of a matrix $A$ is denoted by $\|A\|=\max_{1\le\iota\le n}\sum_{\kappa=1}^{n}|a_{\kappa\iota}|$. The absolute value of a vector $x(t)\in R^{n}$ is defined as $|x(t)|=(|x_1(t)|,|x_2(t)|,\ldots,|x_n(t)|)^{T}$.
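The vector norm above is the 1-norm, and the matrix norm is the one it induces (the maximum absolute column sum). A small sketch with illustrative values makes the convention concrete:

```python
# Sketch of the norm conventions above: the vector norm is the 1-norm
# (sum of absolute entries) and the induced matrix norm is the maximum
# absolute column sum. The numeric values are illustrative only.

def vec_norm(e):
    """1-norm of a vector: sum of absolute values of its components."""
    return sum(abs(v) for v in e)

def mat_norm(a):
    """Induced matrix norm: maximum absolute column sum."""
    n_rows, n_cols = len(a), len(a[0])
    return max(sum(abs(a[r][c]) for r in range(n_rows)) for c in range(n_cols))

e_x = [1.0, -2.0, 0.5]
A = [[1.0, -4.0],
     [-2.0, 1.0]]
print(vec_norm(e_x))  # 3.5
print(mat_norm(A))    # 5.0
```

The pairing is consistent: with these definitions, $\|Ae\|\le\|A\|\,\|e\|$ holds for every vector $e$.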
Following that, we provide a review and introduction of several definitions and lemmas related to fractional calculus.
Definition 1.
[35] The fractional-order integral of a function $\kappa(t)$ with order $\alpha$ is defined as
$$I_{t_0}^{\alpha}\kappa(t)=\frac{1}{\Gamma(\alpha)}\int_{t_0}^{t}(t-\theta)^{\alpha-1}\kappa(\theta)\,d\theta,$$
where $\alpha>0$, $t\ge t_0$ and $\Gamma(\alpha)=\int_{0}^{+\infty}t^{\alpha-1}e^{-t}\,dt$.
In particular, $I_{t_0}^{\alpha}\,{}^{c}D_{t_0}^{\alpha}\kappa(t)=\kappa(t)-\kappa(t_0)$ for $0<\alpha<1$.
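Definition 1 can be checked numerically. The sketch below (a hypothetical discretization, not from the paper) approximates $I_{0}^{\alpha}\kappa(t)$ by integrating the kernel $(t-\theta)^{\alpha-1}$ exactly over each subinterval while sampling $\kappa$ at the midpoint, and compares the result with the closed form $I_{0}^{\alpha}t=t^{\alpha+1}/\Gamma(\alpha+2)$:

```python
import math

def frac_integral(f, t, alpha, n=2000):
    """Approximate the Riemann-Liouville integral I_0^alpha f(t).
    The kernel (t-theta)^(alpha-1) is integrated exactly on each
    subinterval; f is sampled at the subinterval midpoint."""
    h = t / n
    total = 0.0
    for j in range(n):
        a, b = j * h, (j + 1) * h
        # Exact integral of the kernel over [a, b].
        kernel = ((t - a) ** alpha - (t - b) ** alpha) / alpha
        total += f((a + b) / 2) * kernel
    return total / math.gamma(alpha)

alpha, t = 0.5, 1.0
approx = frac_integral(lambda s: s, t, alpha)
exact = t ** (alpha + 1) / math.gamma(alpha + 2)  # I^alpha of f(t) = t
print(approx, exact)
```

Integrating the singular kernel exactly keeps the quadrature stable near $\theta=t$, where $(t-\theta)^{\alpha-1}$ blows up for $0<\alpha<1$.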
Definition 2.
[35] Suppose $\kappa(t)\in C^{\iota}([0,+\infty),R)$, where $\iota$ is a positive integer. The Caputo derivative of order $\alpha$ of the function $\kappa(t)$ can be expressed as
$${}^{c}D_{0}^{\alpha}\kappa(t)=\frac{1}{\Gamma(\iota-\alpha)}\int_{t_0}^{t}(t-\theta)^{\iota-\alpha-1}\kappa^{(\iota)}(\theta)\,d\theta,$$
where $\iota-1<\alpha<\iota$, $\iota\in N$.
For convenience, we use $D^{\alpha}$ to represent ${}^{c}D_{0}^{\alpha}$.
Lemma 1.
[36] Let $t\in[t_0,T]$, $u(t),v(t)\in R$, and let $\kappa(t)\in C^{1}([t_0,T],R)$ satisfy
$$\kappa'(t)\le u(t)\kappa(t)+v(t),\quad t\in[t_0,T],\qquad \kappa(t_0)\le\kappa_0,\ \kappa_0\in R.$$
Then, we have
$$\kappa(t)\le\kappa_0\exp\Big(\int_{t_0}^{t}u(\theta)\,d\theta\Big)+\int_{t_0}^{t}\exp\Big(\int_{\theta}^{t}u(s)\,ds\Big)v(\theta)\,d\theta.$$
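As a quick numerical sanity check of Lemma 1 (with illustrative constant coefficients, not from the paper), the sketch below integrates the equality case $\kappa'(t)=u\kappa(t)+v$ by forward Euler and compares it with the closed-form Gronwall bound, which the solution should never exceed:

```python
import math

def gronwall_bound(kappa0, u, v, t):
    """Closed-form bound of Lemma 1 for constant u(t) = u, v(t) = v:
    kappa0*exp(u*t) + (v/u)*(exp(u*t) - 1)."""
    return kappa0 * math.exp(u * t) + (v / u) * (math.exp(u * t) - 1.0)

def solve_equality_case(kappa0, u, v, t, n=100000):
    """Forward-Euler integration of kappa' = u*kappa + v (equality case)."""
    h = t / n
    k = kappa0
    for _ in range(n):
        k += h * (u * k + v)
    return k

u, v, kappa0, t = 0.5, 1.0, 1.0, 1.0
numeric = solve_equality_case(kappa0, u, v, t)
bound = gronwall_bound(kappa0, u, v, t)
print(numeric, bound)
```

For the equality case the bound is attained, so the Euler solution should sit just below it (Euler underestimates the convex exponential growth).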

2.2. Model

We investigate a kind of fractional-order differential equations that capture the dynamics of fractional-order stochastic MBAMNNs with time delays. These equations are viewed as the driving system (1), which models the interactions between neurons in MBAMNNs and accounts for the influence of discontinuous jumps and time delays. Through examining the stability and analytical solutions of these equations, this study aims to enhance the comprehension of the behavior of MBAMNNs, ultimately leading to more comprehensive analysis for practical applications of this model.
$$\begin{cases}
D^{\alpha}x_{\kappa}(t)=-u_{\kappa}(x_{\kappa}(t))x_{\kappa}(t)+\sum\limits_{\iota=1}^{m}a_{\iota\kappa}(x_{\kappa}(t))f_{\iota}(y_{\iota}(t))+\sum\limits_{\iota=1}^{m}b_{\iota\kappa}(x_{\kappa}(t-\tau_{1}))f_{\iota}(y_{\iota}(t-\tau_{2}))\\
\qquad\qquad+\sum\limits_{\iota=1}^{m}r_{\iota}(y_{\iota}(t-\tau_{2}))\,dB(t)+I_{\kappa},\quad \kappa=1,2,\ldots,n,\\[1mm]
D^{\alpha}y_{\iota}(t)=-v_{\iota}(y_{\iota}(t))y_{\iota}(t)+\sum\limits_{\kappa=1}^{n}c_{\kappa\iota}(y_{\iota}(t))g_{\kappa}(x_{\kappa}(t))+\sum\limits_{\kappa=1}^{n}d_{\kappa\iota}(y_{\iota}(t-\tau_{2}))g_{\kappa}(x_{\kappa}(t-\tau_{1}))\\
\qquad\qquad+\sum\limits_{\kappa=1}^{n}s_{\kappa}(x_{\kappa}(t-\tau_{1}))\,dB(t)+J_{\iota},\quad \iota=1,2,\ldots,m,
\end{cases}\tag{1}$$
where $0<\alpha<1$, $t\in[0,+\infty)$.
In this system, the positive-valued functions $u_{\kappa}(x_{\kappa}(t))$ and $v_{\iota}(y_{\iota}(t))$ represent the rates of neuron self-inhibition, while $x_{\kappa}(t)$ and $y_{\iota}(t)$ denote the state variables of the $\kappa$-th and $\iota$-th neuron, respectively. The activation functions without time delay are denoted by $f_{\iota}(y_{\iota}(t))$ and $g_{\kappa}(x_{\kappa}(t))$, and those with time delay by $f_{\iota}(y_{\iota}(t-\tau_{2}))$ and $g_{\kappa}(x_{\kappa}(t-\tau_{1}))$. The memristive connection weight matrices are represented by $a_{\iota\kappa}(x_{\kappa}(t))$, $b_{\iota\kappa}(x_{\kappa}(t-\tau_{1}))$, $c_{\kappa\iota}(y_{\iota}(t))$, and $d_{\kappa\iota}(y_{\iota}(t-\tau_{2}))$. The stochastic terms driven by the Brownian motion $B(t)$ are $r_{\iota}(y_{\iota}(t-\tau_{2}))\,dB(t)$ and $s_{\kappa}(x_{\kappa}(t-\tau_{1}))\,dB(t)$. The constant input vectors are represented by $I_{\kappa}$ and $J_{\iota}$. The time delays $\tau_{1}$ and $\tau_{2}$ satisfy $0\le\tau_{1}\le\tau$ and $0\le\tau_{2}\le\tau$, where $\tau$ is a constant.
The initial conditions of the fractional-order stochastic MBAMNNs (1) are given by $x(t)=(x_{1}(t),x_{2}(t),\ldots,x_{n}(t))^{T}$, $y(t)=(y_{1}(t),y_{2}(t),\ldots,y_{m}(t))^{T}$, where $x_{\kappa}(r)=\phi_{\kappa}(r)\in C([-\tau_{1},0],R)$ and $y_{\iota}(r)=\psi_{\iota}(r)\in C([-\tau_{2},0],R)$. Here, $\phi_{\kappa}(r)$ and $\psi_{\iota}(r)$ are continuous functions on $[-\tau_{1},0]$ and $[-\tau_{2},0]$.
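To make the drive system concrete, here is a minimal simulation sketch with one neuron per layer ($n=m=1$). Everything in it is an illustrative assumption rather than the paper's setup: the switching weight, the tanh activations, the noise coefficients, the constant histories, and in particular the crude explicit Grünwald-Letnikov discretization of the fractional derivative combined with an Euler-Maruyama-style noise increment.

```python
import math
import random

# Illustrative scalar version of the drive system: n = m = 1.
# All parameters below are assumptions for demonstration only.
alpha, tau1, tau2 = 0.9, 0.1, 0.1
h, T = 0.01, 2.0
N = int(T / h)
d1, d2 = int(tau1 / h), int(tau2 / h)   # delays measured in grid steps

def mem_weight(state):
    """Memristive connection weight: switches with the state magnitude."""
    return 1.0 if abs(state) < 1.0 else 0.8

f = math.tanh   # activation of the y-layer
g = math.tanh   # activation of the x-layer

# Grunwald-Letnikov coefficients c_j = (-1)^j * binom(alpha, j).
c = [1.0]
for j in range(1, N + 1):
    c.append(c[-1] * (1.0 - (alpha + 1.0) / j))

random.seed(0)
x0, y0 = 0.2, -0.1                      # constant histories on [-tau, 0]
X, Y = [x0], [y0]
for n in range(N):
    xd = X[n - d1] if n >= d1 else x0   # delayed states
    yd = Y[n - d2] if n >= d2 else y0
    dB = random.gauss(0.0, math.sqrt(h))
    drift_x = -1.5 * X[n] + mem_weight(X[n]) * f(Y[n]) + 0.5 * f(yd)
    drift_y = -1.5 * Y[n] + mem_weight(Y[n]) * g(X[n]) + 0.5 * g(xd)
    # Explicit GL step: sum_{j=0}^{n+1} c_j x_{n+1-j} = h^alpha * drift,
    # with the noise added as a heuristic Euler-Maruyama increment.
    mem_x = sum(c[j] * X[n + 1 - j] for j in range(1, n + 2))
    mem_y = sum(c[j] * Y[n + 1 - j] for j in range(1, n + 2))
    X.append(-mem_x + h ** alpha * drift_x + h ** (alpha - 1) * 0.1 * f(yd) * dB)
    Y.append(-mem_y + h ** alpha * drift_y + h ** (alpha - 1) * 0.1 * g(xd) * dB)
print(X[-1], Y[-1])
```

The growing memory sums reflect the hereditary character of the fractional derivative: every new state depends on the whole past trajectory, not just the previous step.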
Then, the response system corresponding to the drive system (1) is given by
$$\begin{cases}
D^{\alpha}\breve{x}_{\kappa}(t)=-u_{\kappa}(\breve{x}_{\kappa}(t))\breve{x}_{\kappa}(t)+\sum\limits_{\iota=1}^{m}a_{\iota\kappa}(\breve{x}_{\kappa}(t))f_{\iota}(\breve{y}_{\iota}(t))+\sum\limits_{\iota=1}^{m}b_{\iota\kappa}(\breve{x}_{\kappa}(t-\tau_{1}))f_{\iota}(\breve{y}_{\iota}(t-\tau_{2}))\\
\qquad\qquad+\sum\limits_{\iota=1}^{m}r_{\iota}(\breve{y}_{\iota}(t-\tau_{2}))\,dB(t)+I_{\kappa}+\gamma_{\kappa},\quad \kappa=1,2,\ldots,n,\\[1mm]
D^{\alpha}\breve{y}_{\iota}(t)=-v_{\iota}(\breve{y}_{\iota}(t))\breve{y}_{\iota}(t)+\sum\limits_{\kappa=1}^{n}c_{\kappa\iota}(\breve{y}_{\iota}(t))g_{\kappa}(\breve{x}_{\kappa}(t))+\sum\limits_{\kappa=1}^{n}d_{\kappa\iota}(\breve{y}_{\iota}(t-\tau_{2}))g_{\kappa}(\breve{x}_{\kappa}(t-\tau_{1}))\\
\qquad\qquad+\sum\limits_{\kappa=1}^{n}s_{\kappa}(\breve{x}_{\kappa}(t-\tau_{1}))\,dB(t)+J_{\iota}+\eta_{\iota},\quad \iota=1,2,\ldots,m.
\end{cases}\tag{2}$$
The initial conditions of the response system (2) are $\breve{x}_{\kappa}(r)=\breve{\phi}_{\kappa}(r)\in C([-\tau_{1},0],R)$, $\breve{y}_{\iota}(r)=\breve{\psi}_{\iota}(r)\in C([-\tau_{2},0],R)$; $\gamma_{\kappa}$ and $\eta_{\iota}$ are the following controllers:
$$\gamma_{\kappa}=-\zeta_{\kappa}\big(\breve{x}_{\kappa}(t)-x_{\kappa}(t)\big),\qquad \eta_{\iota}=-\zeta_{\iota}\big(\breve{y}_{\iota}(t)-y_{\iota}(t)\big),$$
where $\zeta_{\kappa}$ and $\zeta_{\iota}$ are positive numbers called the control gains.
The synchronization error, as defined by systems (1) and (2), can be expressed as:
$$e_{x,\kappa}(t)=\breve{x}_{\kappa}(t)-x_{\kappa}(t),\qquad e_{y,\iota}(t)=\breve{y}_{\iota}(t)-y_{\iota}(t),\qquad t\in[t_{0},T],$$
$$\varphi_{x,\kappa}(r)=\breve{\phi}_{\kappa}(r)-\phi_{\kappa}(r),\quad r\in[t_{0}-\tau_{1},t_{0}],\qquad \breve{\varphi}_{y,\iota}(r)=\breve{\psi}_{\iota}(r)-\psi_{\iota}(r),\quad r\in[t_{0}-\tau_{2},t_{0}].$$
The synchronization error is then $\|e_{x}(t)\|+\|e_{y}(t)\|$, and we denote $\varphi=\sup_{r\in[-\tau_{1},0]}\|\varphi_{x}(r)\|$, $\breve{\varphi}=\sup_{r\in[-\tau_{2},0]}\|\breve{\varphi}_{y}(r)\|$.
Definition 3.
If, for any given $\epsilon>0$, there exists a real number $T>0$ such that the synchronization error satisfies
$$\|e_{x}(t)\|+\|e_{y}(t)\|\le\epsilon,\qquad T<t<+\infty,$$
then the drive system (1) and the response system (2) are said to achieve finite-time synchronization at time $T$.
Remark 1.
To obtain a sufficient condition for achieving finite-time synchronization between systems (1) and (2), it is necessary to identify an appropriate evaluation function $\chi(t)$. This function should satisfy $\|e_{x}(t)\|+\|e_{y}(t)\|\le\chi(t)\le\epsilon$.
The next section establishes a quadratic fractional-order Gronwall inequality with time delay (Lemma 2), which is then used to analyze the behavior of the MBAMNNs system.

3. Main results

This section presents a novel approach to obtaining the evaluation function $\chi_{1}(t)$ by improving the quadratic fractional-order Gronwall inequality with time delay. Theorem 1 then converts this inequality into a form to which Lemma 2 applies, enabling us to derive a criterion under which the drive system (1) and the response system (2) synchronize in finite time. Specifically, the application of Lemma 2 leads to the novel criterion for finite-time synchronization.
The quadratic fractional-order Gronwall inequality with time delay is given below.
Lemma 2.
Let $T\in(0,+\infty)$ and let $\omega(t)$, $a(t)$, $b(t)$, $\sigma(t)$ and $\upsilon(t)$ be nonnegative continuous functions defined on $[t_{0},T]$. Let $\phi(t)$ be a nonnegative continuous function defined on $[t_{0}-\Omega,t_{0}]$ and suppose
$$\omega^{2}(t)\le\upsilon^{2}(t)\Big\{\int_{t_{0}}^{t}\big[a(\theta)\omega^{2}(\theta)+b(\theta)\omega^{2}(\theta-\Omega)\big]^{p}\,d\theta\Big\}^{\frac1p}+2\sigma^{2}(t),\quad t\in[t_{0},T],\qquad \omega(t)\le\phi(t),\quad t\in[t_{0}-\Omega,t_{0}].\tag{3}$$
Assume $\sigma(t)$ and $\upsilon(t)$ are nondecreasing on $[t_{0},T]$, $\phi(t)$ is nondecreasing on $[t_{0}-\Omega,t_{0}]$ and $\phi(t_{0})=\sigma(t_{0})$.
(1) If $\sigma(t)>\upsilon(t)$, then
$$\omega^{2}(t)\le\sigma^{2}(t)\Big\{2+\Big[\exp\Big(\int_{t_{0}}^{t}2^{p-1}\sigma^{2p}(t)\big[2a(\theta)+b(\theta)\big]^{p}\,d\theta\Big)-1\Big]^{\frac1p}\Big\}.$$
(2) If $\sigma(t)\le\upsilon(t)$, then
$$\omega^{2}(t)\le\upsilon^{2}(t)\Big\{2+\Big[\exp\Big(\int_{t_{0}}^{t}2^{p-1}\upsilon^{2p}(t)\big[2a(\theta)+b(\theta)\big]^{p}\,d\theta\Big)-1\Big]^{\frac1p}\Big\},$$
where $t\in[t_{0},T]$, $\Omega>0$, $p\ge1$ are constants.
Proof. Define the function $d(t)$ by
$$d(t)=\int_{t_{0}}^{t}\big[a(\theta)\omega^{2}(\theta)+b(\theta)\omega^{2}(\theta-\Omega)\big]^{p}\,d\theta,\quad t\in[t_{0},T].$$
Then, by inequality (3), we get
$$\omega^{2}(t)\le2\sigma^{2}(t)+\upsilon^{2}(t)d^{\frac1p}(t),\quad t\in[t_{0},T].\tag{4}$$
When $t\in[t_{0},t_{0}+\Omega]$, noting that $d(t_{0})=0$, we have
$$\begin{aligned}
d'(t)&=\big[a(t)\omega^{2}(t)+b(t)\omega^{2}(t-\Omega)\big]^{p}\\
&\le\big\{a(t)\big[2\sigma^{2}(t)+\upsilon^{2}(t)d^{\frac1p}(t)\big]+b(t)\phi^{2}(t-\Omega)\big\}^{p}\\
&\le2^{p-1}\big[2a(t)\sigma^{2}(t)+b(t)\phi^{2}(t-\Omega)\big]^{p}+2^{p-1}a^{p}(t)\upsilon^{2p}(t)d(t).
\end{aligned}$$
From Lemma 1, we obtain
$$d(t)\le\int_{t_{0}}^{t}2^{p-1}\big[2a(\theta)\sigma^{2}(\theta)+b(\theta)\phi^{2}(\theta-\Omega)\big]^{p}\exp\Big(\int_{\theta}^{t}2^{p-1}a^{p}(s)\upsilon^{2p}(s)\,ds\Big)\,d\theta.\tag{5}$$
By using inequalities (4) and (5), we have
$$\omega^{2}(t)\le\upsilon^{2}(t)\Big\{\int_{t_{0}}^{t}2^{p-1}\big[2a(\theta)\sigma^{2}(\theta)+b(\theta)\phi^{2}(\theta-\Omega)\big]^{p}\exp\Big(\int_{\theta}^{t}2^{p-1}a^{p}(s)\upsilon^{2p}(s)\,ds\Big)\,d\theta\Big\}^{\frac1p}+2\sigma^{2}(t).$$
Assume σ ( t ) , υ ( t ) are nondecreasing on [ t 0 , t 0 + ​Ω ] , ϕ ( t ) is nondecreasing on [ t 0 ​Ω , t 0 ] and σ ( t 0 ) = ϕ ( t 0 ) .
(1) If $\sigma(t)>\upsilon(t)$, then
$$\begin{aligned}
\omega^{2}(t)&\le\upsilon^{2}(t)\Big\{\int_{t_{0}}^{t}2^{p-1}\big[2a(\theta)\sigma^{2}(\theta)+b(\theta)\phi^{2}(\theta-\Omega)\big]^{p}\exp\Big(\int_{\theta}^{t}2^{p-1}a^{p}(s)\upsilon^{2p}(s)\,ds\Big)\,d\theta\Big\}^{\frac1p}+2\sigma^{2}(t)\\
&\le\sigma^{2}(t)\Big\{\int_{t_{0}}^{t}2^{p-1}\sigma^{2p}(t)\big[2a(\theta)+b(\theta)\big]^{p}\exp\Big(\int_{\theta}^{t}2^{p-1}\big[2a(s)+b(s)\big]^{p}\sigma^{2p}(t)\,ds\Big)\,d\theta\Big\}^{\frac1p}+2\sigma^{2}(t)\\
&\le\sigma^{2}(t)\Big\{2+\Big[\exp\Big(\int_{t_{0}}^{t}2^{p-1}\sigma^{2p}(t)\big[2a(\theta)+b(\theta)\big]^{p}\,d\theta\Big)-1\Big]^{\frac1p}\Big\}.
\end{aligned}$$
(2) If $\sigma(t)\le\upsilon(t)$, similar to case (1), we obtain
$$\omega^{2}(t)\le\upsilon^{2}(t)\Big\{2+\Big[\exp\Big(\int_{t_{0}}^{t}2^{p-1}\upsilon^{2p}(t)\big[2a(\theta)+b(\theta)\big]^{p}\,d\theta\Big)-1\Big]^{\frac1p}\Big\}.$$
For $t\in[t_{0}+\Omega,T]$, we get
$$\begin{aligned}
d'(t)&=\big[a(t)\omega^{2}(t)+b(t)\omega^{2}(t-\Omega)\big]^{p}\\
&\le\big\{a(t)\big[2\sigma^{2}(t)+\upsilon^{2}(t)d^{\frac1p}(t)\big]+b(t)\big[2\sigma^{2}(t-\Omega)+\sigma^{2}(t-\Omega)d^{\frac1p}(t-\Omega)\big]\big\}^{p}\\
&\le\big\{\big[a(t)\upsilon^{2}(t)+b(t)\sigma^{2}(t-\Omega)\big]d^{\frac1p}(t)+2a(t)\sigma^{2}(t)+2b(t)\sigma^{2}(t-\Omega)\big\}^{p}\\
&\le2^{p-1}\big[a(t)\upsilon^{2}(t)+b(t)\sigma^{2}(t-\Omega)\big]^{p}d(t)+2^{p}\big[a(t)\sigma^{2}(t)+b(t)\sigma^{2}(t-\Omega)\big]^{p}.
\end{aligned}$$
Then, by utilizing Lemma 1 and inequality (5), we arrive at
$$\begin{aligned}
d(t)\le{}&\int_{t_{0}}^{t_{0}+\Omega}2^{p-1}\big[2a(\theta)\sigma^{2}(\theta)+b(\theta)\phi^{2}(\theta-\Omega)\big]^{p}\exp\Big(\int_{\theta}^{t_{0}+\Omega}2^{p-1}a^{p}(s)\upsilon^{2p}(s)\,ds\Big)\,d\theta\\
&\times\exp\Big(\int_{t_{0}+\Omega}^{t}2^{p-1}\big[a(\theta)\upsilon^{2}(\theta)+b(\theta)\sigma^{2}(\theta-\Omega)\big]^{p}\,d\theta\Big)\\
&+\int_{t_{0}+\Omega}^{t}2^{p}\big[a(\theta)\sigma^{2}(\theta)+b(\theta)\sigma^{2}(\theta-\Omega)\big]^{p}\exp\Big(\int_{\theta}^{t}2^{p-1}\big[a(s)\upsilon^{2}(s)+b(s)\upsilon^{2}(s-\Omega)\big]^{p}\,ds\Big)\,d\theta.
\end{aligned}$$
By using (4), we have
$$\begin{aligned}
\omega^{2}(t)\le{}&\upsilon^{2}(t)\Big\{\int_{t_{0}}^{t_{0}+\Omega}2^{p-1}\big[2a(\theta)\sigma^{2}(\theta)+b(\theta)\phi^{2}(\theta-\Omega)\big]^{p}\exp\Big(\int_{\theta}^{t_{0}+\Omega}2^{p-1}a^{p}(s)\upsilon^{2p}(s)\,ds\Big)\,d\theta\\
&\times\exp\Big(\int_{t_{0}+\Omega}^{t}2^{p-1}\big[a(\theta)\upsilon^{2}(\theta)+b(\theta)\sigma^{2}(\theta-\Omega)\big]^{p}\,d\theta\Big)\\
&+\int_{t_{0}+\Omega}^{t}2^{p}\big[a(\theta)\sigma^{2}(\theta)+b(\theta)\sigma^{2}(\theta-\Omega)\big]^{p}\exp\Big(\int_{\theta}^{t}2^{p-1}\big[a(s)\upsilon^{2}(s)+b(s)\upsilon^{2}(s-\Omega)\big]^{p}\,ds\Big)\,d\theta\Big\}^{\frac1p}+2\sigma^{2}(t).
\end{aligned}$$
Similarly, assume σ ( t ) , υ ( t ) are nondecreasing on [ t 0 + ​Ω , T ] , ϕ ( t ) is nondecreasing on t [ t 0 ​Ω , t 0 ] and σ ( t 0 ) = ϕ ( t 0 ) .
(1) If $\sigma(t)>\upsilon(t)$, then
$$\begin{aligned}
\omega^{2}(t)\le{}&\sigma^{2}(t)\Big\{\Big[\exp\Big(\int_{t_{0}}^{t_{0}+\Omega}2^{p-1}\sigma^{2p}(t)\big[2a(\theta)+b(\theta)\big]^{p}\,d\theta\Big)-1\Big]\exp\Big(\int_{t_{0}+\Omega}^{t}2^{p-1}\sigma^{2p}(t)\big[a(\theta)+b(\theta)\big]^{p}\,d\theta\Big)\\
&+\exp\Big(\int_{t_{0}+\Omega}^{t}2^{p-1}\sigma^{2p}(t)\big[a(\theta)+b(\theta)\big]^{p}\,d\theta\Big)-1\Big\}^{\frac1p}+2\sigma^{2}(t)\\
\le{}&\sigma^{2}(t)\Big\{2+\Big[\exp\Big(\int_{t_{0}}^{t}2^{p-1}\sigma^{2p}(t)\big[2a(\theta)+b(\theta)\big]^{p}\,d\theta\Big)-1\Big]^{\frac1p}\Big\}.
\end{aligned}$$
(2) If $\sigma(t)\le\upsilon(t)$, then
$$\omega^{2}(t)\le\upsilon^{2}(t)\Big\{2+\Big[\exp\Big(\int_{t_{0}}^{t}2^{p-1}\upsilon^{2p}(t)\big[2a(\theta)+b(\theta)\big]^{p}\,d\theta\Big)-1\Big]^{\frac1p}\Big\}.$$
Combining the analysis on $[t_{0},t_{0}+\Omega]$ and on $[t_{0}+\Omega,T]$: when $\sigma(t)$ and $\upsilon(t)$ are nondecreasing for $t\in[t_{0},T]$ and $\phi(t)$ is nondecreasing for $t\in[t_{0}-\Omega,t_{0}]$ with $\phi(t_{0})=\sigma(t_{0})$, we get the following results.
(1) If $\sigma(t)>\upsilon(t)$, then
$$\omega^{2}(t)\le\sigma^{2}(t)\Big\{2+\Big[\exp\Big(\int_{t_{0}}^{t}2^{p-1}\sigma^{2p}(t)\big[2a(\theta)+b(\theta)\big]^{p}\,d\theta\Big)-1\Big]^{\frac1p}\Big\},\quad t\in[t_{0},T].$$
(2) If $\sigma(t)\le\upsilon(t)$, then
$$\omega^{2}(t)\le\upsilon^{2}(t)\Big\{2+\Big[\exp\Big(\int_{t_{0}}^{t}2^{p-1}\upsilon^{2p}(t)\big[2a(\theta)+b(\theta)\big]^{p}\,d\theta\Big)-1\Big]^{\frac1p}\Big\},\quad t\in[t_{0},T].$$
This completes the proof. □
Theorem 1.
Assume $T\in(0,+\infty)$ and let $\omega(t)$, $\sigma(t)$, $\upsilon(t)$, $a(t)$, and $b(t)$ be nonnegative continuous functions defined on $[t_{0},T]$. Let $\phi(t)$ be a nonnegative continuous function defined on $[t_{0}-\Omega,t_{0}]$ and suppose
$$\omega^{2}(t)\le2\sigma^{2}(t)+\frac{\upsilon(t)}{\Gamma^{2}(\alpha)}\int_{t_{0}}^{t}(t-\theta)^{2(\alpha-1)}\big[a(\theta)\omega^{2}(\theta)+b(\theta)\omega^{2}(\theta-\Omega)\big]\,d\theta,\quad t\in[t_{0},T],\qquad \omega(t)\le\phi(t),\quad t\in[t_{0}-\Omega,t_{0}].$$
Assume $\sigma(t)$ and $\upsilon(t)$ are nondecreasing functions on $[t_{0},T]$, and $\phi(t)$ is a nondecreasing function on $[t_{0}-\Omega,t_{0}]$ with $\phi(t_{0})=\sigma(t_{0})$.
(1) If $\sigma(t)>H(t)$, then
$$\omega^{2}(t)\le\sigma^{2}(t)\Big\{2+\Big[\exp\Big(\int_{t_{0}}^{t}2^{p-1}\sigma^{2p}(t)\big[2a(\theta)+b(\theta)\big]^{p}\,d\theta\Big)-1\Big]^{\frac1p}\Big\}.$$
(2) If $\sigma(t)\le H(t)$, then
$$\omega^{2}(t)\le H^{2}(t)\Big\{2+\Big[\exp\Big(\int_{t_{0}}^{t}2^{p-1}H^{2p}(t)\big[2a(\theta)+b(\theta)\big]^{p}\,d\theta\Big)-1\Big]^{\frac1p}\Big\},$$
where $t\in[t_{0},T]$, $p,q>0$ satisfy $\frac1q+\frac1p=1$, $\alpha>\frac{1+p}{2p}$, and $H^{2}(t)=\dfrac{\upsilon(t)(t-t_{0})^{2(\alpha-1)+\frac1q}}{\Gamma^{2}(\alpha)\big(2q(\alpha-1)+1\big)^{\frac1q}}$.
Proof. By using the Hölder inequality, it follows that
$$\begin{aligned}
\omega^{2}(t)&\le2\sigma^{2}(t)+\frac{\upsilon(t)}{\Gamma^{2}(\alpha)}\int_{t_{0}}^{t}(t-\theta)^{2(\alpha-1)}\big[a(\theta)\omega^{2}(\theta)+b(\theta)\omega^{2}(\theta-\Omega)\big]\,d\theta\\
&\le2\sigma^{2}(t)+\frac{\upsilon(t)}{\Gamma^{2}(\alpha)}\Big(\int_{t_{0}}^{t}(t-\theta)^{2q(\alpha-1)}\,d\theta\Big)^{\frac1q}\Big(\int_{t_{0}}^{t}\big[a(\theta)\omega^{2}(\theta)+b(\theta)\omega^{2}(\theta-\Omega)\big]^{p}\,d\theta\Big)^{\frac1p}\\
&\le2\sigma^{2}(t)+\frac{\upsilon(t)(t-t_{0})^{2(\alpha-1)+\frac1q}}{\Gamma^{2}(\alpha)\big[2q(\alpha-1)+1\big]^{\frac1q}}\Big(\int_{t_{0}}^{t}\big[a(\theta)\omega^{2}(\theta)+b(\theta)\omega^{2}(\theta-\Omega)\big]^{p}\,d\theta\Big)^{\frac1p}.
\end{aligned}$$
Let $\alpha>\frac{1+p}{2p}$ and $H^{2}(t)=\dfrac{\upsilon(t)(t-t_{0})^{2(\alpha-1)+\frac1q}}{\Gamma^{2}(\alpha)\big(2q(\alpha-1)+1\big)^{\frac1q}}$; then
$$\omega^{2}(t)\le2\sigma^{2}(t)+H^{2}(t)\Big\{\int_{t_{0}}^{t}\big[a(\theta)\omega^{2}(\theta)+b(\theta)\omega^{2}(\theta-\Omega)\big]^{p}\,d\theta\Big\}^{\frac1p},$$
and the conclusions follow from Lemma 2.
The proof is completed. □
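The only closed form used in the Hölder step above is $\int_{t_{0}}^{t}(t-\theta)^{2q(\alpha-1)}\,d\theta=(t-t_{0})^{2q(\alpha-1)+1}/\big(2q(\alpha-1)+1\big)$, which is finite precisely when $\alpha>\frac{1+p}{2p}$, so that the exponent exceeds $-1$. A quick numerical check with illustrative values of $\alpha$, $p$, $q$ and $t_{0}=0$:

```python
import math

def kernel_integral_exact(t, alpha, q):
    """Closed form of the Holder-step integral with t0 = 0."""
    expo = 2 * q * (alpha - 1)
    return t ** (expo + 1) / (expo + 1)

def kernel_integral_midpoint(t, alpha, q, n=200000):
    """Midpoint quadrature of (t - theta)^(2q(alpha-1)) on [0, t];
    the midpoint rule avoids the integrable singularity at theta = t."""
    expo = 2 * q * (alpha - 1)
    h = t / n
    return h * sum((t - (j + 0.5) * h) ** expo for j in range(n))

alpha, p, q = 0.9, 2.0, 2.0   # 1/p + 1/q = 1 and alpha > (1+p)/(2p) = 0.75
print(kernel_integral_exact(1.0, alpha, q))
```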
To analyze the solutions of the discontinuous systems (1) and (2), Filippov regularization is used: the equations are transformed into differential inclusions with set-valued maps. This allows the behavior of the systems to be studied rigorously and systematically even in the presence of discontinuities or jumps, providing insights into their dynamics. In particular, the drive system (1) can be expressed as the differential inclusion
$$\begin{cases}
D^{\alpha}x_{\kappa}(t)\in-\mathrm{co}[u_{\kappa}(x_{\kappa}(t))]x_{\kappa}(t)+\sum\limits_{\iota=1}^{m}\mathrm{co}[a_{\iota\kappa}(x_{\kappa}(t))]f_{\iota}(y_{\iota}(t))+\sum\limits_{\iota=1}^{m}\mathrm{co}[b_{\iota\kappa}(x_{\kappa}(t-\tau_{1}))]f_{\iota}(y_{\iota}(t-\tau_{2}))\\
\qquad\qquad+\sum\limits_{\iota=1}^{m}r_{\iota}(y_{\iota}(t-\tau_{2}))\,dB(t)+I_{\kappa},\\[1mm]
D^{\alpha}y_{\iota}(t)\in-\mathrm{co}[v_{\iota}(y_{\iota}(t))]y_{\iota}(t)+\sum\limits_{\kappa=1}^{n}\mathrm{co}[c_{\kappa\iota}(y_{\iota}(t))]g_{\kappa}(x_{\kappa}(t))+\sum\limits_{\kappa=1}^{n}\mathrm{co}[d_{\kappa\iota}(y_{\iota}(t-\tau_{2}))]g_{\kappa}(x_{\kappa}(t-\tau_{1}))\\
\qquad\qquad+\sum\limits_{\kappa=1}^{n}s_{\kappa}(x_{\kappa}(t-\tau_{1}))\,dB(t)+J_{\iota}.
\end{cases}$$
According to the definition of set-valued maps, we obtain
$$\mathrm{co}[u_{\kappa}(\varsigma)]=\begin{cases}\dot{u}_{\kappa},&|\varsigma|<\mathring{T},\\ \ddot{u}_{\kappa},&|\varsigma|>\mathring{T},\\ \mathrm{co}\{\dot{u}_{\kappa},\ddot{u}_{\kappa}\},&|\varsigma|=\mathring{T},\end{cases}\qquad
\mathrm{co}[a_{\iota\kappa}(\varsigma)]=\begin{cases}\dot{a}_{\iota\kappa},&|\varsigma|<\mathring{T},\\ \ddot{a}_{\iota\kappa},&|\varsigma|>\mathring{T},\\ \mathrm{co}\{\dot{a}_{\iota\kappa},\ddot{a}_{\iota\kappa}\},&|\varsigma|=\mathring{T},\end{cases}\qquad
\mathrm{co}[b_{\iota\kappa}(\varsigma)]=\begin{cases}\dot{b}_{\iota\kappa},&|\varsigma|<\mathring{T},\\ \ddot{b}_{\iota\kappa},&|\varsigma|>\mathring{T},\\ \mathrm{co}\{\dot{b}_{\iota\kappa},\ddot{b}_{\iota\kappa}\},&|\varsigma|=\mathring{T},\end{cases}$$
$$\mathrm{co}[v_{\iota}(\varsigma)]=\begin{cases}\dot{v}_{\iota},&|\varsigma|<\mathring{T},\\ \ddot{v}_{\iota},&|\varsigma|>\mathring{T},\\ \mathrm{co}\{\dot{v}_{\iota},\ddot{v}_{\iota}\},&|\varsigma|=\mathring{T},\end{cases}\qquad
\mathrm{co}[c_{\kappa\iota}(\varsigma)]=\begin{cases}\dot{c}_{\kappa\iota},&|\varsigma|<\mathring{T},\\ \ddot{c}_{\kappa\iota},&|\varsigma|>\mathring{T},\\ \mathrm{co}\{\dot{c}_{\kappa\iota},\ddot{c}_{\kappa\iota}\},&|\varsigma|=\mathring{T},\end{cases}\qquad
\mathrm{co}[d_{\kappa\iota}(\varsigma)]=\begin{cases}\dot{d}_{\kappa\iota},&|\varsigma|<\mathring{T},\\ \ddot{d}_{\kappa\iota},&|\varsigma|>\mathring{T},\\ \mathrm{co}\{\dot{d}_{\kappa\iota},\ddot{d}_{\kappa\iota}\},&|\varsigma|=\mathring{T},\end{cases}$$
where $\varsigma\in R$, the switching jump $\mathring{T}$ is a positive constant, and $\dot{u}_{\kappa}$, $\ddot{u}_{\kappa}$, $\dot{a}_{\iota\kappa}$, $\ddot{a}_{\iota\kappa}$, $\dot{b}_{\iota\kappa}$, $\ddot{b}_{\iota\kappa}$, $\dot{v}_{\iota}$, $\ddot{v}_{\iota}$, $\dot{c}_{\kappa\iota}$, $\ddot{c}_{\kappa\iota}$, $\dot{d}_{\kappa\iota}$ and $\ddot{d}_{\kappa\iota}$ are all constants. The sets $\mathrm{co}[u_{\kappa}(\varsigma)]$, $\mathrm{co}[a_{\iota\kappa}(\varsigma)]$, $\mathrm{co}[b_{\iota\kappa}(\varsigma)]$, $\mathrm{co}[v_{\iota}(\varsigma)]$, $\mathrm{co}[c_{\kappa\iota}(\varsigma)]$, $\mathrm{co}[d_{\kappa\iota}(\varsigma)]$ are all compact, closed and convex.
Let measurable selections
$$\acute{u}_{\kappa}(\varsigma)\in\mathrm{co}[u_{\kappa}(\varsigma)],\quad \acute{a}_{\iota\kappa}(\varsigma)\in\mathrm{co}[a_{\iota\kappa}(\varsigma)],\quad \acute{b}_{\iota\kappa}(\varsigma)\in\mathrm{co}[b_{\iota\kappa}(\varsigma)],\quad \acute{v}_{\iota}(\varsigma)\in\mathrm{co}[v_{\iota}(\varsigma)],\quad \acute{c}_{\kappa\iota}(\varsigma)\in\mathrm{co}[c_{\kappa\iota}(\varsigma)],\quad \acute{d}_{\kappa\iota}(\varsigma)\in\mathrm{co}[d_{\kappa\iota}(\varsigma)].$$
By modifying the drive system (1), we can write
$$\begin{cases}
D^{\alpha}x_{\kappa}(t)=-\acute{u}_{\kappa}(x_{\kappa}(t))x_{\kappa}(t)+\sum\limits_{\iota=1}^{m}\acute{a}_{\iota\kappa}(x_{\kappa}(t))f_{\iota}(y_{\iota}(t))+\sum\limits_{\iota=1}^{m}\acute{b}_{\iota\kappa}(x_{\kappa}(t-\tau_{1}))f_{\iota}(y_{\iota}(t-\tau_{2}))\\
\qquad\qquad+\sum\limits_{\iota=1}^{m}r_{\iota}(y_{\iota}(t-\tau_{2}))\,dB(t)+I_{\kappa},\\[1mm]
D^{\alpha}y_{\iota}(t)=-\acute{v}_{\iota}(y_{\iota}(t))y_{\iota}(t)+\sum\limits_{\kappa=1}^{n}\acute{c}_{\kappa\iota}(y_{\iota}(t))g_{\kappa}(x_{\kappa}(t))+\sum\limits_{\kappa=1}^{n}\acute{d}_{\kappa\iota}(y_{\iota}(t-\tau_{2}))g_{\kappa}(x_{\kappa}(t-\tau_{1}))\\
\qquad\qquad+\sum\limits_{\kappa=1}^{n}s_{\kappa}(x_{\kappa}(t-\tau_{1}))\,dB(t)+J_{\iota}.
\end{cases}$$
Similarly, let
$$\grave{u}_{\kappa}(\varsigma)\in\mathrm{co}[u_{\kappa}(\varsigma)],\quad \grave{a}_{\iota\kappa}(\varsigma)\in\mathrm{co}[a_{\iota\kappa}(\varsigma)],\quad \grave{b}_{\iota\kappa}(\varsigma)\in\mathrm{co}[b_{\iota\kappa}(\varsigma)],\quad \grave{v}_{\iota}(\varsigma)\in\mathrm{co}[v_{\iota}(\varsigma)],\quad \grave{c}_{\kappa\iota}(\varsigma)\in\mathrm{co}[c_{\kappa\iota}(\varsigma)],\quad \grave{d}_{\kappa\iota}(\varsigma)\in\mathrm{co}[d_{\kappa\iota}(\varsigma)],$$
by employing the same method, we can modify the response system (2) as follows:
$$\begin{cases}
D^{\alpha}\breve{x}_{\kappa}(t)=-\grave{u}_{\kappa}(\breve{x}_{\kappa}(t))\breve{x}_{\kappa}(t)+\sum\limits_{\iota=1}^{m}\grave{a}_{\iota\kappa}(\breve{x}_{\kappa}(t))f_{\iota}(\breve{y}_{\iota}(t))+\sum\limits_{\iota=1}^{m}\grave{b}_{\iota\kappa}(\breve{x}_{\kappa}(t-\tau_{1}))f_{\iota}(\breve{y}_{\iota}(t-\tau_{2}))\\
\qquad\qquad+\sum\limits_{\iota=1}^{m}r_{\iota}(\breve{y}_{\iota}(t-\tau_{2}))\,dB(t)+I_{\kappa}+\gamma_{\kappa},\\[1mm]
D^{\alpha}\breve{y}_{\iota}(t)=-\grave{v}_{\iota}(\breve{y}_{\iota}(t))\breve{y}_{\iota}(t)+\sum\limits_{\kappa=1}^{n}\grave{c}_{\kappa\iota}(\breve{y}_{\iota}(t))g_{\kappa}(\breve{x}_{\kappa}(t))+\sum\limits_{\kappa=1}^{n}\grave{d}_{\kappa\iota}(\breve{y}_{\iota}(t-\tau_{2}))g_{\kappa}(\breve{x}_{\kappa}(t-\tau_{1}))\\
\qquad\qquad+\sum\limits_{\kappa=1}^{n}s_{\kappa}(\breve{x}_{\kappa}(t-\tau_{1}))\,dB(t)+J_{\iota}+\eta_{\iota}.
\end{cases}$$
Assumption A1.
Each function $f_{\iota}$ satisfies a Lipschitz condition: there exist positive constants $f_{\iota}^{*}$ such that
$$|f_{\iota}(t_{1})-f_{\iota}(t_{2})|\le f_{\iota}^{*}|t_{1}-t_{2}|,$$
where $t_{1},t_{2}\in R$ and $f_{\iota}^{*}$ are positive Lipschitz constants. The functions $g_{\kappa}$, $s_{\kappa}$ and $r_{\iota}$ are assumed to satisfy the same condition, with Lipschitz constants $g_{\kappa}^{*}$, $s_{\kappa}^{*}$ and $r_{\iota}^{*}$, respectively.
Assumption A2.
$f_{\iota}(\pm\mathring{T})=g_{\kappa}(\pm\mathring{T})=0$.
Lemma 3.
[37] Under Assumptions A1 and A2, for any $\grave{c}_{\kappa\iota},\acute{c}_{\kappa\iota}\in\mathrm{co}[c_{\kappa\iota}(\varrho)]$ we have
$$|\grave{c}_{\kappa\iota}(t_{2})g_{\kappa}(t_{2})-\acute{c}_{\kappa\iota}(t_{1})g_{\kappa}(t_{1})|\le c_{\kappa\iota}^{*}g_{\kappa}^{*}|t_{2}-t_{1}|,$$
where $c_{\kappa\iota}^{*}=\max\{|\ddot{c}_{\kappa\iota}|,|\dot{c}_{\kappa\iota}|\}$.
By Assumption A1 and Lemma 3, the synchronization error system can be expressed as
$$\begin{cases}
D^{\alpha}e_{x,\kappa}(t)\le-(u_{\kappa}^{*}+\zeta_{\kappa})e_{x,\kappa}(t)+\sum\limits_{\iota=1}^{m}a_{\iota\kappa}^{*}f_{\iota}^{*}e_{y,\iota}(t)+\sum\limits_{\iota=1}^{m}b_{\iota\kappa}^{*}f_{\iota}^{*}\tilde{e}_{y,\iota}(t)+\sum\limits_{\iota=1}^{m}r_{\iota}^{*}\tilde{e}_{y,\iota}(t)\,dB(t),\\[1mm]
D^{\alpha}e_{y,\iota}(t)\le-(v_{\iota}^{*}+\zeta_{\iota})e_{y,\iota}(t)+\sum\limits_{\kappa=1}^{n}c_{\kappa\iota}^{*}g_{\kappa}^{*}e_{x,\kappa}(t)+\sum\limits_{\kappa=1}^{n}d_{\kappa\iota}^{*}g_{\kappa}^{*}\tilde{e}_{x,\kappa}(t)+\sum\limits_{\kappa=1}^{n}s_{\kappa}^{*}\tilde{e}_{x,\kappa}(t)\,dB(t),
\end{cases}\tag{8}$$
where we denote $\tilde{e}_{y,\iota}(t)=e_{y,\iota}(t-\tau_{2})$, $\tilde{e}_{x,\kappa}(t)=e_{x,\kappa}(t-\tau_{1})$, $u_{\kappa}^{*}=\max\{|\ddot{u}_{\kappa}|,|\dot{u}_{\kappa}|\}$, $v_{\iota}^{*}=\max\{|\ddot{v}_{\iota}|,|\dot{v}_{\iota}|\}$, $a_{\iota\kappa}^{*}=\max\{|\ddot{a}_{\iota\kappa}|,|\dot{a}_{\iota\kappa}|\}$, $b_{\iota\kappa}^{*}=\max\{|\ddot{b}_{\iota\kappa}|,|\dot{b}_{\iota\kappa}|\}$, $c_{\kappa\iota}^{*}=\max\{|\ddot{c}_{\kappa\iota}|,|\dot{c}_{\kappa\iota}|\}$ and $d_{\kappa\iota}^{*}=\max\{|\ddot{d}_{\kappa\iota}|,|\dot{d}_{\kappa\iota}|\}$.
For convenience, we can express inequality (8) as
$$\begin{cases}
D^{\alpha}e_{x}(t)\le-Ue_{x}(t)+AFe_{y}(t)+BF\tilde{e}_{y}(t)+R\tilde{e}_{y}(t)\,dB(t),\\
D^{\alpha}e_{y}(t)\le-Ve_{y}(t)+CGe_{x}(t)+DG\tilde{e}_{x}(t)+S\tilde{e}_{x}(t)\,dB(t),
\end{cases}$$
where $U=\mathrm{diag}\{u_{1}^{*}+\zeta_{1},u_{2}^{*}+\zeta_{2},\ldots,u_{n}^{*}+\zeta_{n}\}$, $V=\mathrm{diag}\{v_{1}^{*}+\zeta_{1},v_{2}^{*}+\zeta_{2},\ldots,v_{m}^{*}+\zeta_{m}\}$, $A=(a_{\iota\kappa}^{*})_{m\times n}$, $B=(b_{\iota\kappa}^{*})_{m\times n}$, $C=(c_{\kappa\iota}^{*})_{n\times m}$, $D=(d_{\kappa\iota}^{*})_{n\times m}$, $F=\max_{1\le\iota\le m}\{f_{\iota}^{*}\}$, $G=\max_{1\le\kappa\le n}\{g_{\kappa}^{*}\}$, $R=\max_{1\le\iota\le m}\{r_{\iota}^{*}\}$, and $S=\max_{1\le\kappa\le n}\{s_{\kappa}^{*}\}$.
Remark 2.
To ensure that $\|e_{x}(t)\|+\|e_{y}(t)\|\le\chi(t)\le\epsilon$, we can simply find an evaluation function $\chi_{1}(t)$ that satisfies $\max\{\|e_{x}(t)\|,\|e_{y}(t)\|\}\le\chi_{1}(t)$ with $\chi_{1}(t)\le\frac{\epsilon}{2}$. This guarantees that $\|e_{x}(t)\|+\|e_{y}(t)\|$ remains below $\epsilon$, while also ensuring that $\chi(t)$ is never less than $\max\{\|e_{x}(t)\|,\|e_{y}(t)\|\}$.
Lemma 4.
[38] (Burkholder-Davis-Gundy inequality) For any $\Phi\in L_{\mathcal{F}_{t}}^{p}([\tau,0];H)$,
$$E\Big[\sup_{t_{0}\le t\le T}\Big|\int_{t_{0}}^{t}\Phi(u)\,d\omega(u)\Big|^{p}\Big]\le c_{p}\,E\Big[\Big(\int_{t_{0}}^{T}|\Phi(u)|^{2}\,du\Big)^{\frac p2}\Big],$$
where $c_{p}=\Big(\dfrac{p^{p+1}}{2(p-1)^{p-1}}\Big)^{\frac p2}$ $(p\ge2)$.
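For $p=2$ and $\Phi\equiv1$ the stochastic integral is just Brownian motion, $c_{2}=\big(2^{3}/2\big)^{1}=4$, and the inequality reduces to the Doob-type bound $E[\sup_{t\le T}|W_{t}|^{2}]\le4T$. A small Monte Carlo sanity check with a fixed seed (illustrative, not part of the paper's argument):

```python
import math
import random

random.seed(1)
T, steps, paths = 1.0, 200, 2000
dt = T / steps
c2 = (2 ** 3 / (2 * 1 ** 1)) ** 1       # c_p for p = 2, equals 4
acc = 0.0
for _ in range(paths):
    w, peak = 0.0, 0.0
    for _ in range(steps):
        w += random.gauss(0.0, math.sqrt(dt))
        peak = max(peak, abs(w))
    acc += peak ** 2
estimate = acc / paths                   # Monte Carlo E[sup_t |W_t|^2]
print(estimate, c2 * T)
```

The empirical value sits well below the constant $c_{2}T=4$, as expected, since the Burkholder-Davis-Gundy constant is not tight.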
Theorem 2.
Suppose Assumptions A1 and A2 hold, Remark 2 is taken into account, and the following conditions are satisfied.
(1) If $\varphi>H(t)$, then
$$\varphi^{2}\Big\{2+\Big[\exp\Big(2^{p-1}\varphi^{2p}\big[2(\|U\|+\|A\|F)^{2}+\|B\|^{2}F^{2}+2R^{2}\big]^{p}t\Big)-1\Big]^{\frac1p}\Big\}\le\frac{\epsilon^{2}}{4}.$$
(2) If $\varphi<H(t)$, then
$$H^{2}(t)\Big\{2+\Big[\exp\Big(2^{p-1}H^{2p}(t)\big[2(\|U\|+\|A\|F)^{2}+\|B\|^{2}F^{2}+2R^{2}\big]^{p}t\Big)-1\Big]^{\frac1p}\Big\}\le\frac{\epsilon^{2}}{4},$$
$$H^{2}(t)=\frac{4t^{2\alpha-1-\frac1p}}{\Gamma^{2}(\alpha)\big(2q(\alpha-1)+1\big)^{\frac1q}},$$
where $t\in[0,T]$, $\epsilon>0$, $\frac{1+p}{2p}<\alpha<1$, and $\frac1p+\frac1q=1$ ($p,q\in N$).
Then the drive system (1) and the response system (2) are finite-time synchronized.
Proof. By Definition 1, for $0<\alpha<1$, we can obtain the following integral inequalities:
$$\begin{cases}
e_{x}(t)\le e_{x}(0)+\dfrac{1}{\Gamma(\alpha)}\displaystyle\int_{t_{0}}^{t}(t-\theta)^{\alpha-1}R\tilde{e}_{y}(\theta)\,dB(\theta)+\dfrac{1}{\Gamma(\alpha)}\displaystyle\int_{t_{0}}^{t}(t-\theta)^{\alpha-1}\big[-Ue_{x}(\theta)+AFe_{y}(\theta)+BF\tilde{e}_{y}(\theta)\big]\,d\theta,\\[2mm]
e_{y}(t)\le e_{y}(0)+\dfrac{1}{\Gamma(\alpha)}\displaystyle\int_{t_{0}}^{t}(t-\theta)^{\alpha-1}S\tilde{e}_{x}(\theta)\,dB(\theta)+\dfrac{1}{\Gamma(\alpha)}\displaystyle\int_{t_{0}}^{t}(t-\theta)^{\alpha-1}\big[-Ve_{y}(\theta)+CGe_{x}(\theta)+DG\tilde{e}_{x}(\theta)\big]\,d\theta.
\end{cases}\tag{10}$$
By taking the norm on both sides of inequality (10), we obtain
$$\|e_{x}(t)\|\le\|e_{x}(0)\|+\frac{1}{\Gamma(\alpha)}\Big\|\int_{t_{0}}^{t}(t-\theta)^{\alpha-1}R\tilde{e}_{y}(\theta)\,dB(\theta)\Big\|+\frac{1}{\Gamma(\alpha)}\int_{t_{0}}^{t}(t-\theta)^{\alpha-1}\big[\|U\|\|e_{x}(\theta)\|+\|A\|F\|e_{y}(\theta)\|+\|B\|F\|\tilde{e}_{y}(\theta)\|\big]\,d\theta,$$
$$\|e_{y}(t)\|\le\|e_{y}(0)\|+\frac{1}{\Gamma(\alpha)}\Big\|\int_{t_{0}}^{t}(t-\theta)^{\alpha-1}S\tilde{e}_{x}(\theta)\,dB(\theta)\Big\|+\frac{1}{\Gamma(\alpha)}\int_{t_{0}}^{t}(t-\theta)^{\alpha-1}\big[\|V\|\|e_{y}(\theta)\|+\|C\|G\|e_{x}(\theta)\|+\|D\|G\|\tilde{e}_{x}(\theta)\|\big]\,d\theta.$$
Assuming $\|e_{x}(t)\|>\|e_{y}(t)\|$, we can square both sides to obtain a bound on $\|e_{x}(t)\|^{2}$. By using Lemma 4, we can then rewrite the expression as follows:
$$\begin{aligned}
\|e_{x}(t)\|^{2}\le{}&\Big(\|e_{x}(0)\|+\frac{1}{\Gamma(\alpha)}\Big\|\int_{t_{0}}^{t}(t-\theta)^{\alpha-1}R\tilde{e}_{y}(\theta)\,dB(\theta)\Big\|\\
&+\frac{1}{\Gamma(\alpha)}\int_{t_{0}}^{t}(t-\theta)^{\alpha-1}\big[\|U\|\|e_{x}(\theta)\|+\|A\|F\|e_{y}(\theta)\|+\|B\|F\|\tilde{e}_{y}(\theta)\|\big]\,d\theta\Big)^{2}\\
\le{}&2\|e_{x}(0)\|^{2}+2\Big(\frac{1}{\Gamma(\alpha)}\Big\|\int_{t_{0}}^{t}(t-\theta)^{\alpha-1}R\tilde{e}_{y}(\theta)\,dB(\theta)\Big\|\\
&+\frac{1}{\Gamma(\alpha)}\int_{t_{0}}^{t}(t-\theta)^{\alpha-1}\big[\|U\|\|e_{x}(\theta)\|+\|A\|F\|e_{y}(\theta)\|+\|B\|F\|\tilde{e}_{y}(\theta)\|\big]\,d\theta\Big)^{2}\\
\le{}&2\|e_{x}(0)\|^{2}+\frac{16}{\Gamma^{2}(\alpha)}\int_{t_{0}}^{t}(t-\theta)^{2(\alpha-1)}R^{2}\|\tilde{e}_{x}(\theta)\|^{2}\,d\theta\\
&+\frac{8}{\Gamma^{2}(\alpha)}\int_{t_{0}}^{t}(t-\theta)^{2(\alpha-1)}\big[(\|U\|+\|A\|F)^{2}\|e_{x}(\theta)\|^{2}+\|B\|^{2}F^{2}\|\tilde{e}_{x}(\theta)\|^{2}\big]\,d\theta\\
\le{}&2\|e_{x}(0)\|^{2}+\frac{8}{\Gamma^{2}(\alpha)}\int_{t_{0}}^{t}(t-\theta)^{2(\alpha-1)}\big[(\|U\|+\|A\|F)^{2}\|e_{x}(\theta)\|^{2}+(\|B\|^{2}F^{2}+2R^{2})\|\tilde{e}_{x}(\theta)\|^{2}\big]\,d\theta.
\end{aligned}$$
To apply Theorem 1, define the following functions: $\omega(t)=\|e_{x}(t)\|$, $\sigma(t)=\|e_{x}(0)\|=\varphi$, $\upsilon(t)=8$, $a(t)=(\|U\|+\|A\|F)^{2}$ and $b(t)=\|B\|^{2}F^{2}+2R^{2}$, with initial value $t_{0}=0$ and $\Omega=\tau$. It is easy to see that all of these functions are nonnegative and continuous. Additionally, $\sigma(t)$ and $\upsilon(t)$ are nondecreasing on $[t_{0},T]$, $\phi(t)$ is nondecreasing on $[t_{0}-\Omega,t_{0}]$ and $\sigma(t_{0})=\phi(t_{0})$.
By using Lemma 2, we can obtain the following results.
(1) If $\varphi > H(t)$, then
$$\|e_x(t)\|^2 \le \varphi^2\Bigl\{2+\exp\Bigl(2^{p-1}\varphi^{2p}\bigl[2(U+AF)^2+B^2F^2+2R^2\bigr]^{p}\,t\Bigr)\Bigr\}^{\frac{1}{1-p}}.$$
(2) If $\varphi \le H(t)$, then
$$\|e_x(t)\|^2 \le H^2(t)\Bigl\{2+\exp\Bigl(2^{p-1}H^{2p}(t)\bigl[2(U+AF)^2+B^2F^2+2R^2\bigr]^{p}\,t\Bigr)\Bigr\}^{\frac{1}{1-p}},$$
where $H^2(t)=\dfrac{4\,t^{\,2\alpha-1-\frac{1}{p}}}{\Gamma^2(\alpha)\bigl(2q(\alpha-1)+1\bigr)^{\frac{1}{q}}}$.
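The exponent condition implicit in $H(t)$ deserves a one-line check (our own filling-in of a step the text leaves implicit, using the Hölder conjugacy $\frac{1}{p}+\frac{1}{q}=1$): $H(t)$ is well defined only when $2q(\alpha-1)+1>0$, which is exactly the hypothesis $\alpha>\frac{1+p}{2p}$ imposed in the numerical example, since

$$2q(\alpha-1)+1>0 \iff \alpha>1-\frac{1}{2q}=1-\frac{p-1}{2p}=\frac{p+1}{2p}.$$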
Therefore, based on the hypothesis conditions, it can be concluded that systems (1) and (2) can achieve synchronization. □
So when $\|e_x(t)\| = \max\{\|e_x(t)\|,\|e_y(t)\|\}$, Remark 2 indicates that the evaluation function $\chi_1(t)$ can be determined as follows.
(1) If $\varphi > H(t)$, then
$$\chi_1^2(t) = \varphi^2\Bigl\{2+\exp\Bigl(2^{p-1}\varphi^{2p}\bigl[2(U+AF)^2+B^2F^2+2R^2\bigr]^{p}\,t\Bigr)\Bigr\}^{\frac{1}{1-p}}.$$
(2) If $\varphi \le H(t)$, then
$$\chi_1^2(t) = H^2(t)\Bigl\{2+\exp\Bigl(2^{p-1}H^{2p}(t)\bigl[2(U+AF)^2+B^2F^2+2R^2\bigr]^{p}\,t\Bigr)\Bigr\}^{\frac{1}{1-p}}.$$
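The two branches of the evaluation function are easy to compute numerically. The sketch below uses the constants of the example in Section 4; the helper names `H_sq` and `chi1_sq` are our own, and the exponent readings follow the reconstruction above.

```python
import math

# Constants of the numerical example in Section 4 (Theorem 2's hypotheses).
alpha, p, q = 0.8, 2.0, 2.0
U, A, B, F, R = 0.5, 0.3, 0.5, 1.0, 1.0
phi = 0.2                     # norm of the initial error, ||e_x(0)||

def H_sq(t):
    # H^2(t) = 4 t^(2*alpha - 1 - 1/p) / (Gamma(alpha)^2 (2q(alpha-1)+1)^(1/q));
    # the hypothesis alpha > (1+p)/(2p) keeps 2q(alpha-1)+1 positive.
    return 4.0 * t ** (2 * alpha - 1 - 1 / p) / (
        math.gamma(alpha) ** 2 * (2 * q * (alpha - 1) + 1) ** (1 / q))

def chi1_sq(t):
    # Evaluation function chi_1^2(t): case (1) when phi > H(t), case (2) otherwise.
    base = phi ** 2 if phi > math.sqrt(H_sq(t)) else H_sq(t)
    K = 2 * (U + A * F) ** 2 + B ** 2 * F ** 2 + 2 * R ** 2
    return base * (2 + math.exp(2 ** (p - 1) * base ** p * K ** p * t)) ** (1 / (1 - p))
```

With $p=2$ the outer exponent is $-1$, so the bound decays as $t$ grows, which is consistent with reading a finite synchronization time off a plot of $\chi_1(t)$ such as Figure 5.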
In Section 4, we will present a numerical example to provide a more visual demonstration of the finite-time synchronization achieved between systems (1) and (2).

4. Numerical examples

Compared to conventional neural networks, neural networks incorporating stochasticity are more adaptable and robust in achieving finite-time synchronization. Stochasticity increases the complexity of the system, endowing it with greater fault tolerance and allowing it to adapt more efficiently to diverse environments and application scenarios. Moreover, neural networks with stochasticity exhibit advantageous characteristics in handling nonlinear and complex problems. In practical applications, the parameters and states of neural network systems are often uncertain; stochasticity can model such uncertainty more effectively and bolster the reliability of the system, thereby elevating its performance in practice. Therefore, investigating neural networks with stochasticity may contribute to enhancing the application capabilities and performance of neural networks.
We illustrate the practical application of Theorem 2 in achieving finite-time synchronization between systems (1) and (2) through a numerical example, which validates the effectiveness of the proposed synchronization method. Specifically, the example simulates the behavior of the systems under varying initial conditions and analyzes the resulting trajectories. The insights gained from this example serve as evidence of the practical relevance of the finite-time synchronization approach presented in this paper.

Example

Consider the fractional-order stochastic MBAMNNs with time delay
$$
\begin{cases}
D^{\alpha}x_{\kappa}(t) = -u_{\kappa}(x_{\kappa}(t))x_{\kappa}(t) + \displaystyle\sum_{\iota=1}^{m} a_{\iota\kappa}(x_{\kappa}(t))f_{\iota}(y_{\iota}(t)) + \sum_{\iota=1}^{m} b_{\iota\kappa}(x_{\kappa}(t-\tau_1))f_{\iota}(y_{\iota}(t-\tau_2)) + \sum_{\iota=1}^{m} r_{\iota}(y_{\iota}(t-\tau_2))\,\mathrm{d}B(t) + I_{\kappa}, \\[2mm]
D^{\alpha}y_{\iota}(t) = -v_{\iota}(y_{\iota}(t))y_{\iota}(t) + \displaystyle\sum_{\kappa=1}^{n} c_{\kappa\iota}(y_{\iota}(t))g_{\kappa}(x_{\kappa}(t)) + \sum_{\kappa=1}^{n} d_{\kappa\iota}(y_{\iota}(t-\tau_2))g_{\kappa}(x_{\kappa}(t-\tau_1)) + \sum_{\kappa=1}^{n} s_{\kappa}(x_{\kappa}(t-\tau_1))\,\mathrm{d}B(t) + J_{\iota},
\end{cases}
$$
where
$$
u_1(x_1)=\begin{cases}0.6, & x_1\le\frac{7}{10},\\ -0.6, & x_1>\frac{7}{10},\end{cases}\quad
u_2(x_2)=\begin{cases}0.4, & x_2\le\frac{7}{10},\\ -0.4, & x_2>\frac{7}{10},\end{cases}\quad
v_1(y_1)=\begin{cases}0.2, & y_1\le\frac{7}{10},\\ -0.2, & y_1>\frac{7}{10},\end{cases}\quad
v_2(y_2)=\begin{cases}0.3, & y_2\le\frac{7}{10},\\ -0.3, & y_2>\frac{7}{10},\end{cases}
$$
$$
a_{11}(x_1)=\begin{cases}0.1, & x_1\le\frac{7}{10},\\ -0.1, & x_1>\frac{7}{10},\end{cases}\quad
a_{12}(x_2)=\begin{cases}0.1, & x_2\le\frac{7}{10},\\ -0.1, & x_2>\frac{7}{10},\end{cases}\quad
a_{21}(x_1)=\begin{cases}0.2, & x_1\le\frac{7}{10},\\ -0.2, & x_1>\frac{7}{10},\end{cases}\quad
a_{22}(x_2)=\begin{cases}0.2, & x_2\le\frac{7}{10},\\ -0.2, & x_2>\frac{7}{10},\end{cases}
$$
$$
b_{11}(\tilde{x}_1)=\begin{cases}0.2, & \tilde{x}_1\le\frac{7}{10},\\ -0.2, & \tilde{x}_1>\frac{7}{10},\end{cases}\quad
b_{12}(\tilde{x}_2)=\begin{cases}0.2, & \tilde{x}_2\le\frac{7}{10},\\ -0.2, & \tilde{x}_2>\frac{7}{10},\end{cases}\quad
b_{21}(\tilde{x}_1)=\begin{cases}0.3, & \tilde{x}_1\le\frac{7}{10},\\ -0.3, & \tilde{x}_1>\frac{7}{10},\end{cases}\quad
b_{22}(\tilde{x}_2)=\begin{cases}0.3, & \tilde{x}_2\le\frac{7}{10},\\ -0.3, & \tilde{x}_2>\frac{7}{10},\end{cases}
$$
$$
c_{11}(y_1)=\begin{cases}0.2, & y_1\le\frac{7}{10},\\ -0.2, & y_1>\frac{7}{10},\end{cases}\quad
c_{12}(y_2)=\begin{cases}0.2, & y_2\le\frac{7}{10},\\ -0.2, & y_2>\frac{7}{10},\end{cases}\quad
c_{21}(y_1)=\begin{cases}0.1, & y_1\le\frac{7}{10},\\ -0.1, & y_1>\frac{7}{10},\end{cases}\quad
c_{22}(y_2)=\begin{cases}0.2, & y_2\le\frac{7}{10},\\ -0.2, & y_2>\frac{7}{10},\end{cases}
$$
$$
d_{11}(\tilde{y}_1)=\begin{cases}0.3, & \tilde{y}_1\le\frac{7}{10},\\ -0.3, & \tilde{y}_1>\frac{7}{10},\end{cases}\quad
d_{12}(\tilde{y}_2)=\begin{cases}0.3, & \tilde{y}_2\le\frac{7}{10},\\ -0.3, & \tilde{y}_2>\frac{7}{10},\end{cases}\quad
d_{21}(\tilde{y}_1)=\begin{cases}0.2, & \tilde{y}_1\le\frac{7}{10},\\ -0.2, & \tilde{y}_1>\frac{7}{10},\end{cases}\quad
d_{22}(\tilde{y}_2)=\begin{cases}0.3, & \tilde{y}_2\le\frac{7}{10},\\ -0.3, & \tilde{y}_2>\frac{7}{10}.\end{cases}
$$
Let $n=m=2$, $\tau_1=\tau_2=0.2$, $g_\kappa(x_\kappa)=\tanh(x_\kappa-\frac{7}{10})$, $f_\iota(y_\iota)=\tanh(y_\iota-\frac{7}{10})$, $r_\iota(y_\iota)=\tanh(y_\iota-\frac{7}{10})$, $s_\kappa(x_\kappa)=\tanh(x_\kappa-\frac{7}{10})$, $I_1=I_2=J_1=J_2=2$, $x(0)=(0.1,0.2)^T$, $y(0)=(0.1,0.2)^T$.
Then, in the corresponding system (2), assume $\zeta_\kappa=(0.1,0.2)^T$, $\zeta_\iota=(0.1,0.2)^T$, $\breve{x}(0)=(0.2,0.3)^T$ and $\breve{y}(0)=(0.2,0.3)^T$.
Let $p=2$, $q=2$, $\alpha=0.8>\frac{1+p}{2p}$ and $\epsilon=2$ such that $\frac{\epsilon}{2}=1$. It is easy to calculate that $\varphi=0.2$, $\breve{\varphi}=0.2$, $U=0.5$, $V=0.1$, $F=G=S=R=1$, $A=0.3$, $B=0.5$, $C=0.4$ and $D=0.6$. Figures 1 and 2 show the error components $e_{x,1}(t)$, $e_{y,1}(t)$ and $e_{x,2}(t)$, $e_{y,2}(t)$ of systems (1) and (2) for $\kappa,\iota=1$ and $\kappa,\iota=2$. From Figures 3 and 4, we can read off the error norms and squared error norms of systems (1) and (2). Finally, based on Figure 5 and Theorem 2, the finite-time synchronization time is determined to be $T=0.3177$.
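As a rough illustration of how such a system can be simulated, the sketch below discretizes the drive system of the example with an explicit Grünwald-Letnikov scheme and an Euler-Maruyama-style treatment of the Brownian increment. This is our own illustrative scheme, not the one used to produce the figures: the sign flip on the second memristive branch is an assumption (the extracted source prints equal values on both branches), and rigorous numerical methods for stochastic fractional equations are more involved.

```python
import math
import random

random.seed(1)

h, alpha, tau, T = 1e-3, 0.8, 0.2, 0.5
N = int(T / h)          # number of time steps
d = int(tau / h)        # delay tau_1 = tau_2 = 0.2 in steps

# Grünwald-Letnikov weights c_j = (-1)^j * binom(alpha, j)
c = [1.0]
for j in range(1, N + 1):
    c.append(c[-1] * (1.0 - (1.0 + alpha) / j))

def sw(v, s, thr=0.7):
    # Memristive coefficient: value v below the threshold 7/10 and,
    # by assumption, -v above it.
    return v if s <= thr else -v

act = lambda z: math.tanh(z - 0.7)      # activations f, g, r, s

uM, vM = [0.6, 0.4], [0.2, 0.3]
aM = [[0.1, 0.1], [0.2, 0.2]]           # aM[iota][kappa] magnitudes
bM = [[0.2, 0.2], [0.3, 0.3]]
cM = [[0.2, 0.2], [0.1, 0.2]]           # cM[kappa][iota] magnitudes
dM = [[0.3, 0.3], [0.2, 0.3]]
I = J = 2.0

x = [[0.1], [0.2]]                      # x(0) = (0.1, 0.2)^T
y = [[0.1], [0.2]]                      # y(0) = (0.1, 0.2)^T

def lag(hist, k):                       # constant history on [-tau, 0]
    return hist[k - d] if k >= d else hist[0]

for k in range(N):
    dB = random.gauss(0.0, math.sqrt(h))        # Brownian increment
    new_x, new_y = [], []
    for i in range(2):
        drift_x = (-sw(uM[i], x[i][k]) * x[i][k] + I
                   + sum(sw(aM[j][i], x[i][k]) * act(y[j][k]) for j in range(2))
                   + sum(sw(bM[j][i], lag(x[i], k)) * act(lag(y[j], k)) for j in range(2)))
        noise_x = sum(act(lag(y[j], k)) for j in range(2))
        gl_x = sum(c[j] * (x[i][k + 1 - j] - x[i][0]) for j in range(1, k + 2))
        new_x.append(x[i][0] + h ** alpha * drift_x - gl_x + noise_x * dB)

        drift_y = (-sw(vM[i], y[i][k]) * y[i][k] + J
                   + sum(sw(cM[j][i], y[i][k]) * act(x[j][k]) for j in range(2))
                   + sum(sw(dM[j][i], lag(y[i], k)) * act(lag(x[j], k)) for j in range(2)))
        noise_y = sum(act(lag(x[j], k)) for j in range(2))
        gl_y = sum(c[j] * (y[i][k + 1 - j] - y[i][0]) for j in range(1, k + 2))
        new_y.append(y[i][0] + h ** alpha * drift_y - gl_y + noise_y * dB)
    for i in range(2):
        x[i].append(new_x[i])
        y[i].append(new_y[i])
```

Running the same loop for a controlled response copy started from $\breve{x}(0)$, $\breve{y}(0)$ and subtracting the trajectories would give error curves of the kind plotted in Figures 1-4.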

5. Conclusions

We have enhanced the fractional-order Gronwall inequality for studying finite-time synchronization of fractional-order stochastic MBAMNNs, building on the inequality originally proposed in [34], and presented an illustrative example to show the effectiveness of the proposed approach. However, it should be noted that we have only analyzed the finite-time synchronization of continuous neural network systems, and have not treated discontinuous neural networks with impulses. Hence, we will investigate the dynamic behaviors of fractional-order neural networks with impulses in the future.

Author Contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Funding

This research was funded by Shandong Provincial Natural Science Foundation under grant ZR2020MA006 and the Introduction and Cultivation Project of Young and Innovative Talents in Universities of Shandong Province.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to express our thanks to the anonymous referees and the editor for their constructive comments and suggestions, which greatly improved this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Cohen, M.A.; Grossberg, S. Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans. Syst. Man Cybernet. 1983, SMC-13(5), 815–826.
2. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79(8), 2554–2558.
3. Chua, L.O.; Yang, L. Cellular neural networks: Theory. IEEE Trans. Circuits Syst. 1988, 35(10), 1257–1272.
4. Kosko, B. Bidirectional associative memories. IEEE Trans. Syst. Man Cybernet. 1988, 18(1), 49–60.
5. Xiao, J.; Zhong, S.; Li, Y.; et al. Finite-time Mittag–Leffler synchronization of fractional-order memristive BAM neural networks with time delays. Neurocomputing 2017, 219, 431–439.
6. Zhang, W.; Zhang, H.; Cao, J.; et al. Synchronization in uncertain fractional-order memristive complex-valued neural networks with multiple time delays. Neural Netw. 2019, 110, 186–198.
7. Zhang, W.; Zhang, H.; Cao, J.; et al. Synchronization of delayed fractional-order complex-valued neural networks with leakage delay. Phys. A 2020, 556, 124710.
8. Marcus, C.M.; Westervelt, R.M. Stability of analog neural networks with delay. Phys. Rev. A 1989, 39(1), 347.
9. Ding, K.E.; Huang, N.J. Global robust exponential stability of interval general BAM neural network with delays. Neural Process. Lett. 2006, 23(2), 171–182.
10. Zhang, Z.; Yang, Y.; Huang, Y. Global exponential stability of interval general BAM neural networks with reaction–diffusion terms and multiple time-varying delays. Neural Netw. 2011, 24, 457–465.
11. Wang, D.; Huang, L.; Tang, L. Dissipativity and synchronization of generalized BAM neural networks with multivariate discontinuous activations. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29(8), 3815–3827.
12. Xu, C.; Li, P.; Pang, Y. Global exponential stability for interval general bidirectional associative memory (BAM) neural networks with proportional delays. Math. Methods Appl. Sci. 2016, 39(18), 5720–5731.
13. Duan, L. Existence and global exponential stability of pseudo almost periodic solutions of a general delayed BAM neural networks. J. Syst. Sci. Complex. 2018, 31(2), 608–620.
14. Diethelm, K.; Ford, N.J. Analysis of fractional differential equations. J. Math. Anal. Appl. 2002, 265(2), 229–248.
15. Magin, R.L. Fractional calculus models of complex dynamics in biological tissues. Comput. Math. Appl. 2010, 59, 1585–1593.
16. Li, L.; Wang, X.; Li, C.; et al. Exponential synchronization-like criterion for state-dependent impulsive dynamical networks. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 1025–1033.
17. Picozzi, S.; West, B.J. Fractional Langevin model of memory in financial markets. Phys. Rev. E 2002, 66, 046118.
18. Kaslik, E.; Sivasundaram, S. Nonlinear dynamics and chaos in fractional-order neural networks. Neural Netw. 2012, 32, 245–256.
19. Ding, D.; You, Z.; Hu, Y.; et al. Finite-time synchronization of delayed fractional-order quaternion-valued memristor-based neural networks. Int. J. Mod. Phys. B 2021, 35(03), 2150032.
20. Chen, J.; Jiang, M. Stability of memristor-based fractional-order neural networks with mixed time-delay and impulsive. Neural Process. Lett. 2022, 1–22.
21. Chen, J.; Chen, B.; Zeng, Z. Global asymptotic stability and adaptive ultimate Mittag–Leffler synchronization for a fractional-order complex-valued memristive neural networks with delays. IEEE Trans. Syst. Man Cybernet. 2018, 49(12), 2519–2535.
22. Yang, X.; Li, C.; Huang, T.; et al. Quasi-uniform synchronization of fractional-order memristor-based neural networks with delay. Neurocomputing 2017, 234, 205–215.
23. Yang, S.; Yu, J.; Hu, C.; et al. Quasi-projective synchronization of fractional-order complex-valued recurrent neural networks. Neural Netw. 2018, 104, 104–113.
24. Chen, L.; Wu, R.; Cao, J.; et al. Stability and synchronization of memristor-based fractional-order delayed neural networks. Neural Netw. 2015, 71, 37–44.
25. Chen, J.; Zeng, Z.; Jiang, P. Global Mittag-Leffler stability and synchronization of memristor-based fractional-order neural networks. Neural Netw. 2014, 51, 1–8.
26. Yang, X.; Li, C.; Huang, T.; et al. Synchronization of fractional-order memristor-based complex-valued neural networks with uncertain parameters and time delays. Chaos Solitons Fractals 2018, 110, 105–123.
27. Muthukumar, P.; Balasubramaniam, P. Feedback synchronization of the fractional order reverse butterfly-shaped chaotic system and its application to digital cryptography. Nonlinear Dyn. 2013, 73, 1169–1181.
28. Wen, S.; Zeng, Z.; Huang, T.; et al. Lag synchronization of switched neural networks via neural activation function and applications in image encryption. IEEE Trans. Neural Netw. Learn. Syst. 2015, 7, 1493–1502.
29. Alimi, A.M.; Aouiti, C.; Assali, E.A. Finite-time and fixed-time synchronization of a class of inertial neural networks with multi-proportional delays and its application to secure communication. Neurocomputing 2019, 332, 29–43.
30. Ni, J.; Liu, L.; Liu, C.; et al. Fast fixed-time nonsingular terminal sliding mode control and its application to chaos suppression in power system. IEEE Trans. Circuits Syst. II Express Briefs 2017, 64, 151–155.
31. Zhang, D.; Cheng, J.; Cao, J.; et al. Finite-time synchronization control for semi-Markov jump neural networks with mode-dependent stochastic parametric uncertainties. Appl. Math. Comput. 2019, 344, 230–242.
32. Pratap, A.; Raja, R.; Cao, J.; Rajchakit, G.; Alsaadi, F.E. Further synchronization in finite time analysis for time-varying delayed fractional order memristive competitive neural networks with leakage delay. Neurocomputing 2018, 317, 110–126.
33. Ye, H.; Gao, J.; Ding, Y. A generalized Gronwall inequality and its application to a fractional differential equation. J. Math. Anal. Appl. 2007, 328(2), 1075–1081.
34. Du, F.; Lu, J.G. New criterion for finite-time synchronization of fractional-order memristor-based neural networks with time delay. Appl. Math. Comput. 2021, 389, 125616.
35. Kilbas, A.A.; Marzan, S.A. Nonlinear differential equations with the Caputo fractional derivative in the space of continuously differentiable functions. Differ. Equ. 2005, 41(1), 84–89.
36. Bainov, D.D.; Simeonov, P.S. Integral Inequalities and Applications; Springer: New York, NY, USA, 1992.
37. Chen, J.; Zeng, Z.; Jiang, P. Global Mittag-Leffler stability and synchronization of memristor-based fractional-order neural networks. Neural Netw. 2014, 51, 1–8.
38. Rao, R.; Pu, Z. Stability analysis for impulsive stochastic fuzzy p-Laplace dynamic equations under Neumann or Dirichlet boundary condition. Bound. Value Probl. 2013, 2013, 133.
Figure 1. The errors $e_{x,\kappa}(t)$ and $e_{y,\iota}(t)$ for $\kappa,\iota=1$ with $\alpha=0.8$.
Figure 2. The errors $e_{x,\kappa}(t)$ and $e_{y,\iota}(t)$ for $\kappa,\iota=2$ with $\alpha=0.8$.
Figure 3. The error norms $\|e_x(t)\|$ and $\|e_y(t)\|$ of systems (1) and (2).
Figure 4. The squared error norms $\|e_x(t)\|^2$ and $\|e_y(t)\|^2$ of systems (1) and (2).
Figure 5. The evaluation function $\chi_1(t)$ with $\alpha=0.8$.