Abstract
The study of the stability of dynamical systems excited by fractional Gaussian noise (fGn) is very difficult, since the response of such a system is not a Markov process; as a result, the classical diffusion-process method is not applicable and numerical simulation is very inefficient. In the present paper, a procedure based on deep learning for calculating the largest Lyapunov exponent, and thereby determining the asymptotic Lyapunov stability with probability one, of quasi-integrable and non-resonant Hamiltonian systems under parametric excitations of fGn is proposed for the first time. First, fGn is regarded approximately as a wide-band process. Next, the original system is approximated by averaged SDEs of lower dimension using the stochastic averaging method, where the drift and diffusion coefficients of the averaged SDEs are obtained using a back-propagation neural network (BPNN). Then, the expression for the largest Lyapunov exponent of the averaged SDEs is obtained by generalizing Khasminskii's procedure, and the stochastic stability of the original system is determined approximately. Two examples of MDOF nonlinear systems are worked out to illustrate the proposed procedure. The results are compared with those from simulation of the original systems, and the comparison shows the effectiveness of the proposed procedure.
1 Introduction
Stochastic stability is a critical concern in engineering, manifesting in vibration problems such as power system dynamics [1], wind-induced vibration [2, 3], and railway vehicle dynamics [4]. There are several definitions of stochastic stability, with Lyapunov stability with probability one being the most commonly adopted. The stochastic averaging method has been extensively utilized to study Lyapunov stability with probability one due to its efficacy in analyzing multi-degree-of-freedom (MDOF) strongly nonlinear systems. Examples include the stability analysis of quasi-Hamiltonian systems excited by Gaussian white noise [5,6,7], systems driven by combined Gaussian and Poisson white noises [8,9,10], and systems with fractional derivative damping [11, 12]. In these studies, ideal or Markovian noise excitation is assumed, whereas real-world noise exhibits long-range correlation. Consequently, these works have limitations in analyzing the stochastic stability of engineering systems.
Fractional Gaussian noise (fGn), which has recently gained popularity, exhibits long-range correlation, making it more suitable for modeling practical noise. However, due to this long-range correlation, developing theoretical methods for predicting system responses under fGn excitation is extremely challenging. Several scholars have conducted research in this area. For instance, Biagini et al. [13] developed the calculus and stochastic differential equations (SDEs) with respect to fractional Brownian motion (fBm). Kaarakka [14] and Grigoriu [15] investigated the case of one-dimensional linear systems. Xu et al. [16] made significant efforts in developing the stochastic averaging principle for systems subject to fGn. Based on this averaging principle, Lu et al. [17, 18] developed the stochastic averaging method for quasi-Hamiltonian systems subjected to fGn excitation. Due to the non-Markovian nature of the system response, numerical simulations are required to obtain response statistics when using this method. To address this limitation, Lu et al. [19, 20] recently developed an analytical prediction for the response of quasi-integrable Hamiltonian systems driven by fGn under appropriate parameter conditions. Despite numerous studies on the stochastic dynamics of quasi-Hamiltonian systems under fGn excitation, the stochastic stability of such systems remains largely unexplored.
In previous studies, analyzing the stochastic stability of MDOF nonlinear systems involved combining the stochastic averaging method with the maximum Lyapunov exponent method. This approach first simplifies MDOF strongly nonlinear systems into approximate lower-dimensional systems using stochastic averaging. The stochastic stability of the system is then analyzed using the maximum Lyapunov exponent method, as in Ref. [6]. However, when dealing with complex MDOF systems, particularly those with strong coupling and strong nonlinearity, it is challenging to apply the stochastic averaging method to derive analytical expressions for the drift and diffusion coefficients of the averaged SDEs.
Currently, deep learning, as a data-driven approach, excels in handling large-scale, high-dimensional data and has achieved significant success in many application fields [21,22,23]. It has also been introduced into the field of stochastic dynamics [24]. Given these advantages, this paper combines deep learning with the stochastic averaging method. Specifically, we use a back-propagation neural network (BPNN) [25] to obtain the drift and diffusion coefficients of the averaged SDEs.
Based on the data-driven stochastic averaging method, a procedure for studying the asymptotic Lyapunov stability with probability one of quasi-integrable and non-resonant Hamiltonian systems subject to fGn is proposed for the first time. First, fGn is approximately regarded as wideband noise. The original system is then transformed into averaged SDEs by applying the stochastic averaging method for quasi-integrable Hamiltonian systems subject to wideband noise, where the drift and diffusion coefficients of the averaged SDEs are obtained using a BPNN. Subsequently, the expression for the Lyapunov exponent of the averaged system is obtained by a procedure similar to Khasminskii's [26] and is taken as the first approximation of the largest Lyapunov exponent of the original system. The asymptotic Lyapunov stability with probability one of the original system is then approximately determined from this largest Lyapunov exponent. Finally, two examples are presented with detailed analyses. Comparative assessments between theoretical calculations and Monte Carlo simulations are performed to validate the proposed procedure.
This paper is arranged as follows: in Sect. 2, the general system equations are briefly introduced; in Sect. 3, the method of system dimensionality reduction is introduced, i.e., deep learning-based stochastic averaging method; in Sect. 4, the method for calculating the largest Lyapunov exponent of system is introduced; two examples are carried out in Sect. 5; the conclusion is given in Sect. 6.
2 Formulation of the problem
Consider an n-DOF quasi-Hamiltonian system driven by fGn and governed by the following motion equations:
where \(Q_{i}\) and \(P_{i}\) are generalized displacements and momenta, respectively; \(H = H({\mathbf{Q}},{\mathbf{P}})\) is a twice-differentiable Hamiltonian; \(\varepsilon\) is a small parameter; \(\varepsilon^{{2{\mathcal{H}}}} c_{ij} ({\mathbf{Q}},{\mathbf{P}})\) denote weak damping coefficients; \(\varepsilon^{{\mathcal{H}}} f_{ik} ({\mathbf{Q}},{\mathbf{P}})\) denote amplitudes of random excitations; and \(W_{k}^{{\mathcal{H}}} (t)\;(k = 1,2, \ldots ,m)\) are independent fGns. Similar to Gaussian white noise, fGn is the formal derivative of fractional Brownian motion (fBm), \(W^{H} (t) = {\text{d}}B^{H} (t)/{\text{d}}t\). The mathematical expression of fBm is [27]
where the parameter \(\mathcal{H}\) is called the Hurst index. The coefficient \(C_{H}\) was rederived by Deng et al. [17] as
The corresponding power spectral density \(S_{k} (\omega )\) of fGn is
where \(2D_{k}\) is the intensity of \(W_{k}^{H} (t)\) and \({\mathcal{H}}_{k}\) is the Hurst index. Figures 1 and 2 show samples of fGn and the corresponding power spectral density (PSD) \(S(\omega )\) for different Hurst indices \({\mathcal{H}}\), respectively. It is observed that as \({\mathcal{H}}\) ranges from 1/2 to 1, \(S(\omega )\) transitions from a constant value to a Dirac delta function. Asymptotic analysis of Eq. (4) also indicates that when \({\mathcal{H}} \to 1/2\) or \({\mathcal{H}} \to 1\), \(S(\omega ) \to D/\pi\) or \(S(\omega ) \to 2D\delta (\omega )\), respectively. Therefore, fGn can be understood as a type of noise with properties lying between Gaussian white noise and a Gaussian random variable. In the high-frequency domain, e.g., for frequencies greater than 1.0, \(S(\omega )\) remains relatively flat. Thus, if the natural frequencies of system (1) lie in this frequency domain, the stochastic averaging method for quasi-integrable Hamiltonian systems under wideband random excitation [28] can be applied to system (1).
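These limiting behaviors can be checked numerically. The sketch below assumes the commonly used spectral form \(S(\omega ) = (D/\pi )\Gamma (2{\mathcal{H}} + 1)\sin (\pi {\mathcal{H}})\left| \omega \right|^{{1 - 2{\mathcal{H}}}}\) for Eq. (4) (an assumption, since Eq. (4) is not reproduced here), which indeed reduces to the flat white-noise level \(D/\pi\) at \({\mathcal{H}} = 1/2\) and decays slowly at high frequencies for \({\mathcal{H}}\) slightly above 1/2:

```python
import math

def fgn_psd(omega, H, D=1.0):
    """Assumed PSD of fGn with Hurst index H and intensity 2D:
    S(w) = (D/pi) * Gamma(2H+1) * sin(pi*H) * |w|**(1-2H)."""
    return (D / math.pi) * math.gamma(2 * H + 1) * math.sin(math.pi * H) \
        * abs(omega) ** (1 - 2 * H)

# At H = 1/2 the spectrum is flat: S(omega) = D/pi for every omega
print(fgn_psd(0.5, 0.5), fgn_psd(5.0, 0.5))   # both equal D/pi ≈ 0.3183

# For H > 1/2 the spectrum decays with omega, but only mildly at high frequency
print(fgn_psd(1.0, 0.7), fgn_psd(2.0, 0.7))   # nearly flat over [1, 2]
```

The nearly flat behavior at high frequency is what justifies treating fGn as wideband noise when the system's natural frequencies lie in that range.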
3 Simplification of system (1) using averaging method
3.1 Stochastic averaging
Assume that the Hamiltonian system associated with Eq. (1) is integrable and the Hamiltonian \(H\) is separable, i.e.,
where the potential
Under certain conditions [28], system (1) has the following randomly periodic solution
where \(A_{i}\) is the displacement amplitude and \(B_{i}\) is the central position of the \(i\)th DOF. Under the influence of the random excitations \(W_{k}^{H} (t)\), \(A_{i} ,\Phi_{i} ,\Gamma_{i} ,\Theta_{i}\) become random processes. \(\nu_{i} = \nu_{i} (A_{i} ,\Phi_{i} )\) in Eq. (7) represents the instantaneous frequency of the \(i\)th DOF and can be expressed as
Treating Eq. (7) as a transformation from \([{\mathbf{Q}}_{{}}^{{\text{T}}} ,{\mathbf{P}}_{{}}^{{\text{T}}} ]^{{\text{T}}}\) to \([{\mathbf{A}}^{{\text{T}}} ,{{\varvec{\Phi}}}_{{}}^{{\text{T}}} ]^{{\text{T}}}\) and substituting it into system (1), one can obtain the motion equations for \([{\mathbf{A}}^{{\text{T}}} ,{{\varvec{\Phi}}}_{{}}^{{\text{T}}} ]^{{\text{T}}}\). By using the relation \(H_{i} = U_{i} (A_{i} )\), we can obtain the motion equations for \([{\mathbf{H}}^{{\text{T}}} ,{{\varvec{\Phi}}}^{{\text{T}}} ]^{{\text{T}}}\) as follows
where \({\mathbf{H}} = [H_{1} ,H_{2} , \ldots ,H_{n} ]^{{\text{T}}}\), \({\mathbf{\Phi = [}}\Phi_{1} ,\Phi_{2} , \ldots ,\Phi_{n} {\mathbf{]}}^{{\text{T}}}\) and
Considering the case that all mean frequencies \(\omega_{0i} (H_{i} )\;(i = 1,2, \ldots ,n)\) fall within the domain where the power spectral densities \(S_{k} (\omega )\;(k = 1,2, \ldots ,m)\) remain relatively flat (see Fig. 2), the stochastic averaging method for quasi-integrable Hamiltonian systems under wideband noise excitation can be applied to system (9) [19, 28]. It is observed from Eq. (9) that in the non-resonant case \({\mathbf{H}}\) is a slowly varying vector process and \({{\varvec{\Phi}}}\) a rapidly varying one. According to the Stratonovich–Khasminskii limit theorem [29, 30], \({\mathbf{H}}(t)\) in Eq. (9) converges weakly to an \(n\)-dimensional Markov diffusion process as \(\varepsilon \to 0\). The governing averaged Itô equations are of the form
where \(B_{1} (t),B_{2} (t), \ldots ,B_{m} (t)\) are the independent standard Wiener processes. The drift coefficient functions \(a_{i}\) and diffusion coefficient functions \(b_{ij}\) are given by
where \(R_{kl} (\tau )\) is the cross-correlation function of fGn; \(\langle \cdot \rangle_{t}\) denotes the following averaging operation
Traditionally, obtaining the explicit expressions for \(a_{i}\) and \(b_{ij}\) involves expanding the terms in Eq. (10) into Fourier series with respect to \(\Phi_{i}\), integrating with respect to \(\tau\), and then averaging according to Eq. (13). This mathematical processing is extremely complex. For general systems, especially those with MDOF and strong nonlinearity, it is often impossible to derive exact expressions for \(a_{i}\) and \(b_{ij}\). This complexity poses significant challenges for subsequent stability analysis. Therefore, this paper proposes using a back-propagation neural network (BPNN) to determine the values of these coefficients. This approach, combined with stochastic averaging, is more efficient in obtaining the averaged SDEs, i.e., the lower-dimensional approximation of the original system.
3.2 BP neural network
Figure 3 depicts a neural network known as the error back-propagation neural network (BPNN) [25], which consists of an input layer (\(n\) nodes), a hidden layer (\(l\) nodes), and an output layer (\(m\) nodes). In this network, the input data \(A_{1} ,A_{2} , \ldots ,A_{n}\) are the \(H_{i}\), the motion variables of the averaged SDEs in Eq. (11). The output data \(Z_{1} ,Z_{2} , \ldots ,Z_{m}\) correspond to the drift and diffusion coefficients defined in Eq. (12). \(C_{1} ,C_{2} , \ldots ,C_{l}\) are the outputs of the hidden layer. The \(w_{ji} \;(j = 1,2, \ldots ,l;\;i = 1,2, \ldots ,n)\) and \(v_{kj} \;(k = 1,2, \ldots ,m;\;j = 1,2, \ldots ,l)\) are the connection weights between the three layers. The \(\theta_{1} ,\theta_{2} , \ldots ,\theta_{l}\) and \(\gamma_{1} ,\gamma_{2} , \ldots ,\gamma_{m}\) denote the offsets of the hidden layer and output layer, respectively.
Training data of \(H_{i}\) and the drift and diffusion coefficients, captured from the motion states \(({\mathbf{Q}},{\mathbf{P}})\) of the original system (1) according to Eqs. (5) and (12), enter the network through the input layer. We employ the mean-squared error as the loss function, i.e.,
where \(T_{k}\) are the training targets at the output layer and \(S\) is the total number of training samples. The network learns iteratively until the loss function \(L_{f}\) reaches its minimum. In the learning process, the adjustment values \(\Delta v_{kj}\) and \(\Delta w_{ji}\) are determined using the gradient descent method. That is, for the adjustment value \(\Delta v_{kj}\) of the output layer,
where \(\alpha\) is a constant that measures the adjustment level; \({\text{net}}_{k} = \sum\nolimits_{j = 1}^{l} {v_{kj} C_{j} + \gamma_{k} }\); and \(f( \cdot ),f^{\prime}( \cdot )\) are the activation function and its derivative. For the adjustment value \(\Delta w_{ji}\) of the hidden layer,
where the constant \(\beta\) measures the adjustment level and \({\text{net}}_{j} = \sum\nolimits_{i = 1}^{n} {w_{ji} A_{i} + \theta_{j} }\).
Thus, the process of deep learning based on the BPNN can be roughly divided into three steps. First, calculate the outputs of the hidden layer and the output layer,
where the activation function is defined by the sigmoid function \(f(x) = 1/(1 + \exp ( - x))\). Second, noting that \(f^{\prime}(x) = f(x)[1 - f(x)]\), the error \(\delta_{k}\) at the output layer and the error \(\sigma_{j}\) at the hidden layer are
Finally, adjust the connection weights \(w_{ji} ,v_{kj}\) and offset values \(\gamma_{k} ,\theta_{j}\),
Once learning is complete, the network retains the learned results. Consequently, when input variables \(H_{i}\) are introduced, even if they are not part of the training data, the output layer can still provide reasonable values for the drift and diffusion coefficients.
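The three training steps above can be condensed into a short numpy sketch. The layer sizes, learning rates, and the toy regression target (standing in for a drift coefficient as a function of a scalar \(H\)) are all hypothetical; the forward pass, error back-propagation with \(f^{\prime}(x) = f(x)[1 - f(x)]\), and gradient-descent updates follow the three steps described in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical training set: inputs A stand in for H values, targets T for a
# drift coefficient; the true mapping 0.5*H*(1 - H) is chosen for illustration.
A = rng.uniform(0, 1, size=(200, 1))
T = 0.5 * A * (1 - A)

n, l, m = 1, 8, 1                                       # input/hidden/output nodes
w = rng.normal(0, 1, (l, n)); theta = np.zeros((l, 1))  # hidden weights, offsets
v = rng.normal(0, 1, (m, l)); gamma = np.zeros((m, 1))  # output weights, offsets
alpha = beta = 0.5                                      # adjustment levels

for _ in range(5000):
    # Step 1: forward pass through hidden and output layers
    C = sigmoid(w @ A.T + theta)                        # hidden outputs C_j
    Z = sigmoid(v @ C + gamma)                          # network outputs Z_k
    # Step 2: output-layer and back-propagated hidden-layer errors
    delta = (T.T - Z) * Z * (1 - Z)
    sigma = (v.T @ delta) * C * (1 - C)
    # Step 3: gradient-descent adjustment of weights and offsets
    v += alpha * delta @ C.T / len(A)
    gamma += alpha * delta.mean(axis=1, keepdims=True)
    w += beta * sigma @ A / len(A)
    theta += beta * sigma.mean(axis=1, keepdims=True)

mse = float(np.mean((sigmoid(v @ sigmoid(w @ A.T + theta) + gamma) - T.T) ** 2))
print(f"final training MSE: {mse:.5f}")
```

After training, evaluating the forward pass at a new \(H\) value yields an interpolated coefficient estimate, which is the property used in the next paragraph.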
4 The largest Lyapunov exponent
By employing the data-driven stochastic averaging method, the original system (1) can be simplified into averaged SDEs (11). Subsequently, the maximum Lyapunov exponent method can be utilized to analyze the stochastic stability.
Linearizing Eq. (11) at \(H = 0\) yields
The coefficients of Eq. (20) satisfy the following conditions
Traditionally, the coefficients \(F_{r} ({\mathbf{H}})\) and \(G_{rk} ({\mathbf{H}})\) are extremely difficult to obtain because the coefficients in Eq. (12) usually have no exact expressions and contain multiple integrals. After applying the BPNN method of Sect. 3.2 to obtain the averaged SDEs, it is easy to obtain these two coefficients in Eq. (20).
Under the condition of Eq. (21), the largest Lyapunov exponent of the averaged SDEs (20) can be obtained using a procedure similar to that of Khasminskii [26]. Introduce the following new variables,
The Itô equations for ρ and \(\alpha_{r}\) can be derived from Eq. (20) using the Itô differential rule
where
Note that \(\sum\nolimits_{r = 1}^{n} {\alpha_{r} } = 1\), so only n − 1 equations of Eq. (22) are independent. Let \(\alpha^{\prime} = [\alpha_{1} ,\alpha_{2} ,...,\alpha_{n - 1} ]^{{\text{T}}}\) be the (n − 1)-dimensional vector diffusion process and replace \(\alpha_{n}\) by \(\alpha_{n} = 1 - \sum\nolimits_{r = 1}^{n - 1} {\alpha_{r} }\).
Define the Lyapunov exponent of the linearized averaged system (20) as the asymptotic rate of exponential growth of \(H^{1/2}\)
Integrating Eq. (23) from 0 to t and dividing by t gives
As \(t \to \infty\), the first and third terms on the right-hand side of Eq. (27) approach 0. Thus
Assume \(\alpha^{\prime}\) is an ergodic diffusion process in the interval \(0 < \left\| {\alpha^{\prime}} \right\| < 1\). According to the ergodic theorem [31], the Lyapunov exponent approaches the largest Lyapunov exponent, i.e.,
where \(p(\alpha^{\prime})\) is the stationary probability density of \(\alpha^{\prime}\) and can be obtained by solving the following stationary FPK equation associated with the first n − 1 equations of Eq. (24)
The boundary conditions of Eq. (30) are
\(\lambda_{\max }\) in Eq. (29) is an approximation of the largest Lyapunov exponent of the original system (1) and can be utilized to study its asymptotic stability with probability one. The domain of asymptotic stability with probability one of system (1) in the parameter space is determined by \(\lambda_{\max } < 0\).
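As a sanity check of the criterion \(\lambda_{\max } < 0\), consider a scalar caricature of the linearized system (20), \({\text{d}}H = aH\,{\text{d}}t + \sigma H\,{\text{d}}B(t)\), for which Itô's rule gives the exponential growth rate of \(H^{1/2}\) in closed form, \(\lambda = (a - \sigma^{2} /2)/2\). A minimal Monte Carlo sketch with arbitrary parameters (not a system from this paper) reproduces the closed form:

```python
import numpy as np

rng = np.random.default_rng(2)

def lyapunov_scalar(a, sig, T=200.0, dt=1e-3, n_paths=20):
    """Estimate lambda = lim (1/2t) ln H(t) for dH = a*H dt + sig*H dB by
    averaging log-growth over independent paths; by Ito's rule ln H is an
    exact drifted Brownian motion, so its increments are summed directly."""
    steps = int(T / dt)
    dB = rng.normal(0.0, np.sqrt(dt), (n_paths, steps))
    logH = (a - 0.5 * sig ** 2) * T + sig * dB.sum(axis=1)  # ln H(T) - ln H(0)
    return float(np.mean(logH) / (2.0 * T))

a, sig = 0.1, 0.8
lam_exact = (a - 0.5 * sig ** 2) / 2.0   # = -0.11 < 0: stable w.p.1
lam_est = lyapunov_scalar(a, sig)
print(lam_exact, lam_est)
```

The sample mean of \(\ln H(T)/(2T)\) converges to \(\lambda\); the full multi-DOF case instead requires the stationary FPK solution of Eq. (30).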
5 Examples
5.1 Example 1
Consider the asymptotic Lyapunov stability with probability one of two Duffing oscillators coupled by linear damping under parametric excitations of fGn. The motion equations of the system are
where \(\beta_{ij}\), \(\omega_{i}\), \(\alpha_{i}\) \((i = 1,2)\) are constants; \(W_{1}^{H} (t),W_{2}^{H} (t)\) are independent unit fGns with Hurst index \({\mathcal{H}}\) and with the PSD in Eq. (4); and \(2D_{1} ,2D_{2}\) modulate the excitation intensities of the fGns.
Letting \(X_{1} = q_{1}\), \(\dot{X}_{1} = p_{1}\), \(X_{2} = q_{2}\), \(\dot{X}_{2} = p_{2}\), the original system (32) can be expressed in the form of the quasi-Hamiltonian system (1). The associated Hamiltonian is
Assume that \(\omega_{1}\) and \(\omega_{2}\) are in the frequency domain where the power spectral densities \(S_{1} (\omega )\) and \(S_{2} (\omega )\) of \(W_{1}^{H} (t)\) and \(W_{2}^{H} (t)\) are relatively flat. By applying the stochastic averaging method introduced in Sect. 3, the following averaged SDEs governing the Hamiltonians \(H_{1} (t),H_{2} (t)\) can be obtained
The drift and diffusion coefficient functions in Eq. (34) can be obtained by the stochastic averaging in Eq. (12). Two approaches are employed to carry out the stochastic averaging: data-driven stochastic averaging, and numerical evaluation of the exact expressions. For the data-driven approach, take the drift and diffusion coefficients \(\overline{m}_{1} ({\mathbf{H}})\) and \(b_{11}^{{}} ({\mathbf{H}})\) as examples,
The motion-state data \((q_{1} ,q_{2} ,p_{1} ,p_{2} )\) are simulated from the original system (32). The training data for \(\overline{m}_{1} ({\mathbf{H}}),b_{11}^{{}} ({\mathbf{H}})\) are then determined by computing statistics of these data according to Eq. (35).
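The statistics step can be read as conditional-moment estimation: in the diffusion limit, the drift is \(\overline{m}_{1} ({\mathbf{H}}) \approx E[\Delta H_{1} |{\mathbf{H}}]/\Delta t\) and the diffusion is \(b_{11} ({\mathbf{H}}) \approx E[(\Delta H_{1} )^{2} |{\mathbf{H}}]/\Delta t\). The sketch below illustrates this binning estimator on a scalar surrogate process with known coefficients; the process and all its parameters are hypothetical stand-ins, not system (32):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, steps = 1e-3, 400_000

# Surrogate averaged dynamics dH = kappa*(theta - H) dt + sig*sqrt(H) dW,
# chosen (hypothetically) so the exact drift/diffusion are known for comparison.
kappa, theta, sig = 0.5, 0.5, 0.3
H = np.empty(steps); H[0] = 0.5
dW = rng.normal(0.0, np.sqrt(dt), steps - 1)
for i in range(steps - 1):
    H[i + 1] = abs(H[i] + kappa * (theta - H[i]) * dt + sig * np.sqrt(H[i]) * dW[i])

# Conditional moments per bin of H:
#   drift  m(H) ~ E[dH | H] / dt,   diffusion b(H) ~ E[(dH)^2 | H] / dt
dH = np.diff(H)
edges = np.linspace(0.2, 0.8, 7)
idx = np.digitize(H[:-1], edges)
pairs = {}                                    # bin center -> (m_hat, b_hat)
for k in range(1, len(edges)):
    sel = idx == k
    if sel.sum() > 5000:
        Hc = float(H[:-1][sel].mean())
        pairs[round(Hc, 2)] = (float(dH[sel].mean() / dt),
                               float((dH[sel] ** 2).mean() / dt))

for Hc, (m_hat, b_hat) in sorted(pairs.items()):
    print(f"H={Hc:.2f}  m_hat={m_hat:+.3f} (exact {kappa*(theta-Hc):+.3f})  "
          f"b_hat={b_hat:.3f} (exact {sig**2*Hc:.3f})")
```

Pairs \((H, m\_hat)\) and \((H, b\_hat)\) of this kind are exactly the training inputs and targets fed to the BPNN of Sect. 3.2.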
For the exact expressions of drift and diffusion coefficients in Eq. (34), they are
where
It is seen that analytically calculating the drift and diffusion coefficients in Eq. (36) is difficult, and even their numerical calculation is difficult. Therefore, the data-driven method based on the BPNN described in Sect. 3.2 is used to obtain these coefficients.
To demonstrate the effectiveness of the data-driven method, Fig. 4 takes the coefficients \(\overline{m}_{1} ({\mathbf{H}})\) and \(b_{11}^{{}} ({\mathbf{H}})\) as examples, showing results from three different sources: deep learning of Eq. (34), numerical calculation of Eq. (36), and Monte Carlo simulation of the original system (32). The deep learning results agree very well with the other two when \(H_{1}\) is small; for large \(H_{1}\), however, the deviation becomes large. For the drift coefficient \(\overline{m}_{1} ({\mathbf{H}})\), the relative error with respect to the exact solution is 6%, 35%, and undefined (NaN) at \(H_{1} = 0.5\), \(H_{1} = 1\), and \(H_{1} = 3\), respectively; for the diffusion coefficient \(b_{11}^{{}} ({\mathbf{H}})\), the relative error is 8%, 10%, and 14% at the same \(H_{1}\) values. Since this paper only requires linearization of the two coefficients for stability analysis, the accuracy at small H satisfies the requirements of the stability analysis.
For the study of stochastic stability, a large number of H values are concentrated near the equilibrium point (zero point), so the amount of training data at small H is much larger than at large \(H\). Given this property of the system, training data concentrated at small H values are more conducive to analyzing system stability.
Then, linearizing Eq. (34) at \(H = 0\) yields
The coefficients \(F_{1} (H_{1} )\), \(F_{2} (H_{2} )\), \(G_{11} (H_{1} )\), \(G_{22} (H_{2} )\) in Eq. (38) can be easily obtained from the averaged SDEs with determined coefficient values.
Using the method mentioned in Sect. 4, the SDEs of \(\rho\) and \(\alpha_{1}\) can be obtained
where
Solving the FPK equation corresponding to the Itô equation of \(\alpha_{1} (t)\) yields the stationary probability density \(p(\alpha_{1} )\), which can be written as
Then, the largest Lyapunov exponent is calculated as
Figures 5 and 6 show the stability boundaries of system (32) in the coefficient plane \((\beta_{11} ,\beta_{22} )\) with system parameters \(\beta_{12} = 0.2\), \(\beta_{21} = 0.2\), \(\omega_{1}^{{}} = 1.414\), \(\omega_{2}^{{}} = 1\), \(\alpha_{1} = 1\), \(\alpha_{2} = 0.6\). The analytical results show good agreement with the corresponding simulated results from the original system (32). Figure 5 illustrates that the stable region grows as the stochastic excitation intensity D decreases. Figure 6 demonstrates that, for the same excitation intensity, the stability region under fGn shrinks as the Hurst index \({\mathcal{H}}\) approaches 1/2. This implies that fGn excitation is weaker than Gaussian white noise excitation of the same intensity. Additionally, it confirms that as the Hurst index \({\mathcal{H}}\) increases, the randomness of fGn gradually weakens (see Fig. 1). Figure 7 illustrates that the largest Lyapunov exponent calculated from Eq. (42) agrees well with the simulated results from the original system (32) over a wide range of the Hurst index \({\mathcal{H}}\), provided that the natural frequencies \(\omega_{1}^{{}}\), \(\omega_{2}^{{}}\) of the system are larger than a certain value, e.g., larger than 0.6, which is consistent with the findings in the literature [19].
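The largest Lyapunov exponent of Eq. (42) is a one-dimensional integral of the form \(\lambda_{\max } = \int_{0}^{1} {Q(\alpha_{1} )p(\alpha_{1} ){\text{d}}\alpha_{1} }\), a drift-related function weighted by the stationary density. A minimal quadrature sketch, with a hypothetical \(Q\) and an unnormalized density \(p\) standing in for Eqs. (40) and (41):

```python
import numpy as np

def largest_lyapunov(Q, p_unnorm, n=2000):
    """lambda_max = integral of Q(a) * p(a) over (0, 1) by the midpoint rule,
    with p normalized numerically so that it integrates to 1."""
    a = (np.arange(n) + 0.5) / n
    w = p_unnorm(a)
    p = w / (w.sum() / n)            # enforce  ∫ p da = 1
    return float(np.sum(Q(a) * p) / n)

# Hypothetical coefficient functions for illustration only (not Eqs. (40)-(41))
Q = lambda a: -0.2 + 0.3 * a * (1 - a)      # drift-related integrand
p_unnorm = lambda a: a * (1 - a)            # Beta(2,2)-shaped stationary density
lam = largest_lyapunov(Q, p_unnorm)
print(lam)   # ≈ -0.14 < 0: asymptotically stable w.p.1 for this toy choice
```

Once the BPNN supplies the linearized coefficients, evaluating the stability boundary amounts to repeating this quadrature over a grid of system parameters and locating the \(\lambda_{\max } = 0\) contour.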
5.2 Example 2
To demonstrate the applicability of the proposed procedure to MDOF systems, consider a general 4-DOF system with coupled nonlinear damping under parametric excitation of fGn, whose equations can be expressed as
where \(\beta_{i} ({\mathbf{X}},{\dot{\mathbf{X}}})\) are the nonlinear coupling damping; \(W_{i}^{H} (t)\) are independent unit fGn with Hurst index \({\mathcal{H}}\) and with PSD in Eq. (4).
Letting \(Q_{i} = X_{i} ,\;P_{i} = \dot{X}_{i}\), the original system (43) can be transformed into a quasi-integrable Hamiltonian system of the form of Eq. (1), and the associated Hamiltonian is
By applying the stochastic averaging method introduced in Sect. 3.1, the following averaged SDEs governing Hamiltonian \(H_{i} (t)\) can be obtained
The drift and diffusion coefficient functions in Eq. (45) can be obtained by the stochastic averaging in Eq. (12). As in Example 1, both the data-driven stochastic averaging and the exact expressions can be employed. For the data-driven approach, take the drift and diffusion coefficients \(\overline{m}_{1} ({\mathbf{H}})\) and \(b_{11}^{{}} ({\mathbf{H}})\) as examples,
The motion-state data \(({\mathbf{q}},{\mathbf{p}})\) are simulated from the original system (43). The training data for \(\overline{m}_{1} ({\mathbf{H}}),b_{11}^{{}} ({\mathbf{H}})\) are then determined by computing statistics of these data according to Eq. (46).
For the exact expressions, the coefficients \(m_{i} ({\mathbf{H}})\) and \(\sigma_{i} ({\mathbf{H}})\) are
where \(D_{i} = \pi S(\omega_{i} )\) and \(\left\langle \cdot \right\rangle_{t} = \frac{1}{{(2\pi )^{n} }}\int_{0}^{2\pi } { \cdots \int_{0}^{2\pi } {( \cdot ){\text{d}}\Phi_{1} \cdots {\text{d}}\Phi_{n} } }\).
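The phase-averaging operation \(\left\langle \cdot \right\rangle_{t}\) above is an n-fold integral over the torus \([0,2\pi ]^{n}\), which is convenient to evaluate by Monte Carlo when n is large. A sketch with a hypothetical integrand whose exact average over the 2-torus is 1/4:

```python
import numpy as np

rng = np.random.default_rng(3)

def phase_average(g, n_dof, n_samples=200_000):
    """<g> = (2*pi)^{-n} * integral of g over [0, 2*pi]^n, by Monte Carlo
    with phases drawn uniformly on the torus."""
    Phi = rng.uniform(0.0, 2.0 * np.pi, (n_samples, n_dof))
    return float(np.mean(g(Phi)))

# Illustrative integrand: <cos^2(Phi1) * sin^2(Phi2)> = (1/2)*(1/2) = 1/4
val = phase_average(lambda P: np.cos(P[:, 0]) ** 2 * np.sin(P[:, 1]) ** 2, 2)
print(val)   # ≈ 0.25
```

For the 4-DOF example, n_dof = 4 and g would be the bracketed combination of damping and excitation terms appearing in Eq. (47).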
Then, linearizing Eq. (47) at \(H = 0\) yields
where \({\mathbf{A}} = [a_{1} ,a_{2} , \ldots ,a_{n} ],\;{\mathbf{H}} = [H_{1} ,H_{2} , \ldots ,H_{n} ]^{{\text{T}}}\).
The strategy proposed in this paper is to directly obtain the coefficient values of averaged SDEs from the training data of the original system (43) using data-driven method based on BPNN described in Sect. 3.2. Linearize the averaged SDEs and then the largest Lyapunov exponent can be calculated with the method presented in Sect. 4.
The coupled damping \(\beta_{i} ({\mathbf{X}},{\dot{\mathbf{X}}})\) is exceptionally complex for general structural systems and cannot be expressed explicitly. For convenience in demonstrating the accuracy of the deep learning algorithm of Sect. 3.2, we adopt the following damping coupling form
As an illustration of the physical background of the original system (43), Fig. 8 shows a diagram of a 4-DOF mass–spring system with the coupled damping of Eq. (49). Substituting Eq. (49) into Eq. (47) yields the coefficients \(m_{i} ({\mathbf{H}})\) and \(b_{ii} ({\mathbf{H}})\)
Then, the linearized coefficients \({\mathbf{A}} = [a_{1} ,a_{2} , \ldots ,a_{n} ],\;{\mathbf{H}} = [H_{1} ,H_{2} , \ldots ,H_{n} ]^{{\text{T}}}\) in Eq. (48) can be calculated as follows
To demonstrate the effectiveness of the deep learning algorithm in calculating the drift and diffusion coefficients of the averaged SDEs, Figs. 9 and 10 give the coefficients of Eq. (46) obtained in three different ways: training data from the original system (43), the exact expressions in Eq. (50), and deep learning. Similar to Example 1, the deep learning results are in good agreement with the exact solution, especially for small \(H_{1} ,H_{2}\); the error increases as \(H_{1} ,H_{2}\) increase. Figure 11 shows the relative error of the two coefficients between the deep learning results and the exact solutions. The relative error gradually increases because the amount of training data at small \(H_{1} ,H_{2}\) is much larger than at large \(H_{1} ,H_{2}\): as the system tends to stability, fewer samples fall at larger \(H_{1} ,H_{2}\). Consistent with the theoretical linearization, the results in Figs. 9 and 10 near the equilibrium position (zero point) agree well. At small H values, the accuracy of the deep learning algorithm is sufficient for the study of the stochastic stability of the system.
Meanwhile, the efficiency of the deep learning algorithm is much higher than that of numerical simulation. The data-driven method requires time for collecting training data and for the neural network to learn, but stability evaluation takes almost no time once the network is trained. For Example 2, 5000 training epochs on 10,000 samples take 38 s in Python on a computer with an AMD Ryzen 7 4800H CPU, whereas simulating 10,000 samples takes about 26 min in Matlab on the same computer.
Based on the linearized averaged SDEs (48), the largest Lyapunov exponent can be obtained using the method described in Sect. 4, and the asymptotic Lyapunov stability with probability one of system (43) is then determined. Figures 12 and 13 show the stability boundaries in the parameter planes \(\left( {\eta_{1} ,\eta_{2} } \right)\) and \(\left( {\eta_{1} ,\gamma_{2} } \right)\) for different Hurst indices \({\mathcal{H}}\) and noise intensities, respectively. Figure 14 shows the largest Lyapunov exponent for different noise intensities. The system parameters are \(\eta_{3} = \eta_{4} = 0.4\), \(\gamma_{3} = \gamma_{4} = 0.6\), \(\omega_{1}^{{}} = 1\), \(\omega_{2}^{{}} = 1.732\), \(\omega_{3}^{{}} = 1.414\), \(\omega_{4}^{{}} = 2.236\), \(D_{i} = 0.7\;(i = 1,2,3,4)\). The results show that the stable region is consistent with the Monte Carlo simulation results from the original system, verifying the effectiveness of the proposed procedure.
6 Conclusion
The non-Markovian nature of a dynamical system under fGn excitation makes the study of its stochastic stability extremely challenging. In the present paper, based on the observation that the PSD of fGn is quite flat at larger frequencies, fGn is regarded approximately as a wide-band process. The deep learning-based stochastic averaging method for quasi-integrable Hamiltonian systems under wide-band random excitation is then applied to obtain an approximate expression for the largest Lyapunov exponent and to determine the asymptotic Lyapunov stability with probability one of quasi-integrable and non-resonant Hamiltonian systems under parametric excitations of fGn. The results of two examples demonstrate the effectiveness of the proposed procedure under the condition that the natural frequencies of the system lie in the frequency range where fGn can be treated approximately as wideband noise.
Data availability
The data that support the findings of this study are available within the article.
References
H. Verdejo, L. Vargas, W. Kliemann, Appl. Math. Comput. 218, 11021–11032 (2012)
M.D. Pandey, S.T. Ariaratnam, Struct. Saf. 20, 379–389 (1998)
J. Náprstek, J. Wind Eng. Ind. Aerodyn. 89, 1499–1512 (2001)
Ç. Uyulan, M. Gokasan, S. Bogosyan, Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 232, 2787–2802 (2018)
W.Q. Zhu, Z.L. Huang, J. Sound Vib. 218, 769–789 (1998)
W.Q. Zhu, Z.L. Huang, J. Appl. Mech. 66, 211–217 (1999)
W.Q. Zhu, Z.L. Huang, Nonlinear Dyn. 33, 209–224 (2003)
W. Liu, W. Zhu, Int. J. Nonlinear Mech. 67, 52–62 (2014)
W. Liu, W. Zhu, W. Jia, Int. J. Nonlinear Mech. 58, 191–198 (2014)
W.-Y. Liu, W.-Q. Zhu, W. Xu, Probab. Eng. Mech. 32, 39–47 (2013)
L. Chen, Q. Lou, Z. Li, W. Zhu, Sci. China Phys. Mech. Astron. 55, 2284–2289 (2012)
L.C. Chen, W.Q. Zhu, Acta Mech. 207, 109–120 (2009)
F. Biagini, Stochastic Calculus for Fractional Brownian Motion and Applications (Probability and its Applications) (Springer, London, 2008)
T. Kaarakka, P. Salminen, COSA 5 (2011)
M. Grigoriu, Probab. Eng. Mech. 22, 276–284 (2007)
Y. Xu, R. Guo, W. Xu, Theor. Appl. Mech. Lett. 4, 13003 (2014)
M.L. Deng, Q.F. Lü, W.Q. Zhu, Int. J. Nonlinear Mech. 98, 43–50 (2018)
Q.F. Lü, M.L. Deng, W.Q. Zhu, J. Appl. Mech. (2017). https://doi.org/10.1115/1.4037409
Q. Lu, W. Zhu, M. Deng, IEEE Access 8, 72372–72380 (2020)
Q.F. Lü, W.Q. Zhu, M.L. Deng, J. Vib. Acoust. (2022). https://doi.org/10.1115/1.4051480
A. Mumuni, F. Mumuni, J. Inf. Intell. (2024)
K.W. Ng, Y.F. Huang, C.H. Koo, K.L. Chong, A. El-Shafie, A. Najah Ahmed, J. Hydrol. 625, 130141 (2023)
G. Petmezas, V.E. Papageorgiou, V. Vassilikos, E. Pagourelias, G. Tsaklidis, A.K. Katsaggelos, N. Maglaveras, Comput. Biol. Med. 176, 108557 (2024)
J. Feng, X. Wang, Q. Liu, Y. Li, Y. Xu, Commun. Nonlinear Sci. Numer. Simul. 127, 107589 (2023)
D.E. Rumelhart, G.E. Hinton, R.J. Williams, Nature 323, 533–536 (1986)
R.Z. Khas’minskii, Theory Probab. Appl. 12, 144–147 (1967)
B.B. Mandelbrot, J.W. van Ness, SIAM Rev. 10, 422–437 (1968)
M.L. Deng, W.Q. Zhu, J. Sound Vib. 305, 783–794 (2007)
R.L. Stratonovich, Topics in the Theory of Random Noise, vol. 1 (Gordon Breach, New York, 1967)
R.Z. Khasminskij, Kybernetika (Praha) 3, 260–279 (1968)
V.I. Oseledec, Trans. Mosc. Math. Soc. 19, 197–231 (1968)
Acknowledgements
The work reported in this paper was supported by the Zhejiang Provincial Natural Science Foundation of China (no. LY22A020003) and the National Natural Science Foundation of China (no. 12132013).
Author information
Contributions
All the authors have contributed equally to this manuscript.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Lü, Q., Zhu, W. & Deng, M. Deep learning-based stability of quasi-integrable and non-resonant Hamiltonian systems driven by fractional Gaussian noise. Eur. Phys. J. Spec. Top. (2024). https://doi.org/10.1140/epjs/s11734-024-01340-x