Physics Letters A

Volume 373, Issue 5, 26 January 2009, Pages 529-535
Improved delay-dependent stability criterion for neural networks with time-varying delays

https://doi.org/10.1016/j.physleta.2008.12.005

Abstract

In this Letter, the problem of stability analysis for neural networks with time-varying delays is considered. By constructing a new Lyapunov functional, a new delay-dependent stability criterion for the network is established in terms of LMIs (linear matrix inequalities), which can be solved easily by various convex optimization algorithms. Two numerical examples are included to show the effectiveness of the proposed criterion.

Introduction

Since neural networks have been successfully applied to various fields such as pattern recognition, associative memories, signal processing, fixed-point computations, and so on, the stability analysis of neural networks has been extensively investigated [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17]. On the other hand, time delays are frequently encountered in neural networks due to the finite switching speed of amplifiers and the inherent communication time of neurons [3], [4], [5]. It is well known that the existence of time delays may cause divergence, oscillation, and even instability. Therefore, considerable effort has been devoted to the stability analysis of neural networks with time delays, since the applications of such networks depend heavily on the dynamic behavior of the equilibrium point [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17].

In deriving a delay-dependent stability criterion for a dynamic system with time delays, the main concern is to enlarge the feasibility region of the criterion, that is, to obtain the maximum allowable delay bound that guarantees stability. In general, a delay-dependent criterion is less conservative than a delay-independent one when the delays are small. Therefore, the choice of Lyapunov–Krasovskii functional and the technique used to bound the time derivative of that functional play key roles in increasing the maximum allowable delay bound. In this regard, Hua et al. [15] proposed a new Lyapunov–Krasovskii functional and employed free-weighting matrices to enlarge the delay bounds for neural networks with time-varying delays. Recently, Kwon et al. [16] investigated a robust stability criterion for neural networks with interval time-varying delays. Very recently, Chen and Wu [17] presented an improved stability criterion by constructing augmented Lyapunov functionals and introducing a new technique for estimating the upper bound of their derivatives.
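The notion of a maximum allowable delay bound can be made concrete on the simplest possible case. For the scalar system x'(t) = -a x(t) + b x(t - h) the exact bound is known in closed form, so a short script can compute it directly. This is an illustration of the concept only, not the LMI method of this Letter:

```python
import numpy as np

def delay_margin(a, b):
    """Exact maximum allowable delay for x'(t) = -a*x(t) + b*x(t - h).

    For a >= 0 and b < -a the system is stable for h < h* and loses
    stability at h = h*; for |b| <= a it is stable for every delay
    (delay-independent stability).  Delay-dependent LMI criteria aim to
    approach this kind of bound for coupled systems, where no closed
    form exists.
    """
    if abs(b) <= a:
        return np.inf                 # delay-independent stability
    w = np.sqrt(b * b - a * a)        # frequency of the imaginary-axis crossing
    return np.arccos(a / b) / w       # first delay with a root s = i*w of s + a = b*exp(-s*h)

print(delay_margin(0.0, -1.0))  # pi/2: the classical bound for x'(t) = -x(t - h)
print(delay_margin(1.0, -2.0))
```

The first case recovers the textbook delay margin pi/2 of x'(t) = -x(t - h), which is a quick sanity check on the formula.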

In this Letter, we revisit the problem of delay-dependent stability analysis for neural networks with time-varying delays. By constructing a new Lyapunov–Krasovskii functional that fractions the delay interval, and by employing different free-weighting matrices in the upper bounds of the integral terms, a novel delay-dependent stability criterion is derived in terms of LMIs, which can be solved efficiently by well-known interior-point algorithms [18]. Two numerical examples show that our results are less conservative than those in the existing literature.
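For the delay-free part of such a network, the LMI machinery collapses to a classical Lyapunov equation, which gives a dependency-light way to see what "solved by convex optimization" means in practice. The sketch below uses scipy's equality solver rather than a full interior-point SDP solver (such as those wrapped by cvxpy, which general delay-dependent LMIs require); the matrix A is illustrative:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical delay-free part y' = -A y of a two-neuron network.
A = np.diag([2.0, 2.0])

# The Lyapunov LMI  (-A)^T P + P (-A) < 0,  P > 0  can be settled by
# solving the equality version with right-hand side -I and checking
# that the solution P is positive definite.
P = solve_continuous_lyapunov(-A, -np.eye(2))
assert np.all(np.linalg.eigvalsh(P) > 0), "-A is not Hurwitz"
print(P)  # 0.25 * I for this diagonal A
```

Feasibility of the LMI (a positive definite P) certifies that -A is Hurwitz; the delay-dependent criteria of this Letter generalize exactly this feasibility test.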

Notation

R^n is the n-dimensional Euclidean space, and R^{m×n} denotes the set of m×n real matrices. ‖·‖ refers to the Euclidean vector norm and the induced matrix norm. For symmetric matrices X and Y, the notation X > Y (respectively, X ≥ Y) means that the matrix X − Y is positive definite (respectively, positive semidefinite). diag{···} denotes a block diagonal matrix. The symbol ⋆ represents the elements below the main diagonal of a symmetric matrix, which are determined by symmetry.
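Numerically, the ordering X > Y is checked through the eigenvalues of the symmetric difference X − Y. A minimal sketch, with matrices chosen arbitrarily for illustration:

```python
import numpy as np

def is_pd(M, tol=0.0):
    """X > Y in the Letter's notation means X - Y is positive definite;
    numerically, all eigenvalues of the symmetric matrix M = X - Y
    must exceed tol."""
    return bool(np.all(np.linalg.eigvalsh(M) > tol))

X = np.array([[3.0, 1.0],
              [1.0, 2.0]])
Y = np.eye(2)
print(is_pd(X - Y))   # True:  X > Y
print(is_pd(Y - X))   # False: Y - X is negative definite
```

Using `eigvalsh` (for symmetric matrices) rather than a generic eigensolver keeps the test exact up to floating-point rounding.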

Problem statements

Consider the following neural network with discrete time-varying delays:

ẏ(t) = −A y(t) + W_0 g(y(t)) + W_1 g(y(t − h(t))) + b,

where y(t) = [y_1(t), …, y_n(t)]^T ∈ R^n is the neuron state vector, n denotes the number of neurons in the network, g(y(t)) = [g_1(y_1(t)), …, g_n(y_n(t))]^T ∈ R^n denotes the neuron activation function, g(y(t − h(t))) = [g_1(y_1(t − h(t))), …, g_n(y_n(t − h(t)))]^T ∈ R^n, A = diag{a_i} ∈ R^{n×n} is a positive diagonal matrix, and W_0 = (w⁰_ij) ∈ R^{n×n} and W_1 = (w¹_ij) ∈ R^{n×n} are the interconnection matrices representing …
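The model above can be simulated directly with a forward-Euler scheme and a history buffer for the delayed state. The sketch below uses a constant delay, tanh activations, and illustrative parameters (not taken from the Letter's examples); weights are chosen small enough relative to A that the trajectory settles at the origin:

```python
import numpy as np

def simulate(A, W0, W1, b, h, y0, T=20.0, dt=1e-3):
    """Forward-Euler simulation of
        y'(t) = -A y(t) + W0 g(y(t)) + W1 g(y(t - h)) + b
    with constant delay h and g = tanh.  The history on [-h, 0] is
    held constant at y0."""
    n_hist = int(round(h / dt))              # delay expressed in steps
    steps = int(round(T / dt))
    y = np.tile(np.asarray(y0, float), (steps + n_hist + 1, 1))
    g = np.tanh                              # sector-bounded activation
    for k in range(n_hist, n_hist + steps):
        yd = y[k - n_hist]                   # delayed state y(t - h)
        y[k + 1] = y[k] + dt * (-A @ y[k] + W0 @ g(y[k]) + W1 @ g(yd) + b)
    return y[n_hist:]

A = np.diag([2.0, 2.0])
W0 = np.array([[0.2, -0.1], [0.1, 0.2]])
W1 = np.array([[0.3, 0.0], [0.0, 0.3]])
traj = simulate(A, W0, W1, b=np.zeros(2), h=0.5, y0=[1.0, -1.0])
print(np.linalg.norm(traj[-1]))  # near zero: the equilibrium is reached
```

With b = 0 the origin is an equilibrium (g(0) = 0), and since A dominates the weight norms here, convergence is guaranteed regardless of the delay; the delay-dependent criteria of this Letter matter precisely when no such crude norm argument applies.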

Main results

In this section, a new delay-dependent stability criterion for neural networks with time-varying delays (4) will be derived. Before introducing the main results, several matrices are defined for simplicity:

Σ = [Σ_(i,j)] (i, j = 1, …, 7),
Σ_(1,1) = G_11 + N_11 − P_1 A − A^T P_1^T,
Σ_(1,2) = −(1 − h_D) N_11,
Σ_(1,3) = G_12,  Σ_(1,4) = 0,
Σ_(1,5) = R_1 − P_1 − A^T P_2^T,
Σ_(1,6) = N_12 + L H_1 + P_1 W_0,
Σ_(1,7) = P_1 W_1,
Σ_(2,2) = 0,  Σ_(2,3) = 0,  Σ_(2,4) = 0,  Σ_(2,5) = 0,  Σ_(2,6) = 0,
Σ_(2,7) = −(1 − h_D) N_12 + L H_2,
Σ_(3,3) = G_22 − G_11,  Σ_(3,4) = −G_12,  Σ_(3,5) = 0,  Σ_(3,6) = 0,  Σ_(3,7) = 0,
Σ_(4,4) = −G_22,  Σ_(4,5) = 0,  Σ_(4, …
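Since Σ is symmetric, the ⋆ convention lets it be specified by its upper-triangular blocks only; reconstructing the full matrix is mechanical. A small sketch with scalar entries (the values are arbitrary, not the Letter's Σ):

```python
import numpy as np

def from_upper(upper):
    """Rebuild a symmetric matrix from its upper-triangular entries,
    i.e. resolve the Letter's '*' placeholders by mirroring across
    the main diagonal.  upper[i] lists the entries for j = i, ..., n-1."""
    n = len(upper)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            M[i, j] = upper[i][j - i]
            M[j, i] = M[i, j]          # the '*' entry: transpose mirror
    return M

S = from_upper([[2.0, -1.0, 0.0],
                [3.0, 0.5],
                [1.0]])
print(np.allclose(S, S.T))  # True: S is symmetric by construction
```

Once assembled, the criterion Σ < 0 is the kind of negative-definiteness test an LMI solver checks: all eigenvalues of the full symmetric matrix must be negative.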

Numerical examples

Example 1

Consider the neural network (4) with time-varying delays and the parameters

A = [2 0; 0 2],  W_0 = [1 1; −1 −1],  W_1 = [0.88 1; 1 1],  L = diag{0.4, 0.8}.

To the best of the authors' knowledge, the best previously reported delay bounds [17] guaranteeing stability of this system when h_D is 0.8, 0.9, and unknown are 2.3534, 1.6050, and 1.5103, respectively. However, by applying Theorem 1, the delay bounds for h_D equal to 0.8, 0.9, and unknown are 2.8854, 1.9631, and 1.7810, respectively. The improvement over the existing best result is …
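The reported bounds can be turned into relative improvements with a line of arithmetic. The percentages below are computed from the figures quoted above, not taken from the Letter:

```python
# Maximum allowable delay bounds for Example 1 (values from the text).
best_prev = {"hD = 0.8": 2.3534, "hD = 0.9": 1.6050, "hD unknown": 1.5103}  # ref. [17]
theorem_1 = {"hD = 0.8": 2.8854, "hD = 0.9": 1.9631, "hD unknown": 1.7810}

for case, prev in best_prev.items():
    gain = 100.0 * (theorem_1[case] - prev) / prev   # relative improvement
    print(f"{case}: {prev} -> {theorem_1[case]} (+{gain:.1f}%)")
```

The gains are roughly 22.6%, 22.3%, and 17.9% for the three cases, which is the sense in which Theorem 1 is less conservative than [17] on this benchmark.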

Conclusion

In this Letter, a new delay-dependent stability criterion for neural networks with time-varying delays has been proposed. To obtain a less conservative criterion for neural network (1), a new Lyapunov–Krasovskii functional that fractions the delay interval has been constructed. A technique that bounds the integral terms with different free-weighting matrices on different delay subintervals is used to reduce the conservatism of the criterion. Numerical examples have been given to demonstrate the effectiveness of the proposed criterion.

References (18)

  • M. Ramesh et al.

    Chaos Solitons Fractals

    (2001)
  • B. Cannas et al.

    Chaos Solitons Fractals

    (2001)
  • K. Otawara et al.

    Chaos Solitons Fractals

    (2002)
  • S. Arik

    Phys. Lett. A

    (2003)
  • S. Xu et al.

    Phys. Lett. A

    (2004)
  • S. Xu et al.

    Phys. Lett. A

    (2005)
  • J.H. Park

    Appl. Math. Comput.

    (2006)
  • O.M. Kwon et al.

    J. Franklin Inst.

    (2008)
  • C.C. Hua et al.

    Phys. Lett. A

    (2006)

Cited by (85)

  • Stability analysis of delayed neural networks via a new integral inequality

    2017, Neural Networks
Citation excerpt:

    As known from Lyapunov stability theory, there are two effective ways to reduce the conservativeness in stability analysis. One is the suitable choice of LKFs and the other is estimating the bound on its time derivative (He et al., 2007; He, Wang, & Wu, 2005; He et al., 2006; He, Wu, She, & Liu, 2004; Hu et al., 2008; Kim, 2011; Kwon & Park, 2009; Kwon, Park, Lee, & Cha, 2013, 2014; Kwon, Park, Lee, Park et al., 2013; Li et al., 2010; Li, Wang, Song, & Fei, 2013; Park, Ko, & Jeong, 2011; Seuret & Gouaisbaut, 2013; Skelton, Iwasaki, & Grigoradis, 1997; Tian & Xie, 2010; Tian & Zhong, 2009; Wang, Zhang, Fei, & Li, 2013; Yang, Wang, Shi, & Dimirovski, 2015; Zhang et al., 2014; Zhang, Yang, & Liu, 2013; Zhang, Yue, & Tian, 2009). Methods for constructing a delicate LKF include augmented vectors, triple integral terms, delay-partitioning ideas and involve more information of activation functions.
