Hindawi Publishing Corporation, Abstract and Applied Analysis, Volume 2014, Article ID 505164, 17 pages, http://dx.doi.org/10.1155/2014/505164

Research Article

Exponential Synchronization for Stochastic Neural Networks with Mixed Time Delays and Markovian Jump Parameters via Sampled Data

Yingwei Li¹ and Xueqing Guo²

¹ School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, China
² Department of Applied Mathematics, Yanshan University, Qinhuangdao 066004, China

Correspondence should be addressed to Xueqing Guo; [email protected]

Received 17 October 2013; Accepted 31 December 2013; Published 5 March 2014

Academic Editor: Chuanzhi Bai

Copyright © 2014 Y. Li and X. Guo. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The exponential synchronization issue for stochastic neural networks (SNNs) with mixed time delays and Markovian jump parameters using a sampled-data controller is investigated. Based on a novel Lyapunov-Krasovskii functional, stochastic analysis theory, and the linear matrix inequality (LMI) approach, we derive some novel sufficient conditions that guarantee that the master systems exponentially synchronize with the slave systems. The design method of the desired sampled-data controller is also proposed. To reflect more realistic dynamical behaviors of the system, both Markovian jump parameters and stochastic disturbances are considered, where the stochastic disturbances are given in the form of a Brownian motion. The results obtained in this paper are less conservative than previous results in the literature. Finally, two numerical examples are given to illustrate the effectiveness of the proposed methods.

1. Introduction

Neural networks, such as Hopfield neural networks, cellular neural networks, Cohen-Grossberg neural networks, and bidirectional associative neural networks, are very important nonlinear circuit networks and, in the past few decades, have been extensively studied due to their potential applications in classification, signal and image processing, parallel computing, associative memories, optimization, cryptography, and so forth; see [1–7]. Many results dealing with the dynamics of various neural networks, such as stability, periodic oscillation, bifurcation, and chaos, have been obtained by applying the Lyapunov stability theory; see, for example, [8–10] and the references therein. As a special case, synchronization issues of neural network systems have been extensively investigated too, and many criteria have been developed to guarantee the global synchronization of such network systems in [11–17].

It has been widely reported that a neural network sometimes has finitely many modes that switch from one to another at different times; such a switching (jumping) signal between different neural network models can be governed by a Markovian chain; see [18–25] and the references therein. This class of systems has the advantage of modeling dynamic systems subject to abrupt variations in their structures and has many applications, such as target tracking problems, manufacturing processes, and fault-tolerant systems. In [24], delay-interval-dependent stability criteria are obtained for neural networks with Markovian jump parameters and time-varying delays, based on the free-weighting matrix method and the LMI technique. In [25], by introducing some free-weighting matrices, delay-dependent stochastic exponential synchronization conditions are derived for chaotic neural networks with Markovian jump parameters and mixed time delays in terms of the Jensen inequality and linear matrix inequalities.

It is well known that noise disturbance widely exists in biological networks due to environmental uncertainties; it is a major source of instability and can lead to poor performance in neural networks. Such systems are described by stochastic differential systems, which have been used

efficiently in modeling many practical problems that arise in the fields of engineering, physics, and science. Therefore, the theory of stochastic differential equations has also attracted much attention in recent years, and many results have been reported in the literature [26–30]. In addition to noise disturbance, time delay is also a major source of instability and poor performance in neural networks; see, for example, [31–35]. Time delays are often encountered in real neural networks, and their existence may cause oscillation or instability, which is harmful to the applications of neural networks. Therefore, the stability analysis of neural networks with time delays has been widely studied in the literature.

On the other hand, with the rapid development of computer hardware, sampled-data control technology has shown superiority over other control approaches because it is difficult to guarantee that the state variables transmitted to controllers are continuous in many real-world applications. In [36], Wu et al. investigated the synchronization problem of neural networks with time-varying delay under sampled-data control in the presence of a constant input delay. In [37], by using a sampled-data controller, the global synchronization of chaotic Lur'e systems is discussed, and sufficient conditions are obtained in terms of effective synchronization linear matrix inequalities by constructing new discontinuous Lyapunov functionals. Wu et al. studied the sampled-data synchronization of Markovian jump neural networks with time-varying delay; some new and useful synchronization conditions in the framework of the input delay approach and the linear matrix inequality technique are derived in [38].

Motivated by the above discussion, in this paper we study the delay-dependent exponential synchronization of neural networks with stochastic perturbation, discrete and distributed time-varying delays, and Markovian jump parameters. It should be mentioned that our results are delay dependent, depending not only on the upper bounds of the time delays but also on their lower bounds. Moreover, the derivatives of the time delays need not be zero or smaller than one, since several free matrices are introduced in our results. By constructing an appropriate Lyapunov-Krasovskii functional based on delay partitioning, several improved delay-dependent criteria are developed to achieve exponential synchronization in mean square in terms of linear matrix inequalities. Two numerical examples are also provided to demonstrate the advantage of the theoretical results.

The rest of this paper is organized as follows. In Section 2, the model of the stochastic neural network with both mixed time delays and Markovian jump parameters under sampled-data control is introduced, together with some definitions and lemmas. Exponential synchronization conditions for neural networks with both Markovian jump parameters and mixed time delays via sampled data are established in Section 3. In Section 4, exponential synchronization is proved for stochastic neural networks with both Markovian jump parameters and mixed time delays under sampled-data control. In Section 5, two illustrative examples are given to demonstrate the validity of the proposed results. Finally, some conclusions are drawn in Section 6.

Notations. Throughout this paper, R denotes the set of real numbers, Rⁿ denotes the n-dimensional Euclidean space, and R^{m×n} denotes the set of all m × n real matrices. For any matrix A, Aᵀ denotes the transpose of A. If A is a real symmetric matrix, A > 0 (A < 0) means that A is positive definite (negative definite). λmin(⋅) and λmax(⋅) represent the minimum and maximum eigenvalues of a real symmetric matrix, respectively. (Ω, F, P) is a complete probability space, where Ω is the sample space, F is the σ-algebra of subsets of the sample space, and P is the probability measure on F. diag{⋯} denotes a block-diagonal matrix and col{⋯} stands for a matrix column with blocks given by the matrices in {⋯}. E{⋅} denotes the expectation operator with respect to some probability measure P. Given column vectors x = (x1, ..., xn)ᵀ, y = (y1, ..., yn)ᵀ ∈ Rⁿ, we write xᵀy = ∑_{i=1}^{n} xi yi, |x| = (|x1|, ..., |xn|)ᵀ, and ‖x‖ = (∑_{i=1}^{n} xi²)^{1/2}. ẋ(t) denotes the derivative of x(t), and ∗ represents the symmetric blocks of a matrix. Matrices, if their dimensions are not explicitly stated, are assumed to have compatible dimensions for algebraic operations.

2. Model Description and Preliminaries

Let {r(t), t ≥ 0} be a right-continuous Markovian chain on the probability space (Ω, F, P) taking values in a finite state space S = {1, 2, ..., s} with generator Υ = (πij)s×s given by

$$P\{r(t+\Delta t)=j \mid r(t)=i\} = P_{ij}(\Delta t) = \begin{cases} \pi_{ij}\Delta t + o(\Delta t), & \text{if } i \neq j, \\ 1 + \pi_{ii}\Delta t + o(\Delta t), & \text{if } i = j, \end{cases} \tag{1}$$

where Δt > 0 and lim_{Δt→0}(o(Δt)/Δt) = 0. Here, πij ≥ 0 (j ≠ i) is the transition rate from i to j if i ≠ j at time t + Δt, and πii is the transition rate from i to i at time t + Δt.

Remark 1. The probability defined in (1) is called a time-homogeneous transition probability, which depends only on the time interval Δt; that is, Pij(Δt) does not depend on the starting point t. Moreover, for the time-homogeneous transition probability defined in (1), the following two properties are satisfied:

$$P_{ij}(\Delta t) \ge 0, \qquad \sum_{j=1}^{s} P_{ij}(\Delta t) = 1. \tag{2}$$
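To make the role of the generator concrete, the following short Python sketch simulates a right-continuous Markov chain with a given generator by sampling exponential holding times and then a jump target. It is a minimal illustration, not part of the paper's method; the two-state generator used is the one from Example 9 below, and the function name and seed are implementation choices.

```python
import numpy as np

def simulate_markov_chain(Upsilon, r0, T, rng):
    """Simulate a right-continuous Markov chain r(t) on [0, T].

    Upsilon is the generator: off-diagonal entries are transition
    rates and each row sums to zero. Returns jump times and modes.
    """
    times, modes = [0.0], [r0]
    t, r = 0.0, r0
    while t < T:
        rate = -Upsilon[r, r]               # total exit rate of mode r
        if rate <= 0:                       # absorbing mode
            break
        t += rng.exponential(1.0 / rate)    # exponential holding time
        if t >= T:
            break
        probs = Upsilon[r].copy()
        probs[r] = 0.0
        probs /= rate                       # jump distribution over j != r
        r = int(rng.choice(len(probs), p=probs))
        times.append(t)
        modes.append(r)
    return np.array(times), np.array(modes)

rng = np.random.default_rng(0)
Upsilon = np.array([[-5.0, 5.0], [5.0, -5.0]])  # generator used in Example 9
jump_times, visited = simulate_markov_chain(Upsilon, 0, 10.0, rng)
print(len(jump_times), "jumps; first few:", jump_times[:5])
```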

Accordingly, for any Δt satisfying the conditions in (2), the matrix P = (Pij(Δt))s×s is called the probability transition matrix of the right-continuous Markovian chain {r(t), t ≥ 0}.

Fix a probability space (Ω, F, P) and consider the neural network with mixed time delays and Markovian jump parameters described by the following differential equation system:

$$\dot{x}(t) = -C(r(t))x(t) + W_0(r(t))g(x(t)) + W_1(r(t))g(x(t-d(t))) + W_2(r(t))\int_{t-\tau(t)}^{t} g(x(s))\,ds + I(t), \tag{3}$$


where x(t) = [x1(t) x2(t) ⋯ xn(t)]ᵀ ∈ Rⁿ is the neuron state vector and xi(t) is the state of the ith neuron at time t; g(x(t)) = [g1(x1(t)) g2(x2(t)) ⋯ gn(xn(t))]ᵀ denotes the neuron activation function; C(r(t)) = diag{c1, c2, ..., cn} is a diagonal matrix with positive entries; W0(r(t)) = (w0ij)n×n, W1(r(t)) = (w1ij)n×n, and W2(r(t)) = (w2ij)n×n are, respectively, the connection weight matrix, the discretely delayed connection weight matrix, and the distributively delayed connection weight matrix; I(t) = [I1(t) I2(t) ⋯ In(t)]ᵀ is an external input vector; d(t) and τ(t) denote the discrete delay and the distributed delay, respectively.

Throughout this paper, we make the following assumptions.

(H1) There exist positive constants d1, d2, μ, and τ such that

$$d_1 \le d(t) \le d_2, \qquad \dot{d}(t) \le \mu, \qquad 0 \le \tau(t) \le \tau. \tag{4}$$

(H2) Each activation function gi in (3) is continuous and bounded, and there exist constants Fi⁻ and Fi⁺ such that

$$F_i^- \le \frac{g_i(\alpha_1)-g_i(\alpha_2)}{\alpha_1-\alpha_2} \le F_i^+, \qquad i = 1, 2, \ldots, n, \tag{5}$$

where α1, α2 ∈ R and α1 ≠ α2.

Remark 2. In the earlier literature, the activation functions gi (i = 1, 2, ..., n) are supposed to be continuous, differentiable, monotonically increasing, and bounded; moreover, the constants Fi⁻ ≡ 0 for i = 1, 2, ..., n, or Fi⁻ = −Fi⁺ for i = 1, 2, ..., n. In this paper, however, the activation functions need not be monotonically increasing and are more general than the usual Lipschitz-type conditions. Moreover, the constants Fi⁻ and Fi⁺ (i = 1, 2, ..., n) are allowed to be positive, negative, or zero. Hence, Assumption H2 of this paper is weaker than those given in the earlier literature (see, e.g., [39, 40]).

In this paper, we consider system (3) as the master system, and a slave system for (3) can be described by the following equation:

$$\dot{y}(t) = -C(r(t))y(t) + W_0(r(t))g(y(t)) + W_1(r(t))g(y(t-d(t))) + W_2(r(t))\int_{t-\tau(t)}^{t} g(y(s))\,ds + I(t) + u(t), \tag{6}$$

where C(r(t)) and Wi(r(t)) for i = 0, 1, 2 are the matrices given in (3) and u(t) ∈ Rⁿ is the appropriate control input. In order to investigate the problem of exponential synchronization between systems (3) and (6), we define the error signal e(t) = y(t) − x(t). Then the error dynamical system between (3) and (6) is given as follows:

$$\dot{e}(t) = -C(r(t))e(t) + W_0(r(t))f(e(t)) + W_1(r(t))f(e(t-d(t))) + W_2(r(t))\int_{t-\tau(t)}^{t} f(e(s))\,ds + u(t), \tag{7}$$

where f(e(t)) = g(y(t)) − g(x(t)). It can be found that the functions fi(⋅) satisfy the following condition:

$$F_i^- \le \frac{f_i(\alpha)}{\alpha} \le F_i^+, \qquad i = 1, 2, \ldots, n, \tag{8}$$

where α ∈ R and α ≠ 0.

The control signal is assumed to be generated by using a zero-order-hold function with a sequence of hold times 0 = t0 < t1 < ⋯ < tk < ⋯. Therefore, the mode-independent state feedback controller takes the following form:

$$u(t) = Ke(t_k), \qquad t_k \le t < t_{k+1}, \tag{9}$$

where K is a sampled-data feedback controller gain matrix to be determined, e(tk) is a discrete measurement of e(t) at the sampling instant tk, and lim_{k→∞} tk = +∞. It is assumed that tk+1 − tk = hk ≤ h for any integer k ≥ 0, where h is a positive scalar representing the largest sampling interval. By substituting (9) into (7), we obtain

$$\dot{e}(t) = -C(r(t))e(t) + W_0(r(t))f(e(t)) + W_1(r(t))f(e(t-d(t))) + W_2(r(t))\int_{t-\tau(t)}^{t} f(e(s))\,ds + Ke(t_k). \tag{10}$$

For convenience, in the following, each possible value of r(t) is denoted by i, i ∈ S. Then we have Ci = C(r(t)), W0i = W0(r(t)), W1i = W1(r(t)), and W2i = W2(r(t)), where Ci, W0i, W1i, and W2i, for any i ∈ S, are known constant matrices of appropriate dimensions. The system (10) can be written as

$$\dot{e}(t) = -C_i e(t) + W_{0i} f(e(t)) + W_{1i} f(e(t-d(t))) + W_{2i}\int_{t-\tau(t)}^{t} f(e(s))\,ds + Ke(t_k). \tag{11}$$

The first purpose of this paper is to design a controller of the form (9) to achieve the exponential synchronization of the master system (3) and the slave system (6). In other words, we are interested in finding a feedback gain matrix K such that the error system (11) is exponentially stable.
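The zero-order-hold mechanism of (9) is simple to state in code: the error is sampled at tk, the value Ke(tk) is held constant until the next sampling instant, and the continuous dynamics evolve in between. The following minimal Python sketch illustrates only this timing logic on a one-mode, delay-free placeholder system (the matrices, gain, and step size are illustrative assumptions, not system (11) itself).

```python
import numpy as np

C = np.diag([0.8, 0.9])                 # placeholder dynamics, not from the paper
K = np.array([[-2.0, 0.0], [0.0, -1.5]])  # hypothetical gain

dt, T, h = 1e-3, 5.0, 0.1               # Euler step, horizon, sampling bound h
steps_per_sample = round(h / dt)        # Euler steps per sampling period
e = np.array([1.0, -1.0])
u = K @ e
for k in range(int(T / dt)):
    if k % steps_per_sample == 0:       # sampling instant t_k: refresh the hold
        u = K @ e
    e = e + dt * (-C @ e + u)           # between samples, u stays constant
print("||e(T)|| =", np.linalg.norm(e))  # decays when K stabilizes the loop
```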

As mentioned earlier, it is often the case in practice that the neural network is disturbed by environmental noises that affect the stability of the equilibrium. Motivated by this, we consider a stochastic recurrent neural network with mixed time delays:

$$\begin{aligned} dx(t) = {}& \Big\{-C(r(t))x(t) + W_0(r(t))g(x(t)) + W_1(r(t))g(x(t-d(t))) + W_2(r(t))\int_{t-\tau(t)}^{t} g(x(s))\,ds + I(t)\Big\}\,dt \\ &+ \rho(t, x(t), x(t-d(t)), x(t-\tau(t)), r(t))\,d\omega(t), \end{aligned} \tag{12}$$

where ω(t) = (ω1(t), ω2(t), ..., ωn(t))ᵀ is an n-dimensional Brownian motion defined on a complete probability space (Ω, F, P) satisfying E{dω(t)} = 0 and E{dω²(t)} = dt, and ρ(t, x(t), x(t−d(t)), x(t−τ(t)), r(t)) : R⁺ × Rⁿ × Rⁿ × Rⁿ × S → R^{n×m} is the noise intensity function matrix. A slave system for (12) can be described by the following equation:

$$\begin{aligned} dy(t) = {}& \Big\{-C(r(t))y(t) + W_0(r(t))g(y(t)) + W_1(r(t))g(y(t-d(t))) + W_2(r(t))\int_{t-\tau(t)}^{t} g(y(s))\,ds + I(t) + u(t)\Big\}\,dt \\ &+ \rho(t, y(t), y(t-d(t)), y(t-\tau(t)), r(t))\,d\omega(t). \end{aligned} \tag{13}$$

The mode-independent state feedback controller takes the form of (9), and each possible value of r(t) is denoted by i, i ∈ S; then we have the final stochastic error system:

$$\begin{aligned} de(t) = {}& \Big\{-C_i e(t) + W_{0i}f(e(t)) + W_{1i}f(e(t-d(t))) + W_{2i}\int_{t-\tau(t)}^{t} f(e(s))\,ds + Ke(t_k)\Big\}\,dt \\ &+ \rho(t, e(t), e(t-d(t)), e(t-\tau(t)), e(t_k), i)\,d\omega(t). \end{aligned} \tag{14}$$

We impose the following assumption:

(H3) ρ : R⁺ × Rⁿ × Rⁿ × Rⁿ × Rⁿ × S → R^{n×m} is locally Lipschitz continuous and satisfies

$$\operatorname{trace}\big[\rho^T(t, e_1, e_2, e_3, e_4, i)\,\rho(t, e_1, e_2, e_3, e_4, i)\big] \le e_1^TY_{1i}e_1 + e_2^TY_{2i}e_2 + e_3^TY_{3i}e_3 + e_4^TY_{4i}e_4. \tag{15}$$

Our second purpose of this paper is to find a feedback gain matrix K in the controller of the form (9) to ensure that the error system (14) is exponentially stable, so that the master system (12) and slave system (13) are exponentially synchronous. To state our main results, the following definition and lemmas are first introduced, which are essential for the proofs in the sequel.

Definition 3. The master system and the slave system are said to be exponentially synchronous if the error system is exponentially stable; that is, for any initial condition e(s) = φ(s) defined on the interval [−ϖ, 0], ϖ = max{d2, τ, h}, the following condition is satisfied for some constants ζ > 0 and ε > 0:

$$\mathbb{E}\{\|e(t)\|^2\} \le \zeta e^{-\epsilon t}\,\mathbb{E}\Big\{\sup_{-\varpi\le s\le 0}\|\varphi(s)\|^2\Big\}. \tag{16}$$

Lemma 4 (the Jensen inequality, see [41]). For any constant matrix P ∈ R^{m×m}, P > 0, scalar 0 < z(t) < z, and vector function V : [t−z, t] → R^m, t ≥ 0, such that the integrations concerned are well defined,

$$\Big(\int_0^{z(t)} V(s)\,ds\Big)^T P\,\Big(\int_0^{z(t)} V(s)\,ds\Big) \le z(t)\int_0^{z(t)} V^T(s)PV(s)\,ds. \tag{17}$$

Lemma 5 (the Schur complement). Given a positive definite matrix G2 > 0 and constant matrices G1 and G3, where G1 = G1ᵀ, one has G1 + G3ᵀG2⁻¹G3 < 0 if and only if

$$\begin{pmatrix} G_1 & G_3^T \\ G_3 & -G_2 \end{pmatrix} < 0 \quad \text{or} \quad \begin{pmatrix} -G_2 & G_3 \\ G_3^T & G_1 \end{pmatrix} < 0. \tag{18}$$

Lemma 6 (see [42]). For any constant matrix M ∈ R^{k×n}, symmetric positive definite matrix R ∈ R^{n×n}, two functions ν1(t) and ν2(t) satisfying 0 < νm ≤ ν1(t) < ν2(t) ≤ νM (t ≥ 0), and vector function V : [νm, νM] → Rⁿ such that the integrations concerned are well defined, let

$$\int_{\nu_1(t)}^{\nu_2(t)} V(s)\,ds = \widetilde{\xi}^T\phi(t), \tag{19}$$

where ξ̃ ∈ R^{k×n} and φ(t) ∈ R^k. Then the following inequality holds:

$$\phi^T(t)\big[M\widetilde{\xi}^T + \widetilde{\xi}M^T - (\nu_2(t)-\nu_1(t))MR^{-1}M^T\big]\phi(t) \le \int_{\nu_1(t)}^{\nu_2(t)} V^T(s)RV(s)\,ds. \tag{20}$$
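Lemma 5 is the tool used throughout the paper to trade a nonlinear term such as G3ᵀG2⁻¹G3 for a larger linear block matrix. The equivalence can be sanity-checked numerically with randomly generated matrices; the sketch below is purely illustrative (the sizes, seed, and scaling are arbitrary assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
G1 = -(A @ A.T) - 0.5 * np.eye(n)          # symmetric, negative definite candidate
B = rng.standard_normal((n, n))
G2 = B @ B.T + np.eye(n)                   # symmetric positive definite
G3 = 0.1 * rng.standard_normal((n, n))

lhs = G1 + G3.T @ np.linalg.solve(G2, G3)  # G1 + G3' G2^{-1} G3
block = np.block([[G1, G3.T], [G3, -G2]])  # Schur-complement block form

print("max eig of G1 + G3' G2^{-1} G3:", np.linalg.eigvalsh(lhs).max())
print("max eig of block matrix:       ", np.linalg.eigvalsh(block).max())
# Either both maxima are negative or neither is — the two tests agree.
```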

3. Exponential Synchronization for Markovian Jump Neural Networks with Mixed Time Delays via Sampled Data

To present the main results of this section, we denote F1 = diag{F1⁻F1⁺, F2⁻F2⁺, ..., Fn⁻Fn⁺}, F2 = diag{(F1⁻ + F1⁺)/2, (F2⁻ + F2⁺)/2, ..., (Fn⁻ + Fn⁺)/2}, g1 = [I 0], and g2 = [0 I]. Here, some LMI conditions will be developed to ensure that master system (3) and slave system (6) are exponentially synchronous by employing the Lyapunov functionals.

Theorem 7. Under Assumptions H1 and H2, for a given scalar γ, master system (3) and slave system (6) are exponentially synchronous if there exist matrices Pi > 0, Q1 > 0, Q2 > 0, Q3 > 0, Z1 > 0, Z2 > 0, Z3 > 0, U > 0, S, X, X1, G, L, and Hi = [H1i H2i H3i] and diagonal matrices V1i > 0, V2i > 0, V3i > 0, and V4i > 0 such that, for any i ∈ S, inequalities (21)–(24) are satisfied:

$$\begin{pmatrix} Z_2 & S \\ * & Z_2 \end{pmatrix} > 0, \tag{21}$$

$$\Sigma_i(h) = \begin{pmatrix} P_i + h\dfrac{X+X^T}{2} & -hX + hX_1 \\ * & -hX_1 - hX_1^T + h\dfrac{X+X^T}{2} \end{pmatrix} > 0, \tag{22}$$

$$\Sigma_{1i}(h) = \begin{pmatrix} \Sigma_{11i} & \Sigma_{12} & \Sigma_{13i} & 0 & \Sigma_{15i} & \Sigma_{16i} & \Sigma_{17i}+\Lambda_1(h) \\ * & \Sigma_{22i} & \Sigma_{23} & \Sigma_{24} & 0 & 0 & 0 \\ * & * & \Sigma_{33i} & \Sigma_{34} & 0 & 0 & \Sigma_{37i} \\ * & * & * & \Sigma_{44} & 0 & 0 & 0 \\ * & * & * & * & \Sigma_{55} & 0 & \Sigma_{57i} \\ * & * & * & * & * & \Sigma_{66i} & \Sigma_{67i}+\Lambda_2(h) \\ * & * & * & * & * & * & \Sigma_{77}+hU \end{pmatrix} < 0, \tag{23}$$

$$\Sigma_{2i} = \begin{pmatrix} \Sigma_{1i}(0) & h\,\mathrm{col}\{g_1^TH_{1i}^T, 0, 0, 0, 0, H_{2i}^T, H_{3i}^T\} \\ * & -hU \end{pmatrix} < 0, \tag{24}$$

where

Σ17i = g1ᵀPi + g1ᵀH3i − g1ᵀG − γg1ᵀCiᵀGᵀ + γg2ᵀW0iᵀGᵀ,
Σ22i = −Q1 − g1ᵀZ1g1 − g1ᵀZ2g1 − g1ᵀF1V2ig1 + g2ᵀV2iF2g1 + g1ᵀF2V2ig2 − g2ᵀV2ig2,
Σ23 = g1ᵀ(Z2 − S)g1,
Σ33i = −(1 − μ)Q2 + g1ᵀ(−2Z2 + S + Sᵀ)g1 − g1ᵀF1V3ig1 + g2ᵀV3iF2g1 + g1ᵀF2V3ig2 − g2ᵀV3ig2,
Σ34 = g1ᵀ(−S + Z2)g1,
Σ37i = γg2ᵀW1iᵀGᵀ,
Σ44 = −Q3 − g1ᵀZ2g1 − g1ᵀF1V4ig1 + g2ᵀV4iF2g1 + g1ᵀF2V4ig2 − g2ᵀV4ig2,
Σ55 = −Z3,
Σ57i = γW2iᵀGᵀ,
Σ66i = X1 + X1ᵀ − (X + Xᵀ)/2 − H2i − H2iᵀ,
Σ67i = −H3i + γLᵀ,
Σ77 = d1²Z1 + d12²Z2 − γG − γGᵀ.

Moreover, the desired controller gain matrix in (9) is given by K = G⁻¹L.
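Once γ and the sampling bound h are fixed, (21)–(24) are ordinary LMIs in the decision matrices, so any semidefinite-programming toolbox can test them (the paper itself uses the Matlab LMI Control Toolbox). The sketch below only illustrates this workflow on a small stand-in feasibility problem — a classical Lyapunov LMI — since assembling the full block matrices of Theorem 7 is lengthy; the use of cvxpy, the solver choice, and the stand-in system are all assumptions of this illustration, not part of the paper.

```python
import cvxpy as cp
import numpy as np

A = np.array([[-1.0, 0.5], [0.0, -2.0]])     # stand-in stable matrix

P = cp.Variable((2, 2), symmetric=True)      # decision variable, like P_i in (21)-(24)
eps = 1e-6
constraints = [P >> eps * np.eye(2),                       # P > 0
               A.T @ P + P @ A << -eps * np.eye(2)]        # A'P + PA < 0
prob = cp.Problem(cp.Minimize(0), constraints)             # pure feasibility
prob.solve(solver=cp.SCS)
print(prob.status, "\nP =\n", P.value)
```

The maximal admissible sampling bound (0.11 in Example 9) is then found by a scalar search: increase h and re-solve the feasibility problem until the LMIs first become infeasible.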

Proof. For any appropriately dimensioned diagonal matrix V1i > 0, the following inequality holds [45]:

$$\eta^T(t)\begin{pmatrix} -F_1V_{1i} & F_2V_{1i} \\ * & -V_{1i} \end{pmatrix}\eta(t) \ge 0, \tag{49}$$

which implies

$$\eta^T(t)\big(-g_1^TF_1V_{1i}g_1 + g_2^TV_{1i}F_2g_1 + g_1^TF_2V_{1i}g_2 - g_2^TV_{1i}g_2\big)\eta(t) \ge 0. \tag{50}$$

Similarly, for any appropriately dimensioned diagonal matrices V2i > 0, V3i > 0, and V4i > 0, the following inequalities also hold:

$$\begin{aligned} &\eta^T(t-d_1)\big(-g_1^TF_1V_{2i}g_1 + g_2^TV_{2i}F_2g_1 + g_1^TF_2V_{2i}g_2 - g_2^TV_{2i}g_2\big)\eta(t-d_1) \ge 0, \\ &\eta^T(t-d(t))\big(-g_1^TF_1V_{3i}g_1 + g_2^TV_{3i}F_2g_1 + g_1^TF_2V_{3i}g_2 - g_2^TV_{3i}g_2\big)\eta(t-d(t)) \ge 0, \\ &\eta^T(t-d_2)\big(-g_1^TF_1V_{4i}g_1 + g_2^TV_{4i}F_2g_1 + g_1^TF_2V_{4i}g_2 - g_2^TV_{4i}g_2\big)\eta(t-d_2) \ge 0. \end{aligned} \tag{51}$$

Adding the left-hand sides of (46)–(51) to LV(e(t), t, i) and letting L = GK, we have from (29)–(36), (42), and (45) that, for t ∈ [tk, tk+1),

$$\mathcal{L}V(e(t),t,i) \le X^T(t)\left[\frac{t_{k+1}-t}{h_k}\Sigma_{1i}(h_k) + \frac{t-t_k}{h_k}\Gamma_i(h_k)\right]X(t), \tag{52}$$

where

$$\Gamma_i(h_k) = \Sigma_{1i}(0) + h_k\,\mathrm{col}\{g_1^TH_{1i}^T, 0, 0, 0, 0, H_{2i}^T, H_{3i}^T\}\,U^{-1}\,\mathrm{col}\{g_1^TH_{1i}^T, 0, 0, 0, 0, H_{2i}^T, H_{3i}^T\}^T, \tag{53}$$

$$X(t) = \Big[\eta^T(t)\;\; \eta^T(t-d_1)\;\; \eta^T(t-d(t))\;\; \eta^T(t-d_2)\;\; \Big(\int_{t-\tau(t)}^{t} f(e(s))\,ds\Big)^T\;\; e^T(t_k)\;\; \dot{e}^T(t)\Big]^T. \tag{54}$$

According to Lemma 5, (24) is equivalent to

$$\Sigma_{1i}(0) + h\,\mathrm{col}\{g_1^TH_{1i}^T, 0, 0, 0, 0, H_{2i}^T, H_{3i}^T\}\,U^{-1}\,\mathrm{col}\{g_1^TH_{1i}^T, 0, 0, 0, 0, H_{2i}^T, H_{3i}^T\}^T < 0, \tag{55}$$

which implies Σ1i(0) < 0. From (23) and (55), it can be seen that

$$\Sigma_{1i}(h_k) = \frac{h_k}{h}\Sigma_{1i}(h) + \frac{h-h_k}{h}\Sigma_{1i}(0) < 0, \qquad \text{which implies}\quad \Gamma_i(h_k) < 0. \tag{56}$$

Thus, we can show from (52)–(56) that

$$\mathcal{L}V(e(t),t,i) \le -\xi\big(\|e(t)\|^2 + \|e(t-d(t))\|^2 + \|e(t-\tau(t))\|^2 + \|e(t_k)\|^2\big). \tag{57}$$

From the definition of V(e(t), t, r(t)), ė(t), and f(e(t)), there exist positive scalars δ0, δ1, δ2, δ3, and δ4 such that the following inequality holds:

$$V(e(t),t,i) \le \delta_0\|e(t_k)\|^2 + \delta_1\|e(t)\|^2 + \delta_2\int_{t-\varpi}^{t}\|e(s)\|^2ds + \delta_3\int_{t-\varpi}^{t}\|e(s-d(s))\|^2ds + \delta_4\int_{t-\varpi}^{t}\|e(s-\tau(s))\|^2ds. \tag{58}$$

Define a new function Ṽ(e(t), t, r(t)) = e^{εt}V(e(t), t, r(t)), where ε > 0 satisfies ε max{δ0, δ1 + δ2ϖe^{εϖ}, δ3ϖe^{εϖ}, δ4ϖe^{εϖ}} ≤ ξ. It can be found that

$$\widetilde{V}(e(t),t,r(t)) \ge \widetilde{V}_1 + \widetilde{V}_7 = e^{\epsilon t}\begin{pmatrix} e(t) \\ e(t_k) \end{pmatrix}^T\left[\frac{t_{k+1}-t}{h_k}\Sigma_i(h_k) + \frac{t-t_k}{h_k}\Sigma_i(0)\right]\begin{pmatrix} e(t) \\ e(t_k) \end{pmatrix}, \tag{59}$$

where

$$\Sigma_i(h_k) = \frac{h_k}{h}\Sigma_i(h) + \frac{h-h_k}{h}\Sigma_i(0). \tag{60}$$

Due to the fact that Pi > 0 and Σi(h) > 0, we can find a sufficiently small scalar δ > 0 such that Pi > δI and Σi(h) > δI, which implies Ṽ(e(t), t, r(t)) > δe^{εt}‖e(t)‖² > 0. On the other hand, from (57) and (58) we have

$$\begin{aligned} \mathcal{L}\widetilde{V}(e(t),t,i) \le e^{\epsilon t}\Big[&(\epsilon\delta_0-\xi)\|e(t_k)\|^2 + (\epsilon\delta_1-\xi)\|e(t)\|^2 - \xi\|e(t-d(t))\|^2 - \xi\|e(t-\tau(t))\|^2 \\ &+ \epsilon\delta_2\int_{t-\varpi}^{t}\|e(s)\|^2ds + \epsilon\delta_3\int_{t-\varpi}^{t}\|e(s-d(s))\|^2ds + \epsilon\delta_4\int_{t-\varpi}^{t}\|e(s-\tau(s))\|^2ds\Big]. \end{aligned} \tag{61}$$

By using Dynkin's formula, for T > 0, we have

$$\begin{aligned} \mathbb{E}\big(\widetilde{V}(e(T),T,i)\big) \le{}& J_1 + (\epsilon\delta_0-\xi)\,\mathbb{E}\Big\{\int_0^T e^{\epsilon t}\|e(t_k)\|^2dt\Big\} + (\epsilon\delta_1-\xi)\,\mathbb{E}\Big\{\int_0^T e^{\epsilon t}\|e(t)\|^2dt\Big\} \\ &- \xi\,\mathbb{E}\Big\{\int_0^T e^{\epsilon t}\|e(t-d(t))\|^2dt\Big\} - \xi\,\mathbb{E}\Big\{\int_0^T e^{\epsilon t}\|e(t-\tau(t))\|^2dt\Big\} \\ &+ \epsilon\delta_2\,\mathbb{E}\Big\{\int_0^T\!\!\int_{t-\varpi}^{t} e^{\epsilon t}\|e(s)\|^2ds\,dt\Big\} + \epsilon\delta_3\,\mathbb{E}\Big\{\int_0^T\!\!\int_{t-\varpi}^{t} e^{\epsilon t}\|e(s-d(s))\|^2ds\,dt\Big\} \\ &+ \epsilon\delta_4\,\mathbb{E}\Big\{\int_0^T\!\!\int_{t-\varpi}^{t} e^{\epsilon t}\|e(s-\tau(s))\|^2ds\,dt\Big\}, \end{aligned} \tag{62}$$

where J1 = [δ0 + δ1 + ϖδ2 + ϖδ3 + ϖδ4] sup_{−ϖ≤s≤0} E‖φ(s)‖². Consequently, by changing the integration sequence, the following inequalities hold:

$$\begin{aligned} \int_0^T\!\!\int_{t-\varpi}^{t} e^{\epsilon t}\|e(s)\|^2ds\,dt &\le \int_{-\varpi}^{T}\Big(\int_{s}^{s+\varpi} e^{\epsilon t}dt\Big)\|e(s)\|^2ds \le \int_{-\varpi}^{T}\varpi e^{\epsilon(s+\varpi)}\|e(s)\|^2ds \\ &\le \varpi e^{\epsilon\varpi}\int_0^T e^{\epsilon t}\|e(t)\|^2dt + \varpi e^{\epsilon\varpi}\int_{-\varpi}^{0}\|\varphi(s)\|^2ds \\ &\le \varpi e^{\epsilon\varpi}\int_0^T e^{\epsilon t}\|e(t)\|^2dt + \varpi^2 e^{\epsilon\varpi}\sup_{-\varpi\le s\le 0}\|\varphi(s)\|^2, \end{aligned}$$

and, in the same way,

$$\int_0^T\!\!\int_{t-\varpi}^{t} e^{\epsilon t}\|e(s-d(s))\|^2ds\,dt \le \varpi e^{\epsilon\varpi}\int_0^T e^{\epsilon t}\|e(t-d(t))\|^2dt + \varpi^2 e^{\epsilon\varpi}\sup_{-\varpi\le s\le 0}\|\varphi(s)\|^2,$$

$$\int_0^T\!\!\int_{t-\varpi}^{t} e^{\epsilon t}\|e(s-\tau(s))\|^2ds\,dt \le \varpi e^{\epsilon\varpi}\int_0^T e^{\epsilon t}\|e(t-\tau(t))\|^2dt + \varpi^2 e^{\epsilon\varpi}\sup_{-\varpi\le s\le 0}\|\varphi(s)\|^2. \tag{63}$$

After substituting (63) into the right side of (62) and then using ε max{δ0, δ1 + δ2ϖe^{εϖ}, δ3ϖe^{εϖ}, δ4ϖe^{εϖ}} ≤ ξ, we can obtain

$$\mathbb{E}\big(\widetilde{V}(e(t),t,i)\big) \le J_1 + J_2, \tag{64}$$

where J2 = (εδ2ϖ²e^{εϖ} + εδ3ϖ²e^{εϖ} + εδ4ϖ²e^{εϖ}) sup_{−ϖ≤s≤0} E‖φ(s)‖².

So,

$$\mathbb{E}\|e(T)\|^2 \le \frac{J_1+J_2}{\delta}\,e^{-\epsilon T}. \tag{65}$$

Then it can be shown that, for any t > 0,

$$\mathbb{E}\|e(t)\|^2 \le \zeta e^{-\epsilon t}\,\mathbb{E}\Big\{\sup_{-\varpi\le s\le 0}\|\varphi(s)\|^2\Big\}, \tag{66}$$

where ζ = (1/δ)[δ0 + δ1 + τδ2 + τδ3 + τδ4 + εδ2ϖ²e^{εϖ} + εδ3ϖ²e^{εϖ} + εδ4ϖ²e^{εϖ}]. Consequently, according to the Lyapunov-Krasovskii stability theory and Definition 3, we know that the error system (11) is exponentially stable. This completes the proof.

4. Exponential Synchronization for Stochastic Neural Networks with Mixed Time Delays and Markovian Jump via Sampled Data

In this section, some sufficient conditions of exponential synchronization for the stochastic error system (14) are obtained by employing the Lyapunov-Krasovskii functionals.

Theorem 8. Under Assumptions H1, H2, and H3, for a given scalar γ, the error system (14) is globally exponentially stable, which ensures that the master system (12) and slave system (13) are stochastically synchronized, if there exist positive scalars λi, symmetric positive definite matrices Pi, Q1, Q2, Q3, Z1, Z2, Z3, and Z4, positive definite matrices Y1i, Y2i, Y3i, and Y4i, and matrices R̃k, Q̃k, T̃k (k = 1, 2, ..., 8), G, and L such that, for any i ∈ S, the following matrix inequalities hold:

$$P_i \le \lambda_i I, \tag{67}$$

$$\Omega = \begin{pmatrix} \Pi & \sqrt{d_1}\,\widetilde{R} & \sqrt{d_{12}}\,\widetilde{Q} & \sqrt{h}\,\widetilde{T} \\ * & -Z_1 & 0 & 0 \\ * & * & -Z_2 & 0 \\ * & * & * & -Z_4 \end{pmatrix} < 0, \tag{68}$$

where R̃ = col{R̃1 R̃2 ⋯ R̃8}, Q̃ = col{Q̃1 Q̃2 ⋯ Q̃8}, T̃ = col{T̃1 T̃2 ⋯ T̃8}, and Π is the symmetric block matrix

$$\Pi = \begin{pmatrix} \Pi_{11i} & \Pi_{12} & \Pi_{13i} & \Pi_{14} & \Pi_{15} & \Pi_{16} & \Pi_{17i} & \Pi_{18i} \\ * & \Pi_{22} & \Pi_{23} & \Pi_{24} & \Pi_{25} & \Pi_{26} & \Pi_{27} & \Pi_{28} \\ * & * & \Pi_{33} & -\widetilde{Q}_3 & 0 & -\widetilde{T}_3 & \gamma F^TW_{1i}^TG^T & 0 \\ * & * & * & \Pi_{44} & -\widetilde{Q}_5^T & \Pi_{46} & -\widetilde{Q}_7^T & -\widetilde{Q}_8^T \\ * & * & * & * & \Pi_{55} & -\widetilde{T}_5 & 0 & 0 \\ * & * & * & * & * & \Pi_{66} & \Pi_{67i} & -\widetilde{T}_8^T \\ * & * & * & * & * & * & \Pi_{77} & \gamma GW_{2i} \\ * & * & * & * & * & * & * & -Z_3 \end{pmatrix},$$

where

Π11i = ∑_{j=1}^{s} πij Pj + λi Y1i + Q1 + Q2 + Q3 + τ²FᵀZ3F − GCi − CiᵀGᵀ + GW0iF + FᵀW0iᵀGᵀ + R̃1 + R̃1ᵀ + T̃1 + T̃1ᵀ,
Π12 = −R̃1 + R̃2ᵀ + Q̃1 + T̃2ᵀ,
Π13i = GW1iF + R̃3ᵀ + T̃3ᵀ,
Π14 = R̃4ᵀ − Q̃1 + T̃4ᵀ,
Π15 = R̃5ᵀ + T̃5ᵀ,
Π16 = L + R̃6ᵀ + T̃6ᵀ − T̃1,
Π17i = Pi − G − γCiᵀGᵀ + γFᵀW0iᵀGᵀ + R̃7ᵀ + T̃7ᵀ,
Π18i = GW2i + R̃8ᵀ + T̃8ᵀ,
Π22 = −Q1 − R̃2 − R̃2ᵀ + Q̃2 + Q̃2ᵀ,
Π23 = −R̃3ᵀ + Q̃3ᵀ,
Π24 = −R̃4ᵀ − Q̃2 + Q̃4ᵀ,
Π25 = −R̃5ᵀ + Q̃5ᵀ,
Π26 = −R̃6ᵀ + Q̃6ᵀ − T̃2,
Π27 = −R̃7ᵀ + Q̃7ᵀ,
Π28 = −R̃8ᵀ + Q̃8ᵀ,
Π33 = λi Y2i − (1 − μ)Q2, (69)

Π44 = −Q3 − Q̃4 − Q̃4ᵀ,
Π46 = −Q̃6ᵀ − T̃4,
Π55 = λi Y3i,
Π66 = λi Y4i − T̃6 − T̃6ᵀ,
Π67i = γLᵀ − T̃7ᵀ,
Π77 = d1Z1 + d12Z2 + hZ4 − γG − γGᵀ,
F̃ε = max{|Fε⁻|, |Fε⁺|}, ε = 1, 2, ..., n, F = diag{F̃1 F̃2 ⋯ F̃n}. (70)

Proof. Let β(e(t)) = −C(r(t))e(t) + W0(r(t))f(e(t)) + W1(r(t))f(e(t−d(t))) + W2(r(t))∫_{t−τ(t)}^{t} f(e(s))ds + Ke(tk). Then the system (14) can be written as

$$de(t) = \beta(e(t))\,dt + \rho(t, e(t), e(t-d(t)), e(t-\tau(t)), e(t_k), r(t))\,d\omega(t). \tag{71}$$

To analyze the stability of error system (14), we construct the following stochastic Lyapunov functional candidate:

$$V(e(t),t,r(t)) = \sum_{l=1}^{6} V_l(e(t),t,r(t)), \qquad t\in[t_k,t_{k+1}), \tag{72}$$

where

$$\begin{aligned} V_1(e(t),t,r(t)) &= e^T(t)P(r(t))e(t), \\ V_2(e(t),t,r(t)) &= \int_{t-d_1}^{t} e^T(s)Q_1e(s)\,ds + \int_{t-d(t)}^{t} e^T(s)Q_2e(s)\,ds + \int_{t-d_2}^{t} e^T(s)Q_3e(s)\,ds, \\ V_3(e(t),t,r(t)) &= \int_{-d_1}^{0}\!\int_{t+\theta}^{t} \beta^T(e(s))Z_1\beta(e(s))\,ds\,d\theta, \\ V_4(e(t),t,r(t)) &= \int_{-d_2}^{-d_1}\!\int_{t+\theta}^{t} \beta^T(e(s))Z_2\beta(e(s))\,ds\,d\theta, \\ V_5(e(t),t,r(t)) &= \tau\int_{-\tau}^{0}\!\int_{t+\theta}^{t} f^T(e(s))Z_3f(e(s))\,ds\,d\theta, \\ V_6(e(t),t,r(t)) &= \int_{-h}^{0}\!\int_{t+\theta}^{t} \beta^T(e(s))Z_4\beta(e(s))\,ds\,d\theta. \end{aligned} \tag{73}$$

Let L be the weak infinitesimal operator of the stochastic process (e(t), t ≥ 0, r(t)) along the trajectories of error system (14). Then we obtain

$$dV(e(t),t,i) = \mathcal{L}V(e(t),t,i)\,dt + 2e^T(t)P_i\,\rho(t, e(t), e(t-d(t)), e(t-\tau(t)), e(t_k), i)\,d\omega(t), \tag{74}$$

where

$$\begin{aligned} \mathcal{L}V(e(t),t,i) ={}& V_t(e(t),t,i) + V_e(e(t),t,i)\beta(e(t)) \\ &+ \tfrac{1}{2}\operatorname{trace}\big[\rho^T(t,e(t),e(t-d(t)),e(t-\tau(t)),e(t_k),i)\,V_{ee}(e(t),t,i)\,\rho(t,e(t),e(t-d(t)),e(t-\tau(t)),e(t_k),i)\big] \\ &+ \sum_{j=1}^{s}\pi_{ij}V(e(t),t,j), \end{aligned}$$

with Vt(e(t), t, i) = ∂V(e(t), t, i)/∂t, Ve(e(t), t, i) = (∂V/∂e1, ∂V/∂e2, ..., ∂V/∂en), and Vee(e(t), t, i) = (∂²V/∂ei∂ej)n×n. (75)

So we have that

$$\begin{aligned} \mathcal{L}V_1(e(t),t,i) ={}& 2e^T(t)P_i\beta(e(t)) + e^T(t)\Big[\sum_{j=1}^{s}\pi_{ij}P_j\Big]e(t) + \operatorname{trace}\big[\rho^T(\cdot,i)\,P_i\,\rho(\cdot,i)\big] \\ \le{}& 2e^T(t)P_i\beta(e(t)) + e^T(t)\Big[\sum_{j=1}^{s}\pi_{ij}P_j\Big]e(t) \\ &+ \lambda_i\big[e^T(t)Y_{1i}e(t) + e^T(t-d(t))Y_{2i}e(t-d(t)) + e^T(t-\tau(t))Y_{3i}e(t-\tau(t)) + e^T(t_k)Y_{4i}e(t_k)\big], \end{aligned}$$

$$\begin{aligned} \mathcal{L}V_2(e(t),t,i) ={}& e^T(t)(Q_1+Q_2+Q_3)e(t) - e^T(t-d_1)Q_1e(t-d_1) \\ &- (1-\mu)e^T(t-d(t))Q_2e(t-d(t)) - e^T(t-d_2)Q_3e(t-d_2), \end{aligned}$$

$$\mathcal{L}V_3(e(t),t,i) = d_1\beta^T(e(t))Z_1\beta(e(t)) - \int_{t-d_1}^{t}\beta^T(e(s))Z_1\beta(e(s))\,ds,$$

$$\mathcal{L}V_4(e(t),t,i) = d_{12}\beta^T(e(t))Z_2\beta(e(t)) - \int_{t-d_2}^{t-d_1}\beta^T(e(s))Z_2\beta(e(s))\,ds,$$

$$\mathcal{L}V_5(e(t),t,i) \le \tau^2 f^T(e(t))Z_3f(e(t)) - \Big(\int_{t-\tau(t)}^{t}f(e(s))\,ds\Big)^T Z_3\Big(\int_{t-\tau(t)}^{t}f(e(s))\,ds\Big) \quad \text{(by Lemma 4)},$$

$$\mathcal{L}V_6(e(t),t,i) = h\beta^T(e(t))Z_4\beta(e(t)) - \int_{t-h}^{t}\beta^T(e(s))Z_4\beta(e(s))\,ds \le h\beta^T(e(t))Z_4\beta(e(t)) - \int_{t_k}^{t}\beta^T(e(s))Z_4\beta(e(s))\,ds. \tag{76}$$

It is easy to see from (71) that

$$e(t) - e(t-d_1) = \int_{t-d_1}^{t}\beta(e(s))\,ds + \int_{t-d_1}^{t}\rho(s, e(s), e(s-d(s)), e(s-\tau(s)), e(t_k), i)\,d\omega(s). \tag{77}$$

Taking the mathematical expectation on both sides of (77) yields

$$\mathbb{E}\Big\{\int_{t-d_1}^{t}\beta(e(s))\,ds\Big\} = \mathbb{E}\{e(t)-e(t-d_1)\} = \mathbb{E}\{\xi_1^T X(t)\}, \tag{78}$$

where

$$X(t) = \mathrm{col}\Big\{e(t)\;\; e(t-d_1)\;\; e(t-d(t))\;\; e(t-d_2)\;\; e(t-\tau(t))\;\; e(t_k)\;\; \beta(t)\;\; \int_{t-\tau(t)}^{t}f(e(s))\,ds\Big\}, \qquad \xi_1 = \mathrm{col}\{I\;\; -I\;\; 0\;\; 0\;\; 0\;\; 0\;\; 0\;\; 0\}. \tag{79}$$

Applying Lemma 6, we have that there exists a matrix R̃ ∈ R^{8n×n} such that

$$\mathbb{E}\Big\{-\int_{t-d_1}^{t}\beta^T(e(s))Z_1\beta(e(s))\,ds\Big\} \le \mathbb{E}\big\{X^T(t)\big[\widetilde{R}\xi_1^T + \xi_1\widetilde{R}^T + d_1\widetilde{R}Z_1^{-1}\widetilde{R}^T\big]X(t)\big\}. \tag{80}$$

Similarly, it can be seen that there exist matrices Q̃, T̃ ∈ R^{8n×n} such that

$$\mathbb{E}\Big\{-\int_{t-d_2}^{t-d_1}\beta^T(e(s))Z_2\beta(e(s))\,ds\Big\} \le \mathbb{E}\big\{X^T(t)\big[\widetilde{Q}\xi_2^T + \xi_2\widetilde{Q}^T + d_{12}\widetilde{Q}Z_2^{-1}\widetilde{Q}^T\big]X(t)\big\}, \tag{81}$$

$$\mathbb{E}\Big\{-\int_{t_k}^{t}\beta^T(e(s))Z_4\beta(e(s))\,ds\Big\} \le \mathbb{E}\big\{X^T(t)\big[\widetilde{T}\xi_3^T + \xi_3\widetilde{T}^T + h\widetilde{T}Z_4^{-1}\widetilde{T}^T\big]X(t)\big\}, \tag{82}$$

where ξ2 = col{0 I 0 −I 0 0 0 0} and ξ3 = col{I 0 0 0 0 −I 0 0}.

Furthermore, according to the definition of β(e(t)), for any appropriately dimensioned matrix G and scalar γ, the following equality is satisfied:

$$2\big[e^T(t)G + \gamma\beta^T(e(t))G\big]\Big[-\beta(e(t)) - C_ie(t) + W_{0i}f(e(t)) + W_{1i}f(e(t-d(t))) + W_{2i}\int_{t-\tau(t)}^{t}f(e(s))\,ds + Ke(t_k)\Big] = 0. \tag{83}$$

Taking the mathematical expectation on both sides of (74), letting L = GK, and considering (8), (33), (76), and (80)–(83), we obtain that

$$\mathbb{E}\{\mathcal{L}V(e(t),t,i)\} = \mathbb{E}\big\{X^T(t)\big(\Pi + d_1\widetilde{R}Z_1^{-1}\widetilde{R}^T + d_{12}\widetilde{Q}Z_2^{-1}\widetilde{Q}^T + h\widetilde{T}Z_4^{-1}\widetilde{T}^T\big)X(t)\big\}. \tag{84}$$

Applying Lemma 5 and (68), we have that

$$\mathbb{E}\{\mathcal{L}V(e(t),t,i)\} \le \mathbb{E}\big\{-\xi\big(\|e(t)\|^2 + \|e(t-d(t))\|^2 + \|e(t-\tau(t))\|^2\big)\big\}. \tag{85}$$

Following the same lines as the proof of Theorem 7, we can get that, for any t > 0,

$$\mathbb{E}\|e(t)\|^2 \le \zeta e^{-\epsilon t}\,\mathbb{E}\Big\{\sup_{-\varpi\le s\le 0}\|\varphi(s)\|^2\Big\}, \tag{86}$$

where ζ = (1/δ)[δ1 + τδ2 + τδ3 + τδ4 + εδ2ϖ²e^{εϖ} + εδ3ϖ²e^{εϖ} + εδ4ϖ²e^{εϖ}]. Thus, the master system and the slave system are exponentially synchronized; the sampled-data feedback control gain is given by K = G⁻¹L. This completes the proof.

5. Illustrative Examples

In this section, two numerical examples are given to demonstrate the effectiveness of the theoretical results.

Example 9. Consider the second-order master system (3) and slave system (6) with the following parameters:

$$\begin{gathered} C_1 = \begin{pmatrix} 0.8 & 0 \\ 0 & 0.9 \end{pmatrix}, \quad W_{01} = \begin{pmatrix} 1.7 & -0.15 \\ -5.2 & 3.3 \end{pmatrix}, \quad W_{11} = \begin{pmatrix} -1.7 & -0.1 \\ -0.26 & -2.5 \end{pmatrix}, \\ C_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad W_{02} = \begin{pmatrix} 1.5 & -0.16 \\ -5.1 & 3.4 \end{pmatrix}, \quad W_{12} = \begin{pmatrix} -1.9 & -0.1 \\ -0.25 & -2.6 \end{pmatrix}, \\ W_{21} = \begin{pmatrix} 0.7 & 0.15 \\ 2 & -0.12 \end{pmatrix}, \quad W_{22} = \begin{pmatrix} 0.8 & 0.15 \\ 1.5 & -0.12 \end{pmatrix}, \end{gathered} \tag{87}$$

and the activation functions are taken as g(α) = (|α + 1| − |α − 1|)/2. It can be verified that F1⁻ = F2⁻ = 0 and F1⁺ = F2⁺ = 1; thus F1 = (0 0; 0 0) and F2 = (0.5 0; 0 0.5). It is assumed that I(t) = 0, the discrete delay is d(t) = eᵗ/(eᵗ + 1), and the distributed delay is τ(t) = 0.5 sin²(t). Hence, a straightforward calculation gives d1 = 0.5, d2 = 1, μ = 0.25, and τ = 0.5. Moreover, the transition probability matrix is chosen as Υ = (−5 5; 5 −5).

[Figure 1: Chaotic behavior of master system (3) with u(t) = 0 (phase portrait of x1(t) versus x2(t)).]

[Figure 2: Chaotic behavior of slave system (6) with u(t) = 0 (phase portrait of y1(t) versus y2(t)).]
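Trajectories like those in Figures 1 and 2 can be generated, up to discretization error, by integrating (3) with a forward Euler scheme, a history buffer for the delayed terms, and a simulated Markov switching signal. The sketch below uses the mode data of (87); the step size, horizon, seed, and the Bernoulli approximation of the jumps are implementation assumptions, not choices made in the paper.

```python
import numpy as np

# Mode-dependent data of (87); lists are indexed by the mode r in {0, 1}.
C  = [np.diag([0.8, 0.9]), np.eye(2)]
W0 = [np.array([[1.7, -0.15], [-5.2, 3.3]]), np.array([[1.5, -0.16], [-5.1, 3.4]])]
W1 = [np.array([[-1.7, -0.1], [-0.26, -2.5]]), np.array([[-1.9, -0.1], [-0.25, -2.6]])]
W2 = [np.array([[0.7, 0.15], [2.0, -0.12]]), np.array([[0.8, 0.15], [1.5, -0.12]])]
g = lambda v: 0.5 * (np.abs(v + 1) - np.abs(v - 1))

rng = np.random.default_rng(0)
dt, T = 1e-3, 50.0
buf = int(1.0 / dt) + 1                      # history buffer covers d(t) <= d2 = 1
hist = np.full((buf, 2), [-0.5, 0.5], dtype=float)
x, r = np.array([-0.5, 0.5]), 0
traj = []
for k in range(int(T / dt)):
    t = k * dt
    if rng.random() < 5.0 * dt:              # jump intensity 5 (generator of Example 9)
        r = 1 - r
    d = np.exp(t) / (np.exp(t) + 1.0)        # discrete delay d(t)
    tau = 0.5 * np.sin(t) ** 2               # distributed delay tau(t)
    x_d = hist[-1 - int(d / dt)]             # x(t - d(t)) from the buffer
    m = max(int(tau / dt), 1)
    integral = sum(g(hist[-1 - j]) for j in range(m)) * dt   # Riemann sum for the integral
    x = x + dt * (-C[r] @ x + W0[r] @ g(x) + W1[r] @ g(x_d) + W2[r] @ integral)
    hist = np.roll(hist, -1, axis=0)
    hist[-1] = x
    traj.append(x.copy())
traj = np.array(traj)   # plotting traj[:, 0] against traj[:, 1] gives a Figure-1-style portrait
```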

The chaotic behaviors of the master system (3) and slave system (6) with u(t) = 0 are given in Figures 1 and 2, respectively, with the initial states chosen as x(t) = [−0.5 0.5]ᵀ and y(t) = [0.5 −0.5]ᵀ, t ∈ [−1, 0]. Choosing γ = 0.7 and applying Theorem 7, we find that the upper bound on the sampling intervals that preserves exponential synchronization of master system (3) and slave system (6) is 0.11. Using the Matlab LMI Control Toolbox to solve LMIs (21)–(24), we also obtain the following matrices:

$$G = \begin{pmatrix} 1.7892 & 0.1326 \\ 0.1326 & 0.2696 \end{pmatrix}, \qquad L = \begin{pmatrix} -3.4741 & 0.0513 \\ 0.0513 & 0.1577 \end{pmatrix}. \tag{88}$$

Thus, the corresponding gain matrix in (9) is given by

$$K = \begin{pmatrix} -2.0298 & -0.0152 \\ 1.1882 & 0.5924 \end{pmatrix}. \tag{89}$$

Under the obtained gain matrix in (89), the response curves of the control input (9) and the error system (11) are exhibited in Figures 3 and 4, respectively. It is obvious from Figure 4 that the slave system (6) exponentially synchronizes with the master system (3).

[Figure 3: Control input u(t) (components u1, u2).]

[Figure 4: State responses of error system (11) (components e1(t), e2(t)).]

Example 10. In the following, we consider the second-order stochastic master system (12) and slave system (13) with I(t) = (I1(t), I2(t))ᵀ; ω(t) is a second-order Brownian motion and r(t) is a right-continuous Markovian chain taking values in S = {1, 2} with generator Υ = (−3 3; 3 −3). For the two operating conditions (modes), the associated data are

$$\begin{gathered} C_1 = \begin{pmatrix} 0.8 & 0 \\ 0 & 0.9 \end{pmatrix}, \quad W_{01} = \begin{pmatrix} 0.1 & 5.1 \\ 3.2 & 0.1 \end{pmatrix}, \quad W_{11} = \begin{pmatrix} -0.1 & 3.2 \\ 5.1 & -0.1 \end{pmatrix}, \\ C_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad W_{02} = \begin{pmatrix} 0 & 5.2 \\ 3.1 & 0 \end{pmatrix}, \quad W_{12} = \begin{pmatrix} -0.1 & 3.1 \\ 5.2 & -0.1 \end{pmatrix}, \\ W_{21} = \begin{pmatrix} 0.7 & 0.15 \\ 2 & -0.12 \end{pmatrix}, \quad W_{22} = \begin{pmatrix} 0.8 & 0.15 \\ 1.5 & -0.12 \end{pmatrix}, \end{gathered} \tag{90}$$

$$\rho(t, e(t), e(t-d(t)), e(t-\tau(t)), e(t_k), 1) = \begin{pmatrix} z_1(t) & 0 \\ 0 & z_2(t) \end{pmatrix}, \qquad \rho(t, e(t), e(t-d(t)), e(t-\tau(t)), e(t_k), 2) = \begin{pmatrix} z_3(t) & 0 \\ 0 & z_4(t) \end{pmatrix},$$

where

$$\begin{aligned} z_1(t) &= 0.2e_1(t) + 0.3e_1(t-d(t)) + 0.1e_1(t-\tau(t)) + 0.4e_1(t_k), \\ z_2(t) &= 0.1e_2(t) + 0.2e_2(t-d(t)) + 0.2e_2(t-\tau(t)) + 0.3e_2(t_k), \\ z_3(t) &= 0.2e_1(t) + 0.3e_1(t-d(t)) + 0.1e_1(t-\tau(t)) + 0.2e_1(t_k), \\ z_4(t) &= 0.2e_2(t) + 0.1e_2(t-d(t)) + 0.1e_2(t-\tau(t)) + 0.3e_2(t_k), \end{aligned} \tag{91}$$

and the activation functions are taken as g(α) = (1/2)sin(α). It can be verified that F1⁻ = F2⁻ = −0.5 and F1⁺ = F2⁺ = 0.5; thus F = (0.5 0; 0 0.5). In this example, I(t) = 0, the discrete delay is d(t) = 1 + 0.3 sin(2t), and the distributed delay is τ(t) = 0.5 sin²(t). Then a straightforward calculation gives d1 = 0.7, d2 = 1.3, τ = 0.5, and ḋ(t) = 0.6 cos(2t) ≤ 0.6, which implies μ = 0.6.
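Given these data, the stochastic error system (14) can be simulated with an Euler–Maruyama scheme combined with the zero-order hold of (9) and simulated Markov jumps. The sketch below uses the gain (93) obtained later in this example; the step size, seed, jump approximation, and the representative nonlinearity f(e) = (1/2)sin(e), which satisfies the sector condition (8) with bounds ±0.5 (in general f also depends on the master trajectory), are assumptions of this illustration.

```python
import numpy as np

C  = [np.diag([0.8, 0.9]), np.eye(2)]
W0 = [np.array([[0.1, 5.1], [3.2, 0.1]]), np.array([[0.0, 5.2], [3.1, 0.0]])]
W1 = [np.array([[-0.1, 3.2], [5.1, -0.1]]), np.array([[-0.1, 3.1], [5.2, -0.1]])]
W2 = [np.array([[0.7, 0.15], [2.0, -0.12]]), np.array([[0.8, 0.15], [1.5, -0.12]])]
K  = np.array([[-11.1000, -3.9995], [-4.3360, -12.4944]])   # gain (93)
f = lambda e: 0.5 * np.sin(e)   # representative nonlinearity within the sector (8)

def noise(e, ed, etau, ek, mode):
    # mode-dependent noise intensities (90)-(91), returned as the diagonal
    if mode == 0:
        return np.array([0.2*e[0] + 0.3*ed[0] + 0.1*etau[0] + 0.4*ek[0],
                         0.1*e[1] + 0.2*ed[1] + 0.2*etau[1] + 0.3*ek[1]])
    return np.array([0.2*e[0] + 0.3*ed[0] + 0.1*etau[0] + 0.2*ek[0],
                     0.2*e[1] + 0.1*ed[1] + 0.1*etau[1] + 0.3*ek[1]])

rng = np.random.default_rng(0)
dt, T, h = 1e-3, 5.0, 0.1
buf = int(1.3 / dt) + 1                      # history covers d2 = 1.3
hist = np.full((buf, 2), [0.4, -0.4], dtype=float)   # e = y - x on the initial interval
e, r = np.array([0.4, -0.4]), 0
ek = e.copy()
for k in range(int(T / dt)):
    t = k * dt
    if k % round(h / dt) == 0:
        ek = e.copy()                        # sampled error, held by the ZOH
    if rng.random() < 3.0 * dt:              # generator rates +-3
        r = 1 - r
    d = 1.0 + 0.3 * np.sin(2 * t)
    tau = 0.5 * np.sin(t) ** 2
    ed = hist[-1 - int(d / dt)]
    etau = hist[-1 - max(int(tau / dt), 1)]
    m = max(int(tau / dt), 1)
    integral = sum(f(hist[-1 - j]) for j in range(m)) * dt
    drift = -C[r] @ e + W0[r] @ f(e) + W1[r] @ f(ed) + W2[r] @ integral + K @ ek
    dw = rng.standard_normal(2) * np.sqrt(dt)
    e = e + dt * drift + noise(e, ed, etau, ek, r) * dw     # Euler-Maruyama step
    hist = np.roll(hist, -1, axis=0)
    hist[-1] = e
print("endpoint ||e(T)|| ~", np.linalg.norm(e))
```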

Figures 5 and 6 show the chaotic behavior of the master system (12) and slave system (13) with u(t) = 0, respectively, under the initial states x(t) = [−0.2 0.2]ᵀ and y(t) = [0.2 −0.2]ᵀ, t ∈ [−1.3, 0].

[Figure 5: Chaotic behavior of master system (12) with u(t) = 0 (phase portrait of x1(t) versus x2(t)).]

[Figure 6: Chaotic behavior of slave system (13) with u(t) = 0 (phase portrait of y1(t) versus y2(t)).]

By using the Matlab LMI Control Toolbox to solve the LMIs given in Theorem 8 with γ = 0.2, we find that the upper bound on the sampling intervals that preserves exponential synchronization of master system (12) and slave system (13) is 0.1. Moreover, we also get the following matrices:

$$G = \begin{pmatrix} 0.1798 & -0.0507 \\ -0.0507 & 0.1495 \end{pmatrix}, \qquad L = \begin{pmatrix} -1.7755 & -0.0854 \\ -0.0854 & -1.6652 \end{pmatrix}. \tag{92}$$

Thus, the corresponding gain matrix in (9) is given by

$$K = G^{-1}L = \begin{pmatrix} -11.1000 & -3.9995 \\ -4.3360 & -12.4944 \end{pmatrix}. \tag{93}$$

Under the above gain matrix, Figures 7 and 8 show the response curves of the control input (9) and the error system (14), respectively. It is clear from Figures 7 and 8 that the obtained sampled-data controller achieves the exponential synchronization of master system (12) and slave system (13).

[Figure 7: Control input u(t) (components u1, u2).]

[Figure 8: State responses of error system (14) (components e1(t), e2(t)).]
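The gain formula K = G⁻¹L from Theorem 8 is a one-line computation once (92) is available; the following check (a plain numpy illustration, not part of the paper) reproduces (93) to the displayed precision.

```python
import numpy as np

G = np.array([[0.1798, -0.0507], [-0.0507, 0.1495]])
L = np.array([[-1.7755, -0.0854], [-0.0854, -1.6652]])
K = np.linalg.solve(G, L)   # K = G^{-1} L, cf. (92)-(93)
print(np.round(K, 4))       # matches (93) up to rounding of the published entries
```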

6. Conclusion

In this paper, the exponential synchronization issue for stochastic neural networks (SNNs) with mixed time delays and Markovian jump parameters under sampled-data control has been addressed. New delay-dependent conditions have been presented in terms of LMIs to ensure the exponential stability of the considered error systems, and thus the master systems exponentially synchronize with the slave systems. The results obtained in this paper are less conservative than previous results in the literature. The methods of this paper can be applied to other classes of neural networks, such as complex neural networks and impulsive neural networks.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Science and Technology Major Project of China (2011ZX05020-006) and the Natural Science Foundation of Hebei Province of China (A2011203103).

References

[1] Z. Wang, Y. Wang, and Y. Liu, "Global synchronization for discrete-time stochastic complex networks with randomly occurred nonlinearities and mixed time delays," IEEE Transactions on Neural Networks, vol. 21, no. 1, pp. 11–25, 2010.
[2] H. Wu, "Global exponential stability of Hopfield neural networks with delays and inverse Lipschitz neuron activations," Nonlinear Analysis: Real World Applications, vol. 10, no. 4, pp. 2297–2306, 2009.
[3] H. Wu and H. Yu, "Global robust exponential stability for Hopfield neural networks with non-Lipschitz activation functions," Nonlinear Oscillations, vol. 15, no. 1, pp. 127–138, 2012.
[4] X. Yang, J. Cao, and J. Lu, "Synchronization of delayed complex dynamical networks with impulsive and stochastic effects," Nonlinear Analysis: Real World Applications, vol. 12, no. 4, pp. 2252–2266, 2011.
[5] J. H. Park, O. M. Kwon, and S. M. Lee, "LMI optimization approach on stability for delayed neural networks of neutral-type," Applied Mathematics and Computation, vol. 196, no. 1, pp. 236–244, 2008.
[6] Z. Liu, H. Zhang, and Q. Zhang, "Novel stability analysis for recurrent neural networks with multiple delays via line integral-type L-K functional," IEEE Transactions on Neural Networks, vol. 21, no. 11, pp. 1710–1718, 2010.
[7] H. Zhang, Z. Liu, G.-B. Huang, and Z. Wang, "Novel weighting-delay-based stability criteria for recurrent neural networks with time-varying delay," IEEE Transactions on Neural Networks, vol. 21, no. 1, pp. 91–106, 2010.
[8] H. Wu, N. Li, K. Wang, G. Xu, and Q. Guo, "Global robust stability of switched interval neural networks with discrete and distributed time-varying delays of neutral type," Mathematical Problems in Engineering, vol. 2012, Article ID 361871, 18 pages, 2012.
[9] H. Wu and L. Zhang, "Almost periodic solution for memristive neural networks with time-varying delays," Journal of Applied Mathematics, vol. 2013, Article ID 716172, 12 pages, 2013.
[10] O. Faydasicok and S. Arik, "Equilibrium and stability analysis of delayed neural networks under parameter uncertainties," Applied Mathematics and Computation, vol. 218, no. 12, pp. 6716–6726, 2012.
[11] T. Huang, C. Li, W. Yu, and G. Chen, "Synchronization of delayed chaotic systems with parameter mismatches by using intermittent linear state feedback," Nonlinearity, vol. 22, no. 3, pp. 569–584, 2009.
[12] Z. Wu, P. Shi, H. Su, and J. Chu, "Exponential synchronization of neural networks with discrete and distributed delays under time-varying sampling," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 9, pp. 1368–1376, 2012.
[13] X. Yang, J. Cao, and J. Lu, "Stochastic synchronization of complex networks with nonidentical nodes via hybrid adaptive and impulsive control," IEEE Transactions on Circuits and Systems I, vol. 59, no. 2, pp. 371–384, 2012.
[14] Q. Gan, "Exponential synchronization of stochastic Cohen-Grossberg neural networks with mixed time-varying delays and reaction-diffusion via periodically intermittent control," Neural Networks, vol. 31, pp. 12–21, 2012.
[15] W. Zhang, J. Fang, Q. Miao, L. Chen, and W. Zhu, "Synchronization of Markovian jump genetic oscillators with nonidentical feedback delay," Neurocomputing, vol. 101, pp. 347–353, 2013.
[16] H. Wu, X. Guo, S. Ding, L. Wang, and L. Zhang, "Exponential synchronization for switched coupled neural networks via intermittent control," Journal of Computer Information Systems, vol. 9, no. 9, pp. 3503–3510, 2013.
[17] H. Zhang, T. Ma, G.-B. Huang, and C. Wang, "Robust global exponential synchronization of uncertain chaotic delayed neural networks via dual-stage impulsive control," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 40, no. 3, pp. 831–844, 2010.
[18] Y. Wang, Z. Wang, J. Liang, Y. Li, and M. Du, "Synchronization of stochastic genetic oscillator networks with time delays and Markovian jumping parameters," Neurocomputing, vol. 73, no. 13–15, pp. 2532–2539, 2010.
[19] X. Yang, J. Cao, and J. Lu, "Synchronization of randomly coupled neural networks with Markovian jumping and time-delay," IEEE Transactions on Circuits and Systems I, vol. 60, no. 2, pp. 363–376, 2013.
[20] J. Tian, Y. Li, J. Zhao, and S. Zhong, "Delay-dependent stochastic stability criteria for Markovian jumping neural networks with mode-dependent time-varying delays and partially known transition rates," Applied Mathematics and Computation, vol. 218, no. 9, pp. 5769–5781, 2012.
[21] J. Yu and G. Sun, "Robust stabilization of stochastic Markovian jumping dynamical networks with mixed delays," Neurocomputing, vol. 86, pp. 107–115, 2012.
[22] D. Zhang and L. Yu, "Exponential state estimation for Markovian jumping neural networks with time-varying discrete and distributed delays," Neural Networks, vol. 35, pp. 103–111, 2012.
[23] L. Sheng and H. Yang, "Robust stability of uncertain Markovian jumping Cohen-Grossberg neural networks with mixed time-varying delays," Chaos, Solitons & Fractals, vol. 42, no. 4, pp. 2120–2128, 2009.
[24] P. Balasubramaniam and S. Lakshmanan, "Delay-range dependent stability criteria for neural networks with Markovian jumping parameters," Nonlinear Analysis: Hybrid Systems, vol. 3, no. 4, pp. 749–756, 2009.
[25] C.-D. Zheng, F. Zhou, and Z. Wang, "Stochastic exponential synchronization of jumping chaotic neural networks with mixed delays," Communications in Nonlinear Science and Numerical Simulation, vol. 17, no. 3, pp. 1273–1291, 2012.

[26] Y. Li, Y. Zhu, N. Zeng, and M. Du, "Stability analysis of standard genetic regulatory networks with time-varying delays and stochastic perturbations," Neurocomputing, vol. 74, no. 17, pp. 3235–3241, 2011.
[27] L. Pan and J. Cao, "Stochastic quasi-synchronization for delayed dynamical networks via intermittent control," Communications in Nonlinear Science and Numerical Simulation, vol. 17, no. 3, pp. 1332–1343, 2012.
[28] Q. Zhu and J. Cao, "Exponential stability of stochastic neural networks with both Markovian jump parameters and mixed time delays," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 41, no. 2, pp. 341–353, 2011.
[29] J. Fu, H. Zhang, T. Ma, and Q. Zhang, "On passivity analysis for stochastic neural networks with interval time-varying delay," Neurocomputing, vol. 73, no. 4–6, pp. 795–801, 2010.
[30] H. Zhang and Y. Wang, "Stability analysis of Markovian jumping stochastic Cohen-Grossberg neural networks with mixed time delays," IEEE Transactions on Neural Networks, vol. 19, no. 2, pp. 366–370, 2008.
[31] Y. Liu, Z. Wang, J. Liang, and X. Liu, "Synchronization and state estimation for discrete-time complex networks with distributed delays," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 38, no. 5, pp. 1314–1325, 2008.
[32] W. Allegretto, D. Papini, and M. Forti, "Common asymptotic behavior of solutions and almost periodicity for discontinuous, delayed, and impulsive neural networks," IEEE Transactions on Neural Networks, vol. 21, no. 7, pp. 1110–1125, 2010.
[33] Z. Wang, Y. Liu, and X. Liu, "State estimation for jumping recurrent neural networks with discrete and distributed delays," Neural Networks, vol. 22, no. 1, pp. 41–48, 2009.
[34] H. Huang, T. Huang, and X. Chen, "Global exponential estimates of delayed stochastic neural networks with Markovian switching," Neural Networks, vol. 36, pp. 136–145, 2012.
[35] T. Huang, C. Li, S. Duan, and J. Starzyk, "Robust exponential stability of uncertain delayed neural networks with stochastic perturbation and impulse effects," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 6, pp. 866–875, 2012.
[36] Z.-G. Wu, J. H. Park, H. Su, and J. Chu, "Discontinuous Lyapunov functional approach to synchronization of time-delay neural networks using sampled-data," Nonlinear Dynamics, vol. 69, no. 4, pp. 2021–2030, 2012.
[37] S. J. S. Theesar, S. Banerjee, and P. Balasubramaniam, "Synchronization of chaotic systems under sampled-data control," Nonlinear Dynamics, vol. 70, no. 3, pp. 1977–1987, 2012.
[38] Z. G. Wu, P. Shi, H. Su, and J. Chu, "Stochastic synchronization of Markovian jump neural networks with time-varying delay using sampled data," IEEE Transactions on Cybernetics, vol. 43, no. 6, pp. 1796–1806, 2013.
[39] Y. Zhang and Q.-L. Han, "Network-based synchronization of delayed neural networks," IEEE Transactions on Circuits and Systems I, vol. 60, no. 3, pp. 676–689, 2013.
[40] H. Li, B. Chen, Q. Zhou, and W. Qian, "Robust stability for uncertain delayed fuzzy Hopfield neural networks with Markovian jumping parameters," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 39, no. 1, pp. 94–102, 2009.
[41] Z. Shu and J. Lam, "Exponential estimates and stabilization of uncertain singular systems with discrete and distributed delays," International Journal of Control, vol. 81, no. 6, pp. 865–882, 2008.
[42] X.-M. Zhang and Q.-L. Han, "Stability of linear systems with interval time-varying delays arising from networked control systems," in Proceedings of the 36th Annual Conference of the IEEE Industrial Electronics Society (IECON '10), pp. 225–230, Glendale, Ariz, USA, November 2010.
[43] P. Park, J. W. Ko, and C. Jeong, "Reciprocally convex approach to stability of systems with time-varying delays," Automatica, vol. 47, no. 1, pp. 235–238, 2011.
[44] Y. He, G. Liu, and D. Rees, "New delay-dependent stability criteria for neural networks with time-varying delay," IEEE Transactions on Neural Networks, vol. 18, no. 1, pp. 310–314, 2007.
[45] Z. Wang, Y. Liu, L. Yu, and X. Liu, "Exponential stability of delayed recurrent neural networks with Markovian jumping parameters," Physics Letters A, vol. 356, no. 4-5, pp. 346–352, 2006.
