
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, VOL. 27, NO. 1, JANUARY 2016

Exponential Synchronization of Coupled Stochastic Memristor-Based Neural Networks With Time-Varying Probabilistic Delay Coupling and Impulsive Delay

Haibo Bao, Ju H. Park, and Jinde Cao, Senior Member, IEEE

Abstract— This paper deals with the exponential synchronization of coupled stochastic memristor-based neural networks with probabilistic time-varying delay coupling and time-varying impulsive delay. A single probabilistic transmittal delay in the delayed coupling is modeled by a Bernoulli stochastic variable satisfying a conditional probability distribution. The disturbance is described by a Wiener process. Based on Lyapunov functions, the Halanay inequality, and linear matrix inequalities, sufficient conditions that depend on the probability distribution of the delay coupling and on the impulsive delay are obtained. Numerical simulations are used to show the effectiveness of the theoretical results.

Index Terms— Probabilistic time-varying delay, stochastic memristor-based neural networks, synchronization, time-varying impulsive delay.

I. INTRODUCTION

The concept of the memristor (a contraction of memory and resistor) was first postulated in [1], where it was predicted to be the fourth ideal electrical circuit element. About 40 years later, the first practical memristor device

Manuscript received November 26, 2014; revised July 19, 2015 and August 30, 2015; accepted August 31, 2015. Date of publication October 15, 2015; date of current version December 17, 2015. The work of H. Bao was supported in part by the National Natural Science Foundation of China under Grant 61203096 and Grant 61573291, in part by the China Post-Doctoral Science Foundation under Grant 2013M513924, in part by the Fundamental Research Funds through Central Universities under Grant XDJK2013C001, and in part by the Scientific Research, Southwest University, under Grant SWU112024. The work of J. H. Park was supported by the Basic Science Research Program through the National Research Foundation of Korea within the Ministry of Education under Grant 2013R1A1A2A10005201. The work of J. Cao was supported in part by the National Natural Science Foundation of China under Grant 61573096 and Grant 11072059, in part by the Specialized Research Fund through the Doctoral Program of Higher Education under Grant 20130092110017, and in part by the Natural Science Foundation of Jiangsu Province, China, under Grant BK2012741. (Corresponding author: Ju H. Park.) H. Bao is with the School of Mathematics and Statistics, Southwest University, Chongqing 400715, China, and also with the Nonlinear Dynamics Group, Department of Electrical Engineering, Yeungnam University, Gyeongsan 38541, Korea (e-mail: [email protected]). J. H. Park is with the Nonlinear Dynamics Group, Department of Electrical Engineering, Yeungnam University, Gyeongsan 38541, Korea (e-mail: [email protected]). J. Cao is with the Department of Mathematics, Southeast University, Nanjing 210096, China, and also with the Department of Mathematics, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TNNLS.2015.2475737

based on TiO2 thin films was realized by a research team at Hewlett–Packard, which published its findings in [2] and [3]. The memristor is a two-terminal element with various features, such as memory characteristics, nanometer size, and a variable resistance called memristance. Memristors have received worldwide attention because of their potential applications in next-generation computers, powerful brain-like neural computers, and neuromorphic computing systems [4]–[7]. Some research indicates that the memristor exhibits pinched hysteresis, which means that a lag occurs between the application and removal of a field and its subsequent effect, similar to the neurons in the human brain. Because of this feature, many researchers use memristors to design new models of neural networks to emulate the human brain [8]–[10]. In recent years, memristor-based neural networks have gained considerable attention for their wide range of applications in the fields of pattern recognition, signal processing, optimization, and associative memories [11]–[15].

Time delay is widely accepted in the implementation of electronic networks due to the finite switching speed of amplifiers and the finite signal propagation time. Delays can cause instability, bifurcation, and chaotic attractors [16]–[18], so it is necessary to investigate memristor-based neural networks with delay. In general, there are two types of delay: 1) internal delay within a system and 2) coupling delay caused by the exchange of information between systems. Many works have explored the synchronization of coupled neural networks with delay coupling. Synchronization of coupled delayed neural networks with linear constant coupling has been investigated in [19]–[26]. The coupling term was described by D(x_j(t − τ) − x_i(t − τ)), where x_i(t) = (x_{i1}(t), x_{i2}(t), …, x_{in}(t))^T ∈ R^n denotes the state variable associated with the ith network at time t, D denotes the inner coupling matrix between networks i and j at time t and time t − τ, and τ is the transmission delay. In the following, the other symbols have the same meanings. Cao et al. [23], Yu et al. [24], and Cao et al. [25] discussed the global synchronization of coupled delayed neural networks with constant, delayed, and hybrid coupling. Wu et al. [26] investigated exponential synchronization for complex dynamical networks with time-varying coupling delay and sampled data and described the coupling term by D(x_j(t − τ(t)) − x_i(t − τ(t))). Other studies considered that the time delay affects only the variable being transmitted from one system

2162-237X © 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


to another and investigated synchronization for networks with the coupling term D(x_j(t − τ) − x_i(t)) [27], [28]. Few studies have considered the synchronization problem of coupled memristor-based neural networks with a single delay. When investigating memristor-based networks, only deterministic time delay has been considered, and synchronization criteria were derived based only on information about the variation range of the time delay. The time delay in some neural networks is often stochastic, and its probabilistic characteristics can be obtained easily by statistical methods [29]–[32]. This often occurs in real systems where some values of the delay are very large, but the probability of the delay taking such large values is very small. Under these circumstances, if only the variation range of the time delay is used to derive the criteria, the results may be somewhat conservative. Hence, it is necessary to investigate the synchronization of memristor-based neural networks with random delay coupling.

The actual signal transmission between subsystems of coupled memristor-based neural networks is inevitably influenced by stochastic perturbations from various uncertainties, which may lead to packet loss or transmitted signals not being fully received. So, the stochastic perturbation should be considered when investigating the synchronization of memristor-based neural networks. Studies have pointed out that the state of the networks is often subject to instantaneous disturbances, and the networks experience impulsive effects in the form of abrupt changes at certain instants that may be caused by switching phenomena, frequency changes, or other sudden noise [33]–[36]. Impulsive control is effective for helping systems to achieve synchronization. Lu et al. [36] investigated the synchronization of linearly coupled neural networks with impulsive disturbances, with no delayed impulsive coupling in the models. Considering the influence of the time delay in the impulsive coupling, synchronization has also been investigated using constant-delay impulsive controllers [37] and impulsive controllers with time-varying delay [38]. To the best of our knowledge, few studies have considered the synchronization of coupled memristor-based neural networks with stochastic disturbances and time-varying impulsive effects, which remains an open challenge.

This paper investigates the exponential synchronization of coupled stochastic memristor-based neural networks. The main contributions can be summarized as follows. 1) A new coupled memristor-based neural network with one single probabilistic time-varying delay coupling is proposed, which includes delayed memristor-based neural networks, delayed memristor-based cellular neural networks, and well-known chaotic systems, such as the memristor-based Chua's system. 2) Both stochastic disturbances and time-varying impulsive effects are considered. 3) Based on Lyapunov functions, the Halanay inequality, and stochastic analysis techniques, sufficient conditions are given under which the coupled memristor-based neural network is exponentially synchronized in the mean square.


The criteria can be applied to chaotic memristor-based systems with or without delay. Numerical simulation examples are presented to demonstrate the effectiveness and the applicability of the main results.

The remainder of this paper is organized as follows. In Section II, the problem formulation is presented for the exponential synchronization of stochastic coupled memristor-based neural networks with probabilistic delay coupling and time-varying impulsive delay. Lemmas and notations used throughout this paper are also given. In Section III, the derivation of two theorems for exponential synchronization based on Lyapunov functions and linear matrix inequalities (LMIs) is presented. In Section IV, two numerical examples are given to validate the main results. Finally, the conclusion is given in Section V.

II. PROBLEM STATEMENT

In this section, we introduce mathematical models of coupled memristor-based neural networks and present notations, definitions, and lemmas used in this paper.

Notation: N^+ denotes the set of positive integers. R^+ and R^n denote the set of nonnegative real numbers and the n-dimensional Euclidean space, respectively. The superscript T denotes matrix or vector transposition. I_n is the n × n identity matrix. ||·|| is the Euclidean norm in R^n. For a matrix A, λ_max(A) and λ_min(A) are the largest and smallest eigenvalues of A, respectively. Tr(A) denotes the trace of matrix A. (Ω, F, {F_t}_{t≥0}, P) is a complete probability space with a filtration {F_t}_{t≥0} satisfying the usual conditions (i.e., the filtration contains all P-null sets and is right continuous). L^p_{F_0}([−τ, 0]; R^n) denotes the family of all F_0-measurable C([−τ, 0]; R^n)-valued random variables ξ = {ξ(s) : −τ ≤ s ≤ 0} such that sup_{−τ≤s≤0} E||ξ(s)||^p < ∞, where E{·} is the mathematical expectation operator with respect to the given probability measure P. In some cases, the arguments of a function or a matrix are omitted in the analysis when no confusion can arise.
Consider the isolated memristor-based neural networks described by the following state equation:

dx_i(t) = [C x_i(t) + A(x_i(t)) f(x_i(t)) + B(x_i(t)) f(x_i(t − τ1(t))) + J(t)] dt + σ(t, x_i(t), x_i(t − τ1(t))) dw(t)    (1)

where x_i(t) = (x_{i1}(t), x_{i2}(t), …, x_{in}(t))^T ∈ R^n for i = 1, 2, …, N denotes the state variable associated with the ith node at time t; C = −diag{c_1, c_2, …, c_n} < 0 with c_k > 0, k = 1, 2, …, n; and A(x_i(t)) = [a_{kj}(x_{ij}(t))]_{n×n} and B(x_i(t)) = [b_{kj}(x_{ij}(t))]_{n×n} (k = 1, 2, …, n) are the connection memristive weight matrix and the delayed connection memristive weight matrix, respectively. a_{kj}(x_{ij}(t)) and b_{kj}(x_{ij}(t)) are defined as follows:

a_{kj}(x_{ij}(t)) = â_{kj} if |x_{ij}(t)| > T_j, and ǎ_{kj} if |x_{ij}(t)| < T_j

with a_{kj}(±T_j) = â_{kj} or ǎ_{kj} for k, j = 1, 2, …, n, where the switching jumps T_j > 0 and the weights â_{kj} and ǎ_{kj} are all



constants, and

b_{kj}(x_{ij}(t)) = b̂_{kj} if |x_{ij}(t)| > T_j, and b̌_{kj} if |x_{ij}(t)| < T_j

with b_{kj}(±T_j) = b̂_{kj} or b̌_{kj} for k, j = 1, 2, …, n, where b̂_{kj} and b̌_{kj} are all constants; f(x_i(t)) = (f_1(x_{i1}(t), …, x_{in}(t)), …, f_n(x_{i1}(t), …, x_{in}(t)))^T is the neuron activation function; J(t) = (J_1(t), …, J_n(t))^T ∈ R^n is an external input vector; τ1(t) is the time-varying delay with 0 ≤ τ1(t) ≤ τ1, where τ1 is a positive constant; w(t) = (w_1(t), w_2(t), …, w_n(t))^T is an n-dimensional Wiener process defined on (Ω, F, {F_t}_{t≥0}, P), and w_i(t) is independent of w_j(t) for i ≠ j; and σ: R^+ × R^n × R^n → R^{n×n} is the noise function matrix. Since a_{kj}(x_{ij}(t)) and b_{kj}(x_{ij}(t)) are discontinuous, the solutions of all systems considered in this paper are handled in Filippov's sense. Through the theories of differential inclusions and set-valued maps, it follows from (1) that

dx_i(t) ∈ [C x_i(t) + co[A(x_i(t))] f(x_i(t)) + co[B(x_i(t))] f(x_i(t − τ1(t))) + J(t)] dt + σ(t, x_i(t), x_i(t − τ1(t))) dw(t)

(2)

where co[A(x_i(t))] = [co[a_{kj}(x_{ij}(t))]]_{n×n} and co[B(x_i(t))] = [co[b_{kj}(x_{ij}(t))]]_{n×n}, with

co[a_{kj}(x_{ij}(t))] = â_{kj} if |x_{ij}(t)| > T_j;  co{â_{kj}, ǎ_{kj}} if |x_{ij}(t)| = T_j;  ǎ_{kj} if |x_{ij}(t)| < T_j

co[b_{kj}(x_{ij}(t))] = b̂_{kj} if |x_{ij}(t)| > T_j;  co{b̂_{kj}, b̌_{kj}} if |x_{ij}(t)| = T_j;  b̌_{kj} if |x_{ij}(t)| < T_j

where co{â_{kj}, ǎ_{kj}} = [a̲_{kj}, ā_{kj}], co{b̂_{kj}, b̌_{kj}} = [b̲_{kj}, b̄_{kj}], a̲_{kj} = min{â_{kj}, ǎ_{kj}}, ā_{kj} = max{â_{kj}, ǎ_{kj}}, b̲_{kj} = min{b̂_{kj}, b̌_{kj}}, and b̄_{kj} = max{b̂_{kj}, b̌_{kj}}. Equivalently, there exist π_{kj}(x_{ij}(t)) ∈ co[a_{kj}(x_{ij}(t))] and δ_{kj}(x_{ij}(t)) ∈ co[b_{kj}(x_{ij}(t))] such that

dx_i(t) = [C x_i(t) + Π(x_i(t)) f(x_i(t)) + Δ(x_i(t)) f(x_i(t − τ1(t))) + J(t)] dt + σ(t, x_i(t), x_i(t − τ1(t))) dw(t)

(3)

where Π(x_i(t)) = [π_{kj}(x_{ij}(t))]_{n×n} and Δ(x_i(t)) = [δ_{kj}(x_{ij}(t))]_{n×n}.

Definition 1 [39]: A function (in Filippov's sense) x_i(t) = (x_{i1}(t), x_{i2}(t), …, x_{in}(t))^T is a solution of system (1) with initial condition x_i(s) = φ_i(s), s ∈ [−τ, 0], if x_i(t) is absolutely continuous on any compact interval of [−τ, +∞) and satisfies the differential inclusion (2) or (3).

Assumption 1: The activation function f satisfies a Lipschitz condition; that is, there exist positive constants l_i such that ||f_i(x) − f_i(y)|| ≤ l_i ||x − y|| for all x, y ∈ R^n and i = 1, 2, …, n. Let L = diag{l_1, l_2, …, l_n}.

Assumption 2: There exist nonnegative constants ρ1 and ρ2 such that Tr{[σ(t, x_1, y_1) − σ(t, x_2, y_2)]^T [σ(t, x_1, y_1) − σ(t, x_2, y_2)]} ≤ ρ1 ||x_1 − x_2||^2 + ρ2 ||y_1 − y_2||^2 holds for any x_1, x_2, y_1, y_2 ∈ R^n and t ∈ R^+.
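To make the state-dependent switching of the memristive weights and the stochastic dynamics of (1) concrete, the following Euler–Maruyama sketch simulates one isolated node. All parameters here (C, the weight levels, the tanh activation, the noise intensity, and the delay) are illustrative placeholders, not the paper's example values:

```python
import numpy as np

# State-dependent memristive weight: switches between a_hat and a_check
# depending on whether |x_j| exceeds the switching jump T_j.
def memristive_weight(x_j, a_hat, a_check, T_j=1.0):
    return a_hat if abs(x_j) > T_j else a_check

def simulate_node(x0, steps=500, dt=1e-3, tau1_steps=100, seed=0):
    """Euler-Maruyama sketch of the isolated node (1):
    dx = [Cx + A(x)f(x) + B(x)f(x(t - tau1)) + J]dt + sigma(.)dw."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    C = -np.eye(n)                                   # C = -diag{c_k}, c_k > 0
    A_hat, A_check = 2.0 * np.ones((n, n)), -2.0 * np.ones((n, n))
    B_hat, B_check = 0.5 * np.ones((n, n)), -0.5 * np.ones((n, n))
    f = np.tanh                                      # Lipschitz activation (Assumption 1)
    hist = [np.array(x0, float)] * (tau1_steps + 1)  # constant initial history
    for _ in range(steps):
        x, x_del = hist[-1], hist[-1 - tau1_steps]
        # Evaluate the switched weight matrices entrywise from the current state.
        A = np.array([[memristive_weight(x[j], A_hat[k, j], A_check[k, j])
                       for j in range(n)] for k in range(n)])
        B = np.array([[memristive_weight(x[j], B_hat[k, j], B_check[k, j])
                       for j in range(n)] for k in range(n)])
        drift = C @ x + A @ f(x) + B @ f(x_del)      # J = 0 in this sketch
        noise = 0.1 * x * np.sqrt(dt) * rng.standard_normal(n)  # sigma = 0.1 diag(x)
        hist.append(x + drift * dt + noise)
    return np.array(hist)

traj = simulate_node([0.2, -0.1])
```

Filippov-type behavior at the switching surfaces |x_j| = T_j is not resolved by this naive discretization; the sketch only illustrates the structure of the model.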

Consider a network of N nodes, and let x_i(t) denote the state of the ith node. The uncoupled dynamics of each node is defined as in (1). Let G be the N × N configuration matrix of the network such that G_{ij} > 0 if there is an edge from node j to node i and G_{ij} = 0 otherwise. Thus, the dynamics of the ith node is given by

dx_i(t) ∈ [C x_i(t) + co[A(x_i(t))] f(x_i(t)) + co[B(x_i(t))] f(x_i(t − τ1(t))) + J(t) + Σ_{j=1}^{N} G_{ij} D x_j(t) + Σ_{j=1}^{N} G_{ij} D_τ x_j(t − τ(t)) − G_{ii} D_τ (x_i(t − τ(t)) − x_i(t))] dt + σ(t, x_i(t), x_i(t − τ1(t))) dw(t)

(4)

or

dx_i(t) = [C x_i(t) + Π(x_i(t)) f(x_i(t)) + Δ(x_i(t)) f(x_i(t − τ1(t))) + J(t) + Σ_{j=1}^{N} G_{ij} D x_j(t) + Σ_{j=1}^{N} G_{ij} D_τ x_j(t − τ(t)) − G_{ii} D_τ (x_i(t − τ(t)) − x_i(t))] dt + σ(t, x_i(t), x_i(t − τ1(t))) dw(t)

(5)

for i = 1, 2, …, N, where D = [d_{kj}]_{n×n} and D_τ = [d^τ_{kj}]_{n×n} denote the inner coupling matrices between the connected networks i and j at time t and t − τ(t) for all 1 ≤ i ≤ j ≤ N. The configuration matrix G is irreducible and satisfies G_{ij} ≥ 0 for i ≠ j and G_{ii} = −Σ_{j=1, j≠i}^{N} G_{ij}. The time-varying delay τ(t) satisfies the following condition.

Assumption 3: There exist constants τ2 and τ3 satisfying 0 ≤ τ2 ≤ τ3 such that either τ(t) ∈ [0, τ2] or τ(t) ∈ (τ2, τ3]. Furthermore, the probability distribution of τ(t) taking values in [0, τ2] and (τ2, τ3] is assumed to be P{τ(t) ∈ [0, τ2]} = β0 and P{τ(t) ∈ (τ2, τ3]} = 1 − β0, where 0 ≤ β0 ≤ 1. Define the stochastic variable

β(t) = 1 if τ(t) ∈ [0, τ2], and β(t) = 0 if τ(t) ∈ (τ2, τ3]

where β(t) is independent of w_s(t), s = 1, 2, …, n. Therefore

P{β(t) = 1} = P{τ(t) ∈ [0, τ2]} = E{β(t)} = β0
P{β(t) = 0} = P{τ(t) ∈ (τ2, τ3]} = E{1 − β(t)} = 1 − β0.

Remark 1: The binary stochastic variable was first introduced in [29] and was successfully used in [30]–[32]. Under Assumption 3, β0 depends on the values of τ2 and τ3. Now, we introduce two time-varying delays τ2(t) and τ3(t) such that

τ(t) = τ2(t) if τ(t) ∈ [0, τ2], and τ(t) = τ3(t) if τ(t) ∈ (τ2, τ3].
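The Bernoulli delay-splitting mechanism of Assumption 3 can be sketched numerically. The values of β0, τ2, and τ3 below are illustrative; the check verifies that the indicator β(t) has mean β0 and marks exactly the samples with τ(t) ≤ τ2:

```python
import numpy as np

# Sketch of Assumption 3: tau(t) falls in [0, tau2] with probability beta0
# and in (tau2, tau3] with probability 1 - beta0; beta(t) is the Bernoulli
# indicator of the first event. Values are illustrative.
rng = np.random.default_rng(1)
beta0, tau2, tau3 = 0.8, 0.3, 1.0

def sample_delay(size):
    beta = rng.random(size) < beta0           # beta(t) in {0, 1}
    short = rng.uniform(0.0, tau2, size)      # tau2(t) in [0, tau2]
    long_ = rng.uniform(tau2, tau3, size)     # tau3(t) in (tau2, tau3]
    return np.where(beta, short, long_), beta

tau, beta = sample_delay(200_000)
print(abs(beta.mean() - beta0) < 0.01)        # E{beta(t)} = beta0 → True
print(bool(np.all((tau <= tau2) == beta)))    # beta(t) = 1 iff tau(t) <= tau2 → True
```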


By using the new functions τ2(t) and τ3(t) and the definition of β(t), for i = 1, 2, …, N, the systems in (4) and (5) can be written as

dx_i(t) ∈ [C x_i(t) + co[A(x_i(t))] f(x_i(t)) + co[B(x_i(t))] f(x_i(t − τ1(t))) + J(t) + Σ_{j=1}^{N} G_{ij} D x_j(t) + β(t) Σ_{j=1}^{N} G_{ij} D_τ x_j(t − τ2(t)) + (1 − β(t)) Σ_{j=1}^{N} G_{ij} D_τ x_j(t − τ3(t)) − β(t) G_{ii} D_τ (x_i(t − τ2(t)) − x_i(t)) − (1 − β(t)) G_{ii} D_τ (x_i(t − τ3(t)) − x_i(t))] dt + σ(t, x_i(t), x_i(t − τ1(t))) dw(t)    (6)

or

dx_i(t) = [C x_i(t) + Π(x_i(t)) f(x_i(t)) + Δ(x_i(t)) f(x_i(t − τ1(t))) + J(t) + Σ_{j=1}^{N} G_{ij} D x_j(t) + β(t) Σ_{j=1}^{N} G_{ij} D_τ x_j(t − τ2(t)) + (1 − β(t)) Σ_{j=1}^{N} G_{ij} D_τ x_j(t − τ3(t)) − β(t) G_{ii} D_τ (x_i(t − τ2(t)) − x_i(t)) − (1 − β(t)) G_{ii} D_τ (x_i(t − τ3(t)) − x_i(t))] dt + σ(t, x_i(t), x_i(t − τ1(t))) dw(t).    (7)

By introducing the impulsive effects into systems (6) and (7), one can obtain the following coupled memristor-based neural networks with probabilistic delay coupling and time-varying impulsive delay effects, for i = 1, 2, …, N:

dx_i(t) ∈ [C x_i(t) + co[A(x_i(t))] f(x_i(t)) + co[B(x_i(t))] f(x_i(t − τ1(t))) + J(t) + Σ_{j=1}^{N} G_{ij} D x_j(t) + β(t) Σ_{j=1}^{N} G_{ij} D_τ x_j(t − τ2(t)) + (1 − β(t)) Σ_{j=1}^{N} G_{ij} D_τ x_j(t − τ3(t)) − β(t) G_{ii} D_τ (x_i(t − τ2(t)) − x_i(t)) − (1 − β(t)) G_{ii} D_τ (x_i(t − τ3(t)) − x_i(t))] dt + σ(t, x_i(t), x_i(t − τ1(t))) dw(t),    t ≠ t_k
x_i(t_k^+) − x_i(t_k^−) = θ x_i(t_k^−) + μ x_i(t_k − τ4(t_k)),    t = t_k, k ∈ N^+    (8)

or

dx_i(t) = [C x_i(t) + Π(x_i(t)) f(x_i(t)) + Δ(x_i(t)) f(x_i(t − τ1(t))) + J(t) + Σ_{j=1}^{N} G_{ij} D x_j(t) + β(t) Σ_{j=1}^{N} G_{ij} D_τ x_j(t − τ2(t)) + (1 − β(t)) Σ_{j=1}^{N} G_{ij} D_τ x_j(t − τ3(t)) − β(t) G_{ii} D_τ (x_i(t − τ2(t)) − x_i(t)) − (1 − β(t)) G_{ii} D_τ (x_i(t − τ3(t)) − x_i(t))] dt + σ(t, x_i(t), x_i(t − τ1(t))) dw(t),    t ≠ t_k
x_i(t_k^+) − x_i(t_k^−) = θ x_i(t_k^−) + μ x_i(t_k − τ4(t_k)),    t = t_k, k ∈ N^+    (9)

where μ and θ are the impulsive strengths with and without delay, respectively, 0 ≤ τ4(t) ≤ τ4, and the impulsive sequence {t_k} satisfies 0 = t_0 < t_1 < t_2 < · · · < t_{k−1} < t_k < · · · and lim_{k→+∞} t_k = +∞, with x(t_k) = x(t_k^+) = lim_{t→t_k^+} x(t), x(t_k^−) = lim_{t→t_k^−} x(t), and Δx_i(t_k) = x_i(t_k) − x_i(t_k^−).

The initial conditions of (1) are given by x_i(s) = φ_i(s), s ∈ [t_0 − τ, t_0], i = 1, 2, …, N, where φ_i(·) = [φ_{i1}(·), φ_{i2}(·), …, φ_{in}(·)]^T ∈ C([t_0 − τ, t_0], R^n) and τ = max{τ1, τ2, τ3, τ4}.

To discuss synchronization with one probabilistic delay coupling and impulsive effects, we define the set S = {x = (x_1^T(t), x_2^T(t), …, x_N^T(t))^T : x_i(s) ∈ C([t_0 − τ, t_0], R^n), x_i(t) = x_j(t), i, j = 1, 2, …, N} as the synchronization manifold for network (8). In this case, the goal is to make system (8) reach synchronization, i.e., x_1(t) = x_2(t) = · · · = x_N(t) = s(t), with the following synchronized state equation:

ds(t) ∈ [C s(t) + co[A(s(t))] f(s(t)) + co[B(s(t))] f(s(t − τ1(t))) + J(t) − β(t) G_{ii} D_τ (s(t − τ2(t)) − s(t)) − (1 − β(t)) G_{ii} D_τ (s(t − τ3(t)) − s(t))] dt + σ(t, s(t), s(t − τ1(t))) dw(t),    t ≠ t_k
s(t_k^+) − s(t_k^−) = θ s(t_k^−) + μ s(t_k − τ4(t_k)),    t = t_k, k ∈ N^+    (10)

where i = 1, 2, …, N.

Remark 2: When the array is synchronized, the nodes cannot be decoupled because of the single probabilistic delay in the coupling. The dynamics of the array reduces to system (10), not system (1). The probabilistic delay coupling plays an important role in the new collective behavior of the array.

For i = 1, 2, …, N, we define the following: G_{ii} = −a, x(t) = (x_1^T(t), …, x_N^T(t))^T, f(x(t)) =



(f^T(x_1(t)), …, f^T(x_N(t)))^T, J(t) = (J^T(t), …, J^T(t))^T, co[A(x(t))] = diag{co[A(x_1(t))], …, co[A(x_N(t))]}, co[B(x(t))] = diag{co[B(x_1(t))], …, co[B(x_N(t))]}, A(x(t)) = diag{Π(x_1(t)), …, Π(x_N(t))}, B(x(t)) = diag{Δ(x_1(t)), …, Δ(x_N(t))}, and Σ(t, x(t), x(t − τ1(t))) dW(t) = ((σ(t, x_1(t), x_1(t − τ1(t))) dw(t))^T, …, (σ(t, x_N(t), x_N(t − τ1(t))) dw(t))^T)^T. Then, systems (8) and (9) can be written as

dx(t) ∈ [C x(t) + co[A(x(t))] f(x(t)) + co[B(x(t))] f(x(t − τ1(t))) + J(t) + G x(t) + β(t) G_τ x(t − τ2(t)) + (1 − β(t)) G_τ x(t − τ3(t)) + a β(t) D_τ (x(t − τ2(t)) − x(t)) + a (1 − β(t)) D_τ (x(t − τ3(t)) − x(t))] dt + Σ(t, x(t), x(t − τ1(t))) dW(t),    t ≠ t_k
x(t_k^+) − x(t_k^−) = θ x(t_k^−) + μ x(t_k − τ4(t_k)),    t = t_k, k ∈ N^+    (11)

or

dx(t) = [C x(t) + A(x(t)) f(x(t)) + B(x(t)) f(x(t − τ1(t))) + J(t) + G x(t) + β(t) G_τ x(t − τ2(t)) + (1 − β(t)) G_τ x(t − τ3(t)) + a β(t) D_τ (x(t − τ2(t)) − x(t)) + a (1 − β(t)) D_τ (x(t − τ3(t)) − x(t))] dt + Σ(t, x(t), x(t − τ1(t))) dW(t),    t ≠ t_k
x(t_k^+) − x(t_k^−) = θ x(t_k^−) + μ x(t_k − τ4(t_k)),    t = t_k, k ∈ N^+    (12)

where C = I_N ⊗ C, G = G ⊗ D, D_τ = I_N ⊗ D_τ, and G_τ = G ⊗ D_τ, in which ⊗ denotes the Kronecker product of two matrices and I_N is the N-dimensional identity matrix.

Definition 2: The coupled memristor-based neural networks with impulsive effects are said to be exponentially synchronized in the mean square if there exist two constants ε > 0 and β > 0 such that

E||x_i(t) − x_j(t)||^2 ≤ sup_{−τ≤s≤0} E||Mφ(s)||^2 β e^{−εt}

holds for all t ≥ 0, i, j = 1, 2, …, N, where M = M ⊗ I_n, M is defined in Lemma 1, and φ(s) = (φ_1^T(s), φ_2^T(s), …, φ_N^T(s))^T.

Definition 3 [40]: T(R, b) = {matrices with entries in R such that the sum of the entries in each row is equal to b} for some b ∈ R.

To obtain our main result, we need Lemmas 1–6.

Lemma 1 [40]: Let U be an N × N matrix in T(R, b). Then, the (N − 1) × (N − 1) matrix H defined as H = MUK satisfies MU = HM, where M is the following (N − 1) × N matrix:

M =
[ 1 −1  0  · · ·  0
  0  1 −1  · · ·  0
  ·   ·   ·
  0  · · ·  0  1 −1 ]_{(N−1)×N}

and K is the following N × (N − 1) matrix:

K =
[ 1 1 · · · 1
  0 1 · · · 1
  ·  ·  ·
  0 0 · · · 1
  0 0 · · · 0 ]_{N×(N−1)}

where 1 is the multiplicative identity of R. Moreover, the matrix H can be written explicitly as H(i, j) = Σ_{m=1}^{j} [U(i, m) − U(i+1, m)].

Lemma 2 [41]: From the definition of the Kronecker product, the following properties can be easily obtained: 1) (αA) ⊗ B = A ⊗ (αB); 2) (A + B) ⊗ C = A ⊗ C + B ⊗ C; and 3) (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).

Lemma 3 (Schur Complement [42]): For a given symmetric matrix S = [S11, S12; S12^T, S22], S < 0 is equivalent to S22 < 0 and S11 − S12 S22^{−1} S12^T < 0.

Lemma 4: For matrices Σ1, Σ2, and Σ3 with Σ3 > 0, and for any vectors x and y with appropriate dimensions, 2x^T Σ1^T Σ2 y ≤ x^T Σ1^T Σ3 Σ1 x + y^T Σ2^T Σ3^{−1} Σ2 y.

Lemma 5 [38]: Consider the following impulsive differential inequalities:

D^+ v(t) ≤ a v(t) + b1 [v(t)]_{τ1} + b2 [v(t)]_{τ2} + · · · + bm [v(t)]_{τm},    t ≠ t_k, t ≥ t_0
v(t_k^+) ≤ p_k v(t_k^−) + q_{k1} [v(t_k^−)]_{τ1} + q_{k2} [v(t_k^−)]_{τ2} + · · · + q_{km} [v(t_k^−)]_{τm},    k ∈ N^+
v(t) = φ(t),    t ∈ [t_0 − τ, t_0]

where a, b_i, p_k, q_{ki}, and τ_i are constants, b_i ≥ 0, p_k ≥ 0, q_{ki} ≥ 0, τ_i ≥ 0, i = 1, 2, …, m, and v(t) ≥ 0, with [v(t)]_{τi} = sup_{t−τi ≤ s ≤ t} v(s) and [v(t_k)]_{τi} = sup_{t_k − τi(t_k) ≤ s < t_k} v(s). Under the conditions on a, b_i, p_k, and q_{ki} given in [38], there exist β > 1 and λ > 0 such that v(t) ≤ ||φ||_τ β e^{−λ(t − t_0)}, t ≥ t_0, where ||φ||_τ = sup_{t_0 − τ ≤ s ≤ t_0} ||φ(s)|| and τ = max{τ_i, i = 1, 2, …, m}.

Lemma 6 [38]: Let x(t) = (x_1^T(t), x_2^T(t), …, x_N^T(t))^T. If E||Mx(t)||^2 ≤ γ e^{−λt}, where γ > 0 and λ > 0, then E||x_i(t) − x_j(t)||^2 ≤ γ e^{−λt} for all i, j = 1, 2, …, N.

III. MAIN RESULTS

In this section, the exponential synchronization criteria of coupled stochastic memristor-based neural networks with one probabilistic delay coupling and impulsive effects are derived.
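Lemmas 1 and 2 lend themselves to a quick numerical sanity check. The sketch below (NumPy, illustrative size N = 5, random matrices) builds M, K, and H for a zero-row-sum configuration matrix, which lies in T(R, 0), and verifies MG = HM together with the mixed-product property of the Kronecker product:

```python
import numpy as np

N = 5
rng = np.random.default_rng(2)

# M: (N-1) x N difference matrix; K: N x (N-1) with ones on/above the diagonal.
M = np.eye(N - 1, N) - np.eye(N - 1, N, k=1)
K = np.triu(np.ones((N, N - 1)))

# A configuration matrix with zero row sums (off-diagonal entries >= 0).
G = rng.random((N, N))
np.fill_diagonal(G, 0.0)
np.fill_diagonal(G, -G.sum(axis=1))

H = M @ G @ K
print(np.allclose(M @ G, H @ M))          # Lemma 1: MG = HM → True

# Lemma 2, property 3): (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)
A, B, C, D = (rng.random((3, 3)) for _ in range(4))
print(np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D)))  # → True
```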


Before stating the main results, some notations are given. Let C_1 = I_{N−1} ⊗ C, D_{1τ} = I_{N−1} ⊗ D_τ, H = H ⊗ D, H_τ = H ⊗ D_τ, Ā_1 = I_{N−1} ⊗ Ā, B̄_1 = I_{N−1} ⊗ B̄, and L_1 = I_{N−1} ⊗ L, where H = MGK, M and K are defined in Lemma 1, Ā = (a^u_{kj})_{n×n}, B̄ = (b^u_{kj})_{n×n}, a^u_{kj} = max{|â_{kj}|, |ǎ_{kj}|}, and b^u_{kj} = max{|b̂_{kj}|, |b̌_{kj}|} for k, j = 1, 2, …, n; the stacked bounds I_N ⊗ Ā and I_N ⊗ B̄ are also used below.

Theorem 1: Suppose that Assumptions 1–3 hold. Then, the network (11) is exponentially synchronized in the mean square if there exist positive definite matrices P = [p_{ij}]_{(N−1)n×(N−1)n}, Q = [q_{ij}]_{(N−1)n×(N−1)n}, and R = [r_{ij}]_{(N−1)n×(N−1)n} and a positive constant α such that

Ξ =
[ Ξ11  0    Ξ13    Ξ14    P Ā_1 L_1   P B̄_1 L_1
  *    Ξ22  0      0      0           0
  *    *    −ξ3 P  0      0           0
  *    *    *      −ξ4 P  0           0
  *    *    *      *      −Q          0
  *    *    *      *      *           −R ] < 0    (13)

P < α I    (14)

λ1 + λ2 < 1    (15)

ξ1 + (ξ2 + ξ3 + ξ4)/(λ1 + λ2) + ln(λ1 + λ2)/ρ < 0,  ρ = sup_{k∈N^+}{t_k − t_{k−1}}.    (16)

The coupled memristor-based neural networks with impulsive effects are said to be exponentially synchronized if there exist two constants ε > 0 and β > 0 such that ||x_i(t) − x_j(t)||^2 ≤ sup_{−τ≤s≤0} ||Mφ(s)||^2 β e^{−εt} holds for all t ≥ 0, i, j = 1, 2, …, N, where M is defined in Lemma 1 and φ(s) = (φ_1^T(s), φ_2^T(s), …, φ_N^T(s))^T.

Theorem 2: Under Assumption 1, the network (17) is exponentially synchronized if there exist positive definite matrices P = [p_{ij}]_{(N−1)n×(N−1)n}, Q = [q_{ij}]_{(N−1)n×(N−1)n}, and R = [r_{ij}]_{(N−1)n×(N−1)n} such that

Ξ̃ =
[ Ξ11  0      Ξ13    P Ā_1 L_1   P B̄_1 L_1
  *    −ξ2 P  0      0           0
  *    *      −ξ3 P  0           0
  *    *      *      −Q          0
  *    *      *      *           −R ] < 0    (18)

λ1 + λ2 < 1    (19)

ξ1 + (ξ2 + ξ3)/(λ1 + λ2) + ln(λ1 + λ2)/ρ < 0.    (20)

Example 1: The memristive weights of the delayed connection matrix and the noise function are

b11(x) = 1.5 if |x| ≤ 1 and −1.5 if |x| > 1;  b12(x) = 0.1 if |x| ≤ 1 and −0.1 if |x| > 1;
b21(x) = 0.2 if |x| ≤ 1 and −0.2 if |x| > 1;  b22(x) = 4 if |x| ≤ 1 and −4 if |x| > 1

σ(t, x_i(t), x_i(t − τ1(t))) = 0.1 [ x_{i1}(t)  0;  0  x_{i2}(t − τ1(t)) ].

Let θ = −1, μ = 0.4, ξ1 = 3, ξ2 = 3, ξ3 = 1, ξ4 = 1, and t_{k+1} − t_k = 0.05. We used the MATLAB LMI Control Toolbox to solve the LMIs in (13) and (14) and obtained a feasible solution with t_min = −0.0095 and α = 1.1322 (a negative t_min indicates that the LMIs have a feasible solution set). For brevity, we omit the values of P, Q, and R. Through simple computation, we get λ1 + λ2 = 0.16 < 1 and ξ1 + (ξ2 + ξ3 + ξ4)/0.16 + ln(0.16)/0.05 = −2.4016 < 0. According to Theorem 1, it can be concluded that system (11) is exponentially synchronized in the mean square, which is verified in Figs. 1–4. Fig. 1 shows that system (21) has a chaotic attractor. Fig. 2 shows the synchronization state of system (10). Fig. 3 shows the synchronization errors x_{i1}(t) − x_{11}(t) and x_{i2}(t) − x_{12}(t), which indicate x_{i1}(t) − x_{11}(t) → 0 and x_{i2}(t) − x_{12}(t) → 0 as t → +∞. The total error of (11) is defined as err(t) = (Σ_{i=1}^{2} Σ_{j=1}^{4} |x_{1i}(t) − x_{ji}(t)|^2)^{1/2}. Fig. 4 shows the total synchronization error, which clearly tends to zero. This also confirms that the network (11) is exponentially synchronized.

Example 2: Consider a regular nearest-neighbor network with ten nodes (i = 1, 2, …, 10). The dynamics of each node is given by (21), and the configuration matrix is

G = 4 ×
[ −2  1   0  · · ·  1
   1 −2   1  · · ·  0
   ·    ·    ·
   0  0   0  · · ·  1
   1  0   0   1   −2 ]_{10×10}
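The arithmetic in Example 1's feasibility check can be reproduced directly; assuming (as the reported numbers suggest) that condition (16) evaluates ξ1 + (ξ2 + ξ3 + ξ4)/(λ1 + λ2) + ln(λ1 + λ2)/(t_{k+1} − t_k), the stated value follows:

```python
import math

# Example 1's parameters: theta = -1 and mu = 0.4 give lambda1 + lambda2 = 0.16,
# and the impulse interval is t_{k+1} - t_k = 0.05.
xi1, xi2, xi3, xi4 = 3.0, 3.0, 1.0, 1.0
lam_sum = 0.16          # lambda1 + lambda2 < 1 holds, so (15) is satisfied
h = 0.05                # impulse interval

value = xi1 + (xi2 + xi3 + xi4) / lam_sum + math.log(lam_sum) / h
print(round(value, 4))  # → -2.4016, matching the text, so (16) is satisfied
```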



APPENDIX I

Proof of Theorem 1: Step 1: Calculating LV(t). Let M = M ⊗ I_n, and consider a Lyapunov function of the following form: V(t) = x^T(t) M^T P M x(t). Using the weak infinitesimal operator L on the function V(t) along the solution of (12) for t ≠ t_k, k ∈ N^+, leads to

LV(t) = 2x^T(t) M^T P M {[C x(t) + A(x(t)) f(x(t)) + B(x(t)) f(x(t − τ1(t))) + J(t) + G x(t) + β0 G_τ x(t − τ2(t)) + (1 − β0) G_τ x(t − τ3(t)) + a β0 D_τ (x(t − τ2(t)) − x(t)) + a (1 − β0) D_τ (x(t − τ3(t)) − x(t))] dt

Fig. 8. Total synchronous error of (11) in Example 2.

The remaining parameters for Example 2 are

D = 5I,  Dτ = [ 0  0;  0  0.1 ],  Ā = [ 2  0.1;  5  4.5 ],  B̄ = [ 1.5  0.1;  0.2  4 ].

+ (Σ(t, x(t), x(t − τ1(t))) dW(t))^T M^T P M Σ(t, x(t), x(t − τ1(t))) dW(t)} + 2x^T(t) M^T P M

Let θ = −1, μ = 0.4, ξ1 = 4, ξ2 = 2.1, ξ3 = 2, ξ4 = 1, and t_{k+1} − t_k = 0.05. By solving the LMIs in (13) and (14), we obtain a feasible solution with t_min = −0.0029 and α = 0.5320. By simple computation, we obtain λ1 + λ2 = 0.16 < 1 and ξ1 + (ξ2 + ξ3 + ξ4)/0.16 + ln(0.16)/0.05 = −0.4016 < 0. According to Theorem 1, it can be concluded that system (11) is exponentially synchronized in the mean square, which is verified in Figs. 5–8. Fig. 5 shows the synchronization state of Example 2. Figs. 6 and 7 show the synchronization errors x_{i1}(t) − x_{11}(t) and x_{i2}(t) − x_{12}(t), respectively. This shows x_{i1}(t) − x_{11}(t) → 0 and x_{i2}(t) − x_{12}(t) → 0 as t → +∞. The total error of (11) is defined as err(t) = (Σ_{i=1}^{2} Σ_{j=1}^{10} |x_{1i}(t) − x_{ji}(t)|^2)^{1/2}. Fig. 8 shows the total synchronization error, which tends to zero. This also confirms that the network (11) is exponentially synchronized.
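The total error err(t) used in both examples is straightforward to compute from simulated trajectories. The sketch below uses synthetic stand-in trajectories (a shared limit curve plus exponentially decaying node-wise noise) purely to illustrate the computation:

```python
import numpy as np

# err(t) = (sum_i sum_j |x_{1i}(t) - x_{ji}(t)|^2)^{1/2} for an N-node,
# 2-state network; trajectories here are synthetic stand-ins.
def total_error(X):
    """X has shape (T, N, 2): T time points, N nodes, 2 states per node."""
    diff = X - X[:, :1, :]                 # compare every node with node 1
    return np.sqrt((diff ** 2).sum(axis=(1, 2)))

T, N = 100, 10
t = np.linspace(0.0, 5.0, T)
base = np.stack([np.sin(t), np.cos(t)], axis=1)     # shared limit trajectory
noise = np.random.default_rng(3).standard_normal((T, N, 2))
X = base[:, None, :] + np.exp(-2.0 * t)[:, None, None] * noise
err = total_error(X)
print(err[-1] < err[0])    # the error decays as the nodes synchronize
```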

V. CONCLUSION

We have investigated the exponential synchronization of a coupled memristor-based neural network. The memristor-based neural network involves one single probabilistic time-varying delay coupling, stochastic disturbances, and time-varying impulsive delay. Based on the Lyapunov function method, the LMI framework, and an extended Halanay differential inequality, sufficient conditions were derived to guarantee that the coupled stochastic memristor-based neural networks achieve exponential synchronization in the mean square. Two numerical examples were provided to show the usefulness and effectiveness of the proposed design methods.

× {(β(t) − β0)[G_τ x(t − τ2(t)) − G_τ x(t − τ3(t))] + a(β(t) − β0)[D_τ x(t − τ2(t)) − D_τ x(t − τ3(t))]}.    (22)

Based on the properties of the Kronecker product, MC = C_1 M, MD_τ = D_{1τ} M, and MJ = 0. Therefore

E[LV(t)] = 2x^T(t) M^T P C_1 M x(t) + 2x^T(t) M^T P M A(x(t)) f(x(t)) + 2x^T(t) M^T P M B(x(t)) f(x(t − τ1(t))) + 2x^T(t) M^T P M G x(t) + 2β0 x^T(t) M^T P M G_τ x(t − τ2(t)) + 2(1 − β0) x^T(t) M^T P M G_τ x(t − τ3(t)) + 2aβ0 x^T(t) M^T P M D_τ (x(t − τ2(t)) − x(t)) + 2a(1 − β0) x^T(t) M^T P M D_τ (x(t − τ3(t)) − x(t)) + (Σ(t, x(t), x(t − τ1(t))) dW(t))^T M^T P M Σ(t, x(t), x(t − τ1(t))) dW(t).

(23)

Based on the Lipschitz condition, a_{kj}(x_{ij}(t)) ≤ a^u_{kj} and b_{kj}(x_{ij}(t)) ≤ b^u_{kj}, and M(I_N ⊗ Ā) = Ā_1 M and M(I_N ⊗ B̄) = B̄_1 M, which leads to

2x^T(t) M^T P M A(x(t)) f(x(t))
  ≤ x^T(t) M^T Q M x(t) + f^T(x(t)) A^T(x(t)) M^T P Q^{−1} P M A(x(t)) f(x(t))
  ≤ x^T(t) M^T Q M x(t) + x^T(t) M^T L_1^T Ā_1^T P Q^{−1} P Ā_1 L_1 M x(t)

(24)

2x^T(t) M^T P M B(x(t)) f(x(t − τ1(t)))
  ≤ x^T(t) M^T R M x(t) + f^T(x(t − τ1(t))) B^T(x(t)) M^T P R^{−1} P M B(x(t)) f(x(t − τ1(t)))
  ≤ x^T(t) M^T R M x(t) + x^T(t − τ1(t)) M^T L_1^T B̄_1^T P R^{−1} P B̄_1 L_1 M x(t − τ1(t))

(25)
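The bound applied in (24) and (25) is an instance of Lemma 4, i.e., the completion-of-squares inequality 2a^T b ≤ a^T Q a + b^T Q^{−1} b for positive definite Q. A quick numerical sanity check with arbitrary data:

```python
import numpy as np

# For any positive definite Q and vectors a, b:
#   2 a^T b <= a^T Q a + b^T Q^{-1} b
# (follows from (Q^{1/2} a - Q^{-1/2} b)^T (Q^{1/2} a - Q^{-1/2} b) >= 0).
rng = np.random.default_rng(4)
n = 6
S = rng.random((n, n))
Q = S @ S.T + n * np.eye(n)            # positive definite weight matrix
a, b = rng.standard_normal(n), rng.standard_normal(n)

lhs = 2.0 * a @ b
rhs = a @ Q @ a + b @ np.linalg.solve(Q, b)   # solve(Q, b) = Q^{-1} b
print(lhs <= rhs + 1e-12)   # → True
```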


Using Lemma 3, Θ1 < 0 is equivalent to (13), so

2x^T(t) M^T P M G x(t) = 2x^T(t) M^T P [(M ⊗ I_n)(G ⊗ D)] x(t) = 2x^T(t) M^T P [MG ⊗ D] x(t) = 2x^T(t) M^T P [HM ⊗ D] x(t) = 2x^T(t) M^T P [(H ⊗ D)(M ⊗ I_n)] x(t) = 2x^T(t) M^T P H M x(t)

(26)

2x^T(t) M^T P M G_τ x(t − τ2(t)) = 2x^T(t) M^T P [(M ⊗ I_n)(G ⊗ D_τ)] x(t − τ2(t)) = 2x^T(t) M^T P H_τ M x(t − τ2(t))

(27)

T

2x (t)MT PMGτ x(t − τ3 (t)) = 2x T (t)MT P[(M ⊗ In )(G ⊗ Dτ )]x(t − τ3 (t)) = 2x T (t)MT PHτ Mx(t − τ3 (t)).

(28)

Based on Assumption 2 and (14) ((t, x(t), x(t − τ1 (t)))d W (t))T MT PM ×(t, x(t), x(t − τ1 (t)))d W (t) N−1 ≤α [(σ (t, x i (t), x i (t − τ1 (t))) − σ (t, x i+1 (t), x i+1 (t − τ1 (t))))dw(t)] ×[(σ (t, x i (t), x i (t − τ1 (t))) − σ (t, x (t), x i+1 (t − τ1 (t))))dw(t)] N−1 i+1 ≤ α ρ1 x i (t) − x i+1 (t)2 dt + ρ2

x i (t − τ1 (t)) − x i+1 (t − τ1 (t)) dt 2

τ2

− τ3

⎡

11 ⎢ ∗ = ⎢ ⎣ ∗ ∗

[x T (t)MT , x T (t − τ1 (t))MT , x T (t − and

0 22 ∗ ∗

13 0 −ξ3 P ∗

⎤ 14 0 ⎥ ⎥ 1 and > 0 such that λmin (P)Ex T MT Mx(t) ≤ EV (t) ≤ sup−τ ≤s≤0 EMφ(s)2 βe−t for t ≥ 0. Hence, Ex T MT Mx(t) ≤ EV (t) ≤ (1/λmin (P)) sup−τ ≤s≤0

A PPENDIX II

= αρ1 x T (t)M Mx(t)dt + αρ2 x T (t − τ1 (t))MT Mx(t − τ1 (t))dt.

(t))MT , x T (t

+ μ2 Ex T (tk − τ4 (tk ))MT PMx(tk − τ4 (tk )) (31) ≤ λ1 EV tk− + λ2 EV (tk − τ4 (tk )).

(1/λmin (P)) sup−τ ≤s≤0 EMφ(s)2 βe−t .

T

i=1 T

where ζ1T (t)

E[LV (t)] < ξ1 V (t) + ξ2 V (t − τ1 (t)) + ξ3 V (t − τ2 (t)) (30) + ξ4 V (t − τ3 (t)) − T T + EV (tk ) = E (1 + θ )x tk + μx(tk − τ4 (tk )) M PM × (1 + θ )x tk− + μx(tk − τ4 (tk )) = (1 + θ )2 Ex T tk− MT PMx tk− + μ(1 + θ ) T − T Ex tk M PMx(tk − τ4 (tk )) + Ex T (tk − τ4 (tk ))MT PMx tk−

EMφ(s)2 βe−t . Based on Lemma 6, Ex i (t) − x j (t)2 ≤

i=1

i=1 N−1

199

= 2x T (t)MT PC1 Mx(t) + 2x T (t)MT PMA(x(t))f(x(t)) + 2x T (t)MT PMB(x(t))f(x(t − τ1 (t))) + 2x T (t)MT PMGx(t) + 2x T (t)MT PMGτ x(t − τ2 (t)) + 2ax T (t)MT PD1τ M(x(t − τ2 (t)) − x(t)).

(32)

Combining (24)–(27) and using (32) leads to D + V (t) = ζ2T (t) ζ2 (t) + ξ1 V (t) + ξ2 V (t − τ1 (t)) + ξ3 V (t − τ2 (t)) where ζ2T (t)

=

τ2 (t))MT ]T and ⎡

11 ⎣ = ∗ ∗

[x T (t)MT , x T (t − τ1 (t))MT , x T (t − 0 22 ∗

⎤ PHτ + PD1τ ⎦ 1, and > 0. So, x i (t) − x j (t)2 ≤ (1/λmin (P)) sup−τ ≤s≤0 Mφ(s)2 βe−t . R EFERENCES [1] L. O. Chua, “Memristor—The missing circuit element,” IEEE Trans. Circuit Theory, vol. 18, no. 5, pp. 507–519, Sep. 1971. [2] D. B. Strukov, G. S. Snider, D. R. Stewart, and R. S. Williams, “The missing memristor found,” Nature, vol. 453, no. 7191, pp. 80–83, 2008. [3] J. M. Tour and T. He, “Electronics: The fourth element,” Nature, vol. 453, no. 7191, pp. 42–43, 2008. [4] S. Duan, X. Hu, Z. Dong, L. Wang, and P. Mazumder, “Memristor-based cellular nonlinear/neural network: Design, analysis, and applications,” IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 6, pp. 1202–1213, Jun. 2015. [5] X. Hu, G. Feng, S. Duan, and L. Liu, “Multilayer RTD-memristor-based cellular neural networks for color image processing,” Neurocomputing, vol. 162, pp. 150–162, Aug. 2015. [6] S. H. Jo, T. Chang, I. Ebong, B. B. Bhadviya, P. Mazumder, and W. Lu, “Nanoscale memristor device as synapse in neuromorphic systems,” Nano Lett., vol. 10, no. 4, pp. 1297–1301, 2010. [7] M. Itoh and L. O. Chua, “Memristor oscillators,” Int. J. Bifurcation Chaos, vol. 18, no. 11, pp. 3183–3206, 2008. [8] I. Petráš, “Fractional-order memristor-based Chua’s circuit,” IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 57, no. 12, pp. 975–979, Dec. 2010. [9] Y. V. Pershin and M. Di Ventra, “Experimental demonstration of associative memory with memristive neural networks,” Neural Netw., vol. 23, no. 7, pp. 881–886, 2010. [10] J. Hu and J. Wang, “Global uniform asymptotic stability of memristorbased recurrent neural networks with time delays,” in Proc. Int. Joint Conf. Neural Netw. (IJCNN), Jul. 2010, pp. 1–8. [11] Z. Guo, J. Wang, and Z. Yan, “Attractivity analysis of memristor-based cellular neural networks with time-varying delays,” IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 4, pp. 704–717, Apr. 2014. [12] S. Wen, G. Bao, Z. Zeng, Y. Chen, and T. 
Huang, “Global exponential synchronization of memristor-based recurrent neural networks with time-varying delays,” Neural Netw., vol. 48, pp. 195–203, Dec. 2013. [13] A. Wu, S. Wen, and Z. Zeng, “Synchronization control of a class of memristor-based recurrent neural networks,” Inf. Sci., vol. 183, no. 1, pp. 106–116, 2012. [14] G. Zhang and Y. Shen, “New algebraic criteria for synchronization stability of chaotic memristive neural networks with time-varying delays,” IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 10, pp. 1701–1707, Oct. 2013. [15] G. Zhang and Y. Shen, “Exponential synchronization of delayed memristor-based chaotic neural networks via periodically intermittent control,” Neural Netw., vol. 55, pp. 1–10, Jul. 2014. [16] J. Cao and M. Xiao, “Stability and Hopf bifurcation in a simplified BAM neural network with two time delays,” IEEE Trans. Neural Netw., vol. 18, no. 2, pp. 416–430, Mar. 2007. [17] H. Lu, “Chaotic attractors in delayed neural networks,” Phys. Lett. A, vol. 298, nos. 2–3, pp. 109–116, 2002. [18] J. Wei and S. Ruan, “Stability and bifurcation in a neural network model with two delays,” Phys. D, Nonlinear Phenomena, vol. 130, nos. 3–4, pp. 255–272, 1999. [19] W. Wang and J. Cao, “Synchronization in an array of linearly coupled networks with time-varying delay,” Phys. A, Statist. Mech. Appl., vol. 366, pp. 197–211, Jul. 2006.

[20] G. Chen, J. Zhou, and Z. Liu, “Global synchronization of coupled delayed neural networks and applications to chaotic CNN models,” Int. J. Bifurcation Chaos, vol. 14, no. 7, pp. 2229–2240, 2004. [21] C. W. Wu, “Synchronization in arrays of coupled nonlinear systems with delay and nonreciprocal time-varying coupling,” IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 52, no. 5, pp. 282–286, May 2005. [22] J. Zhou and T. Chen, “Synchronization in general complex delayed dynamical networks,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 53, no. 3, pp. 733–744, Mar. 2006. [23] J. Cao, P. Li, and W. Wang, “Global synchronization in arrays of delayed neural networks with constant and delayed coupling,” Phys. Lett. A, vol. 353, no. 4, pp. 318–325, 2006. [24] W. Yu, J. Cao, and J. Lü, “Global synchronization of linearly hybrid coupled networks with time-varying delay,” SIAM J. Appl. Dyn. Syst., vol. 7, no. 1, pp. 108–133, 2008. [25] J. Cao, G. Chen, and P. Li, “Global synchronization in an array of delayed neural networks with hybrid coupling,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 38, no. 2, pp. 488–498, Apr. 2008. [26] Z.-G. Wu, P. Shi, H. Su, and J. Chu, “Sampled-data exponential synchronization of complex dynamical networks with time-varying coupling delay,” IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 8, pp. 1177–1187, Aug. 2013. [27] W. Lu, T. Chen, and G. Chen, “Synchronization analysis of linearly coupled systems described by differential equations with a coupling delay,” Phys. D, Nonlinear Phenomena, vol. 221, no. 2, pp. 118–134, 2006. [28] W. He and J. Cao, “Exponential synchronization of hybrid coupled networks with delayed coupling,” IEEE Trans. Neural Netw., vol. 21, no. 4, pp. 571–583, Apr. 2010. [29] A. Ray, “Output feedback control under randomly varying distributed delays,” J. Guid., Control, Dyn., vol. 17, no. 4, pp. 701–711, 1994. [30] Y. Tang, J.-A. Fang, M. Xia, and D. 
Yu, “Delay-distribution-dependent stability of stochastic discrete-time neural networks with randomly mixed time-varying delays,” Neurocomputing, vol. 72, nos. 16–18, pp. 3830–3838, 2009. [31] Z. Wang, Y. Wang, and Y. Liu, “Global synchronization for discrete-time stochastic complex networks with randomly occurred nonlinearities and mixed time delays,” IEEE Trans. Neural Netw., vol. 21, no. 1, pp. 11–25, Jan. 2010. [32] H. Bao and J. Cao, “Delay-distribution-dependent state estimation for discrete-time stochastic neural networks with random delay,” Neural Netw., vol. 24, no. 1, pp. 19–28, 2011. [33] B. Liu, X. Liu, G. Chen, and H. Wang, “Robust impulsive synchronization of uncertain dynamical networks,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 52, no. 7, pp. 1431–1441, Jul. 2005. [34] T. Yang, Impulsive Systems and Control: Theory and Applications. Commack, NY, USA: Nova, 2001. [35] Z.-H. Guan, Z.-W. Liu, G. Feng, and Y.-W. Wang, “Synchronization of complex dynamical networks with time-varying delays via impulsive distributed control,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 57, no. 8, pp. 2182–2195, Aug. 2010. [36] J. Lu, D. W. C. Ho, J. Cao, and J. Kurths, “Exponential synchronization of linearly coupled neural networks with impulsive disturbances,” IEEE Trans. Neural Netw., vol. 22, no. 2, pp. 329–336, Feb. 2011. [37] A. Khadra, X. Z. Liu, and X. Shen, “Analyzing the robustness of impulsive synchronization coupled by linear delayed impulses,” IEEE Trans. Autom. Control, vol. 54, no. 4, pp. 923–928, Apr. 2009. [38] X. Yang and Z. Yang, “Synchronization of TS fuzzy complex dynamical networks with time-varying impulsive delays and stochastic effects,” Fuzzy Sets Syst., vol. 235, pp. 25–43, Jan. 2014. [39] A. F. Filippov, Differential Equations With Discontinuous Righthand Sides (Mathematics and Its Applications). Boston, MA, USA: Kluwer, 1988. [40] C. W. Wu and L. O. 
Chua, “Synchronization in an array of linearly coupled dynamical systems,” IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 42, no. 8, pp. 430–447, Aug. 1995. [41] K. Gu, V. L. Kharitonov, and J. Chen, Stability of Time-Delay Systems. Boston, MA, USA: Birkhäuser, 2003. [42] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory (Studies in Applied Mathematics), vol. 15. Philadelphia, PA, USA: SIAM, 1994.


Haibo Bao received the B.S. and M.S. degrees from Northeast Normal University, Changchun, China, in 2002 and 2005, respectively, and the Ph.D. degree from Southeast University, Nanjing, China, in 2011, all in mathematics/applied mathematics. She was with the School of Science, Hohai University, Nanjing, from 2005 to 2012. In 2012, she joined the School of Mathematics and Statistics, Southwest University, Chongqing, China. From 2014 to 2015, she was a Research Associate with the Department of Electrical Engineering, Yeungnam University, Gyeongsan, Korea. She is currently an Associate Professor with Southwest University. Her current research interests include neural networks, complex dynamical networks, control theory, and the fractional calculus theory.

Ju H. Park received the Ph.D. degree in electronics and electrical engineering from the Pohang University of Science and Technology (POSTECH), Pohang, Korea, in 1997. He was a Research Associate with ERC-ARC, POSTECH, from 1997 to 2000. In 2000, he joined Yeungnam University, Gyeongsan, Korea, where he is currently the Chuma Chair Professor. From 2006 to 2007, he was a Visiting Professor with the Department of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, USA. He has authored a number of papers in his research areas. His current research interests include robust control and filtering, neural networks, complex networks, multiagent systems, and chaotic systems. Prof. Park serves as an Editor of the International Journal of Control, Automation and Systems. He is a Subject Editor/Associate Editor/Editorial Board Member for several international journals, including IET Control Theory and Applications, Applied Mathematics and Computation, the Journal of The Franklin Institute, Nonlinear Dynamics, the Journal of Applied Mathematics and Computing, and Cogent Engineering.


Jinde Cao (M'07–SM'07) received the B.S. degree from Anhui Normal University, Wuhu, China, in 1986, the M.S. degree from Yunnan University, Kunming, China, in 1989, and the Ph.D. degree from Sichuan University, Chengdu, China, in 1998, all in mathematics/applied mathematics. He was with Yunnan University from 1989 to 2000, where he was a Professor from 1996 to 2000. In 2000, he joined the Department of Mathematics, Southeast University, Nanjing, China. From 2001 to 2002, he was a Post-Doctoral Research Fellow with the Department of Automation and Computer-Aided Engineering, Chinese University of Hong Kong, Hong Kong. From 2006 to 2008, he was a Visiting Research Fellow and Visiting Professor with the School of Information Systems, Computing and Mathematics, Brunel University, London, U.K. In 2014, he was a Visiting Professor with the School of Electrical and Computer Engineering, RMIT University, Melbourne, VIC, Australia. He is currently the Dean of the Department of Mathematics with Southeast University, where he is a Distinguished Professor and Doctoral Advisor, and he is also a Distinguished Adjunct Professor with King Abdulaziz University, Jeddah, Saudi Arabia. He has authored or co-authored over 300 journal papers and five edited books. His current research interests include nonlinear systems, neural networks, complex systems and complex networks, stability theory, and applied mathematics. Prof. Cao was an Associate Editor of the IEEE TRANSACTIONS ON NEURAL NETWORKS, the Journal of The Franklin Institute, and Neurocomputing. He is an Associate Editor of the IEEE TRANSACTIONS ON CYBERNETICS, Differential Equations and Dynamical Systems, Mathematics and Computers in Simulation, and Neural Networks. He is a Reviewer of Mathematical Reviews and Zentralblatt-Math. He is an ISI Highly Cited Researcher in Mathematics and Engineering listed by Thomson Reuters.

I. INTRODUCTION

THE concept of the memristor (a contraction of memory and resistor) was first postulated in [1], and it was predicted as the fourth ideal electrical circuit element. About 40 years later, the first practical memristor device

based on TiO2 thin films was realized by a research team at Hewlett–Packard, who published their findings in [2] and [3]. The memristor is a two-terminal element with various features, such as memory characteristics, nanometer size, and a variable resistance called memristance. Memristors have received worldwide attention because of their potential applications in next-generation computers, powerful brain-like neural computers, and neuromorphic computing systems [4]–[7]. Some research indicates that the memristor exhibits pinched hysteresis, which means that a lag occurs between the application and removal of a field and its subsequent effect, similar to the neurons in the human brain. Because of this feature, many researchers use memristors to design new models of neural networks to emulate the human brain [8]–[10]. In recent years, memristor-based neural networks have gained considerable attention for their wide range of applications in the fields of pattern recognition, signal processing, optimization, and associative memories [11]–[15].
Time delay is widely encountered in the implementation of electronic networks due to the finite switching speed of amplifiers and the finite signal propagation time. Delays can cause instability, bifurcation, and chaotic attractors [16]–[18], so it is necessary to investigate memristor-based neural networks with delay. In general, there are two types of delay: 1) internal delay within a system and 2) coupling delay caused by the exchange of information between systems. Many works have explored the synchronization of coupled neural networks with delay coupling. The synchronization of coupled delayed neural networks with linear constant coupling has been investigated in [19]–[26]. The coupling term was described by D(x_j(t − τ) − x_i(t − τ)), where x_i(t) = (x_{i1}(t), x_{i2}(t), ..., x_{in}(t))^T ∈ R^n denotes the state variable associated with the i-th network at time t, D denotes the inner coupling matrix between networks i and j at time t and time t − τ, and τ is the transmission delay. The same symbols carry the same meanings below. Cao et al. [23], Yu et al. [24], and Cao et al. [25] discussed the global synchronization of coupled delayed neural networks with constant, delayed, and hybrid coupling. Wu et al. [26] investigated exponential synchronization for complex dynamical networks with time-varying coupling delay and sampled data and described the coupling term by D(x_j(t − τ(t)) − x_i(t − τ(t))). Other studies considered that the time delay affects only the variable being transmitted from one system

to another and investigated synchronization for networks with the coupling term D(x_j(t − τ) − x_i(t)) [27], [28]. Few studies have considered the synchronization problem of coupled memristor-based neural networks with a single delay. When investigating memristor-based networks, only deterministic time delay has been considered, and synchronization criteria were derived based only on the variation range of the time delay. The time delay in some neural networks is often stochastic, and its probabilistic characteristics can be obtained easily by statistical methods [29]–[32]. This often occurs in real systems where some values of the delay are very large, but the probability of taking such large values is very small. Under these circumstances, if only the variation range of the time delay is used to derive the criteria, the results may be conservative. Hence, it is necessary to investigate the synchronization of memristor-based neural networks with random delay coupling. The actual signal transmission between subsystems of coupled memristor-based neural networks is inevitably influenced by stochastic perturbations from various uncertainties, which can lead to packet loss or transmitted signals not being fully received. So, stochastic perturbation should be considered when investigating the synchronization of memristor-based neural networks. Studies have pointed out that the state of the networks is often subject to instantaneous disturbances, and the networks experience impulsive effects in the form of abrupt changes at certain instants that may be caused by switching phenomena, frequency changes, or other sudden noise [33]–[36]. Impulsive control is effective for helping systems achieve synchronization. Lu et al. [36] investigated the synchronization of linearly coupled neural networks with impulsive disturbances, with no delayed impulsive coupling in the models.
Considering the influence of the time delay in the impulsive coupling, synchronization has also been investigated using constant-delay impulsive controllers [37] and impulsive controllers with time-varying delay [38]. To the best of our knowledge, few studies have considered the synchronization of coupled memristor-based neural networks with stochastic disturbances and time-varying impulsive effects, which remains an open challenge. This paper investigates the exponential synchronization of coupled stochastic memristor-based neural networks. The main contributions can be summarized as follows. 1) A new coupled memristor-based neural network with one single probabilistic time-varying delay coupling is proposed, which includes delayed memristor-based neural networks, delayed memristor-based cellular neural networks, and well-known chaotic systems, such as the memristor-based Chua's system. 2) Both stochastic disturbances and time-varying impulsive effects are considered. 3) Based on Lyapunov functions, the Halanay inequality, and stochastic analysis techniques, sufficient conditions are given under which the coupled memristor-based neural network is exponentially synchronized in the mean square.
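The delayed impulsive update discussed above, in which the state jump at an impulse instant t_k depends on both the pre-jump state and a delayed state, can be sketched as follows. This is an illustrative sketch only, with made-up impulse strengths and states, not the authors' implementation:

```python
import numpy as np

def apply_delayed_impulse(x_now, x_delayed, theta, mu):
    """Delayed impulsive jump: x(tk+) = (1 + theta) * x(tk-) + mu * x(tk - tau4),
    i.e., the post-jump state depends on the pre-jump state (strength theta)
    and on a state sampled tau4 earlier (strength mu)."""
    return (1.0 + theta) * x_now + mu * x_delayed

# Hypothetical impulse strengths and states
theta, mu = -0.5, 0.1
x_now = np.array([2.0, -1.0])      # state just before the impulse
x_delayed = np.array([1.0, 1.0])   # state tau4 earlier

x_after = apply_delayed_impulse(x_now, x_delayed, theta, mu)
print(x_after)   # [ 1.1 -0.4]
```

A jump with |1 + theta| < 1, as in this sketch, contracts the state at each impulse instant, which is the mechanism by which impulsive control can assist synchronization.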

The criteria can be applied to chaotic memristor-based systems with or without delay. Numerical simulation examples are presented to demonstrate the effectiveness and the applicability of the main results.
The remainder of this paper is organized as follows. In Section II, the problem formulation is presented for the exponential synchronization of stochastic coupled memristor-based neural networks with probabilistic delay coupling and time-varying impulsive delay. Lemmas and notations used throughout this paper are also given. In Section III, the derivation of two theorems for exponential synchronization based on Lyapunov functions and linear matrix inequalities (LMIs) is presented. In Section IV, two numerical examples are given to validate the main results. Finally, the conclusion is given in Section V.

II. PROBLEM STATEMENT

In this section, we introduce mathematical models of coupled memristor-based neural networks and present notations, definitions, and lemmas used in this paper.
Notation: N+ denotes the set of positive integers. R+ and R^n denote the set of nonnegative real numbers and the n-dimensional Euclidean space, respectively. The superscript T denotes matrix or vector transposition. I_n is the n × n identity matrix. ‖·‖ is the Euclidean norm in R^n. For a matrix A, λ_max(A) and λ_min(A) are the largest and smallest eigenvalues of A, respectively. Tr(A) denotes the trace of matrix A. (Ω, F, {F_t}_{t≥0}, P) is a complete probability space with a filtration {F_t}_{t≥0} satisfying the usual conditions (i.e., the filtration contains all P-null sets and is right continuous). L^p_{F_0}([−τ, 0]; R^n) denotes the family of all F_0-measurable C([−τ, 0]; R^n)-valued random variables ξ = {ξ(s) : −τ ≤ s ≤ 0} such that sup_{−τ≤s≤0} E‖ξ(s)‖^p < ∞, where E{·} is the mathematical expectation operator with respect to the given probability measure P. In some cases, the arguments of a function or a matrix are omitted in the analysis when no confusion can arise.
Consider the isolated memristor-based neural networks described by the following state equation:

dx_i(t) = [Cx_i(t) + A(x_i(t))f(x_i(t)) + B(x_i(t))f(x_i(t − τ1(t))) + J(t)]dt
  + σ(t, x_i(t), x_i(t − τ1(t)))dw(t)   (1)

where x_i(t) = (x_{i1}(t), x_{i2}(t), ..., x_{in}(t))^T ∈ R^n for i = 1, 2, ..., N denotes the state variable associated with the i-th node at time t; C = −diag{c_1, c_2, ..., c_n} < 0 with c_k > 0, k = 1, 2, ..., n; A(x_i(t)) = [a_kj(x_ij(t))]_{n×n} and B(x_i(t)) = [b_kj(x_ij(t))]_{n×n} (k = 1, 2, ..., n) are the connection memristive weight matrix and the delayed connection memristive weight matrix, respectively. a_kj(x_ij(t)) and b_kj(x_ij(t)) are defined as follows:

  a_kj(x_ij(t)) = { â_kj,  |x_ij(t)| > T_j
                  { ǎ_kj,  |x_ij(t)| < T_j

with a_kj(±T_j) = â_kj or ǎ_kj for k, j = 1, 2, ..., n, where the switching jumps T_j > 0 and the weights â_kj and ǎ_kj are all constants, and

  b_kj(x_ij(t)) = { b̂_kj,  |x_ij(t)| > T_j
                  { b̌_kj,  |x_ij(t)| < T_j

with b_kj(±T_j) = b̂_kj or b̌_kj for k, j = 1, 2, ..., n, where b̂_kj and b̌_kj are all constants; f(x_i(t)) = (f_1(x_{i1}(t), ..., x_{in}(t)), ..., f_n(x_{i1}(t), ..., x_{in}(t)))^T is the neuron activation function; J(t) = (J_1(t), ..., J_n(t))^T ∈ R^n is an external input vector; τ1(t) is the time-varying delay, 0 ≤ τ1(t) ≤ τ1, and τ1 is a positive constant; w(t) = (w_1(t), w_2(t), ..., w_n(t))^T is an n-dimensional Wiener process defined on (Ω, F, {F_t}_{t≥0}, P), and w_i(t) is independent of w_j(t) for i ≠ j; σ : R+ × R^n × R^n → R^{n×n} is the noise function matrix.
Since a_kj(x_ij(t)) and b_kj(x_ij(t)) are discontinuous, the solutions of all systems considered in this paper are handled in Filippov's sense. Through the theories of differential inclusions and set-valued maps, it follows from (1) that

dx_i(t) ∈ [Cx_i(t) + co[A(x_i(t))]f(x_i(t)) + co[B(x_i(t))]f(x_i(t − τ1(t))) + J(t)]dt
  + σ(t, x_i(t), x_i(t − τ1(t)))dw(t)   (2)

where co[A(x_i(t))] = [co[a_kj(x_ij(t))]]_{n×n} and co[B(x_i(t))] = [co[b_kj(x_ij(t))]]_{n×n} with

  co[a_kj(x_ij(t))] = { â_kj,             |x_ij(t)| > T_j
                      { co{â_kj, ǎ_kj},  |x_ij(t)| = T_j
                      { ǎ_kj,             |x_ij(t)| < T_j

  co[b_kj(x_ij(t))] = { b̂_kj,             |x_ij(t)| > T_j
                      { co{b̂_kj, b̌_kj},  |x_ij(t)| = T_j
                      { b̌_kj,             |x_ij(t)| < T_j

where co{â_kj, ǎ_kj} = [min{â_kj, ǎ_kj}, max{â_kj, ǎ_kj}] and co{b̂_kj, b̌_kj} = [min{b̂_kj, b̌_kj}, max{b̂_kj, b̌_kj}]; or, there exist π_kj(x_ij(t)) ∈ co[a_kj(x_ij(t))] and δ_kj(x_ij(t)) ∈ co[b_kj(x_ij(t))] such that

dx_i(t) = [Cx_i(t) + Π(x_i(t))f(x_i(t)) + Δ(x_i(t))f(x_i(t − τ1(t))) + J(t)]dt
  + σ(t, x_i(t), x_i(t − τ1(t)))dw(t)   (3)

where Π(x_i(t)) = [π_kj(x_ij(t))]_{n×n} and Δ(x_i(t)) = [δ_kj(x_ij(t))]_{n×n}.
Definition 1 [39]: A function (in Filippov's sense) x_i(t) = (x_{i1}(t), x_{i2}(t), ..., x_{in}(t))^T is a solution of system (1) with initial condition x_i(s) = φ_i(s), s ∈ [−τ, 0], if x_i(t) is absolutely continuous on any compact interval of [−τ, +∞) and satisfies the differential inclusion (2) or (3).
Assumption 1: The activation function f satisfies the Lipschitz condition, that is, there exist positive constants l_i such that ‖f_i(x) − f_i(y)‖ ≤ l_i‖x − y‖ for all x, y ∈ R^n and i = 1, 2, ..., n; L = diag{l_1, l_2, ..., l_n}.
Assumption 2: There exist nonnegative constants ρ1 and ρ2 such that Tr{[σ(t, x_1, y_1) − σ(t, x_2, y_2)]^T[σ(t, x_1, y_1) − σ(t, x_2, y_2)]} ≤ ρ1‖x_1 − x_2‖^2 + ρ2‖y_1 − y_2‖^2 holds for any x_1, x_2, y_1, y_2 ∈ R^n and t ∈ R+.
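The state-dependent memristive weight a_kj(x_ij(t)) described above can be sketched as a simple switching function. This is an illustrative sketch with hypothetical numerical values (jump T_j = 1 and two weight levels), not the authors' code; on the switching surface |x_ij| = T_j it returns the bounds of the Filippov convex hull co{â_kj, ǎ_kj}:

```python
def memristive_weight(x_ij, T_j, w_hat, w_check):
    """State-dependent weight a_kj(x_ij): switches between two constant
    levels depending on |x_ij| relative to the switching jump T_j.
    At |x_ij| == T_j any value in co{w_hat, w_check} is admissible in
    Filippov's sense; here we return the interval bounds (min, max)."""
    if abs(x_ij) > T_j:
        return w_hat
    elif abs(x_ij) < T_j:
        return w_check
    # on the switching surface: return the convex-hull interval bounds
    return (min(w_hat, w_check), max(w_hat, w_check))

# Hypothetical values: T_j = 1.0, a_hat = 2.0, a_check = -0.5
print(memristive_weight(1.5, 1.0, 2.0, -0.5))   # above the jump -> 2.0
print(memristive_weight(0.3, 1.0, 2.0, -0.5))   # below the jump -> -0.5
print(memristive_weight(1.0, 1.0, 2.0, -0.5))   # on the surface -> (-0.5, 2.0)
```

The discontinuity at |x_ij| = T_j is exactly why the paper treats solutions via differential inclusions rather than classical ODE solutions.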

Consider a network of N nodes, and let x_i(t) be the i-th node. The uncoupled dynamics of each node is defined as (1). Let G be the N × N configuration matrix of the network such that G_ij > 0 if there is an edge from node j to node i and G_ij = 0 otherwise. Thus, the dynamics of the i-th node is given by

dx_i(t) ∈ [Cx_i(t) + co[A(x_i(t))]f(x_i(t)) + co[B(x_i(t))]f(x_i(t − τ1(t))) + J(t)
  + Σ_{j=1}^N G_ij D x_j(t) + Σ_{j=1}^N G_ij D_τ x_j(t − τ(t))
  − G_ii D_τ(x_i(t − τ(t)) − x_i(t))]dt + σ(t, x_i(t), x_i(t − τ1(t)))dw(t)   (4)

or

dx_i(t) = [Cx_i(t) + Π(x_i(t))f(x_i(t)) + Δ(x_i(t))f(x_i(t − τ1(t))) + J(t)
  + Σ_{j=1}^N G_ij D x_j(t) + Σ_{j=1}^N G_ij D_τ x_j(t − τ(t))
  − G_ii D_τ(x_i(t − τ(t)) − x_i(t))]dt + σ(t, x_i(t), x_i(t − τ1(t)))dw(t)   (5)

for i = 1, 2, ..., N, where D = [d_kj]_{n×n} and D_τ = [d^τ_kj]_{n×n} denote the inner coupling matrices between the connected networks i and j at time t and t − τ(t) for all 1 ≤ i ≤ j ≤ N. The configuration matrix G is irreducible and satisfies G_ij ≥ 0 for i ≠ j and G_ii = −Σ_{j=1, j≠i}^N G_ij. The time-varying delay τ(t) satisfies the following condition.
Assumption 3: There exist constants τ2 and τ3 satisfying 0 ≤ τ2 ≤ τ3 such that either τ(t) ∈ [0, τ2] or τ(t) ∈ (τ2, τ3]. Furthermore, the probability distribution of τ(t) taking values in [0, τ2] and (τ2, τ3] is assumed as P{τ(t) ∈ [0, τ2]} = β0 and P{τ(t) ∈ (τ2, τ3]} = 1 − β0, where 0 ≤ β0 ≤ 1. Define the stochastic variable

  β(t) = { 1,  τ(t) ∈ [0, τ2]
         { 0,  τ(t) ∈ (τ2, τ3]

where β(t) is independent of w_s(t), s = 1, 2, ..., n. Therefore

  P{β(t) = 1} = P{τ(t) ∈ [0, τ2]} = E{β(t)} = β0
  P{β(t) = 0} = P{τ(t) ∈ (τ2, τ3]} = E{1 − β(t)} = 1 − β0.

Remark 1: The binary stochastic variable was first introduced in [29] and was successfully used in [30]–[32]. Under Assumption 3, β0 depends on the values of τ2 and τ3. Now, we introduce two time-varying delays τ2(t) and τ3(t) such that

  τ(t) = { τ2(t),  τ(t) ∈ [0, τ2]
         { τ3(t),  τ(t) ∈ (τ2, τ3].
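As a quick numerical illustration of Assumption 3 (a sketch with made-up delay bounds, not part of the authors' model), the Bernoulli indicator β(t) can be formed from sampled delay values, and its empirical mean recovers the probability β0:

```python
import random

def bernoulli_indicator(tau, tau2):
    """beta(t) = 1 if the sampled delay lies in [0, tau2], else 0."""
    return 1 if tau <= tau2 else 0

random.seed(0)
tau2, tau3 = 0.2, 0.5          # hypothetical delay bounds, 0 <= tau2 <= tau3
beta0 = 0.8                    # hypothetical P{tau(t) in [0, tau2]}

# Sample delays: with probability beta0 from [0, tau2], else from (tau2, tau3]
samples = [random.uniform(0.0, tau2) if random.random() < beta0
           else random.uniform(tau2, tau3)
           for _ in range(100_000)]

# Empirical mean of beta(t) approximates E{beta(t)} = beta0
estimate = sum(bernoulli_indicator(t, tau2) for t in samples) / len(samples)
print(round(estimate, 2))
```

This mirrors the point made in the paper: the distribution of the delay over the two intervals, not merely its overall range [0, τ3], is the statistical information exploited by the delay-distribution-dependent criteria.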

By using the new functions τ2(t) and τ3(t) and the definition of β(t), for i = 1, 2, ..., N, the systems in (4) and (5) can be written as

dx_i(t) ∈ [Cx_i(t) + co[A(x_i(t))]f(x_i(t)) + co[B(x_i(t))]f(x_i(t − τ1(t))) + J(t)
  + Σ_{j=1}^N G_ij D x_j(t) + β(t) Σ_{j=1}^N G_ij D_τ x_j(t − τ2(t))
  + (1 − β(t)) Σ_{j=1}^N G_ij D_τ x_j(t − τ3(t))
  − β(t) G_ii D_τ(x_i(t − τ2(t)) − x_i(t))
  − (1 − β(t)) G_ii D_τ(x_i(t − τ3(t)) − x_i(t))]dt
  + σ(t, x_i(t), x_i(t − τ1(t)))dw(t)   (6)

or

dx_i(t) = [Cx_i(t) + Π(x_i(t))f(x_i(t)) + Δ(x_i(t))f(x_i(t − τ1(t))) + J(t)
  + Σ_{j=1}^N G_ij D x_j(t) + β(t) Σ_{j=1}^N G_ij D_τ x_j(t − τ2(t))
  + (1 − β(t)) Σ_{j=1}^N G_ij D_τ x_j(t − τ3(t))
  − β(t) G_ii D_τ(x_i(t − τ2(t)) − x_i(t))
  − (1 − β(t)) G_ii D_τ(x_i(t − τ3(t)) − x_i(t))]dt
  + σ(t, x_i(t), x_i(t − τ1(t)))dw(t).   (7)

By introducing the impulsive effects into systems (6) and (7), one can obtain the following coupled memristor-based neural networks with probabilistic delay coupling and time-varying impulsive delay effects, for i = 1, 2, ..., N:

dx_i(t) ∈ [Cx_i(t) + co[A(x_i(t))]f(x_i(t)) + co[B(x_i(t))]f(x_i(t − τ1(t))) + J(t)
  + Σ_{j=1}^N G_ij D x_j(t) + β(t) Σ_{j=1}^N G_ij D_τ x_j(t − τ2(t))
  + (1 − β(t)) Σ_{j=1}^N G_ij D_τ x_j(t − τ3(t))
  − β(t) G_ii D_τ(x_i(t − τ2(t)) − x_i(t))
  − (1 − β(t)) G_ii D_τ(x_i(t − τ3(t)) − x_i(t))]dt
  + σ(t, x_i(t), x_i(t − τ1(t)))dw(t),   t ≠ t_k
x_i(t_k^+) − x_i(t_k^−) = θ x_i(t_k^−) + μ x_i(t_k − τ4(t_k)),   t = t_k, k ∈ N+   (8)

or

dx_i(t) = [Cx_i(t) + Π(x_i(t))f(x_i(t)) + Δ(x_i(t))f(x_i(t − τ1(t))) + J(t)
  + Σ_{j=1}^N G_ij D x_j(t) + β(t) Σ_{j=1}^N G_ij D_τ x_j(t − τ2(t))
  + (1 − β(t)) Σ_{j=1}^N G_ij D_τ x_j(t − τ3(t))
  − β(t) G_ii D_τ(x_i(t − τ2(t)) − x_i(t))
  − (1 − β(t)) G_ii D_τ(x_i(t − τ3(t)) − x_i(t))]dt
  + σ(t, x_i(t), x_i(t − τ1(t)))dw(t),   t ≠ t_k
x_i(t_k^+) − x_i(t_k^−) = θ x_i(t_k^−) + μ x_i(t_k − τ4(t_k)),   t = t_k, k ∈ N+   (9)

where μ and θ are the impulsive strengths with and without delay, respectively, 0 ≤ τ4(t) ≤ τ4, and the impulsive sequence {t_k} satisfies 0 = t_0 < t_1 < t_2 < ... < t_{k−1} < t_k < ... with lim_{k→+∞} t_k = +∞, x(t_k) = x(t_k^+) = lim_{t→t_k^+} x(t), x(t_k^−) = lim_{t→t_k^−} x(t), and Δx_i(t_k) = x_i(t_k) − x_i(t_k^−).
The initial conditions of (1) are given by x_i(s) = φ_i(s), s ∈ [t_0 − τ, t_0], i = 1, 2, ..., N, where φ_i(·) = [φ_{i1}(·), φ_{i2}(·), ..., φ_{in}(·)]^T ∈ C([t_0 − τ, t_0], R^n) and τ = max{τ1, τ2, τ3, τ4}.
To discuss synchronization with one probabilistic delay coupling and impulsive effects, we define the set S = {x = (x_1^T(t), x_2^T(t), ..., x_N^T(t))^T : x_i(s) ∈ C([t_0 − τ, t_0], R^n), x_i(t) = x_j(t), i, j = 1, 2, ..., N} as the synchronization manifold for network (8). In this case, the goal is to make system (8) reach synchronization, i.e., x_1(t) = x_2(t) = ... = x_N(t) = s(t), with the following synchronized state equation:

ds(t) ∈ [Cs(t) + co[A(s(t))]f(s(t)) + co[B(s(t))]f(s(t − τ1(t))) + J(t)
  − β(t) G_ii D_τ(s(t − τ2(t)) − s(t))
  − (1 − β(t)) G_ii D_τ(s(t − τ3(t)) − s(t))]dt
  + σ(t, s(t), s(t − τ1(t)))dw(t),   t ≠ t_k
s(t_k^+) − s(t_k^−) = θ s(t_k^−) + μ s(t_k − τ4(t_k)),   t = t_k, k ∈ N+   (10)

where i = 1, 2, ..., N.
Remark 2: When the array is synchronized, the nodes cannot be decoupled because of the single probabilistic delay in the coupling. The dynamics of the array reduces to system (10), not system (1). The probabilistic delay coupling plays an important role in the new collective behavior of the array.
For i = 1, 2, ..., N, we define the following: G_ii = −a, x(t) = (x_1^T(t), ..., x_N^T(t))^T, f(x(t)) =

(f^T(x_1(t)), . . . , f^T(x_N(t)))^T, J(t) = (J^T(t), . . . , J^T(t))^T, co[A(x(t))] = diag{co[A(x_1(t))], . . . , co[A(x_N(t))]}, co[B(x(t))] = diag{co[B(x_1(t))], . . . , co[B(x_N(t))]}, Ã(x(t)) = diag{Ã(x_1(t)), . . . , Ã(x_N(t))}, B̃(x(t)) = diag{B̃(x_1(t)), . . . , B̃(x_N(t))}, and Σ(t, x(t), x(t − τ1(t)))dW(t) = ((σ(t, x_1(t), x_1(t − τ1(t)))dw(t))^T, . . . , (σ(t, x_N(t), x_N(t − τ1(t)))dw(t))^T)^T. Then, systems (8) and (9) can be written as

dx(t) ∈ [Cx(t) + co[A(x(t))]f(x(t)) + co[B(x(t))]f(x(t − τ1(t))) + J(t) + Gx(t)
    + β(t)Gτ x(t − τ2(t)) + (1 − β(t))Gτ x(t − τ3(t))
    + aβ(t)Dτ (x(t − τ2(t)) − x(t))
    + a(1 − β(t))Dτ (x(t − τ3(t)) − x(t))]dt
    + Σ(t, x(t), x(t − τ1(t)))dW(t),    t ≠ t_k
x(t_k^+) − x(t_k^−) = θ x(t_k^−) + μ x(t_k − τ4(t_k)),    t = t_k, k ∈ N+    (11)

or

dx(t) = [Cx(t) + Ã(x(t))f(x(t)) + B̃(x(t))f(x(t − τ1(t))) + J(t) + Gx(t)
    + β(t)Gτ x(t − τ2(t)) + (1 − β(t))Gτ x(t − τ3(t))
    + aβ(t)Dτ (x(t − τ2(t)) − x(t))
    + a(1 − β(t))Dτ (x(t − τ3(t)) − x(t))]dt
    + Σ(t, x(t), x(t − τ1(t)))dW(t),    t ≠ t_k
x(t_k^+) − x(t_k^−) = θ x(t_k^−) + μ x(t_k − τ4(t_k)),    t = t_k, k ∈ N+    (12)

where C = I_N ⊗ C, G = G ⊗ D, Dτ = I_N ⊗ Dτ, and Gτ = G ⊗ Dτ, in which ⊗ denotes the Kronecker product of two matrices and I_N is the N-dimensional identity matrix.

Definition 2: The coupled memristor-based neural networks with impulsive effects are said to be exponentially synchronized in the mean square if there exist two constants λ > 0 and β > 0 such that

E‖x_i(t) − x_j(t)‖² ≤ sup_{−τ≤s≤0} E‖Mφ(s)‖² βe^{−λt}

holds for all t ≥ 0, i, j = 1, 2, . . . , N, where M = M ⊗ I_n, M is defined in Lemma 1, and φ(s) = (φ_1^T(s), φ_2^T(s), . . . , φ_N^T(s))^T.

Definition 3 [40]: T(R, b) = {matrices with entries in R such that the sum of the entries in each row is equal to b for some b ∈ R}.

To obtain our main result, we need Lemmas 1–6.

Lemma 1 [40]: Let U be an N × N matrix in T(R, b). Then, the (N − 1) × (N − 1) matrix H defined as H = MUK satisfies MU = HM, where M is the following (N − 1) × N matrix:

M = [1  −1  0   · · ·  0
     0   1  −1  · · ·  0
              . . .
     0  · · ·   0  1  −1]_{(N−1)×N}

and K is the following N × (N − 1) matrix:

K = [1  1  · · ·  1
     0  1  · · ·  1
           . . .
     0  0  · · ·  1
     0  0  · · ·  0]_{N×(N−1)}

and 1 is the multiplicative identity of R. Moreover, the matrix H can be written explicitly as H(i, j) = Σ_{m=1}^{j} [U(i, m) − U(i+1, m)].

Lemma 2 [41]: From the definition of the Kronecker product, the following properties can be easily obtained:
1) (αA) ⊗ B = A ⊗ (αB);
2) (A + B) ⊗ C = A ⊗ C + B ⊗ C;
3) (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).

Lemma 3 (Schur Complement [42]): For a given symmetric matrix S = [S11 S12; S12^T S22] with S11 = S11^T and S22 = S22^T, the condition S < 0 is equivalent to S22 < 0 and S11 − S12 S22^{−1} S12^T < 0.

Lemma 4: For matrices Σ1, Σ2, and Σ3 with Σ3 > 0, and for any vectors x and y with appropriate dimensions, x^T Σ1^T Σ2 y ≤ x^T Σ1^T Σ3 Σ1 x + y^T Σ2^T Σ3^{−1} Σ2 y.

Lemma 5 [38]: Consider the following impulsive differential inequalities:

D^+ v(t) ≤ a v(t) + b1 [v(t)]_{τ1} + b2 [v(t)]_{τ2} + · · · + bm [v(t)]_{τm},    t ≠ t_k, t ≥ t_0
v(t_k^+) ≤ p_k v(t_k^−) + q_k^1 [v(t_k^−)]_{τ1} + q_k^2 [v(t_k^−)]_{τ2} + · · · + q_k^m [v(t_k^−)]_{τm},    k ∈ N+
v(t) = φ(t),    t ∈ [t_0 − τ, t_0]

where a, b_i, p_k, q_k^i, and τ_i are constants, b_i ≥ 0, p_k ≥ 0, q_k^i ≥ 0, τ_i ≥ 0, i = 1, 2, . . . , m, v(t) ≥ 0, [v(t)]_{τi} = sup_{t−τi ≤ s ≤ t} v(s), and [v(t_k^−)]_{τi} = sup_{t_k − τi(t_k) ≤ s < t_k} v(s). If the conditions given in [38] hold, then there exist constants β > 1 and λ > 0 such that v(t) ≤ ‖φ‖_τ βe^{−λ(t−t_0)}, t ≥ t_0, where ‖φ‖_τ = sup_{t_0−τ ≤ s ≤ t_0} ‖φ(s)‖ and τ = max{τ_i, i = 1, 2, . . . , m}.

Lemma 6 [38]: Let x(t) = (x_1^T(t), x_2^T(t), . . . , x_N^T(t))^T. If E‖Mx(t)‖² ≤ γe^{−λt}, where γ > 0 and λ > 0, then E‖x_i(t) − x_j(t)‖² ≤ γe^{−λt} for all i, j = 1, 2, . . . , N.

III. MAIN RESULTS

In this section, the exponential synchronization criteria of coupled stochastic memristor-based neural networks with one probabilistic delay coupling and impulsive effects are derived.
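Lemmas 1 and 2 are easy to sanity-check numerically. The following sketch (illustrative sizes and a zero-row-sum coupling matrix chosen only for the check, not taken from the paper) verifies MU = HM and the mixed-product property of the Kronecker product:

```python
import numpy as np

N = 4
# M: (N-1) x N difference operator from Lemma 1.
M = np.zeros((N - 1, N))
for i in range(N - 1):
    M[i, i], M[i, i + 1] = 1.0, -1.0

# K: N x (N-1) matrix of Lemma 1 (ones on and above the diagonal, last row zero).
K = np.zeros((N, N - 1))
for j in range(N - 1):
    K[: j + 1, j] = 1.0

# U in T(R, b) with b = 0 (each row sums to zero), e.g. a diffusive coupling matrix.
U = np.array([[-2.0, 1.0, 1.0, 0.0],
              [1.0, -3.0, 1.0, 1.0],
              [0.0, 2.0, -2.0, 0.0],
              [1.0, 1.0, 1.0, -3.0]])
H = M @ U @ K
assert np.allclose(M @ U, H @ M)            # Lemma 1: MU = HM

# Lemma 2, property 3: (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).
rng = np.random.default_rng(1)
A, B, C, D = (rng.standard_normal((2, 2)) for _ in range(4))
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))
print("Lemma 1 and Lemma 2 checks passed")
```

The check of Lemma 1 fails for a matrix U whose row sums are not all equal, which is why Definition 3 restricts U to T(R, b).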


Before stating the main results, some notations are given. Let C1 = I_{N−1} ⊗ C, D1τ = I_{N−1} ⊗ Dτ, H = H ⊗ D, Hτ = H ⊗ Dτ, Ā1 = I_{N−1} ⊗ Ā, B̄1 = I_{N−1} ⊗ B̄, L1 = I_{N−1} ⊗ L, and the block matrices I_N ⊗ Ā and I_N ⊗ B̄, where H = MGK, M and K are defined in Lemma 1, Ā = (a^u_kj)_{n×n}, a^u_kj = max{|â_kj|, |ǎ_kj|}, B̄ = (b^u_kj)_{n×n}, and b^u_kj = max{|b̂_kj|, |b̌_kj|} for k, j = 1, 2, . . . , n.

Theorem 1: Suppose that Assumptions 1–3 hold. Then, the network (11) is exponentially synchronized in the mean square if there exist positive definite matrices P = [p_ij]_{(N−1)n×(N−1)n}, Q = [q_ij]_{(N−1)n×(N−1)n}, R = [r_ij]_{(N−1)n×(N−1)n}, and a positive constant α such that

Ω = [Ξ11  0    Ξ13   Ξ14   PĀ1L1  PB̄1L1
     ∗    Ξ22  0     0     0      0
     ∗    ∗    −ξ3P  0     0      0
     ∗    ∗    ∗     −ξ4P  0      0
     ∗    ∗    ∗     ∗     −Q     0
     ∗    ∗    ∗     ∗     ∗      −R] < 0    (13)

P < αI    (14)
λ1 + λ2 < 1    (15)
ξ1 + (ξ2 + ξ3 + ξ4)/(λ1 + λ2) + ln(λ1 + λ2)/ρ < 0    (16)

where ρ denotes the length of the impulsive interval, ρ = sup_{k∈N+}{t_k − t_{k−1}}.

When the stochastic disturbance is not considered, the networks (11) and (12) reduce to the deterministic network (17). In this case, the network (17) is said to be exponentially synchronized if there exist two constants λ > 0 and β > 0 such that ‖x_i(t) − x_j(t)‖² ≤ sup_{−τ≤s≤0} ‖Mφ(s)‖² βe^{−λt} holds for all t ≥ 0, i, j = 1, 2, . . . , N, where M is defined in Lemma 1 and φ(s) = (φ_1^T(s), φ_2^T(s), . . . , φ_N^T(s))^T.

Theorem 2: Under Assumption 1, the network (17) is exponentially synchronized if there exist positive definite matrices P = [p_ij]_{(N−1)n×(N−1)n}, Q = [q_ij]_{(N−1)n×(N−1)n}, and R = [r_ij]_{(N−1)n×(N−1)n} such that

Ω̃ = [Ξ11  0     Ξ13   PĀ1L1  PB̄1L1
      ∗    −ξ2P  0     0      0
      ∗    ∗     −ξ3P  0      0
      ∗    ∗     ∗     −Q     0
      ∗    ∗     ∗     ∗      −R] < 0    (18)

λ1 + λ2 < 1    (19)
ξ1 + (ξ2 + ξ3)/(λ1 + λ2) + ln(λ1 + λ2)/ρ < 0.    (20)

IV. NUMERICAL EXAMPLES

Example 1: Consider the coupled memristor-based network whose node dynamics are given by the memristive system (21), with

b11(x) = 1.5 for |x| ≤ 1 and −1.5 for |x| > 1,
b12(x) = 0.1 for |x| ≤ 1 and −0.1 for |x| > 1,
b21(x) = 0.2 for |x| ≤ 1 and −0.2 for |x| > 1,
b22(x) = 4 for |x| ≤ 1 and −4 for |x| > 1,

and the noise intensity

σ(t, x_i(t), x_i(t − τ1(t))) = 0.1 [x_i1(t)  0
                                   0  x_i2(t − τ1(t))].

Let θ = −1, μ = 0.4, ξ1 = 3, ξ2 = 3, ξ3 = 1, ξ4 = 1, and t_{k+1} − t_k = 0.05. We used the MATLAB LMI Control Toolbox to solve the LMIs in (13) and (14) and obtained a feasible solution with tmin = −0.0095 and α = 1.1322 (a negative tmin indicates that the LMIs have a feasible solution set). For brevity, we omit the values of P, Q, and R. Through simple computation, we get λ1 + λ2 = 0.16 < 1 and ξ1 + (ξ2 + ξ3 + ξ4)/0.16 + (ln(0.16))/0.05 = −2.4016 < 0. According to Theorem 1, it can be concluded that system (11) is exponentially synchronized in the mean square, which is verified in Figs. 1–4. Fig. 1 shows that system (21) has a chaotic attractor. Fig. 2 shows the synchronization state of system (10). Fig. 3 shows the synchronization errors x_i1(t) − x_11(t) and x_i2(t) − x_12(t), which indicate that x_i1(t) − x_11(t) → 0 and x_i2(t) − x_12(t) → 0 as t → +∞. The total error of (11) is defined as err(t) = Σ_{i=1}^{2} (Σ_{j=1}^{4} |x_{1i} − x_{ji}|²)^{1/2}. Fig. 4 shows the total synchronization error, which clearly tends to zero. This also confirms that the network (11) is exponentially synchronized.

Example 2: Consider a regular nearest-neighbor network with ten nodes (i = 1, 2, . . . , 10). The dynamics of each node is given by (21), and the configuration matrix is

G = 4 × [−2   1   0  · · ·  1
          1  −2   1  · · ·  0
                 . . .
          0   0   0  · · ·  1
          1   0  · · ·  1  −2]_{10×10}.
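The scalar conditions of Theorem 1 can be verified directly from Example 1's data. A minimal check, reproducing the −2.4016 computed above (with ρ = t_{k+1} − t_k = 0.05):

```python
import math

# Example 1 data: lambda1 + lambda2 = 0.16, impulsive interval rho = 0.05.
xi1, xi2, xi3, xi4 = 3.0, 3.0, 1.0, 1.0
lam = 0.16   # lambda1 + lambda2
rho = 0.05   # t_{k+1} - t_k

assert lam < 1                                              # condition (15)
value = xi1 + (xi2 + xi3 + xi4) / lam + math.log(lam) / rho
print(round(value, 4))                                      # -2.4016
assert value < 0                                            # condition (16)
```

Only the LMIs (13) and (14) require a solver; the remaining conditions are elementary scalar inequalities of this form.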


The remaining parameters are taken as

D = 5I,  Dτ = [0  0
               0  1],  Ā = [2  0.1
                            5  4.5],  B̄ = [1.5  0.1
                                           0.2  4].

Let θ = −1, μ = 0.4, ξ1 = 4, ξ2 = 2.1, ξ3 = 2, ξ4 = 1, and t_{k+1} − t_k = 0.05. By solving the LMIs in (13) and (14), we obtain a feasible solution with tmin = −0.0029 and α = 0.5320. By simple computation, we obtain λ1 + λ2 = 0.16 < 1 and ξ1 + (ξ2 + ξ3 + ξ4)/0.16 + (ln(0.16))/0.05 = −0.4016 < 0. According to Theorem 1, it can be concluded that system (11) is exponentially synchronized in the mean square, which is verified in Figs. 5–8. Fig. 5 shows the synchronization state of Example 2. Figs. 6 and 7 show the synchronization errors x_i1(t) − x_11(t) and x_i2(t) − x_12(t), respectively, which indicate that x_i1(t) − x_11(t) → 0 and x_i2(t) − x_12(t) → 0 as t → +∞. The total error of (11) is defined as err(t) = Σ_{i=1}^{2} (Σ_{j=1}^{10} |x_{1i} − x_{ji}|²)^{1/2}. Fig. 8 shows the total synchronization error of (11) in Example 2, which tends to zero. This also confirms that the network (11) is exponentially synchronized.

V. CONCLUSION

We have investigated the exponential synchronization of a coupled memristor-based neural network involving a single probabilistic time-varying delay coupling, stochastic disturbances, and a time-varying impulsive delay. Based on the Lyapunov function method, the LMI framework, and an extended Halanay differential inequality, sufficient conditions were derived to guarantee that the coupled stochastic memristor-based neural networks achieve exponential synchronization in the mean square. Two numerical examples were provided to show the usefulness and effectiveness of the proposed design methods.

APPENDIX I

Proof of Theorem 1:
Step 1 (Calculating LV(t)): Let M = M ⊗ I_n, and consider a Lyapunov function of the form V(t) = x^T(t)M^T PMx(t). Applying the weak infinitesimal operator L to V(t) along the solution of (12) for t ≠ t_k, k ∈ N+, leads to

LV(t) = 2x^T(t)M^T PM[Cx(t) + Ã(x(t))f(x(t)) + B̃(x(t))f(x(t − τ1(t))) + J(t) + Gx(t)
        + β0 Gτ x(t − τ2(t)) + (1 − β0)Gτ x(t − τ3(t))
        + aβ0 Dτ (x(t − τ2(t)) − x(t)) + a(1 − β0)Dτ (x(t − τ3(t)) − x(t))]
    + (Σ(t, x(t), x(t − τ1(t)))dW(t))^T M^T PM Σ(t, x(t), x(t − τ1(t)))dW(t)
    + 2x^T(t)M^T PM{(β(t) − β0)[Gτ x(t − τ2(t)) − Gτ x(t − τ3(t))]
        + a(β(t) − β0)[Dτ x(t − τ2(t)) − Dτ x(t − τ3(t))]}.    (22)

Based on the properties of the Kronecker product, MC = C1M, MDτ = D1τM, and MJ = 0. Taking expectations and noting that Eβ(t) = β0, so that the last bracket in (22) vanishes, we obtain

E[LV(t)] = 2x^T(t)M^T PC1Mx(t) + 2x^T(t)M^T PMÃ(x(t))f(x(t))
    + 2x^T(t)M^T PMB̃(x(t))f(x(t − τ1(t))) + 2x^T(t)M^T PMGx(t)
    + 2β0 x^T(t)M^T PMGτ x(t − τ2(t)) + 2(1 − β0)x^T(t)M^T PMGτ x(t − τ3(t))
    + 2aβ0 x^T(t)M^T PMDτ (x(t − τ2(t)) − x(t))
    + 2a(1 − β0)x^T(t)M^T PMDτ (x(t − τ3(t)) − x(t))
    + E[(Σ(t, x(t), x(t − τ1(t)))dW(t))^T M^T PM Σ(t, x(t), x(t − τ1(t)))dW(t)].    (23)

Based on the Lipschitz condition, a_kj(x_ij(t)) ≤ a^u_kj, b_kj(x_ij(t)) ≤ b^u_kj, MĀ = Ā1M, and MB̄ = B̄1M, which leads to

2x^T(t)M^T PMÃ(x(t))f(x(t))
    ≤ x^T(t)M^T QMx(t) + f^T(x(t))Ã^T(x(t))M^T PQ^{−1}PMÃ(x(t))f(x(t))
    ≤ x^T(t)M^T QMx(t) + x^T(t)M^T L1^T Ā1^T PQ^{−1}PĀ1L1Mx(t)    (24)

2x^T(t)M^T PMB̃(x(t))f(x(t − τ1(t)))
    ≤ x^T(t)M^T RMx(t) + f^T(x(t − τ1(t)))B̃^T(x(t))M^T PR^{−1}PMB̃(x(t))f(x(t − τ1(t)))
    ≤ x^T(t)M^T RMx(t) + x^T(t − τ1(t))M^T L1^T B̄1^T PR^{−1}PB̄1L1Mx(t − τ1(t)).    (25)
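The two estimates in (24) and (25) rest on the completion-of-squares bound of Lemma 4 (with Σ3 taken as Q or R). A quick numerical sanity check of that bound, with randomly chosen data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
# Random symmetric positive definite Q (stand-in for the LMI variable Q or R).
Aq = rng.standard_normal((n, n))
Q = Aq @ Aq.T + n * np.eye(n)

u = rng.standard_normal(n)   # plays the role of M x(t)
v = rng.standard_normal(n)   # plays the role of P M A(x(t)) f(x(t))

# Completion of squares: 2 u^T v <= u^T Q u + v^T Q^{-1} v for any Q > 0.
lhs = 2 * u @ v
rhs = u @ Q @ u + v @ np.linalg.solve(Q, v)
assert lhs <= rhs
print("bound holds:", lhs, "<=", rhs)
```

The inequality follows from (Q^{1/2}u − Q^{−1/2}v)^T(Q^{1/2}u − Q^{−1/2}v) ≥ 0, which is why it holds for every positive definite Q.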


By Lemma 3, the condition Π < 0 given below is equivalent to the LMI (13). Furthermore,

2x^T(t)M^T PMGx(t) = 2x^T(t)M^T P[(M ⊗ I_n)(G ⊗ D)]x(t)
    = 2x^T(t)M^T P[MG ⊗ D]x(t)
    = 2x^T(t)M^T P[HM ⊗ D]x(t)
    = 2x^T(t)M^T P[(H ⊗ D)(M ⊗ I_n)]x(t)
    = 2x^T(t)M^T PHMx(t)    (26)

2x^T(t)M^T PMGτ x(t − τ2(t)) = 2x^T(t)M^T P[(M ⊗ I_n)(G ⊗ Dτ)]x(t − τ2(t))
    = 2x^T(t)M^T PHτ Mx(t − τ2(t))    (27)

2x^T(t)M^T PMGτ x(t − τ3(t)) = 2x^T(t)M^T P[(M ⊗ I_n)(G ⊗ Dτ)]x(t − τ3(t))
    = 2x^T(t)M^T PHτ Mx(t − τ3(t)).    (28)
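The chain of identities in (26)–(28) reduces to M(G ⊗ D) = (H ⊗ D)M, which holds whenever G has constant row sums. A small numerical check (illustrative G and D, not the paper's example):

```python
import numpy as np

N, n = 4, 2
In = np.eye(n)

# M and K as in Lemma 1.
M = np.zeros((N - 1, N))
for i in range(N - 1):
    M[i, i], M[i, i + 1] = 1.0, -1.0
K = np.zeros((N, N - 1))
for j in range(N - 1):
    K[: j + 1, j] = 1.0

# Zero-row-sum coupling matrix G (so that Lemma 1 applies) and an arbitrary D.
G = np.array([[-2.0, 1.0, 1.0, 0.0],
              [1.0, -2.0, 0.0, 1.0],
              [1.0, 0.0, -2.0, 1.0],
              [0.0, 1.0, 1.0, -2.0]])
rng = np.random.default_rng(2)
D = rng.standard_normal((n, n))

H = M @ G @ K
Mbig = np.kron(M, In)            # the paper's boldface M = M ⊗ I_n
lhs = Mbig @ np.kron(G, D)       # M(G ⊗ D), the left side in (26)
rhs = np.kron(H, D) @ Mbig       # (H ⊗ D)M, the right side
assert np.allclose(lhs, rhs)
print("M(G ⊗ D) = (H ⊗ D)M verified")
```

This is exactly the step that lets the synchronization analysis work on the (N − 1)n-dimensional difference variable Mx(t) instead of the full state.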

Based on Assumption 2 and (14),

(Σ(t, x(t), x(t − τ1(t)))dW(t))^T M^T PM Σ(t, x(t), x(t − τ1(t)))dW(t)
    ≤ α Σ_{i=1}^{N−1} [(σ(t, x_i(t), x_i(t − τ1(t))) − σ(t, x_{i+1}(t), x_{i+1}(t − τ1(t))))dw(t)]^T
        × [(σ(t, x_i(t), x_i(t − τ1(t))) − σ(t, x_{i+1}(t), x_{i+1}(t − τ1(t))))dw(t)]
    ≤ α Σ_{i=1}^{N−1} [ρ1 ‖x_i(t) − x_{i+1}(t)‖² + ρ2 ‖x_i(t − τ1(t)) − x_{i+1}(t − τ1(t))‖²]dt
    = αρ1 x^T(t)M^T Mx(t)dt + αρ2 x^T(t − τ1(t))M^T Mx(t − τ1(t))dt.

Substituting (24)–(28) and the above estimate into (23) gives

E[LV(t)] ≤ ζ1^T(t)Πζ1(t) + ξ1V(t) + ξ2V(t − τ1(t)) + ξ3V(t − τ2(t)) + ξ4V(t − τ3(t))

where ζ1(t) = [x^T(t)M^T, x^T(t − τ1(t))M^T, x^T(t − τ2(t))M^T, x^T(t − τ3(t))M^T]^T and

Π = [Ξ11  0    Ξ13   Ξ14
     ∗    Ξ22  0     0
     ∗    ∗    −ξ3P  0
     ∗    ∗    ∗     −ξ4P].

Since (13) holds, Π < 0, and therefore

E[LV(t)] < ξ1V(t) + ξ2V(t − τ1(t)) + ξ3V(t − τ2(t)) + ξ4V(t − τ3(t)).    (30)

Step 2 (The impulsive instants): For t = t_k,

EV(t_k) = E[(1 + θ)x(t_k^−) + μx(t_k − τ4(t_k))]^T M^T PM[(1 + θ)x(t_k^−) + μx(t_k − τ4(t_k))]
    = (1 + θ)² Ex^T(t_k^−)M^T PMx(t_k^−) + μ(1 + θ)Ex^T(t_k^−)M^T PMx(t_k − τ4(t_k))
        + μ(1 + θ)Ex^T(t_k − τ4(t_k))M^T PMx(t_k^−)
        + μ² Ex^T(t_k − τ4(t_k))M^T PMx(t_k − τ4(t_k))
    ≤ λ1 EV(t_k^−) + λ2 EV(t_k − τ4(t_k)).    (31)

Step 3: By (30), (31), conditions (15) and (16), and Lemma 5, there exist constants β > 1 and λ > 0 such that λmin(P)Ex^T(t)M^T Mx(t) ≤ EV(t) ≤ sup_{−τ≤s≤0} E‖Mφ(s)‖² βe^{−λt} for t ≥ 0. Hence, Ex^T(t)M^T Mx(t) ≤ (1/λmin(P)) sup_{−τ≤s≤0} E‖Mφ(s)‖² βe^{−λt}. Based on Lemma 6, E‖x_i(t) − x_j(t)‖² ≤ (1/λmin(P)) sup_{−τ≤s≤0} E‖Mφ(s)‖² βe^{−λt}, that is, the network (11) is exponentially synchronized in the mean square in the sense of Definition 2. This completes the proof.

APPENDIX II

Proof of Theorem 2: For t ≠ t_k, the derivative of V(t) along the solution of (17) satisfies

D^+V(t) = 2x^T(t)M^T PC1Mx(t) + 2x^T(t)M^T PMÃ(x(t))f(x(t))
    + 2x^T(t)M^T PMB̃(x(t))f(x(t − τ1(t))) + 2x^T(t)M^T PMGx(t)
    + 2x^T(t)M^T PMGτ x(t − τ2(t)) + 2ax^T(t)M^T PD1τ M(x(t − τ2(t)) − x(t)).    (32)

Combining (24)–(27) and using (32) leads to

D^+V(t) ≤ ζ2^T(t)Π̃ζ2(t) + ξ1V(t) + ξ2V(t − τ1(t)) + ξ3V(t − τ2(t))

where ζ2(t) = [x^T(t)M^T, x^T(t − τ1(t))M^T, x^T(t − τ2(t))M^T]^T and

Π̃ = [Ξ11  0    PHτ + aPD1τ
      ∗    Ξ22  0
      ∗    ∗    −ξ3P].

By Lemma 3, Π̃ < 0 is equivalent to (18), so D^+V(t) ≤ ξ1V(t) + ξ2V(t − τ1(t)) + ξ3V(t − τ2(t)). The rest of the proof is similar to that of Theorem 1: by (19), (20), and Lemma 5, there exist constants β > 1 and λ > 0 such that V(t) ≤ sup_{−τ≤s≤0} ‖Mφ(s)‖² βe^{−λt}. So ‖x_i(t) − x_j(t)‖² ≤ (1/λmin(P)) sup_{−τ≤s≤0} ‖Mφ(s)‖² βe^{−λt}, which completes the proof.

REFERENCES

[1] L. O. Chua, “Memristor—The missing circuit element,” IEEE Trans. Circuit Theory, vol. 18, no. 5, pp. 507–519, Sep. 1971.
[2] D. B. Strukov, G. S. Snider, D. R. Stewart, and R. S. Williams, “The missing memristor found,” Nature, vol. 453, no. 7191, pp. 80–83, 2008.
[3] J. M. Tour and T. He, “Electronics: The fourth element,” Nature, vol. 453, no. 7191, pp. 42–43, 2008.
[4] S. Duan, X. Hu, Z. Dong, L. Wang, and P. Mazumder, “Memristor-based cellular nonlinear/neural network: Design, analysis, and applications,” IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 6, pp. 1202–1213, Jun. 2015.
[5] X. Hu, G. Feng, S. Duan, and L. Liu, “Multilayer RTD-memristor-based cellular neural networks for color image processing,” Neurocomputing, vol. 162, pp. 150–162, Aug. 2015.
[6] S. H. Jo, T. Chang, I. Ebong, B. B. Bhadviya, P. Mazumder, and W. Lu, “Nanoscale memristor device as synapse in neuromorphic systems,” Nano Lett., vol. 10, no. 4, pp. 1297–1301, 2010.
[7] M. Itoh and L. O. Chua, “Memristor oscillators,” Int. J. Bifurcation Chaos, vol. 18, no. 11, pp. 3183–3206, 2008.
[8] I. Petráš, “Fractional-order memristor-based Chua’s circuit,” IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 57, no. 12, pp. 975–979, Dec. 2010.
[9] Y. V. Pershin and M. Di Ventra, “Experimental demonstration of associative memory with memristive neural networks,” Neural Netw., vol. 23, no. 7, pp. 881–886, 2010.
[10] J. Hu and J. Wang, “Global uniform asymptotic stability of memristor-based recurrent neural networks with time delays,” in Proc. Int. Joint Conf. Neural Netw. (IJCNN), Jul. 2010, pp. 1–8.
[11] Z. Guo, J. Wang, and Z. Yan, “Attractivity analysis of memristor-based cellular neural networks with time-varying delays,” IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 4, pp. 704–717, Apr. 2014.
[12] S. Wen, G. Bao, Z. Zeng, Y. Chen, and T.
Huang, “Global exponential synchronization of memristor-based recurrent neural networks with time-varying delays,” Neural Netw., vol. 48, pp. 195–203, Dec. 2013. [13] A. Wu, S. Wen, and Z. Zeng, “Synchronization control of a class of memristor-based recurrent neural networks,” Inf. Sci., vol. 183, no. 1, pp. 106–116, 2012. [14] G. Zhang and Y. Shen, “New algebraic criteria for synchronization stability of chaotic memristive neural networks with time-varying delays,” IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 10, pp. 1701–1707, Oct. 2013. [15] G. Zhang and Y. Shen, “Exponential synchronization of delayed memristor-based chaotic neural networks via periodically intermittent control,” Neural Netw., vol. 55, pp. 1–10, Jul. 2014. [16] J. Cao and M. Xiao, “Stability and Hopf bifurcation in a simplified BAM neural network with two time delays,” IEEE Trans. Neural Netw., vol. 18, no. 2, pp. 416–430, Mar. 2007. [17] H. Lu, “Chaotic attractors in delayed neural networks,” Phys. Lett. A, vol. 298, nos. 2–3, pp. 109–116, 2002. [18] J. Wei and S. Ruan, “Stability and bifurcation in a neural network model with two delays,” Phys. D, Nonlinear Phenomena, vol. 130, nos. 3–4, pp. 255–272, 1999. [19] W. Wang and J. Cao, “Synchronization in an array of linearly coupled networks with time-varying delay,” Phys. A, Statist. Mech. Appl., vol. 366, pp. 197–211, Jul. 2006.

[20] G. Chen, J. Zhou, and Z. Liu, “Global synchronization of coupled delayed neural networks and applications to chaotic CNN models,” Int. J. Bifurcation Chaos, vol. 14, no. 7, pp. 2229–2240, 2004. [21] C. W. Wu, “Synchronization in arrays of coupled nonlinear systems with delay and nonreciprocal time-varying coupling,” IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 52, no. 5, pp. 282–286, May 2005. [22] J. Zhou and T. Chen, “Synchronization in general complex delayed dynamical networks,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 53, no. 3, pp. 733–744, Mar. 2006. [23] J. Cao, P. Li, and W. Wang, “Global synchronization in arrays of delayed neural networks with constant and delayed coupling,” Phys. Lett. A, vol. 353, no. 4, pp. 318–325, 2006. [24] W. Yu, J. Cao, and J. Lü, “Global synchronization of linearly hybrid coupled networks with time-varying delay,” SIAM J. Appl. Dyn. Syst., vol. 7, no. 1, pp. 108–133, 2008. [25] J. Cao, G. Chen, and P. Li, “Global synchronization in an array of delayed neural networks with hybrid coupling,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 38, no. 2, pp. 488–498, Apr. 2008. [26] Z.-G. Wu, P. Shi, H. Su, and J. Chu, “Sampled-data exponential synchronization of complex dynamical networks with time-varying coupling delay,” IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 8, pp. 1177–1187, Aug. 2013. [27] W. Lu, T. Chen, and G. Chen, “Synchronization analysis of linearly coupled systems described by differential equations with a coupling delay,” Phys. D, Nonlinear Phenomena, vol. 221, no. 2, pp. 118–134, 2006. [28] W. He and J. Cao, “Exponential synchronization of hybrid coupled networks with delayed coupling,” IEEE Trans. Neural Netw., vol. 21, no. 4, pp. 571–583, Apr. 2010. [29] A. Ray, “Output feedback control under randomly varying distributed delays,” J. Guid., Control, Dyn., vol. 17, no. 4, pp. 701–711, 1994. [30] Y. Tang, J.-A. Fang, M. Xia, and D. 
Yu, “Delay-distribution-dependent stability of stochastic discrete-time neural networks with randomly mixed time-varying delays,” Neurocomputing, vol. 72, nos. 16–18, pp. 3830–3838, 2009. [31] Z. Wang, Y. Wang, and Y. Liu, “Global synchronization for discrete-time stochastic complex networks with randomly occurred nonlinearities and mixed time delays,” IEEE Trans. Neural Netw., vol. 21, no. 1, pp. 11–25, Jan. 2010. [32] H. Bao and J. Cao, “Delay-distribution-dependent state estimation for discrete-time stochastic neural networks with random delay,” Neural Netw., vol. 24, no. 1, pp. 19–28, 2011. [33] B. Liu, X. Liu, G. Chen, and H. Wang, “Robust impulsive synchronization of uncertain dynamical networks,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 52, no. 7, pp. 1431–1441, Jul. 2005. [34] T. Yang, Impulsive Systems and Control: Theory and Applications. Commack, NY, USA: Nova, 2001. [35] Z.-H. Guan, Z.-W. Liu, G. Feng, and Y.-W. Wang, “Synchronization of complex dynamical networks with time-varying delays via impulsive distributed control,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 57, no. 8, pp. 2182–2195, Aug. 2010. [36] J. Lu, D. W. C. Ho, J. Cao, and J. Kurths, “Exponential synchronization of linearly coupled neural networks with impulsive disturbances,” IEEE Trans. Neural Netw., vol. 22, no. 2, pp. 329–336, Feb. 2011. [37] A. Khadra, X. Z. Liu, and X. Shen, “Analyzing the robustness of impulsive synchronization coupled by linear delayed impulses,” IEEE Trans. Autom. Control, vol. 54, no. 4, pp. 923–928, Apr. 2009. [38] X. Yang and Z. Yang, “Synchronization of TS fuzzy complex dynamical networks with time-varying impulsive delays and stochastic effects,” Fuzzy Sets Syst., vol. 235, pp. 25–43, Jan. 2014. [39] A. F. Filippov, Differential Equations With Discontinuous Righthand Sides (Mathematics and Its Applications). Boston, MA, USA: Kluwer, 1988. [40] C. W. Wu and L. O. 
Chua, “Synchronization in an array of linearly coupled dynamical systems,” IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 42, no. 8, pp. 430–447, Aug. 1995. [41] K. Gu, V. L. Kharitonov, and J. Chen, Stability of Time-Delay Systems. Boston, MA, USA: Birkhäuser, 2003. [42] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory (Studies in Applied Mathematics), vol. 15. Philadelphia, PA, USA: SIAM, 1994.


Haibo Bao received the B.S. and M.S. degrees from Northeast Normal University, Changchun, China, in 2002 and 2005, respectively, and the Ph.D. degree from Southeast University, Nanjing, China, in 2011, all in mathematics/applied mathematics. She was with the School of Science, Hohai University, Nanjing, from 2005 to 2012. In 2012, she joined the School of Mathematics and Statistics, Southwest University, Chongqing, China. From 2014 to 2015, she was a Research Associate with the Department of Electrical Engineering, Yeungnam University, Gyeongsan, Korea. She is currently an Associate Professor with Southwest University. Her current research interests include neural networks, complex dynamical networks, control theory, and the fractional calculus theory.

Ju H. Park received the Ph.D. degree in electronics and electrical engineering from the Pohang University of Science and Technology (POSTECH), Pohang, Korea, in 1997. He was a Research Associate with ERC-ARC, POSTECH, from 1997 to 2000. In 2000, he joined Yeungnam University, Gyeongsan, Korea, where he is currently the Chuma Chair Professor. From 2006 to 2007, he was a Visiting Professor with the Department of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, USA. He has authored a number of papers in his research areas. His current research interests include robust control and filtering, neural networks, complex networks, multiagent systems, and chaotic systems. Prof. Park serves as an Editor of the International Journal of Control, Automation and Systems. He is a Subject Editor/Associate Editor/Editorial Board Member for several international journals, including IET Control Theory and Applications, Applied Mathematics and Computation, the Journal of The Franklin Institute, Nonlinear Dynamics, the Journal of Applied Mathematics and Computing, and Cogent Engineering.


Jinde Cao (M’07–SM’07) received the B.S. degree from Anhui Normal University, Wuhu, China, in 1986, the M.S. degree from Yunnan University, Kunming, China, in 1989, and the Ph.D. degree from Sichuan University, Chengdu, China, in 1998, all in mathematics/applied mathematics. He was with Yunnan University from 1989 to 2000, where he was a Professor from 1996 to 2000. In 2000, he joined the Department of Mathematics, Southeast University, Nanjing, China. From 2001 to 2002, he was a Post-Doctoral Research Fellow with the Department of Automation and Computer-Aided Engineering, Chinese University of Hong Kong, Hong Kong. From 2006 to 2008, he was a Visiting Research Fellow and Visiting Professor with the School of Information Systems, Computing and Mathematics, Brunel University, London, U.K. In 2014, he was a Visiting Professor with the School of Electrical and Computer Engineering, RMIT University, Melbourne, VIC, Australia. He is currently the Dean of the Department of Mathematics, Southeast University, where he is a Distinguished Professor and Doctoral Advisor, and he is also a Distinguished Adjunct Professor with King Abdulaziz University, Jeddah, Saudi Arabia. He has authored or co-authored over 300 journal papers and five edited books. His current research interests include nonlinear systems, neural networks, complex systems and complex networks, stability theory, and applied mathematics. Prof. Cao was an Associate Editor of the IEEE Transactions on Neural Networks, the Journal of The Franklin Institute, and Neurocomputing. He is an Associate Editor of the IEEE Transactions on Cybernetics, Differential Equations and Dynamical Systems, Mathematics and Computers in Simulation, and Neural Networks. He is a Reviewer of Mathematical Reviews and Zentralblatt-Math. He is an ISI Highly Cited Researcher in Mathematics and Engineering listed by Thomson Reuters.