
versus the time t is plotted, where (x1(0), x2(0), x3(0)) = (1, 1, 1) and (y1(0), y2(0), y3(0)) = (0, 0, 1). We see that the designed response BN (20) completely synchronizes the drive BN (19) from the seventh step.

V. CONCLUSION

In this brief, we addressed the problem of synchronization design for BNs. Our discussion was based on the technique of the semi-tensor product of matrices, which allows an algebraic representation of BNs and thus facilitates rigorous analysis of the system dynamics. The main result provided a constructive method for designing a response BN that completely synchronizes a given drive BN. An illustrative example was included to demonstrate the design method.

ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers for their valuable comments and suggestions.

Bogdanov–Takens Singularity in Tri-Neuron Network With Time Delay

Xing He, Chuandong Li, Senior Member, IEEE, Tingwen Huang, and Chaojie Li

Abstract— This brief studies a retarded functional differential equation modeling a tri-neuron network with time delay. The Bogdanov–Takens (B-T) bifurcation is investigated by using the center manifold reduction and the normal form method. We obtain the versal unfolding of the normal forms at the B-T singularity and show that the model can exhibit pitchfork, Hopf, homoclinic, and double-limit-cycle bifurcations. Some numerical simulations are given to support the analytic results and to explore chaotic dynamics. Finally, an algorithm is given to show that chaotic tri-neuron networks can be used for encrypting a color image.

Index Terms— Bogdanov–Takens bifurcation, homoclinic bifurcation, tri-neuron network.

Manuscript received January 17, 2012; revised October 22, 2012; accepted December 19, 2012. Date of publication March 12, 2013; date of current version April 5, 2013. This work was supported in part by the National Natural Science Foundation of China under Grant 60974020, the Fundamental Research Funds for the Central Universities of China under Project CDJZR10 18 55 01, and the National Priority Research Project under Grant NPRP 4-1162-1-181, funded by the Qatar National Research Fund, Qatar.

X. He and C. Li are with the School of Electronics and Information Engineering, Southwest University, Chongqing 400715, China (e-mail: [email protected]; [email protected]).

T. Huang is with Texas A&M University at Qatar, Doha 23874, Qatar (e-mail: [email protected]).

C. J. Li is with the School of Science, Information Technology and Engineering, University of Ballarat, Mount St. Helen, VIC 3350, Australia (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TNNLS.2013.2238681

I. INTRODUCTION

Neural networks (NNs) have great potential in practical applications, such as associative memory, optimization, and pattern recognition. In applications of NNs, multistable NNs can be viewed as models for associative memory [1], and their attractors can be interpreted as stored memories. For example, an equilibrium point can be regarded as a single stored memory pattern or an optimal solution, while a limit cycle can store more complex patterns.


In order to realize a memory system, [4] examined the dynamical phenomenon of NNs as bifurcations of attractors. In [2] and [3], the authors confirmed that an NN with chaotic simulated annealing has good searching ability for solving the traveling salesman problem. Wang and Shi [5] proposed a gradual noisy chaotic NN to solve the NP-complete broadcast scheduling problem.

Since Hopfield [6] introduced a simplified NN model, there has been increasing interest in investigating the dynamical behaviors of NNs due to their important applications. Based on the Hopfield NN, Marcus and Westervelt [7] proposed a delayed NN (DNN) model, which is more relevant for describing living systems in neuroscience; the time delay can lead to a wide range of behaviors, from relaxation to a stable equilibrium, through instability resulting in stable oscillations, to chaos. Afterward, many DNN models [8]–[15] have been constructed and studied extensively to understand their dynamical behaviors, which is useful in solving both theoretical and practical problems. For two-neuron networks with delays, there is extensive literature [9]–[11], [19], [20] on the dynamic behaviors because they are the simplest NNs, and there are also many references on the dynamics of delayed network models with three or more neurons [22], [24], [27], [28], [30]–[32]. Most of these works focus on Hopf bifurcation, which indicates that local periodic solutions arise from an equilibrium point of the NN. In fact, Hopf bifurcation is a simple codimension-1 bifurcation (one parameter varies) from the mathematical point of view, whereas the dynamics of DNNs are very complex, and complicated bifurcations [Bogdanov–Takens (B-T), zero-Hopf, double-Hopf] may occur when two parameters vary at the same time. Near a codimension-2 bifurcation point, global bifurcations (homoclinic and heteroclinic orbits) and chaos can occur. From the point of view of NN applications, homoclinic and heteroclinic orbits can be interpreted as stored memories; in addition, chaotic NNs may be useful in solving optimization problems. Therefore, there is a significant need to perform a codimension-2 bifurcation analysis of the DNN model.

In this brief, we consider a tri-neuron model with time delay [24], described by
$$\dot{x}_i(t) = -x_i(t) + \alpha_i \tanh\left[x_{i+2}(t) - \beta x_{i+2}(t-\tau)\right] \tag{1}$$
where the subscripts are taken mod 3; x_i denotes the activation level of the ith neuron, which is capable of activation modulated by a dynamic threshold that depends on the history of its previous activations; α_i represents the activity coefficient of the ith neuron; β is a measure of the inhibitory influence of the past history; τ > 0 is the time delay reflecting the synaptic delay and the axonal and dendritic propagation time; and the term x_{i+2} inside the tanh function in (1) represents a local positive feedback (Fig. 1). In biology, such feedback is known as reverberation. Liao et al. [24] not only obtained sufficient delay-dependent criteria ensuring global asymptotic stability of (1) by constructing suitable Lyapunov functions, but also studied Hopf bifurcation and resonant double Hopf bifurcation of (1). Beyond the dynamics reported in [24], it is interesting to further explore other dynamics of (1) for practical applications.

Fig. 1. Network architecture of (1).

In [24] and [30], the focus was on the Hopf bifurcation computation for tri-neuron networks, where the linearization has a pair of purely imaginary roots. Compared with [24] and [30], the contributions of this brief are to carry out the B-T bifurcation computation for a tri-neuron network, where the corresponding characteristic equation has a double zero root, and to give the curves along which the different bifurcations occur. Setting the parameters near the B-T bifurcation point may provide a guide for designing stable NNs and pave the way for applications of homoclinic, periodic, double-cycle, and chaotic solutions. In this brief, we choose τ and β as the bifurcation parameters. Using the center manifold reduction [17], [18], [23] and the normal form method [16], [21], [29], which provide a convenient tool for computing a simplified form of the original differential equation, one can obtain the normal form and use it to analyze the dynamic behavior of (1). Finally, a numerical chaotic solution is applied to the encryption of color images.

The rest of this brief is organized as follows. In Section II, we discuss the distribution of the eigenvalues of the linearized system of (1). In Section III, we derive the normal form and unfolding for the B-T bifurcation in the tri-neuron network model with time delay, and the normal form is used to predict B-T bifurcation diagrams. In Section IV, some numerical simulations are given to support the analytic results, and a chaotic solution is used for the encryption of color images. Section V summarizes the main conclusions.
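As a quick numerical illustration of the model, the following sketch integrates (1) with a simple fixed-step Euler scheme and a constant-history buffer. It is only a sketch: the parameter values (alpha, beta, tau), the step size, and the initial history are illustrative assumptions, not values taken from this brief.

```python
import numpy as np

# Minimal fixed-step Euler integrator for the delayed tri-neuron model (1):
#   dx_i/dt = -x_i(t) + alpha_i * tanh(x_{i+2}(t) - beta * x_{i+2}(t - tau)),
# with indices taken mod 3. All parameter values below are assumed, for illustration only.
alpha = np.array([2.0, 2.0, 2.0])   # activity coefficients alpha_i (assumed)
beta, tau = 0.5, 1.0                # inhibition strength and delay (assumed)
dt, T = 1e-3, 50.0                  # step size and time horizon (assumed)

n_delay = int(round(tau / dt))      # number of steps covering the delay
n_steps = int(round(T / dt))
idx = (np.arange(3) + 2) % 3        # index map i -> i + 2 (mod 3)

# History buffer: constant initial function x(t) = (0.1, 0.2, -0.1) for t <= 0 (assumed).
hist = np.tile(np.array([0.1, 0.2, -0.1]), (n_delay + 1, 1))
traj = np.empty((n_steps, 3))

x = hist[-1].copy()
for k in range(n_steps):
    x_delayed = hist[0]             # state at time t - tau
    dx = -x + alpha * np.tanh(x[idx] - beta * x_delayed[idx])
    x = x + dt * dx
    hist = np.vstack([hist[1:], x]) # slide the delay window forward by one step
    traj[k] = x

print("state at t = %.1f:" % T, traj[-1])
```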

II. DISTRIBUTION OF EIGENVALUES

In this section, we discuss the distribution of the eigenvalues of (1) at the origin and obtain the existence of the B-T bifurcation. A B-T bifurcation of a fixed point occurs if the corresponding characteristic equation has a zero root of multiplicity two and no other root of the characteristic equation has zero real part. It is clear that (1) has an equilibrium at (0, 0, 0). Expanding the function tanh, (1) can be written as
$$\dot{x}_i(t) = -x_i(t) + \alpha_i\left[x_{i+2}(t) - \beta x_{i+2}(t-\tau)\right] - \frac{\alpha_i}{3}\left[x_{i+2}(t) - \beta x_{i+2}(t-\tau)\right]^3 + \text{h.o.t.} \tag{2}$$

and the corresponding characteristic equation at the origin is
$$F(\lambda) = \left|\lambda I - A_1 - B_1 e^{-\lambda\tau}\right| = (\lambda+1)^3 - \alpha^3\left(1 - \beta e^{-\lambda\tau}\right)^3 = 0 \tag{3}$$


where
$$A_1 = \begin{pmatrix} -1 & 0 & \alpha_1 \\ \alpha_2 & -1 & 0 \\ 0 & \alpha_3 & -1 \end{pmatrix}, \qquad
B_1 = \begin{pmatrix} 0 & 0 & -\alpha_1\beta \\ -\alpha_2\beta & 0 & 0 \\ 0 & -\alpha_3\beta & 0 \end{pmatrix}, \qquad
\alpha = \sqrt[3]{\alpha_1\alpha_2\alpha_3}.$$
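To make the reduction in (3) easy to double-check, here is a small symbolic sketch (assuming the Python package sympy is available; the symbol names are arbitrary) that expands the determinant for the matrices above and compares it with the displayed scalar form, using α³ = α₁α₂α₃.

```python
import sympy as sp

# Symbolic sanity check (a sketch): the determinant |lambda*I - A1 - B1*e^{-lambda*tau}|
# should reduce to (lambda + 1)^3 - alpha1*alpha2*alpha3*(1 - beta*e^{-lambda*tau})^3,
# i.e., the scalar form in (3) with alpha^3 = alpha1*alpha2*alpha3.
lam, tau, beta, a1, a2, a3 = sp.symbols('lambda tau beta alpha1 alpha2 alpha3')
E = sp.exp(-lam * tau)

A1 = sp.Matrix([[-1, 0, a1], [a2, -1, 0], [0, a3, -1]])
B1 = sp.Matrix([[0, 0, -a1 * beta], [-a2 * beta, 0, 0], [0, -a3 * beta, 0]])

F = (lam * sp.eye(3) - A1 - B1 * E).det()
target = (lam + 1)**3 - a1 * a2 * a3 * (1 - beta * E)**3

print(sp.expand(F - target))   # expected output: 0
```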

For τ = 0, from the Routh–Hurwitz criteria, the roots of (3) have negative real parts if and only if 8 + α³(1 − β)³ > 0. Next, we focus on the case τ > 0. Let β0 = 1 − (1/α) and τ0 = 1/(α − 1). In order to discuss the distribution of the roots of (3), we need the following lemma.

Lemma 1 [25]: Consider the exponential polynomial
$$P(\lambda, e^{-\lambda\tau_1}, \ldots, e^{-\lambda\tau_m}) = \lambda^n + p_1^{(0)}\lambda^{n-1} + \cdots + p_{n-1}^{(0)}\lambda + p_n^{(0)} + \left[p_1^{(1)}\lambda^{n-1} + \cdots + p_{n-1}^{(1)}\lambda + p_n^{(1)}\right]e^{-\lambda\tau_1} + \cdots + \left[p_1^{(m)}\lambda^{n-1} + \cdots + p_{n-1}^{(m)}\lambda + p_n^{(m)}\right]e^{-\lambda\tau_m}$$
where τ_i ≥ 0 (i = 1, 2, …, m) and p_j^{(i)} (i = 0, 1, …, m; j = 1, 2, …, n) are constants. As (τ_1, τ_2, …, τ_m) vary, the sum of the orders of the zeros of P(λ, e^{−λτ_1}, …, e^{−λτ_m}) in the open right half-plane can change only if a zero appears on or crosses the imaginary axis.

Theorem 1: Suppose β = β0 = 1 − (1/α) and 1 < α < 4. Then:
1) λ = 0 is a simple root of (3) if and only if τ ≠ τ0;
2) λ = 0 is a double root of (3) if and only if τ = τ0;
3) (3) does not have purely imaginary roots ±iω (ω > 0);
4) all the roots of (3) except 0 have negative real parts.

Proof: 1)–2) It is easy to calculate that F(0) = 0 if β = β0, and
$$F'(\lambda) = 3(\lambda+1)^2 - 3\alpha^3\left(1 - \beta e^{-\lambda\tau}\right)^2 \beta\tau e^{-\lambda\tau}$$
$$F''(\lambda) = 6(\lambda+1) - 6\alpha^3\left(1 - \beta e^{-\lambda\tau}\right)\beta^2\tau^2 e^{-2\lambda\tau} + 3\alpha^3\left(1 - \beta e^{-\lambda\tau}\right)^2 \beta\tau^2 e^{-\lambda\tau}.$$
From α > 1, we get τ0 = 1/(α − 1) > 0. Moreover, for β = β0, F'(0) = 3 − 3(α − 1)τ, which vanishes if and only if τ = τ0, so that
$$F'(0)\big|_{\beta=\beta_0,\,\tau=\tau_0} = 0, \qquad F''(0)\big|_{\beta=\beta_0,\,\tau=\tau_0} = \frac{3}{\alpha-1} > 0.$$
This completes the proof of 1) and 2).

When β = β0, (3) can be written as
$$F(\lambda) = (\lambda+1)^3 - \alpha^3\left(1 - \beta_0 e^{-\lambda\tau}\right)^3 = F_1(\lambda)F_2(\lambda)F_3(\lambda) = 0$$
where
$$F_1(\lambda) = \lambda + 1 - \alpha\left(1 - \beta_0 e^{-\lambda\tau}\right)$$
$$F_2(\lambda) = \lambda + 1 + \frac{1}{2}\alpha\left(1 - \beta_0 e^{-\lambda\tau}\right) - i\frac{\sqrt{3}}{2}\alpha\left(1 - \beta_0 e^{-\lambda\tau}\right)$$
$$F_3(\lambda) = \lambda + 1 + \frac{1}{2}\alpha\left(1 - \beta_0 e^{-\lambda\tau}\right) + i\frac{\sqrt{3}}{2}\alpha\left(1 - \beta_0 e^{-\lambda\tau}\right).$$
1) Suppose λ = iω (ω > 0) is a root of F1(λ) = 0; then we obtain iω + 1 − α + (α − 1)e^{−iωτ} = 0. Separating the real and imaginary parts, we have
$$1 - \alpha + (\alpha-1)\cos(\omega\tau) = 0, \qquad \omega = (\alpha-1)\sin(\omega\tau).$$
The first equation gives cos(ωτ) = 1, hence sin(ωτ) = 0, resulting in ω = 0, which contradicts ω > 0.


2) Suppose λ = iω (ω > 0) is a root of F2(λ) = 0. Then we get
$$i\omega + 1 + \frac{1}{2}\alpha - \frac{1}{2}(\alpha-1)e^{-i\omega\tau} - i\frac{\sqrt{3}}{2}\alpha + i\frac{\sqrt{3}}{2}(\alpha-1)e^{-i\omega\tau} = 0.$$
Separating the real and imaginary parts, it is easy to obtain
$$1 + \frac{1}{2}\alpha = (\alpha-1)\cos\left(\omega\tau + \frac{\pi}{3}\right), \qquad \frac{\sqrt{3}}{2}\alpha - \omega = (\alpha-1)\sin\left(\omega\tau + \frac{\pi}{3}\right).$$
Since sin²(ωτ + π/3) + cos²(ωτ + π/3) = 1, we have
$$\omega^2 - \sqrt{3}\,\alpha\,\omega + 3\alpha = 0.$$
If 1 < α < 4, this equation has no real root ω.
3) λ = iω (ω > 0) is not a root of F3(λ) = 0; the proof is parallel to that of 2).
From 1)–3), conclusion 3) follows.
4) Let β = β0. For τ = 0, F(λ) = λ(λ² + 3λ + 3) = 0 has a zero root and two roots with negative real parts. Using Lemma 1, we obtain claim 4).

Summarizing the discussion above, if β = β0, τ = τ0, and 1 < α < 4, then (2) undergoes a B-T bifurcation. In the next section, we obtain the normal forms of the reduced equations.
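As a quick numerical cross-check of claim 3) of Theorem 1 (an illustration, not part of the proof), the sketch below samples |F(iω)| on a grid of ω > 0 for β = β0 and τ = τ0; the value of α is an assumed illustrative value with 1 < α < 4.

```python
import numpy as np

# Numerical illustration of Theorem 1, claim 3): for beta = beta0, tau = tau0 and
# 1 < alpha < 4, the characteristic function F in (3) should not vanish at lambda = i*omega
# for omega > 0. (Near omega = 0, |F(i*omega)| is small but nonzero, since lambda = 0 is a
# double root of F itself.)
alpha = 2.0                          # assumed illustrative value, 1 < alpha < 4
beta0 = 1.0 - 1.0 / alpha
tau0 = 1.0 / (alpha - 1.0)

omega = np.linspace(0.1, 50.0, 100000)       # grid of omega > 0, bounded away from 0
lam = 1j * omega
F = (lam + 1.0)**3 - alpha**3 * (1.0 - beta0 * np.exp(-lam * tau0))**3

print("min |F(i*omega)| on the grid:", np.abs(F).min())   # expected strictly positive
```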

III. NORMAL FORM AND UNFOLDING FOR B-T BIFURCATION

In this section, we derive the universal unfolding of the B-T bifurcation. Owing to the symmetry that (1) remains invariant when the signs of the x_i are changed simultaneously, the unfolding also possesses this symmetry. The method is based on the normal form approach in [17], [18], and [23]. From Theorem 1, we know that, at the origin, the characteristic equation of (2) has a double zero root if β = β0 and τ = τ0. Rescaling the time by t → t/τ, (2) can be written as
$$\dot{x}_i(t) = -\tau x_i(t) + \alpha_i\tau\left[x_{i+2}(t) - \beta x_{i+2}(t-1)\right] - \frac{\alpha_i\tau}{3}\left[x_{i+2}(t) - \beta x_{i+2}(t-1)\right]^3 + \text{h.o.t.} \tag{4}$$
Let β = β0 + μ1 and τ = τ0 + μ2; then we have
$$\left\{\begin{aligned}
\dot{x}_1 &= (\tau_0+\mu_2)\left[-x_1 + \alpha_1 x_3 - \alpha_1\beta_0 x_3(t-1)\right] - \tau_0\alpha_1\mu_1 x_3(t-1) - \frac{\alpha_1\tau_0}{3}\left[x_3(t) - \beta_0 x_3(t-1)\right]^3 + \text{h.o.t.}\\
\dot{x}_2 &= (\tau_0+\mu_2)\left[-x_2 + \alpha_2 x_1 - \alpha_2\beta_0 x_1(t-1)\right] - \tau_0\alpha_2\mu_1 x_1(t-1) - \frac{\alpha_2\tau_0}{3}\left[x_1(t) - \beta_0 x_1(t-1)\right]^3 + \text{h.o.t.}\\
\dot{x}_3 &= (\tau_0+\mu_2)\left[-x_3 + \alpha_3 x_2 - \alpha_3\beta_0 x_2(t-1)\right] - \tau_0\alpha_3\mu_1 x_2(t-1) - \frac{\alpha_3\tau_0}{3}\left[x_2(t) - \beta_0 x_2(t-1)\right]^3 + \text{h.o.t.}
\end{aligned}\right. \tag{5}$$
Choose the phase space C = C([−1, 0]; R³) as the Banach space of continuous functions from [−1, 0] to R³.


For any $x \in C$, define $x_t(\theta) = x(t+\theta)$ and $\|x\| = \sup_{-1 \le \theta \le 0} |x(\theta)|$.
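Before carrying out the reduction, the B-T conditions established in Theorem 1 can also be checked numerically. The following sketch (with an assumed illustrative α) evaluates F(0) and finite-difference approximations of F'(0) and F''(0) at β = β0, τ = τ0, and compares F''(0) with 3/(α − 1).

```python
import numpy as np

# Numerical check of the double-zero-root (B-T) conditions for the scalar
# characteristic function F in (3). The value of alpha is an assumption.
alpha = 2.0
beta0 = 1.0 - 1.0 / alpha           # beta_0 = 1 - 1/alpha
tau0 = 1.0 / (alpha - 1.0)          # tau_0 = 1/(alpha - 1)

def F(lam, beta=beta0, tau=tau0, a=alpha):
    return (lam + 1.0)**3 - a**3 * (1.0 - beta * np.exp(-lam * tau))**3

# Central finite differences for F'(0) and F''(0).
h = 1e-5
F0 = F(0.0)
dF = (F(h) - F(-h)) / (2 * h)
d2F = (F(h) - 2 * F(0.0) + F(-h)) / h**2

print("F(0)   =", F0)                                        # expected ~ 0
print("F'(0)  =", dF)                                        # expected ~ 0 at tau = tau0
print("F''(0) =", d2F, "vs 3/(alpha-1) =", 3.0 / (alpha - 1.0))
```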