We review some of the history and early work in the area of synchronization in chaotic systems. We start with our own discovery of the phenomenon, but go on to establish the historical timeline of this topic back to the earliest known paper. The topic of synchronization of chaotic systems has always been intriguing, since chaotic systems are known to resist synchronization because of their positive Lyapunov exponents. The convergence of the two systems to identical trajectories is a surprise. We show how people originally thought about this process and how the concept of synchronization changed over the years to a more geometric view using synchronization manifolds. We also show that building synchronizing systems leads naturally to engineering more complex systems whose constituents are chaotic, but which can be tuned to output various chaotic signals. We finally end up at a topic that is still in very active exploration today and that is synchronization of dynamical systems in networks of oscillators.

Chaotic systems can be very simple, but they produce signals of surprising complexity. One characteristic of a chaotic system is that the signals produced by a chaotic system do not synchronize with any other system. It therefore seems impossible for two chaotic systems to synchronize with each other, but if the two systems exchange information in just the right way, they can synchronize. The conditions for synchronization can be described mathematically, and extended to situations where entire arrays of chaotic oscillators are coupled to each other. When an array of synchronized oscillators becomes desynchronized through the changing of a parameter, the first differences that emerge between oscillators can occur on short or long spatial scales. Chaotic synchronization is very sensitive to noise added to the coupling signal, but some techniques for overcoming this sensitivity point to mechanisms that may already exist in systems of coupled neurons. Eventually, the theory of arrays of coupled chaotic oscillators led to developments in the theory of networks.

Because of its positive Lyapunov exponents, an isolated chaotic system synchronizes with nothing else. The synchronization of two or more chaotic systems is actually another version of the usual thought experiment in which one imagines two initial conditions starting close to each other in phase space. For two separated chaotic systems, trajectories starting at close initial conditions will diverge at first. It was counterintuitive that two such systems could instead converge to the same trajectory. We now realize that two identical uncoupled chaotic systems have twice as many positive Lyapunov exponents as either of the systems by themselves. If, on the other hand, we add some sort of coupling between the isolated systems, we can alter the Lyapunov spectrum; depending on the coupling and the specific systems, we could decrease or increase the number of positive Lyapunov exponents. In our original synchronizing system, we went from a total of two positive Lyapunov exponents in the combination of two uncoupled systems to one positive Lyapunov exponent in the pair of coupled systems. We show below how we originally handled this situation mathematically so we could calculate the stability of the synchronized state. As we go through this article, we will also show how this approach matured and became general with a geometric way to view synchronization in large networks of coupled oscillators.

By 1988, we had entered the world of nonlinear dynamics and published several papers on the nonlinear dynamics of spin waves in magnetic materials.1,2 While the materials we were studying were important for radiofrequency electronics, the difficulty of the theory of spin waves prevented us from translating nonlinear dynamics into useful applications. While the Navy was supportive of basic research, at some point we needed to be able to show how our basic science might affect future applications. We needed to come up with a more immediate application of chaos. One idea that came to mind was some type of message masking or hiding, where a chaotic signal covers or hides a real message in a signal. Had we known a lot about cryptography and communications we might not have been so naive to think that this could be done simply. But sometimes ignorance is a blessing in another way. We talked for months on and off about ways to do this, but came up with no good, testable version.

Early in 1989 after a dynamics conference, we decided that combining two identical chaotic systems in a particular way would be worth a try. The idea was to take a signal from a component of one (the transmitter) and send it to a duplicate system (the receiver) where the receiver was missing the part of the system that the signal came from in the transmitter, but this missing part was compensated for by using the received signal at points in the receiver where the missing part would have supplied the signal. This is a bit confusing in words but clear in a diagram in Fig. 1.

FIG. 1.

The blocks and their labels represent the dynamic variables of the systems. The blocks represent the individual rules for the dynamics and the connections represent couplings. Thus, the drive or transmitter diagram may represent three, coupled ordinary differential equations. In this setup, the drive system sends the x signal to the response or receiver, which is missing its x part, but is otherwise identical to the drive, i.e., the y′ and z′ subsystems are identical to the y and z of the drive. This idea led to our first paper on chaotic synchronization3 as well as our second paper.4 


If we have chosen the correct variable to drive the response, then the y′ and z′ subsystems will converge to y and z as the systems evolve together on identical trajectories, i.e., the response variables will synchronize to the drive variables. We can think of this situation as the differences |y′ − y| and |z′ − z| going to zero as time progresses. Below we will make this rigorous and quantitative.

A type of system like Fig. 1 that we first tried was actually a 2D map system constructed from logistic maps in a very simple way

(1)
(2)

where (1) is the drive and (2) is the response. For α = 3.9 and β = 1.1, we see the chaotic attractor of the drive in Fig. 2(a). If we start y and z at different values, we see the plot of ln(|zn − yn|) in Fig. 2(b), which shows the convergence of zn to yn. This is not completely trivial, since setting β = 1.6 gives Fig. 2(c) for the difference in y and z, which shows there are regions of the attractor where there are instabilities repelling y and z, although the systems still synchronize.

FIG. 2.

The 2D logistic map system. (a) The attractor. (b) The convergence of the drive and response y and z components for the β = 1.1 case. (c) The convergence of the drive and response y and z components for the β = 1.6 case.


Although we were encouraged by this little calculation, we realized that we needed a more realistic drive-response system to display convergence and hence synchronization of one subsystem with another. That meant we had to set up differential equations that would more realistically mimic actual physical systems. And after making that last statement, we realized we eventually would make this whole approach more convincing if we could find a physical system that could be built to show synchronization.

We set up the usual Runge-Kutta (4,5) numerical solver with some unusual (at the time) vector fields: an original vector field together with a copy of part of the original (a subsystem) linked to the original by a drive signal from the missing part. We did this for the Lorenz and Rössler systems along with several others. The equations display more clearly what the word description says (note that drives and responses always have the same parameter values):

ẋ = σ(y − x),   ẏ = r x − y − x z,   ż = x y − b z   (drive)
ẏ′ = r x − y′ − x z′,   ż′ = x y′ − b z′   (response)      (3)

(a) is the Lorenz drive-response system using x driving;

ẋ = −(y + z),   ẏ = x + a y,   ż = b + z(x − c)   (drive)
ẏ′ = x + a y′,   ż′ = b + z′(x − c)   (response)      (4)

(b) is the Rössler drive-response system using x driving;

ẋ = −(y + z),   ẏ = x + a y,   ż = b + z(x − c)   (drive)
ẋ′ = −(y + z′),   ż′ = b + z′(x′ − c)   (response)      (5)

(c) is the Rössler drive-response system using y driving.

Immediate observations were that for parameters that gave chaotic behavior, x-driving worked to synchronize the Lorenz system, y-driving worked to synchronize the Rössler system, but x-driving did not work to synchronize the Rössler system. For example, if we start a Rössler drive and a y-driven response from different x and z values and let them evolve, we get trajectories like those in Fig. 3(a). For a Lorenz system with an x-driven response and different y and z values, we get Fig. 3(b). This synchronization has come to be called identical synchronization since the result (at infinite time) is that the two systems (drive and response) have exactly equal dynamical variables.
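To make the x-driving construction concrete, here is a minimal numerical sketch (not the code we actually used) of the Lorenz drive-response pair of Eq. (3), integrated with a simple fourth-order Runge-Kutta step. The initial conditions and step size are arbitrary illustrative choices, the parameter values are those listed in Table I below, and the transmitted x is held constant over each integration step for simplicity.

import numpy as np

sigma, r, b = 10.0, 60.0, 8.0/3.0          # parameter values listed in Table I

def lorenz(v):
    x, y, z = v
    return np.array([sigma*(y - x), r*x - y - x*z, x*y - b*z])

def response(w, x):                         # Eq. (3) response: (y', z') driven by x(t)
    yp, zp = w
    return np.array([r*x - yp - x*zp, x*yp - b*zp])

def rk4(f, v, dt, *args):
    k1 = f(v, *args); k2 = f(v + 0.5*dt*k1, *args)
    k3 = f(v + 0.5*dt*k2, *args); k4 = f(v + dt*k3, *args)
    return v + dt*(k1 + 2.0*k2 + 2.0*k3 + k4)/6.0

dt = 0.001
drive = np.array([1.0, 2.0, 30.0])          # arbitrary initial condition
resp = np.array([-5.0, 10.0])               # (y', z') started far from (y, z)
for n in range(20001):
    if n % 5000 == 0:
        err = abs(resp[0] - drive[1]) + abs(resp[1] - drive[2])
        print(f"t = {n*dt:5.1f}   |y'-y| + |z'-z| = {err:.3e}")
    x = drive[0]                            # the transmitted signal, held fixed over one step
    resp = rk4(response, resp, dt, x)
    drive = rk4(lorenz, drive, dt)

The printed difference should decay rapidly toward zero, which is the identical synchronization described above.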

FIG. 3.

Rössler and Lorenz drive-response systems. (a) The Rössler y-drive-response system of Eq. (5). The y-component is the same in both drive and response systems. The systems are started out of sync, but the response soon spirals into an attractor synchronized with the drive. (b) The Lorenz x-drive-response system of Eq. (3). (Adapted from Refs. 3 and 4)


All this is not obvious, but it emphasized that we needed a stability theory to explain why the phenomenon worked or did not work for all systems we tried. The usual Lyapunov exponents for the original Lorenz or Rössler systems give no information for the drive-response systems. But we can follow a similar process to develop a variational equation for our cases. For the Lorenz with x driving, we set y′ = y + δy and z′ = z + δz and subtract the y-z subsystems of the drive and response. This generates a variational equation for the y-z response subsystem

δẏ = −δy − x δz,   δż = x δy − b δz      (6)

In a manner analogous to finding the usual Lyapunov exponents, we evolve Eq. (6) along with the associated drive and calculate stability exponents.5 Because these are different from the usual Lyapunov exponents and they depend (are conditional) on the x signal, we call these exponents conditional Lyapunov exponents.6 For the Lorenz and Rössler systems, we get the exponents shown in Table I.
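In the same spirit, the largest conditional Lyapunov exponent of the x-driven Lorenz subsystem can be estimated by evolving the variational equation (6) along a drive trajectory and averaging the logarithmic growth rate of a perturbation. This is a minimal sketch of that calculation, assuming the same parameters as Table I; a value near the (−1.81, −1.86) pair listed there would be expected, though the numbers produced here are only illustrative.

import numpy as np

sigma, r, b = 10.0, 60.0, 8.0/3.0

def lorenz(v):
    x, y, z = v
    return np.array([sigma*(y - x), r*x - y - x*z, x*y - b*z])

def variational(d, x):                      # Eq. (6): linearized (y, z) subsystem, x from the drive
    dy, dz = d
    return np.array([-dy - x*dz, x*dy - b*dz])

def rk4(f, v, dt, *args):
    k1 = f(v, *args); k2 = f(v + 0.5*dt*k1, *args)
    k3 = f(v + 0.5*dt*k2, *args); k4 = f(v + dt*k3, *args)
    return v + dt*(k1 + 2.0*k2 + 2.0*k3 + k4)/6.0

dt, nsteps = 0.001, 100000
drive = np.array([1.0, 2.0, 30.0])
delta = np.array([1.0, 0.0])                # perturbation of the (y', z') response
log_growth = 0.0
for n in range(nsteps):
    delta = rk4(variational, delta, dt, drive[0])
    drive = rk4(lorenz, drive, dt)
    norm = np.linalg.norm(delta)
    log_growth += np.log(norm)              # accumulate growth, then renormalize
    delta /= norm
print("largest conditional Lyapunov exponent ~", log_growth/(nsteps*dt))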

TABLE I.

Conditional Lyapunov exponents for Rössler and Lorenz.

System                           Drive   Response   Conditional Lyapunov exponents
Rössler (a = 0.2, b = 0.2,         x      (y, z)     (+0.2, −8.89)
  c = 9.0)                         y      (x, z)     (−0.056, −8.81)
                                   z      (x, y)     (+0.1, +0.1)
Lorenz (σ = 10, b = 8/3,           x      (y, z)     (−1.81, −1.86)
  r = 60.0)                        y      (x, z)     (−2.67, −9.99)
                                   z      (x, y)     (+0.0108, −11.01)

We can see immediately that the Rössler x-driven y-z subsystem is unstable and will not synchronize since one of the conditional Lyapunov exponents is positive.

It is easy to see that this can apply to dynamical equations and systems of any dimension. And one can also generalize to systems using multiple drive signals and maps.4 

As the above was developed, we worked on implementing this approach to synchronization in a real system. Thomas Carroll had long been interested in analog computers and circuits after seeing one in operation by his undergraduate professor. The circuit used operational amplifiers, capacitors, and resistors to simulate the equations for a bouncing ball with damping, and the circuit output was displayed on an oscilloscope. There was now a good reason to attempt to build an analog computer circuit: chaotic synchronization. Although other physical systems might have worked, a circuit can often be split into sub-circuits easily just like in the synchronization technique that was developed above. Tom found a chaotic circuit in the literature that had been developed by Newcomb and Sathyan7 and Newcomb and El-Leithy8 of the University of Maryland. The circuit has two lobes of unstable foci analogous to a Lorenz system. The circuit hopped between the lobes, but the hopping occurred at values of the dynamical variables (the voltages and currents) that made the system hysteretic. Tom built a pair of circuits: a drive circuit and a response circuit that synchronized to the drive circuit. This final experimental confirmation was important. It showed that the technique could be implemented in a real system with noise and parameter mismatch.

As for keeping the Navy interested in our work, we did a live demonstration of chaotic synchronization and signal masking in front of our lab director. As a plasma physicist, he was skeptical of this new field of chaos, as he believed plasma physicists were already studying nonlinear dynamics, but he was intrigued enough by our new work to allow us to continue. Tom Carroll even believes that chaotic synchronization directly caused him to be hired as a permanent employee.

Most of the above (except the section on the 2D logistic map synchronization) appears in our paper3 as well as our second paper4 along with a lot of other information on this approach to synchronization.

In the world of science and mathematics, new proofs, interesting phenomena, physical behavior, etc., are often discovered more than once, often without knowledge of the previous work beforehand. This happened to us. It turns out that synchronization of chaotic systems of the kind we presented above (identical synchronization) was discovered several times, often using different approaches. We list in reverse historical order publications and other information that we discovered over the years after our initial work. The reverse order here is not the order in which we actually discovered them.

1. In 1989, Aranson and Rul'kov published a paper analyzing synchronization zones in multidimensional dynamical systems.9 This paper studied the parameters and bifurcation diagrams of synchronized, driven microwave oscillators. This is an early study of synchronization in nonlinear multidimensional systems.

2. In 1989, Volkovskii and Rul'kov10 studied the bifurcations which lead to stochastic locking in coupled, self oscillating systems with piecewise nonlinear vector fields. They studied both mutual and unidirectional coupling. As in so many Russian papers on this subject, the authors also built experiments to test the theory.

3. Afraimovich et al. published a now famous paper in 1986 (Ref. 11) on the mutual synchronization of two nonautonomous, chaotic oscillators. When we first came across this paper, we were confused by the label of "stochastic" and only later realized that it described identical synchronization through the dynamics, similar to our system. At a conference, Rabinovich assured us that their approach was actually a general version of what we had done. Their equations of motion in Ref. 12 were

(7)

The mutual couplings c1 and c2 provided a dissipative dynamics pulling the two systems into synchrony despite their chaotic behavior if the coupling was large enough. If we set c1 = 0 and let c2 → ∞, we turn the first line in Eq. (7) into a drive and the second line into a response with y2 slaved to y1 just like our original approach. So Rabinovich was right. However, he was gracious in saying that they missed the possible applications of such synchronization, which is what our paper provided.

4. Pikovskii published two papers that had synchronization as their theme in 1984.13,14 By coupling chaotic systems, Pikovskii found he could synchronize two chaotic attractors.

5. Starting in 1983, Fujisaka and Yamada published a series of four papers on synchronization in chaotic systems in Progress in Theoretical Physics.15–18 These papers investigated in much detail several aspects of symmetrically coupled identical chaotic systems and their synchronization. The systems were thought of as locations along spatial axes with a diffusive type of coupling, a discrete analog of the usual diffusive coupling in continuous spatial systems, that is, the spatial Laplacian operator ∇². This yields a set of discrete, coupled ordinary differential equations (ODEs)

ẋ_j = F(x_j) + D(x_{j+1} − 2x_j + x_{j−1})      (8)

where D is the diffusion constant, which acts like a coupling constant linking the dynamics of the different spatial locations; because of its dissipative nature, the coupling term makes the synchronization of the different spatial locations possible for certain values of D. Fujisaka and Yamada's work is the earliest we know of showing chaotic synchronization.
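As an illustration of the kind of symmetric, diffusive coupling in Eq. (8), the following sketch couples just two identical Lorenz systems (the classic σ = 10, r = 28, b = 8/3 parameters, our choice for illustration rather than anything from Refs. 15–18) through all components, with a diffusion constant D large enough that the difference between them decays.

import numpy as np

sigma, r, b = 10.0, 28.0, 8.0/3.0           # classic Lorenz parameters (illustrative)
D = 5.0                                     # diffusion (coupling) constant

def lorenz(v):
    x, y, z = v
    return np.array([sigma*(y - x), r*x - y - x*z, x*y - b*z])

def coupled(u):                             # Eq. (8) for just two sites, all components coupled
    x1, x2 = u[:3], u[3:]
    return np.concatenate([lorenz(x1) + D*(x2 - x1),
                           lorenz(x2) + D*(x1 - x2)])

def rk4(f, v, dt):
    k1 = f(v); k2 = f(v + 0.5*dt*k1); k3 = f(v + 0.5*dt*k2); k4 = f(v + dt*k3)
    return v + dt*(k1 + 2.0*k2 + 2.0*k3 + k4)/6.0

dt = 0.001
u = np.array([1.0, 2.0, 30.0, -8.0, 5.0, 10.0])   # two different initial conditions
for n in range(20001):
    if n % 5000 == 0:
        print(f"t = {n*dt:5.1f}   |x1 - x2| = {np.linalg.norm(u[:3] - u[3:]):.3e}")
    u = rk4(coupled, u, dt)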

Once one sees the interesting phenomena emerging from complex systems constructed from simpler components (e.g., drive-response or coupled systems), one can easily imagine constructing many new and interesting ensembles of simple, chaotic, or nonchaotic components. As the field of synchronization of chaotic systems began to grow, we would characterize the use of various coupling schemes and constructed systems as becoming a "cottage industry" in the world of nonlinear dynamics. The literature on all the possible approaches is huge and we will not attempt to cover it here. We will give only some highlights from our own work and some other work that introduced new, interesting phenomena discovered by constructing various coupled systems.

One of the first extensions we made to the drive-response approach was to cascade the systems so as to reproduce the incoming drive signal when the drive and response synchronize.19 This is shown in Fig. 4 below. Basically, we form two drive-response systems so that the second one in the cascade has the original drive component as part of the response, thus reproducing the incoming drive signal.

FIG. 4.

A schematic of how to set up a cascaded drive-response system that reproduces the original drive signal. Note this is only possible when the two different response systems are stable to their drive signals. For example, it is possible with the Lorenz system above, but not the Rössler system.


We also showed that we could build drive and response systems whose attractors filled a subset of the phase space with non-zero measure and still have them synchronize. The goal here was to find more random looking functions which would serve communications better, but still be deterministic so they could be synchronized. We accomplished this by starting with an n-dimensional phase space and a linear, expanding map L (this provides the positive Lyapunov exponents) that acts on the phase space points and couples the components, i.e., the output is a linear combination of the input components. Thus, if x(m) is the n-dimensional vector of components, where m is the time step, then the linearly transformed vector is just Lx(m). However, we must add another function acting on this expanded vector to fold it back into the original volume so the system will remain finite. We do this by using a piecewise linear folding function F. Fig. 5 shows a simple folding function analogous to a Bernoulli map. Another version is just the map F(x) = (x1 mod k, x2 mod k, …, xn mod k).

FIG. 5.

A folding map with mod at ±k (from Ref. 60).


Then the iterated coupled maps are just written as

x_d(m+1) = F(L x_d(m)),   x_r(m+1) = F(L x̃_r(m))      (9)

where x_d is the drive vector, x_r is the response vector, and x̃_r = (x1_d, x2_r, …, xn_r), i.e., the response vector with its first component replaced by that of the drive, so we drive only with the first component of x_d. Obviously, this can all be generalized (see Ref. 20). The important point is that a very random-looking attractor can be synchronized with its driven partner, the response. Fig. 6 shows a drive attractor for a 3D system. Note that the attractor fills a 3D cubic volume. The synchronized response is exactly the same.
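A minimal sketch of the drive-response map pair in Eq. (9) is given below. The expanding matrix L here is an arbitrary illustrative choice (it is not the volume-preserving map of Ref. 20, and the folding constant k = 1 is likewise arbitrary); what matters is that the 2 × 2 block of L acting on the undriven components is contracting, so the driven response can converge to the drive.

import numpy as np

rng = np.random.default_rng(1)
k = 1.0                                     # folding constant for the mod map
# Expanding linear map L (illustrative choice); the lower-right 2x2 block,
# which governs the undriven components of the response, is contracting.
L = np.array([[2.3, 0.4, 0.7],
              [0.9, 0.4, 0.2],
              [0.6, 0.1, 0.3]])

def step(v):
    return (L @ v) % k                      # expand, then fold back into the unit cube

xd = rng.random(3)                          # drive state
xr = rng.random(3)                          # response state, different initial condition
for m in range(300):
    xd_next = step(xd)
    xr_next = step(np.array([xd[0], xr[1], xr[2]]))   # first component taken from the drive
    xd, xr = xd_next, xr_next
# An occasional mod-boundary crossing can kick the error up transiently,
# but it contracts again through the stable 2x2 block.
print("|response - drive| in the undriven components:", np.abs(xr[1:] - xd[1:]))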

FIG. 6.

Three-dimensional view of the full phase space trajectory for a volume preserving map (from Ref. 60).


An advantage of this synchronized system is that the autocorrelations drop off to near zero after one step. The synchronization is also very fast, with differences between the drive and response down by a factor of 1000 in four steps. Moreover, the spectrum of the system is broad-band and much like white noise. We built a circuit based on these ideas to show proof of principle. In addition, we could mix a message into the drive signal using the more robust XOR operation between the message and the drive signal. This was easily recovered on the response side (see Ref. 20).

An important finding by Peng et al. was that systems with more than one positive Lyapunov exponent could still be synchronized using one drive (scalar) signal.21 There was speculation that one would need as many drive signals as there were positive Lyapunov exponents in the original system. The paper by Peng et al. cleared that controversy up. Using their formulation as shown below in Eq. (10), we found another interesting behavior of synchronized systems.

A postdoc working with us (Johnson) discovered that we could optimize the linear coupling of Peng et al. between the drive and response by tuning the coupling matrix to have the best properties to rapidly and stably synchronize the response.22 This resulted in the ability to synchronize the response so well that even when the drive's parameters changed by as much as 50% and it underwent a bifurcation (e.g., from chaotic to periodic behavior), the response was carried along with it, undergoing the same bifurcation despite having different parameters. The general scheme looks like this

ẋ = F(x),   ẏ = F(y) + B K^T(x − y)      (10)

where B and K are vectors which can be tuned to guarantee the greatest stability of the response following Johnson et al.22 We developed a four-dimensional experimental system (a circuit) which has the dynamics of the following ODE system:

(11)

where ρ is the bifurcation parameter varied between 0.05 and 0.91 and Θ is the Heaviside step function. With Eq. (11) as the drive and the same with the coupling term as the response, we ran experiments in which ρ is varied by 50% of its value through a set of bifurcations. With the coupling term tuned to give maximum stability for the parameter matching case, we get the progression of attractors shown in Fig. 7 going from periodic, to multiperiodic, and on to chaotic dynamics. The response (with ρ held constant) follows the drive through the bifurcations. We called this behavior imposed bifurcations.

FIG. 7.

In the first two panels, the attractors of Eq. (11) are shown as ρ is varied in the drive through several bifurcations and the response follows, despite its ρ being held constant. The third panel shows the change in the drive bifurcation parameter (white bars), but the response parameter (black bars) is held constant (from Ref. 22).


We note briefly here that for the last 10–15 years, many control theory approaches have been proposed to ensure the synchronization of a drive-response pair. This work is a bit far afield for CHAOS and is certainly too large to include in any concise fashion. We simply note that many control theory techniques, such as sliding mode control, linear matrix inequalities (LMI), simple coupling, impulse stabilization, and several applications of observer theory, have been used to successfully synchronize two chaotic systems. One important contribution of control theory approaches is that for some systems it has been possible to prove global synchronization. This is a much stronger result than linear stability theory, such as the conditional Lyapunov exponents above.

A few years after working on synchronization of two chaotic systems, we ventured into the realm of synchronization in networks of oscillators. We began with the case of oscillators coupled using general forms of linear diffusive coupling, e.g., nearest-neighbor linear diffusive coupling. We examined both periodic and nonperiodic boundary conditions. These systems are straightforward to analyze because we can diagonalize the coupling matrix and reduce the equations to easily interpreted forms. However, the ingredients for much more general scenarios are in these simple cases, and we find some dynamical surprises along with the seed for a very general result for more complex networks of oscillators.

In the case of linear diffusive coupling, we can write the coupling matrix as a shift-invariant or circulant matrix. Here are the equations of motion for the near-neighbor coupling case along with the coupling matrix

ẋ = F(x) + σ (A ⊗ B) x      (12)

where x = (x1, …, xN), F(x) = (F(x1), …, F(xN)), σ is the coupling strength, each xi is an n-dimensional dynamical variable at one node/oscillator in the network, and F is an n-dimensional vector field, identical for each xi. The coupling is through the nN × nN matrix G = A ⊗ B, where B is an n × n matrix picking out which components of each xi are coupled to its neighbors and A is an N × N circulant matrix given by

(13)

If s(t) is the synchronized motion of the system, that is, when all oscillators in the network are synchronized and follow the trajectory s(t), then we can view this as the motion settling onto a synchronization manifold of n dimensions in the nN-dimensional phase space. The synchronization manifold is a flat hyperspace defined by all the equations of the type xi = xj for i, j = 1, …, N. What is important is whether s(t) is a stable motion at each value of σ. This can be calculated from the variation of the equation of motion, Eq. (12), along the s(t) trajectory, given by the linear stability equations of motion

ξ̇ = [1_N ⊗ DF(s(t)) + σ (A ⊗ B)] ξ      (14)

The Lyapunov exponents of Eq. (14) determine the stability of the system, but as it stands, Eq. (14) does not give a clear picture of the stability of the synchronous state. This is because the perturbations which leave the state in synchrony, i.e., perturbations which have all equal coordinates and keep the motion on the synchronization manifold, are mixed in with the perturbations which desynchronize the system. The former do not matter in the calculation of the Lyapunov exponents for the stability of the synchronous state. Only the perturbations which desynchronize the system determine the stability of the synchronous state. To separate the effects on stability of the parallel and transverse perturbations, we need to make a transformation of the variational equations which separates the two. That transformation is the diagonalization of the coupling term, since the individual vector field part (F(x)) is already diagonal as a multiple of the identity matrix.

The transformation which diagonalizes G is the discrete Fourier transform. This is the case for all shift-invariant (circulant) matrices. Recall that the first term in Eq. (14) is really 1_N ⊗ DF(s(t)), where 1_N is the N × N identity matrix. Since we are diagonalizing only the first factors in the direct products, this term is unchanged. Writing the new variational dynamical variables as ζ = Tξ with T = S ⊗ 1_n, where S diagonalizes the circulant matrix A, we have a set of uncoupled ODEs, as one would get with any mode decomposition in a linear equation

ζ̇_k = [DF(s(t)) + σ λ_k B] ζ_k,   k = 0, 1, …, N − 1      (15)

that is, the λk are the eigenvalues of the matrix A.

At this point, Heagy, a postdoc working with us on this problem who derived Eq. (15), made several important observations. One is that the eigenvalue λ0 = 0 is associated with the variations in the synchronization state, i.e., variations that do not desynchronize the oscillators. The eigenvector for this mode is (1, 1, …, 1)/√N. All other eigenvalues are associated with transverse variations which do desynchronize the oscillators. It is the stability of the latter variations that matters for the study of the stability of the fully synchronized state. If all the variations of the k ≠ 0 eigenmodes have negative Lyapunov exponents, then the synchronized state is stable. If one or more are positive, the synchronized state is unstable. And the most important observation is that the variational equations (Eq. (15)) are of the same form for all modes, except for the factor of λk multiplying the coupling strength σ. So if we solve for the Lyapunov exponents of one mode as a function of the coupling σ, we automatically have the exponents for all modes by just scaling their couplings accordingly. It turns out that all these observations generalize to the case of arbitrary networks, including nonsymmetric and weighted coupling cases; more on that later.

When we applied this theory to Rössler oscillators coupled to nearest neighbors through the x variable, we discovered a new phenomenon called a short wavelength bifurcation.23 The origin of this name and the phenomenon is easily seen by considering the point above about each mode having the same functional form of Lyapunov exponent vs. σ, except for the argument being scaled by the eigenvalues λk. Suppose we have a ring of four Rössler oscillators coupled through their x variables to nearest neighbors; then we will have two transverse modes present (one of which is doubly degenerate), given by λ1 = λ3 = 4 sin²(π/4) = 2 for the longer wavelength mode and λ2 = 4 sin²(π/2) = 4 for the shorter wavelength mode. By wavelength here we are thinking of the form of the perturbations associated with each mode around the ring. Hence, the shorter wavelength mode acts as though the coupling is twice as large. This leads to a plot of the maximum Lyapunov exponents μ for these transverse modes vs. coupling as in Fig. 8.
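The eigenvalue bookkeeping for the four-oscillator ring is easy to check numerically. The sketch below builds the circulant near-neighbor coupling matrix in the sign convention used in the text (eigenvalues 4 sin²(πk/N)) and confirms the spectrum 0, 2, 2, 4: the zero mode along the synchronization manifold, the doubly degenerate long-wavelength mode, and the short-wavelength mode that effectively sees twice the coupling.

import numpy as np

N = 4
# Circulant near-neighbor coupling matrix for a ring of N oscillators,
# in the sign convention of the text (eigenvalues 4 sin^2(pi k / N)).
P = np.roll(np.eye(N), 1, axis=0)           # cyclic shift around the ring
A = 2.0*np.eye(N) - P - P.T
print("eigenvalues of A:  ", np.sort(np.linalg.eigvalsh(A)))
print("4 sin^2(pi k / N): ", np.sort(4.0*np.sin(np.pi*np.arange(N)/N)**2))
# For N = 4: 0 (motion within the synchronization manifold),
# 2, 2 (the degenerate long-wavelength mode), and 4 (the short-wavelength mode).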

FIG. 8.

Plot of the maximum Lyapunov exponents for the three transverse modes in the four-Rössler ring (from Ref. 23).


The curves are the same, but are scaled differently in their argument. Note that for the x-coupled Rössler oscillators, increasing the coupling can desynchronize the oscillators; the shortest wavelength mode goes unstable first. This is somewhat counterintuitive since most instabilities in diffusive coupling come from decreasing the coupling strength until the longest wavelength mode becomes unstable. In a sense, this is an extreme form of a Turing bifurcation.24 The loss of synchronization stability at high coupling in this case is consistent with Table I, which shows that the x-driven response is unstable, since the drive-response system is the same as a diffusive, one-way coupling in which the coupling becomes infinite. A short wavelength bifurcation or instability also implies that some networks of these oscillators can never have stable chaotic synchronization. In the case of the ring configuration above, for the parameters used for the Rössler, there is a size effect:25 adding more oscillators to the ring will eventually prohibit any synchronization, because the highest mode will undergo a short wavelength bifurcation as the coupling increases before the lowest mode becomes stable.

In 1991, Pikovsky and Grassberger26 found that symmetry breaking bifurcations, as the coupling strength is changed between initially identical, synchronized chaotic one-dimensional systems, can cause certain orbits on the attractor to become unstable, causing excursions out of the synchronous state. They exhibited the local strange invariant set which caused the excursions. This was the first evidence noted in the literature for a phenomenon later called attractor bubbling. In 1994, Ashwin et al.27,28 published papers which also showed that as one changes a parameter (e.g., coupling strength) that decreases the stability of the synchronous state of coupled, chaotic oscillators, the dynamics and associated attractor geometry go through a series of changes. The first occurs when the synchronized attractor loses asymptotic stability. For example, certain periodic orbits in the synchronized chaotic attractor become unstable. At this stage, if there is a coexisting attractor that the system will eventually be attracted to as the parameter changes, the basin of attraction for the chaotic attractor becomes riddled with points from the basin of attraction of the coexisting attractor.29–31 The second change occurs at another parameter value, where the attractor becomes a chaotic saddle, no longer a true attractor. And the third change in behavior, as the parameter continues to be varied, is that the saddle becomes a chaotic repeller.

During the first stage if there is a little noise or parameter mismatch on the synchronized chaotic systems, then the systems will occasionally diverge from the synchronized state, then return to it. This constant fluctuation is what is usually called “Attractor Bubbling.” It occurs when the synchronized trajectory wanders near one of the unstable sets (e.g., a periodic orbit) and the system is temporarily kicked out of sync. If there is no coexisting attractor, the system will just bubble. Ashwin et al.,27,28 demonstrated the bubbling behavior in two coupled electronic circuits.

In studying the short-wavelength bifurcation and the loss of synchronization stability, we noted that there are two coexisting attractors near the desynchronization threshold for the highest mode. Ott suggested that this situation might display riddled basins of attraction.32 The coexisting attractors are arranged spatially like the wave of instability of the highest mode and a symmetric partner to it. In Fig. 9(a), we show the chaotic attractors for the original ring of four Rössler oscillators and the two symmetrically arranged periodic attractors which coexist with the synchronized chaotic attractors, i.e., the attractors left after the coupling parameter is set at a value where the synchronized state is unstable. We automated our original four-Rössler experiment so we could scan a part of the phase space which showed the highest spatial frequency of deviations from the chaotic synchronized state (the spatial frequency of the periodic oscillator placement). This gave the first experimental picture of a riddled basin of attraction.33

FIG. 9.

Attractors and their riddled basins of attraction. (a) The original, banded synchronized chaotic attractors along with the coexisting pair of periodic attractors. (b) The riddled basins of attraction represented by the high Fourier spatial frequency (2nd mode) of the fluctuations from the synchronized, chaotic state. White points go to either of the periodic attractors and black to the synchronized chaotic attractor. This Fourier mode is the same as the unstable mode from the short wavelength instability and so captures most of the spatial displacement from the chaotic attractor. (c) A close up of the riddled basin is shown in the inset for (b) (from Ref. 33).


An important concept that came out in the early 1990s captures possible relationships between a drive and a synchronized response or, more generally, a stable response. This was called generalized synchronization34 because the response is stable (its conditional Lyapunov exponents are all negative) but is a different system from the drive; this is not identical synchronization as above, but a generalized form of it, hence the name. A criterion of generalized synchronization is that there is a function, say, φ, from the drive to the response so that each point on the drive attractor x(t) is mapped to a point y(t) = φ(x(t)) on the response. The stability of the response is necessary for such a function to exist, but is not sufficient (see below). In a followup paper, Abarbanel et al.35 provided a tool to check easily for the possibility of generalized synchronization, the auxiliary system method. This consists of starting with two identical responses and driving both with the same system. If the responses are stable and there is only one basin of attraction, then the two responses will synchronize with each other, and plotting the variables of one against their counterparts on the other response should yield the signature straight, diagonal line of synchronization. There have been many followup studies and discoveries of generalized synchronization, and to this day the concept remains interesting.
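The auxiliary system test is simple to sketch numerically. Below, a Lorenz x signal drives two copies of a deliberately simple, contracting response (our own toy choice, not a system from Refs. 34 and 35); because the response is stable, the two copies converge to each other even though neither looks like the drive, which is the signature of generalized synchronization.

import numpy as np

sigma, r, b = 10.0, 28.0, 8.0/3.0
gamma = 2.0                                 # damping of the toy response (illustrative)

def lorenz(v):
    x, y, z = v
    return np.array([sigma*(y - x), r*x - y - x*z, x*y - b*z])

def response(w, x):                         # a simple, contracting response driven by x(t)
    return np.array([-gamma*(w[0] - x), -gamma*(w[1] - w[0]**2)])

def rk4(f, v, dt, *args):
    k1 = f(v, *args); k2 = f(v + 0.5*dt*k1, *args)
    k3 = f(v + 0.5*dt*k2, *args); k4 = f(v + dt*k3, *args)
    return v + dt*(k1 + 2.0*k2 + 2.0*k3 + k4)/6.0

dt = 0.001
drive = np.array([1.0, 2.0, 30.0])
copy_a = np.array([3.0, -1.0])              # the response
copy_b = np.array([-7.0, 9.0])              # the auxiliary copy, different initial condition
for n in range(20000):
    x = drive[0]
    copy_a = rk4(response, copy_a, dt, x)
    copy_b = rk4(response, copy_b, dt, x)
    drive = rk4(lorenz, drive, dt)
print("|response - auxiliary copy| =", np.linalg.norm(copy_a - copy_b))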

We mentioned above that there are times when the stability of the response is not enough to guarantee that the function φ exists. An example is a periodic system driving a nonlinear system that has gone through a period-doubling bifurcation. This means that the drive goes around its attractor twice while the response goes around once. Hence, there must be two points on the response which correspond to each point on the drive. Of course, mathematically, a function cannot do that. This situation was found to exist (surprisingly) in a chaotic response by Parlitz et al.36 In the latter case, the drive and response were chaotic, but the response was entrained in a 2:1 way. For example, an unstable periodic orbit in the drive had a period-doubled unstable orbit in the response associated with it.

We mentioned above that we were initially somewhat naïve about the difficulties of actually using chaos for a communication system. Identical synchronization between two chaotic systems in the presence of noise becomes impossible for noise whose amplitude is greater than approximately 10% of the signal amplitude. The chaotic response system mixes noise and signal in a nonlinear fashion, making it impossible to separate the two by conventional methods, such as filtering or correlation.

Sterling made one attempt to reduce the effect of noise on a chaotic shift map.37 He considered maps of the form

where xn is a driving signal. A noise signal added to xn could cause yn to exceed the threshold for the modulus function, which would show up as a sudden change in the value of yn; a lack of a corresponding change in xn indicated that the change was due to noise.

We realized that conventional noise reduction schemes depend on some form of time integration: over a long time, the signal should integrate to some finite value, while the noise integrates to zero. Even simple filters sum values of a signal from a range of different times. We designed a chaotic system that operated on two different time scales to take advantage of time integration.

The two-frequency chaotic system was based on a piecewise linear version of the Rössler system.38 Denoting the piecewise linear Rössler vector field as Rpl allowed the two-frequency Rössler system to be represented schematically as

where f = fast, s = slow, and pl = piecewise linear.

The slow vector field, Rs, was similar to Rpl, but it included extra damping so it did not oscillate on its own. The signal that was transmitted was a linear combination of the fast and slow signals.39 The transmitted signal was coupled into an identical response system using linear diffusive coupling. The output of the response system was the first component of the slow variables. As the difference between the time constants τfast and τslow increased, the low frequency part of the response system synchronized more and more closely to the low frequency part of the drive system, even for noise larger than the drive signal.

The reason the low frequency synchronization was so stable was traced to the presence of long period unstable periodic orbits with purely real Floquet multipliers. The presence of such orbits may be more common than is realized in other physical systems; in the same paper on chaotic systems, it was also shown that low period unstable periodic orbits (UPOs) with real Floquet multipliers were present in a published neuron model.40 Such UPOs may have been responsible for the noise robustness of information transmission in the neuron model.

Above, we mentioned our first venture into synchronization of oscillators in a network for a shift-invariant coupling matrix. As we will show, the analysis of that case leads to a very general analysis of synchronization in arbitrarily coupled networks of oscillators. But first we should mention two other early and important works on synchronization in networks of coupled, chaotic oscillators.

In 1995 and 1996, Gade et al.41 and Gade,42 respectively, showed that synchronization in certain types of networks could have a general form for the analysis of the system's stability. In the 1995 paper, they showed that coupled maps on tree-structured networks would have better synchronization robustness than the same chaotic maps on a regular (e.g., nearest-neighbor coupled) lattice. They also exhibited results for synchronization in the limit of an infinite lattice and showed that solving the eigenvalue problem for the coupling matrix enabled a very general solution for the tree-network coupling synchronization problem. The 1996 paper analyzed synchronization of chaotic maps in networks which had nonlocal coupling, for example, small-world networks. Again, the eigenvalue spectrum of the coupling matrix was the key to finding relations between the coupling constant and synchronization stability.

In 1998, Hu et al.43 showed that diffusively, linearly coupled oscillators on lattices with periodic boundary conditions can be analyzed for stability of the synchronous state using an eigenvalue approach. They introduced the idea of a matrix that picks out which components of each oscillator will be coupled to others, and showed that the regions of the complex plane encompassing the eigenvalues of the coupling matrix change drastically with the choice of that oscillator coupling matrix. They also showed that the regions of stability for a specific type of coupling matrix remained the same as the lattice of oscillators grew.

In 1998, we returned to the topic of synchronization of chaotic systems in networks. We started with the above case of circulant coupling matrices and quickly decided that we could do stability analyses for an arbitrary network for which the coupling matrix was diagonalizable. We started with the very general set of equations of motion for the coupled system

ẋ_i = F(x_i) + σ Σ_j G_ij H(x_j),   i = 1, …, N      (16)

where everything has the same meaning as before except that we now have a coupling function H: R^m → R^m, which makes this formulation of the problem very general. We started by assuming the coupling G is bidirectional, i.e., G is a symmetric, real matrix, and has zero row sums, which guarantees that the fully synchronized state of all oscillators is a solution of Eq. (16).

Our starting point is the important observation about scaling made above in Sec. III D. By diagonalizing G, we would end up using a scaling argument to generate the stability of each eigenmode from a calculation of only one, as in Eq. (15), but where λk is now an eigenvalue of the coupling matrix under study. However, we realized that it was better to turn this argument around. Instead of generating a separate curve for each eigenmode, we could have a function which was the maximum Lyapunov exponent for the variational equation

ζ̇ = [DF(s(t)) + α DH(s(t))] ζ      (17)

Then this function represents the maximum Lyapunov exponent as a function of the "coupling" α. To find the stability of a particular eigenmode (λk) at a particular value of the coupling σ, we locate the point on the α axis where α = σλk. An example of this function for the Rössler system, where H merely picks out the x component to use in the coupling, is shown in Fig. 10(a). The important observation is that Eq. (17) does not depend on the coupling matrix G, but only on the dynamics F and the coupling function H. This means that for symmetric coupling G, Fig. 10(a) shows a stability function that applies for any bidirectionally coupled network with the same F and H. We named this stability function the Master Stability Function.

FIG. 10.

Plots of the master stability function in the case of (a) symmetric coupling matrices (real eigenvalues) and (b) nonsymmetric coupling matrices (complex eigenvalues, in general) for x-coupled Rössler networks (from Refs. 23 and 61).


We realized that having a symmetric coupling matrix G is not necessary so long as G is diagonalizable. In general, the eigenvalues will be complex, so we can solve for the MSF over the complex plane using a complex extension of Eq. (17), ζ̇ = [DF(s(t)) + (α + iβ)DH(s(t))]ζ. To determine the stability of an eigenmode, we locate the position of the complex eigenvalue (λk) and determine if it is in the stable or unstable region of the MSF (see Fig. 10(b)). The value of the MSF is that it separates the dynamics (the stability) from the network (the eigenmodes and eigenvalues) so each can be calculated separately, and one calculation of the MSF can be used for the stability analysis of any network configuration with the same dynamics.
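A rough sketch of a master stability function calculation for x-coupled Rössler oscillators is given below: Eq. (17) is integrated along a Rössler trajectory and the largest exponent is estimated for a range of α. Sign conventions differ between papers, so here α multiplies an attractive feedback in the x component (the matrix E plays the role of DH); the step sizes, integration times, and the list of α values are arbitrary illustrative choices, and a finer scan would be needed to locate the stability boundaries.

import numpy as np

a, b, c = 0.2, 0.2, 9.0                     # Rössler parameters as in Table I

def rossler(v):
    x, y, z = v
    return np.array([-(y + z), x + a*y, b + z*(x - c)])

def jacobian(v):
    x, y, z = v
    return np.array([[0.0, -1.0, -1.0],
                     [1.0,   a,  0.0],
                     [  z, 0.0, x - c]])

E = np.diag([1.0, 0.0, 0.0])                # coupling through the x component only

def rk4(f, v, dt, *args):
    k1 = f(v, *args); k2 = f(v + 0.5*dt*k1, *args)
    k3 = f(v + 0.5*dt*k2, *args); k4 = f(v + dt*k3, *args)
    return v + dt*(k1 + 2.0*k2 + 2.0*k3 + k4)/6.0

def msf(alpha, T=300.0, dt=0.01):
    """Largest exponent of Eq. (17) with an attractive x-feedback of strength alpha."""
    s = np.array([1.0, 1.0, 1.0])
    for _ in range(int(50.0/dt)):           # let the drive settle onto its attractor
        s = rk4(rossler, s, dt)
    var = lambda zv, st: (jacobian(st) - alpha*E) @ zv
    zeta = np.array([1.0, 0.0, 0.0]); acc = 0.0
    for _ in range(int(T/dt)):
        zeta = rk4(var, zeta, dt, s)        # drive state held fixed over one step
        s = rk4(rossler, s, dt)
        nrm = np.linalg.norm(zeta); acc += np.log(nrm); zeta /= nrm
    return acc/T

for alpha in [0.0, 0.5, 1.0, 2.0, 5.0, 10.0]:
    print(f"alpha = {alpha:5.1f}   largest exponent = {msf(alpha):+.3f}")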

There is much more that can be done with the MSF, but we stop at this point. There is a separate article in this issue that explores many facets of synchronization in networks of oscillators. We recommend any reader interested in synchronization or collective phenomena in networks explore that article.

We complete this overview of synchronization with a brief mention of two areas of future work using synchronization. One is parameter estimation, which is related to a similar problem in control theory; the other is a continuation of synchronization into the larger topic of collective behavior and dynamical patterns in networks.

1. Some early work

An early exploration of how synchronization can be used to estimate parameters was the paper by Parlitz.44 In this paper, he starts with a dynamical system (chaotic or not), ẋ = f(x, p), where p is a vector of parameters for the system, and then shows how to construct a system driven by a component of the x system, ẏ = g(s, y, q), so that y synchronizes with x when q = p. Thus, we can get an estimate of p by synchronizing y to x. The trick is to choose s as a function of x properly so we get that synchronization.
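As a crude illustration of the idea (a brute-force scan rather than Parlitz's adaptive autosynchronization scheme), the sketch below records a Lorenz drive trajectory with an "unknown" r, then runs the x-driven (y, z) response of Eq. (3) for several trial values of r; since that response subsystem is stable, the time-averaged synchronization error is smallest when the trial parameter matches the true one.

import numpy as np

sigma, b = 10.0, 8.0/3.0
r_true = 60.0                               # the "unknown" drive parameter

def lorenz(v, r):
    x, y, z = v
    return np.array([sigma*(y - x), r*x - y - x*z, x*y - b*z])

def response(w, x, r):                      # x-driven (y', z') subsystem with a trial r
    yp, zp = w
    return np.array([r*x - yp - x*zp, x*yp - b*zp])

def rk4(f, v, dt, *args):
    k1 = f(v, *args); k2 = f(v + 0.5*dt*k1, *args)
    k3 = f(v + 0.5*dt*k2, *args); k4 = f(v + dt*k3, *args)
    return v + dt*(k1 + 2.0*k2 + 2.0*k3 + k4)/6.0

dt, nsteps = 0.001, 40000
drive = np.array([1.0, 2.0, 30.0])
traj = np.empty((nsteps, 3))
for n in range(nsteps):                     # record the drive trajectory
    traj[n] = drive
    drive = rk4(lorenz, drive, dt, r_true)

for r_trial in [40.0, 50.0, 55.0, 60.0, 65.0, 70.0]:
    w = np.array([0.0, 0.0]); err = 0.0
    for n in range(nsteps):
        w = rk4(response, w, dt, traj[n, 0], r_trial)
        if n > nsteps//2:                   # skip the synchronization transient
            err += abs(w[0] - traj[n, 1]) + abs(w[1] - traj[n, 2])
    print(f"r_trial = {r_trial:5.1f}   mean sync error = {err/(nsteps//2):.3e}")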

2. More sophisticated approaches

More recently, a more sophisticated approach has emerged which takes into consideration the type of stability of the synchronizing system (y). This is work by Abarbanel et al.45–47 The basic idea is similar to Parlitz' above, but more care is taken to avoid mathematical difficulties and the method is generalized to more complex situations, including those involving time series and data using sophisticated regularization approaches. We do not have sufficient space here to even crudely cover all the ideas in this work, but we note that it sets the tone for much more development of the use of synchronization for parameter estimation and system analysis. We suggest that this is an area that has much potential for more development.

With the exception of parameter estimation and data assimilation (above), the synchronization of two systems by driving and coupling has been well explored. However, over the last decade, a new area of more general types of collective behavior has developed out of the synchronization studies. This is the area of synchronization patterns in networks of oscillators. By this, we mean that a network of identical oscillators may not be fully synchronized as in the above case of the MSF, but subsets of oscillators can synchronize to each other within the subset. This has been described in various terms, but the phrase cluster synchronization (CS) seems to be most appropriate and common. The term cluster is used to describe a collection of nodes (oscillators) in a network that are synchronized. They are not necessarily in close proximity; it is just that their dynamics are the same. Many versions of CS have been displayed.48–56 These involve many different scenarios (unidirectional coupling, time delays, special network structures, etc.), many of which are engineered to yield certain cluster synchronization patterns. Recently, the use of symmetries57,58 and group theory has been extended to complex networks to discover patterns of cluster synchronization which are not obvious or which are too complicated to be discerned by eye or by other human means.59–61

The area of cluster synchronization and other collective patterns is large and still growing. It opens up the older global synchronization (where all oscillators are synchronized) to a much vaster range of possible types of dynamical order. This area should be fruitful for the future as larger and more complex networks are explored and more general approaches to finding and/or engineering synchronization patterns are discovered.

1. T. L. Carroll, L. M. Pecora, and F. J. Rachford, "Chaotic transients and multiple attractors in spin-wave experiments," Phys. Rev. Lett. 59, 2891 (1987).
2. L. M. Pecora, "Derivation and generalization of the Suhl spin-wave instability relations," Phys. Rev. B 37(10), 5473 (1988).
3. L. M. Pecora and T. L. Carroll, "Synchronization in chaotic systems," Phys. Rev. Lett. 64, 821 (1990).
4. L. M. Pecora and T. L. Carroll, "Driving systems with chaotic signals," Phys. Rev. A 44, 2374 (1991).
5. J.-P. Eckmann and D. Ruelle, "Ergodic theory of chaos and strange attractors," Rev. Mod. Phys. 57, 617 (1985).
6. We are grateful to E. E. Cawley for suggesting the name (private communication).
7. R. W. Newcomb and S. Sathyan, "An RC Op-Amp chaos generator," IEEE Trans. Circuits Syst. 30, 54 (1983).
8. R. W. Newcomb and N. El-Leithy, "Chaos generation using binary hysteresis," Circuits Syst. Signal Process. 5(3), 321 (1986).
9. I. S. Aranson and N. F. Rul'kov, "Nontrivial structure of synchronization zones in multidimensional systems," Phys. Lett. A 139(8), 375 (1989).
10. A. R. Volkovskii and N. F. Rul'kov, "Experimental study of bifurcations at the threshold for stochastic locking," Sov. Tech. Phys. Lett. 15, 249 (1989).
11. V. S. Afraimovich, N. N. Verichev, and M. I. Rabinovich, "Stochastic synchronization of oscillations in dissipative systems," Radiophys. Quantum Electron. 29(9), 795 (1986).
12. V. S. Afraimovich, N. N. Verichev, and M. I. Rabinovich, "Stochastic synchronization of oscillations in dissipative systems," Izv. Vyssh. Uchebn. Zaved., Radiofiz. 29(9), 1050 (1986).
13. A. S. Pikovskii, "Synchronization and stochastization of array of self-excited oscillators by external noise," Radiophys. Quantum Electron. 27(5), 390 (1984).
14. A. Pikovskii, "On the interaction of strange attractors," Z. Phys. B 55(2), 149 (1984).
15. H. Fujisaka and T. Yamada, "Stability theory of synchronized motion in coupled-oscillator systems," Prog. Theor. Phys. 69(1), 32 (1983).
16. H. Fujisaka and T. Yamada, "Stability theory of synchronized motion in coupled-oscillator systems. IV," Prog. Theor. Phys. 74, 918 (1985).
17. T. Yamada and H. Fujisaka, "Stability theory of synchronized motion in coupled-oscillator systems. II," Prog. Theor. Phys. 70, 1240 (1983).
18. T. Yamada and H. Fujisaka, "Stability theory of synchronized motion in coupled-oscillator systems. III," Prog. Theor. Phys. 72, 885 (1984).
19. T. L. Carroll and L. M. Pecora, "Cascading synchronized chaotic systems," Physica D 67, 126 (1993).
20. T. L. Carroll and L. M. Pecora, "Synchronizing hyperchaotic volume-preserving map circuits," IEEE Trans. Circuits Syst. 45(6), 656 (1998).
21. J. H. Peng, E. J. Ding, M. Ding et al., "Synchronizing hyperchaos with a scalar transmitted signal," Phys. Rev. Lett. 76(6), 904 (1996).
22. G. A. Johnson, D. J. Mar, T. L. Carroll et al., "Synchronization and imposed bifurcations in the presence of large parameter mismatch," Phys. Rev. Lett. 80(18), 3956 (1998).
23. J. F. Heagy, L. M. Pecora, and T. L. Carroll, "Short wavelength bifurcations and size instabilities in coupled oscillator systems," Phys. Rev. Lett. 74(21), 4185 (1995).
24. A. Turing, "The chemical basis of morphogenesis," Philos. Trans. B 237, 37 (1952).
25. L. M. Pecora, "Synchronization conditions and desynchronizing patterns in coupled limit-cycle and chaotic systems," Phys. Rev. E 58(1), 347 (1998).
26. A. A. Pikovsky and P. Grassberger, "Symmetry breaking bifurcation for coupled chaotic attractors," J. Phys. A 24(19), 4587 (1991).
27. P. Ashwin, J. Buescu, and I. Stewart, "Bubbling of attractors and synchronization of chaotic oscillators," Phys. Lett. A 193, 126 (1994).
28. P. Ashwin, J. Buescu, and I. Stewart, "From attractor to chaotic saddle: A tale of transverse instability," Nonlinearity 9, 703 (1994).
29. E. Ott, J. C. Alexander, I. Kan et al., "The transition to chaotic attractors with riddled basins," Physica D 76, 384 (1994).
30. E. Ott and J. C. Sommerer, "Blowout bifurcations: The occurrence of riddled basins and on-off intermittency," Phys. Lett. A 188, 39 (1994).
31. E. Ott, J. C. Sommerer, J. C. Alexander et al., "Scaling behavior of chaotic systems with riddled basins," Phys. Rev. Lett. 71, 4134 (1993).
32. E. Ott, private communication.
33. J. F. Heagy, T. L. Carroll, and L. M. Pecora, "Experimental and numerical evidence for riddled basins in coupled chaotic oscillators," Phys. Rev. Lett. 73, 3528 (1994).
34. N. Rul'kov, M. M. Sushchik, L. S. Tsimring et al., "Generalized synchronization of chaos in directionally coupled chaotic systems," Phys. Rev. E 51, 980 (1995).
35. H. D. I. Abarbanel, N. F. Rulkov, and M. M. Sushchik, "Generalized synchronization of chaos: The auxiliary system approach," Phys. Rev. E 53(5), 4528 (1996).
36. U. Parlitz, L. Junge, and L. Kocarev, "Subharmonic entrainment of unstable period orbits and generalized synchronization," Phys. Rev. Lett. 79, 3158 (1997).
37. D. G. Sterling, "Chaotic synchronization of coupled ergodic maps," Chaos 11, 29 (2001).
38. T. L. Carroll, "A simple circuit for demonstrating regular and synchronized chaos," Am. J. Phys. 63, 377 (1995).
39. T. L. Carroll, "Chaotic systems that are robust to added noise," Chaos 15, 013901 (2005).
40. M. C. Eguia, M. I. Rabinovich, and H. D. I. Abarbanel, "Information transmission and recovery in neural communications channels," Phys. Rev. E 62(5), 7111 (2000).
41. P. M. Gade, H. Cerdeira, and R. Ramaswamy, "Coupled maps on trees," Phys. Rev. E 52, 2478 (1995).
42. P. M. Gade, "Synchronization in coupled map lattices with random nonlocal connectivity," Phys. Rev. E 54, 64 (1996).
43. G. Hu, J. Yang, and W. Liu, "Instability and controllability of linearly coupled oscillators: Eigenvalue analysis," Phys. Rev. Lett. 80(3), 496 (1998).
44. U. Parlitz, "Estimating model parameters from time series by autosynchronization," Phys. Rev. Lett. 76(8), 1232 (1996).
45. H. D. I. Abarbanel, D. R. Creveling, and J. M. Jeanne, "Estimation of parameters in nonlinear systems using balanced synchronization," Phys. Rev. E 77(1), 14 (2008).
46. H. D. I. Abarbanel, D. R. Creveling, R. Farsian et al., "Dynamical state and parameter estimation," SIAM J. Appl. Dyn. Syst. 8(4), 1341 (2009).
47. H. Abarbanel, Predicting the Future: Completing Models of Observed Complex Systems (Understanding Complex Systems) (Springer, Berlin, 2013).
48. C. Zhou and J. Kurths, "Hierarchical synchronization in complex networks with heterogeneous degrees," Chaos 16, 015104 (2006).
49. C. Allefeld, M. Muller, and J. Kurths, "Eigenvalue decomposition as a generalized synchronization cluster analysis," Int. J. Bifurcation Chaos 17, 3493 (2007).
50. F. Sorrentino and E. Ott, "Network synchronization of groups," Phys. Rev. E 76, 056114 (2007).
51. V. Belykh, G. V. Osipov, V. S. Petrov et al., "Cluster synchronization in oscillatory networks," Chaos 18, 037106 (2008).
52. I. Kanter, M. Zigzag, A. Englert et al., "Synchronization of unidirectional time delay chaotic networks and the greatest common divisor," Europhys. Lett. 93, 60003 (2011).
53. T. Dahms, J. Lehnert, and E. Scholl, "Cluster and group synchronization in delay-coupled networks," Phys. Rev. E 86, 016202 (2012).
54. C. Fu, Z. Deng, L. Huang et al., "Topological control of synchronous patterns in systems of networked chaotic oscillators," Phys. Rev. E 87(3), 032909 (2013).
55. D. P. Rosin, D. Rontani, and D. J. Gauthier, "Control of synchronization patterns in neural-like Boolean networks," Phys. Rev. Lett. 110(10), 104102 (2013).
56. C. Williams, T. Murphy, R. Roy et al., "Experimental observations of group synchrony in a system of chaotic optoelectronic oscillators," Phys. Rev. Lett. 110, 064104 (2013).
57. M. Golubitsky, I. N. Stewart, and D. G. Schaeffer, Singularities and Groups in Bifurcation Theory (Springer-Verlag, Berlin, 1988), Vol. II.
58. M. Golubitsky, I. Stewart, and A. Török, "Patterns of synchrony in coupled cell networks with multiple arrows," SIAM J. Appl. Dyn. Syst. 4(1), 78 (2005).
59. L. M. Pecora, F. Sorrentino, A. Hagerstrom et al., "Cluster synchronization and isolated desynchronization in complex networks with symmetries," Nat. Commun. 5, 4079 (2014).
60. L. M. Pecora, T. L. Carroll, G. Johnson, and D. Mar, Phys. Rev. E 56(5), 5090-5100 (1997).
61. K. S. Fink, G. Johnson, T. Carroll, D. Mar, and L. Pecora, Phys. Rev. E 61(5), 5080 (2000).