Oscillatory behavior is ubiquitous in many natural and engineered systems, often emerging through self-regulating mechanisms. In this paper, we address the challenge of stabilizing a desired oscillatory pattern in a networked system where neither the internal dynamics nor the interconnections can be changed. To achieve this, we propose two distinct control strategies. The first requires the full knowledge of the system generating the desired oscillatory pattern, while the second only needs local error information. In addition, the controllers are implemented as co-evolutionary, or adaptive, rules of some edges in an extended plant-controller network. We validate our approach in several insightful scenarios, including synchronization and systems with time-varying network structures.

This work tackles the challenge of stabilizing a desired oscillatory pattern in a network with a fixed structure. We propose two distributed controllers to achieve this. These controllers are fundamentally different: the first is highly robust and eliminates nonlinearities but requires full knowledge of the reference system. The second, inspired by the neuromodulation of biological systems, only requires local error information. Moreover, the controllers are implemented as co-evolutionary rules for edges connecting an oscillatory node with the controlled system. This effectively results in an adaptive network where the adaptation rules shape the dynamics of the controlled nodes. We showcase the co-evolutionary controllers by exploring the synchronization of the controlled system, showing that our approach is independent of the network structure, and testing the performance of the controllers in time-varying networks. Our theoretical and numerical results show two distributed strategies to induce a desired oscillatory pattern by adaptively modulating node dynamics.

Inducing and regulating a desired oscillatory behavior in networked systems is a fundamental problem in control theory, with ample relevance for both natural and engineered systems. Oscillations are ubiquitous in many real-world phenomena including industrial applications,1–3 chemistry,4,5 neural networks,6,7 psychology,8,9 and biology.10,11 More specifically, in biological systems, rhythmic signals govern important physiological processes, including the regulation of the heartbeat, circadian rhythms, central pattern generators, and neuronal signaling.12–15 From the engineering side, several technologies also need such periodic signals, for example, to achieve the coordination of automation processes, including robotic locomotion,16,17 or in the development of synthetic biological systems.18–20 

Several naturally occurring oscillatory patterns are self-regulated and emerge from the intrinsic dynamics of the system. In contrast, there are many scenarios, both natural and engineered, where external control is required to induce or modify oscillations in a desired way. This is, for instance, relevant in neuromorphic engineering,21 where particular oscillatory patterns of biologically inspired neuronal networks are critical, for example, to mimic cognitive functions, coordinate sensorimotor functions, synchronize spiking neural networks, and enable efficient signal encoding.22–24 While our interests in this paper are mainly theoretical, designing controllers that can reliably induce desired oscillations could lead to advances in neuromorphic engineering and design, brain-inspired control systems, and adaptive network dynamics, among others.

In this paper, we design two controllers that impose a desired oscillatory pattern on a networked system without modifying its internal dynamics or interconnections. Our focus is on a class of coupled mixed-feedback systems where the interplay of (fast) positive and (slow) negative feedback plays a key role in the generation of sustained oscillations [see (1) and further details below].

The manuscript is organized as follows: in Sec. II, we motivate and describe the problem at hand and propose two kinds of controllers. The first controller, while robust, needs complete information from the reference. In contrast, the second one only requires local error information. Next, in Sec. III, we embed the designed controllers into a networked framework, implementing them as a co-evolutionary, or adaptive, rule of the edges connecting a node (the controller’s node) to the nodes of the controlled system. We compare and evaluate the controllers’ performance in three key scenarios: synchronization, arbitrary network structures, and time-varying network structures. We conclude in Sec. IV, where we also discuss potential extensions of our work.

Notation: $\mathbb{R}$ and $\mathbb{C}$ denote the fields of real and complex numbers, respectively. We use sgn for the sign function and $O$ to denote the big-Oh order symbol. An adjacency matrix is denoted by $A$, and a superscript $r$ (for "reference") or $p$ (for "plant"), as in $A^r$ and $A^p$, is used whenever it is necessary to distinguish the adjacency matrices of the reference and of the plant. Further notation is clarified when necessary.

Our analysis is concerned with a class of dynamic networks given by the $2n$-dimensional system:
(1)
where $n \geq 1$ is the number of nodes (oscillators) in the network, $0 < \varepsilon \ll 1$ is a small parameter describing the timescale separation between the variables, and $S$ is a locally odd sigmoid function. This means, in particular, that the function $S : \mathbb{R} \to \mathbb{R}$ satisfies: (a) $S(0) = 0$, (b) $\frac{dS}{dx}(x) > 0$ for all $x \in \mathbb{R}$, (c) $\operatorname{argmax}\left(\frac{dS}{dx}(x)\right) = 0$, and (d) $S(-x) = -S(x)$ for $x \in U$ with $U$ a neighborhood of the origin. Moreover, $\alpha > 0$, $\beta > 0$, and $A = [A_{ij}] \in \mathbb{R}^{n \times n}$ is the adjacency matrix of the underlying graph.
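
As a concrete instance, the odd sigmoid $S(\cdot) = \tanh(\cdot)$, used later in our simulations, satisfies conditions (a)–(d). The quick numerical check below is only illustrative; the grid and tolerances are our own choices.

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 2001)
S = np.tanh
dS = 1.0 / np.cosh(x) ** 2              # derivative of tanh

assert S(0.0) == 0.0                     # (a) S(0) = 0
assert np.all(dS > 0)                    # (b) strictly increasing
assert abs(x[np.argmax(dS)]) < 1e-9      # (c) slope is maximal at the origin
assert np.allclose(S(-x), -S(x))         # (d) odd (here globally, hence locally)
```
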
Model (1) is an example of a coupled mixed-feedback system.25,26 Each isolated node, namely,
(2)
exhibits a (fast) positive feedback through $S(\alpha x_i)$ (recall that $\alpha > 0$) and a (slow) negative feedback through the tendency of $y_i$ to follow $x_i$.
Remark 1

The nodes of (1), which are given by (2), undergo a supercritical Hopf bifurcation of the origin for $\alpha = \alpha^\ast := \frac{1}{S'(0)}$. This means that the response of each isolated node converges to the origin for $\alpha < \alpha^\ast$ and to a limit cycle when $\alpha > \alpha^\ast$.

As in (2), a key feature of mixed-feedback systems is the interplay between positive and negative feedback loops across different timescales, which is commonly observed in many biological models, particularly in neuronal dynamics. For example, it is argued26 that mixed feedback is the fundamental mechanism for excitability and explains how biological systems are able to exhibit sustained and robust oscillations.27,28 Moreover, mixed-feedback architectures have been recently exploited to develop control methods for neuromorphic systems.25 

Our main motivation to consider systems given by (1) is the theory developed in Ref. 29. Relevant to us is that they propose a procedure that allows one, for example, to design an adjacency matrix such that the output $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$ corresponds to a rhythmic profile of the network (defined as the inverse problem in Ref. 29). While brief, let us be more precise: a rhythmic profile is an $n$-tuple $(\sigma_1 e^{ı\phi_1}, \ldots, \sigma_n e^{ı\phi_n}) \in \mathbb{C}^n$, where $\sigma_i \in \mathbb{R}$ represents the amplitudes and $\phi_i \in [0, 2\pi)$ the phases. If for all $i = 1, \ldots, n$, the solutions $x_i(t)$ of (1) converge to some periodic function with amplitude $\sigma_i$ and phase $\phi_i$ (defined as an oscillating function in Ref. 29), then one says that the network is rhythmic. Assume further that $\sigma_1 > 0$ and that $\sigma_1 \geq \sigma_i$ for all $i = 2, \ldots, n$. The $n$-tuple $(1, \rho_2 e^{ı\theta_2}, \ldots, \rho_n e^{ı\theta_n})$, where $\rho_i = \sigma_i / \sigma_1$ and $\theta_i = \phi_i - \phi_1$ for all $i = 2, \ldots, n$, is called a relative rhythmic profile. By following the procedure summarized below, one can construct an adjacency matrix such that the solutions of (1) (locally) converge to a particular rhythmic profile:

  1. Let $w_x = (1, \rho_2 e^{ı\theta_2}, \ldots, \rho_n e^{ı\theta_n})$ with $\theta_i \notin \{0, \pi\} \bmod 2\pi$.

  2. Pick $\mu_1 = a + ıb$ with $a, b > 0$. Further choose $\mu_3, \ldots, \mu_n \in \mathbb{R}$ with $\mu_i < a$ for all $i = 3, \ldots, n$. Define $D = \operatorname{diag}(\mu_1, \bar{\mu}_1, \mu_3, \ldots, \mu_n)$.

  3. Find an invertible matrix $Q = [\,w_x\ \ \bar{w}_x\ \ B\,] \in \mathbb{C}^{n \times n}$, where $B \in \mathbb{R}^{n \times (n-2)}$.

  4. Define $A = QDQ^{-1}$ (a numerical sketch of this construction is given below).
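
For concreteness, the four steps above can be carried out numerically as in the following sketch. The function name and the way the complementary block $B$ is drawn are illustrative choices on our part; the algorithm only requires that $Q$ be invertible.

```python
import numpy as np

def make_rhythmic_adjacency(rho, theta, mu1, mu_rest, rng=None):
    """Steps 1-4 above: build A = Q D Q^{-1} whose dominant eigenvector
    encodes the relative rhythmic profile (1, rho_2 e^{i theta_2}, ...)."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(rho) + 1
    # Step 1: profile vector w_x.
    w = np.concatenate(([1.0 + 0j], np.asarray(rho) * np.exp(1j * np.asarray(theta))))
    # Step 2: D = diag(mu_1, conj(mu_1), mu_3, ..., mu_n).
    D = np.diag(np.concatenate(([mu1, np.conj(mu1)], np.asarray(mu_rest, dtype=complex))))
    # Step 3: complete Q = [w_x, conj(w_x), B] with a random real block B,
    # redrawn until Q is invertible.
    while True:
        B = rng.uniform(0.0, 1.0, size=(n, n - 2))
        Q = np.column_stack([w, np.conj(w), B])
        if np.linalg.matrix_rank(Q) == n:
            break
    # Step 4: A = Q D Q^{-1}; the conjugate eigen-pair makes A real up to round-off.
    A = Q @ D @ np.linalg.inv(Q)
    return A.real
```

Feeding the resulting matrix into (1), with $\alpha$ and $\beta$ chosen as indicated in Remark 2, should then lead (locally) to the desired rhythmic profile.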

Remark 2

  1. We refer to an adjacency matrix obtained from the above algorithm as rhythmic.

  2. Once one picks $\mu_1$, one can then choose the parameters $\alpha$ and $\beta$ such that the origin of (1) is unstable, leading to the rhythmic profile through a Hopf bifurcation; recall Remark 1 and see Ref. 29, Sec. VIII.

  3. In Ref. 29, the slow-fast as well as the mixed-feedback structure of (1) plays a fundamental role. This structure will also be relevant for the controllers presented in Secs. II and III.

We now consider the following problem: suppose we are given a reference rhythmic network and a plant network, both of type (1). The adjacency matrix of the plant cannot be adjusted, but we want to render the plant rhythmic with respect to the reference’s profile. Since the adjacency matrix of the plant is fixed, we shall design a controller that solves our problem.

Remark 3

While our focus is theoretical, we notice that the described situation frequently appears in real-life scenarios: (a) many neuromorphic devices have fixed hardwired interconnections but allow modulation through externally controllable nodes; (b) deep brain stimulation consists of applying external electrical pulses to the (fixed) brain, inducing a desired rhythmic activity. Similar challenges arise in robotics, where fixed mechanical structures coordinate body motion, or in the synchronization of power grids, where the underlying network remains unchanged while controllers regulate, for example, frequency stability.

The control problem at hand is schematized in Fig. 1 and is described by the following system of equations:
(3)
where $(X, Y) = (X_1, \ldots, X_n, Y_1, \ldots, Y_n) \in \mathbb{R}^{2n}$ are the states of the reference, $(x, y) = (x_1, \ldots, x_n, y_1, \ldots, y_n) \in \mathbb{R}^{2n}$ are the states of the plant, $A^r$ and $A^p$ denote, respectively, the adjacency matrices of the reference and of the plant, and $u_i$ is the controller input into the $i$th node of the plant. The parameters $\alpha^r > 0$ and $\beta^r > 0$ are chosen according to the procedure sketched in Sec. I and, together with $A^r$, render the reference network rhythmic. The parameters $\alpha^p > 0$, $\beta^p > 0$, and $A^p$ of the plant do not need to lead to a rhythmic profile. We also mention that, due to the way the controller will be implemented in Sec. III, the controller $u_i$ is proposed to directly influence the plant's nodes in the same way as its internal connections and not as an independent mechanism.
FIG. 1.

Network (upper) and block diagram (lower) representations of our control problem (3), in which elements of the reference, plant, and control are presented in blue, green, and red, respectively. For the networks, the effect of the controller on the plant is displayed in red, while the rest of the connections are provided for illustrative purposes only, as any topology is allowed in our methodology. For the block diagram, uppercase $(X, Y)$ and lowercase $(x, y)$ represent the variables of the reference and of the plant, respectively, from which the error variables $\tilde{x} := X - x$ and $\tilde{y} := Y - y$ are defined for our analysis.

Let us define the errors $(\tilde{x}, \tilde{y}) \in \mathbb{R}^{2n}$ by
The corresponding error dynamics read as
(4)
Remark 4

The error system given in (4) is not in closed form. We shall deal with this when we design the controllers in Secs. II B and II C.

In the following, we propose two different kinds of controllers that render the origin of (4) locally stable. The first one eliminates the sigmoidal nonlinearities, which results in a robust controller but requires complete knowledge of the reference. For comparison purposes, we propose an additional controller inspired by the synaptic modulation of neuronal systems. Such a controller does not require any knowledge of the reference. For clarity of our exposition, these two types of controllers are first provided without considering their implementation, see Propositions 1 and 2. After that, we propose a particular way of implementing them in a networked system where all nodes are of type (1), including the controller. This will mean that the controller action is implemented via the adaptation of the weights connecting the controller node and the plant’s nodes; see more details in Sec. III.

We describe some generalities about the simulations that we present below. When required, more details are provided, and all codes are available in Ref. 30. In our simulations, the function $S$ is chosen as the odd sigmoid $S(\cdot) = \tanh(\cdot)$, the small parameter $\varepsilon$ is fixed at $\varepsilon = \frac{1}{100}$, and many of the other parameters are chosen at random. When we say that a parameter is chosen "at random," we always mean under a uniform distribution, and we specify the interval of possible values. The following steps align with the algorithm described in Sec. I:

  1. The relative amplitudes $\rho_i$, $i = 2, \ldots, n$, are chosen at random within the interval $(\frac{1}{3}, 1)$. The lower value $\frac{1}{3}$ is simply chosen so that the $i$th relative amplitude is not too small in the simulations. The phases $\theta_i$, $i = 2, \ldots, n$, are chosen at random within the interval $(0, 2\pi)$. The probability of $\theta_i = \pi$ is zero.

  2. For the leading eigenvalue $\mu_1 = a + ıb$, we let $a = 1$, and $b$ is randomly chosen within the interval $(\frac{1}{100}, \frac{1}{10})$. The choice of $b$ is such that the period of the rhythmic profile, which is29 $T \approx 2\pi\,(\beta\,\Im(\mu_1))^{-1}$, is not too small. The rest of the eigenvalues $\mu_i \in \mathbb{R}$, $i = 3, \ldots, n$, are chosen at random within the interval $(0, \frac{9}{10}a)$. This provides $D = \operatorname{diag}(\mu_1, \bar{\mu}_1, \mu_3, \ldots, \mu_n)$.

  3. The complement matrix $B$ is a randomly generated sparse matrix with nonzero coefficients within the interval $(0, 1)$. This matrix is generated until $Q = [\,w_x\ \ \bar{w}_x\ \ B\,]$ is invertible.

  4. The adjacency matrices $A^r$ and $A^p$ are thus given via the previous algorithm as $A = QDQ^{-1}$. Since the algorithm to obtain the adjacency matrices is mostly random, both matrices are, with probability 1, different. Likewise, since $B$ is sparse and randomly generated, the topologies of the adjacency matrices are, with probability 1, different. We emphasize that, from the results we present below, the plant's adjacency matrix $A^p$ does not have to be rhythmic. To keep all systems (reference, plant, and controller) within the same context, we have opted to keep it rhythmic for the simulations.

In addition, and due to the choice of $\mu_1$, $\beta$ is randomly chosen within the interval $(0, 1)$, and the corresponding $\alpha$ is set as $\alpha = 1 + \varepsilon\beta + \frac{1}{100}$ (see Ref. 29, Theorem 1).
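
To illustrate, the randomized setup above can be sketched as follows. The routine only mirrors the sampling rules of steps 1–4 and of $\beta$ and $\alpha$; the helper `make_rhythmic_adjacency` refers to the hypothetical construction sketched earlier and is not part of the original text.

```python
import numpy as np

def sample_rhythmic_setup(n, eps=1/100, rng=None):
    """Sample a random relative rhythmic profile and matching parameters,
    following the simulation choices described above (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    rho = rng.uniform(1/3, 1.0, size=n - 1)          # relative amplitudes in (1/3, 1)
    theta = rng.uniform(0.0, 2*np.pi, size=n - 1)    # relative phases in (0, 2*pi)
    a, b = 1.0, rng.uniform(1/100, 1/10)             # leading eigenvalue mu_1 = a + i b
    mu1 = a + 1j * b
    mu_rest = rng.uniform(0.0, 0.9 * a, size=n - 2)  # remaining real eigenvalues
    A = make_rhythmic_adjacency(rho, theta, mu1, mu_rest, rng=rng)  # sketch above
    beta = rng.uniform(0.0, 1.0)
    alpha = 1 + eps * beta + 1/100                   # slightly above the Hopf value
    return A, alpha, beta
```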

To better see the effect of the controllers, we present our simulations in the following way: for the first 300 time units, the controller is off, and so we see the open-loop response. At $t = 300$, the controller is turned on, and so from thereon, we see the closed-loop response. In our algorithms, this is realized by multiplying the controller by $l(t - t_c)$, where $l(x) = \frac{1}{1 + e^{-x}}$ and $t_c = 300$ is the time at which the controller is turned on. Since we chose some parameters at random, the plots we show are representative of several simulation runs.
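
As a small illustration of this switching mechanism, the gating factor can be implemented as below (a sketch; the function and variable names are ours):

```python
import numpy as np

def gate(t, t_c=300.0):
    """Smooth on-switch: approximately 0 before t_c and approximately 1 after t_c."""
    return 1.0 / (1.0 + np.exp(-(t - t_c)))

# Inside the right-hand side of the ODE, the control term is scaled as
#   u_effective = gate(t) * u
# so that the open-loop response is seen for t < 300 and the closed loop afterwards.
```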

The first controller we propose stems from a very simple observation: suppose that the controller can eliminate the sigmoidal nonlinearities of (4). The resulting error system is linear,
(5)
and the origin is globally exponentially stable. Hence, the ideal controller that achieves this can be simply computed as follows:
(6)
For future reference, let
(7)
For our implementations in Sec. III, it will be convenient that the controller is of integral type. Hence, we propose the closed-loop system:
(8)
Indeed, we have the following:
Proposition 1

Consider (8) with $F_i$ given by (7). If $k_i \geq k > 0$ for all $i = 1, \ldots, n$, with $k$ sufficiently large, then $\lim_{t\to\infty} |\tilde{x}(t)| = O\!\left(\frac{1}{k}\right)$.

Proof.

It suffices to let $k_1 = \cdots = k_n = k$. For $k \gg 1$ sufficiently large, the dynamics of $u_i$ evolve in a fast timescale. In the limit $k \to \infty$, $F_i$ is constant, and the equilibrium of $\frac{du_i}{d\tau} = F_i - u_i$, where $\tau = kt$ is the fast time, is hyperbolic. This is equivalent to saying that the critical manifold associated with (8), namely, $\mathcal{C} := \{u_i = F_i\}$, is normally hyperbolic31 and globally exponentially stable. The restriction of (8) to the critical manifold leads precisely to the linear error dynamics (5), whose origin is globally exponentially stable. Since the equilibria of both the fast and the slow dynamics are exponentially stable, the result follows from, for example, Fenichel's (or Tikhonov's) theorem.31,32
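
The mechanism behind Proposition 1 can be illustrated in isolation: a high-gain integral law of the form $\dot{u} = k(F(t) - u)$ tracks any slowly varying ideal input $F(t)$ with a residual error that shrinks as $k$ grows. The snippet below is a sketch of this singular-perturbation effect only; the signal $F$ and the gains are illustrative and not the ones in (7) and (8).

```python
import numpy as np
from scipy.integrate import solve_ivp

def tracking_error(F, k, t_end=50.0):
    """Integrate du/dt = k (F(t) - u) and return the worst late-time tracking error."""
    sol = solve_ivp(lambda t, u: k * (F(t) - u), (0.0, t_end), [0.0],
                    method="Radau", rtol=1e-8, atol=1e-10, dense_output=True)
    ts = np.linspace(t_end / 2, t_end, 500)          # discard the transient
    return np.max(np.abs(F(ts) - sol.sol(ts)[0]))

F = lambda t: np.sin(0.3 * t)                        # an illustrative slowly varying target
for k in (10, 100, 1000):
    print(k, tracking_error(F, k))                   # error shrinks roughly like 1/k
```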

A simulation of (8) with $F_i$ given by (7) is provided in Fig. 2; recall the general considerations described in Sec. II A. Figure 2 shows, in the first two rows and in Fig. 2(a), the time series of $x_i(t)$, $X_i(t)$, $u_i(t)$ and the mean norm of the error $\frac{1}{n}|\tilde{x}|$ in logarithmic scale for $n = 4$ nodes. In this and all following simulations, we prefer to show $\frac{1}{n}|\tilde{x}|$ because we keep all the controller constants the same, even for different numbers of nodes; therefore, for comparison purposes, it is convenient to normalize the error. In Fig. 2(b), we show the mean error for a simulation similar to the previous one but for $n = 100$. Figure 2(c) also shows the mean error for a simulation with $n = 100$ nodes but for time-varying adjacency matrices. More precisely, for this simulation, each entry $A_{ij}$ (for both the plant and the reference) of the adjacency matrices is of the form $A_{ij} = A_{ij}(t) = \bar{A}_{ij}\left(1 + \frac{1}{5}\sin(\omega_{ij} t)\right)$, where $\omega_{ij}$ is some random frequency within the interval $(0, 1)$ and $\bar{A}$ is a random rhythmic matrix. Since the controller fully uses these adjacency matrices, its performance is also good. Finally, in Fig. 2(d), we show another simulation for $n = 100$ nodes but now with a mismatch between the parameters used by the controller and those of the reference. More precisely, for this simulation, we perturb the parameters $\alpha^r$, $\beta^r$, and $A^r_{ij}$ used by the controller in (6) by some small random number within the interval $(-\frac{1}{20}, \frac{1}{20})$. Such a small perturbation is different for each parameter and each entry of $A^r$. We notice that since the mismatch is relatively small, the performance of the controller is still reasonable, although the error is roughly one order of magnitude larger than without the mismatch [Fig. 2(b)]. For all these simulations, we have set $k_i = k = 100$. It follows from Proposition 1 that the larger $k$, the smaller the error.
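
For reference, the time-varying adjacency matrices and the parameter mismatch used in Figs. 2(c) and 2(d) can be generated as in the following sketch (the function names are ours; `A_bar` denotes a previously constructed rhythmic matrix):

```python
import numpy as np

rng = np.random.default_rng()

def time_varying_A(A_bar, rng=rng):
    """Return A(t) with entries A_bar_ij * (1 + (1/5) sin(omega_ij t))."""
    omega = rng.uniform(0.0, 1.0, size=A_bar.shape)   # one random frequency per entry
    return lambda t: A_bar * (1.0 + 0.2 * np.sin(omega * t))

def mismatched(alpha_r, beta_r, A_r, rng=rng):
    """Perturb the reference parameters used by controller (6) by values in (-1/20, 1/20)."""
    d = lambda shape=None: rng.uniform(-1/20, 1/20, size=shape)
    return alpha_r + d(), beta_r + d(), A_r + d(A_r.shape)
```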

FIG. 2.

The first and second rows and (a) correspond to a simulation of (8) with $u_i$ given by (6) and for four nodes; $x_i$ (plant) is dashed, and $X_i$ (reference) is solid. (a)–(d) show, in logarithmic scale, the mean norm of the error, $\frac{1}{n}|\tilde{x}|$: (a) for four nodes, (b) for 100 nodes, (c) for 100 nodes with time-varying adjacency matrices, and (d) for 100 nodes with a mismatch in the reference's parameters used by the controller. For $t \in (0, 300)$, the controller is off, and, hence, we see the open-loop response. At $t = 300$, the controller is turned on, and from thereon, we observe the closed-loop response. We notice, naturally, that for $t > 300$, the plant closely follows the reference. See more details in the main text and compare with Fig. 3.

FIG. 3.

Simulation of (3) with the neuromorphic-inspired controller given by (10). The first and second rows and (a) correspond to a simulation with four nodes; $x_i$ (plant) is dashed, and $X_i$ (reference) is solid. (a)–(c) show, in logarithmic scale, the mean norm of the error $\frac{1}{n}|\tilde{x}|$: (a) for 4 nodes, (b) for 100 nodes, and (c) for 100 nodes with time-varying adjacency matrices. For $t \in (0, 300)$, the controller is off, and, hence, we see the open-loop response. At $t = 300$, the controller is turned on, and from thereon, we observe the closed-loop response. We notice, naturally, that for $t > 300$, the plant closely follows the reference. See more details in the main text and compare with Fig. 2.

FIG. 4.

Synchronization of the plant with respect to a reference node using the adaptation rule (12) and (13). The first and second rows show the time series of $X_1$ (solid) and $x_i$ (dashed) and the corresponding adaptive weights $a_i$ for $n = 4$ nodes, while (a) shows the corresponding error in logarithmic scale. (b) shows a similar simulation but for $n = 100$ nodes. In all simulations, we first show the system in open loop for $t \in [0, 300)$. At $t = 300$, the controller is turned on, and we see how the error decreases. For $t \in [500, 550]$, a random constant disturbance is added to each node of the plant, hence the observed increase in the error. After $t = 550$, we see how the error again decreases, showcasing the recovery from the added disturbance.

FIG. 5.

Synchronization of the plant with respect to a reference node using the Hebbian-like adaptation rule (14). The first and second rows show the time series of $X_1$ (solid) and $x_i$ (dashed) and the adaptive weights $a_i$ for $n = 4$ nodes, while (a) shows the corresponding error in logarithmic scale. (b) shows a similar simulation but for $n = 100$ nodes. In all simulations, we first show the system in open loop for $t \in [0, 300)$. At $t = 300$, the controller is turned on, and we see how the error decreases. For $t \in [500, 550]$, a random constant disturbance is added to each node of the plant, hence the observed increase in the error. After $t = 550$, we see how the error again decreases, showcasing the recovery from the added disturbance. The most important distinction with Fig. 4 is that we observe periodic increases of the error, which coincide with the crossings of $x_c$ through the origin and are due to the multiplicative effect of $x_c$ in the interconnection, namely, the term $a_i x_c$ in each node equation.

FIG. 6.

Simulation of (11) with (12) and (13), where the plant and the reference have different network topologies. At the top of the figure, we show on the left the reference's graph and on the right the plant's graph for $n = 4$; notice that they are indeed different. The second and third rows show the time series of $X_1$ (solid) and $x_i$ (dashed) and the adaptive weights $a_i$ for $n = 4$ nodes, while (a) shows the corresponding error in logarithmic scale. (b) shows a similar simulation but for $n = 100$ nodes. In all simulations, we first show the system in open loop for $t \in [0, 300)$. At $t = 300$, the controller is turned on, and we see how the error decreases. For $t \in [500, 550]$, a random constant disturbance is added to each node of the plant, hence the observed increase in the error. After $t = 550$, we see how the error again decreases, showcasing the recovery from the added disturbance.

FIG. 7.

Simulation of (11) with (14), where the plant and the reference have different network topologies. At the top of the figure, we show on the left the reference's graph and on the right the plant's graph for $n = 4$; notice that they are indeed different. The second and third rows show the time series of $X_1$ (solid) and $x_i$ (dashed) and the adaptive weights $a_i$ for $n = 4$ nodes, while (a) shows the corresponding error in logarithmic scale. (b) shows a similar simulation but for $n = 100$ nodes. In all simulations, we first show the system in open loop for $t \in [0, 300)$. At $t = 300$, the controller is turned on, and we see how the error decreases. For $t \in [500, 550]$, a random constant disturbance is added to each node of the plant, hence the observed increase in the error. After $t = 550$, we see how the error again decreases, showcasing the recovery from the added disturbance.

FIG. 8.

Simulation of (11) with (12) and (13), where the plant and the reference have different network topologies with time-varying weights. The first and second rows show the time series of $X_1$ (solid) and $x_i$ (dashed) and the adaptive weights $a_i$ for $n = 4$ nodes, while (a) shows the corresponding error in logarithmic scale. (b) shows a similar simulation but for $n = 100$ nodes. In all simulations, we first show the system in open loop for $t \in [0, 300)$. At $t = 300$, the controller is turned on, and we see how the error decreases. For $t \in [500, 550]$, a random constant disturbance is added to each node of the plant, hence the observed increase in the error. After $t = 550$, we see how the error again decreases, showcasing the recovery from the added disturbance.

FIG. 9.

Simulation of (11) with (14), where the plant and the reference have different network topologies with time-varying weights. The first and second rows show the time series of $X_1$ (solid) and $x_i$ (dashed) and the adaptive weights $a_i$ for $n = 4$ nodes, while (a) shows the corresponding error in logarithmic scale. (b) shows a similar simulation but for $n = 100$ nodes. In all simulations, we first show the system in open loop for $t \in [0, 300)$. At $t = 300$, the controller is turned on, and we see how the error decreases. For $t \in [500, 550]$, a random constant disturbance is added to each node of the plant, hence the observed increase in the error. After $t = 550$, we see how the error again decreases, showcasing the recovery from the added disturbance.


As already mentioned, a drawback of the controller presented in Sec. II B is its dependence on the full knowledge of the reference’s parameters. To offer an alternative, we propose a second controller whose behavior is inspired by the neuromodulation of synaptic weights in neuroscience and neuromorphic systems.

For clarity, let us recall the general error system,
(9)
The controller we now propose is given in an integral form by
(10)
with τ, k, and λ being positive constants.

Intuitively, the controller $u_i$ adapts to the rescaled error $\frac{k}{\lambda}\tilde{x}_i$ (quickly if $\tau$ is large) and effectively provides a proportional feedback $u_i \approx \frac{k}{\lambda}\tilde{x}_i$. If the error $\tilde{x}_i$ is large, then the controller compensates the second sigmoid function to balance them out. If the error is small, then both sigmoid functions have roughly the same value and cancel out, which leads to the exponentially stable linear error dynamics (5). Besides the well-known advantages of introducing the controller in integral form, the idea just described has its inspiration in some neurological mechanisms, where the dynamic equation for $u_i$ is in "leaky form," similar to the way biological neurons process information. Likewise, the introduced controller can be regarded as an adaptive synaptic weight that modulates the influence of the error. All these interpretations become more evident in Sec. III.
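
A minimal sketch of such a leaky, error-driven adaptation is given below. The exact expression of the controller is (10); the first-order "leaky" form used here, $\dot{u}_i = \tau(k\tilde{x}_i - \lambda u_i)$, is only one plausible realization consistent with the description above (steady state $u_i = \frac{k}{\lambda}\tilde{x}_i$, faster adaptation for larger $\tau$).

```python
def leaky_controller_step(u, x_err, dt, tau=1.0, k=100.0, lam=1.0):
    """One Euler step of a leaky, error-driven adaptation law
    du/dt = tau * (k * x_err - lam * u); at steady state u = (k/lam) * x_err."""
    return u + dt * tau * (k * x_err - lam * u)

# Example: with a constant error the weight settles at (k/lam) * x_err.
u = 0.0
for _ in range(20000):
    u = leaky_controller_step(u, x_err=0.01, dt=1e-3)
print(u)  # approximately 100 * 0.01 = 1.0
```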

The previous idea is formalized in Proposition 2.

Proposition 2

Consider error system (9) with the controller given through (10). If $\tau > 0$ and $k > 0$ are sufficiently large, then $\lim_{t\to\infty} |\tilde{x}(t)| = O\!\left(\frac{1}{k}\right)$.

Proof.

First, notice that the nonlinear terms in (9) are bounded, that is, $|S(\cdot)| \leq m$. Due to the linear part of (9), its trajectories are globally attracted to a forward invariant set around the origin. Since $\sup_{(a,b)\in\mathbb{R}^2}|S(a) - S(b)| = 2m$, a rough estimate of such a region is a ball of radius $2m$ centered at the origin.

Substituting the steady-state value of the controller $u_i = \frac{k}{\lambda}\tilde{x}_i =: \sigma\tilde{x}_i$ into the equation for $\frac{d\tilde{x}_i}{dt}$ in (9),
where we recall that ( X , Y ) and ( x , y ) are bounded.
For simplicity, let
for $z \in \mathbb{R}^n$. Notice that $\zeta_i^r(z)$ and $\zeta_i^p(z)$, $i = 1, \ldots, n$, are bounded along solutions of (8), i.e., for $z = x$ or $z = X$.
Consider the candidate Lyapunov function $V = \frac{1}{2}\sum_{i=1}^{n}\left(\tilde{x}_i^2 + \frac{1}{\varepsilon}\tilde{y}_i^2\right)$. Then,
We notice that $\tilde{x}_i S(\zeta_i^r(X)) \leq m|\tilde{x}_i|$ and that $S(\zeta_i^p(x) + \sigma\tilde{x}_i) = S(\sigma\tilde{x}_i) + O\!\left(\frac{1}{\sigma}\right) = m\,\operatorname{sgn}(\tilde{x}_i) + O\!\left(\frac{1}{\sigma}\right)$ as $\sigma \to \infty$. This leads to
as $\sigma \to \infty$, showing that the region where $\frac{dV}{dt} > 0$ is bounded by a ball of radius $O\!\left(\frac{1}{\sigma}\right)$.

Figure 3 shows a simulation of (3) with the controller given by (10); recall the general considerations described in Sec. II A. The figure shows, in the first two rows and in Fig. 3(a), the time series of $x_i(t)$, $X_i(t)$, $u_i(t)$ and the mean norm of the error $\frac{1}{n}|\tilde{x}|$ in logarithmic scale for $n = 4$ nodes. In Fig. 3(b), we show the mean error for a simulation similar to the previous one but for $n = 100$. Figure 3(c) also shows the mean error for a simulation with $n = 100$ nodes but for time-varying adjacency matrices, generated in the same way as those of Fig. 2. Since this controller does not use the parameters of the reference, there is no simulation in Fig. 3 equivalent to that of Fig. 2(d). For all the simulations in Fig. 3, $\tau = 1$, $\lambda = 1$, and $k = 100$. It follows from Proposition 2 that the larger $k$, the smaller the error.

In Sec. II, we have proposed two distinct controllers that render a reference profile stable for a plant with a fixed network structure. A system of the form (1) has the potential to be exploited in, for example, neuromorphic applications as it is suitable to be implemented in hardware. Therefore, it is reasonable to consider possible ways to also implement the controllers. While there are many ways to approach this, in this section, we propose to consider an extra node of type (1) as the controller. Since the dynamics of the controller’s node are relatively simple (either converge to the origin or oscillate), the actual control action is to be performed by adapting the weights of the edges connecting the controller’s node to the plant. Biological systems, particularly neuronal ones, are the inspiration for this. The controller is regarded as a pre-synaptic neuron and the nodes of the plant as post-synaptic, while the adaptation rule is associated with synaptic plasticity. More precisely, we consider the system,
(11)
where the control is now to be performed by the adaptation rule $g_i$, and $\tau_c > 0$ sets the adaptation rate. The parameter $\alpha_c > 1$ is chosen so that the controller's node $(x_c, y_c)$ is oscillatory. By "implementation of the controllers," we precisely mean that we will choose an adaptation rule $g_i$ to mimic the control actions of those in Secs. II B and II C.
The first controller, given by (6), is implemented through the adaptation rule,
(12)
where
(13)
with $0 < \delta \ll 1$ and $F_i$ given by (7). Indeed, for $|x_c| > \delta$, we have that $a_i x_c = F_i$, and so the effective input to the $i$th node is exactly as in Proposition 1. Since $x_c$ is oscillatory and crosses the origin, we must account for this and, hence, propose the piecewise-continuous rule given by (13).
Remark 5

  1. The form of (13) is due to the multiplicative action of $x_c$. If $x_c = 1$, then the controller would be exactly the same as in (6). However, from (2), the only stable equilibrium point of the controller's node dynamics is the origin, so $x_c$ cannot be held constant at $1$.

  2. While the adaptation rule is discontinuous, it is reminiscent of sliding-mode control.33 Improvements to this approach, for example, to avoid discontinuities or to rigorously prove the uniqueness of solutions, are not further discussed here but shall be considered in future research.
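
To convey the flavor of this piecewise mechanism without reproducing (12) and (13) exactly, the following sketch computes a target weight that satisfies $a_i x_c = F_i$ whenever $|x_c| > \delta$ and falls back to zero otherwise; the fallback choice and the explicit-Euler adaptation step are our own illustrative assumptions, not the rule in the text.

```python
def weight_target(F_i, x_c, delta=1/100):
    """Target weight: the division by x_c is only performed away from its zero crossings."""
    return F_i / x_c if abs(x_c) > delta else 0.0

def adapt_weight(a_i, F_i, x_c, dt, tau_c=100.0, delta=1/100):
    """One Euler step of a fast adaptation toward the piecewise target,
    so that a_i * x_c tracks F_i whenever |x_c| > delta (requires dt * tau_c < 2)."""
    return a_i + dt * tau_c * (weight_target(F_i, x_c, delta) - a_i)
```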

On the other hand, the controller given in Proposition 2 is heuristically implemented in (11) by
(14)
which should be compared with (10), noticing that they share the same "error-modulated" behavior.

Here, the biological inspiration is more evident: the adaptation rule (14) has two main components. One is a "pre-synaptic," activity-dependent term $\tilde{x}_i x_c$, which is regulated by the "post-synaptic" error and is reminiscent of a Hebbian-like adaptation (one could say that the learning rate is mediated by how far the plant's node is from the reference's); the other is an intrinsic decay $-\lambda a_i$ ensuring that the values of $a_i$ remain bounded.
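
A minimal sketch of such an error-modulated, Hebbian-like rule with intrinsic decay is given below; the precise expression is (14), so the gain placement (the factor $k$, where $\tau_c$ enters, and the explicit-Euler update) is an assumption on our part.

```python
def hebbian_like_step(a_i, x_err_i, x_c, dt, tau_c=100.0, k=100.0, lam=1.0):
    """One Euler step of an error-modulated Hebbian-like adaptation with decay:
    da_i/dt = tau_c * (k * x_err_i * x_c - lam * a_i).
    The product x_err_i * x_c is the 'pre-synaptic' activity weighted by the
    'post-synaptic' error; -lam * a_i keeps the weight bounded. Whenever the
    oscillatory x_c crosses zero, the drive vanishes and a_i decays ('forgets')."""
    return a_i + dt * tau_c * (k * x_err_i * x_c - lam * a_i)
```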

Remark 6
The adaptation rule (14) is not obtained by letting $u_i = a_i x_c$ in Proposition 2. In such a case, one would obtain
(15)
While the term $a_i \frac{dx_c}{dt}$ in (15) can be interpreted as a "predictive" term, we encounter, again, the nuance of dividing by $x_c$. Of course, one could opt for a similar discontinuous approach as in the previous case, but this would turn out to be difficult to justify implementation-wise. In addition, (15) lacks an intrinsic term guaranteeing boundedness, which is important in the present context.

A more sensible perspective is to recall that (10) effectively induces the feedback $u_i = k\tilde{x}_i$ into the error system. If $u_i$ is now taken as $u_i = a_i x_c$, then the ideal value of $a_i$ would be $a_i = \frac{k\tilde{x}_i}{x_c}$, again introducing the division by $x_c$. In our heuristic implementation, we regard $x_c$ as a scaling, or gain, whose sign is its most important aspect, leading to (14). Naturally, since $x_c$ crosses zero, each time $x_c = 0$, the "learned" value of $a_i$ is forgotten. This is, indeed, visible in the simulations presented below.

In Subsecs. III A–III C, we present simulations of (11) and compare, through different application scenarios, the adaptation rules given by (12) and (13) and by (14). In addition to the general considerations for our numerical simulations described in Sec. II A, we add the following: in the time interval $t \in (500, 550)$, we add a constant random disturbance, taking values in the interval $(-1, 1)$, to each node; the disturbance is different for each node and drawn independently. We do this to numerically test how the controlled plant recovers from a perturbed state.
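
For completeness, this disturbance protocol can be realized as in the following sketch (names are ours):

```python
import numpy as np

rng = np.random.default_rng()

def make_disturbance(n, t_on=500.0, t_off=550.0, rng=rng):
    """Constant random disturbance in (-1, 1), one independent value per node,
    active only during the window [t_on, t_off]."""
    d = rng.uniform(-1.0, 1.0, size=n)
    return lambda t: d * float(t_on <= t <= t_off)
```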

To avoid repetition, we mention here once that in our numerical simulations of (11) with the adaptive implementation given by (12) and (13) (specifically, Figs. 4, 6, and 8), we have used $\delta = \frac{1}{100}$ and $\tau_c = 100$. Regarding the adaptive implementation given by (14) (specifically, Figs. 5, 7, and 9), we have used $\tau_c = 100$, $k = 100$, and $\lambda = 1$. We have kept these values the same across the different simulations for comparison purposes; of course, the performance of the adaptation rules can be improved if one tunes such parameters adequately.

In this section, we consider the problem of synchronization of the plant with respect to a provided reference. For this, the reference network has one node, $(X_1, Y_1)$, and the errors are thus defined by

Figure 4 shows a simulation of (11) with the adaptation rule given by (12) and (13). The observed spikes in the weights $a_i$ are due to the discontinuous form of the adaptation and are modulated by $\delta$ in (13). In contrast, Fig. 5 shows a simulation of (11) with the adaptation rule given by (14). In this case, we no longer see the aforementioned spiking behavior since $a_i$ is smooth. However, we observe that the error tends to increase periodically, coinciding with $x_c$ crossing zero; for $x_c = 0$, the system is in open loop. This issue is barely noticeable in the time series, though. We moreover notice that, with both adaptation rules, the closed-loop systems recover well enough after the disturbance added in $t \in [500, 550]$.

Remark 7

While our main exposition has considered that the reference and the plant have the same number of nodes, it is clear that, as in this section, such a consideration is not necessary. If the plant has $n \geq 1$ nodes and the reference has $m \geq 1$ nodes, one simply needs to define the $n$ errors $\tilde{x}_i = X_j - x_i$ for $i = 1, \ldots, n$ and some $j \in \{1, \ldots, m\}$.

To highlight the fact that our approach does not depend on the particular topologies of the reference's and plant's adjacency matrices, in this section, we present simulations with explicitly different adjacency matrices. Figure 6 shows a simulation of (11) with (12) and (13), where, for $n = 4$, we have included the graphs of the reference and of the plant, which are different. As in the simulations of Sec. III A, the spikes in the time series of $a_i$ are due to the discontinuous implementation (12) and are modulated by $\delta$ in (13). For comparison, Fig. 7 shows a simulation of (11) with (14). Both simulations follow the same overall format we have used so far: for $t \in [0, 300]$, the controller is off, and hence, we see the open-loop response; at $t = 300$, the controller is turned on, and we see how the plant follows the reference, while for $t \in [500, 550]$, a constant random disturbance is added to each node. We notice from the time series that the performance of both controllers is reasonably good.

As a final application and comparison setup, we consider here that the adjacency matrices of both the reference and the plant are time-varying. We do this similarly to what was presented in Sec. II. More precisely, for the simulations of this section, each entry $A_{ij}$ (for both the plant and the reference) of the adjacency matrices is of the form $A_{ij} = A_{ij}(t) = \bar{A}_{ij}\left(1 + \frac{1}{5}\sin(\omega_{ij} t)\right)$, where $\omega_{ij}$ is some random frequency within the interval $(0, 1)$ and $\bar{A}$ is a random rhythmic matrix, meaning that the underlying topologies of each network are also different. While we do not perform a formal analysis under this setup, both adaptation approaches seem suitable to handle this scenario. On the one hand, the adaptation rule (12) and (13) uses the full knowledge of the adjacency matrices. On the other hand, the adaptive rule (14) accounts for the time-varying adjacency matrices from the fact that $a_i$ is regulated by the error. Figure 8 corresponds to (11) with (12) and (13), while Fig. 9 uses (14). The description of the results is similar to Secs. III A and III B. In both cases, we can observe that, despite the adjacency matrices being time-varying, both adaptation rules perform well.

This paper has explored co-evolutionary control approaches to render a desired oscillatory pattern stable for a particular class of dynamic networks while keeping their internal properties fixed. We have developed two kinds of controllers: one that uses the full information from the reference and another that only requires local error information. From our analysis, both controllers seem to perform reasonably well. Their main difference lies in their potential implementation. While the first controller (see Proposition 1) performs observably well, it is hardly implementable, not only because it requires full information about the reference but also because of its particular form [see (6)]. In contrast, the second controller (see Proposition 2) requires only local error information. We have also implemented such controllers as co-evolutionary rules in an extended dynamic network and tested their performance in three relevant scenarios. In all cases, again, the controllers perform reasonably well.

An immediate open problem to consider in the future is to extend our approach to more general coupled systems. As a concrete example, one may want to explore ideas similar to those developed here, but for coupled spiking neurons, whose models also have mixed-feedback loops. A challenge for these models is that the nonlinearity is not bounded, in contrast to (2). Another possibility is to consider that the reference and the plant are not of the same type. Hence, even if a system cannot inherently produce and sustain specific oscillations, a controller may impose them.

The work of L.G.V.P. was funded by the Data Science and Systems Complexity (DSSC) Centre of the University of Groningen, as part of a PhD scholarship. H.J.K. would like to acknowledge the financial support of the CogniGron research center and the Ubbo Emmius Funds (University of Groningen). The authors thank the anonymous reviewers for their thorough comments that helped to improve the manuscript.

The authors have no conflicts to disclose.

Luis Guillermo Venegas-Pineda: Conceptualization (equal); Data curation (equal); Formal analysis (equal); Methodology (equal); Software (equal); Visualization (equal); Writing – original draft (equal). Hildeberto Jardón-Kojakhmetov: Conceptualization (equal); Formal analysis (equal); Methodology (equal); Project administration (equal); Software (equal); Supervision (equal); Writing – original draft (equal). Ming Cao: Conceptualization (equal); Formal analysis (equal); Project administration (equal); Supervision (equal).

The data that support the findings of this study are openly available in CoevolutionaryControl-2025, at https://github.com/hjardonk/CoevolutionaryControl-2025, Ref. 30.

1. J. Pantaleone, "Stability of incoherence in an isotropic gas of oscillating neutrinos," Phys. Rev. D 58, 073002 (1998).
2. A. Sajadi, R. W. Kenyon, and B. Hodge, "Synchronization in electric power networks with inherent heterogeneity up to 100% inverter-based renewable generation," Nat. Commun. 13, 2490 (2022).
3. K. Wiesenfeld, P. Colet, and S. H. Strogatz, "Frequency locking in Josephson arrays: Connection with the Kuramoto model," Phys. Rev. E 57, 1563–1569 (1998).
4. D. Călugăru, J. F. Totz, E. A. Martens, and H. Engel, "First-order synchronization transition in a large population of strongly coupled relaxation oscillators," Sci. Adv. 6, eabb2637 (2020).
5. A. F. Taylor, M. R. Tinsley, F. Wang, Z. Huang, and K. Showalter, "Dynamical quorum sensing and synchronization in large populations of chemical oscillators," Science 323, 614 (2009).
6. M. Shafiei, S. Jafari, F. Parastesh, M. Ozer, T. Kapitaniak, and M. Perc, "Time delayed chemical synapses and synchronization in multilayer neuronal networks with ephaptic inter-layer coupling," Commun. Nonlinear Sci. Numer. Simul. 84, 105175 (2020).
7. P. J. Uhlhaas and W. Singer, "Neural synchrony in brain disorders: Relevance for cognitive dysfunctions and pathophysiology," Neuron 52, 155–168 (2006).
8. A. Correa, S. Alguacil, L. F. Ciria, A. Jiménez, and M. Ruíz, "Circadian rhythms and decision-making: A review and new evidence from electroencephalography," Chronobiol. Int. 37, 520–541 (2020).
9. C. A. Symanski, J. H. Bladon, E. T. Kullberg, P. Miller, and S. P. Jadhav, "Rhythmic coordination and ensemble dynamics in the hippocampal-prefrontal network during odor-place associative memory and decision making," eLife 11, e79545 (2022).
10. C. Bick, M. Goodfellow, C. R. Laing, and E. A. Martens, "Understanding the dynamics of biological and neural oscillator networks through exact mean-field reductions: A review," J. Math. Neurosci. 10, 9 (2020).
11. M. J. B. Hauser, "Synchronization of glycolytic activity in yeast cells," Curr. Genet. 68, 69–81 (2022).
12. O. Kiehn, "Decoding the organization of spinal circuits that control locomotion," Nat. Rev. Neurosci. 17, 224–238 (2016).
13. E. Marder, "Neuromodulation of neuronal circuits: Back to the future," Neuron 76, 1–11 (2012).
14. E. Marder and D. Bucher, "Central pattern generators and the control of rhythmic movements," Curr. Biol. 11, R986–R996 (2001).
15. J. J. Tyson, R. Albert, A. Goldbeter, P. Ruoff, and J. Sible, "Biological switches and clocks," J. R. Soc. Interface 5, S1–S8 (2008).
16. A. J. Ijspeert, "Central pattern generators for locomotion control in animals and robots: A review," Neural Networks 21, 642–653 (2008).
17. A. J. Ijspeert, J. Nakanishi, H. Hoffmann, P. Pastor, and S. Schaal, "Dynamical movement primitives: Learning attractor models for motor behaviors," Neural Comput. 25, 328–373 (2013).
18. L. Potvin-Trottier, N. D. Lord, G. Vinnicombe, and J. Paulsson, "Synchronous long-term oscillations in a synthetic gene circuit," Nature 538, 514–517 (2016).
19. B. J. Rosier and T. F. de Greef, "Synthetic biology: How to make an oscillator," eLife 4, e12260 (2015).
20. J. Stricker, S. Cookson, M. R. Bennett, W. H. Mather, L. S. Tsimring, and J. Hasty, "A fast, robust and tunable synthetic gene oscillator," Nature 456, 516–519 (2008).
21. E. Chicca, F. Stefanini, C. Bartolozzi, and G. Indiveri, "Neuromorphic electronic circuits for building autonomous cognitive systems," Proc. IEEE 102, 1367–1388 (2014).
22. E. Angelidis, E. Buchholz, J. Arreguit, A. Rougé, T. C. Stewart, A. V. Arnim, A. Knoll, and A. J. Ijspeert, "A spiking central pattern generator for the control of a simulated lamprey robot running on SpiNNaker and Loihi neuromorphic boards," Neuromorphic Comput. Eng. 1, 014005 (2021).
23. D. W. Franklin and D. M. Wolpert, "Computational mechanisms of sensorimotor control," Neuron 72, 425–442 (2011).
24. E. P. Frady and F. T. Sommer, "Robust computation with rhythmic spike patterns," Proc. Natl. Acad. Sci. 116, 18050–18059 (2019).
25. L. Ribar and R. Sepulchre, "Neuromorphic control: Designing multiscale mixed-feedback systems," IEEE Control Syst. 41, 34–63 (2021).
26. R. Sepulchre, G. Drion, and A. Franci, "Control across scales by positive and negative feedback," Annu. Rev. Control Rob. Auton. Syst. 2, 89–113 (2019).
27. W. Che and F. Forni, "Dominant mixed feedback design for stable oscillations," IEEE Trans. Autom. Control 69, 1133–1140 (2023).
28. T. Tsai, Y. S. Choi, W. Ma, J. R. Pomerening, C. Tang, and J. E. Ferrell, "Robust, tunable biological oscillations from interlinked positive and negative feedback loops," Science 321, 126–129 (2008).
29. O. Juárez-Álvarez and A. Franci, "Collective rhythm design in coupled mixed-feedback systems through dominance and bifurcations," arXiv:2401.04324 [math.DS] (2024).
30. H. Jardón-Kojakhmetov, "Code for 'Co-evolutionary control of a class of coupled mixed-feedback systems'," https://github.com/hjardonk/CoevolutionaryControl-2025 (2025).
31. C. Kuehn, Multiple Time Scale Dynamics (Springer, 2015), Vol. 191.
32. P. Kokotović, H. K. Khalil, and J. O'Reilly, Singular Perturbation Methods in Control: Analysis and Design (SIAM, 1999).
33. K. D. Young, V. I. Utkin, and U. Ozguner, "A control engineer's guide to sliding mode control," IEEE Trans. Control Syst. Technol. 7, 328–342 (1999).