Oscillatory behavior is ubiquitous in many natural and engineered systems, often emerging through self-regulating mechanisms. In this paper, we address the challenge of stabilizing a desired oscillatory pattern in a networked system where neither the internal dynamics nor the interconnections can be changed. To achieve this, we propose two distinct control strategies. The first requires the full knowledge of the system generating the desired oscillatory pattern, while the second only needs local error information. In addition, the controllers are implemented as co-evolutionary, or adaptive, rules of some edges in an extended plant-controller network. We validate our approach in several insightful scenarios, including synchronization and systems with time-varying network structures.
This work tackles the challenge of stabilizing a desired oscillatory pattern in a network with a fixed structure. We propose two distributed controllers to achieve this. These controllers are fundamentally different: the first is highly robust and eliminates nonlinearities but requires full knowledge of the reference system. The second, inspired by the neuromodulation of biological systems, only requires local error information. Moreover, the controllers are implemented as co-evolutionary rules for edges connecting an oscillatory node with the controlled system. This effectively results in an adaptive network where the adaptation rules shape the dynamics of the controlled nodes. We showcase the co-evolutionary controllers by exploring the synchronization of the controlled system, showing that our approach is independent of the network structure, and testing the performance of the controllers in time-varying networks. Our theoretical and numerical results show two distributed strategies to induce a desired oscillatory pattern by adaptively modulating node dynamics.
I. INTRODUCTION
Inducing and regulating a desired oscillatory behavior in networked systems is a fundamental problem in control theory, with ample relevance for both natural and engineered systems. Oscillations are ubiquitous in many real-world phenomena including industrial applications,1–3 chemistry,4,5 neural networks,6,7 psychology,8,9 and biology.10,11 More specifically, in biological systems, rhythmic signals govern important physiological processes, including the heartbeat’s regulation, circadian rhythms, central pattern generators, and neuronal signaling.12–15 From the engineering side, several technologies also need such periodic signals, for example, to achieve the coordination of automation processes, including robotic locomotion,16,17 or in the developments of synthetic biological systems.18–20
Several naturally occurring oscillatory patterns are self-regulated and emerge from the intrinsic dynamics of the system. In contrast, there are many scenarios, both natural and engineered, where external control is required to induce or modify oscillations in a desired way. This is, for instance, relevant in neuromorphic engineering,21 where particular oscillatory patterns of biologically inspired neuronal networks are critical, for example, to mimic cognitive functions, coordinate sensorimotor functions, synchronize spiking neural networks, and enable efficient signal encoding, to name a few.22–24 While our interests in this paper are mainly theoretical, designing controllers that can reliably induce desired oscillations could lead to advances in neuromorphic engineering and design, brain-inspired control systems, and adaptive network dynamics, among others.
In this paper, we design two controllers that impose a desired oscillatory pattern on a networked system without modifying its internal dynamics or interconnections. Our focus is on a class of coupled mixed-feedback systems where the interplay of (fast) positive and (slow) negative feedback plays a key role in the generation of sustained oscillations [see (1) and further details below].
The manuscript is organized as follows: in Sec. II, we motivate and describe the problem at hand and propose two kinds of controllers. The first controller, while robust, needs complete information from the reference. In contrast, the second one only requires local error information. Next, in Sec. III, we embed the designed controllers into a networked framework, implementing them as a co-evolutionary, or adaptive, rule of the edges connecting a node (the controller’s node) to the nodes of the controlled system. We compare and evaluate the controllers’ performance in three key scenarios: synchronization, arbitrary network structures, and time-varying network structures. We conclude in Sec. IV, where we also discuss potential extensions of our work.
Notation: ℝ and ℂ denote the fields of real and complex numbers, respectively. We use sgn(·) for the sign function and O(·) to denote the big-Oh order symbol. An adjacency matrix is denoted by A, and a superscript r (for “reference”) or p (for “plant”), as in A^r and A^p, is used whenever it is necessary to distinguish the adjacency matrices of the reference and of the plant. Further notation is clarified when necessary.
II. PROBLEM SETUP AND DESIGN OF THE CONTROLLERS
As in (2), a key feature of mixed-feedback systems is the interplay between positive and negative feedback loops across different timescales, which is commonly observed in many biological models, particularly in neuronal dynamics. For example, it is argued26 that mixed feedback is the fundamental mechanism for excitability and explains how biological systems are able to exhibit sustained and robust oscillations.27,28 Moreover, mixed-feedback architectures have been recently exploited to develop control methods for neuromorphic systems.25
Our main motivation to consider systems given by (1) is the theory developed in Ref. 29. Relevant to us is that they propose a procedure that allows one, for example, to design an adjacency matrix such that the output corresponds to a rhythmic profile of the network (defined as the inverse problem in Ref. 29). While brief, let us be more precise: a rhythmic profile is a tuple (α₁, φ₁, …, α_n, φ_n), where the α_i represent the amplitudes and the φ_i the phases. If, for all i, the solutions of (1) converge to some periodic function with amplitude α_i and phase φ_i, defined as in Ref. 29, then one says that the network is rhythmic. Assume further that α₁ ≠ 0 and that α_i ≥ 0 for all i. The tuple of relative amplitudes α_i/α₁ and relative phases φ_i − φ₁ is called a relative rhythmic profile. By following the procedure summarized below, one can construct an adjacency matrix such that the solutions of (1) (locally) converge to a particular rhythmic profile:
Let λ = σ + iω ∈ ℂ, with ω ≠ 0, be the desired leading eigenvalue.
Pick the remaining eigenvalues λ₃, …, λ_n ∈ ℝ with λ_i < 0 for all i. Further choose the eigenvector v with entries v_i = α_i e^{iφ_i}, encoding the relative rhythmic profile. Define Λ as the real block-diagonal matrix carrying the 2 × 2 block for λ, λ̄ followed by λ₃, …, λ_n.
Find an invertible matrix T = [Re(v) Im(v) C], where C is a complement matrix.
Define A = TΛT⁻¹.
We refer to an adjacency matrix obtained from the above algorithm as rhythmic.
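In code, the construction reads as follows. This is our own minimal sketch, not the authors' implementation: the function name, intervals, and the sparsity scheme for the complement matrix are assumptions, and the real 2 × 2 block stands in for the leading complex conjugate pair.

```python
import numpy as np

def rhythmic_adjacency(amps, phases, lam, other_eigs, seed=None):
    """Sketch of the procedure: build A whose leading eigenpair is
    (lam, conj(lam)) with eigenvector entries amps[k] * exp(1j * phases[k])
    and whose remaining eigenvalues are the (stable) reals in other_eigs."""
    rng = np.random.default_rng(seed)
    n = len(amps)
    v = np.asarray(amps) * np.exp(1j * np.asarray(phases))
    # Real basis of the leading complex eigenspace.
    V = np.column_stack([v.real, v.imag])
    # Random sparse complement, regenerated until T is (well) invertible.
    while True:
        C = rng.uniform(-1.0, 1.0, (n, n - 2)) * (rng.random((n, n - 2)) < 0.5)
        T = np.hstack([V, C])
        if abs(np.linalg.det(T)) > 1e-3:
            break
    # Real block-diagonal form: 2x2 block for lam, conj(lam), then the rest.
    a, b = lam.real, lam.imag
    D = np.zeros((n, n))
    D[:2, :2] = [[a, -b], [b, a]]
    D[2:, 2:] = np.diag(other_eigs)
    return T @ D @ np.linalg.inv(T)
```

By construction, the returned matrix is similar to the block-diagonal form, so its spectrum is exactly the prescribed eigenvalues.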
Once the adjacency matrix is picked, one can then choose the remaining parameters such that the origin of (1) is unstable, leading to the rhythmic profile through a Hopf bifurcation; recall Remark 1 and see Ref. 29, Sec. VIII.
In Ref. 29, the slow-fast as well as the mixed-feedback structure of (1) plays a fundamental role. This structure will also be relevant for the controllers presented in Secs. II and III.
We now consider the following problem: suppose we are given a reference rhythmic network and a plant network, both of type (1). The adjacency matrix of the plant cannot be adjusted, but we want to render the plant rhythmic with respect to the reference’s profile. Since the adjacency matrix of the plant is fixed, we shall design a controller that solves our problem.
While our focus is theoretical, we notice that the described situation frequently appears in real-life scenarios: (a) many neuromorphic devices have fixed hardwired interconnections but allow modulation through externally controllable nodes; (b) deep brain stimulation consists of applying external electrical pulses to the (fixed) brain, inducing a desired rhythmic activity. Similar challenges arise in robotics, where controllers must coordinate body motion over a fixed mechanical structure, or in the synchronization of power grids, where the underlying network remains unchanged while controllers regulate, for example, frequency stability.
Network (upper) and block diagram (lower) representations of our control problem (3), in which elements of the reference, plant, and control are presented in blue, green, and red, respectively. For the networks, the effect of the controller on the plant is displayed in red, while the rest of the connections are provided for illustrative purposes only, as any topology is allowed in our methodology. For the block diagram, uppercase and lowercase represent the variables of the reference and of the plant, respectively, from which the error variables and are defined for our analysis.
In the following, we propose two different kinds of controllers that render the origin of (4) locally stable. The first one eliminates the sigmoidal nonlinearities, which results in a robust controller but requires complete knowledge of the reference. For comparison purposes, we propose an additional controller inspired by the synaptic modulation of neuronal systems. Such a controller does not require any knowledge of the reference. For clarity of our exposition, these two types of controllers are first provided without considering their implementation, see Propositions 1 and 2. After that, we propose a particular way of implementing them in a networked system where all nodes are of type (1), including the controller. This will mean that the controller action is implemented via the adaptation of the weights connecting the controller node and the plant’s nodes; see more details in Sec. III.
A. General aspects of the simulations
We describe some generalities about the simulations that we present below. When required, more details are provided, and all codes are available in Ref. 30. In our simulations, the nonlinearity is chosen as an odd sigmoid, the small parameter is kept fixed, and many of the other parameters are chosen at random. When we say that a parameter is chosen “at random,” we always mean it under a uniform distribution and specify the interval of possible values. The following steps align with the algorithm described in Sec. II:
The relative amplitudes are chosen at random within a positive interval; the lower endpoint is simply chosen so that no relative amplitude is too small in the simulations. The phases are chosen at random within an interval; any particular coincidence of phases has probability zero.
For the leading eigenvalue, the real part is kept fixed and the imaginary part is randomly chosen within an interval; this choice ensures that the period of the rhythmic profile, given in Ref. 29, is not too small. The rest of the eigenvalues are chosen at random within a negative interval. This provides the eigenvalue matrix of the construction.
The complement matrix is a randomly generated sparse matrix with nonzero coefficients within a bounded interval. This matrix is regenerated until the full transformation matrix is invertible.
The adjacency matrices of the reference and of the plant are thus given via the previous algorithm. Since the algorithm to obtain the adjacency matrices is mostly random, both matrices are, with probability 1, different. Likewise, since the complement matrix is sparse and randomly generated, the topologies of the adjacency matrices are, with probability 1, different. We emphasize that, from the results we present below, the plant’s adjacency matrix does not have to be rhythmic. To keep all systems (reference, plant, and controller) within the same context, we have opted to keep it rhythmic for the simulations.
In addition, and due to the choice of , is randomly chosen within the interval , and the corresponding is set as (see Ref. 29, Theorem 1).
To better see the effect of the controllers, we present our simulations in the following way: during an initial time window, the controller is off, and so we see the open-loop response. Then, the controller is turned on, and from thereon, we see the closed-loop response. In our algorithms, this is realized by multiplying the controller by a unit step that switches at the time at which the controller is turned on. Since we chose some parameters at random, the plots we show are representative of several simulation runs.
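The on/off protocol just described can be sketched with a simple integrator. This is a toy of our own (forward Euler, stand-in dynamics and names); its only point is the step-function gating of the control input.

```python
import numpy as np

def simulate_gated(f_open, u_ctrl, x0, t_on, t_end, dt=1e-3):
    """Forward-Euler sketch of the protocol: the control input is multiplied
    by a unit step, so it is inactive for t < t_on and active afterwards."""
    ts = np.arange(0.0, t_end, dt)
    xs = np.empty((len(ts), len(x0)))
    x = np.array(x0, dtype=float)
    for i, t in enumerate(ts):
        gate = 1.0 if t >= t_on else 0.0  # step at the switching time t_on
        x = x + dt * (f_open(t, x) + gate * u_ctrl(t, x))
        xs[i] = x
    return ts, xs
```

For instance, with an unstable open loop f_open = 0.1·x and u = −x, the trajectory grows until t_on and decays afterwards, mimicking the open-loop/closed-loop split visible in the figures.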
B. Elimination of nonlinearities
It suffices to let . For sufficiently large, the dynamics of evolve in a fast timescale. In the limit when , is constant, and the equilibrium of , where is the fast time, is hyperbolic. This is equivalent to saying that the critical manifold associated with (8), namely, is normally hyperbolic31 and globally exponentially stable. The restriction of (8) to the critical manifold leads precisely to the linear error dynamics (5), whose origin is globally exponentially stable. Since the equilibria of both the fast and the slow dynamics are exponentially stable, the result follows from, for example, Fenichel’s (or Tikhonov’s) theorem.31,32
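The boundary-layer argument can be illustrated on a toy slow-fast system. This is our own stand-in, not system (8): after a fast transient, the fast variable tracks the attracting critical manifold up to an error that shrinks with the small parameter.

```python
import numpy as np

def slow_fast(eps, t_end=2.0, dt=1e-5):
    """Toy illustration of a normally hyperbolic, attracting critical manifold:
        x' = -x + z                 (slow variable)
        eps * z' = -(z - tanh(x))   (fast variable; manifold z = tanh(x))
    Explicit Euler; dt must be much smaller than eps for stability."""
    x, z = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        x += dt * (-x + z)
        z += dt * (-(z - np.tanh(x)) / eps)
    return x, abs(z - np.tanh(x))
```

After the boundary layer, the distance to the manifold is O(eps), mirroring how the restriction to the critical manifold leaves only the (stable) slow error dynamics.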
A simulation of (8) with the controller given by (7) is provided in Fig. 2; recall the general considerations described in Sec. II A. Figure 2 shows, in the first two rows and Fig. 2(a), the time series and the mean norm of the error in logarithmic scale. In this and all following simulations, we prefer to show the mean norm of the error because we keep all the controller constants the same, even for different numbers of nodes; therefore, for comparison purposes, it is convenient to normalize the error. In Fig. 2(b), we show the mean error for a simulation similar to the previous one but with more nodes. Figure 2(c) also shows the mean error for a simulation with time-varying adjacency matrices. More precisely, for this simulation, each entry (for both the plant and the reference) of the adjacency matrices oscillates sinusoidally with some random frequency around a random rhythmic matrix. Since the controller fully uses these adjacency matrices, its performance is also good. Finally, in Fig. 2(d), we show another simulation but now with a mismatch between the parameters used by the controller and those of the reference. More precisely, for this simulation, we perturb the parameters used by the controller in (6) by some small random number. Such a small perturbation is different for each parameter and each entry of the adjacency matrix. We notice that since the mismatch is relatively small, the performance of the controller is still reasonable, although the error is roughly one order of magnitude larger than the error without the mismatch [Fig. 2(b)]. For all these simulations, we have used the same controller gain. It follows from Proposition 1 that the larger the gain, the smaller the error.
The first and second rows and (a) correspond to a simulation of (8) with given by (6) and for four nodes; (plant) is dashed, and (reference) is solid. (a)–(d) show, in logarithmic scale, the mean norm of the error, : (a) for four nodes, (b) for nodes, (c) for nodes with time-varying adjacency matrices, and (d) for nodes with a mismatch in the reference’s parameters used by the controller. For , the controller is off, and, hence, we see the open-loop response. At , the controller is turned on, and from thereon, we observe the closed-loop response. We notice, naturally, that for , the plant closely follows the reference. See more details in the main text and compare it with Fig. 3.
Simulation of (3) with the neuromorphic inspired controller given by (10). The first and second rows and (a) correspond to a simulation with four nodes; (plant) is dashed, and (reference) is solid. (a)–(c) show, in logarithmic scale, the mean norm of the error : (a) for nodes, (b) for nodes, (c) for nodes with time-varying adjacency matrices. For , the controller is off, and, hence, we see the open-loop response. At , the controller is turned on, and from thereon, we observe the closed-loop response. We notice, naturally, that for , the plant closely follows the reference. See more details in the main text and compare it with Fig. 2.
Synchronization of the plant with respect to a reference node using the adaptation rule (12) and (13). The first and second rows show the time series of (solid) and (dashed) and the corresponding adaptive weights for nodes, while (a) shows the corresponding error in logarithmic scale. (b) shows a similar simulation but for nodes. In all simulations, we first show the system in open-loop for . At , the controller is turned on, and we see how the error decreases. For , a random constant disturbance is added to each node of the plant, hence the observed increase in the error. After , we see how the error again decreases, showcasing the recovery from the added disturbance.
Synchronization of the plant with respect to a reference node using the Hebbian-like adaptation rule (14). The first and second rows show the time series of (solid) and (dashed) and the adaptive weights for nodes while (a) shows the corresponding error in logarithmic scale. (b) shows a similar simulation but for nodes. In all simulations, we first show the system in open-loop for . At , the controller is turned on, and we see how the error decreases. For , a random constant disturbance is added to each node of the plant, hence the observed increase in the error. After , we see how the error again decreases, showcasing the recovery from the added disturbance. The most important distinction with Fig. 4 is that we observe periodic increases of the error, which coincide with the crossings of at the origin and due to the multiplicative effect of in the interconnection, namely, the term in each equation of the nodes.
Simulation of (11) with (12) and (13) where the plant and the reference have different network topologies. At the top of the figure, we show on the left the reference’s graph, and on the right the plant’s graph; notice that they are indeed different. The second and third rows show the time series of (solid) and (dashed) and the adaptive weights for nodes, while (a) shows the corresponding error in logarithmic scale. (b) shows a similar simulation but for nodes. In all simulations, we first show the system in open-loop for . At , the controller is turned on, and we see how the error decreases. For , a random constant disturbance is added to each node of the plant, hence the observed increase in the error. After , we see how the error again decreases, showcasing the recovery from the added disturbance.
Simulation of (11) with (14) where the plant and the reference have different network topologies. At the top of the figure, we show on the left the reference’s graph, and on the right the plant’s graph; notice that they are indeed different. The second and third rows show the time series of (solid) and (dashed) and the adaptive weights for nodes, while (a) shows the corresponding error in logarithmic scale. (b) shows a similar simulation but for nodes. In all simulations, we first show the system in open-loop for . At , the controller is turned on, and we see how the error decreases. For , a random constant disturbance is added to each node of the plant, hence the observed increase in the error. After , we see how the error again decreases, showcasing the recovery from the added disturbance.
Simulation of (11) with (12) and (13) where the plant and the reference have different network topologies with time-varying weights. The first and second rows show the time series of (solid) and (dashed) and the adaptive weights for nodes, while (a) shows the corresponding error in logarithmic scale. (b) shows a similar simulation but for nodes. In all simulations, we first show the system in open-loop for . At , the controller is turned on, and we see how the error decreases. For , a random constant disturbance is added to each node of the plant, hence the observed increase in the error. After , we see how the error again decreases, showcasing the recovery from the added disturbance.
Simulation of (11) with (14) where the plant and the reference have different network topologies with time-varying weights. The first and second rows show the time series of (solid) and (dashed) and the adaptive weights for nodes, while (a) shows the corresponding error in logarithmic scale. (b) shows a similar simulation but for nodes. In all simulations, we first show the system in open-loop for . At , the controller is turned on, and we see how the error decreases. For , a random constant disturbance is added to each node of the plant, hence the observed increase in the error. After , we see how the error again decreases, showcasing the recovery from the added disturbance.
C. Neuromodulation inspired controller
As already mentioned, a drawback of the controller presented in Sec. II B is its dependence on the full knowledge of the reference’s parameters. To offer an alternative, we propose a second controller whose behavior is inspired by the neuromodulation of synaptic weights in neuroscience and neuromorphic systems.
Intuitively, the controller adapts to the rescaled error (quickly if is large) and effectively provides a proportional feedback . If the error is large, then the controller compensates the second sigmoid function to balance them out. If the error is small, then both sigmoid functions have roughly the same value and cancel out, which leads to the exponentially stable linear error dynamics (5). Besides the well-known advantages of introducing the controller in integral form, the idea just described has its inspiration in some neurological mechanisms, where the dynamic equation for is in “leaky form,” similar to the way biological neurons process information. Likewise, the introduced controller can be regarded as an adaptive synaptic weight that modulates the influence of the error. All these interpretations become more evident in Sec. III.
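This intuition can be checked on a scalar toy problem. The equations below are our own stand-ins, not (9): a leaky state integrates the error and supplies the compensating feedback, and the residual error shrinks as the gain grows.

```python
def leaky_adaptive(k, tau=0.1, t_end=30.0, dt=1e-3):
    """Toy leaky ('integral-form') adaptation, in the spirit of the
    neuromodulation-inspired controller:
        e' = -e + d + u        (error dynamics, constant disturbance d)
        tau * u' = -u - k * e  (leaky adaptive state)
    The steady state is e = d / (1 + k): larger gains, smaller errors."""
    e, u, d = 1.0, 0.0, 0.5
    for _ in range(int(t_end / dt)):
        # simultaneous explicit-Euler update (old e used in both lines)
        e, u = e + dt * (-e + d + u), u + dt * (-u - k * e) / tau
    return abs(e)
```

The leaky form mirrors the “leaky integration” mentioned above: the state forgets slowly while the error drives it toward the value that cancels the mismatch.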
The previous idea is formalized in Proposition 2.
First, notice that the nonlinear terms in (9) are bounded, that is, . Due to the linear part of (9), its trajectories are globally attracted to a forward invariant set around the origin. Since , a rough estimate of such a region is a ball of radius centered at the origin.
Figure 3 shows a simulation of (3) with the controller given by (10); recall the general considerations described in Sec. II A. The figure shows, in the first two rows and Fig. 3(a), the time series and the mean norm of the error in logarithmic scale. In Fig. 3(b), we show the mean error for a simulation similar to the previous one but with more nodes. Figure 3(c) also shows the mean error for a simulation with time-varying adjacency matrices, generated in the same way as those for Fig. 2. Since this controller does not use the parameters of the reference, there is no simulation in Fig. 3 equivalent to that of Fig. 2(d). For all the simulations in Fig. 3, the same controller constants have been used. It follows from Proposition 2 that the larger the gain, the smaller the error.
III. CO-EVOLUTIONARY IMPLEMENTATION OF THE CONTROLLERS AND APPLICATIONS
The form of (13) is due to the multiplicative action of . If , then the controller would be exactly the same as in (6). However, from (2), the only stable equilibrium point of the controller’s node dynamics is the origin.
The adaptation rule is discontinuous and reminiscent of sliding mode control.33 Improvements to this approach, for example, to avoid discontinuities or to rigorously prove the uniqueness of solutions, are not further discussed here but shall be considered in future research.
Here, the biological inspiration is more evident: the adaptation rule (14) has two main components. The first is a “pre-synaptic” activity-dependent term, regulated by the “post-synaptic” error and reminiscent of a Hebbian-like adaptation (one could say that the learning rate is mediated by how far the plant’s node is from the reference’s). The second is an intrinsic decay ensuring that the adaptive weights remain bounded.
A more sensible perspective is to recall that (10) effectively induces a proportional feedback into the error system. If the multiplicative form of the interconnection is considered, then the ideal value of the weight would again introduce a division by the reference signal. In our heuristic implementation, we consider a scaling, or gain, whose sign is its most important aspect, leading to (14). Naturally, since the reference signal crosses zero, each time it does so, the “learned” value of the weight is forgotten. This is, indeed, visible in the simulations presented below.
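A toy version of such a Hebbian-like rule makes the boundedness role of the decay concrete. The signals below (a sinusoidal pre-synaptic activity and a stand-in error) are our own assumptions, not the paper's (14).

```python
import math

def hebbian_weight(eta, mu, t_end=50.0, dt=1e-3):
    """Toy Hebbian-like adaptation with intrinsic decay:
        w' = eta * X(t) * e(t) - mu * w
    with pre-synaptic activity X(t) = sin(t) and a stand-in 'post-synaptic'
    error e(t) = cos(t). The decay keeps |w| <= eta * max|X*e| / mu."""
    w, w_max = 0.0, 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        w += dt * (eta * math.sin(t) * math.cos(t) - mu * w)
        w_max = max(w_max, abs(w))
    return w_max
```

Note that whenever the pre-synaptic signal crosses zero the drive vanishes and the decay dominates, which is the “forgetting” visible in the simulations.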
In Subsecs. III A–III C, we present the simulations of (11), and we compare, through different application scenarios, the adaptation rules given by (12) and (13) and by (14). In addition to the general considerations for our numerical simulations described in Sec. II A, we add the following: during an intermediate time interval, we add a constant random disturbance to each node, different for and independent across nodes. We do this to numerically test how the controlled plant recovers from a perturbed state.
To avoid repetitions, we mention here once that in our numerical simulations of (11) with the adaptive implementation given by (12) and (13) (specifically, Figs. 4, 6, and 8), we have used and . Regarding the adaptive implementation given by (14) (specifically Figs. 5, 7, and 9), we have used , and . We have kept these values the same across the different simulations for comparison purposes; of course, the performance of the adaptation rules can be improved if one tunes such parameters adequately.
A. Synchronization
Figure 4 shows a simulation of (11) with the adaptation rule given by (12) and (13). The observed spikes in the weights are due to the discontinuous form of the adaptation and are modulated by the gain in (13). In contrast, Fig. 5 shows a simulation of (11) with the adaptation rule given by (14). For this case, we no longer see the aforementioned spiking behavior since (14) is smooth. However, we observe that the error tends to increase periodically, and this coincides with the reference crossing zero, as at such crossings the system is momentarily in open loop. This issue is barely noticeable in the time series, though. We moreover notice that, with both adaptation rules, the closed-loop systems recover well after the added disturbance.
While our main exposition has assumed that the reference and the plant have the same number of nodes, this section makes clear that such an assumption is not necessary. If the plant has nodes and the reference has nodes, one simply needs to define errors for and .
B. Arbitrary network topologies
To highlight that our approach does not depend on the particular topologies of the reference’s and plant’s adjacency matrices, in this section, we present simulations with explicitly different adjacency matrices. Figure 6 shows a simulation of (12) and (13), where for , we have included the graphs of the reference and of the plant, which are different. As in the simulations of Sec. III A, the spikes in the time series of are due to the discontinuous implementation (12) and are modulated by in (13). For comparison, Fig. 7 shows a simulation of (11) with (14). Both simulations follow the same overall format we have used so far: for , the controller is off, and hence, we see the open-loop response; at , the controller is turned on, and we see how the plant follows the reference, while for , a constant random disturbance is added to each node. We notice from the time series that the performance of both controllers is reasonably good.
C. Time-varying adjacency matrices
As a final application and comparison setup, we consider here that the adjacency matrices of both the reference and the plant are time-varying. We do this similarly to what was presented in Sec. II. More precisely, for the simulations of this section, each entry (for both the plant and the reference) of the adjacency matrices is of the form , where is some random frequency within the interval and is a random rhythmic matrix, meaning that the underlying topologies of the two networks are also different. While we do not perform a formal analysis of this setup, both adaptation approaches seem suitable to handle this scenario. On the one hand, the adaptation rules (12) and (13) use the full knowledge of the adjacency matrices. On the other hand, the adaptive rule (14) accounts for the time-varying adjacency matrices through the fact that is regulated by the error. Figure 8 corresponds to (11) with (12) and (13), while Fig. 9 uses (14). The description of the results is similar to Secs. III A and III B. In both cases, we can observe that, despite the adjacency matrices being time-varying, both adaptation rules perform well.
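One plausible reading of the time-varying setup is sinusoidally modulated edge weights with independent random frequencies per entry; since the exact functional form is elided above, the sketch below is an assumption, with placeholder sizes and frequency ranges.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of a "rhythmic" adjacency matrix: each entry oscillates as
# a_ij * sin(omega_ij * t) with random base weight a_ij and random
# frequency omega_ij.  Drawing these independently for the plant and
# the reference yields different time-varying topologies.

n = 6                                         # placeholder network size
a = rng.uniform(0.0, 1.0, size=(n, n))        # random base weights
omega = rng.uniform(0.5, 2.0, size=(n, n))    # random per-edge frequencies
np.fill_diagonal(a, 0.0)                      # no self-loops

def adjacency(t):
    """Time-varying adjacency matrix A(t) with oscillating entries."""
    return a * np.sin(omega * t)
```

Calling `adjacency(t)` inside the integration loop then couples the nodes through a network whose edge weights (and effective topology, as entries cross zero) change continuously in time.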
IV. CONCLUSION AND DISCUSSION
This paper has explored co-evolutionary control approaches to render a desired oscillatory pattern stable for a particular class of dynamic networks while keeping their internal properties fixed. We have developed two kinds of controllers: one that uses the full information from the reference and another that requires only local error information. From our analysis, both controllers perform reasonably well. Their main difference lies in their implementability. While the first controller (see Proposition 1) performs notably well, it is hardly implementable, not only because it requires full information about the plant but also because of its particular form [see (6)]. In contrast, the second controller (see Proposition 2) requires only local error information. We have also implemented such controllers as co-evolutionary rules in an extended dynamic network and tested their performance in three relevant scenarios. In all cases, again, the controllers perform reasonably well.
An immediate open problem is to extend our approach to more general coupled systems. As a concrete example, one may want to explore ideas similar to those developed here for coupled spiking neurons, whose models also have mixed-feedback loops. A challenge for these models is that the nonlinearity is not bounded, in contrast to (2). Another possibility is to consider that the reference and the plant are not of the same type. Hence, even if a system cannot inherently produce and sustain specific oscillations, a controller may impose them.
ACKNOWLEDGMENTS
The work of L.G.V.P. was funded by the Data Science and Systems Complexity (DSSC) Centre of the University of Groningen, as part of a PhD scholarship. H.J.K. would like to acknowledge the financial support of the CogniGron research center and the Ubbo Emmius Funds (University of Groningen). The authors thank the anonymous reviewers for their thorough comments that helped to improve the manuscript.
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
Author Contributions
Luis Guillermo Venegas-Pineda: Conceptualization (equal); Data curation (equal); Formal analysis (equal); Methodology (equal); Software (equal); Visualization (equal); Writing – original draft (equal). Hildeberto Jardón-Kojakhmetov: Conceptualization (equal); Formal analysis (equal); Methodology (equal); Project administration (equal); Software (equal); Supervision (equal); Writing – original draft (equal). Ming Cao: Conceptualization (equal); Formal analysis (equal); Project administration (equal); Supervision (equal).
DATA AVAILABILITY
The data that support the findings of this study are openly available in CoevolutionaryControl-2025, at https://github.com/hjardonk/CoevolutionaryControl-2025, Ref. 30.