Adaptivity is a dynamical feature that is omnipresent in nature, socio-economics, and technology. For example, adaptive couplings appear in various real-world systems, such as power grids, social networks, and neural networks, and they form the backbone of closed-loop control strategies and machine learning algorithms. In this article, we provide an interdisciplinary perspective on adaptive systems. We reflect on the notion and terminology of adaptivity in different disciplines and discuss what role adaptivity plays in various fields. We highlight common open challenges and give perspectives on future research directions, aiming to inspire interdisciplinary approaches.

Charles Darwin taught us that it is not the strongest of a species that survive, but the ones who are most adaptable to change. Likewise, the process of learning can be considered to be “any change in a system that produces a more or less permanent change in its capacity for adapting to its environment.”1 These two statements clearly underline the importance of adaptivity for life. Simply speaking, one could say: “To live means to adapt.” At the same time, adaptive mechanisms are also the essential features of (“intelligent”) artificial systems, from state-of-the-art control techniques for complex systems, to machine learning approaches and robotic systems. Perhaps the most basic notion of adaptivity is the ability to adjust to changing conditions over time. This ability is an essential component of various natural and artificial processes considered in different research fields. It is also a key property of the human mind, enabling us to perceive and enjoy music and visual arts and to create and invent, and is thus the driving force behind all cultural achievements. Adaptive mechanisms take place on a wide range of spatial and temporal scales, from the adaptation of a single neuron, through the ability of a social system to adjust to a changing environment, up to the adaptation of the Earth system’s climate. Over the last few decades, substantial know-how to describe and control complex systems has been developed in different scientific areas. With the increasing potential of modern technology, on the one hand, and the enormous challenges facing humanity as a large social system, on the other hand, there is renewed interest in taking an interdisciplinary approach to adaptivity. This article gives an overview of the role of adaptive systems in different scientific fields and highlights prospects for future research directions on adaptivity.

A widespread feature of natural and artificial complex systems is their adaptivity. There is lively interest in modeling and understanding the various forms of adaptive mechanisms appearing in real-world systems and in developing new control strategies based on adaptive mechanisms.

Such control strategies play an essential role, especially in complex systems science, as they reflect to some extent our understanding of a complex system. Because of their interactions, relationships, dependencies, nonlinearities, and high dimensionality, the behavior of complex systems is inherently difficult to model. Machine learning tools are often used to make predictions about complex systems. However, applying machine learning to complex systems is quite challenging because the training data set has to reflect the diverse dynamics. This usually requires very large training data sets, making such methods well suited to the realm of so-called big data.

Moreover, complex systems research today focuses not only on systems consisting of many interacting components; as an interdisciplinary field, it attracts contributions from many different disciplines. Despite the strong drive for innovation and application of adaptive complex systems in various scientific fields, as conceptualized in Fig. 1, cross-fertilization between different disciplines is rarely promoted. A partial answer toward a mathematical theory of adaptive systems has been developed since the 1960s for control and optimization problems,2–6 including stochastic systems,7 a systematic exposition of the interrelations and interplay between adaptation and learning,7 as well as the use of the speed-gradient method8 in adaptive control of network topology.9 In this review, we discuss recent interdisciplinary applications of adaptive dynamical systems and focus on collecting ideas that would allow for the inclusion of modern research fields, such as complex network theory, power grid modeling, or climate systems, where a full mathematical theory is still elusive.

Fig. 1.

Adaptivity across different scientific disciplines (blue) and applications (yellow) as well as its strong interlinking and interlocking, similar to a system of gears.


This Perspective article aims to take a first step in opening a dialogue between different scientific communities and bridging the diverse formalisms of their languages. It summarizes different perspectives on the concept of adaptivity and shows which open challenges are waiting to be taken up. To this end, it brings together the viewpoints on the topic of adaptivity of researchers from a wide range of backgrounds, including physics, biology, mathematics, computer and social science, and musicology. This Perspective article features a collection of contributions from experts representing various scientific disciplines. The individual contributions are guided by the following questions:

  1. What role do adaptive mechanisms play in the respective field? How can one define adaptivity? What methods are related to adaptivity? What applications are related to adaptivity?

  2. Which challenges can be solved by using adaptive mechanisms? Are there open research questions related to adaptivity? What are the future perspectives?

The article consists of four main topical parts: Network Perspective and Models of Adaptivity (Sec. II), Perception and Neural Adaptivity (Sec. III), Adaptivity and Artificial Learning (Sec. IV), and Adaptivity in Socio-Economic Systems (Sec. V). Each part contains perspectives from several specialists active in the respective area of research.

In Sec. II, we discuss different ideas on the definition of adaptivity from the perspective of nonlinear dynamics, control theory, and network science, and how adaptive systems can be used to understand real-world systems of interacting units (networks). In the beginning, a generic viewpoint on adaptivity with regard to the interplay of structure and function in dynamical network theory is introduced (Sec. II A). Building upon this idea, adaptation is discussed as a slowly evolving feedback mechanism (Sec. II B). Further highlighted are the interplay of adaptivity and noise as well as the role of adaptive control mechanisms in inducing critical transitions. Complementing the discussion on the notion of adaptivity, the question is raised: Are the notions of adaptivity in nonlinear dynamics, neuroscience, artificial intelligence, and socio-economic dynamics instances of the same abstract notion? To answer this question, the framework of dependent type theory is introduced and suggested as a tool for comparing different notions of adaptivity (Sec. II C). Section II D summarizes the first part from the complex networks perspective, where the interplay between dynamics and network topology is at the center of interest. Here, various connections between models featuring adaptivity are shown, and adaptive network models are highlighted as a powerful approach to modeling real-world dynamical systems.

Section III focuses on the important role of adaptation in physiology, especially in the form of perception mechanisms and neuronal plasticity. Evolution tends to come up with similar solutions to related problems. The physiological properties of biological systems can be seen as complex networks of interactions, which are known as regulatory networks. In similar contexts, such regulatory networks of distinct systems share similarities; these are so-called adaptation motifs, and specific adaptation motifs have distinct functional significance (Sec. III A). Organisms, and, hence, their brains, have developed strategies to adapt to modifications in the environment across timescales, from adaptation to sudden changes in sensory stimuli to the long timescales of evolutionary processes. Learning and memory formation can also be viewed as adaptive processes, where learning in neuronal circuits relies on short-term and especially long-term synaptic plasticity (Sec. III B). Neuronal systems often consist of millions of neurons whose individual dynamics are often not accessible with mathematical methods. However, several methodologies have been developed for the macroscopic collective dynamics emerging in such systems. A powerful method is the next-generation neural mass approach, which allows for a low-dimensional reduction of neuronal populations equipped with frequency adaptation and short-term plasticity (Sec. III C). Computational models have proven to be useful for understanding the mechanisms underlying adaptation in the brain. In medicine, for example, deep brain stimulation is the gold standard for treating medically refractory Parkinson’s patients, who suffer from various motor and non-motor symptoms and display abnormal neuronal synchrony. Considering synaptic plasticity in computational modeling enables the design of appropriate therapeutic stimulation (Sec. III D). Music is a constant adaptation process, where adaptations are active processes, including changing strategies, emotional reactions, or the development of new abilities. A physical culture theory assumes that music, as an adaptive system, is represented by spatiotemporal electric fields in the brain, consisting of impulses, i.e., bursts of physical energy that are sent out, return with a certain damping, and thereby cause new impulses (Sec. III E). In experiments, the magnitude of the neural response in the auditory cortex decreases if the same stimulus is presented repetitively with a constant stimulus onset interval. This gradual reduction of the magnitude is termed adaptation and is suggested to be due to modulations of the synaptic coupling between neurons (Sec. III F).

Another wide field where adaptivity plays a key role is artificial intelligence and machine learning. We illuminate this field in Sec. IV. Indeed, at its very heart, “learning” means “adapting” to input data. The adapting system can be, for example, a real brain or an “artificial brain,” such as a neural network, and the adaptation rules may depend on the learning task, the network architecture, and the learning algorithm. Section IV provides a variety of perspectives on adaptivity in artificial learning, discussing current research, new applications, and open challenges. The methods span from deep neural networks (Sec. IV A), recurrent neural networks (Sec. IV B), and reinforcement learning (Sec. IV D) to reservoir computing (Sec. IV C). A common focus throughout Sec. IV is the two-way relationship between the natural sciences and machine learning. On the one hand, tools from theoretical physics may provide insights into the functionality of machine learning algorithms, pushing our understanding beyond the “black box” paradigm. In particular, concepts from statistical physics are explored to address fundamental questions, such as reconciling the success of artificial learning with the curse of dimensionality (see Sec. IV A). Furthermore, simple models inspired by physics are used to generate training data to probe specific features of machine learning algorithms, such as their ability to extract and utilize memory of a given input sequence (see Sec. IV B). On the other hand, the usage of machine learning tools to investigate (Secs. IV B and IV D) or to control (Secs. IV C and IV E) complex physical systems is a field of rapidly growing relevance. A striking example is how reservoir-computing techniques open up new strategies to control chaotic nonlinear dynamics (Sec. IV C). In this context, another major challenge concerns the exploration of the rules of (and the control of) the collective or cooperative behavior of self-organizing multi-agent systems, from the design of new algorithms (Sec. IV D) to the control of real-world microscopic “biomimetic” intelligent particles and swarms of robots (Sec. IV E).

Section V is devoted to the large field of socio-economic systems, where adaptive mechanisms appear naturally and play an important role in modeling. Adaptive networks play a central role not only for realistic investigations of spreading dynamics but can also help to study and design interventions for disease containment, mitigation, and eradication. Elaborating on this, an overview of adaptivity in epidemiology is provided in the first contribution of this part (Sec. V A). Another interesting topic is the interaction of social and epidemic systems, where the coevolutionary (adaptive) dynamics of the interaction structure and the dynamical units are also a focus of recent research (Sec. V B). Apart from the connection to epidemiology, social systems are themselves adaptive. Here, adaptivity can be regarded as the process of changing social systems through external influences. In this context, understanding the changes induced by increasing connectivity through online platforms or by the increasing availability of information is a driving research question (Sec. V C). The human factor is also considerably important for the (adaptive) control of power grids, e.g., considering a temporally changing energy consumption (Sec. V D). The challenges of adapting to new circumstances are discussed from different viewpoints. In power grid systems, we find adaptation of both the topology and the dynamics of the grid. On the other side, there is the anthropogenic influence on the Earth system (Sec. V E). Here, we can learn much from the past about adaptive mechanisms in this complex system and the perturbations to which it is subjected. Finally, Sec. VI of this article provides challenging open research questions that could be addressed by using adaptivity in one way or another.

In this section, different ideas are discussed on how adaptivity can be defined in the context of nonlinear dynamics, control theory, and network science, and how adaptive systems could be used to understand real-world systems of interacting units (networks). Perspectives are provided on how different dynamical models featuring adaptive mechanisms are related and how these models can be used to investigate the dynamics of natural or man-made systems.

Adaptivity is a general concept commonly understood as a process or ability of a system to adjust itself to changing (external) conditions. Thus, when speaking of adaptivity, one implicitly distinguishes the “conditions” ($X$) and the adaptation property ($Y$). In the following, an attempt is made to define these two variables (components) with special reference to the theory of adaptive dynamical networks.
  1. The structure $Y$ is the adaptation matter, i.e., the part of the system responsible for the adaptation properties. In adaptive dynamical networks, this is usually understood as the network structure represented by connectivity and/or connection weights. By analogy with dynamical networks and neuroscience in general, we refer to this variable as the structure.

  2. The function $X$ represents the conditions that trigger the adaptation. In adaptive dynamical networks, this is usually the dynamical state of the network, i.e., the collective and individual dynamics of the nodes. This factor may also include stochastic or external perturbations. These variables usually change with time, i.e., $X(t)$, in the case of temporal adaptation. Following the terminology of dynamical networks, we generally refer to this variable as the function.

Non-adaptive systems correspond to a constant structure $Y = Y_0$, which is independent of the function $X(t)$. By assuming that $X$ is governed by a system of differential equations, a general representation of a non-adaptive system is
$$\dot{X}(t) = f(X, Y), \tag{1}$$
$$\dot{Y} = 0. \tag{2}$$
We assume here the general case that the structure $Y$ influences the function $X$. Systems of the form (1) and (2) are often used for modeling neural networks with fixed connectivity $Y$. An example of a non-adaptive dynamical network is the coupled system
$$\dot{x}_i = f_i(x_i, t) + \sum_{j=1}^{N} \kappa_{ij}\, g_{ij}(x_i, x_j),$$
where $x_i(t)$ determines the state of node $i = 1, \dots, N$ and $\kappa_{ij}$ is the connection weight ($\kappa_{ij} = 0$ if there is no connection). The absence of network adaptivity is indicated by the fixed structure $\kappa_{ij}$. The function variable in this example is $X = (x_1, \dots, x_N)$, while the structure variable is $Y = \{\kappa_{ij}\}_{i,j=1,\dots,N}$, and it is constant. The class of non-adaptive networks is extremely useful for modeling many processes and phenomena in nature and technology;10–12 see also Secs. II D and V A–V C.
When the structure depends on the function, we obtain an adaptive system
$$\dot{X}(t) = f(X, Y), \tag{3}$$
$$\dot{Y}(t) = g(X, Y), \tag{4}$$
with a mutual structure–function interaction.13 
An example of an adaptive dynamical network is
$$\dot{x}_i = f_i(x_i, t) + \sum_{j=1}^{N} \kappa_{ij}\, g(x_i, x_j), \tag{5}$$
$$\dot{\kappa}_{ij} = h(x_i, x_j, \kappa_{ij}), \tag{6}$$
where the rule (6) is responsible for the adaptation and the temporal changes of the structure $Y$. Rule (6) covers the case in which the connection weight between nodes $i$ and $j$ depends only on the states $x_i(t)$ and $x_j(t)$ of these nodes. Of course, this is not the only possible adaptation rule. Particular realizations of the adaptation rule (6) appear in neuronal systems with plasticity. Specifically, when the plasticity is long-term, i.e., the structural changes act on a slower timescale than the functional dynamics (neuronal spiking),14–18 this leads to systems with multiple timescales. As a representative system, the paradigmatic adaptive network of phase oscillators,
$$\dot{\phi}_i = \omega_i - \sum_{j=1}^{N} \kappa_{ij} \sin(\phi_i - \phi_j + \alpha), \tag{7}$$
$$\dot{\kappa}_{ij} = -\varepsilon \left( \kappa_{ij} + \sin(\phi_i - \phi_j + \beta) \right), \tag{8}$$
appears to be very useful for studying various phenomena in adaptive networks, such as synchronization, frequency clustering, recurrent synchronization, adaptivity-induced resistance to noise, and others.17,19–24 Equations (7) and (8) are a special case of the more general Eqs. (9) and (10) in Sec. II D; see also the examples discussed there. All of these phenomena are also revealed in more realistic and complex models, such as Hodgkin–Huxley neurons with spike-timing-dependent plasticity.17,25 Thus, paradigmatic models of the type (7) and (8) have demonstrated their effectiveness in studying and predicting novel phenomena characteristic of large classes of adaptive networks.
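To make the setting concrete, the following minimal Python sketch (our illustration, not taken from the cited works; the parameter values, the Euler scheme, and the $1/N$ normalization of the coupling sum are assumptions) integrates Eqs. (7) and (8) for identical oscillators:

```python
import numpy as np

def simulate_adaptive_network(N=50, eps=0.01, alpha=0.0, beta=-0.95 * np.pi,
                              T=500.0, dt=0.05, seed=0):
    """Euler integration of Eqs. (7) and (8) with a 1/N-normalized sum."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, 2.0 * np.pi, N)    # oscillator phases
    kappa = rng.uniform(-1.0, 1.0, (N, N))    # adaptive coupling weights
    omega = np.zeros(N)                       # identical natural frequencies
    for _ in range(int(T / dt)):
        dphi = phi[:, None] - phi[None, :]    # matrix of differences phi_i - phi_j
        phi_dot = omega - (kappa * np.sin(dphi + alpha)).sum(axis=1) / N  # Eq. (7)
        kappa_dot = -eps * (kappa + np.sin(dphi + beta))                  # Eq. (8)
        phi = (phi + dt * phi_dot) % (2.0 * np.pi)
        kappa = kappa + dt * kappa_dot
    return phi, kappa

phi, kappa = simulate_adaptive_network()
print(kappa.min(), kappa.max())  # the adaptation rule confines the weights to [-1, 1]
```

Depending on the choice of $\alpha$ and $\beta$, such a simulation settles into in-phase synchrony, frequency clusters, or more complex partially synchronized states.20,34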

The main challenges in studying the above classes of adaptive dynamical networks are as follows:

  1. High dimensionality. If the number of nodes in the network is $N$, the number of possible connections is $N^2$. Thus, the dimensionality of the model increases dramatically compared to dynamical networks with a fixed structure.

  2. If the adaptation is slow, i.e., $\varepsilon \ll 1$ in Eq. (8), the system becomes multiscale with a slow manifold of dimension $N^2$. This additional multiscale structure provides opportunities for analysis,26 but for large networks, it goes far beyond the standard results employing geometric singular perturbation theory.

Despite recent advances in the study of adaptive dynamical networks, many challenging problems remain unsolved. These include developing a mean-field theory, applications to climate network modeling, understanding the role of adaptivity in machine learning, and developing dimensionality reduction techniques, in particular, methods for dealing with extremely high-dimensional slow manifolds. Besides large networks, small networks with adaptivity appear to have a highly nontrivial bifurcation structure compared to their non-adaptive counterparts. Studying and finding typical bifurcation scenarios in such systems (analogous to the Eckhaus instability or Busse balloons in PDEs) is another open and challenging problem.

Adaptation is often qualitatively described as a slow evolution of network connectivity patterns due to a feedback from the nodal dynamics, drawing comparison to synaptic plasticity in neuronal systems;27 see also Sec. II A. Nevertheless, one should bear in mind that adaptation may also directly impact the features of nodal dynamics, with examples ranging from frequency adaptation in clapping audiences or flashing fireflies28 to scenarios where the limited availability of metabolic resources modulates neuronal excitability29,30 or contributes to maintaining neuronal systems near criticality.31 A detailed discussion concerning the two latter effects in relation to spike-frequency adaptation and short-term synaptic plasticity can be found in Sec. III C. While these two types of adaptation, affecting the coupling or the nodal dynamics, may appear independently, it is also not uncommon that they act in concert, guiding the system’s self-organization.32,33 So far, most of the systematic insights on the role of adaptation have been gained regarding its impact on synchronization, including how it gives rise to different states of (partial) synchrony16,21,34–37 or the way it modifies the order of the synchronization transition28 and the associated nucleation process.38 Another active branch of research concerns adaptation as a general control mechanism, establishing its role in inducing critical transitions30,31 and in triggering alternating or cyclic activity patterns.39–41 Moreover, ongoing studies employing reservoir computing for the design of controllers for nonlinear and, in particular, chaotic systems also hold great promise; see Sec. IV C.

1. Interaction of adaptation and noise

An important but still insufficiently understood problem concerns the interaction between adaptation and noise, an issue naturally arising in applications to neuroscience. Despite the apparently desynchronizing effect of noise, it has been shown that adaptation and noise may give rise to a self-organized network activity that promotes growth of the overall synaptic strength,17 thereby canceling the potentially desynchronizing stochastic effects. While this may seem counterintuitive, one should recall that classical synaptic plasticity rules, such as spike-timing-dependent plasticity,27 support synaptic potentiation if coupled neurons are approximately (but not identically) synchronized and maintain their relative order of firing.42 However, such self-organized resilience of synchronization to noise has so far been evinced for coupled oscillators rather than for coupled excitable or mixed excitable-oscillatory populations. Addressing the two latter cases would be highly relevant for applications in neuroscience, where local dynamics typically involve excitability and diversity.43–45

Apart from the mean effect on the overall coupling strength, an additional subtlety arises from the interaction of adaptation and noise concerning stochastic fluctuations, which has so far been addressed mostly at the microscopic level. For motifs of coupled stochastic excitable units, such an interaction may induce switching dynamics, i.e., slow stochastic fluctuations between coexisting metastable states. The switching is naturally reflected both at the level of nodal dynamics and in the effective motif coupling configuration, given by the coupling strengths.46 In particular, for the example of a system of two identical excitable units, noise can induce two different oscillatory modes with different prevailing orders of firing between the units. In the presence of slow adaptation, such metastable states engage in an alternating dynamics, accompanied by an alternation of coupling configurations characterized by a strong coupling in one direction and a strongly depressed one in the opposite direction. Translated to the language of neuroscience, the latter effect corresponds to switching between two functional neuronal motifs with directed couplings on the same structural motif.47

Concerning stochastic fluctuations at the level of a single excitable system, it has been shown that a slowly adapting feedback, acting as a low-pass filter on the unit’s excitability,40 may, in interaction with noise, induce a novel form of behavior called stochastic bursting: an alternating activity involving episodes of relative silence interspersed with irregular spiking. Such stochastic bursting occurs in the parameter region that, in the limit of an infinite scale separation between the units’ dynamics and the adaptation, supports bistability between noise-induced and noise-perturbed spiking. Apart from inducing a novel type of behavior, adaptation may also provide a control mechanism for coherence resonance40 or may make the noise-induced suppression of spiking frequency within inverse stochastic resonance more efficient.41,48

2. Impact of an adaptation rate

An often overlooked feature of adaptation when elaborating its impact on emergent dynamics is the adaptation rate. Classically, the adaptation rate is considered to be sufficiently slow such that the overall dynamics may be treated within the framework of singular perturbation theory,26 separating the fast local dynamics of the units from the slow evolution of the adaptation variables. However, the impact of the adaptation rate has not been investigated systematically, mostly due to the lack of an appropriate analytical method. In certain examples, it has been shown numerically that intermediate adaptation rates can cause the system’s behavior to deviate substantially from the predictions of singular perturbation theory,46 and finding appropriate means to address this issue remains an open problem.

3. Mathematical approaches to adaptation

From a broader perspective, developing mathematical approaches to study adaptive networks is challenging because it requires reconciling different aspects of system behavior, such as criticality, feedback, multiple timescale dynamics, diversity, and noise. So far, an extension of the master stability function approach49 has proven effective in reducing the synchronization problem by separating dynamical from topological features, allowing for a classification of system states with respect to synchronization properties. For coupled phase or neural oscillators, such an approach has revealed that adaptation may induce a desynchronization transition21 and support different multi-frequency hierarchical cluster states and chimera-like states of partial synchronization. Nevertheless, the general problem of the impact of adaptation on a system’s multistability remains open. In certain cases, such as Kuramoto phase oscillators with an asymmetric spike-timing-dependent plasticity-like rule, adaptation has been shown to induce multistability between synchronized, desynchronized, and multiple partially synchronous states.16 Also, for adaptively coupled identical phase oscillators, multicluster states have been shown to exhibit a high degree of multistability.34,35 Apart from understanding the impact on synchronization problems, an important issue concerns the role of adaptation in inducing cyclic activity patterns by controlling critical transitions of the adaptation-free system. Treating such problems, for example, the onset of collective activity bursts in heterogeneous systems adaptively coupled to a pool of resources,39 requires combining different reduction approaches50–52 with multiple timescale methods. Developing rigorous mathematical approaches in which mean-field methods apply to the layer dynamics while the adaptation is treated by a reduced system remains a vibrant field of investigation. In parallel, a hybrid approach for treating the interaction of adaptation and noise by combining the Fokker–Planck formalism with multiple timescale methods has recently been derived.40 Further generalization of the adaptation concept to cases where the adaptation rate itself varies in time may additionally require methods from nonequilibrium thermodynamics and information theory. This naturally applies to sensory adaptation,53,54 where information transmission is optimized under different constraints, including metabolic costs, dynamic range, and intrinsic stochasticity.55 From the perspective of nonequilibrium thermodynamics, sensory adaptation is a dissipative process ruled by an energy–speed–accuracy tradeoff,53 where one may exploit the relation between adaptation and irreversibility,54 quantified by the entropy production.

This article discusses notions of adaptivity from the perspective of different disciplines, ranging from non-linear dynamics to psychology, neuroscience, and computer science. Yet, while most authors would agree that adaptivity is a property, their answers to the question “A property of what?” presented in the various contributions seem to differ. This is not accidental, but simply a consequence of the exploratory nature of the paper, and it poses a challenge for future work: Can we find a framework that is sufficiently generic to formulate and compare the notions of adaptivity in different research areas, understand their differences and similarities, identify shared concepts and computational methods, and facilitate the communication between disciplines?

We argue in this section that dependent type theory would be an ideal candidate for (formulating) such a framework. What do we mean by this? The reader who is unfamiliar with dependent type theory should for the moment think of it as a mathematical logic fused with a programming language (we will explain more in Sec. II C 2). Ionescu et al.56 argue that type theory fits most of the requirements for frameworks for modeling and programming put forward by Broy et al.57 In a research program originally initiated by Ionescu, type theory has been applied to understand notions of vulnerability, viability, reachability, avoidability (discrete dynamical systems), optimality (control theory), climate sensitivity, commitment, and responsibility (climate policy).58–61 The largest study of the above is Ionescu et al.58 where various notions of vulnerability, stemming from domains such as climate change, food security, or natural hazard studies, are compared.

1. Notions of adaptivity

A key idea commonly put forward is that adaptivity is a “feature of natural and artificial complex systems.” Thus, from this perspective, adaptivity is a property of a system. However, in their seminal 1992 paper “Reinforcement learning is direct adaptive optimal control,”62 Sutton et al. argue that what is adaptive is a method for controlling a system, rather than the system itself. This suggests seeing adaptivity as a property of optimal control methods.

It is worth noticing that optimal control methods do not need to be adaptive. At least since 1957,63 we know that many deterministic and stochastic sequential decision problems can be solved for optimal policies via dynamic programming. Dynamic programming can indeed also be applied to solve non-deterministic, fuzzy, and, more generally, monadic sequential decision problems,64 as long as the uncertainty monads and the measures of uncertainty (for example, for stochastic uncertainty, the expected value measure) satisfy certain compatibility conditions.65 However, when the transition function (or the reward function) of a sequential decision problem is not given, optimal policies have to be learned by interacting, step by step, with an environment: for example, via Q-learning.66 This is learning to act optimally rather than planning optimally.
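As a minimal illustration of the difference (our sketch, not the algorithm of Ref. 62; the toy chain environment and all constants are assumptions), the following tabular Q-learning loop learns an optimal policy purely from interaction, without ever being given the transition or reward function:

```python
import numpy as np

n_states, n_actions = 5, 2          # chain of states; actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1  # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

def step(s, a):
    """Hidden environment: reward 1 only at the right end of the chain."""
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s_next, float(s_next == n_states - 1)

s = 0
for _ in range(10_000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    s_next, r = step(s, a)
    # Q-learning update: adapt the value estimate from observed experience
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print(Q.argmax(axis=1))  # learned policy: move right in every state
```

If the transition and reward functions inside `step` were available to the solver, the same policy could instead be computed offline by dynamic programming; the adaptivity lies in the method, not in the controlled system.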

Even if we share the intuition that adaptivity is a property of a system, or of a method for controlling a system that interacts sequentially with an environment, it remains to clarify whether the notions of adaptivity in different domains arise as instances of the same abstract notion or whether they are genuinely different, potentially even incompatible. Such a clarification requires specifying and comparing different notions of adaptivity in a common framework. As mentioned above, in prior work, we have employed type theory for this purpose.

2. Logic and type theory

Most scientists are well trained in applying elementary mathematics and first-order logic to formulate properties in specific domains; e.g., in mathematics, you might define what it means for a function to be injective, or in dynamical systems theory, what it means for a function to be the flow of a dynamical system. Therefore, for mathematically trained people, logic is a well-suited language for making concepts precise and developing a shared understanding of them. Indeed, this purpose is at the heart of modern mathematical logic, going back at least to Leibniz’ vision of a universal language that would not suffer from the ambiguities of natural language.

Dependent type theory67 takes the advantages of a mathematical logic one step further. It is a theory that may be seen both as a higher-order logic and as a pure functional programming language with a static type system. It was developed as a foundational theory for constructive mathematics by the Swedish mathematician and philosopher Per Martin-Löf.68 Dependent type theory has solid implementations69–73 and impeccable mathematical credentials74–77 (see also Refs. 78 and 79 for popular science accounts, including the voices of mathematicians who have turned to computer-aided formalization).
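To give a flavor of this double role, the following tiny example (our illustration in Lean 4, one of the implementations of dependent type theory; it is not taken from the cited references) defines a property as a type depending on a function and proves it by writing a well-typed program:

```lean
-- Propositions as types: "Injective f" is a Prop that depends on f.
def Injective {α β : Type} (f : α → β) : Prop :=
  ∀ a₁ a₂ : α, f a₁ = f a₂ → a₁ = a₂

-- A proof is a program of the corresponding type:
-- the identity function is injective.
theorem id_injective {α : Type} : Injective (fun a : α => a) :=
  fun _ _ h => h
```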

Due to its double role as logic and programming language, dependent type theory is well-suited as a framework for both formulating and machine checking mathematical specifications. Because types can represent propositions and well-typed programs correspond to proofs,80 dependent type theory is also the key for writing programs that are correct “by construction,” bridging the gap between the mathematical model and implementation. This is crucial for safety-critical applications81–85 but also in research areas in which testing model implementations is nearly impossible or too expensive.86 

3. Monadic dynamical systems

The vulnerability study of Ionescu et al.58 led to the introduction of monadic dynamical systems, combining ideas from generic programming87,88 and category theory89 with dynamical systems theory. Monadic dynamical systems are sufficiently general to capture various definitions of vulnerability as instances of a common abstract schema. The framework for vulnerability was later extended by Botta et al.59,64,65 to a framework for specifying and solving sequential decision problems within dependent type theory. We think that this framework could also be applied and suitably extended to study different notions of adaptivity.
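The core idea can be sketched in a few lines of Python (a deliberately simplified illustration under our own naming, not the actual framework of Refs. 58, 59, 64, and 65): the transition function of the system returns values wrapped in a monad, here the list monad representing non-determinism, and trajectories are built by the monad’s bind operation.

```python
def bind(xs, f):
    """List-monad bind: apply f to each possible outcome and flatten."""
    return [y for x in xs for y in f(x)]

def transition(x):
    """Non-deterministic toy dynamics on the integers: move down or up."""
    return [x - 1, x + 1]

def evolve(x0, n_steps):
    """Set of states reachable after n_steps from the singleton {x0}."""
    states = [x0]
    for _ in range(n_steps):
        states = sorted(set(bind(states, transition)))
    return states

print(evolve(0, 3))  # -> [-3, -1, 1, 3]
```

Swapping the list monad for the identity monad recovers deterministic dynamics, and swapping it for a probability-distribution monad recovers stochastic dynamics; this genericity is what allows different notions of vulnerability, and possibly of adaptivity, to be expressed as instances of one schema.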

This subsection explores the applications of network models as outlined in Sec. II A in different domains. From a complex networks perspective, the interplay between dynamics and network topology is at the center of interest. Collective dynamics in networks of nonlinear oscillators is often characterized by synchronization phenomena,10,11 as already studied by Christiaan Huygens in 1656. Among these, partial synchronization patterns have recently become a major focus of research.90 Examples are provided by cluster or group synchronization (where within each cluster, all elements are completely synchronized, but between the clusters, there is a phase lag, or even a difference in frequency), and many other forms. A particularly intriguing example of partial synchronization patterns, which has recently gained much attention, is chimera states, i.e., symmetry-breaking states of partially coherent and partially incoherent behavior; for recent reviews, see Refs. 91–93. Chimera states in dynamical networks consist of spatially separated, coexisting domains of synchronized (spatially coherent) and desynchronized (spatially incoherent) dynamics. They are a manifestation of spontaneous symmetry-breaking in systems of identical oscillators and occur in a variety of physical, chemical, biological, neuronal, ecological, technological, or socio-economic systems. Other examples of partial synchronization include solitary states,94–96 where a single element or a few elements behave differently compared with the background group, i.e., the neighboring elements, and hierarchical multifrequency clusters.20

In adaptive networks, the coupling weights are not fixed but are continuously adapted by feedback of the dynamics, and both the local dynamics and the coupling weights evolve in time as co-evolutionary processes; compare with discussions in Secs. II A or II B. Adaptive networks have been reported for chemical,97 epidemic98 (see also Secs. V A and V B), biological, and social systems99 (see also Sec. V C). A paradigmatic example of adaptively coupled phase oscillators has recently attracted much attention20,21,34,35,100–107 and it appears to be useful for predicting and describing phenomena in more realistic and detailed models.18,25,108,109 It describes N adaptively coupled phase oscillators20,34 [as a general case of Eqs. (7) and (8) in Sec. II A],
$$\dot{\phi}_i = \omega_i + \sum_{j=1}^{N} a_{ij}\, \kappa_{ij}\, f(\phi_i - \phi_j), \tag{9}$$
$$\dot{\kappa}_{ij} = -\epsilon \left( \kappa_{ij} + g(\phi_i - \phi_j) \right), \tag{10}$$
where $\phi_i \in [0, 2\pi)$ represents the phase of the $i$th oscillator ($i = 1, \dots, N$), $\omega_i$ is its natural frequency, and $\kappa_{ij}$ is the coupling weight of the connection from node $j$ to node $i$. Furthermore, $f$ and $g$ are $2\pi$-periodic functions, where $f$ is the coupling function, $g$ is the adaptation rule, and $\epsilon \ll 1$ is the adaptation time constant. The connectivity between the oscillators is described by the entries $a_{ij} \in \{0, 1\}$ of the adjacency matrix $A$. In particular, for the Kuramoto phase oscillator,110 the coupling function is $f(\phi) = -\sin\phi$, and synaptic neuronal plasticity may be described by $g(\phi) = -\cos(\phi + \beta)$, where the parameter $\beta$ describes different adaptivity rules.

One purpose of this section is to provide a new perspective by demonstrating that a wide range of models ranging from neuronal networks with synaptic plasticity via power grids to physiological networks modeling tumor disease and sepsis can be viewed as adaptive oscillator networks, and partial synchronization patterns can be described on equal footing. This modeling approach allows one to transfer methods and results from one system to the other.

A common class of network models describing power grids is given by $N$ coupled phase oscillators with inertia,111 also known as the swing equation. It has been widely used in works on synchronization of complex networks and as a paradigm for the dynamics of modern power grids,112–122
$$M \ddot{\phi}_i + \gamma \dot{\phi}_i = P_i + \sum_{j=1}^{N} a_{ij}\, h(\phi_i - \phi_j), \tag{11}$$
where $M$ is the inertia coefficient, $\gamma$ is the damping constant, $P_i$ is the power of the $i$th oscillator (related to the natural frequency $\omega_i = P_i / \gamma$), $h$ is the coupling function, and $a_{ij}$ is the adjacency matrix as defined in Eq. (9). Another view on the role of adaptivity for power grid systems can be found in Sec. V D.
It has been shown123 that the class of phase oscillator models with inertia is a natural subclass of systems with adaptive coupling weights where the weights denote the power flows between the corresponding nodes. We first write Eq. (11) in the form
$$\dot{\phi}_i = \omega_i + \psi_i, \tag{12}$$
$$\dot{\psi}_i = -\frac{\gamma}{M} \left( \psi_i - \frac{1}{\gamma} \sum_{j=1}^{N} a_{ij}\, h(\phi_i - \phi_j) \right), \tag{13}$$
where $\psi_i$ is the deviation of the instantaneous phase velocity from the natural frequency $\omega_i$. We observe that this is a system of $N$ phase oscillators (12) augmented by the adaptation (13) of the frequency deviation $\psi_i$. Similar systems with a direct frequency adaptation have been studied in Refs. 28 and 124–126. Note that the coupling between the phase oscillators is realized in the frequency adaptation, which is different from the classical Kuramoto system.110 In order to introduce coupling weights into systems (12) and (13), we express the frequency deviation $\psi_i$ as the sum $\psi_i = \sum_{j=1}^{N} a_{ij} \chi_{ij}$ of the dynamical power flows $\chi_{ij}$ from the nodes $j$ that are coupled with node $i$. The power flows are governed by the equation $\dot{\chi}_{ij} = -\epsilon \left( \chi_{ij} + g(\phi_i - \phi_j) \right)$, where $g(\phi_i - \phi_j) \equiv -h(\phi_i - \phi_j)/\gamma$, so that the stationary values of the power flows are $h(\phi_i - \phi_j)/\gamma$,127 and $\epsilon = \gamma / M$. It is straightforward to check that $\psi_i$, defined in this way, satisfies the dynamical equation (13).
As a result, the swing equations (12) and (13) can be written as the following system of adaptively coupled phase oscillators:
$$\dot{\phi}_i = \omega_i + \sum_{j=1}^{N} a_{ij}\, \chi_{ij}, \tag{14}$$
$$\dot{\chi}_{ij} = -\epsilon \left( \chi_{ij} + g(\phi_i - \phi_j) \right). \tag{15}$$
The obtained system corresponds to Eqs. (9) and (10) with coupling weights $\chi_{ij}$ and coupling function $f(\phi_i - \phi_j) \equiv 1$. The coupling weights form a pseudo-coupling matrix $\chi$ describing the power flow between the nodes. Note that the base network topology $a_{ij}$ of the phase oscillator system with inertia, Eq. (11), is unaffected by the transformation.
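The equivalence can be checked numerically. The following Python sketch (the parameters and the Euler scheme are our illustrative assumptions) integrates the swing equation in the form of Eqs. (12) and (13) alongside the adaptive formulation of Eqs. (14) and (15), with the power flows initialized so that $\psi_i = \sum_j a_{ij} \chi_{ij}$ holds at $t = 0$; the two phase trajectories then coincide:

```python
import numpy as np

N = 6
rng = np.random.default_rng(1)
A = np.ones((N, N)) - np.eye(N)          # global coupling, no self-loops
deg = A.sum(axis=1)
M, gamma, sigma, alpha = 1.0, 0.3, 0.033, 0.8 * np.pi
omega = np.zeros(N)                      # P_i = 0: identical oscillators
eps = gamma / M

h = lambda d: sigma * gamma * np.sin(d + alpha)  # coupling function of Eq. (11)
g = lambda d: -h(d) / gamma                      # adaptation rule of Eq. (15)

phi_a = rng.uniform(0, 2 * np.pi, N)             # swing-equation system
psi = rng.uniform(-0.5, 0.5, N)
phi_b = phi_a.copy()                             # adaptive reformulation
chi = A * (psi / deg)[:, None]                   # ensures sum_j a_ij chi_ij = psi_i

dt = 0.01
for _ in range(10_000):
    d_a = phi_a[:, None] - phi_a[None, :]
    phi_a, psi = (phi_a + dt * (omega + psi),                             # Eq. (12)
                  psi - dt * (gamma / M)
                  * (psi - (A * h(d_a)).sum(axis=1) / gamma))             # Eq. (13)
    d_b = phi_b[:, None] - phi_b[None, :]
    phi_b, chi = (phi_b + dt * (omega + (A * chi).sum(axis=1)),           # Eq. (14)
                  chi - dt * eps * (chi + g(d_b)))                        # Eq. (15)

print(np.abs(phi_a - phi_b).max())  # small: trajectories agree up to rounding
```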

In adaptive phase oscillator networks, there exists a diversity of multifrequency cluster states,20,35,107 including chimera states20 and solitary states.128 In a multifrequency cluster state, all oscillators split into $M$ groups (called clusters), each of which is characterized by a common cluster frequency $\Omega_\mu$. In particular, the temporal behavior of the $i$th oscillator of the $\mu$th cluster ($\mu = 1, \dots, M$) is given by $\phi_i^\mu(t) = \Omega_\mu t + \rho_i^\mu + s_i^\mu(t)$, where $\rho_i^\mu \in [0, 2\pi)$ and $s_i^\mu(t)$ are bounded functions describing different types of phase clusters characterized by the phase relation within each cluster.34

As an example, in Figs. 2(a) and 2(c), we present a four-cluster state of in-phase synchronous clusters on a globally coupled network. Hierarchical multicluster states are built out of single-cluster states whose frequency scales approximately with the number $N_\mu$ of elements in the cluster. The coupling matrix displayed in Fig. 2(e) shows the characteristic block-diagonal shape known for adaptive networks. In particular, the oscillators within each cluster are more strongly connected than the oscillators between different clusters.

Fig. 2.

Hierarchical multicluster states in networks of coupled phase oscillators with inertia. Panels (a) and (b), (c) and (d), and (e) and (f) show the temporally averaged phase velocities $\langle \dot{\phi}_j \rangle$, phase snapshots $\phi_j(t)$, and the pseudo-coupling matrices $\chi_{ij}(t)$, respectively, at $t = 10\,000$. In (e), the oscillator indices are sorted in increasing order of their mean phase velocity. The states were found by numerical integration of Eq. (11) with identical oscillators $P_i = 0$, $h(\phi) = \sigma\gamma \sin(\phi + \alpha)$, and uniform random initial conditions $\phi_i(0) \in (0, 2\pi)$, $\psi_i(0) \in (-0.5, 0.5)$. The parameter $\alpha$ is a phase lag of the interaction.129 Parameters: (a), (c), and (e) globally coupled networks, $M = 1$, $\gamma = 0.05$, $\sigma = 0.016$, $\alpha = 0.46\pi$ and (b), (d), and (f) nonlocally coupled ring networks with coupling radius $P = 40$, $M = 1$, $\gamma = 0.3$, $\sigma = 0.033$, $\alpha = 0.8\pi$; $N = 100$. After Berner et al., Phys. Rev. E 103, 042315 (2021). Copyright 2021 American Physical Society.123


A second example, which uses a splay state with $\phi_j = 2\pi k j / N$ and wavenumber $k \in \mathbb{N}$ as the building block for multiclusters, is shown in Figs. 2(b), 2(d), and 2(f). Splay states are characterized by a vanishing local order parameter $R_j = \left| \sum_{k=1}^{N} a_{jk} \exp(i \phi_k) \right| = 0$. Figures 2(b), 2(d), and 2(f) present a hierarchical mixed-type multicluster on a nonlocally coupled ring of phase oscillators. It consists of one large splay cluster with wavenumber $k = 2$ and a small in-phase cluster consisting of three solitary states.
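For instance, the defining property of a splay state can be verified in a few lines (an illustrative check of our own; we use a globally coupled network, where the cancellation is exact):

```python
import numpy as np

N, k = 100, 2                            # network size and wavenumber
phi = 2 * np.pi * k * np.arange(N) / N   # splay state phi_j = 2*pi*k*j/N
A = np.ones((N, N))                      # global coupling, a_jk = 1
R = np.abs(A @ np.exp(1j * phi))         # local order parameters R_j
print(R.max())                           # ~1e-13: R_j vanishes for the splay state
```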

In summary, the findings for partial synchronization of adaptively coupled phase oscillators can be transferred to networks of phase oscillators with inertia. This holds not only for simple homogeneous systems, but also for heterogeneous real-world networks, such as the German ultrahigh voltage power grid.123 

In recent years, studies on both types of models, oscillators with inertia and adaptively coupled oscillators, have revealed a plethora of common dynamical scenarios, including solitary states,118,119,128,130 multifrequency clusters,34,35,117,131 chimera states,20,103,132 hysteretic behavior, and non-smooth synchronization transitions.38,101,116,133,134 Power grids, neuronal networks with synaptic plasticity, and other adaptive networks describe real-world systems of tremendous importance for our daily lives; they exhibit partial synchronization patterns that may be important for understanding the onset of instability. Neural systems and power grid networks are also discussed in Secs. III and V, respectively. A particularly intriguing example and a future perspective is the functional modeling of physiological two-layer networks of the immune system and the parenchyma, coupled adaptively by cytokines.135,136 This can be used for modeling tumor disease and sepsis, with the immune layer as a reference point, where the healthy state is characterized by complete frequency synchronization and the pathological state is a multifrequency cluster state.

In this section, the focus is on adaptive mechanisms in physiological systems. Here, basic regulatory principles are highlighted, fundamental concepts for a physical culture theory are developed, mechanisms and modeling of perception are described, and concrete medical applications on neural networks are presented.

Here, we will explore motifs for adaptation in physiological regulatory networks. The physiological properties of biological systems arise from the myriad interactions of their underlying components. As an example, the production rate of proteins from a gene depends on the abundance of other proteins, known as transcription factors, whose production in turn depends on the abundance of other transcription factors. Similarly, the secretion of a hormone into the bloodstream depends on the concentrations of other blood factors, which are themselves affected by the levels of other hormones. These complex networks of interactions are known as regulatory networks.

To study regulatory networks, it is useful to notice that evolution tends to come up with similar solutions to related problems. It is often the case that, in similar contexts, the regulatory networks of distinct systems share mathematical similarities; these are so-called regulatory motifs or design principles.137–139 By identifying such design principles, one can extract a deeper understanding of the functional significance of the regulatory interactions. We may, therefore, ask which design principles support adaptation, i.e., the ability of the system to adjust itself to function properly despite uncertainty in internal parameters or the external environment.

Consider the problem of maintaining homeostasis of a blood factor, such as glucose (denoted $x$). Blood glucose needs to be maintained within a narrow range (around 5 mM), with deviations being detrimental or even life-threatening. Our bodies have a natural mechanism to lower blood glucose: specialized cells called β-cells, which sense blood glucose and secrete the hormone insulin, which in turn causes remote cells (fat cells, skeletal muscle cells, and liver cells) to reduce glucose levels. This mechanism can maintain glucose around some steady state, which would depend sensitively on many parameters, including the abundance of β-cells, the plasma volume, and the responses of cells to insulin. These can (and do) vary greatly between individuals; yet, we know that most individuals can maintain blood glucose within a narrow range.140

A related problem occurs in bacterial chemotaxis. The bacterium E. coli navigates with a strategy resembling a random walk, where it moves and reorients at some set rate $\phi$ (typically once every few seconds). This is known as the tumbling rate. Navigation is achieved by adjusting $\phi$ according to sensed ligand molecules known as attractants and repellents. A step increase in an attractant molecule transiently decreases $\phi$, leading to a net drift toward areas of higher attractant concentration. However, at a fixed attractant concentration $u$, over a wide sensed range, $\phi$ is constant and independent of $u$.141,142 How is $\phi$ kept constant, despite variations in the input activity of the circuit?

It has long been suggested that both problems are closely related to the engineering problem of disturbance rejection.143–145 This problem is exemplified by how the cruise-control system of a car maintains a fixed speed on varying slopes, or how a thermostat maintains a fixed temperature in uncertain operating conditions. The solution requires integral feedback: the controller feedback increases with the error (it integrates the error); therefore, at steady state, the error is zero.
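A minimal simulation (our illustration with assumed gains and disturbance profile, in the spirit of the cruise-control example) shows why the integral term forces the steady-state error to zero:

```python
dt, T = 0.01, 300.0
setpoint = 25.0                 # target speed
k_i = 0.5                       # integral gain
v, integral = 0.0, 0.0
for step in range(int(T / dt)):
    disturbance = 2.0 if step * dt > 100.0 else 0.0   # a hill appears at t = 100
    error = setpoint - v
    integral += error * dt      # the controller integrates the error
    u = k_i * integral          # control action proportional to the integral
    v += dt * (u - 0.1 * v - disturbance)
print(round(v, 3))              # ~25.0: the error is driven back to zero
```

A purely proportional controller would instead settle at a nonzero offset that depends on the disturbance; integrating the error is what removes this dependence.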

How is integral feedback implemented in biological circuits? In hormone circuits, there appears to be a simple answer [Fig. 3(a)]. Let $x$ be the regulated variable and $y$ be its regulating hormone, with $Z$ being the mass of the tissue that secretes the hormone. In the blood glucose system, $x$ is the blood glucose, $y$ is the blood insulin, and $Z$ is the β-cell mass. The following motif is observed across hormone systems: there is a slow negative feedback in which the main regulator of the growth dynamics of $Z$ is $x$; that is, $x$ adjusts the death, growth, and replication rates of the cells of $Z$. Thus,
$$\dot{Z} = f(x)\, Z, \tag{16}$$
Fig. 3.

Motifs for adaptation in physiological systems. (a) In hormone circuits, a hormone-regulated variable governs the growth rate of the tissue responsible for its secretion, enabling precise adaptation. This adaptation mechanism ensures that the dynamics of the regulated variable remain robust in the face of physiological variations. (b) Organisms employ a combination of logarithmic sensing, precise adaptation, coupled to movement regulation, to achieve robust sampling of an input field. This motif is observed in chemotaxis and potentially in the mammalian dopamine system.


where $f(x)$ is the $x$-dependent growth rate. The system will settle at the steady state where $f(x) = 0$ (denoted $x_0$), regardless of variation in the other physiological parameters, including plasma volume, secretion rate, and the responses of remote cells.

The ubiquity of the motif suggests that it is uniquely advantageous. Why is it so prevalent? Beyond integral feedback, another intriguing phenomenon occurs. Consider, for example, the following simple model of the glucose system:
$$\dot{x} = u - s x y, \qquad \dot{y} = p Z - \gamma y, \tag{17}$$
where $s$ is the sensitivity of the response to the hormone and $p$ is the product of the per-cell secretion rate and the (inverse) plasma volume; $u$ is the time-dependent input, incorporating, e.g., meal intake. Equation (16) not only sets the steady state of $x$ to $x = x_0$; it makes the entire dynamics in response to any input $u$ invariant to $s$ and $p$.146 These scale-invariant dynamics are evident in clinical data from distinct hormonal systems.146–149 Thus, in hormone systems, negative feedback from the regulated variable to its controlling tissue allows the system to adapt its dynamics to variability in key system parameters, which are uncertain and may be highly variable.
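The following sketch (all parameter values, the input profile, and the linear choice of the growth rate $f(x) = \epsilon(x - x_0)$ are our illustrative assumptions) simulates Eqs. (16) and (17) for two different parameter pairs $(s, p)$; once the slow feedback has settled, the glucose responses to the same meal input nearly coincide:

```python
import numpy as np

def simulate(s, p, x0=5.0, gamma=1.0, eps=1e-3, T=2000.0, dt=0.01):
    f = lambda x: eps * (x - x0)        # growth rate of Eq. (16), zero at x0
    x, y, Z = x0, 0.0, 1.0
    xs = []
    for step in range(int(T / dt)):
        t = step * dt
        u = 1.0 + (2.0 if 1800.0 < t < 1820.0 else 0.0)  # baseline + one meal
        x, y, Z = (x + dt * (u - s * x * y),             # Eq. (17): glucose
                   y + dt * (p * Z - gamma * y),         # Eq. (17): insulin
                   Z + dt * f(x) * Z)                    # Eq. (16): tissue mass
        xs.append(x)
    return np.array(xs)

a = simulate(s=1.0, p=1.0)
b = simulate(s=3.0, p=0.5)
print(np.abs(a[170_000:] - b[170_000:]).max())  # small vs the O(1) meal response
```

Rescaling $\hat{y} = s y$ and $\hat{Z} = s p Z$ removes $s$ and $p$ from the equations entirely, which is the algebraic reason for the observed invariance.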

Scale invariance also occurs in bacterial chemotaxis; in this case, the dynamics of the tumbling rate $\phi(t)$ are modulated by the attractant input $u(t)$ in a manner that depends only on relative, rather than absolute, changes in $u(t)$, a phenomenon known as fold-change detection.150 Fold-change detection is documented in the navigation systems of other simple organisms, including worms and slime molds.151,152

What about more complex organisms? In vertebrates, including mice and humans, movement is controlled by the transmission of dopamine in the midbrain. Dopamine is secreted in response to surprise (or prediction error) about rewards, such as food or drink; better outcomes than expected cause dopaminergic neurons to fire above their baseline rate, while worse outcomes transiently inhibit dopaminergic firing.153 The responses are also scale-invariant.154 Finally, when the animal moves, dopamine changes in a way that is consistent with a response to the temporal derivative of a spatial input field.155

Upon closer examination, the dopamine system shares key similarities with the chemotaxis system, where in the case of dopamine, the input field corresponds to expectations about rewards.156 This input field decays spatially from actual locations where rewards are provided, similar to the decay of a chemical attractant from its source. Dopamine also invigorates movements in a manner analogous to the effect of attractants on bacterial movement.

We, therefore, identified another regulatory motif: fold-change detection of an input field, which modulates movement statistics [Fig. 3(b)]. What is the function of this motif? From the perspective of sensing, scale invariance allows us to remove uncertainty and retain sensitivity over a wide dynamic input range. An additional distinct advantage becomes apparent when we consider the coupling between sensing and movement. The fold-change detection circuit calculates the temporal logarithmic derivative of the input $u(t)$. In a spatial setting, we can consider a spatial input field $U(x)$; the movement dynamics of the organism over long time and length scales are captured by the stochastic dynamics
$$dx = \frac{\beta v^2}{\phi} \nabla \log U \, dt + \sqrt{\frac{2 v^2}{\phi}}\, dW, \tag{18}$$
where $v$ is the typical movement speed and $\beta$ depends on circuit parameters. The steady-state distribution of the organism’s location is $P(x) = U(x)^\beta$, which depends only on circuit parameters (rather than movement parameters); the motif thus provides a robust mechanism for sampling a power of the input field. This is again consistent with experimental observations on both chemotaxis and the dopamine system.156 Thus, in these systems, a motif that appears to support adaptation of sensing in the background of uncertain input levels in fact provides a mechanism for robust sampling of uncertain environments.
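A one-dimensional simulation of Eq. (18) (an illustrative sketch of our own, with assumed parameters) confirms the sampling property: for a Gaussian field $U(x)$, the stationary distribution $U^\beta$ is again Gaussian with variance $1/\beta$.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, v, phi = 2.0, 1.0, 1.0
dlogU = lambda x: -x                 # d/dx log U for U(x) = exp(-x**2 / 2)

dt, n = 1e-3, 500_000
drift = beta * v**2 / phi            # drift coefficient of Eq. (18)
noise = np.sqrt(2 * v**2 / phi * dt) * rng.standard_normal(n)
x, xs = 0.0, np.empty(n)
for i in range(n):                   # Euler-Maruyama integration
    x += drift * dlogU(x) * dt + noise[i]
    xs[i] = x

print(xs.var(), 1.0 / beta)          # empirical variance vs predicted 1/beta
```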

The examples considered here suggest that adaptation motifs allowing for scale-invariant dynamics are prevalent and that specific adaptation regulatory motifs, which recur in similar contexts, have distinct functional significance. Identifying these motifs, and comparing their behavior in different contexts, promises to improve our understanding of how adaptation is achieved by complex regulatory networks.

“To live is to adapt to the world around us.”157 The environment of an organism can change on vastly different timescales, ranging from, e.g., a change in lighting to climate change. Organisms, and hence their brains, have developed strategies to adapt to these modifications in the environment across timescales, from adaptation to sudden changes in sensory stimuli to long timescales of evolutionary processes. In the following, some key adaptive mechanisms in the brain on short timescales are highlighted.

In principle, single neurons can adapt to changes in the environment based on two strategies: by modifying either their intrinsic or their extrinsic properties. Intrinsic changes include, e.g., an increase or decrease in the excitability of a neuron.160 Extrinsic changes are related to updates in the strength of the synaptic connections onto the neuron. An extrinsic mechanism that has been linked to adaptation on short timescales (tens to hundreds of milliseconds) is short-term synaptic plasticity. Input spikes that occur within short timescales can cause a transient decrease (short-term depression) or increase (short-term facilitation) of the synaptic efficacy161 (see Sec. III C). The mechanism leading to a permanent increase or decrease in synaptic strength is long-term synaptic plasticity. In experiments, long-term changes in the synaptic strength can be induced via a “pairing protocol,” a prominent example being spike-timing-dependent plasticity.162 Repeatedly triggering a spike in the postsynaptic neuron within approximately 10 ms after a spike in the presynaptic neuron leads to long-term potentiation, while presynaptic spikes following postsynaptic spikes within 10–100 ms lead to long-term depression.163,164 Both short- and long-term plasticity have been identified not only at synapses between excitatory neurons but also at inhibitory-to-excitatory synapses (for more information, see Refs. 158 and 165).
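A standard way to model the pairing protocol is an exponentially decaying learning window; the sketch below (amplitudes and time constants are illustrative assumptions, not fits to data) returns the weight change for a given pre–post spike-time difference:

```python
import numpy as np

A_plus, A_minus = 0.01, 0.012     # potentiation / depression amplitudes
tau_plus, tau_minus = 10.0, 20.0  # decay time constants in ms

def stdp(dt_ms):
    """Weight update for a spike-time difference t_post - t_pre (in ms)."""
    if dt_ms > 0:   # pre before post: long-term potentiation
        return A_plus * np.exp(-dt_ms / tau_plus)
    else:           # post before pre: long-term depression
        return -A_minus * np.exp(dt_ms / tau_minus)

for d in (5.0, -5.0, 50.0):
    print(d, round(stdp(d), 5))   # LTP at +5 ms, LTD at -5 ms, ~0 at 50 ms
```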

A prominent experimental paradigm to test adaptation on short timescales is the “oddball paradigm.”166 In this paradigm, one (usually visual or auditory) stimulus, the standard (or familiar, predictable) stimulus, is presented many times. A second stimulus, the deviant (or novel, unpredictable) stimulus, is presented only rarely. On the whole-brain level, electroencephalogram measurements reveal that presenting the deviant stimulus leads to a strong negative deflection in the EEG signal compared to the signal following a standard stimulus presentation, termed “mismatch negativity.”167,168 Similarly, measurements of either single neurons or neuronal populations in sensory cortices reveal elevated neuronal responses for deviant compared to standard stimuli169–171 [Fig. 4(a)]. Computational models have proven useful for understanding the mechanisms underlying short-term adaptation in the brain (see also Secs. III C and III F). Multiple studies suggest that short-term plasticity is a critical mechanism underlying adaptation to familiar stimuli,172–174 and short-term plasticity at inhibitory synapses is important for controlling temporal context-dependent neuronal responses.175,176 In a complementary approach, it has been suggested that long-term plasticity at inhibitory-to-excitatory synapses underlies the difference in responses to familiar and novel stimuli.177 In this work, strengthening inhibitory-to-excitatory synapses via long-term plasticity leads to a decrease in excitatory responses to familiar stimuli, while novel stimuli still elicit elevated responses.

Fig. 4.

(a) Oddball paradigm. Presenting a stimulus repeatedly (stimulus A) leads to a decrease of the neuronal response, while the deviant stimulus (stimulus B) leads to a high neuronal response. Panel adapted from Wu et al., Trends Neurosci. 45, 884–898 (2022). Copyright 2022 Elsevier, Inc.158 (b) Synaptic plasticity leading to strongly recurrently connected structures (assemblies). Panel adapted from Miehl et al., J. Physiol. (published online) (2023). Copyright 2023 John Wiley & Sons, Inc.159

Many functional implications have been suggested for the reduced neuronal activity for familiar stimuli compared to the elevated activity for novel stimuli, ranging from efficient coding and redundancy reduction to fast detection of unexpected events and Bayesian inference.157,166 Another widely considered implication is predictive coding. In this framework, the goal of the brain is thought to be the minimization of the difference between its internal prediction about the world and the sensory input.178 High responses to novel stimuli can then be interpreted as the prediction error. However, how exactly these computations are implemented in the brain and how they are related to short- and long-term plasticity mechanisms remain largely unresolved.

Neuronal circuits also need to be robust against perturbations. In experimental studies, disrupting the sensory inputs to the developing brain via deprivation experiments (e.g., closing the eye of an animal) leads to homeostatic adjustments of the respective neuronal circuits.179 A related question is how tightly neuron-intrinsic properties, such as conductance densities, need to be regulated to maintain proper circuit function.180 For example, computational models and machine learning tools reveal that similar circuit dynamics can be found even for vastly different ion channel conductance densities and that this degeneracy allows circuits to dynamically compensate for perturbations on very fast timescales.181–183 Neuromodulators (such as serotonin, dopamine, etc.) are chemicals that control a neuron’s intrinsic properties.184 Further computational studies have started to investigate the combined effects of intrinsic and extrinsic neuron properties on neuronal activity and the robust formation of switches between activity states, as found, e.g., in the sleep-wake cycle.185

Furthermore, learning and memory formation can be viewed as adaptive processes. Interestingly, it has been suggested that learning in neuronal circuits relies on the same mechanisms as described above, short- and specifically long-term synaptic plasticity. While short-term plasticity might underlie working memory,186 long-term plasticity has been hypothesized to be the basis for long-term memory storage.14 One prominent idea is that groups of strongly interconnected neurons, so-called assemblies, are the basic unit of representation in the brain, and long-term plasticity has proven key for learning these connectivity structures in computational models159,187 [Fig. 4(b)]. Neuronal circuits face a “stability-flexibility tradeoff”: on the one hand, synaptic connectivity should remain stable to allow for long-term memory storage and be robust against perturbations, while on the other hand, circuits should remain flexible enough to allow re-learning or the learning of new representations.188 Computational studies modeling neuronal networks have suggested different solutions, such as reverberating neuronal activity,189 inhibitory-to-excitatory plasticity,190 or a combination of multiple synaptic plasticity and homeostatic mechanisms.191

Despite recent promising developments, experimental and computational studies have only scratched the surface of understanding the role of intrinsic, short-, and long-term plasticity mechanisms in sensory adaptation. This endeavor is particularly important because deficits of information processing in neuropsychiatric diseases have been linked to disruptions in excitatory and inhibitory local circuits,192,193 and mismatch negativity has been suggested as a biomarker for psychotic disorders.194 Therefore, uncovering the role of different cellular dynamics can have positive therapeutic impacts (see Sec. III D).

Neural mass models are mean-field models developed to mimic the dynamics of homogeneous populations of neurons. These models range from purely heuristic ones (such as the well-known Wilson–Cowan model195) to more refined versions obtained by considering the eigenfunction expansion of the Fokker–Planck equation for the distribution of the membrane potentials.196,197 Quite recently, however, a next generation neural mass model has been derived in an exact manner for heterogeneous populations of spiking neurons.198–200 This exact derivation is possible for networks of quadratic integrate and fire (QIF) neurons, representing the normal form of Hodgkin’s class I excitable membranes,201 thanks to the analytical techniques developed for coupled phase oscillators.50 Specifically, next generation neural mass models describe the dynamics of networks of spiking neurons in terms of macroscopic variables, such as the population firing rate and the mean membrane potential, and they have already found various applications in many neuroscientific contexts.202–211 Returning to the terminology introduced in Sec. III B, here, we investigate the dynamics emerging in next generation neural mass models when populations of neurons adapt to changes in the environment by modifying their intrinsic or extrinsic properties. In particular, we present an overview of the collective dynamics (e.g., synchronous, bursting neural dynamics) in next generation neural mass models that arise from spike-frequency adaptation or post-synaptic plasticity.

Spike-frequency adaptation is a widespread neurobiological phenomenon, exhibited by almost any type of neuron that generates action potentials. It occurs in vertebrates as well as in invertebrates, in peripheral as well as in central neurons, and may play an important role in neural information processing. As will be clarified in the following, all biophysical mechanisms that can cause spike-frequency adaptation include a form of slow negative feedback to the excitability of the cell; therefore, spike-frequency adaptation represents an intrinsic mechanism of adaptation. In more detail, experimental work suggests that it results from different balancing currents triggered in a single cell after it generates a spike.212,213 Three main types of ionic adaptation currents that influence spike generation are known: voltage-gated potassium currents, which are caused by voltage-dependent, high-threshold potassium channels;214 the interplay of calcium currents and intracellular calcium dynamics with calcium-gated potassium channels;215 and the slow recovery from inactivation of the fast sodium channel.216 As a result of these cellular mechanisms, many neurons show a reduction in the firing frequency of their spike response following an initial increase when stimulated with a square pulse or step.
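The following minimal sketch (hypothetical parameters) illustrates this intrinsic mechanism with a quadratic integrate and fire neuron carrying a slow adaptation current: each spike increments the current, which subtracts from the input drive, so the inter-spike intervals lengthen after stimulus onset.

```python
import numpy as np

# Minimal sketch (hypothetical parameters): a quadratic integrate-and-fire (QIF)
# neuron with a slow adaptation current a(t). Every spike increments a, which
# subtracts from the input drive, so the inter-spike intervals (ISIs) lengthen
# over time -- the signature of spike-frequency adaptation.

dt, T = 1e-5, 2.0            # time step and total simulated time (s)
tau_m, tau_a = 0.01, 0.5     # membrane and adaptation timescales (s)
eta, J_a = 10.0, 0.2         # constant input drive, per-spike adaptation jump
v_peak = 100.0               # spike threshold (stands in for +infinity)

v, a = 0.0, 0.0
spike_times = []
for step in range(int(T / dt)):
    v += dt / tau_m * (v * v + eta - a)   # QIF membrane dynamics
    a += dt * (-a / tau_a)                # slow decay of the adaptation current
    if v >= v_peak:                       # spike: reset and increment adaptation
        v = -v_peak
        a += J_a
        spike_times.append(step * dt)

isis = np.diff(spike_times) * 1e3
print(f"first ISI: {isis[0]:.1f} ms, adapted ISI: {isis[-1]:.1f} ms")
```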

Short-term plasticity161,217–220 refers to a phenomenon in which synaptic efficacy changes over time in a way that reflects the history of presynaptic activity, thus constituting an extrinsic mechanism of adaptation (see Sec. III B). Two types of short-term plasticity, with opposite effects on synaptic efficacy, have been observed in experiments: short-term depression and short-term facilitation. On the one hand, synaptic depression is caused by the depletion of neurotransmitters consumed during the synaptic signaling process at the axon terminal of a pre-synaptic neuron, and it has been linked to various mechanisms, such as receptor desensitization,221,222 receptor density reduction,223,224 or resource depletion at glial cells involved in synaptic transmission.32,225 On the other hand, synaptic facilitation is caused by the influx of calcium into the axon terminal after spike generation, which increases the release probability of neurotransmitters. Short-term plasticity has been found in various cortical regions and exhibits great diversity in properties.226–228
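For illustration, the following minimal sketch iterates a Tsodyks–Markram-type model of short-term plasticity over a regular presynaptic spike train; all parameters are hypothetical. The variable x tracks available synaptic resources (depression), u tracks the release probability (facilitation), and the efficacy transmitted by a spike is proportional to their product.

```python
import numpy as np

# Minimal sketch (hypothetical parameters) of a Tsodyks--Markram-type model of
# short-term plasticity: x tracks available resources (depression), u tracks
# the release probability (facilitation); a spike transmits an efficacy ~ u*x.

U0, tau_d, tau_f = 0.1, 0.5, 0.5   # baseline release, recovery and facilitation times (s)

def synaptic_efficacies(spike_times):
    """Efficacy u*x transmitted by each spike of a presynaptic train."""
    x, u, t_last = 1.0, U0, None
    out = []
    for t in spike_times:
        if t_last is not None:
            dt = t - t_last
            x = 1.0 + (x - 1.0) * np.exp(-dt / tau_d)  # resources recover toward 1
            u = U0 + (u - U0) * np.exp(-dt / tau_f)    # facilitation decays toward U0
        u += U0 * (1.0 - u)                            # spike boosts release probability
        out.append(u * x)
        x -= u * x                                     # spike consumes resources
        t_last = t
    return out

# For a regular 20 Hz train, the efficacy first rises (facilitation) and then
# decays below its peak as resources deplete (depression).
print(np.round(synaptic_efficacies(np.arange(0.0, 0.5, 0.05)), 3))
```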

In the context of spike-frequency adaptation, first efforts toward a neural mass description were made for a network of coupled linear integrate and fire neurons, employing the Fokker–Planck formalism and an adiabatic approximation justified by the long spike-frequency adaptation timescales.229 Analyzing this mean-field description, Gigante et al. were able to identify different types of collective bursting. Recently, it has been shown that an excitatory next generation neural mass equipped with different short-term mechanisms of global adaptation can give rise to bursting behaviors.209 Moreover, in Ref. 230, the authors have studied the effect of this adaptation mechanism on the macroscopic dynamics of excitatory and inhibitory next generation neural mass models by including in the original neural mass model proposed in Ref. 200 an additional collective afterhyperpolarization current, which temporarily hyperpolarizes the cell upon spike emission. In a single population, spike-frequency adaptation favors the emergence of population bursts in excitatory networks, while it hinders tonic population spiking in inhibitory ones. When considering two neural masses, symmetrically coupled in the absence of adaptation, it is possible to observe the emergence of macroscopic solutions with broken symmetry: namely, chimera-like solutions in the inhibitory case and anti-phase population spikes in the excitatory one. Here, the addition of spike-frequency adaptation leads to new collective dynamical regimes exhibiting cross-frequency coupling between the fast synaptic time scale and the slow adaptation one, ranging from anti-phase slow–fast nested oscillations to symmetric and asymmetric bursting phenomena.
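For illustration, the following minimal sketch integrates a next generation neural mass model of the Montbrió–Pazó–Roxin type for the population firing rate r and mean membrane potential w, extended by a global adaptation variable A that is driven by the firing rate and subtracts from the input drive. This is one common way of adding spike-frequency adaptation at the mean-field level and is not the exact model of Ref. 230; all parameters are hypothetical.

```python
import numpy as np

# Minimal sketch (hypothetical parameters, not the exact model of Ref. 230):
# Montbrio--Pazo--Roxin-type mean-field equations for a QIF population,
#   tau * dr/dt = Delta / (pi * tau) + 2 r w,
#   tau * dw/dt = w^2 + eta - A + J * tau * r - (pi * tau * r)^2,
# extended by a slow global adaptation current A driven by the firing rate.

dt, T = 1e-4, 10.0
tau, tau_A = 0.02, 1.0                   # membrane and adaptation timescales (s)
Delta, eta, J, b = 0.3, 5.0, 21.0, 3.0   # heterogeneity, drive, coupling, adaptation gain

r, w, A = 0.1, -2.0, 0.0
r_trace = np.empty(int(T / dt))
for i in range(r_trace.size):
    dr = (Delta / (np.pi * tau) + 2.0 * r * w) / tau
    dw = (w * w + eta - A + J * tau * r - (np.pi * tau * r) ** 2) / tau
    dA = -A / tau_A + b * r              # adaptation integrates the firing rate
    r, w, A = r + dt * dr, w + dt * dw, A + dt * dA
    r_trace[i] = r

# For suitable parameters, the slow adaptation can turn steady firing into
# collective bursts, visible as large swings of the population rate.
print(f"rate min/max: {r_trace.min():.2f} / {r_trace.max():.2f}")
```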

In the context of short-term plasticity, a fundamental implementation was first given by Mongillo et al. in Ref. 186 to explain the mechanisms underlying working memory. Working memory is the ability to temporarily store and manipulate stimulus representations that are no longer available to the senses. In particular, in the model suggested by Mongillo and co-authors, synaptic facilitation allows the system to maintain an item stored in working memory for a certain period, without the need for enhanced spiking activity. Furthermore, synaptic depression is responsible for the emergence of population bursts, which correspond to a sub-population of neurons firing almost synchronously within a short time window.231,232 In this context, the bursting activity allows for item retrieval. The working memory mechanism is investigated in Ref. 186 by means of a recurrent network of spiking neurons, while a simplified heuristic firing rate model is employed to gain some insight into the population dynamics. A next generation neural mass model encompassing short-term synaptic facilitation and depression has been recently developed to revise the synaptic theory of working memory, with a specific focus on the emergence of neural oscillations and their relevance for working memory operations.207 In particular, Taher and co-authors in Ref. 207 consider multiple coupled excitatory populations, each coding for one item, and a single inhibitory population connected to all the excitatory neurons. This architecture is justified by recent experimental results indicating that GABAergic (i.e., inhibitory) interneurons in mouse frontal cortex are not arranged in sub-populations and that they densely innervate all pyramidal (i.e., excitatory) cells.233 The role of inhibition is to avoid abnormal synchronization and to allow for a competition between different items once they are stored in the excitatory population activity. Furthermore, in order to mimic synaptic-based working memory, only the excitatory–excitatory synapses are assumed to be plastic, displaying short-term depression and facilitation (in contrast to Sec. III B, where examples of short-term plasticity at inhibitory-to-excitatory synapses are also considered). As a result, memory operations are accompanied by sustained or transient oscillations emerging in different frequency bands, in accordance with experimental results for primates and humans performing working memory tasks.234–237 Since it reproduces working memory operations associated with population bursts delivered at different frequencies, the neural mass model with short-term plasticity presented in Ref. 207 can represent a first building block for the development of a unified control mechanism for working memory, relying on the delivery frequencies of the self-emerging trains of population bursts. However, a development toward realistic neural architectures would require designing a multi-layer network topology to reproduce the interactions between superficial and deep cortical layers.238

Spike-frequency adaptation and post-synaptic plasticity can be modeled, respectively, as an additive and a multiplicative term in the evolution equation of the mean membrane potential in the exact neural mass model. The novelty of this neural mass model, besides not being heuristic but derived in an exact manner from the underlying microscopic dynamics, is that it reproduces the evolution of the population firing rate as well as of the mean membrane potential. This allows us to gain insight not only into the synchronized spiking activity but also into the sub-threshold dynamics and to extract information correlated with local field potentials and electroencephalographic signals, which are usually measured to characterize the activity of the brain at a mesoscopic/macroscopic scale. Even though these adaptation mechanisms can express tremendously different timescales, ranging from a few hundred milliseconds (e.g., spike-frequency adaptation212) to days (e.g., postsynaptic receptor density reduction224), the mean-field descriptions remain applicable. However, note that a macroscopic model of synaptic plasticity cannot express vesicle depletion at the presynaptic site,211 as introduced for single cell models in Ref. 239. Finally, since adding spike-frequency adaptation leads to new collective dynamical regimes exhibiting cross-frequency coupling between the fast synaptic time scale and the slow adaptation one, the adaptive mechanisms in the framework of exact neural mass models could be useful for developing new models of self-organizing biological neural circuits that produce rhythmic outputs even in the absence of rhythmic input. An example could be central pattern generators, which are responsible for the generation of rhythmic movements, since these models are often based on two interacting oscillatory populations with adaptation, as reported for the spinal cord240 and the respiratory system.241

Regular deep brain stimulation is the gold standard for treating medically refractory Parkinson’s patients.242–246 In patients with advanced Parkinson’s disease, it was shown that regular deep brain stimulation plus medication was superior to medication alone.247 Notwithstanding its therapeutic efficacy,248,249 side effects are an issue.250–253 In fact, regular deep brain stimulation may cause characteristic side effects denoted as deep brain stimulation-induced movement disorders.254,255 Limited treatment efficacy is another issue. Regular deep brain stimulation administered to the standard targets, the subthalamic nucleus or the globus pallidus internus, is not effective for the therapy of gait and other so-called axial symptoms, e.g., balance and posture impairment, and hardly improves or even worsens speech as well as affective and cognitive symptoms.256–259

Abnormal neuronal synchrony is a hallmark of Parkinson’s disease.260 Based on computational modeling, it was suggested to specifically counteract abnormal neuronal synchrony by desynchronizing stimulation with phase-dependent stimulus delivery261 or by administering compound stimuli, which cause desynchronization irrespective of the initial dynamic condition.262,263 By design, coordinated reset stimulation employs comparably weak, phase resetting stimuli and does not require sophisticated calibration procedures.263 Accordingly, it was selected for pre-clinical studies (animal experiments) and clinical studies. Initially, coordinated reset stimuli were suggested to be delivered in a demand-controlled manner in a closed-loop setting, e.g., by delivering coordinated reset stimuli whenever a neuronal population gets resynchronized or by adapting the amplitude of the coordinated reset stimuli to the amount of synchrony.263 At that time, no implantable pulse generators for coordinated reset stimulation were available for clinical tests.246,264 Engineering-based concepts led to the development of closed-loop brain stimulation devices that recorded muscular or neuronal activity to suppress unwanted neuronal activity whenever detected.265,266 Routine clinical applications of closed-loop deep brain stimulation still require a number of issues to be resolved.267 

In contrast, based on principles of adaptive dynamical systems, a qualitatively different stimulation approach was computationally developed.268 Adaptivity is a fundamental feature of the nervous system and, in fact, the entire body, enabling it to cope with complex physiological processes subjected to environmental changes; see Secs. III A–III C, III E, and III F. By the same token, adaptive as well as maladaptive (i.e., less favorable) responses to pathological changes are key to disease mechanisms. For instance, in Parkinson’s disease, a lack of dopamine initiates a cascade of functional and structural changes.269 To specifically counteract disease-related adaptive changes, synaptic plasticity159,191 (see also Secs. III B and III C), specifically spike-timing-dependent plasticity,14,27,162,163 was incorporated in neuronal network models used to design therapeutic stimulation, giving rise to a radically new stimulation and treatment concept.268

It was observed that coordinated reset stimulation can shift a network from an unfavorable, synchronized attractor to a more favorable, desynchronized attractor (Fig. 5).268 From then on, coordinated reset stimulation and further variants were computationally developed and optimized to robustly cause an “unlearning” of pathological synchrony and synaptic connectivity, in this way causing long-lasting therapeutic effects.268,270–277 A series of computational studies revealed novel stimulus response characteristics of neural networks with spike-timing-dependent plasticity:

  1. Rebound of synchrony after cessation of stimulation: Directly after cessation of coordinated reset stimulation, synchrony may reemerge and then spontaneously fade while further approaching the desynchronized attractor.270 

  2. Cumulative effects: Effects of coordinated reset stimulation may accumulate over time,278 and stimulation pauses may even improve the outcome.108 

  3. Acute vs long-term effects: Acute stimulation effects (observed during stimulation) and long-term effects (emerging when the system relaxes into a stable state after cessation of stimulation) may differ substantially.272,274 One can even decouple neurons, i.e., reduce their synaptic weights, without desynchronization during stimulation.274 In fact, acute effects do not necessarily serve as predictive markers for a long-term outcome.272,274

  4. Transition to non-invasive stimulation: Long-term effects are favorable because they make it possible to reduce stimulation time and, hence, potentially reduce side effects. However, a profound advantage of this type of stimulation is that it does not require implants to permanently deliver stimulation. Rather, as predicted theoretically,279,280 non-invasive stimulation can be delivered occasionally or regularly for a few hours. Non-invasive therapies are typically less risky and more appropriate for larger patient populations.

  5. Functional restoration: Not only stimulation-induced unlearning of abnormal synaptic connectivity and neuronal synchronization,268 but also reshaping network connectivity by differentially up- or downregulating different synaptic connections276 may contribute to restoration of function.

  6. Different plasticity mechanisms: In Parkinson’s disease pathophysiology, both spike-timing-dependent plasticity and structural plasticity281,282 are important269 and may induce different stimulation responses.283,284

These computationally derived predictions and results enabled the design of appropriate protocols for pre-clinical and clinical studies.

Fig. 5.

Schematic illustrating how desynchronizing stimulation induces long-lasting therapeutic effects by leveraging plasticity. Spike-timing-dependent plasticity is a fundamental plasticity mechanism of the nervous system, which adapts the synaptic strengths based on the relative timings of post- and presynaptic spikes.14,27,163 Neural networks with spike-timing-dependent plasticity typically display bi- or multi-stability of stable states with stronger synchrony and synaptic connectivity and stable desynchronized states with weaker synaptic connectivity,16,268,270,272,274,278 as illustrated by a simple double-well potential here. These states serve as models for pathological and physiological conditions. Coordinated reset stimulation may shift the network into the basin of attraction of a stable desynchronized state, in this way causing a long-lasting desynchronization.268

Invasive coordinated reset studies: Coordinated reset deep brain stimulation was successfully tested in Parkinsonian monkeys.280,285–287 For instance, a few hours of coordinated reset deep brain stimulation led to therapeutic effects lasting for one month.280 In addition, cumulative and long-lasting desynchronizing and therapeutic effects were observed in Parkinson’s patients treated with coordinated reset deep brain stimulation.264 

Non-invasive coordinated reset studies: Vibrotactile coordinated reset fingertip stimulation was developed to provide patients with a non-surgical and non-pharmacological treatment option.288 To this end, instead of administering electrical bursts through depth electrodes, weak, non-painful vibratory bursts were non-invasively delivered in a coordinated reset mode to patients’ fingertips.288 A first-in-human study289 as well as pilot studies290 showed that vibrotactile coordinated reset stimulation is safe and tolerable and revealed a statistically and clinically significant reduction of Parkinson’s disease symptoms off medication, together with a significant reduction of high beta (21–30 Hz) power in the sensorimotor cortex. Remarkably, axial symptoms, which are difficult to treat with regular deep brain stimulation, also responded well to vibrotactile coordinated reset in these studies.289,290 For illustration, see patient videos in Ref. 290. Of note, Parkinson’s disease patients improved during a month-long vibrotactile coordinated reset treatment when evaluated after medication withdrawal, indicating a substantial improvement of the patients’ conditions.290 These findings indicate that a vibrotactile coordinated reset treatment might even have an impact on metabolic and degenerative processes,290,291 for example, by slowing or even counteracting degeneration-related processes, such as vicious circles giving rise to oxidative stress and mitochondrial impairment, which cause a bioenergetic crisis and the death of dopamine neurons in the substantia nigra.292–294

In summary, instead of simply suppressing unwanted neuronal activity, based on principles of adaptive dynamical systems, appropriately designed stimulation techniques intend to induce sustained therapeutic effects by moving affected neural systems to more favorable attractors (Fig. 5).

Understanding music is an interdisciplinary task.295 Musical instruments are built such that we can listen to them, actively play them, use them in social contexts, or use them according to individual demands and tasks. Therefore, scientific disciplines, such as the physics of musical instruments, music psychology and neuromusicology, music sociology, or political science, must interact to arrive at a holistic understanding of music. Furthermore, the role of music in culture, technology, economy, and ethnicity, as well as its interactions with natural resources, such as wood or alternative materials for musical instrument building, needs to be considered.

Music is, therefore, a constant adaptation process. Listeners adapt to new musical pieces. Musicians adapt to audiences, to newly available musical instruments, or to new ideas of compositional techniques. Instrument builders adapt to contemporary sound and performance demands, new materials, or new technologies. Society adapts to new musical pieces, genres, or modes of music presentation, such as mass media or streaming platforms. Such adaptations are processes that include changing strategies, emotional reactions, or the development of new abilities. Participants in such adaptation processes might welcome and engage with new developments, or they might try to reject and oppose them.

In contemporary research, each scientific discipline uses its own methods for understanding and predicting music.295 Music psychology often uses statistics or Bayesian methods. Musical acoustics involves mainly analytical equations and discretization methods, such as finite-element or finite-difference methods. Music ethnology is still dominated by heuristic and historical methodology, while computational or analytical ethnomusicology also includes mathematical modeling, e.g., of tonal systems. In all fields, machine learning methods have become increasingly important: connectionist models are nearly always used for composition (see, e.g., Briot et al.296 for an overview), and self-organizing Kohonen maps297 are often used for analytical purposes.298–300

The methodologies used, therefore, strongly depend on the subfields, but some also intertwine, e.g., in the field of psycho-acoustics, which relates physics to perception using algorithms calculating loudness, brightness, pitch, spatial audio, or the like. Still, to arrive at a common, robust, suitable algorithm able to model music in a global, holistic way in the future, also including extra-musical players, such as ecology, economy, or politics, a common ground is needed that is acceptable to the very diverse disciplines involved. For example, a physical culture theory describes music as an adaptive system consisting of impulses: physical energy bursts that are sent out, return with a specific damping, and thereby cause new impulses.301 In its most general form, the impulse pattern formulation is written in terms of a system parameter g representing an impulse sent out by one subsystem. This impulse is reflected at n other subsystems, with damping parameters α and β_k for each reflection point k, such that302,303

$$g^{+} = g - \ln\left(\frac{1}{\alpha}\left(g - \sum_{k=1}^{n} \beta_{k}\, e^{g - g_{k}}\right)\right). \tag{19}$$

The system parameter g is updated at each iteration step to g^+, taking the most recent g and the previous g_k into consideration. The logarithm reflects the exponential damping found in most systems. Adaptation is present for g^+ = g; e.g., with musical instruments, g can be taken as the periodicity of a musical tone. During the initial transient phase, g^+ ≠ g, and the system struggles, leading to a complex initial transient sound. After the initial phase, a stable periodicity is reached, and a musical pitch is heard. For example, with a guitar, two subsystems are present, the string and the guitar body, each with its own eigenfrequencies. Still, when a note is played, the string’s vibration takes over the guitar body’s vibration; i.e., the body adapts to the string’s pitch. The impulse pattern formulation is, therefore, able to model the guitar tone very precisely, which is especially reflected in the length and complexity of the initial sound phase.302
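For illustration, the following minimal sketch iterates Eq. (19) for a single reflection point (n = 1) with hypothetical damping parameters; the iteration shows a transient before settling on the adapted fixed point g^+ = g, mirroring the complex initial transient of a musical tone.

```python
import numpy as np

# Minimal sketch (hypothetical damping parameters): iterating the impulse
# pattern formulation, Eq. (19), with a single reflection point (n = 1).
# After an initial transient, the iteration settles on the adapted fixed
# point g+ = g (here alpha + beta), mirroring the attack phase of a tone.

alpha, beta = 1.8, 0.3

g_hist = [1.0, 1.0]                  # g at the two most recent iteration steps
for _ in range(50):
    g, g_k = g_hist[-1], g_hist[-2]
    inner = g - beta * np.exp(g - g_k)
    if inner <= 0:                   # logarithm undefined: impulse pattern breaks down
        break
    g_hist.append(g - np.log(inner / alpha))

print([round(g, 3) for g in g_hist[:5]], "...", round(g_hist[-1], 3))
```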

Such an impulse pattern formulation algorithm is scale-free and, therefore, able to model and predict very small networks as well as overall or general behavior quickly and precisely, in musical acoustics303,304 as well as in music perception and action.305 Such a self-organizing system is found as a basis for all musical instrument families. Moreover, it has been proposed as a basis of brain dynamics306 and of interactions in society or politics.

For such a system to work for aesthetic and artistic matters, consciousness and conscious content, such as experiencing sound, vision, emotion, or any kind of cognition, need to be incorporated. The physical culture theory assumes conscious content to be spatiotemporal electric fields in the brain, complex enough to give rise to experiences of all kinds. Such a spatiotemporal field, again, is nothing but a complex impulse pattern. Brain dynamics is, therefore, no longer taken as an interplay of bottom-up and top-down processes but as a complex, self-organizing system. Localization of brain regions processing certain tasks, such as audition, vision, or thinking, is still evident in this picture, as auditory input enters the brain through the ear, cochlea, and auditory pathway to end in the auditory cortex (as, e.g., in the auditory oddball paradigm; see Fig. 4 and Secs. III B and III F). Still, already within this brain network, circular neural processing is often present, nearly directly connecting the cortex to the cochlea in the inner ear and back up to the cortex. Therefore, adaptation of the brain to an external input is an active process involving the whole brain, although the input of sensory information can clearly be located.

In such global musical networks, stable, bi-stable, bifurcating, complex, or chaotic scenarios occur.302 In terms of musical instrument sounds, a stable musical pitch is only established after a complex initial transient sound phase. Each new tone of a melody needs to undergo such changes. This also holds for brain activity.307 In ensemble playing, the interaction of musicians reacting to co-musicians’ performances undergoes similarly complex changes. Therefore, the whole system is a constant interplay of surprise and adaptation to changing scenarios. Although such adaptation might succeed, leading to a steady state, it might also fail, resulting in extended periods of chaos, noise, or bifurcating sounds. Adaptation and disruption are, therefore, two essential and ever-repeating sides of music on all levels, from sound and musical pieces to musical genre formation and music history.

Most sounds, such as speech and music, evolve and unfold in time, and yet, the brain perceives them as one whole continuous entity (see also Sec. III E). For this, the brain needs a memory mechanism whereby incoming stimuli are represented and integrated with the trace of the stimuli extending into the immediate past. This ability is termed temporal integration. While source localization and spectral analysis are suggested to be the task of subcortical areas, temporal integration of sounds is proposed to occur in the auditory cortex.308 In an attempt to understand how the auditory cortex performs temporal binding, it was shown by intracranial and extracranial measurements that neural responses in the auditory cortex are context sensitive.309,310 That is, the neural response to a stimulus is modified when the same stimulus is presented in the context of different stimuli, where this sensitivity is a function of both the temporal occurrence and the spectral content of the preceding stimuli.311–313 The simplest form of context sensitivity in the auditory cortex occurs when the same stimulus is presented repetitively with a constant stimulus onset interval. The result is a gradual reduction of the magnitude of the neural responses, termed adaptation. Adaptation is stimulus specific and a function of the stimulus onset interval.314

The stimulus specificity of adaptation was shown in oddball paradigms, where the repetitive presentation of a frequent standard stimulus is interrupted by an infrequent deviant stimulus (see also Fig. 4 in Sec. III B). The magnitude of the neural responses to the standards is smaller than the magnitude of the responses to the deviants.309,312 In invasive and noninvasive measurements, this is known as stimulus-specific adaptation and as the mismatch response, respectively.311,312 Despite decades of research on adaptation and its relevance for stimulus-specific adaptation and mismatch responses, understanding how adaptation takes place in the auditory cortex remains challenging. Already single neurons, due to their intrinsic properties, show adaptation, which is termed spike-frequency adaptation (see also Sec. III C).314 Adaptation is observed in the auditory nerve fibers of the cochlea as well as in the inferior colliculus and thalamus, which act as relay stations between the cochlea and the auditory cortex. There are reasons to believe that adaptation in the auditory cortex is neither solely the result of single neurons adapting to the stimulus statistics nor simply inherited from the subcortical regions.157,314 The time scales at which single neurons in different stations along the auditory pathway exhibit adaptation differ from those occurring in the auditory cortex.314,315 Unlike in the nonlemniscal pathway, adaptation does not occur in those subdivisions of the inferior colliculus and thalamus of the lemniscal pathway that target the primary auditory cortex (i.e., the core area).312,314 Along the auditory pathway, adaptation manifests itself in increasingly complex ways, with its time scales in the auditory cortex adapting to the time scales of the stimulation.312,314

Neurons in the brain form networks and do not appear in isolation. The contact points between neurons are synapses, whose dynamics are highly plastic. One prevailing view on the underlying mechanisms of adaptation in the auditory cortex is that it is due to modulations of the synaptic coupling between neurons. However, what accounts for these modulations of synaptic coupling is an ongoing debate.316,317 Short-term synaptic depression has been hypothesized to be one plausible physiological mechanism176,318–320 (see also Secs. III B and III C). This type of synaptic plasticity, which occurs due to the repetitive stimulation of the pre-synaptic neurons, is mainly based on vesicle depletion and the desensitization of release sites and calcium channels on the synapses of the pre-synaptic neurons.161 Short-term synaptic depression occurs at time scales that are similar to the time scales of context-sensitive responses, and it has high functional relevance for temporal filtering,321 gain control,219 and, perhaps counterintuitively, efficient information transfer between neurons.322

In our research, we implemented the dynamics of short-term synaptic depression in a computational model whose network structure is based on the anatomy of the mammalian auditory cortex.323–325 The auditory cortex of mammals is characterized by the hierarchical core–belt–parabelt structure, where each of these three areas is subdivided into tonotopically organized fields.326,327 The model comprises mean-field excitatory and mean-field inhibitory cell populations, which are characterized by nonlinear firing rates. The interconnection between cell populations is modulated by short-term synaptic depression according to the spectrotemporal pattern of the stimulation. The linearized form of the state equations, together with the slow–fast approximation of the equation for short-term synaptic depression, allows for the analysis of the model dynamics in terms of damped harmonic oscillators, i.e., normal modes.324,325 We could show that the properties of the normal modes (i.e., frequency, phase, initial amplitude, spatial wave pattern, and decay rate) are functions of the macro- (gross anatomy) and micro-structure (synaptic weight values) of the auditory cortex network as well as of the spectrotemporal pattern of the stimulation. In this approach, the auditory cortex is viewed as a spatially extended structure, and the activity elicited by an external stimulus propagates in time and space. The dynamics of short-term synaptic depression, which locally trace the stimulus history at the synapses, determine the oscillations that spread over the entire auditory cortex. In this view, local and global population activities, as revealed by intracranial and extracranial recordings, respectively, emerge from the constructive and destructive interference patterns of superimposed normal modes. This contrasts with the traditional view where, for example, electromagnetic activity in the brain measured by means of magnetoencephalography reflects the summed activity of discrete local generators distributed over the auditory cortex. In the normal-mode view, adaptation in the auditory cortex can be described as modulations of the properties of these normal modes due to the modulations of synaptic coupling, where the reduction of a response magnitude is just a by-product.325

In this section, different authors reflect on the meaning of adaptivity in the context of artificial learning. Among other topics, fundamental open problems in machine learning are discussed, and perspectives are given on how machine learning can be used to solve physics problems and to create new control strategies for nonlinear (chaotic) systems. Toward the end of this section, the role of artificial learning in understanding and controlling complex many-body systems and cooperative behavior is discussed.

Deep neural networks have powered a series of breakthroughs in machine learning over the last ten years. Since their early success in computer vision,328–332 they have set new standards in natural language processing333–336 and the playing of complex games, such as Go337,338 or Poker.339–341 Deep learning also increasingly impacts the natural sciences;342 for example, deep neural networks recently helped predict the 3D structure of nearly every human protein343 in a breakthrough for structural biology. Further applications of machine learning to solve physics problems are given in Sec. IV B.

While neural networks used in machine learning are inspired by biological neural circuits, such as the ones described in Secs. III B and III C, the neurons in machine learning are much simpler than biological neurons. Yet, it turns out that a different form of adaptivity is behind the success of deep learning. We illustrate this point using the classic machine learning task of recognizing whether a given image shows a cat or a dog. Given an image x, represented by an array of pixel values, the classical approach was to compute a vector x̃ of features344–346 that represents the image, which is then fed into a classifier. Features could be the locations of edges in an image or the correlations between patches of the same image. These features were designed a priori and required extensive domain knowledge.

The key idea of deep learning is instead to learn the relevant features directly from data. Rather than computing a feature vector using a predefined set of transformations, we try to learn a function f_θ(x) that maps the raw images x directly to a “label” y = ±1, indicating whether the image shows a cat or a dog. A neural network is a particular functional form for f_θ(x), usually consisting of a series of alternating linear transformations and point-wise non-linear functions.347 The adjustable parameters θ, called weights, determine what the transformations compute exactly. They are found by maximizing the prediction accuracy of the network on a given set of images, which is called “training” the network.348 In practice, simple first-order optimization methods, such as stochastic gradient descent, work best.349,350 Training a neural network is, thus, a general-purpose procedure to obtain features that are well-adapted to the input data and the task at hand.
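For illustration, the following minimal sketch trains a tiny two-layer network f_θ(x) with stochastic gradient descent on a toy binary labeling task (all dimensions and data are hypothetical). Both layers are adapted to the data, which is precisely the sense in which the features are learned rather than designed.

```python
import numpy as np

# Minimal sketch (toy data, hypothetical dimensions): training a tiny neural
# network f_theta(x) by stochastic gradient descent on a binary labeling task.

rng = np.random.default_rng(0)
n, d, h = 2000, 50, 32                  # samples, input dim, hidden units

# Toy task: labels depend on two "relevant" directions in input space.
X = rng.standard_normal((n, d))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])

W1 = rng.standard_normal((d, h)) / np.sqrt(d)   # weights theta = (W1, w2)
w2 = rng.standard_normal(h) / np.sqrt(h)
lr = 0.1

for epoch in range(20):
    for i in rng.permutation(n):        # stochastic gradient descent
        a = np.tanh(X[i] @ W1)          # hidden features (learned, not designed)
        p = np.tanh(a @ w2)             # network output in (-1, 1)
        err = p - y[i]                  # squared-loss error signal
        g_out = err * (1 - p**2)        # backpropagated through the output tanh
        w2 -= lr * g_out * a
        W1 -= lr * np.outer(X[i], g_out * w2 * (1 - a**2))

acc = np.mean(np.sign(np.tanh(X @ W1) @ w2) == y)
print(f"training accuracy: {acc:.2f}")
```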

From a theoretical point of view, the success of this approach is surprising for several reasons. For example, fitting a function in a high-dimensional space, such as the space of natural images, suffers from the curse of dimensionality: the number of samples required to estimate such a function accurately scales exponentially with the input dimension.351 Many current research activities, for example, in statistical physics,342,352–354 aim to reconcile the success of neural networks with the curse of dimensionality.

One key to this puzzle is that images are not as high-dimensional as they seem. Most of the points in the high-dimensional input space do not represent images (at least not to a human observer) and instead look like random noise. The points that do represent real images tend to concentrate on a lower-dimensional manifold in input space, sketched as a two-dimensional curved surface in Fig. 6. While the manifold is not easily defined, it is tangible: its dimension has been estimated numerically355–359 and found to be 10–100 times smaller than the image dimension.
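For illustration, the following minimal sketch generates synthetic data on a curved two-dimensional manifold embedded in a 100-dimensional space and estimates the local intrinsic dimension with a simple local principal component analysis. The works cited above use more refined estimators; this toy version only illustrates why the estimate comes out far below the ambient dimension.

```python
import numpy as np

# Minimal sketch (synthetic data): estimating the intrinsic dimension of inputs
# that lie near a low-dimensional manifold embedded in a high-dimensional space.
# A 2D latent variable is mapped smoothly into 100 dimensions; a local PCA
# around a reference point then recovers a dimension far below the ambient one.

rng = np.random.default_rng(1)
n, ambient = 5000, 100

z = rng.uniform(-1, 1, size=(n, 2))                 # latent 2D coordinates
A, B = rng.standard_normal((2, ambient)), rng.standard_normal((2, ambient))
X = np.tanh(z @ A) + (z**2) @ B                     # points on a curved 2D surface

# Local PCA: take the nearest neighbors of one point, count dominant directions.
i = 0
d2 = np.sum((X - X[i]) ** 2, axis=1)
neighbors = X[np.argsort(d2)[1:51]] - X[i]
eigvals = np.linalg.svd(neighbors, compute_uv=False) ** 2
explained = np.cumsum(eigvals) / np.sum(eigvals)
intrinsic_dim = int(np.searchsorted(explained, 0.95) + 1)
print(f"ambient dimension: {ambient}, estimated intrinsic dimension: {intrinsic_dim}")
```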

Fig. 6.

The manifold structure of realistic images. Each black dot indicates a point in a high-dimensional space, which could be an input for neural networks. In the eye of a human observer, most inputs in this space resemble random noise, such as the “images” shown on the left. Neural networks exploit the fact that realistic images tend to concentrate on a lower-dimensional manifold in input space, sketched here as a two-dimensional curved surface. Figure adapted from Goldt et al., Phys. Rev. X 10, 041044 (2020). Copyright 2020 American Physical Society.360 Images are taken from the ImageNet361 data set [Deng et al., “ImageNet: A large-scale hierarchical image database,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 248–255. Copyright 2009 IEEE.].

It is difficult to analyze the impact of the low intrinsic dimension of images on neural networks theoretically, because we lack the mathematical tools to reason about real-world data. A series of works, therefore, introduced models of data with low intrinsic dimension, such as object manifolds,362 the hidden manifold,360,363 or the spiked covariate model.364,365 Each of these models offers a controlled environment in which the adaptivity of neural networks can be studied, using tools from statistics or statistical physics. One result of these studies is that neural networks can indeed adapt to lower-dimensional manifolds in their data better than classical methods of machine learning, such as kernel methods.364,366–371

These results set the blueprint for a research program that aims to understand the interplay of neural networks and the data on which they operate. What are the (potentially) low-dimensional structures in other data modalities, such as human language or amino acid sequences, that neural networks can exploit?

Machine learning tools have found extensive use in the study of physical problems.342 While it is not possible to provide an exhaustive list of these applications in this Perspective, we highlight a few examples related to statistical physics, namely, learning and sampling from equilibrium distributions,372 classifying phases of matter,373,374 estimating free energy differences,375 identifying the direction of time’s arrow,376 and estimating entropy production.377 For a comprehensive review of machine learning in physical sciences, readers may refer to Ref. 342. However, the relationship between physics and machine learning is not one-sided. Tools from theoretical physics have illuminated how machine learning tools function352 (see also Sec. IV A). In the following, we examine these two directions through the lens of adaptivity.

First, we examine how machine learning can be applied to solve physics problems, with a focus on the role of adaptivity. In particular, we consider supervised learning tasks where input–output pairs are provided, and the objective is to train a neural network to accurately predict the target output value given an input. As discussed in Sec. IV A, adaptivity plays a crucial role in training the networks. In the optimization process, the network’s weights are adjusted to minimize the difference between the predicted and target output values so that the network can make accurate predictions. However, as we discuss in this section, the network’s prediction can be further enhanced by adapting to the history of previous inputs. This additional degree of adaptivity is particularly useful when working with sequential data. Recurrent neural networks allow for this type of adaptive inference by using an internal state that depends on the input at the previous step. Given a sequence of input tokens $x_t \in \mathbb{R}^{n_v}$ and the hidden state $h_t \in \mathbb{R}^{n_h}$ at time step t, this dependency can be captured as378

$$h_t = f(x_t, h_{t-1}; \theta), \tag{20}$$
where f represents a neural network parameterized by θ. In its most basic form, the output of the network y_t can be calculated by applying another parameterized function to h_t. While in principle these networks can capture long-term dependencies in a sequence, it has been shown that training them can be challenging due to vanishing or exploding gradients.379 More complicated constructions of recurrent neural networks, such as long short-term memory networks, solve this problem using a self-loop that allows the gradient to flow for longer.380 Modern machine translation tools build on these networks to map sequences in one language to sequences in another (seq2seq).334
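For illustration, the following minimal sketch implements Eq. (20) as a vanilla recurrent network with a tanh nonlinearity and a linear readout; all dimensions and weights are hypothetical and untrained, the point being only how the hidden state carries a summary of the input history.

```python
import numpy as np

# Minimal sketch (hypothetical dimensions, untrained weights): a vanilla
# recurrent network implementing Eq. (20), h_t = f(x_t, h_{t-1}; theta),
# with a linear readout. The hidden state summarizes the input history.

rng = np.random.default_rng(0)
n_v, n_h, n_y = 8, 16, 1               # input, hidden, and output dimensions

theta = {
    "Wx": rng.standard_normal((n_h, n_v)) * 0.3,
    "Wh": rng.standard_normal((n_h, n_h)) * 0.3,
    "b":  np.zeros(n_h),
    "Wy": rng.standard_normal((n_y, n_h)) * 0.3,
}

def step(x_t, h_prev, theta):
    """One application of Eq. (20) plus the output map y_t = g(h_t)."""
    h_t = np.tanh(theta["Wx"] @ x_t + theta["Wh"] @ h_prev + theta["b"])
    y_t = theta["Wy"] @ h_t
    return h_t, y_t

h = np.zeros(n_h)
for t in range(20):                    # run over a random input sequence
    x_t = rng.standard_normal(n_v)
    h, y = step(x_t, h, theta)
print("final output:", y)
```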

Among many applications of these models in physics, we briefly discuss inferring force fields from the trajectories of particles381 and chaotic time-series forecasting.382 Reference 381 considers the problem of inferring the force field in overdamped Brownian motion. Specifically, the input x_t represents the position of the Brownian particle, and the output is the parameter(s) that describe the functional form of the potential. For example, in the case of a harmonic potential U(x) = (1/2) k x², the output of the network at the final step represents the inferred value of k. The recurrent neural network is shown to outperform conventional methods with limited data and can remarkably infer non-conservative time-dependent force fields, which conventional methods cannot handle. Reference 382 focuses on forecasting the dynamics of chaotic systems following the Kuramoto–Sivashinsky equation.383–385 The input is a discretized scalar field in space at step t, and the desired output is the value of the field at step t + 1. The authors use the framework of reservoir computing386 (a recurrent neural network with an untrainable input-to-internal-state mapping) to forecast the dynamics far beyond the Lyapunov time. In addition, see Sec. IV C for a discussion of using reservoir computing to control chaotic dynamical systems. In both of these examples, the network’s internal state is adjusted based on the input history [see Eq. (20)], allowing it to capture temporal dependencies in the input data sequence.

The two examples discussed earlier demonstrate applications of recurrent neural networks in solving physics problems. However, it is also important to examine the reverse direction, where physics problems can be used to better understand recurrent neural networks. Reference 387 provides a case study of this approach, in which a simple model for seq2seq tasks is used to investigate the impact of the data distribution on learning using a physical problem. Specifically, it considers the stochastic switching Ornstein–Uhlenbeck process, a latent variable model that describes the trajectories of a Brownian particle in a harmonic potential with a time-dependent center that stochastically alternates between two values. The non-Markovianity of the input sequence is controlled by varying the distribution of waiting times between these alternations. The goal is to infer the current location of the center from the particle’s past trajectory. The authors use several machine learning models for this task and demonstrate that increasing the memory of the learning model always improves the accuracy of the predictions, whereas increasing the non-Markovianity of the input sequences can either improve or degrade performance. They also identify an intriguing relationship between the performance of a learning model and distinct phases in the stationary state of the stochastic switching Ornstein–Uhlenbeck process. In this case, as the memory of the learning model is increased, the network becomes more adaptable to longer-term dependencies in the input sequence, which in turn leads to improved performance.

The two-sided relationship between physics and machine learning is still in its early stages of development, leaving plenty of opportunities for further exploration. On the one hand, artificial intelligence can aid in discovering and explaining scientific phenomena, with emerging techniques, such as natural language processing models, potentially facilitating communication between users and algorithms.388 On the other hand, statistical physics has already been used to provide theoretical insights into the behavior of deep learning,352 and the theory of adaptive systems could prove particularly valuable in understanding the role of data structures and the dynamics of learning in recurrent neural networks.

In this section, we consider controlling complex dynamical systems using closed-loop feedback based on a machine learning approach known as reservoir computing. Here, the concept of adaptivity appears in at least two guises: in the dynamical system being controlled, often called the plant, and in the controller. For a plant to be controlled toward a desired behavior, we need access to signals generated by transducers attached to the plant that can be used to infer its dynamical state, as well as access to one or more parameters that adjust the state of the plant, as illustrated in Fig. 7.

Fig. 7.

A complex dynamical system controlled using a closed-loop feedback. The controller is designed using an adaptive machine learning approach.

The controller needs to process plant signals and perform inference to estimate the state, compare this to the requested plant behavior, and generate control perturbations that are applied to the adjustable system parameters. For complex dynamical systems, especially those that display chaos, the control perturbations are a nonlinear function of the plant’s state and requested behavior and, therefore, fall in the category of a nonlinear controller. Traditionally, nonlinear controllers require an accurate model of the plant, which often entails substantial effort from expert control engineers and mathematical model builders.

One highly successful alternative that was developed decades ago for controlling chaotic systems is to take advantage of unstable sets that are the backbone of the chaotic system in phase space, such as unstable periodic orbits.389,390 A chaotic system naturally visits these unstable sets, and control perturbations are designed using a linear algorithm that is valid in a local neighborhood of these sets. Controlling other behaviors, however, requires a fully nonlinear controller.

One approach for realizing a fully nonlinear controller is to use machine learning to learn a model of the plant,391 referred to as nonlinear system identification in the control engineering literature. Artificial deep neural networks in a feed-forward geometry are known to be universal approximators of functions (see Sec. IV A) and, hence, should be able to learn how to map measurements and requested state to control perturbations. Here, a multi-layer network of artificial neurons with nonlinear input–output functions is trained by adjusting the network link weights using supervised learning. While there has been good success using this approach, the amount of data needed to train the network can be substantial, making it difficult for the controller to adapt to changes in the plant.

Reservoir computing is a fast and low-data machine learning approach especially well suited for learning models of dynamical systems392 because it is itself a dynamical system, and it holds great promise for controlling dynamical systems. As seen in the lower dashed box of Fig. 7, the reservoir computer consists of an input layer (red squares), a pool of neurons (green dots, the “reservoir”), and an output layer (black squares). The neuron dynamics are described by a differential equation that is driven by a nonlinear function of the signals from the input layer and the output of other neurons in the reservoir and has a simple exponential time constant. Thus, it has short-term memory that can be matched to the plant dynamics. The link weights of the input layer and the internal “reservoir” of neurons are not trained; they are assigned randomly at the outset, and only the weights of the output layer are trained. This dramatically reduces the size of the training data as well as the training computation time. Furthermore, the neural network can perform multiple tasks by combining a single reservoir with different trained output layers. One approach for controlling dynamical systems with a reservoir computer is to train it to learn the inverse of a dynamical system in the presence of control;393 that is, we train it to learn the perturbations required to guide the system to the desired state some time in the future. This approach works well for systems, such as a robotic arm, that display constrained low-dimensional behavior, but a parallel deep architecture appears to be required for controlling complex systems that display chaos.394 The training data required for reservoir-computing inverse control appear to be on the order of 10 000 data points with modest computation time, suggesting that it can be used for real-time adaptation of the controller as the underlying plant changes its dynamics because of non-stationarity or a damage event.
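For illustration, the following minimal sketch implements an echo state network, a simple discrete-time reservoir computer, for one-step-ahead prediction of a noisy scalar signal (hypothetical parameters). Input and reservoir weights stay fixed and random; only the linear readout is trained, here by ridge regression, which is what keeps the data and compute requirements low.

```python
import numpy as np

# Minimal sketch (hypothetical parameters): an echo state network, a discrete-
# time reservoir computer. Input and reservoir weights are fixed and random;
# only the linear output layer is trained (by ridge regression) to predict
# the next value of a scalar time series one step ahead.

rng = np.random.default_rng(0)
N, T_train, washout = 200, 2000, 100

W_in = rng.uniform(-0.5, 0.5, size=N)            # fixed input weights
W = rng.standard_normal((N, N)) / np.sqrt(N)     # fixed reservoir weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius below 1

u = np.sin(0.3 * np.arange(T_train + 1)) + 0.1 * rng.standard_normal(T_train + 1)

# Drive the reservoir with the input signal and collect its states.
states = np.zeros((T_train, N))
r = np.zeros(N)
for t in range(T_train):
    r = np.tanh(W @ r + W_in * u[t])
    states[t] = r

# Train only the readout: ridge regression from states to the next input value.
X, y = states[washout:], u[washout + 1 : T_train + 1]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)

pred = X @ W_out
print(f"one-step prediction error: {np.mean((pred - y) ** 2):.2e}")
```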

An open question is whether the data requirements can be reduced further so that a small microprocessor typically found on internet-of-things devices can be used to retrain the controller. Our recent work395 that reformulates the reservoir computer as delay lines of the measured plant signals followed by a nonlinear output layer may be promising for this application because it reduces the amount of training data by a factor of ten or more. However, it is not yet clear whether this new approach gives up some adaptivity. We are working on extensions of this work to balance the desire for fast training with wide adaptivity.
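The delay-line reformulation can be sketched as follows: the recurrent reservoir is replaced by time-delayed copies of the measured signal, followed by a nonlinear (here quadratic) readout trained by ridge regression. The number of taps, the feature set, and the toy task are illustrative assumptions, not the exact construction of Ref. 395.

```python
import numpy as np

def features(x: np.ndarray, k: int = 4) -> np.ndarray:
    """Stack k delay taps of x plus all their quadratic products."""
    n = len(x) - k
    lin = np.column_stack([x[i:i + n] for i in range(k)])      # delayed copies
    quad = np.column_stack([lin[:, i] * lin[:, j]
                            for i in range(k) for j in range(i, k)])
    return np.column_stack([np.ones(n), lin, quad])

rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 3000)
x = np.sin(t) + 0.01 * rng.normal(size=t.size)   # toy measured plant signal

k = 4
Phi = features(x, k)                 # nonlinear readout features from delays
y = x[k:]                            # one-step-ahead prediction target
W = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(Phi.shape[1]), Phi.T @ y)
print("train MSE:", float(np.mean((Phi @ W - y) ** 2)))
```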

Rapid and large-scale collective action is required to enter sustainable development pathways in coupled human–environment systems safely away from dangerous tipping elements396 (also, see Sec. V E). The question, however, of how collective or cooperative behavior—in which agents seek ways to improve their welfare jointly—emerges is unresolved.397 Evolutionary game theory has produced a sound equation-based analytical understanding of the mechanisms for the evolution of cooperation.398 Yet, this was primarily done with highly simplified models, lacking environmental context and cognitive processes.399 These elements are the center of artificial intelligence and cognitive neuroscience research,400,401 which only recently emphasized the need for developing cooperative intelligence.402,403 Moreover, analyzing systems composed of multiple intelligent agents typically requires expensive computer simulations, which are not straightforward to understand.404–407 Thus, little is known about how cooperative behavior emerges from and influences a collective of individually intelligent agents in complex environments.

There is a unique opportunity for adaptivity in non-linear dynamical systems to help solve this challenge. Based on the link between evolutionary game theory and reinforcement learning,408,409 we can model a collective of reinforcement learning agents as a dynamical system. Doing so provides improved qualitative insight into the emerging collective learning dynamics,410 enabling equation-based analytical tractability of agent-based simulations.

Here, reinforcement learning is the central adaptive mechanism (cf. “Reinforcement learning is direct adaptive optimal control”;62 also, see Sec. II C). Reinforcement learning is a trial-and-error method of mapping observations to actions in order to maximize a numerical reward signal. The challenge is that those actions can change the environment’s state, and rewards may be delayed. Reinforcement learning is not only an artificial learning algorithm,401 it also has wide empirical support from neuroscience,153,411 psychology,412 and economics.413–416 It is, therefore, ideally suited to model coupled human-nature systems.

In their seminal work, Börgers and Sarin showed how one of the most basic reinforcement learning update schemes, cross-learning,413 can converge to the deterministic replicator dynamics of evolutionary game theory.417 The relationship between the two fields is as follows: one population with a frequency over phenotypes in the evolutionary setting corresponds to one agent with a frequency over actions in the learning setting.408 Since then, this analogy has been extended to other reinforcement learning variants, such as stateless Q-learning,418,419 regret-minimization,420 and fictitious play.421 Of particular relevance to modeling coupled human-nature systems is the dynamic formulation of the general and widely used class of temporal-difference learning,422 which is able to learn in changing, stateful environments.
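As a minimal illustration of this correspondence, the following sketch integrates the replicator dynamics, the deterministic limit of cross-learning,417 for a symmetric two-action game; the Prisoner's Dilemma payoffs and the Euler step size are illustrative choices.

```python
import numpy as np

A = np.array([[3.0, 0.0],     # payoff of cooperate against (C, D)
              [5.0, 1.0]])    # payoff of defect against (C, D)

x = np.array([0.9, 0.1])      # initial frequencies of actions (C, D)
dt = 0.01
for _ in range(5_000):
    f = A @ x                      # expected payoff of each action
    x += dt * x * (f - x @ f)      # replicator equation
print("long-run frequencies (C, D):", x.round(3))   # defection takes over
```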

Typically, the learning dynamics are derived by mathematically separating the timescales of two processes: the interaction with the other agents and the environment, and the adaptation of the agents’ policies to gain more reward over time.423 Under a complete separation of timescales, the dynamics become deterministic. One can understand such learning dynamics as an idealized model of the multi-agent learning process, in which agents learn as if they had a perfect model of the current environment, including the other agents’ current behavior.424

This learning-dynamic approach offers a formal yet practical, lightweight, and deterministically reproducible way to uncover the principles of collective cooperation emerging from intelligent agents in changing environments. We briefly highlight three examples. First, it was found that, in contrast to static environments, no social reciprocity is required for cooperation to emerge in changing environments.425 The individual attitude of how much the agents care for the future can alone shift the setting from a tragedy of the commons to a comedy, where agents predominantly learn to cooperate. However, for this mechanism to work, the consequences of an environmental collapse must be sufficiently severe. Second, another work showed how the agents’ irreducible uncertainty about the actual environmental state can induce a tipping point toward mutually high-rewarding cooperation; this holds, however, only when all agents are equally uncertain about the environment.426 The last example highlights how the same temporal-difference learning dynamics can be used to model agents that learn to react not only to their physical but also to their social environment, which is likewise a pathway to mutually high-rewarding cooperation.427

Such learning-dynamic studies focus on understanding the underlying principles of collective cooperation from intelligent agents in complex environments. Therefore, these models are reduced as much as possible to capture only the most essential features. However, evidence from deep multi-agent reinforcement learning studies shows that sustainable and cooperative behavior can likewise emerge from intelligent agents in high-dimensional environments.428–430 

The advantage of the learning-dynamics approach is that it opens up all the tools of dynamical systems theory to the study of collective learning; for instance, the learning dynamics have been found to exhibit multiple dynamic regimes, such as the convergence to fixed points, limit cycles, and chaos,419,422 critical transitions with a slowing down of the learning processes at the tipping point,426 and the separation of the learning dynamics into fast and slow eigendirections.426 

Future work in many directions is required to build this approach of adaptivity in non-linear dynamical systems into a new way of modeling human–environment interactions and socio-economic systems (see Sec. V). First, the presented learning dynamics need to become applicable to systems with many agents, using various types of mean-field approaches.431–433 Second, the learning dynamics need to consider the effect of intrinsic noise, which can substantially alter their collective behavior427,434 (see also Sec. II B). Third, the learning dynamics need to be advanced to model more refined notions of cognition, such as representation learning, learning and using intrinsic world models, and intrinsic motivations (see also Sec. III). A social-ecological resilience paradigm of multi-agent environment interactions, in turn, can benefit such endeavors.435,436

Over billions of years of evolution, motile micro-organisms have developed complex strategies to survive and thrive in their environment by integrating three components (Fig. 8): sensors, actuators, and information processing. Their biochemical networks and sensory systems are optimized to excel at specific tasks, such as climbing chemical gradients,438 coping with ocean turbulence,439 and efficiently foraging for food.440,441 They have also acquired complex strategies to interact with their environment and with other micro-organisms, leading to the emergence of macroscopic collective patterns442 (also, see Sec. IV D). These patterns are driven by energy conversion from the smallest to the largest scales and permit micro-organisms to break free of some of their physical limits; for example, dense systems of bacteria develop “active turbulence” at length scales where only laminar flows are expected from the underlying physical laws.443,444

Fig. 8.

Active matter with embodied intelligence. Bacteria, sperm cells, and ants are biological examples of active particles with embodied intelligence. They feature intelligent behaviors that permit them to survive and thrive in their ecosystem thanks to the integration of sensors, actuators, and information processing. Their behaviors also adapt to complex environments (e.g., foraging for food), and their dynamic interactions lead to collective emerging behaviors (e.g., swarming and hunting). The challenge is now to draw inspiration from nature to create microscopic artificial active particles with embodied intelligence that mimic these adaptive and dynamic emerging behaviors. Adapted from Cichos et al., Nat. Mach. Intell. 2, 94–103 (2020). Copyright 2020 Springer Nature.437 


Both scientific and technological reasons are driving the quest toward biomimetic artificial active matter. Scientifically, biomimetic systems capable of harnessing energy and information flows are ideal model systems to investigate and test physics far from equilibrium, which is one of the greatest challenges for physics in the twenty-first century. Technologically, biomimetic active particles hold tremendous potential as autonomous agents for healthcare, sustainability, and security applications, for example, enabling the targeted localization, pick-up, and delivery of microscopic objects in bioremediation, catalysis, chemical sensing, and drug delivery.445

In the last two decades, the active-matter research field has tried to replicate the evolutionary success of micro-organisms in artificial systems.445 Researchers have replicated the actuators by developing artificial active particles that extract energy from their environment to perform mechanical work.446,447 Albeit to a lesser extent, they have also been able to replicate the sensors by making these active particles adjust their motion properties (e.g., their speed) to chemical, thermal, or illumination stimuli.448,449 However, these artificial particles are still largely incapable of autonomous information processing, which dramatically limits the potential of artificial microscopic active matter to provide scientific insight and technological applications.437 Thus, the active-matter research field is now confronted with several open challenges to create truly autonomous active particles.

1. Make active particles capable of autonomous information processing

Currently available active particles lack the complexity necessary for autonomous information processing. In fact, active particles are still rather simple in shape and behavior.445 They are often Janus microspheres or microrods with different materials on their two sides, which can self-propel and sterically interact with each other. This physical simplicity is a consequence of the relative simplicity of the employed design and fabrication processes, which in turn limits the range of behaviors achievable by the active particles. Despite this simplicity, the study of active particles has already led to major breakthroughs, such as understanding how plankton copes with turbulence439,450,451 and programming self-assembling robotic swarms.452,453

Motile micro-organisms exhibit more powerful and flexible strategies to survive and thrive in their environment. Even the simplest motile bacteria have evolved intelligent behaviors by following powerful adaptive strategies encoded in their shape, biophysical properties, and signal-processing networks: not only can they extract energy from their environment to move and interact with other bacteria, but they can also extract information by sensing their environment and adjust their behavior accordingly.438 The challenge is now to make active particles capable of autonomous information processing, as living motile micro-organisms are. This can be addressed by pushing the boundaries of design and microfabrication techniques to build microscopic active particles with embodied intelligence (microbots).454 These microbots will be able to sense their environment, to differentiate stimuli, and to adapt their behavior toward determinate goals.

2. Optimize the behavioral strategies adopted by individual active particles

The behavioral strategies that can be adopted by active particles are still very limited. There have been several studies on the behaviors of active particles in response to the properties of their environment;445,455–457 for example, the presence of periodic arrays of static obstacles alters the preferential swimming direction of self-propelling active particles, a fact that permits one to sort microswimmers on the basis of their swimming style.456 However, these behaviors are still rather simple and rely on in-built properties of the active particles that cannot be changed at will or adapted to different environmental conditions. This is a consequence of their limited capability of gaining information about their environment and reacting accordingly.

More complex behaviors have been achieved using micro-organisms instead of artificial active particles; for example, the presence of obstacles in the environment has made it possible to alter the pathway toward the formation of multicellular colonies of bacteria.458 Also, genetically modified bacteria whose speed is controllable by light have been arranged into complex and re-configurable density patterns using a digital light projector.459,460 The optimal behaviors in complex environments are often not obvious; for example, consider the foraging problem,440,441 where an active particle performs a blind search to catch sparse targets. When the environment presents no spatial features, the number of targets caught is maximized by a Lévy-search strategy440,441 (even though this is still an active research field461). Surprisingly, in a porous medium, the optimal strategy instead mixes Lévy runs and Brownian diffusion.462 A toy comparison of the two search modes in a featureless environment is sketched below.
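The following toy experiment contrasts a Lévy walk with a Brownian walk under the same travel budget in a featureless periodic box; the box size, target density, detection radius, and the truncated Pareto run lengths are illustrative assumptions, not the calibrated models of Refs. 440, 441, and 462.

```python
import numpy as np

rng = np.random.default_rng(1)
L, n_targets, radius = 100.0, 200, 1.0
targets = rng.uniform(0.0, L, (n_targets, 2))

def search(levy: bool, budget: int = 20_000) -> int:
    """Count distinct targets found within a fixed travel budget."""
    pos = np.array([L / 2, L / 2])
    found = np.zeros(n_targets, dtype=bool)
    run_left = 0
    for _ in range(budget):                # one unit of travel per iteration
        if run_left == 0:                  # pick a new direction
            theta = rng.uniform(0.0, 2.0 * np.pi)
            heading = np.array([np.cos(theta), np.sin(theta)])
            # heavy-tailed run lengths for Levy, unit runs for Brownian
            run_left = int(min(1.0 + rng.pareto(1.0), 50.0)) if levy else 1
        pos = (pos + heading) % L          # periodic boundaries
        found |= np.linalg.norm(targets - pos, axis=1) < radius
        run_left -= 1
    return int(found.sum())

print("Levy walk, targets found:    ", search(levy=True))
print("Brownian walk, targets found:", search(levy=False))
```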

The challenge is now to discover, understand, and engineer intelligent behavioral strategies that can be autonomously adopted by active particles with embodied intelligence. This can be addressed by designing and engineering the behavior of microbots to enable them to autonomously perform directed tasks in complex environments, such as efficient navigation, target localization, environment monitoring, and conditional execution of actions.

3. Optimize the interactions between active particles

Currently, active particles cannot communicate with each other beyond some simple physical interactions. Natural systems, such as swarms of midges, schools of fish, and flocks of birds, have evolved powerful sensing capabilities to gain information about their environments and to communicate.463,464 The underlying behavioral rules are often hard to identify.437,442,465,466 Active-matter studies provide the testing grounds for new non-equilibrium descriptions, which are by necessity often computational.467 They are based either on hypothesized mechanistic models for local interactions,442 on coarse-grained hydrodynamic approximations,468 or on basic fluctuation theorems.469 The question is often how local energy input and physical interactions determine the macroscopic spatiotemporal patterns. Answers may be sought, e.g., by computational techniques.470–473

Differently from computational studies, most active-matter experiments rely on very simple steric and short-range physical interactions. Even these simple interactions can lead to interesting complex behaviors and self-organization, whose onset is often observed in artificial systems when the energy input rises above a threshold density and drives a phase transition to an aggregated state. An example of such behaviors is the formation of “living crystals,” which are metastable clusters of active particles.474,475

Much more interesting behaviors are observed when the interactions between the active particles can be tuned at will. This can be achieved by externally imposing interaction rules on the active particles; for example, external feedback control loops have been used to create information-based individual dynamical behaviors476 or interactions477 between active particles, which explicitly depend on information about the positions of other particles. Such complex forms of interaction can also be achieved using macroscopic robots. In fact, the field of robotics can serve as a major source of inspiration for the development of active matter at the microscale;453,478,479 for example, robots (5 cm in diameter) have been programmed to respond to sensory inputs with a delay, showing that, by controlling the delay, one can control the aggregation vs dispersion of the robots.480–482

The challenge is now to identify and engineer optimal interaction rules that can be embodied in active particles interacting with other particles and with their environment. This can be addressed by programming microbots with embodied interaction strategies beyond the simple steric and short-range interactions employed by current active particles. This will permit researchers to realize microscopic swarms of artificial active particles capable of collective intelligent behaviors and to engineer microscopic ecosystems where multiple species of microbots and particles interact.

In this section, we provide a perspective on adaptivity from socio-economic systems, including topics, such as the conception of modern power grids, adaptive social interactions, and the role of adaptive mechanisms in epidemiological and climatic models.

Network epidemiology is a prime example of adaptive networks at work. Many infectious diseases spread via direct contacts. These contacts can be captured by social, transportation, and other logistic networks. They provide a mathematical framework to formalize the interaction of individuals (humans or animals) and, hence, potential paths of disease transmission. Locally, e.g., within a population or between groups of individuals, the dynamics of pathogens are often described by compartment models, such as the widely used susceptible–infected–recovered model originally introduced by Kermack and McKendrick.483 Adaptivity must be considered if the state of the networked system, say, the number of infected, triggers an adjustment of the network structure with the aim of mitigating an outbreak and containing the disease. This closes the mutually influencing feedback loop of the dynamics on and of networks as depicted in Fig. 9: (i) The network structure governs the spreading of the disease (dynamics). (ii) In turn, the current state of the system leads to changes in the structure of interactions (networks).

Fig. 9.

Schematics of the interplay between dynamics and networks.


The dynamics-induced changes to the network are often akin to control schemes that involve minimizing a goal function to reach a target state.9 Similarly, non-pharmaceutical containment protocols, which demand a reduction of social contacts or restriction of movement, can be based on, for instance, the number of new infections. Prominent cases where such applications of adaptive networks have been successfully implemented include the H1N1 pandemic in 2009,484,485 the Ebola epidemic in 2014,486 and, of course, the ongoing SARS-CoV-2 pandemic.487–490 In these examples, one prominent path of transmission was the global airline transportation network, which has been accounted for in many studies.491–493

Extensive numerical simulations are able to explore possible interventions and quantify their impact. Key findings include the limited delay of spreading achieved by international travel bans, as demonstrated for Ebola in 2014,486 and the feasibility of zero-COVID or low-incidence strategies.494,495 Such simulations are also able to provide insight into less-than-optimal adherence to containment measures.490 In any case, these studies are valuable tools for policy makers to reach evidence-based and data-driven decisions and to inform the public about their potential impact.

The concept of adaptive networks for the study of the epidemic spreading of infectious diseases has a long history and predates the most recent public health emergencies of international concern. Rewiring of susceptibles to avoid contact with the infected has been studied, for instance, by Gross et al.98,99 They employed a low-dimensional compartment model, which allowed an exhaustive bifurcation analysis, and identified dynamical patterns, such as first-order transitions and hysteresis. In short, as long as a node remained healthy, its network neighborhood evolved gradually. However, the moment an infection occurred, the degree of the node dropped rapidly, and the node found itself isolated until recovery. Note that due to the small-worldness of many social networks,496 there could be situations where rewiring would deteriorate the situation because it might create new shortcuts through the network that could, unintentionally, bring nodes closer to other, distant regions of infection.

Besides travel restrictions, surveillance and monitoring of incidence numbers are key ingredients for the rapid identification of an outbreak. For that purpose, the introduction of sentinel nodes on temporal networks has proven insightful, as demonstrated in the case of animal diseases.90,497 These nodes should be monitored because their position in the network allows early detection and reliable identification of the origin of an outbreak for many different initial conditions. Therefore, they provide helpful and detailed clues as to where the network could best be adjusted. Similarly, screening a fraction of incoming patients has been shown to be effective as a potential control measure against nosocomial infectious diseases and their spread via hospital-referral networks.498,499 The impact of a rapid response was exemplified during the early stages of the COVID-19 pandemic, where, in mainland China, containment policies effectively depleted the susceptible population and resulted in a subexponential growth of infection cases.488 Upon successful containment, restrictions can be relaxed again, and the network returns to its original state.

Adaptive networks are a special case of time-varying or temporal networks, where every edge has a time stamp and is active for a certain amount of time.500,501 In epidemiology, in particular, the sequence of contacts is crucially important. Only time-respecting paths contribute to the transmission of a pathogen and the spreading of a disease. Any interaction with contacts/neighbors in the social network before their infection carries no risk of transmission. Luckily, concepts, such as network controllability,502 can be easily extended for temporal and multiplex networks.503,504 From a mathematical point of view, the temporal nature of networks—including changes of their structure due to adaptation—can be implemented by time-dependent adjacency matrices, which give rise to modeling frameworks for the spreading of epidemics, such as the individual-based and pair-based models.505–508 
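As a minimal illustration of time-respecting paths, the following sketch propagates a susceptible–infected process over a time-stamped contact list, where an edge can transmit only after its source node has been infected; the random contact list and the transmission probability are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_nodes, n_contacts, beta = 50, 2000, 0.5
# each contact is a time-stamped undirected edge (t, i, j)
contacts = sorted(
    (int(rng.integers(0, 1000)),
     *(int(v) for v in rng.choice(n_nodes, 2, replace=False)))
    for _ in range(n_contacts)
)

infected_at = {0: 0}                       # seed node 0 is infected at time 0
for t, i, j in contacts:
    for src, dst in ((i, j), (j, i)):
        if (src in infected_at and infected_at[src] <= t
                and dst not in infected_at and rng.random() < beta):
            infected_at[dst] = t           # transmission respects time order
print(f"{len(infected_at)} of {n_nodes} nodes reachable via time-respecting paths")
```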

To sum up, adaptive networks play a central role not only for realistic investigations of spreading dynamics but can help to study and design interventions for disease containment, mitigation, and eradication. With a further increase of data availability (often in real time), models of network epidemiology become more and more realistic and informative in their predictive power. Future challenges include the integration of purely epidemiological models and a mathematical framework for the dynamics of social behavior and opinion formation. This would lead the way for a holistic description of disease spreading on adaptive networks.

In the context of dynamical systems on networks, one manifestation of adaptivity is in so-called adaptive or coevolutionary networks.99,509

A network is a collection of entities together with a relation between these entities, which are generally represented as nodes and links, respectively. In a dynamical setting, every node is a dynamical system that depends not only on its internal dynamics but also on the dynamics of its neighborhood in the network, i.e., the set of nodes it is linked to. Constitutive for an adaptive network is the idea that the topology of the network and, therefore, the interactions between the individual nodes are not static but rather dynamic themselves, coupled to the dynamics of the nodes. As such, we have a closed feedback loop in which the topology of the network influences the dynamics of the nodes and the state of the nodes influences the dynamics of the topology.99 Combining the so-called dynamics on the network with the dynamics of the network in that way is what makes the system fully adaptive (see Fig. 9 in Sec. V A).

To make this more concrete, let us consider the paradigmatic example of the adaptive voter model,511,512 which is an extension of classical models of opinion or consensus formation.513,514 In this model, one considers a population in which every individual subscribes to one of two contradictory opinions and in which the social relationships are encoded in some social network. As for the dynamics, one assumes that at each time step, individuals either adapt their opinion to the opinion of individuals in their neighborhood or that they break off their relationship with individuals of opposing opinions and rather connect with others of the same opinion. While the former corresponds to the dynamics on the network, the latter corresponds to the dynamics of the network. Depending on the relative strength of these two processes, in expectation, the population eventually reaches either a dynamic equilibrium characterized by non-vanishing prevalence of pairs of connected individuals with opposing opinions or a static equilibrium where the underlying social network fragments so that in every component, only one opinion prevails.511,515,516
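A bare-bones simulation of the adaptive voter model can be sketched as follows; the network size, edge count, and rewiring probability are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
N, M, phi = 200, 400, 0.4                  # nodes, edges, rewiring probability
opinion = rng.integers(0, 2, N)
edges = [tuple(rng.choice(N, 2, replace=False)) for _ in range(M)]

for step in range(200_000):
    k = int(rng.integers(M))
    i, j = edges[k]
    if opinion[i] == opinion[j]:
        continue                           # only discordant edges are active
    if rng.random() < phi:                 # rewire: dynamics OF the network
        same = np.flatnonzero(opinion == opinion[i])
        same = same[same != i]
        if same.size:
            edges[k] = (i, int(rng.choice(same)))
    else:                                  # adopt: dynamics ON the network
        opinion[i] = opinion[j]
    if step % 5000 == 0 and all(opinion[a] == opinion[b] for a, b in edges):
        break                              # consensus or fragmented state
print("discordant edges left:", sum(opinion[a] != opinion[b] for a, b in edges))
```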

Another paradigmatic example besides the adaptive voter model is that of an adaptive susceptible–infected–susceptible (SIS) epidemic.98 One considers again a population on an underlying social network that encodes the relationships between individuals. Every individual is then exposed to an SIS epidemic, meaning that individuals start off as susceptible, become infected at some rate when individuals in their neighborhood are infected, and upon recovery at another rate are susceptible again.517 In addition to these epidemic transitions, one allows, similar to the adaptive voter model, that susceptible individuals can break off the relationship with infected individuals and instead connect to a susceptible individual.98 Now, for the SIS epidemic and in expectation, it is well known that at a critical infection rate, the system exhibits a supercritical transcritical bifurcation, beyond which it eventually reaches an endemic dynamic equilibrium as opposed to the epidemic dying out. Due to the adaptivity, however, this bifurcation can turn from supercritical to subcritical, the consequence being that a region of bistability emerges and the transition to an endemic equilibrium is no longer continuous.98,518 A minimal simulation of this model is sketched below.
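The following discrete-time simulation is in the spirit of this adaptive SIS model;98 all rates and the network size are illustrative assumptions rather than values from the original study.

```python
import numpy as np

rng = np.random.default_rng(11)
N, M = 500, 1500
p_inf, p_rec, p_rew = 0.06, 0.02, 0.2      # infection, recovery, rewiring
state = np.zeros(N, dtype=bool)            # False = susceptible, True = infected
state[rng.choice(N, 25, replace=False)] = True
edges = [tuple(rng.choice(N, 2, replace=False)) for _ in range(M)]

for _ in range(500):                       # discrete-time sweeps
    new_state = state.copy()
    for k, (i, j) in enumerate(edges):
        if state[i] == state[j]:
            continue                       # only S-I links are active
        s = i if state[j] else j           # the susceptible endpoint
        if rng.random() < p_rew:           # susceptible rewires away (adaptive)
            sus = np.flatnonzero(~state)
            sus = sus[sus != s]
            if sus.size:
                edges[k] = (s, int(rng.choice(sus)))
        elif rng.random() < p_inf:         # otherwise, infection may pass
            new_state[s] = True
    recovered = state & (rng.random(N) < p_rec)
    new_state[recovered] = False           # infected recover independently
    state = new_state
print("endemic fraction infected:", round(float(state.mean()), 3))
```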

While these examples both illustrate the idea behind adaptive or coevolutionary networks in the sense that dynamics on the network and of the network depend on each other, they also highlight the fact that adaptivity can induce fundamental changes in the phenomenology. This suggests that, when developing models of the natural world, it can be paramount to take adaptive dynamics into account.

Recognizing the importance of adaptive networks, many studies have focused on different aspects of the phenomenology that comes with adaptivity or have extended existing models by introducing adaptivity. Hence, in the following, we highlight some works from the last decade, without any claim to comprehensiveness.

In relation to the adaptive voter model that we have introduced before, it has been reported that if one considers directed as opposed to undirected networks in an adaptive voter model, fragmentation might occur far below the critical value due to the formation of self-stabilizing structures.519,520 Moreover, there has also been work extending the model to allow for a continuum of opinions (see also Sec. V C), which in many cases is a more realistic assumption, demonstrating the emergence of communities with diverse opinions rather than leading to fragmentation.521,522

Further investigations in the adaptive SIS epidemic and adaptive epidemics, in general, have led to studies of the bifurcation behavior523 and the epidemic threshold itself524 as well as the dynamics near this threshold with an emphasis on early-warning signs.525 In the context of a pandemic (see also Sec. V A), adaptive epidemics have also been studied to assess the relationship between the containment strategies of quarantining and social distancing.526 Besides rewiring as a mechanism for adaptivity,98,527 others have considered network growth due to birth and death processes,528 with death occurring in response to infection, as well as the activation and deactivation of links following an adaptive strategy.529

Apart from the adaptive voter model and adaptive epidemics, another frequently studied model system is that of coupled phase oscillators110,530 with adaptive coupling strengths (see also Secs. II A and II D). The main feature of interest in these systems is the emergence of fully or partly synchronous states. Importantly, it has been shown that certain adaptivity rules promote explosive transitions to synchrony.531 Moreover, others have reported that adaptivity can be used to control cluster synchronization9 or that slow adaptation leads to the emergence of frequency clusters.34,35

In recent years, there has been an increasing interest in generalizing the notion of networks to higher-order networks, i.e., simplicial complexes or, more generally, hypergraphs. Instead of only dyadic relations, these structures can also capture higher-order interactions. Consequently, evolutionary games532 as well as consensus formation in the form of an adaptive voter model have been investigated on simplicial complexes and hypergraphs.533,534 Due to their much more complex topology, these structures promise a much richer phenomenology while at the same time being considerably more complicated to handle, and it will be interesting to see what the coming years bring.

Online communication can be understood as an adaptive, nonlinear system, all the more so because it increasingly involves many-to-many interactions and is, thus, a highly coupled system. In my research on self-organized online discourse, I interpret adaptivity as the process of changing social systems through external influences, such as technological developments. Information technology has made various aspects of our lives more dynamic, both in spatial and temporal dimensions. Connections with others can be made across spatial and sociodemographic constraints, and messages can be recorded and spread across the globe in seconds.

However, these increased dynamics and the resulting adaptations do not unfold in a value-neutral way: As old boundaries are overcome, new ones emerge, if only because of the finite amount of available attention resulting from very simple limits on human processing capabilities, but also because of the implementation and commercial incentive structure of the technology. Here, I will present two mechanisms we have recently proposed for how social systems adapt to these changes and how online platforms shape this process in line with commercial interests, since there is no apparent, neutral status quo in which social systems would evolve. To this end, I want to focus on two key questions that an individual decision maker faces online and their downstream consequences for macroscopic dynamics and the shape of public discourse.

First, connectivity is increasing through online platforms, and new connections can be and are easily made. Since the famous six degrees of separation535 in the U.S. social network, networks seem to have become much better connected; Facebook reports 3.5 degrees of separation on its friendship graph.536 Nevertheless, there are consistent reports of segregated, homophilic network structures on nearly all online platforms, as well as related trends of increasing polarization (see Ref. 537 for a recent overview). The mechanism that might resolve this apparent paradox may lie behind the question of whether we change our opinions according to our friends or whether we change our friends according to our opinions. In classical models of opinion dynamics, the network structure is fixed, and the core assumption is a constructive process of opinion change in a social interaction.538 In the long run, this process would predict convergence to a global consensus opinion with increasing connectivity; only under the assumption of disconnected networks or limited trust are disconnected opinions conceivable, let alone an outward or distancing movement of these clusters possible. We have recently proposed an alternative mechanism that describes the dynamics of an agent’s opinion $o_i(t)$,539

$\dot{o}_i = -o_i + K \sum_{j=1}^{N} A_{ij}(t) \tanh(\alpha o_j),$

(21)

which describes a process of mutual reinforcement of opinions within groups of shared stance [i.e., if $\mathrm{sgn}(o_i) = \mathrm{sgn}(o_j)$]. The additive term $\tanh(\alpha o_j)$ moves both opinions in the same direction if they have the same sign and moves them toward the neutral state 0 if they have different signs. Who interacts with whom is governed by the time-dependent adjacency matrix $A_{ij}(t)$, which has a non-zero entry only if an interaction happens between $i$ and $j$ at time $t$. Its structure dynamically adapts to the changing opinions, hence co-evolving with them and following a probability distribution ruled by homophily,540

$p_{ij} = \frac{|o_i(t) - o_j(t)|^{-\beta}}{\sum_{j} |o_i(t) - o_j(t)|^{-\beta}},$

(22)

a term that might be partly driven by algorithmic recommendations suggesting like-minded others as interaction partners on social media. This combination helps to explain the potential emergence of growing polarization even under increasing connectivity [i.e., even if the average path length of $A_{ij}(t)$ decreases], at least for controversial topics (i.e., high $\alpha$). For more details, see Ref. 539; for an extension to multi-dimensional opinion spaces, see Ref. 541.
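A direct Euler integration of Eqs. (21) and (22) can be sketched as follows; the population size, the number of interaction partners per step, the regularization of the homophily weights, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
N, K, alpha, beta, m, dt = 100, 3.0, 3.0, 2.0, 10, 0.01
o = rng.normal(0.0, 0.5, N)                # initial opinions around neutral 0

for _ in range(2000):
    A = np.zeros((N, N))                   # interactions of this time step
    for i in range(N):
        w = (np.abs(o[i] - o) + 1e-6) ** (-beta)   # homophily weights, Eq. (22)
        w[i] = 0.0
        partners = rng.choice(N, m, replace=False, p=w / w.sum())
        A[i, partners] = 1.0
    o += dt * (-o + K * A @ np.tanh(alpha * o))    # Eq. (21), Euler step
print("opinion range:", float(o.min()), "to", float(o.max()))
```

For controversial topics (high $\alpha$), the opinions in this sketch typically split into two reinforcing camps rather than converging to consensus.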
Second, the increasing availability of information poses a challenge to the allocation of attention. How, then, does public discussion adapt to the increasing speed of available information? To describe this process, we quantified and modeled the dynamics of public interest in individual topics across various domains.542 The main result can be described as an acceleration of the dynamics of public interest in a topic and a narrowing of the amount of time spent on each topic, while the overall amount of attention spent per topic remained stable over the years. For a mechanistic understanding of these dynamics, we modeled them as an adaptation of a Lotka–Volterra process for species competing for a common resource, with finite memory,

$\dot{a}_i = r_p a_i \left( 1 - r_c \int_{-\infty}^{t} e^{-\alpha (t - t')} a_i(t')\, dt' - c \sum_{j=1, j \neq i}^{N} a_j \right),$

(23)

where $a_i(t)$ describes the dynamics of the collective attention or public interest in a topic $i$. It depends on a growth term $r_p a_i$ with an exponential growth rate $r_p$ if it is undisturbed. However, two terms slow and eventually reverse the growth process: $r_c \int_{-\infty}^{t} e^{-\alpha (t - t')} a_i(t')\, dt'$, which grows proportionally with the attention to the topic itself and exhausts the available resources at rate $r_c$, and $c \sum_{j=1, j \neq i}^{N} a_j$, which describes the constantly ongoing competition with all other topics $j$ for that common resource. We believe this captures the essence of the idea of a competitive attention economy originally formulated in Ref. 543, and it describes the empirical observations well. It also captures the economic incentive to produce information faster in this competitive situation in order to gain an advantage in attracting public interest.
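Equation (23) can be integrated numerically by introducing an auxiliary memory variable $m_i(t) = \int_{-\infty}^{t} e^{-\alpha(t-t')} a_i(t')\, dt'$, which obeys $\dot{m}_i = a_i - \alpha m_i$; the following Euler sketch uses illustrative parameter values, not those fitted in Ref. 542.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 20                                      # number of competing topics
rp, rc, c, alpha, dt = 1.0, 2.0, 0.05, 0.5, 0.01
a = rng.uniform(0.0, 0.05, N)               # attention to each topic
memory = np.zeros(N)                        # m_i(t), the fading-memory integral

for _ in range(5000):
    memory += dt * (a - alpha * memory)     # dm_i/dt = a_i - alpha * m_i
    competition = a.sum() - a               # sum over all other topics
    a += dt * rp * a * (1.0 - rc * memory - c * competition)   # Eq. (23)
    a = np.clip(a, 0.0, None)               # attention cannot go negative
print("most attended topic now:", int(a.argmax()))
```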

In summary, I believe that these mechanisms may capture two adaptive responses of social systems to increasing interconnectedness and information availability. Both are driven by fundamental limits of human cognition, namely, the ability to maintain only a certain number of social contacts and to process only a finite amount of information in parallel, as well as by the economic incentives to capture these limited resources. Future research in this area should aim to put these assumptions about the mechanisms of social dynamics on an empirical, probably experimental, footing to understand the causal drivers of how social systems adapt to changes in our world, e.g., technological and political changes.

The important role played by electricity in daily life and activities makes modern society seriously dependent on the reliable functioning of the power grid. Moreover, because the power grid is interconnected with other societal networks and systems, such as transportation,544 telecommunication,545 and health systems,546 it is of great importance that the power grid adjusts itself to changing conditions or, indeed, mitigates any internal and external perturbations and fluctuations, as generally discussed in Sec. II A for dynamic networks. Any failure in the power grid can quickly spread not only within the grid itself but can also set off a chain of failures, as a domino effect, in other social networks and systems. During the energy transition toward a CO$_2$-neutral power grid, fossil fuel sources are to be replaced by renewable energy sources, such as wind, sunlight, water, and geothermal heat. The need for rapid CO$_2$ reduction is comprehensively discussed in Sec. V E. Among renewable energy sources, wind and solar power are inherently time-varying. This means that the constant generator power in Eq. (11) in Sec. II D will be replaced by irregular, hardly predictable wind and solar power, which may constitute a serious threat to power grid stability. Furthermore, the pattern of electricity consumption is changing due to the exploitation of green energy sources in other sectors, such as transportation547 and heating.548 Therefore, to be able to plan and operate future-compliant electricity grids with a continuously increasing share of renewable energy sources, it is vital to recognize the new origins of fluctuations on both the supply and demand sides, along with their statistical and stochastic characteristics, in order to adapt the power grid or mitigate these fluctuations and, thus, maintain the energy balance in the grid.

The identification of these characteristics, along with the empirical data, enables us to develop valid data-driven models to describe the underlying system dynamics. Finally, the combination of data-driven models and complex network science enables us to assess the impact of new sources of supply and demand on the current power grid and, therefore, to determine how the power grid structure and control systems should be adapted in the future to keep the energy balance and, consequently, the stability of the system.

In the following, we briefly review some recent works on data analysis and data-driven models, as well as their combination with complex network science, which leads to a deeper understanding of power grid dynamics.

1. Data analysis

Wind and solar power are highly dependent on weather conditions and, therefore, can ramp up or down in just a few seconds. In a power grid with a high share of variable energy sources, these extreme short-time fluctuations influence not only the energy availability but also the stability of the power grid. The analysis of wind and solar power data recorded in different regions around the world demonstrates multiple universal types of variability and nonlinearity on short time scales.549–552 Importantly, even the aggregated variable energy sources of country-wide installations of wind and solar fields show that the data are still non-Gaussian and include intermittent fluctuations.549 Indeed, this is a direct consequence of the long-range correlations of the wind velocity and of the cloud size distribution, which extend to approximately 600 and 1200 km, respectively.553,554 The footprint of these short-time intermittent fluctuations has recently been observed in power grid frequency variations.555

The analysis of highly resolved electricity consumption data of households, which consume 29% of all electricity in the European Union,556 shows that these data are highly intermittent. The intermittent fluctuations of electricity consumption cannot be captured from data with a resolution of 1 h or even 15 min.557–559 The variability of energy sources, along with the uncertainty of electricity consumption, can make it more difficult to balance supply and demand. Therefore, as the share of renewable feed-in increases, a deeper understanding of the dynamics of variable energy sources as well as advanced approaches for balancing demand and supply by load shifting are required.560,561

2. Data-driven models

Identifying the stochastic behavior of the short-time fluctuations of variable energy sources and of electricity consumption allows us to construct a dynamic equation that governs these stochastic processes. The dynamic equation should include two main terms as follows:
$\dot{X}(t) = F(X, t) + G(X, t),$

(24)
where $F(X, t)$ is the deterministic term describing the trend of the stochastic process $X(t)$ (here, a variable energy source or the electricity consumption) over time, and $G(X, t)$ is the stochastic term modeling the extreme fluctuations and, indeed, the non-Gaussianity of the considered process. Equation (24) is known as a stochastic differential equation, which is a non-parametric model. By the term “non-parametric,” we mean that all of the functions and parameters in the model can be found directly from the empirical time series. Recently, the jump-diffusion process562,563 and the superstatistics method564 have been introduced to model short-term fluctuations of variable energy sources and of electricity consumption, respectively. Moreover, in Ref. 564, a data-driven load profile that is consistent with high-resolution electricity consumption data is obtained. This data-driven load profile outperforms the standard load profile used by energy suppliers,565 and it does not require microscopic parameters for consumer behavior, consumer appliances, house infrastructures, or other features that other models depend on.566
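As an illustration of Eq. (24), the following Euler–Maruyama sketch combines an Ornstein–Uhlenbeck drift $F$ with a diffusive-plus-jump stochastic term $G$; all coefficients are illustrative assumptions rather than values fitted to feed-in or consumption data as in Refs. 562–564.

```python
import numpy as np

rng = np.random.default_rng(4)
T, dt = 10_000, 0.01
theta, mu, sigma = 1.0, 1.0, 0.2     # drift F: relaxation toward the mean mu
lam, jump_sigma = 0.5, 0.8           # jump rate and jump amplitude in G

x = np.empty(T)
x[0] = mu
for t in range(1, T):
    drift = theta * (mu - x[t - 1])                       # F(X, t)
    diffusion = sigma * np.sqrt(dt) * rng.normal()        # Gaussian part of G
    jump = rng.normal(0.0, jump_sigma) if rng.random() < lam * dt else 0.0
    x[t] = x[t - 1] + drift * dt + diffusion + jump       # intermittent path
print("sample mean / std:", round(float(x.mean()), 2), round(float(x.std()), 2))
```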

Data-driven models allow us not only to generate time series with statistics identical to the empirical ones but also, by adjusting the parameters in the stochastic models, to consider the response of the power grid and control systems to different circumstances.

3. Complex network science

From a structural viewpoint, the power grid is a complex network consisting of many units and agents that interact in a nonlinear way. Due to economic factors, power grids often run near their operational limits. The nature of renewable energies will add more and more fluctuations to this complex system, raising concerns about the reliability and stability of the power supply.567–569 Therefore, the probability of grid instabilities will increase, which may result in more frequent extreme events, such as cascading failures resulting in large blackouts. Any strategy under discussion, such as upgrading the existing power grid, forming virtual power plants that combine different power sources, introducing new storage capacities, intelligent “smart grid” concepts, etc., will further increase the complexity of the existing systems and has to be based on detailed knowledge of the dynamics of variable energy sources and of consumption. Data-driven models and the generation of data sets that imitate the characteristics of real data allow us to accurately consider the interplay of the network structure with supply and demand fluctuations. This yields deep insight into how the future grid structure and control systems should be designed to mitigate intermittent fluctuations, and it allows the share of variable energy sources in the power grid to be increased with as few restrictions as possible.

The Earth system is a highly complex system with various interactions, including positive and negative feedbacks. Its representation is sometimes even called a horrendogram. However, it is also an open system that interacts with its nearer and farther surroundings. All these properties are crucial for its ability to adapt in response to external as well as intrinsic changes and perturbations.

There are outstanding examples of adaptive behavior in the history of the Earth system: About 66 million years ago, a rather large asteroid struck Earth and formed the Chicxulub impact crater, with a diameter of about 180 km, on the Yucatán Peninsula in Mexico.570 This external shock induced titanic changes on the surface and in the atmosphere, such as megatsunamis, giant wildfires, and a rapid, strong temperature decrease. More importantly, it is now well accepted that it was the main cause of the Cretaceous–Paleogene extinction event, a mass extinction of 75% of the plant and animal species on Earth, including all non-avian dinosaurs. However, it is important to emphasize that the Earth system was not destroyed by this giant event; it adapted and, after some time, reached another stable regime whose global climate was rather similar to the former one.571 Another example of a shock-like but intrinsic event was the Toba supervolcanic eruption about 74 000 years ago in Sumatra.572 It changed the climate drastically and, in particular, induced a strong temperature decrease of 3–5 °C. However, the Earth system again adapted and, via rather large fluctuations, reached a stable climate regime whose global temperature was, however, clearly below the former one.573

There are also recurrent strong influences on the Earth system over broad time scales. On the one end, we have as long-term factors the Milankovitch cycles, which are due to complex variations in the eccentricity, axial tilt, and precession of the Earth’s motion in the solar system, with main components of 41 000 years, 95 000 years, and others. These orbital forcing components have a strong influence on long-range climate dynamics, such as the occurrence of glacials and interglacials. On the other end, recurrent patterns, such as the El Niño Southern Oscillation (ENSO) with a period in the range of 3–7 years, have a powerful impact on the onset and intensity of monsoons and on the formation of extreme climate events. However, the Earth system has been able to adapt to all these recurrent events and operates in stable regimes, which can even change, e.g., switching from glacial to interglacial states.

However, one component of the Earth system has substantially increased its impact in the recent past: humans. The huge amount of greenhouse gas emissions, such as CO$_2$ and methane, is the most striking expression of this tremendous anthropogenic activity. There is clear evidence and broad acceptance that this has already caused distinct global warming and various other strong changes in the Earth system.575 For several reasons, the kind of adaptation of the Earth system in response to these emissions is hard to evaluate. One crucial uncertainty is the future development of these emissions, despite the immense efforts toward their serious reduction, e.g., via the UN Climate Change Conferences of the Parties (COP).

Therefore, typical scenarios of the future Earth system’s adaptation for different emission levels are estimated based on combined models and measured data. However, there are challenging problems in modeling the corresponding processes and in data acquisition. A very promising approach to these tasks is based on the study of tipping elements because the Earth system comprises a number of such large-scale subsystems, which are vulnerable and can undergo large and possibly irreversible changes in response to anthropogenic perturbations beyond a critical threshold.574,576 The whole system of tipping elements, including their interactions, can be well described as a complex network in order to understand the spreading of tipping; i.e., will the tipping of one element exert only local effects, or will it induce cascading-like dynamics?577 This is a typical multistable system in which phenomena, such as partial synchronization, are typical (see also Sec. II D). Additionally, intrinsic and external noise may strongly influence the dynamics of the Earth system (see also Sec. II B). We know the main elements of this network because they have been identified and described, such as the dieback of the Amazon forest or the melting of the polar ice sheets (see Fig. 10). However, the interactions as well as the intrinsic dynamics in each tipping area are only partly known.

Fig. 10.

The location of climate tipping elements in the cryosphere (blue), biosphere (green), and ocean/atmosphere (orange), and global warming levels at which their tipping points will likely be triggered. Pins are colored according to our central global warming threshold estimate being below 2 °C, i.e., within the Paris Agreement range (light orange, circles); between 2 and 4 °C, i.e., accessible with current policies (orange, diamonds); and 4 °C and above (red, triangles). Figure from Armstrong McKay et al., Science 377, eabn7950 (2022). Copyright 2022 American Association for the Advancement of Science.574


To address the first problem, connections between the Amazon forest area and other tipping elements have recently been uncovered quantitatively by analyzing near-surface air temperature fields.578 In this way, teleconnections between the Amazon forest area and the Tibetan plateau as well as the West Antarctic ice sheet have been identified. In other studies, based on conceptual models for selected tipping elements with complex structure–function interrelations as treated in Sec. II A of this Perspective, it has been shown that the polar ice sheets are typically the potential initiators of tipping cascades, while the Atlantic Meridional Overturning Circulation acts as a mediator.577 However, these studies are still in their beginnings, and several crucial problems have to be solved before the tipping dynamics can be predicted reliably and, hence, before the adaptability of the Earth system, in particular to anthropogenic influences, can be evaluated in detail. A promising way to retrieve these interactions will be the application of modern machine learning methods (see Sec. IV). The cascade mechanism itself can be illustrated with a conceptual model, as sketched below.
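As a conceptual illustration (not calibrated to any specific tipping element), consider two coupled elements, each following the fold normal form $\dot{x}_i = -x_i^3 + x_i + c_i + \sum_j d_{ij}(x_j + 1)$: slowly ramping the forcing on the first element past its critical value can drag the second, subcritically forced element over its own threshold. The coupling strength and forcing ramp below are illustrative assumptions.

```python
import numpy as np

dt, steps = 0.01, 60_000
x = np.array([-1.0, -1.0])           # both elements start in the untipped state
d = np.array([[0.0, 0.0],
              [0.25, 0.0]])          # element 0 pushes element 1, not vice versa

for k in range(steps):
    c = np.array([min(0.05 + 1e-5 * k, 0.5), 0.1])   # slow forcing ramp on 0
    coupling = d @ (x + 1.0)         # inactive while the neighbor sits at -1
    x += dt * (-x**3 + x + c + coupling)             # fold normal form
print("final states (x > 0 means tipped):", x.round(2))
```

Element 1 alone would remain untipped, since its forcing of 0.1 lies below the fold at $c \approx 0.385$; it tips only because the cascade adds the coupling term once element 0 has crossed its threshold.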

However, it is evident that the greenhouse gas emissions have to be strongly reduced. In Sec. V D of this Perspective paper, problems and approaches for reaching this ambitious goal are discussed.

To summarize, the Earth system is an adaptive one, as is obvious from its past. We now have clear evidence that the huge anthropogenic influences create a new kind of perturbation, which has the power to induce a novel pathway of adaptation. This will certainly end in some stable regime, but it is very questionable whether we can live in it.

The notion of adaptivity is used in a variety of contexts, from nonlinear dynamics over socioeconomic systems to cognitive science and musicology. This article presents various viewpoints on adaptive systems and the notion of adaptivity itself from different research disciplines aiming to open the dialogue between communities.

The article shows that the terminology and definition of “adaptivity” may vary among the communities. While “adaptability” generally refers to the ability of a system to amend its properties according to dynamic (external or intrinsic) changes, the specific details of adaptive mechanisms depend on the context and the community, for example, how and which parts of a system can change (adaptation rules) or what strategies enable the perception (or sensing) of such changes. In addition, the mathematical framework for describing adaptive mechanisms and adaptive systems also varies across communities.

On the other hand, various commonalities become apparent throughout this article. For example, a common starting point in many contexts is descriptions based on networks, where the notion of adaptivity is well established. Adaptive networks are applied in numerous fields, such as power grids, neural systems, and machine learning. Another commonality across disciplines is the link between adaptivity and feedback mechanisms, which are ubiquitous in both natural systems and engineering.

We believe that the similarities and differences provide opportunities for further cross-fertilization between the research communities centered around the concept of adaptivity as a common mechanism; for example, adaptive networks can serve as a powerful modeling paradigm for realistic dynamical systems, possibly applicable to even more systems, e.g., in the context of the cognitive sciences, musicology, or active matter. Furthermore, a great opportunity lies in utilizing the mechanisms that have emerged in nature as inspiration and guiding principles to engineer artificial (intelligent, cooperative) systems and to develop control strategies. In this spirit, for example, the cooperative behavior of animals may guide the way to engineering robots capable of performing collective motion reminiscent of swarms of insects or schools of fish, or the development of new machine learning algorithms may profit from a deeper understanding of the brain provided by the field of neuroscience. Indeed, it has long been recognized that “The adaptiveness of the human organism, the facility with which it acquires new representations and strategies and becomes adept in dealing with highly specialized environments, makes it an elusive and fascinating target of our scientific inquiries and the very prototype of the artificial.”1

This article follows the workshop on “Adaptivity in nonlinear dynamical systems,” which brought together specialists from various disciplines to share their views on the abstract concept of adaptivity. During the presentations and the coffee breaks, there was a lively exchange of ideas that highlighted the great interest in this topic. We hope that this Perspective article will be a first step in promoting knowledge transfer between disciplines.

In order to conclude this Perspective article, we collect the current open research questions for each section to stimulate future research on adaptivity in the different fields represented in this collection of perspectives and beyond.

  1. What does a mathematical theory of adaptive systems, which includes cutting-edge applications, such as adaptive networks, look like?

  2. How can knowledge about adaptive mechanisms be used to better understand and influence processes in neuronal, physiological, and socio-economic systems?

  3. Can the knowledge about neural plasticity of the human brain be used to inspire the development of new artificial learning algorithms?

  4. What are the capabilities of modeling real-world dynamical systems by means of adaptivity?

J.S., R.B., and S.A.M.L. thank the Joachim Herz Foundation for funding an interdisciplinary and international workshop on “Adaptivity in nonlinear dynamical systems,” which was held from 20 to 23 September 2022 and provided a platform for discussions that resulted in this article. We further thank the Potsdam Institute for Climate Impact Research for supporting and hosting the workshop. J.S. acknowledges funding support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—through the project 429685422. R.B. acknowledges funding support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—through the project 440145547. S.A.M.L. acknowledges funding support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—through the project 498288081. I.F. acknowledges funding from the Institute of Physics Belgrade through a grant by the Ministry of Education, Science and Technological Development of Republic of Serbia. P.H. acknowledges further support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under project ID 434434223—SFB 1461. J.M. acknowledges funding support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—through the project 458548755. P.L.-S. acknowledges financial support from the Volkswagen Foundation (grant “Reclaiming individual autonomy and democratic discourse online: How to rebalance human and algorithmic decision-making”). P.A.T. acknowledges funding support by the John A. Blume Foundation and the Vaughn and Nancy Bryson fund. G.V. would like to acknowledge funding from the H2020 European Research Council (ERC) Starting Grant ComplexSwimmers (Grant No. 677511), the Horizon Europe ERC Consolidator Grant MAPEI (Grant No. 101001267), and the Knut and Alice Wallenberg Foundation (Grant No. 2019.0079). S.G. acknowledges co-funding from Next Generation EU, in the context of the National Recovery and Resilience Plan, Investment PE1 - Project FAIR “Future Artificial Intelligence Research”, co-financed by the Next Generation EU [DM 1555 del 11.10.22].

The authors have no conflicts to disclose.

Jakub Sawicki, Rico Berner, and Sarah A. M. Loos contributed equally to this paper.

Jakub Sawicki: Conceptualization (equal); Writing – original draft (equal); Writing – review & editing (equal). Rico Berner: Conceptualization (equal); Writing – original draft (equal); Writing – review & editing (equal). Sarah A. M. Loos: Conceptualization (equal); Writing – original draft (equal); Writing – review & editing (equal). Mehrnaz Anvari: Writing – original draft (equal). Rolf Bader: Writing – original draft (equal). Wolfram Barfuss: Writing – original draft (equal). Nicola Botta: Writing – original draft (equal). Nuria Brede: Writing – original draft (equal). Igor Franović: Writing – original draft (equal). Daniel J. Gauthier: Writing – original draft (equal). Sebastian Goldt: Writing – original draft (equal). Aida Hajizadeh: Writing – original draft (equal). Philipp Hövel: Writing – original draft (equal). Omer Karin: Writing – original draft (equal). Philipp Lorenz-Spreen: Writing – original draft (equal). Christoph Miehl: Writing – original draft (equal). Jan Mölter: Writing – original draft (equal). Simona Olmi: Writing – original draft (equal). Eckehard Schöll: Writing – original draft (equal). Alireza Seif: Writing – original draft (equal). Peter A. Tass: Writing – original draft (equal). Giovanni Volpe: Writing – original draft (equal). Serhiy Yanchuk: Writing – original draft (equal). Jürgen Kurths: Writing – original draft (equal); Writing – review & editing (equal).

The data that support the findings of this study are available within the article.

1. H. A. Simon, “The sciences of the artificial,” in Karl Taylor Compton Lectures (MIT Press, Cambridge, MA, 1969).
2. V. Yakubovich, “Theory of adaptive systems,” Sov. Phys.-Dokl. 13, 852–855 (1968).
3. V. Yakubovich, “Adaptive systems with multistep goal conditions,” Sov. Phys.-Dokl. 13, 1096–1099 (1968).
4. V. Fomin, A. Fradkov, and V. Yakubovich, Adaptive Control of Dynamical Systems (Nauka, Moscow, 1981).
5. A. M. Annaswamy and A. L. Fradkov, “A historical perspective of adaptive control and learning,” Annu. Rev. Control 52, 18–41 (2021).
6. A. L. Fradkov and A. I. Shepeljavyi, “The history of cybernetics and artificial intelligence: A view from Saint Petersburg,” Cybern. Phys. 11, 253–263 (2022).
7. Y. Z. Tsypkin, “Adaptation and learning in automatic systems,” in Mathematics in Science and Engineering (Academic Press, 1971).
8. A. Fradkov, Cybernetical Physics: From Control of Chaos to Quantum Control (Springer, Berlin, 2007).
9. J. Lehnert, P. Hövel, A. A. Selivanov, A. L. Fradkov, and E. Schöll, “Controlling cluster synchronization by adapting the topology,” Phys. Rev. E 90, 042914 (2014).
10. A. Pikovsky, M. Rosenblum, and J. Kurths, Synchronization: A Universal Concept in Nonlinear Sciences, 1st ed. (Cambridge University Press, Cambridge, 2001).
11. S. Boccaletti, A. N. Pisarchik, C. I. del Genio, and A. Amann, Synchronization: From Coupled Systems to Complex Networks (Cambridge University Press, Cambridge, 2018).
12. S. Yanchuk, A. C. Roque, E. E. N. Macau, and J. Kurths, “Dynamical phenomena in complex networks: Fundamentals and applications,” Eur. Phys. J. Spec. Top. 230, 2711–2716 (2021).
13. J. Cabral, V. Jirsa, O. Popovych, A. Torcini, and S. Yanchuk, “Editorial: From structure to function in neuronal networks: Effects of adaptation, time-delays, and noise,” Front. Syst. Neurosci. 16, 871165 (2022).
14. L. F. Abbott and S. B. Nelson, “Synaptic plasticity: Taming the beast,” Nat. Neurosci. 3, 1178–1183 (2000).
15. Y. Dan and M.-M. Poo, “Spike timing-dependent plasticity of neural circuits,” Neuron 44, 23–30 (2004).
16. Y. L. Maistrenko, B. Lysyansky, C. Hauptmann, O. Burylko, and P. A. Tass, “Multistability in the Kuramoto model with synaptic plasticity,” Phys. Rev. E 75, 066207 (2007).
17. O. V. Popovych, S. Yanchuk, and P. A. Tass, “Self-organized noise resistance of oscillatory neural networks with spike timing-dependent plasticity,” Sci. Rep. 3, 2926 (2013).
18. L. Lücken, O. V. Popovych, P. A. Tass, and S. Yanchuk, “Noise-enhanced coupling between two oscillators with long-term plasticity,” Phys. Rev. E 93, 032210 (2016).
19. T. Aoki and T. Aoyagi, “Co-evolution of phases and connection strengths in a network of phase oscillators,” Phys. Rev. Lett. 102, 034101 (2009).
20. D. V. Kasatkin, S. Yanchuk, E. Schöll, and V. I. Nekorkin, “Self-organized emergence of multi-layer structure and chimera states in dynamical networks with adaptive couplings,” Phys. Rev. E 96, 062211 (2017).
21. R. Berner, S. Vock, E. Schöll, and S. Yanchuk, “Desynchronization transitions in adaptive networks,” Phys. Rev. Lett. 126, 028301 (2021).
22. R. Berner, V. Mehrmann, E. Schöll, and S. Yanchuk, “The multiplex decomposition: An analytic framework for multilayer dynamical networks,” SIAM J. Appl. Dyn. Syst. 20, 1752–1772 (2021).
23. R. Berner and S. Yanchuk, “Synchronization in networks with heterogeneous adaptation rules and applications to distance-dependent synaptic plasticity,” Front. Appl. Math. Stat. 7, 714978 (2021).
24. M. Thiele, R. Berner, P. A. Tass, E. Schöll, and S. Yanchuk, “Asymmetric adaptivity induces recurrent synchronization in complex networks,” Chaos 33, 023123 (2023).
25. V. Röhr, R. Berner, E. L. Lameu, O. V. Popovych, and S. Yanchuk, “Frequency cluster formation and slow oscillations in neural populations with plasticity,” PLoS One 14, e0225094 (2019).
26. C. Kuehn, Multiple Time Scale Dynamics (Springer International Publishing, Switzerland, 2015).
27. N. Caporale and Y. Dan, “Spike timing-dependent plasticity: A Hebbian learning rule,” Annu. Rev. Neurosci. 31, 25 (2008).
28. D. Taylor, E. Ott, and J. G. Restrepo, “Spontaneous synchronization of coupled oscillator systems with frequency adaptation,” Phys. Rev. E 81, 046214 (2010).
29. T. Fardet and A. Levina, “Simple models including energy and spike constraints reproduce complex activity patterns and metabolic disruptions,” PLoS Comput. Biol. 16, e1008503 (2020).
30. G. Bonvento and J. P. Bolaños, “Astrocyte-neuron metabolic cooperation shapes brain activity,” Cell Metab. 33, 1546 (2021).
31. J. A. Roberts, K. K. Iyer, S. Vanhatalo, and M. Breakspear, “Critical role for resource constraints in neural models,” Front. Syst. Neurosci. 8, 154 (2014).
32. Y. S. Virkar, W. L. Shew, J. G. Restrepo, and E. Ott, “Feedback control stabilization of critical dynamics via resource transport on multilayer networks: How glia enable learning dynamics in the brain,” Phys. Rev. E 94, 042310 (2016).
33. A. Levina, J. M. Herrmann, and T. Geisel, “Dynamical synapses causing self-organized criticality in neural networks,” Nat. Phys. 3, 857 (2007).
34. R. Berner, E. Schöll, and S. Yanchuk, “Multiclusters in networks of adaptively coupled phase oscillators,” SIAM J. Appl. Dyn. Syst. 18, 2227–2266 (2019).
35. R. Berner, J. Fialkowski, D. V. Kasatkin, V. I. Nekorkin, S. Yanchuk, and E. Schöll, “Hierarchical frequency clusters in adaptive networks of phase oscillators,” Chaos 29, 103134 (2019).
36. K. A. Kroma-Wiley, P. J. Mucha, and D. S. Bassett, “Synchronization of coupled Kuramoto oscillators under resource constraints,” Phys. Rev. E 104, 014211 (2021).
37. S. Thamizharasan, V. K. Chandrasekar, M. Senthilvelan, R. Berner, E. Schöll, and D. V. Senthilkumar, “Exotic states induced by coevolving connection weights and phases in complex networks,” Phys. Rev. E 105, 034312 (2022).
38. J. Fialkowski, S. Yanchuk, I. M. Sokolov, E. Schöll, G. A. Gottwald, and R. Berner, “Heterogeneous nucleation in finite size adaptive dynamical networks,” Phys. Rev. Lett. 130, 067402 (2023).
39. I. Franović, S. R. Eydam, S. Yanchuk, and R. Berner, “Collective activity bursting in a population of excitable units adaptively coupled to a pool of resources,” Front. Netw. Physiol. 2, 841829 (2022).
40. I. Franović, S. Yanchuk, S. R. Eydam, I. Bačić, and M. Wolfrum, “Dynamics of a stochastic excitable system with slowly adapting feedback,” Chaos 30, 083109 (2020).
41. I. Bačić, V. Klinshov, V. I. Nekorkin, M. Perc, and I. Franović, “Inverse stochastic resonance in a system of excitable active rotators with adaptive coupling,” EPL 124, 40004 (2018).
42. A. Knoblauch, F. Hauser, M.-O. Gewaltig, E. Körner, and G. Palm, “Does spike-timing-dependent synaptic plasticity couple or decouple neurons firing in synchrony?,” Front. Comput. Neurosci. 6, 55 (2012).
43. D. Pazó and E. Montbrió, “Universal behavior in populations composed of excitable and self-oscillatory elements,” Phys. Rev. E 73, 055202 (2006).
44. L. F. Lafuerza, P. Colet, and R. Toral, “Nonuniversal results induced by diversity distribution in coupled excitable systems,” Phys. Rev. Lett. 105, 084101 (2010).
45. V. Klinshov and I. Franović, “Two scenarios for the onset and suppression of collective oscillations in heterogeneous populations of active rotators,” Phys. Rev. E 100, 062211 (2019).
46. I. Bačić, S. Yanchuk, M. Wolfrum, and I. Franović, “Noise-induced switching in two adaptively coupled excitable systems,” Eur. Phys. J. Spec. Top. 227, 1077 (2018).
47. O. Sporns and R. Kötter, “Motifs in brain networks,” PLoS Biol. 2, e369 (2004).
48. I. Bačić and I. Franović, “Two paradigmatic scenarios for inverse stochastic resonance,” Chaos 30, 033123 (2020).
49. L. M. Pecora and T. L. Carroll, “Master stability functions for synchronized coupled systems,” Phys. Rev. Lett. 80, 2109–2112 (1998).
50. E. Ott and T. M. Antonsen, “Low dimensional behavior of large systems of globally coupled oscillators,” Chaos 18, 037113 (2008).
51. E. Ott and T. M. Antonsen, “Long time evolution of phase oscillator systems,” Chaos 19, 023117 (2009).
52. C. Bick, M. Goodfellow, C. R. Laing, and E. A. Martens, “Understanding the dynamics of biological and neural oscillator networks through exact mean-field reductions: A review,” J. Math. Neurosci. 10, 9 (2020).
53. G. Lan, P. Sartori, S. Neumann, V. Sourjik, and Y. Tu, “The energy–speed–accuracy trade-off in sensory adaptation,” Nat. Phys. 8, 422 (2012).
54. D. Conti and T. Mora, “Nonequilibrium dynamics of adaptation in sensory systems,” Phys. Rev. E 106, 054404 (2022).
55. G. Tkačik and W. Bialek, “Information processing in living systems,” Annu. Rev. Condens. Matter Phys. 7, 89 (2016).
56. C. Ionescu, P. Jansson, and N. Botta, “Type theory as a framework for modelling and programming,” in Leveraging Applications of Formal Methods, Verification and Validation. Modeling, edited by T. Margaria and B. Steffen (Springer International Publishing, Cham, 2018), pp. 119–133.
57. M. Broy, K. Havelund, and R. Kumar, “Towards a unified view of modeling and programming,” in Leveraging Applications of Formal Methods, Verification and Validation: Discussion, Dissemination, Applications, edited by T. Margaria and B. Steffen (Springer International Publishing, Cham, 2016), pp. 238–257.
58. C. Ionescu, R. J. T. Klein, J. Hinkel, K. S. Kavi Kumar, and R. Klein, “Towards a formal framework of vulnerability to climate change,” Environ. Model. Assess. 14, 1–16 (2009).
59. N. Botta, P. Jansson, and C. Ionescu, “Contributions to a computational theory of policy advice and avoidability,” J. Funct. Program. 27, e23 (2017).
60. N. Brede, “Toward a DSL for sequential decision problems with tipping point uncertainties,” Zenodo (2021).
61. N. Botta, N. Brede, M. Crucifix, C. Ionescu, P. Jansson, Z. Li, M. Martínez, and T. Richter, “Responsibility under uncertainty: Which climate decisions matter most?,” Environ. Model. Assess. 28, 337–365 (2023).
62. R. Sutton, A. Barto, and R. Williams, “Reinforcement learning is direct adaptive optimal control,” IEEE Control Syst. Mag. 12, 19–22 (1992).
63. R. Bellman, Dynamic Programming (Princeton University Press, 1957).
64. N. Botta, P. Jansson, C. Ionescu, D. R. Christiansen, and E. Brady, “Sequential decision problems, dependent types and generic solutions,” Log. Methods Comput. Sci. 13, 1–23 (2017).
65. N. Brede and N. Botta, “On the correctness of monadic backward induction,” J. Funct. Program. 31, e26 (2021).
66. C. J. C. H. Watkins and P. Dayan, “Q-learning,” Mach. Learn. 8, 279–292 (1992).
67. B. Nordström, K. Petersson, and J. Smith, Programming in Martin-Löf’s Type Theory (Oxford University Press, 1990).
68. P. Martin-Löf, Intuitionistic Type Theory (Bibliopolis, 1984).
69. S. Allen, M. Bickford, R. Constable, R. Eaton, C. Kreitz, L. Lorigo, and E. Moran, “Innovations in computational type theory using Nuprl,” J. Appl. Log. 4, 428–469 (2006).
70. The Coq Development Team, “The Coq proof assistant,” Zenodo, latest version.
71. U. Norell, “Towards a practical programming language based on dependent type theory,” Ph.D. thesis (Chalmers University of Technology, 2007).
72. E. Brady, Type-Driven Development in Idris (Manning Publications Co., 2017).
73. L. de Moura, S. Kong, J. Avigad, F. van Doorn, and J. von Raumer, “The Lean theorem prover (system description),” in Automated Deduction—CADE-25 (Springer International Publishing, Cham, 2015), pp. 378–388.
74. V. Voevodsky, “Univalent foundations of mathematics,” in Logic, Language, Information and Computation: 18th International Workshop, WoLLIC 2011, Philadelphia, PA. Proceedings 18 (Springer, 2011), 4 pp.
75. G. Gonthier, “Formal proof–the four-color theorem,” Not. AMS 55, 1382–1393 (2008), https://community.ams.org/journals/notices/200811/tx081101382p.pdf.
76. G. Gonthier, A. Asperti, J. Avigad, Y. Bertot, C. Cohen, F. Garillot, S. Le Roux, A. Mahboubi, R. O’Connor, S. Ould Biha, and I. Pasca, “A machine-checked proof of the odd order theorem,” in Interactive Theorem Proving: 4th International Conference, ITP 2013, Rennes, France, 22–26 July 2013. Proceedings 4 (Springer, 2013), pp. 163–179.
77. K. Buzzard, J. Commelin, and P. Massot, “Formalising perfectoid spaces,” in Proceedings of the 9th ACM SIGPLAN International Conference on Certified Programs and Proofs (Association for Computing Machinery, 2020), pp. 299–312.
78. K. Hartnett, “Building the mathematical library of the future,” https://www.quantamagazine.org/building-the-mathematical-library-of-the-future-20201001 (2020).
79. S. Ornes, “How close are computers to automating mathematical reasoning,” https://www.quantamagazine.org/how-close-are-computers-to-automating-mathematical-reasoning-20200827/ (2020).
80. P. Wadler, “Propositions as types,” Commun. ACM 58, 75–84 (2015).
81. X. Leroy, “Formal verification of a realistic compiler,” Commun. ACM 52, 107–115 (2009).
82. N. Swamy, J. Chen, C. Fournet, P.-Y. Strub, K. Bhargavan, and J. Yang, “Secure distributed programming with value-dependent types,” in Proceedings of ICFP 2011 (Association for Computing Machinery, 2011), pp. 266–278.
83. J. Morgenstern and D. Licata, “Security-typed programming within dependently-typed programming,” in International Conference on Functional Programming (ACM, 2010).
84. E. Brady and K. Hammond, “Resource-safe systems programming with embedded domain specific languages,” in Practical Aspects of Declarative Languages (Springer, 2012), pp. 242–257.
85. A. Chlipala, Certified Programming with Dependent Types: A Pragmatic Introduction to the Coq Proof Assistant (MIT Press, 2022).
86. C. Ionescu and P. Jansson, “Testing versus proving in climate impact research,” in Proc. TYPES 2011, Leibniz International Proceedings in Informatics (LIPIcs) (Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, Dagstuhl, Germany, 2013), Vol. 19, pp. 41–54.
87. R. S. Bird and O. de Moor, Algebra of Programming, Prentice Hall International Series in Computer Science (Prentice Hall, 1997).
88. E. Moggi, “Computational lambda-calculus and monads,” in Proceedings of the Fourth Annual Symposium on Logic in Computer Science (LICS ’89) (IEEE, 1989), pp. 14–23; P. Wadler and M. Broy, “Monads for functional programming,” in Program Design Calculi, Proceedings of the NATO Advanced Study Institute on Program Design Calculi, NATO ASI Series (Springer, 1992), Vol. 118, pp. 233–264.
89. S. MacLane, Categories for the Working Mathematician, 2nd ed., Graduate Texts in Mathematics (Springer, 1978).
90. E. Schöll, “Partial synchronization patterns in brain networks,” Europhys. Lett. 136, 18001 (2021).
91. E. Schöll, “Chimeras in physics and biology: Synchronization and desynchronization of rhythms,” Nova Acta Leopoldina 425, 67–95 (2020), invited contribution.
92. J. Sawicki, Delay Controlled Partial Synchronization in Complex Networks, Springer Theses (Springer, Heidelberg, 2019).
93. A. Zakharova, “Chimera patterns in networks: Interplay between dynamics, structure, noise, and delay,” in Understanding Complex Systems (Springer, Cham, 2020).
94. Y. Maistrenko, B. Penkovsky, and M. Rosenblum, “Solitary state at the edge of synchrony in ensembles with attractive and repulsive interactions,” Phys. Rev. E 89, 060901 (2014).
95. P. Jaros, Y. Maistrenko, and T. Kapitaniak, “Chimera states on the route from coherence to rotating waves,” Phys. Rev. E 91, 022907 (2015).
96. V. Semenov, A. Zakharova, Y. Maistrenko, and E. Schöll, “Delayed-feedback chimera states: Forced multiclusters and stochastic resonance,” Europhys. Lett. 115, 10005 (2016).
97. S. Jain and S. Krishna, “A model for the emergence of cooperation, interdependence, and structure in evolving networks,” Proc. Natl. Acad. Sci. U.S.A. 98, 543–547 (2001).
98. T. Gross, C. J. Dommar D’Lima, and B. Blasius, “Epidemic dynamics on an adaptive network,” Phys. Rev. Lett. 96, 208701 (2006).
99. T. Gross and B. Blasius, “Adaptive coevolutionary networks: A review,” J. R. Soc. Interface 5, 259–271 (2008).
100. R. Gutiérrez, A. Amann, S. Assenza, J. Gómez-Gardeñes, V. Latora, and S. Boccaletti, “Emerging meso- and macroscales from synchronization of adaptive networks,” Phys. Rev. Lett. 107, 234103 (2011).
101. X. Zhang, S. Boccaletti, S. Guan, and Z. Liu, “Explosive synchronization in adaptive and multilayer networks,” Phys. Rev. Lett. 114, 038701 (2015).
102. M. Madadi Asl, A. Valizadeh, and P. A. Tass, “Dendritic and axonal propagation delays may shape neuronal networks with plastic synapses,” Front. Physiol. 9, 1849 (2018).
103. D. V. Kasatkin and V. I. Nekorkin, “Synchronization of chimera states in a multiplex system of phase oscillators with adaptive couplings,” Chaos 28, 093115 (2018).
104. D. V. Kasatkin and V. I. Nekorkin, “The effect of topology on organization of synchronous behavior in dynamical networks with adaptive couplings,” Eur. Phys. J. Spec. Top. 227, 1051 (2018).
105. R. Berner, J. Sawicki, and E. Schöll, “Birth and stabilization of phase clusters by multiplexing of adaptive networks,” Phys. Rev. Lett. 124, 088301 (2020).
106. P. Feketa, A. Schaum, and T. Meurer, “Synchronization and multi-cluster capabilities of oscillatory networks with adaptive coupling,” IEEE Trans. Autom. Control 66, 3084–3096 (2020).
107. R. Berner, Patterns of Synchrony in Complex Networks of Adaptively Coupled Oscillators, Springer Theses (Springer, Cham, 2021).
108. O. V. Popovych, M. N. Xenakis, and P. A. Tass, “The spacing principle for unlearning abnormal neuronal synchrony,” PLoS One 10, e0117205 (2015).
109. S. Chakravartula, P. Indic, B. Sundaram, and T. Killingback, “Emergence of local synchronization in neuronal networks with adaptive couplings,” PLoS One 12, e0178975 (2017).
110. Y. Kuramoto, Chemical Oscillations, Waves and Turbulence (Springer-Verlag, Berlin, 1984).
111. G. Filatrella, A. H. Nielsen, and N. F. Pedersen, “Analysis of a power grid using a Kuramoto-like model,” Eur. Phys. J. B 61, 485–491 (2008).
112. F. Dörfler and F. Bullo, “Synchronization and transient stability in power networks and nonuniform Kuramoto oscillators,” SIAM J. Control Optim. 50, 1616–1642 (2012).
113. M. Rohden, A. Sorge, M. Timme, and D. Witthaut, “Self-organized synchronization in decentralized power grids,” Phys. Rev. Lett. 109, 064101 (2012).
114. A. E. Motter, S. A. Myers, M. Anghel, and T. Nishikawa, “Spontaneous synchrony in power-grid networks,” Nat. Phys. 9, 191–197 (2013).
115. F. A. Rodrigues, T. K. D. M. Peron, P. Ji, and J. Kurths, “The Kuramoto model in complex networks,” Phys. Rep. 610, 1–98 (2016).
116. L. Tumash, S. Olmi, and E. Schöll, “Effect of disorder and noise in shaping the dynamics of power grids,” Europhys. Lett. 123, 20001 (2018).
117. L. Tumash, S. Olmi, and E. Schöll, “Stability and control of power grids with diluted network topology,” Chaos 29, 123105 (2019).
118. H. Taher, S. Olmi, and E. Schöll, “Enhancing power grid synchronization and stability through time-delayed feedback control,” Phys. Rev. E 100, 062306 (2019).
119. F. Hellmann, P. Schultz, P. Jaros, R. Levchenko, T. Kapitaniak, J. Kurths, and Y. Maistrenko, “Network-induced multistability through lossy coupling and exotic solitary states,” Nat. Commun. 11, 592 (2020).
120. C. Kuehn and S. Throm, “Power network dynamics on graphons,” SIAM J. Appl. Math. 79, 1271–1292 (2019).
121. C. H. Totz, S. Olmi, and E. Schöll, “Control of synchronization in two-layer power grids,” Phys. Rev. E 102, 022311 (2020).
122. X. Zhang, C. Ma, and M. Timme, “Vulnerability in dynamically driven oscillatory networks and power grids,” Chaos 30, 063111 (2020).
123. R. Berner, S. Yanchuk, and E. Schöll, “What adaptive neuronal networks teach us about power grids,” Phys. Rev. E 103, 042315 (2021).
124. J. A. Acebrón and R. Spigler, “Adaptive frequency model for phase-frequency synchronization in large populations of globally coupled nonlinear oscillators,” Phys. Rev. Lett. 81, 2229 (1998).
125. J. A. Acebrón, L. L. Bonilla, C. J. Pérez Vicente, F. Ritort, and R. Spigler, “The Kuramoto model: A simple paradigm for synchronization phenomena,” Rev. Mod. Phys. 77, 137–185 (2005).
126. P. S. Skardal, D. Taylor, and J. G. Restrepo, “Complex macroscopic behavior in systems of phase oscillators with adaptive coupling,” Physica D 267, 27–35 (2013).
127. B. Schäfer, D. Witthaut, M. Timme, and V. Latora, “Dynamically induced cascading failures in power grids,” Nat. Commun. 9, 1975 (2018).
128. R. Berner, A. Polanska, E. Schöll, and S. Yanchuk, “Solitary states in adaptive nonlocal oscillator networks,” Eur. Phys. J. Spec. Top. 229, 2183–2203 (2020).
129. H. Sakaguchi and Y. Kuramoto, “A soluble active rotater model showing phase transitions via mutual entertainment,” Prog. Theor. Phys. 76, 576–581 (1986).
130. P. Jaros, S. Brezetsky, R. Levchenko, D. Dudkowski, T. Kapitaniak, and Y. Maistrenko, “Solitary states for coupled oscillators with inertia,” Chaos 28, 011103 (2018).
131. I. V. Belykh, B. N. Brister, and V. N. Belykh, “Bistability of patterns of synchrony in Kuramoto oscillators with inertia,” Chaos 26, 094822 (2016).
132. S. Olmi, “Chimera states in coupled Kuramoto oscillators with inertia,” Chaos 25, 123125 (2015).
133. S. Olmi, A. Navas, S. Boccaletti, and A. Torcini, “Hysteretic transitions in the Kuramoto model with inertia,” Phys. Rev. E 90, 042905 (2014).
134. J. Barré and D. Métivier, “Bifurcations and singularities for coupled oscillators with inertia and frustration,” Phys. Rev. Lett. 117, 214102 (2016).
135. J. Sawicki, R. Berner, T. Löser, and E. Schöll, “Modelling tumor disease and sepsis by networks of adaptively coupled phase oscillators,” Front. Netw. Physiol. 1, 730385 (2022).
136. R. Berner, J. Sawicki, M. Thiele, T. Löser, and E. Schöll, “Critical parameters in dynamic network modeling of sepsis,” Front. Netw. Physiol. 2, 904480 (2022).
137. J. J. Tyson, K. C. Chen, and B. Novak, “Sniffers, buzzers, toggles and blinkers: Dynamics of regulatory and signaling pathways in the cell,” Curr. Opin. Cell Biol. 15, 221–231 (2003).
138. U. Alon, An Introduction to Systems Biology, 2nd ed., Chapman & Hall/CRC Computational Biology Series (Taylor & Francis, Philadelphia, PA, 2019).
139. J. Ferrell, Systems Biology of Cell Signaling (CRC Press, Boca Raton, FL, 2021).
140. S. E. Kahn, R. L. Prigeon, D. K. McCulloch, E. J. Boyko, R. N. Bergman, M. W. Schwartz, J. L. Neifing, W. K. Ward, J. C. Beard, and J. P. Palmer, “Quantification of the relationship between insulin sensitivity and beta-cell function in human subjects. Evidence for a hyperbolic function,” Diabetes 42, 1663–1672 (1993).
141. R. M. Macnab and D. E. J. Koshland, “The gradient-sensing mechanism in bacterial chemotaxis,” Proc. Natl. Acad. Sci. U.S.A. 69, 2509–2512 (1972).
142. H. C. Berg and P. M. Tedesco, “Transient response to chemotactic stimuli in Escherichia coli,” Proc. Natl. Acad. Sci. U.S.A. 72, 3235–3239 (1975).
143. T. M. Yi, Y. Huang, M. I. Simon, and J. Doyle, “Robust perfect adaptation in bacterial chemotaxis through integral feedback control,” Proc. Natl. Acad. Sci. U.S.A. 97, 4649–4653 (2000).
144. H. El-Samad, J. P. Goff, and M. Khammash, “Calcium homeostasis and parturient hypocalcemia: An integral feedback perspective,” J. Theor. Biol. 214, 17–29 (2002).
145. C. Briat, A. Gupta, and M. Khammash, “Antithetic integral feedback ensures robust perfect adaptation in noisy biomolecular networks,” Cell Syst. 2, 15–26 (2016).
146. O. Karin, A. Swisa, B. Glaser, Y. Dor, and U. Alon, “Dynamical compensation in physiological circuits,” Mol. Syst. Biol. 12, 886 (2016).
147. O. Karin, M. Raz, A. Tendler, A. Bar, Y. Korem Kohanim, T. Milo, and U. Alon, “A new model for the HPA axis explains dysregulation of stress hormones on the timescale of weeks,” Mol. Syst. Biol. 16, e9510 (2020).
148. Y. Korem Kohanim, T. Milo, M. Raz, O. Karin, A. Bar, A. Mayo, N. Mendelson Cohen, Y. Toledano, and U. Alon, “Dynamics of thyroid diseases and thyroid-axis gland masses,” Mol. Syst. Biol. 18, e10919 (2022).
149. A. Tendler, A. Bar, N. Mendelsohn-Cohen, O. Karin, Y. K. Kohanim, L. Maimon, T. Milo, M. Raz, A. Mayo, A. Tanay, and U. Alon, “Hormone seasonality in medical records suggests circannual endocrine circuits,” Proc. Natl. Acad. Sci. U.S.A. 118, e2003926118 (2021).
150. M. D. Lazova, T. Ahmed, D. Bellomo, R. Stocker, and T. S. Shimizu, “Response rescaling in bacterial chemotaxis,” Proc. Natl. Acad. Sci. U.S.A. 108, 13870–13875 (2011).
151. J. Larsch, S. W. Flavell, Q. Liu, A. Gordus, D. R. Albrecht, and C. I. Bargmann, “A circuit for gradient climbing in C. elegans chemotaxis,” Cell Rep. 12, 1748–1760 (2015).
152. K. Kamino, Y. Kondo, A. Nakajima, M. Honda-Kitahara, K. Kaneko, and S. Sawai, “Fold-change detection and scale invariance of cell–cell signaling in social amoeba,” Proc. Natl. Acad. Sci. U.S.A. 114, E4149–E4157 (2017).
153. W. Schultz, P. Dayan, and P. R. Montague, “A neural substrate of prediction and reward,” Science 275, 1593–1599 (1997).
154. P. N. Tobler, C. D. Fiorillo, and W. Schultz, “Adaptive coding of reward value by dopamine neurons,” Science 307, 1642–1645 (2005).
155. H. R. Kim, A. N. Malik, J. G. Mikhael, P. Bech, I. Tsutsui-Kimura, F. Sun, Y. Zhang, Y. Li, M. Watabe-Uchida, S. J. Gershman, and N. Uchida, “A unified framework for dopamine signals across timescales,” Cell 183, 1600–1616 (2020).
156. O. Karin and U. Alon, “The dopamine circuit as a reward-taxis navigation system,” PLoS Comput. Biol. 18, e1010340 (2022).
157. C. J. Whitmire and G. B. Stanley, “Rapid sensory adaptation redux: A circuit perspective,” Neuron 92, 298–315 (2016).
158. Y. K. Wu, C. Miehl, and J. Gjorgjieva, “Regulation of circuit organization and function through inhibitory synaptic plasticity,” Trends Neurosci. 45, 884–898 (2022).
159. C. Miehl, S. Onasch, D. Festa, and J. Gjorgjieva, “Formation and computational implications of assemblies in neural circuits,” J. Physiol. (published online) (2023).
160. D. Debanne, Y. Inglebert, and M. Russier, “Plasticity of intrinsic neuronal excitability,” Curr. Opin. Neurobiol. 54, 73–82 (2019).
161. R. S. Zucker and W. G. Regehr, “Short-term synaptic plasticity,” Annu. Rev. Physiol. 64, 355–405 (2002).
162. D. E. Feldman, “The spike-timing dependence of plasticity,” Neuron 75, 556–571 (2012).
163. H. Markram, J. Lübke, M. Frotscher, and B. Sakmann, “Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs,” Science 275, 213–215 (1997).
164. G.-Q. Bi and M.-M. Poo, “Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type,” J. Neurosci. 18, 10464–10472 (1998).
165. H. Motanis, M. J. Seay, and D. V. Buonomano, “Short-term synaptic plasticity as a mechanism for sensory timing,” Trends Neurosci. 41, 701–711 (2018).
166. A. I. Weber, K. Krishnamurthy, and A. L. Fairhall, “Coding principles in adaptation,” Annu. Rev. Vis. Sci. 5, 427–449 (2019).
167. R. Näätänen, M. Simpson, and N. E. Loveless, “Stimulus deviance and evoked potentials,” Biol. Psychol. 14, 53–98 (1982).
168. J. M. Ross and J. P. Hamm, “Cortical microcircuit mechanisms of mismatch negativity and its underlying subcomponents,” Front. Neural Circuits 14, 13 (2020).
169. N. Ulanovsky, L. Las, and I. Nelken, “Processing of low-probability sounds by cortical neurons,” Nat. Neurosci. 6, 391–398 (2003).
170. R. G. Natan, J. J. Briguglio, L. Mwilambwe-Tshilobo, S. I. Jones, M. Aizenberg, E. M. Goldberg, and M. N. Geffen, “Complementary control of sensory adaptation by two types of cortical interneurons,” eLife 4, e09868 (2015).
171. J. Homann, S. Ann, K. S. Chen, D. W. Tank, and M. J. Berry, “Novel stimuli evoke excess activity in the mouse primary visual cortex,” Proc. Natl. Acad. Sci. U.S.A. 119, e2108882119 (2022).
172. R. Mill, M. Coath, T. Wennekers, and S. L. Denham, “Abstract stimulus-specific adaptation models,” Neural Comput. 23, 435–476 (2011).
173. R. Mill, M. Coath, T. Wennekers, and S. L. Denham, “Characterising stimulus-specific adaptation using a multi-layer field model,” Brain Res. 1434, 178–188 (2012).
174. I. Hershenhoren, N. Taaseh, F. M. Antunes, and I. Nelken, “Intracellular correlates of stimulus-specific adaptation,” J. Neurosci. 34, 3303–3319 (2014).
175. Y. Park and M. N. Geffen, “A circuit model of auditory cortex,” PLoS Comput. Biol. 16, e1008016 (2020).
176. M. J. Seay, R. G. Natan, M. N. Geffen, and D. V. Buonomano, “Differential short-term plasticity of PV and SST neurons accounts for adaptation and facilitation of cortical neurons to auditory tones,” J. Neurosci. 40, 9224–9235 (2020).
177. A. Schulz, C. Miehl, M. J. Berry II, and J. Gjorgjieva, “The generation of cortical novelty responses through inhibitory plasticity,” eLife 10, e65309 (2021).
178. R. P. N. Rao and D. H. Ballard, “Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects,” Nat. Neurosci. 2, 79–87 (1999).
179. G. G. Turrigiano and S. B. Nelson, “Homeostatic plasticity in the developing nervous system,” Nat. Rev. Neurosci. 5, 97–107 (2004).
180. E. Marder and J.-M. Goaillard, “Variability, compensation and homeostasis in neuron and network function,” Nat. Rev. Neurosci. 7, 563–574 (2006).
181. S. Onasch and J. Gjorgjieva, “Circuit stability to perturbations reveals hidden variability in the balance of intrinsic and synaptic conductances,” J. Neurosci. 40, 3186–3202 (2020).
182. P. J. Gonçalves, J. M. Lueckmann, M. Deistler, M. Nonnenmacher, K. Öcal, G. Bassetto, C. Chintaluri, W. F. Podlaski, S. A. Haddad, T. P. Vogels, D. S. Greenberg, and J. H. Macke, “Training deep neural density estimators to identify mechanistic models of neural dynamics,” eLife 9, e56261 (2020).
183. M. Deistler, J. H. Macke, and P. J. Gonçalves, “Energy efficient network activity from disparate circuit parameters,” Proc. Natl. Acad. Sci. U.S.A. 119, e2207632119 (2022).
184. E. Marder, T. O’Leary, and S. Shruti, “Neuromodulation of circuits with variable parameters: Single neurons and small circuits reveal principles of state-dependent and robust neuromodulation,” Annu. Rev. Neurosci. 37, 329–346 (2014).
185. K. Jacquerie and G. Drion, “Robust switches in thalamic network activity require a timescale separation between sodium and T-type calcium channel activations,” PLoS Comput. Biol. 17, e1008997 (2021).
186. G. Mongillo, O. Barak, and M. Tsodyks, “Synaptic theory of working memory,” Science 319, 1543–1546 (2008).
187. R. Yuste, “From the neuron doctrine to neural networks,” Nat. Rev. Neurosci. 16, 487–497 (2015).
188. S. Fusi, “Computational models of long term plasticity and memory,” arXiv:1706.04946 (2017).
189. A. Litwin-Kumar and B. Doiron, “Formation and maintenance of neuronal assemblies through synaptic plasticity,” Nat. Commun. 5, 5319 (2014).
190. C. Miehl and J. Gjorgjieva, “Stability and learning in excitatory synapses by nonlinear inhibitory plasticity,” PLoS Comput. Biol. 18, e1010682 (2022).
191. F. Zenke, E. J. Agnes, and W. Gerstner, “Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks,” Nat. Commun. 6, 6922 (2015).
192. J. P. Hamm, D. S. Peterka, J. A. Gogos, and R. Yuste, “Altered cortical ensembles in mouse models of schizophrenia,” Neuron 94, 153–167 (2017).
193. R. Batista-Brito, E. Zagha, J. M. Ratliff, and M. Vinck, “Modulation of cortical circuits by top-down processing and arousal state in health and disease,” Curr. Opin. Neurobiol. 52, 172–181 (2018).
194. G. A. Light and R. Näätänen, “Mismatch negativity is a breakthrough biomarker for understanding and treating psychotic disorders,” Proc. Natl. Acad. Sci. U.S.A. 110, 15175–15176 (2013).
195. H. R. Wilson and J. D. Cowan, “A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue,” Kybernetik 13, 55–80 (1973).
196. M. Mattia and P. Del Giudice, “Population dynamics of interacting spiking neurons,” Phys. Rev. E 66, 051917 (2002).
197. E. S. Schaffer, S. Ostojic, and L. F. Abbott, “A complex-valued firing-rate model that approximates the dynamics of spiking networks,” PLoS Comput. Biol. 9, e1003301 (2013).
198. T. B. Luke, E. Barreto, and P. So, “Complete classification of the macroscopic behavior of a heterogeneous network of theta neurons,” Neural Comput. 25, 3207–3234 (2013).
199. C. R. Laing, “Derivation of a neural field model from a network of theta neurons,” Phys. Rev. E 90, 010901 (2014).
200. E. Montbrió, D. Pazó, and A. Roxin, “Macroscopic description for networks of spiking neurons,” Phys. Rev. X 5, 021028 (2015).
201. G. B. Ermentrout and N. Kopell, “Parabolic bursting in an excitable system coupled with a slow oscillation,” SIAM J. Appl. Math. 46, 233–253 (1986).
202. A. Byrne, M. J. Brookes, and S. Coombes, “A mean field model for movement induced changes in the beta rhythm,” J. Comput. Neurosci. 43, 143–158 (2017).