Complex networks are essentially heterogeneous, not only in the basic properties of the constituent nodes, such as their degree, but also in the effects that these have on the global dynamical properties of the network. Networks of coupled identical phase oscillators are good examples for analyzing these effects, since an overall synchronized state can be considered a reference state. A small variation of the intrinsic node parameters may cause the system to move away from synchronization, and a new phase-locked stationary state can be achieved. We propose a measure of phase dispersion that quantifies the functional response of the system to a given local perturbation. As a particular implementation, we propose a variation of the standard Kuramoto model in which the nodes of a complex network interact with their neighboring nodes through a coupling that includes a node-dependent frustration parameter. The final stationary phase-locked state then depends on the particular frustration parameter at each node and also on the network topology. We exploit this scenario by introducing individual frustration parameters and measuring their effect on the whole network in terms of the phase dispersion, which depends only on the topology of the network and on the choice of the particular node that is perturbed. This enables us to define a characteristic of each node, its functionability, that can be computed analytically in terms of the network topology. Finally, we provide a thorough comparison with other centrality measures.

Identical coupled Kuramoto oscillators tend to synchronize in the stationary state. This means that they will run with exactly the same phase and the same frequency. However, the introduction of a common forced phase difference, also called a frustration parameter, results in the phases of neighboring oscillators repelling each other, and in the long run, new configurations of partially synchronized oscillators become stable. Here, we make use of this mechanism to show that individual frustration parameters are crucial in generating different functional structures and allow us to define a new centrality measure, which we call “functionability.” The practical meaning of this measure can be important in complex networks such as the brain or power grids.

## I. INTRODUCTION

Synchronization has become one of the most paradigmatic examples of emergent properties in complex systems,^{1,2} since tuning the degree of interaction between the oscillatory units of a discrete system gives access to a variety of macroscopic states. Among the most studied such systems, because of its inherent simplicity, is the Kuramoto model (KM), in which phase oscillators interact continuously with other units through a sine function of the phase difference.^{3–5} In all-to-all models, there is a transition from an incoherent state to a coherent one that depends only on the relative strength of the two competing forces: the dispersion of intrinsic frequencies and the intensity of the coupling between units.

Over the last four decades, the KM has been thoroughly studied in regular lattices, and, with the surge of interest in complex networks, its role in irregular connectivity patterns has been heavily exploited.^{6} This has been achieved not only by analyzing synchronization properties (order parameters, control parameters, time to synchronize, etc.) but also through the use of the path to synchronization of neighboring units in order to identify higher-order connectivity patterns, for instance, in communities that form complex networks at different hierarchical levels.^{7–9} As already stated, in the original KM, the emphasis is on the relative strength of the two antagonistic contributions: frequency dispersion and coupling strength. However, when complex topologies are considered, it is important to disentangle these effects; for this reason, special interest has arisen concerning the evolution of identical oscillators. In particular, a simple change in the coupling function, inserting a phase frustration into the argument of the sine function, results in identical oscillators that are now unable to synchronize, and it generates complex patterns of phase differences, which have been related to topological symmetries of the network.^{10,11}

The introduction of an identical frustration parameter in all the coupling terms produces a global effect on the network.^{10} However, in complex network science, there is a special interest in understanding what role individual nodes play in the behavior of the overall network. Many centrality measures that are key to this field have been considered for many years in the social sciences, and there are more recent proposals, such as Google Search PageRank, as well as other measures related to the concept of controllability. However, to the best of our knowledge, there is no measure of the effect that a given node can have on the range of states a network can achieve, which, in terms of phase oscillators, manifests in the different phase-difference patterns a network can traverse. Our proposal is precisely to characterize the effect that a node can have on the global asynchronous state.

Among the systems where this new concept that we name “functionability” has an immediate application is the rapidly evolving field of brain networks.^{12} The molecular and cellular mechanisms of synapse formation and plasticity shape and guide the development and change of synaptic connections in the long run.^{13–15} Therefore, brain networks, on a short time scale, are considered to be static. Such structural networks are the substrate on which different temporal coactivation patterns can occur, also known as functional networks.^{16–21} All the functionalities of the brain, at either the low or the high level, are captured by different networks which nonetheless occur within the same physical medium. How does this essential feature of the brain arise? What mechanisms are responsible for a static network undergoing many different states? Are there specialized brain regions that are better at this job? Are they easy to identify?

Higher functionability may be positive for the system, as it reflects the capacity of a node to be involved in different tasks and can result in the network state shifting into one involving more complex temporal relationships between modules. However, highly functional nodes can also be potentially dangerous in systems where tiny perturbations can produce cascadelike effects that completely disrupt the network dynamics. An example of this is offered by the transfer networks of power grids, which have been widely studied in the field of complex networks, focusing on their structure to assess the damage of failures.^{22,23} However, power grids are highly sensitive to oscillatory dynamics and the synchronization of AC power^{24} and hence to perturbations of the phase lags between individual agents. Other such examples include the synchronization in heartbeats,^{25} multiprocessors and multicore processors,^{25} and traffic signal synchronization.^{26}

We are specifically interested in detecting the nodes that have a major impact on the network by enabling a broader spectrum of states or a larger dispersion from the "ground" state. Many centrality measures have been developed and defined over the years,^{27} and some of them are even related to the dynamical properties of the nodes;^{28} but we specifically target a measure of variability, or functionability, that can be associated with the physical phenomenon of synchronization in order to endow it with meaning.^{1} Hence, we base a great deal of our work on an oscillatory dynamical model, but we aim to arrive at a compact mathematical expression that emerges from it.

The structure of the paper is as follows: in Sec. II, we provide a brief description and the main results of Kuramoto and frustrated Kuramoto oscillatory models. In Sec. III, we introduce a new oscillatory model based on the previous one: the general frustrated Kuramoto model (GFKM) as well as a measure to quantify the phase dispersion of the system when a perturbation is produced. This model is the building block of our main contribution, that is, the definition of a new centrality measure: functionability. Its mathematical expression and interpretation are provided in Sec. IV. Section V then presents a comparison between other well-known centrality measures, in order to stress the novelty of functionability, and Sec. VI considers weighted functionability. Finally, Sec. VII closes the paper with our conclusions. Further details concerning the validity of the model, as well as an example of a toy network, can be found in the Appendix.

## II. OSCILLATORY MODELS

### A. The Kuramoto model

The phenomenon of synchronization in complex networks has been widely studied in recent decades, and several models have been designed to reproduce and help us understand the behavior of real oscillatory systems.^{1,29,30} In 1975, Kuramoto presented one of the best-known and most compact models:^{31} a set of $N$ phase oscillators characterized by their phase, $\theta_i(t)$, coupled through the sine of their phase differences. Each unit is explicitly influenced by its nearest neighbors, $j \in \Gamma_i$, with coupling strength $K_{ij}$. When oscillators are connected in a particular topology, the set of nearest neighbors is defined by the adjacency matrix, $A$, of the network $G(V,E)$. The set of nodes comprising the network, $V(G)$, consists of the oscillators, and the links between oscillators are represented by the set of network edges, $E(G)$.^{27}

Kuramoto further assumed homogeneous interactions: $K_{ij} = K \;\forall (i,j)$. Therefore, considering both the topology of the network that maps the connectivity between oscillators and the coupling strength, the dynamics of the system can be written as

$$\frac{d\theta_i(t)}{dt} = \omega_i + K\sum_{j \in \Gamma_i}\sin\big(\theta_j(t) - \theta_i(t)\big), \tag{1}$$

where $\omega_i$ represents the natural frequency of each oscillator.

Previous studies^{5} show that the long-time properties of the population of oscillators, in the absence of noise, are determined by the coupling strength, $K$, and the distribution of natural frequencies.

We consider that two nodes are *phase synchronized* when their phases have the same value,

$$\theta_i(t) = \theta_j(t) \quad \forall t > t_0. \tag{2}$$

When the phase difference has a constant value, that is, $\theta_i(t) - \theta_j(t) = c \;\forall t > t_0$, we say there is phase locking between nodes $i$ and $j$. Similarly, we consider that two nodes are *frequency synchronized* when their frequencies have the same value:

$$\frac{d\theta_i(t)}{dt} = \frac{d\theta_j(t)}{dt} \quad \forall t > t_0. \tag{3}$$

When two nodes are both phase and frequency synchronized, we say that these nodes are *completely synchronized*.

What is the effect of the $K$ and $\omega_i$ parameters in Eq. (1)? An increase of the coupling strength accelerates the move toward closer phases and, hence, promotes the synchronization of the system. Conversely, a broader spectrum of natural frequencies, that is, larger differences within the set $\{\omega_i\}$, tends to cause the phases of the oscillators to diverge (see Fig. 1).

In this paper, we assume that the network forms a single connected component. This ensures that when all the oscillators have the same natural frequency, $\omega_0$, there is only one attractor of the dynamics: the completely synchronized regime, $d\theta_i(t)/dt = \omega_0 \;\forall i$.^{7,8} In the case of unimodal distributions of the natural frequency, the system becomes synchronized as long as the coupling parameter, $K$, is larger than a threshold value, $K_C$.^{5} The particular evolution to such a state depends strongly on the connectivity structure or the network topology.^{32}
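As a quick numerical check of this unique attractor, the following sketch (ours, not the authors' code; a hypothetical five-node connected graph, simple Euler integration) evolves identical oscillators from spread-out phases and verifies that they collapse onto a common phase:

```python
import math

# hypothetical 5-node connected graph given as an edge list
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
N, K, omega0, dt, steps = 5, 1.0, 0.5, 0.01, 20000

nbrs = {i: [] for i in range(N)}
for a, b in edges:
    nbrs[a].append(b)
    nbrs[b].append(a)

theta = [0.3 * i for i in range(N)]            # non-synchronized start
for _ in range(steps):                          # Euler step of Eq. (1)
    dth = [omega0 + K * sum(math.sin(theta[j] - theta[i]) for j in nbrs[i])
           for i in range(N)]
    theta = [theta[i] + dt * dth[i] for i in range(N)]

# with identical natural frequencies, the only attractor is full sync
spread = max(abs(theta[i] - theta[j]) for i in range(N) for j in range(N))
print(spread)
```

After the transient, `spread` is numerically zero: all phases run together at the common frequency $\omega_0$.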

### B. The frustrated Kuramoto model (FKM)

In Sec. II A, we stated that a connected system described by the KM reaches complete synchronization when all the oscillators are identical, $\omega_i = \omega_0 \;\forall i$. Besides tuning the natural frequency of each oscillator, is there an alternative way of breaking the natural synchrony of the system? Previous studies^{10,33,34} have suggested the introduction of a phase "frustration" parameter, $\alpha$, into the dynamics of the system as follows:

$$\frac{d\theta_i(t)}{dt} = \omega_i + K\sum_{j \in \Gamma_i}\sin\big(\theta_j(t) - \theta_i(t) - \alpha\big). \tag{4}$$

It has been shown that, as long as $\alpha < \pi/2$, the system synchronizes to a resulting frequency $\Omega$, where $d\theta_i(t)/dt = \Omega \neq \omega_0 \;\forall i$. Nevertheless, $\alpha$ forces connected nodes to be locked in phase and hence breaks the phase synchronization. The magnitude of such locking is determined by both the network topology and the dynamical parameter, $\alpha$. However, complete synchronization is preserved for topologically symmetric nodes, a phenomenon that has been called remote synchronization. As the frustration increases, the asynchronous groups move away from each other. In order to quantify the level of synchronization between pairs of oscillators, we define a local order parameter between pairs of oscillators based on Ref. 7,

$$\rho_{ij} \equiv \big\langle \cos\big(\theta_i(t) - \theta_j(t)\big)\big\rangle_t. \tag{5}$$

Equation (5) measures the correlation between pairs of oscillators in the stationary regime, which is invariant under temporal translation.^{35}

In Fig. 2, four groups are internally synchronized but mutually asynchronous. The groups obtained capture the natural symmetries of the network: nodes 1 and 2; nodes 3 and 6; nodes 4 and 5; and node 0, isolated.
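Remote synchronization can be illustrated numerically. The sketch below uses a hypothetical five-node graph (not the network of Fig. 2) whose automorphism swaps nodes 1↔2 and 3↔4; with a uniform frustration $\alpha$, symmetric nodes should end up in phase, while connected nodes lock at nonzero offsets:

```python
import math

edges = [(0, 1), (0, 2), (1, 3), (2, 4)]   # nodes 1<->2 and 3<->4 symmetric
N, K, alpha, dt, steps = 5, 1.0, 0.3, 0.01, 30000
nbrs = {i: [] for i in range(N)}
for a, b in edges:
    nbrs[a].append(b)
    nbrs[b].append(a)

theta = [0.0, 0.1, 0.25, 0.35, 0.5]        # asymmetric initial phases
for _ in range(steps):                      # Euler step of the FKM, Eq. (4)
    dth = [K * sum(math.sin(theta[j] - theta[i] - alpha) for j in nbrs[i])
           for i in range(N)]
    theta = [theta[i] + dt * dth[i] for i in range(N)]

def wrapped(x):                             # phase difference in (-pi, pi]
    return math.atan2(math.sin(x), math.cos(x))

sym_gap = max(abs(wrapped(theta[1] - theta[2])),
              abs(wrapped(theta[3] - theta[4])))   # symmetric pairs: ~0
lock = abs(wrapped(theta[0] - theta[1]))           # neighbors: nonzero lock
print(sym_gap, lock)
```

Symmetric nodes remain completely synchronized (`sym_gap` vanishes), while the frustration forces a nonzero phase lock between connected, non-symmetric nodes.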

## III. MOVING AWAY FROM SYNCHRONIZATION

In Sec. II, we explored two oscillatory models. An oscillatory system described by the KM can always achieve complete synchronization, regardless of its topology and frequency distribution (see the conditions in Sec. II A). In contrast, an equivalent system described by the FKM can always achieve frequency synchronization but, generally, not phase synchronization. The phase locking is completely determined by the symmetries of the network, if they exist (Sec. II B).

However, the KM and FKM do not provide information on specific nodes, only on the network as a whole. We seek an intrinsic parameter characterizing each oscillator that could move the system away from its natural synchronized state. What would be the effect of a phase frustration parameter that characterizes each oscillator distinctly? Several studies have focused on the effect of different natural frequencies of the oscillators, but we may also be concerned with other types of natural properties connected to the phase shift between oscillators.

We would like to identify the nodes that have the largest effect in leading the whole system away from complete synchronization. To do so, we need to establish which nodes have the greatest capacity, with only a small perturbation, to produce a large dispersion in the phases of the population. In the next paragraphs, we build a model that is capable of breaking the natural phase synchronization and search for central nodes that are best suited to doing this. To this end, we introduce a dynamic model based on FKM: the general frustrated Kuramoto model (GFKM), which enables us to characterize each node individually by means of an intrinsic frustration parameter.

### A. The general frustrated Kuramoto model

In the present work, we consider the natural generalization of the FKM in which the frustration phase parameter is intrinsic to each oscillator, $\alpha_i$, rather than a homogeneous property of the population. This assumption may depict a more realistic scenario, in which oscillators represent real systems with individual properties that are determined by the nature of each oscillator. This more general model, which we call the GFKM, is determined by the dynamics

$$\frac{d\theta_i(t)}{dt} = \omega_i + K\sum_{j \in \Gamma_i}\sin\big(\theta_j(t) - \theta_i(t) - \alpha_i\big). \tag{6}$$
### B. Finding the most functional nodes

We may consider very distinct distributions, $g(\alpha)$, of the intrinsic frustration parameters, each of them providing the system with a different behavior. As our main goal is to localize the nodes that are most likely to move the system away from synchronization, we consider two possible distributions, although other interesting insights may be derived from different choices. We seek analytical results in order to gain a better understanding of the model and of the effect of an intrinsic phase frustration parameter. Hence, we consider:

- $\alpha_i = \alpha_h \;\forall i$. This is a homogeneous phase frustration parameter, and so the GFKM reduces to the FKM described previously. Remote synchronization emerges in symmetric nodes.
- $\alpha_i = \alpha \neq 0$; $\alpha_j = 0 \;\forall j \neq i$, so that only node $i$ has a nonzero phase frustration parameter. In this way, we break the overall problem into many individual problems. The question that naturally arises here is: how far can a single node lead the system away from complete synchronization?

Let us first suggest a simple way to measure phase dispersion between oscillators.

### C. Measuring perturbations in oscillatory systems

Consider a system that consists of a network of coupled oscillators. Suppose that a mechanism moves the system from an initial parameter configuration, $p$, to a new parameter configuration, $q$. Since each oscillator is characterized by a phase, $\theta_i(t)$, the phases in configuration $p$, $\vec{\theta}(p)$, will transform to updated phases in configuration $q$, $\vec{\theta}(q)$. We assume that configurations $p$ and $q$ are two possible stationary states characterized by a set of values for the phase locking between the oscillators. In this situation, we define the effect on nodes $(i,j)$, $\epsilon_{ij}$, generated by the configuration shifting from $p$ to $q$ as

$$\epsilon_{ij}(p \to q) \equiv \frac{1 - \cos\big(\Delta\varphi_{ij}\big)}{2}, \tag{7}$$

where $\varphi_{ij} \equiv \theta_j - \theta_i$ and $\Delta\varphi_{ij} \equiv \varphi_{ij}(q) - \varphi_{ij}(p)$.^{38} It is, therefore, a measure of the change in the phase difference between nodes $i$ and $j$ that, being defined in the stationary regime, is time independent. Equation (7) has the following properties: $\epsilon_{ij}(p \to q) = \epsilon_{ji}(p \to q)$, $\epsilon_{ii}(p \to q) = 0$, and $\epsilon_{ij}(p \to p) = 0$. Moreover, $\epsilon_{ij}(p \to q) \in [0,1]$. In other words, if nodes $(i,j)$ were initially in phase and they change to be in antiphase, the effect, or the change in phase difference, is the largest possible: $\epsilon_{ij}(\Delta\varphi = \pi) = 1$. When the change of configuration produces no change, that is, the phase difference between nodes $i$ and $j$ remains unchanged, the effect is zero: $\epsilon_{ij}(\Delta\varphi = 0) = 0$.
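The listed properties are consistent with, for instance, the form $\epsilon = (1 - \cos\Delta\varphi)/2$; the following sketch assumes that form and checks each property:

```python
import math

def effect(dphi):
    """Effect eps_ij for a change dphi in the phase difference (assumed
    form (1 - cos(dphi)) / 2, which satisfies all listed properties)."""
    return (1.0 - math.cos(dphi)) / 2.0

assert effect(0.0) == 0.0                       # unchanged configuration
assert abs(effect(math.pi) - 1.0) < 1e-12       # in-phase -> antiphase
assert abs(effect(0.4) - effect(-0.4)) < 1e-12  # symmetry i <-> j
# small-angle regime: eps ~ dphi^2 / 4 (linearized cosine)
assert abs(effect(1e-3) - (1e-3) ** 2 / 4) < 1e-12
print("ok")
```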

## IV. FUNCTIONABILITY: A NEW CENTRALITY MEASURE

In order to assess the impact on the whole system of a node being perturbed, we make use of the GFKM described in Eq. (6) and the effect measure defined in Eq. (7). As stated in Sec. III B, we consider that the change in the configuration is produced by just one single node, $C$, called the control node. We will compute the *functionability*, $F_C$, of $C$. Then, by performing the same procedure for each node, we will obtain a vector with the functionability of each of them: $\vec{F}$.

**The control node, $C$.** The initial configuration of the system, $p$, is such that all the oscillators are completely synchronized at the stationary state. The system then switches to configuration $q$, which is determined by the control node. This change is enacted by setting the set of frustration parameters, $\vec{\alpha}$, in Eq. (6) as follows:

$$\alpha_i(p) = 0 \;\forall i, \qquad \alpha_i(q) = \begin{cases} 0 & \text{if } i \neq C, \\ \alpha & \text{if } i = C, \end{cases} \tag{8}$$

where $C$ is the label of the control node, the effect of whose perturbation on the whole system we will assess.

**Functionability, $F$.** We define the *functionability* of node $C$ as

$$F_C(\alpha) \equiv \sum_i\sum_j \epsilon_{ij}\big(p \to q(\alpha)\big), \tag{9}$$

where $p$ and $q$ are defined in Eq. (8), $\epsilon_{ij}$ is defined in Eq. (7), and the state of the system is obtained from the dynamics of Eq. (6) with the aforementioned configurations. As already seen, the initial configuration, $p$, corresponds to fully locked synchronization. Therefore, the functionability measures the total dispersion of the phases from this ground state due to the perturbation of a single node. The larger the dispersion, the more functionability a node has.

In Fig. 3, we can observe that, regardless of the control node selected, if the frustration parameter is set to $\alpha = 0$, then the system is fully synchronized. Conversely, when $\alpha \neq 0$ (in the example considered, $\alpha = 0.5$), the system becomes asynchronous. Here, the effect of the frustration depends on the control node selected. As explained, the quantification of this effect is captured by the *functionability* of the nodes.
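The full protocol can be sketched numerically (a hypothetical five-node graph with symmetric pairs 1↔2 and 3↔4, and the assumed effect form $\epsilon = (1-\cos\Delta\varphi)/2$, consistent with its listed properties): start from the synchronized configuration $p$, frustrate a single control node as in Eq. (8), integrate Eq. (6) to its locked state $q$, and sum the effects as in Eq. (9). Symmetric control nodes must yield identical functionability:

```python
import math

edges = [(0, 1), (0, 2), (1, 3), (2, 4)]   # nodes 1<->2 and 3<->4 symmetric
N, alpha, dt, steps = 5, 0.2, 0.01, 20000
nbrs = {i: [] for i in range(N)}
for a, b in edges:
    nbrs[a].append(b)
    nbrs[b].append(a)

def functionability(C):
    th = [0.0] * N                          # configuration p: full sync
    al = [alpha if i == C else 0.0 for i in range(N)]
    for _ in range(steps):                  # Euler integration of Eq. (6)
        dth = [sum(math.sin(th[j] - th[i] - al[i]) for j in nbrs[i])
               for i in range(N)]
        th = [th[i] + dt * dth[i] for i in range(N)]
    # sum of effects over all ordered pairs; p has zero phase differences
    return sum((1 - math.cos(th[j] - th[i])) / 2
               for i in range(N) for j in range(N))

F1, F2 = functionability(1), functionability(2)
print(F1, F2)
```

Because nodes 1 and 2 are related by a network automorphism, `F1` and `F2` agree to numerical precision.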

### A. Mathematical expression of *functionability*

Equation (6) is a set of $N$ coupled nonlinear differential equations whose solution, in general, cannot be derived analytically. However, if the system reaches a frequency-synchronized state (see the Appendix, Subsection 2),^{39} that is, $d\theta_i(t)/dt = \Omega \;\forall i, t$, and the argument of the sine is small enough, we can linearize it as follows:

$$\frac{d\theta_i(t)}{dt} = \omega_i + K\sum_j A_{ij}\big(\theta_j - \theta_i - \alpha_i\big). \tag{10}$$

We can expand Eq. (10),

$$\frac{d\theta_i(t)}{dt} = \omega_i + K\Big(\sum_j A_{ij}\theta_j - d_i\theta_i - \alpha_i d_i\Big), \tag{11}$$

where $d_i \equiv \sum_j A_{ij}$ is the degree of the $i$th node of the undirected network defined by the adjacency matrix, $A$. We define the Laplacian matrix, $L$, as follows:^{40}

$$L_{ij} \equiv d_i\,\delta_{ij} - A_{ij}, \tag{12}$$

so that Eq. (11) reads

$$\frac{d\theta_i(t)}{dt} = \omega_i - K\Big(\sum_j L_{ij}\theta_j + \alpha_i d_i\Big). \tag{13}$$

Without loss of generality, we can set $\omega = 0$ and $K = 1$. If the system is synchronized, then $\dot{\theta}_i = \Omega \;\forall i$. A nonzero shared natural frequency does not affect the synchronization of the system, and the coupling strength only plays a role in the time scale of the path to synchronization.
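For concreteness, here is a minimal sketch of the Laplacian construction (assuming the usual combinatorial convention $L_{ij} = d_i\delta_{ij} - A_{ij}$, which has the zero row sums used in the derivation) for a hypothetical edge list:

```python
def laplacian(n, edges):
    """Combinatorial Laplacian L = D - A of an undirected graph."""
    A = [[0] * n for _ in range(n)]
    for a, b in edges:
        A[a][b] = A[b][a] = 1
    d = [sum(row) for row in A]                 # node degrees
    return [[(d[i] if i == j else 0) - A[i][j] for j in range(n)]
            for i in range(n)]

L = laplacian(4, [(0, 1), (1, 2), (2, 3), (3, 0)])  # a 4-cycle
print(L)
```

Every row of `L` sums to zero, which is precisely the property exploited below when summing the stationary equations over all nodes.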

Equation (13) is then written as

$$\Omega = -\sum_j L_{ij}\theta_j - \alpha_i d_i. \tag{14}$$

We can compute the summation of Eq. (14) along all $i$ indices, $\sum_i^N$,

$$N\Omega = -\sum_i\sum_j L_{ij}\theta_j - \sum_i \alpha_i d_i, \tag{15}$$

where we have used the fact that the Laplacian matrix is defined from an odd function, or a zero-sum row, and hence,

$$\sum_i\sum_j L_{ij}\theta_j = \sum_j \theta_j \sum_i L_{ij} = 0. \tag{16}$$

Finally,

$$\Omega = -\frac{1}{N}\sum_i \alpha_i d_i. \tag{17}$$

In matrix form,

$$L\,\vec{\theta} = -\vec{\alpha d} - \Omega\,\vec{1}, \tag{18}$$

where $[\vec{\alpha d}]_i = \alpha_i d_i$.

In a connected graph, the Laplacian matrix, $L$, has one null eigenvalue, which corresponds to the eigenvector $\vec{1}$, and hence the system of linear equations for $\vec{\theta}$ in Eq. (18) is singular. Intuitively, we are left with one free parameter, which depends on the initial phase conditions, $\vec{\theta}(t=0)$. However, as we will see, the phase differences between oscillators are well determined.

**The reference node, $R$.** As the solution of the set $\vec{\theta}$ given by Eq. (18) is undetermined, we need to work with phase *differences* instead of absolute phases. We, therefore, define the reference node, $R$, as the node to which the phase of all the other nodes is compared,

$$\varphi_i(R) = \varphi_{iR} \equiv \theta_i - \theta_R, \qquad \varphi_R(R) = 0. \tag{19}$$

**The reduced Laplacian matrix, $\tilde{L}(C,R)$.** Given an undirected network and its $N \times N$ Laplacian matrix (12), we define the reduced $(N-1) \times (N-1)$ Laplacian matrix for a pair of control (8) and reference (19) nodes as follows:

$$\tilde{L}(C,R) \equiv L - \{C\text{th row}, R\text{th column}\}. \tag{20}$$

We can define the selection matrix, $J_{n,m}$, which is an $N \times N$ identity matrix after the removal of the $n$th column and the $m$th row. Using this notation, Eq. (20) can be written as

$$\tilde{L}(C,R) = J_{C,R}\cdot L \cdot J_{C,R}^{T}. \tag{21}$$
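A small sketch of the reduced Laplacian of Eq. (20) on a hypothetical 4-cycle; by the matrix-tree theorem, every such minor of a connected graph's Laplacian has $|\det|$ equal to its number of spanning trees, so the reduced system is indeed nonsingular:

```python
def laplacian(n, edges):
    A = [[0] * n for _ in range(n)]
    for a, b in edges:
        A[a][b] = A[b][a] = 1
    return [[(sum(A[i]) if i == j else 0) - A[i][j] for j in range(n)]
            for i in range(n)]

def reduced(L, C, R):
    """Delete the control node's row (C) and the reference node's column (R)."""
    n = len(L)
    return [[L[i][j] for j in range(n) if j != R]
            for i in range(n) if i != C]

def det(M):
    """Determinant via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n, sign, d = len(M), 1, 1.0
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        if abs(M[p][k]) < 1e-12:
            return 0.0
        if p != k:
            M[k], M[p] = M[p], M[k]
            sign = -sign
        d *= M[k][k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n):
                M[r][c] -= f * M[k][c]
    return sign * d

L = laplacian(4, [(0, 1), (1, 2), (2, 3), (3, 0)])  # C4 has 4 spanning trees
d00 = det(reduced(L, 0, 0))
d01 = det(reduced(L, 0, 1))
print(d00, d01)
```

Both minors have $|\det| = 4$, the number of spanning trees of the 4-cycle, confirming invertibility for any choice of $C$ and $R$.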

With the aforementioned considerations (that is, when the linearization requirements are met), we can compute the phase differences, $\vec{\varphi}(R)$, with respect to a particular reference node, $R$, analytically. This result represents a key finding for Secs. V and VI. For the configuration of Eq. (8), Eq. (18) reduces to

$$L\,\vec{\theta} = -\alpha d_C\Big(\vec{e}_C - \frac{1}{N}\vec{1}\Big), \tag{22}$$

where $C$ is the label of the control node and $\vec{e}_C$ is the $C$th canonical basis vector. The stable state of the system described in Eq. (14), when the linearization conditions hold, then becomes

$$\tilde{L}(C,R)\,\vec{\varphi}(R) = \frac{\alpha d_C}{N}\,\vec{1}. \tag{23}$$

The unequivocal solution of the phase differences for a given reference node, $\vec{\varphi}(R)$, is given by

$$\vec{\varphi}(R) = \frac{\alpha d_C}{N}\,\tilde{L}^{-1}(C,R)\,\vec{1}. \tag{24}$$

Hence,

$$\varphi_i(R) = \frac{\alpha d_C}{N}\sum_j\big[\tilde{L}^{-1}(C,R)\big]_{ij}. \tag{25}$$

In order to calculate the matrix $\epsilon$, whose elements are defined in Eq. (7), we note that if $\Delta\varphi_{ij}(p \to q) \sim 0$, we can linearize the cosine and write $\epsilon_{ij}$ as

$$\epsilon_{ij} \simeq \frac{\big(\Delta\varphi_{ij}\big)^2}{4}. \tag{26}$$

To compute the *functionability* of a control node, $C$, as it is defined in Eq. (9), we use Eqs. (25) and (26), where node $R$ and node $j$ are equivalent,

$$\epsilon_{iR}(p \to q) = \frac{\alpha^2 d_C^2}{4N^2}\Big(\sum_j\big[\tilde{L}^{-1}(C,R)\big]_{ij}\Big)^{2}. \tag{27}$$

$F_C \equiv \sum_i\sum_j \epsilon_{ij}(p \to q)$, or $F_C \equiv \sum_i\sum_R \epsilon_{iR}(p \to q)$, can be expanded in order to obtain a more compact expression. Using Eq. (27) and rearranging the summations,

$$F_C = \frac{\alpha^2 d_C^2}{4N^2}\sum_R^N\sum_i^{N-1}\Big(\sum_j^{N-1}\big[\tilde{L}^{-1}(C,R)\big]_{ij}\Big)^{2}. \tag{28}$$

Let us compute

$$\sum_i\Big(\sum_j\big[\tilde{L}^{-1}\big]_{ij}\Big)^{2} = \vec{1}^{\,T}\tilde{L}^{-T}\tilde{L}^{-1}\,\vec{1} = \sum_{ij}\Big[\big(\tilde{L}\tilde{L}^{T}\big)^{-1}\Big]_{ij}. \tag{29}$$

Therefore,

$$F_C = \frac{\alpha^2 d_C^2}{4N^2}\sum_R^N\sum_{ij}^{N-1}\Big[\big(\tilde{L}(C,R)\,\tilde{L}^{T}(C,R)\big)^{-1}\Big]_{ij}, \tag{30}$$

where we have used the matrix property $A^{-1}B^{-1} = (BA)^{-1}$. Finally, Eq. (30) can be normalized by $1/N^2$,

$$F_C = \frac{\alpha^2 d_C^2}{4N^4}\sum_R^N\sum_{ij}^{N-1}\Big[\big(\tilde{L}(C,R)\,\tilde{L}^{T}(C,R)\big)^{-1}\Big]_{ij}. \tag{31}$$
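Putting the pieces together, the sketch below evaluates the analytical expression up to its constant $\alpha$-dependent prefactor (which does not affect the ranking), $F_C \propto d_C^2 \sum_R \sum_{ij} [(\tilde{L}\tilde{L}^T)^{-1}]_{ij}$, for a hypothetical five-node tree, using plain Gaussian elimination so the example is dependency-free. Symmetric nodes must score identically:

```python
edges = [(0, 1), (0, 2), (1, 3), (2, 4)]   # tree: 1<->2 and 3<->4 symmetric
N = 5
A = [[0] * N for _ in range(N)]
for a, b in edges:
    A[a][b] = A[b][a] = 1
d = [sum(row) for row in A]
L = [[(d[i] if i == j else 0) - A[i][j] for j in range(N)] for i in range(N)]

def solve(M, b):                            # Gauss-Jordan solve of M x = b
    n = len(M)
    aug = [M[i][:] + [float(b[i])] for i in range(n)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(aug[r][k]))
        aug[k], aug[p] = aug[p], aug[k]
        for r in range(n):
            if r != k:
                f = aug[r][k] / aug[k][k]
                for c in range(k, n + 1):
                    aug[r][c] -= f * aug[k][c]
    return [aug[i][n] / aug[i][i] for i in range(n)]

def l_periphery(C):
    """Sum over reference nodes R of all entries of (Lt Lt^T)^{-1}."""
    total = 0.0
    for R in range(N):
        Lt = [[float(L[i][j]) for j in range(N) if j != R]
              for i in range(N) if i != C]
        M = [[sum(Lt[i][k] * Lt[j][k] for k in range(N - 1))  # Lt Lt^T
              for j in range(N - 1)] for i in range(N - 1)]
        # sum of all entries of M^{-1} equals the sum of x with M x = 1
        total += sum(solve(M, [1.0] * (N - 1)))
    return total

F = [d[C] ** 2 * l_periphery(C) for C in range(N)]
print(F)
```

As expected from the automorphism 1↔2, 3↔4, the symmetric pairs receive identical scores, and all values are positive.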

The values of *functionability* for the network in Fig. 1, taking $\alpha = 0.2$, are $\vec{F} = \{0.34, 0.43, 0.43, 0.18, 0.36, 0.36, 0.18\}$. As we can see from Fig. 4, nodes 1 and 2 obtain the highest scores, while nodes 3 and 6 have the lowest. The ranking of nodes is preserved regardless of the value of the frustration parameter, since all the dependence on $\alpha$ lies in a quadratic prefactor. The final phases of nodes 1 and 6 in the case $\alpha = 0.55$ are shown in polar coordinates in Fig. 3.

### B. Interpretation of *functionability*

In Sec. IV, we defined the *functionability* of node $C$, $F_C$, as a measure of the effect that a forced-delay parameter, introduced in the intrinsic properties of such a node, has on the synchronized reference state of the network; that is, the potential of a single node to switch the network to an asynchronous state, as shown in Fig. 3. Despite the fact that the general definition of *functionability* is built from a dynamical model, as described by Eqs. (6), (8), (9), and (27), we have derived an equivalent and very compact mathematical expression, given by Eq. (30). Moreover, precisely because of the physical meaning of the measure, the interpretation of the final analytical expression is conserved: high values of $F_C$ reflect the structural importance of the corresponding node. That is to say, nodes located at such positions within the network have the potential to shift the state into more possible configurations with the same perturbation of their intrinsic dynamics. As already pointed out, this effect may be either beneficial or disruptive for a given system, depending on its nature and functions.

Moreover, we can take a closer look at the analytical expression of *functionability*, Eq. (30), in order to understand its building blocks. For fixed values of the network size, $N$, and the frustration parameter, $\alpha$, we can identify two contributions to $F_C$:

- the square of the degree of the node, $d_C^2$, and
- the reduced Laplacian contribution, which we call the *L-periphery*,
  $$\text{L-periphery} \equiv \sum_R^N\sum_{ij}^{N-1}\Big[\big(\tilde{L}(C,R)\,\tilde{L}^{T}(C,R)\big)^{-1}\Big]_{ij}.$$

The first term stands for the importance of the degree of the node (see the first column in Fig. 5). The more neighbors a node has, the more likely it is to be a more *functional* node. Moreover, this effect is enhanced by the square of the degree.

Conversely, if we position the nodes using the Fruchterman–Reingold force-directed algorithm, available in the NetworkX Python library, which considers an attractive spring force between adjacent nodes and a repulsive electrical force between any pair of nodes, and we use the second contribution to *functionability* as an attribute for size and color, we obtain an intuitive, qualitative meaning for it. Nodes that have higher values of this contribution are located at the periphery of the visual representation of the network (see the second column in Fig. 5). Note also that the L-periphery is largely negatively correlated with betweenness centrality, as can be seen in Fig. 7.

Hence, higher values of *functionability* correspond to nodes that are both well connected and peripheral. Therefore, *functionability* provides us with more information than other classic measures of node importance (see the third column in Fig. 5). *Functionability* and its two contributions are depicted in Fig. 5. Note that the product of the squared degree and the L-periphery results in *functionability*.

## V. NEW INSIGHTS FROM *FUNCTIONABILITY*

Section IV B provided a thorough analysis of *functionability*, considering both its interpretation and its usefulness. This new centrality measure turns out to be unique in its analytic expression and definition when compared with other centrality measures, as we will see below.^{41}

To this end, we provide an example of the computation of *functionability* centrality for the nodes of a well-known real network: the frontal cortex network of the *Caenorhabditis elegans* worm^{42} (see Fig. 6). Our aim is not to examine the details of the interpretation of the results but rather to compare functionability with other well-known centrality measures.

Classic centrality measures, such as node degree, betweenness, closeness, eigenvector, and other spectral-based centralities, produce outcomes and node rankings different from those of *functionability*, even if they are similar (see the Appendix, Subsection 5). Other centrality measures also have different meanings.^{43–48} As can be seen in Fig. 7, *functionability* has a positive correlation with the degree, betweenness, and eigenvector centralities, although all the Pearson coefficient values are below 0.5. Figure 6 shows that the nodes with the highest values of *functionability* correspond to neurons that do not usually appear among those with the highest degree, betweenness, or eigenvector centralities; hence, *functionability* gives us additional information concerning such nodes.

We recall that nodes with higher values of *functionability* are more peripheral, as well as having higher degrees.

We may also wish to consider whether our newly created *functionability* is equivalent to other alternative centrality measures that have been developed recently, such as controllability, the core score, and collective influence. These measures target specific properties of the network and have not yet been incorporated into standard network libraries. For this reason, we cannot provide a quantitative comparison, but a close look at their definitions should shed some light on their relevance and will help us understand the meaning of functionability:

**Controllability, $C$.** Within the framework of control theory,^{49–51} special attention is paid to possible applications to complex networks, particularly brain networks.^{52,53} The term *control* refers to the ability of nodes to perturb the system in such a way that it reaches a desired state.^{54} In order to assess *controllability*, several methods have been developed; all of them assume a linear-response dynamical model.^{55} We highlight two of them that were specifically designed to evaluate regional controllability, rather than to act as a global measure of the network.

- Average controllability: As defined in Ref. 53, the average controllability identifies the nodes that can steer the network to many easily reachable states. The result we are most interested in concerns the mathematical expression for the particular case in which the set of control nodes reduces to one node, $K$, at a time, which hence provides a measure of the average controllability of each node of the network. It can be proved that
  $$C_a(K) \equiv \mathrm{Trace}\big(W_K^{-1}\big) = \sum_i\big[(I - A^2)^{-1}\big]_{Ki}. \tag{32}$$
  Equation (32) resembles the well-known Katz centrality measure, considering only odd-length walks. Moreover, if we expand it and keep the second-order dependency on the adjacency matrix, we recover a measure that is proportional to the degree of node $K$.
- Modal controllability: A node that has large modal controllability is one that participates in most of the dynamical modes of the linear system. In other words, it is a node that is able to access states that are difficult to reach.^{53,56} Modal controllability is defined as
  $$C_m(K) \equiv \sum_{j=1}^{N}\big(1 - \lambda_j(A)^2\big)\,v_{Kj}^2, \tag{33}$$
  where $\lambda_j(A)$ is the eigenvalue of the $j$th mode and $v_{Kj}$ is the contribution of node $K$ to the eigenvector of the $j$th mode.

Other definitions, such as boundary controllability, can similarly be compared to *functionability*, but their meaning moves further away from a measure of the effect on the states of a network. Equations (32) and (33) are mathematical expressions that differ from Eq. (30) and have a different meaning.^{57} However, by looking at them, we can also find similarities regarding the dependence on the adjacency matrix. In general, many centrality measures tend to be partially correlated.
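To illustrate the Katz-like structure of Eq. (32), the sketch below computes the row sums of $(I - A^2)^{-1}$ for a hypothetical five-node path graph, rescaling $A$ by $1/(1 + d_{\max})$ (a crude spectral-radius bound) so that the inverse exists, as is customary in this literature:

```python
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]        # hypothetical path graph
N = 5
A = [[0.0] * N for _ in range(N)]
for a, b in edges:
    A[a][b] = A[b][a] = 1.0
s = 1.0 / (1 + max(sum(row) for row in A))       # rescale: spectral radius < 1
A = [[s * x for x in row] for row in A]

# M = I - A^2; row sums of M^{-1} give C_a(K) for each node K
A2 = [[sum(A[i][k] * A[k][j] for k in range(N)) for j in range(N)]
      for i in range(N)]
M = [[(1.0 if i == j else 0.0) - A2[i][j] for j in range(N)]
     for i in range(N)]

def solve(M, b):                                 # Gauss-Jordan, M x = b
    n = len(M)
    aug = [M[i][:] + [b[i]] for i in range(n)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(aug[r][k]))
        aug[k], aug[p] = aug[p], aug[k]
        for r in range(n):
            if r != k:
                f = aug[r][k] / aug[k][k]
                for c in range(k, n + 1):
                    aug[r][c] -= f * aug[k][c]
    return [aug[i][n] / aug[i][i] for i in range(n)]

Ca = solve(M, [1.0] * N)    # Ca[K] = sum_i [(I - A^2)^{-1}]_{Ki}
print(Ca)
```

Nodes related by the path's mirror symmetry (0↔4 and 1↔3) get identical scores, and every score exceeds 1 because $(I - A^2)^{-1} = I + A^2 + A^4 + \cdots$ only adds nonnegative walk contributions to the identity.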

**Core score**, $CS$. Despite much attention having been paid to community detection algorithms, another well-known mesoscale property of a network is its core–periphery structure: which nodes are part of a more densely connected core and which are part of a sparsely connected periphery.^{58–62} Rombach *et al.*^{58} proposed a continuous measure of a node’s closeness to the core, called “coreness.” The algorithm is based on an optimization procedure that considers cores with different sizes and boundaries, according to a transition function, and assesses to what extent a node matches them. In order to compute coreness, the authors define the *core quality* as
$$R_\gamma = \sum_{i,j} A_{ij} C_{ij}, \quad \text{(34)}$$
where $\gamma$ is a vector that parametrizes the core quality. The elements $C_{ij}$ are normally computed as $C_{ij} = C_i C_j$, where the $C_i$ are the elements of the local core values. The aim is to find a core vector, $\vec{C}$, that maximizes $R_\gamma$ and is a normalized shuffle of the vector $\vec{C}^{\,*}$, which is determined using a transition function, providing a shuffled list of possible core vector values. For a given set of parameters that determine the transition function of the core, $\gamma(\alpha, \beta)$, they define the aggregate *core score* of each node $i$ as
$$CS(i) = Z \sum_\gamma C_i(\gamma) \times R_\gamma, \quad \text{(35)}$$
where $Z$ is a normalization factor.

The aim of developing the core score measure differs from that behind *functionability* in many aspects. Nevertheless, the L-periphery centrality, which is one of the contributors to the latter, may resemble the inverse of the core score when the network is characterized by a clear core–periphery structure. Note that the aim of *functionability* is not related to finding communities or the core of a network.

**Collective influence,** $CI$. A subset of measures aims to detect the most influential nodes in an adaptive way. Each method uses a different heuristic to rank all nodes, determines which node ranks as the greatest spreader, and removes it. Scores are then recomputed and the procedure is repeated iteratively until no nodes are left in the network. The simplest approach is the high-degree adaptive (HDA) method.^{63} Collective influence (CI), in turn, tries to fill the gap left by the fact that the preceding set of methods does not optimize a global objective function. In contrast, CI is defined in such a way that it potentially identifies the minimal set of nodes that, if removed, would cause the network to become disconnected, understood within the framework of network percolation theory. It does this, furthermore, by means of an energy cost function.^{64–66} If $G(q)$ represents the probability of the existence of a giant component^{63,67,68} in the limit $N \to \infty$, then the problem reduces to finding the minimum fraction $q_c$ such that
$$q_c = \min\{q \in [0,1] : G(q) = 0\}. \quad \text{(36)}$$
CI is computed by considering balls of different radius, $l$, whereby each size captures a different influence scale across the network. In Ref. 64, the authors show that the problem is equivalent to minimizing the cost function
$$E_l(\vec{n}) = \sum_{i=1}^{N} z_i \sum_{j \in \partial \mathrm{Ball}(i,l)} \left( \prod_{k \in P_l(i,j)} n_k \right) z_j, \quad \text{(37)}$$
where $z_i \equiv d_i - 1$ and $d_i$ stands for the degree of node $i$. The vector $\vec{n}$ represents whether a node, in the final configuration, belongs to the set of “influencers” or not. Then, the collective influence strength at level $l$ of node $i$ is
$$CI_l(i) = z_i \sum_{j \in \partial \mathrm{Ball}(i,l)} \left( \prod_{k \in P_l(i,j)} n_k \right) z_j, \quad \text{(38)}$$
and Eq. (37) becomes
$$E_l(\vec{n}) = \sum_{i=1}^{N} CI_l(i). \quad \text{(39)}$$
Therefore, in order to minimize Eq. (39), we need to remove the node with the greatest $CI_l$ value and iterate until a score is assigned to each node.

*Functionability* is not obtained from an optimization algorithm or any iterative procedure. However, the physical interpretation of the measure does also have a global scope in the following way: the mathematical definition of $F_C$ for a node $C$ is computed by considering the effect that a local perturbation has on the whole network.
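Before any node has been removed (all $n_k = 1$), Eq. (38) reduces to a sum over the nodes sitting exactly at shortest-path distance $l$ from $i$. A minimal Python sketch of this single-pass score follows; the adaptive variant of Refs. 64–66 would recompute it after each removal. The adjacency is given as neighbor lists:

```python
from collections import deque

def collective_influence(adj, l):
    # CI_l(i) = (d_i - 1) * sum of (d_j - 1) over nodes j on the boundary
    # of Ball(i, l), i.e., at shortest-path distance exactly l from i.
    N = len(adj)
    z = [len(neigh) - 1 for neigh in adj]
    scores = []
    for i in range(N):
        dist = {i: 0}
        queue = deque([i])
        while queue:               # breadth-first search truncated at depth l
            u = queue.popleft()
            if dist[u] == l:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        scores.append(z[i] * sum(z[j] for j, d in dist.items() if d == l))
    return scores
```

On a five-node path, for example, $CI_1$ singles out the middle node, whereas the leaves score zero because $z_i = d_i - 1$ vanishes for them.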

We have compared the proposed *functionability* centrality with two sets of measures of node importance. On the one hand, a correlation matrix between the built-in L-periphery centrality, the degree, and the eigenvector and betweenness centralities, as benchmark references, has been presented for four different networks (see Fig. 7). As can also be seen from Eq. (30), *functionability* provides us with a unique insight into the network, since it cannot be reproduced by other centralities. Note that the defined L-periphery centrality is highly negatively correlated with betweenness centrality. Moreover, both *functionability* and L-periphery are algorithm-free, parameter-free, and deterministic centralities, all of these properties being highly beneficial for network analysis. On the other hand, three more recently developed measures, $C$, $CS$, and $CI$, have been explored in order to compare their definitions of importance and their computation, as well as their mathematical resemblance to *functionability*. We conclude that $CS$ and $CI$ aim to determine a more structural type of centrality, such as revealing the participation of a node in the core or the set of nodes that would break the giant component apart. Hence, they may correlate in some ways with L-periphery. Conversely, controllability seeks the nodes that most enable the system to move toward a particular state, either those that are easy to access ($C_a$) or more modelike ones ($C_m$). Average controllability resembles the intuitive motivation of *functionability*, although it is neither mathematically equivalent nor does it have a similar physical interpretation or building blocks [see Eqs. (30) and (32)].

In addition, we should point out that *functionability* centrality does not rely on optimization procedures nor is it bound to the values of the parameters. Actually, we have proved that Eq. (30) is a compact mathematical expression for the measure. The value of the frustration parameter, $\alpha $, does not influence the ordering of the nodes (see the Appendix, Subsection 2 for more details).

*Functionability* is thus a unique measure of the effect of perturbing a node on the whole network by means of shifting the system to an asynchronous state. The final expression is a deterministic, parameter-free, and nonalgorithmic measure of centrality, with an underlying physical model that supports it and enables an intuitive interpretation.

## VI. WEIGHTED *FUNCTIONABILITY*, $FW$

Both the dynamic model and the analytic expression of functionability can easily be extended to weighted networks. A weighted network is defined by the elements of the adjacency matrix, $W$, thus,

where $wij\u2208R$.

In a general setting, the intensities of the connections in a network vary. Taking these weights into account, the GFKM in Eq. (6) can be rewritten as

and the analytical expression of functionability for weighted networks is an extension of Eq. (30),

where
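Since the closed-form weighted expressions are not reproduced above, the following Python sketch illustrates one natural reading of the extension: the adjacency matrix is replaced by the weight matrix $W$ and node degrees by strengths $s_i = \sum_j w_{ij}$ in the reduced-system construction of the Appendix, Subsection 4. Both this choice and the function name are assumptions of the sketch, not the paper's definition:

```python
import numpy as np

def functionability_weighted(W, alpha=0.1):
    # Assumed weighted extension: degrees -> strengths, A -> W, in the
    # reduced linear systems L~(C,R) phi = (s_C * alpha / N) * 1.
    W = np.asarray(W, dtype=float)
    N = W.shape[0]
    s = W.sum(axis=1)                       # node strengths
    F = np.zeros(N)
    for C in range(N):
        total = 0.0
        for R in range(N):
            rows = [j for j in range(N) if j != C]
            cols = [i for i in range(N) if i != R]
            L = np.array([[(s[j] if i == j else 0.0) - W[j, i]
                           for i in cols] for j in rows])
            phi = np.linalg.solve(L, np.full(N - 1, s[C] * alpha / N))
            total += np.sum(phi ** 2)
        F[C] = total / N
    return F
```

A sanity check of this construction: with all weights equal to one it reproduces the unweighted ranking of the wineglass example, and multiplying every weight by a common factor leaves the scores unchanged, since both $\tilde{L}$ and the right-hand side scale linearly.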

## VII. CONCLUSIONS

In the present work, we define *functionability*, a new centrality measure of the nodes in a network, in order to address the following question: which nodes, when perturbed, move the system from a synchronized state to a more asynchronous one? In fact, we aim to sort the nodes by their potential effect on the whole network when one specific change to the intrinsic dynamics of the node spreads over the entire oscillatory system, thereby disrupting the initial synchronized state. This issue may be relevant for the identification of critical nodes that are either beneficial, by enabling access to a broader spectrum of states, or harmful, by destroying overall synchronization. Hence, depending on the system we are considering, the most *functional* nodes have to be considered when looking for a potential enhancement of the diversity of attainable states or the inhibition of risky instabilities in the system.

We propose to resolve the issue by defining a new centrality measure, called *functionability*, and symbolized as $F$. $F$ is a property that can be computed for each node and depends only on the structural connectivity of the network and the position or role of the node within it.

The system is considered to be made up of as many oscillators as the network has nodes. The dynamics that rules the evolution of the system is based on that of the well-known Kuramoto phase oscillators. The *functionability* of a node measures the dispersion of the phases generated by the induction of a phase frustration parameter in the corresponding node. Despite $F$ being defined in terms of the phase differences between nodes coming from a dynamic model, in the present work, we derive a mathematical expression for centrality only in terms of the network structure, and hence, we avoid a large demand for computing power. Therefore, our $F$ measure is parameter-free, algorithm-free, and deterministic. In addition to *functionability* being determined by network structure, the definition based on the dynamic model keeps the rank ordering of the nodes invariant.

Moreover, the physical meaning of the measure remains valid and, hence, provides the centrality with an easy-to-handle interpretation of the results obtained. In order to assess the novelty of *functionability* and the potential new insight it offers into the nodes of a network, we compare our results with other centrality measures for different simple network topologies. This comparison enables us to provide an intuitive answer to the question: What do nodes with large values of $F$ have in common? We show that nodes that have a large number of neighbors, and hence a high degree, but lie outside the main core of the network, that is, peripheral nodes, have higher *functionability* scores. Because of this, $F$ may deliver nontrivial results regarding the importance of each node. To sum up, the nodes that enable the largest shifts from the synchronized state of a network are those that are both peripheral and of high degree. The magnitude of this effect can be quantified via the analytical expression of *functionability*.

Many real systems that are commonly modeled as a set of coupled oscillators may be assessed by our centrality *functionability*. We are usually concerned about such systems moving away from their stable synchronized state. Alternatively, we may wish for other systems to stay away from such complete synchronicity. Power grids and brain networks are examples of the former and latter situations. *Functionability* enables us to detect the nodes that are most central or relevant for moving the overall system away from synchronization. Epileptic attacks or power grid collapses may be derived from single nodes that, even if not located in the main core, change their intrinsic properties and spread asynchrony rapidly to the network, leading to potentially fatal states. It may be helpful to target such nodes in order to control both synchrony and asynchrony.

## ACKNOWLEDGMENTS

The authors acknowledge financial support from MINECO via Project Nos. FIS2015-71582-C2-2-P and PGC2018-094754-B-C22 (MINECO/FEDER,UE) and *Generalitat de Catalunya* via Grant No. 2017SGR341. G.R.-T. also acknowledges MECD for Grant No. FPU15/03053.

### APPENDIX: VALIDITY AND CALCULATIONS OF THE LINEAR MODEL

#### 1. An example of the functionability of a simple network

The Laplacian matrix of the network in Fig. 1 is

To calculate functionability, we then need to compute the reduced Laplacian matrix of the network in Fig. 1, taking node 0 as control and node 1 as reference, $L~(0,1)$, which is

#### 2. The linear model: Assumptions and validity

The final definition and usage of the functionability centrality, in Eq. (30), or Eq. (31) if we are interested in the normalized definition, is based on the nonlinear oscillatory dynamics explained in Sec. IV. However, several assumptions have been made in order to obtain a more compact and useful expression, the most restrictive one being the linearization of the model. This assumption relies on the fact that we are concerned with the state in which nodes are synchronized in frequency, so that dispersion comes from the breaking of the in-phase synchronized state. For a set of coupled oscillators in a network that follows the conditions described in Secs. II B and III A, this condition is broken when the frustration parameter, $\alpha$, becomes too large (see Fig. 2). In Fig. 9, the two methods of computing $F$ are compared for different values of $\alpha$. Regardless of the network topology, there is a threshold value at which the two methods diverge. This value changes from one network to another, as well as with network size. It is important to note, however, that when we allow enough simulation time steps, the two approaches become even more similar, because as the number of nodes in the network grows, the time needed to reach the stable state increases too. Nevertheless, our interest lies in the analytical expression of functionability, which enables a physical interpretation of the measure, rather than in the results obtained from the stable state of a dynamical system. Moreover, the validity of the latter shows a large variability, even from node to node; that is, the dynamics may easily reach chaotic behavior that is difficult to control and interpret, and hence may lead to different results depending on the frustration parameter and the initial conditions. In addition, Eq. (30) is proportional to $\alpha$, and therefore the ranking of nodes does not depend on this parameter but only on the connectivity structure.
To conclude, if we set the value of the frustration parameter small enough, the two approaches are equivalent. However, as we move to larger values, the dynamical-system approach will converge to a chaotic state for all nodes, while the analytical expression of functionability remains robust in the ranking of nodes. Moreover, the interpretation of the measure remains valid because the rank ordering is preserved.
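The small-$\alpha$ agreement can be checked directly by integrating the full nonlinear dynamics. The sketch below uses forward Euler on the four-node wineglass network of the Appendix, Subsection 4, with the coupling form implied by the linearized equations there, $\dot{\theta}_i = \sum_j A_{ij}\sin(\theta_j - \theta_i - \alpha_i)$ (an assumption of this sketch), and compares the stationary phase differences with the linear prediction $\varphi_i^0 = 3\alpha/4$ for control node $0$:

```python
import math

def simulate(adj, alphas, steps=4000, dt=0.05):
    # Forward-Euler integration of the frustrated phase dynamics,
    # starting from the fully in-phase state.
    theta = [0.0] * len(adj)
    for _ in range(steps):
        dtheta = [sum(math.sin(theta[j] - theta[i] - alphas[i])
                      for j in adj[i]) for i in range(len(adj))]
        theta = [t + dt * d for t, d in zip(theta, dtheta)]
    return theta

adj = [[1, 2, 3], [0, 2], [0, 1], [0]]   # wineglass network, node 0 is the hub
alpha = 0.1
theta = simulate(adj, [alpha, 0.0, 0.0, 0.0])   # control node C = 0
deviations = [abs((theta[i] - theta[0]) - 3 * alpha / 4) for i in (1, 2, 3)]
```

For $\alpha = 0.1$ the stationary phase differences already agree with $3\alpha/4$ to better than $10^{-3}$; raising $\alpha$ toward the threshold visible in Fig. 9 makes the two computations drift apart.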

#### 3. Is phase distance a proper metric?

In Eq. (7), we define a measure of the distance between two nodes or oscillators after a perturbation is made on the system. Our system consists of a set of phase oscillators, that is, oscillators whose only variable is the phase, and not the amplitude. Hence, the distance between two nodes is a rather simple function of an angular argument, symmetric with respect to $\pi$ and bounded between $0$ and $1$. In this section, we comment on the mathematical implications of this definition. Namely, the distance we use is not a proper metric, because it does not satisfy the triangle inequality, as we will see. Nevertheless, we will show that it can easily be related to the Euclidean distance, which is a proper metric. Since we never concatenate or add different distances, this drawback is not a problem; indeed, many optimization algorithms use nonmetric distances without modifying the expected results.

In order to give an intuitive description of the distance $\epsilon_{ij}$, we consider each oscillator to lie on a two-dimensional plane. Each oscillator is characterized by a phase, which evolves in time, and a radius, which is set equal to one (see Fig. 3, for example).

The Euclidean distance between two vectors, $\vec{a}$ and $\vec{b}$, in an $n$-dimensional space is defined as
$$d_E(\vec{a}, \vec{b}) = \sqrt{\sum_{k=1}^{n} (a_k - b_k)^2}. \quad \text{(A3)}$$

The squared Euclidean distance can be expanded as follows:
$$d_E^2(\vec{a}, \vec{b}) = \|\vec{a}\|^2 + \|\vec{b}\|^2 - 2\,\vec{a}\cdot\vec{b} = 2\left[1 - \cos\angle(\vec{a}, \vec{b})\right], \quad \text{(A4)}$$
where we have considered that the vectors $\vec{a}$ and $\vec{b}$ are normalized, that is, $\|\vec{a}\| = 1$ and $\|\vec{b}\| = 1$.

If we go back to the definition of the distance used between nodes, $\epsilon_{ij}$ in Eq. (7), we notice that its functional form is the same as that of the cosine distance between two vectors, albeit in one dimension. Looking at Eq. (A4), we can rewrite the cosine distance as
$$d_C(\vec{a}, \vec{b}) \equiv 1 - \cos\angle(\vec{a}, \vec{b}) = \frac{1}{2}\, d_E^2(\vec{a}, \vec{b}). \quad \text{(A5)}$$

From the equality in Eq. (A5), we can state that, although the cosine distance is not, in general, a proper metric, since it does not satisfy the triangle inequality, it can easily be interpreted by means of the Euclidean distance between the two vectors.

The four properties that a distance function $d(\vec{a}, \vec{b})$ must satisfy in order to define a metric are

1. $d(\vec{a}, \vec{b}) \geq 0$,
2. $d(\vec{a}, \vec{b}) = 0 \leftrightarrow \vec{a} = \vec{b}$,
3. $d(\vec{a}, \vec{b}) = d(\vec{b}, \vec{a})$, and
4. $d(\vec{a}, \vec{c}) \leq d(\vec{a}, \vec{b}) + d(\vec{b}, \vec{c})$.

Considering the cosine distance, $d_C(\vec{a}, \vec{b})$, condition 4 can be written as
$$1 - \cos\angle(\vec{a}, \vec{c}) \leq \left[1 - \cos\angle(\vec{a}, \vec{b})\right] + \left[1 - \cos\angle(\vec{b}, \vec{c})\right]. \quad \text{(A6)}$$

Equation (A6) is, in general, not satisfied. Let us provide a counterexample by considering three two-dimensional unit vectors, $\vec{a} = (1, 0)$, $\vec{b} = (\sqrt{2}/2, \sqrt{2}/2)$, and $\vec{c} = (0, 1)$,
$$d_C(\vec{a}, \vec{c}) = 1 > d_C(\vec{a}, \vec{b}) + d_C(\vec{b}, \vec{c}) = 2 - \sqrt{2} \approx 0.59. \quad \text{(A7)}$$

Nevertheless, we are interested in small phase differences and, hence, small angular arguments. In this situation, if $\angle(\vec{a}, \vec{b}) = \theta_1$, $\angle(\vec{b}, \vec{c}) = \theta_2$, and $\angle(\vec{a}, \vec{c}) = \theta_3 = \theta_1 + \theta_2$, Eq. (A6) can be approximated as
$$\frac{(\theta_1 + \theta_2)^2}{2} \leq \frac{\theta_1^2}{2} + \frac{\theta_2^2}{2}. \quad \text{(A8)}$$
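The counterexample is easy to verify numerically. A short Python check, writing the cosine distance between unit vectors directly in terms of their angles:

```python
import math

def d_cos(t1, t2):
    # cosine distance between two unit vectors with angles t1 and t2
    return 1.0 - math.cos(t1 - t2)

# Counterexample: a, b, c at angles 0, pi/4, pi/2.
lhs = d_cos(0.0, math.pi / 2)                                    # d(a, c) = 1
rhs = d_cos(0.0, math.pi / 4) + d_cos(math.pi / 4, math.pi / 2)  # 2 - sqrt(2)
```

Here `lhs` exceeds `rhs`, so the triangle inequality fails. For small angles the violation persists but is only of second order, $\theta_1\theta_2$, which is why the distance remains harmless for the small phase differences used in the main text.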

#### 4. Example of the calculation for a four-node network

In Sec. IV A, we derive the analytical expression of functionability for the case in which the linearity conditions are satisfied, under the considerations described in Sec. IV.

In order to understand the mathematical derivation of the centrality, we now go through a simple example on a four-node network, solving step by step the linear system arising from the problem stated.

Let us consider the network of Fig. 10 (the “wineglass” network: a triangle formed by nodes 0, 1, and 2, with node 3 attached to node 0). Its adjacency and degree matrices are
$$A = \begin{pmatrix} 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}, \qquad D = \begin{pmatrix} 3 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}. \quad \text{(A9)}$$

Using Eq. (A9), the Laplacian matrix is
$$L = D - A = \begin{pmatrix} 3 & -1 & -1 & -1 \\ -1 & 2 & -1 & 0 \\ -1 & -1 & 2 & 0 \\ -1 & 0 & 0 & 1 \end{pmatrix}. \quad \text{(A10)}$$

Going back to the definitions of the reference node and the control node, and considering the definition of the reduced Laplacian matrix in Eq. (21), we can write it down for the case of the wineglass network. As an example, we calculate $\tilde{L}(C, R)$ for $C = 0$ with $R = 0$ and $R = 1$,
$$\tilde{L}(0, 0) = \begin{pmatrix} 2 & -1 & 0 \\ -1 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad \tilde{L}(0, 1) = \begin{pmatrix} -1 & -1 & 0 \\ -1 & 2 & 0 \\ -1 & 0 & 1 \end{pmatrix}. \quad \text{(A11)}$$

Functionability is defined through the configuration described in Eq. (8). Thus, Eq. (16) for the control node $C$ leads to
$$\Omega = -\frac{d_C\,\alpha}{N}. \quad \text{(A12)}$$

Let us derive Eq. (10) for the wineglass network. The most general (linearized) system of equations for such a network is
$$\begin{aligned} \dot{\theta}_0 &= (\theta_1 - \theta_0) + (\theta_2 - \theta_0) + (\theta_3 - \theta_0) - 3\alpha_0, \\ \dot{\theta}_1 &= (\theta_0 - \theta_1) + (\theta_2 - \theta_1) - 2\alpha_1, \\ \dot{\theta}_2 &= (\theta_0 - \theta_2) + (\theta_1 - \theta_2) - 2\alpha_2, \\ \dot{\theta}_3 &= (\theta_0 - \theta_3) - \alpha_3. \end{aligned} \quad \text{(A13)}$$

In the stable state, for a given control node, $\dot{\theta}_i = \Omega \;\, \forall i$,
$$\Omega = \sum_j A_{ij}\left(\theta_j - \theta_i - \alpha_i\right) \quad \forall i. \quad \text{(A14)}$$

We can then immediately calculate the phase differences generated when each of the nodes acts as the control node, $C$. To do so, we need to compute the differences with respect to all nodes, that is, considering each of the nodes as the reference node, $R$.

**Control node 0** ($C = 0$):
$$\vec{\alpha} = (\alpha, 0, 0, 0), \qquad \Omega = -\frac{3\alpha}{4}. \quad \text{(A15)}$$

*(1) Reference node 0* ($R = 0$):
$$\begin{cases} \Omega = \varphi_2^0 - 2\varphi_1^0, \\ \Omega = \varphi_1^0 - 2\varphi_2^0, \\ \Omega = -\varphi_3^0, \end{cases} \qquad \begin{pmatrix} 2 & -1 & 0 \\ -1 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \varphi_1^0 \\ \varphi_2^0 \\ \varphi_3^0 \end{pmatrix} = \frac{3\alpha}{4}\,\vec{1}, \qquad \vec{\varphi}(0, 0) = \left[\tilde{L}(0, 0)\right]^{-1} \frac{3\alpha}{4}\,\vec{1}, \quad \text{(A16)}$$
$$\varphi_1^0 = \frac{3\alpha}{4}, \qquad \varphi_2^0 = \frac{3\alpha}{4}, \qquad \varphi_3^0 = \frac{3\alpha}{4}. \quad \text{(A17)}$$

*(2) Reference node 1* ($R = 1$):
$$\begin{cases} \Omega = \varphi_0^1 + \varphi_2^1, \\ \Omega = \varphi_0^1 - 2\varphi_2^1, \\ \Omega = \varphi_0^1 - \varphi_3^1, \end{cases} \qquad \begin{pmatrix} -1 & -1 & 0 \\ -1 & 2 & 0 \\ -1 & 0 & 1 \end{pmatrix} \begin{pmatrix} \varphi_0^1 \\ \varphi_2^1 \\ \varphi_3^1 \end{pmatrix} = \frac{3\alpha}{4}\,\vec{1}, \qquad \vec{\varphi}(0, 1) = \left[\tilde{L}(0, 1)\right]^{-1} \frac{3\alpha}{4}\,\vec{1}, \quad \text{(A18)}$$
$$\varphi_0^1 = -\frac{3\alpha}{4}, \qquad \varphi_2^1 = 0, \qquad \varphi_3^1 = 0. \quad \text{(A19)}$$

*(3) Reference node 2* ($R = 2$):
$$\begin{cases} \Omega = \varphi_0^2 - 2\varphi_1^2, \\ \Omega = \varphi_0^2 + \varphi_1^2, \\ \Omega = \varphi_0^2 - \varphi_3^2, \end{cases} \qquad \begin{pmatrix} -1 & 2 & 0 \\ -1 & -1 & 0 \\ -1 & 0 & 1 \end{pmatrix} \begin{pmatrix} \varphi_0^2 \\ \varphi_1^2 \\ \varphi_3^2 \end{pmatrix} = \frac{3\alpha}{4}\,\vec{1}, \qquad \vec{\varphi}(0, 2) = \left[\tilde{L}(0, 2)\right]^{-1} \frac{3\alpha}{4}\,\vec{1}, \quad \text{(A20)}$$
$$\varphi_0^2 = -\frac{3\alpha}{4}, \qquad \varphi_1^2 = 0, \qquad \varphi_3^2 = 0. \quad \text{(A21)}$$

*(4) Reference node 3* ($R = 3$):
$$\begin{cases} \Omega = \varphi_0^3 + \varphi_2^3 - 2\varphi_1^3, \\ \Omega = \varphi_0^3 + \varphi_1^3 - 2\varphi_2^3, \\ \Omega = \varphi_0^3, \end{cases} \qquad \begin{pmatrix} -1 & 2 & -1 \\ -1 & -1 & 2 \\ -1 & 0 & 0 \end{pmatrix} \begin{pmatrix} \varphi_0^3 \\ \varphi_1^3 \\ \varphi_2^3 \end{pmatrix} = \frac{3\alpha}{4}\,\vec{1}, \qquad \vec{\varphi}(0, 3) = \left[\tilde{L}(0, 3)\right]^{-1} \frac{3\alpha}{4}\,\vec{1}, \quad \text{(A22)}$$
$$\varphi_0^3 = -\frac{3\alpha}{4}, \qquad \varphi_1^3 = 0, \qquad \varphi_2^3 = 0. \quad \text{(A23)}$$

**Control node 1** ($C = 1$):
$$\vec{\alpha} = (0, \alpha, 0, 0), \qquad \Omega = -\frac{2\alpha}{4} = -\frac{\alpha}{2}. \quad \text{(A24)}$$

*(1) Reference node 0* ($R = 0$):
$$\begin{cases} \Omega = \varphi_1^0 + \varphi_2^0 + \varphi_3^0, \\ \Omega = \varphi_1^0 - 2\varphi_2^0, \\ \Omega = -\varphi_3^0, \end{cases} \qquad \begin{pmatrix} -1 & -1 & -1 \\ -1 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \varphi_1^0 \\ \varphi_2^0 \\ \varphi_3^0 \end{pmatrix} = \frac{\alpha}{2}\,\vec{1}, \quad \text{(A25)}$$
$$\vec{\varphi}(1, 0) = \left[\tilde{L}(1, 0)\right]^{-1} \frac{\alpha}{2}\,\vec{1}, \qquad \varphi_1^0 = -\frac{5\alpha}{6}, \quad \varphi_2^0 = -\frac{\alpha}{6}, \quad \varphi_3^0 = \frac{\alpha}{2}. \quad \text{(A26)}$$

*(2) Reference node 1* ($R = 1$):
$$\begin{cases} \Omega = \varphi_2^1 + \varphi_3^1 - 3\varphi_0^1, \\ \Omega = \varphi_0^1 - 2\varphi_2^1, \\ \Omega = \varphi_0^1 - \varphi_3^1, \end{cases} \qquad \begin{pmatrix} 3 & -1 & -1 \\ -1 & 2 & 0 \\ -1 & 0 & 1 \end{pmatrix} \begin{pmatrix} \varphi_0^1 \\ \varphi_2^1 \\ \varphi_3^1 \end{pmatrix} = \frac{\alpha}{2}\,\vec{1}, \quad \text{(A27)}$$
$$\vec{\varphi}(1, 1) = \left[\tilde{L}(1, 1)\right]^{-1} \frac{\alpha}{2}\,\vec{1}, \qquad \varphi_0^1 = \frac{5\alpha}{6}, \quad \varphi_2^1 = \frac{2\alpha}{3}, \quad \varphi_3^1 = \frac{4\alpha}{3}. \quad \text{(A28)}$$

*(3) Reference node 2* ($R = 2$). We can proceed as in the previous cases and obtain
$$\vec{\varphi}(1, 2) = \left[\tilde{L}(1, 2)\right]^{-1} \frac{\alpha}{2}\,\vec{1}, \qquad \tilde{L}(1, 2) = \begin{pmatrix} 3 & -1 & -1 \\ -1 & -1 & 0 \\ -1 & 0 & 1 \end{pmatrix}, \qquad \varphi_0^2 = \frac{\alpha}{6}, \quad \varphi_1^2 = -\frac{2\alpha}{3}, \quad \varphi_3^2 = \frac{2\alpha}{3}. \quad \text{(A29)}$$

*(4) Reference node 3* ($R = 3$):
$$\vec{\varphi}(1, 3) = \left[\tilde{L}(1, 3)\right]^{-1} \frac{\alpha}{2}\,\vec{1}, \qquad \tilde{L}(1, 3) = \begin{pmatrix} 3 & -1 & -1 \\ -1 & -1 & 2 \\ -1 & 0 & 0 \end{pmatrix}, \qquad \varphi_0^3 = -\frac{\alpha}{2}, \quad \varphi_1^3 = -\frac{4\alpha}{3}, \quad \varphi_2^3 = -\frac{2\alpha}{3}. \quad \text{(A30)}$$

**Control node 2** ($C = 2$):
$$\vec{\alpha} = (0, 0, \alpha, 0), \qquad \Omega = -\frac{2\alpha}{4} = -\frac{\alpha}{2}. \quad \text{(A31)}$$

*(1) Reference node 0* ($R = 0$):
$$\vec{\varphi}(2, 0) = \left[\tilde{L}(2, 0)\right]^{-1} \frac{\alpha}{2}\,\vec{1}, \qquad \tilde{L}(2, 0) = \begin{pmatrix} -1 & -1 & -1 \\ 2 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad \varphi_1^0 = -\frac{\alpha}{6}, \quad \varphi_2^0 = -\frac{5\alpha}{6}, \quad \varphi_3^0 = \frac{\alpha}{2}. \quad \text{(A32)}$$

*(2) Reference node 1* ($R = 1$):
$$\vec{\varphi}(2, 1) = \left[\tilde{L}(2, 1)\right]^{-1} \frac{\alpha}{2}\,\vec{1}, \qquad \tilde{L}(2, 1) = \begin{pmatrix} 3 & -1 & -1 \\ -1 & -1 & 0 \\ -1 & 0 & 1 \end{pmatrix}, \qquad \varphi_0^1 = \frac{\alpha}{6}, \quad \varphi_2^1 = -\frac{2\alpha}{3}, \quad \varphi_3^1 = \frac{2\alpha}{3}. \quad \text{(A33)}$$

*(3) Reference node 2* ($R = 2$):
$$\vec{\varphi}(2, 2) = \left[\tilde{L}(2, 2)\right]^{-1} \frac{\alpha}{2}\,\vec{1}, \qquad \tilde{L}(2, 2) = \begin{pmatrix} 3 & -1 & -1 \\ -1 & 2 & 0 \\ -1 & 0 & 1 \end{pmatrix}, \qquad \varphi_0^2 = \frac{5\alpha}{6}, \quad \varphi_1^2 = \frac{2\alpha}{3}, \quad \varphi_3^2 = \frac{4\alpha}{3}. \quad \text{(A34)}$$

*(4) Reference node 3* ($R = 3$):
$$\vec{\varphi}(2, 3) = \left[\tilde{L}(2, 3)\right]^{-1} \frac{\alpha}{2}\,\vec{1}, \qquad \tilde{L}(2, 3) = \begin{pmatrix} 3 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & 0 & 0 \end{pmatrix}, \qquad \varphi_0^3 = -\frac{\alpha}{2}, \quad \varphi_1^3 = -\frac{2\alpha}{3}, \quad \varphi_2^3 = -\frac{4\alpha}{3}. \quad \text{(A35)}$$

**Control node 3** ($C = 3$):
$$\vec{\alpha} = (0, 0, 0, \alpha), \qquad \Omega = -\frac{\alpha}{4}. \quad \text{(A36)}$$

*(1) Reference node 0* ($R = 0$):
$$\varphi_1^0 = \frac{\alpha}{4}, \qquad \varphi_2^0 = \frac{\alpha}{4}, \qquad \varphi_3^0 = -\frac{3\alpha}{4}. \quad \text{(A37)}$$

*(2) Reference node 1* ($R = 1$):
$$\varphi_0^1 = -\frac{\alpha}{4}, \qquad \varphi_2^1 = 0, \qquad \varphi_3^1 = -\alpha. \quad \text{(A38)}$$

*(3) Reference node 2* ($R = 2$):
$$\varphi_0^2 = -\frac{\alpha}{4}, \qquad \varphi_1^2 = 0, \qquad \varphi_3^2 = -\alpha. \quad \text{(A39)}$$

*(4) Reference node 3* ($R = 3$):
$$\varphi_0^3 = \frac{3\alpha}{4}, \qquad \varphi_1^3 = \alpha, \qquad \varphi_2^3 = \alpha. \quad \text{(A40)}$$

Once we have computed $\varphi_i^R(C)$ for all possible control and reference nodes, we can write down the matrix of phase differences, $\Delta\Phi(C)$, for each control node,

Using the distance matrix, Eq. (7), its approximation, in Eq. (26), and the analytical expression of functionability, in Eq. (30), we compute for each control node,

Therefore, if we sum all the elements of the $\Delta \Phi $ matrix, after squaring them, for each control node and divide the result by $4$, we obtain

which gives the same result as Eq. (30) and can be checked through simulation of the dynamical system. If the frustration parameter is small enough, all of the expressions are equivalent. As $\alpha$ increases, the dynamical system may reach a chaotic regime or be shifted out of frequency synchronization (see the Appendix, Subsection 2).
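The whole worked example can be reproduced programmatically. The sketch below rebuilds each reduced Laplacian $\tilde{L}(C, R)$ from the adjacency matrix (rows $j \neq C$, columns $i \neq R$, a construction read off from the matrices above), solves $\tilde{L}(C, R)\,\vec{\varphi} = (d_C\,\alpha/N)\,\vec{1}$, and applies the sum-of-squares recipe just described:

```python
import numpy as np

def functionability(A, alpha=0.1):
    # Linear-regime functionability for the worked example: accumulate the
    # squared phase differences over all reference nodes R for each control
    # node C, then divide by N (the recipe stated in the text).
    A = np.asarray(A, dtype=float)
    N = A.shape[0]
    d = A.sum(axis=1)
    F = np.zeros(N)
    for C in range(N):
        total = 0.0
        for R in range(N):
            rows = [j for j in range(N) if j != C]
            cols = [i for i in range(N) if i != R]
            L = np.array([[(d[j] if i == j else 0.0) - A[j, i]
                           for i in cols] for j in rows])
            phi = np.linalg.solve(L, np.full(N - 1, d[C] * alpha / N))
            total += np.sum(phi ** 2)
        F[C] = total / N
    return F

# Wineglass network: triangle 0-1-2 with the extra "stem" edge 0-3.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
F = functionability(A)
```

The ranking that comes out, $F_1 = F_2 > F_3 > F_0$, singles out the hub of the core (node 0) as the least functional node of this small example.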

Functionability can be normalized in several ways, as we suggest in Eq. (31), but we stay with the most general result for this case.

#### 5. Classic centralities in *C. elegans* network

We provide the figures corresponding to the 131 nodes of the *C. elegans* frontal cortex network, characterized by a color and a radius proportional to the values of the L-periphery, degree, betweenness, and eigenvector centralities. Nodes are located at their real $xy$ positions. The top 10 neuron labels are shown in descending order (Figs. 11–14).

## REFERENCES

*Advances in Cardiac Signal Processing*(Springer, Berlin, 2007), pp. 121–165.

*Lecture Notes in Physics*(Springer, Berlin, 1975), Vol. 30.