Complex networks are essentially heterogeneous not only in the basic properties of the constituent nodes, such as their degree, but also in the effects that these have on the global dynamical properties of the network. Networks of coupled identical phase oscillators are good examples for analyzing these effects, since an overall synchronized state can be taken as a reference state. A small variation of intrinsic node parameters may cause the system to move away from synchronization, and a new phase-locked stationary state can be achieved. We propose a measure of phase dispersion that quantifies the functional response of the system to a given local perturbation. As a particular implementation, we propose a variation of the standard Kuramoto model in which the nodes of a complex network interact with their neighboring nodes through a node-dependent frustration parameter. The final stationary phase-locked state then depends on the particular frustration parameter at each node and also on the network topology. We exploit this scenario by introducing individual frustration parameters and measuring their effect on the whole network in terms of the phase dispersion, which depends only on the topology of the network and on the choice of the particular node that is perturbed. This enables us to define a characteristic of each node, its functionability, that can be computed analytically in terms of the network topology. Finally, we provide a thorough comparison with other centrality measures.

Identical coupled Kuramoto oscillators tend to synchronize in the stationary state. This means that they will run with exactly the same phase and the same frequency. However, the introduction of a common forced phase difference, also called a frustration parameter, results in the phases of neighboring oscillators repelling each other, and in the long run, new configurations of partially synchronized oscillators become stable. Here, we make use of this mechanism to show that individual frustration parameters are crucial in generating different functional structures and allow us to define a new centrality measure, which we call “functionability.” The practical meaning of this measure can be important in complex networks such as the brain or power grids.

Synchronization has become one of the most paradigmatic examples of emergent properties in complex systems,1,2 since varying the degree of interaction between the oscillatory units of a discrete system gives access to a variety of macroscopic states. Among the most studied such systems, because of its inherent simplicity, is the Kuramoto model (KM), in which phase oscillators interact continuously with other units through a sine function of the phase difference.3–5 In all-to-all models, there is a transition from an incoherent state to a coherent one that depends only on the relative strength of the two competing forces: the dispersion of intrinsic frequencies and the intensity of the coupling between units.

Over the last four decades, the KM has been thoroughly studied in regular lattices, and, with the sudden interest in complex networks, its role in irregular connectivity patterns has been heavily exploited.6 This has been achieved not only by analyzing synchronization properties (order parameters, control parameters, time to synchronize, etc.) but also through the use of the path to synchronization of neighboring units in order to identify higher-order connectivity patterns, for instance, in communities that form complex networks at different hierarchical levels.7–9 As already stated, in the original KM, the emphasis is on the relative strength of the two antagonistic contributions: frequency dispersion and coupling strength. However, when complex topologies are considered, it is important to disentangle these effects; for this reason, special interest has arisen concerning the evolution of identical oscillators. In particular, a simple change in the coupling function, by inserting a phase frustration in the argument of the sine function, results in identical oscillators now being unable to synchronize, and it generates complex patterns of phase differences, which have been related to topological symmetries of the network.10,11

The introduction of an identical frustration parameter in all the coupling terms produces a global effect on the network.10 However, in complex network science, there is a special interest in understanding what role the individual nodes play in the behavior of the overall network. Many centrality measures that are key to this field have been considered for many years in the social sciences, and there are more recent proposals such as Google Search PageRank as well as other measures related to the concept of controllability. However, to the best of our knowledge, there is no measure of the effect that a given node can have on the range of states a network can achieve, which, in terms of phase oscillators, corresponds to the different phase-difference patterns the network can traverse. Our proposal is precisely to characterize the effect that a node can have on the global asynchronous state.

Among the systems where this new concept that we name “functionability” has an immediate application is the rapidly evolving field of brain networks.12 The molecular and cellular mechanisms of synapse formation and plasticity shape and guide the development and change of synaptic connections in the long run.13–15 Therefore, brain networks, on a short time scale, are considered to be static. Such structural networks are the substrate on which different temporal coactivation patterns can occur, also known as functional networks.16–21 All the functionalities of the brain, at either the low or the high level, are captured by different networks which nonetheless occur within the same physical medium. How does this essential feature of the brain arise? What mechanisms are responsible for a static network undergoing many different states? Are there specialized brain regions that are better at this job? Are they easy to identify?

Higher functionability may be positive for the system, as it reflects the capacity of a node to be involved in different tasks and can result in the network state shifting into one involving more complex temporal relationships between modules. However, highly functional nodes can also be potentially dangerous in systems where tiny perturbations can produce cascadelike effects that completely disrupt the network dynamics. An example of this is offered by the transfer networks of power grids, which have been widely studied in the field of complex networks, focusing on their structure to assess the damage of failures.22,23 However, power grids are highly sensitive to oscillatory dynamics and the synchronization of AC power24 and hence to perturbations of the phase lags between individual agents. Other such examples include the synchronization in heartbeats,25 multiprocessors and multicore processors,25 and traffic signal synchronization.26 

We are specifically interested in detecting the nodes that have a major impact on the network by enabling a broader spectrum of states or a larger dispersion from the “ground” state. Many centrality measures have been developed and defined over the years,27 some of them are even related to the dynamical properties of the nodes;28 but we specifically target a measure of variability or functionability that can be associated with the physical phenomenon of synchronization in order to provide it with meaning.1 Hence, we base a great deal of our work on an oscillatory dynamical model; but we aim to arrive at a compact mathematical expression that emerges from it.

The structure of the paper is as follows: in Sec. II, we provide a brief description and the main results of Kuramoto and frustrated Kuramoto oscillatory models. In Sec. III, we introduce a new oscillatory model based on the previous one: the general frustrated Kuramoto model (GFKM) as well as a measure to quantify the phase dispersion of the system when a perturbation is produced. This model is the building block of our main contribution, that is, the definition of a new centrality measure: functionability. Its mathematical expression and interpretation are provided in Sec. IV. Section V then presents a comparison between other well-known centrality measures, in order to stress the novelty of functionability, and Sec. VI considers weighted functionability. Finally, Sec. VII closes the paper with our conclusions. Further details concerning the validity of the model, as well as an example of a toy network, can be found in the  Appendix.

The phenomenon of synchronization in complex networks has been widely studied in recent decades, and several models have been designed to reproduce and help us understand the behavior of real oscillatory systems.1,29,30 In 1975, Kuramoto presented one of the best-known and most compact models:31 a set of N phase oscillators characterized by their phase, $\theta_i(t)$, coupled by the sine of their phase differences. Each unit is explicitly influenced by its nearest neighbors, $j\in\Gamma_i$, with coupling strength $K_{ij}$. When oscillators are connected in a particular topology, the set of nearest neighbors is defined by the adjacency matrix, $A$, of the network $G(V,E)$. The set of nodes comprising the network, $V(G)$, consists of the oscillators, and the links between oscillators are represented by the set of network edges, $E(G)$.27

Kuramoto further assumed homogeneous interactions: $K_{ij}=K\ \forall (i,j)$. Therefore, considering both the topology of the network that maps the connectivity between oscillators and the coupling strength, the dynamics of the system can be written as

\frac{d\theta_i}{dt}=\omega_i+K\sum_{j\in\Gamma_i}A_{ij}\sin(\theta_j-\theta_i),\qquad i=1,\dots,N,
(1)

where ωi represents the natural frequency of each oscillator.

Previous studies5 show that the long-time properties of the population of oscillators, in the absence of noise, are determined by the coupling strength, K, and the distribution of natural frequencies.

We consider that two nodes are phase synchronized when their phases have the same value,

\theta_i(t)-\theta_j(t)=0\quad\forall\, t>t_0.
(2)

When the phase difference has a constant value, that is, $\theta_i(t)-\theta_j(t)=c\ \forall t>t_0$, we say there is phase locking between nodes $i$ and $j$. Similarly, we consider that two nodes are frequency synchronized when their frequencies have the same value:

\frac{d\theta_i(t)}{dt}-\frac{d\theta_j(t)}{dt}=0\quad\forall\, t>t_0.
(3)

When two nodes are both phase and frequency synchronized, we say these nodes are completely synchronized.

What is the effect of the K and ωi parameters in Eq. (1)? An increase of the coupling strength accelerates the move toward closer phases and, hence, promotes the synchronization of the system. Conversely, a broader spectrum of natural frequencies, that is, larger differences within the set {ωi}, tends to cause the phases of the oscillators to diverge (see Fig. 1).
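As an illustration, the following minimal Python sketch integrates Eq. (1) with a plain Euler scheme on an arbitrary NetworkX graph. The default natural-frequency and initial-phase distributions follow the caption of Fig. 1, but the example graph, time step, and number of steps are illustrative assumptions rather than the settings used for the figures.

```python
import numpy as np
import networkx as nx

def kuramoto_euler(G, K=1.0, dt=0.01, steps=5000, seed=0):
    """Euler integration of Eq. (1) on the graph G (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    A = nx.to_numpy_array(G)                          # adjacency matrix
    N = A.shape[0]
    omega = rng.normal(3.0, 1.0, N)                   # g(omega) = N(3, 1), as in Fig. 1
    theta = rng.normal(0.0, 0.5, N)                   # f(theta_0) = N(0, 0.5), as in Fig. 1
    for _ in range(steps):
        # sum_j A_ij * sin(theta_j - theta_i) for every node i
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + dt * (omega + K * coupling)
    return theta

# Arbitrary example: a small random graph with strong coupling (K = 10, cf. Fig. 1)
G = nx.erdos_renyi_graph(20, 0.3, seed=1)
theta_final = kuramoto_euler(G, K=10.0)
```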

FIG. 1.

For each node in the network (upper figure), we see the temporal evolution of the Kuramoto model defined in Eq. (1) with a unimodal normal distribution of natural frequencies g(ω)=N(3,1) and random initial phases f(θ0)=N(0,0.5). The lower panels correspond to 2 different values of the coupling strength: K=0.2 (middle) and K=10 (bottom).


In this paper, we assume that the network forms a single connected component. This ensures that when all the oscillators have the same natural frequency, $\omega_0$, there is only one attractor of the dynamics: the completely synchronized regime, $d\theta_i(t)/dt=\omega_0\ \forall i$.7,8 In the case of unimodal distributions of the natural frequency, the system becomes synchronized as long as the coupling parameter, $K$, is larger than a threshold value, $K_C$.5 The particular evolution to such a state depends strongly on the connectivity structure or the network topology.32

In Sec. II A, we stated that a connected system described by a KM reaches complete synchronization when all the oscillators are identical, $\omega_i=\omega_0\ \forall i$. Besides tuning the natural frequency of each oscillator, is there an alternative way of breaking the natural synchrony of the system? Previous studies10,33,34 have suggested the introduction of a phase “frustration” parameter, $\alpha$, into the dynamics of the system as follows:

\frac{d\theta_i}{dt}=\omega_0+K\sum_{j\in\Gamma_i}A_{ij}\sin(\theta_j-\theta_i-\alpha).
(4)

It has been shown that, as long as $\alpha<\pi/2$, the system becomes frequency synchronized to a resulting frequency $\Omega$, with $d\theta_i(t)/dt=\Omega\neq\omega_0\ \forall i$. Nevertheless, $\alpha$ forces connected nodes to lock at a nonzero phase difference and hence breaks the phase synchronization. The magnitude of such locking is determined by both the network topology and the parameter of the dynamics, $\alpha$. However, complete synchronization is conserved for topologically symmetric nodes, a phenomenon that has been called remote synchronization. As the frustration increases, the asynchronous groups move away from each other. In order to quantify the level of synchronization between pairs of oscillators, we define a local order parameter between oscillators based on Ref. 7,

\rho_{ij}=\cos\bigl(\theta_i(t)-\theta_j(t)\bigr).
(5)

Equation (5) measures the correlation between pairs of oscillators under the stationary regime, which is invariant under temporal translation.35 

In Fig. 2, four groups are internally synchronized but mutually asynchronous. The groups obtained capture the natural symmetries of the network: nodes 1 and 2, nodes 3 and 6, nodes 4 and 5, and node 0, isolated.
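A sketch of how this grouping can be obtained numerically is shown below: it integrates the frustrated dynamics of Eq. (4) for identical oscillators and then builds the matrix $\rho_{ij}$ of Eq. (5) from the final phases. The test graph, the value of $\alpha$, and the integration parameters are illustrative assumptions.

```python
import numpy as np
import networkx as nx

def frustrated_kuramoto(G, alpha=0.5, K=1.0, dt=0.01, steps=20000, seed=0):
    """Integrate Eq. (4): identical oscillators with a common frustration alpha."""
    rng = np.random.default_rng(seed)
    A = nx.to_numpy_array(G)
    theta = rng.normal(0.0, 0.5, A.shape[0])          # initial phases
    for _ in range(steps):
        coupling = (A * np.sin(theta[None, :] - theta[:, None] - alpha)).sum(axis=1)
        theta = theta + dt * K * coupling             # omega_0 = 0 without loss of generality
    return theta

def local_order_parameter(theta):
    """Matrix rho_ij = cos(theta_i - theta_j), Eq. (5)."""
    return np.cos(theta[:, None] - theta[None, :])

G = nx.cycle_graph(6)                                  # any small test graph
rho = local_order_parameter(frustrated_kuramoto(G, alpha=0.5))
```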

FIG. 2.

Upper panel: Evolution of the dynamics (4) of the oscillations for the nodes of the network in Fig. 1. The parameters are set to K=1, ωi=0i, and the initial phases follow a normal unimodal distribution N(0,0.5). The frustration parameter, α, is set to 0.1 (upper), 0.5 (middle), and 0.8 (lower). Lower panel: Matrix showing the local order parameter defined in Eq. (5) between oscillators.


In Sec. II, we have explored two oscillatory models. An oscillatory system described by the KM can always achieve complete synchronization, regardless of its topology and frequency distribution (see the conditions in Sec. II A). In contrast, an equivalent system described by the FKM can always achieve frequency synchronization but, generally, not phase synchronization. The phase locking is completely determined by the symmetry of the network, if it exists (Sec. II B).

However, the KM and the FKM do not provide information on specific nodes, only on the network as a whole. We seek an intrinsic parameter, characterizing each oscillator, that could move the system away from its natural synchronized state. What would be the effect of a phase frustration parameter that characterizes each oscillator distinctly? Several studies have focused on the effect of different natural frequencies of the oscillators, but we may be concerned with other types of natural properties connected to the phase shift between oscillators.

We would like to identify the nodes that have the largest effect in leading the whole system away from complete synchronization. To do so, we need to establish which nodes have the greatest capacity, with only a small perturbation, to produce a large dispersion in the phases of the population. In the next paragraphs, we build a model that is capable of breaking the natural phase synchronization and search for central nodes that are best suited to doing this. To this end, we introduce a dynamic model based on FKM: the general frustrated Kuramoto model (GFKM), which enables us to characterize each node individually by means of an intrinsic frustration parameter.

In the present work, we consider the natural generalization of the FKM by considering the frustration phase parameter to be intrinsic to each oscillator, αi, rather than a homogeneous property of the population. This assumption may depict a more realistic scenario, in which oscillators represent real systems with individual properties that are determined by the nature of each oscillator. The more general model, which we call the GFKM, is determined by the dynamics

\frac{d\theta_i}{dt}=\omega_0+K\sum_{j\in\Gamma_i}A_{ij}\sin(\theta_j-\theta_i-\alpha_i),
(6)

where αi is an intrinsic parameter of each oscillator. Other considerations regarding the model can be found in Refs. 36 and 37.

We may find very distinct distributions of g(α), each of them providing the system with different behaviors. As our main goal is to localize the nodes that are most likely to move the system away from synchronization, we consider two possible distributions, although other interesting insights may be derived from different possibilities. We seek analytical results in order to gain a better understanding of the model and the effect of an intrinsic phase frustration parameter. Hence,

  • $\alpha_i=\alpha\ \forall i$. This is a homogeneous phase frustration parameter, and so the GFKM reduces to the FKM, as previously described. Remote synchronization emerges in symmetric nodes.

  • $\alpha_i=\alpha\neq 0$; $\alpha_j=0\ \forall j\neq i$, hence only node $i$ has a nonzero phase frustration parameter. In this way, we break the overall problem into many individual problems. Here, the question that naturally arises is: how far can a single node lead the system away from complete synchronization?

Let us first suggest a simple way to measure phase dispersion between oscillators.

Consider a system that consists of a network of coupled oscillators. Suppose that a mechanism moves the system from an initial parameter configuration, p, to a new parameter configuration, q. Since each oscillator is characterized by a phase, θi(t), the phases in configuration p, θ(p), will transform to updated phases in configuration q, θ(q). We assume that configurations p and q are two possible stationary states characterized by a set of values for the phase locking between the oscillators. In this situation, we define the effect on nodes (i,j), εij, generated by the configuration shifting from p to q as

\varepsilon_{ij}(p\to q)\equiv\frac{1-\cos(\Delta\phi_{ij})}{2},
(7)

where $\phi_{ij}\equiv\theta_j-\theta_i$ and $\Delta\phi_{ij}\equiv\phi_{ij}(q)-\phi_{ij}(p)$.38 Therefore, it is a measure of the change in the phase difference between nodes $i$ and $j$ that, as it is defined in the stationary regime, is time independent. Equation (7) has the following properties: $\varepsilon_{ij}(p\to q)=\varepsilon_{ji}(p\to q)$, $\varepsilon_{ii}(p\to q)=0$, and $\varepsilon_{ij}(p\to p)=0$. Moreover, $\varepsilon_{ij}(p\to q)\in[0,1]$. In other words, if nodes $(i,j)$ were initially in phase and they change to being in antiphase, the effect, or the change in phase difference, is the largest possible: $\varepsilon_{ij}(\Delta\phi=\pi)=1$. When the change of configuration produces no change, that is, the phase difference between nodes $i$ and $j$ remains unchanged, the value of the effect is zero: $\varepsilon_{ij}(\Delta\phi=0)=0$.

Further considerations regarding the properties of εij as a distance metric can be found in the  Appendix, Subsection 3.
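The effect measure of Eq. (7) is straightforward to evaluate numerically; the sketch below computes the full matrix $\varepsilon_{ij}$ from two stationary phase configurations (the function and argument names are ours, chosen for illustration).

```python
import numpy as np

def effect_matrix(theta_p, theta_q):
    """epsilon_ij = (1 - cos(Delta phi_ij)) / 2, Eq. (7), for two stationary states."""
    phi_p = theta_p[None, :] - theta_p[:, None]     # phi_ij = theta_j - theta_i in state p
    phi_q = theta_q[None, :] - theta_q[:, None]     # same phase differences in state q
    return (1.0 - np.cos(phi_q - phi_p)) / 2.0
```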

In order to assess the impact on the whole system of a node being perturbed, we make use of the GFKM described in Eq. (6) and the effect measure defined in Eq. (7). As stated in Sec. III B, we will consider that the change in the configuration is produced by one single node, $C$, called the control node. We will compute the functionability, $F_C$, of $C$. Then, by performing the same procedure for each node, we will obtain a vector with the functionability of each of them: $\vec F$.

  • The control node, C. The initial configuration of the system, p, is such that all the oscillators are completely synchronized at the stationary state. The system then switches to configuration q, which is determined by the control node. This change is enacted by setting the set of frustration parameters, $\vec\alpha$, in Eq. (6) as follows:
    \alpha_i(p)=0\ \forall i,\qquad \alpha_i(q)=\begin{cases}0 & \text{if } i\neq C,\\ \alpha & \text{if } i=C,\end{cases}
    (8)
    where C is the label of the control node, the effect of whose perturbation on the whole system we will assess.
  • Functionability, F. We define the functionability of node C as
    F_C(\alpha)\equiv\sum_{ij}\varepsilon_{ij}\bigl(p\to q(\alpha)\bigr),
    (9)
    where p and q are defined in Eq. (8), $\varepsilon_{ij}$ is defined in Eq. (7), and the state of the system is obtained from the dynamics of Eq. (6) with the aforementioned configurations. As already seen, the initial configuration, p, corresponds to fully locked synchronization. Therefore, the functionability measures the total dispersion of the phases from this ground state due to the perturbation of a single node. The larger the dispersion, the more functionability a node has (a numerical sketch of this procedure is given after this list).
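A minimal sketch of this simulation-based definition follows: it integrates Eq. (6) with the configuration of Eq. (8) and accumulates Eq. (9). The Euler scheme, time step, and number of steps are illustrative assumptions, not the settings used for the figures.

```python
import numpy as np
import networkx as nx

def functionability_by_simulation(G, C, alpha=0.2, K=1.0, dt=0.01, steps=50000):
    """F_C of Eq. (9): perturb only the control node C and measure the phase dispersion."""
    A = nx.to_numpy_array(G)
    N = A.shape[0]
    alphas = np.zeros(N)
    alphas[C] = alpha                              # configuration q of Eq. (8)
    theta = np.zeros(N)                            # configuration p: fully synchronized state
    for _ in range(steps):
        # sum_j A_ij * sin(theta_j - theta_i - alpha_i), Eq. (6) with omega_0 = 0
        coupling = (A * np.sin(theta[None, :] - theta[:, None] - alphas[:, None])).sum(axis=1)
        theta = theta + dt * K * coupling
    dphi = theta[None, :] - theta[:, None]         # Delta phi_ij with respect to the ground state
    return np.sum((1.0 - np.cos(dphi)) / 2.0)      # Eq. (7) summed over all pairs, Eq. (9)
```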

In Fig. 3, we can observe that, regardless of the control node selected, if the frustration parameter is set to $\alpha=0$, then the system is fully synchronized. Conversely, when $\alpha\neq 0$ (in the example considered, $\alpha=0.5$), the system becomes asynchronous. Here, the effect of the frustration depends on the control node selected. As explained, the quantification of this effect is captured by the functionability of the nodes.

FIG. 3.

Polar representation of the final phases of the oscillators of the network in Fig. 1 obtained from the dynamics described by Eq. (6) and conditions (8). Upper left panel: The frustration parameter is set to α=0. The upper right and lower panels show the dispersion caused by nodes 1 and 6, respectively, when α=0.5.


Equation (6) is a set of N coupled nonlinear differential equations whose solution, in general, cannot be derived analytically. However, if the system reaches a frequency synchronized state (see the  Appendix, Subsection 2),39 that is, $d\theta_i(t)/dt=\Omega\ \forall i,\ \forall t$, and the argument of the sine is small enough, we can linearize it as follows:

\dot\theta_i\approx\omega+K\sum_{j=1}^{N}A_{ij}(\theta_j-\theta_i-\alpha_i),\qquad i=1,\dots,N.
(10)

We can expand Eq. (10),

\dot\theta_i\approx\omega+K\sum_{j=1}^{N}A_{ij}\theta_j-K(\theta_i+\alpha_i)\sum_{j=1}^{N}A_{ij}=\omega+K\sum_{j=1}^{N}A_{ij}\theta_j-K(\theta_i+\alpha_i)d_i=\omega+K\sum_{j=1}^{N}A_{ij}\theta_j-K\theta_i d_i-K\alpha_i d_i=\omega-K\Bigl[-\sum_{j=1}^{N}A_{ij}\theta_j+d_i\theta_i+\alpha_i d_i\Bigr],
(11)

where $d_i\equiv\sum_j A_{ij}$ is the degree of the $i$th node of the undirected network defined by the adjacency matrix, $A$. We define the Laplacian matrix, $L$, as follows:40

[L]_{ij}\equiv d_i\delta_{ij}-[A]_{ij}.
(12)

Using Eq. (12), Eq. (10) can be written as

\dot\theta_i\approx\omega-K\Bigl[\sum_{j=1}^{N}L_{ij}\theta_j+\alpha_i d_i\Bigr].
(13)

Without loss of generality, we can set $\omega=0$ and $K=1$. If the system is synchronized, then $\dot\theta_i=\Omega\ \forall i$. A nonzero shared natural frequency does not affect the synchronization of the system, and the coupling strength only sets the time scale of the path to synchronization.

Equation (13) is then written as

\Omega\approx-\Bigl[\sum_{j=1}^{N}L_{ij}\theta_j+\alpha_i d_i\Bigr]\quad\forall i.
(14)

We can compute the summation of Eq. (14) over all indices $i$, $i=1,\dots,N$,

\Omega N\approx-\sum_i^{N} d_i\alpha_i,

where we have used the fact that the Laplacian matrix is defined from an odd function and has zero-sum rows, and hence,

\sum_{i,j}^{N}L_{ij}=0.
(15)

Finally,

\Omega=-\frac{1}{N}\sum_i^{N} d_i\alpha_i\equiv-\overline{\alpha d}.
(16)

Introducing Eq. (16) into Eq. (14), we obtain

-\overline{\alpha d}\approx-\Bigl[\sum_{j=1}^{N}L_{ij}\theta_j+\alpha_i d_i\Bigr]\;\Longrightarrow\;\sum_{j=1}^{N}L_{ij}\theta_j=\overline{\alpha d}-\alpha_i d_i.
(17)

In the matrix form

L\vec\theta=\overline{\alpha d}\,\mathbf{1}-\vec\alpha_d,
(18)

where $[\vec\alpha_d]_i=\alpha_i d_i$.

In a connected graph, the Laplacian matrix, $L$, has one null eigenvalue, which corresponds to the eigenvector $\mathbf{1}$, and hence the system of linear equations for $\vec\theta$ in Eq. (18) is singular. Intuitively, we are left with one free parameter, which depends on the initial phase conditions, $\vec\theta(t=0)$. However, as we will see, the phase differences between oscillators are well determined.

  • The reference node, R. As the solution for the set $\vec\theta$ given by Eq. (18) is undetermined, we need to work with phase differences instead of absolute phases. We therefore define the reference node, R, as the node to which the phase of all other nodes is compared,
    \phi_i(R)=\phi_{iR}\equiv\theta_i-\theta_R\;\Rightarrow\;\phi_R(R)=0.
    (19)
  • The reduced Laplacian matrix, $\tilde L(C,R)$. Given an undirected network and its $N\times N$ Laplacian matrix (12), we define the reduced $(N-1)\times(N-1)$ Laplacian matrix for a pair of control (8) and reference (19) nodes as follows:
    \tilde L(C,R)\equiv L\setminus\{C\text{th row},\,R\text{th column}\}.
    (20)
    We can define the selection matrix, Jn,m, which is an N×N identity matrix after the removal of the nth column and the mth row. Using this notation, Eq. (20) can be written as
    \tilde L(C,R)=J_{C,R}\,L\,J_{C,R}^{T}.
    (21)

With the aforementioned consideration (that is, when the linearization requirements are met), we can compute the results of the phase differences analytically, ϕ(R), with respect to a particular reference node, R. This result represents a key finding for Secs. V and VI.

Considering the configuration defined in Eq. (8), Eq. (16) leads to

\Omega=-\frac{1}{N}\sum_i^{N}d_i\alpha_i=-\frac{\alpha\,d_C}{N},
(22)

where C is the label of the control node. The stable state of the system described in Eq. (14), when the linearization conditions hold, then becomes

-\frac{\alpha\,d_C}{N}=-\Bigl[\sum_{j=1}^{N}L_{ij}\theta_j+\alpha_i d_i\Bigr]\quad\forall i.
(23)

The unequivocal solution of phase differences for a given reference node, ϕ(R), is given by

\vec\phi(C,R)=-[\tilde L(C,R)]^{-1}\,\Omega\,\mathbf{1}=-[\tilde L(C,R)]^{-1}\Bigl(-\frac{\alpha\,d_C}{N}\Bigr)\mathbf{1}=[\tilde L(C,R)]^{-1}\Bigl(\frac{\alpha\,d_C}{N}\Bigr)\mathbf{1}.
(24)

Hence,

\vec\phi(C,R)=[\tilde L(C,R)]^{-1}\Bigl(\frac{\alpha\,d_C}{N}\Bigr)\mathbf{1}.
(25)

In order to calculate the matrix ε, whose elements are defined in Eq. (7),

\varepsilon_{ij}(p\to q)\equiv\frac{1-\cos(\Delta\phi_{ij})}{2},

if $\Delta\phi_{ij}(p\to q)\approx 0$, we can expand the cosine and write $\varepsilon_{ij}$ as

\varepsilon_{ij}(p\to q)\approx\frac{(\Delta\phi_{ij})^2}{4}.
(26)

To compute the functionability of a control node, C, as it is defined in Eq. (9), we use Eqs. (25) and (26), where node R and node j are equivalent,

\varepsilon_{iR}(p\to q)\approx\Bigl(\frac{\alpha\,d_C}{2N}\Bigr)^2\Bigl(\sum_l[\tilde L^{-1}(C,R)]_{il}\Bigr)^2,
(27)

$F_C\equiv\sum_{ij}\varepsilon_{ij}(p\to q)$, or equivalently $F_C\equiv\sum_i\sum_R\varepsilon_{iR}(p\to q)$, can be expanded in order to obtain a more compact expression. Using Eq. (27) and rearranging summations,

F_C=\Bigl(\frac{\alpha\,d_C}{2N}\Bigr)^2\sum_i\sum_R\Bigl(\sum_l[\tilde L^{-1}(C,R)]_{il}\Bigr)^2=\Bigl(\frac{\alpha\,d_C}{2N}\Bigr)^2\sum_R\sum_i\Bigl(\sum_l[\tilde L^{-1}(C,R)]_{il}\Bigr)^2.
(28)

Let us compute

\sum_i\Bigl(\sum_l[\tilde L^{-1}(C,R)]_{il}\Bigr)^2=\sum_i\sum_l[\tilde L^{-1}(C,R)]_{il}\sum_m[\tilde L^{-1}(C,R)]_{im}=\sum_{lm}\sum_i[\tilde L^{-1}(C,R)]_{il}[\tilde L^{-1}(C,R)]_{im}=\sum_{lm}\sum_i[(\tilde L^{-1}(C,R))^T]_{li}[\tilde L^{-1}(C,R)]_{im}=\sum_{lm}\sum_i[(\tilde L^{T}(C,R))^{-1}]_{li}[\tilde L^{-1}(C,R)]_{im}.
(29)

Therefore,

F_C=\Bigl(\frac{\alpha\,d_C}{2N}\Bigr)^2\sum_R^{N}\sum_{ij}^{N-1}\bigl[(\tilde L(C,R)\,\tilde L^{T}(C,R))^{-1}\bigr]_{ij},
(30)

where we have used the matrix property $A^{-1}B^{-1}=(BA)^{-1}$. Finally, Eq. (30) can be normalized by $1/N^2$,

\hat F_C\equiv\frac{1}{N^2}\Bigl(\frac{\alpha\,d_C}{2N}\Bigr)^2\sum_R^{N}\sum_{ij}^{N-1}\bigl[(\tilde L(C,R)\,\tilde L^{T}(C,R))^{-1}\bigr]_{ij}.
(31)

The values of functionability for the network in Fig. 1, taking $\alpha=0.2$, are $\vec F=\{0.34, 0.43, 0.43, 0.18, 0.36, 0.36, 0.18\}$. As we can see from Fig. 4, nodes 1 and 2 obtain the highest scores, while nodes 3 and 6 have the lowest. The ranking of nodes is preserved regardless of the value of the frustration parameter, since all the dependence on $\alpha$ enters through a quadratic prefactor. The final phases when nodes 1 and 6 act as control nodes, in the case $\alpha=0.55$, are shown in polar coordinates in Fig. 3.
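The closed-form expression is easy to evaluate with standard linear algebra routines. The sketch below is a direct transcription of Eq. (30), with the optional normalization of Eq. (31); it assumes a connected, undirected NetworkX graph, and the example network at the end is an arbitrary illustration rather than the network of Fig. 1.

```python
import numpy as np
import networkx as nx

def functionability(G, alpha=0.2, normalized=False):
    """Evaluate Eq. (30) [or Eq. (31)] for every node of a connected undirected graph."""
    A = nx.to_numpy_array(G)
    L = np.diag(A.sum(axis=1)) - A                      # Laplacian, Eq. (12)
    N = A.shape[0]
    F = np.zeros(N)
    for C in range(N):                                  # control node
        d_C = A[C].sum()
        total = 0.0
        for R in range(N):                              # reference node
            Lt = np.delete(np.delete(L, C, axis=0), R, axis=1)   # reduced Laplacian, Eq. (20)
            total += np.linalg.inv(Lt @ Lt.T).sum()     # sum_ij [(L~ L~^T)^{-1}]_ij
        F[C] = (alpha * d_C / (2 * N)) ** 2 * total     # Eq. (30)
    return F / N**2 if normalized else F                # Eq. (31) if requested

G = nx.karate_club_graph()                              # arbitrary example network
F = functionability(G, alpha=0.2)
ranking = np.argsort(F)[::-1]                           # most functional nodes first
```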

FIG. 4.

Network with the nodes characterized by its radius, which is proportional to the value of functionability. The frustration parameter is set to α=0.2.


In Sec. IV, we define the functionability of node C, $F_C$, as a measure of the effect that a forced-delay parameter introduced in the intrinsic properties of that node has on the reference synchronized state of the network, that is, the potential of a single node to switch the network to an asynchronous state, as shown in Fig. 3. Despite the fact that the general definition of functionability is built from a dynamical model, as described by Eqs. (6), (8), (9), and (27), we have derived the very compact and equivalent mathematical expression given by Eq. (30). Moreover, precisely because of the physical meaning of the measure, the interpretation of the final analytical expression is conserved: high values of $F_C$ reflect the structural importance of the corresponding node. That is to say, the position such nodes occupy within the network gives them the potential to shift the state into more possible configurations for the same perturbation of their intrinsic dynamics. As already pointed out, this effect may be either beneficial or disruptive for a given system, depending on its nature and functions.

Moreover, we can take a closer look at the analytical expression of functionability, in Eq. (30), in order to understand its building blocks. For fixed values of the network size, N, and the frustration parameter, $\alpha$, we can identify two contributions to $F_C$:

  1. The square of the degree of the node, $d_C^2$, and

  2. The reduced Laplacian contribution, which we call the L-Periphery: $\text{L-Periphery}(C)=\sum_R^{N}\sum_{ij}^{N-1}\bigl[(\tilde L(C,R)\,\tilde L^{T}(C,R))^{-1}\bigr]_{ij}$.

The first term stands for the importance of the degree of the node (see the first column in Fig. 5). The more neighbors a node has, the more likely it is to be a more functional node. Moreover, this effect is enhanced by the square of the degree.

FIG. 5.

4 different networks with the nodes characterized by its radius, which is proportional to the value of node degree (first column), L-Periphery (second column), and functionability (third column).


Conversely, if we locate all the nodes using the Fruchterman-Reingold force-directed algorithm available in the NetworkX Python library, which considers an attractive spring force between adjacent nodes and a repulsive electrical force between any pair of nodes, and use the second contribution of functionability as an attribute for size and color, we obtain an intuitive and qualitative meaning for it: nodes that have higher values of this contribution are located at the periphery of the visual representation of the network (see the second column in Fig. 5). Note also that the L-periphery is largely negatively correlated with betweenness centrality, as can be seen in Fig. 7.

FIG. 6.

131 nodes of the C. elegans frontal cortex network characterized by their radius, which are proportional to the functionability values, F. The top 10 neuron labels are shown in descending order.

FIG. 7.

Correlation matrix, using the standard Pearson correlation coefficient, between 5 centrality measures: functionability, L-periphery, degree, eigenvector, and betweenness, from left to right. Correlations are computed from the centrality values of the nodes in the C. elegans frontal cortex network.


Hence, higher values of functionability correspond to nodes that are both well connected and peripheral. Therefore, functionability provides us with more information than other classic measures of node importance (see the third column in Fig. 5). Functionability and its two contributions are depicted in Fig. 5. Note that the product of the squared degree and the L-periphery results in functionability.

Section IV B provides us with a thorough analysis of functionability, considering both its interpretation and usefulness. This new centrality measure turns out to be unique in its analytic expression and definition when we compare it with other centrality measures, as we will see.41

To this end, we provide an example of the computation of functionability centrality for the nodes of a well-known real network: the frontal cortex network of the Caenorhabditis elegans worm42 (see Fig. 6). Our aim is not to examine the details of the interpretation of the results but rather to compare functionability with other well-known centrality measures.

Classic centrality measures, such as node degree, betweenness, closeness, eigenvector, and other spectral-based centralities, produce node outcomes and rankings that differ from those of functionability, even if they are similar (see the  Appendix, Subsection 5). Other centrality measures also have different meanings.43–48 As can be seen in Fig. 7, functionability has a positive correlation with the degree, betweenness, and eigenvector centralities, although all the Pearson coefficient values are below 0.5. Figure 6 shows that the nodes that have the highest values of functionability correspond to neurons that do not usually appear as the neurons with the highest degree, betweenness, or eigenvector centralities; hence, functionability gives us additional information concerning such nodes.

We recall that nodes with higher values of functionability are more peripheral, as well as having higher degrees.

We may also wish to consider whether our newly-created functionability would be equivalent to other alternative centrality measures that have been developed recently, such as controllability, the core score, and collective influence. These measures target specific properties of the network and have not yet been incorporated into standard network libraries. For this reason, we cannot provide a quantitative comparison, but a close look at their definitions should shed some light on their relevance and will help to understand the meaning of functionability:

  • Controllability, C. Within the framework of control theory,49–51 special attention has been paid to possible applications to complex networks, particularly brain networks.52,53 The term control refers to the ability of nodes to perturb the system in such a way that it reaches a desired state.54 In order to assess controllability, several methods have been developed; all of them assume a linear-response dynamical model.55 We highlight two of them that were specifically designed to evaluate regional controllability, rather than a global measure of the network.

    • Average controllability: As defined in Ref. 53, the average controllability identifies the nodes that can steer the network to many easily reachable states. The result that we are most interested in concerns the mathematical expression of the particular case when the set of control nodes reduces to one node at a time, and hence, provides a measure of the average controllability of each node of the network. It can be proved that
      C_a(K)\equiv\mathrm{Trace}(W_K^{-1})=\sum_i[(I-A^2)^{-1}]_{Ki}.
      (32)
      Equation (32) resembles the well-known Katz centrality measure, considering only odd-length walks. Moreover, if we expand it and keep the second-order dependency on the adjacency matrix, we recover a measure that is proportional to the degree of node K.
    • Modal controllability: A node that has large modal controllability is one that participates in most of the dynamical modes of the linear system. In other words, it is a node that is able to access states that are difficult to reach.53,56 Modal controllability is defined as
      C_m(K)\equiv\sum_{j=1}^{N}\bigl(1-\lambda_j(A)^2\bigr)\,v_{Kj}^2,
      (33)
      where $\lambda_j(A)$ is the eigenvalue of the $j$th mode and $v_{Kj}$ is the contribution of node $K$ to the eigenvector of the $j$th mode.
    • Other definitions can similarly be compared to functionability, but their meaning moves away from a measure of the effect on the states of a network; this is the case, for example, of boundary controllability. Equations (32) and (33) are mathematical expressions which differ from Eq. (30) and have a different meaning.57 However, by looking at them, we can also find similarities regarding the dependence on the adjacency matrix. In general, many centrality measures tend to be partially correlated (a numerical sketch of Eqs. (32) and (33) is given after this list).

  • Core score, CS. Despite much attention having been paid to community detection algorithms, another well-known mesoscale property of a network is its core–periphery structure: which nodes are part of a more densely connected core and which are part of a sparsely connected periphery.58–62 Rombach et al.58 proposed a continuous measure of a node’s closeness to the core, called “coreness.” The algorithm is based on an optimization procedure that considers cores with different sizes and boundaries, according to a transition function, and assesses to what extent a node matches this. In order to compute coreness, the authors define the core quality as
    R_\gamma=\sum_{i,j}A_{ij}C_{ij},
    (34)
    where $\gamma$ is a vector that parametrizes the core quality. The elements $C_{ij}$ are normally computed as $C_{ij}=C_i C_j$, where the $C_i$ are the local core values. The aim is to find a core vector, $\vec C$, that maximizes $R_\gamma$ and is a normalized shuffle of the vector determined by a transition function, which provides a shuffled list of possible core vector values.
    For a given set of parameters that determine the transition function of the core, γ(α,β), they define the aggregate core score of each node i as
    CS(i)=Z\sum_{\gamma}C_i(\gamma)\times R_\gamma,
    (35)
    where Z is a normalization factor.

    The aim of developing the core score measure differs from that behind functionability in many aspects. Nevertheless, the L-periphery centrality, which is one of the contributions to functionability, may resemble the inverse of the core score outcome when the network is characterized by a clear core–periphery structure. Note that the aim of functionability is not related to finding communities or the core of a network.

  • Collective influence, CI. A subset of measures aims to detect the most influential nodes in an adaptive way. Each method considers a different heuristic to rank all nodes, determine which node is ranked as the greatest spreader, and remove it. Scores are recomputed and the procedure is repeated iteratively until no nodes are left in the network. The simplest approach is the high-degree adaptive (HDA) method.63 Collective influence (CI) tries to fill the gap left by the fact that the preceding methods do not optimize a global objective function. In contrast, CI is defined in such a way that it potentially identifies the minimal set of nodes that, if removed, would cause the network to become disconnected, understood in the framework of network percolation theory. It does this, furthermore, by means of an energy cost function.64–66

    If $G(q)$ represents the probability of the existence of a giant component63,67,68 in the limit $N\to\infty$, then the problem reduces to finding the minimum fraction $q_c$ such that
    q_c=\min\{q\in[0,1]:G(q)=0\}.
    (36)
    CI is computed by considering balls of different radii, $l$, whereby each size captures a different influence scale across the network. In Ref. 64, the authors show that the problem is equivalent to minimizing the cost function,
    E_l(\mathbf{n})=\sum_{i=1}^{N}z_i\sum_{j\in\mathrm{Ball}(i,l)}\Bigl(\prod_{k\in P_l(i,j)}n_k\Bigr)z_j,
    (37)
    where $z_i\equiv d_i-1$ and $d_i$ stands for the degree of node $i$. The vector $\mathbf{n}$ indicates whether each node, in the final configuration, belongs to the set of “influencers” or not. Then, the collective influence strength at level $l$ of node $i$ is
    CI_l(i)=z_i\sum_{j\in\mathrm{Ball}(i,l)}\Bigl(\prod_{k\in P_l(i,j)}n_k\Bigr)z_j,
    (38)
    and Eq. (37) becomes
    E_l(\mathbf{n})=\sum_{i=1}^{N}CI_l(i).
    (39)
    Therefore, in order to minimize Eq. (39), we need to remove the node with the greatest CIl value and iterate until a score is assigned to each node.

    Functionability is not obtained from an optimization algorithm or any iterative procedure. However, the physical interpretation of the measure does also have a global scope in the following way: the mathematical definition of FC for a node C is computed considering the effect that a local perturbation has on the whole network.
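For reference, a sketch of how the two node-level controllability scores of Eqs. (32) and (33) could be evaluated is given below. As an assumption of this sketch, the adjacency matrix is rescaled so that all of its eigenvalues lie strictly inside the unit circle (which the expressions require in order to converge); the rescaling factor and the example graph are illustrative choices, not part of the original definitions.

```python
import numpy as np
import networkx as nx

def controllability_scores(G, scale=1.01):
    """Average [Eq. (32)] and modal [Eq. (33)] controllability, one value per node.

    Assumption: A is divided by (scale * largest eigenvalue modulus) so that the
    series underlying both expressions converge.
    """
    A = nx.to_numpy_array(G)
    lam, V = np.linalg.eigh(A)                         # A is symmetric for undirected graphs
    A = A / (scale * np.abs(lam).max())                # rescale eigenvalues into the unit circle
    lam, V = np.linalg.eigh(A)                         # spectrum of the rescaled matrix
    avg = np.linalg.inv(np.eye(len(A)) - A @ A).sum(axis=1)   # row sums of (I - A^2)^{-1}, Eq. (32)
    modal = ((1.0 - lam**2)[None, :] * V**2).sum(axis=1)      # Eq. (33)
    return avg, modal

Ca, Cm = controllability_scores(nx.karate_club_graph())
```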

We have compared the proposed functionability centrality with two sets of measures of node importance. On the one hand, a correlation matrix between the L-periphery centrality (one of the building blocks of functionability), degree, eigenvector, and betweenness centralities, as benchmark references, has been presented for 4 different networks (see Fig. 7). As can also be seen from Eq. (30), functionability provides us with unique insight into the network, since it cannot be reproduced by other centralities. Note that the defined L-periphery centrality is highly negatively correlated with betweenness centrality. Moreover, both functionability and L-periphery are algorithm-free, parameter-free, and deterministic centralities, all of which are highly beneficial properties for network analysis. On the other hand, three more recently developed measures, C, CS, and CI, have been explored in order to compare their definitions of importance, their computation, and their mathematical resemblance to functionability. We conclude that CS and CI aim to determine a more structural type of centrality, such as revealing the participation of a node in the core or in the set of nodes whose removal would break the giant component apart. Hence, they may correlate in some ways with the L-periphery. Conversely, controllability seeks the nodes which most readily enable the system to move toward a particular state, either states which are easy to reach ($C_a$) or more mode-like ones ($C_m$). Average controllability resembles the intuitive motivation of functionability, although it is neither mathematically equivalent nor does it have a similar physical interpretation or building blocks [see Eqs. (30) and (32)].
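A sketch of how such a correlation matrix can be assembled is shown below; the function assumes a length-N array of functionability values is supplied (for instance, from the functionability() sketch given in Sec. IV), while the remaining centralities are standard NetworkX calls.

```python
import numpy as np
import networkx as nx

def centrality_correlations(G, F):
    """Pearson correlation matrix between functionability F and three classic centralities."""
    nodes = list(G.nodes())
    deg = np.array([G.degree(n) for n in nodes], dtype=float)
    eig = np.array([nx.eigenvector_centrality_numpy(G)[n] for n in nodes])
    btw = np.array([nx.betweenness_centrality(G)[n] for n in nodes])
    data = np.vstack([np.asarray(F, dtype=float), deg, eig, btw])
    return np.corrcoef(data)      # rows/columns: functionability, degree, eigenvector, betweenness
```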

In addition, we should point out that functionability centrality does not rely on optimization procedures nor is it bound to the values of the parameters. Actually, we have proved that Eq. (30) is a compact mathematical expression for the measure. The value of the frustration parameter, α, does not influence the ordering of the nodes (see the  Appendix, Subsection 2 for more details).

Functionability is thereby a unique measure of the effect that perturbing a node has on the whole network, by means of shifting the system to an asynchronous state. The final expression is a deterministic, parameter-free, and nonalgorithmic measure of centrality, with an underlying physical model to support it and enable an intuitive interpretation of it.

Both the dynamic model and the analytic expression of functionability can easily be extended to weighted networks. A weighted network is defined by the elements of the adjacency matrix, W, thus,

[W]_{ij}=\begin{cases}w_{ij} & \text{if }(i,j)\in E,\\ 0 & \text{otherwise},\end{cases}
(40)

where $w_{ij}\in\mathbb{R}$.

In a general setting, the intensities of the connections in a network vary. Considering these weights, the GFKM in Eq. (6) can be rewritten as

\frac{d\theta_i}{dt}=\dot\theta_i=\omega+K\sum_{j=1}^{N}W_{ij}\sin(\theta_j-\theta_i-\alpha_i),\qquad i=1,\dots,N,
(41)

and the analytical expression of functionability for weighted networks is an extension of Eq. (30),

F_C=\Bigl(\frac{\alpha\,d_C}{2N}\Bigr)^2\sum_R^{N}\sum_{ij}^{N-1}\bigl[(\tilde L_W(C,R)\,\tilde L_W^{T}(C,R))^{-1}\bigr]_{ij},
(42)

where

[L_W]_{ij}\equiv s_i\delta_{ij}-[W]_{ij},\qquad s_i\equiv\sum_j W_{ij}.
(43)
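The only change with respect to the unweighted implementation is the Laplacian; a minimal sketch of Eq. (43), using the node strength $s_i$ on the diagonal, is:

```python
import numpy as np
import networkx as nx

def weighted_laplacian(G, weight="weight"):
    """L_W of Eq. (43): node strength on the diagonal minus the weighted adjacency matrix."""
    W = nx.to_numpy_array(G, weight=weight)
    return np.diag(W.sum(axis=1)) - W           # s_i = sum_j W_ij

# The functionability sketch of Sec. IV can then be reused for weighted networks, per Eq. (42),
# by replacing the unweighted Laplacian with weighted_laplacian(G).
```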

We present a simple example of a weighted version of the 7-node network in Fig. 1. Note the difference in the color scale and size between Figs. 4 and 8.

FIG. 8.

In the upper panel, the nodes of the network are characterized by its radius which is now proportional to the value of the weighted functionability, FW, which corresponds to the weighted network shown below.


In the present work, we define functionability, a new centrality measure of the nodes in a network, in order to address the question of which nodes, when perturbed, move the system from a synchronized state to a more asynchronous one. In fact, we aim to sort the nodes by their potential effect on the whole network when one specific change to the intrinsic dynamics of the node spreads over the entire oscillatory system, thereby disrupting the initial synchronized state. This issue may be relevant for the identification of critical nodes that are either beneficial, by enabling access to a broader spectrum of states, or harmful, by destroying overall synchronization. Hence, depending on the system we are considering, the most functional nodes have to be taken into account when looking for a potential enhancement of the diversity of attainable states or the inhibition of risky instabilities in the system.

We propose to resolve the issue by defining a new centrality measure, called functionability, and symbolized as F. F is a property that can be computed for each node and depends only on the structural connectivity of the network and the position or role of the node within it.

The system is considered to be made up of as many oscillators as the network has nodes. The dynamics that rules the evolution of the system is based on that of the well-known Kuramoto phase oscillators. The functionability of a node measures the dispersion of the phases generated by the induction of a phase frustration parameter in the corresponding node. Despite F being defined in terms of the phase differences between nodes coming from a dynamic model, in the present work, we derive a mathematical expression for centrality only in terms of the network structure, and hence, we avoid a large demand for computing power. Therefore, our F measure is parameter-free, algorithm-free, and deterministic. In addition to functionability being determined by network structure, the definition based on the dynamic model keeps the rank ordering of the nodes invariant.

Moreover, the physical meaning of the measure remains valid and, hence, provides the centrality with an easy-to-handle interpretation of the results obtained. In order to assess the novelty and the potential new insight that functionability offers into the nodes of a network, we compare our results with other centrality measures for different simple network topologies. This comparison enables us to provide an intuitive answer to the question: What do nodes with large values of F have in common? We show that nodes that have a large number of neighbors, and hence a high degree, but that also lie outside the main core of the network, that is, peripheral nodes, have higher functionability scores. Because of this, F may deliver nontrivial results regarding the importance of each node. To sum up, the nodes that enable the largest shifts from the synchronized state of a network are those which are both peripheral and also have a high degree. The magnitude of this effect can be quantified via the analytical expression of functionability.

Many real systems that are commonly modeled as a set of coupled oscillators may be assessed by our centrality functionability. We are usually concerned about such systems moving away from their stable synchronized state. Alternatively, we may wish for other systems to stay away from such complete synchronicity. Power grids and brain networks are examples of the former and latter situations. Functionability enables us to detect the nodes that are most central or relevant for moving the overall system away from synchronization. Epileptic attacks or power grid collapses may be derived from single nodes that, even if not located in the main core, change their intrinsic properties and spread asynchrony rapidly to the network, leading to potentially fatal states. It may be helpful to target such nodes in order to control both synchrony and asynchrony.

The authors acknowledge financial support from MINECO via Project Nos. FIS2015-71582-C2-2-P and PGC2018-094754-B-C22 (MINECO/FEDER,UE) and Generalitat de Catalunya via Grant No. 2017SGR341. G.R.-T. also acknowledges MECD for Grant No. FPU15/03053.

1. An example of the functionability of a simple network

The Laplacian matrix of the network in Fig. 1 is

L=\begin{pmatrix}4&-1&-1&-1&0&0&-1\\-1&2&-1&0&0&0&0\\-1&-1&2&0&0&0&0\\-1&0&0&2&-1&0&0\\0&0&0&-1&2&-1&0\\0&0&0&0&-1&2&-1\\-1&0&0&0&0&-1&2\end{pmatrix}.
(A1)

To calculate functionability, we then need to compute the reduced Laplacian matrix of the network in Fig. 1, taking node 0 as control and node 1 as reference, L~(0,1), which is

\tilde L=\begin{pmatrix}-1&-1&0&0&0&0\\-1&2&0&0&0&0\\-1&0&2&-1&0&0\\0&0&-1&2&-1&0\\0&0&0&-1&2&-1\\-1&0&0&0&-1&2\end{pmatrix}.
(A2)

Note that matrix (A2) corresponds to matrix (A1), with the removal of the 0th row and the 1st column.
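The same reduction can be written in a couple of lines of NumPy. The edge list below is the 7-node topology implied by the Laplacian in Eq. (A1), namely a triangle 0-1-2 attached to the ring 0-3-4-5-6, and should be taken as our reading of Fig. 1.

```python
import numpy as np
import networkx as nx

# Topology read off from the Laplacian in Eq. (A1): triangle (0,1,2) plus ring 0-3-4-5-6
edges = [(0, 1), (0, 2), (1, 2), (0, 3), (3, 4), (4, 5), (5, 6), (0, 6)]
G = nx.Graph(edges)

L = nx.laplacian_matrix(G, nodelist=sorted(G.nodes())).toarray()
L_reduced = np.delete(np.delete(L, 0, axis=0), 1, axis=1)   # drop 0th row and 1st column, Eq. (A2)
```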

2. The linear model: Assumptions and validity

The final definition and usage of functionability centrality, in Eq. (30), or in Eq. (31) if we are interested in the normalized definition, is based on nonlinear oscillatory dynamics, as explained in Sec. IV. However, several assumptions have been made in order to obtain a more compact and useful expression. The most restrictive one is the linearization of the model. However, this assumption relies on the fact that we are concerned with the state in which nodes are synchronized in frequency and, therefore, dispersion comes from the breaking of the in-phase synchronized state. For a set of coupled oscillators in a network which follow the conditions described in Secs. II B and III A, this condition is broken when the frustration parameter, $\alpha$, becomes too large (see Fig. 2). In Fig. 9, the 2 methods of computing F are compared for different values of $\alpha$. Regardless of the type of network topology, there is a threshold value at which the 2 methods diverge. This value changes from one network to another, as well as from size to size. However, it is important to note that when we allow enough simulation time steps, the approaches become even more similar, because, as the network grows in its number of nodes, the time needed to reach the stable state increases too. Nevertheless, our interest lies in the usage of the analytical expression of functionability, rather than in the value obtained from the stable state of the dynamical system, since the former enables a physical interpretation of the measure. Moreover, the validity of the dynamical computation has a large variability, even from node to node; that is, the dynamics may easily reach chaotic behavior that is difficult to control and interpret, and hence may lead to different results depending on the frustration parameter and the initial conditions. In addition, Eq. (30) depends on $\alpha$ only through a quadratic prefactor, and, therefore, the ranking of nodes does not depend on this parameter but only on the connectivity structure. To conclude, if we set the value of the frustration parameter small enough, the two approaches are equivalent. However, as we move to larger values, the dynamical approach may end up in a chaotic state for all nodes, while the analytical expression of functionability remains robust in the ranking of nodes. Moreover, the interpretation of the measure is still valid because the rank ordering is preserved.

FIG. 9.

Comparison of the results obtained from the computation of functionability, F, both by means of the dynamical model (dotted lines) and the analytical expression in Eq. (30) (continuous lines). Functionability is calculated for different values of the frustration parameter, α[0,π/2], and all the nodes in the 4 different networks (see references in Fig. 7).


3. Is phase distance a proper metric?

In Eq. (7), we define a measure of the distance between two nodes or oscillators after a perturbation is made on the system. Our system consists of a set of phase oscillators, that is, oscillators whose main and only variable is the phase, and not the amplitude. Hence, the distance between two nodes is a rather simple function of an angular argument, symmetric with respect to $\pi$ and bounded between 0 and 1. In this section, we will comment on the mathematical implications of this definition. Namely, the distance we are using is not a proper metric, because it does not meet the triangle inequality, as we will see. Nevertheless, we will prove that it can be easily related to the Euclidean distance, which is a proper metric. However, as we are not concatenating or adding different distances, this drawback will not be a problem. There are many optimization algorithms that use nonmetric distances without modifying the expected results.

In order to make an intuitive description of the distance $\varepsilon_{ij}$, we will consider each oscillator to lie on a two-dimensional plane. Each oscillator is characterized by a phase, which evolves in time, and a radius, which is set equal to one (see Fig. 3, for example).

The Euclidean distance of two vectors, a and b, in an n-dimensional space is defined as

d_E(\mathbf{a},\mathbf{b})=\|\mathbf{a}-\mathbf{b}\|_2=\sqrt{(a_1-b_1)^2+(a_2-b_2)^2+\dots+(a_N-b_N)^2}.
(A3)

The squared Euclidean distance can be expanded as follows:

d_E^2(\mathbf{a},\mathbf{b})=\|\mathbf{a}-\mathbf{b}\|^2=\|\mathbf{a}\|^2+\|\mathbf{b}\|^2-2\,\mathbf{a}\cdot\mathbf{b}=2\bigl(1-\cos(\mathbf{a},\mathbf{b})\bigr),
(A4)

where we have considered that the vectors $\mathbf{a}$ and $\mathbf{b}$ are normalized, that is, $\|\mathbf{a}\|=1$ and $\|\mathbf{b}\|=1$.

If we go back to the definition of the distance used between nodes, $\varepsilon_{ij}$ in Eq. (7), we notice that its functional form is the same as the cosine distance between two vectors, albeit in one dimension. Looking at Eq. (A4), we can rewrite the cosine distance as

d_C(\mathbf{a},\mathbf{b})\equiv\frac{1-\cos(\mathbf{a},\mathbf{b})}{2}=\frac{d_E^2(\mathbf{a},\mathbf{b})}{4}.
(A5)

From the former equality, Eq. (A5), we can state that, although, in general, the cosine distance is not a proper metric, as it does not meet the triangle inequality, it can be easily interpreted by means of the Euclidean distance between the two vectors.

The four properties a metric defined by a distance function d(a,b) should satisfy are

  1. $d(\mathbf{a},\mathbf{b})\ge 0$,

  2. $d(\mathbf{a},\mathbf{b})=0\iff\mathbf{a}=\mathbf{b}$,

  3. $d(\mathbf{a},\mathbf{b})=d(\mathbf{b},\mathbf{a})$, and

  4. $d(\mathbf{a},\mathbf{c})\le d(\mathbf{a},\mathbf{b})+d(\mathbf{b},\mathbf{c})$.

Considering cosine distance, dC(a,b), condition 4 can be written as

d_C(\mathbf{a},\mathbf{c})\le d_C(\mathbf{a},\mathbf{b})+d_C(\mathbf{b},\mathbf{c})\;\Longleftrightarrow\;\frac{1-\cos(\mathbf{a},\mathbf{c})}{2}\le\frac{1-\cos(\mathbf{a},\mathbf{b})}{2}+\frac{1-\cos(\mathbf{b},\mathbf{c})}{2}\;\Longleftrightarrow\;\cos(\mathbf{a},\mathbf{b})+\cos(\mathbf{b},\mathbf{c})-\cos(\mathbf{a},\mathbf{c})\le 1.
(A6)

Equation (A6) is, in general, not satisfied. Let us provide a counterexample by considering three two-dimensional vectors normalized to unity, $\mathbf{a}=(1,0)$, $\mathbf{b}=(\sqrt{2}/2,\sqrt{2}/2)$, and $\mathbf{c}=(0,1)$,

0.71+0.71-0=1.41\nleq 1.
(A7)

Nevertheless, we are interested in small phase differences, and hence, small angular arguments. In this situation, if $\angle(\mathbf{a},\mathbf{b})=\theta_1$, $\angle(\mathbf{b},\mathbf{c})=\theta_2$, and $\angle(\mathbf{a},\mathbf{c})=\theta_3=\theta_1+\theta_2$, Eq. (A6) can be approximated as

1-\frac{\theta_1^2}{2}+1-\frac{\theta_2^2}{2}-\Bigl(1-\frac{\theta_3^2}{2}\Bigr)\le 1\;\Longrightarrow\;-|\theta_1|\,|\theta_2|\le 0.
(A8)

The condition in Eq. (A8) is always true and, therefore, Eq. (7) is a proper distance for our purposes.

4. Example of the calculation for a 4-node network

In Sec. IV A, we derived the analytical expression of functionability for the case in which the linearity conditions are satisfied, under the considerations described in Sec. IV.

In order to understand the mathematical derivation of the centrality, we go through a simple example on a 4-node network, solving step by step the linear system that arises from the stated problem.

Let us consider the following network. The adjacency and degree matrices of Fig. 10 are

A=\begin{pmatrix}0&1&1&1\\1&0&1&0\\1&1&0&0\\1&0&0&0\end{pmatrix},\qquad D=\begin{pmatrix}3&0&0&0\\0&2&0&0\\0&0&2&0\\0&0&0&1\end{pmatrix}\quad\text{such that }[D]_{ii}=d_i.
(A9)

Using Eq. (A9), the Laplacian matrix is

L\equiv D-A=\begin{pmatrix}3&-1&-1&-1\\-1&2&-1&0\\-1&-1&2&0\\-1&0&0&1\end{pmatrix}.
(A10)

Going back to the definition of the reference node and the control node, and considering the definition of the Reduced Laplacian matrix, in Eq. (21), we can write it down for the case of the wineglass network. As an example, we calculate L~(C,R) for C=0, R=0, and R=1.

\tilde L(0,0)=\begin{pmatrix}2&-1&0\\-1&2&0\\0&0&1\end{pmatrix},\qquad \tilde L(0,1)=\begin{pmatrix}-1&-1&0\\-1&2&0\\-1&0&1\end{pmatrix}.
(A11)

Functionability is defined through the configuration described in Eq. (8). Thus, Eq. (16) for the control node C leads to

\Omega=-\frac{1}{N}\sum_l^{N}d_l\alpha_l=-\frac{\alpha\,d_C}{N}.
(A12)

Let us derive Eq. (10) for the wineglass network. The more general (linearized) system of equations for such a network is

\begin{cases}\dot\theta_0=\theta_1+\theta_2+\theta_3-3\theta_0-3\alpha_0,\\ \dot\theta_1=\theta_0+\theta_2-2\theta_1-2\alpha_1,\\ \dot\theta_2=\theta_0+\theta_1-2\theta_2-2\alpha_2,\\ \dot\theta_3=\theta_0-\theta_3-\alpha_3.\end{cases}
(A13)

In the stable state, for a given control node, $\dot\theta_i=\Omega\ \forall i$,

\begin{cases}\Omega=\theta_1+\theta_2+\theta_3-3\theta_0-3\alpha_0,\\ \Omega=\theta_0+\theta_2-2\theta_1-2\alpha_1,\\ \Omega=\theta_0+\theta_1-2\theta_2-2\alpha_2,\\ \Omega=\theta_0-\theta_3-\alpha_3.\end{cases}
(A14)

We then immediately calculate the phase differences generated when each of the nodes is the control node, C. To do so, we need to compute the differences with respect to all nodes, that is, considering each of the nodes as the reference node, R.

  • Control node 0 (C=0)
    \vec\alpha=(\alpha,0,0,0)\;\Rightarrow\;\Omega=-\frac{3\alpha}{4}.
    (A15)
    1. Reference node 0 (R=0)
      \begin{cases}\Omega=\phi_{20}-2\phi_{10},\\ \Omega=\phi_{10}-2\phi_{20},\\ \Omega=-\phi_{30},\end{cases}\quad \begin{pmatrix}-2&1&0\\1&-2&0\\0&0&-1\end{pmatrix}\begin{pmatrix}\phi_{10}\\\phi_{20}\\\phi_{30}\end{pmatrix}=\begin{pmatrix}\Omega\\\Omega\\\Omega\end{pmatrix},\quad \begin{pmatrix}2&-1&0\\-1&2&0\\0&0&1\end{pmatrix}\begin{pmatrix}\phi_{10}\\\phi_{20}\\\phi_{30}\end{pmatrix}=\begin{pmatrix}\tfrac{3\alpha}{4}\\\tfrac{3\alpha}{4}\\\tfrac{3\alpha}{4}\end{pmatrix},\quad \tilde L(0,0)\,\vec\phi(0,0)=\frac{3\alpha}{4}\mathbf{1},\quad \vec\phi(0,0)=[\tilde L(0,0)]^{-1}\frac{3\alpha}{4}\mathbf{1},
      (A16)
      \phi_{10}=\frac{3\alpha}{4},\quad\phi_{20}=\frac{3\alpha}{4},\quad\phi_{30}=\frac{3\alpha}{4}.
      (A17)
    2. Reference node 1 (R=1)
      \begin{cases}\Omega=\phi_{01}+\phi_{21},\\ \Omega=\phi_{01}-2\phi_{21},\\ \Omega=\phi_{01}-\phi_{31},\end{cases}\quad \begin{pmatrix}1&1&0\\1&-2&0\\1&0&-1\end{pmatrix}\begin{pmatrix}\phi_{01}\\\phi_{21}\\\phi_{31}\end{pmatrix}=\begin{pmatrix}\Omega\\\Omega\\\Omega\end{pmatrix},\quad \begin{pmatrix}-1&-1&0\\-1&2&0\\-1&0&1\end{pmatrix}\begin{pmatrix}\phi_{01}\\\phi_{21}\\\phi_{31}\end{pmatrix}=\begin{pmatrix}\tfrac{3\alpha}{4}\\\tfrac{3\alpha}{4}\\\tfrac{3\alpha}{4}\end{pmatrix},\quad \tilde L(0,1)\,\vec\phi(0,1)=\frac{3\alpha}{4}\mathbf{1},\quad \vec\phi(0,1)=[\tilde L(0,1)]^{-1}\frac{3\alpha}{4}\mathbf{1},
      (A18)
      \phi_{01}=-\frac{3\alpha}{4},\quad\phi_{21}=0,\quad\phi_{31}=0.
      (A19)
    3. Reference node 2 (R=2)
      \begin{cases}\Omega=\phi_{02}-2\phi_{12},\\ \Omega=\phi_{02}+\phi_{12},\\ \Omega=\phi_{02}-\phi_{32},\end{cases}\quad \begin{pmatrix}1&-2&0\\1&1&0\\1&0&-1\end{pmatrix}\begin{pmatrix}\phi_{02}\\\phi_{12}\\\phi_{32}\end{pmatrix}=\begin{pmatrix}\Omega\\\Omega\\\Omega\end{pmatrix},\quad \begin{pmatrix}-1&2&0\\-1&-1&0\\-1&0&1\end{pmatrix}\begin{pmatrix}\phi_{02}\\\phi_{12}\\\phi_{32}\end{pmatrix}=\begin{pmatrix}\tfrac{3\alpha}{4}\\\tfrac{3\alpha}{4}\\\tfrac{3\alpha}{4}\end{pmatrix},\quad \tilde L(0,2)\,\vec\phi(0,2)=\frac{3\alpha}{4}\mathbf{1},\quad \vec\phi(0,2)=[\tilde L(0,2)]^{-1}\frac{3\alpha}{4}\mathbf{1},
      (A20)
      \phi_{02}=-\frac{3\alpha}{4},\quad\phi_{12}=0,\quad\phi_{32}=0.
      (A21)
    4. Reference node 3 (R=3)
      \begin{cases}\Omega=\phi_{03}+\phi_{23}-2\phi_{13},\\ \Omega=\phi_{03}+\phi_{13}-2\phi_{23},\\ \Omega=\phi_{03},\end{cases}\quad \begin{pmatrix}1&-2&1\\1&1&-2\\1&0&0\end{pmatrix}\begin{pmatrix}\phi_{03}\\\phi_{13}\\\phi_{23}\end{pmatrix}=\begin{pmatrix}\Omega\\\Omega\\\Omega\end{pmatrix},\quad \begin{pmatrix}-1&2&-1\\-1&-1&2\\-1&0&0\end{pmatrix}\begin{pmatrix}\phi_{03}\\\phi_{13}\\\phi_{23}\end{pmatrix}=\begin{pmatrix}\tfrac{3\alpha}{4}\\\tfrac{3\alpha}{4}\\\tfrac{3\alpha}{4}\end{pmatrix},\quad \tilde L(0,3)\,\vec\phi(0,3)=\frac{3\alpha}{4}\mathbf{1},\quad \vec\phi(0,3)=[\tilde L(0,3)]^{-1}\frac{3\alpha}{4}\mathbf{1},
      (A22)
      \phi_{03}=-\frac{3\alpha}{4},\quad\phi_{13}=0,\quad\phi_{23}=0.
      (A23)
FIG. 10. The 4-node network considered for the derivation of a particular case of functionability. We call it the wineglass network.
FIG. 11. Same as Fig. 6 for the values of L-periphery.

FIG. 12. Same as Fig. 6 for the values of degree centrality.

FIG. 13. Same as Fig. 6 for the values of betweenness centrality.

FIG. 14. Same as Fig. 6 for the values of the eigenvector centrality.
  • Control node 1 ($C=1$)

    $$\boldsymbol{\alpha} = (0, \alpha, 0, 0) \;\Rightarrow\; \Omega = -\frac{2\alpha}{4} = -\frac{\alpha}{2}.$$
    (A24)

    1. Reference node 0 ($R=0$)

      $$\begin{cases} \Omega = \phi_{10} + \phi_{20} + \phi_{30}, \\ \Omega = \phi_{10} - 2\phi_{20}, \\ \Omega = -\phi_{30}, \end{cases} \qquad \begin{pmatrix} 1 & 1 & 1 \\ 1 & -2 & 0 \\ 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} \phi_{10} \\ \phi_{20} \\ \phi_{30} \end{pmatrix} = \begin{pmatrix} \Omega \\ \Omega \\ \Omega \end{pmatrix} \;\Longleftrightarrow\; \tilde{L}^{(1,0)}\phi^{(1,0)} = \frac{\alpha}{2}\,\mathbb{1},$$
      (A25)
      $$\phi^{(1,0)} = \left[\tilde{L}^{(1,0)}\right]^{-1}\frac{\alpha}{2}\,\mathbb{1}: \quad \phi_{10} = -\frac{5\alpha}{6}, \quad \phi_{20} = -\frac{\alpha}{6}, \quad \phi_{30} = \frac{\alpha}{2}.$$
      (A26)

    2. Reference node 1 ($R=1$)

      $$\begin{cases} \Omega = \phi_{21} + \phi_{31} - 3\phi_{01}, \\ \Omega = \phi_{01} - 2\phi_{21}, \\ \Omega = \phi_{01} - \phi_{31}, \end{cases} \qquad \begin{pmatrix} -3 & 1 & 1 \\ 1 & -2 & 0 \\ 1 & 0 & -1 \end{pmatrix} \begin{pmatrix} \phi_{01} \\ \phi_{21} \\ \phi_{31} \end{pmatrix} = \begin{pmatrix} \Omega \\ \Omega \\ \Omega \end{pmatrix} \;\Longleftrightarrow\; \tilde{L}^{(1,1)}\phi^{(1,1)} = \frac{\alpha}{2}\,\mathbb{1},$$
      (A27)
      $$\phi^{(1,1)} = \left[\tilde{L}^{(1,1)}\right]^{-1}\frac{\alpha}{2}\,\mathbb{1}: \quad \phi_{01} = \frac{5\alpha}{6}, \quad \phi_{21} = \frac{2\alpha}{3}, \quad \phi_{31} = \frac{4\alpha}{3}.$$
      (A28)

    3. Reference node 2 ($R=2$). We can proceed as in the previous cases and obtain

      $$\phi^{(1,2)} = \left[\tilde{L}^{(1,2)}\right]^{-1}\frac{\alpha}{2}\,\mathbb{1}, \qquad \tilde{L}^{(1,2)} = \begin{pmatrix} 3 & -1 & -1 \\ -1 & -1 & 0 \\ -1 & 0 & 1 \end{pmatrix}: \quad \phi_{02} = \frac{\alpha}{6}, \quad \phi_{12} = -\frac{2\alpha}{3}, \quad \phi_{32} = \frac{2\alpha}{3}.$$
      (A29)

    4. Reference node 3 ($R=3$)

      $$\phi^{(1,3)} = \left[\tilde{L}^{(1,3)}\right]^{-1}\frac{\alpha}{2}\,\mathbb{1}, \qquad \tilde{L}^{(1,3)} = \begin{pmatrix} 3 & -1 & -1 \\ -1 & -1 & 2 \\ -1 & 0 & 0 \end{pmatrix}: \quad \phi_{03} = -\frac{\alpha}{2}, \quad \phi_{13} = -\frac{4\alpha}{3}, \quad \phi_{23} = -\frac{2\alpha}{3}.$$
      (A30)
  • Control node 2 ($C=2$)

    $$\boldsymbol{\alpha} = (0, 0, \alpha, 0) \;\Rightarrow\; \Omega = -\frac{2\alpha}{4} = -\frac{\alpha}{2}.$$
    (A31)

    1. Reference node 0 ($R=0$)

      $$\phi^{(2,0)} = \left[\tilde{L}^{(2,0)}\right]^{-1}\frac{\alpha}{2}\,\mathbb{1}, \qquad \tilde{L}^{(2,0)} = \begin{pmatrix} -1 & -1 & -1 \\ 2 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}: \quad \phi_{10} = -\frac{\alpha}{6}, \quad \phi_{20} = -\frac{5\alpha}{6}, \quad \phi_{30} = \frac{\alpha}{2}.$$
      (A32)

    2. Reference node 1 ($R=1$)

      $$\phi^{(2,1)} = \left[\tilde{L}^{(2,1)}\right]^{-1}\frac{\alpha}{2}\,\mathbb{1}, \qquad \tilde{L}^{(2,1)} = \begin{pmatrix} 3 & -1 & -1 \\ -1 & -1 & 0 \\ -1 & 0 & 1 \end{pmatrix}: \quad \phi_{01} = \frac{\alpha}{6}, \quad \phi_{21} = -\frac{2\alpha}{3}, \quad \phi_{31} = \frac{2\alpha}{3}.$$
      (A33)

    3. Reference node 2 ($R=2$)

      $$\phi^{(2,2)} = \left[\tilde{L}^{(2,2)}\right]^{-1}\frac{\alpha}{2}\,\mathbb{1}, \qquad \tilde{L}^{(2,2)} = \begin{pmatrix} 3 & -1 & -1 \\ -1 & 2 & 0 \\ -1 & 0 & 1 \end{pmatrix}: \quad \phi_{02} = \frac{5\alpha}{6}, \quad \phi_{12} = \frac{2\alpha}{3}, \quad \phi_{32} = \frac{4\alpha}{3}.$$
      (A34)

    4. Reference node 3 ($R=3$)

      $$\phi^{(2,3)} = \left[\tilde{L}^{(2,3)}\right]^{-1}\frac{\alpha}{2}\,\mathbb{1}, \qquad \tilde{L}^{(2,3)} = \begin{pmatrix} 3 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & 0 & 0 \end{pmatrix}: \quad \phi_{03} = -\frac{\alpha}{2}, \quad \phi_{13} = -\frac{2\alpha}{3}, \quad \phi_{23} = -\frac{4\alpha}{3}.$$
      (A35)
  • Control node 3 ($C=3$)

    $$\boldsymbol{\alpha} = (0, 0, 0, \alpha) \;\Rightarrow\; \Omega = -\frac{\alpha}{4}.$$
    (A36)

    1. Reference node 0 ($R=0$)

      $$\phi_{10} = \frac{\alpha}{4}, \quad \phi_{20} = \frac{\alpha}{4}, \quad \phi_{30} = -\frac{3\alpha}{4}.$$
      (A37)

    2. Reference node 1 ($R=1$)

      $$\phi_{01} = -\frac{\alpha}{4}, \quad \phi_{21} = 0, \quad \phi_{31} = -\alpha.$$
      (A38)

    3. Reference node 2 ($R=2$)

      $$\phi_{02} = -\frac{\alpha}{4}, \quad \phi_{12} = 0, \quad \phi_{32} = -\alpha.$$
      (A39)

    4. Reference node 3 ($R=3$)

      $$\phi_{03} = \frac{3\alpha}{4}, \quad \phi_{13} = \alpha, \quad \phi_{23} = \alpha.$$
      (A40)

Once we have computed $\phi_{iR}^{(C)}$ for all possible control and reference nodes, we write down the matrix of phase differences, $\Delta\Phi^{(C)}$, for each control node,

$$\Delta\Phi^{(C=0)} = \begin{pmatrix} 0 & -3\alpha/4 & -3\alpha/4 & -3\alpha/4 \\ 3\alpha/4 & 0 & 0 & 0 \\ 3\alpha/4 & 0 & 0 & 0 \\ 3\alpha/4 & 0 & 0 & 0 \end{pmatrix},$$
(A41)
$$\Delta\Phi^{(C=1)} = \begin{pmatrix} 0 & 5\alpha/6 & \alpha/6 & -\alpha/2 \\ -5\alpha/6 & 0 & -2\alpha/3 & -4\alpha/3 \\ -\alpha/6 & 2\alpha/3 & 0 & -2\alpha/3 \\ \alpha/2 & 4\alpha/3 & 2\alpha/3 & 0 \end{pmatrix},$$
(A42)
$$\Delta\Phi^{(C=2)} = \begin{pmatrix} 0 & \alpha/6 & 5\alpha/6 & -\alpha/2 \\ -\alpha/6 & 0 & 2\alpha/3 & -2\alpha/3 \\ -5\alpha/6 & -2\alpha/3 & 0 & -4\alpha/3 \\ \alpha/2 & 2\alpha/3 & 4\alpha/3 & 0 \end{pmatrix},$$
(A43)
$$\Delta\Phi^{(C=3)} = \begin{pmatrix} 0 & -\alpha/4 & -\alpha/4 & 3\alpha/4 \\ \alpha/4 & 0 & 0 & \alpha \\ \alpha/4 & 0 & 0 & \alpha \\ -3\alpha/4 & -\alpha & -\alpha & 0 \end{pmatrix}.$$
(A44)

Using the distance matrix, Eq. (7), its approximation in Eq. (26), and the analytical expression of functionability in Eq. (30), we compute for each control node

$$\varepsilon_{ij}^{(pq)} \approx \frac{(\Delta\phi_{ij})^2}{4}, \qquad F_C \equiv \sum_{ij} \varepsilon_{ij}^{(pq)}.$$

Therefore, if, for each control node, we square all the elements of the $\Delta\Phi$ matrix, sum them, and divide the result by 4, we obtain

$$F = \left( \frac{27}{32}\,\alpha^2,\; \frac{131}{72}\,\alpha^2,\; \frac{131}{72}\,\alpha^2,\; \frac{43}{32}\,\alpha^2 \right),$$
(A45)
(A45)

which coincides with the result of Eq. (30) and can be checked through simulation of the dynamical system. If the frustration parameter is small enough, all of the expressions are equivalent. As α increases, the dynamical system may enter a chaotic regime or be driven out of frequency synchronization (see the Appendix, Subsection 2).
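As a check of the numbers above, the following sketch (ours, assuming the linearized convention of Eq. (A13), $\dot{\theta}_i = \sum_j A_{ij}(\theta_j-\theta_i) - d_i\alpha_i$) solves the stationary system for each control node of the wineglass network and recovers the functionability values of Eq. (A45).

```python
import numpy as np

# Wineglass network of Fig. 10.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]])
N = len(A)
deg = A.sum(axis=1)
L = np.diag(deg) - A
alpha = 1e-3                                    # small frustration: linear regime

F = []
for C in range(N):                              # control node
    a = np.zeros(N)
    a[C] = alpha
    Omega = -deg @ a / N                        # Eq. (A12)
    # Stationary condition: -L theta - d*alpha = Omega*1, i.e., L theta = -d*alpha - Omega*1.
    # L is singular (constant zero mode); phase differences are gauge invariant,
    # so a least-squares solution is sufficient.
    theta = np.linalg.lstsq(L, -deg * a - Omega * np.ones(N), rcond=None)[0]
    dphi = theta[:, None] - theta[None, :]      # Delta phi_ij = theta_i - theta_j
    F.append(np.sum(dphi ** 2) / 4.0)           # small-angle distance, Eqs. (26) and (30)

print(np.array(F) / alpha ** 2)                 # -> approx [27/32, 131/72, 131/72, 43/32]
```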

Functionability can be normalized in several ways, as we suggest in Eq. (31), but here we keep the most general expression for this case.

5. Classic centralities in the C. elegans network

We provide the figures corresponding to the 131 nodes of the C. elegans frontal cortex network, with node color and radius proportional to the values of the L-periphery, degree, betweenness, and eigenvector centralities. Nodes are located at their real xy positions. The labels of the top 10 neurons are shown in descending order (Figs. 11–14).
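For reference, the three standard centralities shown in Figs. 12–14 can be obtained directly with networkx; the snippet below is ours, the file name is only a placeholder for the connectome edge list, and the L-periphery of Fig. 11 is the measure defined in the main text, so it is not computed here.

```python
import networkx as nx

# Placeholder file name: an undirected edge list of the 131-node frontal network.
G = nx.read_edgelist("celegans_frontal.edges", nodetype=int)

centralities = {
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
}

# Top 10 nodes per measure, in descending order, as reported in Figs. 12-14.
for name, values in centralities.items():
    top10 = sorted(values, key=values.get, reverse=True)[:10]
    print(name, top10)
```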

1. A. Pikovsky, M. Rosenblum, and J. Kurths, Synchronization: A Universal Concept in Nonlinear Sciences (Cambridge University Press, Potsdam, 2001).
2. G. V. Osipov, J. Kurths, and C. Zhou, Synchronization in Oscillatory Networks (Springer, Berlin, 2007).
3. Y. Kuramoto, Chemical Oscillations, Waves, and Turbulence (Springer, New York, NY, 1984).
4. H. Sakaguchi and Y. Kuramoto, "A soluble active rotator model showing phase transitions via mutual entrainment," Prog. Theor. Phys. 76, 576–581 (1986).
5. J. A. Acebrón, L. L. Bonilla, C. J. Pérez-Vicente, F. Ritort, and R. Spigler, "The Kuramoto model: A simple paradigm for synchronization phenomena," Rev. Mod. Phys. 77, 137–185 (2005).
6. A. Arenas, A. Díaz-Guilera, J. Kurths, Y. Moreno, and C. Zhou, "Synchronization in complex networks," Phys. Rep. 469, 93–153 (2008).
7. A. Arenas, A. Díaz-Guilera, and C. J. Pérez-Vicente, "Synchronization reveals topological scales in complex networks," Phys. Rev. Lett. 96, 114102 (2006).
8. A. Arenas, A. Díaz-Guilera, and C. J. Pérez-Vicente, "Synchronization processes in complex networks," Physica D 224, 27–34 (2006).
9. A. Arenas and A. Díaz-Guilera, "Synchronization and modularity in complex networks," Eur. Phys. J. Spec. Top. 143, 19–25 (2007).
10. V. Nicosia, M. Valencia, M. Chavez, A. Díaz-Guilera, and V. Latora, "Remote synchronization reveals network symmetries and functional modules," Phys. Rev. Lett. 110, 174102 (2013).
11. L. M. Pecora, F. Sorrentino, A. M. Hagerstrom, T. E. Murphy, and R. Roy, "Cluster synchronization and isolated desynchronization in complex networks with symmetries," Nat. Commun. 5, 4079 (2014).
12. O. Sporns, Networks of the Brain (The MIT Press, 2010).
13. K. Dharani, The Biology of Thought. A Neuronal Mechanism in the Generation of Thought—A New Molecular Model (Academic Press, 2015), pp. 53–74.
14. K.-P. Lesch and J. Waider, "Serotonin in the modulation of neural plasticity and networks: Implications for neurodevelopmental disorders," Neuron 76, 175–191 (2012).
15. S. N. Burke and C. A. Barnes, "Neural plasticity in the ageing brain," Nat. Rev. Neurosci. 7, 30–40 (2006).
16. O. Sporns, "Structure and function of complex brain networks," Dialogues Clin. Neurosci. 15(3), 247–262 (2013).
17. B. Mišić and O. Sporns, "From regions to connections and networks: New bridges between brain and behavior," Curr. Opin. Neurobiol. 40, 1–7 (2016).
18. R. M. Hutchison, T. Womelsdorf, E. A. Allen, P. A. Bandettini, V. D. Calhoun, M. Corbetta, S. Della Penna, J. H. Duyn, G. H. Glover, J. Gonzalez-Castillo, D. A. Handwerker, S. Keilholz, V. Kiviniemi, D. A. Leopold, F. de Pasquale, O. Sporns, M. Walter, and C. Chang, "Dynamic functional connectivity: Promise, issues, and interpretations," NeuroImage 80, 360–378 (2013).
19. G. Buzsáki and A. Draguhn, "Neuronal oscillations in cortical networks," Science 304, 1926–1929 (2004).
20. R. Schmidt, K. J. R. Lafleur, M. A. De Reus, L. H. Van Den Berg, and M. P. Van Den Heuvel, "Kuramoto model simulation of neural hubs and dynamic synchrony in the human cerebral connectome," BMC Neurosci. 16, 54 (2015).
21. R. F. Betzel, A. Avena-Koenigsberger, J. Goñi, Y. He, M. A. De Reus, A. Griffa, P. E. Vértes, B. Mišic, J.-P. Thiran, P. Hagmann, M. Van Den Heuvel, X.-N. Zuo, E. T. Bullmore, and O. Sporns, "Generative models of the human connectome," NeuroImage 124, 1054–1064 (2016).
22. S. Arianos, E. Bompard, A. Carbone, and F. Xue, "Power grid vulnerability: A complex network approach," Chaos Interdiscip. J. Nonlinear Sci. 19, 113111 (2009).
23. R. Albert, I. Albert, and G. L. Nakarado, "Structural vulnerability of the North American power grid," Phys. Rev. E 69, 025103 (2004).
24. X. Lei, X. Li, and D. Povh, "A nonlinear control for coordinating TCSC and generator excitation to enhance the transient stability of long transmission systems," Electric Power Syst. Res. 59, 103–109 (2001).
25. U. R. Acharya, K. P. Joseph, N. Kannathal, L. C. Min, and J. S. Suri, "Heart rate variability," in Advances in Cardiac Signal Processing (Springer, Berlin, 2007), pp. 121–165.
26. D.-w. Huang and W.-n. Huang, "Traffic signal synchronization," Phys. Rev. E 67, 7 (2003).
27. S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, and D.-U. Hwang, "Complex networks: Structure and dynamics," Phys. Rep. 424, 175–308 (2006).
28. D. Braha and Y. Bar-Yam, "The statistical mechanics of complex product development: Empirical and analytical results," Manage. Sci. 53, 1127–1145 (2007).
29. S. H. Strogatz, "Exploring complex networks," Nature 410, 268–276 (2001).
30. E. Bullmore and O. Sporns, "Complex brain networks: Graph theoretical analysis of structural and functional systems," Nat. Rev. Neurosci. 10, 186–198 (2009).
31. Y. Kuramoto, "Self-entrainment of a population of coupled non-linear oscillators," in Lecture Notes in Physics (Springer, Berlin, 1975), Vol. 30.
32. J. Gómez-Gardeñes, Y. Moreno, and A. Arenas, "Paths to synchronization on complex networks," Phys. Rev. Lett. 98, 034101 (2007).
33. A. Bergner, M. Frasca, G. Sciuto, A. Buscarino, E. J. Ngamga, L. Fortuna, and J. Kurths, "Remote synchronization in star networks," Phys. Rev. E 85, 026208 (2012).
34. O. E. Omel'chenko and M. Wolfrum, "Nonuniversal transitions to synchrony in the Sakaguchi-Kuramoto model," Phys. Rev. Lett. 109, 164101 (2012).
35. The original definition of the measure is time dependent. We are concerned only with the stationary regime though.
36. X. Jiang and D. M. Abrams, "Symmetry-broken states on networks of coupled oscillators," Phys. Rev. E 93, 1–5 (2016).
37. M. Brede and A. C. Kalloniatis, "Frustration tuning and perfect phase synchronization in the Kuramoto-Sakaguchi model," Phys. Rev. E 93, 1–13 (2016).
38. J. Ye, "Cosine similarity measures for intuitionistic fuzzy sets and their applications," Math. Comput. Model. 53, 91–97 (2011).
39. In the original KM model (see Ref. 4), it has been proved that the frustration parameter must satisfy αi < π/2 to reach a coherent state with a non-zero value of the order parameter. Otherwise, the system enters a kind of chaotic regime.
40. R. Merris, "Laplacian matrices of graphs: A survey," Linear Algebra Appl. 197, 143–176 (1994).
41. M.-T. Kuhnert, C. Geier, C. E. Elger, and K. Lehnertz, "Identifying important nodes in weighted functional brain networks: A comparison of different centrality approaches," Chaos 22, 023142 (2012).
42. M. Kaiser and C. C. Hilgetag, "Nonoptimal component placement, but short processing paths, due to long-distance projections in neural systems," PLoS Comput. Biol. 2, 1–11 (2006).
43. P. Bonacich, "Some unique properties of eigenvector centrality," Soc. Networks 29, 555–564 (2007).
44. M. Benzi and C. Klymko, "On the limiting behaviour of parameter-dependent network centrality measures," SIAM J. Matrix Anal. Appl. 36, 686–706 (2015).
45. E. Estrada and D. J. Higham, "Network properties revealed through matrix functions," SIAM Rev. 52, 696 (2010).
46. S. P. Borgatti, "Centrality and network flow," Soc. Netw. 27, 55–71 (2005).
47. R. Ghosh, S.-h. Teng, K. Lerman, and X. Yan, "The interplay between dynamics and networks: Centrality, communities, and Cheeger inequality," in Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining—KDD '14 (Association for Computing Machinery, 2014), pp. 1406–1415.
48. G. Rosell-Tarragó, E. Cozzo, and A. Díaz-Guilera, "A complex network framework to model cognition: Unveiling correlation structures from connectivity," Complexity 2018, 1918753 (2018).
49. A. U. Levin and K. S. Narendra, "Control of nonlinear dynamical systems using neural networks: Controllability and stabilization," IEEE Trans. Neural Netw. 4, 192–206 (1993).
50. Y.-Y. Liu, J.-J. Slotine, and A.-L. Barabási, "Controllability of complex networks," Nature 473, 167–173 (2011).
51. F. Pasqualetti, S. Zampieri, and F. Bullo, "Controllability metrics, limitations and algorithms for complex networks," IEEE Trans. Control Netw. Syst. 1, 40–52 (2014); e-print arXiv:1308.1201.
52. A. Lombardi and M. Hörnquist, "Controllability analysis of networks," Phys. Rev. E 75, 056110 (2007).
53. S. Gu, F. Pasqualetti, M. Cieslak, Q. K. Telesford, A. B. Yu, A. E. Kahn, J. D. Medaglia, J. M. Vettel, M. B. Miller, S. T. Grafton, and D. S. Bassett, "Controllability of structural brain networks," Nat. Commun. 6, 8414 (2015).
54. M. Galbiati, D. Delpini, and S. Battiston, "The power to control," Nat. Phys. 9, 126–128 (2013).
55. F. Sorrentino, "Effects of the network structural properties on its controllability," Chaos 17, 033101 (2007); e-print arXiv:0708.1097.
56. A. M. A. Hamdan and A. H. Nayfeh, "Measures of modal controllability and observability for first- and second-order linear systems," J. Guid. Control. Dynam. 12, 421–428 (1989).
57. E. Tang, C. Giusti, G. L. Baum, S. Gu, E. Pollock, A. E. Kahn, D. R. Roalf, T. M. Moore, K. Ruparel, R. C. Gur, R. E. Gur, T. D. Satterthwaite, and D. S. Bassett, "Developmental increases in white matter network controllability support a growing diversity of brain dynamics," Nat. Commun. 8, 1252 (2017).
58. P. Rombach, M. A. Porter, J. H. Fowler, and P. J. Mucha, "Core-periphery structure in networks (revisited)," SIAM J. Discrete Math. 59, 619–646 (2017).
59. S. P. Borgatti and M. G. Everett, "Models of core/periphery structures," Soc. Netw. 21, 375–395 (1999).
60. P. Hagmann, L. Cammoun, X. Gigandet, R. Meuli, C. J. Honey, V. J. Wedeen, and O. Sporns, "Mapping the structural core of human cerebral cortex," PLoS Biol. 6, e159 (2008).
61. M. Cucuringu, P. Rombach, S. H. Lee, and M. A. Porter, "Detection of core–periphery structure in networks using spectral methods and geodesic paths," Euro. J. Appl. Math. 27, 846–887 (2016).
62. A. Ma and R. J. Mondragón, "Rich-cores in networks," PLoS ONE 10, e0119678 (2015).
63. R. Albert, H. Jeong, and A.-L. Barabási, "Error and attack tolerance of complex networks," Nature 406, 542 (2000); e-print arXiv:cond-mat/0008064.
64. F. Morone and H. A. Makse, "Influence maximization in complex networks through optimal percolation," Nature 524, 65–68 (2015); e-print arXiv:1506.08326.
65. G. D. Ferraro, A. Moreno, B. Min, F. Morone, Ú. Pérez-Ramírez, L. Pérez-Cervera, L. C. Parra, A. Holodny, S. Canals, and H. A. Makse, "Finding influential nodes for integration in brain networks using optimal percolation theory," Nat. Commun. 9, 1608 (2018).
66. F. Morone, K. Roth, B. Min, H. E. Stanley, and H. A. Makse, "Model of brain activation predicts the neural collective influence map of the brain," Proc. Natl. Acad. Sci. U.S.A. 114, 3849–3854 (2017).
67. D. S. Callaway, M. E. Newman, S. H. Strogatz, and D. J. Watts, "Network robustness and fragility: Percolation on random graphs," Phys. Rev. Lett. 85, 5468–5471 (2000).
68. M. E. Newman, "Scientific collaboration networks. I. Network construction and fundamental results," Phys. Rev. E 64, 016131 (2001).