Recently, it has been shown that the control energy required to control a large dynamical complex network is prohibitively large when there are only a few control inputs. Most methods to reduce the control energy have focused on where, in the network, to place additional control inputs. It has also been shown that by controlling the states of a subset of the nodes of a network, rather than the state of every node, the energy required to control a portion of the network can be reduced substantially. The energy requirements decay exponentially as the number of target nodes is reduced, suggesting that large networks can be controlled by a relatively small number of inputs as long as the target set is appropriately sized. Here, we show that the control energy can be reduced even further if the prescribed final states are not satisfied strictly. We introduce a new control strategy called balanced control, for which we set our objective function as a convex combination of two competing terms: (i) the distance between the output final states at a given final time and the given prescribed states and (ii) the total control energy expenditure over the given time period. We also show that the required energy for the optimal balanced control problem approximates the required energy for the optimal target control problem when the coefficient of the second term is very small. We validate our conclusions in model and real networks for a variety of system sizes, energy restrictions, state restrictions, input node choices, and target node choices.

A typical problem in control applications is that of balancing the accuracy of the control action (i.e., how closely one can reach a desired state) and the control effort (i.e., the expenditure of control signals). Here, we investigate the trade-off between the control accuracy and the control effort in the context of dynamical complex networks.

Control of complex dynamical networks has been studied extensively.1–14 We define a network as controllable if a set of appropriate control signals can drive the network from an arbitrary initial condition to any final condition in finite time. If a network is controllable, there are many control signals that can drive it to the desired final condition. One important metric to characterize these control signals is the effort that each one requires. If u(t) is a control action which performs the task, the control energy associated with the control action is the cumulative magnitude of the control action over time. From optimal control theory, we can derive the optimal control action which both satisfies the initial and final conditions and minimizes the control effort required to perform the task over the time period of action.15

Different types of control strategies in Refs. 6, 8–12, 14, and 16–19 have focused on controlling a broad range of networks such as power grids,20,21 communication networks,22,23 gene regulatory networks,24 neuronal systems,25,26 food webs,27 and social systems.28 While controlling a complex network, it becomes important to reduce the required control energy.

The minimum energy framework has been examined in Refs. 17 and 18, which have shown that, depending on the underlying network structure, the set of input nodes, the desired final state, and other parameters, the energy to control a network may follow a distribution that spans many orders of magnitude.

One of the methods to reduce the required energy was investigated in Ref. 19, where additional control signals were added in optimal locations in the network according to each node's distance from the current set of control signals. Refs. 17 and 18 investigated the control energy for complex networks only when the target set coincides with the set of all nodes. Refs. 14 and 29 examined methods to choose a minimal set of independent control signals necessary to control just the targets. Ref. 30 set the control goal to affect only a subset of the network nodes, chosen as the targets of the control action, and considered the effect of this choice on the required control energy. In fact, the energy requirements decay exponentially as the number of target nodes is reduced.

In contrast with the existing literature, here, rather than satisfying the prescribed final states exactly, we reduce the control energy by moving as close as possible to the prescribed final states. We introduce a new control strategy called balanced control, where we set our performance measure as a convex combination of two objective functions: (i) a function which measures the distance from the final output states at the final time to the desired final states and (ii) a function which measures the control effort expended over the finite time period.

Complex networks consist of two parts: a set of nodes with their interconnections, which represent the topology of the network, and the dynamics, which describe the time evolution of the network nodes. First, we summarize the definitions needed to describe a network. We define V = {i}, i = 1, …, n, to be the set of n nodes that constitute a network. The adjacency matrix is a real, square n × n matrix, A, which has a nonzero element Aij if node i receives a signal from node j. For each node i, we count the number of receiving connections, called the in-degree κi^in, and the number of outgoing connections, called the out-degree κi^out. The average in-degree equals the average out-degree; we denote both by κav. One common way to characterize the topology of a network is by its degree distribution. Often the in-degree and out-degree distributions of networks that appear in science and engineering applications are scale-free, i.e., p(κ) ∼ κ^−γ, where κ is either the in-degree or out-degree with corresponding exponents γin and γout, and most often 2 ≤ γ ≤ 3.31

While most dynamical networks that arise in science and engineering are governed by nonlinear differential equations, the fundamental differences between individual networks and the uncertainty of precise dynamics make any substantial overarching conclusions difficult.6,29,31 Nonetheless, linear controllers have proven to be adequate in many applications by approximating nonlinear systems as linear systems in local regions of the n-dimensional state space.32 We examine linear dynamical systems, as it is a necessary first step to understand how balanced control may benefit nonlinear systems. The linear time invariant (LTI) network dynamics are

(1)   dx(t)/dt = A x(t) + B u(t),    y(t) = C x(t),

where x(t) = [x1(t), …, xn(t)]^T is the n × 1 time-varying state vector, u(t) = [u1(t), …, um(t)]^T is the m × 1 time-varying external control input vector, and y(t) = [y1(t), …, yp(t)]^T is the p × 1 time-varying vector of outputs, or targets. The n × n matrix A = {aij} is the adjacency matrix described previously, the n × m matrix B defines the nodes in which the m control input signals are injected, and the p × n matrix C expresses the relations between the states that are designated as the outputs. In addition, the diagonal values of A, aii, i = 1, …, n, which represent self-regulation, are chosen to be unique at each node (see justification in Ref. 33). These diagonal values are also chosen to guarantee that A is Hurwitz, so that the system in Eq. (1) is internally stable. A matrix A is called Hurwitz34 when all eigenvalues of A satisfy Re λi < 0 (see Section 3.3 in Ref. 34). We restrict ourselves to the case when the matrix B (C) has linearly independent columns (rows), each with a single nonzero element, i.e., each control signal is injected into a single node (defined as an input node) and each output is drawn from a single node (defined as a target node). We define Pp ⊆ V as the subset of target nodes and p = |Pp| as the number of target nodes. A small sample schematic is shown in Fig. 1(a) that demonstrates the graphical layout of our problem, emphasizing the graph structure and the role of input nodes (e.g., node 1 in the figure) and targets (e.g., nodes 1, 2, and 3 in the figure).
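As a concrete sketch, the matrices B and C described above can be assembled directly from chosen input and target node sets, with one versor per column (row). The node choices below mirror Fig. 1(a); the snippet itself is our own illustration, not code from the paper.

```python
import numpy as np

# Input node 1 and target nodes {1, 2, 3} of Fig. 1(a), 0-indexed here.
n = 3
inputs = [0]             # node 1 drives the network (m = 1)
targets = [0, 1, 2]      # nodes 1, 2, and 3 are targets (p = 3)

# Each column of B (row of C) is a versor selecting a single node.
B = np.zeros((n, len(inputs)))
for k, node in enumerate(inputs):
    B[node, k] = 1.0
C = np.zeros((len(targets), n))
for k, node in enumerate(targets):
    C[k, node] = 1.0

# Columns of B (rows of C) are linearly independent, as required.
assert np.linalg.matrix_rank(B) == len(inputs)
assert np.linalg.matrix_rank(C) == len(targets)
```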

FIG. 1.

Example Network. Panel (a) displays a sample network with three nodes. Each node has self-regulation labeled by aii. The input node (node 1) is in blue, and the target nodes for balanced control are in magenta (nodes 1, 2, and 3). Node 1 is directly connected to an input u1, and target nodes 1, 2, and 3 are directly connected to outputs y1, y2, and y3, respectively. In panel (b), we examine the limiting relationship in Eq. (16) for the three-node network. For a large value of α = 10−1, the output states and the optimal control input are provided in panels (c) and (e), respectively. For a small value of α = 10−7, the output states and the optimal control input are provided in panels (d) and (f), respectively.


In our optimal balanced control problem, we attempt to minimize the following cost function,

(2)   J = ((1−α)/2) ‖y(tf) − yf‖² + (α/2) ∫_{t0}^{tf} u(t)^T u(t) dt,

subject to the constraints

dx(t)/dt = A x(t) + B u(t),    x(t0) = x0,    y(t) = C x(t).

Here, the final conditions appear in the objective function, and we call them soft constraints, as we do not require them to be satisfied exactly. Note that if we set C = In, where In is the n × n identity matrix, then y(t) = x(t). The vector yf is the prescribed final output state of the nodes described by the matrix C. Here, α ∈ (0, 1] is a scaling parameter by which we can weigh the two performance measures in the cost function in (2) against each other. Note that in the case in which α = 1, the cost function in Eq. (2) becomes the cost function associated with the optimal output cost control problem in Ref. 30, where, differently from Eq. (2), the desired final state is imposed as a hard constraint.

The solution of the optimization problem in Eq. (2) is obtained using Pontryagin's maximum principle35 (see Sections 5.1 and 5.2 in Ref. 35). The Hamiltonian introduces n costates ν(t),

H = (α/2) u(t)^T u(t) + ν(t)^T [A x(t) + B u(t)].

From the Hamiltonian, the following dynamical relations can be determined:

dx(t)/dt = ∂H/∂ν = A x(t) + B u(t),    dν(t)/dt = −∂H/∂x = −A^T ν(t).

The stationary equation, ∂H/∂u = α u(t) + B^T ν(t) = 0, is used to determine the optimal control input

(3)   u*(t) = −(1/α) B^T ν(t).

The time evolution of the costates can be determined with a final condition of the form ν(tf) = (1−α) C^T ν̄, where ν̄ = y(tf) − yf,

(4)   ν(t) = (1−α) e^{A^T(tf−t)} C^T ν̄,

where ν¯ will be determined from the final output state. With the optimal control input known, the time evolution of the states can also be determined,

(5)   x(t) = e^{A(t−t0)} x0 − ((1−α)/α) ∫_{t0}^{t} e^{A(t−τ)} B B^T e^{A^T(tf−τ)} dτ C^T ν̄.

The final state of the targeted nodes can be determined,

(6)   y(tf) = C e^{A(tf−t0)} x0 − ((1−α)/α) C W C^T ν̄.

Here, W = ∫_{t0}^{tf} e^{A(tf−τ)} B B^T e^{A^T(tf−τ)} dτ is the controllability Gramian. When C is defined as above, i.e., its rows are linearly independent versors, the reduced Gramian Wp is a p-dimensional principal submatrix of W, i.e., we write Wp = C W C^T. The p-dimensional vector ν̄ can be determined in a straightforward manner,

(7)   ν̄ = (α/(1−α)) Up^{−1} β,

where β = C e^{A(tf−t0)} x0 − yf and Up = (α/(1−α)) Ip + Wp. For 0 < α < 1, the p × p matrix Up is always symmetric, positive definite, and hence invertible. In fact, the matrix Wp is positive semidefinite, and the eigenvalues of Up are the eigenvalues of Wp shifted by the positive quantity α/(1−α). Moreover, the eigenvectors of the matrices Up and Wp are the same. From Eqs. (3)–(7), the optimal control input signal when the final condition is in the objective function is equal to

(8)   u*(t) = −B^T e^{A^T(tf−t)} C^T Up^{−1} β.

The time evolution of the outputs is

(9)   y(t) = C e^{A(t−t0)} x0 − C ∫_{t0}^{t} e^{A(t−τ)} B B^T e^{A^T(tf−τ)} dτ C^T Up^{−1} β.

Different from the formulation of linear optimal control commonly seen in texts on the subject, we approach the problem in two unique ways. First, we consider the control action that minimizes the cumulative magnitude of the control input, restricted by a left boundary condition applied to the state of the system (the initial condition) and a final condition applied to only the outputs of the system. Second, we specialize the methodology to networks by restricting the definitions of the matrices B and C as discussed previously.
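The derivation above can be sketched numerically end to end. The 3-node system, horizon, and prescribed output below are illustrative assumptions, and the Gramian is computed by plain trapezoidal quadrature rather than the multiprecision machinery described later in the paper.

```python
import numpy as np

A = np.array([[-1.0, 0.5, 0.0],
              [1.0, -1.5, 0.3],
              [0.7, 0.0, -2.0]])              # Hurwitz, illustrative weights
B = np.array([[1.0], [0.0], [0.0]])           # single input at node 1
C = np.eye(3)                                  # all three nodes are targets
x0 = np.zeros(3)
yf = np.array([1.0, -0.5, 0.5])                # arbitrary prescribed output
t0, tf = 0.0, 1.0

lam, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)
eAt = lambda t: np.real(V @ np.diag(np.exp(lam * t)) @ Vinv)   # e^{At}

# Controllability Gramian W and reduced Gramian Wp = C W C^T (trapezoid rule).
ts = np.linspace(t0, tf, 1001)
vals = np.array([eAt(tf - s) @ B @ B.T @ eAt(tf - s).T for s in ts])
W = (vals[:-1] + vals[1:]).sum(axis=0) * (ts[1] - ts[0]) / 2
Wp = C @ W @ C.T

def balanced(alpha):
    Up = alpha / (1 - alpha) * np.eye(3) + Wp            # U_p
    beta = C @ eAt(tf - t0) @ x0 - yf                    # beta
    g = np.linalg.solve(Up, beta)                        # U_p^{-1} beta
    energy = g @ Wp @ g                                  # Eq. (10)
    zeta = alpha / (1 - alpha) * np.linalg.norm(g)       # ||y(tf) - yf||, Eq. (7)
    u = lambda t: -B.T @ eAt(tf - t).T @ C.T @ g         # Eq. (8)
    return energy, zeta, u

# Integrating ||u*(t)||^2 on the same grid reproduces the closed form Eq. (10).
energy, zeta, u = balanced(1e-1)
uu = np.array([float(u(s) @ u(s)) for s in ts])
e_num = (uu[:-1] + uu[1:]).sum() * (ts[1] - ts[0]) / 2
assert np.isclose(e_num, energy)

# Relaxing the terminal constraint (larger alpha) trades accuracy for energy.
e_small, z_small, _ = balanced(1e-6)
assert energy < e_small and zeta > z_small
```

With a large versus a small α, the energy drops while the terminal error grows, qualitatively mirroring panels (c)–(f) of Fig. 1.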

The energy associated with the control input in Eq. (8), while only targeting the nodes for balanced control in Pp, is defined as ε(p) = ∫_{t0}^{tf} u*(t)^T u*(t) dt. Note that ε(p) also depends on which p nodes are in the set Pp. The energy ε(p) is a measure of the ‘effort’ which must be provided to achieve the control goal. In the subsequent definitions and relations, when a variable is a function of p, we more specifically mean it is a function of a specific target set of size p, of which there are n!/(p!(n−p)!) possible sets. We can define the minimum balanced control energy (MBCE) when the control input is of the form in Eq. (8) as

(10)   ε(p) = β^T Mp β,

where the vector β = C e^{A(tf−t0)} x0 − yf and Mp = Up^{−1} Wp Up^{−1} is a p × p symmetric, real, positive semidefinite matrix. Note that the matrix Mp has the same set of eigenvectors as the matrix Wp. Moreover, the following relation relates the eigenvalues of Mp and Wp:

(11)   μi(p) = λi(p) / (α/(1−α) + λi(p))²,

where we denote the eigenvalues of Mp as μi(p) and the eigenvalues of Wp as λi(p), i = 1, …, p. It follows that the energy expression (10) defines an ellipsoid in the variable β. The axes of the ellipsoid are unaffected by the particular choice of the parameter α, while the width of each axis changes with the square root of the corresponding eigenvalue of the matrix Mp.
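The eigenvalue relation (11) is straightforward to check numerically; here a random, well-conditioned symmetric positive definite matrix stands in for the reduced Gramian Wp (an assumption made purely for illustration).

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((5, 5))
Wp = G @ G.T + 0.1 * np.eye(5)        # stand-in for the reduced Gramian
alpha = 0.3
a = alpha / (1 - alpha)

Up = a * np.eye(5) + Wp
Upinv = np.linalg.inv(Up)
Mp = Upinv @ Wp @ Upinv               # M_p = U_p^{-1} W_p U_p^{-1}

lam = np.linalg.eigvalsh(Wp)
mu = np.linalg.eigvalsh(Mp)
# Eq. (11): the eigenvalues of M_p are lam_i / (a + lam_i)^2, as a multiset
assert np.allclose(np.sort(mu), np.sort(lam / (a + lam) ** 2))
```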

We consider the eigenvalues of Mp, μi(p), i = 1, …, p, to be ordered such that 0 ≤ μ1(p) ≤ … ≤ μp(p). By fixing the magnitude of the vector β, we can define the ‘worst-case’ (or maximum) energy according to the Rayleigh quotient,

(12)   max_{β≠0} (β^T Mp β)/(β^T β) = μp(p).

The upper extreme of the control energy, denoted by εmax(p), for the control action in Eq. (8) is max{ε(p)} = ‖β‖² μp(p), which is what we call the ‘worst-case’ energy for optimal balanced control. For an arbitrary vector β, which can be represented as a linear combination of the eigenvectors of Mp, the energy can be defined as a weighted sum of the eigenvalues μi(p), which includes the worst-case energy.

Here, we briefly summarize the results for the optimal output cost control problem in Ref. 30. The cost function to be minimized is

(13)   J = (1/2) ∫_{t0}^{tf} u(t)^T u(t) dt,

where we choose u(t) such that it satisfies the prescribed initial state, x(t0) = x0, and final output, y(tf) = yf. We call this latter condition a hard constraint, as it needs to be satisfied exactly.

The optimal control input signal when the final conditions are hard constraints (see the derivation in the supplementary material), under the assumption that the triplet (A, B, C) is output controllable,30 is

(14)   u*(t) = −B^T e^{A^T(tf−t)} C^T Wp^{−1} β.

The minimum energy for the optimal output cost control is

(15)   E(p) = β^T Wp^{−1} β.

We would like to determine the limiting energy in Eq. (10) as α → 0. At each p, Pp contains the p nodes in the target set and Up is a p × p invertible matrix. When α → 0, Up → Wp, the output controllability Gramian, and the MBCE in this limit is

(16)   lim_{α→0} ε(p) = β^T Wp^{−1} β = E(p).

We also provide the limiting behavior for the worst-case energy direction,

(17)   lim_{α→0} μp(p) = 1/λ1(p).
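The limit in Eq. (16) can be verified numerically on an illustrative stand-in for Wp (a random, well-conditioned positive definite matrix, with β arbitrary; these are assumptions for illustration, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((4, 4))
Wp = G @ G.T + 0.1 * np.eye(4)                    # stand-in for W_p
beta = rng.standard_normal(4)
E = beta @ np.linalg.solve(Wp, beta)              # Eq. (15)

def eps(alpha):
    """MBCE of Eq. (10) for a given alpha."""
    Up = alpha / (1 - alpha) * np.eye(4) + Wp
    g = np.linalg.solve(Up, beta)
    return g @ Wp @ g

# eps(p) approaches E(p) monotonically as alpha -> 0, Eq. (16)
errs = [abs(eps(a) - E) for a in (1e-2, 1e-4, 1e-6)]
assert errs[0] > errs[1] > errs[2]
```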

A small, three-node example of the benefits of balanced control is shown in Fig. 1, where every node is in the target set (p = n = 3). Figure 1(a) displays the network: the input node (node 1) is in blue, and the target nodes for balanced control (nodes 1, 2, and 3) are in magenta. Node 1 is directly connected to an input u1, and target nodes 1, 2, and 3 are directly connected to outputs y1, y2, and y3, respectively. In panel (b), we examine the limiting relationship in Eq. (16) for the three-node network, and we also see how the balanced control strategy reduces the control energy as the penalizing factor α increases. For a large value of α = 10−1, the output states and the optimal control input are provided in panels (c) and (e), respectively; for a small value of α = 10−7, they are provided in panels (d) and (f), respectively. From panels (e) and (f), the integral of the magenta curves is ε(3) ≈ 4.2 and ε(3) ≈ 219, respectively, i.e., the energy is reduced roughly 55-fold in the former case.

In the cost function (2), each of the two performance measures is multiplied by a penalizing factor. It is important to investigate the relationship between the optimal returns corresponding to each performance measure as α varies. The optimal return value corresponding to (2) can be written

(18)   J* = J1* + J2* = ((1−α)/2) ζ² + (α/2) ε(p),

where ζ = ‖y(tf) − yf‖ is the final state error at time tf, J1* = ((1−α)/2) ζ², and J2* = (α/2) ε(p). We call the ratio J1*/J* the optimal error return ratio and J2*/J* the optimal energy return ratio. Note that the sum of the two ratios is equal to 1, i.e., J1*/J* + J2*/J* = 1.

Figure 2 shows how the ratios J1*/J* and J2*/J* vary with α for the case of a scale-free network with n = 300, γin = γout = 2.5, and κ = 8. We set the fraction of target nodes p/n = 0.8 and the final time tf = 1. The values on the abscissa axis are log α, so that large values of α are shown on the left-hand side and small values of α are shown on the right-hand side. When α → 0, ζ² vanishes faster than α (which multiplies ε(p)). Therefore, when α is very small, J* is dominated by J2*, and we see from Fig. 2 that J1*/J* → 0 and J2*/J* → 1 as α → 0. On the other hand, from Fig. 2, we see that as α approaches 1, J2*/J* → 0 and J1*/J* → 1. As α increases (decreases), the error component (the energy component) becomes dominant in the optimal return value in Eq. (18).
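The two return ratios can be sketched with the closed forms for ζ and ε(p) derived above, again on an illustrative random stand-in for Wp rather than the paper's network:

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.standard_normal((4, 4))
Wp = G @ G.T + 0.1 * np.eye(4)                      # stand-in for W_p
beta = rng.standard_normal(4)

def return_ratios(alpha):
    Up = alpha / (1 - alpha) * np.eye(4) + Wp
    g = np.linalg.solve(Up, beta)                   # U_p^{-1} beta
    zeta = alpha / (1 - alpha) * np.linalg.norm(g)  # ||y(tf) - yf||, Eq. (7)
    J1 = (1 - alpha) / 2 * zeta ** 2
    J2 = alpha / 2 * (g @ Wp @ g)                   # (alpha/2) eps(p)
    J = J1 + J2
    return J1 / J, J2 / J

r1_small, r2_small = return_ratios(1e-6)            # alpha -> 0: energy dominates
r1_large, r2_large = return_ratios(1 - 1e-3)        # alpha -> 1: error dominates
assert abs(r1_small + r2_small - 1) < 1e-12
assert r2_small > 0.99 and r1_large > 0.9
```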

FIG. 2.

Ratio of optimal return J*. Ratio of optimal error return J1*/J* and ratio of optimal energy return J2*/J* are plotted versus the scaling parameter, α. For the simulation, we choose a scale free network with n = 300, γin = γout = 2.5, and κ = 8. We set the fraction of target nodes, p/n = 0.8 and the final time tf = 1.


We perform numerical simulations to examine the two important results discussed in Subsections II E and II F. For our simulations in Figs. 3 and 4, we consider scale-free model networks constructed with the static model in Ref. 36 for specific parameters κ, the average degree, and γin = γout = γ, the power-law exponent of the in- and out-degrees. We choose the initial state at the origin, x0 = 0, and final states with ‖yf‖ = 1; specifically, we choose 10 different final states yf uniformly distributed on the unit sphere centered at the origin and take the mean of all results over these realizations. We also consider tf = 1 and a fraction of input nodes nd = 0.4. In Figs. 3 and 4, the solid line corresponds to the output cost control energy, E(p), where p/n is the associated target fraction.
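A rough sketch of the static-model construction follows the recipe of Ref. 36: node i is drawn with probability proportional to i^−ξ, with ξ = 1/(γ − 1). The rejection-style edge loop and binary edge weights below are our own simplifications, not the paper's exact procedure.

```python
import numpy as np

def static_model(n, kappa, gamma, rng):
    """Directed scale-free network: endpoints drawn with prob ~ i^(-xi),
    xi = 1/(gamma - 1); edges added until the average degree is kappa."""
    xi = 1.0 / (gamma - 1.0)
    w = np.arange(1, n + 1) ** (-xi)
    prob = w / w.sum()
    A = np.zeros((n, n))
    n_edges = int(kappa * n)
    count = 0
    while count < n_edges:
        i, j = rng.choice(n, size=2, p=prob)
        if i != j and A[i, j] == 0.0:   # no self-loops, no duplicate edges
            A[i, j] = 1.0
            count += 1
    return A

rng = np.random.default_rng(0)
A = static_model(300, 8, 2.5, rng)
assert A.sum() == 300 * 8 and np.all(np.diag(A) == 0)
```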

FIG. 3.

The limiting relationship of ε(p) with respect to model network parameters γ and κ. Both panels (a) and (b) correspond to the target fraction p/n = 0.8. In the left half panels, the log of the control energy for balanced control, ε(p), the final state error ζ, and the optimal return J* are plotted versus α for networks with a fixed κ = 8 and different power-law exponents (γin = γout). The solid line corresponds to the output cost control energy, E(p). The expected limiting relation is seen for each network irrespective of the power-law exponent. In the right half panels, log ε(p), ζ, and log J* are plotted versus α for networks with a fixed γin = γout = 2.5 and different average degrees (κ). The expected limiting relation is seen for each network irrespective of the average degree.

FIG. 4.

The limiting relationship of ε(p) as the time horizon and input node fraction are varied. We hold the fraction of target nodes at p/n = 0.8. In the left half panels, the log of the control energy for balanced control, ε(p), the final state error ζ, and the optimal return J* for different time horizons tf are plotted versus α. The solid line corresponds to the output cost control energy, E(p). We show the expected limiting relation for different time horizons. In the right half panels, log ε(p), ζ, and log J* for different input node fractions nd are plotted versus α. We show the expected limiting relation for different input node fractions.


On the left half panels of Fig. 3, we construct scale-free networks with the static model such that n = 300 and κ = 8 in each case. Each point is the average of 10 realizations, and the bars represent one standard deviation. The target nodes are chosen randomly and independently for each realization. In Fig. 3(a), the expected limiting relation discussed in Subsection II F is seen for each network irrespective of the power-law exponent (γin = γout). We notice that, when the network is more heterogeneous (γ is low, e.g., γ = 2.5), the terminal balancing is more beneficial than for networks that are more homogeneous (γ is high, e.g., γ = 3), as the balanced control energy remains close to the output control energy for homogeneous networks. We observe qualitatively similar results for any size of the terminal target set. On the right half panels of Fig. 3, we construct scale-free networks with the static model such that n = 300 and γin = γout = 2.5 in each case. Each point is the average of 10 realizations, and the bars represent one standard deviation. Each set of target nodes is chosen randomly and independently for each realization. In Fig. 3(b), the expected limiting relation discussed in Subsection II F is seen for each network irrespective of the average degree (κ). We notice that, when the network is sparser (κ is low, e.g., κ = 5), the terminal balancing is more beneficial than for networks that are dense (κ is high, e.g., κ = 15), as the balanced control energy remains close to the output control energy for dense networks. This result holds for any size of the target sets. In panels (c) and (d), we show the error ζ of the final state at final time tf for the same target nodes, and in panels (e) and (f), we show that the optimal return function J* decreases as α decreases.

Besides the average degree and the power-law exponent, which describe the network (Fig. 3), there are other parameters that can affect the control energy, such as the time horizon and the number of designated input nodes. In Fig. 4, each panel (a)–(f) corresponds to the target fraction p/n = 0.8. The target nodes are chosen randomly and independently for every realization. Each point is the average of 10 realizations, and the error bars represent one standard deviation. The network has properties n = 300, γin = γout = 2.5, and κ = 8. On the left half panels, we show that the limiting relationship holds as the time horizon, defined as tf − t0, changes. We notice that, when the time horizon is small (e.g., tf − t0 = 0.01), the terminal balancing is more beneficial than for a large time horizon (e.g., tf − t0 = 10), as the balanced control energy remains close to the output control energy. On the right half panels, we show that the limiting relationship holds as the fraction of input nodes nd changes. We notice that, when the fraction of designated input nodes is small (e.g., nd = 0.4), the terminal balancing is more beneficial than for a large fraction of input nodes (e.g., nd = 1), as the balanced control energy remains close to the output control energy for a large number of inputs. This result holds for any size of the target sets. In panels (c) and (d), we show the error ζ of the final state at final time tf for the same target nodes, and in panels (e) and (f), we show that the optimal return function J* decreases as α decreases.

We also analyze datasets collected from various fields in science and engineering to study how the worst-case energy for the MBCE changes with α and the size of the target set under balanced control for networks with more realistic structures. In Fig. 5, we consider six groups of datasets: Circuit,37 Protein Structure,37 Metabolic,38 Food Web,39–42 Social,43–46 and Infrastructure.47–49 For our simulation, we set tf = 1 and nd = 0.45. We take 30 realizations for one particular target fraction and take the mean over these realizations. We only show the results in Fig. 5 for p/n = 0.1 (for other values of p, our results are qualitatively the same). For comparison among the real datasets, we choose one network from each network group and plot εmax(p) versus α in Fig. 5. We see that, for small α, say, α = 10−10, the Metabolic network benefits most, as the control energy for balanced control is reduced significantly from the output control energy (magenta solid line). On the other hand, the Food Web and Social networks do not benefit as much, as the balanced control energy remains approximately the same, in comparison. However, for large values of α, say, α = 10−1, all of the networks need approximately the same amount of energy for balanced control.

FIG. 5.

Comparison among the real datasets. The log of the maximum energy for terminal control, εmax(p), is plotted versus the scaling parameter α.


In our analyses, similar to Ref. 17, we assume that the networks have stable dynamics. The scale-free model networks we consider throughout the paper are constructed with the static model.36 Diagonal noise δi, drawn from a uniform distribution between −1 and 1, is included so that the eigenvalues of the adjacency matrix are all unique. The weighted adjacency matrix A is stabilized with a value η such that each diagonal value of A is aii = δi + η, i = 1, …, n. The value η is chosen such that the maximum real part of the eigenvalues of A is equal to −1. The matrix B is constructed by choosing which nodes in the network receive an independent control signal. The matrices B (C) are composed of m (p) versors as columns (rows). We use the same method to choose input nodes as in Ref. 30.
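The stabilization step amounts to a uniform diagonal shift: adding η to every diagonal entry of A adds η to every eigenvalue, so η can be chosen in closed form. A minimal sketch, with an arbitrary random topology as a placeholder:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = (rng.random((n, n)) < 0.1).astype(float)    # placeholder directed topology

# Diagonal noise delta_i ~ U(-1, 1) makes the eigenvalues generically unique.
delta = rng.uniform(-1.0, 1.0, size=n)
np.fill_diagonal(A, delta)

# Shift the diagonal so that the maximum real part of the spectrum is -1.
eta = -1.0 - np.max(np.linalg.eigvals(A).real)
np.fill_diagonal(A, delta + eta)
assert np.isclose(np.max(np.linalg.eigvals(A).real), -1.0)
```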

The controllability Gramian, Wp, can be calculated as a function of the eigendecomposition30 of the state matrix, A = VΛV^−1. For this article, we use the multiprecision Matlab toolbox Advanpix,50 which allows the eigendecomposition of Wp to be computed in an arbitrarily precise manner. This precision allows us to calculate the eigendecomposition of Wp, the invertible matrix Up, and the matrix Mp accurately. We also use Advanpix when computing the energy for the cost function in Eq. (10).
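In double precision, the eigendecomposition route to the finite-horizon Gramian admits an entry-wise closed form; the sketch below checks it against trapezoidal quadrature. The Advanpix-style multiprecision arithmetic is not reproduced here.

```python
import numpy as np

def gramian_eig(A, B, tf):
    """W = int_0^tf e^{As} B B^T e^{A^T s} ds via A = V Lam V^{-1}:
    in the eigenbasis the (i, j) entry integrates to
    Q_ij * (exp((lam_i + lam_j) tf) - 1) / (lam_i + lam_j)."""
    lam, V = np.linalg.eig(A)
    Vinv = np.linalg.inv(V)
    Q = Vinv @ B @ B.T @ Vinv.T
    S = lam[:, None] + lam[None, :]          # lam_i + lam_j != 0 (A Hurwitz)
    F = (np.exp(S * tf) - 1.0) / S
    return np.real(V @ (Q * F) @ V.T)

A = np.array([[-1.0, 0.4], [0.2, -2.0]])     # small Hurwitz example
B = np.array([[1.0], [0.0]])
W = gramian_eig(A, B, 1.0)

# Trapezoidal check of the same integral.
lam, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)
eAt = lambda t: np.real(V @ np.diag(np.exp(lam * t)) @ Vinv)
ts = np.linspace(0.0, 1.0, 2001)
vals = np.array([eAt(s) @ B @ B.T @ eAt(s).T for s in ts])
W_num = (vals[:-1] + vals[1:]).sum(axis=0) * (ts[1] - ts[0]) / 2
assert np.allclose(W, W_num, atol=1e-5)
```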

In this paper, we provide an energy-efficient control strategy that we call balanced control. We see that by changing the penalizing factor α in the cost function in Eq. (2), the control energy needed for balanced control can be reduced dramatically. For example, in Fig. 1(b), we see that the control energy can be reduced if we relax the final state conditions. We also see the limiting behavior as α → 0: the MBCE approaches the output cost control energy. The above two results are general regardless of the network type, size, and other properties; see Figs. 1(b), 3, 4, and 5. From Fig. 3, we see that sparse and heterogeneous networks benefit more from our balanced control strategy than dense and homogeneous networks. We discuss the effect of other parameters, especially the time horizon and the number of input nodes, on the MBCE and its limiting behavior. Several real datasets have also been examined to verify this result. In Fig. 5, we compare the results for different groups of real networks and conclude that the biological networks (e.g., metabolic, protein structure) are those that benefit most from the balanced control strategy.

See supplementary material for complete derivation of the minimum energy output control problem and the definition of versor.

We gratefully acknowledge support from the National Science Foundation through NSF Grant No. CMMI- 1400193, NSF Grant No. CRISP- 1541148, and the Office of Naval Research through ONR Award No. N00014-16-1-2637.

1.
F.
Sorrentino
,
M.
di Bernardo
,
F.
Garofalo
, and
G.
Chen
, “
Controllability of complex networks via pinning
,”
Phys. Rev. E
75
,
046103
(
2007
).
2.
A. S.
Mikhailov
and
K.
Showalter
, “
Introduction to focus issue: Design and control of self-organization in distributed active systems
,”
Chaos
18
(
2
),
026101
(
2008
).
3.
W.
Yu
,
G.
Chen
,
J.
Lu
, and
J.
Kurths
, “
Synchronization via pinning control on general complex networks
,”
SIAM J. Control Optim.
51
(
2
),
1395
1416
(
2013
).
4.
Y.
Tang
,
H.
Gao
,
J.
Kurths
, and
J.-A.
Fang
, “
Evolutionary pinning control and its application in UAV coordination
,”
IEEE Trans. Ind. Inf.
8
(
4
),
828
838
(
2012
).
5.
X. F.
Wang
and
G.
Chen
, “
Pinning control of scale-free dynamical networks
,”
Physica A: Stat. Mech. Appl.
310
(
3
),
521
531
(
2002
).
6.
Y.-Y.
Liu
,
J.-J.
Slotine
, and
A.-L.
Barabási
, “
Controllability of complex networks
,”
Nature
473
(
7346
),
167
173
(
2011
).
7.
Y.-Y.
Liu
,
J.-J.
Slotine
, and
A.-L.
Barabasi
, “
Reply to: Few inputs can reprogram biological networks
,”
Nature
478
(
7369
),
E4
E5
(
2011
).
8.
J.
Ruths
and
D.
Ruths
, “
Control profiles of complex networks
,”
Science
343
(
6177
),
1373
1376
(
2014
).
9.
T.
Summers
and
J.
Lygeros
, “
Optimal sensor and actuator placement in complex networks
,” in
Proceedings of the 19th IFAC World Congress
(
2014
).
10.
B.
Wang
,
L.
Gao
, and
Y.
Gao
, “
Control range: A controllability-based index for node significance in directed networks
,”
J. Stat. Mech.: Theory Exp.
2012
(
04
),
P04011
.
11.
T.
Nepusz
and
T.
Vicsek
, “
Controlling edge dynamics in complex networks
,”
Nat. Phys.
8
(
7
),
568
573
(
2012
).
12.
Z.
Yuan
,
C.
Zhao
,
Z.
Di
,
W.–X.
Wang
, and
Y.-C.
Lai
, “
Exact controllability of complex networks
,”
Nat. Commun.
(2447) (
2013
).
13.
F.-J.
Müller
and
A.
Schuppert
, “
Few inputs can reprogram biological networks
,”
Nature
478
(
7369
),
E4
E4
(
2011
).
14. F. Lo Iudice, F. Garofalo, and F. Sorrentino, "Structural permeability of complex networks to control signals," Nat. Commun. 6, 8349 (2015).
15. T. Kailath, Linear Systems (Prentice-Hall, Englewood Cliffs, NJ, 1980), Vol. 156.
16. X.-D. Gao, W.-X. Wang, and Y.-C. Lai, "Control efficacy of complex networks," Sci. Rep. 6, 28037 (2016).
17. G. Yan, G. Tsekenis, B. Barzel, J.-J. Slotine, Y.-Y. Liu, and A.-L. Barabási, "Spectrum of controlling and observing complex networks," Nat. Phys. 11(9), 779–786 (2015).
18. G. Yan, J. Ren, Y.-C. Lai, C.-H. Lai, and B. Li, "Controlling complex networks: How much energy is needed?," Phys. Rev. Lett. 108(21), 218703 (2012).
19. Y.-Z. Chen, L.-Z. Wang, W.-X. Wang, and Y.-C. Lai, "Energy scaling and reduction in controlling complex networks," R. Soc. Open Sci. 3(4), 160064 (2016).
20. S. Arianos, E. Bompard, A. Carbone, and F. Xue, "Power grid vulnerability: A complex network approach," Chaos 19, 013119 (2009).
21. G. A. Pagani and M. Aiello, "The power grid as a complex network: A survey," Physica A: Stat. Mech. Appl. 392, 2688–2700 (2013).
22. J.-P. Onnela, J. Saramäki, J. Hyvönen, G. Szabó, M. A. De Menezes, K. Kaski, A.-L. Barabási, and J. Kertész, "Analysis of a large-scale weighted network of one-to-one human communication," New J. Phys. 9(6), 179 (2007).
23. H. Kwak, C. Lee, H. Park, and S. Moon, "What is Twitter, a social network or a news media?," in Proceedings of the 19th International Conference on World Wide Web (ACM, 2010).
24. B. Palsson, Systems Biology (Cambridge University Press, 2015).
25. O. Sporns, "Structure and function of complex brain networks," Dialogues Clin. Neurosci. 15(3), 247–262 (2013).
26. D. Papo, J. M. Buldú, S. Boccaletti, and E. T. Bullmore, "Complex network theory and the brain," Philos. Trans. R. Soc. B 369(1653), 20130520 (2014).
27. K. T. Allhoff and B. Drossel, "When do evolutionary food web models generate complex networks?," J. Theor. Biol. 334, 122–129 (2013).
28. K. Lerman and R. Ghosh, "Information contagion: An empirical study of the spread of news on Digg and Twitter social networks," ICWSM 10, 90–97 (2010).
29. J. Gao, Y.-Y. Liu, R. M. D'Souza, and A.-L. Barabási, "Target control of complex networks," Nat. Commun. 5, 5415 (2014).
30. I. S. Klickstein, A. Shirin, and F. Sorrentino, "Energy scaling of targeted optimal control of complex networks," Nat. Commun. 8, 15145 (2017).
31. R. Albert and A.-L. Barabási, "Statistical mechanics of complex networks," Rev. Mod. Phys. 74(1), 47 (2002).
32. J.-J. Slotine and W. Li, Applied Nonlinear Control (Prentice-Hall, Englewood Cliffs, NJ, 1991), Vol. 199.
33. N. J. Cowan, E. J. Chastain, D. A. Vilhena, J. S. Freudenberg, and C. T. Bergstrom, "Nodal dynamics, not degree distributions, determine the structural controllability of complex networks," PLoS One 7(6), e38398 (2012).
34. H. K. Khalil, Nonlinear Systems (Prentice-Hall, Upper Saddle River, NJ, 2002).
35. D. E. Kirk, Optimal Control Theory: An Introduction (Courier Corporation, 2012).
36. K.-I. Goh, B. Kahng, and D. Kim, "Universal behavior of load distribution in scale-free networks," Phys. Rev. Lett. 87, 278701 (2001).
37. R. Milo, S. Itzkovitz, N. Kashtan, R. Levitt, S. Shen-Orr, I. Ayzenshtat, M. Sheffer, and U. Alon, "Superfamilies of evolved and designed networks," Science 303, 1538 (2004).
38. H. Jeong, B. Tombor, R. Albert, Z. N. Oltvai, and A.-L. Barabási, "The large-scale organization of metabolic networks," Nature 407, 651 (2000).
40. K. D. Lafferty, R. F. Hechinger, J. C. Shaw, K. L. Whitney, and A. M. Kuris, "Food webs and parasites in a salt marsh ecosystem," in Disease Ecology: Community Structure and Pathogen Dynamics (Oxford University Press, 2006), pp. 119–134.
41. N. D. Martinez, "Artifacts or attributes? Effects of resolution on the Little Rock Lake food web," Ecol. Monogr. 61, 367 (1991).
42. S. J. Hall and D. Raffaelli, "Food-web patterns: Lessons from a species-rich web," J. Anim. Ecol. 60, 823 (1991).
43. P. M. Gleiser and L. Danon, "Community structure in jazz," Adv. Complex Syst. 6, 565 (2003).
44. L. C. Freeman, C. M. Webster, and D. M. Kirke, "Exploring social structure using dynamic three-dimensional color images," Soc. Networks 20, 109 (1998).
45. T. Opsahl and P. Panzarasa, "Clustering in weighted networks," Soc. Networks 31, 155 (2009).
46. R. Guimera, L. Danon, A. Diaz-Guilera, F. Giralt, and A. Arenas, "Self-similar community structure in a network of human interactions," Phys. Rev. E 68, 065103 (2003).
47. R. D. Christie, see https://www.ee.washington.edu/research/pstca/pf118/pg_tca118bus.htm for IEEE 118-bus system.
48. P. J. Menck, J. Heitzig, J. Kurths, and H. J. Schellnhuber, "How dead ends undermine power grid stability," Nat. Commun. 5, 3969 (2014).
49. V. Colizza, R. Pastor-Satorras, and A. Vespignani, "Reaction–diffusion processes and metapopulation models in heterogeneous networks," Nat. Phys. 3, 276 (2007).
50. Multiprecision Computing Toolbox, Advanpix, Tokyo.

Supplementary Material