In this paper, we propose a novel control approach for opinion dynamics on evolving networks. The controls modify the strength of connections in the network, rather than influencing opinions directly, with the overall goal of steering the population toward a target opinion. This requires that the social network remains sufficiently connected, the population does not break into separate opinion clusters, and that the target opinion remains accessible. We present several approaches to address these challenges, considering questions of controllability, instantaneous control, and optimal control. Each of these approaches provides a different view on the complex relationship between opinion and network dynamics and raises interesting questions for future research.

In this paper, we introduce a novel type of control problem for opinion formation on adaptive networks, in which the control variable affects the evolution of the underlying network rather than individuals’ opinions. We present various control strategies; analyze under which conditions opinions can or cannot be steered toward a given target; and then corroborate and extend our analytical results with computational experiments and a study of optimal controls. We highlight the advantages and disadvantages of each approach and propose several directions for future research.

Fundamental models of opinion formation, such as those of DeGroot,1 Hegselmann and Krause,2 and Deffuant et al.,3 have been repeatedly extended and adapted to create the rich and varied literature of modern opinion dynamics. Many such models extend the idea of bounded confidence to a more general, typically non-linear, interaction function.4–6 The interaction function describes how the distance between individuals’ opinions affects whether they interact and the weight this interaction is given; however, it can also be interpreted as a probability of those individuals interacting over some short time period.7 This interaction function creates an “instantaneous” network of potential interactions, based entirely on individuals’ current opinions, that is sometimes referred to as a state-dependent network.8

Various authors have considered the question of controlling opinion dynamics of this form (sometimes in the more general setting of interacting particle systems) by introducing a control variable that directly affects the evolution of each individual’s opinion.9–13 In the context of opinion dynamics, this could represent an external effect such as advertising. The typical goal of such a control is to bring the population to consensus or, more specifically, to consensus at a given target opinion. Because such controls influence opinions directly, they can be highly effective in guiding the population toward a particular target opinion, and so the question of optimal control is often considered. Some works have also studied the impact of introducing “strategic agents” whose opinion is controlled.14

To make opinion dynamics more representative of the real world, it is also common to include a social network,15–17 which may be static, evolve independently, or evolve coupled to individuals’ opinions. It is important to note that the edge weights of this social network are introduced as additional state variables and so, unlike the interaction function, are not determined by individuals’ current opinions. In order to interact, individuals require a non-zero connection in the social network and a non-zero interaction function. Recently, ideas around evolving networks, such as those considered for the Kuramoto model of coupled oscillators,18 have also been applied to opinion dynamics.6 Here, the evolution of the network is used to model changing social relationships, which are affected by the history of individuals’ interactions. The goal of this paper is to study the potential impact of a control applied to the evolution of the social network. That is, controls gradually alter the extent to which pairs of individuals interact, rather than directly affecting their opinions. Such a control must work within the range of the population’s current opinions, while also accounting for the non-linear interaction function, the possibility of the population breaking into clusters, and the impact of the initial network structure.

A related concept of “edge-based” controls, also referred to as a “decentralized adaptive strategy,” has previously been addressed with regard to other interacting particle systems,19–21 such as the Kuramoto model22 and Chua’s circuits.23 A recent review of adaptive dynamical networks, including discussion of these works, can be found in Ref. 8 (note that, in this context, the term adaptive networks is also used to refer to state-dependent networks, such as those generated by the interaction function). The focus in many prior works has been on providing equations for the evolution of edge weights and showing that these guarantee the stability of the fully synchronized state. This is somewhat different from the setup considered in this paper, in which a control variable will be introduced for each edge weight, and the goal is to determine how these control variables should be set over time to achieve consensus at a particular target. This is closer to the setting considered by Piccoli and Duteil in Ref. 5, in which each individual has a mass representing their influence in the population and control variables are introduced to affect the evolution of these masses. This can be considered as a specific case of network control, in which all edges connecting to a given individual are identical. Here, we consider more general network structures and adapt the network dynamics considered in Ref. 6 by replacing the appearance of the interaction function in the network dynamics with these new control variables.

The remainder of the paper is structured as follows. Section II describes precisely the mathematical setting and motivates the form of control we will consider. This system is then analyzed in Sec. III in three ways: Sec. III A presents several analytic results about the system’s controllability; Sec. III B studies the performance of a candidate control, inspired by the instantaneous control in Piccoli and Duteil;5 and Sec. III C attempts to improve upon this by considering the question of optimal control. Finally, Sec. IV concludes with possible future research directions. Several proofs and additional numerical examples are provided in Appendixes A–D.

We begin by presenting a general mathematical model for opinion dynamics on an evolving network. We consider a population of $N$ individuals and define $\Lambda = \{1, \dots, N\}$. Fix initial opinions $x(0) = (x_1(0), \dots, x_N(0)) \in [-1,1]^N$ and the edge weights of an initial network $w(0) \in [0,1]^{N \times N}$. We assume $w_{ii}(t) = 1$ for all $i \in \Lambda$ and $t \ge 0$, meaning that individuals always give their own opinion maximal weight. For clarity, we assume individuals are labeled such that $x_1(0) \le x_2(0) \le \dots \le x_N(0)$. Let $x_i(t)$ denote the opinion of individual $i \in \Lambda$ at time $t \ge 0$ and $w_{ij}(t)$ denote the weight of the edge between individuals $i$ and $j$ at time $t$.

We consider opinion dynamics based on the general formulation in Refs. 4, 24, and 25. As in Ref. 6, we also introduce dynamics for the edge weights in the form of ODEs. However, in this paper, these dynamics are driven by a control variable $u \in L^\infty(\mathbb{R}_+; \mathbb{R}^{N \times N})$. Throughout this paper, we consider a control that can affect all edges (except $w_{ii}$) at all times and has perfect information about the current state of the system. We recognize that such a setup is not realistic and hope to consider in future works a control with more limited information and influence. In summary, we consider here the following non-linear coupled ODE system:
(1a) $\dot{x}_i(t) = \dfrac{1}{k_i(t)} \sum_{j \in \Lambda} w_{ij}(t)\, \phi\big(|x_j(t) - x_i(t)|\big) \big(x_j(t) - x_i(t)\big),$
(1b) $\dot{w}_{ij}(t) = f\big(u_{ij}(t), w_{ij}(t)\big),$
where $f : \mathbb{R} \times [0,1] \to \mathbb{R}$ describes the effect of the control and $k_i$ denotes an individual’s in-degree. The in-degree describes the extent to which an individual is influenced by others in the network and is given by
(2) $k_i(t) = \sum_{j \in \Lambda} w_{ij}(t).$
The function $\phi : [0,2] \to [0,1]$ in (1a) is an interaction function describing how the difference between individuals’ opinions affects the strength/rate of their interactions.
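As a concrete illustration of this setup, the following sketch integrates a system of this form with explicit Euler steps. The right-hand sides are our reconstruction of (1) from the surrounding description (ϕ-modulated, weighted attraction normalized by the in-degree; edge weights driven by f through u); the specific choices of ϕ, f, and the constant control below are placeholders for illustration, not those used later in the paper.

```python
import numpy as np

def step(x, w, u, phi, f, dt):
    """One explicit Euler step of the coupled system (1) (assumed form).

    x : (N,) opinions in [-1, 1]
    w : (N, N) edge weights in [0, 1], with w[i, i] = 1
    u : (N, N) control values
    phi : interaction function on [0, 2]
    f : control function f(u_ij, w_ij)
    """
    diff = x[None, :] - x[:, None]                        # diff[i, j] = x_j - x_i
    k = w.sum(axis=1)                                     # in-degree k_i
    dx = (w * phi(np.abs(diff)) * diff).sum(axis=1) / k   # opinion dynamics (1a)
    dw = f(u, w)                                          # weight dynamics (1b)
    x_new = np.clip(x + dt * dx, -1.0, 1.0)
    w_new = np.clip(w + dt * dw, 0.0, 1.0)
    np.fill_diagonal(w_new, 1.0)                          # self-weights stay at 1
    return x_new, w_new

# Illustrative choices: a smooth interaction function and a control of the
# relaxation type discussed later, with s(u) = |u| and target 1 for u > 0.
phi = lambda r: np.exp(-r)                   # Lipschitz, phi(0) > 0
f = lambda u, w: np.abs(u) * ((u > 0) - w)

rng = np.random.default_rng(0)
N = 10
x = np.sort(rng.uniform(-1, 1, N))
w = np.ones((N, N))                          # fully connected start
u = np.ones((N, N))                          # constant edge-strengthening control

for _ in range(2000):                        # integrate up to t = 20
    x, w = step(x, w, u, phi, f, dt=0.01)
```

With a strictly positive ϕ and a fully connected network held together by the control, the opinion diameter contracts and the population approaches consensus, consistent with the discussion of consensus below.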

To ensure that the system (1) is well posed and that opinions and weights remain in the desired intervals, we introduce the following assumptions on the interaction function and controls.

Assumption 1

The interaction function $\phi$ satisfies

  • $\phi(r) \in [0,1]$ for all $r \in [0,2]$.

  • $\phi(0) > 0$.

  • $\phi$ is Lipschitz continuous, with Lipschitz constant $L_\phi$.

A common interaction function in opinion dynamics is the bounded confidence function
(3) $\phi_R(r) = \begin{cases} 1, & r \le R, \\ 0, & r > R, \end{cases}$
for some fixed $R \in [0,2]$. This function has a discontinuity at $R$ and so does not satisfy Assumption 1. However, we may instead consider a smoothed version of $\phi_R$, obtained by taking its convolution with a compactly supported mollifier. As discussed in Ref. 7, this corresponds to adding selection noise to the confidence bound, so it is a reasonable replacement.
Definition II.1

We call an interaction function $\phi$ a smoothed bounded confidence function with radius $R$ if it satisfies Assumption 1 and there exists $R > 0$ such that $\phi(r) = 1$ for all $r \in [0, R]$.
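Definition II.1 does not force a particular construction; one simple function satisfying it (our illustrative choice, rather than the mollified version of (3)) is a piecewise-linear ramp that equals 1 on [0, R] and reaches its first root at some r* ∈ (R, 2]:

```python
def smoothed_bounded_confidence(R=0.3, r_star=0.6):
    """Piecewise-linear phi: equal to 1 on [0, R], decreasing linearly to 0
    at r_star, then 0 on [r_star, 2]. Lipschitz with constant 1/(r_star - R),
    so it satisfies Assumption 1 and Definition II.1 with radius R."""
    assert 0 < R < r_star <= 2

    def phi(r):
        if r <= R:
            return 1.0
        if r >= r_star:
            return 0.0
        return (r_star - r) / (r_star - R)   # linear ramp between R and r_star

    return phi

phi = smoothed_bounded_confidence()
```

The parameters R = 0.3 and r* = 0.6 match the values used in the numerical example of Sec. III.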

We also introduce assumptions on the form of the control.

Assumption 2

The control function $f : \mathbb{R} \times [0,1] \to \mathbb{R}$ satisfies

  • $f(0, w_{ij}) = 0$ for all $w_{ij} \in [0,1]$, so that edge weights remain constant if uncontrolled.

  • $f(u_{ij}, 0) \ge 0$ for all $u_{ij} \in \mathbb{R}$, so that edge weights remain non-negative.

  • $f(u_{ij}, 1) \le 0$ for all $u_{ij} \in \mathbb{R}$, so that edge weights do not exceed 1.

  • $f$ is bounded and integrable.

This final assumption on the boundedness of $f$ highlights a key feature of this problem: edge weights $w_{ij}$ change continuously in time with a finite speed, meaning edges cannot be switched on/off instantaneously. As a result, the maximal value of $f$ will play a major role in determining the controllability of (1). Note that this does not mean that controls cannot be switched on/off instantaneously but rather that their effect is not instantaneous.

When Assumptions 1 and 2 hold, Proposition 3.1 from Ref. 6 ensures that $x_i(t) \in [-1,1]$ and $w_{ij}(t) \in [0,1]$ for all $i, j \in \Lambda$ and $t \ge 0$.

This work focuses on a form of control function f motivated by the memory weight dynamics discussed in Ref. 6 and similar to those discussed in Ref. 26,
(4) $f(u_{ij}, w_{ij}) = s(u_{ij})\,\big(\ell(u_{ij}) - w_{ij}\big).$
Here, $s : \mathbb{R} \to \mathbb{R}_+$ describes the rate at which $w_{ij}$ changes when controlled and $\ell : \mathbb{R} \to [0,1]$ describes the target toward which $w_{ij}$ is directed. Similar to Ref. 5, in which an opinion dynamics model with evolving masses is considered, we aim to control the population to consensus at a desired value $x^* \in [-1,1]$. The method for selecting the control $u \in L^\infty(\mathbb{R}_+; \mathbb{R}^{N \times N})$ in order to achieve this target is referred to as a control strategy.
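To make the roles of s and ℓ in (4) concrete, here is one hypothetical choice (ours, not necessarily the paper's): s(u) = |u| and ℓ(u) = 1 for u > 0 and 0 otherwise, giving a control that pushes w_ij toward 1 at rate |u_ij| when u_ij > 0 and toward 0 when u_ij < 0. The properties required by Assumption 2 can then be checked directly.

```python
def f(u, w):
    """A control function of the form (4): f(u, w) = s(u) * (ell(u) - w),
    with the illustrative choices s(u) = |u| and ell(u) = 1 if u > 0 else 0."""
    s = abs(u)
    ell = 1.0 if u > 0 else 0.0
    return s * (ell - w)

# Assumption 2 for these choices: f(0, w) = 0; f(u, 0) >= 0; f(u, 1) <= 0;
# and f is bounded whenever u ranges over a bounded set, since |f| <= |u|.
```

Note that boundedness of f here requires restricting u to a bounded set, matching the finite-speed feature of the problem discussed above.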
Remark II.1
Other forms of control function could be considered, and the controllability of (1) naturally depends on this choice. As we will see throughout this paper, controlling (1) poses several major challenges; hence, we study a control function that has the potential to significantly alter the network’s structure. By contrast, in Ref. 6, the authors also introduce logistic weight dynamics, motivating a control function of the form
(5) $f(u_{ij}, w_{ij}) = u_{ij}\, w_{ij}\,(1 - w_{ij}),$
in which $u_{ij}$ controls the rate at which $w_{ij}$ increases or decreases. This form of control cannot add new edges, creating a strong dependence on the initial network that severely limits both the control’s effectiveness and our ability to analyze it.

To motivate our assumptions on $\phi$, $x^*$, and $x(0)$, we first establish some basic conditions for consensus.

Definition II.2
We say the population reaches consensus if, for all $i, j \in \Lambda$,
$\lim_{t \to \infty} |x_i(t) - x_j(t)| = 0.$
Moreover, we say the population reaches consensus at a point $x^* \in [-1,1]$ if, for all $i \in \Lambda$,
$\lim_{t \to \infty} x_i(t) = x^*.$
This definition is commonly used when considering consensus or synchronization.20 Denote the minimum and maximum opinions in the population by
$x_m(t) = \min_{i \in \Lambda} x_i(t), \qquad x_M(t) = \max_{i \in \Lambda} x_i(t),$
and the opinion diameter $D(t) = x_M(t) - x_m(t)$. Definition II.2 of consensus is then equivalent to requiring $D(t) \to 0$ as $t \to \infty$. The following provides a useful characterization of consensus at a point.
Proposition II.1

Assume that the population reaches consensus. Then, the population reaches consensus at a point $x^* \in [-1,1]$ if and only if $x^* \in [x_m(t), x_M(t)]$ for all $t \ge 0$.

Proof.

($\Leftarrow$) Assume $x^* \in [x_m(t), x_M(t)]$ for all $t \ge 0$. As the population reaches consensus, $\lim_{t \to \infty} x_m(t) = \lim_{t \to \infty} x_M(t)$, so $D(t) \to 0$. As $x^* \in [x_m(t), x_M(t)]$, $|x_m(t) - x^*| \le D(t)$, so $x_m(t) \to x^*$. Therefore, $x_i(t) \to x^*$ for all $i \in \Lambda$, and we have consensus at $x^*$.

($\Rightarrow$) Assume that the population reaches consensus at $x^*$, but that there exists a time $s \ge 0$ at which $x^* \notin [x_m(s), x_M(s)]$. As $x_m$ is increasing and $x_M$ is decreasing (see Proposition 2.1 in Ref. 6), we have that $x^* \notin [x_m(t), x_M(t)]$ for all $t \ge s$. If $x^* < x_m(s)$, then $|x_1(t) - x^*| \ge |x_m(s) - x^*| > 0$ for all $t \ge s$. Similarly, if $x^* > x_M(s)$, then $|x_1(t) - x^*| \ge |x_M(s) - x^*| > 0$ for all $t \ge s$. In both cases, this makes convergence of $x_1$ to $x^*$ impossible, giving a contradiction.

From Proposition II.1, it is clear that we will at least require $x^* \in [x_m(0), x_M(0)]$ to have any hope of reaching consensus at $x^*$. This is in contrast to other control setups that affect opinions directly or via leaders, in which consensus could, in principle, be achieved outside the initial range of normal agents’ opinions.27,28 Of course, we will also require that consensus itself is possible. We introduce the following definitions to help clarify when this is not the case.

Definition II.3
For a given interaction function $\phi : [0,2] \to [0,1]$, we denote the set of roots of $\phi$ by
(6) $R_\phi = \{ r \in [0,2] : \phi(r) = 0 \}.$
If $R_\phi$ is empty, then define $r^* = 2$; otherwise, let $r^* = \inf(R_\phi)$.

By Assumption 1, $r^* > 0$. If $\phi$ is a smoothed bounded confidence function with radius $R$, we will also have that $0 < R < r^*$.

Definition II.4

For a given $r > 0$, an opinion profile $x(t)$ is called an $r$-chain if $|x_{i+1}(t) - x_i(t)| < r$ for all $i \in \{1, \dots, N-1\}$.

Proposition II.2

If $\phi$ is decreasing and there exists $i \in \Lambda$ such that $|x_{i+1}(0) - x_i(0)| > r^*$, then the population will not reach consensus.

Proof.

See, for example, Proposition 3.2 in Ref. 6. The fundamental idea is that once a gap of size greater than $r^*$ appears in the opinion profile, the closest individuals on either side of this gap will be unable to move closer than $r^*$, as they will encounter a root of $\phi$.
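The r-chain condition of Definition II.4 is straightforward to check numerically; the sketch below (function name ours) flags the kind of gap that, by Proposition II.2, rules out consensus for a decreasing ϕ.

```python
def is_r_chain(x, r):
    """Definition II.4: consecutive opinions (after sorting) differ by less than r."""
    xs = sorted(x)
    return all(b - a < r for a, b in zip(xs, xs[1:]))

# With r* = 0.6, a single gap larger than r* breaks the chain:
r_star = 0.6
print(is_r_chain([-0.9, -0.5, 0.0, 0.4], r_star))   # gaps 0.4, 0.5, 0.4 -> True
print(is_r_chain([-0.9, -0.2, 0.5], r_star))        # gaps of 0.7 > r*   -> False
```

In a control strategy, such a check could be run on the current opinion profile to detect when the population is at risk of splitting into clusters.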

Motivated by Propositions II.1 and II.2, we introduce the following assumptions.

Assumption 3

The initial opinions $x(0)$ and consensus target $x^*$ satisfy

  • $x^* \in [x_m(0), x_M(0)]$.

  • $x(0)$ is an $r^*$-chain.

These assumptions ensure that we are operating in an environment in which the opinions and networks are well defined and control to consensus may be feasible. The question of which consensus targets are achievable then depends on the initial network and the speed at which controls can alter the network structure.

In this section, we consider controls of the form (4) for given functions $s : \mathbb{R} \to \mathbb{R}_+$ and $\ell : \mathbb{R} \to [0,1]$. We assume that $s(0) = 0$, so that $f$ satisfies Assumption 2.

We first show which consensus targets are guaranteed to be achievable when using an edge-creating control $u^+ \in \mathbb{R}_+$ and an edge-removing control $u^- \in \mathbb{R}_-$, then investigate the performance of a candidate instantaneous control, and, finally, address questions of optimal control for example $s$ and $\ell$ functions.

The first result of this section, Proposition III.1, shows that the model setup and assumptions described in Sec. II do not guarantee controllability; indeed, we can always find situations in which the system is not controllable.

Proposition III.1

Fix some $N > 1$. Let the interaction function $\phi$ satisfy Assumption 1 and the control function $f(u_{ij}, w_{ij})$ satisfy Assumption 2. Then, for any $x^* \in (-1, 1)$, there exist initial opinions $x(0) \in [-1,1]^N$ and initial edge weights $w(0) \in [0,1]^{N \times N}$ satisfying Assumption 3 for which control to consensus at $x^*$ is not possible.

Proof.

Without loss of generality, assume $x^* \in (-1, 0]$. We will construct a range of $x(0)$ and $w(0)$ values for which control is not possible. Take $x_1(0) \in [x^* - \varepsilon, x^*)$ for some small $\varepsilon$ to be determined. Assume that $\varepsilon$ is sufficiently small that $x_1(0) \in [-1, 1]$. Let $\nu = \min(1 - x^*, \frac{1}{2} r^*) > 0$ and take $x_i(0) \in [x^* + \frac{1}{2}\nu,\, x^* + \nu]$ for all $i \in \Lambda \setminus \{1\}$. This gives a setup in which $x_1$ is “far away” from the rest of the population, while maintaining the required $r^*$-chain. Take $w_{1j}(0) \ge \frac{1}{2}$ for all $j \in \Lambda$. The other entries of $w(0)$ may take any value in $[0, 1]$. These initial conditions and $x^*$ satisfy Assumption 3.

We now show that if $\varepsilon$ is chosen to be sufficiently small, there exists a time $\tau > 0$ such that $x_i(\tau) > x^*$ for all $i \in \Lambda$; hence, by Proposition II.1, consensus at $x^*$ is impossible. This is achieved using bounds on $w_{1j}$, $\phi(|x_j - x_i|)$, and $x_i$.

As $f$ satisfies Assumption 2, it is bounded; hence, there exists a constant $S > 0$ such that $|f(u_{ij}, w_{ij})| < S$ for all $u_{ij} \in \mathbb{R}$ and $w_{ij} \in [0, 1]$. Hence,
$|w_{1j}(t) - w_{1j}(0)| \le S\, t.$
As $w_{1j}(0) \ge \frac{1}{2}$, for $t \le \frac{1}{4S}$, we have $w_{1j}(t) \ge \frac{1}{4}$ for all $j \in \Lambda$.

As $\varepsilon$ can be chosen sufficiently small that $D(0) < r^*$, there exists a constant $c > 0$ such that $\phi(r) > c$ for all $r \in [0, D(0)]$. As $D(t) \le D(0)$ for all $t \ge 0$, $\phi(|x_j(t) - x_i(t)|) > c$ for all $i, j \in \Lambda$ and $t \ge 0$.

Finally, for any $i \in \Lambda$, consider
$|\dot{x}_i(t)| \le \frac{1}{k_i(t)} \sum_{j \ne i} w_{ij}(t)\, \phi(|x_j(t) - x_i(t)|)\, |x_j(t) - x_i(t)| \le N r^*.$
Hence, $x_i(t) \ge x_i(0) - t N r^*$. Specifically, for $i \in \Lambda \setminus \{1\}$ and $t \le \frac{\nu}{4 N r^*}$, $x_i(t) \ge x^* + \frac{1}{4}\nu$.
Combining these bounds, we have the following:
$\dot{x}_1(t) \ge \frac{1}{N} \sum_{j \ne 1} w_{1j}(t)\, \phi(|x_j(t) - x_1(t)|)\, (x_j(t) - x_1(t)) \ge \frac{N-1}{N} \cdot \frac{1}{4} \cdot c \cdot \frac{\nu}{4} = \frac{c\,\nu}{16} \cdot \frac{N-1}{N}.$
Thus, for $t \le \tau := \min\big(\frac{1}{4S}, \frac{\nu}{4 N r^*}\big)$, we have $x_1(t) \ge x_1(0) + \frac{c\,\nu}{16} \cdot \frac{N-1}{N}\, t$. Note that the definition of $\tau$ is independent of $\varepsilon$, so by taking $\varepsilon < \frac{c\,\nu}{16} \cdot \frac{N-1}{N}\, \tau$, we ensure that $x_1(\tau) > x^*$. In addition, $x_i(\tau) \ge x^* + \frac{1}{4}\nu > x^*$ for $i \in \Lambda \setminus \{1\}$. Hence, at time $\tau$, all opinions lie above $x^*$, so control to consensus at $x^*$ is impossible.

Hence, it is not possible to provide a control function $f$ satisfying Assumption 2, and a control strategy to determine $u$, that together guarantee controllability for all $x^*$. Instead, we consider the initial conditions $x(0)$ and $w(0)$, as well as the consensus target $x^*$, to be fixed and ask when the system can be controlled for these fixed values.

We begin by considering the simple case in which $w_{ij}(0) = 0$ for all $i \ne j$, and the control acts only to create new edges. Recall that $w_{ii}$ is always equal to 1 for all $i \in \Lambda$.

Definition III.1

A network $w \in [0,1]^{N \times N}$ is called empty if $w_{ij} = 0$ for all $i \ne j$ and non-empty if there exists $i \ne j$ such that $w_{ij} > 0$.

Proposition III.2
Let $\phi$ satisfy Assumption 1, $x(0)$ and $x^*$ satisfy Assumption 3, and $w(0)$ be an empty network. Assume also that there exists a control value $u^+ \in \mathbb{R}$ for which
(7) $f(u^+, w) > 0 \quad \text{for all } w \in [0, 1).$
That is, the control $u^+$ can be used to create new edges. Then, there exists a control $u \in L^\infty(\mathbb{R}_+; \{0, u^+\}^{N \times N})$ such that the solution of (1) reaches consensus at $x^*$.
Proof.

The approach of the proof is to construct such a control. We proceed in three steps, first addressing the simplest case of $N = 2$ individuals. In the second step, we reduce the general case to the $N = 2$ case by identifying the closest individuals to $x^*$ above and below and gathering the rest of the population toward these two individuals. Finally, in the third step, we apply the $N = 2$ case once this gathering is complete.

Step 1: Consider first the case $N = 2$. Clearly, if $x_1(0) = x_2(0)$, then we immediately have consensus at $x^* = x_1(0) = x_2(0)$, so assume $x_1(0) < x_2(0)$. We then define times $T_{12}$ and $T_{21}$ and set the controls $u_{ij}$ for $ij = 12, 21$ according to
(8) $u_{ij}(t) = \begin{cases} u^+, & t < T_{ij}, \\ 0, & t \ge T_{ij}. \end{cases}$
That is, $T_{ij}$ gives the time at which $u_{ij}$ is switched off and hence a time after which $w_{ij}$ remains fixed. We also define
(9) $F(T_{12}, T_{21}) = \lim_{t \to \infty} x_1(t)$
for the controlled dynamics with (8). As long as T12 and T21 are not both zero, there will exist an edge between x1 and x2, in addition |x1(0)x2(0)|<r, so for (T12,T21)R2(0,0), the system reaches consensus. F(T12,T21), therefore, gives the consensus opinion. We now show that there exists values of T12 and T21 such that F(T12,T21)=x, meaning the system reaches consensus at the desired target x.

First, consider $x^* \le \frac{1}{2}(x_1(0) + x_2(0))$ and fix a value of $T_{21} > 0$. By Lemma A.1 in Appendix A, for a fixed $T_{21} > 0$, the function $F(\cdot, T_{21})$ is continuous. As $w_{12}(0) = w_{21}(0) = 0$, $F(0, T_{21}) = x_1(0)$ and, by symmetry, $F(T_{21}, T_{21}) = \frac{1}{2}(x_1(0) + x_2(0))$. Hence, by the intermediate value theorem, for any given $x^* \in [x_1(0), \frac{1}{2}(x_1(0) + x_2(0))]$, there exists $(T_{12}, T_{21})$ such that the system reaches consensus at $x^*$. By an analogous argument, the same can be achieved for $x^* \in [\frac{1}{2}(x_1(0) + x_2(0)), x_2(0)]$. This proves the claim in the case $N = 2$.

Step 2: For $N > 2$, we begin by gathering individuals toward points on either side of $x^*$. Assume there are no individuals $i$ with $x_i(0) = x^*$. Define individual $a$ such that $x_a(0) < x^*$ and $x_i(0) \le x_a(0)$ for all individuals $i$ with $x_i(0) < x^*$. That is, $a$ is the closest individual whose initial opinion is strictly below $x^*$. If this initial opinion is not unique, then choose the individual with the highest index. Similarly, define individual $b$ to be the closest individual whose initial opinion is strictly above $x^*$. If this initial opinion is not unique, then choose the individual with the lowest index. As there are no individuals with $x_i(0) = x^*$, we will have $b = a + 1$ and so $|x_a(0) - x_b(0)| < r^*$. All individuals with $x_i(0) \le x_a(0)$ will be brought upward toward $x_a(0)$, while all individuals with $x_i(0) \ge x_b(0)$ will be brought down toward $x_b(0)$. This is visualized in the diagram in Fig. 1. We will then apply the $N = 2$ case to $x_a$ and $x_b$.
FIG. 1.

Diagram for step 2 of the proof of Proposition III.2. In this step, individuals are sequentially gathered toward the central individuals $x_a$ and $x_b$, whose opinions are the closest to $x^*$ below and above, respectively.

Recall that opinions are ordered so that $x_1(0) \le x_2(0) \le \dots \le x_N(0)$. To explain how individuals are gathered upward toward $x_a$, we assume $x_1(0) < x_a(0)$. If this is not the case, then there must be individuals with $x_i(0) \ge x_b(0)$, where a similar argument holds. We know $|x_1(0) - x_2(0)| < r^*$, so setting $u_{12} = u^+$ will cause $x_1$ to move toward $x_2$ as
$\dot{x}_1(t) = \frac{w_{12}(t)\, \phi(|x_2(t) - x_1(t)|)\, (x_2(t) - x_1(t))}{1 + w_{12}(t)} > 0$
for $t > 0$. As no controls have yet been applied to $x_2$, $x_2(t) = x_2(0)$. Hence, for any $\varepsilon > 0$, we will eventually have $|x_1(t) - x_2(t)| < \varepsilon$. Specifically, we can choose $\varepsilon$ sufficiently small that $|x_1(t) - x_3(0)| < r^*$, as we know $|x_2(t) - x_3(t)| = |x_2(0) - x_3(0)| < r^*$. Once this has occurred, we set $u_{23} = u^+$ and repeat (each time waiting until $|x_1(t) - x_j(0)| < r^*$). In this way, all individuals $i$ with $x_i(0) \le x_a(0)$ can be sequentially gathered upward toward $x_a(0)$ without breaking the $r^*$-chain. In a similar way, all individuals $j$ with $x_j(0) \ge x_b(0)$ can be gathered downward toward $x_b(0)$.
Step 3: Once this gathering process is complete (at a time denoted by $T$), we will have $D(T) < r^*$ and so $|x_i(t) - x_j(t)| < r^*$ for all $i, j$ and all $t \ge T$. This can be seen in the diagram in Fig. 2 and also in the example in Fig. 3, in which $T$ is approximately 10. Using the $N = 2$ case, $x_a$ and $x_b$ can be controlled to consensus at $x^*$ and thus, by the chain of connections, all individuals will be brought to consensus at $x^*$.
FIG. 2.

Diagram for step 3 of the proof of Proposition III.2. In this step, the $N = 2$ case is used to control $x_a$ and $x_b$ to $x^*$. All other individuals are connected to one of these two and so follow accordingly.

FIG. 3.

Example of the control method described in Proposition III.2. Beginning with an empty network, edges are created to gather the population closer to individuals near the target opinion $x^* = 0.5$. The solution for the $N = 2$ case is then used to ensure that those closest to $x^*$ reach consensus at exactly this point. Opinion trajectories are colored according to individuals’ initial opinions. The target opinion $x^*$ is indicated by a black dashed line. The inset plot shows the final approach to consensus at $x^*$.


If there is an individual $i$ with $x_i(0) = x^*$, then instead gather all individuals toward $x_i$, while leaving individual $i$ at their initial position (this is the same as setting $a = b = i$). As the network is initially empty, individual $i$ will not move if no controls are applied to their weights. Here, there is no need to apply the $N = 2$ case.

Figure 3 shows a numerical example of the control described in Proposition III.2 applied with $s^+ = \ell^+ = 1$. We use a smoothed bounded confidence interaction function with $r^* = 0.6$ and $R = 0.3$. A population of size $N = 50$ is used, with each $x_i(0)$ chosen uniformly at random in the interval $[-1, 1]$. Individuals $a$ and $b$ are identified and the problem is first solved for these $N = 2$ individuals. This is done by repeatedly testing values of $T_{12}$ and $T_{21}$ until the error between the consensus value achieved and $x^*$ is below a given threshold (as the ODE is solved numerically with a fixed time step over a finite time interval, the exact optimal values cannot be used). The gathering approach described above is then implemented until the opinion diameter is sufficiently small, and then the $N = 2$ case is used to bring the population to consensus. Note that, since the initial network is empty, if no control were applied, no individuals would change their opinions.
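The procedure for the $N = 2$ subproblem can be sketched numerically as follows: simulate the two-agent system under the switching control (8), read off the consensus value $F(T_{12}, T_{21})$, and search (here by bisection, exploiting the intermediate value argument) for the $T_{12}$ that hits the target. All modeling choices below ($\phi \equiv 1$ on the relevant range, $s^+ = \ell^+ = 1$, explicit Euler with a fixed step) are simplifications for illustration.

```python
def consensus_value(T12, T21, x0=(-1.0, 1.0), dt=0.002, T_end=40.0):
    """Approximate F(T12, T21) for the N = 2 system: u_ij = 1 until T_ij,
    then 0; weights relax as dw/dt = u * (1 - w); phi taken to be 1."""
    x1, x2 = x0
    w12 = w21 = 0.0
    for n in range(int(T_end / dt)):
        t = n * dt
        u12 = 1.0 if t < T12 else 0.0
        u21 = 1.0 if t < T21 else 0.0
        dx1 = w12 * (x2 - x1) / (1.0 + w12)   # k_1 = w_11 + w_12 = 1 + w_12
        dx2 = w21 * (x1 - x2) / (1.0 + w21)
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
        w12 += dt * u12 * (1.0 - w12)
        w21 += dt * u21 * (1.0 - w21)
    return 0.5 * (x1 + x2)                     # opinions have (nearly) merged

def solve_T12(target, T21=2.0):
    """Bisect on T12 in [0, T21]: F(0, T21) = x1(0) and, by symmetry,
    F(T21, T21) is the initial midpoint, so intermediate targets are reachable."""
    lo, hi = 0.0, T21
    while hi - lo > 1e-3:
        mid = 0.5 * (lo + hi)
        if consensus_value(mid, T21) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

T12 = solve_T12(-0.5)   # steer consensus to x* = -0.5 from x(0) = (-1, 1)
```

The achievable accuracy is limited by the time step and the finite horizon, mirroring the remark above that the exact optimal switching times cannot be computed numerically.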

When considering non-empty initial networks, Propositions II.1 and II.2 show that it is vital that the control can act sufficiently quickly to prevent $x^*$ leaving the interval $[x_m(t), x_M(t)]$ or gaps of size $r^*$ appearing in the opinion profile. The following lemma provides a useful bound on the distance an individual can travel if their edge weights are reduced exponentially quickly after some time $t_0$.

Lemma III.1
Let $\phi$ satisfy Assumption 1. For a starting time $t_0 \ge 0$, fix opinions $x(t_0) \in [-1,1]^N$ and network $w(t_0) \in [0,1]^{N \times N}$. Assume there exists a control value $u^- \in \mathbb{R}$ for which
(10) $f(u^-, w) \le -S\, w \quad \text{for all } w \in [0,1].$
Then, for an individual $i$, setting $u_{ij}(t) = u^-$ for all $j \ne i$ and $t \ge t_0$ gives
$|x_i(t) - x_i(t_0)| \le \frac{(N-1)\, D(t_0)}{S} \quad \text{for all } t \ge t_0.$
Proof.
For simplicity, take $t_0 = 0$. Set $u_{ij}(t) = u^-$ for all $j \ne i$ and $t \ge 0$. Then, the solution to (1b) satisfies
(11) $w_{ij}(t) \le w_{ij}(0)\, e^{-S t},$
and so for any $t \ge 0$,
$|x_i(t) - x_i(0)| \le \int_0^t \sum_{j \ne i} w_{ij}(s)\, |x_j(s) - x_i(s)|\, \mathrm{d}s \le (N-1)\, D(0) \int_0^t e^{-S s}\, \mathrm{d}s \le \frac{(N-1)\, D(0)}{S}.$
Lemma III.1 shows that if $S$ is sufficiently large, an individual’s opinion can be trapped within a small interval around its current position. This can be used to prevent individuals breaking an $r$-chain or to bound individuals above or below $x^*$, and so will prove crucial in showing controllability.
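This trapping effect can be checked numerically. Under our reading of condition (10) as $f(u^-, w) \le -S w$, applying $u^-$ to every off-diagonal edge makes each weight decay at least exponentially, and the crude estimates $k_i \ge 1$, $\phi \le 1$, $|x_j - x_i| \le D(t_0)$ bound each individual's total movement by $(N-1) D(t_0)/S$ (the constants in the lemma's exact statement may differ).

```python
import numpy as np

def simulate_decay(x0, w0, S, phi, dt=0.001, T=30.0):
    """Evolve the opinion dynamics (1a) while every off-diagonal weight
    decays as w(t) = w(0) * exp(-S t), i.e. u^- applied to all edges."""
    x, w = x0.copy(), w0.copy()
    decay = np.exp(-S * dt)                    # exact per-step decay factor
    for _ in range(int(T / dt)):
        diff = x[None, :] - x[:, None]         # diff[i, j] = x_j - x_i
        k = w.sum(axis=1)                      # in-degree (diagonal fixed at 1)
        dx = (w * phi(np.abs(diff)) * diff).sum(axis=1) / k
        x = x + dt * dx
        w = w * decay                          # exponential decay of edges
        np.fill_diagonal(w, 1.0)               # self-weights are not controlled
    return x

rng = np.random.default_rng(1)
N, S = 8, 50.0
phi = lambda r: np.exp(-r)                     # Lipschitz, phi(0) > 0
x0 = np.sort(rng.uniform(-1, 1, N))
w0 = rng.uniform(0, 1, (N, N))
np.fill_diagonal(w0, 1.0)

x_final = simulate_decay(x0, w0, S, phi)
D0 = x0.max() - x0.min()
bound = (N - 1) * D0 / S                       # trapping bound from the argument above
```

With $S = 50$, the bound confines every opinion to a window of width about $7 D(0)/50$ around its starting value, illustrating how a fast edge-removing control can "freeze" individuals in place.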

Theorem III.1
Let $\phi$ be a smoothed bounded confidence function with radius $R$. Fix initial conditions $x(0) \in [-1,1]^N$, $w(0) \in [0,1]^{N \times N}$, and a consensus target $x^*$ satisfying Assumption 3. Also assume that $x^* \ne x_m(0), x_M(0)$, and that $u^+$ and $u^-$ are as defined in (7) and (10), respectively. Then, for $S$ sufficiently large, there exists a control $u \in L^\infty(\mathbb{R}_+; \{0, u^+, u^-\}^{N \times N})$ such that the solution of (1) reaches consensus at $x^*$. Specifically, define $d_1$ and $d_2$ by
(12)
(13)
in which case we take
(14)
Proof.

The approach of this proof is similar to Proposition III.2. Again, we proceed in several steps. In step 1, we identify individuals $a$ and $b$ whose initial opinions are the closest below/above $x^*$ and gather all other individuals toward $x_a$ or $x_b$. This gives an opinion radius below $r^*$, which is further reduced below $R$ by temporarily connecting individuals $a$ and $b$. This step requires two cases, as the setup is slightly different if an individual has an initial opinion of exactly $x^*$. In step 2, control to consensus at $x^*$ is managed by specifying controls for the central individuals $a$ and $b$. A key difference from Proposition III.2 is that any non-zero initial edges cannot be removed in finite time, so Lemma III.1 must be applied to bound this potential error. The applications of Lemma III.1 provide the lower bound (14) on $S$.

Step 1: First note that, as $x(0)$ is an $r^*$-chain, $d_1$ is well-defined. As $S$ satisfies (14), Lemma III.1 gives that setting $u_{ij} = u^-$ for all $t \ge 0$ gives
(15)
for all $i \in \Lambda$ and
(16)
for all $i \in \Lambda$ with $x_i(0) \ne x^*$. Note that if $r^* = 2$, then (15) is always satisfied and the condition on $d_1$ can be removed from (14).

We now describe the process by which individuals are gathered toward a pair of individuals near $x^*$.

Case 1: $x_i(0) \ne x^*$ for all $i \in \Lambda$.

Here, we can identify individuals $a$ and $b$ as in Proposition III.2. As before, we use $u^+$ to sequentially gather more extreme individuals toward $x_a$ and $x_b$. Here, $x_a(t)$ and $x_b(t)$ will not be fixed but, by setting all other controls to $u^-$, (15) and (16) ensure they do not cross $x^*$ or move in a way that could break the $r^*$-chain. The gathering process described in Proposition III.2, combined with the exponentially fast removal of all other edges, also ensures that no other pair of individuals breaks the $r^*$-chain. We say that this gathering process is complete when all individuals are within a distance $\varepsilon > 0$ of their target (either $x_a$ or $x_b$). As $x_i(0) \ne x^*$ for all $i \in \Lambda$, we will have $b = a + 1$ and so $|x_a(t) - x_b(t)| < r^*$ for all $t \ge 0$. Hence, by choosing $\varepsilon$ sufficiently small, there exists a time $T$ at which $x_a(T) < x^*$, $x_b(T) > x^*$, and $D(T) < r^*$.

From here, we can simplify the problem by reducing the diameter from $r^*$ to below $R$, meaning that the interaction function $\phi$ becomes equal to 1. If $|x_a(T) - x^*| < R/2$ and $|x_b(T) - x^*| < R/2$, then $|x_a(T) - x_b(T)| < R$ and we are done. If only $|x_a(T) - x^*| < R/2$, then create an edge from $b$ to $a$, bringing $x_a$ and $x_b$ within a distance $R$. This can be achieved while maintaining $|x_b - x^*| > R/2$. If only $|x_b(T) - x^*| < R/2$, then instead create an edge from $a$ to $b$ to bring $x_a$ and $x_b$ within a distance $R$. Again, this can be achieved while maintaining $|x_a - x^*| > R/2$. Once the desired radius has been achieved (at a time denoted $t_1$), Lemma III.1 gives that setting controls back to $u^-$ will prevent both $x_a$ and $x_b$ crossing $x^*$ if
(17)
As $S > 4(N-1) D(0)/R$, (17) is satisfied and so there exists a time $T_R \ge T$ at which $x_a(T_R) < x^*$, $x_b(T_R) > x^*$, and $D(T_R) < R$.

Case 2: There exists $i \in \Lambda$ such that $x_i(0) = x^*$.

In this case, individuals $a$ and $b$ should be chosen from those $j \in \Lambda$ with $x_j(0) \ne x^*$. This raises the possibility that $|x_a(0) - x_b(0)| > r^*$. As such, once the gathering toward $x_a$ and $x_b$ is complete, both individuals may need to be temporarily connected to $x_i$ to bring the opinion diameter $D$ below $r^*$ and then below $R$, after which point individual $i$ could simply be guided toward either $x_a$ or $x_b$. This can be done in much the same way as in the previous case, by temporarily connecting either $a$, or $b$, or both, to $i$ until the desired radius is reached (depending on the distances of $x_a$ and $x_b$ from $x^*$). Indeed, (14) already ensures that $S$ is sufficiently large for this to be done without breaking the $r^*$-chain and without $x_a$ and $x_b$ crossing $x^*$. The situation is essentially the same if there are multiple individuals whose initial condition is exactly $x^*$.

Step 2: As in Proposition III.2, the problem is now reduced to ensuring convergence of $x_a(t)$ and $x_b(t)$ to $x^*$, as the chain of connections is such that all other individuals’ opinions tend toward one of these. We begin from a time $T_R$ at which $x_a(T_R) < x^*$, $x_b(T_R) > x^*$, and $D(T_R) < R$. Hence, from this time onward, $\phi(|x_j - x_i|) = 1$ for all $i, j \in \Lambda$. Unlike Proposition III.2, we cannot now simply reduce to the case $N = 2$, as individuals $a$ and $b$ may remain connected to others in the population, albeit with an exponentially decaying weight.

Note that, up until time $T_R$, $u_{ij} = u^-$ for $i = a, b$ and $j \in \Lambda \setminus \{i\}$. Hence, (16) tells us that leaving $u_{ij}$ at $u^-$ will prevent consensus. We therefore define times $T_{ab}, T_{ba} \in (T_R, \infty)$ such that, for $ij = ab, ba$,
(18)
That is, at times $T_{ij}$ we switch these edges from decreasing exponentially toward 0 to increasing exponentially toward $+\infty$. Also define
(19)
for the controls given in (18). The function $G$ gives the limiting opinion of individual $a$. As $T_{ba} < \infty$ and $w_{bi}$ is exponentially decreasing for all $i \neq a, b$, $G$ also gives the limiting opinion of individual $b$, and thus the location of consensus for the whole population.

Pick some initial guess for the pair $(\tilde{T}_{ab}, \tilde{T}_{ba})$ with both values in the time interval $(T_R, \infty)$. If $G(\tilde{T}_{ab}, \tilde{T}_{ba}) > x^*$, then fix $T_{ba} = \tilde{T}_{ba}$. By Lemma A.2 in Appendix A, for a fixed $T_{ba} = \tilde{T}_{ba} > T_R$, the function $G(T_{ab}, T_{ba})$ is continuous in $T_{ab}$. If $T_{ab}$ is made extremely large ($T_{ab} \gg \tilde{T}_{ba}$), then $x_b$ decreases toward $x_a < x^*$ and, if $T_{ab}$ is sufficiently large, $x_b$ will be sufficiently far below $x^*$ before time $T_{ab}$ that $G(T_{ab}, \tilde{T}_{ba}) < x^*$. Hence, by the intermediate value theorem, there is a value of $T_{ab}$ for which $G(T_{ab}, \tilde{T}_{ba}) = x^*$. Due to the persistent (exponentially small) edge weights between individuals $a$ and $b$ and the rest of the population, this ideal value of $T_{ab}$ must account for the dynamics of the whole system as $t \to \infty$; its exact value would therefore be impractical to compute in most cases. However, using the continuity of $G$, we have shown that such a value, and therefore such a control, exists.

If $G(\tilde{T}_{ab}, \tilde{T}_{ba}) < x^*$, then use an analogous argument for a fixed $T_{ab} = \tilde{T}_{ab}$.
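The intermediate value theorem argument above also suggests a practical recipe: since $G$ is continuous and takes values on both sides of $x^*$ across a bracket of switching times, the correct time can be located by bisection. A minimal sketch, with the callable `G` standing in for a full solve of the dynamics:

```python
def find_switch_time(G, t_lo, t_hi, x_star, tol=1e-10, max_iter=200):
    """Bisection for a switching time satisfying G(T_ab) = x_star.

    Assumes G is continuous and decreases through x_star across the
    bracket [t_lo, t_hi], mirroring the argument in the proof.  In
    practice each evaluation of G requires solving the full dynamics.
    """
    lo, hi = t_lo, t_hi
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if G(mid) > x_star:   # still above the target: switch later
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

For a toy monotone $G$, the root is recovered to high accuracy; for the actual system, each call would wrap a numerical solve of (1).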

Remark III.1

In the case of an empty starting network, having $x^* = x_i(0)$ for some $i \in \Lambda$ simplifies the problem significantly. However, in the case of a non-empty starting network, such an $i$ may instead create additional difficulty: it may form a crucial link in the $r$-chain yet repeatedly move above and below $x^*$, making it in some sense unreliable as a point to control toward.

Figure 4 shows a numerical example of the control described in Theorem III.1 using the same setup as described for Fig. 3. The initial network $w(0)$ is an Erdős–Rényi random graph with edge probability $p = 5/N$ (Ref. 29), with edge weights then given uniformly in the interval $[0, 1]$. As previously, suitable values of $\tilde{T}_{ab}, \tilde{T}_{ba}$ were identified by iteratively solving the dynamics (this required solving the complete dynamics rather than only the $N = 2$ case). Note that we used $s(u_{ij}) = u_{ij}^2$, and so $S = 1$, which is far below the value indicated in (14) (approximately 1200), but in this case control is clearly still possible.

FIG. 4.

Example of the control method described in Theorem III.1. Beginning with a non-empty network, edges are created to gather the population closer to individuals near the target opinion $x^* = 0.5$ and removed to prevent crossing this value or splitting the population into multiple clusters. Those individuals closest to $x^*$ are then controlled to consensus precisely at this point. Opinion trajectories are colored according to individuals' initial opinions. The target opinion $x^*$ is indicated by a black dashed line. The inset plot shows the final approach to consensus at $x^*$.


One might reasonably expect that consensus at a desired point could be achieved without such drastic controls. The approach described in the proof effectively erases the initial network to ensure that the $r$-chain is not broken and that the target point remains inside the range of opinions, but this may well be unnecessary; indeed, allowing more of the initial network to remain might encourage faster convergence to consensus. Therefore, having established that control is possible, we now aim to improve our control strategy.

We next consider an explicit candidate control strategy, inspired by the approach in Ref. 5, in which a model is analyzed where each individual has an evolving, non-zero mass that determines their influence, with the total mass preserved across the population. Giving an individual mass $m_j$ is equivalent to setting $w_{ij} = m_j$ for all $i \in \Lambda$ [although the weight dynamics (1b) considered in this paper would not preserve the total mass]. In the setup of Ref. 5, the population always reaches consensus, so it is sufficient to control the location of this consensus. The more general network setting considered in this paper poses additional challenges, but a similar approach may nonetheless provide a viable control strategy.

Analogously to the mass-weighted mean opinion considered in Ref. 5, we introduce here the degree-weighted mean opinion, given by
(20)
Note that this definition makes use of the out-degree, given by
(21)
While the in-degree $k_i$ describes the extent to which individual $i$ is connected to the rest of the population, $k_i^{\text{out}}$ describes the extent to which the population is connected to individual $i$, and thus more closely reflects the idea of individual $i$'s influence, i.e., their "mass." As $k_i^{\text{out}}(t) \geq 1$ for all $t \geq 0$, $\bar{x}(t)$ is always well-defined.
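As a concrete reading of (20) and (21), the following sketch computes the degree-weighted mean opinion; the column-sum convention for $k_i^{\text{out}}$ (summing the weights of edges from the rest of the population to individual $i$) and the toy network are our assumptions based on the description above.

```python
import numpy as np

def degree_weighted_mean(x, w):
    """Sketch of the degree-weighted mean opinion (20).

    Assumes k_i^out sums the weights of edges pointing to individual i,
    i.e. the column sums of w.  With self-loops w_ii = 1, every
    k_i^out >= 1, so the mean is always well-defined.
    """
    k_out = w.sum(axis=0)                 # k_i^out = sum_j w_ji
    return np.dot(k_out, x) / k_out.sum()

# Hypothetical three-individual example with self-loops only, plus one edge.
x = np.array([-0.5, 0.0, 0.8])
w = np.eye(3)
w[0, 2] = 0.6                             # extra edge raises individual 2's k^out
```

With only self-loops, the degree-weighted mean reduces to the plain average; the extra edge shifts it toward the more influential individual.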
Proposition III.3

If the population reaches consensus, then $\lim_{t \to \infty} x_i(t) = \lim_{t \to \infty} \bar{x}(t)$ for all $i = 1, \ldots, N$.

Proof.
Let $x^c$ be the value at which the population reaches consensus; that is, for all $i \in \Lambda$, $\lim_{t \to \infty} x_i(t) = x^c$. Also note that $k_i^{\text{out}}(t) \geq 1$, so for all $i \in \Lambda$, $\lim_{t \to \infty} k_i^{\text{out}}(t) \geq 1 > 0$. Hence,
Hence, controlling the value of the cost function
(22)
to 0 is equivalent to controlling the population to consensus at $x^*$. Differentiating (22) gives
(23)
The value of this derivative depends on all of $x$, $w$, and $u$, but in the context of instantaneous control we consider $x$ and $w$ as given and choose the values of $u_{ij}$ that achieve the most negative value of $dV/dt$. Here, this is achieved by setting
(24)
That is, we set $u_{ij}$ to be positive for all $j$ if $\bar{x}$ and $x_j$ are on the same side of $x^*$, and $u_{ij}$ to be negative for all $j$ if $\bar{x}$ and $x_j$ are on different sides of $x^*$. Note that the control $u_{ij}$ depends on the opinion of individual $i$ only through its contribution to $\bar{x}$. In addition, it is not possible for $(\bar{x} - x_j)$ to have the same sign for all $j \in \Lambda$, since $\bar{x}$ is a convex combination of the $x_j$'s; hence there will always be at least one individual $j$ for whom $u_{ij}$ is positive for all $i \in \Lambda$, and thus at least $N - 1$ weights must be increasing at any given time.
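The verbal rule above can be transcribed directly into code. This is our literal reading of the description, not the published formula (24), whose details may differ; the column-sum convention for the out-degree is likewise an assumption.

```python
import numpy as np

M = 1.0  # control bound; M = 1 matches the simulations described below

def instantaneous_control(x, w, x_star):
    """Sketch of the bang-bang rule described for (24).

    u_ij = +M whenever the degree-weighted mean x_bar and x_j lie on the
    same side of the target x_star, and u_ij = -M otherwise.  The
    dependence on i enters only through x_bar, so each column of u is
    constant across i.
    """
    k_out = w.sum(axis=0)                       # assumed column-sum convention
    x_bar = np.dot(k_out, x) / k_out.sum()
    same_side = np.sign(x_bar - x_star) * np.sign(x - x_star) > 0
    u_row = np.where(same_side, M, -M)          # one value per target j
    return np.tile(u_row, (len(x), 1))

u = instantaneous_control(np.array([-1.0, 0.0, 1.0]), np.eye(3), 0.5)
```

Note that, as argued in the text, at least one column of `u` is entirely positive, so at least $N-1$ weights are increasing.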
To investigate the effectiveness of this control, we perform 10 000 simulations in which initial opinions are chosen uniformly at random in the interval $[-1, 1]$ and a weighted Erdős–Rényi random network is created to give the initial network $w(0)$. A consensus target $x^*$ is then chosen uniformly at random inside the initial opinion range. Each simulation is run until the opinions have reached a steady state, determined by the distance between opinion vectors at consecutive timepoints falling below a given threshold. For all these simulations, we use an exponential interaction function, specifically
(25)
Note that, when the network $w$ is fixed, this interaction function guarantees consensus (Ref. 6). For the control dynamics, we take $s$ and $\ell$ given by
(26)
and so consider $u_{ij} \in [-1, 1]$ (meaning $M = 1$). These simple functions are chosen for $s$ and $\ell$ to capture their intended purpose as the speed and direction of the controls, while also making possible the type of analysis performed in Sec. III C. The value of $S$ determines the speed with which the control acts, and we take $S = 1$ in these simulations. Two example timeseries can be found in Appendix C.
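The steady-state stopping criterion described above can be sketched as follows. The explicit Euler step and the kernel $\phi(d) = e^{-d}$ are illustrative stand-ins; the exact form of (25) and the coupled weight dynamics are not reproduced here.

```python
import numpy as np

def simulate_to_steady_state(x0, w, phi, dt=0.01, tol=1e-8, max_steps=10**6):
    """Euler sketch of the simulation protocol in the text.

    Integrates dx_i/dt = sum_j w_ij * phi(|x_j - x_i|) * (x_j - x_i) with
    the network held fixed, stopping once consecutive opinion vectors are
    closer than tol.
    """
    x = x0.copy()
    for _ in range(max_steps):
        diff = x[None, :] - x[:, None]          # diff[i, j] = x_j - x_i
        dx = (w * phi(np.abs(diff)) * diff).sum(axis=1)
        x_new = x + dt * dx
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

phi_exp = lambda d: np.exp(-d)                  # assumed exponential kernel
x_final = simulate_to_steady_state(np.array([-1.0, 0.0, 1.0]),
                                   np.ones((3, 3)), phi_exp)
```

On a complete network with this kernel, the opinions contract to consensus, consistent with the guarantee cited for fixed connected networks.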

From each simulation, we ask two questions: has the population reached consensus, and has the population been guided to the consensus target?

To address the first question, we calculate the opinion diameter at the end of each simulation. For all simulations, the final opinion diameter was of the order $10^{-5}$, clearly indicating consensus. Results for each simulation, along with a local average, can be found in Fig. 9 in Appendix D. The local average near a consensus target $x^*$ is calculated using a weighted mean, with the weight given to a simulation result decaying exponentially with the distance between the consensus target in that simulation and $x^*$.
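The exponentially weighted local average can be written compactly; the decay length `scale` is an assumed parameter, not stated in the text.

```python
import numpy as np

def local_average(targets, values, query, scale=0.05):
    """Weighted local average of the type used for the trend lines.

    Each simulation's weight decays exponentially with the distance
    between its consensus target and the query point; `scale` sets the
    (assumed) decay length.
    """
    w = np.exp(-np.abs(np.asarray(targets) - query) / scale)
    return float(np.dot(w, np.asarray(values)) / w.sum())
```

Simulations with targets far from the query point contribute almost nothing, so the curve tracks the local behavior of the scatter.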

The emergence of consensus is perhaps unsurprising. If $\bar{x} > x^*$, then all edges connecting to individuals with opinions below $\bar{x}$ will be increasing in weight. In particular, the edge connecting the individual with the maximum opinion to the individual with the minimum opinion will be increasing in weight. As the interaction function is always positive, this will draw the maximum opinion down toward the minimum opinion. If $\bar{x} < x^*$, then the opposite occurs; in either case, the opinion diameter shrinks due to this strengthening of edges. If $\bar{x} = x^*$, then the controls switch off, but we know from Ref. 6 that for an exponential interaction function on a fixed connected network, consensus is guaranteed. As it is highly likely that our randomly generated $w(0)$ is connected, we thus expect consensus regardless of the value of $x^*$ (although not necessarily consensus at $x^*$).

To address the question of consensus at $x^*$, we calculate the maximum distance from $x^*$ at the end of the simulation (time $T$),
(27)
Note that the maximum distance cannot exceed $1 + |x^*|$. The results for the 10 000 simulations are shown in Fig. 5. We observe a region of consensus targets, approximately $[-0.5, 0.5]$, in which the final maximum distance is extremely small. This indicates that the control is reliably successful for targets in this region. Outside this region, the final maximum distance grows almost linearly in $|x^*|$, indicating that the control struggles to achieve consensus at targets above $0.5$ or below $-0.5$.
FIG. 5.

Results of repeated tests of the instantaneous control (24) with $M = 1$. Each simulation uses uniformly random initial conditions and a weighted Erdős–Rényi random initial network, and runs until opinions have reached a steady state. Each point shows the value of the maximum distance from $x^*$, given by (27), at the end of the simulation. The black line shows the local average.


It is important to note that, although the results in Fig. 5 indicate that this control strategy is typically successful for $x^* \in [-0.5, 0.5]$, this success is not guaranteed. In fact, as previously noted, Proposition III.1 shows that there exist initial conditions and targets for which this control will fail. As the results presented in Fig. 5 were generated using random $x(0)$, $w(0)$, and $x^*$, it is also worth noting that Proposition III.1 gives a range of initial conditions for which the system is not controllable, and this range has strictly positive probability of occurring. Thus, if the experiments described were repeated sufficiently many times, we would eventually see instances with consensus targets in the interval $[-0.5, 0.5]$ for which the control is not successful.

Moreover, this failure cannot necessarily be remedied by increasing $S$ (the speed with which the control acts). As $w_{ii}(t) = 1$ for all $i \in \Lambda$ and $t \geq 0$, $\bar{x}$ cannot take arbitrary values in the opinion interval, even if the other edge weights could be changed instantaneously. This means that for certain initial conditions, control to consensus at $x^*$ is not possible for any $S$: no matter how quickly the control acts, $\bar{x}$ cannot be controlled to $x^*$ sufficiently quickly. We provide an example of this in Appendix C.

Having examined the performance of this control strategy for an exponential interaction function, we now consider a smoothed bounded confidence interaction function, under which obtaining consensus is significantly more challenging. Figure 6 shows that this control is incapable of achieving consensus under such an interaction function. This is because the form of the control (24) does not include any information about the interaction function and hence cannot account for the complex behaviors it may cause. For example, in the lower panel of Fig. 6 we observe a situation in which half the population is connected exclusively to individuals outside their confidence bound, meaning they do not interact and their opinions remain constant. Any previous success of this control strategy appears to rely heavily on the exponential interaction function allowing interactions at all distances and hence promoting consensus.

FIG. 6.

Example implementations of the instantaneous control (24) with a smoothed bounded confidence interaction function. Opinion trajectories are colored according to individuals' initial opinions. This control fails for both the moderate target of $+0.2$ (top panel) and the more extreme target of $-0.8$ (bottom panel), as the population does not reach consensus. The target opinion $x^*$ is indicated by a black dashed line. The degree-weighted mean opinion is given by the solid black line. Note that in the top panel the dashed line is not visible, as the degree-weighted mean opinion is almost immediately brought to the target.

It is possible to help the control account for the interaction function by adapting the cost function $V$ (22). One possibility is to include a term of the form
as this describes the energy of the system with a fixed network (Refs. 4 and 6). Minima of $E$ correspond to stationary opinion states; hence, including this term would encourage the control to bring the system toward the nearest stationary state. However, minimizing $E$ does not necessarily encourage consensus. The population may instead be guided rapidly toward a clustered state, preventing consensus and thus preventing consensus at $x^*$. This makes a cost function involving $E$ a poor candidate for creating consensus at $x^*$.

Another way to address the clustering arising from the bounded confidence interaction function is to allow the control more information about the resulting dynamics. In Sec. III C, we consider an optimal control problem for (1) that aims to remedy the failure observed in Fig. 6 and to improve upon the drastic approach of Theorem III.1, which removes the entire initial network.

In this section, we continue considering controls of the form (4), with the restriction that, for some fixed $M > 0$, $u_{ij}(t) \in [-M, M]$ for all $i, j \in \Lambda$ and $t \geq 0$. We denote this set of admissible controls by $U_{\text{ad}}$. We also consider a finite time horizon $T > 0$. For a given control $u$, we introduce the cost functional $C(u)$, which includes both a cost of controls and a cost associated to the distance from the target opinion $x^*$,
(28)
where $(x_i)_{i \in \Lambda}$ is the solution to (1) with controls (4). Including the distance from $x^*$ inside the integral, rather than as a terminal cost, promotes controlling opinions to consensus at $x^*$ as quickly as possible. The constants $\alpha, \beta \in \mathbb{R}^+$ are chosen to balance the relative costs of control and of distance from $x^*$, although in practice it is their ratio that matters.
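A numerical sketch of a cost of this type follows. The exact integrand of (28) is not reproduced here; we assume its described structure, a control cost weighted by $\alpha$ plus the integrated squared distance from the target weighted by $\beta$, approximated by a left-endpoint Riemann sum over the time grid.

```python
import numpy as np

def cost_functional(u_traj, x_traj, x_star, dt, alpha=1.0, beta=1.0):
    """Sketch of a running cost with the structure described for (28).

    u_traj has shape (steps, N, N) and x_traj has shape (steps, N); the
    quadratic form of each term is an assumption.
    """
    control_cost = alpha * np.sum(u_traj ** 2, axis=(1, 2))      # per time step
    distance_cost = beta * np.sum((x_traj - x_star) ** 2, axis=1)
    return dt * float(np.sum(control_cost + distance_cost))
```

A trajectory sitting exactly at the target with zero control incurs zero cost; any control effort or deviation accumulates over the whole horizon, which is what rewards reaching consensus at $x^*$ quickly.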
Next, we introduce two co-states, $p \in \mathbb{R}^N$ and $q \in \mathbb{R}^{N \times N}$, for $x$ and $w$, respectively. The control-theory Hamiltonian (Ref. 30) associated to (28) is then given by
(29)
(30)
Moreover, by the Pontryagin maximum principle (Ref. 30), the optimal control $\tilde{u}$ and corresponding states $(\tilde{x}, \tilde{w})$ and co-states $(\tilde{p}, \tilde{q})$ satisfy the following:
  • $\frac{d\tilde{x}}{dt} = \nabla_p H(\tilde{x}, \tilde{w}, \tilde{p}, \tilde{q}, \tilde{u})$ and $\frac{d\tilde{w}}{dt} = \nabla_q H(\tilde{x}, \tilde{w}, \tilde{p}, \tilde{q}, \tilde{u})$, meaning the original dynamics (1) are satisfied.

  • $\frac{d\tilde{p}}{dt} = -\nabla_x H(\tilde{x}, \tilde{w}, \tilde{p}, \tilde{q}, \tilde{u})$, $\frac{d\tilde{q}}{dt} = -\nabla_w H(\tilde{x}, \tilde{w}, \tilde{p}, \tilde{q}, \tilde{u})$, referred to as the adjoint dynamics.

  • $p(T) = 0$ and $q(T) = 0$, the terminal conditions for $p$ and $q$.

  • $H(\tilde{x}, \tilde{w}, \tilde{p}, \tilde{q}, \tilde{u}) = \max_{u \in U_{\text{ad}}} H(\tilde{x}, \tilde{w}, \tilde{p}, \tilde{q}, u)$, referred to as the maximization principle for $\tilde{u}$.

From the Hamiltonian (30), the adjoint dynamics are given by
(31a)
(31b)
where $\tilde{\phi}(x_j - x_i) = \phi'(x_j - x_i)\,(x_j - x_i) + \phi(x_j - x_i)$.
It is not possible to identify a priori the control $\tilde{u}$ that maximizes $H(\tilde{x}, \tilde{w}, \tilde{p}, \tilde{q}, u)$ for general functions $s$ and $\ell$; hence, for the remainder of this section we again work with $s$ and $\ell$ as given in (26). For these functions, an explicit expression for $\tilde{u}$ can be obtained (see Appendix B for the derivation). Defining lower and upper bounds, $b_l$ and $b_u$, respectively, by
the control $\tilde{u} = (\tilde{u}_{ij})_{i,j \in \Lambda}$ is then given by
(32)

It is interesting to note that in Proposition III.2, Theorem III.1, and the optimal control setup (32), the full range of controls offered by (4) is not utilized. Instead, only a single value $u^+$ that creates edges and a single value $u^-$ that removes edges are used. This type of control, which switches between extreme values, is commonly known as a bang–bang control. Note that this does not mean edge weights always take integer values (indeed, it is often crucial that they do not), only that the controls always act to their fullest extent.

To identify the optimal controls, we perform a forward–backward sweep (FBS) over (1) and (31), using a fourth-order Runge–Kutta numerical scheme, to iteratively improve the controls. Note that during the search for the optimal control we allow continuous controls, rather than restricting to bang–bang controls, but we observe convergence toward bang–bang controls of the form (32). We use an initial guess of $u \equiv 0$; that is, we begin with no control.
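The FBS iteration structure can be illustrated on a toy scalar problem: minimize the integral of $x^2 + u^2$ over $[0, T]$ subject to $\dot{x} = u$. This is a stand-in for the paper's problem (where the state is $(x, w)$ with co-states $(p, q)$ and the maximizer is (32)); the Euler stepping and relaxation factor below are our choices.

```python
import numpy as np

def fbs_scalar(x0=1.0, T=1.0, n=200, n_iters=100, relax=0.5):
    """Toy forward-backward sweep for a scalar linear-quadratic problem.

    Integrate the state forward, the co-state backward from p(T) = 0,
    then relax the control toward the Hamiltonian maximizer: here
    H = p*u - x^2 - u^2, so the maximizer is u = p/2.
    """
    dt = T / n
    u = np.zeros(n + 1)                       # initial guess: no control
    for _ in range(n_iters):
        x = np.empty(n + 1); x[0] = x0        # forward sweep (state)
        for k in range(n):
            x[k + 1] = x[k] + dt * u[k]
        p = np.empty(n + 1); p[-1] = 0.0      # backward sweep (co-state)
        for k in range(n, 0, -1):
            p[k - 1] = p[k] - dt * 2.0 * x[k]     # dp/dt = -dH/dx = 2x
        u = relax * u + (1 - relax) * (p / 2.0)   # relaxed maximizer update
    return x, u

x_opt, u_opt = fbs_scalar()
```

For this problem the optimal trajectory is known analytically, $x(t) = \cosh(T - t)/\cosh(T)$, so the sweep's output can be checked against $x(T) = 1/\cosh(1) \approx 0.65$ and $u(0) = -\tanh(1) \approx -0.76$.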

In the following, we use the same initial conditions as for Fig. 4 and again take S = 1. Additional examples, using different initial conditions, can be found in  Appendix D. Our overall aim is to use these examples to make inferences about the general nature of optimal control strategies for this system.

The trajectory of opinions under the optimal control is shown in Fig. 7(a). We observe that the trajectory is quite different from that in Fig. 4, as individuals do not wait to be "gathered" toward $x^*$. This leads to significantly faster dynamics that appear to reach consensus before the final time $t = 10$, whereas the implementation of the method described in Theorem III.1 does not do so until approximately $t = 30$. Another feature of the dynamics in Fig. 7(a) is that, due to the controls, individuals' opinions rarely cross over. This may help in maintaining the $r$-chain and keeping $x^*$ within the opinion interval.

FIG. 7.

Results of the FBS to find the optimal controls under (28), using edge weight dynamics of the form (4) with $s$ and $\ell$ given by (26). The same initial conditions were used as for the examples in Figs. 3 and 4. (a) Opinion dynamics under the optimal controls for the cost functional (28). Opinion trajectories are colored according to individuals' initial opinions. The target opinion $x^*$ is indicated by a black dashed line. (b) Snapshots of the optimal control at times $t = 0, 2, 4, 6$. The horizontal axis gives the opinions $x_i$ for $i \in \Lambda$, the vertical axis gives the opinions $x_j$ for $j \in \Lambda$, and points show the control $u_{ij}$. Blue stars show positive controls, where edges are created/strengthened. Red triangles show negative controls, where edges are weakened. Gray circles indicate no control. Dashed lines show the location of the target opinion $x^*$; hence, as $t$ increases, opinions are brought near this value.


Figure 7(b) shows the optimal controls at four timepoints ($t = 0, 2, 4, 6$). In each panel, we show the value of each $u_{ij}(t)$ in the following way: a point is placed at $(x_i(t), x_j(t))$, that is, the horizontal axis corresponds to the opinion of individual $i$ and the vertical axis to the opinion of individual $j$, and this point is colored according to the value of $u_{ij}(t)$. As our search for the optimal control converges toward a bang–bang control, all control values lie at (or extremely close to) $0$, $+1$, or $-1$; the values have therefore been rounded so that a different marker may be used for each: $0$ appears as gray circles, $+1$ as blue stars, and $-1$ as red triangles.

At time $t = 0$, we see a definite striped pattern in the $+1$ controls, showing where edges are being created or strengthened. The width and position of this stripe indicate that edges are strengthened only when pairs of individuals lie within a distance $r$ and so could potentially interact. Where $x_i < x^*$, this stripe lies (for the most part) above the diagonal, indicating that individuals are being connected to those with a higher opinion, which would bring them up toward $x^*$. Where $x_i > x^*$, the opposite occurs. There are some $+1$ controls that do not follow this rule, as well as some "missing" points where they might otherwise be expected, suggesting that this typical behavior is not the only consideration. The distribution of $-1$ controls, showing where edges are being removed, appears much more uniform, making the motivation behind removing these edges unclear.

By time $t = 2$, the opinion diameter has reduced and the behavior of the controls has changed. The stripe of $+1$ controls is still present but has reduced in width, as many of the useful edges will already have been established. For $x_i > x^*$, the majority of controls are by now set to zero. There is a new cluster of $-1$ points in the (approximate) region $[-0.6, -0.3] \times [0.5, 0.7]$. Its purpose appears to be to prevent the individuals $i$ with $x_i < x^*$ from increasing above $x^*$, by removing edges connecting them to individuals $j$ with $x_j > x^*$. While it is beneficial to connect individuals to those with higher opinions, in order to bring them closer to $x^*$, the control cannot afford to "overshoot" and encourage individuals to cross $x^*$.

Moving to time $t = 4$, we see that almost all controls are now set to zero. A small stripe of $+1$ controls remains, and a new cluster of $-1$ controls has appeared, this time preventing individuals above $x^*$ from moving below $x^*$. By $t = 6$, essentially all controls are set to zero (the very last control switches off at time $7.46$). This switching off of controls before the population has neared consensus is made possible by the nature of the forward–backward sweep, in which the control is effectively given knowledge of the future dynamics of the system and can thus adjust weights to account for this. It mirrors the usage of the intermediate value theorem in Proposition III.2 and Theorem III.1, where it is shown that the system can be controlled by choosing the correct times to switch controls on/off, but that knowledge of the full future dynamics is needed to identify these times. In a similar way, the optimal control can iteratively update earlier controls to account for the full dynamics, allowing controls to be switched off well before consensus.

 Appendix D contains further examples using the same initial opinions with an empty $w(0)$, a complete $w(0)$, and an example using different $x(0)$ and $x^*$ with a Watts–Strogatz random network for $w(0)$. In all these examples, similar qualitative behaviors are observed. This provides some support for using examples of optimal control to learn which strategies are most effective in general, with the hope that this may pave the way to future analytic results concerning controllability and the optimality of such controls.

It is clear from Fig. 7(b) and the examples in Appendix D that the optimal control in this setup is not sparse, meaning that many different edges are controlled throughout the dynamics. In future work, we will consider several modifications to the cost function (28), including methods to encourage sparse controls or to penalize deviations from the initial network. The question then shifts from how close the population can be brought to $x^*$ to how much (or how little) control is needed to achieve this.

The control problem we have discussed poses several interesting challenges. The difficulty of guaranteeing consensus in opinion formation is exacerbated by the problem of keeping the target point $x^*$ inside the range of current opinions. Controllability can be proven under some rather strong assumptions about the range and speed of controls, yet we observe that successful controls can be found even when these assumptions do not hold. The examples of optimal controls point toward a promising alternative strategy for further theoretical results, as well as the possibility of searching for sparse controls.

The explicit strategy (24) arising from instantaneous control of $V$ showed mixed results, and further work is required to identify precisely where and why its failures occur. One possibility, suggested by the success of optimal control and by the nature of the controllability results, is that information about the current state alone is not sufficient. This raises the question of how much knowledge of the future dynamics a control requires in order to be effective in this setting.

Recent interest in the extension of mean-field limits to fixed and adaptive networks (Refs. 26 and 31) also raises the possibility of moving this type of control to the partial differential equation setting. Network control could also be considered at the finer scale of a stochastic agent-based model. Indeed, the type of network control considered here offers many interesting questions about the coupling between opinion and network dynamics and the possibility of successfully influencing their outcome.

A.N. was supported by the Engineering and Physical Sciences Research Council through the Mathematics of Systems II Centre for Doctoral Training at the University of Warwick (Reference No. EP/S022244/1). M.T.W. acknowledges partial support from the EPSRC Small Grant EPSRC No. EP/X010503/1 and Royal Society Grant for International Exchanges Ref. IES/R3/213113. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC-BY) license to Any Author Accepted Manuscript version arising from this submission.

The authors have no conflicts to disclose.

A. Nugent: Conceptualization (equal); Data curation (equal); Formal analysis (lead); Investigation (lead); Methodology (equal); Visualization (equal); Writing – original draft (lead); Writing – review & editing (equal). S. N. Gomes: Conceptualization (equal); Funding acquisition (equal); Methodology (equal); Project administration (lead); Supervision (equal); Writing – original draft (supporting); Writing – review & editing (equal). M. T. Wolfram: Conceptualization (equal); Methodology (equal); Project administration (supporting); Supervision (equal); Writing – original draft (supporting); Writing – review & editing (equal).

The data that support the findings of this study are available within the article. Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Lemma A.1

Fix some $T_{21} > 0$; then the function $F(T_{12}, T_{21})$ for $T_{12} \in \mathbb{R}^+$, defined by (9) in Proposition III.2, is continuous in $T_{12}$.

Proof.
Fix $\tau \geq 0$. Consider two versions of the dynamics, the first $(x^{(1)}, w^{(1)})$ in which $T_{12} = \tau$ and the second $(x^{(2)}, w^{(2)})$ in which $T_{12} = \tau + h$, for some small $h > 0$. Both versions are identical until time $\tau$, so we consider only $t \geq \tau$. As both versions have the same fixed $T_{21}$, we have
for all $t \geq \tau$. Moreover, given the values of $T_{12}$, we also have
(A1)
(A2)
Hence, there exists a positive function $\gamma : [\tau, \infty) \to \mathbb{R}^+$ such that
(A3)
Define $z^{(v)}(t) = x_2^{(v)}(t) - x_1^{(v)}(t) \geq 0$ for $v = 1, 2$. Then,
where the function $\Omega^{(v)}(t)$ is given by
As $\tau > 0$ and all the $w_{ij}$ are non-decreasing, $\Omega^{(v)}(t) \geq \Omega^{(v)}(\tau)$. Furthermore, $z^{(v)}$ is decreasing and $z^{(v)}(0) < r$; hence $\Omega^{(v)}(t)\,\phi(|z^{(v)}|) \geq \Omega^{(v)}(\tau)\,\phi(|z^{(v)}(0)|) > 0$. Thus, there exists a positive constant $c_1$ (independent of the version $(v)$) such that, for all $t \geq \tau$ and $v = 1, 2$,
(A4)
Note that if $\tau = 0$, a slight modification is required, as $\Omega^{(v)}(\tau) = 0$, but a bound of the same form can still be obtained.
Additionally, by (A3), we can write $\Omega^{(2)}(t) = \Omega^{(1)}(t) + \Gamma(t)$ for a non-negative function $\Gamma : [\tau, \infty) \to \mathbb{R}^+$. Due to this, $z^{(1)}$ and $z^{(2)}$ differ only by a rescaling of time. Specifically, we can write
(A5)
for
Using (A1) and (A2), it can be verified that $\delta(t) \leq m(h)\,t$ for some function $m(h)$ with $m(h) \to 0$ as $h \to 0$.
Combining this time rescaling with (A4), we obtain, for any $t \geq \tau$,
(A6)
where the constant $c_2$ is given by $c_2 := z^{(v)}(\tau)\,e^{c_1 \tau}\,\frac{1}{c_1}$.
We can now obtain a uniform-in-time estimate for the difference between $x_1^{(1)}(t)$ and $x_1^{(2)}(t)$, allowing us to then consider the difference between their limits as $t \to \infty$,
where $L_\phi$ is the Lipschitz constant of $\phi$. As both $z^{(2)}$ and $|z^{(2)}(r) - z^{(1)}(r)|$ are exponentially decreasing, their integrals remain bounded as $t \to \infty$. In addition, by (A6),
for some positive constant $c_3$. Furthermore, as $w_{12}^{(2)}$ is continuous,
Overall, this gives that $|x_1^{(2)}(t) - x_1^{(1)}(t)|$ is bounded above, uniformly in time, by a bound that tends to 0 as $h \to 0$. Thus, for $h > 0$,
By an almost identical argument, reversing the time change in (A5), the same holds for $h < 0$. Hence, we conclude that, for a fixed $T_{21} > 0$, $F(T_{12}, T_{21})$ is continuous in $T_{12}$.
Lemma A.2

Fix some $T_{ba} > T$, as defined in Theorem III.1. Then, the function $G(T_{ab}, T_{ba})$ for $T_{ab} \in (T, \infty)$, defined by (19), is continuous in $T_{ab}$.

Proof.

As weights cannot be driven to exactly zero, only made exponentially small, we do not look specifically at the location of $\lim_{t \to \infty} x_a(t)$, but instead consider the behavior of the entire ODE system. We show that making a small change to $T_{ab}$, and therefore a small change to the weight $w_{ab}$, has a correspondingly small impact on the location of $\lim_{t \to \infty} x(t)$ as a whole, and thus on the location of $\lim_{t \to \infty} x_a(t)$.

Recall that for $t \ge T$, $\phi(|x_j(t) - x_i(t)|) = 1$ for all $i, j \in \Lambda$. Hence, the ODE system (1) becomes
The system can then be written as
for the matrix
The solution to this system can then be written in the form $x(t) = x(0) \exp\left(\int_0^t A(r)\,dr\right)$.
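As a sanity check on this representation, the following sketch compares the matrix-exponential solution of a linear system $\dot{x} = A x$ with direct time stepping. The matrix $A$ here is an illustrative constant Laplacian-like matrix, not the paper's system, and a column-vector convention is used.

```python
import numpy as np

def expm_taylor(M, terms=30):
    """Matrix exponential via truncated Taylor series (fine for small ||M||)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# A Laplacian-like matrix: rows (and columns) sum to zero, so the mean opinion
# is preserved and consensus states are equilibria.
A = np.array([[-1.0, 0.5, 0.5],
              [0.5, -1.0, 0.5],
              [0.5, 0.5, -1.0]])
x0 = np.array([0.9, 0.1, -0.4])
t = 1.0

x_exact = expm_taylor(t * A) @ x0

# Explicit Euler integration with a small step, for comparison
x = x0.copy()
h = 1e-4
for _ in range(int(t / h)):
    x = x + h * (A @ x)

print(np.max(np.abs(x - x_exact)))  # small discretization error
```

The agreement confirms the exponential formula for a constant matrix; for time-varying $A(t)$ the formula requires the matrices $A(r)$ to commute, as they do in the regime $t \ge T$ considered here.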
As in the proof of Lemma A.1, we consider two versions of the system, the first $(x^{(1)}, w^{(1)})$ in which $T_{ab} = \tau$ and the second $(x^{(2)}, w^{(2)})$ in which $T_{ab} = \tau + h$, for some small $h > 0$. Define the non-negative function $\gamma(t) = w_{ab}^{(1)}(t) - w_{ab}^{(2)}(t)$. The controls $u_{ab}$ are known in both versions, hence we can compute $\gamma(t)$ exactly. For $t \le \tau$, $\gamma(t) = 0$. Then, for $\tau < t < \tau + h$,
Finally, for $t \ge \tau + h$,
Crucially,
(A7)
where $m : \mathbb{R} \to \mathbb{R}$ is a bounded function with $m(h) \to 0$ as $h \to 0$.
This change in $w_{ab}$ translates into a small alteration of the matrix $A(t)$. Let $A^{(1)}(t) = A^{(2)}(t) + \Gamma(t)$. This gives that
(A8)
where $\mathbb{1}_{N \times N}$ is the $N \times N$ identity matrix and $\|\cdot\|_{op}$ is the operator norm.
We will now bound the entries of $\int_0^t \Gamma(r)\,dr$. As the only change between the two versions is in $T_{ab}$ and, therefore, in $w_{ab}$, $\Gamma_{ij} = 0$ for $i \ne a$. For $i = a$,
and for $j \ne a, b$,
Finally,
Therefore, by (A7), for any $h > 0$,
and
(A9)
As both the operator norm and the matrix exponential are continuous, combining (A8) and (A9) gives that
(A10)
as $h \to 0$. An almost identical argument holds for small $h < 0$ (simply consider $-\gamma(t)$). Hence, for a fixed $T_{ba}$, the function $G(T_{ab}, T_{ba})$ is continuous in $T_{ab}$.
Taking s and as defined in (26), the Hamiltonian (30) becomes
When trying to maximize this function with respect to $u$, we can consider the effect of each $u_{ij}$ independently. Furthermore, much of $H$ is independent of the choice of $u$. Hence, we define a new function
(B1)
and set
(B2)
If $Sq = 0$, then clearly $h$ has its maximum at $\upsilon = 0$; hence, we assume this is not the case. The function $h$ can then be written as (dropping the explicit dependence on $q$ and $v$) $h(\upsilon) = a\,\upsilon^2\,(b - \upsilon)$,
for $a = \frac{Sq}{2}$ and $b = 1 - 2w - \frac{2\alpha}{Sq}$, both of which can take positive or negative values. The local extrema of $h$ are at $\upsilon = 0$ and $\upsilon^\star := \frac{2b}{3}$, with $h(0) = 0$ and $h(\upsilon^\star) = \frac{4ab^3}{27}$.

Recall that $w \in [0, 1]$, $\alpha > 0$, $S > 0$, and $q \in \mathbb{R}$. We aim to show that, under these conditions, the maximum of $h(\upsilon)$ over $\upsilon \in [-1, 1]$ is never at $\upsilon = \upsilon^\star$, hence must lie in the set $\{-1, 0, 1\}$. We, therefore, assume that there exist $w, \alpha, S, q$ satisfying the above conditions for which the maximum does indeed lie at $\upsilon^\star$ and aim to find a contradiction.

As the maximum lies at $\upsilon^\star$, we have the following inequalities:
These are all satisfied only when $a \ge 0$ and $b \ge 3$, or when $a \le 0$ and $b \le -1$. Note that the sign of $a$ coincides with the sign of $q$ and $q \ne 0$, so these can be rewritten as $q > 0$ and $b \ge 3$, or $q < 0$ and $b \le -1$.

However, if $q > 0$ then $b < 1 - 2w \le 1 < 3$, so the first pair of inequalities is not possible. Also, if $q < 0$ then $b > 1 - 2w \ge -1$, so the second pair of inequalities is not possible.

Overall, this means the maximum of $h(\upsilon)$ over $\upsilon \in [-1, 1]$ always lies in the set $\{-1, 0, 1\}$. Comparing the values of $h(-1), h(0), h(1)$ gives (32).
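A numerical spot-check of this conclusion is straightforward. The explicit forms $h(\upsilon) = a\upsilon^2(b - \upsilon)$, $a = Sq/2$, and $b = 1 - 2w - 2\alpha/(Sq)$ used below are reconstructions from the proof above, not taken verbatim from (B1); under those assumptions, the maximum on a fine grid never exceeds the best of the three candidates $\{-1, 0, 1\}$.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(-1.0, 1.0, 2001)
max_excess = -np.inf

for _ in range(1000):
    # sample parameters satisfying the stated conditions, with q != 0
    w = rng.uniform(0.0, 1.0)
    alpha = rng.uniform(0.01, 5.0)
    S = rng.uniform(0.01, 5.0)
    q = rng.choice([-1.0, 1.0]) * rng.uniform(0.1, 5.0)

    a = S * q / 2.0
    b = 1.0 - 2.0 * w - 2.0 * alpha / (S * q)
    h_vals = a * grid**2 * (b - grid)

    # h(1), h(0), h(-1): the claimed candidate maximizers
    best_candidate = max(a * (b - 1.0), 0.0, a * (b + 1.0))
    max_excess = max(max_excess, h_vals.max() - best_candidate)

print(max_excess)  # never meaningfully positive
```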

Figure 8 demonstrates examples in which the control strategy (24) succeeds in guiding the population to consensus at the moderate value $x^\ast = 0.2$, but fails for the more extreme value $x^\ast = -0.8$. Both examples use the setup described in Sec. III B but with identical initial conditions.

Figure 9 shows the final opinion diameter in each of the 10 000 simulations described in Sec. III B. In all cases, the population can be considered to have reached consensus, as the opinion diameter is of the order $10^{-5}$.

Figure 10 gives an example trajectory demonstrating that, for certain initial conditions, control to $x^\ast$ is impossible regardless of the speed of controls ($S$). A small population size of $N = 3$ is chosen for simplicity. Initial opinions are given by $x(0) = (-0.85, 0.5, 0.9)$ with target $x^\ast = -0.8$ (shown by a dashed black line in Fig. 10). The initial weights $w(0)$ are chosen to match the steady state of their controlled weight dynamics. That is, for $i \ne j$, $w_{ij}(0) = 1$ if $u_{ij}(0) = 1$ and $w_{ij}(0) = 0$ if $u_{ij}(0) = -1$, where $u_{ij}$ is determined by (24). The initial network is
FIG. 8.

Example implementations of the instantaneous control (24) with exponential interaction function (25). This control succeeds for the moderate target of $+0.2$ (top panel) but fails for the more extreme target of $-0.8$ (bottom panel). Opinion trajectories are colored according to individuals' initial opinions. The target opinion $x^\ast$ is indicated by a black dashed line. The degree-weighted mean opinion is given by the solid black line. Note that in the top panel the dashed line is not visible, as the degree-weighted mean opinion is almost immediately brought to the target.

FIG. 9.

Results of repeated tests of the instantaneous control (24) with $S = 1$. Each simulation uses uniformly random initial conditions and a weighted Erdős–Rényi random initial network and runs until opinions have reached a steady state. Each point shows the opinion diameter at the end of the simulation. The black line shows a local average.

FIG. 10.

Example trajectory demonstrating that, for certain initial conditions, control to $x^\ast$ is impossible regardless of the speed of controls ($S$). Opinion trajectories are shown in red. The target opinion $x^\ast$ is indicated by a black dashed line. The degree-weighted mean opinion is given by the solid black line and is constant.


As the degree-weighted mean opinion $\bar{x}$ (shown by a solid black line in Fig. 10) does not intersect any individual's opinion trajectory or the target $x^\ast$, $u_{ij}(t) = u_{ij}(0)$ for all $i, j \in \Lambda$ and $t \in [0, 0.8]$. As a result, all weights remain constant throughout the dynamics, as they begin at the correct steady state. Hence, the speed of controls $S$ is not significant, as the edge weights never change. Note that even though weights begin in the steady state determined by (24), a strategy designed to minimize the distance between $\bar{x}$ and $x^\ast$, $\bar{x}(0) \ne x^\ast$. This is a result of setting $w_{ii}(t) = 1$ for all $i \in \Lambda$ and $t \ge 0$, which restricts the possible values of $\bar{x}$. This represents the idea that each individual always gives their own opinion maximal weight, and that this weight cannot be affected by any control.

At time $t \approx 3.5$, the individual with the lowest opinion crosses $x^\ast$, meaning that $x^\ast$ is outside the opinion interval and thus control to consensus at $x^\ast$ is impossible. Hence, for certain initial conditions and consensus targets, control to consensus using the instantaneous control (24) is not possible for any value of $S$.
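The constancy of the degree-weighted mean under frozen symmetric weights can be checked directly: with $d_i = \sum_j w_{ij}$ fixed, $\sum_i d_i \dot{x}_i$ becomes a double sum that vanishes by antisymmetry of $(x_j - x_i)$. The sketch below assumes a dynamics of the form $\dot{x}_i = \frac{1}{d_i}\sum_j w_{ij}\,\phi(|x_j - x_i|)(x_j - x_i)$ — an assumed reading of (1), with an illustrative interaction function and weights, not the exact setup of Fig. 10.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3
W = rng.uniform(0.0, 1.0, (N, N))
W = (W + W.T) / 2.0          # symmetric, fixed weights
np.fill_diagonal(W, 1.0)     # each individual weights itself fully

phi = lambda r: np.exp(-r)   # illustrative interaction function
x = np.array([-0.85, 0.5, 0.9])   # illustrative initial opinions

d = W.sum(axis=1)
xbar0 = d @ x / d.sum()      # degree-weighted mean at t = 0

h = 1e-3
for _ in range(5000):
    diff = x[None, :] - x[:, None]            # diff[i, j] = x_j - x_i
    x = x + h * (W * phi(np.abs(diff)) * diff).sum(axis=1) / d

xbar = d @ x / d.sum()
print(abs(xbar - xbar0))  # conserved up to floating-point error
```

Opinions contract toward the degree-weighted mean, but the mean itself cannot be moved while the weights are frozen, which is exactly why the control in Fig. 10 is powerless.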

In this appendix, we show several additional examples of the optimal control problem described in Sec. III C. The figures are presented in the same format as Fig. 7: the top panel shows the optimal dynamics while the lower panels show several snapshots of the optimal controls.
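The optimal controls in these examples are computed with a forward-backward sweep (FBS): integrate the state forward under the current control, integrate the adjoint backward from its terminal condition, then update the control from the pointwise optimality condition with relaxation. A minimal sketch of this iteration on a toy scalar tracking problem — not the paper's cost functional (28) or network dynamics; the dynamics, target, and parameters here are illustrative assumptions:

```python
import numpy as np

# Toy problem: minimize  ∫ (x - xstar)^2 + alpha u^2 dt  subject to x' = u.
T, n = 1.0, 500
dt = T / n
alpha, xstar, x0 = 1.0, 0.8, 0.0

u = np.zeros(n)
for it in range(200):
    # forward sweep: integrate the state under the current control
    x = np.empty(n + 1); x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] + dt * u[k]
    # backward sweep: adjoint lambda' = -2 (x - xstar), lambda(T) = 0
    lam = np.empty(n + 1); lam[-1] = 0.0
    for k in range(n, 0, -1):
        lam[k - 1] = lam[k] + dt * 2.0 * (x[k] - xstar)
    # optimality condition 2 alpha u + lambda = 0, applied with relaxation
    u_new = -lam[:n] / (2.0 * alpha)
    u = 0.5 * u + 0.5 * u_new

print(x[-1])  # the state is steered part-way to the target
```

The relaxation step (here a 50/50 average) is what keeps the sweep stable; without it, the control update can overshoot and the iteration may diverge for longer horizons or cheaper controls.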

Figure 11 shows the optimal dynamics and controls using the same target and initial opinions as for Fig. 7 but with an empty initial network. As a result, the majority of negative controls are replaced with zero control, as there is no need to remove undesired edges. The pattern of positive controls is largely the same as seen in Fig. 7.

FIG. 11.

Results of the FBS to find the optimal controls under (28), using edge weight dynamics of the form (4) with s and given by (26). The same initial opinions are used as for the examples in Figs. 3 and 4. The initial network is empty. (a) Opinion dynamics under the optimal controls for the cost functional (28). Opinion trajectories are colored according to individuals' initial opinions. The target opinion $x^\ast$ is indicated by a black dashed line. (b) Snapshots of the optimal control at times $t = 0, 2, 4, 6$. The horizontal axis gives the opinions $x_i$ for $i \in \Lambda$, the vertical axis gives the opinions $x_j$ for $j \in \Lambda$, and points show the control $u_{ij}$. Blue stars show positive controls, where edges are created/strengthened. Red triangles show negative controls, where edges are being weakened. Gray circles indicate no control. Dashed lines show the location of the target opinion $x^\ast$; hence, as $t$ increases, opinions are brought near this value.


Figure 12 shows the optimal dynamics and controls using the same target and initial opinions as for Fig. 7 but with a complete initial network. Here, the situation is reversed, as the majority of positive controls are now replaced with zero controls and the pattern of negative controls matches that in Fig. 7.

FIG. 12.

Results of the FBS to find the optimal controls under (28), using edge weight dynamics of the form (4) with s and given by (26). The same initial opinions are used as for the examples in Figs. 3 and 4. The initial network is complete. (a) Opinion dynamics under the optimal controls for the cost functional (28). Opinion trajectories are colored according to individuals' initial opinions. The target opinion $x^\ast$ is indicated by a black dashed line. (b) Snapshots of the optimal control at times $t = 0, 2, 4, 6$. The horizontal axis gives the opinions $x_i$ for $i \in \Lambda$, the vertical axis gives the opinions $x_j$ for $j \in \Lambda$, and points show the control $u_{ij}$. Blue stars show positive controls, where edges are created/strengthened. Red triangles show negative controls, where edges are being weakened. Gray circles indicate no control. Dashed lines show the location of the target opinion $x^\ast$; hence, as $t$ increases, opinions are brought near this value.


Figure 13 shows the optimal dynamics and controls using a different target, different random initial opinions, and a new Watts–Strogatz small-world initial network.32 Although the locations of positive and negative controls differ from those in Fig. 7, there is a similar overall pattern: stripes of positive controls, whose width matches the interaction function, bringing individuals toward $x^\ast$; many zero controls, especially toward the end of the dynamics; and negative controls that do not form a consistent pattern.

FIG. 13.

Results of the FBS to find the optimal controls under (28), using edge weight dynamics of the form (4) with s and given by (26). Initial opinions were sampled uniformly at random in $[-1, 1]$. The initial network is a Watts–Strogatz random graph. (a) Opinion dynamics under the optimal controls for the cost functional (28). Opinion trajectories are colored according to individuals' initial opinions. The target opinion $x^\ast$ is indicated by a black dashed line. (b) Snapshots of the optimal control at times $t = 0, 2, 4, 6$. The horizontal axis gives the opinions $x_i$ for $i \in \Lambda$, the vertical axis gives the opinions $x_j$ for $j \in \Lambda$, and points show the control $u_{ij}$. Blue stars show positive controls, where edges are created/strengthened. Red triangles show negative controls, where edges are being weakened. Gray circles indicate no control. Dashed lines show the location of the target opinion $x^\ast$; hence, as $t$ increases, opinions are brought near this value.

1. M. H. DeGroot, "Reaching a consensus," J. Am. Stat. Assoc. 69(345), 118–121 (1974).
2. R. Hegselmann, U. Krause et al., "Opinion dynamics and bounded confidence models, analysis, and simulation," J. Artif. Soc. Soc. Simul. 5(3), 1 (2002).
3. G. Deffuant, D. Neau, F. Amblard, and G. Weisbuch, "Mixing beliefs among interacting agents," Adv. Complex Syst. 3(01n04), 87–98 (2000).
4. S. Motsch and E. Tadmor, "Heterophilious dynamics enhances consensus," SIAM Rev. 56(4), 577–621 (2014).
5. B. Piccoli and N. Pouradier Duteil, "Control of collective dynamics with time-varying weights," in Recent Advances in Kinetic Equations and Applications (Springer, 2021), pp. 289–308.
6. A. Nugent, S. N. Gomes, and M.-T. Wolfram, "On evolving network models and their influence on opinion formation," Phys. D: Nonlinear Phenom. 456, 133914 (2023).
7. A. Nugent, S. N. Gomes, and M.-T. Wolfram, "Bridging the gap between agent based models and continuous opinion dynamics," arXiv:2311.03039 (2023).
8. R. Berner, T. Gross, C. Kuehn, J. Kurths, and S. Yanchuk, "Adaptive dynamical networks," arXiv:2304.05652 (2023).
9. G. Albi, M. Herty, and L. Pareschi, "Kinetic description of optimal control problems and applications to opinion consensus," arXiv:1401.7798 (2014).
10. C. Qian, J. Cao, J. Lu, and J. Kurths, "Adaptive bridge control strategy for opinion evolution on social networks," Chaos 21(2), 025116 (2011).
11. G. Albi, L. Pareschi, and M. Zanella, "On the optimal control of opinion dynamics on evolving networks," in System Modeling and Optimization: 27th IFIP TC 7 Conference, CSMO 2015, Sophia Antipolis, France, June 29–July 3, 2015, Revised Selected Papers 27 (Springer, 2016), pp. 58–67.
12. F. Liu, D. Xue, S. Hirche, and M. Buss, "Polarizability, consensusability, and neutralizability of opinion dynamics on cooperative networks," IEEE Trans. Autom. Control 64(8), 3339–3346 (2018).
13. M. Herty and D. Kalise, "Suboptimal nonlinear feedback control laws for collective dynamics," in 2018 IEEE 14th International Conference on Control and Automation (ICCA) (IEEE, 2018), pp. 556–561.
14. M. DeBuse and S. Warnick, "Automatic control of opinion dynamics in social networks," in 2023 IEEE Conference on Control Technology and Applications (CCTA) (IEEE, 2023), pp. 180–185.
15. J. Lorenz, "Continuous opinion dynamics under bounded confidence: A survey," Int. J. Mod. Phys. C 18(12), 1819–1838 (2007).
16. F. Amblard and G. Deffuant, "The role of network topology on extremism propagation with the relative agreement opinion dynamics," Phys. A: Stat. Mech. Appl. 343, 725–738 (2004).
17. M. Gabbay, "The effects of nonlinear interactions and network structure in small group opinion dynamics," Phys. A: Stat. Mech. Appl. 378(1), 118–126 (2007).
18. R. Berner, E. Schöll, and S. Yanchuk, "Multiclusters in networks of adaptively coupled phase oscillators," SIAM J. Appl. Dyn. Syst. 18(4), 2227–2266 (2019).
19. P. De Lellis, M. di Bernardo, and F. Garofalo, "Decentralized adaptive control for synchronization and consensus of complex networks," in Modelling, Estimation and Control of Networked Complex Systems (Springer, 2009), pp. 27–42.
20. P. DeLellis, M. Di Bernardo, T. E. Gorochowski, and G. Russo, "Synchronization and control of complex networks via contraction, adaptation and evolution," IEEE Circuits Syst. Mag. 10(3), 64–82 (2010).
21. I. Rajapakse, M. Groudine, and M. Mesbahi, "Dynamics and control of state-dependent networks for probing genomic organization," Proc. Natl. Acad. Sci. U.S.A. 108(42), 17257–17262 (2011).
22. P. De Lellis, M. Di Bernardo, F. Sorrentino, and A. Tierno, "Adaptive synchronization of complex networks," Int. J. Comput. Math. 85(8), 1189–1218 (2008).
23. W. Yu, P. DeLellis, G. Chen, M. Di Bernardo, and J. Kurths, "Distributed adaptive control of synchronization in complex networks," IEEE Trans. Autom. Control 57(8), 2153–2158 (2012).
24. V. D. Blondel, J. M. Hendrickx, and J. N. Tsitsiklis, "Continuous-time average-preserving opinion dynamics with opinion-dependent communications," SIAM J. Control Optim. 48(8), 5214–5240 (2010).
26. M. A. Gkogkas, C. Kuehn, and C. Xu, "Continuum limits for adaptive network dynamics," Commun. Math. Sci. 21(1), 83–106 (2023).
27. R. Hegselmann and U. Krause, "Opinion dynamics under the influence of radical groups, charismatic leaders, and other constant signals: A simple unifying model," Netw. Heterog. Media 10(3), 477 (2015).
28. S. Wongkaew, M. Caponigro, and A. Borzì, "On the control through leadership of the Hegselmann–Krause opinion formation model," Math. Models Methods Appl. Sci. 25(03), 565–585 (2015).
29. P. Erdős, A. Rényi et al., "On the evolution of random graphs," Publ. Math. Inst. Hung. Acad. Sci. 5(1), 17–60 (1960).
30. L. C. Evans, see http://math.berkeley.edu/~evans/control.course.pdf for "An introduction to mathematical optimal control theory version 0.2" (1983).
31. N. Ayi and N. Pouradier Duteil, "Large-population limits of non-exchangeable particle systems," arXiv:2401.07748 (2024).
32. D. J. Watts and S. H. Strogatz, "Collective dynamics of 'small-world' networks," Nature 393(6684), 440–442 (1998).
Published open access through an agreement with University of Warwick Mathematics Institute