In this paper, we propose a novel control approach for opinion dynamics on evolving networks. The controls modify the strength of connections in the network, rather than influencing opinions directly, with the overall goal of steering the population toward a target opinion. This requires that the social network remains sufficiently connected, that the population does not break into separate opinion clusters, and that the target opinion remains accessible. We present several approaches to address these challenges, considering questions of controllability, instantaneous control, and optimal control. Each of these approaches provides a different view on the complex relationship between opinion and network dynamics and raises interesting questions for future research.
In this paper, we introduce a novel type of control problem for opinion formation on adaptive networks, in which the control variable affects the evolution of the underlying network rather than individuals’ opinions. We present various control strategies; analyze under which conditions opinions can or cannot be steered toward a given target; and then corroborate and extend our analytical results with computational experiments and a study of optimal controls. We highlight the advantages and disadvantages of each approach and propose several directions for future research.
I. INTRODUCTION
Fundamental models of opinion formation, such as those of DeGroot,1 Hegselmann and Krause,2 and Deffuant et al.,3 have been repeatedly extended and adapted to create the rich and varied literature of modern opinion dynamics. Many such models extend the idea of bounded confidence to that of a more general, typically non-linear, interaction function.4–6 The interaction function describes how the distance between individuals’ opinions affects whether they interact and the weight this interaction is given; alternatively, it can be interpreted as the probability of those individuals interacting over some short time period.7 This interaction function creates an “instantaneous” network of potential interactions, based entirely on individuals’ current opinions, that is sometimes referred to as a state-dependent network.8
Various authors have considered the question of controlling opinion dynamics of this form (sometimes in the more general setting of interacting particle systems) by introducing a control variable that directly affects the evolution of each individual’s opinion.9–13 In the context of opinion dynamics, this could represent an external effect such as advertising. The typical goal of such a control is to bring the population to consensus or, more specifically, to consensus at a given target opinion. Because such controls influence opinions directly, they can be highly effective in guiding the population toward a particular target opinion, and so the question of optimal control is often considered. Some works have also studied the impact of introducing “strategic agents” whose opinions are controlled.14
To make opinion dynamics more representative of the real world, it is also common to include a social network,15–17 which may be static, evolve independently, or evolve coupled to individuals’ opinions. It is important to note that the edge-weights of this social network are introduced as additional state variables and so, unlike the interaction function, are not determined by individuals’ current opinions. In order to interact, individuals require a non-zero connection in the social network and a non-zero interaction function. Recently, ideas around evolving networks, such as those considered for the Kuramoto model of coupled oscillators,18 have also been applied to opinion dynamics.6 Here, the evolution of the network is used to model changing social relationships, which are affected by the history of individuals’ interactions. The goal of this paper is to study the potential impact of a control applied to the evolution of the social network. That is, controls gradually alter the extent to which pairs of individuals interact, rather than directly affecting their opinions. Such a control must work within the range of the population’s current opinions, while also accounting for the non-linear interaction function, the possibility of the population breaking into clusters, and the impact of the initial network structure.
A related concept of “edge-based” controls, also referred to as a “decentralized adaptive strategy,” has previously been addressed with regard to other interacting particle systems,19–21 such as the Kuramoto model22 and Chua’s circuits.23 A recent review of adaptive dynamical networks, including discussion of these works, can be found in Ref. 8 (note that, in this context, the term adaptive networks is also used to refer to state-dependent networks, such as those generated by the interaction function). The focus in many prior works has been on providing equations for the evolution of edge weights and showing that these guarantee the stability of the fully synchronized state. This is somewhat different to the setup considered in this paper, in which a control variable will be introduced for each edge weight, and the goal is to determine how these control variables should be set over time to achieve consensus at a particular target. This is closer to the setting considered by Piccoli and Duteil in Ref. 5, in which each individual has a mass representing their influence in the population and control variables are introduced to affect the evolution of these masses. This can be considered as a specific case of network control, in which all edges connecting to a given individual are identical. Here, we consider more general network structures and adapt the network dynamics considered in Ref. 6 by replacing the appearance of the interaction function in the network dynamics with these new control variables.
The remainder of the paper is structured as follows. Section II describes precisely the mathematical setting and motivates the form of control we will consider. This system is then analyzed in Sec. III in three ways: Sec. III A presents several analytic results about the system’s controllability; Sec. III B studies the performance of a candidate control, inspired by the instantaneous control in Piccoli and Duteil;5 and Sec. III C attempts to improve upon this by considering the question of optimal control. Finally, Sec. IV concludes with possible future research directions. Several proofs and additional numerical examples are provided in Appendixes A–D.
II. MODEL FORMULATION
We begin by presenting a general mathematical model for opinion dynamics on an evolving network. We consider a population of individuals and define . Fix initial opinions and the edge weights of an initial network . We assume for all and , meaning that individuals always give their own opinion maximal weight. For clarity, we assume individuals are labeled such that . Let denote the opinion of individual at time and denote the weight of the edge between individuals and at time .
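For concreteness, the following minimal sketch implements one plausible form of the coupled opinion–weight system, assuming weighted-average opinion dynamics in the style of Ref. 6 and weight dynamics driven directly by a control term; the right-hand sides, function names, and clipping of weights to [0, 1] are illustrative assumptions rather than the exact system (1).

```python
import numpy as np

def opinion_rhs(x, w, phi):
    """Weighted-average opinion dynamics (assumed form, cf. Ref. 6):
    dx_i/dt = (1/N) * sum_j w_ij * phi(|x_j - x_i|) * (x_j - x_i)."""
    N = len(x)
    diff = x[None, :] - x[:, None]          # diff[i, j] = x_j - x_i
    return (w * phi(np.abs(diff)) * diff).sum(axis=1) / N

def weight_rhs(w, u):
    """Assumed controlled weight dynamics: dw_ij/dt = u_ij, with the diagonal
    held fixed so each individual keeps maximal self-weight."""
    dw = u.copy()
    np.fill_diagonal(dw, 0.0)
    return dw

def step(x, w, u, phi, dt):
    """One forward-Euler step of the coupled system, clipping weights to [0, 1]."""
    x_new = x + dt * opinion_rhs(x, w, phi)
    w_new = np.clip(w + dt * weight_rhs(w, u), 0.0, 1.0)
    return x_new, w_new
```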
To ensure that the system (1) is well posed and that opinions and weights remain in the desired intervals, we introduce the following assumptions on the interaction function and controls.
The interaction function satisfies
for all .
.
is Lipschitz continuous, with Lipschitz constant .
We call an interaction function a smoothed bounded confidence function with radius , if it satisfies Assumption 1 and there exists such that for all .
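For illustration, one way to construct such a function is to smooth the indicator of the confidence interval; the clamped cubic ramp below is an assumed example (with illustrative radius R and smoothing width delta), not necessarily the function used in the experiments.

```python
import numpy as np

def smoothed_bounded_confidence(r, R=0.1, delta=0.02):
    """Interaction function equal to 1 for r <= R - delta, 0 for r >= R, and
    smoothly (C^1) decreasing in between via a cubic smoothstep.
    R and delta are illustrative parameter choices."""
    r = np.asarray(r, dtype=float)
    s = np.clip((R - r) / delta, 0.0, 1.0)   # 1 well inside the radius, 0 outside
    return s * s * (3.0 - 2.0 * s)           # cubic smoothstep on the ramp
```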
We also introduce assumptions on the form of the control.
The control function satisfies
for all , so that edge weights remain constant if uncontrolled.
for all , so that edge weights remain non-negative.
for all , so that edge weights do not exceed .
is bounded and integrable.
This final assumption on the boundedness of highlights a key feature of this problem: edge weights change continuously in time with a finite speed, meaning edges cannot be switched on/off instantaneously. As a result, the maximal value of will play a major role in determining the controllability of (1). Note this does not mean that controls cannot be switched on/off instantaneously but rather that their effect is not instantaneous.
When Assumptions 1 and 2 hold, Proposition 3.1 from Ref. 6 ensures that and for all and .
To motivate our assumptions on , and , we first establish some basic conditions for consensus.
Assume that the population reaches consensus. Then, the population reaches consensus at a point if and only if for all .
Assume for all . As the population reaches consensus, , so . As , so . Therefore, for all , and we have consensus at .
Assume that the population reaches consensus at , but that there exists a time at which . As is increasing and is decreasing (see Proposition 2.1 in Ref. 6), we have that for all . If , then for all . Similarly, if , then for all . In both cases, this makes convergence of to impossible, giving a contradiction.
From Proposition II.1, it is clear that we will at least require to have any hope of reaching consensus at . This is in contrast to other control setups that affect opinions directly or via leaders, in which consensus could, in principle, be achieved outside the initial range of normal agents’ opinions.27,28 Of course, we will also require that consensus itself is possible. We introduce the following definitions to help clarify when this is not the case.
By Assumption 1, . If is a smoothed bounded confidence function with radius , we will also have that .
For a given , an opinion profile is called an -chain if for all .
If is decreasing and there exists such that , then the population will not reach consensus.
See, for example, Proposition 3.2 in Ref. 6. The fundamental idea is that once a gap in the opinion profile of size bigger than appears, the closest individuals on either side of this gap will be unable to move closer than as they will encounter a root of .
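Numerically, the -chain condition amounts to checking the consecutive gaps in the sorted opinion profile; a minimal sketch follows, where the threshold eps is the chain parameter.

```python
import numpy as np

def is_epsilon_chain(x, eps):
    """Return True if consecutive (sorted) opinions are never more than eps apart,
    i.e. no gap of size > eps splits the population."""
    gaps = np.diff(np.sort(np.asarray(x, dtype=float)))
    return bool(np.all(gaps <= eps))

# Example: a profile with a gap of 0.3 between 0.12 and 0.42 is not a 0.1-chain.
print(is_epsilon_chain([0.0, 0.05, 0.12, 0.42], 0.1))  # False
```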
Motivated by Propositions II.1 and II.2, we introduce the following assumptions.
The initial opinions and consensus target satisfy
.
is an -chain.
These assumptions ensure that we are operating in an environment in which the opinions and networks are well defined and control to consensus may be feasible. The question of which consensus targets are achievable then depends on the initial network and the speed at which controls can alter the network structure.
III. MODEL ANALYSIS
In this section, we consider controls of the form (4) for given functions and . We assume that , so that satisfies Assumption 2.
We first show which consensus targets are guaranteed to be achievable when using an edge-creating control and an edge-removing control , then investigate the performance of a candidate instantaneous control, and, finally, address questions of optimal control for example choices of these functions.
A. Controllability
The first result of this section, Proposition III.1, shows that the model setup and assumptions described in Sec. II do not guarantee controllability; indeed, we can always find situations in which the system is not controllable.
Fix some . Let the interaction function satisfy Assumption 1 and the control function satisfy Assumption 2. Then, for any , there exist initial opinions and initial edge weights satisfying Assumption 3 for which control to consensus at is not possible.
Without loss of generality, assume . We will construct a range of and values for which control is not possible. Take for some small to be determined. Assume that is sufficiently small that . Let and take for all . This gives a setup in which is “far away” from the rest of the population, while maintaining the required -chain. Take for all . The other entries of may take any value in . These initial conditions and satisfy Assumption 3.
We now show that if is chosen to be sufficiently small, there exists a time such that for all , hence by Proposition II.1 consensus at is impossible. This is achieved using bounds on , and .
As can be chosen sufficiently small that , there exists a constant such that for all . As , for all , for all and .
Hence, it is not possible to provide a control function satisfying Assumption 2, and so not possible to provide a control strategy to determine that guarantees controllability for all . Instead, we consider the initial conditions and , as well as the consensus target , to be fixed and ask when the system can be controlled for these fixed values.
We begin by considering the simple case in which for all , and the control acts only to create new edges. Recall that is always equal to for all .
A network is called empty if for all and non-empty if there exists such that .
The approach of the proof is to construct such a control. We proceed in three steps, first addressing the simplest case of individuals. In the second step, we reduce the general case to the case by identifying the closest individuals to above and below and gathering the rest of the population toward these two individuals. Finally, in the third step, we apply the case once this gathering is complete.
First, consider and fix a value of . By Lemma A.1 in Appendix A, for a fixed , the function is continuous. As , and . Hence, by the intermediate value theorem, for any given there exists such that the system reaches consensus at . By an analogous argument, the same can be achieved for . This proves the claim in the case .
Diagram for step 2 of the proof of Proposition III.2. In this step, individuals are sequentially gathered toward the central individuals and whose opinions are the closest to above and below, respectively.
Diagram for step 3 of the proof of Proposition III.2. In this step, the case is used to control and to . All other individuals are connected to one of these two and so follow accordingly.
Example of the control method described in Proposition III.2. Beginning with an empty network, edges are created to gather the population closer to individuals near the target opinion . The solution for the case is then used to ensure that those closest to reach consensus at exactly this point. Opinion trajectories are colored according to individuals’ initial opinions. The target opinion is indicated by a black dashed line. The inset plot shows the final approach to consensus at .
If there is an individual with , then instead gather all individuals toward , while leaving individual at their initial position (this is the same as setting ). As the network is initially empty, individual will not move if no controls are applied to their weights. Here, there is no need to apply the case.
Figure 3 shows a numerical example of the control described in Proposition III.2 applied with . We use a smoothed bounded confidence interaction function with and . A population of size is used with each chosen uniformly at random in the interval . Individuals and are identified and the problem is first solved for these individuals. This is done by repeatedly testing values of and until the error between the consensus value achieved and is below a given threshold (as the ODE is solved numerically with a fixed time step over a finite time interval, the exact optimal values cannot be used). The gathering approach described above is then implemented until the opinion diameter is sufficiently small, after which the case is used to bring the population to consensus. Note that, since the initial network is empty, if no control were applied, no individuals would change their opinions.
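The iterative search for suitable switching values described above can be implemented, for instance, as a bisection justified by the intermediate value theorem; in the sketch below, simulate_consensus is a hypothetical placeholder that integrates the dynamics with the control switched at a given time and returns the resulting consensus value.

```python
def find_switching_time(simulate_consensus, target, lo=0.0, hi=50.0, tol=1e-6):
    """Bisection for a switching time tau such that the consensus value reached,
    viewed as a continuous map tau -> consensus(tau), hits `target`.
    `simulate_consensus(tau)` is a hypothetical placeholder that integrates the
    dynamics with the control switched at time tau and returns the long-time
    consensus opinion; a sign change over [lo, hi] is assumed."""
    f_lo = simulate_consensus(lo) - target
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f_mid = simulate_consensus(mid) - target
        if abs(f_mid) < tol:
            return mid
        # Keep the sub-interval on which the sign change persists.
        if f_lo * f_mid <= 0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)
```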
When considering non-empty initial networks, Propositions II.1 and II.2 show that it is vital that the control can act sufficiently quickly to prevent leaving the interval or gaps of size appearing in the opinion profile. The following lemma provides a useful bound on the distance an individual can travel if their edge weights are reduced exponentially quickly after some time .
Lemma III.1 shows that if is sufficiently large, an individual’s opinion can be trapped within a small interval around its current position. This can be used to prevent individuals breaking an -chain or bound individuals above or below , and so will prove crucial in showing controllability.
The approach of this proof is similar to Proposition III.2. Again we proceed in several steps. In step 1, we identify individuals and whose initial opinions are the closest below/above and gather all other individuals toward or . This gives an opinion radius below , which is further reduced below by temporarily connecting individuals and . This step requires two cases, as the setup is slightly different if an individual has an initial opinion of exactly . In step 2, control to consensus at is managed by specifying controls for the central individuals and . A key difference from Proposition III.2 is that any non-zero initial edges cannot be removed in finite time, so Lemma III.1 must be applied to bound this potential error. The applications of Lemma III.1 provide the lower bound (14) on .
We now describe the process by which individuals are gathered toward a pair of individuals near .
Case 1: for all .
Here, we can identify individuals and as in Proposition III.2. As before, we use to sequentially gather more extreme individuals toward and . Here, and will not be fixed but, by setting all other controls to , (15) and (16) ensure they do not cross or move in a way that could break the -chain. The gathering process described in Proposition III.2, combined with the exponentially fast removal of all other edges, also ensures that no other pair of individuals breaks the -chain. We say that this gathering process is complete when all individuals are within a distance of their target (either or ). As for all , we will have and so for all . Hence, by choosing sufficiently small, there exists a time at which , and .
Case 2: There exists such that .
In this case, individuals and should be chosen from those with . This raises the possibility that . As such, once the gathering toward and is complete, both individuals may need to be temporarily connected to to bring the opinion diameter below and then below , after which point individual could simply be guided toward either or . This can be done in much the same way as in the previous case, by temporarily connecting either , or , or both, to until the desired radius is reached (depending on the distances of and from ). Indeed, (14) already ensures that is sufficiently large for this to be done without breaking the -chain and without and crossing . The situation is essentially the same if there are multiple individuals whose initial condition is exactly .
Step 2: As in Proposition III.2, the problem is now reduced to ensuring convergence of and to , as the chain of connections is such that all other individuals’ opinions tend toward one of these. We begin from a time at which , and . Hence, from this time onward, for all . Unlike Proposition III.2, we cannot now simply reduce to the case , as individuals and may remain connected to others in the population, albeit with an exponentially decaying weight.
Pick some initial guess for the pair with both values in the time interval . If then fix . By Lemma A.2 in Appendix A, for a fixed , the function is continuous in . If is made extremely large ( ), then decreases toward and, if is sufficiently large, will be sufficiently below before time that . Hence, by the intermediate value theorem there is a value of for which . Due to the persistent (exponentially small) edge weights between individuals and and the rest of the population, this ideal value of will need to account for the dynamics of the whole system at . As such its exact value would be impractical to compute in most cases. However, using the continuity of , we have shown that such a value, and, therefore, such a control, exists.
If then use an analogous argument for a fixed .
In the case of an empty starting network, having for some simplifies the problem significantly. However, in the case of a non-empty starting network, such an may instead create additional difficulty as it may form a crucial link in the -chain but may repeatedly move above and below , making it in some sense unreliable as a point to control toward.
Figure 4 shows a numerical example of the control described in Theorem III.1 using the same setup as described for Fig. 3. The initial network is an Erdös–Rényi random graph with edge probability 29 with edge weights then given uniformly in the interval . As previously, suitable values of were identified by iteratively solving the dynamics (this required solving the complete dynamics rather than the case only). Note that we used and so , which is far below the value indicated in (14) (of approximately 1200), but in this case control is clearly still possible.
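A weighted Erdős–Rényi initial network of the kind used here can be generated as follows; the edge probability, the weight interval, and the symmetry of the weights are illustrative assumptions.

```python
import numpy as np

def weighted_erdos_renyi(N, p, w_min=0.0, w_max=1.0, rng=None):
    """Weighted Erdős–Rényi graph (assumed symmetric): each pair (i, j), i < j,
    is connected with probability p and, if connected, given a weight drawn
    uniformly from [w_min, w_max]. Diagonal entries are set to 1 so that each
    individual gives their own opinion maximal weight."""
    rng = np.random.default_rng(rng)
    w = np.zeros((N, N))
    iu = np.triu_indices(N, k=1)
    edges = rng.random(len(iu[0])) < p
    w[iu] = rng.uniform(w_min, w_max, size=len(iu[0])) * edges
    w = w + w.T                 # symmetrize
    np.fill_diagonal(w, 1.0)    # fixed self-weights
    return w
```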
Example of the control method described in Theorem III.1. Beginning with a non-empty network, edges are created to gather the population closer to individuals near the target opinion and removed to prevent crossing this value or splitting the population into multiple clusters. Those individuals closest to are then controlled to consensus precisely at this point. Opinion trajectories are colored according to individuals’ initial opinions. The target opinion is indicated by a black dashed line. The inset plot shows the final approach to consensus at .
One might reasonably expect that consensus at a desired point could be achieved without such drastic controls. The approach described in the proof effectively erases the initial network to ensure that the -chain is not broken and the target point remains inside the range of opinions, but it may well be the case that this is unnecessary and indeed, allowing more of the initial network to remain might encourage faster convergence to consensus. Therefore, having established that control is possible, we now aim to improve our control strategy.
B. Instantaneous control
We next consider an explicit candidate control strategy, inspired by the approach in Ref. 5. There, a model is analyzed in which each individual has an evolving, non-zero mass that determines their influence, with the total mass preserved across the population. Giving an individual mass is equivalent to setting for all [although the weight dynamics (1b) considered in this paper would not preserve the total mass]. In the setup in Ref. 5, the population always reaches consensus, so it is sufficient to control the location of this consensus. The more general network setting considered in this paper poses additional challenges, but a similar approach may still provide a viable control strategy.
If the population reaches consensus, then for all .
From each simulation, we ask two questions: has the population reached consensus, and has the population been guided to the consensus target?
To address the first question, we calculate the opinion diameter at the end of each simulation. For all simulations, the final opinion diameter was of the order , clearly indicating consensus. Results for each simulation, along with a local average, can be found in Fig. 9 in Appendix D. The local average near a consensus target is calculated using a weighted mean, with the weight given to a simulation result decaying exponentially with the distance between the consensus target in that simulation and .
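The local average described above can be computed as an exponentially weighted mean over the simulation results; the decay scale in the sketch below is an assumed value.

```python
import numpy as np

def local_average(targets, results, eval_points, scale=0.05):
    """Exponentially weighted local mean of `results` (one value per simulation),
    evaluated at each point in `eval_points`. Each simulation is weighted by a
    factor decaying exponentially with the distance between its consensus target
    and the evaluation point; `scale` is an assumed decay length."""
    targets = np.asarray(targets, dtype=float)
    results = np.asarray(results, dtype=float)
    averages = []
    for x_star in np.atleast_1d(eval_points):
        weights = np.exp(-np.abs(targets - x_star) / scale)
        averages.append(np.sum(weights * results) / np.sum(weights))
    return np.array(averages)
```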
The emergence of consensus is perhaps unsurprising. If then all edges connecting to individuals with opinions below will be increasing in weight. Specifically, the edge connecting the individual with the maximum opinion to the individual with the minimum opinion will be increasing in weight. As the interaction function is always positive, this will draw the maximum opinion down toward the minimum opinion. If then the opposite occurs, yet in either case the opinion diameter is shrinking due to this strengthening of edges. If then controls switch off, but we know from Ref. 6 that for an exponential interaction function on a fixed connected network, consensus is guaranteed. As it is highly likely that our randomly generated will be connected, we thus expect consensus regardless of the value of (although not necessarily at ).
Results of repeated tests of the instantaneous control (24) with . Each simulation uses uniformly random initial conditions and a weighted Erdös–Rényi random initial network and runs until opinions have reached a steady state. Each point shows the value of the maximum distance from , given by (27), at the end of the simulation. The black line shows the local average.
It is important to note that, although the results in Fig. 5 indicate that this control strategy is typically successful for , this success is not guaranteed. In fact, as previously noted, Proposition III.1 shows that there exist initial conditions and targets for which this control will fail. As the results presented in Fig. 5 were generated using random , , and , it is also worth noting that Proposition III.1 gives a range of initial conditions, occurring with strictly positive probability, for which the system is not controllable. Thus, if the experiments described were repeated a sufficiently large number of times, we would eventually see some instances with consensus targets in the interval for which the control is not successful.
Moreover, this failure cannot necessarily be remedied by increasing (the speed with which the control acts). As for all and , cannot take any arbitrary value in the opinion interval, even if the other edge weights could be changed instantaneously. This means that for certain initial conditions control to consensus at is not possible for any since, no matter how quickly the control acts, cannot be controlled to sufficiently quickly. We provide an example of this in Appendix C.
Having examined the performance of this control strategy for an exponential interaction function, we now consider a smoothed bounded confidence interaction function, under which obtaining consensus is significantly more challenging. Figure 6 shows that this control is incapable of achieving consensus under such an interaction function. This is due to the fact that the form of the control (24) does not include any information about the interaction function and hence cannot take into account the complex behaviors it may cause. For example, in the lower panel of Fig. 6 we observe a situation in which half the population is connected exclusively to individuals outside their confidence bound, meaning they do not interact and their opinions remain constant. Any previous success of this control strategy appears to be heavily reliant on the exponential interaction function allowing interactions at all distances and hence promoting consensus.
Example implementations of the instantaneous control (24) with a smoothed bounded confidence interaction function. Opinion trajectories are colored according to individuals’ initial opinions. This control fails for both the moderate target of (top panel) and the more extreme target of (bottom panel) as the population does not reach consensus. The target opinion is indicated by a black dashed line. The degree-weighted mean opinion is given by the solid black line. Note that in the top panel the dashed line is not visible as the degree-weighted mean opinion is almost immediately brought to the target.
Another way to address the clustering arising from the bounded confidence interaction function is to give the control more information about the resulting dynamics. In Sec. III C, we consider an optimal control problem for (1) that aims to remedy the failure observed in Fig. 6 and to improve upon the drastic approach in Theorem III.1 of removing the entire initial network.
C. Optimal control
and , meaning the original dynamics (1) are satisfied.
, , referred to as the adjoint dynamics.
and , the terminal conditions for and .
, referred to as the maximization principle for .
It is interesting to note that in Proposition III.2, Theorem III.1, and the optimal control setup (32), the full range of controls offered by (4) is not utilized. Instead, only a single value that creates edges and a single value that removes edges are used. This type of control, which switches between extreme values, is commonly known as a bang–bang control. Note that this does not mean edge weights always take integer values (indeed, it is often crucial that they do not), only that controls always act to their fullest extent.
To identify the optimal controls, we perform a forward–backward sweep (FBS) over (1) and (31), using a fourth-order Runge–Kutta numerical scheme, to iteratively improve the controls. Note that during the search for the optimal control we allow continuous controls, rather than limiting ourselves to bang–bang controls, but we observe convergence toward bang–bang controls of the form (32). We use an initial guess of , that is, we begin with no control.
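Schematically, the FBS alternates a forward solve of the state equations, a backward solve of the adjoint equations, and an update of the control from the maximization principle, with relaxation between iterations. The sketch below shows only this generic structure: state_rhs, adjoint_rhs, control_from_maximization, and terminal_adjoint are hypothetical placeholders standing in for (1), (31), (32), and the terminal conditions, and the relaxation parameter is an assumed choice.

```python
import numpy as np

def rk4_step(f, y, t, dt, *args):
    """Generic fourth-order Runge–Kutta step for dy/dt = f(t, y, *args)."""
    k1 = f(t, y, *args)
    k2 = f(t + dt / 2, y + dt * k1 / 2, *args)
    k3 = f(t + dt / 2, y + dt * k2 / 2, *args)
    k4 = f(t + dt, y + dt * k3, *args)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def forward_backward_sweep(y0, u, ts, state_rhs, adjoint_rhs,
                           control_from_maximization, terminal_adjoint,
                           relax=0.5, n_iter=100):
    """Iteratively improve the control u (one control array per time point)."""
    dt = ts[1] - ts[0]
    for _ in range(n_iter):
        # Forward pass: integrate the state with the current control.
        y = [y0]
        for k in range(len(ts) - 1):
            y.append(rk4_step(state_rhs, y[-1], ts[k], dt, u[k]))
        # Backward pass: integrate the adjoint from its terminal condition.
        lam = [None] * len(ts)
        lam[-1] = terminal_adjoint(y[-1])
        for k in range(len(ts) - 2, -1, -1):
            lam[k] = rk4_step(adjoint_rhs, lam[k + 1], ts[k + 1], -dt, y[k + 1], u[k])
        # Control update from the maximization principle, with relaxation.
        u_new = np.array([control_from_maximization(y[k], lam[k])
                          for k in range(len(ts))])
        u = relax * u_new + (1 - relax) * u
    return u
```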
In the following, we use the same initial conditions as for Fig. 4 and again take . Additional examples, using different initial conditions, can be found in Appendix D. Our overall aim is to use these examples to make inferences about the general nature of optimal control strategies for this system.
The trajectory of opinions under the optimal control is shown in Fig. 7(a). We observe that the trajectory is quite different from that in Fig. 4, as individuals do not wait to be “gathered” toward . This leads to significantly faster dynamics that appear to reach consensus before the final time , whereas the implementation of the method described in Theorem III.1 does not do so until approximately . Another feature of the dynamics in Fig. 7(a) is that, due to the controls, individuals’ opinions rarely cross over. This may help in maintaining the -chain and keeping the target opinion within the opinion interval.
Results of the FBS to find the optimal controls under (28), using edge weight dynamics of the form (4) with and given by (26). The same initial conditions were used as for the examples in Figs. 3 and 4. (a) Opinion dynamics under the optimal controls for the cost functional (28). Opinion trajectories are colored according to individuals’ initial opinions. The target opinion is indicated by a black dashed line. (b) Snapshots of the optimal control at times . The horizontal axis gives the opinions for , the vertical axis gives the opinions for and points show the control . Blue stars show positive controls, where edges are created/strengthened. Red triangles show negative controls, where edges are being weakened. Gray circles indicate no control. Dashed lines show the location of the target opinion , hence as increases, opinions are brought near this value.
Figure 7(b) shows the optimal controls at four timepoints ( ). In each panel, we show the value of each in the following way: a point is placed at , that is the horizontal axis corresponds to the opinion of individual and the vertical axis to the opinion of individual , and this point is colored according to the value of . As our search for the optimal control converges toward a bang–bang control, all control values lie at (or extremely close to) , or , hence the values have been rounded so that a different marker may be used for each value. appears in gray circles, in blue stars and in red triangles.
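A snapshot of the kind shown in Fig. 7(b) can be produced by scattering the pairwise opinions and choosing markers according to the (rounded) control values; the rounding tolerance and marker styles below are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_control_snapshot(x, u, x_star, tol=1e-3):
    """Scatter each pair (x_i, x_j) with a marker determined by the sign of the
    (approximately bang-bang) control u_ij: positive (edge created/strengthened),
    negative (edge weakened), or zero."""
    x, u = np.asarray(x, dtype=float), np.asarray(u, dtype=float)
    N = len(x)
    i, j = np.meshgrid(range(N), range(N), indexing="ij")
    off = i != j                                   # ignore self-weights
    xi, xj, uij = x[i[off]], x[j[off]], u[off]
    plt.scatter(xi[uij > tol], xj[uij > tol], marker="*", color="tab:blue", label="u > 0")
    plt.scatter(xi[uij < -tol], xj[uij < -tol], marker="v", color="tab:red", label="u < 0")
    plt.scatter(xi[np.abs(uij) <= tol], xj[np.abs(uij) <= tol],
                marker="o", color="0.7", label="u = 0")
    plt.axvline(x_star, ls="--", c="k")            # target opinion
    plt.axhline(x_star, ls="--", c="k")
    plt.xlabel("opinion of individual i")
    plt.ylabel("opinion of individual j")
    plt.legend()
```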
At time , we see a definite striped pattern in the controls, showing where edges are being created or strengthened. The width and position of this stripe indicate that edges are strengthened only when pairs of individuals lie within a distance and so could potentially interact. For the most part, this stripe lies above the diagonal, indicating that individuals are being connected to those with a higher opinion, which would bring them up toward . When the opposite occurs. There are some controls that do not follow this rule, as well as some “missing” points where they might otherwise be expected, suggesting that this typical behavior is not the only consideration. The distribution of controls, showing where edges are being removed, appears to be much more uniform, making the motivation behind removing such edges unclear.
By time , the opinion diameter has reduced and the behavior of the controls has changed. The stripe of controls is still present but has reduced in width as many of the useful edges will have already been established. For , the majority of controls are by now set to zero. There is a new cluster of points in the (approximate) region . The purpose of this appears to be to prevent these individuals with from increasing above , by removing edges connecting them to individuals with . While it is beneficial to connect individuals to those with higher opinions, in order to bring them closer to , the control cannot afford to “overshoot” and encourage individuals to cross .
Moving to time , we see that almost all controls are now set to zero. A small stripe of controls remains, and a new cluster of controls has appeared, this time to prevent individuals above from moving below . By essentially all controls are set to zero (the very last control switches off at time 7.46). This switching off of controls before the population has neared consensus is made possible by the nature of the forward–backward sweep, in which the control is effectively given knowledge of the future dynamics of the system and can thus adjust weights to account for this. This mirrors the usage of the intermediate value theorem in Proposition III.2 and Theorem III.1, where it is shown that the system can be controlled by choosing the correct time to switch controls on/off, but that knowledge of the full future dynamics is needed to identify these times. In a similar way, the optimal control can iteratively update earlier controls to account for the full dynamics, allowing controls to be switched off well before consensus.
Appendix D contains further examples using the same initial opinions with an empty initial network and with a complete initial network, as well as an example using a different target, different initial opinions, and a Watts–Strogatz random network. In all these examples, similar qualitative behaviors are observed. This provides some support for using examples of optimal control to learn more about what strategies are most effective in general, with the hope that this may pave the way to future analytic results concerning controllability and the optimality of such controls.
It is clear from Fig. 7(b) and the examples in Appendix D that the optimal control in this setup is not sparse, meaning that many different edges are controlled throughout the dynamics. In future work, we will consider several modifications to the cost function (28), including methods to encourage a sparse control or penalize deviations from the initial network. The question then shifts from how close the population can be brought to , to how much/little control is needed to achieve this.
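As a purely generic illustration (the functional (28) itself is not reproduced here, and the form below is an assumption rather than the one used in this paper), a sparsity-promoting modification could take the form
\[
J_\gamma(u) \;=\; J(u) \;+\; \gamma \int_0^T \sum_{i \neq j} \lvert u_{ij}(t) \rvert \, \mathrm{d}t, \qquad \gamma > 0,
\]
where $J(u)$ denotes the original cost and the $L^1$-type penalty encourages many of the controls $u_{ij}$ to remain exactly zero, in contrast to a quadratic penalty, which merely shrinks them.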
IV. CONCLUSION
The control problem we have discussed poses several interesting challenges. The difficulty of guaranteeing consensus in opinion formation is exacerbated by the problem of keeping the target point inside the range of current opinions. Controllability can be proven under some rather strong assumptions about the range and speed of controls, yet we observe that successful controls can be found even when these assumptions do not hold. The examples of optimal controls point toward a promising alternative strategy for further theoretical results, as well as the possibility of searching for sparse controls.
The explicit strategy (24) arising from instantaneous control of showed mixed results, but further work is required to identify precisely where and why its failures occur. One possibility, suggested by the success of optimal control and nature of the controllability results, is that information about the current state only is not sufficient. This raises the question of how much knowledge of the future dynamics a control requires in order to be effective in this setting.
Recent interest in the extension of mean-field limits to fixed and adaptive networks26,31 also raises the possibility of moving this type of control to the partial differential equation setting. Network control could also be considered at the finer scale of a stochastic agent-based model. Indeed, the type of network control considered here offers many interesting questions about the coupling between opinion and network dynamics and the possibility of successfully influencing their outcome.
ACKNOWLEDGMENTS
A.N. was supported by the Engineering and Physical Sciences Research Council through the Mathematics of Systems II Centre for Doctoral Training at the University of Warwick (Reference No. EP/S022244/1). M.T.W. acknowledges partial support from the EPSRC Small Grant EPSRC No. EP/X010503/1 and Royal Society Grant for International Exchanges Ref. IES/R3/213113. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC-BY) license to any Author Accepted Manuscript version arising from this submission.
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
Author Contributions
A. Nugent: Conceptualization (equal); Data curation (equal); Formal analysis (lead); Investigation (lead); Methodology (equal); Visualization (equal); Writing – original draft (lead); Writing – review & editing (equal). S. N. Gomes: Conceptualization (equal); Funding acquisition (equal); Methodology (equal); Project administration (lead); Supervision (equal); Writing – original draft (supporting); Writing – review & editing (equal). M. T. Wolfram: Conceptualization (equal); Methodology (equal); Project administration (supporting); Supervision (equal); Writing – original draft (supporting); Writing – review & editing (equal).
DATA AVAILABILITY
The data that support the findings of this study are available within the article. Data sharing is not applicable to this article as no new data were created or analyzed in this study.
APPENDIX A: PROOFS
Fix some , then the function for defined by (9) in Proposition III.2 is continuous in .
Fix some , as defined in Theorem III.1. Then, the function for , defined by (19), is continuous in .
As weights cannot be driven to exactly zero, only made exponentially small, we do not look specifically at the location of , but instead consider the behavior of the entire ODE system. We show that making a small change to , and, therefore, a small change to the weight has a correspondingly small impact on the location of as a whole, and thus on the location of .
APPENDIX B: MAXIMIZATION PRINCIPLE
Recall that , , and . We aim to show that under these conditions, the maximum of over is never at , hence must be in the set . We, therefore, assume that there exists satisfying the above conditions for which the maximum does indeed lie at and aim to find a contradiction.
However, if then , so the first pair of inequalities is not possible. Also, if then , so the second pair of inequalities is not possible.
Overall, this means the maximum of over is always in the set . Comparing the values of gives (32).
APPENDIX C: ADDITIONAL EXAMPLES OF INSTANTANEOUS CONTROL
Figure 8 demonstrates examples in which the control strategy (24) succeeds in guiding the population to consensus at a moderate value of , but fails for the more extreme value . Both examples use the setup described in Sec. III B but with identical initial conditions.
Figure 9 shows the final opinion diameter in each of the 10 000 simulations described in Sec. III B. In all cases, the population can be considered to have reached consensus as the opinion diameter is of the order .
Example implementations of the instantaneous control (24) with exponential interaction function (25). This control succeeds for the moderate target of (top panel) but fails for the more extreme target of (bottom panel). Opinion trajectories are colored according to individuals’ initial opinions. The target opinion is indicated by a black dashed line. The degree-weighted mean opinion is given by the solid black line. Note that in the top panel the dashed line is not visible as the degree-weighted mean opinion is almost immediately brought to the target.
Results of repeated tests of the instantaneous control (24) with . Each simulation uses uniformly random initial conditions and a weighted Erdös–Rényi random initial network and runs until opinions have reached a steady state. Each point shows the opinion diameter at the end of the simulation. The black line shows a local average.
Example trajectory demonstrating that, for certain initial conditions, control to is impossible regardless of the speed of controls ( ). Opinion trajectories are shown in red. The target opinion is indicated by a black dashed line. The degree-weighted mean opinion is given by the solid black line and is constant.
As the degree-weighted mean opinion (shown by a solid black line in Fig. 10) does not intersect any individual’s opinion trajectory or the target , for all and . As a result, all weights remain constant throughout the dynamics as they begin at the correct steady state. Hence, the speed of controls is not significant, as the edge weights never change. Note that the weights begin in a steady state of (24) even though (24) is a strategy designed to minimize the distance between the degree-weighted mean opinion and the target; here, this distance remains constant. This is a result of setting for all and , which restricts the possible values of . This represents the idea that each individual always gives their own opinion maximal weight, and that this weight cannot be affected by any control.
At time , the individual with the lowest opinion crosses , meaning that is outside the opinion interval and thus control to consensus at is impossible. Hence, for certain initial conditions and consensus targets, control to consensus using the instantaneous control (24) is not possible for any value of .
APPENDIX D: ADDITIONAL EXAMPLES OF OPTIMAL CONTROL
In this appendix, we show several additional examples of the optimal control problem described in Sec. III C. The figures are presented in the same format as Fig. 7: the top panel shows the optimal dynamics while the lower panels show several snapshots of the optimal controls.
Figure 11 shows the optimal dynamics and controls using the same target and initial opinions as for Fig. 7 but with an empty initial network. As a result, the majority of negative controls are replaced with zero control, as there is no need to remove undesired edges. The pattern of positive controls is largely the same as seen in Fig. 7.
Results of the FBS to find the optimal controls under (28), using edge weight dynamics of the form (4) with and given by (26). The same initial opinions are used as for the examples in Figs. 3 and 4. The initial network is empty. (a) Opinion dynamics under the optimal controls for the cost functional (28). Opinion trajectories are colored according to individuals’ initial opinions. The target opinion is indicated by a black dashed line. (b) Snapshots of the optimal control at times . The horizontal axis gives the opinions for , the vertical axis gives the opinions for and points show the control . Blue stars show positive controls, where edges are created/strengthened. Red triangles show negative controls, where edges are being weakened. Gray circles indicate no control. Dashed lines show the location of the target opinion , hence as increases opinions are brought near this value.
Figure 12 shows the optimal dynamics and controls using the same target and initial opinions as for Fig. 7 but with a complete initial network. Here, the situation is reversed, as the majority of positive controls are now replaced with zero controls and the pattern of negative controls matches that in Fig. 7.
Results of the FBS to find the optimal controls under (28), using edge weight dynamics of the form (4) with and given by (26). The same initial opinions are used as for the examples in Figs. 3 and 4. The initial network is complete. (a) Opinion dynamics under the optimal controls for the cost functional (28). Opinion trajectories are colored according to individuals’ initial opinions. The target opinion is indicated by a black dashed line. (b) Snapshots of the optimal control at times . The horizontal axis gives the opinions for , the vertical axis gives the opinions for and points show the control . Blue stars show positive controls, where edges are created/strengthened. Red triangles show negative controls, where edges are being weakened. Gray circles indicate no control. Dashed lines show the location of the target opinion , hence as increases opinions are brought near this value.
Figure 13 shows the optimal dynamics and controls using a different target, different random initial opinions, and a new Watts–Strogatz small-world initial network.32 Although the locations of positive and negative controls differ from those in Fig. 7, there is a similar overall pattern: stripes of positive controls whose width matches the interaction function, bringing individuals toward ; many zero controls, especially toward the end of the dynamics; and negative controls that do not form a consistent pattern.
Results of the FBS to find the optimal controls under (28), using edge weight dynamics of the form (4) with and given by (26). Initial opinions were sampled uniformly at random in . The initial network is a Watts–Strogatz random graph. (a) Opinion dynamics under the optimal controls for the cost functional (28). Opinion trajectories are colored according to individuals’ initial opinions. The target opinion is indicated by a black dashed line. (b) Snapshots of the optimal control at times . The horizontal axis gives the opinions for , the vertical axis gives the opinions for and points show the control . Blue stars show positive controls, where edges are created/strengthened. Red triangles show negative controls, where edges are being weakened. Gray circles indicate no control. Dashed lines show the location of the target opinion , hence as increases, opinions are brought near this value.