Inter-network combat and intra-network cooperation among structured systems are likely to have been recurrent features of human evolutionary history; however, little research has investigated combat mechanisms between structured systems in which adversarial interactions disable agents and agents tend to seek cooperation with their neighbors. Hence, the present study proposes a two-network combat game model and designs the corresponding rules for how agents attack, how they are disabled, how they cooperate, and how a side wins. First, within the framework of our model, we simulated combat among four common network structures: the Erdős–Rényi (ER) random network, the grid graph, the small-world network, and the scale-free network. We found that the grid network always holds the highest winning percentage, whereas the ER random graph is the most likely to lose when fighting any of the other three structures. For each structure, we also simulated combat between networks of the same type generated with different parameters. These simulations reveal that the small-world property and heterogeneity promote winning a combat. In addition, by broadening and deepening cooperation, we found that broader cooperation helps defeat the opposing system on grid and scale-free networks yet hinders it on ER and Watts–Strogatz (WS) networks, whereas deeper cooperation benefits winning except on scale-free networks. These findings advance our understanding of the effects of structure and cooperation in combat.
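The abstract summarizes the model but does not spell out the attack, disablement, cooperation, or victory rules, so the following is only a minimal sketch of the kind of simulation loop such a model implies, written with networkx. The four generators correspond to the structures named above; everything else (the make_networks and combat helpers, the damage threshold, the coop_breadth parameter, and the damage-sharing rule) is a hypothetical stand-in rather than the authors' method.

```python
# Minimal sketch under assumed rules: each round, every active agent attacks a
# random active agent on the opposing network; active neighbors of the target
# may absorb a share of the damage (cooperation); an agent is disabled once its
# accumulated damage reaches a threshold; a side wins when the opponent has no
# active agents left. All parameter values below are illustrative assumptions.
import random
import networkx as nx

N = 200  # agents per side (illustrative)

def make_networks(n):
    """The four structures compared in the abstract, with assumed parameters."""
    return {
        "ER": nx.erdos_renyi_graph(n, 0.04),
        "grid": nx.convert_node_labels_to_integers(nx.grid_2d_graph(10, n // 10)),
        "WS": nx.watts_strogatz_graph(n, 8, 0.1),
        "BA": nx.barabasi_albert_graph(n, 4),  # scale-free
    }

def combat(g_a, g_b, threshold=3.0, coop_breadth=2, max_rounds=10_000, seed=0):
    """Return 'A', 'B', or 'draw' under the assumed rules above."""
    rng = random.Random(seed)
    graphs = {"A": g_a, "B": g_b}
    active = {"A": set(g_a), "B": set(g_b)}
    damage = {(s, v): 0.0 for s in ("A", "B") for v in graphs[s]}

    for _ in range(max_rounds):
        for side, other in (("A", "B"), ("B", "A")):
            for attacker in list(active[side]):
                if not active[other]:
                    return side  # opponent fully disabled
                target = rng.choice(tuple(active[other]))
                # cooperation: up to coop_breadth active neighbors share the blow
                helpers = [u for u in graphs[other].neighbors(target)
                           if u in active[other]][:coop_breadth]
                share = 1.0 / (1 + len(helpers))
                for v in (target, *helpers):
                    damage[(other, v)] += share
                    if damage[(other, v)] >= threshold:
                        active[other].discard(v)
    return "draw"

# Example: pit a grid-structured side against an ER-structured side.
side_a, side_b = make_networks(N), make_networks(N)
print(combat(side_a["grid"], side_b["ER"]))
```

Within such a harness, "broadening" cooperation would map to raising coop_breadth, while "deepening" it could mean letting helpers recruit their own neighbors in turn; neither mapping is claimed by the paper, and both are stated here only to show where those knobs would live.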
