Network dynamical systems with higher-order interactions are a trending topic, pervasive across many applied fields. However, our focus in this work is neurodynamics. We numerically study the dynamics of the smallest higher-order network of neurons arranged in a ring-star topology. The dynamics of each node in this network is governed by the Chialvo neuron map, and the nodes interact via linear diffusive couplings. This model is perceived to imitate the nonlinear dynamical properties exhibited by a realistic nervous system, where the neurons transfer information through multi-body interactions. We deploy the higher-order coupling strength as the primary bifurcation parameter. We start by analyzing our model using standard tools from dynamical systems theory: fixed point analysis, the Jacobian matrix, and bifurcation patterns. We observe the coexistence of disparate chaotic attractors. We also observe an interesting route to chaos from a fixed point via period-doubling and the appearance of cyclic quasiperiodic closed invariant curves. Furthermore, we numerically observe the existence of codimension-1 bifurcation points: saddle-node, period-doubling, and Neimark–Sacker. We also qualitatively study the typical phase portraits of the system and numerically quantify chaos and complexity using the 0–1 test and the sample entropy measure, respectively. Finally, we study the synchronization behavior among the neurons using the cross correlation coefficient and the Kuramoto order parameter. We conjecture that unfolding these patterns and behaviors of the network model will help us identify different states of the nervous system, further aiding us in dealing with various neural diseases and nervous disorders.

Recently, the application of higher-order interactions in network neurodynamics has become a topic of conversation. Motivated by this, our study asks whether higher-order interactions among neurons in a network promote chaos and synchronization. The network under consideration is a “small” network of Chialvo neurons arranged in a ring-star topology. It is a topologically complete graph made of four nodes, with one node at the center and the remaining three at the periphery, linked via diffusive couplings (both pairwise and higher order). As our focus is on the consequences of higher-order interactions for the dynamics of the neuron network, we make the higher-order coupling strength the primary bifurcation parameter. After performing standard nonlinear analysis via fixed point calculation and Jacobian matrix computation, we vary the higher-order coupling strength and observe an intriguing route to chaos. The qualitative structure of the typical phase portraits over a range of values of the primary bifurcation parameter gives us a bird’s-eye view of the dynamical properties of the network. We are able to report various codimension-1 bifurcation points as well as the emergence of chaos and complexity. Furthermore, this network also exhibits various synchronization behaviors (reported via the cross correlation coefficients and the Kuramoto order parameter) under various combinations of the pairwise and higher-order coupling strengths. A high value of the higher-order coupling strength promotes chaos and complexity while keeping the system fully synchronous in the excitatory regime.

## I. INTRODUCTION

The dynamical properties of complex systems form a prominent branch of computational science that has garnered the attention of physicists, mathematical modelers, quantitative biologists, computer scientists, and neuroscientists. *Neurodynamics* is a subfield that deals with the dynamical properties of neurons,^{1,2} usually studied via mathematical tools like ordinary differential equations, maps, partial differential equations, and delay differential equations. Thus, nonlinear equations are commonly used to illustrate the evolution of the dynamic firing and bursting behaviors exhibited by neurons.^{3} Some popular neurodynamical models studied quite intensively are the FitzHugh–Nagumo model,^{4} the Hindmarsh–Rose model,^{5} and the Hodgkin–Huxley model.^{6} Usually, the dynamics are studied on a neuron, or on a collection of multiple neurons forming a complex topological structure. To model these complex structures, mathematical modelers utilize the concept of *networks*, where the nodes are perceived as neurons and the edges as electrical or chemical synaptic connections among the neurons. The behavior is then studied over time, and it depends not only on the entity concerned but also on the other entities it is connected to.^{7} Usually, in network dynamics, most studies are based on the assumption that all interactions are dyadic or pairwise (only two nodes are involved in the interaction).^{8,9} However, in many real-life scenarios, interactions are not pairwise but higher order, involving more than two nodes. Some application areas of higher-order networks are the spread of rumors, peer pressure in social systems, scientific collaboration among multiple authors,^{10} disease spread, and the competition for food among multiple species in a complex ecosystem comprising non-pairwise interactions (see the review report by Battiston *et al.*^{11}). The spread of information and epidemics can be effectively studied using higher-order interactions on multiplex networks.
Zhang *et al.*^{12} studied a two-layered model of epidemic transmission by incorporating a partial mapping relationship among the nodes across the two layers and higher-order interactions in one layer. Fan *et al.*^{13} investigated a novel model of coupled epidemic spreading on a classical multiplex network with higher-order interactions. Furthermore, Chen *et al.*^{14} proposed a discrete-time-based dynamical study of epidemics on higher-order networks. By considering higher-order interactions, this approach helps in precisely capturing various discontinuous phase transitions and bistability phenomena. Another application area is molecular network modeling of protein chemical structures and pathways.^{15} Anwar *et al.*^{16} studied the complex emergent behaviors of swarmalators with higher-order interactions, illustrating the appearance of four distinct collective states: the asynchronous, phase wave, mixed, and synchronized states. They studied how abruptly the system transitions from the synchronized state to either the phase wave or the asynchronous state based on the strength of the higher-order coupling. There is no doubt that higher-order interaction models in networks have found a well-deserved place in the neurodynamics literature.^{17–22} Higher-order networks in neurodynamics have found applications in neurodegeneration^{23,24} and many psychiatric disorders.^{25,26} Interested readers with a physics background can have a look at the work by Battiston *et al.*,^{27} where the physics of higher-order interactions in complex systems is discussed.

Our work focuses on a “small” network. A small network is a minimal model made up of three to ten coupled oscillators working collectively as a unit. Our work is motivated by the broad need to study complex collective groups in terms of reduced mathematical models. The nervous system serves as an ideal candidate to be studied via minimal models, for example, the smallest network possible for a particular topology. This is because the small network acts as a unit of a larger ensemble, replicating the behavior of a real-world nervous system. Another advantage of studying small networks is that they fill the gap between studies concerning a single oscillator model and multi-oscillator models consisting of at least a hundred oscillators. Some of the simple topologies that have proliferated in the network neurodynamics literature are the ring network,^{28,29} star network,^{30,31} ring-star network,^{32–34} lattice network,^{35,36} and multiplex network.^{37,38} Thus motivated, we introduce the smallest network of neuron oscillators (consisting of four oscillators) arranged in a ring-star topology: one at the center and the rest at the periphery.

Most studies in neurodynamics are based on continuous-time ordinary differential equations,^{39} with discrete-time systems comparatively less studied. However, discrete-time models help in a faster, simpler, more flexible, and computationally more optimal understanding of neural dynamics.^{40} They also provide insights into how the neurons depend on parameters such as the refractory period,^{41} firing threshold, and network architecture. Some of the discrete neuron models studied are the Chialvo,^{42} Rulkov,^{43} Izhikevich,^{44} and discrete Hindmarsh–Rose models.^{5} Mehrabbeik *et al.*^{45} studied the significance of higher-order interactions in a network of memristive Rulkov maps using simplicial complexes and reported their synchronization patterns. The focus of our work is the Chialvo neuron, where each oscillator in our novel network follows the dynamics set by the Chialvo map, first put forward by Dante Chialvo in 1995. In the Chialvo neuron model, the slowness of the recovery variable is not constrained, so the model exhibits a wide range of dynamical behaviors, such as subthreshold oscillations, bistability, and chaotic orbits.^{40} Studies on the ring-star network of Chialvo neurons have been reported recently by Muni *et al.*^{33} and Ghosh *et al.*,^{34} but both were performed in the thermodynamic limit, i.e., with a large number of oscillators. Both studies additionally considered the application of electromagnetic flux, and neither considered higher-order interactions between the oscillators. Our paper, in contrast, differs from those articles in three ways.

1. We consider the smallest network of ring-star topology with just four nodes.
2. No electromagnetic flux is applied.
3. The interaction among the nodes is beyond pairwise, i.e., higher order.

For the parameter values that have been investigated, our model exhibits various dynamical patterns, like fixed points, period doubling, cyclic quasiperiodic closed invariant curves, and chaos. We also report the phenomenon of coexistence, where for a particular set of parameter values, we see multiple attractors, which can be either periodic or chaotic. This is known as *multistability*^{46} and has been uncovered here by utilizing the tool of bifurcation analysis. Many real-world systems exhibit the multistability phenomenon; it is usually unwelcome in engineering systems because multiple states can cause errors, but it is a useful phenomenon in neural systems that process information.^{33} Moreover, the appearance of both regular and chaotic dynamics makes it obligatory to distinguish them mathematically to obtain a more accurate understanding of the behavior of the system.^{47} For this, a variety of methods such as the $0$–$1$ test^{48,49} and the standard Lyapunov exponents are commonly used in numerics. Nevertheless, it might not be possible to use the conventional Lyapunov exponent method for higher-dimensional systems.^{50} Thus, we implement the $0$–$1$ test on the time series data generated from our system to differentiate between the chaotic and regular behavior portrayed. Additionally, the abundant complexity of the network calls for a measure to quantify it, and we do that via an *entropy* measure called the *sample entropy*, first introduced by Richman *et al.*,^{51} who measured the complexity of physiological time series data. We have also applied sample entropy to the time series data of our system.
The concept of entropy in connection with information theory was first introduced by Shannon,^{52} and it has subsequently been employed in various studies related to neuroscience.^{53–55} Richman and his coauthors’ method is by far one of the most popular entropy measures for nonlinear dynamical analysis, alongside the *approximate entropy* devised by Pincus.^{56} In terms of applications, measuring the complexity of empirical data from EEG recordings can point toward the possible onset of Alzheimer’s disease.^{57} Performing time series analysis by implementing the $0$–$1$ test and the sample entropy measure on our time series provides a plausible numerical bridge between chaoticity and complexity in our model.
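As a concrete illustration, the sample entropy computation can be sketched in a few lines of NumPy. This is a minimal version of the Richman–Moorman procedure; the embedding dimension `m`, tolerance `r`, and the two test signals are illustrative choices, not the settings used for the results in this paper.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1D series: -ln(A/B), where B counts template
    matches of length m and A of length m+1 (Chebyshev distance, tolerance
    r times the standard deviation, self-matches excluded)."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count_matches(mm):
        n = len(x) - mm + 1
        templates = np.array([x[i:i + mm] for i in range(n)])
        count = 0
        for i in range(n):
            # Chebyshev distance from template i to every template.
            d = np.max(np.abs(templates - templates[i]), axis=1)
            count += int(np.sum(d <= tol)) - 1  # exclude the self-match
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    if a == 0 or b == 0:
        return np.inf  # no matches: the estimate is undefined/infinite
    return -np.log(a / b)

# A periodic signal is highly regular (low entropy); white noise is not.
rng = np.random.default_rng(0)
periodic = np.sin(np.linspace(0.0, 20.0 * np.pi, 500))
noisy = rng.standard_normal(500)
se_periodic = sample_entropy(periodic)
se_noisy = sample_entropy(noisy)
```

The lower the value, the more regular (predictable) the series, which is why a periodic orbit scores far below a noisy or chaotic one.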

Furthermore, as our model is a collection of oscillators connected to one another, forming a group, it becomes necessary to study what kind of synchronization patterns this network generates. Synchronization is an important feature of neuron ensembles and has been extensively studied in the field of dynamical systems. This phenomenon is visible at a wide array of spatiotemporal scales^{58,59} and has been explored in various fields of study. Important reads include Sumpter,^{60} Strogatz,^{61} and Shahal *et al.*^{62} Synchronization phenomena occur mostly in oscillator ensembles interacting together in a coordinated and coherent manner. For example, in a multiplex network, interlayer synchronization affects the structural complexity, giving rise to dynamical complexity such as intralayer synchronization and interlayer synchronization.^{63} Furthermore, if the interlayer coupling is weak, then increasing layer similarity will initially enhance the synchronization before inhibiting it later. However, for strong interlayer coupling, synchronization can be increased by increasing the number of layers.^{64} Xu *et al.*^{65} studied the properties of finite-time $p$th moment asymptotic boundedness for stochastic nonlinear systems and reported an application to the synchronization of neural networks. Another interesting study is reported by Liang *et al.*,^{66} where the authors studied the consequences of the absence and presence of noise perturbation on the time and energy costs of synchronization in networks of Kuramoto oscillators. They concluded that there exists a trade-off between these two kinds of costs. Synchronization can give rise to both normal and abnormal patterns. Kuramoto^{67} first identified the chimera state, which refers to the coexistence of both coherent and incoherent nodes within an ensemble of coupled phase oscillators.
A handful of nodes oscillate away from the main synchronized ensemble in this case, giving rise to a chimera. Also, sometimes the nodes oscillate in two or more subgroups, giving rise to cluster states. These states have been observed in various natural phenomena, such as the sleep patterns of animals,^{68} the flashing patterns of fireflies,^{69} and many other systems.^{70–72} To quantify synchronization, we have utilized two measures: the cross correlation coefficient^{73,74} and the Kuramoto order parameter.^{75–77} Both have found a wide range of applications in the study of network dynamical systems.^{78–81}
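For concreteness, both quantifiers can be sketched in a few lines of NumPy. This is an illustrative sketch only: the phase arrays and time series below are toy inputs, not output of the neuron model.

```python
import numpy as np

def kuramoto_order_parameter(phases):
    """R = |<exp(i*theta)>| over the ensemble; R = 1 means full phase
    synchrony, while R near 0 means incoherence."""
    return float(np.abs(np.mean(np.exp(1j * np.asarray(phases)))))

def cross_correlation(x, y):
    """Zero-lag (Pearson) cross correlation coefficient of two time series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.dot(xc, yc) / np.sqrt(np.dot(xc, xc) * np.dot(yc, yc)))

# Four identical phases: fully synchronous (R = 1).
sync = np.full(4, 0.3)
R_sync = kuramoto_order_parameter(sync)

# Four phases spread evenly on the circle: fully incoherent (R = 0).
spread = np.array([0.0, 0.5 * np.pi, np.pi, 1.5 * np.pi])
R_spread = kuramoto_order_parameter(spread)

# Perfectly linearly related series have cross correlation 1.
t = np.linspace(0.0, 1.0, 100)
cc = cross_correlation(t, 2.0 * t + 1.0)
```

In practice, the cross correlation coefficient is computed pairwise between node time series, while the order parameter condenses the whole ensemble into a single number per time step.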

We organized this article as follows: in Sec. II, we discuss what a higher-order network is and put forward some basic mathematical notations related to hypergraphs and simplicial complexes that make up higher-order networks. In Sec. III, we review the two-dimensional Chialvo map, which acts as the building block of our network model. Then, in Sec. IV, we introduce our novel higher-order neuron network, which is built on the smallest ring-star topology consisting of four Chialvo nodes with linear diffusive couplings. We also comment on the topology of this network in graph theory terms like adjacency matrices. We further delve into analyzing the fixed point of this system, write down the Jacobian of the system, and explore its eigenvalues at the fixed point in Sec. V. This sets up an overview of the basic dynamical properties our system exhibits. In Sec. VI, we show the bifurcation diagram of the action potentials of the neurons with respect to the higher-order coupling strength (the primary bifurcation parameter). We observe an interesting route to chaos: the appearance of fixed points, period doubling, cyclic quasiperiodic closed invariant curves, and finally chaos. Additionally, in Sec. VII, we utilize MatContM to numerically report more codimension-$1$ bifurcation points (saddle-node and Neimark–Sacker). These bifurcation patterns appear in our model on the variation of the higher-order coupling strength in the negative domain, and we would not observe them without MatContM. Next, in Sec. VIII, we first look into the typical phase portraits that appear at certain values of interest of the higher-order coupling strength before implementing the $0$–$1$ test and the sample entropy measure on our model and numerically testing for chaos and complexity. Finally, in Sec. IX, we employ the synchronization measures, the cross correlation coefficient and the Kuramoto order parameter, on our model.
We provide concluding remarks and future directions in Sec. X. We have performed all the numerical simulations with Python 3.9.7 extensively using numpy, pandas, and matplotlib, except for MatContM,^{82,83} which is originally based on MATLAB.

## II. HIGHER-ORDER NETWORK

In real-world systems, interactions in a network can occur both pairwise and non-pairwise. A network can be modeled by a mathematical object called a *graph*, which consists of nodes and edges. Graphs can reveal information about the dynamical and structural characteristics of a system. Most often, in graph-based methods, relationships between entities (via the edges) are considered pairwise or dyadic. In real-life scenarios, however, nondyadic relations involving the nonlinear interaction of more than two nodes are common. To acquire insights from such complex systems, it is important to understand and analyze these polyadic network interactions. Such networks are called higher-order networks, consisting of higher-order interactions. These interactions can be mathematically encoded through *hypergraphs* and *simplicial complexes*. A hypergraph is a generalized version of a graph where edges can connect any number of vertices greater than or equal to two. A model based on hypergraphs was proposed by Chen *et al.*^{84} to describe the propagation of malware in cyberspace. We first set up some definitions of hypergraphs and simplicial complexes, following Bick *et al.*^{85}

For a finite set of vertices $V$ and a collection of nonempty subsets $L \subseteq P(V)$, where $P(V)$ is the power set of $V$, the tuple $H = (V, L)$ is called a hypergraph.

Like graphs, a hypergraph can be weighted and serves as the main backbone for the model of this paper. A good resource to study more about hypergraphs is the book by Berge.^{86}

Let $V = \{1, \ldots, M\}$, where $M$ is the number of vertices. Then, an $s$-simplex $\tau$ is a nonempty set $\tau = \{v_0, \ldots, v_s\}$ of $s+1$ elements of $V$; it has dimension $s$.

Given an $s$-simplex $\tau$, a face $F$ of $\tau$ is a subset $F \subset \tau$.

A simplicial complex $C$ is a collection of a finite number of $s$-simplices with the following properties: (i) $F \subset \tau$ and $\tau \in C$ imply that $F$ is an element of $C$, and (ii) for all vertices $v \in V$, $\{v\} \in C$.
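The two defining properties can be checked mechanically for any finite collection of simplices. The following is an illustrative sketch (not code from the paper), representing simplices as frozensets:

```python
from itertools import combinations

def is_simplicial_complex(simplices, vertices):
    """Check the definition: every vertex appears as a 0-simplex, and every
    nonempty proper face of every member simplex is also a member."""
    family = {frozenset(s) for s in simplices}
    # Every vertex v must be present as the 0-simplex {v}.
    if not all(frozenset({v}) in family for v in vertices):
        return False
    # Downward closure: every nonempty proper face must be a member.
    for s in family:
        for k in range(1, len(s)):
            for face in combinations(s, k):
                if frozenset(face) not in family:
                    return False
    return True

# All nonempty subsets of {1, 2, 3} form a simplicial complex ...
full = [c for k in (1, 2, 3) for c in combinations((1, 2, 3), k)]
ok = is_simplicial_complex(full, {1, 2, 3})
# ... but a lone 2-simplex without its edges and vertices does not.
bad = is_simplicial_complex([(1, 2, 3)], {1, 2, 3})
```

The second example fails precisely because downward closure demands that every face of the triangle also belong to the collection.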

## III. CHIALVO NEURONS

In this section, we review the Chialvo map,^{42} which is the building block of our network model. Ibarz *et al.*^{40} provide a detailed review of this topic. The Chialvo map is a two-dimensional iterative map characterizing the general dynamics of excitable neuron systems, given by

$$x_{n+1} = x_n^2 e^{y_n - x_n} + k_0,$$
$$y_{n+1} = a y_n - b x_n + c,$$

where $x$ is the activation (action potential) variable, $y$ is the recovery variable, $k_0$ acts as a constant additive perturbation, $a$ is the time constant of recovery, $b$ controls the activation dependence of recovery, and $c$ is the offset. As discussed by Ibarz *et al.*,^{40} the model also exhibits subthreshold oscillations, bistability, and chaotic orbits, making it an ideal candidate for map-based neuron modeling techniques. It has also received considerable attention from the research community in recent times.^{33,34,87–90}
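A minimal NumPy sketch of iterating the Chialvo map follows (using its standard published form; the default parameters are the local values quoted later in Sec. VI, and the initial condition is an arbitrary illustrative choice):

```python
import numpy as np

def chialvo_step(x, y, a, b, c, k0):
    """One iteration of the two-dimensional Chialvo map:
    x -> x^2 exp(y - x) + k0 (activation), y -> a*y - b*x + c (recovery)."""
    return x * x * np.exp(y - x) + k0, a * y - b * x + c

def trajectory(x0, y0, n, a=0.89, b=0.28, c=0.901, k0=0.06):
    """Iterate the map n times from (x0, y0); the defaults are the local
    parameter values used for the bifurcation study in Sec. VI."""
    xs = np.empty(n + 1)
    ys = np.empty(n + 1)
    xs[0], ys[0] = x0, y0
    for i in range(n):
        xs[i + 1], ys[i + 1] = chialvo_step(xs[i], ys[i], a, b, c, k0)
    return xs, ys

# A short orbit showing the spiking character of the action potential x.
xs, ys = trajectory(0.5, 0.5, 1000)
```

Since the activation update adds the nonnegative term $x^2 e^{y-x}$ to $k_0 > 0$, the orbit of $x$ stays positive, and the recovery variable keeps the excursions bounded.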

## IV. NETWORK MODEL

Our model is the smallest ring-star network made of four oscillators, with one at the center and three at the periphery, all linked to each other, forming a complete network (or a complete graph); see Fig. 1. The dynamics of each of these oscillators is governed by the discrete Chialvo map discussed in Sec. III. Note that we incorporate simplicial complexes in our network model to allow for higher-order interactions among the neuron oscillators. This implies a multi-body interlinkage among the oscillators that is beyond dyadic. Keeping the model minimal, we allow at most $2$-simplexes, so the higher-order relationships stay as simple as possible. Interested researchers are welcome to include a $3$-simplex, which our model readily admits.

We number the central node as $1$ (colored blue) and the peripheral nodes as $2$ (colored red), $3$ (colored green), and $4$ (colored black). These neuron nodes/oscillators are by themselves the $0$-simplexes. The pairwise couplings for the star configuration, i.e., from node $1$ to nodes $2$, $3$, and $4$, are denoted by $\mu$. The pairwise couplings for the ring configuration, i.e., among the nodes $2$, $3$, and $4$, are denoted by $\sigma^{(1)}$. Thus, in total, we have six $1$-simplexes, given by the doubles $\{1,2\}$, $\{1,3\}$, $\{1,4\}$, $\{2,3\}$, $\{2,4\}$, and $\{3,4\}$; see Fig. 1(a). Next, we consider the higher-order interactions realized by triadic couplings, i.e., among three oscillators. Our ring-star topology admits four such $2$-simplexes, given by the triples $\{1,2,3\}$, $\{1,2,4\}$, $\{1,3,4\}$, and $\{2,3,4\}$. These coupling strengths are denoted by $\sigma^{(2)}$; see Figs. 1(b)–1(e). The maximum possible dimension of the simplices for our model is $3$, with the largest possible simplex given by the quadruple $\{1,2,3,4\}$ itself. However, we omit this case from our model for simplicity. It should be noted that the curly braces “$\{\,\}$” represent the *roster* notation of a set. Throughout our study, we focus on how the dynamics of our neuron network behave mainly under the variation of $\sigma^{(2)}$, i.e., we make $\sigma^{(2)}$ the primary bifurcation parameter of our model.
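The simplex bookkeeping above is easy to reproduce programmatically. A short illustrative sketch (node labels follow the numbering in the text):

```python
from itertools import combinations

CENTER = 1
PERIPHERY = (2, 3, 4)
NODES = (CENTER,) + PERIPHERY

# 1-simplexes: three star edges (coupling strength mu) and three ring
# edges (coupling strength sigma^(1)), six in total for the complete
# four-node graph.
star_edges = [frozenset({CENTER, p}) for p in PERIPHERY]
ring_edges = [frozenset(e) for e in combinations(PERIPHERY, 2)]
one_simplexes = star_edges + ring_edges

# 2-simplexes: the four triangles of the complete graph
# (coupling strength sigma^(2)).
two_simplexes = [frozenset(t) for t in combinations(NODES, 3)]
```

The counts (six $1$-simplexes and four $2$-simplexes) match the enumeration in the text, and the purely peripheral triangle $\{2,3,4\}$ appears alongside the three hub-containing ones.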

We have followed the simplest diffusive linear couplings within the $s$-simplexes.^{91} Biologically, these kinds of couplings can be interpreted as electrical synapses between neurons. Similar coupling has been implemented in higher-order Hindmarsh–Rose neurons by Parastesh *et al.*,^{19} along with more complicated nonlinear couplings arising from chemical synapses. Another well-studied neuron map is the Rulkov map, and the higher-order interactions in a network of Rulkov neurons have been reported by Mirzaei *et al.*^{20} For a detailed review, the readers can refer to Majhi *et al.*^{92}

The ring-star topology of the network model plays a significant role in capturing the dynamics of three variants of topologies usually studied in the network neurodynamics literature: the ring network, the star network, and the ring-star network. Our model can be reduced to a ring network by setting $\mu = 0$ and $\sigma^{(1)} \neq 0$, and to a star network by setting $\mu \neq 0$ and $\sigma^{(1)} = 0$. However, how to handle $\sigma^{(2)}$ in both of these cases opens a new research avenue. We can think of this model biologically as a hub-based unit of neurons replicating how multipolar neurons form an ensemble for information processing in the *central nervous system*. The central node is a hub, or multipolar neuron, that transmits information to and from the peripheral multipolar neurons via the dendrites. The diffusive coupling strengths imitate how the dendrites control the information processing via electrical signals distributed through the *axons*. The higher-order interactions bring about how these electrical signals are processed among these neurons in more complicated and realistic ways in terms of mathematical modeling of biological systems. Also, our network model can be perceived as a unit that repeats itself to generate a more complex ensemble of neurons, all connected in a complicated topology via diffusive couplings in the real-world nervous system. Thus, studying the dynamical properties of this unit gives a researcher an idea of the bigger picture of the intricate complexity of the nervous system.

Furthermore, our model is map-based, making it computationally efficient, and it can be utilized to analyze empirical data from the likes of MRI and EEG by explaining the firing patterns of the excitable neurons. The study of bifurcations through these modeling techniques reveals how the neurons transition from one stable state to another, with intermittent periods of chaotic firing as certain activities are performed. Looking into the chaotic behavior of these firing patterns also helps researchers identify neural disorders. Identifying disorders is the first step toward treating them and has become an essential field of research in aid of medicine.^{93} These modeling techniques, via bifurcation patterns pointing toward how the transition happens from a stable state to a chaotic state, are a good reference for understanding a body’s circadian rhythms.^{94} Furthermore, studying how each of these neurons in the network synchronizes with the others over time tells us how well information processing occurs among them. A group of neurons is synchronized with another when they organize themselves to form functional ensembles, oscillating in tandem with each other. The global and local communications in an ensemble drive all the sensory, motor, and cognitive tasks.^{95} When two neurons are synchronized, the neural activities between them are coordinated.^{96} Often, abnormal synchronization points to the inception of a probable pathophysiological mechanism underlying motor disorders like Parkinson’s disease^{95,97} and epilepsy.^{98} Thus, these models become quite essential in analyzing empirical data and aiding the biomedical sciences.

## V. FIXED POINT ANALYSIS

In Table I, we list the set of eight eigenvalues for several parameter sets $(\sigma^{(1)}, \sigma^{(2)})$ at $\mu = 0.001$, with the other local parameters set as $a = 0.759$, $b = 0.421$, $c = 0.84$, and $k_0 = 0.03$. We observe a stable fixed point and fixed points of $k$-saddle type with $k = 1, \ldots, 4$. Extending on this, it becomes instructive to draw a two-dimensional color-coded plot, with the higher-order coupling strength $\sigma^{(2)}$ varying along one of the axes. This is akin to a two-dimensional bifurcation plot, where we focus on the qualitative behavior of the stability of fixed points under the variation of two coupling strengths as bifurcation parameters; see Fig. 2. In panel (a), we vary $\sigma^{(1)}$ against $\sigma^{(2)}$, both in the range $[-0.1, 0.1]$ with $\mu = 0.001$, and in panel (b), we vary $\mu$ against $\sigma^{(2)}$, both in the range $[-0.1, 0.1]$ with $\sigma^{(1)} = 0.008$. The red pixels denote the region containing stable fixed points, where $k = 0$, i.e., all eigenvalues have absolute values less than $1$. The other pixels are colored according to the $k$-saddle type, where $k = 1, \ldots, 4$ counts the eigenvalues with absolute value greater than $1$; these values of $k$ are indicated in white on the plots. We observe distinct bifurcation boundaries where the fixed point changes from being stable to the $k$-saddle types.
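The $k$-saddle classification can be illustrated on the single (uncoupled) Chialvo map with the same local parameters. The sketch below is illustrative only: it locates the fixed point by bisection and counts the eigenvalues of a finite-difference Jacobian outside the unit circle, whereas Table I uses the full eight-dimensional network Jacobian.

```python
import numpy as np

A, B, C, K0 = 0.759, 0.421, 0.84, 0.03  # local parameters of Table I

def f(v):
    """One step of the uncoupled Chialvo map, v = (x, y)."""
    x, y = v
    return np.array([x * x * np.exp(y - x) + K0, A * y - B * x + C])

def numerical_jacobian(v, eps=1e-7):
    """Central-difference Jacobian of the map at the point v."""
    n = len(v)
    J = np.empty((n, n))
    for j in range(n):
        dv = np.zeros(n)
        dv[j] = eps
        J[:, j] = (f(v + dv) - f(v - dv)) / (2.0 * eps)
    return J

def saddle_index(v):
    """Number k of eigenvalues with |lambda| > 1; k = 0 means a stable
    fixed point, otherwise the fixed point is a k-saddle."""
    eigs = np.linalg.eigvals(numerical_jacobian(v))
    return int(np.sum(np.abs(eigs) > 1.0))

# At a fixed point, y* = (c - b x*) / (1 - a); substitute into the
# x-equation and solve g(x) = x^2 exp(y*(x) - x) + k0 - x = 0 by bisection.
def g(x):
    y = (C - B * x) / (1.0 - A)
    return x * x * np.exp(y - x) + K0 - x

lo, hi = 0.5, 2.0  # g changes sign on this bracket
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
x_star = 0.5 * (lo + hi)
fixed_point = np.array([x_star, (C - B * x_star) / (1.0 - A)])
k = saddle_index(fixed_point)
```

For this parameter set, the uncoupled map's fixed point comes out as a $2$-saddle (a complex eigenvalue pair just outside the unit circle); the network's coupling terms then shift these multipliers, producing the range of types listed in Table I.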

**Table I.** Eigenvalues $\lambda_i$ of the Jacobian at the fixed point for representative pairs $(\sigma^{(1)}, \sigma^{(2)})$, their absolute values, whether $\lvert\lambda_i\rvert > 1$ (T/F), and the resulting fixed point type.

| $(\sigma^{(1)}, \sigma^{(2)})$ | $\lambda_i$ | $\lvert\lambda_i\rvert$ | $\lvert\lambda_i\rvert > 1$ | Type |
|---|---|---|---|---|
| (0.01, 0.02) | 0.99987648 | 0.99987648 | F | stable |
| | 0.80477184 + 0.03787732i | 0.80566271 | F | |
| | 0.80477184 − 0.03787732i | 0.80566271 | F | |
| | 0.77365338 | 0.77365338 | F | |
| | 0.79126146 + 0.04988789i | 0.79283258 | F | |
| | 0.79126146 − 0.04988789i | 0.79283258 | F | |
| | 0.79126148 + 0.04988788i | 0.7928326 | F | |
| | 0.79126148 − 0.04988788i | 0.7928326 | F | |
| (0.023, 0.02) | 1.00027466 | 1.00027466 | T | 1-saddle |
| | 0.80488425 + 0.03776246i | 0.80576961 | F | |
| | 0.80488425 − 0.03776246i | 0.80576961 | F | |
| | 0.77364068 | 0.77364068 | F | |
| | 0.77199429 + 0.05800085i | 0.77417006 | F | |
| | 0.77199429 − 0.05800085i | 0.77417006 | F | |
| | 0.77199427 + 0.05800085i | 0.77417004 | F | |
| | 0.77199427 − 0.05800085i | 0.77417004 | F | |
| (0.05, −0.01) | 1.07964252 | 1.07964252 | T | 2-saddle |
| | 1.00025013 | 1.00025013 | T | |
| | 0.77001084 | 0.77001084 | F | |
| | 0.77364145 | 0.77364145 | F | |
| | 0.9223877 | 0.9223877 | F | |
| | 0.92238763 | 0.92238763 | F | |
| | 0.78062413 | 0.78062413 | F | |
| | 0.78062414 | 0.78062414 | F | |
| (0.01, −0.01) | 1.07957293 | 1.07957293 | T | 3-saddle |
| | 0.99992094 | 0.99992094 | F | |
| | 0.77365195 | 0.77365195 | F | |
| | 0.77001168 | 0.77001168 | F | |
| | 1.05149891 | 1.05149891 | T | |
| | 1.05149886 | 1.05149886 | T | |
| | 0.77106815 | 0.77106815 | F | |
| | 0.77106815 | 0.77106815 | F | |
| (0.033, −0.022) | 1.17824731 | 1.17824731 | T | 4-saddle |
| | 1.00026082 | 1.00026082 | T | |
| | 0.77364111 | 0.77364111 | F | |
| | 0.76742141 | 0.76742141 | F | |
| | 1.08001272 | 1.08001272 | T | |
| | 1.08001265 | 1.08001265 | T | |
| | 0.77000632 | 0.77000632 | F | |
| | 0.77000632 | 0.77000632 | F | |


## VI. BIFURCATION STRUCTURE

Once we have an overall idea about the fixed point of the network system and its stability regions, the next step is to delve into the bifurcation pattern of its dynamical variables. In this section, we look into how the action potentials of the network change qualitatively with the variation of a bifurcation parameter. We consider the higher-order coupling strength $\sigma^{(2)}$ as the primary bifurcation parameter and plot $x_1$, $x_2$, $x_3$, and $x_4$ against it. The purpose of these bifurcation patterns is to shed light on the structural stability of the network system and to reveal the sensitivity of the action potentials to the primary bifurcation parameter.

Figure 3 represents the bifurcation plot obtained by plotting each node against the non-pairwise coupling strength $\sigma^{(2)}$. The system is simulated for $5\times10^4$ iterations, with the last $5\times10^3$ iterations used for plotting to avoid transient effects. We plot $200$ values of $\sigma^{(2)}$ in the range $[0.075,0.116]$, considering both forward and backward bifurcation. Forward bifurcation points are marked in black, whereas backward bifurcation points are marked in red. The local parameter values are set as $a=0.89$, $b=0.28$, $c=0.901$, $k_0=0.06$, $\mu=0.03$, and $\sigma^{(1)}=0.001$. The dynamical variables are initialized by sampling uniformly from the range $[0.6,0.8]$. As $\sigma^{(2)}$ is varied, we see the appearance of a fixed point [period-$1$; see Fig. 5(a)] in the range $\sigma^{(2)}\in[0.075,0.08543]$. At $\sigma^{(2)}\approx0.08543$, there is a period-doubling [see Fig. 5(b)], which continues until $\sigma^{(2)}\approx0.10237$, at which point we observe the onset of quasiperiodicity. On a further increase of $\sigma^{(2)}$, we see the appearance of smooth, disjoint, cyclic, quasiperiodic, closed invariant curves [see Fig. 5(c)], which merge into a single chaotic attractor when $\sigma^{(2)}$ is increased even more, via the loss of this smoothness through an attractor merging crisis. This transition from a period-$1$ solution to chaos is discussed in more detail in Sec. VIII, with illustrations of relevant typical phase portraits and the implementation of the $0$–$1$ chaos test on the bifurcation pattern. The reasoning behind plotting the bifurcation points of the model forward and backward in the same simulation environment is to allow us to report coexistence, indicating *multistability*. The attractors are topologically distinct from each other at some $\sigma^{(2)}$ values, as indicated by the nonoverlapping black and red marks in Fig. 3.
Note that we study the phase portraits at different $\sigma^{(2)}$ values of interest in more detail in Sec. VIII. These are marked as vertical blue lines in Fig. 3, i.e., $\sigma^{(2)}=0.08, 0.09, 0.11, 0.115$.

It is also possible to observe the behavior by varying the pairwise coupling strengths $\mu$ and $\sigma^{(1)}$ and generating two more bifurcation plots for our model. The existence of fixed-point, period-doubling, and chaotic behavior can also be observed from these bifurcation plots. Note that we chose not to report these results in this work, focusing instead on the higher-order coupling strength as the primary bifurcation parameter.
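The forward/backward sweep protocol described above can be sketched as follows. This is a minimal illustration (our own, not the paper's code) that uses the single uncoupled Chialvo map as the local dynamics and its parameter $k_0$ as the swept parameter; in the paper, the full coupled network of Eqs. (4)–(6) is iterated and $\sigma^{(2)}$ is swept. One common convention, assumed here, carries the state over between consecutive parameter values so that forward and backward branches can reveal hysteresis and multistability.

```python
import numpy as np

def chialvo(x, y, a=0.89, b=0.28, c=0.901, k0=0.06):
    """One iterate of the standard Chialvo neuron map (local node dynamics)."""
    return x**2 * np.exp(y - x) + k0, a * y - b * x + c

def sweep(param_values, n_iter=5000, n_keep=500, seed=0):
    """Forward/backward bifurcation sweep skeleton: iterate past transients at
    each parameter value, keep the post-transient x iterates for plotting."""
    rng = np.random.default_rng(seed)
    branches = {}
    for direction in ("forward", "backward"):
        vals = param_values if direction == "forward" else param_values[::-1]
        x, y = rng.uniform(0.6, 0.8, size=2)   # initial conditions as in the text
        pts = []
        for k0 in vals:
            for _ in range(n_iter):            # discard transients
                x, y = chialvo(x, y, k0=k0)
            tail = []
            for _ in range(n_keep):            # keep post-transient iterates
                x, y = chialvo(x, y, k0=k0)
                tail.append(x)
            pts.append(tail)
        branches[direction] = np.array(pts)    # shape: (len(vals), n_keep)
    return branches
```

Plotting `branches["forward"]` in black and `branches["backward"]` in red against the parameter values reproduces the overlay convention of Fig. 3.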

## VII. CODIMENSION-1 BIFURCATION PATTERNS

Apart from the period-doubling bifurcation, other codimension-1 bifurcations can be observed in complex dynamical systems such as our network model. These cannot be detected directly with simulations in Python. To capture them, we use the numerical bifurcation tool for maps, MatContM.^{83} We have implemented this to report codimension-$1$ bifurcation points like the *saddle-node* bifurcation (sometimes called *fold* or *limit-point* bifurcation), the Neimark–Sacker bifurcation, and *branch points* on the fixed-point curves in the negative $\sigma^{(2)}$ domain (see Fig. 4), which are otherwise not captured by Python. The coupling strengths are set as $\sigma^{(1)}=0.001$, $\mu=0.03$, and $\sigma^{(2)}\in[-1.4,0.1]$, and the local parameters are set as $a=0.89$, $b=0.28$, $c=0.901$, and $k_0=0.06$. Various codimension-$1$ and codimension-$2$ patterns were also reported by Muni *et al.*^{33} for a system of three-dimensional memristive Chialvo neurons.

Codimension-$1$ bifurcations like period-doubling, saddle-node, and Neimark–Sacker occur when an eigenvalue of the Jacobian matrix $J$ has modulus $1$ at the fixed point $X^{\ast}$. Specifically, a saddle-node occurs when an eigenvalue equals $1$, and a period-doubling occurs when an eigenvalue equals $-1$ at $X^{\ast}$. When a pair of complex conjugate eigenvalues has modulus $1$, a Neimark–Sacker bifurcation takes place. A *branch point* of fixed points is similar to a pitchfork bifurcation of equilibria in continuous systems. It appears in systems with symmetry or equivariance under variation of a single parameter.^{99}
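These eigenvalue criteria can be turned into a small detector. The function below is a sketch (the function name and tolerance are ours, not from the paper) that tags candidate codimension-1 bifurcations from the Jacobian's eigenvalues at a fixed point; in practice a continuation tool such as MatContM additionally computes normal form coefficients to confirm and classify the bifurcation.

```python
import numpy as np

def classify_codim1(eigenvalues, tol=1e-8):
    """Tag candidate codimension-1 bifurcations of a map from the Jacobian
    eigenvalues at a fixed point: +1 signals a saddle-node (fold), -1 a
    period-doubling, and a complex eigenvalue on the unit circle a
    Neimark-Sacker bifurcation."""
    found = []
    for lam in np.atleast_1d(eigenvalues):
        lam = complex(lam)
        if abs(lam - 1) < tol:
            found.append("saddle-node")
        elif abs(lam + 1) < tol:
            found.append("period-doubling")
        elif abs(lam.imag) > tol and abs(abs(lam) - 1) < tol:
            found.append("Neimark-Sacker")
    return found
```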

A bifurcation diagram of the map (4)–(6) is shown in Fig. 4. We chose $\sigma^{(2)}$ as the bifurcation parameter because of its biological interest in understanding the dynamical behavior of a network of neurons. Varying $\sigma^{(2)}$ in the range $[-1.4,0.1]$, three bifurcation points are detected: NS (Neimark–Sacker), LP (saddle-node, or limit point), and BP (branch point). The supercritical Neimark–Sacker bifurcation, with a normal form coefficient of $-2.6556\times10^{-1}$, occurs at $(x_1,\sigma^{(2)})=(11.6726,-0.22066)$; the saddle-node bifurcation occurs at $(x_1,\sigma^{(2)})=(2.1761,-1.2672)$ with normal form coefficient $1.2894\times10^{-2}$; and the branch point occurs at $(x_1,\sigma^{(2)})=(2.5847,-1.0147)$.

## VIII. TIME SERIES ANALYSIS

Our network simulation will generate a battery of time-series data, which is very rich in terms of complexity. Thus, it becomes imperative to study this richness via crucial tools that capture the intricate trends. One such pattern is the onset of *chaos* from regular periodic solutions. We need tools to quantify these temporal behaviors, and two of these tools that we have implemented here are the $0$– $1$ test for chaos and *sample entropy* for complexity. We will discuss these step by step in the next part of this section.

We implement the $0$–$1$ test for chaos following the algorithm of Ref. 48. We discuss the steps of the algorithm and how it has been implemented in our system. For time series analysis, we need time series data $\{x(n), n=1,\dots,N\}$. To start with the $0$–$1$ test, we first compute two translation variables $p_e$ and $q_e$ for a small value $e\in(0,2\pi)$, given by

$$p_e(n)=\sum_{j=1}^{n}x(j)\cos(je),\qquad q_e(n)=\sum_{j=1}^{n}x(j)\sin(je).$$

The diffusive (or bounded) behavior of these translation variables is then quantified by the time-averaged mean square displacement $M_e$ and its oscillation-corrected version $D_e$, from which the asymptotic growth rate $K_e$ is extracted. As noted in Ref. 48, there are two possibilities to compute this $K_e$: by *regression* and by *correlation*. We follow the first approach for this paper. The regression method is applied to $D_e$ rather than $M_e$, as $D_e$ portrays less variance than $M_e$; see Fig. 8. Because $D_e$ might be negative, it is imperative to modify $D_e$ so that it is nonnegative before the regression is applied.

First, we consider four parameter values of interest from the bifurcation diagram in Fig. 3, given by $\sigma^{(2)}=0.08, 0.09, 0.11, 0.115$ (indicated by the blue dashed lines). At first glance, intuition tells us that at $\sigma^{(2)}=0.08$ we have a fixed point; at $\sigma^{(2)}=0.09$, a period-doubled orbit; and at $\sigma^{(2)}=0.11$ and $0.115$, chaos. The typical phase portraits for these four scenarios are given in Fig. 5. To generate these phase portraits, we have followed the same simulation steps as those in Fig. 3. Reiterating, the simulation is run for $5\times10^4$ iterations, and only the last $3\times10^4$ points are plotted as $x_p$ vs $y_p$ to ensure transients are discarded. Note that, following the color convention in our schematic Fig. 1, we have colored the nodes in the phase portraits, i.e., $1$: blue, $2$: red, $3$: green, and $4$: black. The initial values of the dynamical variables $x_1$ through $y_4$ are sampled uniformly from the range $[0.6,0.8]$. Looking at the phase portraits, we observe that panel (a) is indeed a fixed point, and panel (b) shows a period-doubled orbit. Panel (c) shows the loss of smoothness of a disjoint cyclic quasiperiodic closed invariant curve. Such a loss of smoothness usually marks the beginning of the transition toward the formation of a chaotic attractor. A further increase in $\sigma^{(2)}$ results in the two disjoint cyclic components approaching each other and merging into a single chaotic attractor via an attractor merging crisis at $\sigma^{(2)}=0.115$; see panel (d). In the period-doubled regime [panel (b)], we notice four dots, two colored in blue and the other two in black. This readily corroborates the time series plots in Figs. 6(a) and 6(b): the central node (node $1$) oscillates with an amplitude different from the three peripheral nodes, which are in complete synchrony among themselves. Figure 6(a) is a straight line (fixed point), and Fig. 6(b) shows regular patterns in two colors, indicating that node $1$ oscillates at a different amplitude from the other three. The third phase portrait exhibits an interesting phenomenon, where the attractors have split into two pieces. This behavior is anticipated as a transition toward chaos. Nodes $1$, $2$, and $4$ oscillate at amplitudes different from one another; however, the trajectory of node $3$ is in tandem with that of node $4$. This is also clearly indicated by the fairly irregular time series bursts in Fig. 6(c). Finally, Fig. 5(d) exhibits chaos, where the attractor for the central node is topologically distinct from the attractors for the other nodes (nodes $2$ and $3$ are masked by node $4$). The peripheral nodes are in complete synchrony with each other. The time series in Fig. 6(d) is highly irregular, indicating an onset of chaos. It should be noted that only the last $100$ iterations are illustrated in Fig. 6 to make the plots comprehensible. Our next task is to confirm all these regular and chaotic behaviors numerically via the $0$–$1$ test.

The first step is to compute the translation variables $p_e$ and $q_e$. For the above four cases, we have chosen $e=1.1$. As noted before, the simulations are run for $5\times10^4$ iterations, out of which the first $2\times10^4$ are discarded. This ensures that the time series data do not contain any transients. Thus, we have $N=3\times10^4$. Then, we plot $p_{x_1}$ vs $q_{x_1}$, showing the phase portraits of (32) and (33) for the central node (node 1); see Fig. 7. The trajectories are colored blue following the color code of the nodes set in Fig. 1. We only show these for the central node because the qualitative behavior of the other nodes follows a similar pattern at the set parameter values; thus, focusing on only one node suffices. We observe that the trajectory is highly bounded for the fixed point [panel (a), $\sigma^{(2)}=0.08$], whereas it is slightly less bounded for the period-doubled case [panel (b), $\sigma^{(2)}=0.09$]. At $\sigma^{(2)}=0.11$, exhibiting a transition to chaos, the trajectory lies somewhere between bounded and diffusive, which also qualitatively indicates a transition to chaos. Finally, at $\sigma^{(2)}=0.115$, where chaos is observed, the trajectory is a random walk showing a diffusive nature similar to a Brownian motion with zero drift. This brings us to mathematically analyzing the diffusive patterns of $p_e$ and $q_e$ via the computation of $M_e$ and the modified version $D_e$. To do that, we need to set $N_{\mathrm{crit}} \le N/10 = 3\times10^3$. Let us consider two values for this term, i.e., $N_{\mathrm{crit}}=300$ and $N_{\mathrm{crit}}=50$. We use the first one to plot the graphs of $M_e$ and $D_e$, and the second one to ultimately compute $K$. Note that a small value of $N_{\mathrm{crit}}$ makes the graphs of $M_e$ and $D_e$ look blocky but makes it faster to compute $K$, whereas a large value of $N_{\mathrm{crit}}$ makes the graphs look less blocky but makes the computation of $K$ slower. These caveats require us to consider two values. We plot $M_e$ and $D_e$ and also compute $K$, showing the results in Fig. 8. In panel (a), where $\sigma^{(2)}=0.08$ (fixed point), both $M_e$ and $D_e$ are highly bounded as expected, with $K=0$. For $\sigma^{(2)}=0.09$, we see in panel (b) that the $M_e$ and $D_e$ plots are slightly less bounded than those in panel (a) (although this is difficult to discern with the bare eye). In this case, we have a slightly higher value of $K\approx0.181$, pertaining to period doubling, which is regular by itself but less regular when compared geometrically to a fixed point. For panel (c), where we observe a transition to chaos (at $\sigma^{(2)}=0.11$), we start seeing subtle changes in the growth rates of $M_e$ and $D_e$, which are no longer highly bounded. The $K$ value increases to approximately $0.517$, indicating a transition toward chaotic behavior. Finally, for $\sigma^{(2)}=0.115$, the growth rates of $M_e$ and $D_e$ become linear, indicating chaos, with $K\approx0.792$. The simulations for the $0$–$1$ test have been achieved by converting the Julia code 01ChaosTest.jl^{101} to Python.
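The test procedure above can be condensed into a short sketch (our own, not the converted 01ChaosTest.jl code). For robustness in a compact example, this sketch estimates $K$ with the correlation variant of the test rather than the regression variant used in the paper; both variants are standard options.

```python
import numpy as np

def zero_one_test(series, e=1.1, ncut_frac=0.1):
    """Minimal 0-1 test for chaos. Builds the translation variables p_e, q_e,
    their mean square displacement M_e, the oscillation-corrected D_e, and
    estimates the growth rate K by correlating D_e with n (correlation
    variant): K ~ 0 for regular dynamics, K ~ 1 for chaos."""
    x = np.asarray(series, dtype=float)
    N = len(x)
    j = np.arange(1, N + 1)
    p = np.cumsum(x * np.cos(j * e))    # translation variables
    q = np.cumsum(x * np.sin(j * e))
    ncut = max(1, int(ncut_frac * N))   # N_crit <= N/10
    n = np.arange(1, ncut + 1)
    M = np.array([np.mean((p[k:] - p[:-k]) ** 2 + (q[k:] - q[:-k]) ** 2)
                  for k in n])
    # subtract the oscillatory term to get the less noisy D_e
    Vosc = x.mean() ** 2 * (1 - np.cos(n * e)) / (1 - np.cos(e))
    D = M - Vosc
    return np.corrcoef(n, D)[0, 1]      # correlation estimate of K
```

For production use, $K$ should be averaged over many randomly chosen values of $e$ to avoid resonances, as the literature on the test recommends.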

Next, we quantify the complexity of the time series via *sample entropy*, following Richman *et al.*^{51} This gives us a quantifier to measure the rate of information production in the network system over time. First, we detail the algorithm for the sample entropy in terms of a general time series before implementing it in our model. Given the time series $\{x(n), n=1,\dots,N\}$, we first define a vector $x_v(j)$ as

$$x_v(j) = \{x(j), x(j+1), \dots, x(j+v-1)\}, \qquad j = 1,\dots,N-v+1,$$

where $v$ is the embedding dimension.

To compute the sample entropy numerically, we use the Python package nolds.^{102} This measure has also been recently used to evaluate the sample entropy of a heterogeneous ring-star network of memristive Chialvo neurons in the thermodynamic limit by Ghosh *et al.*^{34} As in the $0$–$1$ test, we simulate the model for $5\times10^4$ time steps, but this time, to evaluate sample entropy, we discard the first $2.5\times10^4$, making $N=2.5\times10^4$. The package nolds provides a function sampen() written specifically to compute the sample entropy of time series data following the algorithm by Richman *et al.* The default values of $v$ and $\delta$ in this function are set to $2$ and $0.2$ times the standard deviation of the time series data, respectively. The corresponding sample entropy values for the central node of our network for four different $\sigma^{(2)}$ values are reported in Fig. 6. The reasoning behind considering only the central node parallels that discussed for the $0$–$1$ test. We see that when $\sigma^{(2)}=0.08$, the attractor is a fixed point, and as expected, the system has very regular dynamics and thus very low complexity, given by $SE=0$. This also occurs for the regular period-doubled dynamics at $\sigma^{(2)}=0.09$, where $SE=0$. On the further increase of $\sigma^{(2)}$ to $0.11$, we notice the transition to chaotic behavior, supported by the increase in the $SE$ value to approximately $0.381$, i.e., high complexity. Finally, for $\sigma^{(2)}=0.115$, where the model exhibits the onset of chaos, very high complexity is again portrayed, supported by $SE\approx0.358$.
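The sample entropy computation described above can also be sketched from scratch. The following minimal version (ours, not the optimized sampen() from nolds) follows the Richman–Moorman counting directly, with the defaults stated in the text: $v=2$ and $\delta = 0.2$ times the standard deviation.

```python
import numpy as np

def sample_entropy(series, v=2, delta=None):
    """From-scratch sample entropy. Templates of length v and v+1 are compared
    under the Chebyshev distance with tolerance delta; SE = -ln(A/B), where B
    and A count similar template pairs at lengths v and v+1, respectively."""
    x = np.asarray(series, dtype=float)
    if delta is None:
        delta = 0.2 * x.std()
    N = len(x)

    def count_similar_pairs(m):
        # use the same number of templates (N - v) at both lengths, as in the
        # original algorithm, so A and B are directly comparable
        templates = np.array([x[j:j + m] for j in range(N - v)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist < delta))
        return count

    B = count_similar_pairs(v)
    A = count_similar_pairs(v + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")
```

Regular signals, where length-$v$ matches almost always extend to length $v+1$, give $A \approx B$ and hence $SE \approx 0$; irregular signals lose matches at the longer length, driving $SE$ up.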

The final task in this section is to illustrate a bifurcation diagram of $K$ and $SE$ over a wide range of $\sigma^{(2)}$ values, giving us a full picture of how they vary from regular to chaotic dynamics, with $\sigma^{(2)}$ as the main bifurcation parameter. This is shown in Fig. 9. It also gives us a statistical viewpoint on the relationship between the chaoticity and the complexity of a dynamical system. We recreate the bifurcation diagram of the dynamical variable $x_1$ with $\sigma^{(2)}$ varying in the range $[0.075,0.115]$ in the top row of Fig. 9. The $K$ plot (in the middle row) shows that $K=0$ in the region where we observe a fixed point. As soon as there is an onset of period-doubling, we observe $0<K<0.5$, still indicating regularity in the dynamics. On the further increase of $\sigma^{(2)}$, as soon as irregularity appears in the dynamics of the network (realized via the transition to chaos and ultimately chaos itself), the $K$ value increases beyond $0.45$ and reaches approximately $0.8$. For the $SE$ plot (in the third row), we see $SE=0$ for all regular dynamics (fixed point, period-doubling). As soon as some irregularity sneaks into the dynamics, the $SE$ value increases rapidly, indicating higher complexity with higher irregularity. The vertical black dashed lines refer to the $\sigma^{(2)}$ values of interest at which we plotted the phase portraits, time series, $p_{x_1}$ vs $q_{x_1}$ portraits, and the $M_e$ and $D_e$ curves earlier. These numerical simulations show that chaos and complexity are intertwined, providing a benchmark for further research on their unification and the questions this might raise in the context of mathematical biology and healthcare.^{103} It would be interesting to see further research on a mathematical framework relating these two ubiquitous concepts.

## IX. SYNCHRONIZATION MEASURES

In this section, we numerically unravel the extent of synchronization in our model. We achieve this via two metrics: the cross correlation coefficient and the Kuramoto order parameter. The reasoning behind taking two metrics is to corroborate the numerical result from one with that from the other. This gives us an overview of the collective dynamics of the network model, where each node acts as an agent, organized with each other through pairwise and higher-order coupling strengths.

### A. Cross correlation coefficient

The cross correlation coefficient $\Gamma$ quantifies how strongly the dynamics of the nodes are correlated with one another. It can also help detect partial synchronization patterns such as *chimera* or *cluster states*, which are more notable when we consider networks in the thermodynamic limit, i.e., when the total number of nodes is very high (close to infinity).

While running the numerics, the simulation was iterated for $5\times10^4$ steps, out of which the first $3\times10^4$ were discarded. The local parameters are set as $a=0.89$, $b=0.18$, $c=0.28$, and $k_0=0.06$. The initial values for the dynamical variables were uniformly sampled from $[0.6,0.8]$; see Fig. 10. Special attention has been paid to the variation of $\Gamma$ with $\sigma^{(2)}$. Note that in panel (a), we have fixed $\mu=0.03$ and varied both $\sigma^{(1)}$ and $\sigma^{(2)}$ in the range $[-0.1,0.1]$, giving rise to a color-coded two-dimensional bifurcation plot. As $\sigma^{(2)}$ becomes more positive, the model falls into a complete synchronization regime: all the nodes oscillate in tandem, as implied by $\Gamma=1$. When $\sigma^{(2)}<0$, the model usually has $\Gamma<1$, most of the time ranging between $-0.5$ and $0.5$, meaning it is highly asynchronous. As $\sigma^{(2)}$ decreases further toward $-0.1$, the dynamics of the model diverge (the destruction of the attractors), as indicated by the white pixels in the plot. In panel (b), we have fixed $\sigma^{(1)}=0.001$ and varied both $\sigma^{(2)}$ and $\mu$ in the range $[-0.1,0.1]$. Here too, we see that when $\sigma^{(2)}$ becomes more positive, the model tends to fall into a complete synchrony regime, whereas when $\sigma^{(2)}$ becomes more negative, the model first becomes asynchronous before the behavior diverges as $\sigma^{(2)}$ approaches $-0.1$.
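As a sketch of how $\Gamma$ can be computed from the simulated time series, the function below assumes $\Gamma$ is the Pearson cross correlation averaged over all distinct node pairs (the paper's exact definition of $\Gamma$ appears earlier in the text and may differ in normalization); scanning this over a grid of coupling strengths produces color-coded plots like Fig. 10.

```python
import numpy as np

def gamma(X):
    """Averaged pairwise cross correlation for a network of nodes.
    X has shape (n_nodes, T): one post-transient time series per node.
    Gamma = 1 indicates complete synchronization."""
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=1, keepdims=True)      # remove temporal means
    norms = np.sqrt((Xc ** 2).sum(axis=1))
    C = (Xc @ Xc.T) / np.outer(norms, norms)    # pairwise correlation matrix
    iu = np.triu_indices(len(X), k=1)           # distinct pairs only
    return C[iu].mean()
```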

### B. Kuramoto order parameter

Similar to Fig. 10, we have also illustrated two-dimensional color-coded plots for the Kuramoto order parameter; see Fig. 11. Following the same simulation steps as for the cross correlation coefficient, we have varied $\sigma^{(1)}$ against $\sigma^{(2)}$ in panel (a) in the range $[-0.1,0.1]$ with $\mu=0.03$, and varied $\mu$ against $\sigma^{(2)}$ in panel (b) in the range $[-0.1,0.1]$ with $\sigma^{(1)}=0.001$. We again see in panel (a) that as $\sigma^{(2)}$ becomes more positive, the dynamics of our network become completely synchronized, as indicated by $I=1$ (yellow pixels). As $\sigma^{(2)}$ becomes more negative, however, the synchronization breaks down, with the oscillators drifting away from one another. This is indicated by a decrease in $I$ to a range of approximately $[0.23,0.79]$. As $\sigma^{(2)}$ approaches $-0.1$, the network diverges, as illustrated by the white pixels. In panel (b), we see similar behavior as $\sigma^{(2)}$ is varied. This gives us an overall picture of how the cross correlation coefficient and the Kuramoto order parameter simulations agree with each other and how they capture the synchronization phenomena in our network. It is evident that the higher-order coupling strength significantly drives the dynamics of our network.
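A minimal sketch of the Kuramoto order parameter computation is given below. The phase-extraction step from the node trajectories is assumed to follow the paper's definition and is taken as given here; the function only performs the averaging of the complex phase factors.

```python
import numpy as np

def kuramoto_order(thetas):
    """Time-averaged Kuramoto order parameter. thetas has shape (T, n_nodes):
    one phase per node per time step. Returns the time average of
    |<exp(i*theta)>|: 1 for full synchrony, near 0 for incoherence."""
    z = np.exp(1j * np.asarray(thetas, dtype=float))
    return np.abs(z.mean(axis=1)).mean()
```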

## X. CONCLUSION

In this study, we have put forward a novel small network of Chialvo neurons arranged in a ring-star topology made of four neurons (one in the center and three in the periphery), where the interactions are modeled beyond pairwise, i.e., higher-order couplings besides pairwise couplings. Furthermore, the couplings are linear and diffusive, replicating an electrical synapse in a realistic nervous system. One future direction is to implement nonlinear couplings that replicate more complicated electrochemical synapses among the neurons.

Our small network model can be treated as a unit of neuron ensemble, which acts as a bridge between a single neuron and a larger ensemble of neurons in the thermodynamic limit (with large numbers of nodes) arranged in a complicated topology coupled linearly or nonlinearly via electrical or chemical synapses, replicating complex functionalities in the nervous system. One interesting avenue to look into in the future is incorporating noise modulation into the system via additive and multiplicative noises and studying a broad range of spatiotemporal patterns.

This model is a nonlinear system of eight coupled equations, which allows us to explore a plethora of dynamical properties by studying its fixed point, performing stability analysis of the fixed point by looking into the eigenvalues of the Jacobian matrix, and reporting bifurcation patterns. The higher-order coupling strength is considered the primary bifurcation parameter in most cases, and we notice the appearance of an intriguing route to chaos via the appearance of fixed points, period-doubling, cyclic quasiperiodic closed invariant curves, and ultimately chaos. We also observe the emergence of other codimension- $1$ patterns like saddle-node and Neimark–Sacker. It would be interesting to test what kind of new bifurcation behavior comes to the surface once we consider our network in the thermodynamic limit and also look into the phenomenon of phase transitions in depth.^{9}

Additionally, we have looked into quantifying the concept of chaoticity and complexity of the network by implementing the $0$– $1$ test and the sample entropy measure on the time series data generated from our model. It would be alluring to look into how a test for chaos and complexity works for a temporal higher-order network where the coupling strengths (pairwise and non-pairwise) update over every iteration. It would also be a good research question to come up with some sort of closed-form analytical relationship between chaos and complexity.

Moreover, we have performed a bifurcation study of synchronization patterns via two-dimensional color-coded plots, measuring how the network behaves collectively over time using the cross correlation coefficient and the Kuramoto order parameter. We have noticed full synchrony when the higher-order coupling strength becomes more positive, meaning higher-order interaction promotes tandem cumulative behavior. In the future, another exercise would be to explore whether these map-based network models have any conservative properties, like a Hamiltonian in the case of their continuous-time counterparts. Is there an equivalent conserved quantity that we can model for these systems?

Like any other model, we believe ours is not limitation-free and leaves a lot of room for improvement. One way forward would be to fit our model using empirical data from viable sources. The model can be enriched with more complexities, for example, time-varying couplings, nonlinear couplings, and couplings that integrate $s$-simplexes with the highest possible values of $s$. Topologically, it can be extended to a multiplex network, with the first step being a two-layered model, and by increasing the number of nodes. The model can be made stochastic by incorporating noise modulations. Studies on finding chimera states, solitary states, traveling waves, and other phenomena like amplitude death could be a future prospect for these small network models.

Our outlook in this paper has been on studying a complex network neurodynamical model by designing the smallest ring-star network. We have come to the conclusion that a high value of the higher-order coupling strength in the excitatory regime will promote more chaoticity and complexity in the system while maintaining full synchrony among the neurons. We believe this will lay the foundation for applications of reduced modeling techniques in the fields of engineering, mathematical modeling, quantitative biology, neuroscience, etc.

Reiterating, we have performed a robust study of a small network of neuron maps with higher-order interactions using tools from nonlinear dynamics. We have examined how higher-order interactions influence the behavior of our system. We have calculated the fixed points, simulated the phase portraits and time series, and illustrated various bifurcation diagrams. We have also studied the chaoticity and the complexity of the system as a consequence of higher-order interactions. We have concluded this study by looking into how higher-order interactions also influence synchronization within the system. The oscillators in our network were coupled via linear diffusive couplings; however, studies have shown that nonlinear couplings can result in abrupt synchronization switching,^{104} which can be considered in future studies. Our model is exciting in the sense that small networks with higher-order interactions have not been extensively studied before. The ring-star topology we have considered is also a rarely studied network geometry. Because our model is also a complete graph, it geometrically gives rise to a substantial number of $2$-simplexes, even though the network is small. Chaoticity in most network dynamical systems is studied using the maximum Lyapunov exponent; however, we have implemented the $0$–$1$ test, which is still quite new in terms of its applications to neuron maps. Using numerical methods, we have also been able to show a weak correlation between chaoticity and complexity, which we conjecture to be an interesting mathematical avenue. In the thermodynamic limit, where the number of nodes in the network is very high, we might be able to see chimera states and study phase transitions.^{9} A study of a higher-order Deffuant model encoded on a hypergraph reported a significant change in the onset of consensus,^{105} leading to a smooth phase transition.
In the future, we plan to extend our smaller network to a larger one focusing more on finding the chimera patterns, solitary states, and other partial synchronization patterns along with studying the model’s phase transitions.

## ACKNOWLEDGMENTS

A.S.N. and S.S.M. acknowledge the computing resources provided by Digital University Kerala. The authors would like to thank all the anonymous reviewers for helping enhance the quality of this manuscript.

## AUTHOR DECLARATIONS

### Conflict of Interest

The authors have no conflicts to disclose.

### Author Contributions

**Anjana S. Nair:** Conceptualization (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Project administration (equal); Resources (equal); Software (equal); Validation (equal); Visualization (equal); Writing – original draft (equal); Writing – review & editing (equal). **Indranil Ghosh:** Conceptualization (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Project administration (equal); Resources (equal); Software (equal); Supervision (equal); Validation (equal); Visualization (equal); Writing – original draft (equal); Writing – review & editing (equal). **Hammed O. Fatoyinbo:** Formal analysis (equal); Investigation (equal); Methodology (equal); Project administration (equal); Software (supporting); Validation (supporting); Visualization (supporting); Writing – original draft (supporting); Writing – review & editing (equal). **Sishu S. Muni:** Conceptualization (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Project administration (equal); Resources (equal); Software (equal); Supervision (equal); Validation (equal); Visualization (supporting); Writing – original draft (supporting); Writing – review & editing (equal).

## DATA AVAILABILITY

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

## REFERENCES

*Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting*

*Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition*

*Understanding Complex Systems*(Springer, Berlin, 2012), pp. 195–232.

*et al.*, “

*Two Notes on Continuous-Time Neurodynamical Systems*

*New Trends in Nonlinear Dynamics and Pattern-Forming Phenomena*(Springer US, New York, 1990), pp. 89–93.

*The 0-1 Test for Chaos: A Review*, Lecture Notes in Physics (Springer, Berlin, 2016).

*p*th moment asymptotically bounded for stochastic nonlinear systems and its application in neural networks synchronization

*Sceloporus occidentalis*)

*Chemical Oscillations, Waves, and Turbulence*

*Matcontm, A Toolbox for Continuation and Bifurcation of Cycles of Maps: Command Line Use*

*Numerical Bifurcation Analysis of Maps: From Theory to Software*, Cambridge Monographs on Applied and Computational Mathematics (Cambridge University Press, 2019).

*Numerical Methods for Bifurcations of Dynamical Equilibria*