We study localized patterns in an exact mean-field description of a spatially extended network of quadratic integrate-and-fire neurons. We investigate conditions for the existence and stability of localized solutions, so-called *bumps*, and give an analytic estimate of the parameter range in which these solutions exist when one or more microscopic network parameters are varied. We develop Galerkin methods for the model equations, which enable numerical bifurcation analysis of stationary and time-periodic spatially extended solutions. We study the emergence of patterns composed of multiple bumps, which are arranged in a snakes-and-ladders bifurcation structure if a homogeneous or heterogeneous synaptic kernel is suitably chosen. Furthermore, we examine time-periodic, spatially localized solutions (*oscillons*) in the presence of external forcing, and in autonomous, recurrently coupled excitatory and inhibitory networks. In both cases, we observe period-doubling cascades leading to chaotic oscillations.

Spatially extended networks of spiking model neurons are capable of producing spatiotemporal patterns that are observed experimentally in neuronal tissues. An important tool in investigating such patterns is low-dimensional neural field models, which describe the macroscopic dynamics of such networks. Many neural field models are derived heuristically and do not fully describe the dynamics of the underlying network of spiking neurons. We utilize a recently derived mean-field description for networks of quadratic integrate-and-fire (QIF) neurons, which yields an accurate description of the mean firing rate and the mean membrane potential of the network. Contrary to other neural field models, this model contains nonlinearities of the mean-field variables only up to quadratic order, which makes it amenable to Galerkin methods for the numerical approximation of the problem. This allows us to study the bifurcation structure of stationary localized solutions and of time-varying localized solutions up to the emergence of chaos.

## I. INTRODUCTION

Localized states in neuronal networks, so-called bumps, are related to working memory^{1,2} and feature selectivity,^{3} whereby neurons encoding similar stimuli or features show an increased firing rate for the duration of the related cognitive task. Neural fields are well-known coarse-grained models of spatiotemporal neuronal activity,^{4–6} capable of reproducing dynamic phenomena found experimentally, such as traveling waves, temporal oscillations, and spatially localized states.^{7,8} A challenge faced in the derivation of neural field models is to establish an accurate mean-field description of the spiking dynamics of the underlying microscopic neural network. Classical neural field models recover the microscopic dynamics only in the limit of slow synapses,^{9} and the derivation of neural mass or neural field models from networks of spiking neurons is still an active area of research.^{10–18} In addition, in neural fields, the network firing rate is not an emergent quantity but rather the result of a modeling choice.

Some limitations can be overcome if the microscopic model is a heterogeneous network of synaptically coupled $\theta $ or QIF neurons, subject to random, Cauchy-distributed background currents. Recently, it has been shown that heterogeneous networks of $\theta $- and QIF neurons admit an exact mean-field description,^{19,20} which has later been expanded to spatially extended networks.^{21–24} In the thermodynamic limit, the network admits an exact mean-field description in terms of the network mean firing rate and voltage^{20} or in terms of a complex-valued order parameter.^{21,25}

We study a network of $n$ quadratic integrate-and-fire neurons,

where $V_i$ is the membrane potential of the $i$th neuron, $\eta_i$ an intrinsic current, $s_i$ the synaptic input, and $J>0$ a global coupling parameter. The $i$th neuron emits a spike when $V_i$ reaches the firing threshold $V_\theta$, and $V_i$ is reset immediately to $V_r$. Following Ref. 20, we distribute $\{\eta_j : j=1,\dots,n\}$ according to a Lorentzian distribution using the formula $\eta_j=\eta+\Delta\tan\left[\tfrac{\pi}{2}\tfrac{2j-n-1}{n+1}\right]$, where $\eta$ is the center and $\Delta$ the half-width of the Lorentzian distribution. An important difference in the model considered in the present paper is that neurons are distributed in space, in a domain $\Omega=(-L/2,L/2]$, with $L\gg 1$, at evenly spaced positions $\{x_i=i\,\delta x-L/2 : i\in\{1,\dots,n\}\}$, with $\delta x=L/n$. We associate with each lattice point $x_i$ a random component of the vector $\{\eta_j\}$, without repetitions. The synaptic current received by a neuron is determined by the synaptic footprint as follows:

where $w(x,y)$ models the strength of synaptic coupling from position $y$ to position $x$ in the network and $t_j^k$ is the emission time of the $k$th spike of the $j$th neuron. The kernel $a(t)$ represents synaptic activation in response to incoming spikes, e.g., exponential synapses $a(t)=e^{-t/\tau_s}/\tau_s$ with synaptic time scale $\tau_s$.^{26,27} In the mean-field description, we will let $\tau_s\to 0$, i.e., $a(t)=\delta(t)$. We note in passing that, as demonstrated for leaky integrate-and-fire neurons,^{28} the mean-field derivation and the limit $\tau_s\to 0$ do not commute.
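The deterministic Lorentzian sampling of the background currents quoted above can be sketched as follows; the center and half-width used here are illustrative, not the paper's values:

```python
import numpy as np

def lorentzian_currents(n, eta_bar, delta):
    """Deterministic Lorentzian sample following Ref. 20:
    eta_j = eta_bar + delta * tan(pi/2 * (2j - n - 1)/(n + 1))."""
    j = np.arange(1, n + 1)
    return eta_bar + delta * np.tan(0.5 * np.pi * (2 * j - n - 1) / (n + 1))

# For odd n the middle entry is exactly eta_bar, so the sample median
# recovers the center of the distribution.
eta = lorentzian_currents(10001, -5.0, 1.0)
print(np.median(eta))  # -> -5.0
```

The sample is then assigned to lattice points in random order, which decorrelates the intrinsic currents from the spatial positions.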

A neural field model that describes, without approximation, the average firing rate $r(x,t)$ and the average membrane potential $v(x,t)$ of the spatially extended network presented above has been developed recently,^{23}

This neural field model inherits the coupling parameter $J$ and the parameters $\eta$ and $\Delta$ from the microscopic, spiking network. The mean-field description is exact in the limit $n\to\infty$ and $V_\theta=-V_r\to\infty$. The spatial coupling, or synaptic footprint, is given by the integral operator,

For the concrete calculations presented below, we will assume $\Omega=\mathbb{R}$ or $\Omega=(-L/2,L/2]\cong\mathbb{S}$ with $L\gg 1$ (a ring of large circumference). We use $\Omega=\mathbb{R}$ for the theoretical framework in Sec. II and the Hermite–Galerkin method, and $\Omega=(-L/2,L/2]\cong\mathbb{S}$ for the Fourier–Galerkin method and the numerical integration of both the macroscopic and microscopic model equations, as periodicity is enforced in this setting. We will study the model with a variety of kernels but, unless stated otherwise, we assume, with a small abuse of notation, $w(x,y)=w(|x-y|)$ and

when $\Omega=\mathbb{R}$, or take the $2L$-periodic extension of $w$ when $\Omega=(-L/2,L/2]\cong\mathbb{S}$. Hence, our default synaptic kernel will depend on the distance between two points in $\Omega$ and will have long-range inhibition and short-range excitation. With these choices, $w\otimes r$ is a convolution and $\int_\Omega w(y)\,dy=1$.
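As an illustration of such a normalized kernel, the following sketch uses a difference of exponentials with short-range excitation and long-range inhibition; the coefficients are hypothetical and not necessarily those of kernel (5):

```python
import numpy as np

def w(x, a1=2.0, b1=1.0, a2=1.0, b2=0.5):
    """Difference-of-exponentials kernel: narrow excitation (decay b1)
    minus broad inhibition (decay b2 < b1).  Each term (b/2)exp(-b|x|)
    integrates to one, so int_R w = a1 - a2 = 1 with these values."""
    return 0.5 * a1 * b1 * np.exp(-b1 * np.abs(x)) \
         - 0.5 * a2 * b2 * np.exp(-b2 * np.abs(x))

# Trapezoidal check of the normalization on a large truncated domain.
x = np.linspace(-50.0, 50.0, 200001)
print(np.trapz(w(x), x))  # -> approximately 1.0
```

On a finite ring, truncating the tails at $\pm L/2$ introduces the small normalization error quantified later in Sec. III.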

This neural field model is related to mean-field descriptions of networks of theta neurons^{19,21,24,25} and was obtained using the Ott–Antonsen ansatz.^{29} It retains the transient dynamics of the microscopic network, including spike synchrony, and has, therefore, a richer dynamic repertoire than purely rate-based models.^{30} The derivation of such neural field models is analogous to mean-field approaches for spatially extended networks of phase-coupled oscillators.^{31} An example of localized solutions in this model is shown in Fig. 1(a), alongside a numerical simulation of the microscopic system of spiking neurons in Fig. 1(b).

The main aim of the present paper is to study spatiotemporal localized patterns supported by this model, such as the one presented in Fig. 1. Our investigation will be primarily numerical, and, therefore, we will also introduce several numerical schemes for the approximation of the QIF neural field model. The paper is structured as follows: in Sec. II, we discuss analytical methods to study stationary solutions and their bifurcations; in Sec. III, we introduce the numerical methods used to perform numerical bifurcation analysis of stationary and time-periodic localized structures, which are presented in Secs. IV and V, respectively; and we make a few concluding remarks in Sec. VI.

## II. STATIONARY SOLUTIONS

Stationary states of Eq. (3) are determined by the conditions $\partial_t r=\partial_t v=0$. Bounded solutions with $r(x)>0$ satisfy

The model supports both uniform and non-uniform steady states, which we discuss below in further detail.

### A. Spatially uniform states

Solutions to (6) depend, in general, on $x$. Spatially uniform solutions, for which $Jw\otimes r=Jr$, satisfy the quartic equation,

which has the following four solutions:

where $p$, $q$, and $S$ are given by

with

respectively. Physically relevant solutions are positive and real, and an inspection of the equations above reveals that $r_4$ must be discarded and that the system admits either one or three homogeneous steady states. At sufficiently small (large) $\eta$, only one stable fixed point exists, represented by $r_3$ ($r_1$); moreover, there exists an interval in parameter space where the stable solutions $r_1$ and $r_3$ coexist with $r_2$, which is unstable. These conclusions justify the bifurcation diagram found in Refs. 20 and 23 and reported in Fig. 2(a).
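The admissible roots can be checked numerically. The sketch below assumes the spatially clamped steady-state relations of the QIF mean field (Ref. 20), $0=\Delta/\pi+2rv$ and $0=v^2+\eta-\pi^2r^2+Jr$; eliminating $v$ yields a quartic in $r$. Parameter values are illustrative:

```python
import numpy as np

def uniform_states(eta, J, delta):
    """Real, positive steady states of the uniform QIF mean field
    (sketch): eliminating v = -delta/(2*pi*r) from the steady-state
    relations gives -pi^2 r^4 + J r^3 + eta r^2 + delta^2/(4 pi^2) = 0."""
    coeffs = [-np.pi**2, J, eta, 0.0, delta**2 / (4 * np.pi**2)]
    roots = np.roots(coeffs)
    r = roots[np.abs(roots.imag) < 1e-10].real
    return np.sort(r[r > 0])          # discard the negative root (r_4)

# In the bistable window, three admissible rates coexist.
print(uniform_states(eta=-5.0, J=15.0, delta=1.0))  # three positive roots
```

Outside the bistable window, the same function returns a single positive root.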

Loci of saddle-node bifurcations in the $(\eta,J)$-plane can be found by setting $d\eta/dr=0$ in the first equation in (6), which, combined with (7), yields a parameterization in $r$,

or, more explicitly,

where $\pm$ denotes the two branches of saddle-node bifurcations, which collide at a cusp

### B. Turing bifurcations

A first step toward the construction of heterogeneous steady states is the determination of Turing bifurcations, which mark points in parameter space where a spatially uniform solution becomes unstable to spatially periodic patterns. Spatially extended networks of QIF or $\theta$ neurons are known to display this instability,^{23,24} and here, we present an analytic determination of the loci of such bifurcations in parameter space. Turing bifurcations of a homogeneous steady state $(r,v)$ can be identified by linear stability analysis of the model equations in Fourier space, which results in the following eigenvalue problem:

where $\hat w(k)$ is the Fourier transform of the connectivity kernel. A sufficient condition for a Turing bifurcation is the existence of a critical wavenumber $k_c>0$ for which $\det A(k_c)=0$, which yields

and

Combining (15) and (16) results in an equation for the loci of the Turing bifurcation in the $(\eta,J)$-plane. As $k_c\to 0$, the resulting equation recovers (13), since $\hat w(k_c)\to 1$. This analytic result agrees well with the numerical calculations of these loci, which will be presented further below.
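To illustrate, assume the linearization of the uniform QIF mean field, so that $A(k)$ depends on $k$ only through $\hat w(k)$, together with the hypothetical difference-of-exponentials kernel used earlier (both are assumptions of this sketch). The critical wavenumber maximizing $\hat w$ can then be located numerically:

```python
import numpy as np

def w_hat(k, a1=2.0, b1=1.0, a2=1.0, b2=0.5):
    """Fourier transform of the illustrative kernel: each term
    (b/2)exp(-b|x|) transforms to b^2/(b^2 + k^2); w_hat(0) = a1 - a2 = 1."""
    return a1 * b1**2 / (b1**2 + k**2) - a2 * b2**2 / (b2**2 + k**2)

def det_A(k, r, v, J):
    """det A(k) for the linearization about a uniform state (r, v),
    assuming the Jacobian A(k) = [[2v, 2r], [J*w_hat(k) - 2*pi^2*r, 2v]]."""
    return 4.0 * v**2 - 2.0 * r * (J * w_hat(k) - 2.0 * np.pi**2 * r)

# The Turing condition det A(k_c) = 0 is first met at the wavenumber
# maximizing w_hat; for this kernel the maximum sits at finite k_c > 0.
k = np.linspace(0.0, 5.0, 100001)
kc = k[np.argmax(w_hat(k))]
print(kc, w_hat(kc))  # k_c ~ 0.4, with w_hat(k_c) > w_hat(0) = 1
```

Since $\hat w(k_c)>\hat w(0)$, a finite-wavelength mode destabilizes before the homogeneous one as $J$ is increased, which is the Turing scenario described above.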

### C. Spatial-dynamical system

After studying uniform and spatially periodic steady states, we move to the construction of localized steady states supported by the models. One strategy to study localized stationary states in nonlinear models posed on $\mathbb{R}$ is to construct solutions to boundary-value problems derived from the model’s steady state equations.^{32–36} With this approach, localized steady states correspond to homoclinic orbits of a dynamical system in which $x$ plays the role of time (hence the term *spatial dynamics*).

In this section, we make some preliminary considerations on the spatial dynamics of steady state solutions to (3), although we do not explicitly study the associated spatial-dynamical system, as we will construct our solutions numerically in the following sections. Introducing the substitutions,

the steady state equation (6) is recast as

We note that Eq. (19) is formally equivalent to the Amari steady state equation;^{6} in Amari’s theory, $u$ represents the voltage, whereas in this case $u$ combines the steady state’s voltage and rate and scales as $u\sim -v^2-\eta$ for small $r$ and as $u\sim \pi^2 r^2-\eta$ for large $r$, respectively.

Importantly, the identification with the Amari equation allows us to use spatial dynamics to characterize localized steady state solutions.^{37–40} The Fourier transform of $w$ is of the form $\hat w(k)=\Pi_1(k^2)/\Pi_2(k^2)$, with $\Pi_i$ a polynomial of order $i$, if $w$ is the bi-exponential kernel (5). Hence, the integral kernel can be regarded as the Green’s function of a differential operator. In particular, the bi-exponential kernel (5) leads to the differential equation,

where a prime denotes differentiation with respect to $x$. The equation above can be cast as a four-dimensional, first-order spatial-dynamical system in the vector $(u,u',u'',u''')$, which we omit here for brevity. To construct localized solutions to (3), we proceed in the same spirit as Refs. 32–37, 39, and 40: to each homogeneous steady state of (3) corresponds one value $r_j$ in (8), hence one value $u_j$ in (17) and one constant solution $(u_j,0,0,0)$ to (20); in addition, there exists a region in parameter space where $r_1$ and $r_3$ coexist and are stable [see also Fig. 2(a)]. A localized steady state of (3) is identified with a bounded, sufficiently regular function $u\colon\mathbb{R}\to\mathbb{R}$ which satisfies (20) with boundary conditions

Furthermore, we note that the quantity,

is conserved in the sense that, if (20) holds, then

Therefore, we expect to construct a localized stationary state in a region of parameter space where $H(u_1,0,0,0)=H(u_3,0,0,0)$. With a slight abuse of notation, we write this condition in terms of the variable $r$ as $H(r_1)=H(r_3)$, where $H$ is given by

In analogy with the literature mentioned above, we call *Maxwell points* the values in the $(\eta,J)$-plane where the condition $H(r_1)=H(r_3)$ is met. We display the Maxwell point for our standard parameter set in Fig. 2(b), and we plot the locus of Maxwell points and the bistability region in Fig. 2(c).

## III. NUMERICAL SCHEMES

As anticipated in Secs. I and II, stationary states beyond onset are computed numerically; hence, we present, in this section, several numerical schemes used in the upcoming computations. In preparation for presenting the schemes, we rewrite the model as an ordinary differential equation on a function space. To simplify the notation, we apply, in this section, the scaling $r\mapsto r/\pi$, $J\mapsto \pi J$ to (3) and obtain

where $W$ is the integral operator defined as $(Wr)(x)=J(w\otimes r)(x)=J\int_\Omega w(x,y)r(y)\,dy$. In the system above, we assume $r,v\colon\mathbb{R}\to L^2_\rho(\Omega,\mathbb{C})$ (also denoted by $L^2_\rho(\Omega)$), that is, at each time $t$, $r(t)$ and $v(t)$ belong to a weighted Lebesgue space of complex-valued functions defined on $\Omega$, with inner product,

and norm $\|f\|_\rho=\langle f,f\rangle^{1/2}$. Note that the subscript $\rho$ will be omitted when $\rho(x)\equiv 1$. We assume that, once complemented with initial conditions, system (22) defines a well-posed Cauchy problem on $L^2_\rho(\Omega)\times L^2_\rho(\Omega)$.

### A. Galerkin schemes

Galerkin schemes are derived by introducing a complete orthogonal basis $\{\phi_i : i\in\mathbb{N}\}$ for the weighted space $L^2_\rho(\Omega)$ and seeking an approximation in the $n$-dimensional subspace spanned by $\{\phi_i : i\in\Lambda_n\}$, where $\Lambda_n$ is an index set with $n$ elements, as follows:

A Galerkin scheme for (22) is then given by

that is,

for $i\u2208\Lambda n$, with coefficients given by

#### 1. Fourier–Galerkin scheme

When $\Omega=(-L/2,L/2]\cong\mathbb{S}$, the functions $r(t)$ and $v(t)$ are $L$-periodic. Therefore, we choose the Fourier basis $\phi_j(x)=\exp(2\pi ijx/L)$, $j\in\mathbb{Z}$, which is a complete orthogonal basis for $L^2(\Omega)$. The index set for this case is $\Lambda_n=\{-n/2,\dots,n/2-1\}$ with $n$ even. Exploiting the trigonometric properties of the Fourier basis, we obtain

In passing, we note that $\beta_{ij}$ can also be expressed compactly in terms of the Fourier coefficients of the kernel $w$ if the operator $W$ is convolutional. In addition, requiring $r$ and $v$ to be real-valued implies $(R_i,V_i)=(R_{-i}^*,V_{-i}^*)$. We call this method the *Fourier–Galerkin scheme*.
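The closure afforded by the quadratic nonlinearities can be illustrated directly: the Fourier coefficients of a product such as $rv$ are a scaled circular convolution of the two coefficient sequences, while the convolutional term $Wr$ merely rescales each coefficient. A minimal check (the grid and test profiles are hypothetical, not the paper's):

```python
import numpy as np

def circ_conv(a, b):
    """Circular convolution c_k = sum_m a_m * b_{(k - m) mod n}."""
    n = len(a)
    return np.array([np.sum(a * np.roll(b[::-1], k + 1)) for k in range(n)])

# The DFT of a pointwise product r*v equals the circular convolution of
# the DFTs divided by n, which is why quadratic terms stay closed in a
# Fourier-Galerkin truncation (illustrative check).
n, L = 64, 50.0
x = -L / 2 + L * np.arange(1, n + 1) / n
r, v = np.exp(-x**2), np.cos(2 * np.pi * x / L)
R, V = np.fft.fft(r), np.fft.fft(v)
print(np.allclose(np.fft.fft(r * v), circ_conv(R, V) / n))  # -> True
```

In practice, such products are evaluated pseudospectrally with the FFT rather than by the $O(n^2)$ convolution shown here.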

#### 2. Hermite–Galerkin scheme

When $\Omega=\mathbb{R}$, a natural basis for the Galerkin scheme is given by the Hermite polynomials,

which are a complete orthogonal set for $L^2_\rho(\Omega,\mathbb{R})$ with weight $\rho(x)=\exp(-x^2)$. For this scheme, $\Lambda_n=\{0,\dots,n-1\}$. To avoid problems with the numerical evaluation of $\phi_j$ for large $|x|$, we derive an alternative scheme, which uses inner products with weight $\rho(x)\equiv 1$, as in the Fourier–Galerkin scheme. We seek a solution to (22) in the form

with $R_0$ and $V_0$ constant in $x$ and $\tilde r,\tilde v\in L^2(\Omega,\mathbb{R})$. This leads to the system

in which the homogeneous background dynamics for $(R_0,V_0)$ is decoupled from $(\tilde r,\tilde v)$ and follows the spatially clamped QIF mean field.^{20} Since the Hermite functions,

are an orthogonal set for $L^2(\Omega,\mathbb{R})$, an approximation to $\tilde r,\tilde v$ is sought in the space spanned by $\phi_j$, with $j\in\Lambda_n=\{1,\dots,n\}$, giving the scheme

for $i\in\Lambda_n$. We call this method the *Hermite–Galerkin scheme*.
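For reference, the orthogonality of the Hermite functions underlying this scheme can be verified with Gauss–Hermite quadrature; the sketch below assumes the standard normalization $\phi_j(x)=H_j(x)e^{-x^2/2}/\sqrt{2^j j!\sqrt{\pi}}$, which makes the set orthonormal:

```python
import numpy as np
from numpy.polynomial.hermite import hermval, hermgauss
from math import factorial, pi, sqrt

# Under the Gauss-Hermite weight exp(-x^2), products of two normalized
# Hermite functions reduce to polynomials, so the quadrature below is
# exact for the degrees tested.
m = 5                                    # number of basis functions tested
xq, wq = hermgauss(20)                   # nodes/weights for weight exp(-x^2)
H = np.array([hermval(xq, np.eye(m)[j]) / sqrt(2.0**j * factorial(j) * sqrt(pi))
              for j in range(m)])
G = (H * wq) @ H.T                       # Gram matrix of the basis
print(np.allclose(G, np.eye(m)))  # -> True
```

Evaluating $\phi_j$ through the Gaussian-damped form above, rather than through the bare polynomials $H_j$, is what avoids the overflow for large $|x|$ mentioned in the text.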

### B. Fourier collocation scheme

A *Fourier collocation scheme* can be derived in the case $\Omega=(-L/2,L/2]\cong\mathbb{S}$. This method, which has been used in the past for Amari neural field models^{40,41} and the QIF neural field model,^{23} represents $(r_n,v_n)$ by its values at the grid points $x_j=-L+2Lj/n$, $j\in\Lambda_n=\{1,\dots,n\}$,

and evaluates $(Wr_n)_i$ either with a quadrature rule or, more efficiently, with a pseudospectral evaluation if $W$ is convolutional.
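The pseudospectral evaluation of the convolutional term can be sketched as follows; kernel, rate profile, and parameters are illustrative, and the FFT result is checked against a direct quadrature of the periodized kernel:

```python
import numpy as np

# (W r)(x) = J (w ⊗ r)(x) on a periodic grid is a circular convolution,
# computable in O(n log n) with the FFT instead of O(n^2) quadrature.
n, L, J = 256, 50.0, 1.0
dx = L / n
x = (np.arange(n) - n // 2) * dx                 # grid on [-L/2, L/2)
w = np.exp(-np.abs(x)) - 0.25 * np.exp(-0.5 * np.abs(x))  # sample kernel
r = 1.0 / np.cosh(x)                                      # sample profile

# FFT route: recenter the kernel at index 0, multiply spectra, transform back.
Wr_fft = J * dx * np.real(np.fft.ifft(np.fft.fft(np.fft.ifftshift(w))
                                      * np.fft.fft(r)))

# Direct quadrature: (W r)_k = J dx sum_j w(x_k - x_j) r_j, wrapped mod n.
D = np.array([[w[(k - j + n // 2) % n] for j in range(n)] for k in range(n)])
Wr_direct = J * dx * (D @ r)
print(np.allclose(Wr_fft, Wr_direct))  # -> True
```

The same spectral multiplication underlies the compact form of the coefficients $\beta_{ij}$ in the Fourier–Galerkin scheme.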

### C. Numerical considerations

To the best of our knowledge, the methods presented above are novel, and we leave the analysis of the numerical properties of these schemes to a separate publication. The calculations presented here have been tested against event-driven simulations of a large network of spiking neurons. We employ our schemes as follows: the Fourier collocation scheme with $n=5000$ is generally used for time simulations to obtain accurate initial guesses for the continuation. However, we observed that time-periodic orbits are reproduced with a similar accuracy by the Hermite–Galerkin scheme with just $n=50$ modes; hence, we select this scheme to continue periodic orbits. Finally, we use the Fourier–Galerkin scheme with $n=200$ for bifurcation analysis of steady states on large domains, when solutions are non-localized.

We compare the results of the QIF neural field model with the dynamics of the underlying network of spiking neurons. We integrate Eq. (1) using the Euler method, with time step $\Delta t=10^{-4}$. The domain is chosen to be $\Omega=(-L/2,L/2]$ with periodic boundary conditions. We choose $L=50$ and $5\times 10^5$ model neurons, which ensures a good correspondence to the neural field model ($\delta x=10^{-4}$); with $L=50$, the relative error of the normalization of the synaptic kernel, $\int_\Omega w(y)\,dy=1$, is less than $10^{-5}$. We note here that the microscopic description only matches the mean-field description if $\delta x$ is much smaller than the characteristic length scale of the synaptic kernel $w$. The synaptic input to each neuron is computed with Eq. (2). We follow Ref. 20 in computing the synaptic integration across a time window with $a_\tau(t)=\Theta(\tau-t)/\tau$, $\tau=10^{-3}$, and in setting $V_\theta=-V_r=100$, with a refractory period of $2/V_i$ once the $i$th neuron has exceeded $V_\theta$. The latter approximates the limit $V_\theta=-V_r\to\infty$. The refractory period is rounded to the nearest multiple of $\Delta t$, and after the refractory period, $V_i$ is set to $-V_i$. Raster plots are generated using a subset of $10^3$ randomly chosen model neurons.
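A stripped-down version of this microscopic integration can be sketched for a globally coupled network (an assumption of this sketch: the kernel-weighted input of Eq. (2) is replaced by the instantaneous population rate, and all parameter values are illustrative):

```python
import numpy as np

# Minimal Euler sketch of a QIF network with threshold V_th = -V_r = 100,
# refractory time 2/V_i after a threshold crossing, and reset V -> -V.
n, dt, steps, J, V_th = 1000, 1e-4, 5000, 15.0, 100.0
j = np.arange(1, n + 1)
eta = -5.0 + np.tan(0.5 * np.pi * (2 * j - n - 1) / (n + 1))  # Lorentzian
V = np.zeros(n)
refr = np.zeros(n)            # remaining refractory time per neuron
s, total = 0.0, 0
for _ in range(steps):
    active = refr <= 0.0
    V[active] += dt * (V[active] ** 2 + eta[active] + J * s)
    fired = active & (V >= V_th)
    refr[fired] = 2.0 / V[fired]            # enter refractory period
    s = np.count_nonzero(fired) / (n * dt)  # crude instantaneous rate
    total += np.count_nonzero(fired)
    refr[~active] -= dt
    done = ~active & (refr <= 0.0)
    V[done] = -V[done]                      # leave refractoriness: reset
rate = total / (n * steps * dt)
print(rate)  # mean population rate over the run (finite and positive)
```

In the full simulation, the rate estimate is smoothed over the window $a_\tau$ and weighted by the synaptic kernel, as described above.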

## IV. STATIONARY LOCALIZED SOLUTIONS

We use the numerical schemes presented in Sec. III to study the bifurcation structure of stationary localized solutions to the QIF mean-field model. We initially study the model with our default excitatory-inhibitory kernel (5) and then show that a snaking bifurcation scenario is supported when the kernel is switched to a homogeneous oscillatory kernel, or to a kernel with harmonic heterogeneities, similarly to what is found for Amari neural field models.

### A. Local excitation, lateral inhibition kernel

We set $w$ as in (5), generate a stationary localized solution by numerically integrating the model equations in time, and then implement the Fourier–Galerkin scheme to continue the localized solutions in $\eta $, using AUTO.

In Fig. 3, we show the bifurcation diagram of localized solutions. Across a range of parameters, these occur as a pair of one wide, stable solution and one narrow, unstable solution. The solution branch connects to the branch of uniform solutions at points where Turing bifurcations occur, which also give rise to a branch of periodic solutions. Using the Fourier basis, it can be shown that the stable solution branch approaches the Maxwell point asymptotically, and solutions grow wider, resembling two stationary, interacting wave fronts. Because of the periodic boundary conditions, the solution branch grows larger again and forms another stable/unstable pair of solutions with locally low activity (not shown). The latter could be regarded as stationary versions of the *traveling anti-pulses* reported in Refs. 42 and 43.

Because stable solutions are of particular interest, we present a two-parameter bifurcation diagram [Fig. 4(a)] of the saddle-node bifurcations that delimit the branch of stable solutions. As expected, the locus of saddle-node bifurcations of localized states encloses the Maxwell point. In addition, the loci of saddle-node bifurcations of localized and uniform steady states meet at two separate cusps, as shown in Fig. 4(b).

The bifurcation behavior of localized solutions described above is robust to changes in coupling parameters but, as we shall see below, it is strongly affected by changes in the kernel.

### B. Snaking with homogeneous kernel

*Homoclinic snaking* is a phenomenon that describes the formation of multiple, coexisting localized solutions in spatially extended models. Steady states are arranged in branches of intertwined snaking bifurcation diagrams, connected via ladders.^{33,35,44–46} Adopting the spatial-dynamics approach outlined above, localized solutions are interpreted as homoclinic orbits to a fixed point. Snaking solution branches correspond to symmetries of the problem, which are broken along the ladder branches.^{34,36} This scenario is not limited to partial differential equations; it has also been studied in the nonlocal Swift–Hohenberg equation,^{47} as well as in neural field models.^{37,38,40,41,48–50}

In the simplest setting, localized snaking solutions are found in regions of parameter space where there is bistability between a stationary homogeneous state and a periodic state. In nonlocal neural fields, homoclinic snaking has been observed with the following homogeneous damped-oscillatory kernel:^{39}

which we now adopt also for the QIF neural field model. This kernel leads to a subcritical Turing bifurcation of the lower stable branch of uniform solutions, from which an unstable branch of spatially periodic solutions emerges. This branch undergoes a saddle-node bifurcation, where spatially periodic solutions become stable. Eventually, the branch connects to the upper stable branch of uniform solutions (see Fig. 5).

As anticipated, spatially localized snaking solutions are found in this region of parameter space, and they are arranged in a typical snakes-and-ladders bifurcation structure, which is displayed in Fig. 6.

### C. Snaking with heterogeneous kernel

It is known that snaking bifurcation scenarios can be triggered by heterogeneities in the underlying evolution equations. Examples discussed in the literature include the Swift–Hohenberg,^{51} Amari,^{40} and Ginzburg–Landau^{52} equations. In neural field models, heterogeneities are naturally introduced via harmonic perturbations of a homogeneous (distance-dependent) kernel, which break the translational invariance of the problem.^{53–56} In Ref. 40, we have shown that the following kernel leads to snaking in the Amari model,

and we, therefore, investigate the effect of this kernel on the QIF neural field model.

In the absence of spatial forcing ($a=0$), a system with exponential connectivity does not yield stable localized solutions (see Fig. 7). In the presence of modulation, we find snaking branches that oscillate around the branch obtained for $a=0$ (see Fig. 8). Furthermore, for small values of $a$, the snaking width increases proportionally to the value of $a$ (not shown). These findings indicate that the snaking phenomenon in the QIF neural field model is entirely determined by the kernel choice, as in the Amari case.

## V. OSCILLONS

Various nonlinear models, including chemical, fluid-dynamical, and particle systems, support time-periodic, spatially localized states termed *oscillons* (see Ref. 57 and references therein). A comprehensive theory for the existence and bifurcation structure of such solutions is the subject of experimental, numerical, and analytical investigations. We study oscillons in the QIF neural field model in the two main settings where they are observed in other media: (i) a non-autonomous setting, whereby oscillons emerge as the medium is subject to a homogeneous, exogenous, time-periodic forcing and (ii) an autonomous setting, whereby oscillons emerge spontaneously as one of the model parameters is varied.

### A. Oscillons induced by harmonic forcing

We set up the QIF neural field model subject to a time-dependent, homogeneous, sinusoidal forcing with frequency $\omega$,

and cast it in the following, equivalent autonomous formulation to perform numerical bifurcation analysis:

Note that the numerical framework proposed here is applicable also if the forcing is heterogeneous.
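One standard device for autonomizing sinusoidal forcing, which the formulation above may employ (an assumption of this sketch, since several equivalent constructions exist), is to append a planar oscillator whose stable limit cycle generates the forcing signal:

```python
# Append p' = p + omega*q - p*(p^2 + q^2), q' = -omega*p + q - q*(p^2 + q^2):
# its attracting limit cycle is (p, q) = (sin(omega*t), cos(omega*t)), so the
# forcing enters the model as A*p with no explicit time dependence.
omega, dt = 4.0, 1e-3
p, q = 0.1, 0.0                       # arbitrary start off the cycle
for _ in range(50000):                # Euler integration through the transient
    dp = p + omega * q - p * (p * p + q * q)
    dq = -omega * p + q - q * (p * p + q * q)
    p, q = p + dt * dp, q + dt * dq
print(p * p + q * q)                  # -> approximately 1.0 (unit amplitude)
```

Any initial condition off the origin converges to the unit cycle, so the continuation code never has to treat time as an explicit parameter.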

In this setting, we expect oscillons to emerge without bifurcation from a localized steady state of the QIF neural field model with $A=0$, upon imposing a small-amplitude forcing, $A\ll 1$. We, therefore, select the default kernel (5), set $\eta=-10$, for which the model with $A=0$ supports one stable (wide) and one unstable (narrow) bump (see Fig. 3), and continue time-periodic solutions to (25) in $A>0$ for $\omega=4$, close to the network’s resonant frequency.^{30}

One stable and one unstable branch of oscillons emerge from $A=0$, as shown in Fig. 9, and connect at a saddle-node bifurcation. The stable branch undergoes a sequence of period-doubling bifurcations leading to chaos, and examples of a period-doubled solution and a chaotic solution are shown in Fig. 9, demonstrating the correspondence between the QIF neural field model and the spiking network model.

In a recent study, we have investigated the effect of periodic forcing on a population of excitatory spiking neurons,^{30} whose solutions correspond to the spatially uniform states of the present model. In that context, it was shown that a sufficiently large forcing amplitude is able to suppress homogeneous oscillations. Here, we report that the same statement holds true for forced oscillons: no localized time-periodic solution is found to the right of the saddle-node bifurcation in Fig. 9, where the attractor is a spatially homogeneous, time-periodic state, which can be found by continuing in $A$ the low-activity uniform steady state (not shown).

### B. Spontaneous oscillons in coupled networks of excitatory and inhibitory neurons

In the second scenario, oscillons occur in autonomous systems. Direct numerical simulations of reaction-diffusion systems display oscillons in the proximity of codimension-two Turing–Hopf bifurcations of the homogeneous steady state.^{58,59} Oscillons in these systems have typically been observed as large-amplitude structures; hence, they are conjectured to form via a subcritical Hopf bifurcation of a heterogeneous, spatially localized steady state. This conjecture, however, has not yet been confirmed by numerical bifurcation analysis, which, in contrast to direct numerical simulations, allows one to track both stable and unstable states.

Here, we employ the Hermite–Galerkin scheme to study the formation of oscillons in the QIF neural field model. As mentioned above, a necessary ingredient for oscillons is the presence of oscillatory bifurcations. These bifurcations are precluded in one-population networks of QIF neurons but, as we shall see, are possible in two-population models; therefore, we turn our attention to the following network of coupled excitatory and inhibitory populations:

The subscripts $e$ and $i$ indicate whether a variable or parameter refers to the excitatory or inhibitory population, respectively: the two populations have, for simplicity, the same heterogeneity parameter $\Delta$, but possibly different membrane time constants and average background currents. In single-population mean fields, excitation and inhibition are artificially lumped into a single excitatory-inhibitory kernel [see, for instance, (5), (23), and (24)], whereas in the new, more realistic model, the kernels are separate,

The connectivity parameters are chosen to be $J_e=J_i=J$ to recover a setting similar to the lumped model. In Fig. 10(a), we show the bifurcation diagram of localized solutions using $\eta_e$ as the bifurcation parameter. The bifurcation structure is similar to that of the lumped model, with the exception that the range of parameters for which stable solutions exist is narrower. This computation confirms that stationary bumps are supported by the two-population network. In order to hunt for oscillons, we continue the solution for $\eta_e=-10$ in the parameter $\tau_i$: the bump becomes unstable at a subcritical Hopf bifurcation at $\tau_i\approx 1.14$, restabilizes at a saddle-node bifurcation, and undergoes a sequence of saddle-node bifurcations leading to a torus bifurcation (i.e., a secondary Hopf bifurcation). The branch eventually restabilizes at a further saddle-node bifurcation, leading to a period-doubling cascade, which initiates around $\tau_i\approx 1.26$, and to chaos for $\tau_i>1.29$ [Fig. 10(b)].

## VI. DISCUSSION

We introduced a framework to study localized solutions in a neural field model that was recently derived as an exact representation of the mean-field dynamics of networks of spiking neurons. Although this model does not admit closed-form solutions, as does the Amari model with Heaviside firing rates, we have shown that it is possible to give an analytical estimate of the range of model parameters for which stable localized solutions exist. The structure of the QIF neural field model, which, in contrast to the Amari model, contains only a linear nonlocal term, permits the straightforward use of Galerkin methods.

We have demonstrated that stationary equations can be transformed into a formulation that is equivalent to the stationary Amari model, provided an effective firing rate function is defined. The significance of such a firing rate is chiefly mathematical: the neural field possesses a rate variable, which is combined with the voltage variable in the effective firing rate; however, this transformation allows us to map out patterned steady states of the QIF neural field model using the same toolkit available for the Amari formulation. In both models, localized solutions emerge subcritically from a branch of homogeneous steady states and restabilize at a saddle-node bifurcation. In the Amari model, this behavior is parametrized by a firing threshold, whereas here we use the average excitability of the network to map out solutions. However, there is a correspondence between the excitability of the model used here and the firing threshold in the Amari model, in the sense that an increase in the firing threshold in the latter corresponds to a decrease in the excitability in the former. In addition, techniques developed for piecewise-linear firing rate functions in the Amari model^{60} could be adapted to work for steady states in the QIF neural field model, using the correspondence described above. Furthermore, all branches of stationary solutions computed in this paper, including the snaking branches, also occur in standard rate-based models. The crucial difference lies in the transient dynamics of the two models, which makes the model considered here dynamically richer and more realistic.

The development of a Galerkin method opened up the possibility to study oscillons using numerical bifurcation analysis. We focused here on sinusoidal forcing of bump solutions, a proxy for oscillations that are ubiquitous in neuronal systems. In previous work,^{30} the neural mass version of this model was studied in terms of its response to oscillatory forcing in various frequency bands, and the present paper makes this exploration feasible also in the spatially extended model. We leave this exploration to a future publication.

In coupled networks of excitatory and inhibitory populations, a small change in the inhibitory membrane time scale can have a significant effect on the existence and dynamics of bump solutions and can elicit oscillons. This was demonstrated for instantaneous synapses, and it remains to be seen how the dynamics change when synaptic delays are introduced to the model. Interestingly, oscillatory solutions which undergo torus bifurcations have been observed in spatially extended networks of excitatory and inhibitory neurons with conductance-based dynamics.^{61} Another natural extension would be to examine coupled multi-layer neural field models,^{62} which are known to give rise to localized bump solutions when neither layer does in isolation.^{63}

The Galerkin numerical methods derived in this paper can be applied directly to more general spatially extended models of QIF networks, such as the ones mentioned above. For instance, adding a synaptic variable can be accounted for with an additional Galerkin expansion and $n$ scalar variables per additional evolution equation.

Single population, QIF neural mass models with chemical as well as electrical synapses have recently been developed,^{64} and it was found that oscillations originate at Hopf bifurcations. Spatially extended versions of this model would then have the possibility of forming oscillons with a single population, although it is not clear whether Hopf bifurcations of bumps will occur near Hopf bifurcations of homogeneous states, which are the ones mapped in Ref. 64.

Understanding how slow–fast temporal scales are generated by the discrete network is an open question, which has recently been addressed in sparsely coupled networks of QIF neurons.^{65} Applying our numerical methodology to these macroscopic mean fields is also possible, and one could study how such slow–fast phenomena occur in more realistic, spatially extended networks.

## ACKNOWLEDGMENTS

H.S. acknowledges financial support from the Spanish Ministry of Economics and Competitiveness through the María de Maeztu Program for Units of Excellence in R&D (No. MDM-2014-0445) and Grant (No. MTM2015-71509-C2-1-R) and from the German Research Council [DFG (No. KN 588/7-1) within priority program “Computational Connectomics” (SPP 2041)].

## REFERENCES
