Superconducting electronic circuits have much to offer with regard to neuromorphic hardware. Superconducting quantum interference devices (SQUIDs) can serve as an active element to perform the thresholding operation of a neuron's soma. However, a SQUID has a response function that is periodic in the applied signal. We show theoretically that if one restricts the total input to a SQUID to maintain a monotonically increasing response, a large fraction of synapses must be active to drive a neuron to threshold. We then demonstrate that an active dendritic tree (also based on SQUIDs) can significantly reduce the fraction of synapses that must be active to drive the neuron to threshold. In this context, the inclusion of a dendritic tree provides dual benefits of enhancing computational abilities of each neuron and allowing the neuron to spike with sparse input activity.

Motivations for developing artificial spiking neural systems include efficient hardware implementations of brain-inspired algorithms and construction of large-scale systems for studying the mechanisms of cognition. While much effort toward these ends employs semiconductor hardware based on silicon transistors,1–5 superconducting electronics have also received considerable attention. Superconducting circuits based on Josephson junctions (JJs)6,7 have strengths that make them appealing for neural systems, including high speed, low energy consumption per operation, and native thresholding/spiking behaviors. In particular, two-junction superconducting quantum interference devices (SQUIDs) are ubiquitous in superconducting electronics, and several efforts aim to utilize SQUIDs for various neuromorphic operations.8–22

In this work, we will use the following component definitions. A synapse is a circuit that receives a single input from another neuron and produces an electrical current circulating in a storage loop. A dendrite is a circuit that receives an input proportional to the electrical output of one or more synapses and/or dendrites, performs a transfer function on the sum of the inputs, and produces an electrical current circulating in a storage loop as the output. A neuron cell body (also known as a soma) receives input proportional to the electrical output of one or more synapses and/or dendrites, performs a threshold operation on the sum of the inputs, and produces an output pulse if the threshold is exceeded. Outputs from the neuron cell body are routed to many downstream synapses. Fan-in is the collection and localization of multiple synaptic or dendritic signals into a dendrite or neuron cell body.

In SQUID-based neurons, magnetic flux applied to the SQUID loop (Φ_a) serves as synaptic input. Only for Φ_a greater than some critical value of the flux (Φ_a^th) will the SQUID produce a train of fluxons as an output signal.23 For Φ_a < Φ_a^th, the SQUID will remain in a quiescent state. Additionally, Φ_a^th can be tuned with a current bias, I_b. However, SQUID neurons differ significantly from their biological counterparts in that the response to Φ_a is periodic with a period of the single flux quantum, Φ_0 = h/2e. In this work, we consider the ramifications for fan-in if we choose to limit the maximum applied flux of the synapses to ensure a monotonic response, and we show that a dendritic arbor can significantly improve fan-in properties.

Fan-in was recently analyzed in superconducting neuromorphic circuits, wherein single-flux quanta are used as signals between neurons.24 However, that work was not concerned with the case in which analog synaptic signals integrated and stored over time could drive a SQUID beyond the first half period of its response function. In the present study, we analyze fan-in in the context of leaky-integrator neuronal circuits that were originally designed for use in large-scale superconducting optoelectronic systems.25–28 However, the conclusions of this paper should be applicable to a wide variety of SQUID neurons.

We begin by considering the simple SQUID circuit shown in Fig. 1(a). This is equivalent to a neuron in which synapses directly feed into the neuron cell body, also known as a point neuron. We assume that all net flux applied to the loop (Φ_a) is due to the weighted contributions of synaptic inputs. This assumption may be realized in practice using gradiometer-style pickup coils,29 on-chip magnetic shielding,30,31 and perhaps an additional tuning transformer and bias line.29 Under such conditions, the response of the SQUID would be monotonic for 0 < Φ_a < Φ_0/2, as seen in Fig. 1(b). Unfortunately, for this simple case, the cost of limiting Φ_a to the monotonic regime is that a large fraction of synapses will need to be active to drive the neuron to threshold. For a neuron with n synapses, each capable of applying a maximum flux of Φ_sy, enforcing monotonicity requires that

\[
n \Phi_{\mathrm{sy}} \le \frac{\Phi_0}{2}. \tag{1}
\]
FIG. 1. (a) SQUID circuit with DC bias (I_b) and flux input (Φ_a) through a transformer. (b) SQUID response function. R_fq is the rate of flux-quantum production as a function of the applied flux to the SQUID loop in units of the magnetic flux quantum Φ_0 = h/2e. Different curves correspond to different bias conditions.

To reach the threshold (Φ_a^th), a critical number p of synaptic inputs must be active. For simplicity, we assume that each active input supplies the maximum value Φ_sy of the flux. This implies Φ_a^th = p Φ_sy. Combining with the upper bound in Eq. (1) gives

\[
\frac{p}{n} \ge \frac{2 \Phi_a^{\mathrm{th}}}{\Phi_0}. \tag{2}
\]
This value corresponds to the minimum synaptic activity required to generate an output from the SQUID; that is, p/n gives the minimum fraction of active synapses required for the neuron to fire.
A crucial task is to determine how low Φ_a^th can be made in practice. The relevant control parameters are the SQUID bias current I_b, the SQUID loop inductance L, and the critical current of a single Josephson junction, I_c. L and I_c are commonly combined in the dimensionless parameter β_L = 2LI_c/Φ_0. A value of β_L = 1 is a standard choice due to noise considerations.23 In the present work, we have solved the SQUID circuit equations numerically (see Ref. 23 and the supplementary material for model details). We have kept β_L = 1 fixed and defined Φ_a^th as the minimum value of the applied flux for which at least one fluxon is produced by the SQUID. An empirical fit to these simulations gives p/n as a function of I_b/I_c:

\[
\frac{p}{n} = A \cos^{-1}\!\left(\frac{I_b}{2 I_c}\right) + B \left[\cos^{-1}\!\left(\frac{I_b}{2 I_c}\right)\right]^{2}, \tag{3}
\]

where A = 0.540 and B = 0.466, and the form is inspired by the analytical solution available in the β_L = 0 case.23

Noise prohibits biasing with I_b arbitrarily close to 2I_c. Single JJs in superconducting digital electronics are often biased with I_b/I_c = 0.7. In a SQUID, this would correspond to I_b/I_c = 1.4, as the SQUID comprises two JJs in parallel, and would require a minimum activity fraction of about 71%. A bias of I_b/I_c = 1.8 is an aggressive operating point and corresponds to p/n ≈ 34%. Such activity levels are incommensurate with principles of efficient information processing in spiking neural networks. Considerations of sparse coding suggest that only 1%–16% of neurons in the brain may be active at any time due to power constraints.32 Additionally, a recent study posits that only 1% of synapses need be active to generate action potentials in sensory neurons.33 It, thus, appears that biologically realistic activity fractions and monotonic response are incompatible for superconducting point neurons.
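As a numerical sanity check, the short Python sketch below evaluates Eq. (3) at the two bias points discussed above. The quadratic-in-arccos form is our reconstruction from the stated constants and quoted activity fractions, so treat it as illustrative rather than as the authors' verbatim fit.

```python
# Minimal check of Eq. (3), assuming the quadratic-in-arccos form reconstructed
# above with A = 0.540 and B = 0.466 (an assumption inferred from the quoted
# activity fractions, not a verbatim expression from the original fit).
import math

A, B = 0.540, 0.466

def min_activity_fraction(ib_over_ic: float) -> float:
    """Minimum point-neuron activity fraction p/n at bias I_b/I_c."""
    theta = math.acos(ib_over_ic / 2.0)  # arccos(I_b / 2 I_c)
    return A * theta + B * theta**2

for ib in (1.4, 1.8):
    print(f"I_b/I_c = {ib}: p/n = {min_activity_fraction(ib):.2f}")
# -> about 0.72 at the conventional bias and 0.34 at the aggressive bias,
#    consistent with the ~71% and ~34% figures quoted in the text.
```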

However, the point neuron is not an accurate model of biological neurons. Instead, synaptic inputs are processed and filtered by an arbor of active dendrites that perform numerous computations,34–36 including intermediate threshold functions between subsets of synapses and the soma37 and detection of synaptic sequences.38 Active dendrites can be significant for adaptation and plasticity,39,40 can dramatically increase the information storage capacity relative to point neurons,41 and when modulated by inhibitory neurons, the dendritic tree can induce a given neuron to perform distinct computations at different times, enabling a given structural network to dynamically realize myriad functional networks.42 Active dendrites have also been identified for their role in reducing the necessary activity fraction to generate action potentials.33 Discussion of dendritic processing in superconducting neural hardware is found in Ref. 27. We now turn our attention to the effects of active dendrites on the fan-in properties of SQUID neurons with monotonic response.

A schematic of a dendritic tree is shown in Fig. 2(a). The architecture consists of input synapses (shown in blue), multiple levels of dendritic hierarchy (yellow), and the final cell body (green). These components have been defined above, and all three can be implemented with SQUID circuits, a self-similarity that facilitates scalable design and fabrication.

FIG. 2. Dendritic tree. (a) Schematic illustration of the tree structure with blue synapses input to yellow dendrites. The neuronal cell body is shown in green with fan-out to downstream synapses. The fan-in factor (n) is labeled, as is the hierarchy level (h), the total depth of the hierarchy (H), and the total number of synapses (N). (b) The fan-in factor as a function of the total number of synapses for different values of the hierarchy depth. (c) The number of dendrites, N_D, normalized by the number of synapses for different values of H. N_D grows as N^(1−1/H) for large N, meaning that even large dendritic arbors do not exorbitantly increase the area or complexity of high fan-in neurons.


We restrict attention to a homogeneous dendritic tree of the form shown in Fig. 2(a), wherein all dendrites receive the same number of inputs, n, which we refer to as the fan-in factor. The neuron cell body resides at level zero of the dendritic hierarchy, and synapses reside at level H, so the total number of synapses is N = n^H. In Fig. 2(a), we show a tree with fan-in factor n = 2 and three levels of hierarchy for a total of N = 8 synapses. For a homogeneous dendritic tree, the relationship among the number of synapses, fan-in factor, and hierarchy depth is plotted in Fig. 2(b). Biological neurons are less uniform and more complex, but homogeneous trees are a good starting point for artificial systems. Figure 2(c) shows how the additional hardware for the dendritic arbor scales as a function of the number of synapses.
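The bookkeeping for such a tree is simple enough to verify directly. The sketch below assumes the soma sits at level 0, dendrites at levels 1 through H−1, and synapses at level H (consistent with Fig. 2), and illustrates the N^(1−1/H) scaling of the dendrite count at fixed depth.

```python
# Bookkeeping for a homogeneous dendritic tree (Fig. 2). Assumes the soma sits
# at level 0, dendrites at levels 1 through H-1, and synapses at level H.
def tree_counts(n: int, H: int):
    N = n**H                              # total synapses, N = n^H
    N_D = sum(n**h for h in range(1, H))  # dendrites: n + n^2 + ... + n^(H-1)
    return N, N_D

# At fixed depth H = 3, N_D approaches N^(1 - 1/3) = N^(2/3) as n grows.
for n in (2, 4, 8, 16):
    N, N_D = tree_counts(n, H=3)
    print(f"n = {n:2d}: N = {N:5d}, N_D = {N_D:4d}, N^(2/3) = {N**(2/3):7.1f}")
```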

Equation (3) is applicable to any dendrite or neuron cell body in the dendritic tree, provided the maximum applied flux is limited to Φ_0/2. Working backward from the cell body, one can calculate that the minimum number of active synapses required to drive the neuron cell body to threshold is P = p^H, and the fraction of synaptic activity at threshold is at least

\[
\frac{P}{N} = \left(\frac{p}{n}\right)^{H}. \tag{4}
\]
Equation (3) is recovered as the special case of H = 1. The exponential dependence of the threshold activity fraction on H implies that dendritic trees can improve fan-in even with a limited depth of the tree. This is illustrated in Fig. 3(a), where the activity fraction as a function of bias is plotted for dendritic trees of varying depths. We see that the point neuron case (H = 1) requires the highest activity fraction but that the situation improves quickly with the depth of the dendritic tree. For instance, with H = 5 and a conventional bias of I_b/I_c = 1.4, only 17% of synapses need to be active, a significant decrease compared to p/n = 71% for a point neuron. Figure 3(b) shows the tree depth required to reach a target activity fraction as a function of the bias current for N = 10^4. Again, for I_b/I_c = 1.4, we see that a 1% activity fraction will require at least H = 14, in which case the numbers of dendrites and synapses are nearly equal. Neuromorphic superconducting systems will almost certainly be more noise tolerant than their digital counterparts,24 so higher bias conditions may be tolerable or even optimal. At these higher "neuromorphic biases," the required depth of the tree for a given activity fraction is significantly lower. If I_b/I_c is taken to be 1.8, a tree depth of only H = 5 is necessary for a sub-1% threshold activity fraction. This dendritic tree would require a dendrite fan-in of n ≈ 6 and about 1900 intermediate dendrites. Considering that every synapse requires a SQUID, the additional hardware fraction for this dendritic tree is less than 20%. In general, the number of dendrites, N_D, will scale as N^(1−1/H) for large N [Fig. 2(c)]. The addition of dendritic trees to high fan-in neurons will not be particularly cumbersome (area estimates for synapses can be found in Appendix B of Ref. 43) but can significantly reduce the activity fraction, even down to 1%. Such biologically realistic values are unattainable for point neurons, whose applied flux is limited to the range of monotonic response, providing a physical motivation for the use of dendritic trees in superconducting neurons to complement the computational motivations discussed above.
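The required depth can be estimated directly from Eq. (4). The sketch below, again assuming our reconstructed fit for Eq. (3), computes the smallest H with (p/n)^H below a target fraction; because a physical design must round the per-dendrite input count p up to an integer, these continuum estimates can differ from Fig. 3(b) by a level or so.

```python
# Estimate the dendritic-tree depth H needed to reach a target threshold
# activity fraction, using Eq. (4): P/N = (p/n)^H. The per-level fraction p/n
# comes from the reconstructed fit to Eq. (3) (an assumption; see above).
import math

A, B = 0.540, 0.466

def point_fraction(ib_over_ic: float) -> float:
    theta = math.acos(ib_over_ic / 2.0)
    return A * theta + B * theta**2

def required_depth(ib_over_ic: float, target: float) -> int:
    """Smallest integer H satisfying (p/n)^H <= target."""
    return math.ceil(math.log(target) / math.log(point_fraction(ib_over_ic)))

print(required_depth(1.4, 0.01))  # ~14-15 levels at the conventional bias
print(required_depth(1.8, 0.01))  # 5 levels at the aggressive bias, as in the text
```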
FIG. 3. (a) The fraction of synapses required to be saturated to drive a neuron to the threshold as a function of the normalized bias to dendrites and the cell body. H = 1 corresponds to a point neuron. (b) The required depth of the dendritic arbor (H) for a neuron with 10^4 synapses to reach a given activity fraction as a function of the bias current.

The energy consumption of the dendritic arbor itself deserves consideration. For future superconducting systems, dynamic power will dominate static power consumption. At threshold, the minimum number of active units at hierarchy level h is p^h, while the total number of units at that level is n^h. Summing these geometric series from the soma (h = 0) to the synapses (h = H), the total fraction of all units (synapses, dendrites, and soma) that must be active to reach the threshold is given by

\[
F = \frac{\sum_{h=0}^{H} p^{h}}{\sum_{h=0}^{H} n^{h}} = \frac{\left(p^{H+1}-1\right)\left(n-1\right)}{\left(p-1\right)\left(n^{H+1}-1\right)}. \tag{5}
\]
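To make the comparison concrete, the sketch below counts minimum active units for a point neuron and for a deep tree at the aggressive bias point, rounding p up to an integer per dendrite. The tree parameters follow the H = 5, n ≈ 6 example above, and the per-level fraction is again our reconstruction of Eq. (3).

```python
# Count minimum active units (Eq. (5)) for a point neuron vs a dendritic tree
# at I_b/I_c = 1.8. Rounds p up to an integer per dendrite; the tree parameters
# (n = 6, H = 5) follow the example in the text, and the response fit is the
# reconstruction of Eq. (3) given above.
import math

A, B = 0.540, 0.466

def point_fraction(ib: float) -> float:
    theta = math.acos(ib / 2.0)
    return A * theta + B * theta**2

def active_vs_total(n: int, H: int, ib: float):
    p = math.ceil(point_fraction(ib) * n)     # active inputs needed per unit
    active = sum(p**h for h in range(H + 1))  # soma (h = 0) through synapses (h = H)
    total = sum(n**h for h in range(H + 1))
    return active, total

for n, H in ((10_000, 1), (6, 5)):
    active, total = active_vs_total(n, H, ib=1.8)
    print(f"n = {n}, H = {H}: {active} of {total} units active ({active/total:.1%})")
# The point neuron activates thousands of units; the tree activates a few hundred,
# even counting every intermediate dendrite.
```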
The energy consumption of synapses and dendrites is unlikely to be the same for most technologies. In the optoelectronic case, for example, synaptic events are likely to cost significantly more power than an active dendrite. Additionally, it can be shown that the total number of active units is likely to be higher in the point neuron case for almost all reasonable bias conditions, as the number of added dendrites is compensated for by the greatly reduced number of active synapses. This suggests that a dendritic arbor is unlikely to dominate the power budget of high fan-in neurons.

Neuronal circuits based on superconducting loops have been proposed in prior work, particularly with regard to optoelectronic systems.26 We show here an application of these fan-in considerations to the specific case of the loop neurons introduced in that work. A circuit diagram is shown in Fig. 4. The dendritic integration (DI) loop integrates signals from activity present at that dendrite (or synapse). The saturation current of the DI loop corresponds to the maximally weighted active synapse discussed previously. A mutual inductor (M_dc|di) couples this signal into a second loop, called the dendritic collection (DC) loop. This loop is not strictly necessary but allows for a more standardized design procedure, as discussed below. The DC loop applies flux Φ_a^dr (the weighted contribution of afferent signals) through M_dr|dc to the dendritic receiving (DR) loop. The DR loop forms the active component of the dendrite that has been the subject of our discussion thus far. Its output, a train of fluxons, is then coupled into another DI loop, allowing the chain to continue indefinitely. A schematic layout is provided in Fig. 4(b) to provide physical intuition about the circuit.
FIG. 4. (a) Circuit under consideration. Input dendritic integration (DI) loops couple the signal into the dendritic collection (DC) loop via transformers. The net induced signal in the DC loop couples into the dendritic receiving (DR) loop, which is a SQUID. This SQUID is embedded in its own DI loop, which performs leaky integration on the accumulated signal. (b) Schematic of the physical layout of the circuit with components playing the roles of the circuit elements in (a). Circuit elements and loops are labeled to be consistent with the text.


The question remains: how do we limit the applied flux Φ_a^dr to enforce monotonicity in practice? For this circuit, a careful choice of inductances will suffice. The mathematical details are given in the supplementary material, but ultimately only a single constraint among all of the inductances is necessary. Additionally, the intermediate DC loop allows the monotonic condition to be met across a wide range of fan-in factors with only L_di2 being a function of n; the SQUID and its input coil need not be redesigned for different choices of n. The consequences of the DC loop are further explored in the supplementary material.

We have considered the implications of limiting the maximum flux input to all SQUIDs in a superconducting neural circuit so the response is monotonically increasing. We have found that limiting the applied flux introduces a constraint on the activity fraction of synapses required to reach the threshold, and the addition of a dendritic tree ameliorates the situation. This behavior is independent of most details of the circuit (such as whether or not a collection loop is used). The physical arguments presented here are derived from this decision to limit the applied flux to handle the ostensible “worst-case” scenario in which all synaptic inputs are fully saturated simultaneously. It is fair to question whether it is necessary to design our circuits around this extreme situation. The monotonicity issue could, for instance, be solved by immediately resetting all post-synaptic potentials to zero upon threshold. This is the standard behavior exhibited by most leaky integrate-and-fire models. However, implementing such a mechanism in superconducting hardware without compromising the speed and efficiency of superconducting neurons appears challenging. Additionally, we have argued elsewhere27 that SQUID dendrites provide numerous opportunities for active, analog dendritic processing independent of the fan-in benefits described here. In that context, enforcement of monotonicity appears necessary. For these reasons, we contend that the best course of action is to allow synaptic signals to decay naturally without regard to thresholding events (which also preserves information) while limiting the applied flux in the manner described.

Still, one could argue that we are over-preparing for the worst-case scenario. Perhaps we could leave the maximum possible applied flux to each SQUID unrestricted, and instances wherein SQUIDs are driven past a half period of their response function would be sufficiently rare that we could ignore them in design. For general cognitive activity, we are likely to seek networks balanced at a critical point44–46 between excessive synchronization (order) and insufficient correlation (disorder). When cognitive circuits are poised close to this critical point, neuronal avalanches47 or cell assemblies48,49 are observed to be characterized by a power-law50 or lognormal51 distribution of sizes. A great deal of contemporary research52 indicates that operation near this critical point is advantageous for maximizing dynamic range53,54 and the number of accessible metastable states55 while supporting long-range correlations in network activity.56 With either power-law or lognormal distributions, network activity engaging many neurons is less probable than activity involving few neurons, but periods of activity involving large numbers of neurons are not so improbable as to be neglected and may be crucial episodes for information integration across the network. The probability of large events does not decay exponentially and must, therefore, be accommodated in hardware.

We reiterate that the primary assumption entering Eq. (3) is that the maximum applied signal is limited to a certain value. We have considered the ramifications in the specific context of SQUID components, but similar considerations may apply to other hardware. We encourage the reader to consider whether similar arguments may affect their favorite neuromorphic thresholding elements. We also note that limiting the applied flux to Φ_0/2 may not always be advisable. From the activation function of Fig. 1(b), it is evident that a dendrite with two synapses performs XOR if each synapse couples Φ_0/2 into the receiving SQUID: when both synapses are active, the total applied flux of Φ_0 brings the periodic response back to its quiescent value, outside the monotonic regime. We hope this article does not stifle investigation of the full neural utility of engineered SQUID responses.
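As a toy illustration of this XOR behavior (not a circuit simulation), one can caricature the periodic activation of Fig. 1(b) with |sin(πΦ_a/Φ_0)|. Both this functional form and the firing threshold of 0.5 below are illustrative assumptions, chosen only to show the logic.

```python
# Toy model of the XOR remark: a periodic "activation" peaking at phi_0/2,
# with each of two synapses coupling phi_0/2 when active. The |sin| response
# and the 0.5 firing threshold are illustrative assumptions, not circuit values.
import math

def toy_squid_output(phi_a_over_phi0: float) -> float:
    return abs(math.sin(math.pi * phi_a_over_phi0))  # periodic, zero again at phi_0

for s1 in (0, 1):
    for s2 in (0, 1):
        phi = 0.5 * (s1 + s2)             # each active synapse couples phi_0/2
        fires = toy_squid_output(phi) > 0.5
        print(f"{s1} {s2} -> {int(fires)}")  # prints the XOR truth table
```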

See the supplementary material for three additional sections: the first summarizes the numerical solution for Φ_a^th; the second walks through the circuit in Fig. 4, deriving a constraint among the various inductances that ensures monotonic operation, and describes likely parameter values for the various components, informed by InductEx simulations; the third treats a circuit variant in which the collection loop is omitted and single-flux operation is considered.

We thank Dr. Ken Segall and Dr. Michael Schneider for helpful discussions. B.A.P. was supported under financial assistance Award No. 70NANB18H006 from the U.S. Department of Commerce, National Institute of Standards and Technology.

The authors have no conflicts to disclose.

The data that support the findings of this study are available from the corresponding author upon reasonable request.

1. Event-Based Neuromorphic Systems, edited by S.-C. Liu, T. Delbruck, G. Indiveri, A. Whatley, and R. Douglas (John Wiley and Sons, 2015).
2. G. Indiveri, B. Linares-Barranco, T. J. Hamilton, A. Van Schaik, R. Etienne-Cummings, T. Delbruck, S.-C. Liu, P. Dudek, P. Häfliger, S. Renaud et al., "Neuromorphic silicon neuron circuits," Front. Neurosci. 5, 73 (2011).
3. P. A. Merolla, J. V. Arthur, R. Alvarez-Icaza, A. S. Cassidy, J. Sawada, F. Akopyan, B. L. Jackson, N. Imam, C. Guo, Y. Nakamura et al., "A million spiking-neuron integrated circuit with a scalable communication network and interface," Science 345, 668–673 (2014).
4. M. Davies, N. Srinivasa, T.-H. Lin, G. Chinya, Y. Cao, S. H. Choday, G. Dimou, P. Joshi, N. Imam, S. Jain et al., "Loihi: A neuromorphic manycore processor with on-chip learning," IEEE Micro 38, 82–99 (2018).
5. C. Schuman, T. Potok, R. Patton, J. Birdwell, M. Dean, G. Rose, and J. Plank, "A survey of neuromorphic computing and neural networks in hardware," arXiv:1705.06963v1 (2017).
6. T. V. Duzer and C. Turner, Principles of Superconductive Devices and Circuits, 2nd ed. (Prentice Hall, 1998).
7. A. M. Kadin, Introduction to Superconducting Circuits, 1st ed. (John Wiley and Sons, 1999).
8. Y. Harada and E. Goto, "Artificial neural network circuits with Josephson devices," IEEE Trans. Magn. 27, 2863 (1991).
9. M. Hidaka and L. Akers, "An artificial neural cell implemented with superconducting circuits," Supercond. Sci. Technol. 4, 654 (1991).
10. Y. Mizugaki, K. Nakajima, Y. Sawada, and T. Yamashita, "Implementation of new superconducting neural circuits using coupled SQUIDs," IEEE Trans. Appl. Supercond. 4(1), 1–8 (1994).
11. Y. Mizugaki, K. Nakajima, Y. Sawada, and T. Yamashita, "Superconducting neural circuits using SQUIDs," IEEE Trans. Appl. Supercond. 5, 3168 (1995).
12. E. Rippert and S. Lomatch, "A multilayered superconducting neural network implementation," IEEE Trans. Appl. Supercond. 7, 3442 (1997).
13. P. Crotty, D. Schult, and K. Segall, "Josephson junction simulation of neurons," Phys. Rev. E 82, 011914 (2010).
14. T. Onomi, Y. Maenami, and K. Nakajima, "Superconducting neural network for solving a combinatorial optimization problem," IEEE Trans. Appl. Supercond. 21, 701 (2011).
15. F. Chiarello, P. Carelli, M. Castellano, and G. Torrioli, "Artificial neural network based on SQUIDs: Demonstration of network training and operation," Supercond. Sci. Technol. 26, 125009 (2013).
16. Y. Yamanashi, K. Umeda, and N. Yoshikawa, "Pseudo sigmoid function generator for a superconductive neural network," IEEE Trans. Appl. Supercond. 23, 1701004 (2013).
17. K. Segall, M. LeGro, S. Kaplan, O. Svitelskiy, S. Khadka, P. Crotty, and D. Schult, "Synchronization dynamics on the picosecond time scale in coupled Josephson junction networks," Phys. Rev. E 95, 032220 (2017).
18. H. Katayama, T. Fujii, and N. Hatakenaka, "Theoretical basis of SQUID-based artificial neurons," J. Appl. Phys. 124, 152106 (2018).
19. J. Shainline, S. Buckley, A. McCaughan, J. Chiles, A. Jafari-Salim, R. Mirin, and S. Nam, "Circuit designs for superconducting optoelectronic loop neurons," J. Appl. Phys. 124, 152130 (2018).
20. M. Schneider, C. Donnelly, S. Russek, B. Baek, M. Pufall, P. Hopkins, P. Dresselhaus, S. Benz, and W. Rippard, "Ultralow power artificial synapses using nanotextured magnetic Josephson junctions," Sci. Adv. 4, 1701329 (2018).
21. M. Schneider, C. Donnelly, and S. Russek, "Tutorial: High-speed low-power neuromorphic systems based on magnetic Josephson junctions," J. Appl. Phys. 124, 161102 (2018).
22. E. Toomey, K. Segall, and K. Berggren, "Design of a power efficient artificial neuron using superconducting nanowires," Front. Neurosci. 13, 933 (2019).
23. J. Clarke and A. I. Braginski, The SQUID Handbook: Applications of SQUIDs and SQUID Systems (John Wiley & Sons, 2006).
24. M. Schneider and K. Segall, "Fan-out and fan-in properties of superconducting neuromorphic circuits," J. Appl. Phys. 128, 214903 (2020).
25. J. Shainline, S. Buckley, R. Mirin, and S. Nam, "Superconducting optoelectronic circuits for neuromorphic computing," Phys. Rev. Appl. 7, 034013 (2017).
26. J. Shainline, S. Buckley, A. McCaughan, J. Chiles, M. Castellanos-Beltran, C. Donnelly, M. Schneider, A. Jafari-Salim, R. Mirin, and S. Nam, "Superconducting optoelectronic loop neurons," J. Appl. Phys. 126, 044902 (2019).
27. J. Shainline, "Fluxonic processing of photonic synapse events," IEEE J. Sel. Top. Quantum Electron. 26, 7700315 (2020).
28. J. Shainline, "Optoelectronic intelligence," Appl. Phys. Lett. 118, 160501 (2021).
29. R. Fagaly, "Superconducting quantum interference device instruments and applications," Rev. Sci. Instrum. 77, 101101 (2006).
30. Y. Yamanashi and N. Yoshikawa, "Design and evaluation of magnetic field tolerant single flux quantum circuits for superconductive sensing systems," IEICE Trans. Electron. 97, 178–181 (2014).
31. R. Collot, P. Febvre, J. Kunert, H.-G. Meyer, R. Stolz, and J. Issler, "Characterization of an on-chip magnetic shielding technique for improving SFQ circuit performance," IEEE Trans. Appl. Supercond. 26, 1300605 (2016).
32. S. B. Laughlin and T. J. Sejnowski, "Communication in neuronal networks," Science 301, 1870–1874 (2003).
33. L. Goetz, A. Roth, and M. Häusser, "Active dendrites enable strong but sparse inputs to determine orientation selectivity," Proc. Natl. Acad. Sci. 118, e2017339118 (2021).
34. B. W. Mel, "Information processing in dendritic trees," Neural Comput. 6, 1031–1085 (1994).
35. M. London and M. Häusser, "Dendritic computation," Annu. Rev. Neurosci. 28, 503 (2005).
36. G. Stuart and N. Spruston, "Dendritic integration: 60 years of progress," Nat. Neurosci. 18, 1713 (2015).
37. S. Sardi, R. Vardi, A. Sheinin, A. Goldental, and I. Kanter, "New types of experiments reveal that a neuron functions as multiple independent threshold units," Sci. Rep. 7, 18036 (2017).
38. J. Hawkins and S. Ahmad, "Why neurons have thousands of synapses, a theory of sequence memory in neocortex," Front. Neural Circuits 10, 23 (2016).
39. J. Magee and D. Johnston, "Plasticity of dendritic function," Curr. Opin. Neurobiol. 15, 334 (2005).
40. P. J. Sjostrom, E. A. Rancz, A. Roth, and M. Hausser, "Dendritic excitability and synaptic plasticity," Physiol. Rev. 88, 769–840 (2008).
41. P. Poirazi and B. W. Mel, "Impact of active dendrites and structural plasticity on the memory capacity of neural tissue," Neuron 29, 779–796 (2001).
42. G. Buzsáki, Rhythms of the Brain (Oxford University Press, 2006).
43. B. A. Primavera and J. M. Shainline, "Considerations for neuromorphic supercomputing in semiconducting and superconducting optoelectronic hardware," Front. Neurosci. 15, 732368 (2021).
44. H. E. Stanley, "Scaling, universality, and renormalization: Three pillars of modern critical phenomena," Rev. Mod. Phys. 71, S358 (1999).
45. P. Bak, C. Tang, and K. Wiesenfeld, "Self-organized criticality: An explanation of 1/f noise," Phys. Rev. Lett. 59, 381 (1987).
46. P. Bak, C. Tang, and K. Wiesenfeld, "Self-organized criticality," Phys. Rev. A 38, 364 (1988).
47. J. M. Beggs and D. Plenz, "Neuronal avalanches in neocortical circuits," J. Neurosci. 23, 11167–11177 (2003).
48. D. Plenz and T. C. Thiagarajan, "The organizing principles of neuronal avalanches: Cell assemblies in the cortex?," Trends Neurosci. 30, 101–110 (2007).
49. G. Buzsáki, "Neural syntax: Cell assemblies, synapsembles, and readers," Neuron 68, 362–385 (2010).
50. J. Beggs, "The criticality hypothesis: How local cortical networks might optimize information processing," Philos. Trans. R. Soc. A 366, 329 (2008).
51. G. Buzsáki and K. Mizuseki, "The log-dynamic brain: How skewed distributions affect network operations," Nat. Rev. Neurosci. 15, 264–278 (2014).
52. N. Tomen, J. M. Herrmann, and U. Ernst, The Functional Role of Critical Dynamics in Neural Systems (Springer, 2019), Vol. 11.
53. O. Kinouchi and M. Copelli, "Optimal dynamical range of excitable networks at criticality," Nat. Phys. 2, 348 (2006).
54. W. Shew, H. Yang, T. Petermann, R. Roy, and D. Plenz, "Neuronal avalanches imply maximum dynamic range in cortical networks at criticality," J. Neurosci. 29, 15595 (2009).
55. C. Haldeman and J. M. Beggs, "Critical branching captures activity in living neural networks and maximizes the number of metastable states," Phys. Rev. Lett. 94, 058101 (2005).
56. M. Kitzbichler, M. Smith, S. Christensen, and E. Bullmore, "Broadband criticality of human brain network synchronization," PLoS Comput. Biol. 5, e1000314 (2009).

Supplementary Material