We have implemented a Spiking Neural Network (SNN) architecture using spin orbit torque driven domain wall devices, in combination with transistor based peripheral circuits, as both synapses and neurons. Learning in the SNN hardware is achieved in both a completely unsupervised mode and a partially supervised mode through biologically plausible mechanisms incorporated in our spintronic synapses and neurons, e.g., Spike Time Dependent Plasticity (STDP) and homoeostasis. High classification accuracy is obtained on the popular Iris dataset for both modes of learning.

Implementing Neural Network (NN) algorithms in specialized neuromorphic hardware, including spintronic hardware, has been a heavily pursued topic of research in recent years.1–9 Spiking Neural Network (SNN) algorithms are of particular interest in this regard. Power consumption is considered to be very low in hardware implementations of SNN because computation is event based.10–15 While some SNN implementations involve training a non-spiking NN first and then converting it to a SNN,16,17 others attempt to train the SNN hardware itself using the Spike Time Dependent Plasticity (STDP) property of synapses and the homoeostasis property of neurons.18–20 The local and unsupervised nature of such STDP and homoeostasis based learning, coupled with the biological plausibility of these mechanisms,18,22–24 makes such training/learning in hardware SNN very interesting.

Spintronic devices have earlier been proposed as neurons and synapses in such SNN.19,25–27 In this paper, we have designed and simulated a hardware SNN using spintronic devices (Domain Wall (DW) based devices) as both synapses and neurons in the same network, and we have shown STDP and homoeostasis enabled learning in our designed SNN. In Sections II and III, we carry out micromagnetic simulations28 to model the DW devices. Transistor based analog circuits, designed on a SPICE circuit simulator, apply the required current pulses to the DW devices to make them function as synapses and neurons.19–21 In Section IV, we incorporate the neuron and synapse characteristics into a SNN we design. We train the SNN on a popular Machine Learning (ML) dataset (Iris dataset29) both by a completely unsupervised mechanism, enabled by STDP at the synapses and homoeostasis at the output stage neurons,18 and by a partially supervised mechanism, enabled by STDP at the synapses (an unsupervised local learning rule) and supervised inhibitory currents at the output stage neurons.30–32 Section V concludes the paper.

Both the DW synapse and the DW neuron (designed in the following sections) utilize the physics of Spin Orbit Torque (SOT) driven DW motion in Heavy Metal (HM)/FerroMagnet (FM) heterostructures. This physics has been studied extensively in the past through experiments and micromagnetic simulations.33–36 When in-plane current flows through the HM layer, the DW in the FM layer above it experiences SOT. If the wall has Néel chirality due to Dzyaloshinskii Moriya Interaction (DMI) at the interface, then DW motion can occur even in the absence of a magnetic field33,34 (Fig. 1).

FIG. 1.

General schematic of a SOT driven DW device. The MTJ needed for synapse or neuron functionality is not shown here.

We carry out micromagnetic simulations to model DW motion in two different heterostructures: Pt (HM)/CoFe (FM)/MgO (for synapse design) and Ta (HM)/CoFe (FM)/MgO (for neuron design). The dynamics of the magnetic moments of the FM layer (in which the DW moves), under the influence of vertical spin current injected into it by the in-plane charge current through the heavy metal layer below, is simulated for each system using the micromagnetic package "mumax3."28 Simulation parameters are taken from micromagnetic models in previous works33,37–39 that were used to benchmark experimental data on DW motion in such systems (see Section 1 of supplementary material). DW velocity is plotted as a function of in-plane charge current density for both systems in Fig. 2. For both Pt/CoFe/MgO and Ta/CoFe/MgO, DW velocity increases linearly with current density within a certain range (Fig. 2). However, for the same current density the velocity is much lower for Ta/CoFe/MgO than for Pt/CoFe/MgO because the magnitude of DMI is much lower for the Ta/CoFe/MgO system (0.06 mJ/m²) than for the Pt/CoFe/MgO system (1.2 mJ/m²).37–39
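For a purely behavioural model of this regime, the linear velocity-current relation of Fig. 2 can be captured by a one-line function, as in the Python sketch below; the slope values are illustrative placeholders, not the fitted values from our micromagnetic simulations.

```python
# Behavioural sketch of the linear DW-velocity regime of Fig. 2.
# The slopes are hypothetical, chosen only to reflect the ordering seen in
# Fig. 2 (Pt/CoFe/MgO much faster than Ta/CoFe/MgO at equal current density).

def dw_velocity(j_charge, slope):
    """DW velocity (m/s) for in-plane charge current density j_charge (A/m^2),
    assuming the linear regime of Fig. 2: v = slope * j_charge."""
    return slope * j_charge

v_pt = dw_velocity(j_charge=1e11, slope=5e-10)  # illustrative Pt/CoFe/MgO wall
v_ta = dw_velocity(j_charge=1e11, slope=5e-12)  # illustrative Ta/CoFe/MgO wall
```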

FIG. 2.

DW velocity vs. charge current density for Pt/CoFe/MgO and Ta/CoFe/MgO.

Utilizing this physics of DW motion, a Magnetic Tunnel Junction (MTJ) device has been proposed as a synapse in analog hardware NN.7–9 The FM layer in which the DW moves is the free layer of the MTJ device. As the in-plane current flowing through the HM layer below the free layer moves the DW, the average magnetization of the free layer changes and hence the conductance of the MTJ changes. The change in conductance (ΔG) of the synaptic device is related to the change in weight the synapse stores (Δw) as follows:

$$\frac{\Delta w}{w_{\max} - w_{\min}} = \frac{\Delta G}{G_{\max} - G_{\min}} \tag{1}$$

where Gmax and Gmin are the maximum and minimum conductances of the MTJ, and wmax and wmin are the maximum and minimum values that the weight of any synapse in the SNN can take. The Pt/CoFe/MgO system is chosen for the DW synapse.7–9 Lateral dimensions of the synaptic DW device simulated here are 500 nm by 50 nm. Based on the values of the R-A product and Tunneling Magneto-Resistance (TMR) of the MTJ,40,41 Gmax = 6.1 × 10⁻³ Ω⁻¹ and Gmin = 2.9 × 10⁻³ Ω⁻¹ (Fig. 3). Antiferromagnetic regions at the edges prevent the DW from getting destroyed.7,9

FIG. 3.

Conductance of MTJ in DW synapse vs magnitude of "write" current pulse.

Since the velocity of the DW is proportional to the current density, when an in-plane current pulse of fixed duration (3 ns), also known as the "write" current pulse, is applied to the device, the conductance of the MTJ changes by an amount proportional to the magnitude of the current pulse (Fig. 3). Hence the magnitude of the "write" current pulse (Iwrite) needed to bring about a given change in conductance (ΔG) is given as follows:

$$I_{write} = \frac{\partial I_{write}}{\partial G}\,\Delta G \tag{2}$$

where ∂Iwrite/∂G is the slope of the straight line that fits the conductance vs. write current characteristic of the DW device (Fig. 3), equal to 1.63 × 10⁵ μA·Ω.
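A minimal numerical sketch of equations (1) and (2), mapping a desired weight update to the corresponding "write" current, is given below; the weight range wmin = 0, wmax = 1 is an assumption made purely for illustration.

```python
G_MAX = 6.1e-3   # maximum MTJ conductance (ohm^-1), from Fig. 3
G_MIN = 2.9e-3   # minimum MTJ conductance (ohm^-1), from Fig. 3
W_MAX, W_MIN = 1.0, 0.0   # assumed weight range of the synapse
DI_DG = 1.63e5   # slope of the I_write vs. G fit of Fig. 3 (uA * ohm)

def write_current(delta_w):
    """'Write' current magnitude (uA) producing a weight change delta_w."""
    delta_g = delta_w * (G_MAX - G_MIN) / (W_MAX - W_MIN)  # equation (1)
    return DI_DG * delta_g                                  # equation (2)
```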

In the SNN we design in Section IV, a neuron of the input layer (pre-neuron) is connected to a neuron of the output layer (post-neuron) through a synapse and the weight of the synapse is updated by the STDP rule:22 

$$\Delta w = \begin{cases} \Gamma\, e^{-(t_{post} - t_{pre})/\tau_1} & \text{if } t_{post} > t_{pre} \\ -\Gamma\, e^{-(t_{pre} - t_{post})/\tau_2} & \text{if } t_{post} < t_{pre} \end{cases} \tag{3}$$

where Δw is the change in weight of the synapse, tpre is the time when the pre-neuron spikes, tpost is the time when the post-neuron spikes, Γ is a dimensionless constant of proportionality equal to 7, and τ1 and τ2 are the two time constants for the synapse.
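Behaviourally, the STDP rule of equation (3) reduces to the short function below, using the parameter values quoted above (Γ = 7, and τ1 = τ2 = 1.5 μs as set later by the circuit design).

```python
import math

GAMMA = 7.0            # dimensionless STDP constant of proportionality
TAU1 = TAU2 = 1.5e-6   # STDP time constants (s), as set by the circuit

def stdp_delta_w(t_pre, t_post):
    """Weight change for one pre/post spike pair, following equation (3)."""
    if t_post > t_pre:   # pre fires before post: potentiation
        return GAMMA * math.exp(-(t_post - t_pre) / TAU1)
    else:                # post fires before pre: depression
        return -GAMMA * math.exp(-(t_pre - t_post) / TAU2)
```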

From equations (1), (2), and (3), the write current Iwrite that needs to be applied to the DW synapse in order to update the weight and train the SNN is:

$$I_{write} = \begin{cases} I_0\, e^{-(t_{post} - t_{pre})/\tau_1} & \text{if } t_{post} > t_{pre} \\ -I_0\, e^{-(t_{pre} - t_{post})/\tau_2} & \text{if } t_{post} < t_{pre} \end{cases} \tag{4}$$

where I0 ≈ 5 μA (from equations (1) and (2)). For this purpose, the DW synapse is integrated with a transistor based circuit operating in the sub-threshold regime, which injects the appropriate "write" current pulse into the DW synapse, as shown in Fig. 4, Fig. 5, and Fig. 6.19,42

FIG. 4.

DW synapse, along with transistor based circuit.
FIG. 5.

Voltage and current of different components of the circuit in Fig. 4 for spiking pattern 1: (a) Gate voltage of T2 vs. time showing spiking pattern of pre-neuron (b) Gate voltage of T4 showing spiking pattern of post-neuron (c) Gate voltage of T3 and T7 (d) Drain current through T4 (Iwrite,1) and T8 (Iwrite,2).

Iwrite in equation (4) can be considered a sum of two components, Iwrite = Iwrite,1 + Iwrite,2, where

$$I_{write,1} = \begin{cases} I_0\, e^{-(t_{post} - t_{pre})/\tau_1} & \text{if } t_{post} > t_{pre} \\ 0 & \text{if } t_{post} < t_{pre} \end{cases} \qquad I_{write,2} = \begin{cases} 0 & \text{if } t_{post} > t_{pre} \\ -I_0\, e^{-(t_{pre} - t_{post})/\tau_2} & \text{if } t_{post} < t_{pre} \end{cases} \tag{5}$$

In the circuit of Fig. 4, the drain current through transistor T4 corresponds to Iwrite,1 and the drain current through T8 corresponds to Iwrite,2. Based on SPICE simulations of the circuit in Fig. 4, carried out on the Cadence Virtuoso simulator, we plot Iwrite,1 and Iwrite,2 as functions of time for spiking pattern 1 (Fig. 5) and spiking pattern 2 (Fig. 6). Circuit parameters are chosen such that τ1 = τ2 = 1.5 μs for all synapses in our designed SNN.

For spiking pattern 1, the pre-neuron spikes once (tpre) followed by several post-neuron spikes (tpost); hence tpost > tpre here. Since a pre-neuron spike corresponds to a sharp drop in the gate voltage of T2 followed by a rise back to its original value (Fig. 5(a)), T2, being a pMOS, first turns on, resetting the voltage across capacitor C1 to 0.8 V (the lower electrode of C1 in Fig. 4 is considered the positive terminal), and then turns off. For all time t after tpre, C1 is charged through T1, so the voltage across C1, and hence the gate voltage of T3, increases linearly with t − tpre with a slope inversely proportional to the capacitance C1 (Fig. 5(c)). When the post-neuron spikes at tpost (a sharp rise in the gate voltage of transistor T4 turning on T4 - Fig. 5(b)), since T3 is designed to operate in the sub-threshold region,43 the drain current through T3, and hence through T4 (Iwrite,1), is proportional to e^(−(tpost − tpre)/τ1) for spiking pattern 1 since tpost > tpre (Fig. 5(d)). The STDP time constant τ1 is directly proportional to the sub-threshold swing of T3 and the capacitance C1. Unlike C1, no charging/discharging happens for C2 for spiking pattern 1 (Fig. 5(c)). This is because the counterpart of T2 for C2 (T6) is driven by the post-neuron spike instead of the pre-neuron spike, and the counterpart of T4 for C2 (T8) is driven by the pre-neuron spike instead of the post-neuron spike. So Iwrite,2 = 0 for spiking pattern 1 since tpost > tpre, as desired.

For spiking pattern 2 (Fig. 6), the post-neuron spikes once (Fig. 6(b)) and the pre-neuron spikes several times after that (Fig. 6(a)); hence tpost < tpre here. In this case, the voltage across C2 is reset after the post-neuron spike and then drops linearly with t − tpost due to discharge through T5, with a slope inversely proportional to the capacitance C2 (Fig. 6(c)). T7 operates in the sub-threshold region here, and T8 is a pMOS transistor as opposed to T4 (nMOS). Hence, when the pre-neuron spikes at tpre, T8 turns on and the current through T8 (Iwrite,2) is proportional to e^(−(tpre − tpost)/τ2) and has a negative sign (Fig. 6(d)). Thus, τ2 is linearly proportional to the sub-threshold swing of T7 and the capacitance C2. Following reasoning similar to spiking pattern 1, for spiking pattern 2 (tpost < tpre), Iwrite,1 = 0 (Fig. 6(d)). Thus the STDP rule (equation (3)) is implemented for our DW synapse.

The waveforms in Fig. 5 and Fig. 6 are for specific values of C1 and C2 in the circuit of Fig. 4 that lead to τ1 = τ2 = 1.5 μs. The dependence of τ1 on C1 and of τ2 on C2, as obtained from multiple SPICE simulations of the STDP circuit in Fig. 4 for different values of C1 and C2, is plotted in Section 4 of supplementary material. Variation in C1 and C2 due to circuit imperfections leads to variation in τ1 and τ2; however, as long as τ1 and τ2 lie within a certain range, the designed SNN can be accurately trained on the given datasets. We discuss this in Section IV, where we present the performance metrics for learning/training using our designed SNN.

In the Leaky Integrate and Fire (LIF) model of a neuron,18,23 the membrane potential v(t) is governed by the following equation:

$$C\,\frac{dv(t)}{dt} = -G_L\,\big(v(t) - E_L\big) + I(t) \tag{6}$$

where I(t) is the input current to the neuron, GL is the membrane conductance, and EL is the resting potential of the neuron. Once v(t) reaches the threshold potential (Vth), the neuron generates a spike and v(t) drops to EL. For our designed SNN in the next section, we take GL = 30 pS, EL = −70 mV, C = 1200 pF, and Vth = 20 mV in the LIF model of our neurons. The time between two consecutive spikes generated by the neuron is plotted as a function of the input dc current after solving equation (6) for the given parameters (Fig. 7(a)). Our choice of LIF parameters is such that the time between two consecutive spikes is of the order of the time constant of the STDP synapse (1.5 μs) and hence several orders of magnitude longer than the duration of the "write" current pulse into the DW synapse, which is the same as the duration of each spike in our designed SNN (3 ns). This is a requirement for STDP based learning to work (Fig. 5, Fig. 6).
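The inter-spike-interval characteristic of Fig. 7(a) can be reproduced behaviourally by integrating equation (6) with a simple forward-Euler scheme, as sketched below; the time step and simulation window are assumptions, not circuit parameters.

```python
G_L = 30e-12     # membrane conductance (S)
E_L = -70e-3     # resting potential (V)
C_M = 1200e-12   # membrane capacitance (F)
V_TH = 20e-3     # threshold potential (V)

def interspike_interval(i_dc, dt=10e-9, t_max=100e-6):
    """Time (s) between consecutive spikes for a dc input current i_dc (A)."""
    v, last_spike, intervals = E_L, None, []
    t = 0.0
    while t < t_max:
        v += dt * (-G_L * (v - E_L) + i_dc) / C_M   # equation (6)
        if v >= V_TH:                               # spike, then reset to E_L
            if last_spike is not None:
                intervals.append(t - last_spike)
            last_spike, v = t, E_L
        t += dt
    return intervals[-1] if intervals else None

# e.g. interspike_interval(70e-6) is about 1.5 us for these parameters
```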

FIG. 6.

Voltage and current of different components of the circuit in Fig. 4 for spiking pattern 2: (a) Gate voltage of T8 vs. time showing spiking pattern of pre-neuron (b) Gate voltage of T6 showing spiking pattern of post-neuron (c) Gate voltage of T3 and T7 (d) Drain current through T4 (Iwrite,1) and T8 (Iwrite,2).
FIG. 7.

(a) Time gap between consecutive spikes vs input current into DW neuron device. (b) DW neuron device and transistor based circuit for post-neuron.

The DW neuron we design in Fig. 7(b), integrated with a transistor based circuit, satisfies the desired LIF characteristic described above (Fig. 7(a)). The working principle of the DW neuron is as follows: input current flowing through the HM layer moves the DW in the FM layer from one end of the device to the other, much like in the DW synapse. Unlike the DW synapse, however, the MTJ is located only at the far end. When the DW reaches that end, the TMR of the MTJ changes abruptly and a spike is generated.21 Thus the time period between two consecutive spikes is equal to the time taken by the DW to traverse the length of the device (Fig. 7(a)). Since this time period must be several orders of magnitude longer than the duration of the "write" current pulse for the DW synapse (3 ns), which is equal to the time taken by the DW to traverse the length of the synapse, the length of the DW neuron (6 μm) is chosen to be much greater than that of the DW synapse (500 nm). The width is also chosen to be larger for the DW neuron (600 nm) than for the DW synapse (50 nm) because, for the same magnitude of input current, a larger width corresponds to a smaller current density and hence a lower DW velocity. The Ta/CoFe/MgO system is chosen for the DW neuron instead of Pt/CoFe/MgO since DW velocity is lower in Ta/CoFe/MgO than in Pt/CoFe/MgO for the same current density (Fig. 2). The MTJ in the DW neuron has dimensions of 600 nm by 100 nm.
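As a quick consistency check on these dimensions, the inter-spike interval is simply the wall transit time T = L/v; the velocity below is a hypothetical value for the Ta/CoFe/MgO neuron, used only for illustration.

```python
L_NEURON = 6e-6   # DW neuron length (m)

def spike_period(v_dw):
    """Inter-spike interval (s) for a DW traversing the neuron at v_dw (m/s)."""
    return L_NEURON / v_dw

# A (hypothetical) 4 m/s wall gives spike_period(4.0) = 1.5e-6 s, i.e. an
# interval on the order of the STDP time constant, as required.
```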

Once the DW reaches the far end, the TMR of the MTJ increases due to switching of the moment in the free layer. As a result, the gate voltage of transistor T1 in Fig. 7(b) increases, turning it on. This gate voltage is the output of the neuron (Vout) if it is a post-neuron, since the spike required for a post-neuron is positive (Fig. 5(b), Fig. 6(b)). For a pre-neuron, a standard two-transistor inverter circuit43 is connected to the gate of T1 (Section 3 of supplementary material). The output of the inverter shows a negative spike when the domain wall in the neuron device reaches the MTJ, as required for a pre-neuron (Fig. 5(a), Fig. 6(a)). The ON current of transistor T1 flows in the direction opposite to the input current in the DW neuron and moves the DW back to its initial position. This is equivalent to v(t) in the LIF model dropping to EL after a spike. Additional components are added to the transistor based circuit of Fig. 7(b), as discussed in Section 2 of supplementary material, to incorporate the homoeostasis property in the DW neuron when required.

We next design a SNN with a layer of pre-neurons (input stage) connected to a layer of post-neurons (output stage) through synapses (Fig. 8). Based on the spike times of the pre-neurons and post-neurons, the weights of the synapses are updated by the STDP rule (equation (4)) for every input/sample in each epoch, and the process is repeated over several epochs to achieve learning/training.18,30,31 Currents proportional to the 16 different features of each input/flower in the Iris training and test sets (after basic feature engineering on the 4 features of the original dataset29–31) are applied to the pre-neurons. The label/type of an input/flower is determined by the post-neuron that fires most for that input. Since there are 3 types of flowers in the dataset, we use 3 post-neurons. The post-neurons are connected inhibitorily to each other to implement the "Winner Take All" (WTA) mechanism.18,30,31 We implement the WTA mechanism among the post-neuron circuits of Fig. 7(b) through an additional transistor based circuit.44 The circuit schematic of the implementation and corresponding SPICE simulations are shown in Section 5 of supplementary material. Alternatively, WTA can be implemented through dipole coupling between the ferromagnetic layers of the DW neuron devices themselves, through which the domain walls move,25 in which case the additional transistors used for the WTA mechanism are not required.
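A high-level behavioural sketch of inference in this network is given below; it is not our circuit simulator. The random 16×3 weight matrix, the linear current-to-activity proxy standing in for the LIF characteristic of Fig. 7(a), and the scale factor i_scale are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.uniform(0.0, 1.0, size=(16, 3))  # synaptic weights, one column per post-neuron

def post_activity(features, i_scale=1e-6):
    """Net input current to each of the 3 post-neurons for one sample
    (16 engineered Iris features), used as a monotone proxy for spike count."""
    return i_scale * (features @ W)

def classify(features):
    """Winner Take All: the most active post-neuron gives the class (0-2)."""
    return int(np.argmax(post_activity(features)))
```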

FIG. 8.

Schematic of designed SNN using DW neurons and synapses.

The DW devices designed in the previous sections, with the same parameters stated there, are used to model the neurons and synapses in our designed SNN. Learning is achieved through weight updates of the synapses due to "write" current pulses applied to the DW synapses, following equations (1)–(4). STDP always lends some degree of unsupervised character to the learning. However, based on how we control the spiking of the post-neurons, we have two different learning schemes: completely unsupervised and partially supervised. Under the completely unsupervised scheme, the post-neurons have an additional homoeostasis property (Section 2 of supplementary material). During training, when an input of one type makes a post-neuron spike, its spiking threshold Vth (in the LIF model) goes up by 7 mV and then decays with time constant τhomeo = 15 μs18 (Section 2 of supplementary material). Because of the increased threshold, only when another input of the same type arrives is the incoming current to that post-neuron large enough for the neuron to spike; for inputs of other types it does not spike. Thus classification is achieved without any supervision.18 Under the partially supervised scheme, the post-neurons do not have the homoeostasis property. Instead, during training, for an input of a particular type, inhibitory currents are applied to all post-neurons except the one that we want to spike most for inputs of that type.30,31 For either scheme, high training accuracy (45 training samples) and test accuracy (105 test samples) are obtained after ≈ 15 epochs (Fig. 9). The net energy dissipated in all the synapses through Joule heating from "write" current pulses during the learning process is calculated to be in the range of 50-200 fJ.
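The homoeostasis rule of the unsupervised scheme amounts to the adaptive-threshold update sketched below (forward-Euler form, using the parameters quoted above; the time step dt is whatever the surrounding simulation uses).

```python
V_TH0 = 20e-3       # baseline LIF threshold (V)
DELTA_VTH = 7e-3    # threshold increment per post-neuron spike (V)
TAU_HOMEO = 15e-6   # homoeostasis decay time constant (s)

def update_threshold(v_th, spiked, dt):
    """One time step of the homoeostatic threshold of a post-neuron."""
    v_th -= (v_th - V_TH0) * dt / TAU_HOMEO   # decay back towards baseline
    if spiked:
        v_th += DELTA_VTH                     # raise threshold after a spike
    return v_th
```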

FIG. 9.

Training and test accuracy of designed SNN on Iris dataset.

The classification accuracy of the SNN depends on the values of τ1 and τ2 since they control the STDP based synaptic weight update rule (equation (3)), upon which training/learning of the SNN is based. We observe from our simulations that both training and test accuracy remain around 90 percent, as desired, as long as τ1 lies between 1.3 μs and 1.9 μs and τ2 lies between 1 μs and 1.7 μs, given that the other parameters of the SNN are unchanged. From Section 4 of supplementary material, this corresponds to a variation of capacitance C1 between 0.7 pF and 1.05 pF and of C2 between 2.5 pF and 4.8 pF in the STDP exhibiting synapse circuit of Fig. 4. Variation of the capacitances within this range due to circuit imperfections will hence not affect the performance of the designed SNN.

We have simulated STDP enabled learning under two different schemes in SNN hardware using DW devices as both synapses and neurons, and we have obtained high classification accuracy on a popular ML dataset, the Iris dataset. Training it on larger and more complicated datasets, however, involves many more STDP synapse circuits since the number of pre-neurons and post-neurons goes up. If the STDP rule in equation (3) is simplified by eliminating the exponential characteristic,45 the number of transistors in the STDP circuit will go down, making the overall circuit simpler and more scalable. Training our designed SNN on larger and more complicated datasets will be the subject of our future study.

See supplementary material for circuit implementations of homoeostasis and Winner Take All (WTA) mechanism.

Debanjan Bhowmik thanks Department of Science and Technology (DST), India for INSPIRE Faculty Award and Science and Engineering Research Board (SERB), India for Early Career Research (ECR) Award, which helped fund this research.

1. H. Tsai, S. Ambrogio, P. Narayanan, R. M. Shelby, and G. W. Burr, J. Phys. D: Appl. Phys. 51, 283001 (2018).
2. A. Sebastian, M. L. Gallo, G. W. Burr, S. Kim, M. BrightSky et al., J. Appl. Phys. 124, 111101 (2018).
3. C. Li, D. Belkin, Y. Li, P. Yan, M. Hu et al., Nat. Commun. 9, 2385 (2018).
4. F. Cai, J. M. Correll, S. H. Lee, Y. Lim, V. Bothra et al., Nat. Electronics 2, 290-299 (2019).
5. J. Torrejon, M. Riou, F. A. Araujo, S. Tsunegi, G. Khalsa et al., Nature 547, 428-432 (2017).
6. M. Azam, D. Bhattacharya, D. Querlioz, and J. Atulasimha, J. Appl. Phys. 124, 152122 (2018).
7. A. Sengupta, Y. Shim, and K. Roy, IEEE Trans. Biomed. Circuits Syst. 10, 1152 (2016).
8. U. Saxena, D. Kaushik, M. Bansal, U. Sahu, and D. Bhowmik, IEEE Trans. Magn. 54, 1 (2018).
9. D. Bhowmik, U. Saxena, A. Dankar, A. Verma, D. Kaushik et al., J. Magn. Magn. Mater. 489, 165434 (2019).
10. P. A. Merolla, J. V. Arthur, R. Alvarez-Icaza, A. S. Cassidy, J. Sawada et al., Science 345(6197), 668 (2014).
11. M. Davies, N. Srinivasa, T. H. Lin, G. Chinya, Y. Cao et al., IEEE Micro 38(1), 82 (2018).
12. M. Bouvier, A. Valentian, T. Mesquida, F. Rummens, M. Reyboz et al., ACM J. Emerg. Tech. Com. 15(2), 1 (2019).
13. C. S. Thakur, J. L. Molin, G. Cauewenberghs, G. Indiveri, K. Kumar et al., Front. Neurosci. 12, 891 (2018).
14. W. Maass, Neural Networks 10(9), 1659 (1997).
15. W. Maass, Proceedings of the IEEE 103(12), 2219 (2015).
16. A. Sengupta, Y. Ye, R. Wang, C. Liu, and K. Roy, Front. Neurosci. 13, 95 (2019).
17. P. Diehl, D. Neil, J. Binas, M. Cook, S. Liu et al., Proceedings in IEEE International Joint Conference on Neural Networks (IJCNN) (IEEE, 2015).
18. P. U. Diehl and M. Cook, Front. Comput. Neurosci. 9, 99 (2015).
19. A. Sengupta, A. Banerjee, and K. Roy, Phys. Rev. Appl. 6, 064003 (2016).
20. K. Yue, Y. Liu, R. K. Lake, and A. C. Parker, Sci. Adv. 5, eaau8170 (2019).
21. A. Sengupta and K. Roy, Appl. Phys. Rev. 4, 041105 (2017).
22. G. Q. Bi and M. M. Poo, J. Neurosci. 18, 10464-10472 (1998).
23. P. Dayan and L. F. Abbott, Theoretical Neuroscience (The MIT Press, 2005), Chapter 5.
24. W. Zhang and D. J. Linden, Nat. Rev. Neurosci. 4, 885-900 (2003).
25. N. Hassan, X. Hu, L. Jiang-Wei, W. H. Brigner, O. G. Akinola et al., J. Appl. Phys. 124, 152127 (2018).
26. T. Bhattacharya, S. Li, H. Yangki et al., IEEE Access 7, 5034 (2019).
27. O. Akinola, X. Hu, C. H. Bennett, M. Marinella, J. S. Friedman et al., J. Phys. D: Appl. Phys. 52, 49LT01 (2019).
28. A. Vansteenkiste, J. Leliaert, M. Dvornik, M. Helsen, F. Garcia-Sanchez et al., AIP Adv. 4, 107133 (2014).
30. A. Biswas, S. Prasad, S. Lashkare, and U. Ganguly, arXiv:1612.02233 (2016).
31. S. Prasad, A. Biswas, A. Shukla, and U. Ganguly, Proceedings in International Conference on Artificial Neural Networks (ICANN) (2017).
32. C. H. Bennett, N. Hassan, X. Hu, J. A. C. Incornvia, J. S. Friedman et al., Proc. SPIE 11090, 110903I (2019).
33. S. Emori, U. Bauer, S. M. Ahn, E. Martinez, and G. S. D. Beach, Nat. Mater. 12, 611-616 (2013).
34. K. S. Ryu, L. Thomas, S. H. Yang, and S. Parkin, Nat. Nanotechnol. 8, 527-533 (2013).
35. D. Bhowmik, M. E. Nowakowski, L. You, O. Lee, D. Keating et al., Sci. Rep. 5, 11823 (2015).
36. I. M. Miron, T. Moore, H. Szambolics, L. D. Buda-Prejbeanu, S. Auffret et al., Nat. Mater. 10, 419 (2011).
37. S. Emori, E. Martinez, K. J. Lee, H. W. Lee, U. Bauer et al., Phys. Rev. B 90, 184427 (2014).
38. E. Martinez, S. Emori, N. Perez, L. Torres, and G. S. D. Beach, J. Appl. Phys. 115, 213909 (2014).
39. R. LoConte, E. Martinez, A. Hrabec, A. Lamperti, T. Schulz et al., Phys. Rev. B 91, 014433 (2015).
40. J.-G. Zhu and C. Park, Materials Today 9, 36 (2006).
41. S. Ikeda, K. Miura, H. Yamamoto, K. Mizunama, H. D. Gan et al., Nat. Mater. 9, 721 (2010).
42. C. Bartolozzi and G. Indiveri, Neural Comput. 19, 2581 (2007).
43. C. Hu, Modern Semiconductor Devices (Pearson, 2009).
44. J. Lazzaro, R. Ryckebusch, M. A. Mahowald, and C. A. Mead, Advances in Neural Information Processing Systems (NIPS), 1988.
45. D. Querlioz, O. Bichler, and P. Dollfus, IEEE Trans. Nanotechnol. 12(3), 288 (2013).

Supplementary Material