While artificial intelligence, capable of readily addressing cognitive tasks, has transformed technologies and daily lives, a huge gap remains between it and biological systems in terms of performance per unit of energy. Neuromorphic computing, in which hardware with alternative architectures, circuits, devices, and/or materials is explored, is expected to reduce this gap. Antiferromagnetic spintronics could offer a promising platform for this scheme. Active functionalities of antiferromagnetic systems have been demonstrated recently, and several works have indicated their potential for biologically inspired computing. In this perspective, we look through the prism of these works and discuss the prospects and challenges of antiferromagnetic spintronics for neuromorphic computing. An overview and discussion are given of non-spiking artificial neural networks, spiking neural networks, and reservoir computing.

Rapid development of artificial intelligence (AI) is profoundly affecting many areas in which cognitive functions are required, from cancer treatment1 and natural language processing2 to board games3 and the music industry.4 The progress of AI is mostly attributed to advances in conventional computing hardware and in software technologies inspired by biological neural networks. The ability to train and run artificially implemented neural networks came with the arrival of low-power and highly parallelizable computation in graphics processing units (GPUs) and of labeled data for supervised training. This scheme, however, is limited in how closely it can approach the biological counterpart and ultimate model—the human brain—which operates with less power than a single GPU while performing a vast variety of tasks without supervision.

The limitation of current artificial systems, as discussed below, lies in the mismatch between the mathematical representations of neural networks and the conventional hardware used to run them. This mismatch limits the scale of current AI systems and hampers the development of future, more biologically plausible, algorithms. The problem can be alleviated by tailoring complementary metal–oxide–semiconductor (CMOS) circuits to a chosen algorithm. Several successful cases have been demonstrated for different types of mathematical models of neural networks.5–7 At the same time, there is a prospect that material-based approaches, alternative to CMOS, could considerably increase the efficiency. The dynamics of some physical processes in such systems, e.g., magnetization reversal, should match the dynamics of components in neural networks; in other words, they should be described by the same or similar equations so that the required computation is naturally carried out. Static states of the systems would be used to store information and thus need to be compatible with the type of variables memorized in the hardware-implemented neural networks. For instance, most types of artificial neural networks operate with non-binary variables, and it is potentially more efficient to store them in materials with more than the two stable states typical of digital CMOS circuits. Indeed, a recent hardware implementation of a convolutional neural network with crossbars of inherently multi-state resistive memory cells showed a much higher power efficiency and performance density than an AI-optimized GPU.8 
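
As a minimal illustration of why multi-state cells are attractive, the sketch below shows how a crossbar of resistive cells carries out an analog matrix-vector product: each cell conductance encodes a non-binary weight, Ohm's law performs the multiplications, and Kirchhoff's current law sums them along each column. The conductance and voltage ranges are assumed, purely illustrative values.

```python
import numpy as np

def crossbar_mvm(G, v_in):
    """Analog matrix-vector product of a resistive crossbar.

    G:    (rows, cols) cell conductances in siemens; each cell stores a non-binary weight.
    v_in: (rows,) read voltages applied to the rows.
    Ohm's law gives the per-cell currents G_ij * v_i; Kirchhoff's law sums them per column.
    """
    return G.T @ v_in

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # assumed conductance range of multi-state cells
v = rng.uniform(0.0, 0.2, size=4)          # assumed read voltages
print(crossbar_mvm(G, v))                  # column currents = weighted sums of the inputs
```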

Such an extension of biological plausibility from the algorithm to the hardware is the basic motivation of neuromorphic computing. Various physical systems and materials are being considered and actively tested for this application. Spintronic devices, with their non-volatility, fast and non-linear dynamics, CMOS compatibility, and virtually unlimited endurance, could provide a promising platform for neuromorphic computing.9,10 For decades, active components of spintronic devices have been based on ferromagnets (FMs) whose magnetization is controlled by a magnetic field (e.g., in hard disk drives) or a current (e.g., in magnetoresistive random-access memories). On the other hand, recent studies have revealed previously unexplored functionalities of antiferromagnets (AFMs) enabled by relativistic spin–orbit coupling, including those of a spin current detector,11 a spin current generator,12 and a memory element.13 

AFMs have compensated spin structures, which result in zero net magnetic moment and give rise to properties that have no parallels in FMs, such as the absence of stray fields, insensitivity to external magnetic fields, and THz-range dynamics. These properties may open new perspectives for efficient memory devices, THz signal generators and detectors, and magnonic and neuromorphic computing14,15 devices based on antiferromagnetic materials. The specific spin structure of AFMs can also affect adjacent FM layers. This way, in certain AFM/FM heterostructures, the AFM can simultaneously act as a source of spin current and of a spatially inhomogeneous exchange interaction, thus allowing field-free multilevel switching of the FM.12 On the other hand, some properties of crystal AFMs can be reproduced in periodic multilayers of FMs separated by thin nonmagnetic layers and magnetized in a compensated way due to the exchange interaction between them. Such superlattices are called synthetic AFMs and offer some of the benefits of crystal AFMs.16 

In this article, we focus on the prospects of antiferromagnetic spintronics for neuromorphic computing while reviewing how AFM materials, synthetic AFMs, and AFM/FM systems can be or have been engaged for this purpose. Section II is devoted to the types of neural networks that are widespread in modern applications, including recurrent and convolutional neural networks. They partly inherit the architecture and simplified principles of information processing of the brain. We refer to such neural networks as non-spiking artificial neural networks or ANNs. The biological plausibility of non-spiking ANNs is limited as they are synchronous and only use the amplitudes of the propagating signals to encode and process information. Spiking artificial neural networks (SNNs), dealt with in Sec. III, overcome this limitation by using both the amplitudes and the timing of the pulses (called “spikes”), allowing local and asynchronous information processing. SNNs potentially enable more efficient and complex brain-like functionality but are especially power-costly to simulate on CMOS circuits. This makes neuromorphic implementations of such networks attractive, and we discuss the associated AFM-based options. Section IV is dedicated to reservoir computing (RC), which is of recent interest because it allows various existing physical systems to be used for computation, including, in principle, AFM memories and oscillators. Section V concludes the article by summarizing the properties of AFMs that are important for neuromorphic computing now or may become important in the future.

In general, an ANN can represent a computational model of a biological neural network or dedicated hardware used to run it. To enhance the efficiency of implemented ANNs, it is important to understand the mismatches between mathematical representations of neural networks and von Neumann architectures (Fig. 1). ANNs are inspired by the operation of the human brain, and despite certain differences among different types, all have common building blocks and layout principles. They consist of layers of “neurons”—units that sum incoming weighted signals from preceding neurons and emit a signal further depending on the sum. Neurons are interconnected by arrays of “synapses”—units with variable and plastic non-binary weights, which represent the strengths of connections between pairs of neurons. The weights of the synapses get updated through training. These properties—multistate memory and intermixed “logic” and “storage”—constitute major differences from the von Neumann architectures of modern computers. Simulating them in software is limited in terms of energy efficiency. CMOS-based computing hardware specialized for a dominant mathematical operation—such as a tensor processing unit (TPU) optimized for a high volume of low-precision matrix-dominated calculations7,17—can considerably increase the efficiency of specific tasks. Neuromorphic computing aims for further specialization by mimicking an architecture with layers of neurons and synapses and by using more efficient units themselves. The latter include multilevel non-volatile memory cells that can store synaptic weights. They are often referred to as “memristors”18—despite the somewhat different original meaning of the term19—and will be referred to as such in this paper. Efficient implementation of the synapses is especially important as they outnumber neurons by a factor of ∼10⁴ in the human brain and typically by 10–1000 in artificial systems. Several works have demonstrated the utility of using arrays of inherently multi-level devices by implementing functional and highly efficient ANNs.8,20–23 The search for the optimal memristor technology for this framework is ongoing and requires scalability and efficiency as well as durability and long-term non-volatility of the devices.
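
To make the neuron and synapse roles concrete, a minimal sketch of one ANN layer is given below: the synaptic weights (what a memristor array would store) multiply the incoming signals, each neuron sums its inputs, and a non-linear activation function determines the emitted signal. The function names, layer sizes, and the choice of ReLU are illustrative assumptions, not tied to any specific hardware.

```python
import numpy as np

def relu(x):
    """One common choice of activation function; ANNs may use different ones per layer."""
    return np.maximum(0.0, x)

def layer_forward(inputs, weights, bias, activation=relu):
    """One ANN layer: every neuron sums its weighted inputs and applies an activation.

    inputs:  (n_in,)        signals from the preceding layer
    weights: (n_out, n_in)  synaptic weights (non-binary, trained values)
    bias:    (n_out,)       per-neuron offsets
    """
    summed = weights @ inputs + bias   # weighted summation (step 1 of the neuron operation)
    return activation(summed)          # non-linearity decides the emitted signal (step 2)

rng = np.random.default_rng(0)
x = rng.random(4)                      # 4 input signals
W = rng.normal(size=(3, 4))            # 3 neurons, each with 4 synapses
print(layer_forward(x, W, np.zeros(3)))
```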

FIG. 1.

Schematics of (a) von Neumann architecture and (b) generic artificial neural network. Red circles represent transistors in (a) and neurons in (b). Blue rhombs represent memory bits in (a) and synapses in (b). Hardware implementation of artificial neural networks solves the problem of the bottleneck of von Neumann architectures by parallelizing connectivity. However, it requires either full compatibility of synapses and neurons or building a signal converter into each connection.

Fast, durable, and non-volatile spintronic devices could be the system of choice for the memristors. However, switching of a FM by spin–transfer torque24,25 or spin–orbit torque26,27 (SOT) in nonmagnet (NM)/FM heterostructures is typically binary and thus would not allow efficient multilevel switching. In 2014, nontrivial spin Hall effects were theoretically predicted28 and the inverse spin Hall effect was experimentally observed in several Mn-based antiferromagnetic alloys.11 Subsequently, the direct spin Hall effect was observed in PtMn, which allowed using the AFM as a source of SOT for magnetization switching.12 In combination with exchange bias (EB)29 at the AFM/FM interface, it enabled field-free SOT switching of a FM with a perpendicular easy axis.12,30 Importantly for neuromorphic applications, such replacement of the NM with an AFM led to a change of the switching mode from binary to memristive [Fig. 2(a)]. A subsequent study has revealed that this behavior originates from the inhomogeneity of EB at the AFM/FM interface, which leads to the reversal of some areas of the FM layer at different currents than others.31 Propagation of domain walls between the areas is impeded by the EB variation. The resultant non-volatile states can represent synaptic weights in a single memristor and replace multiple binary bits. As a use case, 36 such devices were embedded into an ANN trained to associate imperfect (noisy) patterns with their “ideal” versions (an operation called “associative memory”) based on the Hopfield model—a representative model of recurrent ANNs.32 The patterns were I, C, and T symbols recorded in 3 × 3 matrices [Fig. 2(b)]. Ideal synaptic weights were initially calculated from these matrices and then trained with Hebbian and anti-Hebbian learning.33 The resultant weights of the 36 memristors considerably improved the performance of the associative memory.
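
For readers unfamiliar with the Hopfield scheme, the sketch below shows the core of such an associative memory: Hebbian weights are computed from stored ±1 patterns, and a noisy input is driven toward the nearest stored pattern by iterating through the weight matrix. The 3 × 3 pixel encodings and the single-pixel noise are illustrative assumptions and do not reproduce the exact patterns or training procedure of Ref. 32.

```python
import numpy as np

# Illustrative +/-1 encodings of 3x3 "I", "C", and "T" patterns (9 pixels each).
patterns = np.array([
    [-1,  1, -1, -1,  1, -1, -1,  1, -1],   # I
    [ 1,  1,  1,  1, -1, -1,  1,  1,  1],   # C
    [ 1,  1,  1, -1,  1, -1, -1,  1, -1],   # T
])

# Hebbian prescription: each synaptic weight accumulates the correlation of its pixel pair.
n = patterns.shape[1]
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)                     # no self-connections

def recall(state, steps=10):
    """Iteratively drive a noisy pattern toward the closest stored memory."""
    s = state.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)      # each neuron follows the sign of its input sum
    return s

noisy = patterns[0].copy()
noisy[3] *= -1                               # corrupt one pixel of the "I" pattern
print(np.array_equal(recall(noisy), patterns[0]))   # True: the ideal "I" is recovered
```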

FIG. 2.

(a) Field-free switching in an AFM/FM stack by spin–orbit torque is possible thanks to exchange bias and depends on its direction. The switching is memristive, i.e., multiple magnetic states can be achieved by adjusting applied current amplitude and stored in a non-volatile manner. Reproduced with permission from Fukami et al., Nat. Mater. 15, 535 (2016). Copyright 2016 Springer Nature. (b) Top panel: ideal I, C, T patterns for memorization. Bottom panel: examples of noisy input for recognition. Reproduced with permission from Borders et al., Appl. Phys. Express 10, 013007 (2017). Copyright 2017 The Japan Society of Applied Physics.

The readout of the AFM/FM memristors is performed via the FM layer (with the anomalous Hall effect or the resistance of a magnetic tunnel junction). The FM itself, however, is not memristive. The multistate switching is caused by EB and thus originates in the AFM layer. Hence, memristive switching can be expected to persist in a purely AFM device. Until recently, the ordered spins of AFMs could not be manipulated in a practical way, which limited their applications to serving as a source of EB in magnetic field sensors and magnetoresistive random-access memories. In 2016, however, the spin structure of CuMnAs was electrically manipulated by staggered Néel SOT and electrically read out by the anisotropic magnetoresistance.13,34 This has triggered extensive research, and other materials soon followed, including epitaxial conducting Mn2Au35 and insulating NiO.36–38 In all cases, switching, probed by anisotropic magnetoresistance, spin Hall magnetoresistance, or x-ray magnetic linear dichroism, was indeed memristive [e.g., Fig. 3(a)] (although recent studies have shown that the electrical signal may be partially caused by electromigration39,40). The AFM memristors are spared from stray fringing fields due to their compensated spin structure and therefore do not interfere with each other. This property may become attractive as future hardware implementations of ANNs evolve into larger and more densely interconnected circuits, perhaps in three-dimensional space. At the same time, several issues need to be addressed. Among them are finding a more practical method of electrical readout (with a larger signal and fewer electrodes required), increasing the retention time (needed for non-volatility of synaptic weights), and maximizing the device area that can be switched.34,35 Questions related to multilevel switching include the reproducibility and scalability of the device states. The latter is determined by the area of binary switching, which is around 200 nm in current AFM/FM devices and has not been determined in AFMs yet (it is expected to be smaller). This device size threshold can be used to obtain memristive and binary devices in the same stack, as discussed in Sec. III.

FIG. 3.

Functionality of an “integrator” in (a) AFM device and (b) AFM/FM device. Readout signal unambiguously corresponds to the duration of a sequence of pulses, applied to an initialized device (for a given pulse amplitude). Breakdown of the universal trajectory for shorter pulses [from 5 μs in (a) and 10 μs in (b)] corresponds to the pulse duration below which the temperature increase due to the Joule heating does not reach saturation. Panel (a) is reproduced with permission from Olejník et al., Nat. Commun. 8, 15434 (2017). Copyright 2017 Springer Nature.

Although less numerous, neurons are more complex than synapses, and thus their scalable and efficient hardware implementation would be beneficial for the footprint and energy consumption of an ANN. In an ANN, the operation of a neuron consists of two steps: (1) summation of weighted inputs from other neurons and (2) application of an activation function that receives the sum and outputs the amplitude of the signal to be emitted. Considering the broad variety of activation functions (an ANN may even use different activation functions for different layers), their rigid realization in non-CMOS hardware may be complicated. The summation, however, does not depend on the activation function and can be readily done with both the AFM/FM41 and AFM42 memristors described above, as shown in Fig. 3. Hence, AFM-based memristors could underlie both the synapses and the neurons of hardware-implemented ANNs.

While non-spiking ANNs have proven their efficiency in certain cognitive tasks owing to the biological plausibility of their architecture, their way of encoding and processing information significantly differs from the biological one. The ANNs described above are synchronous systems governed by a global clock frequency, like von Neumann machines. The computation is frame-driven, proceeding layer-by-layer, and therefore the timestamps of pulse generation and arrival do not matter. Information in non-spiking ANNs is carried and processed based on the amplitudes of the propagating pulses. In biological neural networks, by contrast, the timing and frequency of pulses are the basis of information encoding and processing. Pulses (spikes) in such systems propagate asynchronously and cause immediate changes in the synapse or neuron they arrive at, regardless of the state of the rest of the circuit.43,44 To achieve the cognitive power of biological neural networks, it may be necessary to adopt this data representation paradigm.

Figure 4 depicts the functionalities of neurons and synapses in SNNs. A neuron integrates incoming pulses like a neuron of an ANN but does so with leakage and fires an outgoing spike as soon as a certain threshold is reached [“leaky integrate-and-fire” (LIF) functionality].44,45 These properties give rise to time coding because each neuron (a) becomes dependent on the firing times of other neurons [e.g., the same set of pulses applied as a denser sequence will trigger a neuron with higher probability due to less leakage (Fig. 4, left graph inset)] and (b) may influence other neurons through its own firing time. This added dimension—time coding—replaces coding by pulse amplitude because SNN neurons output a standard pulse every time they fire. Synapses in SNNs represent the strengths of connections between the neurons and adjust the amplitudes of propagating signals similar to synapses of non-spiking ANNs. The time coding extends to them in the way they get updated. Unlike synapses in non-spiking ANNs, which are updated by dedicated external signals, SNN synapses are updated based on the firing of the pairs of neurons they connect. If a spike from the preceding neuron reaches a synapse before a spike from the succeeding one, the synaptic weight increases, representing a positive causality of the two events.46,47 If the order of the pulses is opposite, the synaptic weight decreases. Shorter delays between the pulses result in larger weight updates. This functionality is called spike-timing-dependent plasticity (STDP) (Fig. 4, right graph inset).
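
A minimal sketch of these two rules is given below: a leaky integrate-and-fire neuron whose potential decays between pulses and fires on crossing a threshold, and a pair-based STDP rule in which the sign and magnitude of the weight update depend on the delay between pre- and post-synaptic spikes. All time constants, amplitudes, and thresholds are arbitrary illustrative values.

```python
import numpy as np

def lif_neuron(spike_times, t_end=50.0, dt=0.1, tau=5.0, w_in=1.0, threshold=3.0):
    """Leaky integrate-and-fire: the potential decays with time constant tau, jumps by
    w_in at every incoming spike, and an output spike is emitted when it crosses threshold."""
    v, fired_at = 0.0, []
    spike_steps = set(int(round(t / dt)) for t in spike_times)
    for step in range(int(t_end / dt)):
        v *= np.exp(-dt / tau)            # leakage between pulses
        if step in spike_steps:
            v += w_in                     # integration of an incoming pulse
        if v >= threshold:
            fired_at.append(step * dt)    # fire ...
            v = 0.0                       # ... and reset
    return fired_at

def stdp_dw(delta_t, a_plus=0.1, a_minus=0.12, tau=10.0):
    """Pair-based STDP: delta_t = t_post - t_pre. Pre-before-post (positive causality)
    strengthens the synapse, the reverse order weakens it, and shorter delays act stronger."""
    return a_plus * np.exp(-delta_t / tau) if delta_t > 0 else -a_minus * np.exp(delta_t / tau)

print(lif_neuron([1, 2, 3, 4]))          # dense train: leakage loses, the neuron fires
print(lif_neuron([1, 10, 20, 30]))       # sparse train: leakage wins, no output spike
print(stdp_dw(+2.0), stdp_dw(-2.0))      # potentiation vs depression
```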

FIG. 4.

A fragment of a biological spiking neural network describing functionalities of a neuron and a synapse. The insets were adapted with permission from Kurenkov et al., Adv. Mater. 31, 1900636 (2019). Copyright 2019 Wiley-VCH Verlag GmbH & Co. KGaA. The neuron (red figure on the left) receives and integrates electrical pulses from other neurons. The result of the integration constantly decreases due to “leakage.” If the integral exceeds a threshold (red graph inset on the left), the neuron fires a pulse to successively connected neurons. Leakage results in the increased probability of triggering by denser or longer sequences of incoming pulses (see the red graph inset). A neuron with such properties is called “leaky integrate-and-fire.” The pulses that propagate between neurons get attenuated by synapses (blue circular inset on the right). Stronger connection between a given pair of neurons is represented by a more conducting synaptic state or “larger synaptic weight.” The weight gets adjusted based on the firing moments of the connected neurons as shown in the blue graph inset on the right. Arrival of a pulse from the pre-synaptic neuron before a pulse from the post-synaptic neuron results in the increase of the weight. Shorter delays between the pulses cause larger updates and reversed order of the pulses results in a decrease of the synaptic weight. This rule of synaptic weight update is called “spike-timing-dependent plasticity.”

The spike-timing-based approach enables the unparalleled efficiency of the human brain in event-driven processing of spatiotemporal information and shows promise to significantly improve the performance of artificial systems48 in real-time tasks such as robot navigation,49 interaction with event-based sensors,50 and quick decision-making (due to fast inference).51 Beyond computation, the ability to reproduce the operation of the human brain is also valuable for neuroscience52–54 and brain–machine interfaces.45,55

SNNs have not become widely adopted because much power is required to simulate them on conventional hardware. Figure 5 schematically shows the relation between physical units and time domains for various types of computing hardware. In an SNN, every synapse and neuron effectively relies on an independent clock to function [Fig. 5(c)]. Central processing units (CPUs) are synchronous architectures and require a uniform clock governing the cycles of logic evaluation [Fig. 5(a)]. As a result, a CPU must distribute its time sequentially among all the SNN units. At the same time, most of the logic in the CPU remains unused because the calculation complexity of an individual neuron or synapse is not high. The result is slow yet power-hungry operation and vast inefficiency. Faster and more efficient SNN simulations can be done with GPUs as they have multiple clock domains and smaller cores. CMOS architectures specifically tailored for SNNs—such as TrueNorth5 and Loihi6—allow even higher efficiency as the logic of each of the multiple cores is optimized for local calculations. The most specialized architecture would possibly consist of synapses and neurons individually constructed from CMOS components56 [Fig. 5(c)]. However, projecting the scale of such architectures to a brain-like system would require 10⁶–10⁸ times more transistors than in a modern CPU, corresponding to megawatts of consumed power compared to the 20 W required by the human brain.

FIG. 5.

Relation between physical units and time domains in which they operate in different architectures. (a) von Neumann devices like modern CPUs are synchronous meaning that all components operate in the same clock domain (some modern CPUs have several clocks). (b) GPUs are better suited for parallelized computations as they have thousands of simpler cores each operating in its own time domain. (c) Each unit of a spiking network operates in its own time domain and performs a rather simple calculation. The human brain has ∼10¹⁵ such time domains, which makes it impossible to simulate with a modern CPU regardless of the performance of the latter.

Considering these difficulties, it is of immense interest to search for more suitable approaches and material systems. Time measurement is the key factor in the synaptic and neuronal operations of SNNs. In biological systems, the slope of bipolar spikes of complex shape works as a measure of time intervals.57 In principle, this approach can be extended to a memristor of any type with a threshold (such as in Refs. 12 and 58). Indeed, it has been used in ferroelectric,58 phase change,59,60 resistive,61 organic,62 and graphene63 devices to show STDP functionality. However, neurons have not been replicated in the same materials or based on the same principle as they exhibit different properties—a leaky and probabilistic transition between binary states. Artificial neurons developed so far, therefore, have been based on the dynamics of binary switching in a Mott insulator64–67 or a floating-body MOSFET68 using unipolar pulses. Considering the enormous number of synapse-to-neuron connections, this incompatibility may impede the fabrication of large-scale SNNs. Hence, at the hardware level, although the parallelism of SNNs solves the so-called von Neumann bottleneck problem [Fig. 1(a)], it creates a challenging requirement of compatibility between synapses and neurons.

The compatibility requires material and operational uniformity between artificial synapses and neurons. The first condition translates to a requirement of a material system that can switch in binary and memristive fashions; the second—to a uniform time measuring principle. Both requirements can be satisfied by the AFM/FM memristors described above.

The AFM/FM memristor shows multilevel switching because it comprises numerous areas that show binary switching at different currents. Hence, binary switching can be achieved by reducing the device size below the binary area size. As mentioned in Sec. II, this enables either multilevel or binary switching in the same material depending only on the device size31 [Fig. 6]. The time measurement, in its turn, can be implemented most naturally with a dynamical physical process. Temperature can play such a role in the AFM/FM memristor as it (1) can be increased by an incoming pulse due to Joule heating, (2) dissipates between the pulses, and (3) affects switching by the subsequent pulse via thermally assisted magnetization reversal. As a result, denser sequences of pulses arriving at the binary device heat it up, leading to an increased probability of switching and thus reproducing the biological LIF-like property [Fig. 6(b)].41 Similarly, the smaller the interval between two pulses of opposite polarities arriving at the memristive device, the larger the change in the final device state, resulting in an STDP-like operation [Fig. 6(a)].41 This material and operational consistency allows realizing uniform STDP and LIF units and connecting them with simple circuits, making hardware implementations of large-scale SNNs more feasible. The approach can potentially be extended to purely antiferromagnetic systems as they demonstrate thermally assisted SOT switching as well.42,69 The possible challenges of the approach include the avoidance of thermal crosstalk in densely packed architectures and resetting the LIF device after firing (i.e., switching back its magnetization).
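
The heat-based time measurement can be captured in a few lines. The sketch below models only the LIF-like side of the scheme: each pulse deposits Joule heat, the excess temperature relaxes between pulses, and a pulse that arrives while the device is hot enough switches it. A sharp temperature threshold stands in for the probabilistic, thermally assisted reversal of the real device, and all numbers (heating per pulse, relaxation time, threshold) are assumed rather than taken from the experiments of Ref. 41.

```python
import numpy as np

def thermal_lif(pulse_times, tau_T=10.0, dT_pulse=25.0, dT_switch=60.0):
    """LIF-like behavior from heat accumulation (phenomenological sketch).

    tau_T:     time constant of heat dissipation between pulses (leakage)
    dT_pulse:  temperature rise produced by each identical pulse (Joule heating)
    dT_switch: excess temperature above which a pulse switches the binary device
               (a deterministic stand-in for thermally assisted, probabilistic reversal)
    """
    dT, t_prev = 0.0, 0.0
    for t in sorted(pulse_times):
        dT *= np.exp(-(t - t_prev) / tau_T)   # heat dissipated since the previous pulse
        dT += dT_pulse                        # Joule heating by the current pulse
        t_prev = t
        if dT >= dT_switch:
            return t                          # "fires": magnetization is reversed
    return None                               # pulses too sparse: the heat leaks away

# Denser trains of identical pulses accumulate more heat and switch the device.
print(thermal_lif([1, 2, 3, 4]))      # switches at t = 3
print(thermal_lif([1, 20, 40, 60]))   # never switches
```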

FIG. 6.

(a) Operating principle of an artificial synapse based on the memristive AFM/FM device. Two equal pulses of opposite polarities arrive with a delay (upper panel). The pulse that arrives earlier reverses magnetization of some domains and increases the temperature due to Joule heating (middle panel, red color represents increased temperature). The second pulse is of the same amplitude but provides larger switching (in the opposite direction) due to the residual heat from the first pulse and heat-assisted magnetization reversal (lower panel). Therefore, the final state of the device depends on the delay, i.e., the causality between the pulses. (b) Operating principle of an artificial neuron based on a binary single-domain device of the same material. A single pulse is insufficient for switching (upper panel) but a train of pulses can deliver enough heat to lower the energy barrier (middle panels) and eventually lead to magnetization reversal by one of the arriving pulses (lower panel). Adapted with permission from Kurenkov et al., Adv. Mater. 31, 1900636 (2019). Copyright 2019 Wiley-VCH Verlag GmbH & Co. KGaA.

Another way to implement a compatible synapse and neuron is proposed in a recent theoretical work,70 in which the dynamics of topological charges carried by domain walls in an AFM is utilized. A single domain with uniaxial magnetic anisotropy could work as a LIF neuron. It receives domain walls38 from other neurons, and its Néel vector switches by π if the delivered topological charge is sufficient. While switching, the domain emits its own domain wall, which then ballistically propagates through a nanowire to other neurons. If the excitation intensity is insufficient, the Néel vector of the domain returns to the initial state, dissipating the accumulated energy. Memristive synaptic connections could be realized in a similar AFM in the form of nanowires filled with an interacting gas of domain walls. A higher pressure of the domain walls in the wire boosts domain-wall propagation, resulting in a stronger synaptic connection. To implement these mechanisms, additional experimental investigation of the lesser-explored topics of generation, control, and detection of AFM domain walls will be needed.

The dynamics of AFMs allows more than one way of realizing artificial synapses and neurons. A GHz-range auto-oscillatory AFM neuron71 and a neuron based on skyrmions in a synthetic AFM72 have been proposed as well. Physical implementations of SNNs, unlike their software counterparts, have limited adjustability. Adjustability may be necessary in future networks as such mechanisms are present in biology at the global (e.g., neuronal tuning) and local (e.g., inhibition of a neuron) levels. Therefore, the diversity of approaches discussed in this section is important for maximizing the chances of a physical realization of SNNs and should be further extended.

Reservoir computing (RC) is a type of neuromorphic computing that, in comparison with non-spiking ANNs and SNNs, does not impose firm restrictions on hardware and can be readily compatible with some physical systems. RC can be considered a type of neural network in which the connectivity and weights of the hidden neurons are random and fixed (and not necessarily known). Only the weights of the output neurons are adjusted during training [Fig. 7(b)]. The ensemble of hidden neurons is called a “reservoir,” and its purpose is to receive an input and map it by a non-linear transformation to a higher-dimensional space, where it can be classified by a linear transformation of the output neurons.73 Because the reservoir is fixed, its set of transformations is not adjustable. Therefore, it must be large and complex enough to include the non-linear transformation needed to make the input data linearly separable. Simultaneously, its response must be consistent, i.e., equal inputs must generate equal outputs. Recently, a variety of physical systems with non-adjustable but complex non-linear dynamics have been tested as candidates for the reservoir layer [Fig. 7(c)]. The search for such systems has intensified, despite several limitations of RC (e.g., the impossibility of disentangling data processing within the reservoir), because of the fast and simple training and the ability to recognize and predict real-world temporal data such as weather or stock indexes.73 
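
The division of labor between a fixed reservoir and a trained linear readout can be illustrated with an echo-state-network sketch: a random recurrent network maps the input sequence into a high-dimensional state trajectory, and only the output weights are fitted by ridge regression. The reservoir size, spectral radius, toy prediction task, and regularization value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res = 100                                    # number of hidden (reservoir) neurons

# Random, fixed connectivity: these weights are never trained.
W_in  = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W_res = rng.normal(size=(n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))   # keep the recurrent dynamics stable

def run_reservoir(u):
    """Non-linearly map a scalar input sequence to the reservoir state trajectory."""
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W_in[:, 0] * u_t + W_res @ x)          # fixed non-linear dynamics with memory
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a noisy sine wave.
t = np.arange(0, 60, 0.1)
u = np.sin(t) + 0.05 * rng.normal(size=t.size)
X, y = run_reservoir(u[:-1]), u[1:]

# Training touches only the linear readout (ridge regression), as in Fig. 7(b).
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print("training MSE:", np.mean((X @ W_out - y) ** 2))
```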

FIG. 7.

Transition from neural networks to physical reservoir computing. (a) Schematics of a feed-forward artificial neural network. Red circles represent input neurons, and green circles represent output neurons. Gray lines are weighted connections between neurons. All weights are changed during training. (b) Reservoir computer is a neural network in which hidden neurons of a deep ANN are replaced with recurrent neurons (in echo state network or ESN) or spiking neurons (in liquid state machine or LSM) with fixed connections and weights. The reservoir must be complex enough for making input linearly separable by applying a suitable non-linear function to it. (c) Physical reservoir is a nonlinear physical system with sufficient dimensionality, which replaces hidden neurons of an ESN or LSM.

To work as a reservoir, a physical system must have sufficient non-linearity (enough to separate signals of different classes but not so much that it becomes unstable with noise), high dimensionality (considerably exceeding that of the input), and a short-term memory effect (with a time scale longer than the meaningful temporal correlations of the input).73,74 Experimental works on AFM-based reservoirs are yet to come, and the list of candidates includes arrays of antiferromagnetic memristors, spin torque oscillators (STOs), and synthetic AFM skyrmions.

It has been theoretically shown that arrays of memristors can form a robust RC system.75,76 Interestingly, the volatility of the memristors used, which is undesirable for memory applications, becomes an asset in RC. It enables short-term memory and reproducibility of the results over multiple attempts (by returning the states of the memristors to the same initial levels), and it prevents the memristors from getting “stuck” in their terminal states. Such volatile memristors based on WOx resistive memory have been used in RC both for recognition of spoken77 and handwritten78,79 digits and for forecasting of a chaotic time series77 [Fig. 8(a)]. SOT switching of antiferromagnetic memristors has been reported to be volatile in some materials as well, i.e., such devices relax to their initial states within minutes after switching.42,80,81 Thus, the use of AFM memristors seems feasible if the relaxation rate can be adjusted to the desired time scale.42 

FIG. 8.

(a) WOX leaky memristor-based RC for digit recognition of 5 × 4 images. Only the output ϴ weights are trained. Reproduced with permission from Du et al., Nat. Commun. 8, 2204 (2017). Copyright 2017 Springer Nature. (b) RC based on a ferromagnetic STO with non-linear voltage-vs-current response. Time-multiplexing is used to effectively increase dimensionality. Reproduced with permission from Torrejon et al., Nature 547, 428 (2017). Copyright 2017 Springer Nature. (c) Top panel: Néel skyrmion fabrics-based reservoir; voltage is applied between the yellow points. Bottom panel: the corresponding current pathways through the reservoir. Reproduced with permission from Bourianoff et al., AIP Adv. 8, 055602 (2018). Copyright 2018 AIP Publishing LLC.

Another spintronic system—the spin torque oscillator (STO)82—has a nonlinear dependence of frequency on the amplitude of the applied current and history-dependent magnetization dynamics.83,84 However, the dimensionality of a single-node reservoir represented by an STO is insufficient for complex functionality. It can be effectively increased by time-multiplexing of the input, for example, by adding a delay to the feedback, which allows a single node to emulate a network of virtual nodes.85 The application of such an approach to a ferromagnetic STO allowed a considerable enhancement of spoken digit recognition, although pre- and post-processing of the data by conventional means was required86 [Fig. 8(b)]. Extension of this approach to antiferromagnetic STOs, which have been numerically studied recently,87–91 is interesting considering their high operating frequency.
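
The idea of time-multiplexing can be sketched as follows: each input sample is stretched over a number of sub-steps, every sub-step drives the single non-linear node through a fixed input mask, and the node response at each sub-step is treated as a separate "virtual neuron". The tanh node, the mask, and the feedback strength below are generic illustrative choices, not a model of an STO.

```python
import numpy as np

rng = np.random.default_rng(0)
n_virtual = 50                                   # virtual nodes emulated by one physical node
mask = rng.choice([-1.0, 1.0], size=n_virtual)   # fixed input mask applied within each period

def single_node_reservoir(u, eta=0.5, gamma=0.05):
    """Time-multiplexed reservoir: one non-linear node plus delayed feedback.

    Each input sample u_k is presented through the mask over n_virtual sub-steps;
    eta couples every virtual node to its own response one delay loop earlier."""
    states = np.zeros((len(u), n_virtual))
    prev = np.zeros(n_virtual)                   # node responses one delay loop ago
    for k, u_k in enumerate(u):
        for i in range(n_virtual):
            states[k, i] = np.tanh(gamma * mask[i] * u_k + eta * prev[i])
        prev = states[k]
    return states

u = np.sin(np.arange(200) * 0.2)                 # toy input sequence
X = single_node_reservoir(u)
print(X.shape)   # (200, 50): every time step is mapped to a 50-dimensional virtual-node state
```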

The dimensionality can also be increased by coupling several STOs. This is experimentally more challenging but potentially more powerful than time-multiplexing.84,92 In this way, spoken vowel recognition was achieved without time-multiplexing and post-processing by four spin–transfer torque driven ferromagnetic STOs connected in series and coupled by current.93 More recently, synchronization of a two-dimensional array of 64 spin Hall effect driven STOs94 has been reported.95 Considering that the spin Hall effect can drive antiferromagnetic STOs and that they can be coupled through THz-frequency current,96–99 complex arrays of AFM STOs may be used in future RC systems.

One more spintronic object potentially applicable to RC is the skyrmion.97–99 It can nonlinearly change the resistance of a hosting FM stripe as a function of its position, structure, and the applied current.100 As in the case of the STO, a single-skyrmion reservoir does not have sufficient dimensionality. It has therefore been suggested that a “skyrmion fabric” (a phase combining skyrmions, skyrmion crystals, and domain walls101) covered with numerous nanocontacts would have sufficient dimensionality and nonlinearity for RC.74 In this case, an input, realized as a voltage applied to the contacts, would deform the magnetic structure, and each structure would correspond to a certain current pattern in the reservoir due to magnetoresistive effects [Fig. 8(c)]. Using AFM or synthetic AFM skyrmions or skyrmion fabrics could allow good scaling properties (due to the fine texture102), fast inputs (due to fast dynamics103), and suppression of the transverse deflection inherent to FM skyrmions102,104,105 (preventing annihilation at the reservoir edges).

It is not yet clear which approach will prove adequate. For instance, the skyrmion-fabric reservoir is multidimensional and highly nonlinear, but some experimental difficulties are expected, including the magnitude of the readout signal and the consistency of the response at ambient temperatures. A proof of concept of RC by STOs, on the other hand, has already been demonstrated, and the scalability of such systems will be a future challenge. RC based on volatile AFM memristors appears scalable and experimentally viable; however, it may be the least tunable approach. Comparison of the reservoirs is additionally complicated by the absence of an established way to compare their computing capacities.

The benefits of hardware specialized for brain-inspired computing have become more apparent with recent successful demonstrations. Examples include CMOS-based,5,17 optical,106,107 and condensed matter systems.8,41,86 Spintronic devices have been proposed and successfully used for various types of neuromorphic hardware. They can act as memristors in ANNs, simulate synapses and neurons in SNNs, and serve as non-linear physical reservoirs in RC. In order to make the spintronics-based approach a viable candidate for neuromorphic computing, several breakthroughs are still needed to satisfy the multiple requirements. Nevertheless, based on the current state of the field, spintronic devices, having both volatile and non-volatile properties, virtually unlimited endurance, and non-linear dynamics, look promising for neuromorphic architectures. From a technical standpoint, the inherent scalability of the devices108,109 is beneficial, while a readout signal that is small compared to other solid-state systems may pose a challenge.

AFMs are important for neuromorphic spintronics because they allow simple realization of a scalable and reproducibly operating memristor—a central device for most types of neural networks and RC. Although some FM systems also demonstrate memristive switching, they often require more complex patterning, are less scalable, or operate less reproducibly.110–115 Antiferromagnetic devices are also attractive because of their scalability and the absence of stray fields. However, increasing the readout signal, which currently originates from the anisotropic/spin Hall magnetoresistance, is a big challenge. Anticipated solutions include utilizing tunneling anisotropic116 or topological117,118 magnetoresistance at room temperature, as well as the recently achieved large anomalous Hall effect and electrical manipulation of Mn3Sn.119,120 A readily available alternative is to use AFM/FM structures. Exchange bias in them induces memristivity in the FM and allows a large readout signal to be obtained using, for instance, tunneling magnetoresistance.

Another distinguishing feature of AFMs—THz-range dynamics80—does not appear to be as crucial in neuromorphic applications as in memory devices80 or probabilistic computing.121,122 The GHz-range switching of FMs123 thus seems sufficient for current models. It may, however, become important for future RC systems or ultra-fast ANNs/SNNs. One more potentially important aspect of THz-range dynamics in AFMs is the fast propagation of high-frequency spin waves.124,125 Spin waves have been proposed for efficient digital information processing126,127 as they can transfer spin angular momentum without Joule-heat-related losses. Recently, they have also been envisaged for neuromorphic computing as an inter-neuron signal carrier in an SNN128 and in a feed-forward STO-based ANN129 and as a nonlinear element for RC.130 Efficient spin wave devices may be achieved in insulators with negligible Ohmic losses. AFM insulators not only can be switched by spin torque36 but also can carry spin waves131 with enough angular momentum to switch the magnetization of an adjacent layer.132 Therefore, magnonics and AFM insulators could enable units of ANNs (e.g., an AFM insulator memristor), SNNs, and RC (e.g., interfering STOs or spin wave reservoirs) superior in efficiency to their conducting counterparts.

A further peculiarity of AFMs—insensitivity to external magnetic fields—provides more difficulties than benefits today but may become pivotal in the future. Both biological133 and artificial neural networks must be large-scale to achieve complex and diverse functionality. Dense three-dimensional neuromorphic systems, required to reach the scale and interconnection density of the human brain, will impose stringent limitations on the crosstalk of their components and may prohibit the use of FM materials. This may make AFMs the ultimate spintronic candidate for neuromorphic computing, considering that they allow the same diversity of approaches as FMs. Achieving this will require further extensive study of both fundamental and practical aspects of AFMs: clarifying the source of memristivity, its scalability, and its relation to the domain structure; studying the dynamics of switching and the role of domain walls; achieving long-term non-volatility by increasing thermal stability; and developing better readout methods.

The authors are grateful to Y. Horio, C. Zhang, S. DuttaGupta, Y. Tserkovnyak, O. Tretiakov, M. Stiles, and T. Dohi for discussion. This work was supported in part by the ImPACT Program of CSTI, JSPS KAKENHI (Nos. 17H06093, 18KK0143, 19K15428, and 19H05622), JST-OPERA (No. JPMJOP1611), JST-CREST (No. JPMJCR19K3), JSPS Core-to-Core Program, and Cooperative Research Projects of RIEC.

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

1.
O.
Troyanskaya
,
Z.
Trajanoski
,
A.
Carpenter
,
S.
Thrun
,
N.
Razavian
, and
N.
Oliver
,
Nat. Cancer
1
,
149
(
2020
).
2.
T.
Young
,
D.
Hazarika
,
S.
Poria
, and
E.
Cambria
,
IEEE Comput. Intell. Mag.
13
,
55
(
2018
).
3.
D.
Silver
,
J.
Schrittwieser
,
K.
Simonyan
,
I.
Antonoglou
,
A.
Huang
,
A.
Guez
,
T.
Hubert
,
L.
Baker
,
M.
Lai
,
A.
Bolton
,
Y.
Chen
,
T.
Lillicrap
,
F.
Hui
,
L.
Sifre
,
G.
van den Driessche
,
T.
Graepel
, and
D.
Hassabis
,
Nature
550
,
354
(
2017
).
4.
B. L. T.
Sturm
,
M.
Iglesias
,
O.
Ben-Tal
,
M.
Miron
, and
E.
Gómez
,
Arts
8
,
115
(
2019
).
5.
P. A.
Merolla
,
J. V.
Arthur
,
R.
Alvarez-Icaza
,
A. S.
Cassidy
,
J.
Sawada
,
F.
Akopyan
,
B. L.
Jackson
,
N.
Imam
,
C.
Guo
,
Y.
Nakamura
,
B.
Brezzo
,
I.
Vo
,
S. K.
Esser
,
R.
Appuswamy
,
B.
Taba
,
A.
Amir
,
M. D.
Flickner
,
W. P.
Risk
,
R.
Manohar
, and
D. S.
Modha
,
Science
345
,
668
(
2014
).
6.
M.
Davies
,
N.
Srinivasa
,
T.-H.
Lin
,
G.
Chinya
,
Y.
Cao
,
S. H.
Choday
,
G.
Dimou
,
P.
Joshi
,
N.
Imam
,
S.
Jain
,
Y.
Liao
,
C.-K.
Lin
,
A.
Lines
,
R.
Liu
,
D.
Mathaikutty
,
S.
McCoy
,
A.
Paul
,
J.
Tse
,
G.
Venkataramanan
,
Y.-H.
Weng
,
A.
Wild
,
Y.
Yang
, and
H.
Wang
,
IEEE Microbiol.
38
,
82
(
2018
).
8.
P.
Yao
,
H.
Wu
,
B.
Gao
,
J.
Tang
,
Q.
Zhang
,
W.
Zhang
,
J. J.
Yang
, and
H.
Qian
,
Nature
577
,
641
(
2020
).
9.
S.
Fukami
and
H.
Ohno
,
J. Appl. Phys.
124
,
151904
(
2018
).
10.
J.
Grollier
,
D.
Querlioz
,
K. Y.
Camsari
,
K.
Everschor-Sitte
,
S.
Fukami
, and
M. D.
Stiles
,
Nat. Electron.
(published online 2020).
11.
W.
Zhang
,
M. B.
Jungfleisch
,
W.
Jiang
,
J. E.
Pearson
,
A.
Hoffmann
,
F.
Freimuth
, and
Y.
Mokrousov
,
Phys. Rev. Lett.
113
,
196602
(
2014
).
12.
S.
Fukami
,
C.
Zhang
,
S.
DuttaGupta
,
A.
Kurenkov
, and
H.
Ohno
,
Nat. Mater.
15
,
535
(
2016
).
13.
P.
Wadley
,
B.
Howells
,
J.
Elezny
,
C.
Andrews
,
V.
Hills
,
R. P.
Campion
,
V.
Novak
,
K.
Olejnik
,
F.
Maccherozzi
,
S. S.
Dhesi
,
S. Y.
Martin
,
T.
Wagner
,
J.
Wunderlich
,
F.
Freimuth
,
Y.
Mokrousov
,
J.
Kune
,
J. S.
Chauhan
,
M. J.
Grzybowski
,
A. W.
Rushforth
,
K. W.
Edmonds
,
B. L.
Gallagher
, and
T.
Jungwirth
,
Science
351
,
587
(
2016
).
14.
T.
Jungwirth
,
X.
Marti
,
P.
Wadley
, and
J.
Wunderlich
,
Nat. Nanotechnol.
11
,
231
(
2016
).
15.
T.
Jungwirth
,
J.
Sinova
,
A.
Manchon
,
X.
Marti
,
J.
Wunderlich
, and
C.
Felser
,
Nat. Phys.
14
,
200
(
2018
).
16.
R. A.
Duine
,
K.-J.
Lee
,
S. S. P.
Parkin
, and
M. D.
Stiles
,
Nat. Phys.
14
,
217
(
2018
).
17.
Y. E.
Wang
,
G. Y.
Wei
, and
D.
Brooks
, arXiv:1907.10701 (
2019
).
18.
D. B.
Strukov
,
G. S.
Snider
,
D. R.
Stewart
, and
R. S.
Williams
,
Nature
453
,
80
(
2008
).
19.
L. O.
Chua
,
IEEE Trans. Circuit Theory
18
,
507
(
1971
).
20.
S. B.
Eryilmaz
,
D.
Kuzum
,
R.
Jeyasingh
,
S.
Kim
,
M.
BrightSky
,
C.
Lam
, and
H.-S. P.
Wong
,
Front. Neurosci.
8
,
205
(
2014
).
21.
G. W.
Burr
,
R. M.
Shelby
,
S.
Sidler
,
C.
di Nolfo
,
J.
Jang
,
I.
Boybat
,
R. S.
Shenoy
,
P.
Narayanan
,
K.
Virwani
,
E. U.
Giacometti
,
B. N.
Kurdi
, and
H.
Hwang
,
IEEE Trans. Electron Devices
62
,
3498
(
2015
).
22.
M.
Prezioso
,
F.
Merrikh-Bayat
,
B. D.
Hoskins
,
G. C.
Adam
,
K. K.
Likharev
, and
D. B.
Strukov
,
Nature
521
,
61
(
2015
).
23.
Z.
Wang
,
S.
Joshi
,
S.
Savel’ev
,
W.
Song
,
R.
Midya
,
Y.
Li
,
M.
Rao
,
P.
Yan
,
S.
Asapu
,
Y.
Zhuo
,
H.
Jiang
,
P.
Lin
,
C.
Li
,
J. H.
Yoon
,
N. K.
Upadhyay
,
J.
Zhang
,
M.
Hu
,
J. P.
Strachan
,
M.
Barnell
,
Q.
Wu
,
H.
Wu
,
R. S.
Williams
,
Q.
Xia
, and
J. J.
Yang
,
Nat. Electron.
1
,
137
(
2018
).
24.
J. C.
Slonczewski
,
J. Magn. Magn. Mater.
159
,
L1
(
1996
).
25.
L.
Berger
,
Phys. Rev. B
54
,
9353
(
1996
).
26.
I. M.
Miron
,
K.
Garello
,
G.
Gaudin
,
P.-J.
Zermatten
,
M. V.
Costache
,
S.
Auffret
,
S.
Bandiera
,
B.
Rodmacq
,
A.
Schuhl
, and
P.
Gambardella
,
Nature
476
,
189
(
2011
).
27.
L.
Liu
,
C.-F.
Pai
,
Y.
Li
,
H. W.
Tseng
,
D. C.
Ralph
, and
R. A.
Buhrman
,
Science
336
,
555
(
2012
).
28.
H.
Chen
,
Q.
Niu
, and
A. H.
MacDonald
,
Phys. Rev. Lett.
112
,
017205
(
2014
).
29.
W. H.
Meiklejohn
and
C. P.
Bean
,
Phys. Rev.
102
,
1413
(
1956
).
30.
Y.-W.
Oh
,
S.
Chris Baek
,
Y. M.
Kim
,
H. Y.
Lee
,
K.-D.
Lee
,
C.-G.
Yang
,
E.-S.
Park
,
K.-S.
Lee
,
K.-W.
Kim
,
G.
Go
,
J.-R.
Jeong
,
B.-C.
Min
,
H.-W.
Lee
,
K.-J.
Lee
, and
B.-G.
Park
,
Nat. Nanotechnol.
11
,
878
(
2016
).
31.
A.
Kurenkov
,
C.
Zhang
,
S.
DuttaGupta
,
S.
Fukami
, and
H.
Ohno
,
Appl. Phys. Lett.
110
,
092410
(
2017
).
32.
W. A.
Borders
,
H.
Akima
,
S.
Fukami
,
S.
Moriya
,
S.
Kurihara
,
Y.
Horio
,
S.
Sato
, and
H.
Ohno
,
Appl. Phys. Express
10
,
013007
(
2017
).
33.
D. O.
Hebb
,
The Organization of Behavior: A Neuropsychological Theory
(
John Wiley and Sons, Inc.
,
New York
,
1949
).
34.
M. J.
Grzybowski
,
P.
Wadley
,
K. W.
Edmonds
,
R.
Beardsley
,
V.
Hills
,
R. P.
Campion
,
B. L.
Gallagher
,
J. S.
Chauhan
,
V.
Novak
,
T.
Jungwirth
,
F.
Maccherozzi
, and
S. S.
Dhesi
,
Phys. Rev. Lett.
118
,
057701
(
2017
).
35.
S. Y.
Bodnar
,
L.
Šmejkal
,
I.
Turek
,
T.
Jungwirth
,
O.
Gomonay
,
J.
Sinova
,
A. A.
Sapozhnik
,
H.-J.
Elmers
,
M.
Kläui
, and
M.
Jourdan
,
Nat. Commun.
9
,
348
(
2018
).
36.
T.
Moriyama
,
K.
Oda
,
T.
Ohkochi
,
M.
Kimata
, and
T.
Ono
,
Sci. Rep.
8
,
14167
(
2018
).
37.
X. Z.
Chen
,
R.
Zarzuela
,
J.
Zhang
,
C.
Song
,
X. F.
Zhou
,
G. Y.
Shi
,
F.
Li
,
H. A.
Zhou
,
W. J.
Jiang
,
F.
Pan
, and
Y.
Tserkovnyak
,
Phys. Rev. Lett.
120
,
207204
(
2018
).
38.
L.
Baldrati
,
O.
Gomonay
,
A.
Ross
,
M.
Filianina
,
R.
Lebrun
,
R.
Ramos
,
C.
Leveille
,
F.
Fuhrmann
,
T. R.
Forrest
,
F.
MacCherozzi
,
S.
Valencia
,
F.
Kronast
,
E.
Saitoh
,
J.
Sinova
, and
M.
Klaüi
,
Phys. Rev. Lett.
123
,
177201
(
2019
).
39.
C. C.
Chiang
,
S. Y.
Huang
,
D.
Qu
,
P. H.
Wu
, and
C. L.
Chien
,
Phys. Rev. Lett.
123
,
227203
(
2019
).
40.
Y.
Cheng
,
S.
Yu
,
M.
Zhu
,
J.
Hwang
, and
F.
Yang
,
Phys. Rev. Lett.
124
,
27202
(
2020
).
41. A. Kurenkov, S. DuttaGupta, C. Zhang, S. Fukami, Y. Horio, and H. Ohno, Adv. Mater. 31, 1900636 (2019).
42. K. Olejník, V. Schuler, X. Marti, V. Novak, Z. Kaspar, P. Wadley, R. P. Campion, K. W. Edmonds, B. L. Gallagher, J. Garces, M. Baumgartner, P. Gambardella, and T. Jungwirth, Nat. Commun. 8, 15434 (2017).
43. A. L. Hodgkin and A. F. Huxley, J. Physiol. 117, 500 (1952).
44. F. Ponulak and A. Kasinski, Acta Neurobiol. Exp. 71, 409 (2011).
45. N. K. Kasabov, Neural Networks 52, 62 (2014).
46. G.-Q. Bi and M.-M. Poo, J. Neurosci. 18, 10464 (1998).
47. S. Song, K. D. Miller, and L. F. Abbott, Nat. Neurosci. 3, 919 (2000).
48. B. Han, A. Sengupta, and K. Roy, in 2016 International Joint Conference on Neural Networks (IEEE, 2016), pp. 971–976.
49. M. Beyeler, N. Oros, N. Dutt, and J. L. Krichmar, Neural Networks 72, 75 (2015).
50. A. Tavanaei, M. Ghodrati, S. R. Kheradpisheh, T. Masquelier, and A. Maida, Neural Networks 111, 47 (2019).
51. M. Pfeiffer and T. Pfeil, Front. Neurosci. 12, 774 (2018).
52. E. M. Izhikevich and G. M. Edelman, Proc. Natl. Acad. Sci. U. S. A. 105, 3593 (2008).
53. C. Eliasmith, T. C. Stewart, X. Choo, T. Bekolay, T. DeWolf, Y. Tang, and D. Rasmussen, Science 338, 1202 (2012).
54. C. Eliasmith and O. Trujillo, Curr. Opin. Neurobiol. 25, 1 (2014).
55. T. Werner, E. Vianello, O. Bichler, D. Garbin, D. Cattaert, B. Yvert, B. De Salvo, and L. Perniola, Front. Neurosci. 10, 474 (2016).
56. E. Chicca, F. Stefanini, C. Bartolozzi, and G. Indiveri, Proc. IEEE 102, 1367 (2014).
57. C. Zamarreño-Ramos, L. A. Camuñas-Mesa, J. A. Pérez-Carrasco, T. Masquelier, T. Serrano-Gotarredona, and B. Linares-Barranco, Front. Neurosci. 5, 26 (2011).
58. S. Boyn, J. Grollier, G. Lecerf, B. Xu, N. Locatelli, S. Fusil, S. Girod, C. Carrétéro, K. Garcia, S. Xavier, J. Tomas, L. Bellaiche, M. Bibes, A. Barthélémy, S. Saïghi, and V. Garcia, Nat. Commun. 8, 14736 (2017).
59. D. Kuzum, R. G. D. Jeyasingh, B. Lee, and H.-S. P. Wong, Nano Lett. 12, 2179 (2012).
60. Y. Li, Y. Zhong, L. Xu, J. Zhang, X. Xu, H. Sun, and X. Miao, Sci. Rep. 3, 1619 (2013).
61. Y.-F. Wang, Y.-C. Lin, I.-T. Wang, T.-P. Lin, and T.-H. Hou, Sci. Rep. 5, 10150 (2015).
62. F. Alibart, S. Pleutin, O. Bichler, C. Gamrat, T. Serrano-Gotarredona, B. Linares-Barranco, and D. Vuillaume, Adv. Funct. Mater. 22, 609 (2012).
63. M. T. Sharbati, Y. Du, J. Torres, N. D. Ardolino, M. Yun, and F. Xiong, Adv. Mater. 30, 1802353 (2018).
64. M. D. Pickett, G. Medeiros-Ribeiro, and R. S. Williams, Nat. Mater. 12, 114 (2013).
65. P. Stoliar, J. Tranchant, B. Corraze, E. Janod, M. P. Besland, F. Tesler, M. Rozenberg, and L. Cario, Adv. Funct. Mater. 27, 1604740 (2017).
66. J. del Valle, P. Salev, F. Tesler, N. M. Vargas, Y. Kalcheim, P. Wang, J. Trastoy, M. H. Lee, G. Kassabian, J. G. Ramírez, M. J. Rozenberg, and I. K. Schuller, Nature 569, 388 (2019).
67. J. del Valle, P. Salev, Y. Kalcheim, and I. K. Schuller, Sci. Rep. 10, 4292 (2020).
68. S. Dutta, V. Kumar, A. Shukla, N. R. Mohapatra, and U. Ganguly, Sci. Rep. 7, 8257 (2017).
69. Z. Kašpar, K. Olejník, V. Novák, and T. Jungwirth, “Heat assisted switching of AFM CuMnAs memory cell,” in 2018 International Conference on Magnetism, 2018, Ref. B1-09.
70. S. Zhang and Y. Tserkovnyak, arXiv:2003.11058 (2020).
71. R. Khymyn, I. Lisenkov, J. Voorheis, O. Sulymenko, O. Prokopenko, V. Tiberkevich, J. Akerman, and A. Slavin, Sci. Rep. 8, 15727 (2018).
72. X. Chen, W. Kang, D. Zhu, X. Zhang, N. Lei, Y. Zhang, Y. Zhou, and W. Zhao, Nanoscale 10, 6139 (2018).
73. G. Tanaka, T. Yamane, J. B. Héroux, R. Nakane, N. Kanazawa, S. Takeda, H. Numata, D. Nakano, and A. Hirose, Neural Networks 115, 100 (2019).
74. G. Bourianoff, D. Pinna, M. Sitte, and K. Everschor-Sitte, AIP Adv. 8, 055602 (2018).
75. J. P. Carbajal, J. Dambre, M. Hermans, and B. Schrauwen, Neural Comput. 27, 725 (2015).
76. N. Soures, L. Hays, and D. Kudithipudi, in 2017 International Joint Conference on Neural Networks (IEEE, 2017), pp. 2414–2420.
77. J. Moon, W. Ma, J. H. Shin, F. Cai, C. Du, S. H. Lee, and W. D. Lu, Nat. Electron. 2, 480 (2019).
78. C. Du, F. Cai, M. A. Zidan, W. Ma, S. H. Lee, and W. D. Lu, Nat. Commun. 8, 2204 (2017).
79. R. Midya, Z. Wang, S. Asapu, X. Zhang, M. Rao, W. Song, Y. Zhuo, N. Upadhyay, Q. Xia, and J. J. Yang, Adv. Intell. Syst. 1, 1900084 (2019).
80. K. Olejník, T. Seifert, Z. Kašpar, V. Novák, P. Wadley, R. P. Campion, M. Baumgartner, P. Gambardella, P. Němec, J. Wunderlich, J. Sinova, P. Kužel, M. Müller, T. Kampfrath, and T. Jungwirth, Sci. Adv. 4, eaar3566 (2018).
81. M. Dunz, T. Matalla-Wagner, and M. Meinert, Phys. Rev. Res. 2, 013347 (2020).
82. S. I. Kiselev, J. C. Sankey, I. N. Krivorotov, N. C. Emley, R. J. Schoelkopf, R. A. Buhrman, and D. C. Ralph, Nature 425, 380 (2003).
83. A. Slavin and V. Tiberkevich, IEEE Trans. Magn. 45, 1875 (2009).
84. S. Tsunegi, T. Taniguchi, K. Nakajima, S. Miwa, K. Yakushiji, A. Fukushima, S. Yuasa, and H. Kubota, Appl. Phys. Lett. 114, 164101 (2019).
85. L. Appeltant, M. C. Soriano, G. Van der Sande, J. Danckaert, S. Massar, J. Dambre, B. Schrauwen, C. R. Mirasso, and I. Fischer, Nat. Commun. 2, 468 (2011).
86. J. Torrejon, M. Riou, F. A. Araujo, S. Tsunegi, G. Khalsa, D. Querlioz, P. Bortolotti, V. Cros, K. Yakushiji, A. Fukushima, H. Kubota, S. Yuasa, M. D. Stiles, and J. Grollier, Nature 547, 428 (2017).
87. R. Cheng, D. Xiao, and A. Brataas, Phys. Rev. Lett. 116, 207603 (2016).
88. R. Khymyn, I. Lisenkov, V. Tiberkevich, B. A. Ivanov, and A. Slavin, Sci. Rep. 7, 43705 (2017).
89. D.-K. Lee, B.-G. Park, and K.-J. Lee, Phys. Rev. Appl. 11, 054048 (2019).
90. R. E. Troncoso, K. Rode, P. Stamenov, J. M. D. Coey, and A. Brataas, Phys. Rev. B 99, 054433 (2019).
91. H. Zhong, S. Qiao, S. Yan, L. Liang, Y. Zhao, and S. Kang, J. Magn. Magn. Mater. 497, 166070 (2020).
92. T. Kanao, H. Suto, K. Mizushima, H. Goto, T. Tanamoto, and T. Nagasawa, Phys. Rev. Appl. 12, 024052 (2019).
93. M. Romera, P. Talatchian, S. Tsunegi, F. Abreu Araujo, V. Cros, P. Bortolotti, J. Trastoy, K. Yakushiji, A. Fukushima, H. Kubota, S. Yuasa, M. Ernoult, D. Vodenicarevic, T. Hirtzlin, N. Locatelli, D. Querlioz, and J. Grollier, Nature 563, 230 (2018).
94. A. A. Awad, P. Dürrenfeld, A. Houshang, M. Dvornik, E. Iacocca, R. K. Dumas, and J. Åkerman, Nat. Phys. 13, 292 (2017).
95. M. Zahedinejad, A. A. Awad, S. Muralidhar, R. Khymyn, H. Fulara, H. Mazraati, M. Dvornik, and J. Åkerman, Nat. Nanotechnol. 15, 47 (2020).
96. D. V. Slobodianiuk, O. R. Sulymenko, and O. V. Prokopenko, in 2018 IEEE 38th International Conference on Electronics and Nanotechnology (IEEE, 2018), pp. 470–473.
97. A. N. Bogdanov and D. A. Yablonskii, Sov. Phys. JETP 68, 101 (1989).
98. S. Mühlbauer, B. Binz, F. Jonietz, C. Pfleiderer, A. Rosch, A. Neubauer, R. Georgii, and P. Böni, Science 323, 915 (2009).
99. N. Romming, C. Hanneken, M. Menzel, J. E. Bickel, B. Wolter, K. von Bergmann, A. Kubetzka, and R. Wiesendanger, Science 341, 636 (2013).
100. D. Prychynenko, M. Sitte, K. Litzius, B. Krüger, G. Bourianoff, M. Kläui, J. Sinova, and K. Everschor-Sitte, Phys. Rev. Appl. 9, 014034 (2018).
101. C.-Y. You and N.-H. Kim, Curr. Appl. Phys. 15, 298 (2015).
102. J. Barker and O. A. Tretiakov, Phys. Rev. Lett. 116, 147203 (2016).
103. C. A. Akosa, O. A. Tretiakov, G. Tatara, and A. Manchon, Phys. Rev. Lett. 121, 097204 (2018).
104. T. Dohi, S. DuttaGupta, S. Fukami, and H. Ohno, Nat. Commun. 10, 5153 (2019).
105. W. Legrand, D. Maccariello, F. Ajejas, S. Collin, A. Vecchiola, K. Bouzehouane, N. Reyren, V. Cros, and A. Fert, Nat. Mater. 19, 34 (2020).
106. K. Vandoorne, P. Mechet, T. Van Vaerenbergh, M. Fiers, G. Morthier, D. Verstraeten, B. Schrauwen, J. Dambre, and P. Bienstman, Nat. Commun. 5, 3541 (2014).
107. Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, Nat. Photonics 11, 441 (2017).
108. H. Sato, E. C. I. Enobio, M. Yamanouchi, S. Ikeda, S. Fukami, S. Kanai, F. Matsukura, and H. Ohno, Appl. Phys. Lett. 105, 062403 (2014).
109. K. Watanabe, B. Jinnai, S. Fukami, H. Sato, and H. Ohno, Nat. Commun. 9, 663 (2018).
110. A. Segal, M. Karpovski, and A. Gerber, J. Magn. Magn. Mater. 324, 1557 (2012).
111. S. Lequeux, J. Sampaio, V. Cros, K. Yakushiji, A. Fukushima, R. Matsumoto, H. Kubota, S. Yuasa, and J. Grollier, Sci. Rep. 6, 31510 (2016).
112. Y. Huang, W. Kang, X. Zhang, Y. Zhou, and W. Zhao, Nanotechnology 28, 08LT02 (2017).
113. K. M. Song, J.-S. Jeong, B. Pan, X. Zhang, J. Xia, S. K. Cha, T.-E. Park, K. Kim, S. Finizio, J. Raabe, J. Chang, Y. Zhou, W. Zhao, W. Kang, H. Ju, and S. Woo, Nat. Electron. 3, 148 (2020).
114. H. Zhong, Y. Wen, Y. Zhao, Q. Zhang, Q. Huang, Y. Chen, J. Cai, X. Zhang, R. Li, L. Bai, S. Kang, S. Yan, and Y. Tian, Adv. Funct. Mater. 29, 1806460 (2019).
115. S. Das, A. Zaig, H. Nhalil, L. Avraham, M. Schultz, and L. Klein, Sci. Rep. 9, 20368 (2019).
116. B. G. Park, J. Wunderlich, X. Martí, V. Holý, Y. Kurosaki, M. Yamada, H. Yamamoto, A. Nishide, J. Hayakawa, H. Takahashi, A. B. Shick, and T. Jungwirth, Nat. Mater. 10, 347 (2011).
117. P. Tang, Q. Zhou, G. Xu, and S. C. Zhang, Nat. Phys. 12, 1100 (2016).
118. L. Šmejkal, J. Železný, J. Sinova, and T. Jungwirth, Phys. Rev. Lett. 118, 106402 (2017).
119. S. Nakatsuji, N. Kiyohara, and T. Higo, Nature 527, 212 (2015).
120. H. Tsai, T. Higo, K. Kondou, T. Nomoto, A. Sakai, A. Kobayashi, T. Nakano, K. Yakushiji, R. Arita, S. Miwa, Y. Otani, and S. Nakatsuji, Nature 580, 608 (2020).
121. J. Kaiser, K. Camsari, S. Datta, and P. Upadhyaya, APS March Meet. 64, S39.00001 (2019).
122. W. A. Borders, A. Z. Pervaiz, S. Fukami, K. Y. Camsari, H. Ohno, and S. Datta, Nature 573, 390 (2019).
123. K. Garello, C. O. Avci, I. M. Miron, M. Baumgartner, A. Ghosh, S. Auffret, O. Boulle, G. Gaudin, and P. Gambardella, Appl. Phys. Lett. 105, 212402 (2014).
124. T. Kampfrath, A. Sell, G. Klatt, A. Pashkin, S. Mährlein, T. Dekorsy, M. Wolf, M. Fiebig, A. Leitenstorfer, and R. Huber, Nat. Photonics 5, 31 (2011).
125. A. V. Chumak, V. I. Vasyuchka, A. A. Serga, and B. Hillebrands, Nat. Phys. 11, 453 (2015).
126. M. Jamali, J. H. Kwon, S. M. Seo, K. J. Lee, and H. Yang, Sci. Rep. 3, 3160 (2013).
127. R. Cheng, M. W. Daniels, J.-G. Zhu, and D. Xiao, Sci. Rep. 6, 24223 (2016).
128. L. Zeng, D. Zhang, Y. Zhang, F. Gong, T. Gao, S. Tu, H. Yu, and W. Zhao, in 2016 IEEE International Symposium on Circuits and Systems (IEEE, 2016), pp. 918–921.
129. H. Arai and H. Imamura, J. Appl. Phys. 124, 152131 (2018).
130. R. Nakane, G. Tanaka, and A. Hirose, IEEE Access 6, 4462 (2018).
131. Y. Kajiwara, K. Harii, S. Takahashi, J. Ohe, K. Uchida, M. Mizuguchi, H. Umezawa, H. Kawai, K. Ando, K. Takanashi, S. Maekawa, and E. Saitoh, Nature 464, 262 (2010).
132. Y. Wang, D. Zhu, Y. Yang, K. Lee, R. Mishra, G. Go, S.-H. Oh, D.-H. Kim, K. Cai, E. Liu, S. D. Pollard, S. Shi, J. Lee, K. L. Teo, Y. Wu, K.-J. Lee, and H. Yang, Science 366, 1125 (2019).
133. R. O. Deaner, K. Isler, J. Burkart, and C. Van Schaik, Brain Behav. Evol. 70, 115 (2007).