Neuromorphic computing approaches become increasingly important as we address future needs for efficiently processing massive amounts of data. The unique attributes of quantum materials can help address these needs by enabling new energy-efficient device concepts that implement neuromorphic ideas at the hardware level. In particular, strong correlations give rise to highly non-linear responses, such as conductive phase transitions that can be harnessed for short- and long-term plasticity. Similarly, magnetization dynamics are strongly non-linear and can be utilized for data classification. This Perspective discusses select examples of these approaches and provides an outlook on the current opportunities and challenges for assembling quantum-material-based devices for neuromorphic functionalities into larger emergent complex network systems.

Brain-inspired computing is promising for the development of highly efficient complex computation with minimal energy consumption. Today, software implementations of neural networks have become ubiquitous, including facial recognition software, language translation, and search engines. Unfortunately, the energy being used on these tasks is growing unsustainably. Fortunately, the brain also provides inspiration for approaches to minimize this energy. Brain-inspired computing can not only address the efficiency of computing in data centers, but it can also address the need to provide efficient computation locally, at sensors and close to the end-user, in order to avoid the inefficiencies of communicating data over long distances. Critically, traditional computational systems are hitting a roadblock mainly because of the difficulty in scaling up a complementary metal oxide semiconductor (CMOS)-based computer to the complexity of the brain.1 An important limiting factor is the prohibitively large energy consumption at both the local (on-chip) and global (system) levels. In addition, the architecture of the brain has a high degree of three-dimensional connectivity, whereas “classical” computers have a much simpler pseudo-three-dimensional (i.e., a stack of a small number of two-dimensional systems) architecture with little connectivity between devices. Consequently, the current limitations of classical CMOS-based computation give rise to a new energy challenge for modern computational tasks.2 Overcoming these limitations will require innovative approaches that combine, in a holistic and interdisciplinary manner, new materials, device designs, computing architectures, and algorithm developments.

Most research on neural networks and neuromorphic computing uses CMOS technology.3 However, as an increasing number of applications emerge, more complicated networks require more resources. One approach to limiting the energy consumption is to use specialized hardware, such as graphical processing units (GPUs) or tensor processing units (TPUs), that is optimized for carrying out the many vector–matrix multiplications used in neural networks. So far, this is the most commercially viable approach.

In a more forward-looking approach to capturing some of the brain’s efficiency for cognitive computing, many researchers are trying to more closely mimic the brain by building spiking networks. Large corporations4–6 have developed specialized chips that are, in some sense, still conventional digital computers since they are implemented with standard CMOS technology but are optimized for running artificial neural network algorithms. These chips are designed around digital CMOS implementations of synapses and neurons, where a large number of relatively small computing units are in close physical proximity to the working memory that is distributed throughout the system. These remarkable chips implement millions of neurons and hundreds of millions of synapses.

Other approaches7 are hybrid in the sense that the neurons are physically implemented by analog electronics.8 For instance, the membrane potential of each neuron is implemented by the charge stored in a capacitor. The neurons emit a spike when the voltage of the capacitor reaches a preset threshold potential. However, the system is hybrid because the spike is not represented by the emission of an analog action potential, but by a spike event, which is communicated to downstream neurons through a network that digitally implements the synaptic connections. The power consumption of spiking systems might be further optimized by using transistors working in the subthreshold regime.9 However, operating in this regime leads to more variability in the electronic response.
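
As a minimal illustration of the hybrid scheme described above, the sketch below integrates a leaky integrate-and-fire neuron whose “membrane” is a capacitor charged by an input current; a spike event is emitted and the capacitor is reset when a threshold voltage is reached. All parameter values are illustrative and are not taken from any particular chip.

```python
# Minimal leaky integrate-and-fire neuron: the "membrane" is a capacitor C
# charged by an input current and leaking through a resistance R. Parameter
# values are illustrative only, not taken from any particular chip.
import numpy as np

C = 1e-12        # membrane capacitance (F)
R = 1e9          # leak resistance (ohm)
V_TH = 0.5       # firing threshold (V)
V_RESET = 0.0    # reset potential (V)
DT = 1e-6        # time step (s)

def simulate(i_in, n_steps=5000):
    """Integrate dV/dt = (-V/R + I_in)/C and return the spike times."""
    v, spikes = 0.0, []
    for k in range(n_steps):
        v += DT * (-v / R + i_in) / C
        if v >= V_TH:            # threshold crossing -> spike event
            spikes.append(k * DT)
            v = V_RESET          # capacitor is discharged after the spike
    return spikes

print(len(simulate(1e-9)), "spikes for a 1 nA input")
```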

Miniaturization of CMOS electronics is feasible down to the nanometer scale. However, all neuromorphic computing systems that represent a neuron in a compact manner require a capacitor, whose miniaturization remains a serious challenge. The miniaturization of the capacitors is crucially limited by the dielectric constant of CMOS compatible materials. It may be possible to overcome this problem with a radically different approach that rests on different physical principles for the implementation of artificial spiking neurons.

One way to reduce energy consumption in artificial neurons and other approaches to brain-inspired computing is to exploit the pronounced non-linear properties of quantum materials. The principal idea behind this energy efficiency is that, in quantum materials, a small electrical stimulus may produce a large response that can be electrical, mechanical, optical, magnetic, etc., through a material change of state. For example, several oxides exhibit metal–insulator transitions in which a small voltage or current can produce a large (several orders of magnitude) change in resistivity.10 Furthermore, during the resistive switching, Joule heating provides an intrinsic physical mechanism for emulating short- and long-term plasticity as well as spike-timing-dependent plasticity,11 key mechanisms for learning, without external electric pulse control. In parallel, recent works12,13 show that magnetization dynamics provide a rich variety of nonlinear behavior that can be harnessed for classification. New quantum materials offer novel pathways for manipulating such magnetization dynamics and giving rise to new functionalities important for neuromorphic computing, such as analog memory. These properties then provide the material basis for emulating neurons, synapses, axons, and dendrites.

In the human brain, learning and memory are governed by synapses. One way to emulate synaptic behavior is to base the artificial synapses on the modulation of electrical resistivity by an electric stimulus (current or voltage). The nonvolatile synaptic “weights” (i.e., resistivities) must be set by an energy-efficient method and should vary continuously. One traditional implementation of artificial synapses is based on resistive switching phenomena and uses phase change materials or transition metal oxides.14–18 The continuous resistivity changes in phase change materials are created by intermediate phases. Density functional theory (DFT) calculations have shown that structural distortions result in a continuous change from crystallinity to amorphicity. Transition metal oxides are also used to emulate neurons (“neuristors”).19–21 These are spiking devices that exhibit leaky integrate-and-fire behavior. The volatile resistance switches from a high-resistance to a low-resistance state when a voltage above a threshold value is applied, and returns to the high-resistance state when the applied voltage is removed. The properties of these materials depend strongly on defects that introduce states in the energy gap, changing the switching mechanism from thermal to electronic.
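
As a schematic illustration of a continuously adjustable, non-volatile synaptic weight of the kind described above, the following toy model updates a conductance in bounded steps with each potentiating or depressing voltage pulse; the saturating update rule and the parameter values are generic assumptions, not a model of any specific phase change material or oxide.

```python
# Toy non-volatile synapse: conductance G moves between G_MIN and G_MAX in
# small, bounded steps with each potentiating or depressing pulse. This is a
# generic phenomenological model, not a fit to any specific material.
G_MIN, G_MAX, ALPHA = 1e-6, 1e-4, 0.05   # bounds (S) and step fraction

def apply_pulse(g, potentiate=True):
    """Return the updated conductance after one programming pulse."""
    if potentiate:
        return g + ALPHA * (G_MAX - g)    # smaller steps near G_MAX (saturation)
    return g - ALPHA * (g - G_MIN)        # smaller steps near G_MIN

g = G_MIN
for _ in range(20):                       # 20 potentiating pulses
    g = apply_pulse(g, potentiate=True)
print(f"conductance after potentiation: {g:.2e} S")
for _ in range(20):                       # 20 depressing pulses
    g = apply_pulse(g, potentiate=False)
print(f"conductance after depression:  {g:.2e} S")
```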

This Perspective focuses on identifying materials that can be used to engineer devices that perform as artificial neurons and synapses in a more compact manner than their CMOS equivalents. CMOS electronics has been designed for high-precision numerical computing but has not been optimized for low-precision categorical information, such as image processing or other targets of artificial intelligence. There are aspects of neural function in the brain, such as spike trains and oscillatory behavior in collections of neurons, that require significant chip area and/or energy when implemented in CMOS electronics.22 The goal here is to facilitate such computation based on novel physics and devices that depend on material properties not found in doped silicon. Efficiency could be improved by using memristive crossbar arrays to enhance the efficiency of TPUs, by creating more efficient spiking neurons or synapses, or by novel approaches that map to higher-order neural activity.
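
To make concrete how a memristive crossbar can accelerate the vector–matrix multiplications mentioned above, the following sketch computes the column currents of an idealized crossbar in which input voltages are applied to the rows and device conductances encode the weights; line resistance, sneak paths, and device variability are neglected, and all values are illustrative.

```python
# Schematic crossbar vector-matrix multiply: voltages V on the rows, device
# conductances G[i][j] as weights, and the current collected on column j is
# I_j = sum_i G[i][j] * V_i (Ohm's law + Kirchhoff's current law).
# Idealized: no line resistance, no sneak paths, no device variability.
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # 4x3 array of conductances (S)
V = np.array([0.1, 0.2, 0.0, 0.3])         # input voltages (V)

I = V @ G                                  # column currents (A), i.e., the VMM
print(I)
```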

One of the defining features of quantum materials is having complex Hamiltonians that yield unexpected and useful properties associated with various aspects of the collective electronic wave function, namely, its lattice, spin, charge, and orbital character. This variety of phenomena offers some advantages over the single-function semiconductors that are used most commonly in today’s devices. First, quantum materials can offer distinct functionalities built into the same material. For instance, a material can display resistive and magnetic properties that may be independently tunable with appropriate external perturbations. Second, the governing interactions, while microscopic in origin, thermodynamically couple across longer length scales in the material, yielding mesoscopic and macroscopic phases that are entirely emergent in nature. Thus, large networks based on quantum materials may be useful in neuromorphic computing because the emergent properties of the whole network (categorically distinct from the sum of the properties of the individual devices) mimic brain functions, which are more complex than a simple linear combination of neurons and synapses. In other words, the entire macroscopic state of the emergent neural network can retain information, learn, and potentially adapt to different stimuli. Central to these ideas is that quantum materials have properties governed by fundamental quantum mechanics at different length scales.

Many quantum materials also harbor complex magnetic order or novel ways to connect charge transport to spin phenomena. Toward the goal of new computational paradigms, magnetic spin torque oscillators can enable neuromorphic computing applications.23 In spin–torque oscillators, a spin-polarized current excites magnetization dynamics that can produce a high frequency electrical signal, typically in the range of 100 MHz to 50 GHz. The resulting magnetization dynamics is nonlinear and tunable in phase, amplitude, and frequency. Such oscillators are of great interest in neuromorphic computing where their response to external perturbations, phase locking, and mutual synchronization can be exploited.13 Coupled spin torque oscillators offer opportunities for complex neuromorphic functions.
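
As a minimal sketch of the non-linear, tunable dynamics described above, the following macrospin simulation integrates the Landau–Lifshitz–Gilbert equation with a damping-like Slonczewski spin-torque term for an easy-plane free layer and a perpendicular polarizer; the geometry, material parameters, and torque amplitude are illustrative assumptions chosen only to produce a steady precession.

```python
# Minimal macrospin sketch of a spin-torque oscillator: an easy-plane (thin-film)
# free layer driven by a damping-like Slonczewski torque from a fixed
# perpendicular polarizer. Above a threshold the torque balances the Gilbert
# damping and the magnetization settles onto a steady precession cone.
# Parameter values are illustrative, not taken from a specific device.
import numpy as np

GAMMA = 1.76e11                     # gyromagnetic ratio (rad s^-1 T^-1)
ALPHA = 0.01                        # Gilbert damping
H_D   = 1.0                         # easy-plane (demagnetizing) field (T)
SIGMA = 0.3 * ALPHA * GAMMA * H_D   # spin-torque amplitude (rad/s); sets cone angle
P     = np.array([0.0, 0.0, 1.0])   # polarizer along +z
DT, N = 1e-13, 50000                # time step (s) and number of steps

m = np.array([0.995, 0.0, 0.1])     # start nearly in plane
m /= np.linalg.norm(m)
mx = np.empty(N)
for k in range(N):
    h_eff = np.array([0.0, 0.0, -H_D * m[2]])                   # demag field
    dm = (-GAMMA * np.cross(m, h_eff)                            # precession
          - ALPHA * GAMMA * np.cross(m, np.cross(m, h_eff))      # damping
          - SIGMA * np.cross(m, np.cross(m, P)))                 # anti-damping torque
    m = m + DT * dm
    m /= np.linalg.norm(m)                                       # keep |m| = 1
    mx[k] = m[0]

tail = mx[N // 2:]                                               # skip transient
crossings = np.sum((tail[:-1] < 0) & (tail[1:] >= 0))
print(f"steady m_z = {m[2]:.2f}, frequency ~ {crossings / (len(tail) * DT) / 1e9:.1f} GHz")
```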

Quantum materials enable new functionality in spin–torque oscillators by adding means of controlling and coupling such oscillators for neuromorphic computing. These new functionalities can then be controlled by the application of light, electric fields, and magnetic fields.24 Spin–torque oscillators can also be stochastic, “jumping” randomly between two or more discrete magnetic states.25 These characteristics can enable a compact and energy efficient source of random numbers,26 including those needed to create stochastic binary networks for solving optimization problems.27,28

The spin–torque oscillators studied to date are almost exclusively based on ferromagnetic metals. Examples include vortex oscillators or spin-Hall nano-oscillator arrays used to demonstrate neuromorphic computing.29–31 Vortex oscillators are composed of ferromagnetic metals that form a magnetic tunnel junction.32 A voltage bias on the junction causes a magnetic vortex in one of the electrodes of the junction to oscillate, generating an oscillating electrical signal. Spin-Hall nano-oscillators are formed from ferromagnetic nanoconstrictions in which a spin current excites spin waves in the nanoconstriction region that cause resistance oscillations.33

Quantum materials, such as transition metal oxides that exhibit phase transitions, can enable new oscillator functionalities. This is because the oscillator characteristics can change dramatically at phase transitions and, at a first-order phase transition, can be hysteretic, endowing the oscillator with memory, i.e., its resonance frequency and output power can depend on its prior state.34,35 This can enable learning in a neuromorphic circuit, such as the adjustment of a synaptic weight. The exploration of these ideas has just begun.

Multiple oscillators can be coupled such that they phase lock, generating a larger output signal.36 Coupling can also be used as a means of propagating information in an array of oscillators. The coupling mechanisms can be electrical or magnetic. For example, if oscillators are connected in series, the oscillating current generated by one oscillator will flow through the other oscillators providing a coupling mechanism.37 Oscillators have also been demonstrated to couple by spin waves when they “share” the same magnetic layer. Here, spin waves emitted by one oscillator can be transmitted to another and have been shown to lead to frequency and phase locking.38 Quantum materials may offer additional coupling mechanisms through phase transitions that electrically couple oscillators.34,35 Tunable or even on/off coupling may be possible at magnetic phase transitions.
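
The mutual synchronization described above can be illustrated with a standard phase-model (Kuramoto-type) reduction that is often used to describe coupled spin–torque oscillators; the free-running frequencies and coupling rate below are illustrative, and the phase-only reduction is itself a simplifying assumption rather than a full magnetization-dynamics model.

```python
# Phase-model sketch of mutual synchronization of two coupled oscillators
# (a Kuramoto-type reduction commonly used for coupled spin-torque oscillators).
# Free-running frequencies and coupling strength are illustrative values only.
import numpy as np

w1, w2 = 2 * np.pi * 1.00e9, 2 * np.pi * 1.02e9   # free-running frequencies (rad/s)
K = 2 * np.pi * 0.03e9                             # coupling rate (rad/s)
DT, N = 1e-12, 200000
phi = np.array([0.0, 0.0])

for _ in range(N):
    d1 = w1 + K * np.sin(phi[1] - phi[0])
    d2 = w2 + K * np.sin(phi[0] - phi[1])
    phi = phi + DT * np.array([d1, d2])

# When locked, the phase difference settles to a constant value.
psi = (phi[1] - phi[0]) % (2 * np.pi)
print(f"locked phase difference ~ {np.degrees(psi):.1f} deg "
      f"(locking requires 2K > |w2 - w1|: {2 * K > abs(w2 - w1)})")
```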

Materials that exhibit resistive switching form the backbone of neuromorphic devices that circumvent traditional computing technologies based on CMOS logic and von Neumann architectures.39 Materials—such as VO2—that exhibit weakly stimulated transitions between metallic and Mott insulating states represent the latest frontier in this research area. An important aspect that makes Mott insulators (and transition metal oxides in general) interesting materials for neuromorphic functionalities stems from the presence of valence band d-electrons. These electronic states exhibit characteristics intermediate between those of weakly localized s and p orbitals and those of highly localized, strongly magnetic f orbitals. Consequently, d-electron states can undergo dramatic reorganizations due to changes in either the local (e.g., carrier density, lattice structure, and interaction mechanisms) or global (e.g., temperature, pressure, and electric or magnetic fields) environment (see Fig. 1). These reorganizations often significantly modify transport behavior, as exemplified by the several orders-of-magnitude change in resistivity seen across Mott metal–insulator transitions.40

FIG. 1.

Strongly correlated materials, specifically transition metal oxides, offer various degrees of freedom that can be tuned in heterostructures and can be useful for neuromorphic applications.


One reason Mott insulators are of great interest in the context of artificial neuron devices is that, recently, researchers showed that strong electric fields can induce electronic breakdown in these materials.41 This resistive collapse can be induced by a succession of electric pulses, analogous to the incoming electrical spikes that excite a biological neuron.42 In a Mott neuron device, electric excitations drive resistive collapse, leading to a sudden current surge that acts as an action potential spike. An example showing the similarities between biological neurons and artificial neuristors is shown in Fig. 2. Importantly, this resistive switching is volatile:41 the Mott material returns to a pristine insulating state after the applied voltage is terminated.
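
A common circuit-level caricature of such a Mott neuron is a relaxation oscillator in which a capacitor charges through a series resistor and discharges through a volatile threshold switch, producing the current spikes described above. The two-state switch with hysteretic thresholds and all parameter values below are simplifying assumptions, not a physical model of the resistive collapse.

```python
# Circuit-level caricature of a Mott neuristor: a capacitor charged through a
# series resistor, in parallel with a volatile threshold switch that collapses
# to a low-resistance state above V_ON and recovers below V_OFF (hysteretic,
# as in VO2-like devices). The two-state switch and all parameter values are
# simplifying assumptions, not a physical model of the transition.
V_SRC, R_LOAD, C = 5.0, 50e3, 1e-9          # supply (V), series R (ohm), cap (F)
R_INS, R_MET = 1e6, 1e3                      # insulating / metallic resistances (ohm)
V_ON, V_OFF = 2.0, 0.8                       # switching thresholds (V)
DT, N = 1e-8, 200000

v, metallic, spikes = 0.0, False, 0
for _ in range(N):
    r_dev = R_MET if metallic else R_INS
    i_in = (V_SRC - v) / R_LOAD              # charging current from the source
    i_dev = v / r_dev                        # current through the Mott device
    v += DT * (i_in - i_dev) / C
    if not metallic and v >= V_ON:           # resistive collapse -> current spike
        metallic, spikes = True, spikes + 1
    elif metallic and v <= V_OFF:            # volatile recovery to the insulator
        metallic = False
print(f"{spikes} spikes in {N * DT * 1e3:.1f} ms")
```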

FIG. 2.

(a) An exemplary biological neuron that consists of dendrites, soma, and axon, with (b) a typical firing pattern triggered by neurotransmitters. Adapted from Ref. 43. (c) A quantum material neuristor composed of a VO2 thin film and Au/Ti contacts. (d) Spiking dynamics triggered by Joule heating across the metal–insulator transition in the neuristor. Adapted from Ref. 42.


While this behavior is experimentally well-established, the underlying mechanisms are still unclear. As established in the pioneering work of Ridley,44 the electric breakdown is related to negative differential resistance phenomena associated with the formation of conductive filaments. However, experimental evidence of filament formation during Mott insulator breakdown is mostly indirect and interpreted using numerical simulations. Understanding Mott electric breakdown is an important challenge for implementing practical and reliable artificial neurons. Since the Mott metal–insulator transition often coincides with a structural phase change, one should also explore local strain effects in filamentary structures, which may lead to slow relaxations. Interestingly, recent work by Del Valle et al. provides new experimental insight into the process of filamentary incubation.45 

Modifying the local defect concentration presents an appealing and effective new way of tuning the Mott metal–insulator transition. Examples include controlling oxygen vacancies in vanadates,46 doping rare-earth nickelates (e.g., SmNiO3) with light ions, such as hydrogen,47 and controlling vacancies and doping in LaCoO3.48 Because these light ions and ion vacancies are highly mobile, their distribution is sensitive to short voltage pulses, enabling controllable tuning of neuromorphic device resistance. Memory nano-devices based on this effect are promising candidates for artificial neuromorphic synapses. Applying voltage pulses to hydrogen-doped perovskite (e.g., RNiO3, where R is a rare earth cation) neuromorphic devices induces hydrogen dopant migration,47 enabling controllable tuning of electronic properties and new phase formation. Phase transformation has also been realized in La0.7Sr0.3CoO3−δ—specifically, a series of topotactic transitions from the equilibrium ferromagnetic metallic perovskite structure (δ ≈ 0) to an antiferromagnetic semiconducting brownmillerite structure (δ = 0.5) and further to a weakly ferromagnetic insulating Ruddlesden–Popper structure (La1.4Sr0.5Co1+νO4−δ).49 A similar transition was also observed between the equilibrium brownmillerite SrCoO2.5 phase and the metastable SrCoO3 phase.50–52 Oxygen vacancy concentrations in the cobaltites have been varied in multiple ways, e.g., by depositing oxygen-scavenging metals,53 annealing in reducing environments,51 using electric fields,52 and applying epitaxial strain.

The existence of variable-lifetime metastable states is another key aspect of quantum material phase transitions of great relevance to neuromorphic computing. In many cases, the energy landscape features several accessible minima with different energies. In addition, when an oxide is doped with charged ionic defects, the defect mobility allows for subtle changes in the material’s resistance. These changes, however, occur at slow timescales. This allows for metastable intermediate states that can be used to encode short-term memory. The elasticity and plasticity of electrical resistance changes, derived from tunable band-filling via mobile charged defects, are a unique feature of strongly correlated oxides. This, in turn, enables the retention of information at various time scales, potentially mimicking a defining feature of the animal brain: memory storage and computational processes that occur at various time scales.
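
A simple phenomenological way to picture memory at several time scales is to write the stimulated conductance change as the sum of a fast-relaxing (short-term) and a slowly relaxing (long-term) component, as sketched below; the exponential form, time constants, and increments are illustrative assumptions rather than a model of a specific material.

```python
# Phenomenological sketch of multi-timescale memory: a device conductance change
# with a volatile component (fast relaxation, short-term memory) and a retained
# component (slow relaxation, long-term memory). Time constants and step sizes
# are illustrative, not fitted to any material.
import numpy as np

TAU_SHORT, TAU_LONG = 1e-3, 1e3       # relaxation times (s)
DG_SHORT, DG_LONG = 1.0, 0.1          # conductance increments per pulse (arb. units)

def conductance_after(pulse_times, t):
    """Total conductance change at time t from pulses applied at pulse_times."""
    g = 0.0
    for tp in pulse_times:
        if t >= tp:
            g += DG_SHORT * np.exp(-(t - tp) / TAU_SHORT)   # decays in ~ms
            g += DG_LONG * np.exp(-(t - tp) / TAU_LONG)     # persists much longer
    return g

pulses = [0.0, 0.5e-3, 1.0e-3]        # a short burst of three stimuli
for t in (1.1e-3, 0.1, 10.0):
    print(f"t = {t:8.4f} s  ->  delta G = {conductance_after(pulses, t):.3f}")
```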

1. Realizing synaptic functions in neuron systems

Quantum materials undergoing metal–insulator transitions, specifically VO2, which has a near-room-temperature metal–insulator transition,54 have emerged as prominent candidates for neuristor platforms in bio-inspired devices. However, many fundamental questions related to the mechanisms of resistive switching behavior, stochastic electric response, and the balance between volatile and non-volatile switching remain an obstacle to the practical use of these devices in neuromorphic computing.

Realizing synaptic functions in neuron systems requires very-large-scale integration systems that contain electronic analog circuits to mimic neuro-biological architectures. The junctions of these three-dimensional crossbar-shaped circuits are formed by numerous pre-neuristor lines, synaptors, and post-neuristor lines. Naturally, different materials are being considered to construct these operative lines in order to emulate the functions of neuristors and synaptors. As a result, the interfacial compatibility and strain effects of the materials pose a major challenge for device fabrication. A single material that could realize both functions would be ideal. VO2, because of its near-room-temperature metal–insulator transition and its subthreshold firing effects,54 has been established as a good candidate for imitating the functions of neuristors.55 Until recently, however, the use of VO2 as a synaptor had not been demonstrated. Synaptic functions can indeed be realized in VO2, where the oxygen-vacancy-containing V5O9 Magnéli phase acts as a conductive filament,56 as demonstrated by combining in situ transmission electron microscopy (TEM) and ex situ transport measurements. Tuning the chemical composition, electric field, and working temperature gives a non-volatile switching process that can be erased by annealing, providing a “forget” function in synaptors (see Fig. 3).

FIG. 3.

(a) A scanning transmission electron microscopy image showing the cross section of a thin VO2 device that can mimic the functions of a neuristor and a synaptor for in situ TEM experiments. (b) A schematic diagram showing the formation of a V5O9 conductive filament in the VO2 matrix. (c) The non-volatile I–V curve obtained at room temperature. Adapted from Ref. 56.


These results illustrate that a single VO2 material could provide the full functionality of a neuron cell.56 Recent studies have shown that it is possible to realize essential neuromorphic functions, such as neurons, synapses, and capacitors, within a single nickelate device that can be reprogrammed on demand by fast electric pulses.57

2. First-principles theoretical treatment of correlated electron systems

Recently, quantum-mechanical methods, such as density functional theory (DFT), implemented in efficient codes have led to tremendous advances in predicting candidate materials for memory devices. Computational approaches can give insights into structural, electronic, and magnetic properties of broad classes of oxides and their associated memory technologies, including ferroelectric random-access-memory (FeRAM), phase-change memory, magnetic RAM (MRAM), and spin–transfer–torque MRAM (STT-MRAM).

The structural and electronic transitions occurring in phase change materials can be impacted by defects.58 Although difficult to unravel experimentally, the microscopic origin of these effects can be investigated computationally. For example, the lowering of vanadate metal–insulator transition temperatures due to point defects, such as oxygen vacancies, has been demonstrated computationally and also noted in experiments.58 Rare earth nickelates, RNiO3, are other potential candidates for phase change materials. In these systems, Kotiuga and Rabe and Cui et al., respectively, studied the effects of oxygen vacancies and Li doping in order to drive and understand changes in electronic structure that could give rise to improved memory devices.59,60 A mechanistic understanding of the interplay between electronic structure changes and magnetic states was provided by computational investigations that could then be used to interpret experiments.

Another set of phase change materials involving Sc-Sb-Te alloys was studied using first-principles simulations.61 The authors predict the structural evolution of these alloys upon optical excitation, providing a clear indication of the transition from a crystalline to an amorphous phase. Recently, perovskite oxides have been gaining much interest as phase change materials for neuromorphic computing. The mechanism behind the metal–insulator transition in perovskites has been addressed computationally for several systems using first-principles DFT calculations. Recently, Zhang et al. elucidated the impact of oxygen vacancies on the structural, electronic, and magnetic changes responsible for the metal–insulator transition in cobaltites.48 Interestingly, they showed that cooperative structural distortions, rather than local bonding changes, are responsible for the gap closing.

Zhang et al.48 also developed, and experimentally validated, a first-principles model that accurately predicts the electric bias required to drive the metal–insulator transition. Bennett et al.62 investigated the effect of n-doping on the transition from a ferromagnetic (FM) metal to an antiferromagnetic (AFM) insulator in SrCoO3. They showed that the metal–insulator transition is triggered by n-doping, which leads to a self-hole-doped insulator due to strong charge–lattice (electron–phonon) coupling. As a result, controlling the hole–ligand ratio is a key factor in controlling the transition. Moreover, dopant metal ions and oxygen vacancies play a key role in non-volatile, low-power memory devices; toward this end, clustering of oxygen vacancies around the Cu dopant in cubic ATiO3 (A = Ba, Be, and Mg) leads to the formation of conductive filaments that are responsible for electrical switching in such devices.63 Similar mechanisms were also reported for MFeO3 (M = Gd, Nd) by substitutional doping of Al ions.64

Wang et al.61 studied the use of intermediate phases to realize synaptic behavior using DFT calculations; they showed that structural distortions result in a continuous change from a crystalline to an amorphous state. Unlike phase-change-material-based synapses with continuously adjustable device conductances, phase-change-material-based neurons exhibit threshold firing: the device switches from a high-resistance to a low-resistance state only when the applied voltage exceeds a threshold value, and it returns to the high-resistance state when the applied voltage is removed. The effect of doping on the “firing” of phase change material neurons has also been studied through first-principles calculations.58 For instance, introducing a defect instantly decreases the V2O3 bandgap, switching the system from high resistance to low resistance.

3. Challenges for first principles calculations

First principles calculations are useful tools to gain a microscopic understanding of transition metal oxides and predict the impact of doping, strain, or magnetic ordering on their electronic and atomic structure. These calculations are particularly useful at addressing these effects one at a time, which can be challenging to do experimentally, and at eventually understanding their complex interplay. Although progress in DFT-based calculations has greatly accelerated our understanding of transition metal oxide properties, significant challenges remain.

One challenge is to improve the accuracy of theoretical predictions for strongly correlated materials with highly localized d- or f-electrons. Standard semi-local DFT functionals are usually unable to treat such systems accurately. These problems may be mitigated to some extent by adding a semi-empirical Hubbard U parameter to the Kohn–Sham Hamiltonian (DFT + U), by using dynamical mean field theory, or by employing hybrid functionals. Semi-local functionals fail because of delocalization errors and self-interaction errors, which, in some cases, are partially corrected by the addition of a Hubbard U term to the Hamiltonian. The DFT + U method has yielded accurate results for the structural and electronic properties of several transition metal oxides. However, appropriate U values are often determined empirically, and finding U values that reproduce properties across multiple phases of transition metal oxides can be challenging.
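
For reference, in the widely used rotationally invariant (Dudarev) formulation, the Hubbard correction added to the DFT total energy takes the form below, where $n^{\sigma}$ is the occupation matrix of the correlated d or f shell and $U_{\mathrm{eff}} = U - J$; this is one common formulation among several DFT + U flavors.

```latex
% Dudarev (rotationally invariant) form of the DFT+U correction, with
% n^{\sigma} the occupation matrix of the correlated shell and
% U_{eff} = U - J an effective Hubbard parameter.
E_{\mathrm{DFT}+U} = E_{\mathrm{DFT}}
  + \frac{U_{\mathrm{eff}}}{2} \sum_{\sigma}
    \left[ \mathrm{Tr}\, n^{\sigma} - \mathrm{Tr}\!\left( n^{\sigma} n^{\sigma} \right) \right]
```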

Dynamical mean field theory, which has been applied both to realistic materials and to Hubbard models, has led to great progress in understanding correlated systems. It provides an accurate description of local intra-shell correlations and was recently used65 to understand correlation-driven charge order in the metallic VO2 phase.

Hybrid functionals, which incorporate a fraction of exact exchange, can sometimes yield more accurate structural and electronic properties of transition metal oxides than DFT + U descriptions; however, they are much more demanding from a computational standpoint. Wave function based methods, such as coupled-cluster, have been recently applied to transition metal oxides as well, showing improvement over DFT-based methods, although their use is still in its infancy. Finally, we note that progress in understanding metal–insulator transition and other key properties of transition metal oxides at finite temperature is expected from coupling DFT-based or wavefunction based descriptions of the electronic structure with molecular dynamics.

4. Enhancing device speeds via hidden phases

Current memristive devices typically operate with speeds in the 10–350 MHz range. Achieving terahertz speeds would represent more than three orders of magnitude improvement over the fastest neuromorphic devices16 and lead to ultrafast processing of data and lower energy consumption. Hidden phases are material states that are typically not thermodynamically accessible, that is, not reachable by bias field or temperature change, but can be activated by terahertz pulses.66,67 They emerge from out-of-equilibrium processes in a wide range of quantum materials and could be key to building terahertz hardware synapses and neurons that require the emergence of multiple and controllable analog states.

State-of-the-art large scale simulations68 show that Pb(Mg1/3Nb2/3)O3, the prototype of relaxor ferroelectrics, can emulate all the key neuronic and memristive synaptic features—spiking, integration, and tunable multiple non-volatile states—through terahertz pulse activation of hidden phases. The atomistic insight provided by these computations further reveals that the stabilization of multiple hidden phases occurs via the rearrangement, evolution, and/or percolation of nanoscale regions in response to the terahertz electric field pulses. The different dielectric constants of the multiple hidden phases in relaxor ferroelectrics are also highly promising for creating ultrafast memcapacitor devices that are more energy-efficient than memristors, as no electric current runs through them during operation. As hidden phases occur in a broad class of quantum materials, including superconductors, colossal magnetoresistance manganites, charge-density wave materials, and incipient ferroelectrics, they open up a new avenue for creating quantum terahertz neuromorphic devices.

5. Novel heterostructures for neuromorphic computing

Another unique aspect of strongly correlated oxides is the variety of macroscopic phases that they offer, sometimes within the same material or within two adjacently grown oxide materials (which has proven highly feasible using modern thin film deposition and heterostructuring tools). For instance, depending on the charge carrier concentration, a cuprate superconductor can also feature insulating antiferromagnetic properties.69 This phase diversity can be used to design arrays of Josephson junctions that exhibit neuromorphic behavior. Integrating a second material, such as a nickelate or vanadate, that displays neuromorphic behavior at different time scales or stimulus strengths would emulate the variety of scales seen in the brain’s behavior. Interfacing two materials can thus enable a combination of stimulus/response regimes. For instance, magnetic oscillator properties can be controlled via the tunable electrical resistance of an adjacent vanadate.35 Oxides may enable the combination of distinct charge, spin, or superconducting properties for future conversion of information from one modality to another.

6. Engineering Mott materials for functionality

Although the earliest implementations of Mott neuristors treated the underlying materials in a binary fashion,70 either insulating or metallic, more recent studies have leveraged metastable and multi-state properties to achieve subthreshold firing.54 The discovery of such unanticipated effects reflects one of the most interesting aspects of Mott neuromorphics: the complexity of metal–insulator transitions in Mott materials continues to yield surprising discoveries, portending future possibilities for device-level phenomena. For instance, the exquisite sensitivity of Mott insulators to small changes in processing parameters71 suggests powerful routes for engineering and controlling neuromorphic functionality through fine-tuning material composition and structure, applied strain, or light-induced phenomena.

Magnetization states in magnetically ordered materials have a long history of being used for storing both digital and analog information.72–74 At the same time, magnetic materials have several attributes that make them interesting for new neuromorphic computational schemes.75 The stability of magnetization states can be engineered with well-defined, variable relaxation times that can exceed years. At the same time, these magnetization states can still be manipulated with very low power, either through local magnetic fields or electrical means.76–79 This tunable non-volatility can be used for emulating synaptic weights. Conversely, it is possible to excite both stochastic and coherent magnetization dynamics. In both cases, the magnetization dynamics is determined by non-linear equations of motion, which can be used for thresholding behavior or other complex dynamic interactions that may resemble the functionality of natural neurons. Research on both of these aspects has benefited significantly from the development of spintronics,76–79 which exploits the spin degree of freedom of charge carriers and thereby allows charge currents to be manipulated by magnetization states and vice versa. In this section, we describe current-based means of manipulating magnetizations, the search for materials to optimize these effects, and some of the approaches to neuromorphic computing that can be done with them.12,13,80
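
The claim that relaxation times can be engineered to exceed years follows from thermally activated reversal over an energy barrier; a standard Néel–Arrhenius estimate is sketched below, with a typical attempt time of about a nanosecond assumed and barrier heights quoted in units of k_BT.

```python
# Neel-Arrhenius estimate of how the retention time of a magnetic state scales
# with the energy barrier: tau = tau0 * exp(E_b / k_B T). tau0 ~ 1 ns is a
# typical attempt-time assumption; barriers are given in units of k_B T.
import math

TAU0 = 1e-9                     # attempt time (s), order-of-magnitude assumption
for delta in (20, 40, 60):      # energy barrier in units of k_B T
    tau = TAU0 * math.exp(delta)
    print(f"E_b = {delta} k_B T  ->  tau ~ {tau:.1e} s ({tau / 3.15e7:.1e} years)")
```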

Two powerful ways to control magnetization dynamics are through spin–transfer torques81,82 and spin–orbit torques83 (see Fig. 4). Spin–transfer torques occur in multilayer devices with currents flowing perpendicular to the layers. The current passing through one magnetic layer becomes spin polarized and exerts a torque on the magnetization in a subsequent layer. This torque forms the basis for switching the memory states in magnetic random access memory (MRAM).72,84 It also leads to the dynamics in spin–torque nano-oscillators.85,86 Spin–transfer torques and spin–orbit torques are related, but in the latter case, the spin currents that create the torques arise from a current through an adjacent layer of a material with strong spin–orbit coupling. Each of these torques has its own disadvantages and advantages for controlling magnetization dynamics in spintronics-based neuromorphic applications.
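
In a common form, both torques enter the magnetization dynamics as additional damping-like terms in the Landau–Lifshitz–Gilbert equation, as sketched below (field-like contributions are omitted for brevity); here $\hat{p}$ is the magnetization direction of the polarizing layer for the spin–transfer torque, while $\hat{\sigma} \propto \hat{z} \times \hat{E}$ is the spin direction set by geometry for the dominant spin–orbit torque.

```latex
% LLG equation with damping-like spin-transfer and spin-orbit torque terms
% (field-like contributions omitted). \hat{p}: magnetization direction of the
% polarizing layer; \hat{\sigma} \propto \hat{z} \times \hat{E}: spin direction
% set by the geometry of the spin-orbit torque.
\frac{d\hat{m}}{dt} = -\gamma\, \hat{m} \times \mathbf{H}_{\mathrm{eff}}
  + \alpha\, \hat{m} \times \frac{d\hat{m}}{dt}
  + \tau_{\mathrm{STT}}\, \hat{m} \times (\hat{m} \times \hat{p})
  + \tau_{\mathrm{SOT}}\, \hat{m} \times (\hat{m} \times \hat{\sigma})
```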

FIG. 4.

Magnetic tunnel junctions (dark red arrows represent magnetization direction). (a) Standard magnetic tunnel junction with fixed and free layers and the control current following the same path as the read current. (b) Magnetic tunnel junction grown on the heavy metal layer with separate read and control current paths. Reprinted from Ref. 87, with the permission of AIP Publishing.


One of the disadvantages of spin–orbit torques is the orientation of the torques. Both spin–transfer torques and spin–orbit torques are defined with respect to a spin direction. For spin–transfer torques, the direction is set by the direction of the magnetization in the polarizing layer and is relatively easy to control. For the dominant spin–orbit torques, the spin direction is set by geometry, being perpendicular to both the multilayer growth direction and the applied electric field. Such torques are very efficient at switching the magnetization in this same direction (i.e., in the spin direction) but not so efficient for magnetizations that are in the growth direction, an orientation presently favored in MRAM applications.72,84 An active research direction is to efficiently produce torques with different spin directions. Doing so requires reducing the symmetry of the system. There are many ways to do so. One of the earliest is the use of spin–orbit layers with lower crystalline symmetry,88 but this approach has yet to produce torques strong enough to switch layers with perpendicular magnetizations. Alternatively, the magnetization in additional layers reduces the symmetry and can lead to perpendicular torques strong enough to reverse the magnetization.89,90 This magnetization can be ferromagnetic89,90 or antiferromagnetic.91 In both cases, it may be necessary to introduce a non-magnetic layer between the two magnetic layers to break the exchange coupling between them.

Interfaces92 play a crucial role in determining the strength of the spin–orbit torques in all possible mechanisms. In the models in which the spin current is created in the interior of the generating layer,93,94 the ferromagnetic exchange interaction at the interface converts the spin current into a torque. In another mechanism,95,96 the applied electric field combines with the spin–orbit coupling at the interface to create a spin accumulation at the interface that is misaligned from the magnetization and exerts a torque on it. Finally, in spin–orbit torque systems with a second magnetic layer, the interface of that layer can produce spin currents that lead to torques with useful directions through the processes of spin filtering and spin precession.87 

The quest for identifying new materials and novel heterostructures with efficient spin–orbit torques is still wide open.97 As mentioned earlier, both interfaces and symmetry properties play an important role, which we would like to illustrate with three recent examples. One of the early materials investigated for possible spin generation from charge currents was gold, but early measurements of the conversion efficiencies in gold thin films provided results that differed by at least one order of magnitude.98,99 A decade later, an explanation for this possible discrepancy was provided by suggesting that the thickness of the gold films is very important and that the charge-to-spin conversion can be significantly enhanced in thin films.100 However, reducing the film thickness is limited by the requirement of having continuous films for transport. It was shown that this limitation can be circumvented by using Si/Au multilayers,101 where the gold thickness can be reduced down to the nanometer regime, resulting in spin-charge conversion efficiencies of order unity [see Fig. 5(a)]. This is one example that shows that interfacial effects have the potential for significantly enhancing spin–orbit torques.

FIG. 5.

New directions for spin–orbit torques. (a) Enhanced spin–orbit torques from interfaces; spin to charge conversion measured in Si/Au multilayers through non-local transport as a function of Au layer thickness, compared to previous measurements of single layer Au films. The data for the single layers are from Ref. 98 for Seki et al., Ref. 100 for Chen et al., and Ref. 99 for Mihajlović et al. Reprinted with permission from Ref. 101. Copyright (2021) by the American Physical Society. (b) Unconventional torques due to magnetic symmetry breaking; second harmonic Hall measurements in FeRh/Ni80Fe20 bilayers indicate an unusual polarization direction of current-induced spin accumulations in FeRh. The polarization direction is about 45° from the current direction, which coincides with the equilibrium direction for antiferromagnetic spins in FeRh. Adapted from Ref. 102. (c) Torques with the symmetry of the planar Hall effect associated with Co/Ni acting on a CoFeB layer. The variation of the spin–torque ferromagnetic resonance linewidth with dc current is plotted vs the magnetization angle θ. The sin 2θ variation is distinct from the symmetry of the spin-Hall effect. With a spin torque of this form, the magnetization direction sets the spin-polarization direction and flow direction. Reprinted with permission from Ref. 103. Copyright (2020) by the American Physical Society.


The second example of generating unconventional spin–orbit torques from low symmetries exploits the fact that the magnetic structure of antiferromagnetic materials can significantly reduce the symmetry that is given by the crystalline structure alone. At the same time, due to their vanishing net magnetization, their magnetic structure is typically robust against moderately strong magnetic fields, which is advantageous for systems using spin torque oscillators that often require additional external magnetic fields. Exotic torques have been experimentally observed104–106 and theoretically predicted.107,108 A particularly interesting case is provided by the antiferromagnet FeRh. Here, spin–orbit torque efficiencies of up to 300% were observed at low temperatures, while angular-dependent measurements also indicate that the polarization orientation of the current-induced spins is determined by the direction of the spins in the antiferromagnet [see Fig. 5(b)]. Thus, antiferromagnets may provide a very efficient way to generate spin torques with geometries that are ordinarily unattainable.

The final example uses the result that ferromagnetic materials can also produce spin–orbit torques with novel symmetries.89 Spin–orbit coupling within ferromagnetic metals can be large and produces the well-known anomalous and planar Hall effects. The former, the anomalous Hall effect, can lead to spin currents polarized along the magnetization direction that flow perpendicular to the magnetization and electric field. Spin currents with the symmetry of the planar Hall effect are again polarized along the magnetization direction but flow parallel to it.89 Torques of this symmetry from a Co/Ni multilayer have been observed to act on a CoFeB free layer [Fig. 5(c)]103 and are of the same order of magnitude as torques associated with the spin-Hall effect in Pt.109 These torques may enable exciting spin waves and switching perpendicularly magnetized layers, as well as permit new geometries for spin oscillators for neuromorphic computing.

Spintronic synapses harness the non-volatility of magnetization at the nanoscale to implement synaptic weights.12 Several devices have been proposed with various degrees of biological resemblance. Spin–torque MRAM can store 32-bit floating point synaptic weights in ultra-low-power chips.110 Magnetic tunnel junctions emulate binary weights by switching between two magnetization states.111–113 The intrinsic stochastic nature of magnetization switching in two-state magnetic tunnel junctions can also be leveraged for learning.114 Memristive behavior is obtained by modifying the magnetization texture to obtain gradual switching via spin–torque115,116 or spin–orbit torques.117 Domain walls118 or skyrmions119,120 imitate neurotransmitter nucleation and propagation in these structures. Optical excitations24 and antiferromagnetic dynamics121 open the path to ultrafast, terahertz, spintronic synaptic devices.

Spintronic neurons leverage the non-linearity of magnetization dynamics. Reservoir computers based on large amplitude excitations of skyrmions,122 domain walls,123 spin-waves,124 and vortices23,125 have been proposed. Related experiments typically rely on time-multiplexing the dynamics of a single magnetic dot or junction, and demonstrate state-of-the-art performance on tasks such as spoken digit recognition or memory capacity.23,124,126 Spintronic oscillators emulate the rhythmic features and synchronization behavior of biological neurons. Spin–torque and spin–orbit torque nano-oscillators solve classification tasks, such as vowel recognition, by synchronizing to external signals.29,30 Large scale implementations of neural networks based on spintronic oscillators will require scaling down the oscillators, achieving mutual synchronization in large arrays, and controlling the synchronization process via dedicated synapses.127 
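
The time-multiplexing scheme mentioned above can be sketched in a few lines: a single non-linear node with delayed feedback is sampled at many “virtual nodes” per input, and only a linear readout is trained. In the sketch below, a tanh node stands in for the physical oscillator and the target is a toy non-linear memory task; the masking, feedback parameters, and task are illustrative assumptions, not a model of any reported experiment.

```python
# Minimal sketch of single-node, time-multiplexed reservoir computing, the
# scheme used in spin-torque oscillator experiments: one nonlinear node with
# delayed feedback is sampled at N "virtual nodes" per input, and only a
# linear (ridge regression) readout is trained. The tanh node and the toy
# target are illustrative stand-ins for the physical oscillator and task.
import numpy as np

rng = np.random.default_rng(1)
T, N = 2000, 50                          # input samples, virtual nodes per sample
u = rng.uniform(-1, 1, T)                # random scalar input stream
target = np.roll(u, 1) * np.roll(u, 2)   # toy task: y_t = u_{t-1} * u_{t-2}

mask = rng.uniform(-1, 1, N)             # fixed input mask over the virtual nodes
alpha, beta = 0.6, 0.9                   # feedback and input scaling (illustrative)
states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    for i in range(N):
        prev = x[i - 1] if i > 0 else (states[t - 1, -1] if t > 0 else 0.0)
        x[i] = np.tanh(alpha * prev + beta * mask[i] * u[t])   # nonlinear node
    states[t] = x

# Ridge-regression readout trained on early samples, tested on the rest.
X = np.hstack([states, np.ones((T, 1))])
tr, te = slice(100, 1500), slice(1500, T)
w = np.linalg.solve(X[tr].T @ X[tr] + 1e-6 * np.eye(N + 1), X[tr].T @ target[tr])
pred = X[te] @ w
nrmse = np.sqrt(np.mean((pred - target[te]) ** 2) / np.var(target[te]))
print(f"test NRMSE = {nrmse:.2f}")
```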

Typical spin–torque–oscillator designs based on a single free layer require an external static magnetic field to operate, but it is possible to eliminate the need for an applied field. One approach is based on an antiferromagnetically exchange-coupled composite free layer.128 The mechanism of operation is based on the exchange field due to the antiferromagnetic coupling between the soft and hard sub-layers of the free layer. Such oscillators have large-amplitude magnetization oscillations in the soft sub-layer that are tunable over a broad frequency range. They can generate an electric signal when a pinned in-plane layer is incorporated in a magnetic tunnel junction, or they can generate a magnetic field outside the spin–torque oscillator. Figure 6 summarizes the operation of an antiferromagnetically exchange-coupled composite spin–torque oscillator using a two-macrospin model. Micromagnetic modeling shows that, for weaker currents, the two-macrospin model captures the behavior of the full model, but, for stronger currents, the precession may become non-uniform depending on the oscillator size.

FIG. 6.

Antiferromagnetic exchange coupled composite structure. The inset shows the antiferromagnetically exchange coupled composite spin–torque oscillator structure, including a soft layer (SL), hard layer (HL), and spacer, e.g., Ru layer for antiferromagnetic coupling. A perpendicular polarization layer is assumed to reside under the hard layer. The results obtained via the two-macrospin model are shown for (a) precessional frequency f and (b) z-component of the magnetization mz in the soft layer and the hard layer as functions of the current density for different surface exchange energy density coupling Jex for D = 20 nm, th = ts = 0.8 nm, tspacer = 0.3 nm, Ms,s = 13.5 × 105 A/m, Ms,h = 4.7 × 105 A/m, Ku,h = 0.4 MJ/m3, and αh = αs = 0.008.


Electric field effects are particularly promising for controlling spintronic synapses as they are low power, can lead to non-volatile variations, and modify spin textures in multiple ways, e.g., via interfacial effects in oxide/magnetic structures35,129 or by locally changing the perpendicular anisotropy of the magnetic thin film. A future challenge is to go beyond the neighbor-to-neighbor coupling intrinsically delivered by exchange or dipolar coupling between the oscillator–neurons and achieve controllable long-range connections. It will also be interesting to design neurons with spiking behaviors, as in biology, as well as to increase their speed beyond the gigahertz range. For this, antiferromagnets, as well as synthetic antiferromagnets, are promising due to their terahertz speed and spiking ability.130,131

While new materials could provide efficient generation of magnetization dynamics as the basis for neuromorphic computing using spin oscillators,23,29,132,133 they do not address adjusting synaptic weights or enabling high connectivity between individual oscillators. A neuromorphic circuit requires the output of a neuron to serve as the input to other neurons, with learning associated with the adjustment of the “synaptic” weights of the inputs to the neurons. High connectivity is also required and is an advantage of rf signal transmission between neurons.132 This connectivity, however, requires transforming the rf output of a spin oscillator neuron back to a dc signal to serve as an input to other neurons. A spin resonator can provide this function. An rf input can excite spin precession that mixes the rf signal down to dc, known as the spin-diode effect134 and spin–transfer ferromagnetic resonance.135 This behavior is common to spin resonators composed of ferromagnetic metals. However, being able to adjust the output characteristics is not. Incorporating a quantum material—a metal–insulator-transition metal oxide—can give spin resonators hysteresis and memory of their prior state,35 an important characteristic for oscillator-based synapses (Fig. 7). Specifically, it was shown that a dc current that heats the quantum material close to the metal–insulator transition can be used to adjust the synaptic “weight” of the dc output signal.
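
The rectification underlying the spin-diode effect mentioned above can be summarized by a simple mixing relation: an rf current drives resistance oscillations at the same frequency through the device magnetoresistance, and the time-averaged product of the two yields a dc voltage. The notation below is generic, with φ denoting the phase between the resistance oscillation and the drive.

```latex
% Spin-diode rectification: an rf drive I(t) mixes with the resistance
% oscillation Delta R(t) it excites, giving a dc voltage (angle brackets
% denote a time average).
I(t) = I_{\mathrm{rf}} \cos(\omega t), \qquad
\Delta R(t) = \Delta R \,\cos(\omega t + \phi)
\;\;\Longrightarrow\;\;
V_{\mathrm{dc}} = \left\langle \Delta R(t)\, I(t) \right\rangle
  = \tfrac{1}{2}\, \Delta R\, I_{\mathrm{rf}} \cos\phi
```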

FIG. 7.

(a) Schematic showing a hybrid metal–insulator-transition oxide/ferromagnetic metal nanoconstriction. (b) When the V2O3 is in an insulating state, current is concentrated in the nanoconstriction region and excites spin waves that convert a rf current near the ferromagnet’s resonance frequency to a dc voltage. The resonance frequency can be controlled with dc current through the constriction. Adapted from Ref. 35.


This initial demonstration raises the question of what other materials may be well suited to integrate synaptic functionalities into spin oscillators. Although much of the research on oxides over the last few decades has concentrated on the 3d transition elements (due to the smaller spatial extent of the wavefunction resulting in strong correlations), a few promising 5d oxides have recently emerged as useful tools in spin-based technologies because of the stronger spin–orbit coupling of the 5d electrons as compared to the 3d electrons. Various reports have shown that growing very thin layers of ruthenates,136,137 iridates,138 or combinations of the two139–141 can result in new interfacial magnetic properties that can also be controlled with electric fields.140 The attractive aspect of these materials is that the stronger spin–orbit coupling, combined with the complex and tunable properties of oxides, could provide a new platform for spin-based neuromorphic oscillators that features intriguing magneto-transport properties in itinerant magnets and more energetically favorable conditions for driving the oscillators.

The microwave properties of spintronic devices provide a unique perspective for supplying distributed power without hardwired connections. Neuromorphic systems reduce the energy consumption of neural networks by placing memory and computing devices as close to each other as possible and connecting them densely in a way that resembles the brain. Solving state-of-the-art automatic classification tasks, such as image recognition, will require hundreds of millions of synaptic and neuronic devices. Due to the wiring complexity, it is a challenge to bring the required power to each of these devices, even if the individual energy consumption is low. Furthermore, the advantage of merging computing and memory may be lost if the inputs to be classified by the network need to be pre-processed to meet the requirements of a specific hardware. For example, the digitization of fast, analog signals and their transfer to a hardware neural network via a digital bus can consume several hundreds of watts.

Spintronic devices are promising for solving these two issues because they can intrinsically sense analog signals and harvest the energy brought by these signals. The magnetization of nanoscale dots is highly sensitive to environmental electromagnetic fields that can be used as inputs to a neural network or to power the devices to which these dots are connected. Spin-diodes that convert radio-frequency signals to direct voltages are a great example of such a process.134 They can sense142 and harvest143 microwave signals, and embody synaptic devices that perform multiply-and-accumulate operations directly on input radio-frequency signals without digitization.133 Interfaces with phase change materials control their synaptic weights directly.35

In addition to microwave signals, spintronic devices are also sensitive to a wide range of signals, such as optical beams and acoustic waves.144 A future direction of research for neuromorphic spintronics is thus to build ultralow-power neural networks that natively sense input signals and harvest the energy they need to operate.

1. Neurons and synapses, volatile and non-volatile resistive switching

As we have seen in previous sections, a great deal of progress has been achieved toward implementing artificial neurons and synapses that exploit the physical phenomena of resistive switching, also known as memristance. It is useful to recall that the artificial neuron and the artificial synapse rely on qualitatively different types of resistive switching, volatile and non-volatile, respectively. Biological neurons fire electric spikes, but after those events, they return spontaneously to a resting or quiescent state. Biological synapses, in contrast, modify their state, the synaptic coupling, during learning and then keep that state for a long time, even a lifetime in the case of significant memories. Thus, these two qualitatively different neural functions have a correlate in the volatile and non-volatile phenomena observed in resistive switching, as mentioned earlier. We know from the architecture of the brain that its cognitive functions emerge from the interaction of a massive number of spiking neurons whose interconnections are modulated by synapses. Thus, the next stage in the development of neuromorphic hardware is to build networks of artificial neurons and synapses, which are subject to and evolve according to those two types of physical phenomena.

2. An ideal neuromorphic quantum material

It is desirable to find a quantum material that can embody both types of resistive switching. A first step in that direction was recently demonstrated by Cheng et al.56 Those authors reported the non-volatile resistive switching of VO2, which is a compound that exhibits volatile resistive switching near room temperature associated with its metal–insulator transition.145 The observed non-volatile behavior of VO2 resulted from a chemical (topotactic) phase transition into conductive V3O5, induced by a strong electric field that caused the loss of oxygen ions. The transition could be reversed by externally heating the sample; for applications, it would be desirable to achieve the recovery of VO2 solely by electrical means, such as self-heating. Other materials worth investigating include nickelates such as SmNiO3, which also display a temperature-driven metal–insulator transition,146 making them potentially useful for implementing neurons via volatile switching.

3. Implementing networks

So far, it has been easier to implement networks of artificial synapses than of neurons. Networks of neurons using new quantum materials have only been reported at the level of numerical model simulations.147 A recent example is the work of Oh et al.,148 where a rectified linear unit (ReLU)149 neural network was simulated from VO2 neuron device data, predicting an excellent energetic performance. Specifically, using the experimentally validated performance of an existing VO2 neuron device, the simulated system-level performance of the Mott ReLU was compared to analog CMOS and digital analog-to-digital converter (ADC) circuits. Even with these non-optimized Mott devices, the energy consumption and latency are comparable, while the lateral footprint is reduced by at least three orders of magnitude. Furthermore, assuming a better thermal design and more efficient heater circuitry, one can expect for an optimized Mott ReLU a reduction in energy consumption of at least two orders of magnitude compared to the analog CMOS or digital ADC circuitry.148

An earlier work in this direction was done by Jerry et al.,150 where the stochastic properties of VO2 devices were simulated to perform a digit recognition task. The focus on synaptic networks has several reasons. First, non-volatile resistive switching, relevant to synapses, has been intensively investigated over the last 20 years. Furthermore, non-volatile effects are easier to observe since they occur in an astonishing number of transition metal oxides.151,152 Importantly, they already show excellent performance in devices made with simple oxides, such as TiO2, TaO2, and HfO2.153 For example, a recent synaptic neural network implemented with TiO2 is shown in Fig. 8, which illustrates the current state of the art.

FIG. 8.

A crossbar array device of TiO2 non-volatile memristors that implement a synaptic neural network. Adapted from Ref. 154.


In contrast, the progress in networks of artificial spiking neurons has been slower. Implementing such devices requires quantum materials that exhibit a stimulus-driven insulator to metal transition, such as the vanadates and nickelates, mentioned earlier. These materials are harder to fabricate in good quality thin films, as the transition properties strongly depend on the substrate and deposition conditions.155 

It is a pressing issue to make progress in the implementation of networks of neurons. A key difference between neurons and synapses, beyond their type of resistive switching, is a functional one. The essential function of a synapse is to encode the coupling intensity between neurons, but synapses do not necessarily need to interact. This is in stark contrast to neurons, where the excitation of an upstream neuron needs to elicit, or at least contribute to, the excitation of several downstream ones.

4. Neuron–neuron interaction

Achieving control and physical insight into neuron–neuron interactions is a significant challenge. Almost all the work in the field has remained at the level of a single neuron device.145,156,157 This includes the work of Jerry et al.,150 where a multi-neuron VO2 device was implemented. However, the neurons were independent, not interacting, and the device function exploited the stochastic behavior of the ensemble of independent neurons.

A first step to achieve the neuron–neuron interaction would be to get one excited neuron to induce the excitation state of another. It should be kept in mind that the interaction should be a priori scalable to multiple neurons since building networks is the ultimate goal. Since the Mott neurons rely on an electro-thermal incubation process,158 we envision different types of neuron–neuron interactions based on both electric and thermal coupling.

5. Electric coupling

The simplest demonstration of electric coupling is to connect two consecutive Mott neurons so that the excitation of the first one induces the excitation of the second. Lin et al.159 implemented such a monosynaptic neuron circuit. The thyristor, a conventional semiconductor electronic component160 that was recently recognized to have memristive properties qualitatively similar to those of Mott materials, has been used to demonstrate coupling between leaky-integrate-and-fire artificial neurons.161,162

Another way to couple neurons electrically is using capacitors to store charge. The simplest spiking neurons based on Mott materials are spiking oscillators, where a Mott device is connected in parallel to a capacitor.70 Initially, the capacitor charges while the Mott device is in its natural insulating state; when the voltage VC reaches a threshold, the Mott device undergoes an insulator-to-metal transition, leading to a spike of current as the capacitor discharges; then the cycle restarts. This behavior was the basis of the neuristor proposed in 2012.70 In more recent work, Adda et al.163 imaged and studied the dynamics of these oscillations in detail. Systems of oscillators can show complex emergent dynamical behavior, as known from the behavior of coupled pendulums,164 ranging from synchronization to chaotic motion. This establishes a parallel with neural behavior, from brain waves measured in electroencephalograms to chaotic brain discharges during epileptic seizures.
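
A minimal circuit-level sketch of this charge-fire-reset cycle (with illustrative placeholder parameters, not a materials model of any particular device) treats the Mott element as a resistor that switches to a metallic value when the capacitor voltage reaches a threshold and recovers its insulating value below a lower hold voltage:

import numpy as np

# Illustrative relaxation ("spiking") oscillator: a Mott device in parallel with a capacitor C,
# driven through a series resistor Rs from a dc source Vs. All parameter values are placeholders.
Vs, Rs, C = 5.0, 10e3, 1e-9                 # V, Ohm, F
R_ins, R_met = 1e6, 100.0                   # insulating / metallic resistance of the Mott device (Ohm)
V_th, V_hold = 3.0, 0.5                     # switching and hold voltages (V)
dt, steps = 1e-8, 20000

Vc, R, spikes = 0.0, R_ins, []
for n in range(steps):
    I_in = (Vs - Vc) / Rs                   # current delivered by the source
    I_dev = Vc / R                          # current through the Mott device
    Vc += dt * (I_in - I_dev) / C           # capacitor charges or discharges
    if R == R_ins and Vc >= V_th:           # insulator-to-metal transition: capacitor dumps charge
        R = R_met
        spikes.append(n * dt)
    elif R == R_met and Vc <= V_hold:       # device recovers its insulating state; cycle restarts
        R = R_ins
print(f"{len(spikes)} spikes in {steps*dt*1e6:.0f} us")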

6. Thermal coupling

A second type of coupling is thermal, which exploits the fact that Mott neurons can be triggered by self-heating under significant power injection. del Valle et al. demonstrated this approach with a caloritronics-based neuristor42 in which the heating from a current pulse through a thin Ti wire in close physical contact with a VO2 device induced a resistive collapse of the insulating state. An important step would be to replace the Ti wire by a Mott device, where the heat produced by driving one Mott neuron would induce the excitation (i.e., resistive collapse) of a second Mott device. The viability of this type of coupling would open interesting possibilities to explore by arranging the Mott devices in different geometries (Fig. 9), which may enable spatial spike train propagation via a heat wave traveling through the devices. Toward this end, the inherent stochastic nature of resistance switching during phase transitions, coupled with local thermal fluctuations present in interconnected devices, has recently been exploited to enable homeostasis in proof-of-concept neural network simulations, demonstrating novel approaches to harness the unique behavior of thermally sensitive correlated semiconductors.165

FIG. 9.

Different geometries for thermal coupling. The parallel geometry may be extended to produce spike trains through a propagating heat wave.


7. Flux quantization and dynamics

A very different approach toward complex networks with cerebral plasticity is provided by a random Josephson network made from the perovskite YBa2Cu3O7.166 Such a random Josephson network has a very high-speed response to a train of input spikes and is capable of short-term learning. The output is a series of voltage spikes representing the memory state of the Josephson circuit.

A closed superconducting loop can maintain a circulating current and trap magnetic flux in the loop in units of the flux quantum.167 For appropriate loop parameters ($L I_c \approx \Phi_0$), a single flux quantum, either positive or negative, can be trapped. Furthermore, if the superconducting loop encompasses a Josephson junction, upon reaching the critical current of the junction, a single flux quantum will enter or exit the loop. This technology has been applied to logic circuits, the simplest form being a flux shuttle.168,169 The switching time is on the order of 10⁻¹² s, and the energy consumed in that switch is on the scale of attojoules.
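
For orientation, a short worked estimate (assuming an illustrative critical current of 100 µA, which is not a value taken from the cited work) shows the scales implied by the single-flux-quantum condition:

\[
\Phi_0 = \frac{h}{2e} \approx 2.07\times 10^{-15}\ \mathrm{Wb}, \qquad
L \approx \frac{\Phi_0}{I_c} \approx \frac{2.07\times 10^{-15}\ \mathrm{Wb}}{100\ \mu\mathrm{A}} \approx 21\ \mathrm{pH}, \qquad
I_c\,\Phi_0 \approx 2\times 10^{-19}\ \mathrm{J},
\]

i.e., a loop inductance of tens of picohenries and a switching energy of a fraction of an attojoule, consistent with the attojoule scale quoted above.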

Fully coupled, randomly disordered superconducting networks with open-ended channels for inputs and outputs are a new architecture for neuromorphic computing. We have shown that such a network can be designed around a disordered array of synaptic networks using superconducting devices and circuits as an example.170 A similar architectural approach may be compatible with several other materials and devices. The randomness is important as it allows scalability at an exponential rate, providing substantial computing power and memory. The scalability of such an architecture is illustrated in Fig. 10, where the number of states available in a disordered array is 3ⁿ for an array of n loops that each allow only a single flux quantum. For the network of ten loops shown in the figure, the number of possible states is 3¹⁰ = 59,049.

FIG. 10.

Each loop can be in at least three flux states. Flux can be moved in or out via a Josephson junction (red bars). To drive and probe the ten-loop memory, inputs (i_j), outputs (o_j), and feedback (b_j) are used. Ten disordered loops can have at least 3¹⁰ states.


A simple array of three multiply coupled (interconnected) disordered loops containing Josephson junctions forms a fully recurrent network together with compatible neuron-like elements and feedback loops, enabling unsupervised learning.170 Several of these individual neural network arrays can be coupled together in a disordered way to form a hierarchical architecture of recurrent neural networks that is similar to that of a biological brain. The plasticity of this structure can be built into the design by modifying the number of fluxoids trapped in the loop and the binding energy (the Ic of the Josephson junctions).

Even more flexibility has been suggested171 by a hybrid design of (a) the array described above and (b) a more “stable” platform such as hydrogen-doped RNiO3.172 It has been shown in nickelates (RNiO3) that H+ ions can be introduced and that, under an applied electric field, they drift toward a negatively charged electrode, resulting in a much higher resistance along that pathway, at time scales at the other end of the spectrum and complementary to those of random Josephson junction arrays. This was accomplished via a series of voltage pulses and resulted in a long-term memory. The substantial difference in time scales between the Josephson array, which is driven into a particular neuromorphic state, and the H-doped nickelate, which is also driven into a neuromorphic state by voltage pulses, spans a large time spectrum, which opens the opportunity for tuning neuroplasticity over a wide range of time scales.

Section II B describes different magnetic devices that can function as neurons and synapses. For these to accomplish useful cognitive computing, large numbers of them need to be coupled together with controllable coupling strengths. Just as magnetic devices have a wide variety of behaviors, they have a wide variety of ways to couple them, including electrical, electromagnetic, and direct exchange coupling.

The read-out mechanism of spintronic devices is generally either magnetoresistance of some form or a generalized Hall conductance. The field of spintronics blossomed with the discovery of giant and tunneling magnetoresistance,173–176 the large change in resistance upon changes in the magnetic configuration. Such changes in resistance also provide an electrical mechanism to couple devices because changes in the current through these devices change the current through subsequent devices, thereby changing the spin–transfer or spin–orbit torques on those subsequent devices.29 The efficiency of this approach is determined by both the magnetoresistance of the devices and the efficiency of the spin–transfer torques, which change when the current and voltage change. This electrical approach to coupling has the advantages that its range is not limited and that its signals can be readily integrated with CMOS circuitry to aid in amplification and fan-out.127 It has the disadvantage that the ohmic losses due to the current can limit its energy efficiency.

A coupling mechanism that does not suffer from ohmic losses is the magnetostatic coupling between the magnetizations of magnetic devices. Such coupling is exemplified by the use of artificial spin ice177,178 to implement coupled systems of Ising spins. An artificial spin ice consists of an array of nanomagnetic structures that interact through the stray magnetic fields they produce. The nanomagnets are typically stable in one of two configurations, giving the two states of an Ising model. Since the beginning of artificial intelligence development, Ising spin systems have served as a model for artificial neural networks. Spin switching between two states, indeed, mimics neuron spiking, while the strengths of the couplings between spins emulate synaptic weights. The famous Hopfield model179 shows, for example, how memories can be stored in such a spin system in an associative way that resembles the brain’s behavior. This process can exploit thermally activated transitions, as in Boltzmann machines.180 Building hardware Ising spintronic systems to implement neural networks natively is, thus, a compelling solution to compute through the physics of coupled nanomagnets.

Spin ice is a natural platform for this purpose. Recent proposals show that assemblies of coupled nanomagnets can compute through energy minimization or through their transient behavior via reservoir computing. Their reconfigurability has been demonstrated by removing nanomagnet elements in stochastic kagome lattices imitating nano-Galton boards.183 The coupling between magnets in these systems is local by nature. It is, therefore, a challenge to build more powerful, state-of-the-art, all-to-all-connected neural networks in which each magnet is coupled to every other magnet in the network in a reconfigurable way. For artificial intelligence applications, the efficiency of hardware Ising machines, such as quantum annealing systems that couple Josephson junctions, or CMOS and optical implementations, is limited by the small number of neurons that can be fully connected to each other.

Magnetostatic interactions also couple spin–torque oscillators, although these oscillations are at gigahertz frequencies and so the coupling can also be thought of as RF coupling. Similarly, if the ferromagnetic material is continuous between two oscillators, spin waves can couple them. There are some similarities and differences between these two types of coupling.36,38,182,183 The strength of magnetostatic interactions decreases rapidly with distance, and they do not introduce any phase shifts in the interactions between spin–torque oscillators. As a result, for a spin–torque-oscillator array, magnetostatic interactions lead to near-neighbor coupling that propagates as a chain. As an example, a frequency-synchronized linear array with random parameter distributions but with a consecutive angle shift between adjacent spin–torque oscillators results in an angle wave along the array (Fig. 11).128 On the other hand, spin wave interactions between spin–torque oscillators introduce phase differences due to the finite spin wave velocity, and their strength may decay more slowly than that of magnetostatic interactions.184
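
As a qualitative illustration (a Kuramoto-type phase model with placeholder parameters, not the micromagnetic simulation behind Fig. 11), nearest-neighbor coupling of oscillators with a small spread in natural frequencies can lock them to a common frequency while leaving a roughly constant phase (angle) shift from one oscillator to the next:

import numpy as np

# Kuramoto-type sketch of a nearest-neighbor-coupled oscillator chain (illustrative only).
rng = np.random.default_rng(1)
N, K, dt, steps = 20, 5.0, 1e-3, 50000
omega = 2*np.pi*(1.0 + 0.02*rng.standard_normal(N))    # ~2% spread in natural frequencies
theta = 2*np.pi*rng.random(N)

for _ in range(steps):
    coupling = np.zeros(N)
    coupling[:-1] += K*np.sin(theta[1:] - theta[:-1])   # pull from right neighbor
    coupling[1:]  += K*np.sin(theta[:-1] - theta[1:])   # pull from left neighbor
    theta += dt*(omega + coupling)

# After locking, all oscillators share one frequency but keep a finite, nearly constant
# phase shift between neighbors (an "angle wave" along the chain).
dphi = np.angle(np.exp(1j*np.diff(theta)))
print("neighbor phase shifts (rad):", np.round(dphi, 2))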

FIG. 11.

Coupled spin torque oscillator chain. Long chain of spin torque oscillators with perpendicular anisotropy with the following soft layer parameters: D = 60 nm, t = 1 nm, Ms = 1000 emu/cm³, Ku,h = 1 kerg/cm³, and α = 0.01. An external perpendicular magnetic field of 4 kOe is applied. The current has a random distribution of 2% between different spin torque oscillators (similar effects are obtained for distributions in other parameters).


Fortunately, spintronics offers multiple ways to connect magnetic devices through long-range interactions. Optical and spin wave beams could be exploited for this purpose. Another promising option is the wireless coupling offered by microwave signals. Spin–torque nano-oscillators can take a dc electrical input and emit a microwave power that is a non-linear function of the input, resembling the ReLU neural activation function used today in artificial neural networks.185 This radio-frequency signal can then be broadcast to other elements of the network, synapses and neurons. Spintronic diodes can natively sense this microwave signal and convert it to a dc voltage that can be used to feed other devices.134,135 Recent experiments show that these spintronic diodes perform the elementary synaptic operations and that their strength can be controlled in a non-volatile way when they are interfaced with oxide materials that undergo a phase change near room temperature, such as vanadium oxides.35

The potential of magnetic tunnel junctions for neuromorphic computing has been demonstrated by leveraging the non-linear dynamics of harmonic, sinusoidal oscillations induced through a dc spin-polarized current.23 However, a key ingredient of biological neurons is to emit spikes when the membrane potential overcomes a threshold. Therefore, a major class of bio-inspired neural network algorithms—called spiking neural networks—exploits spikes for computing. Such networks encode information in the timing between spikes in addition to the rate of the spikes. They circumvent the heavy external circuitry that is usually required for training synaptic weights. The electrical spikes applied to the synapses can indeed directly induce weight modification and learning through, for example, the bio-inspired learning rule called spike-timing-dependent plasticity.186 The multifunctionality of spintronic devices can be leveraged to achieve spiking behavior. Two methods have been proposed.

The first concept is based on the windmill magnetization dynamics in magnetic tunnel junctions with two weakly coupled free layers.187 A spin-polarized dc current induces a perpetual switching of both magnetic layers.188 Neuron-like spikes are obtained by exploiting the transient behavior of this windmill motion in response to voltage pulses. The second proposed way to achieve spiking behavior is to harness the dynamics in antiferromagnets.121,189,190 The coupled dynamics of the two magnetic sub-systems can, indeed, be rewritten as an equation similar to the phase dynamics of Josephson junctions, which are known to exhibit spikes of activity. Recently, a structure that gives similar behavior in ferromagnetic systems was proposed.131 These proposals are theoretical for now, and demonstrating them experimentally is an important challenge for future spintronic neuromorphic systems.

One of the most useful features of strongly correlated oxides is that the ground state properties can be modified easily by various means, including implantation of, or irradiation with, light inert ions such as He and Ar.191–195 A key advantage of this approach is the ability to focus a beam of light ions down to a diameter on the order of a few nanometers, localizing the effect that the light ions have on the correlated material.194,195 One application is modifying the oxygen stoichiometry within a narrow region of a few nanometers. Since the properties of oxides are closely linked to their oxidation states, such modifications give control of material properties with exquisite lateral resolution.

Rare earth nickelates, which have a metal–insulator transition coupled with a paramagnetic–antiferromagnetic transition,196 provide properties that can be dramatically modified. Upon irradiating the films homogeneously with a beam of helium ions, both transitions are strongly suppressed, and the resulting ground state is a metallic, paramagnetic material. Measurements indicate a sizable change in the nickel valence state as the driving mechanism behind the suppression of the transition. Similar effects are expected to occur in other oxides, such as manganates, titanates, and vanadates, which feature numerous magnetic ground states across phase diagrams where the oxidation states are the control parameter. Furthermore, a focused ion beam could be used to draw domains of paramagnetic regions within a backdrop of antiferromagnetic regions (or vice versa). This could be extended to delineate magnetic domain walls along any desired pattern. Finally, these domain walls could be drawn to connect two magnetic oscillators, thus potentially coupling them through magnons or spinons that live exclusively on the domain wall (Fig. 12). The tunable properties of correlated oxides allow versatile ways of designing domain walls with different materials and could then be used to create and control new coupling mechanisms.

FIG. 12.

(a) A He beam could be used to write a ferromagnetic (FM) channel within a paramagnetic film that acts as a coupling channel between two spin torque oscillators. (b) Similarly, a domain wall of nm width can be drawn between two oscillators to allow for coupling.


Spintronic structures for neuromorphic applications often involve spin textures that can be characterized as topological objects, such as vortex cores and skyrmions.197 Such topological objects possess a winding number, given by $W = \frac{1}{4\pi}\int \hat{m}\cdot\left(\partial_x \hat{m} \times \partial_y \hat{m}\right)\,dx\,dy$, where $\hat{m}$ is the unit vector of the magnetization. Two topological objects are said to have distinct topologies if their winding numbers are different. A topological object can exist only under specific conditions, and therefore modulating its local environment can provide a new form of tunability.

It was recently realized198 that the stray field environment from four Permalloy nanomagnets surrounding a Permalloy disk can be described using topological arguments and a discretized form of the winding number, given by $W = \frac{1}{2\pi}\sum_i \beta\,\Delta\theta_i$, where $\beta$ is the relative change in orientation of the nearest neighbor’s magnetization (±1), and $\Delta\theta_i$ is the angular difference between the magnetization of two nearest neighbors. It was found that the winding number of the topological spin texture (vortex, antivortex, or uniform) in the Permalloy disk would always match the winding number from the stray field in the surrounding nanomagnets. A similar topological argument was also found to hold true for disks physically connected by exchange-mediated nanomagnets.
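
As a minimal numerical sketch (illustrative only, not the analysis code of Ref. 198), the discretized winding number can be evaluated by summing the wrapped angle differences between neighboring magnetization directions around a closed loop:

import numpy as np

def winding_number(angles):
    """Discretized winding number: sum of wrapped angle differences between neighboring
    in-plane magnetization directions around a closed loop, divided by 2*pi."""
    diffs = np.diff(np.append(angles, angles[0]))     # close the loop
    diffs = (diffs + np.pi) % (2*np.pi) - np.pi       # wrap each step to (-pi, pi]
    return diffs.sum() / (2*np.pi)

# Four nanomagnets whose moments rotate once around the loop: vortex-like environment, W = 1
print(round(winding_number(np.array([0.0, np.pi/2, np.pi, 3*np.pi/2])), 3))
# Moments all nearly parallel: uniform-like environment, W = 0
print(round(winding_number(np.array([0.0, 0.05, -0.03, 0.02])), 3))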

An array of disk/nanomagnet structures (see Fig. 13) controlled by an effective topology could, thus, potentially be implemented in neuromorphic or unconventional computing schemes. An active state (see Fig. 13) described by regions with different topologies, indicated by the blue or the green disks, could then be set, for example, by an external field or current, and probed (e.g., by resistance measurement), to distinguish different topological states of the array. We foresee the ideas described here being extended to spin textures, such as skyrmions, in which the arrays would no longer be planar (such as in Fig. 13), but could have a three-dimensional structure.

FIG. 13.

(a) Array of ferromagnetic disks (blue) surrounded by elongated nanomagnets (orange), in which the nanomagnet stray fields provide a topological environment that connects the disks. (b) An active state, for example, set by an external stimulus, in which the array can develop regions with distinct topology (indicated by different colors).


In this Perspective, we have presented some of the recent remarkable progress in developing quantum materials that enable novel devices, developing these devices, and identifying networks that can take advantage of the properties of these devices. Despite this progress, there is much more to be done before these materials become a commercial reality. In this section, we describe a few of the outstanding directions for future research for devices, networks, and the related technologies of sensing and harvesting.

Despite the wide range of properties possessed by quantum materials, the constraints imposed by designing competitive devices for neuromorphic computing are equally tight. The search for optimal materials is quite difficult. There are several routes for further progress.

One route is to embrace computational techniques to speed up the search for materials with desired properties. Computational techniques have been an invaluable tool in the study of quantum materials for neuromorphic computing. Beyond merely an investigative tool, such techniques, particularly in combination with state-of-the-art machine learning algorithms, have the potential to transform quantum materials discovery and design as well. While electronic structure calculations based on density functional theory provide “clean” data (free of the vagaries of experimental measurement) on the intrinsic properties of materials, machine learning algorithms can unveil the underlying structure–property relationships and predict where the next breakthrough material will be found. The challenge is developing a sufficiently large dataset to build accurate machine learning models for quantum material property predictions.
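
As a schematic example of such a workflow (the descriptors, target, and data below are entirely hypothetical placeholders, not an existing dataset or model), one could fit a regression model to computed properties and use it to screen unexplored compositions:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical workflow: descriptors computed for candidate compounds (e.g., d-band filling,
# tolerance factor, bandwidth) are used to predict a target property such as a
# metal-insulator transition temperature. The data here are random placeholders.
rng = np.random.default_rng(42)
X_train = rng.random((200, 3))                        # descriptors for 200 "known" materials
y_train = 300*X_train[:, 0] - 100*X_train[:, 1] + 20*rng.standard_normal(200)  # synthetic target (K)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
X_new = rng.random((5, 3))                            # descriptors for unexplored candidates
print("predicted transition temperatures (K):", np.round(model.predict(X_new), 1))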

Another way to design optimal materials is to go beyond the search for one specific material that has all the desired properties and instead combine a set of materials that together provide them. The physical properties of strongly correlated materials appeal to a broad scientific community because of the versatility and tunability of their electronic responses via internal and/or external perturbations.10,199,200 However, the number of ways to control a single correlated material is limited by the available internal degrees of freedom. This presents a difficult challenge when a device concept requires a specific mechanism of control that is not accessible within a material. A general solution to this limitation is to judiciously design heterostructures that hybridize the functionalities of two seemingly unrelated materials. In this fashion, the properties of one material can be used to change the functionalities of the other. This could provide exquisite control of magnetic materials by voltage application, modification of the metal-to-insulator transition via optical means, changes in the optical properties of semiconductors by electric fields, or control of the conductivity with stress.

Artificial spiking neurons based on Mott insulator materials are a clear example of how much work is still needed to understand the properties of the materials on which novel devices are based. Significant progress has recently been made toward understanding the key physical phenomenon that enables them, namely, the unique non-destructive and “self-healing” electric breakdown in a strongly correlated insulator. Still, questions remain on how to better understand the nanoscale thermodynamics of the device. This is relevant because at those small dimensions, both heating and cooling occur fast, even within nanoseconds. Moreover, a better understanding of these thermodynamics is also needed as the Mott materials are driven significantly out of equilibrium, producing the growth of filaments of the conductive phase, which are believed to undergo a metal–insulator transition driven either by self-heating or by carrier injection,201 and eventually have to relax from the metastable phase.42,202

Carrier recombination dynamics as the metallic state decays back to the insulating state are another factor that can pose fundamental limits to the operating frequencies of Mott threshold switching neurons.203 These challenges for the Mott spiking neuron bring to light another important issue in the development of novel materials and devices—the ability to measure their behavior at the nanoscale. In order to characterize the material properties in different phases, it is necessary to use different analysis techniques, such as x-ray diffraction, Raman spectroscopy, and terahertz transmission. Ideally, these measurements would be made in operando, particularly in materials undergoing the phase transitions that give them their desired properties. A pump–probe characterization method is ideal for this purpose since it can combine different sources for the driving mechanism and the analysis method. For example, vanadium oxides can be triggered with an external voltage, current, or thermal source, and the formation of filaments can be mapped optically or by in situ x-ray nanoimaging.163,204

It is not a trivial task to integrate the driving excitation system for the phase transition with the analysis tools. When using a Raman spectrometer to analyze a temperature-driven device, careful design of the thermal chamber for the sample is necessary. In addition, while driving the phase transition with one source, the probe itself might also contribute to the phase transition, so it is important to measure the dependence of the signal on the probe power.205,206 The design becomes more involved when the phase transition is triggered with simultaneous stimuli (thermal, electrical, and optical) or when the materials are characterized with more sophisticated techniques.

For achieving high spatial resolution in mapping filament formation in phase transition materials, tip-enhanced Raman spectroscopy seems to be a promising approach. However, both the optical probe and the strong electric field could induce the phase transition in addition to the original excitation source. The designed spatial profile of the optical beam, the applied electric field, and the size of the tips determine the accuracy of the results. When optically characterizing the dynamics of the phase transition with high temporal resolution, or its behavior over a specific spectral band, the use of a broadband light source or an ultrafast laser is unavoidable. Consequently, careful design of the optical system for aberration compensation is also necessary. Integrating the characterization system with the driving system for the phase transition requires not only a careful design of each individual system but also an investigation of the contribution of multiple driving sources to the phase transition.

A key characteristic of neuromorphic devices is their high speed, with operation times that can be well below the nanosecond time scale. Brain components, on the other hand, operate much more slowly, at the millisecond time scale. The speed of artificial neuronal devices is an advantage as it can lower the overall energy consumption of a system. Furthermore, since the massive parallelism of the brain is challenging to achieve in hardware, high-speed individual components may compensate for a lower degree of intrinsic parallelism through time multiplexing (for example, a device that operates on nanosecond time scales could, in principle, sequentially emulate on the order of a million millisecond-scale biological neurons), reaching an equivalent overall operating time at the system level.

While conventional computing always evolves toward doing the calculation faster, it may also be important for neuromorphic computing to optimize materials toward slower performance or performance over mixed time scales. The brain’s functionality may, indeed, originate from its processing speed matching the input rate of the information it is processing. Consider, for instance, the task of auditory or visual pattern recognition, which the brain can accomplish with very high accuracy, albeit at slow speeds. Recent work has shown various ways to accomplish such tasks using neuromorphic hardware and software,29,148 which can process them at speeds far faster than the brain. These tantalizing results show how some processes may turn out to be faster than in the brain.

As shown in this Perspective, there has been considerable progress in designing quantum materials that emulate individual elements for neuromorphic computing: oscillating or spiking neurons, and diverse types of synapses with short- or long-term memory. There has also been progress in coupling a small number of devices together to make networks capable of carrying out calculations. A big challenge is to demonstrate that the networks based on the novel devices are competitive in terms of speed, size, resilience, energy efficiency, and production costs. Demonstrating capabilities requires substantial effort particularly for devices and networks that are not based on CMOS substrates because of the current market domination enjoyed by CMOS.

Demonstrating the advantages of a device based on novel materials requires the implementation of a network unless the device is a drop-in replacement for an existing device. In other cases, it is necessary to consider the supporting circuitry. Here, the most straightforward approach is to base the supporting circuitry on CMOS because it has sufficiently established complexity to address almost all tasks. As CMOS runs into the limits of scaling, other possibilities may evolve that avoid some of the constraints created when adopting it. CMOS circuits have been primarily designed for very efficient digital logic, so when the supporting circuitry can be used in that fashion, it may not add much overhead. Digital logic allows for high precision operation with high noise tolerance. Many novel devices aim to use relatively low precision analog encoding of information to take advantage of the fact that most neuromorphic computing tasks do not require high precision. While this approach can lead to low energy use in the device itself, straightforward analog operation using voltages other than the maximum or minimum of the embedding circuitry can lead to significant overhead. Significant design effort is required to ensure that the total energy cost of a new device is competitively small.

One approach to reducing the energy cost is to pulse the voltages rather than apply them in steady state. As an example, consider a superparamagnetic tunnel junction used to generate random bits. It can be much more efficient to read its state with pulses using a pre-charge sense amplifier than to read it continuously.25,123 In addition, the details of the supporting circuitry require us to consider what application the novel device will be used for. Designing this circuitry to carry out tasks for an existing approach is the best way to achieve a direct comparison of the novel device’s efficiency.148,207

A drawback of using CMOS as a substrate is that it uses a relatively fixed window of operating voltages.208 Devices that need larger voltages will require specialized circuitry to apply those voltages, even if only for the forming step, and will require high-power transistors to control that voltage. Such circuits will greatly increase the costs in circuit area and energy consumption for those devices. On the other hand, devices that operate at lower voltages than the CMOS circuit do not provide equivalent savings. Either the voltage would need to be brought down through control transistors, the resistance and energy dissipation of which might negate any energy savings from lower operating voltages, or additional low-voltage circuitry would need to be introduced, greatly increasing the needed circuit area. These issues highlight one desirable property to guide the search for device materials intended to operate on a CMOS substrate—that they operate efficiently at voltages close to those of the CMOS circuitry in which they will be embedded. Alternatively, designing novel CMOS circuits optimized for the new devices could be justified if the advantages offered by a novel device are sufficiently great.

Another challenge for building artificial neural networks and highly connected systems from quantum materials is the large spatial variation in heating and temperature. These variations may become important when massive amounts of time-dependent signals are used to process and encode data. Next-generation computing technologies operate at exaFLOPS (10¹⁸ floating-point operations per second).207 CMOS-based von Neumann architectures of such complexity would consume ∼20 to 30 MW of power (on the order of 20–30 pJ per floating-point operation), roughly the electrical power consumption of a whole city. Bio-inspired computation could have much lower power consumption by using integrated non-volatile memory and logic and by exploiting their capability to learn from unstructured data.

As an example, local thermal dissipation in the commonly used crossbar architecture may cause unwanted interference between artificial synapses that are in close physical proximity. Moreover, temporal coincidences of signals may also cause local thermal disruptions and perhaps cannot be ignored when systems are packaged at nanoscale-dimensions and in three dimensions. In traditional two-dimensional architectures, thermal management is facilitated by the fact that the neural network is proximal to a large thermal sink (i.e., the substrate). However, a three-dimensional system can no longer rely on this thermal sink. Artificial neural networks based on quantum materials, therefore, raise the challenge of exploiting temperature changes for computing while maintaining thermal variations in the range in which the system can operate.

Low-temperature operation gives access to a wider set of quantum materials, but it comes with energy-efficiency issues that differ from those at room temperature. Superconducting circuitry operates at extremely low voltages, so novel devices in superconducting circuits will also need to operate at low voltages.210 Low-temperature CMOS is also in development and may provide some of the control circuitry for superconducting circuits. While it will be more efficient than room-temperature circuits, it will be much less efficient than superconducting circuits, and its energy consumption will have to be accounted for to the extent that it is needed to couple such circuits to room-temperature environments. It also requires impedance matching with superconducting circuits,211 which carries additional overhead. Finally, for low-temperature applications, any heat generated will have to be transferred to room temperature, which requires 75 times more energy in the ideal case (Carnot efficiency) and closer to 1000 times in practice.210 Still, even with this overhead, superconducting circuits can be more efficient in applications that are more compute intensive than data intensive.
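
The ideal factor of 75 quoted above is consistent with a cold stage near 4 K rejecting heat at room temperature, for which the Carnot work required per unit of heat removed is

\[
\frac{W}{Q_c} = \frac{T_h - T_c}{T_c} \approx \frac{300\ \mathrm{K} - 4\ \mathrm{K}}{4\ \mathrm{K}} \approx 74,
\]

and practical cryocoolers are typically another order of magnitude less efficient, consistent with the factor of roughly 1000 cited for real systems.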

One device that we have featured in this Perspective illustrates some of the issues associated with demonstrating viability. Despite much progress at the device level, there has been only limited development of the Mott neuron42,145,156 and the neuristor70 into networks. Establishing the utility of these devices requires progress on the significant challenge of interconnecting these neurons and understanding how they couple and interact when driven out of equilibrium. Many paths hold possibilities. For instance, thermal coupling was recently demonstrated by triggering spike firing through the heating of a metallic Ti nanowire crossing the gap of a VO2 device.157 Another possibility derives from the interaction of the filamentary incubation times of two Mott neurons in physical proximity. This coupling raises the possibility of different multi-electrode geometries with possible thermal and electro-thermal coupling. The viability of these devices starts with being able to implement a system of two, or even many, Mott neurons, where the firing of an upstream neuron can drive the spiking of a downstream one. Making a useful network will require implementing synaptic tuning, that is, making non-volatile changes in a resistive switching material driven by the spiking.

One approach to neuromorphic computing that might eliminate much of the CMOS circuitry is to use optics. A key component of such implementations might be phase transition materials. While phase transition materials have been attracting research attention due to their hysteresis behavior, i.e., memory functionality, and significant variation of material properties in different phases,206 they are also optically active. The phase transition can be driven by various stimuli. Analogously to the conductivity change in electrically driven phase transition materials, the variation of optical properties212 allows the development of a photonic-based neuromorphic system that takes advantage of optical connectivity for scalability and speed to increase the throughput of such systems.

Vanadium dioxide can exhibit more than a 15% change in reflectivity, more than a 100% change in the third-order nonlinear coefficient, and a transition time of less than 100 fs when the phase transition is triggered by ultrafast laser pulses. The details of the excitation beams, such as pulse width, pulse duration, repetition rate, wavelength (i.e., optical carrier frequency), and polarization, prescribe the optical properties needed in the materials.213

Based on variations of optical properties, a hybrid photonic-based computing circuit, which consisted of a 2D optical waveguide integrated with phase transition materials deposited on top of it, has been implemented.214 To implement the leaky-integrate-and-fire behavior of a neuron, an all-optical circuit with a semiconductor optical amplifier and nonlinear fibers was demonstrated.215 Another potential approach can be explored by using a gain medium as an integrator and phase transition materials as output couplers, since the reflectivity of phase transition material changes with varying incident optical pump powers. To emulate synaptic plasticity, devices with electrically driven phase transition materials have demonstrated the capability of updating the synaptic weight and generating different output signals in response to different input signals.

Spintronics also offers promising solutions to enhance the connectivity of neuromorphic networks. The properties of magnetic materials are sensitive to strain, light, and electric and magnetic fields, in the dc and rf ranges. By themselves, or combined with other materials, they can generate waves, such as spin waves and radio-frequency signals, that can then propagate to other magnetic elements. Wave-like connectivity between neuromorphic spintronic elements can be exploited to build densely connected neural networks.132 The emission and propagation of magnetic quasiparticles in two- or three-dimensional magnetic networks is also an interesting option to achieve high connectivity.

An intriguing option to enhance the size and connectivity of artificial neural networks is to use interconnected qubits for machine learning.216 The advantage of quantum neural networks is the possibility of encoding neurons in the basis states of a quantum ensemble rather than in the qubits themselves (n qubits span 2ⁿ basis states), thus obtaining neural networks of exponential size on a very small number of qubits. A small number of noisy qubits can be made useful for computing by adopting a hybrid quantum/classical approach that offloads some subroutines, such as data pre- and post-processing, to a classical computer. Since universal quantum computing is still out of reach for currently existing Noisy Intermediate Scale Quantum (NISQ) devices, quantum machine learning and quantum neural networks could provide more immediate applications for these devices.216

While the road to viable implementation of networks using novel devices has many roadblocks, the potential benefits keep it promising. The human brain is clear evidence that, for many problems, more efficient devices and networks exist than our current implementations. Bioinspired computation encompasses a broad spectrum of new technologies based on energy-efficient biological systems, particularly on the performance of the human brain. The human brain has about 100 × 10⁹ neurons and 10¹⁵ (1000 trillion) synapses;217 assuming that each synapse stores about 5 bits of information,218 the storage capacity of the human brain is about 1 petabyte. Biological intelligence results from complex sensory information processing and cognition, so there are evolutionary morphological variations of the brain among different species.220 This is clearly seen in the mammalian variation of the cerebral cortex,221 the dependence of the brain’s operational temperature range on the encephalization quotient for different animals,219 and the complexity of the cerebral cortex’s neural circuitry.220

How does the neural activity in the brain lead to energy-efficient computation? Remarkable advances in network science open up new perspectives to understand this highly complex process, unveiling emergent features of low-consumption computing. Although neural networks transport electrical and chemical signals through an intricate neuronal web, their behavior is governed by universal laws that capture significant emergent features222 and could shed light on the elementary information processing of active elements (neurons, synapses, dendrites) and on structural properties of the brain (neural network, vascular network).

One of the central themes in modern solid-state physics is combining materials in order to find new response functionalities that emerge only when two distinct materials come into proximity. Recently, we showed that a judiciously designed bilayer containing two materials with no apparent similarity, the photosensitive semiconductor CdS and a strongly correlated Mott insulator, displays this kind of novel functionality.223 While the Mott insulator itself has very little response to electromagnetic radiation (light), when the two are adjacent one sees a dramatic effect of shining light: the Mott metal–insulator transition is completely suppressed. As it pertains to neuromorphic systems, the Mott insulator in question has also been shown to exhibit spiking behavior under the right conditions. Therefore, an adequately designed device that includes the CdS could result in a photosensitive neuron-like system, which could detect light while simultaneously using the spiking behavior to encode information about it. In this fashion, a system similar to the eye/brain sensor could be developed.

Spin-based systems also offer a wide range of possibilities for native sensing of inputs and energy harvesting. Spintronic sensors have been demonstrated to be competitive for magnetic field sensing, molecular sorting, radio-frequency signal analysis, and spin wave detection.224 Spintronic neural networks, therefore, possess the ability to sense such inputs directly in the analog domain, without an energy-costly initial conversion to the digital domain. Spintronic nanodevices can also harvest energy from signals collected in the outside world, such as radio-frequency signals or heat.144 Spintronics, therefore, offers a promising path to building energy-efficient physical neural networks that directly sense the input signals they process and harvest part, or even all, of the power they need to operate.

Bioelectricity is the main stimulus in living beings for regulating the functions of neurons, tissues, and organs. Thus, for a long time, electric biasing has been considered a key component in realizing neuromorphic computing. However, with the advancement of energy harvesting technologies, other sources of energy may also be used in a well-controlled way. If a neuromorphic system can take advantage of clean, renewable, easily accessible energy sources, for instance, solar, hydrogen, and geothermal, it can be more energy efficient and environmentally friendly.

VO2 is an archetypal Mott system and can be used to emulate the functions of neurons and synapses.54,56 Our preliminary results have shown that, by combining it with photoactive materials, the metal-to-insulator transition temperature of VO2 can change dramatically under sunlight exposure, illustrating a strong photoresponse.223 This response is largely due to the Mott nature of VO2, where the electron/hole doping originating from the photoactive materials strongly influences its physical properties. Indeed, studies have shown that vanadium oxide devices can be used to store energy locally as concentration cells.203 Many questions related to these new applications remain open: (1) how the sunlight alters the functionality of VO2 in neuristors and artificial synapses, (2) the role of heating by the sunlight in the metal-to-insulator transition, and (3) the mechanism that controls the proton doping/reaction in the VOx family.

A major challenge for neuromorphic computing is to achieve online learning with high accuracy and low energy consumption. The state-of-the-art algorithm for training neural networks, backpropagation of error, is not hardware friendly. It requires hardware implementations of complex circuits for computing gradients, storing gradients—necessitating huge memory resources as there is one gradient per synapse—and programming the synaptic elements with these gradients. In addition, physical neural networks require a huge number of complex interconnected artificial neurons and synapses. There are currently substantial efforts to design and build such circuits.225 An interesting approach is to exploit network dynamics and physics to extract gradients, which avoids directly computing them through chain derivatives.226 However, these approaches still require circuits to store and apply gradients.

A dream for neuromorphic computing is to create physical neural networks that learn intrinsically, through the interplay of synapses and neurons, as the brain does, without the need for cumbersome external circuits. Biologically inspired learning rules, such as spike-timing-dependent plasticity, can implement such learning and can be implemented with memristive devices.16 In that case, neurons emit voltage spikes that directly program the synaptic devices to which they are connected, depending on the timing of the spikes. The problem with this local learning rule is that it does not minimize an error at the output of the network, giving poor accuracy on complex tasks. A solution is to add a supervision term to the learning rule, but the procedure to compute this term is mathematically involved and the resulting circuits can be complex.227
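
A minimal sketch of such a pair-based spike-timing-dependent plasticity rule (with illustrative parameter values, not a model of any specific memristive device) updates a synaptic weight according to the time difference between post- and pre-synaptic spikes:

import numpy as np

def stdp_update(w, t_pre, t_post, A_plus=0.01, A_minus=0.012, tau=20e-3):
    """Pair-based STDP: potentiate if the pre-synaptic spike precedes the post-synaptic
    spike (causal order), depress otherwise. Times in seconds; weight clipped to [0, 1]."""
    dt = t_post - t_pre
    if dt > 0:
        w += A_plus * np.exp(-dt / tau)      # long-term potentiation
    else:
        w -= A_minus * np.exp(dt / tau)      # long-term depression
    return float(np.clip(w, 0.0, 1.0))

w = 0.5
w = stdp_update(w, t_pre=0.000, t_post=0.005)   # pre before post: weight increases
w = stdp_update(w, t_pre=0.010, t_post=0.004)   # post before pre: weight decreases
print(round(w, 4))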

An interesting alternative for training physical neural networks is to look for learning algorithms that are deeply rooted in physics. Hopfield networks,179 Boltzmann machines,228 and equilibrium propagation229 exploit the ability of a physical system that possesses an energy function to relax to equilibrium for learning. The latter algorithm is particularly promising because it has been shown to compute gradients equivalent to backpropagation through time, to scale to complex image recognition tasks, and to implement intrinsic learning. It belongs to a class of algorithms developed to understand how the backpropagation of errors could be implemented in the brain, ones that exploit neural dynamics to compute and propagate gradients in the network.230 These algorithms, being designed to train physical networks of synapses and neurons in brain hardware, are particularly promising for creating physical networks that learn intrinsically, in materio, through the physics of quantum neuromorphic devices.

The preparation of the manuscript was done through the collective efforts of the members of the Energy Frontier Research Center (EFRC) Quantum Materials for Energy Efficient Neuromorphic Computing, funded by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), under Award No. DE-SC0019273. Figures 1 and 12 were designed by Mario Rojas Grave De Peralta.

The authors have no conflicts to disclose.

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

1.
J. B.
Aimone
, “
A roadmap for reaching the potential of brain-derived computing
,”
Adv. Intell. Syst.
3
,
2000191
(
2021
).
2.
A.
Mehonic
and
A. J.
Kenyon
, “
Brain-inspired computing needs a master plan
,”
Nature
604
,
255
260
(
2022
).
3.
S.
Furber
, “
Large-scale neuromorphic computing systems
,”
J. Neural Eng.
13
,
051001
(
2016
).
4.
M.
Davies
,
N.
Srinivasa
,
T.-H.
Lin
,
G.
Chinya
,
Y.
Cao
,
S. H.
Choday
,
G.
Dimou
,
P.
Joshi
,
N.
Imam
,
S.
Jain
,
Y.
Liao
,
C.-K.
Lin
,
A.
Lines
,
R.
Liu
,
D.
Mathaikutty
,
S.
McCoy
,
A.
Paul
,
J.
Tse
,
G.
Venkataramanan
,
Y.-H.
Weng
,
A.
Wild
,
Y.
Yang
, and
H.
Wang
, “
Loihi: A neuromorphic manycore processor with on-chip learning
,”
IEEE Micro
38
,
82
99
(
2018
).
5.
S. B.
Furber
,
D. R.
Lester
,
L. A.
Plana
,
J. D.
Garside
,
E.
Painkras
,
S.
Temple
, and
A. D.
Brown
, “
Overview of the SpiNNaker system architecture
,”
IEEE Trans. Comput.
62
,
2454
2467
(
2012
).
6.
P. A.
Merolla
,
J. V.
Arthur
,
R.
Alvarez-Icaza
,
A. S.
Cassidy
,
J.
Sawada
,
F.
Akopyan
,
B. L.
Jackson
,
N.
Imam
,
C.
Guo
,
Y.
Nakamura
,
B.
Brezzo
,
I.
Vo
,
S. K.
Esser
,
R.
Appuswamy
,
B.
Taba
,
A.
Amir
,
M. D.
Flickner
,
W. P.
Risk
,
R.
Manohar
, and
D. S.
Modha
, “
A million spiking-neuron integrated circuit with a scalable communication network and interface
,”
Science
345
,
668
673
(
2014
).
7.
J.
Schemmel
,
A.
Grübl
,
S.
Hartmann
,
A.
Kononov
,
C.
Mayr
,
K.
Meier
,
S.
Millner
,
J.
Partzsch
,
S.
Schiefer
,
S.
Scholze
 et al, “
Live demonstration: A scaled-down version of the BrainScaleS wafer-scale neuromorphic system
,” in
2012 IEEE international symposium on circuits and systems (ISCAS)
(
IEEE
,
2012
), p.
702
.
8.
C. S.
Thakur
,
J. L.
Molin
,
G.
Cauwenberghs
,
G.
Indiveri
,
K.
Kumar
,
N.
Qiao
,
J.
Schemmel
,
R.
Wang
,
E.
Chicca
,
J.
Olson Hasler
,
J.-s.
Seo
,
S.
Yu
,
Y.
Cao
,
A.
van Schaik
, and
R.
Etienne-Cummings
, “
Large-scale neuromorphic spiking array processors: A quest to mimic the brain
,”
Front. Neurosci.
12
,
891
(
2018
).
9.
B. V.
Benjamin
,
P.
Gao
,
E.
McQuinn
,
S.
Choudhary
,
A. R.
Chandrasekaran
,
J.-M.
Bussat
,
R.
Alvarez-Icaza
,
J. V.
Arthur
,
P. A.
Merolla
, and
K.
Boahen
, “
Neurogrid: A mixed-analog-digital multichip system for large-scale neural simulations
,”
Proc. IEEE
102
,
699
716
(
2014
).
10.
M.
Imada
,
A.
Fujimori
, and
Y.
Tokura
, “
Metal-insulator transitions
,”
Rev. Mod. Phys.
70
,
1039
(
1998
).
11. K. Seo, I. Kim, S. Jung, M. Jo, S. Park, J. Park, J. Shin, K. P. Biju, J. Kong, K. Lee, B. Lee, and H. Hwang, “Analog memory and spike-timing-dependent plasticity characteristics of a nanoscale titanium oxide bilayer resistive switching device,” Nanotechnology 22, 254023 (2011).
12. J. Grollier, D. Querlioz, and M. D. Stiles, “Spintronic nanodevices for bioinspired computing,” Proc. IEEE 104, 2024–2039 (2016).
13. J. Grollier, D. Querlioz, K. Y. Camsari, K. Everschor-Sitte, S. Fukami, and M. D. Stiles, “Neuromorphic spintronics,” Nat. Electron. 3, 360–370 (2020).
14. M. A. Zidan, J. P. Strachan, and W. D. Lu, “The future of electronics based on memristive systems,” Nat. Electron. 1, 22–29 (2018).
15. D. Ielmini, Z. Wang, and Y. Liu, “Brain-inspired computing via memory device physics,” APL Mater. 9, 050702 (2021).
16. Z. Wang, H. Wu, G. W. Burr, C. S. Hwang, K. L. Wang, Q. Xia, and J. J. Yang, “Resistive switching materials for information processing,” Nat. Rev. Mater. 5, 173–195 (2020).
17. S. H. Lee, X. Zhu, and W. D. Lu, “Nanoscale resistive switching devices for memory and computing applications,” Nano Res. 13, 1228–1243 (2020).
18. Y. Xi, B. Gao, J. Tang, A. Chen, M.-F. Chang, X. S. Hu, J. Van Der Spiegel, H. Qian, and H. Wu, “In-memory learning with analog resistive switching memory: A review and perspective,” Proc. IEEE 109, 14–42 (2020).
19. S. Choi, J. Yang, and G. Wang, “Emerging memristive artificial synapses and neurons for energy-efficient neuromorphic computing,” Adv. Mater. 32, 2004659 (2020).
20. R. Yang, H.-M. Huang, and X. Guo, “Memristive synapses and neurons for bioinspired computing,” Adv. Electron. Mater. 5, 1900287 (2019).
21. K. Yang, J. Joshua Yang, R. Huang, and Y. Yang, “Nonlinearity in memristors for neuromorphic dynamic systems,” Small Sci. 2, 2100049 (2022).
22. G. Indiveri, B. Linares-Barranco, T. J. Hamilton, A. van Schaik, R. Etienne-Cummings, T. Delbruck, S.-C. Liu, P. Dudek, P. Häfliger, S. Renaud, J. Schemmel, G. Cauwenberghs, J. Arthur, K. Hynna, F. Folowosele, S. Saighi, T. Serrano-Gotarredona, J. Wijekoon, Y. Wang, and K. Boahen, “Neuromorphic silicon neuron circuits,” Front. Neurosci. 5, 73 (2011).
23. J. Torrejon, M. Riou, F. A. Araujo, S. Tsunegi, G. Khalsa, D. Querlioz, P. Bortolotti, V. Cros, K. Yakushiji, A. Fukushima, H. Kubota, S. Yuasa, M. D. Stiles, and J. Grollier, “Neuromorphic computing with nanoscale spintronic oscillators,” Nature 547, 428 (2017).
24. A. Chakravarty, J. H. Mentink, C. S. Davies, K. T. Yamada, A. V. Kimel, and T. Rasing, “Supervised learning of an opto-magnetic neural network with ultrashort laser pulses,” Appl. Phys. Lett. 114, 192407 (2019).
25. A. Mizrahi, T. Hirtzlin, A. Fukushima, H. Kubota, S. Yuasa, J. Grollier, and D. Querlioz, “Neural-like computing with populations of superparamagnetic basis functions,” Nat. Commun. 9, 1533 (2018).
26. D. Vodenicarevic, N. Locatelli, A. Mizrahi, J. S. Friedman, A. F. Vincent, M. Romera, A. Fukushima, K. Yakushiji, H. Kubota, S. Yuasa, S. Tiwari, J. Grollier, and D. Querlioz, “Low-energy truly random number generation with superparamagnetic tunnel junctions for unconventional computing,” Phys. Rev. Appl. 8, 054045 (2017).
27. B. Sutton, K. Y. Camsari, B. Behin-Aein, and S. Datta, “Intrinsic optimization using stochastic nanomagnets,” Sci. Rep. 7, 44370 (2017).
28. W. A. Borders, A. Z. Pervaiz, S. Fukami, K. Y. Camsari, H. Ohno, and S. Datta, “Integer factorization using stochastic magnetic tunnel junctions,” Nature 573, 390–393 (2019).
29. M. Romera, P. Talatchian, S. Tsunegi, F. A. Araujo, V. Cros, P. Bortolotti, J. Trastoy, K. Yakushiji, A. Fukushima, H. Kubota, S. Yuasa, M. Ernoult, D. Vodenicarevic, T. Hirtzlin, N. Locatelli, D. Querlioz, and J. Grollier, “Vowel recognition with four coupled spin-torque nano-oscillators,” Nature 563, 230–234 (2018).
30. M. Zahedinejad, A. A. Awad, S. Muralidhar, R. Khymyn, H. Fulara, H. Mazraati, M. Dvornik, and J. Åkerman, “Two-dimensional mutually synchronized spin Hall nano-oscillator arrays for neuromorphic computing,” Nat. Nanotechnol. 15, 47–52 (2020).
31. A. Houshang, M. Zahedinejad, S. Muralidhar, R. Khymyn, M. Rajabali, H. Fulara, A. A. Awad, J. Åkerman, J. Chȩciński, and M. Dvornik, “Phase-binarized spin Hall nano-oscillator arrays: Towards spin Hall Ising machines,” Phys. Rev. Appl. 17, 014003 (2022).
32. A. Dussaux, B. Georges, J. Grollier, V. Cros, A. V. Khvalkovskiy, A. Fukushima, M. Konoto, H. Kubota, K. Yakushiji, S. Yuasa, K. A. Zvezdin, K. Ando, and A. Fert, “Large microwave generation from current-driven magnetic vortex oscillators in magnetic tunnel junctions,” Nat. Commun. 1, 8 (2010).
33. V. E. Demidov, S. Urazhdin, A. Zholud, A. V. Sadovnikov, and S. O. Demokritov, “Nanoconstriction-based spin-Hall nano-oscillator,” Appl. Phys. Lett. 105, 172410 (2014).
34. M. Zahedinejad, H. Fulara, R. Khymyn, A. Houshang, M. Dvornik, S. Fukami, S. Kanai, H. Ohno, and J. Åkerman, “Memristive control of mutual spin Hall nano-oscillator synchronization for neuromorphic computing,” Nat. Mater. 21, 81–87 (2022).
35. J.-W. Xu, Y. Chen, N. M. Vargas, P. Salev, P. N. Lapa, J. Trastoy, J. Grollier, I. K. Schuller, and A. D. Kent, “A quantum material spintronic resonator,” Sci. Rep. 11, 15082 (2021).
36. A. A. Awad, P. Dürrenfeld, A. Houshang, M. Dvornik, E. Iacocca, R. K. Dumas, and J. Åkerman, “Long-range mutual synchronization of spin Hall nano-oscillators,” Nat. Phys. 13, 292–299 (2017).
37. J. Grollier, V. Cros, and A. Fert, “Synchronization of spin-transfer oscillators driven by stimulated microwave currents,” Phys. Rev. B 73, 060409 (2006).
38. A. Houshang, E. Iacocca, P. Dürrenfeld, S. R. Sani, J. Åkerman, and R. K. Dumas, “Spin-wave-beam driven synchronization of nanocontact spin-torque oscillators,” Nat. Nanotechnol. 11, 280–286 (2016).
39. D. Kuzum, S. Yu, and H.-S. P. Wong, “Synaptic electronics: Materials, devices and applications,” Nanotechnology 24, 382001 (2013).
40. N. F. Mott, “Metal-insulator transition,” Rev. Mod. Phys. 40, 677 (1968).
41. H.-T. Kim, B.-G. Chae, D.-H. Youn, S.-L. Maeng, G. Kim, K.-Y. Kang, and Y.-S. Lim, “Mechanism and observation of Mott transition in VO2-based two- and three-terminal devices,” New J. Phys. 6, 52 (2004).
42. J. del Valle, P. Salev, Y. Kalcheim, and I. K. Schuller, “A caloritronics-based Mott neuristor,” Sci. Rep. 10, 4292 (2020).
43. H. Markram, E. Muller, S. Ramaswamy, M. W. Reimann, M. Abdellah, C. A. Sanchez, A. Ailamaki, L. Alonso-Nanclares, N. Antille, S. Arsever, G. A. A. Kahou, T. K. Berger, A. Bilgili, N. Buncic, A. Chalimourda, G. Chindemi, J.-D. Courcol, F. Delalondre, V. Delattre, S. Druckmann, R. Dumusc, J. Dynes, S. Eilemann, E. Gal, M. E. Gevaert, J.-P. Ghobril, A. Gidon, J. W. Graham, A. Gupta, V. Haenel, E. Hay, T. Heinis, J. B. Hernando, M. Hines, L. Kanari, D. Keller, J. Kenyon, G. Khazen, Y. Kim, J. G. King, Z. Kisvarday, P. Kumbhar, S. Lasserre, J.-V. Le Bé, B. R. C. Magalhães, A. Merchán-Pérez, J. Meystre, B. R. Morrice, J. Muller, A. Muñoz-Céspedes, S. Muralidhar, K. Muthurasa, D. Nachbaur, T. H. Newton, M. Nolte, A. Ovcharenko, J. Palacios, L. Pastor, R. Perin, R. Ranjan, I. Riachi, J.-R. Rodríguez, J. L. Riquelme, C. Rössert, K. Sfyrakis, Y. Shi, J. C. Shillcock, G. Silberberg, R. Silva, F. Tauheed, M. Telefont, M. Toledo-Rodriguez, T. Tränkler, W. Van Geit, J. V. Díaz, R. Walker, Y. Wang, S. M. Zaninetta, J. DeFelipe, S. L. Hill, I. Segev, and F. Schürmann, “Reconstruction and simulation of neocortical microcircuitry,” Cell 163, 456–492 (2015).
44. B. K. Ridley, “Specific negative resistance in solids,” Proc. Phys. Soc. 82, 954 (1963).
45. J. Del Valle, N. M. Vargas, R. Rocco, P. Salev, Y. Kalcheim, P. N. Lapa, C. Adda, M.-H. Lee, P. Y. Wang, L. Fratino, M. J. Rozenberg, and I. K. Schuller, “Spatiotemporal characterization of the field-induced insulator-to-metal transition,” Science 373, 907–911 (2021).
46. L. Hu, X. Luo, K. J. Zhang, X. W. Tang, L. Zu, X. C. Kan, L. Chen, X. B. Zhu, W. H. Song, J. M. Dai, and Y. P. Sun, “Oxygen vacancies-induced metal-insulator transition in La2/3Sr1/3VO3 thin films: Role of the oxygen substrate-to-film transfer,” Appl. Phys. Lett. 105, 111607 (2014).
47. J. Chen, W. Mao, B. Ge, J. Wang, X. Ke, V. Wang, Y. Wang, M. Döbeli, W. Geng, H. Matsuzaki, J. Shi, and Y. Jiang, “Revealing the role of lattice distortions in the hydrogen-induced metal-insulator transition of SmNiO3,” Nat. Commun. 10, 694 (2019).
48. S. Zhang, H. Vo, and G. Galli, “Predicting the onset of metal–insulator transitions in transition metal oxides—A first step in designing neuromorphic devices,” Chem. Mater. 33, 3187–3195 (2021).
49. I.-T. Chiu, M.-H. Lee, S. Cheng, S. Zhang, L. Heki, Z. Zhang, Y. Mohtashami, P. N. Lapa, M. Feng, P. Shafer, A. T. N’Diaye, A. Mehta, J. A. Schuller, G. Galli, S. Ramanathan, Y. Zhu, I. K. Schuller, and Y. Takamura, “Cation and anion topotactic transformations in cobaltite thin films leading to Ruddlesden-Popper phases,” Phys. Rev. Mater. 5, 064416 (2021).
50. Q. Lu and B. Yildiz, “Voltage-controlled topotactic phase transition in thin-film SrCoOx monitored by in situ x-ray diffraction,” Nano Lett. 16, 1186–1193 (2016).
51. H. Jeen, W. S. Choi, M. D. Biegalski, C. M. Folkman, I.-C. Tung, D. D. Fong, J. W. Freeland, D. Shin, H. Ohta, M. F. Chisholm, and H. N. Lee, “Reversible redox reactions in an epitaxially stabilized SrCoOx oxygen sponge,” Nat. Mater. 12, 1057–1063 (2013).
52. N. Lu, P. Zhang, Q. Zhang, R. Qiao, Q. He, H.-B. Li, Y. Wang, J. Guo, D. Zhang, Z. Duan, Z. Li, M. Wang, S. Yang, M. Yan, E. Arenholz, S. Zhou, W. Yang, L. Gu, C.-W. Nan, J. Wu, Y. Tokura, and P. Yu, “Electric-field control of tri-state phase transformation with a selective dual-ion switch,” Nature 546, 124–128 (2017).
53. D. A. Gilbert, A. J. Grutter, P. D. Murray, R. V. Chopdekar, A. M. Kane, A. L. Ionin, M. S. Lee, S. R. Spurgeon, B. J. Kirby, B. B. Maranville, A. T. N’Diaye, A. Mehta, E. Arenholz, K. Liu, Y. Takamura, and J. A. Borchers, “Ionic tuning of cobaltites at the nanoscale,” Phys. Rev. Mater. 2, 104402 (2018).
54. J. Del Valle, P. Salev, F. Tesler, N. M. Vargas, Y. Kalcheim, P. Wang, J. Trastoy, M.-H. Lee, G. Kassabian, J. G. Ramírez, M. J. Rozenberg, and I. K. Schuller, “Subthreshold firing in Mott nanodevices,” Nature 569, 388–392 (2019).
55. J. Lin, Annadi, S. Sonde, C. Chen, L. Stan, K. Achari, S. Ramanathan, and S. Guha, “Low-voltage artificial neuron using feedback engineered insulator-to-metal-transition devices,” in 2016 IEEE International Electron Devices Meeting (IEDM) (IEEE, 2016), pp. 34–35.
56. S. Cheng, M.-H. Lee, X. Li, L. Fratino, F. Tesler, M.-G. Han, J. del Valle, R. C. Dynes, M. J. Rozenberg, I. K. Schuller, and Y. Zhu, “Operando characterization of conductive filaments during resistive switching in Mott VO2,” Proc. Natl. Acad. Sci. U. S. A. 118, e2013676118 (2021).
57. H.-T. Zhang, T. J. Park, A. N. M. N. Islam, D. S. J. Tran, S. Manna, Q. Wang, S. Mondal, H. Yu, S. Banik, S. Cheng, H. Zhou, S. Gamage, S. Mahapatra, Y. Zhu, Y. Abate, N. Jiang, S. K. R. S. Sankaranarayanan, A. Sengupta, C. Teuscher, and S. Ramanathan, “Reconfigurable perovskite nickelate electronics for artificial intelligence,” Science 375, 533–539 (2022).
58. R. Tran, X.-G. Li, S. P. Ong, Y. Kalcheim, and I. K. Schuller, “Metal-insulator transition in V2O3 with intrinsic defects,” Phys. Rev. B 103, 075134 (2021).
59. M. Kotiuga and K. M. Rabe, “High-density electron doping of SmNiO3 from first principles,” Phys. Rev. Mater. 3, 115002 (2019).
60. Y. Cui, X. Liu, W. Fan, J. Ren, and Y. Gao, “Metal–insulator transition in RNiO3 (R = Pr, Nd, Sm, Gd, Tb, Dy, Ho, Er) induced by Li doping: A first-principles study,” J. Appl. Phys. 129, 235107 (2021).
61. X.-P. Wang, X.-B. Li, N.-K. Chen, J. Bang, R. Nelson, C. Ertural, R. Dronskowski, H.-B. Sun, and S. Zhang, “Time-dependent density-functional theory molecular-dynamics study on amorphization of Sc-Sb-Te alloy under optical excitation,” npj Comput. Mater. 6, 31 (2020).
62. M. C. Bennett, G. Hu, G. Wang, O. Heinonen, P. R. Kent, J. T. Krogel, and P. Ganesh, “Origin of metal-insulator transitions in correlated perovskite metals,” Phys. Rev. Res. 4, L022005 (2022).
63. U. Resheed, T. Alsuwian, M. Imran, H. Algadi, E. A. Khera, R. M. A. Khalil, C. Mahata, and F. Hussain, “Density functional theory insight into metal ions and vacancies for improved performance in storage devices,” Int. J. Energy Res. 45, 10882–10894 (2021).
64. T. Alsuwian, F. Kousar, U. Rasheed, M. Imran, F. Hussain, R. M. A. Khalil, H. Algadi, N. Batool, E. A. Khera, S. Kiran, and M. N. Ashiq, “First principles investigation of physically conductive bridge filament formation of aluminum doped perovskite materials for neuromorphic memristive applications,” Chaos, Solitons Fractals 150, 111111 (2021).
65. S. Singh, T. A. Abtew, G. Horrocks, C. Kilcoyne, P. M. Marley, A. A. Stabile, S. Banerjee, P. Zhang, and G. Sambandamurthy, “Selective electrochemical reactivity of rutile VO2 towards the suppression of metal-insulator transition,” Phys. Rev. B 93, 125132 (2016).
66. I. Vaskivskyi, I. A. Mihailovic, S. Brazovskii, J. Gospodaric, T. Mertelj, D. Svetin, P. Sutar, and D. Mihailovic, “Fast electronic resistance switching involving hidden charge density wave states,” Nat. Commun. 7, 11442 (2016).
67. X. Li, T. Qiu, J. Zhang, E. Baldini, J. Lu, A. M. Rappe, and K. A. Nelson, “Terahertz field–induced ferroelectricity in quantum paraelectric SrTiO3,” Science 364, 1079–1082 (2019).
68. S. Prosandeev, J. Grollier, D. Talbayev, B. Dkhil, and L. Bellaiche, “Ultrafast neuromorphic dynamics using hidden phases in the prototype of relaxor ferroelectrics,” Phys. Rev. Lett. 126, 027602 (2021).
69. N. P. Armitage, P. Fournier, and R. L. Greene, “Progress and perspectives on electron-doped cuprates,” Rev. Mod. Phys. 82, 2421 (2010).
70. M. D. Pickett, G. Medeiros-Ribeiro, and R. S. Williams, “A scalable neuristor built with Mott memristors,” Nat. Mater. 12, 114–117 (2013).
71. Z. Zhang, F. Zuo, C. Wan, A. Dutta, J. Kim, J. Rensberg, R. Nawrodt, H. H. Park, T. J. Larrabee, X. Guan, Y. Zhou, S. M. Prokes, C. Ronning, V. M. Shalaev, A. Boltasseva, M. A. Kats, and S. Ramanathan, “Evolution of metallicity in vanadium dioxide by creation of oxygen vacancies,” Phys. Rev. Appl. 7, 034008 (2017).
72. D. Apalkov, B. Dieny, and J. M. Slaughter, “Magnetoresistive random access memory,” Proc. IEEE 104, 1796–1830 (2016).
73. R. L. Stamps, S. Breitkreutz, J. Åkerman, A. V. Chumak, Y. Otani, G. E. W. Bauer, J.-U. Thiele, M. Bowen, S. A. Majetich, M. Kläui, I. L. Prejbeanu, B. Dieny, N. M. Dempsey, and B. Hillebrands, “The 2014 magnetism roadmap,” J. Phys. D: Appl. Phys. 47, 333001 (2014).
74. R. H. Dee, “Magnetic tape for data storage: An enduring technology,” Proc. IEEE 96, 1775–1785 (2008).
75. N. Locatelli, V. Cros, and J. Grollier, “Spin-torque building blocks,” Nat. Mater. 13, 11–20 (2014).
76. S. A. Wolf and D. M. Treger, “Scanning the issue—Special issue on spintronics,” Proc. IEEE 91, 647–651 (2003).
77. I. Žutić, J. Fabian, and S. D. Sarma, “Spintronics: Fundamentals and applications,” Rev. Mod. Phys. 76, 323 (2004).
78. A. Brataas, A. D. Kent, and H. Ohno, “Current-induced torques in magnetic materials,” Nat. Mater. 11, 372 (2012).
79. H. Ohno, M. D. Stiles, and B. Dieny, “Spintronics,” Proc. IEEE 104, 1782 (2016).
80. A. Sengupta and K. Roy, “Neuromorphic computing enabled by physics of electron spins: Prospects and perspectives,” Appl. Phys. Express 11, 030101 (2018).
81. M. D. Stiles and J. Miltat, “Spin-transfer torque and dynamics,” in Spin Dynamics in Confined Magnetic Structures III (Springer, 2006), pp. 225–308.
82. D. C. Ralph and M. D. Stiles, “Spin transfer torques,” J. Magn. Magn. Mater. 320, 1190–1216 (2008).
83. A. Manchon, J. Železnỳ, I. M. Miron, T. Jungwirth, J. Sinova, A. Thiaville, K. Garello, and P. Gambardella, “Current-induced spin-orbit torques in ferromagnetic and antiferromagnetic systems,” Rev. Mod. Phys. 91, 035004 (2019).
84. A. D. Kent and D. C. Worledge, “A new spin on magnetic memories,” Nat. Nanotechnol. 10, 187–191 (2015).
85. T. J. Silva and W. H. Rippard, “Developments in nano-oscillators based upon spin-transfer point-contact devices,” J. Magn. Magn. Mater. 320, 1260–1271 (2008).
86. T. Chen, R. K. Dumas, A. Eklund, P. K. Muduli, A. Houshang, A. A. Awad, P. Dürrenfeld, B. G. Malm, A. Rusu, and J. Åkerman, “Spin-torque and spin-Hall nano-oscillators,” Proc. IEEE 104, 1919–1945 (2016).
87. V. P. Amin, P. M. Haney, and M. D. Stiles, “Interfacial spin–orbit torques,” J. Appl. Phys. 128, 151101 (2020).
88. D. MacNeill, G. M. Stiehl, M. H. D. Guimaraes, R. A. Buhrman, J. Park, and D. C. Ralph, “Control of spin–orbit torques through crystal symmetry in WTe2/ferromagnet bilayers,” Nat. Phys. 13, 300–305 (2017).
89. T. Taniguchi, J. Grollier, and M. D. Stiles, “Spin-transfer torques generated by the anomalous Hall effect and anisotropic magnetoresistance,” Phys. Rev. Appl. 3, 044001 (2015).
90. S.-h. C. Baek, V. P. Amin, Y.-W. Oh, G. Go, S.-J. Lee, G.-H. Lee, K.-J. Kim, M. D. Stiles, B.-G. Park, and K.-J. Lee, “Spin currents and spin–orbit torques in ferromagnetic trilayers,” Nat. Mater. 17, 509–513 (2018).
91. S. Hu, D.-F. Shao, H. Yang, M. Tang, Y. Yang, W. Fan, S. Zhou, E. Y. Tsymbal, and X. Qiu, “Efficient field-free perpendicular magnetization switching by a magnetic spin Hall effect,” arXiv:2103.09011 (2021).
92. F. Hellman, A. Hoffmann, Y. Tserkovnyak, G. S. D. Beach, E. E. Fullerton, C. Leighton, A. H. MacDonald, D. C. Ralph, D. A. Arena, H. A. Dürr, P. Fischer, J. Grollier, J. P. Heremans, T. Jungwirth, A. V. Kimel, B. Koopmans, I. N. Krivorotov, S. J. May, A. K. Petford-Long, J. M. Rondinelli, N. Samarth, I. K. Schuller, A. N. Slavin, M. D. Stiles, O. Tchernyshyov, A. Thiaville, and B. L. Zink, “Interface-induced phenomena in magnetism,” Rev. Mod. Phys. 89, 025006 (2017).
93. K. Ando, S. Takahashi, K. Harii, K. Sasage, J. Ieda, S. Maekawa, and E. Saitoh, “Electric manipulation of spin relaxation using the spin Hall effect,” Phys. Rev. Lett. 101, 036601 (2008).
94. L. Liu, O. J. Lee, T. J. Gudmundsen, D. C. Ralph, and R. A. Buhrman, “Current-induced switching of perpendicularly magnetized magnetic layers using spin torque from the spin Hall effect,” Phys. Rev. Lett. 109, 096602 (2012).
95. A. Manchon and S. Zhang, “Theory of nonequilibrium intrinsic spin torque in a single nanomagnet,” Phys. Rev. B 78, 212405 (2008).
96. I. M. Miron, K. Garello, G. Gaudin, P.-J. Zermatten, M. V. Costache, S. Auffret, S. Bandiera, B. Rodmacq, A. Schuhl, and P. Gambardella, “Perpendicular switching of a single ferromagnetic layer induced by in-plane current injection,” Nature 476, 189–193 (2011).
97. Q. Shao, P. Li, L. Liu, H. Yang, S. Fukami, A. Razavi, H. Wu, K. Wang, F. Freimuth, Y. Mokrousov, M. D. Stiles, S. Emori, A. Hoffmann, J. Åkerman, K. Roy, J.-P. Wang, S.-H. Yang, K. Garello, and W. Zhang, “Roadmap of spin–orbit torques,” IEEE Trans. Magn. 57, 800439 (2021).
98. T. Seki, Y. Hasegawa, S. Mitani, S. Takahashi, H. Imamura, S. Maekawa, J. Nitta, and K. Takanashi, “Giant spin Hall effect in perpendicularly spin-polarized FePt/Au devices,” Nat. Mater. 7, 125–129 (2008).
99. G. Mihajlović, J. E. Pearson, M. A. Garcia, S. D. Bader, and A. Hoffmann, “Negative nonlocal resistance in mesoscopic gold Hall bars: Absence of the giant spin Hall effect,” Phys. Rev. Lett. 103, 166601 (2009).
100. C. Chen, D. Tian, H. Zhou, D. Hou, and X. Jin, “Generation and detection of pure spin current in an H-shaped structure of a single metal,” Phys. Rev. Lett. 122, 016804 (2019).
101. M. S. El Hadri, J. Gibbons, Y. Xiao, H. Ren, H. Arava, Y. Liu, Z. Liu, A. Petford-Long, A. Hoffmann, and E. E. Fullerton, “Large spin-to-charge conversion in ultrathin gold-silicon multilayers,” Phys. Rev. Mater. 5, 064410 (2021).
102. J. Gibbons, T. Dohi, V. P. Amin, F. Xue, H. Ren, J.-W. Xu, H. Arava, S. Shim, H. Saglam, Y. Liu, J. E. Pearson, N. Mason, A. K. Petford-Long, P. M. Haney, M. D. Stiles, E. E. Fullerton, A. D. Kent, S. Fukami, and A. Hoffmann, “Large exotic spin torques in antiferromagnetic iron rhodium,” arXiv:2109.11108 (2021).
103. C. Safranski, J. Z. Sun, J.-W. Xu, and A. D. Kent, “Planar Hall driven torque in a ferromagnet/nonmagnet/ferromagnet system,” Phys. Rev. Lett. 124, 197204 (2020).
104. Y. Liu, Y. Liu, M. Chen, S. Srivastava, P. He, K. L. Teo, T. Phung, S.-H. Yang, and H. Yang, “Current-induced out-of-plane spin accumulation on the (001) surface of the IrMn3 antiferromagnet,” Phys. Rev. Appl. 12, 064046 (2019).
105. J. Holanda, H. Saglam, V. Karakas, Z. Zang, Y. Li, R. Divan, Y. Liu, O. Ozatay, V. Novosad, J. E. Pearson, and A. Hoffmann, “Magnetic damping modulation in IrMn3/Ni80Fe20 via the magnetic spin Hall effect,” Phys. Rev. Lett. 124, 087204 (2020).
106. T. Nan, C. X. Quintela, J. Irwin, G. Gurung, D.-F. Shao, J. Gibbons, N. Campbell, K. Song, S.-Y. Choi, L. Guo, R. D. Johnson, P. Manuel, R. V. Chopdekar, I. Hallsteinsen, T. Tybell, P. J. Ryan, J.-W. Kim, Y. Choi, P. G. Radaelli, D. C. Ralph, E. Y. Tsymbal, M. S. Rzchowski, and C. B. Eom, “Controlling spin current polarization through non-collinear antiferromagnetism,” Nat. Commun. 11, 4671 (2020).
107. Y. Mokrousov, H. Zhang, F. Freimuth, B. Zimmermann, N. H. Long, J. Weischenberg, I. Souza, P. Mavropoulos, and S. Blügel, “Anisotropy of spin relaxation and transverse transport in metals,” J. Phys.: Condens. Matter 25, 163201 (2013).
108. F. Freimuth, S. Blügel, and Y. Mokrousov, “Anisotropic spin Hall effect from first principles,” Phys. Rev. Lett. 105, 246602 (2010).
109. L. Zhu, D. C. Ralph, and R. A. Buhrman, “Maximizing spin-orbit torque generated by the spin Hall effect of Pt,” Appl. Phys. Rev. 8, 031308 (2021).
110. Y. Ma, S. Miura, H. Honjo, S. Ikeda, T. Hanyu, H. Ohno, and T. Endoh, “A 600-μW ultra-low-power associative processor for image pattern recognition employing magnetic tunnel junction-based nonvolatile memories with autonomic intelligent power-gating scheme,” Jpn. J. Appl. Phys. 55, 04EF15 (2016).
111. T. Greenberg-Toledo, B. Perach, I. Hubara, D. Soudry, and S. Kvatinsky, “Training of quantized deep neural networks using a magnetic tunnel junction-based synapse,” Semicond. Sci. Technol. 36, 114003 (2021).
112. S. Jung, H. Lee, S. Myung, H. Kim, S. K. Yoon, S.-W. Kwon, Y. Ju, M. Kim, W. Yi, S. Han, B. Kwon, B. Seo, K. Lee, G.-H. Koh, K. Lee, Y. Song, C. Choi, D. Ham, and S. J. Kim, “A crossbar array of magnetoresistive memory devices for in-memory computing,” Nature 601, 211–216 (2022).
113. J. M. Goodwill, N. Prasad, B. D. Hoskins, M. W. Daniels, A. Madhavan, L. Wan, T. S. Santos, M. Tran, J. A. Katine, P. M. Braganca, M. D. Stiles, and J. J. McClelland, “Implementation of a binary neural network on a passive array of magnetic tunnel junctions,” arXiv:2112.09159 (2021).
114. A. F. Vincent, J. Larroque, N. Locatelli, N. B. Romdhane, O. Bichler, C. Gamrat, W. S. Zhao, J.-O. Klein, S. Galdin-Retailleau, and D. Querlioz, “Spin-transfer torque magnetic memory as a stochastic memristive synapse for neuromorphic systems,” IEEE Trans. Biomed. Circuits Syst. 9, 166–174 (2015).
115. S. Lequeux, J. Sampaio, V. Cros, K. Yakushiji, A. Fukushima, R. Matsumoto, H. Kubota, S. Yuasa, and J. Grollier, “A magnetic synapse: Multilevel spin-torque memristor with perpendicular anisotropy,” Sci. Rep. 6, 31510 (2016).
116. M. Sharad, C. Augustine, G. Panagopoulos, and K. Roy, “Spin-based neuron model with domain-wall magnets as synapse,” IEEE Trans. Nanotechnol. 11, 843–853 (2012).
117. A. Kurenkov, S. DuttaGupta, C. Zhang, S. Fukami, Y. Horio, and H. Ohno, “Artificial neuron and synapse realized in an antiferromagnet/ferromagnet heterostructure using dynamics of spin–orbit torque switching,” Adv. Mater. 31, 1900636 (2019).
118. X. Zhang, W. Cai, M. Wang, B. Pan, K. Cao, M. Guo, T. Zhang, H. Cheng, S. Li, D. Zhu, L. Wang, F. Shi, J. Du, and W. Zhao, “Spin-torque memristors based on perpendicular magnetic tunnel junctions for neuromorphic computing,” Adv. Sci. 8, 2004645 (2021).
119. K. M. Song, J.-S. Jeong, B. Pan, X. Zhang, J. Xia, S. Cha, T.-E. Park, K. Kim, S. Finizio, J. Raabe, J. Chang, Y. Zhou, W. Zhao, W. Kang, H. Ju, and S. Woo, “Skyrmion-based artificial synapses for neuromorphic computing,” Nat. Electron. 3, 148–155 (2020).
120. R. Chen, C. Li, Y. Li, J. J. Miles, G. Indiveri, S. Furber, V. F. Pavlidis, and C. Moutafis, “Nanoscale room-temperature multilayer skyrmionic synapse for deep spiking neural networks,” Phys. Rev. Appl. 14, 014096 (2020).
121. R. Khymyn, I. Lisenkov, J. Voorheis, O. Sulymenko, O. Prokopenko, V. Tiberkevich, J. Akerman, and A. Slavin, “Ultra-fast artificial neuron: Generation of picosecond-duration spikes in a current-driven antiferromagnetic auto-oscillator,” Sci. Rep. 8, 15727 (2018).
122. D. Pinna, G. Bourianoff, and K. Everschor-Sitte, “Reservoir computing with random skyrmion textures,” Phys. Rev. Appl. 14, 054020 (2020).
123. R. V. Ababei, M. O. Ellis, I. T. Vidamour, D. S. Devadasan, D. A. Allwood, E. Vasilaki, and T. J. Hayward, “Neuromorphic computation with a single magnetic domain wall,” Sci. Rep. 11, 15587 (2021).
124. S. Watt, M. Kostylev, A. B. Ustinov, and B. A. Kalinikos, “Implementing a magnonic reservoir computer model based on time-delay multiplexing,” Phys. Rev. Appl. 15, 064060 (2021).
125. S. Tsunegi, T. Taniguchi, K. Nakajima, S. Miwa, K. Yakushiji, A. Fukushima, S. Yuasa, and H. Kubota, “Physical reservoir computing based on spin torque oscillator with forced synchronization,” Appl. Phys. Lett. 114, 164101 (2019).
126. S. Tsunegi, T. Taniguchi, S. Miwa, K. Nakajima, K. Yakushiji, A. Fukushima, S. Yuasa, and H. Kubota, “Evaluation of memory capacity of spin torque oscillator for recurrent neural networks,” Jpn. J. Appl. Phys. 57, 120307 (2018).
127. N. Prasad, P. Mukim, A. Madhavan, and M. D. Stiles, “Associative memories using complex-valued Hopfield networks based on spin-torque oscillator arrays,” arXiv:2112.03358 (2021).
128. I. Volvach, “Micromagnetic modeling and analysis of magnetic tunnel junctions for spintronics applications,” Ph.D. thesis, University of California, San Diego, 2021.
129. H. Fulara, M. Zahedinejad, R. Khymyn, M. Dvornik, S. Fukami, S. Kanai, H. Ohno, and J. Åkerman, “Giant voltage-controlled modulation of spin Hall nano-oscillator damping,” Nat. Commun. 11, 4006 (2020).
130. R. Khymyn, I. Lisenkov, V. Tiberkevich, B. A. Ivanov, and A. Slavin, “Antiferromagnetic THz-frequency Josephson-like oscillator driven by spin current,” Sci. Rep. 7, 43705 (2017).
131. D. Marković, M. W. Daniels, P. Sethi, A. D. Kent, M. D. Stiles, and J. Grollier, “Easy-plane spin Hall nano-oscillators as spiking neurons for neuromorphic computing,” Phys. Rev. B 105, 014411 (2022).
132. N. Leroux, D. Marković, E. Martin, T. Petrisor, D. Querlioz, A. Mizrahi, and J. Grollier, “Radio-frequency multiply-and-accumulate operations with spintronic synapses,” Phys. Rev. Appl. 15, 034067 (2021).
133. N. Leroux, A. Mizrahi, D. Marković, D. Sanz-Hernández, J. Trastoy, P. Bortolotti, L. Martins, A. Jenkins, R. Ferreira, and J. Grollier, “Hardware realization of the multiply and accumulate operation on radio-frequency signals with magnetic tunnel junctions,” Neuromorphic Comput. Eng. 1, 011001 (2021).
134. A. A. Tulapurkar, Y. Suzuki, A. Fukushima, H. Kubota, H. Maehara, K. Tsunekawa, D. D. Djayaprawira, N. Watanabe, and S. Yuasa, “Spin-torque diode effect in magnetic tunnel junctions,” Nature 438, 339–342 (2005).
135. J. C. Sankey, P. M. Braganca, A. G. F. Garcia, I. N. Krivorotov, R. A. Buhrman, and D. C. Ralph, “Spin-transfer-driven ferromagnetic resonance of individual nanomagnets,” Phys. Rev. Lett. 96, 227601 (2006).
136. D. Kan, R. Aso, R. Sato, M. Haruta, H. Kurata, and Y. Shimakawa, “Tuning magnetic anisotropy by interfacially engineering the oxygen coordination environment in a transition metal oxide,” Nat. Mater. 15, 432–437 (2016).
137. H. Boschker, T. Harada, T. Asaba, R. Ashoori, A. V. Boris, H. Hilgenkamp, C. R. Hughes, M. E. Holtz, L. Li, D. A. Muller, H. Nair, P. Reith, X. Renshaw Wang, D. G. Schlom, A. Soukiassian, and J. Mannhart, “Ferromagnetism and conductivity in atomically thin SrRuO3,” Phys. Rev. X 9, 011027 (2019).
138. J. Matsuno, K. Ihara, S. Yamamura, H. Wadati, K. Ishii, V. V. Shankar, H.-Y. Kee, and H. Takagi, “Engineering a spin-orbital magnetic insulator by tailoring superlattices,” Phys. Rev. Lett. 114, 247209 (2015).
139. J. Matsuno, N. Ogawa, K. Yasuda, F. Kagawa, W. Koshibae, N. Nagaosa, Y. Tokura, and M. Kawasaki, “Interface-driven topological Hall effect in SrRuO3-SrIrO3 bilayer,” Sci. Adv. 2, e1600304 (2016).
140. Y. Ohuchi, J. Matsuno, N. Ogawa, Y. Kozuka, M. Uchida, Y. Tokura, and M. Kawasaki, “Electric-field control of anomalous and topological Hall effects in oxide bilayer thin films,” Nat. Commun. 9, 213 (2018).
141. L. Hao, D. Meyers, H. Suwa, J. Yang, C. Frederick, T. R. Dasa, G. Fabbris, L. Horak, D. Kriegner, Y. Choi, J.-W. Kim, D. Haskel, P. J. Ryan, H. Xu, C. D. Batista, M. P. M. Dean, and J. Liu, “Giant magnetic response of a two-dimensional antiferromagnet,” Nat. Phys. 14, 806–810 (2018).
142. A. Safin, V. Puliafito, M. Carpentieri, G. Finocchio, S. Nikitov, P. Stremoukhov, A. Kirilyuk, V. Tyberkevych, and A. Slavin, “Electrically tunable detector of THz-frequency signals based on an antiferromagnet,” Appl. Phys. Lett. 117, 222411 (2020).
143. G. Finocchio, R. Tomasello, B. Fang, A. Giordano, V. Puliafito, M. Carpentieri, and Z. Zeng, “Perspectives on spintronic diodes,” Appl. Phys. Lett. 118, 160502 (2021).
144. A. Hirohata, K. Yamada, Y. Nakatani, I.-L. Prejbeanu, B. Diény, P. Pirro, and B. Hillebrands, “Review on spintronics: Principles and device applications,” J. Magn. Magn. Mater. 509, 166711 (2020).
145. W. Yi, K. K. Tsang, S. K. Lam, X. Bai, J. A. Crowell, and E. A. Flores, “Biological plausibility and stochasticity in scalable VO2 active memristor neurons,” Nat. Commun. 9, 4661 (2018).
146. H.-T. Zhang, P. Panda, J. Lin, Y. Kalcheim, K. Wang, J. W. Freeland, D. D. Fong, S. Priya, I. K. Schuller, S. K. R. S. Sankaranarayanan, K. Roy, and S. Ramanathan, “Organismic materials for beyond von Neumann machines,” Appl. Phys. Rev. 7, 011309 (2020).
147. J. Núñez, M. J. Avedillo, M. Jiménez, J. M. Quintana, A. Todri-Sanial, E. Corti, S. Karg, and B. Linares-Barranco, “Oscillatory neural networks using VO2 based phase encoded logic,” Front. Neurosci. 15, 655823 (2021).
148. S. Oh, Y. Shi, J. Del Valle, P. Salev, Y. Lu, Z. Huang, Y. Kalcheim, I. K. Schuller, and D. Kuzum, “Energy-efficient Mott activation neuron for full-hardware implementation of neural networks,” Nat. Nanotechnol. 16, 680–687 (2021).
149. A. F. Agarap, “Deep learning using rectified linear units (ReLU),” arXiv:1803.08375 (2018).
150. M. Jerry, A. Parihar, B. Grisafe, A. Raychowdhury, and S. Datta, “Ultra-low power probabilistic IMT neurons for stochastic sampling machines,” in 2017 Symposium on VLSI Technology (IEEE, 2017), pp. T186–T187.
151. R. Waser and M. Aono, “Nanoionics-based resistive switching memories,” Nat. Mater. 6, 833–840 (2007).
152. H.-S. P. Wong, H.-Y. Lee, S. Yu, Y.-S. Chen, Y. Wu, P.-S. Chen, B. Lee, F. T. Chen, and M.-J. Tsai, “Metal–oxide RRAM,” Proc. IEEE 100, 1951–1970 (2012).
153. Y. S. Chen, H. Y. Lee, P. S. Chen, P. Y. Gu, C. W. Chen, W. P. Lin, W. H. Liu, Y. Y. Hsu, S. S. Sheu, P. C. Chiang, W. S. Chen, F. T. Chen, C. H. Lien, and M.-J. Tsai, “Highly scalable hafnium oxide memory with improvements of resistive distribution and read disturb immunity,” in 2009 IEEE International Electron Devices Meeting (IEDM) (IEEE, 2009), pp. 1–4.
154. H. Kim, M. Mahmoodi, H. Nili, and D. B. Strukov, “4K-memristor analog-grade passive crossbar circuit,” Nat. Commun. 12, 5198 (2021).
155. M.-H. Lee, Y. Kalcheim, J. del Valle, and I. K. Schuller, “Controlling metal–insulator transitions in vanadium oxide thin films by modifying oxygen stoichiometry,” ACS Appl. Mater. Interfaces 13, 887–896 (2021).
156. P. Stoliar, J. Tranchant, B. Corraze, E. Janod, M.-P. Besland, F. Tesler, M. Rozenberg, and L. Cario, “A leaky-integrate-and-fire neuron analog realized with a Mott insulator,” Adv. Funct. Mater. 27, 1604740 (2017).
157. J. del Valle, J. G. Ramírez, M. J. Rozenberg, and I. K. Schuller, “Challenges in materials and devices for resistive-switching-based neuromorphic computing,” J. Appl. Phys. 124, 211101 (2018).
158. R. Rocco, J. del Valle, H. Navarro, P. Salev, I. K. Schuller, and M. Rozenberg, “Exponential escape rate of filamentary incubation in Mott spiking neurons,” Phys. Rev. Appl. 17, 024028 (2022).
159. J. Lin, S. Guha, and S. Ramanathan, “Vanadium dioxide circuits emulate neurological disorders,” Front. Neurosci. 12, 856 (2018).
160. M. J. Rozenberg, O. Schneegans, and P. Stoliar, “An ultra-compact leaky-integrate-and-fire model for building spiking neural networks,” Sci. Rep. 9, 11123 (2019).
161. P. Stoliar, O. Schneegans, and M. J. Rozenberg, “A functional spiking neural network of ultra compact neurons,” Front. Neurosci. 15, 635098 (2021).
162. P. Stoliar, O. Schneegans, and M. J. Rozenberg, “Implementation of a minimal recurrent spiking neural network in a solid-state device,” Phys. Rev. Appl. 16, 034030 (2021).
163. C. Adda, M.-H. Lee, Y. Kalcheim, P. Salev, R. Rocco, N. M. Vargas, N. Ghazikhanian, C.-P. Li, G. Albright, M. Rozenberg, and I. K. Schuller, “Direct observation of the electrically triggered insulator-metal transition in V3O5 far below the transition temperature,” Phys. Rev. X 12, 011025 (2022).
164. A. T. Winfree, “Biological rhythms and the behavior of populations of coupled oscillators,” J. Theor. Biol. 16, 15–42 (1967).
165. H. Yu, A. N. M. N. Islam, S. Mondal, A. Sengupta, and S. Ramanathan, “Switching dynamics in vanadium dioxide-based stochastic thermal neurons,” IEEE Trans. Electron Devices 69, 3135 (2022).
166. S. A. Cybart, S. M. Anton, S. M. Wu, J. Clarke, and R. C. Dynes, “Very large scale integration of nanopatterned YBa2Cu3O7−δ Josephson junctions in a two-dimensional array,” Nano Lett. 9, 3581–3585 (2009).
167. M. Tinkham, Introduction to Superconductivity (Courier Corporation, 2004).
168. T. A. Fulton, R. C. Dynes, and P. W. Anderson, “The flux shuttle—A Josephson junction shift register employing single flux quanta,” Proc. IEEE 61, 28–35 (1973).
169. K. K. Likharev and V. K. Semenov, “RSFQ logic/memory family: A new Josephson-junction technology for sub-terahertz-clock-frequency digital systems,” IEEE Trans. Appl. Supercond. 1, 3–28 (1991).
170. U. S. Goteti and R. C. Dynes, “Superconducting neural networks with disordered Josephson junction array synaptic networks and leaky integrate-and-fire loop neurons,” J. Appl. Phys. 129, 073901 (2021).
171. U. S. Goteti, I. A. Zaluzhnyy, S. Ramanathan, R. C. Dynes, and A. Frano, “Low-temperature emergent neuromorphic networks with correlated oxide devices,” Proc. Natl. Acad. Sci. U. S. A. 118, e2103934118 (2021).
172. J. Shi, Y. Zhou, and S. Ramanathan, “Colossal resistance switching and band gap modulation in a perovskite nickelate by electron doping,” Nat. Commun. 5, 4860 (2014).
173. M. N. Baibich, J. M. Broto, A. Fert, F. N. Van Dau, F. Petroff, P. Etienne, G. Creuzet, A. Friederich, and J. Chazelas, “Giant magnetoresistance of (001) Fe/(001) Cr magnetic superlattices,” Phys. Rev. Lett. 61, 2472 (1988).
174. G. Binasch, P. Grünberg, F. Saurenbach, and W. Zinn, “Enhanced magnetoresistance in layered magnetic structures with antiferromagnetic interlayer exchange,” Phys. Rev. B 39, 4828 (1989).
175. S. S. P. Parkin, C. Kaiser, A. Panchula, P. M. Rice, B. Hughes, M. Samant, and S.-H. Yang, “Giant tunnelling magnetoresistance at room temperature with MgO (100) tunnel barriers,” Nat. Mater. 3, 862–867 (2004).
176. S. Yuasa, T. Nagahama, A. Fukushima, Y. Suzuki, and K. Ando, “Giant room-temperature magnetoresistance in single-crystal Fe/MgO/Fe magnetic tunnel junctions,” Nat. Mater. 3, 868–871 (2004).
177. R. F. Wang, C. Nisoli, R. S. Freitas, J. Li, W. McConville, B. J. Cooley, M. S. Lund, N. Samarth, C. Leighton, V. H. Crespi, and P. Schiffer, “Artificial ‘spin ice’ in a geometrically frustrated lattice of nanoscale ferromagnetic islands,” Nature 439, 303–306 (2006).
178. C. Nisoli, R. Moessner, and P. Schiffer, “Colloquium: Artificial spin ice: Designing and imaging magnetic frustration,” Rev. Mod. Phys. 85, 1473 (2013).
179. J. J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities,” Proc. Natl. Acad. Sci. U. S. A. 79, 2554–2558 (1982).
180. G. E. Hinton and T. J. Sejnowski, “Learning and relearning in Boltzmann machines,” in Parallel Distributed Processing, Vol. 1, edited by D. Rumelhart and J. McClelland (MIT Press, Cambridge, 1986), Chap. 7, pp. 282–317.
181. D. Sanz-Hernández, M. Massouras, N. Reyren, N. Rougemaille, V. Schánilec, K. Bouzehouane, M. Hehn, B. Canals, D. Querlioz, J. Grollier, F. Montaigne, and D. Lacour, “Tunable stochasticity in an artificial spin network,” Adv. Mater. 33, 2008135 (2021).
182. T. Kendziorczyk, S. O. Demokritov, and T. Kuhn, “Spin-wave-mediated mutual synchronization of spin-torque nano-oscillators: A micromagnetic study of multistable phase locking,” Phys. Rev. B 90, 054414 (2014).
183. A. N. Slavin and V. S. Tiberkevich, “Theory of mutual phase locking of spin-torque nanosized oscillators,” Phys. Rev. B 74, 104401 (2006).
184. A. Slavin and V. Tiberkevich, “Nonlinear auto-oscillator theory of microwave generation by spin-polarized current,” IEEE Trans. Magn. 45, 1875–1918 (2009).
185. V. Nair and G. E. Hinton, “Rectified linear units improve restricted Boltzmann machines,” in International Conference on Machine Learning, 2010.
186. G.-q. Bi and M.-m. Poo, “Synaptic modification by correlated activity: Hebb’s postulate revisited,” Annu. Rev. Neurosci. 24, 139–166 (2001).
187. R. Matsumoto, S. Lequeux, H. Imamura, and J. Grollier, “Chaos and relaxation oscillations in spin-torque windmill spiking oscillators,” Phys. Rev. Appl. 11, 044093 (2019).
188. J. C. Slonczewski, “Current-driven excitation of magnetic multilayers,” J. Magn. Magn. Mater. 159, L1–L7 (1996).
189. S. Zhang and Y. Tserkovnyak, “Antiferromagnet-based neuromorphics using dynamics of topological charges,” Phys. Rev. Lett. 125, 207202 (2020).
190. Y. Liu, I. Barsukov, Y. Barlas, I. N. Krivorotov, and R. K. Lake, “Synthetic antiferromagnet-based spin Josephson oscillator,” Appl. Phys. Lett. 116, 132409 (2020).
191. C. H. Chen, A. E. White, K. T. Short, R. C. Dynes, J. M. Poate, D. C. Jacobson, P. M. Mankiewich, W. J. Skocpol, and R. E. Howard, “Ion beam induced damage and superlattice formation in epitaxial YBa2Cu3O7−δ thin films,” Appl. Phys. Lett. 54, 1178 (1989).
192. W. Lang and J. D. Pedarnig, “Ion irradiation of high-temperature superconductors and its application for nanopatterning,” in Nanoscience and Engineering in Superconductivity (Springer, Berlin, Heidelberg, 2010), pp. 81–104.
193. A. E. White, K. T. Short, D. C. Jacobson, J. M. Poate, R. C. Dynes, P. M. Mankiewich, W. J. Skocpol, R. E. Howard, M. Anzlowar, K. W. Baldwin, A. F. J. Levi, J. R. Kwo, T. Hsieh, and M. Hong, “Ion-beam-induced destruction of superconducting phase coherence in YBa2Cu3O7−δ,” Phys. Rev. B 37, 3755 (1988).
194. Y. Naitou and S. Ogawa, “Anderson localization of graphene by helium ion irradiation,” Appl. Phys. Lett. 108, 171605 (2016).
195. M. G. Stanford, P. R. Pudasaini, A. Belianinov, N. Cross, J. H. Noh, M. R. Koehler, D. G. Mandrus, G. Duscher, A. J. Rondinone, I. N. Ivanov, T. Z. Ward, and P. D. Rack, “Focused helium-ion beam irradiation effects on electrical transport properties of few-layer WSe2: Enabling nanoscale direct write homo-junctions,” Sci. Rep. 6, 27276 (2016).
196. S. Das and A. Frano (unpublished) (2022).
197. Topology in Magnetism, edited by J. Zang, V. Cros, and A. Hoffmann (Springer, Germany, 2018), Vol. 192.
198. H. Arava, F. Barrows, M. D. Stiles, and A. K. Petford-Long, “Topological control of magnetic textures,” Phys. Rev. B 103, L060407 (2021).
199. D. G. Schlom, L.-Q. Chen, C.-B. Eom, K. M. Rabe, S. K. Streiffer, and J.-M. Triscone, “Strain tuning of ferroelectric thin films,” Annu. Rev. Mater. Res. 37, 589–626 (2007).
200. J. H. Ngai, F. J. Walker, and C. H. Ahn, “Correlated oxide physics and electronics,” Annu. Rev. Mater. Res. 44, 1–17 (2014).
201. Y. Kalcheim, A. Camjayi, J. Del Valle, P. Salev, M. Rozenberg, and I. K. Schuller, “Non-thermal resistive switching in Mott insulator nanowires,” Nat. Commun. 11, 2985 (2020).
202. F. Tesler, C. Adda, J. Tranchant, B. Corraze, E. Janod, L. Cario, P. Stoliar, and M. Rozenberg, “Relaxation of a spiking Mott artificial neuron,” Phys. Rev. Appl. 10, 054001 (2018).
203. Y. Shi, A. E. Duwel, D. M. Callahan, Y. Sun, F. A. Hong, H. Padmanabhan, V. Gopalan, R. Engel-Herbert, S. Ramanathan, and L.-Q. Chen, “Dynamics of voltage-driven oscillating insulator-metal transitions,” Phys. Rev. B 104, 064308 (2021).
204. A. G. Shabalin, J. del Valle, N. Hua, M. J. Cherukara, M. V. Holt, I. K. Schuller, and O. G. Shpyrko, “Nanoscale imaging and control of volatile and non-volatile resistive switching in VO2,” Small 16, 2005439 (2020).
205. P. Miao, J. Wu, Y. Du, Y. Sun, and P. Xu, “Phase transition induced Raman enhancement on vanadium dioxide (VO2) nanosheets,” J. Mater. Chem. C 6, 10855–10860 (2018).
206. E. Abreu, S. N. Gilbert Corder, S. J. Yun, S. Wang, J. G. Ramírez, K. West, J. Zhang, S. Kittiwatanakul, I. K. Schuller, J. Lu, S. A. Wolf, H.-T. Kim, M. Liu, and R. D. Averitt, “Ultrafast electron-lattice coupling dynamics in VO2 and V2O3 thin films,” Phys. Rev. B 96, 094309 (2017).
207. M. W. Daniels, A. Madhavan, P. Talatchian, A. Mizrahi, and M. D. Stiles, “Energy-efficient stochastic computing with superparamagnetic tunnel junctions,” Phys. Rev. Appl. 13, 034016 (2020).
208. N. H. Weste and D. Harris, CMOS VLSI Design: A Circuits and Systems Perspective, 4th ed. (Addison-Wesley Publishing Company, 2010).
209. M. U. Ashraf, F. A. Eassa, A. Ahmad, and A. Algarni, “Empirical investigation: Performance and power-consumption based dual-level model for exascale computing systems,” IET Software 14, 319–327 (2020).
210. D. S. Holmes, A. L. Ripple, and M. A. Manheimer, “Energy-efficient superconducting computing—Power budgets and requirements,” IEEE Trans. Appl. Supercond. 23, 1701610 (2013).
211. A. N. McCaughan, V. B. Verma, S. M. Buckley, J. P. Allmaras, A. G. Kozorezov, A. N. Tait, S. W. Nam, and J. M. Shainline, “A superconducting thermal switch with ultrahigh impedance for interfacing superconductors to semiconductors,” Nat. Electron. 2, 451–456 (2019).
212. H. Wu, J. Zhou, C. Lan, Y. Guo, and K. Bi, “Microwave memristive-like nonlinearity in a dielectric metamaterial,” Sci. Rep. 4, 5499 (2014).
213. W. Zhang, R. Mazzarello, M. Wuttig, and E. Ma, “Designing crystallization in phase-change materials for universal memory and neuro-inspired computing,” Nat. Rev. Mater. 4, 150–168 (2019).
214. J. Feldmann, M. Stegmaier, N. Gruhler, C. Ríos, H. Bhaskaran, C. D. Wright, and W. H. P. Pernice, “Calculating with light using a chip-scale all-optical abacus,” Nat. Commun. 8, 1256 (2017).
215. K. Kravtsov, M. P. Fok, P. R. Prucnal, and D. Rosenbluth, “Ultrafast all-optical implementation of a leaky integrate-and-fire neuron,” Opt. Express 19, 2133–2147 (2011).
216. D. Marković and J. Grollier, “Quantum neuromorphic computing,” Appl. Phys. Lett. 117, 150501 (2020).
217. J. E. Niven, “Neuronal energy consumption: Biophysics, efficiency and evolution,” Curr. Opin. Neurobiol. 41, 129–135 (2016).
218. T. M. Bartol, Jr., C. Bromer, J. Kinney, M. A. Chirillo, J. N. Bourne, K. M. Harris, and T. J. Sejnowski, “Nanoconnectomic upper bound on the variability of synaptic plasticity,” eLife 4, e10778 (2015).
219. J. Trastoy and I. K. Schuller, “Criticality in the brain: Evidence and implications for neuromorphic computing,” ACS Chem. Neurosci. 9, 1254–1258 (2018).
220. M. A. Hofman, “Evolution of the human brain: When bigger is better,” Front. Neuroanat. 8, 15 (2014).
221. S. Herculano-Houzel, “Neuronal scaling rules for primate brains: The primate advantage,” Prog. Brain Res. 195, 325–340 (2012).
222. K. J. Miller, L. B. Sorensen, J. G. Ojemann, and M. Den Nijs, “Power-law scaling in the brain surface electric potential,” PLoS Comput. Biol. 5, e1000609 (2009).
223. H. Navarro, J. del Valle, Y. Kalcheim, N. M. Vargas, C. Adda, M.-H. Lee, P. Lapa, A. Rivera-Calzada, I. A. Zaluzhnyy, E. Qiu, O. Shpyrko, M. Rozenberg, A. Frano, and I. K. Schuller, “A hybrid optoelectronic Mott insulator,” Appl. Phys. Lett. 118, 141901 (2021).
224. P. P. Freitas, R. Ferreira, and S. Cardoso, “Spintronic sensors,” Proc. IEEE 104, 1894–1918 (2016).
225. W. Zhang, B. Gao, J. Tang, P. Yao, S. Yu, M.-F. Chang, H.-J. Yoo, H. Qian, and H. Wu, “Neuro-inspired computing chips,” Nat. Electron. 3, 371–382 (2020).
226. M. Ernoult, J. Grollier, D. Querlioz, Y. Bengio, and B. Scellier, “Updates of equilibrium prop match gradients of backprop through time in an RNN with static input,” arXiv:1905.13633 (2019).
227. M. Payvand, M. E. Fouda, F. Kurdahi, A. Eltawil, and E. O. Neftci, “Error-triggered three-factor learning dynamics for crossbar arrays,” in 2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS) (IEEE, 2020), pp. 218–222.