This article provides a review of current development and challenges in brain-inspired computing with memristors. We review the mechanisms of various memristive devices that can mimic synaptic and neuronal functionalities and survey the progress of memristive spiking and artificial neural networks. Different architectures are compared, including spiking neural networks, fully connected artificial neural networks, convolutional neural networks, and Hopfield recurrent neural networks. Challenges and strategies for nanoelectronic brain-inspired computing systems, including device variations, training, and testing algorithms, are also discussed.

Brain-inspired computing aims to improve computing efficiency by emulating how the brain functions (see Fig. 1). This novel computing paradigm is becoming increasingly popular because of its numerous potential applications, such as autonomous driving, language translation, and even game playing. However, most traditional artificial neural networks (ANNs) still run on von Neumann computing architectures1,2 implemented with transistor-based digital circuits, yielding an energy consumption that is orders of magnitude higher than that of biological brains.3 

FIG. 1.

Schematic illustration of a brain-inspired computing system. The main functional units of the brain-inspired computing systems are synapses,23 dendrites,24 and neurons,23,24 which can be directly simulated using memristive devices on-chip. The artificial neuron soma comprises the membrane and the spike event generator. The dendrites are connected to synapses interfacing the neuron with other neurons. The key computational element is the membrane, which stores the membrane potential in memristive devices.


Novel circuit elements have therefore been proposed to reduce this disparity in energy efficiency4 (see Fig. 1). One of the most promising candidates is the memristor, a two-terminal electronic element that directly relates electrical charge to flux.5 The memristor was first conceptualized by L. O. Chua in 19716 and was linked to physical devices by Williams's team at Hewlett-Packard (HP) Labs in 2008.7 Memristors are particularly suitable for serving as synapses in brain-inspired computing architectures thanks to their intrinsic dynamics, analog behavior, nonvolatility, high speed, low power, high density, and excellent scalability.8 

In this review, we first survey the progress in understanding the working mechanisms of memristors. Efforts have been made to quantitatively describe the experimentally observed resistive switching, and multiphysics frameworks have been developed that incorporate thermal, chemical, and electrical effects. Such physical modeling can be abstracted into circuit models that offer reasonable accuracy while dramatically reducing complexity for circuit designers.

The main functional units of brain-inspired computing systems are synapses and neurons, which can be directly simulated using memristive synapses and neurons (see Fig. 1). The electrical-bias-history dependent resistance of memristors makes them capable of simulating the synaptic and neural dynamics of the brain, including various short/long-term synaptic plasticities and neuronal leaky integrate-and-fire behavior, which has led to compact and low-energy artificial synapses and neurons. Such devices, particularly artificial synapses, can be conveniently grouped into crossbars to implement weighted sums or vector-matrix multiplications on-chip.9 These memristive crossbars implement spiking neural networks (SNNs),10 fully connected ANNs,11 convolutional neural networks (CNNs),12,13 and Hopfield recurrent neural networks (RNNs),14,15 which can be extended to deep neural networks (DNNs)13,16 for a variety of applications including image recognition,17 sparse coding,18,19 temporal sequence classification,20 and reinforcement learning (RL).21,22
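The weighted-sum operation performed by a crossbar follows directly from Ohm's law and Kirchhoff's current law: each device conductance stores one weight, the input vector is applied as row voltages, and the column currents form the vector-matrix product. A minimal numerical sketch (the conductance and voltage values below are arbitrary illustrations, not measured data):

```python
import numpy as np

# Illustrative conductances (in siemens) of a 3x4 memristor crossbar;
# each G[i, j] stores one synaptic weight as a device conductance.
G = np.array([[1.0, 0.5, 0.2, 0.8],
              [0.3, 0.9, 0.4, 0.1],
              [0.7, 0.2, 0.6, 0.5]]) * 1e-6

# Input vector encoded as read voltages applied to the three rows (volts).
v = np.array([0.2, 0.1, 0.3])

# Ohm's law gives the per-device current G[i, j] * v[i]; Kirchhoff's
# current law sums these along each column, so the column currents
# equal the vector-matrix product v @ G, computed in one parallel step.
i_out = v @ G

print(i_out)  # column currents in amperes
```

Because all rows and columns operate in parallel, the multiplication completes in a single read step regardless of matrix size, which is the source of the efficiency gain over digital implementations.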

We provide our perspectives on the development of memristive devices, circuits, systems, and algorithms and on the advancement of memristor-based brain-inspired computing (see Fig. 1). The associated challenges, opportunities, and possible solutions are also discussed.

Memristive switching has been realized in different material systems based on different underlying physical mechanisms (see Fig. 2). In this section, we review the electrochemical metallization, valence change, thermochemical, and phase change mechanisms that drive memristive switching, together with the associated physical/compact modeling.

FIG. 2.

(a) TEM image of Ag filaments at LRS and (b) ruptured Ag filament at HRS in an ECM device based on the Ag/SiO2/Pt structure, (c) electrical measurement of forming, (d) rupture process of the conductive filament,25 (e) threshold switching behavior realized by ECM devices based on the Pt/SiOxNy:Ag/Pt structure, and (f) experimental demonstration of paired pulse depression (PPD) following paired pulse facilitation (PPF) in the diffusive SiOxNy:Ag memristor.26 


In 1971, L. O. Chua first theoretically formulated the concept of the memristor.6 Almost 40 years later, HP Labs experimentally realized the first memristor with a metal/insulator/metal (MIM) structure7 and provided empirical evidence to support the resistive switching model in the metal oxide system,33 establishing a link between theory and experiment. There are four main types of switching mechanisms in memristors: the electrochemical metallization mechanism (ECM),34–36 the valence change mechanism (VCM),37–39 the thermochemical mechanism (TCM),40,41 and the phase change mechanism (PCM).42–44 Memristors based on other mechanisms, such as Mott transition,15,45 photonic-induced switching,46,47 and ferroelectric transition,48,49 are also being investigated as emerging devices for brain-inspired computing systems.

1. Electrochemical metallization cells

Electrochemical metallization (ECM) based devices utilize the migration of metal ions to realize memristive switching. A typical ECM device has a MIM structure. The two metal electrodes are often called the anode and the cathode, according to their electrochemical activities. Active metals, in most cases Ag25 and Cu,50 are used as the anodes in ECM devices, while Pt, Au, TiN, ITO, and W are often used as the counter electrode.7,51,52 A forming process53 is usually needed to incorporate the cations into the solid electrolyte before stable resistive switching behavior can be obtained. During the forming and switching processes, redox reactions happen both at the metal/electrolyte interfaces54,55 and inside the electrolyte.25,56 If a positive voltage is applied to the anode, oxidation reactions take place at the anode interface, and counter reactions may happen simultaneously at the cathode interface to maintain charge neutrality.35 Following the electric field, these cations migrate toward the cathode and get reduced inside the electrolyte or at the cathode interface, leading to different filament growth modes.56 As long as voltage is applied across the ECM device, the redox reactions and migration will continue until a conductive filament is formed between the two electrodes, as shown in Fig. 2(a). The formation of the conductive filament switches the device from a high resistance state (HRS) to a low resistance state (LRS)57 [see Fig. 2(c)]. When an opposite voltage is applied, the redox processes and cation migration take place in reverse [see Fig. 2(b)], resulting in the rupture of the conductive filament and a resistance change from LRS to HRS [see Fig. 2(d)]. In ECM, the filament can sometimes be fully dissolved in the solid electrolyte during the RESET process, providing large on/off ratios.57,58 The growth direction and geometry of the conductive filament strongly depend on the characteristics of the electrolyte layer and the electrode material. 
Typically, various oxides,36,59 sulfides,60,61 selenides,62 amorphous silicon,63,64 and polymers39,65 are used as the insulating materials. In situ observations via TEM56,66 and SEM67,68 have been carried out to unveil the cation migration and filament growth/dissolution processes. In addition to nonvolatile behavior,69 or the so-called long-term plasticity, volatile or short-term plasticity is also observed in these devices. A typical example is the recently developed diffusive memristor,70 where the formed filament spontaneously breaks into discrete metal nanoclusters to minimize the interfacial energy, thus functioning as a selector device [see Fig. 2(e)] and mimicking the short-term plasticity71 of biological synapses [see Fig. 2(f)]. In general, ECM devices can be scaled down to the atomic scale,72 can operate under ultralow voltages,73,74 and offer a large dynamic range. However, they may suffer from limitations in weight precision and in weight update linearity and symmetry.

2. Valence change cells

The valence change mechanism (VCM) is responsible for another major type of memristor. The first memristor fabricated by HP was a VCM device with a Pt/TiO2/Pt structure. Based on the internal structure of the device during and after switching, VCM devices can be classified into filamentary and nonfilamentary types.

a. Filamentary type

Filamentary VCM devices also possess MIM structures. Similar to ECM cells, their resistive switching behavior can be attributed to the formation/dissolution of conductive filaments.75 However, the formation of filaments in VCM is often a result of the migration of oxygen vacancies or oxygen anions and the subsequent redox reactions, which usually lead to the formation of a suboxide phase with higher conductivity.28,76 As-deposited oxide films, especially those prepared by atomic layer deposition, are close to stoichiometry and have a low density of oxygen vacancies, so they are not capable of filament-based resistive switching as deposited.77 Hence, an electroforming process is required to form conductive filaments [see Fig. 3(e)], which is accompanied by the eruption of oxygen gas.55 When a reversed voltage is applied, the filaments can be dissolved [see Fig. 3(f)], returning the device to the HRS. Typical materials used for the oxide layer in filamentary type memristors are HfO2,78 TaOx,79 TiO2,80 WO3,81 SrTiO3,82 ZnO,27 and SiO2.83 Metals such as Pt, Au, Ir, W, and Ta are often used as the electrode materials in university labs.7,52 In situ observations of filament growth/dissolution in VCM have also been performed using TEM84 on devices based on ZnO,27 TaOx,84 and HfOx.85 To avoid the high-voltage forming process, which is not favorable for array-level neuromorphic applications, the materials and device structures of VCM cells have been optimized, for example, by introducing a bilayer structure with an oxygen deficient layer.77 Such a bilayer structure is also found to improve the gradual switching behavior that is desirable for neural network applications.86,87 Similar to ECM devices, VCM devices can be scaled down to a few nanometers,88 and integration in high-density 3D structures has been demonstrated.89 To date, VCM based memristors are commercially available: memory chips by Panasonic and Fujitsu based on VCM devices entered the market in 2016. However, similar to ECM devices, the fundamental filament formation and dissolution processes in VCM also lead to inevitable device variations, and the weight update linearity/symmetry also requires optimization for neuromorphic systems with online learning capability.

FIG. 3.

Operations, mechanisms, and ionic dynamics of filamentary devices. (a) Unipolar switching behavior. (b) In situ TEM images of the LRS and (c) the ruptured filament at the HRS after the RESET process in a TCM device based on the Pt/ZnO/Pt structure.27 (d) Bipolar switching behavior. (e) High-resolution TEM image of a Magnéli phase Ti4O7 nanofilament at the LRS. (f) Disconnected conical Ti4O7 structure at the HRS in a filamentary VCM device based on the Pt/TiO2/Pt structure.28 

b. Nonfilamentary type

A potential approach to decreasing the device-to-device (D2D) and cycle-to-cycle (C2C) variations is to avoid localized, filamentary resistive switching and to explore memristors based on nonfilamentary mechanisms. A typical example is the memristive device based on redox processes occurring at the electrode/oxide interfaces, which lead to varied thicknesses of the interfacial layer and thus different resistance states.90 Such redox processes take place uniformly at the interface, avoiding the stochastic filament formation process and ensuring good D2D and C2C uniformity. The varied thickness of the interfacial layer as a function of the applied signals has been directly observed,30 where a thicker interfacial layer switches the device from the LRS to the HRS [see Fig. 4(b)], and the inverse process accounts for the SET process, as shown in Fig. 4(a). Typical material systems used for nonfilamentary VCM devices are Ti/Pr0.7Ca0.3MnO3 (PCMO),91 ZrO2:Y (YSZ)/PCMO,92 and InGaZnO (IGZO)/α-IGZO.93 In addition to the improved uniformity, such nonfilamentary VCM based artificial synapses generally offer more incremental weight states. However, they still suffer from limited symmetry and linearity, which is disadvantageous for the online training of ANNs. Unlike filamentary devices, where the filament can be quickly heated by Joule heating to accelerate atomic motion for fast switching and then quickly cooled to freeze the moved atoms in place for long retention, nonfilamentary devices typically suffer from slow switching speed and poor retention.94 Therefore, one has to resort to new mechanisms to achieve high switching speed and long retention simultaneously, for example, by borrowing the concept of the battery in a three-terminal device.95 

FIG. 4.

(a) LRS and (b) HRS of a nonfilamentary type VCM device based on the TiN/PCMO/Pt structure. The interfacial layer between TiN and PCMO is α-TiOxNy, which forms and extends during the RESET process.29 (c) TEM images of the LRS and (d) the amorphous region at the HRS in a PCM device based on Ge2Sb2Te5.30 (e) Schematic of nucleation driven crystallization and (f) growth driven crystallization.31 (g) The crystallized states upon heating (marked by the red circles) are shown in TEM images. (h) AgInSbTe (AIST) exhibits no contrast, indicating a single-crystalline state.32 


3. Thermochemical memory cells

The thermochemical memory (TCM) cell has material systems and device structures akin to those of valence change memory cells. Here, however, the main driving force behind the stoichiometric changes and redox reactions is the thermal effect,41 which results in a variation of the local conductivity of the conducting filaments. As a result, the SET (potentiation) and RESET (depression) processes in TCM cells can be realized by voltage signals of the same polarity, that is, a unipolar operation [see Fig. 3(a)], in contrast to the bipolar operation [see Fig. 3(d)] of VCM and ECM devices. More specifically, the SET and RESET processes are caused by a competition between thermophoresis and diffusion, which move oxygen vacancies into and out of the conduction channel volume, respectively.96 The thermophoresis process is dominant when starting from the HRS, which increases the concentration of oxygen vacancies in the filament region and leads to potentiation,97 as shown in Fig. 3(b). On the contrary, the diffusion process becomes dominant in on-state devices due to the large concentration gradient of oxygen vacancies, and the filament dissolves upon the application of voltage signals of the same polarity [see Fig. 3(c)]. Thermochemical and valence change mechanisms commonly coexist in the same metal oxide based memristors, depending on the operation conditions, such as the current compliance.98,99 NiO is a typical material that demonstrates thermochemical reactions during resistive switching,100 while thermochemical reactions in TiO2,101 HfO2,101 Fe2O3,101 CoO,102 Al2O3,103 and SiOx104 have also been investigated.

4. Phase change cells

Phase change memory (PCM) is likely the most mature memristor technology to date,105 where the SET and RESET processes are based on the crystallization and amorphization of the phase change material.106 During RESET, a short, high voltage pulse is applied, melting part of the phase change material. The melt cools down very quickly and forms an amorphous region, changing the resistance from the LRS [see Fig. 4(c)] to the HRS [see Fig. 4(d)]. Based on the nucleation rate of the phase change material, the SET process can be classified as nucleation dominated or growth dominated.107 Typically, Ge2Sb2Te5 (GST)-based devices are nucleation dominated [see Figs. 4(e) and 4(g)], and AgInSbTe (AIST)-based devices are growth dominated [see Figs. 4(f) and 4(h)]. In nucleation dominated PCM devices, incubation first takes place inside the amorphous region, followed by crystal growth. The nanocrystals grow gradually until the amorphous region is finally transformed into a polycrystalline structure, during which the PCM device gradually switches from the HRS to the LRS.108 In growth dominated material systems, by contrast, crystallization always occurs at the amorphous/crystalline interface: the amorphous material gradually transforms into the crystalline state and finally forms a single crystal once the SET process is complete. For both types of materials, the typical crystallization temperature is about 500–700 K.109 A voltage pulse higher than the threshold voltage is needed to raise the temperature of the amorphous region and induce the incubation and crystallization processes, whereas the temperature must stay below the melting point (∼900 K) to avoid melting the phase change material.

FIG. 5.

Schematic illustration of an STDP computation for high-density nanoscale SNN implementations. (a) Circuit implementation of an SNN, (b) illustration of an STDP computation, and (c) schematic of the SNN algorithm. The STDP algorithms were implemented in an experimental platform that combines a software-based environment (blue part), high precision instrumentation (yellow part), and arrays of PCMs (red part). All these components were in a feedback loop. The generation of the signals and the translation between the neuronal input, the calculation of pulse power/width, and the logic related to the firing threshold detection and firing events were implemented in software. The generation of the nanosecond-time-scale pulses and the measurements were performed by automated instrumentation. The membrane potential was stored and read out from the arrays of PCMs. Reproduced with permission from Tuma et al., Nat. Nanotechnol. 11, 693 (2016). Copyright 2016 Springer Nature.23 


Moreover, the crystallization rate also plays an important role in PCM devices, as it can be the major factor limiting the operation speed of phase change memristors. The main reason for the low crystallization speed, usually on the nanosecond scale for GST based devices110 and on the microsecond scale for AIST based devices,111 is still under debate. A dominant explanation31 attributes the low crystallization speed to the difference in atomic structure between the amorphous and crystalline states. Preseeding nuclei inside the amorphous region can bypass the incubation and improve the crystallization speed.112 However, such a method requires a prenucleation process, which is not desirable in practical applications. Novel phase change materials such as ScSbTe (SST) are also being investigated;113 SST has a cubic atomic structure in the amorphous state, allowing it to switch between the amorphous and crystalline states with ultrafast speed (<500 fs) and making it promising for high-speed applications. In addition, PCM devices suffer from resistance drift in the HRS, as the amorphous state is metastable due to stress release114 and trap dynamics,115 which can be harmful for device operation. Moreover, the thermal dissipation in PCM and the large driving current required during RESET are serious obstacles to ultralarge scale integration. Hence, structural and material optimizations have been performed, for example, by inserting a capping layer at the top interface between the electrode and the phase change material to maintain the material temperature during operation116 or by changing the geometry of the PCM and the bottom electrode (BE) to fully utilize the heat generated during the SET/RESET process.117,118 As for the materials of PCM devices, combinations of tellurium with germanium and antimony have been the focus of material optimizations, and new materials, especially tellurides, are still under intensive investigation. In general, PCM-based artificial synapses have demonstrated manufacturing maturity, high scalability, large scale integration, and good stability, making them a promising candidate for neuromorphic systems. However, the amorphization mechanism dictates that the weight update in PCM synapses is inherently asymmetric due to the abrupt RESET, which significantly degrades network performance in many cases. It should also be noted that the dynamics of crystallization have also been exploited to build artificial neurons.119 

5. Performance metrics of memristor based artificial synapses

The main figures of merit for memristor based artificial synapses include weight update linearity, symmetry, resistance level stability, weight precision, energy consumption, scalability, speed, endurance, and retention. The desired performance depends on the target application. For example, high weight update linearity, symmetry, and a large number of states are required to achieve online learning in neuromorphic systems, which also demands high speed and high endurance because of the frequent updates of synaptic weights. To date, most memristor-based artificial synapses suffer from insufficient linearity and symmetry,120 although a number of studies have been devoted to optimizing these aspects.121–124 Modifying the amplitude or width of the voltage pulses125 can also improve the linearity and symmetry of memristive synapses, at the expense of cumbersome peripheral circuitry to generate such complicated signals. The operation speed of memristive synapses can reach a few nanoseconds,43,75 with a record speed of 85 ps,126 which is comparable to DRAM127 and other transistor-based devices or memory circuits128 and much faster than Flash. The above requirements can be largely relaxed if only inference is needed, although long retention is then required instead. Furthermore, low energy consumption and high device scalability are needed for both online learning and inference in order to implement large neural networks on chip. State-of-the-art devices can have an energy consumption as low as a few femtojoules26 or even below one femtojoule129 per spike, which is already comparable to that of biological synapses. All the memristive synapses mentioned above can take up a cell area of 4F2 in an actual circuit layout31 (F is the feature size of a given technology node). Compared with DRAM (4F2), static random access memory (SRAM) (140F2), FeRAM (22F2), and magnetic random access memory (MRAM) (20F2),43 such a cell area is competitive and shows the promise of memristors. At the array level, the major challenges lie in intrinsic device variations, the lack of ideal selectors, and immature large scale integration technology. The selector issue can be sidestepped in small arrays, but selectors become necessary in large scale arrays to ensure correct updating and reading. Hence, optimization and exploration at the physics, device, and array levels are necessary to ensure future applications of memristors. An interesting trend in device optimization has emerged recently with the introduction of new material systems such as two dimensional (2D) materials,130 Mott insulators,45,131,132 ferroelectric materials,133,134 magnetic materials,135 perovskites,136 and organic materials137 and/or novel synaptic structures such as synaptic transistors130 and ferroelectric transistors.134 This has led to a flourishing of brain-inspired computing with memristors and opens up new opportunities for high-performance synaptic devices.

The theoretical modeling of memristors can be roughly divided into two categories: physical models and compact models. The physical modeling of a memristor aims to correlate the electrical properties with the underlying physical mechanisms. Guan et al. developed a hybrid model that takes into account both electron conduction and ion movement.155 Onofrio et al. simulated ultrafast resistance switching using molecular dynamics.156 

Recently, nanoparticle diffusion driven by the minimization of interfacial energy has been observed. The resulting diffusive memristor enables a direct emulation of both the short- and long-term plasticity of synapses in brain-inspired computing systems. Linking the electrical, nanomechanical, and thermal degrees of freedom, the resistance of the diffusive memristor model can be approximated as the sum of tunneling resistances across the gaps between the N − 1 nanoparticle islands70 

$R_M = \sum_{i=0}^{N-1} R_t \exp[(x_{i+1} - x_i)/l],$
(1)

where $R_t$ is the tunneling resistance amplitude, $x_0$ and $x_N$ are the spatial coordinates of the input and output terminals ($x_0 < x_1 < x_2 < \cdots < x_{N-1} < x_N$), and $l$ is the effective tunneling length. Let $L$ denote the half size of the device. To describe the nanoparticle diffusion and the memristive dynamics, an overdamped Langevin equation is employed for metallic nanoparticles trapped by a potential $U$, which accounts for the interfacial energy, and subject to a random force $\xi_i$ whose magnitude is determined by the temperature $T$70 

$\eta \frac{dx_i}{dt} = -\frac{\partial U(x_i)}{\partial x_i} + \frac{\alpha V(t)}{L} + \sqrt{2\eta k_B T}\,\xi_i,$
(2)

where the viscous force with viscosity $\eta$ balances all other forces acting on the nanoparticles. When the voltage $V(t)$ is on, $\alpha V(t)/L$ represents the force exerted by the electric field $E = V(t)/L$ on a nanoparticle with induced charge $\alpha$. The random force $\sqrt{2\eta k_B T}\,\xi_i$ describes the diffusion of the nanoparticles.
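As a rough illustration of how Eqs. (1) and (2) generate memristive dynamics, the Langevin equation can be integrated with a simple Euler-Maruyama scheme. The harmonic trap, the parameter values, and the terminal positions below are hypothetical choices made only for illustration, not fitted to any device:

```python
import numpy as np

rng = np.random.default_rng(0)

# All parameters are hypothetical and in arbitrary units.
N = 5                   # number of mobile nanoparticles
L = 1.0                 # half size of the device; terminals at -L and +L
eta = 1.0               # viscosity
kB_T = 0.01             # thermal energy k_B * T
alpha = 1.0             # induced charge on each nanoparticle
k_trap = 5.0            # harmonic trap U(x) = k_trap * x**2 / 2
Rt, l = 1.0, 0.05       # tunneling resistance amplitude and length
dt, steps = 1e-3, 2000

x = np.linspace(-0.4, 0.4, N)  # initial nanoparticle positions

def resistance(x):
    """Eq. (1): sum of tunneling resistances over the gaps between
    the terminals and the (sorted) nanoparticle islands."""
    pos = np.concatenate(([-L], np.sort(x), [L]))
    return Rt * np.sum(np.exp(np.diff(pos) / l))

V = 2.0  # applied voltage during the pulse
for _ in range(steps):
    # Euler-Maruyama step of Eq. (2): trap force, field force, and
    # thermal noise with variance 2 * eta * kB_T per unit time.
    force = -k_trap * x + alpha * V / L
    x = x + dt * force / eta \
          + np.sqrt(2.0 * kB_T * dt / eta) * rng.standard_normal(N)
    x = np.clip(x, -L, L)  # particles stay inside the device

print(x.mean(), resistance(x))
```

Under bias, the field term drags the nanoparticles toward one terminal; rerunning the same loop with V = 0 lets the trap pull them back toward the potential minimum, mimicking the spontaneous relaxation responsible for the short-term (volatile) behavior described above.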

Compact models are abstracted from physical models; they have reduced computational complexity while preserving the ability to capture the essence of the memristive dynamics. Such models benefit memristor-based circuit design and are usually calibrated against empirical observations.157 Representing the link between the magnetic flux ($\phi$) and the electric charge ($q$), the memristance ($M(q)$, in Ω) of a memristor is defined as6 

$M(q) = d\phi/dq.$
(3)
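As a concrete instance of Eq. (3), the linear ion-drift model of the original HP memristor7 can be simulated in a few lines; the parameter values here are illustrative placeholders rather than fitted device constants:

```python
import numpy as np

# Linear ion-drift memristor (HP model); parameter values are
# illustrative placeholders, not fitted device constants.
R_on, R_off = 100.0, 16e3   # fully doped / fully undoped resistances (ohm)
D = 10e-9                   # oxide thickness (m)
mu = 1e-12                  # dopant mobility (m^2 V^-1 s^-1), illustrative
w = 0.5 * D                 # state variable: width of the doped region

dt = 1e-5
t = np.arange(0.0, 2e-2, dt)
v = np.sin(2 * np.pi * 100 * t)  # 100 Hz, 1 V sinusoidal drive

i = np.empty_like(t)
for n, vn in enumerate(v):
    # Memristance: series combination of doped and undoped regions.
    M = R_on * (w / D) + R_off * (1.0 - w / D)
    i[n] = vn / M
    # Linear drift of the doped/undoped boundary, bounded in [0, D].
    w = np.clip(w + dt * mu * R_on / D * i[n], 0.0, D)

print(i[0], np.max(np.abs(i)))
```

Since i = v/M(w) at every instant, the current vanishes whenever the voltage does, producing the pinched hysteresis loop that is the fingerprint of a memristor.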

For example, a compact model for TaOx-based memristors is based on the following coupled equations:145 

$i(t) = G(x, v)\,v(t), \qquad \frac{dx(t)}{dt} = F(x, v),$
(4)

where $G(x, v)$ is the conductance of the TaOx-based memristor, $x$ denotes the state variable of the memristor, $v$ is the applied voltage, and $F(x, v)$ describes the dynamical evolution of the state $x$ under the applied voltage. The device conductance is approximated by the parallel combination of an ohmic channel and a nonlinear channel145 

$i = v\left[x\,G_m + (1 - x)\,a\,\exp\!\left(b\sqrt{|v|}\right)\right],$
(5)

where Gm, a, and b are constants. To better fit the experimental data, the model can be modified as follows:145 

$\frac{dx}{dt} = A\,\sinh\!\left(\frac{v}{\sigma_{\text{off}}}\right) \exp\!\left[-\left(\frac{x_{\text{off}}}{x}\right)^2\right] \exp\!\left[\frac{1}{1+\beta p}\right] \quad (v < 0),$
(6)

where $A$, $\sigma_{\text{off}}$, $x_{\text{off}}$, and $\beta$ are constants and $p$ is the dissipated power. This compact model has been used successfully to simulate large, high-density memristive arrays for brain-inspired systems. Additionally, the physical insight embedded in the model helps in understanding the physical mechanisms of memristive devices.
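To make the model concrete, the RESET branch of Eqs. (4)-(6) can be integrated with forward Euler. All parameter values below are hypothetical placeholders chosen only to make the toy dynamics visible, not the fitted constants of the original TaOx model:

```python
import numpy as np

# Hypothetical parameters for a toy forward-Euler run of the RESET
# (v < 0) branch of the TaOx compact model; not fitted constants.
Gm, a, b = 1e-3, 1e-5, 1.0                      # conductance terms, Eq. (5)
A, sigma_off, x_off, beta = 1e3, 0.5, 0.4, 1.0  # dynamics terms, Eq. (6)

x = 0.9          # state variable (relative size of the conductive channel)
v = -0.8         # constant negative (RESET) voltage
dt = 1e-6

currents = []
for _ in range(1000):
    # Eq. (5): ohmic channel in parallel with a nonlinear channel.
    g = x * Gm + (1.0 - x) * a * np.exp(b * np.sqrt(abs(v)))
    i = v * g
    p = i * v  # dissipated power, positive for this i-v pair
    # Eq. (6): sinh(v / sigma_off) < 0 here, so the state decays;
    # the exp[-(x_off / x)^2] factor makes the decay self-limiting.
    dxdt = A * np.sinh(v / sigma_off) \
             * np.exp(-(x_off / x) ** 2) * np.exp(1.0 / (1.0 + beta * p))
    x = min(max(x + dt * dxdt, 0.01), 1.0)
    currents.append(i)

print(x, abs(currents[0]), abs(currents[-1]))
```

The run shows the qualitative RESET behavior: the state variable and the current magnitude both decrease under sustained negative bias, with the decay rate dropping sharply as $x$ approaches $x_{\text{off}}$.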

A comparative summary of different memristive models is given in Table I. The memristor was originally defined through Chua's model.6 The HP model7 first linked this definition to a physical device via linear ion drift, which was later extended with nonlinear drift effects.33 Simmons tunneling barrier modeling methods139–141 were developed into SPICE compatible physics-based memristive models, which have been adopted and refined in the Generalized model.142 Threshold effects143,144 have also been incorporated into memristive models. In the Stanford models, the filament gap is taken as the state variable.147,148 Zhang's synaptic model146 features a voltage threshold and can mimic the synaptic behavior of recent memristive devices based on different materials. Unipolar devices and temperature effects149,150 have also been investigated. Other bipolar switching mechanisms,151 e.g., the change in conductive filament (CF) size,152 and further factors are also taken into account. The diffusive model70 enables a direct emulation of both the short- and long-term plasticity of brain-inspired synapses.

TABLE I.

Comparative analysis of different memristive models.

| Model | Device type | State variable | Control mechanism | Threshold | Boundary effects | Simulation compatible |
|---|---|---|---|---|---|---|
| Chua model6 | Generic | Flux or charge | I | No | N.A. | N.A. |
| Linear ion drift7 | Bipolar | 0 ≤ w ≤ D | I | No | External window functions | SPICE |
| Nonlinear ion drift33 | Bipolar | 0 ≤ w ≤ 1 | V | No | External window functions | No |
| Exponential138 | Bipolar | Switching speed | V | No | Yes | No |
| Pickett-Abdalla model139–141 | Bipolar | a_off ≤ w ≤ a_on | I | No | No | SPICE |
| Generalized model142 | Bipolar | 0 ≤ w ≤ 1 | V | Yes | External window functions | SPICE/Verilog/MATLAB |
| TEAM143 | Bipolar | x_on ≤ x ≤ x_off | I | I | Specific window functions | SPICE/Verilog/MATLAB |
| VTEAM144 | Bipolar | x_on ≤ x ≤ x_off | V | V | Specific window functions | SPICE/Verilog/MATLAB |
| Tantalum oxide model145 | Bipolar | x_on ≤ x ≤ x_off | V | V | Yes | No |
| Synaptic model146 | Bipolar | 0 ≤ w ≤ D | V | V | External window functions | SPICE/Verilog/MATLAB |
| Stanford147,148 | Bipolar | Filament gap (g) | V | T | No | SPICE/Verilog/MATLAB |
| Filament dissolution149,150 | Unipolar | Concentration of ions | V | T | No | COMSOL |
| Physical electrothermal151 | Bipolar | Concentration of ions | V | T | Yes | COMSOL |
| Bocquet unipolar152 | Unipolar | Concentration of ions | V | T | Yes | COMSOL/SPICE |
| Bocquet bipolar153 | Bipolar | CF radius | V | T | Yes | SPICE |
| Gonzalez-Cordero154 | Bipolar | CF radius | V | T | Yes | SPICE |
| Diffusive model70 | Bipolar | Filament length (l) | V and T | V and T | Yes | No |

In this section, the spike-timing-dependent plasticity (STDP) rule will first be introduced. Second, the major units of the brain-inspired computing systems such as synapses and neurons will be explored. In addition, different memristor-based neuronal models and synaptic learning mechanisms will be discussed.

The most intensively studied functional units of brain-inspired computing systems are synapses and neurons. Synaptic learning in DNNs is primarily achieved via the backpropagation (BP) algorithm,158 which requires a backward pass of gradient computation through the network. Compared with the BP algorithm, a more biorealistic approach is the STDP learning rule159 (see Fig. 5). The STDP concept is generalized from neuroscience.160,161 The synaptic weight increases or decreases if the preneuron spikes before or after the postneuron, respectively,

$$\Delta w = \begin{cases} A_{+}\exp\!\left(-\dfrac{\Delta t}{\tau_{+}}\right), & \Delta t > 0, \\[6pt] -A_{-}\exp\!\left(\dfrac{\Delta t}{\tau_{-}}\right), & \Delta t < 0, \end{cases}$$
(7)

where $A_{+}$, $A_{-}$, $\tau_{+}$, and $\tau_{-}$ are constants, and $t_{\text{pre}}$ and $t_{\text{post}}$ are the firing times of the pre- and postsynaptic neurons, respectively, which define $\Delta t = t_{\text{post}} - t_{\text{pre}}$.
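A few lines of code make the asymmetry of the STDP window in Eq. (7) concrete; the amplitudes and time constants below are arbitrary illustrative values, not fitted to any cited device.

```python
import math

# STDP window of Eq. (7); amplitudes and time constants are arbitrary
# illustrative values, not fitted to any cited device.
A_plus, A_minus = 1.0, 1.0
tau_plus, tau_minus = 20e-3, 20e-3           # seconds

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair, dt = t_post - t_pre."""
    dt = t_post - t_pre
    if dt > 0:                               # pre before post: potentiation
        return A_plus * math.exp(-dt / tau_plus)
    if dt < 0:                               # post before pre: depression
        return -A_minus * math.exp(dt / tau_minus)
    return 0.0

print(stdp_dw(0.0, 10e-3))                   # positive (potentiation)
print(stdp_dw(10e-3, 0.0))                   # negative (depression)
```

The magnitude of the update decays exponentially with the spike-time separation, so only closely timed spike pairs change the weight appreciably.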

An early observation of STDP was made on Ag/Si-based memristive devices.63 Similar STDP behavior has also been observed in Cu2O-based memristive devices.162 Efforts in engineering TaOx-based first-order and second-order memristors have yielded more uniform and controllable STDP behaviors.121,163 Additionally, 2T1M and 1T1M circuit schemes that implement STDP functionality on HfOx-based memristors for simple online pattern learning and recognition have been reported.164 To avoid the waveform engineering that demands overlapping pre- and postsynaptic signals, second-order memristors and memristors with diffusive dynamics have been developed with intrinsic timing capabilities.70,163

Memristors are also used as key components of artificial neurons. To date, Mott insulators, phase-change materials, ferromagnets, and diffusive-ion-based memristive devices have been explored for this purpose.10,12,45,165–171 In 2013, Pickett et al. used two NbO2 Mott memristors as counterparts of the ion channels of a biological neuron and capacitors as the cell membrane to emulate the Hodgkin-Huxley (H-H) model.45 In addition, Moon et al. developed an NbO2-based Mott memristor oscillator for neuron applications.10 Following this, Lin et al.170 and Zhang et al.168 demonstrated leaky integrate-and-fire (LIF) neurons using a VO2-based Mott memristor and an Ag-based diffusive memristor, respectively. Stoliar et al. reported an LIF neuron using a single Mott memristor, which implemented three basic functions of spiking neurons, i.e., leak, integration, and fire. In 2018, Wang et al.12 employed diffusive memristor neurons in a fully memristive neural network. In the same year, an integrated capacitive neural network with a memcapacitive LIF neurotransistor was built.169
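The LIF behavior referenced above can be sketched in a few lines of plain Python; all parameters (membrane time constant, threshold, reset) are illustrative and not tied to any of the cited devices.

```python
# Minimal leaky integrate-and-fire (LIF) neuron; all parameters are
# illustrative, not taken from the cited Mott or diffusive devices.
def lif(input_current, tau_m=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Integrate a current trace; return the time indices of output spikes."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(input_current):
        v += dt * (-v / tau_m + i_in)        # leaky integration
        if v >= v_th:                        # threshold crossing: fire...
            spikes.append(t)
            v = v_reset                      # ...and reset the membrane
    return spikes

spikes = lif([0.15] * 50)                    # constant drive for 50 steps
print(spikes)                                # regular spike train
```

Under a constant drive, the membrane charges, fires, and resets periodically, which is exactly the leak/integrate/fire triad mentioned above.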

Brain-inspired computing utilizes hardware to simulate neurobiological models such as SNNs and ANNs. Neurons in ANNs are characterized by a single, static, continuous-valued activation, whereas biological neurons use discrete spikes to compute and transmit information. SNNs are therefore more biologically realistic than ANNs. However, training DNNs (such as CNNs and RNNs) using SNN models is still challenging: the transfer function of a spiking neuron is usually nondifferentiable, which prevents direct use of the BP algorithm. SNNs still lag behind ANNs in terms of accuracy, but they typically require far fewer operations and are better candidates for processing spatiotemporal data.

In this section, first, the literature on memristive SNNs will be surveyed. Second, the progress of hardware implementation of memristive fully connected ANNs is discussed, with a focus on image recognition. Third, reports on memristive CNNs for DNNs will be summarized, with the same focus on image recognition. Finally, we will introduce memristive Hopfield RNNs to achieve different tasks such as route searching and speech recognition.

Brain-inspired computing models can be broadly divided into ANNs, SNNs, and other extended models.172,173 Among them, ANNs are mainly used in machine learning, especially deep learning. A more faithful way to mimic how the brain processes information is the SNN. Hardware implementations of SNNs have been demonstrated by IBM TrueNorth, Intel Loihi,174 and Tsinghua Tianjic.175 In this subsection, we mainly focus on several unique characteristics of SNNs. First, the outputs of spiking neurons are pulses encoded in both time and space. Second, timing-domain information is encoded in the output of an SNN neuron. Therefore, multiple neurons encode information in a spatiotemporal two-dimensional (2D) space.
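The spatiotemporal (neurons × time) encoding can be illustrated with a simple Bernoulli rate encoder; the encoding scheme, seed, and step count below are illustrative choices, not taken from any of the cited chips.

```python
import random

random.seed(0)                               # deterministic illustration

def rate_encode(intensities, n_steps=100):
    """Bernoulli rate encoding: each value in [0, 1] becomes a binary
    spike train, so information occupies a neurons x time 2D space."""
    return [[1 if random.random() < p else 0 for _ in range(n_steps)]
            for p in intensities]

trains = rate_encode([0.1, 0.9])             # two input intensities
rates = [sum(t) / len(t) for t in trains]    # recovered firing rates
print(rates)
```

Each analog input becomes a row of binary spikes over time; the stronger input produces the denser train, so the intensity can be read back from the firing rate.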

1. SNNs link computer science with neuroscience

A BP-based learning rule for feedforward SNNs that minimizes the temporal error at the output has been derived.176 Progress in both unsupervised and supervised learning has been reviewed.177 The winner-take-all (WTA) and STDP rules have been utilized to build an SNN for position detection in circuit simulation.178 Recently, brain-inspired SNN learning algorithms have been developed by Maass and co-workers.179–181 In addition, memristor-based synapses have been combined with CMOS-based LIF output neurons to perform visual pattern recognition tasks.182 For example, a large SNN with simplified STDP synapses has been simulated to recognize Modified National Institute of Standards and Technology (MNIST) digits under different variations.183 A CMOS-based brain-inspired hardware system has emulated different spike types observed in cortical pyramidal neurons as well as coincidence detection.184 A simple probabilistic WTA network has been reported by employing memristor synapses with weight-dependent STDP behavior.185 An all-memristive SNN, using phase-change synapses and neurons, has been developed186 to detect the correlation between input streams. A Hopfield SNN was built using an 11k-bit array of Mo/PCMO memristive synapses for image recognition.10 In 2016, Serb et al.185 reported a 4 × 2 SNN trained using the intrinsic STDP of the memristors with WTA neurons, which could cluster simple patterns after unsupervised training. Pantazi et al.186 demonstrated a single-layer SNN with both phase-change memristive synapses and neurons. In 2017, Milo et al. proposed a spiking HNN model with STDP based on 30 excitatory and 30 inhibitory synapses combined with six fully connected digital neurons.187 In this work, training, recall, and stability via external stimulation and recurrent cooperation and competition were demonstrated.

Many advances in understanding and modeling SNNs are from the neuroscience research. For instance, an SNN with STDP learning is simulated to differentiate spoken words and find temporal encoding rather than simple rate encoding.20 An SNN architecture called NeuCube188 has been used for modeling spatio- and spectrotemporal brain data (STBD).

2. Comparison between SNNs and ANNs

A common learning method for SNNs is the STDP algorithm, which cannot guarantee high performance in general learning tasks. Therefore, SNNs are currently more suitable for low-accuracy computing applications (see Tables II and III). SNNs are biologically plausible, real-time, low power, and spatiotemporal, and thus suitable for brain-inspired computing systems, while ANNs are fast but computationally intensive and suited to current von-Neumann computing systems. Although BP-based ANNs have achieved many successes in pattern recognition tasks, SNN-based brain-inspired systems are more biorealistic and can be implemented on memristive devices with suitable STDP behaviors (see Table III). A detailed comparison between SNNs and ANNs is given in Table II.

TABLE II.

Comparison between SNNs and ANNs.

| Category | ANN (Artificial Neural Network) | SNN (Spiking Neural Network) |
|---|---|---|
| Neuronal activations | Multilevel (fixed or floating point) | Timing-domain-coded spikes (binary values) |
| Timing expression | Recurrent connections in RNNs and other networks | Membrane potential and recurrent connections |
| Spatial expression | Regularly interconnected neuronal array; image processing usually adopts sliding windows in the convolution operation | Nonregularly interconnected neurons; generally no sliding-window process (requires parallel expansion of convolutions) |
| Activation function | Sigmoid, Tanh, ReLU, and Leaky ReLU | Integrate-and-fire, Hodgkin–Huxley |
| Reasoning | Convolution, pooling, and multilayer perceptron model (MLP) | Leaky integrate-and-fire model (LIF) |
| Training algorithm | BP and winner-take-all (WTA) | STDP, Hebb's law, and BP |
| Representing negative neuronal values | Negative activation values | Inhibitory neurons |
| Implementation | TensorFlow, PyTorch, MiXNet, GPU, and TPU | TrueNorth, SpiNNaker, Neurogrid, and Loihi |
| Theoretical sources | Mathematical derivation | Brain enlightenment |
TABLE III.

Comparative analysis of different memristive brain-inspired architectures.

| Architectures | Applications and simulation results | Device requirement | Advantages | Limitations | Further studies |
|---|---|---|---|---|---|
| SNN | Handwritten digit recognition (82%)125 and letter recognition (99%)216 | Synaptic plasticity | More biorealistic, spatiotemporal, event-driven, energy-efficient | Lack of highly efficient training algorithms and spike-based data | Design scalable on-chip neurons producing spikes with small area and power dissipation, and optimize algorithms suitable for memristive synapses |
| ANN (one layer) | Handwritten digit recognition (83%)217 and face recognition (88.08%)200 | Analog conductance | Fast speed | Limited to simple tasks | Investigate the scalability of the system |
| ANN (two layer) | Simple digit recognition (100%)4,9 | Analog conductance | Solves relatively more complex tasks | Requires multilayer BP gradient computation; difficult to implement on-chip | Investigate the performance with real devices for large-scale systems |
| CNN | Handwritten digit recognition (94%)218 | Low power, scalable, analog conductance, and long retention | Shared kernels; solves very complex tasks | Power dissipation and scalability issues remain | Investigate a fully on-chip implementation without a software part |
| RNN | Associative learning214 and traveling salesman problem131 | Low power, scalable, analog conductance, and long retention | Associative memory | Requires more resources for deep learning tasks | Full circuit-level design of the architecture and investigation of scalability and different applications |

3. Reinforcement learning in SNN

Reinforcement learning (RL), which acquires knowledge and solves problems without human knowledge or supervision, is a promising approach to machine learning. During the RL process, the learning agent takes an action and receives a reward from the environment at every time step, aiming to reach the target through an optimal policy. RL can be implemented on both SNNs and ANNs. A hardware-friendly RL algorithm based on memristive SNNs has been proposed22 to update the action-value (Q-value) function and optimize the policy as follows:

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \eta\left[r_t + \alpha\, Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t)\right],$$
(8)

where $Q(s_t, a_t)$ is the Q value of taking action $a_t$ in state $s_t$, and $r_t + \alpha Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t)$ represents the temporal difference (TD) error, with discount factor $\alpha$. The learning rate $\eta$ sets how strongly new experience overrides previous experience. The memristive SNN RL algorithm has been applied to an acrobot system,22 where the agent aims to reach the target in fewer steps and with a smaller TD error. RL in ANNs will be discussed in Sec. IV B.
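The Q-value update of Eq. (8) can be sketched as a tabular update rule; the two-state table, action name, and constants below are hypothetical, unrelated to the acrobot task of Ref. 22.

```python
# Tabular form of the Q-value update in Eq. (8). The two-state table and
# the constants are hypothetical, unrelated to the acrobot task of Ref. 22.
eta, alpha = 0.5, 0.9                        # learning rate, discount factor

def q_update(Q, s, a, r, s_next, a_next):
    td = r + alpha * Q[(s_next, a_next)] - Q[(s, a)]   # TD error
    Q[(s, a)] += eta * td
    return Q[(s, a)]

Q = {(0, "go"): 0.0, (1, "go"): 1.0}
q = q_update(Q, 0, "go", 0.0, 1, "go")       # bootstrap from the next state
print(q)                                     # 0.5 * (0 + 0.9*1.0 - 0) = 0.45
```

Even with zero immediate reward, the Q value of the first state rises because the update bootstraps from the discounted value of the next state-action pair.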

Brain-inspired computing with ANNs based on emerging non-volatile memory (NVM) devices has attracted significant attention.94 Initial studies of memristive ANNs mainly focused on fully connected neural networks [see Fig. 6(a)].190,191 Compared with memristive SNNs, memristive fully connected ANNs are much faster and can solve relatively more complex tasks (see Table III). In 1961, crossbars of resistive elements were proposed by Steinbuch192 to parallelize vector-matrix multiplication utilizing Kirchhoff's current law. However, the proposal was less attractive than digital implementations for lack of a tunable resistor until Snider193 reinvented the idea with memristors in 2008. Memristors, initially proposed for nonvolatile memory applications, were then engineered for computing applications,63,94,194 directly using intrinsic physical laws to obviate the large energy and time overhead incurred by frequent data shuttling in von-Neumann systems. Alibart et al.195 utilized a small array to classify 3 × 3 patterns in 2013. Park et al.196 used a PCMO memristor array consisting of 192 cells to classify electroencephalography signals. Also in 2015, Prezioso et al.190 reported a 12 × 12 passive Al2O3/TiO2-x-based memristive array to classify 3 × 3 patterns. Following this, Burr et al.197 integrated 165 000 phase-change memristors to implement a three-layer perceptron that classified the MNIST handwritten digits using BP on-chip learning methods. Yu et al.198 reported a three-layer perceptron built on a 16 Mb binary resistive switching memory macrochip for online training, classifying a resized binary MNIST network with an accuracy of 96.5%. In 2017, Choi et al.199 demonstrated in situ unsupervised training of a 9 × 2 TaOx-based memristive array to analyze breast cancer data. In addition, Yao et al.200 used an 8 × 128 1T1M array of HfAlyOx-based memristors to classify the Yale face database.
In 2018, Hu et al.191 used a 128 × 64 1T1M array of HfOx memristors to classify the MNIST dataset. The BP algorithm for in situ training [see Fig. 6(b)] was employed by Li et al.189 Based on the same 128 × 64 1T1M array [see Fig. 6(c)], a two-layer perceptron [see Fig. 6(d)] was built to recognize reduced-size MNIST digits. Ambrogio et al.201 demonstrated an artificial synapse pairing phase-change memristors with three-transistor-one-capacitor (3T1C) structures to facilitate linear and symmetric weight updates, leading to 97.95% accuracy in MNIST digit recognition. Bayat et al.202 demonstrated analog neurons classifying 4 × 4 patterns with Al2O3/TiO2-x-based memristive synapses. Compared with the CMOS approach, analog in situ training offers parallel signal processing with low power consumption.203

FIG. 6.

Memristive fully connected neural networks. (a) Schematic of fully connected neural networks, (b) flow chart of the BP algorithm for in situ training, (c) photograph of the integrated 128 × 64 1T1M array, and (d) implementation of the multilayer fully connected neural network with memristor crossbars. Reproduced with permission from Li et al., Nat. Commun. 9, 2385 (2018). Copyright 2018 Author(s), licensed under a Creative Commons Attribution 4.0 License.189 


As mentioned previously, RL inspired by cognitive neuroscience can make use of the ANNs built on analog memristive arrays.21,204 The parallel and energy-efficient in situ reinforcement learning with a 3-layer fully connected memristive deep-Q network21 has been implemented on a 128 × 64 1T1M array with applications to classic control problems including cart-pole and mountain car games.

DNNs have attracted considerable attention since CNNs show promising performance in pattern recognition. Additionally, CNNs are resilient to image translation and scaling (see Table III). Input data are convolved with a set of shared kernels in each convolutional layer [see Fig. 7(a)]. Deep CNNs consist of cascaded layers in which each convolutional layer is usually followed by a subsampling layer [see Fig. 7(a)],18,19 which performs a max-pooling or averaging operation to reduce the size of the input data. The final layer can be a fully connected neural network. Because the shared weight kernels are convolved with the entire feature maps, the training time is significantly reduced in comparison to fully connected neural networks (see Table III). Compared with memristive SNNs and fully connected ANNs, memristive CNNs can solve very complex tasks (see Table III).
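The convolution-plus-subsampling pipeline described above can be sketched as follows; the image, the single shared kernel, and the pooling choice (2 × 2 averaging) are illustrative.

```python
import numpy as np

# One convolutional layer with a single shared kernel followed by 2x2
# average pooling; the image, kernel, and sizes are illustrative.
def conv2d(img, kernel):
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):                       # slide the shared kernel
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def avg_pool2(x):
    h, w = x.shape[0] // 2, x.shape[1] // 2  # subsample by a factor of 2
    return x[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3))

img = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((2, 2)) / 4.0               # same kernel over the whole map
feat = avg_pool2(conv2d(img, kernel))
print(feat.shape)                            # (2, 2)
```

A 5 × 5 input shrinks to a 2 × 2 feature map while the kernel's four weights are reused at every position, which is why shared kernels cut both parameter count and training time.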

FIG. 7.

Memristive convolutional neural networks. (a) Schematic of the deep convolutional neural network, including the convolutional layer, subsampling layer, and fully connected layer. (b) Random examples of the MNIST-backrand dataset, (c) random examples of the CIFAR-10 dataset, and (d) architecture of the mixed hardware–software experiment for CIFAR-10/100 recognition. Reproduced with permission from Ambrogio et al., Nature 558, 60 (2018). Copyright 2018 Springer Nature.201 


To achieve memristive CNNs on-chip, the kernel operation205 and memristive convolutions for feature extraction206 have been proposed.207–209 Memristor-based 2D convolutional kernels were investigated by Li et al.203 in 2017. Another important issue for memristive DNNs, the pooling method in memristor-based brain-inspired circuits, was also presented.18,19 A mini CNN with both memristive synapses and memristive neurons was demonstrated by Wang et al.12 in 2018. The fully connected layer was trained by a simplified STDP rule. In 2018, a hybrid software-hardware CNN201 was presented to classify the MNIST-backrand [see Fig. 7(b)], CIFAR-10 [see Fig. 7(c)], and CIFAR-100 datasets [see Fig. 7(d)].

The Hopfield neural network (HNN), a form of recurrent neural network (RNN) [see Fig. 8(a)], was first proposed by John Hopfield in 1982.210 RNNs are becoming increasingly popular for sequence learning tasks such as language modeling,211 handwriting prediction and generation,212 and speech recognition.213 The difference between RNNs and ordinary feedforward neural networks is that the computational neurons in RNNs also receive their own output from the previous time step. This feedback enables RNNs to learn context in sequential inputs (see Table III).

FIG. 8.

Memristive Hopfield recurrent neural networks. (a) Schematic of the recurrent neural network and (b) schematic of the Hopfield network. Reproduced with permission from Kumar et al., Nature 548, 318 (2017). Copyright 2017 Springer Nature.131 


There are two major types of HNNs, i.e., the discrete type and the continuous type. The discrete type typically serves as an addressable memory with binary threshold nodes for associative learning, while the continuous type is typically used to solve combinatorial optimization problems, such as the traveling salesman problem (see Table III). Ideally, HNNs converge to the global minimum of their energy function, but in practice they sometimes settle into a local minimum. Hopfield RNNs also provide a model for understanding human memory. Eryilmaz et al. constructed the first memristive hardware HNN system in a 10 × 10 phase-change memristive array.14,15 Pattern recognition was demonstrated to be robust against the variations of the memristive synapses; the more training iterations, the larger the variation that could be tolerated. Subsequently, Hu et al. developed a hardware HNN system for associative memory,214 in which patterns are stored in the memristive weights. The prestored patterns can be retrieved directly or through associative intermediate states. To prevent the network from being trapped in a local minimum, Kumar et al. incorporated a chaos signal generated by an NbO2 Mott memristor into the HNN, and an all-memristor-based HNN system was proposed to solve the traveling salesman problem [see Fig. 8(b)].131 Owing to the chaos signal (see Fig. 9), the efficiency and accuracy of the system were greatly improved. Combining this with convolutional computation, Wang et al. developed recurrent convolutional memristive networks215 to classify the MNIST dataset on-chip.
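A discrete HNN of the kind described above can be sketched in a few lines: one bipolar pattern is stored via a Hebbian outer product and recalled from a corrupted cue. The pattern and network size are illustrative.

```python
import numpy as np

# Discrete Hopfield network: store one bipolar pattern with a Hebbian
# outer product and recall it from a corrupted cue. Sizes are illustrative.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)                     # no self-connections

state = pattern.copy()
state[:2] *= -1                              # corrupt two bits as the cue
for _ in range(5):                           # synchronous threshold updates
    state = np.where(W @ state >= 0, 1, -1)

print(np.array_equal(state, pattern))        # True: cue is cleaned up
```

Each update moves the state downhill in the Hopfield energy, so the corrupted cue falls into the stored pattern's basin of attraction, which is exactly the addressable-memory behavior of the discrete type.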

FIG. 9.

Chaos-aided global minimization. (a) Schematic depiction of the global minimization process, (b) evolution of the Hopfield energy (E) with and without fluctuations, and (c) histograms of the two distributions of (b). Reproduced with permission from Kumar et al., Nature 548, 318 (2017). Copyright 2017 Springer Nature.131 


We conclude this section by comparing the different memristive brain-inspired architectures: SNNs, fully connected ANNs, CNNs, and RNNs. A comparative analysis is listed in Table III. We anticipate a breakthrough in highly efficient training algorithms for memristive brain-inspired computing systems in the near future. For SNNs, more spike-based data can be generated for exploring new training algorithms. For ANNs, memristive devices with low power consumption, good scalability, analog conductance, and long retention need to be further developed.

In this section, we list major challenges and potential solutions for memristor-based brain-inspired computing systems.219 The optimization of such systems is application specific, including but not limited to scaling up the memristive arrays, improving synaptic precision, reducing leakage current, and designing efficient peripheral control circuits. At the algorithm and system levels, brain research will continue to inspire the development of memristor-based brain-inspired computing systems.

In principle, brain-inspired computing with memristive synapses requires the memristors to have a wide resistance range to accommodate a large number of resistance states. Low fluctuation and drift in each resistance state and high absolute resistance values are required for inference, while large endurance is necessary for repeated programming/training.221 For SNNs, synaptic plasticity of the memristive devices is in demand. For shallow ANNs with one or two layers, analog conductance with a wide resistance range and high endurance is necessary; in addition, low device variation and symmetric weight updating are required for efficient network training. For DNNs (such as CNNs and RNNs), low power and good scalability are required beyond analog conductance with long retention.

The device stability is critical to the computing accuracy, as the drift of conductance states over time or with environmental changes results in undesired synaptic weight changes. This has been one of the chief challenges for memristive brain-inspired computing systems. Memristors may suffer from different variations (see Table IV). There are two types of device variation: device-to-device (D2D) variation and cycle-to-cycle (C2C) variation. The D2D variation, evident in filamentary-type memristors, is associated with a random electroforming process that creates a different filament structure in each device.77 Some studies also suggest that reducing the thickness of the bulk of the memristive system can eliminate the effect of breakdown or electrical forming.77 The C2C variation comes from unstable channels or the random formation of new channels, and the selection of materials can greatly affect it. Other origins of variation, such as geometry variation222 and process variation,223 have also been investigated. One method to reduce the impact of variation is the binarization or quantization of neural network weights. With appropriate device engineering, algorithm development, and peripheral circuit design, the potential of memristive devices as electronic synapses and neurons in brain-inspired computing systems can be unleashed.
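As a sketch of the weight quantization mentioned above, the snippet maps continuous weights onto a few evenly spaced conductance targets; the level count, seed, and weight distribution are illustrative, and real devices would add programming noise around each target.

```python
import numpy as np

# Uniform quantization of weights onto a few conductance targets, one
# mitigation for conductance variation: fewer target levels leave a
# larger noise margin per level. Level count and weights are illustrative.
def quantize(w, n_levels=4):
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (n_levels - 1)        # spacing between target levels
    return lo + np.round((w - lo) / step) * step

rng = np.random.default_rng(0)
w = rng.uniform(-1.0, 1.0, size=100)         # continuous trained weights
wq = quantize(w, n_levels=4)
print(len(np.unique(wq)))                    # at most 4 distinct levels
```

The quantization error per weight is bounded by half the level spacing, so the trade-off is explicit: fewer levels mean larger representation error but more tolerance to conductance drift within each level.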

TABLE IV.

Key challenges and possible strategies of brain-inspired computing with memristors on the device, circuit, and system levels.

| | Key challenges | Possible strategies |
|---|---|---|
| Device level | | |
| Materials | Fabricating standard-process-compatible new materials and interconnect materials with high conductance | Use alternative 2D materials, polymers, and functional materials, and develop new processes for new materials |
| Analog behavior | Limited number of levels and lack of accuracy | Design stable interface-type devices, represent one synapse with multiple memristors, or utilize three-terminal electrochemical transistors |
| Device variations | C2C variation and D2D variation | Comprehend the microscopic picture of the switching and control the electroforming step and the atomic motion |
| Modeling | Low computational complexity and high physics fidelity for large-scale system simulation | Build SPICE models of memristors combining the physical and empirical behavior of devices |
| Circuit level | | |
| Scaling | IR drop and sneak paths | Fabricate devices with high resistance, use selector devices, and develop 3D crossbars |
| Read/write circuits | Efficient read/write schemes for digital/analog modes | Fabricate memristors with rectifying effects and use selector devices |
| Peripheral circuits | BP progress in DNNs, weight update calculations, and loss functions for DNNs | Use analog circuits, field-programmable gate arrays (FPGAs), and look-up tables (LUTs) connected to the chips, and approximate circuits for loss functions |
| System level | | |
| Operations | Developing a general computing system for data mapping, dot products, and STDP | Experimentally build applications with a memristive crossbar |
| Training and testing accuracies | Developing practical network topologies and learning algorithms | Develop hybrid algorithms and brain-inspired systems consisting of both ANNs and SNNs |

Peripheral circuits control the read/write process in memristive brain-inspired computing systems. In addition, different peripheral circuits are required for different functions in specific neural networks such as SNNs, ANNs, and CNNs. Taking a two-layer memristor-based ANN as an example, for the kth input sample, the system receives an input vector V_I(k) ∈ ℝ^M and produces an output vector V_O(k) ∈ ℝ^N. The weight matrix of the fully connected layer, W, is an N × M memristive matrix that transforms the input to the output via the following equation:9

V_O(k) = W(k) V_I(k)
(9)

or

V_Oj(k) = Σ_{i=1}^{M} W_ji(k) V_Ii(k),
(10)

where i = 1, 2, …, M and j = 1, 2, …, N. CMOS transmission gates (TGs) can be utilized as switches to control the on/off status of the read/write voltage because of their excellent switching characteristics [see Figs. 10(a) and 10(b)]. For memristive synapses in ANNs, a synaptic array combines a memristive crossbar array of memristors M_ji (with conductances G_ji) and a constant resistive circuit of conductance G_s [see Fig. 10(c)]. G_s (= 1/R_s) is the conductance of R_s, and G_ji (= 1/R_ji) is the conductance of the memristor. V_Ii is the input voltage. According to Kirchhoff's voltage law, both positive and negative synaptic weights can be represented by a single memristor:

W_ji = R_0 × (G_s − G_ji).
(11)
FIG. 10.

Peripheral circuits for memristor-based ANNs. (a) Structure of the CMOS transmission gate (TG), (b) symbol of CMOS TG, (c) memristive ANN circuits, (d) method for decreasing the memristance of M22, and (e) increasing the memristance of M22. Reproduced with permission from Zhang et al., IEEE Trans. Electron Devices 64, 1806–1811 (2017). Copyright 2017 IEEE.9 


Note that if G_s > G_ji, the synaptic weight W_ji is positive; conversely, if G_s < G_ji, the synaptic weight W_ji is negative.
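As a quick numerical sketch (not the authors' circuit implementation; R_0, G_s, the conductances, and the input voltages below are purely illustrative assumptions), the mapping of Eqs. (9)-(11) can be checked in a few lines:

```python
import numpy as np

# Illustrative values (assumptions, not taken from the paper)
R0 = 1e4                              # scaling resistance, ohms
Gs = 1e-4                             # reference conductance Gs = 1/Rs, siemens
G = np.array([[0.5e-4, 1.5e-4],
              [1.2e-4, 0.8e-4]])      # memristor conductances Gji (N x M)

# Eq. (11): a single memristor encodes a signed weight
W = R0 * (Gs - G)                     # positive where Gs > Gji, negative otherwise

# Eqs. (9) and (10): the crossbar output is a matrix-vector product
VI = np.array([0.3, -0.1])            # input voltages VIi
VO = W @ VI                           # VOj = sum_i Wji * VIi

print(W)                              # approx [[0.5, -0.5], [-0.2, 0.2]]
print(VO)                             # approx [0.2, -0.08]
```

In hardware, the multiply-accumulate of Eq. (10) is performed in a single analog read of the crossbar, which is the source of the efficiency advantage discussed throughout this section.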

For the forward-propagation (FP) process, the comparator A2 [see Fig. 10(c)] enables a binary step activation function as follows:

V′_Oj = f(V_Oj) = { V_H if V_Oj > 0, V_L if V_Oj ≤ 0 },
(12)

where V_H and V_L (V_L = 0) are the high and low output voltages of comparator A2, respectively.

For the BP process, the error ΔV between the estimated output V_O and the target output V_T is represented by the output of comparator A3 [see Fig. 10(c)]. During the BP process, the synaptic weights W are updated [see Figs. 10(d) and 10(e)] to minimize the error vector

ΔV(k) = V_T(k) − V_O(k).
(13)

The final mean square error (MSE) E_e over the K_0 training samples can be expressed as follows:

E_e = (1/K_0) Σ_{k=1}^{K_0} ||ΔV(k)||².
(14)
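The comparator activation of Eq. (12) and the error measures of Eqs. (13) and (14) can be sketched as follows; this is a minimal illustration, and the values of V_H, V_L, and the sample vectors are assumptions:

```python
import numpy as np

VH, VL = 1.0, 0.0   # comparator output levels of Eq. (12); VL = 0 as in the text

def step_activation(VO):
    """Binary step of comparator A2: VH where VOj > 0, VL where VOj <= 0."""
    return np.where(VO > 0, VH, VL)

def mean_square_error(VT, VO):
    """Eqs. (13) and (14): Ee = (1/K0) * sum_k ||VT(k) - VO(k)||^2."""
    dV = VT - VO                        # error vectors DeltaV(k), shape (K0, N)
    return float(np.mean(np.sum(dV**2, axis=1)))

# Toy check with assumed values (one sample, two outputs)
VO = np.array([[0.2, -0.08]])
VT = np.array([[1.0, 0.0]])
print(step_activation(VO))                          # [[1. 0.]]
print(mean_square_error(VT, step_activation(VO)))   # 0.0
```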

For the synaptic weight updating process, more specific peripheral circuits are required to control the read/write process and calculate the required time t or voltage amplitude v. The optimization of the peripheral programming circuits is crucial for applications using online learning methods. Recently, possible methods for analog on-chip BP learning in memristive neural networks were proposed.220 The BP algorithm consists of four major steps, i.e., FP, BP to the output layer, BP to the hidden layer, and the synaptic weight updating process. The architecture for the BP algorithm implementation [see Figs. 11(a) and 11(b)] can be divided into different blocks [see Fig. 11(c)]. In the FP step, the dot product of the input matrix and the synaptic weights is calculated [see Fig. 11(b)] and passed through the sigmoid activation function. Detailed circuit implementations, such as the sigmoid activation function and analog switch circuits, are also shown in Fig. 11(d). To implement the BP algorithm with gradient descent, MAIN BLOCK (MB) 1 [see Figs. 11(b) and 11(d)] performs the calculation of the sigmoid and its derivative functions. As the memristive values are read and processed sequentially, MEMORY UNITS (MU) 4 and 5 [see Fig. 11(b)] can be utilized to store the updated values. The weight updating process is implemented by applying voltage pulses of a particular duration t and amplitude v across each memristor. The update voltage pulse depends on the desired change in weights, i.e., on the error. The amplitude of the update pulse is calculated by the weight updating circuits MB2 and MB4 [see Figs. 11(b) and 11(d)]. The final stage of the BP algorithm is the weight update that minimizes the error. Brain-inspired computing systems with memristors are expected to further improve the performance of online learning and reduce the complexity of peripheral programming circuits in the future (see Table IV).
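The translation of a desired weight change into an update pulse can be sketched as below. This is a deliberately simplified, assumed device model (conductance change linear in pulse amplitude and duration), not the circuit of Ref. 220; K_DEVICE, R0, and t are illustrative values:

```python
# Assumed, simplified device model (not the circuit of Ref. 220): one update
# pulse of amplitude v and duration t changes the conductance by K_DEVICE*v*t.
# Real memristors respond nonlinearly; this only illustrates the control flow.
K_DEVICE = 10.0                  # conductance change per volt-second, S/(V s)

def pulse_for_delta_w(delta_w, R0=1e4, t=1e-6):
    """Amplitude of a fixed-duration pulse realizing a desired weight change.

    From Eq. (11), W = R0*(Gs - G), so dW = -R0*dG; the required
    conductance change is therefore dG = -dW / R0.
    """
    delta_g = -delta_w / R0
    return delta_g / (K_DEVICE * t)

def apply_pulse(G, v, t=1e-6):
    """Programming step: the conductance moves by K_DEVICE*v*t (floored at 0)."""
    return max(G + K_DEVICE * v * t, 0.0)

# One illustrative update: a gradient step requests a weight change of -0.1
G = 1e-4                         # current conductance, S
v = pulse_for_delta_w(-0.1)      # approx 1.0 V
G_new = apply_pulse(G, v)        # approx 1.1e-4 S, i.e., the weight drops by 0.1
print(v, G_new)
```

In a real array the pulse response is nonlinear and asymmetric, which is exactly why the dedicated programming blocks MB2 and MB4 described above are needed.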

FIG. 11.

Peripheral circuits for memristive neural networks. (a) An overall architecture for the BP algorithm implementation, including four major steps, i.e., (1) FP, (2) BP to the output layer, (3) BP to the hidden layer, and (4) synaptic weight updating process. (b) Analog BP learning circuits for memristive neural networks, where MB1 performs the calculation of the sigmoid and its derivative functions, MB2 and MB4 calculate the amplitude of the weight update pulse, and MU4 and MU5 store the updated values. (c) Deep neural network implementation with BP learning using memristive crossbar arrays for FP, BP, and weight update. (d) Circuit implementations of the main blocks, including MB1, MB2, MB3, MB4, Sigmoid, and multiplication circuits. Reproduced with permission from Krestinskaya et al., IEEE Trans. Circuits Syst. I 66, 719–732 (2019). Copyright 2019 IEEE.220 


Training and testing are the standard processes for ANNs, especially DNNs. During the training process, the training data are utilized to minimize the error by optimizing the parameters of the learning system. The testing process, on the other hand, is usually a straightforward evaluation of the network output on testing data (which have no overlap with the training data). The frequent reading and writing of memristive synaptic weights demands fast and endurable programming of memristors. In addition, there is a trade-off between hardware cost and testing accuracy.

For ANNs with only one or two layers, the testing accuracy is lower than that of DNNs (such as CNNs and RNNs). The performance can be improved with large-scale systems to solve more complex tasks.

CNNs, however, suffer from excessive power dissipation and poor scalability, and the implementation of a fully on-chip system without a software part needs to be further investigated. Recently, research efforts on network pruning and precision reduction have demonstrated that low-precision neural networks, such as binary neural networks (BNNs)198 and quantized neural networks (QNNs)13 based on CNNs, are capable of carrying out relatively complex pattern recognition tasks such as MNIST, CIFAR-10, and CIFAR-100. In 2018, Ambrogio et al.201 presented a hybrid software-hardware convolutional neural network to classify the widely used CIFAR-10 and CIFAR-100 datasets with accuracies of 88.29% and 67.96%, respectively [see Figs. 12(b) and 12(c)]. Training and testing accuracies on the MNIST, MNIST-backrand, CIFAR-10, and CIFAR-100 datasets are listed in Fig. 13(a). The simulation results of training on the MNIST dataset with different techniques are also shown in Fig. 13(b). Efforts are needed to develop new algorithms that can exploit the unique properties of memristive devices to realize compact and energy-efficient memristive brain-inspired computing systems (see Table IV).
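As an illustration of the precision-reduction idea (a common BNN heuristic, not necessarily the exact scheme of Refs. 198 and 13), weights can be reduced to a single scale factor times their sign, which maps naturally onto just two memristive conductance levels:

```python
import numpy as np

def binarize(W):
    """BNN-style quantization: keep only the sign of each weight, scaled by
    the mean absolute value (a common binarization heuristic).  The two
    resulting levels, +alpha and -alpha, map onto two conductance states,
    sidestepping the limited analog precision of real devices."""
    alpha = np.mean(np.abs(W))
    return alpha * np.sign(W)

# Toy weight matrix (assumed values)
W = np.array([[0.5, -0.5], [-0.2, 0.2]])
Wb = binarize(W)
print(Wb)   # approx [[0.35, -0.35], [-0.35, 0.35]]
```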

FIG. 12.

Mixed hardware–software results on the MNIST-backrand, CIFAR-10, and CIFAR-100 datasets. (a) Transfer-learning experimental test accuracies for MNIST-backrand, (b) experimental test accuracies for CIFAR-10, and (c) experimental test accuracies for CIFAR-100. Reproduced with permission from Ambrogio et al., Nature 558, 60 (2018). Copyright 2018 Springer Nature.201 

FIG. 13.

Accuracy comparison and effect of different techniques. (a) Training and test accuracies and (b) simulation results of PCM hardware training on the MNIST dataset. Reproduced with permission from Ambrogio et al., Nature 558, 60 (2018). Copyright 2018 Springer Nature.201 


RNNs are better suited to other tasks such as associative learning214 and the traveling salesman problem.131 However, refinement of the circuits and architecture, improved scalability, and a broader range of applications are still required.

SNNs still lag behind ANNs in terms of accuracy. Taking the handwritten digit recognition task as an example, the accuracy of memristive SNNs is only 82%,125 while the accuracy of memristive ANNs reaches 94%.218 The current challenge is the lack of highly efficient training algorithms and of spike-based data. Scalable on-chip neurons that produce spikes with a small chip area and low power dissipation need to be developed to work with memristive synapses.

Memristor-based brain-inspired computing is still in its infancy, with abundant opportunities and challenges. Innovations are critical not only in the materials and devices but also in the circuits and systems. Research should better exploit the underlying device physics and brain-inspired algorithms. In addition, novel materials and devices may further advance memristive brain-inspired computing systems. To make further progress toward efficient implementations of memristor-based brain-inspired computing systems, the key challenges and potential strategies at the device, circuit, and system levels are listed in Table IV.

At the device level, more reliable and practical memristive devices and crossbar arrays are required. The memristive switching mechanisms need to be understood more thoroughly to further improve device performance. Furthermore, more theoretical efforts are needed to design material stacks based on this improved understanding of the mechanisms (see Table IV).

At the circuit level, memristive circuits need to be scaled up effectively with efficient read/write schemes. Equipping memristors with transistor-based selectors or rectifying capabilities can reduce the sneak path currents. To overcome device variations, low-precision algorithms and systems such as fuzzy learning methods and quantized neural networks can be beneficial. The fuzzy modeling method can be used to define the memristive states in a dynamic way. QNNs (including BNNs) can simultaneously accelerate and compress the learning process with negligible loss in training accuracy. Large-scale memristor-based circuits for brain-inspired computing systems are also required to further increase the computation speed (see Table IV).

Finally, at the system level, the learning algorithms of memristive ANNs and SNNs are still under development. Practical network topologies and learning algorithms for both ANNs and SNNs are required to develop a general computing system covering data mapping, dot products, and STDP. In addition, ANNs and SNNs using RL can be further explored to solve practical problems beyond the existing demonstrations (see Table IV).

In summary, this article provides a comprehensive review of various switching mechanisms of memristors and neural and synaptic functionalities that can be potentially implemented in memristive devices, circuits, and systems. The challenges and potential solutions have been discussed at the device, circuit, and system levels. We believe that this article will stimulate more research efforts in this exciting field.

This study was partially supported by the National Natural Science Foundation of China (No. 61806129) and China Post-Doctoral Science Foundation (Nos. 2018M640820 and 2019T120751) and partially supported by the U.S. Air Force Research Laboratory (AFRL) (Grant No. FA8750-15-2-0044). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of AFRL.

1.
X.
Hu
,
S.
Duan
,
G.
Chen
, and
L.
Chen
, “
Modeling affections with memristor-based associative memory neural networks
,”
Neurocomputing
223
,
129
137
(
2017
).
2.
Z.
Wang
and
X.
Wang
, “
A novel memristor-based circuit implementation of full-function pavlov associative memory accorded with biological feature
,”
IEEE Trans. Circuits Syst. I
65
,
2210
2220
(
2018
).
3.
S.
Adee
, “
Ibm unveils a new brain simulator
,” in
IEEE Spectrum
(IEEE,
2009
).
4.
Y.
Zhang
,
X.
Wang
, and
E. G.
Friedman
, “
Memristor-based circuit design for multilayer neural networks
,”
IEEE Trans. Circuits Syst. I
65
,
677
686
(
2018
).
5.
Y.
Zhang
,
Y.
Shen
,
X.
Wang
, and
L.
Cao
, “
A novel design for memristor-based logic switch and crossbar circuits
,”
IEEE Trans. Circuits Syst. I
62
,
1402
1411
(
2015
).
6.
L. O.
Chua
, “
Memristor–the missing circuit element
,”
IEEE Trans. Circuit Theory
CT-18
,
507
519
(
1971
).
7.
D. B.
Strukov
,
G. S.
Snider
,
D. R.
Stewart
, and
R. S.
Williams
, “
The missing memristor found
,”
Nature
453
,
80
83
(
2008
).
8.
P.
Junsangsri
and
F.
Lombardi
, “
Design of a hybrid memory cell using memristance and ambipolarity
,”
IEEE Trans. Nanotechnol.
12
,
71
80
(
2013
).
9.
Y.
Zhang
,
Y.
Li
,
X.
Wang
, and
E. G.
Friedman
, “
Synaptic characteristics of ag/aginsbte/ta-based memristor for pattern recognition applications
,”
IEEE Trans. Electron Devices
64
,
1806
1811
(
2017
).
10.
K.
Moon
,
E.
Cha
,
J.
Park
,
S.
Gi
,
M.
Chu
,
K.
Baek
,
B.
Lee
,
S.
Oh
, and
H.
Hwang
, “
High density neuromorphic system with Mo/Pr0.7Ca0.3MnO3 synapse and NbO2 imt oscillator neuron
,” in
IEEE International Electron Devices Meeting (IEDM)
(IEEE,
2015
), p.
17
.
11.
S. N.
Truong
and
K.-S.
Min
, “
New memristor-based crossbar array architecture with 50-% area reduction and 48-% power saving for matrix-vector multiplication of analog neuromorphic computing
,”
J. Semicond. Technol. Sci.
14
,
356
363
(
2014
).
12.
Z.
Wang
,
S.
Joshi
,
S.
Savelev
,
W.
Song
,
R.
Midya
,
Y.
Li
,
M.
Rao
,
P.
Yan
,
S.
Asapu
,
Y.
Zhuo
 et al, “
Fully memristive neural networks for pattern classification with unsupervised learning
,”
Nat. Electron.
1
,
137
(
2018
).
13.
Y.
Zhang
,
M.
Cui
,
L.
Shen
, and
Z.
Zeng
, “
Memristive quantized neural networks: A novel approach to accelerate deep learning on-chip
,”
IEEE Trans. Cybern.
(published online).
14.
S. B.
Eryilmaz
,
D.
Kuzum
,
R. G.
Jeyasingh
,
S.
Kim
,
M.
BrightSky
,
C.
Lam
, and
H.-S. P.
Wong
, “
Experimental demonstration of array-level learning with phase change synaptic devices
,” in
IEEE International Electron Devices Meeting (IEDM)
(
IEEE
,
2013
), p.
25
.
15.
S. B.
Eryilmaz
,
D.
Kuzum
,
R.
Jeyasingh
,
S.
Kim
,
M.
BrightSky
,
C.
Lam
, and
H.-S. P.
Wong
, “
Brain-like associative learning using a nanoscale non-volatile phase change synaptic device array
,”
Front. Neurosci
8
,
205
(
2014
).
16.
Y.
Zhang
,
M.
Cui
,
Y.
Liu
, and
L.
Shen
, “
Hybrid cmos-memristive convolutional computation for on-chip learning
,”
Neurocomputing
355
,
48
56
(
2019
).
17.
Y.
Lecun
,
Y.
Bengio
, and
G.
Hinton
, “
Deep learning
,”
Nature
521
,
436
(
2015
).
18.
A. P.
James
,
I.
Fedorova
,
T.
Ibrayev
, and
D.
Kudithipudi
, “
Htm spatial pooler with memristor crossbar circuits for sparse biometric recognition
,”
IEEE Trans. Biomed. Circuits Syst.
11
,
640
651
(
2017
).
19.
S. N.
Truong
,
K. V.
Pham
, and
K.-S.
Min
, “
Spatial-pooling memristor crossbar converting sensory information to sparse distributed representation of cortical neurons
,”
IEEE Trans. Nanotechnol.
17
,
482
491
(
2018
).
20.
I.
Higgins
,
S.
Stringer
, and
J.
Schnupp
, “
Unsupervised learning of temporal features for word categorization in a spiking neural network model of the auditory brain
,”
PLoS One
12
,
e0180174
(
2017
).
21.
Z.
Wang
,
C.
Li
,
W.
Song
,
M.
Rao
,
D.
Belkin
,
Y.
Li
 et al, “
Reinforcement learning with analogue memristor arrays
,”
Nat. Electron.
2
,
115
124
(
2019
).
22.
X.
Ji
,
Y.
Zhang
,
C.
Li
,
T.
Wu
, and
X.
Hu
, “
Reinforcement learning in memristive spiking neural networks through modulation of resume
,”
AIP Conf. Proc.
2073
,
020094
(
2019
).
23.
T.
Tuma
,
A.
Pantazi
,
M. L.
Gallo
,
A.
Sebastian
, and
E.
Eleftheriou
, “
Stochastic phase-change neurons
,”
Nat. Nanotechnol.
11
,
693
(
2016
).
24.
S. A.
Aamir
,
P.
Müller
,
G.
Kiene
,
L.
Kriener
,
Y.
Stradmann
,
A.
Grübl
,
J.
Schemmel
, and
K.
Meier
, “
A mixed-signal structured adex neuron for accelerated neuromorphic cores
,”
IEEE Trans. Biomed. Circuits Syst.
12
,
1027
1037
(
2018
).
25.
Y.
Yang
,
P.
Gao
,
S.
Gaba
,
T.
Chang
,
X.
Pan
, and
W.
Lu
, “
Observation of conducting filament growth in nanoscale resistive memories
,”
Nat. Commun.
3
,
732
(
2012
).
26.
H.
Zhao
,
Z.
Dong
,
H.
Tian
,
D.
DiMarzi
,
M.-G.
Han
,
L.
Zhang
,
X.
Yan
,
F.
Liu
,
L.
Shen
,
S.-J.
Han
 et al, “
Atomically thin femtojoule memristive device
,”
Adv. Mater.
29
,
1703232
(
2017
).
27.
J.-Y.
Chen
,
C.-L.
Hsin
,
C.-W.
Huang
,
C.-H.
Chiu
,
Y.-T.
Huang
,
S.-J.
Lin
,
W.-W.
Wu
, and
L.-J.
Chen
, “
Dynamic evolution of conducting nanofilament in resistive switching memories
,”
Nano Lett.
13
,
3671
3677
(
2013
).
28.
D.-H.
Kwon
,
K. M.
Kim
,
J. H.
Jang
,
J. M.
Jeon
,
M. H.
Lee
,
G. H.
Kim
,
X.-S.
Li
,
G.-S.
Park
,
B.
Lee
,
S.
Han
 et al, “
Atomic structure of conducting nanofilaments in TiO2 resistive switching memory
,”
Nat. Nanotechnol.
5
,
148
(
2010
).
29.
K.
Baek
,
S.
Park
,
J.
Park
,
Y.-M.
Kim
,
H.
Hwang
, and
S. H.
Oh
, “
In situ tem observation on the interface-type resistive switching by electrochemical redox reactions at a tin/pcmo interface
,”
Nanoscale
9
,
582
593
(
2017
).
30.
R.
Wutting
,
Phase Change Materials: Science and Applications
(
Springer
,
2009
).
31.
W.
Zhang
,
R.
Mazzarello
,
M.
Wuttig
, and
E.
Ma
, “
Designing crystallization in phase-change materials for universal memory and neuro-inspired computing
,”
Nat. Rev. Mater.
4
,
150
168
(
2019
).
32.
M.
Salinga
,
E.
Carria
,
A.
Kaldenbach
,
M.
Bornhöfft
,
J.
Benke
,
J.
Mayer
, and
M.
Wuttig
, “
Measurement of crystal growth velocity in a melt-quenched phase-change material
,”
Nat. Commun.
4
,
2371
(
2013
).
33.
J. J.
Yang
,
M. D.
Pickett
,
X.
Li
,
D. A.
Ohlberg
,
D. R.
Stewart
, and
R. S.
Williams
, “
Memristive switching mechanism for metal/oxide/metal nanodevices
,”
Nat. Nanotechnol.
3
,
429
(
2008
).
34.
R.
Waser
,
R.
Dittmann
,
G.
Staikov
, and
K.
Szot
, “
Redox-based resistive switching memories–nanoionic mechanisms, prospects, and challenges
,”
Adv. Mater.
21
,
2632
2663
(
2009
).
35.
I.
Valov
,
R.
Waser
,
J. R.
Jameson
, and
M. N.
Kozicki
, “
Electrochemical metallization memories–fundamentals, applications, prospects
,”
Nanotechnology
22
,
254003
(
2011
).
36.
C.
Schindler
,
G.
Staikov
, and
R.
Waser
, “
Electrode kinetics of Cu–SiO2-based resistive switching cells: Overcoming the voltage-time dilemma of electrochemical metallization memories
,”
Appl. Phys. Lett.
94
,
072109
(
2009
).
37.
D.
Ielmini
, “
Resistive switching memories based on metal oxides: Mechanisms, reliability and scaling
,”
Semicond. Sci. Technol.
31
,
063002
(
2016
).
38.
H.
Akinaga
and
H.
Shima
, “
Resistive random access memory (reram) based on metal oxides
,”
Proc. IEEE
98
,
2237
2251
(
2010
).
39.
J.
Lee
and
W. D.
Lu
, “
On-demand reconfiguration of nanomaterials: When electronics meets ionics
,”
Adv. Mater.
30
,
1702770
(
2018
).
40.
F.
Pan
,
S.
Gao
,
C.
Chen
,
C.
Song
, and
F.
Zeng
, “
Recent progress in resistive random access memories: Materials, switching mechanisms, and performance
,”
Mater. Sci. Eng., R
83
,
1
59
(
2014
).
41.
D.
Ielmini
,
R.
Bruchhaus
, and
R.
Waser
, “
Thermochemical resistive switching: Materials, mechanisms, and scaling projections
,”
Phase Transitions
84
,
570
602
(
2011
).
42.
H.-S. P.
Wong
,
S.
Raoux
,
S.
Kim
,
J.
Liang
,
J. P.
Reifenberg
,
B.
Rajendran
,
M.
Asheghi
, and
K. E.
Goodson
, “
Phase change memory
,”
Proc. IEEE
98
,
2201
2227
(
2010
).
43.
S. G.
Sarwat
, “
Materials science and engineering of phase change random access memory
,”
Mater. Sci. Technol.
33
,
1890
1906
(
2017
).
44.
P.
Noé
,
C.
Vallée
,
F.
Hippert
,
F.
Fillot
, and
J.-Y.
Raty
, “
Phase-change materials for non-volatile memory devices: From technological challenges to materials science issues
,”
Semicond. Sci. Technol.
33
,
013002
(
2018
).
45.
M. D.
Pickett
,
G.
Medeiros-Ribeiro
, and
R. S.
Williams
, “
A scalable neuristor built with mott memristors
,”
Nat. Mater.
12
,
114
(
2013
).
46.
M.
Lee
,
W.
Lee
,
S.
Choi
,
J.-W.
Jo
,
J.
Kim
,
S. K.
Park
, and
Y.-H.
Kim
, “
Brain-inspired photonic neuromorphic devices using photodynamic amorphous oxide semiconductors and their persistent photoconductivity
,”
Adv. Mater.
29
,
1700951
(
2017
).
47.
H.-K.
He
,
R.
Yang
,
W.
Zhou
,
H.-M.
Huang
,
J.
Xiong
,
L.
Gan
,
T.-Y.
Zhai
, and
X.
Guo
, “
Photonic potentiation and electric habituation in ultrathin memristive synapses based on monolayer MoS2
,”
Small
14
,
1800079
(
2018
).
48.
A.
Chanthbouala
,
V.
Garcia
,
R. O.
Cherifi
,
K.
Bouzehouane
,
S.
Fusil
,
X.
Moya
,
S.
Xavier
,
H.
Yamada
,
C.
Deranlot
,
N. D.
Mathur
 et al, “
A ferroelectric memristor
,”
Nat. Mater.
11
,
860
(
2012
).
49.
D.
Kim
,
H.
Lu
,
S.
Ryu
,
C.-W.
Bark
,
C.-B.
Eom
,
E.
Tsymbal
, and
A.
Gruverman
, “
Ferroelectric tunnel memristor
,”
Nano Lett.
12
,
5697
5702
(
2012
).
50.
Z.
Wang
,
M.
Rao
,
R.
Midya
,
S.
Joshi
,
H.
Jiang
,
P.
Lin
,
W.
Song
,
S.
Asapu
,
Y.
Zhuo
,
C.
Li
 et al, “
Threshold switching of Ag or Cu in dielectrics: Materials, mechanism, and applications
,”
Adv. Funct. Mater.
28
,
1704862
(
2018
).
51.
I.
Valov
, “
Interfacial interactions and their impact on redox-based resistive switching memories (rerams)
,”
Semicond. Sci. Technol.
32
,
093006
(
2017
).
52.
K. M.
Kim
,
D. S.
Jeong
, and
C. S.
Hwang
, “
Nanofilamentary resistive switching in binary oxide system; a review on the present status and outlook
,”
Nanotechnology
22
,
254002
(
2011
).
53.
X.
Tian
,
S.
Yang
,
M.
Zeng
,
L.
Wang
,
J.
Wei
,
Z.
Xu
,
W.
Wang
, and
X.
Bai
, “
Bipolar electrochemical mechanism for mass transfer in nanoionic resistive memories
,”
Adv. Mater.
26
,
3649
3654
(
2014
).
54.
H.
Ling
,
M.
Yi
,
M.
Nagai
,
L.
Xie
,
L.
Wang
,
B.
Hu
, and
W.
Huang
, “
Controllable organic resistive switching achieved by one-step integration of cone-shaped contact
,”
Adv. Mater.
29
,
1701333
(
2017
).
55.
K.
Liu
,
L.
Qin
,
X.
Zhang
,
J.
Zhu
,
X.
Sun
,
K.
Yang
,
Y.
Cai
,
Y.
Yang
, and
R.
Huang
, “
Interfacial redox processes in memristive devices based on valence change and electrochemical metallization
,”
Faraday Discuss.
213
,
41
52
(
2019
).
56.
Y.
Yang
,
P.
Gao
,
L.
Li
,
X.
Pan
,
S.
Tappertzhofen
,
S.
Choi
,
R.
Waser
,
I.
Valov
, and
W. D.
Lu
, “
Electrochemical dynamics of nanoscale metallic inclusions in dielectrics
,”
Nat. Commun.
5
,
4232
(
2014
).
57.
X.
Zhao
,
J.
Ma
,
X.
Xiao
,
Q.
Liu
,
L.
Shao
,
D.
Chen
,
S.
Liu
,
J.
Niu
,
X.
Zhang
,
Y.
Wang
 et al, “
Breaking the current-retention dilemma in cation-based resistive switching devices utilizing graphene with controlled defects
,”
Adv. Mater.
30
,
1705193
(
2018
).
58.
J. S.
Han
,
Q. V.
Le
,
J.
Choi
,
K.
Hong
,
C. W.
Moon
,
T. L.
Kim
,
H.
Kim
,
S. Y.
Kim
, and
H. W.
Jang
, “
Air-stable cesium lead iodide perovskite for ultra-low operating voltage resistive switching
,”
Adv. Funct. Mater.
28
,
1705783
(
2018
).
59.
K.-L.
Lin
,
T.-H.
Hou
,
J.
Shieh
,
J.-H.
Lin
,
C.-T.
Chou
, and
Y.-J.
Lee
, “
Electrode dependence of filament formation in hfo2 resistive-switching memory
,”
J. Appl. Phys.
109
,
084104
(
2011
).
60.
K.
Terabe
,
T.
Hasegawa
,
T.
Nakayama
, and
M.
Aono
, “
Quantized conductance atomic switch
,”
Nature
433
,
47
(
2005
).
61.
T.
Tamura
,
T.
Hasegawa
,
K.
Terabe
,
T.
Nakayama
,
T.
Sakamoto
,
H.
Sunamura
,
H.
Kawaura
,
S.
Hosaka
, and
M.
Aono
, “
Material dependence of switching speed of atomic switches made from silver sulfide and from copper sulfide
,”
J. Phys.: Conf. Ser.
61
,
1157
(
2007
).
62.
M.
Aono
and
T.
Hasegawa
, “
The atomic switch
,”
Proc. IEEE
98
,
2228
2236
(
2010
).
63.
S. H.
Jo
,
T.
Chang
,
I.
Ebong
,
B. B.
Bhadviya
,
P.
Mazumder
, and
W.
Lu
, “
Nanoscale memristor device as synapse in neuromorphic systems
,”
Nano Lett.
10
,
1297
1301
(
2010
).
64.
S. H.
Jo
,
K.-H.
Kim
, and
W.
Lu
, “
High-density crossbar arrays based on a Si memristive system
,”
Nano Lett.
9
,
870
874
(
2009
).
65.
K.
Krishnan
,
T.
Tsuruoka
,
C.
Mannequin
, and
M.
Aono
, “
Mechanism for conducting filament growth in self-assembled polymer thin films for redox-based atomic switches
,”
Adv. Mater.
28
,
640
648
(
2016
).
66.
Q.
Liu
,
S.
Long
,
H.
Lv
,
W.
Wang
,
J.
Niu
,
Z.
Huo
,
J.
Chen
, and
M.
Liu
, “
Controllable growth of nanoscale conductive filaments in solid-electrolyte-based reram by using a metal nanocrystal covered bottom electrode
,”
ACS Nano
4
,
6162
6168
(
2010
).
67.
X.
Guo
,
C.
Schindler
,
S.
Menzel
, and
R.
Waser
, “
Understanding the switching-off mechanism in Ag+ migration based resistively switching model systems
,”
Appl. Phys. Lett.
91
,
133513
(
2007
).
68.
Y.
Yang
,
B.
Chen
, and
W. D.
Lu
, “
Memristive physically evolving networks enabling the emulation of heterosynaptic plasticity
,”
Adv. Mater.
27
,
7720
7727
(
2015
).
69.
Z.-H.
Tan
,
R.
Yang
,
K.
Terabe
,
X.-B.
Yin
,
X.-D.
Zhang
, and
X.
Guo
, “
Synaptic metaplasticity realized in oxide memristive devices
,”
Adv. Mater.
28
,
377
384
(
2016
).
70.
Z.
Wang
,
S.
Joshi
,
S. E.
Savelev
,
H.
Jiang
,
R.
Midya
,
P.
Lin
,
M.
Hu
,
N.
Ge
,
J. P.
Strachan
,
Z.
Li
 et al, “
Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing
,”
Nat. Mater.
16
,
101
(
2017
).
71.
T.
Ohno
,
T.
Hasegawa
,
T.
Tsuruoka
,
K.
Terabe
,
J. K.
Gimzewski
, and
M.
Aono
, “
Short-term plasticity and long-term potentiation mimicked in single inorganic synapses
,”
Nat. Mater.
10
,
591
(
2011
).
72.
I.
Valov
,
I.
Sapezanskaia
,
A.
Nayak
,
T.
Tsuruoka
,
T.
Bredow
,
T.
Hasegawa
,
G.
Staikov
,
M.
Aono
, and
R.
Waser
, “
Atomically controlled electrochemical nucleation at superionic solid electrolyte surfaces
,”
Nat. Mater.
11
,
530
(
2012
).
73.
A. A.
Bessonov
,
M. N.
Kirikova
,
D. I.
Petukhov
,
M.
Allen
,
T.
Ryhänen
, and
M. J.
Bailey
, “
Layered memristive and memcapacitive switches for printable electronics
,”
Nat. Mater.
14
,
199
(
2015
).
74.
J. H.
Yoon
,
Z.
Wang
,
K. M.
Kim
,
H.
Wu
,
V.
Ravichandran
,
Q.
Xia
,
C. S.
Hwang
, and
J. J.
Yang
, “
An artificial nociceptor based on a diffusive memristor
,”
Nat. Commun.
9
,
417
(
2018
).
75.
H.-S. P.
Wong
,
H.-Y.
Lee
,
S.
Yu
,
Y.-S.
Chen
,
Y.
Wu
,
P.-S.
Chen
,
B.
Lee
,
F. T.
Chen
, and
M.-J.
Tsai
, “
Metal–oxide rram
,”
Proc. IEEE
100
,
1951
1970
(
2012
).
76.
J. P.
Strachan
,
M. D.
Pickett
,
J. J.
Yang
,
S.
Aloni
,
A.
David Kilcoyne
,
G.
Medeiros-Ribeiro
, and
R.
Stanley Williams
, “
Direct identification of the conducting channels in a functioning memristive device
,”
Adv. Mater.
22
,
3573
3577
(
2010
).
77.
J. J.
Yang
,
F.
Miao
,
M. D.
Pickett
,
D. A.
Ohlberg
,
D. R.
Stewart
,
C. N.
Lau
, and
R. S.
Williams
, “
The mechanism of electroforming of metal oxide memristive switches
,”
Nanotechnology
20
,
215201
(
2009
).
78.
H.
Jiang
,
L.
Han
,
P.
Lin
,
Z.
Wang
,
M. H.
Jang
,
Q.
Wu
,
M.
Barnell
,
J. J.
Yang
,
H. L.
Xin
, and
Q.
Xia
, “
Sub-10 nm ta channel responsible for superior performance of a hfo2 memristor
,”
Sci. Rep.
6
,
28525
(
2016
).
79.
J. J.
Yang
,
M.-X.
Zhang
,
J. P.
Strachan
,
F.
Miao
,
M. D.
Pickett
,
R. D.
Kelley
,
G.
Medeiros-Ribeiro
, and
R. S.
Williams
, “
High switching endurance in TaOx memristive devices
,”
Appl. Phys. Lett.
97
,
232102
(
2010
).
80.
A.
Wedig
,
M.
Luebben
,
D.-Y.
Cho
,
M.
Moors
,
K.
Skaja
,
V.
Rana
,
T.
Hasegawa
,
K. K.
Adepalli
,
B.
Yildiz
,
R.
Waser
 et al, “
Nanoscale cation motion in TaOx, HfOx and TiOx memristive systems
,”
Nat. Nanotechnol.
11
,
67
(
2016
).
81.
T.
Dongale
,
S.
Mohite
,
A.
Bagade
,
P.
Gaikwad
,
P.
Patil
,
R.
Kamat
, and
K.
Rajpure
, “
Development of Ag/WO3/ITO thin film memristor using spray pyrolysis method
,”
Electron. Mater. Lett.
11
,
944
948
(
2015
).
82. C. Baeumer, C. Funck, A. Locatelli, T. O. Mentes, F. Genuzio, T. Heisig, F. Hensling, N. Raab, C. M. Schneider, S. Menzel et al., “In-gap states and band-like transport in memristive devices,” Nano Lett. 19, 54–60 (2019).
83. W. Li, X. Liu, Y. Wang, Z. Dai, W. Wu, L. Cheng, Y. Zhang, Q. Liu, X. Xiao, and C. Jiang, “Design of high-performance memristor cell using W-implanted SiO2 films,” Appl. Phys. Lett. 108, 833 (2016).
84. J.-Y. Chen, C.-W. Huang, C.-H. Chiu, Y.-T. Huang, and W.-W. Wu, “Switching kinetic of VCM-based memristor: Evolution and positioning of nanofilament,” Adv. Mater. 27, 5028–5033 (2015).
85. H. Sun, Z. Yang, M. Wei, W. Sun, X. Li, S. Ye, Y. Zhao, H. Tan, E. L. Kynaston, T. B. Schon et al., “Chemically addressable perovskite nanocrystals for light-emitting applications,” Adv. Mater. 29, 1701153 (2017).
86. M.-J. Lee, C. B. Lee, D. Lee, S. R. Lee, M. Chang, J. H. Hur, Y.-B. Kim, C.-J. Kim, D. H. Seo, S. Seo et al., “A fast, high-endurance and scalable non-volatile memory device made from asymmetric Ta2O5−x/TaO2−x bilayer structures,” Nat. Mater. 10, 625 (2011).
87. L. Hu, S. Fu, Y. Chen, H. Cao, L. Liang, H. Zhang, J. Gao, J. Wang, and F. Zhuge, “Ultrasensitive memristive synapses based on lightly oxidized sulfide films,” Adv. Mater. 29, 1606927 (2017).
88. S. Pi, C. Li, H. Jiang, W. Xia, H. Xin, J. J. Yang, and Q. Xia, “Memristor crossbar arrays with 6-nm half-pitch and 2-nm critical dimension,” Nat. Nanotechnol. 14, 35 (2019).
89. C. Li, L. Han, H. Jiang, M.-H. Jang, P. Lin, Q. Wu, M. Barnell, J. J. Yang, H. L. Xin, and Q. Xia, “Three-dimensional crossbar arrays of self-rectifying Si/SiO2/Si memristors,” Nat. Commun. 8, 15666 (2017).
90. S. Park, S. Jung, M. Siddik, M. Jo, J. Lee, J. Park, W. Lee, S. Kim, S. M. Sadaf, X. Liu et al., “Memristive switching behavior in Pr0.7Ca0.3MnO3 by incorporating an oxygen-deficient layer,” Phys. Status Solidi RRL 5, 409–411 (2011).
91. A. Herpers, C. Lenser, C. Park, F. Offi, F. Borgatti, G. Panaccione, S. Menzel, R. Waser, and R. Dittmann, “Spectroscopic proof of the correlation between redox-state and charge-carrier transport at the interface of resistively switching Ti/PCMO devices,” Adv. Mater. 26, 2730–2735 (2014).
92. B. Arndt, F. Borgatti, F. Offi, M. Phillips, P. Parreira, T. Meiners, S. Menzel, K. Skaja, G. Panaccione, D. A. MacLaren et al., “Spectroscopic indications of tunnel barrier charging as the switching mechanism in memristive devices,” Adv. Funct. Mater. 27, 1702282 (2017).
93. Z. Q. Wang, H. Y. Xu, X. H. Li, H. Yu, Y. C. Liu, and X. J. Zhu, “Synaptic learning and memory functions achieved using oxygen ion migration/diffusion in an amorphous InGaZnO memristor,” Adv. Funct. Mater. 22, 2759–2765 (2012).
94. J. J. Yang, D. B. Strukov, and D. R. Stewart, “Memristive devices for computing,” Nat. Nanotechnol. 8, 13 (2013).
95. J. J. Yang and Q. Xia, “Organic electronics: Battery-like artificial synapses,” Nat. Mater. 16, 396 (2017).
96. S. Kumar, Z. Wang, X. Huang, N. Kumari, N. Davila, J. P. Strachan, D. Vine, A. D. Kilcoyne, Y. Nishi, and R. S. Williams, “Conduction channel formation and dissolution due to oxygen thermophoresis/diffusion in hafnium oxide memristors,” ACS Nano 10, 11205–11210 (2016).
97. J. J. Yang, I. H. Inoue, T. Mikolajick, and C. S. Hwang, “Metal oxide memories based on thermochemical and valence change mechanisms,” MRS Bull. 37, 131–137 (2012).
98. X. L. Shao, K. M. Kim, K. J. Yoon, S. J. Song, J. H. Yoon, H. J. Kim, T. H. Park, D. E. Kwon, Y. J. Kwon, Y. M. Kim et al., “A study of the transition between the non-polar and bipolar resistance switching mechanisms in the TiN/TiO2/Al memory,” Nanoscale 8, 16455–16466 (2016).
99. C.-Y. Liu, J.-Y. Ho, J.-J. Huang, and H.-Y. Wang, “Transient current of resistive switching of a NiOx resistive memory,” Jpn. J. Appl. Phys. 51, 041101 (2012).
100. W. Sun, B. Gao, M. Chi, Q. Xia, J. J. Yang, H. Qian, and H. Wu, “Understanding memristive switching via in situ characterization and device modeling,” Nat. Commun. 10, 3453 (2019).
101. I. H. Inoue, S. Yasuda, H. Akinaga, and H. Takagi, “Nonpolar resistance switching of metal/binary-transition-metal oxides/metal sandwiches: Homogeneous/inhomogeneous transition of current distribution,” Phys. Rev. B 77, 035105 (2008).
102. H. Shima, F. Takano, H. Akinaga, Y. Tamai, I. H. Inoue, and H. Takagi, “Resistance switching in the metal deficient-type oxides: NiO and CoO,” Appl. Phys. Lett. 91, 012901 (2007).
103. T. Hickmott, “Low-frequency negative resistance in thin anodic oxide films,” J. Appl. Phys. 33, 2669–2682 (1962).
104. J. Simmons and R. Verderber, “New conduction and reversible memory phenomena in thin insulating films,” Proc. R. Soc. London, Ser. A 301, 77–102 (1967).
105. S. W. Fong, C. M. Neumann, and H.-S. P. Wong, “Phase-change memory–towards a storage-class memory,” IEEE Trans. Electron Devices 64, 4374–4385 (2017).
106. S. Raoux, G. W. Burr, M. J. Breitwisch, C. T. Rettner, Y.-C. Chen, R. M. Shelby, M. Salinga, D. Krebs, S.-H. Chen, H.-L. Lung et al., “Phase-change random access memory: A scalable technology,” IBM J. Res. Dev. 52, 465 (2008).
107. E. R. Meinders, A. V. Mijiritskii, L. Van Pieterson, and M. Wuttig, Optical Data Storage: Phase-Change Media and Recording (Springer Science & Business Media, 2006), Vol. 4.
108. M. Suri, O. Bichler, D. Querlioz, B. Traoré, O. Cueto, L. Perniola, V. Sousa, D. Vuillaume, C. Gamrat, and B. DeSalvo, “Physical aspects of low power synapses based on phase change memory devices,” J. Appl. Phys. 112, 054904 (2012).
109. R. Jeyasingh, S. W. Fong, J. Lee, Z. Li, K.-W. Chang, D. Mantegazza, M. Asheghi, K. E. Goodson, and H.-S. P. Wong, “Ultrafast characterization of phase-change material crystallization properties in the melt-quenched amorphous phase,” Nano Lett. 14, 3419–3426 (2014).
110. T. Lee and S. Elliott, “Ab initio computer simulation of the early stages of crystallization: Application to Ge2Sb2Te5 phase-change materials,” Phys. Rev. Lett. 107, 145702 (2011).
111. B.-S. Lee, G. W. Burr, R. M. Shelby, S. Raoux, C. T. Rettner, S. N. Bogle, K. Darmawikarta, S. G. Bishop, and J. R. Abelson, “Observation of the role of subcritical nuclei in crystallization of a glassy solid,” Science 326, 980–984 (2009).
112. D. Loke, T. Lee, W. Wang, L. Shi, R. Zhao, Y. Yeo, T. Chong, and S. Elliott, “Breaking the speed limits of phase-change memory,” Science 336, 1566–1569 (2012).
113. F. Rao, K. Ding, Y. Zhou, Y. Zheng, M. Xia, S. Lv, Z. Song, S. Feng, I. Ronneberger, R. Mazzarello et al., “Reducing the stochasticity of crystal nucleation to enable subnanosecond memory writing,” Science 358, 1423–1427 (2017).
114. M. Boniardi, A. Redaelli, A. Pirovano, I. Tortorelli, D. Ielmini, and F. Pellizzer, “A physics-based model of electrical conduction decrease with time in amorphous Ge2Sb2Te5,” J. Appl. Phys. 105, 084506 (2009).
115. A. Pirovano, A. L. Lacaita, F. Pellizzer, S. A. Kostylev, A. Benvenuti, and R. Bez, “Low-field amorphous state resistance and threshold voltage drift in chalcogenide materials,” IEEE Trans. Electron Devices 51, 714–719 (2004).
116. C. Ahn, S. W. Fong, Y. Kim, S. Lee, A. Sood, C. M. Neumann, M. Asheghi, K. E. Goodson, E. Pop, and H.-S. P. Wong, “Energy-efficient phase-change memory with graphene as a thermal barrier,” Nano Lett. 15, 6809–6814 (2015).
117. S. La Barbera, D. R. Ly, G. Navarro, N. Castellani, O. Cueto, G. Bourgeois, B. De Salvo, E. Nowak, D. Querlioz, and E. Vianello, “Narrow heater bottom electrode-based phase change memory as a bidirectional artificial synapse,” Adv. Electron. Mater. 4, 1800223 (2018).
118. Y. Chen, C. Rettner, S. Raoux, G. Burr, S. Chen, R. Shelby, M. Salinga, W. Risk, T. Happ, G. McClelland et al., “Ultra-thin phase-change bridge memory device using GeSb,” in 2006 International Electron Devices Meeting (IEEE, 2006), pp. 1–4.
119. M. Salinga, B. Kersting, I. Ronneberger, V. P. Jonnalagadda, X. T. Vu, M. L. Gallo, I. Giannopoulos, O. Cojocaru-Mirédin, R. Mazzarello, and A. Sebastian, “Monatomic phase change memory,” Nat. Mater. 17, 681 (2018).
120. S. Yu, “Neuro-inspired computing with emerging nonvolatile memorys,” Proc. IEEE 106, 260–285 (2018).
121. Z. Wang, M. Yin, T. Zhang, Y. Cai, Y. Wang, Y. Yang, and R. Huang, “Engineering incremental resistive switching in TaOx based memristors for brain-inspired computing,” Nanoscale 8, 14015–14022 (2016).
122. C. Sung, S. Lim, H. Kim, T. Kim, K. Moon, J. Song, J.-J. Kim, and H. Hwang, “Effect of conductance linearity and multi-level cell characteristics of TaOx-based synapse device on pattern recognition accuracy of neuromorphic system,” Nanotechnology 29, 115203 (2018).
123. S. Lim, C. Sung, H. Kim, T. Kim, J. Song, J.-J. Kim, and H. Hwang, “Improved synapse device with MLC and conductance linearity using quantized conduction for neuromorphic systems,” IEEE Electron Device Lett. 39, 312–315 (2018).
124. K. Moon, M. Kwak, J. Park, D. Lee, and H. Hwang, “Improved conductance linearity and conductance ratio of 1T2R synapse device for neuromorphic systems,” IEEE Electron Device Lett. 38, 1023–1026 (2017).
125. Y. Shi, L. Nguyen, S. Oh, X. Liu, F. Koushan, J. R. Jameson, and D. Kuzum, “Neuroinspired unsupervised learning and pruning with subquantum CBRAM arrays,” Nat. Commun. 9, 5312 (2018).
126. B. J. Choi, A. C. Torrezan, J. P. Strachan, P. Kotula, A. Lohn, M. J. Marinella, Z. Li, R. S. Williams, and J. J. Yang, “High-speed and low-energy nitride memristors,” Adv. Funct. Mater. 26, 5290–5296 (2016).
127. D. S. Jeong, R. Thomas, R. Katiyar, J. Scott, H. Kohlstedt, A. Petraru, and C. S. Hwang, “Emerging memories: Resistive switching mechanisms and current status,” Rep. Prog. Phys. 75, 076502 (2012).
128. S. Yu, Y. Wu, and H.-S. P. Wong, “Investigating the switching dynamics and multilevel capability of bipolar metal oxide resistive switching memory,” Appl. Phys. Lett. 98, 103514 (2011).
129. F. Xiong, A. D. Liao, D. Estrada, and E. Pop, “Low-power switching of phase-change materials with carbon nanotube electrodes,” Science 332, 568–570 (2011).
130. J. Zhu, Y. Yang, R. Jia, Z. Liang, W. Zhu, Z. U. Rehman, L. Bao, X. Zhang, Y. Cai, L. Song et al., “Ion gated synaptic transistors based on 2D van der Waals crystals with tunable diffusive dynamics,” Adv. Mater. 30, 1800195 (2018).
131. S. Kumar, J. P. Strachan, and R. S. Williams, “Chaotic dynamics in nanoscale NbO2 Mott memristors for analogue computing,” Nature 548, 318 (2017).
132. Y. Zhou and S. Ramanathan, “Mott memory and neuromorphic devices,” Proc. IEEE 103, 1289–1310 (2015).
133. V. Garcia and M. Bibes, “Ferroelectric tunnel junctions for information storage and processing,” Nat. Commun. 5, 4289 (2014).
134. S. Oh, T. Kim, M. Kwak, J. Song, J. Woo, S. Jeon, I. K. Yoo, and H. Hwang, “HfZrOx-based ferroelectric synapse device with 32 levels of conductance states for neuromorphic applications,” IEEE Electron Device Lett. 38, 732–735 (2017).
135. N. Locatelli, V. Cros, and J. Grollier, “Spin-torque building blocks,” Nat. Mater. 13, 11 (2014).
136. H. Tian, L. Zhao, X. Wang, Y.-W. Yeh, N. Yao, B. P. Rand, and T.-L. Ren, “Extremely low operating current resistive memory based on exfoliated 2D perovskite single crystals for neuromorphic computing,” ACS Nano 11, 12247–12256 (2017).
137. T. Berzina, A. Smerieri, M. Bernabò, A. Pucci, G. Ruggeri, V. Erokhin, and M. Fontana, “Optimization of an organic memristor as an adaptive memory element,” J. Appl. Phys. 105, 124515 (2009).
138. D. B. Strukov and R. S. Williams, “Exponential ionic drift: Fast switching and low volatility of thin-film memristors,” Appl. Phys. A 94, 515–519 (2009).
139. M. D. Pickett, D. B. Strukov, J. L. Borghetti, J. J. Yang, G. S. Snider, D. R. Stewart, and R. S. Williams, “Switching dynamics in titanium dioxide memristive devices,” J. Appl. Phys. 106, 074508 (2009).
140. H. Abdalla and M. D. Pickett, “SPICE modeling of memristors,” in 2011 IEEE International Symposium on Circuits and Systems (ISCAS) (IEEE, 2011), pp. 1832–1835.
141. R. S. Williams, M. D. Pickett, and J. P. Strachan, “Physics-based memristor models,” in 2013 IEEE International Symposium on Circuits and Systems (ISCAS) (IEEE, 2013), pp. 217–220.
142. C. Yakopcic, T. M. Taha, G. Subramanyam, and R. E. Pino, “Generalized memristive device SPICE model and its application in circuit design,” IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 32, 1201–1214 (2013).
143. S. Kvatinsky, E. G. Friedman, A. Kolodny, and U. C. Weiser, “TEAM: Threshold adaptive memristor model,” IEEE Trans. Circuits Syst. I 60, 211–221 (2013).
144. S. Kvatinsky, M. Ramadan, E. G. Friedman, and A. Kolodny, “VTEAM: A general model for voltage-controlled memristors,” IEEE Trans. Circuits Syst. II 62, 786–790 (2015).
145. J. P. Strachan, A. C. Torrezan, F. Miao, M. D. Pickett, J. J. Yang, W. Yi, G. Medeiros-Ribeiro, and R. S. Williams, “State dynamics and modeling of tantalum oxide memristors,” IEEE Trans. Electron Devices 60, 2194–2202 (2013).
146. Y. Zhang, X. Wang, Y. Li, and E. G. Friedman, “Memristive model for synaptic circuits,” IEEE Trans. Circuits Syst. II 64, 767–771 (2017).
147. X. Guan, S. Yu, and H. S. P. Wong, “A SPICE compact model of metal oxide resistive switching memory with variations,” IEEE Electron Device Lett. 33, 1405–1407 (2012).
148. Z. Jiang, Y. Wu, S. Yu, L. Yang, K. Song, Z. Karim, and H. S. P. Wong, “A compact model for metal oxide resistive random access memory with experiment verification,” IEEE Trans. Electron Devices 63, 1884–1892 (2016).
149. U. Russo, D. Ielmini, C. Cagli, and A. L. Lacaita, “Filament conduction and reset mechanism in NiO-based resistive-switching memory (RRAM) devices,” IEEE Trans. Electron Devices 56, 186–192 (2009).
150. F. Nardi, S. Larentis, S. Balatti, D. C. Gilmer, and D. Ielmini, “Resistive switching by voltage-driven ion migration in bipolar RRAM–part I: Experimental study,” IEEE Trans. Electron Devices 59, 2461–2467 (2012).
151. S. Kim, S.-J. Kim, K. M. Kim, S. R. Lee, M. Chang, E. Cho, Y.-B. Kim, C. J. Kim, U.-I. Chung, and I.-K. Yoo, “Physical electro-thermal model of resistive switching in bi-layered resistance-change memory,” Sci. Rep. 3, 1680 (2013).
152. M. Bocquet, D. Deleruyelle, C. Muller, and J.-M. Portal, “Self-consistent physical modeling of set/reset operations in unipolar resistive-switching memories,” Appl. Phys. Lett. 98, 263507 (2011).
153. M. Bocquet, D. Deleruyelle, H. Aziza, C. Muller, J.-M. Portal, T. Cabout, and E. Jalaguier, “Robust compact model for bipolar oxide-based resistive switching memories,” IEEE Trans. Electron Devices 61, 674–681 (2014).
154. G. González-Cordero, J. Roldan, F. Jiménez-Molinos, J. Suñé, S. Long, and M. Liu, “A new compact model for bipolar RRAMs based on truncated-cone conductive filaments—a Verilog-A approach,” Semicond. Sci. Technol. 31, 115013 (2016).
155. X. Guan, S. Yu, H.-S. P. Wong et al., “On the switching parameter variation of metal-oxide RRAM–part I: Physical modeling and simulation methodology,” IEEE Trans. Electron Devices 59, 1172–1182 (2012).
156. N. Onofrio, D. Guzman, and A. Strachan, “Atomic origin of ultrafast resistance switching in nanoscale electrometallization cells,” Nat. Mater. 14, 440 (2015).
157. P.-Y. Chen and S. Yu, “Compact modeling of RRAM devices and its applications in 1T1R and 1S1R array design,” IEEE Trans. Electron Devices 62, 4022–4028 (2015).
158. R. Hecht-Nielsen, “Theory of the backpropagation neural network,” in Neural Networks for Perception (Elsevier, 1992), pp. 65–93.
159. G.-Q. Bi and M.-M. Poo, “Synaptic modification by correlated activity: Hebb's postulate revisited,” Annu. Rev. Neurosci. 24, 139–166 (2001).
160. S. Song, K. D. Miller, and L. F. Abbott, “Competitive Hebbian learning through spike-timing-dependent synaptic plasticity,” Nat. Neurosci. 3, 919 (2000).
161. G.-Q. Bi and M.-M. Poo, “Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type,” J. Neurosci. 18, 10464–10472 (1998).
162. S.-J. Choi, G.-B. Kim, K. Lee, K.-H. Kim, W.-Y. Yang, S. Cho, H.-J. Bae, D.-S. Seo, S.-I. Kim, and K.-J. Lee, “Synaptic behaviors of a single metal–oxide–metal resistive device,” Appl. Phys. A 102, 1019–1025 (2011).
163. M. A. Zidan, Y. Jeong, and W. D. Lu, “Temporal learning using second-order memristors,” IEEE Trans. Nanotechnol. 16, 721–723 (2017).
164. D. Ielmini, S. Ambrogio, V. Milo, S. Balatti, and Z.-Q. Wang, “Neuromorphic computing with hybrid memristive/CMOS synapses for real-time learning,” in 2016 IEEE International Symposium on Circuits and Systems (ISCAS) (IEEE, 2016), pp. 1386–1389.
165. A. Jaiswal, S. Roy, G. Srinivasan, and K. Roy, “Proposal for a leaky-integrate-fire spiking neuron based on magnetoelectric switching of ferromagnets,” IEEE Trans. Electron Devices 64, 1818–1824 (2017).
166. P. Stoliar, J. Tranchant, B. Corraze, E. Janod, M.-P. Besland, F. Tesler, M. Rozenberg, and L. Cario, “A leaky-integrate-and-fire neuron analog realized with a Mott insulator,” Adv. Funct. Mater. 27, 1604740 (2017).
167. J. Torrejon, M. Riou, F. A. Araujo, S. Tsunegi, G. Khalsa, D. Querlioz, P. Bortolotti, V. Cros, K. Yakushiji, A. Fukushima et al., “Neuromorphic computing with nanoscale spintronic oscillators,” Nature 547, 428 (2017).
168. X. Zhang, W. Wang, Q. Liu, X. Zhao, J. Wei, R. Cao, Z. Yao, X. Zhu, F. Zhang, H. Lv et al., “An artificial neuron based on a threshold switching memristor,” IEEE Electron Device Lett. 39, 308–311 (2018).
169. Z. Wang, M. Rao, J.-W. Han, J. Zhang, P. Lin, Y. Li, C. Li, W. Song, S. Asapu, R. Midya et al., “Capacitive neural network with neuro-transistors,” Nat. Commun. 9, 3208 (2018).
170. J. Lin, S. Sonde, C. Chen, L. Stan, K. Achari, S. Ramanathan, S. Guha et al., “Low-voltage artificial neuron using feedback engineered insulator-to-metal-transition devices,” in IEEE International Electron Devices Meeting (IEDM) (IEEE, 2016), pp. 34–35.
171. R. Midya, Z. Wang, S. Asapu, X. Zhang, M. Rao, W. Song, Y. Zhuo, N. Upadhyay, Q. Xia, and J. J. Yang, “Reservoir computing using diffusive memristors,” Adv. Intell. Syst. 1, 1900084 (2019).
172. R. Midya, Z. Wang, S. Asapu, S. Joshi, Y. Li, Y. Zhuo, W. Song, H. Jiang, N. Upadhay, M. Rao et al., “Artificial neural network (ANN) to spiking neural network (SNN) converters based on diffusive memristors,” Adv. Electron. Mater. 5, 1900060 (2019).
173. C. Li, Z. Wang, M. Rao, D. Belkin, W. Song, H. Jiang, P. Yan, Y. Li, P. Lin, M. Hu et al., “Long short-term memory networks in memristor crossbar arrays,” Nat. Mach. Intell. 1, 49 (2019).
174. P. A. Merolla, J. V. Arthur, R. Alvarez-Icaza, A. S. Cassidy, J. Sawada, F. Akopyan, B. L. Jackson, N. Imam, C. Guo, Y. Nakamura et al., “A million spiking-neuron integrated circuit with a scalable communication network and interface,” Science 345, 668–673 (2014).
175. J. Pei, L. Deng, S. Song, M. Zhao, Y. Zhang, S. Wu, G. Wang, Z. Zou, Z. Wu, W. He et al., “Towards artificial general intelligence with hybrid Tianjic chip architecture,” Nature 572, 106 (2019).
176. S. M. Bohte, J. N. Kok, and H. L. Poutre, “Error-backpropagation in temporally encoded networks of spiking neurons,” Neurocomputing 48, 17–37 (2002).
177. S. Ghosh-Dastidar and H. Adeli, “Spiking neural networks,” Int. J. Neural Syst. 19, 295–308 (2009).
178. I. E. Ebong and P. Mazumder, “CMOS and memristor-based neural network design for position detection,” Proc. IEEE 100, 2050–2060 (2012).
179. C. Pokorny, M. J. Ison, A. Rao, R. Legenstein, C. Papadimitriou, and W. Maass, “STDP forms associations between memory traces in networks of spiking neurons,” Cereb. Cortex, bhz140 (2019).
180. G. Bellec, F. Scherr, E. Hajek, D. Salaj, R. Legenstein, and W. Maass, “Biologically inspired alternatives to backpropagation through time for learning in recurrent neural nets,” preprint arXiv:1901.09049 (2019).
181. T. Bohnstingl, F. Scherr, C. Pehle, K. Meier, and W. Maass, “Neuromorphic hardware learns to learn,” Front. Neurosci. 13, 483 (2019).
182. M. Chu, B. Kim, S. Park, H. Hwang, M. Jeon, and B. Lee, “Neuromorphic hardware system for visual pattern recognition with memristor array and CMOS neuron,” IEEE Trans. Ind. Electron. 62, 2410–2419 (2015).
183. D. Querlioz, O. Bichler, P. Dollfus, and C. Gamrat, “Immunity to device variations in a spiking neural network with memristive nanodevices,” IEEE Trans. Nanotechnol. 12, 288–295 (2013).
184. J. Schemmel, L. Kriener, P. Müller, and K. Meier, “An accelerated analog neuromorphic hardware system emulating NMDA- and calcium-based non-linear dendrites,” in 2017 International Joint Conference on Neural Networks (IJCNN) (IEEE, 2017), pp. 2217–2226.
185. A. Serb, J. Bill, A. Khiat, R. Berdan, R. Legenstein, and T. Prodromakis, “Unsupervised learning in probabilistic neural networks with multi-state metal-oxide memristive synapses,” Nat. Commun. 7, 12611 (2016).
186. A. Pantazi, S. Woźniak, T. Tuma, and E. Eleftheriou, “All-memristive neuromorphic computing with level-tuned neurons,” Nanotechnology 27, 355205 (2016).
187. V. Milo, D. Ielmini, and E. Chicca, “Attractor networks and associative memories with STDP learning in RRAM synapses,” in 2017 IEEE International Electron Devices Meeting (IEDM) (IEEE, 2017), pp. 11–12.
188. N. K. Kasabov, “NeuCube: A spiking neural network architecture for mapping, learning and understanding of spatio-temporal brain data,” Neural Networks 52, 62–76 (2014).
189. C. Li, D. Belkin, Y. Li, P. Yan, M. Hu, N. Ge, H. Jiang, E. Montgomery, P. Lin, Z. Wang et al., “Efficient and self-adaptive in-situ learning in multilayer memristor neural networks,” Nat. Commun. 9, 2385 (2018).
190. M. Prezioso, F. Merrikh-Bayat, B. Hoskins, G. Adam, K. K. Likharev, and D. B. Strukov, “Training and operation of an integrated neuromorphic network based on metal-oxide memristors,” Nature 521, 61–64 (2015).
191. M. Hu, C. E. Graves, C. Li, Y. Li, N. Ge, E. Montgomery, N. Davila, H. Jiang, R. S. Williams, J. J. Yang et al., “Memristor-based analog computation and neural network classification with a dot product engine,” Adv. Mater. 30, 1705914 (2018).
192. K. Steinbuch, “Die Lernmatrix” (“The learning matrix”), Biol. Cybern. 1, 36–45 (1961).
193. G. S. Snider, “Cortical computing with memristive nanodevices,” SciDAC Rev. 10, 58–65 (2008).
194. S. Yu, Y. Wu, R. Jeyasingh, D. Kuzum, and H.-S. P. Wong, “An electronic synapse device based on metal oxide resistive switching memory for neuromorphic computation,” IEEE Trans. Electron Devices 58, 2729–2737 (2011).
195. F. Alibart, E. Zamanidoost, and D. B. Strukov, “Pattern classification by memristive crossbar circuits using ex situ and in situ training,” Nat. Commun. 4, 2072 (2013).
196. S. Park, M. Chu, J. Kim, J. Noh, M. Jeon, B. H. Lee, H. Hwang, B. Lee, and B.-G. Lee, “Electronic system with memristive synapses for pattern recognition,” Sci. Rep. 5, 10123 (2015).
197. G. W. Burr, R. M. Shelby, S. Sidler, C. D. Nolfo, J. Jang, I. Boybat, R. S. Shenoy, P. Narayanan, K. Virwani, E. U. Giacometti et al., “Experimental demonstration and tolerancing of a large-scale neural network (165 000 synapses) using phase-change memory as the synaptic weight element,” IEEE Trans. Electron Devices 62, 3498–3507 (2015).
198. S. Yu, Z. Li, P.-Y. Chen, H. Wu, B. Gao, D. Wang, W. Wu, and H. Qian, “Binary neural network with 16 Mb RRAM macro chip for classification and online training,” in 2016 IEEE International Electron Devices Meeting (IEDM) (IEEE, 2016), pp. 16.2.1–16.2.4.
199. S. Choi, J. H. Shin, J. Lee, P. Sheridan, and W. D. Lu, “Experimental demonstration of feature extraction and dimensionality reduction using memristor networks,” Nano Lett. 17, 3113–3118 (2017).
200. P. Yao, H. Wu, B. Gao, S. B. Eryilmaz, X. Huang, W. Zhang, Q. Zhang, N. Deng, L. Shi, H.-S. P. Wong et al., “Face classification using electronic synapses,” Nat. Commun. 8, 15199 (2017).
201. S. Ambrogio, P. Narayanan, H. Tsai, R. M. Shelby, I. Boybat, C. Nolfo, S. Sidler, M. Giordano, M. Bodini, N. C. Farinha et al., “Equivalent-accuracy accelerated neural-network training using analogue memory,” Nature 558, 60 (2018).
202. F. M. Bayat, M. Prezioso, B. Chakrabarti, H. Nili, I. Kataeva, and D. Strukov, “Implementation of multilayer perceptron network with highly uniform passive memristive crossbar circuits,” Nat. Commun. 9, 2331 (2018).
203. C. Li, M. Hu, Y. Li, H. Jiang, N. Ge, E. Montgomery, J. Zhang, W. Song, N. Davila, C. E. Graves, Z. Li, J. P. Strachan, P. Lin, Z. Wang, M. Barnell, Q. Wu, R. S. Williams, J. J. Yang, and Q. Xia, “Analogue signal and image processing with large memristor crossbars,” Nat. Electron. 1, 52–59 (2018).
204. H. Jiang, C. Li, R. Zhang, P. Yan, P. Lin, Y. Li, J. J. Yang, D. Holcomb, and Q. Xia, “A provable key destruction scheme based on memristive crossbar arrays,” Nat. Electron. 1, 548 (2018).
205. L. Gao, P. Y. Chen, and S. Yu, “Demonstration of convolution kernel operation on resistive cross-point array,” IEEE Electron Device Lett. 37, 870–873 (2016).
206. P. M. Sheridan, C. Du, and W. D. Lu, “Feature extraction using memristor networks,” IEEE Trans. Neural Networks Learn. Syst. 27, 2327–2336 (2016).
207. X. Zeng, S. Wen, Z. Zeng, and T. Huang, “Design of memristor-based image convolution calculation in convolutional neural network,” Neural Comput. Appl. 30, 503–508 (2018).
208. X. Xie, S. Wen, Z. Zeng, and T. Huang, “Memristor-based circuit implementation of pulse-coupled neural network with dynamical threshold generators,” Neurocomputing 284, 10–16 (2018).
209. S. N. Truong, S.-J. Ham, and K.-S. Min, “Neuromorphic crossbar circuit with nanoscale filamentary-switching binary memristors for speech recognition,” Nanoscale Res. Lett. 9, 629 (2014).
210. J. J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities,” Proc. Natl. Acad. Sci. 79, 2554–2558 (1982).
211. T. Mikolov, M. Karafiát, L. Burget, J. Černocký, and S. Khudanpur, “Recurrent neural network based language model,” in Eleventh Annual Conference of the International Speech Communication Association (2010).
212. A. Graves, “Generating sequences with recurrent neural networks,” preprint arXiv:1308.0850 (2013).
213. A. Graves, A.-R. Mohamed, and G. Hinton, “Speech recognition with deep recurrent neural networks,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2013), pp. 6645–6649.
214. S. Hu, Y. Liu, Z. Liu, T. Chen, J. Wang, Q. Yu, L. Deng, Y. Yin, and S. Hosaka, “Associative memory realized by a reconfigurable memristive Hopfield neural network,” Nat. Commun. 6, 7522 (2015).
215. Z. Wang, C. Li, P. Lin, M. Rao, Y. Nie, W. Song, Q. Qiu, Y. Li, P. Yan, J. P. Strachan et al., “In situ training of feed-forward and recurrent convolutional memristor networks,” Nat. Mach. Intell. 1, 434–442 (2019).
216. E. Covi, S. Brivio, A. Serb, T. Prodromakis, M. Fanciulli, and S. Spiga, “HfO2-based memristors for neuromorphic applications,” in IEEE International Symposium on Circuits and Systems (ISCAS) (IEEE, 2016), pp. 393–396.
217. D. Soudry, D. D. Castro, A. Gal, A. Kolodny, and S. Kvatinsky, Hebbian Learning Rules with Memristors (Israel Institute of Technology, Haifa, Israel, 2013).
218. C. Yakopcic, M. Z. Alom, and T. M. Taha, “Extremely parallel memristor crossbar architecture for convolutional neural network implementation,” in International Joint Conference on Neural Networks (IJCNN) (IEEE, 2017), pp. 1696–1703.
219. J. Tang, F. Yuan, X. Shen, Z. Wang, M. Rao, Y. He, Y. Sun, X. Li, W. Zhang, Y. Li et al., “Bridging biological and artificial neural networks with emerging neuromorphic devices: Fundamentals, progress, and challenges,” Adv. Mater. 31, 1902761 (2019).
220. O. Krestinskaya, K. N. Salama, and A. P. James, “Learning in memristive neural network architectures using analog backpropagation circuits,” IEEE Trans. Circuits Syst. I 66, 719–732 (2019).
221. Q. Xia and J. J. Yang, “Memristive crossbar arrays for brain-inspired computing,” Nat. Mater. 18, 309 (2019).
222. M. Hu, H. Li, Y. Chen, X. Wang, and R. E. Pino, “Geometry variations analysis of TiO2 thin-film and spintronic memristors,” in 2011 16th Asia and South Pacific Design Automation Conference (ASP-DAC) (IEEE, 2011), pp. 25–30.
223. D. Niu, Y. Chen, C. Xu, and Y. Xie, “Impact of process variations on emerging memristor,” in Proceedings of the 47th Design Automation Conference (ACM, 2010), pp. 877–882.