Complementary metal–oxide–semiconductor (CMOS)-based neural architectures and memristive devices containing many artificial synapses are promising technologies being developed for pattern recognition and machine learning. However, the volatility and design complexity of traditional CMOS architectures, and the trade-off between the operating time and power consumption of conventional memristive devices, have tended to impede the path to achieving the interconnectivity/compactness and information density of the brain using either approach. Here, by developing a nanoscale, deposit-only-metal-electrode-fabrication-based, uniform-partial-state-transition-facilitated approach, we demonstrate a fast artificial synapse, a Rapid-operating-time, Intermediate-bias-range, Multiple-states, and Several-synaptic-functions (RIMS) synapse, implemented using deposit-only, nanopillar-based Ge2Sb2Te5-type memristive devices. A previously unconsidered, fast paired-pulse facilitation/depression using ∼50 ns spikes with an ∼1 µs inter-spike interval within an ∼1 V range and with a low energy consumption of ∼1.8 pJ per paired-spike, as well as a previously inaccessible multi-state, rapid long-term potentiation/depression with ∼15 distinct states using ∼50 ns spikes within a 0.7/1.4 V range, were achieved. Fast spike-timing-dependent plasticity using ∼50 ns spikes with an ∼1 µs inter-spike interval within a 1.3 V range was also achieved. Electro-thermal simulations reveal a uniform-partial-state-transition-facilitated variation in conductance states. This artificial synapse, equipped with a nanoscale deposit-only-metal-electrode-fabrication-based uniform-partial-state-transition-facilitated framework, shows the potential for a substantial overall performance improvement in artificial-intelligence tasks.

Artificial neural networks, modeled on the structure of a biological brain, are generally constructed using software rather than hardware, and the software runs on a traditional computer chip. This usually slows down the storage and processing of information. Building key features of a biological neural network directly in the hardware of an electronic system could make it hundreds of times more efficient. Therefore, a chip built this way may accelerate machine learning. Recently, building an electronic system that can mimic certain functions of the biological brain, based on the ability to remember, learn, and process multidimensional information through a flexible and energy-efficient computation process, has attracted significant interest. Typically, as many as 10^15 synapses are present in the human cerebral cortex.1 This makes hardware implementation of such a network in a compact electronic system exceptionally challenging due to the lack of compact electronic elements. Certain synaptic functions have been demonstrated using traditional three-terminal silicon-based electronic devices, with the device conductance representing the synaptic weight.2 Notably, synaptic operations, including long-term potentiation/depression (LTP/LTD), spike-timing-dependent plasticity (STDP) learning processes, paired-pulse facilitation/depression (PPF/PPD), and low power consumption, have been simulated or emulated in a device.3–5 The recent implementation of a camera-photodiode array with neuromorphic functions is an example of the excellent progress being made.6 However, device compactness, which has vital significance for high-density information storage and multidimensional information processing in a biological neural network, has not yet been well realized with conventional electronic devices.7 This presents an obstacle to the practical applications of artificial-neural-network hardware.

Two-terminal electronic devices with small feature sizes are excellent candidates for realizing device compactness. However, various synaptic functions tend to be achieved by different types of electronic devices.8,9 In principle, multiple units of the same type of electronic device could perform the various synaptic operations, avoiding the use of different device types, but this still increases the overall device footprint and the complexity of device interconnectivity. Even though synaptic plasticity has been widely demonstrated,10 the transmission of various signals (more than two types of signals) in a single electronic device has rarely been reported. At a chemical synapse in the brain, one pre-synaptic cell releases various neurotransmitter molecules into the synaptic cleft that is adjacent to another cell.11 This means that the chemical synapse passes a variety of information from a pre-synaptic cell to a post-synaptic cell, which is similar to the tunable behavior of a programmable chip, and this provides the potential to avoid the use of different types of electronic devices. Additionally, the use of traditional memristive materials can be an impediment to the realization of high-density information storage. Traditional memristive materials exhibit a trade-off between reducing the operating time and reducing the power consumption, which makes building high-density hardware challenging. A potential solution is to use memristive materials that offer fast operating times while maintaining low power consumption.12 These materials may permit the construction of an electronic device with an excellent operating time and low power consumption through a nano-sized, facile-fabrication-type, uniform-partial-state-transition-assisted approach.
Moreover, memristive materials can also benefit from multi-state/analog programming. They can act as multi-state storage devices, opening up the tantalizing opportunity to build an advanced high-density synapse comprising a multi-state storage device that is able to process a large amount of information in a single electronic device. However, the operation of traditional memristive materials can rely on a non-uniform degree of change in conductance states, which can limit the stability/number of the conductance states.

Here, a fast artificial synapse with a nanoscale deposit-only-metal-electrode-fabrication-based uniform-partial-state-transition-facilitated framework, a Rapid-operating-time, Intermediate-bias-range, Multiple-states, Several-synaptic-functions (so-called RIMS) synapse, is demonstrated using a memristive device with near-pristine, deposit-only nanopillars, rather than more damage-vulnerable, deposited-and-etched nanopillars, as the key computing component. A two-terminal Ge2Sb2Te5 (GST)-based memristive device has been developed to implement the synaptic functions. A fast PPF/PPD (∼50 ns spikes with an ∼1 µs inter-spike interval within an ∼1 V range and with an energy consumption of ∼1.8 pJ per paired-spike), as well as a multi-state and rapid LTP/LTD (∼15 distinct states using ∼50 ns spikes within a 0.7/1.4 V range), is achieved. Additionally, fast STDP can also be achieved (∼50 ns spikes with an ∼1 µs inter-spike interval within a 1.3 V range). Electro-thermal simulations elucidate the uniform-partial-state-transition-facilitated variation in conductance states. By utilizing a memristive device with deposit-only nanopillars, this newly developed artificial synapse shows PPF/PPD behavior in a GST-based device and more conductance states than previously reported synapses using GST-based devices (Table SII/Table SIII in the supplementary material) and thus will be able to process both temporal and spatial information more efficiently.

Cell characteristics and electro-thermal simulations for our RIMS synapse are presented in Fig. 1. A schematic structure of the two-terminal phase-change-memory (PCM) cell based on Ge2Sb2Te5 (GST) is illustrated in Fig. 1(a). A heater electrode and an alternate electrode form a mushroom/pillar-based architecture in the cell structure, and the active neuromorphic-memory layer is sandwiched between the electrodes. The GST alloy is used as the neuromorphic medium. A 40 nm-thick TiW alternate electrode was used as the starting material on which a 50 nm-thick GST neuromorphic-memory layer was deposited. A 40 nm-thick Si3N4 insulating layer was then deposited, patterned, and etched to form vias with a diameter of ∼200 nm. Finally, a 40 nm-thick TiW heater electrode/pillar was deposited to complete the structure [Fig. 1(b)].

FIG. 1.

Cell characteristics and electro-thermal simulations of a RIMS synapse. (a) Illustration of the cell structure of a RIMS synapse. The heater and alternate electrodes correspond to the post-synaptic and pre-synaptic neurons, respectively. The diameter of the via is ∼200 nm. (b) Schematic illustration of the process to fabricate the cells by deposit-and-etch (left) and deposit-only (right) processes to form the heater electrodes. (c) Voltage-dependent conductance values of the artificial synapse. (d) Plot of the calculated peak temperature as a function of the number of spikes applied to the artificial synapse for long-term potentiation (LTP) using identical spikes (∼0.8 V, 50 ns) and staircase-based spikes (from 0.6 to 1.35 V, with bias-voltage increments of +50 mV, 50 ns). (e) Snapshots of the calculated thermal distribution in the artificial synapse after applying one (top) and 15 (bottom) identical-based spikes shown in (d). (f) Comparison of the calculated peak temperature for asymmetric and symmetric Hebbian processes using an ∼0.8 V, 50 ns pre-synaptic spike, −0.5 V, 50 ns post-synaptic spikes, and an ∼1 µs inter-spike interval and an ∼0.8 V, 50 ns pre-synaptic spike, −0.4 V, 500 ns post-synaptic spikes, and an ∼1 µs inter-spike interval, respectively. Details of the simulation protocol can be found in the supplementary material.


The behavior of the phase-change process in a memristive cell remains an important subject under active study. PCM operation, based on the reversible switching between the amorphous and crystalline states of a chalcogenide material, which show a pronounced contrast in electrical conductance or optical reflectivity, is generally fast, on a timescale of several tens of nanoseconds. Various models have been suggested, including a change in the bulk-material conductivity due to a change in the thermal distribution.13 Here, we attribute the conductance change of our RIMS synapse to a change in the thermal distribution induced by the various input spikes, with the GST layer acting as the active layer. We further find that modifying the type of spike modulates the thermal distribution in the GST layer, where the simulated peak temperature is located. The amorphous GST layer can be switched to a crystalline state by a sufficiently large number of input spikes. Electro-thermal simulations of the thermal distribution in a GST cell show that the peak temperature in the GST layer increases with an increasing number of spikes, which we call a "partial-state transition" [Fig. 1(d)]. When a small number of spikes are applied, the amorphous GST layer switches to a partially crystalline layer due to Joule heating; this intermediate conductance state increases the current and hence the conduction of the RIMS synapse. When a large number of spikes are administered, the amorphous GST layer switches fully to the crystalline state due to the higher temperature distribution. The current after a large number of spikes is therefore greater than that after a small number of spikes, which yields the tunable characteristics of our RIMS synapse and a substantial increase in its conduction.
Generally, the crystallization transformation of phase-change materials can be divided into two processes: (i) a crystal-nucleation process and (ii) a crystal-growth process.14 Crystal nuclei can appear in the interior of a melted amorphous layer at high temperature. These nuclei can then gradually grow larger, and finally, the whole active region of the layer will be in the crystalline state. Staircase-based spikes comprise a first spike and subsequent spikes. Upon administering an appropriate first spike, the spike can facilitate the crystal-nucleation process and provide the proper initial temperature distribution for a subsequent crystal-growth process. The amorphous GST layer can then show an optimum crystallization process upon the application of appropriate subsequent spikes. This can cause the current to increase uniformly with an increased number of spikes, thereby increasing the conductance state of a RIMS synapse evenly with an increasing number of spikes. Moreover, when subsequent spikes are administered, the temperature distribution increases during heating and then decreases during cooling. For appropriate subsequent spike amplitudes, the Joule heat produced could be enough to keep the temperature at a high level, which can lead to a slow temperature drop during cooling. Hence, the temperature could be maintained at the crystallization temperature and the energy consumption will be minimized.
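The two-step nucleation-and-growth picture above is commonly summarized by Johnson–Mehl–Avrami–Kolmogorov (JMAK) kinetics. As an illustrative sketch only (the rate constant k and Avrami exponent n below are assumed values, not fitted to this device), the gradual rise of the crystallized fraction with spike number can be mimicked as:

```python
import numpy as np

def jmak_fraction(t, k, n):
    """JMAK crystallized fraction X(t) = 1 - exp(-(k t)^n).

    n ~ 2-4 reflects combined nucleation and growth; k sets the
    crystallization rate at a given temperature (both assumed here).
    """
    return 1.0 - np.exp(-(k * t) ** n)

# Treat the spike index as a coarse time proxy (illustrative only).
spikes = np.arange(0, 16)
x = jmak_fraction(spikes, k=0.15, n=2.0)
# x rises monotonically from 0 toward 1, mirroring the gradual increase
# in conductance with spike number described in the text.
```

The sigmoidal shape of X(t) is one reason evenly spaced conductance states require tailored (e.g., staircase-based) spike amplitudes rather than identical ones.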

A previous study of the melting kinetics in GST proposed a melting mechanism in which crystalline clusters first fragment into disconnected medium-range-ordered structural units, viz., planes and cubes of atoms, which subsequently break up into discrete fourfold rings, which themselves finally dissolve as the system melts.15 Based upon the present results, this picture can be extended to the progressive melting modeled in the depression sequence. Heat pulses above the melting temperature can induce a “thermal shock” (i.e., partial melting) in the material, breaking bonds and forming a significant concentration of defects (e.g., dangling bonds), most likely at the crystal–glass interface. The rapid quenching following the pulse leads to this disordered state being “frozen in.” Provided that the disordered structures are metastable, insufficient thermal energy is available during the rest phases to facilitate the atomic diffusion required to remove them, at least not on a short timescale, whereas the short bursts of energy delivered by subsequent pre-spikes can lead to a delayed reordering. This can be mediated initially by annihilation of the chemically disordered (“frustrated”) structural units, which spontaneously reform as ordered entities and perhaps then act as an attachment site for additional atoms. Moreover, multiple researchers have investigated the scenario involving the interaction between the germanium–antimony–telluride material in the cells and the titanium-based electrodes.16–18 In these experiments, titanium-based electrode layers appeared to be significantly less prone to interaction/migration. As a result, it is unlikely that the titanium-based electrode and germanium–antimony–telluride material interact to a degree sufficient to account for the progressive melting represented in the depression sequence.

Additionally, traditional heater electrodes formed by a deposited-and-etched process may have a large degree of damage at the interfaces. On the other hand, the heater electrodes created by a deposit-only process in this study could have a smaller degree of damage at the interfaces. This may lead to a more robust, defect-free interface with substantially decreased contact resistance and its variation, consequently resulting in reduced amorphization/melting currents (Fig. S1 in the supplementary material).

A phenomenon, similar to the learning-experience behavior of human beings, is observed in our RIMS synapse. Generally, pre-synaptic spikes from different neurons tend to trigger a post-synaptic current through synapses in a post-synaptic neuron to establish dynamic logic in a neural network.19 PPF and PPD functions, among the basic dynamic logic functions, are demonstrated by applying two successive pre-synaptic spikes with various inter-spike intervals. For a demonstration of the dynamic logic functions, we define the degree of relevancy of the two signals in terms of Δt (inter-spike interval). So, a smaller Δt value corresponds to a higher degree of relevancy. Because we are interested in the weight of the RIMS synapse, we record the ratio of the current amplitude obtained after the second spike to the current amplitude obtained after the first spike (paired-pulse ratio). A clear dependency of the paired-pulse ratio on the degree of relevancy can be observed in Fig. 2(a), with a high degree of relevancy related to a high paired-pulse ratio and a low degree of relevancy related to a low paired-pulse ratio. When two spikes with a low bias voltage (∼1 V, 50 ns) and with a low Δt value of ∼1 µs are applied, the paired-pulse ratio is above unity (PPF function). In other words, the two signals with a high degree of relevancy are able to change the synaptic weight, so our RIMS synapse can learn the input signal. However, the paired-pulse ratio values decrease with an increase in the Δt value from 1 to 100 µs due to a decrease in the degree of relevancy. This means that two signals with a high degree of relevancy can be learned in a short time and be remembered by our RIMS synapse. On the other hand, for two spikes with a high bias voltage (∼2 V, 50 ns) and a Δt value of ∼1 µs, the paired-pulse ratio is below unity (PPD function). 
A higher paired-pulse ratio (approaching unity) is observed with an increase in the Δt value from 1 to 100 µs (i.e., with a decreasing degree of relevancy), which is necessary for achieving various synaptic weights and for broadening the types of correlated learning processes. The energy consumption for one operation can be calculated by multiplying the spike voltage amplitude by the current flowing through the cell and then by the operating time (E = V × I × t). When switching from the high-conductance state to the low-conductance state is considered, the energy consumption is much higher than that for switching from the low-conductance state to the high-conductance state, due to the high reset current involved in a phase-change-based resistive-switching cell. In this work, the highest current in the high-conductance state obtained after the first spike for the PPF function is ∼26.78 µA. The inset of Fig. 2(b) shows the energy consumption per paired-spike of a RIMS synapse for the PPF and PPD functions. For the first paired-spike stimulation with a time interval of 1 µs in the PPF and PPD functions, the values of energy consumption are ∼1.8 and ∼5.7 pJ, respectively, which are below a baseline of 10 pJ for existing artificial synapses with PPF functions that use nanosecond spikes, an intermediate bias range, and microsecond spike-time intervals (Table SII in the supplementary material). Moreover, the energy consumption per paired-spike also decreases with an increased inter-spike interval for the PPF function. The average energy consumption per paired-spike per time interval is less than ∼1.6 and ∼6.2 pJ for the PPF and PPD functions, respectively, due to the low operating current.
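The E = V × I × t estimate can be reproduced in a few lines. The sketch below uses the numbers quoted in the text (∼1 V, ∼26.78 µA, 50 ns) and gives the energy of one such rectangular spike; it is not the paper's exact per-paired-spike figure, which depends on the measured current waveform of each spike:

```python
def spike_energy_pj(v_volts, i_amps, t_seconds):
    """Energy of one rectangular spike, E = V * I * t, in picojoules."""
    return v_volts * i_amps * t_seconds * 1e12

def paired_pulse_ratio(i_after_second, i_after_first):
    """PPF if the ratio is above unity, PPD if below."""
    return i_after_second / i_after_first

# One ~1 V, 50 ns spike at the ~26.78 uA high-conductance current:
e = spike_energy_pj(1.0, 26.78e-6, 50e-9)  # ~1.34 pJ
```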

FIG. 2.

Synaptic characteristics of a RIMS synapse. (a) and (b) Inter-spike interval-dependent paired-pulse ratio for PPF (a) and PPD (b) stimulations. The inset in (a) shows a schematic of the STP testing setup, pre-synaptic spikes, and post-synaptic currents for PPF stimulation, while the inset in (b) shows the pre-synaptic spikes and post-synaptic currents for the PPD stimulation and energy consumption per paired-spike for each inter-spike interval for the PPF and PPD stimulations. The initial conductance states of the PPF and PPD processes are the low-conductance state and high-conductance state, respectively. (c) and (d) Plot of conductance as a function of the number of spikes for LTP and LTD achieved by (c) identical and (d) staircase-based spikes. The insets show the bias spikes used for the LTP and LTD stimulations. (e) and (f) Plot of the percentage change in conductance as a function of inter-spike interval for (e) asymmetric and (f) symmetric Hebbian-learning STDP demonstrations. The inset in (e) shows a schematic of the STDP testing setup, pre-synaptic and post-synaptic spikes. Details of the statistics can be found in the supplementary material.


The demonstration of variations in the synaptic weight by using a RIMS synapse can be an important step toward realizing other complex biological functions through the use of neuromorphic cells. Figure 2(c) shows the results for a RIMS synapse programmed by using a series of 15 identical-based high-bias spikes (∼1.5 V, 50 ns, and 1 µs inter-spike interval), followed by a series of 15 identical-based low-bias spikes (∼0.8 V, 50 ns, and 1 µs inter-spike interval). The conductance, namely, the synaptic weight, decreases with an increasing number of high-bias spikes. Additionally, the conductance obtained after the high-bias spikes is lower than that obtained before the high-bias spikes, which means that the change in the synaptic weight of a RIMS synapse with high-bias spikes exhibits LTD plasticity. After the application of an increased number of low-bias spikes, the RIMS synapse returns to its original high-conductance state. This means that the administration of low-bias spikes is a potentiation process, in which the conductance obtained after the low-bias spikes is higher than that obtained before the low-bias spikes, which is analogous to LTP plasticity. Moreover, the variation in the conductance stimulated by the staircase-based spikes [increasing from 0.6 to 2.05 V, 50 ns, and 1 µs inter-spike interval; Fig. 2(d)] is similar to that produced by the identical-based spikes. RIMS synapses subjected to both identical and staircase-based spikes showed ∼15 conductance states, which is above the baseline of ten states for GST devices operated with several-tens-of-nanosecond spikes within a range of several volts (Table SIII in the supplementary material). The conductance states of a RIMS synapse subjected to staircase-based spikes are more evenly distributed than those of a RIMS synapse subjected to identical-based spikes. This is likely related to the difference in the bias amplitude of the spikes and the resulting thermal distribution [Fig. 1(d)].
Although a higher number of conductance states could be achieved with a smaller bias-voltage increment/decrement (<±50 mV) for a RIMS synapse with staircase-based spikes, we limited the number of conductance states to ∼15 because we observed overlapping low-conductance states and a low noise margin when reading the cell. Furthermore, tunable synaptic long-term plasticity for an individual LTD/LTP function, as well as excellent retention times for intermediate conductance states, could be achieved (Fig. S3 in the supplementary material).
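The staircase programming scheme is straightforward to generate in software. A minimal sketch, assuming evenly spaced rectangular spikes (the helper below is hypothetical, not taken from the paper's measurement code):

```python
def staircase_amplitudes(v_start, v_stop, dv):
    """Bias amplitudes for staircase-based spikes, inclusive of both ends."""
    n_steps = int(round((v_stop - v_start) / dv)) + 1
    return [round(v_start + k * dv, 3) for k in range(n_steps)]

# LTP staircase from Fig. 1(d): 0.6 to 1.35 V in +50 mV increments.
ltp = staircase_amplitudes(0.6, 1.35, 0.05)
```

Shrinking dv yields more programming levels, at the cost of the state overlap and noise-margin issues noted above.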

Spike-timing-dependent plasticity (STDP), which can also be demonstrated in our RIMS synapse, is one of the basic forms of unsupervised learning.20–22 The STDP function, based on the synaptic connecting strength, is generally modulated by neuronal firing (spiking) events.23–25 In a biological synapse, strengthening of the synapse is stimulated by the pre-synaptic neuron firing before the post-synaptic neuron, and weakening of the synapse is stimulated by the pre-synaptic neuron firing after the post-synaptic neuron. In artificial synapses, these biological activities are mimicked by potentiation (strengthening) and depression (weakening) of the response. A pair of spikes are administered to the heater and alternate electrodes as pre-synaptic and post-synaptic spikes to implement STDP [see the inset in Fig. 2(e)]. The relative timing Δtpre–post is defined as the time interval (inter-spike interval) between the initial time of the pre-synaptic spike and that of the post-synaptic spike. After the pre-synaptic and post-synaptic spikes are administered, the conductance of the synapse is then read. The percentage change in conductance is defined as ΔW = [(G − G0)/G0] × 100, where G is the conductance obtained after the spikes and G0 is the initial conductance. Various forms of STDP were postulated by Hebb.26 Hebbian-learning processes may be grouped as (1) an asymmetric Hebbian-learning process and (2) a symmetric Hebbian-learning process, depending on the variation of the synaptic weight.27,28 The percentage change in conductance as a function of inter-spike interval obtained with the values of Δtpre–post = 1–60 µs and Δtpre–post = −1 to −60 µs is displayed in Fig. 2(e). When the pre-synaptic spikes (∼0.8 V, 50 ns) are administered before the post-synaptic spikes (−0.5 V, 50 ns) (Δtpre–post = 1–60 µs), the percentage change in the conductance of the cell gradually decreases with an increasing time interval.
On the other hand, when the pre-synaptic spikes are administered after the post-synaptic spikes (Δtpre–post = −1 to −60 µs), the percentage change in the conductance of the cell gradually increases with an increasing time interval. These processes result in the overall achievement of an asymmetric Hebbian-learning process. Moreover, when the pre-synaptic spikes (∼0.8 V, 50 ns) are administered before (Δtpre–post = 1–60 µs) and after (Δtpre–post = −1 to −60 µs) the post-synaptic spikes (−0.4 V, 500 ns), a symmetric Hebbian-learning process appears. The percentage change in the conductance of the potentiation process for Δtpre–post ∼ 1 µs of the asymmetric Hebbian-learning process is higher compared to that of the symmetric Hebbian-learning process due to a higher temperature distribution in the cell [Fig. 1(f)]. Moreover, asymmetric and symmetric anti-Hebbian learning processes, with microsecond inter-spike intervals and with the use of nanosecond spikes, are also achieved [Figs. S4(a) and S4(b) in the supplementary material]. A series of pre-treating spikes (facilitating crystal nucleation and growth, as well as the melting and quenching processes) are administered in all learning processes before the pre-synaptic spikes to achieve the STDP [Fig. S4(e) in the supplementary material].
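The ΔW bookkeeping used throughout this section is a one-line computation. A minimal sketch (the helper name is ours, not from the paper):

```python
def delta_w_percent(g_after, g0):
    """Percentage change in conductance, dW = (G - G0) / G0 * 100."""
    return (g_after - g0) / g0 * 100.0

# Potentiation gives a positive dW, depression a negative one
# (conductance values in siemens are illustrative).
dw_pot = delta_w_percent(1.5e-6, 1.0e-6)   # conductance rose: ~+50%
dw_dep = delta_w_percent(0.8e-6, 1.0e-6)   # conductance fell: ~-20%
```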

We performed a numerical simulation of the STDP behavior using a compact model of the experimentally observed change in conductance for STDP. We fitted our experimental data for Hebbian-learning processes with the following equations:

ΔW = a·exp(−t/b), Δtpre–post > 0, (1)
ΔW = −c·exp(t/d), Δtpre–post < 0, (2)
ΔW = a·exp(−t²/b²), (3)

where W is the synaptic weight; t is the time; and a, b, c, and d are fitting parameters. The curves of an asymmetric Hebbian-learning process with positive and negative inter-spike intervals are fitted using Eqs. (1) and (2), respectively, while the experimental data of a symmetric Hebbian-learning process are fitted using Eq. (3). By choosing appropriate values of the fitting parameters, the calculated values fit the experimental data well [Figs. 3(a) and 3(b)].
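A compact fit of this kind can be checked numerically. The sketch below assumes a simple exponential form ΔW = a·exp(−t/b) for the positive-interval branch (an assumption consistent with the description, not necessarily the paper's exact fitted equation) and recovers a and b from synthetic, noise-free data by a log-linear least-squares fit:

```python
import numpy as np

# Synthetic positive-interval STDP data: dW = a * exp(-t/b)
a_true, b_true = 40.0, 15.0                        # percent, microseconds (assumed)
t = np.array([1.0, 5.0, 10.0, 20.0, 40.0, 60.0])   # inter-spike intervals, us
dw = a_true * np.exp(-t / b_true)

# log dW = log a - t/b, so a degree-1 polynomial fit yields both parameters.
slope, intercept = np.polyfit(t, np.log(dw), 1)
a_fit = float(np.exp(intercept))
b_fit = -1.0 / slope
```

With noisy experimental data, the same form would typically be fitted with a nonlinear least-squares routine instead of the log-linear shortcut.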

FIG. 3.

Modeling and self-adaptation of spike-timing-dependent plasticity. Experimental data of (a) asymmetric and (b) symmetric Hebbian-learning processes (triangles), together with the results of fitting (solid lines) using Eqs. (1)–(3). The insets show the corresponding fitting parameters for each learning process. (c) Simulated network, (d) its equivalent circuit, and (e) input and output spiking activities. A time constant of ∼10 ms and an activation threshold of −54 mV were used. (f) The initial and final distributions of weights for various initial weights G0 after performing the simulation for 1 s. The top panel shows the initial distribution of weights for the three values of G0, and the three middle panels show the final distribution of weights for these G0 values for the multiplicative-based STDP model. The bottom panel shows the final distribution of weights of the additive-based STDP model.


By using the verified STDP [Fig. 3(a)], the time evolution of the cell conductance was simulated in a simple neuromorphic network with one soma, described by a leaky-integrate-and-fire model, and a thousand input synapses [Fig. 3(c)].29 Spikes of the shape denoted by the black lines in Fig. 3(c) were fed to the network, with initiation times drawn randomly from a Poisson distribution corresponding to an average spiking rate of 15 Hz. Figures 3(c) and 3(d) present the neuromorphic network and its equivalent circuit, respectively. The network comprises 1000 RIMS synapses/GST cells, each assumed to exhibit similar STDP behavior corresponding to the shape of the asymmetric Hebbian curve in Fig. 3(a) with specific parameters (e.g., amplitude and time constant). All of these synapses are connected to a leaky-integrate-and-fire (LIF) neuron with a threshold potential of −54 mV. When an input stimulus is administered in the pre-synaptic region of the network, bias-voltage spikes, with shapes similar to the ones in our experiment, are fired at random, Poisson-distributed timings. The information is then propagated through the synapses to the LIF neuron in the post-synaptic region, resulting in an increased output potential [Fig. 3(e)]. Upon hitting the threshold voltage, the LIF neuron fires a post-synaptic spike with a waveform similar to the ones in our experiment. Through STDP, this spiking activity causes an evolution in the weights (conductances) of all the synapses. The simulations were performed for 1 s, using models with a multiplicative-rule assumption (ΔG depends on G0) and an additive-rule assumption (ΔG is independent of G0). For a demonstration of the stability of the training of the neural network, we define the degree of self-adaptation of a model by the shape of the final weight distribution: a more bell-shaped distribution corresponds to a higher degree of self-adaptation.
Because we are interested in the variation of the weights, we investigated the weight distribution for the different models. A clear dependence of the weight distribution on the update rule can be observed in Fig. 3(f): a high degree of self-adaptation yields the most stable distribution, and a low degree of self-adaptation yields the least stable one. When the model with an additive-rule assumption was used, the cell conductances did not converge to a bell-shaped distribution. In other words, a model with a low degree of self-adaptation does not allow stable training in artificial-neural-network tasks. In contrast, for the model with a multiplicative-rule assumption, the cell conductances eventually converged to a stable bell-shaped distribution, independently of their initial values, owing to the increased degree of self-adaptation. This means that a model with a high degree of self-adaptation could enable stable training in artificial-neural-network tasks. Moreover, the model with a multiplicative-rule assumption corresponds qualitatively to biologically observed STDP behavior and to existing multiplicative-based models that exhibit a similar degree of self-adaptation, which is considered important for the long-term stability of training in spiking-neural-network tasks.30,31
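The contrast between the two update rules can be made concrete with a small numerical sketch (an illustration with an assumed, symmetric learning amplitude and assumed bounds, not the fitted model of this work). Under the additive rule, the update size is independent of the current weight, so repeated random potentiation/depression spreads the weights toward the bounds; under the multiplicative rule, the update shrinks as a weight approaches a bound, pulling the population toward a stable bell-shaped distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative learning amplitude and bounds (assumptions, not fitted values).
a = 0.02
w_min, w_max = 0.0, 1.0
n_syn, n_steps = 1000, 2000

def additive_update(w, potentiate):
    """Additive rule: the update size is independent of the current weight,
    so weights random-walk until they are clipped at the bounds."""
    dw = np.where(potentiate, a, -a)
    return np.clip(w + dw, w_min, w_max)

def multiplicative_update(w, potentiate):
    """Multiplicative rule: the update scales with the distance to the bound,
    so potentiation weakens near w_max and depression weakens near w_min."""
    dw = np.where(potentiate, a * (w_max - w), -a * (w - w_min))
    return w + dw

w_add = np.full(n_syn, 0.5)
w_mul = np.full(n_syn, 0.5)
for _ in range(n_steps):
    # Random sign of the spike-timing difference per synapse:
    # True -> potentiation (post after pre), False -> depression.
    potentiate = rng.random(n_syn) < 0.5
    w_add = additive_update(w_add, potentiate)
    w_mul = multiplicative_update(w_mul, potentiate)

# The additive weights spread out toward the bounds, while the
# multiplicative weights stay clustered around the middle, i.e., a
# bell-shaped distribution and a higher degree of self-adaptation.
```

The self-stabilizing term a·(w_max − w) is what produces the bell-shaped final distribution seen for the multiplicative-based model in Fig. 3(f), independently of the initial weight.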

Applications such as artificial neural networks built in hardware are challenging because the electronic systems need to meet a demanding set of requirements: (1) cell compactness (multiple operations per cell and a large number of conductance states), (2) small energy consumption, and (3) a short operating time.32 The examples shown in this work indicate that the RIMS synapse can meet this previously unmet set of requirements using phase-change materials. The key improvement in the RIMS synapse that enables these applications is the modulation of the crystallization/melting kinetics, achieved by using deposit-only electrodes with nanosized pillars and by the use of staircase-based spikes. In traditional computers, the memory (slow but nonvolatile) and logic (fast but volatile) are physically separated. Here, the RIMS synapses show spike widths below 100 ns for the symmetric STDP learning process, within an ∼1 V range and with a microsecond inter-spike interval (Table SIV in the supplementary material), while maintaining nonvolatile operation, thereby combining these two separate functions to save space and energy. Moreover, the RIMS synapses demonstrate an energy consumption per paired-spike below the 10-pJ baseline of existing artificial synapses with PPF functions, using nanosecond spikes, an intermediate bias range, and microsecond inter-spike intervals, which enables systems to perform synaptic operations in a low-energy manner and thus reduces neural-network-hardware energy consumption. Additionally, the RIMS synapses show a number of conductance states above the ten-state baseline of state-of-the-art phase-change synapses, using spikes of several tens of nanoseconds within a range of several volts, which allows systems to process multiple points of data in a more brain-like manner and thereby substantially improve the accuracy/efficiency of neural-network hardware. Furthermore, the RIMS synapse performs a number of synaptic operations per cell above the reference of two operations per cell for previous phase-change artificial synapses, using nanosecond spikes within a range of a few volts, which allows synaptic computations to be implemented in an area-efficient way and thus significantly reduces neural-network-hardware size.

A RIMS synapse built on a key computing component, a nanoscale deposit-only-metal-electrode-fabrication-based uniform-partial-state-transition-facilitated framework, has been demonstrated with emergent memristive devices. In particular, a Ge2Sb2Te5-based, deposit-only, nanopillar-type device was developed to enable a synapse with a rapid operating time, an intermediate bias range, multiple states, and several synaptic functions. The test results of the RIMS synapse exhibit fast PPF/PPD (∼50 ns spikes with an ∼1 µs inter-spike interval within an ∼1 V range and with an energy consumption of ∼1.8 pJ per paired-spike) as well as multi-state, rapid LTP/LTD (∼15 distinct states using ∼50 ns spikes within a 0.7/1.4 V range). The RIMS synapse also shows fast STDP (∼50 ns spikes with an ∼1 µs inter-spike interval within a 1.3 V range). The origin of the uniform-partial-state-transition-facilitated increase in conductance states is elucidated by electro-thermal simulations. Given that these two-terminal memristive devices are scalable and stackable, our results could lead to high-density integration of artificial synapses with small features. Hence, the present synapse represents an important step toward the construction of faster, large-scale artificial-synapse arrays.

See the supplementary material for device fabrication and testing, simulation setup, additional electrical characterization, electro-thermal simulations, and a comparison with previous work.

We thank N. Sethu and Q. Wang for their contributions. We also thank K. G. Lim, L. T. Ng, B. Wang, W. J. Wang, R. Zhao, and T. C. Chong for important discussions. The authors acknowledge support from the Singapore University of Technology and Design (Grant No. SUTDT12017003), Changi General Hospital (Singapore) (Grant No. CGH-SUTD-HTIF2019-001), the Ministry of Education (Singapore) (Grant No. MOE2017-T2-2-064), the Agency for Science, Technology and Research (Singapore) (Grant No. A20G9b0135), and SUTD-Zhejiang-University [SUTD-ZJU (VP) 201903] grant programs. D. K. Loke acknowledges support from the Massachusetts Institute of Technology-SUTD International Design Center and the National Supercomputing Center, Singapore (Grant No. 15001618). X. S. Go acknowledges the support of an SUTD Graduate Scholarship.

The authors declare no conflict of interest.

The data that support the findings of this study are available from the corresponding authors upon reasonable request.

2. A. S. Cassidy, J. Georgiou, and A. G. Andreou, Neural Networks 45, 4 (2013).
3. G. Rachmuth, H. Z. Shouval, M. F. Bear, and C.-S. Poon, Proc. Natl. Acad. Sci. U. S. A. 108(49), E1266 (2011).
4. Y.-F. Chang, B. Fowler, Y.-C. Chen, F. Zhou, C.-H. Pan, T.-C. Chang, and J. C. Lee, Sci. Rep. 6(1), 21268 (2016).
5. A. Thomas, A. N. Resmi, A. Ganguly, and K. B. Jinesh, Sci. Rep. 10(1), 12450 (2020).
6. L. Mennel, J. Symonowicz, S. Wachter, D. K. Polyushkin, A. J. Molina-Mendoza, and T. Mueller, Nature 579(7797), 62 (2020).
7. S. Salahuddin, K. Ni, and S. Datta, Nat. Electron. 1(8), 442 (2018).
8. L. Daniele and A. Stefano, Nanotechnology 31(9), 092001 (2019).
9. Q. Wan, M. T. Sharbati, J. R. Erickson, Y. Du, and F. Xiong, Adv. Mater. Technol. 4(4), 1900037 (2019).
10. S. Choi, J. Yang, and G. Wang, Adv. Mater. 32(51), 2004659 (2020).
11. Chemical Synapses, Neuroscience, 2nd ed., edited by D. Purves, G. J. Augustine, D. Fitzpatrick, et al. (Sinauer Associates, Sunderland, MA, 2001).
12. D. Loke, J. M. Skelton, L.-T. Law, W.-J. Wang, M.-H. Li, W.-D. Song, T.-H. Lee, and S. R. Elliott, Adv. Mater. 26(11), 1725 (2014).
13. D. K. Loke, G. J. Clausen, J. F. Ohmura, T.-C. Chong, and A. M. Belcher, ACS Appl. Nano Mater. 1(12), 6556 (2018).
14. W. Zhang, R. Mazzarello, M. Wuttig, and E. Ma, Nat. Rev. Mater. 4(3), 150 (2019).
15. D. Loke, J. M. Skelton, W.-J. Wang, T.-H. Lee, R. Zhao, T. C. Chong, and S. R. Elliott, Proc. Natl. Acad. Sci. U. S. A. 111(37), 13272–13277 (2014).
16. S. G. Alberici, R. Zonca, and B. Pashmakov, Appl. Surf. Sci. 231-232, 821 (2004).
17. L. Krusin-Elbaum, C. Cabral, Jr., K. N. Chen, M. Copel, D. W. Abraham, K. B. Reuter, S. M. Rossnagel, J. Bruley, and V. R. Deline, Appl. Phys. Lett. 90(14), 141902 (2007).
18. K. N. Chen, C. Cabral, and L. Krusin-Elbaum, Microelectron. Eng. 85, 2346 (2008).
20. Y. Li, Y. Zhong, L. Xu, J. Zhang, X. Xu, H. Sun, and X. Miao, Sci. Rep. 3(1), 1619 (2013).
21. S. Ambrogio, N. Ciocchini, M. Laudato, V. Milo, A. Pirovano, P. Fantini, and D. Ielmini, Front. Neurosci. 10, 56 (2016).
22. Y. Zhong, Y. Li, L. Xu, and X. S. Miao, Phys. Status Solidi RRL 9(7), 414 (2015).
23. R. C. Froemke and Y. Dan, Nature 416(6879), 433 (2002).
24. B. Linares-Barranco and T. Serrano-Gotarredona, Nat. Preced. (published online, 2009).
25. T. Serrano-Gotarredona, T. Masquelier, T. Prodromakis, G. Indiveri, and B. Linares-Barranco, Front. Neurosci. 7, 2 (2013).
26. D. O. Hebb, The Organization of Behavior (Wiley, New York, 1949).
27. B. Berninger and G.-Q. Bi, BioEssays 24(3), 212 (2002).
28. S. Song, K. D. Miller, and L. F. Abbott, Nat. Neurosci. 3(9), 919 (2000).
29. W. Gerstner and W. M. Kistler, Spiking Neuron Models (Cambridge University Press, New York, NY, 2008).
30. M. C. W. van Rossum, G. Q. Bi, and G. G. Turrigiano, J. Neurosci. 20(23), 8812 (2000).
31. J. Rubin, D. D. Lee, and H. Sompolinsky, Phys. Rev. Lett. 86(2), 364 (2001).
32. S. G. Kim, J. S. Han, H. Kim, S. Y. Kim, and H. W. Jang, Adv. Mater. Technol. 3, 1800457 (2018).
