Despite the significant progress made in deep learning on digital computers, its energy consumption and computational speed still fall short of the standards for brain-like computing. To address these limitations, reservoir computing (RC) has been gaining increasing attention across the electronic device, computing system, and machine learning communities, notably through in-memory and in-sensor implementations based on hardware–software co-design. On the hardware side, in-memory and in-sensor computers leverage emerging electronic and optoelectronic devices to process data right where they are stored or sensed, dramatically reducing the energy consumed by frequent data transfers between sensing, storage, and computational units. On the software side, RC enables real-time edge learning thanks to its brain-inspired dynamic system, which massively reduces training complexity. From this perspective, we survey recent advancements in in-memory/in-sensor RC, including algorithm designs, material and device development, and downstream applications in classification and regression problems, and discuss the challenges and opportunities ahead in this emerging field.

In recent years, significant progress has been made in the fields of bio-inspired computing and artificial intelligence (AI) by drawing inspiration from biological systems to create innovative algorithms and computational models. This interdisciplinary field has given rise to revolutionary techniques such as artificial neural networks (ANNs)1 and spiking neural networks (SNNs),2 which excel at tackling complex learning and optimization challenges. These advancements have been applied to various fields, including computer vision,3 natural language processing,4 robotics,5 and healthcare,6 laying the foundation for a new generation of intelligent systems and providing immense potential for achieving artificial general intelligence (AGI).7 

Despite recent advancements, AI models operating on traditional digital computers have yet to match the energy efficiency and rapid learning capabilities of the human brain. For instance, adapting or fine-tuning a large language model (LLM) toward a particular task or domain can leave a substantial carbon footprint,8 while the human brain requires only about 20 W of power to learn something new in just a few hours.9–12 This limitation in energy efficiency primarily stems from the physical separation between sensing, memory, and computing units in digital computers, known as the von Neumann bottleneck.13–20 The issue is exacerbated by the slowing of Moore’s law as transistor sizes approach their physical limits. The slow learning of AI models can be partially attributed to cumbersome stochastic gradient descent optimization,21 which is particularly challenging to manage on resource-constrained edge computers, especially given the data explosion of the Internet of Things (IoT) era. Consequently, energy-efficient and fast-learning AI for intelligent edge systems is highly sought after.

A potential solution to this challenge is in-memory and in-sensor reservoir computing (RC).20,22 From a hardware perspective, these systems leverage emerging and scalable electronic and optoelectronic devices, enabling them to store, process, and sense data all in one place. These nanoscale devices23–28 not only address the transistor scaling limit but also substantially decrease the cost of transferring data between sensing, memory, and processing components, thereby resolving the von Neumann bottleneck. On the software side, RC deviates from conventional deep neural networks (DNNs)1 that require large datasets to optimize numerous model parameters using stochastic gradient descent21 and error backpropagation.29 Instead, RC primarily relies on a complex, black-box dynamic system, requiring only a limited number of trainable parameters for optimization.30–32 This considerably reduces the training cost. The synergy between hardware and software components offers a new solution for edge AI hardware that demands both high efficiency and real-time learning.

Therefore, this paper first surveys the new solution from the software perspective. We discuss the three major RC models: echo state networks (ESNs),30 liquid state machines (LSMs),31 and single dynamic nodes with delayed feedback,32 and their differences. Afterward, we introduce two representative RC-associated hardware architectures, the in-memory and in-sensor architectures,33–35 and discuss the emerging materials and devices that constitute the circuits of such novel computing systems. Then, from an application perspective, we categorize RC systems into two major categories: classification and regression problems. Within each category, various applications using different input modalities, such as images, sounds, and graphs, are summarized. Finally, we discuss novel perspectives on the future development of RC systems in three aspects: materials, architecture, and applications. For instance, multimodality and the neural architecture search (NAS) reservoir are discussed.

RC is a framework originally proposed in the early 2000s for training recurrent neural networks (RNNs). Currently, it is a vibrant and emerging AI research domain that has recently gained wide attention due to its extremely low training complexity. RC is an overarching concept that encompasses different RNN models, such as ESN,30 LSM,31 and the single node delayed feedback dynamical model.32 

ESNs were first introduced by Jaeger in 200130 and have gained popularity in various applications, such as time series prediction,30 speech recognition,36 and control tasks.37 

An ESN is a type of RNN that consists of an echo state layer and a readout map layer. The echo state layer is a recurrent layer with randomly generated invariant weights that serves as a dynamic memory characterized by its internal state. The reservoir states are utilized to train a linear readout layer, typically a single layer fully connected network serving as a regression or classification head, to produce the desired output, as shown in Fig. 1(a). The dynamics of the reservoir, or the relationship between the reservoir state x(t) at time t and the state x(t + 1) at time t + 1, is governed by the state evolution function as follows:
x(t + 1) = f(Win u(t + 1) + Wr x(t)). (1)
Here, u(t + 1) is the input at time t + 1, Win is the input weight matrix, and Wr is the recurrent weight matrix. f is an activation function, such as a hyperbolic tangent function (tanh). These weight matrices consist of random weights and remain fixed. Additionally, the relationship between the reservoir output y(t) and state x(t) at time t is governed by the following equation:
y(t) = Wout x(t), (2)
where Wout is the reservoir-to-output weight matrix. Linear regression is usually adopted to find the optimal Wout that minimizes the error between the output y(t) and the target. As only the weights of the readout layer are trained, the training process is simplified and the vanishing or exploding gradient issues commonly found in traditional RNNs are avoided. ESNs feature the “echo state property,” which states that the echo state layer’s internal dynamics should be rich and diverse enough to represent the input signals while also being stable enough to prevent chaotic behavior. When the echo state layer possesses this property, it can create a high-dimensional, nonlinear representation of the input signals, allowing the simple readout layer to learn the desired output more easily.
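To make the training procedure concrete, the following NumPy sketch implements Eqs. (1) and (2) with a ridge-regression readout. The sizes, weight scalings, and the sine-prediction task are illustrative choices, not taken from the literature surveyed here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 1 input channel, 100 reservoir neurons.
n_in, n_res = 1, 100

# Fixed random weights (Eq. (1)); rescaling Wr's spectral radius below 1
# is a common heuristic for the echo state property.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_r = rng.uniform(-0.5, 0.5, (n_res, n_res))
W_r *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_r)))

def run_reservoir(u_seq):
    """Collect the reservoir states x(t) for an input sequence."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W_r @ x)   # Eq. (1)
        states.append(x)
    return np.array(states)

# Only W_out is trained, here by ridge regression (Eq. (2)) on a toy task:
# predicting the next sample of a sine wave.
t = np.arange(400) * 0.1
u_seq, target = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u_seq)
reg = 1e-6
W_out = np.linalg.solve(X.T @ X + reg * np.eye(n_res), X.T @ target)
pred = X @ W_out
```

Note that the closed-form ridge solve replaces the gradient-descent loops of conventional RNN training, which is precisely where the training-cost savings of RC come from.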
FIG. 1.

Classification of RC systems. (a) ESN: an RNN with random and fixed weights and real-valued activations, fed to a single-layer trainable readout map. (b) LSM: an SNN variant of the RC, featuring spiking neurons to handle event data. (c) Single nonlinear node with delayed feedback: the node’s evolution depends on both current input and the delayed state.


The SNN variant of the RC, known as the LSM, was introduced by Maass et al. in 2002.31 The term “liquid” refers to the idea that the network’s activity resembles the constantly changing patterns in a liquid, where input signals can create various activity patterns. LSMs are particularly useful in dealing with event-type data, such as those acquired by dynamic vision and audio sensors.38–40 

The basic concept of LSMs is similar to that of ESNs, where a large spiking RNN with randomly generated invariant connections, called the liquid, is used as a dynamic memory. The liquid layer is followed by a readout map layer, as shown in Fig. 1(b). The main difference between LSMs and ESNs is that LSMs use spiking neurons (e.g., integrate-and-fire neurons) as their basic computational units, while ESN internal units are time-independent activation functions. The LSM comprises an input layer and a reservoir layer to extract spiking features from raw event inputs using randomly generated invariant synaptic weights.39 At time t, the incoming synaptic current I to the ith hidden neuron is the weighted summation of the N input spikes θα and the M recurrent input spikes θβ,
I(t) = Σ_{α=1}^{N} w(α,i) θα(t) + Σ_{β=1}^{M} w(β,i) θβ(t), (3)
where w(α,i) and w(β,i) are the randomly initialized weights of synapses interfacing the ith hidden neuron with the αth input neuron and βth hidden neuron, which stay fixed in the course of training. According to the leaky integrate-and-fire (LIF) neuron model, the dynamics of the membrane potential follow:
τm dui(t)/dt = −[ui(t) − urest] + (τm/cm) I(t), (4)
where τm, cm, and urest are constants, representing the membrane’s leaky time, capacitance, and resting potential, respectively. Once the membrane potential of the ith hidden neuron exceeds the firing threshold uth, the neuron produces a spike, i.e., θi(t) = 1, and then the membrane potential is reset to zero,
θi(t) = 1 and ui(t) → 0, if ui(t) ≥ uth, (5)
where the spike θi(t) is equivalent to the recurrent input spike θβ as presented in Eq. (3).
As suggested by Maass et al.,31 a memory-less readout can be used to classify the states of the liquid. We take the ANN-type readout as an example from the literature.38,41 Specifically, a counter accumulates the asynchronous spiking features and generates a synchronous signal oi for the ith neuron over a time window T,
oi = Σ_{t ∈ T} θi(t). (6)
The ANN readout layer receives accumulated neural action potentials from counters and infers labels at a predefined time. Structurally, the readout map is mostly a simple linear transformation or a fully connected layer,
Y = Wout O, (7)
where counter vector O consists of the signal oi, and Y is the output of LSM. In contrast to the SNN reservoir with fixed connection weights, the parameters Wout of the ANN readout layer require optimization.
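The LIF dynamics and counter readout above can be sketched in a few lines of NumPy. The sizes, weight distributions, and toy spike train below are illustrative assumptions; 1/cm is folded into the synaptic weights for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: N input channels, M liquid (hidden) neurons, T steps.
N, M, T = 20, 50, 100
dt, tau_m, u_rest, u_th = 1.0, 20.0, 0.0, 1.0

# Fixed random synaptic weights (Eq. (3)); never trained.
w_in = rng.normal(0.0, 0.4, (M, N))
w_rec = rng.normal(0.0, 0.2, (M, M))

def run_liquid(input_spikes):
    """input_spikes: (T, N) binary array. Returns spike counts o_i (Eq. (6))."""
    u = np.full(M, u_rest)
    theta = np.zeros(M)               # recurrent spikes from the previous step
    counts = np.zeros(M)
    for t in range(T):
        I = w_in @ input_spikes[t] + w_rec @ theta   # Eq. (3)
        u += dt / tau_m * (u_rest - u) + I           # leaky integration, Eq. (4)
        theta = (u >= u_th).astype(float)            # Eq. (5): fire...
        u[theta == 1.0] = 0.0                        # ...and reset to zero
        counts += theta
    return counts

spikes = (rng.random((T, N)) < 0.1).astype(float)    # toy random spike train
o = run_liquid(spikes)
# A linear readout Y = Wout O (Eq. (7)) would then be trained on o.
```

As in the ESN case, only the readout weights Wout acting on the counter vector would be optimized; the liquid itself stays fixed.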
The evolution of a single nonlinear node with delayed feedback32 depends not only on its current input but also on its past states, as shown in Fig. 1(c). The simplest form of such a system can be expressed by using a scalar delay differential equation,
ẋ(t) = f(x(t − τ), γJ(t)), (8)
where J(t) = I(t) × M(t) is the product of the input I(t) and the mask function M(t) at time t, γ is an adjustable parameter usually referred to as the input gain, f is a function describing the system’s dynamics, and τ is the time delay. The behavior of the system depends on both the current masked input J(t) and the delayed state x(t − τ). A key characteristic of such time-continuous delay systems is that their state space becomes infinite-dimensional, as the state at time t depends on the output of the nonlinear virtual node over the continuous interval [t − τ, t], with τ being the delay time. Although the dynamics of the delay system may settle onto a finite-dimensional attractor, they exhibit high dimensionality and short-term memory. Consequently, delayed feedback systems satisfy the definition of reservoirs while featuring reduced hardware complexity.
The reservoir’s transient dynamical response is read out by an output layer, which consists of linear weighted sums of the reservoir node states,
y(t) = Σ_{i=1}^{N} wi x(t − (τ/N)(N − i)), (9)
where N is the number of virtual nodes in the reservoir, wi is the weight of the ith node in the output layer, x(t − (τ/N)(N − i)) is the state of the ith virtual node at the delayed time t − (τ/N)(N − i), and τ is the delay time constant. The weights wi need to be updated through training to minimize the difference between the reservoir outputs and the target.
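A coarse time-discretized sketch of this scheme follows, with N virtual nodes updated once per input sample. The nonlinearity, mask, gain, and next-step prediction task are illustrative assumptions; a physical device would supply its own f and time constants:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters: N virtual nodes along one delay line.
N, gamma = 50, 0.5

def f(x_delayed, j):
    # Assumed node nonlinearity; a physical device defines its own f.
    return np.tanh(x_delayed + j)

def run_delay_reservoir(inputs):
    """Coarse discretization of Eq. (8): one pass over the N virtual
    nodes per input sample, each node updated from its own state one
    delay interval tau earlier."""
    mask = rng.uniform(-1.0, 1.0, N)   # fixed random mask M(t)
    x = np.zeros(N)                    # virtual-node states over [t - tau, t]
    states = []
    for I in inputs:
        J = gamma * I * mask           # gain-scaled masked input
        x = f(x, J)
        states.append(x.copy())
    return np.array(states)

# Readout (Eq. (9)): a trained linear combination of the node states.
u = np.sin(np.arange(200) * 0.1)
X = run_delay_reservoir(u)
target = np.roll(u, -1)                # toy task: next-step prediction
w = np.linalg.lstsq(X, target, rcond=None)[0]
```

The random input mask plays the role that random input weights play in an ESN: it diversifies the responses of the virtual nodes along the single delay line.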

Traditional digital computers are primarily based on von Neumann and Harvard architectures, which comprise input/output (e.g., sensors), memory, processing, and control units.42 In this configuration, data are first collected from sensors and then undergo digital–analog conversion before being stored in memory. Following this, the control unit manages the data transfer from memory to the processing units.43 Despite the von Neumann architecture remaining the dominant model for most general-purpose computers at present, it has raised significant concerns regarding the “von Neumann bottleneck,” or the energy and latency costs associated with data movement between physically separated sensors, memory, and processing units.35,44–46 To address these challenges, in-memory computing and in-sensor computing architectures have been proposed. RC systems with emerging memory naturally fall into these categories.

In-memory computing architecture embeds computational capabilities directly within the memory, thus eliminating the frequent data shuttling between memory and processing units. Figures 2(a) and 2(b) show the block diagram of a representative in-memory RC system. The data sampled by sensors are first passed to the in-memory computing module implementing the reservoir, followed by the readout map module.

FIG. 2.

System architecture of in-memory and in-sensor RC. (a) In-memory RC systems for ESN/LSM: Depending on the RC model, sensory inputs are either digitized or received by in-memory computing units implementing ESN/LSM. (b) In-memory RC systems for single node delayed feedback, where sensory inputs are directly sent to emerging devices acting as single nonlinear nodes with delayed feedback. The readout maps are typically implemented digitally. (c) In-sensor RC system: bringing computation closer to the front end by directly processing input data within sensors of different modalities, effectively integrating RC functionality into the sensor.


The in-memory reservoir module consists of emerging memory, peripherals, and input/output interfaces. The reservoir model and the associated circuit depend on the choice of emerging memory. Volatile electronic memory (e.g., discrete volatile resistive switches) functions as a single nonlinear node with delayed feedback, given the inherent dynamics, which can be described by a delay differential equation.47–50 Non-volatile memory (e.g., crossbar arrays of non-volatile resistive switches) can physically implement ESNs and LSMs, primarily capitalizing on the intrinsic programming stochasticity to produce fixed random weights of ESNs51,52 and LSMs,38 which significantly reduces training overhead.

The readout map module can be implemented either digitally (e.g., in CMOS51) or in analog (e.g., with emerging non-volatile memory24). The former digitizes the reservoir outputs and features precise, fast readout weight updates. The latter benefits inference energy efficiency via in-memory computing. The trade-off depends on the readout weight population and the programming energy of the emerging memory.24,26,53,54

In-sensor computing architecture physically integrates data acquisition, memory, and processing, notably all in the analog domain, thus eliminating the need for analog-to-digital conversion and mitigating the hardware challenges in heterogeneous integration of sensors, memory, and logic circuits. Figure 2(c) illustrates the block diagram of a representative in-sensor RC system, consisting of an in-sensor reservoir module followed by the readout map module.

The in-sensor reservoir module mostly incorporates discrete nonlinear nodes with delayed feedback. Recent implementations of such nodes leverage emerging sensors, which are paired with peripheral circuits (analog–digital conversion), input/output interfaces, and buffers. These emerging sensors can work with various input modalities, including optical images,55–57 electrical signals such as electroencephalogram (EEG) and electrocardiogram (ECG) signals,27,58,59 mechanical signals like tactile signals,60 and chemical signals such as odor.61 Moreover, recent efforts have been made to develop multimodal in-sensor reservoirs to fuse the information carried by individual modalities.60 The underlying ionic or electronic dynamics of the sensors are essentially governed by the delay differential equation, thereby implementing sensing, memory, and processing within the same device.

The in-sensor reservoir outputs are typically digitized before being sent to the digital readout map for post-processing in a manner similar to that of in-memory reservoirs.

Finally, to illustrate the advantages of the in-memory/in-sensor RC paradigm, Table I summarizes the inference energy overhead per input of RC systems relative to traditional digital hardware, and their training cost relative to trainable DNN models. Overall, both in-memory and in-sensor RC systems reduce energy consumption by factors ranging from several to hundreds for the ESN, LSM, and single-node delayed-feedback models. Moreover, thanks to the untrained random weights in RC systems, the training cost can be reduced by up to hundreds of times compared to trainable DNNs.

TABLE I.

The energy efficiency and training cost reduction for in-memory/in-sensor reservoir computing.

RC types | Computing paradigm | Energy efficiency | Training cost (compared to trainable DNNs)
ESN51 | In-memory reservoir computing | About 2–40 times (compared to digital hardware) | More than 90% reduction
LSM38 | In-memory reservoir computing | About 23–150 times (compared to digital hardware) | More than 100× reduction
Single node delayed feedback47,48,57 | In-memory/in-sensor reservoir computing | 3–6 nJ per input | About 5–15× reduction

The in-memory and in-sensor reservoir modules in Fig. 2 comprise analog circuits built from emerging devices, which are also widely used to implement synapses and neurons in neuromorphic computing.62–65 Such devices and materials are not only energy-efficient due to their inherent in-sensor and in-memory computing abilities, but they also offer potentially higher integration density and smaller RC system footprints compared to digital alternatives. Moreover, these emerging devices streamline in-memory and in-sensor chip fabrication by eliminating the heterogeneous integration of sensing, memory, and processing units. The reservoir module is characterized by nonlinear dynamics and fading memory; accordingly, dynamic nonlinearity and fading memory are the most important prerequisites for emerging memory device-based analog circuits serving as hardware reservoirs.24,66,67 In this section, we review the devices and materials that meet these requirements and are used in both in-memory and in-sensor reservoirs, focusing on their mechanisms and strengths.

Two types of emerging memory devices are used for in-memory RC. For ESNs and LSMs, their weight matrices are random and fixed. In such cases, non-volatile memory (e.g., redox resistive switches, ferroelectric tunneling junctions) crossbar arrays are utilized to encode the weight matrices and perform analog matrix-vector multiplication using Ohm’s law and Kirchhoff’s law. This in-memory computing scheme co-locates both memory and processing, featuring high energy efficiency.68–70 Furthermore, the inherent programming variation of these devices is no longer a disadvantage in neural network training but instead offers a unique opportunity to provide truly random and high-density weight matrices at a low cost compared to digital random number generation.71–73 Regarding the single nonlinear node with time-delayed feedback, emerging memory devices exhibiting short-term memory can physically implement the delay differential equation and thus represent the dynamics of the reservoir. Such memory can be ion-driven, including redox oxide and peroxide, or result from electronic effects such as charge trapping or polarization, as exhibited by ferroelectric devices. As shown in Fig. 3, four different types of in-memory devices are introduced.

FIG. 3.

Emerging memory devices for in-memory RC. (a) Redox oxide resistive switches show short-term memory attributed to Ag atom diffusion. Reproduced with permission from Wang et al., “Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing,” Nat. Mater. 16, 101–108 (2017). Copyright 2017 Springer Nature. (b) Perovskite resistive switch shows volatile behavior with weak AgI formation and dissolution. Reproduced with permission from Zhu et al., “Memristor networks for real-time neural activity analysis,” Nat. Commun. 11, 2439 (2020). Copyright 2020 Author(s), licensed under a Creative Commons Attribution 4.0 License. (c) Interface-type redox oxide resistive switch shows volatile behavior due to the oxygen ions migration-induced Schottky-like barrier change. Reproduced with permission from Zhong et al., “Dynamic memristor-based reservoir computing for high-efficiency temporal signal processing,” Nat. Commun. 12, 408 (2021). Copyright 2021 Author(s), licensed under a Creative Commons Attribution 4.0 License. (d) Interface-type perovskite resistive switch with inert Au electrode shows volatile behavior under positive and negative ions co-migration. Reproduced with permission from Yang et al., “A perovskite memristor with large dynamic space for analog-encoded image recognition,” ACS Nano 16, 21324–21333 (2022). Copyright 2022 ACS Publications. (e) Ferroelectric tunneling junction shows short-term memory with the ultra-thin (3.5 nm) HZO film. From Yu et al., 2021 Symposium on VLSI Technology. Copyright 2021 JSAP. (f) Nanowire network coupled via junctions of Ag ions-induced short-term memory in a random topology, forming a complicated nonlinear system. Reproduced with permission from Milano et al., “In materia reservoir computing with a fully memristive architecture based on self-organizing nanowire networks,” Nat. Mater. 21, 195–202 (2022). Copyright 2022 Springer Nature.


1. Oxide redox resistive switches

Redox resistive switches [Figs. 3(a) and 3(c)] are capacitor-like. They operate on the formation or dissolution of conductive filaments in the dielectric layer due to redox reactions and ion migration. Depending on whether the filaments spontaneously rupture, redox oxide resistive switches are categorized into volatile and non-volatile types, which are employed by single nodes with delayed feedback and ESNs/LSMs, respectively.

Volatile redox oxide devices, which are widely reported for their short-term memory, are used to implement delay differential equations of single nonlinear nodes with time-delayed feedback. Such short-term memory can be attributed to cations,24,34,74,75 such as Ag and Cu ions. When Ag and Cu serve as electrodes of redox resistive switches, they are oxidized to Ag+ and Cu+ and migrate under an external electric field. They are subsequently reduced to atoms, gradually forming conductive paths. Upon voltage removal, the metallic channels coalesce into particles, causing the conductance to spontaneously decay. This behavior of input history-dependent filament growth and decay meets the definition of a general memristor and the delay differential equation.

Short-term memory can also be attributed to anions, as reported in TiOx,47,53,76 WOx,48,49 and AlOx.77 Taking a Ti/TiOx/TaOy/Pt resistive switch as an example,47 oxygen ions or vacancies migrate in the TaOy layer, driven primarily by the applied electric field. The TiOx layer serves as a reservoir for oxygen ions, creating a chemical potential difference that induces oxygen ion diffusion. A positive voltage on the top electrode causes oxygen ions to drift away from the Pt/TaOy interface due to the electric field, reducing the Schottky-like barrier and increasing conductance. When the external bias is removed, oxygen ions diffuse back because of the chemical gradient, and the device gradually relaxes to its high-resistance state. As with the cation case, such conductance evolution meets the general definition of a memristor and the delay differential equation.
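The drift-then-relax conductance behavior described above can be captured by a toy first-order model: conductance grows under an applied voltage and decays back toward its resting value once the bias is removed. All constants and the functional form below are illustrative assumptions, not fitted to any reported device:

```python
import numpy as np

# Toy volatile-memristor model (constants illustrative): conductance g
# grows under an applied voltage v and relaxes toward g_min once v = 0,
# mimicking field-driven ion drift followed by diffusion-driven decay.
g_min, g_max = 0.01, 1.0
tau_decay, k_drive = 50.0, 0.1   # relaxation time and drive strength (assumed)

def step(g, v, dt=1.0):
    dg = k_drive * v * (g_max - g) - (g - g_min) / tau_decay
    return float(np.clip(g + dt * dg, g_min, g_max))

g, trace = g_min, []
pulses = [1.0] * 20 + [0.0] * 80   # a 20-step write pulse, then relaxation
for v in pulses:
    g = step(g, v)
    trace.append(g)
# trace rises during the pulse and decays afterward: a short-term memory.
```

The input-history-dependent rise and spontaneous decay of g is exactly the fading-memory property that lets such a device serve as the nonlinear node of a delay-based reservoir.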

Non-volatile redox resistive switches can physically implement the random and fixed weights of ESNs and LSMs.23 Anion-based redox oxides like TaOx and HfOx are widely reported for their stable and analog conductance due to the formation of oxygen vacancy conductive filaments. The inherent programming stochasticity of non-volatile redox resistive switches, such as those based on TaOx, has been exploited to implement an echo state graph neural network.51 Specifically, a fixed voltage is applied to an array of TaOx resistive switches. The resultant conductance of the array exhibits a normal distribution due to the device-to-device variation in breakdown voltage, which implements the fixed random weights of the echo state graph layer and offers a significant energy efficiency boost through in-memory computing.
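A minimal sketch of this idea follows: sample a conductance matrix from a normal distribution (standing in for device-to-device programming variation) and use it directly as the fixed random weight matrix of a matrix-vector multiply. The array size, mean, and spread are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch (values illustrative): after a fixed programming voltage, device-
# to-device variation yields normally distributed conductances, giving a
# free random weight matrix for the echo state layer.
rows, cols = 64, 64
g_mean, g_std = 100e-6, 20e-6          # siemens, assumed

G = np.clip(rng.normal(g_mean, g_std, (rows, cols)), 1e-6, None)

# In-memory matrix-vector multiplication via Ohm's and Kirchhoff's laws:
# applying a voltage vector v to the rows yields column currents i = G.T @ v.
v = rng.uniform(0.0, 0.2, rows)
i = G.T @ v
```

In hardware, the multiply-accumulate happens in a single analog read step, which is the source of the energy-efficiency boost over digital random number generation plus digital matrix arithmetic.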

2. Perovskite resistive switch

Perovskite [Figs. 3(b) and 3(d)], with its ABX3 structure, features rich ionic dynamics,26,78–81 which can be divided into cation-dominated and cation–anion co-driven types. These dynamics can be tailored for RC, such as the single nonlinear node with delayed feedback. In terms of cation-driven ionic dynamics, perovskite resistive switches with Ag electrodes exhibit tunable volatility, enabling single nonlinear nodes with delayed feedback. The reaction between Ag+ and the X ions in perovskites is the key to tunable data retention.78 Ag+ and X ions migrate under an external electric field, thus changing the perovskite conductivity. Weak stimulation results in volatile resistive switching due to the formation of an unstable AgX layer caused by weak drift behavior, which meets the general definition of a memristor and the delay differential equation. As the external stimulation increases, the AgX layer strengthens, and thick filaments form via ion drift in addition to ion diffusion. As a result, volatile and non-volatile perovskite resistive switches can simultaneously implement single nonlinear nodes with delayed feedback and RC readout maps, respectively. In terms of cation–anion co-driven dynamics, perovskite resistive switches with inert electrodes exhibit volatile resistive switching suitable for the single nonlinear node with delayed feedback, stemming from an interface effect under voltage bias.82 When external voltages are applied, positive and negative ions with extremely low activation energy within the perovskite accumulate at the device’s upper and lower interfaces. This changes the interfacial barriers, enhances carrier injection, and alters the device conductance. Upon voltage removal, the interfacial barriers gradually recover, yielding volatile resistive switching that meets the general definition of a memristor and the delay differential equation.

3. Ferroelectric tunneling junctions

Ferroelectric tunneling junctions (FTJs) [Fig. 3(e)] comprise two metal electrodes separated by a thin ferroelectric layer. When a voltage is applied across the electrodes, the ferroelectric domains switch polarization directions, which modulates the tunneling barrier and thus changes the tunneling current through the junction. As the associated lattice distortion is very limited, FTJs feature fast switching at ultra-low switching energy.25,83,84 The tunable switching energy also yields different data retention for different types of RC. Non-volatile multi-domain FTJs feature multi-bit data storage per cell as well as intrinsic device variations, and crossbar arrays of FTJs with controllable device-to-device variation have been proposed to implement fixed random weights for ESNs and LSMs.85 Volatile FTJs, in contrast, derive their short-term memory from ferroelectric degradation.50 Because the energy barrier between states scales with the ferroelectric domain volume, such degradation is frequently observed in FTJs with ferroelectric layers thinner than 7 nm.86 Furthermore, by applying special electrical operation schemes that impose a small pulse sequence of opposite polarity after a polarization pulse, the conductance decay time constants can be modulated to realize a reconfigurable single nonlinear node with delayed feedback.50 

4. Nanowire

The nanowire network [Fig. 3(f)] resembles the random and complex neural networks of the brain. It is a collection of nonlinear dynamic nodes (i.e., junctions between two nanowires) interconnected in a random topology like that of an ESN or LSM.28,87–90 Like neurons interacting through synapses, nanowires interact with each other via intersections of different core/shell materials. For example, volatile redox resistive switches form at the junctions between Ag nanowires coated with polyvinylpyrrolidone (PVP).54 When a voltage is applied across intersecting nanowires, it triggers the anodic dissolution of Ag into Ag+ ions. These ions then travel through the PVP-insulated nanowire shell layer, forming a conductive bridge. This bridge adjusts the conductivity of the junction, creating a nonlinear dynamic node. Such a network of coupled nonlinear dynamic nodes with a random coupling topology produces the complex nonlinear dynamics and short-term memory of a reservoir.
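A minimal numerical sketch of such a random-network reservoir follows, in the ESN-style abstraction. The sizes (100 junction nodes, roughly 10% random coupling) are illustrative choices, not a measured nanowire topology; a physical mesh would fix the coupling by its geometry rather than by software.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical network sizes (illustrative, not from any fabricated mesh).
n_nodes, n_in, T = 100, 1, 200

# Sparse random coupling between junctions, rescaled so the spectral
# radius is < 1 (echo state property: the network forgets old inputs).
W = rng.standard_normal((n_nodes, n_nodes))
W *= (rng.random((n_nodes, n_nodes)) < 0.1)      # ~10% of junctions coupled
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.standard_normal((n_nodes, n_in))

u = np.sin(np.arange(T) * 0.1).reshape(T, n_in)  # toy input stream
x = np.zeros(n_nodes)
states = []
for t in range(T):
    # Each junction is a nonlinear dynamic node driven by its neighbors.
    x = np.tanh(W @ x + W_in @ u[t])
    states.append(x.copy())
states = np.array(states)                        # reservoir trajectory
```

The recorded trajectory `states` is what a linear readout would be trained on; the reservoir itself stays fixed.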

In-sensor RC seeks to integrate data acquisition, memory, and computation within a single unit, maximizing the energy and area efficiency of reservoirs. In-sensor reservoirs are mostly single nonlinear nodes with delayed feedback. They employ novel optoelectronic devices, electronic memory, and tactile sensors for vision,55–57 bio-electrical signals (e.g., EEG and ECG),27,58,59 and tactile signals,60 respectively. The dynamic responses of such sensors feature a short-term memory, matching the requirement of delay differential equations and thus implementing the nonlinear nodes with delayed feedback. Figure 4 shows different types of in-sensor devices.

FIG. 4.

Emerging devices for in-sensor RC. (a) An organic electrochemical transistor shows reconfigurable volatile and non-volatile properties, whereby light-induced ions can be injected into the amorphous or crystalline region under low or high gate potentials, respectively. Reproduced with permission from Wang et al., “An organic electrochemical transistor for multimodal sensing, memory and processing,” Nat. Electron. 6, 281–291 (2023). Copyright 2023 Author(s), licensed under a Creative Commons Attribution 4.0 License. (b) A 2D material-based photoelectric device shows electrical and optical pulse switching behavior with dual Sn and S vacancies. Reproduced with permission from Sun et al., “In-sensor reservoir computing for language learning via two-dimensional memristors,” Sci. Adv. 7, eabg1455 (2021). Copyright 2021 AAAS. (c) A perovskite photodetector shows slow current decay enabled by inserting a P(VDF-TrFE) layer as a potential well. Reproduced with permission from Lao et al., “Ultralow-power machine vision with self-powered sensor reservoir,” Adv. Sci. 9, 2106092 (2022). Copyright 2022 Author(s), licensed under a Creative Commons Attribution 4.0 License. (d) An oxide photodetector is used as a photo-synapse due to its persistent photoconductivity. Reproduced with permission from Zhang et al., “In-sensor reservoir computing system for latent fingerprint recognition with deep ultraviolet photo-synapses and memristor array,” Nat. Commun. 13, 6590 (2022). Copyright 2022 Author(s), licensed under a Creative Commons Attribution 4.0 License.


1. Organic electrochemical transistor

Organic electrochemical transistors (OECTs) function by moving ions from the gate electrolyte into the channel and vice versa, doping the channel and, consequently, adjusting its conductance. They respond to both light and voltage stimulation while featuring short-term memory.57,91–94 Under light stimulation, charges are generated throughout the channel during photoexcitation. The majority of mobile carriers are electrons, which contribute to the rise of the photocurrent, whereas holes are localized immediately after being generated. Upon switching off the light, the channel current first decreases rapidly due to the recombination of electrons with holes in shallow traps, followed by a gradual decay due to inefficient recombination of electrons with holes in deeper traps. This yields short-term memory and implements a single nonlinear node with delayed feedback. Under voltage stimulation, OECTs respond to EEG and ECG signals while displaying short-term memory, which is based on ion injection into and diffusion back from the channel. Furthermore, the characteristic time of the short-term memory, which depends on the ion trapping energy, can be tuned in a vertical traverse structure,27 as shown in Fig. 4(a), where a small (large) gate voltage injects ions into the amorphous (crystalline) region, leading to volatile (non-volatile) behavior.
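The two-stage decay described above (fast shallow-trap recombination followed by slow deep-trap release) can be caricatured as a sum of two exponentials. The amplitudes and time constants below are placeholders for illustration, not measured device values.

```python
import numpy as np

def photocurrent_decay(t, i_fast=0.7, i_slow=0.3, tau_fast=0.05, tau_slow=1.0):
    """Biexponential decay after the light is switched off: a fast
    component (shallow-trap recombination) plus a slow one (deep traps).
    All parameters are illustrative placeholders."""
    return i_fast * np.exp(-t / tau_fast) + i_slow * np.exp(-t / tau_slow)

t = np.linspace(0, 2, 201)
i = photocurrent_decay(t)
```

The slow tail is what stretches the device's memory window: long after the fast component has vanished, the current still encodes recent illumination.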

2. 2D materials

2D materials are crystalline solids composed of a single layer or a few layers of atoms, often exhibiting unique optoelectronic properties. Various 2D material-based field-effect transistors have been reported for in-sensor computing, based on MoS2,95–98 h-BN,99 In2Se3,60,100 WSe2,101,102 SnS,56 and layered black phosphorus.103,104 Taking SnS as an example [Fig. 4(b)], the channel conductance exhibits nonlinear short-term memory, attributed to charge trapping and detrapping in defect states associated with Sn and S vacancies. The device displays synaptic depression and facilitation under electrical and optical stimulation, respectively, thus implementing a single nonlinear node with delayed feedback.

3. Perovskite

In addition to in-memory computing, perovskites are also widely used in solar cells. Their photoresponse makes perovskite optoelectronic cells contenders for in-sensor RC.105–107 For instance, as shown in Fig. 4(c), a self-powered Au/P(VDF-TrFE)/Cs2AgBiBr6/ITO photodetector served as a nonlinear node with delayed feedback for in-sensor RC.108 Under optical stimulation, the separation of photogenerated carriers by the built-in electric field of the Schottky barrier increases the photocurrent. After the stimulus is removed, the high binding energy of the valence electrons in the designed P(VDF-TrFE) layer forms a potential well for holes at the P(VDF-TrFE)/Cs2AgBiBr6 interface. This potential well hinders the migration of photogenerated holes toward the Au electrode, leading to a gradual current decay that meets the definition of a delay differential equation.

4. Oxide

Metal oxide photodetectors cover a wide spectrum of electromagnetic radiation owing to the broad range of achievable bandgaps55,109 and are hence candidates for in-sensor reservoirs. For instance, a-GaOx photo-synapses [Fig. 4(d)] have been reported to implement in-sensor reservoirs via the persistent photoconductivity effect, which involves the generation, trapping, and detrapping of non-equilibrium carriers within the a-GaOx. Under ultraviolet light, the number of photogenerated electron–hole pairs increases in the a-GaOx layer, while the holes drift toward the electrode. When the light is turned off, electrons and holes gradually recombine, exhibiting the short-term memory required of a nonlinear node with delayed feedback.

RC has been widely used in statistical classification and regression, as illustrated in Table II. For classification, RC has been employed across different data modalities, such as images, audio, events, general graphs, spatiotemporal sequences, and multimodal fusion. In the case of regression problems, RC is mainly utilized for time series datasets to forecast future trends in time series signals.

TABLE II.

RC systems can be categorized into two main application areas: regression and classification. Regression tasks primarily deal with time series signals. Classification tasks, which are extensively studied in RC systems, can be further divided by input data modality into images, audio, event-driven data, graph data, sequence data, and multimodal data fusion. The corresponding datasets can be found in the cited RC literature.

Task | Data type | Dataset
Regression | Time series | Hénon map,47 nonlinear autoregressive moving average (NARMA),110,111 Mackey–Glass time series,48,110,111 Santa Fe laser time series110
Classification | Image | Handwritten digits,24,49,50,54,57,60,111 self-made pattern images,54,56 self-made noisy images,49 FVC 2002 database55
Classification | Sound | NIST TI46 database47,48
Classification | Event data | N-MNIST,38 N-TIDIGITS,38 DVS128 Gesture,57 neural firing patterns26
Classification | Graph | MUTAG, COLLAB, and CORA51
Classification | Sequence | MIT-BIH heart arrhythmia database,53,91 PTB-XL,27 self-made gesture data25,53
Classification | Multimodal | Tactile and visual digits,60 audio and visual digits,38 audio and electrophysiological signals112

RC has been extensively explored for classification problems using both in-memory and in-sensor implementations. These approaches have been categorized into six major classes based on the modality of input signals, as shown in Fig. 5.

FIG. 5.

RC system for statistical classification. (a) In-memory nonlinear nodes with delayed feedback RC for MNIST handwritten digit classification. Reproduced with permission from Du et al., “Reservoir computing using dynamic memristors for temporal information processing,” Nat. Commun. 8, 2204 (2017). Copyright 2017 Author(s), licensed under a Creative Commons Attribution 4.0 License. (b) In-memory nonlinear nodes with delayed feedback RC for spoken-digit recognition. Reproduced with permission from Moon et al., “Temporal data classification and forecasting using a memristor-based reservoir computing system,” Nat. Electron. 2, 480–487 (2019). Copyright 2019 Springer Nature. (c) In-memory nonlinear nodes with delayed feedback RC for classification of neural firing pattern data. Reproduced with permission from John et al., “Reconfigurable halide perovskite nanocrystal memristors for neuromorphic computing,” Nat. Commun. 13, 2074 (2022). Copyright 2022 Author(s), licensed under a Creative Commons Attribution 4.0 License. (d) In-memory ESNs for classification of the MUTAG and COLLAB graph datasets.51 Reproduced with permission from Wang et al., “Echo state graph neural networks with analog random resistive memory arrays,” Nat. Mach. Intell. 5, 104–113 (2023). Copyright 2023 Author(s), licensed under a Creative Commons Attribution 4.0 License. (e) In-sensor nonlinear nodes with delayed feedback RC for real-time cardiac disease diagnoses. Reproduced with permission from Wang et al., “An organic electrochemical transistor for multimodal sensing, memory and processing,” Nat. Electron. 6, 281–291 (2023). Copyright 2023 Author(s), licensed under a Creative Commons Attribution 4.0 License. (f) Multimode RC for multisensory handwritten digit recognition. Reproduced with permission from Liu et al., “An optoelectronic synapse based on α-In2Se3 with controllable temporal dynamics for multimode and multiscale reservoir computing,” Nat. Electron. 5, 761–773 (2022). Copyright 2022 Springer Nature.


Images are inherently spatial signals; converting them into spatiotemporal signals makes them compatible with RC. In terms of in-memory computing, oxide redox resistive switches and FTJs implementing nonlinear nodes with delayed feedback have been benchmarked on datasets such as MNIST.24,49,50,111 For instance, a crossbar array of redox resistive switches serves as the nonlinear nodes of a delayed-feedback RC, while a digital computer performs supervised training of a readout layer to classify the MNIST dataset.49 In addition, redox resistive switches, rather than digital hardware, have been used to implement the readout map, achieving an accuracy of 83% on the same dataset.24 As for in-sensor computing, the current approach mainly uses optoelectronic devices to realize delayed-feedback RC for images, such as handwritten digits from the MNIST dataset,57 garment images sampled from the Fashion-MNIST dataset,57 letter images from the E-MNIST dataset,57 and Korean letter images.56 Moreover, to enhance image-recognition performance, novel reservoir architectures have been proposed; for instance, a rotation-based architecture employs an ensemble of nonlinear nodes and dynamically rotates the links between the input channels and the nonlinear nodes.111 
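One common spatial-to-temporal encoding (used, e.g., in the Du et al. memristor demonstration cited above) feeds each image row pixel by pixel into a dynamic node as a pulse stream. The sketch below caricatures this with a toy volatile node whose decay constant is an arbitrary illustrative choice: because the state fades between pulses, the final reading encodes where in the row the pixels sit, not merely how many there are.

```python
import numpy as np

def row_to_feature(row, tau=3.0):
    """Feed one binarized image row, pixel by pixel, into a toy volatile
    node; the final state depends on pulse timing, not just pulse count."""
    w = 0.0
    for p in row:
        w = w * np.exp(-1.0 / tau) + 0.5 * p   # decay, then pulse-driven rise
    return w

def image_to_features(img):
    # One dynamic node per row: the 2D image collapses into a short
    # feature vector of reservoir states for the trained readout.
    return np.array([row_to_feature(r) for r in img])

# Two one-row patterns with the same pixel count but different timing:
a = np.array([[1, 1, 0, 0]])
b = np.array([[0, 0, 1, 1]])
fa, fb = image_to_features(a), image_to_features(b)
```

Pattern `b` ends with its pulses, so its state has decayed less, and the two patterns map to distinct features that a linear readout can separate.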

Audio signals are temporal in nature, and RC can extract features from them. Large parallel RC systems have been built with in-memory computing by connecting multiple delayed-feedback nonlinear nodes in parallel; such systems have achieved a word error rate as low as 0.4% on spoken-digit datasets.47 Dynamic tungsten oxide (WOx) memristor-based reservoirs have exhibited comparable performance.48 

Event data, sparse representations generated by dynamic vision or audio sensors, mimic the signals received by human eyes or ears. This data type is intrinsically sparse, offering significant energy savings while preserving privacy and security. Owing to their inherent temporal dimension, event data are well suited for RC. In-memory RC implementations, such as LSMs, have been used for zero-shot learning feature alignment between the N-MNIST and N-TIDIGITS datasets.38 In-sensor approaches have employed organic nonlinear dynamic nodes to construct reservoir-in-pixel architectures that classify the DVS128 gesture dataset.57 Additionally, reconfigurable halide perovskite-based nonlinear dynamic nodes have achieved ∼90% classification accuracy on four types of neural firing patterns.26 

General graphs, comprising nodes and edges, are natural representations of molecules and social networks. Message passing on graphs can be treated temporally, making graphs compatible with RC. In in-memory reservoirs, each graph node can be associated with an ESN. These ESNs share the same weights and are coupled according to the graph topology. ESN-based graph embeddings have demonstrated significant energy efficiency improvements when classifying the MUTAG molecule dataset and the COLLAB citation network dataset.51 
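A minimal sketch of such a shared-weight graph ESN follows. The hidden size, iteration count, and sum-pooled embedding are illustrative choices; in the in-memory realization, the fixed random matrices would be mapped onto resistive crossbar arrays rather than stored in software.

```python
import numpy as np

rng = np.random.default_rng(1)

def graph_esn_embedding(adj, feats, n_hidden=16, n_iter=10):
    """Shared-weight echo state update on a graph: every graph node runs
    the same fixed random ESN, and states are exchanged along edges
    (message passing treated as a temporal recurrence)."""
    n, d = feats.shape
    W_in = rng.standard_normal((n_hidden, d)) * 0.5
    W = rng.standard_normal((n_hidden, n_hidden))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # echo state scaling
    x = np.zeros((n, n_hidden))
    for _ in range(n_iter):
        # Each node mixes its neighbors' states (adj @ x) with its input.
        x = np.tanh((adj @ x) @ W.T + feats @ W_in.T)
    return x.sum(axis=0)                           # pooled graph embedding

# A toy 3-node path graph with one-hot node features
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
emb = graph_esn_embedding(adj, np.eye(3))
```

Only a readout on the pooled embedding would be trained; the random reservoir weights stay fixed, which is what makes the crossbar implementation attractive.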

Spatiotemporal signals generated by IoT devices, such as wearable sensors, can be processed by RC thanks to their temporal dimension. In-memory approaches have utilized single-node delayed-feedback RC to recognize health categories in the MIT-BIH arrhythmia database and self-collected gesture data.53 High-density 3D-stacked redox resistive switches have been employed for dynamic gesture classification.25 In-sensor methods have implemented organic electrochemical transistors as nonlinear nodes with delayed feedback for diagnosing cardiac diseases.27 

Multimodal data fusion has been explored in RC systems due to its rich representation and strong generalization capabilities. The recognition and perception of multimodal data are more biologically plausible, as they resemble the information acquisition process in the human brain. Recent research has focused on equipping RC with multimodal inputs, such as tactile, visual, and auditory signals. For instance, tactile and visual signal combinations have been employed in RC systems for digit recognition, resulting in significant improvements in forward inference performance.60 RC systems have also been applied to touchless user interfaces for virtual reality, leveraging their acoustic and electrophysiological perception capabilities to provide a more immersive experience resistant to interference.112 Furthermore, LSMs implemented with redox resistive switches have demonstrated multimodal zero-shot transfer learning using event visual and audio datasets.38 

Regression aims to establish the relationship between independent and dependent variables. A representative regression problem is fitting or predicting chaotic systems, as shown in Fig. 6. This is challenging primarily because of the positive Lyapunov exponent characterizing chaotic systems, which leads to exponential growth in the separation between closely related trajectories; as a result, even minor prediction errors quickly cause significant divergence from the ground truth.48,113 In-memory reservoirs based on nonlinear nodes with delayed feedback have been used for regression tasks such as the Hénon map,47 NARMA,110,111 Mackey–Glass time series, and Santa Fe laser time series.48,110,111 These reservoirs capitalize on the short-term memory and nonlinear properties of redox resistive switches.47,48,110 Additionally, an RC system built entirely on redox resistive switches, featuring a novel rotating reservoir architecture, has been used for Mackey–Glass time series prediction; in this system, the readout layer is implemented with non-volatile redox resistive switches.111 
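As a software reference point, the Mackey–Glass one-step prediction task can be set up in a few lines: a fixed random reservoir transforms the input stream, and only the linear readout is trained, by ridge regression. All sizes and scalings below are illustrative defaults, not those of any cited hardware system.

```python
import numpy as np

rng = np.random.default_rng(7)

# --- Mackey-Glass series (Euler discretization, delay tau = 17) ---
T, tau = 1500, 17
x = np.full(T, 1.2)
for t in range(tau, T - 1):
    x[t + 1] = x[t] + 0.2 * x[t - tau] / (1 + x[t - tau] ** 10) - 0.1 * x[t]

u, target = x[:-1], x[1:]            # one-step-ahead prediction task

# --- Fixed random reservoir (ESN-style), never trained ---
n = 100
W_in = rng.uniform(-0.5, 0.5, n)
W = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.1)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # echo state scaling
s = np.zeros(n)
S = np.zeros((len(u), n))
for t in range(len(u)):
    s = np.tanh(W @ s + W_in * u[t])
    S[t] = s

# --- Train only the linear readout (ridge regression), as in RC ---
washout, split = 100, 1000
A, b = S[washout:split], target[washout:split]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n), A.T @ b)
pred = S[split:] @ w_out
nrmse = np.sqrt(np.mean((pred - target[split:]) ** 2) / np.var(target[split:]))
```

Because the Lyapunov exponent of the Mackey–Glass system is positive, free-running (closed-loop) predictions eventually diverge even when the one-step error measured here is small.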

FIG. 6.

RC systems for regression. Mackey–Glass time series prediction with a rotating in-memory RC system. Reproduced with permission from Liang et al., “Rotating neurons for all-analog implementation of cyclic reservoir computing,” Nat. Commun. 13, 1549 (2022). Copyright 2022 Author(s), licensed under a Creative Commons Attribution 4.0 License.


At present, although in-memory and in-sensor RC with emerging memory have shown promising results on certain tasks, they still cannot match traditional DNNs on digital hardware for many others. This paper provides an outlook on potential directions for future RC systems in three aspects: hardware, architecture, and application, as illustrated in Fig. 7.

FIG. 7.

The outlook for future RC systems. Hardware-wise, in-memory/in-sensor devices with large-scale integration capability, CMOS process compatibility, and function diversity are favored for RC implementations.114–116 Device properties, including device variation, switching speed, and endurance cycle, are analyzed for different applications. In addition, how to partition RC systems into analog and digital sub-systems matters to the overall efficiency and footprint.51,53,117 Architecture-wise, multimodal inputs and deep in-sensor RC will increase the spectrum of applications. Moreover, NAS could be employed for automatic RC hardware–software co-design. Application-wise, there might be a transition from simple classification and regression to tasks like generation (e.g., chatbots), cybersecurity (e.g., data privacy and DNN protection), and system control (e.g., robotics).


Hardware-wise, we summarize challenges with both single devices and systems, as shown in the bottom row of Fig. 7.

At the device level, no current emerging device simultaneously provides large-scale integration, CMOS compatibility, and multimodal sensing capabilities. In-memory devices, such as redox resistive switches and FTJs, feature high integration density and CMOS compatibility but frequently lack multimodal sensing capabilities. For example, monolithic integration of 4M redox resistive switches with CMOS has been reported,114 and CMOS compatibility also underpins the high yield of redox resistive switches. For 2D material-based optoelectronic devices, a representative in-sensor RC solution, large-scale integration remains difficult due to challenges with wafer-scale transfer or CVD growth of heterostructures; as such, 2D material-based crossbar arrays still show lower integration density and relatively poor yield.115,116 In-sensor devices based on functional materials, such as perovskites, 2D materials, and organic materials, offer a wealth of multimodal sensing opportunities but face substantial challenges in large-scale array fabrication and homogeneous integration.

Along with Table III, we examine and compare several electrical properties of different material–device combinations for RC, namely variation, switching speed, and endurance. Device variations, including cycle-to-cycle and device-to-device variation, negatively impact the performance of RC models. Homogeneous ion motion in OECTs and perovskite resistive switches may achieve lower cycle-to-cycle variation than filamentary switching in redox resistive switches.27,38,81 Device-to-device variation is similar among the different material–device pairs and could likely be improved using advanced CMOS-compatible manufacturing.27,51,81,85,95 Regarding switching, it is essential to achieve application-specific switching incubation and relaxation times for delayed-feedback RC nodes. Redox resistive switches and ferroelectric devices provide options for fast operation,53 while perovskite materials and OECTs are more suitable for operation on relatively long time scales.26,27,82,99 Switching speed also determines the energy efficiency of delayed-feedback RC, as it sets the time needed to process a given sequence. Finally, greater endurance benefits all types of RC applications. Because of their continuously exercised dynamic memory, single-node delayed-feedback reservoirs demand much higher switching endurance than ESNs and LSMs. So far, redox resistive switches and perovskite materials have demonstrated endurance greater than 10⁶ cycles,26,51 owing to their relatively stable host materials.

TABLE III.

Comparison of selected properties for RC systems.

Material | Max. cycle-to-cycle repeatability (%) | Min. device-to-device variance (%) | Min. switching incubation/relaxation time | Max. endurance
Redox | <16 (Ref. 38) | <15 (Ref. 51) | 1 ns (Ref. 53)/⋯ | 3 × 10⁶ cycles (Ref. 51)
Ferroelectric | ⋯ | 19 (Ref. 85) | ⋯ | ⋯
Perovskite | 2.5 (Ref. 81) | 14.4 (Ref. 81) | 2 ms (Ref. 82)/>5 ms (Ref. 26) | 2 × 10⁶ cycles (Ref. 26)
2D material | ⋯ | 12.8 (Ref. 95) | 1 ms (Ref. 99)/⋯ | 2 × 10³ cycles (Ref. 98)
OECT | 0.49 (Ref. 27) | 17 (Ref. 27) | 0.82 ms (Ref. 27)/1.399 ns (Ref. 57) | 8 × 10³ cycles (Ref. 27)
Nanowire | ⋯ | ⋯ | 10 μs (Ref. 54)/⋯ | 3 × 10³ cycles (Ref. 90)

At the system level, efficiency and footprint are closely related to the partition between analog and digital components. There are two popular ways to partition an RC system. The first features a digital readout layer and an analog reservoir consisting of emerging in-memory or in-sensor devices.53 Such hybrid analog–digital systems balance efficiency and programmability, where the efficiency (programmability) comes from the analog in-memory or in-sensor reservoir (digital readout map); however, the overall performance is limited by analog–digital conversion. The second partition is a fully analog RC system that capitalizes on the efficiency advantages of in-memory and in-sensor computing.117 Specifically, current research on reconfigurable volatile and non-volatile devices suggests that a homogeneously integrated physical RC system could deliver significant performance enhancement. However, current analog memory devices are less precise and slower to program when serving as the weights of the readout map, requiring a time-consuming tuning process.

Furthermore, such partitioning can be tailored toward particular applications through hardware–software co-design, a general method for improving overall system performance.51 On the hardware side, more efficient circuit mappings of the algorithm are expected; on the software side, the algorithm's tolerance of hardware nonidealities should be improved. The design goal is to concurrently optimize efficiency, footprint, and dataset performance by adjusting both hardware and software design parameters, including, but not limited to, signal design, the number of hardware neurons, analog-to-digital conversion bitwidth, RC model structure, and readout weight optimization. An example is the co-design of readout-map training for emerging memory. As emerging memory suffers from programming energy overhead and stochasticity, lightweight first-order reduced and controlled error (FORCE) learning could be employed,118 which minimizes emerging memory programming compared to alternative training protocols.
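FORCE learning trains the readout with recursive least squares (RLS), so each sample triggers one small, error-proportional weight update rather than many iterative programming steps, which is why it suits stochastic, energy-hungry memory cells. A self-contained sketch follows; the smooth random-feature trajectory stands in for a real reservoir's states and is purely an illustrative device.

```python
import numpy as np

rng = np.random.default_rng(3)

n, T = 50, 500
t_axis = np.arange(T) * 0.05
target = np.sin(t_axis + 1.0)        # signal the readout should reproduce

# Stand-in reservoir trajectory: nonlinear random features of a slow
# oscillation (a real FORCE loop would use the recurrent network's own
# states; this keeps the sketch self-contained).
drive = np.vstack([np.sin(t_axis), np.cos(t_axis)])
feats = np.tanh(rng.standard_normal((n, 2)) @ drive)

P = np.eye(n)                        # running inverse-correlation estimate
w = np.zeros(n)                      # readout weights (the trained part)
errors = []
for t in range(T):
    r = feats[:, t]
    e = w @ r - target[t]            # a-priori error, before the update
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)          # RLS gain
    P -= np.outer(k, Pr)             # rank-1 update of P
    w -= e * k                       # small, error-proportional weight step
    errors.append(abs(e))
```

Each weight step shrinks as the error shrinks, so the number and magnitude of memory-programming operations fall off quickly as training converges.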

At present, there are three key aspects worth exploring at the architecture level for both in-memory and in-sensor RC.

First, there is a trend to shift from single modality to multiple modalities. Most current work emphasizes individual modalities, such as visual or auditory. However, achieving human-like intelligence requires integrating multiple modalities, including vision, taste, smell, pain, touch, and hearing. This would result in more information and context for a given task or problem, improving RC system performance.

Second, the in-sensor architecture is gaining popularity on the edge. However, current in-sensor RC primarily employs nonlinear nodes with delayed feedback, where nodes are somewhat independent, limiting functionality. Moreover, the depth of in-sensor reservoirs is shallow since they convert signals from one type to another. This limitation can be addressed by cascading in-sensor and in-memory reservoirs to form a deep RC system. Additionally, dynamic reservoir structures, such as rotating RC systems, may benefit complex tasks.111 

Third, the architectural design overlaps with the hardware–software co-design. As such, it can adopt hardware-aware neural architecture search (NAS).119,120 Using a reward function to benchmark both hardware and software performance, a search agent for hardware-aware NAS explores the design space to identify the best design parameters that maximize the reward. This scheme helps to explore novel software/hardware architectural designs, such as feedback-based RC systems.121 

Thanks to their low training complexity and high efficiency, in-memory and in-sensor RC systems using emerging memory have already found wide applications in classification and regression. RC can be used for new applications such as generation (e.g., chatbots), cybersecurity (privacy and model protection), as well as system control (robotics).

Generative models, including the most recent LLMs, drive chatbots capable of simulating human conversations through text or voice.122 These chatbots deliver information and services to users in a variety of fields, such as customer service, entertainment, education, and social interaction. RNNs, including LSTMs, have been widely utilized for machine translation123 and text generation.124 Since reservoirs are themselves recurrent networks, generative models present a promising application opportunity for RC.

From a security perspective, RC may have implications for data privacy and machine learning model protection. Data privacy is garnering increasing attention as cloud computing (e.g., LLMs) exposes users' personal information to risk. Potential solutions include homomorphic encryption125,126 and differential privacy techniques,127 but both are computationally expensive for edge devices. An RC system may serve as an alternative to encrypt or encode edge data, leveraging the same device stochasticity that has been exploited for physical unclonable functions (PUFs).
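
The PUF-like encoding idea can be illustrated with a toy model: each physical device's unreplicable random conductances define a device-unique reservoir, so the same input signal produces a device-specific encoding that another device cannot reproduce. Here manufacturing stochasticity is emulated by a per-device random seed; everything else is an illustrative assumption.

```python
import numpy as np

def device_reservoir(seed, n_nodes=64):
    """Each physical device carries unreplicable random weights; the
    per-device seed stands in for manufacturing stochasticity."""
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-1.0, 1.0, n_nodes)
    w = rng.normal(size=(n_nodes, n_nodes))
    w *= 0.9 / max(abs(np.linalg.eigvals(w)))  # echo-state scaling

    def encode(u):
        h, out = np.zeros(n_nodes), []
        for x in u:
            h = np.tanh(w_in * x + w @ h)
            out.append(h.copy())
        return np.array(out)

    return encode

signal = np.sin(np.linspace(0, 6, 50))
enc_a = device_reservoir(seed=101)(signal)  # legitimate device
enc_b = device_reservoir(seed=202)(signal)  # a would-be clone
# The same challenge yields device-unique responses, a PUF-like property:
divergence = float(((enc_a - enc_b) ** 2).mean())
```

Only a readout trained against the legitimate device's responses would decode its encodings, while a clone built without the same physical stochasticity produces divergent states.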

Another important field of cybersecurity is model protection. Developing machine learning models requires significant investment in data and training, making model protection indispensable.128–133 In-memory and in-sensor RC might offer novel solutions here. For instance, in-memory ESN/LSM exploits the intrinsic stochasticity that emerging memory acquires during manufacturing, which is difficult to replicate and could thus help protect intellectual property.

In recent years, the field of robotics has witnessed a surge in the application and study of RC systems, primarily due to their low training cost and impressive performance on complex, practical tasks. For instance, by leveraging a multimodal-input LSM and digital neuromorphic chips, intelligent robots can excel at place recognition while consuming less power and exhibiting lower latency than traditional mobile-robot processors such as the Jetson Xavier NX.134 Furthermore, LSM has been employed for flight navigation control in drones, showing superior generalization over conventional algorithms and excellent performance in novel environments.135 Despite these advancements, current robotic applications of these algorithms predominantly depend on traditional digital hardware. Investigating the potential of memristive devices to implement compute–storage-integrated RC systems therefore represents a crucial avenue for future research.

This research is supported by the Hong Kong Research Grant Council (Grant Nos. 27206321, 17205922, and 17212923) and is also partially supported by ACCESS – AI Chip Center for Emerging Smart Systems, sponsored by the Innovation and Technology Fund (ITF), Hong Kong SAR.

The authors have no conflicts to disclose.

N.L., J.C., and R.Z. contributed equally to this work.

Ning Lin: Conceptualization (equal); Visualization (equal); Writing – original draft (equal). Jia Chen: Conceptualization (equal); Visualization (equal); Writing – original draft (equal). Ruoyu Zhao: Writing – review & editing (equal). Yangu He: Supervision (equal); Writing – review & editing (equal). Kwunhang Wong: Conceptualization (equal); Supervision (equal); Writing – review & editing (equal). Qinru Qiu: Conceptualization (equal); Supervision (equal); Writing – review & editing (equal). Zhongrui Wang: Conceptualization (equal); Supervision (equal); Writing – review & editing (equal). J. Joshua Yang: Conceptualization (equal); Supervision (equal); Writing – review & editing (equal).

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

1. Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature 521, 436–444 (2015).
2. W. Gerstner, W. M. Kistler, R. Naud, and L. Paninski, Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition (Cambridge University Press, 2014).
3. A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems 25, 2012.
4. J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, MN (Association for Computational Linguistics, 2019), Vol. 1, pp. 4171–4186.
5. J. Kober, J. A. Bagnell, and J. Peters, "Reinforcement learning in robotics: A survey," Int. J. Rob. Res. 32, 1238–1274 (2013).
6. R. Miotto, F. Wang, S. Wang, X. Jiang, and J. T. Dudley, "Deep learning for healthcare: Review, opportunities and challenges," Briefings Bioinf. 19, 1236–1246 (2018).
7. N. Fei, Z. Lu, Y. Gao, G. Yang, Y. Huo, J. Wen, H. Lu, R. Song, X. Gao, T. Xiang et al., "Towards artificial general intelligence via a multimodal foundation model," Nat. Commun. 13, 3094 (2022).
8. T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., "Language models are few-shot learners," Adv. Neural Inf. Process. Syst. 33, 1877–1901 (2020).
9. R. Ananthanarayanan, S. K. Esser, H. D. Simon, and D. S. Modha, "The cat is out of the bag: Cortical simulations with 10^9 neurons, 10^13 synapses," in Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis (IEEE, 2009), pp. 1–12.
10. P. Lennie, "The cost of cortical computation," Curr. Biol. 13, 493–497 (2003).
11. D. Attwell and S. B. Laughlin, "An energy budget for signaling in the grey matter of the brain," J. Cereb. Blood Flow Metab. 21, 1133–1145 (2001).
12. C. Howarth, P. Gleeson, and D. Attwell, "Updated energy budgets for neural computation in the neocortex and cerebellum," J. Cereb. Blood Flow Metab. 32, 1222–1232 (2012).
13. F. Alibart, E. Zamanidoost, and D. B. Strukov, "Pattern classification by memristive crossbar circuits using ex situ and in situ training," Nat. Commun. 4, 2072 (2013).
14. M. Prezioso, F. Merrikh-Bayat, B. D. Hoskins, G. C. Adam, K. K. Likharev, and D. B. Strukov, "Training and operation of an integrated neuromorphic network based on metal-oxide memristors," Nature 521, 61–64 (2015).
15. S. Yu, Z. Li, P.-Y. Chen, H. Wu, B. Gao, D. Wang, W. Wu, and H. Qian, "Binary neural network with 16 Mb RRAM macro chip for classification and online training," in 2016 IEEE International Electron Devices Meeting (IEDM) (IEEE, 2016), pp. 16–22.
16. P. Yao, H. Wu, B. Gao, S. B. Eryilmaz, X. Huang, W. Zhang, Q. Zhang, N. Deng, L. Shi, H.-S. P. Wong, and H. Qian, "Face classification using electronic synapses," Nat. Commun. 8, 15199 (2017).
17. F. M. Bayat, M. Prezioso, B. Chakrabarti, H. Nili, I. Kataeva, and D. Strukov, "Implementation of multilayer perceptron network with highly uniform passive memristive crossbar circuits," Nat. Commun. 9, 2331 (2018).
18. F. Cai, J. M. Correll, S. H. Lee, Y. Lim, V. Bothra, Z. Zhang, M. P. Flynn, and W. D. Lu, "A fully integrated reprogrammable memristor–CMOS system for efficient multiply–accumulate operations," Nat. Electron. 2, 290–299 (2019).
19. X. Chen, Y. Han, and Y. Wang, "Communication lower bound in convolution accelerators," in 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA) (IEEE, 2020), pp. 529–541.
20. A. Sebastian, M. Le Gallo, R. Khaddam-Aljameh, and E. Eleftheriou, "Memory devices and applications for in-memory computing," Nat. Nanotechnol. 15, 529–544 (2020).
21. H. Robbins and S. Monro, "A stochastic approximation method," Ann. Math. Stat. 22, 400–407 (1951).
22. F. Zhou and Y. Chai, "Near-sensor and in-sensor computing," Nat. Electron. 3, 664–671 (2020).
23. M. Payvand, F. Moro, K. Nomura, T. Dalgaty, E. Vianello, Y. Nishi, and G. Indiveri, "Self-organization of an inhomogeneous memristive hardware for sequence learning," Nat. Commun. 13, 5793 (2022).
24. R. Midya, Z. Wang, S. Asapu, X. Zhang, M. Rao, W. Song, Y. Zhuo, N. Upadhyay, Q. Xia, and J. J. Yang, "Reservoir computing using diffusive memristors," Adv. Intell. Syst. 1, 1900084 (2019).
25. W. Sun, W. Zhang, J. Yu, Y. Li, Z. Guo, J. Lai, D. Dong, X. Zheng, F. Wang, S. Fan et al., "3D reservoir computing with high area efficiency (5.12 TOPS/mm2) implemented by 3D dynamic memristor array for temporal signal processing," in 2022 IEEE Symposium on VLSI Technology and Circuits (VLSI Technology and Circuits) (IEEE, 2022), pp. 222–223.
26. R. A. John, Y. Demirağ, Y. Shynkarenko, Y. Berezovska, N. Ohannessian, M. Payvand, P. Zeng, M. I. Bodnarchuk, F. Krumeich, G. Kara et al., "Reconfigurable halide perovskite nanocrystal memristors for neuromorphic computing," Nat. Commun. 13, 2074 (2022).
27. S. Wang, X. Chen, C. Zhao, Y. Kong, B. Lin, Y. Wu, Z. Bi, Z. Xuan, T. Li, Y. Li et al., "An organic electrochemical transistor for multi-modal sensing, memory and processing," Nat. Electron. 6, 281–291 (2023).
28. R. Fang, W. Zhang, K. Ren, P. Zhang, X. Xu, Z. Wang, and D. Shang, "In-materio reservoir computing based on nanowire networks: Fundamental, progress, and perspective," Mater. Futures 2, 022701 (2023).
29. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature 323, 533–536 (1986).
30. H. Jaeger, "The 'echo state' approach to analysing and training recurrent neural networks-with an erratum note," Technical Report No. 148 (German National Research Center for Information Technology GMD, Bonn, Germany, 2001), p. 13.
31. W. Maass, T. Natschläger, and H. Markram, "Real-time computing without stable states: A new framework for neural computation based on perturbations," Neural Comput. 14, 2531–2560 (2002).
32. L. Appeltant, M. C. Soriano, G. Van der Sande, J. Danckaert, S. Massar, J. Dambre, B. Schrauwen, C. R. Mirasso, and I. Fischer, "Information processing using a single dynamical node as complex system," Nat. Commun. 2, 468 (2011).
33. J. J. Yang, D. B. Strukov, and D. R. Stewart, "Memristive devices for computing," Nat. Nanotechnol. 8, 13–24 (2013).
34. Z. Wang, S. Joshi, S. E. Savel'ev, H. Jiang, R. Midya, P. Lin, M. Hu, N. Ge, J. P. Strachan, Z. Li et al., "Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing," Nat. Mater. 16, 101–108 (2017).
35. X. Chen, X. Yin, M. Niemier, and X. S. Hu, "Design and optimization of FeFET-based crossbars for binary convolution neural networks," in 2018 Design, Automation and Test in Europe Conference and Exhibition (DATE) (IEEE, 2018), pp. 1205–1210.
36. F. Triefenbach, A. Jalalvand, K. Demuynck, and J.-P. Martens, "Acoustic modeling with hierarchical reservoirs," IEEE/ACM Trans. Audio, Speech, Language Process. 21, 2439–2450 (2013).
37. E. A. Antonelo, B. Schrauwen, and D. Stroobandt, "Event detection and localization for small mobile robots using reservoir computing," Neural Networks 21, 862–871 (2008).
38. N. Lin, S. Wang, Y. Li, B. Wang, S. Shi, Y. He, W. Zhang, Y. Yu, Y. Zhang, X. Qi et al., "Resistive memory-based zero-shot liquid state machine for multimodal event data learning," arXiv:2307.00771 (2023).
39. A. Patiño-Saucedo, H. Rostro-González, T. Serrano-Gotarredona, and B. Linares-Barranco, "Liquid state machine on SpiNNaker for spatio-temporal classification tasks," Front. Neurosci. 16, 819063 (2022).
40. L. Deckers, I. J. Tsang, W. Van Leekwijck, and S. Latré, "Extended liquid state machines for speech recognition," Front. Neurosci. 16, 1023470 (2022).
41. J. Zhu, L. Wang, X. Xiao, Z. Yang, Z. Kang, S. Li, and L. Peng, "An event based gesture recognition system using a liquid state machine accelerator," in Proceedings of the Great Lakes Symposium on VLSI 2022 (Association for Computing Machinery, 2022), pp. 361–365.
42. D. A. Patterson and J. L. Hennessy, Computer Organization and Design ARM Edition: The Hardware Software Interface (Morgan Kaufmann, 2016).
43. N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers et al., "In-datacenter performance analysis of a tensor processing unit," in Proceedings of the 44th Annual International Symposium on Computer Architecture (IEEE, 2017), pp. 1–12.
44. A. Shafiee, A. Nag, N. Muralimanohar, R. Balasubramonian, J. P. Strachan, M. Hu, R. S. Williams, and V. Srikumar, "ISAAC: A convolutional neural network accelerator with in-situ analog arithmetic in crossbars," ACM SIGARCH Comput. Archit. News 44, 14–26 (2016).
45. P. Chi, S. Li, C. Xu, T. Zhang, J. Zhao, Y. Liu, Y. Wang, and Y. Xie, "PRIME: A novel processing-in-memory architecture for neural network computation in ReRAM-based main memory," ACM SIGARCH Comput. Archit. News 44, 27–39 (2016).
46. L. Song, X. Qian, H. Li, and Y. Chen, "PipeLayer: A pipelined ReRAM-based accelerator for deep learning," in 2017 IEEE International Symposium on High Performance Computer Architecture (HPCA) (IEEE, 2017), pp. 541–552.
47. Y. Zhong, J. Tang, X. Li, B. Gao, H. Qian, and H. Wu, "Dynamic memristor-based reservoir computing for high-efficiency temporal signal processing," Nat. Commun. 12, 408 (2021).
48. J. Moon, W. Ma, J. H. Shin, F. Cai, C. Du, S. H. Lee, and W. D. Lu, "Temporal data classification and forecasting using a memristor-based reservoir computing system," Nat. Electron. 2, 480–487 (2019).
49. C. Du, F. Cai, M. A. Zidan, W. Ma, S. H. Lee, and W. D. Lu, "Reservoir computing using dynamic memristors for temporal information processing," Nat. Commun. 8, 2204 (2017).
50. J. Yu, Y. Li, W. Sun, W. Zhang, Z. Gao, D. Dong, Z. Yu, Y. Zhao, J. Lai, Q. Ding et al., "Energy efficient and robust reservoir computing system using ultrathin (3.5 nm) ferroelectric tunneling junctions for temporal data learning," in 2021 Symposium on VLSI Technology (IEEE, 2021), pp. 1–2.
51. S. Wang, Y. Li, D. Wang, W. Zhang, X. Chen, D. Dong, S. Wang, X. Zhang, P. Lin, C. Gallicchio et al., "Echo state graph neural networks with analogue random resistive memory arrays," Nat. Mach. Intell. 5, 104–113 (2023).
52. S. Wang, H. Chen, W. Zhang, Y. Li, D. Wang, S. Shi, Y. Zhao, K. C. Loong, X. Chen, Y. Dong et al., "Convolutional echo-state network with random memristors for spatiotemporal signal classification," Adv. Intell. Syst. 4, 2200027 (2022).
53. Y. Zhong, J. Tang, X. Li, X. Liang, Z. Liu, Y. Li, Y. Xi, P. Yao, Z. Hao, B. Gao et al., "A memristor-based analogue reservoir computing system for real-time and power-efficient signal processing," Nat. Electron. 5, 672–681 (2022).
54. G. Milano, G. Pedretti, K. Montano, S. Ricci, S. Hashemkhani, L. Boarino, D. Ielmini, and C. Ricciardi, "In materia reservoir computing with a fully memristive architecture based on self-organizing nanowire networks," Nat. Mater. 21, 195–202 (2022).
55. Z. Zhang, X. Zhao, X. Zhang, X. Hou, X. Ma, S. Tang, Y. Zhang, G. Xu, Q. Liu, and S. Long, "In-sensor reservoir computing system for latent fingerprint recognition with deep ultraviolet photo-synapses and memristor array," Nat. Commun. 13, 6590 (2022).
56. L. Sun, Z. Wang, J. Jiang, Y. Kim, B. Joo, S. Zheng, S. Lee, W. J. Yu, B.-S. Kong, and H. Yang, "In-sensor reservoir computing for language learning via two-dimensional memristors," Sci. Adv. 7, eabg1455 (2021).
57. X. Wu, S. Wang, W. Huang, Y. Dong, Z. Wang, and W. Huang, "Wearable in-sensor reservoir computing using optoelectronic polymers with through-space charge-transport characteristics for multi-task learning," Nat. Commun. 14, 468 (2023).
58. D. Kudithipudi, Q. Saleh, C. Merkel, J. Thesing, and B. Wysocki, "Design and analysis of a neuromemristive reservoir computing architecture for biosignal processing," Front. Neurosci. 9, 502 (2016).
59. C. Merkel, Q. Saleh, C. Donahue, and D. Kudithipudi, "Memristive reservoir computing architecture for epileptic seizure detection," Procedia Comput. Sci. 41, 249–254 (2014).
60. K. Liu, T. Zhang, B. Dang, L. Bao, L. Xu, C. Cheng, Z. Yang, R. Huang, and Y. Yang, "An optoelectronic synapse based on α-In2Se3 with controllable temporal dynamics for multimode and multiscale reservoir computing," Nat. Electron. 5, 761–773 (2022).
61. T. Wang, H.-M. Huang, X.-X. Wang, and X. Guo, "An artificial olfactory inference system based on memristive devices," InfoMat 3, 804–813 (2021).
62. R. Wang, J.-Q. Yang, J.-Y. Mao, Z.-P. Wang, S. Wu, M. Zhou, T. Chen, Y. Zhou, and S.-T. Han, "Recent advances of volatile memristors: Devices, mechanisms, and applications," Adv. Intell. Syst. 2, 2000055 (2020).
63. D. Kim, B. Jeon, Y. Lee, D. Kim, Y. Cho, and S. Kim, "Prospects and applications of volatile memristors," Appl. Phys. Lett. 121, 010501 (2022).
64. W. Zuo, Q. Zhu, Y. Fu, Y. Zhang, T. Wan, Y. Li, M. Xu, and X. Miao, "Volatile threshold switching memristor: An emerging enabler in the AIoT era," J. Semicond. 44, 053102 (2023).
65. K. Sun, J. Chen, and X. Yan, "The future of memristors: Materials engineering and neural networks," Adv. Funct. Mater. 31, 2006773 (2021).
66. J. Cao, X. Zhang, H. Cheng, J. Qiu, X. Liu, M. Wang, and Q. Liu, "Emerging dynamic memristors for neuromorphic reservoir computing," Nanoscale 14, 289–298 (2022).
67. Z. Qi, L. Mi, H. Qian, W. Zheng, Y. Guo, and Y. Chai, "Physical reservoir computing based on nanoscale materials and devices," Adv. Funct. Mater. 33, 2306149 (2023).
68. S. Yu, H. Jiang, S. Huang, X. Peng, and A. Lu, "Compute-in-memory chips for deep learning: Recent trends and prospects," IEEE Circuits Syst. Mag. 21, 31–56 (2021).
69. G. W. Burr, R. M. Shelby, A. Sebastian, S. Kim, S. Kim, S. Sidler, K. Virwani, M. Ishii, P. Narayanan, A. Fumarola et al., "Neuromorphic computing using non-volatile memory," Adv. Phys.: X 2, 89–124 (2017).
70. J.-M. Hung, X. Li, J. Wu, and M.-F. Chang, "Challenges and trends in developing nonvolatile memory-enabled computing chips for intelligent edge devices," IEEE Trans. Electron Devices 67, 1444–1453 (2020).
71. D. Niu, Y. Chen, C. Xu, and Y. Xie, "Impact of process variations on emerging memristor," in Proceedings of the 47th Design Automation Conference (IEEE, 2010), pp. 877–882.
72. J. Bürger and C. Teuscher, "Variation-tolerant computing with memristive reservoirs," in 2013 IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH) (IEEE, 2013), pp. 1–6.
73. M. S. Kulkarni and C. Teuscher, "Memristor-based reservoir computing," in Proceedings of the 2012 IEEE/ACM International Symposium on Nanoscale Architectures (IEEE, 2012), pp. 226–232.
74. W. Wang, M. Wang, E. Ambrosi, A. Bricalli, M. Laudato, Z. Sun, X. Chen, and D. Ielmini, "Surface diffusion-limited lifetime of silver and copper nanofilaments in resistive switching devices," Nat. Commun. 10, 81 (2019).
75. X. Zhang, S. Liu, X. Zhao, F. Wu, Q. Wu, W. Wang, R. Cao, Y. Fang, H. Lv, S. Long et al., "Emulating short-term and long-term plasticity of bio-synapse based on Cu/a-Si/Pt memristor," IEEE Electron Device Lett. 38, 1208–1211 (2017).
76. J. Yang, H. Cho, H. Ryu, M. Ismail, C. Mahata, and S. Kim, "Tunable synaptic characteristics of a Ti/TiO2/Si memory device for reservoir computing," ACS Appl. Mater. Interfaces 13, 33244–33252 (2021).
77. X. Li, J. Tang, Q. Zhang, B. Gao, J. J. Yang, S. Song, W. Wu, W. Zhang, P. Yao, N. Deng et al., "Power-efficient neural network with artificial dendrites," Nat. Nanotechnol. 15, 776–782 (2020).
78. X. Zhu, Q. Wang, and W. D. Lu, "Memristor networks for real-time neural activity analysis," Nat. Commun. 11, 2439 (2020).
79. R. A. John, N. Yantara, S. E. Ng, M. I. B. Patdillah, M. R. Kulkarni, N. F. Jamaludin, J. Basu, N. Ankit, S. G. Mhaisalkar, A. Basu, and N. Mathews, "Diffusive and drift halide perovskite memristive barristors as nociceptive and synaptic emulators for neuromorphic computing," Adv. Mater. 33, 2007851 (2021).
80. J.-Y. Mao, Z. Zheng, Z.-Y. Xiong, P. Huang, G.-L. Ding, R. Wang, Z.-P. Wang, J.-Q. Yang, Y. Zhou, T. Zhai, and S. T. Han, "Lead-free monocrystalline perovskite resistive switching device for temporal information processing," Nano Energy 71, 104616 (2020).
81. L.-W. Chen, W.-C. Wang, S.-H. Ko, C.-Y. Chen, C.-T. Hsu, F.-C. Chiao, T.-W. Chen, K.-C. Wu, and H.-W. Lin, "Highly uniform all-vacuum-deposited inorganic perovskite artificial synapses for reservoir computing," Adv. Intell. Syst. 3, 2000196 (2021).
82. J. Yang, F. Zhang, H.-M. Xiao, Z.-P. Wang, P. Xie, Z. Feng, J. Wang, J. Mao, Y. Zhou, and S.-T. Han, "A perovskite memristor with large dynamic space for analog-encoded image recognition," ACS Nano 16, 21324–21333 (2022).
83. Z. Chen, W. Li, Z. Fan, S. Dong, Y. Chen, M. Qin, M. Zeng, X. Lu, G. Zhou, X. Gao, and J. M. Liu, "All-ferroelectric implementation of reservoir computing," Nat. Commun. 14, 3585 (2023).
84. D. Kim, J. Kim, S. Yun, J. Lee, E. Seo, and S. Kim, "Ferroelectric synaptic devices based on CMOS-compatible HfAlOx for neuromorphic and reservoir computing applications," Nanoscale 15, 8366–8376 (2023).
85. K. Ota, M. Yamaguchi, S. Kabuyanagi, S. Fujii, M. Saitoh, and M. Yoshikawa, "Variability-controlled HfZrO2 ferroelectric tunnel junctions for reservoir computing," IEEE Trans. Electron Devices 69, 7089–7095 (2022).
86. T. Ma and J.-P. Han, "Why is nonvolatile ferroelectric memory field-effect transistor still elusive?," IEEE Electron Device Lett. 23, 386–388 (2002).
87. T. Kotooka, S. Lilak, A. Stieg, J. Gimzewski, N. Sugiyama, Y. Tanaka, H. Tamukoh, Y. Usami, and H. Tanaka, "Ag2Se nanowire network as an effective in-materio reservoir computing device," (to be published).
88. K. Fu, R. Zhu, A. Loeffler, J. Hochstetter, A. Diaz-Alvarez, A. Stieg, J. Gimzewski, T. Nakayama, and Z. Kuncic, "Reservoir computing with neuromemristive nanowire networks," in 2020 International Joint Conference on Neural Networks (IJCNN) (IEEE, 2020), pp. 1–8.
89. H. G. Manning, F. Niosi, C. G. da Rocha, A. T. Bellew, C. O'Callaghan, S. Biswas, P. F. Flowers, B. J. Wiley, J. D. Holmes, M. S. Ferreira, and J. J. Boland, "Emergence of winner-takes-all connectivity paths in random nanowire networks," Nat. Commun. 9, 3219 (2018).
90. G. Milano, M. Luebben, Z. Ma, R. Dunin-Borkowski, L. Boarino, C. F. Pirri, R. Waser, C. Ricciardi, and I. Valov, "Self-limited single nanowire systems combining all-in-one memristive and neuromorphic functionalities," Nat. Commun. 9, 5151 (2018).
91. M. Cucchi, C. Gruener, L. Petrauskas, P. Steiner, H. Tseng, A. Fischer, B. Penkovsky, C. Matthus, P. Birkholz, H. Kleemann, and K. Leo, "Reservoir computing with biocompatible organic electrochemical networks for brain-inspired biosignal classification," Sci. Adv. 7, eabh0693 (2021).
92. S. Pecqueur, M. Mastropasqua Talamo, D. Guérin, P. Blanchard, J. Roncali, D. Vuillaume, and F. Alibart, "Neuromorphic time-dependent pattern classification with organic electrochemical transistor arrays," Adv. Electron. Mater. 4, 1800166 (2018).
93. E. R. W. van Doremaele, P. Gkoupidenis, and Y. van de Burgt, "Towards organic neuromorphic devices for adaptive sensing and novel computing paradigms in bioelectronics," J. Mater. Chem. C 7, 12754–12760 (2019).
94. K. Janzakova, M. Ghazal, A. Kumar, Y. Coffinier, S. Pecqueur, and F. Alibart, "Dendritic organic electrochemical transistors grown by electropolymerization for 3D neuromorphic engineering," Adv. Sci. 8, 2102973 (2021).
95. N. Jiang, J. Tang, W. Zhang, Y. Li, N. Li, X. Li, X. Chen, R. Fang, Z. Guo, F. Wang et al., "Bioinspired in-sensor reservoir computing for self-adaptive visual recognition with two-dimensional dual-mode phototransistors," Adv. Opt. Mater. 11, 2300271 (2023).
96. M. Farronato, P. Mannocci, M. Melegari, S. Ricci, C. M. Compagnoni, and D. Ielmini, "Reservoir computing with charge-trap memory based on a MoS2 channel for neuromorphic engineering," Adv. Mater. 35, 2205381 (2022).
97. F. Liao, Z. Zhou, B. J. Kim, J. Chen, J. Wang, T. Wan, Y. Zhou, A. T. Hoang, C. Wang, J. Kang et al., "Bioinspired in-sensor visual adaptation for accurate perception," Nat. Electron. 5, 84–91 (2022).
98. J. H. Nam, S. Oh, H. Y. Jang, O. Kwon, H. Park, W. Park, J.-D. Kwon, Y. Kim, and B. Cho, "Low power MoS2/Nb2O5 memtransistor device with highly reliable heterosynaptic plasticity," Adv. Funct. Mater. 31, 2104174 (2021).
99. J. Zha, S. Shi, A. Chaturvedi, H. Huang, P. Yang, Y. Yao, S. Li, Y. Xia, Z. Zhang, W. Wang et al., "Electronic/optoelectronic memory device enabled by tellurium-based 2D van der Waals heterostructure for in-sensor reservoir computing at the optical communication band," Adv. Mater. 35, 2211598 (2023).
100. K. Liu, B. Dang, T. Zhang, Z. Yang, L. Bao, L. Xu, C. Cheng, R. Huang, and Y. Yang, "Multilayer reservoir computing based on ferroelectric α-In2Se3 for hierarchical information processing," Adv. Mater. 34, 2108826 (2022).
101. G. Ding, B. Yang, R.-S. Chen, W.-A. Mo, K. Zhou, Y. Liu, G. Shang, Y. Zhai, S.-T. Han, and Y. Zhou, "Reconfigurable 2D WSe2-based memtransistor for mimicking homosynaptic and heterosynaptic plasticity," Small 17, 2103175 (2021).
102. H.-K. He, F.-F. Yang, and R. Yang, "Flexible full two-dimensional memristive synapses of graphene/WSe2−xOy/graphene," Phys. Chem. Chem. Phys. 22, 20658–20664 (2020).
103. T. Ahmed, M. Tahir, M. X. Low, Y. Ren, S. A. Tawfik, E. L. Mayes, S. Kuriakose, S. Nawaz, M. J. Spencer, H. Chen et al., "Fully light-controlled memory and neuromorphic computation in layered black phosphorus," Adv. Mater. 33, 2004207 (2021).
104. T. Ahmed, S. Kuriakose, E. L. Mayes, R. Ramanathan, V. Bansal, M. Bhaskaran, S. Sriram, and S. Walia, "Optically stimulated artificial synapse based on layered black phosphorus," Small 15, 1900966 (2019).
105. M.-Z. Li, L.-C. Guo, G.-L. Ding, K. Zhou, Z.-Y. Xiong, S.-T. Han, and Y. Zhou, "Inorganic perovskite quantum dot-based strain sensors for data storage and in-sensor computing," ACS Appl. Mater. Interfaces 13, 30861–30873 (2021).
106. Q. Chen, Y. Zhang, S. Liu, T. Han, X. Chen, Y. Xu, Z. Meng, G. Zhang, X. Zheng, J. Zhao et al., "Switchable perovskite photovoltaic sensors for bioinspired adaptive machine vision," Adv. Intell. Syst. 2, 2000122 (2020).
107. H. Shao, Y. Li, W. Yang, X. He, L. Wang, J. Fu, M. Fu, H. Ling, P. Gkoupidenis, F. Yan et al., "A reconfigurable optoelectronic synaptic transistor with stable Zr-CsPbI3 nanocrystals for visuomorphic computing," Adv. Mater. 35, 2208497 (2023).
108. J. Lao, M. Yan, B. Tian, C. Jiang, C. Luo, Z. Xie, Q. Zhu, Z. Bao, N. Zhong, X. Tang et al., "Ultralow-power machine vision with self-powered sensor reservoir," Adv. Sci. 9, 2106092 (2022).
109. H. Tan, G. Liu, X. Zhu, H. Yang, B. Chen, X. Chen, J. Shang, W. D. Lu, Y. Wu, and R.-W. Li, "An optoelectronic resistive switching memory with integrated demodulating and arithmetic functions," Adv. Mater. 27, 2797–2803 (2015).
110. J. Moon, Y. Wu, and W. D. Lu, "Hierarchical architectures in reservoir computing systems," Neuromorphic Comput. Eng. 1, 014006 (2021).
111. X. Liang, Y. Zhong, J. Tang, Z. Liu, P. Yao, K. Sun, Q. Zhang, B. Gao, H. Heidari, H. Qian, and H. Wu, "Rotating neurons for all-analog implementation of cyclic reservoir computing," Nat. Commun. 13, 1549 (2022).
112. M. Pei, Y. Zhu, S. Liu, H. Cui, Y. Li, Y. Yan, Y. Li, C. Wan, and Q. Wan, "Power-efficient multisensory reservoir computing based on Zr-doped HfO2 memcapacitive synapse arrays," Adv. Mater. 35, 2305609 (2023).
113. C. Frazier and K. M. Kockelman, "Chaos theory and transportation systems: Instructive example," Transp. Res. Rec. 1897, 9–17 (2004).
114. W. Wan, R. Kubendran, C. Schaefer, S. B. Eryilmaz, W. Zhang, D. Wu, S. Deiss, P. Raina, H. Qian, B. Gao et al., "A compute-in-memory chip based on resistive random-access memory," Nature 608, 504–512 (2022).
115. S. Chen, M. R. Mahmoodi, Y. Shi, C. Mahata, B. Yuan, X. Liang, C. Wen, F. Hui, D. Akinwande, D. B. Strukov, and M. Lanza, "Wafer-scale integration of two-dimensional materials in high-density memristive crossbar arrays for artificial neural networks," Nat. Electron. 3, 638–645 (2020).
116. Y. Shen, W. Zheng, K. Zhu, Y. Xiao, C. Wen, Y. Liu, X. Jing, and M. Lanza, "Variability and yield in h-BN-based memristive circuits: The role of each type of defect," Adv. Mater. 33, 2103656 (2021).
117. Z. Wang, S. Joshi, S. Savel'ev, W. Song, R. Midya, Y. Li, M. Rao, P. Yan, S. Asapu, Y. Zhuo et al., "Fully memristive neural networks for pattern classification with unsupervised learning," Nat. Electron. 1, 137–145 (2018).
118. X. Zhang, Z. Wu, R. Wang, J. Lu, J. Wei, Q. Liu, and M. Liu, "Brain-like networks in random memristor array based on FORCE training," in 2021 5th IEEE Electron Devices Technology and Manufacturing Conference (EDTM) (IEEE, 2021), pp. 1–3.
119. B. Zoph and Q. V. Le, "Neural architecture search with reinforcement learning," in 5th International Conference on Learning Representations, Toulon, France, 24–26 April 2017 (ICLR, 2017).
120. H. Liu, K. Simonyan, and Y. Yang, "DARTS: Differentiable architecture search," in 7th International Conference on Learning Representations, New Orleans, LA, 6–9 May 2019 (ICLR, 2019).
121. W. Maass, P. Joshi, and E. D. Sontag, "Computational aspects of feedback in neural circuits," PLoS Comput. Biol. 3, e165 (2007).
122. A. Radford, K. Narasimhan, T. Salimans, I. Sutskever et al., "Improving language understanding by generative pre-training," (2018).
123. I. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to sequence learning with neural networks," in Advances in Neural Information Processing Systems 27, 2014.
124. A. Graves, "Generating sequences with recurrent neural networks," arXiv:1308.0850 (2013).
125. X. Li, B. Gao, B. Lin, R. Yu, H. Zhao, Z. Wang, Q. Qin, J. Tang, Q. Zhang, X. Li et al., "First demonstration of homomorphic encryption using multi-functional RRAM arrays with a novel noise-modulation scheme," in 2022 International Electron Devices Meeting (IEDM) (IEEE, 2022), pp. 33–35.
126. Y. Park, Z. Wang, S. Yoo, and W. D. Lu, "RM-NTT: An RRAM-based compute-in-memory number theoretic transform accelerator," IEEE J. Explor. Solid-State Comput. Devices Circuits 8, 93–101 (2022).
127. B. Park, R. Hwang, D. Yoon, Y. Choi, and M. Rhu, "DiVa: An accelerator for differentially private machine learning," in 2022 55th IEEE/ACM International Symposium on Microarchitecture (MICRO) (IEEE, 2022), pp. 1200–1217.
128. M. Xue, J. Wang, and W. Liu, "DNN intellectual property protection: Taxonomy, attacks and evaluations," in Proceedings of the 2021 Great Lakes Symposium on VLSI (Association for Computing Machinery, 2021), pp. 455–460.
129. S. Huang, H. Jiang, X. Peng, W. Li, and S. Yu, "XOR-CIM: Compute-in-memory SRAM architecture with embedded XOR encryption," in Proceedings of the 39th International Conference on Computer-Aided Design (IEEE, 2020), pp. 1–6.
130. N. Lin, X. Chen, H. Lu, and X. Li, "Chaotic weights: A novel approach to protect intellectual property of deep neural networks," IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 40, 1327–1339 (2021).
131. Y. Cai, X. Chen, L. Tian, Y. Wang, and H. Yang, "Enabling secure in-memory neural network computing by sparse fast gradient encryption," in 2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD) (IEEE, 2019), pp. 1–8.
132. N. Lin, X. Chen, C. Xia, J. Ye, and X. Li, "ChaoPIM: A PIM-based protection framework for DNN accelerators using chaotic encryption," in 2021 IEEE 30th Asian Test Symposium (ATS) (IEEE, 2021), pp. 1–6.
133. J. Wang, Z. Chen, Y. Chen, Y. Xu, T. Wang, Y. Yu, V. Narayanan, S. George, H. Yang, and X. Li, "WeightLock: A mixed-grained weight encryption approach using local decrypting units for ciphertext computing in DNN accelerators," in 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS) (IEEE, 2023), pp. 1–5.
134. F. Yu, Y. Wu, S. Ma, M. Xu, H. Li, H. Qu, C. Song, T. Wang, R. Zhao, and L. Shi, "Brain-inspired multimodal hybrid neural network for robot place recognition," Sci. Robot. 8, eabm6996 (2023).
135. M. Chahine, R. Hasani, P. Kao, A. Ray, R. Shubert, M. Lechner, A. Amini, and D. Rus, "Robust flight navigation out of distribution with liquid neural networks," Sci. Robot. 8, eadc8892 (2023).