Coupled oscillators are highly complex dynamical systems, and it is an intriguing concept to use this oscillator dynamics for computation. The idea is not new, but it is currently the subject of intense research as part of the quest for “beyond Moore” electronic devices. To a large extent, these efforts are motivated by biological observations: neural systems and mammalian brains appear to operate on oscillatory signals. In this paper, we give a survey of oscillator-based computing, with the goal of understanding its promise and limitations for next-generation computing. Our focus is on the physics of (mostly nanoscale) oscillatory systems and on the characteristics that may enable them to compute effectively.

The goal of this paper is to give a survey of oscillator-based computing (OBC), with the emphasis on the underlying physics that enables novel applications in computing and information processing. The complex dynamics of interacting oscillators has long been a topic of study in physics and mathematics and has been widely used as a model system for a number of biological processes. Recently, coupled oscillators have also been investigated as a potentially practical way of performing computation, especially as building blocks of artificial intelligence (AI) hardware. This review is intended to introduce the reader to the physical mechanisms at work for OBC and to demonstrate their value and utility for information-processing applications, especially for beyond-Moore1 computing and signal-processing devices.

Computation (especially symbolic computation) is most often approached from Turing's definition of machine-based computing. For understanding computing as a physical process, it is often useful to resort to a somewhat loose definition2 and look at computing as a simulation procedure. In this simulation, a complex system is modeled in analogy with a more controllable/tunable/accessible physical system.3,4 The term “analog computing,” in fact, derives from this concept, i.e., that electrical analogs can be built to model a harder-to-access physical system.5 For example, one can make a network of circuit integrators/differentiators and mechanical components to simulate an aerodynamic problem,6 as was often done in the early days of computing. The variables of the simulated system (i.e., the information) are most straightforwardly represented by voltage levels at circuit nodes.

An equally valid, but much less commonly used way of representing information in a computing device is to use the phase and/or frequency of oscillatory signals to carry information in addition to, or instead of, the signal levels. One may argue that a purely level-based computing scheme inevitably wastes the information carried by the timing of the signals. Using phase and frequency as a carrier may allow for a richer representation of information. A key characteristic of OBCs is that information is primarily represented by the frequencies and phases of oscillatory signals, while the signal amplitude may or may not play a role.

Another key characteristic of OBC is largely inspired by biological observations. Neuromorphic computing devices are most often imagined as interconnected units of elementary processors, which are loosely referred to as neurons. The interaction of these units drives them into a collective state, and this state carries the results of a computation.

The artificial neurons should obey certain requirements in order to perform computation—typically, they are multi-input devices, which compute a superposition of their inputs and then output a nonlinear function of this sum. While such an operation is conceptually simple, it is not at all easy to find physically realizable low-power, robust, reproducibly behaving elements that could serve as building blocks of the neurons.

Many types of oscillators exist that can straightforwardly realize neuron functions. Most physical oscillators show a suitable nonlinear phase and frequency response if they are perturbed by incoming oscillatory signals. For example, two interacting oscillators will run at exactly the same frequency if the difference in their free-running frequencies is below a certain threshold value. In addition to being good nonlinear units, oscillators are also ubiquitous in the physical world, making them attractive for realizing computing systems.
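
The locking threshold mentioned above can be made concrete with Adler's classic phase equation for a single oscillator perturbed by a weak injected signal (the mutually coupled case behaves analogously). This is a standard textbook result; the notation below is ours and is not taken from the cited references:

\[
  \frac{d\varphi}{dt} = \Delta\omega - \omega_L \sin\varphi ,
  \qquad
  \omega_L \approx \frac{\omega_0}{2Q}\,\frac{I_{\mathrm{inj}}}{I_{\mathrm{osc}}} ,
\]

where Δω is the detuning between the free-running frequencies, φ is the phase difference, and ωL is the locking range set by the injection strength and the oscillator Q. A stable locked solution exists only while |Δω| < ωL, which is precisely the threshold behavior described above, and the locked phase, φ* = arcsin(Δω/ωL), depends nonlinearly on the detuning.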

Our definition of an OBC derives from the above-described attributes of a computing system. The first (and somewhat trivial) requirement is that in an OBC, information is carried by the phase and/or frequency of oscillatory signals. The second (and less straightforward) requirement is that signals must be processed by the (nonlinear) interactions between oscillators.

The above definition narrows down OBC to a fairly specific class of circuit architectures. For example, it excludes spiking analog circuits from our definition of OBC. Spiking neural networks employ oscillatory signal representation, and so they fulfill the first requirement for OBC. But they use different processing techniques: their computing units integrate, count, and multiply spike sequences and do not rely on oscillator interactions.

There is a large number of computing schemes (Boolean or non-Boolean, special or general purpose) that may be implemented using oscillator dynamics. Most of these computing models are not specific to OBC; rather, they are oscillatory versions of some known analog computing model. Figure 1 shows a schematic high-level overview of analog, dynamic computing concepts that are perhaps most relevant for OBCs. The focus of this review will mostly be on collective-state models of computing, which are a large subset of neuromorphic or non-Boolean computing paradigms.

FIG. 1.

Many models of computing can be realized by oscillators, instead of the more commonly used and better-known level-based implementations. The figure gives an overview of a few of those computing models, i.e., computing paradigms that can be implemented (and mostly have been implemented) by oscillators.


To a large extent, the motivation to study OBC comes from biology. The central nervous system is believed to use time-dependent signals (pulse or spike sequences) to communicate and process information—this dynamic nature of information processing in the brain is what probably distinguishes it most from today's digital computers.7 Excellent examples of using biological models to develop computational architectures are given in Refs. 8 and 9. There is a large body of work on large-scale brain simulations,10,11 and the usefulness of oscillatory models in biology is discussed in Ref. 12. In this paper, we focus on engineering approaches to OBC; the interested reader is referred to Ref. 13 for a comprehensive overview of the biological aspects of OBC and to Ref. 14 for an insight into the connections between oscillatory biological and neural systems.

Historically, the idea of OBC dates back to von Neumann's 1954 patent,15 and his concept16 is an early and still very relevant example of OBC. This scheme uses the phases of oscillator signals to realize Boolean, digital computation and also serves as a perfect example of how one can translate a level-based computing scheme into a phase/frequency-based representation. For this reason, we start by reviewing this concept in Sec. II along with more recent proposals that resurrect this idea.

The attractiveness of OBC largely hinges on finding a suitable oscillator as a building block of the computer. One may use electrical oscillators and even standard, fabrication-friendly CMOS circuitry. Transistor action, however, is not at all required for every oscillator type, and so the field is widely open for using emerging devices or possibly nonelectrical variables.17 OBCs realized with nanoscale, highly efficient oscillators have the potential to yield truly revolutionary devices. We argue that the physical realization of the oscillators is pivotal for their success in real-life applications and devote Sec. III to the physics of emerging nano-oscillator devices.

Computing in OBCs occurs by oscillator interactions, more specifically, by oscillator synchronization. We will survey various means of physical oscillator interconnection in Sec. IV, while in Sec. V, we show how these interconnection topologies perform computing. Synchronization phenomena have a large literature in physics and nonlinear science,18,19 and we also review some of the relevant mathematics in Sec. V.

One of the most promising applications for OBCs is as accelerators in artificial intelligence (AI) hardware. In AI algorithms, the vast majority of computing power is spent on performing simple, repetitive calculations, such as calculating dot products, convolutions, applying nonlinearities, and recognizing or matching simple patterns. It is quite possible that Boolean, CMOS-based circuitry is suboptimal for these tasks. For this reason, deep learning algorithms and convolutional neural networks20–22 became major drivers for seeking out new, possibly non-Boolean hardware. Section VI will give a case study of a few OBCs that target efficient execution of very specific, repetitive computing tasks.

OBCs have the potential to attack computationally hard problems23—problem classes that have no efficient solution on a Boolean machine and problems that are usually discussed in the framework of quantum computation. Our review will devote Sec. VII to these new application areas.

The reader will see that OBC has grown into a vast field—the 150+ references we cite in this review represent only a small fraction of the literature, and there are books devoted to this topic.24 Most work focuses on a particular device25 or a particular computing architecture or on the mathematical or biological aspects of OBC. One purpose of this review is to give a broad and comprehensive perspective, while keeping the focus on the physics of oscillators and their interactions.

We intentionally choose not to organize this review around a particular computing model or a particular device, and we choose not to overemphasize perceived benefits of OBCs. There is no consensus on the “best way” to use oscillators in computing, and there are very few attempts to benchmark oscillator-based solutions against digital or level-based analog circuits. So this review presents the field the way it stands now: a somewhat loose collection of ideas and concepts that nevertheless holds the potential for a breakthrough for new-generation computing hardware.

von Neumann's groundbreaking idea was to use oscillators as logic gates, where information was represented by the phase.15,16 It is worthwhile to note that digital computers in the early fifties reached then-breathtaking speeds of several megahertz. So it seemed a natural idea to look at a logic circuit not as a switch between zeros and ones, but rather as an electrical oscillator switching between different phases of oscillation. von Neumann's device became a success story in the 1950s: Goto26 and others further developed his concept, and fully functional Boolean computers were realized using oscillators.27 In these works, the basic oscillatory element is usually referred to as a parametron,28 and the logic scheme as phase logic.

The operation of phase logic is based on subharmonic injection-locked oscillators (SHILOs). Unlike auto-oscillators, SHILOs do not require active circuit elements for operation, only nonlinear resonant elements. SHILOs also cannot generate oscillatory signals from a DC input, but they respond to an incoming AC excitation. More specifically, a SHILO with a resonant frequency of f0 may be driven by a signal at frequency 2f0—this pumping signal feeds energy into the oscillator, which will resonate at f0. The resulting oscillation (voltage or current) is synchronized to the pump, i.e., the two have a fixed phase relation.

Figure 2(a) shows a circuit schematic from the 1950s, showing a logic gate from a parametron-based computer. This is one possible implementation of a SHILO, and in this particular case, the building blocks are inductively coupled nonlinear LC parametric oscillators. The oscillator nonlinearity derives from the nonlinear hysteresis of the ferrite cores. A 2f0 excitation applied to the winding periodically modulates the inductance L and serves as the energy source, compensating for the resistive losses in the circuit. Energy transfer between different oscillation modes is most often understood in terms of the Manley-Rowe relations.29 

FIG. 2.

(a) Circuit schematics of a magnetic (LC oscillator-based) parametron; the drawing is based on Ref. 26. (b) The sketch illustrates how an oscillator with a fundamental frequency of f0 can synchronize to a pumping signal having 2×f0 frequency. The synchronized signal can have only two stable phases, representing a bit. (c) An integration-friendly variant of SHILOs using ring oscillators, reproduced with permission from Csaba et al., “Neural network based on parametrically-pumped oscillators,” in IEEE International Conference on Electronics, Circuits and Systems (ICECS) (IEEE, 2016), pp. 45–48. Copyright 2016 IEEE.


The key concept of using SHILOs for representing digital signals is sketched in Fig. 2(b). There are exactly two distinct phases in which a signal at frequency f0 may be synchronized to a pumping signal at twice that frequency, 2×f0. The zero-crossings of the pumping and pumped signals coincide. The two possible phases with respect to the pumping signal represent the binary “0” and “1” states in phase logic.
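
A short calculation shows why exactly two locked phases exist (a sketch in our own notation, for an idealized sinusoidal SHILO). Write the subharmonic oscillation and the pump as

\[
  x(t) = A\cos\!\big(2\pi f_0 t + \varphi\big), \qquad p(t) = P\cos\!\big(4\pi f_0 t\big).
\]

Shifting time by one pump period, t → t + 1/(2f0), leaves p(t) unchanged but maps φ → φ + π. Hence, if φ is a stable locked phase, so is φ + π: the locked states come in pairs separated by 180°, and these two phases (relative to the pump) encode the binary values.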

A parametric oscillator that is started up from an off state may choose either one of the two phases shown in Fig. 2(b). Once the oscillations reach a sufficient amplitude, their phase is fairly stable, and a strong external signal (at f0) would be required to flip it. Upon startup, however, even a small perturbing signal can pull the oscillator toward the perturbation's phase. The oscillator can thus function as a latch, i.e., a phase memory.

Oscillators can accept multiple inputs, i.e., multiple f0 frequency signals that can pull their phases. For example, in the case of the LC parametric oscillators, these inputs can be additional windings on the ferrite core. If a particular oscillator receives multiple inputs with different phases, then it will follow the phase of the “majority” of input oscillators. This majority operation is logically universal, and from majority gates (and inverters), one can straightforwardly realize more familiar NAND/NOR gates and any combinatorial circuits.
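
A toy model of this phase-logic majority operation is sketched below (an illustration in ordinary code, not a circuit model; the function names are ours). The two locked phases are encoded as ±1, and the pulled oscillator simply adopts the sign of the summed inputs; tying one input to a constant and inverting the output yields NAND.

```python
# Toy model of parametron-style phase logic (illustration only, not a circuit
# simulation): the two stable locked phases, 0 and 180 degrees, are encoded as
# +1 and -1, and a pulled oscillator settles on the sign of the summed inputs.

def majority(a: int, b: int, c: int) -> int:
    """Three-input majority gate: the oscillator adopts the dominant phase."""
    return 1 if (a + b + c) > 0 else -1

def nand(a: int, b: int) -> int:
    """NAND built from a majority gate with one input tied to logic '0' (-1),
    which realizes AND, followed by a phase inversion (a 180-degree shift)."""
    return -majority(a, b, -1)

if __name__ == "__main__":
    bits = {1: "1", -1: "0"}
    for a in (-1, 1):
        for b in (-1, 1):
            print(f"a={bits[a]} b={bits[b]}  NAND={bits[nand(a, b)]}")
```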

Minuscule input signals can decide the phase state of the parametric oscillator, but once the oscillator reaches steady-state oscillations, it can provide a strong output to logic gates at subsequent logic stages. The logic gates can amplify oscillatory signals, provide fan-out, and can be concatenated to make large networks. They fulfill the five tenets of Boolean computation.31 

von Neumann's oscillatory computer serves as a perfect example of how one can redesign a level-based computing scheme (in this case, standard Boolean logic) to operate in the phase space. It also teaches a lesson on how crucial the physical realization of the oscillator is. Ferrite-core based LC oscillators in the 1950s were competitive with vacuum-tube-based circuits, but they quickly became obsolete when miniaturized and increasingly fast transistors appeared.

The idea of phase-based logic has been resurrected in several recent proposals, using micro- or nanoelectronic building blocks for emerging electronic devices. In the late 1990s, there was great interest in single-electron transistors (SETs), and one of the first modern phase-logic proposals came from Kiehl's group.32,33 Oscillations in a SET circuit are due to the Coulomb blockade. The physics is entirely different from that of LC oscillators; yet, the logic operations can be performed in almost exactly the same way.

In micro and nanomechanical systems (MEMS), interconversion between kinetic and potential energy is the source of oscillations. The oscillator equations are formally similar to the LC oscillator equations, and so not surprisingly, phase logic operations can be demonstrated in this physical system as well.28 

Ring oscillators are perhaps the most microelectronic-friendly implementation for phase-based logic.34,35 A single ring oscillator can be tapped at different circuit nodes, and in this way, phase-shifted copies of the oscillator signal are easily available. Figure 2(c) schematically shows one way of using ring oscillators as SHILOs: parametric pumping is applied as the power supply of one inverter.

Spiking (neural) networks are perhaps the first successful engineering application of neurally inspired circuit architectures, and they share many features with OBCs; the boundary between the two is somewhat diffuse. The promise of spiking neural networks derives from the fact that a single spike can carry extremely little energy (on the order of 10⁻¹⁶ J) and that it is acceptable to lose some fraction of the spikes,36–46 which results in higher error tolerance. Spiking neural networks use oscillators for generating the signals but do not take advantage of the nonlinear interaction between oscillators, and so most of them do not belong to OBCs as we defined them.

Just like a digital computer that is built from billions of transistors, an envisioned OBC will contain millions or billions of interconnected oscillators. The demands on the elementary oscillators are high, and the success of OBC will eventually depend upon whether one can find oscillatory building blocks that are (1) compact, (2) low power, (3) high frequency, (4) low noise, and (5) robust, and that (6) can be efficiently interconnected to each other and (7) can easily interface with electronic circuitry.

Satisfying all the above requirements is a tall order. Depending on the chosen computing architecture, some of the requirements may be more or less crucial and relevant.

“Oscillator” is a fairly broad term: almost any physical system capable of producing periodic changes in a physical variable is commonly referred to as an oscillator. From the point of view of OBC applications, it is useful to distinguish the types below.

Auto-oscillators typically generate AC signals from a DC energy source, and the dynamical properties of the system define a limit cycle with a fixed amplitude and frequency. These are the systems most commonly referred to as oscillators.

Oscillators may also be merely responding to an oscillatory signal imposed on them. Forced oscillators and parametric oscillators (like the SHILOs, described in Sec. II) exemplify this behavior. The response of the oscillators is related to the incoming excitatory signal, but does not necessarily occur at the same frequency.

Energy-recycling oscillators periodically convert energy between two different forms, and the energy dissipation in each oscillation cycle can be significantly less than the total energy involved in the oscillation. For example, an LC oscillator characterized by a quality factor Q loses only a fraction of order 1/Q of its energy in each oscillation cycle. Most solid-state oscillators do not have this property: the energy that is associated with the oscillatory signal is simply thrown (dissipated) away.
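
With the standard definition of the quality factor, this statement can be made quantitative (in our notation):

\[
  Q = 2\pi\,\frac{E_{\mathrm{stored}}}{E_{\mathrm{dissipated\ per\ cycle}}}
  \quad\Longrightarrow\quad
  \frac{\Delta E}{E} = \frac{2\pi}{Q}\ \text{per cycle},
\]

so an oscillator with Q = 100, for instance, loses only about 6% of its circulating energy in each period.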

In electrical oscillators, it is useful to distinguish between switch-based and nonswitch-based oscillators. A switch-based oscillator typically charges and discharges a capacitor by periodically connecting it to the power supply and to ground. These oscillators are fairly simple to construct, but they produce square-like waveforms. Electronic oscillators using analog amplifiers can generate clean sinusoidal waveforms, but these circuits are significantly more complex than the switch-based ones.

There are a large variety of physical (or chemical or biological) processes that may produce oscillations. Perhaps the most common types of oscillations are based on the oscillation of charges.47 Almost any other physical quantity can produce oscillations, with mechanical or magnetic oscillations being the most widely used, besides the electrical ones. One may also mix different physical quantities, for example, using electrical circuitry for oscillator interconnections only, and employ other state variables “inside” the oscillatory device.

Perhaps the most straightforward physical implementation of an OBC would use LC oscillators, which periodically convert energy between magnetic and electric fields. These were indeed the choice for early oscillator-based computers,26 but inductors were not competitive with solid-state devices. Inductances consume very large chip areas, have high series resistance, and are often limited to low-frequency operation.48 

There are many variants of integrable, room-temperature electrical oscillators one may use. Ring oscillators are one of the most compact transistor-based oscillators—their frequency and power consumption can vary over a wide range, and in subthreshold mode, they can compete with low-power nanoscale oscillators.49,50 Interconnections to each other and interfaces to electronic circuitry are straightforward.

Electrical oscillators may be built from emerging devices, such as phase-change materials—these are often referred to as memristive oscillators.51 A sandwich structure made from these materials becomes conductive (highly resistive) above (below) a threshold voltage, and the device behaves as a switch with hysteresis. Embedded in a simple RC circuit, such a device may operate as a relaxation oscillator. The oscillation frequency is set by the RC time constant of the oscillator.52 The switching element itself is a rather simple structure and highly scalable.
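
As a rough illustration of how the RC time constant sets the frequency, the behavioral sketch below models the device as an ideal threshold switch with hysteresis: the capacitor charges through R toward the supply until an upper threshold is reached, then discharges through the switch's on-resistance until a lower threshold. All component values are invented for illustration and do not correspond to any specific reported device.

```python
# Minimal behavioral sketch of a threshold-switch (e.g., VO2-like) relaxation
# oscillator. The oscillation period is the sum of the RC charge and discharge
# times between the two hysteresis thresholds. All numbers are hypothetical.
import math

def relaxation_frequency(R, C, R_on, Vdd, V_lo, V_hi):
    """Oscillation frequency estimated from the RC charge/discharge times."""
    t_charge = R * C * math.log((Vdd - V_lo) / (Vdd - V_hi))   # switch off
    t_discharge = R_on * C * math.log(V_hi / V_lo)             # switch on
    return 1.0 / (t_charge + t_discharge)

if __name__ == "__main__":
    f = relaxation_frequency(R=10e3, C=100e-15, R_on=1e3,
                             Vdd=1.0, V_lo=0.3, V_hi=0.7)
    print(f"estimated oscillation frequency: {f / 1e9:.2f} GHz")
```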

A large class of oscillators relies on the oscillatory motion of magnetic moments (spins) in ferromagnetic materials. Spin-torque oscillators and spin Hall oscillators use a spin-polarized current to excite spin precession in a ferromagnetic layer; the current flowing through the oscillator is modulated by the oscillating magnetization via a magnetoresistance effect [giant magnetoresistance (GMR) or tunneling magnetoresistance in magnetic tunnel junctions (MTJs)] or via spin pumping. The magnetic thin film is capable of high-frequency (gigahertz-regime) and high-Q oscillations, but the interconversion efficiency between the electric and magnetic degrees of freedom is relatively low, and for this reason, magnetic oscillators require relatively high power to run. On the upside, their high oscillation frequency makes them outstanding candidates for high-speed computing applications, and they are one of the most popular physical realizations for neuromorphic circuits.53–55 

Parametric devices can be built from magnetic materials by modulating their magnetic properties. One possibility to achieve such modulation is via voltage-controlled anisotropy.56 These circuits lend themselves naturally to the realization of von Neumann-type oscillatory computers.56,57

In a mechanical [MEMS or NEMS (NanoElectroMechanical Systems)] oscillator, a vibrating body, such as a cantilever,28,58–60 is the source of oscillations. In terms of dynamic behavior, they share many similarities with LC oscillators, but they are much more amenable to large-scale integration.61,62

In terms of figures of merit, superconducting devices may come closest to realizing a perfect oscillator: they operate at ultralow energies, are capable of high-frequency operation, and do not necessarily occupy large chip areas.63–66 Superconducting LC components are lossless, and active circuit elements (Josephson junctions) are available in this technology. Their obvious disadvantage is the required cooling apparatus; in addition, integration with input/output circuitry and memory units, as well as mutual interconnections, remains challenging. While superconducting circuits are a hotly researched area (mostly for applications related to quantum computing), they do not belong to the mainstream of research in analog computing.

Single-electron devices (SEDs) provide another low-temperature technology for parametric oscillatory devices. In SEDs, metal islands are connected via tunneling barriers, and individual electrons can tunnel between these islands. The capacitances between metal islands are so low that a single tunneling event can significantly change the electrostatics of the circuit and block the flow of further electrons—until the electron leaves the island. The time constant of the tunneling events defines the time period of the oscillations. The operation of SET-based parametric devices has been demonstrated for (Boolean) phase logic and also for neuromorphic circuitry.32,33 These devices share many of the benefits and disadvantages of superconducting circuitry, and their performance numbers are similar. Due to the low energy involved in the tunneling event, these devices require cryogenic temperatures to operate.

Chemical oscillators are also investigated for computing applications.67 Electrochemical reactions can produce oscillatory currents, and one may even observe complex pattern formations as a result of many-oscillator interactions. Typically, the oscillation frequency is low, and it is hard to see how one could engineer the interconnection of multiple oscillators or electrical interfaces.

Table I shows an overview of possible physical oscillators, and we also provide estimates of some relevant parameters and figures of merit. The table will be further analyzed in the remainder of this section, but one can immediately notice that there is no “perfect” oscillator, one which would simultaneously excel in all figures of merit. The table also shows that ring oscillators, a very old-fashioned technology, offer fairly good overall performance.

TABLE I.

Possible building blocks of an oscillatory computing architecture along with a few figures of merit. This table does not include any overhead coming from device-device interconnections, and the lowest energy dissipation does not occur at the highest frequency. Ring oscillators can be viewed as a good baseline for benchmarking emerging devices.

Oscillator name | State variable | Frequency | Energy/cycle (J) | Possible coupling mechanism | Sources
Ring oscillator | Electric | Up to 20 GHz | 10⁻¹⁵ | Electrical | 68
Relaxation oscillator based on phase transitions | Electric | Up to 10 GHz | 10⁻¹⁷ | Electrical | 52
LC oscillator | Electric | Up to 100 GHz | No data | Electrical | 48
Superconducting oscillator | Electric and magnetic | Several 10 GHz | 10⁻¹⁷ | Electrical, inductive, and capacitive | 64 and 65
Mechanical (NEMS) oscillator/RBO | Mechanical | Up to 20 GHz | 10⁻¹⁴ | Electrical or mechanical | 58
Spin torque oscillator (STO) | Magnetic | Upwards of 50 GHz | 10⁻¹⁵ | Electric, magnetic, or spin wave | 55 and 69
Chemical | Electrochemistry | ~10² Hz | No data | No data | 67
Magnetic anisotropy controlled parametric | Magnetic | Up to 20 GHz | No data | Electrical | 56
Spin Hall oscillator | Magnetic | Up to 20 GHz | 10⁻¹⁶ | Electric, magnetic, or spin wave | 70
SET device | Electric | 10 GHz | 10⁻¹⁸ | Electrical | 33

In the context of OBCs, perhaps the most important figure of merit for an oscillator is its power consumption, which includes both the power of each oscillator and the power consumed by the interconnections. Below, we focus on the oscillator alone, and interconnections will be discussed in Sec. IV B.

A good baseline for comparing the power dissipation of various oscillators is the power figure of ultralow-power ring oscillators, which are used, for example, in Radio Frequency IDentification (RFID) transponders.68,71 The voltage-controlled oscillator described in Ref. 68 consumes 24 nW at 5.24 MHz, that is, Ediss ≈ 4.6×10⁻¹⁵ J per oscillation cycle. With vanadium oxide relaxation oscillators, one can possibly achieve an order of magnitude better: Ref. 72 projects 0.5 μW at 1.6 GHz, giving Ediss ≈ 10⁻¹⁶ J per cycle. Obviously, in the case of electrical oscillators, there is no overhead in converting between electrical and nonelectrical state variables, albeit long-range interconnections may require power-hungry amplifiers.
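
All of the energy-per-cycle numbers in this section (and in Table I) follow from the same simple relation, Ediss = P/f. The short sketch below applies it to the two operating points quoted above (the values are taken from the text; the helper name is ours).

```python
# Energy per oscillation cycle is power divided by frequency, E = P / f.
def energy_per_cycle(power_w: float, freq_hz: float) -> float:
    """Dissipated energy per oscillation cycle in joules."""
    return power_w / freq_hz

examples = {
    "ring oscillator (Ref. 68)":     (24e-9, 5.24e6),   # 24 nW at 5.24 MHz
    "VO2 relaxation osc. (Ref. 72)": (0.5e-6, 1.6e9),   # 0.5 uW at 1.6 GHz
}
for name, (p, f) in examples.items():
    print(f"{name}: {energy_per_cycle(p, f):.1e} J per cycle")
# -> on the order of a few 10^-15 J and a few 10^-16 J per cycle,
#    consistent with the order-of-magnitude figures quoted above.
```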

An attractive oscillator in terms of energy would be an energy-recycling oscillator, i.e., where during each oscillation cycle, energy is reversibly converted (largely without losses) between two forms, instead of being dissipated. LC and MEMS oscillators exhibit this property.73 

On-chip LC oscillators, however, have low Q factors. Small-sized inductors at room temperature have large series resistances,48 resulting in large resistive losses and small Q. So little can be gained by energy-recycling circuitry. The picture is totally different for superconducting LC elements, which are ideal oscillatory blocks in terms of power consumption. In Table I, superconducting devices stand out with an energy of Ediss ≈ 10⁻¹⁷ J per oscillation/spike. Single-electron devices may perform even better, due to the very low energy associated with a single electron tunneling event.

Mechanical oscillators (MEMS/NEMS) are energy-recycling oscillators and can have very high Q factors. The equations describing the interconversion between kinetic and potential energies in a MEMS oscillator are formally very similar to the LC oscillator equations—so the dynamic behavior of a MEMS oscillator very much resembles that of a high-Q LC circuit. For MEMS, however, low transduction efficiency is the main challenge. As mechanical oscillators must be driven (and possibly interconnected) by electrical signals, their net power efficiency will depend on the efficiency of interconversion between electrical and mechanical signals. This can rarely be done with better than a few-percent efficiency.74 

Spin torque oscillators (STOs) are current-driven devices that typically run at submilliampere current levels and oscillate in the gigahertz frequency range. The source of the oscillation is the precession of magnetic moments in ferromagnetic materials, and the energy required for these oscillations is supplied by spin-polarized currents flowing through the magnetic layers. Damping in the magnetic material is relatively low, and resistive losses in the STO magnetic layer stack account for the vast majority of the dissipated power. The magnetic oscillations modulate the resistance of the STO layer stack, and one can detect an electrical signal as a result of the magnetization oscillations.

As an example, assuming VSTO = 0.1 V, iSTO = 0.1 mA, and f = 10 GHz, the energy consumed in an STO per oscillation cycle is Ediss ≈ 10⁻¹⁵ J, not very far from the ring oscillator figure. Reducing the resistance of the stack is, in principle, possible, but difficult in practice. There are additional design tradeoffs. For example, spin oscillators based on tunneling magnetoresistance provide large electrical outputs, but have a high net resistance. The net power efficiency of STO circuitry can be significantly reduced if amplifier circuitry is needed for coupling or readout of the signals. Emerging physics (such as the spin-Hall effect) may significantly boost the efficiency of magnetoelectric interconnections, and voltage-controlled anisotropy could allow us to drive parametric oscillators with almost no resistive current flow.
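
The estimate above is again the energy-per-cycle arithmetic, Ediss = P/f, applied to the quoted bias point:

\[
  E_{\mathrm{diss}} \approx \frac{V_{\mathrm{STO}}\, i_{\mathrm{STO}}}{f}
  = \frac{0.1\ \mathrm{V}\times 0.1\ \mathrm{mA}}{10\ \mathrm{GHz}}
  = 10^{-15}\ \mathrm{J\ per\ cycle}.
\]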

Finally, a general remark about power consumption: the energy of thermal fluctuations at room temperature is on the order of kT ≈ 26 meV ≈ 4.14×10⁻²¹ J. The energy involved in each oscillation cycle should be at least a few times this value to avoid the oscillator signal getting completely lost in noise. The oscillators presented above are still several orders of magnitude away from this value (with the exception of superconducting oscillators)—so, there is certainly room for much more energy-efficient physical systems.

Low-power analog computing devices are inevitably subject to noise. Nanoscale oscillators, for which very little energy is involved in the oscillation process, will have noisy waveforms, with relatively poor frequency and phase stability.75–77 

In most cases, noise is detrimental to device operation or can render a computing scheme unrealizable. For example, Ref. 78 describes schemes for using the STO phase and frequency for analog computation; it turns out that the phase noise of STOs is far too high for the phase-based scheme to be feasible, and only the STO frequency can be used.

Some nanoscale oscillators can be engineered to show noisy, stochastic behavior. For example, reducing the volume of the magnetic layer in an STO will reduce the potential barrier seen by magnetic moments. If the height of this barrier becomes comparable with kT, thermal fluctuations can stochastically switch the magnetic layer.79 The device may still be used as an oscillator where externally imposed currents can modulate the noise. There are computational schemes that use such stochastic oscillations for computing.

One important argument in favor of OBCs is that frequency- or phase-based representation of information is believed to be inherently more noise tolerant than amplitude-based (level-based) representations. The superiority of phase/frequency based coding is well established in telecommunication theory.80,81 The work of Wang and Roychowdhury35 makes an argument that these principles transfer to oscillatory computing devices.

In OBCs, computing is done by the coupling of oscillators—these (mutual) interactions alter the phases, frequencies, or even the amplitudes of the oscillators. The mechanism of oscillator interaction, often referred to as synchronization, was observed by Huygens back in 1665. He described how two pendulum clocks (mechanical oscillators) start to tick in synchrony (i.e., assume the same phase) when they are hung on the same wall, which provides a weak mechanical coupling between them. Weak coupling suffices for synchronization, and the phenomenon is pervasive.

Synchronization requires only weak coupling between oscillators, and these weak couplings are often present in the system “for free” through some parasitic effect. So one approach to oscillator coupling is to exploit such existing physical interconnections using the physics of the oscillator state variables. MEMS oscillators may be coupled via acoustic vibrations propagating in the semiconductor substrate. STOs couple via magnetic interactions and spin waves. Electrical oscillators couple through capacitive/inductive effects or via shared ground or power supply lines. Of course, this approach implies limitations on the coupling topology, and only a few (usually simple) coupling configurations are possible.

The second, more often followed route is to engineer the couplings, i.e., to design which oscillators are coupled and with what strength. This gives full flexibility in the design and allows a wide array of computing schemes. However, there is a high price to pay: most non-Boolean computing schemes (especially neuromorphic networks) are highly interconnected architectures, and the number of required interconnects greatly exceeds the number of oscillators. In this case, the ease with which the oscillators can be interconnected becomes the most important figure of merit, and hard-to-interconnect oscillators are practically useless in an OBC. So one may argue that the figures of merit for oscillators as given in Table I are not by themselves decisive, and “good” oscillators are the ones that can be coupled by compact, low-power, high-fanout interconnections. Even in standard digital circuits, interconnections often account for most of the circuit complexity, and moving data between far-lying points of the circuit accounts for most of the power consumption. It is not hard to see that in highly interconnected analog (oscillatory) circuitry, interconnections will be the bottleneck.

It is important to point out that a physical connection between two oscillators does not necessarily mean that they will influence each other. For example, two weakly coupled oscillators will influence each other only if their frequencies lie sufficiently close to each other. Oscillators with very different frequencies will, in most cases, simply ignore each other. One could say that the physical and the logical (effective) couplings inside the network can be quite different from each other.

1. Electrical interconnections

Electrical interconnections are a straightforward choice for coupling electrical oscillators.30,77,82 The strength of electrical interconnections may be made fixed or be tunable via simple circuitry. Capacitive or inductive elements may result in positive or negative coupling coefficients (i.e., ones that push or pull the phases against/toward each other). Figure 3(a) shows an example of electrical oscillator interconnection, via an RC element assuming VO2-based relaxation oscillators.

FIG. 3.

Perhaps the most straightforward way to couple electrical oscillators is to use RC elements for interconnection—a circuit diagram of the interconnection of VO2-based relaxation oscillators is shown in (a) as an example. The figure is reproduced with permission from Raychowdhury et al., “Computing with networks of oscillatory dynamical systems,” Proc. IEEE 107, 73–89 (2018). Copyright 2018 IEEE. Oscillators with nonelectric state variables, such as STOs, can be coupled in several different ways. STOs may all be coupled to the magnetic field of a waveguide, as shown in (b)—the figure is reproduced with permission from Csaba et al., “Analog circuits based on the synchronization of field-line coupled spin-torque oscillators,” in IEEE 15th International Conference on Nanotechnology (IEEE-NANO) (IEEE, 2015), pp. 1343–1345. Copyright 2015 IEEE. Direct electrical coupling [as in (c), reproduced with permission from Lebrun et al., “Mutual synchronization of spin torque nano-oscillators through a long-range and tunable electrical coupling scheme,” Nat. Commun. 8, 15825 (2017). Copyright 2017 Springer Nature.] is possible if the STOs provide strong electrical outputs. Coupling is possible in the entirely magnetic domain as well; in panel (d), nine STOs are coupled by direct magnetic (spin-wave) interactions [reproduced with permission from Awad et al., “Long-range mutual synchronization of spin Hall nano-oscillators,” Nat. Phys. 13(3), 292–299 (2016). Copyright 2016 Springer Nature.]—albeit in this geometry, only nearest-neighbor coupling is straightforward.


Electrical connections are often the easiest, most flexible option even for oscillators operating on nonelectrical state variables. There is, however, almost always a high energy penalty for doing so. As we discussed above, in the case of NEMS devices and STOs, the transduction efficiency (i.e., the fraction of energy converted to/from the electrical and nonelectrical degrees of freedom) is not more than a few percent. So oscillator couplings require active interconnections, i.e., amplifier stages between the oscillators.

One example, given in Ref. 83 and sketched in Fig. 3(b), describes how high-frequency STOs can be brought into interaction via a waveguide. The waveguide (referred to as the field line) is a simple electrical wire, providing magnetic fields for interaction with the STOs. The STO outputs are picked up by an amplifier, and this amplifier feeds the field line with current—this scheme brings all the STOs into mutual interaction with each other.

In some cases, STOs may provide a sufficiently strong output, so that they could be coupled directly, by passive interconnections. Such a scheme was demonstrated in Ref. 84, as shown in Fig. 3(c).

2. Nonelectrical interconnections

For nonelectrical oscillators, it is highly desirable to use the same nonelectrical state variable for the interconnections that the oscillator is based on.

In the case of NEMS oscillators, mechanical (acoustic) couplings are the most natural and efficient.60,85 For spin oscillators, there are different possibilities: magnetic moments may be coupled via their dipole (magnetic field) or spin-wave excitations. Spin polarized currents may also directly couple spin oscillators without the need of extra circuitry,86,87 and topological surface states88 may amplify this effect.

Perhaps the most advanced direct oscillator-oscillator coupling has been achieved with spin-wave-coupled STOs.69,89 In order to achieve direct spin-wave coupling, the magnetic oscillators should share the same magnetic film. The oscillatory precession of magnetization in the STO generates propagating spin waves in the film, which can reach and affect neighboring STOs, resulting in STO synchronization. Spin oscillators based on the spin Hall effect have a more favorable geometry (i.e., in experiments, they may be placed closer to each other), and Ref. 70 reports coupling of nine oscillators, the largest number reported to date. This experimental setup is sketched in Fig. 3(d).

Direct physical coupling of emerging oscillators might enable us to fully utilize the potential of emerging state variables, but geometry constraints may strongly limit the structure of realizable couplings. For example, the work of Ref. 70 can only realize nearest-neighbor coupling, while most proposed applications of STO networks78 would require all-to-all couplings. The coupling range is also limited: spin waves in most magnetic materials propagate at most a few hundred nanometers, and of course, strong coupling occurs only between nearby oscillators. There are magnetic materials that allow significantly longer spin wave propagation, such as Yttrium Iron Garnet (YIG), in which propagation lengths on the order of several tens of microns have been measured.90 In principle, thousands of STOs could be coupled to each other in such low-damping magnetic materials, but it is a technological challenge to integrate spin oscillators on such magnetic films.

Having discussed the physics of oscillators and oscillator interactions, we now turn to the question of what sort of computation can be implemented by coupled oscillator dynamics. With the exception of the discussion of von Neumann's Boolean scheme (phase logic) in Sec. II, we have not yet addressed this question, and the present section is primarily devoted to non-Boolean models of computing in oscillatory networks.

One way to compute with oscillators is to let phase signals propagate from input to output, in a well-defined, sequential manner. This is done in phase logic, which is essentially a feedforward neural network implemented by oscillators.

A very different model of computing, which we refer to as collective-state computing, views computation as a result of complex, multidirectional interactions in networks of interconnected primitives (such as neurons in neural networks and oscillators in our case). Such collective-state computing is our main interest in this paper, and an overwhelming majority of current OBC research studies deal with such models.

Synchronization (mutual oscillator interaction) can drive the oscillator network into a collective state, such as an attractor state or a limit cycle. In this collective state, the phases and frequencies of the oscillators are not independent of each other, but form patterns. These patterns represent the result of a computation.

A high-level description of a typical collective-state computing process is outlined in the following:

  • Input is given to the network, which could be a frequency or phase pattern that is forced onto the oscillators and sets the physical initial state of each individual oscillator. Such an input pattern (in phase or frequency) might be an image that needs to be processed.

  • Inputs are removed, and the phases and frequencies of the oscillators evolve due to their mutual interactions. Synchronization drives the network toward a (physical) energy minimum, which also corresponds to a minimum of an energy-like constraint function. The resulting state could be a stationary phase or frequency pattern, which represents the processed input.

  • The result of the computation is read out by extracting phases or frequencies from groups of oscillators.

This is not the only possible mode of operation for an OBC. For example, central pattern generators (briefly described in Sec. VI C) generate a time-dependent output pattern (signal) as a response to an input signal, and this input may also change continuously in time. A meaningful computational process does not necessarily mean that a steady state (steady phase or frequency patterns) is reached.

Most computing models use the phase of the oscillators as the carrier of information. Oscillator frequency is more stable, but it is also harder to influence by coupling.

Mathematically, oscillator interaction is described in terms of synchronization, i.e., the emergence of phase/frequency patterns in the oscillator cluster.18,91,92 Perhaps the simplest model to describe the formation of these states is the celebrated Kuramoto model.19,93 The model provides an analytical solution for sinusoidal oscillators that are weakly coupled through their phase variable, each oscillator pushing or pulling the others' phases with a strength that depends on their phase difference. The model describes the sudden transition of the oscillator network from a desynchronized state (uncorrelated phases) to a coherently oscillating state above a critical interaction strength. A network of Kuramoto oscillators will often display mesmerizing and complex phase patterns,94 hinting that this complexity can be harnessed for computation.
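
A minimal numerical sketch of the Kuramoto model is given below (all parameters are illustrative, chosen only to show the transition; the function name is ours). The order parameter r = |⟨exp(iθ)⟩| stays small for weak coupling and grows toward 1 once the coupling strength K exceeds its critical value.

```python
# Minimal Kuramoto-model sketch: N phase oscillators with random natural
# frequencies, all-to-all coupled with strength K via the sine of pairwise
# phase differences. The order parameter r rises sharply above a critical K.
import numpy as np

def simulate_kuramoto(N=200, K=1.5, dt=0.01, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0, N)            # natural (free-running) frequencies
    theta = rng.uniform(0.0, 2.0 * np.pi, N)   # random initial phases
    for _ in range(steps):
        # each oscillator is pulled by sin(theta_j - theta_i), summed over j
        coupling = (K / N) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
        theta = theta + dt * (omega + coupling)
    return abs(np.mean(np.exp(1j * theta)))    # order parameter r in [0, 1]

if __name__ == "__main__":
    for K in (0.5, 1.0, 2.0, 3.0):
        print(f"K = {K:.1f}  ->  order parameter r = {simulate_kuramoto(K=K):.2f}")
```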

In most works on OBCs, numerical simulations, rather than analytical solutions, are used to study networks. The reason is that the Kuramoto model alone gives very few hints about the computational utility of a given network. The Kuramoto model also assumes that only the oscillator phases are perturbed by oscillator-oscillator interactions, while the frequencies remain intact, which is usually a poor approximation for most physical oscillators and typical coupling strengths. One most often needs time-consuming numerical simulations or numerical approximations to determine the phase dynamics of irregularly connected, highly nonlinear oscillators. It is possible to directly simulate the coupled oscillator dynamics or to use approximate, but much more efficient, phase-domain models.95 

“Synchronization network” is a term often used in the mathematical and nonlinear-science literature to refer to systems of Kuramoto oscillators or OBCs.96 

What computations a network can perform depends on the interconnection weights. One may engineer the weights of the network to perform certain functions. For OBC, one may even use the same interconnection network that a standard neural network uses—see Sec. V C for an example on how to do this. As in the case of level-based neural networks, it is easy to end up with an interconnection-heavy, hard-to-realize design.

Rather than designing-in the desired interconnections, one may follow another route: use the couplings that are inherent in the physical system and then see what functions such a network is capable of performing. This route is followed, for example, in reservoir computing.53,97,98 In reservoir computing, the oscillator network acts as a complex, nonlinear system with memory (for the requirements on a reservoir, see Refs. 99 and 100), and this complex network dynamics is turned into useful computation by an output layer, which maps the network output to the desired computational result.

A unique characteristic of OBCs is that externally injected oscillatory signals can tune oscillator-oscillator interactions or bring noninteracting oscillators into coupling. This idea was originally described in Ref. 101. Consider two oscillators, running at frequencies f0 and f1 and physically coupled (say, via a common circuit node). Such oscillators will not interact if f0 and f1 are too far apart (and are not harmonically related). However, an externally applied oscillatory signal at the frequency f = f0 − f1 will bring these oscillators into interaction. The possibility of using external signals to define connections, instead of physically rewiring the network, may allow us to overcome interconnection bottlenecks.
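
The mechanism can be illustrated with a simple mixing argument (a sketch in our own notation, in the spirit of Ref. 101). Assume the shared node contains a quadratic nonlinearity, so that the output of oscillator 1 is multiplied by the injected tone at f0 − f1. The product contains a component exactly at f0 that carries the phase of oscillator 1:

\[
  \cos\!\big(2\pi f_1 t + \varphi_1\big)\,\cos\!\big(2\pi (f_0 - f_1)\, t\big)
  = \tfrac{1}{2}\cos\!\big(2\pi f_0 t + \varphi_1\big)
  + \tfrac{1}{2}\cos\!\big(2\pi (2 f_1 - f_0)\, t + \varphi_1\big),
\]

so oscillator 0 is injection-pulled by a signal that tracks oscillator 1's phase; switching the injected tone off removes this virtual coupling.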

The Hopfield network is one of the most studied neural network models.102,103 It is also prominent in the OBC literature and has been used as the starting point for the construction of oscillatory associative memory concepts, described, for example in Refs. 30, 101, 104, and 105.

In the traditional Hopfield model, two-state neurons (computing nodes) interact with all other neurons in the network via positive or negative weights. The weights are chosen to define certain attractor states for the network, which are, in turn, minima of an energylike constraint function. For example, if convergence to a black-and-white image is desired, neurons corresponding to like-colored pixels of the image are interconnected by positive weights (i.e., pull each other toward the same state), while neurons with different colors will repel each other (since they are interconnected by negative weights). Weights can be determined by a simple formula (such as the Hebbian rule106) or a learning algorithm.107 Each neuron sums all its inputs, then applies a nonlinear (typically sigmoid) activation function, and finally sends this output to all other neurons. The network may have a unique ground state, where all positively (negatively) connected neurons are in the same (opposite) state, such that all constraints are satisfied. If not all couplings can be satisfied, then the network can have multiple stationary states and will act as an auto-associative memory: if the initial state of the neurons resembles one of the patterns programmed into the weights, the network will converge to this preprogrammed pattern.
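
As an illustration, a minimal level-based Hopfield sketch is shown below (sizes, patterns, and noise levels are arbitrary, and the helper names are ours): Hebbian weights store a few ±1 patterns, and repeated asynchronous sign updates pull a corrupted pattern back toward the nearest stored one.

```python
# Minimal classical Hopfield associative memory: Hebbian weights storing a few
# +/-1 patterns, followed by asynchronous sign updates for a fixed number of
# sweeps. Purely illustrative; all sizes and patterns are arbitrary.
import numpy as np

def hebbian_weights(patterns: np.ndarray) -> np.ndarray:
    """Hebbian rule: W = (1/N) * sum_k outer(p_k, p_k), with zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W: np.ndarray, state: np.ndarray, sweeps: int = 10, seed: int = 0) -> np.ndarray:
    """Asynchronous updates: each neuron takes the sign of its weighted input sum."""
    rng = np.random.default_rng(seed)
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    patterns = rng.choice([-1, 1], size=(3, 64))                   # three stored patterns
    W = hebbian_weights(patterns)
    noisy = patterns[0] * rng.choice([1, -1], 64, p=[0.85, 0.15])  # flip ~15% of bits
    print("overlap with stored pattern:", int(recall(W, noisy) @ patterns[0]))  # close to 64
```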

Such conventional level-based Hopfield networks can be reformulated to use oscillators as their building blocks (neurons). Neurons become oscillators, and their phase state represents the output. Oscillators may be interconnected so that they pull toward the same phase, i.e., synchronize in-phase, or they may be interconnected with a phase delay that causes them to synchronize out-of-phase (antiphase), i.e., with a 180° phase shift. These two types of interconnections correspond to the positive and negative couplings of the level-based Hopfield network, respectively. The nonlinearity of the synchronization process means that the sigmoid activation function is already built into the oscillators. When the network converges, the oscillators form two groups with identical phases inside each group, and the resulting pattern is the output of the associative memory operation.
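
A corresponding phase-domain sketch is given below (again purely illustrative; the dynamics dθi/dt = Σj Wij sin(θj − θi) is a standard weighted Kuramoto form, and the helper names are ours). In-phase and antiphase locking take over the role of the +1/−1 neuron states, and thresholding cos θ at the end recovers the binary pattern.

```python
# Oscillatory version of the Hopfield recall (a hedged sketch): the same Hebbian
# weights now couple phase oscillators via d(theta_i)/dt = sum_j W_ij *
# sin(theta_j - theta_i). In-phase vs antiphase locking plays the role of the
# +1 / -1 neuron states. All sizes and noise levels are illustrative.
import numpy as np

def phase_recall(W, init_bits, dt=0.05, steps=4000, seed=2):
    rng = np.random.default_rng(seed)
    # start near 0 (for +1 bits) or pi (for -1 bits), plus a little phase noise
    theta = np.where(init_bits > 0, 0.0, np.pi) + 0.3 * rng.normal(size=len(init_bits))
    for _ in range(steps):
        theta = theta + dt * np.sum(W * np.sin(theta[None, :] - theta[:, None]), axis=1)
    return np.where(np.cos(theta) >= 0, 1, -1)      # threshold phases back to bits

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    patterns = rng.choice([-1, 1], size=(3, 64))    # stored +/-1 patterns
    W = patterns.T @ patterns / 64.0                # Hebbian rule
    np.fill_diagonal(W, 0.0)
    noisy = patterns[0] * rng.choice([1, -1], 64, p=[0.85, 0.15])
    recovered = phase_recall(W, noisy)
    print("overlap with stored pattern:", int(recovered @ patterns[0]))  # close to 64
```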

Most OBC computing models are based on the same principle as the Hopfield network, that is, to define an “oscillatory ground state” and use the convergence of the network toward a stationary phase or frequency pattern that minimizes an energy function. A comprehensive overview of various Hopfield-like computing models is given in Ref. 108, which also explicitly draws out the relation to biological systems.

One variant of the Hopfield network measures the output phase correlations in the network, rather than looking for perfectly synchronized oscillator states. Oscillator interactions do not necessarily yield perfectly in-phase or antiphase running oscillators—any (phase) correlation between two oscillators could have computational value, and such phase correlations may be accessible via output circuitry. For example, in Refs. 109–112, oscillators are controlled by externally applied AC signals, which bring groups of oscillators into a phase-correlated state. The output is read out by extracting the pairwise phase correlations between oscillators and applying threshold criteria. The coupling weights (which are realized by the externally injected signals) can be trained by computational learning algorithms for specific functions. Several such learning algorithms have been published.53,109,110

Using oscillators alone does not address the main problem of Hopfield-like networks: Hopfield nets require all-to-all interconnections and therefore are not scalable beyond a few tens of neurons. But since oscillators communicate in the frequency domain and one can create “virtual” interconnections by using externally injected signals,101 there are a number of strategies to deal with the interconnection bottleneck. The work of Ref. 113 uses a frequency-domain multiplexing scheme to drastically reduce the number of interconnections—neural network architectures that are unrealizable in a level-based system may be physically realizable with oscillators. One may draw an analogy here with frequency-division multiplexing (FDM) in telecommunications: a small number of high-bandwidth physical links can be used to create a large number of virtual interconnections between processing units (neurons). Such ideas were discussed in the framework of neural networks,101,113–118 and oscillators are possibly the most straightforward hardware for realizing neural networks using FDM.

An overview of the different interconnection schemes is shown in Fig. 4. Depending on the computational model and the problem to be solved, one may use all-to-all interconnections, various dynamic interconnections, or a simpler scheme in which all oscillators are connected via a common node.

FIG. 4.

A few possible schemes for interconnecting oscillators into a useful computing network. (a) The classical scheme for an all-to-all interconnected, Hopfield-like net; (b) oscillators make it possible to realize this network functionality with far fewer interconnections, using frequency-division multiplexing (i.e., dynamic interconnections); both figures are reproduced with permission from F. C. Hoppensteadt and E. M. Izhikevich, “Oscillatory neurocomputers with dynamic connectivity,” Phys. Rev. Lett. 82(14), 2983 (1999). Copyright 1999 American Physical Society. (c) Another dynamic interconnection scheme, where oscillator clusters are forced to synchronize by externally injected signals;111 the externally injected signals are defined by an offline computational learning algorithm. Panel (d) [reproduced with permission from Nikonov et al., “Coupled-oscillator associative memory array operation for pattern recognition,” IEEE J. Exploratory Solid-State Comput. Devices Circuits 1, 85–93 (2015). Copyright 2015 IEEE.] sketches a simpler interconnection scheme, where frequency-controlled oscillators interact via a common node, with the output of the computation being the degree of synchronization between the oscillators.

Harnessing the computational power of the collective state of a many-oscillator system (as is, supposedly, done by the mammalian brain) is the Holy Grail of OBC research, perhaps of all neurally inspired computing models. So far, no large-scale problem of high practical interest has emerged that OBCs solve with much higher efficiency than digital computers. There are different problem classes, though, which may yield successful applications of OBCs in the near term. One possibility is to forget about large-scale problems, solve relatively simple but ubiquitous tasks, and do this with extremely high (energy) efficiency. Such problems will be discussed in Sec. VI. The other possibility is to attack extremely hard problems with absolutely no known effective solutions; attempts to do so will be surveyed in Sec. VII.

Hardware accelerators for image processing, vowel recognition, gait control, and combinatorial optimization are examples where a relatively simple OBC can show impressive performance. A few case studies, some of them similar to the examples of Ref. 82, will be given below. These are representative but somewhat arbitrary examples and are by no means intended to cover all possible application areas for OBCs.

In many computing tasks, the vast majority of resources (energy, time, and hardware) are spent on relatively simple, repetitive jobs. This is especially true in areas such as image processing, where a large number of convolution, filtering, and image processing steps have to be performed on massive amounts of input data (e.g., video streams). An analog device that can calculate, for example, just a simple dot product in a fast and energy-efficient way would significantly boost the overall efficiency of the entire image processing pipeline (IPP).

A recent DARPA project119 targeted exactly such tasks.78 The demonstration of a complete IPP was pursued, with analog oscillatory computing primitives at its heart. The underlying idea was that an efficient Euclidean distance calculation on analog data can be done by exploiting oscillator interaction.120 The effort included circuit design and algorithm design components and also a nanodevice work package, in which nanoscale mechanical and magnetic oscillators were developed as hardware components for the IPP.

The analog distance-calculating unit uses current-controlled oscillators, in this case STOs, which are coupled to each other with equal positive weights. The network topology is therefore very simple; in this case, coupling via a common field line was used, as shown in Fig. 3(c). The analog inputs of the network are the currents (or voltages) driving the STOs, and within a limited range, their frequency is linear in the input. The STOs should be nominally identical [i.e., have identical frequency response f(i) to the input current]. The output of the network is an integrator, which in the simplest case can be an RC lowpass filter that sums up and averages the oscillator outputs. Figure 5(a) shows a circuit schematic of this network. For each cluster, the number of analog inputs is equal to the number of STOs, and there is a single output.

FIG. 5.

Elements of an image-processing pipeline based on field-line-coupled STOs. (a) Circuit schematics showing the STOs, connected via the field line (red) and generating an output via an integrator. (b) The voltage on the integrator is proportional to the Euclidean distance of an input image from a filter template. (c) Using the STO-based distance calculator, a patch-based Gabor filter was applied on the image, filtering out 45° lines in this example.


If the applied input currents are very different from each other, then (for a given coupling strength) the oscillators will not synchronize and will run independently of each other. If the driving currents are nearly equal, the STO network synchronizes. In the latter, synchronized case, the STO outputs sum up coherently (in-phase) on the output integrator, which gives a higher output value than the incoherent (random-phase) superposition of oscillator signals. If the input current vector is the element-wise difference of two analog current vectors, then the output signal (i.e., the degree of coherence in the STO network) is a good measure of the Euclidean distance between the two input vectors.36,41,121
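
The principle can be illustrated with a purely phenomenological sketch (it makes no attempt to model the STO physics or the coupling dynamics; frequencies, gains, and the time window are arbitrary assumptions): each oscillator's frequency is set linearly by one component of the input-minus-template difference, and the RMS of the summed signal, a stand-in for the integrator output, is larger when the inputs are close to the template.

```python
# Phenomenological sketch of the degree-of-match principle.  Closer input
# vectors give a more coherent (larger) summed output.
import numpy as np

def degree_of_match(x, template, f0=1e9, k=1e7, T=1e-6, fs=20e9):
    """Output of an idealized cluster driven by the difference x - template."""
    t = np.arange(0, T, 1 / fs)
    freqs = f0 + k * (np.asarray(x) - np.asarray(template))   # linear f(i)
    summed = np.sum(np.cos(2 * np.pi * freqs[:, None] * t), axis=0)
    return np.sqrt(np.mean(summed ** 2))        # RMS of the summed signal

template = [0.2, 0.5, 0.8, 0.5, 0.2]
print(degree_of_match([0.2, 0.5, 0.8, 0.5, 0.2], template))   # coherent, large
print(degree_of_match([0.9, 0.1, 0.4, 0.7, 0.3], template))   # incoherent, smaller
```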

Calculating the Euclidean distance of image patches from a fixed template and substituting each pixel value with this distance is equivalent to a Gabor filtering operation.78 For example, using a cluster of 25 coupled STOs and a fixed vector corresponding to a 45° line, one can filter for this 45° line in 5 × 5 image patches of a larger (say 256 × 256) image. The result of such a filtering operation, using the full circuit model of the STO cluster, is shown in Fig. 5(c).
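
An idealized software version of this patch-based filtering step is sketched below; a plain Euclidean distance stands in for the comparison that the STO cluster performs in the analog domain, and the image, template, and sizes are illustrative.

```python
# Idealized patch-based filtering: every 5x5 patch is compared against a
# diagonal-line template; in the hardware this comparison would be done by the
# coupled-oscillator cluster, here a plain Euclidean distance stands in for it.
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((64, 64))
image[np.arange(20, 40), np.arange(20, 40)] = 1.0      # embed a diagonal line

template = np.eye(5)                                    # 5x5 diagonal-line template

def patch_filter(img, tmpl):
    h, w = img.shape
    k = tmpl.shape[0]
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + k, j:j + k]
            out[i, j] = np.linalg.norm(patch - tmpl)    # small = good match
    return out

response = patch_filter(image, template)
print("best match at", np.unravel_index(response.argmin(), response.shape))
```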

The implementation of the oscillator network using STOs has been studied extensively via numerical simulations, and the network has been built and experimentally characterized.122–124 The interconnection of the STOs, applying analog inputs and picking up STO signals, required a relatively large amount of conventional electronics. More “implementation-friendly” spin-wave interconnections could not be used in this case, since they cannot provide equal all-to-all coupling beyond nearest-neighbor STOs. While the realized circuit is much more efficient than a comparable digital solution, the expected performance improvement due to the use of nanodevices was significantly lowered by the large amount of required conventional electronics. The D/A and A/D converters that interface the STO cluster to digital circuitry and the amplifiers required to pick up STO signals account for the vast majority of the consumed power.

A very similar Euclidean-distance calculating device has been realized using relaxation oscillators as well.125 The interconnections in this system were done entirely in the electrical domain using passive (resistive or capacitive) couplings.

The Euclidean-distance calculation is a relatively simple operation and may benefit from a special-purpose hardware accelerator if used extensively. Trade-offs need to be considered when using analog hardware for a given purpose in an otherwise fully digital computing environment. Machine intelligence and AI applications20,21,126–129 offer many possibilities for special-purpose hardware accelerators. Analog circuits are being developed for such purposes,22 and OBCs promise to play a role in the future.

As described above, the Euclidean distance is measured by the degree of “mutual” oscillator synchronization. As it turns out, one may use oscillators for such computation without relying on a collective state: in this mode of operation, one would detect the degree of oscillator synchronization to an externally injected signal.124 The Euclidean distance calculation is a relatively simple operation, which may be solved without relying on reaching a computational ground state.

High-speed classification of one-dimensional data (e.g., a time-dependent signal) is also a task of high practical importance. For example, STO-based OBCs could classify radio-frequency signals in the several-ten-gigahertz range in real time. A few case studies are known from the literature for STO-based vowel recognition,53 which is a closely related problem, albeit one that de-emphasizes the importance of high-frequency processing.130,131

The realization of the interconnection network in STOs is a major challenge, as has already been pointed out repeatedly, and even more so if high-speed connections are needed. One way to circumvent this problem is to use only a single oscillator, without interconnections to nearby STOs, and to employ a time-delayed feedback mechanism, as described in Ref. 53. One may view this scheme as coupled-oscillator computing, where instead of multiple oscillators, time-delayed “copies” of a single oscillator are being used. Alternatively, one may look at it as a form of reservoir computing with a single dynamic node. While this scheme has been experimentally demonstrated in Ref. 53, it is not hard to argue that the “heavy lifting” in the computing process is done by the electronics that generates the time delays and pre- and postprocesses the STO signals.
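
The single-node, time-multiplexed reservoir idea can be caricatured in a few lines of code. This is a generic discrete-time emulation of the "virtual node" construction, not the STO experiment of Ref. 53; the node nonlinearity, input mask, and training task are arbitrary choices.

```python
# Toy emulation of a single-node, delay-based reservoir: one nonlinear node
# plus delayed feedback emulates many "virtual" neurons, and a linear readout
# is trained on the collected node states.
import numpy as np

rng = np.random.default_rng(2)
T, n_virtual = 2000, 50
u = rng.uniform(-1, 1, T)                       # random input stream
mask = rng.choice([-0.5, 0.5], n_virtual)       # fixed input mask

states = np.zeros((T, n_virtual))
x = np.zeros(n_virtual)
for k in range(T):
    for i in range(n_virtual):
        prev = x[i - 1] if i > 0 else states[k - 1, -1]   # delayed feedback
        x[i] = np.tanh(0.8 * prev + mask[i] * u[k])       # single nonlinear node
    states[k] = x

# Linear readout trained to reproduce a nonlinear function of past inputs.
target = u * np.roll(u, 1)                      # u(k) * u(k-1): needs memory
W = np.linalg.lstsq(states[10:], target[10:], rcond=None)[0]
pred = states[10:] @ W
print("readout error:", round(float(np.mean((pred - target[10:]) ** 2)), 4))
```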

Another strategy to deal with the interconnection bottleneck in coupled STO systems is to create dynamic interconnections, as per the description in Sec. V B. Four STOs can be coupled by simply interconnecting them in series, but this “wiring” does not allow for much functionality if used statically. However, by injecting external signals, as described in Ref. 112, one can create complex dynamic (virtual) interconnections in a four-STO network. The network weights can be set by an offline training algorithm. One may rightfully argue that there is, again, no free lunch: hardware complexity can be avoided at the cost of complex signal generators that create the dynamic connections. The full potential of this scheme could be realized if these signals can be generated by on-board STOs as well.

It has been shown that OBCs may be used as central pattern generators (CPGs). The study of CPGs provides a strong link to neurobiology and was an early and important motivation for the study of OBCs.132 A detailed description of vanadium-oxide oscillators as CPGs is given in Ref. 133, including their fabrication and circuit models, and recent experimental results can be found in Ref. 134. Positive and negative couplings between the oscillators generate a wide range of different burst sequences, which, for example, may correspond to limb movements at different gaits. The interconnections between the relaxation oscillators are realized by resistances and capacitances, as described above. The circuit design can be challenging as resistive couplings have much narrower locking ranges than the capacitive ones, and they are also much more sensitive to circuit parameters. A wide variety of gaits can be generated in this fashion with a simple, few-oscillator circuit.
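
A phase-model caricature of such a CPG is sketched below: the same four coupled phase oscillators produce different gaits depending on the target phase offsets encoded in the couplings. This is an abstract Kuramoto-type sketch with illustrative parameters; the cited work realizes the couplings with resistive and capacitive elements between relaxation oscillators.

```python
# Caricature of a four-oscillator central pattern generator: changing the
# target phase offsets in the couplings switches the network between gaits.
import numpy as np

def run_cpg(target_offsets, K=4.0, f=1.0, T=20.0, dt=1e-3):
    """target_offsets[i]: desired phase of limb i relative to limb 0 (radians)."""
    n = len(target_offsets)
    phi = 2 * np.pi * np.random.default_rng(3).random(n)
    for _ in range(int(T / dt)):
        coupling = np.zeros(n)
        for i in range(n):
            for j in range(n):
                coupling[i] += np.sin(phi[j] - phi[i]
                                      - (target_offsets[j] - target_offsets[i]))
        phi += dt * (2 * np.pi * f + K * coupling)
    rel = (phi - phi[0]) % (2 * np.pi)
    rel[rel > 2 * np.pi - 1e-3] = 0.0            # fold numerical noise near 2*pi back to 0
    return rel

trot = [0, np.pi, np.pi, 0]                      # diagonal limbs move in phase
walk = [0, np.pi / 2, np.pi, 3 * np.pi / 2]      # each limb lags the previous by a quarter cycle
print("trot offsets:", np.round(run_cpg(trot), 2))
print("walk offsets:", np.round(run_cpg(walk), 2))
```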

Combinatorial problems can often be restated as energy-minimization (optimization) problems. In fact, Hopfield-type networks were successfully used to solve combinatorial problems.135 In general, combinatorial optimization problems can be reformulated for oscillatory devices.

An important example of this class of problems is the well-known graph-coloring problem. To our knowledge, Wu136 and Wu et al.137 were the first to propose oscillators for the graph-coloring problem. The basic idea is to identify oscillator phases with colors and to use the fact that resistive coupling tends to “pull” phases (colors) together, while capacitive coupling tends to “push” phases (colors) away from each other. It is not hard to see, then, that the dynamics of in-phase and out-of-phase synchronizing oscillators maps to the solution of a graph coloring problem, where the oscillator interconnections directly correspond to the graph edges. A more general approach is shown in Ref. 23, together with an implementation using vanadium-oxide-based relaxation oscillators. In the approach of Refs. 23 and 82, the graph coloring problem is first reformulated as finding an ordering of the nodes (on a phase circle) such that same-colored nodes appear close together in the ordering but are not connected by a graph edge. In contrast, unlike-colored nodes may share a graph edge, but they are ordered to have different phases (colors). In this fashion, a combinatorial problem may be reformulated into an energy-minimization (optimization) task that can be implemented by a network of coupled oscillators.
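
The flavor of the phase-based coloring idea can be reproduced with a toy simulation. The sketch below is a simplified relative of the cited approaches, not a reimplementation of them: oscillators sit on the vertices, edges push the connected phases apart, and the settled phases are quantized into colors. The graph, gains, and readout rule are illustrative choices.

```python
# Toy illustration of graph coloring by phase dynamics: "repulsive" coupling on
# the graph edges spreads the connected phases apart, and the settled phases
# are quantized into colors.
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]   # two triangles sharing vertex 2
n, n_colors = 5, 3

A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

rng = np.random.default_rng(4)
phi = 2 * np.pi * rng.random(n)
dt, K = 1e-2, 1.0
for _ in range(5000):
    diff = phi[None, :] - phi[:, None]                        # phi_j - phi_i
    phi += dt * K * np.sum(A * np.sin(diff + np.pi), axis=1)  # pushes neighbors apart

rel = (phi - phi[0]) % (2 * np.pi)                            # phases relative to vertex 0
colors = np.round(rel / (2 * np.pi / n_colors)).astype(int) % n_colors
print("colors   :", colors)
print("conflicts:", sum(int(colors[i] == colors[j]) for i, j in edges))   # 0 = proper coloring
```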

Many combinatorial problems are computationally hard (or even NP-hard or NP-complete). Efficient solutions of such problems (even efficient approximations) are considered by many to be the Holy Grail of nonconventional computing research, and OBC has shown promise as an approach.

Computationally hard problems (such as NP-hard problems)138,139 are usually discussed in the context of quantum information theory (quantum computation and quantum simulators). A quantum system in a fully entangled state can be described by an exponentially growing number of superposition coefficients, i.e., the time evolution of N coupled two-state systems generally requires 2^N internal variables. The promise of a quantum computer or quantum simulator is that it can operate simultaneously on exponentially large (2^N-sized) data, so a relatively small piece of hardware could, in principle, process vast problems. Of course, this is also the challenge of a quantum computer, as the exponentially large number of internal variables needs to be controlled.140 Currently, large-scale industrial and academic efforts are ongoing to experimentally realize a practical quantum computer.141
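
To make the 2^N scaling tangible, the following back-of-the-envelope snippet counts the memory needed to store the full state vector of N coupled two-level systems as complex double-precision amplitudes (16 bytes each); already around N = 50 this reaches the tens-of-petabytes range.

```python
# Memory needed for the full state vector of N coupled two-level systems,
# stored as complex double-precision amplitudes (16 bytes each).
for n in (10, 30, 50):
    amplitudes = 2 ** n
    print(f"N = {n:2d}: 2^N = {amplitudes:>22,d} amplitudes, "
          f"{amplitudes * 16:,} bytes")
```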

It is widely believed that no classical system, only quantum processors, could perform the feat of storing/processing information that grows exponentially with the size of the system. In contrast to this common belief, it is quite possible that there is no fundamental difference between quantum and classical systems in this respect. This can be argued on three grounds: (1) the widely accepted relations between complexity classes (P, NP) are actually unproven, (2) the information content in collective states (excitations) of a classical system may grow exponentially with the system size, and (3) eventually, both classical and quantum systems have to operate in noisy environments and will likely be limited by the same type of physical constraint, i.e., achieving control over an exponentially large number of variables.140 One should appreciate the significance and difficulty of NP-hard problems (for an excellent insight, see Ref. 142), and it is quite possible that NP-hard problems, in their general form, are beyond reach for both classical and quantum computers.

In light of the above, it makes sense to think about oscillator-based accelerators for NP-hard problems, i.e., to design OBCs that compete with “realizable” quantum computers. OBCs may be able to efficiently solve, or approximate, such problems.

OBC for the graph coloring problem, described in Sec. VI D, is perhaps the most mature idea here, but there are many other approaches. Memcomputing is another means of using collective states for exponentially hard problems,143–149 and it can be implemented in various physical systems, among them oscillators.143 The underlying idea is that the number of collective excitation modes in a physical system grows quickly with the system size, possibly enabling the solution of hard problems. The arguments of Ref. 143 are especially important, as they study the feasibility of an exponentially growing state space in the presence of noise.

The Ising problem is another well-known hard benchmark in computational physics that is NP-complete. Sophisticated (and room-sized) optical devices are being developed150 to handle the Ising problem, and it is an intriguing question how far one could go with simple oscillatory circuits, as described in Ref. 151.
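
A minimal sketch in the spirit of the oscillator Ising machine of Ref. 151 is given below: Kuramoto-like coupling through the Ising weights plus a second-harmonic injection term that pins each phase to 0 or π, so that the settled phases can be read out as Ising spins. The instance is deliberately easy (an unfrustrated chain), and the gains and time step are illustrative assumptions.

```python
# Sketch of an oscillator Ising machine: phase coupling through the Ising
# weights J plus second-harmonic injection that binarizes the phases.
# Easy instance: a 6-node antiferromagnetic chain (ground state: alternating
# spins, Ising energy -5).
import numpy as np

n = 6
J = np.zeros((n, n))
for i in range(n - 1):
    J[i, i + 1] = J[i + 1, i] = -1.0            # J_ij < 0 favors s_i != s_j

rng = np.random.default_rng(5)
phi = 2 * np.pi * rng.random(n)
dt, K, Ks = 1e-2, 1.0, 0.3                      # Ks: second-harmonic injection strength
for _ in range(6000):
    diff = phi[None, :] - phi[:, None]          # phi_j - phi_i
    phi += dt * (K * np.sum(J * np.sin(diff), axis=1) - Ks * np.sin(2 * phi))

spins = np.where(np.cos(phi) > 0, 1, -1)        # phase near 0 -> +1, near pi -> -1
print("spins :", spins)
print("energy:", -0.5 * spins @ J @ spins)      # Ising energy; -5 for the ground state
```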

It must be noted that OBCs are not the only systems proposed to handle NP-hard problems: other complex, nonoscillatory analog systems,152–155 memristors,145 and even the dynamic behavior of a digital system156 have been suggested as well. The prospect that complex analog dynamical systems may attack NP-hard problems could become the most important argument for their research, and if this is proven true, the issues with analog/digital interfaces would become secondary.

In this paper, we gave a physics-oriented overview of the flourishing research field of oscillator-based computing (OBC). Much work in this field is motivated by biological analogies, i.e., neuromorphic computing. Moreover, much of the work builds on novel oscillators realized in emerging hardware technologies, such as spin-oscillator-based computing systems. Several case studies demonstrated the utility and promise of OBC for certain types of problems.

Still, it seems that a fundamental “why” question remains unanswered, namely, why would one use oscillatory components instead of other nonlinear circuit building blocks? It is very much possible to realize analog, neuromorphic, non-Boolean computing devices from nonoscillatory nonlinear elements: the much-researched cellular nonlinear networks (CNNs)157 do exactly that, and memristor-based, nonoscillatory neuromorphic architectures are a hot topic nowadays.158 We mentioned above that oscillators are attractive due to the vast number of possible implementations, but oscillators have obvious disadvantages as well. For example, they must run continuously, dissipating power all the time, since it is energetically costly to power them up and down. This means that there must be strong benefits to offset such disadvantages.

We conclude with a few, somewhat hand-waving, arguments to corroborate the usefulness of OBCs. The first of these arguments is that OBCs use narrow-bandwidth device-to-device communication channels (i.e., oscillators running at a given frequency), which may allow efficient intracircuit communication in the presence of noise.80,101,114,117 Noise-tolerant communication allows low voltage levels to be used, and consequently, very low-power operation could ultimately be achieved. Also, oscillators can perform frequency-division multiplexing in hardware and so can possibly realize highly interconnected networks with a limited number of physical interconnections. A second possible argument is that coupled oscillators are ubiquitous in the physical world, so one may find devices that fit a given computational task very well and that can be easily wired together by physical interactions. A third, very encouraging fact is that interacting oscillators now appear in circuits proposed for the solution of NP-hard problems. It is very much possible that they might steal the show from quantum computing and yield hardware that could handle problems that seem intractable with today's resources.

Finding a convincing and somewhat general argument for oscillatory computing systems remains an unsolved challenge. A much older and still unsettled question is whether, in general, digital or analog solutions are superior for neuromorphic (bioinspired) computational tasks,159 but there are many benchmarks and case studies addressing it. On the other hand, oscillator-based analog versus level-based analog benchmarks are almost nonexistent. Perhaps the most important challenge for future research in OBCs is to find the perfect match between the device and the computational problem, i.e., to find the applications and circuit architectures where OBCs significantly outperform level-based computing devices.

Note added in proof. After the acceptance of our paper, the following relevant reference was brought to our attention: D. E. Nikonov, P. Kurahashi, J. S. Ayers, H-J. Lee, Y. Fan, and I. A. Young. “A coupled CMOS oscillator array for 8ns and 55pJ inference in convolutional neural networks,” arXiv preprint arXiv:1910.11803 (2019).

The authors are grateful to George Bourianoff and Dmitri Nikonov for the motivation to join an Intel-led oscillator-based computing project and to Matt Pufall and Trond Ytterdal for excellent technical collaborations. We also acknowledge funding from the DARPA UPSIDE (Unconventional Processing of Signals for Intelligent Data Exploitation) project, from the NSF NEB 2020 (NanoElectronics Beyond 2020) grant, and from the NSF EXCEL (EXtremely Energy Efficient Collective ELectronics) award. G. C. acknowledges the support of the KAP-2018 grant at Pazmany University, supporting his research visit to Notre Dame.

1. See http://spectrum.ieee.org/static/special-report-50-years-of-moores-law for, e.g., “IEEE Spectrum: Special Report: 50 Years of Moore's Law, the Glorious History and Inevitable Decline of One of Technology's Greatest Winning Streaks.”
2. C. Horsman, S. Stepney, R. C. Wagner, and V. Kendon, “When does a physical system compute?,” Proc. R. Soc. A 470(2169), 20140182 (2014).
3. K. Zuse, “The computing universe,” Int. J. Theor. Phys. 21(6–7), 589–600 (1982).
4. R. P. Feynman, “Simulating physics with computers,” Int. J. Theor. Phys. 21(6), 467–488 (1982).
5. O. Bournez and A. Pouly, “A survey on analog models of computation,” preprint arXiv:1805.05729 (2018).
6. F. R. J. Spearman, J. J. Gait, A. V. Hemingway, and R. W. Hynes, “TRIDAC, a large analogue computing machine,” Proc. IEE-Part B 103(9), 375–390 (1956).
7. R. Rojas, Neural Networks: A Systematic Introduction (Springer, 1996).
8. P. Baldi and R. Meir, “Computing with arrays of coupled oscillators: An application to preattentive texture discrimination,” Neural Comput. 2(4), 458–471 (1990).
9. S. Furber and S. Temple, “Neural systems engineering,” J. R. Soc. Interface 4(13), 193–206 (2007).
10. H. de Garis, C. Shuo, B. Goertzel, and L. Ruiting, “A world survey of artificial brain projects, Part I: Large-scale brain simulations,” Neurocomputing 74, 3–29 (2010).
11. E. M. Izhikevich and G. M. Edelman, “Large-scale model of mammalian thalamocortical systems,” Proc. Natl. Acad. Sci. U. S. A. 105(9), 3593–3598 (2008).
12. D. Bhowmik and M. Shanahan, “How well do oscillator models capture the behaviour of biological neurons?,” in International Joint Conference on Neural Networks (IJCNN) (IEEE, 2012), pp. 1–8.
13. C. D. Schuman, T. E. Potok, R. M. Patton, J. Douglas Birdwell, M. E. Dean, G. S. Rose, and J. S. Plank, “A survey of neuromorphic computing and neural networks in hardware,” preprint arXiv:1705.06963 (2017).
14. D. Malagarriga, M. A. García-Vellisca, A. E. P. Villa, J. M. Buldú, J. García-Ojalvo, and A. J. Pons, “Synchronization-based computation through networks of coupled oscillators,” Front. Comput. Neurosci. 9, 97 (2015).
15. J. von Neumann, “Non-linear capacitance or inductance switching, amplifying, and memory organs,” U.S. Patent 2,815,488 (3 December 1957).
16. R. L. Wigington, “A new concept in computing,” Proc. IRE 47(4), 516–523 (1959).
17. K. Bernstein, R. K. Cavin III, W. Porod, A. Seabaugh, and J. Welser, “Device and architecture outlook for beyond-CMOS switches,” Proc. IEEE 98(12), 2169–2184 (2010).
18. Synchronization: A Universal Concept in Nonlinear Sciences, edited by A. Pikovsky, M. Rosenblum, and J. Kurths (Cambridge University Press, 2003), Vol. 12.
19. S. H. Strogatz, “From Kuramoto to Crawford: Exploring the onset of synchronization in populations of coupled oscillators,” Physica D 143(1), 1–20 (2000).
20. L. Deng and D. Yu, “Deep learning: Methods and applications,” Foundations Trends Signal Process. 7(3–4), 197–387 (2014).
21. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015).
22. J. Lu, S. Young, I. Arel, and J. Holleman, “A 1 TOPS/W analog deep machine-learning engine with floating-gate storage in 0.13 μm CMOS,” IEEE J. Solid-State Circuits 50(1), 270–281 (2015).
23. A. Parihar, N. Shukla, M. Jerry, S. Datta, and A. Raychowdhury, “Vertex coloring of graphs via phase dynamics of coupled oscillatory networks,” Sci. Rep. 7(1), 911 (2017).
24. M. G. Kuzmina, E. A. Manykin, and E. S. Grichuk, “Oscillatory neural networks,” in Problems of Parallel Information Processing (Walter de Gruyter, 2013).
25. J. Grollier, S. Guha, H. Ohno, and I. K. Schuller, “Preface to Special Topic: New physics and materials for neuromorphic computation,” J. Appl. Phys. 124, 151801 (2018).
26. E. Goto, “The parametron, a digital computing element which utilizes parametric oscillation,” Proc. IRE 47(8), 1304–1316 (1959).
27. S. Muroga and K. Takashima, “The parametron digital computer Musasino-1,” IRE Trans. Electron. Comput. 3, 308–316 (1959).
28. I. Mahboob and H. Yamaguchi, “Bit storage and bit flip operations in an electromechanical oscillator,” Nat. Nanotechnol. 3(5), 275–279 (2008).
29. J. M. Manley and H. E. Rowe, “Some general properties of nonlinear elements - Part I. General energy relations,” Proc. IRE 44(7), 904–913 (1956).
30. G. Csaba, T. Ytterdal, and W. Porod, “Neural network based on parametrically-pumped oscillators,” in IEEE International Conference on Electronics, Circuits and Systems (ICECS) (IEEE, 2016), pp. 45–48.
31. D. E. Nikonov and I. A. Young, “Overview of beyond-CMOS devices and a uniform methodology for their benchmarking,” Proc. IEEE 101(12), 2498–2533 (2013).
32. H. A. H. Fahmy and R. A. Kiehl, “Complete logic family using tunneling-phase-logic devices,” in The Eleventh International Conference on Microelectronics, ICM'99 (IEEE, 1999), pp. 153–156.
33. T. Ohshima and R. A. Kiehl, “Operation of bistable phase-locked single-electron tunneling logic elements,” J. Appl. Phys. 80(2), 912–923 (1996).
34. J. Roychowdhury, “Boolean computation using self-sustaining nonlinear oscillators,” Proc. IEEE 103(11), 1958–1969 (2015).
35. T. Wang and J. Roychowdhury, “PHLOGON: Phase-based logic using oscillatory nano-systems,” in International Conference on Unconventional Computation and Natural Computation (Springer, Cham, 2014), pp. 353–366.
36. Z. Hull, D. Chiarulli, S. Levitan, G. Csaba, W. Porod, M. Pufall, W. Rippard et al., “Computation with coupled oscillators in an image processing pipeline,” in Proceedings of the 15th International Workshop on Cellular Nanoscale Networks and Their Applications, CNNA 2016 (VDE, 2016), pp. 1–2.
37. H. Paugam-Moisy, “Spiking neuron networks: A survey,” No. EPFL-REPORT-83371 (IDIAP, 2006).
38. W. Maass, “Networks of spiking neurons: The third generation of neural network models,” Neural Networks 10(9), 1659–1671 (1997).
39. H. Paugam-Moisy and S. Bohte, “Computing with spiking neuron networks,” in Handbook of Natural Computing (Springer, Berlin, Heidelberg, 2012), pp. 335–376.
40. A. Grüning and S. M. Bohte, “Spiking neural networks: Principles and challenges,” in ESANN (2014).
41. M. R. Pufall, W. H. Rippard, G. Csaba, D. E. Nikonov, G. I. Bourianoff, and W. Porod, “Physical implementation of coherently coupled oscillator networks,” IEEE J. Exploratory Solid-State Comput. Devices Circuits 1, 76–84 (2015).
42. E. M. Izhikevich, B. Szatmary, and C. Petre, “Invariant pulse latency coding systems and methods,” U.S. Patent 8,467,623 (18 June 2013).
43. E. M. Izhikevich, “Polychronization: Computation with spikes,” Neural Comput. 18(2), 245–282 (2006).
44. D. Querlioz, O. Bichler, and C. Gamrat, “Simulation of a memristor-based spiking neural network immune to device variations,” in The 2011 International Joint Conference on Neural Networks (IJCNN) (IEEE, 2011), pp. 1775–1781.
45. G. Csaba, A. Papp, W. Porod, and R. Yeniceri, “Non-Boolean computing based on linear waves and oscillators,” in 2015 45th European Solid State Device Research Conference (ESSDERC) (IEEE, 2015), pp. 101–104.
46. J. M. Cruz-Albrecht, M. W. Yung, and N. Srinivasa, “Energy-efficient neuron, synapse and STDP integrated circuits,” IEEE Trans. Biomed. Circuits Syst. 6(3), 246–256 (2012).
47. J. R. Westra, C. J. Verhoeven, and A. H. van Roermund, Oscillators and Oscillator Systems (Kluwer, 2000).
48. D. S. Gardner, G. Schrom, F. Paillet, B. Jamieson, T. Karnik, and S. Borkar, “Review of on-chip inductor structures with magnetic films,” IEEE Trans. Magn. 45(10), 4760–4766 (2009).
49. B. D. Sahoo, “Ring oscillator based sub-1V leaky integrate-and-fire neuron circuit,” in IEEE International Symposium on Circuits and Systems (ISCAS) (IEEE, 2017), pp. 1–4.
50. M. D. Feuer, R. H. Hendel, R. A. Kiehl, J. C. M. Hwang, V. G. Keramidas, C. L. Allyn, and R. Dingle, “High-speed low-voltage ring oscillators based on selectively doped heterojunction transistors,” IEEE Electron Device Lett. 4(9), 306–307 (1983).
51. F. Corinto, A. Ascoli, and M. Gilli, “Nonlinear dynamics of memristor oscillators,” IEEE Trans. Circuits Syst. I 58(6), 1323–1336 (2011).
52. N. Shukla, A. Parihar, M. Cotter, M. Barth, X. Li, N. Chandramoorthy, H. Paik et al., “Pairwise coupled hybrid vanadium dioxide-MOSFET (HVFET) oscillators for non-Boolean associative computing,” in IEEE International Electron Devices Meeting (IEDM) (IEEE, 2014), pp. 28–27.
53. J. Torrejon, M. Riou, F. A. Araujo, S. Tsunegi, G. Khalsa, D. Querlioz, P. Bortolotti et al., “Neuromorphic computing with nanoscale spintronic oscillators,” Nature 547(7664), 428 (2017).
54. A. Ruotolo, V. Cros, B. Georges, A. Dussaux, J. Grollier, C. Deranlot, R. Guillemet, K. Bouzehouane, S. Fusil, and A. Fert, “Phase-locking of magnetic vortices mediated by antivortices,” Nat. Nanotechnol. 4(8), 528–532 (2009).
55. N. Locatelli, V. Cros, and J. Grollier, “Spin-torque building blocks,” Nat. Mater. 13(1), 11–20 (2014).
56. Y.-J. Chen, H. K. Lee, R. Verba, J. A. Katine, I. Barsukov, V. Tiberkevich, J. Q. Xiao, A. N. Slavin, and I. N. Krivorotov, “Parametric resonance of magnetization excited by electric field,” Nano Lett. 17(1), 572–577 (2016).
57. S. Urazhdin, V. Tiberkevich, and A. Slavin, “Parametric excitation of a magnetic nanocontact by a microwave field,” Phys. Rev. Lett. 105(23), 237204 (2010).
58. D. N. Guerra, A. R. Bulsara, W. L. Ditto, S. Sinha, K. Murali, and P. Mohanty, “A noise-assisted reprogrammable nanomechanical logic gate,” Nano Lett. 10(4), 1168–1171 (2010).
59. I. Mahboob, E. Flurin, K. Nishiguchi, A. Fujiwara, and H. Yamaguchi, “Interconnect-free parallel logic circuits in a single mechanical resonator,” Nat. Commun. 2, 198 (2011).
60. J. C. Coulombe, M. C. York, and J. Sylvestre, “Computing with networks of nonlinear mechanical oscillators,” PLoS One 12(6), e0178663 (2017).
61. D. Weinstein and S. A. Bhave, “The resonant body transistor,” Nano Lett. 10(4), 1234–1237 (2010).
62. B. Bahr, Y. He, Z. Krivokapic, S. Banna, and D. Weinstein, “32 GHz resonant-fin transistors in 14 nm FinFET technology,” in IEEE International Solid-State Circuits Conference (ISSCC) (IEEE, 2018), pp. 348–350.
63. J.-C. Mage, B. Marcilhac, M. Poulain, Y. Lemaitre, J. Kermorvant, and J.-M. Lesage, “Low noise oscillator based on 2D superconducting resonator,” in Joint Conference of the IEEE International Frequency Control and the European Frequency and Time Forum (FCS) (IEEE, 2011), pp. 1–4.
64. K. Wiesenfeld, P. Colet, and S. H. Strogatz, “Synchronization transitions in a disordered Josephson series array,” Phys. Rev. Lett. 76(3), 404 (1996).
65. K. Segall, M. LeGro, S. Kaplan, O. Svitelskiy, S. Khadka, P. Crotty, and D. Schult, “Synchronization dynamics on the picosecond time scale in coupled Josephson junction neurons,” Phys. Rev. E 95(3), 032220 (2017).
66. J. Zhao, P. Zhao, H. Yu, and Y. Yu, “External driving synchronization in a superconducting quantum interference device based oscillator,” Jpn. J. Appl. Phys., Part 1 55(11), 110301 (2016).
67. N. Mazouz, K. Krischer, G. Flätgen, and G. Ertl, “Synchronization and pattern formation in electrochemical oscillators: Model calculations,” J. Phys. Chem. B 101(14), 2403–2410 (1997).
68. S. Farzeen, G. Ren, and C. Chen, “An ultra-low power ring oscillator for passive UHF RFID transponders,” in 53rd IEEE International Midwest Symposium on Circuits and Systems (MWSCAS) (IEEE, 2010), pp. 558–561.
69. S. Kaka, M. R. Pufall, W. H. Rippard, T. J. Silva, S. E. Russek, and J. A. Katine, “Mutual phase-locking of microwave spin torque nano-oscillators,” Nature 437(7057), 389–392 (2005).
70. A. A. Awad, P. Durrenfeld, A. Houshang, M. Dvornik, E. Iacocca, R. K. Dumas, and J. Akerman, “Long-range mutual synchronization of spin Hall nano-oscillators,” Nat. Phys. 13(3), 292–299 (2017).
71. M. J. Deen, M. H. Kazemeini, and S. Naseh, “Performance characteristics of an ultra-low power VCO,” in Proceedings of the 2003 International Symposium on Circuits and Systems, ISCAS'03 (IEEE, 2003), Vol. 1, pp. I-I.
72. A. Parihar, N. Shukla, S. Datta, and A. Raychowdhury, “Exploiting synchronization properties of correlated electron devices in a non-Boolean computing fabric for template matching,” IEEE J. Emerging Sel. Top. Circuits Syst. 4(4), 450–459 (2014).
73. C.-C. Nguyen and R. T. Howe, “An integrated CMOS micromechanical resonator high-Q oscillator,” IEEE J. Solid-State Circuits 34(4), 440–455 (1999).
74. O. Paul, “Micro transducer operation,” in MEMS: A Practical Guide of Design, Analysis, and Applications, edited by J. Korvink and O. Paul (Springer Science Business Media, Chicago, 2010).
75. A. A. Abidi, “Phase noise and jitter in CMOS ring oscillators,” IEEE J. Solid-State Circuits 41(8), 1803–1816 (2006).
76. P. Maffezzoni, B. Bahr, Z. Zhang, and L. Daniel, “Oscillator array models for associative memory and pattern recognition,” IEEE Trans. Circuits Syst. I 62(6), 1591–1598 (2015).
77. N. Shukla, A. Parihar, E. Freeman, H. Paik, G. Stone, V. Narayanan, H. Wen, Z. Cai, V. Gopalan, R. Engel-Herbert, and D. G. Schlom, “Synchronized charge oscillations in correlated electron systems,” Sci. Rep. 4, 4964 (2014).
78. D. E. Nikonov, G. Csaba, W. Porod, T. Shibata, D. Voils, D. Hammerstrom, I. A. Young, and G. I. Bourianoff, “Coupled-oscillator associative memory array operation for pattern recognition,” IEEE J. Exploratory Solid-State Comput. Devices Circuits 1, 85–93 (2015).
79. A. Mizrahi, N. Locatelli, R. Lebrun, V. Cros, A. Fukushima, H. Kubota, S. Yuasa, D. Querlioz, and J. Grollier, “Controlling the phase locking of stochastic magnetic bits for ultra-low power computation,” Sci. Rep. 6, 30535 (2016).
80. S. Winograd and J. D. Cowan, Reliable Computation in the Presence of Noise, No. 22 (MIT Press, Cambridge, MA, 1963).
81. D. Middleton and Institute of Electrical and Electronics Engineers, An Introduction to Statistical Communication Theory (McGraw-Hill, New York, 1960), Vol. 960.
82. A. Raychowdhury, A. Parihar, G. H. Smith, V. Narayanan, G. Csaba, M. Jerry, W. Porod, and S. Datta, “Computing with networks of oscillatory dynamical systems,” Proc. IEEE 107(1), 73–89 (2018).
83. G. Csaba, W. Porod, M. Pufall, and W. Rippard, “Analog circuits based on the synchronization of field-line coupled spin-torque oscillators,” in IEEE 15th International Conference on Nanotechnology (IEEE-NANO) (IEEE, 2015), pp. 1343–1345.
84. R. Lebrun, S. Tsunegi, P. Bortolotti, H. Kubota, A. S. Jenkins, M. Romera, K. Yakushiji et al., “Mutual synchronization of spin torque nano-oscillators through a long-range and tunable electrical coupling scheme,” Nat. Commun. 8, 15825 (2017).
85. Y. Xu and J. E.-Y. Lee, “Mechanically coupled SOI Lame-mode resonator-arrays: Synchronized oscillations with high quality factors of 1 million,” in Joint European Frequency and Time Forum and International Frequency Control Symposium (EFTF/IFC) (IEEE, 2013), pp. 133–136.
86. M. Elyasi, C. S. Bhatia, and H. Yang, “Synchronization of spin-transfer torque oscillators by spin pumping, inverse spin Hall, and spin Hall effects,” J. Appl. Phys. 117(6), 063907 (2015).
87. T. Taniguchi, “Synchronization of spin torque oscillators through spin Hall magnetoresistance,” IEEE Trans. Magn. 53(11), 1–7 (2017).
88. C.-Z. Wang, H.-Y. Xu, N. D. Rizzo, R. A. Kiehl, and Y.-C. Lai, “Phase locking of a pair of ferromagnetic nano-oscillators on a topological insulator,” Phys. Rev. Appl. 10(6), 064003 (2018).
89. H. Arai and H. Imamura, “Spin-wave coupled spin torque oscillators for artificial neural network,” J. Appl. Phys. 124(15), 152131 (2018).
90. C. Liu, J. Chen, T. Liu, F. Heimbach, H. Yu, Y. Xiao, J. Hu et al., “Long-distance propagation of short-wavelength spin waves,” Nat. Commun. 9(1), 738 (2018).
91. F. C. Hoppensteadt and E. M. Izhikevich, “Synchronization of MEMS resonators and mechanical neurocomputing,” IEEE Trans. Circuits Syst. I 48(2), 133–138 (2001).
92. F. C. Hoppensteadt and E. M. Izhikevich, “Pattern recognition via synchronization in phase-locked loop neural networks,” IEEE Trans. Neural Networks 11(3), 734–738 (2000).
93. J. A. Acebrón, L. L. Bonilla, C. J. Pérez Vicente, F. Ritort, and R. Spigler, “The Kuramoto model: A simple paradigm for synchronization phenomena,” Rev. Mod. Phys. 77(1), 137 (2005).
94. D. M. Abrams and S. H. Strogatz, “Chimera states for coupled oscillators,” Phys. Rev. Lett. 93(17), 174102 (2004).
95. Y. Fang, V. V. Yashin, D. M. Chiarulli, and S. P. Levitan, “A simplified phase model for oscillator based computing,” in IEEE Computer Society Annual Symposium on VLSI (ISVLSI) (IEEE, 2015), pp. 231–236.
96. S. Ling, R. Xu, and A. S. Bandeira, “On the landscape of synchronization networks: A perspective from nonconvex optimization,” SIAM J. Optim. 29(3), 1879–1907 (2019).
97. T. Yamane, Y. Katayama, R. Nakane, G. Tanaka, and D. Nakano, “Wave-based reservoir computing by synchronization of coupled oscillators,” in Neural Information Processing, Lecture Notes in Computer Science Vol. 9491, edited by S. Arik, T. Huang, W. Lai, and Q. Liu (Springer, 2015).
98. G. Dion, S. Mejaouri, and J. Sylvestre, “Reservoir computing with a single delay-coupled non-linear mechanical oscillator,” J. Appl. Phys. 124(15), 152132 (2018).
99. B. Schrauwen, D. Verstraeten, and J. V. Campenhout, “An overview of reservoir computing: Theory, applications and implementations,” in Proceedings of the 15th European Symposium on Artificial Neural Networks (2007), pp. 471–482.
100. G. Tanaka, T. Yamane, J. B. Héroux, R. Nakane, N. Kanazawa, S. Takeda, H. Numata, D. Nakano, and A. Hirose, “Recent advances in physical reservoir computing: A review,” Neural Networks 115, 100–123 (2019).
101. F. C. Hoppensteadt and E. M. Izhikevich, “Oscillatory neurocomputers with dynamic connectivity,” Phys. Rev. Lett. 82(14), 2983 (1999).
102. J. J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities,” Proc. Natl. Acad. Sci. U. S. A. 79(8), 2554–2558 (1982).
103. A. N. Michel, J. A. Farrell, and W. Porod, “Qualitative analysis of neural networks,” IEEE Trans. Circuits Syst. 36(2), 229–243 (1989).
104. B. Popescu, G. Csaba, D. Popescu, A. H. Fallahpour, P. Lugli, W. Porod, and M. Becherer, “Simulation of coupled spin torque oscillators for pattern recognition,” J. Appl. Phys. 124(15), 152128 (2018).
105. R. Follmann, E. E. Macau, E. Rosa, and J. R. Piqueira, “Phase oscillatory network and visual pattern recognition,” IEEE Trans. Neural Networks Learn. Syst. 26(7), 1539–1544 (2015).
106. W. Gerstner, “Chapter 6: Hebbian learning and plasticity,” in From Neuron to Cognition via Computational Neuroscience (2016).
107. W. A. Borders, H. Akima, S. Fukami, S. Moriya, S. Kurihara, Y. Horio, S. Sato, and H. Ohno, “Analogue spin-orbit torque device for artificial-neural-network-based associative memory operation,” Appl. Phys. Express 10(1), 013007 (2016).
108. F. C. Hoppensteadt and E. M. Izhikevich, Weakly Connected Neural Networks (Springer Science and Business Media, 2012), Vol. 126.
109. E. Vassilieva, G. Pinto, J. A. De Barros, and P. Suppes, “Learning pattern recognition through quasi-synchronization of phase oscillators,” IEEE Trans. Neural Networks 22(1), 84–95 (2011).
110. D. Vodenicarevic, N. Locatelli, F. A. Araujo, J. Grollier, and D. Querlioz, “A nanotechnology-ready computing scheme based on a weakly coupled oscillator network,” Sci. Rep. 7, 44772 (2017).
111. D. Vodenicarevic, N. Locatelli, J. Grollier, and D. Querlioz, “Nano-oscillator-based classification with a machine learning-compatible architecture,” J. Appl. Phys. 124(15), 152117 (2018).
112. M. Romera, P. Talatchian, S. Tsunegi, F. A. Araujo, V. Cros, P. Bortolotti, J. Trastoy et al., “Vowel recognition with four coupled spin-torque nano-oscillators,” Nature 563(7730), 230 (2018).
113. D. Heger and K. Krischer, “Robust autoassociative memory with coupled networks of Kuramoto-type oscillators,” Phys. Rev. E 94(2), 022309 (2016).
114. Y. Yuminaka, Y. Sasaki, T. Aoki, and T. Higuchi, “Design of neural networks based on wave-parallel computing technique,” in Cellular Neural Networks and Analog VLSI (Springer U.S., 1998), pp. 91–103.
115. A. Mondragon-Torres, R. Gonzalez-Carvajal, J. Pineda de Gyvez, and E. Sanchez-Sinencio, “Frequency-domain intrachip communication schemes for CNN,” in Fifth IEEE International Workshop on Cellular Neural Networks and Their Applications Proceedings (IEEE, 1998), pp. 398–403.
116. M. P. Craven, K. M. Curtis, and B. R. Hayes-Gill, “Frequency division multiplexing in analogue neural network,” Electron. Lett. 27(11), 918–920 (1991).
117. M. P. Craven, K. M. Curtis, and B. R. Hayes-Gill, “Consideration of multiplexing in neural network hardware,” IEE Proc.-Circuits, Devices Syst. 141(3), 237–240 (1994).
118. A. Horvath, G. Csaba, and W. Porod, “Dynamic coupling of spin torque oscillators for associative memories,” in 14th International Workshop on Cellular Nanoscale Networks and Their Applications (CNNA) (IEEE, 2014), pp. 1–2.
120. T. T. Bui and T. Shibata, “Compact bell-shaped analog matching-cell module for digital-memory-based associative processors,” Jpn. J. Appl. Phys., Part 1 47(4S), 2788 (2008).
121. K. Yogendra, C. Liyanagedera, D. Fan, Y. Shim, and K. Roy, “Coupled spin-torque nano-oscillator-based computation: A simulation study,” ACM J. Emerging Technol. Comput. Syst. 13(4), 56 (2017).
122. G. Csaba and W. Porod, “Computational study of spin-torque oscillator interactions for non-Boolean computing applications,” IEEE Trans. Magn. 49(7), 4447–4451 (2013).
123. M. Pufall, W. H. Rippard, E. Jue, G. Csaba, and K. Roy, “Estimating degree of match with arrays of spin torque oscillators,” in 62nd Annual Conference on Magnetism and Magnetic Materials, November 6–10, 2017 (Pittsburgh, PA, 2017).
124. M. Koo, M. R. Pufall, Y. Shim, A. B. Kos, G. Csaba, W. Porod, W. H. Rippard, and K. Roy, “Coupled spin-torque-oscillator based distance computation: Application to image processing,” Nat. Electron. (unpublished).
125. N. Shukla, S. Datta, A. Parihar, and A. Raychowdhury, “Computing with coupled relaxation oscillators,” in Future Trends in Microelectronics: Journey into the Unknown, edited by S. Luryi, J. Xu, and A. Zaslavsky (Wiley, 2016).
126. D. Graupe, Principles of Artificial Neural Networks (World Scientific, 2013), Vol. 7.
127. D. Anderson and G. McNeill, “Artificial neural networks technology,” Kaman Sci. Corp. 258(6), 1–83 (1992).
128. I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning (MIT Press, 2016).
129. D. Yu and L. Deng, “Deep learning and its applications to signal and information processing [exploratory DSP],” IEEE Signal Process. Mag. 28(1), 145–154 (2011).
130. A. He, K. K. Bae, T. R. Newman, J. Gaeddert, K. Kim, R. Menon, L. Morales-Tirado, Y. Zhao, J. H. Reed, and W. H. Tranter, “A survey of artificial intelligence for cognitive radios,” IEEE Trans. Veh. Technol. 59(4), 1578–1592 (2010).
131. A. Fehske, J. Gaeddert, and J. H. Reed, “A new approach to signal classification using spectral correlation and neural networks,” in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks, DySPAN 2005 (IEEE, 2005), pp. 144–150.
132. R. H. Rand, A. H. Cohen, and P. J. Holmes, “Systems of coupled oscillators as models of central pattern generators,” in Neural Control of Rhythmic Movements in Vertebrates (Wiley, 1988), pp. 333–367.
133. A. Velichko, M. Belyaev, V. Putrolaynen, A. Pergament, and V. Perminov, “Switching dynamics of single and coupled VO2-based oscillators as elements of neural networks,” Int. J. Mod. Phys. B 31(2), 1650261 (2017).
134. S. Dutta, A. Parihar, A. Khanna, J. Gomez, W. Chakraborty, M. Jerry, B. Grisafe, A. Raychowdhury, and S. Datta, “Programmable coupled oscillators for synchronized locomotion,” Nat. Commun. 10(1), 3299 (2019).
135. K. A. Smith, “Neural networks for combinatorial optimization: A review of more than a decade of research,” INFORMS J. Comput. 11(1), 15–34 (1999).
136. C. W. Wu, “Graph coloring via synchronization of coupled oscillators,” IEEE Trans. Circuits Syst. I 45(9), 974–978 (1998).
137. J. Wu, L. Jiao, R. Li, and W. Chen, “Clustering dynamics of nonlinear oscillator network: Application to graph coloring problem,” Physica D 240(24), 1972–1978 (2011).
138. R. M. Karp, “Reducibility among combinatorial problems,” in Complexity of Computer Computations (Springer U.S., 1972), pp. 85–103.
139. H. T. Siegelmann, “Computation beyond the Turing limit,” in Neural Networks and Analog Computation (Birkhäuser, Boston, 1999), pp. 153–164.
140. M. Dyakonov, “When will useful quantum computers be constructed?,” IEEE Spectrum 56(3), 24–29 (2019).
141. M. W. Johnson, M. H. S. Amin, S. Gildert, T. Lanting, F. Hamze, N. Dickson, R. Harris, A. J. Berkley, J. Johansson, P. Bunyk, E. M. Chapple, C. Enderud, J. P. Hilton, K. Karimi, E. Ladizinsky, N. Ladizinsky, T. Oh, I. Perminov, C. Rich, M. C. Thom, E. Tolkacheva, C. J. S. Truncik, S. Uchaikin, J. Wang, B. Wilson et al., “Quantum annealing with manufactured spins,” Nature 473, 194–198 (2011).
142. S. Aaronson, “Guest column: NP-complete problems and physical reality,” ACM SIGACT News 36(1), 30–52 (2005).
143. F. L. Traversa, C. Ramella, F. Bonani, and M. Di Ventra, “Memcomputing NP-complete problems in polynomial time using polynomial resources and collective states,” Sci. Adv. 1(6), e1500031 (2015).
144. Y. V. Pershin and M. Di Ventra, “Memcomputing: A computing paradigm to store and process information on the same physical platform,” in International Workshop on Computational Electronics (IWCE) (IEEE, 2014), pp. 1–2.
145. Y. V. Pershin and M. Di Ventra, “Memcomputing: A computing paradigm to store and process information on the same physical platform,” in 2014 International Workshop on Computational Electronics (IWCE) (IEEE, 2014), pp. 1–2.
146. F. L. Traversa and M. Di Ventra, “Universal memcomputing machines,” IEEE Trans. Neural Networks Learn. Syst. 26(11), 2702–2715 (2015).
147. M. Di Ventra and Y. V. Pershin, “Just add memory,” Sci. Am. 312, 56–61 (2015).
148. Y. V. Pershin and M. Di Ventra, “Solving mazes with memristors: A massively parallel approach,” Phys. Rev. E 84(4), 046703 (2011).
149. F. L. Traversa, F. Bonani, Y. V. Pershin, and M. Di Ventra, “Dynamic computing random access memory,” Nanotechnology 25(28), 285201 (2014).
150. Y. Yamamoto, K. Aihara, T. Leleu, K.-i. Kawarabayashi, S. Kako, M. Fejer, K. Inoue, and H. Takesue, “Coherent Ising machines—Optical neural networks operating at the quantum limit,” npj Quantum Inf. 3(1), 49 (2017).
151. T. Wang and J. Roychowdhury, “Oscillator-based Ising machine,” e-print arXiv:1709.08102.
152. M. Ercsey-Ravasz, T. Roska, and Z. Neda, “Cellular neural networks for NP-hard optimization,” EURASIP J. Adv. Signal Process. 2009, 646975 (2009).
153. M. Ercsey-Ravasz and Z. Toroczkai, “The chaos within Sudoku,” Sci. Rep. 2, 725 (2012).
154. M. Ercsey-Ravasz and Z. Toroczkai, “Optimization hardness as transient chaos in an analog approach to constraint satisfaction,” Nat. Phys. 7(12), 966 (2011).
155. B. Molnár, F. Molnár, M. Varga, Z. Toroczkai, and M. Ercsey-Ravasz, “A continuous-time MaxSAT solver with high analog performance,” Nat. Commun. 9(1), 4864 (2018).
156. M. Di Ventra and F. L. Traversa, “Perspective: Memcomputing: Leveraging memory and physics to compute efficiently,” J. Appl. Phys. 123, 180901 (2018).
157. T. Roska and L. O. Chua, “The CNN universal machine: An analogic array computer,” IEEE Trans. Circuits Syst. II 40(3), 163–173 (1993).
158. Y. V. Pershin and M. Di Ventra, “Experimental demonstration of associative memory with memristive neural networks,” Neural Networks 23(7), 881–886 (2010).
159. R. W. Brockett, “Analog and digital computing,” in Future Tendencies in Computer Science, Control and Applied Mathematics, Lecture Notes in Computer Science Vol. 653, edited by A. Bensoussan and J. P. Verjus (Springer, Berlin, Heidelberg, 1992).