Coupled oscillators are highly complex dynamical systems, and it is an intriguing idea to use their dynamics for computation. The idea is not new, but it is currently the subject of intense research as part of the quest for “beyond Moore” electronic devices. To a large extent, these efforts are motivated by biological observations: neural systems and mammalian brains seem to operate on oscillatory signals. In this paper, we give a survey of oscillator-based computing, with the goal of understanding its promise and limitations for next-generation computing. Our focus will be on the physics of (mostly nanoscale) oscillatory systems and on the characteristics that may enable effective computing.

## I. INTRODUCTION

The goal of this paper is to give a survey of oscillator-based computing (OBC), with the emphasis on the underlying physics that enables novel applications in computing and information processing. The complex dynamics of interacting oscillators has long been a topic of study in physics and mathematics and has been widely used as a model system for a number of biological processes. Recently, coupled oscillators have also been investigated as a potentially practical way of performing computation, especially as building blocks of artificial intelligence (AI) hardware. This review is intended to introduce the reader to the physical mechanisms at work for OBC and to demonstrate their value and utility for information-processing applications, especially for beyond-Moore^{1} computing and signal-processing devices.

Computation (especially symbolic computation) is most often approached from Turing's definition of machine-based computing. For understanding computing as a physical process, it is often useful to resort to a somewhat loose definition^{2} and look at computing as a simulation procedure. In this simulation, a complex system is modeled in analogy with a more controllable/tunable/accessible physical system.^{3,4} The term “analog computing,” in fact, derives from this concept, i.e., that electrical analogs can be built to model a harder-to-access physical system.^{5} For example, one can make a network of circuit integrators/differentiators and mechanical components to simulate an aerodynamic problem,^{6} as was often done in the early days of computing. The variables of the simulated system (i.e., the information) are most straightforwardly represented by voltage levels at circuit nodes.

An equally valid, but much less commonly used, way of representing information in a computing device is to use the phase and/or frequency of oscillatory signals to carry information in addition to, or instead of, the signal levels. One may argue that a purely level-based computing scheme inevitably wastes the information carried by the timing of the signals. Using phase and frequency as carriers may allow for a richer representation of information. A key characteristic of OBCs is that information is primarily represented by the frequencies and phases of oscillatory signals, while the signal amplitude may or may not play a role.

Another key characteristic of OBC is largely inspired by biological observations. Neuromorphic computing devices are most often imagined as interconnected units of elementary processors, which are loosely referred to as neurons. The interaction of these units drives them into a collective state, and this state carries the results of a computation.

The artificial neurons should obey certain requirements in order to perform computation—typically, they are multi-input devices, which compute a superposition of their inputs and then output a nonlinear function of this sum. While such an operation is conceptually simple, it is not at all easy to find physically realizable low-power, robust, reproducibly behaving elements that could serve as building blocks of the neurons.

Many types of oscillators exist that can straightforwardly realize neuron functions. Most physical oscillators show a suitable nonlinear phase and frequency response if they are perturbed by incoming oscillatory signals. For example, two interacting oscillators will run at exactly the same frequency if the difference in their free-running frequencies is below a certain threshold value. In addition to being good nonlinear units, oscillators are also ubiquitous in the physical world, making them attractive for realizing computing systems.
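The locking threshold mentioned above can be illustrated with Adler's phase model, the standard reduced description of two weakly coupled oscillators. In the sketch below, the coupling strength `K` and the detuning values are arbitrary illustrative choices: the phase difference settles to a constant when the free-running frequency difference is below `K` and drifts indefinitely when it is above.

```python
import numpy as np

def phase_drift(delta_omega, K=1.0, dt=1e-3, t_max=200.0):
    """Integrate Adler's equation d(phi)/dt = delta_omega - K*sin(phi)
    for the phase difference phi between two weakly coupled oscillators.
    Returns the average drift rate of phi at late times:
    ~0 if the oscillators lock, nonzero if they slip past each other."""
    phi = 0.0
    trace = []
    for _ in range(int(t_max / dt)):
        phi += (delta_omega - K * np.sin(phi)) * dt  # forward Euler step
        trace.append(phi)
    # average drift rate over the second half of the run
    half = len(trace) // 2
    return (trace[-1] - trace[half]) / (t_max / 2)

locked_rate = phase_drift(delta_omega=0.5)    # |delta_omega| < K: locks
unlocked_rate = phase_drift(delta_omega=2.0)  # |delta_omega| > K: drifts
```

For detunings above the threshold, the average drift rate approaches the classic beat-frequency result $\sqrt{\Delta\omega^2 - K^2}$.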

Our definition of an OBC derives from the above-described attributes of a computing system. The first (and somewhat trivial) requirement is that in an OBC the signals are carried by the phase and frequency of oscillatory signals. The second (and less straightforward) requirement is that signals must be processed by the (nonlinear) interactions between oscillators.

The above definition narrows down OBC to a fairly specific class of circuit architectures. For example, it excludes spiking analog circuits from our definition of OBC. Spiking neural networks employ oscillatory signal representation, and so they fulfill the first requirement for OBC. But they use different processing techniques: their computing units integrate, count, and multiply spike sequences and do *not* rely on oscillator interactions.

There are a large number of computing schemes (Boolean or non-Boolean, special or general purpose) that may be implemented using oscillator dynamics. Most of these computing models are not specific to OBC; rather, they are oscillatory versions of some known analog computing model. Figure 1 shows a schematic high-level overview of the analog, dynamic computing concepts that are perhaps most relevant for OBCs. The focus of this review will mostly be on collective-state models of computing, which are a large subset of neuromorphic or non-Boolean computing paradigms.

To a large extent, the motivation to study OBC comes from biology. The central nervous system is believed to use time-dependent signals (pulse or spike sequences) to communicate and process information—this dynamic nature of information processing in the brain is what probably distinguishes it most from today's digital computers.^{7} Excellent examples of using biological models to develop computational architectures are given in Refs. 8 and 9. There is a large body of work on large-scale brain simulations,^{10,11} and the usefulness of oscillatory models in biology is discussed in Ref. 12. In this paper, we focus on engineering approaches to OBC, and the interested reader is referred to Ref. 13 for a comprehensive overview of biological aspects of OBC and to Ref. 14 for an insight into connections between oscillatory biological and neural systems.

Historically, the idea of OBC dates back to von Neumann's 1954 patent,^{15} and his concept^{16} is an early and still very relevant example of OBC. This scheme uses the phases of oscillator signals to realize Boolean, digital computation and also serves as a perfect example of how one can translate a level-based computing scheme to a phase/frequency-based representation. For this reason, we start by reviewing this concept in Sec. II along with more recent proposals that resurrect this idea.

The attractiveness of OBC largely hinges on finding a suitable oscillator as a building block of the computer. One may use electrical oscillators and even standard, fabrication-friendly CMOS circuitry. Transistor action, however, is not at all required for every oscillator type, and so the field is wide open for using emerging devices or possibly nonelectrical variables.^{17} OBCs realized with nanoscale, highly efficient oscillators have the potential to yield truly revolutionary devices. We argue that the physical realization of the oscillators is pivotal for their success in real-life applications and devote Sec. III to the physics of emerging nano-oscillator devices.

Computing in OBCs occurs by oscillator interactions, more specifically, by oscillator synchronization. We will survey various means of physical oscillator interconnections in Sec. IV, while in Sec. V, we show how these interconnection topologies perform computing. Synchronization phenomena have a large literature in physics and nonlinear science,^{18,19} and we also review some of the relevant mathematics in Sec. V.

One of the most promising applications for OBCs is as hardware accelerators in artificial intelligence (AI) hardware. In AI algorithms, the vast majority of computing power is spent on performing simple, repetitive calculations, such as calculating dot products, convolutions, applying nonlinearities, and recognizing or matching simple patterns. It is quite possible that Boolean, CMOS-based circuitry is suboptimal for these tasks. For this reason, deep learning algorithms and convolutional neural networks^{20–22} became major drivers for seeking out new, possibly non-Boolean hardware. Section VI will give a case study of a few OBCs that target efficient execution of very specific, repetitive computing tasks.

OBCs have the potential to attack computationally hard problems^{23}—problem classes that have no efficient solution on a Boolean machine and problems that are usually discussed in the framework of quantum computation. Our review will devote Sec. VII to these new application areas.

The reader will see that OBC has grown into a vast field—the 150+ references we cite in this review represent only a small fraction of the literature, and there are books devoted to this topic.^{24} Most work focuses on a particular device^{25} or a particular computing architecture or the mathematical or biological aspects of OBC. One purpose of this review is to give a broad and comprehensive perspective, while keeping the focus on the physics of oscillators and their interactions.

We intentionally choose not to organize this review around a particular computing model or a particular device, and we choose not to overemphasize perceived benefits of OBCs. There is no consensus on the “best way” to use oscillators in computing, and there are very few attempts to benchmark oscillator-based solutions against digital or level-based analog circuits. So this review presents the field the way it stands now: a somewhat loose collection of ideas and concepts that nevertheless holds the potential for a breakthrough for new-generation computing hardware.

## II. PHASE LOGIC AND VON NEUMANN'S OSCILLATORY COMPUTER

von Neumann's groundbreaking idea was to use oscillators as logic gates, where information was represented by the phase.^{15,16} It is worthwhile to note that digital computers in the early fifties reached then-breathtaking speeds of several megahertz. So it seemed a natural idea to look at a logic circuit not as a switch between zeros and ones, but rather as an electrical oscillator switching between different phases of oscillation. von Neumann's device became a success story in the 1950s: Goto^{26} and others further developed his concept, and fully functional Boolean computers were realized using oscillators.^{27} In these works, the basic oscillatory element is usually referred to as a parametron^{28} and the logic scheme as phase logic.

The operation of phase logic is based on subharmonic injection-locked oscillators (SHILOs). Unlike auto-oscillators, SHILOs do not require active circuit elements for operation, only nonlinear resonant elements. SHILOs also cannot generate oscillatory signals from a DC input, but they respond to an incoming AC excitation. More specifically, a SHILO with a resonant frequency of *f*_{0} may be driven by a signal at frequency $2f_0$—this pumping signal feeds energy into the oscillator, which will resonate at *f*_{0}. The resulting *f*_{0} signal (voltage or current) of the oscillator is synchronized to the pump, i.e., the two have a fixed phase relation.

Figure 2(a) shows a circuit schematic from the 1950s, showing a logic gate from a parametron-based computer. This is one possible implementation of a SHILO, and in this particular case, the building blocks are inductively coupled nonlinear LC parametric oscillators. The oscillator nonlinearity derives from the nonlinear hysteresis of the ferrite cores. A $2f_0$ excitation applied to the inductor periodically modulates the inductance *L* and serves as the energy source, compensating the resistive losses in the circuit. Energy transfer between different oscillation modes is most often understood in terms of the Manley-Rowe relations.^{29}

The key concept of using SHILOs for representing digital signals is sketched in Fig. 2(b). There are exactly two distinct phases in which a signal at frequency *f*_{0} may be synchronized to a pumping signal at twice that frequency, $2f_0$. The zero-crossings of the pumping and pumped signals coincide. The two possible phases with respect to the pumping signal represent the binary “0” and “1” states in phase logic.

A parametric oscillator that is started up from an off state may choose either one of the two phases shown in Fig. 2(b). Once the oscillations reach a sufficient amplitude, their phase is fairly stable, and a strong external signal (at *f*_{0}) would be required to flip the phase. Upon startup, however, even a weak perturbation can pull the oscillator toward the perturbation's phase. The oscillator can thus function as a latch, i.e., a phase memory.
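This phase bistability can be reproduced in a minimal numerical model: a damped oscillator whose stiffness is pumped at twice its resonance frequency, with a cubic (Duffing) term to saturate the amplitude. All parameter values here are illustrative choices, not taken from the parametron literature.

```python
import numpy as np

def parametron(x0, eps=0.5, gamma=0.05, beta=1.0, w0=1.0,
               dt=0.01, n_steps=60000):
    """RK4 integration of a pumped Duffing oscillator,
        x'' + 2*gamma*x' + w0^2*(1 + eps*cos(2*w0*t))*x + beta*x^3 = 0,
    a toy model of a subharmonic parametric oscillator started
    from a tiny seed displacement x0."""
    def f(t, y):
        x, v = y
        a = -2*gamma*v - w0**2*(1 + eps*np.cos(2*w0*t))*x - beta*x**3
        return np.array([v, a])
    y = np.array([x0, 0.0])
    xs = np.empty(n_steps)
    for i in range(n_steps):
        t = i * dt
        k1 = f(t, y)
        k2 = f(t + dt/2, y + dt/2*k1)
        k3 = f(t + dt/2, y + dt/2*k2)
        k4 = f(t + dt, y + dt*k3)
        y = y + dt/6*(k1 + 2*k2 + 2*k3 + k4)
        xs[i] = y[0]
    return xs

x_plus = parametron(+0.01)   # tiny seed with one sign ...
x_minus = parametron(-0.01)  # ... and with the opposite sign
```

Because the model equation is odd in *x*, flipping the sign of the tiny seed flips the entire steady-state waveform: the oscillator latches into one of two phase states that are π apart, which are exactly the two logic states of phase logic.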

Oscillators can accept multiple inputs, i.e., multiple *f*_{0} frequency signals that can pull their phases. For example, in the case of the LC parametric oscillators, these inputs can be additional windings on the ferrite core. If a particular oscillator receives multiple inputs with different phases, then it will follow the phase of the “majority” of input oscillators. This majority operation is logically universal, and from majority gates (and inverters), one can straightforwardly realize more familiar NAND/NOR gates and any combinatorial circuits.
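The majority principle can be sketched in the abstract: encode each logic value as a unit phasor at phase 0 or π, sum the input phasors, and let the output oscillator adopt whichever of the two phases the resultant is closer to. With an inverter (a π phase shift) and a constant input, NAND follows. The toy model below is a logic-level illustration, not a circuit simulation.

```python
import cmath, math

def to_phasor(bit):
    """Encode a logic value as a unit phasor: 0 -> phase 0, 1 -> phase pi."""
    return cmath.exp(1j * math.pi * bit)

def from_phasor(z):
    """Decode: the oscillator settles to whichever of the two
    allowed phase states the resultant phasor is closer to."""
    return 0 if z.real > 0 else 1

def MAJ(a, b, c):
    # the output follows the phase of the majority of its inputs
    return from_phasor(to_phasor(a) + to_phasor(b) + to_phasor(c))

def INV(a):
    # a pi phase shift flips the logic value
    return from_phasor(-to_phasor(a))

def NAND(a, b):
    # majority with a constant "0" input acts as AND; inverting gives NAND
    return INV(MAJ(a, b, 0))
```

Fixing one majority input to a constant “0” turns the gate into AND (and to “1”, into OR), so MAJ plus INV indeed suffices for any combinatorial circuit.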

Minuscule input signals can decide the phase state of the parametric oscillator, but once the oscillator reaches steady-state oscillations, it can provide a strong output to logic gates at subsequent logic stages. The logic gates can amplify oscillatory signals, provide fan-out, and can be concatenated to make large networks. They fulfill the five tenets of Boolean computation.^{31}

von Neumann's oscillatory computer serves as a perfect example of how one can redesign a level-based computing scheme (in this case, standard Boolean logic) to operate in the phase space. It also teaches a lesson on how crucial the physical realization of the oscillator is. Ferrite-core based LC oscillators in the 1950s were competitive with vacuum-tube-based circuits, but they quickly became obsolete when miniaturized and increasingly fast transistors appeared.

The idea of phase-based logic has been resurrected in several recent proposals, using micro- or nanoelectronic building blocks based on emerging electronic devices. In the late 1990s, there was great interest in single-electron transistors (SETs), and one of the first modern phase-logic proposals was made by Kiehl's group.^{32,33} Oscillations in a SET circuit are due to the Coulomb blockade. The physics is entirely different from that of LC oscillators; yet, the logic operations can be performed almost exactly the same way as in the case of the LC oscillators.

In micro- and nanomechanical systems (MEMS), the interconversion between kinetic and potential energy is the source of oscillations. The oscillator equations are formally similar to the LC oscillator equations, and so, not surprisingly, phase-logic operations can be demonstrated in this physical system as well.^{28}

Ring oscillators are perhaps the most microelectronic-friendly implementation for phase-based logic.^{34,35} A single ring oscillator can be tapped at different circuit nodes, and in this way, phase-shifted copies of the oscillator signal are easily available. Figure 2(c) schematically shows one way of using ring oscillators as SHILOs: parametric pumping is applied as the power supply of one inverter.

Spiking (neural) networks are perhaps the first successful engineering application related to neurally inspired circuit architectures, and the boundary between OBCs and spiking networks is somewhat diffuse: OBC circuit designs share many features of spiking neurons. The promise of spiking neural networks derives from the fact that a single spike can carry extremely low energy (on the order of $10^{-16}$ J), and it is acceptable to lose some fraction of the spikes,^{36–46} which results in higher error tolerance. Spiking neural networks use oscillators for generating the signals but do not take advantage of the nonlinear interaction between oscillators, and so most of them do not belong to OBCs as we defined them.

## III. OSCILLATORS FOR OBC

Just as a digital computer is built from billions of transistors, an envisioned OBC will contain millions or billions of interconnected oscillators. The demands on the elementary oscillators are high, and the success of OBC will eventually depend on whether one can find oscillatory building blocks that are (1) compact, (2) low power, (3) high frequency, (4) low noise, and (5) robust, and that (6) can be efficiently interconnected to each other and (7) can easily interface with electronic circuitry.

Satisfying all the above requirements is a tall order. Depending on the chosen computing architecture, some of the requirements may be more or less crucial and relevant.

### A. Fundamental types of oscillators

“Oscillator” is a fairly broad term, as almost every physical system that is capable of producing periodic changes in a physical variable is commonly referred to as an oscillator. From the point of view of applications in OBC, it is useful to distinguish the types below.

Auto-oscillators typically generate AC signals from a DC energy source, and the dynamical properties of the system define a limit cycle with a fixed amplitude and frequency. These systems are the ones most commonly referred to as oscillators.

Oscillators may also merely respond to an oscillatory signal imposed on them. Forced oscillators and parametric oscillators (like the SHILOs described in Sec. II) exemplify this behavior. The response of the oscillator is related to the incoming excitatory signal but does not necessarily occur at the same frequency.

Energy-recycling oscillators periodically convert energy between two different forms, and the energy dissipated in each oscillation cycle can be significantly less than the total energy involved in the oscillation. For example, an LC oscillator characterized by a quality factor *Q* loses approximately a 1/*Q* fraction of its energy in each oscillation cycle. Most solid-state oscillators do not have this property: the energy associated with the oscillatory signal is simply thrown away (dissipated).

In electrical oscillators, it is useful to distinguish between switch-based and nonswitch-based oscillators. A switch-based oscillator typically charges and discharges a capacitor by periodically connecting it to the power supply and the ground. These oscillators are fairly simple to construct, but they produce square-like waves. Electronic oscillators using analog amplifiers can generate clean sinusoidal waveforms, but these circuits are significantly more complex than the switch-based ones.

### B. Physical implementations of oscillators

There is a large variety of physical (or chemical or biological) processes that may produce oscillations. Perhaps the most common types of oscillations are based on the oscillation of electrical charges.^{47} Almost any other physical quantity can oscillate as well, with mechanical and magnetic oscillations being the most widely used besides the electrical ones. One may also mix different physical quantities, for example, using electrical circuitry only for the oscillator interconnections and employing other state variables “inside” the oscillatory device.

Perhaps the most straightforward physical implementation of an OBC would use LC oscillators, which periodically convert between energy stored in magnetic and electric fields. These were indeed the choice for early oscillator-based computers,^{26} but inductors proved uncompetitive with solid-state devices: they consume very large chip areas, have high series resistance, and are often limited to low-frequency operation.^{48}

There are many variants of integrable, room-temperature electrical oscillators one may use. Ring oscillators are one of the most compact transistor-based oscillators—their frequency and power consumption can vary over a wide range, and in subthreshold mode, they can compete with low-power nanoscale oscillators.^{49,50} Interconnections to each other and interfaces to electronic circuitry are straightforward.

Electrical oscillators may also be built from emerging devices, such as phase-change materials—these are often referred to as memristive oscillators.^{51} A sandwich structure made from these materials becomes conductive (highly resistive) above (below) a threshold voltage, and the device behaves as a switch with hysteresis. Embedded in a simple RC circuit, such a device may operate as a relaxation oscillator. The oscillation frequency is set by the RC time constant of the oscillator.^{52} The switching element itself is a rather simple structure and is highly scalable.
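A behavioral sketch of such a relaxation oscillator (the threshold voltages, on-resistance ratio, and component values below are arbitrary illustrations, not taken from Ref. 52): the capacitor charges through *R* until the switch's upper threshold closes it, then discharges through the switch's low on-resistance until the lower threshold opens it again.

```python
import math

def relaxation_period(R, C, v_dd=1.0, v_hi=0.7, v_lo=0.3, r_on_factor=0.01):
    """Analytic period of an RC relaxation oscillator built around a
    hysteretic threshold switch: charge from v_lo up to v_hi through R,
    then discharge from v_hi down to v_lo through the switch's small
    'on' resistance (here r_on_factor * R)."""
    t_charge = R * C * math.log((v_dd - v_lo) / (v_dd - v_hi))
    t_discharge = r_on_factor * R * C * math.log(v_hi / v_lo)
    return t_charge + t_discharge

T1 = relaxation_period(R=1e5, C=1e-12)   # 100 kOhm, 1 pF
T2 = relaxation_period(R=2e5, C=1e-12)   # doubling R doubles the period
freq = 1.0 / T1                          # oscillation frequency ~ 1/(RC)
```

Both the charge and discharge phases scale linearly with the RC product, so the oscillation frequency is indeed set by the RC time constant, as stated above.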

A large class of oscillators relies on the oscillatory motion of magnetic moments (spins) in ferromagnetic materials. Spin-torque oscillators and spin-Hall oscillators use a spin-polarized current to excite spin precession in a ferromagnetic layer; the current flowing through the oscillator is modulated by the oscillating magnetization via a magnetoresistance effect [giant magnetoresistance (GMR) or tunneling magnetoresistance in magnetic tunnel junctions (MTJs)] or via spin pumping. The magnetic thin film is capable of undergoing high-frequency (gigahertz-regime) and high-*Q* oscillations, but the interconversion efficiency between the electric and magnetic degrees of freedom is relatively low, and for this reason, magnetic oscillators require relatively high power to run. On the upside, their high oscillation frequency makes them outstanding candidates for high-speed computing applications, and they are one of the most popular physical realizations for neuromorphic circuits.^{53–55}

Parametric devices can be built from magnetic materials by modulating their magnetic properties. One possibility to achieve such modulation is via voltage-controlled anisotropy.^{56} These circuits lend themselves naturally to the realization of von-Neumann type oscillatory computers.^{56,57}

In a mechanical [MEMS or NEMS (NanoElectroMechanical Systems)] oscillator, a vibrating body, such as a cantilever,^{28,58–60} is the source of oscillations. In terms of dynamic behavior, they share many similarities with LC oscillators, but they are much more amenable to large-scale integration.^{61,62}

In terms of figures of merit, superconducting devices may come closest to realizing a perfect oscillator: they consume ultralow energies, are capable of high-frequency operation, and do not necessarily occupy large chip areas.^{63–66} Superconducting LC components are lossless, and active circuit elements (Josephson junctions) are available in this technology. Their obvious disadvantage is the required cooling apparatus; they are also challenging to integrate with input/output circuitry and memory units, and mutual interconnections remain difficult. While superconducting circuits are a hotly researched area (mostly for applications related to quantum computing), they do not belong to the mainstream of research in analog computing.

Single-electron devices (SEDs) provide another low-temperature technology for parametric oscillatory devices. In SEDs, metal islands are connected via tunneling barriers, and individual electrons can tunnel between these islands. The capacitances between the metal islands are so low that a single tunneling event can significantly change the electrostatics of the circuit and block the flow of further electrons—until the electron leaves the island. The time constant of the tunneling events defines the time period of the oscillations. The operation of SET-based parametric devices was demonstrated for (Boolean) phase logic and also for neuromorphic circuitry.^{32,33} These devices share many benefits and disadvantages of superconducting circuitry, and their performance numbers are close to those of superconducting devices. Due to the low energy involved in a tunneling event, these devices require cryogenic temperatures to operate.

Chemical oscillators are also investigated for computing applications.^{67} Electrochemical reactions can produce oscillatory currents, and one may even observe complex pattern formation as a result of many-oscillator interactions. Typically, however, the oscillation frequency is low, and it is hard to see how one could engineer the interconnection of multiple oscillators or electrical interfaces.

Table I shows an overview of possible physical oscillators, and we also provide estimates of some relevant parameters and figures of merit. The table will be further analyzed in the remainder of this section, but one can immediately notice that there is no “perfect” oscillator that would simultaneously excel in all figures of merit. The table also shows that ring oscillators, a very old-fashioned technology, show fairly good overall performance.

| Oscillator name | State variable | Frequency | Energy/cycle (J) | Possible coupling mechanism | Sources |
|---|---|---|---|---|---|
| Ring oscillator | Electric | Up to 20 GHz | $10^{-15}$ | Electrical | 68 |
| Relaxation oscillator based on phase transitions | Electric | Up to 10 GHz | $10^{-17}$ | Electrical | 52 |
| LC oscillator | Electric | Up to 100 GHz | | Electrical | 48 |
| Superconducting oscillator | Electric and magnetic | Several 10 GHz | $10^{-17}$ | Electrical, inductive, and capacitive | 64, 65 |
| Mechanical (NEMS) oscillator/RBO | Mechanical | Up to 20 GHz | $10^{-14}$ | Electrical or mechanical | 58 |
| Spin torque oscillator (STO) | Magnetic | Upward of 50 GHz | $10^{-15}$ | Electric, magnetic, or spin wave | 55, 69 |
| Chemical | Electrochemical | $10^{2}$ Hz | No data | No data | 67 |
| Magnetic anisotropy controlled parametric | Magnetic | Up to 20 GHz | No data | Electrical | 56 |
| Spin-Hall oscillator | Magnetic | Up to 20 GHz | $10^{-16}$ | Electric, magnetic, or spin wave | 70 |
| SET device | Electric | 10 GHz | $10^{-18}$ | Electrical | 33 |

### C. Power considerations in physical oscillators

In the context of OBCs, perhaps the most important figure of merit for an oscillator is its power consumption, which includes both the power of each oscillator and the power consumed by the interconnections. Below, we focus on the oscillator alone, and interconnections will be discussed in Sec. IV B.

A good baseline for comparing the power dissipation of various oscillators is to calculate the power figures for ultralow-power ring oscillators, such as those used in Radio Frequency IDentification (RFID) transponders.^{68,71} The voltage-controlled oscillator described in Ref. 68 consumes 24 nW at 5.24 MHz, that is, $E_{diss} \approx 4.7 \times 10^{-15}$ J per oscillation cycle. With vanadium oxide relaxation oscillators, one can possibly achieve an order of magnitude better: Ref. 72 projects $0.5~\mu$W at 1.6 GHz, giving $E_{diss} \approx 10^{-16}$ J per cycle. Obviously, in the case of electrical oscillators, there is no overhead in converting between electrical and nonelectrical state variables, albeit long-range interconnections may require power-hungry amplifiers.
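These per-cycle energies follow directly from $E_{diss} = P/f$; a quick arithmetic check of the figures quoted above:

```python
# Energy dissipated per oscillation cycle: E = P / f
ring_E = 24e-9 / 5.24e6  # RFID ring oscillator: 24 nW at 5.24 MHz -> ~4.6e-15 J
vo2_E = 0.5e-6 / 1.6e9   # VO2 relaxation oscillator: 0.5 uW at 1.6 GHz -> ~3e-16 J
```

Both values reproduce the quoted order-of-magnitude figures.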

An attractive oscillator in terms of energy would be an energy-recycling oscillator, i.e., one in which, during each oscillation cycle, energy is reversibly converted (largely without losses) between two forms instead of being dissipated. LC and MEMS oscillators exhibit this property.^{73}

On-chip LC oscillators, however, have low *Q* factors. Small inductors at room temperature have large series resistances,^{48} resulting in large resistive losses and small *Q*, so little can be gained by energy-recycling circuitry. The picture is totally different for superconducting LC elements, which are ideal oscillatory blocks in terms of power consumption. In Table I, superconducting devices stand out with an energy of $E_{diss} \approx 10^{-17}$ J per oscillation/spike. Single-electron devices may perform even better, due to the very low energy associated with a single electron-tunneling event.

Mechanical oscillators (MEMS/NEMS) are energy-recycling oscillators and can have very high *Q* factors. The equations describing the interconversion between kinetic and potential energy in a MEMS oscillator are formally very similar to the LC oscillator equations—so the dynamic behavior of a MEMS oscillator very much resembles that of a high-*Q* LC circuit. In the case of MEMS, however, low transduction efficiency is the main challenge. As mechanical oscillators must be driven (and possibly interconnected) by electrical signals, their net power efficiency depends on the efficiency of interconversion between electrical and mechanical signals, which can rarely be done with better than a few percent efficiency.^{74}

Spin torque oscillators (STOs) are current-driven devices that typically run at submilliampere current levels and oscillate in the gigahertz frequency range. The source of the oscillation is the precession of magnetic moments in ferromagnetic materials, and the energy required for these oscillations is supplied by spin-polarized currents flowing through the magnetic layers. Damping in the magnetic material is relatively low, and resistive losses in the STO layer stack account for the vast majority of the dissipated power. The magnetic oscillations modulate the resistance of the STO layer stack, and one can detect an electrical signal as a result of the magnetization oscillations.

As an example, assuming $V_{STO}=0.1$ V, $i_{STO}=0.1$ mA, and *f* = 10 GHz, the energy consumed in an STO per oscillation cycle is $E_{diss} \approx 10^{-15}$ J, not very far from the ring oscillator figure. Reducing the resistance of the stack is possible in principle, but difficult in practice. There are additional design tradeoffs. For example, spin oscillators based on tunneling magnetoresistance provide large electrical outputs but have a high net resistance. The net power efficiency of STO circuitry can be significantly reduced if amplifier circuitry is needed for coupling or readout of the signals. Emerging physics (such as the spin-Hall effect) may significantly boost the efficiency of magnetoelectric interconnections, and voltage-controlled anisotropy could allow us to drive parametric oscillators with almost no resistive current flow.

Finally, a general remark about power consumption: the energy of thermal fluctuations at room temperature is on the order of $kT = 26$ meV $= 4.14 \times 10^{-21}$ J. The energy involved in each oscillation cycle should be at least a few times this value to avoid the oscillator signal getting completely lost in noise. The oscillators presented above are still several orders of magnitude away from this value (with the exception of superconducting oscillators)—so, there is certainly room for much more energy-efficient physical systems.

### D. Noise considerations

Low-power analog computing devices are inevitably subject to noise. Nanoscale oscillators, for which very little energy is involved in the oscillation process, will have noisy waveforms, with relatively poor frequency and phase stability.^{75–77}

In most cases, noise is detrimental to device operation or can render a computing scheme unrealizable. For example, Ref. 78 describes schemes for using the STO phase and frequency for analog computation; it turns out that the phase noise of STOs is far too high for phase-based schemes to be feasible, and only the STO frequency can be used.

Some nanoscale oscillators can be engineered to show noisy, stochastic behavior. For example, reducing the volume of the magnetic layer in an STO will reduce the potential barrier seen by magnetic moments. If the height of this barrier becomes comparable with *kT*, thermal fluctuations can stochastically switch the magnetic layer.^{79} The device may still be used as an oscillator where externally imposed currents can modulate the noise. There are computational schemes that use such stochastic oscillations for computing.

One important argument in favor of OBCs is that frequency- or phase-based representation of information is believed to be inherently more noise tolerant than amplitude-based (level-based) representations. The superiority of phase/frequency based coding is well established in telecommunication theory.^{80,81} The work of Wang and Roychowdhury^{35} makes an argument that these principles transfer to oscillatory computing devices.

## IV. COUPLING OF OSCILLATORS

### A. Coupling via physical interactions vs engineered couplings

In OBCs, computing is done by coupling of oscillators—these (mutual) interactions will alter the phase/frequency or even the amplitude of oscillators. The mechanism of oscillator interaction, often referred to as synchronization, was observed by Huygens back in 1665. He described how two pendulum clocks (mechanical oscillators) start to tick in synchrony (i.e., assume the same phase) when they are hung on the same wall, which provides some weak mechanical coupling between them. Weak couplings work perfectly for the purpose of synchronization, and this phenomenon is pervasive.

Synchronization requires only weak coupling between oscillators, and these weak couplings are often present in the system “for free” through some parasitic effect. So one approach to oscillator coupling is to exploit such existing physical interconnections using the physics of the oscillator state variables. MEMS oscillators may be coupled via acoustic vibrations propagating in the semiconductor substrate. STOs couple via magnetic interactions and spin waves. Electrical oscillators couple through capacitive/inductive effects or via shared ground or power supply lines. Of course, this approach implies limitations on the coupling topology, and only a few (usually simple) coupling configurations are possible.

The second, more often followed route is to engineer the couplings, i.e., to design which oscillators are coupled and with what strength. This gives full flexibility in the design and allows a wide array of computing schemes. However, there is a high price to pay: most non-Boolean computing schemes (especially neuromorphic networks) are highly interconnected architectures, and the number of required interconnects greatly exceeds the number of oscillators. In this case, the ease with which the oscillators can be interconnected becomes the most important figure of merit, and hard-to-interconnect oscillators are practically useless in an OBC. So one may argue that the figures of merit for oscillators given in Table I are not all that relevant, and “good” oscillators are the ones that can be coupled by compact, low-power, high-fanout interconnections. Even in standard digital circuits, interconnections often account for most of the circuit complexity, and moving data between distant points of the circuit accounts for most of the power consumption. It is not hard to see that in highly interconnected analog (oscillatory) circuitry, interconnections will be the bottleneck.

It is important to point out that a physical connection between two oscillators does not necessarily mean that they will influence each other. For example, two weakly coupled oscillators will only see each other if their frequencies lie sufficiently close together; oscillators with very different frequencies will, in most cases, simply ignore each other. One could say that the physical and the logical (effective) couplings inside the network may be quite different from each other.

### B. Physical realization of oscillator interconnections

#### 1. Electrical interconnections

Electrical interconnections are a straightforward choice for coupling electrical oscillators.^{30,77,82} The strength of electrical interconnections may be made fixed or be tunable via simple circuitry. Capacitive or inductive elements may result in positive or negative coupling coefficients (i.e., ones that push or pull the phases against/toward each other). Figure 3(a) shows an example of electrical oscillator interconnection, via an RC element assuming VO_{2}-based relaxation oscillators.

Electrical connections are often the easiest, most flexible option even for oscillators operating on nonelectrical state variables. However, there is almost always a high energy penalty for doing this. As we discussed above, in the case of NEMS devices and STOs, the transduction efficiency (i.e., the fraction of energy converted to/from the electrical and nonelectrical degrees of freedom) is no more than a few percent. So such oscillator couplings require active interconnections, i.e., amplifier stages between the oscillators.

One example, given in Ref. 83 and sketched in Fig. 3(b), describes how high-frequency STOs can be brought into interaction via a waveguide. The waveguide (referred to as the field line) is a simple electrical wire, providing magnetic fields for interaction with the STOs. The STO outputs are picked up by an amplifier, and this amplifier feeds the field line with current—this scheme brings all the STOs into mutual interaction with each other.

#### 2. Nonelectrical interconnections

For nonelectrical oscillators, it is highly desirable to base the interconnections on the same nonelectrical state variable that the oscillator itself operates on.

In the case of NEMS oscillators, mechanical (acoustic) couplings are the most natural and efficient.^{60,85} For spin oscillators, there are different possibilities: magnetic moments may be coupled via their dipole (magnetic field) or spin-wave excitations. Spin polarized currents may also directly couple spin oscillators without the need of extra circuitry,^{86,87} and topological surface states^{88} may amplify this effect.

Perhaps the most advanced direct oscillator-oscillator coupling has been achieved in spin-wave coupled STOs.^{69,89} In order to achieve direct spin-wave coupling, magnetic oscillators must share the same magnetic film. The oscillatory precession of the magnetization in the STO generates propagating spin waves in the film, which can reach and affect neighboring STOs, resulting in STO synchronization. Spin oscillators based on the spin Hall effect have a more favorable geometry (i.e., in experiments, they may be placed closer to each other), and Ref. 70 reports coupling of nine oscillators, the largest number reported to date. This experimental setup is sketched in Fig. 3(d).

Direct physical coupling of emerging oscillators might enable us to fully utilize the potential of emerging state variables, but geometry constraints may strongly limit the structure of the realizable couplings. For example, the work of Ref. 70 can only realize nearest-neighbor coupling, while most proposed applications of STO networks^{78} would require all-to-all couplings. The coupling range is also limited: spin waves in most magnetic materials propagate at most a few hundred nanometers, and of course, strong coupling occurs only between nearby oscillators. There are magnetic materials that allow significantly longer spin-wave propagation lengths, such as Yttrium Iron Garnet (YIG), where propagation lengths on the order of several tens of microns were measured.^{90} In principle, thousands of STOs could be coupled to each other in such low-damping magnetic materials, but it is a technological challenge to integrate spin oscillators on such magnetic films.

## V. COMPUTING BY OSCILLATOR DYNAMICS

Having discussed the physics of oscillators and oscillator interactions, we now turn to the question of what sort of computation can be implemented by coupled oscillator dynamics. With the exception of the discussion of von Neumann's Boolean scheme (phase logic) in Sec. II, we have not yet addressed this question, and the present section is primarily devoted to non-Boolean models of computing in oscillatory networks.

One way to compute with oscillators is to let phase signals propagate from input to output, in a well-defined, sequential manner. This is done in phase logic, which is essentially a feedforward neural network implemented by oscillators.

A very different model of computing, which we refer to as collective-state computing, views computation as a result of complex, multidirectional interactions in networks of interconnected primitives (such as neurons in neural networks and oscillators in our case). Such collective-state computing is our main interest in this paper, and an overwhelming majority of current OBC research studies deal with such models.

### A. Collective state computing with oscillators

Synchronization (mutual oscillator interactions) can drive the oscillator network into a collective state, such as an attractor state or a limit cycle. In this collective state, the phases and frequencies of the oscillators are not independent of each other, but form patterns. These patterns represent the result of a computation.

A high-level description of a typical collective-state computing process is as follows:

1. Input is given to the network. This could be a frequency or phase pattern that is forced onto the oscillators and sets the initial physical state of each individual oscillator. Such an input pattern (in the phase or frequency) might be an image that needs to be processed.

2. The inputs are removed, and the phases and frequencies of the oscillators evolve due to their mutual interactions. Synchronization drives the network toward a minimum of an energylike constraint function. This could be a stationary phase or frequency pattern, which represents the processed input.

3. The result of the computation is read out by extracting phases or frequencies from groups of oscillators.

This is not the only possible mode of operation for an OBC. For example, central pattern generators (briefly described in Sec. VI C) generate a time-dependent output pattern (signal) as a response to an input signal, and this input may also change continuously in time. A meaningful computational process does not necessarily mean that a steady state (steady phase or frequency patterns) is reached.

Most computing models use the phase of the oscillators as the carrier of information. Oscillator frequency is more stable, but it is also harder to influence by coupling.

Mathematically, oscillator interaction is described in terms of synchronization, i.e., the emergence of phase/frequency patterns in the oscillator cluster.^{18,91,92} Perhaps the simplest model describing the formation of these states is the celebrated Kuramoto model.^{19,93} The model provides an analytical solution for sinusoidal oscillators that are weakly coupled through their phase variable, that is, each oscillator pushes/pulls the other oscillator phases with a strength proportional to the sine of their phase difference. The model describes the sudden phase transition of the oscillator network from a desynchronized state (uncorrelated phases) to a coherently oscillating state above a critical interaction strength. A network of Kuramoto oscillators will often display mesmerizing and complex phase patterns,^{94} hinting that this complexity can be harnessed for computation.
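
The Kuramoto dynamics is easy to experiment with numerically. The sketch below (with illustrative parameters of our own choosing) integrates $\dot{\theta}_i = \omega_i + (K/N)\sum_j \sin(\theta_j - \theta_i)$ in its mean-field form and prints the order parameter $r = |\langle e^{i\theta}\rangle|$, which switches from near zero to near one as the coupling $K$ crosses the critical strength:

```python
import numpy as np

def kuramoto_order(K, N=200, dt=0.01, steps=4000, seed=0):
    """Integrate N Kuramoto oscillators (Euler method) and return the final
    order parameter r = |<exp(i*theta)>|: ~0 desynchronized, ~1 coherent."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0, N)           # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, N)      # random initial phases
    for _ in range(steps):
        # mean-field form: each phase is pulled toward the mean phase psi
        # with strength K*r, where z = r*exp(i*psi) is the centroid
        z = np.exp(1j * theta).mean()
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return np.abs(np.exp(1j * theta).mean())

for K in (0.5, 1.0, 2.0, 4.0):
    print(f"K = {K:.1f}  ->  r = {kuramoto_order(K):.2f}")
```

For frequencies drawn from a standard normal distribution, the critical coupling is $K_c = 2\sqrt{2\pi}/\pi \approx 1.6$, so the weakly coupled runs stay incoherent while the strongly coupled ones lock.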

In most works on OBCs, numerical simulations, rather than analytical solutions, are used to study networks. The reason for this is that the Kuramoto model alone gives very few hints about the computational utility of a particular network. The Kuramoto model also assumes that only the oscillator phases are perturbed by oscillator-oscillator interactions, while the frequencies remain intact, which is usually a poor approximation for most physical oscillators and typical coupling strengths. One most often needs time-consuming numerical simulations or numerical approximations to determine the phase dynamics of irregularly connected, highly nonlinear oscillators. It is possible to directly calculate the coupled oscillator dynamics or to use approximate, but much more efficient, phase-domain models.^{95}

The synchronization network is a term often used in the mathematical or nonlinear science literature to refer to systems of Kuramoto oscillators or OBCs.^{96}

### B. Interconnections define the network function

What computations a network can perform depends on the interconnection weights. One may engineer the weights of the network to perform certain functions. For an OBC, one may even use the same interconnection network that a standard neural network uses—see Sec. V C for an example of how to do this. As in the case of level-based neural networks, it is easy to end up with an interconnection-heavy, hard-to-realize design.

Rather than designing-in desired interconnections, one may follow another route and try to use the couplings that are inherent in the physical system, and then see what functions such a network is capable of performing. This route is followed, for example, in reservoir computing.^{53,97,98} In reservoir computing, the oscillator network acts as a complex, nonlinear system with memory (for the requirements of a reservoir, see Refs. 99 and 100), and this complex network dynamics is turned into useful computation by an output layer, which maps the network output to the desired computational result.

A unique characteristic of OBCs is that externally injected oscillatory signals can tune oscillator-oscillator interactions or bring noninteracting oscillators into coupling. This idea was originally described in Ref. 101. Consider two oscillators, running at frequencies $f_0$ and $f_1$ and being physically coupled (say, via a common circuit node). Such oscillators will not interact if their frequencies lie too far apart (and are not harmonically related). However, an externally applied oscillatory signal with frequency $f = f_0 - f_1$ will bring these oscillators into interaction. The possibility of using external signals to define connections, instead of physically rewiring the network, may allow us to overcome interconnection bottlenecks.

### C. Construction of an oscillatory Hopfield network

The Hopfield network is one of the most studied neural network models.^{102,103} It is also prominent in the OBC literature and has been used as the starting point for the construction of oscillatory associative memory concepts, described, for example, in Refs. 30, 101, 104, and 105.

In the traditional Hopfield model, two-state neurons (computing nodes) interact with all other neurons in the network via positive or negative weights. The weights are chosen to define certain attractor states for the network, which are, in turn, minima of an energylike constraint function. For example, if convergence to a black-and-white image is desired, neurons corresponding to like-colored pixels of the image are interconnected by positive weights (i.e., pull each other toward the same state), while neurons with different colors will repel each other (since they are interconnected by negative weights). Weights can be determined by a simple formula (such as the Hebbian rule^{106}) or a learning algorithm.^{107} Each neuron sums all its inputs, then applies a nonlinear (typically sigmoid) activation function, and finally sends this output to all other neurons. The network may have a unique ground state, where all positively (negatively) connected neurons are in the same (opposite) state, such that all constraints are satisfied. If not all couplings can be satisfied, then the network can have multiple stationary states and will act as an auto-associative memory: if the initial state of the neurons resembles one of the patterns programmed into the weights, the network will converge to this preprogrammed pattern.

Such conventional level-based Hopfield networks can be reformulated to use oscillators as their building blocks (neurons). Neurons become oscillators, and their phase state represents the output. Oscillators may be interconnected so as to pull toward the same phase, i.e., synchronize in-phase, or they may be interconnected with a phase delay that causes them to synchronize out-of-phase (antiphase), i.e., with a 180° phase shift. These two types of interconnections correspond to the positive and negative couplings, respectively, of the level-based Hopfield network. The nonlinearity of the synchronization process means that the sigmoid activation function is already built into the oscillators. When the network converges, the oscillators form two groups with identical phases inside each group, and the resulting pattern is the output of the associative memory operation.
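
A minimal phase-domain sketch of this construction (a toy model of our own, not the circuit of any particular reference): a single ±1 pattern is stored in Hebbian weights, and Kuramoto-type gradient dynamics with these weights pulls positively coupled oscillators in-phase and negatively coupled ones into antiphase, recovering the pattern from a corrupted initial phase state:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
pattern = rng.choice([-1.0, 1.0], size=N)   # stored black/white pattern
W = np.outer(pattern, pattern) / N          # Hebbian weights
np.fill_diagonal(W, 0.0)

# Encode the pattern in phases (0 or pi), then corrupt 25% of the "pixels"
phase = np.where(pattern > 0, 0.0, np.pi)
flip = rng.choice(N, size=N // 4, replace=False)
phase[flip] += np.pi
phase += rng.normal(0.0, 0.1, size=N)       # small noise to leave saddle points

dt, K = 0.05, 2.0
for _ in range(2000):
    # gradient flow: positive weights pull in-phase, negative push antiphase
    phase += dt * K * (W * np.sin(phase[None, :] - phase[:, None])).sum(axis=1)

# Read out: phases cluster at two values 180 degrees apart
readout = np.where(np.cos(phase - phase[0]) > 0, 1.0, -1.0) * pattern[0]
print("pattern recovered:", bool(np.array_equal(readout, pattern)))
```

The readout is taken relative to one reference oscillator, since the collective state is only defined up to a global phase rotation.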

### D. Models for oscillatory collective state computing

Most OBC computing models are based on the same principle as the Hopfield network, that is, to find an “oscillatory ground state” and use the convergence of the network toward a stationary phase or frequency pattern that minimizes an energy function. A comprehensive overview of various Hopfield-like computing models is given in Ref. 108, which also explicitly underlines the relation to biological systems.

One variant of the Hopfield network measures the output phase correlations in the network, rather than looking for perfectly synchronized oscillator states. Oscillator interactions do not necessarily yield perfectly in-phase or antiphase running oscillators—any (phase) correlation between two oscillators could have computational value, and such phase correlations may be accessible via output circuitry. For example, in Refs. 109–112, the oscillators are controlled by externally applied AC signals, which bring groups of oscillators into a phase-correlated state. The output is read out by extracting the pairwise phase correlations between oscillators and applying threshold criteria. The coupling weights (which are realized by the externally injected signals) can be trained for specific functions by computational learning algorithms. Several examples of such learning algorithms have been published.^{53,109,110}

Using oscillators alone does not address the main problem of Hopfield-like networks: Hopfield nets require all-to-all interconnections and are therefore not scalable beyond a few tens of neurons. But since oscillators communicate in the frequency domain and one can create “virtual” interconnections by using externally injected signals,^{101} there are a number of strategies to deal with the interconnection bottleneck. The work of Ref. 113 uses a frequency-domain multiplexing scheme to drastically reduce the number of interconnections—neural network architectures that are unrealizable in a level-based system may be physically realizable with oscillators. One may make an analogy here with frequency-division multiplexing (FDM) in telecommunications: a small number of high-bandwidth physical links can be used to create a large number of virtual interconnections between processing units (neurons). Such ideas have been discussed in the framework of neural networks,^{101,113–118} and oscillators are possibly the most straightforward hardware for realizing neural networks using FDM.

An overview of the different interconnection schemes is shown in Fig. 4. Depending on the computational model and the problem to be solved, one may use all-to-all interconnections, various dynamic interconnections, or a simpler scheme in which all oscillators are connected via a common node.

Harnessing the computational power of the collective state of a many-oscillator system (as is, supposedly, done by mammalian brains) is the Holy Grail of OBC research—perhaps of all neurally inspired computing models. So far, no large-scale problem of high practicality and interest has emerged that OBCs solve with much higher efficiency than digital computers. There are different problem classes, though, that may yield successful applications of OBCs in the near term. One possibility is to forget about large-scale problems, solve relatively simple but ubiquitous tasks, and do so with extremely high (energy) efficiency. Such problems will be discussed in Sec. VI. The other possibility is to attack extremely hard problems, with no known effective solutions; attempts to do so will be surveyed in Sec. VII.

## VI. CASE STUDIES

Hardware accelerators for image processing, vowel recognition, gait control, and combinatorial optimization are examples where a relatively simple OBC can show impressive performance. A few case studies, some of them similar to the examples of Ref. 82, will be given below. These are representative, but somewhat arbitrary, examples, and they by no means intend to cover all possible application areas for OBCs.

### A. Oscillators for efficient image processing

In many computing tasks, the vast majority of resources (energy, time, and hardware) are spent on relatively simple, repetitive jobs. This is especially true in areas such as image processing, where a large number of convolution, filtering, and image processing steps have to be performed on a massive amount of input data (e.g., video streams). An analog device that can calculate, for example, just a simple dot product in a fast and energy-efficient way would significantly boost the overall efficiency of the entire image processing pipeline (IPP).

A recent DARPA project^{119} targeted exactly such tasks.^{78} The demonstration of a complete IPP was pursued, with analog oscillatory computing primitives at its heart. The underlying idea was that an efficient Euclidean distance calculation on analog data can be done by exploiting oscillator interactions.^{120} The effort included circuit-design and algorithm-design components and also a nanodevice work package, in which nanoscale mechanical and magnetic oscillators were developed as hardware components for the IPP.

The analog distance-calculating unit uses current-controlled oscillators, in this case STOs, which are coupled to each other with equal positive weights. The network topology is therefore very simple—in this case, coupling via a common field line was used, as shown in Fig. 3(c). The analog inputs of the network are the currents (or voltages) driving the STOs, and in a limited frequency range, the STO frequency is linear in the input. The STOs should be nominally identical [i.e., have identical frequency response *f*(*i*) to input currents]. The output of the network is an integrator, which, in the simplest case, can be an RC lowpass filter that sums up and averages the oscillator outputs. Figure 5(a) shows a circuit schematic of this network. For each cluster, the number of analog inputs is equal to the number of STOs, and there is a single output.

If the applied input currents are very different from each other, then (for a given coupling strength) the oscillators will not synchronize and will run independently of each other. If the driving currents are close to equal, the STO network will synchronize. In the latter, synchronized case, the STO outputs sum up coherently (in-phase) at the output integrator, which gives a higher output value than the incoherent (random-phase) superposition of the oscillator signals. If the input current vector is the element-wise difference of two analog current vectors, then the output signal (i.e., the degree of coherence in the STO network) is a good measure of the Euclidean distance between the two input current (or voltage) vectors.^{36,41,121}
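
A phase-domain toy model (our own sketch, not a circuit-level STO simulation) captures this behavior: each oscillator is frequency-detuned in proportion to one component of the input-difference vector, and the time-averaged coherence of the cluster drops as the inputs move apart:

```python
import numpy as np

def coherence(detuning, K=1.0, dt=0.01, steps=5000):
    """Time-averaged order parameter of a mean-field-coupled cluster whose
    oscillators are frequency-detuned by the input-difference vector."""
    delta = np.asarray(detuning, dtype=float)
    theta = np.zeros_like(delta)
    r_sum = 0.0
    for _ in range(steps):
        z = np.exp(1j * theta).mean()
        theta += dt * (delta + K * np.abs(z) * np.sin(np.angle(z) - theta))
        r_sum += np.abs(np.exp(1j * theta).mean())
    return r_sum / steps

# Input-difference vectors: equal inputs, slightly different, very different
for d in ([0.0, 0.0, 0.0, 0.0, 0.0],
          [0.3, -0.2, 0.1, -0.3, 0.2],
          [2.0, -1.0, 0.5, -2.0, 1.5]):
    d = np.array(d)
    print(f"|diff| = {np.linalg.norm(d):4.2f}  coherence = {coherence(d):.2f}")
```

Equal inputs keep the cluster fully locked (coherence near 1), while large detunings exceed the locking range and the time-averaged coherence falls off.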

Calculating the Euclidean distance between patches of an image and a fixed reference vector, and substituting each pixel value with this distance, is equivalent to a Gabor filtering operation.^{78} For example, using a cluster of 25 coupled STOs and a fixed vector corresponding to a 45° line, one can filter for this 45° line in 5 × 5 image patches of a larger (say, 256 × 256) image. The result of such a filtering operation, using the full circuit model of the STO cluster, is shown in Fig. 5(c).

The implementation of the oscillator network using STOs has been studied extensively via numerical simulations, and the network has been built and experimentally characterized.^{122–124} The interconnection of the STOs, applying analog inputs and picking up STO signals, required a relatively large amount of conventional electronics. More “implementation friendly” spin-wave interconnections could not be used in this case since they cannot provide equal all-to-all coupling between more than nearest-neighbor STOs. While the realized circuit is much more efficient than a comparable digital solution, the expected performance improvement due to the use of nanodevices was significantly lowered by the large amount of required conventional electronics. The D/A and A/D converters that interface the STO cluster to digital circuitry and the amplifiers required to pick up STO signals account for the vast majority of the consumed power.

A very similar Euclidean-distance calculating device has been realized using relaxation oscillators as well.^{125} The interconnections in this system were done entirely in the electrical domain using passive (resistive or capacitive) couplings.

The Euclidean-distance calculation is a relatively simple operation and may benefit from a special-purpose hardware accelerator if used extensively. Trade-offs need to be considered when using analog hardware for a given purpose in an otherwise fully digital computing environment. Machine intelligence and AI applications^{20,21,126–129} offer many possibilities for special-purpose hardware accelerators. Analog circuits are being developed for such purposes,^{22} and OBCs promise to play a role in the future.

As described above, the Euclidean distance is measured by the degree of “mutual” oscillator synchronization. As it turns out, one may use oscillators for such a computation without relying on a collective state: in this mode of operation, one detects the degree of oscillator synchronization to an externally injected signal.^{124} This illustrates that a relatively simple operation may be performed without relying on reaching a computational ground state.

### B. Recognition of 1D time sequences

High-speed classification of one-dimensional data (e.g., a time-dependent signal) is also a task of high practical importance. For example, STO-based OBCs could classify radio-frequency signals in the several-tens-of-gigahertz range in real time. A few case studies are known from the literature for STO-based vowel recognition,^{53} which is a closely related problem, albeit one that de-emphasizes the importance of high-frequency processing.^{130,131}

The realization of the interconnection network is a major challenge in STOs, as has already been pointed out repeatedly, and even more so if high-speed connections are needed. One way to circumvent this problem is to use only a single oscillator, without interconnections to nearby STOs, and to employ a time-delayed feedback mechanism, as described in Ref. 53. One may view this scheme as coupled-oscillator computing where, instead of multiple oscillators, time-delayed “copies” of a single oscillator are used. Alternatively, one may look at it as a form of reservoir computing with a single dynamic node. While this scheme has been experimentally demonstrated in Ref. 53, it is not hard to argue that the “heavy lifting” in the computing process is done by the electronics that generates the time delays and pre- and postprocesses the STO signals.
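
The single-oscillator, time-delay idea can be sketched in software (a generic delay-line reservoir toy with simplified parallel updates, not the STO experiment of Ref. 53): one nonlinear node is time-multiplexed into many "virtual nodes," and a trained linear readout of those virtual states performs a simple memory task, recalling the previous input:

```python
import numpy as np

rng = np.random.default_rng(0)
n_virtual, T = 50, 2000
mask = rng.uniform(-1.0, 1.0, n_virtual)   # fixed input mask over virtual nodes
u = rng.uniform(-0.8, 0.8, T)              # random input sequence
x = np.zeros(n_virtual)                    # virtual-node states (delay line)
states = np.zeros((T, n_virtual))

for step in range(T):
    # one pass of the masked input through the time-multiplexed nonlinear node
    x = np.tanh(0.5 * x + mask * u[step])
    states[step] = x

# Train a linear readout (least squares) to recall the previous input u[t-1]
target = np.concatenate([[0.0], u[:-1]])
W_out = np.linalg.lstsq(states[1:], target[1:], rcond=None)[0]
pred = states[1:] @ W_out
nmse = np.mean((pred - target[1:]) ** 2) / np.var(target[1:])
print(f"NMSE on one-step memory task: {nmse:.3f}")
```

The fading memory of the node (the `0.5 * x` term) and the diversity of the mask gains are what allow a purely linear readout to extract information about past inputs.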

Another strategy to deal with the interconnection bottleneck in coupled STO systems is to create dynamic interconnections, as per the description in Sec. V B. Four STOs can be coupled by simply interconnecting them in series, but this “wiring” does not allow for much functionality if used statically. However, by injecting external signals, as described in Ref. 112, one can create complex dynamic (virtual) interconnections in a four-STO network. The network weights can be set by an offline training algorithm. One may rightfully argue that there is, again, no free lunch: hardware complexity can be avoided at the cost of complex signal generators that create the dynamic connections. The full potential of this scheme could be realized if these signals can be generated by on-board STOs as well.

### C. Pattern generation and gait control

It has been shown that OBCs may be used as central pattern generators (CPGs). The study of CPGs provides a strong link to neurobiology and was an early and important motivation for the study of OBCs.^{132} A detailed description of vanadium-oxide oscillators as CPGs is given in Ref. 133, including their fabrication and circuit models, and recent experimental results can be found in Ref. 134. Positive and negative couplings between the oscillators generate a wide range of different burst sequences, which, for example, may correspond to limb movements at different gaits. The interconnections between the relaxation oscillators are realized by resistances and capacitances, as described above. The circuit design can be challenging as resistive couplings have much narrower locking ranges than the capacitive ones, and they are also much more sensitive to circuit parameters. A wide variety of gaits can be generated in this fashion with a simple, few-oscillator circuit.
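
A CPG can be sketched with four phase oscillators in a ring (an illustrative phase model of our own, not the VO2 circuit of Ref. 133): a programmed coupling phase lag makes each oscillator lock a fixed angle behind its neighbor, so a lag of pi/2 yields a walk-like sequence of four evenly staggered bursts, while a lag of pi yields a trot-like alternation:

```python
import numpy as np

def gait_lags(phi, n=4, K=2.0, dt=0.01, steps=6000, seed=0):
    """Ring of phase oscillators; each locks phi radians behind its neighbor.
    Returns the settled phase lag along each ring edge."""
    theta = np.random.default_rng(seed).uniform(0, 2 * np.pi, n)
    for _ in range(steps):
        drive = np.full(n, 2 * np.pi)  # common natural frequency [rad/s]
        for i in range(n):
            # unidirectional ring coupling with a programmed phase lag phi
            drive[i] += K * np.sin(theta[(i - 1) % n] - theta[i] - phi)
        theta += dt * drive
    return np.mod(theta - np.roll(theta, -1), 2 * np.pi)

print("walk lags:", np.round(gait_lags(np.pi / 2), 2))   # each lag ~ pi/2
print("trot lags:", np.round(gait_lags(np.pi), 2))       # each lag ~ pi
```

Note that the ring only admits a uniform lag if n times the lag is a multiple of 2*pi, which both choices above satisfy for n = 4.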

### D. Oscillators for combinatorial problems

Combinatorial problems can often be restated as energy-minimization (optimization) problems. In fact, Hopfield-type networks were successfully used to solve combinatorial problems.^{135} In general, combinatorial optimization problems can be reformulated for oscillatory devices.

An important example of this class of problems is the well-known graph-coloring problem. To our knowledge, Wu^{136} and Wu *et al.*^{137} were the first to propose oscillators for graph coloring. The basic idea is to identify oscillator phases with colors and to use the fact that resistive coupling tends to “pull” phases (colors) together, while capacitive coupling tends to “push” phases (colors) away from each other. It is not hard to see that the dynamics of in-phase and out-of-phase synchronizing oscillators then maps onto the solution of a graph-coloring problem, where the oscillator interconnections directly correspond to the graph edges. A more general approach is shown in Ref. 23, together with an implementation using vanadium-oxide-based relaxation oscillators. In the approach of Refs. 23 and 82, the graph-coloring problem is first reformulated as finding an ordering of the nodes (on a phase circle) such that same-colored nodes appear close together in the ordering but are not connected by a graph edge. Unlike-colored nodes may share a graph edge, but they are ordered to have different phases (colors). In this fashion, a combinatorial problem is reformulated into an energy-minimization (optimization) task that can be implemented by a network of coupled oscillators.
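
The push/pull intuition can be sketched with a small phase model (illustrative only; not the circuit implementation of Refs. 23 and 82): oscillators that share a graph edge repel each other's phases, and on a 2-colorable tree graph the phases settle into two antiphase clusters that read out directly as a valid coloring:

```python
import numpy as np

# A small tree graph (2-colorable): node 0 - {1, 2}, node 1 - {3, 4}, node 2 - {5}
edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5)]
N = 6

rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, N)   # random initial phases
dt, K = 0.05, 1.0
for _ in range(5000):
    d = np.zeros(N)
    for i, j in edges:
        # repulsive coupling: nodes sharing an edge push their phases apart
        d[i] -= K * np.sin(theta[j] - theta[i])
        d[j] -= K * np.sin(theta[i] - theta[j])
    theta += dt * d

# Read out a color from the phase of each node relative to node 0
colors = (np.cos(theta - theta[0]) < 0).astype(int)
valid = all(colors[i] != colors[j] for i, j in edges)
print("colors:", colors.tolist(), "valid coloring:", valid)
```

On graphs with cycles the same dynamics can get trapped in twisted local minima, which is one reason the cited works use more elaborate phase-ordering formulations for general graphs.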

Many combinatorial problems are computationally hard (or even NP-hard or NP-complete). Efficient solutions of such problems (even efficient approximations) are considered by many to be the Holy Grail of nonconventional computing algorithm research, and OBC has shown promise as one such approach.

## VII. NEURAL NETWORKS FOR QUANTUM-HARD PROBLEMS

Computationally hard problems (such as NP-hard problems)^{138,139} are usually discussed in the context of quantum information theory (quantum computation and quantum simulators). A quantum system in a fully entangled state can be described by an exponentially growing number of superposition coefficients, i.e., the time evolution of *N* coupled two-state systems generally requires $2^N$ internal variables. The promise of a quantum computer or quantum simulator is that it can operate simultaneously on exponentially large ($2^N$-sized) data, and so a relatively small piece of hardware could, in principle, process vast problems. Of course, this is also the challenge of a quantum computer, as the exponentially large number of internal variables needs to be controlled.^{140} Currently, large-scale industrial and academic efforts are ongoing to experimentally realize a practical quantum computer.^{141}

It is widely believed that no classical system, only quantum processors, could achieve the feat of storing/processing an amount of information that grows exponentially with the size of the system. Contrary to this common belief, it is quite possible that there is no fundamental difference between quantum and classical systems in this respect. This can be argued on three grounds: (1) the widely accepted relations between complexity classes (P, NP) in mathematics are actually unproven, (2) the information content of collective states (excitations) of a classical system may grow exponentially with the system size, and (3) eventually, both classical and quantum systems have to operate in noisy environments and will likely be limited by the same type of physical constraint, i.e., achieving control over an exponentially large number of variables.^{140} One should appreciate the significance and difficulty of NP-hard problems (for an excellent insight, see Ref. 142), and it is quite possible that NP-hard problems, in their general form, are beyond the reach of both classical and quantum computers.

In light of the above, it makes sense to think about oscillator-based accelerators for NP-hard problems, i.e., to design OBCs that compete with “realizable” quantum computers. OBCs may be able to efficiently solve, or approximate, such problems.

OBC for the graph-coloring problem, described in Sec. VI D, is perhaps the most mature idea here, but there are many other approaches. Memcomputing is another means of using collective states for exponentially hard problems,^{143–149} and it can be implemented in various physical systems, among them oscillators.^{143} The underlying idea is that the number of collective excitation modes in a physical system grows quickly with the system size, possibly enabling the solution of hard problems. The arguments of Ref. 143 are especially important, as they study the feasibility of an exponentially growing state space in the presence of noise.

The Ising problem is another well-known hard benchmark in computational physics and is NP-complete. Sophisticated (and room-sized) optical devices are being developed^{150} to handle the Ising problem, and it is an intriguing question how far one could go with simple oscillatory circuits, as described in Ref. 151.
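The core mechanism of such oscillator-based Ising solvers can be sketched in a few lines. The following is a toy gradient-flow model under our own assumptions (function name, coupling form, and the second-harmonic "pinning" strength are illustrative, not the circuit of Ref. 151): a second-harmonic injection term binarizes the phases toward 0 or pi, and the sign of cos(theta) is read out as an Ising spin.

```python
import numpy as np

def ising_oscillator(J, steps=4000, dt=0.02, k_sync=0.5, seed=0):
    """Toy oscillator Ising machine: Kuramoto-type phase dynamics plus a
    second-harmonic injection term that pins phases toward 0 or pi.
    The dynamics descend an energy whose minima encode low-energy Ising
    states of coupling matrix J; spins are the sign of cos(theta)."""
    n = J.shape[0]
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    for _ in range(steps):
        diff = theta[:, None] - theta[None, :]
        # gradient flow: coupling term plus phase-binarizing injection
        drift = -(J * np.sin(diff)).sum(axis=1) - k_sync * np.sin(2.0 * theta)
        theta = (theta + dt * drift) % (2.0 * np.pi)
    return np.where(np.cos(theta) > 0.0, 1, -1)

# Antiferromagnetic pair (J12 < 0): the minimum has opposite spins.
J = np.array([[0.0, -1.0], [-1.0, 0.0]])
spins = ising_oscillator(J)
assert spins[0] != spins[1]
```

The same relaxation runs unchanged for larger coupling matrices; for nontrivial instances, multiple random restarts would be needed, since the flow can settle in local minima of the Ising energy.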

It must be noted that OBCs are not the only systems proposed to handle NP-hard problems: complex, nonoscillatory analog systems,^{152–155} memristors,^{145} and even the dynamic behavior of digital systems^{156} have also been suggested. The prospect that complex analog dynamical systems may attack NP-hard problems could become the most important argument for their research, and if this is proven to be true, all issues with analog/digital interfaces would become a nonissue.

## VIII. CONCLUSIONS AND OUTLOOK

In this paper, we gave a physics-oriented overview of the flourishing research field of oscillator-based computing (OBC). Much work in this field is motivated by biological analogies, i.e., neuromorphic computing. Moreover, much work builds on novel oscillators realized in emerging-technology hardware, such as spin-oscillator-based computing systems. Several case studies demonstrated the utility and promise of OBC for certain types of problems.

Still, it seems that a fundamental "why" question remains unanswered, namely, why one would use oscillatory components instead of other nonlinear circuit building blocks. It is very much possible to realize analog, neuromorphic, non-Boolean computing devices from nonoscillatory nonlinear elements: the much-researched Cellular Nonlinear Networks (CNNs)^{157} do exactly that, and memristor-based, nonoscillatory neuromorphic architectures are a hot topic nowadays.^{158} We mentioned above that oscillators are attractive due to the vast number of possible implementations, but oscillators have obvious disadvantages as well. For example, they must run continuously, dissipating power all the time, as it is energetically costly to power them up and down. This means that there must be strong benefits to offset such disadvantages.

We conclude with a few, somewhat hand-waving arguments to corroborate the usefulness of OBCs. The first of these is that OBCs use narrow-bandwidth device-to-device communication channels (i.e., oscillators running at a given frequency), which may allow efficient intracircuit communication in the presence of noise.^{80,101,114,117} Noise-tolerant communication allows low voltage levels to be used, and consequently, ultimately low-power operation could be accomplished. Also, oscillators can perform frequency-division multiplexing in hardware and so possibly realize highly interconnected networks with a limited number of physical interconnections. A second possible argument is that coupled oscillators are ubiquitous in the physical world, so one may find devices that fit a given computational task very well and that can be wired together directly by physical interactions. A third, very encouraging fact is that interacting oscillators now appear in circuits proposed for the solution of NP-hard problems. It is very much possible that they might steal the show from quantum computing and yield hardware that could handle problems that seem intractable with today's resources.
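The frequency-division-multiplexing argument can be made concrete with a short numerical sketch (the carrier frequencies and amplitudes are arbitrary illustrative choices): two oscillator channels share a single "wire," and each transmitted value is recovered by coherent detection against its own carrier.

```python
import numpy as np

# Two oscillator channels share one physical wire via frequency-division
# multiplexing; each value is recovered by projecting onto its carrier.
t = np.arange(0.0, 1.0, 1e-4)   # 1 s of signal, 10 kHz sampling
a1, a2 = 0.7, 0.3               # channel amplitudes (transmitted values)
f1, f2 = 50.0, 80.0             # carriers: integer cycles over 1 s
wire = a1 * np.sin(2 * np.pi * f1 * t) + a2 * np.sin(2 * np.pi * f2 * t)

# coherent detection: mean of sin^2 is 1/2, cross terms average to zero
r1 = 2.0 * np.mean(wire * np.sin(2 * np.pi * f1 * t))
r2 = 2.0 * np.mean(wire * np.sin(2 * np.pi * f2 * t))
assert abs(r1 - a1) < 1e-2 and abs(r2 - a2) < 1e-2
```

Because the carriers are orthogonal over the integration window, each channel is recovered independently, which is the sense in which one wire can carry several logical interconnections.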

Finding a convincing and somewhat general argument for oscillatory computing systems remains an unsolved challenge. A much older and still unsettled question is whether, in general, digital or analog solutions are superior for neuromorphic (bioinspired) computational tasks,^{159} for which many benchmarks and case studies exist. On the other hand, oscillator-based analog vs level-based analog benchmarks are almost nonexistent. Perhaps the most important challenge for future research in OBCs is to find the perfect match between device and computational problem, i.e., to find the applications and circuit architectures where OBCs significantly outperform level-based computing devices.

*Note added in proof.* After the acceptance of our paper, the following relevant reference was brought to our attention: D. E. Nikonov, P. Kurahashi, J. S. Ayers, H-J. Lee, Y. Fan, and I. A. Young. “A coupled CMOS oscillator array for 8ns and 55pJ inference in convolutional neural networks,” arXiv preprint arXiv:1910.11803 (2019).

## ACKNOWLEDGMENTS

The authors are grateful to George Bourianoff and Dmitri Nikonov for the motivation to join an Intel-led oscillator-based computing project and to Matt Pufall and Trond Ytterdal for excellent technical collaborations. We also acknowledge funding from the DARPA UPSIDE (Unconventional Processing of Signals for Intelligent Data Exploitation) project, from the NSF NEB 2020 (NanoElectronics Beyond 2020) grant, and from the NSF EXCEL (EXtremely Energy Efficient Collective ELectronics) award. G. C. acknowledges the support of the KAP-2018 grant at Pazmany University, supporting his research visit to Notre Dame.
