
What’s under the hood of a quantum computer?

5 March 2021

Many layers lie between everyday users and the delicate, error-prone hardware they manipulate.

When most people sit down at their computers to work, they’re thinking about all the things they need to get done; far from mind is any consideration of how their keystrokes and mouse clicks are translated into logic operations and electrical signals. That separation between hardware and user interface is the product of decades of development. Now quantum computer developers are navigating similar terrain.

The quantum computing stack is everything that lies between a user and the physical qubits. The stack needs to perform essential functions; for instance, it must facilitate user interaction, turn inputs into hardware manipulation, and correct for numerous error sources. (For more about quantum architectures, see the article by Anne Matsuura, Sonika Johri, and Justin Hogaboam, Physics Today, March 2019, page 40.) There’s no one right way to divide those tasks into discrete levels, though, and researchers and technology companies are still pursuing different visions for future quantum architectures.

On page 28 of Physics Today’s March 2021 issue, Harrison Ball, Michael Biercuk, and Michael Hush present the quantum computing stack proposed by Q-CTRL, the quantum technology company founded by Biercuk. The authors explain in detail how the functionality of a quantum firmware layer—one component of a quantum computer—is critical for managing qubit errors. Here we explain what happens in the rest of the layers of a quantum computer.

Qubit hardware

Classical computers store information as bits that each take a value of 0 or 1. Underlying those bits are field-effect transistors that act as switches; each can take a value of either 0 or 1 depending on whether the switch is on or off. At the most basic level, everything a computer does—save information, execute calculations, run programs—is just manipulating the values of those billions of bits with small electrical voltages.

Quantum computers instead rely on qubits that can be in one of two states, ∣0〉 or ∣1〉, or a linear superposition of the two, ∣ψ〉 = α∣0〉 + β∣1〉, in which the squared magnitudes of the coefficients α and β give the probabilities of finding the qubit in each state.

Why is it useful for qubits to exist in a superposition of states? It comes down to how much information you can store in n independent bits compared with the same number of qubits that are linked through entanglement—a phenomenon that cannot be described by classical physics.

Each classical bit requires only one value to describe whether it’s on or off, so n bits represent n binary digits. At first glance it may seem like qubits would have 2n numbers akin to those binary digits because each has two coefficients, α and β. But the advantage can be even bigger than that; describing a quantum state made of n qubits can require up to 2ⁿ coefficients.

Consider, for example, a three-qubit system. Each qubit can be in the state ∣0〉 or ∣1〉, so there are 2³ = 8 possible states the system could be measured in—and eight coefficients whose squared magnitudes give the probability of each outcome. The more qubits in a system, the bigger the informational advantage over classical bits. Taking advantage of that huge computation space is no mean feat, though; writing algorithms that benefit from qubit properties is a challenge because, although computations may manipulate 2ⁿ parameters, they output just n values—the final qubit states. (For more on that, see the section on quantum algorithms below.)
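
To make that bookkeeping concrete, here is a minimal sketch in Python with numpy (the language used by the software kits discussed later in this article). It builds a three-qubit state from single-qubit pieces and confirms that describing it takes 2³ = 8 amplitudes; the particular state chosen is arbitrary and purely illustrative.

    import numpy as np

    # Single-qubit basis states |0> and |1> as two-component amplitude vectors
    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)

    # An equal superposition (|0> + |1>)/sqrt(2) of one qubit
    plus = (ket0 + ket1) / np.sqrt(2)

    # Three qubits are described by a tensor (Kronecker) product of the
    # single-qubit states: 2**3 = 8 complex amplitudes, one per outcome
    state = np.kron(np.kron(plus, plus), plus)
    print(state.shape)           # (8,)
    print(np.abs(state) ** 2)    # probability of each of the eight outcomes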

Whereas classical computing has largely settled on one type of bit hardware, qubits still come in many varieties. Any two-level quantum system—a nuclear spin, a photon’s polarization, or a quantum dot’s spin, to name a few—can be used as a qubit. The usefulness of a particular system, however, depends on things such as how easy the qubits are to manipulate and entangle, how long they remain in desired quantum states, and how prone they are to having their states destroyed by outside noise.

In this ion-trap quantum computer, gold electrodes produce a trap for charged particles. The electrodes are structured to permit microwave and laser-beam access. The entire system is housed in an ultrahigh-vacuum chamber. Credit: Michael J. Biercuk, University of Sydney

One popular example of qubit hardware is the trapped-ion qubit. In those designs, charged particles are confined by electromagnetic traps, and two internal energy levels of each ion’s valence electron serve as the qubit states. Hyperfine transitions in neutral atoms can serve the same function (see the article by David Weiss and Mark Saffman, Physics Today, July 2017, page 44), as can electron spin-flips in quantum dots (see the article by Lieven Vandersypen and Mark Eriksson, Physics Today, August 2019, page 38).

Some of the most well-known quantum computers, including those from IBM and Google, rely on superconducting transmon qubits. Transmons are superconducting islands of charge in which the difference between ∣0〉 and ∣1〉 is the presence of an additional Cooper pair of bound electrons.

Quantum firmware

A refrigeration system houses an IBM Q System One quantum computer. Credit: IBM

Qubits are prone to errors. All sorts of environmental factors—thermal fluctuations, electromagnetic radiation, magnetic fields—can knock a qubit out of its intended state. That degradation of information is known as decoherence and can occur in a fraction of a second. Despite the use of refrigeration to reduce thermal fluctuations, decoherence eventually creeps in and produces hardware errors, like accidentally flipping a qubit’s state from ∣0〉 to ∣1〉. (The commonly used refrigeration systems, like the one shown above from IBM, are what many people picture when they imagine a quantum computer.) The number of operations that can be performed with a qubit is limited by the qubit’s decoherence time. Moreover, every set of qubit hardware has its own unique deviations from ideal performance (see the article by Ian Walmsley and Herschel Rabitz, Physics Today, August 2003, page 43).

But higher levels in the quantum computing stack can’t be expected to account for such system-to-system variation; a programmer needs to be able to request that an operation be performed without knowing about the underlying hardware’s quirks. (Imagine if every computer required personalized software!)

Quantum firmware creates a virtualized version of the qubit hardware for the higher levels of the computing stack. It is focused on all the low-level quantum control tasks that can be used to stabilize the hardware and mitigate errors. For instance, it uses information about the hardware to autonomously define error-resistant versions of the RF or microwave pulses that act on the qubits to execute quantum logic operations.

Although quantum firmware alone doesn’t solve the problem of hardware errors, it is particularly efficient at suppressing slow drifts in hardware parameters such as a qubit’s resonant frequency; those drifts are a dirty secret of quantum computing hardware. That capability makes firmware a strong complement to quantum error correction protocols that are better suited to dealing with stochastic errors.
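
To give a flavor of the sort of low-level control trick such a layer automates, here is a textbook example rather than Q-CTRL’s actual firmware: a Python sketch of a Hahn echo, in which a single π pulse inserted between two waiting periods cancels the phase error caused by a constant, unwanted detuning. The detuning strength and wait time are arbitrary.

    import numpy as np

    # A pi pulse about x: exp(-i*pi*sigma_x/2) = -i*sigma_x
    X_pi = np.array([[0, -1j], [-1j, 0]])

    def drift(delta, tau):
        """Unwanted free evolution exp(-i*delta*tau*sigma_z/2) caused by a
        slowly varying (here, constant) detuning delta acting for time tau."""
        return np.diag([np.exp(-1j * delta * tau / 2),
                        np.exp(+1j * delta * tau / 2)])

    delta, tau = 0.3, 1.0                                  # arbitrary values
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)    # superposition state
    # |+> is an eigenstate of the pi pulse, so any infidelity below comes
    # purely from the detuning, not from the extra pulse itself.

    naive = drift(delta, 2 * tau) @ plus                        # just wait for 2*tau
    echo = drift(delta, tau) @ X_pi @ drift(delta, tau) @ plus  # wait, flip, wait

    print("fidelity without echo:", abs(np.vdot(plus, naive)) ** 2)  # cos^2(0.3), about 0.91
    print("fidelity with echo:   ", abs(np.vdot(plus, echo)) ** 2)   # 1.0: error refocused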

For more on the quantum firmware layer, see the Physics Today article by Ball, Biercuk, and Hush referred to earlier.

Hardware-aware quantum compiler

In classical computers, compilers take higher-level instructions for tasks that need to be completed and translate those instructions into a series of operations that are performed using the underlying hardware. The same thing happens in a quantum computer.

The hardware-aware quantum compiler, also known as a transpiler, is responsible for figuring out how to complete a set of logic operations in a manner that accounts for the physical connections between qubits. Although physical qubits can’t easily be moved, the states of two qubits can be swapped for an effective rearrangement. The transpiler works out how to implement an arbitrary operation between qubits given the hardware constraints, such as which qubits are directly connected to each other. It also decides which qubits to use for each operation—for instance, if a particular qubit is known to be faulty, information might need to be routed around it.
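
Here is a sketch of what that looks like in practice using Qiskit’s transpile function (assuming Qiskit is installed; the three-qubit line layout and gate set are invented for illustration, and the exact output depends on the software version):

    from qiskit import QuantumCircuit, transpile
    from qiskit.transpiler import CouplingMap

    # A two-qubit gate between qubits 0 and 2, which are not directly
    # connected on the assumed linear 0-1-2 layout below
    qc = QuantumCircuit(3)
    qc.h(0)
    qc.cx(0, 2)

    # Hypothetical hardware connectivity: qubit 1 sits between 0 and 2
    line = CouplingMap([[0, 1], [1, 0], [1, 2], [2, 1]])

    mapped = transpile(qc, coupling_map=line,
                       basis_gates=["cx", "rz", "sx", "x"],
                       optimization_level=1)

    # The routed circuit uses extra CX gates to shuttle the interaction
    # through qubit 1; the exact gate counts depend on the Qiskit version
    print(mapped.count_ops())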

In the current era of quantum computing, the hardware-aware compiler is the only compiler. As such, it bears the additional responsibility of reducing the number of quantum logic operations needed to execute an algorithm. Optimizing qubit usage in that way allows a task to be completed as quickly as possible, which is important given the short lives of qubit states.

In the future, when quantum error correction is routinely used, some of this responsibility will be borne by higher-level logical-layer compilation. The lower-level compiler will be tasked with translating logical-qubit operations into their constituent physical-qubit manipulations.

Quantum error correction

Even with quantum firmware, errors inevitably arise from both decoherence and imperfect qubit manipulation. Quantum error correction (QEC) is designed to detect and fix those errors. It works by smearing information across many qubits in a way that protects against individual qubit failures. Each error-correcting group of physical qubits makes up a single logical qubit that can then be used in a quantum circuit. Amazingly, logical qubits can be designed such that even as the underlying qubit states decohere, the logical qubit state persists, in principle indefinitely.

Once a logical qubit is encoded, a complex algorithm is used to identify errors and apply corrections in a way that doesn’t lose the encoded information. (Measuring the qubits directly would destroy their quantum states.) A simple implementation uses redundancy to provide protection: Even if one of the qubits ends up in the wrong state, a majority vote over the group still returns the right answer.
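
The sketch below is a deliberately classical caricature of that redundancy idea: it copies a bit three times, flips each copy with some assumed error probability, and decodes by majority vote. Real QEC instead measures parities of groups of qubits so that the encoded quantum information is never read out directly.

    import numpy as np

    rng = np.random.default_rng(0)
    p = 0.05            # assumed probability that any one physical bit flips
    trials = 100_000

    # Unprotected: a single bit is wrong with probability p
    single_fail = rng.random(trials) < p

    # Three-fold redundancy: encode 0 -> 000, flip each copy independently,
    # then take a majority vote; the vote fails only if two or more copies flip
    flips = rng.random((trials, 3)) < p
    logical_fail = flips.sum(axis=1) >= 2

    print("unprotected error rate:", single_fail.mean())      # about p = 0.05
    print("majority-vote error rate:", logical_fail.mean())   # about 3p^2 = 0.007

With those numbers the voted error rate drops by nearly an order of magnitude, which is the basic payoff that full QEC schemes buy at the cost of many extra qubits.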

Correcting qubit errors with QEC is inherently resource intensive—some current schemes use tens of physical qubits per logical block—and will likely require more qubits than are available in existing devices to provide any real benefit. Accordingly, QEC is more important in the long term than it is for current machines. Quantum firmware aims to reduce the burden on QEC routines by dealing with more predictable noise, thereby improving QEC’s resource efficiency.

Logical-level compilation and circuit optimization

A quantum circuit that produces a Bell state. Two qubits start in pure ∣0〉 states. A Hadamard gate acts on the first qubit and puts it in a superposition of states ∣0〉 and ∣1〉 with an equal probability of finding the qubit in each state. The two-qubit CNOT gate flips the target qubit (⊕) to ∣1〉 only if the control qubit (•) is in state ∣1〉, thereby producing the entangled output state shown. Bell states are used in, for example, quantum cryptography (see the article by Marcos Curty, Koji Azuma, and Hoi-Kwong Lo, Physics Today, March 2021, page 36).

A quantum circuit is a map of the sequential logic gates that are applied to a series of qubits to run an algorithm. A simple example of a circuit that entangles two qubits in a Bell state is shown above.

The initial qubit states are on the left, the final states on the right, and between them a series of gates that indicate the operations performed on each qubit. The qubits represented in the circuit aren’t physical qubits; rather, they’re abstract objects known as logical qubits. One logical qubit may be realized using many interacting physical qubits whose hardware errors are mitigated by QEC.
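
For readers who prefer code to circuit diagrams, the same two-gate circuit can be written in a few lines of Qiskit (a sketch, assuming Qiskit is installed; the syntax varies slightly between versions):

    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    qc = QuantumCircuit(2)
    qc.h(0)       # Hadamard: equal superposition on the first qubit
    qc.cx(0, 1)   # CNOT: entangle the second qubit with the first

    # Amplitudes of 1/sqrt(2) on |00> and |11>: the Bell state in the figure
    print(Statevector.from_instruction(qc))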

A single algorithm can be represented by multiple logically equivalent circuits, and the goal of circuit optimization is to find the one requiring the fewest operations or timesteps. Executing fewer operations enables the algorithm to run faster—an important goal for any quantum computer, whether or not it is using QEC.
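
As a toy example of logical equivalence, the three-gate sequence Hadamard, Z, Hadamard acts on a qubit exactly like a single X gate; a circuit optimizer that recognizes such identities would keep the one-gate version. The quick numpy check below confirms the identity:

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    Z = np.diag([1.0, -1.0])                       # phase-flip gate
    X = np.array([[0.0, 1.0], [1.0, 0.0]])         # bit-flip gate

    # Applying H, then Z, then H is the same operation as a single X gate,
    # so three gates can be replaced with one without changing the algorithm
    print(np.allclose(H @ Z @ H, X))               # True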

Quantum algorithms and applications

Quantum algorithms play the same role as classical algorithms: They provide step-by-step instructions for completing a computational task.

Although a regular algorithm could in principle be run on a quantum computer, a true quantum algorithm takes advantage of the underlying hardware’s quantum nature. For example, manipulating one qubit in a quantum computer affects the entire n-qubit state and each of the 2ⁿ coefficients needed to describe it, effectively doing that many operations in parallel. However, it’s not quite parallel computing. When the final qubit states are measured, each is either a 0 or a 1; the algorithm outputs only n values rather than all 2ⁿ coefficients. (For more on quantum computation, see, for example, the article by Charles Bennett, Physics Today, October 1995, page 24.)
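
The numpy sketch below illustrates that bottleneck for a uniform superposition of three qubits: the state is described by 2³ = 8 amplitudes, but each run of the machine hands back only three classical bits, sampled according to those amplitudes.

    import numpy as np

    rng = np.random.default_rng(1)

    # A uniform superposition of three qubits needs 2**3 = 8 amplitudes...
    amplitudes = np.ones(8) / np.sqrt(8)
    probabilities = np.abs(amplitudes) ** 2

    # ...but each run of the machine returns only n = 3 classical bits,
    # drawn according to those probabilities
    outcomes = [format(i, "03b") for i in range(8)]
    print(rng.choice(outcomes, size=5, p=probabilities))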

Given that measurement limitation, truly taking advantage of a quantum computer’s huge computational space is tricky. The entire field of quantum algorithm development is devoted to figuring out how to efficiently leverage that resource. Some problems, like factoring large numbers into primes, are known to be sped up by quantum algorithms. That speedup is reflected in the number of steps the algorithm must go through to arrive at an answer. Whereas the number of steps the best known classical algorithms need to factor a large number grows faster than any polynomial in the size of the number, the number of steps for a quantum computer scales only polynomially. Quantum Fourier transforms are also significantly faster than their classical counterparts. Other tasks, such as playing chess, garner little to no benefit from quantum algorithms because the number of steps needed would still grow too quickly with the complexity of the problem.

A variational quantum algorithm is a compromise between classical and quantum ones. It breaks up a computation into a small quantum component and a larger classical optimization problem and therefore requires a much smaller quantum computer than, say, the quantum Fourier transform. Such algorithms are promising for solving problems in finance, logistics, and chemistry.
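
The sketch below shows that division of labor in a heavily simplified form: a one-parameter "circuit" acting on a single qubit is simulated classically, and SciPy’s off-the-shelf optimizer tunes the parameter to minimize the energy of a made-up Hamiltonian. In a real variational algorithm, the energy evaluation would run on quantum hardware.

    import numpy as np
    from scipy.optimize import minimize

    # A made-up one-qubit "Hamiltonian": H = sigma_z, whose ground state is |1>
    H = np.diag([1.0, -1.0])

    def ansatz(theta):
        """The small 'quantum' piece: the state Ry(theta)|0>, here simulated."""
        return np.array([np.cos(theta / 2), np.sin(theta / 2)])

    def energy(params):
        """Expectation value <psi|H|psi> that the quantum processor would report."""
        psi = ansatz(params[0])
        return float(psi @ H @ psi)          # equals cos(theta)

    # The larger classical piece: an ordinary optimizer tunes the circuit parameter
    result = minimize(energy, x0=[0.1], method="COBYLA")
    print("optimal angle:", result.x[0])     # close to pi (up to sign)
    print("minimum energy:", result.fun)     # close to -1, the ground-state energy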

User interface, QAAS, and operating system

Most people who want to use quantum computers aren’t going to build or even buy one—at least not anytime soon. To facilitate access to the limited existing quantum computing resources, companies have put together cloud-based infrastructures that allow remote operation. As in a classical computer, the highest level of the quantum computing stack provides the interface that users interact with.

Amazon Braket, Microsoft Azure Quantum, and Rigetti Quantum Cloud Services are examples of quantum-as-a-service (QAAS) offerings. However, those companies aren’t necessarily providing access to their own quantum computers; rather, they connect users and computers. For example, Amazon Braket can connect users to resources from D-Wave, Rigetti, and IonQ. That approach makes quantum computers similar to other managed, cloud-based computational resources, such as graphics processing units.

The above services can be used to write code using high-level programming languages. The resulting algorithms probably wouldn’t look particularly exotic to someone with programming experience. For example, the open-source software development kits Ocean (from D-Wave), Qiskit (from IBM), and Forest (from Rigetti) support the programming language Python. Languages specifically designed for quantum computing include Quantum Computation Language (QCL), which resembles C, and Q Language, which works as an extension of C++. The code defines a sequence of operations that constitute a logical algorithm.

A short Qiskit algorithm, akin to a “Hello, World!” program, that initializes one qubit in the state ∣1〉. Credit: D. Koch, L. Wessing, P. M. Alsing, http://arxiv.org/abs/1903.04359
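
The figure’s exact listing isn’t reproduced here, but a minimal Qiskit program to the same effect might look like the following sketch (API details vary between Qiskit versions):

    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    qc = QuantumCircuit(1)
    qc.x(0)   # flip the single qubit from its default |0> into |1>

    print(Statevector.from_instruction(qc))   # amplitudes (0, 1), i.e. the state |1>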

Updated 8 March 2021: The fourth paragraph in the “Qubit hardware” section has been updated to address possible confusion over the advantage of using qubits over classical bits.
