Most physicists are taught that quantum physics has two parts: coherent wave mechanics, in which quantum states evolve according to the reversible Schrödinger wave equation, and the measurement process, in which quantum states irreversibly collapse into a particular state given by the details of the measurement. So when a quantum superposition state such as a|0〉 + b|1〉 is measured with a device that discriminates perfectly between |0〉 and |1〉, the system evolves to the definite state |0〉 or |1〉 with respective probabilities |a|² and |b|². Of course, when such an ideal measurement is repeated on the surviving state, the same definite state will appear again and again. Many of us, starting with Albert Einstein, do not like the apparent disconnect between those two rules and may not even subscribe to that conventional “Copenhagen interpretation” of quantum mechanics. It is nevertheless useful because it correctly predicts the results of all experiments performed to date, even though it gives the appearance that the observer necessarily affects the system.
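For readers who like to see those two rules in action, here is a minimal numerical sketch of the Born rule and of a repeated ideal measurement; the amplitudes, the random seed, and the choice of Python with NumPy are illustrative assumptions, not anything prescribed by the discussion above.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Arbitrary normalized amplitudes for the example state a|0> + b|1>.
a, b = 0.6, 0.8
born_probs = [abs(a)**2, abs(b)**2]            # |a|^2 and |b|^2

# First ideal measurement: the state collapses to |0> or |1> with those probabilities.
outcome = rng.choice([0, 1], p=born_probs)

# Repeating the ideal measurement on the surviving, collapsed state
# returns the same definite result every time.
collapsed_probs = [1.0, 0.0] if outcome == 0 else [0.0, 1.0]
repeats = [rng.choice([0, 1], p=collapsed_probs) for _ in range(10)]
print(outcome, repeats)
```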

In the past two decades, the advent of quantum information science and the ability of experiments to control quantum systems such as single atoms have provided the happy opportunity to once again ponder the disconnect between the fundamental quantum concepts of coherent evolution and measurement. Quantum systems can now be measured with nearly perfect efficiency, meaning that the observed distribution of a measurement is given almost exactly by the probabilities of the underlying superposition, as prescribed by the conventional Copenhagen rules that we are taught.

However, a new term, the quantum nondemolition (QND) measurement, has crept into the literature over the past few decades. Also known as “quantum back-action-evading measurement,” it’s a needlessly confusing term that seems to highlight the mundane aspect of a perfect quantum measurement: If one already knows that the system is in a particular eigenstate of the measuring device, then, obviously, a measurement on the system will produce that eigenstate and leave the system intact. Zero information is gained from the repeated measurement. On the other hand, when the system is not in an eigenstate of the measuring device, the quantum state can be thought of as collapsing to one of those eigenstates. In that case, information is gained from the system, and the QND measurement most certainly demolishes the original state. The concept of QND measurement adds nothing to the usual rules of quantum measurement, regardless of interpretation. Worse, the term confounds students and newcomers who may otherwise be comfortable with the prescriptions of quantum physics.
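The two cases described above can be made concrete with a short sketch in the same spirit as the earlier example; the projectors and states below are generic textbook objects, not anything specific to the QND literature.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
projectors = [np.outer(ket0, ket0), np.outer(ket1, ket1)]   # measure in the {|0>, |1>} basis

def projective_measure(psi):
    """Ideal projective measurement: returns (outcome, post-measurement state)."""
    probs = [float(np.vdot(psi, P @ psi).real) for P in projectors]
    k = rng.choice([0, 1], p=probs)
    post = projectors[k] @ psi / np.sqrt(probs[k])
    return k, post

# System already in an eigenstate of the measurement: the state is left intact,
# and no information is gained.
print(projective_measure(ket1))                  # always (1, |1>)

# System in a superposition: information is gained, and the original state is
# replaced by one of the eigenstates.
psi = (ket0 + ket1) / np.sqrt(2)
print(projective_measure(psi))                   # (0, |0>) or (1, |1>), each half the time
```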

Not all measurements are perfect, of course, and some people may wish to use the QND nomenclature to characterize the efficiency of a measuring device. However, in the spirit of quantum decoherence theory (see, for example, Wojciech Zurek’s article in PHYSICS TODAY, October 1991, page 36), we can still think of any measurement as a QND measurement plus a possible interaction with a reservoir that can either mask the information obtained from the system or uncontrollably change the quantum state. The interaction with the reservoir is what can make quantum problems difficult, and it is there that we confront the conceptual difficulties of quantum mechanics and the quantum–classical divide.
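One crude way to picture that decomposition, offered only as a sketch under stated assumptions (the readout-error probability below is an invented parameter), is an ideal projection followed by a noisy classical record of the result:

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def ideal_projection(p1):
    """The QND part: the state collapses to |1> with probability p1, otherwise to |0>."""
    return int(rng.random() < p1)

def noisy_record(true_outcome, flip_prob):
    """The reservoir part: the classical record masks the information with probability flip_prob."""
    return true_outcome ^ int(rng.random() < flip_prob)

# The system still collapses exactly as in an ideal measurement...
collapsed = ideal_projection(p1=0.5)
# ...but the record we actually keep may not reflect that collapse faithfully.
recorded = noisy_record(collapsed, flip_prob=0.1)
print(collapsed, recorded)
```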

As a common example of an imperfect measurement, consider photodetection. When a single photon strikes a photomultiplier tube and generates a pulse of photocurrent, the photon is lost forever. Or is it? The photocurrent interacts with a macroscopic system, a bulk conductor, and we can measure the resulting voltage across the conductor. Sure, the photon has disappeared, but if our detector indicates that we had one photon, we can always create another and get the same answer again and again, exactly like a QND measurement. That approach works even if we begin with a coherent superposition of zero and one photon, which is not an eigenstate of our measurement device. After the first measurement collapses the state into |0〉 or |1〉, we know which outcome occurred and can prepare zero or one photon accordingly forever after. A real photodetector is not 100% efficient, of course: a single photon striking the detector could register nothing, or the noisy detector could fire even in the absence of photons. Again we can think of this as a QND measurement in which we have lost information to the environment. In practice, we trace over the environment and then must describe the system in terms of a density matrix containing the probabilities of various outcomes.
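A simplified model of such an imperfect detector, with invented numbers for the efficiency and dark-count probability, shows how tracing over the unrecorded environment leaves a density matrix of outcome probabilities rather than a pure state:

```python
import numpy as np

# Photon-number qubit a|0> + b|1>; the amplitudes, detection efficiency, and
# dark-count probability below are invented numbers for illustration.
a2, b2 = 0.3, 0.7            # |a|^2 and |b|^2
eta, dark = 0.8, 0.01        # efficiency and dark-count probability

# Probabilities of the two detector records in this simplified model:
p_click = eta * b2 + dark * a2
p_noclick = 1.0 - p_click

# The absorption process correlates the photon number with the environment.
# Tracing over that unrecorded environment leaves diagonal (classical) mixtures
# describing the photon number that arrived, conditioned on each record.
rho_given_click = np.diag([dark * a2, eta * b2]) / p_click
rho_given_noclick = np.diag([(1 - dark) * a2, (1 - eta) * b2]) / p_noclick

print(round(p_click, 3))               # 0.563
print(np.round(rho_given_noclick, 3))  # mostly vacuum, with some weight on a missed photon
```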

In every case, the concept of QND measurement is confusing and unnecessary. It wasn’t needed in the formative years of quantum mechanics, and now, with quantum information theory, entanglement, and decoherence theory, we still do not need it. Marvelous advances in many experimental quantum systems, such as individual atoms and superconducting circuits, are allowing quantum measurements to be performed almost exactly as described in the textbooks from which most of us learned. Why not demolish the term “QND”?