Zurek replies: Repeatability is a textbook postulate—not mine, as the comment by Alexia Auffèves and Philippe Grangier suggests. However, I like the idea of a “context,” provided its origin is quantum. Using an ab initio classical apparatus (made out of quantum atoms!) to supply a context is inconsistent. The Copenhagen interpretation used it as an ingenious and successful ad hoc fix, but it cannot be a fundamental solution. The inability to delineate a quantum–classical boundary makes matters worse. Textbooks adopt such a “shut up and calculate” ploy to paper over the inconsistency of their axioms.
Quantum Darwinism and decoherence favor quasi-classical pointer states, accessible indirectly to many observers via the imprints they deposit in the environment. Pointer states in turn depend on the quantum context provided by the environment. They emerge from a quantum substrate and supply context for the quantum states of microsystems.
Quantum theory is universally valid. Quantum Darwinism and decoherence explain why macrosystems behave classically: Decoherence limits the validity of the quantum principle of superposition. Preferred states are a compromise between amplification, which calls for many copies of the original, and the no-cloning theorem. An unknown state can’t be cloned, but decoherence can amplify pointer states and disseminate information about them throughout the environment.
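To sketch the point in generic notation (the states and labels below are illustrative, not tied to any particular model): a decoherence-type interaction can imprint a pointer state on N environment subsystems,
\[ |\!\uparrow\rangle\,|e_0\rangle^{\otimes N} \to |\!\uparrow\rangle\,|e_\uparrow\rangle^{\otimes N}, \qquad |\!\downarrow\rangle\,|e_0\rangle^{\otimes N} \to |\!\downarrow\rangle\,|e_\downarrow\rangle^{\otimes N}, \]
whereas an unknown superposition is not cloned but becomes a single branching state,
\[ \bigl(a|\!\uparrow\rangle + b|\!\downarrow\rangle\bigr)\,|e_0\rangle^{\otimes N} \to a\,|\!\uparrow\rangle|e_\uparrow\rangle^{\otimes N} + b\,|\!\downarrow\rangle|e_\downarrow\rangle^{\otimes N}. \]
Records of the pointer states proliferate, but the amplitudes a and b are never duplicated, so the no-cloning theorem is respected.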
Observers access only fragments of the environment, so they extract only classical information. Quantum coherence, as explained in my article, is out of reach. Observers will agree about states communicated by environment fragments, but only data about pointer states can be shared.1 Objective existence arises from epiontic—that is, epistemic and ontological—quantum states via the proliferation of information. Robust, quasi-classical branches and the elusive information about them both emerge via decoherence-driven amplification.
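In the standard quantum Darwinism setting this can be made quantitative (again schematically): a fragment \(\mathcal{F}\) of the environment that has recorded the pointer observable of a system \(\mathcal{S}\) yields mutual information
\[ I(\mathcal{S}\!:\!\mathcal{F}) = H_{\mathcal{S}} + H_{\mathcal{F}} - H_{\mathcal{S}\mathcal{F}} \approx H_{\mathcal{S}}, \]
that is, essentially the classical information about the pointer states. Many disjoint fragments reach that plateau, so many observers can acquire the same pointer data independently, while the remaining, genuinely quantum information stays locked in global correlations that only capturing nearly the whole environment could reveal.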
Deducing the measurement postulates—Born’s rule and hermiticity of observables—from a simple, self-consistent credo is an advance. The assumptions that states inhabit Hilbert space and evolve unitarily should be judged by what follows from them, not by an intuitive appeal to classical prejudices. After all, counterintuitive assumptions—for example, that the speed of light is independent of an observer’s motion—have led to deep and valid consequences.
The comment by Ruth Kastner states that phase randomness is key for decoherence and quantum Darwinism and that it is unlikely. Both are incorrect: Random phases between states are typical when phases can be defined. Moreover, random phases aren’t essential for decoherence; an outflow of information suffices. Indeed, standard models—for example, of quantum Brownian motion—often employ environments in equilibrium, for which phases are undefined and the question of their randomness is ill-posed. An initial correlation with the environment complicates calculations,2 but it does not eliminate decoherence. As difficulties encountered by quantum computing research show, decoherence is generic and hard to avoid.
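A schematic two-state example shows why the outflow of information, rather than any assumption about phases, does the work: when the system becomes correlated with its environment,
\[ \bigl(a|\!\uparrow\rangle + b|\!\downarrow\rangle\bigr)\,|\varepsilon_0\rangle \to a\,|\!\uparrow\rangle|\varepsilon_\uparrow\rangle + b\,|\!\downarrow\rangle|\varepsilon_\downarrow\rangle, \]
the reduced density matrix of the system,
\[ \rho_{\mathcal{S}} = \mathrm{Tr}_{\mathcal{E}}\,|\Psi\rangle\langle\Psi| = |a|^2\,|\!\uparrow\rangle\langle\uparrow\!| + |b|^2\,|\!\downarrow\rangle\langle\downarrow\!| + \bigl(a b^{*}\,\langle\varepsilon_\downarrow|\varepsilon_\uparrow\rangle\,|\!\uparrow\rangle\langle\downarrow\!| + \text{h.c.}\bigr), \]
loses its off-diagonal terms as soon as the environmental records become distinguishable, \(\langle\varepsilon_\downarrow|\varepsilon_\uparrow\rangle \to 0\), whether or not any phases were random to begin with.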
Quantum Darwinism requires more than decoherence; it relies on an environment that can store and communicate information. Solar radiation enables that,3 since the Sun is a localized photon source. Quantum Darwinism is impossible in a completely equilibrated environment, whose subsystems cannot store information.
The question of how to define systems is indeed of interest, and I have noted it before.4 The answer must go beyond decoherence: What constitutes a system depends on interactions that make or break it; molecules, for example, would fall apart if atomic interactions weakened.
There is no reason to complicate matters by making the definition of a system part of the measurement problem. Systems are there from the start—Schrödinger’s cat is obviously distinct from the nucleus that determines the cat’s fate—and their presence alone does not solve the problem. Indeed, the measurement problem disappears absent a system distinct from a measuring apparatus. Quantum evolution of a universe without parts is deterministic. There is no apparatus to record a definite outcome and so no need to talk about measurement outcomes: There is literally nothing to explain. Recognizing the role of the environment—as another system—is then natural in seeking answers.
I am in good company when accused of the H-theorem “mistake.” Still, Ludwig Boltzmann dealt with the evolution of a many-body system that is classical and isolated. In that classical setting, probability was a fault of an observer ignorant of the system’s precise state. Boltzmann’s Stosszahlansatz postulates that the velocities of colliding particles are uncorrelated and is a way of representing that ignorance. Although the Boltzmann equation is useful, calling on the Stosszahlansatz to prove the H-theorem—that is, the second law of thermodynamics—was suspect. Evolution is deterministic and reversible, so while observers’ ignorance may increase, the microscopic entropy of the closed system cannot change. Moreover, the information, and hence the ignorance, of observers is irrelevant to classical dynamics.
As I show in my article, in our quantum world probabilities arise via entanglement. A single photon that escapes with information about the fate of Schrödinger’s cat is enough to decohere the cat’s state. An observer can never catch up with the photon, so reversibility of the equations of motion notwithstanding, information loss can be irreversible for prosaic yet compelling reasons. The resulting entropy production can be accounted for by decoherence.5
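The single-photon example can be spelled out numerically; the following is a minimal sketch (illustrative names, NumPy only), assuming the photon’s two record states are orthogonal:

```python
import numpy as np

# A "cat" qubit entangled with one escaped photon that carries a record of its fate.
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)          # cat amplitudes; any |a|^2 + |b|^2 = 1 works
alive, dead = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ph_alive, ph_dead = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # orthogonal photon records

# Joint state a|alive>|ph_alive> + b|dead>|ph_dead>
psi = a * np.kron(alive, ph_alive) + b * np.kron(dead, ph_dead)
rho = np.outer(psi, psi.conj())                # 4x4 density matrix for cat (x) photon

# Trace out the photon that got away: reshape to (cat, photon, cat, photon) indices
rho_cat = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(rho_cat)
# -> diagonal [|a|^2, |b|^2]; the off-diagonal (coherence) terms vanish because
#    the photon's two record states are perfectly distinguishable.
```

Replacing the photon records with nonorthogonal states would leave partial coherence, consistent with the decoherence factor written out above, and the entropy of the cat’s reduced state is the entropy produced by losing that one photon.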
The whole measurement (or Schrödinger’s cat) setup starts far from equilibrium. Quantum Darwinism and decoherence show why, in such nonequilibrium multipartite settings, pointer states—and not their superpositions—survive and information about them is amplified and recorded in many copies by the environment. Both the nonequilibrium setting and the division of the universe into systems are needed to state the problem. When certain assumptions are needed to state the problem, surely they can be used to solve it.