Albert Einstein famously quipped, “I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.” In Life 3.0: Being Human in the Age of Artificial Intelligence, MIT professor Max Tegmark makes a persuasive case that artificial intelligence (AI) may be the weapon—or the enemy—that reduces us to those sticks and stones, unless we humans and our AI partners act with extreme care.

Tegmark is also the author of Our Mathematical Universe: My Quest for the Ultimate Nature of Reality (2014). He clearly has found a formula for a successful book: a provocative subject, a well-reasoned argument, an easy-to-understand classification scheme (for example, a four-level multiverse in Our Mathematical Universe; three forms of life in Life 3.0), whimsical illustrations, and personal anecdotes that humanize science. Tegmark also throws in something that’s become close to a personal trademark: a vivid fictional opening scene that reads as if it had been ripped from a screenplay.

Tegmark defines life rather broadly, as any process that can process information and retain that information’s complexity through replication. By that definition, a bacterium, which most people would say is alive, clearly qualifies. In contrast, an electronic thermostat processes information but cannot replicate itself, so it does not qualify. Tegmark builds on the idea of life as information processing to construct a three-form schema. Life 1.0, the most basic biological stage, can change neither its hardware nor its software within its lifetime; Life 2.0 cannot design its own hardware but can alter its software, for example, by learning a new language. Tegmark then leads us to a life beyond human: superhuman AI, or Life 3.0.

Tegmark conceives of Life 3.0 as entities that not only can reinvent themselves via software enhancements but also can upgrade their hardware. Although we humans can install artificial prostheses, we cannot (as yet) make duplicates or create improved versions of ourselves at will. As such, we only qualify, at best, as “Life 2.1.”

After dispensing with definitions, Tegmark takes us through the history of computing and AI. The book Life 3.0 is less about the design or implementation of the third generation of life, though such topics are thoroughly discussed, than it is about Life 3.0’s implications. Much has been made of the perils of runaway AI. Elon Musk, for example, called it “potentially more dangerous than nukes,” and Stephen Hawking said it “could spell the end of the human race.”

Tegmark is more optimistic than Hawking or Musk about the future of humans living with (or under) Life 3.0. Still, his is a cautionary tale, and he warns of the potential dangers should AI develop goals that are not aligned with humanity’s. Consider, for example, what would happen if terrorists were to equip autonomous drones with weapons, creating weaponized AI. Life 3.0 urges us to prepare now for an onslaught of superintelligent AI rather than wait and, with luck, learn from our mistakes as humans typically do. Tegmark’s argument is clear: We can afford neither procrastination nor mistakes this time.

Tegmark presents a thorough account of past human–AI interactions that ended in disaster, in part because of code errors, such as the tragic 1979 death of a Ford Motor Company worker who was struck in the head by a robotic arm. But he suggests that future AI will be superresilient, immune to software bugs. And what about competition? What happens when two AIs square off against each other in a battle for scarce resources?

The book is not without a few minor irritations. At first glance, the decimal place in his classification scheme seemed superfluous. Surely Tegmark doesn’t mean there is an infinitude of different levels of life? But, in fact, he does seem to assert such a continuum, with Life 2.1 being humans who are able to learn languages and skills and thus improve their software and also make upgrades to their hardware with prostheses. Life 3.0 will be able to replicate itself—that is, the information it contains. We all know life can sometimes be irrational, but is there really any difference between Life 3.1 and Life 3.1415…?

There’s also a bit too much hagiography of Musk; too many plugs for funding the Future of Life Institute, of which Tegmark is a founder; and too many photos and lists of people discussing AI at after-workshop dinners. Most of those could easily have been jettisoned to reduce the book’s heft and to accentuate other material more essential to Tegmark’s arguments.

Alas, Tegmark offers no solutions for the AI life-forms that plague us today, such as AI chatbots or irritating Microsoft paper clips. But he does offer hope, if we act quickly and intelligently. Will there be a Life 3.1, or even a Life 4.0? It’s hard to know. Perhaps if humanity takes Tegmark’s call to action seriously, there may be more than just hope. There may actually be a chance for a future.

Brian Keating is a professor of physics at the Center for Astrophysics and Space Sciences at the University of California, San Diego. He was elected as a fellow of the American Physical Society in 2016. His new book, Losing the Nobel Prize: A Story of Cosmology, Ambition, and the Perils of Science’s Highest Honor, will be published by W. W. Norton in April 2018.