I recently hiked a snow-covered trail renowned for its lack of cell service. Yet somehow, as I passed from one bend to another, a radio signal leaked into my almost-dead smartphone. Torn out of my reverie in the frigid air and under blue skies, without thinking I began scrolling through my messages. I’d received an urgent work entreaty, so I trudged back to my car and fired up my computer-controlled, hydrocarbon-combusting engine, and then I plugged in my 10-billion-transistor device and let it vigorously shuttle electrons. Only afterward, back on the trail, did I question why on earth a few hundred bytes of data were worth all of this.
It’s no big news that human technology has many of us by the scruff of the neck. Our machines and algorithms serve us, but we serve them too. Social media is duplicitous in just this way: it provides connectivity and opportunity with one hand while draining our attention and resources with the other. You pay for every Facebook post, Instagram story, and tweet with your own neural activity and your investment in hardware and energy.
We keep inventing more such hidden burdens. Crypto enthusiasts expound on the democratic possibilities of decentralized, secure data and currencies built on blockchain technologies. Yet those technologies can be voraciously resource-hungry, and that hunger is inherent to how they work: proof-of-work schemes secure a shared ledger precisely by making participants burn computation. Other dubious inventions, like non-fungible tokens, rely on those same structures, and machine learning and streaming services consume energy resources as well. Some applications are profoundly useful, yet many appear utterly frivolous for a civilization teetering on the brink of planetary disaster brought on by unthinking resource use.
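To make that concrete, here is a minimal proof-of-work loop in Python, a toy sketch rather than any real blockchain’s implementation: the only way to find a qualifying hash is to keep computing hashes, and every extra zero of “difficulty” multiplies the expected work roughly sixteenfold.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 digest of (block_data + nonce)
    begins with `difficulty` hex zeros; expected work scales as 16**difficulty."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each added zero makes the puzzle roughly 16x harder, and the energy cost rises with it.
print(mine("toy block", 4))
```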
Part of the energetic overhead of all those activities originates in the fundamentals of how we handle information. A modern microprocessor features tens of billions of transistors—structures that represent an extreme reduction of local entropy, which takes a lot of work to accomplish. A much-cited study from 2002 introduced the phrase “the 1.7 kilogram microchip,” referring to the approximate mass of hydrocarbon fuel and chemicals then required to fabricate a single DRAM chip weighing a mere 2 grams. Fabrication also required 32 kilograms of water and about 700 grams of elemental gases.1
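For a rough sense of proportion, the arithmetic on those figures (my own back-of-envelope calculation, not a number from the study itself) looks like this:

```python
# Mass ratios implied by the figures above (back-of-envelope, not from the study itself).
chip_g = 2.0                   # finished DRAM chip
fuel_and_chemicals_g = 1700.0  # hydrocarbon fuel and chemicals
water_g = 32000.0              # water used in fabrication
gases_g = 700.0                # elemental gases

print(fuel_and_chemicals_g / chip_g)                        # ~850x the chip's own mass
print((fuel_and_chemicals_g + water_g + gases_g) / chip_g)  # ~17,000x including water and gases
```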
Of course, the actual running of digital computation is getting more efficient over time. Some improvements come from greater miniaturization; others come from a trend toward hardware specialization rather than generalization. The catch is that the tasks we give devices are growing exponentially. Take deep-learning systems: a 2019 study showed that training an all-bells-and-whistles version of the Transformer natural-language-processing model, with more than 210 million parameters, can gulp down an amount of energy equivalent to the emission of more than 284 metric tons of carbon dioxide, about the same as the lifetime emissions of five gasoline automobiles.2
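That comparison is easy to sanity-check if we assume roughly 57 metric tons of carbon dioxide for an average gasoline car over its lifetime, fuel included (the per-car figure is an assumption here, not stated above):

```python
# Back-of-envelope check of the five-car comparison.
training_emissions_tons = 284.0  # CO2 attributed to training the large Transformer model (cited above)
car_lifetime_tons = 57.0         # assumed lifetime CO2 of one average gasoline car, fuel included
print(training_emissions_tons / car_lifetime_tons)  # ~5 cars
```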
An investigation of global data published in 2011 found a two-decade trend of about 60% annual growth in our species’ total computing capacity, outpacing what continues to be a roughly 20–30% annual growth in data-storage capacity.3 It’s unclear which growth drives which, but perhaps sheer necessity, the need to keep processing everything we store, is contributing to computing’s growth. Still, it’s easy to see that a large proportion of our informational world, including reams of mundane financial data, social media posts of lunchtime sandwiches, and promulgations of false information, has questionable importance for the survival of our species. We don’t really know the total semantic quality of the more than 2.5 quintillion bytes of data our civilization generates each day. Consequently, we wind up expending ever more effort to extract whatever benefit it holds.
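Compounded over two decades, those annual rates imply very different growth factors, as a quick sketch shows:

```python
# Growth factors implied by the cited annual rates, compounded over a 20-year span.
years = 20
compute_factor = 1.60 ** years  # ~60% per year -> roughly a 12,000-fold increase
storage_low = 1.20 ** years     # ~20% per year -> roughly 38-fold
storage_high = 1.30 ** years    # ~30% per year -> roughly 190-fold
print(f"computing: ~{compute_factor:,.0f}x; storage: ~{storage_low:.0f}x to ~{storage_high:.0f}x")
```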
One projection suggests that by 2040 computing will require more energy than the world currently produces.4 Simultaneously, the total “anthropogenic mass”—all of the matter embedded in inanimate solid objects made by humans—is estimated to already exceed the total biomass.5
The implications of such ideas are both fascinating and concerning. We know that if the resources demanded by our global civilization are not balanced against their environmental impacts, we’ll suffer. At the same time, the vast, externalized informational world that we generate and sustain—an entity that I have dubbed the “dataome” in my 2021 book The Ascent of Information: Books, Bits, Genes, Machines, and Life’s Unending Algorithm—has helped make us one of the most successful and sophisticated species Earth has ever seen. We’ve engineered an astonishing amplification of biological traits by off-loading memory, communication, and problem-solving to other places, outside of our cells and genes.
Maybe we can innovate our way out of informational meltdown. Some people pin (perhaps unrealistic) hopes on the realization of more generalized quantum computing. But while qubits themselves use little energy to compute, maintaining the conditions they need to operate requires significant power. As of 2015 the hardware of a D-Wave Systems machine consumed about 25 kilowatts, much of it spent keeping the processor refrigerated.6 It’s still unclear how that power demand will scale. But no matter what, the infrastructure required for data storage and retrieval, and its exponential growth, will remain a burden.
Humans may have catalyzed the rise of a dataome and of a world increasingly structured and restructured in service of information, but it’s not obvious that the extraordinary benefits we enjoy will continue to outweigh the burdens. The big question is where that problem takes us. Explanations of biological evolution have benefited from the concept of the selfish gene: a gene spreads not because of any advantage it confers on its carrier but because it is good at getting itself transmitted. The dataome suggests that such selfish, resource-seeking informational forms can spill like a tsunami into other domains and follow thermodynamic imperatives that are indifferent to parochial human needs, dissipating energy until our planet’s contents are once again in equilibrium with the rest of a cold cosmos.