On an autumn afternoon in 1971, a group of scientists—mostly astronomers—sat in an auditorium in the Soviet Union at the first US–Soviet conference on the search for extraterrestrial intelligence, or SETI. The topic of discussion that afternoon was artificial intelligence (AI) and how it might revolutionize or destroy our world.
Some things never change. Today AI has a strong connection to SETI and to astronomy more broadly. AI is used to analyze vast amounts of astronomical data generated by powerful computer simulations and detailed sky surveys. Initiatives such as Breakthrough Listen use AI to analyze hundreds of hours of data from such telescopes as the Green Bank Telescope in West Virginia, with astronomers hoping to find signals that exhibit expected attributes of alien technosignatures. In sum, AI helps astronomers become better at identifying, predicting, and understanding features of our universe—and does it all much faster than humans alone can.1
But the great strides have also brought about great panic. In the past couple of years, we have seen the development and mass deployment of next-generation AI tools, such as ChatGPT. That development has spurred both excitement and concern over AI’s impact on our world. Citing “profound risks to society and humanity,” a group of scientists, policy analysts, and businesspeople authored an open letter proposing a pause on giant AI experiments.2 Six days later Time published an article by AI theorist Eliezer Yudkowsky titled “Pausing AI developments isn’t enough. We need to shut it all down,” which made the bold assertion: “If we go ahead on this everyone will die, including children.”3
In that article Yudkowsky likened developing “superhuman AI” to encountering an advanced alien civilization. Technologists warn about the singularity—a term coined by mathematician and computer scientist Vernor Vinge and popularized by futurist Ray Kurzweil—which denotes a hypothetical future moment when AI will surpass human intelligence. In 2018 Elon Musk warned that AI might become an “immortal dictator from which we can never escape.”4 In other words, AI has prompted a full-blown existential panic.
AI anxiety
Some might be inclined to dismiss that hand-wringing as a form of neo-Luddism. The word “Luddite” refers to a group of 19th-century British textile artisans who were concerned that unskilled mechanized operators were depriving them of their means of livelihood and launched a violent movement in which they attacked and destroyed factories and machinery.
Today the term Luddite is often used to dismiss those with concerns about new technologies as being technophobic and resistant to progress. But the 19th-century Luddites were not mere technophobes. Social historians have shown that rather than simply destroying machines, they also lobbied various local and national authorities for new regulations and labor laws.5 In other words, it was not the technology that they feared but the unfair labor practices that took advantage of technology to disenfranchise laborers.
New technologies frequently spark anxieties, but often not without good reason. Social scientists and policymakers know AI has myriad problems, which are not necessarily a result of the technology itself but, rather, how it is used. The American Civil Liberties Union has tracked the ways in which AI can have harmful social impacts. For instance, AI can perpetuate housing and hiring discrimination through biased algorithms and flawed data sets, leading to unjust denials of housing and job opportunities.
Indeed, AI worried SETI scientists long before the development of ChatGPT. In 1965 Soviet SETI scientist Iosif Shklovsky wrote about possible existential threats facing humanity and cautioned that “profound crises lie in wait for a developing civilization and one of them may well prove fatal,” giving the example of “a crisis precipitated by the creation of artificial intelligent beings.”6
Shklovsky’s concerns about AI were just a drop in the bucket. Tasked with the mission of seeking out and possibly communicating with extraterrestrial civilizations, should they exist, SETI scientists had to think about the big picture: the nature of life, civilization, intelligence, and, critically, what can bring those things to an end.
After all, a cosmic silence—if astronomers never detect signs of extraterrestrial life—might reveal a universal truth about the nature of intelligent life. Perhaps, as Shklovsky warned, extraterrestrial civilizations face too many crises to survive long enough to communicate with others. SETI necessitates the pondering of such possibilities.
Astronomy more broadly, even outside of SETI, carries similar existentialist considerations. Many of the celestial phenomena that astronomers study—black holes, supernovae, and stellar flares, to list just three—can be planet killers. Astronomers have roughly determined the expiration date of our own planet: the time when our sun will expand enough to swallow Earth, or at least get close enough to scorch it into a searing hot rock. Cosmologist Katie Mack famously wrote about what she calls “the end of everything.”7 Physicists and astronomers can make projections not only for the end of us but for the end of the entire universe. It seems natural, then, that astronomers who worked in SETI might be inclined to speculate on how civilizations end.
In doing so, SETI scientists imagined not only dozens of ways in which life might exist in the universe but also dozens of ways it might die. SETI had its finger on the pulse of earthly anxieties, and its preoccupation with Earth’s technologies—especially the harmful ones—ultimately shaped the character of its search.
The Cold War
Given SETI’s existentialist vein, it should come as no surprise that the field was founded during the Cold War. More specifically, it grew out of the field of radio astronomy, which rapidly expanded at the end of World War II, partly as a result of new technologies such as radar. After the war, some radar technicians and operators began careers in astronomy.
But although radio astronomy developed in dozens of countries after the war, SETI was conducted almost exclusively in the US and the Soviet Union, perhaps because the space race prompted both nations to consider what we might find “out there.”
The SETI community generally considers the first search for extraterrestrial civilizations to have been conducted in 1960 by the US astronomer Frank Drake. His search, named Project Ozma, took place at the National Radio Astronomy Observatory in Green Bank, West Virginia, using its new 85-foot telescope. The project got its name from Princess Ozma, a character in L. Frank Baum’s Oz novels (on which the popular film The Wizard of Oz was based).
Drake explained that Oz was “a land far away, difficult to reach, and populated by strange and exotic beings” (reference 8, page xi), perhaps not unlike the lands and creatures he wished to communicate with. In the novels, the fictitious narrator employs wireless radio technology to establish communications with the faraway realm of Oz. Like the books’ narrator, Drake wanted to use radio to speak with exotic worlds “somewhere over the rainbow.”
Project Ozma observed two Sunlike star systems, Tau Ceti and Epsilon Eridani, at a wavelength close to the 21 cm hydrogen line. Drake’s idea was inspired but simple: If there are intelligent extraterrestrials who have developed radio technology just as we have, then we might be able to detect them using radio telescopes on Earth. The results of the project were null, but they led to many subsequent searches by others. A new subfield of astronomy was born.
SETI astronomers held a cosmopolitan perspective, with Drake and others like him predicting that the detection of extraterrestrial intelligence might result in a wave of peace and unity on our own planet. In 1992 he wrote,
I fully expect an alien civilization to bequeath us vast libraries of useful information, to do with as we wish. This “Encyclopedia Galactica” will create the potential for improvements in our lives that we cannot predict. During the Renaissance, rediscovered ancient texts and new knowledge flooded medieval Europe with the light of thought, wonder, creativity, experimentation, and exploration of the natural world. Another, even more stirring Renaissance will be fueled by the wealth of alien scientific, technical, and sociological information that awaits us. (reference 8, page 160)
Other SETI scientists shared that perspective, and they facilitated collaboration between Soviet and American astrophysicists. Even so, the Cold War backdrop presented challenges to the new, optimistic science. In trying to cooperate in their searches, Soviet and American SETI scientists discovered that it was difficult to communicate with each other, let alone extraterrestrials, because of political barriers like travel bans, mail interference, and, perhaps especially, interference from the intelligence community.
Alien intelligence
Drake first developed the idea to search for artificial radio signals during a period when, for the first time in our known history, artificial radio signals pervaded outer space. Spy satellites and spacecraft were growing in number, and radio-astronomy infrastructure was being used for both science and military applications. The same telescope that could detect evidence of extraterrestrial intelligence could also track an intercontinental ballistic missile.9
The tension between those applications created a challenge for SETI. While it promoted international collaboration across the Iron Curtain, it was also entangled with military and governmental interests. Because their goals were often the same—detecting narrowband, artificial signals in space—SETI scientists became adept at developing techniques that were exploited by the intelligence community for deep-space listening.10 Thus, in many ways, SETI embodied the tensions of the 1960s: It imagined a hopeful future in space during a present marked by military conflict and nuclear threats.
That duality—a science rooted in both internationalism and warfare—may be what prompted SETI’s consideration of existential threats. Carl Sagan raised the concern in the book Intelligent Life in the Universe, which he coauthored in 1966 with Shklovsky. They wrote, “Another question of some relevance to our own time, and one whose interest is not restricted to the scientist alone, is this: Do technical civilizations tend to destroy themselves shortly after they become capable of interstellar radio communication?” (reference 11, page 358).
Drake had similar concerns. At a conference in Green Bank in 1961, he revealed the now-famous Drake equation, which was designed to help SETI scientists organize discussions on the number of extant extraterrestrial civilizations in our galaxy. His equation’s final variable L stands for a civilization’s longevity. Drake understood that it was important to calculate not only how many intelligent civilizations might arise in our galaxy but also how long they would survive.
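The equation is simply a product of seven factors, N = R* · fp · ne · fl · fi · fc · L. A minimal numerical sketch (every parameter value below is an illustrative assumption, not a measurement) shows how strongly the estimate hinges on that final factor, L:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimate N, the number of detectable civilizations in our galaxy.

    R_star: average rate of star formation (stars per year)
    f_p:    fraction of stars that host planets
    n_e:    habitable planets per planet-hosting star
    f_l:    fraction of habitable planets on which life arises
    f_i:    fraction of life-bearing planets that evolve intelligence
    f_c:    fraction of intelligent species that emit detectable signals
    L:      longevity of the signal-emitting phase (years)
    """
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Identical illustrative inputs except for longevity L: a thousandfold
# difference in how long civilizations survive changes N a thousandfold.
print(drake(1.0, 0.5, 2.0, 0.5, 0.1, 0.1, 1_000))      # short-lived civilizations
print(drake(1.0, 0.5, 2.0, 0.5, 0.1, 0.1, 1_000_000))  # long-lived civilizations
```

Because the other six factors are fixed at the moment of estimation, L is the one term a civilization can still influence, which is why Drake's final variable carried such existential weight.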
When Drake presented his equation, the nuclear arms race was reaching a head; less than one year later, the Cuban Missile Crisis would bring the world to the brink of disaster. The longevity of intelligent technological civilizations was a pressing question.
The great filter
Despite the assumption that intelligent life could be abundant in the universe, SETI scientists have long lacked any conclusive evidence of a message from an extraterrestrial intelligence. In response to the apparent cosmic silence, in 1950 physicist Enrico Fermi famously asked, “Where are they?” Indeed, the Fermi paradox, as it is now known, refers to the seeming contradiction between the high probability of extraterrestrial life existing in the universe and the lack of evidence for alien civilizations.
Although many SETI scientists attribute that absence to the dearth of comprehensive searches, some theorists began to propose the existence of what they call a “great filter,” which acts as an obstacle preventing intelligent civilizations from establishing contact with one another. Think of the filter as a probability barrier—hurdles that life forms have to face at various points in their development.
The great-filter theory posits that highly unlikely evolutionary transitions must occur for an Earthlike planet to generate an intelligent civilization capable of being detected by our current technology. The great filter can either be behind us—implying that we have already surmounted a highly improbable event that enables our civilization’s development—or ahead of us. In the latter case, it might come in the form of potential disaster, such as self-destruction by our own technology.
Consideration of death by atomic bomb colored many aspects of SETI’s early thinking. At that first US–Soviet conference in 1971, ideas for contacting extraterrestrials were mixed with solutions for the nuclear arms race. James Elliot, an astronomer at Cornell University’s Laboratory for Planetary Studies, presented a paper called “X-Ray pulses for interstellar communication,” a benign title for a radical idea. Elliot proposed that nuclear weapons could serve as an announcement message from Earth when attempting to contact extraterrestrial beings.
If the US and the Soviet Union combined their nuclear arsenals to create a single large explosion far from Earth, he suggested, the emitted x rays could potentially be detected at a significant distance by intelligences on other worlds. In short, nuclear disarmament and extraterrestrial contact could be handled at the same time.
Andrei Sakharov, a prominent physicist, disarmament activist, and key figure in the Soviet thermonuclear project, proposed a different communication system that also leveraged thermonuclear explosions. He suggested placing a series of explosions at various locations in our solar system to make flash lamps that could be used to communicate simple messages, such as prime-number sequences.12
It is no coincidence, of course, that the two scientists proposed nuclear solutions for such communications. The nuclear arms race defined much of the Cold War period, and fear of the bomb loomed in the minds of civilians and scientists alike. During the 1960s and 1970s, the Soviet Union and the US each accumulated alarming arsenals of thousands of nuclear warheads—more than 10 times the number required to render Earth uninhabitable to humans.
Many SETI scientists became avid antinuclear activists. In 1983 Sagan authored an essay titled “Nuclear war and climatic catastrophe: Some policy implications,” published in Foreign Affairs. He argued that unless the US and the Soviet Union halted their arms race, humanity faced a high risk of extinction. The following year, he coauthored a book titled The Cold and the Dark: The World After Nuclear War, in which he popularized the concept of nuclear winter, a dire climate catastrophe that might be caused by nuclear war.
Philip Morrison, one of the participants in the 1971 US–Soviet SETI conference, had held a prominent role in the Manhattan Project and supervised the construction of the atomic bomb that detonated over Nagasaki, Japan. Following his firsthand observation of the catastrophic aftermath as a member of the Manhattan Project’s survey team, Morrison became a staunch advocate against nuclear weapons and helped establish the Federation of American Scientists and the Institute for Defense and Disarmament Studies. He was perhaps more acutely aware of the devastating effects of nuclear technology than most other scientists and argued that one of the main benefits of SETI is that it is a tool that reveals our own future.
Morrison proposed calling SETI the “archaeology of the future.” As he explained it, although studying the past through archaeology is fascinating because it informs us about our own history, SETI grants us the opportunity to explore our future because it shows what we have the potential to become. He claimed that SETI was “a missing element in our understanding of the universe which tells us what our future is like, and what our place in the universe is. If there’s nobody else out there, that’s also quite important to know.”13 Such thinking was, of course, highly deterministic. Over time, SETI scientists’ existential fear had turned into existential forecast. They began to project their concerns about Earth and our civilization onto their expectations of what they might find in the universe.
The cosmic mirror
We might call this projection the cosmic mirror, a popular concept that suggests SETI might unify the world because it helps human beings to see themselves in a cosmic context. SETI scientist Jill Tarter once defined the cosmic mirror in an article for CNN, describing it as “the mirror in which all humans can see themselves as the same, when compared to the extraterrestrial other…. It is the mirror that reminds us of our common origins in stardust.”14
But there’s another side to the cosmic mirror. Although it can remind us of our common origins, it can also highlight our problems and conflict. Take, for example, the creation of the Voyager program’s Golden Record. Designed in part by SETI scientists, including Sagan and Drake, the phonograph record was a message launched aboard the Voyager 1 and Voyager 2 spacecraft. Sagan and his team wanted to include on the record a diverse selection of sounds, images, and greetings that were intended to convey to potential extraterrestrial civilizations a snapshot of humanity’s cultural and scientific achievements.
Despite the intended extraterrestrial focus, however, the team encountered unexpected terrestrial challenges. Recognizing the need to eschew an American bias, Sagan sought to include greetings in various languages. Pressed for time, he visited the UN headquarters and asked all of the delegates to record a greeting in their native language, ensuring a diverse representation of humanity. While moving forward with the recordings, however, he quickly realized that all the chiefs of delegations were male—there would be no representation of a woman’s voice.15
That realization sparked a crucial question about the record’s design. Should it accurately depict the world and acknowledge world leadership’s gender imbalance, which stems from a history of patriarchy? Should it show Earth as it truly is, including its horrors and injustices? Ultimately, the team chose a more positive portrayal and avoided depictions of violence and negativity.
The cosmic mirror can also be used to show how our anxieties about technology are manifested in our ideas about the universe. SETI scientists anxious about the rise of AI predict that we will find AI in outer space—in fact, it might be all we find. Shklovsky and Sagan once wrote that we “will some day very likely be able to create artificial intelligent beings which hardly differ from men, except for being significantly more advanced. Such beings would be capable of self-improvement, and probably would be much longer-lived than conventional human beings” (reference 11, page 486).
A cosmos populated mostly by technological beings is sometimes referred to as the “postbiological universe.” Former NASA chief historian Steven Dick coined the phrase and argued that “cultural evolution over the long time scales of the universe has resulted in something beyond biology, namely, artificial intelligence.” He defined the postbiological universe as “one in which the majority of intelligent life has evolved beyond flesh and blood intelligence, in proportion to its longevity.”16 Scientists who subscribe to the postbiological-universe theory believe that we are far more likely to encounter technology than biological life as we explore the universe.
With the rise of new AI technologies, a renewed interest has emerged in the postbiological universe and the potential proliferation of AI in it. Harvard University astronomer Avi Loeb, for example, recently made headlines for saying that small metal spheres found in meteor fragments on the seafloor were potentially from “a spacecraft from another civilization” and for telling the New York Times that it was “most likely a technological gadget with artificial intelligence.”17
Although those claims have come under scrutiny, many SETI scientists agree that AI is what they are most likely to find in their search for extraterrestrials. SETI Institute scientist Seth Shostak has made the point that any aliens humanity should expect to encounter are likely past the point of AI development, considering that humans were able to accomplish the feat so quickly after inventing radio technologies.18 Clearly, the anxieties and hopes we hold about our own technological civilization shape the way we imagine other worlds.
What to make of all this? It would be wrong to dismiss SETI’s value because of these earthly trappings. In some sense, its ability to keep its fingers on the cultural pulse is what helps SETI transform and develop creative new strategies.
For example, the First Penn State SETI Symposium, held in 2022, had talks focused on “pollution SETI,” which purports to identify evidence of industrial activity in exoplanetary atmospheres. One new NASA-funded initiative, Categorizing Atmospheric Technosignatures, aims to study exoplanetary atmospheres to create a catalog of potential atmospheric technosignatures, which might include known pollutants like chlorofluorocarbons.
During a period in Earth history when we are extraordinarily concerned with our planet’s health, it occurs to us that perhaps we are not the only civilization to mistreat our home world. In the post-Cold War period, perhaps we have not forgotten the existential threat of the nuclear bomb, but our focus has shifted as we face other technologically rooted threats, such as climate change.
Throughout its history, SETI has considered the threats our world faces and developed optimism that humanity might overcome them. Although it is sometimes marked by a troubling determinism, which might hinder clearer thinking about the possibility of life elsewhere in the universe, SETI has led its practitioners to fight for scientific internationalism and activism against technological tyranny.
As seen in the idea of using nuclear arms as messages, SETI shows how the technological threat is not truly technical, but societal. Many of our existential threats, be they pandemics, natural disasters, or AI, are rooted in how our society chooses to use technology. Behind the word “intelligence” in SETI is a small but persistent worry that perhaps intelligent civilizations are not intelligent enough—that is, not intelligent enough to avoid destroying themselves.
The fact is that we know nothing about alien intelligence or the way technological societies—other than our own—progress. Instead, we project our anxieties about our own civilization onto extraterrestrials.
We are unlikely to learn much about the true nature of extraterrestrial civilizations from such speculative research, but in observing those patterns of deterministic forecasting, we may see how our predictions for extraterrestrials are tied up in our projections for our own civilization’s future. Just as Cold War–era SETI motivated scientists to cooperate, the introspection that the field fosters might prove more successful at prompting global peacemaking than the actual discovery of an alien civilization.
REFERENCES
Rebecca Charbonneau is a historian of science and a Jansky fellow at the National Radio Astronomy Observatory in Charlottesville, Virginia.