We are now quite certain that over the next century the world will warm up by a few degrees. A few degrees—the difference between early morning and mid morning—doesn’t sound like much. But, in fact, the impacts turn out to be dire. It’s worth asking, how did scientists come to understand this? We need to convince the public of the threats we face; yet how can we convince them if we don’t explain how scientists came to know what they know? The history of any scientific development can address general questions of how scientists do their work and reach their conclusions. But the history of climate change impact studies turns out to be a peculiar kind of history, not at all the sort of story that historians of the physical sciences are used to telling.
To be sure, the study of impacts began like most histories of science: in the realm of speculation. And poor speculation at that. Through the first half of the 20th century, when global warming from the greenhouse effect was itself only a speculation, the handful of scientists who thought about it supposed any warming would be for the good. For example, Svante Arrhenius (figure 1) published the first calculations in 1896 and claimed that the world “may hope to enjoy ages with more equable and better climates.”1 Others tended to agree that global warming, or any effect of the progress of human industry, could only lead to a beneficent future.
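Arrhenius's 1896 conclusion can be restated in the logarithmic form that later became standard: each doubling of atmospheric CO2 adds roughly the same increment of warming. Here is a minimal sketch of that relation, assuming an illustrative sensitivity value that is not Arrhenius's own figure:

```python
# A sketch of the logarithmic approximation often used to restate
# Arrhenius-style estimates: delta_T = S * log2(C / C0), where S is the
# "climate sensitivity" (warming per CO2 doubling). The reference level and
# sensitivity below are illustrative assumptions, not historical values.
import math

def warming_from_co2(c_new_ppm, c_ref_ppm=280.0, sensitivity_per_doubling=3.0):
    """Equilibrium warming (degrees C) for a CO2 rise from c_ref_ppm to c_new_ppm."""
    return sensitivity_per_doubling * math.log2(c_new_ppm / c_ref_ppm)

# A doubling of CO2 yields exactly one "sensitivity" of warming:
print(warming_from_co2(560.0))  # 3.0
```

The logarithmic form explains why "a few degrees per doubling" is a robust way to state the result: it does not matter much which starting concentration one assumes.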
Figure 1. Svante Arrhenius. In 1896 the Swedish physical chemist published an estimate that doubling the carbon dioxide level in the atmosphere would raise global average temperature by a few degrees Celsius. Over the next half century, Arrhenius and others assumed that higher temperatures would be beneficial to civilization.
Early concerns
In the late 1950s, a few scientists realized that the level of carbon dioxide gas in the atmosphere might be rising and suggested that the average global temperature might climb a few degrees Celsius before the end of the 21st century. Roger Revelle, the most senior of those researchers (figure 2), publicly speculated that in the 21st century the greenhouse effect might exert “a violent effect on the earth’s climate” (as quoted by Time magazine in its 28 May 1956 issue). He thought the temperature rise might eventually melt the Greenland and Antarctic ice sheets, which would raise sea levels enough to flood coastlines. In 1957 Revelle told a congressional committee that the greenhouse effect might someday turn Southern California and Texas into real deserts. He also remarked that the Arctic Ocean might become ice free. But everyone understood that it was all speculation, more science fiction than scientific prediction.
Figure 2. Roger Revelle. In the late 1950s the American oceanographer was the first to publicly speculate that future global warming might bring serious harm to some regions and make the Arctic Ocean ice free.
More scientists began to look at the matter after 1960, when observations showed that the level of CO2 in the atmosphere was indeed rising rapidly. In 1963 a pathbreaking meeting was convened by the private Conservation Foundation; called “Implications of Rising Carbon Dioxide Content of the Atmosphere,” that meeting set the pattern for many later exercises in impact studies. Already at that embryonic stage of understanding, the meeting brought together experts in CO2 chemistry, climate, fisheries, agriculture, and so forth. And it resulted in a consensus report, which warned that if fossil-fuel burning continued, “the earth will be changed, more than likely for the worse.” But the group admitted that they could scarcely say what dangers might lie a century ahead. They suspected forest productivity would improve, which did not sound bad. And the distribution of species—including important ones for commercial fisheries—would change, which could be bad or good. The only thing the assemblage of experts felt confident about was that rising temperatures would increase melting of the world’s glaciers, which would raise the sea level and bring immense flooding to low-lying areas (figure 3). There were no numbers or probabilities. It was science only in the sense that scientists were making their best guesses and admitting that it was sheer guesswork.
Figure 3. Superstorm Sandy inundated parts of New York City in 2012. Damage from rising sea level, due to the melting of glaciers and the polar ice caps as well as thermal expansion of seawater, was the first impact of global warming that scientists predicted with confidence.
Global warming caught the attention of the US President’s Science Advisory Committee in 1965. They reported that “by the year 2000 the increase in atmospheric CO2 … may be sufficient to produce measurable and perhaps marked changes in climate.”2 Without attempting to say anything specific, they remarked dryly that the resulting changes “could be deleterious from the point of view of human beings.” The following year, a panel of the US National Academy of Sciences (NAS) took a different tack, warning against “dire predictions of drastic climatic changes.”3 Dire predictions of one or another imminent climate catastrophe had, in fact, been a staple of the popular press for decades as magazines, books, and other media peddled colorful speculations of every variety. The academy panel expected no extraordinary climate change until well into the 21st century, and that was so far away! As for the long run, the panel remarked that the geological record showed swings of temperature comparable to what the greenhouse effect might cause, and “although some of the natural climatic changes have had locally catastrophic effects, they did not stop the steady evolution of civilization.”
The rise of environmentalism
That conclusion was not entirely reassuring. Concern grew among the few scientists who paid attention to climate theories. Meanwhile, the rise of environmentalism was raising public doubts about the benefits of human activity for the planet; smoke in city air and pesticides on farms were no longer tokens of progress but instigators of regional or even global harm. A landmark study conducted at MIT in 1970 covered a variety of environmental problems and included a section on greenhouse warming. The experts concluded it might bring “widespread droughts, changes of the ocean level, and so forth,” but they could not get beyond such vague worries.4 A meeting in Stockholm the following year came to similar conclusions and added that we might pass a point of no return if the Arctic Ocean’s ice cover disappeared. That occurrence would change the world’s weather in ways that the scientists could not guess at but that they thought might be serious.5
Governments were now putting some of the environmental movement’s demands into law; thus arose a practical need for formal environmental impact assessments. A new industry was born with expert consultants who strove to forecast effects on the natural environment of everything from building a dam to regulating factory emissions. Beyond the local scale, concerned people applied increasingly sophisticated scientific tools to study the impacts of deforestation, acid rain, and many other far-ranging activities. They looked at impacts not only on natural ecosystems but also on human health and economic activities. Assessing the long-term impacts of greenhouse gases fitted easily into such a research paradigm.
One example of the broadened view was the 1977 report Energy and Climate, from a panel of geophysicists convened by the NAS.6 By now models of all sorts, from elementary radiation physics to elaborate computer exercises, projected an average global warming of 3°, give or take, following a doubling of the atmosphere’s CO2 level. What would that mean? Like all studies of the period, the experts just used general physical principles to deduce what sort of consequences might result; they had no detailed scientific projections or observations to cite. On the positive side, the Arctic Ocean might eventually be opened to shipping. On the negative side, there would be “significant effects in the geographic extent and location of important commercial fisheries… . Marine ecosystems might be seriously disrupted.” Stresses on the polar ice caps might lead to a surge of ice into the sea and bring a “rise in sea level of about 5 meters within 300 years.” As for agriculture, there would be “far-reaching consequences” that “we cannot specify… . We can only suggest some of the possible effects. A few of these would be beneficial; others would be disruptive.” There could be terrible “human disasters” like the recent African droughts. However, the panel made clear it could not foresee what would actually happen. Two years later another academy panel said much the same and took brief note of an additional threat—the rise of CO2 in the atmosphere would make the oceans more acidic. Here, too, they thought the consequences were beyond guessing.7
All those committees managed to reach a consensus on what they were saying: Everybody signed off on the conclusions. They could do that because in most areas they agreed to tell the public that they were uncertain—except they were certain there were risks, serious possibilities that needed to be addressed with dedicated research efforts.
More categories of impacts emerged, and each began to attract its own little band of specialists. For example, an elaborate 1983 study by the US Environmental Protection Agency looked into sea-level rise. The experts concluded that by the end of the 21st century they “could confidently expect major coastal impacts, including shoreline retreat, … flooding, saltwater intrusion, and various economic effects.”8
Detailed studies emerge
By the early 1980s the studies were starting to look less like seat-of-the-pants guesses; they had numbers, equations, and references to a growing peer-reviewed scientific literature. The key developments were computer projections of future temperature rise along with changes in precipitation, soil moisture, and so forth. A 1983 NAS report was the most detailed assessment up till then.9 In a category like agriculture, the experts looked, for example, at how soybean yields had varied with temperature in the past and what a physiological simulation for wheat said about the response to changes in solar radiation and soil moisture. For sea-level rise, they could calculate how much seawater would expand with heat and make a very rough model of what might happen to the Antarctic ice sheets. They also looked at coral-reef records of sea level during previous warm epochs. With less of an attempt at precision, the academy’s experts pointed out that an increase in extreme summer temperatures would worsen the “excess human death and illness” that came with heat waves. Also, melting of permafrost in the Arctic could require adaptations in engineering. In addition, climate shifts “may change the habitats of disease vectors.” Finally and most important, “In our calm assessment we may be overlooking things that should alarm us.” There might be effects that no expert could predict or even imagine, effects all the more dangerous because they would take the world by surprise (figure 4).
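The thermal-expansion piece of such a sea-level calculation is simple enough to sketch as a back-of-the-envelope estimate. The layer depth and expansion coefficient below are illustrative assumptions, not the panel's numbers:

```python
# A back-of-the-envelope sketch of the thermal-expansion term in sea-level
# rise: warming a surface layer of depth H by delta_T expands the water
# column by roughly alpha * H * delta_T. The coefficient and depth here are
# rough illustrative values, not the 1983 panel's inputs.

def thermal_expansion_rise_m(warming_c, layer_depth_m=700.0,
                             expansion_per_c=2.0e-4):
    """Sea-level rise (m) if a surface layer of the given depth warms uniformly."""
    return expansion_per_c * layer_depth_m * warming_c

# Warming the top 700 m by 1 degree C expands the column by roughly 0.14 m:
print(round(thermal_expansion_rise_m(1.0), 3))
```

Even this crude estimate lands at tenths of a meter per degree, which is why thermal expansion was one of the few impacts scientists could quantify early and with some confidence.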
Figure 4. Forests in the American West have recently been devastated by bark beetles that multiply in the absence of harsh winters. Since the 1980s scientists have predicted that climate change would extend the range of disease vectors, but this specific impact was not anticipated. Scientists did regularly warn that there would be unforeseen impacts.
The studies to this point had used a simple cause-and-effect model. Physical scientists would run computer models to predict changes in precipitation and the like. Others would then step in to calculate immediate consequences—for example, using historical records to predict how corn yields would vary with the weather. But if farmers could no longer get good results from corn, wouldn’t they plant something more suited to their new climate? During the 1980s, some impact studies began to take account of how humans might adapt to climate change. By the end of the decade, some studies were linking models of crop responses with economic models. Complex interactions were no less crucial in natural ecosystems. Life scientists began to calculate how forests, coral reefs, and other environments might respond to the rise of greenhouse gases. For example, could tree species move their ranges poleward fast enough to keep up with the temperature rise? At a still higher level of complexity, some studies began to account for the way different climate impacts might interact with each other.
Those more sophisticated approaches guided the first comprehensive official US government report, ordered by Congress in 1986 from the EPA.10 The EPA’s findings continued the trend toward predicting more serious, more numerous, and more specific kinds of damage. The experts concluded (as summarized by the New York Times on 20 October 1988) that “some ecological systems, particularly forests …, may be unable to adapt quickly enough to a rapid increase in temperature… . Most of the nation’s coastal marshes and swamps would be inundated by salt water… . An earlier snowmelt and runoff could disrupt water management systems… . Diseases borne by insects, including malaria and Rocky Mountain spotted fever could spread as warmer weather expanded the range of the insects.” Many of the predictions, such as the expansion of diseases, had been mentioned before but were only now coming under detailed discussion.
Studies of how climate change might affect human health expanded particularly swiftly in the 1990s, catching the attention of both experts and the public. As in some other categories, the health-effects work was increasingly supervised not by a particular government but by international organizations, including the venerable World Health Organization and the new Intergovernmental Panel on Climate Change (IPCC), established in 1988. Yet with health, as in other arenas, it was becoming clear that global generalizations were of much less value than studies at a regional level. For example, insects that carry tropical diseases like dengue fever and malaria would expand their ranges. The main impacts would be felt in developing nations, while people in the developed world tended to worry chiefly about how such diseases might spread to the temperate zones.
The question of regions
Any regional analysis had to start with the climate changes that would result from a given level of greenhouse gases, as calculated by computer models. But although the increasingly sophisticated models had come to a rough agreement on global features like the rise of average temperature, they differed in the regional details. In places where many factors balanced one another—for example, in the region between the Sahara and the African rain forests—one model might predict a benign increase of rainfall and another, terrible droughts. Policymakers did not much care about the average global temperature—they wanted to know how things would change in their own locality.
Unable to make quantitative predictions of just what might happen in each region, the IPCC decided to study “vulnerabilities”—the nature of damage that a given regional system might sustain from any of the likely sorts of climate change. That approach was in line with an established practice of vulnerability studies in many other research areas, from food supplies to earthquakes. The experts also considered benefits, but the very term “vulnerability” showed that by now most of them believed the net effects of greenhouse warming would be harmful. Some disagreed, which raised a serious controversy during the discussions leading to the IPCC’s initial report of 1990 (available at http://www.ipcc.ch along with subsequent assessments). Russian climatologists argued that warming would have important benefits—for frigid Siberia, warming sounded like a great idea.
In the usual IPCC fashion, the 1990 Working Group on impacts forged a consensus by admitting deep scientific uncertainty. The panel couldn’t even say whether net global agricultural potential would increase or decrease on a doubling of atmospheric CO2. While acknowledging there might be benefits in some northern locales, the panel warned that “there may be severe effects in some regions,” ranging from extinction of species to a 1-meter rise in sea level by 2100, which would displace tens of millions of people. Droughts could be a problem, although in areas like the western US with elaborate dam systems, the panel thought the problem would be manageable. On the other hand, it foresaw increased frequency and severity of flooding. Again, consensus was achieved only by agreeing that the report could not assert much for certain beyond generalized statements about risks and, especially, vulnerabilities.
The IPCC and the computer modeling community took a big step forward in 1997 with a pioneering report titled The Regional Impacts of Climate Change: An Assessment of Vulnerability.11 Each of seven regions of the globe got its own detailed account of vulnerabilities, based on computer runs carried out expressly for the exercise. More than a dozen different models were compared in order to assess the degree of reliability. At that level it was obviously necessary to consider not only the local climate and ecological systems but also the local economic, social, and political conditions and trends and to draw in the social sciences as equal partners with geophysics and biology. It was becoming a standard practice to consider how people might adapt. For example, the panel concluded that Africa was “the continent most vulnerable to the impacts of projected changes.” That was not just because so many parts of Africa were already water stressed, subject to tropical diseases, and so forth, but still more because population pressures and political failings were causing environmental degradation that would multiply the problems of climate change. Above all, Africa’s “widespread poverty limits adaptation capabilities.” By contrast, the carefully managed agricultural systems of Europe and North America might even contrive to benefit from a modest warming and rise in the level of CO2 (which could act as a fertilizer for some crops), although the developed nations would certainly suffer some harmful impacts as well.
Such assessments, and the publics they addressed, could see impacts in the developed world as manageable because they were looking little more than half a century ahead. The late 21st century was still so far away! Surely by then, humanity would have taken control of its emissions; surely CO2 would not rise to three or four times the preindustrial level.
Scenarios and probabilities
The future state of the climate would depend crucially on what emission controls nations chose to impose—and that was the biggest uncertainty of all. Thus was exposed a problem with the standard way of predicting impacts. Scientists had tried to peer into the future by fixing on a most likely outcome within a range of possibilities: “Global average temperature will rise 3° plus or minus 50%” or the like. People would then estimate the consequences of a 3-degree rise.
Professional futurologists in the social sciences had abandoned that method of prediction decades earlier, when they realized that most of their predictions had been far off the mark. They turned to an approach practiced by military planners and war gamers since the 1940s: Instead of working only with the most likely future, imagine a wide range of possible futures, and for each of them develop a detailed scenario. The aim was to stimulate thinking about how operations should be structured so they would hold up for any of the likely contingencies. Since the 1980s most corporations and government agencies had used scenarios for their planning.
The IPCC had taken up that approach from the outset, assembling experts to write scenarios in a lengthy intergovernmental process. The result, published in 1992, was a set of six scenarios, each describing a way that the world’s population, economies, and political structures might evolve over the decades. Experts in various fields of physical and social sciences could try to figure out how much of each of the various greenhouse gases would be emitted by the society of a given scenario, compute the likely climate changes, and then estimate how that society would try to adapt. A second try in 1996 produced no fewer than 40 scenarios. There were so many unknowns, and so many differences from region to region with each region demanding its own detailed study, that the small community of researchers could explore only a few of the possibilities in depth. Many research projects used only one scenario, the middle one with emissions neither sharply restricted nor rising explosively.
Meanwhile, the IPCC got increasingly specific about just what the consensus of experts meant. The panel reported whether they judged a given impact to be “more likely than not,” or “likely,” or “very likely,” and so forth (figure 5). In the panel’s 2001, 2007, and 2013 reports, the most impressive parts resembled the earlier reports; they simply laid out a variety of the possible impacts. In fact, all the major impacts of climate change as we now understand them were well understood on the global scale by 2001. The later IPCC reports were mainly distinguished by their increasing regional specificity and their increasing certainty that the impacts were well on their way. “Likely” shifted to “very likely,” and the wording of the executive summaries of the reports got increasingly strong in the hope that people would pay heed.
Figure 5. The 2011 heat wave and drought that struck Texas was exacerbated by global warming, according to some calculations. Early estimates of the impact of the rising carbon dioxide level on agriculture were uncertain, but by around 2000 it became clear that the overall effects would be harmful.
Most people read only the executive summaries. The IPCC impacts reports themselves were enormous, but they were an odd sort of science. They could not be read like a physics paper, presenting a logical sequence of arguments and observations explaining why, for example, wheat yields in the American Great Plains were expected to decline by 4%. The reports were more like review papers, citing hundreds of studies of computer models, historical records, and so forth. Anyone seeking to be convinced would have to dig down into the papers, which were themselves often elliptical; computer modelers’ papers in particular rarely had space to do more than specify the special characteristics of the model of the moment and give graphs and tables of the results.
Attempts at precision could be misleading. For example, studies published from the 1970s into the mid 1980s estimated that by 2100 the sea level might rise anywhere from a few tenths of a meter to a few meters. The upper limit dropped to about half a meter in the IPCC’s 1995 report, and it stayed there in the reports through 2007; many readers did not notice that the 2007 report explicitly did not include an addition that might come if polar ice sheets began to surge into the oceans in the next few decades. Most scientists considered that quite unlikely, but there were always some who argued that it was possible. Not until its 2013 report did the IPCC grudgingly admit that the sea level might rise a meter and a half by 2100. And even then the IPCC gave scant attention to impacts that did not seem likely to happen, even though they would be catastrophic if they did befall us.
That cautious approach was different from the practice in many other kinds of impact studies. For example, the building codes of cities in earthquake zones and evacuation plans for people living near nuclear reactors dealt with problems that might have less than one chance in a hundred of happening in the next century. The IPCC, by contrast, was preoccupied with impacts that were more likely than not. Those were shaping up to be bad enough.
A peculiar kind of science
This brief summary of the history of scientific understanding of the impacts of climate change is a peculiar history, as histories of science go. Since the real work began in the 1960s, I have not had occasion to mention a single name of an individual: My actors were committees. I have not even cited any single landmark discovery paper; the committees were looking over dozens of papers, then hundreds, each contributing a little bit to the overall picture. Nor have I described any grand false leads, dead ends, or controversies, which are so common in the history of science. The seat-of-the-pants guesses that scientists started with in the 1960s turned out to be roughly correct; the story was one of adding to the list of impacts, putting numbers to each item, and becoming ever more certain that the things foreseen would indeed come to pass. And in this short article I have certainly not been able—any more than the IPCC in its lengthy reports—to present a convincing case, based on logic and observations, of why anyone should believe the consensus statements.
A closer look, if I had much more space, would certainly turn up plenty of individuals, along with lots of mistakes and controversies about details. Each new idea was first brought up by someone and then argued out at length. Our history of committees is like the swan that glides serenely on the surface while paddling furiously underneath. Still, I haven’t been telling a Whig history, reconstructing after the fact an understanding that never existed at the time. In this peculiar case a consensus was constructed by committees on the fly, a consensus that became increasingly detailed and certain decade by decade. The topic was so important that people recognized very early on that it could not be left to a few individuals making statements to the newspapers. Experts had to analyze the entirety of the peer-reviewed literature, even have elaborate computer studies done expressly for their use, and get together to hammer out conclusions that everyone could agree were scientifically sound. To be sure, in some areas they could only agree on the extent of their uncertainty, but that, too, was a genuine and important scientific conclusion.
On the other hand, many people have argued vociferously against the entire scientific consensus on impacts, right up to the present. For example, a Hoover Institution publication held that “global warming, if it were to occur, would probably benefit most Americans.” There would be lower heating bills and other energy savings. Others emphasized, as a Heartland Institute publication declared, that “more carbon dioxide in the air would lead to more luxuriant crop growth and greater crop yields” while taking no account of the likely heat waves and droughts.12 No careful study or hard analysis backed up such statements. Our mainstream history, the history of expert committees, stands aside from all that.
The public knew little of how the committees came to exist and nothing of how they functioned. The experts’ consensus reached ordinary people as a few paragraphs, at most, in a news story, boiling down an already much compressed executive summary.
I submit that a major problem in communicating climate realities to the public is that the media, and everyone else addressing the public, feature individual scientists and their discoveries and disagreements. We have scarcely come to grips with committee consensus, a different kind of history of science. You will find no account digging into details of committee deliberations. I haven’t been able to do it here, and I am not sanguine about prospects for getting it done. In fact, the IPCC and the NAS and their members have been highly reluctant to make public any documents or recollections about just what goes on in the committee deliberations. Only recently, under pressure from critics, has the IPCC made its review process entirely transparent to the public. Be that as it may, I suggest historians and social scientists should give more attention to those committees. If we did, the public would have a better idea of how “science” comes to say what it does say about global warming—and a good many other issues.
This article is adapted from the author’s lecture on accepting the American Physical Society’s 2015 Abraham Pais Prize for History of Physics, delivered at the 2015 APS March Meeting in San Antonio, Texas. A longer version of this article with complete references is available at http://www.aip.org/history/climate/impacts.htm.
Spencer Weart is historian emeritus at the American Institute of Physics, Center for History of Physics, in College Park, Maryland.