For how long and in what ways can humans sustain the energy-intensive way of life we take for granted? That consequential question is one that physicists must help answer. As we pass the middle of 2016, oil prices are at a 10-year low, partly because of the surge in production of oil and natural gas from fracking. The current fracking boom may ease the transition to a new mix of energy resources. Conversely, it may breed complacency that delays the transition, or incite popular resentment that impedes it.

The physics community must participate in shaping how energy issues play out over the coming decades. The development of fusion reactors, photovoltaic cells, and other potential energy sources clearly requires contributions from physicists. As educators, many of us occupy the central position of teaching students the very definition of energy and the fundamental limits on extraction of free energy from heat. Beyond the classroom, we should all be concerned with the public’s understanding of what energy means. Even in the specific case of fossil fuels, there is room for our increased technical engagement through collaboration.

As we look at the past 200 years of hydrocarbon production and think about the future, it is possible to have two surprisingly different reactions: one optimistic and one pessimistic. Optimists see reasonable development and adaptation for the foreseeable future. Pessimists worry that our whole economy is unstable and could crumble at any time.

Optimism and pessimism have mathematical analogues in simple models. The historical equation of optimism is

dϕ/dt = αϕ,  (1)

whose solution gives

ϕ = ϕ₀ e^(αt).  (2)

Here t is time and ϕ can describe human population growth, the rate of energy production or consumption, or the rate of economic growth. Over the past two centuries, all those quantities have grown roughly exponentially.

For population, it is clear why exponential growth happens: If couples produce a bit more than two children on average, then each new generation will be larger than the previous one. For the period since 1800, the growth rate α for world population has been on the order of 1% per year.

In the US, both major political parties advocate exponential economic growth at around the rate of 2.5% per year. Any other position would be politically untenable. Politicians debate what role government should play in achieving growth, not whether growth is desirable. With respect to economic growth, most people are living as optimists.

Pessimists start with equation (1) and add a term. Petroleum, land, soil, forests, minerals, and many other resources exist on Earth in finite amounts. When the amount R of a resource that’s been extracted reaches the carrying capacity R̄, production must cease. That limitation can be captured by writing

dR/dt = αR(1 − R/R̄),  (3)

where the term in parentheses is the simplest way to ensure that growth stops when R reaches the limiting value. Dividing through by R̄ and defining ϕ ≡ R/R̄ produces the simplest pessimistic version of a growth equation:

dϕ/dt = αϕ(1 − ϕ),  (4)

which leads to

ϕ = 1/(1 + e^(−αt)).  (5)

Resource production peaks at some time, arbitrarily taken to be t=0; it rises before peak production and falls afterward in a symmetrical fashion. In physics, ϕ from equation (5) is the beloved Fermi function, which describes the occupation of energy states for electrons and other fermions, whereas in population growth or statistics it is called the logistic curve.
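
A few lines of code make the picture concrete. The sketch below is a minimal illustration (the value of α is arbitrary, not fitted to any resource); it evaluates equations (4) and (5) and confirms that the production rate peaks exactly when half the resource is gone:

```python
import numpy as np

alpha = 0.05                          # illustrative growth rate: 5% per year
t = np.linspace(-200, 200, 401)       # years, measured from peak production

phi = 1 / (1 + np.exp(-alpha * t))    # equation (5): fraction of resource extracted
rate = alpha * phi * (1 - phi)        # equation (4): production rate dphi/dt

# The rate is maximal at t = 0, where exactly half the resource is gone.
i = np.argmax(rate)
print(f"peak production at t = {t[i]:.0f} yr, where phi = {phi[i]:.2f}")
```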

Physicist and famous pessimist Albert Bartlett spent decades warning about the risks of optimism:

They believe that perpetual growth is desirable, and hence it must be possible, and so it can’t possibly be a problem. Yet the “spherical Earth” people go around talking about “limits” and about the limits that are implied by the term “carrying capacity.” But limits are awkward because limits interfere with perpetual growth, so there is a growing move to do away with the concept of limits.1 

Bartlett emphasized that the exponential function in equation (2) is greedy and eventually unphysical. One can easily forget the implications of the rate at which ϕ becomes larger. The “rule of 70” says that when ϕ grows exponentially at r% per year, the number of years needed for ϕ to double is 70/r; the rule works because ln 2 ≈ 0.7. Anything with a growth rate of 1% per year will double in approximately 70 years.

If the human population, for example, continues to grow at 1% per year, then 1000 years from now only 1 m2 of Earth’s surface will be available per person. That kind of extreme land shortage is unlikely to be the primary factor that limits human population growth, but the calculation illustrates the impossibility of indefinite population growth.
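
The arithmetic behind that statement is quickly checked. Here is a minimal sketch, assuming a 2016 population of about 7.4 billion and a land area of roughly 1.5 × 10¹⁴ m²; both inputs are round figures of ours, not values from the text:

```python
import math

# Rule of 70: doubling time is ln(2)/r, and ln 2 ≈ 0.693, hence "70/r"
r = 0.01                                   # 1% growth per year
print(math.log(2) / r)                     # ≈ 69.3 years to double

# Land available per person after 1000 years of 1% annual growth
population = 7.4e9 * math.exp(r * 1000)    # ≈ 1.6e14 people
land_area = 1.5e14                         # Earth's land area in m² (assumed input)
print(land_area / population)              # ≈ 0.9 m² per person
```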

What equation (5) says is that the production rate of finite quantities grows almost exponentially until the overall resource is half depleted. Before that time, essentially no guidance comes from production rates to indicate that growth is nearing an end. Nor is the resource itself gone. As much is left as has been extracted. But inexorable limitations from finite supply will lead to a decline in production.

By most accounts, the pessimistic view of resource limits began with Thomas Malthus two centuries ago. International discussion of resource limits garnered substantial public attention in 1972 with publication of The Limits to Growth by the global think tank Club of Rome. The detailed predictions of that book were based on the World3 model, a computer program to simulate the interacting dynamics of population, economics, resource production, and other factors, including ecological limits. At the time, the World3 model required a mainframe computer to run; its updated descendant (see, for example, www.bit-player.org/extras/limits) can now be run interactively on the Web. The authors of The Limits to Growth wrote, on page 190, “We are convinced that realization of the quantitative restraints of the world environment and of the tragic consequences of an overshoot is essential to the initiation of new forms of thinking that will lead to a fundamental revision of human behavior and, by implication, of the entire fabric of present-day society.”

Of all the finite resources highlighted in The Limits to Growth, none are more significant than petroleum and natural gas, in part because power consumption per capita and GDP per capita are highly correlated (see figure 1). To the physics community that makes sense: Power is the rate of doing work and thus underlies all activity in a fundamental way. The oil shock of 1973, produced by the combination of US production decline and an OPEC embargo, gave credibility to the sense that oil is finite. And because of that significance, no resource has been studied more carefully.

Figure 1. The correlation between hydrocarbon-based power consumption and economic output for most countries on Earth. A power-law fit finds that annual GDP per person is G = $10 500 (C/kW)^0.64, where C is hydrocarbon-based energy consumption per second per person. The tight power-law relationship indicates that economic prosperity is not currently feasible without consumption of hydrocarbon fuels. The power law is reminiscent of scaling laws in biology;15 the flow of petroleum through economies resembles the flow of blood in mammals. On average, the hydrocarbon power consumed in the US is 8 kW per person, the same as 80 incandescent 100 W bulbs burning continuously. If the US were to rely only on its currently available renewables—biomass cogeneration, wood, hydropower, geothermal, wind, passive solar, and photovoltaics—power consumption would drop to four bulbs per person; eliminating hydropower and biofuels would reduce the number to one or two. The reduction would entail such a change in lifestyle as to make the US unrecognizable.16 (Data source: Central Intelligence Agency, World Factbook, 2015; DOE/Energy Information Administration, 2015.)
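
For a sense of scale, the fitted power law can be evaluated directly. The snippet below plugs in the caption’s US figure of 8 kW per person; because the fit spans many countries, the output is only indicative:

```python
# Evaluate the cross-country fit G = $10,500 * (C/kW)^0.64 at the US value
C_over_kW = 8.0                        # US hydrocarbon power per person, from the caption
G = 10_500 * C_over_kW ** 0.64         # predicted annual GDP per person, dollars
print(f"${G:,.0f}")                    # ≈ $39,700
```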

The most famous figure to consider the finite supply of oil and gas is M. King Hubbert. In 1956 the Shell Oil Co geophysicist concluded that the US had reserves of 150 billion to 200 billion barrels of oil and predicted that US oil production would peak in 1970 at 8.2 million barrels per day (1 barrel ≈ 159 l). He arrived at those conclusions in three separate ways: by fitting past production to equation (5), by adding up proven reserves from oil company data, and by extrapolating into the future the discovery rate of new oil fields.2 Hubbert’s prediction gave an impressively accurate account of the following 50 years of US production. The actual peak came in 1970 at 9 million barrels per day; production then declined in accord with the logistic curve, as shown in figure 2.

Figure 2. M. King Hubbert’s 1956 prediction for the daily production rate of US crude oil through 2015, compared with actual production. Until 2008, production (brown) followed Hubbert’s model (blue) reasonably well, but since then horizontal drilling and hydrofracturing have created a new peak (red).

Using similar methods, although subject to greater uncertainty, Hubbert predicted that cumulative discovery for the world would rise eventually to 1250 billion barrels and that production would peak around the year 2000 at 12 billion barrels per year, or 33 million barrels per day. His estimate of world oil reserves was too low—according to many current estimates, by a factor of two. From the physics community, Kjell Aleklett calculated in 2012 that a cumulative 2300 billion barrels of oil would be discovered, produced, and consumed. He predicted peak production around 2015 at 90 million barrels per day.3 Other authors who recently reached a similar estimate include Richard Miller and Steven Sorrell, who concluded, “A sustained decline in global conventional production appears probable before 2030 and there is significant risk of this beginning before 2020.”4

Specialists usually know what resources are in place, have a good handle on consumption rates and trends, and are aware of looming resource limits well ahead of any peak. Thus, say the optimists, few devastating surprises crop up. Production plateaus are foreshadowed by basic economic principles of supply and demand; the resource becomes more expensive. Technology development—kindled by the more expensive resource—creates more affordable alternatives. In the view of thoughtful optimists such as Jesse Ausubel, the calamities foreseen by pessimists do not unfold. Ausubel wrote,

Exploring, inventive humanity exemplifies the lifting of carrying capacity. Through the invention and diffusion of technology, humans alter and expand their niche, redefine resources, and violate population forecasts. In the 1920’s, the leading demographer, Raymond Pearl, estimated the globe could support two billion people, while today about six billion dwell here. Today, many Earth observers seem stuck in their mental petri dishes. The resources around us are elastic.5 

Take oil as an example. Whale oil was used for lighting lamps in the 1800s. As the seas approached “peak whale,” the supply of whales decreased and expeditions had to venture ever farther from home port in search of prey. In the two decades from 1840 to 1860, the price of whale oil increased fourfold.6 Contrary to Bartlett, the coming of peak whale did trigger a response. In 1859 natural petroleum was discovered in Pennsylvania and kerosene, a distillate of petroleum, became a substitute for whale oil, whose production declined in the 1860s. Economics, technology, and petroleum saved the whales.

With the invention of the first production-line automobile in 1908, the demand for oil—and for its distillates, gasoline and diesel, as transportation fuels—began to grow. In terms of sustainability, that development was positive because it was becoming difficult to provide hay (another carbon-based energy source) to fuel all the domesticated animals that had provided transportation for millennia. For more than 150 years, one source of oil after another has been developed to transport a population that has grown from less than 1 billion to more than 7 billion people—progress, unprecedented in history, literally fueled by oil.

All along, highly educated forecasters have anticipated a peak in global oil production just around the corner. But they have been wrong every time. Hubbert’s prediction of a global oil peak around the year 2000 was wrong. Even his description of the peak in US oil production eventually proved wrong. Including data from just a few years beyond 2008 gives the red points in figure 2. US domestic crude oil production has nearly returned to the level of 1970.

Meanwhile, with 1200 billion barrels of oil consumed to date, the BP Statistical Review of World Energy puts current proved, unconsumed reserves at 1700 billion barrels.7 By comparison, at the end of 2004, proved reserves found and not consumed were 1360 billion barrels. In the past decade, 340 billion more barrels of oil have been found and declared recoverable than were consumed. Facilitated by demand for liquid fuels, and more importantly by ingenuity and technology, production continues to increase.
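
The bookkeeping in those numbers is worth making explicit: reserves grow by newly declared recoverable discoveries and shrink by consumption, so the decade’s net change equals discoveries minus consumption. A sketch of that accounting:

```python
# Proved reserves quoted above, in billions of barrels
reserves_2004 = 1360
reserves_2015 = 1700

# Change in reserves = (new recoverable discoveries) - (consumption),
# so a positive change means discoveries outpaced consumption by that amount.
print(reserves_2015 - reserves_2004)   # 340
```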

As gasoline, diesel, and kerosene become more expensive, the demand for transportation fuels will be satisfied by biofuels, fuel cells, batteries, or other new resources. Even global energy-demand estimates may prove too high, as efficiency gains continue to reduce per capita demand and developing economies adopt more efficient technologies. Demand for oil may already be slowing because of ride sharing, public transportation, and improvements in vehicle fuel economy.

Indeed, a transition away from oil has been under way for 40 years. The percentage of oil in the global energy mix peaked in the late 1970s, just below 50%; today it is just over 30% and declining. That was the interesting peak in oil. But it came and went with little fanfare as the energy systems of the world changed and adapted. Thus, argue the optimists, the eventual plateau of oil production won’t come from limits to global oil resources but instead from reduced demand and affordable alternatives provided by technology and adaptation. Making the switch to other fuels will involve hard work and require innovation, but it will not cause the end of civilization as we know it.

The debate on whether the world economy will continue to grow through the next several decades without shocks owing to shortages in fuel or other critical materials is far from settled. Certainly the current short-term glut in oil makes it hard to focus policy on impending shortages. Oil prices were down below $30/barrel in early 2016, compared with more than $100/barrel as recently as the middle of 2014, partly because of abundant supply and partly because of financial speculation. Two primary factors drive the glut: first, increased production by OPEC, led by Saudi Arabia, and second, horizontal drilling and hydraulic fracturing—hydrofracturing or fracking—to extract oil and gas from shales. Many features of the shale-gas story capture in microcosm the larger problem of estimating oil and gas reserves. The technical work under way to understand what reserves are available may be a model for the larger question of world resources.

Hydrofracturing technology goes back to the 1940s, but the assembly of techniques behind the current production boom is generally attributed to George Mitchell. In the 1980s and 1990s, his company, Mitchell Energy and Development, embarked on a decade of experimentation in the Barnett shale near Dallas to develop techniques to produce shale gas. Those techniques have come to alleviate worries about peak oil. Ironically, Mitchell had been passionate about sustainability science since at least the 1970s and was a primary sponsor of the technical work that led to the publication of The Limits to Growth. The Cynthia and George Mitchell Foundation today emphasizes projects that examine energy and water sustainability; it also supports fundamental-physics research.

Extracting gas and oil from shale formations once seemed impossible for a very simple reason. The existence of hydrocarbon-rich mudrocks, deposited in ancient ocean beds, has been known for a long time. However, the lithified mud layers are extremely impermeable. At a depth of a few kilometers, they may be as little as 30 m thick. Their permeability shows up in Darcy’s law for flow of liquids in permeable media, q = −(k/μ)∇p, where q is the flux per unit area, μ is the viscosity of the liquid, ∇p is the pressure gradient making the fluid move, and k is the permeability. For sand, k is around 1 darcy (1 darcy ≈ 10⁻¹² m²). For sandstone, k ranges down to 10⁻³ darcy. And for shales it can be as low as 10⁻⁸ or even 10⁻⁹ darcy. That’s about 10⁻²⁰ m², which suggests angstrom spacing; the channels in the rock are not really so thin, but transport is indeed slow and difficult.
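
An order-of-magnitude estimate shows just how slow transport through intact shale is. All inputs below are illustrative round numbers of ours, not measurements from any particular formation:

```python
# Darcy flux q = (k/mu) * grad_p through intact shale (illustrative values)
k = 1e-19          # permeability in m², about 10⁻⁷ darcy
mu = 2e-5          # gas viscosity in Pa·s
grad_p = 1e6 / 10  # pressure gradient: ~1 MPa drop across 10 m, in Pa/m

q = (k / mu) * grad_p   # volume flux per unit area, m³/(m²·s) = m/s
print(q)                # ≈ 5e-10 m/s, about 1.5 cm of travel per year
```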

Hydrofracturing produces large cracks in the rock. The process involves drilling down as much as several kilometers; turning the wellbore sideways into the shale layer; proceeding horizontally for a kilometer or more; injecting 25 000 m³ of pressurized water, a few additives, and 10 000 tons of sand to keep cracks open; and extracting the gas. (A considerably more detailed description of the process was provided by Donald Turcotte, Eldridge Moores, and John Rundle, Physics Today, August 2014, page 34.)

In 2012 the Alfred P. Sloan Foundation funded an interdisciplinary group led by the University of Texas at Austin’s Bureau of Economic Geology to estimate the future of natural-gas production from the major shale-gas and oil formations, or plays, in the US. Understanding why hydrofracturing works and how much gas and oil will ultimately be produced is a complex interdisciplinary problem. Geologists understand the formations in which the gas and oil are found. Petroleum engineers understand particular features of the rock and fluid setting that make hydrocarbon production possible with current technology. Economists understand the interplay between price and production. And physicists … well, it is not evident that anything is left. Nevertheless, the Sloan Foundation study’s results turned out to incorporate unanticipated contributions from physicists: the construction of individual well decline curves.

Every individual shale-gas well starts with strong production, but output declines rapidly within a few years of drilling. Predicting the future of natural-gas production therefore requires, as a first step, the expected production trajectory of individual wells. Yet each individual well is enormously complicated. The hydrofracturing process creates a subsurface fracture network of uncertain character and extent; not a single such network has ever been imaged in reasonable detail.

The natural gas reaches the wellbore through a hierarchical transport path involving connected pores, natural fractures, and, finally, larger fractures propped open by the injected sand. That complexity makes it understandable to fall back on conventional phenomenological models of oil wells,8 particularly a class of curves assembled by Jan Arps.9 Those phenomenological curves, however, were based on experience with conventional oil reservoirs from which oil seeped long distances through much more porous and permeable rock. Detailed numerical reservoir simulations are too time-consuming to be used well by well to explore production scenarios from tens of thousands of wells.

A seemingly unrelated problem suggested a solution. The condensed-matter physics community has had success employing extremely simple models to obtain quantitatively accurate descriptions of complex systems. One example is the theory of localization: electron transport in random systems at low temperature. The original calculations of Philip Anderson in 1958 inspired a new field of study, but Anderson’s attempt to solve the problem in exact detail left many questions out of reach. Almost 20 years later, a gang of four—Anderson plus Elihu Abrahams, Donald Licciardello, and T. V. Ramakrishnan—tackled the problem anew and solved it in a novel way.10 (See the article by Ad Lagendijk, Bart van Tiggelen, and Diederik S. Wiersma, Physics Today, August 2009, page 24.) Most of their theory was contained in the assertion that for each dimension, the resistivity of a disordered solid at low temperature is a single universal function of the size of the solid scaled by a reference length.

The production over time of hydrofractured wells turns out to be amenable to a very similar sort of approach.11 It is based on the following idea: Hydrofracturing opens up some amount ℳ of gas for extraction. Everything else is inaccessible. That gas diffuses through unbroken rock to a fracture, through which it easily and quickly migrates to the production well. The characteristic minimum distance between fractures is 2d. From d one can calculate an interference time τ = d²/[k/(μs c_g)] = μs c_g d²/k, where k is again permeability, μ the viscosity, s is the volume fraction of space occupied by hydrocarbons, and c_g is the gas compressibility. The interference time τ is the characteristic time needed for gas to diffuse a distance d to the nearest fracture.
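
Plugging illustrative Barnett-like values into the formula gives interference times of order a few years, consistent with the rapid declines described next. All inputs here are assumed round numbers of ours:

```python
# Interference time tau = mu * s * c_g * d² / k (all inputs assumed)
mu = 2e-5      # gas viscosity, Pa·s
s = 0.05       # volume fraction of rock occupied by gas
c_g = 5e-8     # gas compressibility, 1/Pa (roughly 1/p at ~20 MPa)
d = 10.0       # half the fracture spacing, m
k = 1e-19      # permeability, m²

tau = mu * s * c_g * d**2 / k          # seconds
print(tau / 3.15e7)                    # ≈ 1.6 years
```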

Detailed solutions lead to the following physical picture for the depletion. In the beginning, natural gas of uniform density and pressure diffuses into a fracture, which acts as an absorbing boundary. The flow of gas decays as 1/√t. A front of low pressure and density moves away from the boundary. Eventually, around time τ, the low-pressure front moving from one boundary meets a low-pressure front coming in from an opposing fracture a distance 2d away. After the two fronts meet, the production rate falls exponentially as the finite volume is depleted.

The result is one curve (see figure 3) that gives a reasonable description of the behavior in time of all wells in the play. To apply the curve to any given well, the production curve from that well must be rescaled by two parameters. The time axis must be rescaled by τ and the cumulative production axis must be rescaled by ℳ. Those parameters are not known and must be obtained by fits to production. One can view the fitting process as an efficient way to learn, by observing a well’s early history, how much will come out and how long it will take.
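
In practice, the fitting can be posed as a two-parameter least-squares problem. The sketch below uses a hypothetical stand-in for the universal curve, f(x) = tanh(√x), which has the right limits (square-root growth before interference, saturation afterward) but is not the actual function computed in reference 11; the well data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

# Cumulative production model m(t) = M * f(t/tau), with a stand-in scaling
# function that rises as sqrt(t/tau) early and saturates at M late.
def cumulative(t, M, tau):
    return M * np.tanh(np.sqrt(t / tau))

# Hypothetical monthly history for one well: 36 months of cumulative gas (Bcf)
t_obs = np.arange(1, 37) / 12.0                          # years
m_obs = cumulative(t_obs, 2.0, 4.0)                      # synthetic "truth"
m_obs += np.random.default_rng(0).normal(0, 0.01, 36)    # measurement noise

# Fitting recovers the two physical parameters from early production alone
(M_fit, tau_fit), _ = curve_fit(cumulative, t_obs, m_obs,
                                p0=(1.0, 1.0), bounds=(0, np.inf))
print(f"M = {M_fit:.2f} Bcf, tau = {tau_fit:.1f} yr")
```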

Figure 3. Scaling curve for the output from Barnett shale wells compared with the production history of 6600 wells. The scaled time t̃ = t/τ is time in units of the interference time τ, after which production begins to decay exponentially. (Data courtesy of Frank Male.)

For the Sloan Foundation study, the procedure was applied to 14 000 gas wells in the Barnett play, 2700 in the Haynesville play, 3500 in the Fayetteville play, 5300 in the Marcellus play,12,13 and 9700 gas and oil wells in the Eagle Ford play. New rounds of data coming in are making it possible to verify that earlier predictions were on track.

To be of much use, the calculations have to be embedded in a larger context. Production depends not just on how much each well produces but also on how many wells are drilled, the length and orientation of each horizontal well, the volume of water used in the hydraulic fracturing process, original rock properties, and more. And decisions to drill wells depend on economic forecasts in addition to temperature, pressure, and other geophysical information. Assembling all that information requires sophisticated statistical techniques. Any attempt to assess future production must be a team effort to evaluate a set of future scenarios for the natural gas and oil produced from each of the plays in the study.

Given simple models of future gas and oil prices and operational costs, the Sloan Foundation study’s production models give a probability distribution for production outcomes over time. The production models also include a variable to handle the effects of technological improvement. Analyses of individual wells conclude that recovery efficiency is greater than 50% in the best wells and less than 1% in the worst. Overall, only 10–20% of the natural gas in the total field is actually being recovered by today’s processes. That leaves room for technical improvements to raise by a factor of two or three the amount of gas ultimately extracted.

One scenario with reasonable future price and operating costs for the Barnett play is that by 2030 it will have delivered 44 trillion cubic feet of natural gas. That amount represents approximately 10% of the original gas in place, and it is just under two years of total natural-gas consumption in the US. Persistent low prices could lead to 50% less recovery, whereas higher prices could increase the total by 50%, and new recovery technologies could provide another factor of two. The Haynesville, Fayetteville, and Eagle Ford plays will be able to provide comparable amounts, with two to three times as much from the Marcellus play.
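
A quick check of those numbers, taking US consumption to be roughly 27 trillion cubic feet per year (an assumed round figure, not a value from the study):

```python
barnett_by_2030 = 44        # Tcf delivered in the scenario above
us_annual_use = 27          # Tcf/yr, assumed approximate US consumption

print(barnett_by_2030 / us_annual_use)                # ≈ 1.6, "just under two years"
print(0.5 * barnett_by_2030, 1.5 * barnett_by_2030)   # low- and high-price cases
```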

In the end, the gap between optimists and pessimists is narrowed by assessing a range of realistic inputs to arrive at data-driven future scenarios. That approach to consensus does not resolve all disagreements about policy. Debate will continue on questions such as whether natural-gas export terminals are well or ill advised and whether natural gas will provide decades of stable supply that enable a switch14 from coal-fueled power plants. But good science can improve the quality of decision making by reducing uncertainty.

Over the past two centuries, the applications that originate from physics have spawned many disciplines. However, physicists risk becoming marginalized because in so many subjects we are no longer the most specialized. We discovered electrons and holes, but we don’t design circuits as reliably as electrical engineers. We uncovered the atomic structure of matter, but we are not as knowledgeable about materials properties as materials scientists. The practical applications of physics to oil detection, transport, and recovery come from geoscientists and petroleum engineers.

All US NSF research-grant proposals are judged partly on the broader impacts of the research. In today’s environment it is tempting for research physicists to work on problems that are cute or pretty and to address those broader impacts with complacent lip service. We can and should do better. Physicists have plenty to contribute to issues of energy, education, and more.

As in the case of the decline curves described above, relevant ideas from physics may not be evident at first. Progress will come only in patient collaboration with colleagues in other disciplines. Engaging in such interdisciplinary work is more difficult than it seems for many reasons. One is a growing expectation that academic applied research will be funded by the industries that benefit. That makes sense in that companies are often best positioned to decide whether the work will be profitable, and they have excellent, otherwise inaccessible data.

However, the future of the world’s energy supply involves interests other than corporate profits, and the more government agencies back out of supporting it, the more the public should worry that research involves conflicts of interest. The problem is not limited to industry. Large nongovernmental organizations also fund and have the potential to influence academic research. Just because conflicts of interest are declared, as we do below as authors of this article, does not mean they are removed.

The topic of energy is indeed part of every first physics course on mechanics, taken by millions of students every year in high school and college. Yet when the time arrives to introduce the topic, energy somehow diminishes to a trick for solving problems with roller coasters.

When Physics Today last dealt broadly with energy, in April 2002, Ernest Moniz (currently US secretary of energy) and Melanie Kenderdine closed their article (page 40) with a list of recommendations that included implementing efficiency improvements, upgrading energy infrastructure, and developing new clean-energy technologies. They urged that there was “no time to lose.” Apart from the development of new hydrocarbon sources, most items on that list have not proceeded very far. The optimistic scenarios about the energy future will not come about by accident or default. Their arrival will depend on a realistic assessment of the risks we run, on interdisciplinary research to develop solutions, and on a well-informed public that supports science and understands how much modern society depends on consumable energy to survive.

Michael Marder acknowledges support from the Condensed Matter and Materials Theory Program of NSF and Shell Oil Co. Tadeusz Patzek acknowledges support from the Shell Oil Co and the Sloan Foundation. Scott Tinker acknowledges support from the Sloan Foundation. Additional information about possible conflicts of interest may be found at the Sloan website and http://www.beg.utexas.edu/people/scott-tinker.

1. A. A. Bartlett, Phys. Teach. 34, 342 (1996).
2. M. Inman, The Oracle of Oil: A Maverick Geologist’s Quest for a Sustainable Future, W. W. Norton (2016).
3. K. Aleklett, Peeking at Peak Oil, M. Lardelli, trans., Springer (2012), p. 48.
4. R. G. Miller, S. R. Sorrell, Philos. Trans. R. Soc. A 372, 20130179 (2013).
5. J. H. Ausubel, “Resources Are Elastic” (1999), http://phe.rockefeller.edu/EMwinter.
6. S. J. M. Eaton, Petroleum: A History of the Oil Region of Venango County, Pennsylvania, J. P. Skelly (1866), p. 283.
7. BP plc, BP Statistical Review of World Energy, 65th ed. (June 2016).
8. L. P. Dake, Fundamentals of Reservoir Engineering, Elsevier (1978).
9. J. J. Arps, Trans. Am. Inst. Min. Metall. Eng. 160, 228 (1945).
10. E. Abrahams et al., Phys. Rev. Lett. 42, 673 (1979).
11. T. W. Patzek, F. Male, M. Marder, Proc. Natl. Acad. Sci. USA 110, 19731 (2013).
12. F. Male et al., J. Unconv. Oil Gas Resour. 10, 11 (2015).
13. S. Ikonnikova et al., Econ. Energy Environ. Policy 4, 19 (2015).
14. For more on the transition from oil and coal to other energy sources, see www.switchenergyproject.com.
15. L. M. A. Bettencourt et al., Proc. Natl. Acad. Sci. USA 104, 7301 (2007).
16. J. A. Tainter, T. W. Patzek, Drilling Down: The Gulf Oil Debacle and Our Energy Dilemma, Copernicus/Springer (2012).

Michael Marder is a professor of physics at the University of Texas at Austin. Tadeusz Patzek is a professor of Earth sciences and engineering and director of the Upstream Petroleum Engineering Research Center at the King Abdullah University of Science and Technology in Thuwal, Saudi Arabia. Scott Tinker is a professor of geosciences and director of the Bureau of Economic Geology at the University of Texas at Austin.