Many years ago I read an essay in which the author—I think it might have been Frederik Pohl—complained about historical inaccuracies in sword-and-sorcery novels. Of course, it's hardly "wrong" in such works to have fire-breathing dragons, power-bestowing rings, or shape-shifting ravens. Rather, what so irked Pohl were errors in depicting a plausible world based on medieval technology.
In particular, he lambasted authors for getting the economics wrong. The lack of labor-saving devices in the Middle Ages meant that growing and raising food, the main economic activity, occupied most of the people most of the time. A tale of warring princes may have talking bears and magic cloaks, but it must also have plenty of peasants!
Pohl's polemic came back to mind when I encountered an essay in the latest issue of Isis, one of the journals carried by the Niels Bohr Library and Archives. Under the title "Time, money, and history," David Edgerton of Imperial College London urges his fellow historians of science to include economic factors more fully in how they regard and study science.
Edgerton presents several pieces of historical evidence to make his case. For example, in his view the R&D component of the Manhattan Project is routinely overestimated and overemphasized by historians (or "oversized," to use Edgerton's word). Of the project's $2 billion budget, only $70 million, or 3.5%, was spent on R&D. The lion's share went to DuPont and other large corporations for building two nuclear factories, at Oak Ridge in Tennessee and at Hanford in Washington State.

Addressing his fellow historians, Edgerton writes:
We need to follow all the money, not just that going to the university. Rough estimates of the comparative scale of industrial, government, and academic research through the century show that the usual implicit maps of the historians systematically oversize academic research by comparison with government and industrial research. Industry and the military (largely in industry) have been—nearly everywhere and nearly always—the main funders of research and development. Not only research within the academy but, indeed, those aspects of academic research least connected to industry are oversized—physics, particularly particle physics, and biology, particularly molecular biology—while chemistry, mathematics, and engineering are undersized.
Edgerton's essay triggered another literary recollection—this time, of a paper I'd read in the March issue of the British Journal for the History of Science. In "The limits to ‘spin-off’: UK defence R&D and the development of gallium arsenide technology," Graham Spinardi of Edinburgh University tells a fascinating story of government-sponsored R&D. And in doing so, he implicitly supports Edgerton's case.
Gallium arsenide is a semiconductor whose properties make it better than silicon for certain applications. Thanks to its excellent electron mobility, GaAs can operate at the high frequencies used for mobile telephony. And thanks to its direct, as opposed to indirect, bandgap, GaAs beats Si as a material for making lasers, LEDs, and other optoelectronic devices.
But Si has offsetting advantages that continue to give it an edge over GaAs and other semiconductors in computational applications. Si is cheap, stable, and readily doped. Its native oxide, silicon dioxide, is an excellent built-in insulator. And its hole mobility, though an order of magnitude lower than the electron mobility of GaAs, is still high enough to support gigahertz clock rates.
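Those mobility figures map directly onto device speed. As a rough illustration (my own back-of-envelope sketch, not a calculation from Spinardi's paper), the short Python script below turns textbook room-temperature mobilities into transit-time frequency limits for a hypothetical 1-micron transistor channel. The field strength, the channel length, and the simple linear drift model, which ignores velocity saturation, are all assumptions chosen for illustration.

import math

# Back-of-envelope estimate of how carrier mobility limits transistor speed.
# Mobilities are textbook room-temperature values, not figures from the paper;
# the field and channel length are illustrative assumptions.
MOBILITIES = {             # cm^2 / (V s)
    "GaAs electrons": 8500.0,
    "Si electrons": 1400.0,
    "Si holes": 450.0,
}

E_FIELD = 1.0e4            # V/cm, a modest channel field (assumed)
CHANNEL_LENGTH = 1.0e-4    # cm, a hypothetical 1-micron channel (assumed)

for carrier, mobility in MOBILITIES.items():
    # Linear drift model: v = mu * E (no velocity saturation)
    v_drift = mobility * E_FIELD                     # cm/s
    transit_time = CHANNEL_LENGTH / v_drift          # seconds to cross channel
    f_cutoff = 1.0 / (2.0 * math.pi * transit_time)  # crude cutoff frequency, Hz
    print(f"{carrier:>14}: ~{f_cutoff / 1e9:.0f} GHz")

Crude as it is, the estimate reproduces the ordering described above: GaAs electrons come out more than an order of magnitude faster than Si holes (roughly 135 GHz versus 7 GHz in this toy model), yet even the Si holes land comfortably in the gigahertz range.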
By the mid-1950s, Si's predominance in commercial electronics was clear. It was also clear that semiconductors would have military applications. Recognizing that market forces alone would likely propel Si-based technologies, the British defense establishment decided at that time to fund research into the less-developed GaAs. The goal, to quote one of Spinardi's sources, was "to leapfrog silicon technology."
Fateful repercussions
That decision had fateful repercussions for Britain's electronics industry. In a sense, the British government's investment in GaAs paid off. By the 1990s, when the commercial applications of GaAs in LEDs and microwave telecommunications were taking off, British labs had already developed devices and manufacturing techniques that could support a consumer-focused GaAs industry.
But as Spinardi explains, the companies behind those labs, which included Plessey and Marconi, opted instead to keep developing devices for their original military sponsors. What's more, their decades-long focus on GaAs had left them little capacity to develop Si-based technologies. Britain's electronics industry thus missed out on both the Si boom and the GaAs boom.
To discover why British companies failed to achieve commercial success in GaAs in proportion to their expertise, Spinardi interviewed researchers and managers and studied documents from labs and government departments. His conclusion: given the economic and industrial conditions of the time, those companies did not act foolishly. In fact, they developed successful products in three areas: defense, equipment for processing GaAs, and radar.
Those areas, Spinardi notes, have one thing in common: They're inhabited by a few big commercial or government customers rather than by thousands or millions of individual customers. To quote Spinardi: "Unlike consumer products, where buyers are sought after production, made-to-order goods are only produced once a buyer has agreed terms. Investment is therefore far less risky because it can be based on, and costed into, procurement contracts."
That aversion to risk partly reflected British corporate governance. The big British electronics companies were publicly traded and answerable to their shareholders. In the short term, bidding on large procurement contracts made more commercial sense than gambling on mass consumer markets.
It might also have made sense in the long term. As British investment in GaAs was beginning to bear fruit in the 1980s, Sony, Matsushita, and other deep-pocketed Japanese companies were moving into consumer electronics, and the US had begun investing in military applications of semiconductors. Competing against the Japanese in the commercial sector, or against the Americans in the military sector the British firms knew best, might have been a futile and costly mistake.
Whether Britain's investment in GaAs was a success or a failure is a matter of perspective. On the one hand, the expected military applications were realized, to the gain of both Britain's armed forces and their British suppliers. On the other hand, once that investment had been made, the final step to mass-market success was surely small enough that at least one company could have, and maybe should have, taken the financial risk and jumped. Spinardi concludes his paper with this observation:
The dominance of defense in the post-war UK innovation system helped provide a technology base with much spin-off potential, but ironically it also engendered industrial conditions that may have limited the capacity of UK industry to make the most of this.