Post and Votta reply: The readers who responded to our article offer interesting and valid comments, some of which merit further discussion.
Thomas Sheahen makes several points supporting our premise that computational science’s credibility needs improvement; otherwise, the science cannot inform strategic decisions affecting society. With regard to Sheahen’s specific criticisms, climate models have included clouds since the 1960s. 1
Global warming is a fact. Identifying its causes and predicting future warming are active areas of research. A library search for papers on global warming turned up more than 5000 published since January 2000. The climate-modeling community has recognized that existing models are inadequate and has been working to improve them and to identify and add new effects. 2 The Community Climate System Model and Earth System Modeling Framework programs were formed to coordinate the national and, to some extent, the international efforts in this area. One of us (Post) attended the 9th CCSM Workshop in Santa Fe, New Mexico, in 2004. There it was evident that the models are being improved, software engineering is becoming a key part of the CCSM program, and the verification and validation process is becoming a central part of model development.

The international climate-modeling community has a program with a multibillion-dollar annual budget to gather, analyze, and store detailed data on continental, oceanic, atmospheric, and polar weather and climate phenomena. 3 The newer models generally confirm what the earlier models identified: the importance of CO2 emissions in global warming. Other predictions of climate and weather phenomena are also making an impact. For example, predictions of El Niño effects are being used to make agricultural decisions. 4 Although the models have made tremendous and continual progress, immense challenges remain, and work to address them continues.
Craig Bolon’s assertion that computational science needs an injection of the project discipline employed by IT organizations like Microsoft is not entirely correct. Computational science and engineering will benefit from improved software project-management processes, but not necessarily from the same kinds used in the IT industry. For instance, IT processes emphasize detailed and prescriptive requirements, but such requirements are difficult to develop for scientific software. Code development for computational science usually involves research to find the best solutions and the most accurate models needed for credible answers. It’s also not clear that Microsoft Windows is without problems. Windows XP recently had to be massively rewritten to minimize security vulnerabilities, and Windows users now download security patches frequently, sometimes daily.
The paper Rudolf Eigenmann refers to is “The T Experiments: Errors in Scientific Software,” by Les Hatton. 5 It illustrates the value of “code benchmarking”—comparing the results of many codes for a single problem and determining the reasons for divergent results.
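As a rough illustration of the idea, the following sketch (with hypothetical code names and results, not data from Hatton’s study) compares several codes’ answers to one problem and flags outliers for investigation:

```python
import statistics

def benchmark(codes, problem):
    """Run every code on the same problem and report each result's
    relative deviation from the median answer."""
    results = {name: solve(problem) for name, solve in codes.items()}
    median = statistics.median(results.values())
    for name, value in results.items():
        # A large deviation flags a code for closer inspection:
        # differing models, numerical schemes, or outright bugs.
        deviation = abs(value - median) / abs(median)
        print(f"{name}: {value:.4f} (deviation from median {deviation:.1%})")
    return results

# Hypothetical stand-ins for independent simulation codes, each
# returning its estimate of the same scalar quantity.
codes = {
    "code_A": lambda p: 1.000,
    "code_B": lambda p: 1.002,
    "code_C": lambda p: 0.870,  # divergent result worth investigating
}
benchmark(codes, problem=None)
```

Comparing against the median rather than against any single reference avoids anointing one code as the truth; the point of the exercise is to explain the divergences, not to declare a winner.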
Josip Loncaric expands on two key points in our paper. The first is that developers and users of scientific and engineering codes need considerable domain knowledge to ensure that their results are as accurate as possible. We think this situation is probably getting worse rather than better. The challenge of developing codes for very complex, massively parallel computers has increased the emphasis on programming skills. As a result, the emerging generation of computational scientists is skilled in code development but much less so in the relevant scientific discipline.
The second point Loncaric highlights is that a model for a natural system—physical, chemical, biological, and so forth—is often much more than the sum of the individual components. For physical systems, Robert Laughlin recently pointed out that much of science today is inherently reductionist. 6 Present scientific research paradigms emphasize the detailed study of the individual elements that contribute to a complex system’s behavior. High-energy physics, for example, involves the study of fundamental particles at progressively higher accelerator energies. Yet successful models of complex systems, such as low-temperature superconductors, are relatively insensitive to the detailed accuracy of the individual constituent effects. Laughlin stresses that successful models capture the emergent principles that determine the behavior of complex systems. Examples of these emergent principles are conservation laws, the laws of thermodynamics, detailed balance, and preservation of symmetries.
Since a computational simulation is only a model of nature, not nature itself, there is no assurance that a collection of highly accurate individual components will capture the emergent effects. Yet most computational simulations implicitly assume that if each component is accurate, the whole code will be accurate. Nature includes all of the emergent phenomena; a computational model may not. This perspective underscores the importance of validating both the individual models and the integrated code as a whole.
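A toy numerical sketch of this point (our illustration, not an example from the article): each forward-Euler step of a harmonic oscillator is locally accurate, yet the integration steadily violates energy conservation, an emergent invariant; a semi-implicit (symplectic) rearrangement of the very same components respects it.

```python
def energy(x, v):
    return 0.5 * (x * x + v * v)  # unit mass, unit spring constant

def integrate(symplectic, steps=10_000, dt=0.01):
    x, v = 1.0, 0.0
    for _ in range(steps):
        if symplectic:
            v = v - x * dt   # semi-implicit: update velocity first,
            x = x + v * dt   # then position with the new velocity
        else:
            x, v = x + v * dt, v - x * dt  # plain forward Euler
    return energy(x, v)

print(f"initial energy:   {energy(1.0, 0.0):.3f}")
print(f"forward Euler:    {integrate(False):.3f}")  # drifts upward, to ~1.36
print(f"symplectic Euler: {integrate(True):.3f}")   # stays near 0.50
```

The lesson is that validation must test the integrated calculation against conserved quantities and other emergent constraints, not merely verify each component in isolation.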
Denes Marton takes us to task for possibly misspelling Széchenyi. Unfortunately, non-Hungarian speakers like us can find all three spellings of the bridge’s name (Széchenyi, Széchényi, and Szechenyi) in various sources. We relied on the advice of a Hungarian colleague, who has admitted, on further inquiry, that Széchenyi is likely correct.
All the English-language accounts we could find mentioned the original bridge construction in the 1840s and the reconstruction after World War II. We don’t doubt Marton’s additional historical details about the reconstruction of 1913–1915, but we couldn’t find an English account of them.