My first professional encounter with the Monte Carlo method came not during my long-abandoned career as an astronomer, when I might have used the computational technique, but years later, when I ran Physics Today’s Search and Discovery department.
In 2004, I faced the task of describing a new Monte Carlo algorithm. Devised by Erik Luijten (while taking a shower, he told me), the new algorithm could do what the standard one, the Metropolis algorithm, couldn’t: efficiently simulate a colloid whose suspended particles had widely different sizes.
Suspecting that some of my readers might be unfamiliar with Metropolis, I included a short tutorial. I pointed out that using an alternative, more direct simulation method—molecular dynamics (MD)—was impractical: It’s possible to calculate the forces acting on all the colloid’s particles, but only for a modest number of consecutive time steps. The movie-like simulation that MD produces would be too brief to provide physical insight.
But the Metropolis algorithm, I told my readers, doesn’t follow every particle all the time. Rather, it calculates snapshots of the system and uses statistical mechanics to combine them. Comparing the two methods, I wrote:
So, if MD is like a movie, the Metropolis algorithm is like a sparse set of shuffled snapshots. If you simulated a cocktail party with the Metropolis algorithm, you wouldn’t see dynamical events, such as guests arriving and departing, or rare events, such as a waiter refilling a punchbowl. But, taken together, the Metropolis snapshots would fairly represent the party in full swing. From them, you could deduce whether, on average, people had enjoyed themselves.
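To make the idea concrete, here is what a bare-bones Metropolis loop looks like in Python. The toy system (a single particle in a one-dimensional harmonic well at unit temperature) and every parameter value are illustrative choices of mine, not anything drawn from Luijten’s colloid algorithm; the point is only that a stream of accepted snapshots, averaged together, reproduces an equilibrium property.

```python
# A minimal sketch of Metropolis sampling, not Luijten's colloid algorithm:
# one particle in a 1D harmonic well, U(x) = x^2/2, at temperature T = 1.
# The loop produces "snapshots" whose average reproduces the exact
# equilibrium value <x^2> = T.
import math
import random

def energy(x):
    return 0.5 * x * x              # harmonic potential U(x) = x^2 / 2

def metropolis(n_steps=100_000, temperature=1.0, step_size=1.0):
    x = 0.0                          # starting configuration
    samples = []
    for _ in range(n_steps):
        x_new = x + random.uniform(-step_size, step_size)   # trial move
        dE = energy(x_new) - energy(x)
        # Accept downhill moves always; accept uphill moves with the
        # Boltzmann probability exp(-dE/T).
        if dE <= 0 or random.random() < math.exp(-dE / temperature):
            x = x_new
        samples.append(x)            # one "snapshot" of the system
    return samples

snapshots = metropolis()
mean_sq = sum(x * x for x in snapshots) / len(snapshots)
print(f"<x^2> from snapshots: {mean_sq:.3f} (exact value: 1.000)")
```

The acceptance rule is what makes the shuffled snapshots trustworthy: because uphill moves are admitted with the Boltzmann probability, the collection of snapshots fairly represents the equilibrium ensemble, just as the party snapshots fairly represent the party.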
My latest brush with Monte Carlo happened last week. Looking for research to write about, I came across a paper by Luis Zamora and his colleagues entitled “A Monte Carlo tool to study the mortality reduction due to breast screening programs.”
Screening for breast cancer is difficult and controversial. It’s difficult because the principal method, x-ray mammography, cannot by itself determine whether a lesion is malignant. Because of that limitation, follow-up biopsies are essential, but most lesions—roughly 4 in 5—turn out to be benign.
Controversy surrounds the question of when to start screening. Not only is the disease harder to detect in young women, but it’s also less prevalent. Definitive evidence in favor of screening women aged between 40 and 49 years is lacking. Yet doctors—who treat individuals, not populations—are reluctant to tell patients under 50 that they don’t need a mammogram yet. Why take even a small risk?
The tool that Zamora and his colleagues have built simulates the fate of a population of women who enter a screening program. You can adjust the program’s age range and participation rate. Clinically derived parameters, such as the probability of detecting a tumor, are incorporated into the tool.
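Zamora and his colleagues build their tool on clinically derived parameters and a detailed model of disease progression, but the skeleton of such a population-level simulation can be sketched in a few lines. The sketch below is my own schematic guess at that skeleton; every number in it is an invented placeholder, not a value from their paper.

```python
# A schematic, toy version of a screening-program simulation.
# All probabilities below are illustrative placeholders, NOT the
# clinically derived parameters used by Zamora and colleagues.
import random

def simulate(n_women=100_000, participation=0.7,
             p_cancer=0.10,          # lifetime incidence (placeholder)
             p_detect=0.8,           # chance screening catches the tumor early
             p_death_late=0.30,      # mortality if found late (placeholder)
             p_death_early=0.10):    # mortality if found early (placeholder)
    deaths = 0
    for _ in range(n_women):
        if random.random() >= p_cancer:
            continue                              # never develops the disease
        screened = random.random() < participation
        caught_early = screened and random.random() < p_detect
        p_death = p_death_early if caught_early else p_death_late
        if random.random() < p_death:
            deaths += 1
    return deaths

baseline = simulate(participation=0.0)            # no screening program
with_program = simulate(participation=0.7)        # 70% participation
reduction = 100 * (baseline - with_program) / baseline
print(f"Simulated mortality reduction: {reduction:.0f}%")
```

The participation argument is the knob to watch; its importance in the real tool emerges below.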
Zamora and his colleagues present their results in graphs and tables, which are hard to summarize in a short column. They predict, for example, that breast cancer mortality can be reduced by 29% if 100% of women aged 50–70 are screened every two years.
But they did discover what appears to be a critical parameter. For a screening program to be effective, its participation rate must be at least 50%. In the US, where 16.3% of the population lacks health insurance, that target is unfortunately ambitious.
This essay by Charles Day first appeared on page 88 of the March/April 2013 issue of Computing in Science & Engineering, a bimonthly magazine published jointly by the American Institute of Physics and IEEE Computer Society.