Applying for time to use telescopes, synchrotron light sources, nanofabrication labs, and other shared research facilities can be fraught. “It drives a lot of discussion and angst,” says Mike Dunne, SLAC associate laboratory director and head of the Linac Coherent Light Source, the lab’s free-electron laser. “The oversubscription rate is high, and no system is perfect.”

Many publicly funded facilities are open to scientists from around the world. Demand for access to many of them is rising, and acceptance rates are correspondingly slipping. Evaluating the large numbers of proposals has become a growing challenge.

In efforts to increase fairness and efficiency, facility and program managers are tweaking, testing, and studying variations on traditional time-allocation procedures. Changes include requiring that applicants commit to reviewing other proposals, using machine learning to assign proposals to reviewers, and making peer review anonymous. Studies on the effectiveness of hybrid and remote panels, scoring schemes, review length, and combinations of evaluations and lotteries are also underway.

The overriding criterion for allocating time and resources to users is scientific impact. “Is it transformative for science?” says Christine Chen, science policy group lead for the James Webb Space Telescope (JWST). The oversubscription rate in its most recent cycle was about nine to one.

Using a customized scanning electron microscope, Swarup China studies properties of aerosols. The microscope is one of more than 150 instruments that researchers compete to use at the Environmental Molecular Sciences Laboratory at Pacific Northwest National Laboratory.

ANDREA STARR/PACIFIC NORTHWEST NATIONAL LABORATORY

In most cases, proposals are evaluated by external reviewers, first individually and then in panel discussions. The panels then assign a ranking or score. Along the way, in-house checks are made for technical feasibility, duplication, instrument configuration, and other potential conflicts.

Aiming to reduce gender bias in its allocation awards, in 2018 the Space Telescope Science Institute introduced dual-anonymous peer review for the Hubble Space Telescope and then in 2021 did so for the JWST. The approach requires that proposers avoid putting any identifying information in applications—no names, no institutions, no giveaway achievements or collaborations. (See “Doling out Hubble time with dual-anonymous evaluation,” by Lou Strolger and Priyamvada Natarajan, Physics Today online, 1 March 2019, and “Dual-anonymous peer review gains traction,” by Rachel Berkowitz, Physics Today online, 16 December 2021.) Gender, age, and prestige bias all fell. For Hubble, says Chen, gender bias dropped from about 5% to around 1%. And last year 12% of the successful JWST proposals were headed by students, up from 1–2% before dual-anonymous review.

In addition, scientists involved in the review process say that dual-anonymous assessment has shifted conversations. “Panel discussions became more focused and efficient,” says B-G Andersson, who from 2008 until 2022 oversaw time allocation for the SOFIA airborne observatory, a collaboration of NASA and the German Aerospace Center. Panel members “are not distracted. They stick to the science,” says Andersson, now associate director for research at the University of Texas at Austin's McDonald Observatory. “And they aren’t tempted to try helping their friends.”

Rick Washburn manages the user program for the Environmental Molecular Sciences Laboratory (EMSL) at Pacific Northwest National Laboratory. The facility offers scientists access to more than 150 instruments, including NMR machines, mass spectrometers, and electron microscopes. In 2022 EMSL tried dual-anonymous reviewing, he says, and it is broadening its implementation of the approach given the “quantifiable improvements in new principal investigators and gender equity and qualitative improvements in the panel discussions.”

Some facilities have introduced an evaluation system whereby each team that competes for time commits to reviewing other proposals. If a team fails to submit reviews, it is disqualified from competing for time allocations for that round.

The Atacama Large Millimeter/submillimeter Array (ALMA) piloted so-called distributed peer review in 2019 and now uses it for nearly all proposals. The exceptions are for the most time-intensive projects—the roughly 40 proposals a year requesting more than 50 hours on the facility’s 12-meter array or more than 150 hours on its 7-meter array—and for time allocated at the director’s discretion. About 240 of the roughly 1700 proposals ALMA receives a year are awarded time.

With panels, says ALMA observatory scientist John Carpenter, “we would have to gather 150 reviewers, and a reviewer would have to read about 100 proposals and discuss them over a couple of days. That became difficult to sustain.” With distributed peer review, says Carpenter, who introduced the approach at ALMA, “instead of a few people reviewing many proposals, many people review a few.”
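
The bookkeeping behind that commit-to-review rule is simple enough to sketch. The Python below is a minimal illustration, not ALMA’s actual procedure: the team names, the review quota, and the cyclic assignment scheme are all hypothetical, and real facilities also match reviewers to proposals by expertise and screen for conflicts of interest.

```python
# A minimal sketch of distributed peer review: every submitting team
# reviews a fixed number of other teams' proposals, and teams that fail
# to return their reviews are disqualified. All names and numbers are
# illustrative.

REVIEWS_PER_TEAM = 3

def assign_reviews(team_ids):
    """Give each team REVIEWS_PER_TEAM other teams' proposals to review.

    Cyclic shifts guarantee that no team reviews its own proposal and
    that every proposal receives exactly REVIEWS_PER_TEAM reviews.
    """
    n = len(team_ids)
    return {
        team_ids[i]: [team_ids[(i + s) % n] for s in range(1, REVIEWS_PER_TEAM + 1)]
        for i in range(n)
    }

def enforce_participation(assignments, submitted):
    """Keep only teams that returned every review they were assigned."""
    return {
        team for team, duties in assignments.items()
        if all((team, target) in submitted for target in duties)
    }

teams = ["T1", "T2", "T3", "T4", "T5"]
duties = assign_reviews(teams)

# Suppose team T4 skipped one of its three assigned reviews.
submitted = {(t, g) for t, targets in duties.items() for g in targets}
submitted.discard(("T4", duties["T4"][0]))

print("Still eligible for time:", sorted(enforce_participation(duties, submitted)))
# -> Still eligible for time: ['T1', 'T2', 'T3', 'T5']
```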

Last year ALMA began using machine learning to assign proposals to reviewers. “It works pretty well,” Carpenter says. “We use keywords to filter out bad matches and are using new algorithms to optimize the assignments.”
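
ALMA has not published the details of that matching pipeline, but the general technique—scoring the text similarity between each proposal and each reviewer’s stated expertise, then filtering out bad matches—can be sketched. In the Python below, the abstracts, expertise strings, and conflict list are invented stand-ins, and the greedy pass is a simplification; a production system would optimize the assignments globally.

```python
# A minimal sketch of similarity-based reviewer matching. The proposal
# abstracts, reviewer expertise strings, and conflict list are invented
# for illustration; ALMA's actual algorithm is not public.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

proposals = [
    "protoplanetary disk dust continuum survey",
    "CO line emission in high redshift starburst galaxies",
    "molecular outflows from young stellar objects",
]
reviewers = [
    "protoplanetary disk evolution and planet formation",
    "star formation and outflows from young stellar objects",
    "high redshift galaxies and submillimeter surveys",
]

REVIEWS_PER_PROPOSAL = 2   # how many reviewers each proposal gets
MAX_LOAD = 2               # review slots per reviewer
conflicts = {(0, 1)}       # e.g., reviewer 1 collaborates with proposal 0's team

# Embed proposals and reviewer expertise in a shared TF-IDF space and
# score every proposal-reviewer pairing by cosine similarity.
vec = TfidfVectorizer().fit(proposals + reviewers)
sim = cosine_similarity(vec.transform(proposals), vec.transform(reviewers))

# Keyword/conflict filter: rule out pairings flagged as bad matches.
for p, r in conflicts:
    sim[p, r] = -1.0

# Greedy pass: each proposal takes its most similar eligible reviewers.
# A real system would instead solve a global assignment problem.
load = np.zeros(len(reviewers), dtype=int)
for p in range(len(proposals)):
    ranked = np.argsort(sim[p])[::-1]
    chosen = [r for r in ranked if sim[p, r] >= 0 and load[r] < MAX_LOAD]
    chosen = chosen[:REVIEWS_PER_PROPOSAL]
    load[chosen] += 1
    print(f"proposal {p} -> reviewers {sorted(int(r) for r in chosen)}")
```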

The European Southern Observatory (ESO) manages about 1800 proposals a year for the Very Large Telescope (VLT), the VLT interferometer, and its other telescopes in Chile. On average, about a quarter of them win time, but for the most in-demand telescope, which is equipped with the Multi-Unit Spectroscopic Explorer, or MUSE, only one-eighth of proposals are successful, according to Nando Patat, who heads ESO’s Observing Programmes Office.

About half the proposals to ESO are evaluated with distributed peer review. Patat notes that it’s easy to find experts—and in turn to get more helpful reviews—from the large pool of applicants. And because the panels have a reduced load, he says, “reviewers now have more time to discuss the proposals. They are happy, it’s easier to recruit reviewers, and they provide better feedback to the applicants.”

The Atacama Large Millimeter/submillimeter Array in Chile is among the world’s most in-demand scientific facilities. Requests for observing time currently exceed the time available by more than a factor of seven.

ESO/B. TAFRESHI (TWANIGHT.ORG)

Some scientists worry that their proposals don’t get proper scrutiny with distributed review. They cite the lack of discussion, the possibility that graduate students—rather than more senior scientists—evaluate proposals, the difficulty in assigning knowledgeable reviewers to projects outside the mainstream, and possible conflicts of interest if the reviewer wants time on the same instrument. “I have yet to meet someone who says distributed peer review is great,” says Meredith MacGregor, an astronomer at Johns Hopkins University. “Most people are frustrated. It seems chaotic.”

But Patat notes that distributed peer review minimizes bias, fosters better proposal–referee matching, puts more eyes on each proposal, and reduces the load on individual reviewers. He cites surveys in which applicants say they are satisfied with the process. And he, Carpenter, and others on the administrative side report that junior scientists take reviewing seriously. Based on surveys of applicants, Patat says that if anything, junior scientists provide better feedback than do senior scientists.

Distributed peer review is easier for observatory staff. In the last allocation cycle at the JWST, says Chen, “we had to recruit 600 reviewers. It’s hard.” Still, the JWST is mostly sticking with the panel format. “Understanding who has what expertise is important,” she says. “And we want to avoid harassment in committee situations, people who dominate the conversation, and other inappropriate behavior. We are very careful about who we invite to participate on discussion panels.” (For proposals to observe for 20 hours or less, the JWST uses a variation of distributed peer review, with reviewers coming from its time-allocation committee rather than from proposers.)

Observatory spokespeople and users both acknowledge that the lack of panel discussions about proposals is a shortcoming of distributed peer review. “There are times when someone misunderstands or misses something, and their opinion—and the final assessment—changes in the course of discussion,” says Carpenter. The observatories try to compensate by giving reviewers a chance to adjust their assessments after they have seen those of others, but at ALMA, for example, that occurs in only about 8% of cases, he says.

Although distributed peer review is new for observatories and other user facilities, computer science has been using the approach to vet conference papers for decades, says Nihar Shah, a Carnegie Mellon University computer scientist who studies peer review. “A conference may have 20 000 submissions,” he says. Shah has advised both ALMA and ESO on the process and the pros and cons of distributed peer review. He studies related topics such as the relative benefits of ranking versus rating proposals, automated assignment of reviewers, how to incentivize reviewers to write meaningful reviews, and how to spot and avoid collusion rings—when researchers maneuver to get each other as a reviewer and thus boost their chances of success.

Alison Hatt is a communications lead at Lawrence Berkeley National Laboratory and previously ran user programs at the lab’s Molecular Foundry and at EMSL. As an independent consultant, she interviewed a dozen user facility representatives for a study, commissioned by the Advanced Light Source at Berkeley, on peer-review practices.

Hatt recommends that facilities take a coarser-grained approach to scoring proposals and then apply “a partial lottery.” After accepting the top proposals and rejecting the worst, the ones in the middle, which can be tricky and subjective to differentiate among, could be chosen at random. Even with dual-anonymous reviews, she says, bias is not eliminated completely. “Humans do the evaluating, so it’s not really quantitative,” Hatt says. The facilities haven’t yet adopted lotteries, “but they are considering it. They are more receptive than I expected.”
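
Hatt’s scheme reduces to a few lines of code. The sketch below is a minimal illustration only; the 1-to-10 score scale, the cutoffs, and the number of awards are hypothetical choices, not values from her study.

```python
# A minimal sketch of a partial lottery: accept the clear top, reject
# the clear bottom, and fill the remaining slots by random draw from
# the hard-to-rank middle band. Scores and cutoffs are illustrative.

import random

def partial_lottery(scored, n_awards, accept_cutoff, reject_cutoff, seed=None):
    """scored: list of (proposal, panel_score) pairs, higher is better."""
    top = [p for p, s in scored if s >= accept_cutoff]        # clear accepts
    middle = [p for p, s in scored if reject_cutoff < s < accept_cutoff]
    rng = random.Random(seed)
    slots = max(n_awards - len(top), 0)
    return top + rng.sample(middle, min(slots, len(middle)))

# Hypothetical panel scores on a 1-10 scale; three awards available.
pool = [("A", 9.1), ("B", 8.7), ("C", 6.5), ("D", 6.4), ("E", 6.3), ("F", 3.0)]
print(partial_lottery(pool, n_awards=3, accept_cutoff=8.0, reject_cutoff=4.0, seed=1))
# -> ['A', 'B'] plus one of 'C', 'D', or 'E' drawn at random
```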

So what does a researcher have to do to win time on telescopes and other user facilities? And what can they do if they don’t succeed? Johns Hopkins’s MacGregor says she has been successful but still “doesn’t have a good understanding of what it takes to get time on telescopes.” Her approach is to ask only for the time she’ll need and to show that her science results will have an impact beyond her own immediate research.

Burçin Mutlu-Pakdil, an assistant professor of astronomy at Dartmouth College (see the interview in Physics Today, August 2024, page 24), has won time on Hubble and other telescopes. “I am constantly asking for time,” she says. “I write proposals every other month.” For her proposals, she says, she simulates the observations. “You have to convince the observatory that there is no risk. You need to argue that your observation has impact whether you see what you expect or not.”

Researchers who don’t win time on facilities can apply again. They can apply for less time and use those results to bolster their case for more time. In astronomy, they can mine archival data. They can apply at other facilities. They can team up with other scientists who have won time. “Working in multiple wavelengths makes me more resilient,” says MacGregor. “It’s best to have multiple projects going.”

1. L. Strolger, P. Natarajan, “Doling out Hubble time with dual-anonymous evaluation,” Phys. Today, 1 March 2019.
2. R. Berkowitz, “Dual-anonymous peer review gains traction,” Phys. Today, 16 December 2021.