For three and a half years after its launch in 2009, the Kepler space telescope remained fixed on the same 150 000 stars in a 115-square-degree patch of sky. Software would sift through the data sent to Earth and identify probable exoplanets by flagging periodic dips in stellar brightness caused by the planets crossing in front of their suns.
But in 2013 the second of the spacecraft’s four stabilizing reaction wheels failed. To keep the mission going, the Kepler team was forced to reorient the telescope. During the 2014–18 K2 mission, Kepler surveyed different target regions along the ecliptic for about 80 days each, all while repositioning itself multiple times daily to counteract the push from the Sun’s radiation pressure. The result was a trove of noisy data that inevitably contained planet-caused brightness dips, but those dips were impossible to parse with the algorithms developed for the original mission.
Now Jon Zink of UCLA and his colleagues have put their newly developed planet-identifying software to the test and analyzed more than 220 000 stars that caught the gaze of Kepler during K2. The resulting sample of planet candidates, about half of which had not been identified in previous studies, should enable researchers to do statistical analyses on K2 data rather than just manually sift through stellar brightness curves for individual planets.
The researchers’ planetary detection software first removes known sources of systematic noise, particularly from the repositioning of the telescope that occurred every six hours. Another algorithm looks for and eliminates periodic signals that are not caused by planets, such as brightness changes from stellar variability. Then the software searches the cleaned-up light curves for pronounced brightness dips that resemble planetary transits. It found 140 000.
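The core of that search step can be illustrated with a toy phase-folding "box" search in Python. This is a simplified stand-in, not the authors' pipeline: it folds the light curve at each trial period and looks for the period whose folded curve shows the deepest dip. All the numbers (period grid, noise level, bin count) are made up for the demonstration.

```python
import numpy as np

def box_search(time, flux, periods, n_phase=200):
    """Fold the light curve at each trial period and report the period
    whose folded curve shows the deepest dip. A toy transit search."""
    best_period, best_depth = None, 0.0
    for period in periods:
        phase = (time % period) / period               # fold onto [0, 1)
        idx = np.minimum((phase * n_phase).astype(int), n_phase - 1)
        binned = np.array([flux[idx == i].mean() if np.any(idx == i) else 1.0
                           for i in range(n_phase)])
        depth = 1.0 - binned.min()                     # deepest phase bin
        if depth > best_depth:
            best_period, best_depth = period, depth
    return best_period, best_depth

# Synthetic 80-day campaign: transits every 9.3 days, 1% deep, mild noise
rng = np.random.default_rng(0)
t = np.arange(0.0, 80.0, 0.02)
f = 1.0 + 2e-4 * rng.standard_normal(t.size)
f[(t % 9.3) < 0.15] -= 0.01                            # inject transits

period, depth = box_search(t, f, periods=np.arange(5.0, 15.0, 0.1))
```

A real search must also contend with trends, data gaps, and period aliases, which is why the systematics removal described above has to come first.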
A series of steps to vet those signals—including looking for evidence that the signal was from instrument error or that the transiting body was a star rather than a planet—eliminated most of those detections from consideration. Once the software package identifies a planetary candidate, it pulls the host star’s properties, acquired by the star-mapping Gaia spacecraft, to determine planetary radius and other parameters.
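The last step follows from simple geometry: to first order, the fractional transit depth is the squared ratio of planetary to stellar radius, so a Gaia-informed stellar radius converts a measured dip directly into a planet size. A minimal sketch (the function and constant names are illustrative, not from the paper):

```python
import math

R_SUN_IN_R_EARTH = 109.2  # solar radius expressed in Earth radii

def planet_radius(transit_depth, stellar_radius_rsun):
    """Transit depth ~ (Rp / R*)**2, so Rp = R* * sqrt(depth).
    Returns the planetary radius in Earth radii."""
    return stellar_radius_rsun * math.sqrt(transit_depth) * R_SUN_IN_R_EARTH

# A 1% dip around a Sun-sized star implies a roughly Jupiter-sized planet
rp = planet_radius(0.01, 1.0)
```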
The researchers ran several tests to determine the effectiveness of their planet-hunting software. They mapped out the completeness of the sample for planets of various sizes and orbital periods by injecting artificial planetary transit signals into the raw data. They also calculated an approximate false-positive rate by inverting the light curves, making real planetary transits appear as brightening, rather than dimming, events. Based on the rate at which the software wrongly flagged those inverted curves as containing transits, the researchers estimate that 91% of the planetary candidates are true astrophysical signals.
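The inversion trick works because flipping a light curve about its baseline turns genuine transit dips into brightenings, which a transit detector should ignore, while preserving the noise statistics that produce spurious detections. A toy version, with a made-up threshold detector standing in for the real vetting machinery:

```python
import numpy as np

def inverted_flag_rate(light_curves, detect, baseline=1.0):
    """Invert each light curve about its baseline so that real transit dips
    become brightening bumps, then measure how often the detector still fires.
    `detect` is any function mapping a flux array to True/False."""
    flags = sum(detect(2.0 * baseline - flux) for flux in light_curves)
    return flags / len(light_curves)

# Toy demo: ten noisy curves, each with a genuine 1% transit dip,
# and a threshold detector that triggers on any dip deeper than 0.5%
rng = np.random.default_rng(1)
curves = []
for _ in range(10):
    flux = 1.0 + 2e-4 * rng.standard_normal(1000)
    flux[400:420] -= 0.01                  # inject a transit
    curves.append(flux)

detect = lambda f: f.min() < 0.995
rate = inverted_flag_rate(curves, detect)  # real dips vanish on inversion
```

Any nonzero rate on the inverted curves estimates how often noise alone fools the detector, which is the basis of the 91% reliability figure quoted above.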
The automated process identified 747 probable planets in the K2 data. If confirmed with observations by other telescopes, some of those exoworlds could prove to be valuable case studies for understanding planetary formation and evolution. But the researchers’ focus is less on individual planets than on the statistical power of their sample. A primary goal of the original Kepler mission was to determine the abundance in the galaxy of various kinds of planets, particularly ones that resemble Earth. With a sample of planets acquired through a vetted automated process, researchers can now perform the same kinds of statistical studies with K2 that they did on the original Kepler data set. That’s exactly what Zink and colleagues plan to do in an upcoming paper. Such demographic studies should complement those based on Kepler’s original field of view. The K2 stars span a larger range of galactic latitudes and include far more red dwarfs, which astronomers suspect may be capable of hosting Earth-like planets. (J. K. Zink et al., Astron. J., in press, https://arxiv.org/abs/2109.0267.)