Thanks to a number of technical developments, the precision of RV surveys has been steadily improving. While the spectrographs of fifty years ago yielded RVs with errors in excess of 1 km/s, today's state-of-the-art stabilised spectrographs boast precisions of 10 cm/s. Yet very little has changed in the way individual RVs are actually extracted from observed spectra: the standard approach remains cross-correlating each observed spectrum with a weighted template. This approach suffers from a few notable drawbacks, including that
- a given template will generally be an imperfect match to any observed star's spectrum;
- spectral information is discarded when each spectrum is compressed into a single RV;
- the RV extraction process is sensitive to activity-induced stellar variability and telluric contamination; and
- acquiring more spectra does not improve the accuracy or precision of existing RVs, despite the additional spectra containing potentially useful new constraints.
I'll present a new data-driven approach for extracting RVs that aims to address these drawbacks. The new method models each spectrum in an ensemble of spectra with a Gaussian process (GP), then aligns each GP model spectrum with every other one. In so doing, the method effectively builds up a super-resolved, low-noise template spectrum, and incidentally also yields RVs that are less sensitive to stellar activity and telluric contamination than RVs extracted with more conventional approaches.
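To make the alignment step concrete, the toy sketch below illustrates the underlying idea rather than the actual method: it assumes a single Gaussian absorption line, a squared-exponential GP kernel with known hyperparameters, and a simple grid search over trial velocities. (A real implementation would fit the hyperparameters, optimise the shift continuously, and align every pair of spectra in the ensemble.) The key point is that in log-wavelength coordinates a Doppler shift is a constant translation, so one spectrum's GP model can score how well another spectrum is aligned with it.

```python
import numpy as np

# Hypothetical sketch (not the actual pipeline): model one spectrum with a
# GP in log-wavelength space, then find the Doppler shift that makes a
# second spectrum most consistent with that GP model. In log-wavelength,
# a Doppler shift is a constant offset: d(ln lambda) = v / c.

C_KMS = 299_792.458  # speed of light, km/s


def rbf_kernel(x1, x2, amp, ell):
    """Squared-exponential covariance between coordinate arrays x1 and x2."""
    d = x1[:, None] - x2[None, :]
    return amp ** 2 * np.exp(-0.5 * (d / ell) ** 2)


def gp_loglike(x_tr, y_tr, x_te, y_te, amp, ell, noise):
    """Log-likelihood of the test spectrum (x_te, y_te) under the GP
    posterior conditioned on the training spectrum (x_tr, y_tr)."""
    K = rbf_kernel(x_tr, x_tr, amp, ell) + noise ** 2 * np.eye(x_tr.size)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_tr))
    Ks = rbf_kernel(x_te, x_tr, amp, ell)
    mu = Ks @ alpha  # GP predictive mean at the test coordinates
    v = np.linalg.solve(L, Ks.T)
    cov = (rbf_kernel(x_te, x_te, amp, ell) - v.T @ v
           + noise ** 2 * np.eye(x_te.size))
    Lc = np.linalg.cholesky(cov)
    r = np.linalg.solve(Lc, y_te - mu)
    return -0.5 * r @ r - np.log(np.diag(Lc)).sum()


rng = np.random.default_rng(42)


def line_profile(log_lam):
    """Toy continuum-subtracted 'spectrum': one Gaussian absorption line."""
    return -0.5 * np.exp(-0.5 * (log_lam / 2e-5) ** 2)


true_rv = 0.1                      # injected RV, km/s
shift = true_rv / C_KMS            # corresponding offset in ln(lambda)
x = np.linspace(-1e-4, 1e-4, 80)   # log-wavelength grid
y1 = line_profile(x) + 1e-3 * rng.standard_normal(x.size)
y2 = line_profile(x - shift) + 1e-3 * rng.standard_normal(x.size)

# Grid search over trial RVs: un-shift the second spectrum's coordinates
# and ask how consistent it then is with the GP model of the first.
trial_rvs = np.linspace(-0.5, 0.5, 201)
lls = [gp_loglike(x, y1, x - rv / C_KMS, y2, amp=0.5, ell=2e-5, noise=1e-3)
       for rv in trial_rvs]
rv_hat = trial_rvs[int(np.argmax(lls))]
print(f"recovered RV: {rv_hat:+.3f} km/s (injected: {true_rv:+.3f} km/s)")
```

Because the GP model is continuous, the comparison is not limited to the pixel grid of either spectrum, which is what allows an ensemble of such pairwise alignments to build up a super-resolved template.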
This new method is conceptually simple, and it performs very favourably on both synthetic and real data. As such, it has the potential to enable the study of smaller planets around a wider variety of stars than has previously been possible. It could also be fruitfully applied to archival data.