Speakers
Sebastian Macaluso
(New York University)
Heiko Mueller
Description
In this talk, we present exploratory work to enable benchmark tests for physics challenges, such as the “Machine Learning Landscape of Top Taggers” comparison or the LHCOlympics2020. We introduce the Reproducible Open Benchmarks for Data Analysis Platform (ROB) for this task, and we aim to show a demo in which ROB is applied to a sample case. Given a benchmark workflow, users would provide code implementing their algorithm (e.g. as Docker containers) together with trained parameters. The back end would then process the workflow (the algorithm could also be part of a downstream analysis task) and evaluate the metrics on a test dataset. Finally, plots and tables would be updated.
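The evaluation step described above — score each submitted algorithm on a held-out test set and refresh the results table — could be sketched roughly as follows. This is a minimal illustration only; all names (`Submission`, `evaluate`, `leaderboard`) are hypothetical and do not reflect the actual ROB API.

```python
# Hypothetical sketch of the benchmark evaluation loop, not the ROB implementation.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Submission:
    team: str
    predictions: List[int]  # e.g. top-tag labels produced by the user's container


def accuracy(predictions: List[int], truth: List[int]) -> float:
    """Fraction of test events classified correctly (a stand-in for the real metrics)."""
    correct = sum(p == t for p, t in zip(predictions, truth))
    return correct / len(truth)


def evaluate(submission: Submission, truth: List[int],
             leaderboard: Dict[str, float]) -> None:
    """Score one submission on the held-out test set and update the leaderboard."""
    leaderboard[submission.team] = accuracy(submission.predictions, truth)


# Toy test labels and a single submission; in ROB the predictions would come
# from running the user-supplied container on the benchmark's test dataset.
truth = [1, 0, 1, 1, 0]
leaderboard: Dict[str, float] = {}
evaluate(Submission("team-a", [1, 0, 1, 0, 0]), truth, leaderboard)
print(leaderboard)  # → {'team-a': 0.8}
```

In the real platform the metric, the test data, and the leaderboard rendering are all defined by the benchmark workflow rather than hard-coded as here.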
Authors
Kyle Stuart Cranmer
(New York University (US))
Irina Espejo Morales
(New York University)
Shih-Chieh Hsu
(University of Washington Seattle (US))
Sebastian Macaluso
(New York University)
Aaron Maritz
Heiko Mueller
Ajay Rawat