Description
In this talk we propose to survey the computational reproducibility practices, opportunities and challenges in view of fostering Open and FAIR Science in research communities.
We discuss several models of thinking about computational reproducibility, focusing on the broader knowledge preservation and reuse aspects rather than on the raw computing evolution aspects.
Building upon several use cases from experimental particle physics and related scientific disciplines, we discuss the variety of sociological and technological challenges inherent in making the research innately reproducible and reusable.
From the researcher's point of view, we argue that "preproducibility" should come early in the scientific process in order to ensure the future reusability of the research.
From the data infrastructure point of view, we argue that data repository services benefit from accompanying "analysis engines" to ensure the correctness of data curation procedures and the validity of data usage recipes.
The ultimate goal of the Open Science and Data Preservation efforts is to facilitate future reuse and reinterpretation of scientific data by new generations of researchers. A strong focus on the computational reproducibility of original data analyses provides a way to facilitate the reuse and reinterpretation of Open and FAIR data even many years after the original publication.
This talk is heavily inspired by, but not limited to, the experiences and lessons learnt from the past ten years of running the CERN Open Data portal and the REANA reproducible analysis platform for the particle physics community.
Tagline
This talk proposes to survey the computational reproducibility practices, opportunities and challenges in view of fostering Open and FAIR science in research communities. We discuss how reproducible practices benefit both researchers and data infrastructures in facilitating future data reuse.
Keywords
reproducibility, reuse, reinterpretation