The Deep Underground Neutrino Experiment (DUNE), hosted by the U.S. Department of Energy's Fermilab, is expected to begin operations in the late 2020s. Operational experience gained from deploying offline computing infrastructure for the ProtoDUNE (PD) Horizontal Drift (HD) detector will validate one of the far detector module designs for DUNE. The PD HD computing infrastructure is designed to achieve the experiment's physics goals by storing, globally distributing, cataloging, reconstructing, simulating, and analyzing data.

Offline computing activities start with raw data in Hierarchical Data Format version 5 (HDF5) collected by DUNE's data acquisition (DAQ) system. The data is conveyed from the CERN Neutrino Platform to the host laboratories through a data pipeline that uses Rucio for replica management and FTS3 for transport.

The PD HD database system comprises several backend relational databases (run configuration, beam instrumentation, slow controls, and calibration) that feed a Master Store: an unstructured database containing all of the information collected from the backends. The subset of this information needed by offline users is then moved into a lighter-weight relational database, which users can access through an API or through an art service during reconstruction and analysis.

The primary software strategy for event reconstruction has been the adoption of flexible, accessible algorithms that can accommodate creative software solutions and advanced techniques as HEP computing evolves. The collaboration anticipates making substantial use of high-performance computing (HPC) facilities for ProtoDUNE HD algorithms.
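For orientation, here is a minimal sketch of inspecting one of these HDF5 raw files with h5py. The file name is hypothetical, and the sketch deliberately makes no assumptions about the DAQ's internal group layout, which the abstract does not specify.

```python
import h5py

# Hypothetical file name; the actual DAQ output naming scheme is not
# specified above.
with h5py.File("np04_hd_raw_run012345_0000.hdf5", "r") as f:

    def show(name, obj):
        # Report every dataset's path, shape, and type, without assuming
        # anything about how the DAQ organizes its groups.
        if isinstance(obj, h5py.Dataset):
            print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")

    f.visititems(show)
```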
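The Rucio replica catalog in the pipeline described above is commonly queried through Rucio's Python client. The sketch below shows that access pattern; the scope and file name used as the data identifier (DID) are illustrative placeholders, not actual ProtoDUNE catalog entries.

```python
from rucio.client import Client

client = Client()

# Hypothetical DID: scope and name are placeholders for illustration only.
did = {"scope": "hd-protodune", "name": "np04_hd_raw_run012345_0000.hdf5"}

# Ask the catalog where replicas of this file currently live: each result
# maps Rucio Storage Elements (RSEs) to physical file names (PFNs).
for replica in client.list_replicas(dids=[did]):
    for rse, pfns in replica["rses"].items():
        print(rse, pfns)
```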
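The abstract says only that offline users reach the lighter-weight relational database "via an API", without naming it, so the endpoint, URL, and response layout below are entirely hypothetical. The sketch illustrates just the access pattern: fetching the per-run conditions subset over HTTP rather than connecting to the backend databases directly.

```python
import requests

# Entirely hypothetical endpoint: stands in for whatever API fronts the
# lightweight offline conditions database.
BASE_URL = "https://dbdata.example.org/protodune-hd/conditions"

def get_run_conditions(run_number: int) -> dict:
    """Fetch the conditions record (beam, slow controls, calibration
    subset) associated with a given run."""
    resp = requests.get(f"{BASE_URL}/runs/{run_number}", timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(get_run_conditions(12345))
```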