19–25 Oct 2024
Europe/Zurich timezone

Evolution of DUNE's Production System

24 Oct 2024, 14:24
18m
Room 2.B (Conference Room)

Talk, Track 4 - Distributed Computing, Parallel (Track 4)

Speaker

Jacob Michael Calcutt (Brookhaven National Lab)

Description

The DUNE experiment will start running in 2029 and will record 30 PB/year of raw waveforms from Liquid Argon TPCs and photon detectors. Individual readouts range in size from 100 MB, through a typical 8 GB full readout of the detector, up to extended readouts of several hundred TB from supernova candidates. These data must be cataloged, stored, and distributed for processing worldwide. This massive data volume and the heterogeneous computing environment in which it is processed necessitate powerful and robust distributed computing infrastructure. In building up that infrastructure, DUNE's production system has recently undergone an overhaul, integrating 1) a new workflow management system (justIN), 2) a new data catalog (MetaCat), and 3) a state-of-the-art data management system (Rucio). Simulations of DUNE's Far Detector and its prototypes ProtoDUNE-HD (PDHD) and ProtoDUNE-VD, as well as data from PDHD, serve as the first tests of this infrastructure.
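
The division of labor among these components can be illustrated with a short sketch: the catalog (MetaCat) answers which files match a metadata query, and the data management system (Rucio) answers where replicas of those files live. The following is a minimal sketch assuming the public MetaCat and Rucio Python client libraries, not DUNE production code; the server URL, the MQL query text, and the use of the MetaCat namespace as the Rucio scope are illustrative assumptions.

```python
# Minimal sketch: find files in the MetaCat catalog, then locate their
# replicas with Rucio. The server URL, query, and namespace-to-scope
# mapping below are hypothetical examples, not DUNE's actual configuration.
from metacat.webapi import MetaCatClient
from rucio.client import Client as RucioClient

# 1) Query the MetaCat catalog with an MQL metadata query.
mc = MetaCatClient("https://metacat.example.org:9443/dune_meta_prod/app")
files = mc.query("files from dune:all where core.run_type = 'hd-protodune' limit 5")

# 2) Ask Rucio where replicas of those files are stored
#    (assumes the Rucio scope matches the MetaCat namespace).
rucio = RucioClient()
dids = [{"scope": f["namespace"], "name": f["name"]} for f in files]
for replica in rucio.list_replicas(dids):
    print(replica["name"], "->", list(replica["rses"]))
```

In this picture justIN sits above both services, brokering workloads to sites based on where the catalog and Rucio report the input data to be.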

Primary author

Jacob Michael Calcutt (Brookhaven National Lab)
