The ALICE experiment has undergone a major upgrade for LHC Run 3 and will record roughly 50 times more data than in the previous runs.
The new computing scheme for Run 3 replaces the traditionally separate online and offline frameworks with a single unified one, called O² (Online-Offline).
Processing will happen in two phases.
During data taking, a synchronous processing phase performs data compression, calibration, and quality control on the online computing farm.
The output is stored in an on-site disk buffer.
When there is no beam in the LHC, the same computing farm is used for the asynchronous reprocessing of the data, which yields the final reconstruction output.
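
To illustrate the two-phase scheme, here is a minimal C++ sketch; the types and names (Mode, TimeFrame, process) are hypothetical stand-ins, not the actual O² API:

    #include <iostream>
    #include <vector>

    // Hypothetical stand-ins; the real O2 framework is far richer.
    enum class Mode { Synchronous, Asynchronous };

    struct TimeFrame {
        int id;
        std::vector<float> payload;  // stand-in for detector data
    };

    // The same farm and largely the same code serve both phases; the mode
    // selects which stages run and where the output goes.
    void process(const TimeFrame& tf, Mode mode) {
        if (mode == Mode::Synchronous) {
            // During data taking: compression, calibration, quality control,
            // with the output going to the on-site disk buffer.
            std::cout << "TF " << tf.id << ": compress + calibrate + QC -> disk buffer\n";
        } else {
            // No beam: asynchronous reprocessing with final calibrations,
            // producing the final reconstruction output.
            std::cout << "TF " << tf.id << ": final reconstruction -> output\n";
        }
    }

    int main() {
        std::vector<TimeFrame> frames{{1, {}}, {2, {}}};
        for (const auto& tf : frames) process(tf, Mode::Synchronous);   // beam on
        for (const auto& tf : frames) process(tf, Mode::Asynchronous);  // no beam
    }
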
O² is organized into three projects.
The Event Processing Nodes (EPN), equipped with GPUs, deliver the bulk of the compute capacity and perform the majority of the reconstruction and calibration.
The First Level Processors (FLP) receive the data from the detectors via optical links and perform local processing where needed, optionally in the user logic of the FPGA-based readout cards.
Between the FLP and EPN farms, the data is distributed over the network such that each EPN receives the complete collision data of entire time frames for processing.
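
As an illustration of this distribution, the sketch below assigns complete time frames to EPNs round-robin; the counts and the round-robin policy are hypothetical simplifications of the real load-balancing scheme:

    #include <cstdio>
    #include <vector>

    // Hypothetical sketch of FLP -> EPN data distribution, not O2 code.
    // Each FLP sees only its own detector links for every time frame, so
    // all FLPs must send their partial data for a given time frame to the
    // same EPN, which then holds the complete collision data.
    int main() {
        const int nFLPs = 4;
        const int nEPNs = 3;
        const int nTimeFrames = 6;

        std::vector<std::vector<int>> epnFrames(nEPNs);  // frames built per EPN

        for (int tf = 0; tf < nTimeFrames; ++tf) {
            const int epn = tf % nEPNs;  // round-robin stand-in for the scheduler
            for (int flp = 0; flp < nFLPs; ++flp)
                std::printf("FLP %d -> EPN %d: partial data of time frame %d\n",
                            flp, epn, tf);
            epnFrames[epn].push_back(tf);  // EPN now has the full time frame
        }

        for (int e = 0; e < nEPNs; ++e)
            std::printf("EPN %d assembled %zu complete time frames\n",
                        e, epnFrames[e].size());
    }
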
The Physics and Data Processing (PDP) group develops the software framework and the reconstruction and calibration algorithms.
The current O² setup is capable of handling the foreseen peak data rate at 50 kHz Pb-Pb collisions in real time.