Speaker
Description
The LHCb experiment, one of the four major experiments at the Large Hadron Collider (LHC), excels in high-precision measurements of relatively frequently produced particles (strange, charmed and bottom hadrons). Key to LHCb's physics reach is its sophisticated trigger system, which performs complete event reconstruction, selection, alignment and calibration in real time. Through the Turbo stream processing model, the experiment substantially reduces its data volume while preserving full physics potential by persisting only selected parts of the event data. Initially deployed during Run 2, this approach has evolved into the standard processing paradigm for all LHCb physics objectives in Run 3, with significant enhancements and greater use of its flexibility. This presentation will demonstrate the performance and capabilities of this storage model during Run 3 operations, highlighting how its expanded adaptability has enabled LHCb to optimise finite storage resources while maintaining crucial data-redundancy safeguards.
Significance
The LHCb experiment replaced nearly all of its subdetectors and changed its trigger paradigm for Run 3. Performing event analyses within the trigger proved fruitful, but also challenging, as we were dealing with a completely new detector. We believe it is worthwhile to disseminate how the flexibility of the model ensured that these trigger-level analyses remained possible, keeping an eye on the upgrades planned by both ATLAS and CMS for Run 4.
| Experiment context, if any | The LHCb experiment |
|---|---|