Description
The upcoming Run 3 of the LHC will bring ALICE a 50-100 times larger data-taking rate, which requires a redesign of many analysis algorithms to meet stricter memory and computation-time constraints.
This applies especially to event mixing, a crucial component of correlation frameworks. Mixing is a technique in which distinct collisions are grouped by, e.g., multiplicity and z-vertex, and the tracks (V0s, cascades, …) belonging to these collisions are iterated over. The resulting tuples of tracks are then used to calculate multiparticle correlations.
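The following minimal sketch illustrates the idea described above; it is not the ALICE O2 API, and all types, names, and bin widths are illustrative assumptions. Collisions are grouped by binned multiplicity and z-vertex, and track pairs are formed only across different collisions within the same group.

// Illustrative sketch of event mixing (not the actual ALICE O2 code).
#include <cstdio>
#include <map>
#include <utility>
#include <vector>

struct Track {
  int collisionId;  // which collision the track belongs to
  float pt;         // transverse momentum (placeholder observable)
};

struct Collision {
  int id;
  int multiplicity;
  float zVertex;
  std::vector<Track> tracks;
};

// Bin index used to group "similar" collisions (hypothetical bin widths).
using Bin = std::pair<int, int>;

Bin binOf(const Collision& c) {
  return {c.multiplicity / 10,                            // multiplicity bins of 10
          static_cast<int>((c.zVertex + 10.f) / 2.f)};    // 2 cm z-vertex bins
}

int main() {
  std::vector<Collision> collisions = {
      {0, 23, 1.5f, {{0, 0.7f}, {0, 1.2f}}},
      {1, 27, 0.9f, {{1, 2.1f}}},
      {2, 55, -4.0f, {{2, 0.4f}}},
  };

  // Group collisions that fall into the same (multiplicity, z-vertex) bin.
  std::map<Bin, std::vector<const Collision*>> groups;
  for (const auto& c : collisions) groups[binOf(c)].push_back(&c);

  // Mixed pairs: one track from each of two distinct collisions in a bin.
  for (const auto& [bin, group] : groups) {
    for (size_t i = 0; i < group.size(); ++i) {
      for (size_t j = i + 1; j < group.size(); ++j) {
        for (const auto& t1 : group[i]->tracks) {
          for (const auto& t2 : group[j]->tracks) {
            std::printf("mixed pair: pt = (%.2f, %.2f)\n", t1.pt, t2.pt);
          }
        }
      }
    }
  }
}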
Previously, mixing was done in buffers which directly represented bins of the properties under consideration. However, this approach is slow and requires additional memory linear in the data size, which is unattainable for the data volumes expected in Run 3. Therefore, a new algorithm was developed that generates combinations of data elements lazily. It needs only a small constant amount of memory, storing one combination at a time. The implementation is as generic as possible: it allows for tuples of any size, from any analysis table, and for any number of grouping properties. Moreover, many calculations are already performed at compile time thanks to C++17/20 meta-programming. The mixing algorithm can also be applied beyond particle physics, e.g., to generate pairs of sport competitors from different countries belonging to the same international federation.
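A minimal sketch of the lazy-generation idea, assuming a simple strictly increasing index "odometer" (this is not the actual O2 implementation): only the k indices of the current combination are stored, so memory stays constant no matter how many tuples exist.

// Illustrative lazy k-combination generator with constant memory.
#include <cstdio>
#include <vector>

// Advances 'idx' to the next strictly increasing k-combination of {0..n-1}.
// Returns false once all combinations have been visited.
bool nextCombination(std::vector<int>& idx, int n) {
  const int k = static_cast<int>(idx.size());
  for (int pos = k - 1; pos >= 0; --pos) {
    if (idx[pos] < n - (k - pos)) {          // room to advance this slot?
      ++idx[pos];
      for (int p = pos + 1; p < k; ++p) idx[p] = idx[p - 1] + 1;
      return true;
    }
  }
  return false;  // odometer exhausted
}

int main() {
  const int n = 5;  // number of data elements (e.g. tracks in a group)
  const int k = 3;  // tuple size

  std::vector<int> idx(k);
  for (int i = 0; i < k; ++i) idx[i] = i;    // first combination: 0,1,2,...

  do {
    // Here an analysis would look up the k tracks behind these indices
    // and fill its correlation observables.
    for (int i : idx) std::printf("%d ", i);
    std::printf("\n");
  } while (nextCombination(idx, n));
}

Only the index vector of size k is kept between steps, in contrast to materializing all C(n, k) tuples up front.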