ALICE is one of the four major LHC experiments at CERN. When the accelerator enters the Run 3 data-taking period, starting in 2021, ALICE expects to record almost 100 times more central Pb-Pb collisions than today, resulting in a large increase in data throughput. In order to cope with this challenge, the collaboration had to extensively rethink its whole data processing chain, with a tighter integration between the Online and Offline computing worlds. The resulting system, codenamed ALICE O2, is being developed in collaboration with the FAIR experiments at GSI. It is based on the ALFA framework, which provides a generalised implementation of the ALICE High Level Trigger approach, designed around distributed software entities coordinating and communicating via message passing.
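To illustrate the message-passing pattern referred to above, the following minimal C++ sketch models two independent processing entities ("devices") exchanging self-contained messages over a channel. It is a conceptual illustration only and does not use the ALFA/FairMQ API; the Channel and Message types are hypothetical stand-ins, with threads and a thread-safe queue replacing the real transport layer.

// Conceptual sketch: two independent "devices" communicating via messages.
// Channel and Message are illustrative stand-ins, not ALFA/FairMQ types.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Message { std::vector<char> payload; };   // a self-contained data message

class Channel {                                  // stands in for a transport channel
  std::queue<Message> q_;
  std::mutex m_;
  std::condition_variable cv_;
 public:
  void send(Message msg) {
    { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(msg)); }
    cv_.notify_one();
  }
  Message receive() {                            // blocks until a message arrives
    std::unique_lock<std::mutex> lk(m_);
    cv_.wait(lk, [this] { return !q_.empty(); });
    Message msg = std::move(q_.front());
    q_.pop();
    return msg;
  }
};

int main() {
  Channel rawData;                               // sampler -> processor channel
  // "Sampler" device: publishes raw data messages.
  std::thread sampler([&] {
    for (int i = 0; i < 3; ++i)
      rawData.send(Message{std::vector<char>(1024, static_cast<char>(i))});
  });
  // "Processor" device: consumes messages and acts on them independently.
  std::thread processor([&] {
    for (int i = 0; i < 3; ++i) {
      Message msg = rawData.receive();
      std::cout << "processed message of " << msg.payload.size() << " bytes\n";
    }
  });
  sampler.join();
  processor.join();
}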
We will highlight our efforts to integrate ALFA within the ALICE O2 environment. We will analyse the challenges arising from the different running environments for production and development, and derive requirements for a flexible and modular software framework. In particular, we will present the ALICE O2 Data Processing Layer, which exploits ALICE-specific requirements on the Data Model. Its main goal is to reduce the complexity of developing algorithms and of managing a distributed system, thereby significantly simplifying the work of the large majority of ALICE users. We will show examples of the usage of the Data Processing Layer in different contexts: local reconstruction after detector read-out, full reconstruction, simulation, and quality control. For each, we will provide a brief description, an evaluation of the performance, and an assessment of the added value brought by the Data Processing Layer.
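As an indication of how the Data Processing Layer hides the distributed-system aspects from the user, the sketch below declares a small two-stage workflow in which each data processor states only its name, inputs, outputs, and processing function, while device creation, topology, and message transport are left to the framework. The type and function names (WorkflowSpec, DataProcessorSpec, AlgorithmSpec, defineDataProcessing, and the TPC data descriptions) follow the public O2 framework headers but should be read as indicative; exact signatures may differ between framework versions.

// Indicative sketch of a DPL-style workflow declaration (names assumed from
// the public O2 framework headers; not guaranteed to match a given release).
#include "Framework/runDataProcessing.h"

using namespace o2::framework;

WorkflowSpec defineDataProcessing(ConfigContext const&)
{
  return WorkflowSpec{
    DataProcessorSpec{
      "clusterizer",
      Inputs{{"digits", "TPC", "DIGITS"}},        // consume TPC digits
      Outputs{{"TPC", "CLUSTERS"}},               // publish TPC clusters
      AlgorithmSpec{[](ProcessingContext& ctx) {
        // User code sees typed, named inputs rather than raw messages.
        auto digits = ctx.inputs().get("digits");
        // ... run the clusterization and publish the result ...
      }}
    },
    DataProcessorSpec{
      "tracker",
      Inputs{{"clusters", "TPC", "CLUSTERS"}},    // consume the clusters above
      Outputs{{"TPC", "TRACKS"}},
      AlgorithmSpec{[](ProcessingContext& ctx) {
        // ... run the tracking on the received clusters ...
      }}
    }
  };
}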
Finally, we will give an outlook on future developments, as well as on the possible generalisation of some of the concepts and components we developed and their reabsorption into the ALFA framework.