The hundredfold increase in luminosity at the HL-LHC with respect to Run 1 poses a major challenge for data analysis and demands a comparable improvement in software and processing infrastructure. GPU-accelerated supercomputers will greatly expand the available computing power, and analysis languages will have to be adapted to take advantage of them. The particle physics community has traditionally developed its own tools to analyze the data, usually built around dedicated ROOT-based data formats. However, there have been several attempts to bring new tools into the experiments' data analysis workflows, including data formats not necessarily based on ROOT. Concepts and techniques include declarative languages for specifying hierarchical data selection and transformation, cluster systems for managing data processing, machine learning integration at the most basic levels, and statistical analysis techniques. This talk provides an overview of current efforts in the field, covering work in traditional programming languages such as C++, Python, and Go, as well as frameworks and dedicated declarative analysis languages such as ROOT's RDataFrame, CutLang, ADL, and coffea. There is a tremendous amount of activity in this area right now, and this talk will attempt to summarize its current state.
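To illustrate the declarative style these frameworks share, the sketch below expresses an event selection as a chain of transformations that the framework is then free to optimize, parallelize, or defer. This is a minimal plain-Python illustration of the concept, not the API of RDataFrame, coffea, or any other real library; the `Dataset`, `Define`, `Filter`, and `Count` names are hypothetical.

```python
# Illustrative sketch of a declarative event-selection chain, in the spirit
# of RDataFrame/coffea-style interfaces. All names here are hypothetical.

class Dataset:
    def __init__(self, events):
        self.events = list(events)

    def Define(self, name, expr):
        # Attach a derived quantity (a new "column") to every event.
        return Dataset({**e, name: expr(e)} for e in self.events)

    def Filter(self, predicate):
        # Keep only events passing the cut. Real frameworks evaluate such
        # chains lazily; this toy version is eager for simplicity.
        return Dataset(e for e in self.events if predicate(e))

    def Count(self):
        return len(self.events)


# Toy events: transverse momenta of two leptons per event (GeV).
events = [
    {"pt1": 45.0, "pt2": 30.0},
    {"pt1": 12.0, "pt2": 8.0},
    {"pt1": 60.0, "pt2": 25.0},
]

selected = (
    Dataset(events)
    .Define("ht", lambda e: e["pt1"] + e["pt2"])  # scalar pT sum
    .Filter(lambda e: e["ht"] > 50.0)             # event-level cut
)
print(selected.Count())  # → 2
```

The analyst states *what* selection is wanted rather than writing an explicit event loop, which is what lets such systems transparently schedule the work on clusters or GPUs.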
Consider for promotion: Yes