Description
High Performance Computing (HPC) supercomputers are expected to play an increasingly important role in HEP computing in the coming years. While HPC resources are not necessarily an optimal fit for HEP workflows, opportunistic computing time at HPC centers has been available to the LHC experiments for some time, and part of the pledged computing resources may also be offered as CPU time allocations at HPC centers in the future. Integrating the experiment workflows to make the most efficient use of HPC resources is therefore essential.
This presentation will describe the work that has been necessary to integrate LHCb workflows at HPC sites. Two types of challenges have had to be addressed: in the distributed computing area, efficiently submitting jobs, accessing the software stacks and transferring data files; and in the software area, optimising software performance on hardware architectures that differ significantly from those traditionally used in HEP. The talk will cover practical experience with the deployment of Monte Carlo generation and simulation workflows at the HPC sites available to LHCb. It will also describe the work done on the software side to improve the performance of these applications using parallel multi-process and multi-threaded approaches.