Description
High-Energy Particle Physics (HEP) relies on efficient and sustainable computing infrastructures operating at a global scale. These infrastructures must support a broad range of workloads, including machine learning applications, large-scale production campaigns, and heterogeneous end-user analysis jobs. Ensuring that the available computing resources can be used effectively across this spectrum is therefore of key interest to HEP as a whole.
In preparation for the High-Luminosity LHC phase, scheduled to start in the 2030s, computing centers face increasing demands on performance and energy efficiency. In this context, the CMS and ATLAS collaborations are evaluating how GPU resources could be incorporated into their computing models, motivated by their potential for higher throughput and better energy efficiency. However, the general suitability of these resources across the full HEP landscape remains an open question, as centralized benchmarking is still under development.
This contribution assesses the performance gains offered by GPUs for HEP computing in a batch-processing environment, using three representative benchmark scenarios. In addition, we explore the opportunities of GPU partitioning via prototype NVIDIA Multi-Process Service (MPS) and Multi-Instance GPU (MIG) setups in HTCondor, demonstrating a flexible and efficient integration of GPUs into HEP batch systems.
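To illustrate the partitioning idea, the sketch below shows a minimal HTCondor worker-node configuration that advertises one physical GPU as several schedulable slices. The mechanisms used (the "use feature : GPUs" metaknob, the GPU_DISCOVERY_EXTRA knob, and the -divide option of condor_gpu_discovery) are standard HTCondor features; the slice count of four and the pairing with MPS are illustrative assumptions, not the exact setup evaluated in this contribution.

    # condor_config.local on a GPU worker node (illustrative sketch, not the
    # configuration used in this work)

    # Enable GPU detection; this runs condor_gpu_discovery at daemon startup
    use feature : GPUs

    # Advertise each physical GPU four times, dividing its memory among the
    # resulting slices -- one way to back an MPS-style sharing setup
    # (the value 4 is an arbitrary example)
    GPU_DISCOVERY_EXTRA = $(GPU_DISCOVERY_EXTRA) -divide 4

    # With MIG enabled on the device, condor_gpu_discovery instead enumerates
    # the MIG instances themselves, each with its own UUID, so no -divide is
    # needed in that case

Jobs then request a slice in the usual way, e.g. with "request_gpus = 1" in the submit description, and HTCondor exposes the assigned (divided or MIG) device to the job via CUDA_VISIBLE_DEVICES.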