Description
The Large Hadron Collider (LHC) at CERN generates a vast amount of data from physics events, with peaks reaching terabytes per day. Many reports indicate that current analysis models (and, more generally, data processing interfaces) will not be able to accommodate efficiently the data volumes expected in the coming years. Responsibility for performance is shared: frameworks must provide efficient computing tools, and users must exploit those resources optimally. The latter is the particular focus of this lecture.
The purpose of this talk is to familiarize students with mechanisms for efficiently profiling the performance of C++ and Python applications, working through real-world HEP analyses. The core of the lecture is the identification of hotspots via perf and techniques for mitigating different kinds of bottlenecks.
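As a minimal illustration of the workflow the lecture covers, the sketch below shows a small, self-contained C++ program with a deliberately cache-unfriendly loop, together with the perf commands one would typically use to record and inspect its hotspots. The function name, matrix size, and compiler flags are illustrative assumptions, not material from the lecture.

// hotspot.cpp -- illustrative sketch of a CPU/memory hotspot that perf can attribute
// to a single function. Assumed example, not taken from the lecture material.
#include <cstdio>
#include <vector>

// Hypothetical hotspot: column-first traversal of a row-major matrix,
// producing strided memory access and frequent cache misses.
double sum_columns_first(const std::vector<double>& m, std::size_t n) {
    double total = 0.0;
    for (std::size_t col = 0; col < n; ++col)
        for (std::size_t row = 0; row < n; ++row)
            total += m[row * n + col];   // stride-n access pattern
    return total;
}

int main() {
    const std::size_t n = 4096;                 // illustrative size
    std::vector<double> m(n * n, 1.0);
    std::printf("%f\n", sum_columns_first(m, n));
    return 0;
}

// Build with optimizations and debug info so perf can resolve symbols:
//   g++ -O2 -g hotspot.cpp -o hotspot
// Record call-graph samples, then browse the hottest functions:
//   perf record -g ./hotspot
//   perf report

In a real HEP analysis the same two-step loop applies: record samples while the application runs on representative input, then use the report to decide whether the dominant cost is computation, memory access, or I/O before choosing a mitigation.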