Brief description: Code modernisation is of primary importance in the particle physics domain, owing to the continuous growth in the amount of computation needed, combined with the availability of new, more advanced computing hardware. Today's high-performance computers combine many resources, including many-core processors, large caches, fast memory and high-bandwidth communication fabrics. To best exploit these features, software must integrate parallelism: optimal concurrency and synchronisation, vectorisation, efficient communication and careful memory management. This lecture introduces the basic concepts of code profiling and application performance measurement. Through a series of examples, it illustrates typical bottlenecks and the corresponding optimisation strategies.
Speaker's short bio: Dr. Sofia Vallecorsa is a senior physicist with extensive experience in software development in the High Energy Physics domain, in particular in Machine Learning applications within CERN openlab (http://openlab.cern.ch). Before joining openlab, Dr. Vallecorsa was a CERN Scientific Associate responsible for the development of Deep Learning based technologies for the simulation of particle transport through detectors, and she worked on the optimisation of the GeantV detector simulation prototype on modern hardware architectures. From 2013 to 2015, Dr. Vallecorsa was a researcher at the University of Geneva, where her work focused on Dark Matter searches for both the ATLAS experiment (https://atlas.cern/) and the IceCube experiment (https://icecube.wisc.edu/) at the South Pole. Until 2013, Dr. Vallecorsa was a Research Fellow at the Israel Institute of Technology (Technion), where she was primarily responsible for developing software for the treatment of collider physics raw data, specifically for the ATLAS Muon Spectrometer detector.