Thematic CERN School of Computing - spring 2021

Europe/Zurich
Online event

Sebastian Lopienski (CERN), Joelma Tolomeo (CERN), Jarek Polok (CERN)
Description

The 8th Thematic CERN School of Computing (tCSC spring 2021) will take place on June 14-18, 2021 in an online format.

The theme of the School is "Scientific Software for Heterogeneous Architectures" - see the academic programme for more details.

The School is targeted at postgraduate (i.e. minimum of Bachelor degree or equivalent) students, engineers and scientists with a few years' experience in particle physics, in computing, or in related fields. We welcome applications from all countries and nationalities.

Due to the ongoing Covid-19 pandemic, this School is organised in an online format. Nevertheless, we aim to create the rich, interactive learning experience for which the CERN School of Computing (CSC) has been known for years.

Important Dates

  • late March - applications open
  • Friday April 30 (midnight CEST) - deadline for applications
  • Tuesday May 18 - you will be informed about the outcome of the selection by email
  • Monday June 14 to Friday June 18 - the school 

Surveys
Quiz for Track 2, Lecture 2 "Modern programming languages for HEP" by Sebastien Ponce
Quiz for Track 3, Lecture 1 "Scientific computing on heterogeneous architectures" by Dorothea vom Bruch
Quiz for Track 3, Lecture 2 "Programming for GPUs" by Dorothea vom Bruch
CERN School of Computing
    • 09:00 09:30
      Opening Session 30m
      Speakers: Frederic Hemmer (CERN), Sebastian Lopienski (CERN)
      • Word from the CERN IT Department Head 15m
        Speaker: Frederic Hemmer (CERN)
      • Introduction to CERN School of Computing 15m
        Speaker: Sebastian Lopienski (CERN)
    • 09:30 10:30
      Preparing for the HL-LHC computational challenge 1h

      In this talk we will introduce some basic concepts related to HEP data processing and analysis workflows, seeing them in action in the context of LHC experiments. We’ll also talk about the evolution of the LHC accelerator and experiments, and characterise at a high level the consequences of those upgrades for HEP data-processing software, in particular in the context of evolving hardware and computing infrastructure.

      Speaker: Danilo Piparo (CERN)
    • 10:30 10:50
      Coffee break 20m
    • 10:50 11:50
      Introduction to efficient computing 1h

      Technologies and Platforms - lecture 1

      • The evolution of computing hardware and what it means in practice
      • The seven dimensions of performance
      • Controlling and benchmarking your computer and software
      • Software that scales with the hardware
      • Advanced performance tuning in hardware
      Speaker: Andrzej Nowak
    • 11:50 12:20
      Self-presentation: 1 minute per person 30m

      School participants, lecturers and organizers
      (in alphabetical order):

      • Alfonsi Alice
      • Bachmayer Marie
      • Baptista de Souza Leite Juan
      • Barbetti Matteo
      • Barlou Maria
      • Brunner David
      • Bury Florian
      • Campora Daniel
      • Carrere Matthieu
      • Choi Wonqook
      • Chug Neha
      • Connor Patrick
      • Cristella Leonardo
      • De Simoni Micol
      • Fargier Sylvain
      • Favoni Matteo
      • Ferencek Dinko
      • Galli Massimiliano
      • Garcia Chavez Tonatiuh
      • Gilman Alexander Leon
      • Hedia Sassia
      • Lasaosa Garcia Clara
      • Leon Coello Moises David
      • Lopienski Sebastian

      Continued in the afternoon session (right after the lunch break):

      • Manfreda Alberto
      • Mania Georgiana
      • Martikainen Laura
      • Mishra Saswat
      • Mostafa Jalal
      • Ouvrard Xavier Eric
      • Padulano Vincenzo
      • Piparo Danilo
      • Polok Jarek
      • Ponce Sebastien
      • Popescu Andrei
      • Pournaghi Atousa
      • Rafanoharana Dimbiniaina
      • Reid Tres
      • Shchedrolosiev Mykyta
      • Sobol Bartosz
      • Storetvedt Maksim Melnik
      • Sunneborn Gudnadottir Olga
      • Tolomeo Joelma
      • Triantafyllou Natalia
      • Vage Liv Helen
      • Vnuchenkot Anna
      • vom Bruch Dorothea
    • 14:00 14:30
      Self-presentation: 1 minute per person 30m

      (continued from the session before lunch: https://indico.cern.ch/event/1017080/contributions/4291663/)

    • 14:30 14:50
      Coffee break 20m
    • 14:50 17:00
      Group assignment for Track 1: Technologies and Platforms 2h 10m

      The goal of this exercise is to provoke you into thinking about some of the key choices in computing.

      The scenario
      Modern scientific experiments are massive producers of data. Imagine that you’re a computing manager for one such experiment, which produces 100 terabits of raw data per second and has no computing infrastructure yet. Your task is to use your current knowledge to conceptualize data processing for your experiment and, in the process, to uncover important choices to make.

      The challenge
      Focus on key aspects of compute and software, and less so on networks, accelerators or data flows. What kind of considerations, tradeoffs and assumptions would you have to take into account?

      What kind of equipment would you use, where would you put it and why? What kind of software would you run? What do you think would be the rough purchase and maintenance cost and effort? Can you identify gaps in your current knowledge that you would need to fill in?

      What we expect
      You're not expected to have all the answers! In many cases, simply listing the important questions is already helpful. Seasoned professionals can spend as much as 10 years of their careers making such a plan for a single experiment.

      Try to answer the challenge in conceptual terms, and using rough estimates. When faced with unknowns, you can make assumptions – make sure to clearly specify when that’s the case. It’s best if you present your solution on the basis of a 1-slide diagram illustrating key concepts and components, but it’s not a requirement.

    • 09:00 10:00
      Writing parallel software 1h

      Parallel and Optimised Scientific Software - lecture 1

      • Amdahl's and Gustafson's laws
      • Asynchronous execution
      • Finding concurrency, task vs. data parallelism
      • Using threading in C++ and Python, comparison with multi-process
      • Resource protection and thread safety
      • Locks, thread local storage, atomic operations
      Speaker: Danilo Piparo (CERN)
    • 10:00 11:00
      Modern programming languages for HEP 1h

      Parallel and Optimised Scientific Software - lecture 2

      • Why Python and C++ ?
      • Recent evolutions: C++ 11/14/17
      • Modern features of C++ related to performance
      • Templating versus inheritance, pros and cons of virtual inheritance
      • Python 3, and switching from Python 2
      Speaker: Sebastien Ponce (CERN)
    • 11:00 11:20
      Coffee break 20m
    • 11:20 12:20
      Optimizing an existing large codebase 1h

      Parallel and Optimised Scientific Software - lecture 3

      • Measuring performance, tools and key indicators
      • Improving memory handling
      • The nightmare of thread safety
      • Code modernization and low level optimizations
      • Data structures for efficient computation in modern C++
      Speaker: Sebastien Ponce (CERN)
    • 14:00 14:05
      School photo 5m

      We will be taking a "group photo" of the school - a picture of the participants connected to the Zoom room. This group photo, containing a lot of small but recognizable pictures of individual participants, will afterwards be published on the school website (and possibly, in other CERN publications).

      • If you want to be part of the group photo, please enable your camera when we take the photo (technically, screenshots of the Zoom gallery view).
      • If you prefer not to be included in this group photo, please just keep your camera off.

      In any case, your name will not appear in the final edited group photo.

    • 14:05 14:25
      Parallel and optimised scientific software - exercise introduction 20m

      Optimisation of an existing, production-grade large codebase

      Speaker: Sebastien Ponce (CERN)
    • 14:25 17:00
      Parallel and optimised scientific software - exercise 2h 35m

      Optimisation of an existing, production-grade large codebase

      Speakers: Sebastien Ponce (CERN), Arthur Hennequin (CNRS)
    • 19:00 20:00
      Special evening talk: Future of the Universe and of Humanity 1h
      Speaker: Ivica Puljak (University of Split)
    • 09:00 10:00
      Data-oriented design 1h

      Technologies and Platforms - lecture 3

      • Hardware vectorization in detail – theory vs. practice
      • Software design for vectorization and smooth data flow
      • How can compilers and other tools help?
      Speaker: Andrzej Nowak
    • 10:00 11:00
      Practical vectorization 1h

      Parallel and Optimised Scientific Software - lecture 4

      • Measuring vectorization level
      • What to expect from vectorization
      • Preparing code for vectorization
      • Vectorizing techniques in C++: intrinsics, libraries, autovectorization
      Speaker: Sebastien Ponce (CERN)
    • 11:00 11:20
      Coffee break 20m
    • 11:20 12:20
      Scientific computing on heterogeneous architectures 1h

      Programming for Heterogeneous Architectures - lecture 1

      • Introduction to heterogeneous architectures and the performance challenge
      • From general to specialized: Hardware accelerators and applications
      • Type of workloads ideal for different accelerators
      • Trade-offs between multi-core and many-core architectures
      • Implications of heterogeneous hardware on the design and architecture of scientific software
      • Embarrassingly parallel scientific applications in HPC and CERN
      Speaker: Dorothea vom Bruch (CPPM/CNRS)
    • 14:00 15:30
      Group assignment for Track 2: Parallel and Optimised Scientific Software 1h 30m

      Topic for group 1: Large software systems become more and more difficult to maintain over the years. In addition, programming languages evolve and the relevant expertise is lost (e.g. about programming in Fortran). When is the right moment to restart a large software project from scratch? Is it possible at all?

      Topic for group 2: How can we ensure long-term data preservation? Today, we can read the writings of Newton and redo his computations. However, in 300 years, will someone be able to rerun today's software? How can we make that happen? Is it feasible at all?

      Topic for group 3: How to ensure good test coverage of a large code base? How to test software that will run on thousands of machines concurrently?

      Topic for group 4: A lot of poor-quality code and bugs are introduced into physics software due to a lack of computing-language knowledge among non-expert software developers. How can we better spread computer-science knowledge and best practices in large scientific collaborations?

      Topic for group 5: Every now and then, new hardware or software appears, often with very promising prospects. However, the risk is that it disappears within a few years (think of object-oriented databases, Google Glass, etc.). How can we benefit from the latest technologies without jeopardising a multi-decade project?

      Topic for group 6: What's the impact of hardware evolution and choices on software and programming languages? Is it realistic to have hardware-agnostic programming languages?

      Topic for group 7: According to Donald Knuth, "Premature optimization is the root of all evil". Have you ever had similar experiences? How do you decide when is a good moment to optimise, and what to optimise?

    • 15:30 15:50
      Coffee break 20m
    • 15:50 16:30
      Student lightning talks 40m
      Speakers: Bartosz Marek Sobol (Jagiellonian University Krakow), David Brunner (DESY), Florian Bury (Catholic University of Louvain), Georgiana Mania (DESY), Micol De Simoni (Sapienza University of Rome)
      • FRED: a fast Monte Carlo code on GPU for Treatment Planning Software 7m

        In this presentation, I would like to talk about FRED, the fast Monte Carlo (MC) code that was the focus of my PhD and, now, of my postdoc. FRED is a fast MC that runs on GPUs and has been developed for medical applications. I will briefly describe the framework in which FRED was developed, explain why a fast MC is needed in medical applications, and give some information about its current state (performance, what we can track, next goals).

        Speaker: Micol De Simoni (Sapienza University of Rome)
      • Track reconstruction on heterogeneous architectures with SYCL 7m

        With modern physics experiments comes the need to process more and more data. In this talk, I briefly present my latest work on online data processing in the DAQ system of the PANDA experiment at FAIR/GSI, Darmstadt, Germany, using heterogeneous computing platforms and the SYCL programming model, as well as the research's challenges and goals.

        Speaker: Bartosz Sobol (Jagiellonian University Krakow)
      • Matrix Element Regression with Deep Neural Networks -- breaking the CPU barrier 7m

        The Matrix Element Method (MEM) is a powerful method to extract information from measured events at collider experiments. Compared to multivariate techniques built on large sets of experimental data, the MEM does not rely on an examples-based learning phase but directly exploits our knowledge of the physics processes. This comes at a price, both in terms of complexity and computing time, since the required multi-dimensional integral of a rapidly varying function needs to be evaluated for every event and physics process considered. This can be mitigated by optimizing the integration, as is done in the MoMEMta package, but the computing time remains a concern, and often makes the use of the MEM in full-scale analysis impractical or impossible. We investigate the use of a Deep Neural Network (DNN), built by regression of the MEM integral, as an ansatz for analysis, especially in the search for new physics.

        Speaker: Florian Bury (Catholic University of Louvain)
      • Exploring Heterogeneous Architectures in Track Reconstruction Software 7m

        Track reconstruction algorithms are computationally intensive due to their combinatorial nature and pose a great challenge for the HL-LHC era. The estimated compute time will not fit the budget unless the code becomes more efficient and highly parallel. Exploring heterogeneous architectures is at the core of this change, and current R&D efforts (mostly focused on CUDA) show promising results, but the goal hasn't been reached yet. The talk introduces the problem(s) and some notable published results, and connects these to my PhD research topic.

        Speaker: Georgiana Mania (DESY)
      • PyTorch C++ API 7m

        The most popular language for machine learning is Python. This presentation shows an alternative interface, written in C++, provided by PyTorch. I discuss similarities and differences between the C++ and Python APIs of PyTorch, and their pros and cons for use in physics analyses.

        Speaker: David Brunner (DESY)
    • 16:30 17:00
      Parallel and optimised scientific software - exercise debriefing 30m

      Optimisation of an existing, production-grade large codebase

      Speaker: Sebastien Ponce (CERN)
    • 09:00 10:30
      Group assignment for Track 3: Programming for heterogeneous architectures 1h 30m

      The scenario

      Imagine you have to process 400 terabits of raw data per second at a future HEP experiment. Assume the data arrives in a data center, with the information from all sub-detectors already combined for every event. There is no strict latency requirement, i.e. you can use deep buffers inside the servers to store the data until a decision is taken.

      The physics you are interested in requires the most complete knowledge of the event possible: ideally track reconstruction, particle identification, calorimeter reconstruction, particle building, maybe jet reconstruction, and possibly other objects you would like the trigger to reconstruct to make your analysis more sensitive.

      The task

      Design a trigger system which reduces the rate by at least a factor 1000!


      Each group will choose and discuss one of the three following topics:

      Topic 1

      Can you achieve the data reduction in a single step? What are the advantages / disadvantages of multiple selection steps?

      Which computing architecture(s) would you choose for your data center? Would you offload parts of the workload to accelerators? If yes, which ones?

      Topic 2

      What would be the dataflow of your DAQ system? How would you model the data processing and relations between the different parts of your system?

      How would you ensure the pipelining between accelerators and the servers? Are there any bottlenecks to consider? How would you address them?

      Topic 3

      How does the detector layout influence the data flow (e.g. would you rather have a homogeneous detector where all particles pass through the same detectors, or one with different sub-detectors in different regions of phase space)? Do you have recommendations for the detector design that would allow for a more performant DAQ system, depending on the architectures you choose for your system?

    • 10:30 10:50
      Coffee break 20m
    • 10:50 11:50
      Design patterns and best practices 1h

      Programming for Heterogeneous Architectures - lecture 4

      • Good practices: single precision, floating point rounding, avoid register spilling, prefer single source
      • Other standards: SYCL, HIP, OpenCL
      • Middleware libraries and cross-architecture compatibility
      • Reusable parallel design patterns with real-life applications
      Speaker: Daniel Campora (University of Maastricht)
    • 11:50 12:20
      Programming for heterogeneous architectures - exercise debriefing 30m
      Speakers: Daniel Campora (University of Maastricht), Dorothea vom Bruch (CPPM/CNRS)
    • 14:00 15:00
      Summary and future technologies overview 1h

      Technologies and Platforms - lecture 4

      • Teaching program summary and wrap-up
      • Next-generation memory technologies and interconnect
      • Future computing evolution
      Speaker: Andrzej Nowak
    • 15:00 15:30
      Coffee break 30m
    • 15:30 16:10
      Exam 40m
    • 16:40 17:00
      Closing Session 20m
      Speaker: Sebastian Lopienski (CERN)