Experimental Particle and Astro-Particle Physics Seminar
Abstract:
At the CERN Large Hadron Collider (LHC), real-time event filtering systems must process millions of proton-proton collisions every second on field programmable gate arrays (FPGAs), performing efficient reconstruction and decision making. Within a few microseconds, over 98% of the collision data must be discarded, quickly and accurately. As the LHC is upgraded to its high-luminosity phase, the HL-LHC, these systems must cope with an overwhelming data rate, corresponding to roughly 5% of total internet traffic, and will face unprecedented data complexity. To preserve the data quality needed for meaningful physics analyses, highly efficient machine learning (ML) algorithms are being adopted for data processing. This has necessitated the development of novel methods and tools for extremely high-throughput, ultra-low-latency inference on specialised hardware.
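For a concrete sense of this toolflow, the following is a minimal sketch using the open-source hls4ml package, one of the tools developed for trigger-level ML inference on FPGAs. The network architecture, output directory, and FPGA part below are illustrative placeholders, not an actual LHC trigger configuration.

```python
# Minimal sketch: compiling a small Keras model to FPGA firmware with hls4ml.
# The architecture, FPGA part, and output directory are illustrative choices.
import numpy as np
from tensorflow import keras
import hls4ml

# A tiny fully connected classifier standing in for a trigger-level model.
model = keras.Sequential([
    keras.layers.Dense(32, activation='relu', input_shape=(16,)),
    keras.layers.Dense(5, activation='softmax'),
])

# Derive an hls4ml configuration (fixed-point precision, parallelism) from the model.
config = hls4ml.utils.config_from_keras_model(model, granularity='model')

# Convert to a high-level-synthesis project targeting an example FPGA part.
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir='hls4ml_prj',       # hypothetical output directory
    part='xcu250-figd2104-2L-e',   # example part used in hls4ml tutorials
)

# Emulate the fixed-point firmware in software to check agreement with Keras.
hls_model.compile()
x = np.random.rand(1, 16).astype(np.float32)
print('keras:', model.predict(x), 'hls4ml:', hls_model.predict(x))
```

From here, the generated project can be synthesised to obtain latency and resource estimates, which is how such designs are iterated on before deployment.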
At the same time, rapid developments in large language models (LLMs) have produced powerful, highly accurate models that can be trained without supervision on vast amounts of data to learn rich, high-dimensional embeddings of language that can be reused across many tasks; such models are referred to as foundation models.
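The sketch below illustrates, in miniature, the self-supervised recipe behind such models: a transformer encoder is trained to reconstruct masked tokens, and its contextual embeddings can then be reused downstream. The vocabulary, model dimensions, and data are toy assumptions, not any particular foundation model.

```python
# Minimal sketch of masked-token self-supervised pretraining in PyTorch.
# Vocabulary size, model dimensions, and data are toy placeholders.
import torch
import torch.nn as nn

VOCAB, D_MODEL, SEQ_LEN, MASK_ID = 100, 64, 12, 0

class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, VOCAB)  # predicts the masked tokens

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))   # h: contextual embeddings
        return self.head(h), h

model = TinyEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):                           # toy loop on random "sentences"
    tokens = torch.randint(1, VOCAB, (8, SEQ_LEN))
    mask = torch.rand(tokens.shape) < 0.15        # hide 15% of positions
    corrupted = tokens.masked_fill(mask, MASK_ID)
    logits, _ = model(corrupted)
    loss = loss_fn(logits[mask], tokens[mask])    # reconstruct only masked tokens
    opt.zero_grad(); loss.backward(); opt.step()

# After pretraining, the embeddings h can be reused for downstream tasks.
_, embeddings = model(torch.randint(1, VOCAB, (1, SEQ_LEN)))
print(embeddings.shape)  # (1, SEQ_LEN, D_MODEL)
```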
In this talk, we will discuss how real-time ML is used to process and filter enormous amounts of data in order to improve physics acceptance. We will present state-of-the-art techniques for designing and deploying ultrafast ML algorithms on FPGA and ASIC hardware. Finally, we will explore what role foundation models and unsupervised learning could play in improving the accuracy, and potentially the processing speed, of particle physics tasks, and whether it is feasible to deploy such enormous models for real-time inference in particle physics experiments.
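One ingredient in making models feasible for such deployments is quantization-aware training, in which a network is trained with the few-bit fixed-point arithmetic of the target hardware already in the loop. The sketch below uses the QKeras library, which extends Keras with quantized layers for exactly this purpose; the layer sizes and bit widths are illustrative assumptions.

```python
# Minimal sketch: quantization-aware training with QKeras, so that weights and
# activations fit the low-bit fixed-point arithmetic available on FPGAs.
# Layer widths and bit choices are illustrative, not a production trigger model.
import numpy as np
from tensorflow import keras
from qkeras import QDense, QActivation, quantized_bits, quantized_relu

model = keras.Sequential([
    QDense(32, input_shape=(16,),
           kernel_quantizer=quantized_bits(6, 0, alpha=1),  # 6-bit weights
           bias_quantizer=quantized_bits(6, 0, alpha=1)),
    QActivation(quantized_relu(6)),                         # 6-bit activations
    QDense(5,
           kernel_quantizer=quantized_bits(6, 0, alpha=1),
           bias_quantizer=quantized_bits(6, 0, alpha=1)),
    keras.layers.Activation('softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Toy data: training proceeds exactly as with a float Keras model, but the
# quantizers constrain the learned parameters to low-precision values.
x = np.random.rand(256, 16).astype(np.float32)
y = np.random.randint(0, 5, size=256)
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```

A quantized model of this kind can then be passed through a toolflow such as hls4ml, shrinking the FPGA resource and latency footprint relative to its floating-point counterpart.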