Description
For the High-Luminosity Large Hadron Collider, the trigger and data acquisition system of the CMS experiment will be entirely replaced. Novel design choices have been explored, including ATCA prototyping platforms with SoC controllers and newly available interconnect technologies based on serial optical links with data rates of up to 28 Gb/s. Trigger analyses will be performed by sophisticated algorithms, including widespread use of Machine Learning, implemented in large FPGAs such as the Xilinx UltraScale family. The system will process over 50 Tb/s of detector data at an event rate of 750 kHz. We describe the system design and prototyping and review exemplar trigger algorithms.
Summary (500 words)
The High-Luminosity LHC will open an unprecedented window on the weak-scale nature of the Universe, providing high-precision measurements of the Standard Model as well as searches for new physics. Such precision measurements and searches require information-rich datasets with a statistical power that matches the high luminosity provided by the upgrade of the LHC. Collecting those datasets efficiently will be a challenging task in the harsh environment of 200 proton-proton interactions per LHC bunch crossing.
For this purpose, the Compact Muon Solenoid (CMS) collaboration has designed an efficient data-processing hardware trigger (Level-1) that will include tracking information and high-granularity calorimeter information. The system design takes full advantage of advances in FPGA and link technologies over recent years, providing a high-performance, low-latency (< 12.5 μs) computing platform with large throughput and sophisticated data correlation across diverse detector data sources.
Modern technologies offer an effective solution to achieve these goals. The upgraded system will make use of recently released Xilinx UltraScale FPGA technology, hosted on ATCA hardware platforms that include SoC controllers and communicate through optical interconnects at up to 28 Gb/s.
In order to enable the physics programme, sophisticated trigger algorithms are required, for example to combine data from different detector elements optimally. Machine Learning algorithms will be used widely, and a range of algorithms has been prototyped, in many cases using High Level Synthesis tools to produce firmware that has been tested on prototype hardware, as illustrated by the sketch below.
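To give a flavour of the kind of firmware such High Level Synthesis tools generate, the following is a minimal, hypothetical C++ sketch of a single fully-connected neural-network layer with a ReLU activation, written in the style accepted by FPGA HLS compilers. The function name, layer sizes, data types, and pragma are illustrative assumptions only and do not correspond to the actual CMS Level-1 firmware; production firmware is generated automatically from trained models and uses fixed-point arithmetic tuned per algorithm.

// Illustrative HLS-style C++ sketch of one fully-connected layer with ReLU.
// All names and sizes are hypothetical, not taken from the CMS firmware.
#include <cstddef>

typedef float weight_t;  // real HLS firmware would typically use a
typedef float data_t;    // fixed-point type such as ap_fixed<16,6>

const std::size_t N_IN  = 16;  // number of layer inputs (illustrative)
const std::size_t N_OUT = 8;   // number of layer outputs (illustrative)

void dense_relu(const data_t in[N_IN],
                const weight_t weights[N_OUT][N_IN],
                const weight_t bias[N_OUT],
                data_t out[N_OUT]) {
  // #pragma HLS PIPELINE  // in HLS, pragmas like this control latency/parallelism
  for (std::size_t o = 0; o < N_OUT; ++o) {
    data_t acc = bias[o];
    for (std::size_t i = 0; i < N_IN; ++i) {
      acc += weights[o][i] * in[i];  // multiply-accumulate, unrolled on the FPGA
    }
    out[o] = (acc > data_t(0)) ? acc : data_t(0);  // ReLU activation
  }
}

In a real design, the fixed-point precision and the degree of loop unrolling are the parameters traded against FPGA resources and the latency budget of the trigger.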
The talk will cover the technological aspects of the recently approved upgrade to the trigger system, emphasising the results of prototyping and their impact on the system design. The implementation of conventional and Machine Learning algorithms will be discussed, with examples drawn from the Technical Design Report [1].
[1] CMS Collaboration, "The Phase-2 Upgrade of the CMS Level-1 Trigger", CERN-LHCC-2020-004, http://cds.cern.ch/record/2714892