We discuss the feasibility of a system capturing Level-1 intermediate data at the LHC beam-crossing rate of 40 MHz and carrying out online analyses based on these data. This 40 MHz scouting system has the potential to enable the study of otherwise inaccessible signatures. In such a system, data from the Level-1 trigger are preprocessed in an FPGA before being transferred to a computer for further analysis. A demonstrator was operated at the end of Run 2 using trigger data from CMS. We present this system as well as a possible architecture for a Phase-2 40 MHz scouting system.
The CMS experiment will be upgraded for operation at the High-Luminosity LHC to maintain and extend its optimal physics performance under extreme pileup conditions. Upgrades will include an entirely new tracking system, supplemented by a track trigger processor capable of providing tracks to the Level-1 trigger, as well as a high-granularity calorimeter in the endcap region. New front-end and back-end electronics will also provide the Level-1 trigger with high-resolution information from the barrel calorimeter and the muon systems. The upgraded Level-1 processors, based on powerful FPGAs, will be able to carry out sophisticated feature searches with resolutions often similar to the offline ones, while keeping pileup effects under control.
In this paper, we discuss the feasibility of a system capturing Level-1 intermediate data at the beam-crossing rate of 40 MHz and carrying out online analyses based on these limited-resolution data.
This 40 MHz scouting system would provide fast and virtually unlimited statistics for detector diagnostics, alternative luminosity measurements and, in some cases, calibrations. It also has the potential to enable the study of otherwise inaccessible signatures: those too common to fit within the Level-1 accept budget, or those with requirements orthogonal to “mainstream” physics, such as long-lived particles.
To realise such a system, data from the Level-1 trigger are branched off into a dedicated system with optical multi-gigabit inputs and powerful FPGAs for preprocessing, before being moved into the memory of compute nodes. Algorithms familiar from the big-data industry can then be used to operate quickly on the recorded data before a distilled data set is stored on disk.
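As an illustration of the host-side reduction step described above, the sketch below folds incoming muon candidates into a transverse-momentum histogram, a far smaller "distilled" data set than the raw 40 MHz stream. The record layout, field names, and binning are illustrative assumptions, not the actual scouting data format.

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Hypothetical layout of one preprocessed muon candidate as it might
// arrive in host memory; the real scouting format differs.
struct MuonRecord {
    uint32_t bx;   // bunch-crossing number
    float    pt;   // transverse momentum in GeV
};

// Reduce a batch of candidates to a 64-bin pT histogram (2 GeV bins).
// Entries above the range are accumulated in the last (overflow) bin.
std::array<uint64_t, 64> histogram_pt(const std::vector<MuonRecord>& batch) {
    std::array<uint64_t, 64> bins{};
    for (const auto& m : batch) {
        std::size_t b = static_cast<std::size_t>(m.pt / 2.0f);
        bins[b < 64 ? b : 63] += 1;
    }
    return bins;
}
```

In a real deployment, a reduction of this kind would run continuously on each batch delivered by DMA, with only the aggregated result written to disk.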
We discuss the design of a demonstrator system operated at the end of Run 2 using the Global Muon Trigger data from CMS. The demonstrator was implemented on a Xilinx KCU1500 Acceleration Development Board, which is equipped with a Xilinx Kintex UltraScale FPGA and provides eight GTH transceivers for optical multi-gigabit communication as well as two x8 PCIe Gen3 interfaces bifurcated from a x16 edge connector. The board received muon data via all eight input links at 10 Gbit/s and applied a basic zero-suppression scheme before transmitting the remaining valid data to the host computer via DMA. To optimise performance during data taking, the board-to-host DMA transfer rate was measured for different drivers and packet sizes. The system was then used to record data during the last week of the LHC's proton-proton run in 2018 as well as during the entire 2018 lead-lead run. At the LHC's peak instantaneous luminosity, up to 800 MB/s were transferred to the host, with this rate dropping as the luminosity decreased.
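The effect of a basic zero-suppression pass can be sketched as follows: only non-empty data words from each bunch crossing are kept before the DMA transfer. The 64-bit word granularity and the "all-zero means empty" validity criterion are assumptions for illustration, not the actual firmware logic of the demonstrator.

```cpp
#include <cstdint>
#include <vector>

// Sketch of basic zero suppression: given the words captured for one
// bunch crossing, keep only non-zero entries (assumed here to mark
// valid muon candidates) so that empty crossings cost no DMA bandwidth.
std::vector<uint64_t> zero_suppress(const std::vector<uint64_t>& frame) {
    std::vector<uint64_t> out;
    out.reserve(frame.size());
    for (uint64_t word : frame) {
        if (word != 0) {   // drop empty candidate slots
            out.push_back(word);
        }
    }
    return out;
}
```

Since most bunch crossings contain no muon candidates, even this trivial filter shrinks the output stream substantially, which is consistent with the observed host bandwidth tracking the instantaneous luminosity.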
Plans for further demonstrators envisaged for Run 3 as well as the requirements and possible architecture of a Phase-2 40 MHz scouting system are also discussed.