Operation of the CMS Level-1 calorimeter trigger in high pileup conditions and motivations for Phase 2

21 Sept 2018, 09:00
25m
CAR 1.09 (aula)

Oral — Systems, Planning, Installation, Commissioning and Running Experience

Speaker

Aaron Bundock (Imperial College (GB))

Description

To maintain high trigger efficiencies and stable rates through the significant changes to beam conditions during 2017, the CMS Level-1 calorimeter trigger required dynamic and flexible operation. Running successfully since 2015 and built on Xilinx Virtex-7 690 FPGAs and 10 Gbps optical links, the versatile design has allowed algorithms to be adapted and improved quickly, mitigating large rates from high pileup and changes in detector response as the LHC faced a number of unexpected challenges. Operational experience and lessons learned will be presented, along with how they will inform important decisions in the design and implementation of the Phase 2 trigger upgrade.

Summary

During 2017, the LHC for the first time delivered an instantaneous luminosity of 2×10^34 cm^-2s^-1, double the nominal design performance. Delivering this unprecedented luminosity generated a number of challenges, both for the LHC to maintain stable beam conditions for extended proton fills, and for CMS to run successfully with highly efficient triggers sensitive to a wide range of interesting physics signatures and keep within the 100 kHz Level-1 (L1) trigger bandwidth.

The ability of CMS to adapt to changing beam conditions was significantly enhanced by using 0.94+0.94 Tbps input+output bandwidth optical processor boards with a large Virtex-7 FPGA. These provide substantial logic resources with flexible architecture that can be quickly utilised to add new features and develop existing algorithms as conditions change. Considerations to architecture and features of the Phase 2 trigger upgrade that fully exploit this flexibility will be discussed.

An increase in the number of protons per bunch crossing and a reduced beam emittance raised the number of pileup interactions considerably: peak average pileup increased from ~45 in 2016 to ~55 in 2017. In this regime, the rates of the missing transverse energy, jet sum and low-threshold multi-jet algorithms depend strongly on pileup, with some rates doubling for an increase of ~5 in pileup. This required the implementation of additional region-dependent pileup mitigation in the L1 calorimeter algorithms, using LUTs tuned on data to estimate, event by event, the energy threshold (typically below 10 GeV) under which deposits from pileup should be suppressed, effectively reducing the pileup dependence of the rates without losing signal efficiency. For Phase 2, tracker information will be available at L1, which will greatly improve resilience to pileup levels potentially reaching as high as 200. Current developments in pileup mitigation for Phase 2 will be presented.
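The region-dependent scheme described above can be sketched as follows. This is a minimal illustration, not the CMS firmware: the pileup proxy (counting low-energy towers), the LUT entries and the region/pileup binning are all assumptions standing in for the data-tuned values.

```python
# Hedged sketch of LUT-based, region-dependent pileup suppression.
# All numeric values below are illustrative assumptions, not CMS calibrations.

def estimate_pileup(towers):
    """Event-by-event pileup proxy: count towers with small deposits (assumption)."""
    return sum(1 for _, e in towers if 0.0 < e < 3.0)

def pileup_bin(n_active):
    """Coarse pileup binning for the LUT address (assumption)."""
    return min(n_active // 10, 2)

# Hypothetical LUT: (eta region, pileup bin) -> suppression threshold in GeV,
# tuned on data and kept below 10 GeV as in the text.
THRESHOLD_LUT = {
    ("central", 0): 0.5, ("central", 1): 1.5, ("central", 2): 3.0,
    ("forward", 0): 1.0, ("forward", 1): 4.0, ("forward", 2): 8.0,
}

def suppress(towers):
    """Zero out deposits below the region- and pileup-dependent threshold."""
    pu = pileup_bin(estimate_pileup(towers))
    return [
        (region, e if e >= THRESHOLD_LUT[(region, pu)] else 0.0)
        for region, e in towers
    ]
```

Because the threshold rises with the event's estimated pileup, soft pileup deposits are removed in busy events while hard signal deposits survive, flattening the rate-versus-pileup dependence.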

The intense luminosity delivered during 2017 also resulted in higher levels of radiation damage to the detectors, particularly in the forward regions of the calorimeters. In ECAL, the forward-most PbWO4 crystals required amplification factors of up to 50 by the end of 2017, bringing noise levels up to an RMS of 1-2 GeV, above the threshold for energy deposits to be included in the calorimeter trigger algorithms. Mitigating this effect in 2018 will require adapting the current calibration scheme to progressively zero-suppress the corresponding trigger primitives, using programmable LUTs, as the detector response evolves throughout the year. The ability to effectively manage changes in detector response will be very important for the HL-LHC.
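The progressive zero-suppression can be illustrated with a short sketch. The noise model and the n-sigma rule are assumptions for illustration (the unit-gain noise constant is chosen only so that an amplification of 50 reproduces the ~2 GeV RMS quoted above); the real scheme uses calibrated, programmable LUT entries.

```python
# Hedged sketch: as radiation damage forces larger amplification factors,
# the noise RMS grows, and the zero-suppression (ZS) threshold written into
# the programmable LUT is raised in step. Illustrative constants only.

NOISE_RMS_UNIT_GAIN = 0.04  # GeV at amplification 1 (assumed value)

def noise_rms(amplification):
    """Assume noise scales linearly with the amplification factor."""
    return NOISE_RMS_UNIT_GAIN * amplification

def zs_threshold(amplification, n_sigma=2.0):
    """LUT entry: suppress trigger primitives below n_sigma times the noise RMS."""
    return n_sigma * noise_rms(amplification)

def apply_zs(tp_energy_gev, amplification):
    """Zero-suppress a trigger primitive against the current LUT entry."""
    return tp_energy_gev if tp_energy_gev >= zs_threshold(amplification) else 0.0
```

Reprogramming the LUT as the measured response degrades keeps noisy forward trigger primitives out of the algorithms without retuning the algorithms themselves.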

During 2017 data taking, a problem developed in one of the LHC sectors that made it difficult to maintain stable beam conditions with the nominal long bunch trains (48b fill scheme). Operating at full luminosity required switching to smaller bunch trains (8b4e fill scheme). This induced large variations in the level of out-of-time pileup, leading to higher rates, which were only partially recovered by improvements to pileup mitigation. Potential ways of further handling the increased pileup dependence due to bunch structure, such as Finite Impulse Response (FIR) filters, will be discussed, particularly with regard to the HL-LHC.
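The FIR approach mentioned above can be sketched in a few lines. The tap weights here are illustrative assumptions, not a tuned CMS filter: they are chosen to sum to zero, so a constant out-of-time baseline from neighbouring bunch crossings cancels while an in-time pulse survives.

```python
# Hedged sketch of a Finite Impulse Response (FIR) filter over successive
# bunch-crossing (BX) samples. The taps are assumed, not CMS-tuned values;
# they sum to zero so a flat out-of-time pileup baseline is subtracted.

FIR_WEIGHTS = [-0.5, 1.0, -0.5]

def fir_filter(samples, weights=FIR_WEIGHTS):
    """Slide the FIR window across the per-BX energy samples."""
    n = len(weights)
    return [
        sum(w * s for w, s in zip(weights, samples[i:i + n]))
        for i in range(len(samples) - n + 1)
    ]
```

A zero-sum filter of this kind is insensitive to the baseline level itself, which is what varies between the 8b4e and long-train fill schemes; the price is a modified pulse shape around the in-time crossing that the downstream calibration must account for.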

Primary author

Aaron Bundock (Imperial College (GB))
