The Large Hadron Collider (LHC) will soon undergo a high-luminosity (HL) upgrade to improve future searches for new particles and to measure particle properties with increased precision. By the end of its data-taking period, the upgrade is expected to provide a dataset ten times larger than the one currently available. The increased beam intensity will also increase the number of simultaneous interactions per bunch crossing (pileup), presenting greater challenges for particle reconstruction. To address this, the CMS Collaboration will replace its current endcap calorimeters with a radiation-hard high-granularity calorimeter (HGCAL). With six million readout channels, HGCAL will measure the energy and position of particles with unprecedented precision. However, this level of granularity will result in a significant increase in data rates, of the order of 5 Pb/s. These extreme rates must be reduced by several orders of magnitude within a few microseconds by the CMS trigger system in order to be feasibly processed. Front-end ECON-T application-specific integrated circuits (ASICs) perform the first-stage reduction. The current baseline algorithm applies a variable threshold, keeping only the trigger cells (TCs) that surpass it and discarding the rest. Although this baseline successfully reduces the data rates, it fails to exploit the full granularity of HGCAL. We introduce an ASIC implementation of a physics-aware conditional autoencoder (CAE) that compresses the full-granularity data online. The network differs from a traditional autoencoder architecture by conditioning on physically significant information within the latent space. Through this approach, we are able to efficiently encode the data before transmission off-detector, where decoding can take place on dedicated FPGAs.
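As a rough illustration of the baseline selection described above, the sketch below keeps only the trigger cells whose charge exceeds a programmable threshold and discards the rest. The array size, variable names, and threshold value are assumptions made for illustration; they do not reflect the actual ECON-T firmware interface or data format.

```python
import numpy as np

# Illustrative sketch of the baseline threshold selection on one input frame.
# N_TC, tc_charges, and threshold are hypothetical names, not ECON-T identifiers.
N_TC = 48  # number of trigger cells handled per frame in this sketch

def threshold_select(tc_charges: np.ndarray, threshold: float):
    """Keep only trigger cells whose charge exceeds the programmable threshold.

    Returns the surviving charges and their TC addresses; everything else is
    dropped, which is how the baseline reduces the data rate at the cost of
    losing the full granularity.
    """
    kept = tc_charges > threshold
    addresses = np.nonzero(kept)[0]
    return tc_charges[kept], addresses

# Example: a sparse toy frame where most TCs fall below threshold
rng = np.random.default_rng(0)
frame = rng.exponential(scale=1.0, size=N_TC)
charges, addrs = threshold_select(frame, threshold=2.0)
print(f"kept {len(addrs)} of {N_TC} trigger cells")
```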
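The conditional-autoencoder idea can likewise be sketched in a few lines: the encoder (the part that would live in the ASIC) compresses the trigger-cell charges into a small latent vector, while the decoder (intended to run off-detector on FPGAs) additionally receives a conditioning quantity. In this sketch the module charge sum stands in for the physically significant information mentioned in the text, and all layer sizes are placeholders rather than the network presented in the talk.

```python
import torch
import torch.nn as nn

# Minimal CAE sketch, assuming 48 trigger-cell inputs, a small latent space, and
# a single conditioning scalar (the charge sum, used here only as a stand-in).
N_TC, LATENT = 48, 16

class ConditionalAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: the on-detector (ASIC) side; compresses TC charges to the latent.
        self.encoder = nn.Sequential(
            nn.Linear(N_TC, 64), nn.ReLU(), nn.Linear(64, LATENT)
        )
        # Decoder: the off-detector (FPGA) side; sees the latent plus the condition.
        self.decoder = nn.Sequential(
            nn.Linear(LATENT + 1, 64), nn.ReLU(), nn.Linear(64, N_TC)
        )

    def forward(self, tc_charges: torch.Tensor) -> torch.Tensor:
        condition = tc_charges.sum(dim=1, keepdim=True)  # illustrative condition
        latent = self.encoder(tc_charges)
        return self.decoder(torch.cat([latent, condition], dim=1))

model = ConditionalAutoencoder()
frame = torch.rand(8, N_TC)                  # batch of 8 toy TC frames
recon = model(frame)
loss = nn.functional.mse_loss(recon, frame)  # reconstruction objective (illustrative)
```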