25–28 Sept 2023
Imperial College London
Europe/London timezone

Optimizing Sparse Neural Architectures for Low-Latency Anomaly Detection

25 Sept 2023, 18:15
5m
Blackett Laboratory, Lecture Theatre 1 (Imperial College London)

Lightning Talk | Contributed Talks

Speaker

Luke McDermott (UC San Diego & Modern Intelligence)

Description

Within the framework of the L1 trigger's data filtering mechanism, ultra-fast autoencoders are instrumental in capturing new-physics anomalies. Given the immense influx of data at the LHC, these networks must operate in real time, making rapid decisions to sift through vast volumes of data. Meeting this demand for speed without sacrificing accuracy is essential, especially given the time-sensitive nature of identifying key physics events. To meet the ultra-low-latency requirements at the trigger, we apply hardware-aware neural architecture search techniques to find optimal models. Our approach uses supernetworks to explore candidate subnetworks through evolutionary search and unstructured neural network pruning, facilitating the discovery of high-performing sparse autoencoders. For efficient search, we train predictor networks for each objective, lowering the sample cost of the evolutionary search; here, we optimize for the performance of the post-pruning model. Due to the unique nature of reconstruction-based anomaly detection methods, we also explore how neural network pruning and sparsity affect generalization to out-of-distribution data in this setting.
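
The abstract describes the search procedure only at a high level. As a rough illustration of the idea of scoring candidates by their post-pruning reconstruction error, the sketch below runs a toy evolutionary loop over (bottleneck width, sparsity) configurations using PyTorch's built-in magnitude-based unstructured pruning. This is not the authors' implementation: the layer sizes, sparsity levels, mutation rule, training budget, and random stand-in data are all illustrative assumptions, and a real search would sample subnetworks from a trained supernetwork and use predictor networks in place of direct evaluation.

```python
# Minimal sketch (not the authors' code): evolutionary search over sparse
# autoencoder configurations, scoring each candidate by post-pruning MSE.
import copy
import random
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


def make_autoencoder(in_dim: int, bottleneck: int) -> nn.Sequential:
    """Small dense autoencoder; illustrative sizes only."""
    return nn.Sequential(
        nn.Linear(in_dim, 32), nn.ReLU(),
        nn.Linear(32, bottleneck), nn.ReLU(),
        nn.Linear(bottleneck, 32), nn.ReLU(),
        nn.Linear(32, in_dim),
    )


def prune_unstructured(model: nn.Module, sparsity: float) -> nn.Module:
    """Apply magnitude-based unstructured pruning to every Linear layer."""
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=sparsity)
            prune.remove(module, "weight")  # make the zeroed weights permanent
    return model


def post_pruning_loss(bottleneck: int, sparsity: float, x: torch.Tensor) -> float:
    """Briefly train, prune, then measure reconstruction MSE of the pruned model."""
    model = make_autoencoder(x.shape[1], bottleneck)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(50):  # short training budget, sketch only
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), x)
        loss.backward()
        opt.step()
    pruned = prune_unstructured(copy.deepcopy(model), sparsity)
    with torch.no_grad():
        return nn.functional.mse_loss(pruned(x), x).item()


# Toy "background" data standing in for trigger-level inputs.
x = torch.randn(256, 16)

# Tiny evolutionary loop over (bottleneck width, sparsity).
population = [(random.choice([2, 4, 8]), random.choice([0.5, 0.8, 0.9]))
              for _ in range(6)]
for generation in range(3):
    scored = sorted(population, key=lambda cfg: post_pruning_loss(*cfg, x))
    parents = scored[:3]
    children = [(b, min(0.95, max(0.1, s + random.choice([-0.1, 0.1]))))
                for b, s in parents]
    population = parents + children

best = min(population, key=lambda cfg: post_pruning_loss(*cfg, x))
print("best (bottleneck, sparsity):", best)
```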

Primary authors

Dmitri Demler
Jason Weitz (UC San Diego)
Javier Mauricio Duarte (Univ. of California San Diego (US))
Luke McDermott (UC San Diego & Modern Intelligence)
Nhan Tran (Fermi National Accelerator Lab. (US))

Presentation materials