Description
The High-Luminosity LHC will generate unprecedented data rates, pushing real-time trigger systems to their limits. We present a novel approach deploying graph neural networks (GNNs) on FPGAs to achieve sub-microsecond inference for Level-0 muon triggers. By exploiting the sparse, relational structure of detector hits, the method preserves key spatial correlations while enabling hardware-efficient, low-latency execution. We explore model compression, pipelined parallelism, and resource-aware design to optimise throughput under stringent real-time constraints. Preliminary results indicate that this approach can scale to high-rate environments, demonstrating the potential of FPGA-accelerated GNNs for AI-assisted event selection at the first step of the Level-0 muon trigger chain. Our work highlights strategies for integrating machine learning with FPGA-based triggers, offering a path toward real-time processing in next-generation high-energy physics experiments.
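To make the combination of ideas concrete, the following is a minimal, purely illustrative sketch of one message-passing round over a hit graph with post-training fixed-point quantization (in the spirit of an `ap_fixed<8,4>` FPGA datatype). It is not the presented implementation: the layer structure, bit widths, and function names are assumptions for illustration, and a real deployment would go through a hardware synthesis flow rather than NumPy.

```python
import numpy as np

def quantize(x, total_bits=8, frac_bits=4):
    """Round/clip to a signed fixed-point grid, e.g. ap_fixed<8,4>-like."""
    scale = 2 ** frac_bits
    lo = -(2 ** (total_bits - 1)) / scale
    hi = (2 ** (total_bits - 1) - 1) / scale
    return np.clip(np.round(x * scale) / scale, lo, hi)

def message_passing_layer(node_feats, edge_index, W_msg, W_upd, bits=(8, 4)):
    """One sum-aggregation message-passing round with quantized
    weights and activations (hypothetical layer, for illustration).

    node_feats : (N, F) hit features (e.g. position, time)
    edge_index : (2, E) source/destination hit indices
    """
    total, frac = bits
    Wm = quantize(W_msg, total, frac)        # compressed edge-MLP weights
    Wu = quantize(W_upd, total, frac)        # compressed update weights
    src, dst = edge_index
    msgs = np.maximum(node_feats[src] @ Wm, 0.0)   # ReLU messages per edge
    agg = np.zeros_like(node_feats)
    np.add.at(agg, dst, msgs)                      # sparse sum aggregation
    out = np.maximum((node_feats + agg) @ Wu, 0.0) # residual update + ReLU
    return quantize(out, total, frac)              # quantized activations

# Tiny example graph: 4 hits in a chain 0->1->2->3
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
edges = np.array([[0, 1, 2], [1, 2, 3]])
Wm = rng.normal(size=(3, 3))
Wu = rng.normal(size=(3, 3))
y = message_passing_layer(x, edges, Wm, Wu)
```

The sparse `edge_index` form mirrors why GNNs suit detector data: compute scales with the number of hit pairs actually connected, which is what makes a pipelined, resource-aware FPGA mapping plausible at trigger rates.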