Speaker
Description
Particle-flow reconstruction is crucial to analyses performed at general-purpose detectors such as ATLAS and CMS. Recent developments have shown that machine-learned particle-flow reconstruction using graph neural networks offers a prospect for computationally efficient event reconstruction [1,2]. Focusing on the scalability of machine-learning models for full event reconstruction, we compare two alternative models for particle-flow reconstruction that can process full events consisting of tens of thousands of input elements while avoiding quadratic memory allocation and computational cost. We test the models on a newly developed, granular and detailed dataset for particle-flow reconstruction studies, based on full GEANT4 detector simulation. Using supercomputing resources, we carry out extensive hyperparameter optimization to choose a model configuration that significantly outperforms the baseline rule-based implementation on a cluster-based dataset, in which the inputs are charged-particle tracks and calorimeter clusters. We characterize the physics performance of the model using event-level quantities such as the jet and missing transverse energy response, as well as its computational performance, and find that using mixed precision can significantly improve training speed. We further demonstrate that the resulting model architecture and software setup are highly portable across hardware vendors, supporting training on NVIDIA, AMD, and Habana cards. Finally, we show that the model can alternatively be trained on a highly granular dataset consisting of tracks and raw calorimeter hits, resulting in a physics performance that is competitive with baseline particle flow and currently limited by training throughput. We expect that with additional effort in dataset design, model development, and high-performance training, it will be possible to improve event reconstruction performance over current baselines.
The extensive simulated dataset and the model training code are made available in accordance with the FAIR principles.
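To illustrate the scalability point above, the following is a minimal sketch of one generic way to avoid quadratic memory and compute when processing events with tens of thousands of input elements: restricting attention-like pairwise operations to local bins of elements, so each event costs O(N·N/B) rather than O(N²). The binning scheme here (uniform by index) and the function `binned_attention` are purely illustrative assumptions, not the architectures studied in the talk or in [1,2].

```python
import numpy as np

def binned_attention(x, num_bins=8):
    """Illustrative sketch: compute softmax attention only within bins.

    Full self-attention over n elements needs an (n, n) score matrix;
    here each bin of size ~n/num_bins needs only (n/B, n/B), so memory
    scales linearly in n for a fixed bin size. The index-based binning
    below is a placeholder; a real model would bin by learned or
    kinematic features.
    """
    n, d = x.shape
    bins = np.arange(n) * num_bins // n  # bin id in [0, num_bins)
    out = np.zeros_like(x)
    for b in range(num_bins):
        idx = np.where(bins == b)[0]
        xb = x[idx]                         # (nb, d)
        scores = xb @ xb.T / np.sqrt(d)     # (nb, nb): quadratic only within the bin
        scores -= scores.max(axis=1, keepdims=True)  # numerical stability
        w = np.exp(scores)
        w /= w.sum(axis=1, keepdims=True)   # per-row softmax weights
        out[idx] = w @ xb                   # weighted sum of bin members
    return out

# Example: a full "event" with 10,000 input elements of 16 features each.
x = np.random.default_rng(0).normal(size=(10000, 16)).astype(np.float32)
y = binned_attention(x)
```

With full attention, the 10,000-element event above would require a 10,000 x 10,000 score matrix per layer; the binned variant only ever materializes blocks of roughly 1,250 x 1,250.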
[1] https://arxiv.org/abs/2101.08578
[2] https://arxiv.org/abs/2303.17657