23–28 Oct 2022
Villa Romanazzi Carducci, Bari, Italy

Equivariant Neural Networks for Particle Physics: PELICAN

27 Oct 2022, 11:00
30m
Area Poster (Floor -1), Villa Romanazzi

Poster | Track 2: Data Analysis - Algorithms and Tools
Poster session with coffee break

Speaker

Alexander Bogatskiy (Flatiron Institute, Simons Foundation)

Description

We hold these truths to be self-evident: that all physics problems are created unequal, that they are endowed with their own unique data structures and symmetries, that among these are tensor transformation laws, Lorentz symmetry, and permutation equivariance. Much attention has been paid to the application of common machine learning methods in physics experiments and theory; far less has been paid to the methods themselves and their viability as physics modeling tools. One of the most fundamental aspects of modeling physical phenomena is the identification of the symmetries that govern them. Incorporating these symmetries into a model can reduce the risk of over-parameterization and consequently improve its robustness and predictive power. As the use of neural networks in particle physics continues to grow, more effort will need to be invested in narrowing the gap between the black-box models of machine learning and the analytic models of physics.

Building on previous work, we demonstrate how careful choices in the details of network design, producing a model both simpler and more grounded in physics than traditional approaches, can yield state-of-the-art performance on problems including jet tagging and particle four-momentum reconstruction. We present the Permutation-Equivariant and Lorentz-Invariant or Covariant Aggregator Network (PELICAN), which is built on three key ideas: symmetry under permutations of particles, Lorentz symmetry, and the ambiguity of the aggregation process in Graph Neural Networks. For the first, we use the most general permutation-equivariant linear layer acting on rank-2 tensors, which can be viewed as a maximal generalization of message passing. For the second, we use classical theorems of invariant theory to reduce the 4-vector inputs to a tensor of Lorentz-invariant latent quantities. Finally, the flexibility of the aggregation process commonly used in graph networks can be leveraged for improved accuracy, in particular by allowing variable scaling with the size of the input.
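As a rough illustration of these three ingredients, the following is a minimal NumPy sketch, not the authors' implementation: the function names (minkowski_dots, equivariant_2_to_2), the particular subset of equivariant basis maps, and the N**alpha rescaling of summed aggregations are illustrative assumptions based on the description above.

```python
import numpy as np

def minkowski_dots(p):
    """Pairwise Minkowski dot products d_ij = p_i . p_j for four-momenta
    p of shape (N, 4), metric signature (+, -, -, -). The resulting
    rank-2 (N x N) tensor is Lorentz invariant by construction."""
    eta = np.diag([1.0, -1.0, -1.0, -1.0])
    return p @ eta @ p.T  # shape (N, N)

def equivariant_2_to_2(T, weights, alpha=1.0):
    """Toy permutation-equivariant linear layer on a rank-2 tensor T of
    shape (N, N): a weighted combination of a few of the 15 linearly
    independent equivariant maps from rank-2 to rank-2 tensors. Summed
    aggregations are divided by N**alpha (an illustrative choice) so the
    output's scaling with the number of input particles can be tuned."""
    N = T.shape[0]
    s = float(N) ** alpha
    ones = np.ones((N, N))
    basis = np.stack([
        T,                                                     # identity
        T.T,                                                   # transpose
        np.broadcast_to(T.sum(axis=0) / s, (N, N)),            # column sums, broadcast
        np.broadcast_to(T.sum(axis=1)[:, None] / s, (N, N)),   # row sums, broadcast
        ones * (T.sum() / s**2),                               # total sum
        ones * (np.trace(T) / s),                              # diagonal sum
        np.diag(np.diag(T)),                                   # diagonal, kept in place
    ])
    return np.tensordot(weights, basis, axes=1)                # shape (N, N)

# Toy usage: invariants of five random four-momenta through one layer.
rng = np.random.default_rng(0)
p = rng.normal(size=(5, 4))
T = minkowski_dots(p)
out = equivariant_2_to_2(T, rng.normal(size=7))
assert out.shape == T.shape
```

In PELICAN itself, the full basis of such equivariant maps is combined channel-wise inside a deep network with nonlinearities; the sketch above only shows the structural idea of Lorentz-invariant inputs feeding a permutation-equivariant layer with tunable aggregation scaling.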

Significance

This is one of the first applications of group-equivariant neural architectures in particle physics. In particular, the novel permutation-equivariant layer allows for efficient weight sharing, state-of-the-art performance, and better generalization at lower model complexity. PELICAN is also unique among particle physics architectures in being able to tackle manifestly Lorentz-equivariant tasks, such as momentum reconstruction, without breaking core physical symmetries. Among classification tasks, the Top Tagging dataset has been widely used as a benchmark for many architectures, and PELICAN is now the state of the art on it, underscoring the importance of further research into symmetry-based network design.

References

[Previous work on this problem] A. Bogatskiy, B. Anderson, J. T. Offermann, M. Roussi, D. W. Miller, and R. Kondor, "Lorentz Group Equivariant Neural Network for Particle Physics," ICML 2020, arXiv:2006.04780.

Poster on PELICAN presented at SNOWMASS 2022 in Seattle, WA, USA (not yet available online as of this writing).

Primary authors

Alexander Bogatskiy (Flatiron Institute, Simons Foundation)
Jan Tuzlic Offermann (University of Chicago (US))
Timothy Hoffman (University of Chicago)
Xiaoyang Liu (University of Chicago)
David Miller (University of Chicago (US))

Presentation materials

There are no materials yet.