Speaker
Description
“The Multi-disciplinary Use Cases for Convergent Approaches to AI Explainability (MUCCA) project is pioneering efforts to enhance the transparency and interpretability of AI algorithms in complex scientific fields. This study focuses on the application of Explainable AI (XAI) in high-energy physics (HEP), utilising a range of machine learning (ML) methodologies, from classical boosted decision trees to Graph Neural Networks (GNNs), to explore new physics models.
Our work leverages case studies from the ATLAS experiment at the Large Hadron Collider (LHC) to demonstrate the potential of ML in improving sensitivity to new physics. Notably, GNNs outperformed traditional Convolutional Neural Networks (CNNs), with the DarkJetGraphs code achieving a 2–5% improvement in detection accuracy and twice the background rejection efficiency. These findings affirm the value of GNNs in extending the search space for new physics models that conventional methods may not adequately capture. Balancing the use of cutting-edge ML with transparency and interpretability through XAI techniques is critical to ensuring both scientific rigour and robust result interpretation.
The presented research highlights this balance, with further developments in XAI techniques, including Kappa pruning and differentiable programming, shown to enhance performance further. A full publication of these methods and results is anticipated in Autumn 2024.