The Multi-disciplinary Use Cases for Convergent new Approaches to AI explainability (MUCCA) project is pioneering efforts to enhance the transparency and interpretability of AI algorithms in complex scientific endeavours. The presented study focuses on the role of Explainable AI (xAI) in the domain of high-energy physics (HEP). Approaches based on Machine Learning (ML) methodologies, ranging from classical boosted decision trees to Graph Neural Networks, are considered in searches for new physics models.
A set of use cases, based on studies performed with the ATLAS experiment at the Large Hadron Collider (LHC), is exploited to highlight the potential of ML. The results demonstrate significant enhancements in sensitivity when ML approaches are used, affirming the effectiveness of these tools in exploring a broad range of phase space and new physics models that traditional searches may not reach. The studies performed so far and presented in this talk emphasise the crucial balance in HEP between state-of-the-art ML techniques and the transparency achievable through xAI; maintaining this balance is critical for consistent interpretation of results and scientific rigour.
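To illustrate the kind of workflow the abstract alludes to, the sketch below trains a boosted-decision-tree-style signal/background classifier on a toy dataset and inspects it with a post-hoc explainer. This is only a minimal illustration under assumed tools (scikit-learn and the SHAP library, with synthetic data); it is not the analysis pipeline used in the MUCCA or ATLAS studies.

```python
# Illustrative sketch only: toy signal/background classification with a
# post-hoc explainability step. The dataset, features, and use of SHAP are
# assumptions for illustration, not taken from the MUCCA/ATLAS studies.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
import shap  # post-hoc explainability library (assumed available)

# Toy "event" sample: 5 kinematic-like features, binary signal/background label.
X, y = make_classification(n_samples=5000, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A gradient-boosted decision tree classifier, a classical choice in HEP searches.
clf = GradientBoostingClassifier(n_estimators=200, max_depth=3, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# Post-hoc explanation: per-feature contributions to each prediction,
# giving a handle on which inputs drive the discriminant output.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test[:100])
print("mean |SHAP| value per feature:", np.abs(shap_values).mean(axis=0))
```

In a real search the features would be reconstructed physics quantities and the explanations would be checked against physics expectations, which is one practical way the transparency/performance balance mentioned above can be monitored.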