Interpretability of AI models and FAIR for AI

US/Central
Description

The FAIR4HEP project is a DOE-funded collaboration of scientists at UIUC, MIT, UCSD, UMN, and ANL whose purpose is to advance our understanding of the relationship between data and artificial intelligence (AI) models by empowering scientists to explore both through frameworks that adhere to the principles of findability, accessibility, interoperability, and reusability (FAIR). Using high-energy physics (HEP) as the science use case, the project will investigate FAIR ways to share AI models and related data, create an environment where novel AI approaches can be explored and applied to new data, and enable new insights into applying AI techniques.

Videoconference
IRIS-HEP topical meetings
Zoom Meeting ID
68133510887
Host
David Lange
Alternative hosts
Robert Currier Tuck, Shawn Mc Kee
    • 1. Exploring Interpretability of Neural Networks in the Context of FAIR principles for AI Models

      While neural networks are traditionally regarded as black-box nonlinear functional surrogates, their interpretability is crucial to ensure model reliability and reusability. This talk explores feature importance and neuron activity in the context of an Interaction Network model trained to distinguish boosted H->bb jets from QCD backgrounds. We examine several approaches to probe the underlying neural network, understand its activity for different jet classes, and use that information to optimize the model architecture without compromising performance. A simplified sketch of such probes follows the agenda below.

      Speaker: Avik Roy (University of Illinois at Urbana-Champaign)
    • 2. Explaining machine-learned particle-flow reconstruction

      The particle-flow (PF) algorithm is used in general-purpose particle detectors to reconstruct a comprehensive particle-level view of the collision by combining information from different subdetectors. A graph neural network (GNN) model, known as the machine-learned particle-flow (MLPF) algorithm, has been developed as a substitute for the rule-based PF algorithm. However, understanding the model's decision making is not straightforward, especially given the complexity of the set-to-set prediction task, the dynamic graph building, and the message-passing steps. In this talk, we explore an explainable-AI technique called layerwise relevance propagation (LRP), adapt it to GNNs, and apply it to the MLPF algorithm to gauge which nodes and features are relevant for its predictions, gaining insight into the model's decision making. A simplified LRP sketch follows the agenda below.

      Speaker: Farouk Mokhtar (Univ. of California San Diego (US))
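
Sketch for contribution 1 (interpretability probes). This is a minimal, hypothetical illustration of the kind of probes the first abstract mentions: recording hidden-neuron activations per jet class with a forward hook and estimating permutation feature importance. It uses a stand-in MLP and synthetic data, not the speakers' Interaction Network, inputs, or code.

# Hedged sketch: stand-in MLP and synthetic data only; the talk's actual model
# is an Interaction Network trained on boosted H->bb vs. QCD jets.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in binary classifier (hypothetical architecture).
model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
model.eval()

# Synthetic "jet" features and labels (0 = QCD, 1 = H->bb), for illustration only.
X = torch.randn(1000, 16)
y = torch.randint(0, 2, (1000,))

# Probe 1: record hidden-layer activations with a forward hook.
activations = {}
def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model[3].register_forward_hook(save_activation("hidden2"))  # second ReLU layer

with torch.no_grad():
    logits = model(X).squeeze(-1)

# Mean activation of each hidden neuron, split by jet class: neurons that stay
# near zero for both classes are candidates for pruning the architecture.
act = activations["hidden2"]
for cls in (0, 1):
    mean_act = act[y == cls].mean(dim=0)
    print(f"class {cls}: {int((mean_act < 1e-3).sum())} rarely active neurons")

# Probe 2: permutation feature importance -- shuffle one input feature at a
# time and measure the resulting drop in classification accuracy.
def accuracy(scores, labels):
    return ((scores > 0).long() == labels).float().mean().item()

base_acc = accuracy(logits, y)
with torch.no_grad():
    for j in range(X.shape[1]):
        Xp = X.clone()
        Xp[:, j] = Xp[torch.randperm(len(Xp)), j]  # permute feature j across jets
        drop = base_acc - accuracy(model(Xp).squeeze(-1), y)
        print(f"feature {j:2d}: accuracy drop {drop:+.3f}")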
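
Sketch for contribution 2 (layerwise relevance propagation). Below is a minimal epsilon-rule LRP pass through a small dense network with a synthetic input. The talk adapts LRP to the MLPF graph neural network with dynamic graph building and message passing; this stand-in only illustrates how an output score is redistributed backward through layers while being approximately conserved.

# Hedged sketch: epsilon-rule LRP on a stand-in dense network; the talk's
# actual target is the MLPF graph neural network.
import torch
import torch.nn as nn

torch.manual_seed(0)

layers = nn.Sequential(
    nn.Linear(8, 16), nn.ReLU(),
    nn.Linear(16, 1),
)
layers.eval()

x = torch.randn(1, 8)  # synthetic input features (one example, hypothetical)

# Forward pass, keeping the input seen by every Linear layer.
with torch.no_grad():
    inputs_to_linear, h = [], x
    for layer in layers:
        if isinstance(layer, nn.Linear):
            inputs_to_linear.append(h)
        h = layer(h)
    output = h

# Backward relevance pass (epsilon rule): redistribute the output score onto
# each layer's inputs in proportion to their contribution to the pre-activation.
eps = 1e-6
R = output.clone()
linear_layers = [m for m in layers if isinstance(m, nn.Linear)]
with torch.no_grad():
    for layer, a in zip(reversed(linear_layers), reversed(inputs_to_linear)):
        z = a @ layer.weight.t() + layer.bias   # pre-activations of this layer
        s = R / (z + eps * torch.sign(z))       # stabilized relevance ratio
        c = s @ layer.weight                    # propagate back to the layer inputs
        R = a * c                               # relevance of this layer's inputs

# R now holds one relevance score per input feature; their sum is approximately
# the model output (relevance conservation).
print("input relevances:", R)
print("sum of relevances vs. output:", R.sum().item(), output.item())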