Interpretability of AI models and FAIR for AI
Monday, March 14, 2022
10:30 AM
Exploring Interpretability of Neural Networks in the Context of FAIR principles for AI Models
Avik Roy (University of Illinois at Urbana-Champaign)
10:30 AM - 11:00 AM
While neural networks are traditionally regarded as black-box nonlinear functional surrogates, their interpretability is crucial to ensuring model reliability and reusability. This talk explores feature importance and neuron activity in the context of an Interaction Network model trained to distinguish boosted H->bb jets from QCD backgrounds. We examine a number of approaches to probe the internals of the underlying neural network, understand its activity for different jet classes, and show how that information can be used to optimize the model architecture without compromising its performance.
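The neuron-activity probing mentioned in the abstract can be sketched in a toy form: compare mean hidden-layer activations between jet classes and flag neurons that are nearly inactive for all classes as candidates for pruning. The array shapes, threshold, and labels below are hypothetical illustrations, not taken from the talk's actual model.

```python
import numpy as np

# Toy setup: per-jet hidden-layer activations and binary jet-class labels.
rng = np.random.default_rng(0)
activations = rng.random((1000, 64))    # 1000 jets, 64 hidden neurons
activations[:, 60:] *= 1e-4             # make a few neurons near-dead
labels = rng.integers(0, 2, size=1000)  # 0 = QCD, 1 = H->bb (toy labels)

# Mean activation of each neuron, computed separately per jet class.
mean_by_class = np.stack(
    [activations[labels == c].mean(axis=0) for c in (0, 1)]
)

# Neurons whose activity stays below a small threshold for every class
# contribute little to either prediction and are pruning candidates.
dead = np.flatnonzero(mean_by_class.max(axis=0) < 1e-3)
print("prunable neurons:", dead)  # → prunable neurons: [60 61 62 63]
```

In practice the same comparison per class also highlights which neurons specialize in signal versus background, which is the kind of information that can guide architecture optimization.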
11:00 AM
Explaining machine-learned particle-flow reconstruction
Farouk Mokhtar (Univ. of California San Diego (US))
11:00 AM - 11:30 AM
The particle-flow (PF) algorithm is used in general-purpose particle detectors to reconstruct a comprehensive particle-level view of the collision by combining information from different subdetectors. A graph neural network (GNN) model, known as the machine-learned particle-flow (MLPF) algorithm, has been developed to replace the rule-based PF algorithm. However, understanding the model's decision making is not straightforward, especially given the complexity of the set-to-set prediction task, dynamic graph building, and message-passing steps. In this talk, we explore an explainable-AI technique, layerwise relevance propagation (LRP), adapted for GNNs, and apply it to the MLPF algorithm to identify the nodes and features most relevant to its predictions. Through this process, we gain insight into the model's decision making.
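The core of layerwise relevance propagation can be illustrated on a single dense layer with the standard epsilon rule, which redistributes an output relevance score back onto the inputs in proportion to each input's contribution to the pre-activation. This is a minimal generic sketch with made-up weights, not the talk's GNN/MLPF implementation, which extends the rule through message-passing steps.

```python
import numpy as np

def lrp_epsilon(a, W, R_out, eps=1e-6):
    """Epsilon-rule LRP for one dense layer: redistribute the output
    relevances R_out onto the inputs a through the weights W."""
    z = a @ W                           # pre-activations z_j = sum_i a_i W_ij
    s = R_out / (z + eps * np.sign(z))  # stabilized relevance-to-activation ratio
    return a * (s @ W.T)                # R_i = a_i * sum_j W_ij s_j

# Toy layer: 3 input features, 2 outputs, unit relevance on each output.
a = np.array([1.0, 2.0, 0.5])
W = np.array([[1.0, -0.5],
              [0.5,  1.0],
              [2.0,  0.0]])
R_in = lrp_epsilon(a, W, R_out=np.array([1.0, 1.0]))

# Up to the epsilon stabilizer, total relevance is conserved layer to layer.
print(R_in.sum())  # → ~2.0, matching R_out.sum()
```

In the GNN setting the same backward redistribution is carried through each message-passing layer, which is what makes it possible to attribute a prediction to individual nodes and input features.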