Speaker
Ullrich Koethe
(Visual Learning Lab Heidelberg)
Description
Interpretable models are a hot topic in neural network research. My talk will look at interpretability from the perspective of inverse problems, where one wants to infer backwards from observations to the hidden characteristics of a system. I will focus on three aspects: reliable uncertainty quantification, outlier detection, and disentanglement into meaningful features. It turns out that invertible neural networks -- networks that work equally well in the forward and the inverse direction -- are great tools for this kind of analysis: they act as non-linear generalizations of classical methods such as PCA and ICA. Examples from physics, medicine, and computer vision demonstrate the practical utility of the new method.
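
To give a flavour of what "works equally well in the forward and inverse direction" means, here is a minimal sketch of an affine coupling layer in the style of RealNVP-type normalizing flows, a common building block for invertible networks. The class and sub-network choices below are illustrative assumptions for this abstract, not the speaker's implementation.

import numpy as np

rng = np.random.default_rng(0)

def mlp(params, x):
    """Tiny two-layer MLP used as the (freely chosen, non-invertible) sub-network."""
    w1, b1, w2, b2 = params
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

def init_mlp(d_in, d_hidden, d_out):
    return (rng.normal(0, 0.5, (d_in, d_hidden)), np.zeros(d_hidden),
            rng.normal(0, 0.5, (d_hidden, d_out)), np.zeros(d_out))

class AffineCoupling:
    """RealNVP-style coupling layer: invertible by construction."""
    def __init__(self, dim):
        self.d1 = dim // 2                 # first half passes through unchanged
        self.d2 = dim - self.d1            # second half is scaled and shifted
        self.s = init_mlp(self.d1, 32, self.d2)   # log-scale sub-network
        self.t = init_mlp(self.d1, 32, self.d2)   # translation sub-network

    def forward(self, x):
        x1, x2 = x[:, :self.d1], x[:, self.d1:]
        y2 = x2 * np.exp(mlp(self.s, x1)) + mlp(self.t, x1)
        return np.concatenate([x1, y2], axis=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.d1], y[:, self.d1:]
        x2 = (y2 - mlp(self.t, y1)) * np.exp(-mlp(self.s, y1))
        return np.concatenate([y1, x2], axis=1)

# Round trip: the inverse exactly undoes the forward pass, whatever the sub-networks compute.
layer = AffineCoupling(dim=4)
x = rng.normal(size=(5, 4))
print(np.allclose(layer.inverse(layer.forward(x)), x))   # True

Because the inverse is exact regardless of how the internal sub-networks are trained, such layers can map data to a latent space and back without loss, which is what makes them usable as non-linear analogues of PCA/ICA-style analyses.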