Deep learning with low-level features: A free lunch?

Not scheduled
20m
32-123 (MIT)
https://goo.gl/maps/Wx14Gpe2wRy
Poster

Speakers

Chase Owen Shimmin (Yale University (US)), Ben Nachman (Lawrence Berkeley National Lab. (US))

Description

Recent studies have shown that deep learning techniques applied to low-level features can outperform methods that use only high-level “engineered” features. However, we argue that it is worth considering the price of this improved performance. For instance, using physically motivated inputs such as IRC-safe substructure observables acts as a regularizing prior in the learning procedure. Moreover, it can be shown that the sensitivity of a network’s outputs to small perturbations scales directly with the dimensionality of the input data. It is impractical to validate all high-dimensional correlations, and current validation approaches typically check only one-dimensional distributions of high-level features, which are not the same data presented to the network. Drawing on ideas from AI safety, we illustrate potential challenges that could arise, using jet tagging as our example. In particular, we show that small perturbations to jet constituents can dramatically change classifier performance without significantly affecting high-level observables. The examples we demonstrate are presently extreme, but we hope they will start a dialogue about how to assess and ensure the robustness of deep learning applications.
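The kind of effect described in the abstract can be sketched with a standard fast-gradient-sign (FGSM) perturbation. The snippet below is a minimal, hypothetical illustration, not the setup used in the study: the tagger is an untrained toy MLP, the jet constituents are random numbers, and jet_mass_proxy is an invented stand-in for a high-level observable. A gradient-sign step of size eps changes the network output at first order by eps times the L1 norm of the input gradient, which grows with the number of input dimensions, while an aggregate quantity built from the same constituents barely moves.

# Hypothetical sketch (toy tagger, random constituents) of an FGSM-style
# perturbation of low-level jet inputs; not the authors' actual setup.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_constituents, n_features = 30, 4          # e.g. (pT, eta, phi, E) per constituent
jet = torch.randn(1, n_constituents * n_features, requires_grad=True)

# Toy low-level tagger: a small MLP acting directly on the flattened constituents.
tagger = nn.Sequential(
    nn.Linear(n_constituents * n_features, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

def jet_mass_proxy(x):
    # Crude stand-in for a high-level observable: an aggregate over constituents.
    return x.view(-1, n_constituents, n_features).sum(dim=1).norm(dim=-1)

score = tagger(jet).squeeze()
score.backward()

# FGSM-style step: every input component moves by +/- eps, so the first-order
# change in the output is eps * ||grad||_1, which grows with the input dimension.
eps = 0.01
perturbed = jet + eps * jet.grad.sign()

with torch.no_grad():
    print("tagger score: %.3f -> %.3f" % (score.item(), tagger(perturbed).item()))
    print("mass proxy:   %.3f -> %.3f" % (jet_mass_proxy(jet).item(),
                                          jet_mass_proxy(perturbed).item()))

In this toy setting the per-constituent shift is small compared to the constituent values, so the aggregate observable is essentially unchanged, while the classifier output shifts by an amount that accumulates over all input dimensions.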

Authors

Chase Owen Shimmin (Yale University (US)), Ben Nachman (Lawrence Berkeley National Lab. (US))

Presentation materials

There are no materials yet.