Description
Deep learning techniques have gained tremendous attention from researchers in many fields, including particle physics. However, such techniques typically do not capture model uncertainty. Bayesian models offer a solid framework for quantifying uncertainty, but they normally come at a high computational cost. A recent paper develops a theoretical framework that casts dropout in neural networks (NNs) as approximate Bayesian inference in Gaussian processes, without changing either the models or the training procedure.
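To make the idea concrete, the sketch below shows the Monte Carlo dropout recipe in a minimal PyTorch form: keep dropout active at prediction time and average over repeated stochastic forward passes. The architecture, layer sizes, dropout rate, and function names are illustrative assumptions, not details taken from the talk or the paper.

```python
import torch
import torch.nn as nn

# Minimal sketch of Monte Carlo dropout: the same dropout sampling used in
# training is kept active at test time, and repeated stochastic forward
# passes approximate samples from the Bayesian predictive distribution.
class DropoutClassifier(nn.Module):
    def __init__(self, n_in=784, n_hidden=128, n_classes=10, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_hidden),
            nn.ReLU(),
            nn.Dropout(p),          # dropout stays on for MC sampling
            nn.Linear(n_hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=100):
    """Run n_samples stochastic forward passes with dropout enabled."""
    model.train()  # keeps dropout active; no gradient updates are made
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean = probs.mean(dim=0)   # approximate predictive mean per class
    std = probs.std(dim=0)     # spread across passes, a model-uncertainty proxy
    return mean, std

# Illustrative call with random inputs standing in for flattened images
# (the model here is untrained; in practice it would be trained as usual)
x = torch.randn(32, 784)
mean, std = mc_dropout_predict(DropoutClassifier(), x)
```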
In this talk, I will present how this method can be applied to evaluate multi-class classification uncertainty on the Modified National Institute of Standards and Technology (MNIST) dataset. The results will include both the model uncertainty and uncertainties arising from systematic mis-modeling of the training data. I will also present preliminary results of applying this method to the ATLAS identification of jets originating from b-quarks at high momentum, and compare the uncertainties of NNs trained only on low-momentum samples with those of NNs trained on samples that also include high-momentum jets.
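One generic way to summarize classification uncertainty from the Monte Carlo samples is to split the predictive entropy into an expected-entropy term and a mutual-information term. The sketch below illustrates that standard decomposition; it is an assumed, self-contained example (the `probs` tensor stands in for stacked dropout passes on MNIST test images) and not the speaker's analysis code.

```python
import torch

def uncertainty_summaries(probs):
    """probs: (n_samples, batch, n_classes) class probabilities from
    repeated dropout forward passes (e.g. on MNIST test images)."""
    eps = 1e-12
    mean = probs.mean(dim=0)
    # Total predictive uncertainty: entropy of the averaged distribution
    total = -(mean * (mean + eps).log()).sum(dim=-1)
    # Average entropy of the individual passes
    expected = -(probs * (probs + eps).log()).sum(dim=-1).mean(dim=0)
    # Mutual information: the part attributable to the model itself
    return total, total - expected

# Stand-in for MC dropout outputs: 100 passes, 32 images, 10 classes
probs = torch.softmax(torch.randn(100, 32, 10), dim=-1)
total_uncertainty, model_uncertainty = uncertainty_summaries(probs)
```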
Are you a member of the APS Division of Particles and Fields? No