Description
Deep neural network-based classifiers allow for efficient estimation of likelihood ratios in high-dimensional spaces. Classifier-based cuts are thus being used to process experimental data, for example in top tagging. To efficiently investigate new theories, it is essential to estimate the behavior of these cuts. We suggest circumventing the full simulation of the experimental setup and instead predicting the classifier output from high-level features. The in-distribution behavior is modeled using a generative mapping, while out-of-distribution regions are flagged using Bayesian machine learning. We compare standard methods of Bayesian deep learning, as well as a novel stochastic Markov chain, to a baseline of full Monte Carlo sampling.
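A minimal sketch of one standard Bayesian deep learning technique mentioned above, Monte Carlo dropout, where the spread of stochastic forward passes serves as an out-of-distribution indicator. The network, weights, and inputs here are illustrative stand-ins, not the actual surrogate model or features used in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy surrogate mapping high-level features to a classifier score.
# Weights are random placeholders, not a trained model.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, drop_rate=0.5):
    """One stochastic forward pass with dropout kept active at inference."""
    h = np.tanh(x @ W1)
    mask = rng.random(h.shape) > drop_rate     # random dropout mask
    h = h * mask / (1.0 - drop_rate)           # inverted-dropout scaling
    return float(1.0 / (1.0 + np.exp(-(h @ W2))))  # sigmoid output

def mc_dropout_score(x, n_samples=200):
    """Mean of the passes approximates the predicted classifier output;
    their standard deviation is an uncertainty proxy that grows
    out of distribution."""
    preds = np.array([forward(x) for _ in range(n_samples)])
    return preds.mean(), preds.std()

x_in = np.zeros(4)        # input near the (hypothetical) training region
x_out = np.full(4, 10.0)  # input far outside it
m_in, s_in = mc_dropout_score(x_in)
m_out, s_out = mc_dropout_score(x_out)
# The uncertainty s_out exceeds s_in, flagging x_out as out-of-distribution.
```

The same mean/spread readout applies to the other Bayesian approaches compared in the talk; only the source of stochasticity in the forward pass differs.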