23–25 Nov 2020
Europe/Stockholm timezone

neos: Physics analysis as a differentiable program

25 Nov 2020, 11:29
3m

Speaker

Mr Nathan Daniel Simpson (Lund University (SE))

Description

The advent of deep learning has yielded powerful tools to automatically compute gradients of computations. This is because 'training a neural network' amounts to iteratively updating its parameters using gradient descent to find the minimum of a loss function. Deep learning is then a subset of a broader paradigm: a workflow with free parameters that is end-to-end optimisable, provided one can keep track of the gradients all the way through. This paradigm is known as differentiable programming.
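
As a toy illustration of this idea (not taken from neos), the following Python/jax sketch fits the two free parameters of an invented linear model by gradient descent, with the gradients computed automatically; the model, data, and learning rate are made up for illustration.

    import jax
    import jax.numpy as jnp

    # Toy loss for an invented linear model y = w*x + b (illustrative only).
    def loss(params, x, y):
        w, b = params
        return jnp.mean((w * x + b - y) ** 2)

    x = jnp.linspace(0.0, 1.0, 20)
    y = 3.0 * x + 1.0                      # synthetic data

    params = (jnp.array(0.0), jnp.array(0.0))
    learning_rate = 0.1
    for _ in range(500):
        grads = jax.grad(loss)(params, x, y)   # gradients via automatic differentiation
        params = jax.tree_util.tree_map(lambda p, g: p - learning_rate * g, params, grads)

The same pattern applies to any workflow whose steps are differentiable, not only to neural networks.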

This work introduces neos, an example implementation of a fully differentiable HEP workflow, made possible by leveraging the Python modules jax and pyhf. In particular, by using a technique called fixed-point differentiation, neos makes the frequentist construction of the profile likelihood differentiable. This allows a neural network-based summary statistic to be trained with respect to the expected p-value calculated downstream. The resulting optimisation process is aware of how every step in the workflow changes the p-value, including the modelling and treatment of nuisance parameters.
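
The core trick can be sketched in isolation. The toy example below assumes a made-up one-dimensional contraction map f (not the actual profile-likelihood solve used in neos) and shows how the gradient of a fixed point with respect to a parameter can be obtained from the implicit function theorem rather than by differentiating through every solver iteration:

    import jax
    import jax.numpy as jnp

    # Invented contraction map whose fixed point x* depends on a parameter theta.
    def f(x, theta):
        return jnp.tanh(theta * x) + 0.5

    def fixed_point(theta, x0=0.0, n_iter=100):
        x = x0
        for _ in range(n_iter):
            x = f(x, theta)
        return x

    # Naive: differentiate through all solver iterations (memory grows with n_iter).
    grad_unrolled = jax.grad(fixed_point)(1.2)

    # Fixed-point differentiation: at x* = f(x*, theta), the implicit function
    # theorem gives dx*/dtheta = (df/dtheta) / (1 - df/dx), evaluated at x*.
    def grad_implicit(theta):
        x_star = fixed_point(theta)
        dfdx = jax.grad(f, argnums=0)(x_star, theta)
        dfdtheta = jax.grad(f, argnums=1)(x_star, theta)
        return dfdtheta / (1.0 - dfdx)

    print(grad_unrolled, grad_implicit(1.2))   # the two gradients should agree

In neos the analogous fixed point is, presumably, the fit of the nuisance parameters inside the profile-likelihood construction, which is what lets gradients of the downstream p-value flow back to the neural network parameters.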

Abstract Track: Flash talk, LHC

Primary authors

Mr Nathan Daniel Simpson (Lund University (SE))
Dr Lukas Alexander Heinrich (CERN)

Presentation materials