
PyTorch INFERNO

6 Jul 2021, 17:40
10m
Lightning talk, Plenary session Tuesday

Speaker

Dr Giles Chatham Strong (Universita e INFN, Padova (IT))

Description

The INFERence-aware Neural Optimisation (INFERNO) algorithm (de Castro and Dorigo, 2018, https://www.sciencedirect.com/science/article/pii/S0010465519301948) allows one to fully optimise neural networks for the task of statistical inference by including the effects of systematic uncertainties in the training. This has significant advantages for work in HEP, where uncertainties are often only included at the very end of an analysis and spoil the use of classification as a proxy task for statistical inference.
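
To make the idea concrete, the following is a minimal, simplified sketch of an INFERNO-style loss in PyTorch, using a single parameter of interest (a signal strength) and no nuisance parameters; the yields, the softmax soft-binning and the Asimov treatment below are illustrative assumptions rather than the exact formulation of the paper or the package:

    import torch
    import torch.nn.functional as F
    from torch import Tensor

    def inferno_loss(preds_sig: Tensor, preds_bkg: Tensor,
                     n_sig: float = 50., n_bkg: float = 1000.) -> Tensor:
        # Softmax outputs act as a differentiable summary statistic:
        # each event is soft-assigned to the bins of a histogram.
        s = n_sig * F.softmax(preds_sig, dim=-1).mean(0)  # expected signal yield per bin
        b = n_bkg * F.softmax(preds_bkg, dim=-1).mean(0)  # expected background yield per bin

        # Single parameter of interest: the signal strength mu (nominal value 1)
        mu = torch.tensor(1., requires_grad=True, device=s.device)
        expected = mu * s + b                  # expectation as a function of mu
        asimov = (s + b).detach()              # "observed" Asimov counts at mu = 1

        # Poisson negative log-likelihood (constant terms dropped)
        nll = (expected - asimov * torch.log(expected)).sum()

        # Fisher information = d^2 NLL / d mu^2 via nested autograd;
        # create_graph=True keeps everything differentiable w.r.t. the network.
        grad = torch.autograd.grad(nll, mu, create_graph=True)[0]
        info = torch.autograd.grad(grad, mu, create_graph=True)[0]

        # The loss is the expected variance of the mu estimate, so minimising it
        # trains the network to shrink the uncertainty on mu directly.
        return 1. / info

Systematic uncertainties enter by adding nuisance parameters to the likelihood and taking as the loss the diagonal element of the inverse Fisher matrix corresponding to the parameter of interest.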

The loss itself, however, can be somewhat difficult to integrate into traditional frameworks, since it requires access to the model and the data at different points in the optimisation cycle. The PyTorch-INFERNO package provides a "drop-in" implementation of the INFERNO loss, including both a lightweight neural-network training framework for PyTorch and the required inference functions. The package also aims to demonstrate how potential users can implement the loss themselves and drop it into their framework of choice.
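
As a usage illustration only (this is not the package's actual API), such a loss drops into an otherwise standard PyTorch training loop; the only structural change is that the criterion consumes paired signal and background batches rather than (prediction, target) pairs. The toy data and network here are assumptions, and inferno_loss is the function sketched above:

    import torch
    from torch import nn

    net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 10))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(100):
        x_sig = torch.randn(256, 4) + 0.5           # toy signal batch
        x_bkg = torch.randn(256, 4)                  # toy background batch
        loss = inferno_loss(net(x_sig), net(x_bkg))  # expected width of mu
        opt.zero_grad()
        loss.backward()
        opt.step()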

In this lightning talk, I will give a quick overview of both the algorithm and the package, as well as discuss some of the more general requirements for implementing the algorithm as a drop-in loss.

GitHub: https://github.com/GilesStrong/pytorch_inferno
Docs: https://gilesstrong.github.io/pytorch_inferno/
Blog posts (part 1 of 5): https://gilesstrong.github.io/website/statistics/hep/inferno/2020/12/04/inferno-1.html

Author

Dr Giles Chatham Strong (Universita e INFN, Padova (IT))
