Description
The Lipschitz constant of the map between the input and output space represented by a neural network is a natural metric by which the robustness of the model can be measured. We present a new method to constrain the Lipschitz constant of dense deep learning models that can also be generalized to other architectures. The method relies on a simple weight normalization scheme during training which ensures every layer is 1-Lipschitz. A simple residual connection can then be used to make the model monotonic in any subset of its inputs, which is useful in scenarios where domain knowledge dictates a monotonic dependence. Examples can be found in algorithmic fairness requirements or, as presented here, in the classification of particle decay structures. The normalization is minimally constraining and allows the underlying architecture to maintain higher expressiveness than other techniques that aim to either control the Lipschitz constant of the model or ensure its monotonicity. We show how the algorithm was used to train a powerful, robust, and interpretable discriminator for heavy flavor decays in the LHCb High-Level Trigger (HLT) system.
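The idea in the abstract can be illustrated with a minimal sketch. The exact normalization and activation used by the authors are not given here, so the following is an assumption-laden toy version: each weight matrix is rescaled so its operator infinity-norm is at most 1 (making every layer 1-Lipschitz under the max-norm, and hence the whole network 1-Lipschitz), a ReLU stands in for whatever 1-Lipschitz activation the real model uses, and a residual term `lam * x[i]` over the chosen inputs then guarantees monotonicity in those inputs, since the 1-Lipschitz part can change by at most 1 per unit change of any single coordinate.

```python
import numpy as np

def normalize_inf(W):
    # Rescale W so its operator inf-norm (max absolute row sum) is <= 1.
    # Then ||W @ x||_inf <= ||x||_inf, i.e. the linear map is 1-Lipschitz
    # with respect to the max-norm.
    norm = np.abs(W).sum(axis=1).max()
    return W / max(1.0, norm)

def lipschitz_mlp(x, weights, biases):
    # Forward pass through a dense network whose layers are normalized to be
    # 1-Lipschitz; a composition of 1-Lipschitz maps is itself 1-Lipschitz.
    h = x
    for W, b in zip(weights, biases):
        h = normalize_inf(W) @ h + b
        h = np.maximum(h, 0.0)  # ReLU is 1-Lipschitz (a stand-in activation)
    return h

def monotone_net(x, weights, biases, lam=1.0, monotone_idx=(0,)):
    # Adding lam * sum of selected inputs to a 1-Lipschitz function g makes
    # the output non-decreasing in those inputs whenever lam >= 1, because
    # |dg/dx_i| <= 1 while the residual contributes exactly +lam.
    g = lipschitz_mlp(x, weights, biases)
    return g + lam * sum(x[i] for i in monotone_idx)
```

As a usage check, increasing the first input of `monotone_net` can never decrease its output, regardless of the (normalized) weights, which is the kind of guarantee that makes such a discriminator certifiable for trigger selections.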
Significance
A new monotone and robust neural network architecture for classification is introduced and used in the inclusive trigger selections of the upgraded LHCb experiment. This is the first time a neural network has been used for such significant selections, enabled by the guarantees that the architecture provides.
Speaker time zone: Compatible with America