Description
It is widely known that predictions for jet substructure observables vary significantly between Monte Carlo generators. This is especially true for the output of deep neural networks (NNs) trained on high-dimensional feature spaces to tag the origin of a jet. However, even though the output spectrum of a given NN varies between generators, the function learned from different generators' training data may nevertheless be the same. We investigate the universality of jet substructure information by training NNs on samples from a variety of generators and testing them all on a single, fixed generator. By fixing the testing generator, we can see whether the NNs have learned to use the same information, even if the extent to which that information is expressed varies between training datasets. Our target physics process is boosted Higgs bosons, and we explore the implications of universality for uncertainties in searches for new particles at the Large Hadron Collider and beyond.
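The train-on-one-generator, test-on-a-fixed-generator strategy described above can be illustrated with a toy sketch. Everything here is an assumption for illustration only: the two "generators" are synthetic Gaussian jet samples whose `shift` parameter mimics generator-dependent differences in substructure spectra, and a minimal logistic regression stands in for the deep network. If the functions learned from the two training generators agree, their scores should be highly correlated on the shared, fixed test sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_jets(n, shift):
    """Toy 'generator': signal and background jets in a 2D feature space.
    `shift` is a hypothetical knob mimicking generator-to-generator
    differences in the substructure spectra."""
    sig = rng.normal(loc=[1.0 + shift, 1.0], scale=0.8, size=(n, 2))
    bkg = rng.normal(loc=[-1.0, -1.0 + shift], scale=0.8, size=(n, 2))
    X = np.vstack([sig, bkg])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return X, y

def train_classifier(X, y, lr=0.1, steps=500):
    """Minimal logistic regression via gradient descent (NN stand-in)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

# Train one classifier per "generator" (different shift values).
models = {name: train_classifier(*make_jets(2000, s))
          for name, s in [("genA", 0.0), ("genB", 0.4)]}

# Fix the testing generator and evaluate both learned functions on it.
X_test, y_test = make_jets(2000, 0.2)
scores = {name: 1.0 / (1.0 + np.exp(-(X_test @ w + b)))
          for name, (w, b) in models.items()}

# High correlation of the two score distributions on the shared test set
# would indicate the classifiers learned (nearly) the same function,
# even though their training spectra differed.
corr = np.corrcoef(scores["genA"], scores["genB"])[0, 1]
print(f"score correlation on fixed test generator: {corr:.3f}")
```

In a realistic study the toy samples would be replaced by jets from actual generators (e.g. different parton-shower programs) and the logistic regression by the tagger network; the fixed-test-generator comparison itself carries over unchanged.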
| Affiliation | National Tsing Hua University |
| --- | --- |
| Academic Rank | PhD student |