Description
In this talk we will critically assess the robustness of uncertainties on parton distribution functions (PDFs) determined using neural networks from global datasets comprising measurements from multiple experiments. The determination of PDFs is an inverse problem, and we study how the neural network model tackles it when inconsistencies between input datasets are present. We use a closure test approach, in which the regression model is applied to artificial data produced from a known underlying truth, so that the output of the model can be compared to that truth and its accuracy assessed in a statistically reliable way. We explore various phenomenologically relevant scenarios in which inconsistencies arise from the incorrect estimation of correlated systematic uncertainties. We show that the neural network generally corrects for the inconsistency except in cases of extreme uncertainty underestimation, and we validate a previously proposed procedure to detect such extreme cases.
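To make the closure-test logic concrete, the following is a minimal, self-contained sketch, not the NNPDF code: a linear surrogate model stands in for the neural network, the truth is chosen inside the model space, and an inconsistency is injected by underestimating a fully correlated systematic in the fit covariance. All names, basis functions, and uncertainty scales are hypothetical choices for illustration.

```python
# Toy closure test with an inconsistent dataset (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(42)

# Design matrix: 3 basis functions of x play the role of the PDF model.
x = np.linspace(0.05, 0.9, 30)
B = np.column_stack([np.ones_like(x), np.log(x), x])

# Known underlying truth, chosen inside the model space so that any
# failure of the test comes from the covariance, not from model bias.
c_true = np.array([1.0, -0.5, 2.0])
truth = B @ c_true

# True data covariance: diagonal statistical part plus one fully
# correlated systematic source (hypothetical scales).
stat = 0.02 * np.abs(truth)
sys_true = 0.05 * np.abs(truth)
cov_true = np.diag(stat**2) + np.outer(sys_true, sys_true)
L_true = np.linalg.cholesky(cov_true)

# Inconsistency: the fit assumes the systematic is 5x smaller than it is.
sys_assumed = 0.2 * sys_true
cov_fit = np.diag(stat**2) + np.outer(sys_assumed, sys_assumed)
W = np.linalg.inv(cov_fit)  # weight matrix actually used in the fit

# Quoted parameter covariance from weighted least squares.
A = B.T @ W @ B
cov_c_quoted = np.linalg.inv(A)

# Closure test: fit many pseudodata replicas drawn from the TRUE
# covariance and check 1-sigma coverage of the quoted uncertainty.
n_rep, coverage = 1000, 0.0
for _ in range(n_rep):
    y = truth + L_true @ rng.standard_normal(len(x))
    c_hat = np.linalg.solve(A, B.T @ W @ y)
    pred = B @ c_hat
    sigma = np.sqrt(np.einsum("ij,jk,ik->i", B, cov_c_quoted, B))
    coverage += np.mean(np.abs(pred - truth) < sigma)

print(f"1-sigma coverage: {coverage / n_rep:.2f} (faithful ~ 0.68)")
```

In this toy setting the fit remains unbiased, but its quoted uncertainty understates the true replica spread, so the coverage falls well below the nominal 68%; restoring the assumed systematic to its true size brings it back. This mirrors, in miniature, how a closure test exposes unfaithful uncertainties when correlated systematics are underestimated.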