I will discuss how to use neural networks to detect departures of the data from a given reference model, with no prior bias on the nature of the new physics responsible for the discrepancy. The algorithm that I will describe returns a global p-value that quantifies the tension between the data and the reference model. It also allows one to directly compare what the network has learned with the data, giving a fully transparent account of the nature of possible signals. The potential applications are broad, from LHC physics searches to cosmology and beyond.
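The abstract does not spell out the algorithm, so the sketch below only illustrates the general idea it alludes to: train a neural network to capture differences between the data and a sample drawn from the reference model, use the maximized objective as a test statistic, and calibrate it into a global p-value with toy datasets generated under the reference model. The loss form, network architecture, toy-based calibration, and every name and parameter below are my own illustrative assumptions, not the author's actual implementation.

```python
# Illustrative sketch only: loss choice, architecture, and calibration are
# assumptions made for this example, not the method described in the talk.
import numpy as np
import torch
import torch.nn as nn

def make_net():
    # Small fully connected network; the architecture is a placeholder choice.
    return nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

def test_statistic(data, reference, n_epochs=500, lr=1e-2):
    """Train a network on data vs. a reference sample and return a
    likelihood-ratio-style test statistic (larger = stronger tension)."""
    net = make_net()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    d = torch.tensor(data, dtype=torch.float32).unsqueeze(1)
    r = torch.tensor(reference, dtype=torch.float32).unsqueeze(1)
    w = len(data) / len(reference)  # reweight reference sample to the data yield
    for _ in range(n_epochs):
        opt.zero_grad()
        # Extended-likelihood-ratio-inspired objective (assumed form).
        loss = w * (torch.exp(net(r)) - 1).sum() - net(d).sum()
        loss.backward()
        opt.step()
    return float(-2.0 * loss.detach())

def global_p_value(t_obs, sample_reference, n_toys=100, n_data=500):
    """Calibrate the statistic with toy datasets drawn from the reference model."""
    toys = [test_statistic(sample_reference(n_data), sample_reference(10 * n_data))
            for _ in range(n_toys)]
    return float(np.mean([t >= t_obs for t in toys]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample_ref = lambda n: rng.exponential(1.0, n)   # toy reference model
    data = rng.exponential(1.1, 500)                  # "data" with a mild distortion
    t_obs = test_statistic(data, sample_ref(5000))
    print("t_obs =", t_obs, " p =", global_p_value(t_obs, sample_ref))
```

Because the trained network itself encodes where the data and the reference disagree, plotting its output over the observable space is one way to obtain the transparent, signal-agnostic characterization the abstract mentions.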