Neural networks are powerful universal approximators of complicated patterns in large-scale data, driving the explosive development of AI in the form of deep learning. In many cases, however, ordinary neural networks are trained to a poor level of abstraction, so that the model's predictive power and generalizability can be quite unstable, depending on the quality and amount of the training data. In this presentation, we introduce a new neural network architecture with an improved capability to capture the key features and physical laws hidden in data, in a mathematically simpler and more robust way. We demonstrate the performance of the new architecture with an application to high-energy particle scattering processes at the LHC.
Preferred contribution length: 20 minutes