The GeantV project introduces fine-grained parallelism, vectorisation, efficient memory management and NUMA awareness into physics simulation. It is being developed to improve accuracy while preserving portability across different architectures (Xeon Phi, GPU). This approach brings important performance benefits on modern architectures and good scalability across a large number of threads.
Within the GeantV framework we have started developing a machine-learning-based tool for fast simulation. Machine learning techniques have been used in different applications by the HEP community; however, the idea of using them to replace detector simulation is still rather new. Our plan is to provide, in GeantV, a fully configurable tool to train a neural network to reproduce the detector response and replace standard Monte Carlo simulation. This is a completely generic approach, in the sense that such a network could be designed and trained to simulate any kind of detector and, eventually, the whole data-processing chain, yielding the final reconstructed quantities directly in one step. This development is intended to address the ever-increasing need for simulated events expected for the LHC experiments and their upgrades, such as the High Luminosity LHC. We present the results of the first tests we ran on several machine learning and deep learning models, including computer vision techniques, to simulate particle showers in calorimeters.
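To make the fast-simulation idea concrete, the core ingredient can be sketched as a regression network that maps incident-particle parameters to a layer-by-layer calorimeter response, so that one forward pass replaces a full Monte Carlo run. The sketch below is purely illustrative and is not the GeantV implementation: the gamma-like longitudinal shower parameterization, the network size, and all names are hypothetical assumptions standing in for a real simulated training set and a real deep-learning framework.

```python
import numpy as np

rng = np.random.default_rng(0)
N_LAYERS = 10  # hypothetical number of calorimeter layers

def toy_shower(energy):
    """Toy stand-in for full Monte Carlo: a gamma-like longitudinal
    profile whose shape depends on the incident energy; returns the
    fraction of energy deposited in each layer (sums to 1)."""
    t = np.arange(N_LAYERS)
    a = 2.0 + 0.5 * np.log(energy)          # shape grows with energy
    profile = t ** (a - 1.0) * np.exp(-t / 2.0)
    return profile / profile.sum()

# "Training set": incident energies -> per-layer energy fractions.
energies = rng.uniform(1.0, 100.0, size=512)
log_e = np.log(energies)
X = ((log_e - log_e.mean()) / log_e.std())[:, None]   # 1 input feature
Y = np.stack([toy_shower(e) for e in energies])       # N_LAYERS targets

# One-hidden-layer regression network, plain gradient descent.
H, lr = 32, 0.01
W1 = rng.normal(0.0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, N_LAYERS)); b2 = np.zeros(N_LAYERS)

losses = []
for step in range(3000):
    hidden = np.tanh(X @ W1 + b1)
    pred = hidden @ W2 + b2
    err = pred - Y
    losses.append((err ** 2).mean())
    # Backpropagation (gradients averaged over the batch).
    g_pred = 2.0 * err / len(X)
    g_W2 = hidden.T @ g_pred; g_b2 = g_pred.sum(0)
    g_hidden = (g_pred @ W2.T) * (1.0 - hidden ** 2)
    g_W1 = X.T @ g_hidden; g_b1 = g_hidden.sum(0)
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

# "Fast simulation": one forward pass for a new 50 GeV particle,
# instead of tracking every secondary through the detector.
x_new = (np.log(50.0) - log_e.mean()) / log_e.std()
test_shower = np.tanh(np.array([[x_new]]) @ W1 + b1) @ W2 + b2
```

In a realistic setting the targets would be full shower images from Geant4/GeantV simulation rather than a one-parameter profile, and the toy network would be replaced by a deep (e.g. convolutional or generative) model, but the training loop and the inference-as-simulation step have the same structure.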