Description
Future High Energy Physics experiments, based on upgraded or next-generation particle accelerators with higher luminosity and energy, will place more stringent demands on simulation in terms of both precision and speed. In particular, matching the statistical uncertainties of the collected experimental data will require simulation toolkits to become more CPU-efficient while maintaining, if not improving, the precision of the physics. At the same time, computing architectures have evolved considerably, opening new opportunities for code improvements based on parallelism and the use of compute accelerators.
In this talk we present the R&D activities taking place in the context of the Geant4 simulation toolkit to cope with these new HEP computing challenges. We first discuss the general scope and plan of this initiative and introduce the different directions being explored, together with the potential benefits they can bring. The second part focuses on a few concrete R&D projects, such as tasking-based parallelism with possible offload to GPUs, the introduction of vectorization at different stages of the simulation, and the implementation of 'per volume'-specialized geometry navigators. We discuss the technical details of the different prototype implementations. In conclusion, we report our first results in these areas and present plans for the near future.
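As a point of reference for the tasking-based parallelism mentioned above, the sketch below shows how an application can opt into the task-based run manager through G4RunManagerFactory, available since Geant4 10.7. This is a minimal illustration written for this description, not code from the R&D prototypes themselves: the Sketch* classes are placeholders invented here for a real application's geometry and actions, while QBBC is one of Geant4's reference physics lists.

```cpp
// Minimal sketch: selecting the task-based run manager in a Geant4
// application (Geant4 >= 10.7). The Sketch* classes are placeholders
// invented for this illustration, not part of the R&D prototypes.
#include "G4Box.hh"
#include "G4Event.hh"
#include "G4LogicalVolume.hh"
#include "G4NistManager.hh"
#include "G4PVPlacement.hh"
#include "G4ParticleGun.hh"
#include "G4ParticleTable.hh"
#include "G4RunManagerFactory.hh"
#include "G4SystemOfUnits.hh"
#include "G4VUserActionInitialization.hh"
#include "G4VUserDetectorConstruction.hh"
#include "G4VUserPrimaryGeneratorAction.hh"
#include "QBBC.hh"

// World-only geometry, just enough to run the sketch.
class SketchDetector : public G4VUserDetectorConstruction {
 public:
  G4VPhysicalVolume* Construct() override {
    auto* air = G4NistManager::Instance()->FindOrBuildMaterial("G4_AIR");
    auto* world = new G4LogicalVolume(new G4Box("World", 1 * m, 1 * m, 1 * m),
                                      air, "World");
    return new G4PVPlacement(nullptr, {}, world, "World", nullptr, false, 0);
  }
};

// 100 MeV electrons along +z as primaries.
class SketchPrimaries : public G4VUserPrimaryGeneratorAction {
 public:
  SketchPrimaries() {
    fGun.SetParticleDefinition(
        G4ParticleTable::GetParticleTable()->FindParticle("e-"));
    fGun.SetParticleEnergy(100 * MeV);
    fGun.SetParticleMomentumDirection({0., 0., 1.});
  }
  void GeneratePrimaries(G4Event* event) override {
    fGun.GeneratePrimaryVertex(event);
  }

 private:
  G4ParticleGun fGun{1};
};

class SketchActions : public G4VUserActionInitialization {
 public:
  void Build() const override { SetUserAction(new SketchPrimaries()); }
};

int main() {
  // Request the tasking run manager: events are grouped into tasks and
  // dispatched to a thread pool, rather than statically partitioned
  // across worker threads as in the classic multithreaded mode.
  auto* runManager =
      G4RunManagerFactory::CreateRunManager(G4RunManagerType::Tasking);

  runManager->SetUserInitialization(new SketchDetector());
  runManager->SetUserInitialization(new QBBC());  // reference physics list
  runManager->SetUserInitialization(new SketchActions());

  runManager->Initialize();
  runManager->BeamOn(1000);

  delete runManager;
  return 0;
}
```

With the tasking back-end, the number of worker threads can still be controlled in the usual multithreaded way, for example by calling SetNumberOfThreads() on the run manager before Initialize().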
Consider for promotion: Yes