Description
Neutrinos are particles that interact only rarely, so identifying them requires large detectors that produce enormous volumes of data. Processing these data with the available computing power is becoming more difficult as detectors grow in size to reach their physics goals. Liquid argon time projection chamber (LArTPC) neutrino experiments are expected to grow over the next decade to have 100 times more wires than currently operating experiments, and modernizing LArTPC reconstruction code, including parallelization at both the data and instruction level, will help mitigate this challenge.
The LArTPC hit finding algorithm, which reconstructs signals from the detector wires, is used across multiple experiments through a common software framework. In this talk we discuss a parallel implementation of this algorithm. Using a standalone setup we find speedup factors of about 2 from vectorization and 30-100 from multi-threading on Intel architectures, with close to ideal scaling at low thread counts. This new version has been incorporated back into the framework so that it can be used by the experiments. In serial execution, the integrated version is about 10 times faster than the previous one and, once parallelization is enabled, it achieves further speedups comparable to the standalone program.
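The wire-level parallelism described above can be sketched as follows. This is a minimal toy, not the actual LArSoft hit finder: the `Hit` struct, function names, and the simple threshold-crossing peak model are all illustrative assumptions. The key design point it shows is that wires are independent, so each thread can scan its own disjoint block of waveforms and write to its own output buffer without any locking.

```cpp
// Hypothetical sketch of wire-parallel hit finding. Each thread scans a
// disjoint block of wire waveforms for local maxima above threshold;
// per-thread output vectors avoid any synchronization on the hot path.
#include <algorithm>
#include <thread>
#include <vector>

struct Hit { int wire; int tick; float peak; };

// Toy stand-in for the real signal processing + peak fitting on one wire:
// record every local maximum above threshold.
static void findHits(int wire, const std::vector<float>& wf,
                     float threshold, std::vector<Hit>& out) {
  for (std::size_t t = 1; t + 1 < wf.size(); ++t)
    if (wf[t] > threshold && wf[t] >= wf[t - 1] && wf[t] > wf[t + 1])
      out.push_back({wire, static_cast<int>(t), wf[t]});
}

// Data-level parallelism over wires: split the wire list into nThreads
// contiguous chunks, one thread per chunk, then concatenate the results.
std::vector<Hit> findHitsParallel(
    const std::vector<std::vector<float>>& wires,
    float threshold, unsigned nThreads) {
  std::vector<std::vector<Hit>> partial(nThreads);
  std::vector<std::thread> pool;
  const std::size_t chunk = (wires.size() + nThreads - 1) / nThreads;
  for (unsigned i = 0; i < nThreads; ++i)
    pool.emplace_back([&, i] {
      const std::size_t lo = i * chunk;
      const std::size_t hi = std::min(wires.size(), lo + chunk);
      for (std::size_t w = lo; w < hi; ++w)
        findHits(static_cast<int>(w), wires[w], threshold, partial[i]);
    });
  for (auto& t : pool) t.join();
  std::vector<Hit> hits;
  for (auto& p : partial) hits.insert(hits.end(), p.begin(), p.end());
  return hits;
}
```

Because each wire's hits depend only on that wire's waveform, this pattern scales close to ideally at low thread counts, as in the standalone measurements quoted above; the inner per-tick loop is the natural target for vectorization.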
To take full advantage of all levels of parallelism in a production environment, data processing must be done at a high-performance computing (HPC) center. An HPC workflow is being developed for use as part of a central processing campaign for LArTPC experiments, with the goal of efficiently utilizing the available parallel resources within and across nodes, as well as integrating AI algorithms. Further opportunities for algorithm parallelism in the reconstruction, and for GPU code portability, are also being explored.
References
https://arxiv.org/abs/2107.00812
Speaker time zone: Compatible with America