Description
The reconstruction of charged particle trajectories is an essential component of high energy physics experiments. Recently proposed track finding pipelines based on Graph Neural Networks (GNNs) provide high reconstruction accuracy but need to be optimized for speed, especially for online event filtering. Like other deep learning applications, both the training and inference of particle tracking methods can be optimized to take full advantage of GPU parallelism. However, particle reconstruction inference can also benefit from multi-core parallel processing on CPUs. In this context, it is important to explore the impact of the number of CPU cores on inference speed. Using the multi-threading capabilities of PyTorch and the Facebook AI Similarity Search (Faiss) library, multiprocessing for the filtering inference loop, and the weakly connected components algorithm for labeling results in lower latency for the inference pipeline. This GNN-based tracking pipeline is evaluated on multi-core Intel Xeon Gold 6148 (Skylake) and Intel Xeon 8268 (Cascade Lake) CPUs. Computational time is measured and compared for different numbers of cores per task. The experiments show that multi-core parallel execution outperforms sequential execution.
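
The following is a minimal sketch, not the authors' actual pipeline, of the three CPU-side ingredients named above: capping PyTorch and Faiss thread counts to the allotted cores, distributing the filtering inference loop over worker processes with multiprocessing, and labeling track candidates via a weakly connected components pass. The function and variable names (run_filter_inference, label_tracks, edge_index, n_hits) and the toy data are illustrative assumptions.

```python
import multiprocessing as mp

import numpy as np
import torch
import faiss
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

N_CORES = 8  # assumed number of CPU cores allotted to the task


def configure_threads(n_cores: int) -> None:
    """Limit intra-op parallelism so thread counts match the allotted cores."""
    torch.set_num_threads(n_cores)
    faiss.omp_set_num_threads(n_cores)


def run_filter_inference(batch: np.ndarray) -> np.ndarray:
    """Placeholder for one filtering-model forward pass on a batch of edge features."""
    with torch.no_grad():
        scores = torch.sigmoid(torch.as_tensor(batch, dtype=torch.float32).sum(dim=1))
    return scores.numpy()


def label_tracks(edge_index: np.ndarray, n_hits: int) -> np.ndarray:
    """Assign track labels by finding weakly connected components of the
    surviving edges; edge_index has shape (2, n_edges)."""
    data = np.ones(edge_index.shape[1])
    adj = csr_matrix((data, (edge_index[0], edge_index[1])), shape=(n_hits, n_hits))
    _, labels = connected_components(adj, directed=True, connection="weak")
    return labels


if __name__ == "__main__":
    configure_threads(N_CORES)

    # Split the filtering workload across worker processes.
    batches = [np.random.rand(1000, 4).astype(np.float32) for _ in range(N_CORES)]
    with mp.Pool(processes=N_CORES) as pool:
        edge_scores = pool.map(run_filter_inference, batches)

    # Toy graph: hits 0-1-2 form one candidate track, hits 3-4 another.
    edges = np.array([[0, 1, 3], [1, 2, 4]])
    print(label_tracks(edges, n_hits=5))
```

In practice the number of worker processes and per-process thread counts would be tuned together so that their product does not oversubscribe the physical cores, which is the trade-off the core-count measurements above are probing.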
Consider for young scientist forum (Student or postdoc speaker): Yes