Description
In 2019 the Large Hadron Collider will undergo upgrades that will increase the luminosity by a factor of two compared to today's nominal luminosity. The current CMS software parallelization strategy schedules one event per thread. However, tracking time grows combinatorially with the pileup, so the one-event-per-thread approach increases latency. When designing a HEP trigger stage, the average processing time is a key constraint, and the one-event-per-thread approach leads to a smaller-than-ideal fraction of events for which tracking is run. GPUs are becoming wider, with millions of threads running concurrently, and their width is expected to increase in the coming years. A many-threads-per-event approach would scale with the pileup, offloading the combinatorics onto the threads available on the GPU. The aim is to have GPUs running at the CMS High Level Trigger during Run 3, reconstructing Pixel Tracks directly from RAW data. The main advantages would be:

- avoiding recurrent data movements between host and device;
- using parallel-friendly data structures without having to transform the data into different (OO) representations;
- increasing the throughput density of the HLT (events * s^-1 * liter^-1), hence increasing the input rate;
- attracting students and giving them a set of skills that is very valuable outside HEP.
| Primary Keyword (Mandatory) | Trigger |
|---|---|
| Secondary Keyword (Optional) | Parallelization |