Description
When the HL-LHC starts operation in a few years, the CMS experiment will face considerably more complex proton-proton collision events as well as an increased data-taking rate. Current projections suggest that the CPU demand for reconstruction and processing will grow beyond the capacity expected from conventional technological progress. An effort has therefore been started to optimize the software to exploit modern hardware architectures such as GPUs. To be able to execute programs compiled from a single source code on different architectures, CMS has adopted the Alpaka portability library. One application that profits from a GPU-enabled approach is the unpacking of raw data into the objects that are fed into the local reconstruction. This contribution focuses on the unpacking of hit clusters in the CMS HL-LHC Outer Tracker. The presentation details the effort to port an algorithm originally implemented for CPUs to a parallelizable algorithm suited for GPUs, as well as the choice of data structures that enables efficient offloading of the execution to GPU kernels. Furthermore, results from the validation and from performance measurements are presented.
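The data-structure choice alluded to above typically means moving from an array-of-structures (AoS) layout, which is natural on a CPU, to a structure-of-arrays (SoA) layout, in which consecutive GPU threads access consecutive memory addresses (coalesced access). A minimal sketch of the idea in plain C++ follows; the names `ClusterAoS` and `ClusterSoA` and the fields are illustrative placeholders, not CMS code:

```cpp
#include <cstdint>
#include <vector>

// Array-of-structures: convenient on a CPU, but neighbouring GPU threads
// reading the same field would touch strided memory locations.
struct ClusterAoS {
    std::uint16_t channel;  // illustrative fields, not the CMS cluster format
    std::uint16_t width;
    std::uint32_t adc;
};

// Structure-of-arrays: each field is stored contiguously, so thread i
// reading adc[i] and thread i+1 reading adc[i+1] access adjacent memory,
// which GPUs can coalesce into a single wide transaction.
struct ClusterSoA {
    std::vector<std::uint16_t> channel;
    std::vector<std::uint16_t> width;
    std::vector<std::uint32_t> adc;

    void push_back(const ClusterAoS& c) {
        channel.push_back(c.channel);
        width.push_back(c.width);
        adc.push_back(c.adc);
    }
    std::size_t size() const { return adc.size(); }
};
```

In a portability-library setting such as Alpaka, the SoA columns would live in device buffers and be filled by a kernel rather than by `push_back`; the sketch only shows the memory-layout distinction.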