Description
The large volume of data collected at the LHC makes it challenging to maintain existing trigger schemes and save data for offline processing. As a result, it is becoming increasingly important to execute algorithms with greater selection capability online, utilizing low-latency devices with high parallelization. The implementation of deep learning algorithms on FPGAs could be a winning strategy for more efficient online event selection. However, the sub-microsecond latency requirements of FPGA-based trigger and data acquisition systems make deep learning algorithm design difficult. In particular, models must be compressed and reshaped appropriately before being implemented on FPGAs, so as not to exceed the available resources. We propose an original method for compressing and reshaping Deep Neural Networks, using as a baseline study the identification of jets containing $b$ quarks arising from a boosted Higgs boson decay. During training, our pruning technique selects the relevant input features and removes unimportant nodes. As a result, the neural network's overall size is reduced, with the final dimensions specified by the user. Our method is simple to integrate into existing Deep Neural Network classifiers, allowing the identification of the best network design compatible with the available FPGA resources while reducing both the quantity of input data and the network size without sacrificing performance. Promising findings are shown, accompanied by a roadmap for future advancements and applications.
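As a rough illustration of the kind of compression described above (not the authors' actual algorithm, whose details are not given here), a generic magnitude-based sketch can rank input features and hidden nodes by the L1 norm of their attached weights and keep only a user-specified number of each. All names (`prune_network`, `keep_features`, `keep_nodes`) and the use of plain NumPy arrays are illustrative assumptions:

```python
import numpy as np

# Toy single-hidden-layer network weights (shapes chosen for illustration).
rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 16, 32, 1
W1 = rng.normal(size=(n_in, n_hidden))   # input -> hidden weights
W2 = rng.normal(size=(n_hidden, n_out))  # hidden -> output weights

def prune_network(W1, W2, keep_features, keep_nodes):
    """Shrink the network to user-chosen dimensions by dropping the
    least important input features and hidden nodes."""
    # Importance of each input feature: total outgoing weight magnitude.
    feat_importance = np.abs(W1).sum(axis=1)
    feat_idx = np.sort(np.argsort(feat_importance)[-keep_features:])
    # Importance of each hidden node: incoming plus outgoing magnitude.
    node_importance = np.abs(W1).sum(axis=0) + np.abs(W2).sum(axis=1)
    node_idx = np.sort(np.argsort(node_importance)[-keep_nodes:])
    # Slice the weight matrices down to the surviving rows/columns.
    return W1[np.ix_(feat_idx, node_idx)], W2[node_idx], feat_idx

W1p, W2p, kept = prune_network(W1, W2, keep_features=8, keep_nodes=16)
print(W1p.shape, W2p.shape)  # (8, 16) (16, 1)
```

In the method summarized in the abstract this selection happens during training, so the ranking would be interleaved with gradient updates rather than applied once to fixed weights as in this sketch; the smaller matrices are what would ultimately fit within the FPGA resource budget.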