Description
Deep sets network architectures are permutation invariant, which makes them
well suited to finding correlations in unordered, variable-length input data.
Their use on FPGAs would open up accelerated machine learning in areas where
the input has no fixed length or order, such as inner-detector hits for
clustering or associated particle tracks for jet tagging. We adapted DIPS
(Deep Impact Parameter Sets), a deep sets neural network flavour tagging
algorithm previously used in ATLAS offline low-level flavour tagging and
online b-jet trigger preselections, for use on FPGAs, with the aim of
assessing its performance and resource costs. QKeras and HLS4ML are used for
quantisation-aware training and for translation to an FPGA implementation,
respectively. Several challenges are addressed, such as finding replacements
for functionality not available in HLS4ML (e.g. TimeDistributed layers) and
implementing custom HLS4ML layers. Satisfactory implementations are tested on
an actual FPGA board to assess true resource consumption and latency. We
compare the performance of the optimal FPGA-based algorithm with the
CPU-based full-precision performance previously achieved in the ATLAS
trigger, and present the trade-offs incurred when FPGA resource usage is
reduced as much as possible. The project aims to demonstrate a viable
solution for performing sophisticated machine-learning tasks, such as
accelerated reconstruction or particle identification for early event
rejection, while running in parallel with other, more resource-intensive
tasks on an FPGA.
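The permutation invariance that motivates the deep-sets approach can be
illustrated with a minimal NumPy sketch (random stand-in weights, not the
trained DIPS network): a per-track network phi is applied to each track
independently, a symmetric sum pooling collapses the variable-length set, and
a head rho produces the output, so reordering the tracks cannot change the
result.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the per-track network phi and the head rho;
# these random weights are illustrative, not trained DIPS parameters.
W_phi = rng.normal(size=(3, 8))   # maps each track's 3 features to 8
W_rho = rng.normal(size=(8, 2))   # maps the pooled features to 2 outputs

def deep_sets(tracks):
    """tracks: (n_tracks, 3) array; n_tracks may vary jet by jet."""
    per_track = np.maximum(tracks @ W_phi, 0.0)  # phi with ReLU, track-wise
    pooled = per_track.sum(axis=0)               # symmetric sum pooling
    return pooled @ W_rho                        # rho head

tracks = rng.normal(size=(5, 3))
out = deep_sets(tracks)
out_perm = deep_sets(tracks[rng.permutation(5)])
# Permuting the tracks leaves the output unchanged (permutation invariance).
assert np.allclose(out, out_perm)
```

Because the sum pooling is symmetric, the same network accepts any number of
tracks in any order, which is what makes the architecture attractive for
variable-length detector inputs.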
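Quantisation-aware training with QKeras prepares the network for the
fixed-point arithmetic used in the HLS implementation. As a rough
illustration of what that arithmetic does (the bit widths here are arbitrary
examples, not the ones used in this project), weights and activations are
snapped to a signed fixed-point grid analogous to an HLS `ap_fixed<W,I>`
type:

```python
import numpy as np

def quantize_fixed(x, total_bits=8, int_bits=1):
    """Round x onto a signed fixed-point grid, analogous to HLS
    ap_fixed<total_bits, int_bits> (int_bits includes the sign bit).
    Illustrative only; bit widths are placeholders."""
    frac_bits = total_bits - int_bits
    scale = 2.0 ** frac_bits
    lo = -(2.0 ** (int_bits - 1))                # most negative value
    hi = (2.0 ** (int_bits - 1)) - 1.0 / scale   # most positive value
    return np.clip(np.round(x * scale) / scale, lo, hi)

w = np.array([0.137, -0.52, 0.9999, -1.3])
# Each weight is rounded to the nearest multiple of 1/128 and
# saturated into the representable range [-1, 1 - 1/128].
print(quantize_fixed(w))
```

Training with the quantisation in the loop, as QKeras does, lets the network
compensate for the rounding and saturation error that a post-training
conversion to fixed point would otherwise impose.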