3–6 Oct 2022
Southern Methodist University
America/Chicago timezone

Fast recurrent neural networks on FPGAs with hls4ml

4 Oct 2022, 14:45
15m
Southern Methodist University

Speaker

Elham E Khoda (University of Washington (US))

Description

Recurrent neural networks have been shown to be effective architectures for many tasks in high energy physics, and thus have been widely adopted. Their use in low-latency environments has, however, been limited because of the difficulty of implementing recurrent architectures on field-programmable gate arrays (FPGAs). In this paper we present an implementation of two types of recurrent neural network layers, long short-term memory (LSTM) and gated recurrent unit (GRU), within the hls4ml [1] framework. We demonstrate that our implementation is capable of producing effective designs for both small and large models, and can be customized to meet specific design requirements for inference latency and FPGA resources. We show the performance and synthesized designs for multiple neural networks, many of which are trained specifically for jet identification tasks at the CERN Large Hadron Collider.

[1] J. Duarte et al., “Fast inference of deep neural networks in FPGAs for particle physics”, JINST 13 (2018) P07027, arXiv:1804.06913
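To illustrate the workflow the abstract describes, the sketch below shows how a small Keras model containing a GRU layer might be converted to an HLS project with hls4ml's public API. The model architecture, sequence shape, output directory, and FPGA part number are illustrative assumptions, not the authors' actual configuration; the per-layer configuration step is where precision and reuse factor would be tuned to trade inference latency against FPGA resources.

# A minimal sketch, assuming a toy GRU classifier; sizes, the output
# directory, and the target part are hypothetical, not from the paper.
import tensorflow as tf
import hls4ml

# Hypothetical small sequence classifier, e.g. jet constituents
# represented as a (timesteps, features) input.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20, 6)),
    tf.keras.layers.GRU(16),
    tf.keras.layers.Dense(5, activation='softmax'),
])

# Per-layer hls4ml configuration; fixed-point precision and reuse
# factor can be adjusted here to meet latency/resource targets.
config = hls4ml.utils.config_from_keras_model(model, granularity='name')

# Convert the Keras model into an HLS project (example Xilinx part).
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir='hls4ml_gru_prj',
    part='xcvu9p-flga2104-2-e',
)

# Compile the C++ emulation library for bit-accurate validation of
# the firmware model against the original floating-point network.
hls_model.compile()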

Primary authors

Aaron Wang
Caterina Vernieri (SLAC National Accelerator Laboratory (US))
Mr Chaitanya Paikara (University of Washington)
Dylan Sheldon Rankin (Massachusetts Inst. of Technology (US))
Elham E Khoda (University of Washington (US))
Michael Aaron Kagan (SLAC National Accelerator Laboratory (US))
Philip Coleman Harris (Massachusetts Inst. of Technology (US))
Rafael Teixeira De Lima (SLAC National Accelerator Laboratory (US))
Ms Richa Rao (University of Washington)
Scott Hauck
Shih-Chieh Hsu (University of Washington Seattle (US))
Sioni Paris Summers (CERN)
Vladimir Loncar