Speaker
Benjamin Ramhorst
(ETH Zurich)
Description
In this tutorial, you will become familiar with the hls4ml library, which converts pre-trained Machine Learning models into FPGA firmware targeting extremely low-latency inference. You will learn model compression techniques, including how to reduce the footprint of your model using state-of-the-art methods such as quantization. Finally, you will learn how to synthesize your model for implementation on the chip. Familiarity with Machine Learning using Python and Keras is beneficial for this tutorial but not required. https://github.com/fastmachinelearning/hls4ml-tutorial
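The quantization mentioned above maps model weights onto fixed-point numbers, which are cheap to implement in FPGA logic. As a rough illustration of the idea (not hls4ml's actual API — the function name and parameters below are illustrative), here is a NumPy sketch that emulates rounding and saturating values to a signed fixed-point format with a given total and integer bit width:

```python
import numpy as np

def quantize_fixed(x, total_bits=8, int_bits=1):
    # Illustrative helper (not part of hls4ml): emulate a signed
    # fixed-point format with `total_bits` bits overall, of which
    # `int_bits` cover the sign/integer part and the rest the fraction.
    frac_bits = total_bits - int_bits
    scale = 2.0 ** frac_bits
    # Round to the nearest representable step...
    q = np.round(x * scale) / scale
    # ...then saturate to the representable range.
    lo = -(2.0 ** (int_bits - 1))
    hi = (2.0 ** (int_bits - 1)) - 1.0 / scale
    return np.clip(q, lo, hi)

weights = np.array([0.1234, -0.987, 0.5001, 2.5])
print(quantize_fixed(weights, total_bits=8, int_bits=1))
# 2.5 lies outside the representable range and saturates to the maximum.
```

Narrower bit widths shrink the hardware footprint at the cost of precision; finding a width that preserves model accuracy is the core trade-off the tutorial explores.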