Link to the tutorial slides here
In order to participate in the tutorial, you need to have a valid GitHub account. If you do not already have one, please register at
https://github.com/join
The Notebooks for this tutorial can then be accessed at
http://34.121.105.225/hub/login
--------------------------------------------------------------------------
With edge computing, real-time inference of deep neural networks (DNNs) on custom hardware has become increasingly relevant. Smartphone companies are incorporating Artificial Intelligence (AI) chips into their designs for on-device inference to improve user experience and tighten data security, and the autonomous vehicle industry is turning to application-specific integrated circuits (ASICs) to keep latency low.
While the typical acceptable latency for real-time inference in applications like those above is O(1) ms, other applications require sub-microsecond inference. For instance, machine learning (ML) algorithms for high-frequency trading run on field-programmable gate arrays (FPGAs), reconfigurable hardware devices, to make decisions within nanoseconds. At the extreme end of the inference spectrum, combining both the low latency of high-frequency trading and the limited area of smartphone applications, is the processing of data from proton-proton collisions at the Large Hadron Collider (LHC) at CERN. Here, latencies of O(1) microsecond are required and resources are strictly limited.
In this tutorial you will get familiar with the hls4ml library. This library converts pre-trained Machine Learning models into FPGA firmware, targeting extreme low-latency inference in order to stay within the strict constraints imposed by the CERN particle detectors. You will learn techniques for model compression, including how to reduce the footprint of your model using state-of-the-art methods such as pruning and quantization-aware training. Finally, you will learn how to synthesize your model for implementation on a chip. Familiarity with Machine Learning using Python and Keras is beneficial for participating in this tutorial, but not required.
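To give a flavour of the two compression ideas mentioned above, here is a minimal NumPy sketch (not the hls4ml or QKeras API itself): magnitude pruning zeroes out the smallest weights, and fixed-point quantization rounds the survivors to a low-precision grid like the ap_fixed types used on FPGAs. The matrix size, bit widths, and threshold below are illustrative choices, not values from the tutorial.

```python
import numpy as np

# A small random "weight matrix" standing in for a trained layer.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=(4, 4))

# Magnitude pruning: zero out the 50% of weights with smallest magnitude.
threshold = np.quantile(np.abs(weights), 0.5)
pruned = np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize(x, total_bits=8, int_bits=2):
    """Round to a fixed-point grid, mimicking an FPGA ap_fixed<8,2> type:
    8 bits total, 2 integer bits -> 6 fractional bits, step 2**-6."""
    frac_bits = total_bits - int_bits
    step = 2.0 ** -frac_bits
    lo = -2.0 ** (int_bits - 1)
    hi = 2.0 ** (int_bits - 1) - step
    return np.clip(np.round(x / step) * step, lo, hi)

quantized = quantize(pruned)
```

In practice pruning and quantization are applied during training (so the network can adapt to the reduced precision), which is what quantization-aware training refers to; this sketch only shows the arithmetic applied after the fact.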
Lecturers:
Thea Aarrestad (CERN)
Sioni Summers (CERN)
An event organised by UZH and mPP
Local organising committee:
Darius Faroughy (UZH)
Davide Lancierini (UZH)
Vinicius Massami Mikuni (UZH)
mPP coordinators:
Thea K. Årrestad (CERN)
Jennifer Ngadiuba (Caltech)
Maurizio Pierini (CERN)
Vladimir Loncar (CERN)
Sioni Summers (CERN)
Jean-Roch Vlimant (Caltech)