How to do ultrafast Deep Neural Network inference on FPGAs
Wednesday 6 February 2019
08:30 - 09:00  Registration
Room: Y16-G-15
09:00 - 09:15  Welcome
Speaker: Thea Aarrestad (Universitaet Zuerich (CH))
Room: Y16-G-15
09:15 - 10:30  What is HLS4ML?
Speaker: Jennifer Ngadiuba (CERN)
Room: Y16-G-15
An introduction to the HLS4ML framework.
10:30 - 11:00  Coffee break
Room: Y16-G-15
11:00 - 12:00  Firmware implementation with SDAccel
Room: Y16-G-15
Export the HLS design to firmware with Xilinx SDAccel on the Amazon cloud.
12:00 - 13:30  Lunch break
Room: Y16-G-15
13:30 - 14:30  Optimize FPGA design: quantization and parallelization with HLS4ML
Speaker: Jennifer Ngadiuba (CERN)
Room: Y16-G-15
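Quantization here means mapping floating-point network weights onto the fixed-point HLS types (such as `ap_fixed<W,I>`) used in the generated firmware. The sketch below illustrates the round-and-saturate step behind that mapping; the `quantize_fixed` helper and its default bit widths are illustrative assumptions, not part of the hls4ml API:

```python
def quantize_fixed(x, total_bits=16, int_bits=6):
    """Quantize a float to a signed fixed-point value, mimicking an HLS
    ap_fixed<total_bits, int_bits> type (illustrative helper, not hls4ml)."""
    frac_bits = total_bits - int_bits
    scale = 1 << frac_bits
    # Round to the nearest representable fixed-point step.
    q = round(x * scale)
    # Saturate to the range of a signed number with `total_bits` bits.
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    q = max(lo, min(hi, q))
    return q / scale

# Narrower types cost less FPGA area but add quantization error;
# choosing the widths is the accuracy/resource trade-off tuned in hls4ml.
weights = [0.7236, -1.5113, 0.0419]
print([quantize_fixed(w) for w in weights])
```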
14:30 - 15:30  Optimize FPGA design: model compression
Room: Y16-G-15
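A common form of model compression is magnitude-based pruning: zeroing the smallest weights so the multiplications they feed can be removed from the firmware entirely. A minimal sketch of that idea (the `prune_by_magnitude` helper is hypothetical, not the hls4ml interface):

```python
def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights.
    Illustrative magnitude-based pruning, not the hls4ml API.
    Note: ties at the threshold may prune slightly more than `sparsity`."""
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Threshold = magnitude of the n_prune-th smallest weight.
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

# Pruned (zero) weights translate into multipliers the HLS tool can drop.
w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.002]
print(prune_by_magnitude(w, 0.5))
```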
15:30 - 16:00  Coffee break
Room: Y16-G-15
16:00 - 17:00  Model acceleration on cloud FPGAs
Speaker: Jennifer Ngadiuba (CERN)
Room: Y16-G-15