Description
Trying to modulate the RF cavity of a Synchrotron Light Source by means of Reinforcement Learning led us to a hardware implementation of the Gated Recurrent Unit (GRU) on the Versal AI Engine by AMD Xilinx, which is extremely efficient at performing the main numerical operations needed by the model.
RNNs were designed to handle time series, which makes them perfect candidates for this kind of task. Although RNNs do not parallelize well, their ability to distill the input and carry information forward through the hidden state can be beneficial for real-time control tasks.
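For readers unfamiliar with the cell, the sketch below shows a single GRU step in plain C++. It is a scalar reference implementation under illustrative assumptions (gate names, sizes, and the float data type are mine), not the vectorized AI Engine kernel presented in the talk; its purpose is only to show the matrix-vector products and elementwise operations that dominate the workload.

```cpp
// Minimal single-step GRU forward pass (scalar reference, illustrative only).
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<float>;
using Mat = std::vector<Vec>;   // row-major: Mat[row][col]

static float sigmoidf(float x) { return 1.0f / (1.0f + std::exp(-x)); }

// y = W * x  (matrix-vector product, the dominant operation in the cell)
static Vec matvec(const Mat& W, const Vec& x) {
    Vec y(W.size(), 0.0f);
    for (std::size_t i = 0; i < W.size(); ++i)
        for (std::size_t j = 0; j < x.size(); ++j)
            y[i] += W[i][j] * x[j];
    return y;
}

// One GRU step: h_t = (1 - z_t) .* h_{t-1} + z_t .* h~_t
Vec gru_step(const Vec& x, const Vec& h_prev,
             const Mat& Wz, const Mat& Uz, const Vec& bz,
             const Mat& Wr, const Mat& Ur, const Vec& br,
             const Mat& Wh, const Mat& Uh, const Vec& bh) {
    const std::size_t H = h_prev.size();
    Vec Wzx = matvec(Wz, x), Uzh = matvec(Uz, h_prev);
    Vec Wrx = matvec(Wr, x), Urh = matvec(Ur, h_prev);
    Vec z(H), r(H);
    for (std::size_t i = 0; i < H; ++i) {
        z[i] = sigmoidf(Wzx[i] + Uzh[i] + bz[i]);   // update gate
        r[i] = sigmoidf(Wrx[i] + Urh[i] + br[i]);   // reset gate
    }
    Vec rh(H);
    for (std::size_t i = 0; i < H; ++i) rh[i] = r[i] * h_prev[i];
    Vec Whx = matvec(Wh, x), Uhr = matvec(Uh, rh);
    Vec h(H);
    for (std::size_t i = 0; i < H; ++i) {
        float h_cand = std::tanh(Whx[i] + Uhr[i] + bh[i]); // candidate state
        h[i] = (1.0f - z[i]) * h_prev[i] + z[i] * h_cand;  // blend with h_{t-1}
    }
    return h;
}
```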
I will first introduce the AI Engine and its features. Second, I will introduce the GRU cell. Then I will show how the numerical operations are implemented on the AI Engine and discuss its memory limitations. Finally, I will explain how we tackled the lack of built-in activation functions, since implementations of tanh and sigmoid are needed.
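As an illustration of the kind of workaround involved, the sketch below approximates tanh with a low-order rational function built only from multiplies, adds, one divide, and clamping, and derives sigmoid from it. This is just one possible approach and an assumption on my part; the solution discussed in the talk may instead use lookup tables or piecewise polynomials.

```cpp
// Illustrative activation approximations for hardware without built-in
// transcendental functions (not necessarily the method used in the talk).
#include <algorithm>
#include <cmath>
#include <cstdio>

// tanh(x) ~= x * (15 + x^2) / (15 + 6 x^2), accurate to roughly 1e-2 on [-2, 2];
// the output is clamped to [-1, 1] so large inputs saturate correctly.
static float tanh_approx(float x) {
    float x2 = x * x;
    float t = x * (15.0f + x2) / (15.0f + 6.0f * x2);
    return std::min(1.0f, std::max(-1.0f, t));
}

// sigmoid(x) = 0.5 * (1 + tanh(x / 2)), so one approximation serves both gates.
static float sigmoid_approx(float x) {
    return 0.5f * (1.0f + tanh_approx(0.5f * x));
}

int main() {
    // Quick accuracy check against the libm reference.
    for (float x = -4.0f; x <= 4.0f; x += 1.0f) {
        std::printf("x=%+.1f  tanh err=%.4f  sigmoid err=%.4f\n", x,
                    std::fabs(tanh_approx(x) - std::tanh(x)),
                    std::fabs(sigmoid_approx(x) - 1.0f / (1.0f + std::exp(-x))));
    }
    return 0;
}
```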
| Talk's Q&A | During the talk |
| --- | --- |
| Talk duration | 20'+10' |
| Will you be able to present in person? | Yes |