High-precision time measurements on the order of picoseconds ($10^{-12}$ s), as required in fluorescence lifetime microscopy and time-of-flight (ToF) applications, can be achieved using Time-to-Digital Converters (TDCs). Traditional timing methods rely on high-frequency clock counters, which become impractical for such small time intervals. A solution is to exploit the hardware of FPGAs to...
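As a back-of-the-envelope illustration of why a plain counter falls short (the figures below are assumed for illustration, not taken from this work): a counter's resolution is one clock period, $\Delta t = 1/f_{\mathrm{clk}}$, so a 10 ps bin would require $f_{\mathrm{clk}} = 100$ GHz, orders of magnitude beyond typical FPGA fabric clocks of a few hundred MHz, whereas a delay-line-based TDC subdivides a single clock period into element delays of a few tens of picoseconds.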
Time-to-digital converters (TDCs) are essential for precise time measurements in several domains. FPGA-based TDCs are a flexible, low-cost, and highly customizable alternative. We have implemented an FPGA-based TDC based on a tapped delay line architecture, targeting the latest AMD device families, to measure single-particle transit times in a silicon sensor with a <30 ps resolution at Particle...
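A minimal software sketch of how a tapped-delay-line TDC produces a timestamp, assuming a thermometer-coded delay chain and hypothetical clock and per-tap delays (none of these figures are taken from the implementation described above):

```python
# Sketch of tapped-delay-line TDC decoding (illustrative only).
# A hit propagates through a chain of delay elements; at the next clock edge
# each tap's flip-flop records whether the hit has reached it, producing a
# "thermometer code". The number of set taps gives the fine time within one
# clock period.

CLOCK_PERIOD_PS = 2500          # hypothetical 400 MHz system clock
TAP_DELAY_PS = 25               # hypothetical per-tap (carry-chain) delay

def decode_fine_time(thermometer_bits):
    """Count consecutive '1's from the input end of the delay line."""
    ones = 0
    for bit in thermometer_bits:
        if bit != 1:
            break
        ones += 1
    return ones * TAP_DELAY_PS  # fine time in ps within one clock period

def timestamp_ps(coarse_counter, thermometer_bits):
    """Combine the coarse counter (clock periods) with the fine interpolation."""
    return coarse_counter * CLOCK_PERIOD_PS - decode_fine_time(thermometer_bits)

# Example: hit arrived 4 taps (~100 ps) before the sampling clock edge.
print(timestamp_ps(coarse_counter=10, thermometer_bits=[1, 1, 1, 1, 0, 0, 0, 0]))
```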
An effort to modulate the RF cavity of a Synchrotron Light Source with Reinforcement Learning resulted in a hardware implementation of the Gated Recurrent Unit (GRU) on the AMD Xilinx Versal AI Engine, which proved extremely efficient at performing the main numerical operations needed by the model.
RNNs are designed to process time series, making them ideal candidates to handle...
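For context, the main numerical operations of a GRU cell are a handful of matrix-vector products followed by element-wise non-linearities; the NumPy sketch below is only illustrative and does not reproduce the Versal AI Engine implementation (all sizes and names are assumed):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h_prev, Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh):
    """One GRU step: six matrix-vector products plus element-wise operations."""
    z = sigmoid(Wz @ x + Uz @ h_prev + bz)               # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev + br)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev) + bh)   # candidate state
    return (1.0 - z) * h_prev + z * h_tilde              # new hidden state

# Illustrative sizes: 8 inputs, 16 hidden units.
rng = np.random.default_rng(0)
n_in, n_h = 8, 16
params = [rng.standard_normal(s) * 0.1
          for s in [(n_h, n_in), (n_h, n_h), (n_h,)] * 3]
h = np.zeros(n_h)
for t in range(5):                                       # step over a short sequence
    h = gru_cell(rng.standard_normal(n_in), h, *params)
print(h.shape)
```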
Neural networks with latency requirements on the order of $\mu$s, like the ones used at the CERN Large Hadron Collider, are typically deployed fully unrolled on FPGAs. A bottleneck for the deployment of such neural networks is area utilization, which is directly related to the constant matrix-vector multiplications (CMVM) performed in the networks. In this work, we implement an algorithm that...
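To make the CMVM cost concrete: in a fully unrolled design the weight matrix is a hardware constant, so every multiply can in principle be decomposed into shifts and adds, which is what drives area. The sketch below illustrates this idea with assumed integer weights; it is not the algorithm implemented in this work:

```python
# Constant matrix-vector multiplication (CMVM) with fixed integer weights:
# because W is a compile-time constant, each multiply unrolls into shifts and adds.

W = [[3, -2, 0],      # assumed small signed integer weights (illustrative)
     [1,  4, -1]]

def shift_add_dot(row, x):
    """Multiply-free dot product: each constant weight becomes shifts and adds."""
    acc = 0
    for w, xi in zip(row, x):
        sign = -1 if w < 0 else 1
        w = abs(w)
        bit = 0
        while w:                      # decompose |w| into powers of two
            if w & 1:
                acc += sign * (xi << bit)
            w >>= 1
            bit += 1
    return acc

def cmvm(x):
    return [shift_add_dot(row, x) for row in W]

print(cmvm([5, 7, 2]))               # same result as an ordinary W @ x
```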
Chisel4ml is a tool we developed for generating fast implementations of deeply quantized neural networks on FPGA devices. The tool is implemented in the Chisel Hardware Construction Language and has a Python frontend to enable interfacing with neural network training libraries. We will present the basics of the Chisel language and compare chisel4ml against hls4ml. In general, chisel4ml is...
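As a rough illustration of the kind of computation such tools map to hardware, here is a deeply quantized dense layer in NumPy; the bit widths, scales, and function names are assumed for illustration and are not chisel4ml's actual API:

```python
import numpy as np

def quantize(x, bits, scale):
    """Uniform signed quantization to the given bit width (illustrative)."""
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(np.round(x / scale), qmin, qmax).astype(np.int32)

def quantized_dense(x_q, w_q, relu=True):
    """Integer-only dense layer, the kind of block mapped to FPGA logic."""
    acc = w_q @ x_q                       # integer multiply-accumulates
    return np.maximum(acc, 0) if relu else acc

rng = np.random.default_rng(1)
w_q = quantize(rng.standard_normal((4, 8)), bits=3, scale=0.5)   # 3-bit weights
x_q = quantize(rng.standard_normal(8), bits=4, scale=0.25)       # 4-bit activations
print(quantized_dense(x_q, w_q))
```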
Kernel methods are fundamental in machine learning. They excel in regression, classification, and dimensionality reduction, model nonlinear relationships, and are widely used in fields such as face recognition, wind forecasting, and molecular energy estimation. However, their reliance on a kernel matrix leads to quadratic complexity in computation and storage, which makes...
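The quadratic cost mentioned above comes from the $n \times n$ kernel (Gram) matrix; a minimal sketch with an RBF kernel (the sizes and kernel parameter are illustrative):

```python
import numpy as np

def rbf_kernel_matrix(X, gamma=0.5):
    """Full n x n Gram matrix: O(n^2) entries to compute and store."""
    sq_norms = np.sum(X ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
n, d = 2000, 16
X = rng.standard_normal((n, d))
K = rbf_kernel_matrix(X)
print(K.shape, K.nbytes / 1e6, "MB")   # 2000 x 2000 doubles: 32 MB
```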
The ever-increasing data rates and ultra-low-latency requirements of particle physics experiments demand innovations for real-time decision-making. Transformer Neural Networks (TNNs) have demonstrated state-of-the-art performance in classification tasks, including jet tagging, but implementations on CPUs and GPUs fail to meet the constraints for real-time triggers. This work introduces two...
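For reference, the core building block of a TNN is scaled dot-product attention, $\mathrm{softmax}(QK^{T}/\sqrt{d_k})\,V$; the NumPy sketch below is purely illustrative and unrelated to the two implementations introduced in this work (all dimensions are assumed):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
seq_len, d_k = 8, 16          # e.g., 8 input tokens of width 16 (illustrative)
Q, K, V = (rng.standard_normal((seq_len, d_k)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)   # (8, 16)
```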
Reservoir Computing (RC) is a Machine Learning paradigm that provides an alternative to Neural Networks for predicting dynamical systems, offering advantages in efficiency and computational simplicity. These characteristics make RC particularly well-suited for implementation on resource-constrained hardware such as FPGAs, enabling low-power, real-time edge computing. Next-Generation Reservoir Computing...
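As a minimal sketch of the Next-Generation Reservoir Computing idea (a delay embedding with polynomial features and a linear ridge readout), assuming an illustrative delay depth and regularization strength not taken from this work:

```python
import numpy as np

def ngrc_features(x, k=4):
    """Concatenate k time-delayed copies of the signal and their quadratic terms."""
    n = len(x)
    lin = np.column_stack([x[k - 1 - d : n - d] for d in range(k)])   # delay embedding
    quad = np.column_stack([lin[:, i] * lin[:, j]
                            for i in range(k) for j in range(i, k)])
    return np.column_stack([np.ones(len(lin)), lin, quad])            # bias + linear + quadratic

def ridge_readout(features, targets, alpha=1e-6):
    """Linear readout trained by ridge regression (the only trained part of NGRC)."""
    A = features.T @ features + alpha * np.eye(features.shape[1])
    return np.linalg.solve(A, features.T @ targets)

# One-step-ahead prediction of a toy scalar series.
t = np.linspace(0, 20, 500)
x = np.sin(t) + 0.5 * np.sin(2.3 * t)
k = 4
F = ngrc_features(x[:-1], k=k)           # features built from past samples
y = x[k:]                                # next-step targets aligned with F
W = ridge_readout(F, y)
print(np.abs(F @ W - y).max())           # maximum one-step training residual
```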