Description
Efficient computational strategies are paramount for devices in resource-limited settings, particularly within high-energy physics experiments. To address this, we propose research focused on the improved energy efficiency and reduced latency of AI algorithms implemented with analog circuits, such as memristive crossbar arrays that perform in-memory matrix-vector multiplication for Artificial Neural Networks (ANNs), as compared with digital AI (e.g., on FPGAs). This study proposes the customization of a small ANN model within the HLS4ML framework to transition quantized NN models into the analog domain. We have created an 8-T unit cell that employs a compute-in-memory SRAM cell for multi-bit precision inference, featuring memory-integrated data conversion and multiplication-free operations. This investigation will offer comprehensive insights into the performance of the analog AI model and open up the possibility of extending it to more complex and larger models. We are also exploring HEP applications for this technology, including current and future colliders and experiments.
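To illustrate the idea of in-memory matrix-vector multiplication described above, the following is a minimal, idealized sketch (not the actual circuit model from this work): quantized weights are mapped to a differential pair of conductances, input activations are applied as row voltages, and each column current sums the products via Ohm's law and Kirchhoff's current law. All function names, conductance ranges, and the 4-bit quantization are illustrative assumptions.

```python
import numpy as np

def crossbar_mvm(weights, inputs, g_min=1e-6, g_max=1e-4, bits=4):
    """Idealized memristive-crossbar matrix-vector multiply (a sketch).

    weights: (n_in, n_out) array; inputs: (n_in,) row voltages.
    Each weight is quantized to `bits` levels and mapped to a pair of
    conductances (positive and negative columns). Column currents then
    encode the dot products; the differential readout recovers W^T x.
    Non-idealities (device variation, IR drop, noise) are ignored here.
    """
    levels = 2 ** bits - 1
    w_max = np.max(np.abs(weights))
    # Quantize weights to signed integer levels in [-levels, levels]
    q = np.round(weights / w_max * levels)
    # Differential mapping: magnitude on one column of the pair, g_min on the other
    g_pos = np.where(q > 0, q, 0) / levels * (g_max - g_min) + g_min
    g_neg = np.where(q < 0, -q, 0) / levels * (g_max - g_min) + g_min
    v = inputs
    # Kirchhoff's current law: each column current sums g[i, j] * v[i]
    i_pos = g_pos.T @ v
    i_neg = g_neg.T @ v
    # Differential current readout, rescaled back to weight units
    return (i_pos - i_neg) / (g_max - g_min) * w_max
```

The analog result matches the digital matrix-vector product up to quantization error, which shrinks as `bits` increases; in practice, device non-idealities add further error terms that this sketch omits.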