Description
This talk presents the development of a framework for Tree Tensor Network (TTN) based binary classification. The primary objective of this study is to train TTN classifiers, evaluate their performance, and optimise the code for their efficient deployment on computing accelerators such as General-Purpose Graphics Processing Units (GPGPUs) or Field-Programmable Gate Arrays (FPGAs). To evaluate the effectiveness of the implementation, we present different TTN classifier models, trained on synthetic machine-learning datasets as well as on physics data for more challenging classification tasks. In the context of High-Energy Physics (HEP) applications, the computational burden of these models becomes pivotal; we therefore discuss information-aware pruning methods based on the explainability features of quantum-inspired machine-learning models. Moreover, by simulating the hardware logic, we evaluate further compression possibilities. Finally, we discuss possible further developments of this software and its integration into more robust frameworks such as Quantum TEA.
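For readers unfamiliar with how a TTN acts as a classifier, the sketch below shows the general idea in a minimal form: each input feature is embedded by a local feature map, and the resulting vectors are contracted pairwise up a binary tree of tensors whose top node yields the two class scores. This is an illustrative toy in NumPy, not the talk's framework; the feature map, bond dimension, and random initialisation are assumptions made only for the example.

```python
import numpy as np

def feature_map(x):
    """Map each scalar feature in [0, 1] to a 2-dimensional local vector (assumed encoding)."""
    return np.stack([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)], axis=-1)

def random_ttn(n_features, bond_dim=4, n_classes=2, seed=0):
    """Build a random binary-tree TTN: one rank-3 tensor per internal node, layer by layer."""
    rng = np.random.default_rng(seed)
    layers, dim_in, n = [], 2, n_features   # physical dimension 2 from the feature map
    while n > 1:
        dim_out = n_classes if n == 2 else bond_dim  # the top tensor outputs class scores
        layers.append(rng.normal(scale=0.5, size=(n // 2, dim_in, dim_in, dim_out)))
        dim_in, n = dim_out, n // 2
    return layers

def ttn_forward(layers, x):
    """Contract the tree bottom-up; returns the two (unnormalised) class scores for one sample."""
    states = list(feature_map(x))            # one local vector per input feature
    for layer in layers:
        states = [np.einsum('i,j,ijk->k', states[2 * m], states[2 * m + 1], node)
                  for m, node in enumerate(layer)]
    return states[0]

x = np.random.rand(8)                        # 8 features (a power of 2, for a perfect binary tree)
print(ttn_forward(random_ttn(n_features=8), x))
```

In tensor-network models of this kind, compression is typically a matter of reducing the bond dimensions of the node tensors, which is one reason pruning can be guided by the information (e.g. correlation) carried across each bond; the concrete pruning criteria used in the talk are not reproduced here.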