Description
The advantage of quantum computers over classical devices lies in the possibility of exploiting quantum superposition across n qubits to explore an exponentially large computational space. On large-scale, fault-tolerant quantum computers, this effect can reduce the computational complexity of certain classes of problems, such as optimisation, sampling, or combinatorial problems.
However, today we only have access to Noisy Intermediate-Scale Quantum (NISQ) hardware, characterised by short coherence times, small numbers of qubits, and limited lattice connectivity.
These limitations have justified intense R&D towards the design and optimisation of dedicated NISQ algorithms. Additionally, classical data embedding in quantum circuits and data I/O represent practical challenges that can hinder the advantages of quantum algorithms.
We study the quantum counterpart of Support Vector Machines, namely Quantum Support Vector Machines (QSVMs), and a new QML architecture that combines a classical encoder neural network and a Variational Quantum Circuit (VQC) into a single model, i.e., a Neural Network Variational Quantum Circuit (NNVQC), for the binary classification of High Energy Physics data. Specifically, we focus on the identification of the Higgs boson in the ttH(bb) channel. Quantum computing approaches can potentially tackle this computationally expensive task by leveraging so-called quantum feature maps to encode classical data into quantum states. Recent proposals based on the kernel trick assume a one-feature-to-one-qubit mapping of the data. The limited number of qubits available on NISQ devices therefore imposes feature compression on complex datasets. The challenge is to perform an effective reduction while retaining enough information to achieve high classification accuracy.
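As an illustration of the kernel-trick approach with a one-feature-to-one-qubit mapping, the following is a minimal sketch assuming PennyLane and scikit-learn; the angle-encoding feature map, the qubit count, and the variable names are illustrative placeholders, not the circuit or configuration used in this study.

```python
import numpy as np
import pennylane as qml
from sklearn.svm import SVC

n_qubits = 8                      # one qubit per (compressed) input feature; placeholder value
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def kernel_circuit(x1, x2):
    # Encode the first sample, then apply the adjoint encoding of the second;
    # the probability of returning to |0...0> is the fidelity kernel entry.
    qml.AngleEmbedding(x1, wires=range(n_qubits))
    qml.adjoint(qml.AngleEmbedding)(x2, wires=range(n_qubits))
    return qml.probs(wires=range(n_qubits))

def quantum_kernel(A, B):
    # Gram matrix of pairwise state overlaps |<phi(b)|phi(a)>|^2
    return np.array([[kernel_circuit(a, b)[0] for b in B] for a in A])

# X_train, y_train: compressed event features and ttH(bb) labels (hypothetical names)
# clf = SVC(kernel=quantum_kernel).fit(X_train, y_train)
```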
This contribution studies the effect of different data compression and dimensionality reduction techniques on quantum machine learning algorithms. We identify, implement, and compare conventional feature extraction methods suitable for QSVMs via a literature-based empirical approach. No universally superior method is identified.
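One conventional compression step of this kind could look like the following sketch, which reduces the input features to the qubit budget and rescales them to a rotation-angle range; the feature and qubit counts are placeholders, and PCA stands in for any of the compared methods.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

n_qubits = 8                              # assumed qubit budget of the target NISQ device
X = np.random.rand(1000, 67)              # placeholder for the high-level ttH(bb) observables

compressor = make_pipeline(
    PCA(n_components=n_qubits),           # keep one component per available qubit
    MinMaxScaler(feature_range=(0, np.pi)),  # scale into a valid rotation-angle range
)
X_compressed = compressor.fit_transform(X)   # shape: (1000, 8), ready for angle encoding
```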
Furthermore, we develop five distinct Auto-Encoder architectures, including a Variational Auto-Encoder and an end-to-end Sinkhorn Auto-Encoder with a classical classification neural network attached to its latent space. The latent spaces produced with optimal hyperparameters and data normalisation are passed to a QSVM that performs the ttH(bb) classification. The QSVM performance improves for some of the considered Auto-Encoder latent spaces. The classification power of the NNVQC and of its classical counterparts is comparable.
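A minimal sketch of the general idea, assuming a PyTorch implementation: an Auto-Encoder whose latent space also feeds a small classifier head, trained with a weighted sum of reconstruction and classification losses. Layer sizes, the latent dimension, and the loss weight are illustrative, not the settings of the developed architectures.

```python
import torch
import torch.nn as nn

class ClassifierAutoEncoder(nn.Module):
    def __init__(self, n_features=67, latent_dim=8):   # placeholder dimensions
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.ELU(),
            nn.Linear(32, latent_dim), nn.Tanh(),       # bounded latent space
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ELU(),
            nn.Linear(32, n_features),
        )
        self.classifier = nn.Sequential(                # head attached to the latent space
            nn.Linear(latent_dim, 16), nn.ELU(),
            nn.Linear(16, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z), z

def loss_fn(x, x_rec, y, y_pred, alpha=0.5):
    # Reconstruction loss plus binary cross-entropy on the ttH(bb) label
    return (nn.functional.mse_loss(x_rec, x)
            + alpha * nn.functional.binary_cross_entropy(y_pred, y))

# After training, the latent vectors z are normalised and passed to the QSVM.
```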
The training and performance of VQC models are affected by noise inherent to NISQ devices. The influence of three different types of quantum hardware noise is studied: measurement errors, single-qubit gate errors, and two-qubit gate errors (e.g., on the CNOT gate). The QSVM and NNVQC are trained using noise models that accurately emulate the behaviour of available quantum computers. We conclude that the tested QML models are suitable for operation on current NISQ devices.
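The three noise components could be emulated along the lines of the following Qiskit Aer sketch; the study relies on calibrated models of real backends, so the depolarizing channels and error rates below are illustrative placeholders rather than measured device values.

```python
from qiskit_aer.noise import NoiseModel, ReadoutError, depolarizing_error

noise_model = NoiseModel()

# 1) Measurement (readout) errors: probability of flipping the measured bit
p_meas = 0.02
noise_model.add_all_qubit_readout_error(
    ReadoutError([[1 - p_meas, p_meas], [p_meas, 1 - p_meas]])
)

# 2) Single-qubit gate errors, modelled here as depolarizing channels
noise_model.add_all_qubit_quantum_error(depolarizing_error(0.001, 1), ["x", "sx"])

# 3) Two-qubit gate errors (e.g. CNOT), typically an order of magnitude larger
noise_model.add_all_qubit_quantum_error(depolarizing_error(0.01, 2), ["cx"])

# The noise model can then be attached to a simulator, e.g.
# AerSimulator(noise_model=noise_model), when evaluating the QSVM kernel
# or training the NNVQC.
```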
References
Previous related work with ideal simulations (no noise): https://doi.org/10.1051/epjconf/202125103070
Significance
A new, more complex QML architecture is studied in the context of the ttH(bb) classification task and improvements are observed. Further, the effect of the different noise components in NISQ devices is explored as we transition from ideal simulation studies to implementing the developed algorithms on available quantum computers. This work serves as a crucial step in our ongoing efforts towards robust QML applications in HEP.
Speaker time zone: Compatible with Europe