Speaker
Description
Quantum systems are well known to generate non-classical patterns. The prospect that they could also be used to recognize highly complex patterns hidden in data is tantalizing, and it has given rise to the young interdisciplinary field of quantum machine learning (QML). Nevertheless, while a quantum advantage in data analysis can in principle be achieved thanks to the exponentially large Hilbert space, the scalability of QML models has been the subject of heated debate: the very same exponentially large space can hinder scalability if it is handled poorly.
In this talk, I will provide an overview of QML and discuss key challenges related to scalability. In particular, I will focus on barren plateaus, or, more generally, exponential concentration phenomena, in which the training loss landscape becomes exponentially flat as the model size grows. Such effects arise in a variety of QML settings, including quantum neural networks for supervised learning [1], quantum generative models [2], and quantum kernel methods [3]. I will discuss both the origins and the consequences of these phenomena. Understanding these fundamental limitations of data analysis with quantum computers is essential for developing QML models with guaranteed scalability.
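For concreteness, a minimal sketch of how exponential concentration is typically formalized (in the spirit of [3]); the notation below is illustrative and not taken from the talk abstract itself:

% A parameter-dependent quantity X(theta) -- e.g. a loss, a gradient component,
% or a kernel entry -- concentrates exponentially around a value mu if its
% variance over the parameters (and/or data) shrinks exponentially in the
% number of qubits n:
\[
  \mathrm{Var}_{\boldsymbol{\theta}}\!\left[ X(\boldsymbol{\theta}) \right] \;\le\; \frac{\beta}{b^{\,n}},
  \qquad b > 1,\ \beta > 0 .
\]
% By Chebyshev's inequality, large deviations from mu are then exponentially unlikely:
\[
  \Pr_{\boldsymbol{\theta}}\!\left[\, \lvert X(\boldsymbol{\theta}) - \mu \rvert \ge \delta \,\right]
  \;\le\; \frac{\beta}{\delta^{2}\, b^{\,n}} .
\]
% In the barren-plateau setting, X is a partial derivative of the training loss,
% so gradients vanish exponentially in n and resolving them on hardware requires
% an exponential number of measurement shots.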
[1] S. Thanasilp, S. Wang, N. A. Nghiem, P. Coles, M. Cerezo, "Subtleties in the trainability of quantum machine learning models", Quantum Machine Intelligence 5 (1), 21 (2023).
[2] M. S. Rudolph, S. Lerch, S. Thanasilp, O. Kiss, O. Shaya, S. Vallecorsa, M. Grossi, Z. Holmes, "Trainability barriers and opportunities in quantum generative modeling", npj Quantum Information 10 (1), 116 (2024).
[3] S. Thanasilp, S. Wang, M. Cerezo, Z. Holmes, "Exponential concentration in quantum kernel methods", Nature Communications 15 (1), 5200 (2024).