Some pictures taken during the event can be found at https://cds.cern.ch/record/2881086?ln=de
Registration has reached its maximum capacity. If you wish to be added to the waiting list, please contact QTML-logistics@cern.ch. Please note that the conference will be webcast.
Programme
The programme will start on Sunday 19 November at 10:30 AM with a tutorial that will run until 6:00 PM in two parallel sessions.
On Monday 20 November we kick off the QTML conference at 8:45 AM.
The daily programme starts at 9:00 AM (except on Monday!). At 1:15 PM we break for lunch, and resume from 2:45 PM to 6:30 PM (exact times vary by day). During the week we will organise two poster sessions, the first on Tuesday 21 November and the second on Thursday 23 November. The conference will end on Friday 24 November in the afternoon.
A conference dinner is organised on Wednesday evening for those who signed up for it.
The full programme is available online!
THANK YOU to our sponsors!
In this tutorial, I will cover recent advances in developing learning theory for quantum machines. The tutorial will focus on the basic techniques for establishing prediction guarantees in quantum machine learning models and the fundamental ideas for proving the advantages of quantum machines over classical machines in learning from experiments.
Speaker: Elisa Bahumer
This tutorial gives a gentle introduction to the crucial interplay between quantum algorithms and quantum complexity theory, with an eye on developments in the quantum machine learning sphere. We begin with basic complexity classes such as BQP, followed by the HHL algorithm for its complete problem, Matrix Inversion. We then discuss how the Quantum Singular Value Transform (QSVT) significantly generalizes HHL into a framework for quantum linear-algebra algorithms. Finally, as time permits, we discuss the other side of the coin: what does it mean to "dequantize" algorithms like the QSVT, and when is it possible?
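As an illustrative aside (not part of the tutorial materials), the Matrix Inversion task that HHL addresses can be stated classically: given a well-conditioned Hermitian matrix A and a vector b, produce a state proportional to A⁻¹b. A minimal NumPy sketch of the task, on a made-up toy instance:

```python
import numpy as np

# Hypothetical toy instance of the Matrix Inversion task addressed by HHL:
# given a well-conditioned Hermitian A and a vector b, output the state
# x ~ A^{-1} b (HHL prepares this as a normalized quantum state).
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M + M.T + 8.0 * np.eye(4)        # Hermitian and well-conditioned
b = np.array([1.0, 0.0, 0.0, 0.0])

x_raw = np.linalg.solve(A, b)        # classical O(n^3) dense solve
x = x_raw / np.linalg.norm(x_raw)    # the normalized "solution state"

assert np.allclose(A @ x_raw, b)     # residual check
```

The quantum advantage question the tutorial touches on is precisely whether, and under which sparsity and conditioning assumptions, the quantum routine can beat this dense classical solve.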
Speaker: Elisa Bahumer
OPENING
Speakers: Michele Grossi & Alberto Di Meglio
The core computational tasks in quantum systems are the computation of expectations of operators, including reduced density matrices, and the computation of the ground-state energy of a quantum system. Many tools have been developed in the literature to this end, including Density Functional Theory (DFT), the Density Matrix Renormalization Group (DMRG) and other tensor-network methods, Variational Monte Carlo (VMC), and more. Recently, machine-learning-based methods such as FermiNet, PauliNet, and other neural variational methods have also been pioneered. In this work we build a bridge between the rich machine-learning literature on Loopy Belief Propagation (LBP) and its generalizations for posterior inference, and the quantum computational tasks mentioned above. It was shown recently that LBP can be used to contract tensor networks and compute reduced density matrices. Here we generalize this concept to a new class of generalized LBP methods known as Region Graph BP; as a particular example we implement TreeEP. We show that a very general framework exists that encompasses both classical and quantum LBP, which can be used to compute expectations as well as ground-state energies and states. We hope that this work will encourage cross-fertilization between these two fields.
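As a hedged illustration of the classical side of this bridge (a plain sum-product sketch, not the Region Graph BP or TreeEP generalizations of the talk): on a tree-structured model, belief propagation computes marginals exactly, which is the property that makes it usable for contracting tree-like tensor networks. The pairwise potentials below are made-up toy values:

```python
import numpy as np
from itertools import product

# Sum-product belief propagation on a 3-variable chain x0 - x1 - x2
# (binary states), with made-up pairwise potentials. On a tree, the two
# inward messages yield the exact marginal of the middle variable.
psi01 = np.array([[2.0, 0.5], [0.5, 1.0]])   # potential on (x0, x1)
psi12 = np.array([[1.0, 0.3], [0.3, 2.0]])   # potential on (x1, x2)

m_from_0 = psi01.T @ np.ones(2)   # message x0 -> x1: sum over x0
m_from_2 = psi12 @ np.ones(2)     # message x2 -> x1: sum over x2
belief = m_from_0 * m_from_2
belief /= belief.sum()            # exact marginal p(x1) on a tree

# Brute-force check against the full joint distribution.
p = np.zeros(2)
for x0, x1, x2 in product(range(2), repeat=3):
    p[x1] += psi01[x0, x1] * psi12[x1, x2]
p /= p.sum()
assert np.allclose(belief, p)
```

On loopy graphs the same message updates are iterated to a fixed point and become approximate; the Region Graph methods of the abstract systematically tighten that approximation.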
Joint work with:
Evgenii Egorov
Antonio Rotundo
Ido Niesen
Roberto Bondesan
Abstract: There is no shortage of quantum machine learning papers observing that a particular quantum model "beats its classical counterparts on real-world datasets". However, the subtlety of choices made in benchmark experiments, the small scale of the models and data, as well as narratives influenced by the commercialisation of quantum technologies carry the danger of a strong positivity bias. To judge the true potential of prominent ideas in quantum machine learning we are conducting one of the first large-scale meta-studies that systematically tests 12 popular supervised quantum models at scale using the PennyLane software framework. This talk gives a sneak peek of some surprising preliminary results, and reveals the technical and conceptual difficulty of robust benchmarking, a skill which deserves more attention in the quantum applications literature.
Abstract: Variational quantum computing schemes have received considerable attention due to their high versatility and potential to make practical use of near-term quantum devices. Despite their promise, the trainability of these algorithms can be hindered by barren plateaus (BPs) induced by the expressiveness of the parametrized quantum circuit, the entanglement of the input data, the locality of the observable or the presence of hardware noise. Up to this point, these sources of BPs have been regarded as independent and have been studied only for specific circuit architectures. In this work, we present a general Lie algebraic theory that provides an exact expression for the variance of the loss function of sufficiently deep parametrized quantum circuits, even in the presence of certain noise models. Our results unify under one single framework all aforementioned sources of BPs by leveraging generalized (and subsystem independent) notions of entanglement and operator locality. Finally, our results lead to a critical question: Does the inherent structure that precludes the presence of BPs in a variational model (a requisite for trainability) simultaneously render it classically simulable?
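As an elementary numerical aside (my own toy example, not the talk's Lie-algebraic analysis): even a product circuit with a global observable exhibits the exponential variance decay that defines a barren plateau. With independent RY(θᵢ) rotations on |0…0⟩ and the observable Z⊗…⊗Z, the cost is C(θ) = ∏ᵢ cos θᵢ, whose variance over uniformly random parameters is exactly 2⁻ⁿ:

```python
import numpy as np

# Toy barren-plateau demonstration: C(theta) = prod_i cos(theta_i) is the
# expectation of Z x ... x Z after independent RY(theta_i) rotations on
# |0...0>. Over uniform random parameters, E[C] = 0 and Var[C] = 2^{-n},
# so the loss landscape flattens exponentially in the qubit number n.
rng = np.random.default_rng(1)
for n in (2, 4, 8):
    thetas = rng.uniform(0.0, 2.0 * np.pi, size=(20000, n))
    cost = np.cos(thetas).prod(axis=1)
    print(f"n={n}: empirical var = {cost.var():.4f}, 2^-n = {2.0**-n:.4f}")
```

The talk's result gives exact variance expressions for far more general (entangling, noisy) circuits; this sketch only shows the qualitative exponential flattening for the simplest global-observable case.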
Abstract:
Although still a relatively niche field in classical machine learning, topological data analysis has attracted substantial interest from the quantum-algorithms perspective in the last few years.
In this talk we will introduce the topic of topological data analysis and discuss the state of the art of quantum algorithms for this problem, together with their promises and limitations, possible generalisations, and connections to many-body physics.
The quest to understand the fundamental constituents of the universe is at the heart of particle physics. However, the complexity of particle interactions, the volume of data produced by experiments, and the intricacy of theoretical models present significant challenges to advancements in this field. In recent years, artificial intelligence has emerged as a transformative tool for overcoming these challenges, offering new pathways to accelerate the pace of discovery and fostering a deeper understanding of the fundamental forces of nature. This talk aims to elucidate the pivotal role AI plays in particle physics, from optimizing detector design and operation to analyzing vast datasets and validating theoretical models.
What can we quantum-learn in the age of noisy quantum computation? Both more and less than you think. Noise limits our ability to error-mitigate, a term that refers to near-term schemes in which errors that arise in a quantum computation are dealt with in classical post-processing. I present a unifying framework for error mitigation and an analysis that strongly limits the degree to which quantum noise can be effectively 'undone' for larger system sizes, and shows that current error mitigation schemes are more or less as good as they can be. After presenting this negative result, I'll switch to discussing how noise can be a friendly foe: non-unital noise, unlike its unital counterparts, surprisingly results in the absence of barren plateaus in quantum machine learning.
Quantum error correction will ultimately empower quantum computers to leverage their full potential. However, substantial device overhead and the need for frequent syndrome measurements, which are themselves error-prone, still make it challenging to demonstrate logical qubits that significantly surpass break-even. Autonomous quantum error correction represents a promising alternative, in which an engineered environment allows one to bypass the syndrome measurements. In this talk, I show how we use reinforcement learning to search for, and find, bosonic code spaces that can surpass break-even under experimentally feasible conditions. Bosonic codes are, for instance, available and utilized in some of the currently most promising and widespread quantum processors based on superconducting qubits. Surprisingly, when we enlarge the search space by relaxing the constraints of ideal quantum error correction, we find simple and robust code words that significantly surpass break-even while minimizing device overhead. This RL code not only reduces device complexity compared to other proposed encodings, but also outperforms its competitors in its capability to correct errors.