
Quantum Technology Initiative Journal Club

Europe/Zurich
513/R-070 - Openlab Space (CERN)
Conveners: Alice Barthe (Leiden University (NL)), Michele Grossi (CERN), Su Yeon Chang (EPFL - Ecole Polytechnique Federale Lausanne (CH))
Description

Weekly Journal Club meetings organised in the framework of the CERN Quantum Technology Initiative (QTI) to present and discuss scientific papers in the field of quantum science and technology. The goal is to help researchers keep track of current findings and walk away with ideas for their own research. Some previous knowledge of quantum physics is helpful, but not required to follow the talks.

To propose a paper for discussion, contact: michele.grossi@cern.ch

Zoom Meeting ID
63779300431
Host
Michele Grossi
Alternative hosts
Su Yeon Chang, Matteo Robbiati
Passcode
55361000
    • 16:00 - 17:00
      CERN QTI Journal Club
      Convener: Dr Michele Grossi (CERN)
      • 16:00
        Elies Gil-Fuster 50m

        TITLE: On the relation between trainability and dequantization of variational quantum learning models

        Link to the paper: https://arxiv.org/abs/2406.07072

        ABSTRACT:
        The quest for successful variational quantum machine learning (QML) relies on the design of suitable parametrized quantum circuits (PQCs), as analogues to neural networks in classical machine learning. Successful QML models must fulfill the properties of trainability and non-dequantization, among others. Recent works have highlighted an intricate interplay between the trainability and dequantization of such models, which is still unresolved. In this work we contribute to this debate from the perspective of machine learning, proving a number of results that identify, among other things, when trainability and non-dequantization are not mutually exclusive. We begin by providing somewhat broader definitions of the relevant concepts than those found in the literature, which are operationally motivated and consistent with prior art. With these precise definitions given and motivated, we then study the relation between trainability and dequantization of variational QML. Next, we discuss degrees of "variationalness" of QML models, distinguishing between models like the hardware-efficient ansatz and quantum kernel methods. Finally, we introduce recipes for building PQC-based QML models that are both trainable and non-dequantizable, corresponding to different degrees of variationalness. While we do not address the practical utility of such models, our work points toward a way of finding more general constructions for which applications may become feasible.

        Speaker: Elies Gil-Fuster (Freie Universität Berlin)
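
        For attendees who want a hands-on feel for the trainability side of the discussion, below is a minimal sketch (assuming PennyLane is installed; the ansatz depth, sample count, and probe are illustrative choices, not constructions from the paper). It builds a hardware-efficient ansatz, one of the PQC families named in the abstract, and estimates the variance of a single gradient component over random initialisations; variance that vanishes exponentially with qubit count is the barren-plateau signature of an untrainable model.

        # Minimal sketch of a barren-plateau probe for a hardware-efficient
        # ansatz. Assumes PennyLane; all hyperparameters are illustrative.
        import numpy as np
        import pennylane as qml
        from pennylane import numpy as pnp

        def gradient_variance(n_qubits, n_layers=4, n_samples=50):
            dev = qml.device("default.qubit", wires=n_qubits)

            @qml.qnode(dev)
            def cost(params):
                # Hardware-efficient ansatz: RY rotations + CNOT ladder.
                for layer in range(n_layers):
                    for w in range(n_qubits):
                        qml.RY(params[layer, w], wires=w)
                    for w in range(n_qubits - 1):
                        qml.CNOT(wires=[w, w + 1])
                return qml.expval(qml.PauliZ(0))

            grad_fn = qml.grad(cost)
            rng = np.random.default_rng(0)
            samples = []
            for _ in range(n_samples):
                params = pnp.array(
                    rng.uniform(0, 2 * np.pi, (n_layers, n_qubits)),
                    requires_grad=True,
                )
                # Track one fixed gradient component across initialisations.
                samples.append(grad_fn(params)[0, 0])
            return np.var(samples)

        # Decaying variance with qubit count indicates a barren plateau,
        # i.e. untrainability in the sense discussed in the paper.
        for n in (2, 4, 6, 8):
            print(n, gradient_variance(n))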