Quantum Technology Initiative Journal Club

Europe/Zurich
513/R-070 - Openlab Space (CERN)

Michele Grossi (CERN)
Description

Weekly Journal Club meetings organised in the framework of the CERN Quantum Technology Initiative (QTI) to present and discuss scientific papers in the field of quantum science and technology. The goal is to help researchers keep track of current findings and walk away with ideas for their own research. Some previous knowledge of quantum physics would be helpful, but is not required to follow the talks.

To propose a paper for discussion, contact: michele.grossi@cern.ch

Zoom Meeting ID
63779300431
Host
Michele Grossi
Alternative host
Matteo Robbiati
Passcode
55361000
    • 16:00 - 17:00
      CERN QTI Journal Club
      Convener: Dr Michele Grossi (CERN)
      • 16:00
        Junyong Lee (Yonsei University, Korea), Jeihee Cho (Yonsei University, Korea) 40m

        TITLE: Q-MAML: Quantum Model-Agnostic Meta-Learning for Variational Quantum Algorithms

        Link: https://arxiv.org/abs/2501.05906

        Abstract: In the Noisy Intermediate-Scale Quantum (NISQ) era, using variational quantum algorithms (VQAs) to solve optimization problems has become a key application. However, these algorithms face significant challenges, such as choosing an effective initial set of parameters and the limited quantum processing time that restricts the number of optimization iterations. In this study, we introduce a new framework for optimizing parameterized quantum circuits (PQCs) that employs a classical optimizer, inspired by the Model-Agnostic Meta-Learning (MAML) technique. This approach aims to achieve a better parameter initialization that ensures fast convergence. Our framework features a classical neural network, called the Learner, which interacts with a PQC using the output of the Learner as initial parameters. During the pre-training phase, the Learner is trained with a meta-objective based on the quantum circuit cost function. In the adaptation phase, the framework requires only a few PQC updates to converge to a more accurate value, while the Learner remains unchanged. This method is highly adaptable and extends effectively to various Hamiltonian optimization problems. We validate our approach through experiments, including distribution function mapping and optimization of the Heisenberg XYZ Hamiltonian. The results imply that the Learner successfully estimates initial parameters that generalize across the problem space, enabling fast adaptation.

        Speakers: Dr Jeihee Cho (Yonsei University), Dr Junyong Lee (Yonsei University)
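
        A minimal, illustrative sketch of the Q-MAML workflow summarised in the abstract (not the authors' code): a toy surrogate stands in for the PQC cost function, and a small NumPy network plays the role of the Learner, mapping a problem description (e.g. Hamiltonian couplings) to initial circuit parameters. The dimensions, learning rates, and the circuit_cost surrogate below are assumptions for illustration only.

        # Sketch of the Q-MAML idea: pre-train a classical "Learner" on a meta-objective
        # (the circuit cost evaluated at the parameters it proposes), then adapt with only
        # a few updates of the circuit parameters while the Learner stays frozen.
        import numpy as np

        rng = np.random.default_rng(0)
        N_PROBLEM, N_PARAMS = 3, 8   # e.g. (Jx, Jy, Jz) couplings -> 8 circuit angles (illustrative)

        def circuit_cost(theta, problem):
            """Toy surrogate for the quantum circuit cost function (not a real PQC)."""
            target = np.tile(problem, N_PARAMS // N_PROBLEM + 1)[:N_PARAMS]
            return float(np.sum((np.sin(theta) - np.tanh(target)) ** 2))

        def learner(problem, W1, b1, W2, b2):
            """Classical Learner network: problem description -> initial PQC parameters."""
            return np.tanh(problem @ W1 + b1) @ W2 + b2

        def grad(f, x, eps=1e-4):
            """Finite-difference gradient, used for brevity instead of backprop/parameter-shift."""
            g = np.zeros_like(x)
            for i in range(x.size):
                d = np.zeros_like(x)
                d[i] = eps
                g[i] = (f(x + d) - f(x - d)) / (2 * eps)
            return g

        # Pre-training phase: train the Learner on the meta-objective across sampled problems.
        W1, b1 = 0.1 * rng.standard_normal((N_PROBLEM, 16)), np.zeros(16)
        W2, b2 = 0.1 * rng.standard_normal((16, N_PARAMS)), np.zeros(N_PARAMS)
        for step in range(200):
            problem = rng.uniform(-1.0, 1.0, N_PROBLEM)   # sample a training Hamiltonian
            for w in (W1, b1, W2, b2):                    # crude meta-gradient step per weight tensor
                def meta_obj(flat, w=w):
                    saved = w.copy()
                    w[...] = flat.reshape(w.shape)
                    cost = circuit_cost(learner(problem, W1, b1, W2, b2), problem)
                    w[...] = saved
                    return cost
                w -= 0.05 * grad(meta_obj, w.ravel().copy()).reshape(w.shape)

        # Adaptation phase: the Learner stays frozen; only a few PQC parameter updates are made.
        new_problem = rng.uniform(-1.0, 1.0, N_PROBLEM)
        theta = learner(new_problem, W1, b1, W2, b2)      # warm start proposed by the Learner
        for step in range(5):                             # "only a few PQC updates"
            theta = theta - 0.2 * grad(lambda t: circuit_cost(t, new_problem), theta)
        print("adapted cost:", circuit_cost(theta, new_problem))

        In a real implementation the surrogate cost would be replaced by the expectation value of the target Hamiltonian measured on the parameterized circuit, and the finite-difference gradients by backpropagation for the Learner and parameter-shift rules for the PQC.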