[845] Trainability barriers and opportunities in quantum generative modeling

5 Sept 2023, 19:00
1h 30m
Hall 1st floor

Poster · Quantum Computing (by NCCR SPIN) · Poster Session

Speaker

Sacha Lerch

Description

Quantum generative models have the potential to provide a quantum advantage, but their scalability remains in question. We investigate barriers to training quantum generative models, focusing on exponential loss concentration. We explore the interplay between explicit and implicit models and losses, showing that explicit losses (e.g., the KL divergence) are untrainable. The Maximum Mean Discrepancy (MMD), a commonly used implicit loss, can be trainable with an appropriate kernel choice; however, this trainability comes at the cost of spurious minima arising from the indistinguishability of high-order correlations. We further propose leveraging quantum computers to define a quantum fidelity-type loss. Lastly, we use data from high-energy-physics experiments to compare the performance of the different loss functions.
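To make the role of the kernel concrete, the following is a minimal NumPy sketch of the standard (biased) sample estimator of the squared MMD with a Gaussian (RBF) kernel. The bandwidth `sigma` is the free kernel parameter; the abstract's point is that trainability hinges on such kernel choices. This is an illustrative classical estimator, not the specific quantum loss studied in the work.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # RBF kernel matrix between two batches of samples (rows are samples).
    # The bandwidth sigma controls which correlation scales the kernel resolves.
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(p_samples, q_samples, sigma=1.0):
    # Biased estimate of the squared Maximum Mean Discrepancy between
    # distributions P and Q, computed from samples of each:
    # MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)].
    kxx = gaussian_kernel(p_samples, p_samples, sigma).mean()
    kyy = gaussian_kernel(q_samples, q_samples, sigma).mean()
    kxy = gaussian_kernel(p_samples, q_samples, sigma).mean()
    return kxx + kyy - 2.0 * kxy

# Identical sample sets give MMD^2 = 0; well-separated ones give a large value.
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 2))
print(mmd2(x, x))        # ~0: same distribution
print(mmd2(x, x + 5.0))  # clearly positive: shifted distribution
```

With a poorly chosen bandwidth (e.g. `sigma` far larger than the data spread), the kernel matrix flattens toward all-ones and the two prints become indistinguishable, which mirrors the trainability issues discussed above.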

Theoretical Work · Theory

Authors

Mr Manuel Rudolph, Sacha Lerch, Supanut Thanasilp (EPFL), Zoë Holmes (EPFL)

Co-authors

Dr Michele Grossi (CERN), Oriel Orphee Moira Kiss (Université de Genève (CH)), Dr Sofia Vallecorsa (CERN)
