Speaker
Description
Quantum generative models have the potential to provide a quantum advantage, but their scalability is still in question. We investigate the barriers to training quantum generative models, focusing on exponential loss concentration. The interplay between explicit and implicit models and losses is explored, showing that explicit losses (e.g., the KL divergence) are untrainable. Maximum Mean Discrepancy, a commonly used implicit loss, can remain trainable with an appropriate choice of kernel. However, this trainability comes at the cost of spurious minima arising from the indistinguishability of high-order correlations. We also propose leveraging quantum computers themselves, which leads to a quantum fidelity-type loss. Lastly, data from high-energy-physics experiments is used to compare the performance of the different loss functions.
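For context (this is an illustrative sketch, not code from the talk), the kernel dependence mentioned above can be made concrete with the standard unbiased estimator of the squared Maximum Mean Discrepancy between model samples and data samples. The function names and the Gaussian-kernel bandwidth `sigma` below are placeholders; the "appropriate kernel choice" discussed in the abstract corresponds to how this kernel is selected.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between sample sets x of shape (m, d) and y of shape (n, d)."""
    sq_dists = (
        np.sum(x**2, axis=1)[:, None]
        + np.sum(y**2, axis=1)[None, :]
        - 2.0 * x @ y.T
    )
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2_unbiased(x, y, sigma=1.0):
    """Unbiased estimate of the squared MMD between the distributions generating x and y."""
    m, n = len(x), len(y)
    k_xx = gaussian_kernel(x, x, sigma)
    k_yy = gaussian_kernel(y, y, sigma)
    k_xy = gaussian_kernel(x, y, sigma)
    # Diagonal terms are excluded so the estimator is unbiased.
    term_xx = (k_xx.sum() - np.trace(k_xx)) / (m * (m - 1))
    term_yy = (k_yy.sum() - np.trace(k_yy)) / (n * (n - 1))
    term_xy = k_xy.sum() / (m * n)
    return term_xx + term_yy - 2.0 * term_xy
```

With samples drawn from a generative model and from the target data, this estimate approaches zero once the two distributions agree up to whatever the chosen kernel can resolve, which is where the spurious minima mentioned in the abstract originate.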
| Theoretical Work | Theory |
|---|---|