Speaker
Description
The Manifold Hypothesis (MH) states that naturally occurring, real-world data is intrinsically low-dimensional, and thus provides a guideline for the success of most machine learning techniques. Manifold learning is a sub-field of machine learning that aims to reduce dimensionality and recover the relevant geometry underlying high-dimensional datasets. Data analysis tasks involving tracking, clustering, anomaly/signal detection and classification are prime applications of manifold learning, since they are most effectively solved once the data is transformed into a low-dimensional and well-organized representation. Through the MH, the applications extend further: Generative Adversarial Networks are widely used for fast simulations, and their training and stability issues are directly linked to the confinement of data to low-dimensional manifolds; a priori dimensionality reduction is thus an oft-proposed regularization step. The LHC requires such data analyses at scales so demanding that attention has recently turned to quantum computing; however, the effective formulation of quantum algorithms that achieve a significant speed-up is highly non-trivial. We recently showed that manifold learning can be achieved with purely quantum primitives: evolving certain localized wavepackets under data-derived Hamiltonian dynamics produces states localized along geodesics of the latent space; taking expectations of suitable observables then yields geodesic coordinates that transform the data into latent degrees of freedom. We propose to use these results to devise quantum algorithms for manifold learning and the subsequent data analysis tasks required by particle physics experiments at the LHC.
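The wavepacket-evolution idea can be sketched with a small classical simulation: build a graph Laplacian from a Gaussian kernel on the data as the data-derived Hamiltonian, evolve a localized wavepacket unitarily, and read off expectations of the position observables. Everything here (the circle dataset, the kernel bandwidth `eps`, the phase kick, the evolution times) is an illustrative assumption, not the proposal's actual construction or a quantum implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
theta = np.sort(rng.uniform(0, 2 * np.pi, n))
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # data sampled on a circle

# Data-derived Hamiltonian: graph Laplacian of a Gaussian-kernel affinity matrix
eps = 0.05  # illustrative kernel bandwidth
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
W = np.exp(-D2 / eps)
H = np.diag(W.sum(1)) - W  # symmetric, so evolution exp(-iHt) is unitary

# Localized initial wavepacket centered on data point 0; a phase "kick"
# gives it momentum along the angular coordinate of the circle
psi0 = np.exp(-D2[0] / (4 * eps)).astype(complex)
psi0 *= np.exp(1j * 20 * theta)
psi0 /= np.linalg.norm(psi0)

# Unitary evolution via the eigendecomposition of H
evals, evecs = np.linalg.eigh(H)

def evolve(psi, t):
    """Apply exp(-iHt) to the state psi."""
    return evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi))

# Expectation of the ambient position observables under |psi|^2;
# for a geodesic-following packet these trace a path along the manifold
for t in [0.0, 0.5, 1.0]:
    psi = evolve(psi0, t)
    prob = np.abs(psi) ** 2
    print(f"t={t}: <x>, <y> =", prob @ X)
```

The expectation values `prob @ X` are the "geodesic coordinates" step in miniature: a purely quantum pipeline would prepare the wavepacket, evolve it under the Hamiltonian, and estimate such expectations by measurement.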
CERN group or section submitting a project proposal: CERN Openlab