Michael Douglas (Stony Brook University) - 13/12/2021, 16:00
Machine learning critically depends on high quality datasets. In a theoretical subject like string theory, we can generate datasets, but what sort of data should we generate and study? We discuss this question from several perspectives: mathematical (generating solutions), statistical (getting representative samples), and methodological (improving access to prior work).
Yang-Hui He (London Institute, Royal Institution) - 13/12/2021, 16:30
We propose a novel approach to the vacuum degeneracy problem of the string landscape, using few-shot machine learning to find an efficient measure of similarity amongst compactification scenarios. Taking a class of some one million Calabi-Yau manifolds as concrete examples, the few-shot paradigm with Siamese neural networks represents them as points in R^3. Using...
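The Siamese idea can be sketched in a few lines: two inputs pass through the same embedding network into R^3, and a contrastive loss pulls similar pairs together and pushes dissimilar ones apart. A minimal numpy toy follows; the feature dimension, the tiny twin MLP, and the loss margin are all illustrative assumptions, not the talk's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared "twin" embedding network: one hidden layer, outputs a point in R^3.
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)

def embed(x):
    """Map an 8-dim feature vector to a point in R^3 with shared weights."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

def contrastive_loss(x_a, x_b, same, margin=1.0):
    """Pull similar pairs together; push dissimilar pairs at least `margin` apart."""
    d = np.linalg.norm(embed(x_a) - embed(x_b))
    return d**2 if same else max(margin - d, 0.0) ** 2

x1, x2 = rng.normal(size=8), rng.normal(size=8)
print(embed(x1).shape)                       # (3,)
print(contrastive_loss(x1, x1, same=True))   # 0.0 for identical inputs
```

Because both inputs share the same weights, the learned distance in R^3 is a genuine similarity measure between inputs, which is what makes the few-shot setting workable.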
Sven Krippendorf (LMU Munich) - 13/12/2021, 17:00
In the first part I discuss methods for identifying the symmetries of a system without requiring prior knowledge of those symmetries. In the second part I discuss how to find a Lax pair/connection associated with integrable systems.
Sanjaye Ramgoolam (Queen Mary, University of London) - 13/12/2021, 18:00
I give an introduction to the Linguistic Matrix Theory programme, where permutation invariant random matrix theory is developed for applications to matrix data arising in compositional distributional semantics. Techniques from distributional semantics produce ensembles of matrices and it is argued that the relevant semantic information has an invariance under permutations. The general...
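The central assumption, that the meaningful observables of such matrix ensembles are invariant under a simultaneous permutation of rows and columns, M -> P M P^T, is easy to check numerically. The sketch below uses a few generic invariants of this kind (trace, sum of entries, and friends), not the talk's full list.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(5, 5))          # one matrix drawn from an ensemble

def invariants(M):
    """A few permutation-invariant observables: quantities unchanged
    under M -> P M P^T for any permutation matrix P."""
    return np.array([M.trace(), (M @ M).trace(), M.sum(), (M * M).sum()])

# A random permutation acting simultaneously on rows and columns.
perm = rng.permutation(5)
P = np.eye(5)[perm]
M_perm = P @ M @ P.T

print(np.allclose(invariants(M), invariants(M_perm)))  # True
```

Averages of such invariants over the ensemble are exactly the quantities a permutation-invariant random matrix theory predicts.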
Edward Hirst (City, University of London) - 13/12/2021, 18:30
We examine the classic database of Calabi-Yau hypersurfaces in weighted P4s with tools from supervised and unsupervised machine learning. Surprising linear behaviour arises with respect to the Calabi-Yau Hodge numbers, with a natural clustering of the spaces. In addition, simple supervised methods learn to identify weights which produce spaces with CY hypersurfaces, with improved...
Koji Hashimoto (Kyoto University) - 14/12/2021, 16:00
Bulk reconstruction is a key idea for revealing the mechanism of the AdS/CFT correspondence, and various methods have been proposed to solve this inverse problem. We use deep learning, identifying the neural network itself with the emergent geometry, to reconstruct the bulk. Lattice QCD data such as the chiral condensate, hadron spectra or Wilson loops are used as input data to reconstruct the emergent geometry...
Daniel Roberts (MIT) - 14/12/2021, 16:30
Deep learning is an exciting approach to modern artificial intelligence based on artificial neural networks. The goal of this talk is to provide a blueprint — using tools from physics — for theoretically analyzing deep neural networks of practical relevance. This task will encompass both understanding the statistics of initialized deep networks and determining the training dynamics of such an...
James Halverson (Northeastern University) - 14/12/2021, 17:00
In this talk I'll discuss aspects of using neural networks to design and define quantum field theories. As this approach is generally non-Lagrangian, it requires rethinking some things. Interactions will arise from breaking the Central Limit Theorem, i.e. from both 1/N-corrections and non-independence of neurons. Symmetries will play a crucial role, a duality will arise, and Gaussian...
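The statement that interactions arise from breaking the Central Limit Theorem can be seen in a small numerical experiment (a generic toy, not the talk's construction): the output of a random one-hidden-layer network at a fixed input is a sum of N independent terms, so its excess kurtosis, a proxy for the connected four-point function, decays as 1/N.

```python
import numpy as np

rng = np.random.default_rng(0)

def output_samples(width, n_draws, x=1.0):
    """Sample f(x) = (1/sqrt(N)) * sum_i v_i tanh(w_i x) over random networks."""
    w = rng.normal(size=(n_draws, width))
    v = rng.normal(size=(n_draws, width))
    return (v * np.tanh(w * x)).sum(axis=1) / np.sqrt(width)

def excess_kurtosis(f):
    """Connected 4-point function over (2-point)^2: zero for a Gaussian."""
    f = f - f.mean()
    return (f**4).mean() / (f**2).mean() ** 2 - 3.0

narrow = excess_kurtosis(output_samples(width=5, n_draws=100_000))
wide = excess_kurtosis(output_samples(width=100, n_draws=100_000))
print(narrow, wide)   # non-Gaussianity is visibly larger at small width
```

In the infinite-width limit the kurtosis vanishes and the ensemble is a free (Gaussian) theory; the finite-width remainder is exactly the interaction the NN-QFT picture organizes in 1/N.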
Anindita Maiti (Northeastern University) - 14/12/2021, 18:00
We use a duality between parameter space and function space to study ensembles of Neural Networks. Symmetries of the NN action can be inferred from invariance of its correlation functions, computed in parameter space. This mechanism, which we call ‘symmetry-via-duality,’ utilizes a judicious choice of architecture and parameter distribution to ensure invariant network actions, even when their...
Greg Yang (Microsoft Research) - 14/12/2021, 18:30
Hyperparameter tuning in deep learning is an expensive process, prohibitively so for neural networks (NNs) with billions of parameters that often can only be trained once. We show that, in the recently discovered Maximal Update Parametrization (µP), many optimal hyperparameters remain stable even as model size changes. Using this insight, for example, we are able to re-tune the...
Babak Haghighat (Tsinghua University) - 15/12/2021, 16:00
In this talk, we present recent results about the capability of restricted Boltzmann machines (RBMs) to find solutions for the Kitaev honeycomb model with periodic boundary conditions. We start with a review of non-abelian topological phases of matter and their importance for a scheme of quantum computation known as “topological quantum computation”. We then proceed to introduce the Kitaev...
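Part of why RBMs make tractable wavefunction ansätze is that the hidden spins can be summed out analytically, leaving a closed-form amplitude. A small self-contained check of that identity, for a generic RBM (not the talk's Kitaev-model setup):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
n_vis, n_hid = 4, 3
a = rng.normal(size=n_vis)          # visible biases
b = rng.normal(size=n_hid)          # hidden biases
W = rng.normal(size=(n_vis, n_hid)) # couplings

def psi_closed(s):
    """RBM amplitude with hidden spins summed out analytically."""
    theta = b + s @ W
    return np.exp(a @ s) * np.prod(2 * np.cosh(theta))

def psi_brute(s):
    """Same amplitude by explicit summation over all 2^n_hid hidden configs."""
    total = 0.0
    for h in product([-1, 1], repeat=n_hid):
        h = np.array(h)
        total += np.exp(a @ s + b @ h + s @ W @ h)
    return total

s = np.array([1, -1, 1, 1])
print(np.isclose(psi_closed(s), psi_brute(s)))  # True
```

The product-of-cosh form is what gets optimized when an RBM is trained to represent a ground state, since it avoids the exponential sum over hidden units.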
Costis Papageorgakis (Queen Mary, University of London) - 15/12/2021, 16:30
I will introduce a novel numerical approach for solving the conformal-bootstrap equations with Reinforcement Learning. I will apply this to the case of two-dimensional CFTs, successfully identifying well-known theories like the 2D Ising model and the 2D CFT of a compact scalar, but the method can be used to study arbitrary (unitary or non-unitary) CFTs in any spacetime dimension.
Harold Erbin (MIT) - 15/12/2021, 17:00
In a recent work, Halverson, Maiti and Stoner proposed a description of neural networks in terms of a quantum field theory (dubbed NN-QFT correspondence). The infinite-width limit is mapped to a free field theory while finite N corrections are taken into account by interactions. In this talk, after reviewing the correspondence, I will derive non-perturbative renormalization group equations. An...
Mark Hughes (Brigham Young University) - 15/12/2021, 18:00
Knots in 3-dimensional space form an infinite dataset whose structure is not yet well understood. Recently, techniques from machine learning have been applied to knots in an effort to better understand their topology; however, so far these approaches have mainly involved techniques from supervised and reinforcement learning. In this talk I will outline an approach to using generative adversarial...
Arjun Kar (University of British Columbia) - 15/12/2021, 18:30
We discuss correlations between the Jones polynomial and other knot invariants uncovered using deep neural networks. Some of these correlations are explainable in the sense that there are ideas in knot or gauge theory which, together with interpretable machine learning techniques, can be used to reverse-engineer the function computed by the network. After briefly reviewing a correlation...
Challenger Mishra (Cambridge University) - 16/12/2021, 16:00
Lara Anderson (Virginia Tech) - 16/12/2021, 16:30
Fabian Ruehle (Northeastern University) - 16/12/2021, 17:00
I will introduce a Tensorflow package for sampling points and computing metrics of string compactification spaces of SU(3) holonomy or SU(3) structure. We vastly extend previous work in this area, allowing the methods to be applied to any Kreuzer-Skarke (KS) Calabi-Yau or CICY. While the extension to CICYs is rather straightforward, toric varieties require more work. I will first explain how...
Anthony Ashmore (University of Chicago) - 16/12/2021, 18:00
Calabi-Yau manifolds have played a key role in both mathematics and physics, and are particularly important for deriving realistic models of particle physics from string theory. Since explicit metrics on these spaces are not known, we have resorted to numerical methods, and now have a variety of techniques to find approximate metrics. I will present recent work on what one can do with these numerical...
Gary Shiu (University of Wisconsin) - 16/12/2021, 18:30
In this talk, I’ll discuss how genetic algorithms and reinforcement learning can be used as complementary approaches to search for optimal solutions in string theory and to discover structure of the string landscape, based on 1907.10072 and 2111.11466. I’ll also present ongoing work with Gregory Loges on breeding realistic D-brane models.
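A genetic algorithm in its simplest form looks like the toy below: selection of the fittest, crossover of parents, and random mutation. The fitness here (counting ones in a bitstring) is a stand-in for a genuine model-building score such as matching a particle spectrum; everything in this sketch is illustrative, not the talks' actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
L, POP, GENS, MUT = 20, 40, 60, 0.02

def fitness(pop):
    """Toy objective: number of ones; real searches score e.g. particle spectra."""
    return pop.sum(axis=1)

pop = rng.integers(0, 2, size=(POP, L))
for _ in range(GENS):
    # Selection: keep the fitter half of the population as parents.
    parents = pop[np.argsort(fitness(pop))][POP // 2:]
    # Crossover: splice random parent pairs at a random cut point.
    children = []
    for _ in range(POP - len(parents)):
        p1, p2 = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, L)
        children.append(np.concatenate([p1[:cut], p2[cut:]]))
    pop = np.vstack([parents, children])
    # Mutation: flip each bit with small probability.
    pop ^= (rng.random(pop.shape) < MUT).astype(pop.dtype)

print(fitness(pop).max())   # should reach or approach the optimum L = 20
```

The appeal for landscape searches is that crossover propagates good "genes" (partial solutions) through the population, so structure in the fitness landscape is exploited rather than rediscovered point by point.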
Miranda Cheng (University of Amsterdam) - 17/12/2021, 16:00
Machine learning has the potential to become, among other things, a major computational tool in physics, making possible what previously was not. In this talk I focus on one specific example of using this new tool on concrete computational problems. I will summarise my recent paper (2110.02673) with de Haan, Rainone, and Bondesan, where we use a continuous flow model to help ameliorate the numerical...
Robin Schneider (Uppsala University) - 17/12/2021, 16:30
In this talk I'll demonstrate how to find numerical approximations of the unique Ricci-flat metric on Calabi-Yau manifolds with the new open-source package cymetric. In a first step, points and their integration weights are sampled from a given Calabi-Yau manifold. In a second step, a neural network is trained to learn the exact correction to the Fubini-Study metric. A Jupyter...
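The role of the integration weights in the first step can be illustrated with a generic importance-sampling toy (not the package's actual sampling routine): points drawn from a convenient distribution q receive weights p/q, so that weighted averages reproduce integrals with respect to the target measure p.

```python
import numpy as np

rng = np.random.default_rng(3)

# Target: integrate f over [0, 1] against the normalized measure p(x) = 2x,
# but draw points from the uniform distribution q(x) = 1 instead.
f = lambda x: x**2
x = rng.random(200_000)
weights = 2 * x                      # w(x) = p(x) / q(x)

estimate = np.mean(weights * f(x))   # approximates the integral of f(x) p(x) dx = 1/2
print(abs(estimate - 0.5) < 0.01)    # True
```

On a Calabi-Yau the same logic applies with p the known volume measure and q the distribution the sampling algorithm actually produces; the weights make Monte Carlo integrals of curvature losses unbiased.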
Per Berglund (University of New Hampshire) - 17/12/2021, 17:00
Andre Lukas (Oxford University) - 17/12/2021, 18:00
We present work applying reinforcement learning and genetic algorithms to string model building, specifically to heterotic Calabi-Yau models with monad bundles. Both methods are found to be highly efficient in identifying phenomenologically attractive three-family models, in cases where systematic scans are not feasible. For monads on the bi-cubic Calabi-Yau either method facilitates a...
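On the reinforcement-learning side, the simplest concrete instance is tabular Q-learning on a toy search space (a generic illustration, not the talk's environment): an agent on a short line of states learns, from a reward at one terminal state only, to walk toward it.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, ACTIONS = 6, (-1, 1)        # states 0..5; reward only on reaching state 5
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(500):                  # episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action choice, with random tie-breaking.
        explore = rng.random() < eps or Q[s].max() == Q[s].min()
        a = rng.integers(2) if explore else int(Q[s].argmax())
        s_next = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: bootstrap on the best action at the next state.
        Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

# The learned greedy policy moves right from every non-terminal state.
print([int(Q[s].argmax()) for s in range(N_STATES - 1)])
```

In string model building the states are partial model specifications, the actions modify them, and the reward encodes phenomenological viability; the tabular Q above is then replaced by a neural network.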
Eva Silverstein (Stanford University) - 17/12/2021, 18:30
We introduce a novel framework for optimization based on energy-conserving Hamiltonian dynamics in a strongly mixing (chaotic) regime. The prototype is a discretization of Born-Infeld dynamics, with a relativistic speed limit given by the objective (loss) function. This class of frictionless, energy-conserving optimizers proceeds unobstructed until stopping at the desired minimum, whose basin...
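A one-dimensional toy of the mechanism, in my own sketch with an assumed Hamiltonian H = V(x) * sqrt(1 + p^2) (an objective-dependent speed limit in the same spirit, not necessarily the talk's exact Born-Infeld discretization): energy conservation forces |dx/dt| < V(x), so the frictionless trajectory slows down precisely where the loss is small.

```python
import numpy as np

V = lambda x: x**2 + 0.01            # objective (loss): positive, minimum near x = 0
dV = lambda x: 2 * x

def rhs(state):
    """Hamiltonian flow of H = V(x) * sqrt(1 + p^2)."""
    x, p = state
    gam = np.sqrt(1 + p**2)
    return np.array([V(x) * p / gam,   # dx/dt, bounded by V(x): the speed limit
                     -dV(x) * gam])    # dp/dt

def rk4_step(state, dt):
    k1 = rhs(state)
    k2 = rhs(state + dt / 2 * k1)
    k3 = rhs(state + dt / 2 * k2)
    k4 = rhs(state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 0.0])          # start at x = 1, at rest
E0 = V(state[0]) * np.sqrt(1 + state[1] ** 2)
max_drift, max_speed_ratio = 0.0, 0.0
for _ in range(20_000):
    state = rk4_step(state, 1e-3)
    x, p = state
    E = V(x) * np.sqrt(1 + p**2)
    max_drift = max(max_drift, abs(E - E0) / E0)
    max_speed_ratio = max(max_speed_ratio, abs(p) / np.sqrt(1 + p**2))

print(max_drift, max_speed_ratio)     # energy conserved; |dx/dt| / V(x) stays below 1
```

In one dimension the conserved energy makes the particle oscillate rather than stop; the stopping behaviour described in the talk relies on chaotic mixing in many dimensions, which this toy deliberately omits.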