string_data_2021

Africa/Johannesburg
Talks will be hosted on Zoom. https://wits-za.zoom.us/j/96640116847?pwd=Snd4b0p2bHFhODBiVzIrUXRlU2sxUT09
Description

String Data hosts mathematicians, computer scientists and string theorists to present and exchange new ideas at the interface of machine learning, data science and fundamental physics.


Zoom and YouTube Live Stream

All the talks were hosted on Zoom and streamed live to YouTube.


YouTube Recordings

All of the talks were recorded and uploaded to YouTube. You can get the link to the YouTube recording from the list of contributions. 


Timezone

South Africa time converts as follows: 6 am to 10 am Pacific, 7 am to 11 am Mountain, 8 am to 12 pm Central, 9 am to 1 pm Eastern, 2 pm to 6 pm UK, 3 pm to 7 pm Central Europe, 7:30 pm to 11:30 pm India, 10 pm to 2 am China and Taiwan, 11 pm to 3 am Japan.
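
For participants who want to check a conversion directly, the short Python snippet below (standard-library zoneinfo, Python 3.9+) converts the 16:00 SAST session start to several of the zones above; the specific date is illustrative only, chosen in December so the US and Europe are on winter (standard) time, consistent with the table.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

# Illustrative date; sessions start at 16:00 South Africa time (SAST, UTC+2).
start = datetime(2021, 12, 13, 16, 0, tzinfo=ZoneInfo("Africa/Johannesburg"))

for label, zone in [("Pacific", "America/Los_Angeles"),
                    ("Eastern", "America/New_York"),
                    ("UK", "Europe/London"),
                    ("Central Europe", "Europe/Berlin"),
                    ("India", "Asia/Kolkata"),
                    ("China/Taiwan", "Asia/Shanghai"),
                    ("Japan", "Asia/Tokyo")]:
    print(f"{label:>14}: {start.astimezone(ZoneInfo(zone)):%H:%M %Z}")
```
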


Speaker List

Lara Anderson (Virginia Tech)
Anthony Ashmore (University of Chicago)
Per Berglund (University of New Hampshire)
Miranda Cheng (University of Amsterdam)
Michael Douglas (Stony Brook University)
Harold Erbin (MIT)
Babak Haghighat (Tsinghua University)
James Halverson (Northeastern University)
Koji Hashimoto (Kyoto University)
Yang-Hui He (London Institute, Royal Institution)
Edward Hirst (City, University of London)
Mark Hughes (Brigham Young University)
Arjun Kar (University of British Columbia)
Sven Krippendorf (LMU Munich)
Andre Lukas (Oxford University)
Anindita Maiti (Northeastern University)
Challenger Mishra (Cambridge University)
Costis Papageorgakis (Queen Mary, University of London)
Sanjaye Ramgoolam (Queen Mary, University of London)
Dan Roberts (MIT)
Fabian Ruehle (Northeastern University)
Robin Schneider (Uppsala University)
Gary Shiu (University of Wisconsin)
Eva Silverstein (Stanford University)
Greg Yang (Microsoft Research)


Local Organising Committee

Pallab Basu
Robert de Mello Koch
Kevin Goldstein
Vishnu Jejjala
Jonathan Shock
Hossein Yavartanoo


International Advisory Committee

David Berman
James Halverson
Sven Krippendorf
Brent D. Nelson
Fabian Ruehle


Previous Meeting

string_data_2020


 

Registration
Participants
  • Abhishek Dubey
  • Aditya Sharma
  • Adwait Gaikwad
  • Andreas Schachner
  • Anik Rudra
  • Anindita Maiti
  • Anthony Ashmore
  • Arghya Chattopadhyay
  • Arjun Kar
  • Ben Campbell
  • Benjamin Sung
  • Binh Ta
  • Bruno De Luca
  • Chandan Jana
  • Charlotte Kristjansen
  • Christopher Hughes
  • Claude Formanek
  • Damian Mayorga Pena
  • Daniel Kläwer
  • Dario Partipilo
  • David Berman
  • David Cyncynates
  • Dean Rance
  • Dvij Mankad
  • Dylan Feldner-Busztin
  • Enrique Escalante-Notario
  • Eva Silverstein
  • Fabian Ruehle
  • Fakhira Afzal
  • Frima Kalyuzhner
  • Gary Shiu
  • Gregory Loges
  • Hajime Otsuka
  • Hanzhi Jiang
  • Haoyu Sun
  • Harold Erbin
  • Hekta Dlamini
  • Helen Urgelles Pérez
  • Hosein Hashemi
  • Ignacio Portillo Castillo
  • Irvin Martinez
  • Ivonne Zavala
  • James Gray
  • Jeff Murugan
  • Jessica Craven
  • Jim Halverson
  • Joydeep Naskar
  • Junggi Yoon
  • Kaiwen Sun
  • Kaixuan Zhang
  • Keegan Stoner
  • Kevin Loo
  • Koji Hashimoto
  • Magdalena Larfors
  • Mehmet Demirtas
  • Mike Douglas
  • Mohammad Zaz
  • Moritz Muenchmeyer
  • Muhammad Shuraim
  • Parthiv Haldar
  • Paul Richmond
  • Percy Cáceres
  • Prafulla Oak
  • Rahul Toley
  • Rajeev Singh
  • Rak-Kyeong Seong
  • Riccardo Finotello
  • Rishi Raj
  • Robin Schneider
  • Ross Altman
  • Samuel Tovey
  • Sarthak Duary
  • Sayan Samanta
  • Shabeeb Alalawi
  • Shi-Bei Kong
  • Subhadeep Rakshit
  • Subham Dutta Chowdhury
  • Sukrut Mondkar
  • Surya Raghavendran
  • Sven Krippendorf
  • Taniya Mandal
  • Teflon Rabambi
  • Thomas Harvey
  • Vasileios Niarchos
  • Veronica Guidetti
  • Vishnu Jejjala
  • Xiaobin Li
  • Xin Wang
  • Yang-Hui He
  • Yidi Qi
  • Zach Wolpe
  • Zubair Bhatti
    • 16:00–16:30
      What is string data? 30m

      Machine learning critically depends on high quality datasets. In a theoretical subject like string theory, we can generate datasets, but what sort of data should we generate and study? We discuss this question from several perspectives: mathematical (generating solutions), statistical (getting representative samples), and methodological (improving access to prior work).

      Speaker: Michael Douglas (Stony Brook University)
    • 16:30–17:00
      The world in a grain of sand 30m

      We propose a novel approach to the vacuum degeneracy problem of the string landscape, using few-shot machine learning and an efficient measure of similarity amongst compactification scenarios. Using a class of around one million Calabi-Yau manifolds as concrete examples, the paradigm of few-shot machine learning and Siamese neural networks represents them as points in R^3. With these methods we can compress the search space for exceedingly rare manifolds to within one percent of the original data by training on only a few hundred data points, and we demonstrate how they may be applied to characterize 'typicality' for vacuum representatives. Joint work with Shailesh Lal and Zaid Zaz.

      Speaker: Yang-Hui He (London Institute, Royal Institution)
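
As a toy illustration of the few-shot, Siamese-network embedding described in this abstract, the sketch below trains a shared TensorFlow/Keras "tower" with a contrastive loss so that pairs of feature vectors (standing in for Calabi-Yau input data) are mapped to nearby or distant points in R^3. The input dimension, architecture, margin and random pair labels are placeholder assumptions, not the setup used in the talk.

```python
import numpy as np
import tensorflow as tf

INPUT_DIM, EMBED_DIM, MARGIN = 5, 3, 1.0   # placeholders, not the talk's setup

# Shared embedding network ("tower") mapping inputs to points in R^3.
tower = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(EMBED_DIM),
])
opt = tf.keras.optimizers.Adam(1e-3)

def contrastive_loss(xa, xb, same):
    """same = 1 for pairs labelled 'similar', 0 otherwise."""
    same = tf.convert_to_tensor(same, dtype=tf.float32)
    d = tf.norm(tower(xa) - tower(xb), axis=1)          # distance in R^3
    return tf.reduce_mean(same * tf.square(d)
                          + (1.0 - same) * tf.square(tf.nn.relu(MARGIN - d)))

# Toy few-shot training set: a few hundred labelled pairs of feature vectors.
xa = np.random.rand(256, INPUT_DIM).astype("float32")
xb = np.random.rand(256, INPUT_DIM).astype("float32")
same = np.random.randint(0, 2, 256).astype("float32")

for step in range(200):
    with tf.GradientTape() as tape:
        loss = contrastive_loss(xa, xb, same)
    grads = tape.gradient(loss, tower.trainable_variables)
    opt.apply_gradients(zip(grads, tower.trainable_variables))

points = tower(xa).numpy()   # embeddings in R^3, ready for clustering/plotting
```
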
    • 17:00–17:30
      ML to identify symmetries and integrability of physical systems 30m

      In the first part I discuss methods for identifying symmetries of a system without requiring prior knowledge of those symmetries. In the second part I discuss how to find a Lax pair/connection associated with integrable systems.

      Speaker: Sven Krippendorf (LMU Munich)
    • 18:00–18:30
      Permutation invariant random matrix theory and natural language data 30m

      I give an introduction to the Linguistic Matrix Theory programme, where permutation invariant random matrix theory is developed for applications to matrix data arising in compositional distributional semantics. Techniques from distributional semantics produce ensembles of matrices and it is argued that the relevant semantic information has an invariance under permutations. The general 13-parameter permutation invariant Gaussian matrix models are described. Techniques from symmetric group representation theory and quantum field theory allow the computation of expectation values of permutation invariant matrix observables (PIMOs). This is used to give evidence for approximate Gaussianity in the ensembles of matrices in compositional distributional semantics. Statistical distributions of PIMOs are applied to natural language tasks involving synonyms, antonyms, hypernyms and hyponyms.

      Speaker: Sanjaye Ramgoolam (Queen Mary, University of London)
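
To make "permutation invariant matrix observables" concrete, the numpy snippet below evaluates a handful of low-degree invariants of this type over a toy Gaussian matrix ensemble and applies a crude Gaussianity check (excess kurtosis). The particular observables and the ensemble are illustrative choices, not the 13-parameter models or language-derived matrices discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_SAMPLES = 10, 20000   # matrix size and ensemble size (illustrative)

# A few low-degree permutation invariant matrix observables (PIMOs): each is
# unchanged under M -> P M P^T for any permutation matrix P.
def pimos(M):
    return {
        "Tr(M)":             np.trace(M),
        "sum_ij M_ij":       M.sum(),
        "Tr(M^2)":           np.trace(M @ M),
        "sum_ij M_ij^2":     (M ** 2).sum(),
        "sum_i (row sum)^2": (M.sum(axis=1) ** 2).sum(),
    }

# Collect observable values over an ensemble of Gaussian random matrices.
values = {name: [] for name in pimos(np.zeros((D, D)))}
for _ in range(N_SAMPLES):
    M = rng.normal(size=(D, D))
    for name, val in pimos(M).items():
        values[name].append(val)

# Crude Gaussianity diagnostic: excess kurtosis is ~0 for Gaussian observables.
# (Observables linear in M are exactly Gaussian here; quadratic ones are not.)
for name, vals in values.items():
    v = np.asarray(vals)
    z = (v - v.mean()) / v.std()
    print(f"{name:20s} excess kurtosis = {np.mean(z ** 4) - 3:+.3f}")
```
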
    • 18:30–19:00
      Machine learning Calabi–Yau hypersurfaces 30m

      We examine the classic database of Calabi-Yau hypersurfaces in weighted P4s with tools from supervised and unsupervised machine learning. Surprising linear behaviour arises with respect to the Calabi-Yau Hodge numbers, with a natural clustering of the spaces. In addition, simple supervised methods learn to identify weights which produce spaces with CY hypersurfaces, with improved performance where the Hodge numbers take extreme values.

      Speaker: Edward Hirst (City, University of London)
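
The linear trends and clustering described above are the kind of structure that standard off-the-shelf tools expose; the sketch below shows the sort of pipeline involved (a linear fit of a Hodge number against weight-system data plus a k-means clustering), run here on randomly generated placeholder data rather than the actual weighted-P4 database.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Placeholder data only: real inputs would be the 5-component weight systems of
# weighted P^4s and the Hodge numbers of their Calabi-Yau hypersurfaces.
weights = rng.integers(1, 50, size=(1000, 5)).astype(float)
h11 = weights.sum(axis=1) * 0.3 + rng.normal(scale=2.0, size=1000)  # fake target

# Supervised side: how much of h11 does a simple linear model capture?
reg = LinearRegression().fit(weights, h11)
print("linear fit R^2:", reg.score(weights, h11))

# Unsupervised side: look for clusters in (weights, h11) feature space.
features = np.column_stack([weights, h11])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print("cluster sizes:", np.bincount(labels))
```
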
    • 19:00–20:00
      Discussion 1h
    • 16:00–16:30
      Deep learning and holographic QCD 30m

      Bulk reconstruction is a key idea for revealing the mechanism of the AdS/CFT correspondence, and various methods have been proposed to solve this inverse problem. We use deep learning, identifying the neural network with the emergent geometry, to reconstruct the bulk. Lattice QCD data such as the chiral condensate, hadron spectra or Wilson loops are used as input data to reconstruct the emergent bulk geometry. The requirement that the bulk geometry be a consistent solution of an Einstein-dilaton system then determines the bulk dilaton potential, completing the reconstruction program. We demonstrate the determination of the bulk system from QCD lattice and experimental data.

      Speaker: Koji Hashimoto (Kyoto University)
    • 16:30–17:00
      The principles of deep learning theory 30m

      Deep learning is an exciting approach to modern artificial intelligence based on artificial neural networks. The goal of this talk is to provide a blueprint, using tools from physics, for theoretically analyzing deep neural networks of practical relevance. This task will encompass both understanding the statistics of initialized deep networks and determining the training dynamics of such an ensemble when learning from data. This talk is based on the book "The Principles of Deep Learning Theory," co-authored with Sho Yaida and building on research in collaboration with Boris Hanin; it will be published next year by Cambridge University Press.

      Speaker: Daniel Roberts (MIT)
    • 17:00–17:30
      Building quantum field theories out of neurons 30m

      In this talk I'll discuss aspects of using neural networks to design and define quantum field theories. As this approach is generally non-Lagrangian, it requires rethinking some things. Interactions will arise from breaking the Central Limit Theorem, i.e. from both 1/N-corrections and non-independence of neurons. Symmetries will play a crucial role, a duality will arise, and Gaussian Lorentz-invariant quantum field theories will be constructed.

      Speaker: James Halverson (Northeastern University)
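
A quick numerical way to see the 1/N-corrections mentioned above is to estimate a connected four-point function over an ensemble of randomly initialised one-hidden-layer networks: at infinite width the output distribution is Gaussian and the connected correlator vanishes, while at finite width it does not. The snippet below is a minimal sketch of that check (with an arbitrary activation and a single input), not the construction presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_outputs(width, n_nets=100_000, x=1.0):
    """Outputs f(x) of n_nets randomly initialised one-hidden-layer ReLU networks,
    normalised so the two-point function stays finite as the width grows."""
    w = rng.normal(size=(n_nets, width))                 # input-to-hidden weights
    v = rng.normal(size=(n_nets, width))                 # hidden-to-output weights
    return (v * np.maximum(w * x, 0.0)).sum(axis=1) / np.sqrt(width)

for width in (2, 10, 50, 200):
    f = sample_outputs(width)
    g2 = np.mean(f ** 2)                                 # two-point function at x
    g4c = np.mean(f ** 4) - 3 * g2 ** 2                  # connected four-point function
    print(f"width {width:4d}:  G2 = {g2:.3f}   connected G4 = {g4c:+.4f}")
# The connected piece falls off roughly like 1/width, the '1/N-correction' above.
```
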
    • 18:00–18:30
      A tale of symmetry and duality in neural networks 30m

      We use a duality between parameter space and function space to study ensembles of Neural Networks. Symmetries of the NN action can be inferred from invariance of its correlation functions, computed in parameter space. This mechanism, which we call ‘symmetry-via-duality,’ utilizes a judicious choice of architecture and parameter distribution to ensure invariant network actions, even when their forms are unknown. Symmetries of input and output layers are analogous to space-time and internal symmetries, respectively, in quantum field theory. In simple experiments we find a correlation between symmetry breaking and training accuracy.

      Speaker: Anindita Maiti (Northeastern University)
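
The diagnostic "symmetries of the action show up as invariances of correlation functions" has a direct numerical analogue: estimate a two-point function over the initialisation ensemble, then repeat with the inputs acted on by a candidate transformation and compare. The snippet below does this for an SO(2) rotation of the inputs of a toy network ensemble with a rotation-invariant weight prior; it illustrates the diagnostic only and is not the setup from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
D, WIDTH, N_NETS = 2, 64, 100_000      # toy sizes, placeholders only

def two_point(x1, x2):
    """Ensemble average G2(x1, x2) = E[f(x1) f(x2)] over random one-layer networks
    whose input weights are drawn from a rotation-invariant Gaussian."""
    W = rng.normal(size=(N_NETS, WIDTH, D)) / np.sqrt(D)
    v = rng.normal(size=(N_NETS, WIDTH)) / np.sqrt(WIDTH)
    f1 = np.einsum("nw,nw->n", v, np.tanh(W @ x1))
    f2 = np.einsum("nw,nw->n", v, np.tanh(W @ x2))
    return np.mean(f1 * f2)

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])          # candidate SO(2) symmetry
x1, x2 = np.array([1.0, 0.3]), np.array([-0.5, 0.8])

print("G2(x1, x2)   =", two_point(x1, x2))
print("G2(Rx1, Rx2) =", two_point(R @ x1, R @ x2))       # equal up to sampling noise
```
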
    • 18:30–19:00
      Renormalizing the optimal hyperparameters of a neural network 30m

      Hyperparameter tuning in deep learning is an expensive process, prohibitively so for neural networks (NNs) with billions of parameters that often can only be trained once. We show that, in the recently discovered Maximal Update Parametrization (µP), many optimal hyperparameters remain stable even as model size changes. Using this insight, for example, we are able to re-tune the 6.7-billion-parameter model of GPT-3 and obtain performance comparable to the 13-billion-parameter model of GPT-3, effectively doubling the model size.
      In this context, there is a rich analogy to Wilsonian effective field theory. For example, if "coupling constants" in physics correspond to "optimal hyperparameters" in deep learning and "cutoff scale" corresponds to "model size", then we can say "µP is a renormalizable theory of neural networks." We explore this analogy further in the talk and leave open the question of whether methods from effective field theory itself can advance hyperparameter tuning.

      Speaker: Greg Yang (Microsoft Research)
    • 19:00–20:00
      Discussion 1h
    • 16:00–16:30
      Machine learning the Kitaev honeycomb model 30m

      In this talk, we present recent results about the capability of restricted Boltzmann machines (RBMs) to find solutions for the Kitaev honeycomb model with periodic boundary conditions. We start with a review of non-abelian topological phases of matter and their importance for the scheme of quantum computation known as "topological quantum computation". We then introduce the Kitaev honeycomb model and our method for finding representations of its ground and excited states using RBMs. Furthermore, we discuss the possibility of realizing anyons in the RBM and give an algorithm to build these anyonic excitations and braid them for possible future applications in quantum computation. Using the correspondence between topological field theories in (2+1)d and 2d CFTs, we propose an identification of our RBM states with the Moore-Read state and conformal blocks of the 2d Ising model.

      Speaker: Babak Haghighat (Tsinghua University)
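
For readers unfamiliar with RBM wavefunctions, the standard parametrisation assigns each spin configuration an amplitude in which the hidden units have been summed out in closed form. The numpy function below evaluates such an amplitude for a toy spin system; the sizes and random parameters are illustrative and unrelated to the honeycomb study.

```python
import numpy as np

rng = np.random.default_rng(0)
N_VISIBLE, N_HIDDEN = 8, 16     # toy sizes, not those of the honeycomb study

# Complex RBM parameters: visible biases a, hidden biases b, couplings W.
a = rng.normal(size=N_VISIBLE) + 1j * rng.normal(size=N_VISIBLE)
b = rng.normal(size=N_HIDDEN) + 1j * rng.normal(size=N_HIDDEN)
W = 0.1 * (rng.normal(size=(N_HIDDEN, N_VISIBLE))
           + 1j * rng.normal(size=(N_HIDDEN, N_VISIBLE)))

def rbm_amplitude(s):
    """Unnormalised wavefunction psi(s) for spins s_i = +/-1, with the hidden
    units summed out analytically: psi(s) = e^{a.s} prod_j 2 cosh(b_j + W_j.s)."""
    s = np.asarray(s, dtype=float)
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(b + W @ s))

# Example: amplitude ratio between two basis states; such ratios drive the
# Monte Carlo sampling and the variational optimisation of (a, b, W).
s1 = np.ones(N_VISIBLE)
s2 = s1.copy(); s2[0] = -1
print(abs(rbm_amplitude(s2) / rbm_amplitude(s1)))
```
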
    • 16:30–17:00
      Towards solving CFTs with reinforcement learning 30m

      I will introduce a novel numerical approach for solving the conformal-bootstrap equations with Reinforcement Learning. I will apply this to the case of two-dimensional CFTs, successfully identifying well-known theories like the 2D Ising model and the 2D CFT of a compact scalar, but the method can be used to study arbitrary (unitary or non-unitary) CFTs in any spacetime dimension.

      Speaker: Costis Papageorgakis (Queen Mary, University of London)
    • 17:00–17:30
      Non-perturbative renormalization for the neural network-QFT correspondence 30m

      In a recent work, Halverson, Maiti and Stoner proposed a description of neural networks in terms of a quantum field theory (dubbed NN-QFT correspondence). The infinite-width limit is mapped to a free field theory while finite N corrections are taken into account by interactions. In this talk, after reviewing the correspondence, I will derive non-perturbative renormalization group equations. An important difference with the usual analysis is that the effective (IR) 2-point function is known, while the microscopic (UV) 2-point function is not, which requires setting the problem with care. Finally, I will discuss preliminary numerical results for translation-invariant kernels. A major result is that changing the standard deviation of the neural network weight distribution can be interpreted as a renormalization flow in the space of networks.

      Speaker: Harold Erbin (MIT)
    • 18:00–18:30
      Using generative adversarial networks to produce knots with specified invariants 30m

      Knots in 3-dimensional space form an infinite dataset whose structure is not yet well understood. Recently, techniques from machine learning have been applied to knots in an effort to better understand their topology; so far, however, these approaches have mainly involved supervised and reinforcement learning. In this talk I will outline an approach that uses generative adversarial networks (GANs) to produce knots with specified invariant values. In particular, we show how to construct a GAN which takes as input information from the Jones polynomial and outputs a knot with the specified invariants. This is joint work with Amy Eubanks, Jared Slone, and Dan Ventura.

      Speaker: Mark Hughes (Brigham Young University)
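
As a schematic of the conditional-GAN setup described above (input: target invariant values, output: a knot), the TensorFlow/Keras sketch below conditions a generator and discriminator on a small vector of invariants and represents knots as fixed-length braid words. The braid encoding, the softmax relaxation of the discrete letters, the dimensions and the placeholder batch are all assumptions for illustration, not the construction from the talk.

```python
import numpy as np
import tensorflow as tf

COND_DIM, NOISE_DIM = 4, 16    # e.g. a few Jones-polynomial-derived numbers
BRAID_LEN, N_GENS = 10, 6      # braid word length and alphabet size (placeholders)

def make_generator():
    # (noise, target invariants) -> logits over braid letters at each position
    return tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(BRAID_LEN * N_GENS),
        tf.keras.layers.Reshape((BRAID_LEN, N_GENS)),
    ])

def make_discriminator():
    # (flattened braid word, target invariants) -> real/fake logit
    return tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1),
    ])

gen, disc = make_generator(), make_discriminator()
g_opt, d_opt = tf.keras.optimizers.Adam(1e-4), tf.keras.optimizers.Adam(1e-4)
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def train_step(real_braids, cond):
    """real_braids: (B, BRAID_LEN, N_GENS) one-hot braid words with invariants cond."""
    noise = tf.random.normal((tf.shape(cond)[0], NOISE_DIM))
    with tf.GradientTape() as gt, tf.GradientTape() as dt:
        # Softmax is a continuous relaxation of the discrete braid letters
        # (an assumption made here to keep the sketch differentiable end to end).
        fake = tf.nn.softmax(gen(tf.concat([noise, cond], axis=1)))
        d_real = disc(tf.concat([tf.reshape(real_braids, (-1, BRAID_LEN * N_GENS)), cond], axis=1))
        d_fake = disc(tf.concat([tf.reshape(fake, (-1, BRAID_LEN * N_GENS)), cond], axis=1))
        d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
        g_loss = bce(tf.ones_like(d_fake), d_fake)
    d_opt.apply_gradients(zip(dt.gradient(d_loss, disc.trainable_variables), disc.trainable_variables))
    g_opt.apply_gradients(zip(gt.gradient(g_loss, gen.trainable_variables), gen.trainable_variables))
    return d_loss, g_loss

# Placeholder batch standing in for (braid word, invariant) training pairs.
real = tf.one_hot(np.random.randint(0, N_GENS, (32, BRAID_LEN)), N_GENS)
cond = tf.random.normal((32, COND_DIM))
train_step(real, cond)
```
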
    • 18:30–19:00
      Deep learning knot invariants and gauge theory 30m

      We discuss correlations between the Jones polynomial and other knot invariants uncovered using deep neural networks. Some of these correlations are explainable in the sense that there are ideas in knot or gauge theory which, together with interpretable machine learning techniques, can be used to reverse-engineer the function computed by the network. After briefly reviewing a correlation between the Jones polynomial and hyperbolic volume that is explained by the volume conjecture and analytic continuation of Chern-Simons theory, we present new (as yet) unexplained correlations between the Jones polynomial, Rasmussen s-invariant, and slice genus. We speculate on a gauge theory explanation of these correlations using the fivebrane construction of Khovanov homology.

      Speaker: Arjun Kar (University of British Columbia)
    • 19:00–20:00
      Discussion 1h
    • 16:00–16:30
      Metrics and Machine Learning 30m
      Speaker: Challenger Mishra (Cambridge University)
    • 16:30–17:00
      SU(3) holonomy and SU(3) structure metrics and stable bundles 30m
      Speaker: Lara Anderson (Virginia Tech)
    • 17:00–17:30
      SU(3) holonomy/structure metrics for CICYs and toric varieties 30m

      I will introduce a TensorFlow package for sampling points and computing metrics of string compactification spaces of SU(3) holonomy or SU(3) structure. We vastly extended previous work in this area, allowing the methods to be applied to any Kreuzer-Skarke (KS) Calabi-Yau or CICY. While the extension to CICYs is rather straightforward, toric varieties require more work. I will first explain how to obtain the (non-Ricci-flat) analog of the Fubini-Study metric for KS models, and then how to sample points uniformly from these spaces using a powerful mathematical theorem.

      Speaker: Fabian Ruehle (Northeastern University)
    • 18:00–18:30
      Calabi–Yau metrics, CFTs, and random matrices 30m

      Calabi-Yau manifolds have played a key role in both mathematics and physics, and are particularly important for deriving realistic models of particle physics from string theory. Without the explicit metrics on these spaces, we have resorted to numerical methods, and now have a variety of techniques to find approximate metrics. I will present recent work on what one can do with these numerical metrics, focusing on the “data” of the spectrum of the Laplacian. Computing this for many different points in complex structure moduli space, we will see that the spectrum displays random matrix statistics, suggesting that certain 2d SCFTs are chaotic.

      Speaker: Anthony Ashmore (University of Chicago)
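
A standard diagnostic for "random matrix statistics" in a spectrum is the distribution of consecutive level-spacing ratios, which sidesteps the need to unfold the spectrum. The snippet below computes the mean ratio for a toy GOE spectrum and for uncorrelated (Poisson) levels, the two benchmarks against which a numerically computed Laplacian spectrum would be compared; it is a generic illustration, not the analysis from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_spacing_ratio(levels):
    """Mean of r_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1}) for sorted levels."""
    s = np.diff(np.sort(levels))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

# Benchmark 1: eigenvalues of a GOE random matrix (level repulsion).
N = 2000
A = rng.normal(size=(N, N))
goe = np.linalg.eigvalsh((A + A.T) / 2)

# Benchmark 2: independent, uncorrelated levels (Poisson statistics).
poisson = rng.uniform(size=N)

print("GOE      <r> ~", mean_spacing_ratio(goe))      # ~ 0.536 expected
print("Poisson  <r> ~", mean_spacing_ratio(poisson))  # ~ 0.386 expected
# A numerically computed Laplacian spectrum would be fed to the same function.
```
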
    • 18:30–19:00
      The string genome project 30m

      In this talk, I’ll discuss how genetic algorithms and reinforcement learning can be used as complementary approaches to search for optimal solutions in string theory and to discover the structure of the string landscape, based on 1907.10072 and 2111.11466. I’ll also present ongoing work with Gregory Loges on breeding realistic D-brane models.

      Speaker: Gary Shiu (University of Wisconsin)
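
For orientation, the skeleton below shows the basic loop common to genetic-algorithm searches of the kind mentioned above: a population of candidate bit-strings (standing in for discrete model data) is scored by a fitness function, and the fittest individuals are recombined and mutated. The fitness used here is a trivial stand-in, not a string-theoretic objective.

```python
import numpy as np

rng = np.random.default_rng(0)
POP, GENES, GENERATIONS, MUT_RATE = 100, 20, 50, 0.02

def fitness(pop):
    """Toy objective (count of 1s); a real search would score e.g. consistency
    conditions and phenomenological requirements of a candidate vacuum."""
    return pop.sum(axis=1)

pop = rng.integers(0, 2, size=(POP, GENES))
for gen in range(GENERATIONS):
    f = fitness(pop)
    # Selection: keep the top half as parents.
    parents = pop[np.argsort(f)[-POP // 2:]]
    # Crossover: splice random pairs of parents at a random cut point.
    children = []
    for _ in range(POP - len(parents)):
        p1, p2 = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, GENES)
        children.append(np.concatenate([p1[:cut], p2[cut:]]))
    pop = np.vstack([parents, children])
    # Mutation: flip each bit with small probability.
    flips = rng.random(pop.shape) < MUT_RATE
    pop = np.where(flips, 1 - pop, pop)

print("best fitness:", fitness(pop).max(), "of", GENES)
```
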
    • 19:00–20:00
      Discussion 1h
    • 16:00–16:30
      Machine learning for computing in quantum field theory 30m

      Machine learning has the potential to become, among other things, a major computational tool in physics, making possible what previously was not. In this talk I focus on one specific example of applying this new tool to concrete computational problems. I will summarise my recent paper (2110.02673) with de Haan, Rainone, and Bondesan, where we use a continuous flow model to help ameliorate the numerical difficulties of sampling in lattice field theories, which, for instance, hamper high-precision computations in lattice QCD.

      Speaker: Miranda Cheng (University of Amsterdam)
    • 16:30–17:00
      Hands-on Ricci-flat metrics with cymetric 30m

      In this talk I'll demonstrate how to find numerical approximations of the unique Ricci-flat metric on Calabi-Yau manifolds with the new open-source package cymetric. In a first step, points and their integration weights are sampled from a given Calabi-Yau manifold. In a second step, a neural network is trained to learn the exact correction to the Fubini-Study metric. A Jupyter notebook will be used to visualize this process.

      Speaker: Robin Schneider (Uppsala University)
    • 17:00–17:30
      On machine learning Kreuzer–Skarke Calabi–Yau manifolds 30m
      Speaker: Per Berglund (University of New Hampshire)
    • 18:00–18:30
      Exploring heterotic models with reinforcement learning and genetic algorithms 30m

      We present work applying reinforcement learning and genetic algorithms to string model building, specifically to heterotic Calabi-Yau models with monad bundles. Both methods are found to be highly efficient in identifying phenomenologically attractive three-family models, in cases where systematic scans are not feasible. For monads on the bi-cubic Calabi-Yau, either method facilitates a complete search of the environment and leads to similar sets of previously unknown three-family models.

      Speaker: Andre Lukas (Oxford University)
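
Complementing the genetic-algorithm skeleton above, the sketch below shows the kind of environment interface a reinforcement-learning agent interacts with in this style of search: a state encoding the current candidate (here an integer vector loosely standing in for monad bundle data), actions that modify one entry, and a reward from a stand-in scoring function. The encoding and reward are placeholders, not those of the heterotic study.

```python
import numpy as np

class ToyModelBuildingEnv:
    """Minimal RL environment: the agent edits an integer vector step by step
    and is rewarded when a stand-in 'consistency' score improves."""

    def __init__(self, length=6, low=-3, high=3, max_steps=50):
        self.length, self.low, self.high, self.max_steps = length, low, high, max_steps
        self.rng = np.random.default_rng(0)

    def reset(self):
        self.state = self.rng.integers(self.low, self.high + 1, self.length)
        self.steps = 0
        return self.state.copy()

    def _score(self, s):
        # Placeholder objective; a real environment would check anomaly/stability
        # conditions and count chiral families of the candidate model.
        return -float(np.abs(s.sum()) + np.abs((s ** 2).sum() - 10))

    def step(self, action):
        """action = (position, +1 or -1): increment or decrement one entry."""
        pos, delta = action
        before = self._score(self.state)
        self.state[pos] = np.clip(self.state[pos] + delta, self.low, self.high)
        self.steps += 1
        reward = self._score(self.state) - before
        done = self.steps >= self.max_steps or self._score(self.state) == 0.0
        return self.state.copy(), reward, done

# Random-policy rollout, standing in for a trained RL agent.
env = ToyModelBuildingEnv()
obs, done = env.reset(), False
while not done:
    action = (np.random.randint(env.length), np.random.choice([-1, 1]))
    obs, reward, done = env.step(action)
```
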
    • 18:30–19:00
      BI for AI 30m

      We introduce a novel framework for optimization based on energy-conserving Hamiltonian dynamics in a strongly mixing (chaotic) regime. The prototype is a discretization of Born-Infeld dynamics, with a relativistic speed limit given by the objective (loss) function. This class of frictionless, energy-conserving optimizers proceeds unobstructed until stopping at the desired minimum, whose basin of attraction contains the parametrically dominant contribution to the phase space volume of the system. Building from mathematical and physical studies of chaotic systems such as dynamical billiards, we formulate a specific algorithm with good performance on machine learning and PDE-solving tasks. We analyze this and the effects of noise both theoretically and experimentally, performing preliminary comparisons with other optimization algorithms containing friction such as Adam and stochastic gradient descent.

      Speaker: Eva Silverstein (Stanford University)
    • 19:00–20:00
      Discussion 1h