
Talk
Title Invited talk: Deep Learning Meets Physics
Author(s) Hochreiter, Sepp (speaker)
Corporate author(s) CERN. Geneva
Imprint 2018-06-04. - 1:06:56.
Series (Machine Learning)
(IML Machine Learning Working Group: sequential models)
Lecture note on 2018-06-04T15:10:00
Subject category Machine Learning
Abstract Deep Learning has emerged as one of the most successful fields of machine learning and artificial intelligence, with overwhelming success in industrial speech, text, and vision benchmarks. Consequently, it has evolved into the central field of research for IT giants like Google, Facebook, Microsoft, Baidu, and Amazon. Deep Learning is founded on novel neural network techniques, the recent availability of very fast computers, and massive data sets. At its core, Deep Learning discovers multiple levels of abstract representations of the input. The main obstacle to learning deep neural networks is the vanishing gradient problem. The vanishing gradient impedes credit assignment to the first layers of a deep network or to early elements of a sequence and therefore limits model selection. Major advances in Deep Learning can be related to avoiding the vanishing gradient, such as stacking, ReLUs, residual networks, highway networks, and LSTM. For Deep Learning, we suggested self-normalizing neural networks (SNNs), which automatically avoid the vanishing gradient. In unsupervised Deep Learning, generative adversarial networks (GANs) excel in generating realistic images, outperforming all previous approaches. We proved that a two time-scale update rule for training GANs converges under mild assumptions to a local Nash equilibrium. For deep reinforcement learning, we introduced a new approach to learning long delayed rewards, for which methods that estimate value functions, such as temporal difference, Monte Carlo, or Monte Carlo Tree Search, have failed. Current applications of Deep Learning in physics comprise the analysis of ATLAS data, e.g. to identify measurements of the Higgs boson, quantum chemistry, energy prediction without the Schrödinger equation and wave functions, and quantum state classification. On the other hand, methods from physics are used to describe Deep Learning systems. The Fokker-Planck equation describes the behavior of stochastic gradient descent, which finds flat minima in error surfaces. We use electric field equations to define a new GAN objective which can be proved, via the continuity equation, to have a single (global) Nash equilibrium.
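Note: The self-normalizing neural networks (SNNs) mentioned in the abstract are built on the SELU activation combined with LeCun-normal weight initialization. As a rough illustration of why this counteracts the vanishing gradient, the following minimal NumPy sketch (not material from the talk; all function names are my own) propagates standardized inputs through a deep stack of SELU layers and prints the activation mean and variance, which should stay close to the (0, 1) fixed point.

import numpy as np

# SELU constants from the SNN paper; together with LeCun-normal weights
# they drive layer activations toward zero mean and unit variance.
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    """Scaled exponential linear unit."""
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

def lecun_normal(fan_in, fan_out, rng):
    """Weights drawn with variance 1/fan_in, as assumed by the SNN analysis."""
    return rng.normal(0.0, np.sqrt(1.0 / fan_in), size=(fan_in, fan_out))

def forward(x, layer_sizes, rng):
    """Propagate x through a stack of SELU layers and report activation stats."""
    for fan_in, fan_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        w = lecun_normal(fan_in, fan_out, rng)
        x = selu(x @ w)
        print(f"layer {fan_in}->{fan_out}: mean={x.mean():+.3f}, var={x.var():.3f}")
    return x

rng = np.random.default_rng(0)
x = rng.normal(size=(1024, 128))   # standardized inputs
forward(x, [128] * 17, rng)        # 16 SELU layers: statistics remain near (0, 1)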
Copyright/License © 2018-2024 CERN
Submitted by paul.seyfert@cern.ch

 Record created 2018-06-05, last modified 2022-11-02


External links:
Talk details
Event details