IML Machine Learning Working Group: sequential models

500-1-001 - Main Auditorium (CERN)


Agenda under development. If you would like to present, please contact the organisers.

There is a live webcast for this event
    • 15:00 → 15:10
      News 10m
      Speakers: Lorenzo Moneta (CERN), Markus Stoye (Imperial College (GB)), Paul Seyfert (CERN), Rudiger Haake (CERN), Steven Schramm (Universite de Geneve (CH))
    • 15:10 → 15:55
      Invited talk: Deep Learning Meets Physics 45m

      Deep Learning has emerged as one of the most successful fields of machine learning and artificial intelligence, with overwhelming success on industrial speech, text, and vision benchmarks. Consequently, it has become the central field of research for IT giants like Google, Facebook, Microsoft, Baidu, and Amazon. Deep Learning is founded on novel neural network techniques, the recent availability of very fast computers, and massive data sets. At its core, Deep Learning discovers multiple levels of abstract representations of the input.

      The main obstacle to learning deep neural networks is the vanishing gradient problem. The vanishing gradient impedes credit assignment to the first layers of a deep network or to the early elements of a sequence, and therefore limits model selection. Major advances in Deep Learning can be attributed to techniques that avoid the vanishing gradient, such as stacking, ReLUs, residual networks, highway networks, and LSTM.
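      The vanishing-gradient effect mentioned above can be made concrete with a small sketch (an illustration under assumed layer widths and depths, not code from the talk): backpropagating through a stack of random sigmoid layers shrinks the gradient norm geometrically, while adding an identity skip connection per layer, as in residual networks, keeps it alive.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      depth, width = 50, 64  # assumed toy dimensions

      def backprop_norm(residual):
          """Push a gradient back through `depth` random sigmoid layers;
          with residual=True each layer also has an identity skip connection."""
          g = np.ones(width)
          for _ in range(depth):
              W = rng.normal(size=(width, width)) / np.sqrt(width)
              pre = rng.normal(size=width)           # stand-in pre-activations
              s = 1.0 / (1.0 + np.exp(-pre))
              branch = W.T @ (s * (1 - s) * g)       # gradient through the layer
              # The skip path contributes d(identity)/dx = I, so gradients add:
              g = g + branch if residual else branch
          return np.linalg.norm(g)

      plain = backprop_norm(False)
      skip = backprop_norm(True)
      print(plain, skip)  # plain collapses toward zero; skip stays usable
      ```

      The sigmoid derivative is at most 0.25, so fifty multiplications drive the plain gradient toward zero, whereas the additive identity path bypasses that shrinkage at every layer.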

      For Deep Learning, we suggested self-normalizing neural networks (SNNs), which automatically avoid the vanishing gradient. In unsupervised Deep Learning, generative adversarial networks (GANs) excel at generating realistic images, outperforming all previous approaches. We proved that a two time-scale update rule for training GANs converges under mild assumptions to a local Nash equilibrium. For deep reinforcement learning, we introduced a new approach to learning long-delayed rewards, for which methods that estimate value functions, such as temporal difference, Monte Carlo, or Monte Carlo Tree Search, have failed.
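      The self-normalizing property of SNNs can be sketched numerically (a toy illustration with assumed layer sizes, not material from the talk): with the fixed SELU constants and weights of variance 1/n, activations are pulled back toward zero mean and unit variance even after many layers, which is what keeps gradients well-scaled.

      ```python
      import numpy as np

      rng = np.random.default_rng(1)
      # Fixed SELU constants from the self-normalizing network construction
      lam, alpha = 1.0507009873554805, 1.6732632423543772

      def selu(x):
          return lam * np.where(x > 0, x, alpha * (np.exp(x) - 1))

      n, depth = 1000, 30          # assumed toy dimensions
      x = rng.normal(size=n)       # start at mean 0, variance 1
      for _ in range(depth):
          W = rng.normal(size=(n, n)) / np.sqrt(n)  # variance-1/n weights
          x = selu(W @ x)

      print(x.mean(), x.std())     # remains close to (0, 1) after 30 layers
      ```

      The pair (0, 1) is a stable fixed point of the mean/variance map induced by SELU, so activation statistics neither explode nor die out with depth.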

      Current applications of Deep Learning in physics include the analysis of ATLAS data, e.g. to identify measurements of the Higgs boson; quantum chemistry; energy prediction without the Schrödinger equation and wave functions; and quantum state classification. Conversely, methods from physics are used to describe Deep Learning systems. The Fokker-Planck equation describes the behavior of stochastic gradient descent, which finds flat minima in error surfaces. We use electric field equations to define a new GAN objective which can be proved, via the continuity equation, to have a single (global) Nash equilibrium.

      Speaker: Prof. Sepp Hochreiter
    • 16:10 → 16:50
      Invited talk: Overview, RNNs and alike in HEP 40m
      Speaker: Kyle Stuart Cranmer (New York University (US))
    • 17:00 → 17:20
      IML workshop challenge winners presentation 20m
      Speakers: David Josef Schmidt (Rheinisch Westfaelische Tech. Hoch. (DE)), Marcel Rieger (RWTH Aachen University (DE))
    • 17:30 → 17:50
      Sequence representations for event classification 20m
      Speaker: Justin Tan (University of Melbourne)