IML Machine Learning Working Group: Generative models

503-1-001 - Council Chamber (CERN)



Videoconference: Vidyo room for the IML machine learning working group
    • 15:00 – 15:10
      News and group updates 10m
      Speakers: Lorenzo Moneta (CERN), Michele Floris (CERN), Paul Seyfert (CERN), Dr. Sergei Gleyzer (University of Florida (US)), Steven Randolph Schramm (Universite de Geneve (CH))
    • 15:10 – 15:45
      Introduction to GANs 35m
      Speaker: Luke Percival De Oliveira
    • 15:45 – 16:15
      Frontiers with GANs 30m
      Speaker: Michela Paganini (Yale University (US))
    • 16:15 – 16:40
      Quantized Stochastic Gradient Descent 25m

      Parallel implementations of stochastic gradient descent (SGD) have received significant research attention recently, thanks to the algorithm's good scalability. A fundamental barrier to parallelizing large-scale SGD is that the cost of communicating gradient updates between nodes can become very large. Consequently, several compression heuristics have been proposed in which nodes communicate only quantized, approximate versions of the model updates. Although effective in practice, these heuristics do not always converge, and it is not clear whether they can be improved. In this talk, I will describe Quantized SGD (QSGD), a family of lossy compression techniques that compress the gradient updates at each node while guaranteeing convergence under standard assumptions. Empirical results show that QSGD can significantly reduce communication cost in multi-GPU DNN training while remaining competitive with standard uncompressed techniques in terms of accuracy on a variety of deep learning tasks. Time permitting, I will also discuss an extension of these techniques that allows SGD to run entirely on compressed, low-precision data representations. For linear models, it is possible to simultaneously quantize the samples, the model, and the gradient updates using as little as one bit per dimension while maintaining the convergence guarantees. This framework enables an FPGA implementation that is almost an order of magnitude faster than an optimized multi-threaded implementation.

      Speaker: Prof. Dan Alistarh (ETH Zurich)
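      The core idea in the abstract — replacing each gradient coordinate with one of a small number of levels, chosen at random so the result is still an unbiased estimate of the gradient — can be sketched in a few lines of NumPy. This is an illustrative sketch of stochastic level quantization, not the talk's actual implementation; the function name and the choice of `s` levels are assumptions:

      ```python
      import numpy as np

      def qsgd_quantize(v, s=4, rng=None):
          """Stochastically quantize vector v to s+1 magnitude levels.

          Each coordinate |v_i|/||v|| falls between two adjacent levels
          l/s and (l+1)/s; we round up with probability proportional to
          the distance to the lower level, so E[Q(v)] = v (unbiased).
          """
          rng = np.random.default_rng() if rng is None else rng
          norm = np.linalg.norm(v)
          if norm == 0.0:
              return np.zeros_like(v)
          scaled = np.abs(v) / norm * s        # position in [0, s]
          lower = np.floor(scaled)             # index of lower level
          prob_up = scaled - lower             # P(round to upper level)
          level = lower + (rng.random(v.shape) < prob_up)
          return np.sign(v) * norm * (level / s)
      ```

      In a distributed setting, each worker would transmit only the norm, the signs, and the integer level indices (a few bits per coordinate) instead of full-precision floats; averaging many such unbiased quantized gradients recovers the true gradient in expectation.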
    • 16:40 – 17:05
      GANs and fast simulation in GeantV 25m
      Speakers: Maurizio Pierini (CERN), Sofia Vallecorsa (Gangneung-Wonju National University (KR))
    • 17:05 – 17:30
      Adversarial Networks in the Deep Continuum Suppression for the Belle II experiment 25m
      Speaker: Dennis Weyland (KIT)