
IML Machine Learning Working Group - Parallelized/Distributed Machine Learning

Europe/Zurich
40/S2-C01 - Salle Curie (CERN)

    • 15:00 - 15:10
      News and group updates 10m
      Speakers: Lorenzo Moneta (CERN), Michele Floris (CERN), Paul Seyfert (Universita & INFN, Milano-Bicocca (IT)), Dr Sergei Gleyzer (University of Florida (US)), Steven Randolph Schramm (Universite de Geneve (CH))
    • 15:10 - 15:30
      Internally-Parallelized Boosted Decision Trees 20m
      Speaker: Andrew Mathew Carnes (University of Florida (US))
    • 15:30 - 15:50
      Rapid development platforms for machine learning 20m
      Speaker: Dr Andrew Lowe (Hungarian Academy of Sciences (HU))
    • 15:50 - 15:55
      Distributed Deep Learning using Apache Spark and Keras (see materials) 5m

      Data parallelism is a distinct methodology for optimizing model parameters. The general idea is to reduce the training time by having n workers optimize a central model while processing n different shards (partitions) of the dataset in parallel. In this setting we distribute n model replicas over n processing nodes, i.e., every node (or process) holds one model replica. Each worker then trains its local replica on its assigned data shard. However, the workers can be coordinated in such a way that, together, they optimize a single objective during training and thereby reduce the wall-clock training time. There are several approaches to achieving this; they are discussed in greater detail in the materials below, and a minimal illustrative sketch of the synchronous variant follows this entry.

      Speaker: Joeri Hermans (Maastricht University (NL))
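
      The sketch below only illustrates the synchronous data-parallel scheme described in the abstract; it is not the speaker's Spark/Keras implementation, and the function names (worker_gradient, train_data_parallel) are hypothetical. Each simulated worker computes a gradient on its own data shard, and the central model applies the averaged gradient.

      # Minimal sketch of synchronous data-parallel SGD on a least-squares model.
      # Illustration only: the per-worker loop stands in for gradients that would
      # be computed on separate nodes in a real distributed setup.
      import numpy as np

      def worker_gradient(w, X_shard, y_shard):
          # Gradient of 0.5/m * ||X w - y||^2 on one data shard.
          m = len(y_shard)
          return X_shard.T @ (X_shard @ w - y_shard) / m

      def train_data_parallel(X, y, n_workers=4, lr=0.1, epochs=50):
          # Split the dataset into n shards, one per worker (model replica).
          X_shards = np.array_split(X, n_workers)
          y_shards = np.array_split(y, n_workers)
          w = np.zeros(X.shape[1])                      # central model parameters
          for _ in range(epochs):
              grads = [worker_gradient(w, Xs, ys)       # one gradient per worker
                       for Xs, ys in zip(X_shards, y_shards)]
              w -= lr * np.mean(grads, axis=0)          # synchronous averaged update
          return w

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          X = rng.normal(size=(1000, 5))
          true_w = np.arange(1, 6, dtype=float)
          y = X @ true_w + 0.01 * rng.normal(size=1000)
          print(train_data_parallel(X, y))              # approaches [1, 2, 3, 4, 5]

      Averaging the gradients from all shards before each update keeps the replicas identical, which is what makes the scheme equivalent to optimizing a single shared objective; asynchronous variants relax this synchronization at the cost of stale updates.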
    • 15:55 - 16:25
      Parallelization in Machine Learning with Multiple Processes 30m
      Speakers: Gerardo Gutierrez (ITM), Omar Andres Zapata Mesa (University of Antioquia & Metropolitan Institute of Technology)
    • 16:25 - 16:26