Heavy Flavour Data Mining workshop

Y16 G15 (University of Zurich, Irchel Campus)

Andrey Ustyuzhanin (Yandex School of Data Analysis (RU)), Francesco Dettori (CERN), Marc Olivier Bettler (CERN), Marcin Chrzaszcz (University of Zurich (CH)), Thomas Blake (University of Warwick), Tim Head (Ecole Polytechnique Federale de Lausanne (CH))

This workshop is designed to provide an overview of, and hands-on experience with, popular tools and methods from various fields of Machine Learning. Physicists are welcome to share the challenges they are facing, to facilitate collaboration with ML practitioners. Active Machine Learning practitioners are invited to share their experience and the instruments they use to achieve meaningful results in their domains of interest. Those fields might include Natural Language Processing, Image Recognition, and Robotics. Although these areas seem far from HEP, they have plenty of tools and algorithms that could be applied to HEP challenges as well.

To foster interaction, Open Space Technology will be used: an approach recognized for boosting creativity in a variety of contexts, one that "can lead to surprising results and fascinating new questions".

Winners of the Physics Prize of the Flavours of Physics challenge, organized by CERN and Yandex on Kaggle, will present their solutions. A practical introduction to ML toolkits will be covered by tutorials on scikit-learn (by Gilles Louppe, a core developer of scikit-learn), REP, hep_ml, and Deep Learning tools.

    • 8:30 AM
    • 1
    • HEP challenges
      Convener: Marcin Chrzaszcz (Universitaet Zuerich (CH), Institute of Nuclear Physics (PL))
      • 2
        Data Science at LHCb
        Speaker: Tim Head (Ecole Polytechnique Federale de Lausanne (CH))
      • 3
        Summary of «Flavors of Physics» Challenge
        Speaker: Andrey Ustyuzhanin (Yandex School of Data Analysis (RU))
      • 10:30 AM
        Coffee Break
      • 4
        Data Doping solution for "Flavours of Physics" challenge
        Speaker: Dr Vicens Gaitan
      • 5
        Transfer Learning solution for "Flavours of Physics" challenge
        Speaker: Dr Alexander Rakhlin
      • 6
        Pitfalls of evaluating a classifier's performance in high energy physics applications
        Speaker: Dr Gilles Louppe (New York University (US))
    • 12:30 PM
    • HEP challenges discussions
      Convener: Andrey Ustyuzhanin (Yandex School of Data Analysis (RU))
    • Machine Learning tools & tutorials
      Convener: Andrey Ustyuzhanin (Yandex School of Data Analysis (RU))
      • 7
        An introduction to machine learning with Scikit-Learn
        Speaker: Dr Gilles Louppe (New York University (US))
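The estimator interface this tutorial builds on can be sketched as follows. This is only a minimal illustration on synthetic data, not material from the talk itself; the classifier and dataset are arbitrary choices:

```python
# Minimal scikit-learn workflow: fit / predict / score on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)             # every estimator exposes fit()
accuracy = clf.score(X_test, y_test)  # and a task-appropriate score()
```

The same fit/predict/score contract holds across scikit-learn's estimators, which is what makes swapping models in a pipeline cheap.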
      • 5:00 PM
        Coffee Break
      • 8
        Boosting applications for HEP
        Speaker: Aleksei Rogozhnikov (Yandex School of Data Analysis (RU))
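As a rough illustration of the kind of boosting workflow such a talk covers, here is a plain gradient-boosted classifier in scikit-learn on synthetic data; the speaker's own specialized HEP boosting tools are not reproduced here:

```python
# Gradient boosting: an ensemble of shallow trees fit sequentially,
# each one correcting the residual errors of the previous ones.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10,
                           n_informative=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

gbc = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1,
                                 max_depth=3, random_state=42)
gbc.fit(X_train, y_train)
test_accuracy = gbc.score(X_test, y_test)
```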
    • 7:00 PM
    • Data Science Applications
      Convener: Tim Head (Ecole Polytechnique Federale de Lausanne (CH))
      • 9
        Classifier output calibration to probability
        Speaker: Tatiana Likhomanenko (National Research Centre Kurchatov Institute (RU))
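One standard way to map raw classifier scores to probabilities is Platt-style sigmoid calibration. The sketch below uses scikit-learn's CalibratedClassifierCV on synthetic data; it illustrates the topic of the talk, not the speaker's own method:

```python
# Calibrate a margin classifier's decision scores into probabilities.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# LinearSVC has no predict_proba of its own; wrapping it fits a sigmoid
# (Platt scaling) on held-out folds to turn scores into probabilities.
calibrated = CalibratedClassifierCV(LinearSVC(max_iter=5000),
                                    method='sigmoid', cv=3)
calibrated.fit(X_train, y_train)
proba = calibrated.predict_proba(X_test)  # each row sums to 1
```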
      • 10
        Classifiers for centrality determination in proton-nucleus and nucleus-nucleus collisions
        Centrality, as a geometrical property of the collision, is crucial for the physical interpretation of proton-nucleus and nucleus-nucleus experimental data. However, it cannot be accessed directly in event-by-event data analysis. Contemporary methods of centrality estimation in A-A and p-A collisions usually rely on a single detector (either on the signal in zero-degree calorimeters or on the multiplicity in some semi-central rapidity range). In the present work, we develop an approach to centrality determination that is based on machine-learning techniques and utilizes information from several detector subsystems simultaneously. Different event classifiers are suggested and evaluated for their selectivity in terms of the number of participant nucleons and the impact parameter of the collision. The authors acknowledge Saint-Petersburg State University for a research grant.
        Speaker: Igor Altsybeev (St. Petersburg State University (RU))
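A toy sketch of the idea: combine two synthetic "detector" observables in a single regressor for the impact parameter. The functional forms, noise levels, and model choice are invented for illustration and are not taken from the talk:

```python
# Toy centrality estimation: regress a hidden "impact parameter" from two
# noisy detector-like observables, using both subsystems in one model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
b = rng.uniform(0, 15, size=5000)                   # impact parameter [fm]
multiplicity = 1000 * np.exp(-b / 5) * (1 + 0.1 * rng.randn(5000))
zdc_signal = 50 * b * (1 + 0.2 * rng.randn(5000))   # spectator-like signal
X = np.column_stack([multiplicity, zdc_signal])

X_train, X_test, b_train, b_test = train_test_split(X, b, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, b_train)
r2 = model.score(X_test, b_test)    # coefficient of determination
```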
      • 11
        Data Fusion Surrogate Modeling on Incomplete Factorial Design of Experiments
        This work concerns the construction of surrogate models for a specific aerodynamic database. Such a database is generally available from wind tunnel testing or from CFD aerodynamic simulations, and contains aerodynamic coefficients for different flight conditions and configurations (such as Mach number, angle of attack, and vehicle configuration angle) encountered over different space vehicle missions. The main peculiarity of an aerodynamic database is its specific design of experiments, which is a union of grids of low- and high-fidelity data of considerably different sizes. Universal algorithms cannot accurately approximate such significantly non-uniform data. In this work, a fast and accurate algorithm was developed that takes into account the different fidelity of the data and the special design of experiments.
        Speaker: Prof. Eugene Burnaev (IITP)
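A minimal sketch of one common data-fusion strategy: an additive correction, where a dense low-fidelity fit is corrected by a residual model trained on a few high-fidelity points. The 1-D toy functions below are invented; this is not the algorithm from the talk:

```python
# Toy additive multi-fidelity surrogate: dense low-fidelity grid plus a
# sparse high-fidelity sample; surrogate = lo-fi model + residual model.
import numpy as np

rng = np.random.RandomState(1)

def f_hi(x):                 # "truth" from expensive high-fidelity runs
    return np.sin(2 * x) + 0.3 * x

def f_lo(x):                 # cheap but systematically biased model
    return np.sin(2 * x)

x_lo = np.linspace(0, 3, 200)    # dense low-fidelity grid
x_hi = np.linspace(0, 3, 8)      # only a few expensive hi-fi points

lo_fit = np.poly1d(np.polyfit(x_lo, f_lo(x_lo), deg=7))
residual = f_hi(x_hi) - lo_fit(x_hi)           # discrepancy at hi-fi points
corr_fit = np.poly1d(np.polyfit(x_hi, residual, deg=2))

def surrogate(x):
    return lo_fit(x) + corr_fit(x)

x_test = np.linspace(0.1, 2.9, 50)
err = np.max(np.abs(surrogate(x_test) - f_hi(x_test)))
```

The correction model only needs to capture the smooth discrepancy between fidelities, which is why a handful of high-fidelity runs suffices.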
      • 10:40 AM
        Coffee Break
      • 12
        Mathematics of Big Data
        Speaker: Prof. Dmitry Vetrov (Skoltech, Yandex School of Data Analysis, Higher School of Economics)
      • 13
        Efficient Elastic Net Regularization for Sparse Linear Models in the Multilabel Setting
        Speaker: Mr Zachary Chase Lipton (University of California, Amazon)
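For reference, a minimal elastic-net fit on a synthetic, sparse, single-label regression problem; the efficient multilabel machinery from the talk is not reproduced here:

```python
# Elastic net (mixed L1 + L2 penalty) on a sparse linear problem: most true
# coefficients are zero, and the L1 component recovers that sparsity.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.RandomState(0)
X = rng.randn(200, 50)
true_coef = np.zeros(50)
true_coef[:5] = [3.0, -2.0, 1.5, 4.0, -1.0]     # only 5 active features
y = X @ true_coef + 0.1 * rng.randn(200)

model = ElasticNet(alpha=0.1, l1_ratio=0.7)     # 70% L1, 30% L2
model.fit(X, y)
n_nonzero = np.count_nonzero(model.coef_)       # far fewer than 50
```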
      • 14
        Optimized Methods to Apply Neural Networks in HEP
        Different steps of NN application in HEP are considered, and possible optimization methods for each step are discussed. The proposed methods were applied to the single top quark analysis in CMS, and corresponding examples are presented in the talk.
        Speaker: Lev Dudko (M.V. Lomonosov Moscow State University (RU))
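As a small illustration of one such step, input standardization, here is a hypothetical pipeline with a small multilayer perceptron on synthetic data; it is not the CMS analysis from the talk:

```python
# Standardize inputs before training a neural network: one badly scaled
# feature would otherwise dominate the gradients.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=800, n_features=10, random_state=0)
X[:, 0] *= 1000.0                    # simulate one badly scaled input
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,),
                                  max_iter=1000, random_state=0))
net.fit(X_train, y_train)            # scaler is fit on training data only
nn_accuracy = net.score(X_test, y_test)
```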
    • 12:30 PM
    • HEP challenges discussions
      Convener: Andrey Ustyuzhanin (Yandex School of Data Analysis (RU))
    • Machine Learning tools & tutorials
    • Data Science Applications
      Convener: Marc Olivier Bettler (CERN)
      • 18
        Automatic Tuning of Hyperparameters
        The training process of a machine learning algorithm includes tuning of hyperparameters, such as the regularization coefficient of a linear model or the depth of a decision tree. Unfortunately, this is usually done manually, which is very expensive on a regular basis. Moreover, the growing number of hyperparameters in modern, complex machine learning methods further complicates the problem. In our talk, we review methods that make hyperparameter tuning more autonomous, i.e. less dependent on the help of experts.
        Speaker: Mr Alexander Fonarev (Skoltech)
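The simplest way to automate this is a cross-validated grid search, sketched below with scikit-learn; the talk surveys more advanced, less exhaustive methods:

```python
# Automate hyperparameter tuning: exhaustively evaluate a small grid of
# tree settings with 5-fold cross-validation instead of tuning by hand.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=0)

param_grid = {'max_depth': [2, 4, 8, 16],
              'min_samples_leaf': [1, 5, 20]}
search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                      param_grid, cv=5)      # 12 candidates x 5 folds
search.fit(X, y)
best = search.best_params_                   # chosen by CV score, not by hand
```

Grid search scales exponentially with the number of hyperparameters, which is exactly why the more autonomous methods the talk covers become necessary.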
      • 19
        Deep Learning for event reconstruction
        Speaker: Amir Farbin (University of Texas at Arlington (US))
      • 10:20 AM
        Coffee Break
      • 20
        Fast multimodal clustering: searching for optimal patterns
        In Machine Learning, we usually deal with object-attribute tables. However, the underlying objects may have modalities other than attributes alone. For instance, an object may have a certain attribute only under specific conditions. Real examples come from gene expression data, where a gene can be active (expressed) in a particular situation at a certain moment of time, implying a ternary relation with triples (g, s, t). Another example comes from resource sharing systems like Flickr or Bibsonomy, where a user u can assign a certain tag t to a resource r. One may ask how to find homogeneous patterns in such data: groups of genes with similar properties, or communities. This talk presents several definitions of "optimal patterns" in triadic data and the results of an experimental comparison of five triclustering algorithms on real-world and synthetic datasets. The evaluation is carried out over criteria such as resource efficiency, noise tolerance, and quality scores involving cardinality, density, coverage, and diversity of the patterns. An ideal triadic pattern is a totally dense maximal cuboid (formal triconcept). The relaxations of this notion under consideration are: OAC-triclusters; triclusters optimal with respect to the least-squares criterion; and graph partitions obtained by spectral clustering. We show that searching for an optimal tricluster cover is an NP-complete problem, whereas determining the number of such covers is #P-complete. Our extensive computational experiments lead us to a clear strategy for choosing a solution for a given dataset, guided by the principle of Pareto-optimality according to the proposed criteria. At the end of the talk, we will outline future prospects of multimodal triclustering and its relationship with tensor factorisation.
        Speaker: Dr Dmitry Ignatov (HSE)
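The density criterion mentioned in the abstract is easy to compute directly. Below is a minimal sketch on an invented binary gene × situation × time tensor (a totally dense cuboid has density 1; the data and index sets are made up for illustration):

```python
# Density of a triadic pattern: the fraction of ones inside the sub-cuboid
# of a binary object x condition x time tensor selected by three index sets.
# A formal triconcept is a maximal cuboid whose density is exactly 1.
import numpy as np

def density(tensor, rows, cols, layers):
    """Fraction of ones in the sub-cuboid selected by the three index sets."""
    sub = tensor[np.ix_(list(rows), list(cols), list(layers))]
    return sub.mean()

# Toy gene x situation x time tensor with one planted dense pattern.
data = np.zeros((6, 5, 4), dtype=int)
data[1:4, 0:3, 2:4] = 1                 # planted 3 x 3 x 2 dense cuboid
data[5, 4, 0] = 1                       # a stray noise entry

d_pattern = density(data, [1, 2, 3], [0, 1, 2], [2, 3])   # the planted one
d_whole = density(data, range(6), range(5), range(4))     # whole tensor
```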
      • 21
        Summary of open space discussions
        Speaker: Andrey Ustyuzhanin (Yandex School of Data Analysis (RU))
    • Machine Learning tools & tutorials
      Convener: Andrey Ustyuzhanin (Yandex School of Data Analysis (RU))
    • 12:40 PM
    • Machine Learning tools & tutorials
      • 24
        TensorFlow introduction & tutorial (continuation)
        Speaker: Rafal Jozefowicz (Google)
    • 25
      Closing Remarks
      Speakers: Andrey Ustyuzhanin (Yandex School of Data Analysis (RU)), Marcin Chrzaszcz (Universitaet Zuerich (CH), Institute of Nuclear Physics (PL))