Wonders and Woes of Machine Learning Competitions (25m)
An overview of machine learning competitions and of developments in their collaborative aspects
Data Science at LHCb (40m)
Machine learning is used at all stages of the LHCb experiment. It is routinely used: in the process of deciding which data to record and which to reject forever, as part of the reconstruction algorithms (feature engineering), and in the extraction of physics results from our data. This talk will highlight current use cases, as well as ideas for ambitious future applications, and how we (machine learning expert + physicist) can collaborate on them.
(Ecole Polytechnique Federale de Lausanne (CH))
Machine Learning tools have revolutionized data analysis in high-energy physics (HEP). But the problems posed by HEP are unique in many aspects, presenting novel challenges and requiring novel solutions. I will describe recent progress in tackling these open problems and describe current outstanding issues.
(University of California Irvine (US))
Flavours of Physics: Identifying Tau to Three Muon Decay Events at the LHCb Using a Combination of Hand-Crafted and Automatic Feature Engineering and Ensemble Algorithms (15m)
Arjun Subramaniam, Rishab Gargeya
What is wrong with data challenges (40m)
We will develop a constructive criticism of the data challenge format as practiced today. It will be illustrated by our story of the HiggsML challenge, but our conclusions will extend beyond it. In a nutshell, challenges are long job interviews for participants, publicity for organizers, and benchmarking and teaching aids for the data science community. What are they not? They will not deliver a workable solution to your problem, not even a prototype, partly because the very problem you can squeeze into the competitive gaming mechanism is a diluted or abstract version of the real problem you want to solve. You will have no access to the data scientists participating in the challenge, unless of course you can hire them. They incentivize neither collaboration nor creativity.
In the last third of the talk I will describe the format and tool that we have been developing at the Paris-Saclay Center for Data Science to run collaborative hackathons (RAMPs, for Rapid Analytics and Model Prototyping), which implement some of the missing features of the classical challenge format.
(CNRS / Université Paris-Saclay)
In recent years, our deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. They are now widely used in industry. I will briefly review deep supervised / unsupervised / reinforcement learning, and discuss the latest state of the art results in numerous applications.
Jet Images: Deep Learning Edition (15m)
Building on the notion of a particle physics detector as a camera, we investigate the potential of deep learning architectures to identify highly boosted W bosons. We develop techniques for visualizing the features learned by deep networks and for understanding what additional information they use to improve performance. Our study of physically motivated features and learning algorithms is general and can be used to significantly increase discovery sensitivity and to gain a deeper understanding of the physics within jets.
Luke Percival De Oliveira
(SLAC National Accelerator Laboratory (US))
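The pixelation step behind jet images can be sketched in a few lines. The following is a minimal illustration (not the authors' code): hypothetical calorimeter deposits, given as (eta, phi, pT) arrays assumed to be already centered on the jet axis, are histogrammed into a fixed grid whose pixel intensities are the summed transverse momenta.

```python
import numpy as np

def jet_image(eta, phi, pt, bins=25, extent=1.0):
    """Pixelate calorimeter deposits around the jet axis into a 2D 'image'.

    eta, phi: deposit coordinates relative to the jet axis (a hypothetical
    pre-processing step is assumed to have centered the jet at the origin).
    pt: transverse momentum of each deposit, used as pixel intensity.
    """
    img, _, _ = np.histogram2d(
        eta, phi, bins=bins,
        range=[[-extent, extent], [-extent, extent]],
        weights=pt,
    )
    return img

# Toy deposits mimicking a two-prong (W-like) jet substructure.
rng = np.random.default_rng(0)
etas = np.concatenate([rng.normal(-0.3, 0.1, 200), rng.normal(0.4, 0.1, 200)])
phis = rng.normal(0.0, 0.1, 400)
pts = rng.exponential(1.0, 400)
image = jet_image(etas, phis, pts)
print(image.shape)  # (25, 25)
```

The resulting fixed-size grid is what makes image-classification architectures such as convolutional networks applicable to jets.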
Machine learning algorithms have offered solutions to a wide range of problems, including several of the tasks found in high-energy physics. However, given the nature of the problems faced in this field, the machine's estimates have to meet certain conditions (for instance, agreement requirements checked with the Cramer-von Mises and Kolmogorov-Smirnov tests). Moreover, when an ensemble of classifiers is used to solve a given task, those conditions can be difficult to satisfy.
In this talk, I will present a combination method for different output distributions and its use in the "Flavours of Physics" Challenge.
Juan Jose Choquehuanca-Zevallos, Roberto Díaz-Morales (Computer Science Dept. at Universidad Autonoma de Madrid)
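One of the agreement requirements mentioned in the abstract, the two-sample Kolmogorov-Smirnov test between classifier-output distributions, is simple to compute. A self-contained sketch (illustrative, not the speakers' implementation):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample KS statistic: the largest gap between the empirical CDFs.

    In the challenge setting, a and b would be classifier outputs on two
    control samples that are required to agree (e.g. real vs. simulated data).
    """
    a, b = np.sort(np.asarray(a)), np.sort(np.asarray(b))
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return float(np.abs(cdf_a - cdf_b).max())

rng = np.random.default_rng(0)
same = ks_statistic(rng.normal(0, 1, 2000), rng.normal(0, 1, 2000))
shifted = ks_statistic(rng.normal(0, 1, 2000), rng.normal(1, 1, 2000))
```

A small statistic means the two output distributions agree; a classifier whose outputs shift between control samples fails the requirement even if its nominal accuracy is high.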
Building a Robust Detector Algorithm: Application of Bayesian, Nonparametric, and Poisson Methods to Improve Photon Denoising (5m)
An alternative to ABC for likelihood-free inference (40m)
The field of particle physics has the luxury of very predictive models of the data based on quantum field theory; however, the simulation of a complicated experimental apparatus makes it impractical to directly evaluate the likelihood for a given observation. A popular approach to this class of problems is Approximate Bayesian Computation (ABC). I will describe an alternative technique for parameter inference in this “likelihood-free” setting that is based on a parametrized family of classifiers and univariate density estimation. I will end with examples where this technique is being applied to problems at the LHC.
Kyle Stuart Cranmer
(New York University (US))
Over the past decade, the use of machine learning algorithms to classify event types has become commonplace in particle physics. However, in many cases it's not obvious how to teach the machine what the physicist wants it to learn. I will discuss some modified classifiers developed for use in such cases, and then reflect on the questions: What is it that physicists actually do when analyzing data? How can we teach machines to do this for - and better than - us? Finally, what role will deep learning play in future particle physics experiments?
J Michael Williams
(Massachusetts Inst. of Technology (US))