The Second Machine Learning summer school organized by Yandex School of Data Analysis and Laboratory of Methods for Big Data Analysis of National Research University Higher School of Economics will be held in Lund, Sweden from 20 to 26 June 2016. It is hosted by Lund University.
The school is intended to cover the relatively young area of data analysis and computational research that has started to emerge in High Energy Physics (HEP). It goes by several names, including “Multivariate Analysis”, “Neural Networks”, and “Classification/Clustering techniques”. In more general terms, these techniques belong to the field of “Machine Learning”, an area rooted in Statistics that has received a lot of attention from the Data Science community.
There are plenty of essential problems in High Energy Physics that can be solved using Machine Learning methods, ranging from online data filtering and reconstruction to offline data analysis.
Students of the school will receive a theoretical and practical introduction to this new field and will be able to apply the acquired knowledge to solve their own problems. Topics ranging from decision trees to deep learning and hyperparameter optimization will be covered with concrete examples and hands-on tutorials. A special data-science competition will be organized within the school to give participants a better feel for real-life ML application scenarios.
The MLHEP school is a satellite event of the LHCP2016 conference, so its dates and venue (Lund University) are aligned with those of the conference.
The expected number of students is 40-50.
Pre-requisites for participation
- Python programming experience (e.g. http://nbviewer.jupyter.org/gist/rpmuller/5920182, https://www.codecademy.com/tracks/python)
- interest and/or background in HEP
- laptop with WiFi connectivity
Upon completion of the school, participants will be able to:
- formulate a HEP-related problem in ML-friendly terms
- select quality criteria for a given problem
- understand and apply the principles of widely-used classification models (boosting, bagging, BDTs, neural networks, etc.) to practical cases
- optimize features and parameters of a given model efficiently under given restrictions
- select the best classifier implementation from a variety of ML libraries (scikit-learn, XGBoost, deep learning libraries, etc.)
- define & conduct reproducible data-driven experiments
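To give a flavor of the skills above, here is a minimal sketch (illustrative only, not official school material) of training a gradient-boosted classifier in scikit-learn on synthetic "signal vs. background" data and scoring it with ROC AUC, a quality criterion commonly used in HEP analyses; the dataset and parameter choices are assumptions for the example:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for signal/background events with a few features.
X, y = make_classification(n_samples=2000, n_features=10,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# A boosted-decision-tree ensemble, one of the models covered at the school.
clf = GradientBoostingClassifier(n_estimators=100, max_depth=3,
                                 random_state=0)
clf.fit(X_train, y_train)

# Evaluate with ROC AUC rather than raw accuracy: it summarizes the
# signal-efficiency / background-rejection trade-off over all thresholds.
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"ROC AUC: {auc:.3f}")
```

Fixing `random_state` throughout, as above, is one small step toward the reproducible data-driven experiments mentioned in the last point.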
GitHub repository with materials and slides from the school