Machine learning (ML) is a thriving field of active research. It has found numerous practical applications in natural language processing, speech and image understanding, as well as the fundamental sciences. ML approaches can replicate, and often surpass, the accuracy of hypothesis-driven first-principles simulations, and can provide new insights into a research problem.
Here we provide an overview of the content of the Machine Learning tutorials. Although the theory and practice sessions are described separately, they will be taught in alternation across the four lectures: after new concepts are introduced, we immediately apply them in a tailored exercise, which helps consolidate the material covered.
We will start with a gentle introduction to the ML field, presenting the three learning paradigms: supervised, unsupervised, and reinforcement learning. We will then delve into the two supervised sub-categories, regression and classification, using neural networks' forward and backward propagation. We will soon see that smart choices can be made to exploit the nature of the data at hand, and introduce convolutional and recurrent neural networks. We will then move on to unsupervised learning and familiarise ourselves with generative models such as variational autoencoders and generative adversarial networks.
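To make the forward/backward-propagation idea concrete before the exercises, here is a minimal plain-Python sketch (no framework): a single linear neuron trained on a toy regression task with squared-error loss. The variable names and the toy data are illustrative assumptions, not material from the tutorials themselves.

```python
# Minimal sketch: forward and backward propagation for a single linear
# neuron y_hat = w*x + b, trained by gradient descent on squared error.
# Toy data follows y = 2x + 1, so training should recover w ~ 2, b ~ 1.
w, b, lr = 0.0, 0.0, 0.1
data = [(x, 2.0 * x + 1.0) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]

for epoch in range(200):
    for x, y in data:
        y_hat = w * x + b            # forward pass
        dloss = 2.0 * (y_hat - y)    # backward pass: d(loss)/d(y_hat)
        w -= lr * dloss * x          # chain rule: d(loss)/dw = dloss * x
        b -= lr * dloss              # chain rule: d(loss)/db = dloss * 1

print(round(w, 2), round(b, 2))  # parameters should approach 2.0 and 1.0
```

The same two-phase structure, a forward pass computing predictions and a backward pass propagating gradients via the chain rule, carries over unchanged to multi-layer networks; only the bookkeeping grows.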
We will introduce machine learning technology focusing on the open-source software stack PyTorch. We will briefly cover PyTorch's architecture, primitives, and automatic differentiation; implement multi-layer perceptron and convolutional layers; take a deep dive into recurrent neural networks for sequence-learning tasks; and finish with some generative models. Python programming experience and NumPy exposure are highly desirable, but previous experience with PyTorch is not required.
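The automatic differentiation mentioned above is the core of PyTorch's autograd: each operation records how to send gradients back through the computation graph. The toy `Value` class below is a plain-Python sketch of that reverse-mode idea, not PyTorch's actual API; the names are invented for illustration.

```python
# Toy reverse-mode automatic differentiation, sketching the mechanism
# that PyTorch's autograd implements at scale. Illustrative only.
class Value:
    def __init__(self, data, parents=(), backward=lambda: None):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = backward

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def backward():
            self.grad += out.grad       # d(a+b)/da = 1
            other.grad += out.grad      # d(a+b)/db = 1
        out._backward = backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def backward():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out._backward = backward
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v):
            if id(v) not in seen:
                seen.add(id(v))
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

x = Value(3.0)
y = x * x + x          # y = x^2 + x, so dy/dx = 2x + 1 = 7 at x = 3
y.backward()
print(y.data, x.grad)  # 12.0 7.0
```

In PyTorch the equivalent is a tensor created with `requires_grad=True`, with gradients accumulated into `.grad` by a call to `.backward()`; the tutorials build on exactly this mechanism when training networks.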