Modern datasets are often huge, possibly high-dimensional, and require complex non-linear parameterizations to be modeled accurately.
Examples include image and audio classification, but also data analysis problems in the natural sciences, e.g. high energy physics or biology.
Deep learning based techniques provide a possible solution, at the price of limited theoretical guidance and, especially, of heavy computational requirements. A key challenge for large-scale machine learning is therefore to devise approaches that are guaranteed to be accurate and yet computationally efficient. In this talk, we will consider a regularization perspective on machine learning, appealing to classical ideas from linear algebra and inverse problems to dramatically scale up nonparametric methods such as kernel methods, often dismissed because of their prohibitive costs. Our analysis derives optimal theoretical guarantees while providing experimental results on par with or outperforming state-of-the-art approaches.
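The abstract does not name a specific algorithm, but a classical instance of the idea it describes (linear-algebraic approximation to scale up kernel methods) is Nyström-subsampled kernel ridge regression. The sketch below is a minimal illustration under that assumption; the function names, the Gaussian kernel choice, and uniform landmark sampling are illustrative, not the speaker's method.

import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of A and B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def nystrom_krr_fit(X, y, m=100, lam=1e-3, sigma=1.0, seed=None):
    # Kernel ridge regression restricted to m Nystrom landmarks:
    # solve (K_nm^T K_nm + n*lam*K_mm) alpha = K_nm^T y,
    # costing O(n m^2) time instead of O(n^3) for exact KRR.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=min(m, n), replace=False)  # uniform landmark sampling
    Xm = X[idx]
    K_nm = gaussian_kernel(X, Xm, sigma)
    K_mm = gaussian_kernel(Xm, Xm, sigma)
    A = K_nm.T @ K_nm + n * lam * K_mm
    # Small jitter keeps the system well-posed if K_mm is near-singular.
    alpha = np.linalg.solve(A + 1e-10 * np.eye(len(idx)), K_nm.T @ y)
    return Xm, alpha

def nystrom_krr_predict(Xm, alpha, X_new, sigma=1.0):
    return gaussian_kernel(X_new, Xm, sigma) @ alpha

# Toy usage: regress a noisy sine curve with 2000 points and 50 landmarks.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(2000)
Xm, alpha = nystrom_krr_fit(X, y, m=50, lam=1e-4, seed=0)
pred = nystrom_krr_predict(Xm, alpha, X[:5])

The point of the construction is that accuracy is controlled by the regularization parameter and the number of landmarks m, so statistical and computational trade-offs can be tuned jointly, which is the spirit of the guarantees mentioned in the abstract.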
L. Moneta, M. Pierini

Refreshments will be served at 10h30