Thanks to a diversified programme of collaborations with leading ICT companies and other research organisations, CERN openlab promotes research on innovative solutions and knowledge sharing between communities. In particular, it is involved in a large set of Deep Learning and AI projects within the High Energy Physics (HEP) community and beyond. The HEP community has a long tradition of using Neural Networks and Machine Learning methods to solve specific tasks, mostly related to data analysis. In recent years, several studies have demonstrated the benefits of using Deep Learning (DL) in different fields of science, society and industry. Building on these examples, HEP experiments are now exploring how to integrate DL into their workflows: from data quality monitoring, to triggering, reconstruction and simulation.
Efficient training and fast inference of such models have been made tractable by improved optimization methods and the advent of dedicated hardware well suited to the highly parallelizable tasks associated with neural networks. In particular, these kinds of projects require efficient data management, simplified deployment strategies and High Performance Computing technologies, together with the availability of multi-architecture frameworks (ranging from large multi-core systems to hardware accelerators), either on premise or deployed in the cloud.
This talk will describe a few examples of promising DL applications in our field, with particular attention to the implications this new kind of workload has for the HEP computing model.