Deep Learning techniques are gaining interest in High Energy Physics, offering a new and efficient approach to solving a variety of problems. These techniques leverage the specific features of GPU accelerators and rely on a set of software packages that allow users to compute on GPUs and program Deep Learning algorithms. However, the rapid pace at which both the hardware and the low- and high-level libraries are evolving poses several operational issues for computing centers such as the IN2P3 Computing Center (CC-IN2P3 -- http://cc.in2p3.fr).
In this talk we present how we addressed these operational challenges through the use of container technologies. We show that the flexibility offered by containers comes with no measurable overhead, while allowing users to benefit from the better performance of versions of popular deep learning frameworks compiled from source. Finally, we detail the best practices proposed to the users of CC-IN2P3 for preparing and submitting their deep-learning-oriented jobs on the available GPU resources.
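As an illustration of the kind of workflow described above, the following is a minimal sketch of a batch job that runs a containerized deep learning task on a GPU node. It assumes a Grid Engine-style batch system and a Singularity image; the queue name, resource request, image path, and training script are all hypothetical placeholders, not actual CC-IN2P3 conventions.

```shell
#!/bin/bash
# Hypothetical Grid Engine job script: the queue name and GPU
# resource request below are site-specific placeholders, not
# actual CC-IN2P3 values.
#$ -q gpu_queue          # hypothetical name of a GPU queue
#$ -l gpu=1              # hypothetical request for one GPU

# Run the training script inside a Singularity container.
# The --nv flag exposes the host's NVIDIA driver and GPU
# devices to the containerized process.
singularity exec --nv /path/to/tensorflow.sif python train.py
```

The container image bundles the framework and its dependencies, so the same job script works unchanged when the site updates GPU drivers or when users switch between framework versions.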