Containers are becoming increasingly prevalent in industry as a standard method of software deployment. They offer many benefits for shipping software: they encapsulate dependencies and turn complex software deployments into single, portable units. Similar to virtual machines, but with lower overall resource requirements, greater flexibility and more transparency, they are a compelling choice for software deployment. Containers are attractive to WLCG experiments as a means to encapsulate their payloads, to ensure that userland environments are consistent, and to segregate running jobs from one another for improved isolation. Technologies such as Docker and Singularity are already being used and tested by the larger WLCG experiments and by CERN IT.
Our purpose here is to explore the use of containers at a medium-to-large WLCG Tier-2 as a means of reducing the manpower required to run such a site. By examining the requirements of WLCG payloads (such as the availability of CVMFS, trust anchors and VOMS information), a model of a containerised compute platform will be developed and presented. It is hoped that novel ways of interacting with experiment frameworks will emerge, along with the ability to leverage technologies such as Docker Swarm, Kubernetes or CoreOS to bring compute resources up quickly and effectively. Alongside the compute itself, it is hoped that readily available monitoring solutions can be bundled, providing a complete toolbox with which local system administrators can deliver resources quickly and securely.
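As an illustration of the payload requirements mentioned above, a containerised job typically needs the host's CVMFS repositories, CA trust anchors and VOMS configuration made visible inside the container. The following is a minimal sketch only; the image (`docker://centos:7`) and the payload script name are placeholder assumptions, while the bind paths are the conventional host locations for these services.

```shell
#!/bin/sh
# Sketch: run a WLCG-style payload under Singularity, bind-mounting the
# host-provided services a grid job expects. Assumes Singularity is
# installed and the host exports these paths; image and payload.sh are
# hypothetical placeholders.
singularity exec \
  --bind /cvmfs \
  --bind /etc/grid-security/certificates \
  --bind /etc/vomses \
  --bind /etc/grid-security/vomsdir \
  docker://centos:7 /bin/sh -c './payload.sh'
```

An equivalent Docker invocation would use `-v` flags for the same paths; in either case the container image itself stays generic, with site- and VO-specific state supplied by the host at run time.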