In recent years the LHC has delivered record-breaking luminosity to the CMS experiment, making it a challenge to handle all the requests for efficient data and Monte Carlo processing. In this presentation we review the major issues in managing such requests and how we were able to address them. Our main strategy relies on increased automation and on dynamic workload and data distribution. We maximize the sharing of CPU resources using an HTCondor-based global pool, which was recently expanded to include the dedicated Tier-0 resources. To avoid underutilization of Tier-2 sites, we rely heavily on remote data access (AAA). Multicore resizable jobs reduce the load on the workflow management tools and improve efficiency across all types of resources. A wide range of opportunistic resources, such as the CMS trigger farm, supercomputing centers, and cloud resources, has been integrated into the global pool, which now provides access to more than 250k CPU cores.
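
To make the submission side of such a pool concrete, the following is a minimal sketch using the HTCondor Python bindings to queue a multicore job. It assumes the bindings are installed and a schedd is reachable; the wrapper script, resource requests, and the site list in the +DESIRED_Sites attribute are illustrative, not the actual CMS production configuration.

    import htcondor

    # Describe a multicore job: request_cpus > 1 lets a single slot carry
    # several payload threads/processes, which reduces the number of jobs
    # the workflow management tools have to track.
    submit = htcondor.Submit({
        "executable": "run_payload.sh",   # hypothetical wrapper script
        "request_cpus": "8",
        "request_memory": "16000",        # MB
        "request_disk": "20000000",       # KB
        # CMS-style custom classad steering the job to a set of sites
        # (values are illustrative):
        "+DESIRED_Sites": '"T1_US_FNAL,T2_CH_CERN"',
        "output": "job.out",
        "error": "job.err",
        "log": "job.log",
    })

    schedd = htcondor.Schedd()
    result = schedd.submit(submit)        # queue one job into the pool
    print("submitted cluster", result.cluster())

Inside such a payload, input data need not be staged to the execution site: it can be opened through the global XRootD redirector in the AAA spirit, e.g. root://cms-xrd-global.cern.ch//store/... (path illustrative), which is what allows otherwise underused Tier-2 CPUs to process data hosted elsewhere.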