Many of the workflows in CMS offline operation are designed around the concept of the acquisition of a run: a period of data-taking with stable detector and accelerator conditions. The ability to integrate statistics across several runs is an asset for statistically limited monitoring and calibration workflows. Crossing run boundaries requires careful evaluation of the conditions of the experiment, the operation mode of the accelerator, and the version of the reconstruction software in order to build a robust aggregation logic capable of producing homogeneous datasets. The CMS collaboration has invested in the automation of this process, building a framework for the continuous running of calibration workflows, data quality monitoring tasks, and performance studies across run boundaries. The contribution will illustrate the design principles of this new tool and report on the operational experience during LHC Run 2 and the prospects for its future development.
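As a minimal sketch of the aggregation idea described above, the following Python snippet groups consecutive runs into homogeneous blocks whenever conditions, accelerator mode, and reconstruction release all match across a run boundary. The field names and values are illustrative assumptions, not the actual CMS schema or framework API.

```python
from dataclasses import dataclass
from itertools import groupby

# Hypothetical run metadata; real CMS run registries expose similar
# information, but these field names are illustrative only.
@dataclass(frozen=True)
class RunInfo:
    run_number: int
    conditions_tag: str     # detector/calibration conditions in use
    accelerator_mode: str   # e.g. stable-beams proton-proton
    software_version: str   # reconstruction release used

def homogeneity_key(run: RunInfo) -> tuple:
    """Runs sharing this key may be safely aggregated."""
    return (run.conditions_tag, run.accelerator_mode, run.software_version)

def aggregate_runs(runs: list[RunInfo]) -> list[list[RunInfo]]:
    """Group consecutive runs into homogeneous blocks.

    A new block starts whenever the conditions, the accelerator mode,
    or the reconstruction release change across a run boundary.
    """
    ordered = sorted(runs, key=lambda r: r.run_number)
    return [list(group) for _, group in groupby(ordered, key=homogeneity_key)]

# Example (invented run numbers and tags): the first two runs merge
# into one block, the third starts a new one due to a conditions change.
runs = [
    RunInfo(316000, "2018_v1", "pp_stable", "CMSSW_10_2_X"),
    RunInfo(316001, "2018_v1", "pp_stable", "CMSSW_10_2_X"),
    RunInfo(316002, "2018_v2", "pp_stable", "CMSSW_10_2_X"),
]
blocks = aggregate_runs(runs)  # -> [[316000, 316001], [316002]]
```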