Speaker
Description
Maintaining high data quality in large HEP experiments like CMS at the LHC is essential for obtaining reliable physics results. The LHC high-luminosity phase will introduce higher event rates, requiring more sophisticated monitoring techniques to promptly identify and address potential issues. The CMS protocols for Data Quality Monitoring (DQM) and Data Certification (DC) rely on significant human intervention and have limited time granularity, which may lead to transient anomalies going undetected. To address these challenges, unsupervised machine learning techniques, such as convolutional autoencoders, have been deployed for anomaly detection with a granularity of 23 s of data taking. Given the complexity and diversity of the CMS subdetectors, multiple tools are being developed in parallel and maintained by subsystem experts. In this contribution, we discuss the development of these automated workflows for online DQM and DC across different CMS subdetectors. We also present the integration of these models into a common interface: DIALS, a new CMS tool for automating DC.
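The underlying principle of autoencoder-based anomaly detection is to train a model to reconstruct nominal monitoring histograms and flag inputs whose reconstruction error exceeds a threshold derived from normal data. The following is an illustrative sketch of that principle only; it uses a linear autoencoder (PCA) on synthetic histograms as a stand-in, not the actual convolutional models or data of the CMS DQM tools.

```python
import numpy as np

# Illustrative sketch of reconstruction-error anomaly detection.
# A linear autoencoder (PCA) stands in for the convolutional models
# mentioned in the abstract; all data here are synthetic.

rng = np.random.default_rng(0)

# "Normal" monitoring histograms: 200 samples of 32 bins each,
# drawn around a common reference shape.
reference = np.exp(-0.5 * ((np.arange(32) - 16) / 5.0) ** 2)
normal = reference + 0.02 * rng.standard_normal((200, 32))

# Fit the linear autoencoder: encode into k principal components.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:4]  # k = 4 latent dimensions (arbitrary choice)

def reconstruction_error(x):
    """Mean squared error between x and its low-dimensional reconstruction."""
    z = (x - mean) @ components.T
    x_hat = mean + z @ components
    return np.mean((x - x_hat) ** 2, axis=-1)

# Threshold from the training population (e.g. its 99th percentile).
threshold = np.percentile(reconstruction_error(normal), 99)

# A distorted histogram (simulated dead region) scores far above it.
anomalous = reference.copy()
anomalous[10:14] = 0.0
print(reconstruction_error(anomalous) > threshold)  # prints True
```

In the deployed CMS workflows this scoring would run per time slice (here, 23 s of data taking), so transient anomalies invisible at run-level granularity can still be flagged.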
Significance
The abstract is submitted on behalf of the CMS collaboration. Speaker to be announced.
Experiment context, if any
CMS experiment