Description
Ensuring the quality of data in large HEP experiments such as CMS at the LHC is crucial for producing reliable physics results, especially in view of the high-luminosity phase of the LHC, where the new data-taking conditions will require much more careful monitoring of the experimental apparatus. The CMS protocols for Data Quality Monitoring (DQM) rely on the analysis of a standardized set of histograms, providing a condensed snapshot of the detector's condition. However, this approach requires significant human effort and has limited time granularity, potentially masking transient anomalies. To overcome these limitations, unsupervised machine learning models, such as autoencoders and convolutional neural networks, have recently been deployed for anomaly detection with per-lumisection granularity. In this contribution, we present the development of an automated workflow for the online DQM of the CMS Muon system, offering a flexible tool for the different muon subsystems based on deep learning models trained on occupancy maps. We also discuss the flexibility and extensibility of the approach to other detectors, as well as the effort toward integrating per-lumisection monitoring into the standard DQM workflow.
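To illustrate the kind of anomaly detection described above, the sketch below scores synthetic detector occupancy maps by autoencoder reconstruction error. It is a minimal, hypothetical example, not the CMS implementation: the map shape, the use of a linear autoencoder (PCA via truncated SVD in place of a deep network), and the threshold choice are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "occupancy maps": 16x16 hit-count grids, one per lumisection.
# Nominal maps share a smooth base pattern plus small fluctuations.
# (Toy data only -- real CMS occupancy maps differ in shape and content.)
base = np.outer(np.hanning(16), np.hanning(16)) * 100.0
train = np.stack([base + rng.normal(0, 2, base.shape) for _ in range(200)])

# Linear autoencoder via truncated SVD (a PCA stand-in for a deep model):
# encode each flattened map onto k components, decode back, and use the
# mean-squared reconstruction error as the anomaly score.
X = train.reshape(len(train), -1)
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
components = Vt[:4]  # decoder basis (k=4 components, an arbitrary choice)

def reconstruction_error(m):
    x = m.reshape(-1) - mu
    x_hat = (x @ components.T) @ components
    return float(np.mean((x - x_hat) ** 2))

# Flagging threshold taken from the tail of the training-error distribution.
threshold = np.percentile([reconstruction_error(m) for m in train], 99)

good = base + rng.normal(0, 2, base.shape)
bad = good.copy()
bad[4:10, 4:10] = 0.0  # simulated dead detector region in one lumisection

print("nominal map flagged:", reconstruction_error(good) > threshold)
print("faulty map flagged: ", reconstruction_error(bad) > threshold)
```

In a per-lumisection workflow, the same scoring would run on each incoming map, so a transient fault confined to a single lumisection is flagged rather than being averaged away over a full run.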
Experiment context, if any: CMS experiment