Conveners
Monitoring: Thu AM
- Julia Andreeva (CERN)
- Sang Un Ahn (Korea Institute of Science & Technology Information (KR))
Monitoring: Thu PM
- Sang Un Ahn (Korea Institute of Science & Technology Information (KR))
- Julia Andreeva (CERN)
The EU-funded ESCAPE project aims to enable a prototype federated storage infrastructure, a Data Lake, that would handle data on the exabyte scale, address the FAIR data management principles, and provide science projects with a unified, scalable data management solution for accessing and analyzing large volumes of scientific data. In this respect, data transfer and management technologies such as...
The ATLAS Tile Calorimeter (TileCal) is the central part of the hadronic calorimeter of the ATLAS experiment and provides important information for the reconstruction of hadrons, jets, hadronic decays of tau leptons, and missing transverse energy. The readout is segmented into nearly 10000 channels that are calibrated by means of a cesium source, laser, charge injection, and integrator-based...
The Belle II detector began collecting data from $e^+e^-$ collisions at the SuperKEKB electron-positron collider in March 2019 and has already exceeded the Belle instantaneous luminosity. The result is an unprecedented amount of incoming raw data that must be calibrated promptly prior to data reconstruction. To fully automate the calibration process, a Python plugin package, b2cal, has been...
The LHCb detector at the LHC is currently undergoing a major upgrade to increase the full-detector read-out rate to 30 MHz. In addition to the detector hardware modernisation, the new trigger system will be software-only. The code base of the new trigger system must be thoroughly tested for data flow, functionality and physics performance. Currently, the testing procedure is based on a system of...
During the second long shutdown (LS2) of the CERN Large Hadron Collider (LHC), the Detector Control System (DCS) of the Compact Muon Solenoid (CMS) Electromagnetic Calorimeter (ECAL) is undergoing a large software upgrade at various levels. The ECAL DCS supervisory system has been reviewed and extended to migrate the underlying software toolkits and platform technologies to the latest...
The CMS experiment at the CERN LHC (Large Hadron Collider) relies on a distributed computing infrastructure to process the multi-petabyte datasets in which the collision and simulated data are stored. A scalable and reliable monitoring system is required to ensure efficient operation of the distributed computing services, and to provide a comprehensive set of measurements of the system...
A large scientific computing infrastructure must offer the versatility to host any kind of experiment that can lead to innovative ideas. The ATLAS experiment provides broad access to resources for running intelligent algorithms and analyzing the massive amount of data produced at the Large Hadron Collider at CERN. The BigPanDA monitoring is a component of the PanDA (Production ANd Distributed Analysis)...
GlideinWMS is a pilot framework that provides uniform and reliable HTCondor clusters using heterogeneous and unreliable resources. The Glideins are pilot jobs that are sent to the selected nodes, test them, set them up as desired by the user jobs, and ultimately start an HTCondor startd to join an elastic pool. These Glideins collect information that is very useful to evaluate the health and...
The ATLAS Experiment at the LHC generates petabytes of data that is distributed among 160 computing sites all over the world and is processed continuously by various central production and user analysis tasks. The popularity of data is typically measured as the number of accesses and plays an important role in resolving data management issues: deleting, replicating, moving between tapes, disks...
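The access-count popularity metric described in this abstract can be illustrated with a minimal sketch; the dataset names, window length, and thresholds below are hypothetical and not taken from the ATLAS system:

```python
from collections import Counter
from datetime import date, timedelta

def popularity(access_log, window_days=90, today=date(2020, 1, 1)):
    """Count accesses per dataset within a sliding time window.

    access_log: iterable of (dataset_name, access_date) tuples.
    Returns a Counter mapping dataset name -> number of accesses.
    """
    cutoff = today - timedelta(days=window_days)
    return Counter(name for name, day in access_log if day >= cutoff)

# Hypothetical access log: two recent reads of one dataset, one stale read.
log = [
    ("data18.AOD", date(2019, 12, 20)),
    ("data18.AOD", date(2019, 12, 28)),
    ("mc16.DAOD", date(2019, 6, 1)),   # outside the 90-day window
]
pop = popularity(log)

# Datasets with no recent accesses become candidates for deletion or
# migration to tape; frequently accessed ones for extra disk replicas.
cold = [name for name in {n for n, _ in log} if pop[name] == 0]
```

The merge of deletion, replication, and tape-migration decisions into one access-count signal is of course a simplification of the data-management policies the abstract alludes to.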
CERN uses the world's largest scientific computing grid, WLCG, for distributed data storage and processing. Monitoring of the CPU and storage resources is an essential element in detecting operational issues in its systems, for example in the storage elements, and in ensuring their proper and efficient functioning. The processing of experiment data depends strongly on the data access...
Recent changes to the ATLAS offline data quality monitoring system are described. These include multithreaded histogram filling and subsequent postprocessing, improvements in the responsiveness and resource use of the automatic check system, and changes to the user interface to improve the user experience.