
Readout of the CMS experiment during the 2010 heavy ion run

Not scheduled
1m
Théâtre National (Centre Bonlieu), France
Board: 137
Poster: Experiments upgrade, future facilities and instrumentations

Speaker

Ivan Amos Cali (LNS)

Description

CMS was designed and optimized to record high-luminosity pp collisions. Its powerful DAQ and trigger systems are normally configured to handle a very high rate of relatively low-multiplicity pp events. To reduce the data volume, the CMS sub-detectors are read out using zero suppression algorithms optimized for pp running. The large multiplicities expected in PbPb collisions required a different optimization of the zero suppression algorithms, which could only be carried out after the data had been taken. To ensure that the collected data were of the highest quality, the CMS collaboration decided to disable the zero suppression algorithms for the silicon strip tracker and the electromagnetic and hadron calorimeters for the duration of the first PbPb run. This resulted in an event size of about 12 MB, corresponding to about 11 million channels recorded for each event. CMS recorded data at up to 180 Hz with a bandwidth to tape of over 2 GB/s, well beyond what it was designed for (more than 6 times the data volume per second recorded during pp running). The excellent luminosity delivered by the LHC required the CMS trigger system to reduce the rate of minimum bias events written to tape while maintaining the rate of interesting physics events. The trigger algorithms operating at Level-1 and in the High Level Trigger were optimized to provide maximum selectivity and data writing rates for jets, muons and photons, and the fraction of minimum bias events was adjusted during running to maximize the available bandwidth. In just a few weeks CMS collected about 890 TB of data. After the run was over, CMS developed new zero suppression algorithms optimized for heavy ions, and the data were compressed offline to about 190 TB. In this talk we present the CMS configuration during the 2010 PbPb run and describe the detailed performance of the CMS DAQ and trigger systems and the subsequent offline compression processing.
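
The figures quoted above can be cross-checked with a short back-of-envelope sketch. The Python snippet below is only an illustration: the constants are the numbers given in the abstract, and the threshold-based `zero_suppress` function is a toy simplification of the idea behind zero suppression, not the actual CMS sub-detector algorithms.

```python
# Back-of-envelope check of the rates quoted in the abstract, plus a toy
# illustration of zero suppression (not CMS code; constants from the text).

def zero_suppress(adc_counts, threshold):
    """Keep only (channel, value) pairs above threshold: the basic idea of
    zero suppression, in a deliberately simplified toy form."""
    return [(ch, v) for ch, v in enumerate(adc_counts) if v > threshold]

# Throughput implied by the quoted event size and recording rate.
EVENT_SIZE_MB = 12           # non-zero-suppressed PbPb event size
PEAK_RATE_HZ = 180           # peak recording rate
bandwidth_gb_s = EVENT_SIZE_MB * PEAK_RATE_HZ / 1000
print(f"Bandwidth to tape: ~{bandwidth_gb_s:.1f} GB/s")        # ~2.2 GB/s

# Offline compression after the heavy-ion-optimized zero suppression.
RAW_TB, COMPRESSED_TB = 890, 190
print(f"Compression factor: ~{RAW_TB / COMPRESSED_TB:.1f}x")   # ~4.7x

# Toy example: only channels above threshold are kept.
print(zero_suppress([0, 3, 0, 0, 47, 1, 0, 12], threshold=5))  # [(4, 47), (7, 12)]
```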
