Description
Maintaining particle tracking in the high-luminosity environments of HEP colliders requires increased sensor granularity to keep occupancies at levels acceptable for pattern recognition. In addition, the data must be read out fast enough to avoid pile-up. The combined increase in luminosity and granularity therefore requires large data volumes to be read out at high rates.
In ATLAS the data from the pixel system is read out via electrical cables, because the high radiation levels preclude the use of optical fibre. Material-budget considerations limit the output of the modules to about 5 Gbps. Determining the data rates is important to ensure that the system is designed to cope with the full collision data, and that the number of cables, which are a significant overhead in the material budget of the system, is optimised.
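
As a rough illustration of how such a limit constrains the design, the back-of-envelope sketch below (in Python) compares a module's output rate against a 5 Gbps link; apart from the link bandwidth quoted above, all numbers are illustrative assumptions rather than ITk design values.

    # Back-of-envelope estimate of a pixel module's output data rate for a
    # triggered readout. All numbers except the link bandwidth are
    # illustrative assumptions, not ITk design values.

    TRIGGER_RATE_HZ = 1.0e6        # assumed trigger (readout) rate
    HITS_PER_CHIP_PER_EVENT = 60   # assumed mean hits per chip per triggered event
    BITS_PER_HIT = 16              # assumed encoded hit size (address + charge)
    EVENT_OVERHEAD_BITS = 50       # assumed per-chip header/trailer overhead
    CHIPS_PER_MODULE = 4           # assumed quad-chip module
    LINK_BANDWIDTH_GBPS = 5.0      # electrical-link budget quoted above

    bits_per_chip_event = EVENT_OVERHEAD_BITS + HITS_PER_CHIP_PER_EVENT * BITS_PER_HIT
    module_rate_gbps = CHIPS_PER_MODULE * bits_per_chip_event * TRIGGER_RATE_HZ / 1e9
    occupancy = module_rate_gbps / LINK_BANDWIDTH_GBPS

    print(f"module output: {module_rate_gbps:.2f} Gbps "
          f"({100 * occupancy:.0f}% of the {LINK_BANDWIDTH_GBPS:.0f} Gbps link)")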
The calculation of data rates starts with the hit rates, which are estimated from detector simulation. These depend on the layout of the system and the geometry of the sensors. In addition, a good understanding of the material distribution is required to ensure that sources of background hits are included. The data rates are then obtained by simulating the output of the readout chip. The readout can be optimised, based on the structure of the hits, to minimise the size of the data.
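
As a minimal sketch of the last point, the example below compares a naive hit-by-hit encoding with a simple region-based encoding that exploits the clustering of adjacent hits; the formats and bit counts are assumptions for illustration and do not correspond to a specific readout-chip format.

    # Sketch of how the encoded event size depends on the readout format.
    # Formats and bit counts are illustrative assumptions.

    def size_hit_by_hit(hits, bits_per_hit=16):
        """Encode every hit independently (full address plus charge)."""
        return len(hits) * bits_per_hit

    def size_region_based(hits, region_address_bits=12, hitmap_bits=8, charge_bits=4):
        """Group hits into 2x4 pixel regions: one address and one hit map per
        occupied region, plus a charge value per hit. Clustered hits then
        share their address information."""
        regions = {(col // 2, row // 4) for col, row in hits}
        return (len(regions) * (region_address_bits + hitmap_bits)
                + len(hits) * charge_bits)

    # One small cluster plus an isolated hit, as (column, row) pairs:
    event = [(10, 20), (10, 21), (11, 20), (11, 21), (57, 3)]
    print(size_hit_by_hit(event))    # 80 bits
    print(size_region_based(event))  # 2 occupied regions -> 60 bits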
The estimation of hit rates and data rates for the ATLAS ITk upgrade is presented as an example. The implications of the system layout and pixel geometry, and the uncertainties due to the material distribution, are discussed. Different readout algorithms are evaluated in terms of the simulated hit rates and the data rate reduction they achieve. The subsequent impact on link occupancy is also considered, and the robustness of these estimates is discussed.
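
As a simple illustration of the robustness question, a sensitivity scan such as the one below shows how the link occupancy changes if the simulated hit rate is scaled by an uncertainty factor (for example, from the material description); the numbers are placeholders rather than ITk results.

    # Sensitivity of the link occupancy to the assumed hit rate.
    # Placeholder numbers, not ITk results.

    NOMINAL_RATE_GBPS = 4.0   # assumed nominal module output rate
    LINK_GBPS = 5.0           # electrical-link budget quoted above

    for scale in (1.0, 1.2, 1.5):   # hit-rate scale factors (e.g. extra material)
        occ = scale * NOMINAL_RATE_GBPS / LINK_GBPS
        status = "within budget" if occ < 1.0 else "exceeds link bandwidth"
        print(f"hit-rate x{scale:.1f}: occupancy {occ:.2f} ({status})")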