Summary
The Large Hadron Collider will undergo a sustained increase of luminosity over a time scale of 10 years. To collect 300 fb-1/year, the peak luminosity will increase by a factor of 5 compared to the design value.
Within the ATLAS experiment, detector requirements will go beyond the current design specifications: higher peak luminosity will lead to an increased density of interactions in both space and time, requiring higher detector resolutions. Higher integrated luminosity will also impose limits due to irradiation damage of materials.
The trigger selection will further suffer from an enormous increase in rates, driven by the natural rise of the number of interactions per bunch crossing as well as by the reduced rejection power of the algorithms on events that become more and more complex. Higher resolution will be required at the first steps of the online selection, and offline-like algorithms are preferred.
The calorimetry-based trigger detectors will improve their selectivity through the increased granularity available at trigger level, allowing for a higher energy resolution. Great advantages could derive from complex (and time-consuming) clustering algorithms, currently used offline, that reduce the effects of background noise due to event pile-up and increase the rejection of abundant QCD jets.
In the muon detector, the momentum resolution of the trigger can be improved by using the precision muon tracking detectors, the Monitored Drift Tube chambers (MDT). An MDT-based trigger scheme is being developed and validated, based on new radiation-hard readout chips currently under development.
In addition, the use of the inner tracking system in the lower levels of the trigger selection can preserve the ATLAS trigger's selectivity without reducing its flexibility. Great advantages could derive from several different algorithms: combining the calorimeter/muon information with tracks to remove mis-reconstructed or fake objects; providing reliable track isolation for single leptons, track multiplicity for tau selection, impact parameter for b-tagging, and vertex information for multi-object signatures.
These improvements require a radical change of the trigger and DAQ system, whose new infrastructure can allow longer latencies at the first stages of the selection. By exploiting the Region-of-Interest approach currently included in the TDAQ infrastructure, the data throughput, and consequently the time required to read out the future billion-channel silicon tracker, can be reduced. Studies are ongoing to explore the feasibility of a fast readout process in the high-occupancy regions of the tracker. A fast hardware tracking processor, inherited from the one in use in the ATLAS second-level trigger, can perform fast pattern-recognition algorithms to be applied immediately after the front-end readout of the relevant sub-detectors.
This new trigger scheme for the last phase of the LHC upgrade is currently under discussion. Different scenarios are compared, keeping in mind the requirements to achieve the expected physics potential of ATLAS in this high-luminosity regime. The status of ongoing tests and preliminary results for the system under development are discussed.