1–5 Sept 2014
Faculty of Civil Engineering

ATLAS FTK challenge: simulation of a billion-fold hardware parallelism

1 Sept 2014, 16:10
25m
C217 (Faculty of Civil Engineering)

Oral, Computing Technology for Physics Research

Speaker

Alexandre Vaniachine (ATLAS)

Description

During the current LHC shutdown period the ATLAS experiment will upgrade the Trigger and Data Acquisition system to include a hardware tracker coprocessor: the Fast Tracker (FTK). The FTK accesses the 80 million channels of the ATLAS silicon detector, identifying charged tracks and reconstructing their parameters over the entire detector at a rate of up to 100 kHz and within 100 microseconds. To achieve this performance the FTK system uses a custom ASIC with associative memory (AM), designed to perform pattern matching at very high speed, while the track parameters are calculated in modern FPGAs. To support this massive system a detailed simulation has been developed, with the goals of guiding the hardware design and of studying the impact of such a system on the ATLAS online event selection at high LHC luminosities. The two targets, electronic design and physics performance evaluation, have different requirements: the hardware design requires accurate emulation of a relatively small data sample, whereas physics studies require millions of events, so efficient use of CPU is essential. We present the issues that arise when emulating this system on commercial CPU platforms using ATLAS Grid computing resources, and the solutions developed to mitigate these problems, allowing the emulation to carry out the studies required to support the system design, construction and installation.
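
To make the pattern-matching step concrete, here is a minimal CPU-side sketch of what the AM chips do in hardware; it is not the ATLAS code, and the eight-layer geometry, the superstrip IDs and all names are illustrative assumptions. Each stored pattern holds one coarse detector cell ("superstrip") per silicon layer, and a pattern fires as a "road" when enough of its layers contain a hit:

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <unordered_set>
    #include <vector>

    constexpr int kLayers = 8;                      // silicon layers seen by the FTK (assumed)
    using Pattern = std::array<uint32_t, kLayers>;  // one superstrip ID per layer

    // Return the indices of all patterns with at least `threshold` hit layers.
    // The AM hardware checks every pattern in parallel; a CPU must scan the bank.
    std::vector<std::size_t> findRoads(
            const std::vector<Pattern>& bank,
            const std::array<std::unordered_set<uint32_t>, kLayers>& hitsPerLayer,
            int threshold) {
        std::vector<std::size_t> roads;
        for (std::size_t i = 0; i < bank.size(); ++i) {
            int matched = 0;
            for (int l = 0; l < kLayers; ++l)
                if (hitsPerLayer[l].count(bank[i][l]) > 0) ++matched;
            if (matched >= threshold) roads.push_back(i);
        }
        return roads;
    }

The sequential scan over the bank is exactly what the billion-fold parallelism of the AM system removes in hardware, and what makes a naive CPU emulation slow.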

Summary

The FTK system performs track finding and fitting for charged tracks in p-p collisions by separating pattern matching and track-parameter calculation into two sequential steps: the pattern matching exploits the ability of the associative memory (AM) chips to find correlations in the data, while the track parameters are calculated by a fast fit implemented in FPGAs. Both steps exploit the high degree of parallelism and the extremely large computing power available in these devices. This allows all tracks with transverse momentum greater than 1 GeV to be reconstructed in real time, over the full detector acceptance, at an event rate of up to 100 kHz and within 100 microseconds per event.
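
The fast fit of the second step lends itself to a simple illustration: the FTK track fit is based on a linear expansion with precomputed constants, so within a matched road each helix parameter is a linear combination of the hit coordinates, p_k = sum_j C[k][j] * x[j] + q[k]. The sketch below is illustrative only; the dimensions and names are assumptions:

    #include <array>

    constexpr int kCoords = 11;  // hit coordinates entering the fit (assumed)
    constexpr int kParams = 5;   // helix parameters, e.g. d0, z0, phi, cot(theta), q/pT

    struct FitConstants {                                      // precomputed per detector sector
        std::array<std::array<double, kCoords>, kParams> C{};  // linear-expansion matrix
        std::array<double, kParams> q{};                       // constant offsets
    };

    // One multiply-accumulate chain per parameter: this structure is why the
    // fit maps so naturally onto FPGA arithmetic blocks at high clock rates.
    std::array<double, kParams> fastFit(const FitConstants& fc,
                                        const std::array<double, kCoords>& x) {
        std::array<double, kParams> p{};
        for (int k = 0; k < kParams; ++k) {
            double acc = fc.q[k];
            for (int j = 0; j < kCoords; ++j) acc += fc.C[k][j] * x[j];
            p[k] = acc;
        }
        return p;
    }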

The simulation of such a highly parallel system is an extremely complex task when executed on commercial CPU-based computers. The main bottlenecks are the low memory bandwidth, compared to the AM system's I/O bandwidth of about 25 TB/s, and the lack of parallelism. The AM chip uses a content-addressable memory (CAM) architecture in which any data inquiry is broadcast to all memory elements simultaneously, so the data retrieval time is independent of the database size and each chip can perform millions of comparisons per second. Each incoming hit reaches all of the one billion patterns in the whole AM system within the same 10 ns clock cycle, a very specific feature that CPU-based systems cannot match. Similar penalties are paid in the track fitter implementation, which on a CPU cannot reach the 1 GHz fit rate that will be achieved by the FPGAs installed on the fit boards.
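
One standard CPU-side mitigation of this broadcast gap, shown here as a sketch and not necessarily the scheme used in the ATLAS emulation, is to invert the pattern bank into per-layer lookup tables from superstrip ID to the patterns containing it: each incoming hit then touches only the patterns it can actually match, instead of being compared against the whole bank:

    #include <cstddef>
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    constexpr int kLayers = 8;  // assumed, as in the earlier sketch
    using LayerIndex = std::unordered_map<uint32_t, std::vector<std::size_t>>;

    struct InvertedBank {
        std::vector<LayerIndex> byLayer;  // layer -> (superstrip ID -> pattern indices)
        std::size_t nPatterns = 0;
    };

    static int countBits(uint8_t m) {     // number of distinct layers matched
        int n = 0;
        while (m) { n += m & 1u; m >>= 1; }
        return n;
    }

    std::vector<std::size_t> findRoads(const InvertedBank& bank,
                                       const std::vector<std::vector<uint32_t>>& hitsPerLayer,
                                       int threshold) {
        std::vector<uint8_t> layerMask(bank.nPatterns, 0);  // one bit per matched layer
        for (int l = 0; l < kLayers; ++l)
            for (uint32_t ss : hitsPerLayer[l]) {
                auto it = bank.byLayer[l].find(ss);
                if (it == bank.byLayer[l].end()) continue;  // no pattern uses this cell
                for (std::size_t pat : it->second)
                    layerMask[pat] |= static_cast<uint8_t>(1u << l);  // a layer counts once
            }
        std::vector<std::size_t> roads;
        for (std::size_t i = 0; i < bank.nPatterns; ++i)
            if (countBits(layerMask[i]) >= threshold) roads.push_back(i);
        return roads;
    }

The lookup makes the per-hit cost proportional to the number of candidate patterns rather than to the full bank size, at the price of the extra memory needed for the inverted tables.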

In designing and developing the FTK simulation we devised solutions that allow the system emulation to run on standard Grid worker nodes, overcoming the limitations of commercial hardware in terms of CPU power and memory availability. These solutions let us exploit the thousands of worker nodes available in the Grid computing facilities used by the ATLAS experiment and produce the millions of physics events needed to complete the studies of the expected performance of the system.
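
A minimal sketch of the kind of partitioning such solutions can rely on follows; the region-splitting scheme and all names here are assumptions for illustration. The billion-pattern bank is split into detector regions, each region is emulated as an independent Grid job whose slice of the bank fits in a worker node's memory, and the per-region results are merged afterwards:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct RegionResult {               // output of one region's independent Grid job
        int region = -1;
        std::vector<uint64_t> roadIds;  // road IDs assumed unique across regions
    };

    // Placeholder for the per-region emulation: a real job would load only its
    // region's slice of the pattern bank, keeping the memory footprint within
    // what a standard Grid worker node provides.
    RegionResult emulateRegion(int region, const std::vector<uint32_t>& eventHits) {
        RegionResult result;
        result.region = region;
        (void)eventHits;                // the actual pattern matching would go here
        return result;
    }

    // Merge step: the regions are disjoint, so the road lists simply concatenate.
    std::vector<uint64_t> mergeRoads(const std::vector<RegionResult>& parts) {
        std::vector<uint64_t> all;
        for (const RegionResult& part : parts)
            all.insert(all.end(), part.roadIds.begin(), part.roadIds.end());
        return all;
    }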

Primary author

Denis Oliveira Damazio (Brookhaven National Laboratory (US))

Presentation materials