The Fast TracKer (FTK) will perform global track reconstruction after each Level-1 trigger accept to give the software-based high-level trigger early access to tracking information. FTK will use the entire silicon tracking system of about 100 million channels, consisting of the Pixel and Semiconductor Tracker (SCT) detectors as well as the new Insertable B-Layer (IBL) pixel detector. To cope with the large input data rate and the hit combinatorics caused by high occupancy at higher luminosity, FTK is highly parallel: the system is segmented into 64 eta-phi trigger towers, each with its own pattern-recognition and track-fitting hardware. To avoid inefficiency at tower boundaries, the towers must overlap, both because of the finite size of the beam luminous region in the z coordinate and because of the curvature of charged-particle trajectories in the magnetic field. The first stage of the ATLAS FTK organizes the data for hardware-based tracking in these parallel trigger towers; its two main tasks are clustering and data formatting. It receives hits from the Pixel, SCT, and IBL readout drivers (RODs), performs clustering to reduce the data volume with minimal efficiency loss, remaps the ATLAS Inner Detector geometry onto the FTK tower structure, shares clusters in the overlap regions, and delivers the data to the tracking processor units. An FPGA Mezzanine Card (FMC) module, the FTK Input Mezzanine (FTK_IM), has been developed to perform the clustering, and a general-purpose ATCA-based data processor board, the Data Formatter (DF), has been developed for the data sharing, using the full-mesh ATCA backplane interconnect. Four FTK_IMs connect to each DF board. The design details of the DF board, a general-purpose board called the Pulsar II, will be presented in a separate abstract at this workshop.
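The overlapping-tower segmentation described above can be illustrated with a minimal software sketch. The grid layout (4 eta slices by 16 phi slices) and the margin values below are illustrative assumptions, not the production FTK boundaries; the point is that a hit near a tower edge is assigned to every tower whose overlap-extended region contains it.

```python
import math

# Illustrative sketch of eta-phi trigger-tower assignment with overlaps.
# The 4 (eta) x 16 (phi) grid and the margin sizes are assumptions.
N_ETA, N_PHI = 4, 16          # 64 towers in total
ETA_MIN, ETA_MAX = -2.5, 2.5  # Inner Detector acceptance in eta
ETA_MARGIN = 0.1              # overlap from the beam-spot spread in z
PHI_MARGIN = 0.05             # overlap from track curvature in the B field

def towers_for_hit(eta, phi):
    """Return the IDs of all towers whose overlap-extended boundaries
    contain the hit; a hit near a boundary belongs to several towers."""
    towers = []
    d_eta = (ETA_MAX - ETA_MIN) / N_ETA
    d_phi = 2 * math.pi / N_PHI
    for ie in range(N_ETA):
        lo_e = ETA_MIN + ie * d_eta - ETA_MARGIN
        hi_e = ETA_MIN + (ie + 1) * d_eta + ETA_MARGIN
        if not (lo_e <= eta < hi_e):
            continue
        for ip in range(N_PHI):
            center = -math.pi + (ip + 0.5) * d_phi
            half_width = d_phi / 2 + PHI_MARGIN
            # wrap the phi difference into (-pi, pi] before comparing
            dphi = math.atan2(math.sin(phi - center), math.cos(phi - center))
            if abs(dphi) <= half_width:
                towers.append(ie * N_PHI + ip)
    return towers
```

A hit well inside a tower maps to exactly one tower, while a hit at a corner of the grid maps to up to four, which is what forces the cluster sharing between neighboring towers described below.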
The system consists of 32 DFs and 128 FTK_IMs housed in four full-mesh ATCA shelves. Each FTK_IM is equipped with two Spartan-6 FPGAs that receive the silicon detector hits from the RODs and perform 2D clustering for pixel hits and 1D clustering for micro-strip hits. The 128 FTK_IMs can receive up to 512 input S-LINK optical fibers. The clustering logic uses a moving-window technique to minimize resource usage in the FTK_IM FPGAs. The detected clusters are sent to the DF boards over parallel buses of LVDS differential signals through the FMC connectors. All DF boards in one ATCA shelf are directly connected by multiple point-to-point high-speed serial links over the backplane for data formatting. A Virtex-7 FPGA on each DF board serves as the main processing engine. The received cluster data are efficiently remapped and shared among the 32 boards over the full-mesh backplane and dedicated inter-shelf fiber links, minimizing the system latency and maximizing the data-transfer efficiency. The output data, organized into the 64-tower structure, will be sent to the tracking core units over high-bandwidth serial links.
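The window-based 2D pixel clustering can be sketched in software as follows. The actual FTK_IM implementation is FPGA logic, and the window dimensions used here are illustrative assumptions rather than the production values; the sketch only shows the principle of grouping nearby hits inside a fixed window anchored on a seed hit and reducing each group to a single cluster centroid.

```python
# Simplified software model of window-based 2D pixel clustering.
# The real FTK_IM logic runs in an FPGA; the window size below
# (8 columns x 21 rows) is an illustrative assumption.
WIN_COL, WIN_ROW = 8, 21

def cluster_hits(hits):
    """Group pixel hits (col, row) into clusters: take the first remaining
    hit as a seed, collect every hit inside a fixed window anchored on it,
    and reduce each group to its centroid."""
    remaining = list(hits)
    clusters = []
    while remaining:
        seed = remaining.pop(0)
        in_win = [seed]
        rest = []
        for h in remaining:
            if abs(h[0] - seed[0]) < WIN_COL and abs(h[1] - seed[1]) < WIN_ROW:
                in_win.append(h)
            else:
                rest.append(h)
        remaining = rest
        n = len(in_win)
        clusters.append((sum(c for c, _ in in_win) / n,
                         sum(r for _, r in in_win) / n))
    return clusters
```

Replacing each group of adjacent hits by one centroid is what reduces the data volume with minimal efficiency loss, since the downstream track fit only needs the cluster position.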
Board- and system-level performance studies, implementation details, and operational experience from the FTK full-chain testing will be presented.