Summary
Driven by the need for a scalable, high-performance computing architecture for data acquisition, the COMPASS experiment at CERN’s SPS developed a new DAQ from scratch using a novel approach to event building networks. The new system and its event builder exploit the application-optimized technology of Field Programmable Gate Arrays (FPGAs), in contrast to traditional event builders, which are based on distributed online computers interconnected via an Ethernet network. Recent developments in FPGA technology, such as increased I/O bandwidth (> Gbps) and support for high-performance SDRAMs even on low-cost chips, have made FPGAs suitable for event building purposes. Reduced costs, higher reliability, and increased compactness are the arguments for moving from traditional to FPGA-based event builders in the future.
COMPASS commissioned its intelligent, FPGA-based DAQ (iFDAQ) in 2015 for a run that required only a reduced COMPASS spectrometer setup. For 2016, the system will be deployed at full scale and will be able to cope with the expected on-spill data rate of 1.5 GB/s. By buffering data on different levels, the system exploits the spill structure of the SPS beam and averages the on-spill data rate over the whole duty cycle to a sustained rate of 500 MB/s.
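As an illustration of this rate averaging, the following sketch relates the two quoted rates under assumed SPS timing values; the spill and cycle lengths used here are illustrative and not taken from the measured SPS duty cycle.

```python
# Illustrative back-of-the-envelope calculation: averaging the on-spill
# data rate over the full SPS duty cycle. The spill and cycle lengths
# below are assumed example values, not figures from the abstract.

ON_SPILL_RATE_GBPS = 1.5   # GB/s during the spill (quoted above)
SPILL_LENGTH_S     = 10.0  # assumed effective spill length
CYCLE_LENGTH_S     = 30.0  # assumed SPS cycle length seen by COMPASS

data_per_spill_gb   = ON_SPILL_RATE_GBPS * SPILL_LENGTH_S
sustained_rate_gbps = data_per_spill_gb / CYCLE_LENGTH_S

# Buffering must absorb the difference between the on-spill rate and the
# sustained readout rate for the duration of one spill.
buffer_needed_gb = (ON_SPILL_RATE_GBPS - sustained_rate_gbps) * SPILL_LENGTH_S

print(f"sustained rate  : {sustained_rate_gbps * 1000:.0f} MB/s")  # -> 500 MB/s
print(f"buffer per spill: {buffer_needed_gb:.1f} GB")              # -> 10 GB
```

With these assumed timings, the buffers spread over the front-end electronics and the event-builder hardware have to absorb on the order of 10 GB per spill.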
The iFDAQ uses a hybrid FPGA-software approach: the event building task is performed solely by FPGAs, whereas the software is responsible for system control, user interfaces, and configuration. The event builder consists of nine custom-designed FPGA cards, called Data Handling Cards (DHCs). The DHCs comply with the AMC/ATCA standard and are equipped with 4 GB of DDR3 memory and 16 high-speed links. The event builder receives data from the front-end electronics via approximately 100 optical serial interfaces. It buffers and multiplexes the data, combines event fragments into complete events, and finally distributes them to eight readout computers via FPGA PCIe cards. All hardware nodes are synchronized by the Trigger Control System (TCS).
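The event-building step itself is implemented in DHC firmware; purely as a software model of the underlying logic, the sketch below groups incoming fragments by event number and releases an event only once every expected source link has contributed. Link identifiers and data are illustrative.

```python
from collections import defaultdict

# Toy model of the event-building logic: fragments arriving from several
# input links are grouped by event number, and an event is released only
# once every expected source has delivered its fragment. The real system
# implements this in DHC firmware; this is only an illustration.

EXPECTED_SOURCES = {"link00", "link01", "link02"}  # illustrative link IDs

pending = defaultdict(dict)  # event_number -> {source: payload}

def add_fragment(event_number, source, payload):
    """Store one fragment; return the complete event if all sources arrived."""
    pending[event_number][source] = payload
    if set(pending[event_number]) == EXPECTED_SOURCES:
        fragments = pending.pop(event_number)
        # concatenate fragments in a fixed source order to form the full event
        return b"".join(fragments[s] for s in sorted(EXPECTED_SOURCES))
    return None

# Example: fragments for event 42 trickle in from the three links.
assert add_fragment(42, "link00", b"\x01\x02") is None
assert add_fragment(42, "link01", b"\x03") is None
assert add_fragment(42, "link02", b"\x04") == b"\x01\x02\x03\x04"
```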
The firmware of the DHCs is designed to react to potential inadequacies of the front-end modules, ensuring system stability and data integrity by throttling excessive rates and replacing wrongly formatted or missing data with empty but correctly formatted frames. Information about errors detected in the data stream is accessible to the software via a dedicated Ethernet network using the IPbus protocol.
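Any IPbus-capable client can retrieve such error information; as an illustration, the sketch below uses the Python bindings of the uHAL IPbus client library with a hypothetical connection file, device id, and register name, none of which are specified here.

```python
import uhal  # Python bindings of the uHAL IPbus client library

# Illustrative read-out of an error counter exposed over IPbus.
# The connection file, device id, and register name are hypothetical;
# the actual iFDAQ address table is not given in this description.
manager = uhal.ConnectionManager("file://connections.xml")
dhc = manager.getDevice("dhc0")

errors = dhc.getNode("monitor.error_counter").read()  # queue the read
dhc.dispatch()                                        # execute the IPbus transaction

print("detected data-stream errors:", errors.value())
```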
From 2016, all involved point-to-point high-speed links between the front-end electronics, the hardware event builder, and the readout computers are wired via a fully programmable crosspoint switch. This allows the user to remotely customize the network topology and hence simplifies both compensation for hardware failures and optimization of load balancing. In a second step, the intelligent hardware will recognize load imbalance and malfunctioning hardware nodes by itself and will automatically take appropriate actions. By distributing the necessary information synchronously via the TCS, the highly reliable intelligent event builder can change its topology on the fly. A cost estimate for scaling the system up to 1 TB/s with currently available technology will be presented.
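Conceptually, reprogramming the crosspoint switch amounts to rewriting an input-to-output port mapping. The toy model below reroutes traffic away from a failed output port onto a spare; port numbers, the spare-port policy, and the function name are purely illustrative.

```python
# Toy model of reprogramming the crosspoint switch: the switch is a table
# mapping input ports (front-end links) to output ports (event-builder
# inputs). Port numbers and the spare-port policy are purely illustrative.

crosspoint = {0: 0, 1: 1, 2: 2, 3: 3}   # input port -> output port
spare_outputs = [8, 9]                   # assumed unused outputs kept for failover

def reroute_failed_output(mapping, failed_output, spares):
    """Move every input currently routed to a failed output onto a spare."""
    for inp, out in mapping.items():
        if out == failed_output:
            if not spares:
                raise RuntimeError("no spare output port left")
            mapping[inp] = spares.pop(0)
    return mapping

# Example: output port 2 fails; its input is rerouted to spare port 8.
reroute_failed_output(crosspoint, failed_output=2, spares=spare_outputs)
assert crosspoint[2] == 8
```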