7–10 Mar 2016
CERN
Europe/Zurich timezone

The intelligent, FPGA-based event builder and Data Acquisition System of COMPASS – an exemplary system for the future?

8 Mar 2016, 16:15
2m
30/7-018 - Kjell Johnsen Auditorium (CERN)
Poster

Speaker

Dominik Steffen (Technische Universitaet Muenchen (DE))

Description

Using FPGA technology for event-building tasks in high-energy physics experiments reduces costs and increases the reliability of DAQ systems. In 2015, the COMPASS experiment at CERN's SPS commissioned a novel, intelligent, FPGA-based DAQ (iFDAQ) in which event building is performed by FPGAs. The highly scalable system is designed to cope with an on-spill data rate of 1.5 GB/s and a sustained data rate of 500 MB/s. Its event builder is able to handle front-end errors and maintain a continuous data flow. The intelligent and highly reliable hardware automatically compensates for hardware failures and balances the load.

Summary

Driven by the need for a scalable, high-performance computing architecture for data acquisition, the COMPASS experiment at CERN's SPS developed a new DAQ from scratch using a novel approach to event-building networks. The new system and its event builder exploit the application-optimized technology of Field Programmable Gate Arrays (FPGAs), in contrast to traditional event builders, which are based on distributed online computers interconnected via an Ethernet network. Recent developments in FPGA technology, such as increased I/O bandwidth (multi-Gbps) and support for high-performance SDRAMs even on low-cost chips, have made FPGAs suitable for event-building purposes. Reduced costs, higher reliability, and increased compactness are the main arguments for moving from traditional to FPGA-based event builders in the future.

COMPASS commissioned its intelligent, FPGA-based DAQ (iFDAQ) in 2015 for a run that required only a reduced COMPASS spectrometer setup. For 2016, the system will be deployed at full scale and will be able to cope with the expected on-spill data rate of 1.5 GB/s. By buffering data at different levels, the system exploits the spill structure of the SPS beam and averages the on-spill data rate over the whole duty cycle down to a sustained rate of 500 MB/s.
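
The two rates quoted above fix the implied SPS duty cycle and, together with the spill length, the amount of buffering the system must provide. The short Python sketch below works this out; the 10 s spill length is an illustrative assumption, not an official COMPASS parameter.

    # Back-of-the-envelope buffer sizing for a spill-structured beam.
    # The two data rates are quoted in the text; the spill length is
    # an illustrative assumption.
    ON_SPILL_RATE = 1.5e9   # bytes/s, peak rate during the spill
    SUSTAINED_RATE = 0.5e9  # bytes/s, average over the full cycle
    SPILL_LENGTH = 10.0     # s, assumed spill length

    duty_cycle = SUSTAINED_RATE / ON_SPILL_RATE        # = 1/3
    cycle_length = SPILL_LENGTH / duty_cycle           # = 30 s

    # Excess data arriving during the spill must be buffered between
    # the front-ends and the readout computers.
    buffer_needed = (ON_SPILL_RATE - SUSTAINED_RATE) * SPILL_LENGTH

    print(f"duty cycle:    {duty_cycle:.2f}")
    print(f"cycle length:  {cycle_length:.0f} s")
    print(f"buffer needed: {buffer_needed / 1e9:.0f} GB system-wide")
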
The iFDAQ uses a hybrid FPGA-software approach: the event-building task is performed solely by FPGAs, whereas the software is responsible for system control, user interfaces, and configuration. The event builder consists of nine custom-designed FPGA cards, called Data Handling Cards (DHC), which comply with the ATCA/AMC standard and are equipped with 4 GB of DDR3 memory and 16 high-speed links. The event builder receives data from the front-end electronics via approximately 100 optical serial interfaces. It buffers and multiplexes the data, combines event fragments into complete events, and finally distributes them to eight readout computers via FPGA PCIe cards. All hardware nodes are synchronized by the Trigger Control System (TCS).
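
As a rough software analogue of the combining step that the DHC firmware performs in hardware, the sketch below merges fragments keyed by event number and, on timeout, pads missing sources with empty but correctly formatted frames (the error handling described in the next paragraph). All names and the frame layout are hypothetical.

    # Sketch of event building: fragments from N input links are keyed
    # by event number and merged into complete events. Frame layout and
    # names are hypothetical, not the actual DHC data format.
    from collections import defaultdict

    N_SOURCES = 8  # input links feeding this builder stage

    def empty_frame(event_id, source_id):
        """Correctly formatted placeholder for missing/corrupt data."""
        return {"event": event_id, "source": source_id, "payload": b""}

    class EventBuilder:
        def __init__(self, n_sources=N_SOURCES):
            self.n_sources = n_sources
            self.pending = defaultdict(dict)  # event_id -> {source: frag}

        def add_fragment(self, frag):
            """Accept one fragment; return the event once complete."""
            event = self.pending[frag["event"]]
            event[frag["source"]] = frag
            if len(event) == self.n_sources:
                return self.pending.pop(frag["event"])
            return None

        def flush(self, event_id):
            """On timeout, pad missing sources with empty frames so the
            downstream data flow is never interrupted."""
            frags = self.pending.pop(event_id, {})
            for src in range(self.n_sources):
                frags.setdefault(src, empty_frame(event_id, src))
            return frags
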
The firmware of the DHCs is designed to react to potential inadequacies of front-end modules, ensuring system stability and data integrity by throttling excessive rates and replacing wrongly formatted or missing data with empty but correctly formatted frames. Information about errors detected in the data stream is accessible to the software via a dedicated Ethernet network using the IPbus protocol.
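
IPbus has an open-source control stack (uHAL) with Python bindings; a minimal sketch of how DAQ software might read such an error counter from a DHC is shown below. The endpoint URI, address table, and register name are placeholders, not the actual COMPASS address map.

    # Reading a DHC error register over IPbus with the uHAL Python
    # bindings (ipbus-software). URI, address table, and register
    # name are placeholders.
    import uhal

    hw = uhal.getDevice("dhc0",
                        "ipbusudp-2.0://dhc0-ctrl:50001",
                        "file://dhc_address_table.xml")

    err_count = hw.getNode("evb.err_frame_count").read()  # queue read
    hw.dispatch()                                         # run transaction
    print(f"substituted frames since reset: {err_count.value()}")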

From 2016 onwards, all point-to-point high-speed links between the front-end electronics, the hardware event builder, and the readout computers will be routed through a fully programmable crosspoint switch. This allows the user to customize the network remotely and hence simplifies compensation for hardware failures and optimization of load balancing. In a second step, the intelligent hardware will itself recognize load imbalance and malfunctioning hardware nodes and will automatically take appropriate action. By distributing the necessary information synchronously via the TCS, the highly reliable, intelligent event builder can change its topology on the fly. A cost estimate for scaling the system up to 1 TB/s with currently available technology will be presented.
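
A toy model of what such a remote reconfiguration amounts to: each input link maps to an output port, and a failed node's traffic is spread round-robin over the surviving ports. Port names and the rebalancing policy are illustrative only.

    # Toy model of crosspoint-switch reconfiguration: reroute inputs
    # that pointed at a failed output port over the survivors.
    # Names and the round-robin policy are illustrative only.
    from itertools import cycle

    def rebalance(routing, failed_port):
        """Remap inputs of failed_port round-robin over live ports."""
        survivors = sorted(set(routing.values()) - {failed_port})
        spare = cycle(survivors)
        return {inp: (next(spare) if out == failed_port else out)
                for inp, out in routing.items()}

    # Six front-end links initially balanced over three readout ports:
    routing = {f"fe{i}": f"ro{i % 3}" for i in range(6)}
    routing = rebalance(routing, failed_port="ro1")
    print(routing)  # fe1 -> ro0, fe4 -> ro2; others unchanged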

Author

Dominik Steffen (Technische Universitaet Muenchen (DE))

Co-authors

Dmytro Levit (Technische Universitaet Muenchen (DE))
Igor Konorov (Technische Universitaet Muenchen (DE))
Josef Novy (Czech Technical University (CZ))
Martin Bodlak (Charles University (CZ))
Miroslav Virius (Czech Technical University (CZ))
Stefan Huber (Technische Universitaet Muenchen (DE))
Vladimir Frolov (Joint Inst. for Nuclear Research (RU))
Vladimir Jary (Czech Technical University (CZ))
Yunpeng Bai (Technische Universitaet Muenchen (DE))
