Summary
The data acquisition systems at the European X-ray Free Electron Laser (XFEL) facility will have to cope with data rates and beam structure timing comparable to those in particle physics experiments, and they employ similar components and techniques. The prototype Large Pixel Detector (LPD) under construction at the STFC Rutherford Appleton Laboratory for the European XFEL at DESY contains one million (1 Megapixel) sensor elements. The detector is constructed from 16 identical super-modules. A custom Front End Module (FEM), mounted on the rear of each super-module, provides readout and control functions. Each of the 16 FEMs is linked to the central XFEL data acquisition system (the Train Builder) by a 10 Gbps SFP+ fibre optic data link.
In normal operation the XFEL will generate trains of ~3,000 X-ray pulses (with a 220 ns spacing) at a train repetition rate of 10 Hz, resulting in data rates comparable with those of the next generation of particle physics facilities. The LPD will generate approximately 10 GByte/s of processed data per Megapixel.
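These figures imply a very bursty readout pattern; a back-of-envelope sketch using only the rates quoted above:

```python
# Back-of-envelope timing and rate figures from the numbers quoted above.
pulses_per_train = 3_000   # ~3,000 X-ray pulses per train
pulse_spacing_s = 220e-9   # 220 ns pulse spacing
train_rate_hz = 10         # 10 Hz train repetition rate

train_length_s = pulses_per_train * pulse_spacing_s   # duration of one pulse burst
pulses_per_second = pulses_per_train * train_rate_hz  # average pulse rate
duty_cycle = train_length_s * train_rate_hz           # beam-on fraction of time

print(f"train length : {train_length_s * 1e6:.0f} us")    # 660 us
print(f"pulse rate   : {pulses_per_second:,} pulses/s")   # 30,000 pulses/s
print(f"duty cycle   : {duty_cycle:.2%}")                 # 0.66%
```

The sub-1% duty cycle is the key point: data arrives in 660 µs bursts and must be drained in the long gaps between trains, which is what drives the buffered readout architecture described below.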
Each FEM card is interfaced to 128 pipeline ASICs (64K channels in total) on the super-module via high density connectors to a custom backplane and feed-through cards, which also supply power. In the ~99 ms period between X-ray bunch trains the ASICs output 128 digitised data streams, running at 100 MHz, which are processed in parallel by a single Xilinx Virtex-5 FPGA. Data is buffered in a standard DDR2 SODIMM memory module using the hardwired memory controllers in the FPGA. Three smaller Xilinx Spartan-3AN FPGAs provide I/O signal buffering and board configuration functions.
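The aggregate input bandwidth the Virtex-5 must absorb follows from the stream count and clock; a rough estimate, noting that the 12-bit sample width used here is an assumption for illustration and is not stated in the text:

```python
n_streams = 128          # digitised data streams from the ASICs
stream_clock_hz = 100e6  # 100 MHz per stream
bits_per_sample = 12     # ASSUMED sample width; not given in the text

aggregate_bits_per_s = n_streams * stream_clock_hz * bits_per_sample
aggregate_bytes_per_s = aggregate_bits_per_s / 8

print(f"{aggregate_bytes_per_s / 1e9:.1f} GB/s into the FPGA")  # 19.2 GB/s under this assumption
```

Whatever the actual sample width, the input rate is well above what a single 10 Gbps output link can carry, which is why on-board DDR2 buffering between readout and transmission is needed.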
The Virtex 5 FPGA is programmed with an algorithm to select the data from the LPD ASICs corresponding to the optimal signal to noise sample chosen from three pipeline gain settings on each pixel for each bunch. If necessary, the FEM logic also reorders the data stream back into proper bunch time order, to correct for the ASIC trigger pipeline logic, and formats the data according to XFEL standards. Immediately after the readout phase the FEM receives the XFEL clock and fast machine signals, such as the bunch fill pattern and trigger signals, and processes and distributes to the ASICs ready for processing the next bunch train.
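The selection and reordering steps can be modelled functionally in software. This is an illustrative sketch only: the saturation threshold, sample values and storage-cell emission order are invented for the example and are not details from the text.

```python
import numpy as np

def select_gain(high, med, low, sat=3800):
    """Per-pixel gain selection modelled as: keep the highest gain whose
    sample has not saturated. `sat` is an assumed saturation threshold."""
    value = np.where(high < sat, high, np.where(med < sat, med, low))
    gain = np.where(high < sat, 0, np.where(med < sat, 1, 2))
    return value, gain

def restore_bunch_order(images, emission_order):
    """Undo the ASIC pipeline readout order: the image emitted k-th
    belongs at time slot emission_order[k]."""
    restored = np.empty_like(images)
    restored[emission_order] = images
    return restored

# Three pixels: unsaturated, high-gain saturated, high- and medium-gain saturated.
high = np.array([100, 4000, 4095])
med = np.array([12, 500, 4000])
low = np.array([1, 62, 700])
value, gain = select_gain(high, med, low)
print(value)  # [100 500 700]
print(gain)   # [0 1 2]
```

The FPGA implementation operates on streaming data rather than arrays, but the per-pixel decision logic is the same shape: a pair of threshold comparisons selecting one of three samples.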
The 10 Gbps SFP+ optical data links are provided by DESY as industry standard VITA 57 pluggable FMC mezzanine cards; commercial FMC cards providing the same functionality are also now available. Data is transmitted over the optical links to the Train Builder using a standard transmission protocol, 10GbE UDP, implemented in the Virtex-5 FPGA. This enables standalone readout tests to a PC when a Train Builder system is not available. The DDR2 memory is used to balance the data flow from the ASICs by buffering the processed data prior to optical transmission at an aggregate rate of ~2 GByte/s. The memory module is sufficient to hold the data from several bunch trains and can therefore provide local buffering during standalone tests.
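The buffering requirement can be estimated from the figures above; a rough sketch, assuming the ~10 GByte/s detector output is split evenly across the 16 FEMs and ignoring protocol overhead:

```python
detector_rate_Bps = 10e9   # ~10 GByte/s processed data (from the text)
n_fems = 16
train_rate_hz = 10         # 10 Hz train repetition rate
inter_train_gap_s = 99e-3  # ~99 ms between bunch trains
link_Bps = 10e9 / 8        # 10 Gbps SFP+ link: upper bound on payload rate

per_fem_per_train_B = detector_rate_Bps / train_rate_hz / n_fems
drainable_per_gap_B = link_Bps * inter_train_gap_s

print(f"per FEM per train : {per_fem_per_train_B / 1e6:.2f} MB")  # 62.50 MB
print(f"drainable per gap : {drainable_per_gap_B / 1e6:.2f} MB")  # 123.75 MB
```

Under these assumptions one train's worth of data per FEM fits comfortably within what the link can drain before the next train arrives, so in normal operation the DDR2 SODIMM mainly smooths the burst; a module a few times larger than ~62.5 MB would also hold several trains for the standalone tests mentioned above.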
A standard GbE link provides an interface to the experimental slow controls system. Dual PowerPC 440 micro-controllers, embedded in the FPGA, are used to manage the DDR2 memory controller DMA engines and to operate the TCP/IP GbE link. As the FEM will be located inside the detector, remote in-situ reconfiguration of the FPGA bit file and software OS will also be implemented via the GbE link. Each FEM is also connected to the XFEL common clock and controls system by dedicated electrical links, which provide fast timing and bunch trigger information.
The first prototype FEM cards were manufactured in January 2011. Standalone bench tests show that the cards are performing to their specifications. The FEMs are now being used to evaluate the performance of the LPD detector modules.
The Train Builder system, also under design at STFC, will be implemented in the Advanced Telecommunications Computing Architecture (ATCA) industry standard. The Train Builder boards assemble the partial images from all the FEMs into complete bunch trains before passing them on to final data processing on a farm of PCs.
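Functionally, the assembly step amounts to collecting the 16 partial images belonging to each train before forwarding the complete set. An illustrative software model of that logic, with invented data layout and interfaces (the real system is ATCA hardware, not Python):

```python
from collections import defaultdict

N_FEMS = 16

def build_trains(fragments):
    """Collect (train_id, fem_id, partial_image) fragments, possibly
    arriving interleaved across trains, and yield (train_id, parts)
    once all 16 partial images for a train are present."""
    pending = defaultdict(dict)
    for train_id, fem_id, partial in fragments:
        pending[train_id][fem_id] = partial
        if len(pending[train_id]) == N_FEMS:
            parts = pending.pop(train_id)
            yield train_id, [parts[i] for i in range(N_FEMS)]

# Two trains whose fragments arrive interleaved, one FEM at a time:
frags = [(t, f, f"t{t}-fem{f}") for f in range(N_FEMS) for t in (0, 1)]
built = dict(build_trains(frags))
print(sorted(built))  # [0, 1]
```

The sketch highlights the essential design constraint: fragments from different FEMs arrive independently, so the builder must buffer partial trains keyed by train identifier until each set is complete.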