With the increasing need for scalable data acquisition systems, we see high potential in modular designs, not only for software but also for the hardware elements. On-detector electronics often send and receive data over point-to-point links connected to custom processing units. This proposal aims to minimize the need for custom devices by introducing an intermediate layer, based on commercial components, that decouples the point-to-point links from the data consumers and producers. This allows the latter to be implemented in software on commodity servers attached to the network. Such an approach is being actively researched in ATLAS for the future LHC upgrades, but it applies more generally to many data acquisition and high-performance computing architectures.
Using FPGAs at the decoupling layer makes it possible to support multiple point-to-point protocols, depending on the needs of the detector and its environment (e.g. radiation levels, data volumes, synchronization, reliability). Integrating the FPGAs into commodity servers (e.g. via PCIe) gives access to large amounts of inexpensive memory for buffering data, together with substantial computing resources to orchestrate the routing of data between the detector and the relevant end-points over high-speed networks.
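The dataflow described above can be sketched in a few lines: fragments arriving from the FPGA are staged in a large host-memory buffer, and a software router dispatches each fragment to a network end-point. This is a minimal illustration under stated assumptions; the class and function names, the routing rule, and the end-point names are all hypothetical, not part of any real ATLAS interface.

```python
# Hypothetical sketch of the decoupling-layer dataflow: an FPGA (stubbed
# here by plain push() calls) DMAs event fragments into host memory, and
# host software routes each fragment to the responsible end-point.
from collections import deque


class HostRingBuffer:
    """Commodity-server memory used to buffer fragments from the FPGA."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = deque()

    def push(self, fragment):
        # Reject the fragment when full; a real system would assert
        # back-pressure toward the FPGA instead.
        if len(self.slots) >= self.capacity:
            raise BufferError("back-pressure: buffer full")
        self.slots.append(fragment)

    def pop(self):
        return self.slots.popleft() if self.slots else None


def route(fragment, endpoints):
    """Pick the consumer end-point for a fragment (here: by event ID)."""
    return endpoints[fragment["event_id"] % len(endpoints)]


# Simulated operation: the FPGA side fills the buffer, the host side
# drains it and dispatches each fragment over the network (stubbed).
buffer = HostRingBuffer(capacity=1024)
endpoints = ["node-a", "node-b", "node-c"]   # commodity servers on the network
for event_id in range(6):                    # stand-in for DMA transfers
    buffer.push({"event_id": event_id, "payload": b"\x00" * 64})

dispatched = []
while (frag := buffer.pop()) is not None:
    dispatched.append((frag["event_id"], route(frag, endpoints)))

print(dispatched)
```

The point of the sketch is the decoupling itself: the buffer and router know nothing about the link protocol on the detector side, so the same software path can serve different point-to-point links.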
Our goal is to enable a wide variety of users to build data acquisition systems from these building blocks with more streamlined R&D efforts, and to emphasize the importance of scalable, flexible, uniform and upgradable TDAQ systems.
Topic | Keywords
---|---
Signal processing, data acquisition | FPGA, high-speed networks, modularity, high performance computing
System integration and engineering | Modular design, scalability, use of commercial components