The need for an unbiased analysis of large, complex datasets, especially those collected by the LHC experiments, is driving data acquisition systems towards designs in which predefined online trigger selections are reduced, if not removed altogether. Not only does this pose tremendous challenges for the hardware components, it also calls for new strategies in the online software infrastructure. Open-source Big Data tools could offer valuable solutions for the latter.
In view of the high-luminosity upgrade of the LHC, we developed a prototype online processing scheme for the CMS muon detectors (Drift Tube Chambers, DT) that streams signals from the front-end electronics at the LHC clock rate (40 MHz) and serves them through Apache Kafka to a remote Apache Spark cluster, where offline-quality reconstruction algorithms are run. Extensive tests have been carried out demonstrating the scalability of the system; in particular, the throughput expected from the DT chambers at the HL-LHC can be sustained by a computing cluster of size comparable to the current prototype. This setup has been used successfully in beam tests and will be deployed for parasitic operation in CMS during the next LHC Run.
Consider for promotion: No