Data acquisition systems are an integral part of their respective experiments and are designed to meet the needs set by the physics programme. Despite some very interesting architectural differences, the unprecedented data rates at the LHC have led to many commonalities among the four large LHC data acquisition systems. All of them rely on commercial local area network technology, mostly Gigabit Ethernet, to transport the data from the detector readout boards to large farms of industry-standard servers, where a pure software trigger is run. These four systems will be reviewed, the underlying commonalities will be highlighted, and interesting architectural differences will be discussed. In view of a possible LHC upgrade, we will briefly discuss the suitability and evolution of the current architectures to meet the needs of the SLHC.
Niko Neufeld studied experimental particle physics in Austria. He was awarded a grant to do his PhD at CERN in the DELPHI experiment and moved to Geneva in 1997. In 2000 he became a CERN fellow in the LHCb Data Acquisition group and has worked there since, mostly on event building, high-speed networks and fabric management. A CERN staff member since 2004, he is the deputy project leader of the LHCb Online system, responsible for the Online computing infrastructure and the data acquisition of LHCb.