Summary
The PHENIX Experiment began to exceed its original design data rates in 2004, when we switched to a compressed data format on disk and raised the maximum data rate to 600 MB/s. This was made possible by a distributed compression system, which uses the large number of CPUs in the event builder, rather than the logger machines, to compress the data. This switch, which has allowed us to run our Level-2 trigger in "tag-only" rather than in filter mode, has given us access to physics signals that one cannot normally trigger on due to the high multiplicity in heavy-ion collisions.
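
As a rough illustration of the distributed-compression idea, the sketch below fans event-buffer compression out across the CPUs of an event-builder node, so that the loggers only write already-compressed buffers. The thread layout, buffer sizes, and the use of zlib's compress2() here are illustrative assumptions, not the actual PHENIX code.

    // Sketch: compress assembled event buffers in parallel on the
    // event-builder CPUs instead of on the logger machines.
    // zlib's compress2() stands in for the real compression routine.
    #include <zlib.h>
    #include <cstddef>
    #include <cstdio>
    #include <thread>
    #include <vector>

    using Buffer = std::vector<unsigned char>;

    // Compress one raw event buffer; returns the compressed bytes.
    static Buffer compress_event(const Buffer& raw)
    {
        uLongf len = compressBound(raw.size());   // worst-case output size
        Buffer out(len);
        compress2(out.data(), &len, raw.data(), raw.size(), Z_BEST_SPEED);
        out.resize(len);                          // shrink to actual size
        return out;
    }

    int main()
    {
        // Stand-in for events arriving from the event builder.
        std::vector<Buffer> events(64, Buffer(512 * 1024, 0x42));
        std::vector<Buffer> compressed(events.size());

        // Fan the work out over all available CPUs; the logger then
        // only has to write the already-compressed buffers to disk.
        unsigned nthreads = std::thread::hardware_concurrency();
        if (nthreads == 0) nthreads = 4;
        std::vector<std::thread> pool;
        for (unsigned t = 0; t < nthreads; ++t) {
            pool.emplace_back([&, t] {
                for (std::size_t i = t; i < events.size(); i += nthreads)
                    compressed[i] = compress_event(events[i]);
            });
        }
        for (auto& th : pool) th.join();

        std::size_t in = 0, out = 0;
        for (std::size_t i = 0; i < events.size(); ++i) {
            in  += events[i].size();
            out += compressed[i].size();
        }
        std::printf("compressed %zu bytes to %zu bytes\n", in, out);
        return 0;
    }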
With the increased data rate, we had to implement managed access to the data. Compared to a traditional staging model, this has increased data-analysis throughput by an estimated factor of 30.
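
A minimal sketch of what such managed access could look like, assuming the manager's job is to cap the number of analysis jobs reading from each fileserver at once so that reads stay close to sequential instead of thrashing the disks; the DiskSlotManager class and its slot count are hypothetical, not the PHENIX implementation.

    // Sketch: limit concurrent readers per disk so analysis jobs queue
    // for access instead of competing for the same spindles.
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <thread>
    #include <vector>

    class DiskSlotManager {
    public:
        explicit DiskSlotManager(int slots) : free_(slots) {}

        void acquire() {                  // block until a read slot frees up
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return free_ > 0; });
            --free_;
        }
        void release() {
            { std::lock_guard<std::mutex> lk(m_); ++free_; }
            cv_.notify_one();
        }

    private:
        std::mutex m_;
        std::condition_variable cv_;
        int free_;
    };

    int main()
    {
        DiskSlotManager disk(2);          // at most 2 concurrent readers
        std::vector<std::thread> jobs;
        for (int j = 0; j < 6; ++j) {
            jobs.emplace_back([&, j] {
                disk.acquire();           // excess jobs wait here
                std::printf("job %d reading\n", j);
                disk.release();
            });
        }
        for (auto& t : jobs) t.join();
        return 0;
    }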
We will explain the technologies involved in the DAQ and analysis procedures, and give an overview of the strategies we will use to maintain our event rate in spite of the increased event sizes expected after a future upgrade.