TDAQ EoI

Europe/Zurich
42/R-001 (CERN)

Zoom Meeting ID: 68378966273
Host: Thorsten Wengler

    • 14:30 14:31
      Check Patrick Janot / Martin Aleksa mail 1m

      Specifically subscribe to this egroup: FCC-PED-DetectorConcepts-Allegro@cern.ch
      Nominate contact persons for ALLEGRO sub-detectors, including TDAQ

      Full email:

      Dear authors of the ALLEGRO EoI,

       

      There are a few points that we would like to bring to your attention: 

       

      1) Please find below a mail from P. Janot about the very positive outcome of the Open Symposium in Venice. It is great to see that there is very strong community support for the FCC. If you do not receive such mails yet and would be interested in receiving them in the future, please sign up to the mailing list FCC-PED-FeasibilityStudy@cern.ch.

       

      2) An email list for the ALLEGRO full-detector concept study group has been created: FCC-PED-DetectorConcepts-Allegro@cern.ch. Please subscribe to this list at https://e-groups.cern.ch/e-groups/EgroupsSearchForm.do. The traffic is going to be low. 

       

      3) As already announced during the FCC Week in May, we would like to nominate contact persons for the different ALLEGRO sub-detectors. These contact persons should channel the momentum from the different sub-detector EoIs that expressed interest in participating in the ALLEGRO concept, and oversee the progress of the different proposals made. We were thinking of the following categories:

       

      • Vertex Detector & Tracker & PID
      • ECAL (DRD6 WP2)
      • Solenoid & Cryostat
      • HCAL
      • Muon System
      • Luminometers
      • Trigger & Data Acquisition 

       

      These contact persons should also work closely with the corresponding Detector Subsystems Subgroups in the Detector Concepts Work-Package. 

      We would like to solicit nominations for these contact persons. Please send your nominations by email to Martin, Nicolas and Marc-André. Self-nominations are explicitly encouraged! If you have any questions or comments, please don't hesitate to contact us to discuss in person or via Zoom.

       

      Sorry for the rather long mail! 

      Please send us your nominations by August 31st and consider signing up to the above mailing lists! 

       

      Best regards,

      Martin, Marc-André, Nicolas

    • 14:31 14:32
      Rate and size info 1m

      The rate of events to be read out and recorded by the FCC-ee detector data acquisition systems is largest at the Z pole, and is dominated by Z production (∼100 kHz), low-angle Bhabhas (∼50 kHz), and γγ → hadrons events (∼30 kHz). While Bhabha events can be recorded in a dedicated stream, in which only the data from the luminometers are read out, the entire detector must be read out with a trigger rate in excess of 100 kHz.

      With 20 tracks on average in each hadronic Z decay, about 800 readout channels are expected to be hit in the vertex detector of CLD, each of which uses 13 bits to be encoded, corresponding to 1 kB per event. For an integration time of the VXD readout electronics of 1 µs (covering 50 bunch crossings at the Z pole), however, the 750 hits induced by the IPC background for each bunch crossing largely dominate the VXD data volume, with 50 kB per physics event. With a trigger rate of 100 kHz, the data that have to be read out amount to a bandwidth of 6 GB/s. The data volume for the IDEA vertex detector is not expected to be significantly different. The CLD tracker contributes similarly to the data volume: with 22 bits per read-out channel, the size of a hadronic Z decay amounts to about 1.4 kB, and the IPC background adds about 2 kB per BX. For a read-out time window of 1 µs, a bandwidth of 10 GB/s is needed to read out the tracker data.

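      A quick back-of-the-envelope check of these numbers (a sketch, not part of the quoted text; the input values and the helper below are illustrative, taken from the approximate figures given above):

          TRIGGER_RATE_HZ = 100e3  # trigger rate at the Z pole

          def bandwidth_gbps(event_size_bytes, rate_hz=TRIGGER_RATE_HZ):
              # Readout bandwidth in GB/s for a given event size at a given trigger rate.
              return event_size_bytes * rate_hz / 1e9

          # CLD vertex detector: ~750 IPC hits per BX, 50 BX in the 1 us integration window, 13 bits per hit
          vxd_bytes = 750 * 50 * 13 / 8
          print(f"VXD: {vxd_bytes / 1e3:.0f} kB/event, {bandwidth_gbps(vxd_bytes):.1f} GB/s")      # ~61 kB, ~6 GB/s

          # CLD tracker: ~1.4 kB per hadronic Z decay plus ~2 kB of IPC background per BX over 50 BX
          trk_bytes = 1.4e3 + 2e3 * 50
          print(f"Tracker: {trk_bytes / 1e3:.0f} kB/event, {bandwidth_gbps(trk_bytes):.1f} GB/s")  # ~100 kB, ~10 GB/s
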
      With a drift chamber, all digitised hits generated on the occurrence of a trigger are usually transferred to data storage. The IDEA drift chamber transfers 2 B/ns from both ends of all wires hit, over a maximum drift time of 400 ns. With 20 tracks/event and 130 cells hit for each track, the size of a hadronic Z decay in the DCH is therefore about 4 MB, corresponding to a bandwidth of 400 GB/s at the Z pole. The contribution from γγ → hadrons amounts to 60 GB/s. As reported in Section 7.4.4, the IPC background causes the read-out of an additional 1500 wires on average for every trigger (including a safety factor of 3), which translates into a bandwidth of 250 GB/s. A similar bandwidth is taken by the noise induced by the low single-electron detection threshold necessary for efficient cluster counting. Altogether, the various contributions sum up to a data rate of about 1 TB/s.

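      The drift-chamber raw-data figures follow from the same kind of arithmetic (again a sketch using the quoted values, not part of the source text):

          bytes_per_ns, max_drift_ns, wire_ends = 2, 400, 2
          cell_bytes = bytes_per_ns * max_drift_ns * wire_ends     # 1600 B of raw waveform per hit cell

          z_cells = 20 * 130                                       # 20 tracks x 130 cells per track
          z_bytes = z_cells * cell_bytes
          print(f"Hadronic Z decay: {z_bytes / 1e6:.1f} MB, {z_bytes * 100e3 / 1e9:.0f} GB/s")  # ~4.2 MB, ~416 GB/s

          ipc_bytes = 1500 * cell_bytes                            # ~1500 extra wires per trigger from IPC background
          print(f"IPC background: {ipc_bytes * 100e3 / 1e9:.0f} GB/s")                          # ~240 GB/s
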
      Reading out these data and sending them into an "event builder" would not be a challenge, but the data storage requires a large reduction. Such a reduction can be achieved by transferring, for each hit drift cell, only the minimum information needed by the cluster timing/counting, i.e. the amplitude and the arrival time of each peak associated with each individual ionisation electron (each encoded in 1 B), instead of the full signal spectrum. The data generated by the drift chamber, subsequently digitised by an ADC, can be analysed in real time by a fast readout algorithm implemented in an FPGA [506]. This algorithm identifies the peaks corresponding to the different ionisation electrons in the digitised signal, stores the amplitude and the time of each peak in an internal memory, filters out spurious and isolated hits, and sends these reduced data to the acquisition system on the occurrence of a trigger. Each cell hit integrates the signal of up to 30 ionisation electrons, which can thus be encoded within 60 B per wire end instead of the aforementioned 800 B. Because the noise and background hits are filtered out by the FPGA algorithm, the data rate induced by Z hadronic decays is reduced to 25 GB/s, for a total bandwidth of about 30 GB/s.

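      The size of this reduction can be sketched from the quoted figures (an upper bound, assuming every hit cell reaches the maximum of 30 electrons; not from the source text):

          peaks_per_wire_end = 30                                  # up to 30 ionisation electrons per hit cell
          reduced_bytes_per_end = peaks_per_wire_end * 2           # 1 B amplitude + 1 B time per peak -> 60 B
          raw_bytes_per_end = 2 * 400                              # vs 800 B of raw waveform per wire end
          print(f"Reduction per wire end: {raw_bytes_per_end / reduced_bytes_per_end:.0f}x")      # ~13x

          z_bytes_reduced = 20 * 130 * 2 * reduced_bytes_per_end   # both ends of all cells in a hadronic Z decay
          print(f"Hadronic Z decay: {z_bytes_reduced / 1e3:.0f} kB, {z_bytes_reduced * 100e3 / 1e9:.0f} GB/s upper bound")  # consistent with the quoted 25 GB/s
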
      The data volume delivered by the dual-readout calorimeter of the IDEA detector has also been estimated at the Z pole. On average, the calorimetric showers from hadronic Z decays are expected to hit 8 × 10⁵ fibres, with an assumed density of 50 fibres per cm². Single readout channels collect the signal from nine fibres and encode it in four words of 2 B, yielding a size of 0.7 MB per event and a bandwidth of 70 GB/s. The data volume corresponding to the CLD calorimeter is under investigation.

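      The calorimeter estimate follows directly from the quoted numbers (sketch):

          fibres_hit = 8e5                   # fibres hit per hadronic Z decay
          channels = fibres_hit / 9          # one readout channel per nine fibres
          calo_bytes = channels * 4 * 2      # four 2-byte words per channel
          print(f"Calorimeter: {calo_bytes / 1e6:.2f} MB/event, {calo_bytes * 100e3 / 1e9:.0f} GB/s")  # ~0.71 MB, ~71 GB/s
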
      In total, the data rate into the IDEA acquisition system is of the order of 100 GB/s, i.e. more than one order of magnitude lower than the amount of data that will be read out by ATLAS and CMS at the HL-LHC. On the other hand, the throughput to disk should be reduced by a factor of 5 to 10 in order to be comparable with the anticipated storage and processing capacities of ATLAS and CMS. Software algorithms, running in a computer farm, would have to be designed to achieve such a data reduction.

      A traditional hardware trigger system to select signal events of interest for physics analyses should not be too difficult to design in the clean FCC-ee environment. The expected accuracy of the physics measurements would require, however, the efficiency of this trigger system to be known with a precision of 10⁻⁵ at the Z pole, which may be an insurmountable challenge. The feasibility of a "trigger-less" scheme, as in LHCb, in which software algorithms perform the event selection using the full detector readout, is under investigation.

    • 14:32 14:52
      Update on background studies 20m
      Speakers: Aimilianos Koulouris (CERN), Rosa Simoniello (CERN), Stefano Franchellucci (Universite de Geneve (CH))
    • 14:52 15:12
      TDAQ workshop discussion 20m