The Compressed Baryonic Matter experiment (CBM) will study rare probes in a heavy-ion environment at high interaction rates of up to 10 MHz. The observation of detached vertices requires a topological trigger, which is realized in software. CBM opted for a free-running readout, for reasons similar to LHCb. The primary beam is delivered by a slow-extraction synchrotron. To be able to operate the experiment at the highest interaction rates despite beam-intensity fluctuations, a time-based throttling mechanism is under study. We compare different throttling strategies.
This study is based on the Silicon Tracking System (STS), the subsystem closest to the target. The readout tree of the STS comprises 16000 STS-XYTER ASICs, connected to about 100 Common Readout Interface cards (CRI), which are in turn interfaced to a global Timing and Fast Control system (TFC). Each ASIC comprises 128 readout channels, and each channel has a FIFO of 8 words. A FIFO-almost-full flag is asserted once 7 elements are filled. The ASIC counts the number of almost-full channels and reports a busy alert to its CRI once a programmable alert threshold is exceeded. Based on the busy information from all CRIs, the TFC decides whether CBM as a whole should be throttled. The CRIs then propagate system-dependent throttling instructions to the ASICs. This time-based throttling, which uses no trigger or event information, is designed to satisfy the limited-latency requirement of the TFC system.
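The per-ASIC busy-alert logic described above can be sketched as follows. This is a minimal illustration under assumed names (`AsicBusyLogic`, `fifoFill`, `busyAlert`); the actual STS-XYTER register interface and comparison semantics may differ.

```cpp
#include <array>
#include <cstddef>

// Sketch of the busy-alert logic: each of the 128 channels has an 8-word
// FIFO whose almost-full flag is asserted at 7 occupied words; the ASIC
// raises a busy alert once the count of almost-full channels reaches a
// programmable alert threshold. Names and layout are illustrative.
struct AsicBusyLogic {
    static constexpr std::size_t kChannels    = 128;
    static constexpr std::size_t kFifoDepth   = 8;
    static constexpr std::size_t kAlmostFullAt = 7;

    std::array<std::size_t, kChannels> fifoFill{};  // occupied words per channel
    std::size_t alertThreshold = 4;                 // programmable (assumed default)

    bool busyAlert() const {
        std::size_t almostFull = 0;
        for (std::size_t fill : fifoFill)
            if (fill >= kAlmostFullAt) ++almostFull;
        return almostFull >= alertThreshold;
    }
};
```

The CRI would poll or receive this flag from each ASIC and forward the aggregate to the TFC.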
The closed-loop simulation model comprises the event generator, the data-flow model and the result analysis. The data-flow model describes the relevant functionality of the ASICs, CRIs and TFC in SystemVerilog. It invokes the other stages, which are realized in C++/ROOT, via Linux shell calls. Each event corresponds to one collision and generates particles that are detected as hits; the hit rate equals the event rate multiplied by the number of hits per event.
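The event-generator stage and the hit-rate relation can be sketched as below. The struct and member names are assumptions for illustration; only the relation hit rate = event rate × hits per event is taken from the text.

```cpp
#include <random>

// Minimal sketch of a Poisson event generator: collisions arrive as a
// Poisson process at `eventRate`, each producing on average `hitsPerEvent`
// hits, so the average hit rate is eventRate * hitsPerEvent.
struct PoissonEventGenerator {
    double eventRate;       // collisions per second
    double hitsPerEvent;    // average hits per collision
    std::mt19937 rng{42};   // fixed seed for reproducibility

    // Gap to the next collision: exponentially distributed for a
    // Poisson arrival process.
    double nextEventGap() {
        std::exponential_distribution<double> gap(eventRate);
        return gap(rng);
    }

    double averageHitRate() const { return eventRate * hitsPerEvent; }
};
```

A fixed-rate generator, as used in the first verification step, would simply return a constant gap of 1/eventRate instead.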
The throttling parameters are the per-ASIC alert threshold and the fraction of ASICs reporting alerts. The total hit losses can be divided into controlled and uncontrolled losses. Uncontrolled losses come from FIFO overflow; controlled losses represent full events discarded when either the data input is stopped or complete events are cleared from the FIFOs.
The model is first verified using an event generator with a fixed hit rate: the output equals the input for hit rates up to the maximum bandwidth of the ASICs. With a Poisson event generator, the model begins to lose data once the average hit rate exceeds about 98% of the maximum bandwidth. These results confirm our expectations.
Two throttling strategies are compared. The first is to stop accepting new hits, drain the ASIC FIFOs, and then restart accepting hits. We select the maximum of 5 readout links per ASIC; the fastest drain time in the STS-XYTER is then about 20 us. The second strategy is to clear the FIFOs and re-enable data taking immediately. The two are referred to as the "Stop" and "Clear" strategies.
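The two strategies can be contrasted on a single-channel FIFO model, sketched below. The type and member names are hypothetical; "Stop" masks the input until the FIFO has drained over its readout links, while "Clear" flushes the FIFO and keeps taking data.

```cpp
#include <deque>
#include <cstddef>

// Illustrative single-channel FIFO under the two throttling strategies.
// Drain timing (the ~20 us figure) is not modeled here; drainOne() stands
// for one word leaving via a readout link.
struct ThrottledFifo {
    static constexpr std::size_t kDepth = 8;  // 8-word channel FIFO
    std::deque<int> fifo;
    bool masked = false;

    void applyStop()  { masked = true; }   // Stop: mask new hits, then drain
    void applyClear() { fifo.clear(); }    // Clear: flush, data taking continues

    void pushHit(int hit) {
        if (masked || fifo.size() >= kDepth) return;  // hit lost
        fifo.push_back(hit);
    }

    void drainOne() {
        if (!fifo.empty()) fifo.pop_front();
        if (fifo.empty()) masked = false;  // Stop: re-enable once drained
    }
};
```

Under "Stop", every hit arriving during the mask-and-drain phase is a controlled loss; under "Clear", the flushed FIFO contents are the controlled loss and the input is never masked.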
For both strategies, lower alert thresholds lead to fewer uncontrolled losses but more controlled losses, since throttling is activated more often; the total losses remain essentially the same.
Furthermore, for event reconstruction, random uncontrolled losses are much more harmful than block-wise controlled losses. We define the time windows of block losses as discard windows. For the Clear strategy, random losses are removed automatically when the FIFOs are cleared, so the discard windows coincide with the FIFO-clear windows. For the Stop strategy, the discard windows have to be extended beyond the hit-mask windows, since uncontrolled losses already occur before the hit masking.
Next, more complete evaluation criteria will be introduced, and realistic beam-intensity fluctuations as well as distributions of the bandwidth utilization of the ASICs will be added to the model.