15–19 Sept 2008
Naxos - GREECE
Europe/Athens timezone

The Associative Memory for the Self-Triggered SLIM5 Silicon Telescope

18 Sept 2008, 16:15
2h

Speaker

Francesco Crescioli (Univ. of Pisa + INFN Pisa)

Description

Modern experiments search for extremely rare processes hidden in much larger backgrounds. As experiment complexity, accelerator backgrounds, and luminosity increase, ever more complex and exclusive selections are needed to efficiently pick the rare events out of the huge background. We present a fast, high-quality, track-based event selection for the self-triggered SLIM5 silicon telescope. This is an R&D experiment whose innovative trigger will show that high rejection factors and manageable trigger rates can be achieved using fine-granularity, low-material tracking detectors. The system performance will be measured on a test beam, using noisy conditions to simulate high occupancy. This strategy requires massive computing power to minimize the online execution time of complex tracking algorithms. Affordable latencies and rates are provided by a dedicated device, the Associative Memory (AM). The time-consuming pattern-recognition problem, generally referred to as the "combinatorial challenge", is beaten by the AM exploiting parallelism to the maximum level: it compares the event to all precalculated "expectations" (pattern matching) at once. This approach reduces the typically exponential complexity of CPU-based algorithms to linear. The problem is solved by the time the data are loaded into the AM devices. We describe the AM-based trigger and its performance.

Summary

I. INTRODUCTION
Any L1 tracking strategy has to be conceived before the detector readout design is frozen, because the capability to trigger must be built directly into the detector. The SLIM5 experiment [1] brings together many innovative techniques to demonstrate the feasibility of a low-material silicon telescope, equipped with a continuous, data-driven readout [2] and a low-latency track-based trigger capability.
The key device to provide reconstructed tracks in a very short time is the Associative Memory [3] (AM) that associates the silicon hits from the 6 telescope layers into high spatial resolution track candidates.
The real challenge in a large detector is to make a large amount of tracker data available to the AM processor in a very short time. The latency strongly depends on the time necessary to load the data into the AM system. For this reason we send data to the trigger exploiting a parallel readout of the detector layers.
Each hit is identified by a word encoding both its position in the detector and its time stamp (bunch-crossing number). The engine is an AM chip into which all possible tracks have been previously loaded. Each stored hit pattern is provided with the logic necessary to compare itself with the event.
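The pattern-matching principle can be illustrated with a minimal Python sketch. This is a toy software model, not the chip's actual logic: the bank contents, bin IDs, and class names are illustrative, and the hardware performs all comparisons simultaneously rather than in a loop.

```python
# Conceptual model of associative-memory pattern matching.
# Each stored pattern is one coarse "road": a bin ID per detector layer.
# Hits arrive layer by layer; every stored pattern compares each hit to
# its own bin for that layer, and fires once all six layers have matched.

NUM_LAYERS = 6

class AMBank:
    def __init__(self, patterns):
        # patterns: list of 6-tuples of bin IDs (the precalculated "expectations")
        self.patterns = patterns
        self.matched = [[False] * NUM_LAYERS for _ in patterns]

    def load_hit(self, layer, bin_id):
        # In hardware every pattern checks the hit in parallel;
        # this loop only emulates that parallelism sequentially.
        for i, pat in enumerate(self.patterns):
            if pat[layer] == bin_id:
                self.matched[i][layer] = True

    def fired_roads(self):
        # A pattern "fires" (track candidate) when all six layers matched.
        return [i for i, m in enumerate(self.matched) if all(m)]

# Illustrative bank of three roads
bank = AMBank([(1, 2, 3, 4, 5, 6),
               (1, 2, 3, 4, 5, 7),
               (9, 9, 9, 9, 9, 9)])
for layer, bin_id in enumerate([1, 2, 3, 4, 5, 6]):
    bank.load_hit(layer, bin_id)
print(bank.fired_roads())  # only road 0 matches on every layer
```

Because every pattern checks each incoming hit at once, the work per event grows linearly with the number of hits, which is the complexity reduction claimed above.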
We plan to use the AM chip developed for the CDF experiment [4]. It runs at 50 MHz with six parallel input buses, each carrying 18-bit words.
II. THE TRIGGER ARCHITECTURE
The trigger system for SLIM5 consists of a single 9U VME board, called AMBslim. It is an evolution of two previous versions: it has (a) the very powerful input bandwidth (4 Gbit/s) of the first one, developed with FPGA-based AM chips for an LHC online track processor [5], and (b) the much more powerful pattern bank of the second version [6], developed with standard-cell AM chips for the CDF experiment.

The AMBslim has a modular structure, consisting of 4 smaller boards, the Local Associative Memory Banks (LAMB). Each LAMB contains 32 Associative Memory (AM) chips, 16 per face. The AM chips come in PQ208 packages, and contain the stored patterns with the readout logic. They are connected into four 8-chip pipelines on each LAMB. Found tracks flow down in the 4 pipelines and are collected and merged in a single stream by the GLUE chip placed on top of the LAMB.
The board's flexible control logic resides in a single, very powerful FPGA (a Virtex-II Pro XC2VP100 in a 1696-pin package [7]). The FPGA flexibility allows the same hardware to be used in different applications characterized by short- or long-latency trigger decisions. For long-latency applications, very large associative memory banks can be obtained by pipelining boards and AM chips. For low-latency applications, instead, we use only the AM chips directly connected to the GLUE, and different events can be assigned to different AM chips. SLIM5 needs a low-latency trigger decision, so we focus on this particular use of the AMBslim board.
III. AMBSLIM PROTOCOL
The AMBslim board receives the incoming hits on six input buses through the P3 connector, distributes them to all the AM chips on the board, and collects the matched tracks, sending them to the output through the P3 connector. All the input hit buses and the output track bus go through the AM control chip, which controls event synchronization and formatting.
An End Event word signals the end of hits and tracks belonging to a particular event. Each board input is provided with a deep FIFO for event synchronization. If, occasionally, a FIFO becomes "Almost Full", a HOLD signal is sent to the upstream board, which suspends the data flow until more FIFO locations become available. The Almost Full threshold is set to give the upstream board plenty of reaction time.
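The Almost Full / HOLD handshake can be sketched as a toy Python model. The FIFO depth and threshold below are illustrative values, not the board's actual parameters; the point is that HOLD is asserted while free locations remain, giving the upstream board time to react.

```python
# Toy model of the Almost Full / HOLD backpressure described above.
# Depth and threshold are illustrative, not the board's actual values.
from collections import deque

FIFO_DEPTH = 16
ALMOST_FULL = 12   # asserted early, leaving the upstream board reaction time

class SyncFifo:
    def __init__(self):
        self.fifo = deque()
        self.hold = False   # HOLD signal asserted towards the upstream board

    def push(self, word):
        # The upstream board must suspend the data flow while HOLD is asserted.
        assert not self.hold, "upstream violated HOLD"
        self.fifo.append(word)
        if len(self.fifo) >= ALMOST_FULL:
            self.hold = True

    def pop(self):
        word = self.fifo.popleft()
        if len(self.fifo) < ALMOST_FULL:
            self.hold = False   # locations available again: resume the flow
        return word

f = SyncFifo()
for w in range(ALMOST_FULL):
    f.push(w)
print(f.hold)   # True: HOLD asserted before the FIFO is actually full
f.pop()
print(f.hold)   # False: the data flow may resume
```

Asserting HOLD at a threshold below the physical depth is what gives the upstream board "plenty of reaction time": the remaining locations absorb words already in flight.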
When the AMBslim starts to process an event, the hits are popped in parallel from the six hit input FIFOs. Popped hits are simultaneously sent to the four LAMBs. In the SLIM5 application the AMBslim clock is 40 MHz, equal to the AM chip clock and each hit is sent to all the LAMBs in the same clock cycle. However, for more demanding conditions it is possible to allocate different LAMBs to different events and distribute only the right hits to each LAMB. If for the incoming hit distribution we use an FPGA clock four times more aggressive (200 MHz) than the AM chip clock, we can process different events in parallel and increase the trigger performances.
Data from different streams are checked for consistency: upon detection of mismatched event sequences, a severe error is asserted and the whole system must be resynchronized. As soon as hits are downloaded into a LAMB, locally matched tracks assert a request to be read out from the LAMB (Data Ready). When the end-event word is received on a hit stream, no more words are popped from that FIFO until the end-event word has been received on all hit streams. Once the event has been completely read out from the hit FIFOs, the LAMBs make the last matched tracks available within a few clock cycles. When all tracks have been read out, the AM bank of the completed event is reset so that a new event can be downloaded.
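The end-of-event synchronization across the hit streams can be modeled as follows. This is a simplified sketch: the `END` marker, the round-robin popping order, and the stream contents are illustrative, and the hardware drains all streams concurrently rather than in turn.

```python
# Toy model of end-of-event synchronization on the per-layer hit streams:
# once a stream delivers its End Event word, nothing more is popped from
# that FIFO until every stream has delivered one.
from collections import deque

END = "EE"   # illustrative End Event marker

def drain_event(streams):
    """Pop hits round-robin from the per-layer FIFOs until every stream
    has delivered its End Event word; return hits in pop order.
    Assumes each stream eventually contains an END word."""
    done = [False] * len(streams)
    hits = []
    while not all(done):
        for i, s in enumerate(streams):
            if done[i] or not s:
                continue
            word = s.popleft()
            if word == END:
                done[i] = True   # this stream is finished for the event
            else:
                hits.append((i, word))
    return hits

streams = [deque(["h1", END]), deque([END]), deque(["h2", "h3", END])]
print(drain_event(streams))  # hits tagged with their stream index
```

A stream that finishes early (like the empty middle layer above) simply stops contributing while the others are drained, which is exactly the behavior described for the hit FIFOs.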
Hits and tracks flow on a custom backplane through the P3 connector. We use the LVDS (Low-Voltage Differential Signaling) serializer-deserializer chips DS90CR287/288A from National Semiconductor to reduce the number of necessary connections while maintaining good noise rejection. Each chip serializes 28 TTL signals into 4 LVDS signals (8 connector pins), transmitted together with the synchronous clock to the receiving board.
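The 28:4 serialization scheme can be sketched numerically. The bit-to-lane mapping below is illustrative, not the device's actual assignment; the point is that 28 parallel bits split into 4 serial lanes of 7 bits per clock cycle, and the receiver inverts the split exactly.

```python
# Sketch of the 28:4 serialization scheme of the DS90CR287/288A pair:
# 28 parallel TTL bits split across 4 LVDS lanes, 7 bits per lane per
# clock cycle. The bit-to-lane mapping here is illustrative only.

def serialize(word28):
    # word28: 28-bit integer -> 4 lanes of 7 bits each
    return [(word28 >> (7 * lane)) & 0x7F for lane in range(4)]

def deserialize(lanes):
    # Receiver (DS90CR288A side) rebuilds the 28-bit parallel word.
    word = 0
    for lane, bits in enumerate(lanes):
        word |= bits << (7 * lane)
    return word

# Round trip preserves any 28-bit word
w = 0x0ABCDEF
assert deserialize(serialize(w)) == w
# Each lane fits in 7 bits, i.e. one LVDS pair running at 7x the clock
assert all(0 <= b < 128 for b in serialize((1 << 28) - 1))
```

Replacing 28 single-ended pins with 4 differential pairs (8 pins) plus a clock is what cuts the connector count by roughly a factor of three while keeping the differential signaling's noise immunity.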
IV. CONCLUSION
The SLIM5 trigger system provides track reconstruction in a six-layer silicon detector telescope, exploiting the detector's full resolution, with very low latency.
We present the trigger architecture and its performance.
V. REFERENCES
[1] http://www.pi.infn.it/slim5/
[2] G. Rizzo et al., “Recent Development on Triple Well 130 nm CMOS MAPS with In-Pixel Signal Processing and Data Sparsification Capability”, IEEE Nuclear Science Symposium Conference Record (NSS '07), vol. 2, 2007, pp. 927–930.
[3] M. Dell'Orso and L. Ristori, “VLSI structures for track finding”, Nucl. Instr. and Meth. A, vol. 278, 1989, pp. 436–440.
[4] A. Annovi et al., “A VLSI Processor for Fast Track Finding Based on Content Addressable Memories”, IEEE Trans. Nucl. Sci., vol. 53, no. 4, Aug. 2006, pp. 2428–2433.
[5] A. Annovi et al., “The fast tracker processor for hadron collider triggers”, IEEE Trans. Nucl. Sci., vol. 48, no. 3, June 2001, pp. 575–580;
A. Annovi, “Hadron collider triggers with high-quality tracking at very high event rates”, IEEE Trans. Nucl. Sci., vol. 51, no. 3, June 2004, pp. 391–400.
[6] A. Annovi et al., “The AM++ board for the silicon vertex tracker upgrade at CDF”, IEEE Trans. Nucl. Sci., vol. 53, no. 3, June 2006, pp. 1726–1731.
[7] http://www.xilinx.com

Primary author

Francesco Crescioli (Univ. of Pisa + INFN Pisa)

Co-author

SLIM5 Collaboration (Univ. + INFN)
