Implementation of a CANbus interface for the Detector Control System in the ALICE ITS Upgrade

5 Sept 2019, 16:55
20m
Poster | Programmable Logic, Design Tools and Methods | Posters

Speaker

Simon Voigt Nesbo (Western Norway University of Applied Sciences (NO))

Description

For the Long Shutdown 2 upgrade of the ALICE experiment, a new Inner Tracking System (ITS) is under development, based on the ALPIDE Monolithic Active Pixel Sensor (MAPS) chip.

Data readout from the ALPIDE chips is performed by 192 Readout Units (RU), which are also responsible for trigger distribution, monitoring, configuration, and control of the sensor chips. Monitoring and control of the experiment is performed by the Detector Control System (DCS), normally via the optical GBT links offered by the RUs. A CANbus interface is also provided as a backup, and this paper discusses the implementation of this interface.

Summary

For the Long Shutdown 2 (LS2) upgrade of the ALICE experiment at the CERN LHC, a new Inner Tracking System (ITS) is under development. The upgraded ITS will consist of 24120 ALPIDE pixel sensor chips arranged in seven cylindrical barrels, offering significantly improved tracking capabilities over the existing system, at higher event rates.

Data readout, trigger distribution, configuration, monitoring, and control of the sensor chips are performed by an SRAM-based Xilinx UltraScale FPGA, the main FPGA on each of the 192 Readout Unit (RU) boards. In addition, the RU boards have a flash-based Microsemi ProASIC3 FPGA and an external flash memory, which are used for configuration and scrubbing of the main FPGA.

The RU features custom radiation-hardened transceivers and ASICs from the LHC's GBT project for a radiation-hardened bidirectional optical link. It has three optical transceiver modules: one VTRx with a downlink for triggers (no uplink); another VTRx with a downlink for control and an uplink for data and control; and one VTTx with two additional data uplinks. These modules connect to three transceiver ASICs called GBTx, which are connected to the main FPGA.

The uplinks connect to the Common Readout Unit (CRU) in the First Level Processor (FLP). There are 24 CRUs for the ITS, each handling eight RUs. From the CRU/FLP, data go to the $\mathrm{O}^2$ (Online-Offline) system and control signals go to the Detector Control System (DCS), which is responsible for monitoring and control of every detector in ALICE. In the ITS this is performed by accessing the Wishbone bus in the main FPGA on the RUs. This access is implemented over GBT using a custom protocol.
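As a conceptual sketch only (the GBT control protocol is custom and not described in this abstract, so the transport interface, register address, and stand-in implementation below are hypothetical), DCS monitoring and control of the ITS reduces to reads and writes of Wishbone registers on the RU main FPGA, reachable over an interchangeable transport:

    # Conceptual only: neither the GBT control protocol nor the RU register
    # map is given in this abstract; everything below is a placeholder.
    from typing import Dict, Protocol


    class WishboneTransport(Protocol):
        """Abstract path for Wishbone register access on the RU main FPGA.

        A concrete implementation could tunnel the access over the GBT
        control link or over the CANbus backup discussed below.
        """

        def read(self, addr: int) -> int: ...
        def write(self, addr: int, data: int) -> None: ...


    class FakeTransport:
        """Dictionary-backed stand-in transport, for illustration only."""

        def __init__(self) -> None:
            self.regs: Dict[int, int] = {}

        def read(self, addr: int) -> int:
            return self.regs.get(addr, 0)

        def write(self, addr: int, data: int) -> None:
            self.regs[addr] = data


    # DCS-style monitoring reduces to register reads, whatever the transport.
    bus: WishboneTransport = FakeTransport()
    bus.write(0x0100, 42)   # 0x0100 is a made-up register address
    print(bus.read(0x0100))  # -> 42

The point of the abstraction is that the same register-level monitoring and control can be carried over more than one physical path, which is what makes the CANbus backup described next possible.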

To keep the operational conditions stable, the ITS staves should ideally be powered at all times, with temperatures and currents monitored by the DCS. Normally this is done over the GBT links, but these are not always available, for instance during maintenance periods with no beam, or in the unlikely case of radiation-related errors. The RU therefore also has a CANbus interface, with a High Level Protocol (HLP) for Wishbone access implemented on top of the CANbus frames, based on a protocol developed for the TOF detector in the STAR experiment.

The HLP uses standard CANbus frames with an 11-bit identifier. Eight of those bits carry the node ID (configurable by DIP switches on the RUs), and the remaining three bits indicate the type of command (e.g. read or write).
CANbus uses a multi-drop network topology, which allows several RUs to share the same bus and limits the amount of cabling. In the base CANbus protocol every node is in principle a master, but the HLP implements a request/response scheme in which the DCS effectively acts as a master and the RUs act as slaves that respond to its requests.
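A minimal sketch of this identifier layout and request/response pairing follows; it assumes the node ID occupies the upper eight bits and the command type the lower three, and the command codes and example node ID are illustrative assumptions, not taken from the RU firmware:

    # Pack/unpack the 11-bit HLP CAN identifier: 8 bits of node ID
    # (set by DIP switches on the RU) plus 3 bits of command type.
    # Assumption: node ID in the upper 8 bits, command in the lower 3;
    # the command codes below are illustrative placeholders.

    CMD_WRITE = 0b000
    CMD_READ = 0b001


    def encode_can_id(node_id: int, cmd: int) -> int:
        """Combine an 8-bit node ID and a 3-bit command into an 11-bit CAN ID."""
        assert 0 <= node_id < 256 and 0 <= cmd < 8
        return (node_id << 3) | cmd


    def decode_can_id(can_id: int) -> tuple:
        """Split an 11-bit CAN ID back into (node_id, cmd)."""
        return (can_id >> 3) & 0xFF, can_id & 0x7


    # Example request/response matching: the DCS (acting as master) sends a
    # read request addressed to node 0x2A; the responding RU (slave) carries
    # the same node ID, so the DCS can pair responses with its requests.
    request_id = encode_can_id(node_id=0x2A, cmd=CMD_READ)
    print(f"CAN ID 0x{request_id:03X} -> node/cmd {decode_can_id(request_id)}")

In such a scheme, each RU on the shared bus would simply ignore frames whose node ID field does not match its own DIP-switch setting, which is what allows a single multi-drop cable to serve several RUs.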

This paper presents the implementation of the CANbus HLP in the RU, as well as the measures taken to protect the interface logic in the FPGA against radiation-induced upsets.

Primary author

Simon Voigt Nesbo (Western Norway University of Applied Sciences (NO))
