23rd IEEE Real Time Conference

Virtual Conference

The IEEE Nuclear and Plasma Sciences Society (NPSS) and its technical committee, Computer Applications in Nuclear and Plasma Sciences (CANPS), are pleased to announce the 23rd Real Time Conference. RT2022 is an interdisciplinary conference focusing on the latest developments in real-time computing, data acquisition, transport, and processing techniques in related fields including nuclear and particle physics, nuclear fusion and plasma science, medical physics and imaging, astrophysics and space instrumentation, as well as accelerators and control systems.

Due to the lingering complications of international travel during the ongoing pandemic, the upcoming RT2022 conference will be held in a fully virtual format. The committee hopes that this will facilitate participation for both regular and potential new attendees. Like previous editions, RT2022 will consist primarily of plenary sessions. In addition, recorded mini-orals will be offered for all accepted poster submissions. All papers submitted to the conference will have the option of peer review and, if accepted, will be published in a special edition of the IEEE Transactions on Nuclear Science.

The conference will provide an opportunity for scientists, engineers, and students from all over the world to share the latest research and developments. The event also attracts relevant industries, integrating a broad spectrum of computing applications and technology.




  • ABBOTT, David
  • AI, Pengcheng
  • AMAUDRUZ, Pierre-Andre
  • BABA, Hidetada
  • BERGMANN, Benedikt
  • BOHM, Christian
  • CAMPLANI, Alessandra
  • CAPRA, Andrea
  • CARPENO, Antonio
  • CARRIO ARGOS, Fernando
  • CHEN, Qiangjun
  • CHEN, Tao
  • CHENG, Ka Yung
  • CIFRA, Pierfrancesco
  • COSTA, Víctor
  • CUEVAS, Chris
  • DENG, YunQi
  • FENG, Miaohui
  • FEO, Mauricio
  • FUCHS, Marvin
  • FURLETOV, Sergey
  • GERATHY, Matthew
  • GIBBS, Thomas
  • GOODRICH, Michael
  • GRECO, Michela
  • GROSSMANN, Martin
  • GU, William
  • GUAZZONI, Chiara
  • GYURJYAN, Vardan
  • HARRISON, Beau
  • HE, Rui
  • HEGEMAN, Jeroen
  • HENNIG, Wolfgang
  • HERWIG, Christian
  • HINO, Ricardo
  • HONDA, Ryotaro
  • HU, Peng
  • HU, Yuxiao
  • HUANG, Jin
  • HUANG, Yi
  • HUANG, Yuliang
  • IGARASHI, Youichi
  • INOUE, Tomonao
  • ITOH, Ryosuke
  • JIANG, Lin
  • JINGJUN, Wen
  • JU, Huang
  • KLEINES, Harald
  • KOLOS, Serguei
  • KUMAR, Piyush
  • KUNIGO, Takuto
  • KUNJIR, Shrijaj
  • KÖPPEL, Marius
  • LAI, Yun-Tsung
  • LARSEN, Raymond
  • LAWRENCE, David
  • LE DÛ, Patrick
  • LERAY, Jean-Luc
  • LEVINSON, Lorne
  • LEVIT, Dmytro
  • LI, Jiaming
  • LI, Nan
  • LI, Qicai
  • LI, Xianqin
  • LI, Yixin
  • LI, Yukun
  • LIANG, Shunyi
  • LIAO, Shun
  • LIU, Chao
  • LIU, Ming Xiong
  • LUCHETTA, Adriano Francesco
  • MA, Yubo
  • MANSOUR, El Houssain
  • MANTILLA SUAREZ, Cristina Ana
  • MASTROIANNI, Stefano
  • MENDES CORREIA, Pedro Manuel
  • MITREVSKI, Jovan
  • MIYAOKA, Robert
  • MÜLLER, Martin
  • NEUFELD, Niko
  • NING, Zhe
  • NIU, Xiaoyang
  • OLIVEIRA, João
  • PARK, Seokhee
  • PERRI, Marco
  • PISANI, Flavio
  • PURSCHKE, Martin
  • QU, Huifan
  • RAYDO, Benjamin
  • RIVILLA, Daniel
  • ROSS, A C
  • RUIZ, Mariano
  • SAKUMA, Yasutaka
  • SATO, Yutaro
  • SCHACHT, Jörg
  • SHAYAN, Helmand
  • SONG, Chunxiao
  • SONG, Jianing
  • SONG, Shuchen
  • SPIWOKS, Ralf
  • TAKAHASHI, Tomonori
  • TAKESHIGE, Shoko
  • TARPARA, Eaglekumar
  • TENG, Yao
  • THAPA, Shiva Bikram
  • TIMMER, Carl
  • TREVISAN, Luca
  • VASILE, Matei
  • WANG, Hui
  • WANG, Xiaohu
  • WANG, Yuting
  • WU, Qi
  • WU, Zibing
  • XIAOWEI, Guo
  • XIE, Xiaochuan
  • XIONG, Binqiang
  • XU, Weiye
  • YAMADA, Satoru
  • YANAGIHARA, Kunitoshi
  • YANG, Haibo
  • YECK, James
  • YIN, Rui
  • YU, Peng
  • ZAHN, Felix
  • ZHANG, Honghui
  • ZHANG, Jie
  • ZHANG, Yuezhao
  • ZHAO, Cong
  • ZHENG, Ran
  • ZHONG, Ke
  • ZHOU, Shiqiang
    • Invited Talk: Atmospheric Neutron Effects in Advanced Microelectronics, Standards and Applications
      • 1
        Neutron Single Event Effects In Advanced Micro-Nano-Electronics - Test Standards And Applications In Avionics, In Physics Or Fusion Research and Radiotherapy Facilities

        Since the 1980s and 1990s it has been known that terrestrial cosmic rays, mainly reported as ‘atmospheric neutrons’, as well as muons and protons of cosmic origin, can penetrate the natural shielding of buildings, equipment and circuit packages. For example, the flux of thermal to high-energy neutrons of interest ranges from ~10 particles·cm⁻²·h⁻¹ at sea level to ~10⁴ cm⁻²·h⁻¹ at a typical airplane flight altitude of 40,000 feet. These particles can trigger soft errors and hard errors such as latch-up in integrated circuits and breakdown of power devices. Moreover, the shrinking of elementary transistor dimensions (pitch and gate length) has reduced the stored charge per bit of information, making devices more and more susceptible.

        From a real-time viewpoint, this evolution brought gate delays down to a few picoseconds and dramatically increased signal bandwidths to Gbit/s and beyond. As a consequence, the downscaling of device design eroded signal-integrity margins and made single event effects more potent, because ever more minute ionizing transients are captured as parasitic signals. We present a simple model resulting in an evolutionary diagram of the signal charge, from micrometer and submicrometer to deca-nanometer scales, compared to the transient charge induced by an ionization track.

        The dramatic growth of DRAM and SRAM memory sizes (MB, GB, TB) has caused the resulting failure-in-time (FIT) and soft error rates at circuit and system level to become unacceptable in standalone or embedded processors, FPGAs and systems-on-chip. Test standards and design solutions have been proposed to assess and maintain the reliability of commercial products, in particular those used in stringent applications: mainly avionic flight computers, but also large hadron collider physics and fusion research facilities, and even some radiotherapy areas and high-end computing and routing.

        Speaker: LERAY, Jean-Luc (CEA/Consultant)
    • Mini Oral - I

      followed by Poster Session A

    • DAQ System & Trigger - I
      • 2
        System Design and Prototyping for the CMS Level-1 Trigger at the High-Luminosity LHC

        For the High-Luminosity Large Hadron Collider era, the trigger and data acquisition system of the Compact Muon Solenoid experiment will be entirely replaced. Novel design choices have been explored, including ATCA prototyping platforms with SoC controllers and newly available interconnect technologies offering serial optical links at data rates up to 28 Gb/s. Trigger data analysis will be performed by sophisticated algorithms, including the widespread use of machine learning, in large FPGAs such as the Xilinx UltraScale family. The system will process over 50 Tb/s of detector data with an event rate of 750 kHz. The system design and prototyping are described and examples of trigger algorithms are reviewed.

        Speaker: KUMAR, Piyush (University of Hyderabad, India)
      • 3
        Timing reference distribution in CMS iRPC time measurement

        The improved Resistive Plate Chambers (iRPC) will be installed in the challenging forward region during the CMS Phase II upgrade, with new Front-End Electronics Boards (FEBs) that record the rising edge of signals from both strip ends with a Time-to-Digital Converter (TDC) [1]. The TDC data for each end carries two parts of time information: the coarse time (2.5 ns steps) and the fine time (around 10 ps, i.e. 2.5 ns/256). The backend provides fast control (timing reference distribution, reset, clock, etc.) and slow control (powering up, FEB TDC configuration, etc.) for the FEBs.
        The Bunch Crossing 0 (BC0) signal, used as the timing reference, is distributed to each FEB located in the different RPC stations by the Backend Electronics Board (BEB). The arrival times of BC0 differ between links because of their different lengths.
        This paper discusses the measurement and comparison of these latencies, and a way to minimize the differences by applying an adjustable correction value. A 12-bit counter with a 2.5 ns step, implemented in the BEB, measures the loopback time of each link through a self-defined handshake mechanism. Taking the fastest link as the reference, a correction value is assigned to each slower link according to its loopback time via slow control; the TDC data on each FEB is then reduced by this link-specific correction value, minimizing the latency difference between paths. A test system was set up, and the preliminary results show that the loopback time measurement mechanism satisfies the requirements.
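        As a rough sketch of the correction scheme described above (our illustration only; the function names, example numbers, and the assumption that the one-way latency difference is half the loopback difference are hypothetical, not from the paper):

```python
COARSE_STEP_NS = 2.5  # one step of the 12-bit loopback counter in the BEB

def corrections(loopback_steps):
    """Per-link correction in coarse steps, relative to the fastest link."""
    reference = min(loopback_steps.values())
    # Assumption: one-way latency difference is half the loopback difference.
    return {link: (t - reference) // 2 for link, t in loopback_steps.items()}

def align_tdc(tdc_coarse, link, corr):
    """Subtract the link-specific correction from a coarse TDC value."""
    return tdc_coarse - corr[link]

loopback = {"feb0": 40, "feb1": 46, "feb2": 52}  # measured loopback steps per link
corr = corrections(loopback)  # feb0 is the fastest link, so its correction is 0
```

        Applying `align_tdc` to every hit on a FEB shifts all links onto the common time base of the fastest link.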

        [1]CMS collaboration, The Phase-2 Upgrade of the CMS Muon Detectors, CERN-LHCC-2017-012, CMS-TDR-016 (2017).

        Speaker: SONG, Jianing (Institute of High Energy Physics, Chinese Academy of Sciences (CN))
      • 4
        Commissioning and operation of the upgraded Belle II DAQ system with PCI-express-based high-speed readout

        The Belle II experiment and the SuperKEKB collider are designed to operate at a higher luminosity than Belle, to improve studies of rare B meson decays and searches for new physics. A new PCI-Express-based high-speed readout board (PCIe40) has been adopted for the upgrade of the DAQ system, in order to remove the bandwidth bottleneck and to improve operational stability. The new system, including the PCIe40 firmware, slow control and readout software, has been validated. The readout upgrade is complete for the particle-identification detectors and the neutral kaon and muon detector in Belle II, and the global DAQ has been operating stably with the new system. The commissioning of the new PCIe40 system with the sub-detectors, and the performance of global DAQ operation, will be reported.

        Speaker: Dr LAI, Yun-Tsung (Univ. of Tokyo, Kavli IPMU)
      • 5
        ZeroMQ-Based Online ROOT-Output Storage and Express-Reconstruction System for the Belle II Experiment

        The Belle II experiment started taking data on electron-positron collisions provided by the SuperKEKB accelerator, with all subdetectors, in March 2019. The Belle II detector consists of seven subdetectors, and data from each subdetector are serialized by event builders. The maximum trigger rate at the detector is designed to be 30 kHz. The collected data are filtered by a software-based trigger system and stored on online storage. Downstream of the online storage, an express-reconstruction system provides semi-realtime data quality monitoring and an event display. In this presentation, we present the ZeroMQ-based online storage and express-reconstruction framework that achieves a stable data flow during operation. Furthermore, a new storage system capable of storing data directly in ROOT format is introduced, which helps reduce both the network bandwidth for output file transfer from online to offline storage and the overhead on offline computing resources.

        Speaker: PARK, Seokhee
    • Poster Session - A
    • DAQ System & Trigger - II
      • 6
        Preliminary Design of CDEX-100 DAQ Architecture

        This paper introduces the DAQ architecture of the CDEX-100 experiment and preliminary test results. The goal of CDEX-100 is the direct detection of WIMPs and of $^{76}Ge$ $0\nu\beta\beta$; it comprises 100 kg of HPGe detectors deployed in a large liquid nitrogen thermostat located in the CJPL-II underground laboratory. We have designed a DAQ system responsible for waveform digitization, data triggering, time synchronization and data uploading to satisfy the requirements of CDEX-100. The system consists of a data acquisition and timing chassis conforming to the CPCI standard and a PCIe readout card for data dumping. Each chassis provides 128 sampling channels (125 MSPS, 16-bit) and up to 80 Gbps of upload bandwidth. The number of sampling channels and the triggering methods can be flexibly configured by changing the number of chassis. In addition, each chassis contains a trigger/timing board that uses a White Rabbit (WR) module to fan out synchronization clocks to the other boards in the same chassis through the backplane. Finally, the collected data from all chassis are transmitted to the PCIe acquisition card through optical fiber and saved on the storage servers.
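        A back-of-envelope check (our arithmetic, not from the paper) shows why on-chassis triggering and data reduction are essential: the continuous raw digitizer output far exceeds the 80 Gbps uplink.

```python
CHANNELS = 128          # sampling channels per chassis
SAMPLE_RATE = 125e6     # 125 MSPS per channel
BITS_PER_SAMPLE = 16
UPLINK_GBPS = 80        # maximum upload bandwidth per chassis

raw_gbps = CHANNELS * SAMPLE_RATE * BITS_PER_SAMPLE / 1e9  # continuous raw rate
reduction = raw_gbps / UPLINK_GBPS  # minimum data-reduction factor required
```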

        Speaker: WEN, Jingjun
      • 7
        The SABRE South Data Acquisition System

        The SABRE (Sodium-iodide with Active Background REjection) South experiment, located at the Stawell Underground Physics Laboratory (SUPL) in Australia, aims to measure an annual modulation in dark-matter interactions using ultra-high-purity NaI(Tl) crystals. In partnership with the SABRE North effort at the Gran Sasso National Laboratory (LNGS), SABRE South is designed to disentangle any seasonal or site-related effects from the dark matter-like modulated signal observed by DAMA/LIBRA in the Northern Hemisphere.

        SABRE South is instrumented with 7 ultra-high-purity NaI(Tl) crystals surrounded by a liquid scintillator veto and covered by 8 plastic scintillator muon detectors. Each NaI(Tl) crystal and muon detector is coupled to 2 photomultiplier tubes (PMTs), and a further 18 PMTs are used to detect interactions in the liquid scintillator, giving a combined total of 48 channels. The SABRE South DAQ uses a number of CAEN digitizers to acquire data from all of these channels, while a CAEN logic unit is used to trigger data acquisition. These are controlled and monitored using custom software that interfaces with EPICS. In addition, control and monitoring of the PMT voltage supplies, environmental sensors, and calibration tools have also been integrated into this system.
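        The quoted channel count can be tallied directly from the detector description above:

```python
CRYSTALS = 7        # ultra-high-purity NaI(Tl) crystals
MUON_PANELS = 8     # plastic scintillator muon detectors
PMTS_EACH = 2       # PMTs per crystal and per muon panel
VETO_PMTS = 18      # PMTs viewing the liquid scintillator veto

channels = (CRYSTALS + MUON_PANELS) * PMTS_EACH + VETO_PMTS  # 48 channels total
```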

        In this presentation, the design and status of the SABRE South DAQ will be discussed.

        Speaker: GERATHY, Matthew
    • Control, Monitoring, Diagnostics, Safety, Security - I
      • 8
        A temperature calibration method using dynamic energy windows for the online blood radioactivity meter

        In nuclear medicine, an online blood radioactivity meter measures the radioactivity in a blood vessel to form the arterial input function in a dynamic PET study. Such a radioactivity meter is composed of scintillation crystals and SiPMs. The accuracy of the activity measurement varies with temperature due to the temperature dependence of the SiPM gain and of the crystal light output. We propose an algorithm, based on a floating energy window, to correct the effect of temperature changes on photon counts.
        The algorithm calculates the energy window from the real-time energy spectrum: it finds the peaks of the spectrum, fits them with Gaussian functions, and uses the intersection points of the Gaussians to determine the energy window. The radioactivity meter then outputs the number of coincidence events within this floating energy window, after corrections for radioactive decay and detection sensitivity.
        When a long-half-life 22Na source is measured, the proposed method shows a coefficient of variation of 0.0037, while the fixed energy window method and the normalized energy window method yield 0.0946 and 0.0067, respectively. For the commonly used 18F-FDG, the proposed method outperforms the other two, with a smaller error during periods of rising temperature; the relative error remains below 5% over the first half-life of the source.
        These experimental results show that the proposed method is a simple and robust way to calibrate the online blood radioactivity meter, keeping the readout stable as the temperature varies.
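        The peak-tracking step can be sketched as follows (our reconstruction for illustration only, not the authors' code; the peak parameters are invented): fit Gaussians to two neighbouring spectrum peaks and take their crossing point as a window boundary, so the window follows gain drift with temperature.

```python
import math

def gauss(x, amp, mu, sigma):
    """Evaluate a Gaussian peak fitted to the energy spectrum."""
    return amp * math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def intersection(p1, p2, lo, hi, steps=10000):
    """Numerically locate where two fitted Gaussians cross within [lo, hi]."""
    best_x, best_gap = lo, float("inf")
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        gap = abs(gauss(x, *p1) - gauss(x, *p2))
        if gap < best_gap:
            best_x, best_gap = x, gap
    return best_x

# Two hypothetical fitted peaks (amplitude, mean, sigma) in ADC units.
photopeak = (1000.0, 511.0, 20.0)
escape = (400.0, 400.0, 25.0)
boundary = intersection(escape, photopeak, 400.0, 511.0)
# 'boundary' becomes the lower edge of the floating energy window.
```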

        Speaker: Ms QU, Huifan (Shanghai Jiao Tong University)
      • 9
        Automatic recovery system in the Belle II operation

        The Belle II experiment is designed to search for physics beyond the standard model of particle physics exploiting the large number of B meson decays produced by the SuperKEKB accelerator.
        We have maintained a good data-taking efficiency since the start of operation in March 2019.
        Nevertheless, we have encountered various problems and errors during operation.
        The instantaneous luminosity of SuperKEKB is gradually increasing; in this phase, we occasionally suffer from large background rates induced by the beam operation.
        For example, we have encountered problems in the nodes of the online software trigger, in the readout PCs of each detector component, and in the network connections among the components.
        In order to minimise the time without data-taking, it is essential to establish a system that diagnoses errors and performs the proper recovery actions quickly and automatically.
        We adopted the Elastic Stack to achieve this goal; it consists of 1) a distributed analysis engine, 2) data-feeding applications, and 3) a web interface to visualise the data in the analysis engine.

        This automatic recovery system is still under development, with new functionality being implemented; we present the current status of the system together with the difficulties experienced during operation and the future plan.

        Speaker: KUNIGO, Takuto (KEK (IPNS))
    • Invited Talk: Electron-Ion Collider
      • 10
        Electron-Ion Collider Project Update

        The Electron-Ion Collider (EIC), a powerful new facility to be built in the United States at the U.S. Department of Energy’s Brookhaven National Laboratory (BNL) in collaboration with Thomas Jefferson National Accelerator Facility (TJNAF), will explore the most fundamental building blocks of nearly all visible matter. Its focus is to reveal how these particles interact to build up the structure and properties of everything we see in the universe today, from stars to planets to people. BNL and TJNAF are working with domestic and international partners to deliver the $2.5B EIC construction project. The EIC Project Director, Jim Yeck, will provide an update on the project’s current progress including the design and schedule.

        Speaker: Mr YECK, Jim (Brookhaven - IEC)
    • Control, Monitoring, Diagnostics, Safety, Security - II
      • 11
        Calibration facility for detector strings for the KM3NeT/ARCA neutrino telescope at the CACEAP laboratory (Caserta)

        KM3NeT is a network of submarine Cherenkov neutrino telescopes under construction at two different sites in the Mediterranean Sea. ARCA, near Sicily in Italy, is optimized for the detection of cosmic neutrinos, while ORCA, near Toulon in France, is optimized for atmospheric neutrinos.
        ARCA and ORCA are both arrays of thousands of optical sensors (Digital Optical Modules, DOMs), each made of 31 small photomultipliers (PMTs) housed inside a glass sphere, which detect the Cherenkov light produced by the secondary particles generated in neutrino interactions. 18 DOMs are arranged on flexible strings, namely vertical Detection Units (DUs), anchored to the sea floor. Once completed, ARCA and ORCA will consist of 230 and 115 DUs respectively. Each DOM of a string communicates at a dedicated wavelength with the shore station via a network of optical fibers, transmitting optical and acoustic signal information as well as orientation information.
        Before deployment, each DU is tested and calibrated in a dark room. The test bench is equipped with a full data acquisition system for communication, data processing and time synchronization. Several steps are needed to accomplish the DU calibration, including HV tuning of the PMTs, checking of the acoustic receivers and calibration light sources in the DOMs, and time calibration using laser signals distributed to all DOMs.
        Here we describe the DU test and calibration facility at the CACEAP Laboratory in Caserta, focusing on the functional tests and calibrations performed at the end of DU integration.

        Speaker: Dr MASTROIANNI, Stefano (INFN NA)
      • 12
        Data Acquisition system for real time operation of the CGEM Inner Tracker

        An innovative cylindrical gas electron multiplier (CGEM) is under construction to replace the inner drift chamber of the BESIII experiment, which is suffering from aging. By the end of 2019, two of the three CGEM layers had been built and shipped to the Institute of High Energy Physics in Beijing. Due to the SARS-CoV-2 pandemic, the system has been remotely controlled since January 2020.
        A dedicated readout chain has been developed for data acquisition. Signals from the detector strips are processed by TIGER, a custom 64-channel ASIC that provides analog charge readout via a fully digital output. TIGER continuously transmits over-threshold data in triggerless mode, has a linear charge readout up to about 50 fC, less than 3 ns jitter, and a power dissipation below 12 mW per channel. An FPGA-based off-detector module (GEMROC) was developed specifically to interface with TIGER. The module configures the ASICs and organizes the incoming data, assembling the event packets when the trigger arrives.
        A full graphical user interface (GUFI) was developed in Python for testing, debugging and operating the entire system. This interface provides all the functions needed to run the electronics, including data acquisition and storage. Data, along with environmental and detector metrics, are continuously collected in a database and queried via a Grafana dashboard, to ensure operational reliability and verify data quality.
        The entire system was designed with modularity, versatility, and scalability in mind. We present the data acquisition system of the CGEM detector, with special attention to online monitoring and fast real-time analysis.

        Speakers: GRECO, Michela (Universita e INFN Torino (IT)), CIBINETTO, Gianluigi (INFN Ferrara)
      • 13
        The SPIDER Pulse Plant Configuration Environment

        At the ITER Neutral Beam Test Facility, the Source for the Production of Ions of Deuterium Extracted from Radio frequency plasma (SPIDER) has been in operation since 2018 aiming at prototyping the heating and diagnostic neutral beam appliances in view of the ITER demanding requirements for plasma burning conditions and instabilities control.

        For the sake of safety, machine protection and efficiency, it is necessary to follow an accurate planning and approval strategy for the experiment parameter settings. Although the initial tools available for the SPIDER integrated commissioning and early campaigns offered the basic functionality to perform the necessary tasks, a set of relevant issues was identified as needing improvement for an efficient and safe configuration environment. Namely, the absence of tools indicating which parameters had changed since the previous pulse, and the lack of a tool for comparing a new set of parameters against a previous setup, demanded a tedious and error-prone verification of all parameters in the sequence. Moreover, several verifications should be automated according to the machine safety limits, strengthening the safety checks with both human and automated (algorithmic) validation.

        This contribution describes (i) the approval sequence designed for safe SPIDER operation; (ii) the configuration environment requirements according to the SPIDER pulse preparation procedure; (iii) the choice of the development tools used for design and implementation; and (iv) the implementation details and preliminary tests of the global environment.

        Speakers: Dr CRUZ, Nuno (Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, 1049-001, Lisboa, Portugal), CASTELO BRANCO DA CRUZ, Nuno Sérgio (Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa)
    • Mini Oral - II

      followed by Poster Session B

    • Poster Session - B
    • Industry
      • 14
        Measurements with SiPM-based Front-End Electronics FERS-5200

        This work presents measurements made with the CAEN A5202 FERS-5200 board, which integrates two Citiroc-1A chips, for a total of 64 acquisition channels, with the possibility of synchronizing up to 128 boards. Besides the use of this chip for single-photon spectra and counting, this work explores the possibility of also using it to acquire γ energy spectra from scintillator detectors coupled to SiPMs. In particular, we use caesium iodide (CsI), cerium-doped lutetium yttrium orthosilicate (LYSO(Ce)), and bismuth germanate (BGO) crystals, which are very common detectors both in industrial applications and in fundamental nuclear physics research. Plans are in progress to perform measurements with faster crystals such as lanthanum bromide (LaBr3) and cerium bromide (CeBr3). ASICs with longer shaping times than the Citiroc-1A and different pulse processing have already been used successfully for γ spectroscopy. This would be the first time that the Citiroc-1A chip, which has a maximum shaping time of 87.5 ns, is used for γ spectroscopy measurements. We expect to achieve resolutions compatible with the literature and with a complementary system that uses a digitizer (CAEN DT5720A) to perform charge integration on the SiPM signals. Measurements with plastic scintillators are also performed to measure the energy loss of cosmic rays.

        Speaker: PERRI, Marco
      • 15
        Future Perspectives of Digital Readout

        Over the past decade, the design of CAEN digital acquisition systems has been accompanied by an equivalent effort in the development of firmware, software and communication protocols designed to maximize the performance offered by the systems themselves. With the development of a second generation of digital solutions, CAEN continues its tradition of strong collaboration with its customers while taking advantage of new technologies such as Gigabit Ethernet, USB 3, SoCs, and cutting-edge FPGAs. The CAEN second-generation digital solutions have been designed to provide a blend of flexibility, performance, and fast and easy integration ideally suited to nearly any application.

        Speaker: VENTURINI, Yuri
      • 16
        Sub-Nanosecond Time Resolution in Time-of-Flight Style Measurements with White Rabbit Time Synchronization

        As radiation detector arrays in nuclear physics applications become larger and physically more separated, the time synchronization and trigger distribution between many channels of detector readout electronics becomes more challenging. Among applications requiring the highest precision are time-of-flight measurements which try to determine the time difference in two or more related particle interactions in two or more separate detectors to sub-nanosecond precision, ideally in the tens of picoseconds. Clocks and triggers are traditionally distributed through dedicated cabling, but newer methods such as the IEEE 1588 Precision Time Protocol and its high accuracy profile (White Rabbit) allow clock synchronization through the exchange of timing messages over Ethernet.
        We report here the use of White Rabbit, implemented in the Pixie Net XL detector readout electronics, to synchronize multiple modules that read out separate detectors. The timing performance is characterized using both coincident gamma rays from 22Na and a split pulser signal. Time resolutions are about 300 ps full width at half maximum for 22Na and about 90 ps for the pulser, compared to ~15 ps for two channels on the same module using the pulser.
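        For illustration, a time resolution like those quoted above is typically obtained by histogramming the timestamp differences between the two modules and taking the full width at half maximum. A minimal sketch (hypothetical data, not the Pixie Net XL firmware):

```python
def fwhm(values, bin_width):
    """FWHM of a 1-D sample via a simple histogram, in the same units."""
    lo = min(values)
    counts = {}
    for v in values:
        b = int((v - lo) / bin_width)
        counts[b] = counts.get(b, 0) + 1
    half = max(counts.values()) / 2
    # Width spanned by all bins at or above half of the peak count.
    above = [b for b, c in counts.items() if c >= half]
    return (max(above) - min(above) + 1) * bin_width

# Hypothetical timestamp differences (ps) between the two modules.
diffs = [0, 10, 20, 20, 30, 30, 30, 40, 40, 50, 60]
width = fwhm(diffs, bin_width=10)
```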

        Speaker: HENNIG, Wolfgang
      • 17
        Harnessing the data from edge sensors

        The 2020s are emerging as the decade of the experiment, as new instruments come online or are upgraded with significant improvements that will support each of the science domains. The new devices will produce huge volumes of rich data which, along with improvements in data processing and simulation methods, offer the potential to address grand challenges in science that, if resolved, could dramatically improve the quality of life.
        These include faster and better drug development, improved materials, and energy delivery methods that reduce the carbon footprint, along with some of the questions that have intrigued mankind for centuries, including the origins of the universe and the standard model of physics. This paper will address three of the key challenges that will need to be met to realize the potential of these new data sources: harnessing the data from edge sensors and instruments; building and orchestrating the composite workflows that will need to be automated and integrated with humans in the loop, to better operate the improved instruments and maximize the value of the data obtained, processed, and archived; and integrating the instruments with HPC data center and cloud-based resources to optimize the diverse requirements for computation and data processing in real time and near real time.
        The paper will describe the challenges and initial thinking on solutions to maximize the value of the new data sources at the edge. We will use examples from light source experiments, fusion energy devices, electron microscopy, and other instruments used in high energy particle physics and the drug discovery laboratory. For these devices we will identify the composite workflows that will need to be automated, combined with the new data sources, and then presented to the humans in the loop as the instruments are operated.

        Speaker: GIBBS, Tom
    • DAQ System & Trigger - III
      • 18
        Design and Commissioning of the first 32 Tbit/s event-builder.

        The LHCb experiment is a forward spectrometer designed to study beauty and charm quark physics at the LHC. To exploit the higher luminosity that will be delivered during Run 3, the full experiment needed a substantial upgrade, from the detector to the DAQ and HLT. In this paper, we focus on the new DAQ system for the LHCb experiment, which represents a substantial paradigm shift compared to its predecessor and to similar systems used by comparable experiments past and present. To overcome the inefficiencies introduced by a local selection implemented directly in the readout hardware, the Run 3 system is designed to perform a full software reconstruction of all produced events. To achieve this, both the DAQ and the HLT need to process the full $\sim30\ $MHz event rate. In particular, this paper introduces the final design of the system; it focuses on the hardware and software design of the event builder and on how we integrated technologies designed for the high-performance computing world, like InfiniBand HDR (200 Gb/s), into the DAQ system; we present performance measurements of the full event-building system under different operational conditions; and we provide feedback on event-builder operation during the beginning of data-taking.
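        A rough consistency check of the headline numbers (our arithmetic, not from the paper): an aggregate event-building throughput of 32 Tbit/s at the ~30 MHz event rate implies an average event size of roughly 130 kB.

```python
THROUGHPUT_TBPS = 32    # aggregate event-builder throughput, Tbit/s
EVENT_RATE_MHZ = 30     # full software-processed event rate, MHz

bits_per_event = THROUGHPUT_TBPS * 1e12 / (EVENT_RATE_MHZ * 1e6)
kbytes_per_event = bits_per_event / 8 / 1e3  # average event size, ~133 kB
```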

        Speaker: PISANI, Flavio (CERN)
      • 19
        Streaming DAQ software prototype at J-PARC hadron experimental facility

        In recent years, particle physics and nuclear physics experiments have required faster data-collection systems and advanced trigger systems as beam intensities increase. The current DAQ system at the J-PARC hadron experimental facility (HEF) uses a fixed- and low-latency trigger implemented in dedicated hardware to reduce the data, and event-building software that merges data into a single endpoint. This conventional DAQ system is expected to be inadequate for the requirements of future experiments and must be replaced with a new, simplified but powerful one, called streaming DAQ, which collects all detector signals and filters them with software running on many computers. We have therefore developed a prototype of the streaming DAQ software for the J-PARC HEF using FairMQ and Redis as middleware. Its key features are that it is simple and has a low learning cost, so it can be developed and operated by a small number of people. The software can be used not only in a fully streaming readout system but also in a triggered DAQ and in a combined hardware/software-trigger DAQ. This contribution will discuss the software architecture, performance evaluation, and applications in ongoing and future experiments.
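
        The streaming approach can be pictured with a toy pipeline stage (an illustrative sketch only; the real prototype is built on FairMQ and Redis, and the function names here are assumptions):

```python
# Toy streaming-DAQ stage: group timestamped hits into fixed-length time
# frames, then apply a software filter. Illustrative only.
from collections import defaultdict

def build_time_frames(hits, frame_ns=1000):
    """Group (timestamp, channel, adc) hits into fixed-length time frames."""
    frames = defaultdict(list)
    for ts, ch, adc in hits:
        frames[ts // frame_ns].append((ts, ch, adc))
    return frames

def software_filter(frame, min_channels=2):
    """Keep a frame only if enough distinct channels fired
    (a stand-in for a real physics selection)."""
    return len({ch for _, ch, _ in frame}) >= min_channels

hits = [(10, 0, 55), (900, 1, 80), (1500, 0, 60)]
kept = {k: f for k, f in build_time_frames(hits).items() if software_filter(f)}
# Only the first frame (two channels fired) survives the filter.
```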

        Speaker: TAKAHASHI, Tomonori (RIKEN)
      • 20
        Development of Streaming and a Generalized Hybrid DAQ Model for the CODA Data Acquisition System and Jefferson Lab Experiments

        The CODA Data Acquisition system is a software toolkit developed in conjunction with both custom and commercial hardware to support the general experimental program at Jefferson Lab. This includes large, complex DAQ systems in the four primary experimental halls as well as more basic developmental test-bed systems. Recent CODA developments have focused on support for streaming (triggerless) front-end operation. The goal has been to integrate streaming data as seamlessly as possible with the existing pipelined trigger operation, such that both can be run individually or even simultaneously. This is made possible in part by the existing JLab clock/trigger distribution system. In addition, two key new CODA “components” have been developed. The first is an FPGA-based Readout Control (ROC) component that complements the existing software ROC. The second is a software Event Management Unit (EMU) which serves as a stream “aggregator” and interface to back-end online processing. We will discuss the details of these new components, their features, performance metrics, and some experimental results and expectations for experiment support using this new generalized hybrid DAQ model.

        Speaker: RAYDO, Ben (Jefferson Lab)
      • 21
        The Data Acquisition System of the ECCE Detector at the Future Electron-Ion Collider

        The Electron-Ion Collider (EIC) is an extraordinary new facility enabling frontier research in nuclear physics, with initial operation planned for 2031. It is the largest single project undertaken by the US DOE Office of Nuclear Physics and, as such, represents a landmark allocation of resources. Recently, the proposal of the ECCE consortium has been selected as the reference design for “Detector 1” at the EIC. The central barrel of the detector incorporates the 1.4 T BaBar superconducting solenoid and the sPHENIX barrel hadronic calorimeter currently under construction.
        We will give an overview of the ECCE detector, and detail a number of key points from the proposed electronics and data acquisition technologies to be developed or adapted. We will present the current plan of acquiring and storing the data, and preparing them for analysis.

        Speaker: PURSCHKE, Martin L (Brookhaven National Laboratory (US))
      • 22
        Triggerless Search of Dark Matter Candidates with DarkSide-20k

        DarkSide-20k is a direct dark matter search detector using a two-phase liquid argon time projection chamber (LArTPC) with an active mass of 50 tonnes of low-radioactivity argon from an underground source. Two planes of cryogenic SiPMs covering the top and bottom faces of the TPC detect the light signals produced by the scattering of a WIMP particle on an argon nucleus. The TPC is surrounded by a gadolinium-loaded plastic shell, with underground liquid argon, serving as an active neutron veto.

        The DarkSide-20k Data Acquisition (DAQ) system gathers data from the top and bottom detector planes of the TPC, as well as from the veto photo-detectors, in triggerless or "self-trigger" mode. Noise is minimized on a per-channel basis by digitally processing the signals in the waveform digitizers, whose FPGAs implement suitable algorithms to identify photo-electron signals with high efficiency. Each fragment of data collected by each waveform digitizer is time-stamped against a global time source at the sampling frequency for data analysis. Individual channel signals are processed in a first stage by Front End Processors (FEPs), which reduce the information sent to the event-building stage to a manageable amount. Event building and selection are performed on fully assembled data collected over a fixed time period, a "Time Slice". Each Time Slice is assigned to a computer node in a Time Slice Processor (TSP) farm, which provides online selection of interesting events and data logging of raw and preprocessed data for offline analysis.
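
        The Time Slice mechanism can be sketched as follows (illustrative only; the slice length, farm size and names are assumptions, not DarkSide-20k parameters):

```python
# Sketch of triggerless time-slice event building: a fragment's global
# timestamp determines its time slice, and the slice determines which
# Time Slice Processor (TSP) node assembles it. Illustrative constants.
SLICE_NS = 1_000_000          # assumed fixed time-slice length (1 ms)
N_TSP_NODES = 8               # assumed size of the TSP farm

def assign_slice(timestamp_ns):
    """Map a fragment timestamp to (slice_id, tsp_node)."""
    slice_id = timestamp_ns // SLICE_NS
    node = slice_id % N_TSP_NODES   # simple round-robin slice -> node mapping
    return slice_id, node

# Fragments from different digitizers sharing a slice_id land on the same
# node, where full event building and selection run.
```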

        Speaker: CAPRA, Andrea (TRIUMF (CA))
    • Mini Oral - III

      and following Poster C session

    • Poster Session - C
    • Deep Learning and Machine Learning
      • 23
        PMT Waveform Timing Analysis Using Machine Learning Method

        The timing information carried by waveforms is important when reconstructing particle trajectories, especially in time-of-flight detectors. Traditional timing methods, including leading-edge discrimination (LED) and constant-fraction discrimination (CFD), are easily implemented in circuits, but they do not extract the time information of the pulse precisely. Previous studies have shown that a CNN-based model can improve the coincidence time resolution (CTR) of a pair of PMTs by nearly 20%, but the limited number of labels directly affects the CNN timing performance. We therefore developed a new method to train a CNN model for the timing of a pair of PMTs. By precisely obtaining the time difference between the two waveforms, the position of the positron annihilation can be reconstructed. Since the distances from the 22Na radioactive source to the two detectors are known, the labels of the waveform pairs can be calculated. Instead of using real waveform pairs recorded at different distances from the radioactive source, our method acquires only one group of waveform pairs at a fixed distance from the source; pairs with different labels are then produced by delaying or advancing the waveforms in time. Another 6000 waveform pairs with a time difference of 0 ps are used to validate the model. The results show a 50% improvement compared with the CFD method.
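
        The label-generation trick (producing training pairs with known time differences by shifting waveforms) can be sketched as follows (illustrative; the sample values are made up):

```python
# Generate labeled waveform pairs by artificially shifting one waveform
# in time. Illustrative sketch; real data are digitized PMT pulses.
def shift(waveform, k):
    """Delay (k > 0) or advance (k < 0) a sampled waveform by k samples,
    zero-padding the exposed end."""
    n = len(waveform)
    if k >= 0:
        return [0.0] * k + waveform[: n - k]
    return waveform[-k:] + [0.0] * (-k)

pulse = [0.0, 0.1, 0.8, 1.0, 0.5, 0.2, 0.0]
# A pair recorded at a fixed source position has a known time difference;
# new training labels come from shifting one waveform by a known amount.
pair = (pulse, shift(pulse, 2))   # label: +2 samples
```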

        Speaker: WU, Qi
      • 24
        Real-time Compression and Noise Filtering for Time Projection Chamber Data via Bicephalous Convolutional Autoencoder

        Real-time data collection and analysis in large collider experimental facilities present a great challenge across multiple domains, such as at the sPHENIX experiment at the Relativistic Heavy Ion Collider and future experiments at the Electron-Ion Collider. To address this, machine learning (ML)-based methods for real-time data compression have drawn significant attention. However, unlike natural image data such as CIFAR and ImageNet, which are relatively small and continuous, collider experiment data often arrive as three-dimensional data volumes at high rates, with high sparsity (many zeros), a non-Gaussian value distribution, and significant noise. This makes direct application of popular ML and conventional data compression methods suboptimal. To address these obstacles, this work introduces a dual-head autoencoder, called the Bicephalous Convolutional Autoencoder, that resolves sparsity and regression simultaneously with the added benefit of noise filtering. The method shows advantages in both compression fidelity and ratio compared to traditional data compression methods such as MGARD, SZ, and ZFP. Furthermore, by taking advantage of modern accelerators, such as graphics and intelligence processing units and field-programmable gate arrays, this ML-based compression achieves high throughput with fixed latency, making it ideal for real-time applications.
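
        The dual-head decoding idea can be pictured with a toy stand-in (illustrative only; `combine_heads` and the values below are assumptions, not the actual network): one head classifies voxel occupancy, the other regresses values, and masking the regression with the occupancy yields a sparse, denoised output.

```python
# Toy illustration of a dual-head ("bicephalous") decoder output stage.
def combine_heads(seg_probs, reg_values, threshold=0.5):
    """Zero out regressed values wherever the segmentation head
    predicts an empty voxel."""
    return [v if p > threshold else 0.0 for p, v in zip(seg_probs, reg_values)]

seg = [0.9, 0.1, 0.7, 0.2]     # occupancy probabilities per voxel
reg = [12.3, 4.0, 8.1, 2.2]    # regressed ADC values per voxel
decoded = combine_heads(seg, reg)   # sparse output: [12.3, 0.0, 8.1, 0.0]
```

        Handling sparsity with a dedicated classification head is what lets the regression head avoid being dominated by the overwhelming number of zero voxels.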

        Speaker: HUANG, Yi
      • 25
        Recent Developments in hls4ml

        Neural Network (NN)-based inference deployed in FPGAs or ASICs is a powerful tool for real-time data processing and reduction. FPGAs or ASICs may be needed to meet demanding latency or power-efficiency requirements in data acquisition or control systems. Alternatively, one may need to design ASICs for special environments, such as high-radiation areas. The hls4ml software package was designed to make deploying optimized NNs on FPGAs and ASICs accessible for domain applications. In this talk, we will present the package's capabilities, give updates on its current status, and give examples of its application.

        The internal structure of hls4ml has been reorganized to allow more flexibility in expanding backend support. Deploying on Intel FPGAs is now supported in the main branch, as is support for accelerators. We are continuously improving our NN support, for example by adding new CNN implementations, and we are integrating support for LSTM and GRU networks. We have also collaborated with the AMD/Xilinx FINN group to develop the Quantized ONNX (QONNX) representation as a common way to represent quantized NNs. Through the common tools we provide, both Brevitas and QKeras can produce QONNX, which can then be ingested by both FINN and hls4ml. Usage of hls4ml continues to grow: demonstrator systems for accelerator control are being built at Fermilab, and it is being studied by DUNE for detecting supernova neutrino bursts. Together with the FINN team, we have provided solutions for MLPerf Tiny benchmarking results.
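
        As an illustration of the typical hls4ml workflow (a generic sketch based on the public API, not code from the talk; the model file and output directory are placeholders, and running it requires TensorFlow/Keras and an FPGA toolchain):

```python
# Sketch: convert a trained Keras model to an HLS project with hls4ml.
# Placeholders: "my_model.h5" and "hls_prj" are assumed names.
import hls4ml
from tensorflow.keras.models import load_model

model = load_model("my_model.h5")          # assumed pre-trained Keras model

# Derive a precision/reuse configuration, then convert to HLS.
config = hls4ml.utils.config_from_keras_model(model, granularity="model")
hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config, output_dir="hls_prj"
)
hls_model.compile()            # builds a bit-accurate C simulation
# y = hls_model.predict(x)     # compare against model.predict(x)
```

        The usual workflow is to validate `hls_model.predict` against the floating-point model before launching synthesis.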

        Speaker: MITREVSKI, Jovan (Fermi National Accelerator Lab. (US))
      • 26
        Development of ML FPGA filter for particle identification

        With the increase in luminosity of accelerator colliders, as well as in the granularity of particle physics detectors, more challenges fall on the readout system and on data transfer from the detector front-end to the computer farm and long-term storage. Modern concepts of trigger-less readout and data streaming will produce large data volumes read from the detectors.
        From a resource standpoint, it is strongly advantageous to perform both pre-processing and data reduction at the earliest stages of data acquisition. Real-time data processing is a frontier field in experimental particle physics, where machine learning methods are widely used and have proven to be very powerful.
        The growing computational power of modern FPGA boards allows us to add more sophisticated algorithms for real-time data processing. Many tasks can be solved using modern machine learning (ML) algorithms, which are naturally suited to FPGA architectures. An FPGA-based machine learning algorithm provides extremely low, sub-microsecond decision latency and produces information-rich data sets for event selection.
        Work has started on developing an FPGA-based ML algorithm for real-time particle identification with a transition radiation detector and an electromagnetic calorimeter. This report describes the purpose of the work and the progress in evaluating the ML-FPGA application.

        Speaker: FURLETOV, Sergey
    • Architectures, Intelligent Signal Processing & Simulation
      • 27
        PulseDL-II: a system-on-chip neural network accelerator for timing and energy extraction of nuclear detector signals

        Front-end electronics equipped with high-speed digitizers are being used and proposed for future nuclear detectors. Recent literature reveals that deep learning models, especially one-dimensional convolutional neural networks, are promising when dealing with digital signals from nuclear detectors. Simulations and experiments demonstrate the satisfactory accuracy and additional benefits of neural networks in this area. However, specific hardware accelerating such models for online operation still needs to be studied. In this work, we introduce PulseDL-II, a system-on-chip (SoC) specially designed for extracting event features (time, energy, etc.) from pulses with deep learning. Building on the previous version, PulseDL-II incorporates a RISC CPU into the system structure for better functional flexibility and integrity. The neural network accelerator in the SoC adopts a three-level (arithmetic unit, processing element, neural network) hierarchical architecture and facilitates parameter optimization of the digital design. Furthermore, we devise a quantization scheme and associated implementation methods (rescale & bit-shift) for full compatibility with deep learning frameworks (e.g., TensorFlow) within a selected subset of layer types. With the current scheme, quantization-aware training of neural networks is supported, and network models are automatically transformed into software for the RISC CPU by dedicated scripts, with nearly no loss of accuracy. We validate the correct operation of PulseDL-II on field-programmable gate arrays and show its improvements over the previous version in performance, power and area. The chip will be taped out in a TSMC 28/65 nm process.
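
        To picture the rescale & bit-shift idea, here is a minimal integer-only requantization step (an illustrative sketch; the constants are assumptions, not PulseDL-II parameters):

```python
# Integer-only requantization: approximate acc * real_scale with an
# integer multiply plus a right shift, rounding to nearest.
# Illustrative constants, not taken from PulseDL-II.
def requantize(acc, multiplier, shift):
    """Compute round(acc * multiplier / 2**shift) with integer ops only."""
    return (acc * multiplier + (1 << (shift - 1))) >> shift

# Example: real_scale ~= 1/1024  ->  multiplier=1, shift=10
acc = 51200                      # int32 accumulator from an int8 layer
out = requantize(acc, 1, 10)     # ~acc/1024, rounded
```

        Avoiding floating-point division this way is what makes the scheme cheap to implement in the accelerator's datapath while staying consistent with framework-side quantization-aware training.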

        Speaker: Dr AI, Pengcheng (Tsinghua University)
      • 28
        The LHCb HLT2 storage system: a 40 GB/s system made from commercial off-the-shelf components and open-source software

        The LHCb (Large Hadron Collider beauty) experiment is designed to study differences between particles and anti-particles, as well as very rare decays, in the charm and beauty sector at the LHC. With the major upgrade carried out in view of Run 3, the detector will read out all events at the full LHC frequency of 40 MHz, and the online system will be subjected to a considerably increased data rate, reaching a peak of ~40 Tb/s. The second stage of the two-stage filtering consists of more than 10000 multithreaded processes which simultaneously write output files at an aggregated bandwidth of 100 Gb/s. At the same time, a small number of file-moving processes read files from the same storage to copy them to tape storage. This whole mechanism must run reliably over months and cope with significant fluctuations. Moreover, for cost reasons, it must be built from cost-efficient off-the-shelf components. In this paper we describe LHCb's solution to this challenge: we show the design, present the reasons for the design choices and the configuration and tuning of the adopted software solution, and report performance figures.

        Speaker: Mr CIFRA, Pierfrancesco (Nikhef National institute for subatomic physics (NL))
      • 29
        The Real-Time System for Distribution of Clock, Control and Monitoring Commands with Fixed Latency of the LHCb experiment at CERN

        The LHCb experiment has gone through a major upgrade for the LHC's Run 3, scheduled to start in the middle of 2022. The entire readout system has been replaced by Front-End and Back-End electronics able to record LHC events at the full LHC bunch rate of 40 MHz.

        In order to maintain synchronicity across the full system, clock and control commands originate from a single Readout Supervisor (SODIN) and are distributed downstream through a Passive Optical Network (PON) infrastructure, reaching all Back-End cards of the readout system via a two-level hierarchy of PON splitters. An FPGA-based firmware core, called LHCb-PON and based on the CERN TTC-PON, is instantiated on SODIN and on all Back-End cards; it provides the communication protocol while ensuring stable distribution of control commands within the 25 ns readout period, as well as a clock signal with fixed, deterministic latency and a precision of a few hundred picoseconds. This is achieved by the firmware backbone, which provides clock recovery and ensures the transmission of a control word downstream with fixed latency.

        Additionally, the LHCb-PON also provides monitoring functionalities over the upstream link, with less strict timing requirements, connecting multiple Back-End cards to the central SODIN. This is achieved by having each Back-End card send data upstream through a time division multiplexing scheme.

        This paper describes the implementation, measurements and results from the usage of the system during the start-up of Run 3 at the LHC.

        Speaker: FEO, Mauricio (CERN)
      • 30
        Using FPGA-based AMC Carrier Boards for FMC to Implement Intelligent Data Acquisition Applications in MTCA.4 Systems using OpenCL

        The MTCA.4 standard is widely used to develop advanced data acquisition and processing solutions in the big-physics community. The number of applications implemented with commercial MTCA AMC cards based on Xilinx and Intel FPGA systems-on-chip is growing, thanks to the flexibility and scalability of these reconfigurable hardware devices and their suitability for implementing intelligent applications using artificial intelligence and machine learning techniques. The contribution first presents the design methodologies proposed by Intel and Xilinx and the software/hardware setups needed during the development phase; manufacturers are providing solutions that cover not only the hardware but also the integration with advanced Linux platforms. It then describes how to interface FMC-based ADC modules to the PL logic, how to process the acquired data using OpenCL, and how to interface with the EPICS software layer using the ITER Nominal Device Support framework. The contribution focuses on: the advantages and drawbacks of the hardware reference designs for SoC-based and PCIe designs; the methodologies proposed by the manufacturers to implement the applications; and how the reference designs have been applied to develop data acquisition and processing applications using two AMCs from NAT Europe (one based on an Intel Arria 10, the other on a Xilinx ZynqMP).

        Speaker: RIVILLA, Daniel (Universidad Politécnica de Madrid)
      • 31
        ERSAP: Towards better HEP/NP data-stream analytics with flow-based programming

        This paper presents a reactive framework, based on the actor model and the flow-based programming (FBP) paradigm, that we are developing to design data-stream processing applications for HEP and NP. The framework encourages a functional decomposition of the overall data-processing application into small, mono-functional artifacts that are easy to understand, develop, deploy and debug. Because these artifacts (actors) are programmatically independent, they can be scaled and optimized independently, which is impossible for the components of a monolithic application. One important advantage of this approach is fault tolerance: independent actors can come and go on the data stream without forcing the entire application to crash, and it is easy to locate a faulty actor in the data pipeline. Because the actors are loosely coupled and the data (inevitably) carries the context, they can run in heterogeneous environments, utilizing different accelerators. This paper describes the main design concepts of the framework and presents a proof-of-concept application design, together with deployment results obtained by processing on-beam calorimeter streaming data.

        Speaker: Dr GYURJYAN, Vardan
    • Mini Oral - IV

      and following Poster D session

    • Poster Session - D
    • DAQ System & Trigger - IV
      • 32
        High-Speed Data AcQuisition System Development for a new 262k Pixels X-Ray Camera at the SOLEIL Synchrotron

        Single-photon-counting hybrid pixel detector technology with short-gate capability has been chosen to carry out time-resolved pump-probe experiments in the 5 to 15 keV energy range at the SOLEIL Synchrotron. Detector prototypes based on the UFXC32k [1] readout chip were developed for this initial purpose thanks to the chip's very attractive performance, including high readout speed, high count-rate capability and moderate pixel size. The prototype cameras reach a frame rate of 20 kfps in the fastest configuration with the current DAQ (2-bit readout, DAQ-limited). Following the small-size prototypes, a new medium-size detector demonstrator with 8 chips is under development. This presents several new challenges in the detector development program at SOLEIL, as the new camera will provide improved performance with multiple acquisition modes. This paper presents a new data acquisition system with a quad-channel 10 Gb/s UDP/IP data-streaming architecture dedicated to this demonstrator. It is based on a Xilinx Virtex-7 FPGA (Field Programmable Gate Array) and PandaBlocks firmware for control and status. The DAQ (Data AcQuisition) system streams data from the detector over four 10 Gb/s Ethernet links, reaching up to 40 Gb/s of bandwidth through four high-speed transceivers. The new architecture allows all 8 chips to be read simultaneously without reducing the image rate.

        Speaker: Dr AIT MANSOUR, El Houssain (Synchrotron SOLEIL)
      • 33
        Data Flow in the Mu3e DAQ

        The Mu3e experiment at the Paul Scherrer Institute (PSI) searches for the charged lepton flavour violating decay $\mu^+ \rightarrow e^+ e^+ e^-$.
        The experiment aims for an ultimate sensitivity of one in $10^{16}$ $\mu$ decays.
        The first phase of the experiment, currently under construction, will reach a branching ratio sensitivity of $2\cdot10^{-15}$ by observing $10^{8}$ $\mu$ decays per second over a year of data taking.
        The highly granular detector based on thin high-voltage monolithic active pixel sensors (HV-MAPS) and scintillating timing detectors will produce about 100 GB/s of data at these particle rates.

        The Field Programmable Gate Array based Mu3e Data Acquisition System (DAQ) will read out the different detector parts.
        The trigger-less online readout system sorts, time-aligns and analyzes the data during operation.
        A farm of PCs equipped with powerful graphics processing units (GPUs) will perform the data reduction.

        The talk presents the ongoing integration of the sub-detectors into the DAQ, focusing in particular on time alignment and on the data flow inside the FPGAs of the filter farm.
        It will also show the DAQ system used in the Mu3e Integration Runs performed in spring 2021 and 2022.

        Speaker: KÖPPEL, Marius
      • 34
        Progress in design and testing of the DAQ and data-flow control for the Phase-2 upgrade of the CMS experiment

        The CMS detector will undergo a major upgrade for the Phase-2 of the LHC program: the High-Luminosity LHC. The upgraded CMS detector will be read out at an unprecedented data rate exceeding 50 Tbit/s, with a Level-1 trigger selecting events at a rate of 750 kHz, and an average event size reaching 8.5 MB. The Phase-2 CMS back-end electronics will be based on the ATCA standard, with node boards receiving the detector data from the front-ends via custom, radiation-tolerant, optical links.
        The CMS Phase-2 data acquisition (DAQ) design tightens the integration between trigger control and data flow, extending the synchronous regime of the DAQ system. At the core of the design is the DAQ and Timing Hub (DTH), a custom ATCA hub card forming the bridge between the different detector-specific control and readout electronics and the common timing, trigger, and control systems.
        The overall synchronisation and data flow of the experiment is handled by the Trigger and Timing Control and Distribution System (TCDS). For increased flexibility during commissioning and calibration runs, the Phase-2 architecture breaks with the traditional distribution tree, in favour of a configurable network connecting multiple independent control units to all off-detector endpoints.
        This paper describes the overall Phase-2 TCDS architecture, and briefly compares it to previous CMS implementations. It then discusses the design and prototyping experience of the DTH, and concludes with the convergence of this prototyping process into the (pre)production phase, starting in early 2023.

        Speaker: HEGEMAN, Jeroen (CERN)
    • CANPS Award: Reflections on Physics Instrumentation Standards
    • Invited Talk: Timepix detectors in Space: from radiation monitoring in low earth orbit to astroparticle physics
      • 36
        Timepix detectors in Space: from radiation monitoring in low earth orbit to astroparticle physics

        Hybrid pixel detectors (HPDs) of the Timepix [1,2] technology have become increasingly interesting for space applications. While common space radiation monitors to date rely on silicon diodes, achieving particle (mainly electron and proton) separation by pulse-height analysis, detector stacking, shielding or electron removal by a magnetic field, the key advantage of HPDs is that, in addition to measuring the energy deposition, particle signatures in the sensor are seen as tracks with a rich set of features. These track characteristics can be exploited to identify the particle type, its energy, and its trajectory. Determining this information in a single layer bypasses the need for sensor stacking or complex shielding geometries, so that HPD-based space radiation devices provide science-class data with a large field of view at an order of magnitude lower weight and approximately half the power consumption of commonly used space radiation monitors.

        The first Timepix (256 x 256 pixels, 55 µm pitch) used in open space is SATRAM (Space Application of Timepix Radiation Monitor), attached to the Proba-V satellite launched to low earth orbit (LEO, 820 km, sun-synchronous) in 2013. Now, 9 years after launch, it still provides data for mapping the fluxes of electrons and protons trapped in the Van Allen radiation belts [3]. Figure 1 shows the in-orbit map of the ionizing dose rate and illustrates the different radiation fields in the polar horn region (Fig. 1b), the South Atlantic Anomaly (Fig. 1c) and a region shielded by Earth's magnetic field (Fig. 1d), as measured with SATRAM. Noiseless detection of individual particles allows even rare signatures of highly ionizing events to be detected.
        In the present contribution, I will discuss different data analysis methodologies, relying on track-feature analysis, novel machine learning approaches (e.g. [5]) and statistical interpolation.

        Based on the success of SATRAM, advanced and miniaturized space radiation monitors based on Timepix3 [2] and Timepix2 [4] technology have been developed for the European Space Agency (ESA). These will be flown on the GOMX-5 mission (launch in 2023) and used within the European Space Radiation Array. Large-area Timepix3 detectors (512 x 512 pixels, 55 µm pitch) were developed for the demonstrator of the penetrating particle analyzer [6] (mini.PAN), a compact magnetic spectrometer (MS) to precisely measure the cosmic-ray flux, composition, spectral characteristics and directions. Mini.PAN employs position-sensitive (pixel and strip) detectors and (fast) scintillators to infer the particle type and velocity of GeV particles (and antiparticles) passing through the instrument's magnetic field by measuring their bending angles, charge deposition and time-of-flight. It will measure the properties of cosmic rays in the 100 MeV/n - 20 GeV/n energy range in deep space with unprecedented accuracy, providing novel results for investigating the mechanisms of origin, acceleration and propagation of galactic cosmic rays and of solar energetic particles, and producing unique information for solar-system exploration missions. We will describe the mini.PAN development status and the challenges imposed by the space environment, and outline how an MS relying purely on latest-generation hybrid pixel detectors could simplify instrument design, reduce the PAN mass budget and provide high rate capabilities.

        [1] X. Llopart et al., NIM A 581 (2007), pp. 485-494.
        [2] T. Poikela et al., JINST 9 (2014) C05013.
        [3] St. Gohl et al., Advances in Space Research 63 (2019), Issue 1, pp. 1646-1660.
        [4] W. Wong et al., Rad. Meas. 131 (2021), 106230.
        [5] M. Ruffenach et al., IEEE TNS 68 (2021), Issue 8, pp. 1746-1753.
        [6] X. Wu et al., Advances in Space Research, 63 (2019), Issue 8, pp 2672-2682.

        Speaker: BERGMANN, Benedikt
    • Fast Data Transfer Links and Networks
      • 37
        Clock distribution system using IOSERDES based clock-duty-cycle-modulation

        The author has developed a clock distribution system for front-end electronics synchronization using a clock-duty-cycle-modulation (CDCM) [1] based transceiver (CBT), realized with Xilinx Kintex-7 IOSERDES primitives. In addition, a link-layer protocol called the MIKUMARI link was developed. The protocol can transfer not only framed data but also a one-shot pulse with fixed latency to a destination. A clock distribution system consisting of the CBT and the MIKUMARI link sends the clock, data and pulses through a single pair of TX and RX lines; the system is thus independent of the FPGA's high-speed serial transceivers. The system will be introduced as a new standard for clock, trigger, and synchronous command distribution in J-PARC hadron experiments.
        The author tested the CBT and the MIKUMARI link using general-purpose FPGA modules. Two FPGA modules were connected by an optical fiber. In the slave module, the CDCM-encoded clock signal was fed into a mixed-mode clock manager (MMCM) in the FPGA and into an external jitter-cleaner IC, a CDCE62002, to recover the clock. During the tests, 8-bit incremental data was transferred continuously. The measured random jitter of the recovered clocks was 5.0±0.1 ps and 5.2±0.1 ps for CDCM-10-1.5 and CDCM-10-2.5 [1], respectively.
        In this contribution, the author will report the design of the CBT and the MIKUMARI link, and will also discuss performance at several clock frequencies.
        [1] D. Calvet, IEEE TNS vol. 67, Issue 8, Aug. 2020, 1912-1919.
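
        One way to picture duty-cycle modulation (a deliberately simplified toy, not the exact CDCM-10-x encoding of [1]): the rising edge keeps its fixed position, carrying the clock, while the width of the high pulse carries a data bit.

```python
# Toy duty-cycle-modulation codec. Each clock period is divided into
# SLOTS sub-slots; the pulse width (assumed 3 vs 7 slots) encodes a bit.
SLOTS = 10

def encode_bit(bit):
    """Fixed rising edge at slot 0; falling-edge position carries the bit."""
    high = 3 if bit == 0 else 7    # assumed short/long pulse widths
    return [1] * high + [0] * (SLOTS - high)

def decode_bit(slots):
    """Recover the bit from the measured duty cycle."""
    return 0 if sum(slots) <= SLOTS // 2 else 1

word = [encode_bit(b) for b in (1, 0, 1, 1)]
decoded = [decode_bit(s) for s in word]   # round-trips to (1, 0, 1, 1)
```

        Because every period starts with a rising edge at the same position, the receiver can recover a clean clock regardless of the data pattern.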

        Speaker: HONDA, Ryotaro (KEK IPNS)
      • 38

        To increase the science rate at high data rates (100 Gb/s to 1 Tb/s) with guaranteed Quality of Service (QoS), JLab is partnering with ESnet on the proof-of-concept engineering of a network layer-1.5, AI/ML-directed, dynamic Compute Work Load Balancer (CWLB) of UDP-streamed data, using an FPGA for fixed latency and high throughput, in order to demonstrate seamless integration of edge/core computing to support direct experimental data processing for immediate use by JLab science programs and others such as the EIC, as well as by the data centers of the future. The ESnet/JLab FPGA Accelerated Transport (EJFAT) project targets near-future projects requiring high throughput and low latency for hot data, as well as data-center horizontal-scaling applications for both hot and cooled data.

        The CWLB distributes commonly tagged UDP packets to individual and dynamically configurable destination endpoints. Additional tagging provides for individuated stream reassembly at the endpoint with horizontal scaling. The CWLB is an FPGA programmed at the data-plane level in both P4 and RTL to implement a packet-parsing state machine that performs table look-ups through several levels of indirection to dynamically map LB metadata to destination hosts and rewrite the UDP packet header as required. Control-plane programming uses conventional host computer resources to monitor network and compute-farm telemetry in order to make dynamic, AI/ML-guided decisions for destination-compute-host redirection and load balancing.
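
        The data-plane lookup can be pictured with a toy stand-in (illustrative Python in place of the actual P4/RTL; the table contents and field names are assumptions):

```python
# Toy model of the load-balancer data plane: LB metadata (here an event
# tag plus a control-plane "epoch") selects a destination host via table
# lookup, and the packet's destination is rewritten.
epoch_table = {                       # populated by the control plane
    0: ["10.0.0.1", "10.0.0.2"],
    1: ["10.0.0.2", "10.0.0.3"],
}

def redirect(packet, epoch):
    """Rewrite the packet destination; packets sharing an event tag
    within an epoch land on the same endpoint for reassembly."""
    hosts = epoch_table[epoch]
    packet["dst"] = hosts[packet["event_tag"] % len(hosts)]
    return packet

pkt = redirect({"event_tag": 42, "dst": None, "payload": b"..."}, epoch=0)
```

        Updating the table per epoch, rather than per packet, is what lets the control plane steer load at AI/ML timescales while the data plane runs at line rate.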

        Speaker: GOODRICH, michael
    • Emerging Technologies, New Standards and Feedback on Experience
      • 39
        Assessment of IEEE 1588-based timing system of the ITER Neutral Beam Test Facility

        MITICA is one of the two ongoing experiments at the ITER Neutral Beam Test Facility (NBTF) located in Padova, Italy. MITICA aims to develop the full-size neutral beam injector of ITER and, as such, its Control and Data Acquisition System will adhere to the ITER CODAC directives. In particular, its timing system will be based on the IEEE 1588 PTPv2 protocol and will use the ITER Time Communication Network (TCN).
        Following the ITER device catalog, National Instruments PXI-6683H PTP timing modules will be used to generate triggers and clocks synchronized with a PTP grandmaster clock. Data acquisition techniques such as lazy triggers will also be used to implement event-driven data acquisition without the need for any hardware link in addition to the Ethernet connections used to transfer data and timing synchronization.
        To evaluate the accuracy over time that can be achieved with different network topologies and configurations, a test system has been set up, consisting of a grandmaster clock, two PXI-6683H devices, and two PTP-aware network switches. In particular, the impact on accuracy of the transparent and boundary clock configurations has been investigated. In addition, a detailed simulation of the network and the involved devices has been performed using the OMNeT++ discrete event simulator. The simulation parameters include not only the network and switch configuration, but also the PID parameters used in the clock servo controllers. A comparison between simulated and measured statistics is reported, together with a discussion of possible optimal configuration strategies.
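        The clock servo parameters mentioned above govern how fast the slave clock converges on the master. A toy discrete-time simulation of a PI servo (a common simplification of the PID servos used in PTP stacks; all gains and units here are illustrative assumptions) shows the kind of convergence behavior such a simulation sweeps over:

```python
def run_servo(kp, ki, initial_offset_ns, steps=200):
    """Toy PI clock servo: each sync interval measures the remaining
    offset and applies a frequency correction; returns the offset
    history in (toy) nanosecond units."""
    offset = initial_offset_ns
    integral = 0.0
    history = []
    for _ in range(steps):
        integral += offset
        correction = kp * offset + ki * integral
        # One sync interval: the offset shrinks by the applied correction
        offset -= correction
        history.append(offset)
    return history

hist = run_servo(kp=0.7, ki=0.3, initial_offset_ns=1000.0)
```

        With these gains the offset decays toward zero with mild ringing; too-aggressive gains would make the same loop oscillate, which is exactly the trade-off a network-plus-servo simulation can explore before deployment.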

        Speaker: TREVISAN, Luca (Consorzio RFX)
      • 40
        Design and testing of a reconfigurable AI ASIC for Front-end Data Compression at the HL-LHC

        We have designed and implemented a reconfigurable ASIC for low-power, low-latency hardware acceleration of an unsupervised machine-learning data encoder. The implementation of a complex neural network algorithm demonstrates the effectiveness of a high-level-synthesis-based design automation flow for building design IP for ASICs. The ECON-T ASIC, which includes the AI algorithm, enables specialized compute capability and has been optimized for data compression at 40 MHz in the trigger path of the High-Granularity Endcap Calorimeter (HGCal), an upgrade of the Compact Muon Solenoid (CMS) experiment for the high-luminosity LHC (HL-LHC). The encoding can be reconfigured for changing detector conditions and geometry by updating the trained weights. The design has been implemented in an LP CMOS 65 nm process. It occupies a total area of 2.9 mm², consumes 48 mW of power, and is optimized to withstand approximately 200 MRad of ionizing radiation. This talk will present the design methodology and initial results from testing the algorithm in silicon.
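        The reconfigurability described above hinges on the encoder being a fixed network topology whose trained weights can be reloaded. A minimal sketch of that encoder-side idea follows; the dimensions (48 inputs compressed to 16 outputs), the single linear layer, and the weights are arbitrary illustration values, not the ECON-T configuration.

```python
import numpy as np

# "Trained" weights, stored in reloadable registers on a real chip:
# updating this matrix changes the encoding without changing the logic.
rng = np.random.default_rng(1)
weights = rng.normal(0.0, 0.1, size=(16, 48))
bias = np.zeros(16)

def encode(cells):
    """Compress one readout vector. The ReLU keeps outputs non-negative,
    a common choice for an unsigned fixed-point digital implementation."""
    return np.maximum(weights @ cells + bias, 0.0)

# One simulated readout of 48 channels with Poisson-like occupancy
compressed = encode(rng.poisson(3.0, size=48).astype(float))
```

        Only the compressed vector crosses the bandwidth-limited link; the matching decoder runs off-detector, where power and latency constraints are relaxed.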

        Speaker: MANTILLA SUAREZ, Cristina Ana (Fermi National Accelerator Lab. (US))
    • Student Paper Award
    • Front End Electronics and Fast Digitizers
      • 41
        The Nupix-A1, a MAPS for real-time beam monitoring in heavy-ion physics experiments

        Due to its excellent spatial resolution, fast timing response, and low material budget, the Monolithic Active Pixel Sensor (MAPS) has become one of the most advanced technologies for particle measurement. To perform real-time beam monitoring in heavy-ion physics experiments, a MAPS named Nupix-A1 has been designed. The Nupix-A1 can simultaneously measure the position, energy, and arrival time of a particle hit.

        The Nupix-A1 consists of 128 × 64 pixels with a pitch of 30 μm, thirty-two 12-bit column-parallel ADCs, a DAC array with an I2C interface, and a 5 Gbps data transmission link. Each pixel has an energy path and a time path. The energy path can measure up to ~145 ke- with an Integral Non-Linearity (INL) better than ~1%. The measurement range of the time path can be tuned from 3 μs to 275 μs; at the typical range of ~7.5 μs, the maximum INL is ~1.3%. The noise level is ~8 e-. The DAC array can be configured over the I2C interface to tune the bias of the in-pixel circuits. Each column-parallel ADC converts the analog signal from the pixels in two adjacent columns into a digital signal at 3.63 MSps with an ENOB of 11.61 bits. The digital data are serialized and transmitted over the 5 Gbps transmission link, which consists of a 16b/20b encoder, a 20:1 serializer, a Feed-Forward Equalization (FFE) driver, and a high-speed receiver for the external clock. This paper will discuss the design and performance of the Nupix-A1.
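        The ENOB figure quoted for the column-parallel ADCs relates to the measured signal-to-noise-and-distortion ratio (SINAD) through the standard conversion ENOB = (SINAD − 1.76 dB) / 6.02 dB. A quick sketch of that relation (the SINAD value below is derived from the quoted ENOB, not stated in the abstract):

```python
def enob_from_sinad(sinad_db):
    """Effective number of bits from SINAD in dB (standard ADC
    dynamic-performance relation)."""
    return (sinad_db - 1.76) / 6.02

def sinad_from_enob(enob_bits):
    """Inverse relation: SINAD in dB implied by a given ENOB."""
    return 6.02 * enob_bits + 1.76

# An ENOB of 11.61 bits (from a 12-bit converter) implies a SINAD of
# roughly 71.7 dB.
s = sinad_from_enob(11.61)
```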

        Speaker: JU, Huang (IMPCAS)
      • 42
        Integration and Commissioning of the ATLAS Tile Demonstrator Module for Run 3

        The Tile Calorimeter (TileCal) is the central hadronic calorimeter of the ATLAS experiment at the Large Hadron Collider (LHC). The LHC will undergo a series of upgrades leading to the High Luminosity LHC (HL-LHC). The TileCal Phase-II Upgrade will adapt the detector readout electronics to the HL-LHC conditions using a new clock and readout strategy.

        The TileCal Phase-II upgrade project has undertaken an extensive R&D program. A Demonstrator module containing the upgraded on-detector readout electronics was built in 2014, evaluated during seven test beam campaigns, and inserted into the ATLAS experiment in 2019. This module will be operated in the ATLAS experiment during Run 3 (2022–2025) through a Tile PreProcessor (TilePPr) Demonstrator board implementing the upgraded clock and readout architecture envisioned for the HL-LHC. The TilePPr also provides backward compatibility of the Demonstrator module with the present ATLAS Trigger and Data AcQuisition and the Timing, Trigger and Command systems.

        This contribution describes in detail the hardware and firmware for the implementation of the data acquisition system of the Demonstrator module and discusses the results of the integration tests performed during the commissioning of the Demonstrator module for Run 3.

        Speaker: CARRIO ARGOS, Fernando (Instituto de Física Corpuscular (CSIC-UV))
      • 43
        Development of the PreProcessor Modules for the Upgrade of the ATLAS Tile Calorimeter Towards the High-Luminosity LHC

        The Tile Calorimeter (TileCal) is the central hadronic calorimeter of the ATLAS experiment at the Large Hadron Collider (LHC). The LHC will undergo a series of upgrades towards a High Luminosity LHC (HL-LHC) in 2025-2028. The ATLAS TileCal Phase-II Upgrade is planned to adapt the detector and data acquisition system to the HL-LHC requirements. In the upgraded readout architecture, the on-detector readout electronics will transmit detector data to the PreProcessors (PPr) in the counting rooms for every bunch crossing (~25 ns). The new readout system will require a total data bandwidth of 40 Tbps. The PPr boards will transmit calibrated energy and time per cell to the first level of the ATLAS trigger system through the Trigger and Data Acquisition interface system, and trigger-selected event data to the Front End LInk eXchange (FELIX) system. In addition, the PPr boards will be responsible for distributing the LHC clock to the on-detector electronics for the sampling of the PMT signals. A total of 32 PPr boards will be required to read out the entire calorimeter during the HL-LHC, where each PPr board will be composed of four Compact Processing Modules and an ATCA carrier. The design of the hardware, firmware, and software components of the final PreProcessor modules for the ATLAS Tile Calorimeter will be presented. The results and experience gained from the first integration tests are discussed, as well as the plans for the production, validation, and installation of the PreProcessor modules during the ATLAS Phase-II Upgrade.
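        The bandwidth figures above imply a substantial per-board load. A back-of-the-envelope check (the per-board and per-module numbers are simple division, not stated in the abstract):

```python
# Total readout bandwidth shared across the PreProcessor boards
total_tbps = 40.0     # total data bandwidth, Tb/s
n_ppr = 32            # PPr boards reading out the full calorimeter
cpm_per_ppr = 4       # Compact Processing Modules per PPr board

per_board_gbps = total_tbps * 1000 / n_ppr   # Gb/s handled per PPr board
per_cpm_gbps = per_board_gbps / cpm_per_ppr  # Gb/s per Compact Processing Module
```

        Each PPr board must therefore sustain on the order of 1.25 Tb/s, or roughly 312 Gb/s per Compact Processing Module, which motivates the high-speed optical link density of the ATCA-based design.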

        Speaker: CARRIO ARGOS, Fernando (Instituto de Física Corpuscular (CSIC-UV))
      • 44
        Fast opamp-based preamplifier and shaping filter for smart rad-hard fast detection systems

        Different application fields require the development of fast preamplifiers and shaping filters coupled with dedicated radiation detectors. With a view to fast prototyping and targeting a medium number of readout channels, we present in this paper the feasibility study, design, and qualification of a fast, opamp-based preamplifier and shaping filter suited to equip a smart rad-hard detection system for the diagnostics and tagging of Radioactive Ion Beams (RIBs) at high intensity (10^6 pps or higher). The paper illustrates the conception and design of the fast readout system and presents a detailed analysis and qualification of its performance, in view of the design of a network of smart sensors equipped with optimized front-end electronics, a DAQ with real-time data management capabilities, and a dedicated software layer implementing artificial intelligence and machine-learning techniques as tools to improve the production, transport, and use of RIBs.
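        Shaping filters of this kind are conventionally described by the textbook CR-RC^n semi-Gaussian impulse response, which opamp stages approximate. A small sketch of that response follows; the filter order and time constant are arbitrary illustration values, not the design presented in the paper.

```python
import math

def crrc_response(t, tau, n=1):
    """Normalized CR-RC^n impulse response for a delta input.

    h(t) = (t/tau)^n * exp(n - t/tau) / n^n, scaled so the peak
    amplitude is 1; the peaking time is t = n * tau.
    """
    if t < 0:
        return 0.0
    x = t / tau
    return (x ** n) * math.exp(n - x) / (n ** n)

# With n = 2 and tau = 50 ns the response peaks (value 1.0) at 100 ns
peak = crrc_response(2 * 50e-9, 50e-9, n=2)
```

        Higher order n gives a more symmetric, more Gaussian-like pulse at the cost of extra stages, which is the usual trade-off when mapping the filter onto a small number of opamps.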

        Speaker: GUAZZONI, Chiara (Politecnico di Milano & INFN)
      • 45
        Characterization of a 16-Channel LArASIC for the Front-End Readout Electronics of the DUNE Experiment

        A 16-channel front-end Liquid Argon ASIC (LArASIC) has been designed for the Deep Underground Neutrino Experiment (DUNE) liquid argon (LAr) time-projection chamber (TPC). The LArASIC, fabricated in a commercial 180 nm CMOS process, was specifically designed for operation at cryogenic temperatures (77 K–89 K). It has 16 channels of charge amplifiers and shapers with programmable gain and filter time constant to measure the weak charge signals from TPC sensing wires and PCB strips. Around 1200, 24000, and 15360 packaged LArASIC (P5B version) chips will be used for the ProtoDUNE-II, DUNE FD1, and DUNE VD1 experiments, respectively. Systematic characterization and performance testing of the LArASIC chip at cryogenic temperature is therefore vital for the quality and reliability of the detectors at ProtoDUNE-II and DUNE. We have developed a cold test stand for LArASIC characterization and functionality evaluation at cryogenic temperature. A Dual-DUT test board has been designed and developed that encompasses two pairs of LArASIC and ColdADC chips, allowing two LArASIC chips to be characterized in a single thermal cycle. The acquired results satisfy all the DUNE specifications required for LArASIC quality control. Since this characterization methodology has proven successful, it will serve as a reference design for future large-batch QA/QC production testing of LArASIC chips for the DUNE experiments.
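        A typical step in this kind of front-end characterization is a charge-injection scan: the channel response is fitted with a straight line, the slope gives the gain, and the residuals give an integral non-linearity figure. The sketch below illustrates that analysis only; the gain value, charge range, and quantities are assumptions, not the DUNE QC procedure.

```python
import numpy as np

def gain_and_inl(charge_fc, response_mv):
    """Fit a charge-injection scan with a straight line.

    Returns (gain in mV/fC, INL as a percentage of full scale), where
    INL is the maximum fit residual relative to the response range.
    """
    coeffs = np.polyfit(charge_fc, response_mv, 1)
    residual = response_mv - np.polyval(coeffs, charge_fc)
    span = response_mv.max() - response_mv.min()
    inl_pct = 100.0 * np.max(np.abs(residual)) / span
    return coeffs[0], inl_pct

# Synthetic scan of one channel: 21 injection points, ideal 14 mV/fC
q = np.linspace(0.0, 100.0, 21)   # injected charge, fC
v = 14.0 * q + 5.0                # channel output, mV (perfectly linear here)
g, inl = gain_and_inl(q, v)
```

        Running the same fit per channel and per configuration across a production batch is what turns a one-off bench measurement into a scalable QA/QC procedure.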

        Speaker: TARPARA, Eaglekumar (Brookhaven National Laboratory)