
# 22nd IEEE Real Time Conference

### Live stream to watch the conference

Click on "Virtual Conference Handbook" on the left menu and follow the link on that page (you have to be registered to see the handbook).

If you are not registered for the conference, you can still watch a live stream of our invited talks at IEEE.tv. Please share this link with all your friends.

The following presentations will be streamed via IEEE.tv:

• Opening: Mon 12 Oct, 13:00 UTC
• Nhan Viet TRAN, "Real-time machine learning in embedded systems for particle physics": Mon 12 Oct, 13:45 UTC
• Miaoyuan LIU, "Heterogeneous compute for ML acceleration in HEP experiments": Tue 13 Oct, 13:00 UTC
• Audrey CORBEIL THERRIEN, "ML at the Edge for Ultra High Rate Detectors": Wed 14 Oct, 13:00 UTC
• Jean-Pierre MARTIN, CANPS Award Talk: Thu 15 Oct, 13:00 UTC
• Christof BUEHLER, "Artificial Intelligence - a gentle introduction": Fri 16 Oct, 7:00 UTC
• Patrick LE DÛ, "History of the Real Time Conference": Mon 19 Oct, 7:00 UTC

### Virtual Conference

The organizers are excited to present Real Time 2020 in a completely virtual form. This not only ensures the safety of everyone, but also offers new opportunities, such as a completely new form of poster sessions and a drastic reduction of registration fees, especially for low-income countries.

Like the previous editions, RT2020 will be a multidisciplinary conference devoted to the latest developments in real-time techniques in the fields of plasma and nuclear fusion, particle physics, nuclear physics and astrophysics, space science, accelerators, medical physics, nuclear power instrumentation and other radiation instrumentation.

For this year's RT2020, we are excited to include a new topic in the scientific program: Real Time applications in Machine Learning and Deep Learning. We know this is an active area of research in many disciplines, and we encourage those in the community engaged in such work to submit an abstract for this conference.

The conference will provide an opportunity for scientists, engineers, and students from all over the world to share the latest research and developments. This event also attracts relevant industries, integrating a broad spectrum of computing applications and technology.

### Participants
• ABBOTT, David
• ABIVEN, Yves
• ALEXOPOULOS, Kostas
• AMAUDRUZ, Pierre-Andre
• ASTRAIN ETXEZARRETA, Miguel
• AVERY, Keith
• BATTAGLIERI, Marco
• BELL, Zane
• BERGER, Niklaus Emanuel
• BERGNOLI, Antonio
• BIONDI, Silvia
• BOHM, Christian
• BOUCHARD, Jonathan
• BRÜCKNER, Martin
• BÜHLER, Christof
• CALLIGARIS, Luigi
• CALVET, Denis
• CAO, Thanh Long
• CARRIO ARGOS, Fernando
• CHEN, Ying
• CHILO, JOSE
• CHOUDHURY, Somnath
• CHUNG CAO, Văn
• CLOUT, Peter
• CONLIN, Rory
• CORBEIL THERRIEN, Audrey
• CUEVAS, Chris
• DAVID, Quentin
• DEKKERS, Sam
• DENG, Wendi
• DI VITA, Davide
• DING, ye
• DONG, Jianmeng
• DRESSENDORFER, Paul
• EISCH, Jonathan
• ESQUEMBRI, Sergio
• FABBRI, Andrea
• FANG, Ni
• FERRARI, Roberto
• FILIMONOV, Viacheslav
• FONTAINE, Rejean
• FRANCESCONI, Marco
• FREITAS, Hugo
• FU, Xiaoliang
• GALLI, Luca
• GAROLA, Andrea
• GIANNETTI, Paola
• GIOIOSA, Antonio
• GIORDANO, Raffaele
• GIRERD, Claude
• GKAITATZIS, Stamatios
• GOLD, Steven
• GROSSMANN, Martin
• GU, Jinliang
• GU, William
• GUAZZONI, Chiara
• GUO, DI
• GYURJYAN, Vardan
• H. TRUONG, Tuan
• HENNESSY, Karol
• HENNIG, Wolfgang
• HIDVEGI, Attila
• HODAK, Rastislav
• HONG HAI, Vo
• HU, Wenhui
• HUANG, Xing
• HUBER, Stefan
• IGARASHI, Youichi
• ITOH, Ryosuke
• JALMUZNA, Wojciech
• JASTRZEMBSKI, Edward
• JIANG, Lin
• JIANG, Wei
• KAMIKUBOTA, Norihiko
• KARCHER, Nick
• KARKOUR, NABIL
• KIM, Yongkyu
• KLYS, Kacper
• KOERTE, Heiko
• KOHANI, Shahab
• KOHOUT, Pavel
• KOLOS, Serguei
• KOUZES, Richard
• KOZLOWSKI, Pawel
• KUNIGO, Takuto
• KURIMOTO, Yoshinori
• KÖPPEL, Marius
• LAGHA, joumana
• LARSEN, Albe
• LAVITOLA, luigi
• LE DÛ, Patrick
• LE, Xuan Chung
• LEOMBRUNI, Orlando
• LEVINSON, Lorne
• LEVIT, Dmytro
• LI, Jiawen
• LINDNER, Thomas Hermann
• LIU, Guoming
• LIU, Lei
• LIU, Miaoyuan
• LIU, Zhen-An
• LOPEZ CUENCA, Carlos
• LU, Jiaming
• LYOUSSI, Abdallah
• MAKOWSKI, Dariusz
• MANDJAVIDZE, Irakli
• MANSOUR, Wassim
• MARINI, Filippo
• MARJANOVIC, Jan
• MARSICANO, Luca
• MARTIN, Jean-Pierre
• MEIKLE, Steven
• MENDES CORREIA, Pedro Manuel
• MIELCZAREK, Aleksander
• MIN, Yan
• MINO, Yuya
• MORINO, Yuhei
• MÜLLER, Martin
• NAKAO, Mikihiko
• NAKAYE, Yasukazu
• NAKAZAWA, Yu
• NEUFELD, Niko
• NGUYEN, Truong
• NGUYEN, Viet
• NOMACHI, MASAHARU
• OHARA, Takao
• PAN, Lei
• PENG, Qiyu
• PERAZZO, Amedeo
• PESCHKE, Richard
• PHAN, Hien
• PHÚC NGUYỄN, Huỳnh
• PIERRE-ALEAXANDRE, Petitjean
• PISANI, Flavio
• PORZIO, Luca
• PRIM, Markus
• PURSCHKE, Martin Lothar
• QIAN, Sen
• QIANYU, Hu
• RAYDO, Ben
• RITT, Stefan
• RIVERA, Ryan Allen
• ROSS, A C
• RUBEN, Andreas
• RUSSO, Stefano
• SAKUMA, yasutaka
• SAKUMURA, Takuto
• SAMSON, Arnaud
• SATO, setsuo
• SCHAART, Dennis
• SCOTTI, Valentina
• SHU, Yantai
• SMITH, Benjamin
• SONG, Shuchen
• SONG, Zhipeng
• SOTIROPOULOU, Calliope-Louisa
• SOTTOCORNOLA, Simone
• SPIWOKS, Ralf
• STEZELBERGER, Thorsten
• STUBBE, Sven
• SUGIYAMA, Yasuyuki
• SUMMERS, Sioni Paris
• SUN, Quan
• SUN, Xiaoyang
• TAKAHASHI, Hiroyuki
• TAKAHASHI, Tomonori
• TAMURA, Fumihiko
• TANAKA, Aoto
• TANG, Fukun
• TAPPER, Alex
• THAYER, John
• THỊ HỒNG LOAN, Trương
• THỊ KIỀU TRANG, Hoàng
• TIANHAO, Wang
• TRAN, Nhan
• TURQUETI, Marcos
• TÉTRAULT, Marc-André
• URBAN, Ondrej
• VALDES SANTURIO, Eduardo
• VALERO BIOT, Alberto
• VARNER, Gary
• VENARUZZO, Massimo
• VU, Ngoc Minh Trang
• VU, Thanh Mai
• WANG, Cheng
• WANG, Hui
• WANG, Wei
• WANG, Weijia
• WANG, Yang
• WEN, Jingjun
• WETZ, David
• WU, Jinyuan
• XIE, Likun
• YANG, Min
• YANG, Yifan
• YIN, Zhiguo
• YOSHIDA, Hiroki
• YUAN, Jianhui
• YUAN, Shiming
• ZHANG, Honglin
• ZHANG, Li
• ZHANG, Wei
• ZHANG, Yuezhao
• ZHANG, Zuchao
• ZHAO, Chengxin
• ZHAO, Zhixiang
• ZHONG, Ke
• ZHOU, Qidong
• ZHU, Jinfu
• ZICH, Jan
• ZIMMERMANN, Sergio
• Monday, October 12
• Opening
• 1
Welcome
Speaker: GROSSMANN, Martin
• 2
Technical Introduction
Speaker: RITT, Stefan (Paul Scherrer Institute)
• 3
Publications
Speaker: Dr SCHMELING, Sascha (CERN)
• TNS best paper award
Convener: BELL, Zane (Oak Ridge National Laboratory (retired))
• Invited talk 01
• 4
Real-time machine learning in embedded systems for particle physics

We discuss applications and opportunities for machine learning in real-time embedded systems in particle physics. This talk will focus on how to implement machine learning algorithms in systems with FPGAs and ASICs for a variety of use cases. We will review essential ideas for designing and optimizing efficient algorithms in hardware and emerging tool flows to accelerate algorithm development. We will then explore a few examples spanning different application spaces, from front-end data compression in rad-hard environments to powerful trigger reconstruction algorithms to controls of particle accelerators.

Speaker: TRAN, Nhan Viet (Fermi National Accelerator Lab. (US))
• Oral presentations DAQ01
• 5
Real-Time Implementation of the Neutron/Gamma discrimination in an FPGA-based DAQ MTCA platform using a Convolutional Neural Network

Research on the classification of neutron/gamma waveforms in scintillators using Pulse Shape Discrimination (PSD) techniques is still an active topic. Numerous methods have been explored to optimise this classification. Some of the most recent research focuses on machine learning techniques for this classification, with excellent results. In this field, FPGAs with high-sampling-rate ADCs have been used to perform this classification in real time. In this work, we select a potential architecture and implement it with the help of the IntelFPGA OpenCL SDK environment. The shorter, C-like development flow of OpenCL enables more straightforward modification and optimisation of the network architecture. The main goal of the work is to evaluate the resources and performance of a complete solution in the FPGA. The dataset used to model the neural network is provided by JET. The FPGA design is generated as if it were connected to an ADC module streaming the data samples, with the help of a Board Support Package developed for an IntelFPGA ARRIA10 available in an AMC module in an MTCA.4 platform. The prototyped solution has been integrated into EPICS using the Nominal Device Support (NDSv3) model that is currently being developed by ITER.

*See the author list of E. Joffrin et al. accepted for publication in Nuclear Fusion Special issue 2019, https://doi.org/10.1088/1741-4326/ab2276
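As a baseline for the CNN approach described above, the classic charge-comparison PSD technique can be sketched in a few lines. This is illustrative only; the waveforms, gate position and decay constants below are synthetic assumptions, not JET data:

```python
import numpy as np

def psd_ratio(waveform, gate_start):
    """Charge-comparison PSD figure of merit: tail charge over total charge.
    In many scintillators a larger tail fraction indicates a neutron-like pulse."""
    return waveform[gate_start:].sum() / waveform.sum()

# Synthetic pulses: gamma-like (fast decay only) vs neutron-like (slow tail added).
t = np.arange(200)
gamma_pulse = np.exp(-t / 10.0)
neutron_pulse = 0.7 * np.exp(-t / 10.0) + 0.3 * np.exp(-t / 60.0)

r_gamma = psd_ratio(gamma_pulse, gate_start=30)
r_neutron = psd_ratio(neutron_pulse, gate_start=30)
assert r_neutron > r_gamma  # the neutron-like pulse has the larger tail fraction
```

A CNN replaces this fixed two-gate integration with a learned function of the full sampled waveform, which is what the work above implements on the FPGA.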

• 6
FELIX: commissioning the new detector interface for the ATLAS trigger and readout system

After the current LHC shutdown (2019-2021), the ATLAS experiment will be required to operate in an increasingly harsh collision environment. To maintain physics performance, the ATLAS experiment will undergo a series of upgrades during the shutdown. A key goal of this upgrade is to improve the capacity and flexibility of the detector readout system. To this end, the Front-End Link eXchange (FELIX) system has been developed. FELIX acts as the interface between the data acquisition; detector control and TTC (Timing, Trigger and Control) systems; and new or updated trigger and detector front-end electronics. The system functions as a router between custom serial links from front end ASICs and FPGAs to data collection and processing components via a commodity switched network. The serial links may aggregate many slower links or be a single high bandwidth link. FELIX also forwards the LHC bunch-crossing clock, fixed latency trigger accepts and resets received from the TTC system to front-end electronics. FELIX uses commodity server technology in combination with FPGA-based PCIe I/O cards. FELIX servers run a software routing platform serving data to network clients. Commodity servers connected to FELIX systems via the same network run the new multi-threaded Software Readout Driver (SW ROD) infrastructure for event fragment building, buffering and detector-specific processing to facilitate online selection. This presentation will cover the design of FELIX and the results of the installation and commissioning activities for the full system in spring 2020.

Speaker: FERRARI, Roberto (INFN Pavia (IT))
• 7
The Mu3e Data Acquisition

The Mu3e experiment aims to find or exclude the lepton flavor violating decay $\mu^+ \rightarrow e^+e^-e^+$ with a sensitivity of one in 10$^{16}$ muon decays. The first phase of the experiment is currently under construction at the Paul Scherrer Institute (PSI, Switzerland), where beams with up to 10$^8$ muons per second are available. The detector will consist of an ultra-thin pixel tracker made from high-voltage monolithic active pixel sensors (HV-MAPS), complemented by scintillating tiles and fibres for precise timing measurements. The experiment produces about 80 GBit/s of zero-suppressed data which are transported to a filter farm using a network of FPGAs and fast optical links. On the filter farm, tracks and three-particle vertices are reconstructed using highly parallel algorithms running on graphics processing units, leading to a reduction of the data to 100 Mbyte/s for mass storage and offline analysis. The talk introduces the system design and reports on progress in building and testing the hardware, firmware and software for this data acquisition system.

Speaker: BERGER, Niklaus (Institute of Nuclear Physics, JGU Mainz)
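The online data reduction quoted above (80 Gbit/s into the filter farm, 100 Mbyte/s to mass storage) corresponds to roughly a factor of 100, which simple arithmetic confirms:

```python
# Data-reduction factor implied by the Mu3e numbers above.
input_gb_s = 80 / 8            # 80 Gbit/s of zero-suppressed data = 10 GB/s
output_gb_s = 0.1              # 100 Mbyte/s to mass storage
reduction = input_gb_s / output_gb_s
assert abs(reduction - 100) < 1e-9  # GPU reconstruction shrinks the data ~100x
```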
• 8
The new software based read out driver for the ATLAS experiment

In order to maintain sensitivity to new physics in the coming years of LHC operations, the ATLAS experiment is upgrading a portion of the front-end electronics and replacing parts of the detector with new devices that can operate under the much harsher background conditions of the future LHC. The legacy DAQ system features detector-specific custom VMEbus boards (Readout Drivers or 'RODs') devoted to data processing, configuration and control. Data are then received by a common Readout System (ROS), responsible for buffering during High-Level Trigger (HLT) processing. From Run 3 onward, all new trigger and detector systems will be read out using new components, replacing the RODs and ROS. This new path will feature an application called the Software Readout Driver (SW ROD), which will run on a commodity server receiving front-end data via the Front-End Link eXchange (FELIX) system. The SW ROD will perform event fragment building and buffering as well as serving the data on request to the HLT. The SW ROD application has been designed as a highly customizable high-performance framework providing support for detector specific event building and data processing algorithms. The Run 3 implementation is capable of building event fragments at a rate of 100 kHz from an input stream consisting of up to 120 MHz of individual data packets.

This document will cover the design and the implementation of the SW ROD application and the results of performance measurements taken with the server models selected to host SW ROD applications in Run 3.

Speaker: KOLOS, Serguei (University of California Irvine (US))
• Micro Oral A
• Industrial Exhibit 12.10.
• Poster session A-01
• 9
The Analog Front-end for the LGAD Based Precision Timing Application in CMS ETL

The analog front-end for the Low Gain Avalanche Detector (LGAD) based precision timing application in the CMS Endcap Timing Layer (ETL) has been prototyped in a 65 nm CMOS mini-ASIC named ETROC0. Serving as the very first prototype of the ETL readout chip (ETROC), ETROC0 aims to study and demonstrate the performance of the analog front-end, with the goal of achieving 40 to 50 ps time resolution per hit with the LGAD (and therefore about 30 ps per track with two detector-layer hits per track). ETROC0 consists of preamplifier and discriminator stages, which amplify the LGAD signal and generate digital pulses containing time of arrival (TOA) and time over threshold (TOT) information. The actual TOA and TOT measurements will be done with a time-to-digital converter (TDC) downstream, which is not part of ETROC0. ETROC0 chips have been tested extensively using charge injection, laser, cosmic rays as well as beam, and the measured performance agrees well with simulation. The initial beam test at Fermilab demonstrated a time resolution of around 33 ps from the preamplifier waveform analysis and around 42 ps from the discriminator pulse analysis. A subset of ETROC0 chips has also been tested to a total ionizing dose (TID) of 100 MRad using an X-ray machine at CERN, and no performance degradation has been observed. The ETROC0 design is successful and is used directly, without modification, in the next prototype chip, ETROC1, which has a 5x5 pixel array and includes a new TDC stage and H-tree style clock distribution.
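The per-track goal quoted above follows from averaging two independent detector-layer hits, which improves the resolution by a factor of sqrt(2). A quick check using the 42 ps per-hit figure from the discriminator pulse analysis:

```python
import math

# Averaging two independent hits improves timing resolution by sqrt(2).
sigma_hit = 42.0                        # ps per hit (discriminator pulse analysis)
sigma_track = sigma_hit / math.sqrt(2)  # ps per track with two layer hits
assert 29 < sigma_track < 31            # consistent with the ~30 ps per-track goal
```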

• 10
A Low-Power Time-to-Digital Converter for the CMS Endcap Timing Layer (ETL) Upgrade

We present the design and test results of a Time-to-Digital-Converter (TDC) functional block. The TDC will be a part of the readout ASIC, called ETROC, of the Low-Gain Avalanche Detectors (LGADs) for the CMS Endcap Timing Layer (ETL) of High-Luminosity LHC upgrade. One of the challenges of the ETROC design is that the TDC is required to consume less than 200 mW for each pixel at the nominal hit occupancy of 1%. The TDC is based on a simple delay-line approach originally developed in FPGA implementation. In order to meet the low power requirement, we use a single delay line for both the Time of Arrival (TOA) and the Time over Threshold (TOT) measurements without delay control. A double-strobe self-calibration scheme is used to achieve high performance. The TDC is fabricated in a 65 nm CMOS technology mini-ASIC. The TOA has a bin size of 17.8 ps and a precision of around 5.4 ps. Within its effective dynamic range of 11.4 ns, the Differential Non-Linearity (DNL) is better than ±0.5 LSB and the Integral Non-Linearity (INL) is less than ±1.0 LSB. The TOT has a bin size of 35.4 ps, twice the TOA bin size, and a precision of about 10 ps. The DNL and INL of the TOT are ±0.8 LSB and ±1.3 LSB, respectively. The calibration process functions as expected. The actual TDC block consumes about 100 mW at the hit occupancy of 1%. The overall measured performance of the TDC meets the CMS ETL upgrade requirements.

Speakers: Mr ZHANG, Wei (Southern Methodist University), Dr GONG, Datao (Southern Methodist University)
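DNL and INL figures like those quoted above are commonly extracted with a code-density test: feed the TDC uniformly distributed hits, so each bin's count is proportional to its width. A minimal sketch (the 16-bin histogram is a hypothetical example, not ETROC data):

```python
import numpy as np

def dnl_inl_from_code_density(hit_counts):
    """Code-density test: with uniform hits, DNL_i = count_i/ideal - 1 (in LSB);
    INL is the running sum of the DNL values."""
    counts = np.asarray(hit_counts, dtype=float)
    ideal = counts.sum() / counts.size   # expected hits for an ideal-width bin
    dnl = counts / ideal - 1.0
    inl = np.cumsum(dnl)
    return dnl, inl

# Hypothetical 16-bin TDC with one slightly wide and one slightly narrow bin.
counts = np.full(16, 1000)
counts[5], counts[9] = 1200, 800
dnl, inl = dnl_inl_from_code_density(counts)
```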
• 11
A 20 Gbps Data Transmitting ASIC with PAM4 for Particle Physics Experiments

We present the design and test results of a data transmitting ASIC, GBS20, for particle physics experiments. The goal of GBS20 is an ASIC that employs two serializers, each from the 10.24 Gbps lpGBT SerDes, sharing the PLL also taken from lpGBT. A PAM4 encoder plus a VCSEL driver will be implemented in the same die to use the same clock system, eliminating the need for CDRs in the PAM4 encoder. This way the transmitter mezzanine module, GBT20, developed using the GBS20 ASIC, will have the exact lpGBT data interface and transmission protocol, with an output of 20.48 Gbps over one fiber. With PAM4-capable FPGAs at the receiving end, GBT20 will halve the fibers needed in a system and make better use of the input bandwidth of the FPGA. A prototype, GBS20.v0, was fabricated using a commercial 65 nm CMOS technology. This prototype has two serializers and a PAM4 encoder sharing the lpGBT PLL, but no user data input; an internal PRBS generator provides data to the serializers. GBS20.v0 has been tested up to 20.48 Gbps. With lessons learned from this prototype, we are designing the second prototype, GBS20.v1, which will have 16 user data input channels, each at 1.28 Gbps, and will thus be not only a test chip but also a user chip. We will present the design concept of the GBS20 ASIC and the GBT20 module, the preliminary test results and lessons learned from GBS20.v0, and the design of GBS20.v1.

Speaker: Ms ZHANG, Li (Southern Methodist University and Central China Normal University)
• 12
Gas emission measurement system used in Chilla-Juliaca landfill

In many countries around the world, most waste is disposed of in landfills, which generates public concern about the health effects of emissions. Landfill gases are produced by the natural bacterial decomposition of waste; they consist of roughly half methane, with the remainder mostly carbon dioxide and minor amounts of other gases. Real-time measurement and modelling of emission gases in landfills is therefore important. In this work, a low-cost wireless measurement system is developed using MOS gas sensors (MQ4, MQ5 and MQ9), a microcontroller, and XBee and HC-12 wireless communication modules. The system can be mounted on an unmanned aircraft (UAV, drone) or deployed as a wireless sensor network. Experiments have been carried out near a closed landfill, showing high gas concentrations. According to the preliminary results, high levels of methane concentration can be seen. Since the population lives very close to the landfill, it is very important to gain more knowledge of the generation of methane and other gases.

Speaker: CHILO, JOSE (University of Gavle)
• 13
Synchronous digitization of incident pulses across 1088-pixel Camera of MACE Telescope

In Imaging Atmospheric Cherenkov Telescopes like MACE, synchronous digitization of the 1088 input channels of the Camera Electronics is of paramount importance. The PMT-based pixels of the Camera are incorporated in 68 modules called Camera Integrated Modules (CIM), each catering to 16 pixels. Each module acquires, processes and transfers data independently of the other modules. To reduce the data volume, the charge accumulated in each pixel is calculated in the Camera itself, and profile data is sent only for the pixels above threshold. Hence it is important to align the recorded pulse profiles with the minimum possible jitter. The SCA-based analog memory ASIC DRS4 is used for sampling at 1 GSPS. CIMs start event acquisition on receipt of a global trigger signal, which is applied to the 9th input channel of each DRS4 IC. Detection of the pulse profile region is based on detection of the trigger signal in the DRS4 IC. Various factors, such as the inherent jitter of the DRS4 IC, trigger generation jitter and the different HV applied to the PMTs, add to the uncertainties in the timing of the recorded pulses. Data processing algorithms have been implemented in the FPGA to correct for these uncertainties in real time and record the incident pulses with reduced jitter. The Camera Electronics of the MACE Telescope has been installed and integrated Camera trial runs are being carried out. This paper presents the details of the implemented scheme for low-jitter event acquisition and initial results observed during the trial runs, with a focus on the jitter in recorded pulses across the camera during the LED calibration scheme.

Speaker: NEEMA, SAURABH (Bhabha Atomic Research Centre, Mumbai)
• 14
The GosipGUI framework for control and benchmarking of readout electronics front-ends

The GOSIP (Gigabit Optical Serial Interface Protocol) provides communication via optical fibres between multiple kinds of front-end electronics and the KINPEX PCIe receiver board located in the readout host PC. In recent years a stack of device driver software has been developed to utilize this hardware for several scenarios of data acquisition. On top of this driver foundation, several graphical user interfaces (GUIs) have been created.
These GUIs are based on the Qt graphics libraries and are designed in a modular way: all common functionalities, such as generic I/O with the front-ends, handling of configuration files, and window settings, are handled by a framework class GosipGUI. In the Qt workspace of such a GosipGUI frame, specific subclasses may implement additional windows dedicated to operating different GOSIP front-end modules. These readout modules, developed by the GSI Experiment Electronics department, include FEBEX sampling ADCs, TAMEX FPGA-TDCs and POLAND QFWs.
For each kind of front-end, the GUIs allow the user to monitor specific register contents, set up the working configuration, and interactively change parameters such as sampling thresholds during data acquisition. The latter is extremely useful when qualifying and tuning the front-ends in the electronics lab or detector cave.
Moreover, some of these GosipGUI implementations have been equipped with features for mostly automatic testing of ASICs in prototype mass production. This has been applied to the APFEL-ASIC component of the PANDA experiment currently under construction, and to the FAIR beam diagnostics readout system POLAND.

• 15
High-Level Software Tools for LLRF System Dedicated to Elliptical Cavities Management of European Spallation Source Facility

The elliptical cavities are an important component of the superconducting part of the ESS (European Spallation Source) linear accelerator. Each of the 120 cavities has to deliver the electric field at the proper phase and amplitude to accelerate protons. The LLRF (Low Level Radio Frequency) system is responsible for controlling these parameters and distributing the reference clock signal. Another task is the compensation of parasitic effects such as Lorentz force detuning. These phenomena shift the resonator frequency by a magnitude that depends on the mechanical properties of the cavity, resulting in less effective accelerator performance.
The paper describes the Piezo Driver application, used to control the piezoelectric actuators that actively compensate the detuning effect. The Piezo Driver hardware was designed as an RTM (Rear Transition Module) in the MTCA.4 (Micro Telecommunications Computing Architecture) based LLRF system. An LO (Local Oscillator) and clock generation component, also implemented as an RTM, is part of this setup as well; a dedicated software module manages LO/CLK signal configuration and distribution for the LLRF system. The Cavity Simulator software is responsible for configuring the hardware cavity simulator, a device that emulates the behaviour of elliptical resonant cavities. The IPMI (Intelligent Platform Management Interface) manager is an application to monitor the chassis state of the MicroTCA.4 system, the hardware standard chosen for the LLRF infrastructure at ESS.
All of the presented tools use the EPICS (Experimental Physics and Industrial Control System) framework and the E3 (ESS EPICS Environment) environment. The operator panels were designed in the ESS Phoebus release.
The paper demonstrates the structure of these solutions and discusses the results of the tests and measurements conducted.

Speaker: Mr KLYS, Kacper (Lodz University of Technology, Department of Microelectronics and Computer Science)
• 16
Global Trigger Technological Demonstrator for ATLAS Phase-II upgrade

The ATLAS detector at the Large Hadron Collider (LHC) will undergo a major Phase-II upgrade for the High Luminosity LHC (HL-LHC). The upgrade affects all major ATLAS systems, including the Trigger and Data Acquisition systems.
As part of the Level-0 Trigger System, the Global Trigger uses full-granularity calorimeter cells to perform its algorithms, refine the trigger objects and apply topological requirements. The Global Trigger uses a Global Common Module (GCM) as the building block of its design. To achieve high input and output bandwidth and substantial processing power, the GCM will host the most advanced FPGAs and optical modules. In order to evaluate the new generation of optical modules and FPGAs running at high data rates (up to 28 Gb/s), a Global Trigger Technological Demonstrator board has been designed and tested. The main hardware blocks of the board are the Xilinx Virtex Ultrascale+ 9P FPGA and a number of optical modules, including high-speed Finisar BOA and Samtec FireFly modules. Long-run link tests have been performed with the Finisar BOA and Samtec FireFly optical modules running at 25.65 and 27.58 Gb/s respectively. Successful results demonstrating good performance of the optical modules when communicating with the FPGA have been obtained. The presentation will provide a hardware overview and measurement results of the Technological Demonstrator.

Speaker: FILIMONOV, Viacheslav (Johannes Gutenberg Universitaet Mainz (DE))
• 17
Safe Bound on simulation interval for real-time systems for two-phases task model

Real-time computing refers to applications that not only compute correct results but, more importantly, deliver them on time. In addition to the usual system constraints, real-time systems include timing constraints that must be satisfied in order to provide a safe computation. Designing such systems requires particular attention and strict guarantees to ensure their safety. One of the key problems in the design is the schedulability problem. To test it, simulation-based tests have been defined; these tests require a safe interval bound for running the system under a given scheduling algorithm in order to conclude its schedulability. In this paper, we adopt a multi-phase computation model that differentiates between memory accesses and computations; together with the use of DMA engines, this allows most memory access latencies to be hidden behind computation times and enhances the performance of the system. We propose the first safe bound on the simulation interval for any schedule generated by a deterministic and memoryless scheduler running on a mono-processor platform, where tasks are composed of two independent phases: a memory phase, in which the program is prefetched into local memory (scratchpad) from main memory or an I/O device, and a computation phase, in which the task executes on the processor without needing to access main memory or I/O devices.

Speaker: Dr LAGHA, Joumana (Université de Nantes, École Centrale de Nantes, CNRS, LS2N, UMR 6004, F-44000 Nantes, France.)
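For context, the classical simulation interval for synchronous periodic task sets under a deterministic, memoryless scheduler is the hyperperiod, the least common multiple of the task periods. The paper's contribution is a safe bound for the two-phase model, which this sketch does not reproduce; the task set below is a hypothetical example:

```python
from functools import reduce
from math import gcd

def hyperperiod(periods):
    """LCM of the task periods: the classical simulation interval for a
    synchronous periodic task set once the schedule becomes cyclic."""
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

# Hypothetical two-phase tasks given as (memory_phase, computation_phase, period).
tasks = [(1, 3, 10), (2, 4, 20), (1, 2, 25)]
assert hyperperiod([p for _, _, p in tasks]) == 100
```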
• 18
Mössbauer spectrometer with high channel resolution utilizing real-time industrial computer

A new Mössbauer spectrometer based on the CompactRIO platform has been developed. CompactRIO is a modular industrial computer platform developed by National Instruments. The device runs LabVIEW RT OS, a Linux-based real-time operating system supported by the LabVIEW programming environment. CompactRIO has an integrated FPGA (Field Programmable Gate Array), which is used for the most critical functions of the developed spectrometer, such as scintillation detector signal acquisition, accumulation of Mössbauer spectra, and driving the velocity transducer movement using PID (proportional-integral-derivative) regulation. The velocity transducer modulates the gamma-ray energy through its movement via the Doppler effect; the precision of this movement is crucial, as it defines the accuracy of the whole spectrometer. More complicated algorithms, such as PID parameter autotuning using genetic algorithms and additional Mössbauer spectrum linearization methods, are performed on the real-time part of the CompactRIO. The developed spectrometer shows the advantages of real-time systems, such as deterministic processing of large amounts of data, which allows measurements at very high resolution.

Speaker: KOHOUT, Pavel (Joint Institute for Nuclear Research)
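The PID regulation of the velocity transducer mentioned above can be illustrated with a minimal discrete PID loop. The gains, time step and the first-order plant below are arbitrary assumptions for demonstration, not the spectrometer's actual parameters:

```python
class PID:
    """Minimal discrete PID regulator, of the kind used to drive a
    velocity transducer toward a reference velocity."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order plant toward a constant velocity setpoint.
pid = PID(kp=2.0, ki=5.0, kd=0.0, dt=0.001)
v = 0.0
for _ in range(5000):
    u = pid.step(setpoint=1.0, measured=v)
    v += (u - v) * 0.001   # toy plant: velocity relaxes toward the drive signal
assert abs(v - 1.0) < 0.01  # converges to the setpoint
```

In the spectrometer the regulator runs on the FPGA against the transducer's pickup coil signal; the integral term is what removes the steady-state velocity error.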
• 19
Real time dosimeter based on a scintillation fiber operating in single photon counting for medical applications

Brachytherapy is a radiotherapy modality in which the radioactive material is placed close to the tumor; it is a common treatment for skin, breast and prostate cancers. In order to provide the best possible treatment and minimize risks for patients, dose verification and quality assurance in radiotherapy need to be assessed in real time and with as much precision as possible. Notwithstanding, due to technical constraints in certain treatments (brachytherapy), no tool currently exists that is capable of real-time dose measurement. On the other hand, interventional radiology involves the exposure of both patients and health professionals to ionizing radiation, making it necessary to establish radiation protection practices for the professionals involved in order to reduce their levels of exposure. Thus, a dosimeter capable of providing a real-time reading of the dose in strategic areas would allow professionals to take protective action.

Speaker: FREITAS, H.
• 20
Experience and performance of persistent memory for the DUNE data acquisition system

DUNE is a long-baseline neutrino experiment due to take data in 2025. The data acquisition (DAQ) system of the DUNE experiment will consist of a highly complex and distributed architecture designed to handle O(5) TB/s of incoming data from the readout system at a rate of 2 MHz.

One of the physics goals of the DUNE experiment is to detect neutrino signals from Supernova Burst (SN) events. These appear as low-energy signals distributed over the detector. In order to achieve this goal, the DAQ system needs to store the data in transient buffers of about 10 s. Once the trigger decision has been taken, the data need to be persisted continuously for a total of about 100 s at a rate of 10 GB/s for each detector unit.

In this talk, we present studies on the DAQ system for supernova burst events. After an initial characterization of the performance of both Intel Optane SSDs and Persistent Memory devices, we present the results of an initial prototype capable of handling a data rate of approximately 10 GB/s for 100 s.

Speaker: ABED ABUD, Adam (University of Liverpool (GB))
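The buffering figures in the abstract imply the following back-of-the-envelope storage requirements (arithmetic only; all numbers are taken from the text above):

```python
# Storage implied by the DUNE supernova-burst buffering requirements.
readout_rate_tb_s = 5        # O(5) TB/s from the readout system
transient_window_s = 10      # ~10 s transient buffers
persist_rate_gb_s = 10       # GB/s persisted per detector unit
persist_window_s = 100       # ~100 s of supernova-burst data

transient_buffer_tb = readout_rate_tb_s * transient_window_s          # ~50 TB in flight
persisted_per_unit_tb = persist_rate_gb_s * persist_window_s / 1000   # ~1 TB per unit
```

So each detector unit must sink roughly a terabyte per triggered burst, which is why sustained-write performance of Optane SSDs and persistent memory is the quantity under study.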
• 21
SAMPA Based Streaming Readout Data Acquisition Prototype

We have assembled a small-scale streaming data acquisition system based on the SAMPA front-end ASIC. The 32-channel SAMPA chip was designed for the high-luminosity upgrade of the ALICE Time Projection Chamber (TPC) detector at the CERN Large Hadron Collider. The goals of the prototype system are to determine if the SAMPA chip is appropriate for use in detector systems at Jefferson Lab, and to gain experience with the hardware and software required to deploy streaming data acquisition systems in nuclear physics experiments.
The 800-channel system is composed of components used in the ALICE TPC data acquisition upgrade. Five front-end cards (FEC) support five SAMPA chips each. SAMPA data streams on an FEC are concentrated into two high-speed (4.48 Gb/s) serial data streams by a pair of GigaBit Transceiver (GBTx) ASICs. These ten streams (44.8 Gb/s) are transmitted from the FECs over fibers to a PCIe-based readout unit. The FPGA engine on the readout unit compresses data for transmission to a server via 100 Gb Ethernet. Components on the FECs are radiation tolerant and can handle high data rates. The system is scalable by design and thus provides a functional prototype for high-rate streaming readout at Jefferson Lab and the future Electron-Ion Collider.
We have made fundamental measurements (noise, linearity, time resolution) on the SAMPA ASIC. We have also coupled the readout system to a small Gas Electron Multiplier (GEM) detector and have studied its response to cosmic rays. A beam test of the system is planned.

Speaker: JASTRZEMBSKI, Edward (Thomas Jefferson National Accelerator Facility)
• 22
OpenIPMC: a free and open source Intelligent Platform Management Controller

We present a free and open source firmware designed to operate as an Intelligent Platform Management Controller (IPMC). An IPMC is a fundamental component for electronic boards conformant to the Advanced Telecommunications Computing Architecture (ATCA) standard, being adopted by a number of high energy physics experiments, and is responsible for monitoring the health parameters of the board, managing its power states, and providing board control, debug and recovery functionalities to remote clients. OpenIPMC is based on the FreeRTOS real-time operating system and is designed to be architecture-independent, allowing it to be run on a variety of different microcontrollers. Having a fully free and open source code is an innovative aspect for this kind of firmware, allowing full customization by the user. This work aims to describe its development, features and usage.

• 23
The electronics of the Juno Experiment central detector

The Jiangmen Underground Neutrino Observatory (JUNO) will be the largest liquid scintillator (LS) underground neutrino experiment. JUNO is under construction in the south of China and its main purpose is to measure the neutrino mass hierarchy. Thanks to its large mass, 20 kton of LS, it will perform several measurements, from precise measurements of the oscillation parameters to the detection of neutrinos coming from supernovae. The central detector is made of a sphere of 20 kton of LS surrounded by about 18000 20" (large) and 25000 3" (small) photomultipliers. An essential ingredient for the neutrino mass hierarchy measurement is excellent energy (3% at 1 MeV) and time resolution. These requirements will be satisfied thanks to full PMT waveform acquisition with a large dynamic range (from 1 to 1000 photo-electrons). In order to satisfy these challenging requirements, the front-end electronics will fully exploit a set of 14-bit Flash ADCs, a medium-sized FPGA and a dedicated synchronization and trigger system that distributes the clock to all the channels, synchronizes the timestamps and manages the validation of the trigger primitives sent by each channel. The front-end electronics will be placed underwater, so the reliability of such devices will be a primary concern. The data readout is performed with 1 Gb/s Ethernet links, connecting groups of three PMTs to the 'dry' electronics. This paper presents an overview of the electronics of the central detector with results from tests performed on prototypes of the full electronics and trigger chains.

Speaker: BERGNOLI, Antonio (Universita e INFN, Padova (IT))
• 24
Multi-Purpose Compact Zynq-Based System-on-Module for Data Acquisition Systems

Modern FPGAs and systems-on-chip (SoC) require extensive amounts of electronics and complex multi-layer printed circuit boards (PCB) to support their operating conditions. By combining all required electronics on a densely packed PCB, SoC-based system-on-module (SoM) designs help to reduce system costs, since the same SoM can be reused within a DAQ system or across different applications. Off-the-shelf SoMs are widely available, but they are generally limited to entry-level processors and commonly sacrifice electromagnetic compatibility and interference (EMC/EMI) performance and signal integrity to lower costs by reducing the number of layers and component quality. We present a new SoM based on the Xilinx Zynq SoC, specifically designed for low-noise applications requiring high bandwidth and large numbers of I/Os, such as positron emission tomography (PET) and time-correlated single photon counting (TCSPC) applications. The SoM integrates a Zynq 7030 SoC, 1 GB DDR3L RAM, a Gigabit Ethernet PHY and all required voltage regulators on an 18-layer 60 x 70 mm² PCB, and provides up to 197 user I/Os to the carrier board. The board design was optimized for high signal integrity, low EMI emissions and good power integrity using a custom stackup and shielding techniques. All power sequencing is managed onboard, and I/O signal lengths are precisely matched and compensated for internal package variations, leading to simple application-driven carrier board designs and lower system costs. A custom test board was also designed to validate the SoM after the assembly process and ensure good electrical paths, reducing the risk prior to system integration.

Speaker: Mr BOUCHARD, Jonathan (Université de Sherbrooke)
• 25
A VXS (VITA 41) Trigger Processor for the 12 GeV Experimental Programs at Jefferson Lab

The VXS Trigger Processor (VTP) was developed and commissioned for CLAS12 in the fall of 2016. This board is a VITA 41 switch card that collects data from a variety of front-end TDC and flash ADC modules. The VTP has since been used in several experiments at Jefferson Lab, serving as the L1 trigger module for a variety of detector types, such as stacked calorimeters, strip calorimeters, time-of-flight counters, Cerenkov counters, hodoscopes, drift chambers, and silicon strips. Trigger algorithms implemented include cluster finding (1D, 2D), drift chamber segment and road finding, geometry matching between various detectors, particle counting, and general global trigger bit processing. The VTP is also capable of reading out each front-end crate with up to 40 Gbps Ethernet, an enormous increase compared to the currently used 200 MB/s VME bus. Recent progress has shown that a firmware and software upgrade can enable existing Jefferson Lab front-end crates to operate in a streaming DAQ mode. In February 2020, tests will be performed with beam in Hall B on a full calorimeter and matched hodoscope, which are components of the CLAS12 Forward Tagger detector system. This paper details the hardware performance and the triggered and streaming applications that have been implemented using the VTP for several experiments at Jefferson Lab.

Speaker: CUEVAS, Chris (Jefferson Lab)
• 26
Design and implementation of a digital CDR in FPGA

The capability to extract timing information out of a serial data stream has become a very common requirement. The receiver usually relies on a Clock and Data Recovery (CDR) chip, which generates a clock signal at the corresponding sampling frequency, phase-aligned to the data.
Modern physics experiments often have this same requirement, where perhaps thousands of boards receive uncorrelated data and it is up to them to decode the messages. For that reason, an on-board CDR is usually mandatory.
Present readout systems in physics experiments usually rely on FPGAs to receive and transmit data at high rate to high-capacity DAQ systems; exploiting FPGAs to recover timing information from streamed data is therefore beneficial for a number of reasons, including power consumption and cost reduction.
The design is based on two components: a Numerically-Controlled Oscillator (NCO), to create a controlled-frequency clock signal, and a digital Phase Detector (PD), to match the clock frequency with the data rate.
NCOs are often coupled with a Digital-to-Analog Converter (DAC) to create Direct Digital Synthesizers (DDS), which are able to produce analog waveforms of any desired frequency. In the presented case, the NCO generates a digital clock signal of an arbitrary frequency, while the PD adjusts this frequency by detecting any shift in the relative phase between the clock and the data.
The paper presents the implemented CDR design, the limitations and the challenges involved, possible fields of application in actual physics experiments and, finally, some results.
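The NCO-plus-phase-detector loop described above can be modelled in a few lines of software. The sketch below is our own behavioural illustration under simple assumptions (a phase accumulator as the NCO, a bang-bang early/late phase detector); the class and parameter names are invented and this is not the FPGA design presented in the paper:

```python
# Behavioural model of an NCO + phase-detector CDR loop (illustrative only).
# The NCO is a phase accumulator; a bang-bang detector nudges its increment
# up or down whenever a data transition reveals the clock edge is misplaced.

class DigitalCDR:
    def __init__(self, nominal_inc: int, gain: int = 1, width: int = 32):
        self.acc = 0               # NCO phase accumulator
        self.inc = nominal_inc     # phase increment per reference tick
        self.gain = gain           # loop gain of the bang-bang correction
        self.mod = 1 << width      # accumulator wraps at 2**width
        self.last_sample = 0

    def tick(self, data_bit: int) -> int:
        """Advance the NCO one reference tick; return the recovered clock."""
        self.acc = (self.acc + self.inc) % self.mod
        clock = 1 if self.acc < self.mod // 2 else 0
        # Bang-bang phase detector: on a data transition, steer the NCO
        # frequency depending on which clock half-period the edge landed in.
        if data_bit != self.last_sample:
            self.inc += self.gain if clock else -self.gain
        self.last_sample = data_bit
        return clock

cdr = DigitalCDR(nominal_inc=1 << 30)
recovered = [cdr.tick(bit) for bit in [0, 1, 1, 0, 1, 0, 0, 1] * 4]
```

In an FPGA the same structure reduces to an adder, a comparator and a small up/down correction, which is what makes an all-digital CDR attractive for the cost and power reasons given above.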

• 27
Automatic test system of the back-end card for the JUNO experiment

S. Hang1, 2, Y. Yang1, B. Clerbaux1 and P.-A. Petitjean1
1. IIHE(ULB), Université Libre de Bruxelles, Brussels, Belgium
2. Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China

The Jiangmen Underground Neutrino Observatory (JUNO) is a medium-baseline neutrino experiment under construction in China, with the goal of determining the neutrino mass hierarchy. The JUNO electronics readout system consists of an underwater front-end electronics system and an outside-water back-end electronics system; the two parts are connected by 100-meter Ethernet cables and power cables.
The back-end card (BEC) is the part of the JUNO electronics readout system that links the underwater boxes to the trigger system and transmits the system clock and trigger signals. Each BEC connects to 48 underwater boxes, and in total around 150 BECs are needed. It is essential to verify the physical-layer links before the real connection to the underwater system is made. Our goal is therefore to build an automatic test system to check the physical link performance.
The test system is based on a custom-designed FPGA board; in order to keep the design general, we use only JTAG as the interface to the PC. The system can generate and check different data patterns at different speeds for 96 channels simultaneously, and the test results of 1024 consecutive clock cycles are automatically uploaded to the PC periodically. We describe the setup of the automatic test system of the BEC and present the latest test results.
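The generate-and-check scheme above follows the usual pattern of serial-link testing. As a minimal software analogue, the sketch below uses a standard PRBS7 sequence and counts bit errors; this is our own example, not the actual patterns or FPGA logic of the custom test system:

```python
# Minimal pattern-generate-and-check sketch (illustrative; the real BEC test
# runs 96 channels in FPGA hardware). PRBS7 is the standard x^7 + x^6 + 1
# maximal-length sequence with period 127.
import itertools

def prbs7(seed: int = 0x7F):
    """Yield bits of the x^7 + x^6 + 1 pseudo-random binary sequence."""
    state = seed
    while True:
        bit = ((state >> 6) ^ (state >> 5)) & 1   # taps at bits 7 and 6
        state = ((state << 1) | bit) & 0x7F
        yield bit

def check_link(tx_bits, rx_bits) -> int:
    """Return the number of bit errors between transmitted and received bits."""
    return sum(t != r for t, r in zip(tx_bits, rx_bits))

tx = list(itertools.islice(prbs7(), 1024))  # 1024 clock cycles, as in the test
assert check_link(tx, tx) == 0              # an error-free link scores zero
```

The receiver side of such a test regenerates the same sequence locally and compares bit by bit, so a single counter per channel suffices to report link quality.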

Speaker: Mr PETITJEAN, Pierre-Alexandre (IIHE(ULB), Université Libre de Bruxelles, Brussels, Belgium)
• 28
Data acquisition system in Run0 of the J-PARC E16 experiment

The J-PARC E16 experiment aims to investigate the origin of hadron mass through the systematic measurement of the spectral change of vector mesons in nuclei. The experiment is performed with a new spectrometer at the high-momentum beam line, a newly constructed beam line at the J-PARC Hadron Experimental Facility. A 30 GeV proton beam with an intensity of $1\times10^{10}$ protons per pulse (2-second spill per 5.2-second cycle) irradiates thin targets to produce vector mesons. We focus on the detection of the $e^{+}e^{-}$ pairs from the decay of $\rho/\omega/\phi$ mesons. The spectrometer is composed of silicon strip detectors (SSD), GEM trackers (GTR), hadron blind detectors (HBD), and lead-glass calorimeters (LG), which are placed in a dipole magnet. These detectors are modularized into 26 modules and the number of readout channels is more than 110,000 in total. To achieve good background discrimination in the offline analysis, waveforms of the readout channels are recorded using analog memory ASICs: APV25 for SSD, GTR and HBD, and DRS4 for LG. The data acquisition system is designed to cope with a data size of more than 660 MB/spill, where the trigger request rate is expected to be 1-2 kHz under an interaction rate of 10-20 MHz.
Construction of about 1/4 of the 26 spectrometer modules (Run0-a) has been completed and we are waiting for the beam commissioning in February 2020. This contribution introduces the DAQ system of E16 and reports the performance obtained in the commissioning.

Speaker: TAKAHASHI, Tomonori (RIKEN)
• 29
Performance of the DHH readout system for the Belle-II pixel detector

The SuperKEKB accelerator in Tsukuba, Japan has been providing e$^+$e$^-$ beams for the Belle-II experiment for about one year. In order to deal with the targeted peak luminosity, forty times higher than the one recorded at Belle, a pixel detector based on DEPFET technology has been installed. It features a long integration time of 20 µs, resulting in an expected data rate of 20 GB/s at a maximum occupancy of 3%. In order to deal with this high amount of data, the data handling hub (DHH) has been developed; it contains all necessary functionality for the control and readout of the detector. In this paper we describe the architecture and features of the DHH system. Furthermore, we show the key performance characteristics after one year of operation.

Speakers: LEVIT, Dmytro (Technische Universitaet Muenchen (DE)) , HUBER, Stefan (Technische Universitaet Muenchen (DE))
• 30
Methods in Quench Detection for Advanced Superconductive Systems

Several types of superconducting devices require quench detection systems to operate safely. Detection has to happen in real time to avoid damage. Superconducting magnets for particle accelerators are one example of such devices. The quench, that is, the resistive runaway resulting from the transition from the superconducting to the normal-conducting state, must be detected as soon as possible to avoid thermal and voltage-related damage to the magnet. In this work, we discuss advanced implementations of low-latency systems deployed to deal with this critical problem for both Low and High Temperature Superconductive (LTS and HTS) devices. The diverse challenges posed by LTS- and HTS-based apparatus demand concurrent systems utilizing sensor arrays and data fusion when early quench detection with a reduced false detection rate is required. System implementations utilizing the latest developments in hardware and in signal processing algorithms engineered for fast quench detection, developed at Lawrence Berkeley National Laboratory (LBNL), are shown. Experimental results from measurements carried out with both HTS and LTS superconducting magnets and undulators are analyzed. These results illustrate existing system capabilities and limitations in properly operating and detecting a quench. Enhanced implementations under development, utilizing current detection hardware with the aid of machine-learning-based algorithms, are explored. The capability of these algorithms to minimize erroneous quench detection while potentially improving detection time is discussed as the next logical step for this kind of system.
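As a point of reference for the detection problem above, the classic baseline is a voltage threshold combined with a validation time, so that short inductive spikes do not trip the detector. The sketch below is a generic textbook scheme with invented parameters, not the LBNL implementation (which adds sensor arrays, data fusion and ML on top):

```python
# Generic threshold-plus-validation-time quench detector (illustrative).
# A quench is declared only when the resistive voltage stays above the
# threshold for `validation_samples` consecutive samples, rejecting spikes.

def detect_quench(voltages, threshold_v: float, validation_samples: int):
    """Return the sample index at which a quench is declared, or None."""
    over = 0
    for i, v in enumerate(voltages):
        over = over + 1 if v > threshold_v else 0
        if over >= validation_samples:
            return i
    return None

# A 2-sample spike is rejected; a sustained rise trips the detector.
trace = [0.0, 0.3, 0.3, 0.0, 0.1, 0.25, 0.3, 0.4, 0.5]
hit = detect_quench(trace, threshold_v=0.2, validation_samples=4)
```

The trade-off the abstract addresses is visible even here: a longer validation window lowers the false-detection rate but adds latency, which is exactly what smarter algorithms try to improve on.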

Speaker: TURQUETI, Marcos (Lawrence Berkeley National Laboratory)
• 31
PandABox: Real-time Processing Platform for Scanning and Feedback applications

Synchrotron scanning techniques, particularly continuous scans, require a high level of synchronization between the data acquisition and motion systems (optics or sample). The purpose of the PandABox system [1] is to address multi-technique scanning and feedback applications. The initial objective driving the project was to provide a real-time, multi-channel encoder processing system to deliver synchronous triggers. The first resulting system, based on a multi-purpose platform, was constructed to address the FLYSCAN technique, but the modularity of the platform has also been demonstrated as a real-time beam-intensity-attenuation controller for synchrotron experiments, as a Current-Injection-Efficiency-and-Lifetime (CIEL) measurement system, and as a derivative product for readout electronics. This flexible electronic system embeds an industrial board with a powerful Xilinx Zynq 7030 SoC (System on Chip) [2], associated with peripheral modules: an FMC slot, SFP modules, TTL and LVDS I/Os and a removable encoder. The framework, called PandABlocks [3], comprises firmware and software with the FPGA logic, a TCP server, a webserver, boot sources and a root filesystem. All embedded functionalities are run-time configurable and rewireable via multiplexed data and control buses. The project was developed in collaboration between Synchrotron SOLEIL and Diamond Light Source to provide an open solution from the hardware level up to control-system integration (TANGO or EPICS). This paper details the capabilities of the platform in terms of hardware performance, framework adaptability and application status.
References
[1] S. Zhang et al., “PandABox: A Multipurpose Platform for Multi-technique Scanning and Feedback”, ICALEPCS 2017.
[2] http://zedboard.org/
[3] G. Christian et al., “PandABlocks: a flexible framework for Zynq7000-based SoC configuration”, ICALEPCS 2019.

Speaker: ABIVEN, Yves-Marie
• 32
Diagnostic data integration using deep neural networks for real-time plasma analysis

Recent advances in acquisition equipment are providing experiments with growing numbers of precise yet affordable sensors. At the same time, improved computational power, coming from new hardware resources (GPU, FPGA, TPU), has become available at relatively low cost. This led us to explore the possibility of completely renewing the acquisition chain of a fusion experiment, where many high-rate sources of data, coming from different diagnostics, can be combined in a wide framework of algorithms.
While adding new data sources from different diagnostics enriches our knowledge of the physics, the dimensions of the overall model grow, making the relations among variables more and more opaque. A new approach to the integration of such heterogeneous diagnostics, based on deep learning techniques, can ease this problem by performing operations such as feature extraction, compression, denoising, recovery of corrupted or missing data and even diagnostic-to-diagnostic mapping.
However, to ensure real-time signal analysis for fusion experiments, these algorithmic techniques must be adapted to run on well-suited hardware. In particular, it is shown that, by quantizing the neuron transfer functions, such models can be turned into embedded firmware. This firmware, which approximates the deep generative model with a set of simple operations, maps well onto the simple logic units that are abundant in FPGAs. This is the key factor that permits the use of affordable hardware with complex deep neural topologies operated in real time.
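The quantization step mentioned above can be illustrated with a toy example: replacing a neuron's transfer function by a fixed-point lookup table, the kind of structure that maps directly onto FPGA logic and block RAM. This is our own sketch; the bit widths, the tanh choice and the input range are arbitrary:

```python
# Toy quantization of a neuron transfer function (illustrative only).
# tanh over a bounded range is precomputed into an integer lookup table
# in Q-format fixed point, as one might store it in FPGA block RAM.
import math

FRAC_BITS = 8                 # fractional bits of the Q-format
SCALE = 1 << FRAC_BITS        # 1.0 in fixed point

# LUT covering inputs in [-4.0, 4.0); tanh saturates outside this range.
LUT = [round(math.tanh(i / SCALE) * SCALE) for i in range(-4 * SCALE, 4 * SCALE)]

def tanh_q(x_fixed: int) -> int:
    """Quantized tanh: fixed-point in, fixed-point out, via table lookup."""
    idx = min(max(x_fixed, -4 * SCALE), 4 * SCALE - 1) + 4 * SCALE
    return LUT[idx]

# The quantized activation tracks the floating-point one to LUT precision.
err = abs(tanh_q(SCALE) / SCALE - math.tanh(1.0))
assert err < 1 / SCALE
```

Each activation then costs one clamp and one memory read instead of transcendental arithmetic, which is what lets such networks run on "affordable hardware" in real time.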

Speaker: RIGONI GAROLA, Andrea (Consorzio RFX)
• 33
Application of heterogeneous computing techniques for image-based hot spot detection

Improving image processing systems is key to plasma diagnostics. The operating conditions of ITER and future machines require changing the role of such systems from monitoring and archiving for offline post-processing to real-time processing for machine control. Another purpose of these systems is machine protection. A relevant application of vision diagnostics is wall and divertor temperature monitoring with hot spot detection. However, algorithms for hot spot detection are computationally complex. To achieve real-time performance at the required time resolution for all these experiments, evaluation and validation of the newest technologies is vital.
This work applies heterogeneous computing techniques based on OpenCL to the real-time hot spot detection problem. OpenCL improves portability, evaluation and validation in the development phase, and it also helps to execute each part of the algorithm on the platform best suited for it. The proposed solution balances the computational load between an FPGA, a GPU, and a CPU. The algorithm has been adapted, partitioned and optimized to take advantage of the particularities of each platform.

Speaker: Mr COSTA, Victor (Instrumentation and Applied Acoustic Research Group)
• 34
Integration of control, investment protection and safety in the ITER Neutral Beam Test Facility

The ITER Neutral Beam Test Facility (NBTF) is an extensive R&D programme currently under advanced construction at Consorzio RFX, Padova (Italy), with the aim of developing and testing the technology required by the ITER neutral beam injectors. The NBTF uses a two-step approach that involves the implementation of two experimental devices: the first, SPIDER, for the study of the ion source and the second, MITICA, for the development of the full-size HNB prototype. SPIDER is currently in operation, whereas MITICA is being constructed and most electrical and mechanical components have been already installed.
The operation of each experiment will be managed through three instrumentation and control (I&C) systems, independent of each other, referred to as the Control and Data Acquisition System (CODAS) to manage conventional control, the Central Interlocks System (CIS) to protect the investment, and the Central Safety System (CSS) responsible for occupational safety and environmental protection. Following a common principle in system engineering, it is required that system reliability increases significantly from CODAS to CIS to CSS, whereas complexity decreases drastically.
The paper will first discuss the operation of the SPIDER I&C systems, including the advantages and disadvantages of the SPIDER realization. It will then describe the status of the MITICA I&C systems, with a focus on the system evolution from SPIDER to MITICA, triggered by both additional requirements and technology evolution. Finally, the paper will discuss the interaction of the MITICA I&C systems in the overall operation of the experiment.

• 35
The FragmentatiOn Of Target Experiment (FOOT) and its DAQ system

Particle therapy uses proton or $^{12}$C beams for the treatment of deep-seated solid tumors. Due to the features of the energy deposition of charged particles, a small amount of dose is released to the healthy tissue in the beam entrance region, while the maximum of the dose is released to the tumor at the end of the beam range, in the Bragg peak region. Dose deposition is dominated by electromagnetic interactions, but nuclear interactions between the beam and patient tissues inducing fragmentation processes must be carefully taken into account. The FOOT experiment (FragmentatiOn Of Target) is a funded project designed to study these processes. The detector includes a beam monitor, a magnetic spectrometer based on silicon pixel and strip detectors, a TOF and ΔE scintillating detector and a crystal calorimeter. The experiment is planned as a 'table-top', relocatable experiment in order to cope with the small dimensions of the experimental halls and the different beams available at the CNAO, LNS, GSI and HIT treatment centers, where data taking is foreseen in the near future (2020-2021). The experiment goals, the detector and the distributed data acquisition system will be presented. The latter is a fully distributed, real-time system, made of PCs, SoC-FPGA boards and dedicated electronic boards, which allows DAQ rates of order 1 kHz, with minimal dead time and with full control of the data quality.

Speaker: BIONDI, Silvia (Universita e INFN, Bologna (IT))
• 36
Characterization of Zynq-Based Data Acquisition Architecture for the 129,024-Channel UHR PET Scanner Dedicated to Human Brain Imaging

The Ultra-High-Resolution (UHR) positron emission tomography (PET) scanner is the latest LabPET II-based device being designed at Université de Sherbrooke for imaging the human brain. This new scanner uses the already established LabPET II technology capable of sub-millimetric spatial resolution in preclinical imaging. The UHR architecture implements 129,024 detection channels. The purpose of this paper is to assess the maximum data acquisition rate achievable by the second version of the Embedded Signal Processing Unit (ESPUV2) of the UHR architecture, handling 2304 of the 129,024 channels. To achieve this goal, we built and characterized a test PCB implementing the features of the UHR DAQ. This prototype ESPUV2 reads the data generated from 768 channels and transfers them to an acquisition computer by an Ethernet link. During the preliminary testing, the data throughput reached 49.61 MB/s for continuously generated data and a mean value of 44.94 MB/s for randomly generated data.

Speaker: SAMSON, Arnaud (Université de Sherbrooke)
• 37

As radiation detector arrays in nuclear physics applications become larger and physically more separated, the time synchronization and trigger distribution between many channels of detector readout electronics becomes more challenging. Clocks and triggers are traditionally distributed through dedicated cabling, but newer methods such as the IEEE 1588 Precision Time Protocol and White Rabbit allow clock synchronization through the exchange of timing messages over Ethernet.
Consequently, we report here the use of White Rabbit in a new detector readout module, the Pixie-Net XL. The White Rabbit core, data capture from multiple digitizing channels, and subsequent pulse processing for pulse height and constant fraction timing are implemented in a Kintex 7 FPGA. The detector data records include White Rabbit time stamps and are transmitted to storage through the White Rabbit core's gigabit Ethernet data path or a slower diagnostic/control link using an embedded Zynq processor. The performance is characterized by time-of-flight style measurements with radiation from coincident gamma emitters and by time correlation of high energy background events from cosmic showers in detectors separated by longer distances. Software for the Zynq controller can implement "software triggering", for example to limit recording of data to events where a minimum number of channels from multiple modules detect radiation at the same time.
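The "software triggering" idea described above amounts to a multiplicity cut on timestamped events. The sketch below is our own minimal illustration; the event fields and the windowing policy are invented and do not describe the Pixie-Net XL implementation:

```python
# Illustrative software coincidence trigger on timestamped detector hits.
# Keep only groups where at least `min_channels` distinct channels fire
# within a coincidence window (timestamps in nanoseconds, sorted).

def coincidence_filter(events, window_ns: int, min_channels: int):
    """events: iterable of (timestamp_ns, channel), sorted by timestamp.
    Yields lists of events that satisfy the multiplicity requirement."""
    group = []
    for ts, ch in events:
        # Drop hits that have fallen out of the coincidence window.
        group = [(t, c) for t, c in group if ts - t <= window_ns]
        group.append((ts, ch))
        if len({c for _, c in group}) >= min_channels:
            yield list(group)
            group = []

hits = [(100, 0), (120, 1), (5000, 2), (5010, 0), (5015, 3)]
triggers = list(coincidence_filter(hits, window_ns=50, min_channels=3))
```

Because White Rabbit gives all modules a common time base, such a filter can run after the fact on merged event streams from widely separated modules, rather than requiring a dedicated trigger distribution network.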

Speaker: HENNIG, Wolfgang (XIA LLC)
• 38
Performance of the ATLAS Hadronic Tile Calorimeter Demonstrator system for the Phase-II upgrade facing the High-Luminosity LHC era

The High-Luminosity Large Hadron Collider (HL-LHC) will have a peak luminosity of $5{-}10 \times 10^{34}$ cm$^{-2}$ s$^{-1}$, five to ten times higher than the design luminosity of the LHC. The ATLAS Tile Calorimeter (TileCal) is a sampling calorimeter with steel as absorber and plastic scintillators as active medium. Major replacements of TileCal's electronics will take place during the Phase-II upgrade for the HL-LHC in 2026, so that the system can cope with the increased radiation levels and out-of-time pileup of the HL-LHC. The Phase-II upgrade system will digitize and send the sampled calorimeter signals to the off-detector systems, where the events will be reconstructed and shipped to the first level of trigger, all at a 40 MHz rate. Consequently, the development of more complex trigger algorithms will be possible with the more precise calorimeter signals provided to the trigger system. The new design comprises state-of-the-art electronics with a redundant design and radiation-hard electronics to avoid single points of failure, in addition to multi-Gbps optical links for the high volume of data transmission and Field Programmable Gate Arrays (FPGAs) to drive the logic functions of the off- and on-detector electronics. A hybrid demonstrator prototype module containing the new calorimeter module electronics, but still compatible with the present system, was assembled and inserted in ATLAS in June 2019, so that the Phase-II system can be tested in real ATLAS conditions. We present the current status and results of the different tests done with the demonstrator system running in ATLAS.

Speaker: VALDES SANTURIO, Eduardo (Stockholm University (SE))
• 39
Redesign of the ATLAS Tile Calorimeter link Daughterboard for the HL-LHC

The Phase-2 ATLAS upgrade for the High-Luminosity Large Hadron Collider (HL-LHC) has motivated progressive redesigns of the ATLAS Tile Calorimeter (TileCal) read-out link and control board (Daughterboard). The Daughterboard (DB) communicates with the off-detector electronics via two 4.6 Gbps downlinks and two pairs of 9.6 Gbps uplinks. Configuration commands and LHC timing are received through the downlinks by two CERN radiation-hard GBTx ASICs and propagated through Ultrascale+ FPGAs to the front-end. Simultaneously, the FPGAs send continuous high-speed readout of digitized PMT samples, slow control and monitoring data through the uplinks. The design minimizes single points of failure and reduces sensitivity to SEUs and radiation damage by employing a double-redundant scheme, using Triple Mode Redundancy (TMR) and Xilinx Soft Error Mitigation (SEM) in the FPGAs, adopting Cyclic Redundancy Check (CRC) error verification in the uplinks and Forward Error Correction (FEC) in the downlinks. We present a DB redesign that brings an enhanced timing scheme and improved radiation tolerance, mitigating Single Event Latch-up (SEL) induced errors and implementing a more robust power-up and current monitoring scheme.

Speaker: VALDES SANTURIO, Eduardo (Stockholm University (SE))
• 40
Novel Digital Camera with the PCIe Interface

Digital cameras are commonly used for diagnostic purposes in large-scale physics experiments.
A typical image diagnostic system consists of an optical setup, digital camera, frame grabber, image processing CPU, and data analysis tool.
The standard architecture of the imaging system has a number of disadvantages. Data transmitted from a camera are buffered multiple times and must be converted between various protocols before they are finally transmitted to the host memory. Such an architecture makes the system quite complicated, limits its performance and, in consequence, increases its price. The limitations are even more critical for control or protection systems operating in real-time.

Modern megapixel cameras generate large data throughput, easily exceeding 10 Gb/s, which often requires some additional processing on the host side.
The optimal system architecture should assure low overhead and high performance of the data transmission and processing. It is particularly important during the processing of data streams from several imaging devices, which can be as high as several terabits per second.

A novel architecture of image acquisition and processing system based on the PCI Express interface was proposed to meet the requirements of real-time imaging systems applied in large-scale physics experiments. The architecture allows transferring an image stream directly from the camera to the data processing unit, which significantly decreases the overhead and improves performance.

Two different architectures will be presented, compared and discussed in the paper.

Speaker: MAKOWSKI, Dariusz (Lodz University of Technology, Department of Microelectronics and Computer Science)
• 41
Streaming Mode Data Acquisition at Jefferson Lab

Over the last decade advances in electronics, computing, and software have changed the assumptions upon which data acquisition system designs for nuclear physics experiments are based. This is true at Jefferson Lab as well as at other laboratories. Looking forward to future experiments that are in various stages of planning we must reevaluate the near- and long-term course that the development of readout systems for Jefferson Lab experiments will take. One technique that is growing in favor in the data acquisition community is streaming mode readout, whereby detectors are continuously read out in parallel streams of data. The goal of this document is to discuss the advantages and disadvantages of streaming mode readout in the context of Jefferson Lab experiments and make comparisons to the status quo pipelined trigger mode. Additionally, this document will define streaming mode terminology and discuss use cases.

Speaker: Dr GYURJYAN, Vardan (Jefferson Lab)
• 42
Cooling & Timing Tests of the ATLAS Fast Tracker VME boards

The Fast Tracker Processor (FTK) is an ATLAS trigger upgrade built for full-event, low-latency, high-rate tracking. The FTK core, made of 9U VME boards, performs the most demanding part of the processing. The Associative Memory board (AMBSLP) and the "Auxiliary card" (AUX), plugged into the front and back sides of the same VME slot, constitute the Processing Unit (PU), which finds tracks using hits from 8 layers of the inner detector. The PU works in pipeline with the "Second Stage Board" (SSB), which finds 12-layer tracks by adding extra hits to the identified tracks.
In the designed configuration, 16 PUs and 4 SSBs are installed in a VME crate. The high power consumption of the AMBSLP, AUX and SSB (respectively 250 W, 70 W and 160 W per board) required the development of a custom cooling system. Even though the consumption expected for each VME crate of the FTK system is high compared to a common VME setup, the 8 FTK core crates will use ~50 kW, which is just a fraction of the power and space needed for an equivalent CPU farm.
We report on the integration of 32 PUs and 8 SSBs inside the FTK system, on the infrastructure needed to run and cool them, and on the tests performed to verify the system processing rate and the temperature stability at a safe value.

Speaker: SOTTOCORNOLA, Simone (INFN - National Institute for Nuclear Physics)
• 43
Integration of a Network Synchronized Motion Tracking Camera into a Real-Time Positron Emission Tomography Data Acquisition System

We are building a high-resolution brain PET scanner, the Scanner Approaching in Vivo Autoradiographic Neuro Tomography (SAVANT). Based on the LabPET-II detector module with 1:1 crystal-to-APD coupling, it is expected to reach 1.35 mm³ volumetric resolution. At this resolution level, slight, involuntary head movements will blur the image and spoil the exquisite intrinsic performance of the device. An accurate head motion correction scheme is therefore paramount to maintain the target spatial resolution in vivo. We selected the NDI Polaris Vega motion capture camera to track head movement with high accuracy during the PET acquisition. The head motion information is then modeled inside the PET reconstruction algorithm to obtain images virtually free of motion blurring. With its Ethernet-only connection and IEEE 1588 PTP synchronization support, the camera needs a different integration scheme compared with previous pulse-triggered motion capture schemes.
This work describes the synchronization and integration of the NDI Polaris Vega camera on the hardware, firmware and software levels into the SAVANT’s real-time PET DAQ, distributed on Xilinx Zynq-7000 modules. The integration is validated by imaging an 18F phantom undergoing continuous motion through a Quasar respiratory phantom. PET images reconstructed without and with motion correction will be presented at the conference. The design can also bridge the camera to other types of scanners with minor changes, allowing the camera to be shared amongst different scanners installed in an imaging center.
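
As a rough illustration of how synchronized pose data can be applied, the sketch below linearly interpolates a camera pose sample at a PET event timestamp, assuming both clocks are already aligned (e.g. via IEEE 1588 PTP). The function name and the single-coordinate pose are illustrative, not the SAVANT implementation.

```python
from bisect import bisect_left

def interpolate_pose(pose_ts, poses, event_t):
    """Linearly interpolate a 1-D pose coordinate at an event timestamp.

    pose_ts : sorted camera pose timestamps (same clock as the DAQ)
    poses   : pose samples (a single coordinate for simplicity)
    event_t : PET coincidence-event timestamp
    """
    i = bisect_left(pose_ts, event_t)
    if i == 0:                      # event before first pose sample
        return poses[0]
    if i == len(pose_ts):           # event after last pose sample
        return poses[-1]
    t0, t1 = pose_ts[i - 1], pose_ts[i]
    w = (event_t - t0) / (t1 - t0)  # interpolation weight in [0, 1]
    return poses[i - 1] * (1 - w) + poses[i] * w
```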

Speaker: TÉTRAULT, Marc-André (Université de Sherbrooke)
• 44
Real-time Attenuation Control System of incident beam using hybrid pixel detector with specific architecture

The purpose of the attenuator system on a synchrotron beamline is to adapt the intensity of the incident beam to the experimental conditions, samples and detectors used. The SixS (Surfaces and interfaces x-ray Scattering) beamline is equipped with a unique fast attenuation system prototype developed at SOLEIL Synchrotron. It adjusts the beam intensity in real time as a function of the photon flux received by a 2D photon-counting detector (an XPAD in this case) by rapidly acting on the direct attenuators. The system guarantees operation of the detector within its linear range even during long continuous scans, protection against beam damage, and operation with varied beam intensity (e.g. X-ray reflectivity and surface x-ray scattering experiments).
The presented system performs a cyclic real-time estimation of the flux received by every pixel during acquisition of an image. The pixel matrix is searched for clusters (at least two pixels) that exceed the allowed count rate. The beam attenuation is then immediately changed accordingly. This process does not require reading a complete image; it takes advantage of the specific architecture of the XPAD detector family and exploits an extra counter bit which can be accessed during exposure. The real-time system is therefore fast compared to reading out an image and analyzing it offline, and its reaction rate can reach 250 Hz. The processing logic has been implemented on a new platform: the generic PandA hardware platform, developed in collaboration between SOLEIL Synchrotron and Diamond Light Source.
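
A minimal software model of the cluster check described above, assuming a plain 2D count-rate matrix; the real system inspects an extra counter bit during exposure rather than full images, and the function name and 4-connectivity rule are illustrative choices.

```python
def needs_attenuation(matrix, max_rate):
    """Return True if any cluster of at least two adjacent (4-connected)
    pixels exceeds the allowed count rate (counts/s).

    Illustrative software model of the on-detector check; the deployed
    logic works on partial counter readout, not complete images.
    """
    rows, cols = len(matrix), len(matrix[0])
    hot = [[matrix[r][c] > max_rate for c in range(cols)] for r in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if hot[r][c]:
                # only right and down neighbours need checking in a raster scan
                if (c + 1 < cols and hot[r][c + 1]) or (r + 1 < rows and hot[r + 1][c]):
                    return True
    return False
```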

• Tuesday, October 13
• Invited talk 02
• 45
Real-time Artificial Intelligence - with heterogeneous compute
Speaker: LIU, Miaoyuan (Purdue University (US))
• Oral presentations DAQ02
Convener: CALVET, Denis (Université Paris-Saclay (FR))
• 46
Accelerated Deep Reinforcement Learning for Fast Feedback of Beam Dynamics at KARA

Reinforcement learning algorithms have demonstrated remarkable performance in fast feedback control applications. At the KIT (Karlsruhe Institute of Technology) storage ring KARA (Karlsruhe Research Accelerator), a fast, adaptive longitudinal feedback system based on reinforcement learning is being considered to stabilize the complex longitudinal dynamics of the electron beam. In order to control the fast dynamics of the micro-bunching sub-structures, the feedback latency of the reinforcement learning inference must not exceed 100 µs. To support the necessary continuous learning mechanism and to provide fast feedback signals to the RF accelerator system, we developed an ultra-fast reinforcement learning framework running on a Xilinx ZYNQ US+ device. In this contribution, we present the reinforcement learning framework and the hardware development on the ZYNQ. As a proof of concept, two fast control examples have been considered and will be presented: the stabilization of a cartpole and of a pendulum. The first is based on Policy Gradient (PG) and the second on Deep Deterministic Policy Gradient (DDPG). A detailed comparison between the proposed framework and an implementation on a standard PC with TensorFlow will be presented. The comparison shows a dramatic improvement of both the inference and training time per step in continuous learning. The presented framework is designed to cover several reinforcement learning algorithms, including Deep Q-Network (DQN), Advantage Actor Critic (A2C) and Asynchronous Advantage Actor Critic (A3C).
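
As a much simplified illustration of the policy gradient method mentioned above, the sketch below runs a REINFORCE update on a two-armed bandit; the environment, learning rate and seed are illustrative and unrelated to the KARA feedback hardware.

```python
import math
import random

def reinforce_bandit(steps=500, lr=0.2, seed=1):
    """Minimal REINFORCE (policy gradient) on a two-armed bandit.

    Arm 0 pays reward 1, arm 1 pays 0; the softmax policy should learn
    to prefer arm 0. Purely illustrative of the PG update rule.
    """
    rng = random.Random(seed)
    theta = [0.0, 0.0]                       # policy logits
    for _ in range(steps):
        e = [math.exp(t) for t in theta]
        z = sum(e)
        p = [x / z for x in e]               # softmax action probabilities
        a = 0 if rng.random() < p[0] else 1  # sample an action
        r = 1.0 if a == 0 else 0.0           # environment reward
        for i in range(2):                   # theta += lr * r * grad log pi
            grad = (1.0 if i == a else 0.0) - p[i]
            theta[i] += lr * r * grad
    return theta

theta = reinforce_bandit()
```

After training, the logit of the rewarded arm dominates, i.e. the policy has learned to prefer it.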

Speaker: Mr WANG, Weijia
• 47
Study of using machine learning for level 1 trigger decision in JUNO experiment

The Jiangmen Underground Neutrino Observatory (JUNO) is a medium-baseline neutrino experiment under construction in China, whose main goal is to determine the neutrino mass hierarchy. A large liquid scintillator (LS) volume will detect the antineutrinos emitted by nuclear reactors. The LS detector is instrumented with around 20000 large photomultiplier tubes. The JUNO electronics readout system consists of two parts: (i) the underwater front-end electronics system and, after 100-meter-long Ethernet cables, (ii) the back-end electronics system. Hit information from each PMT is collected in a central trigger unit for the level 1 trigger decision; the current algorithm is vertex fitting.
Since neural networks with even a single hidden layer can approximate any Borel measurable function, it is interesting to study the performance of a machine learning method for the level 1 trigger decision. We treat the trigger decision as a classification problem and train a Multi-Layer Perceptron (MLP) model to distinguish events with an energy above a certain threshold from background, using JUNO software datasets which include 10K physics events with noise and 10K pure noise events. For events with energy above 50 keV, we achieve an accuracy higher than 99%. We managed to fit the trained model into a Kintex-7 FPGA. We will present the technical details of the neural network implementation and training, as well as its performance when applied to real hardware.
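
As a hedged sketch of this classification approach (the actual JUNO network architecture and input features are not spelled out in the abstract), the following trains a one-hidden-layer MLP on synthetic data whose label is whether a summed signal exceeds a threshold:

```python
import numpy as np

def train_trigger_mlp(n=200, hidden=8, lr=0.5, epochs=3000, seed=0):
    """Train a one-hidden-layer MLP to flag 'events' whose summed signal
    exceeds a threshold -- a toy stand-in for hit-pattern classification.
    Returns the training accuracy.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n, 4))        # toy hit features
    y = (X.sum(axis=1) > 2.0).astype(float)       # label: above threshold
    W1 = rng.normal(0, 0.5, (4, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                  # hidden activations
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
        dz2 = (p - y[:, None]) / n                # binary cross-entropy gradient
        dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
        dz1 = (dz2 @ W2.T) * (1 - h**2)           # backprop through tanh
        dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
    pred = (p[:, 0] > 0.5).astype(float)
    return (pred == y).mean()

accuracy = train_trigger_mlp()
```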

Speaker: Dr YANG, yifan (iihe)
• 48
The ReadoutCard userspace driver for the new ALICE O2 computing system

The ALICE (A Large Ion Collider Experiment) experiment focuses on the study of the quark-gluon plasma as a product of heavy-ion collisions at the CERN LHC (Large Hadron Collider). During the Long Shutdown 2 of the LHC in 2019-2020, a major upgrade is underway in order to cope with a hundredfold input data rate increase with peaks of up to 3.4 TB/s. This upgrade includes the new Online-Offline computing system called O2.

The O2 readout chain runs on commodity Linux servers equipped with custom PCIe FPGA-based readout cards: the PCIe v3 x16, Intel Arria 10-based CRU (Common Readout Unit) and the PCIe v2 x8, Xilinx Virtex-6-based CRORC (Common ReadOut Receiver Card). Access to the cards is provided through the O2 ReadoutCard userspace driver, which handles synchronisation and communication for DMA transfers, provides BAR access, and facilitates card configuration and monitoring. The ReadoutCard driver is the lowest-level interface to the readout cards within O2 and is in use by all central systems and detector teams of the ALICE experiment.

This communication presents the architecture of the driver and the suite of tools used for card configuration and monitoring. It also discusses its interaction with the adjacent subsystems within the O2 framework.

Speaker: ALEXOPOULOS, Kostas (CERN)
• 49
Intermodular Configuration Scrubbing of On-detector FPGAs in the Aerogel RICH at Belle II

On-detector digital electronics in High-Energy Physics experiments is increasingly implemented with SRAM-based FPGAs, due to their capabilities of reconfiguration, real-time processing and multi-gigabit data transfer. Radiation-induced single event upsets in the configuration memory hinder correct operation, since they may alter the programmed routing paths and logic functions. In most trigger and data acquisition systems, data from several front-end modules are concentrated into a single board, which then transmits the data to back-end electronics for acquisition and triggering. Since the front-end modules are identical, they host identical FPGAs, which are programmed with the same bitstream.

In this work, we present a novel scrubber capable of correcting radiation-induced soft errors in the configuration of SRAM-based FPGAs by majority voting across different modules. We show an application of this system to the read-out electronics of the Aerogel Ring Imaging CHerenkov (ARICH) subdetector of the Belle II experiment at the SuperKEKB collider of the KEK laboratory (Tsukuba, Japan). We discuss the architecture of the system and its implementation in a Virtex-5 LX50T FPGA on the concentrator board, for correcting the configuration of up to six Spartan-6 LX45 FPGAs on the pertaining front-end modules. We compare the performance, resource occupation and reliability of our solution to those of the Xilinx soft error mitigation controller. We discuss results from fault-injection and neutron irradiation tests at the TRIGA reactor of the Jožef Stefan Institute (Ljubljana, Slovenia). We also report field results on single event upsets from operation in Belle II and correlate them with beam conditions.
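
The intermodular majority voting can be sketched as follows, assuming an odd number of configuration frames read back from identically programmed FPGAs; this is a software model of the principle, not the Virtex-5 implementation:

```python
def majority_vote_frames(frames):
    """Bitwise majority vote across configuration frames read back from
    an odd number of identically programmed FPGAs.

    The voted frame is the corrected bitstream to write back to any
    device whose readback disagrees with it.
    """
    n = len(frames)
    voted = bytearray(len(frames[0]))
    for i in range(len(voted)):
        for bit in range(8):
            ones = sum((f[i] >> bit) & 1 for f in frames)
            if ones * 2 > n:          # majority of modules read a '1'
                voted[i] |= 1 << bit
    return bytes(voted)
```

A single upset bit in one module's readback is outvoted by the unaffected copies.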

Speaker: GIORDANO, Raffaele (Università di Napoli and INFN)
• 50
The WaveDAQ integrated Trigger and Data Acquisition System for the MEG II experiment

The MEG II experiment at the Paul Scherrer Institut aims at a sensitivity improvement on the $\mu\rightarrow e\gamma$ decay search by an order of magnitude with respect to the former MEG experiment, while keeping the same detection strategy. This is possible thanks to a higher segmentation of all detectors, which improves the resolutions and helps to cope with twice the muon stopping rate, mandatory to collect the required statistics in three years.

The new WaveDAQ integrated Trigger and DAQ system has been developed to fit the experiment upgrade, pushing further the performance of the DRS4 Switched Capacitor Digitizer and allowing GigaSample digitisation of all ~9000 channels in an ad-hoc crate system.
The system design is highly scalable and can cover the TDAQ needs of setups ranging from laboratory tests to medium-scale experiments like MEG II.

Each input channel can provide biasing for SiPM applications and a programmable high-bandwidth frontend, removing the need for additional hardware; the trigger generation is fully programmable on a multiple-FPGA architecture interconnected by low-latency links and capable of computing charge- and time-based selections in less than 500 ns.

We report the results of the full system demonstrator used in the fall 2019 engineering run with the upgraded MEG II detectors, which achieved the requested time resolution and stability and demonstrated the possibility of real-time event selection based on charge measurement.

Speaker: GALLI, Luca (INFN)
• 51
Vendor Product Information: NAT
• Micro Oral B
• Industrial Exhibit 13.10.
• Poster session B-01
• 52
A reconfigurable neural network ASIC for detector front-end data compression

The next generation of particle detectors will feature unprecedented readout rates and require optimizing lossy data compression and transmission from front-end application-specific integrated circuits (ASICs) to off-detector trigger processing logic. Typically, channel aggregation and thresholding are applied, removing information useful for particle reconstruction in the process. A new approach to this challenge is directly embedding machine learning (ML) algorithms in ASICs on the detector front-end to allow for intelligent data compression before transmission. We present the design and implementation of an algorithm optimized for the High-Granularity Endcap Calorimeter (HGCal) to be installed in the CMS experiment for the high-luminosity upgrade of the Large Hadron Collider, which requires 16x data compression at inference rates of 40 MHz. A neural-network (NN) autoencoder algorithm is trained to achieve optimal compression fidelity for physics reconstruction while respecting hardware constraints on internal parameter precision, computational (circuit) complexity, and area footprint. The autoencoder improves over non-ML algorithms in reconstructing low-energy signals in high-occupancy environments. Quantization-aware training is performed using qKeras, and the implementation demonstrates the effectiveness of a high-level synthesis-based design automation flow, using the hls4ml tool, for building design IP for ASICs. The encoding can be adapted to detector conditions and geometry by updating the trained weights. The design has been implemented in an LP CMOS 65 nm process. It occupies a total area of 2.5 mm², consumes 280 mW of power and is optimized to withstand approximately 200 MRad of ionizing radiation. The simulated energy consumption per inference is 7 nJ.

Speaker: TRAN, Nhan Viet (Fermi National Accelerator Lab. (US))
• 53
System Design and Prototyping for the CMS Level-1 Trigger at the High-Luminosity LHC

For the High-Luminosity LHC era, the trigger and data acquisition system of the Compact Muon Solenoid experiment will be entirely replaced. Novel design choices have been explored, including ATCA prototyping platforms with SoC controllers and newly available interconnect technologies with serial optical links with data rates up to 28 Gb/s. Trigger data analysis will be performed through sophisticated algorithms, including widespread use of Machine Learning, in large FPGAs, such as the Xilinx UltraScale family. The system will process over 50 Tb/s of detector data with an event rate of 750 kHz. The system design and prototyping will be described and examples of trigger algorithms reviewed.

Speaker: TAPPER, Alex (Imperial College London)
• 54
A 32 Channel Time-Tagging and Coincidence Detector Unit with High Data Throughput

Coincidence detectors and time-tagging units are used in many physics experiments. Some experiments in quantum optics, however, need many channels, which initiated this project; the board can be used for other experiments as well. The device is a 32-channel time-tagging and coincidence detector unit with about 8 ps timing resolution, which can process up to 200 million pulses per second and transfer the results to a host computer over a USB 3 SuperSpeed connection or a PCIe port. The device also includes individually programmable channel offsets, dead-time filters, a coincidence vector event filter and 8 programmable pattern trigger outputs. From the coincidence vectors, histograms with programmable duration are generated in real time. Result data can be stored in real time to a storage device. Most importantly, the device is properly characterized and the performance of each channel is measured individually.
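
A coincidence search over two sorted time-tag streams can be sketched with a two-pointer scan; the window logic below is a simplified software model, not the board's FPGA implementation:

```python
def find_coincidences(ts_a, ts_b, window):
    """Pair time tags from two sorted channels whose difference is
    within +/- window (same time unit as the tags).

    Two-pointer scan, O(len(ts_a) + len(ts_b)); each tag is used at
    most once.
    """
    pairs, i, j = [], 0, 0
    while i < len(ts_a) and j < len(ts_b):
        dt = ts_b[j] - ts_a[i]
        if abs(dt) <= window:
            pairs.append((ts_a[i], ts_b[j]))
            i += 1; j += 1
        elif dt < 0:      # channel B tag is too early: advance B
            j += 1
        else:             # channel A tag is too early: advance A
            i += 1
    return pairs
```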

Speaker: HIDVEGI, Attila (Stockholm University)
• 55
Study on the time characteristics of fast MPPC based on 40GS/s real-time oscilloscope

The time characteristics of the fast Multi-Pixel Photon Counter (MPPC) are important features that affect its applications. Fast timing detectors are widely used for picosecond-level time detection, with some fast MPPCs achieving about 100 ps@SPE. The time characteristics of a fast MPPC refer to the rise time (RT), fall time (FT) and electron transition time (TT) of the output signal. Fast MPPCs have an excellent time resolution, and the transition time spread (TTS) of an MPPC can reach tens of picoseconds. This paper studies the time characteristics of several typical fast MPPCs using a high-speed real-time oscilloscope. We measure the dependence of the rise time, fall time and transition time spread on the operating voltage, as well as the dependence of the transition time spread on the number of incident photons. The results show that the time characteristics of fast MPPCs are better than those of ordinary MPPCs, and that as the number of incident photons increases, the transition time spread of the fast MPPC decreases, finally reaching a stable value.

Speakers: YAN, Min (Institute of High Energy Physics) , Mr QIAN, Sen (Institute of High Energy Physics)
• 56
The Study of Test Method on Time Characteristic for ultra-fast Photodetectors

Abstract: Research on time-performance test methods for ultra-fast photodetectors provides an important experimental basis for the performance measurement of domestically produced ultra-fast photodetectors, and for beam experiments or detector performance calibration in specialized facilities. In this paper, based on MCP-PMTs with about 50 ps@SPE and fast SiPMs with about 100 ps@SPE, the time performance of these two ultra-fast photodetectors is tested with different methods, namely a common high-sampling-rate oscilloscope and the new high-precision MDPP-16 plug-in module. The test methods for ultra-fast photodetectors are then studied by comparing the results.

Speakers: QIANYU, Hu (Institute of High Energy Physics) , Mr QIAN, Sen (Institute of High Energy Physics)
• 57
Applications of Triggered Scaler Module for Accelerator Timing

During the operation of the J-PARC timing system since 2006, a few unexpected trigger-failure events occurred. It was difficult to find the faulty module among many suspect modules. In order to find such a module easily, a Yokogawa PLC-type triggered scaler module was developed. It can accept the start of the J-PARC Main Ring (MR) slow cycle (2.48 s/5.2 s) signal and the start of the rapid cycle (25 Hz) signal, which are generated by the J-PARC timing system. A scaler in the module counts the number of trigger pulses during the J-PARC slow cycle and stores the counts in an array. In 2018, the module was tested successfully and the results showed the expected performance.
Two applications were developed based on the triggered scaler module. The first, a Machine Protection System (MPS) detection system, succeeded in visualizing in which phase of a slow cycle an MPS event occurred. The second, an unexpected-trigger detection system, was developed to detect failure events and to identify the type of failure. Both applications were tested successfully during J-PARC beam operation in June 2020, and showed that the triggered scaler module can be applied to various timing-related applications in the future.
The details of the module and two associated applications will be described in the paper.
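
The scaler's bookkeeping can be modeled as counting triggers per slow cycle; the function below is an illustrative software sketch, not the PLC module firmware:

```python
def scaler_counts(slow_cycle_starts, trigger_times):
    """Count trigger pulses falling in each slow cycle.

    slow_cycle_starts : sorted start times of the slow cycles
    trigger_times     : times of received trigger pulses

    The resulting count array makes a missing or extra trigger in a
    particular cycle easy to spot, mimicking the triggered scaler.
    """
    counts = [0] * len(slow_cycle_starts)
    for t in trigger_times:
        # assign the trigger to the last cycle that started before it
        for k in range(len(slow_cycle_starts) - 1, -1, -1):
            if t >= slow_cycle_starts[k]:
                counts[k] += 1
                break
    return counts
```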

Speaker: Ms YANG, Min (SOKENDAI/ KEK)
• 58
The influence of the dead layer increase on total gamma spectrum response for a coaxial HPGe p-type detector

The increase of the dead layer thickness of HPGe detectors during their operating life is the main effect reducing the active volume and the full energy peak efficiency (FEPE) of the detector. In this work, we studied the influence of the dead layer thickness on the whole gamma spectrum response of an HPGe detector using Monte Carlo simulation. From the simulated relation between the FEPEs and the dead layer thicknesses, a dead layer thickness of 0.588 mm was derived, an increase of 28% over the nominal value of 0.46 mm from the manufacturer, after 4 years of operation. The multiple scattering region and the peak ROI decrease at different rates, and the valley region of forward-scattered photons increases quickly as the dead layer thickness increases. The influence of the volume-to-surface-area ratio (V/A) on the peak-to-total ratio (P/T) was evaluated. The trend of P/T vs. V/A was found to follow a second-order polynomial, and the energy dependence of the P/T was determined.

Speaker: THỊ HỒNG LOAN, Trương (University of Science, VNUHCM, Ho Chi Minh City, Vietnam)
• 59
Development of a real-time simulation RELAP/SUNDIALS code for the Dalat Nuclear Research Reactor

The Reactor Excursion and Leak Analysis Program (RELAP5) is a light water reactor transient analysis code for the simulation of hydraulic and thermal transients in both nuclear and nonnuclear systems. A new RELAP5-based real-time simulation code has been developed at the Center for Nuclear Technologies (CNT) for the Dalat Nuclear Research Reactor (DNRR) in Vietnam. However, a probable bug in the RELAP5 reactor kinetics (rkin) module causes a nonphysical reactor power curve when small calculation time steps are applied. This paper proposes a new code, called RELAP/SUNDIALS, which couples the software package SUNDIALS (SUite of Nonlinear and DIfferential/ALgebraic Equation Solvers) with RELAP5 to replace the original implementation of the Runge-Kutta method, an implicit and explicit iterative numerical scheme. Both RELAP5 and RELAP/SUNDIALS were compared against benchmark cases, and the latter proved superior in terms of calculation accuracy.

Speaker: Mr TRUONG, Tuan (Center for Nuclear Technologies)
• 60
Response function of a neutron dose-rate meter: unfolding evaluation and verification

In the present work, the response function of a neutron dose rate meter (Aloka TPS-451C) has been evaluated based on the characteristics of various neutron standard fields with 252Cf and 241Am-Be sources (established in previous works) and a self-developed unfolding code applying the Singular Value Decomposition (SVD) approach. Measurements of neutron ambient dose equivalent rates, Ḣ*(10), were performed with the neutron meter irradiated in various neutron standard fields with different spectra. The measured dose rates and the corresponding incident neutron spectra were both used as input data for determining the response function of the neutron dose rate meter with the SVD approach. To verify the feasibility of the SVD approach, it was applied to determine the response function of a well-known 8" Bonner sphere and compared with the published data; the results show good agreement. The SVD approach was then applied to evaluate the response function of the Aloka TPS-451C neutron meter, which has also been compared with that obtained for the same neutron meter in other work.
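
A minimal version of the SVD-based unfolding can be sketched as a least-squares solve for the response vector from known spectra and measured readings; the shapes, names and truncation threshold below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def unfold_response(spectra, readings, rcond=1e-6):
    """Estimate a detector response vector r (one value per energy bin)
    from readings M_i taken in fields with known spectra Phi_i, solving
    M = Phi r in the least-squares sense via SVD.

    numpy.linalg.pinv truncates singular values below rcond * s_max,
    the usual regularization of an ill-conditioned unfolding.
    """
    Phi = np.asarray(spectra, dtype=float)   # shape (n_fields, n_bins)
    M = np.asarray(readings, dtype=float)    # shape (n_fields,)
    return np.linalg.pinv(Phi, rcond=rcond) @ M

# usage: recover a known response from consistent synthetic data
r_true = np.array([1.0, 2.0, 3.0])
Phi = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [1.0, 1.0, 1.0]])
r_est = unfold_response(Phi, Phi @ r_true)
```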

Speaker: Dr LE, Ngoc Thiem (Physicist)
• 61
Optimization of PSD technique on charge integration ratio to improve neutron/gamma discrimination for EJ-276 plastic scintillation detector

The charge integration ratio (Qratio) method of the Pulse Shape Discrimination (PSD) technique has been widely used to discriminate between fast neutrons and gammas in organic scintillation detectors. Here Qratio is defined as the ratio of the tail integral to the total integral of the digitized pulse. In this method, the tail integration window depends strongly on the decay components of the fluorescence of the organic scintillation material and on the PMT time response.
In this work, we optimize the tail integration window, based on the decay components of the organic scintillation material and the PMT time response, to improve the Figure of Merit (FOM), a quantity characterizing the neutron/gamma separation. We study an EJ-276 plastic scintillator of (14×40×14) mm³, a commercial product of ELJEN Technology, which is known for its good performance in separating gamma and fast neutron signals on the basis of their timing characteristics. PMT pulse shapes recorded by a DRS-4 digitizer with a sampling rate of 2 GSPS are employed. We conduct an experiment with a Cf-252 radioisotope source, which emits fast neutrons and gammas. A comparison of charge integration ratios at energy thresholds from 100 keVee to 1000 keVee is carried out to evaluate the neutron/gamma discrimination.
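
The quantities being optimized can be sketched directly: Qratio as the tail-over-total integral of a pulse, and the FOM from the means and FWHMs of the neutron and gamma Qratio distributions (function names and the baseline handling are illustrative):

```python
def q_ratio(pulse, tail_start, baseline=0.0):
    """Charge integration ratio: tail integral over total integral of a
    digitized pulse (samples minus baseline).

    tail_start is the sample index where the tail window begins -- the
    parameter being optimized against the scintillator decay components.
    """
    s = [v - baseline for v in pulse]
    total = sum(s)
    tail = sum(s[tail_start:])
    return tail / total

def figure_of_merit(mu_n, mu_g, fwhm_n, fwhm_g):
    """FOM = peak separation over the sum of the FWHMs of the neutron
    and gamma Qratio distributions; larger means better separation."""
    return abs(mu_n - mu_g) / (fwhm_n + fwhm_g)
```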

Speaker: Dr VO, Hong Hai (University of Science, VNUHCM )
• 62
Study on impact of ENDF/B-VII.0 nuclear library uncertainty on the CERMET fueled ADS reactivity calculation using MCNP6 and WHISPER

In previous studies, the great potential of the accelerator-driven system (ADS) for transmuting transuranic elements from spent nuclear fuel was confirmed by calculation results obtained with Monte Carlo simulation codes. A fast neutron spectrum system showed a benefit for reducing minor actinide (MA) inventories in the fuel cycle. However, the reliability of the calculated data depends directly on the accuracy of the cross section library. The uncertainty of the evaluated nuclear data, especially in the high energy range, results in an inaccuracy in the reactivity estimation of the ADS. In this study, the impact of the ENDF/B-VII.0 nuclear library uncertainty on the reactivity prediction of the CERMET (ceramic metal matrix) fueled ADS was investigated using WHISPER. It was found that the inaccuracy of the ENDF/B-VII.0 library caused a large uncertainty (about 1100 pcm) in the calculated reactivity. From the sensitivity analyses obtained with the MCNP6 code, the uncertainties of the elastic scattering and absorption cross sections of 95Mo and the fission cross sections of 237Np, 239Pu, 241Am and 244Cm contribute most to the uncertainty of the result. Consequently, it is highly recommended that the accuracy of those isotopes' cross sections be improved to provide more reliable reactivity calculations for fast systems.

Speaker: Dr VU, Thanh Mai (VNU University of Science, Hanoi)
• 63
Safety Interlock System Design for Proton Therapy System

As one of the most complex pieces of medical equipment, a proton therapy system (PTS) must have a safety interlock system (SIS) which completely avoids unnecessary radiation dose to patients or personnel during operation. Since the proton beam moves at around two thirds of the speed of light, the safety interlock system should cut off the beam within a specified time to ensure that the overdose in each field is less than 0.25 Gy, according to the IEC 60601-2-64 standard.

The SIS for the PTS consists of three subsystems: patient safety interlock, personnel safety interlock and equipment safety interlock. The patient safety interlock is responsible for cutting the beam off during patient treatment. In order to quickly recover the PTS from an interlock state, the patient safety interlock adopts a hierarchical beam cut-off strategy according to the severity of the interlock. The personnel safety interlock is responsible for the safety management of operators and visitors and for access control of the radiation area. The equipment safety interlock is responsible for the protection of accelerator and beamline equipment. Both the equipment and personnel interlocks can trigger beam shut-off.

Hardware modules with signal transmission and logic realization are used to ensure a microsecond-level response of the patient interlock. The configuration of the safety interlock system follows the SIL3 standard: the controller and the field equipment adopt SIL3-rated hardware. In addition, the beam cut-off interlock chain adopts a redundant design, from the controller to the interlock equipment and the hardware circuits between them.

Speaker: FENG, Hansheng
• 64
Architecture Design of JUNO DAQ

The Jiangmen Underground Neutrino Observatory (JUNO) aims to determine the neutrino mass ordering, along with other physics goals. There are about 20000 20-inch and 25000 3-inch PMTs in the central and veto detectors. The DAQ is required to read out about 40 GB/s of full-waveform data from the large PMTs, with a 1 µs sampling window, 1 GHz sampling rate and 1 kHz trigger rate. The small PMTs require only time and charge information, read out by self-trigger. A new proposal to also read out the hit information of the large PMTs with self-trigger could be applied to supernova bursts and multi-messenger studies. Waveforms also need to be compressed for disk storage according to online event classification. This paper introduces these requirements, the solutions, and the architecture design of the DAQ system.

Speaker: LI, Fei (IHEP)
• 65
Real-time monitoring of operational data in the Belle II experiment

Belle II is a luminosity frontier experiment designed to search for physics beyond the standard model of particle physics, and successfully started physics data taking in March 2019.
We use seven sub-detectors to record the enormous numbers of events that are produced by the SuperKEKB accelerator.

There are many components to be monitored in our system for smooth data taking, such as the nodes of the online software trigger, readout PCs for each of the sub-systems, and the status of the network connections among the components.
In order to provide quick error recognition, we adopt the Elastic Stack as a central monitoring system, to which we put all the log outputs.
This monitoring tool allows us near real-time interactive visualisation of errors.

To achieve an unprecedented luminosity, the accelerator team continues its commissioning work, gradually increasing the instantaneous luminosity over the 2019 runs.
In this situation, we are building up our data acquisition system while facing many problems, for example (1) sudden death of data-taking processes, (2) instability of readout PCs, and (3) data-taking deadtime due to unexpectedly large event sizes.

We present the development status of our monitoring tool, which detects problems in our system and hence minimises the data-taking inefficiency.
The monitoring system is still being built up; we discuss the difficulties experienced during operation.

Speaker: KUNIGO, Takuto (KEK (IPNS))
• 66
Preliminary design of real-time plasma control system for CFETR

The China Fusion Engineering Test Reactor (CFETR), an important step toward realizing fusion energy, has entered the stage of engineering design and research. The preliminary design of a real-time plasma control system (PCS) for CFETR is proceeding in parallel. The design of the CFETR PCS comprises a real-time control framework, control algorithm integration, and two assistant tools, PCS-SDP (Software Development Platform) and PCS-VP (Verification Platform), for algorithm development and verification.
For the control framework hardware, a distributed cluster structure will be adopted and the operating system will be customized for real-time performance. The real-time control framework software, which takes the ITER RTF (Real Time Framework) as a reference, is divided into three layers following layered and modular design principles. PCS-SDP, a visual algorithm development platform, has completed its preliminary design, and a prototype has been built to validate the design and technical feasibility through control algorithm development for EAST. The validation process and results will be reported in this paper. PCS-VP is under preliminary design, taking the ITER PCSSP (Plasma Control System Simulation Platform) design as a reference. The PCSSP has been successfully adapted to EAST, and the plasma current and position controller has been integrated and tested with the GSevolve plasma response model, which accumulates experience for the development of the CFETR PCS-VP.

Keywords: real-time plasma control system; Control framework; PCS software development platform; PCS verification platform

Speaker: Dr YUAN, Qiping (Institute of Plasma Physics, Chinese Academy of Sciences)
• 67
Development of the second version of a 64-ch SiPM readout ASIC for TOP-PET with individual energy and timing digitizer

This paper presents the development of the second version of a 64-channel SiPM readout ASIC named DIET, which individually amplifies the input current and extracts and digitizes the energy and arrival time in each channel. Compared to the previous version, more control logic has been integrated on-chip, so fewer external control ports are required. Crosstalk between the analog and digital parts of the chip has also been suppressed by isolating the substrate with a deep n-well. The chip was fabricated in a 0.18 μm CMOS process with a die area of 3.4 × 3.4 mm², and its performance has been preliminarily evaluated. For the energy measurement, the INL and output noise were measured to be less than 0.6% and 4.3 LSB, respectively. For the time measurement, the bin size of the time digitizer was measured to be 28.7 ps and the jitter was less than 48 ps. More details of the chip design and test results will be given in this paper.

Speaker: Mr HAO, Jiajun (Tsinghua University)
• 68
Clock Trigger Node based on White Rabbit Technology for Distributed Trigger for Large Scale Neutron Detectors

The new generation of neutron instruments will use large-scale neutron detectors to complete experiments efficiently and quickly, which implies a large number of detector units, widely spread installation locations, and complex, flexible experiment schemes. Traditional centralized trigger and clock synchronization systems often use a star fan-out structure with dedicated links fanned out stage by stage, which is ill-suited to the flexible triggering needs of different neutron detectors (separately set time windows, switching between trigger sources) and to wide-range synchronization. In this paper, a distributed clock trigger node based on White Rabbit (WR) high-precision time synchronization technology is preliminarily designed; it realizes clock synchronization through network links and performs distributed autonomous triggering, and can also serve as the timing card of a PXIe chassis to further fan out the synchronous clock and trigger to each module in the chassis. An input trigger signal can be time-stamped by a clock trigger node and then broadcast to multiple clock trigger nodes at different locations through the WR network. These nodes process the received message at the same time, generate the trigger autonomously according to the trigger timestamp and the configured trigger delay, and fan it out to other measurement modules through the trigger bus of the chassis. Such a distributed clock trigger system balances flexibility, integration and universality.

Speaker: Ms LI, Jiawen (University of Science and Technology of China)
• 69
A Low-power Large-capacity Storage Method for Deep-sea In-situ Radiation Measurement

Deep-sea in-situ radiation detection equipment is designed to work on the deep seafloor for half a year. While running, the equipment generates a large volume of data, and it is powered by batteries, so its development faces the challenge of large-capacity, low-power storage. This paper proposes a low-power, large-capacity storage solution based on the combination of static random-access memory (SRAM) and Flash. The detected data are recorded in two 128 GB embedded multi-media card (eMMC) Flash devices. However, if the eMMC Flash ran continuously, its power consumption would reach several hundred milliwatts, shortening battery life. To reduce the power consumption, a storage method combining SRAM and Flash is designed: during every data recording cycle, data are first buffered in the SRAM chip and then written to the Flash chip just before the SRAM fills, so the Flash stays in sleep mode most of the time, waking only for data transfers. In addition, a low-power Flash-based FPGA is adopted as the control center of the circuit to further reduce power consumption. Preliminary results show that this storage solution reduces power consumption by more than 90% compared with running the Flash alone; the estimated total power consumption is as low as 6.6 mW. The proposed storage solution significantly reduces power consumption and satisfies the long-term operation requirements of the deep-sea in-situ equipment.
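The duty-cycling idea behind the SRAM-plus-Flash scheme can be sketched as a simple counting model; the function and its parameters are illustrative, not the authors' firmware:

```python
def count_flash_flushes(event_sizes, sram_capacity):
    """Model of the buffering scheme: events accumulate in SRAM and the
    Flash is woken only when the buffer is about to fill (illustrative)."""
    buffered = 0
    flushes = 0
    for size in event_sizes:
        if buffered + size > sram_capacity:
            flushes += 1       # wake Flash, write out the buffer, back to sleep
            buffered = 0
        buffered += size
    if buffered:
        flushes += 1           # final flush at the end of the recording cycle
    return flushes
```

Because the Flash draws its several hundred milliwatts only during these brief flushes and sleeps otherwise, the average power scales with the flush duty cycle, which is consistent with the reported reduction of more than 90%.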

Speaker: JIANG, Wei (Department of Modern Physics, University of Science and Technology of China)
• 70
Monitoring and control systems for a new branched proton beam line in the Hadron Experimental Facility at J-PARC

The Hadron Experimental Facility at the Japan Proton Accelerator Research Complex (J-PARC) is designed to provide high-intensity beams. A new branched proton beam line called the “B-line” has been constructed at J-PARC and will start operation in February 2020. A small fraction (<0.1%) of the primary beam is split off by a Lambertson magnet and directed into the B-line. Since the B-line intensity depends strongly on the beam profile at the Lambertson magnet, and must be kept below 3×10¹⁰ protons per spill, the beam monitors and the control system play a crucial role for the B-line.
New beam monitors have been installed and a control system has been developed for the operation of the B-line. In particular, the beam profile and beam loss around the Lambertson magnet are monitored to constrain the beam intensity going into the B-line. The beam profile and intensity are also monitored at a beam dump to verify the proper beam destination. When the control system detects an unexpectedly large beam intensity, the primary beam extraction is aborted within a few milliseconds, even during the extraction.
In this presentation, we will show the details of the control and monitoring system of the new proton beam line and report the first operation results.

Speaker: Dr MORINO, Yuhei (KEK, High Energy Accelerator Research Organization)
• 71
A Linear Digital Timing Measurement Method for In-Beam PET

Abstract—Accurate timing is required for annihilation-event calibration and coincidence in In-Beam positron emission tomography (In-Beam PET), which is the main means of in-situ monitoring in the Heavy-Ion Cancer Therapy Device (HICTD). To obtain high-resolution, low-cost time information, a linear digital timing measurement method for In-Beam PET, based on photomultiplier-tube readout of LYSO crystals, is presented. In this method, a field-programmable gate array (FPGA) based data acquisition unit (DAQU) analyzes the event signals from the detector, sampled by 14-bit analog-to-digital converters (ADCs) at a rate of 50 MHz. Each event is examined by the energy and shape discrimination module in the FPGA. Meanwhile, a stable digital trigger is generated synchronously with the valid event to launch the acquisition for the energy and time measurement, which avoids a traditional analog timing circuit. The actual timing is calculated by linear interpolation between the baseline and the initial sample using the digital signal processing (DSP) resources in the FPGA. Results show that the time resolution of this method is about 100 ps, which is useful and low-cost for the development of readout electronics for In-Beam PET.
Index Terms—Digital timing, Time stamp, In-Beam PET, FPGA, Linear interpolation
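The linear interpolation step can be written down compactly. The following is a sketch of the general threshold-crossing technique under the abstract's 50 MS/s sampling, not the authors' FPGA implementation:

```python
def crossing_time(samples, baseline, threshold, dt_ns=20.0):
    """Interpolated time (ns) at which the baseline-subtracted trace
    first crosses the threshold, assuming 20 ns (50 MS/s) sample bins."""
    trace = [s - baseline for s in samples]
    for i in range(1, len(trace)):
        if trace[i - 1] < threshold <= trace[i]:
            # Linear interpolation between the two samples bracketing the crossing.
            frac = (threshold - trace[i - 1]) / (trace[i] - trace[i - 1])
            return (i - 1 + frac) * dt_ns
    return None  # no crossing found
```

Interpolating between samples is what lets a 20 ns sampling grid yield timing at the 100 ps level, provided the leading edge is approximately linear over one sample.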

Speaker: Dr KE, Lingyun
• 72
Characterization of an In-Beam PET prototype for Heavy-Ion Cancer Therapy Device

The Heavy-Ion Cancer Therapy Device (HICTD), developed by the Institute of Modern Physics of the Chinese Academy of Sciences, has been equipped for clinical application. The positron emission tomography (PET) scanner located on the beam line, called In-Beam PET, is a key detector in the HICTD, employed for monitoring during heavy-ion therapy: β+ emitters such as $^{11}$C, $^{15}$O and $^{10}$C are generated in the irradiated tissue by nuclear reactions. Measuring the time, energy and position of this activity immediately after patient irradiation can provide information on the effective delivered dose. The prototype is a dual-head planar-type detector consisting of two detection units. Each unit is made up of a position-sensitive photomultiplier coupled to a square matrix of LYSO scintillating crystals of matching size (2×2×15 mm$^{3}$ pixel dimensions). Four signals from each detector are acquired through a dedicated readout electronics system that performs signal amplification, shaping, A/D conversion and area calculation in the digital signal processing domain. The In-Beam PET prototype has been characterized in the lab, and energy and time resolutions were measured using a $^{22}$Na radioactive source. The prototype was validated with ion beams of different energies at the HICTD beam line using PMMA phantoms. A good correlation between the position of the distal edge of the activity and the thickness of the beam range shifter was found along the axial direction.

Speaker: Dr YAN, junwei
• 73
Universal Readout Method for GAEA Gamma Spectrometer at CSNS Back-n White Neutron Source

The Back-n (back-streaming neutrons) facility at the China Spallation Neutron Source provides an excellent white neutron source for accurate measurements of nuclear data. At Back-n, the planned GAEA (gamma spectrometer with germanium array) spectrometer is designed for high-accuracy, high-efficiency measurements of neutron-induced cross sections. Since six kinds of detectors are used in GAEA, it would be hard to design and maintain separate readout electronics for each detector type. Therefore, a universal readout method is proposed in this paper, consisting mainly of readout modules and a PXIe readout platform. To accommodate the different signal characteristics of the various detectors, each readout module is composed of a dedicated mezzanine board for digitization and a universal carrier board for data processing and readout. The detector signal is digitized directly on the mezzanine board with a dedicated analog gain; by changing the value of the feedback resistance, the gain of the conditioning circuit can be adapted to each detector. The digitized data are transmitted to the field-programmable gate array (FPGA) on the 3U carrier board for processing with real-time algorithms. Finally, the triggered data are transmitted to the chassis controller via the PCIe bus. Besides high-speed data transmission, the PXIe chassis provides an excellent platform for a digital trigger system. With full waveform digitization, the readout method presented here provides concise, high-performance readout electronics for GAEA.

Speaker: XIE, Likun (State Key Laboratory of Particle Detection and Electronics, Department of Modern Physics, University of Science and Technology of China, Hefei 230026, China)
• 74
Development of a 16-channel low-power, highly integrated readout ASIC for a time projection chamber in 65 nm CMOS

This paper presents the development of a 16-channel, low-power, highly integrated readout ASIC for the time projection chamber (TPC) of the CEPC (Circular Electron Positron Collider) experiment. To achieve high spatial and momentum resolution, about one million readout channels are required, with waveform sampling at 8-10 bits and 10-40 MS/s. Power consumption is therefore critical; it was addressed in our design by using a 65 nm CMOS process and circuit structures with fewer analog circuits. A 16-channel TPC readout ASIC has been developed based on our previously successful prototype chips. Each channel consists of an analog front end (AFE) and a free-running SAR ADC. The power consumption of the AFE and the ADC core were simulated to be 1.4 mW and 1 mW per channel, respectively. The ENC of the AFE was optimized to 263 e + 3.7 e/pF × Cin. The input dynamic range is up to 120 fC with an integral non-linearity of less than 0.37%. A digital trapezoidal filter is also under development as a pre-filter for data compression. More detailed design and test results will be given in the paper.

Speaker: Mr LIU, wei (Tsinghua University)
• 75
The data taking network for COMET Phase-I

COMET, an experiment to search for μ-e conversion, is under construction in the J-PARC Hadron hall. The experiment will be carried out using a two-staged approach, Phase-I and Phase-II.
The data taking system of the Phase-I experiment is being developed based on common network technology. It needs two kinds of networks. One is the front-end network, which bundles around twenty front-end devices, each with a 1 Gb optical network port, and sends their data to a front-end computer. The other is the back-end network, which collects all event fragments from the front-end computers using 10 Gb network equipment. We used a low-cost 1G/10G optical network switch for the front-end network, and direct connections between 10G optical ports for the back-end network: the back-end PC has ten 10G network ports, each connected to a front-end PC's port without a network switch. We evaluated data-taking performance with event building on the two networks. The event-building throughput of the front-end network reached 400 MiB/s, and that of the back-end network reached 1 GiB/s. This means we could reduce the construction cost of the data-taking network with this structure without a decrease in performance.
We will report the structure of the data taking system of COMET Phase-I and the performance of its networks.

Speaker: Dr IGARASHI, Youichi (KEK)
• 76
Pulse Shape Discrimination of Bulk and Very Bulk Events within HPGe Detectors

CDEX (China Dark matter EXperiment) currently deploys ~10 kg of pPCGe (p-type point-contact germanium) detectors in CJPL (China Jinping Underground Laboratory), aiming to detect rare events such as dark matter interactions and 0νββ (neutrinoless double beta decay). The discrimination of bulk and very bulk events is essential for lowering the analysis threshold of the dark matter search. Very bulk events are generated near the p+ point-contact surface of the pPCGe, usually from radioactive materials in the electronic devices. Because of the different charge-collection locations, bulk and very bulk events have different pulse shapes. This paper uses the CCM (Charge Comparison Method) to discriminate bulk from very bulk events. Two kinds of ADCs (analog-to-digital converters) are used for pulse digitization and their results are compared. Preliminary results show FOMs (figures of merit) of 1.34 ± 0.55 and 0.49 ± 0.12 using a 1 GSPS 12-bit and a 100 MSPS 14-bit ADC, respectively.
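The two quantities named in the abstract have standard definitions, sketched below; the gate lengths and peak parameters are placeholders, not the values used by CDEX:

```python
def charge_ratio(samples, baseline, short_gate, long_gate):
    """Charge Comparison Method discriminator: ratio of the charge in a
    short integration gate to the charge in a long one."""
    short = sum(s - baseline for s in samples[:short_gate])
    long_ = sum(s - baseline for s in samples[:long_gate])
    return short / long_

def figure_of_merit(mu1, fwhm1, mu2, fwhm2):
    """FOM for two Gaussian-like populations of the discriminator:
    peak separation divided by the sum of the FWHMs."""
    return abs(mu1 - mu2) / (fwhm1 + fwhm2)
```

On this definition, the reported FOMs (1.34 vs 0.49) quantify how much better the 1 GSPS 12-bit digitizer separates the two event classes than the 100 MSPS 14-bit one.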

Speaker: Dr ZHU, Jinfu
• 77
A Prototype Design of the Readout Electronics for MRPC Detectors of CEE at HIRFL

In the Cooling storage ring External-target Experiment (CEE) at the Heavy Ion Research Facility in Lanzhou (HIRFL), Multi-gap Resistive Plate Chamber (MRPC) detectors are used for high-precision time-of-flight (TOF) measurement. In particular, for the internal TOF (iTOF) detectors, designed to identify particles scattered after the ion beam hits the target, very high time precision is required: detector and electronics together must stay below 50 ps in total, which requires a readout-electronics time precision of better than 10 ps RMS. Based on our previous work on the readout electronics of the start-time (T0) detector in CEE, we designed this prototype to verify our scheme. It contains a front-end electronics (FEE) module using the NINO ASIC to achieve low-noise, high-precision discrimination, and a time digitization module (TDM) with double-chain tapped-delay-line (TDL) time-to-digital converters (TDCs) implemented in a Xilinx Artix-7 FPGA. By constraining the TDLs at appropriate positions and choosing a 320 MHz main clock according to the propagation speed of the hit signal along the TDL, we optimize the differential non-linearity (DNL) of the TDL. Test results show that the TDM achieves a time precision of 7.1 ps RMS. We also tested the performance of the whole system with different input signal charges, and the results indicate that the prototype meets expectations.

Speaker: LU, Jiaming
• 78
Multi-Spectrometer Compatible Data Acquisition System for Deep-sea In-situ Radiation Measurement

Radiation measurement in the deep-sea environment helps to improve understanding of deep-sea sedimentation, geological structure, radiation monitoring and more. However, reports on radiation in the natural deep-sea environment are rare, especially on in-situ gamma measurements. Two gamma spectrometers and two neutron spectrometers are going to be used for deep-sea in-situ radiation measurement. Aimed at different scientific goals, the four spectrometers can perform multiple kinds of measurements, including the gamma-ray dose, the gamma-ray energy spectrum, the energy spectrum of characteristic gamma rays from the neutron capture process, and the time spectrum of thermal neutrons. In this paper, a multi-spectrometer-compatible data acquisition (DAQ) system is proposed. The DAQ provides readout, visualization, data-processing algorithms, and file input/output. To achieve high reusability and low coupling, the DAQ system is based on the Model-View-ViewModel architecture. With a customized application-layer protocol, the DAQ is fully compatible with the four different spectrometers. Besides supporting real-time control, the DAQ system also provides a way to program the spectrometers to execute an unmanned mission, so that they can load a timetable from a built-in flash memory and run automatically.

Speaker: Mr YUAN, Jianhui (University of Science and Technology of China)
• 79
Peak-finding for Longitudinal Beam Halo Readout System

Mu2e is a next-generation search for muon-to-electron conversion that will improve the current physics sensitivity by four orders of magnitude. The experiment needs a high-quality pulsed proton beam with an extinction (the ratio of out-of-time to in-time protons) better than 10⁻¹⁰. To achieve this performance, the beam extinction in the Delivery Ring must be better than 10⁻⁵. To monitor the longitudinal beam halo, we have developed a system based on quartz Cherenkov counters and photomultiplier tubes, read out with commercial electronics based on the MicroTCA standard. We use ADC and FPGA hardware from VadaTech, configured to run a peak-finding algorithm on the signals. The output data are sent to the data acquisition system via PCIe followed by a Gigabit Ethernet link. The system is being tested in the Fermilab Recycler Ring. The results of the peak-finding algorithm and the firmware design will be presented.
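As an illustration of the kind of peak-finding such firmware performs on digitized PMT signals, here is a generic threshold-plus-local-maximum sketch; it is not the VadaTech firmware algorithm:

```python
def find_peaks(samples, threshold):
    """Return (index, height) of local maxima above threshold in an ADC trace."""
    peaks = []
    for i in range(1, len(samples) - 1):
        above = samples[i] > threshold
        local_max = samples[i - 1] < samples[i] >= samples[i + 1]
        if above and local_max:
            peaks.append((i, samples[i]))
    return peaks
```

Emitting only peak positions and heights, rather than full waveforms, keeps the data volume sent over the PCIe and Ethernet links small.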

Speaker: Dr NGUYEN, Minh Truong (UC Davis)
• 80
Archiver System Management for Belle II Detector Operation

The Belle II experiment is a high-energy physics experiment using the SuperKEKB electron-positron collider. Belle II started the so-called "phase 3" data taking in March 2019.
With Belle II data, high-precision measurements of rare decays and of CP violation in heavy quarks and leptons will be made to probe New Physics. In this presentation, we describe the archiver system used to monitor the Belle II detector and discuss how we maintain the system that archives the monitoring process variables of the subdetectors. The archiver currently saves about 26 thousand variables, including the temperature of various subdetector components, the status of water leak sensors, high-voltage power supply status, data acquisition status, and the luminosity of the colliding beams. For stable data taking, it is essential to collect and archive these variables.
To ensure that variables are sent from the subdetectors and other systems, we regularly check variable availability and consistency. To secure the operation of the archiver itself, we created a dedicated variable;
by checking that this variable is being stored, we verify that the archiver is working. To cope with possible hardware failure, we prepared a backup archiver that is synchronized with the main archiver.

Speaker: KIM, Yongkyu (Yonsei Univ)
• 81
Prototype Design for Upgrading Central Safety and Interlock System on EAST

The national project of the Experimental Advanced Superconducting Tokamak (EAST) is an important part of China's fusion development strategy. The safety and interlock system (SIS) is in charge of protecting personnel and the tokamak investment from potential accidents. The SIS consists of two horizontal layers, one being the central safety and interlock system (CSIS) and the other the various plant SISs, connected through different networks. With the development of the physics experiments, the CSIS has come close to its limits of expandability. For instance, the former PLC-based CSIS only offers digital I/O channels with a 1 ms scan time, and the event response time is around 4 ms. Moreover, the primitive green-and-red-circle dashboard and intermittent event-record display problems led the central control team to decide to upgrade the CSIS. The new CSIS is divided into two architectures according to timing requirements. The slow architecture is based on PLCs, while the fast one executes its functions within 100 µs using COTS hardware based on FPGAs. The new CSIS preserves the previous PLC systems as the slow controller and introduces NI CompactRIO in the fast architecture. The fast controller processes not only digital signals but also analog variables, which require some computation. The supervisory HMI of the CSIS has been redesigned, and variables from the slow controller and the fast architecture are collected through OPC UA. In this paper, we present the EAST machine and human protection mechanism and the architecture of the upgraded CSIS.

Speaker: Mr ZHANG, Zuchao (ASIPP)
• 82
MPV - Parallel Readout Architecture for the VME data acquisition system

In the field of nuclear physics, DAQ systems based on legacy parallel buses (CAMAC and VME) are still widely used. At the RIKEN RIBF, an accelerator facility in Japan dedicated to RI beams, about 20 VME crates are used for experiments. Recent DAQ front-end architectures are based on MicroTCA (uTCA) or on direct attachment of a Gigabit Ethernet (GbE) link, and some experiments at RIBF use both new uTCA/GbE-based DAQ systems and legacy VME-based systems. In most cases, uTCA/GbE-based systems are much faster than VME. Our motivation is to speed up the VME DAQ to be comparable with uTCA/GbE systems. Recently, we successfully introduced the MPV system, which has a parallel readout architecture compatible with the VME bus and dramatically improves the readout speed of VME-based DAQ systems. MPV consists of MOCO, the MPV backplane (a parallelized VME backplane), and the MPV controller. MOCO is a small VME controller attached to each VME module (i.e., one VME controller per VME module); it was reported in a poster contribution at the 2016 IEEE Real Time Conference. Here, we newly developed the MPV backplane and the MPV controller, which merge data from the MOCOs and send them to the server through GbE. In this contribution, the new MPV architecture and the data throughput of the system will be shown.

• 83
Improvement of EAST Data Acquisition Configuration Management

The data acquisition console is the component of the EAST data acquisition system that provides unified configuration management for data acquisition and data storage. The console was developed many years ago, and with the increasing number of data acquisition nodes and the emergence of new control nodes, its configuration management function has become inadequate: it cannot manage the new control nodes, and no one except the DAQ administrator can query the signal information. It is therefore planned to update the configuration management function of the data acquisition console. The updated console consists of a software component based on LabVIEW and a website based on PHP. The software component is mainly used by the DAQ administrator, with functions for setting configuration parameters, querying the configuration, publishing configuration parameters, monitoring the data acquisition nodes, and controlling the data acquisition process. The website allows all experimenters to query signal information and data-acquisition device parameters. The updated console is now being designed and is expected to be tested in the next EAST campaign.

Speakers: Dr CHEN, Ying (ASIPP) , Dr LI, Shi, Dr WANG, Feng
• 84
Validation of 2-D In-Vivo Dosimetry Based on a Correlation Ratio Algorithm

Verification before and during treatment is much needed in radiotherapy. In 9 out of 17 reported radiotherapy cases, errors were not detected by pre-treatment verification but were caught by verification during treatment, i.e. real-time measurement, namely In-Vivo Dosimetry (IVD). One of the most extensive developments in IVD uses the EPID. Several algorithms already exist for this; one considered quite successful is the correlation ratio, but more validation is needed before it can be used in daily clinical practice. We therefore validated it in several cases using a homogeneous phantom, an inhomogeneous phantom, and a patient simulation with the IMRT technique. We used an a-Si 1000 EPID, the ECLIPSE 13.6 Treatment Planning System (TPS), and a Varian Unique linac with 6 MV photons. Every image was taken in continuous acquisition mode. We calculated the inline and crossline profiles via the Full Width at Half Maximum (FWHM), beam symmetry and beam flatness, and also compared the images using the Gamma Index at 2%/2mm and 3%/3mm. The results show that the correlation ratio needs some improvement: the different x and y resolutions of the a-Si 1000 EPID make the crossline profile match better than the inline profile. The gamma index shows that the algorithm passes above 90% at 3%/3mm and below 90% at 2%/2mm, and that it is inaccurate in profile fields without the horn section. Further development is needed before this algorithm can be used in daily clinical activities.
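The gamma-index comparison referred to above follows the standard global definition. Below is a minimal 1-D sketch of that test (the EPID comparison itself is 2-D), offered as an illustration rather than the authors' implementation:

```python
import math

def gamma_index(ref, ev, spacing_mm, dose_tol=0.03, dta_mm=3.0):
    """Global 1-D gamma: for each reference point, the minimum combined
    dose-difference / distance-to-agreement metric over evaluated points."""
    d_max = max(ref)  # global dose normalization
    gammas = []
    for i, d_ref in enumerate(ref):
        best = min(
            math.hypot((d_ev - d_ref) / (dose_tol * d_max),
                       (j - i) * spacing_mm / dta_mm)
            for j, d_ev in enumerate(ev)
        )
        gammas.append(best)
    return gammas

def pass_rate(gammas):
    """Fraction of points with gamma <= 1 (the quoted 90% criterion)."""
    return sum(g <= 1.0 for g in gammas) / len(gammas)
```

With dose_tol=0.03 and dta_mm=3.0 this is the 3%/3mm test; setting them to 0.02 and 2.0 gives the stricter 2%/2mm test, where the reported pass rate drops below 90%.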

• 85
Particle Tracking with Space Charge Effect using Graphics Processing Unit

Particle tracking simulations including the space charge effect are very important for high-intensity proton rings. Since they involve not only the Hamiltonian mechanics of single particles but also constructing charge densities and solving Poisson equations to obtain the electromagnetic field due to the space charge, they are extremely time-consuming. We have newly developed a particle tracking code that runs on Graphics Processing Units (GPUs). GPUs have a strong capacity for parallel processing, so the single-particle mechanics can be computed very fast through complete parallelization. Our new code also includes the space charge effect, which requires constructing charge densities, a step that cannot be completely parallelized. For the charge density construction we use "shared memory", which can be accessed very fast from each thread; its use is another advantage of GPU computing. As a result of this development, our particle tracking including the space charge effect is more than 10 times faster than our conventional CPU code. We will present many practical issues that we faced during the development, as well as the details of the code.
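The charge-density construction mentioned above is, in serial form, a weighted histogram of particle positions. The 1-D cloud-in-cell sketch below illustrates the technique (it is not the authors' code); on a GPU, each particle maps to a thread and the two additions become atomic adds into shared memory:

```python
def deposit_charge(positions, n_cells, cell_size, q=1.0):
    """1-D cloud-in-cell deposition: each particle's charge is split
    linearly between the two nearest grid cells (periodic grid)."""
    rho = [0.0] * n_cells
    for x in positions:
        cell = int(x / cell_size)
        frac = x / cell_size - cell      # fractional position inside the cell
        rho[cell % n_cells] += q * (1.0 - frac)
        rho[(cell + 1) % n_cells] += q * frac
    return rho
```

Concurrent threads updating the same cell are exactly why this step "cannot be completely parallelized", and why fast shared-memory accumulation matters.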

Speaker: KURIMOTO, Yoshinori (High Energy Accelerator Research Organization)
• 86
Data Management and Network Platform for CFETR

The China Fusion Engineering Test Reactor (CFETR) is the next-generation facility in China's roadmap to the realization of fusion energy and is at the concept design stage. It is estimated that about ten million 3D components, 2D diagrams, and project and design documents will be generated over the entire lifecycle of CFETR. To ensure the consistency of CFETR design and integration, it is important to provide a collaborative work environment for all participants, and how to store and manage the CFETR design documents is a big challenge for the CFETR IT team. This paper describes the design and implementation of the data management and collaboration network platform for CFETR. The design focuses on the following major technical issues: data system structure, user management, data security mechanisms, disaster recovery and backup, and the network environment. CFETR members come from different research organizations and play different roles in the project, so it is essential to provide user authorization and authentication to prevent data abuse. To protect data from destructive forces and unwanted actions, several data security mechanisms, including access control, action monitoring and auditing, data encryption and disaster backup, have been deployed in the CFETR data management system. The collaborative design network environment is based on a WAN Optimization Controller network device: its built-in Virtual Private Network protocol creates a safe, encrypted connection over a less secure network, and a WAN optimization application will provide data deduplication and secure data transfer.

Speaker: Dr SUN, Xiaoyang (Institute of Plasma Physics Chinese Academy of Sciences)
• 87
Gigabit Ethernet Daisy-Chain on FPGA for COMET Read-out Electronics

The COMET experiment at J-PARC aims to search for the neutrinoless transition of a muon to an electron (µ-e conversion). Since µ-e conversion is strictly forbidden in the Standard Model, its observation would be clear evidence of new physics. We have developed a readout electronics board called ROESTI for the COMET straw tube tracker, which we plan to install in the gas manifold of the detector. The number of vacuum feedthroughs needs to be reduced because of space constraints and cost. To decrease the number of vacuum feedthroughs drastically, we added a Gigabit Ethernet daisy-chain function to the FPGA on the ROESTI. The data transfer throughput of the daisy chain is required to be close to the maximum rate of Gigabit Ethernet, and a parameter-setting function is also required. To satisfy these requirements, two SiTCP cores are implemented in the FPGA of the ROESTI. We connected up to five ROESTIs in a daisy chain and measured the data transfer speed over TCP/IP. We found that the daisy-chain function worked properly and that the throughput satisfied our requirement. All ROESTIs in the daisy chain can receive setting parameters directly from the data acquisition PC over UDP/IP. In this presentation, we will describe the details of the implementation of the daisy-chain function and the results of performance measurements of throughput, stability and data loss rate.

• 88
Noise Analysis of Current Sensitive Preamplifiers and Influence on Energy Resolution of NaI:Tl Detector System

Current Sensitive Preamplifiers (CSPs) are widely used in the front-end electronics of data acquisition (DAQ) systems. Optimizing the energy resolution requires front-end electronics with low noise and suitable gain, bandwidth, etc. We define a noise normalization factor that combines the noise and gain of the CSP with the integration gate of the pulse, and set up a linear model between the energy resolution and this noise normalization factor. To verify the model, different CSPs based on commercial operational amplifiers (op-amps) were designed and coupled to photomultiplier tubes (PMTs) on a NaI:Tl detector. The experimental results show that the energy resolution and the noise normalization factor are linearly related, with a goodness of fit R² of ~0.89. The paper can serve as a reference for optimizing energy resolution through CSP designs based on commercial op-amps.
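The linear model and its R² can be checked with an ordinary least-squares fit; a self-contained sketch, where the x-values would be the noise normalization factors and the y-values the measured energy resolutions:

```python
def linear_fit_r2(x, y):
    """Least-squares fit y = a*x + b, returning (a, b, R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx                 # slope
    b = my - a * mx               # intercept
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot
```

An R² near 1 indicates that the noise normalization factor alone accounts for most of the observed variation in energy resolution, as in the reported ~0.89.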

Speaker: Mr JIANG, Lin
• 89
Iterative Retina for barrel-shape tracker and high magnetic field and high track multiplicity

Real-time track reconstruction in high energy physics experiments at colliders running at high luminosity is very challenging for trigger systems. Running on FPGAs, the Retina algorithm has so far been used in high energy physics experiments mainly for parallel plane detection layers without a magnetic field, such as the LHCb VELO detector. However, another interesting geometry is a tracking detector with a barrel shape, made of multiple concentric cylindrical layers immersed in a strong magnetic field. Our research aims to adapt the Retina algorithm to a tracker with cylindrical geometry and a complex physical environment, and to implement it on FPGA devices. We first studied the Retina algorithm performance in terms of pT and initial-angle resolution with a simple simulation, then with a full GEANT4 simulation. For such a detector geometry and in the presence of high track multiplicity, we have recently developed a procedure where the Retina algorithm is run iteratively, identifying regions of interest in the track parameter space in a first pass and then reconstructing the track parameters with high accuracy in a second pass. In simulation, both the efficiency and the purity of our procedure are above 90%. Given these promising results, we are now implementing it on an FPGA. A Xilinx Kintex-7 evaluation board was chosen as the firmware platform. In this contribution we will report on the expectations from the simulations and the preliminary performance results (efficiency, resolution, resource usage and latency) on the hardware platform.
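The two-pass idea (a coarse scan locating a region of interest, then a fine scan inside it) can be sketched with a toy Retina-like response. The straight-track model, layer positions, and kernel widths here are invented for illustration and do not reproduce the COMET/LHCb geometries of the abstract.

```python
# Toy sketch of the iterative (coarse-then-fine) Retina scan.
import math

LAYERS = [-1.5, -0.5, 0.5, 1.5]               # toy layer positions
TRUE = (0.30, 0.10)                            # true (slope, intercept) of a toy track
HITS = [(r, TRUE[0] * r + TRUE[1]) for r in LAYERS]

def retina_response(slope, intercept, sigma):
    """Sum of Gaussian weights of the hit residuals for one parameter cell."""
    s = 0.0
    for r, y in HITS:
        res = y - (slope * r + intercept)
        s += math.exp(-res * res / (2 * sigma * sigma))
    return s

def grid_scan(s_range, i_range, steps, sigma):
    """Return the (slope, intercept) cell with the highest response."""
    best, best_cell = -1.0, None
    for a in range(steps):
        slope = s_range[0] + (s_range[1] - s_range[0]) * a / (steps - 1)
        for b in range(steps):
            inter = i_range[0] + (i_range[1] - i_range[0]) * b / (steps - 1)
            resp = retina_response(slope, inter, sigma)
            if resp > best:
                best, best_cell = resp, (slope, inter)
    return best_cell

# Pass 1: coarse scan of the full parameter space with a wide kernel.
coarse = grid_scan((-1.0, 1.0), (-1.0, 1.0), 11, sigma=0.2)
# Pass 2: fine scan restricted to the region of interest around the coarse maximum.
fine = grid_scan((coarse[0] - 0.2, coarse[0] + 0.2),
                 (coarse[1] - 0.2, coarse[1] + 0.2), 41, sigma=0.05)
```

The fine pass recovers the true parameters to within one fine grid step, while scanning far fewer cells than a single fine grid over the full space would require.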

Speaker: Mr DENG, Wendi (ULB & CCNU)
• 90
A Low-power VCSEL Driving Structure Implemented in a 4 x 14-Gbps VCSEL Array Driver

We present the design and test results of a four-channel 4 x 14-Gbps VCSEL array driver ASIC with a novel output structure, fabricated in a 65 nm CMOS technology. The driver die measures 2000 µm × 1230 µm and contains four independent channels. Each channel receives 200 mVp-p differential CML signals and outputs a 2 mA bias current and a 5 mA modulation current at 14 Gbps/ch.

The analog core of each channel consists of a limiting amplifier (LA) and an output driver. The LA design includes a 3-bit RC degeneration equalizer and a four-stage pre-driver with a shared-inductor structure. The innovative output driver adopts a PMOS current mirror as the load of the differential MOS pair at the output stage without bandwidth degradation. The modulated current of the internal branch can now also contribute to the output branch, so the output modulation current can swing from -Imod to +Imod instead of 0 to Imod as in the conventional design. The modulation efficiency at the output driver stage is thereby effectively improved at the structural level.

The driver ASIC has been taped out and tested after being integrated with a four-channel VCSEL array. Wide-open 14 Gbps optical eyes have been captured, and a BER < 1E-12 has been achieved in the full-channel optical test.

Speaker: GUO, Di (Central China Normal University)
• 91
Short Course: Achieving High Performance Using Low Tech - Topics and Examples on FPGA and ASIC TDC

Silicon technologies have been progressing constantly, but intrinsically they will never satisfy the performance demands of HEP projects, so currently available technologies at reasonable cost can always be considered low tech. In our daily design jobs, however, there is room for designers to apply careful thinking that eliminates unnecessary elements or operations, which can dramatically reduce silicon area, power consumption and cost for given requirements. Such improvements may enable functionalities that would otherwise be impossible.
FPGA TDC development was based on forerunners developed in ASIC, and TDCs implemented on FPGA platforms were initially considered problematic. The limitation of the circuits available in FPGAs forced TDC designers to choose unadjusted delay lines with uneven bin widths while calibrating the bin widths using measurement data, an approach that deviates from its ASIC counterparts. The approach of using unadjusted delay lines succeeded on FPGA platforms after the efforts of hundreds of developers worldwide over the past decade. Today the approach has been transplanted back into ASIC TDCs, and the finest bin width allowed by a given technology has been achieved along with a significant power reduction.
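Calibrating uneven delay-line bins from measurement data is commonly done with a code-density test: uniformly distributed hits populate each tap in proportion to its width. A minimal sketch, with invented bin widths for a hypothetical 8-tap line:

```python
# Code-density calibration sketch for a delay-line TDC with uneven bins.
import random

random.seed(1)

# Uneven raw bin widths (ps) of a hypothetical 8-tap delay line.
WIDTHS = [22.0, 8.0, 35.0, 15.0, 30.0, 10.0, 25.0, 18.0]
TOTAL = sum(WIDTHS)          # span covered by the delay line

# Cumulative bin edges, used to turn a hit time into a raw tap code.
edges = []
acc = 0.0
for w in WIDTHS:
    acc += w
    edges.append(acc)

def raw_code(t):
    """Tap index in which a hit arriving at time t lands."""
    for i, e in enumerate(edges):
        if t < e:
            return i
    return len(edges) - 1

# Code-density test: histogram many uniformly distributed hits.
counts = [0] * len(WIDTHS)
N = 200_000
for _ in range(N):
    counts[raw_code(random.uniform(0.0, TOTAL))] += 1

# Each bin's population is proportional to its width; build the
# calibration LUT mapping each raw code to its measured bin centre (ps).
measured = [c / N * TOTAL for c in counts]
lut = []
acc = 0.0
for w in measured:
    lut.append(acc + w / 2.0)
    acc += w
```

The recovered widths converge to the true ones as the hit count grows, which is why this calibration needs only measurement data, not adjusted delay elements.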
In this presentation, I will discuss several topics in digital designs including silicon resource saving, power consumption reduction, timing uncertainty confinement, etc. using examples seen in TDC and other digitizers. The discussion will cover several practical building blocks such as digital delay lines, adjustable gated ring oscillators, coarse time counter and data readout circuits.

Speaker: WU, Jinyuan (Fermi National Accelerator Lab. (US))
• Wednesday, October 14
• Invited talk 03
• 92
ML at the Edge for Ultra High Rate Detectors
Speaker: CORBEIL THERRIEN, Audrey (Sherbrooke)
• Oral presentations RTA01
Convener: Mr AMAUDRUZ, Pierre-Andre (TRIUMF (CA))
• 93
Keras2c: A simple library for converting Keras neural networks to real-time friendly C

With the growth of machine learning models and neural networks in measurement and control systems comes the need to deploy these models in a way that is compatible with existing systems. Existing options for deploying neural networks either introduce very high latency, require expensive and time-consuming work to integrate into existing code bases, or support only a very limited subset of model types. We have therefore developed a new method, called Keras2c, a simple library for converting Keras/TensorFlow neural network models into real-time-compatible C code. It supports a wide range of Keras layer and model types, including multidimensional convolutions and recurrent layers, as well as multi-input/output models and shared layers. Keras2c re-implements the core components of Keras/TensorFlow required for predictive forward passes through neural networks in pure C, relying only on standard library functions. The core functionality consists of only ~1200 lines of code, making it extremely lightweight and easy to integrate into existing codebases. Keras2c has been successfully tested in experiments and is currently in use in the plasma control system at the DIII-D National Fusion Facility at General Atomics in San Diego.
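The core idea — re-implementing only the predictive forward pass with standard-library constructs, no framework runtime — can be sketched for a dense layer. This is written in Python for readability; the actual library emits the equivalent pure C. The network and its weights are made up for illustration.

```python
# Sketch of a framework-free forward pass, the approach Keras2c takes
# (in pure C) for deployment: only loops, arrays and libm-style math.
import math

def dense(x, w, b, act):
    """One fully connected layer: y = act(W.x + b)."""
    y = []
    for row, bias in zip(w, b):
        s = sum(wi * xi for wi, xi in zip(row, x)) + bias
        y.append(act(s))
    return y

def relu(v):
    return max(0.0, v)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# A toy 2-4-1 network with fixed, made-up weights (in a real conversion
# these constants would be exported from the trained Keras model).
W1 = [[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4], [0.7, 0.2]]
B1 = [0.0, 0.1, -0.1, 0.05]
W2 = [[0.6, -0.4, 0.2, 0.3]]
B2 = [-0.1]

def forward(x):
    h = dense(x, W1, B1, relu)
    return dense(h, W2, B2, sigmoid)

out = forward([1.0, 0.5])
```

Because the whole inference path is plain loops over constant arrays, latency is deterministic and the code drops into an existing control-system codebase without framework dependencies.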

Speaker: CONLIN, Rory (Princeton University)
• 94
NOAA image data acquisition to determine soil moisture in Arequipa-Peru

In recent years, irrigation systems have been built on dry areas in Majes, Arequipa. Over time, the irrigation water forms moist zones in lower-lying areas, which can have positive or negative consequences. Therefore, it is important to know in advance where the water from a new irrigation system will appear. The limited availability of real-time satellite image data is still a hindrance to some applications. Data from NOAA's environmental satellites are available fee- and license-free; in order to receive them, users must obtain the necessary equipment. In this work we present a satellite data acquisition system built from an RTL-SDR receiver, a 137-138 MHz turnstile antenna with balun, and the WXtoImg and MATLAB software.
We have designed a turnstile crossed-dipole antenna with balun which is easy to manufacture at low cost. The antenna parameter measurements show very good correspondence with those obtained by simulation. The RTL-SDR RTL2832U receiver, combined with our antenna and the WXtoImg software, forms a system for recording, decoding, editing and displaying Automatic Picture Transmission signals. The results show that the satellite image receptions are sufficiently clear and descriptive for further analysis.

Speaker: CHILO, JOSE (University of Gavle)
• 95
A data processing system for balloon-borne telescopes

The JEM-EUSO Collaboration aims at studying Ultra High Energy Cosmic Rays (UHECR) from space. In order to reach this goal, a series of pathfinder missions has been developed to prove the observation principle and to raise the technological readiness level of the instrument. Among these, the EUSO-SPB2 (Extreme Universe Space Observatory on a Super Pressure Balloon, mission two) foresees the launch of two telescopes on an ultra-long duration balloon. One is a fluorescence telescope designed to detect UHECR via the UV fluorescence emission of the showers in the atmosphere. The other one measures direct Cherenkov light emission from lower energy cosmic rays and other optical backgrounds for cosmogenic tau neutrino detection.
In this contribution, we describe the data processing system which has been designed to perform data management and instrument control for the two telescopes. It is a complex system which controls the front-end electronics, tags events with the arrival time and payload position through a GPS system, provides signals for time synchronization of the event, and measures the live and dead time of the telescope. In addition, the data processing system manages the mass memory for data storage, performs housekeeping monitoring, and controls the power-on and power-off sequences.
The target flight duration for the NASA super pressure program is 100 days; consequently, the requirements on the electronics and the data handling are quite severe. The system operates at high altitude in an unpressurised environment, which introduces a technological challenge for heat dissipation.

Speaker: SCOTTI, Valentina
• 96
FPGA network cards developments for the supernovae trigger of the DUNE dual-phase 10 kton detector module

The DUNE 10 kton dual-phase module is foreseen to acquire data in continuous streaming via 10 Gbit/s links from 240 uTCA crates hosting 2400 front-end AMC digitizers of 64 readout channels each. The transmitted data flow is compressed in the AMCs with a lossless algorithm. Triggers are defined online by the back-end DAQ system, which continuously analyses the data stream from the uTCA crates to define trigger primitives; these are then considered globally at the level of the detector in order to define physics triggers implying permanent storage of the corresponding data. The most challenging task is to trigger on supernova neutrino bursts. This task implies continuously analyzing and storing the data in a sliding window of at least 10 seconds, looking for the low energy depositions of single neutrino interactions. This talk will focus on the R&D on a back-end architecture based on FPGA network cards, connected to the front-end units, capable of performing data decompression and the search for trigger primitives in real time.
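The sliding-window burst trigger described above can be sketched as a ring buffer of primitive timestamps: fire when the count inside the last 10 s exceeds a threshold well above steady background. The window length is from the abstract; the threshold and rates are invented toy values, not DUNE parameters.

```python
# Toy sketch of a supernova-burst trigger on a 10 s sliding window.
from collections import deque

WINDOW = 10.0      # seconds, from the abstract
THRESHOLD = 8      # primitives in the window needed to fire (invented)

class SlidingWindowTrigger:
    def __init__(self):
        self.times = deque()

    def add_primitive(self, t):
        """Record a trigger primitive at time t; return True on trigger."""
        self.times.append(t)
        # Drop primitives that have slid out of the window.
        while self.times and self.times[0] < t - WINDOW:
            self.times.popleft()
        return len(self.times) >= THRESHOLD

trig = SlidingWindowTrigger()
# Sparse background (one primitive every 15 s): never triggers.
background_fired = any(trig.add_primitive(t) for t in range(0, 100, 15))
# A burst of primitives within a few seconds: triggers.
burst_fired = any(trig.add_primitive(100.0 + 0.3 * i) for i in range(10))
```

A deque gives O(1) insertion and expiry, which is the same bounded-memory, bounded-latency behaviour the firmware implementation needs.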

Speaker: DAVID, Quentin (IP2I)
• 97
Readout firmware of the Vertex Locator for LHCb Run 3 and beyond

The new Vertex Locator for LHCb, comprising a new pixel detector
and readout electronics, will be installed in 2020 for data-taking in
Run 3 at the LHC. The electronics centres around the "VeloPix" ASIC at
the front-end operating in a triggerless readout at 40 MHz. A custom
serialiser, called GWT (Gigabit Wireline Transmitter) and associated
custom protocol have been designed for the VeloPix. The GWT data are
sent from the serialisers of the VeloPix at a line rate of 5.12 Gb/s,
reaching a total data rate of 2-3 Tb/s for the full VELO detector.

Data are sent over 300 m optic-fibre links to the control and readout
electronics cards for deserialisation and processing in Intel Arria
10 FPGAs. Due to the VeloPix triggerless design, latency variations of up
to 12 $\mu s$ can occur between adjacent datagrams. It is therefore
essential to buffer and synchronise the data in firmware prior to
onward propagation or suffer a huge CPU processing penalty. This paper
will describe the architecture of the readout firmware in detail
with focus given to the re-synchronisation mechanism and techniques
for clusterisation. Issues found during readout commissioning, and
scaling resource utilisation, along with their solutions, will be
illustrated. The latest results of the firmware data processing chain will be
presented as well as the verification procedures employed in simulation.
Challenges for the next generation of the detector will also be presented
with ideas for a readout processing solution.
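The buffering and re-synchronisation step described above can be sketched as an event-builder map: fragments from each link are held until every link has delivered its piece of a given event, absorbing the latency variation between datagrams. Link count and payloads are invented for illustration.

```python
# Conceptual sketch of firmware-style re-synchronisation of datagrams
# that arrive with varying latency across several links.

class EventSynchroniser:
    def __init__(self, n_links):
        self.n_links = n_links
        self.pending = {}  # event_id -> {link: fragment}

    def receive(self, event_id, link, fragment):
        """Buffer one fragment; return the complete event once all
        links have delivered their fragment for this event ID."""
        frags = self.pending.setdefault(event_id, {})
        frags[link] = fragment
        if len(frags) == self.n_links:
            return (event_id, self.pending.pop(event_id))
        return None

sync = EventSynchroniser(n_links=3)
out = []
# Fragments of events 7 and 8 arrive interleaved and out of order.
for eid, link, data in [(7, 0, "a"), (8, 1, "x"), (7, 2, "c"),
                        (8, 0, "w"), (7, 1, "b"), (8, 2, "y")]:
    done = sync.receive(eid, link, data)
    if done:
        out.append(done)
```

Events emerge complete and in readiness order, so downstream consumers never pay the CPU cost of re-assembling misaligned fragments.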

Speaker: HENNESSY, Karol (University of Liverpool (GB))
• 98
Vendor Product Information: DESY TechLab
• Micro Oral C
• Industrial Exhibit 14.10.
• Poster session C-01
• 99
The threshold voltage generation for CMS ETL readout ASIC

We present the threshold voltage generation for the CMS Endcap Timing Layer (ETL) readout ASIC called ETROC. ETROC, together with the chosen sensor, a low gain avalanche diode (LGAD), aims to achieve a time resolution of 30-40 ps per track. In the analog front-end of ETROC, a pre-amplifier amplifies the signal from the LGAD sensor, and then a discriminator converts the pre-amplifier signal into digital pulses for the succeeding time-to-digital conversion and readout. The discriminator requires a programmable precision threshold voltage to cover the possible pre-amplifier signal range from 0.6 V to 1.0 V with a fine step size. The proposed threshold voltage generation network includes a reference generation network and a threshold voltage generator.
The threshold voltage generator has been implemented in ETROC0, the first prototype of ETROC, in a 65 nm CMOS technology. It includes a 10-bit digital-to-analog converter (DAC) and a noise filter. A two-stage resistor string structure is used in the DAC to achieve a monotonic adjustment curve: the first stage implements the 5 most significant bits (MSB) and the second stage determines the 5 least significant bits (LSB). The noise filter removes high-frequency noise and keeps the noise well below the pre-amplifier noise. The measured differential nonlinearity (DNL) and integral nonlinearity (INL) of the threshold voltage generator are ±0.15 LSB and ±1.2 LSB, respectively. The current consumption is 80 uA.
The reference generation network has been implemented in a separate prototype recently. Test results are expected to be presented at the conference.
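The two-stage (5-bit MSB + 5-bit LSB) resistor-string split can be modelled numerically: the MSBs pick a coarse segment of the range and the LSBs subdivide it, which makes the ideal transfer curve monotonic by construction. The 0.6-1.0 V range follows the abstract; resistor matching is taken as ideal here.

```python
# Ideal model of the two-stage resistor-string 10-bit DAC.

VMIN, VMAX = 0.6, 1.0       # threshold range from the abstract
N_MSB, N_LSB = 5, 5

def dac_output(code):
    """Ideal output voltage for a 10-bit code (0..1023)."""
    msb = (code >> N_LSB) & 0x1F            # coarse segment index
    lsb = code & 0x1F                       # position inside the segment
    seg = (VMAX - VMIN) / (1 << N_MSB)      # coarse segment height
    return VMIN + msb * seg + lsb * seg / (1 << N_LSB)

# Monotonicity holds over the whole code range in the ideal model.
volts = [dac_output(c) for c in range(1024)]
monotonic = all(v2 >= v1 for v1, v2 in zip(volts, volts[1:]))
```

One LSB here is (1.0 V − 0.6 V)/1024 ≈ 0.39 mV, the fine step size the discriminator threshold needs; in silicon, segment mismatch shows up as the measured DNL/INL.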

Speaker: Ms ZHANG, Li (Southern Methodist University)
• 100
Qualification of radiation tolerance with fault-injection

The new ALICE Inner Tracking System (ITS) is currently in its commissioning phase. The ITS sensors are read out by 192 ITS readout units, which send the data to the counting room. These readout units are located 4 m from the ALICE experiment beam collision point and are therefore in a radiation zone. Thus, the components on the readout unit must be radiation tolerant. The sensor data are collected and transmitted by a Xilinx FPGA on the readout unit. An auxiliary, radiation-tolerant Microsemi ProASIC3 (PA3) FPGA on the readout unit provides configuration and scrubbing of the Xilinx FPGA design. The fault injector is a new module of the PA3 design. With this fault injector it is possible to run a radiation qualification test of the Xilinx FPGA design without any radiation facility. Since this module runs stand-alone, the entire detector system is available and ready when the faults are injected. This allows one to observe the system effects of SEUs in the Xilinx FPGA, which significantly increases the design iteration speed when doing radiation qualification of the design. Because the error injection rate of this design runs orders of magnitude faster than real time, an upgrade made during a technical stop can be qualified for the equivalent of a 9-month run in just 4.4 hours.
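The principle of accelerated fault injection — flipping random configuration bits far faster than real SEUs occur — can be sketched as follows. The memory size and flip count are toy values; the speed-up arithmetic mirrors the abstract's 9-month/4.4-hour figures.

```python
# Sketch of accelerated SEU emulation by random bit flips in a toy
# configuration memory (sizes and counts invented for illustration).
import random

random.seed(7)

config = bytearray(1024)          # toy configuration memory, all zeros

def inject_seu(mem):
    """Flip one random bit, as a single-event upset would."""
    i = random.randrange(len(mem))
    bit = 1 << random.randrange(8)
    mem[i] ^= bit
    return i, bit

flipped = [inject_seu(config) for _ in range(100)]
upsets = sum(bin(b).count("1") for b in config)   # bits currently upset

# Equivalent-exposure arithmetic from the abstract: emulating a
# ~9-month run in 4.4 hours is roughly a 1470x acceleration.
run_hours = 9 * 30 * 24
speedup = run_hours / 4.4
```

Note that `upsets` can be slightly below the number of injections: flipping the same bit twice cancels, just as two real SEUs on one configuration bit would.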

Speaker: ERSDAL, Magnus Rentsch (University of Bergen (NO))
• 101
Discrete Component Front End and Acquisition system for Hyperkamiokande

Hyper-Kamiokande is a next-generation underground water Cherenkov detector designed to study neutrinos from astronomical sources and nucleon decay, with the main focus on the determination of leptonic CP violation. To detect the weak Cherenkov light generated by neutrino interactions or proton decay, a newly developed 20-inch PMT will be used in addition to a system of small photomultipliers, the multiPMT, as implemented in the KM3NeT experiment. We propose a full front-end and acquisition system based on discrete components, motivated by cost reduction, the introduction of newer and better-performing components every year, and the small packages reached by modern components. For the 20-inch PMTs our front-end is composed of two sections: a fast one with a high-gain amplifier and a discriminator, and a slow one with a charge integrator and 2 ADCs to accommodate the high signal dynamic range (from 1 to 1250 PE). A Xilinx Zynq FPGA will manage the acquisition and an optical Gigabit Ethernet link for data transfer, and will also provide the time measurement with a TDC.
Each mPMT system is composed of 19 3-inch PMTs and all the electronics required for the system, with a power budget of only 4 W. The front-end follows the same principle as for the 20-inch PMTs: a fast discriminator and a slow shaper followed by a 12-bit @ 2 Msps ADC. The FPGA collects the data and provides TDC measurements with 270 ps resolution. An ARM CPU running Linux and MIDAS manages the acquisition.
This approach lets us meet all the HK requirements.

Speaker: LAVITOLA, Luigi (INFN - National Institute for Nuclear Physics)
• 102
A Novel Spectroscope with Machine Learning at the Edge for Real-Time Position Sensitivity in Thick LaBr3 Crystal

We present a 144-channel detection module for gamma spectroscopy that couples a SiPM detector array to a thick LaBr3 scintillation crystal. The module features custom front-end electronics, state-of-the-art energy resolution (2.9% at 662 keV), an 80 kHz monolithic acquisition rate and embedded, real-time imaging capabilities. The relatively large number of independent channels makes it possible to achieve a large energy range (up to 20 MeV) and spatial resolution in photon-interaction position reconstruction (aiming at relativistic Doppler effect correction in accelerator-based nuclear physics experiments) without sacrificing state-of-the-art energy resolution, thanks to the 84 dB dynamic range of the GAMMA ASIC. The experimental results, obtained with real-time edge computing of the coordinates and energy of each scintillation event on the FPGA device with sub-µs processing time, suggest an innovative approach to instrumentation development for nuclear science.

Speaker: Mr DI VITA, Davide (Politecnico di Milano)
• 103
Beam Tests of the Data Acquisition of the Mu3e Experiment

The Mu3e experiment at the Paul Scherrer Institute searches for the lepton flavour violating decay $\mu^+ \rightarrow e^+e^+e^-$. The experiment aims for an ultimate sensitivity of one in $10^{16}$ decays. The first phase of the experiment, currently under construction, will reach a branching ratio sensitivity of $2 \cdot 10^{-15}$ by observing $10^{8}$ muon decays per second over a year of data taking. The highly granular detector, based on thin high-voltage monolithic active pixel sensors (HV-MAPS) and scintillating timing detectors, will produce about 100 GB/s of data at these rates. The FPGA-based Mu3e data acquisition system will read out this data from the detector and identify interesting events using a farm of graphics processing units. The poster presents the ongoing integration of the sub-detectors into the Field Programmable Gate Array based readout system, which is used to sort and transport the data to the filter farm. Integration test beam campaigns are conducted to verify and integrate the different detector systems.

Speaker: KÖPPEL, Marius
• 104
A 10 Gbps Driver/Receiver ASIC and Optical Modules for Particle Physics Experiments

We present the design and test results of a Drivers and Limiting AmplifierS operating at 10 Gbps (DLAS10) and Miniature Optical Transmitter/Receiver/Transceiver modules (MTx+, MRx+, and MTRx+) based on DLAS10. DLAS10 can drive two Transmitter Optical Sub-Assemblies (TOSAs) of Vertical Cavity Surface Emitting Lasers (VCSELs), receive the signals from two Receiver Optical Sub-Assemblies (ROSAs) that have no embedded limiting amplifiers, or drive a VCSEL TOSA and receive the signal from a ROSA, respectively. Each channel of DLAS10 consists of an input Continuous Time Linear Equalizer (CTLE), a four-stage limiting amplifier (LA), and an output driver. The LA amplifies the signals of variable levels to a stable swing. The output driver drives VCSELs or impedance-controlled traces. DLAS10 is fabricated in a 65 nm CMOS technology. The die is 1 mm × 1 mm. DLAS10 is packaged in a 4 mm × 4 mm 24-pin quad-flat no-leads (QFN) package. DLAS10 has been tested in MTx+, MRx+, and MTRx+ modules. Both measured optical and electrical eye diagrams pass the 10 Gbps eye mask test. The input electrical sensitivity is 40 mV, while the input optical sensitivity is -12 dBm. The total jitter of MRx+ is 29 ps (P-P) with a random jitter of 1.6 ps (RMS) and a deterministic jitter of 9.9 ps (P-P). Each MTx+/MTRx+ module consumes 82 mW/ch and 174 mW/ch, respectively. Irradiation tests will be carried out and reported at the conference.

Speaker: Mr HUANG, Xing (Central China Normal University)
• 105
An FPGA-based Trigger System with Online Track Recognition in COMET Phase-I

The COMET Phase-I experiment plans to search for muon-to-electron conversion, which has never been observed, with a single event sensitivity of $3\times10^{-15}$. The event signature is a mono-energetic electron of $105\,\mathrm{MeV/c}$ from muonic atoms formed in aluminum. This electron is detected by a cylindrical drift chamber (CDC) and a trigger hodoscope in a 1-T solenoidal magnetic field.
A highly intense muon beam is used to reach our sensitivity goal. It gives an unacceptably high trigger rate for data acquisition. To solve this problem, we are developing an FPGA-based online trigger system in which the CDC-hit information is utilized with machine learning (ML) techniques. A classifier optimized by ML is implemented on FPGAs in the form of small look-up tables (LUTs). Using them, this system identifies the conversion-electron events and significantly suppresses the background trigger rate. Adopting a distributed trigger architecture makes it possible to handle data for ~5000 CDC channels. All the processing needs to finish within the required latency, which is limited by the buffer size of the CDC readout electronics. The LUTs carry out the ML-based classification in parallel within a clock cycle, which keeps the latency short enough.
The trigger algorithm was implemented on the trigger-related electronics. In an operation test, the total latency was measured and met the requirement. The performance of the COTTRI system was estimated in a simulation study. In this presentation, we report details about the trigger algorithm, the operation-test result, and the expected performance.
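The step of collapsing a trained classifier into LUTs can be sketched as exhaustive tabulation: every possible input bit pattern is evaluated offline, so online classification is a single table read per clock. The 4-input "classifier" below is a toy stand-in, not the COMET model.

```python
# Sketch of turning a small boolean classifier into an FPGA-style LUT.

N_INPUTS = 4

def trained_classifier(bits):
    """Toy decision function standing in for the ML-optimized model:
    flag hit patterns with at least three layers hit."""
    return sum(bits) >= 3

# Offline: tabulate the classifier over all 2^N input patterns.
LUT = [trained_classifier([(p >> i) & 1 for i in range(N_INPUTS)])
       for p in range(1 << N_INPUTS)]

def lut_classify(pattern):
    """Online: one table read, as a hardware LUT evaluates in one clock."""
    return LUT[pattern]

# The LUT reproduces the classifier for every possible hit pattern.
agree = all(lut_classify(p) ==
            trained_classifier([(p >> i) & 1 for i in range(N_INPUTS)])
            for p in range(1 << N_INPUTS))
```

Because the table is exhaustive, the LUT is exactly equivalent to the trained function, and its fixed single-cycle evaluation is what keeps the trigger latency bounded.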

Speaker: Mr NAKAZAWA, Yu (Osaka University)
• 106
Development of multi-channel high time resolution DAQ system for TOT-ASIC

We have developed a multi-channel, high time resolution DAQ system for a TOT (time over threshold) ASIC. The developed FPGA-based DAQ has 144 channels per board with an Ethernet readout for measuring energy and timing information simultaneously. The measured time resolution of the developed DAQ board is currently 62.5 ps, and it should be possible to reach 15.625 ps in the future. This simultaneous energy- and time-measuring DAQ with a TOT ASIC will be useful for many radiation detection applications, such as PET and Compton imaging.
As the first stage of the development, we developed a DAQ with a 1 ns time resolution using the DDR method. It was demonstrated that the time coincidence method can be used to capture the electron-positron pair of 22Na. We obtained two-dimensional cross-section images of the divided positions between two 8×8 detectors arranged facing each other. It was confirmed that all data on several boards can be compared with each other.
As the second stage of development, we developed DAQs with time resolutions of 125 ps and 62.5 ps using a delay-line method. From measurements with coaxial cables of different lengths, a length difference of 10 cm is clearly resolved.
For synchronizing the time of data on many boards, we have adopted a T0 signal, which consists of a rising edge for strict time coincidence followed by identification time data, and is distributed to all boards. All data can be compared with each other at the accuracy of the time resolution.
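A fine-binned timestamp like the 62.5 ps one above is conventionally built by merging a coarse clock counter with a delay-line interpolation code. A minimal sketch, assuming an illustrative 4 ns coarse clock divided into 64 fine bins (4 ns / 64 = 62.5 ps):

```python
# Combining a coarse counter and a fine delay-line code into one
# timestamp; the clock period and bin count are assumed values.

CLOCK_PERIOD_PS = 4000                       # hypothetical 250 MHz clock
FINE_BINS = 64                               # delay-line taps per period
FINE_LSB_PS = CLOCK_PERIOD_PS / FINE_BINS    # 62.5 ps

def timestamp_ps(coarse_count, fine_code):
    """Merge the coarse counter and fine interpolator into picoseconds."""
    return coarse_count * CLOCK_PERIOD_PS + fine_code * FINE_LSB_PS

# Coincidence between two channels, e.g. two annihilation photons:
t1 = timestamp_ps(coarse_count=1000, fine_code=10)
t2 = timestamp_ps(coarse_count=1000, fine_code=14)
delta = t2 - t1            # 4 fine bins
```

Once every board's coarse counters are aligned to the same T0 edge, timestamps built this way are directly comparable across boards at the fine-bin accuracy.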

Speaker: SATO, setsuo (The University of Tokyo)
• 107
Investigation of depth-dose profile of electron radiation in continuous and discontinuous materials based on Monte Carlo simulations and measurements

In food irradiation, the medium in a food product package is usually discontinuous; however, in most dose-profile estimations it is treated as a continuous material. In the present work, the depth-dose profiles of electron radiation induced by a 10 MeV electron beam linear accelerator (UELR-10-15S2) in continuous and discontinuous materials have been investigated based on Monte Carlo simulations and experiments. Three materials were chosen: water (density 1.0 g/cm3), plastic (1.05 g/cm3) and a food-like dummy (1.0 g/cm3), in Monte Carlo simulations using the MCNP4c code and in the experiments. In the discontinuous media, the materials were separated into layers with thicknesses of 0.2-0.4 cm by air gaps, but the total thickness of the material was kept the same. The thickness of the air gaps was increased from 1.0 to 4.0 cm. The results show no significant difference between the depth-dose profiles in the continuous and discontinuous materials. The difference of the practical range, i.e. the depth of electron radiation, in continuous and discontinuous media is within 5% for the same materials. The results are similar for the three materials, and agreement between simulations and experiments is obtained. The practical range in plastic is 5.23 g/cm2, which is greater than that in water (5.02 g/cm2) and in the food-like dummy (4.94 g/cm2). This indicates that for plastic products irradiated by a double-sided electron beam, it is possible to use a thicker package of 9.6 g/cm2 compared to the usual thickness of 8.8 g/cm2.

Speaker: Mr CAO, Van Chung
• 108
A System-on-a-Chip based Front-end electronics control system for the HL-LHC ATLAS Level 0 muon trigger

Exploiting commodity FPGAs in the front-end electronics is a standard solution
for readout and trigger electronics for the HL-LHC. It requires the system to be
capable of configuring, debugging and testing the firmware from a remote host.
Furthermore, the control system is also responsible for monitoring and recovery
in case of SEU errors in the configuration memory of the FPGAs. Realizing
such control capability is a unique challenge for the HL-LHC experiments. We
have developed a control system to achieve such requirements with a new
module exploiting System-on-a-Chip device as the solution for the Level-0
endcap muon trigger system of the LHC-ATLAS experiment. The new module
named JATHub module will provide functionalities of flexible control of the
system: configuration, testing, monitoring, and debugging the FPGAs on the
front-end electronics. In addition, realization of the robust operation scheme for
the possible radiation damage on the JATHub module is another essential item of
development. We have invented a scheme to mitigate SEU errors on the SoC
device and a redundant boot mechanism that will guarantee robust operations
even in case of radiation damage to the flash memory devices during the
physics data taking with beam collisions. We have fabricated two prototype
boards of the JATHub module. Demonstration studies with the prototype modules
will verify the concept of using SoCs for the management of the front-end
FPGAs in the experimental cavern of the collider experiments for HL-LHC.

Speaker: TANAKA, Aoto (University of Tokyo (JP))
• 109
The study of calibration process for the hybrid pixel array detector of HEPS-BPIX

The calibration process for the hybrid pixel array detector designed for the High Energy Photon Source in China, called HEPS-BPIX, is presented in this paper. Based on S-curve scanning, the relationship between energy and threshold is quantified. For the threshold trimming, a precise algorithm and a fast algorithm are proposed to study the performance of the threshold DACs applied to each pixel. The fast algorithm obtains an applicable threshold distribution for a silicon pixel module in a shorter time, while the precise algorithm obtains a better threshold distribution but is time-consuming. The ENC and threshold spread of the precise algorithm are better than those of the fast algorithm. The threshold spread has been reduced from 26.924 mV without any algorithm to 3.991 mV with the precise algorithm, and to 4.659 mV with the fast algorithm. The temperature dependence of the silicon pixel module has also been studied. With increasing temperature, the equivalent noise threshold and the number of noisy pixels increase, and the minimum detection energy can be reduced by 1 keV at lower temperature. The calibrated silicon pixel module has already been applied to X-ray diffraction.
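The goal of threshold trimming — choosing a per-pixel trim-DAC code that pulls each measured threshold toward the chip-wide target, narrowing the spread — can be sketched as below. The pixel offsets, DAC step and code range are invented numbers, not HEPS-BPIX values.

```python
# Sketch of per-pixel threshold trimming with a small trim DAC.

TARGET_MV = 100.0
TRIM_STEP_MV = 2.0            # one trim-DAC LSB, hypothetical
TRIM_CODES = range(-8, 8)     # 4-bit signed trim range, hypothetical

# Untrimmed per-pixel thresholds (mV), with a wide spread.
pixels = [87.0, 95.5, 103.0, 110.0, 92.0, 108.5, 99.0, 114.0]

def best_trim(th):
    """Trim code minimising the residual distance to the target."""
    return min(TRIM_CODES, key=lambda c: abs(th + c * TRIM_STEP_MV - TARGET_MV))

trimmed = [th + best_trim(th) * TRIM_STEP_MV for th in pixels]

def spread(vals):
    return max(vals) - min(vals)
```

After trimming, the residual spread is bounded by the trim-DAC step size, which is why the abstract's measured spread shrinks from tens of mV to a few mV.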

Speaker: Mrs DING, ye (ihep)
• 110
High-Speed Image Acquisition System for Real-time Plasma Control on EAST

In order to realize plasma boundary reconstruction and plasma shape control based on visible cameras, a high-speed camera acquisition system was developed on EAST. This system is optimized in many ways to achieve high-speed, real-time, low-latency performance, and can simultaneously host multiple acquisition cards. The acquisition rate of this system approaches 10,000 FPS with a frame size of 320×240 at 8 bits. The system uses DMA and an optimized memory copy function in data transmission, reducing the time spent on reading and writing data in memory. Besides, the visual display function of the system, based on a Python web application, can communicate with the acquisition machines, combine the data of multiple acquisition machines in near real time, and perform image fusion and display. For data storage, the system optimizes the storage format and provides a MATLAB-based GUI for offline data analysis and processing.

Speaker: HANG, Qin
• 111
DAQ Control Signal Codec for Time of Flight AFP Detector

Particle physics experiments produce data on the order of tens of GB per hour. Due to the limited resources for data storage and offline processing, this amount has to be reduced to the events strongly related to the ongoing experiment. The data acquisition (DAQ) system is the key component dedicated to collecting and storing data at the different stages of the particle detector system. The codec takes the commands from the Digital Trigger Module (DTM) in the Time-of-Flight (ToF) detector chain of the ATLAS Forward Proton (AFP) project. These commands are then encoded and prepared for fast serial transfer via the 265 m long coaxial cable between the CERN LHC accelerator and the rack room where the DAQ is placed. As the LHC operating frequency (bunch clock) is 40 MHz, the transfer rate of the 5-bit data frames is set to 400 MHz. The signal received at the end of the coax is distorted, mainly due to the parasitic capacitance and the limited output power of the transmitter's cable buffer. Therefore, the receiver implements circuitry for restoring the input signal. The decoder then processes the received data frame and prepares the signals for control of the DAQ. The system as a whole will be thoroughly tested at DESY and CERN and subsequently installed at the LHC.

Speaker: ZICH, Jan (University of West Bohemia (CZ))
• 112
Digital Trigger Module for Cherenkov Time of Flight Detector

The ATLAS Forward Proton (AFP) project extends the forward physics program of the multipurpose ATLAS detector located at the LHC at CERN. The time-of-flight (ToF) detector measures the time delay of the detected high-energy protons (HEPs) during multiple proton-proton collisions. Due to the high luminosity at the LHC, the number of events detected by the ToF is enormous, as is the amount of data produced. The main purpose of the digital trigger module (DTM) is to select the events relevant to the ongoing physics experiment. The DTM input signals are compared with remotely set logical conditions; the module then decides whether the input signals will pass to the following stage of the ToF detector chain or not. The secondary function of the device is to send control commands to the data acquisition system, which stores the data from the selected levels (modules) of the ToF chain. The DTM performs high-speed processing of signals with rates up to 40 MHz (the bunch clock frequency). The selection of the relevant events takes approximately 9 ns. After the full testing procedure (including tests at DESY and at the CERN beam test facilities), the DTM will be installed at the LHC.

Speaker: ZICH, Jan (University of West Bohemia (CZ))
• 113
LLRF Controller for High Current Cyclotron Based BNCT System

A 30 MeV high-current proton cyclotron has been demonstrated successfully for a BNCT system in Japan. Monte Carlo simulations at the China Institute of Atomic Energy (CIAE) indicate that a lower energy, e.g. a 14 MeV proton beam, clearly benefits the neutron moderator while hardly affecting the neutron flux. A project for a high-current cyclotron based BNCT system was approved for design and construction in 2016. For such a compact 14 MeV high-current cyclotron, the load of the RF cavities is small, while the beam loading is quite high for a mA-level machine. Thus, during beam commissioning and machine operation, the RF system requires a more powerful, flexible and reliable real-time LLRF system. Although the 100 MeV compact cyclotron at CIAE has produced up to 52 kW of proton beam, its analog-digital hybrid LLRF system cannot satisfy the requirements of the high-beam-loading situation of the 14 MeV cyclotron. At TRIUMF, a digital LLRF system was developed for the pre-buncher of the ARIEL project. This state-of-the-art design is extended and reused for the BNCT LLRF system: the amplitude and phase of the two separate dees, as well as of the buncher for beam injection, are regulated by a single FPGA. For such a demanding control task, the design shows a promising future, both from the real-time-response and the flexibility point of view. The design ideas, technical features, system structure, hardware manufacture and software development will be presented.

Speaker: Mr XIAOLIANG, Fu
• 114
Control of cyclotron vertical deflector for Proton Therapy

The CYCIAE-230 cyclotron operates in continuous-wave (CW) mode. The extracted beam is characterized by a fixed energy greater than 230 MeV. However, modern therapy methodology often requires fast modulation of the beam, both in intensity and in energy. Energy modulation is typically implemented with a passive device, namely an energy degrader. Unfortunately, when this device changes the energy, it also changes the intensity of the beam greatly. This cross-talk between beam energy and intensity modulation introduced by the degrader is nonlinear and varies between devices. To address this issue, a vertical deflector is designed and installed in the central region, using an electric field to deflect the beam. The vertical deflector, together with a collimator several turns downstream, can control the beam intensity. However, the voltage applied to the vertical deflector is also nonlinearly related to the beam transmission ratio, due to the vertical betatron oscillation. These two nonlinearities require a sophisticated design of the controller for the vertical deflector. The vertical deflector controller is a custom-designed 6U VME control card. The controlling unit is based on a Digital Signal Processor (DSP), which interpolates lookup-table data to implement feed-forward control. In the meantime, an ionization chamber provides feedback readings of the intensity just before the nozzle. The DSP runs a PID algorithm that takes this feedback as input to correct the vertical deflector voltage. This type of advanced PID controller is able to meet the timing requirements of beam intensity modulation for proton therapy.
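The combination of lookup-table feed-forward and PID feedback described above can be sketched as follows. This is a minimal illustration, not the CYCIAE-230 implementation: the table values, gains and sampling interval are all invented for the example.

```python
import bisect

# Hypothetical lookup table: transmission ratio -> deflector voltage (V).
# The DSP interpolates between calibration points for the feed-forward term.
LUT = [(0.0, 5000.0), (0.25, 3500.0), (0.5, 2400.0), (0.75, 1200.0), (1.0, 0.0)]

def feedforward(ratio):
    """Linearly interpolate the LUT to get the feed-forward voltage."""
    xs = [x for x, _ in LUT]
    i = min(max(bisect.bisect_right(xs, ratio) - 1, 0), len(LUT) - 2)
    (x0, v0), (x1, v1) = LUT[i], LUT[i + 1]
    return v0 + (v1 - v0) * (ratio - x0) / (x1 - x0)

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Feed-forward from the LUT plus a PID correction driven by the
# ionization-chamber intensity reading (illustrative gains).
pid = PID(kp=800.0, ki=50.0, kd=0.0, dt=1e-3)
setpoint = 0.5                      # requested transmission ratio
measured = 0.45                     # feedback from the ionization chamber
voltage = feedforward(setpoint) + pid.step(setpoint, measured)
```

The feed-forward term handles the known nonlinearity stored in the table, while the PID term corrects the residual error measured at the nozzle.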

Speaker: Prof. YIN, Zhiguo
• 115
A digital approach of beam phase measurement and regulation system

A superconducting cyclotron, CYCIAE-230, is being developed by the China Institute of Atomic Energy (CIAE) as a proton source for cancer therapy. As proton therapy is a very demanding application for beam stability, it is very important to control the isochronism of the cyclotron: in daily operation of the accelerator, the accumulation of phase slip should be limited. In general, there are several ways to regulate the magnetic field of a fixed-frequency cyclotron. One of them is to measure the beam phase with respect to the RF cavity; this phase can be used to increase or decrease the field accordingly, so that the total acceleration time is constant across particles. The beam phase stability control system of the CYCIAE-230 superconducting cyclotron is designed to measure the beam phase and use it to adjust the current setting of the main magnet power supply. The reported system includes a high-resolution resonator-type beam phase probe, a digital frequency down-conversion and phase detector module, a digital PID controller, and a digital power supply adjustment interface. This paper summarizes the numerical analysis of the high-resolution phase probe and gives the design of the digital down-conversion phase discriminator, the core algorithm of the digital controller, and the digital power control interface. A prototype has been manufactured and tested with the CYCIAE-100 cyclotron; the results are also evaluated in this paper.

Speaker: Prof. YIN, Zhiguo
• 116
Development of Payload and Platform for Gamma-ray Energy Measurement and Terrestrial Gamma-ray Flash detection

Ground-based observation of Terrestrial Gamma-ray Flashes (TGFs) remains a challenge, since few ground detectors have been built to detect these energetic bursts. Inheriting from the detector design of the IGOSat nanosatellite, we propose a new project to develop a payload and a platform for measuring gamma rays from 100 keV up to 2 MeV. The payload contains a gamma-ray detector combining a CeBr3 crystal scintillator with surrounding EJ-200 plastic scintillators, all read out by Silicon Photomultipliers (SiPMs). The payload and platform can be used to measure background gamma rays as an observatory, or can be carried on a balloon flight to detect radiative photons at high altitude, especially around 15 km. Its TGF detection capability will open a window to better understand this phenomenon, by comparison with the TGF models recently published by other authors. Besides that, the final product can be used as a portable gamma-ray detector for other applications. This project will create a new collaboration between Vietnamese universities: the University of Science and Technology of Hanoi (USTH), the Vietnamese-German University (VGU) and the University of Science (VNUHCM), with the support of the APC laboratory (Paris Diderot University).

Speaker: Dr PHAN, Hiền (University of Science and Technology of Hanoi (USTH))
• 117
EPICS-Based Accelerator Magnet Control System for the SC200 Protontherapy Facility

The SC200 facility in Hefei, China, will implement a proton therapy installation. It consists of a 200 MeV superconducting cyclotron, one gantry treatment room and one fixed-beam room. Both treatment rooms are equipped with a two-dimensional magnetic scanning system. The facility is currently being assembled and first beam is expected in early 2020.

The control system for the cyclotron magnet measurement is based on the EPICS framework. It integrates the control of the high-precision gaussmeter, the three-dimensional motion platform, the power supplies and the water cooling system. The three-dimensional motion platform, equipped with Hall probes, allows users to set motion parameters in the software and perform magnetic field measurement tasks along multiple motion trajectories. The system realizes online data collection, monitoring and real-time storage. Operation through a unified GUI is simple, which guarantees high experimental efficiency.

Speaker: WANG, Cheng
• 118
How to improve the performance of the fast timing detector

Fast timing detectors are widely used for their picosecond-level time resolution, e.g. small MCP-PMTs with about 50 ps at the single-photoelectron (SPE) level and some fast SiPMs with about 100 ps at SPE. The time characteristics of these fast timing photodetectors are the rise time, fall time and electron transit time of the output signal. The transit time is the interval from the generation of photoelectrons to the anode output signal; the transit time spread (TTS) describes its distribution and characterizes the time resolution of the PMT (MCP-PMT, SiPM). In laboratory tests of fast timing detector performance, the light source, the electronics board and the DAQ system all directly affect the results. In this manuscript, comparative test data will be shown and discussed with regard to improving the performance of fast timing detectors.

Speaker: MA, Lishuang (Institute of High Energy Physics, CAS)
• 119
Archiving system for EAST PF Power Supply

The poloidal field power supply (PFPS) is an important subsystem of EAST (Experimental Advanced Superconducting Tokamak), so an archiving system for it is essential. The monitor system of the PFPS runs on Linux and is based on EPICS (Experimental Physics and Industrial Control System), with CSS (Control System Studio) as the HMI (Human Machine Interface). Archive is a CSS plugin used for data archiving. This paper presents an archiving system comprising an archive engine, a client, and an RDB (Relational DataBase). The PVs (Process Variables) that need archiving are configured so that they are stored to the RDB whenever they change. The system realizes real-time archiving of all input signals, and the history data can be browsed at any time through CSS. Archiving makes the monitor system fully functional and provides the data basis for analyzing the EAST PFPS.

Speaker: Mrs HE, Shiying
• 120
OPC UA for integration of industrial components in a large scientific project

Integration of control and data access to industrial components implies the need to handle a myriad of multiple proprietary interfacing mechanisms. OPC UA, the Industry 4.0 sponsored standard for communication, entices us with a promise of a single, well supported integration system. OPC UA is based on modern concepts, such as service-oriented architecture (SOA) and adoption of information models for the description of devices and their capabilities. Component services can be orchestrated into more abstract machines, thus increasing the flexibility and adding the possibility to configure the whole system.
The work assesses the feasibility of using OPC UA as the standard mechanism to perform integration of industrial components into ITER Plant Systems. It also investigates the benefits of the OPC UA comprehensive information model to express system components.
In this paper, performance comparisons between OPC UA and the EPICS PV Access protocol are performed: configuration, monitoring and data storage are evaluated with respect to different plant interfaces. Furthermore, several use cases are developed in order to test the performance and integrity of the protocol, checking latency, bandwidth usage, resource consumption and transactional behaviour.
Finally, both Server and Client interfaces are tested on ITER CODAC Core System environment by making use of programmable logic controllers (PLCs) and desktop workstations.

Speaker: Dr PORZIO, Luca
• 121
40 Gbps Readout interface STARE for the AGATA Project

The Advanced GAmma Tracking Array (AGATA) multi-detector spectrometer will provide precise information for studying the properties of exotic nuclear matter (very unbalanced proton (Z) and neutron (N) numbers) along the proton and neutron drip lines, and of super-heavy nuclei, using the latest particle accelerator technology. The AGATA spectrometer consists of 180 high-purity germanium detectors, each segmented into 38 segments. The very demanding project requirement is to measure gamma-ray energies with very high resolution (< 1×10⁻³) at high detector counting rates (50 kevents/s/crystal). This results in a very high data transfer rate per crystal (5 to 8 Gbps). The samples are continuously transferred to the CAP (Control And Processing) motherboard, which reduces the data rate from 64 Gbps to 10 Gbps. The CAP module also adds continuous monitoring data, bringing the total outgoing data rate beyond 10 Gbps. The STARE module is designed to sit between the CAP module and the computer farm. It packages the data from the CAP module and transmits it to the computer farm using up to 4 × 10 Gbps UDP (User Datagram Protocol) connections, with a delivery-assurance mechanism implemented in the application layer.
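An application-layer delivery-assurance scheme over UDP typically amounts to sequence-numbering each datagram and retransmitting until acknowledged. The sketch below illustrates the idea with a simulated lossy channel in place of real sockets; the packet format, retry policy and function names are illustrative assumptions, not the STARE protocol.

```python
import struct

HEADER = struct.Struct("!I")  # 4-byte big-endian sequence number

def packetize(payloads):
    """Prepend a sequence number to each payload."""
    return [HEADER.pack(seq) + p for seq, p in enumerate(payloads)]

def send_reliable(packets, channel, max_retries=3):
    """Resend each packet until the (simulated) channel acknowledges it."""
    delivered = {}
    for pkt in packets:
        seq = HEADER.unpack(pkt[:HEADER.size])[0]
        for attempt in range(max_retries + 1):
            if channel(pkt, attempt):          # True = ACK received
                delivered[seq] = pkt[HEADER.size:]
                break
    return delivered

# Simulated lossy channel: every packet is dropped on its first attempt.
lossy = lambda pkt, attempt: attempt >= 1

data = [b"event-a", b"event-b", b"event-c"]
received = send_reliable(packetize(data), lossy)
```

In a real system the `channel` call would be a UDP send followed by a bounded wait for an acknowledgement datagram carrying the same sequence number.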

Speaker: Mr KARKOUR, NABIL (CNRS/IN2P3/CSNSM)
• 122
Real-time disruption prediction in the PCS of EAST tokamak using a random forest model

A real-time data-driven disruption prediction algorithm has been implemented in the Plasma Control System (PCS) of the EAST superconducting tokamak. Disruptions are dangerous, unforeseen events that can cause extreme damage to the plasma-facing components via rapid (on the order of milliseconds) release of the stored magnetic energy and plasma current. Therefore, a real-time warning system that predicts disruptions ahead of time, to initiate avoidance actions or trigger the mitigation system, is necessary in current experiments and for future fusion reactors. To this aim, we developed a data-driven algorithm based on random forests that uses zero-dimensional plasma parameters, including diagnostic measurements and modelling data that reconstruct the plasma state. The system calculates the disruption probability in real time, using signals transferred to the PCS every 500 microseconds and taking approximately 150-300 microseconds for inference. The random forest is a machine learning algorithm used in many fields outside plasma physics, and has already been exploited on another tokamak (DIII-D). The model is trained on around 1000 plasma discharges, both disruptive and non-disruptive. The estimated disruption probability can serve as the trigger for the disruption control systems, such as massive gas injection (MGI) and fast ramp-down of the plasma current, to shut down the experiment safely with no damage. The algorithm can also provide indications of the causes of the disruptivity by analyzing the feature contributions in real time; this can be used to determine which action the PCS should take to avoid the disruption safely.
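The inference step of a random forest is simply an average of tree votes over the input features. The toy sketch below makes that concrete with three hand-built decision stumps; the feature names, thresholds and trees are illustrative assumptions, not the trained EAST model.

```python
# Each "tree" votes 1 (disruptive) or 0 (safe) on zero-dimensional features.
def tree_a(f):   # e.g. large locked-mode amplitude -> disruptive
    return 1 if f["locked_mode"] > 0.3 else 0

def tree_b(f):   # e.g. low edge safety factor -> disruptive
    return 1 if f["q95"] < 3.0 else 0

def tree_c(f):   # e.g. high radiated-power fraction -> disruptive
    return 1 if f["p_rad_frac"] > 0.8 else 0

FOREST = [tree_a, tree_b, tree_c]

def disruption_probability(features):
    """Fraction of trees voting 'disruptive' ~ the class probability."""
    return sum(tree(features) for tree in FOREST) / len(FOREST)

safe = {"locked_mode": 0.05, "q95": 4.5, "p_rad_frac": 0.3}
risky = {"locked_mode": 0.6, "q95": 2.5, "p_rad_frac": 0.9}
```

Because each tree is a short chain of threshold comparisons, this structure maps naturally onto the 150-300 microsecond inference budget quoted above.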

Speaker: Ms HU, Wenhui (Institute of Plasma Physics, Chinese Academy of Sciences)
• 123
Robust control design for multi-input multi-output plasma shape control on EAST tokamak using H∞ synthesis

Accurate plasma shape control is the basis of tokamak plasma experiments and physics research. Modeling of the linearized control response of plasma shape and position has been widely used for shape controller design in recent years, but such models usually contain considerable uncertainty, such as structured uncertainties and unmodeled dynamics. The EAST tokamak plasma shape controller design is also based on a linear rigid plasma response model, integrated within a Matlab-based toolset known as TokSys. Meanwhile, the PID control approach currently used for EAST plasma shape control leads to strong coupling between the different parameters describing the plasma shape. To handle these problems, an H∞ robust control scheme for EAST multi-input multi-output (MIMO) shape control has been proposed. First, the plasma is modeled as a distributed current source and linearized about a distribution defined by the GA Equilibrium Fitting code (EFIT). Then, the controller design proceeds in two main stages: 1) loop shaping is used to shape the nominal plant singular values to give the desired open-loop properties at frequencies of high and low loop gain; 2) a normalized coprime factorization and H∞ technique is used to decouple the most relevant control channels and minimize the tracking errors. Finally, simulation results show that the H∞ robust controller combines good robust stability margins, speed of response, dynamic tracking characteristics, and closed-loop decoupling for EAST plasma shape control.

Speaker: Dr LIU, Lei (Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, China)
• 124
A reconfigurable digital dataflow controller for large-scale pixel arrays with power-efficient RISC-V core

In high-energy physics experiments, large-scale pixel arrays are desired because of the requirements for accurate time and position measurement over the effective detector range. When pixel detectors form a unified module in a limited area, multi-controller interconnections and pixel array control interfaces become the bottlenecks that prevent the application of larger pixel arrays. This paper proposes a reconfigurable digital readout system to overcome this problem. The controller includes three parts: a core module, a communication interface and a pixel scanning array. The core module of the controller uses a compact and power-efficient CPU with the RISC-V instruction set. The communication interface is capable of data exchange and programming between core controllers. The pixel scanning array implements the digital readout logic and supports up to 256×256 pixels. The supply voltage, scanning area, and pixel threshold can be configured according to the requirements of different situations; this improves system compatibility and reduces power consumption when operating at low voltage. When scaling up to a large pixel array, multiple controllers can be connected in a daisy-chain topology through stitching technology without changing the definition of the ports. The prototype has been set up and verified successfully by simulation. The core has been taped out in a commercial 130 nm CMOS process.

Speaker: FANG, Ni (CCNU)
• 125
A calculation software for β (LSC)-γ coincidence counting

Liquid scintillation counting (LSC) is widely used in absolute radioactivity measurement for its advantages of no self-absorption, simple sample preparation and relatively easy application to many radionuclides, for example 60Co and 134Cs. Based on the existing liquid scintillation testing platform and a self-developed acquisition system, we have been able to obtain the energy spectrum and timing information of nuclides since early 2018. In this paper, a method for β(LSC)-γ digital coincidence counting is developed and implemented as software. Differently from the previous method, we apply the live-time method in the counting-rate correction, in place of the formula correction, which allows the coincidence counting method to be applied in more general cases. Experiments on 60Co and 134Cs indicate that this software can work out the absolute radioactivity within tolerable error and has considerable potential for application in the nuclear measurement field.
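The core arithmetic of coincidence counting with a live-time correction can be sketched in a few lines: each channel's rate is computed against the time the system was actually able to count, and the standard coincidence formula A = RβRγ/Rc gives the activity. All numbers below are illustrative, not measured values.

```python
def rate(counts, elapsed, dead_time):
    """Counting rate corrected with the live-time method."""
    live = elapsed - dead_time      # time the system could actually count
    return counts / live

def coincidence_activity(n_beta, n_gamma, n_coinc, elapsed, dead_time):
    """A = R_beta * R_gamma / R_coinc (standard coincidence formula)."""
    r_b = rate(n_beta, elapsed, dead_time)
    r_g = rate(n_gamma, elapsed, dead_time)
    r_c = rate(n_coinc, elapsed, dead_time)
    return r_b * r_g / r_c

# Example: a 100 s acquisition with 2 s of accumulated dead time.
activity = coincidence_activity(90000, 60000, 54000,
                                elapsed=100.0, dead_time=2.0)
```

Using the measured live time instead of an analytic dead-time formula is what lets the method generalize to acquisition conditions where the formula correction does not apply.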

Speaker: ZHONG, Ke (University of Science and Technology of China)
• 126
An 8.8-ps RMS Precision Time-to-Digital Converter Implemented in a Cyclone 10 LP FPGA with Automatic Temperature Correction

This paper presents a non-uniform multiphase (NUMP) time-to-digital converter (TDC) implemented in a field-programmable gate array (FPGA) with automatic temperature compensation in real time. The NUMP-TDC is a novel low-cost, high-performance TDC that has achieved an excellent timing precision (2.3 ps root mean square) in an Altera Cyclone V FPGA. However, the time delays of the delay chains in some FPGAs (for example, the Altera Cyclone 10 LP) change significantly with temperature; the timing performance of NUMP-TDCs implemented in those FPGAs is therefore significantly affected by temperature fluctuations. In this study, we developed a simple method to monitor changes of the time delays, using two registers deployed at the two ends of the delay chains, and to compensate for them using a look-up table (LUT). When the change of the time delays exceeds a given threshold, the LUT for delay correction is updated and a bin-by-bin correction is launched. Using this correction approach, a precision of 8.8 ps RMS over a wide temperature range (5°C to 80°C) has been achieved in a NUMP-TDC implemented in a Cyclone 10 LP FPGA.
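A bin-by-bin LUT correction of this kind is usually built by code-density calibration: uncorrelated hits populate the delay-chain bins, each bin's width is taken proportional to its hit count, and the LUT maps a raw bin code to the centre of its cumulative time interval. The sketch below illustrates that construction plus the drift-threshold trigger; bin counts, clock period and threshold are illustrative assumptions, not the paper's values.

```python
def build_lut(hist, clock_period_ps):
    """hist[i] = calibration hits in bin i; returns bin-centre times in ps."""
    total = sum(hist)
    lut, t = [], 0.0
    for h in hist:
        width = clock_period_ps * h / total   # bin width ~ hit density
        lut.append(t + width / 2.0)           # assign the bin centre
        t += width
    return lut

def needs_update(delay_now_ps, delay_ref_ps, threshold_ps=5.0):
    """Rebuild the LUT when the monitored chain delay drifts too far."""
    return abs(delay_now_ps - delay_ref_ps) > threshold_ps

# Four bins of unequal width observed over one 1000 ps clock period:
lut = build_lut([100, 300, 400, 200], clock_period_ps=1000.0)
```

The two registers at the chain ends supply `delay_now_ps`; once `needs_update` fires, a fresh histogram is accumulated and `build_lut` replaces the correction table.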

Speakers: Mr SONG, Zhipeng (Huazhong University of Science and Technology) , Dr PENG, Qiyu (Lawrence Berkeley National Laboratory, Berkeley, CA, USA)
• 127
The Electronics Design of Magnetic Field Active Control System in KTX Reversed Field Pinch

Keda Torus eXperiment (KTX) is a new reversed field pinch (RFP) plasma scientific device at the University of Science and Technology of China. Resistive wall modes (RWMs) and tearing modes (TMs) of the RFP configuration limit the increase of the discharge duration.
The objective of this paper is to introduce a new electronics design that forms the main part of the magnetic field active control system. The system can prolong the discharge duration and improve the confinement performance by feedback-controlling RWMs and TMs in real time in KTX. The electronics system is based on a workstation and a PCI eXtensions for Instrumentation Express (PXIe) chassis. An FPGA is the central part of the whole system and controls the entire process, including data acquisition from the magnetic probes, high-speed data transmission and real-time data processing. After PID control, the FPGA feeds back to the external power amplifier to change the voltage of the saddle control coils.
Results indicate that the data transmission speed meets the needs of the system. More tests are in progress to prove that the system can achieve fast calculation with low delay.

Speaker: Mr SONG, Shuchen
• 128
Performance of the High Level Trigger System at CMS in LHC Run-2

The CMS experiment at the LHC selects events with a two-level trigger system: the Level-1 (L1) trigger and the High Level Trigger (HLT). The HLT reduces the rate from 100 kHz to about 1 kHz; it has access to the full detector readout and runs a streamlined version of the offline event reconstruction. During LHC Run-2 the peak instantaneous luminosity reached values up to 2×10^34 cm⁻²s⁻¹, posing a challenge to the online event selection. An overview of the HLT system, the trigger selections for online physics-object reconstruction, and the main triggers using those physics objects in the 2016-2018 proton collision data-taking period will be presented. The performance of the main trigger paths in a representative set of physics triggers will also be discussed.

Speaker: CHOUDHURY, Somnath (Indian Institute of Science (IN))
• 129
Radiation tolerance of online trigger system for COMET Phase-I

The COMET experiment aims to search for the neutrinoless muon-to-electron transition process. The online trigger system is an integral part of reaching the required sensitivity and will be located in the detector region, where high levels of neutron radiation are expected. Consequently, a significant number of soft errors can occur in the digital logic of the on-board field programmable gate arrays (FPGAs), requiring error correction for single event upsets (SEUs) and firmware-redownloading schemes for unrecoverable soft errors (UREs). We tested the performance of one of the COMET Phase-I front-end trigger boards, which implements a Xilinx Kintex-7 series FPGA, subject to a neutron fluence up to $10^{12}\;n\;cm^{-2}$, with error correction and firmware redownloading schemes. Single-bit and multi-bit errors were measured in CRAM, BRAM and also in multigigabit (4.8 Gbps) cable transfer, utilising multiple error-correcting codes.
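The single-bit correction at the heart of such SEU-mitigation schemes is typically a Hamming-style code: parity bits locate a flipped bit so it can be inverted back. The sketch below shows the classic Hamming(7,4) code as a generic illustration; the actual codes used on the COMET boards are not specified here.

```python
def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1,p2,d1,p3,d2,d3,d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4      # covers codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4      # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4      # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct at most one flipped bit and return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the error
    c = list(c)
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the faulty bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
word = hamming74_encode(data)
word[4] ^= 1                           # simulate a single-event upset
recovered = hamming74_decode(word)
```

Multi-bit upsets defeat single-error-correcting codes like this one, which is why the abstract pairs error correction with a firmware-redownloading fallback.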

Speaker: DEKKERS, Sam (Monash University)
• 130
Hi’Beam: a pixelated on-line beam monitor for carbon-ion therapy facility

Considering the increasing number of cancer patients, China has built its own carbon-ion therapy facility, which was officially put into clinical treatment in August 2020. The beam monitoring system in the therapy facility ensures that the beam energy deposition accurately covers the targeted tumor region and that the correct dose is delivered. Hence, a precise, non-invasive real-time beam monitor named Hi'Beam has been designed for the carbon-ion therapy facility. Hi'Beam is located between the scanning magnets and the tumor. Its main components are an oblate gas chamber, 80 Topmetal pixel sensors, the readout electronics, and the data acquisition system. With the small 83 μm × 83 μm pixels of the Topmetal sensor, Hi'Beam is expected to provide a monitoring accuracy of ~10 μm to 20 μm. This paper will discuss the design and performance of the Hi'Beam system.

Speaker: Mr ZHANG, Honglin (Institute of Modern Physics, CAS)
• 131
Trigger-DAQ and Slow Controls Systems in the Mu2e Experiment

The muon campus program at Fermilab includes the Mu2e experiment, which will search for a charged-lepton flavor violating process in which a negative muon converts into an electron in the field of an aluminum nucleus, improving by four orders of magnitude on the search sensitivity reached so far.
Mu2e's Trigger and Data Acquisition System (TDAQ) uses otsdaq as its solution. Developed at Fermilab, otsdaq uses the artdaq DAQ framework and the art analysis framework under the hood for event transfer, filtering, and processing.
otsdaq is an online DAQ software suite with a focus on flexibility and scalability, providing a multi-user, web-based interface accessible through the Chrome or Firefox web browsers.
The detector Read Out Controllers (ROCs) of the tracker and calorimeter stream out zero-suppressed data continuously to the Data Transfer Controller (DTC). Data are then read over the PCIe bus by a software filter algorithm that selects events, which are finally combined with the data flux coming from the Cosmic Ray Veto System (CRV).
A Detector Control System (DCS) for monitoring, controlling, alarming, and archiving has been developed using the Experimental Physics and Industrial Control System (EPICS) open-source platform. The DCS has also been integrated into otsdaq. The installation of the TDAQ and DCS systems in the Mu2e building is planned for 2021-2022, and a prototype has been built at Fermilab's Feynman Computing Center. We report here on the developments and achievements of the integration of Mu2e's DCS system into the online otsdaq software.

Speaker: Dr GIOIOSA, Antonio (INFN - National Institute for Nuclear Physics)
• 132
Performance evaluation of new parallel VME readout system for unstable nuclear physics

The RIBF DAQ system is used for research on unstable nuclei at the RI Beam Factory at RIKEN [H. Baba et al., Nucl. Instr. Meth. A 616, 65 (2010)]. The system reads out CAMAC and VME standard modules such as QDCs, TDCs and ADCs from front-end computers. Recently, the new MOCO (mountable controller) and MPV (MOCO with parallelized VME) systems have been developed [H. Baba et al., RIKEN Acc. Prog. Rep. 52, 146 (2019), and poster presentation at this conference], with which data from several VME modules can be read out in parallel at high speed. On the other hand, some experiments, such as measurements of total reaction cross sections, require very low data error rates, below 0.01%.
Therefore, we verified the accuracy of the data in an actual accelerator experiment. The QDC, TDC, and ADC data of radiation detectors were acquired redundantly and simultaneously, using well-proven CAMAC standard modules and the unproven VME standard modules with the MPV+MOCO system.
We evaluated the accuracy of the data of the system at the 0.01% level by comparing the VME data with the CAMAC data. With this new VME-based DAQ system with MPV+MOCO, the dead time can be reduced from about 150 us to about 15 us in typical experimental setups, allowing efficient data taking at high trigger rates. In this presentation, we will report these results.

Speaker: TAKAHASHI, Hiroyuki (Tokyo City University)
• 133
Preliminary Design of a FADC Readout System for the Alpha/Beta Discrimination in a Large Area Plastic Scintillation Detector

This paper describes an FADC readout system developed for a tap-water α/β dose monitoring system based on an EJ-444 phoswich scintillation detector with wavelength-shifting-fiber readout. The readout system contains dual sampling channels supporting sampling rates up to 1 GSPS with 14-bit vertical resolution and an adequate effective number of bits (9.7 bits at 10 MHz), which is well suited for discriminating the minimal differences between alpha and beta signals. Moreover, the system is based on a ZYNQ SoC, which provides high data throughput, low latency and excellent flexibility. As for the discrimination algorithm, a simple least-squares classification method is used to discriminate the α/β signals and shows high discrimination capability. The front-end electronics and high-voltage supply modules are also briefly introduced in the paper.
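A least-squares classification of digitized pulses can be as simple as comparing each pulse against per-class template shapes and picking the class with the smaller sum of squared residuals. The sketch below illustrates the idea; the templates and pulses are toy values (real templates would come from calibration data), not the paper's algorithm parameters.

```python
# Normalized template shapes (illustrative): alpha pulses decay faster.
ALPHA_TEMPLATE = [0.0, 0.9, 1.0, 0.5, 0.2, 0.1]
BETA_TEMPLATE  = [0.0, 0.6, 1.0, 0.8, 0.6, 0.4]

def ssr(pulse, template):
    """Sum of squared residuals between a pulse and a template."""
    return sum((p - t) ** 2 for p, t in zip(pulse, template))

def classify(pulse):
    """Assign the class whose template fits the pulse best."""
    a = ssr(pulse, ALPHA_TEMPLATE)
    b = ssr(pulse, BETA_TEMPLATE)
    return "alpha" if a < b else "beta"

fast_pulse = [0.0, 0.85, 1.0, 0.55, 0.25, 0.1]
slow_pulse = [0.0, 0.65, 1.0, 0.75, 0.55, 0.45]
```

In practice pulses are amplitude-normalized before the comparison so the residual reflects only the shape difference.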

Speaker: WEN, Jingjun (Tsinghua University)
• 134
Analysis and Verification of Relation between Digitizer’s Sampling Properties and Energy Resolution of HPGe Detectors

CDEX (China Dark matter EXperiment) aims at the detection of WIMPs (Weakly Interacting Massive Particles) and of neutrinoless double beta decay (0νββ) of 76Ge. It currently uses ~10 kg HPGe (High-Purity Germanium) detectors at CJPL (China Jinping Underground Laboratory). The energy resolution of the detectors is calculated from the pulse-height spectrum of waveforms shaped with a 6-µs shaping time. It is necessary to know how the sampling properties of a digitizer affect the energy resolution. This paper will present preliminary energy resolution results for waveforms with different sampling properties. The preliminary results show that an ENOB (effective number of bits) of 8.25 bits or better can meet the energy resolution requirements of the CDEX HPGe detectors. Based on ADC (Analog-to-Digital Converter) quantization error theory, this paper will also make a quantitative analysis of the energy resolution of the CDEX HPGe detectors. This will provide guidance for the ADC design in the full-chain cryogenic readout electronics for HPGe detectors.

Speaker: WANG, Tianhao
• 135
Design of a configurable column-parallel ADC in the MAPS for real-time beam monitoring

As the leading research platform for heavy-ion science in China, the Heavy Ion Research Facility in Lanzhou (HIRFL) provides heavy-ion beams for research on nuclear physics and related applications. The beam monitoring system in HIRFL ensures that the beam is delivered accurately. A full image of the beam energy deposition is needed for accurate beam calibration; thus a Monolithic Active Pixel Sensor (MAPS), which can provide the energy deposition in each pixel, is being designed in a 130 nm CMOS process. The MAPS will be deployed at different terminals, where the sampling frequencies and required resolutions vary. As the key part in realizing this MAPS with full-image output, a novel column-parallel ADC with resolution configurable from 5 to 9 bits has been designed. To meet the strict constraints on power dissipation, size, operating speed and accuracy of the MAPS, the column-parallel ADC combines the dedicated sampling phase and the signal conversion phase into a single phase. Moreover, the ADC has a high tolerance to comparator offset by generating 1.5 bits in every stage. Each column-parallel ADC occupies a small area of 100 μm × 300 μm, consumes only 5.4 mW from a 3.3 V supply and provides sampling rates from 5 MS/s to 10 MS/s with a dynamic input range of 1000 mV. In 9-bit mode, it achieves an ENOB of 8.07 bits with an SNDR of 50.34 dB. This paper discusses the design and performance of the column-parallel ADC.

Speaker: ZHAO, Chengxin (Institute of Modern Physics, CAS)
• 136
Optimized low-precision multiply-and-accumulate for neural computing in pixel sensor applications

Pixel array chips are widely used in physics experiments. The data rate of a pixel array chip can be considerably high (640 Mbps for Topmetal-M), and the analog output is generally converted to a digital value by a low-precision ADC at the output end. When analyzing the quantized data for online pattern recognition, neural networks and deep learning are promising methods to perform on-site preprocessing and alleviate the burden of offline analysis. In deep learning, the basic arithmetic unit is the multiply-and-accumulate (MAC). Current MAC designs target the FP32, FP16, and INT8 data types; in pixel sensor applications, however, even lower precision helps to reduce the transmitted data without greatly decreasing the discrimination ability. Here we use the 3-bit low-precision data from the Topmetal-M pixel array chip as an example. We design 3-bit carry-ripple, carry-skip, carry-lookahead, parallel-prefix and other adder structures; array, Booth and other multiplier structures are also implemented. A comprehensive analysis aiming at the best power, performance and area (PPA) is conducted. Based on this analysis, we propose a new low-precision multiply-and-accumulate (LPMAC) unit based on the carry-lookahead adder, carry-save adder and Booth-coded multiplier to support low-precision operations in neural networks. These designs have been verified in a commercial 130 nm process and functionally simulated successfully on FPGAs.
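Functionally, a low-precision MAC quantizes its operands to a few bits and accumulates the products in a wider register so the running sum cannot overflow. The behavioural model below illustrates that arithmetic for the 3-bit case; the accumulator width and saturation policy are illustrative assumptions, not the LPMAC design.

```python
def quantize3(x):
    """Clamp an integer to the 3-bit unsigned range [0, 7]."""
    return max(0, min(7, int(x)))

def mac3(pixels, weights, acc_bits=16):
    """Accumulate 3-bit x 3-bit products in an acc_bits-wide register."""
    acc, limit = 0, (1 << acc_bits) - 1
    for p, w in zip(pixels, weights):
        acc = min(acc + quantize3(p) * quantize3(w), limit)  # saturate
    return acc

# Example: a dot product over four quantized pixel values.
result = mac3([3, 7, 1, 5], [2, 2, 4, 1])
```

A hardware LPMAC realizes the same function with a Booth-coded multiplier and carry-save accumulation; the model above only fixes the intended input/output behaviour.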

Speaker: WANG, Hui (Central China Normal University)
• 137
A Design of Clock and Timing System Prototype for Hard X-ray FEL Facility

The Shanghai Hard X-ray Free Electron Laser Facility (SHINE), currently under construction, aims at producing X-ray pulses in the photon energy range from 3 keV to 25 keV. To meet this design goal, SHINE requires highly precise distribution of the clock and timing signals over a distance of about 3.1 km. Based on the standard White Rabbit (WR) precision time protocol, this paper presents a prototype design of a customized high-precision clock and timing system. In addition, special clock processing methods are proposed for the requirements of the different devices in SHINE, including frequency division, phase-delay and duty-cycle adjustment, and the addition of MASK information. Test results indicate that the jitter of the distributed clock over the frequency range from about 1 Hz to 1 MHz is less than 20 picoseconds RMS, and that the phase delay can be adjusted over a range of up to 1 second with a step size of about 400 picoseconds.

Speaker: Dr GU, Jinliang (State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China)
• 138
WIE event - use the attached link to connect!
Speaker: DA VIA, Cinzia (University of Manchester (GB))
• Thursday, October 15
• Invited talk 04
• 139
CANPS Award Talk
Speaker: MARTIN, Jean-Pierre
• Oral presentations MISC01
Convener: Prof. LIU, Zhen-An (IHEP,Chinese Academy of Sciences (CN))
• 140
FERS-5200: Front-End Readout System for large detector arrays

The FERS-5200 is the new CAEN Front-End Readout System for large detector arrays. It consists of a compact, distributed and easily deployable solution integrating ASIC-based front-ends, A/D conversion, data processing, synchronization and readout. With the appropriate front-end, the solution fits a wide range of detectors such as SiPMs, multi-anode PMTs, GEMs, silicon strip detectors, wire chambers, gas tubes, etc. The first member of the FERS family is the A5202, a 64-channel readout card for SiPMs based on the CITIROC ASIC by Weeroc SaS. The DT5215 Concentrator board can manage the readout of up to 128 cards at once, i.e. 8192 readout channels in the case of the A5202.

Speaker: Dr VENARUZZO, Massimo (CAEN SpA)
• 141
CentOS Linux for the ATLAS MUCTPI Upgrade

A new Muon-to-Central-Trigger Processor Interface (MUCTPI) was built as part of the upgrade of the ATLAS Level-1 trigger system for the next run of the Large Hadron Collider at CERN. The MUCTPI has 208 high-speed optical serial links for receiving muon candidates from the muon trigger detectors. Three high-end FPGAs are used for real-time processing of the muon candidates, for sending trigger information to other parts of the trigger system, and for sending summary information to the data acquisition and monitoring system. A System-on-Chip (SoC) is used for the control, configuration and monitoring of the hardware and the operation of the MUCTPI. The SoC consists of an FPGA part and a processor system. The FPGA part provides communication with the processing FPGAs, while the processor system runs software for communication with the run control system of the ATLAS experiment. In this paper we describe our experience with running CentOS Linux on the SoC. Cross-compilation, together with the existing framework for building the ATLAS trigger and data acquisition (TDAQ) software, is used to allow us to deploy the TDAQ software directly on the SoC.

Speaker: SPIWOKS, Ralf (CERN)
• 142
High Throughput Optical Module for Large Size Experiment

The Jiangmen Underground Neutrino Observatory (JUNO) is a 20 kton multi-purpose underground neutrino detector whose primary goal is to determine the neutrino mass hierarchy using reactor anti-neutrinos produced by nearby nuclear power plants. The detector is currently under construction in China.
A board able to manage 0.5 Tbps of aggregate throughput, based on multi-gigabit optical serial links, is presented. This module, called the Reorganize and Multiplexing Unit (RMU), has been developed as part of the JUNO experiment. It is used in the trigger system to collect, manage and distribute the trigger information.
The system can also be exploited in ground telecommunications, satellite communications and other particle physics experiments.
The RMU functionality has been physically divided across different PCBs: power supplies, slow control, clock distribution and optical links. This approach maximizes performance and reduces production and maintenance costs.

Speaker: FABBRI, Andrea (INFN - National Institute for Nuclear Physics)
• 143

Detector readout systems for medium- to large-scale physics experiments, and instruments in some other fields as well, are generally composed of multiple front-end digitizer boards distributed over a certain area. Often this hardware has to be synchronized to a common reference clock with minimal skew and low jitter. These front-end units usually also need to receive messages synchronously, for example trigger information. The high-speed serial transceivers (a.k.a. SerDes) embedded in modern Field Programmable Gate Arrays (FPGAs) are primarily meant for moving large data volumes, but a reference clock can also be propagated via their serial clock. While using the SerDes clock recovery technique to transport synchronization over fast serial links has become a mainstream solution, it has some limitations. An alternative option uses distinct clock and data links; this can potentially reach higher synchronization accuracy, at significant hardware expense.
This work reports some first steps to explore a third scheme for clock and synchronous message distribution. Like the standard approach, the same medium is used to convey clock and data, but instead of using today's "data-centric" links, where the recovered clock is only a by-product of a SerDes, this paper defines and investigates "clock-centric" links where, conversely, a clock is carried by the link and synchronous data are embedded into it by a modulation technique. After defining the concepts and principles of clock-centric links, experimental studies are presented. Finally, the merits and limitations of the proposed approach are discussed.
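For illustration, one classic way to make a link carry its own clock is Manchester coding, where every bit cell contains a guaranteed transition (this is only an example of the general idea; the specific modulation technique investigated in the paper may differ):

```python
def manchester_encode(bits):
    """Encode bits so each bit cell has a mid-cell transition:
    1 -> high,low and 0 -> low,high. A receiver can recover both the
    clock (from the transitions) and the data (from their direction)."""
    out = []
    for b in bits:
        out.extend((1, 0) if b else (0, 1))
    return out

def manchester_decode(halves):
    """Recover data bits from pairs of half-cell levels: the first half
    of each cell equals the data bit in this convention."""
    return [first for first, _ in zip(halves[::2], halves[1::2])]
```

Because a transition occurs in every cell regardless of the data, the clock can be extracted from the waveform itself, which is the defining property of a clock-centric link.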

Speaker: CALVET, Denis (Université Paris-Saclay (FR))
• 144
Real-time web display in the MIDAS DAQ software

Modern data acquisition systems share the need for near real-time visualization of parameters and slow control variables of an experiment. This includes the display of values in text format but also in various graphs, including trend lines versus time. The MIDAS DAQ system, which is used world-wide in various experiments (TRIUMF, PSI, CERN, T2K, …), has recently been extended to allow such visualizations through a web browser. This extension uses modern web technologies such as AJAX, JSON, Canvas and WebSockets included in HTML5 to render graphics and text in any browser, including mobile devices. The advantage over native graphics packages such as Qt is that the graphics application does not have to be installed locally, but is obtained from the experiment through the web link automatically. Updates of the application are therefore instantly available to all people monitoring an experiment. This is only possible because today's browsers can render graphics in JavaScript almost as fast as native applications by accessing the GPU directly, allowing millions of data points per second to be rendered with minimal CPU load.

This paper explains the technology and the optimizations made to easily allow any graphical representation in the MIDAS DAQ framework, and how to use it for your own experiment. Experience gained with this system for remote monitoring of experiments across several continents will be presented.

Speaker: RITT, Stefan (Paul Scherrer Institute)
• 145
Vendor Product Information: Wiener "A universal, low latency and open VME Data Acquisition - Hardware and Software by MESYTEC"
• Industrial Exhibit 15.10.
• Poster session A-02 - program see A-01 on Mon 12/10
• 146
HLS4ML Tutorial
Speakers: LIU, Miaoyuan (Purdue University (US)) , SUMMERS, Sioni Paris (CERN) , TRAN, Nhan Viet (Fermi National Accelerator Lab. (US))
• Friday, October 16
• Invited talk 05
• 147
Artificial Intelligence - a gentle introduction
Speaker: BUEHLER, Christof (super computing systems scs)
• Oral presentations DAQ03
Convener: NEUFELD, Niko (CERN)
• 148
Pattern matching Unit for Medical Applications

We explore the application of concepts developed in High Energy Physics (HEP) for advanced medical data analysis.

Our study case is a problem with high social impact: clinically feasible Magnetic Resonance Fingerprinting (MRF). MRF is a new quantitative imaging technique that replaces multiple qualitative MRI exams with a single, reproducible measurement for increased sensitivity and efficiency. A fast acquisition is followed by a pattern matching (PM) task, where signal responses are matched to entries from a dictionary of simulated, physically feasible responses, yielding multiple tissue parameters simultaneously. Each pixel signal response in the volume is compared through scalar products with all dictionary entries to choose the best reproduction of the measurement. MRF is limited by the PM processing time, which scales exponentially with the dictionary dimensionality, i.e. with the number of tissue parameters to be reconstructed. For HEP we developed a powerful, compact, embedded system optimized for extremely fast PM. This system executes real-time tracking for online event selection in HEP experiments, exploiting maximum parallelism and pipelining.
Track reconstruction is executed in two steps. The Associative Memory (AM) ASIC first implements a PM algorithm by recognizing track candidates at low resolution. The second step, which is implemented into FPGAs (Field Programmable Gate Arrays), refines the AM output finding the track parameters at full resolution.

We propose to use this system to perform MRF, to achieve clinically reasonable reconstruction time. This paper proposes an adaptation of the HEP system for medical imaging and shows some preliminary results.
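In its simplest form, the pattern-matching step described above is an argmax over normalized scalar products between each pixel fingerprint and the dictionary (a NumPy sketch of the generic dictionary search, not the AM-chip/FPGA implementation):

```python
import numpy as np

def mrf_match(signals, dictionary):
    """Return, for each pixel fingerprint, the index of the dictionary
    entry with the largest normalized scalar product.

    signals:    (n_pixels, n_timepoints) measured responses
    dictionary: (n_entries, n_timepoints) simulated responses
    """
    sig = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    dic = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    return np.argmax(sig @ dic.T, axis=1)
```

The cost of the matrix product, n_pixels x n_entries x n_timepoints, is exactly what explodes with dictionary size and what the AM-based system is meant to accelerate.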

Speaker: LEOMBRUNI, Orlando (Universita & INFN Pisa (IT))
• 149
Operational experience and evolution of the ATLAS Tile Hadronic Calorimeter Read-Out Drivers

TileCal is the central hadronic calorimeter of the ATLAS experiment at the Large Hadron Collider (LHC). It is a sampling detector in which scintillating tiles are embedded in steel absorber plates. The tiles are grouped into cells, which are read out on both sides by photomultiplier tubes (PMTs). For the events accepted by the Level 1 trigger system, the PMT digital samples are transmitted to the Read-Out Drivers (RODs) located in the back-end system. The ROD is the core element of the back-end electronics and represents the interface between the front-end electronics and the overall ATLAS Data AcQuisition (DAQ) system. It is responsible for energy and time reconstruction, trigger and data synchronization, busy handling, data integrity checking and lossless data compression. The TileCal ROD is a standard 9U VME board equipped with DSP-based Processing Unit mezzanine cards. A total of 32 ROD modules are required to read out the entire TileCal detector. Each ROD module has to process the data from up to 360 PMTs in real time in less than 10 μs. The commissioning of the RODs was completed in 2008, before the first LHC collisions. Since then, several hardware and firmware updates have been implemented to adapt the RODs to the evolving ATLAS Trigger and DAQ conditions following the LHC parameters.
The initial ROD system, the different updates implemented and the operational experience during the LHC Run 1 and Run 2 are presented.

Speaker: VALERO BIOT, Alberto (Univ. of Valencia and CSIC (ES))
• 150
Design of the Compact Processing Module for the ATLAS Tile Calorimeter

The LHC plans a series of upgrades towards a High-Luminosity LHC (HL-LHC) that will increase the instantaneous luminosity to 5-7 times the current nominal value. The ATLAS experiment will adapt its detector and data acquisition system to the HL-LHC requirements during Long Shutdown 3 (2024-2026), when the on- and off-detector electronics of the Tile Calorimeter (TileCal) will be completely replaced with a new readout electronics system with new interfaces to the fully digital ATLAS trigger system.

In the HL-LHC era, the on-detector electronics will stream digitized data from the PMTs to 128 Compact Processing Modules (CPMs) operated in 32 ATCA carrier blades located in the counting rooms, requiring a total bandwidth of 40 Tbps to read out the entire detector.

The CPMs will be the core of the off-detector electronics, being responsible for the high-speed communication with the on-detector electronics and ATLAS DAQ system, clock distribution, data acquisition, and core processing functionalities. Each CPM is equipped with 8 Samtec FireFly modules connected to a Xilinx Kintex UltraScale FPGA for data buffering, online digital processing and on-detector electronics control. A Xilinx Artix 7 FPGA performs slow control functionalities for the clocking circuitry, power monitoring, Ethernet configuration, and monitors the phase drifts of the distributed clock.

This contribution presents the design of the first CPMs for the ATLAS Tile Calorimeter and its integration into the ATLAS TDAQ system. The results of the early performance testing with the on-detector electronics and ATLAS DAQ system are discussed, as well as the layout design and firmware architecture.

Speaker: CARRIO ARGOS, Fernando (Univ. of Valencia and CSIC (ES))
• 151
The Data Acquisition System of protoDUNE dual-phase

ProtoDUNE dual-phase is a 1/20-scale prototype of the future DUNE 10 kton dual-phase module. The detector is based on liquid argon time projection chamber technology, with amplification and collection of the ionization electrons in the gas phase. The detector has an active volume of 6x6x6 m3 containing 300 ton of liquid argon. A drift window of 6 m corresponds to 4 ms, which is subdivided into 10000 samples of 12 bits for each of the 7680 readout channels used to instrument the segmented anode collecting the drifted charges at the top of the detector. The data acquisition system of protoDUNE dual-phase is the outcome of a long R&D process aimed at simplifying the design and reducing costs while preserving high bandwidth for a trigger rate of up to 100 Hz. The front-end digitizing units are 120 AMC cards of 64 channels each with 10 Gbit/s connectivity. Groups of 10 AMC cards are integrated in 12 μTCA crates together with timing and synchronization slave nodes based on the White Rabbit standard, which is also used to dispatch external triggers. ProtoDUNE dual-phase started operating in August 2019. This talk will focus on the design aspects and the operating experience of the DAQ system.
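The numbers quoted above translate into an easy raw-data-rate bound (back-of-envelope arithmetic only; compression and zero suppression would reduce the real figure):

```python
channels = 7680
samples_per_drift = 10_000     # 4 ms drift window
bits_per_sample = 12
trigger_rate_hz = 100          # maximum supported trigger rate

# one uncompressed event: 7680 * 10000 * 12 bits = 115.2 MB
event_bytes = channels * samples_per_drift * bits_per_sample // 8

# worst-case raw throughput at the maximum trigger rate: 11.52 GB/s
raw_rate_bytes = event_bytes * trigger_rate_hz
```

This is the scale of bandwidth the DAQ design has to preserve.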

Speaker: GIRERD, Claude
• 152
Preparing the LHCb data acquisition for LHC Run3

The LHCb experiment will face an unprecedented amount of data in LHC Run 3, starting in 2021. We expect more than 30 Tbit/s of data from the detector into the event-builder. After an intense R&D phase we have now decided on the details of the architecture and on the technology. In terms of the amount of data to handle, this will be the biggest scientific data acquisition system built to date. While the technology to build even bigger systems is clearly available, building a compact system of this size with a limited budget poses some interesting challenges. In our system we have tried to minimize the number of custom-made components to profit maximally from technological development in industry, but our use-case for these components remains quite special. In this paper we review the design, the reasons which brought us to it, and the measurements which guided our decisions, and we discuss the main technology drivers.

Speaker: PISANI, Flavio (CERN)
• 153
Zoom Coffee Break

Please coordinate and meet in one of the Zoom coffee rooms

• Industrial Exhibit 16.10.
• Poster session B-02 - program see B-01 on Tue 13/10
• Monday, October 19
• Invited talk 06
• 154
History of the Real Time Conference
Speaker: LE DÛ, Patrick
• Oral presentations CMTS01
Convener: LEVINSON, Lorne (Weizmann Institute of Science (IL))
• 155
From 40G to 100G - a new MCH concept

Increasing demands for modularity and bandwidth, especially in physics applications, create a constant challenge to meet the requirements of applications ranging from the "low end" (i.e. industrial PLC-type control) to the "high end" (i.e. data acquisition and processing with high-end FPGAs). Existing switched MOSA (modular open system architecture) approaches such as MicroTCA, which are heavily used in big-physics applications like machine control, need to catch up with the increasing demand. Therefore, a new concept for the MicroTCA Carrier Hub (MCH) is needed which allows a flexible mix-and-match of MCH sub-modules and provides state-of-the-art switching technology for slim- and fat-pipe fabrics at the same time. The presentation will show how this transition from existing MCHs to future solutions can be effected smoothly while maintaining a maximum of backward compatibility.

Speaker: KOERTE, Heiko (N.A.T.)
• 156
Versatile Configuration and Control Framework for Real Time Data Acquisition Systems

FPGA-based systems for real-time data acquisition in long-term experiments require an interface for hardware control if they do not act autonomously. Besides a low-level register interface, access to I2C and SPI buses is also needed to configure the complete device. Advanced control flows and calculations required for configuration and calibration are not possible without implementation on the FPGA; this demands either repeated device accesses from a client system or a complex hardware implementation. In the former case, the duration of the network accesses considerably slows down a software calibration on the client side, causing undesired down time for the measurement. By using SoC-FPGA solutions with a microprocessor, more sophisticated configuration and calibration solutions, as well as standard remote access protocols, can be integrated in software. A versatile control system has been implemented for the readout of superconducting sensors and quantum bits. This software framework offers convenient access to Xilinx ZYNQ SoC-FPGAs via remote procedure calls (RPCs). Based on the open-source RPC system gRPC, functionality with low-latency control flow, complex algorithms, data conversion and processing, as well as configuration via external buses, can be provided to a client via Ethernet. Furthermore, client interfaces for various programming languages can be generated, which eases collaboration among different working groups and integration into existing software. This contribution presents the framework as well as measurement data regarding latency and data throughput.

Speaker: KARCHER, Nick (Karlsruhe Institute of Technology (KIT))
• 157
Firmware and Software Implementation Status of the nBlm and icBlm Systems for ESS Facility

The ionization chamber Beam Loss Monitor (icBLM) and neutron Beam Loss Monitor (nBLM) systems are fundamental components of the European Spallation Source (ESS) accelerator safety infrastructure. The main responsibility of these devices is the instantaneous and reliable detection of accelerated proton beam loss that exceeds a predefined safety threshold. The detection statuses from several detectors are combined using an operator-defined logical function to emit a Beam Inhibit signal to the Machine Protection System, which brings the accelerator to a safe state.

The paper presents implementation details of the beam loss detection algorithms used for both types of system, together with an overview of the remaining firmware and software layers. Each component of the system is discussed in the context of the requirements defined by ESS and of possible improvements. In addition, the contribution covers overall system evaluation in simulation, laboratory and real facility tests.
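A minimal model of the detection step reads as follows (illustrative only; the window length, threshold and actual ESS firmware algorithms are defined by the project):

```python
def beam_inhibit(loss_samples, threshold, window=4):
    """Return True (emit Beam Inhibit) as soon as the integrated loss
    over a sliding window of samples exceeds the safety threshold --
    a toy model of the nBLM/icBLM detection step."""
    running = 0.0
    buf = []
    for sample in loss_samples:
        buf.append(sample)
        running += sample
        if len(buf) > window:          # drop the oldest sample
            running -= buf.pop(0)
        if running > threshold:
            return True
    return False
```

In firmware the same logic would run per clock cycle with fixed-point accumulators, and the per-detector decisions would then feed the operator-defined logical function.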

Speaker: Dr JALMUZNA, Wojciech (Lodz University of Technology)
• 158
Integrated real-time supervisory management for handling of off-normal-events and feedback control of tokamak plasmas

For long-pulse tokamaks, one of the main challenges in control strategy is to simultaneously reach multiple control objectives using a limited set of actuators and to robustly handle off-normal events (ONEs) in real time (RT). We have developed a flexible and generic architecture for the plasma control system (PCS) to deal with these issues. The PCS is separated into a tokamak-dependent and a tokamak-agnostic layer. The first converts tokamak-specific signals to a generic state description used by the tokamak-agnostic layer, and vice versa. The latter includes:
• a plasma-event monitor and supervisor to evaluate ONEs, plasma and actuator states, decide the appropriate control scenario (list of control tasks), and activate and prioritize control tasks
• an actuator manager to decide the best allocation of actuator resources to active tasks and distribute commands to the corresponding actuators
• controllers that execute control laws to fulfil their tasks with the assigned resources and ask for more/fewer resources if necessary
The modular and interface-standardized features of the tokamak-agnostic layer facilitate implementation, improve maintenance and development capabilities, and make the layer easily transferable to different devices.
We present here the recent development of RT decision-making by the supervisor to switch between various control scenarios (normal, backup, shutdown, etc.). First, for each ONE, a danger level and a corresponding ONE-reaction level are determined. Then, a ONE-to-Scenario mapping decides the appropriate control scenario based on the set of ONEs and the associated ONE-reaction levels. The whole PCS has been implemented on TCV and applied to disruption avoidance experiments with the edge density limit, demonstrating the excellent capabilities of an RT integrated strategy.
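The ONE-to-Scenario decision can be caricatured as a lookup keyed on the most severe active reaction level (the scenario names and level numbers below are hypothetical, purely to show the structure, not the TCV tables):

```python
# hypothetical reaction-level -> scenario table
SCENARIO_BY_LEVEL = {
    0: "normal",
    1: "backup",
    2: "soft-shutdown",
    3: "emergency-shutdown",
}

def select_scenario(one_reaction_levels):
    """Pick the control scenario demanded by the most severe active
    off-normal event; no active ONE keeps the normal scenario."""
    level = max(one_reaction_levels.values(), default=0)
    return SCENARIO_BY_LEVEL[level]
```

The real supervisor additionally tracks plasma and actuator states and re-prioritizes the task list when the scenario changes.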

Speaker: Dr VU, Trang (SPC-EPFL)
• 159
EasyPET Technologies

EasyPET-3D is an affordable PET scanner technology capable of acquiring real-time, high-quality in-vivo PET images of mice. While the high price of preclinical PET scanners makes them inaccessible for many research centers, EasyPET-3D reduces the overall costs without compromising image quality. The system uses an innovative acquisition method based on two rotation axes for the movement of detector arrays, and a simple front-end

Speaker: CORREIA, Pedro (University of Aveiro)
• Industrial Exhibit 19.10.
• Poster session C-02 - program see C-01 on Wed 14/10
• Tuesday, October 20
• Oral presentations TRIG01
Convener: NOMACHI, MASAHARU
• 160
Global Trigger System of JUNO Central Detector

As a neutrino experiment, the Jiangmen Underground Neutrino Observatory (JUNO) aims to determine the neutrino mass ordering, along with other goals. The Central Detector (CD) of JUNO is a 20 kton liquid scintillator detector with a diameter of 35.4 m. About 18000 photomultiplier tubes (PMTs) are spread over the surface of the CD sphere to detect the scintillation light.
As the dark noise of the PMTs is quite high (about 50 kHz), a global trigger system is designed to reduce the trigger rate and eliminate non-physics signals. The huge size of the CD leads to large Time-of-Flight (TOF) differences for different reaction vertices. The global trigger system implements a simple online Vertex Fitting Logic (VFL) to correct the TOF difference and "guess" the event vertex. If one or more vertices match the current PMT hit information, a trigger accept is generated. With this algorithm the trigger window can be reduced to 50 ns, significantly reducing the influence of PMT dark noise.
The architecture of the global trigger system will be introduced in this talk. We will also explain the mechanism of the VFL algorithm.
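A toy version of the TOF correction at the heart of the VFL looks like this (a sketch of the idea, not the FPGA logic; the real system would use the effective light speed in liquid scintillator and scan many trial vertices):

```python
import math

C_M_PER_NS = 0.2998   # vacuum light speed in m/ns; a real VFL would use
                      # the effective propagation speed in scintillator

def vfl_trigger(hits, vertex, window_ns=50.0, multiplicity=3):
    """For a trial vertex, subtract each PMT's time of flight and ask
    whether `multiplicity` corrected hit times coincide within the
    trigger window.

    hits: iterable of ((x, y, z) PMT position in m, hit time in ns)
    """
    corrected = sorted(t - math.dist(pos, vertex) / C_M_PER_NS
                       for pos, t in hits)
    for i, t0 in enumerate(corrected):
        n = sum(1 for t in corrected[i:] if t - t0 <= window_ns)
        if n >= multiplicity:
            return True
    return False
```

Hits that originate from the trial vertex collapse to the same corrected time, so a narrow window suffices, while uncorrelated dark-noise hits rarely pile up inside it.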

Speaker: DONG, Jianmeng (Tsinghua University)
• 161
ATLAS hardware-based Endcap Muon Trigger for future upgrades

The LHC is expected to increase its center-of-mass energy to 14 TeV with an instantaneous luminosity of 2x10^34 cm^-2 s^-1 for Run 3, scheduled from 2021 to 2023. The High-Luminosity LHC (HL-LHC) program is then planned to start operation in 2026 with an instantaneous luminosity of 7.5x10^34 cm^-2 s^-1. In order to cope with the high event rate, continuous upgrades of the ATLAS trigger system are mandatory.
The hardware-based Endcap Muon trigger system identifies muons with high transverse momentum by combining data from a fast muon trigger detector, TGC. In the ongoing upgrade for Run 3, new detectors will be installed in the inner station region for the endcap muon trigger. In order to handle data from various detectors, some new electronics have been developed, including the trigger processor board known as Sector Logic. Finer track information from the new detectors can be used as part of the muon trigger logic to enhance performance significantly.
For the HL-LHC, the new hardware muon trigger is required to reconstruct muon candidates with improved momentum resolution to suppress the trigger rate while keeping the efficiency. Track reconstruction using fully granular information enables the formation of more offline-like tracks, using a Virtex UltraScale+ FPGA with about a hundred transceiver pairs and huge memory resources for a pattern matching algorithm.
This presentation describes the aforementioned upgrades of the hardware-based Endcap Muon trigger system. Particular emphasis will be placed on the new electronics design and the firmware. The expected trigger performance will also be discussed.

Speaker: MINO, Yuya (Kyoto University (JP))
• 162
Data Transportation with ZeroMQ at the Belle II High Level Trigger

The data acquisition system of the Belle II experiment is designed for a sustained first-level trigger rate of up to 30 kHz at the design luminosity of the SuperKEKB collider. The raw data read out from the subdetector front-ends is delivered in real time into an online High Level Trigger (HLT) farm, consisting of up to 20 computing nodes housing around 5000 processing cores, where it is reconstructed in real time by the full Belle II reconstruction software. Based on this online reconstruction, events are filtered before being stored to disk for later offline processing and analysis.
In this talk, we will present the implementation of the newly developed data flow throughout the HLT system utilizing the open-source library ZeroMQ, which was first used in the beam run period in fall 2019. We will compare the performance and capabilities of the new system with the previous implementation based on a custom shared memory ringbuffer and discuss advantages and disadvantages of both approaches.

Speaker: PRIM, Markus (KIT)
• 163
Data Acquisition System for the COMPASS++/AMBER Experiment

We present a new data acquisition system for the COMPASS++/AMBER experiment, designed as a further development of the Intelligent FPGA-based Data Acquisition framework. The system is designed to have a throughput of 5 GB/s. It includes front-end cards, a fully digital trigger processor, data multiplexers, a data switch, an event builder, and a PCIe interface to the PC for data storage. We designed the system to provide free-running continuous readout, which allows us to implement sophisticated triggering algorithms by delaying data filtering until the event builder stage. Data selection and event assembly then require a time structure in the data streams, with different granularity for different detectors. By routing data based on this time structure we can average the data rates at the event builder inputs and easily achieve scalability. The scalable architecture allows us to increase the throughput of the system and achieve a truly triggerless mode of operation.
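Stripped to its essence, routing by time structure is a slice-to-node assignment (a sketch under assumed names; the slice length and the number of builder nodes are free parameters of the real system):

```python
def builder_for(timestamp_ns, slice_length_ns=10_000, n_builders=8):
    """Free-running readout: every data word carries a timestamp, data
    are grouped into fixed time slices, and each slice is assigned to
    one event-builder node, spreading the load evenly."""
    return (timestamp_ns // slice_length_ns) % n_builders
```

All data words from the same time slice, regardless of which detector produced them, converge on the same builder, where filtering can finally be applied.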

Speaker: LEVIT, Dmytro (Technische Universitaet Muenchen (DE))
• 164
Time Synchronization Schemes for the Future Hyper-Kamiokande Experiment

On the strength of the double Nobel prize-winning (Super-)Kamiokande experiments and an extremely successful long-baseline neutrino program, the third-generation water Cherenkov detector, Hyper-Kamiokande, is being developed by an international collaboration as a leading worldwide experiment based in Japan. Hyper-Kamiokande will be able to measure leptonic CP violation with the highest precision, will have excellent capability to search for proton decay, and will be able to precisely test the three-flavor neutrino oscillation paradigm, in conjunction with a strong astrophysical program.
The Hyper-Kamiokande detector will be hosted in the Tochibora mine, about 295 km away from the J-PARC proton accelerator research complex in Tokai, Japan, and will be the largest underground water Cherenkov detector, with a cylindrical tank 68 m in diameter and 72 m in height, approximately 8 times the fiducial volume of Super-Kamiokande. It will be equipped with about 50,000 photo-sensors. Synchronization of the timing of each PMT signal is crucial for precise measurement of the photon arrival time. In Hyper-Kamiokande, the timing resolution of the photo-sensors is expected to be sub-nanosecond with a jitter of less than 100 ps. The CERN-designed White Rabbit protocol and a fully custom FPGA-based solution are under evaluation to reach this objective. This contribution will show the architectural solutions envisaged for the experiment and will focus on the R&D program, comparing the solutions under evaluation and showing the first measurements conducted on the prototypes.

Speaker: RUSSO, Stefano (CNRS (new))
• 165
Streaming readout system for the BDX experiment

Due to the lack of results from 'traditional' Dark Matter (DM) searches, in recent years the experimental activity has extended to searching for DM hints at different mass scales through new experiments performed at accelerators. The Beam Dump eXperiment (BDX) at Jefferson Laboratory aims to reveal dark matter particles produced in the interaction of an intense electron beam with the beam dump. An electromagnetic calorimeter (CsI(Tl) crystals read out by SiPMs), protected from environmental background by an active veto counter (plastic scintillator with SiPMs), will detect the interaction of DM with atomic electrons of the detector. The absence of an external trigger to start the data acquisition when an event of interest is produced motivated the implementation of a fully streaming DAQ system. The ~1000 channels of the BDX detectors will be digitized by custom flash digitizers and the data streamed to a CPU farm for further processing. Low cost per channel, flexible front-end circuitry, a high-performance timing system, an adjustable memory buffer, and a fully triggerless approach are the requirements driving the design of the BDX-WaveBoard, a 12-channel 250 MHz 14-bit digitizer. Waveforms and timestamps (provided by a GPS for global synchronization) will be streamed to the TriDAS software developed for KM3NeT, an underwater neutrino telescope. In this contribution we will describe the WaveBoard, TriDAS and their performance as measured with a prototype of the BDX detector (BDX-MINI) during the Spring 2019 data taking. Results will be compared to those obtained by running the DAQ in standard triggered mode.

Speaker: Dr MARSICANO, Luca (INFN)
• Industrial Exhibit 20.10.
• Poster session A-03 - program see A-01 on Mon 12/10
• First feedback from the conference (discussion)
• Wednesday, October 21
• Oral presentations CMTS02
Convener: TANG, Fukun (University of Chicago (US))
• 166
A New Scheme of Redundant Timing Crosschecking for Frontend Systems

In high energy physics experiments, front-end digitization modules are usually driven by a common clock. At the front-end electronics and digitization modules of the detector, a precise timing reference must be established in order to make a useful timing measurement. It would therefore be very useful for the front-end digitization system to include a feature for redundant timing crosschecking. The inspiration for this timing crosschecking scheme came from the long-abandoned analog mean-timer schemes. In this scheme, several front-end (FE) modules are connected together through a cable set with taps connected to the digitization modules. Pulses are driven from all FE modules alternately, without overlapping. The arrival times of the pulses sent from the FE modules are digitized at the timing crosschecking module. The mean times change as the temperature changes, but the difference between two FE modules cancels mathematically. This feature makes it possible to achieve good timing precision without complex hardware.
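The cancellation claimed above is simple arithmetic, which a toy model with made-up numbers makes explicit: launch times and cable positions differ, the propagation delay drifts with temperature, yet the difference of mean times depends only on the launch times.

```python
def mean_arrival(t0, x, k, length):
    """A pulse launched at time t0 from position x on a cable of given
    length (propagation delay k per metre) is digitized at both ends;
    the mean arrival time is t0 + k*length/2, independent of x."""
    t_left = t0 + k * x
    t_right = t0 + k * (length - x)
    return (t_left + t_right) / 2

# Warming the cable changes k, shifting every mean time by the same
# k*length/2, so the difference between two FE modules is drift-free.
```

Here the two taps play the role of the digitization modules at both ends of the cable set; any common-mode delay change drops out of the FE-to-FE difference.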

Speaker: Dr WU, Jinyuan (Fermi National Accelerator Lab. (US))
• 167
Development of next-generation timing system for the Japan Proton Accelerator Research Complex

A precise and stable timing system is necessary for high intensity proton accelerators such as the J-PARC. The existing timing system, which was developed during the construction period of the J-PARC, has been working without major issues since 2006. After a decade of operation, however, the optical modules, which are key components for signal transfer, have already been discontinued. Thus, the next-generation timing system for the J-PARC is under development. The new system is designed to be compatible with the existing system in terms of the operating principle, and utilizes modern high speed signal communication for the transfer of the clock, trigger, and type code. We present the system configuration of the next-generation timing system and its current status.

Speaker: TAMURA, Fumihiko (Japan Atomic Energy Agency (JP))
• 168
Tau Identification with Deep Neural Networks at the CMS Experiment

The reconstruction and identification of tau leptons decaying into hadrons are crucial for physics studies with tau leptons in the final state at the LHC. The recently deployed tau identification algorithm using deep neural networks at the CMS experiment for the discrimination of taus from light flavour quark or gluon induced jets, electrons, or muons is an ideal example for the exploitation of modern deep learning neural network techniques. With this algorithm a significant suppression of tau misidentification rates has been achieved for the same identification efficiency as compared to previous algorithms at the LHC, leading to considerable performance gains for physics studies with tau leptons. This new multi-class deep neural network based tau identification algorithm at CMS and its performance with proton collision data will be discussed.

Speaker: CHOUDHURY, Somnath (Indian Institute of Science (IN))
• 169
Remote Configuration of the ProASIC3 on the ALICE Inner Tracking System Readout Unit

A Large Ion Collider Experiment (ALICE) is one of the four major experiments conducted by CERN at the Large Hadron Collider (LHC). The ALICE detector is currently undergoing an upgrade for the upcoming Run 3 at the LHC. The new Inner Tracking System (ITS) sub-detector is part of this upgrade. The front-end electronics of the ITS are deployed in a radiation environment, and Single Event Upsets (SEUs) in the SRAM-based Xilinx Kintex Ultrascale FPGA of the ITS Readout Unit (RU) are therefore a real concern. To clear SEUs in the configuration memory of this device, a secondary Flash-based Microsemi ProASIC3E (PA3) FPGA is included on the RU. This device configures and continuously scrubs the Xilinx FPGA while data-taking is ongoing, which avoids accumulation of SEUs. There are altogether 192 RUs located in the experimental cavern with limited access. The communication path to the RUs is via the radiation hard Gigabit Transceiver (GBT) system on 100 m long optical links. The PA3 is reachable via the GBT Slow Control Adapter (GBT-SCA) ASIC using a dedicated JTAG bus driving channel.
During the course of Run 3, it is foreseeable that the FPGA design of the PA3 will require upgrades to correct possible issues and add new functionality. It is therefore mandatory that the PA3 itself can be configured remotely, for which a dedicated software tool is needed. This paper presents the design and implementation of the distributed tools to remotely re-configure the PA3 FPGAs.

Speaker: YUAN, Shiming (University of Bergen (NO))
• Industrial Exhibit 21.10.
• Poster session B-03 - program see B-01 on Tue 13/10
• Thursday, October 22
• Oral presentations DAQ04
Convener: BOHM, Christian (Stockholm University (SE))
• 170
Evaluation of a streaming readout solution for Jefferson Lab experiments

Jefferson Lab (JLab) operates the CEBAF accelerator that provides a high energy (up to 12 GeV) and high current (up to 150 µA) polarized electron beam for fixed target experiments. JLab's mission is to provide substantial progress in understanding Quantum Chromodynamics (QCD), the current theory of the spectrum and the dynamics of hadronic matter. Given the broad scientific program, a general-purpose detector, CLAS12, has been designed and constructed. Such a detector requires a sophisticated trigger, and current experiments use an on-line FPGA-based system that relies upon custom firmware and electronics, both of which are difficult to reconfigure from one experiment to the next. To overcome these challenges, a transition to streaming readout has been proposed, since it would allow a more flexible, easier-to-debug software trigger to be developed. To facilitate this, JLab is gaining experience with streaming readout systems and is actively working to design and test a streaming data acquisition architecture. Significant work has already been done to make existing front-end electronics compatible with operation in streaming mode. On the software side, much work remains to be done, but an existing framework, TriDAS, which was developed for underwater neutrino telescopes, can in principle be used with the existing JLab hardware. A test stand is already in place in the INDRA DAQ development lab at JLab. This paper will present the development of this TriDAS-based test system and its operation in-beam using parts of the CLAS12 detector to evaluate streaming readout for a large general purpose detector.

Speaker: Dr BATTAGLIERI, Marco (INFN and Jefferson Lab)
• 171
Status of the Phase-2 Tracker Upgrade of the CMS experiment at the HL-LHC

The Phase-2 Upgrade of the CMS experiment is designed to prepare its detectors for operation at the High Luminosity Large Hadron Collider (HL-LHC). The upgraded collider will begin operation in 2026, featuring challenging new conditions in terms of data throughput, pile-up and radiation, which is why the tracker detector will be entirely replaced by a new design. We present the current development activities ongoing in CMS aimed at the design, test and validation of the components of the tracker detector, and its read-out, calibration, control, and data processing chains.

• 172
Combining Triggered and Streaming Readout - The sPHENIX DAQ System

Recently, the Streaming Readout paradigm has gained traction as a viable alternative to classic triggered readout architectures for future experiments, for example at the planned Electron-Ion Collider. The sPHENIX apparatus at the Relativistic Heavy Ion Collider, a significant upgrade of the former PHENIX detector, cannot yet implement a full streaming readout, although the (by data volume) major detectors will already be read out in this way. The electromagnetic and hadronic calorimeters are the main detector components still using a traditional triggered readout. We have made significant progress combining the triggered with streaming data and implementing a combined readout. We will show results and progress from recent test beams. We will explain the challenges with the combination of streaming and triggered data streams, and present a general update of our DAQ system status.
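The core of such a combined readout is associating streamed hits with triggered events by time. A minimal Python sketch of the idea (hit and trigger formats, the window convention and the function name are assumptions for illustration, not the sPHENIX implementation):

```python
def extract_triggered(stream_hits, trigger_times, window):
    """For each trigger timestamp, collect the streamed hits whose
    timestamps fall inside [t_trig, t_trig + window)."""
    events = []
    for t_trig in trigger_times:
        events.append([(t, data) for t, data in stream_hits
                       if t_trig <= t < t_trig + window])
    return events

hits = [(5, 'a'), (7, 'b'), (30, 'c')]
print(extract_triggered(hits, trigger_times=[4, 29], window=5))
# [[(5, 'a'), (7, 'b')], [(30, 'c')]]
```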

Speaker: Dr PURSCHKE, Martin L. (Brookhaven National Laboratory (US))
• 173
Assessing the Requirements of a Computer-Based PET Coincidence Engine Using a Cluster of 60 Raspberry Pi

In current PET scanners, coincidence finding is often performed inside FPGAs because of their high parallelization capacity. Nowadays, the trend in PET is to multiply the number of individual detection channels, and the FPGAs dedicated to the coincidence engine thus need to be increasingly powerful. However, FPGAs have a limited number of I/O ports, and their cost becomes prohibitive for the largest devices. This paper attempts to determine the scale of real-time computational resources needed for a computer-based coincidence engine dedicated to a LabPET II-based Ultra-High-Resolution (UHR) PET scanner with more than 129,000 channels. This is done by generating PET data from GATE simulations using a cluster of 60 Raspberry Pi computers to mimic the behavior of the UHR data acquisition system. During our preliminary tests, we were able to transmit data simultaneously from the 60 Raspberry Pi computers to a host computer and thus to establish a baseline of the required performance before implementation. Optimized results will be presented at the conference.
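At its core, a software coincidence engine scans a time-sorted stream of singles for pairs closer than a coincidence window. A deliberately simplified Python sketch of that scan (the data format, pairing policy and names are illustrative assumptions, not the UHR engine):

```python
def find_coincidences(singles, window):
    """singles: time-sorted list of (timestamp, channel).
    Pair neighbouring hits on different channels that lie within
    `window` of each other; unmatched singles are discarded."""
    pairs = []
    i = 0
    while i < len(singles) - 1:
        (t0, ch0), (t1, ch1) = singles[i], singles[i + 1]
        if t1 - t0 <= window and ch0 != ch1:
            pairs.append((singles[i], singles[i + 1]))
            i += 2  # both singles consumed by this coincidence
        else:
            i += 1  # drop the earlier single and move on
    return pairs
```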

Speaker: SAMSON, Arnaud (Université de Sherbrooke)
• 174
Data Processing for the Linac Coherent Light Source

The increase in velocity, volume, and complexity of the data generated by the upcoming LCLS-II upgrade presents a considerable challenge for data acquisition, data processing, and data management. These systems face formidable challenges due to the extremely high data throughput, hundreds of GB/s to multi-TB/s, generated by the detectors at the experimental facilities and to the intensive computational demand for data processing and scientific interpretation. The LCLS-II Data System offers a fast, powerful, and flexible architecture that includes a feature extraction layer designed to reduce the data volumes by at least one order of magnitude while preserving the science content of the data. Innovative architectures are required to implement this reduction with a configurable approach that can adapt to the multiple science areas served by LCLS. In order to increase the likelihood of experiment success and improve the quality of recorded data, a real-time analysis framework provides visualization and graphically-configurable analysis of a selectable subset of the data on the timescale of seconds. A fast feedback layer offers dedicated processing resources to the running experiment in order to provide experimenters feedback about the quality of acquired data within minutes. We will present an overview of the LCLS-II Data System architecture with an emphasis on the Data Reduction Pipeline (DRP) and online monitoring framework.
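As a concrete, greatly simplified illustration of a feature-extraction step that can cut data volume by an order of magnitude or more, consider zero-suppressing a sparse detector frame. The function below is a generic sketch under assumed names and formats, not the actual DRP algorithm:

```python
def zero_suppress(frame, threshold):
    """Reduce a raw frame to (index, value) pairs above threshold.
    For sparse frames this shrinks the payload dramatically while
    preserving the science content (the hit pixels)."""
    return [(i, v) for i, v in enumerate(frame) if v > threshold]

frame = [0, 0, 12, 0, 3, 0, 0, 0]
print(zero_suppress(frame, threshold=2))  # [(2, 12), (4, 3)]
```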

Speaker: PERAZZO, Amedeo
• 175
Data Acquisition and Signal Processing for the Gamma Ray Energy Tracking Array (GRETA)

The Gamma Ray Energy Tracking Array (GRETA) is a 4-π detector system, currently under development, capable of determining the energy, timing and tracking of multiple gamma-ray interactions inside germanium crystals, as demonstrated in GRETINA. Charge sensitive amplifiers instrument the crystals, and their outputs are converted using analog-to-digital converters for real-time digital processing. In this paper, we will present the design of the detector system and data acquisition with respect to the real-time components, along with the MATLAB modeling of the digital signal processing path used to find the energy and timing of the gamma rays at low and high rates, and compare them with the performance of the analog readout. We intend to use the experience of GRETINA and these simulations to improve the processing executed in real time in field programmable gate arrays. We will describe the performance of the data acquisition system hardware and the performance of simulated and measured signals under various conditions for the signal processing. In addition, we will describe enhancements to the digital signal processing and their effects on the results.

Speaker: STEZELBERGER, Thorsten (Lawrence Berkeley National Lab)
• Industrial Exhibit 22.10.
• Poster session C-03 - program see C-01 on Wed 14/10
• Friday, October 23
• Oral presentations MISC02
Convener: TÉTRAULT, Marc-André (Université de Sherbrooke)
• 176
High-Level Python-based Hardware Description Language (HLPyHDL)

We present a High-Level Python-based Hardware Description Language (HLPyHDL), which uses object-oriented Python as its source language and converts it to standard VHDL.

Compared to other approaches to building converters from a high-level programming language into a hardware description language, this new approach aims to maintain an object-oriented paradigm throughout the entire process. Instead of removing all the high-level features from Python to make it into an HDL, this approach goes the opposite way: it tries to show how certain features from a high-level language can be implemented in an HDL, providing the corresponding benefits of high-level programming for the user. This approach sees the following features as mandatory for object-oriented design: classes (combination of methods and data), data abstraction (types are defined by their public interfaces), encapsulation (preservation of internal invariants), information hiding (expressing intentions), inheritance (similarity hierarchies) and polymorphism (similar types have similar interfaces). We demonstrate the features of HLPyHDL in the context of the development of firmware for a Belle II detector subsystem: the K-Long and Muon Counter (KLM). HLPyHDL is in an early development state; nevertheless, core functionalities such as converting to VHDL and running Python-based simulations are already implemented. Running simulations in Python has the advantage that the user can take advantage of all Python libraries and features in the test code. Only the Unit Under Test (UUT) needs to obey the limitations introduced by the conversion to VHDL.

Speaker: PESCHKE, Richard (University of Hawaii)
• 177
Waveform Feature Extraction in Belle II Time-of-Propagation (TOP) Detector

The Time of Propagation (TOP) subdetector at the Belle II experiment requires a single photon timing resolution of below 100 picoseconds to identify the type of charged hadrons produced in $e^+e^-$ collisions at SuperKEKB collider. Due to data bandwidth constraints only the amplitude and the timing information calculated from the digitized photon pulses are transmitted to the Belle II DAQ system.
The initial feature extraction method used software constant fraction discrimination (CFD) to extract the uncalibrated signal timing information in the processing system. An offline time-base calibration (TBC) is then used to correct the photon hit time. The main downside of this method is degraded time resolution for small signal amplitudes, due to their poor signal-to-noise ratio. Template fitting applied to the digitized, time-base calibrated signal has been proposed to address this problem.
Various template fitting strategies using analytical and chi-squared algorithms, employing both polynomial and experimental-data generated templates, have been studied. These methods are computationally more intensive than the CFD method. To keep the processing time within the necessary limits imposed by the projected high trigger rate at the design luminosity of the SuperKEKB accelerator, implementation in programmable logic is also being studied. The TOP firmware allows the optimal feature extraction algorithm to be chosen dynamically, depending on the number of waveforms pending processing. These studies and their results are presented here in detail.
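The baseline software CFD step described above can be sketched in a few lines: find the waveform peak and linearly interpolate the leading-edge crossing of a fixed fraction of that peak. This is a generic illustration only; the function name, default fraction and interpolation choice are assumptions, not the TOP firmware:

```python
def cfd_time(samples, fraction=0.5):
    """Return the interpolated (fractional) sample index at which the
    leading edge crosses `fraction` of the pulse peak, or None if the
    threshold is never crossed from below."""
    threshold = fraction * max(samples)
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i]:
            # linear interpolation between the two bracketing samples
            return (i - 1) + (threshold - samples[i - 1]) / (samples[i] - samples[i - 1])
    return None

pulse = [0, 2, 4, 8, 10, 6, 2]  # digitized single-photon-like pulse
print(cfd_time(pulse))  # 2.25: the 50% threshold (5.0) is crossed between samples 2 and 3
```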

Speaker: KOHANI, Shahab (University of Hawaii at Manoa)
• 178
PicoPET -- An advanced data acquisition architecture for modern nuclear imaging

PicoPET is a new generation of advanced data acquisition system for modern nuclear imaging. In contrast to conventional front-end electronics designs, the key innovation of the PicoPET electronics is to include almost all functions of the front-end readout electronics inside a low-cost FPGA, including signal splitting for energy and timing measurements, signal clustering and event triggering for detectors with different configurations, ADC, Time-to-Digital Conversion (TDC), etc. This not only simplifies the analog components and reduces the cost, but also provides powerful and flexible signal processing. PicoPET adopts a daisy-chained topology: the PicoPET modules are linked linearly using high-speed LVDS transceivers with a data rate of 1 Gbps, which can be upgraded to 12.5 Gbps. We have validated the PicoPET electronics and architecture in 4 PET systems. The smallest system used only one PicoPET module to read out 92 SiPMs, and the largest system used 90 PicoPET modules to read out 8,640 SiPMs. We conclude that the PicoPET electronics system provides a practical solution to the bottleneck problem that has limited the development of potentially advanced nuclear imaging technology using tens of thousands of photosensors.

Speakers: Mr ZHAO, Zhixiang (Shanghai Jiaotong University, Shanghai, China) , Dr PENG, Qiyu (Lawrence Berkeley National Laboratory, Berkeley, CA, USA) , Mr YANG, Jingwei (Huazhong University of Science and Technology, Wuhan, China)
• 179
Performance of the Unified Readout System of Belle II

The Belle II experiment at the SuperKEKB collider at KEK, Tsukuba, Japan successfully started data taking with the full detector in March 2019, as the luminosity-frontier experiment of the new generation to search for physics beyond the Standard Model of elementary particles. In order to read out the events from the seven subdetectors and the trigger system to precisely measure the decay products of B and charm mesons and tau leptons, we adopt a highly unified readout system, including a unified trigger timing distribution (TTD) system for the entire Belle II detector, a unified high speed data link system (Belle2link) which is used by all subdetectors except the pixel detector, and a 9U-VME-based backend system (COPPER) to receive Belle2link data. Each subdetector frontend readout system has an FPGA in which the unified firmware components of the TTD receiver and Belle2link transmitter are embedded. The system aims at taking data at a 30 kHz trigger rate with about 1% deadtime from the frontend readout system. We share the operation time between accelerator commissioning and physics data taking. The trigger rate is still much lower than our design, but the background level is high, as it is one of the limiting factors of the accelerator and detector operation. Hence the occupancy and the stress on the frontend electronics are rather severe, causing various kinds of instabilities. We present the performance of the system, including the achieved trigger rate, deadtime and stability, and discuss the experiences gained during the operation.

Speaker: NAKAO, Mikihiko (KEK)
• 180

Belle II is an experiment dedicated to exploring new physics beyond the Standard Model of elementary particles at the beam intensity frontier, the SuperKEKB accelerator located at KEK, Japan. Belle II started data-taking in April 2018, with a synchronous data acquisition (DAQ) system based on pipelined trigger flow control. The Belle II DAQ system is designed to handle a 30 kHz trigger rate with about 1% dead time at a raw event size of more than 1 MB. It is a big challenge to maintain DAQ stability under this requirement in the long term with the current readout system; meanwhile, the readout system is also becoming a bottleneck for providing higher-speed data transfer. A PCI-Express-based readout module with a data throughput as high as 100 Gbps was adopted for the upgrade of the Belle II DAQ system. We focus in particular on the design of the firmware and software driver based on this new generation of readout board, called PCIe40, with an Altera Arria 10 FPGA chip. 48 GBT (Gigabit Transceiver) links, the DMA mechanism, the interface to the Timing Trigger System, as well as the slow-control system have been designed for integration with the current Belle II DAQ system. We present the development of firmware and software for the new readout system, and the test performance with a pocket Belle II DAQ, specifically involving the front-end electronics of the Belle II TOP sub-detector.

Speaker: ZHOU, Qidong (High Energy Accelerator Research Organization (JP))
• 181
FPGA-Based Real-Time Image Manipulation and Advanced Data Acquisition for 2D X-Ray Detectors

Scientific experiments rely on measurements that provide the data required to extract the desired information or conclusions. Data production and analysis are therefore essential components at the heart of any scientific experimental application.
Traditionally, efforts on detector development for photon sources have focused on the properties and performance of the detection front-ends. In many cases, the data acquisition chain, as well as data processing, are treated as a complementary component of the detector system and added at a late stage of the project. In most cases, data processing tasks are entrusted to CPUs, thus achieving the minimum bandwidth requirements and keeping the hardware relatively simple in terms of functionality. This also minimizes design effort, complexity and implementation cost.
This approach has been changing in recent years, as it does not fit new high performance detectors; FPGAs and GPUs are now used to perform complex image manipulation tasks such as image reconstruction, image rotation, accumulation, filtering, data analysis and many others. This frees CPUs for simpler tasks.
The objective of this paper is to present the performance of real-time FPGA-based image manipulation techniques, as well as the performance of the ESRF data acquisition platform called RASHPA, integrated into the back-end board of the SMARTPIX photon-counting detector, also developed at the ESRF.

Speaker: Dr MANSOUR, Wassim (ESRF)
• Student Paper Awards
• Conclusion