
System-on-Chip Workshop - CERN

Europe/Zurich
40/S2-C01 - Salle Curie (CERN)

Diana Scannicchio (University of California Irvine (US)), Frans Meijers (CERN), Marc Dobson (CERN), Ralf Spiwoks (CERN), Revital Kopeliansky (Indiana University (US))
Registration
SoC Workshop - CERN - Registration Form
160 / 160
Participants
  • Aamir Irshad
  • Adam Artur Wujek
  • Adrian Nikolica
  • Alain Antoine
  • Alejandro Santos
  • Ales Svetek
  • Alessandro Cerri
  • Alessandro Marchioro
  • Alessandro Thea
  • Alexander Ruede
  • Alexandre Frassier
  • Alice Laure Celine Faure
  • Andrea Boccardi
  • Andreas Osborg
  • André David
  • Angel Abusleme
  • Antoine Marzin
  • Anyes Taffard
  • Arkadiy Shevrikuko
  • Arthur Spierer
  • Artur Lobanov
  • Arturo Sanchez Pineda
  • Atanu Modak
  • Attila Racz
  • Awais Bin Zahid
  • Calin Bira
  • Cenk Yildiz
  • Christos Gentsos
  • Dan Gastler
  • Daniel Perrin
  • Dariusz Jakub Zielinski
  • David Belohrad
  • David Merodio Codinachs
  • David Miller
  • David Sankey
  • Davide Cieri
  • Diana Scannicchio
  • Dominique Gigi
  • Dorothea Pfeiffer
  • Emily Ann Smith
  • Emmanuel Josue Molina Gonzalez
  • Eric Buschmann
  • Eric Shearer Hazen
  • Fabrizio Alfonsi
  • Fernando Carrio Argos
  • Filiberto Bonini
  • Filipe Martins
  • Franco Brasolin
  • Frans Meijers
  • Fukun Tang
  • Gael Ducos
  • Gerhard Immanuel Brandt
  • Gianluca Aglieri Rinella
  • Giuseppe Avolio
  • Gregory Donzel
  • Grzegorz Daniluk
  • Gustavo Andres Uribe Gomez
  • Gwen Llewellyn Gardner
  • Hamza Boukabache
  • Hannes Sakulin
  • Igor Soloviev
  • Inaki Ortega Ruiz
  • Ioannis Xiotidis
  • Janet Ruth Wyngaard
  • Javier Galindo Guarch
  • Jelena Spasic
  • Jeroen Hegeman
  • Jesra Tikalsky
  • Jiri Kral
  • John Evans
  • Jonathan Emery
  • Juan Carlos Allica Santamaria
  • Juan David Gonzalez Cobas
  • Juan Manuel Guijarro
  • Kai Chen
  • Kamil Kucel
  • Keyshav Suresh Mor
  • Kolyo Raychinov
  • Kostas Alexopoulos
  • Kris Chaplin
  • Kristian Hahn
  • Luis Ardila
  • Léa Strobino
  • Maciej Marek Lipinski
  • Magnus Hansen
  • Maheyer Jamshed Shroff
  • Manoel Barros Marin
  • Marc Dobson
  • Marco Angelo Bellato
  • Marcos Vinicius Silva Oliveira
  • Maria Elena Angoletta
  • Mark Mclean
  • Markus Hennersperger
  • Martin Aleksa
  • Martin Kocian
  • Mathieu Saccani
  • Matt Warren
  • Matthew Shaun Twomey
  • Matthias Wittgen
  • Mauro Dinardo
  • Mehmet Ozgur Sahin
  • Michael Jaussi
  • Michal Husejko
  • Miguel Fontes Medeiros
  • Mihaly Vadai
  • Muhammad Ali
  • Nikkie Deelen
  • Nordin Aranzabal Barrio
  • Oldrich Kepka
  • Oliver Sander
  • Panagiotis Papageorgiou
  • Paris Moschovakos
  • Peter Onyisi
  • Petr Zejdl
  • Pieter Van Trappen
  • Piotr Nikiel
  • Predrag Kuzmanovic
  • Priya Sundararajan
  • Radu Hobincu
  • Rainer Bartoldus
  • Rainer Hentges
  • Ralf Erik Rossel
  • Ralf Spiwoks
  • Remi Mommsen
  • Revital Kopeliansky
  • Rimsky Alejandro Rojas Caballero
  • Roland Johnson
  • Ryan Knowlton
  • Sacha Samson Nohan Cormenier
  • Salvatore Danzeca
  • Samuele Altruda
  • Sarath Kundumattathil Mohanan
  • Serguei Kuleshov
  • Simon Spannagel
  • Sophie Baron
  • Sorin Martoiu
  • Stefan Haas
  • Stefan Lueders
  • Stefan Schlenker
  • Stefano Mersi
  • Steffen Stärz
  • Tamas Balazs
  • Thiago Costa De Paiva
  • Thilo Pauly
  • Thomas Andrew Gorski
  • Thomas Oulevey
  • Tim Bell
  • Tom Williams
  • Tomas Vanat
  • Tomasz Podzorny
  • Trishna Rajkumar
  • Tullio Grassi
  • Victor Andrei
  • Wainer Vandelli
  • Wim Beaumont
  • Wojciech Bialas
  • Xiaoguang Yue
  • Xin Cui
  • Yannick Geerebaert
  • Yuri Ermoline
    • Manufacturer Presentations 30/7-018 - Kjell Johnsen Auditorium

      Conveners: Revital Kopeliansky (Indiana University (US)), Ralf Spiwoks (CERN)
      • 1
        Workshop Introduction & Motivation 30/7-018 - Kjell Johnsen Auditorium

        Speaker: Ralf Spiwoks (CERN)
      • 2
        Xilinx - Roadmap to 7nm Versal Family 30/7-018 - Kjell Johnsen Auditorium

        The Zynq UltraScale+™ MPSoC (Multi-Processing System on Chip) is the second generation of SoC, following the 28nm Zynq-7000 All Programmable SoC. Xilinx has developed the architecture based on the most advanced TSMC 16nm FinFET process technology for high performance and power efficiency.
        The Zynq UltraScale+ MPSoC boasts over 6000 interconnects linking the processing system and programmable logic blocks. At the heart of the processing system is the 64-bit ARM® Cortex®-A53, available in dual- or quad-core options, providing greater than 2x the performance of its Cortex®-A9 predecessor in the Zynq-7000. It also integrates a dual-core ARM Cortex®-R5 real-time processor, providing real-time response and low latency for safety-critical and high-reliability applications. The Zynq UltraScale+ MPSoC enables high levels of security, providing information assurance, anti-tamper protection and multiple layers of encryption and authentication during boot-up.
        The Zynq UltraScale+ MPSoC can interface with high-speed DDR4 and LPDDR4 SDRAM memories. It incorporates high-performance peripherals (PCIe up to Gen 3.0, USB 3.0, DisplayPort, 100G Ethernet MAC) and is available in variants with an H.265 video codec unit, high-end Analog-to-Digital/Digital-to-Analog Converters (the RFSoC families) or HBM memories.
        As the device is ARM-based, its software ecosystem includes a broad OS environment, hypervisors, open-source software, commercial solutions and safety-certifiable solutions.
        All these new architectural elements are coupled with the Vivado® Design Suite and the free Software Development Kit (SDK). Moreover, for several years Xilinx has been addressing the design challenges of complex system development with the Vivado High-Level Synthesis (HLS) and SDx environments, which give the ability to take C/C++/OpenCL straight to hardware without RTL coding.
        Finally, Xilinx has announced a new class of devices in 7nm process technology, named ACAP (Adaptive Compute Acceleration Platform), and its first family, Versal. These devices will combine acceleration, adaptable and intelligent engines with programmable logic and an ARM processing subsystem.
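        As a rough illustration of the HLS flow mentioned above (this example is not from the talk), the short C++ function below is written in the style accepted by Vivado HLS: the pragmas request a pipelined implementation and an AXI4-Lite control interface, while the function itself and its parameters are purely hypothetical.

        // Hypothetical HLS kernel: a 4-tap weighted sum, written for Vivado HLS.
        #include <cstdint>

        void weighted_sum(const int16_t x[4], const int16_t c[4], int32_t *y) {
        #pragma HLS INTERFACE s_axilite port=return
        #pragma HLS PIPELINE II=1
            int32_t acc = 0;
            for (int i = 0; i < 4; ++i)
                acc += static_cast<int32_t>(x[i]) * c[i];   // multiply-accumulate
            *y = acc;                                       // single result word
        }

        Synthesised with HLS, a loop like this becomes a fixed-latency pipeline in the programmable logic instead of hand-written RTL.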

        Speaker: Kester Aernoudt (Xilinx)
      • 10:10
        Coffee break / user questions, discussions, and demonstrations 30/7-012 (CERN)

      • 3
        Intel/Altera - SoC R&D plans 30/7-018 - Kjell Johnsen Auditorium

        This presentation will discuss the families of Intel SoC FPGA devices, from the 28nm Cyclone V SoC through to the newly announced 10nm Agilex SoCs. Starting with a discussion of the benefits of integrating a processor and an FPGA onto a single die, detail will be given on use cases for their application and on the technical differences between the family members and their evolution over time.

        The presentation will move on to discuss the embedded processor families used, the software operating systems that run on these devices and their uses, along with the long-term support strategy for both hardware and software.

        The debug infrastructure required to co-debug FPGA programmable hardware and software will also be presented, illustrating how FPGA + processor solutions can be debugged in one debugging environment, with the technical detail as to how this works.

        Tools required for the build and integration of the devices will be discussed along with PC requirements to build the projects. The presentation will focus on technical detail and architectural comparisons between FPGA family members.

        Speaker: Kris Chaplin (Intel)
      • 11:40
        User questions, discussions, and demonstrations 30/7-012 (CERN)

    • 12:30
      Lunch break 40/S2-C01 - Salle Curie

    • Overview and Use Cases 30/7-018 - Kjell Johnsen Auditorium

      Conveners: Ralf Spiwoks (CERN), Revital Kopeliansky (Indiana University (US))
      • 4
        Evolution of CMS online system and role of SoC 30/7-018 - Kjell Johnsen Auditorium

        Speaker: Frans Meijers (CERN)
      • 5
        ATLAS SoC - Review 30/7-018 - Kjell Johnsen Auditorium

        Speaker: Revital Kopeliansky (Indiana University (US))
      • 6
        ATLAS MUCTPI and System on Chip 30/7-018 - Kjell Johnsen Auditorium

        The Muon to Central Trigger Processor Interface (MUCTPI) is a key element of the first-level hardware trigger of the ATLAS experiment. It is being upgraded for the next run of the Large Hadron Collider (LHC) at CERN. The new MUCTPI is implemented as a single ATCA blade with high-end processing FPGAs for low-latency, parallel trigger processing. A Xilinx Zynq System-on-Chip (SoC), consisting of a programmable logic part and a processor part, is used for control. The processor part of the SoC is based on ARM processor cores and runs Linux. Several options for building the operating system, the root file system and the application software, including an ATLAS run control application, as well as for interfacing to the run control system in a controlled technical network, have been investigated.

        Speaker: Ralf Spiwoks (CERN)
      • 7
        Fault Resilient Design for 28nm ZYNQ SoC based Radiation Protection Monitoring System Fulfilling Safety Integrity Level 2 30/7-018 - Kjell Johnsen Auditorium

        CERN Radiation Protection is developing a state-of-the-art radiation monitor, CROME (CERN RadiatiOn Monitoring Electronics), measuring dose rates from 50 nSv/h over a range of 9 decades without auto-scaling. The detector signal is converted into an electrical current, which the front-end electronics measures accurately (±1%) from 1 fA to 1 µA.

        CROME’s main purpose is to measure radiation across the CERN sites and to protect people by triggering interlock signals based on dose calculations and configurable conditions.
        As a safety device, CROME is subject to strict constraints in order to fulfil SIL2 (Safety Integrity Level 2) requirements. CROME uses a Xilinx Zynq 7020 SoC for all computations.

        The FPGA section of the Zynq is dedicated to safety critical functions:

        • Real-time electronics management and data acquisition
        • Floating-point dose-rate calculations using various concurrent algorithms
        • Integral dose calculations
        • Decision-making algorithms, and alarm and interlock signal triggering.

        The Zynq dual ARM cores run a custom homemade Linux OS primarily used to:

        • Manage the secured bidirectional communication between the FPGA section and the processors (some 200 x 64-bit pipelined parameters)
        • Communicate with the SCADA supervision through a homemade TCP/IP protocol, ROMULUS
        • Manage non-safety-critical calculations and tasks such as data compression, storage, event generation...

        As a safety-related system, CROME’s SoC can boot from an SD card, a TFTP/PXE server, flash memory or an eMMC. It implements hardening techniques such as TMR, the Software Error Mitigation Core, Isolation Flow, ECC-protected memories and a custom intra-section data exchange mechanism. This feature has been extensively tested at the CHARM facility to assess the decoupling between the safety-critical functions inside the FPGA and potential crashes of the OS.
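        As a minimal sketch (not CROME code) of how the processor side of such a SoC can exchange one of these pipelined parameters with the FPGA section, the example below maps an AXI register block into a Linux user-space process through /dev/mem; the base address, the register offsets and their meaning are assumptions made only for illustration.

        // Illustrative only: read/write a memory-mapped AXI register from Linux.
        #include <cstdint>
        #include <cstdio>
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main() {
            const off_t  kRegBase = 0x43C00000;   // assumed AXI-Lite base address
            const size_t kMapSize = 0x1000;       // one 4 KiB page of registers

            int fd = open("/dev/mem", O_RDWR | O_SYNC);
            if (fd < 0) { std::perror("open /dev/mem"); return 1; }

            void *map = mmap(nullptr, kMapSize, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, kRegBase);
            if (map == MAP_FAILED) { std::perror("mmap"); close(fd); return 1; }

            volatile uint32_t *regs = static_cast<volatile uint32_t *>(map);
            uint32_t status = regs[0];            // read a (hypothetical) status word
            regs[4] = 0x1;                        // write a (hypothetical) command bit
            std::printf("reg[0] = 0x%08x\n", status);

            munmap(map, kMapSize);
            close(fd);
            return 0;
        }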

        Speaker: Hamza Boukabache (CERN)
      • 15:00
        Tea break 30/7-012 (CERN)

      • 8
        SoC technology for embedded control and interlocking within fast pulsed systems 30/7-018 - Kjell Johnsen Auditorium

        The control of CERN’s beam-transfer kicker magnet high-voltage pulse generators often requires the use of digital control in FPGAs to allow tight timing control (jitter better than one ns) and fast protection of the high-voltage thyratron and semiconductor switches. An FPGA is a perfect candidate for the digital logic; it is, however, limited in its possibilities for integration with current and future control systems.

        TE-ABT has recently leveraged the market push for integrated devices, so-called Systems-on-Chip (SoC). This technology combines a tightly coupled ARM processing system (a Cortex-A9 in the case of the Xilinx Zynq 7000) and programmable logic in a single device, thus avoiding difficult placement and routing on an electronic card. This paper presents two projects where the Xilinx Zynq SoC has been used. In the fast interlock detection system it is used to integrate the system neatly within the CERN control framework, by using embedded Linux and running a Snap7 server. In the second project it implements a lower-tier communication bridge to a front-end computer, using serial communication in software and high fan-out multiplexing in the programmable logic for timing and analogue control logic.

        Speaker: Pieter Van Trappen (CERN)
      • 9
        The CMS APx Platform using SoC Devices 30/7-018 - Kjell Johnsen Auditorium

        The APx platform encompasses multiple ATCA blade types supporting CMS Phase 2 detector back-end FPGA-based processing. Central to the APx architecture are two custom-designed ZYNQ mezzanines. The ZYNQ-IPMC is a DIMM form-factor, enhanced IPM controller for ATCA, with primary responsibility for sensor and power control. The Embedded Linux Mezzanine (ELM) has primary responsibility for payload configuration and operation.

        ZYNQ-IPMC software is implemented in C/C++ and uses FreeRTOS, running on an XC7Z020 or XC7Z014S device. The Gen-1 board, the ELM1, runs a PetaLinux distribution on an XC7Z045 device, with an ELM2 currently in design using an UltraScale+ SoC device. Both boards allow customization of the programmable logic (PL) to the specific requirements of the host blade. Internal register IO in the blade FPGAs is AXI-based, with access by the ELM CPU via an MGT-based AXI bridge.

        ELM-based services are an evolution of the model successfully deployed on the CMS CTP7 card during LHC Run 2. Services include FPGA bitfile loading, I2C IO support for optical modules and special-purpose ICs, hosting a 10GbE endpoint, and FPGA control. Control methods include shell scripts, custom applications, and remote procedure call (RPC) modules which allow system PCs to access FPGA-specific functions.

        SoC images on both boards are fully updatable over the LAN, and both boards have LAN-accessible console interfaces. APx FPGA images can be bundled with associated support resources to guarantee software/firmware compatibility. A mechanism spanning both SoC boards provides automatic assignment of geographic IP addresses via DHCP.

        Speaker: Thomas Andrew Gorski (University of Wisconsin Madison (US))
      • 10
        CMS R&D for Phase-2 Tracker Back-end Electronics 30/7-018 - Kjell Johnsen Auditorium

        The Phase 2 CMS tracker back-end processing system is composed of two types of "Detector, Trigger and Control" (DTC) boards interfacing the inner and outer tracker, and of the "Track Finding Processor" (TFP) board performing level-1 track reconstruction from the outer tracker data. Several groups are building hardware to prove key and novel technologies needed in the back-end processing system. Each of these development platforms uses different slow-control devices and architectures. While the main functionality of the boards is different, they all share similar management requirements. These can be fulfilled using an integrated solution based on Zynq UltraScale+ (US+) System-on-Chip (SoC) devices, as presented here.

        The Zynq US+ SoC provides FPGA logic, high-performance ARM-A53 multi-core processors and two ARM-R5 real-time-capable processors. The ARM-R5 cores are used to implement the IPMI/IPMC functionality and to communicate via the backplane with the shelf manager at power-up. The ARM-R5 cores are also connected to the power supply (via PMBus), and to voltage and current monitors, clock generators and jitter cleaners (via I2C and SPI). Once full power is enabled from the crate, Linux starts on the ARM-A53 cores. The FPGA is used to implement some of the low-level interfaces, including IPBus, and glue logic. The SoC is the central entry point to the main FPGAs on the motherboard via IPMB and TCP/IP-based network interfaces. The communication between the Zynq US+ SoC and the main FPGAs uses the AXI chip-to-chip protocol via MGT pairs, keeping the infrastructure requirements in the main FPGAs to a minimum.
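        Purely as an illustration of the kind of monitoring access described above (this is not the actual board software), the sketch below reads raw bytes from a monitoring device on a Linux I2C bus using the standard i2c-dev interface; the bus number, device address and register code are assumptions.

        // Illustrative only: raw register read from an I2C/PMBus device on Linux.
        #include <cstdint>
        #include <cstdio>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <linux/i2c-dev.h>

        int main() {
            int fd = open("/dev/i2c-1", O_RDWR);            // assumed I2C bus
            if (fd < 0) { std::perror("open"); return 1; }

            if (ioctl(fd, I2C_SLAVE, 0x40) < 0) {           // assumed 7-bit device address
                std::perror("ioctl I2C_SLAVE"); close(fd); return 1;
            }

            uint8_t reg = 0x8B;                             // assumed register (e.g. a voltage readout)
            uint8_t buf[2] = {0, 0};
            if (write(fd, &reg, 1) != 1 || read(fd, buf, 2) != 2) {
                std::perror("i2c transfer"); close(fd); return 1;
            }
            std::printf("raw reading: 0x%02x%02x\n", buf[1], buf[0]);

            close(fd);
            return 0;
        }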

        Speaker: Luis Ardila (KIT-IPE)
    • Tutorials and Presentations 40/S2-B01 - Salle Bohr

      Conveners: Revital Kopeliansky (Indiana University (US)), Marc Dobson (CERN)
      • 11
        PetaLinux - Presentation + Tutorial

        This presentation gives a basic introduction to ZYNQ devices, explains their booting process, and provides a tutorial covering the steps required for building a working Linux system for the ZedBoard (Zynq 7000) with the Xilinx Vivado and PetaLinux tools. Finally, a live demonstration of Linux booting over the network with an NFS root file system is given.

        Speakers: Keyshav Suresh Mor (Eindhoven Technical University (NL)), Petr Zejdl (Fermi National Accelerator Lab. (US))
      • 10:30
        Coffee break
      • 12
        CentOS on Xilinx Zynq Ultrascale+ MPSoC System on Chip

        The Xilinx Zynq Ultrascale+ MPSoC is a System on Chip (SoC), which is used in the upgrade of the ATLAS Muon to Central Trigger Processor Interface (MUCTPI) module. It interfaces the MUCTPI module to the ATLAS run control system. While the Linux kernel for the ARM-based processor part of the SoC is being prepared using the Yocto framework, the root file system can be prepared using the dnf cross installation tool to install the CentOS for ARM Linux distribution. Additional software packages can be easily installed or can be natively compiled, e.g. on an evaluation board. This way the ATLAS run control system can be built in its own environment and a run control application can be produced which runs over CentOS on the SoC.
        In an interactive demo, the basic functionality of CentOS will be presented running on a ZCU102 evaluation board, which has a Zynq UltraScale+ MPSoC. Using common system services and installing new functionality, it will be shown that this system behaves like a regular PC running CERN CentOS 7.

        Speaker: Panagiotis Papageorgiou (National Technical Univ. of Athens (GR))
      • 13
        CentOS: cross-Installation, cross-compilation of an application, and performance measurements

        Presentation of the SLAC tools for running CentOS 7 on the Zynq 7000.
        The presentation will cover the following aspects:
        • Installing the SLAC tools via RPM;
        • Cross-installing a CentOS 7 ARM64 ROOTFS with a Python script;
        • Writing an SD card from the created ROOTFS;
        • Booting a ZCU102 from the SD card;
        • Cross-compiling an echo server (ZMQ + msgpack) for the ZCU102 and compiling it for a Linux host;
        • Running the echo server and measuring the data throughput: a C++ variant is encoded to msgpack, transferred with ZMQ, decoded to a C++ variant, re-encoded, sent back and decoded (see the sketch below).
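        A minimal sketch of such an echo server is given below; it uses the libzmq C API from C++ and simply echoes raw bytes, leaving out the msgpack encode/decode step described above. The port number and buffer size are assumptions.

        // Illustrative ZMQ REP echo server (msgpack step omitted).
        #include <cstdio>
        #include <zmq.h>

        int main() {
            void *ctx  = zmq_ctx_new();
            void *sock = zmq_socket(ctx, ZMQ_REP);            // request/reply pattern
            if (zmq_bind(sock, "tcp://*:5555") != 0) {        // assumed TCP port
                std::perror("zmq_bind");
                return 1;
            }
            char buf[4096];
            while (true) {
                int n = zmq_recv(sock, buf, sizeof(buf), 0);  // receive one request
                if (n < 0) break;
                if (n > static_cast<int>(sizeof(buf)))        // guard against truncated messages
                    n = sizeof(buf);
                zmq_send(sock, buf, n, 0);                    // echo it back unchanged
            }
            zmq_close(sock);
            zmq_ctx_destroy(ctx);
            return 0;
        }

        A matching client would connect a ZMQ_REQ socket, pack a C++ variant with msgpack, send it, and time the round trip to estimate the throughput.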

        Speaker: Matthias Wittgen (SLAC National Accelerator Laboratory (US))
    • 12:30
      Lunch Break 40/S2-C01 - Salle Curie

    • Use Cases 40/S2-B01 - Salle Bohr

      Conveners: Ralf Spiwoks (CERN), Diana Scannicchio (University of California Irvine (US))
      • 14
        CaRIBOu - A versatile data acquisition system

        The process of detector development includes the subtask of reading out data from and controlling the detector. It typically consists of designing hardware in the form of a readout board containing programmable logic to provide an interface to the chip, power supplies for biasing the detector chip, as well as DACs and ADCs for setting and measuring operation parameters, test pulses, etc. One also needs to write software for control and readout. This process can be repeated again and again for each new chip developed, which may require different voltage levels or a different number of data lines. The CaRIBOu system, on the other hand, provides a robust, versatile DAQ system which can easily be adjusted to the needs of different detector chips. Using such a system saves development cost and reduces the time needed to get first data from the detector. CaRIBOu is a combination of hardware and software modules that forms a stand-alone readout and DAQ system for detector prototypes. It was initially developed for testing newly developed pixel-detector chips for ATLAS and for a future CLIC detector. Adding support for a new chip is a matter of writing a piece of code implementing an interface between the chip-specific features and the standard data and control interface of the CaRIBOu system. The system is based on a Xilinx Zynq System-on-Chip (SoC) architecture, combining the power of programmable hardware (FPGA) and a fully featured Linux operating system based on the Yocto framework. It can run either stand-alone, storing data to a local file system, or connected via a network interface to a data storage or a superior control system. The data decoding and analysis can be done either directly in the system, both in software and in FPGA-based hardware, or the data can be stored in a raw format and analysed offline.
        The talk presents the structure and capabilities of the CaRIBOu system and shows example applications and future plans.
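        Purely to illustrate the plug-in idea described above (this is not the actual CaRIBOu API), a chip-specific adapter could look like the sketch below, in which all class and method names are hypothetical.

        // Hypothetical sketch of a per-chip adapter behind a common DAQ interface.
        #include <cstdint>
        #include <string>
        #include <vector>

        class DeviceInterface {                        // common interface seen by the DAQ core
        public:
            virtual ~DeviceInterface() = default;
            virtual void powerOn() = 0;                // enable supplies, program bias DACs
            virtual void configure(const std::string &configFile) = 0;
            virtual std::vector<uint32_t> readData() = 0;   // raw words from the FPGA
        };

        class MyNewPixelChip : public DeviceInterface {     // chip-specific part to be written
        public:
            void powerOn() override {}                              // chip-specific power sequence
            void configure(const std::string &) override {}         // write chip registers
            std::vector<uint32_t> readData() override { return {}; } // drain the readout FIFO
        };

        The DAQ core then only talks to DeviceInterface, so supporting a new chip means implementing one such class rather than building a new readout system.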

        Speaker: Tomas Vanat (CERN)
      • 15
        ATLAS CSC, Pixel - Zynq based SoC RCE Readout and Evolution

        The Reconfigurable Cluster Element (RCE) readout is a generic DAQ concept based on SoC building blocks with an associated networking solution, initially developed on an ATCA platform at SLAC in 2007. The combination of the Gen-1 RCE on Virtex-4/PowerPC and the HSIO carried out the ATLAS pixel IBL stave QA/QC and installation commissioning in 2013. The RCE Gen-3 design migrated its base to the ZYNQ-7000 SoC in 2013, with the ATCA-based Cluster on Board (COB) running the upgraded ATLAS muon CSC readout since 2015. A ZYNQ-based RCE mezzanine integrated onto the portable HSIO-2 has also provided a popular test stand and test beam setup for the ATLAS pixel upgrade. This evolved setup, with the RCE running Arch Linux, is also operating the ATLAS AFP readout. Recent development has also established ATLAS ITk pixel RD53 readout on RCE platforms and is engaged in system-test setups that can read out a large number of channels from ITk pixel staves and rings. This development is coupled to the evolution of the RCE hardware base to the UltraScale+ MPSoC and potentially also the forthcoming ACAP devices. Development for many applications over the last 10+ years has also brought extensive experience with the software/firmware infrastructure supporting setups widely distributed in ATLAS; it provides a stable ATLAS application development platform with controlled releases that benefit from upgrades of the SLAC RCE core utilities, which are continuously evolving to serve many non-ATLAS applications as well. The RCE core utility suite offers a dataflow backbone with DMA engines, a Reliable UDP (RUDP) protocol, a low-latency PGP protocol, etc. The application software development has also yielded experience with different inter-process communication schemes, data serialization formats, running CentOS 7 on ZYNQ, etc. In this presentation, we will discuss the key highlights from this extensive experience and the current development of both the hardware platforms and the software/firmware infrastructure.

        Speaker: Matthias Wittgen (SLAC National Accelerator Laboratory (US))
      • 16
        The APOLLO Platform - Use of SoC Devices in ATLAS and CMS

        Many of the ATCA blades being proposed include some flavor of SoC on each blade. I will discuss where our SoC, a commercial Zynq 7000 series mezzanine board, will fit into our Apollo family of ATCA blades. The Apollo family of boards is made up of a service module, a command module, a SoC mezzanine, an Ethernet switch, and an IPMC. The service module is the primary board used for connecting to the ATCA power and data backplanes and serves as the host for the other boards. The command module mates with the service module; it is the application-specific board hosting the high-power FPGA and optics, and has low- and high-speed links back to the SoC mezzanine on the service module. Focusing on the SoC, I will discuss the topology and access control of I2C sensors between the IPMC and the SoC; the use of the SoC for system control and monitoring of service and command module components; our thoughts on software access to these via the SoC; and our current ideas on PS-PL use, such as prescaled DAQ path processing, histogramming, and/or calibration.

        Speaker: Daniel Edward Gastler (Boston University (US))
      • 17
        The Zynq MPSoC in the gFEX hardware trigger in ATLAS

        The Global Feature Extractor (gFEX) is a hardware jet trigger system that will be installed in the ATLAS experiment at CERN during the Phase 1 upgrades. The gFEX will read out the entire calorimeter at the full LHC collision rate of 40 MHz and will thus be able to calculate global quantities. This allows for the implementation of large-radius jet algorithms, potentially refined with jet subjet information, to select Lorentz-boosted objects like bosons and top quarks. The gFEX will additionally be able to implement event-based local pile-up suppression within the large-radius jet algorithms. The global nature of the gFEX allows for the calculation of quantities like MET and algorithms like “jets without jets”. The gFEX board consists of three UltraScale+ Virtex FPGAs and one Zynq+ MPSoC. Each of the FPGAs deals with a specific pseudo-rapidity range of calorimetry data, and the Zynq+ MPSoC coordinates this process. The Zynq+ is also in charge of the control and monitoring of the board. The SoC is loaded with a customized operating system (OS) built using the Yocto Project with the custom layer meta-l1calo. This talk will discuss the design and structure of the OS, along with SoC-specific considerations for operating the gFEX board. The gFEX is unique in its global view of events and its ability to trigger efficiently on jets and events with complex substructure. Its addition to the hardware trigger of ATLAS will increase the performance and sensitivity of the trigger for many important physics processes.

        Speaker: David Miller (University of Chicago (US))
      • 18
        Exploring the use of a Zynq MPSoC for machine learning inference in hardware triggers

        One of the ongoing technical challenges in experimental high energy physics is the real-time filtering (triggering) of data that may contain interesting interactions from a physics perspective. These challenges are especially pertinent at the hardware level for the general-purpose LHC experiments, since the first stage of triggering needs to operate with latencies on the order of tenths of microseconds while selecting these interesting events. A current challenge is optimizing the trade-off between the latency and the mis-classification rate of interactions. Our dataset consists of pseudo-calorimeter data from simulated events, each represented as an image, and the task is to classify each event as signal or background. Our approach is to implement a neural network (NN)-based algorithm for this task on a Zynq UltraScale+ MPSoC. To this end, we start with a NN trained on offline data in a high-level programming language. Then, we use a high-level synthesis tool and library to convert the NN into a hardware implementation on the Zynq MPSoC. Given the success of NNs in image classification and the ability of the MPSoC to use both the ARM processor and the programmable logic (FPGA) to implement NNs in a highly parallelized and energy-efficient fashion with low fixed latency, we believe this may be a fruitful approach for future triggering improvements. We see potential applications as an R&D system for the ATLAS experiment’s Run 3 gFEX system and potentially as part of the trigger in the Run 4 (HL-LHC) ATLAS Global Event Processor.
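        As a rough illustration of the kind of code such a high-level synthesis flow targets (not taken from the talk), the sketch below shows a tiny fixed-point fully-connected layer with a ReLU activation written in HLS-style C++; the layer sizes, data types and pragmas are assumptions made for illustration.

        // Hypothetical HLS building block: dense layer (8 -> 4) with ReLU activation.
        #include <ap_fixed.h>                 // Xilinx HLS fixed-point types

        typedef ap_fixed<16, 6> data_t;       // 16-bit word, 6 integer bits (assumed precision)

        void dense_relu(const data_t in[8], data_t out[4],
                        const data_t w[4][8], const data_t b[4]) {
        #pragma HLS ARRAY_PARTITION variable=w complete dim=2
        #pragma HLS PIPELINE II=1
            for (int o = 0; o < 4; ++o) {
                data_t acc = b[o];
                for (int i = 0; i < 8; ++i)
                    acc += w[o][i] * in[i];   // multiply-accumulate over the inputs
                out[o] = (acc > data_t(0)) ? acc : data_t(0);  // ReLU
            }
        }

        The trained weights from the offline model would be baked into arrays like w and b, and the synthesis tool unrolls and pipelines the loops into fixed-latency logic.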

        Speaker: Emily Ann Smith (University of Chicago (US))
      • 15:10
        Tea break
      • 19
        The SoC-based Readout Drivers of the ATLAS Pixel/IBL detector

        From 2015 to 2018 the ATLAS Pixel detector gradually installed new readout drivers (RODs) that are used for data readout, configuration, monitoring, and calibration of the Pixel front-ends. The RODs are 9U VME boards utilizing FPGA technology. The main FPGA is a Xilinx Virtex-5 SoC which contains a PPC440 processor. The processor is used to configure the front-end chips and to read back monitoring information from the FPGAs. Until recently the operating system running on the PPC was xilkernel. During the current shutdown of the LHC, xilkernel is being replaced by a recent Linux kernel, which will provide superior stability and debugging capabilities. The architecture of the system and the issues that were encountered will be presented in this talk.

        Speaker: Oldrich Kepka (Acad. of Sciences of the Czech Rep. (CZ))
      • 20
        A SoC module for the ATLAS Tile Calorimeter PreProcessors at the HL-LHC

        In 2026, the LHC will be upgraded to the High-Luminosity LHC (HL-LHC), allowing it to deliver up to 7 times the instantaneous design luminosity. The ATLAS Tile Calorimeter (TileCal) Phase-II Upgrade will adapt the detector and the data acquisition system to the HL-LHC requirements. The on- and off-detector readout electronics will be completely re-designed to cope with the new data rates and radiation levels. The on-detector electronics will transfer digitized data for every bunch crossing (~25 ns) to the Tile PreProcessor (TilePPr) in the counting rooms, which is interfaced with the ATLAS Trigger and Data AcQuisition (TDAQ) and the ATLAS Detector Control System (DCS).

        The TilePPr module is being designed in a full-size ATCA blade form factor. The system is composed of an ATCA Carrier Blade Board (ACBB) and four FPGA-based Compact Processing Modules (CPMs) in AMC form factor. The high-speed communication with the on-detector electronics, the data acquisition and the core processing functionalities rely on the CPMs, while the functionalities for power management, control and configuration of the boards are implemented in separate mezzanine cards.

        One of these mezzanine boards is the Tile Computer on Module (TileCoM). This 204-pin SODIMM form-factor board is equipped with a Xilinx ZYNQ UltraScale+ System-on-Chip running an embedded Linux operating system. The main roles of the TileCoM are the remote control and configuration of all the boards placed in the ACBB, the remote configuration of the on-detector electronics FPGAs, and interfacing with the DCS for monitoring.

        This contribution presents the design of the TileCoM module, as well as the integration plans of the TileCoM with the TileCal readout system and ATLAS DCS.

        Speaker: Fernando Carrio Argos (Univ. of Valencia and CSIC (ES))
      • 21
        ATLAS TREX - VME digital modules using Zynq MPSoC

        During LS2, the legacy PreProcessor system of the ATLAS Level-1 Calorimeter Trigger (L1Calo) will be extended with new VME digital modules, called Tile Rear Extension (TREX). The main task of the TREX modules will be to extract copies of the Tile digitised results from the legacy trigger data path, and to transmit them to the new L1Calo Feature Extractor processors, via optical links running at 11.2 Gbps.

        The operating conditions of the TREX modules will be continuously monitored by the ATLAS Detector Control System (DCS). The slow-control functionality and the communication with the DCS will be managed on each TREX by an on-board, commercial, Zynq MPSoC based module. The Zynq device will permanently collect environmental parameters from various sources on the TREX board and transfer them to the DCS via an embedded OPC UA server and an Ethernet interface. Additionally, the Zynq device will act as a board manager, able to initiate power-on/-off sequences whenever requested by the DCS or by the VME controlling software, or to reconfigure various TREX programmable devices upon specific commands received from the VME.

        Details on the implementation, functionality and operation of the Zynq MPSoC module will be presented in this talk.

        Speaker: Victor Andrei (Ruprecht Karls Universitaet Heidelberg (DE))
    • Tutorials and Presentations 40/S2-C01 - Salle Curie

      Convener: Revital Kopeliansky (Indiana University (US))
      • 22
        Balena as a deployment stack - Presentation + Tutorial

        Alongside the many gains brought by the addition of SoCs to boards come new management, operation, and development challenges. These SoCs require mechanisms for deploying, maintaining, and managing a large-scale heterogeneous cluster. Doing so requires maintaining multiple OS and application build tool chains, mechanisms for deploying and updating both at scale, and tools for accessing and monitoring each device, both individually and in batches.

        Many possible solutions exist to address these challenges. One near-off-the-shelf solution presented here is the open-source project Balena. Balena is a tool stack that is rapidly growing in popularity amongst IoT providers, who use it to manage fleets of remote devices. At its core, Balena consists of (1) a Yocto layer in a Linux build recipe that creates pre-configured unique images (BalenaOS) for SoCs, and (2) a set of cloud services that enable the operation of hundreds of devices running BalenaOS once booted.
        Internally, BalenaOS utilises Docker and git tooling to deploy applications and to provide automated network configuration, access control, and ssh access to both the host OS and the containers. Balena cloud services and a node-based API pair with these mechanisms to provide management, configuration, and debugging tools for device fleets.

        This presentation will provide an overview of the Balena toolstack, and review the advantages and disadvantages of using a Balena-flavoured CERN CentOS and a local Balena cloud service within the context of CERN SoC management needs. A CentOS-Balena instance deployed on a Zynq will be demonstrated.

        Speaker: Janet Ruth Wyngaard (University of Notre Dame (US))
      • 23
        SoCs and Control System Integration

        For hardware control and monitoring tasks, System-on-Chip devices offer the perfect combination of flexibility and performance for hardware owners, as well as a convenient computing platform for control and back-end experts to implement local control and back-end interface software.
        The industry standard OPC-UA represents a well-adapted middleware solution for hardware integration into back-end applications in the context of detector control systems (DCS) or detector configuration/calibration engines. In the case of SoCs, OPC-UA can be embedded directly into the processing part, which allows e.g. for easy network integration of SoC-based software. By exploiting the power of the quasar framework to generate OPC-UA servers from object-oriented device models, the software development process can be significantly facilitated and the integration, e.g. into WinCC OA or Python/C++-based applications, simplified.
        In this contribution, applications of SoCs in the context of control system projects within the ATLAS Phase-II upgrades are presented, and usage scenarios as well as network integration aspects are discussed. A general overview of OPC-UA and quasar will be given, together with the current status of and experience with them on SoC devices.
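        quasar generates the OPC-UA server code from an object-oriented device model; purely as a language-level illustration of what a minimal embedded OPC-UA endpoint looks like, the sketch below instead uses the open-source open62541 library (an assumption made for this example, not part of the quasar toolchain), with all device-specific variables omitted.

        // Illustrative minimal OPC-UA server using open62541 (not quasar-generated code).
        #include <open62541/server.h>                  // header layout of recent open62541 releases
        #include <open62541/server_config_default.h>

        static volatile UA_Boolean running = true;     // cleared e.g. from a signal handler

        int main() {
            UA_Server *server = UA_Server_new();
            UA_ServerConfig_setDefault(UA_Server_getConfig(server));

            // In the quasar workflow, the device classes generated from the design model
            // would populate the address space with variables and methods here.
            UA_StatusCode rc = UA_Server_run(server, &running);   // serve until 'running' is cleared

            UA_Server_delete(server);
            return rc == UA_STATUSCODE_GOOD ? 0 : 1;
        }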

        Speaker: Stefan Schlenker (CERN)
      • 24
        OPC-UA on Zynq: Walk-through, experience and a demo - Tutorial part I

        The tutorial will focus on the creation of OPC-UA servers for Zynq SoCs using the quasar framework. Firstly, a short walk-through of quasar will demonstrate the general workflow for newcomers, with a demo of building an OPC-UA server for a laptop and using it.

        • practical walk-through of ecosystem - show tooling, Eclipse integration, UaExpert etc...
        • making of an OPC-UA server for a desktop with some simple monitoring
        Speaker: Piotr Nikiel (CERN)
      • 10:45
        Coffee Break
      • 25
        OPC-UA on Zynq: Walk-through, experience and a demo - Tutorial part II

        Subsequently, the usage of quasar on SoCs will be detailed. The build process for Zynq will be explained using four different strategies: Yocto, PetaLinux, native CentOS and the use of a cross-compiler/SDK. The current status and mileage will be shown. In the last part, an interactive demo will present building for a real Zynq board and the integration of some sensors and controls with OPC-UA connectivity.

        Speaker: Piotr Nikiel (CERN)
    • 12:30
      Lunch Break 40/S2-C01 - Salle Curie

    • System Aspects 40/S2-C01 - Salle Curie

      Conveners: Diana Scannicchio (University of California Irvine (US)), Marc Dobson (CERN)
      • 26
        Xilinx Cybersecurity offering in Industrial Internet of Things (IIoT)

        The Xilinx cybersecurity offering in the Industrial Internet of Things (IIoT) is part of a whole secure chain. This offering includes HW Security Devices, Secure Boot, Validated OS, Trusted Apps, Secure Comms, System Monitoring and Gateways/Firewalls. This session shows the history of HW security features from the Virtex-5 to the Zynq UltraScale+ MPSoC with its implemented passive and active features. The role of FPGA/SoC vendors lies in the Enterprise Network, at the Controller level, in the Control Network, in IO devices and in Assets, but not in the Cloud or Internet domain. The Xilinx SoC Endpoint Security Platform in hardware may be extended with the Mocana comprehensive, full-stack IoT security platform solution, which builds on a HW Root of Trust and complies with the endpoint security requirements of the IIC Industrial Internet Security Framework. This Mocana solution supports device enrolment, secure update, secure transport and remote attestation.

        Speakers: Gregory Donzel (Avnet Silica), Mr Markus Hennersperger (Avnet Silica)
      • 27
        Intel/Altera - Secure Device Manager for Stratix 10 & Agilex SoC

        Security is becoming ever more important as time progresses and is fast becoming a requirement during the architectural decision-making process. From secure boot and authentication through to cryptographic engines FPGAs have an evolving solution to address market needs.

        During this brief 10 minute presentation, the headline evolution of secure boot in SoC FPGA devices will be presented, and the new Secure Device Manager (SDM) available in Stratix 10 and Agilex SoC devices will be discussed, demonstrating a field-upgradable security solution to common cryptographic authentication, encryption and revocation needs.

        Speaker: Kris Chaplin (Intel, Kris.Chaplin@intel.com)
      • 28
      • 29
        State of Linux distributions at CERN and developments in the community by CERN IT Linux Team

        After giving a status update on the supported CERN Linux distributions, we will talk about the CentOS community and the Special Interest Groups (SIGs), which are the framework for managing contributions. An overview of the alternative-architecture SIG will be given. Finally, a few key features of the next CentOS release will be discussed.

        Speaker: Thomas Oulevey (CERN)
      • 14:40
        Tea break
    • System Aspects 40/S2-C01 - Salle Curie

      Conveners: Dr Alessandro Thea (Rutherford Appleton Laboratory (GB)), Wainer Vandelli (CERN)