4th CERN System-on-Chip Workshop
CERN
Welcome to the 4th CERN SoC workshop!
The workshop is a follow-up to the 3rd workshop held in October 2023. Its aim is to report on the progress of the various SoC projects and of the common tools and solutions available.
The workshop will take place in person at CERN as well as on Zoom.
10:00 → 12:00  Vendor Exhibition (CERN: Ground floor of building 40, behind the cafeteria, on both sides of the passage towards the terrace and building 42)
Convener: Hamza Boukabache (CERN)
12:00 → 13:30  Lunch Break 1h 30m
13:30 → 13:35
Convener: Ralf Spiwoks (CERN)

13:30  Welcome and Introduction 5m
Speaker: Ralf Spiwoks (CERN)
13:35 → 15:35  Vendor Presentations #1 (40/S2-D01 - Salle Dirac)
Conveners: Hamza Boukabache (CERN), Ralf Spiwoks (CERN)

13:35  Vendor Presentation 1.1 - TOPIC: SoM Expertise & Technical Insights in developing the new CERN ATS SoM Module 1h
TOPIC Embedded Systems has developed the new CERN ATS SoM Module, which will be presented at this 4th CERN System-on-Chip Workshop. In this presentation, TOPIC shares its expertise in creating Systems-on-Module based on Systems-on-Chip and outlines the development process of the new CERN ATS SoM, including the challenges overcome along the way.
Speaker: Jeroen van der Meulen

14:35  Vendor Presentation 1.2 - AVNET: Welcome to the Post-European Cyber Resilience Act (CRA) Era 1h
In this session we will show how to use the TPM to encrypt and decrypt Linux filesystems for embedded devices. The demonstration will be on AMD's Kria SoM. For encryption, the standard LUKS mechanism will be used.
Outline of the presentation/demonstration:
1) Overview of the CRA
A brief introduction to the Cyber Resilience Act and its implications.
2) Why Security Matters: A Practical Example
Through a live demo, we'll show why security is crucial: We'll copy the main flash content of an AMD Kria SoM to a PC, inject malicious code without adding files to the root file-system, and copy it back. Curious about what the injected code does and how it works? Join us to find out!
3) Trusted Platform Module (TPM)
An in-depth look at TPM concepts with a live demonstration on the AMD Kria SoM: root filesystem encryption using LUKS, with the decryption key unsealed from the TPM, and PCR policies to detect unauthorized software changes in early boot stages (FSBL, U-Boot, Kernel). A sketch of this unseal-and-open flow is given after the outline.
4) Secure Boot
A step-by-step guide on:
- Generating RSA keys and signing the BOOT.BIN image.
- Burning eFuses on an AMD Zynq UltraScale+ device for secure boot.
Speaker: Marco Hoefle (Avnet Silica)
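For readers who want a taste of the flow before the session, here is a minimal Python sketch of the unseal-and-open step, assuming a passphrase previously sealed against PCRs 0, 2 and 4 with the standard tpm2-tools; the context and partition paths are hypothetical, not those used in the demo.

```python
# Sketch: unseal a LUKS passphrase from the TPM and open the root partition.
# Assumes the key was sealed earlier against PCRs 0,2,4 with tpm2-tools;
# the paths below are hypothetical.
import subprocess

SEALED_KEY_CTX = "/boot/rootfs-key.ctx"   # hypothetical sealed-object context
LUKS_PARTITION = "/dev/mmcblk0p2"         # hypothetical encrypted root partition

# Unsealing succeeds only if the PCR values (FSBL, U-Boot, kernel measurements)
# still match the policy the key was sealed against.
key = subprocess.run(
    ["tpm2_unseal", "-c", SEALED_KEY_CTX, "-p", "pcr:sha256:0,2,4"],
    check=True, capture_output=True).stdout

# Feed the unsealed key straight to cryptsetup on stdin.
subprocess.run(
    ["cryptsetup", "open", "--key-file=-", LUKS_PARTITION, "rootfs"],
    input=key, check=True)
```

Because the key never touches the filesystem in clear, a cloned or tampered flash image (as in the demo of item 2) changes the PCR measurements and the unseal step fails.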
15:35 → 16:00  Coffee/Tea Break 25m
16:00 → 18:00  Project Presentations #1 (40/S2-D01 - Salle Dirac)
Conveners: Ralf Spiwoks (CERN), Alina Corso Radu (University of California Irvine (US))

16:00  The X2O ATCA IPMC and Control Solution: Update 25m
The presentation will discuss the latest updates of the X2O platform for the upcoming Phase-2 upgrade. The X2O platform is a modular system in ATCA standard that includes hardware, firmware and software solutions. Main points/updates of the X2O platform include:
- The KRIA SoM (UltraScale+) as system controller and IPMC host within the power module (rev. 3).
- Adaptations for the VU13P FPGA module and the 30-cage QSFP optical module.
- New features in the KRIA SoM firmware and software, such as a fast custom DMA Slave Serial core, a U-Boot IPMI tool as an IPMC extension, AXI safety improvements, and Xilinx IIC bus recovery.
- A new IPMC version running as an application on the KRIA SoM within AlmaLinux 9.
The X2O platform will be used for the L1 Trigger subsystems EMTF, OMTF, GEM, and GMT, and also in the DAQ upgrade of the CSC subsystem.
Speaker: Aleksei Greshilov (Rice University (US))
16:30  Design and Development of ATLAS L0 Muon Endcap Control Software on Zynq MPSoC 25m
The L0 Muon Endcap Sector Logic for Phase‐2 upgrade integrates Zynq Ultrascale+ MPSoC with Virtex Ultrascale+ FPGA, interconnected via AXI Chip2Chip. In terms of control and monitoring functionality, the Zynq MPSoC is responsible for the monitoring and control of the ATLAS TGC system through communication with the Virtex FPGA, which provides optical interfaces for approximately 30 front end boards.
To implement this functionality, we are developing a comprehensive control software stack on the MPSoC. This stack comprises low‐level APIs for accessing onboard devices and front end boards, as well as high‐level APIs for TDAQ and DCS. The low‐level APIs are implemented in Rust to leverage its robust and expressive type system for enforcing compile‐time access guarantees, while the high‐level layer employs DAQ2SoC as a proposed approach to communicate with ATLAS Run Control applications and DCS.
In this presentation, we describe the development of this system as an example of a complete system implementation, sharing insights and experiences gained during the process.
Speaker: Wataru Otsubo (University of Tokyo (JP))
17:00  FELIX: not a system on chip 25m
FELIX (FrontEnd LInk eXchange), the readout system designed for ATLAS, has been around for some time: developed since 2012, it has been in operation for ATLAS since 2022 as part of the ATLAS Phase-1 upgrade. The first version of FELIX, a.k.a. Phase-1 FELIX, was based on a PCIe Gen3 card with a Kintex UltraScale FPGA, not a system on chip. Access to the various configuration and monitoring registers happens through a register map exposed over the PCIe interface to the software running on the host server containing the PCIe card.
With ATLAS undergoing the Phase-2 upgrade, which should be ready by 2029, FELIX will also be upgraded with new hardware. The latest available series of AMD FPGAs was chosen for the new PCIe readout card: AMD Versal Premium. This series only comes with a processing system on it, which did not really give us a choice to go without one. But even though all the firmware is still fully compatible with the Phase-1 hardware, and thus has to operate completely in the FPGA fabric, we have found some good use cases for the processing system.
First, a built-in self-test was created running on the ARM processing system, which can verify all the peripherals (I2C devices, power rails, memory, transceiver eye scans, etc.), display the information on a web interface, and publish a report in a database. This will ease production testing tremendously compared to the procedure for the Phase-1 hardware.
A special virtual Ethernet interface (flxnet) between the PS and PCIe was developed, with Linux drivers, to replace the physical 1 Gb Ethernet interface on the card. With this interface available, FELIX will not require a special 1 Gb network for managing the cards, and administration can be done through the host server. Upgrading the firmware using the SoC over flxnet is as simple as copying files using scp or other Linux commands.
Another use case for the SoC is a mitigation for the not-so-deterministic phase of the recovered RX clock of the GTYe5 transceivers on the Versal devices, called Knypaegje Rinne Werom: an algorithm utilizing the near-end PMA loopback of the transceiver channel and a DMTD to measure the phase difference now runs on the SoC, and it was proven successful in guaranteeing a deterministic startup phase.
Speaker: Frans Schreuder (Nikhef National institute for subatomic physics (NL))
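As a rough illustration of the DMTD principle mentioned above, the following toy numerical sketch shows how a phase offset between two clocks of the same frequency reappears, magnified, in the beat signals produced by a slightly offset sampling clock. The parameters are illustrative only and bear no relation to the actual GTYe5/FELIX implementation.

```python
# Toy illustration of the DMTD (Digital Dual Mixer Time Difference) principle:
# two clocks at frequency f are sampled by a helper clock slightly offset in
# frequency; the phase of the fast clocks reappears, magnified, in the
# low-frequency beat signals. Illustrative parameters only.
import numpy as np

f, N = 100e6, 1000
fs = f * N / (N + 1)           # helper clock; the beat frequency is f/(N+1)
phi_true = 0.3                 # phase offset of clock B (radians)

k = np.arange(5000)
a = (np.sin(2 * np.pi * f * k / fs) > 0).astype(int)             # sampled clock A
b = (np.sin(2 * np.pi * f * k / fs + phi_true) > 0).astype(int)  # sampled clock B

first_edge = lambda x: np.flatnonzero(np.diff(x) == 1)[0]
dt = (first_edge(a) - first_edge(b)) / fs      # edge offset of the beat signals
phi = 2 * np.pi * f * dt / (N + 1)             # scale back to the fast clocks
phi = (phi + np.pi) % (2 * np.pi) - np.pi      # wrap into (-pi, pi]
print(f"true phase {phi_true:.3f} rad, measured {phi:.3f} rad")
```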
17:30  Advancements and future expansions of the Caribou DAQ system 25m
Caribou is a versatile data acquisition system used in multiple collaborative frameworks (CERN EP R&D, DRD3, AIDAinnova, Tangerine) for laboratory and test-beam qualification of novel silicon pixel detector prototypes. The system is built around a common hardware, firmware and software stack shared across different projects, thereby drastically reducing the development effort and cost. It consists of a custom Control and Readout (CaR) board and a commercial Xilinx Zynq System-on-Chip (SoC) platform. The SoC platform runs a full Yocto distribution integrating the custom software framework (Peary) and a custom FPGA firmware built within a common firmware infrastructure (Boreal). The CaR board provides a hardware environment featuring various services such as powering, slow-control, and high-speed data links for the target detector prototype. Boreal and Peary, in turn, offer firmware and software environments that enable seamless integration of control and readout for new devices. While the first version of the system used a SoC platform based on the ZC706 evaluation board, migration to a Zynq UltraScale+ architecture is progressing, with finalized support for the ZCU102 and Mercury+ ST1 boards and the ultimate objective of integrating the SoC functionality directly into the CaR board, eliminating the need for separate evaluation boards. This talk describes the Caribou system, focusing on the latest project developments and showcasing progress and future plans across its hardware, firmware, and software components.
Speaker: Younes Otarid (CERN)
10:00 → 12:00  Vendor Presentations #2 (40/S2-C01 - Salle Marie Sklodowska-Curie)
Convener: Hamza Boukabache (CERN)

10:00  Vendor Presentation 2.1 - AVNET: Latest Adaptive SoCs: Versal Gen2 & RF 1h
AMD's Versal Gen2 families (AI Edge, Prime, Premium, and RF) deliver adaptive compute, AI acceleration, and advanced connectivity for edge, embedded, networking, and RF applications. In this presentation, we will see how each is purpose-built to meet demanding performance and integration needs, and how this new generation of Versal devices compares with the previous one.
Speaker: Gregory Donzel (Avnet Silica)

11:00  Vendor Presentation 2.2 - Enclustra: All About AXI – On-Chip Interconnect 1h
Dive into the world of Advanced eXtensible Interface (AXI) with this presentation.
We'll unravel the basics of and differences between the AXI standards (AXI4, AXI4-Stream, AXI3, AXI4-Lite), explore common pitfalls, and spotlight the AMD infrastructure available. Gain insights into building your own AXI components, from managing backpressure in AXI4-Stream to crafting AXI4 slaves and masters using both HDL and HLS. Whether you're a novice or an experienced designer, this session offers a concise guide to harnessing AXI's power efficiently and effectively in your projects.
Speaker: Oliver Bruendler
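To make the backpressure discussion concrete, here is a toy, cycle-based Python model of the AXI4-Stream valid/ready handshake rule: a beat transfers only in a cycle where both TVALID and TREADY are high, and the master must hold its data while stalled. This illustrates the protocol rule only; it is not AMD code.

```python
# Minimal cycle-based model of the AXI4-Stream valid/ready handshake:
# a beat transfers only in a cycle where TVALID and TREADY are both high,
# and the master holds TDATA stable while TVALID is high but TREADY is low.
from collections import deque

def simulate(data, ready_pattern):
    """Drive `data` through the handshake; `ready_pattern` models backpressure."""
    src, received = deque(data), []
    valid, word = False, None
    for ready in ready_pattern:
        if not valid and src:          # master may (re)assert TVALID
            word, valid = src.popleft(), True
        if valid and ready:            # handshake: the beat transfers this cycle
            received.append(word)
            valid = False              # master may present the next word
        # if valid and not ready: the master stalls, holding the same word
    return received

# The slave applies backpressure on alternate cycles; no beats are lost.
print(simulate([1, 2, 3, 4], [True, False, True, False, True, True, True, True]))
# -> [1, 2, 3, 4]
```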
12:00 → 13:30  Lunch Break 1h 30m
13:30 → 18:00  Project Presentations #2 (40/S2-C01 - Salle Marie Sklodowska-Curie)
Conveners: Petr Zejdl (CERN), Greg Daniluk (CERN)

13:30  SoC-based flexible architecture for Quantum Communications and Quantum Random Number Generation 25m
We present a Zynq-7000 system that exploits the full SoC capabilities of both the FPGA and the CPUs to implement Quantum Key Distribution (QKD) and Quantum Random Number Generation (QRNG) systems. The system can either receive (top->down, Config1) or transmit (down->top, Config2) a data stream from and to an external source via TCP. It exploits interrupts, BRAM, and bare-metal applications to reach the maximum speed of the gigabit connection. The system can generate the electronic signals to control a quantum apparatus (Config1) and can also read out the signals from the apparatus, via a custom TDC with 28 ps jitter or via external ADCs (Config2). Config1 can be used for a QKD transmitter, while Config2 can be used as a QKD receiver or as a (discrete- and continuous-variable) QRNG. The results are presented in two manuscripts (10.1109/TQE.2022.3143997, 10.48550/arXiv.2406.01293). The system was successfully tested and used over the years in several quantum experiments.
Speaker: Dr Andrea Stanco (University of Padova)

14:00  Status of the Tile Computer-on-Module for the ATLAS Tile Calorimeter Phase-II Upgrade 25m
The Tile Calorimeter (TileCal) is the central hadronic calorimeter of the ATLAS detector at the LHC. Its High-Luminosity LHC (HL-LHC) upgrade requires a complete redesign of the readout electronics. The Tile PreProcessor (TilePPr) forms the core of the off-detector system, hosting FPGA-based boards that control and read out the on-detector electronics.
Within the TilePPr, the TileGbE switch mezzanine enables board-to-board communication, while the Tile Computer-on-Module (TileCoM) manages remote configuration of both the off-detector and on-detector electronics. TileCoM also interfaces with the Detector Control System (DCS), providing real-time monitoring. Two OPC UA servers connect to the DCS to monitor the off- and on-detector electronics via the readout system, the Compact Processing Module (CPM), using IPBus. This contribution presents the current status of the TileCoM hardware and software developments, its integration in the TilePPr, and validation tests. The supporting software stack and CI/CD pipelines for firmware and software deployment will also be described.
Speaker: Antonio Cervello Duato (Univ. of Valencia and CSIC (ES))
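As an illustration of the kind of IPBus access mentioned above, here is a minimal register-read sketch using the uHAL Python bindings from the IPbus software suite; the device URI, address table, and node name are hypothetical, not those of the actual TileCoM/CPM setup.

```python
# Sketch of a register read with the uHAL Python bindings (IPbus software
# suite). The connection URI, address table, and node name are hypothetical.
import uhal

hw = uhal.getDevice(
    "cpm",                                  # device nickname (hypothetical)
    "ipbusudp-2.0://cpm-board:50001",       # IPbus-over-UDP endpoint (hypothetical)
    "file://cpm_address_table.xml")         # register map (hypothetical)

temperature = hw.getNode("monitor.board_temp").read()   # queue the read
hw.dispatch()                                           # execute the transaction
print("board temperature raw value:", temperature.value())
```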
14:30  From prototype to operation - Deploying the ATLAS DCS Embedded Monitoring Processor (EMP) 25m
The Embedded Monitoring Processor (EMP) is a state-of-the-art System-on-Chip (SoC) processing platform developed for the ATLAS Detector Control System (DCS) upgrade. It features a custom baseboard hosting a commercial System-on-Module (SoM) with a Xilinx Zynq UltraScale+ MPSoC. The board exposes a wide range of digital and analog interfaces to support diverse embedded monitoring and control applications.
The heterogeneous architecture of the MPSoC, combining programmable logic (FPGA) with a multi-core ARM processor unit, offers both low-level hardware adaptability and high-level software integration. This design allows the EMP to bridge on-detector front-end electronics and the experiment’s control infrastructure. On one side, the EMP interfaces with front-end modules via Versatile Link+ (VL+) high-speed optical links; on the other, its on-board processor runs an industry-standard OPC UA server (using the quasar framework) for seamless integration into the DCS back-end.
This contribution presents the current status and recent developments of the EMP, including its transition from prototype to operational use. We highlight how several ATLAS Phase-II subsystems have begun deploying the EMP and discuss initial user feedback and operational challenges from these early deployments.
Speakers: Dominic Ecker (Bergische Universitaet Wuppertal (DE)), Paris Moschovakos (CERN)

15:00  The CERN ATS SoM Project: An Open Hardware Platform for the CERN Accelerator Complex 25m
The CERN Accelerators and Technologies Sector (ATS) System-on-Module (SoM) project aims to deliver a standardized, high-performance hardware platform for control and data acquisition across CERN’s accelerator complex. Mandated by the Common Controls Technologies Strategy Board (CTSB), the project follows a specification defined by the Common Controls Technologies Technical Board (CTTB). The SoM is based on the DI/OT system board — CERN’s Distributed I/O Tier platform for accelerator controls — and is fully compatible with its existing hardware and software ecosystem. This design enables straightforward integration across the ATS sector and simplifies long-term support.
The project is structured to deliver prototype and pre-series hardware that meets CERN’s strict requirements for reliability and interoperability. Throughout the process, close collaboration between CERN equipment and support groups (SY-BI, SY-EPC, BE-CEM, HSE, and others) and the industrial partner TOPIC — a Dutch embedded-systems company — ensures rapid progress and provides the expertise and capacity needed to achieve the project goals. TOPIC is responsible for hardware design, prototype testing, and pre-series manufacturing, while the CERN teams provide advice and support. Following successful validation, the ATS SoM will become part of CERN’s Open Hardware repository, encouraging further reuse and adaptation.
Key features of the ATS SoM include a SoC from the AMD MPSoC family, high-speed transceivers, general-purpose I/Os, onboard DDR4 memory, and optional add-ons for thermal management. Designed for straightforward testing with a 3U carrier mimicking the DI/OT system board, the SoM simplifies adoption across a range of ATS projects. Teams needing a different form factor can also develop their own carrier based on the existing design, minimizing cost and development time.
The presentation will introduce CERN users to the design and implementation, and will present feedback and test results from the first prototypes, which are expected in September 2025.
Speaker: Manoel Barros Marin (CERN)

15:30  Coffee/Tea Break 30m

16:00  ATS SoC Framework project: a common framework for standalone SoC systems across the accelerator complex at CERN 25m
The number of System-on-Chip (SoC) applications in the accelerator complex at CERN is rising. This motivated a survey of SoC users within CERN's ATS, and a study to identify generic architectures, common needs for services, and ways of integrating SoCs in the accelerator control system. The outcome of the study is the ATS SoC Framework project, which is developing a framework with common tools and libraries to simplify the development of SoC-based applications and to enable SoCs to be used as standalone systems across CERN's accelerator complex. This talk describes the main components of the framework and the preliminary results of the prototype test campaign.
Speaker: Irene Degl'Innocenti (CERN)

16:30  Development of RFSoC Digital Acquisition Platform for HL-LHC Beam Position Monitors 25m
The future HL-LHC Beam Position Monitor (BPM) data acquisition system to be installed near the ATLAS and CMS experiments is an application with demanding digitization and signal processing requirements. The system development has progressed since last presented at the SoC Workshop in 2023. The talk will start with a refresh of the project requirements and background followed by a discussion of the ongoing development of firmware and gateware modules. Particular attention will be given to the inclusion of components of the ATS SoC Framework and the interface between gateware, real-time (RPU) firmware, and Linux software.
Speaker: Christopher Ashley Hulley

17:00  TriglaV: A Prototype SoC ASIC for Particle Physics Applications realized with the SOCRATES Platform 25m
The TriglaV ASIC is a RISC-V-based SoC designed to address the requirements of future particle physics experiments. Fabricated in a TID-robust 28 nm CMOS technology, it integrates fault tolerance against single-event effects using Triple Modular Redundancy (TMR) and Error-Correcting Code (ECC) techniques. The architecture features a triplicated core, ECC-hardened memory blocks, and hardened peripherals and interconnects. TriglaV was built via the SOCRATES (System-on-Chip Radiation-Tolerant Ecosystem) platform, enabling customization and reusable IP integration. This prototype aims to validate SOCRATES for developing resilient SoCs tailored to harsh conditions. Verification included formal methods, FPGA emulation, and fault simulations. Dedicated on-chip circuitry for testability and fault observability facilitates ongoing silicon testing and evaluation.
Speaker: Anvesh Nookala (EP-ESE-ME)
10:00 → 12:00  Vendor Presentations #3 (40/S2-B01 - Salle Bohr)
Convener: Hamza Boukabache (CERN)

11:00  Vendor Presentation 3.2 - Knowledge Resources GmbH 1h
We'll give a two-part presentation: first, a few pointers for analyzing the features of an RF-enabled SoM ("how to choose an RF SoM for best performance"), followed by a deep dive into "low-jitter clocking architectures for RF SoMs", where we will share our recent advances in clock designs, learnings, and design pitfalls to avoid.
Speakers: Marco Smutek (Knowledge Resources GmbH), Giovanni Bassi (Knowledge Resources GmbH)
12:00 → 13:30  Lunch Break 1h 30m
13:30 → 18:00  Project Presentations #3 (40/S2-B01 - Salle Bohr)
Convener: Marc Dobson (CERN)

13:30  From Unit Tests to HiL: A Holistic Testing Framework for Heterogeneous SoCs 25m
This talk presents a comprehensive methodology for achieving robust testability in CROME, a heterogeneous system-on-chip (SoC) platform engineered for safety-critical applications. CROME combines custom IP cores, multiple processor subsystems, and pseudo real-time software components into a tightly coupled hardware-software ecosystem. Due to the system's architectural heterogeneity and stringent functional safety requirements, a layered and tool-assisted test strategy is essential to ensure coverage, correctness, and maintainability.
Our approach integrates test methods across multiple levels of abstraction:
- Unit Testing at the software level using embedded-hosted frameworks to validate isolated software modules and drivers.
- Hardware Simulation and Emulation Testbenches, leveraging UVM (Universal Verification Methodology) and transactional-level modeling, to verify RTL components and SoC integration scenarios.
- Formal Verification for safety-critical blocks, applying property checking and equivalence checking to exhaustively prove correctness under constrained input spaces.
- Software-in-the-Loop (SiL) and Hardware-in-the-Loop (HiL) Testing for validating real-time behaviors, timing constraints, and platform interactions under realistic workloads.
A key contribution is our Integration Testbench Framework, designed to automate system-level validation before any major release. This testbench utilizes stimulus generation, protocol checkers, and trace monitors to execute application-level scenarios across hardware and software boundaries. It supports coverage metrics aggregation, regression tracking, and fault injection to simulate degraded operational modes.
We also discuss the integration of our test infrastructure into a Continuous Integration (CI) pipeline. The CI system coordinates test execution across heterogeneous environments triggered on every code commit. This enables scalable, incremental validation and early fault detection, significantly reducing debug turnaround time.
Finally, we share key lessons learned in balancing test depth with feedback latency, ensuring non-intrusive observability, and maintaining test collateral consistency as the system evolves. Our results demonstrate how an integrated, multi-modal test strategy can scale with increasing system complexity while upholding high safety and reliability standards.
Speaker: Emma Anna Safia Boulharts
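To give a flavour of the HiL layer described above, here is a minimal pytest-style sketch; `crome_hal` and its methods are hypothetical stand-ins for a hardware-abstraction layer, not CROME's actual API.

```python
# Sketch of a HiL-style test in pytest, in the spirit of the framework
# described above. `crome_hal` is a hypothetical HAL, not CROME's real API.
import pytest
import crome_hal  # hypothetical module exposing register access to the device

@pytest.fixture
def dut():
    """Connect to the device under test and reset it to a known state."""
    board = crome_hal.connect("crome-test-stand-01")  # hypothetical test stand
    board.reset()
    yield board
    board.close()

def test_alarm_raised_on_overrange(dut):
    # Inject a stimulus above the configured threshold and check that the
    # safety logic raises the alarm flag within the required latency.
    dut.write_reg("THRESHOLD", 1000)
    dut.inject_stimulus(value=1500)
    assert dut.poll_reg("ALARM", expected=1, timeout_s=0.1)
```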
14:00  CRAZy - Compact Remote Administration for ZynqMP 25m
CRAZy ("Compact Remote Administration for Zynq") is a minimalistic implementation of IPMI/RMCP protocols running in the ZynqMP's Power Management Unit (RMCP). It provides a remote reset and serial console capability that runs completely independently from the APUs (Cortex-A cores) and can be fitted to most ZynqMP-based platforms without any need for hardware modifications.
Speaker: Tomasz Wlostowski (CERN)
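As a flavour of what an RMCP endpoint like this responds to, here is a minimal Python sketch sending the standard RMCP/ASF presence ping to UDP port 623, the discovery datagram defined by the ASF/RMCP specifications; the host name is hypothetical.

```python
# Minimal RMCP/ASF "presence ping" sent to the standard RMCP UDP port 623.
# A device exposing an RMCP endpoint replies with a presence pong.
# The host name below is hypothetical.
import socket

# RMCP header: version 0x06, reserved, sequence 0xFF (no ACK), class 0x06 (ASF)
# ASF payload: IANA enterprise 4542 (ASF), type 0x80 (presence ping), tag,
# reserved, data length 0
PING = bytes([0x06, 0x00, 0xFF, 0x06,
              0x00, 0x00, 0x11, 0xBE, 0x80, 0x00, 0x00, 0x00])

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(PING, ("zynq-board.example.org", 623))
pong, addr = sock.recvfrom(1024)
print(f"presence pong from {addr[0]}: {pong.hex()}")
```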
14:30  Evaluating Kubernetes Deployment on SoC Platforms in the CMS Phase-2 DAQ System 25m
This contribution presents our experience deploying containerised control applications on System-on-Chip (SoC) platforms as part of the CMS Phase-2 Data Acquisition (DAQ) system upgrade. Specifically, we evaluate the use of Kubernetes on a Xilinx Kria K26 SoM (Zynq UltraScale+ MPSoC), which provides embedded control and monitoring for the DAQ and Timing Hub (DTH400) and DAQ800 boards. We compare the performance and resource usage of applications running in Kubernetes-managed containers versus bare-metal execution, supported by benchmark results highlighting CPU, memory, and I/O performance. We also describe the modular software stack used, including a C++/Python hardware abstraction layer, a RESTful HTTP server for control, and MQTT-based monitoring. Furthermore, we discuss the requirements and challenges encountered during deployment, such as kernel module dependencies, limitations with OverlayFS over NFS (the SoM is netbooted with its root FS over NFS), and other system-level constraints. These insights aim to inform the SoC community on best practices and practical considerations for orchestration in embedded environments.
Speakers: Philipp Brummer (CERN), Polyneikis Tzanis (CERN)

15:00  Update on DAQ-to-SoC communication library for ATLAS TDAQ Phase-2 upgrade 25m
The DAQ-to-SoC communication library aims to help the integration of SoC systems in the ATLAS data-taking environment: it allows a DAQ application to exchange commands and data with a server application running on a SoC.
The implementation is based on the HTTP protocol: the client sends commands and payload to a server using HTTP POST requests and gets back a response code plus an arbitrary (text or binary) response body. Synchronous and asynchronous communication patterns are supported. The server part of the library is designed to be lightweight and independent from any external software at compile and run time, guaranteeing support for a wide range of SoC systems.
The presentation highlights recent implementation features, including the move from an nginx server to a Boost library, asynchronous communication, and SSL protocol support.
Speaker: Andrei Kazarov (University of Johannesburg (SA))
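For illustration, the client side of such an exchange can be sketched in a few lines of Python; the endpoint URL and command name below are hypothetical, and plain `requests` is used here rather than the actual DAQ-to-SoC API.

```python
# Sketch of the client side of an HTTP POST command exchange of the kind the
# library implements: send a command plus payload, get back a status code and
# an arbitrary (text or binary) body. URL and command name are hypothetical.
import requests

SOC_URL = "http://soc-board.example.org:8080/command"   # hypothetical endpoint

resp = requests.post(
    SOC_URL,
    params={"cmd": "read_status"},       # hypothetical command name
    data=b"\x01\x02",                    # arbitrary binary payload
    timeout=5.0)

print("response code:", resp.status_code)
print("response body:", resp.content)    # may be text or binary
```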
15:30  Coffee/Tea Break 30m

16:00  Infrastructure Automation and Monitoring for System-on-Chip in the ATLAS Level-1 Central Trigger 25m
The ATLAS Level-1 Central Trigger is based on custom electronics modules. The currently operating Muon-CTP-Interface (MUCTPI) uses a System-on-Chip (SoC); the Local Trigger Interface (LTI) and the Central Trigger Processor (CTP) for the Phase-2 upgrade will use a System-on-Module (SoM). With production modules, prototypes, and evaluation boards for testing, there is an increasing number of different SoCs/SoMs in use. The framework for automating the building and deployment of the firmware and software, and for configuring and monitoring the infrastructure, will be presented.
The framework identifies modules both individually and by location, and provides booting of the modules over the network. Boot files and the operating system are updated automatically. Firmware and software are developed and deployed consistently on all modules. Modules are configured according to their type and function. Monitoring ensures the proper functioning of the infrastructure.
Speaker: Till Kurek
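As a toy illustration of location-based configuration of this kind, the sketch below renders per-module netboot entries from an inventory; all module names, locations, MAC addresses, and file paths are invented for illustration, not those of the Level-1 Central Trigger setup.

```python
# Toy sketch: render a per-module netboot entry (ISC dhcpd host stanza) from
# an inventory keyed by MAC address. All names, paths, and MACs are invented.
INVENTORY = {
    "aa:bb:cc:00:00:01": {"module": "MUCTPI-01", "location": "rack-Y07", "type": "mpsoc"},
    "aa:bb:cc:00:00:02": {"module": "LTI-03",    "location": "lab-4",    "type": "som"},
}

def boot_entry(mac: str) -> str:
    m = INVENTORY[mac]
    # Each module type boots its own image; the module name selects the files.
    return (f"# {m['module']} @ {m['location']}\n"
            f"host {m['module']} {{ hardware ethernet {mac}; "
            f"filename \"boot/{m['type']}/{m['module']}/BOOT.BIN\"; }}")

for mac in INVENTORY:
    print(boot_entry(mac))
```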
16:30  SoCks - A Build Framework for Heterogeneous High Performance System-on-Chips 25m
Modern System-on-Chip (SoC) devices tightly integrate advanced components such as multi-core processors, AI accelerators, Field-Programmable Gate Array (FPGA) fabric, and high-speed interfaces into a single package, thereby promising to solve demanding challenges across various industries through heterogeneous computing at the edge. However, the increasing complexity of these systems comes at a price. Exploiting their full potential is a major challenge for firmware and software developers that can only be countered with powerful development tools. In practice, however, we have found that development tools are also becoming increasingly complex and are therefore often unable to adequately support developers in coping with system complexity. To address this, we propose to use SoC blocks (SoCks), a modular and scalable build framework for SoC devices that introduces a new layer of abstraction and reduces dependencies where possible while making the remaining ones comprehensible. Unlike previous approaches, SoCks aims at building firmware and software units for different components within the SoC as independently as possible while providing clearly defined interfaces. These interfaces also simplify the process of swapping between different versions of a given unit, such as the chosen embedded Linux operating system. Furthermore, they facilitate the establishment of a partially automated development flow through continuous integration and continuous deployment (CI/CD). Containerization support in SoCks further simplifies the use of CI/CD by isolating the build environment from the host system, which is also beneficial for local development, since users are not required to install a multitude of toolchains and dependencies on their systems.
Speaker: Marvin Fuchs (KIT - Karlsruhe Institute of Technology (DE))

17:00  The APx SoC Environment 25m
Our talk will cover recent developments and current plans for embedded Linux and applications on SoC devices in the APx environment. The APx environment supports a family of FPGA cards in the ATCA form factor for trigger processing and DAQ readout in the CMS detector. We will discuss how we build and deploy Linux and firmware images and configuration. We will discuss our tools and processes for the management and bring-up of boards, as well as running automated multi-board slice testing. We will also discuss our approaches to communication and network address management challenges.
Speaker: Jesra Tikalsky (University of Wisconsin Madison (US))
11:00 → 12:00
Convener: Ralf Spiwoks (CERN)

11:00  Introductory tutorial on automating SoC/FPGA software and firmware testing using Kubernetes/Container-based Infrastructure 1h
The presentation will include:
1. Setting up a lightweight k3s Kubernetes cluster built out of Aarch64 (AMD/Xilinx Versal Gen1 dev kits) and x86_64 (VM) nodes.
2. Setting up GitLab CI/CD pipelines to build firmware, PetaLinux (or Yocto), and example software using the above cluster.
3. Setting up GitLab CI/CD pipelines to deploy all the components to the cluster in an automated way using GitOps techniques.
Speaker: Michal Husejko (Stanford University (US))
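As a small companion to item 1, the sketch below uses the official Kubernetes Python client to list the nodes of such a mixed cluster and their CPU architectures via the standard `kubernetes.io/arch` node label; it assumes a kubeconfig pointing at the k3s cluster (e.g. copied from /etc/rancher/k3s/k3s.yaml).

```python
# Sketch: list the nodes of a mixed aarch64/x86_64 cluster and their CPU
# architectures, using the official Kubernetes Python client and the standard
# `kubernetes.io/arch` node label. Assumes a configured kubeconfig.
from kubernetes import client, config

config.load_kube_config()                 # use the current kubeconfig context
for node in client.CoreV1Api().list_node().items:
    arch = node.metadata.labels.get("kubernetes.io/arch", "unknown")
    print(f"{node.metadata.name}: {arch}")
```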
12:00 → 13:30  Lunch Break 1h 30m
13:30 → 17:00
Conveners: Ralf Spiwoks (CERN), Petr Zejdl (CERN)

13:30  ATLAS: update of the netbooting framework for SoC 25m
The TDAQ System Administration team manages the installation and configuration of the Operating System (OS) of all the nodes of the ATLAS experiment and of the ATLAS TDAQ TestBed Laboratory in the CERN General Purpose Network. The currently supported OS is AlmaLinux 9, and it can be installed locally or via the network.
Some of the SoC devices (e.g. Zynq MPSoC) can be booted with that supported OS, and therefore the idea is to integrate those devices into the TDAQ System Administration "netbooting" infrastructure used to create and deploy the "standard" OS image.
The capability to create the OS image to boot a SoC device via the network will be overviewed, describing the most recent steps performed. Its management and deployment will be described.
Speaker: Quentin Duponnois (CERN)
14:00  Network Services and SoC Booting in CMS ATCA Chassis Using Geographical Addressing via DHCP Client ID 25m
The new CMS DAQ electronics for the Phase-2 upgrade are equipped with a ZYNQ MPSoC on the Rear Transition Module (RTM). These ZYNQ MPSoCs are fully network-booted. We present the implementation of the network booting mechanism and its configuration using the DHCP Client ID for geographical addressing within the ATCA chassis. Additionally, we describe the containerized network services running on a server that provide the necessary functionality to support the booting process and the network configuration.
Speaker: Petr Zejdl (CERN)
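As a toy illustration of the geographical-addressing idea, the sketch below derives a DHCP Client ID (option 61) from the module's crate and slot rather than its MAC address, so a replaced board picks up the same address; the encoding is invented for illustration and is not the actual CMS scheme.

```python
# Toy illustration of geographical addressing: build the DHCP Client ID
# (option 61) from the module's position rather than its MAC address.
# The encoding below is a made-up example, not the actual CMS scheme.
def geographic_client_id(crate: int, slot: int) -> bytes:
    ident = f"atca-crate{crate:02d}-slot{slot:02d}"
    # Option 61 starts with a type byte; 0 marks a non-hardware identifier.
    return bytes([0]) + ident.encode("ascii")

cid = geographic_client_id(crate=3, slot=7)
print(cid.hex())   # value a DHCP server can match on to hand out a fixed IP
```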
14:30  Multistage boot process for ATS SoCs 25m
In the ATS sector, the Xilinx Zynq UltraScale+ MPSoC is deployed on the DIOT platform as a versatile solution. However, its general-purpose nature results in multiple BOOT.BIN files for various applications. To manage this complexity and ensure a reliable boot process across different use cases, we developed a two-stage boot mechanism. This method first initializes a common golden BOOT.BIN, which then selectively loads the appropriate BOOT.BIN specific to the application. This approach significantly enhances boot flexibility and system reliability on the DIOT platform, facilitating seamless integration in diverse operational environments.
Speaker: Vasileios Amoiridis (CERN)

15:00  Coffee/Tea Break 30m

15:30  ATLAS and CMS System and Network Administration with SoC 25m
The System Administration and networking aspects for supporting the embedded controllers in the ATLAS experiment will be described. Various aspects will be covered such as the network architecture options and restrictions and the operating system (OS) support, management and booting process (the details of which are covered in dedicated presentations).
Most information will be common to both ATLAS and CMS, but where there are differences, these will be shown.
Speaker: Diana Scannicchio (University of California Irvine (US))
16:00  Linux Roadmap, CERN IT 25m
Speaker: Alex Iribarren (CERN)
17:00 → 17:30  Summary and Closure
Convener: Ralf Spiwoks (CERN)

17:00  Summary and Closure 30m
Speaker: Ralf Spiwoks (CERN)