Paying off technical debt of SoC code-bases through standards and good practices (25m)
CROME is CERN's new generation of radiation monitoring systems. It is based on semi-autonomous radiation detectors controlled by Zynq SoCs.
Managing this complex, heterogeneous code meant dealing with growing technical debt caused by an ever-larger code-base and increasing complexity.
In the early days, light configuration was viable using scripts that modified the source code directly. Over time, we ended up with multiple scripts scattered across the code-base, each acting differently: modifying existing files, generating new files, or passing generics.
Moreover, the use of many different tools for generating files, documenting the code, verifying the design, and now linting made dependency tracking a challenge for newcomers.
In this presentation, we will show how we reduced this complexity with a combination of GNU Autotools, make, and Tcl scripts. We significantly reduced technical debt by:
• Implementing a unified mechanism for configuring the project with a simple user interface,
• Tracking software dependencies before building the project, giving the user a clear view of what is and is not possible with the current build environment,
• Separating the source and build directories of heterogeneous code, making clear what can and cannot be modified,
• Implementing linting of the HDL files, thus reducing noise in new commits.
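The dependency-tracking and out-of-tree-build ideas above can be sketched in a short Makefile fragment. This is a hypothetical illustration, not the actual CROME build system: the variable names, targets, and the Vivado invocation are assumptions.

```makefile
# Hypothetical sketch of an Autotools-driven out-of-tree build.
# Tool names, paths and targets are assumptions for illustration only.

# Out-of-tree build: sources stay untouched under $(srcdir),
# all generated files land in the separate build directory.
srcdir := ../src
VIVADO ?= vivado

# Tool availability is verified before any build step runs, so the
# user sees up front what is possible with the current environment
# (with Autotools proper, this check would live in configure.ac).
bitstream: check-tools
	$(VIVADO) -mode batch -source $(srcdir)/scripts/build.tcl

check-tools:
	@command -v $(VIVADO) >/dev/null || \
	  { echo "Vivado not found: bitstream targets are unavailable"; exit 1; }
```

In a full Autotools setup, the same check would typically be done once at `./configure` time rather than in every make invocation.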
CROME hardware/software co-design with Gitlab CI (25m)
Continuous Integration and Continuous Deployment (CI/CD) is the practice of automatically integrating and verifying code changes and deploying them to production or test devices. CI/CD greatly accelerates software development, maintenance, and deployment. In this talk, we present the Gitlab CI/CD integration for the development of the CERN RadiatiOn Monitoring Electronics (CROME) hardware/software ecosystem, where we use the Gitlab CI workflow to test the build of the embedded application, the ROMULUSlib TCP/IP communication library, and FPGA bitstream generation. The Gitlab CI pipelines run within dedicated Docker containers hosted on a dedicated virtual machine. The CI pipelines are essential in ensuring that any change to the code passes all tests, linting guidelines, and code compliance standards. In doing so, we detect errors early in the development process, reduce integration problems, and deploy faster with reduced risk.
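A pipeline of the kind described above could be sketched in a `.gitlab-ci.yml` along the following lines. Job names, container images, linter, and script paths are assumptions, not the actual CROME configuration:

```yaml
# Hypothetical .gitlab-ci.yml sketch; images, tags and scripts are
# illustrative assumptions, not the real CROME pipeline.
stages: [lint, build]

hdl-lint:
  stage: lint
  image: registry.example.cern.ch/crome/hdl-tools   # assumed image
  script:
    - verible-verilog-lint hdl/*.sv                 # example HDL linter

build-app:
  stage: build
  image: registry.example.cern.ch/crome/arm-toolchain  # assumed image
  script:
    - make -C software app          # embedded application
    - make -C software romuluslib   # ROMULUSlib TCP/IP library

bitstream:
  stage: build
  tags: [vivado]                    # runner on the dedicated VM
  script:
    - vivado -mode batch -source scripts/build_bitstream.tcl
  artifacts:
    paths: [output/*.bit]
```

Keeping lint as an early stage means a non-compliant commit fails fast, before any expensive bitstream generation starts.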
Amitabh Yadav (European Organization for Nuclear Research (CERN))
Setting up development infrastructure for Petalinux projects and Zynq MPSoC/RFSoC based hardware utilizing continuous integration and deployment techniques (Tutorial, 25m)
In this tutorial we demonstrate how to set up basic Petalinux development and continuous integration and deployment (CI/CD) infrastructure for MPSoC/RFSoC based projects. We start by showing how to organize a workstation so that it can be used simultaneously for interactive and batch (Gitlab CI based) Petalinux compilation jobs. We then extend the setup with an example RFSoC board to show how to continuously deploy Petalinux images directly to the hardware using network boot, and how to execute and organize basic tests using features of the Gitlab CI server. This tutorial relies on standard components which can be enabled in Petalinux/Yocto (such as Docker and Kubernetes) and provides low-level information where necessary, so that attendees can easily reuse all or part of the demonstrated content on their own premises.
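The network-boot deployment step described above might look like the following CI job sketch. The host names, file paths, and helper script are hypothetical assumptions used only to illustrate the flow:

```yaml
# Hypothetical deployment job; hosts, paths and the power-cycle
# helper are assumptions, not part of the actual tutorial setup.
deploy-rfsoc:
  stage: deploy
  tags: [rfsoc-lab]   # runner with network access to the board
  script:
    # Copy the freshly built Petalinux boot artifacts to the
    # TFTP server the board netboots from.
    - scp images/linux/BOOT.BIN images/linux/image.ub tftp-host:/srv/tftp/rfsoc/
    # Power-cycle the board so the bootloader fetches the new
    # image over the network (assumed helper script).
    - ./scripts/power_cycle.sh rfsoc-board-01
    # Basic smoke test once the board has booted.
    - ssh root@rfsoc-board-01 'uname -a'
```

Because the images are served over the network rather than flashed, a failed deployment can be rolled back by simply restoring the previous files on the TFTP server.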