ITER hosted the Spring EPICS Collaboration Meeting in June 2019. The meeting took place at the ITER site in Cadarache, 45 km north-east of Aix-en-Provence, France.
Getting visitor badges and name tags
Web Services for EPICS (Workshop)
Timing Workshop
Motion Control (Workshop)
Getting visitor badges and name tags
Special events
Welcome to the EPICS Collaboration Meeting June 2019!
New and noteworthy
A cordial welcome to the participants of the EPICS Collaboration Meeting June 2019 by the head of the ITER Science and Operations Department, Tim Luce.
Within the ITER project, 35 nations are collaborating to build the world's largest fusion facility in the South of France. Most contributions are in-kind; integrating more than 170 plant systems will be the main challenge in building the control system, which uses CODAC Core System, based on Red Hat Enterprise Linux, EPICS and ITER-specific tools.
The construction of the ESS is progressing. First experience with the control system has been gained with the first stage of the accelerator, the cryogenics systems and one off-site neutron instrument using ESS ICS technology. This talk will summarize the status of the systems and plans for the coming years, in particular ESS plans for introducing and taking advantage of the new features in EPICS 7.
The Linear IFMIF prototype accelerator, LIPAc, aims at producing a powerful (9 MeV, 1.1 MW) deuteron beam at 125 mA in CW, to validate the concept of the future IFMIF accelerator (40 MeV, 125 mA CW). The beam is accelerated through two main accelerating stages (RFQ and SRF Linac), plus two bunching cavities as part of the Medium Energy Beam Transport (MEBT). For the beam to be accelerated in continuous wave, the final stage needs RF power at 175 MHz from the 18 RF power sources feeding the eight RFQ couplers (200 kW), the two buncher cavities (16 kW) and the eight superconducting half-wave resonators of the SRF Linac (105 kW).
Presently LIPAc consists of the Injector, RF Quadrupole (RFQ), Medium Energy Beam Transport (MEBT), Diagnostics-plate and Low Power Beam Dump (LPBD), a configuration also referred to as Phase B of the project. Phase B aims to demonstrate the acceleration of a deuteron beam through the RFQ up to 5.0 MeV in pulsed mode with a low duty cycle of 0.1%. This phase is currently being commissioned, which, alongside beam operation, RFQ conditioning and other activities, means integrating the local control systems (LCS) from the EU, coordinated by F4E, into the central control system (CCS) designed and managed by QST (Japan). During the commissioning phase a lot of data have already been gathered, and various EPICS tools (based on pyEPICS) have been developed to ease the commissioning and get the best out of the machine. Effective ways to share the data with the participating laboratories in Europe are under preparation. This talk presents the highlights of LIPAc Phase B commissioning from the control-system point of view.
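As a flavour of what such pyEPICS-based commissioning helpers can look like, here is a minimal sketch that periodically snapshots a few readbacks to a log file; the PV names are hypothetical placeholders, not actual LIPAc records.

```python
# Minimal pyEPICS-style commissioning helper sketch; the PV names below
# are hypothetical placeholders, not real LIPAc record names.
import time
from epics import PV

pvs = {name: PV(name) for name in (
    "LIPAc:RFQ:FwdPower",    # hypothetical forward-power readback
    "LIPAc:RFQ:VacPressure", # hypothetical vacuum readback
)}

def log_snapshot(fh):
    """Append one timestamped line with all readbacks to an open file."""
    values = [f"{name}={pv.get()}" for name, pv in pvs.items()]
    fh.write(f"{time.time():.3f} {' '.join(values)}\n")

with open("rfq_conditioning.log", "a") as fh:
    for _ in range(10):      # short demo loop; a real tool would run longer
        log_snapshot(fh)
        time.sleep(1.0)
```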
Everything else
A short presentation of the main discussion topics and results of the Timing Workshop on Monday, 03 June.
New and noteworthy
An ultra-low-emittance, 6 GeV high-energy synchrotron radiation light source, containing a linac, a booster, a 1360 m circumference storage ring, and fifteen beamlines in phase one, will be built in suburban Beijing. The number and complexity of devices for the new machine are extremely large, which requires a modern control system design for smooth operation. The preliminary control system design will be presented, new development for high-level software and databases is described, and possible EPICS-related development is planned.
In a few years, at GANIL in Caen (France), light to super-heavy exotic ion beams will be produced by the SPIRAL1 and SPIRAL2 facilities and studied in the experimental hall DESIR, where fundamental research on nuclear physics, the weak interaction and astrophysics will be conducted.
The low-energy Radioactive Ion Beams (RIB) with short half-lives will be transported, separated, cooled, bunched and trapped rapidly, using instruments under EPICS control, before high-precision measurements are performed on the ions of interest with techniques like laser spectroscopy, mass spectrometry and decay spectroscopy.
The aim of this talk is to present the development of the DESIR beam lines, the HRS (High Resolution Separator), the GPIB (General Purpose Ion Buncher) and the double Penning trap PIPERADE, conducted at the CENBG laboratory in Bordeaux. Collaborative developments and the architecture of this EPICS control system, including industrial PLCs, embedded IOCs and real-time distributed devices, will be detailed more specifically.
GANIL: Grand Accélérateur National d'Ions Lourds (National Large Heavy Ion Accelerator)
RIB: Radioactive Ion Beam
DESIR: Decay, Excitation and Storage of Radioactive Ions
CENBG: Centre d'Etudes Nucléaires de Bordeaux Gradignan (a CNRS/IN2P3 & Bordeaux University Laboratory)
The control system architecture for the SARAF project was based on VME, in line with the experience of CEA and of a large part of the EPICS community. During the last year a significant evolution towards MTCA was observed, and the very close collaboration with ESS provided us with the opportunity to upgrade the CEA EPICS environment to MTCA.4. The SARAF project is a good opportunity to implement both MTCA.4 and the MRF timing system and to gain all the benefits of this new technology. The presentation will describe the EPICS environment and the preliminary architectures for SARAF.
The High Energy Photon Source (HEPS) is the next ring-based light source to be built in China, with 6 GeV beam energy, an emittance of tens of picometre·rad, a circumference of about 1.3 km, and 48 straight sections. In phase I, 15 beamlines will be installed. This paper outlines the conceptual and functional design of the HEPS beamline control system.
The SPIDER Acceleration Grid Power Supply (SP-AGPS) is a -96 kV/75 A high-voltage power supply used to accelerate the beam in the NBTF facility at Consorzio RFX, Padova. It was recently commissioned successfully and is now operational on site. The central control system at NBTF, called CODAS, is similar to CODAC. A LabVIEW-based EPICS server was implemented as the power supply controller for command, configuration and monitoring. A translator IOC deployed on the central control system coordinates the controller with the SP-AGPS EPICS server and resolves incompatibility issues. This talk shares the experience with the EPICS interface of the SP-AGPS during commissioning at Consorzio RFX.
Everything else
A short presentation of the main motions (!), discussion topics and results of the Motion Control Workshop on Monday, 03 June.
Special events
Lots of lightning talks
We present the current state of the HiLASE Centre control system. The aim of the development is to build a control system which would be in charge of the operation of kW-class in-house developed laser beamlines. These beamlines deliver picosecond pulses with repetition rates between 1 kHz and 1 MHz and high-energy nanosecond pulses at 10 Hz. A generic control system architecture is presented, which can either support full-size development lasers or compact industrial versions.
This talk will be about the current status of EPICS beamline control at BESSY II. We are currently running EPICS Base 3.14 with various VME-based hardware (MVME162/MVME2100 boards), different OMS motion controllers, Delta Tau PMAC2-VME, Delta Tau Geobricks, and our control software based on the TPMAC module.
The structure of each beamline is unique; they all have different control system requirements and thus use different hardware. For the control engineers at BESSY II, these circumstances turned out to be a major difficulty regarding the suitability of models for various hardware hierarchies, legacy systems and other obsolescence issues. The challenges are met with incremental upgrades in short iterations. To successfully operate the 42 different beamlines of BESSY II, an agile work cycle supported by kanban process planning was established and will be followed strictly.
The intention of this talk is to discuss the major bottlenecks in beamline control and how BESSY II will approach those obstacles with future developments within the EPICS community.
The exploration of the QCD phase diagram in the region of high baryon densities is the primary goal of the physics program of the Compressed Baryonic Matter (CBM) experiment at FAIR.
Charged hadron identification will be performed by a time-of-flight (TOF) measurement with a wall of RPCs.
EPICS is used for CBM-TOF slow control, which includes High Voltage, Low Voltage, Gas System, Front-End Electronics.
High voltage is supplied by a CAEN HV crate. Low voltage is supplied by a Mean Well rack system, and a power distribution box designed by GSI is used to distribute it. A gas system designed by GSI is also controlled by EPICS. To monitor the environment (temperature, pressure and humidity) in the TOF detector, an IOC based on IPbus and the GBTx link was designed.
During the mCBM beam time at GSI in March 2019, the control system was used and worked well.
I&C design of ITER diagnostics is performed in a modelling tool called Enterprise Architect (EA), using UML profiles defined for diagnostics I&C. The information entered in these models is detailed enough to provide a decent export to the ITER static configuration database (SDD) and EPICS tools. The example will show how to convert EA models to SDD projects and EPICS DB files.
Particle accelerators are complex machines built from thousands of devices. For a neophyte, grasping how they work is challenging.
Using a virtual reality application for outreach to the general public is a way to raise the interest of a broader audience. This technology immerses the user in a new environment, with game-like elements to guide and highlight the experience. It can also give a much better sense of scale and architecture than a video or an article.
Over the past decade, the J-PARC Main Ring has become more sophisticated, and more stable operation is required. Along with this, demands arose from equipment experts and users for acquiring detailed, real-time information on the apparatus from the office LAN. On the other hand, a two-layered firewall quarantines the accelerator control network, so any manipulation of accelerator equipment from the office network is prohibited, whether intentional or not. A one-way gateway system was constructed which allows Channel Access from the office network to the accelerator control network through the firewall.
A new version “edmq” of the edm display editor/manager is being developed. edmq is intended as a plug-in replacement for edm.
edmq uses the Qt graphics system but is not predicated on Qt.
EPICS offers a collection of software tools for setting up distributed control systems for experimental physics projects.
In order to establish EPICS in the Max Planck Society (MPG) for device control and data acquisition as an alternative to commercial solutions, this project aims to increase the visibility of EPICS within the MPG.
It is planned to improve the documentation of EPICS and to create training materials. Demo hardware will be set up which can be borrowed by the institutes to gain experience with the software.
At the CLS we use a small part of an in-house development called ControlSystemWeb as an easy notification system for changes in PVs of interest, via texting or email. Using this notifier, plus IFTTT, Node-RED, and Google Home recipes, it is possible to have a home device announce these PV changes and, further, to turn this service on and off through voice commands.
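A minimal sketch of the idea, assuming pyepics on the client side and an IFTTT Maker webhook; the PV name and webhook key are placeholders, and this is not the ControlSystemWeb code itself.

```python
# Watch a PV with pyepics and forward value changes to an IFTTT Maker
# webhook; PV name and webhook URL/key are hypothetical placeholders.
import json
import urllib.request
from epics import camonitor

IFTTT_URL = "https://maker.ifttt.com/trigger/pv_changed/with/key/YOUR_KEY"  # placeholder

def on_change(pvname=None, value=None, char_value=None, **kw):
    """Called by pyepics on every value change; POST it to the webhook."""
    body = json.dumps({"value1": pvname, "value2": char_value}).encode()
    req = urllib.request.Request(
        IFTTT_URL, data=body, headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)

camonitor("BL1607:DemoTemp", callback=on_change)  # hypothetical PV name
input("monitoring; press Enter to quit\n")        # keep the process alive
```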
The CLS runs approximately 1000 EPICS IOC applications on 400 hosts. Our goal is a single screen summarizing the real-time status of every IOC application. However, it is difficult to identify individual IOC applications based on their beacons. This talk summarizes the difficulties encountered in achieving our goal and describes a technique to map EPICS beacons to legacy IOC applications to facilitate monitoring without modifying code or even restarting them.
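For illustration only (not the CLS tool), the sketch below shows the underlying idea: listening for CA beacons on their default UDP port and recording which host and port they come from. The 16-byte header layout used here follows our reading of the Channel Access protocol specification and should be treated as an assumption.

```python
# Listen for Channel Access beacons (CA_PROTO_RSRV_IS_UP, command 13) on
# UDP port 5065 and record the sender. Header layout (big-endian: command,
# payload size, server TCP port, count, beacon ID, server address) is our
# reading of the CA protocol spec -- treat the field meanings as an assumption.
import socket
import struct
import time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 5065))            # default CA beacon port

seen = {}                        # (host, tcp_port) -> last beacon time
while True:
    data, (host, _) = sock.recvfrom(1024)
    if len(data) < 16:
        continue
    cmd, _payload, tcp_port, _count, beacon_id, _addr = struct.unpack(
        "!HHHHII", data[:16])
    if cmd != 13:                # not a beacon message
        continue
    seen[(host, tcp_port)] = time.time()
    print(f"beacon {beacon_id} from {host}:{tcp_port} ({len(seen)} servers seen)")
```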
pvAccess in the real world
At the Fritz-Haber-Institute, PVAccess is used for new experiments to realize high data rates (streaming).
Using RTEMS as the real-time operating system, a custom PVData structure captures data deterministically from a newly constructed fast scanning tunnelling microscope. The recorded data is sent with PVAccess to a python server.
The python server stores the data in a special file format for online analysis. The metadata is stored as a Normative Type structure in an instance of the Archiver Appliance.
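The abstract does not name the Python library used on the server side; as a hedged illustration, a minimal pvAccess server streaming array data could look like the following sketch using p4p, one common Python pvAccess binding, with a hypothetical PV name.

```python
# Minimal pvAccess streaming server sketch using p4p; the PV name
# "FHI:STM:Stream" is a hypothetical placeholder, and this is not the
# actual FHI server implementation.
import random
import time
from p4p.nt import NTScalar
from p4p.server import Server
from p4p.server.thread import SharedPV

# Normative Type scalar-array of doubles, as a stand-in for one data frame.
pv = SharedPV(nt=NTScalar('ad'), initial=[])

with Server(providers=[{"FHI:STM:Stream": pv}]):
    while True:
        frame = [random.random() for _ in range(1024)]  # placeholder samples
        pv.post(frame)            # publish to all pvAccess subscribers
        time.sleep(0.1)
```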
pvAccess in the real world
MARTe2 (https://vcis.f4e.europa.eu/marte2-docs/master/html/) is a C++ modular and multi-platform framework for the development of real-time control system applications. The architecture promotes a bold separation between the interfacing hardware implementation (e.g. data acquisition cards), the environment details (e.g. site specific libraries for monitoring and data archiving) and the real-time algorithms (i.e. the user code). The previous version of the framework was deployed in many fusion real-time control systems, particularly in the JET tokamak. The new version of the framework follows a Quality Assurance strategy that enables the integration of contributions from a large and heterogeneous development community, which includes developers with an academic background and industrial suppliers.
MARTe2 is being used as the application framework for the development of key ITER plant systems, such as the ECRH and the magnetics diagnostics, and of ITER-relevant test facilities. As a consequence, several components have been developed to allow interfacing the framework with many of the standard ITER data acquisition cards, with the real-time and data archiving networks, and with both the EPICS Channel Access and EPICS pvAccess protocols. These data-driven components can be readily deployed in any MARTe application and make it possible to monitor application variables, provide control references, and message and change the application state and configuration. These interfaces can also be used to deploy protocol adapters through proxying (e.g. between pvAccess and OPC UA).
This talk will provide an overview of the MARTe2 EPICS components design, complemented by examples and use-cases from running systems that are using such components.
The ITER plant configuration system is a component of the Control, Data Access and Communication (CODAC) Supervision (SUP) system. It is tasked to derive machine parameters from the planned experiment schedule, conduct a multi-stage engineering verification process involving a wide range of codes (e.g. electromagnetically induced forces on mechanical structures, scarce resource budget management, etc.), and eventually load machine parameters into the plant control system as part of experiment preparation.
The ITER plant configuration system interfaces to Machine Operation, to a heterogeneous set of data repositories (e.g. experiment schedule, machine geometry and condition, operating limits, live measurements, plant system self-description data, etc.), and to the hundreds of plant systems that compose the ITER machine.
The architecture and detailed design are being elaborated as a joint initiative between the ITER Organization (IO) and Fusion for Energy (F4E, the ITER European Domestic Agency) and will be submitted to a peer review process during the coming CODAC final design review scheduled in 2020. The architecture promotes a strong decoupling between variable and data type definitions, data access, transformation and verification codes, and the configuration workflow management. This strong decoupling enables offline composition and verification of the experiment schedule and distinct life-cycles and qualification processes for the various components, and it ensures a high degree of testability.
The detailed design is being substantiated with representative-scale prototypes. Transformation and verification codes take the form of C/C++, Matlab and/or Python functions, and are provided for execution by means of EPICS 7 (pvAccess) services which support dynamic discovery of interfaces and data types, and a concurrent execution model. EPICS 7 services are also deployed atop plant system interfaces and support a synchronous transaction model for structured configuration variables; interface adapters are tasked to propagate and complement the transaction with data integrity verification through to remote controllers. Interface adapters obviously cover the serialisation of structured pvAccess variables to EPICS records through Channel Access, the mapping to OPC UA servers, etc. An XML-based domain-specific language is proposed to describe and organise the configuration workflow process. Distributed MARTe2 applications are used as workflow executors.
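As a rough illustration of functions exposed as EPICS 7 pvAccess services, the sketch below uses the p4p RPC helpers in Python; the provider name, PV prefix and function are invented for illustration and are not the actual ITER codes.

```python
# Hedged sketch of a Python function exposed as a pvAccess RPC service via
# p4p; the provider/prefix/method names are invented for illustration.
from p4p.nt import NTScalar
from p4p.rpc import rpc, quickRPCServer

class VerifyExample:
    @rpc(NTScalar('d'))                 # result is returned as an NTScalar double
    def scaleCoilCurrent(self, value, gain):
        # Trivial stand-in for a real transformation/verification code.
        return float(value) * float(gain)

# Blocks and serves the method as PV "demo:verify:scaleCoilCurrent".
quickRPCServer(provider="verify.example",
               prefix="demo:verify:",
               target=VerifyExample())
```

A method exposed this way can then be invoked, for example, with the pvcall utility shipped with EPICS 7: `pvcall demo:verify:scaleCoilCurrent value=2 gain=3`.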
The presentation aims at clarifying the context, presenting the architecture and its substantiating arguments and technical choices, as well as the successes and difficulties encountered in using EPICS 7 during the prototyping activities.
Everything else
A short presentation of the main discussion topics and results of the Web Services Workshop on Monday, 03 June.
Special events
Drivers, devices and details
We have designed a bunch-by-bunch beam position monitor (BPM). The BPM consists of a four-button pickup and a four-channel 500 MS/s 14-bit ADC electronic system. The core processor of the electronic system is a Xilinx ZYNQ-7000 SoC, equipped with a dual-core ARM Cortex-A9 and Kintex-7 based programmable logic. EPICS and the Linux operating system will run on the ARM core of the ZYNQ. This report will introduce the system's overall architecture, some test results, and the status of our development progress. We also plan to develop an embedded EVR in the foreseeable future.
Introducing the lua EPICS support module, an integration of the Lua scripting language into EPICS, and an overview of the features that the module provides.
Keywords: EPICS, White Rabbit, LIPAc, Interlocks, RF control, amplifiers, control system
The Linear IFMIF Prototype Accelerator (LIPAc), located in Rokkasho, Japan, is a linear accelerator whose objective is to validate the concept of the final IFMIF accelerators. LIPAc includes 18 RF power chains, distributed over a nine-metre RFQ cavity with 8 chains, 2 buncher cavities with one chain each, and a superconducting cavity with the remaining 8 chains. In terms of beam power, LIPAc aims to reach a (9 MeV, 1.1 MW) deuteron beam at 125 mA in CW. At present the project is finishing the commissioning of the second of its four phases, which consists of the Injector + RFQ (radio-frequency quadrupole) + MEBT (Medium Energy Beam Transport) + DPlate (diagnostics plate) + LPBD (Low Power Beam Dump).
One of the state-of-the-art features of this accelerator is the distribution of RF, timing synchronization and precise diagnostic timestamps through White Rabbit technology. White Rabbit allows each chain's frequency, amplitude and phase to be controlled independently, significantly simplifying facility commissioning and tuning.
White Rabbit (WR) is a fully deterministic Ethernet-based network for general-purpose data transfer and synchronization. It was born at CERN for time and frequency dissemination to up to 1000 nodes, demonstrating the capability to provide synchronization accuracy better than 100 ps and frequency dissemination stability better than 2 ps. It is self-calibrating and allows plug-and-play deployment thanks to periodic temperature compensation.
The LIPAc LLRF systems have been designed with the aim of being flexible and precise, including multiple features in order to be as stable and accurate as possible. One of the new features is the possibility of correlating RF control interlocks between different control systems using EPICS and CSS/BOY combined with WR. Using the PVs and the timestamps supplied by WR, it has been possible to create an interlock management system for each LLRF, which presents the interlock data in an ordered way. The aim is to obtain a correlated and time-structured history of interlock data for the whole RF control system. The data are timestamped with WR precision and can be displayed graphically in CSS or stored for post-analysis.
This feature makes it possible to monitor and correlate the interlocks occurring in the system, whether running stand-alone or as a complete system in which all the LLRFs are synchronized with sub-nanosecond precision, thereby improving the reliability of the system by revealing frequent points of failure or vulnerable points in the accelerator, RF amplifiers, subsystems, etc.
First LLRF synchronization results on the combined cavity using multiple outputs, as well as other preliminary results, will be presented.
Development efforts are affected by the increasing number of applications that rely on client-server-based control systems, so new research lines appear in order to simplify the implementation and integration of device drivers in distributed control environments.
NDS has been designed as a control-system independent layer (namely the NDS Core). The interface to a specific control-system is delegated to a second layer, the NDS control-system layer (for instance NDS EPICS for the EPICS control system). Therefore, standardized device drivers may be achieved for multiple control systems with minimum development efforts.
This work focuses on our contribution to NDS3 in order to provide a complete class-based skeleton for data acquisition and timing boards that ensures homogeneous device drivers and easier development and usage. Both a testing-purpose control-system layer and an EPICS one are available.
New nodes have been implemented to support basic DAQ and timing functionalities. These nodes have been validated through the implementation of device drivers for two timing cards and two acquisition boards. Additionally, some special nodes are being implemented for interfacing device drivers with specific software layers such as the Data Archiver Network (DAN) or the Synchronous Data-bus Network (SDN), available at ITER for efficient data archiving and real-time data synchronization, respectively. All developments are validated by means of unit tests implemented in C++ (test control system) and PyEpics (EPICS control system).
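As an illustration of the shape of such PyEpics-based validation tests (the real NDS3 test suites are not reproduced here), a minimal sketch with hypothetical PV names:

```python
# Sketch of a PyEpics-based driver validation test; the PV names are
# hypothetical placeholders, not the actual NDS3 test-suite records.
import unittest
from epics import caget, caput

class AcquisitionNodeTest(unittest.TestCase):
    def test_start_stop_state_machine(self):
        # Drive the device state machine through an ON/OFF cycle and
        # verify that the reported state follows.
        caput("TEST-DAQ:StateMachine-setState", "ON", wait=True)
        self.assertEqual(
            caget("TEST-DAQ:StateMachine-getState", as_string=True), "ON")
        caput("TEST-DAQ:StateMachine-setState", "OFF", wait=True)
        self.assertEqual(
            caget("TEST-DAQ:StateMachine-getState", as_string=True), "OFF")

if __name__ == "__main__":
    unittest.main()
```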
Currently, huge efforts are focused on improving the user experience and covering the identified functionality gaps.
Finally, the interface between NDS3 and areaDetector, to achieve support for image acquisition devices, must be highlighted as one of the main missing pieces to be implemented in the future.
This work has been carried out by GMV and UPM, and supervised by ITER Organization.
The usage of FPGA-based DAQ systems in instrumentation and control systems for Big Science experiments has been growing over the last years. The combination of flexibility and performance that FPGAs give to DAQ and processing systems allows them to be applied in multiple areas such as data acquisition, processing algorithms, complex timing and trigger mechanisms, interlock operations, and machine-learning applications. At present, three options exist for developing with FPGAs: hardware description languages (VHDL, Verilog/SystemVerilog), graphical languages like LabVIEW/FPGA, and high-level approaches such as HLS languages or OpenCL. Of these, HDLs are the most complex to use, but they offer the highest flexibility and control over the implementation while not being bound to a particular hardware manufacturer. Alternatively, LabVIEW/FPGA is easier to use, but it is constrained to specific hardware devices. Lastly, HLS languages and OpenCL enormously simplify the hardware description using high-level languages like C/C++, which are also hardware-agnostic. Additionally, standardization adds value to high-level languages, reducing development times to the extent that modules or complete software layers can be reused.
The work presented here is a set of methods and tools that allow developing applications for FPGA-based instruments in a standardized way, using the following elements: an OpenCL compliant Board Support Package for a PCIe device; an OpenCL interface to communicate with high-speed AD/DA converters using the JESD204B standard; a set of OpenCL Kernels to support common DAQ functionality, capable of acquiring and processing at high sampling rates; and a standardized software interface, including the Nominal Device Support software layer supported by ITER that integrates the whole solution with EPICS.
A discussion is provided on how this methodology simplifies integration and improves maintainability compared to other development languages and tools.
During the past year we have developed EPICS drivers for several different data acquisition boards that support sampling rates from 100 MHz up to the GHz range. Common properties of these boards include the MTCA.4 form factor and, with it, the PCIe bus, while their differences mostly lie in the number of input channels, sampling rates, driver APIs, etc.
The EPICS drivers were structured around the common purpose of these boards – digitizing analogue signals – with board-specific functionality added later on. Such a common code base reduces the amount of code, allows standard documentation to be created, and shortens the time new users spend familiarizing themselves with the application. There are different ways of creating drivers with such a standardized structure. In the examples presented, we used NDS3, a framework based on asyn and targeted towards DAQ devices.
The presentation will give an overview of NDS3-based EPICS drivers for 4 different boards: Teledyne ADQ14 and ADQ7, IOxOS ADC3110 (on an IFC1410 carrier), and VadaTech AMC523 (with MRT523 RTM and MZ523B mezzanine card).
Special events
Lots of lightning talks
The 1-wire system at NSLS-II provides an example of low-cost measurement of a large number of temperature, humidity and other sensor signals. 1-wire sensors work well with any microcontroller using a single digital pin, and multiple sensors can even be connected to the same pin; each one has a unique 64-bit ID burned in at the factory to differentiate them. This presentation will focus on the implementation and EPICS interface of the 1-wire system at NSLS-II.
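For readers unfamiliar with 1-wire, the sketch below shows how such temperature sensors are typically read on a Linux host (e.g. a Raspberry Pi with the w1-gpio/w1-therm kernel modules); the actual NSLS-II readout chain may differ, so treat this as a generic illustration.

```python
# Generic 1-wire (DS18B20) readout on a Linux host with the w1-gpio/w1-therm
# kernel modules loaded; not the NSLS-II implementation.
import glob

def read_ds18b20(device_dir):
    """Parse one DS18B20 'w1_slave' file; returns temperature in deg C."""
    with open(device_dir + "/w1_slave") as fh:
        crc_line, data_line = fh.read().splitlines()
    if not crc_line.endswith("YES"):      # CRC check failed, discard reading
        raise IOError("bad CRC from " + device_dir)
    return int(data_line.split("t=")[1]) / 1000.0

# Each sensor appears under its factory-burned 64-bit ID (family code 28 for
# DS18B20), so several sensors on one pin are naturally distinguished.
for dev in glob.glob("/sys/bus/w1/devices/28-*"):
    print(dev.rsplit("/", 1)[1], read_ds18b20(dev), "degC")
```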
The real-time radiation dose monitoring system is based on EPICS; its hardware mainly consists of a Raspberry Pi and an XBee wireless communication module. The system implements a monitoring function that wirelessly transfers radiation dose values from the dose meter to a local PC over a distance.
In a cooperation between ITER and HZB/BESSY II, a Device Support for the OPC UA industrial SCADA protocol is under development. Goals, status and roadmap will be presented.
The ELVEFLOW OB1 uses piezoelectric technology, can deliver pressures from 200 mbar to 8 bar, and can measure flows as low as 1.5 µl/min. It is popular in microfluidic mixing experiments. The vendor provides a Windows-based API; an asyn driver has been developed to support integration with EPICS.
The SwissFEL Low-level RF (LLRF) system controls more than 30 RF stations based on EPICS version 3.14.12. Besides performing general device controls, like configuring the RF/LLRF devices and acquiring RF data (e.g. RF waveforms), the LLRF system performs other complicated functions like RF signal processing, feedback and feedforward control, and RF system calibration and optimization. All these functions significantly increase the complexity of the LLRF software. In order to better architect the LLRF software within the frame of EPICS, two object-oriented frameworks were developed, in C++ and Python respectively, aiming at the design of complex and clearly architected software based on EPICS. The C++ framework can be used to design EPICS modules with a generic ready-to-use device support, while the Python framework can be used to design soft IOCs acting as Channel Access (CA) clients using the pyCafe CA library developed by PSI. Both frameworks tend to hide the details of EPICS, which enables programmers to start their development without knowing much about EPICS. This presentation will give a brief introduction to the frameworks, and their applications in the SwissFEL LLRF system will be highlighted.
EPICS applications are typically hosted on vxWorks, GNU/Linux, RTEMS, Solaris, MS Windows and MacOS operating systems. However, one of the most widely used operating systems is not included in this list: Android.
One could think that Android is typically out of the scope of EPICS environments, but looking from the opposite point of view: could manufacturers who base their products on Android take advantage of EPICS environments? The answer is yes, they could.
This work shows a prototype Android-based application that embeds the well-known modbus IOC. The system is based on an Octa-Core board with ARM Cortex A-53 processor that runs Android 5.1. The application allows starting/stopping the IOC as well as setting the network parameters of the modbus server. The IOC shell is always displayed and commands can be sent to the IOC while it is running. Finally, the application also includes an indicator to warn when anomalies are detected, for instance, if there is no communication with the modbus server.
An upgraded version of this driver is presently used in the EPOWERSYS range of high-stability power converters. Eleven units of the standard 300 A, 20 V model are to power the quadrupole magnets in the Medium Energy Beam Transport section of the linac at the European Spallation Source (ESS) in Lund, Sweden.
The Tango DataBrowser is an application developed as open source in a collaboration between SOLEIL and ANSTO. It allows the visualisation of scientific data files and control system data through a plugin architecture that supports HDF, NeXus and Tango/Tango Archiver.
We extended this tool with plugins for EPICS process variables (Channel Access, .db files) and the EPICS Archiver Appliance (Google protocol buffer .pb files).
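The plugin parses the .pb files directly; as a point of comparison, the Archiver Appliance also exposes an HTTP/JSON retrieval interface, sketched below in Python with placeholder host and PV names.

```python
# Fetch archived data from an Archiver Appliance over its JSON retrieval
# endpoint; the host name and PV below are hypothetical placeholders.
import json
import urllib.parse
import urllib.request

def fetch_archived(base_url, pv, start_iso, end_iso):
    """Return [(epoch_seconds, value), ...] for one PV and time range."""
    query = urllib.parse.urlencode({"pv": pv, "from": start_iso, "to": end_iso})
    url = f"{base_url}/retrieval/data/getData.json?{query}"
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)          # list with one entry per PV
    return [(pt["secs"], pt["val"]) for pt in payload[0]["data"]]

samples = fetch_archived("http://archiver.example.org:17668",   # placeholder
                         "SR:C01:Demo",                          # placeholder PV
                         "2019-06-01T00:00:00Z", "2019-06-02T00:00:00Z")
print(len(samples), "points")
```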
Accelerator controls at the ISIS Neutron Source currently uses the Vista controls system. We are beginning development of a new EPICS control system for our Front End Test Stand (FETS) to evaluate a potential migration of our existing controls system to EPICS. We present a brief overview of accelerator controls at ISIS, our progress in evaluating EPICS, and our plans to handle future migration.
Everything GUI
An update on the latest developments in Control System Studio and Phoebus.
CS-Studio 4.6 is scheduled for release in May 2019; the highlights of this release include the migration to OpenJDK 11 and JavaFX.
Phoebus is a replacement for the Eclipse RCP framework for CS-Studio applications and services. The framework uses core Java functionality with JavaFX to provide a reliable, performant, and thinner alternative to Eclipse and SWT.
In order to ease the migration from Eclipse to Phoebus, the CS-Studio 4.6 release includes Phoebus integration plugins with the functionality needed to support a smooth transition.
Information about initial development of a web runtime for the CS-Studio Display Builder.
Archive, alarm, log and similar services
The IRFU [1] software control team (LDISC) is involved, from feasibility studies to equipment deployment, in many experiments of different sizes and running times. For many years, LDISC has been using PLC solutions for the control part of experiments, together with two different SCADA systems.
With MUSCADE, LDISC has developed a set of tools that gives PLC developers an easy and fast way to configure the SCADA. As EPICS projects are growing in our department, we are now working on adapting those tools to EPICS.
REFERENCES
[1] Irfu, http://irfu.cea.fr
[2] MOONARCH: Memory Optimizer ON Archiver Appliance
[3] CAFÉ Java: Channel Access For Epics Java
When porting the RCP-based CS-Studio alarm system tools to Phoebus, their architecture, which used to be based on a combination of RDB and JMS, was reconsidered. The new implementation, based on Kafka, offers several advantages.
An effective and useful alarm system is one which produces alarms that are well defined both in terms of their causes and the corrective actions to be taken, while at the same time avoiding overwhelming the user with noisy and numerous alarms. To achieve these goals it is imperative to ensure that the alarm configuration is properly populated, tuned, and maintained.
The new Phoebus alarm system based on Kafka allows for easy development and deployment of microservices to facilitate configuration management of the alarm server.
Two such services are:
The alarm logger, a service which records all the alarm state changes, user actions, and configuration changes into an Elasticsearch backend for analysis and reporting.
The alarm config logger, which records configuration changes to the alarm configuration.
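As a rough illustration of how such a microservice taps the alarm traffic, here is a minimal consumer sketch using the kafka-python client; the broker address and topic name are placeholders, and the "state:"/"config:" message-key convention is our assumption about the Phoebus message layout.

```python
# Minimal Kafka consumer sketch for alarm traffic; broker address and topic
# name are placeholders, and the "state:"/"config:" key convention is our
# assumption about how Phoebus keys its alarm messages.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer("Accelerator",                 # one topic per alarm config
                         bootstrap_servers="localhost:9092")

for msg in consumer:
    key = msg.key.decode() if msg.key else ""
    doc = json.loads(msg.value) if msg.value else {}
    if key.startswith("state:"):        # alarm state change -> index/report it
        print("STATE ", key[len("state:"):], doc.get("severity"))
    elif key.startswith("config:"):     # configuration change -> log it
        print("CONFIG", key[len("config:"):])
```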
Both buses will go to the restaurant. One of them will continue to Aix.
Special events
Drivers, devices and details
CEA Irfu Saclay is contributing to many facilities (ESS, IFMIF, SARAF…) through several collaborations and, especially in the last few years, to different RF conditioning test stands. Between 2015 and 2020, 10 RF test stands will have been installed. Due to different hardware choices made by the projects, we do not necessarily use the same hardware. However, the scientific need is very similar, which is why we have developed several generic EPICS modules dedicated to RF tests, from pulse analysis and calibration to conditioning sequence automation. I will present the modules implemented and how we use them in the different projects.
Reflections on the costs and benefits of developing and testing hardware drivers using software simulations. Experiences with Serial, Ethernet, PCI, and VME devices.
StreamDevice has received quite a few updates over the last years. In particular, version 2.8 introduces several new features. More changes are planned for the future.
Support modules and applications
Some background on this session, and a few questions that the talks might help answer.
The EPICS build system at PSI differs from the standard EPICS way of building IOCs. We build all software modules (e.g. drivers) as loadable modules and load them into softIoc at run-time. This requires a system for dealing with versions and dependencies during build, deployment, and IOC startup.
The EPICS development and deployment environment at ESS is based on the concept developed at PSI, adapted to our specific challenges. These include dealing with many external developers in geographically separated locations, with different environments and levels of expertise. The environment, E3, addresses a number of issues we had in our first attempt while keeping the same core concepts and goals, which include avoiding re-building IOCs from scratch and helping developers focus on the functionality of the IOCs, among many others.
Support modules and applications
Manually maintaining configure/RELEASE files and building EPICS support modules in the correct order is an error-prone and tedious task, especially if different versions of different sets of modules are to be kept around in order for old applications to remain buildable.
SUMO was developed as a tool to automate all of this work. A single JSON file with a simple schema is used to specify dependencies and source locations for each version of each support module in use, keeping the end-application in control of the exact set of module versions it wants to use. The tool is highly configurable and can be easily adapted to different work-flows.
The NSLS-II Debian Repository was updated to include recent contributions to the epicsdeb project on GitHub. The repository is now available at the public mirror https://epicsdeb.bnl.gov/debian/. Notable news:
We will continue our efforts to accommodate GitHub updates to support external repository users and internal software update needs. Also, to enable better flexibility and greater control over the environment that runs EPICS software, a solution is being investigated to containerize Base, modules, and IOCs in such a way that a working setup can always be easily recreated and redeployed. A prospective approach is a (set of) lightweight container(s) running in abstraction from the host OS. Enabling persistence via mechanisms like shared volumes and leveraging container management can open a route to the use of modern CI/CD and better infrastructure standardization.
SNACK stands for SiNgular Application Configuration Kit; it is an Ansible-based software configuration and deployment tool used at NSLS-II, primarily on the accelerator side, to provide a means of retaining and deploying EPICS IOCs. Despite the solution's versatility, rich feature set, and reliable operation in deploying close to two hundred application instances, there is not sufficient traction in using the tool for beamline deployments. Some lessons learned as a result of this project will be presented.
The Facility for Rare Isotope Beams (FRIB) project follows a standardized and automated process to continuously deploy and deliver EPICS IOCs from development to test and production environments. Unfortunately, building, testing, packaging, and deploying software can be a time-consuming and error-prone process, so at FRIB we use a set of different tools to make this process easy, safe, flexible, and fast. This process includes the use of a central code repository, a continuous integration server performing automatic builds and running automated tests, automated software packaging, and a software configuration management tool to configure how new software must be deployed/delivered to either a test or a production environment. The high degree of reproducibility as well as extensive automated tests allow us to release more frequently without jeopardizing the quality of our production systems.
The ITER Instrumentation and Control System comprises all hardware and software required to operate ITER. CODAC (Control, Data Acquisition and Communication) provides centralized services to plant systems during installation, testing, integration and operation. Such CODAC services include the operator interface BOY, the alarm system BEAST, the archiving system BEAUTY and the electronic logbook OLOG.
For production level CODAC services deployment and configuration, the objectives are:
ITER I&C applications, developed world-wide, are designed to be structured and built in a similar manner and packaged in a compatible way to allow smooth integration. This design makes use of the following technologies:
The scope of this presentation will cover the following topics:
Special events
Support modules and applications
We present rsync-dist, a tool to deploy build artefacts from an EPICS application to remote servers. It uses ssh for authentication and authorization, log files for auditing, and supports unlimited rollback. In contrast to standard CI tools, its use is decoupled from source version control to allow developers to easily install test and debug versions of their software.
3.15, 7.0 and beyond
This talk will discuss the status of the PVAccess Gateway project and results of performance and stress tests at SLAC.
The typical EPICS system consists of three major categories of hardware assets: the local area network (LAN), the operator interfaces (OPI), and the Input/Output Controllers (IOCs), where scientific data such as process variable (PV) values, limits and metadata reside. Most institutions' missions require that the system be up and running over 95% of the time during service periods, and without rebooting for several months. Therefore, any event that could cause service disruption represents a serious concern and must be prevented. However, the current protection measures for EPICS system assets consist only of border security techniques, which do not prevent a user with access from controlling the entire system once logged in.
Our work proposes to analyze the security challenges facing the EPICS systems. Specifically, we look at three threats models and their potential consequences to the system. First, we consider users who may gain inappropriate access to IOCs and accidentally disturb operations, compromise the integrity of data and/or the validity of scientific experiments. Next, we consider malicious users that launch untargeted attacks by installing self-propagating malware, starting with the first internet-facing hardware asset encountered. Such an event can also lead to a compromise of the IOC database. Finally, we look to study targeted attacks against the hardware assets aiming to disrupt activities at the institutions. These types of attacks are the most serious and costliest as they can lead to a sabotage of the software, damage to equipment, takeover of the entire system and/or launch of distributed denial of service (DDOS) by the adversary.
Our work intends to transform the EPICS software into a resilient, secure cyberinfrastructure that will immediately benefit the high-energy physics community. As such, we plan to integrate security throughout the EPICS software development life cycle using state of the art techniques for vulnerability discovery and for integrity protection of EPICS software products. Next, our work aims to improve and leverage the operating system security services. Common cryptographic libraries will be built for EPICS and memory protection will be enhanced by deploying advanced techniques used in general-purpose operating systems. Finally, we intend to focus on improving network security for EPICS protocols by formally modeling and analyzing EPICS PV gateway, improving security logging and designing a network Intrusion Detection System (IDS) for EPICS PV Gateway.
Report on recent changes to EPICS Base and related modules.
Update on the always underestimated activities of the EPICS Council.
An EPICS Python working group has been established.
3.15, 7.0 and beyond
EPICS uses a set of Continuous Integration (CI) tools to support the development of the Core package and support modules. An overview of the tools used and their scopes will be presented.
A recent development tries to provide a common set of scripts for these CI services. This allows support modules to be automatically built and tested against multiple EPICS versions, on different OSs, using different compiler versions, with minimal configuration effort.
Special events
Thank you for attending the EPICS Collaboration Meeting June 2019!
The next EPICS Collaboration Meeting will be held in conjunction with ICALEPCS 2019 in Brooklyn, New York.
This short talk will outline the next meeting, how it relates to ICALEPCS, and the general area of the collaboration and the conference.
Hands-On demonstration and training session on EPICS 7, using CS-Studio and areaDetector
areaDetector Training