Cosylab hosted the EPICS Collaboration Meeting in September 2022. The meeting, held in Ljubljana, was a hybrid event with both in-person and virtual attendance.
All virtual material is available either in the timetable or on the video material subpage.
The meeting took place at the Cosylab premises and at the premises of our co-host the Jožef Stefan Institute, Ljubljana, Slovenia.
Getting visitor badges and name tags
Kay Kasemir will host a workshop on:
1) "PVA Introduction"
2) "Java PVA API"
More information is available on the workshop website:
https://controlssoftware.sns.ornl.gov/training/2022_EPICS/
Those interested in participating should download the VM and start it up once ahead of the workshop (as described on the website).
Please check the website where you can find more information about the preparation and the presentations.
Jukka Pietarinen will organize a Timing workshop for MRF Timing System users to discuss issues, solutions, and future requests and requirements.
Getting visitor badges and name tags
Special events
The Spallation Neutron Source at Oak Ridge National Laboratory has been operating since 2006. The integrated EPICS-based control system has performed well but now requires a variety of upgrades to maintain high availability and support facility upgrade projects. These upgrades are largely possible due to the flexibility of the EPICS architecture. This talk covers recent system improvements along with plans to keep the control system robust and viable well into the future.
Fermilab has historically not been an EPICS site. A new superconducting linear accelerator, PIP-II, is being built as a high intensity proton source to serve the accelerator complex and EPICS will be used as its framework for controls. I will present how we build, provide template IOCs, test, deploy for different platforms using Continuous Integration and Deployment practices. I will also discuss applications, integration with the existing control system, and IOC management.
Accelerator Controls at the ISIS Neutron and Muon Source is transitioning from our current Vsystem control system to EPICS. Operations must continue uninterrupted during our transition, requiring the two control systems to be run simultaneously. We present our work in bridging between the two control systems, automating the conversion of our existing control screens, creating our first novel control screens, and in developing our first IOCs (communicating via the CIP protocol). A number of decisions about our preferred EPICS configuration (e.g. a preference for PVAccess, containers, etc.) have been made while some remain deferred (e.g. archiving, alarms). We also discuss the issues we have encountered moving to EPICS.
The ITER project is in full construction, with more and more support systems in commissioning and 24/7 operation. The EPICS-based control system is about 15% complete towards first plasma, measured in terms of energized instrumentation and control racks or integrated process variables.
In this talk we report on the current status of the ITER project with emphasis on the integration, commissioning and operation of in-kind deliverables. We also outline the plan for the coming years.
CHIMERA is being built in Sheffield UK by the Culham Centre for Fusion Energy (UKAEA). It will be the only machine in the world able to test components under the unique combination of conditions encountered in large fusion devices. The SCADA control software will use EPICS and is being produced by Observatory Sciences under contract to Jacobs Clean Energy.
Diamond Light Source has been operational since 2007 and expanded to now include around 32 photon beamlines, each with its own unique set of instrumentation and capabilities. All of these are operating with EPICS Controls Systems. This talk gives an overview and status of the current controls systems in production at Diamond.
The future of Diamond is envisaged to include a major upgrade, commonly known as Diamond-II. This upgrade will replace the existing machine with a new 4th generation accelerator, along with upgrades to existing beamlines and a new suite of flagship beamlines, ensuring that Diamond will remain a world-class facility. This talk will include current ideas and plans to evolve and upgrade the Controls Systems for Diamond-II.
As ITER construction continues, the importance of high-level applications for integrated control and monitoring increases. ITER Operation Applications are being developed to address these needs. The Pulse Schedule Preparation System (PSPS) and Supervisory System (SUP) combine to provide coordinated, well-controlled and yet flexible solutions for configuration and operation of all plant systems at ITER. This talk will provide an update on the current development status of these systems.
Lots of lightning talks
Many industrial process control devices, such as PLCs (Programmable Logic Controllers) and HMIs (Human–Machine Interfaces), are in use at the ISIS Neutron and Muon Source and are integrated with our current VISTA control system using the FINS (Factory Interface Network Service) protocol. Recently a project to replace old PLCs was started to coincide with the VISTA-to-EPICS migration, so a decision was made to integrate the new PLCs with EPICS. After investigation, the CIP protocol (Common Industrial Protocol) was chosen over FINS. This talk will describe work done on the implementation of a PVAccess-based EPICS interface written in Python that periodically reads an array of CIP structures containing all related PV properties and populates them into PVAccess records.
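As a rough illustration of this approach (not the ISIS code itself; the structure fields and PV names below are hypothetical), the periodic read loop boils down to mapping each CIP structure onto a table of PV properties:

```python
import time

def read_cip_structures():
    """Stand-in for a periodic CIP read returning PV property structures.

    A real implementation would fetch an array of structures from the
    PLC over CIP; here we return fixed example data.
    """
    return [
        {"name": "ISIS:TEMP1", "value": 21.5, "egu": "C", "desc": "Tank temp"},
        {"name": "ISIS:FLOW1", "value": 3.2, "egu": "l/s", "desc": "Coolant flow"},
    ]

def update_pv_table(pv_table, structures):
    """Populate or update the PV table from the CIP structures.

    In the real system each entry would back a PVAccess-served record;
    here we just keep the latest value and metadata per PV name.
    """
    for s in structures:
        pv_table[s["name"]] = {
            "value": s["value"],
            "egu": s["egu"],
            "desc": s["desc"],
            "timestamp": time.time(),
        }
    return pv_table

pvs = update_pv_table({}, read_cip_structures())
```

In the actual interface this loop would run periodically and push each update through a PVAccess server rather than into a plain dictionary.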
In a collaborational effort (ITER/HZB-BESSY/ESS/PSI), a Device Support for the OPC UA industrial SCADA protocol is under development. Goals, status and roadmap will be presented.
PyDM is an open-source PyQt-based framework for building user interfaces for control systems (https://github.com/slaclab/pydm). PyDM is developed and maintained at SLAC National Accelerator Laboratory. It provides an easy connection between GUIs and EPICS in an accessible programming language with an expansive community of developers. Recent updates include PVAccess support, improved graphing tools, and a more intuitive interface for adding Channel Access Process Variables (PVs) to widgets in Qt Designer. New features are in development, for example expanded PVAccess support and a custom GUI editor that gives users a more PyDM-focused development experience than Qt Designer. PyDM is an expanding toolset with many new features on the horizon.
EPICS (Experimental Physics and Industrial Control System) procServ is used extensively throughout the Canadian Light Source (CLS) and has proved incredibly valuable for diagnostics.
An initiative has been undertaken to port it to Python with the goal of having a single, (more or less) cross-platform drop-in replacement. Along the way, other features and options have been discovered.
This talk will highlight current progress, lessons learned and proposed future development, and will hopefully generate suggestions from the wider EPICS community for further additions.
The Compressed Baryonic Matter (CBM) experiment, dedicated to the study of the properties of strongly interacting matter, is now under construction at the Facility for Antiproton and Ion Research (FAIR) in Darmstadt.
In order to optimize the performance of the experimental subsystems, a small-scale mCBM demonstrator was installed for test purposes. As the future Silicon Tracking System (STS) is one of the core detection systems of CBM, the mSTS is now the subject of intensive investigation.
The CBM Detector Control System (DCS) focuses on supervision of the detector operating conditions, provides tracking of its vital parameters and data storage, and ensures safe operation of the mSTS. A novel approach based on containerization was implemented for these purposes. An Experimental Physics and Industrial Control System (EPICS) based system was configured and deployed to control, monitor and store process variables (PVs) associated with the hardware.
In this presentation, we will report results from the beam-test campaigns of 2020-2022, which allowed us to evaluate the performance of the system's software and hardware components.
The HBM MGCPlus [1] DAQ system is a modular system for acquiring a wide range of analogue signals at relatively high acquisition rates (up to 19.2 kS/s per channel) with a high channel count in a single chassis (up to 128). We used StreamDevice to apply settings to the device and asyn to acquire the DAQ data it publishes. The setup depends on the selection of connection and measurement cards.
[1] https://www.hbm.com/en/2261/mgcplus-data-acquisition-system/?product_type_no=MGCplus
Temperature control of the crystal optics is critical for meV-resolution inelastic scattering beamlines. For 1 meV photon energy resolution, the absolute temperature stability of the crystal optics must be below 4 mK to ensure the required stability of the lattice constant. The temperature control enables setting the absolute temperature of each individual crystal, making it possible to align the reflection energy of each crystal's rocking curve and thereby maximize the reflectivity of the crystals.
Using PT1000 sensors, a Keithley 3706A 7.5-digit scanner, and a Wiener MPOD LV power supply, we were able to achieve absolute temperature stability below 1 mK for several asymmetrically cut analyzer crystals. The EPICS ePID record was used to control the power supplies based on the PT1000 sensor input read by the scanner.
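A minimal sketch of how such a loop might be wired up in an EPICS database (record names and field values here are illustrative assumptions, not the beamline's actual configuration):

```
# Hypothetical temperature-control loop: a PT1000 readback drives an
# ePID record whose output sets the heater power supply current.
record(ai, "XTAL1:TEMP") {
    field(DESC, "PT1000 readback via Keithley 3706A")
    field(SCAN, "1 second")
    field(EGU,  "K")
}
record(epid, "XTAL1:PID") {
    field(INP,  "XTAL1:TEMP CP")   # controlled variable
    field(OUTL, "XTAL1:CURR PP")   # drive written to power supply
    field(KP,   "0.5")             # gains would be tuned per crystal
}
record(ao, "XTAL1:CURR") {
    field(DESC, "MPOD LV channel current")
    field(EGU,  "A")
}
```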
When storing archived data in a relational database (RDB), performance and data maintenance improve by using partitioning and automation that, for example, compresses older partitions. While this has been demonstrated with both Oracle and PostgreSQL, the approaches to partitioning and the scripts to automate it were site-specific and not generally shared.
TimescaleDB is an open-source extension to PostgreSQL that supports the automated partitioning of time-series data. The CS-Studio archive engine can write to TimescaleDB without any changes, and recent additions to the CS-Studio Data Browser allow stored procedures to take advantage of TimescaleDB for optimized data retrieval. This opens optimized RDB setups for archived data to everybody in a common approach that no longer depends on site-specific ideas.
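For illustration, the core TimescaleDB setup amounts to a couple of statements (the table and column names below follow the common CS-Studio RDB archive layout but should be treated as an example, and the retention interval is arbitrary):

```sql
-- Turn the sample table into a hypertable partitioned by time,
-- then compress chunks older than a month automatically.
CREATE EXTENSION IF NOT EXISTS timescaledb;
SELECT create_hypertable('sample', 'smpl_time');
ALTER TABLE sample SET (timescaledb.compress);
SELECT add_compression_policy('sample', INTERVAL '30 days');
```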
EPICS and Tango are two of the most popular control systems widely used in scientific facilities. They are based on different designs, but both allow interaction with many kinds of hardware devices. Sometimes support for a device is already provided by a Tango device server but integration with EPICS is not yet implemented. In such cases, the EPICS Tango Bridge is a perfect solution: it allows a Tango device server to be accessed via EPICS Channel Access. The solution enables a control system to use both EPICS and Tango interfaces with great reliability and robustness, and saves the development time needed to implement dedicated EPICS modules and IOCs.
Alarms are one of the most important aspects of modern control systems. Every control system can face unexpected issues that demand fast and precise resolution, and as a control system grows it requires more engineers to access the alarm list and focus on the most important entries. Our objective was to let users access alarms quickly, remotely, and without dedicated software. In line with current trends in the IT community, a web application turned out to be the most suitable solution. IC@MS - Integrated Cloud-ready Alarm Management System - is the extension of and web equivalent to the Panic GUI application known from Tango Controls. Importantly, our application also integrates with EPICS, allowing users to create and edit alarms that relate to EPICS PVs. IC@MS provides constant remote access from just a web browser, available on every machine including mobile phones and tablets, and access to the different functionalities can be restricted per user.
EPICS Tango Bridge and IC@MS raise accessibility and awareness, which are crucial for control systems.
Step scan, trajectory scan: ESS will use neither.
All data is collected in Apache Kafka (a data pool).
All detectors and cameras need to time stamp their data,
and so must the motion controllers.
We look at an ongoing development and scratch the surface:
Facility-wide timing, TwinCAT, EPICS
Detectors based on TimePix3 chips complement the direct-detection sensors already in use at the NSLS-II synchrotron, represented by the Lambda, Merlin, Eiger/Eiger2, and Pilatus detectors. Compared to the Eiger/Eiger2 and Pilatus families, the smaller pixel size will result in a 2x increase in signal-to-noise ratio, and the physical characteristics of the device will allow us to mount it further from the sample, resulting in another 4x increase in SNR. The total 8x increase in SNR will enable either the detection of 64x faster dynamics or an 8x better speckle contrast, depending on the specific application. The timing capabilities open new possibilities in particle and photon data collection, providing not only characterization of the synchrotron source but also science related to timed dynamics; they also allow neutron measurements such as energy determination using intrinsic time-of-flight and similar modes. The EPICS driver for the TimePix3 detector is a major step in the development of science at modern synchrotron and neutron facilities.
Special events
Recent developments and future plans for EPICS Base from the EPICS Core Developer's Group.
The operation of ITER is expected to happen not only directly in the ITER control room, but also to benefit from human capital from around the globe. Each ITER participating country could create a remote participation room and follow the progress of experiments in near real time, with scientists from all over the world collaborating on experiments as they are performed. This is called "remote participation" at ITER.
The ITER control system is based on EPICS, so it is natural to try to extend EPICS use to remote participation sites. The authors designed tests to find out how EPICS performance depends on network performance, with the goal of understanding whether an EPICS-based application can be used directly on the remote side. A special test suite was developed to see how many process variables (PVs) remote participants can use when running local operator screens or independent applications. Remote test participants were connected via a dedicated VPN channel from 2,600 km and 9,700 km away. The tests exercised reading a large number of PVs (up to 10,000) with update frequencies of up to 10 Hz, both for short runs and for long periods of up to 24 hours. The performance was compared with equivalent execution in a local network.
With a large number of PVs and frequent updates, the update latency on the remote participant's side, adjusted for the static network delay due to distance, was demonstrated to be comparable to the latency of local execution. This suggests that EPICS over long distances is quite usable for ITER remote participation tasks.
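The latency measurement itself can be reduced to comparing the source timestamp carried by each update with the local receive time, as in this simplified sketch (the actual test suite is more elaborate, and all numbers below are purely illustrative):

```python
import statistics

def update_latency(updates, static_delay):
    """Compute per-update latency adjusted for the static network delay.

    `updates` is a list of (source_timestamp, receive_timestamp) pairs,
    e.g. collected in PV monitor callbacks; `static_delay` is the fixed
    propagation delay due to distance, in seconds.
    """
    return [recv - src - static_delay for src, recv in updates]

# Illustrative numbers: 10 Hz updates arriving with ~15 ms raw latency
# over a link whose static (distance-induced) delay is 13 ms.
updates = [(t, t + 0.015) for t in (0.0, 0.1, 0.2, 0.3)]
lat = update_latency(updates, static_delay=0.013)
mean_latency = statistics.mean(lat)
```

The adjusted mean (about 2 ms here) is what would be compared against the latency of an equivalent local run.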
We constructed a distributed TDC system at SuperKEKB based on the White Rabbit timing system. Each White Rabbit slave node works as a TDC with 8 ns accuracy, and their timestamps are synchronized at the sub-nanosecond level. Each TDC node consists of a PCI Express White Rabbit board operated in a commercial PC. We developed EPICS device support that runs on Ubuntu Linux.
We introduce the details of the entire system and the performance of our White Rabbit application.
Implementing EPICS device support for a specific device can be tricky; implementing generic device support that can integrate different kinds of devices sharing a common interface is trickier still. Yet such a driver can save a lot of time down the road. A well-known example is the Modbus EPICS module: the same support module can be used to integrate any device that speaks the Modbus protocol. It is up to the EPICS database to map device registers to EPICS records. Because no changes to the driver code are needed to integrate a device, a lot of effort is saved. At Cosylab, we often encounter device controllers that speak bespoke protocols. To facilitate development of generic drivers, we wrote the Autoparam EPICS module. It is a base class derived from asynPortDriver that handles low-level details that are common to all generic drivers: it creates handles for device data based on information provided in EPICS records and provides facilities for handling hardware interrupts. Moreover, it strives to provide a more ergonomic API for handling device functions than vanilla asynPortDriver.
With the increasing popularity of Beckhoff devices in scientific projects, there is a rising need for these devices to be integrated into EPICS control systems. Our customers often want to use Beckhoff PLCs for applications that must handle a large number of inputs with fast cycle times. How can we connect Beckhoff devices to EPICS control systems without sacrificing this performance?
Beckhoff offers multiple possibilities for interfacing with their PLCs and industrial PCs, such as Modbus, OPC UA, or the ADS protocol. While any of these could be used for the usual use cases, we believe that for more data-intensive applications ADS works best. For this reason, Cosylab developed an EPICS device support module that implements advanced ADS features, such as ADS sum commands, which provide fast read/write capabilities for your IOCs.
Python is becoming an increasingly popular choice for control system applications due to its ease of use and the plethora of available analytical tools. Following Python principles, the PyDevice project tries to simplify the use of custom Python code from EPICS records. Since the initial version, many new features have been implemented that make PyDevice even more attractive for slow controls and for scientific and analytical applications that need to be integrated into EPICS. The talk will present additions and updates made over the last two years of development.
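The PyDevice idea can be illustrated with a plain Python helper of the kind a record would invoke (the class and function below are made-up examples, and the record linkage in the comment follows the general PyDevice pattern rather than any specific IOC):

```python
# A record with DTYP "pydev" can call into ordinary Python code,
# e.g. (hypothetical): field(INP, "@calc.moving_average(VAL)")

class calc:
    """Tiny analytical helper of the kind PyDevice makes reachable."""
    _window = []

    @staticmethod
    def moving_average(value, size=5):
        """Return the mean of the last `size` values seen so far."""
        calc._window.append(value)
        del calc._window[:-size]          # keep only the last `size` samples
        return sum(calc._window) / len(calc._window)

out = [calc.moving_average(v) for v in (1.0, 2.0, 3.0)]
```

The point is that the analytical logic stays in ordinary, testable Python while the EPICS database only carries the call site.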
EPICS has a long and proven track record as a versatile framework for developing distributed control systems. But engineers and developers usually meet the first challenge way before they write the first line of code. First, they need to collect information about signals, input and alarm limits, archiving rules, etc. from all the parties involved in the development of the machine.
In an ideal world, all specifications and interfaces would be well defined and finalized before the first record is configured in the database, but in reality it is rarely so. Progress happens in parallel threads, forcing an early start of tasks and ongoing modification of existing software to keep up with the latest updates. Keeping track of changes and modifying databases, alarm and archiver configurations by hand with each new update is a great source of errors in the final software and can cause a lot of frustration for developers.
We have gathered a lot of experience working on projects with such a work dynamic. We found that providing a standardized template (such as an Excel spreadsheet) with clear instructions on how to fill it in works really well. We use that structured data as input for Python scripts that automatically generate the EPICS databases.
In this lecture we will present the mentioned workflow and the tools we use.
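A stripped-down version of such a generator, assuming a CSV export of the spreadsheet with hypothetical column names (the real workflow covers alarm and archiver configuration as well):

```python
import csv
import io

# One record per spreadsheet row; fields beyond DESC/EGU would be
# added the same way in a real generator.
DB_TEMPLATE = '''record({rtype}, "{name}") {{
    field(DESC, "{desc}")
    field(EGU,  "{egu}")
}}
'''

def generate_db(csv_text):
    """Render an EPICS database from the structured spreadsheet data."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return "".join(DB_TEMPLATE.format(**row) for row in rows)

sheet = """rtype,name,desc,egu
ai,DEMO:TEMP,Cooling water temperature,C
ai,DEMO:PRESS,Vacuum pressure,mbar
"""
db_text = generate_db(sheet)
```

Because the spreadsheet is the single source of truth, regenerating the database after each specification update is a one-command operation instead of a hand edit.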
The ITER Real-Time Framework (RTF) is a software framework for building real-time applications such as the Plasma Control System (PCS) and plasma diagnostics. It is entirely written in C++ and runs real-time control loops on isolated CPU threads, adopting busy-wait synchronization techniques to optimize response time and jitter. An RTF application is instantiated from XML-based configuration files at framework initialization time and consists of Function Blocks (FBs) configurable through parameters and inter-connected with signals and events. Encapsulation of FBs is also possible using so-called Composite Function Blocks.
The deployment of an RTF application can be local or distributed and is configured separately by assigning blocks to threads, processes and nodes. FBs deployed on the same thread belong to the same control loop and exchange signal and event updates by simply sharing the same memory allocation. For FBs deployed on different threads, processes or nodes, however, data must be exchanged synchronously using Gateway FBs (implicitly) inserted on both communication ends. For inter-thread communication these gateways are implemented with dedicated real-time queues, while for inter-process and inter-node communication an arbitrary implementation of an RTF Transport Layer can be used.
The RTF Transport Layer provides an abstraction layer between the RTF and the communication interface. Various implementations exist or are foreseen, e.g., for the Ethernet interface using different network protocols (SDN, DAN, pvAccess, etc.), for local data exchange using files, pipes or shared memory, for DAQ devices, for user interaction using standard input/output or graphical screens, etc.
This talk is specifically about the pvAccess transport layer implemented with the PVXS library. It explains how RTF internal structures are mapped to pvData structure at RTF initialization time, and how data is then being serialized at runtime.
Lots of lightning talks
Since the transition of NSLS-II from Debian to RHEL as a primary standard OS, many things have changed in how EPICS software is being deployed. This lightning talk highlights some key differences and new approaches we utilize, such as moving away from Debian packaging, distributing a bundled EPICS base and modules RPM, automated package installation and more.
Our community has benefited from Zynq/ZynqMP SoCs, which integrate ARM processors with programmable logic fabric, combining the flexibility of the processing system (PS) with the high performance of the programmable logic (PL). However, the typical approach to utilizing such systems suffers from several problems during deployment and maintenance. Firstly, implementing a typical EPICS software stack on such a device involves compiling EPICS Base, supporting modules, and EPICS drivers, either by cross-compilation or locally on the SoC. This becomes very time consuming when many devices need to be built or updated. Secondly, the reliability of the storage on these devices is questionable: many host their file systems on non-volatile memory such as SD cards, which are more fragile than hard disk drives or solid-state drives. Since EPICS IOCs can write a significant amount of data during routine operation, such as autosave files and logs, file systems can be corrupted when devices are not properly rebooted or powered off, resulting in malfunctioning devices. We implemented a centralized framework that reduces the effort of deploying EPICS and its supporting software on such Zynq-based devices, especially when multiple devices are to be deployed. We built Linux images with the PetaLinux SDK using a configuration that minimizes the changes needed for each device during deployment. Root file systems are exported from a central NFS server and mounted by the devices as their own file systems, so that all data is written to the server, which is much more reliable than the SD cards used on the devices. We created root file systems for different device types that include EPICS packages tailored to the corresponding functionality, and automated the procedure that differentiates instances (e.g., at different beamlines), loads the corresponding parameters and brings up the system.
This paper describes the design and configuration of this framework.
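Schematically, the NFS-root arrangement boils down to an export on the central server and a boot configuration on each Zynq device (all paths and addresses below are examples, not the NSLS-II values):

```
# On the central server (/etc/exports): one root per device class
/export/zynq/bpm-ioc   192.168.10.0/24(rw,no_root_squash,sync)

# On the device: kernel boot arguments selecting the NFS root,
# so all writes (autosave, logs) land on the server, not the SD card
root=/dev/nfs nfsroot=192.168.10.1:/export/zynq/bpm-ioc,tcp ip=dhcp
```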
Overview of the contributions, discussions, and results of the "EPICS 7" Workshop earlier this week.
GUI testing is an important and difficult part of any HMI toolkit. We have adopted TestFX to create automated GUI tests for the various JavaFX components of Phoebus.
NSLS-II is preparing to migrate from the old Eclipse-based CS-Studio to the latest version, Phoebus. We have started the migration on 3 of our beamlines. In this talk I hope to share the important findings of our migration process so far and how we intend to proceed with moving the rest of the beamlines and the accelerator over to Phoebus.
This talk will cover the control system upgrades necessary to support the APS Upgrade project (APS-U), currently in progress to increase the light source brightness. The upgrade involves replacing most components in the storage ring to implement a multi-bend achromat lattice.
An update on the latest developments in EPICS tools and EPICS middle-layer services. EPICS tools include applications like CS-Studio and Phoebus, which provide a GUI for the EPICS control system and its various services. The middle-layer services provide additional functionality on top of EPICS, such as alarm consolidation and management, logging, and archiving.
Two new high level control tools have been developed at the Advanced Light Source which have been made available for the community to use and collaborate on. The first tool, phoebusgen, is a Python module which allows for the programmatic creation of CS Studio Phoebus XML. The tool allows for quick screen development and is especially useful for engineering panels which contain many widgets in a common pattern.
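To show the kind of output such a tool produces, here is a minimal Phoebus widget rendered with the standard library (a generic illustration of the XML format, not the phoebusgen API; the PV name is made up):

```python
import xml.etree.ElementTree as ET

def text_update(name, pv, x, y, width=100, height=20):
    """Build a Phoebus 'textupdate' widget element bound to a PV."""
    widget = ET.Element("widget", type="textupdate", version="2.0.0")
    ET.SubElement(widget, "name").text = name
    ET.SubElement(widget, "pv_name").text = pv
    ET.SubElement(widget, "x").text = str(x)
    ET.SubElement(widget, "y").text = str(y)
    ET.SubElement(widget, "width").text = str(width)
    ET.SubElement(widget, "height").text = str(height)
    return widget

display = ET.Element("display", version="2.0.0")
display.append(text_update("temp", "DEMO:TEMP", x=10, y=10))
xml_text = ET.tostring(display, encoding="unicode")
```

Generating widgets programmatically like this is what makes repetitive engineering panels (many widgets in a common pattern) quick to produce.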
The second tool is a web application called PV Info. Built with ReactJS, it is a front-end interface to the EPICS Channel Finder PV directory service. Users can query PVs by name, IOC name, host name, and more. In addition, PV Info has optional integration with PV Web Socket (pvws) to show live PV data. Integrations with archive viewers and online logs are also available.
Lots of lightning talks
We present a simple Python-based startup script for setting up an EPICS Edge server for small laboratory experiments.
For easy and fast setup of EPICS hardware for lab experiments, we have developed a script that sets up EPICS Base, all necessary support modules, and our institute's own environment variables. Base and the modules are built from the current sources.
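The shape of such a script can be sketched as a generator of build steps (the version, module list, and install prefix below are placeholders, not the institute's actual values; only the epics-base repository URL is real):

```python
# Placeholders for illustration; a real script would read these from
# a site configuration file.
BASE = "7.0.7"
MODULES = ["asyn", "StreamDevice", "autosave"]

def build_plan(prefix="/opt/epics"):
    """Return the shell commands the setup script would run, in order."""
    steps = [
        f"git clone --branch R{BASE} https://github.com/epics-base/epics-base {prefix}/base",
        f"make -C {prefix}/base",
    ]
    for mod in MODULES:
        steps.append(f"make -C {prefix}/support/{mod}")
    steps.append(f"export EPICS_BASE={prefix}/base")
    return steps

plan = build_plan()
```

Keeping the plan as data makes it easy to print a dry run before executing anything on a lab machine.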
Clog (Compact Electronic Logbook System) is a web-based logbook system intended for use at accelerators. The goal is to design and implement a logbook system with modern web technologies in a very simple way. Its features include a compact design, support for multiple login methods, and support for multiple languages. Development of Clog 2.0 has recently been finished and it will be used in some new projects at IHEP (Institute of High Energy Physics).
RTEMS is now available in version 6 and includes the FreeBSD network stack, enabling the use of modern protocols like NFSv4 and DHCP. This has of course caused incompatibilities with the previous EPICS integration, which will be fixed step by step. The presentation will provide an overview of ongoing activities in this area.
The migration of the ISIS Controls System from Vsystem to EPICS presents a significant challenge and risk to the day-to-day operations of the accelerator. An evaluation of potential options has indicated that the most effective way to mitigate this risk is to use a 'hybrid' system running Vsystem and EPICS simultaneously, allowing a phased porting of controls hardware from the existing software to EPICS. This presentation will describe the two prongs of PVEcho: vistatoepics, which replicates Vsystem channels in EPICS, and epicstovista, which propagates changes in native EPICS IOCs to Vsystem channels. It will outline the software stack used for PVEcho's construction as well as how pvapy has been utilised in each of its components.
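The vistatoepics direction can be pictured as a name mapping plus a forwarding callback (a deliberately simplified model, not the PVEcho code; the naming convention and channel names are invented):

```python
class BridgedChannel:
    """Mirror one Vsystem channel into an EPICS-side PV object."""

    def __init__(self, vsystem_name):
        self.vsystem_name = vsystem_name
        # Hypothetical naming convention for the mirrored EPICS PV
        self.epics_name = vsystem_name.replace(":", "_").upper()
        self.value = None

    def on_vsystem_update(self, value):
        """Callback fired by the Vsystem side; cache and forward the value.

        A real bridge would post the update through a PVAccess server
        here so EPICS clients see the change immediately.
        """
        self.value = value

bridge = {name: BridgedChannel(name) for name in ("src:current", "src:energy")}
bridge["src:current"].on_vsystem_update(42.0)
```

The epicstovista direction is the mirror image: an EPICS monitor callback writing back into the corresponding Vsystem channel.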
The migration of the ISIS accelerators to a new control system presents opportunities to integrate new technologies such as machine learning into every-day operations. This talk will focus on how EPICS is being used to facilitate the development of machine learning models and optimisation routines to improve the stability and reliability of the ISIS accelerators.
Whatrecord is primarily a Python-based parsing tool for interacting with a variety of EPICS file formats, including V3 and V4/V7 database files. The project aims for compliance with epics-base by using Lark grammars that closely reflect the original Lex/Yacc grammars. It offers a suite of tools for working with its supported file formats, with convenient Python-facing dataclass object representations and easy JSON serialization. There is also a prototype backend web server for hosting IOC and record information with a Vue.js-based frontend, an EPICS build system Makefile dependency inspector, a static analyzer-of-sorts for startup scripts, and a host of other things that the author added at whim to this toy project of his.
Link: https://github.com/pcdshub/whatrecord/
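As a flavour of the problem whatrecord solves, here is a toy record reader using only the standard library (whatrecord itself uses full Lark grammars and handles far more of the .db syntax than this regex sketch):

```python
import re

RECORD_RE = re.compile(r'record\((\w+),\s*"([^"]+)"\)')
FIELD_RE = re.compile(r'field\((\w+),\s*"([^"]*)"\)')

def parse_db(text):
    """Very small .db reader: map record name -> (type, fields dict)."""
    records, current = {}, None
    for line in text.splitlines():
        if m := RECORD_RE.search(line):
            rtype, name = m.groups()
            current = {}
            records[name] = (rtype, current)
        elif (m := FIELD_RE.search(line)) and current is not None:
            current[m.group(1)] = m.group(2)
    return records

db = parse_db('''
record(ai, "DEMO:TEMP") {
    field(DESC, "Water temperature")
    field(EGU, "C")
}
''')
```

A grammar-based parser like whatrecord's additionally tracks source locations, macros, and the V4/V7 syntax that a line-oriented sketch like this cannot.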
Overview of the contributions, discussions, and results of the "Build and Deployment" Workshop earlier this week.
Summary report of the Timing workshop on Tuesday morning.
Summary of the Motion Control Workshop
Special events