LHC Post Mortem Workshop - chaired by Robin Lauckner (CERN), Adriaan Rijllart (CERN), Rüdiger Schmidt (CERN)
at CERN ( 874/1-011 )
This is the first workshop on the recording and analysis of data after an event in the LHC, such as a magnet quench or a beam dump - Post Mortem. Data will come from transient recorders, from the logging systems, from alarms and probably other sources.
The main aims of the workshop are:
Many groups have started to prepare their systems for the different phases of commissioning and operation. The workshop will review their activities, identify open issues and help to define the future roles and responsibilities.
09:00 - 09:30
General Introduction with the main aims of the Post Mortem System
Convener: Rüdiger Schmidt (CERN) Material: slides
09:30 - 12:00
What exists - PM System, Logging, Alarms
Convener: Adriaan Rijllart (CERN)
PM system architecture, front-ends, servers, triggering
Speaker: Robin Lauckner (CERN) Material: Slides
PM Data Collection and Storage
This talk will cover the following items:
• PM data model
• Client API
• PM server
• Data processing and SDDS conversion
• Performance and scalability
• Current status
Speaker: Nikolai Trofimov (CERN) Material: Slides
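The data model and SDDS conversion items above can be pictured with a minimal sketch. All class and field names here are invented for illustration; they are not the actual PM client API described in the talk.

```python
from dataclasses import dataclass, field
import time

# Hypothetical sketch of a PM data record: a system identifier, a trigger
# timestamp, and named signal buffers, with a flattening step loosely in
# the spirit of the SDDS conversion mentioned in the abstract.

@dataclass
class PMRecord:
    system: str                                   # e.g. "QPS.SECTOR78" (invented)
    trigger_stamp: float                          # PM trigger time (seconds)
    signals: dict = field(default_factory=dict)   # signal name -> samples

    def add_signal(self, name, samples):
        self.signals[name] = list(samples)

    def to_ascii(self):
        """Serialise the record to a simple ASCII form (illustrative only)."""
        lines = [f"&system {self.system}", f"&trigger {self.trigger_stamp}"]
        for name, samples in self.signals.items():
            lines.append(f"&signal {name} n={len(samples)}")
            lines.append(" ".join(str(s) for s in samples))
        return "\n".join(lines)

rec = PMRecord("DEMO.SYSTEM", time.time())
rec.add_signal("I_MEAS", [100.0, 100.1, 99.8])
text = rec.to_ascii()
```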
- 10:30 Coffee break 30'
SDDS to LabVIEW, the path from client data to viewing and analysis
The presentation will cover:
• Data arrival and event building
• SDDS format and its implementation for PM
• PMX method for data description and control
• SDDS converter - generic version
• Possible enhancements of the converter
• LabVIEW application/framework for individual data modules (PMM)
• PMM data locator
• PMM SDDS ascii/binary loader
• Internal data classes
• Data viewing
• Data analysis
• Automatic analysis
• Diagnostic tools
• Conclusion
Speaker: Boris Khomenko (Joint Institute for Nuclear Research (JINR)) Material: Slides
Alarms in relation with Post Mortem
LASER will provide alarm event information to the PM system in the case of a PM event. A first solution, agreed between the LASER and the PM teams at the end of 2005, will be described. Since then, the LASER system has evolved, which opens up other possibilities for integration. These solutions will be discussed, as well as the questions they give rise to.
Speaker: Katarina Sigerud (CERN) Material: Slides
Logging data in relation with PM and archiving
This presentation will explain briefly the purpose, scope and architecture of the LHC Logging Service. More detail will be given on the interaction with the Post-Mortem system including naming conventions and enforcement, data lifetime policy, combining and correlation of slow logging data and external transient data. Finally some ideas and possibilities will be discussed such as the use of the Measurement Service and storing of PM summary information.
Speaker: Ronny Billen (CERN) Material: Slides
12:00 - 14:00
Location: 866 - Rest #3
14:00 - 16:30
Cold circuits – data, analysis
Convener: Felix Rodriguez Mateos (CERN)
Powering of the SC circuits: procedures and strategies for circuit validation
The commissioning of the warm part of the superconducting circuits of the LHC started in 2005 with the short-circuit tests of the power converters where the non-superconducting elements of the circuits are being commissioned together with their associated general services. Once the circuits are at their operation temperature and before powering them, the interlock system will be validated (PIC tests). The overall commissioning of the superconducting circuits will start in February 2007 with the first powering up to nominal current of all the magnets in Sector 7-8. This talk will introduce the sequence of steps and detailed procedures which lead to the powering of the different superconducting circuit types, the powering strategies designed to be ready for 450 GeV beam commissioning on schedule, and the needs of the hardware commissioning team for diagnostics and for ensuring the integrity of the hardware.
Speaker: Antonio Vergara Fernandez (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, CIEMAT) Material: Slides
Analysis requirements for the SC magnet systems
Effective commissioning of the LHC hardware demands a well-designed set of high level software tools, which is required for the equipment performance analysis and validation. The challenge includes a large amount of equipment integrating heterogeneous systems like powering, energy extraction, distributed magnet protection systems, cryogenics and vacuum with their distributed instrumentation, as well as the technical services. Various operational conditions must be dealt with, like the superconducting magnet quench phenomenon and quench effects, including their constraints on the next powering cycle, given the destructive power stored in the magnet system. The level of the commissioning of the main ring superconducting magnet system will depend not only on the time allocated to the commissioning, but also on the availability of the high level software analysis tools. The required tools for the various phases of the LHC start-up will be elucidated and discussed. The role of the newly created Main Ring Magnet System Performance Panel (MPP) in the definition of the high level software tools for equipment commissioning and performance analysis will also be briefly addressed.
Speaker: Andrzej Siemko (CERN) Material: Slides
Present status of the individual systems analysis applications
Three components of the Post Mortem Analysis are already used by the equipment support teams. This talk will present the status and the modes of operation for each of them. Then the present architecture will be detailed, followed by the implementation dedicated to the Hardware Commissioning.
Speaker: Hubert Reymond (CERN) Material: Slides
- 15:30 Tea break 30'
How do we tackle the extended requirements?
The first Post-Mortem requirements have come from the needs of the individual systems involved in the first phase of the hardware commissioning using short-circuit tests. The second phase of powering the circuits, involving systems such as vacuum, cryogenics and DFBs, will extend the requirements of analysis to a new scale. This talk will show how we plan to include these new analysis requirements in the present framework, how it interfaces with the sequencer and how the analysis could trigger on spontaneous events. Important aspects, such as modularity, flexibility, sequencing and scalability, will be covered.
Speaker: Adriaan Rijllart (CERN) Material: Slides
16:30 - 17:00
Convener: Hermann Schmickler (AB-CO)
09:00 - 12:00
Operation with beam - PM requirements
Convener: Jorg Wenninger (CERN)
Beam quality checks at injection
For each beam injection into the LHC a well-defined series of beam quality checks needs to be made, starting in the SPS just before extraction and in the LHC immediately after injection. These checks will be dependent on the beam type, intensity and position in the filling sequence, and will use transient data which must be acquired and analysed at the appropriate time and within a specified time window. The requirements in terms of functionality, response times and scope are described, and the equipment subsystems identified. Potential issues are discussed.
Speaker: Verena Kain (CERN) Material: Slides
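The kind of check described above, where acceptance windows depend on the beam type, can be sketched as follows. The threshold names and numbers are purely illustrative assumptions, not the actual LHC injection quality criteria.

```python
# Hypothetical injection quality check: compare acquired transient values
# against reference windows that depend on the beam type. All values are
# invented for illustration.

REFERENCE = {
    "pilot":   {"intensity_min": 4e9,  "max_traj_mm": 4.0},
    "nominal": {"intensity_min": 1e11, "max_traj_mm": 2.0},
}

def injection_quality_ok(beam_type, intensity, max_trajectory_mm):
    """Return (overall_ok, per-check results) for one injection."""
    ref = REFERENCE[beam_type]
    checks = {
        "intensity":  intensity >= ref["intensity_min"],
        "trajectory": max_trajectory_mm <= ref["max_traj_mm"],
    }
    return all(checks.values()), checks

ok, detail = injection_quality_ok("pilot", 5e9, 3.2)
```

A real implementation would also key the reference on the position in the filling sequence, as the abstract notes.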
Beam dump XPOC analysis
Each dump action must be followed by an XPOC which is launched automatically and is designed to verify that the dump was correctly executed. If an anomaly is discovered during these tests, the XPOC must withhold the User Permit to the BIS (via a software channel). The XPOC comprises beam instrumentation and other signals which will come from the logging and Post-Mortem systems, or directly from the equipment. The XPOC must be triggered by the dump action, must retrieve and analyse key data and compare the relevant parameters against specified reference values, and then give or withhold the User Permit according to the result. The requirements in terms of functionality, response times and scope are described, and the equipment subsystems identified. Data types, reduction, volumes and rates are estimated.
Speaker: Brennan Goddard (CERN) Material: Slides
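The core XPOC decision step described above can be sketched as a comparison of each retrieved parameter against a reference window, withholding the permit on any failure. The parameter names and tolerances here are invented; only the give/withhold logic follows the abstract.

```python
# Hypothetical XPOC reference windows, (min, max) per parameter.
# Names and numbers are illustrative assumptions.
XPOC_REFERENCE = {
    "kicker_voltage_kv": (33.0, 35.0),
    "sweep_length_us":   (85.0, 95.0),
    "dump_block_temp_c": (0.0, 600.0),
}

def xpoc_analysis(acquired):
    """Return (user_permit, failed_checks) for one dump action.
    A missing parameter fails its check (NaN compares False),
    so the permit is withheld when data did not arrive."""
    failed = [name for name, (lo, hi) in XPOC_REFERENCE.items()
              if not (lo <= acquired.get(name, float("nan")) <= hi)]
    return (len(failed) == 0), failed

permit, failed = xpoc_analysis(
    {"kicker_voltage_kv": 34.1, "sweep_length_us": 90.0,
     "dump_block_temp_c": 45.0})
```

Treating missing data as a failure mirrors the fail-safe intent: the permit is only given when every check passes explicitly.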
Emergency dump Post Mortem
After an emergency dump a general Post-Mortem request will be issued to acquire transient data from a variety of systems. The analysis of a Post-Mortem event may take from minutes to many months, depending on the desired level of detail. Key data must however be presented in a way which allows for simple and efficient fault-finding. Operation crews must be presented with clear information indicating whether operation may continue or if expert interventions are required after the emergency beam dump. Key equipment and instrumentation data required to identify the source and causes of an emergency abort are described.
Speaker: Jorg Wenninger (CERN) Material: Slides
- 10:30 Coffee break 30'
Transient beam data acquisition
In addition to systematic transient data acquisition, operation of the LHC will also require the possibility to make ad-hoc acquisitions of some transient beam and possibly equipment data, in order to diagnose and solve specific problems and to cope with unforeseen difficulties. An attempt is made to outline the different transient data required for general operational purposes, together with the requirements for triggering and acquisition which are distinct from the general Post-Mortem data.
Post Mortem acquisition triggering
A post-mortem timing event distributed by the LHC machine timing system is used to freeze the PM buffers of a large fraction of the LHC equipment. This event must be generated automatically whenever the BIS issues a beam dump request by changing the state of the beam permit signal. This presentation outlines the present ideas on how to generate the PM timing event. The issue of PM event suppression in the case of single beam dumps or special operation modes like 'inject and dump' will be addressed.
Speaker: Julian Lewis (CERN) Material: Abstract Slides
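The generation rule outlined above, a beam permit transition triggering the PM event unless the operation mode calls for suppression, can be sketched as follows. The mode names and the suppression set are assumptions for illustration, not the actual timing-system configuration.

```python
# Hypothetical set of operation modes in which the PM timing event is
# suppressed (names are invented; 'inject and dump' comes from the abstract).
SUPPRESS_PM = {"inject_and_dump", "single_beam_setup"}

def pm_event_on_permit_change(old_permit, new_permit, operation_mode):
    """Return True if a PM timing event should be distributed.
    The event fires on a TRUE -> FALSE beam permit transition,
    unless the current mode suppresses it."""
    dump_requested = old_permit and not new_permit
    return dump_requested and operation_mode not in SUPPRESS_PM
```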
12:00 - 13:45
Location: 866 - Rest #3
13:45 - 16:15
Data providers, volume, type of analysis.
Convener: Robin Lauckner (CERN)
Overview of providers
Post Mortem will be the key to mastering the full complexity of LHC Operation and the interaction between systems. Many systems will be involved in full optimisation and understanding of performance. Today a few systems are providing data to validate and understand hardware commissioning. This must be extended giving priority to obtaining essential information related to achieving first collisions. This talk will review systems involved, discuss the nature of the information to be provided and attempt to identify some priorities. The vacuum system will be examined to demonstrate how these demands are being met.
Speaker: Robin Lauckner (CERN) Material: Slides
The key beam instruments for post-mortem diagnostics in the LHC include:
• the beam position monitors (BPM),
• the beam loss monitors (BLM),
• the beam current transformers (BCT),
• the non-destructive beam profile monitors,
• the tune measurement,
• the abort gap monitors.
Turn-by-turn (or highest time resolution) data will be provided for all systems for the equivalent of 1000 turns before the post-mortem trigger. Coarser data will also be provided for the time interval of around 20 seconds before the trigger, as well as 10-20 samples after the trigger. The data volume depends on the PM data sent to the PM server; for instance, 64 BPM systems will each send 36 samples of 1000 points, which is approximately 300 Kbytes per system. An external trigger (BST system) will be required to freeze the post-mortem buffers.
Speaker: Stephane Bart Pedersen (CERN) Material: Slides
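The ~300 Kbytes per BPM system quoted above can be reproduced with simple arithmetic, assuming each point is stored as an 8-byte value (the word size is our assumption; the sample counts come from the abstract).

```python
# Per-system PM volume for the BPMs: 36 samples of 1000 points each.
samples_per_system = 36
points_per_sample = 1000
bytes_per_point = 8          # assumed storage size per point

per_system = samples_per_system * points_per_sample * bytes_per_point
total = 64 * per_system      # 64 BPM systems send to the PM server

print(per_system)  # 288000 bytes, i.e. roughly 300 Kbytes per system
print(total)       # 18432000 bytes, roughly 18 MB for all 64 systems
```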
The RF acceleration (ACS) and transverse damper (ADT) systems will supply post-mortem data at various acquisition rates. The PLCs controlling the power systems acquire at a few Hz, while high-speed digitizers and acquisition buffers embedded in the low-level hardware acquire transient signals at 80 MSamples/s over time periods ranging from a few milliseconds to several hundred milliseconds. The high speed acquisitions in particular will result in high volumes of data, and some local data analysis and reduction may be necessary to alleviate this. An overview of the available data signals will be presented, along with tentative requirements on data analysis, logging and alarms.
Speaker: Dr. Andrew Butterworth (CERN) Material: Slides
Reliable operation of the LHC injection, tune/aperture and LBDS kicker systems relies on continuous on-line and off-line surveillance of their critical operational characteristics. Different acquisition techniques like trend logging, shot-by-shot logging or fast transient recording will be used to acquire and record the diverse types of signals existing within the kicker systems. Correlation between the acquired data will be done through precise time-stamping of the data acquisition time, coupled with internal management of the possible acquisition trigger sources. The structure of the different post-mortem buffers will be presented for each kicker system, with an estimation of their volume and a description of the different acquisition, analysis and recording mechanisms. In addition, the triggering logic will be described and the remaining open issues, linked mainly to the distribution of post-mortem event(s), will be highlighted.
Speaker: Etienne Carlier (CERN) Material: Slides
- 15:25 Tea break 20'
Collimators and movable objects
The LHC collimation system is responsible for providing clean beam conditions and hence for assuring the protection of the equipment in the LHC. A failure of the collimation system may trigger a beam dump to avoid magnet quenches. The post mortem data of the collimation system supplies the following information:
• Demanded and actual positions of all collimator jaws (millisecond accuracy). Note: information on the actual positions is provided by resolvers, position and gap LVDTs, as well as end switches and anti-collision switches.
• Temperatures of the jaws
• Jaw vibrations over a period of a few seconds before and after the beam dump
• BLM transient data during a collimator movement
• Command history
The first analysis of the collimator post mortem data must assure that there were no internal failures in maintaining the actual collimator positions. A second analysis, in combination with information from beam loss, beam position and beam profile monitors, should validate that the collimation efficiency was as required.
Speaker: Michel Jonker (CERN) Material: Slides
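The first-level analysis described above, verifying that each jaw's actual position tracked its demanded position, can be sketched as a simple tolerance check. The jaw names and the tolerance value are illustrative assumptions.

```python
# Assumed acceptance on the demanded-vs-actual jaw position, in mm.
TOLERANCE_MM = 0.05

def jaws_tracked_ok(demanded_mm, actual_mm, tol=TOLERANCE_MM):
    """demanded_mm / actual_mm: dicts of jaw name -> position in mm.
    Returns a per-jaw pass/fail map for the first-level PM check."""
    return {jaw: abs(actual_mm[jaw] - demanded_mm[jaw]) <= tol
            for jaw in demanded_mm}

result = jaws_tracked_ok({"left": 2.00, "right": -2.00},
                         {"left": 2.01, "right": -2.20})
```

The second-level check, correlating these positions with BLM, BPM and profile data to validate cleaning efficiency, would build on top of such a per-jaw result.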
16:15 - 17:45
Open issues: structure, technology, roadmap, priorities
Convener: Mike Lamont (CERN) Material: slides