The Post Mortem system was designed almost a decade ago to enable the collection and analysis of high-resolution, transient data recordings of relevant events, such as beam dumps in the LHC accelerator. Since then, the storage has evolved continuously, both to accommodate larger datasets and to satisfy new requirements and use cases, not only for the LHC but also for the first machines in the injector complex. Operational experience has revealed several drawbacks of the initial design which, to be solved efficiently, will require substantial changes to the currently deployed infrastructure.
This contribution summarizes the recent work and R&D towards the definition of the next-generation Post Mortem storage architecture, in line with modern data storage and processing systems. The proposed design addresses the major limitations of the current deployment and enables easier integration of future use cases. In addition, it provides better integration with the next-generation CALS storage, serving users with the most accurate data in a more transparent way while meeting the deterministic response times imposed by certain LHC use cases.
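To make the response-time constraint concrete, the following is a minimal, purely illustrative Java sketch; every type and method name here is hypothetical and does not reflect the actual Post Mortem API. It models the two access patterns described above: event-triggered storage of transient recordings, and reads wrapped with a deadline so that the time-critical LHC use cases can fall back instead of blocking on a slow archival query.

import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.Optional;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

/** One high-resolution transient recording captured around an event (hypothetical). */
record TransientRecording(String system, Instant eventTime, byte[] samples) {}

/** Hypothetical storage contract for event-triggered post-mortem data. */
interface PostMortemStore {
    /** Persist all buffers collected for one event, e.g. a beam dump. */
    void storeEvent(String eventId, List<TransientRecording> recordings);

    /** Retrieve an event's data; latency may vary for archival queries. */
    List<TransientRecording> readEvent(String eventId);
}

/** Wraps any store so reads observe the bounded response time some use cases need. */
final class DeterministicReader {
    private final PostMortemStore store;
    private final ExecutorService pool = Executors.newCachedThreadPool();

    DeterministicReader(PostMortemStore store) {
        this.store = store;
    }

    /** Returns the data within the deadline, or empty so the caller can fall back. */
    Optional<List<TransientRecording>> readWithin(String eventId, Duration deadline) {
        Future<List<TransientRecording>> pending =
                pool.submit(() -> store.readEvent(eventId));
        try {
            return Optional.of(pending.get(deadline.toMillis(), TimeUnit.MILLISECONDS));
        } catch (TimeoutException e) {
            pending.cancel(true); // give up rather than exceed the time budget
            return Optional.empty();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return Optional.empty();
        } catch (ExecutionException e) {
            return Optional.empty();
        }
    }
}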