Conveners
Computing & Batch Services
- Michel Jouvin (Centre National de la Recherche Scientifique (FR))
Computing & Batch Services
- Manfred Alef (Karlsruhe Institute of Technology (KIT))
AMD returned to the server CPU market in 2017 with the release of its EPYC line of CPUs, based on the Zen microarchitecture. In this presentation, we'll provide an overview of the AMD EPYC CPU architecture and how it differs from Intel's Xeon Skylake. We'll also present performance and cost comparisons between EPYC and Skylake, with an emphasis on use in HEP/NP computing environments.
The HEPiX Benchmarking Working Group is developing a new 'long-running' benchmark to measure installed capacities, intended to replace the currently used HS06. This presentation will show the current status of the work.
The classic workflow of an experiment at a synchrotron facility starts with the users physically coming to the facility with their samples; they analyze those samples with the beamline equipment and finally return to their institution with a huge amount of data on a portable hard disk.
Data reduction and analysis is done mostly at the user's scientific institution. As data...
This is a report on the recently held workshop at BNL on Central Computing Facilities support for Photon Sciences, with participation from various Light Source facilities in Europe and the US.
Scaling an OpenMP or MPI application on modern TurboBoost-enabled CPUs is getting harder and harder. Using some simple 'openssl' commands, however, it is possible to adjust OpenMP benchmarking results to correct for the TurboBoost frequencies of modern Intel and AMD CPUs. In this talk I will explain how to achieve better OpenMP scaling numbers and will show how a non-root user can determine...
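The abstract does not spell out the exact commands, but the core idea it describes, loading the cores with 'openssl speed' and then reading the effective clock frequency without root privileges, can be sketched roughly as follows. The function names and the choice of reading /proc/cpuinfo are illustrative assumptions, not the speaker's actual method:

```python
import re
import subprocess
import time

def parse_cpu_mhz(cpuinfo_text):
    """Extract the per-core 'cpu MHz' values from /proc/cpuinfo text."""
    return [float(m) for m in
            re.findall(r"^cpu MHz\s*:\s*([0-9.]+)", cpuinfo_text, re.MULTILINE)]

def observed_turbo_mhz(n_workers=4, seconds=4):
    """Illustrative sketch: start n_workers 'openssl speed' processes to
    load the CPU, then sample the effective clock frequencies from
    /proc/cpuinfo mid-run (world-readable, so no root is needed)."""
    workers = [
        subprocess.Popen(
            ["openssl", "speed", "-seconds", str(seconds), "sha256"],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        for _ in range(n_workers)
    ]
    time.sleep(seconds / 2)  # sample once turbo frequencies have settled
    with open("/proc/cpuinfo") as f:
        mhz = parse_cpu_mhz(f.read())
    for w in workers:
        w.wait()
    return max(mhz) if mhz else None
```

With the per-core frequencies observed at each worker count, single-thread benchmark results can then be rescaled to the actual (lower) all-core turbo frequency before judging OpenMP scaling efficiency.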
Predictions of the computing requirements for LHC Run 3 and Run 4 (HL-LHC) over the course of the next 10 years show a considerable gap between required and available resources, assuming budgets globally remain flat at best. This will require some radical changes to the computing models for the data processing of the LHC experiments. The use of large scale general purpose...
PDSF, the Parallel Distributed Systems Facility, has been in continuous operation since 1996 serving high-energy and nuclear physics research. It is currently a tier-1 site for STAR, a tier-2 site for ALICE, and a tier-3 site for ATLAS. We are in the process of migrating the PDSF workload from the existing commodity cluster to the Cori Cray XC40 system. Docker containers enable running the...
With the demands of LHC computing, coupled with pressure on the traditional resources available, we need to find new sources of compute power. We have described, at HEPiX and elsewhere, how we have started to explore running batch workloads on storage servers at CERN and on public cloud resources. Since the summer of 2018, ATLAS & LHCb have started to use a pre-production service on storage...
A short report on the workshop held at RAL in September, and an outlook on the next workshop.