Conveners
Computing and Batch Services
- Michele Michelotto (Universita e INFN, Padova (IT))
- Matthias Jochen Schnepf
The Benchmarking Working Group (WG) has been actively advancing the HEP Benchmark Suite to meet the evolving needs of the Worldwide LHC Computing Grid (WLCG). This presentation will provide a comprehensive status report on the WG’s activities, highlighting the intense efforts to enhance the suite’s capabilities with a focus on performance optimization and sustainability.
In response to...
The performance score per CPU core — corepower — reported annually by WLCG sites is a critical metric for ensuring reliable accounting, transparency, trust, and efficient resource utilization across experiment sites. It is therefore essential to compare the published CPU corepower with the actual runtime corepower observed in production environments. Traditionally, sites have reported annual...
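The comparison described above can be illustrated with a small sketch. The function name, score values, and deviation threshold below are hypothetical and not part of the HEP Benchmark Suite; this only shows the arithmetic of a per-core (corepower) comparison between a published figure and a runtime measurement:

```python
# Hypothetical sketch: comparing a site's published corepower with a
# runtime value observed in production. All names and numbers here are
# illustrative, not taken from the HEP Benchmark Suite.

def corepower(total_score: float, n_cores: int) -> float:
    """Per-core performance score (e.g. HS23 per core)."""
    return total_score / n_cores

published = corepower(total_score=1600.0, n_cores=128)  # annual site report
measured = corepower(total_score=1440.0, n_cores=128)   # runtime probe

deviation = (measured - published) / published
print(f"published={published:.2f} measured={measured:.2f} "
      f"deviation={deviation:+.1%}")
```

A site whose runtime corepower deviates noticeably from its published value would be a candidate for re-benchmarking or an accounting correction.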
The NorduGrid Advanced Resource Connector (ARC) middleware will be released as ARC 7 this spring, after a long release-preparation process. ARC 7 represents a significant step in the middleware's evolution, building on elements introduced in the ARC 6 release from 2019 and refined over the subsequent years.
This new version consolidates...
MTCA starterkits: the next evolution step
In this presentation, you will learn more about the powerBridge starterkits. They incorporate the MTCA.0 Rev. 3 changes as well as exciting new products, including payload cards, and are available in different sizes and flavours. They allow an easy jumpstart for new MTCA users.
The tenth European HTCondor workshop took place at Nikhef in Amsterdam last autumn and, as always, covered most if not all aspects of up-to-date high-throughput computing.
Here is a short summary of the parts of general interest, if you like :)
In the realm of High Throughput Computing (HTC), managing and processing large volumes of accounting data across diverse environments and use cases presents significant challenges. AUDITOR addresses this issue by providing a flexible framework for building accounting pipelines that can adapt to a wide range of needs.
At its core, AUDITOR serves as a centralized storage solution for...
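To make the idea of an accounting pipeline concrete, here is an illustrative sketch of the kind of resource-usage record such a framework might carry from collectors to storage to plugins. The field names and structure below are hypothetical and do not represent AUDITOR's actual schema or API:

```python
# Illustrative accounting record for an HTC accounting pipeline.
# Hypothetical structure, NOT AUDITOR's real data model.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Component:
    name: str    # e.g. "cores" or "memory"
    amount: int


@dataclass
class AccountingRecord:
    record_id: str
    site: str
    user: str
    start: datetime
    stop: datetime
    components: list[Component] = field(default_factory=list)

    def runtime_seconds(self) -> float:
        """Wall-clock duration of the accounted job."""
        return (self.stop - self.start).total_seconds()


rec = AccountingRecord(
    record_id="job-0001", site="SITE-A", user="alice",
    start=datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc),
    stop=datetime(2025, 1, 1, 14, 0, tzinfo=timezone.utc),
    components=[Component("cores", 8)],
)
print(rec.runtime_seconds())
```

A collector would create such records from batch-system logs, while plugins could aggregate them, e.g. summing core-hours per user or per site.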
GPUs have become increasingly interesting for particle physics in recent years. GridKa therefore provides some GPU machines to the Grid and to the particle physics institute at KIT.
Since GPU usage and provisioning differ from CPUs, development on both the provider and the user side is necessary.
The provided GPUs allow the HEP community to use GPUs in the Grid environment and develop solutions for...
At Nikhef, we've based much of our "fairness" policy implementation on user, group, and job-class (e.g. queue) "caps", i.e. upper limits on the number of simultaneous jobs (or used cores). One of the main use cases for such caps is to prevent one or two users from acquiring the whole cluster for days at a time, blocking all other usage.
When we started using HTCondor, there...
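One generic HTCondor mechanism that can express such caps is concurrency limits. A minimal sketch follows; the limit name and value are illustrative, and this is not necessarily how Nikhef implements its policy:

```
# In the pool (negotiator) configuration: define a named limit.
# "ALICE" is an example name; the value caps simultaneously claimed tokens.
ALICE_LIMIT = 500

# In the submit description file of jobs subject to the cap:
concurrency_limits = alice
```

Jobs requesting the `alice` limit will then collectively never hold more than 500 tokens at once, regardless of how many are queued.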
Developments in microprocessor technology have confirmed the trend towards higher core counts and less memory per core, resulting in major improvements in power efficiency for a given level of performance. Per-node core counts have increased significantly over the past five years for the x86_64 architecture, which dominates the LHC computing environment, and the higher...
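The memory-per-core trend can be illustrated with a quick calculation. The node specifications below are hypothetical examples, not measured data from any particular procurement:

```python
# Illustrative arithmetic: memory per core for two hypothetical
# generations of x86_64 worker nodes (numbers are made up).
nodes = {
    "older node": {"cores": 64, "memory_gb": 256},   # 4 GB/core
    "newer node": {"cores": 192, "memory_gb": 384},  # 2 GB/core
}

for name, spec in nodes.items():
    gb_per_core = spec["memory_gb"] / spec["cores"]
    print(f"{name}: {gb_per_core:.1f} GB/core")
```

Even as total memory grows, the per-core share shrinks, which matters for memory-hungry single-threaded HEP workloads.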
Many efforts have sought to combine the HPC and quantum computing (QC) fields, proposing integrations between quantum computers and traditional clusters. Despite these efforts, the problem is far from solved, as quantum computers are still evolving rapidly. Moreover, quantum computers today are scarce compared to the traditional resources of HPC clusters: managing access from the HPC nodes is...