The Southwest Tier-2 (SWT2) consortium comprises two data centers
operated at the University of Texas at Arlington (UTA) and at the
University of Oklahoma (OU). SWT2 provides distributed computing
services in support of the ATLAS experiment at CERN. In this
presentation we will describe the resources at each site (CPU cycles and
data storage), along with other associated...
AGLT2 has a few updates to report since the last HEPiX meeting in Spring 2024.
1) We transitioned from Cobbler to Satellite plus a Capsule server for RHEL provisioning.
2) We transitioned from CFEngine to Ansible for configuration management of the RHEL9 nodes.
3) To improve the occupancy of the HTCondor cluster, we began tuning HTCondor and developing new scripts to...
PIC report to HEPIX Fall 2024.
The KEK Central Computer System (KEKCC) is KEK's largest-scale computer system and provides several services such as Grid and Cloud computing.
Following the procurement policy for large-scale computer systems mandated by the government, we operate under a multi-year contract and replace the entire system at the end of each contract term. The new system has been in production since...
The ATLAS experiment is currently developing multiple analysis frameworks which leverage the Python data science ecosystem. We describe the setup and operation of the infrastructure necessary to support demonstrations of these frameworks. One such demonstrator aims to process the compact ATLAS data format PHYSLITE at rates exceeding 200 Gbps. Integral to this study was the analysis of network...
More and more opportunistic resources are being provided to the Grid. Often, several opportunistic computing resource providers exist behind a single Compute Element, in addition to the pledged resources of a Grid site. For such use cases and others, we have developed AUDITOR (AccoUnting DatahandlIng Toolbox for Opportunistic Resources), a highly flexible, multi-purpose accounting ecosystem.
AUDITOR...
Atmospheric Visibility Estimation From Single Camera Images: A Deep Learning Approach
A site report on the infrastructure and services that underpin SLAC's data-intensive processing pipelines. The SLAC Shared Science Data Facility hosts the Rubin Observatory DF, LCLS-II and many other experimental and research workflows. Networking and Storage form the core of S3DF with hardware deployed in a modern Stanford datacenter.
This presentation will focus on two topics: 1) status of ATLAS T2 site in Taiwan, and 2) experiences of supporting broader scientific computing over the cloud based on WLCG technology.
Crossplane is a cloud-native control plane for declarative management of infrastructure and platform resources using Kubernetes-native APIs.
It enables the integration of infrastructure-as-code practices by reusing existing tools such as Ansible and Terraform, while providing flexible, instantiable "compositions" for defining reusable resource configurations. This approach allows...
The CMS Coffea-Casa analysis facility at the University of Nebraska-Lincoln provides researchers with Kubernetes based Jupyter environments and access to CMS data along with both CPU and GPU resources for a more interactive analysis experience than traditional clusters provide. This talk will cover updates to this facility within the past year and recent experiences with the 200 Gbps challenge.
dCache is composed of a set of components running in Java Virtual Machines (JVMs) and a storage backend, Ceph in this case. CSCS moved these JVMs into containers and developed a Helm chart to deploy them on a Kubernetes cluster. This cloud-native approach makes the deployment and management of new dCache instances easier and faster.
Challenges encountered and future developments will be...
The 2nd Joint XRootD and FTS Workshop at STFC in September 2024 covered many interesting topics. This presentation will summarize the discussions on the state of FTS and XRootD, plans for FTS4, WLCG token support in FTS, future plans for the CERN Data Management Client, the Pelican project and XRootD/XCache, XRootD monitoring, and more. It will cover some of the feedback from experiments, especially...
This presentation evaluates the cost of various on-premises storage solutions with traditional and S3 interfaces, including flash, disk, and tape.
It compares the costs and characteristics of flash-, disk-, and tape-based storage systems, including systems compatible with AWS S3. Key metrics considered include purchase price, power consumption, cooling requirements, product...
PIC has developed CosmoHub, a scientific platform built on top of Hadoop and Apache Hive, which facilitates scalable reading, writing and management of huge astronomical datasets. This platform supports a global community of scientists without requiring users to be familiar with Structured Query Language (SQL). CosmoHub officially serves data from major international collaborations,...
Scientific experiments and computations, particularly in High Energy Physics (HEP) programs, are generating and accumulating data at an unprecedented rate. Effectively managing this vast volume of data while ensuring efficient data analysis poses a significant challenge for data centers. This paper aims to introduce machine learning algorithms to enhance data storage optimization across...
In 2020 we started the migration from our TSM-based tape system to HPSS, which was finally completed in the summer of 2024. I'll present lessons learned, pitfalls, and the necessary in-house software developments.
News from CERN since the last HEPiX workshop. This talk gives a general update from services in the CERN IT department.
I will give a report on the Scientific Computing program at Jefferson Lab and a brief introduction to HPDF, the High Performance Data Facility.
We need to discuss what sites and potential users need and expect.
The goal is both to clarify details and to get guidance on what we should focus on during this afternoon's sessions.
This presentation aims to give an update on the global security landscape from the past year. The global political situation has introduced novel challenges for security teams everywhere. What's more, the worrying trend of data leaks, password dumps, ransomware attacks and new security vulnerabilities shows no sign of slowing down.
We present some interesting cases that CERN and the wider HEP...
Some sites have questions or potential issues concerning the traffic measurements from Zeek vs SNMP.
- Should we expect the Zeek traffic estimate to be close to the SNMP counters from the corresponding switch ports?
- Is some kind of NIC/hardware offloading hiding traffic from Zeek?
- Do we have best-practice recommendations regarding configurations?
- What should sites...
What does it take to craft a good Zeek alert? Can we work through an example or two? What is the suggested guidance for doing this?
How to enable alerts using webhooks and various applications.
Sending to SLACK
Sending to Mattermost
What about Keybase?
Why not email?
Zeek, MISP, pDNSSOC, Elasticsearch, Opensearch, Elastiflow, ElastiAlert, other information sources, other tools?
Advantages, capabilities, limitations, concerns....
Let's discuss
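As a starting point for the webhook discussion above, here is a minimal stdlib-only sketch of forwarding a Zeek notice to a Slack or Mattermost incoming webhook. Both services accept a JSON body with a "text" field, so one payload can serve either; the alert fields and the webhook URL are placeholder assumptions, not values from any site's setup.

```python
import json
import urllib.request

def build_payload(alert):
    """Format a Zeek notice as a simple webhook message.

    Slack and Mattermost incoming webhooks both accept a JSON body
    with a "text" field, so one payload works for either service.
    """
    return {
        "text": "Zeek notice: {note} from {src} -> {dst}: {msg}".format(**alert)
    }

def post_alert(webhook_url, alert):
    """POST the alert JSON to the incoming webhook URL."""
    data = json.dumps(build_payload(alert)).encode("utf-8")
    req = urllib.request.Request(
        webhook_url,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Hypothetical alert; the webhook URL below is a placeholder and is not called here.
alert = {"note": "Scan::Port_Scan", "src": "192.0.2.10",
         "dst": "198.51.100.5", "msg": "35 ports scanned in 1m"}
# post_alert("https://hooks.slack.com/services/XXX/YYY/ZZZ", alert)
print(build_payload(alert)["text"])
```

Keybase and email would need different transports, which is part of what makes the tooling choice worth discussing.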
Minimising carbon associated with computing will require compromise. In this presentation I will present the results from simulating a Grid site where the compute is run at reduced frequency when the predicted carbon intensity rises above some threshold. The compromise is a reduction in throughput in exchange for an increased carbon-efficiency for the work that is completed. The presentation...
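The throughput-vs-carbon compromise described above can be illustrated with a toy simulation: when the predicted carbon intensity exceeds a threshold, the site runs at reduced frequency, drawing less power but completing less work. All numbers here (power model, throttling fractions, intensity forecasts) are invented for illustration, not results from the study.

```python
# Toy simulation of running a compute farm at reduced CPU frequency when
# the predicted carbon intensity exceeds a threshold. All figures are
# illustrative assumptions.

def simulate(intensity_series, threshold, base_power_kw=100.0,
             low_freq_power_frac=0.6, low_freq_throughput_frac=0.75):
    """Return (total_work, total_carbon_kg) over hourly intensity
    samples given in gCO2/kWh. Work is in arbitrary units per hour."""
    work = 0.0
    carbon_g = 0.0
    for intensity in intensity_series:
        if intensity > threshold:          # throttle: less power, less work
            power = base_power_kw * low_freq_power_frac
            work += low_freq_throughput_frac
        else:                              # full speed
            power = base_power_kw
            work += 1.0
        carbon_g += power * intensity      # kW * 1 h * gCO2/kWh
    return work, carbon_g / 1000.0

# Six hours of made-up intensity forecasts (gCO2/kWh):
forecast = [120, 180, 310, 290, 150, 100]
work, carbon = simulate(forecast, threshold=250)
print(f"work={work:.2f} units, carbon={carbon:.1f} kgCO2, "
      f"efficiency={work / carbon:.3f} units/kg")
```

Sweeping the threshold in such a model exposes the trade-off curve between throughput lost and carbon saved per unit of completed work.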
Data center sustainability has grown in focus due to the continuing evolution of Artificial Intelligence (AI) and High Performance Computing (HPC) systems; the unprecedented rise in the Thermal Design Power (TDP) of computer chips has driven a rampant increase in power demand and carbon emissions at the Scientific Data and Computing Center (SDCC) at Brookhaven National Laboratory...
The Smart Procurement Utility is a tool that allows the visualisation of HEPScore/Watt vs HEPScore/unit-cost to guide procurement choices and the compromise between cost and carbon. It uses existing benchmarking data and allows the entry of new benchmarking data. Costs can be entered as relative numbers (percentages relative to a chosen baseline) to generate the cost-related plots.
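The two metrics the utility plots are straightforward to compute; the sketch below shows them for a few entirely invented machines, with costs expressed as percentages relative to a chosen baseline, as the abstract describes.

```python
# Illustrative calculation of the two Smart Procurement Utility metrics:
# HEPScore per Watt and HEPScore per unit cost, where cost is entered
# relative to a baseline (baseline = 100%). All figures are invented.

machines = [
    # name,         HEPScore, Watts, cost relative to baseline (%)
    ("baseline",    1000.0,   400.0, 100.0),
    ("candidate-a", 1400.0,   450.0, 120.0),
    ("candidate-b",  900.0,   250.0,  85.0),
]

for name, score, watts, rel_cost in machines:
    per_watt = score / watts
    per_cost = score / rel_cost
    print(f"{name:12s} {per_watt:6.2f} HEPScore/W  "
          f"{per_cost:6.2f} HEPScore/unit-cost")
```

Plotting one metric against the other makes the cost/carbon compromise visible: a machine can win on HEPScore/Watt while losing on HEPScore/unit-cost, or vice versa.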
I will present some preliminary studies and ideas to understand natural job drainage and power reduction in PIC Tier-1, which is using HTCondor. Based on the historical batch system logs, we are simulating natural drainage and understanding how we can modulate the PIC farm without killing jobs.
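The idea of natural drainage can be sketched simply: at some time the batch system stops starting new jobs, and running jobs finish on their own schedule, so occupancy (and hence power) decays without killing anything. The job records and the per-core power figure below are invented for illustration; the actual study uses historical HTCondor logs.

```python
# Minimal sketch of "natural drainage": at t0 no new jobs start, and the
# busy-core count decays as running jobs complete. Job data and the
# per-core power draw are invented.

def drainage_curve(job_ends_cores, t0, horizon, step=1):
    """job_ends_cores: list of (end_time_h, cores) for jobs running at t0.
    Returns [(t, busy_cores)] sampled every `step` hours after t0."""
    curve = []
    for t in range(t0, t0 + horizon + 1, step):
        busy = sum(c for end, c in job_ends_cores if end > t)
        curve.append((t, busy))
    return curve

# Jobs running at t0=0, with their future end times in hours:
jobs = [(2, 8), (5, 8), (5, 16), (9, 8), (12, 8)]
watts_per_core = 10.0  # assumed flat per-core draw

for t, busy in drainage_curve(jobs, t0=0, horizon=12, step=3):
    print(f"t+{t:2d}h: {busy:2d} cores busy, ~{busy * watts_per_core:.0f} W")
```

Replaying such curves over historical logs shows how quickly power could be shed at different times of day, and thus how the farm could be modulated without preempting work.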
The Purdue Analysis Facility (Purdue AF) is an advanced computational platform designed to support high energy physics (HEP) research at the CMS experiment. Based on a multi-tenant JupyterHub server deployed on a Kubernetes cluster, Purdue AF leverages the resources of the Purdue CMS Tier-2 computing center to provide scalable, interactive environments for HEP workflows. It supports a full HEP...
A description of our experience deploying OpenShift both for container orchestration and as a replacement for Red Hat Enterprise Virtualization.
This talk presents the findings of the 2023 cybersecurity audit undertaken at CERN and the resulting plans, progress, and accomplishments of the Organization over the past nine months in implementing its recommendations.
This talk will walk you through the challenges the ESnet security team faced during an attack against one of its firewalls. It covers the struggle and drama to access the data we needed and, in the end, highlights how nothing quite beats good old-fashioned, down-and-dirty system forensics.
We will describe the current activities and plans in WLCG networking, including details about SciTags, the WLCG perfSONAR deployment, and the related activities to monitor and analyze our networks. We will also describe the related efforts to plan for the upcoming WLCG Network Data Challenge through a series of mini-challenges that incorporate our tools and metrics.
The HEPiX IPv6 Working Group has been encouraging the deployment of IPv6 in WLCG for many years. At the last HEPiX meeting in Paris we reported that the LHC experiment Tier-2 storage services are now close to 100% IPv6-capable. We had turned our attention to WLCG compute and launched a GGUS ticket campaign for WLCG sites to deploy dual-stack computing elements and worker nodes. At that time...
The CZ Tier-2 in Prague (Czech Republic) joined the WLCG Data Challenge 24 and managed to receive and send more than 2 PB during the second week of DC24. Since then we have upgraded our network connection to LHCONE from 100 to 2x100 Gbps. The LHCONE link uses a GÉANT connection, which was also upgraded to 2x100 Gbps. During July 2024 we executed dedicated network stress tests between Prague...
This presentation looks at what is different about building and deploying AI fabrics.
System administrators and developers need a way to call application code and other tasks through command line interfaces (CLIs). Some examples include user management (creation, deletion, moderation, etc) or seeding the database for development. We have developed an open source Python framework, [pykern.pkcli][1], that simplifies the creation of these application-specific CLIs. In this talk, I...
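For readers unfamiliar with the pattern, here is a stdlib-only sketch of what such a framework automates: exposing plain Python functions as CLI subcommands. This is explicitly not the pykern.pkcli API; the application name and functions are invented, and pkcli itself removes the boilerplate shown here.

```python
# Stdlib sketch of the pattern a framework like pkcli automates: plain
# functions (user management, DB seeding, ...) exposed as subcommands.
# NOT the pykern.pkcli API; all names here are invented for illustration.
import argparse

def create_user(name):
    """Create a user (stub for illustration)."""
    return f"created user {name}"

def seed_db(rows=10):
    """Seed the development database (stub)."""
    return f"seeded {rows} rows"

def main(argv=None):
    parser = argparse.ArgumentParser(prog="myapp")
    sub = parser.add_subparsers(dest="cmd", required=True)
    p = sub.add_parser("create-user", help=create_user.__doc__)
    p.add_argument("name")
    p = sub.add_parser("seed-db", help=seed_db.__doc__)
    p.add_argument("--rows", type=int, default=10)
    args = parser.parse_args(argv)
    if args.cmd == "create-user":
        return create_user(args.name)
    return seed_db(args.rows)

print(main(["create-user", "alice"]))
```

A framework can derive the argument parsing, help text, and dispatch from the function signatures and docstrings instead of requiring it to be written by hand.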
As the complexity of FPGA and SoC development grows, so does the need for efficient and automated processes to streamline testing, building, and collaboration, particularly in large-scale scientific environments such as CERN. This initiative focuses on providing CI infrastructure that is tailored for FPGA development and pre-configured Docker images for essential EDA tools, keeping the...
This talk describes a project to develop a set of collaborative tools for the upcoming ePIC experiment at the BNL Electron-Ion Collider (EIC). The "Collaborative Research Information Sharing Platform" (CRISP) is built upon an extensible, full-featured membership directory, with CoManage integration and a customized InvenioRDM document repository. The CRISP architecture will be presented, along...
Advances in computing hardware are essential for future HEP and NP experiments. These advances are usually seen as incremental improvements in performance metrics over time, i.e. everything works the same, just better, faster, and cheaper. In reality, hardware advances and changes in requirements can result in the crossing of thresholds that require a re-evaluation of existing practices. The HEPiX...