20–24 Apr 2026
ISCTE Instituto Universitário de Lisboa
Europe/Lisbon timezone

Contribution List

73 out of 73 displayed
  1. Jorge Humberto Lucio Oliveira Gomes (LIP)
    20/04/2026, 09:00
  2. Hugo Miguel Da Silva Gomes (LIP), Joao Antonio Tomasio Pina (Laboratory of Instrumentation and Experimental Particle Physics (PT)), Jorge Humberto Lucio Oliveira Gomes (LIP)
    20/04/2026, 09:20
  3. Jorge Humberto Lucio Oliveira Gomes
    20/04/2026, 09:30
    Site Reports

    LIP and CNCA sites report to HEPiX Spring 2026.

    Go to contribution page
  4. Vladimir Bahyl (CERN)
    20/04/2026, 09:50
    Site Reports

    News from CERN since the last HEPiX workshop. This talk gives a general update from services in the CERN IT department.

    Go to contribution page
  5. Martin Bly (STFC-RAL)
    20/04/2026, 10:10
    Site Reports

    Update on developments from RAL.

    Go to contribution page
  6. Ofer Rind (Brookhaven National Laboratory)
    20/04/2026, 11:00
    Site Reports

    This presentation will cover developments over the past year, as well as upcoming plans, at the Scientific Computing and Data Facilities (SCDF) at BNL.

    Go to contribution page
  7. Mr Xiaowei Jiang (IHEP, Institute of High Energy Physics, Chinese Academy of Sciences)
    20/04/2026, 11:20
    Site Reports

    Recent updates from the IHEP site.

    Go to contribution page
  8. Dr Max Kühn (Karlsruhe Institute of Technology)
    20/04/2026, 11:40
    Site Reports

    We provide an overview of current activities, topics and challenges around the GridKa Tier-1 centre. Key experiences include very-high-capacity worker nodes, the physical relocation of our entire tape library between campuses, and the transition of the entire compute centre to a new network layout. We also dive into current woes around Grid-scale online storage and configuration management.

    Go to contribution page
  9. Jose Flix Molina (CIEMAT - Centro de Investigaciones Energéticas Medioambientales y Tec. (ES))
    20/04/2026, 12:00
    Site Reports

    PIC report to HEPiX Spring 2026.

    Go to contribution page
  10. Peter van der Reest
    20/04/2026, 14:20
    Storage & data management

    The CS3 (Conference on Sync & Share Services) [1] is a community-driven event whose history dates back to 2014.
    Its focus has shifted from the original exploration of various products for S&S services, site hardware configurations, and the integration of applications, towards the federation of various S&S instances in research and academic institutions, the support of scientific...

    Go to contribution page
  11. Robert Appleyard
    20/04/2026, 14:40
    Storage & data management

    Demand for object storage at RAL is growing. We already have Echo, a 130PB cluster for the WLCG and general physics, and we are expecting to support users with new AI/ML training frameworks that predominantly use S3.

    To support these use cases, we're building a new dense flash cluster, "Leo", using QLC storage. Leo will only offer an S3 interface, and will initially aim to support...

    Go to contribution page
  12. Mario Jorge Moura David
    20/04/2026, 15:00
    Storage & data management

    Polen is a set of services for Open Research Data provided to researchers by the national funding agency (FCCN). In this framework, one such service is a sync&share data platform based on Nextcloud. The implementation is done by LIP; we present the architecture, the implementation decisions based on the requirements, and additional development for the operation of the service.

    Go to contribution page
  13. Dr Pau Tallada-Crespí (PIC-CIEMAT)
    20/04/2026, 16:00
    Storage & data management

    The adoption of Open Science in data-intensive fields such as astronomy and cosmology requires infrastructures capable of managing, distributing, and enabling the reproducible reuse of massive datasets across geographically distributed communities. Addressing these challenges, CosmoHub is a high-performance open science data platform developed at the Port d’Informació Científica (PIC) to...

    Go to contribution page
  14. Hao Hu (Institute of High Energy Physics)
    20/04/2026, 16:20
    Storage & data management

    The High Energy cosmic Radiation Detector (HERD) is a major international space astronomy and particle astrophysics experiment planned for installation on the Chinese Space Station (CSS) around 2027. Its primary scientific objectives include precise measurement of high-energy cosmic rays up to the PeV range, indirect detection of dark matter, and observation of high-energy gamma rays.
    It is...

    Go to contribution page
  15. Natalia Diana Szczepanek (CERN)
    21/04/2026, 09:00
    Environmental sustainability, business continuity, and Facility improvement

    Monitoring power consumption at the level of grid job slots remains a missing component of current Workload Management Systems for HEP experiments. While individual computing centres can monitor power consumption locally, maintaining a consistent view across heterogeneous clusters and re-benchmarking systems after each configuration change is time-consuming and often impractical for...

    Go to contribution page
  16. Jan Hartmann
    21/04/2026, 09:30
    Environmental sustainability, business continuity, and Facility improvement

    I have been working at DESY for a bit over a year as part of Research Facilities 2.0 (RF2.0), with the aim of making the compute infrastructure more resource-efficient.

    In this talk I will present how we rolled out benchmarks to our clusters, how the results helped us find misconfigurations, and some of the configuration changes we made to our infrastructure.
    I will also present our...

    Go to contribution page
  17. Jose Flix Molina (CIEMAT - Centro de Investigaciones Energéticas Medioambientales y Tec. (ES))
    21/04/2026, 09:50
    Environmental sustainability, business continuity, and Facility improvement

    High Energy Physics (HEP) computing centers face increasing pressure to optimize energy consumption due to volatile electricity markets and the urgent need to reduce carbon footprints. The Port d'Informació Científica (PIC) is actively investigating strategies to implement dynamic, intra-day power scaling, aiming to reduce power draw during periods of peak electricity prices or high CO2...

    Go to contribution page
  18. Horst Severini (University of Oklahoma (US))
    21/04/2026, 11:00
    Site Reports

    Updates on the US ATLAS SouthWest Tier2 Center since the last HEPiX we attended.

    Go to contribution page
  19. Andreas Haupt (Deutsches Elektronen-Synchrotron (DE))
    21/04/2026, 11:20
    Site Reports

    News from the lab

    Go to contribution page
  20. Christopher Huhn
    21/04/2026, 11:40
    Site Reports

    Site report for GSI Helmholtzzentrum

    Go to contribution page
  21. Mattias Wadenstein (University of Umeå (SE))
    21/04/2026, 12:00
    Site Reports

    Site report from the Nordic Tier-1

    Go to contribution page
  22. Alessandro Di Girolamo (CERN), James Letts (Univ. of California San Diego (US))
    21/04/2026, 14:00
  23. Jose Flix Molina (CIEMAT - Centro de Investigaciones Energéticas Medioambientales y Tec. (ES))
    21/04/2026, 14:10
    Follow-up on mid-long term evolution of facilities (Topical Session)
  24. Jose Flix Molina (CIEMAT - Centro de Investigaciones Energéticas Medioambientales y Tec. (ES))
    21/04/2026, 14:35
    Follow-up on mid-long term evolution of facilities (Topical Session)
  25. Thomas Birkett
    21/04/2026, 15:05
    Follow-up on mid-long term evolution of facilities (Topical Session)
  26. Garhan Attebury (University of Nebraska Lincoln (US))
    21/04/2026, 15:35
    Follow-up on mid-long term evolution of facilities (Topical Session)

    The HL-LHC era and associated increase in data volume and processing requirements has led to many re-evaluations and discussions on how the computing systems supporting high energy physics should be architected. This presentation summarizes work done by U.S. CMS in recent years to develop its mid- to long-term infrastructure strategy based on task force work conducted in 2024-2025.

    This...

    Go to contribution page
  27. Mr Xiaowei Jiang (IHEP, Institute of High Energy Physics, Chinese Academy of Sciences)
    21/04/2026, 16:30
    Follow-up on mid-long term evolution of facilities (Topical Session)
  28. 21/04/2026, 16:55
    Follow-up on mid-long term evolution of facilities (Topical Session)
  29. Panos Paparrigopoulos (CERN)
    22/04/2026, 09:00
    Software and Services for Operation

    As WLCG prepares for the HL-LHC era, Operations Coordination continues to steer the infrastructure through major technical transitions while ensuring stable Run 3 operations.
    The current focus is on four key cross-cutting projects: the migration to token-based authentication and authorization, the evolution and sustainability of WLCG accounting, the implementation of XRootD monitoring, and...

    Go to contribution page
  30. Joao Antonio Tomasio Pina (Laboratory of Instrumentation and Experimental Particle Physics (PT))
    22/04/2026, 09:20
    Software and Services for Operation

    The Unified Middleware Distribution (UMD) is a software distribution provided by EGI that integrates a collection of software components (middleware) selected from various technology providers for deployment on the EGI/WLCG production infrastructure. The software repository for UMD (repository.egi.eu) is developed, maintained, and operated by LIP under contract with EGI.

    The repositories...

    Go to contribution page
  31. João Machado
    22/04/2026, 09:40
    Software and Services for Operation

    We present the migration of the CNCA Helpdesk from Request Tracker (RT) to Zammad, driven by the need for improved usability, automation, and integration capabilities.

    The talk focuses on the migration approach, including data extraction from RT and transformation/import into Zammad via APIs. Existing open-source tooling was extended to support CNCA specific requirements, enabling the...

    Go to contribution page
  32. Jingyan Shi (IHEP)
    22/04/2026, 10:00
    Software and Services for Operation

    Joblens is a lightweight observability collector designed for fine-grained monitoring of cluster jobs. Leveraging eBPF-based kernel instrumentation, Joblens enables dynamic tracking of process creation and system calls with zero overhead and no need for kernel modifications. Its modular and highly configurable plugin system, built on an asynchronous double-buffer pipeline, exports...

    Go to contribution page
  33. Joseph Frith
    22/04/2026, 11:00
    Cloud Technologies, Virtualization & Orchestration, Operating Systems

    The Scientific Computing and Data Facilities (SCDF) at Brookhaven Lab began
    in 1997 when the Relativistic Heavy Ion Collider (RHIC) and ATLAS Computing Facility
    was established. The full-service scientific computing facility has since supported
    a diverse range of scientific collaborations, by providing dedicated data processing,
    storage, and analysis resources for these expansive...

    Go to contribution page
  34. Samuel Bernardo
    22/04/2026, 11:20
    Cloud Technologies, Virtualization & Orchestration, Operating Systems

    Kubernetes (K8s) is a game-changing container orchestration platform that revolutionizes the way we deploy and manage applications. In this talk, we will delve into the intricacies of K8s deployment, optimal architecture design, and seamless integration with essential tools like ArgoCD and Gitlab CI. Join us as we share our journey, best practices, and key insights for successful Kubernetes adoption.

    Go to contribution page
  35. Franz Rhee (DESY)
    22/04/2026, 11:40
    Cloud Technologies, Virtualization & Orchestration, Operating Systems

    Provisioning research group storage at large-scale scientific computing facilities typically requires manual intervention from storage administrators: a user submits a request, an administrator logs into the storage management console, creates a namespace directory, sets quotas, and assigns ownership. This process does not scale as the number of virtual organisations (VOs) and research groups...

    Go to contribution page
  36. Bart van der Wal (Nikhef)
    22/04/2026, 12:00
    Cloud Technologies, Virtualization & Orchestration, Operating Systems

    After VMware was bought by Broadcom, the price of the products increased and the quality of service decreased. Nikhef decided to move our VMware cluster to XCP-ng with XOA.
    I will talk about why we decided to do this, but most of the talk will be about the migration and our experiences over the last year.

    Go to contribution page
  37. Domenico Giordano (CERN)
    22/04/2026, 14:00
    Computing & Batch Services

    The HEPiX Benchmarking Working Group develops and maintains benchmarking tools to measure computing resources across the Worldwide LHC Computing Grid (WLCG). Since the adoption of HEPScore23 in April 2023, the WG has been enhancing the benchmark suite to address evolving community needs. Currently, the WG is focused on two main developments: extending the benchmark suite with modules to...

    Go to contribution page
  38. Ms Elisabeth Gameiro (FUJIFILM Recording Media France)
    22/04/2026, 14:30
    Techwatch (Topical Session)

    High-Performance Computing (HPC) has become a critical tool in scientific research, engineering, AI training, and more. Some of the main challenges of HPC in data storage are handling massive data volumes, ensuring long-term data integrity and security, reducing the floor space and the carbon footprint.

    HPC applications generate petabytes of data, requiring high-capacity storage solutions....

    Go to contribution page
  39. Vladimir Bahyl (CERN)
    22/04/2026, 14:50
    Techwatch (Topical Session)

    This talk is a short update on the recent evolution of the tape technology from the CERN user perspective.

    Go to contribution page
  40. Dr Andrea Sciabà (CERN)
    22/04/2026, 15:10
    Techwatch (Topical Session)

    The Technology Watch Working Group, established in 2018 to take a close look at the evolution of the technology relevant to HEP computing, has resumed its activities after a long pause. In this report, we provide an overview of the hardware technology landscape and some recent developments, highlighting the impact on the HEP computing community, with a special focus on resource price evolution.

    Go to contribution page
  41. Tristan Suerink
    22/04/2026, 15:40
    Techwatch (Topical Session)

    With CPUs and accelerators drawing ever more watts than previous generations, cooling will become increasingly hard, and keeping future clusters running efficiently harder still.
    At every conference, you’ll see more companies telling you that they have the real golden egg for this problem. The big question is: do they really have that golden egg for you?
    We will show the different type...

    Go to contribution page
  42. Tristan Suerink
    22/04/2026, 16:30
    Techwatch (Topical Session)

    The problem of efficient and effective cooling has been haunting us for nearly a decade: since 2018 Nikhef has been aware of the issue and has explored several possibilities: ones that fit our environment, use the data centre infrastructure we have, and the infrastructure we have for getting rid of residual heat.
    But what is the best technology out there today? And which technologies that...

    Go to contribution page
  43. Oxana Smirnova (Lund University)
    22/04/2026, 16:50
    Techwatch (Topical Session)

    The WLCG Workshop on Heterogeneous Architectures (CERN, Dec 2025) reviewed the readiness of GPU‑enabled workflows across LHC experiments and the requirements for heterogeneous resources ahead of Run 4. While GPU acceleration is advancing in simulation, reconstruction and ML workflows, experiments are not yet ready to request formal GPU pledges, pending further benchmarking, workflow...

    Go to contribution page
  44. Matthias Jochen Schnepf
    22/04/2026, 17:20
    Techwatch (Topical Session)

    GPUs are energy-efficient hardware for several High Energy Physics (HEP) applications.
    However, servers with GPUs are expensive and require special cooling, operation, and software support compared to CPU-only servers.
    We present what we have learned during the operations of GPU servers regarding these aspects at KIT, as well as some experiences from other sites.

    Go to contribution page
  45. Luca Atzori (CERN)
    22/04/2026, 17:40
    Techwatch (Topical Session)

    As CERN prepares its central IT Data Centres for the High‑Luminosity LHC era (Run 4), the compute and storage infrastructure operated by CERN IT must evolve to meet significantly higher demands in throughput, capacity, and efficiency while remaining within strict constraints on budget, power consumption, and operational simplicity. With LS3 expected to begin in July 2026 and the HL‑LHC start...

    Go to contribution page
  46. Bo Zhuang (Institute of High Energy Physics, Chinese Academy of Sciences)
    23/04/2026, 09:00
    Storage & data management

    The Institute of High Energy Physics has constructed multiple large-scale scientific facilities, including BSRF, HEPS, LHAASO, JUNO, AliCPT, which generate a large amount of data requiring high-performance data transfer services. To make full use of the computing resources of the remote sites of IHEP, data needs to be transmitted between multiple computing sites. The National High Energy...

    Go to contribution page
  47. Christoph Beyer
    23/04/2026, 09:20
    Computing & Batch Services

    The yearly autumn European HTCondor workshop has wrapped up, and some of the highlights are presented in more detail.

    Go to contribution page
  48. Raghuvar Vijayakumar (University of Freiburg (DE))
    23/04/2026, 09:40
    Computing & Batch Services

    Distributed computing infrastructures are typically shared by multiple research communities, where precise and transparent resource accounting is essential. To meet these demands, we developed AUDITOR (AccoUnting DatahandlIng Toolbox for Opportunistic Resources), a flexible, modular, and extensible accounting ecosystem designed for heterogeneous computing environments.

    AUDITOR captures and...

    Go to contribution page
  49. Jan Hartmann
    23/04/2026, 10:00
    Computing & Batch Services

    A little bit of DESY Linux tools, all at once.
    In the past, we have developed lots of smaller and larger tools to help in various aspects of Linux administration at DESY.
    We present some of them in this talk.

    • Which packages do the applications running on a Linux system stem from? What is the security status of those packages and applications?
    • Jump hosts: Making administration more secure, and isolating...

    Go to contribution page
  50. Steve Brasier (StackHPC)
    23/04/2026, 10:20
    Computing & Batch Services

    HPC clusters are often seen as relatively static systems that do not need the flexibility provided by cloud environments. We describe the StackHPC Slurm Appliance, a Slurm batch scheduler deployment which uses OpenStack to combine bare metal performance with the operational convenience and ease of testing provided by virtualisation. The resulting HPC system is suitable for uses ranging from a...

    Go to contribution page
  51. Christoph Beyer
    23/04/2026, 11:00
    Computing & Batch Services

    The IDAF (Interdisciplinary Data and Analysis Facility) is a Helmholtz LK II facility. It is located at DESY and serves the computational needs of communities in the MATTER program.

    Go to contribution page
  52. Konomi Omori (KEK)
    23/04/2026, 11:20
    Software and Services for Operation

    KEK has deployed an Identity Provider (IdP) and joined both GakuNin and eduGAIN to enable federated authentication with domestic and international institutions. This presentation describes the technical architecture and current status of the IdP, and outlines plans to extend this federated authentication infrastructure across multiple services provided by the KEK Computing Research Center. It...

    Go to contribution page
  53. Qi Luo (Computing Center, Institute of High Energy Physics, Chinese Academy of Sciences)
    23/04/2026, 11:40
    Software and Services for Operation

    The Institute of High Energy Physics (IHEP), Chinese Academy of Sciences, is a leading research institution in China dedicated to high-energy physics, advanced accelerator technology development, and nuclear technology applications. IHEP undertakes several major national science infrastructure projects, the most prominent of which is the High Energy Photon Source (HEPS). With an electron beam...

    Go to contribution page
  54. Zhechka Toteva (CERN)
    23/04/2026, 12:00
    Software and Services for Operation

    CERN's computing infrastructure manages thousands of services across a complex distributed environment, requiring robust secret management for application credentials, root accounts, certificates, and service tokens. This talk explores CERN's transition from puppet-oriented, in-house secrets management solutions to HashiCorp Vault as a centralized, enterprise-level secret management...

    Go to contribution page
  55. Marc Santamaria Riba (Institut de Física d'Altes Energies)
    23/04/2026, 14:00
    Networking & Security

    The Port d’Informació Científica (PIC) serves as a critical data and computing hub for numerous scientific experiments, with the vast majority of its resources dedicated to the Worldwide LHC Computing Grid (WLCG). While standard LHC grid operations rely heavily on traditional batch submissions, the growing demand for interactive, Tier-3 analysis facilities requires a shift toward scalable,...

    Go to contribution page
  56. David Kelsey (Science and Technology Facilities Council STFC (GB))
    23/04/2026, 14:20
    Networking & Security

    The HEPiX IPv6 Working Group has been encouraging the deployment and use of IPv6 in WLCG for many years. At the last HEPiX meeting in China we reported that more than 70% of all WLCG sites have worker nodes and compute services that are IPv6-capable. That campaign continues. We also presented news that the USA Tier1 sites had successfully removed IPv4 peering from their LHCOPN connections. We...

    Go to contribution page
  57. Shan Zeng
    23/04/2026, 14:50
    Networking & Security

    At the LHCOPN/ONE meeting in October 2025, IHEP decided to act as a pioneer and volunteer to pilot the LHCOPN IPv6-only initiative. This report presents the LHCOPN IPv6-only progress at IHEP.

    Go to contribution page
  58. Mattias Wadenstein (University of Umeå (SE))
    23/04/2026, 15:10
    Networking & Security

    Universities and research institutes were early adopters of IPv4, which
    has served scientific research infrastructure well in the past. But now the
    time has come to let go of the legacy protocol with its awkward limits, and
    phase it out in favour of IPv6.

    The World-wide LHC Computing Grid (WLCG) is half-way through the transition
    from IPv4 to IPv6, with almost all services now being...

    Go to contribution page
  59. Jiri Chudoba (Czech Academy of Sciences (CZ))
    23/04/2026, 16:00
    Networking & Security

    The Czech WLCG Tier-2 consistently meets its computing and storage commitments to the LHC experiments through a geographically distributed infrastructure. CZ-Tier-2 resources are spread across three sites, connected by high-bandwidth links operated by the Czech NREN, CESNET. Additionally, substantial CPU resources from the Czech national supercomputing center IT4I are incorporated into WLCG...

    Go to contribution page
  60. Dawid Kulikowski
    23/04/2026, 16:20
    Networking & Security

    This presentation aims to give an update on the global security landscape from the past year. The global political situation has introduced a novel challenge for security teams everywhere. What’s more, the worrying trend of data leaks, password dumps, ransomware attacks and new security vulnerabilities does not seem to slow down. We present some interesting cases that CERN and the wider HEP...

    Go to contribution page
  61. Dennis van Dok (Nikhef)
    23/04/2026, 16:40

    A demo of how we currently run GitLab pipelines to build deb and rpm packages for software that we maintain, in container images that are themselves produced by GitLab pipelines.

    Go to contribution page
  62. Chris Brew (Science and Technology Facilities Council STFC (GB))
    23/04/2026, 16:50

    Showing off the RAL PPD multi-layered NFS home file service.

    Go to contribution page
  63. Garhan Attebury (University of Nebraska Lincoln (US))
    23/04/2026, 17:00

    Tools mentioned will be:

    Cobbler
    Netbox
    k9s / Flux
    Puppet
    Akvorado

    Go to contribution page
  64. 23/04/2026, 17:10
  65. Alexandr Mikula (Czech Academy of Sciences (CZ))
    24/04/2026, 09:00
    Computing & Batch Services

    We present operational experience from two CZ WLCG sites deploying per-job LVM-enforced disk quotas and CGroup2-enforced memory limits within HTCondor, with a 10% overhead allowance and swap disabled for all batch jobs.
    We cover bugs and unexpected behaviours encountered in production, pitfalls to anticipate, and configuration choices to avoid.

    Go to contribution page
  66. Dr Jeff Wagg (OCA)
    24/04/2026, 09:20
    Miscellaneous

    The SPECTRUM project (https://spectrumproject.eu/), funded under Horizon Europe, presents its final deliverables: the Strategic Research, Innovation and Deployment Agenda (SRIDA) and the Technical Blueprint for a European compute and data continuum serving data-intensive science communities.

    The SRIDA is structured around four pillars encompassing 13 strategic priorities spanning technical...

    Go to contribution page
  67. Ruben Domingo Gaspar Aparicio (CERN)
    24/04/2026, 09:50
    Miscellaneous

    As part of CERN’s transcription and translation service, high-quality captions were produced for approximately 40,000 hours of media using models trained on CERN-specific/HEP terminology, covering CERN’s official languages. Beyond the immediate accessibility benefits of captioning, the service also explored ways to improve the discoverability of media content. Two proof-of-concept systems were...

    Go to contribution page
  68. Zhengde Zhang (Institute of High Energy Physics, Chinese Academy of Sciences)
    24/04/2026, 10:10
    Applied AI in Computing Center Infrastructures

    Interdisciplinary teams at IHEP have developed several AI agents for scientific research, including Dr.Sai BESIII, Dr.Sai Rongzai, and Dr.Sai DORA, which require new AI infrastructure.

    This talk will cover recent progress in these agent systems and introduce the OpenDr.Sai framework (as a harness).

    The implemented solutions cover: connecting agents with experimental data, integrating...

    Go to contribution page
  69. Juan Manuel Guijarro (CERN)
    24/04/2026, 11:00
    Applied AI in Computing Center Infrastructures

    We are developing a new Machine Learning (ML) service at CERN to support the use of Large Language Models (LLMs) and Agentic AI. Our goal is to provide a reliable and secure foundation for researchers and developers. In this presentation, we will describe the architecture and plans for this new service. It will include several key components: an LLM Proxy that works with OpenAI-compatible...

    Go to contribution page
  70. Shore Salle Chota
    24/04/2026, 11:20
    Applied AI in Computing Center Infrastructures

    The increasing complexity of computational research demands HPC and AI systems that are not only powerful but also reproducible, adaptable, and easier to manage. This presentation details the integration of Spack, MLflow, and AgenticAI within the Maxwell Cluster at DESY to enhance software management, experiment tracking, and workflow automation. Spack introduces a flexible package management...

    Go to contribution page
  71. Qi Xu (National Space Science Center, Chinese Academy of Sciences)
    24/04/2026, 11:40
    Applied AI in Computing Center Infrastructures

    The growing trend of AI for Science (AI4S) has placed new demands on scientific data management and application. As a bridge connecting raw data to data-driven applications, a data repository is expected to enhance its data service capabilities to facilitate AI4S applications.

    The Chinese National Space Science Data Center (NSSDC) is responsible for the archiving, curation, long-term...

    Go to contribution page
  72. Alessandro Di Girolamo (CERN), James Letts (Univ. of California San Diego (US))
    Follow-up on mid-long term evolution of facilities (Topical Session)
  73. Jose Flix Molina (CIEMAT - Centro de Investigaciones Energéticas Medioambientales y Tec. (ES))