CernVM Workshop 2019

Europe/Zurich
30/7-018 - Kjell Johnsen Auditorium (CERN)

Gerardo Ganis (CERN), Jakob Blomer (CERN), Radu Popescu (CERN)
Description

The next CernVM Users Workshop will take place at CERN from 3 to 5 June 2019, following the previous editions held at CERN in March 2015, at RAL (UK) in June 2016, and at CERN in January 2018.

As usual, the workshop aims to bring together users and developers to discuss the current status of the CernVM ecosystem and its future directions, with a fresh look at the landscape of cloud technology and software delivery.

As in previous editions, we have invited guest speakers from industry to present selected technology topics. This time, we are very happy to welcome:

  • Harris Hancock (Cloudflare), core developer of the Cloudflare Workers serverless framework
  • Michael Bauer (Sylabs), core developer of Singularity
  • Dorian Krause (Jülich Supercomputing Centre), head of the High-Performance Computing Systems division
  • Jesse Williamson (Canonical), senior distributed storage engineer on Ceph
  • Giuseppe Scrivano (Red Hat), main developer of fuse-overlayfs
  • Doug Thain (U Notre Dame, remotely), head of the Cooperative Computing Lab

The workshop will also include a hands-on session. While the detailed agenda is yet to be defined, there will be opportunities for demos and tutorials, and for users to come forward with topics for discussion.

A video-conference service will be provided to support remote attendance.

The workshop dinner will take place at the 'Les Saveurs du Liban' Lebanese restaurant in downtown Geneva. Please fill in the related survey by 1 June.

More information will be posted on this website in due course.

For questions or comments, please contact us at cernvm-workshop@cern.ch.


Monday, 3 June 2019

    • 08:30 09:00
      Registration 30m 30/7-018 - Kjell Johnsen Auditorium

    • 09:00 09:30
      News from the Developers: Welcome 30/7-018 - Kjell Johnsen Auditorium

    • 09:30 10:30
      News from the Developers: New Developments I 30/7-018 - Kjell Johnsen Auditorium

    • 10:30 11:00
      Coffee Break 30m 30/7-018 - Kjell Johnsen Auditorium

    • 11:00 12:00
      News from the Developers: New Developments II 30/7-018 - Kjell Johnsen Auditorium

    • 14:00 14:10
      Session Introduction 10m 30/7-018 - Kjell Johnsen Auditorium

    • 14:10 15:30
      User Stories: Experiment and Site Reports I 30/7-018 - Kjell Johnsen Auditorium

      • 14:10
        LHCb Nightly Build Use Case 20m
        Speaker: Ben Couturier (CERN)
      • 14:30
        Using CVMFS with Spack at the FCC experiment 20m

        In preparation for the post-LHC era, the Future Circular Collider (FCC)
        Collaboration is undertaking design studies for multiple accelerator
        projects, with emphasis on proton-proton and electron-positron
        high-energy frontier machines. From the beginning of the collaboration,
        the development of a software stack with common and interchangeable
        packages has played an important role in simulation, reconstruction,
        and analysis studies. As with the existing LHC experiments, these
        packages need to be built and deployed for different platforms and
        compilers. For these tasks FCC relies on Spack, a new package manager
        recently adopted by other experiments within the High-Energy Physics
        (HEP) community. Despite this warm adoption, the integration of Spack
        with CVMFS for software distribution, while already possible, exposes
        some limitations. This talk provides an overview of these difficulties
        in the context of the FCC Software.

        Speaker: Javier Cervantes Villanueva (CERN)
      • 14:50
        The application of CVMFS at IHEP 20m

        IHEP has been using CVMFS since 2017, and the cvmfs-stratum-one.ihep.ac.cn server provides replica services for the cern.ch, opensciencegrid.org, egi.eu, and ihep.ac.cn software repositories. The first part of this report will cover the status of cvmfs-stratum-one.ihep.ac.cn and the plans for its future.
        China's Large High Altitude Air Shower Observatory (LHAASO) is a cosmic-ray detection facility located in the high mountains of Sichuan province. The experimental data captured by the LHAASO detectors are processed in a computer room situated at high altitude in a harsh natural environment, and then transmitted to IHEP. The second part of this report will present the application of CVMFS in the LHAASO experiment and share some problems encountered in the use of CVMFS.
        The expected duration is about 10 minutes.

        Speaker: Mr Qingbao Hu (IHEP)
      • 15:10
        CVMFS at Compute Canada 20m

        This talk will present the activities and developments at Compute Canada related to CVMFS and to the publication and management of research software and data. Topics include availability and resiliency considerations for globally accessible repositories, the collaboration and coordination challenges that can arise when different organizations use each other's repositories, and container image distribution in an experimental Kubernetes cluster at the University of Victoria.

        Speaker: Ryan Taylor (University of Victoria (CA))
    • 15:30 16:00
      Coffee Break 30m 30/7-018 - Kjell Johnsen Auditorium

    • 16:00 17:30
      User Stories: Experiment and Site Reports II 30/7-018 - Kjell Johnsen Auditorium

      • 16:00
        Using CVMFS for User Analysis Code Distribution 20m

        CVMFS is primarily used on the grid for distributing released experiment code, and it is expected that only a small number of people in each experiment manage that code. Fermilab has implemented a system for publishing temporary user analysis code in CVMFS, with repositories that are shared by many people and a server API that accepts tarballs from users in multiple experiments. Updates are expedited so they become available on worker nodes within a few minutes. The system is integrated with Fermilab's job submission system. This talk will describe the system and explain how it could be deployed at other sites and used by other job submission systems. A schematic illustration of the tarball-upload idea (not the actual Fermilab API) follows this entry.

        Speaker: Dave Dykstra (Fermi National Accelerator Lab. (US))
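
        The schematic illustration referenced above: handing a user tarball to a publication service over HTTP. The endpoint URL, header names, and token handling are invented for this sketch and are not the actual Fermilab API (TypeScript, using Node 18+'s built-in fetch).

          import { readFileSync } from 'fs';

          // Purely illustrative: upload a tarball of analysis code to a hypothetical
          // publication endpoint, which would unpack it into a shared CVMFS repository.
          async function publishTarball(tarballPath: string): Promise<void> {
            const body = readFileSync(tarballPath);
            const response = await fetch('https://publish.example.org/api/v1/upload', {
              method: 'POST',
              headers: {
                'content-type': 'application/x-tar',
                // A real service would authenticate the user, e.g. with a bearer token.
                authorization: `Bearer ${process.env.PUBLISH_TOKEN ?? ''}`,
              },
              body,
            });
            if (!response.ok) {
              throw new Error(`upload failed: ${response.status} ${response.statusText}`);
            }
            // Once published, the content propagates to worker nodes through the
            // usual stratum 1 / squid chain within a few minutes.
          }

          publishTarball('analysis-code.tar').catch((err) => console.error(err));
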
      • 16:20
        Using CVMFS on a distributed Kubernetes cluster - The PRP experience 20m

        The Pacific Research Platform (PRP) operates a Kubernetes cluster that manages over 2.5k CPU cores and 250 GPUs. The compute nodes are distributed over several locations, mostly in California, all connected with high-speed networking of 10 Gbps or higher.

        In order to support OSG users, we needed to provide CVMFS support in the Kubernetes cluster. Since user containers run unprivileged, CVMFS could not be mounted from inside the user containers themselves. The chosen path forward was to provide CVMFS as a mountable Kubernetes volume. We started with the CERN-provided CSI plugin, but had to make some changes to get it to work in our environment; a minimal volume sketch follows this entry.

        In this talk, we will detail the actual setup used, as well as our operational experience over the last four months. We will also outline the functionality we perceive to be missing.

        Speaker: Igor Sfiligoi (UCSD)
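
        The volume sketch referenced above, expressed with the model types from the @kubernetes/client-node package: a rough illustration of an unprivileged pod consuming a CVMFS repository through a PersistentVolumeClaim served by a CVMFS CSI driver. The pod name, image, repository, and claim name are assumptions for illustration; the actual storage class and attribute names depend on the version of the CSI plugin deployed on the cluster.

          import { V1Pod } from '@kubernetes/client-node';

          // Sketch only: the pod mounts /cvmfs/atlas.cern.ch through a PVC that is
          // assumed to be bound via a CVMFS CSI storage class. The mount lives in
          // the CSI node plugin, so the user container itself needs no privileges.
          const pod: V1Pod = {
            apiVersion: 'v1',
            kind: 'Pod',
            metadata: { name: 'cvmfs-demo' },
            spec: {
              containers: [
                {
                  name: 'worker',
                  image: 'centos:7',
                  command: ['ls', '/cvmfs/atlas.cern.ch'],
                  volumeMounts: [
                    { name: 'cvmfs-atlas', mountPath: '/cvmfs/atlas.cern.ch', readOnly: true },
                  ],
                },
              ],
              volumes: [
                {
                  name: 'cvmfs-atlas',
                  // Hypothetical claim, assumed to exist in the same namespace.
                  persistentVolumeClaim: { claimName: 'cvmfs-atlas-pvc', readOnly: true },
                },
              ],
            },
          };

          // Serialized to JSON, the object can be applied with `kubectl apply -f -`.
          console.log(JSON.stringify(pod, null, 2));
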
      • 16:40
        Evolution of CernVM-FS Infrastructure at CERN 20m

        The Storage group of the CERN IT department operates the CVMFS release managers for repositories hosted at CERN (also known as stratum zeros), the CERN replica servers (stratum ones), and the local squid caches. This talk describes the current architecture of the CVMFS service at CERN and reports on the introduction of S3 as the storage backend for stratum zeros, the upgrade of selected release managers to CERN CentOS 7, the deployment of the Gateway component, and the instantiation of dedicated squid proxies for ATLAS repositories.

        Speaker: Enrico Bocchi (CERN)
      • 17:00
        Using CVMFS to distribute LCG Releases 20m

        This talk outlines the usage of the CernVM File System in the EP-SFT-SPI section at CERN. Its task is to distribute and continuously update a software stack (called LCG Releases) that is used by ATLAS, LHCb, SWAN, and several others.

        The stack contains HEP-specific software as well as external packages. There are usually two major releases per year as well as nightly builds with the latest updates. For these nightly builds we have two main challenges to solve: increasing publication speed and improving control of the build environments.

        We present our approach to solving these challenges by using recent CVMFS features such as parallel publications via a gateway machine. In addition, we outline our planned usage of Kubernetes orchestration to cope with the growing number of builds each night.

        Speaker: Johannes Martin Heinz
    • 20:00 22:00
      Social Events: Workshop Dinner at 'Les Saveurs du Liban'

Tuesday, 4 June 2019

    • 08:45 09:00
      Announcements 15m 31/3-004 - IT Amphitheatre

    • 09:00 10:30
      Technology Outlook: Invited Speakers I 31/3-004 - IT Amphitheatre

      • 09:00
        Exploring Cloudflare Workers 45m

        Cloudflare Workers is a serverless computing platform optimized to minimize latency to end users. Cloudflare runs every guest function on every server across its network's 175 points of presence, meaning code runs as close to the end user as possible and is not confined to geographic regions. This requires overcoming a scalability challenge faced by container-based platforms: reducing per-function overhead enough to deploy every function universally to a global server fleet.

        The V8 JavaScript engine contains a solution: Isolates, a lightweight sandboxing technology. V8 Isolates allow Cloudflare Workers to use JavaScript and WebAssembly modules as serverless functions, minimizing overhead and providing a familiar language environment for web application developers. Workers reinforces this familiarity by implementing standardized JavaScript APIs found in browsers.

        This talk examines the design of Cloudflare Workers, how it fits into the serverless computing landscape, and the problems Worker scripts can solve; a minimal Worker sketch follows this entry.

        About the Author
        Harris Hancock is a systems engineer who helps implement the Cloudflare Workers runtime environment, with a particular focus on the JavaScript API. He previously wrote communications middleware for an educational robotics startup, during which time he became a regular contributor to the Cap'n Proto RPC library. It was this interest in protocols and systems programming which lured him to Cloudflare in 2017.

        Speaker: Harris Hancock (Cloudflare)
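
        A minimal sketch of a Worker in the Service Worker style described above, assuming the types from the @cloudflare/workers-types package (or TypeScript's "webworker" library); the handler name and response text are illustrative, not taken from the talk.

          // Each request is served from a V8 isolate rather than a per-request
          // container or process; Request, Response, and URL are the standard
          // browser APIs mentioned in the abstract.
          addEventListener('fetch', (event: FetchEvent) => {
            event.respondWith(handleRequest(event.request));
          });

          async function handleRequest(request: Request): Promise<Response> {
            const url = new URL(request.url);
            // Illustrative response; a real Worker would route, rewrite, or call fetch() here.
            return new Response(`Hello from the edge! You requested ${url.pathname}`, {
              headers: { 'content-type': 'text/plain' },
            });
          }
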
      • 09:45
        The evolution of the HPC facility at JSC 45m

        The Jülich Supercomputing Centre (JSC) operates a large-scale network, data, and compute infrastructure for scientific research, covering a broad spectrum of use cases from scalability-focused large-scale simulations to community-specific services with high-throughput requirements. In this talk we will discuss the design of JSC's facility, which addresses these different use cases through a mixture of co-located and dedicated resources. Based on currently ongoing exemplary projects with different scientific communities, we will discuss JSC's future systems strategy in view of the ongoing exascale race, the need for novel storage management capabilities, and upcoming requirements to support new system usage patterns.

        About the Speaker
        Dorian Krause leads the High Performance Systems division at the Jülich Supercomputing Centre at Forschungszentrum Jülich. His group is responsible for the operation of the two major supercomputers JURECA and JUWELS and the primary storage infrastructure JUST as well as the co-design and implementation of new compute and data services based on these systems.

        Speaker: Dorian Krause (Jülich Supercomputing Centre)
    • 10:30 11:00
      Coffee Break 30m 31/3-009 - IT Amphitheatre Coffee Area

    • 11:00 11:45
      Technology Outlook: Invited Speakers II 31/3-004 - IT Amphitheatre

      • 11:00
        Solving Problems in HPC with Singularity 45m

        The Singularity container runtime has become widely adopted as the de facto standard container platform for HPC workloads. At the beginning of 2018, Sylabs was founded to further HPC innovation by driving Singularity development. This talk will explore some of the ways in which the Singularity community and Sylabs are helping to solve problems in the HPC space, with a focus on efforts to streamline CVMFS usage with containers.

        About the speaker
        Michael Bauer first began working with containers at the GSI national lab in Darmstadt, Germany, in 2017, while taking a semester off from the University of Michigan. Michael met Greg Kurtzer, project lead of Singularity, during his time at GSI and began contributing heavily to the Singularity project. At the start of summer 2017, Greg hired Michael to work at the Silicon Valley startup RStor, where he continued to work on the Singularity container technology. After six months at RStor, the Singularity team left RStor to create their own company, Sylabs, Inc., where Michael, Greg, and several other developers now work full time on developing Singularity.

        Speaker: Michael Bauer (Sylabs)
    • 12:00 14:00
      Social Events: Experiment Visit for Invited Speakers
    • 14:00 15:00
      User Stories: Experiment and Site Reports III 31/3-004 - IT Amphitheatre

    • 15:00 17:45
      Technology Outlook: Invited Speakers III 31/3-004 - IT Amphitheatre

      • 15:00
        A Tale of Two Clusters: CernVM-FS and CephFS in Context 45m

        This talk explores the underlying purposes of CernVM-FS and CephFS, providing intuition for the general goals of each project, indicating recent directions and highlighting use cases for which they are each well-suited.

        About the Speaker
        Jesse Williamson is a software engineer with experience in a wide variety of environments. He has contributed to Ceph, CernVM-FS, Riak, and the Boost C++ libraries. A long-time user of C++, he is an organizer of the Portland C++ User's Group, a member of several committees at CppCon, and has
        participated in WG21.

        Speaker: Jesse Williamson (Canonical)
      • 15:45
        Coffee Break 30m
      • 16:15
        Scalable Applications 45m

        About the Speaker
        Douglas Thain is a Professor and Associate Chair of Computer Science and Engineering at the University of Notre Dame. He received his Ph.D. in Computer Sciences from the University of Wisconsin and his B.S. in Physics from the University of Minnesota. His research is focused on the design of large-scale computing systems for science and engineering, including scientific workflows, distributed filesystems, and high-throughput computing.

        Speaker: Douglas Thain (University of Notre Dame)
      • 17:00
        Rootless containers with Podman and fuse-overlayfs 45m

        During my talk, I will show how it is possible to run OCI containers with Podman without requiring root privileges on the host.

        In the second part of the talk, I'll focus on how we use fuse-overlayfs to replace the kernel overlay file system implementation, and on the shiftfs-like capabilities built into fuse-overlayfs; a brief usage sketch follows this entry.

        About the Speaker
        Giuseppe is a Principal Software Engineer at Red Hat. He is part of the Container Runtime Team, where he mainly works on Podman, Buildah, and CRI-O.

        Speaker: Giuseppe Scrivano (Red Hat)
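
        The usage sketch referenced above (TypeScript/Node driving the standard Podman CLI), assuming Podman is installed for a non-root user and that fuse-overlayfs is configured as the mount program in the user's containers storage configuration; the image and commands are illustrative.

          import { execFileSync } from 'child_process';

          // Run as an ordinary (non-root) user. Rootless Podman combines user
          // namespaces with fuse-overlayfs in place of the kernel overlayfs driver.

          // Inspect the storage configuration; for rootless setups the output
          // reports the graph driver and, if configured, the fuse-overlayfs
          // mount program.
          console.log(execFileSync('podman', ['info'], { encoding: 'utf8' }));

          // Start an OCI container without any root privileges on the host.
          // Inside the container the process appears to run as root, but that
          // UID is mapped back to the invoking user's UID on the host.
          const out = execFileSync(
            'podman',
            ['run', '--rm', 'docker.io/library/alpine:latest', 'id'],
            { encoding: 'utf8' },
          );
          console.log(out.trim());
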
    • 17:45 18:15
      Wrap-Up 30m 31/3-004 - IT Amphitheatre

Wednesday, 5 June 2019

    • 09:00 12:00
      Tutorials: CernVM-FS Tutorials 30/7-018 - Kjell Johnsen Auditorium


      Tutorial repository:
      https://github.com/radupopescu/cvm19-tutorial

      VM access keys:
      https://send.firefox.com/download/ef56c7858213bd08/#PL_YviYi7oRBc1Pfru38mg