CernVM Users Workshop

Europe/Zurich
30/7-018 - Kjell Johnsen Auditorium, CERN

Gerardo Ganis (CERN), Jakob Blomer (CERN), Radu Popescu (CERN)
Description

The next CernVM Users Workshop will take place at CERN from 30 January to 1 February 2018, following the successful editions held at CERN in March 2015 and at RAL, UK in June 2016.

As usual, the workshop aims to bring together users and developers to discuss the current status of the CernVM ecosystem and the future directions, with a fresh look onto the landscape of technology and the evolution in virtualization and cloud computing.

This edition will also be the occasion to celebrate the 10th anniversary of the CernVM project. We are looking forward to a keynote address by Predrag Buncic, founder of the CernVM project and now ALICE computing coordinator.

As in previous editions, we have invited guest speakers from industry to address selected technology topics. This time, we are very happy to welcome

  • Miklos Szeredi (Red Hat), the author of File System in Userspace (FUSE) and OverlayFS
  • Saeed Noursalehi (Microsoft), core developer of the Git Virtual File System (GVFS)
  • Michael Bauer (SyLabs), core developer of Singularity
  • Justin Cormack (Docker), core developer of LinuxKit

The third day of the workshop will be entirely dedicated to 'hands on' sessions. While the agenda is yet to be defined, there will be opportunities for demos and tutorials, as well as for users to come forward with topics for discussion.

A video-conference service will be provided to facilitate remote attendance.

More information will be posted in due time on this web site.

For questions or comments, please contact us at cernvm-workshop@cern.ch.

The workshop will be followed on 2 February 2018 by a special pre-GDB meeting dedicated to the usage of HPC Resources for LHC computing.

Videoconference Room
Name: CernVM_Users_Workshop
Description: A/V transmission of the workshop
Extension: 10608592
Owner: Jakob Blomer
    • 08:30 - 09:50
      Technology Outlook
      • 08:30
        Evolution of FUSE and OverlayFS 40m

OverlayFS is the "union filesystem" solution that is now available as part of the Linux kernel. OverlayFS is currently in active development: POSIX compliance, NFS export, and improved performance are being worked on, and there are plans to add user namespace and unprivileged mounting support.

FUSE is a userspace interface for developing filesystems. FUSE started out on Linux, but is now available on other platforms as well. FUSE is mostly in maintenance mode at the moment, but there are plans for adding user namespace support, improving operation for distributed filesystems, and making performance improvements to keep pace with the development of fast, memory-based storage.

        This talk aims to give an overview of FUSE and OverlayFS features past, present and future. The target audience is userspace developers familiar with the UNIX filesystem interface.
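The union-mount semantics described above can be illustrated with a short Python toy (a conceptual sketch only, not OverlayFS code; all names are invented): lookups fall through from a writable upper layer to a read-only lower layer, writes are "copied up" into the upper layer, and deletions leave whiteout markers that hide lower-layer entries.

```python
# Conceptual sketch of union-filesystem lookup semantics
# (illustration only, not actual OverlayFS code).
WHITEOUT = object()  # marker in the upper layer hiding a lower-layer entry

class UnionFS:
    def __init__(self, lower):
        self.lower = lower   # read-only bottom layer: {path: content}
        self.upper = {}      # writable top layer

    def read(self, path):
        if path in self.upper:
            entry = self.upper[path]
            if entry is WHITEOUT:       # deleted via whiteout
                raise FileNotFoundError(path)
            return entry
        if path in self.lower:          # fall through to the lower layer
            return self.lower[path]
        raise FileNotFoundError(path)

    def write(self, path, content):
        # "copy-up": all modifications land in the upper layer;
        # the lower layer is never touched.
        self.upper[path] = content

    def unlink(self, path):
        # deleting a lower-layer file leaves a whiteout in the upper layer
        self.upper[path] = WHITEOUT

fs = UnionFS({"/etc/hosts": "127.0.0.1 localhost"})
fs.write("/etc/motd", "hello")   # goes to the upper layer
fs.unlink("/etc/hosts")          # lower-layer file now hidden
```

Note that the lower layer stays intact throughout, which is what makes union mounts useful for layering writable state over a shared read-only base image.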

        About the speaker:
Miklos Szeredi is a Linux kernel hacker working for Red Hat. He has been interested in virtual filesystems for a long time, starting several open source projects including Filesystem in Userspace (FUSE) and the Overlay Filesystem. Prior to joining Red Hat, he worked at SUSE Labs and at Ericsson. Miklos currently lives in a small town near Budapest in Hungary with his family of six, twins being the latest addition.

        Speaker: Miklos Szeredi (Red Hat)
      • 09:10
        Designing the Git Virtual File System (GVFS) 40m

        We’ve built a virtual file system that enables the Windows team to work in a Git repository that is a few orders of magnitude larger than what Git was previously able to support. In this talk we’ll cover a high level overview of the scale challenges we faced with Git, how we designed our virtual file system on top of NTFS, and some of the difficulties we ran into while building a file system that is correct, lazy, and performant.
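The "correct, lazy" behaviour described above can be sketched in a few lines of Python (a conceptual analogy under assumed names, not actual GVFS code): the virtual tree knows the full file listing up front, but fetches file content from the object store only on first access, and serves later accesses from a local cache.

```python
# Conceptual sketch of lazy file hydration in a virtual file system
# (illustration only, not GVFS code; names are invented).
class LazyRepo:
    def __init__(self, listing, fetch):
        self.listing = set(listing)  # full tree listing, known up front
        self.fetch = fetch           # callable standing in for an object-store fetch
        self.cache = {}              # locally hydrated files
        self.fetches = 0             # count of remote fetches actually performed

    def open(self, path):
        if path not in self.listing:
            raise FileNotFoundError(path)
        if path not in self.cache:   # hydrate on first access only
            self.cache[path] = self.fetch(path)
            self.fetches += 1
        return self.cache[path]

repo = LazyRepo(["a.txt", "b.txt"], fetch=lambda p: f"contents of {p}")
repo.open("a.txt")
repo.open("a.txt")  # second open is served from the local cache
```

The point of the sketch is the correctness/laziness split: the listing must be complete so the tree looks correct, while content transfer is deferred until it is actually needed.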

        About the speaker:
        Saeed Noursalehi is on the Visual Studio Team Services team at Microsoft, focused on helping some very large teams within Microsoft migrate to Git. Among other things, this means solving some hard scale problems in Git, which is a great source of fun. He also enjoys rock climbing, road biking, and music.

        Speaker: Saeed Noursalehi (Microsoft)
    • 09:50 - 10:10
      Coffee Break 20m
    • 10:10 - 11:40
      Technology Outlook
      • 10:10
        Building Reproducible Science with Singularity Containers 40m

        One of the biggest problems in scientific HPC is ensuring that results are reproducible. That is, the code a scientist runs locally must be able to run identically on any computational resource. Until recently, the job of ensuring that fell to system administrators who needed to manage a complex web of tools and dependencies on those resources. However, with the introduction of HPC containers via Singularity, the ability to mobilize the compute environment has never been easier. Singularity allows anybody to run their own containers on HPC, ushering in a new era of computational mobility, validity, and reproducibility.

        About the speaker:
        Michael Bauer first began working with containers at GSI national lab in Darmstadt, Germany, in 2017 while taking a semester off of school at the University of Michigan. Michael met Greg Kurtzer, project lead of Singularity, during his time at GSI and he began contributing heavily to the Singularity project. At the start of summer 2017, Greg hired Michael to work at the Silicon Valley startup RStor, where he continued to work on the Singularity container technology. After 6 months at RStor, the Singularity team left RStor to create their own company, SyLabs, Inc., where Michael, Greg and several other developers now work full time on developing Singularity.

        Speaker: Michael Bauer
      • 10:50
        Tooling for Using Linux 40m

LinuxKit is a framework for building small, modular, immutable Linux systems that was open sourced last year by Docker. It came out of a different design process than CernVM but shares much of the same philosophy. This talk looks at similarities and differences, shows how to construct systems with LinuxKit, and discusses future developments. It will also cover containerd, the new container runtime that LinuxKit and Docker use, and container image distribution.

        About the speaker:
        Justin Cormack is a software engineer working for Docker in Cambridge, UK. He is a maintainer for Docker and LinuxKit, and works across the container ecosystem.

        Speaker: Justin Cormack (Docker)
    • 12:15 - 14:00
      Underground Visit to the ALICE Detector 1h 45m
    • 14:30 - 16:00
      Focused Topics: CDN and Data Distribution
      • 14:30
        Open HTC Content Delivery Network 20m
        Speaker: Dave Dykstra (Fermi National Accelerator Lab. (US))
      • 14:50
        XCache Overview 20m
        Speaker: Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US))
      • 15:10
        XCache in CernVM-FS 10m
        Speaker: Radu Popescu (CERN)
      • 15:20
        West-Life, Tools for Integrative Structural Biology 15m

Structural biology is the part of molecular biology that focuses on determining the structure of macromolecules inside living cells and cell membranes. As macromolecules determine most of the functions of cells, structural knowledge is very useful for further research in metabolism and physiology, with applications in pharmacology and beyond.

As macromolecules are too small to be observed directly with a light microscope, other methods are used to determine their structure, including nuclear magnetic resonance (NMR), X-ray crystallography, cryo-electron microscopy, and others. Each method has its advantages and disadvantages in terms of availability, sample preparation, and resolution.

The West-Life project has the ambition to facilitate an integrative approach using the multiple techniques mentioned above. As there are already many software tools to process the data produced by these techniques, the challenge is to integrate them in such a way that they can be used by experts in one technique who are not experts in the others.

One product of the West-Life project is a data management service, the virtual folder, which delivers a uniform way to integrate scattered data from different storage providers.

Another product is a virtual machine that allows users to launch specific software tools to process their data in the virtual folder.

CernVM, with the option to be launched with a graphical user interface, is used as a basic template to contextualize the virtual machine with additional structural biology software suites such as CCP4, Scipion and others. CernVM-FS is used to distribute updates of the structural biology software suites as well as of the West-Life specific services: the virtual folder and, newly, a repository.

        The virtual machine templates are available in EGI's APP DB as well as within STFC cloud computing infrastructure.

        Speaker: Tomas Kulhanek (STFC Daresbury Laboratory)
      • 15:35
        CernVM facilitates offline processing on the ATLAS HLT farm 10m
        Speaker: Frank Berghaus (University of Victoria (CA))
      • 15:45
        Containerized CernVM-FS Server 10m
        Speaker: Dan van der Ster (CERN)
    • 16:00 - 16:30
      Coffee Break 30m
    • 16:30 - 17:20
      Feedback from Users
      • 16:30
CVMFS Build and Release Pipeline Using Docker Microservices 15m

IceCube is a cubic-kilometer neutrino detector located at the South Pole. CVMFS is a key component of IceCube’s distributed high-throughput computing analytics workflow, sharing 500 GB of software across datacenters worldwide. Building the IceCube software suite across multiple platforms and deploying it into CVMFS has until recently been a manual, time-consuming task that does not fit well within an agile continuous-delivery framework.
        Within the last two years, a plethora of tooling around microservices has created an opportunity to upgrade the IceCube software build-and-deploy pipeline. We present a framework using Kubernetes to deploy Buildbot. The Buildbot pipeline is a set of pods (Docker containers) in the Kubernetes cluster that builds the IceCube software across multiple platforms, tests the new software for critical errors, syncs the software to a containerized CVMFS server, and finally executes a publish. The time from code commit to CVMFS publish has been greatly reduced, enabling nightly builds to be published to CVMFS.
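The commit-to-publish flow described above (build, test, sync to the CVMFS server, publish) can be sketched as a minimal sequential stage runner in Python. The stage names and toy stage functions below are illustrative stand-ins, not the actual Buildbot or Kubernetes configuration:

```python
# Minimal sketch of a sequential build-and-publish pipeline
# (hypothetical stage names; the real pipeline runs these as pods).
def run_pipeline(stages, commit):
    """Run each stage in order; abort on the first failure."""
    results = {}
    for name, stage in stages:
        ok, detail = stage(commit)
        results[name] = detail
        if not ok:            # e.g. a build or critical test error
            break             # later stages (sync, publish) never run
    return results

# Toy stages standing in for "build", "test", "sync to CVMFS", "publish".
stages = [
    ("build",   lambda c: (True, f"built {c}")),
    ("test",    lambda c: (True, "no critical errors")),
    ("sync",    lambda c: (True, "synced to cvmfs server")),
    ("publish", lambda c: (True, "published")),
]
results = run_pipeline(stages, "abc123")
```

The early-abort behaviour is the property the abstract relies on: a failed build or test means nothing reaches the CVMFS server, so only validated software is ever published.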

Speaker: Heath Skarlupka (University of Wisconsin Madison)
      • 16:45
        cernatschool.org's use of CVMFS and the CernVM 15m

cernatschool.org is a very small Virtual Organisation made up of secondary school and university students and participating organisations in the Institute for Research in Schools.

We use CVMFS to deploy dependencies, and Python 3 itself, for custom software used for analysing radiation data from Medipix detectors. This software is designed to run on GridPP worker nodes, part of the UK-based distributed computing grid.

The cernatschool.org VO also uses the CernVM for job submission and interacting with the grid. The current use of both CVMFS and the CernVM is to facilitate analysis of 3 years' worth of data from the LUCID payload on TechDemoSat-1.

In the future, the CernVM looks like it could be particularly useful as a standard system for students to program and analyse data with, giving easy access to any software they might need (not necessarily using GridPP compute resources at all).

        Speaker: Will Furnell
      • 17:00
        CernVM-FS for Data 20m
        Speakers: Brian Paul Bockelman (University of Nebraska Lincoln (US)), Derek John Weitzel (University of Nebraska Lincoln (US))
    • 17:20 - 18:00
      Focused Topics: Cloud Computing
    • 18:00 - 18:30
      Open Session and Closing 30m