25–29 Mar 2019
SDSC Auditorium
America/Los_Angeles timezone

Session

Storage & Filesystems

25 Mar 2019, 15:45
E-B 212 (SDSC Auditorium)

10100 Hopkins Drive La Jolla, CA 92093-0505

  1. Mr Michael Meffie (Sine Nomine)
    25/03/2019, 15:45
    Storage & Filesystems

    A report from the OpenAFS Release Team on recent OpenAFS releases and development branch updates. Topics include acknowledgement of contributors, descriptions of issues fixed, updates for new versions of Linux and Solaris, changes currently under review, and an update on the new RXGK security class for improved security.

  2. Dr Shunxing Bao (Vanderbilt University)
    25/03/2019, 16:10
    Storage & Filesystems

    Logistical Storage (LStore) provides a flexible logistical networking storage framework for distributed and scalable access to data in both HPC and WAN environments. LStore uses commodity hard drives to provide unlimited storage with user-controllable fault tolerance and reliability. In this talk, we will briefly review LStore's features and discuss the newly developed native LStore plugin...

  3. Rob Appleyard (STFC)
    25/03/2019, 16:35
    Storage & Filesystems

    RAL's Ceph-based Echo storage system is now the primary disk storage system running at the Tier 1, replacing a legacy CASTOR system that will be retained for tape. This talk will give an update on Echo's recent development, in particular the adaptations needed to support the ALICE experiment and the challenges of scaling an erasure-coded Ceph cluster past the 30 PB mark. These include the...

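The capacity tradeoff behind running an erasure-coded cluster at this scale can be sketched with simple arithmetic. The 8+3 profile below is an illustrative assumption for a wide EC layout, not necessarily Echo's actual configuration:

```python
# Raw storage consumed per logical byte for a k+m erasure-coding profile
# is (k + m) / k; triple replication can be expressed as the 1+2 case.

def raw_per_logical(k: int, m: int) -> float:
    """Raw capacity consumed per byte of logical data for a k+m EC profile."""
    return (k + m) / k

ec = raw_per_logical(8, 3)   # 1.375x raw per logical byte
rep = raw_per_logical(1, 2)  # 3.0x for triple replication

# Usable fraction of a 30 PB raw cluster under each scheme:
print(f"EC 8+3:  {ec:.3f}x raw, ~{30 / ec:.1f} PB usable of 30 PB raw")
print(f"3x rep:  {rep:.1f}x raw,  ~{30 / rep:.1f} PB usable of 30 PB raw")
```

An 8+3 profile tolerates three simultaneous chunk losses, like triple replication, while consuming less than half the raw capacity; the cost is paid in reconstruction traffic and wider placement groups, which is where the scaling challenges mentioned in the abstract arise.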
  4. Marcus Ebert (University of Victoria)
    27/03/2019, 09:00
    Storage & Filesystems

    We describe our experience with the Dynafed data federator used with cloud and traditional Grid computing resources as a substitute for a traditional Grid SE.
    This is an update of the report given at the Fall 2017 HEPiX meeting, where we introduced our use case for such a federation and described our initial experience with it.
    We have used Dynafed in production for Belle-II since late 2017 and...

  5. Benjeman Jay Meekhof (University of Michigan (US))
    27/03/2019, 09:25
    Storage & Filesystems

    OSiRIS is a pilot project funded by the NSF to evaluate a
    software-defined storage infrastructure for our primary Michigan
    research universities and beyond. In the HEP world OSiRIS is involved
    with ATLAS as a provider of Event Service storage via the S3 protocol
    as well as experimenting with dCache backend storage for AGLT2. We
    are also in the very early stages of working with IceCube and...

  6. Jan Erik Sundermann (Karlsruhe Institute of Technology (KIT))
    27/03/2019, 09:50
    Storage & Filesystems

    The computing center GridKa serves the ALICE, ATLAS, CMS and
    LHCb experiments as a Tier-1 center with compute and storage resources.
    It is operated by the Steinbuch Centre for Computing at Karlsruhe Institute
    of Technology in Germany. In its current stage of expansion GridKa
    offers the HEP experiments a capacity of 35 Petabytes of online storage.
    The storage system is based on Spectrum...

  7. Petr Vokac (Czech Technical University (CZ))
    27/03/2019, 10:15
    Storage & Filesystems

    DPM (Disk Pool Manager) is a multi-protocol distributed storage system that can easily be used within a grid environment and is still popular with medium-sized sites. Currently, DPM can be configured to run in legacy or DOME mode, but official support for the legacy flavour ends this summer, and sites using DPM storage should think about their upgrade strategy or coordinate with the WLCG DPM Upgrade...

  8. Enrico Bocchi (CERN)
    27/03/2019, 11:10
    Storage & Filesystems

    The Storage group of the CERN IT department is responsible for the development and operation of petabyte-scale services needed to accommodate the diverse requirements for storing physics data generated by LHC and non-LHC experiments, as well as for supporting users of the laboratory in their day-to-day activities.

    This contribution presents the current operational status of the main storage...

  9. Robert Hancock (Brookhaven National Laboratory)
    27/03/2019, 11:35
    Storage & Filesystems

    Brookhaven National Laboratory stores and processes large amounts of data from PHENIX, STAR, ATLAS, Belle II, and Simons, as well as from smaller local projects. This data is stored long-term in tape libraries, while working data is stored in disk arrays. Hardware RAID devices from companies such as Hitachi Vantara are very convenient and require minimal administrative intervention...

  10. Mr Andy Watson (WekaIO)
    28/03/2019, 14:00
    Storage & Filesystems

    In November 2018, running on a mere half-rack of ordinary SuperMicro servers, WekaIO's Matrix Filesystem outperformed 40 racks of specialty hardware on Oak Ridge National Laboratory's Summit system, yielding the #1-ranked result for the IO-500 10-Node Challenge. How can that even be possible?

    This level of performance becomes important for modern use cases whether they involve GPU-accelerated...
