DOMA LHCC Review Prep - Storage

Europe/Zurich
Description

Second of two sessions inviting feedback on the DOMA LHCC review doc.

This session focuses on the storage section, including CTA, dCache, Echo, EOS, StoRM and XRootD.

Please use the Indico-integrated Zoom link (not the one distributed previously by email).

Further information - https://twiki.cern.ch/twiki/bin/view/LCG/LHCC2021

Videoconference
DOMA LHCC Review Prep - Storage
Zoom Meeting ID
67430401964
Host
Mario Lassnig
    • 16:00 → 16:15
      Presentation of review doc 15m
      Speakers: Mario Lassnig (CERN), Oliver Keeble (CERN)
    • 16:15 → 16:30
      ATLAS requirements and priorities 15m
      Speaker: David Cameron (University of Oslo (NO))
    • 16:30 → 16:45
      CMS requirements and priorities 15m
      Speakers: Danilo Piparo (CERN), James Robert Letts (Univ. of California San Diego (US))
    • 16:45 → 17:05
      Other input - ALICE, LHCb, facilities - TBC 20m
      Speaker: Reda Tafirout (TRIUMF (CA))

      From INFN-T1:

      Minor comment: in the XRootD description (4.2.1) it is worth also mentioning GPFS as a "kind of storage" and GEMSS as "tape access": "Numerous production-quality plug-ins have been developed over the years to address community needs such as xroot[s] and http[s] protocols, popular authentication and authorisation mechanisms (Macaroons, SciTokens, x509), interfaces to various kinds of storage (Ceph, DPM, EOS, GPFS, HDFS, Lustre, Unix), commonly used checksumming, and tape access (CTA, GEMSS and HPSS)." Cheers, Vladimir
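
      As a purely illustrative sketch of the client side of the protocol, storage and checksumming plug-ins listed above, the snippet below uses the XRootD Python bindings; the endpoint and file path are made-up placeholders and the checksum query only works if the server has a checksum plug-in configured.

      from XRootD import client
      from XRootD.client.flags import OpenFlags, QueryCode

      # Hypothetical endpoint and path, for illustration only.
      ENDPOINT = "root://xrootd.example.org:1094"
      PATH = "/store/user/example/file.root"

      fs = client.FileSystem(ENDPOINT)

      # The stat goes through whatever storage plug-in backs the namespace
      # (Ceph, GPFS, HDFS, ...); the client does not need to know which.
      status, info = fs.stat(PATH)
      if status.ok:
          print("size:", info.size)

      # Checksum query, assuming the server enables it.
      status, checksum = fs.query(QueryCode.CHECKSUM, PATH)
      if status.ok:
          print("checksum:", checksum)

      # Read the first kilobyte over the xroot protocol.
      with client.File() as f:
          f.open(ENDPOINT + "/" + PATH, OpenFlags.READ)
          status, data = f.read(offset=0, size=1024)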


      From Frank:


      Both UCSD and Caltech are in the middle of their respective switches from HDFS to Ceph. Caltech is probably more advanced than UCSD.
      But both will be Ceph sites by the end of the summer.

      At UCSD, we are also changing the overall structure from disks in worker nodes to large fileservers with 102 HDDs per server.
      So the fundamentals change in many ways.

      The change is to benefit from erasure coding, and because I’m sick and tired of corruption due to the large entropy in our HDFS setup.
      A single 2 GB file is spread across too many physical devices, and as a result, 2 systems going down results in thousands of corrupted files.

      We never had the chance to fix this architecture in HDFS. So we are changing it now, as we move to Ceph.
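
      A back-of-the-envelope sketch of the failure mode described above; every number (block size, replication factor, node count) is an assumption for illustration, not UCSD's actual configuration.

      from math import comb

      file_size_mb  = 2048   # a single 2 GB file
      block_size_mb = 128    # assumed HDFS block size
      replication   = 2      # assumed replication factor
      nodes         = 30     # assumed number of datanodes
      failed        = 2      # two systems go down

      blocks = file_size_mb // block_size_mb   # 16 blocks per file

      # Probability that every replica of one given block sits on the failed
      # nodes, assuming replicas land on distinct, uniformly chosen nodes.
      p_block_lost = comb(failed, replication) / comb(nodes, replication)

      # A file is corrupted if any one of its blocks is lost.
      p_file_lost = 1 - (1 - p_block_lost) ** blocks
      print(f"blocks per file: {blocks}, P(file corrupted) ~ {p_file_lost:.1%}")
      # With these assumed numbers a few percent of all files lose a block,
      # i.e. thousands of corrupted files on a site with O(100k) files,
      # which is the stated motivation for erasure-coded pools on a small
      # number of large fileservers.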


    • 17:05 → 18:00
      Discussion 55m