HEP CEPH Discussion

Europe/Zurich
CERN

    • 16:00 - 16:20
      RAL Update 20m
      Speaker: Alastair Dewhurst (STFC - Rutherford Appleton Lab. (GB))
    • 16:20 - 16:40
      Recent Conferences 20m

      Recent Ceph talks at CHEP and HEPiX:

      1) CERN's Ceph infrastructure: OpenStack, NFS, CVMFS, CASTOR, and more!
      https://indico.cern.ch/event/505613/contributions/2230907/

      2) Evolution of the Ceph Based Storage Systems at the RACF.
      https://indico.cern.ch/event/505613/contributions/2230970/
      https://indico.cern.ch/event/531810/contributions/2302099/ (HEPiX talk)

      3) OSiRIS: A Distributed Ceph Deployment Using Software Defined Networking for Multi-Institutional Research
      https://indico.cern.ch/event/505613/contributions/2230915/
      https://indico.cern.ch/event/531810/contributions/2326471/ (HEPiX talk)

      4) CEPHFS: a new generation storage platform for Australian high energy physics
      https://indico.cern.ch/event/505613/contributions/2230911/
      https://indico.cern.ch/event/531810/contributions/2309925/ (HEPiX talk)

      5) dCache on steroids - delegated storage solutions
      https://indico.cern.ch/event/505613/contributions/2230914/
      https://indico.cern.ch/event/531810/contributions/2321492/ (HEPiX talk)

      6) The deployment of a large scale object store at the RAL Tier 1
      https://indico.cern.ch/event/505613/contributions/2230932/
      https://indico.cern.ch/event/531810/contributions/2298934/ (HEPiX talk)

      7) Achieving Cost/Performance Balance Ratio Using Tiered Storage Caching Techniques: A Case Study with CephFS
      https://indico.cern.ch/event/505613/contributions/2230922/

      OpenStack Summit in Barcelona

      1. Cinder HA & Volume Replication
       - Until recently, Cinder could not run safely in an HA setup. Here at CERN we ran multiple Cinder nodes anyway, but apparently this was unsafe. HA support is now in the latest release.
       - Cinder now supports volume replication (mirroring an RBD image across the WAN to a second Ceph pool/cluster). For Ceph this is achieved with the rbd-mirror daemon, which replays an image's journal on the remote cluster; a minimal sketch follows after this list.
       - See details in this video: https://www.youtube.com/watch?v=VjQ6D4IZMBk

      2. Ceph status talk:
       - Sage presented various news regarding the upcoming Kraken/Luminous releases.
       - Notable features beyond BlueStore: EC overwrites, ceph-mgr, native QoS, an ordered persistent writeback cache for RBD, and RGW metadata indexing in Elasticsearch (to quickly query object metadata; a hedged query sketch follows after this list).
       - Presentation here: https://www.youtube.com/watch?v=WgMabG5f9IM

      3. "Hyperconverged" compute/storage on same nodes
       - It seems that the general trend for building clouds is now to put the Compute and Storage on the same nodes. In practise this is done using kvm/ceph in containers, AFAICT.
       - Dreamhost presented their Ceph/Neutron work, and how their cloud is now hyperconverged: https://www.youtube.com/watch?v=pKhZpBc9srA
       - ATT/Canonical gave a short talk about this concept: https://www.youtube.com/watch?v=669pg4V3Q0E


      And there were quite a few other Ceph talks, which you can find here: https://www.youtube.com/user/OpenStackFoundation/videos
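
      To make the rbd-mirror mechanism from the Cinder replication item above concrete, here is a minimal sketch using the python-rbd bindings: it sets the pool's mirror mode, turns on the journaling feature (which rbd-mirror replays on the peer cluster) for one image, and enables mirroring for that image. The pool name 'volumes' and image name 'volume-0001' are hypothetical, and the peer-cluster registration plus the rbd-mirror daemon itself are assumed to be configured separately.

        # Sketch only: pool/image names are hypothetical; peering and the
        # rbd-mirror daemon are assumed to be set up out of band.
        import rados
        import rbd

        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
        cluster.connect()
        try:
            ioctx = cluster.open_ioctx('volumes')        # hypothetical pool
            try:
                # Mirror only explicitly enabled images in this pool.
                rbd.RBD().mirror_mode_set(ioctx, rbd.RBD_MIRROR_MODE_IMAGE)
                image = rbd.Image(ioctx, 'volume-0001')  # hypothetical image
                try:
                    # Journaling (and exclusive-lock, which it requires) must
                    # be on: rbd-mirror replays this journal remotely.
                    want = (rbd.RBD_FEATURE_EXCLUSIVE_LOCK |
                            rbd.RBD_FEATURE_JOURNALING)
                    missing = want & ~image.features()
                    if missing:
                        image.update_features(missing, True)
                    image.mirror_image_enable()
                finally:
                    image.close()
            finally:
                ioctx.close()
        finally:
            cluster.shutdown()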
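
      Regarding the RGW/Elasticsearch feature mentioned in the Ceph status talk: the idea is that RGW syncs object metadata into an Elasticsearch index, which can then be searched with ordinary Elasticsearch queries. A hedged sketch follows, assuming a hypothetical index name 'rgw-object-metadata' and hypothetical field names 'bucket' and 'meta.custom.project'; the real document schema is defined by the RGW sync module.

        # Sketch only: index name and field names are hypothetical; the
        # actual schema comes from the RGW Elasticsearch sync module.
        import json
        import requests

        ES_URL = 'http://es-host:9200/rgw-object-metadata/_search'

        query = {
            'size': 10,
            'query': {
                'bool': {
                    'must': [
                        {'term': {'bucket': 'atlas-logs'}},
                        {'term': {'meta.custom.project': 'reco-2016'}},
                    ]
                }
            }
        }

        resp = requests.post(ES_URL, data=json.dumps(query),
                             headers={'Content-Type': 'application/json'})
        resp.raise_for_status()
        for hit in resp.json()['hits']['hits']:
            print(hit['_source'])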

    • 16:40 - 17:00
      Discussion 20m
      Speakers: Alastair Dewhurst (STFC - Rutherford Appleton Lab. (GB)), Dan van der Ster (CERN), Hironori Ito (Brookhaven National Laboratory (US))