Ceph/CVMFS/Filer Service Meeting

Europe/Zurich
600/R-001 (CERN)

    • 14:00 14:05
      CVMFS 5m
      Speaker: Enrico Bocchi (CERN)

      Enrico:

      • Re-signing repository whitelists with the YubiKey from Tuesday to Thursday (OTG0053606); re-sign sketch after this list
        • Approx. 10 repositories per day
        • Big repositories left untouched for now
      • atlas.cern.ch volume extension failed
        • Attempts to fix it resulted in a corrupted partition table
        • Data has been replicated via zfs send/recv and via cvmfs_snapshot to two independent 8 TB volumes
        • Will coordinate with repo owners to do the switch-over
      • cms-ib requires execution of `cvmfs_server eliminate-hardlinks` due to the migration to CC7 (invocation sketch after this list)
        • Needs to walk the whole file catalog from the root -- Can be time-consuming
        • The first attempt failed (inodes limit?)
        • Second attempt scheduled for tomorrow (Tue Dec 3)
      • cms.cern.ch needs to be migrated to CC7
        • `cvmfs_server eliminate-hardlinks` is blocking
      • projects.cern.ch accessible from CERN only
        • The S3 endpoint now also returns 403 Forbidden to non-CERN IPs
      • ams.cern.ch unable to complete transaction due to root full
        • On CC7, the spooling area is on the root partition (hypervisor SSD)
        • Only ~150 GB usable on the root partition, which serves as the spool area
        • Currently, a 1 TB volume is attached to let the transaction go through (it failed over the weekend due to AFS)
        • Needs a debriefing to understand why they run such huge transactions
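
      A minimal sketch of the per-repository re-sign, assuming the stock `cvmfs_server resign` subcommand is used and the YubiKey-held master key is available on the release manager; the repository names are illustrative, not the actual batch:

      ```bash
      # Re-sign the whitelist of roughly 10 repositories per day (names illustrative).
      for repo in alice.cern.ch lhcb.cern.ch na62.cern.ch; do
          cvmfs_server resign "$repo"
      done
      ```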
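
      A sketch of the cms-ib hardlink elimination, assuming `cvmfs_server eliminate-hardlinks` takes the fully qualified repository name like other `cvmfs_server` subcommands:

      ```bash
      # Break up hardlinks ahead of the CC7 migration; the walk starts at the root
      # catalog, so expect a long runtime on a repository the size of cms-ib.
      cvmfs_server eliminate-hardlinks cms-ib.cern.ch
      ```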
    • 14:05 14:10
      Ceph Upstream News 5m

      Releases, Tickets, Testing, Board, ...

      Speaker: Dan van der Ster (CERN)

      Mimic 13.2.7 released.

      • Notable new feature is slow ping detection -- the usual OSD heartbeats only trigger "osd failed" messages when an OSD stops pinging entirely. Now Ceph also generates a health warning if the ping time is longer than 5% of the heartbeat timeout (see mon_warn_on_slow_ping_time, mon_warn_on_slow_ping_ratio; tuning sketch after this list).
      • Default bluefs allocator changed from "stupid" to "bitmap" -- this gives consistent object create latency at the cost of ~100 MB of RAM.
      • MDS now has the gradual cap recall that Teo backported.
      • Planning to upgrade ceph/dwight to 13.2.7 this week, then assess whether the other clusters should upgrade directly to Nautilus or go to 13.2.7 first.
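
      A hedged sketch of tuning the new warning via the centralized config available in Mimic; the values are illustrative, and the absolute threshold is assumed to be in milliseconds:

      ```bash
      # Warn when OSD ping times exceed 5% of the heartbeat timeout (the default ratio).
      ceph config set global mon_warn_on_slow_ping_ratio 0.05
      # Or set an absolute threshold instead (assumed to override the ratio when non-zero).
      ceph config set global mon_warn_on_slow_ping_time 50
      ```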
    • 14:10 14:15
      Ceph Backends & Block Storage 5m

      Cluster upgrades, capacity changes, rebalancing, ...
      News from OpenStack block storage.

      Speaker: Theofilos Mouratidis (National and Kapodistrian University of Athens (GR))

      From Dan:

      • ceph/erin had one PG inactive (from a test pool, so not critical). The pool had size=2, min_size=2, which is a misconfiguration: with min_size equal to size, a single OSD outage leaves the PG unable to activate. Set min_size=1 so it could activate, then set size=3, min_size=2 as the permanent fix (commands sketched after this list).
      • ceph/beesly/osd/critical had multiple failures this morning, and loadavg was much higher than usual. p05798818b00174 is particularly bad: ssh is not working (but mco is working). Maybe related to user activity -- still investigating.
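
      A sketch of the erin fix described above; the pool name is illustrative:

      ```bash
      # size=2,min_size=2 cannot tolerate a single OSD outage, hence the inactive PG.
      ceph osd pool set testpool min_size 1   # let the PG activate again
      ceph osd pool set testpool size 3       # then move to 3 replicas
      ceph osd pool set testpool min_size 2   # with a min_size that still tolerates one failure
      ```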

      Theo:

      • ceph/erin: the second-to-last rack is being reformatted; waiting for a ticket about a disk failure to be resolved so the newly formatted rack can enter the cluster.
    • 14:15 14:20
      Ceph Disk Management 5m

      OSD Replacements, Liaison with CF, Failure Predictions

      Speaker: Julien Collet (CERN)
    • 14:20 14:25
      S3 5m

      Ops, Use-cases (backup, DB), ...

      Speakers: Julien Collet (CERN), Roberto Valverde Cameselle (Universidad de Oviedo (ES))

      Giuliano:

      • IP-based access restriction using bucket policies for cvmfs (see CEPH-786; policy sketch after this list)
      • Accounting:
        • script in acron
        • ongoing work to translate the script output to match Hugo's requirements
      • rgw-atlas:
        • Investigation in progress; there were OOM kills on the atlas RGW VMs between 08:30 and 10:00
          • "Dec  2 08:33:22 cephgabe-rgwxl-f250ed5924 kernel: traefik invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0"

        • 2 nodes became unreachable as a result
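
      A hedged sketch of the kind of IP-restricting bucket policy involved (CEPH-786), assuming it is applied with s3cmd; the bucket name and CIDR range are illustrative, not the production values:

      ```bash
      # Illustrative policy: deny object reads from outside the listed source range.
      policy='{
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Deny",
          "Principal": "*",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::cvmfs-example/*",
          "Condition": { "NotIpAddress": { "aws:SourceIp": ["188.184.0.0/15"] } }
        }]
      }'
      printf '%s\n' "$policy" > policy.json
      s3cmd setpolicy policy.json s3://cvmfs-example
      ```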
    • 14:25 14:30
      CephFS/HPC/FILER/Manila 5m

      Filer Migration, CephFS/Manila, HPC status and plans.

      Speakers: Dan van der Ster (CERN), Pablo Llopis Sanmillan (CERN)

      FILER (dan):

      • https://its.cern.ch/jira/browse/AI-5617 brought in a new puppet-nfs module which is supported going forward (el8, etc.). It required some minor changes in hg_filer to support it.
      • FILER-120: All filers in the critical power barn will need to be recreated next year. The hardware is being decommissioned, and because the filers are on the LCG network, migration is not possible.

      CephFS (HPC - jim)

      • The HPC team asked whether the MDSs in jim are busy and, if not, whether their number can be reduced (to get a worker node back into the cluster). We found that one should be sufficient, so max_mds was changed to 1.
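
      A sketch of the change, assuming the filesystem in the jim cluster is simply named cephfs:

      ```bash
      # Reduce the active MDS count to a single rank so a node can return to the worker pool.
      ceph fs set cephfs max_mds 1
      ```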
    • 14:30 14:35
      HyperConverged 5m
      Speakers: Jose Castro Leon (CERN), Julien Collet (CERN), Roberto Valverde Cameselle (Universidad de Oviedo (ES))

      Kopano (from Dan):

      • CephFS is being used for three times the expected use cases (and space): attachments, the backup staging area, and folder indices. Need to review with CDA the expected space usage of all of them (and review the individual share quotas on ceph/kelly).
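
      A hedged sketch of how the per-share quotas on ceph/kelly could be reviewed, assuming the shares are mounted on an admin node; the mount path and the 500 GB value are illustrative:

      ```bash
      # CephFS directory quotas live in extended attributes on the share root.
      getfattr -n ceph.quota.max_bytes /mnt/kelly/kopano-attachments
      # Adjust a quota (value in bytes; 500 GB here is illustrative):
      setfattr -n ceph.quota.max_bytes -v 500000000000 /mnt/kelly/kopano-attachments
      ```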
    • 14:35 14:40
      Monitoring 5m
    • 14:40 14:45
      AOB 5m