Starting ganga.cern.ch at Sun May 24 07:32:01 CEST 2020
CernVM-FS: replicating from http://cvmfs-stratum-zero.cern.ch/cvmfs/ganga.cern.ch
CernVM-FS: using public key(s) /etc/cvmfs/keys/cern.ch/cern.ch.pub, /etc/cvmfs/keys/cern.ch/cern-it1.cern.ch.pub, /etc/cvmfs/keys/cern.ch/cern-it2.cern.ch.pub, /etc/cvmfs/keys/cern.ch/cern-it4.cern.ch.pub, /etc/cvmfs/keys/cern.ch/cern-it5.cern.ch.pub
Failed to contact stratum 0 server (9 - host returned HTTP error)
ERROR from cvmfs_server
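A quick way to see whether the stratum 0 is actually reachable before retrying the replication is to fetch the repository manifest by hand; this is only a diagnostic sketch, with the URL taken from the log above:

    # the .cvmfspublished manifest must be fetchable for replication to work
    curl -fsI http://cvmfs-stratum-zero.cern.ch/cvmfs/ganga.cern.ch/.cvmfspublished
    # if that succeeds, retry the replication on the stratum 1
    cvmfs_server snapshot ganga.cern.ch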
- mds cache trim threshold: 200000 # default 65536. trim LRU space more quickly
+ mds cache trim threshold: 400000 # default 65536. trim LRU space more quickly
- mds max caps per client: 20000 # default 1 million. Limits memory consumption of single clients.
+ mds max caps per client: 100000 # default 1 million. Limits memory consumption of single clients.
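For reference, the same change can also be pushed to running MDS daemons through the centralized configuration database instead of editing files and restarting; a sketch, assuming a Mimic-or-later cluster (the daemon name used for verification is a placeholder):

    ceph config set mds mds_cache_trim_threshold 400000
    ceph config set mds mds_max_caps_per_client  100000
    # verify what a running MDS actually picked up (daemon name is a placeholder)
    ceph config show mds.cephmds01 | grep -E 'trim_threshold|caps_per_client'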
CEPH-573: Test setup of NFS Ganesha over CephFS:
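No configuration was recorded with the ticket title, so as a placeholder here is a minimal sketch of what a Ganesha-over-CephFS export could look like; the export path, pseudo-root, cephx user and hostname are assumptions, not values from CEPH-573:

    # /etc/ganesha/ganesha.conf (fragment) -- values are placeholders
    EXPORT {
        Export_ID   = 100;
        Path        = "/";            # CephFS path to export
        Pseudo      = "/cephfs";      # NFSv4 pseudo-root seen by clients
        Access_Type = RW;
        Protocols   = 4;
        FSAL {
            Name    = CEPH;           # CephFS FSAL
            User_Id = "nfs-ganesha";  # cephx client (assumed)
        }
    }

    # client-side smoke test after restarting nfs-ganesha
    mount -t nfs -o vers=4.1 ganesha-host:/cephfs /mnt/cephfs-nfs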
I apologize for the delay in the updates. After discussion with backline engineering, the feedback is that the /boot partition must be created on a partition outside of the RAID array; an EFI System Partition on RAID is likewise not supported. If the RAID is broken or not functioning as expected, this may lead to inconsistencies in the bootloader and the server may fail to boot. The recommendation is therefore to proceed with your RAID scheme for all mount points except _/boot_ and _/boot/efi_, which need to be on separate, non-RAID partitions. I apologize for the delay and the less positive note, but let me know if there are any additional queries I can assist with.
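Translated into a concrete layout, that recommendation amounts to something like the following; disk names, partition sizes and the array name are illustrative only, not values from the case:

    # first disk: ESP and /boot as plain partitions (not mirrored), rest for RAID
    sgdisk -n1:0:+600M -t1:ef00 -c1:"EFI"  /dev/sda
    sgdisk -n2:0:+1G   -t2:8300 -c2:"boot" /dev/sda
    sgdisk -n3:0:0     -t3:fd00 -c3:"raid" /dev/sda
    # second disk: identical layout (partitions 1 and 2 simply stay unused)
    sgdisk -n1:0:+600M -t1:ef00 /dev/sdb
    sgdisk -n2:0:+1G   -t2:8300 /dev/sdb
    sgdisk -n3:0:0     -t3:fd00 /dev/sdb
    # everything except /boot and /boot/efi goes on the mirror
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3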
If EFI system partitions on RAID are not supported, then why does anaconda create /boot/efi as a RAID1 with metadata=1.0? Anaconda is doing the right thing here, surely not by accident: https://github.com/rhinstaller/anaconda/blob/master/pyanaconda/modules/storage/platform.py#L145
If anaconda is going to be updated to explicitly not support /boot/efi on RAID1, would Red Hat then consider fixing their kernel/grub tooling to support two boot disks?
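For context, what anaconda sets up is roughly the following; metadata 1.0 keeps the md superblock at the end of each member, so firmware that knows nothing about md still sees a plain FAT filesystem on either disk (device names below are hypothetical):

    mdadm --create /dev/md/ESP --level=1 --metadata=1.0 --raid-devices=2 /dev/sda1 /dev/sdb1
    mkfs.vfat /dev/md/ESP
    # each member on its own also looks like a valid FAT filesystem to the firmware
    blkid /dev/sda1 /dev/sdb1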
Another thing: we had assumed that the weekly raid-check cron job would let us know if the two disks get out of sync (e.g. following an external write during boot).
However, it seems that /usr/sbin/raid-check simply ignores mismatches for raid1 and raid10! We're not sure whether the false-positive scenario that presumably motivates this (spurious mismatch counts on raid1/raid10 from in-flight writes) applies to /boot/efi, since it is so rarely written to on a running system. Would it be safe for us to monitor mismatch_cnt ourselves?
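A minimal sketch of monitoring mismatch_cnt by hand, using the same sysfs interface that raid-check drives; the array name below is a placeholder for whichever md device backs /boot/efi:

    MD=md126                                           # placeholder array name
    echo check > /sys/block/$MD/md/sync_action         # trigger a read-and-compare pass
    while [ "$(cat /sys/block/$MD/md/sync_action)" != "idle" ]; do sleep 10; done
    cat /sys/block/$MD/md/mismatch_cnt                 # non-zero => the members differ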