Site has been running well
Testing CVMFS 2.2.1
- Installed on all nodes
- So far no problems seen
- CVMFS 2.2.2 will be released soon (bug fixes for server, no client changes)
Disk pledge
- dCache
- Found dark space in dCache (space not allocated to any token)
- Found dark space on backing store pools (fixed by letting dCache autosize pools)
- Added two new servers with 260TB
- Time sync problem on dCache head nodes prevented allocation of free space to tokens
- Net of 750TB added to DATADISK
- Brings MWT2 up to 2015 pledge of 3300TiB on DATADISK, GROUPDISK, USERDISK
- To meet our remaining 2016 pledge of 4500TiB, we are attacking on three fronts:
- Bringing up S3 object store on Ceph system (can be 1200TiB on day one; see the boto3 sketch after this list)
- Add RBD block devices on Ceph to be used by dCache (see the sketch after this list)
- Appears as a disk device which dCache uses as a pool
- Can immediately add to all space tokens
- Use all dCache doors (srm, webdav, xrootd)
- Performance needs to be monitored
- Future - dCache will directly support Ceph objects
- Migrate all space tokens except DATADISK from dCache to Ceph
- DATADISK will then occupy all 3654TiB of dCache space
- GROUPDISK (812TiB) and USERDISK (400TiB) on Ceph will put us over pledge (3654 + 812 + 400 = 4866TiB vs. the 4500TiB pledged)
- As dCache RBD pools come up or space tokens migrate, the S3 allocation can be reduced
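For the S3 front, a minimal smoke test against the Ceph RADOS Gateway might look like the boto3 sketch below; the endpoint URL, credentials, and bucket name are placeholders, not actual site values.

```python
# Smoke test of an S3 endpoint on Ceph (RADOS Gateway) using boto3.
# Endpoint URL and credentials are placeholders, not MWT2 values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ceph-s3.example.org",  # hypothetical RGW endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Round-trip a small object: create bucket, write, read back.
s3.create_bucket(Bucket="smoke-test")
s3.put_object(Bucket="smoke-test", Key="hello.txt", Body=b"hello ceph")
body = s3.get_object(Bucket="smoke-test", Key="hello.txt")["Body"].read()
assert body == b"hello ceph"
```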
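For the RBD front, the sketch below uses the python-rados/python-rbd bindings to carve a block image out of a Ceph pool; the pool name, image name, and size are hypothetical. Once mapped (rbd map) and formatted on a pool node, the image appears as an ordinary disk that dCache can use as a pool.

```python
# Create an RBD image that a dCache pool node could map, format, and
# mount as pool space. Pool and image names here are hypothetical.
import rados
import rbd

TIB = 1024 ** 4  # bytes per TiB

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("dcache-pools")  # hypothetical Ceph pool
    try:
        # 100 TiB image; dCache sees it as one backing-store filesystem.
        rbd.RBD().create(ioctx, "dcache-pool-01", 100 * TIB)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```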
dCache to Ceph migration
- The plan to migrate a space token from dCache to Ceph (see the gfal2 sketch below):
- Bestman SRM server (ceph-srm.mwt2.org:8443/srm/v2/server?SFN=)
- gfal-sync to synchronize the backing store copy of the space token on dCache to a copy on Ceph
- Disable the current space token
- Final sync
- Enable the space token with the new SRM server
- Still need webdav and xrootd doors on Ceph
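A rough sketch of the per-file copy behind the sync steps, using the gfal2 Python bindings rather than gfal-sync itself; the source endpoint and file paths are placeholders, while the destination follows the Bestman URL form given above.

```python
# Copy one file from the dCache SRM to the Bestman SRM in front of Ceph
# with the gfal2 bindings. Source host and paths are placeholders.
import gfal2

SRC = "srm://dcache.example.org:8443/srm/managerv2?SFN=/pnfs/example/groupdisk/file1"
DST = "srm://ceph-srm.mwt2.org:8443/srm/v2/server?SFN=/groupdisk/file1"

ctx = gfal2.creat_context()        # note: 'creat_context' is the API name
params = ctx.transfer_parameters()
params.overwrite = False           # skip files already synchronized
params.checksum_check = True       # verify the replica against the source

ctx.filecopy(params, SRC, DST)     # raises gfal2.GError on failure
```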