21/02: Sheffield will have to reduce to 150TB, but has 800 job slots. This is a good candidate for going diskless (no buffer to start with), accessing data from Lancaster and/or Manchester.
    Birmingham should have 600 TB (700-800TB next year). We can use this! It is large enough for a DATADISK.
    Agreed to discuss with ATLAS at the Sites Jamboree (5-8 March). We can then bring it back to GridPP.
    28/02: Brian: We need more experience from monitoring at Birmingham and Manchester.
    14/03: From Jamboree:
    Bham: Mark has agreed to set up XCache at Bham
    Shef: When Mark switches to XCache, Sheffield can use Manchester storage
    Cam and Sussex: Dan is happy for Cam and Sussex to use QM storage
    Brunel: Dan is not sure about Brunel. Elena will ask Raul
    LOCALGROUPDISK at diskless sites: Elena will send email to ATLAS
    21/03: Elena (email):
        3. Cedric says "diskless sites can keep LOCALGROUPDISK [...] if the UK cloud takes care of contacting the sites if they have problems."
    Decided we want to keep LOCALGROUPDISK for the time being.
    Dan: will this go away? There is no GridPP funding, so it will only be kept long term if sites choose to fund it.
    Sam: LOCALGROUPDISK is supposed to be owned by ATLAS UK, but it is mostly used by local users.
    Elena: should ask John from Cambridge when he wants to try to use QM storage.
    Sam: is this what we want? John was interested in seeing how Mark got on at Bham.
    28/03: Alessandra will handle Sussex: the CentOS7 upgrade and the switch to diskless.
    Alessandra: Brunel still has storage for CMS, but just 33 TB for ATLAS. Don't use XCache.
    Brunel has now been set to run low-IO jobs and reduce data consolidation.
    Decided to keep as is (with 33 TB ATLAS disk) for the moment.
    Brian: wait to see how Birmingham gets on with XCache before trying it at Cambridge. An alternative would be to use another site so as not to overload QMUL.
    04/04: Mark will write up a procedure for installing XCache. Wait to test the ATLAS setup at Birmingham before trying other sites (e.g. Cambridge).
    Mario said there were two modes to test. Transparent cache mode puts a proxy in front of the storage, so it is not useful for our case. We want the buffer cache mode (Mario calls it "Volatile RSE").
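    A minimal sketch of how the buffer cache ("Volatile RSE") mode is addressed (an assumption for illustration, not from the meeting): jobs point at the local XCache endpoint and embed the remote origin URL in the request; on a miss the cache pulls the file from the origin and keeps a copy on local disk. All hostnames and the file path below are hypothetical placeholders.

        # Sketch only: build the cache-prefixed URL a job would use in
        # buffer-cache ("Volatile RSE") mode. All endpoints are hypothetical.
        def cached_url(cache_host: str, origin_url: str, port: int = 1094) -> str:
            """Prefix a remote root:// URL with the local XCache endpoint."""
            return f"root://{cache_host}:{port}//{origin_url}"

        origin = "root://origin-se.example.ac.uk:1094//atlas/rucio/test/file.root"
        print(cached_url("xcache.example.ac.uk", origin))
        # -> root://xcache.example.ac.uk:1094//root://origin-se.example.ac.uk:1094//atlas/rucio/test/file.root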
    02/05: Lincoln Bryant was at RAL and demonstrated XCache setup using SLATE to Alastair, Brian, Tim, et al.
    16/05: Brian will try out SLATE at RAL.
    06/06: Elena said Sheffield was waiting; they need to finish the CentOS7 upgrade first.
    13/06: Cambridge (John Hill) was thinking about installing XCache. Elena will contact John to confirm the plans.
    20/06: John: Cambridge has 900 job slots but only 220 TB of DPM storage, all out of warranty (5.5-9 years old). There is no money for new disk. John is retiring in September, with no full-time replacement, and would like to minimise effort after that.
    Discussed four options:
        1) Leave everything as is. This would still need work now to upgrade DPM (currently on 1.10.0) and maybe move to DOME. The storage is difficult to maintain, especially recovering after a disk crash.
        2) Run diskless with low-I/O jobs (MC simulation). This is the easiest, but there is a worry about having too many ATLAS sites like this.
        3) Install XCache manually using one pool node. This would take some effort to set up, but needs less long-term maintenance. If there are disk problems the lost data does not matter, and the site can run diskless while the node is fixed.
        4) Install XCache with SLATE. This would need Kubernetes to be installed. Mark set up XCache in a day, so maybe we don't gain much by using SLATE.
    Decided to install XCache manually (option 3) to access QMUL storage. John will drain one pool node (40 TB) and put XCache on it. Reduce the spacetoken size and ATLAS DDM will free up the space.
    Alessandra will open a ticket: ADCINFR-129
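    As a rough smoke test of the new XCache node, one could time the same copy through the cache twice; the second (warm) copy should be served from local disk rather than from QMUL. A sketch, assuming the XRootD client tools (xrdcp) are on PATH; the cache endpoint and test file are hypothetical placeholders.

        # Sketch: cold/warm copy timing check for a freshly installed XCache node.
        import subprocess
        import tempfile
        import time

        CACHE = "xcache.example.ac.uk:1094"                                              # hypothetical cache endpoint
        ORIGIN_FILE = "root://origin-se.example.ac.uk:1094//atlas/rucio/test/file.root"  # hypothetical test file

        def timed_copy(url: str) -> float:
            """Copy a file with xrdcp to a throwaway directory and return elapsed seconds."""
            with tempfile.TemporaryDirectory() as tmp:
                start = time.monotonic()
                subprocess.run(["xrdcp", "-f", url, f"{tmp}/out"], check=True)
                return time.monotonic() - start

        url = f"root://{CACHE}//{ORIGIN_FILE}"
        cold = timed_copy(url)   # first read: the cache fetches the file from the origin
        warm = timed_copy(url)   # second read: should come from the local cache disk
        print(f"cold: {cold:.1f}s  warm: {warm:.1f}s")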