Present: Alastair Dewhurst, James Adams, Rob Appleyard, Shaun de Witt, George Ryall, John Casson
Meeting structure:
- Agreed that the agenda, presentations and slides should be stored on Indico.
- George is chairing the meetings until August 2014.
- Minute taker will be a different person each week.
- Meetings will be weekly, although some may be cancelled if there is nothing to talk about.
- Next meeting will be 15th April.
Notation:
- We agreed that the word "copy" is ambiguous when talking about file availability within Ceph. We will instead use the word "replica" to indicate the number of extra copies of an object held within the instance: 0 replicas means just the original object is stored within Ceph (see the sketch below).
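For reference, this notation maps onto Ceph's per-pool "size" parameter, which counts the original as well, so size = replicas + 1. A minimal sketch using the standard ceph CLI (the pool name is hypothetical):

    # Hypothetical pool "gridpool": size 3 means the original plus
    # 2 extra copies (i.e. "2 replicas" in our notation)
    ceph osd pool set gridpool size 3
    # Confirm the setting
    ceph osd pool get gridpool size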
Project Overview:
Two main use cases:
- Tier 1 cloud storage, for which Ian Collier will be responsible (John Casson is currently in charge of the cloud storage but will soon be moving to ISIS). This will be referred to as the Cloud instance.
- Tier 1 disk-only storage, for which Shaun de Witt will be responsible. This will be referred to as the Grid instance.
Aims of the Cloud project:
- Provide backend storage for virtual machines (shared-nothing storage).
- Provide some form of bucket storage for use outside the VM infrastructure.
- This instance will have multiple replicas of the data (probably D3T0).
Aims of the Grid project:
- To provide large-scale, cost-effective storage for GridPP (LHC VO) use.
- This instance will have no replicas and will instead rely on erasure coding (comparable to RAID 6); see the sketch below.
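As an illustration of what erasure coding looks like in Ceph (erasure-coded pools are available from the Firefly release onwards), a profile with two coding chunks gives RAID 6-style double redundancy. The profile and pool names, the k/m values and the placement-group count below are all illustrative:

    # Erasure-code profile with 4 data chunks + 2 coding chunks,
    # analogous to RAID 6's two parity disks
    ceph osd erasure-code-profile set raid6like k=4 m=2
    # Create an erasure-coded pool using that profile
    # (128 is an illustrative placement-group count)
    ceph osd pool create ecpool 128 128 erasure raid6like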
Deployment Status:
- The Cloud hardware has been purchased and delivered and is in the racks. It is not connected or powered on currently.
- There are 12 x 13 generation disk servers available for use in the Grid instance. Some of this hardware is already in production for Castor, so is 'ready'.
- It would take a significant amount of time to re-configure the 13 generation disk servers away from RAID 6.
- The 08 generation of worker nodes (WN) is available for testing CephFS if necessary.
- The development instance is up, with 6 disk servers and around 140 TB of space.
- Decided to keep the Cloud and Grid instances separate for now. It was also agreed that at a future point the instances will be taken down, and we should decide then whether it is better to run them as a shared instance or not.
- Decided to use 3 Ceph monitors and to run them on storage nodes.
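For illustration, the monitor section of ceph.conf for the agreed three-monitor setup might look like the following (hostnames and addresses are hypothetical). Three monitors form a quorum that tolerates the loss of any one of them:

    [global]
    # Three monitors, co-located on storage nodes
    # (hostnames and addresses are hypothetical)
    mon_initial_members = storage01, storage02, storage03
    mon_host = 10.0.0.1, 10.0.0.2, 10.0.0.3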
Actions:
- Alastair to provide George with a list of hosts that can be used to create the Grid instance.
- George to start setting up the Grid instance.
- Shaun will start to look at the xrootd plugin on top of Ceph (get code from CERN).
- Alastair, John, Andrew Lahiff will provide George with a list of monitoring requirements.