Review of critical area requests

513/R-070 - Openlab Space (CERN)

Present: Bernd, Tim, Wayne
 
  • Highest priority is to reduce the time to recover. This implies
    • systems that allow us to access and recover applications
    • the monitoring of basic infrastructure (power, network)
  • The lxtunnel request is small and is approved
  • Given 80 cores in total spare,
    • Approve 50 cores for the tomcat services
      • Recommend priority for Adams, Foundation and Phonebook as being the ones we need for the recovery
    • Need to clarify the load balancer quota since LBaaS will be down if physics power is down
  • The CephFS move would be recommended if feasible
    • Use of the hyperconverged cluster to host the Ceph mons should be evaluated
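As a rough sanity check on the figures above (80 spare cores, the 2-core lxtunnel request, 50 cores approved for Tomcat), the remaining headroom can be tallied; it also makes clear why the 48-core CephFS mons request does not fit in the spare capacity as-is:

```python
# Rough capacity check using only the figures stated in the minutes.
spare = 80       # total spare cores
lxtunnel = 2     # RQF2150048, approved
tomcat = 50      # approved (of the 100 cores requested)

remaining = spare - lxtunnel - tomcat
print(remaining)  # 28 cores left

cephfs_mons = 48  # RQF2151464 exceeds the remainder, hence the
                  # hyperconverged-cluster evaluation
print(cephfs_mons > remaining)  # True
```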

Actions

 

  • Tim to inform the cloud team of the proposal
  • Tim to inform Jose, Enrico, Arne re: CephFS to validate the hyperconverged proposal
  • Wayne to review the resources for opendcim and Tim to check we've got sufficient capacity
    • If there is a need, these should be considered as highest priority for any remaining capacity
  • Tim and Bernd to review critical area resources list with the aim of
    • Finding remaining cores for tomcat
    • Moving towards a cleaner situation for the barn going forward. DFS capacity, for example, would seem to be a worthwhile evaluation.

    • 11:00-11:20 Input from resource requests (20m)
      Current resources in the critical area in attached spreadsheet.
       
      1) Y/N for CephFS move to the barn
      2) There are currently several outstanding requests for capacity; we need to Y/N them. The cloud team estimates space for 20 4-core VMs.
       
      - 2 cores for lxtunnel (RQF2150048)
      - 100 cores for Tomcat (RQF2150308), including LBs
      - 48 cores for CephFS mons in the barn (RQF2151464)
       
      The details of the Tomcat request are as follows
       
      egroup - k8s cluster (1/2pod) + webservice?
      Kitry - 1 app VM (application), VM (httpd) appserver/oracleforms11g/kitry_prod
      Adams (Apex) - k8s cluster (4 pods), VM (httpd)
      Foundation - k8s cluster (1 pod), VM (httpd)
      phonebook - k8s cluster (1 pod)
      Qualiac - 2 app VMs, VM (httpd) appserver/qualiac/qualiac_prod
      EDH - k8s cluster (2 pods)
      Baan/InfoLN - 2 app VMs, VM (httpd) appserver/baan/prod appserver/inforln/inforln_prod
      SIR/LMS - 1 app VM, VM (httpd) appserver/lms/lms_prod

      VM flavours:
      Application VM - m2.2xlarge (8 cores)
      apache VM - m2.large, 1 VM can be shared for all applications (4 cores)
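The per-application figures above can be tallied against the flavours. Pod sizing for the k8s clusters is not given in the minutes, so this sketch counts VM cores only, with the app-VM counts taken from the list above and a single shared apache VM:

```python
# VM-only core tally for the Tomcat request, from the flavours above.
# k8s pod sizing is not specified, so k8s cores are excluded here.
APP_VM_CORES = 8     # m2.2xlarge
APACHE_VM_CORES = 4  # m2.large, one shared VM for all applications

app_vms = {
    "Kitry": 1,
    "Qualiac": 2,
    "Baan/InfoLN": 2,
    "SIR/LMS": 1,
}

vm_cores = sum(app_vms.values()) * APP_VM_CORES + APACHE_VM_CORES
print(vm_cores)  # 52 cores; the rest of the 100-core request covers k8s
```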

      to be discussed:
      EDMS?
      InforEAM
      NFS from old IT-DB

      day2 - efiles, payroll, OracleHR and Impact are currently deployed as follows:
       
      efiles, Impact - kubernetes
      payroll - VMs
      OracleHR - physical machines in physics area