T1.2 - periodic phone-conf

Europe/Zurich

After reading the KnowledgeBase document, there are some points on which CNAF is interested in having more details:

  • Belmiro explained how OpenStack Ironic works: Nova API calls are used to create pools of nodes according to the flavour they are associated with, e.g. hypervisors for OpenStack or nodes for EOS.
  • Andrea: What is the exact procedure for putting a node in production?
    • First of all the node is booted with a special image that collects all the hardware information and stores it in specific databases:
      • the main one is the "networkDB" (we don't have such a tool), where the node information stays as long as the node exists in the data center; for example, the MAC-address-to-IP mapping is recorded there
      • another one is the "HardwareDB"
    • Ironic does not interact with BIOS settings. BIOS is generally set up by the company who won the procurement tender.
  • The IPMI addressing issue was described by CNAF. CERN does not have this issue since a machine is configured only as long as it stays in the data center. Hardware recycling is not a common practice at CERN; at the end of the hardware lifecycle, parts from other servers can be used to keep machines alive.
    • Belmiro will try to find out how the allocation of IPMI addresses is done at CERN
  • CNAF will probably use Ralph3 as its asset management tool, while CERN apparently chose OpenDCIM.
    • CERN's choice was made after evaluating a number of similar tools, weighing their pros and cons. Among them, the most appreciated were Ralph3 and OpenDCIM. The OpenDCIM web GUI and its integration with existing DBs had a big impact on the final choice.
  • CNAF asked if OpenDCIM has good import/export functions, or if it can be interfaced with external tools via APIs.
    • More info will come (Belmiro).
  • Redfish is a new management standard that replaces the IPMI interface and looks very promising in an Ironic-oriented environment. CERN currently requires it to be supported on newly acquired hardware, but for the moment it seems unused.
  • CERN is not aware of a generic tool that integrates all the different hardware vendors into a single view and collects monitoring information in an integrated way.
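
The "networkDB" role described above (node network information persisting only as long as the node exists in the data center, e.g. the MAC-address-to-IP mapping) can be sketched as a minimal registry. The class and method names below are illustrative assumptions for the example, not CERN's actual tool or schema:

```python
# Minimal, illustrative model of a "networkDB"-style registry: node network
# info is keyed by MAC address and lives only as long as the node is in the
# data center. Names (NetworkDB, register, decommission) are assumptions,
# not CERN's real tool.

class NetworkDB:
    def __init__(self):
        self._nodes = {}  # MAC address -> assigned IP

    def register(self, mac: str, ip: str) -> None:
        """Record the MAC -> IP mapping when a node enters the data center."""
        self._nodes[mac.lower()] = ip

    def lookup(self, mac: str):
        """Return the IP assigned to this MAC, if the node still exists."""
        return self._nodes.get(mac.lower())

    def decommission(self, mac: str) -> None:
        """Drop the entry once the node leaves the data center."""
        self._nodes.pop(mac.lower(), None)

db = NetworkDB()
db.register("AA:BB:CC:DD:EE:FF", "10.0.0.42")
print(db.lookup("aa:bb:cc:dd:ee:ff"))  # lookup is case-insensitive
db.decommission("AA:BB:CC:DD:EE:FF")
print(db.lookup("aa:bb:cc:dd:ee:ff"))  # entry is gone once the node leaves
```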

Belmiro will try to clarify some of the points above.
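
The node-onboarding flow discussed above (boot with an inspection image, then move the node into production) roughly follows Ironic's provision-state machine. The sketch below is a simplified local model of that state flow for illustration; it is not a call into the real Ironic API, which has additional intermediate states (inspecting, cleaning, deploying, ...):

```python
# Simplified model of Ironic's bare-metal provision-state flow
# (enroll -> manageable -> available -> active). This is an illustrative
# sketch of the lifecycle discussed in the minutes, not the Ironic API.

ALLOWED = {
    "enroll": {"manageable"},     # node registered, management credentials verified
    "manageable": {"available"},  # hardware inspection and cleaning done
    "available": {"active"},      # node ready to be deployed via Nova
    "active": {"available"},      # instance deleted, node freed again
}

def advance(state: str, target: str) -> str:
    """Move a node to `target` if the transition is allowed."""
    if target not in ALLOWED.get(state, set()):
        raise ValueError(f"cannot go from {state!r} to {target!r}")
    return target

state = "enroll"
for target in ("manageable", "available", "active"):
    state = advance(state, target)
print(state)  # the node ends up deployed
```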

Accounting and Monitoring

  • Discussion on this topic is postponed to the next meeting.
  • It is an interesting topic for CNAF, in particular for the "cloud" resources. We'll have a look at how cASO can be installed and configured, and at the metrics it offers. Since CNAF is not part of the EGI FedCloud, it will use cASO without the APEL integration.
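
As an illustration of the kind of per-project metrics a cloud accounting extractor like cASO reports (e.g. VM wall-clock time aggregated by project), here is a minimal sketch. The record layout is an assumption made for the example, not cASO's actual output schema:

```python
# Illustrative aggregation of cloud accounting records of the kind a tool
# like cASO extracts (wall time per VM, grouped by project). The record
# layout below is an assumption for the example, not cASO's real schema.

from collections import defaultdict

def wall_time_per_project(records):
    """Sum VM wall-clock seconds per project from (project, start, end) records."""
    totals = defaultdict(int)
    for project, start, end in records:
        totals[project] += end - start
    return dict(totals)

records = [
    ("atlas", 0, 3600),   # one VM, 1 hour
    ("cms", 100, 7300),   # one VM, 2 hours
    ("atlas", 0, 1800),   # another VM, 30 minutes
]
print(wall_time_per_project(records))  # {'atlas': 5400, 'cms': 7200}
```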

    • 15:00-16:00 Discussion on common areas of interest (1h)

      g-doc shared for "KnowledgeBase for T1.2"
