SLATE Collaboration Face-to-Face

America/Detroit
Other Institutes

University of Michigan Physics, 450 Church Street, Ann Arbor, MI
Description

Please register so we can track the number of attendees.

Meeting details are in this Google doc

Registration
SLATE Collaboration Face-to-Face Meeting Registration
Participants
  • Ben Kulbertis
  • Bob Killen
  • Chris Weaver
  • Gabriele Carcassi
  • Jason Stidd
  • Joe Breen
  • Lincoln Bryant
  • Robert William Gardner Jr
  • Shawn Mc Kee
  • Shelly Johnson
  • Todd Raeker

Monday 21 January
    • SLATE Demonstration: SLATE Workshop Startup
      • 08:45
        Coffee and breakfast

        Coffee and breakfast outside the room

      • 1
        Workshop Introduction

        Introduction (Rob)
        Workshop logistics (Shawn)
        ATLAS Centrally Managed Build Process:
        https://docs.google.com/document/d/1c7Lmmp_-vsrCKg2Ggie6WWfjcaO-oDw4lOIqFRNXalE/edit#heading=h.x803ifi2xymt

      • 2
        Build Utah machine from raw hardware

        Lincoln and Ben facilitate
        https://docs.google.com/document/d/1c7Lmmp_-vsrCKg2Ggie6WWfjcaO-oDw4lOIqFRNXalE/edit#heading=h.x803ifi2xymt
        Boot machine from USB image
        Install software using Puppet
        Provision network (Calico, MetalLB); see the configuration sketch after this list
        Manual registration steps
        Validate basic Kubernetes and SLATE client functionality
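
        As a concrete reference for the network step, here is a minimal sketch of a MetalLB layer-2 address pool in the ConfigMap format MetalLB uses; the IP range is a placeholder, not a real allocation, and would need to match the machine's routable network:

        # MetalLB layer-2 address pool (ConfigMap-based configuration).
        # The range below is a placeholder (TEST-NET-1); substitute
        # addresses that are actually routable on the cluster network.
        apiVersion: v1
        kind: ConfigMap
        metadata:
          namespace: metallb-system
          name: config
        data:
          config: |
            address-pools:
            - name: default
              protocol: layer2
              addresses:
              - 192.0.2.240-192.0.2.250

        Once applied, LoadBalancer services draw addresses from this pool, which gives a quick check of the Calico + MetalLB provisioning before validating the SLATE client.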

    • 12:30
      Lunch

      Find places nearby along South U

    • SLATE Storage: ATLAS Storage Discussion

      Bob Killen facilitates

      • 3
        ATLAS Storage Discussion

        The ATLAS project defines the disk requirements.
        How do we lay out disks on ATLAS nodes?
        How do we provision and advertise the storage consistently for XCache and other SLATE apps? (See the sketch after this list.)
        ATLAS XCache relies on manual partitioning in its current packaging.
        Does this storage packaging work seamlessly for other SLATE or ATLAS apps?
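
        One candidate for provisioning and advertising the disks consistently is to expose each cache disk as a Kubernetes local PersistentVolume under a dedicated StorageClass that XCache or other SLATE apps can claim. This is a sketch only; the name, capacity, mount path, StorageClass, and node below are illustrative, not an agreed layout:

        # Sketch: one XCache cache disk advertised as a local PersistentVolume.
        apiVersion: v1
        kind: PersistentVolume
        metadata:
          name: xcache-disk-0              # hypothetical name
        spec:
          capacity:
            storage: 10Ti                  # assumed disk size
          accessModes:
          - ReadWriteOnce
          persistentVolumeReclaimPolicy: Retain
          storageClassName: local-xcache   # hypothetical StorageClass
          local:
            path: /mnt/xcache/disk0        # assumed mount point from the manual partitioning
          nodeAffinity:                    # required for local volumes
            required:
              nodeSelectorTerms:
              - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                  - atlas-node-1           # hypothetical node

        Apps would then bind through a PersistentVolumeClaim against the same StorageClass, keeping the packaging uniform across SLATE apps.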

    • SLATE Node Level Monitoring: ATLAS Node Level Monitoring
      • 4
        Discussion on SLATE Monitoring with ATLAS as a Driver

        Gabriele and Bob facilitate

        Monitoring doc:
        https://docs.google.com/document/d/1lMji2dwLPkHPgtgYk5U5f0H500kOLeQswAd9SbBl1RU/edit#heading=h.n9pkgznjiiyp
        Goal for planning session: settle on the general deployment strategy, covering both metrics and logs, to be implemented over the next few months, and record it in JIRA
        Goal for working session: a standalone Prometheus/Grafana installed on at least one cluster
        Items to discuss:
        Deployment strategy for metrics monitoring (standard Helm charts, or charts of our own? see the values sketch after this list)
        Deployment strategy for logs
        Outside access (ingress, DNS name, …)
        Location for central services (i.e. Grafana and the Prometheus aggregator)
        Data aggregation from the clusters (Pushgateway, Prometheus federation/Thanos/Cortex?)
        Cluster-level permissions (what requires admin privileges and what doesn't?)
        Metrics (what do we want to monitor on each node?)
        User-level permissions (i.e. logins and permissions for Grafana)
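
        If the standard charts win out, per-cluster deployment could reduce to a small Helm values override. Below is a sketch against the stable/prometheus chart (key names come from that chart; the DNS name, volume size, and Pushgateway choice are placeholders pending the decisions above):

        # values-prometheus.yaml -- illustrative overrides, not agreed settings.
        server:
          persistentVolume:
            enabled: true
            size: 50Gi                          # assumed retention-driven size
          ingress:
            enabled: true
            hosts:
            - prometheus.cluster.example.org    # placeholder DNS name
        pushgateway:
          enabled: true                         # one candidate for cluster-to-central aggregation

        Installed, Helm 2 style, with something like: helm install --name prometheus --namespace monitoring -f values-prometheus.yaml stable/prometheus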

    • 15:30
      Coffee and Snacks
    • SLATE Monitoring Discussion: Monitoring Discussion for SLATE
      • 5
        SLATE Monitoring Discussion

        Discuss what SLATE should do about monitoring for at least two use cases:
        1) Monitoring for the SLATE application developer: what does the "SLATE pane of glass" provide for a developer?
        2) Monitoring for the SLATE platform managers: what does the "SLATE pane of glass" provide for a platform deployer/manager?

        Other cases?

        How to monitor the network?

    • Wrap-Up Day 1: Wrap-Up and Action Items

      Summarize the day's work and review action items.