WLCG Workshop June 2017: Benchmarking session

This page contains a collection of topics to be discussed in the benchmarking session of the WLCG Workshop: https://indico.cern.ch/event/609911/timetable/#20170621

Benchmarking session structure (a proposal)

  • Initial introduction of the WG (objectives, recent activities, foreseen plans)
  • Then an interactive part, where a panel of representatives from the experiments and sites will lead the discussion
    • To steer the discussion, a list of topics (see below) is provided that the experiment and site representatives should answer before the meeting
      • The answers will be summarised during the meeting (for instance, in the introduction talk)

  • Panelists
    • Alessandra Forti (ATLAS experiment and site representative)
    • Andrew McNab (LHCb experiment and site representative)
    • Manfred Alef (WG chair and site representative)
    • Pepe Flix (CMS experiment and site representative)
    • Latchezar Betev (ALICE experiment representative)

Topics

Fast benchmarks

  1. Do the experiments need to access benchmarking information in the job slot? For what purpose?
  2. Expectation: obtain a pessimistic benchmark score, based either on a fully loaded server (as can be obtained via MJF) or on running fast benchmarks in the pilot jobs (see the sketch after this list)
  3. What is the state of the art for the adoption of fast benchmarks in the pilot frameworks?
  4. What are the preferred fast benchmarks from the experiments' point of view? Is it still the Python DB12? Have other benchmarks been evaluated (e.g. a C++ DB12)?
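
A minimal sketch of the "pessimistic score" idea in item 2, assuming the MJF key name and directory layout shown and a DB12-style loop with an illustrative calibration constant; this is not any experiment's production code. It reads the per-slot HS06 value published via Machine/Job Features when available, and otherwise falls back to a quick Python loop run inside the pilot:

    # Hypothetical sketch: pessimistic per-slot score from MJF, with a
    # DB12-style fast benchmark as fallback. The key name 'hs06_job', the
    # directory layout and the calibration constant 250 are assumptions.
    import os
    import random
    import time

    def mjf_hs06_per_slot():
        """Read the per-job-slot HS06 value published via Machine/Job Features, if any."""
        jobfeatures = os.environ.get('JOBFEATURES')  # assumed: directory with one file per key
        if not jobfeatures:
            return None
        try:
            with open(os.path.join(jobfeatures, 'hs06_job')) as f:
                return float(f.read().strip())
        except (IOError, ValueError):
            return None

    def db12_style_score(iterations=1):
        """Tight arithmetic loop, normalised by CPU time in the style of DB12."""
        n = int(1000 * 1000 * 12.5)
        calib = 250.0  # illustrative normalisation constant
        start = time.process_time()
        m = 0.0
        for _ in range(iterations):
            for _ in range(n):
                m += random.normalvariate(10, 1)
        cpu = time.process_time() - start
        return calib * iterations / cpu

    score = mjf_hs06_per_slot()
    if score is None:
        score = db12_style_score()
    print('per-slot power estimate: %.2f' % score)

A pilot could cache such a value per job slot and attach it to job reports; whether and how to do so is exactly the question posed in items 1 and 3.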

HS06

  1. Is the issue of the HS06 score versus the execution time of simulation workloads confirmed, within the accuracy you need?
  2. Is the correlation still good for reconstruction jobs?
  3. When and how has this been studied in recent years? On isolated machines or in job slots?
  4. Would it be better if HS06 were compiled in 64-bit mode?

Preparation for the new long-running benchmark (Successor of HS06)

  1. We need to prepare a suite of experiment workloads to compare their execution times against the proposed future benchmarks.
    • What are the suggested workloads from the experiments?
      • characteristics of simulation workloads
      • characteristics of reconstruction workloads
  2. Actions that the experiments can take to make such workloads available in containers (I will present an example next Friday; see also the sketch after this list)
  3. Collection of results: how shall we collect results? Is there a need for a common database of hardware models? N.B. this does not refer to the accounting use case, but only to approaches for running the benchmarks and the WLCG workload suite in a reproducible way and for collecting and sharing the results.
  4. Looking to the future:
    • What is the status of multi-threading adoption? This will impact the selection of benchmarks
    • What is the set of new architectures on which the WLCG workloads will run (and which will then need to be benchmarked)?
      • What is the status of GPU adoption?
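
A rough sketch of what items 2 and 3 above could look like in practice, assuming a placeholder container image, runtime command and record fields (none of these are agreed names): run a containerised reference workload reproducibly, time it, and emit a small JSON record that could later be pushed to a common results store:

    # Hypothetical sketch for items 2 and 3: run a containerised reference
    # workload and record the outcome as JSON. The image name, the runtime
    # command and the record fields are placeholders, not agreed conventions.
    import json
    import platform
    import subprocess
    import time

    IMAGE = 'example.cern.ch/workloads/sim-reference:v1'  # placeholder image
    CMD = ['docker', 'run', '--rm', IMAGE]                 # could equally be singularity/apptainer

    start = time.time()
    proc = subprocess.run(CMD, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    elapsed = time.time() - start

    record = {
        'workload': IMAGE,
        'wall_time_s': round(elapsed, 1),
        'exit_code': proc.returncode,
        'host': platform.node(),
        'timestamp': time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime()),
    }
    print(json.dumps(record, indent=2))  # in practice: upload to a shared results database

Pinning the image tag and recording it alongside the result is what would make such runs comparable across sites and over time.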

Point of view of Site representatives

  1. Requirements:
    • Benchmark to be used for: procurement, accounting, pledges, monitoring, etc.
    • Desired lifetime of a benchmark

-- DomenicoGiordano - 2017-05-19
