14–18 May 2018
University of Wisconsin-Madison
America/Chicago timezone

Session

Computing and batch systems

CB
16 May 2018, 11:50
Chamberlin Hall (University of Wisconsin-Madison)

Madison, USA (43.073834, -89.405216)


  1. Michele Michelotto (Università e INFN, Padova (IT))
    16/05/2018, 11:50
    Computing & Batch Services

    The benchmarking working group holds biweekly meetings. We are focusing on the health of HS06, on fast benchmarks, and on the study of a new benchmark to replace HS06, since SPEC has moved to a new family of benchmarks.

  2. Jose Flix Molina (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas)
    16/05/2018, 12:10
    Computing & Batch Services

    The working group has been established and is now working towards a cost and performance model that allows the computing resources needed for the HL-LHC to be estimated quantitatively and mapped to costs at specific sites.
    The group has defined short- and medium-term plans and identified the main tasks. Teams with members from experiments and sites have formed around these tasks and started...

  3. William Edward Strecker-Kellogg (Brookhaven National Laboratory (US))
    16/05/2018, 14:00
    Computing & Batch Services

    Computing is changing at BNL. We will discuss how we are restructuring our HTCondor pools, integrating them with new tools such as Jupyter notebooks, and with other resources such as HPC systems run with Slurm.

  4. Christoph Beyer
    16/05/2018, 14:20
    Computing & Batch Services

    The batch facilities at DESY are currently being enlarged significantly while, at the same time, being partly migrated from SGE to HTCondor.
    This is a short overview of what is going on on site in terms of Grid, local, and HPC cluster development.

  5. Markus Schulz (CERN)
    16/05/2018, 14:40
    Computing & Batch Services

    At the last HEPiX meeting we described the results of a proof-of-concept study to run batch jobs on EOS disk server nodes. We have since moved towards a production-level configuration, and the first pre-production nodes have been set up. Besides its relevance for CERN, this is also a more general step towards a hyper-converged infrastructure.

  6. Maxime Reis (CERN)
    16/05/2018, 15:00
    Computing & Batch Services

    Techlab, a CERN IT project, is a hardware lab providing experimental systems and benchmarking data for the HEP community.

    Techlab is constantly on the lookout for new trends in HPC, cutting-edge technologies, and alternative architectures, both CPUs and accelerators.
    We believe that in the long run, a diverse offering and healthy competition in the HPC market will serve science in...

  7. Todd Tannenbaum (University of Wisconsin Madison (US))
    16/05/2018, 15:20
    Computing & Batch Services

    The goal of the HTCondor team is to develop, implement, deploy, and evaluate mechanisms and policies that support High Throughput Computing (HTC) on large collections of distributively owned computing resources. Increasingly, the work performed by the HTCondor developers is being driven by its partnership with the High Energy Physics (HEP) community.

    This talk will present recent changes...

  8. Tony Quan (LBL)
    16/05/2018, 16:10
    Computing & Batch Services

    PDSF, the Parallel Distributed Systems Facility, has been in continuous operation since 1996, serving high energy physics research. It is currently a Tier-1 site for STAR, a Tier-2 site for ALICE, and a Tier-3 site for ATLAS. We are in the process of migrating the PDSF workload from a commodity cluster to Cori, a Cray XC40 system.
    The process will involve preparing containers that will allow PDSF...

  9. Mr Dino Conciatore (CSCS (Swiss National Supercomputing Centre))
    16/05/2018, 16:30
    Computing & Batch Services

    For the past 10 years, CSCS has been providing computational resources for the ATLAS, CMS, and LHCb experiments on a standard commodity cluster.
    The High-Luminosity LHC upgrade (HL-LHC) presents new challenges and demands, with a predicted 50x increase in computing needs over the next 8 to 10 years. High Performance Computing capabilities could help to meet the computing demands due to...

  10. Dr Zhihua Dong
    16/05/2018, 16:50
    Computing & Batch Services

    The HPL and HPCG benchmarks have been run on the Brookhaven National Laboratory SDCC clusters and on several generations of Linux Farm nodes, and the results compared with HS06. HPL results align more closely with raw CPU/GPU performance, while HPCG results are also affected by memory performance.
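    The different sensitivities of the two benchmarks follow from their arithmetic intensity: HPL factorises a dense matrix (many flops per byte moved), while HPCG's core kernel is a sparse matrix-vector product (few flops per byte). A back-of-the-envelope sketch, using illustrative sizes rather than the official benchmark parameters:

    ```python
    # Rough arithmetic-intensity estimates (illustrative numbers, not the
    # official benchmark definitions) showing why HPL tracks compute
    # throughput while HPCG tracks memory bandwidth.
    n = 10_000

    # HPL: LU factorisation of a dense n x n matrix of doubles
    hpl_flops = (2 / 3) * n**3        # ~(2/3) n^3 floating-point operations
    hpl_bytes = 8 * n**2              # matrix traffic, 8 bytes per double

    # HPCG-like sparse mat-vec: ~27 nonzeros per row (27-point stencil)
    nnz = 27 * n
    spmv_flops = 2 * nnz              # one multiply + one add per nonzero
    spmv_bytes = 12 * nnz             # ~8 B value + 4 B column index per nonzero

    print(f"HPL  ~{hpl_flops / hpl_bytes:.0f} flop/byte")
    print(f"SpMV ~{spmv_flops / spmv_bytes:.2f} flop/byte")
    ```

    Hundreds of flops per byte for HPL versus a fraction of a flop per byte for the sparse kernel: the former saturates the arithmetic units, the latter the memory system.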

  11. Mr Fernando Furusato (LNLS/CNPEM)
    16/05/2018, 17:10
    Computing & Batch Services

    In this work, we present a fast implementation of analytical image reconstruction from projections, using the so-called "backprojection-slice theorem" (BST). BST can produce reliable image reconstructions in a reasonable amount of time. It is easy to implement and can be used to make fast decisions about the quality of a measurement,...
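    As an illustration of the general idea only (not the authors' BST implementation, which operates in the Fourier domain), a minimal unfiltered backprojection in NumPy: each measured projection is smeared back across the image plane along its acquisition angle.

    ```python
    import numpy as np

    def backproject(sinogram, angles_deg):
        """Naive (unfiltered) backprojection.
        sinogram: array of shape (n_angles, n_detectors)."""
        n_angles, n_det = sinogram.shape
        # image grid centred at zero, same width as the detector row
        coords = np.arange(n_det) - (n_det - 1) / 2.0
        xx, yy = np.meshgrid(coords, coords)
        image = np.zeros((n_det, n_det))
        for proj, theta in zip(sinogram, np.deg2rad(angles_deg)):
            # detector coordinate of every pixel for this view angle
            t = xx * np.cos(theta) + yy * np.sin(theta)
            # linear interpolation into the 1-D projection
            image += np.interp(t, coords, proj, left=0.0, right=0.0)
        return image / n_angles

    # Toy example: a point source at the rotation centre projects to the
    # central detector bin at every angle, and backprojects to a central peak.
    sino = np.zeros((180, 65))
    sino[:, 32] = 1.0
    img = backproject(sino, np.arange(180))
    ```

    A production reconstruction would filter each projection (or, as in BST, resample the Fourier slices) before backprojecting; the unfiltered version shown here produces the characteristic 1/r blur around the peak.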
