21–25 Sept 2020
(teleconference only)
Europe/Paris timezone

Contribution List

  1. Helge Meinhard (CERN)
    21/09/2020, 14:50
    Miscellaneous
  2. Miron Livny (University of Wisconsin-Madison)
    21/09/2020, 15:00
    HTCondor presentations and tutorials
  3. Christina Koch (University of Wisconsin-Madison)
    21/09/2020, 15:20
    HTCondor presentations and tutorials
  4. Lauren Michael (UW Madison)
    21/09/2020, 16:20
    HTCondor presentations and tutorials
  5. Dr Emmanouil Vamvakopoulos (CCIN2P3/CNRS)
    21/09/2020, 16:55
    HTCondor user presentations

    In recent months, HTCondor has been the main workload management system for the Grid environment at CC-IN2P3. The computing cluster consists of ~640 worker nodes of various types, which deliver a total of ~27K execution slots (including hyperthreading). The system supports the LHC experiments (Alice, Atlas, CMS, and LHCb) under the umbrella of the Worldwide LHC Computing Grid (WLCG) as a Tier...

  6. Stefano Dal Pra (Universita e INFN, Bologna (IT))
    21/09/2020, 17:15
    HTCondor user presentations

    CNAF started working with HTCondor during spring 2018, planning to move its Tier-1 Grid Site based on CREAM-CE and the LSF Batch System to HTCondor-CE and HTCondor. The phase-out of CREAM and LSF was completed by spring 2020. This talk describes our experience with the new system, with particular focus on HTCondor.

  7. Todd Tannenbaum (Univ of Wisconsin-Madison, Wisconsin, USA)
    21/09/2020, 17:35
    HTCondor presentations and tutorials
  8. Todd Tannenbaum (Univ of Wisconsin-Madison, Wisconsin, USA)
    22/09/2020, 14:50
    HTCondor presentations and tutorials
  9. Mark Coatsworth (UW Madison)
    22/09/2020, 15:10
    HTCondor presentations and tutorials
  10. Gregory Thain (University of Wisconsin-Madison)
    22/09/2020, 15:25
    HTCondor presentations and tutorials
  11. Christoph Beyer
    22/09/2020, 15:45
    HTCondor user presentations

    In 2016 the local (BIRD) and GRID DESY batch facilities were migrated to HTCondor. This talk will cover some of the experiences and developments we have seen over that time, and the plans for the future of HTC at DESY.

  12. Andrea Sartirana (Centre National de la Recherche Scientifique (FR))
    22/09/2020, 16:05
    HTCondor user presentations

    GRIF is a distributed Tier-2 WLCG site grouping four laboratories in the Paris Region (IJCLab, IRFU, LLR, LPNHE). Multiple HTCondor instances have been deployed at GRIF for several years. In particular, an ARC-CE + HTCondor system provides access to the computing resources of IRFU, and a distributed HTCondor pool, with CREAM-CE and Condor-CE gateways, gives unified access to the IJCLab and LLR...

  13. Marco Mambelli (University of Chicago (US))
    22/09/2020, 16:40
    HTCondor user presentations

    GlideinWMS is a pilot framework to provide uniform and reliable HTCondor clusters using heterogeneous and unreliable resources. The Glideins are pilot jobs that are sent to the selected nodes, test them, set them up as desired by the user jobs, and ultimately start an HTCondor startd to join an elastic pool. These Glideins collect information that is very useful to evaluate the health and...

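    For context, pool members contributed by glideins can be inspected like any other startd. A minimal sketch (not GlideinWMS code; the pool address is hypothetical) using the HTCondor Python bindings:

      import htcondor

      # Query the pool's collector for startd ads contributed by glideins;
      # GLIDEIN_Site is an attribute injected by glideinWMS.
      collector = htcondor.Collector("pool.example.org")  # hypothetical address
      ads = collector.query(
          htcondor.AdTypes.Startd,
          constraint="GLIDEIN_Site isnt undefined",
          projection=["Name", "GLIDEIN_Site", "State", "Activity"],
      )
      for ad in ads:
          print(ad.get("Name"), ad.get("GLIDEIN_Site"), ad.get("State"))
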
  14. Marco Mascheroni (Univ. of California San Diego (US))
    22/09/2020, 17:00
    HTCondor user presentations

    The resource needs of high energy physics experiments such as CMS at the LHC are expected to grow in terms of the amount of data collected and the computing resources required to process these data. Computing needs in CMS are addressed through the "Global Pool", a vanilla dynamic HTCondor pool created through the glideinWMS software. With over 250k cores, the CMS Global Pool is the biggest...

  15. James Frey (University of Wisconsin Madison (US))
    22/09/2020, 17:20
    HTCondor Compute Element (CE) presentations and tutorials
  16. John Knoeller (University of Wisconsin-Madison)
    22/09/2020, 17:40
    HTCondor Compute Element (CE) presentations and tutorials
  17. 22/09/2020, 18:00

    For system admins installing and/or configuring an HTCondor pool on their campus

  18. 22/09/2020, 18:00

    For general questions, open discussions, getting started

  19. 22/09/2020, 18:00

    Questions about grid/cloud: CE, OSG, WLCG, EGI, bursting to HPC/Cloud, etc.

  20. 22/09/2020, 18:00

    For people who want to submit workflows and have questions about using the command line tools or developer APIs (Python, REST)

  21. Brian Hua Lin (University of Wisconsin - Madison)
    23/09/2020, 14:50
    HTCondor Compute Element (CE) presentations and tutorials
  22. Stefano Dal Pra (Universita e INFN, Bologna (IT))
    23/09/2020, 15:25
    HTCondor user presentations

    CNAF started working with the HTCondor Compute Element in May 2018, planning to move its Tier-1 Grid Site based on CREAM-CE and the LSF Batch System to HTCondor-CE and HTCondor. The phase-out of CREAM and LSF was completed by spring 2020. This talk describes our experience with the new system, with particular focus on HTCondor-CE.

  23. Brian Hua Lin (University of Wisconsin - Madison)
    23/09/2020, 15:45
    HTCondor Compute Element (CE) presentations and tutorials
  24. Max Fischer (Karlsruhe Institute of Technology)
    23/09/2020, 16:05
    HTCondor user presentations

    This contribution provides firsthand experience of adopting HTCondor-CE at the German WLCG sites DESY and KIT. Covering two sites plus a remote setup for RWTH Aachen, we share our lessons learned in pushing HTCondor-CE to production. With a comprehensive recap of the technical setup, a detour into surviving the ecosystem and accounting, and the practical Dos and Don'ts, this contribution is suitable for...

  25. Brian Hua Lin (University of Wisconsin - Madison)
    23/09/2020, 16:40
    HTCondor Compute Element (CE) presentations and tutorials
  26. Brian Hua Lin (University of Wisconsin - Madison)
    23/09/2020, 16:55
    HTCondor Compute Element (CE) presentations and tutorials
  27. Brian Hua Lin (University of Wisconsin - Madison)
    23/09/2020, 17:10
    HTCondor Compute Element (CE) presentations and tutorials
  28. Ben Jones (CERN)
    23/09/2020, 17:20
    HTCondor user presentations

    A review of how we run and operate a large multi-purpose HTCondor pool with grid submission, local submission, and dedicated resources. We use grid and local submission to drive utilisation of shared resources, and transforms and routers to ensure jobs end up on the correct resources and are accounted correctly. We will review our automation and monitoring tools, together with integration of...

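    As a flavour of what such routing amounts to (an illustrative sketch only, not CERN's actual transforms; the group naming is invented), a transform conceptually rewrites job ClassAds before matchmaking:

      import classad

      def assign_accounting(job):
          """Mimic a job transform: derive the accounting group from the VO."""
          vo = job.get("x509UserProxyVOName", "local")
          job["AcctGroup"] = f"group_{vo}"   # hypothetical naming scheme
          return job

      ad = classad.ClassAd()
      ad["Cmd"] = "/usr/bin/sim"
      ad["x509UserProxyVOName"] = "atlas"
      print(assign_accounting(ad)["AcctGroup"])   # -> group_atlas
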
  29. Xavier Eric Ouvrard (CERN)
    23/09/2020, 17:40
    HTCondor user presentations

    The Coflu Cluster, also known as the Radio-Protection (RP) Cluster, started in 2007 as an experimental project at CERN involving a few standard desktop computers. It was envisaged to have a job scheduling system and a common storage space so that multiple Fluka simulations could be run in parallel and monitored, utilizing a custom-built and easy-to-use web interface.

    The...

  30. Gregory Thain (University of Wisconsin-Madison)
    24/09/2020, 14:50
    HTCondor presentations and tutorials
  31. Clemens Lange (CERN)
    24/09/2020, 15:15
    HTCondor user presentations

    The majority of physics analysis jobs at CERN are run on high-throughput computing batch systems such as HTCondor. However, not everyone has access to computing farms, e.g. theorists wanting to make use of CMS Open Data, and for reproducible workflows more backend-agnostic approaches are desirable. The industry standard here is containers orchestrated with Kubernetes, for which computing...

  32. Oliver Freyermuth (University of Bonn (DE))
    24/09/2020, 15:35
    HTCondor user presentations

    Our HTC cluster using HTCondor was set up at Bonn University in 2017/2018. All infrastructure is fully puppetised, including the HTCondor configuration.

    OS updates are fully automated, and necessary reboots for security patches are scheduled in a staggered fashion, backfilling draining nodes with short jobs to maximize throughput. Additionally, draining can also be scheduled for...

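    A rough sketch of the staggered-drain idea (assumptions only, not Bonn's actual tooling): drain a node, wait until no slots are claimed, then reboot it. Backfilling short jobs onto a draining node is a matter of drain-time start policy and is beyond this sketch.

      import subprocess
      import htcondor

      collector = htcondor.Collector()

      def drain(node):
          # condor_drain is graceful by default: running jobs may finish,
          # but no new work starts on the node.
          subprocess.run(["condor_drain", node], check=True)

      def is_idle(node):
          # The node is safe to reboot once no slots are claimed.
          claimed = collector.query(
              htcondor.AdTypes.Startd,
              constraint=f'Machine == "{node}" && State == "Claimed"',
              projection=["Name"],
          )
          return not claimed
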
  33. Todd Lancaster Miller (University of Wisconsin Madison (US))
    24/09/2020, 15:55
    HTCondor presentations and tutorials
  34. Cheryl Zhang (Google Cloud)
    24/09/2020, 16:15
    HTCondor user presentations

    We're excited to share the launch of the HTCondor offering on the Google Cloud Marketplace, built by Google software engineer Cheryl Zhang with advice and support from the experts at the CHTC. Come see how quickly and easily you can start using HTCondor on Google Cloud with this new solution.

  35. James Frey (University of Wisconsin Madison (US))
    24/09/2020, 16:45
    HTCondor presentations and tutorials
  36. Anthony Richard Tiradani (Fermi National Accelerator Lab. (US))
    24/09/2020, 17:05
    HTCondor user presentations

    HEPCloud is working to integrate isolated HPC centers, such as Theta at Argonne National Laboratory, into the pool of resources made available to its user community. Major obstacles to using these centers include limited or no outgoing networking and restrictive security policies. HTCondor has provided a mechanism to execute jobs in a manner that satisfies the constraints and...

  37. Pablo Llopis Sanmillan (CERN)
    24/09/2020, 17:20
    HTCondor user presentations

    The bulk of computing at CERN consists of embarrassingly parallel HTC use cases (Jones, Fernandez-Alavarez et al.); for MPI applications, e.g. in Accelerator Physics and Engineering, a dedicated HPC cluster running SLURM is used. In order to optimize utilization of the HPC cluster, idle nodes in the SLURM cluster are backfilled with Grid HTC workloads. This talk will detail the CondorCE...

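    To illustrate just the detection half of such a scheme (an assumption-laden sketch; the actual CERN integration goes through the CE), idle SLURM nodes can be listed and then targeted for backfill:

      import subprocess

      def idle_slurm_nodes():
          # -h: no header; -t idle: only idle nodes; -o "%n": print hostnames
          result = subprocess.run(
              ["sinfo", "-h", "-t", "idle", "-o", "%n"],
              capture_output=True, text=True, check=True,
          )
          return result.stdout.split()

      for node in idle_slurm_nodes():
          print("backfill candidate:", node)
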
  38. Emanuele Simili (University of Glasgow)
    24/09/2020, 17:40
    HTCondor user presentations

    Our Tier-2 cluster (ScotGrid, Glasgow) uses HTCondor as its batch system, combined with ARC-CE as the front-end for job submission and ARGUS for authentication and user mapping. On top of this, we have built a central monitoring system based on Prometheus that collects, aggregates, and displays metrics on custom Grafana dashboards. In particular, we extract job information by regularly parsing the output of...

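    A minimal sketch of such an exporter (metric name and port are invented for illustration): poll the schedd with the Python bindings and expose per-state job counts for Prometheus to scrape.

      import time
      import htcondor
      from prometheus_client import Gauge, start_http_server

      STATES = {1: "idle", 2: "running", 5: "held"}  # JobStatus codes
      jobs = Gauge("condor_jobs", "HTCondor jobs per state", ["state"])

      def poll(schedd):
          counts = dict.fromkeys(STATES.values(), 0)
          for ad in schedd.query(projection=["JobStatus"]):
              state = STATES.get(ad.get("JobStatus"))
              if state:
                  counts[state] += 1
          for state, n in counts.items():
              jobs.labels(state=state).set(n)

      if __name__ == "__main__":
          start_http_server(9118)  # hypothetical exporter port
          schedd = htcondor.Schedd()
          while True:
              poll(schedd)
              time.sleep(60)
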
  39. 24/09/2020, 18:00

    For system admins installing and/or configuring an HTCondor pool on their campus

  40. 24/09/2020, 18:00

    For general questions, open discussions, getting started

  41. 24/09/2020, 18:00

    Questions about grid/cloud: CE, OSG, WLCG, EGI, bursting to HPC/Cloud, etc.

  42. 24/09/2020, 18:00

    For people who want to submit workflows and have questions about using the command line tools or developer APIs (Python, REST)

  43. Jeff Templon (Nikhef National institute for subatomic physics (NL))
    25/09/2020, 14:50
    HTCondor user presentations

    The Physics Data Processing group at Nikhef is developing a Condor-based cluster, after a 19-year absence from the HTCondor community. This talk will discuss why we are developing this cluster, and present our plans and the results so far. It will also spend a slide or two on the potential to use HTCondor for other services we provide.

  44. Jason Patton (UW Madison)
    25/09/2020, 15:10
    HTCondor presentations and tutorials
  45. Todd Tannenbaum (Univ of Wisconsin-Madison, Wisconsin, USA)
    25/09/2020, 15:35
    HTCondor presentations and tutorials
  46. Mr Matyas Selmeci (University of Wisconsin - Madison)
    25/09/2020, 15:45
    HTCondor user presentations

    Dask is an increasingly popular tool for both low-level and high-level parallelism in the Scientific Python ecosystem. I will discuss efforts at the Center for High Throughput Computing at UW-Madison to enable users to run Dask-based work on our HTCondor pool. In particular, we have developed a "wrapper package" based on existing work in the Dask ecosystem that lets Dask spawn workers in the...

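    The existing ecosystem work referred to includes dask-jobqueue, whose HTCondorCluster already launches Dask workers as HTCondor jobs; a minimal usage sketch of that package (not the CHTC wrapper itself):

      from dask.distributed import Client
      from dask_jobqueue import HTCondorCluster
      import dask.array as da

      # Each Dask worker runs as an HTCondor job with these resources.
      cluster = HTCondorCluster(cores=1, memory="2 GB", disk="1 GB")
      cluster.scale(jobs=10)           # submit ten worker jobs to the pool
      client = Client(cluster)

      x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))
      print(x.mean().compute())        # computed on the HTCondor-backed workers
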
  47. Matyas Selmeci (University of Wisconsin - Madison)
    25/09/2020, 16:05
    HTCondor presentations and tutorials
  48. Zach Miller, Brian Paul Bockelman (University of Wisconsin Madison (US))
    25/09/2020, 16:30
    HTCondor presentations and tutorials
  49. Jim Basney (University of Illinois)
    25/09/2020, 17:00
    HTCondor user presentations

    In this presentation, I will introduce the SciTokens model (https://scitokens.org/) for federated capability-based authorization in distributed scientific computing. I will compare the OAuth and JWT security standards with X.509 certificates, and I will discuss ongoing work to migrate HTCondor use cases from certificates to tokens.

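    For orientation, a SciToken is a JWT whose claims express capabilities rather than identity; inspecting one (signature verification deliberately skipped, so for illustration only) shows the pattern:

      import jwt  # PyJWT

      def show_claims(token):
          # Decode without verifying the signature -- inspection only;
          # production code must verify against the issuer's public key.
          claims = jwt.decode(token, options={"verify_signature": False})
          print("issuer: ", claims.get("iss"))
          print("subject:", claims.get("sub"))
          print("scope:  ", claims.get("scope"))  # e.g. "read:/data write:/home/user"
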
  50. Jason Patton (UW Madison), Zach Miller
    25/09/2020, 17:20
    HTCondor presentations and tutorials
  51. Helge Meinhard (CERN)
    25/09/2020, 17:40
    Miscellaneous