- Helge Meinhard (CERN) | 21/09/2020, 14:50 | Miscellaneous
- Miron Livny (University of Wisconsin-Madison) | 21/09/2020, 15:00 | HTCondor presentations and tutorials
- Christina Koch (University of Wisconsin-Madison) | 21/09/2020, 15:20 | HTCondor presentations and tutorials
- Lauren Michael (UW Madison) | 21/09/2020, 16:20 | HTCondor presentations and tutorials
- Dr Emmanouil Vamvakopoulos (CCIN2P3/CNRS) | 21/09/2020, 16:55 | HTCondor user presentations
In recent months, HTCondor has been the main workload management system for the Grid environment at CC-IN2P3. The computing cluster consists of ~640 worker nodes of various types, which deliver a total of ~27K execution slots (including hyperthreading). The system supports the LHC experiments (ALICE, ATLAS, CMS, and LHCb) under the umbrella of the Worldwide LHC Computing Grid (WLCG) as a Tier...
- Stefano Dal Pra (Universita e INFN, Bologna (IT)) | 21/09/2020, 17:15 | HTCondor user presentations
CNAF started working with HTCondor in spring 2018, planning to move its Tier-1 Grid Site from a CREAM-CE and LSF batch system to HTCondor-CE and HTCondor. The phase-out of CREAM and LSF was completed by spring 2020. This talk describes our experience with the new system, with particular focus on HTCondor.
- Todd Tannenbaum (Univ of Wisconsin-Madison, Wisconsin, USA) | 21/09/2020, 17:35 | HTCondor presentations and tutorials
- Todd Tannenbaum (Univ of Wisconsin-Madison, Wisconsin, USA) | 22/09/2020, 14:50 | HTCondor presentations and tutorials
- Mark Coatsworth (UW Madison) | 22/09/2020, 15:10 | HTCondor presentations and tutorials
- Gregory Thain (University of Wisconsin-Madison) | 22/09/2020, 15:25 | HTCondor presentations and tutorials
- Christoph Beyer | 22/09/2020, 15:45 | HTCondor user presentations
In 2016, the local (BIRD) and Grid batch facilities at DESY were migrated to HTCondor. This talk will cover some of the experiences and developments we have seen since then, and the plans for the future of HTC at DESY.
- Andrea Sartirana (Centre National de la Recherche Scientifique (FR)) | 22/09/2020, 16:05 | HTCondor user presentations
GRIF is a distributed Tier-2 WLCG site grouping four laboratories in the Paris region (IJCLab, IRFU, LLR, LPNHE). Multiple HTCondor instances have been deployed at GRIF for several years. In particular, an ARC-CE + HTCondor system provides access to the computing resources of IRFU, and a distributed HTCondor pool, with CREAM-CE and Condor-CE gateways, gives unified access to the IJCLab and LLR...
- Marco Mambelli (University of Chicago (US)) | 22/09/2020, 16:40 | HTCondor user presentations
GlideinWMS is a pilot framework that provides uniform and reliable HTCondor clusters using heterogeneous and unreliable resources. The Glideins are pilot jobs that are sent to the selected nodes, test them, set them up as desired by the user jobs, and ultimately start an HTCondor startd to join an elastic pool. These Glideins collect information that is very useful for evaluating the health and...
- Marco Mascheroni (Univ. of California San Diego (US)) | 22/09/2020, 17:00 | HTCondor user presentations
The resource needs of high-energy physics experiments such as CMS at the LHC are expected to grow in terms of the amount of data collected and the computing resources required to process these data. Computing needs in CMS are addressed through the "Global Pool", a vanilla dynamic HTCondor pool created through the glideinWMS software. With over 250k cores, the CMS Global Pool is the biggest...
- James Frey (University of Wisconsin Madison (US)) | 22/09/2020, 17:20 | HTCondor Compute Element (CE) presentations and tutorials
- John Knoeller (University of Wisconsin-Madison) | 22/09/2020, 17:40 | HTCondor Compute Element (CE) presentations and tutorials
- 22/09/2020, 18:00: For system admins installing and/or configuring an HTCondor pool on their campus
- 22/09/2020, 18:00: For general questions, open discussions, getting started
- 22/09/2020, 18:00: Questions about grid/cloud: CE, OSG, WLCG, EGI, bursting to HPC/Cloud, etc.
- 22/09/2020, 18:00: For people who want to submit workflows and have questions about using the command-line tools or developer APIs (Python, REST)
- Brian Hua Lin (University of Wisconsin - Madison) | 23/09/2020, 14:50 | HTCondor Compute Element (CE) presentations and tutorials
- Stefano Dal Pra (Universita e INFN, Bologna (IT)) | 23/09/2020, 15:25 | HTCondor user presentations
CNAF started working with the HTCondor Compute Element in May 2018, planning to move its Tier-1 Grid Site from a CREAM-CE and LSF batch system to HTCondor-CE and HTCondor. The phase-out of CREAM and LSF was completed by spring 2020. This talk describes our experience with the new system, with particular focus on HTCondor-CE.
- Brian Hua Lin (University of Wisconsin - Madison) | 23/09/2020, 15:45 | HTCondor Compute Element (CE) presentations and tutorials
- Max Fischer (Karlsruhe Institute of Technology) | 23/09/2020, 16:05 | HTCondor user presentations
This contribution provides firsthand experience of adopting HTCondor-CE at the German WLCG sites DESY and KIT. Covering two sites plus a remote setup for RWTH Aachen, we share our lessons learned in pushing HTCondor-CE to production. With a comprehensive recap of the technical setup, a detour into surviving the ecosystem and accounting, and the practical dos and don'ts, this contribution is suitable for...
- Brian Hua Lin (University of Wisconsin - Madison) | 23/09/2020, 16:40 | HTCondor Compute Element (CE) presentations and tutorials
- Brian Hua Lin (University of Wisconsin - Madison) | 23/09/2020, 16:55 | HTCondor Compute Element (CE) presentations and tutorials
- Brian Hua Lin (University of Wisconsin - Madison) | 23/09/2020, 17:10 | HTCondor Compute Element (CE) presentations and tutorials
- Ben Jones (CERN) | 23/09/2020, 17:20 | HTCondor user presentations
A review of how we run and operate a large multi-purpose Condor pool with grid submission, local submission, and dedicated resources: using grid and local submission to drive utilisation of shared resources, and using transforms and routers to ensure that jobs end up on the correct resources and are accounted for correctly. We will review our automation and monitoring tools, together with the integration of...
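Job transforms of the kind this talk mentions are typically expressed in the schedd configuration. The fragment below is a minimal sketch of the idea only; the owner name and accounting group are invented for illustration and are not CERN's actual configuration:

```
# Hypothetical transform: tag jobs from a production submit account
# with the accounting group used for fair-share and accounting.
JOB_TRANSFORM_NAMES = $(JOB_TRANSFORM_NAMES) SetAcctGroup
JOB_TRANSFORM_SetAcctGroup @=end
   REQUIREMENTS Owner == "atlasprd"
   SET AcctGroup "group_u_ATLAS"
@end
```

A condor_job_router route can then steer the transformed jobs onto the matching resources.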
- Xavier Eric Ouvrard (CERN) | 23/09/2020, 17:40 | HTCondor user presentations
The Coflu Cluster, also known as the Radio-Protection (RP) Cluster, started as an experimental project at CERN in 2007, involving a few standard desktop computers. It was envisaged to have a job scheduling system and a common storage space so that multiple Fluka simulations could be run in parallel and monitored, utilizing a custom-built and easy-to-use web interface.
The...
- Gregory Thain (University of Wisconsin-Madison) | 24/09/2020, 14:50 | HTCondor presentations and tutorials
- Clemens Lange (CERN) | 24/09/2020, 15:15 | HTCondor user presentations
The majority of physics analysis jobs at CERN are run on high-throughput computing batch systems such as HTCondor. However, not everyone has access to computing farms, e.g. theorists wanting to make use of CMS Open Data, and for reproducible workflows more backend-agnostic approaches are desirable. The industry standard here is containers orchestrated with Kubernetes, for which computing...
- Oliver Freyermuth (University of Bonn (DE)) | 24/09/2020, 15:35 | HTCondor user presentations
Our HTC cluster using HTCondor was set up at Bonn University in 2017/2018. All infrastructure is fully puppetised, including the HTCondor configuration. OS updates are fully automated, and necessary reboots for security patches are scheduled in a staggered fashion, backfilling all draining nodes with short jobs to maximize throughput. Additionally, draining can also be scheduled for...
- Todd Lancaster Miller (University of Wisconsin Madison (US)) | 24/09/2020, 15:55 | HTCondor presentations and tutorials
- Cheryl Zhang (Google Cloud) | 24/09/2020, 16:15 | HTCondor user presentations
We're excited to share the launch of the HTCondor offering on the Google Cloud Marketplace, built by Google software engineer Cheryl Zhang with advice and support from the experts at the CHTC. Come see how quickly and easily you can start using HTCondor on Google Cloud with this new solution.
- James Frey (University of Wisconsin Madison (US)) | 24/09/2020, 16:45 | HTCondor presentations and tutorials
- Anthony Richard Tiradani (Fermi National Accelerator Lab. (US)) | 24/09/2020, 17:05 | HTCondor user presentations
HEPCloud is working to integrate isolated HPC centers, such as Theta at Argonne National Laboratory, into the pool of resources made available to its user community. Major obstacles to using these centers include limited or no outgoing networking and restrictive security policies. HTCondor has provided a mechanism to execute jobs in a manner that satisfies the constraints and...
- Pablo Llopis Sanmillan (CERN) | 24/09/2020, 17:20 | HTCondor user presentations
The bulk of computing at CERN consists of embarrassingly parallel HTC use cases (Jones, Fernandez-Alvarez et al.); however, for MPI applications, e.g. in accelerator physics and engineering, a dedicated HPC cluster running Slurm is used. In order to optimize utilization of the HPC cluster, idle nodes in the Slurm cluster are backfilled with Grid HTC workloads. This talk will detail the CondorCE...
- Emanuele Simili (University of Glasgow) | 24/09/2020, 17:40 | HTCondor user presentations
Our Tier-2 cluster (ScotGrid, Glasgow) uses HTCondor as its batch system, combined with ARC-CE as a front-end for job submission and ARGUS for authentication and user mapping. On top of this, we have built a central monitoring system based on Prometheus that collects, aggregates, and displays metrics on custom Grafana dashboards. In particular, we extract job info by regularly parsing the output of...
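As a rough illustration of the metrics-extraction step described above, here is a minimal sketch (not the Glasgow implementation) that aggregates job ads, as could be obtained in JSON form via `condor_q -json`, into Prometheus text-exposition lines; the sample data is invented:

```python
import json
from collections import Counter

# HTCondor JobStatus integer codes, per the HTCondor manual
JOB_STATUS = {1: "idle", 2: "running", 3: "removed", 4: "completed", 5: "held"}

def jobs_to_metrics(job_ads):
    """Aggregate job ads (dicts with a JobStatus attribute) into
    Prometheus text-exposition lines, one gauge sample per status."""
    counts = Counter(JOB_STATUS.get(ad.get("JobStatus"), "unknown") for ad in job_ads)
    lines = ["# TYPE condor_jobs gauge"]
    for status, n in sorted(counts.items()):
        lines.append(f'condor_jobs{{status="{status}"}} {n}')
    return "\n".join(lines)

# Invented sample data, shaped like `condor_q -json` output
sample = json.loads('[{"JobStatus": 2}, {"JobStatus": 2}, {"JobStatus": 1}, {"JobStatus": 5}]')
print(jobs_to_metrics(sample))
```

A real exporter would serve these lines over HTTP for Prometheus to scrape; the metric name `condor_jobs` here is arbitrary.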
- 24/09/2020, 18:00: For system admins installing and/or configuring an HTCondor pool on their campus
- 24/09/2020, 18:00: For general questions, open discussions, getting started
- 24/09/2020, 18:00: Questions about grid/cloud: CE, OSG, WLCG, EGI, bursting to HPC/Cloud, etc.
- 24/09/2020, 18:00: For people who want to submit workflows and have questions about using the command-line tools or developer APIs (Python, REST)
- Jeff Templon (Nikhef National institute for subatomic physics (NL)) | 25/09/2020, 14:50 | HTCondor user presentations
The Physics Data Processing group at Nikhef is developing a Condor-based cluster, after a 19-year absence from the HTCondor community. This talk will discuss why we are developing this cluster, and present our plans and the results so far. It will also spend a slide or two on the potential to use HTCondor for other services we provide.
- Jason Patton (UW Madison) | 25/09/2020, 15:10 | HTCondor presentations and tutorials
- Todd Tannenbaum (Univ of Wisconsin-Madison, Wisconsin, USA) | 25/09/2020, 15:35 | HTCondor presentations and tutorials
- Mr Matyas Selmeci (University of Wisconsin - Madison) | 25/09/2020, 15:45 | HTCondor user presentations
Dask is an increasingly popular tool for both low-level and high-level parallelism in the Scientific Python ecosystem. I will discuss efforts at the Center for High Throughput Computing at UW-Madison to enable users to run Dask-based work on our HTCondor pool. In particular, we have developed a "wrapper package" based on existing work in the Dask ecosystem that lets Dask spawn workers in the...
- Matyas Selmeci (University of Wisconsin - Madison) | 25/09/2020, 16:05 | HTCondor presentations and tutorials
- Zach Miller, Brian Paul Bockelman (University of Wisconsin Madison (US)) | 25/09/2020, 16:30 | HTCondor presentations and tutorials
- Jim Basney (University of Illinois) | 25/09/2020, 17:00 | HTCondor user presentations
In this presentation, I will introduce the SciTokens model (https://scitokens.org/) for federated capability-based authorization in distributed scientific computing. I will compare the OAuth and JWT security standards with X.509 certificates, and I will discuss ongoing work to migrate HTCondor use cases from certificates to tokens.
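The SciTokens model described above rests on JSON Web Tokens: a token carries its capabilities as claims in a base64url-encoded JSON payload. The stdlib-only sketch below decodes such a payload for inspection; the token and its claims are hand-made for illustration, and real services must of course verify the signature rather than skip it:

```python
import base64
import json

def decode_jwt_payload(token):
    """Decode the payload (claims) segment of a JWT without verifying
    the signature -- for inspection/illustration only."""
    payload_b64 = token.split(".")[1]
    # base64url strips padding; restore it before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Hand-made token with SciTokens-style claims (header and signature are dummies)
claims = {"iss": "https://demo.scitokens.org", "scope": "read:/protected", "exp": 1600000000}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"eyJhbGciOiJSUzI1NiJ9.{payload}.dummysig"
print(decode_jwt_payload(token)["scope"])  # read:/protected
```

The `scope` claim is what makes this a capability token: it names the operations the bearer may perform, in contrast to the identity-centric X.509 model the talk compares against.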
- Jason Patton (UW Madison), Zach Miller | 25/09/2020, 17:20 | HTCondor presentations and tutorials
- Helge Meinhard (CERN) | 25/09/2020, 17:40 | Miscellaneous