Oct 10 – 14, 2016
San Francisco Marriott Marquis
America/Los_Angeles timezone

Opportunistic Computing with Lobster: Lessons Learned From Scaling up to 25k Non-Dedicated Cores

Oct 10, 2016, 2:00 PM
GG C2 (San Francisco Marriott Marquis)



Oral, Track 3: Distributed Computing


Anna Elizabeth Woodard (University of Notre Dame (US)), Matthias Wolf (University of Notre Dame (US))


We previously described Lobster, a workflow management tool for harnessing volatile opportunistic computing resources for high-energy physics (HEP) computation. We will discuss the challenges encountered while scaling up simultaneous CPU core utilization and the software improvements required to overcome them.

Categories: Workflows can now be divided into categories based on their required system resources, allowing the batch queueing system to assign tasks to nodes with the appropriate capabilities. Within each category, limits can be placed on the number of running tasks to regulate the use of communication bandwidth. Resource specifications for a task category can be modified while a project is running, avoiding a restart if requirements differ from the initial estimates. Lobster also imposes time limits on each task category: tasks exceeding the limit terminate voluntarily, allowing partially completed work to be recovered.
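The category mechanism described above can be illustrated with a minimal sketch. This is not Lobster's actual configuration API; the `Category` class, its fields, and all values are hypothetical, chosen only to show per-category resource specifications, a running-task cap, a soft time limit, and adjustment while a project is running.

```python
# Hypothetical sketch of per-category resource specification
# (not Lobster's real API; names and values are illustrative).
from dataclasses import dataclass

@dataclass
class Category:
    name: str
    cores: int
    memory_mb: int
    runtime_s: int    # tasks exceeding this terminate voluntarily,
                      # returning partially completed work
    max_running: int  # cap on concurrent tasks to limit bandwidth use

categories = {
    "processing": Category("processing", cores=4, memory_mb=4000,
                           runtime_s=3600, max_running=5000),
    "merge": Category("merge", cores=1, memory_mb=2000,
                      runtime_s=900, max_running=200),
}

# Specifications can change mid-project, e.g. after observing that
# merge tasks need more memory than the initial estimate:
categories["merge"].memory_mb = 3000
```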

Workflow dependency specification: One workflow often requires data from other workflows as input. Rather than waiting for earlier workflows to be completed before beginning later ones, Lobster now allows dependent tasks to begin as soon as sufficient input data has accumulated.
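The "start as soon as sufficient input has accumulated" idea can be sketched with a small helper. This is illustrative only, not Lobster's implementation; the function name and the units-per-task granularity are assumptions.

```python
# Hypothetical sketch: downstream tasks of a dependent workflow can
# start before the upstream workflow finishes, as soon as enough
# upstream output units have accumulated.
def ready_tasks(completed_units, units_per_task):
    """Number of downstream tasks that can start now, given how many
    upstream input units each downstream task consumes."""
    return completed_units // units_per_task

# With 250 upstream units complete and 100 units per downstream task,
# two dependent tasks can begin without waiting for the remainder.
assert ready_tasks(250, 100) == 2
```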

Resource monitoring: Lobster utilizes a new capability in Work Queue to monitor the system resources each task consumes, in order to identify bottlenecks and assign tasks optimally.
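The use of per-task resource measurements can be sketched as follows. This is a toy illustration of the idea, not the Work Queue monitoring API; the function and thresholds are hypothetical.

```python
# Hypothetical sketch: compare a task's measured peak resource usage
# against its category's allocation to spot bottlenecks (allocation
# nearly exhausted) or over-provisioning (allocation far too large).
def classify(peak, allocated):
    """Thresholds (90% and 50%) are illustrative, not Lobster's."""
    ratio = peak / allocated
    if ratio > 0.9:
        return "bottleneck"
    if ratio < 0.5:
        return "oversized"
    return "ok"

assert classify(3900, 4000) == "bottleneck"  # memory nearly exhausted
assert classify(1200, 4000) == "oversized"   # allocation can shrink
```

In practice such measurements feed back into the category specifications above, so task placement improves as a project runs.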

The capability of the Lobster opportunistic workflow management system for HEP computation has been significantly increased. We have demonstrated efficient utilization of 25k non-dedicated cores, achieving a data input rate of 9 Gb/s and an output rate of 400 GB/h. This required new capabilities in task categorization, workflow dependency specification, and resource monitoring.

Primary Keyword (Mandatory) Distributed workload management

Primary authors

Anna Elizabeth Woodard (University of Notre Dame (US)), Matthias Wolf (University of Notre Dame (US))


Co-authors

Anna Yannakopoulos (Notre Dame / Florida State University), Dr Benjamin Tovar (University of Notre Dame), Douglas Thain (University of Notre Dame), Kenyi Paolo Hurtado Anampa (University of Notre Dame (US)), Kevin Patrick Lannon (University of Notre Dame (US)), Mike Hildreth (University of Notre Dame (US)), Patrick Donnelly (University of Notre Dame), Dr Paul Brenner (University of Notre Dame), Wenzhao Li (University of Notre Dame)
