Nov 4 – 8, 2019
Adelaide Convention Centre
Australia/Adelaide timezone

Abstracting container technologies and transfer mechanisms in the Scalable CyberInfrastructure for Artificial Intelligence and Likelihood Free Inference (SCAILFIN) project

Nov 5, 2019, 5:30 PM
Riverbank R7 (Adelaide Convention Centre)

Oral: Track 7 – Facilities, Clouds and Containers


Kenyi Paolo Hurtado Anampa (University of Notre Dame (US))


High Performance Computing (HPC) facilities provide vast computational power and storage, but they generally run fixed software environments designed to address the most common local needs, making it challenging for users to bring their own software. To overcome this issue, most HPC facilities have added support for HPC-friendly container technologies such as Shifter, Singularity, or CharlieCloud. These technologies are all compatible with the more popular Docker containers; however, each one implements and invokes containers differently. These usage differences can make it difficult for end users to submit work to different HPC sites without adjusting their workflows and software, and the problem is exacerbated when workflow management software must operate across sites with differing container technologies.
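As a rough illustration of the usage differences described above, the same Docker image must be launched with a different command under each HPC-friendly runtime. The image name, paths, and flags below are hypothetical and simplified; exact invocations vary by site configuration.

```python
# Illustrative sketch: the same Docker image requires a different
# launch command under each HPC-friendly container runtime.
DOCKER_IMAGE = "reanahub/reana-env:latest"  # hypothetical image name

def run_command(technology: str, image: str, cmd: str) -> str:
    """Return a simplified site-specific launch command for a Docker image."""
    if technology == "singularity":
        # Singularity can pull Docker images via the docker:// URI scheme.
        return f"singularity exec docker://{image} {cmd}"
    if technology == "shifter":
        # Shifter references Docker images staged on the system beforehand.
        return f"shifter --image=docker:{image} {cmd}"
    if technology == "charliecloud":
        # CharlieCloud runs from an unpacked image directory via ch-run.
        return f"ch-run /scratch/{image.replace('/', '.')} -- {cmd}"
    raise ValueError(f"unsupported container technology: {technology}")

for tech in ("singularity", "shifter", "charliecloud"):
    print(run_command(tech, DOCKER_IMAGE, "python analysis.py"))
```

Without an abstraction layer, a user targeting several sites would have to encode each of these variants in their workflow by hand.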

The SCAILFIN project aims to develop and deploy artificial intelligence (AI) and likelihood-free inference (LFI) techniques and software using scalable cyberinfrastructure (CI) that spans multiple sites. The project has extended the CERN-based REANA framework, a platform designed to enable analysis reusability and reproducibility while supporting different workflow engine languages, to support submission to different HPC facilities. The work presented here focuses on the development of an abstraction layer that allows a user's workflow application to use different container technologies and different transfer protocols for moving files and directories between the HPC facility and the REANA cluster edge service.
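One way such an abstraction layer could be structured is as a per-site backend interface that hides both the container technology and the transfer protocol from the workflow layer. This is a minimal sketch under assumed names (ContainerBackend, SingularityScpBackend, the scp staging path), not the actual SCAILFIN/REANA implementation.

```python
from abc import ABC, abstractmethod

class ContainerBackend(ABC):
    """Hypothetical abstraction: each HPC site provides one backend, so
    the user's workflow only ever references a plain Docker image name."""

    @abstractmethod
    def launch(self, image: str, cmd: str) -> str:
        """Build the site-specific container launch command."""

    @abstractmethod
    def stage_in(self, src: str, dest: str) -> str:
        """Build the command that transfers inputs to the HPC site."""

class SingularityScpBackend(ContainerBackend):
    """Example backend pairing Singularity with scp-based transfers."""

    def launch(self, image, cmd):
        return f"singularity exec docker://{image} {cmd}"

    def stage_in(self, src, dest):
        # Copy workflow inputs from the REANA edge service to the site.
        return f"scp -r {src} hpc-login:{dest}"

def submit(backend: ContainerBackend, image: str, cmd: str, inputs: str):
    """The workflow layer is identical no matter which backend is used."""
    return [backend.stage_in(inputs, "/scratch/job"),
            backend.launch(image, cmd)]

steps = submit(SingularityScpBackend(), "reanahub/reana-env:latest",
               "python analysis.py", "./inputs")
```

Swapping in a Shifter or CharlieCloud backend, or a different transfer protocol, would then require no change to the workflow description itself.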

Consider for promotion: No

Primary authors

Mike Hildreth (University of Notre Dame (US))
Kenyi Paolo Hurtado Anampa (University of Notre Dame (US))
Tibor Simko (CERN)
Dr Paul Brenner (University of Notre Dame)
Mr Scott Hampton (University of Notre Dame)
Mr Cody Kankel (University of Notre Dame)
Ms Irena Johnson (University of Notre Dame)

Presentation materials