Track 7 Session

13 Apr 2015, 16:30
OIST

1919-1 Tancha, Onna-son, Kunigami-gun Okinawa, Japan 904-0495

Conveners

Track 7 Session: #1 (Site operations)

  • Jeff Templon (NIKHEF (NL))

Track 7 Session: #2 (volunteer computing, storage)

  • Federico Stagni (CERN)

Track 7 Session: #3 (use by experiments)

  • Claudio Grandi (INFN - Bologna)

Track 7 Session: #4

  • Andrew McNab (University of Manchester (GB))

Description

Clouds and virtualization

  1. Stefano Zilli (CERN)
    13/04/2015, 16:30
    Track7: Clouds and virtualization
    oral presentation
    CERN has been running a production OpenStack cloud since July 2013 to support physics computing and infrastructure services for the site. This is expected to reach over 100,000 cores by the end of 2015. This talk will cover the different use cases for this service and experiences with this deployment in areas such as user management, deployment, metering and configuration of thousands of...
  2. Dr Ulrich Schwickerath (CERN)
    13/04/2015, 16:45
    Track7: Clouds and virtualization
    oral presentation
    As part of CERN's Agile Infrastructure project, large parts of the CERN batch farm have been moved to virtual machines running on CERN's private IaaS cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, were converted into hypervisors. Due to the large spread of the per-core performance (rated in HS06) in...
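The per-core performance spread mentioned in this abstract is usually handled by scaling accounted CPU time with the HS06 rating of the host core. A minimal sketch of that normalization (ratings and reference value below are illustrative, not CERN's actual accounting code):

```python
# Sketch: normalize batch job CPU time by the HS06 per-core rating of the
# host, so accounting is comparable across heterogeneous hypervisors.
# All numbers here are illustrative, not real benchmark results.

def normalized_cpu_time(cpu_seconds: float, hs06_per_core: float,
                        reference_hs06: float = 10.0) -> float:
    """Scale raw CPU seconds to a reference machine's HS06 rating."""
    return cpu_seconds * hs06_per_core / reference_hs06

# One wallclock hour on a 12-HS06/core host counts as more reference work
# than the same hour on an 8-HS06/core host.
fast = normalized_cpu_time(3600, 12.0)  # 4320.0 reference-seconds
slow = normalized_cpu_time(3600, 8.0)   # 2880.0 reference-seconds
```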
  3. Andrew McNab (University of Manchester (GB))
    13/04/2015, 17:00
    Track7: Clouds and virtualization
    oral presentation
    We compare the Vac and Vcycle virtual machine lifecycle managers and our experiences in providing production job execution services for ATLAS, LHCb, and the GridPP VO at sites in the UK and at CERN. In both the Vac and Vcycle systems, the virtual machines are created outside of the experiment's job submission and pilot framework. In the case of Vac, a daemon runs on each physical host which...
  4. Andrew John Washbrook (University of Edinburgh (GB))
    13/04/2015, 17:15
    Track7: Clouds and virtualization
    oral presentation
    Cloud computing enables ubiquitous, convenient and on-demand access to a shared pool of configurable computing resources that can be rapidly provisioned with minimal management effort. The flexible and scalable nature of the cloud computing model is attractive to both industry and academia. In HEP, the use of the "cloud" has become more prevalent with LHC experiments making use of standard...
  5. Ben Couturier (CERN)
    13/04/2015, 17:30
    Track7: Clouds and virtualization
    oral presentation
    Docker & HEP: containerization of applications for development, distribution and preservation. HEP software stacks are not shallow. Indeed, HEP experiments' software are usually many applications in one (reconstruction, simulation, analysis, ...) and thus require many libraries - developed in-house or by third parties - to be...
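The containerization approach this abstract describes can be illustrated with a minimal, hypothetical Dockerfile that pins an OS base image and bakes an application and its libraries into a single distributable, preservable unit; image, package and path names below are placeholders, not an experiment's real stack:

```dockerfile
# Hypothetical sketch: freeze an analysis application with its libraries
# for development, distribution and preservation. Names are placeholders.
FROM centos:7
# Build dependencies, in-house or third-party
RUN yum install -y gcc make && yum clean all
# The experiment application itself (placeholder path)
COPY myanalysis/ /opt/myanalysis
RUN make -C /opt/myanalysis
ENTRYPOINT ["/opt/myanalysis/run"]
```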
  6. Sara Vallero (Universita e INFN (IT))
    13/04/2015, 17:45
    Track7: Clouds and virtualization
    oral presentation
    The INFN computing centre in Torino hosts a private Cloud, which is managed with the OpenNebula cloud controller. The infrastructure offers IaaS services to different scientific computing applications. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at LHC, an interactive analysis facility for the same experiment and a separate grid Tier-2 site for the...
  7. Andrew David Lahiff (STFC - Rutherford Appleton Lab. (GB))
    13/04/2015, 18:00
    Track7: Clouds and virtualization
    oral presentation
    The recently introduced vacuum model offers an alternative to the traditional methods that virtual organisations (VOs) use to run computing tasks at sites, where they either submit jobs using grid middleware or create virtual machines (VMs) using cloud APIs. In the vacuum model VMs are created and contextualized by the site itself, and start the appropriate pilot job framework which fetches...
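In the vacuum model described above, each site host decides on its own which VO's VM to start next, steering the long-run mix toward configured target shares. A schematic illustration of such a decision (not Vac's or Vcycle's actual algorithm; names and shares are made up):

```python
# Schematic vacuum-model decision: pick the VM type whose running
# fraction lags its configured target share the most. Illustration only.

def next_vm_type(target_shares: dict, running: dict) -> str:
    """Return the VM type with the largest deficit versus its target."""
    total = sum(running.values()) or 1  # avoid division by zero
    def deficit(vm):
        return target_shares[vm] - running.get(vm, 0) / total
    return max(target_shares, key=deficit)

# With ATLAS over its 50% share and LHCb under, the next VM is LHCb's.
choice = next_vm_type({"atlas": 0.5, "lhcb": 0.5},
                      {"atlas": 3, "lhcb": 1})  # -> "lhcb"
```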
  8. Dr Miguel Marquina (CERN)
    14/04/2015, 14:00
    Track7: Clouds and virtualization
    oral presentation
    Using virtualisation with CernVM has emerged as a de-facto standard among HEP experiments; it allows HEP analysis and simulation programs to run in cloud environments. Following the integration of virtualisation with BOINC and CernVM, first pioneered for simulation of event generation in the Theory group at CERN, the LHC experiments ATLAS, CMS and LHCb have all...
  9. David Cameron (University of Oslo (NO))
    14/04/2015, 14:15
    Track7: Clouds and virtualization
    oral presentation
    A recent common theme in HEP computing is the exploitation of opportunistic resources in order to provide the maximum statistics possible for Monte Carlo simulation. Volunteer computing has been used over the last few years in many other scientific fields and by CERN itself to run simulations of the LHC beams. The ATLAS@Home project was started to allow volunteers to run simulations of...
  10. Laurence Field (CERN)
    14/04/2015, 14:30
    Track7: Clouds and virtualization
    oral presentation
    Volunteer computing remains an untapped opportunistic resource for the LHC experiments. The use of virtualization in this domain was pioneered by the Test4theory project and enabled the running of high-energy particle physics simulations on home computers. This paper describes the model for CMS to run workloads using a similar volunteer computing platform. It is shown how the original approach...
  11. Mr Thomas Hauth (KIT - Karlsruhe Institute of Technology (DE))
    14/04/2015, 14:45
    Track7: Clouds and virtualization
    oral presentation
    Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events as well as for Monte Carlo simulation. The Institut für Experimentelle Kernphysik (EKP) at KIT is participating in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to...
  12. Maria Arsuaga Rios (CERN)
    14/04/2015, 15:00
    Track7: Clouds and virtualization
    oral presentation
    Amazon S3 is a widely adopted protocol for scalable cloud storage that could also fulfill storage requirements of the high-energy physics community. CERN has been evaluating this option using some key HEP applications such as ROOT and the CernVM filesystem (CvmFS) with S3 back-ends. In this contribution we present our evaluation based on two versions of the Huawei UDS storage system used from...
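The S3 protocol evaluated in this contribution is a plain HTTP API. As an illustration, a presigned GET URL under the older AWS Signature Version 2 scheme can be built with nothing but the standard library; the endpoint, bucket, key and credentials below are placeholders, not CERN's actual setup:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def presign_s3_get(endpoint: str, bucket: str, key: str,
                   access_key: str, secret_key: str, expires: int) -> str:
    """Build a presigned S3 GET URL (AWS Signature Version 2 sketch)."""
    # SigV2 string-to-sign for a plain GET with no extra headers
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = quote(base64.b64encode(digest).decode(), safe="")
    return (f"{endpoint}/{bucket}/{key}"
            f"?AWSAccessKeyId={access_key}&Expires={expires}"
            f"&Signature={signature}")

# Placeholder endpoint and credentials for illustration only
url = presign_s3_get("https://s3.example.org", "cvmfs-repo", "data.root",
                     "AKIDEXAMPLE", "secret", 1700000000)
```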
  13. Paul Millar (Deutsches Elektronen-Synchrotron (DE))
    14/04/2015, 15:15
    Track7: Clouds and virtualization
    oral presentation
    Traditionally storage systems have had well understood responsibilities and behaviour, codified by the POSIX standards. More sophisticated systems (such as dCache) support additional functionality, such as storing data on media with different latencies (SSDs, HDDs, tapes). From a user's perspective, this forms a relatively simple adjunct to POSIX: providing optional quality-of-service...
  14. Dirk Hufnagel (Fermi National Accelerator Lab. (US))
    14/04/2015, 15:30
    Track7: Clouds and virtualization
    oral presentation
    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize "opportunistic" resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and...
  15. Ian Gable (University of Victoria (CA))
    14/04/2015, 16:30
    Track7: Clouds and virtualization
    oral presentation
    The use of distributed IaaS clouds with the CloudScheduler/HTCondor architecture has been in production for HEP and astronomy applications for a number of years. The design has proven to be robust and reliable for batch production using HEP clouds, academic non-HEP (opportunistic) clouds and commercial clouds. Further, the system is seamlessly integrated into the existing WLCG...
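In the CloudScheduler/HTCondor architecture described above, users submit ordinary HTCondor batch jobs and the cloud layer boots VMs to drain the queue. A schematic HTCondor submit description (all values are placeholders, not the production configuration):

```
# Schematic HTCondor submit description file; values are placeholders.
universe     = vanilla
executable   = run_analysis.sh
arguments    = dataset_001
output       = job.out
error        = job.err
log          = job.log
request_cpus = 1
queue
```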
  16. Dr David Colling (Imperial College Sci., Tech. & Med. (GB))
    14/04/2015, 16:45
    Track7: Clouds and virtualization
    oral presentation
    The resources CMS is using are increasingly being offered as clouds. In Run 2 of the LHC the majority of CMS CERN resources, both in Meyrin and at the Wigner Computing Centre, will be presented as cloud resources on which CMS will have to build its own infrastructure. This infrastructure will need to run all of the CMS workflows including: Tier 0, production and user analysis. In addition, the...
  17. Ryan Taylor (University of Victoria (CA))
    14/04/2015, 17:00
    Track7: Clouds and virtualization
    oral presentation
    The ATLAS experiment has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology....
  18. Dr Randy Sobie (University of Victoria (CA))
    14/04/2015, 17:15
    Track7: Clouds and virtualization
    oral presentation
    The Belle II experiment is developing a global computing system for the simulation of MC data prior to collecting real collision data in the next few years. The system utilizes the grid middleware used in the WLCG and uses the DIRAC workload manager. We describe how IaaS cloud resources are being integrated into the Belle II production computing system in Australia and Canada. The IaaS...
  19. Andrew McNab (University of Manchester (GB))
    14/04/2015, 17:30
    Track7: Clouds and virtualization
    oral presentation
    The LHCb experiment has been running production jobs in virtual machines since 2013 as part of its DIRAC-based infrastructure. We describe the architecture of these virtual machines and the steps taken to replicate the WLCG worker node environment expected by user and production jobs. This relies on the CernVM 3 system for providing root images for virtual machines. We use the cvmfs...
  20. Ms Bowen Kan (Institute of High Energy Physics, Chinese Academy of Sciences)
    14/04/2015, 17:45
    Track7: Clouds and virtualization
    oral presentation
    Mass data processing and analysis contribute much to the development and discoveries of a new generation of High Energy Physics. The BESIII experiment at IHEP (Institute of High Energy Physics, Beijing, China) studies particles in the tau-charm energy region, ranging from 2 GeV to 4.6 GeV, and requires massive storage and computing resources, which is a typical kind of data-intensive...
  21. Alexander Baranov (ITEP Institute for Theoretical and Experimental Physics (RU))
    14/04/2015, 18:00
    Track7: Clouds and virtualization
    oral presentation
    Computational grid (or simply 'grid') infrastructures are powerful but restricted in several respects: grids are incapable of running user jobs compiled with a non-authentic set of libraries, and it is difficult to restructure grids to adapt to peak loads. At the same time, if grids are not loaded with user tasks, owners still have to pay for electricity and hardware maintenance. So a grid is not...
  22. Gerardo Ganis (CERN)
    16/04/2015, 09:00
    Track7: Clouds and virtualization
    oral presentation
    Cloud resources nowadays contribute an essential share of resources for computing in high-energy physics. Such resources can be provided either by private or public IaaS clouds (e.g. OpenStack, Amazon EC2, Google Compute Engine) or by volunteers' computers (e.g. LHC@Home 2.0). In any case, experiments need to prepare a virtual machine image that provides the execution environment for the...
  23. Ioannis Charalampidis (CERN)
    16/04/2015, 09:15
    Track7: Clouds and virtualization
    oral presentation
    Lately there is a trend in scientific projects to look for computing resources in the volunteering community. In addition, to reduce the development effort required to port the scientific software stack to all the known platforms, the use of Virtual Machines (VMs) as end-projects is becoming increasingly popular. Unfortunately, the installation and the interfacing with the existing...
  24. Dario Berzano (CERN)
    16/04/2015, 09:30
    Track7: Clouds and virtualization
    oral presentation
    During the last years, several Grid computing centers chose virtualization as a better way to manage diverse use cases with self-consistent environments on the same bare infrastructure. The maturity of control interfaces (such as OpenNebula and OpenStack) opened the possibility to easily change the amount of resources assigned to each use case by simply turning on and off virtual machines....
  25. Marek Kamil Denis (CERN)
    16/04/2015, 09:45
    Track7: Clouds and virtualization
    oral presentation
    Cloud federation brings an old concept into new technology, allowing independent cloud installations to share resources. Cloud computing is starting to play a major role in HEP and e-science, allowing resources to be obtained on demand. Cloud federation supports sharing between independent organizations and companies coming from the commercial world, such as public clouds, bringing new ways...
  26. Dr Stefano Bagnasco (I.N.F.N. TORINO)
    16/04/2015, 10:00
    Track7: Clouds and virtualization
    oral presentation
    The present work aims at optimizing the use of computing resources available at the Italian grid Tier-2 sites of the ALICE experiment at the CERN LHC by making them accessible to interactive distributed analysis, thanks to modern solutions based on cloud computing. The scalability and elasticity of the computing resources via dynamic ("on-demand") provisioning is essentially limited by the size of...
  27. Parag Mhashilkar (Fermi National Accelerator Laboratory)
    16/04/2015, 10:15
    Track7: Clouds and virtualization
    oral presentation
    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each...