HEPiX Spring 2012 Workshop

Europe/Prague
J. Heyrovsky Institute of Physical Chemistry

Dolejškova 2155/3, 182 23 Prague 8, Czech Republic
Michel Jouvin (LAL / IN2P3), Milos Lokajicek (Acad. of Sciences of the Czech Rep. (CZ)), Sandy Philpott (JLAB)
Description

HEPiX meetings bring together IT system support engineers from the High Energy Physics (HEP) laboratories, institutes, and universities, such as BNL, CERN, DESY, FNAL, IN2P3, INFN, JLAB, NIKHEF, RAL, SLAC, TRIUMF and others.

Meetings have been held regularly since 1991, and are an excellent source of information for IT specialists in scientific high-performance and data-intensive computing disciplines. We welcome participation from related scientific domains for the cross-fertilization of ideas.

The hepix.org website provides links to information from previous meetings.

Group photo
Logistics
Photos
Trip report
Participants
  • Alan Silverman
  • Alex Iribarren
  • Alexander Pattison
  • Andrea Chierici
  • Andrei Maslennikov
  • Arne Wiebalck
  • Bastian Neuburger
  • Bob Cowles
  • Chen Yi Chien
  • Christopher Huhn
  • Connie Sieh
  • Daniel Dvorak
  • Daniel Gomez Blanco
  • David Kelsey
  • David Sanford
  • Derek Feichtinger
  • Dirk Jahnke-Zumbusch
  • Dmitry Nilsen
  • Dmitry Ozerov
  • Emmanouil Vamvakopoulos
  • Eric Bonfillou
  • Eric Fede
  • Francesco Prelz
  • Frederic Schaer
  • George Jones
  • Gerard Bernabeu
  • Giacomo Tenaglia
  • Gilles Mathieu
  • Helga Schwendicke
  • Helge Meinhard
  • Hironori Ito
  • Ian Bird
  • Ian Peter Collier
  • Jakub Moscicki
  • James Adams
  • James Borden
  • Jan Kundrat
  • Jan Svec
  • Jan Trautmann
  • Jaroslav Cvach
  • Jaroslav Vojtech
  • Jean-Claude Desvignes
  • Jens Timmerman
  • Jingyan Shi
  • Jiri Chudoba
  • Jiri Horky
  • John Gordon
  • Jose Castro Leon
  • Karel Piska
  • Keith Chadwick
  • Larry Pezzaglia
  • Lenka Gogova
  • Lubos Kolar
  • Lukas Fiala
  • Manfred Alef
  • Manuel Guijarro
  • Marek Elias
  • Mark Godwin
  • Martin Bly
  • Mattias Wadenstein
  • Michel Jouvin
  • Michele Michelotto
  • Milos Lokajicek
  • Nadine Neyroud
  • Nam Gyu Kim
  • Nils Høimyr
  • Nina Loktionova
  • Nina Tumova
  • Ofer Rind
  • Owen Millington Synge
  • Patricia Mendez Lorenzo
  • Patrick Fuhrmann
  • Paul Kuipers
  • Pedro Andrade
  • Peter Gronbech
  • Phil Wilson
  • Philippe Olivero
  • Pirmin Fix
  • Qi Fazhi
  • Randal Melen
  • Reinhard Baltrusch
  • Remi Mollon
  • Roger Goff
  • Roman Matousek
  • Roman Rusnok
  • Sandra Philpott
  • Seung Hee Lee
  • Shawn Mc Kee
  • Thomas Bellman
  • Thomas Finnern
  • Tina Friedrich
  • Tomas Kouba
  • Tony Cass
  • Ulf Tigerstedt
  • Vladimir Sapunenko
  • Walter Schön
  • Wayne Salter
  • Wolfgang Friebel
  • Yves Kemp
    • 09:00 09:30
      Welcome
      • 09:00
        Welcome by Prof. Jan Ridky, Director of the Institute of Physics AS CR 7m
        Speaker: Prof. Jan Ridky (Institute of Physics AS CR)
      • 09:07
        Welcome by Prof. Rupert Leitner, Chairman of the Committee for Collaboration of the Czech Republic with CERN 7m
        Speaker: Rupert Leitner (Inst. of Particle and Nuclear Phys.)
        Slides
      • 09:15
        Logistics 15m
        Workshop logistics
        Speaker: Jiri Horky (Acad. of Sciences of the Czech Rep. (CZ))
        Poster
        Slides
    • 09:30 10:30
      Site Reports
      • 09:30
        Prague site report 15m
        A typical site report on the HEP computing activities at the Institute of Physics in Prague, extended with information about local users' experience and the participating institutions.
        Speaker: Jiri Chudoba (Institute of Physics)
        Slides
      • 09:45
        Fermilab Site Report - Spring 2012 HEPiX 15m
        The Fermilab Site Report
        Speaker: Dr Keith Chadwick (Fermilab)
        Paper
        Slides
      • 10:00
        Site Report Nikhef 10m
        Overview of the changes since the last site report.
        Speaker: Paul Kuipers (Nikhef)
        Slides
      • 10:10
        NDGF Site Report 10m
        Update on recent developments in NDGF
        Speaker: Erik Mattias Wadenstein
        Slides
      • 10:20
        AGLT2 Site Report 10m
        We will report on the ATLAS Great Lakes Tier-2 which is co-located at Michigan State University and the University of Michigan. We will describe some of our recent efforts at providing service resiliency and "business continuity" even if one of our two sites were offline for an extended period.
        Speaker: Shawn Mc Kee (High Energy Physics)
        Slides
    • 10:30 11:00
      Coffee Break 30m
    • 11:00 12:30
      Site Reports
      • 11:00
        CC-IN2P3 Site Report 10m
        News from CC-IN2P3 over the past year.
        Speaker: Philippe Olivero (CC-IN2P3)
        Slides
      • 11:10
        PIC Site Report Spring 2012 10m
        PIC is a scientific-technological center providing High Throughput and Data Processing services to various scientific disciplines: High Energy Physics, Astrophysics, Cosmology and Life Sciences among others. To fulfill these communities' requirements it needs to maintain a steep capacity growth while keeping high levels of reliability. Thanks to technology improvements, in recent years it has been possible to support this growth while keeping the overall energy budget about constant. However, the power limit is getting close and, as user communities and their requirements keep growing, energy efficiency has become a key metric to ensure the sustainability of the activity; power efficiency has therefore become the metric for most purchasing decisions. The current input power limitations are a 200 kVA UPS line to our main computer room and a 100 kVA UPS line to an independent, more energy-efficient module (PUE 2.3 vs 1.55; see the sketch after this entry). While a significant improvement of the situation is in sight, as of today we have to make do with the available power. That means our efforts are mostly channeled into "going green": saving as much energy as possible while still delivering the required service. With that in mind, different techniques are being experimented with: improvements to racks to enhance cooling by controlling air flows; newer, more efficient equipment with a better performance-per-watt ratio; and virtualization technologies that allow us to consolidate servers. In this presentation a brief description of the experiments running at PIC will be given. Furthermore, the status of the PIC site will be presented: computing resources, storage resources, and datacenter and software considerations regarding the most resource-consuming project, LHC, as well as other projects; we will also try to give some insight into the trade-offs found.
        Speaker: Gerard Bernabeu
        Slides
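        For context: PUE (Power Usage Effectiveness) is the ratio of total facility power to the power delivered to IT equipment. A minimal sketch of what the two figures quoted above mean for usable IT capacity, using only the input-power and PUE values given in the abstract:

        ```python
        # PUE = total facility power / IT equipment power, so the IT load
        # a fixed input feed can sustain is roughly input_power / PUE.
        def it_capacity_kva(input_kva: float, pue: float) -> float:
            return input_kva / pue

        print(it_capacity_kva(200, 2.3))   # main room:  ~87 kVA usable for IT
        print(it_capacity_kva(100, 1.55))  # new module: ~64.5 kVA usable for IT
        ```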
      • 11:20
        GSI site report 10m
        GSI site report
        Speaker: Walter Schoen (GSI)
        Slides
      • 11:30
        INFN-T1 Site report 10m
        We will show recent improvements in our T1 infrastructure
        Speaker: Andrea Chierici (INFN-CNAF)
        Slides
      • 11:40
        CERN site report 20m
        News from CERN since the previous meeting in Vancouver
        Speaker: Dr Helge Meinhard (CERN)
        Slides
      • 12:00
        Oxford and SouthGrid site report 10m
        Speaker: Peter Gronbech (Particle Physics)
        Slides
      • 12:10
        ASGC site report 10m
        ASGC site report
        Speaker: Ms Jinny Chien (ASGC)
        Slides
      • 12:20
        SLAC Site Report - Spring 2012 HEPiX 10m
        Spring 2012 HEPiX Site Report for the SLAC National Accelerator Laboratory
        Speaker: Randy Melen
        Slides
    • 12:30 14:00
      Lunch 1h 30m
    • 14:00 15:30
      IT Infrastructure
      • 14:00
        EasyBuild: building software with ease. 30m
        EasyBuild is an open-source build framework written in Python that enables you to install software in a repeatable and consistent way. It was motivated by the need for a system that would allow us to build and install multiple versions of a software package, built with different toolchains, in an automated manner, even though a huge number of software packages deviate from the standard configure / make / make install procedure. (A sketch of the recipe format follows this entry.)
        Speaker: Jens Timmerman
        Slides
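        For illustration, a minimal easyconfig-style recipe of the kind EasyBuild consumes: plain Python assignments describing what to build and with which toolchain. Parameter names follow the EasyBuild documentation; the package and toolchain values here are a sketch, not taken from the talk.

        ```python
        # Minimal EasyBuild-style easyconfig sketch: EasyBuild evaluates these
        # assignments and drives a configure / make / make install build from them.
        name = 'HPL'
        version = '2.0'
        homepage = 'http://www.netlib.org/benchmark/hpl/'
        description = "High-Performance Linpack benchmark"
        # A toolchain bundles compiler, MPI and maths libraries for the build.
        toolchain = {'name': 'goalf', 'version': '1.1.0-no-OFED'}
        sources = ['hpl-%(version)s.tar.gz']
        source_urls = ['http://www.netlib.org/benchmark/hpl/']
        ```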
      • 14:30
        DB On Demand at CERN 20m
        This presentation gives an introduction to the Database On Demand service at CERN. The need to offer support for different database systems is on the rise within our user community; for example, one use case involves applications certified to run only with non-Oracle databases (e.g. MySQL). Additional use cases stem from the desire expressed by user groups to easily deploy and maintain different DBMS systems for their internal application development. DB On Demand covers this need by offering users an easy way to create and manage their databases through a system that is fully integrated with different CERN technologies.
        Speaker: Daniel Gomez Blanco (CERN)
        Slides
      • 14:50
        Database access management system 20m
        This talk will present the access management system used by the CERN Database Services group to allow intercommunication between the nodes of the various database clusters, and also to grant and maintain access to database and application servers for administrators and users. Started as a simple SSH key management tool, the system has evolved together with the CERN IT infrastructure, now providing integration with Kerberos, authorization via LDAP groups, and so on.
        Speaker: Giacomo Tenaglia (CERN)
        Slides
      • 15:10
        Update on CERN infrastructure services 20m
        The talk will address the evolution of CERN computing infrastructure services such as CVS, SVN and TWiki as well as the Engineering Linux server infrastructure. Furthermore, the project to set up a central Issue Tracking Service based on Atlassian JIRA will be presented, along with a description of the IT BOINC application service for Volunteer Computing.
        Speaker: Nils Høimyr (CERN)
        Slides
    • 15:30 16:00
      Coffee Break 30m
    • 16:00 17:10
      IT Infrastructure
      • 16:00
        Experience with new Service Management at CERN 20m
        The CERN Service Management project was born to fulfil the need for a global, homogeneous and efficient service organization for all services provided to users. In February 2011 the project entered operation, covering in a first step services of the IT and GS departments. The CERN Service Management infrastructure is based on ITIL best practices; its most significant elements are a single Service Desk, a Service Portal providing access to the service catalogue, and a set of standard processes. The tool chosen to support the project and to give a single entry point to both supporters and users is a commercial product, Service-Now, whose infrastructure has been adapted to accommodate the structures and needs of the laboratory. This talk presents the status of the project after one year of operation, including the user's and supporter's perspectives and the volume of work handled in the first year, as well as the status of and plans for the tool. Future developments such as the upcoming Service Level Management procedures, new processes and facilities will also be discussed.
        Speaker: Dr Patricia Mendez Lorenzo (CERN)
        Slides
      • 16:20
        New burn-in test 20m
        This talk will provide an overview of CERN's new burn-in test system. It will mainly focus on the reasons why a new system is needed (operational factors, adaptation to remote hosting) and how it is being implemented and evaluated. The first results will be presented, as the acceptance of about 1500 servers relied on the system.
        Speaker: Eric Bonfillou (CERN)
        Slides
      • 16:40
        ARTEMIS 30m
        ARTEMIS is a lightweight system developed at the STFC e-Science centre for collecting, viewing and analysing environmental data from distributed sensors in data-centre environments.
        Speaker: James Adams (STFC RAL)
        Slides
    • 17:10 19:00
      Welcome Drink 1h 50m
    • 17:40 18:10
      Server room visit 30m server room (Institute of Physics AS CR)

      Prague Tier-2 centre visit for those interested. See the options on the information boards near the registration desk.

    • 08:55 10:30
      Site Reports
      • 08:55
        RAL Site Report 10m
        An update on events at RAL
        Speaker: Mr Martin Bly (STFC/RAL)
        Slides
      • 09:05
        BEIJING-LCG2 Site Report 10m
        The current status of the BEIJING-LCG2 site.
        Speaker: Jingyan Shi (IHEP)
        Slides
      • 09:15
        GridKa Site Report 10m
        Current status and latest news at GridKa, e.g. hardware status, storage systems, and middleware deployment.
        Speaker: Dmitry Nilsen
        Slides
      • 09:25
        BNL RACF Site Report 10m
        A summary of developments at BNL's RACF since the last HEPiX meeting.
        Speaker: Dr Ofer Rind (BROOKHAVEN NATIONAL LABORATORY)
        Slides
      • 09:35
        Diamond Light Source site report 10m
        Update on last year's projects and developments.
        Speaker: Tina Friedrich (Diamond Light Source Ltd)
        Slides
      • 09:45
        DESY site report 15m
        DESY site report
        Speaker: Yves Kemp (Deutsches Elektronen-Synchrotron)
        Slides
      • 10:00
        PSI Site Report 10m
        Developments at PSI over the last year, among them: introduction of 10GbE for fast detectors, introduction of AFS Object storage, GPFS + CTDB update
        Speaker: Dr Derek Feichtinger (PSI)
        Slides
      • 10:10
        PDSF at NERSC -- Site Report 10m
        PDSF is a commodity Linux cluster at NERSC which has been in continuous operation since 1996. This talk will provide a status update on the PDSF system and summarize recent changes at the NERSC Center. Highlighted PDSF changes include the conversion to xCAT-managed netboot node images, the ongoing deployment of Scientific Linux 6, and the introduction of XRootD for STAR.
        Speaker: Larry Pezzaglia (LBNL)
        Slides
      • 10:20
        JLAB Site Report 10m
        JLAB site update
        Speaker: Sandy Philpott (JLAB)
        Slides
    • 10:30 11:00
      Coffee Break 30m
    • 11:00 12:30
      IT Infrastructure
      • 11:00
        Agile Infrastructure at CERN: Introduction 15m
        Over the past decade, CERN-IT has successfully managed thousands of machines for specific services in the CERN computer centre, using dedicated home-grown tools for configuration, installation and monitoring. However, a more dynamic and flexible approach is needed in order to provide new services, reduce inefficiencies, address business continuity, and cope with a remote extension to the Tier-0 compute facility. The presentation will explain the motivation, and introduce the Agile Infrastructure project detailed in the subsequent presentations.
        Speaker: Dr Helge Meinhard (CERN)
        Slides
      • 11:15
        Agile Infrastructure: Configuration and Operation Tools 25m
        Configuration management is not new to CERN. For more than a decade, CERN has built its own Quattor-based configuration management infrastructure, which is currently used to manage the configuration of several thousand machines. Experience shows that our infrastructure has several limitations; moreover, it will not scale to manage the configuration of a heavily virtualised computer centre spread across two distant sites. This talk unveils those limitations and describes the technical choices made when designing a new configuration management infrastructure. An update on the current status of its implementation is also provided.
        Speaker: Dr Helge Meinhard (CERN)
        Slides
      • 11:40
        Agile Infrastructure Monitoring 25m
        The Agile Infrastructure (AI) project will establish a flexible and dynamic management of CERN computer centre resources. From the infrastructure monitoring perspective, the AI project is working towards a common monitoring architecture that allows accessing and correlating information about all computer centre resources. This new monitoring solution will simplify the sharing of monitoring data and enable complex monitoring analysis tasks. This talk will motivate the need for such an architecture, explain its building blocks, and present the selected technologies.
        Speaker: Mr Pedro Manuel Rodrigues De Sousa Andrade (CERN)
        Slides
      • 12:05
        Agile Infrastructure: IaaS 25m
        The Agile Infrastructure project leverages the emerging OpenStack cloud computing framework and the Puppet configuration tool to provide Infrastructure as a Service in a sustainable and scalable way, while also ensuring sufficient resource flexibility and availability. In this solution, the different services will be connected and scheduled over the IaaS layer through the same entry point (see the sketch after this entry). Moreover, a common approach to organizing the resources used by each service will be deployed, so that usage tracking, auditing, authorization, etc. can be correlated for purposes such as security, accounting and isolation. This presentation will give an overview of the current status of the OpenStack IaaS implementation and future plans, along with how this fits into the CERN computing model.
        Speaker: Jose Castro Leon (Universidad de Oviedo (ES))
        Slides
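        As a hedged illustration of the single IaaS entry point idea, a sketch of a server-create request against the OpenStack Compute (Nova) REST API; the endpoint, token and IDs are placeholders, not CERN values:

        ```python
        # Sketch: boot a VM through the OpenStack Compute API (v2-era JSON body).
        import json
        import urllib.request

        endpoint = "https://cloud.example.org:8774/v2/TENANT_ID"  # placeholder
        token = "AUTH_TOKEN"  # normally obtained from the Keystone identity service

        req = urllib.request.Request(
            endpoint + "/servers",
            data=json.dumps({"server": {
                "name": "batch-worker-001",
                "imageRef": "IMAGE_UUID",   # placeholder image ID
                "flavorRef": "FLAVOR_ID",   # placeholder flavor ID
            }}).encode(),
            headers={"Content-Type": "application/json", "X-Auth-Token": token},
        )
        with urllib.request.urlopen(req) as resp:
            print(resp.status, json.load(resp)["server"]["id"])
        ```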
    • 12:30 14:00
      Lunch 1h 30m
    • 14:00 15:35
      Storage
      • 14:00
        The CERN Storage Services Strategy 25m
        CERN IT is faced with a rich set of requirements when it comes to the provisioning of storage services. This talk will give an overview of the challenges and constraints, describe which part of the phase space the current services (AFS, CASTOR, EOS) cover and present the short- and mid-term storage strategy. In addition, the current status of ongoing investigations of alternative storage solutions will be summarised.
        Speaker: Mr Jakub Moscicki (CERN)
        Slides
      • 14:25
        Update on CASTOR tape services 10m
        News from tape services for CASTOR at CERN
        Speaker: Alex Iribarren (CERN)
        Slides
      • 14:35
        Backup Infrastructure at CERN 15m
        CERN's current backup and archive service hosts 5.5 PB of data in more than 1.6 billion files. We have over 1200 clients which back up or restore an average of 50 TB of data each day (see the back-of-the-envelope sketch after this entry). At the current growth rate, we expect to hold about 7 PB by the end of 2012. In this contribution we present CERN's backup and archive service, based on IBM Tivoli Storage Manager. We show the architecture and design of the system, the user requirements and the operational issues, as well as the current limitations of TSM and what we hope will be improved in the future.
        Speaker: Alex Iribarren (CERN)
        Slides
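        A quick back-of-the-envelope check of the service figures quoted in the abstract (decimal units; a sketch, nothing beyond the quoted numbers is assumed):

        ```python
        # Derived quantities from the quoted figures: 5.5 PB, 1.6e9 files,
        # 1200 clients moving 50 TB/day in total.
        data_pb, files = 5.5, 1.6e9
        print(data_pb * 1e9 / files)      # ~3.4 MB average file size
        daily_tb, clients = 50, 1200
        print(daily_tb * 1e3 / clients)   # ~42 GB per client per day on average
        ```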
      • 14:50
        The InfiniBand-based Lustre at GSI 25m
        A new InfiniBand-based part of the Lustre system is being built at the "Mini-Cube" at GSI. The connection to the IP-based Lustre part is realised with LNET routers.
        Speaker: Dr Walter Schoen (GSI)
        Slides
      • 15:15
        Building the Czech national storage facility 20m
        CESNET, the Czech national research and education network provider, no longer provides plain network connectivity only: it now also coordinates and builds e-infrastructure services. Among them, a national storage facility is being put into operation. The aim of the project is to provide more than 15 PB of storage in three geographically distributed data centers for the research and science community. The storage center in Pilsen is now starting operation with an initial set of access protocols. We present challenges, design choices and future plans, as well as consequences for the HEP community in the Czech Republic.
        Speaker: Jiri Horky (Institute of Physics)
        Slides
    • 15:35 16:05
      Coffee Break 30m
    • 16:05 17:05
      Storage
      • 16:05
        Data Protection Technologies: What comes after RAID? 30m
        Increasing data volumes and demand for access impose serious challenges on data storage infrastructures. The risk of data corruption increases over time, and the growth of disk sizes leads to unacceptably long recovery times in usual RAID configurations. Recently a new solution, erasure coding, has appeared in the cloud. It is based on a mathematical construction and involves dividing the data into a large number of small segments, which are then expanded with redundant coding information and spread over multiple locations, in some cases across multiple IaaS providers. Erasure coding in combination with RAIN (Redundant Array of Inexpensive Nodes) allows a significant reduction of the overhead needed to ensure data redundancy similar to RAID, while permitting recovery from storage-node failures (a worked overhead comparison follows this entry). Several solutions implementing erasure coding are already available on the market.
        Speaker: Dr Vladimir Sapunenko (INFN)
        Slides
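        To make the overhead argument concrete, a small sketch comparing plain replication with a (k, m) erasure code, where an object is cut into k fragments and m coded fragments are added so that any k of the k+m fragments suffice to rebuild it; the parameters below are illustrative only:

        ```python
        # Raw bytes stored per byte of user data, for two redundancy schemes.
        def replication_overhead(copies: int) -> float:
            return float(copies)

        def erasure_overhead(k: int, m: int) -> float:
            # k data fragments + m coded fragments; any k fragments rebuild the object
            return (k + m) / k

        print(replication_overhead(3))  # 3.0 -> 200% extra, survives 2 lost copies
        print(erasure_overhead(10, 4))  # 1.4 ->  40% extra, survives 4 lost fragments
        ```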
      • 16:35
        Evolving the AFS Service at CERN 30m
        In order to provide a scalable service, the AFS infrastructure at CERN is currently undergoing an architectural change from Fibre Channel-based fabrics towards external SAS-based storage. This talk will present the design choices taken, cover the technologies involved, highlight the features and discuss limitations along with their potential solutions.
        Speaker: Mr Jakub Moscicki (CERN)
        Slides
    • 17:05 19:30
      BOF: Grid Engine
      • 17:05
        BOF Grid Engine 1h 30m Institute of Physics

        This is the first face-to-face meeting following the CC-IN2P3 initiative to start a collaboration around Grid Engine with the scientific sites using it. To prepare the meeting and discuss concrete matters, a list of Grid Engine features requested from Oracle by CC-IN2P3 is given in a provisional wiki, https://forge.in2p3.fr/projects/gess/wiki#Lack-of-functionalities, along with a list of interested sites and their configurations. We would like to suggest the following agenda: self-introduction of participants, with an emphasis on expectations; collection and assessment of proposals for activities of the group; discussion of the requirements list just mentioned; organisational topics; AOB (hopefully a lot...). This is only a suggestion, open for discussion on the list or even at HEPiX before the beginning of the meeting itself.
        Speaker: Philippe Olivero (CC-IN2P3)
        Slides
    • 17:10 17:35
      Server room visit 25m server room (Institute of Physics AS CR)

      Prague Tier-2 centre visit for those interested. See the options on the information boards near the registration desk.

    • 08:45 10:30
      Computing
      • 08:45
        CPU Benchmarking at GridKa (Update 04/2012) 20m
        Comparative benchmarking of cluster hardware with the latest generations of processors, and an investigation of the influence of BIOS settings such as symmetric multiprocessing and turbo mode.
        Speaker: Manfred Alef (Karlsruhe Institute of Technology (KIT))
        Slides
      • 09:05
        HEP-SPEC06 on Bulldozer and Sandy Bridge processors 25m
        We received dual-socket machines with 2x Xeon E5 and 2x Opteron 62xx multicore processors. The HEP-SPEC06 measurements will be presented and compared with previous-generation processors (a sketch of how HS06 scores are aggregated follows this entry).
        Speaker: Dr Michele Michelotto (INFN Padua & CMS)
        Slides
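        As a reminder of how HEP-SPEC06 machine scores are commonly described as being aggregated (one benchmark instance per core; each instance scores as a geometric mean of its per-benchmark ratios; the machine score is the sum over instances), a sketch with invented numbers:

        ```python
        # Hedged sketch of HS06-style aggregation; the ratios below are made up.
        from math import prod

        def instance_score(ratios):
            # geometric mean of the per-benchmark performance ratios of one instance
            return prod(ratios) ** (1.0 / len(ratios))

        # 8 cores -> 8 parallel instances, each running the 7 C++ benchmarks
        instances = [[15.2, 18.1, 12.9, 16.4, 14.8, 17.3, 13.5]] * 8
        hs06 = sum(instance_score(r) for r in instances)
        print(round(hs06, 1))  # machine score = sum of per-instance scores
        ```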
      • 09:30
        Hardware evaluation 2012 15m
        We have performed several evaluations of new hardware during the last year, ranging from the new generation of worker nodes to the testing of a deduplication solution. In the talk, we present results from testing a deduplication technology from Fujitsu using synthetic data as well as real-life backup and HEP experiment data. It is a common worry that disk performance can hardly scale with the increasing number of cores in worker nodes; we show a disk performance scaling evaluation using a Dell C6145 server equipped with 64 cores and up to 10 disk drives. We also show HEP-SPEC06 and performance-per-watt numbers for new Intel Sandy Bridge processors.
        Speaker: Jiri Horky (Institute of Physics)
        Slides
      • 09:45
        How we carried out a migration to Oracle Grid Engine at CC-IN2P3 20m
        In 2011, CC-IN2P3 replaced its old custom batch system, BQS, with Oracle Grid Engine (OGE). After briefly recalling the reasons why we did it, this talk will present how we carried out the migration: the method, the different steps, and the problems we had to solve. It will finish with the current situation and with our attempt to encourage a collaboration between sites running xGE clusters.
        Speaker: Philippe Olivero
        Slides
      • 10:05
        Gridengine Upgrade at DESY 25m
        DESY IT supports a number of batch systems for various scientific groups. As these gain more and more importance, we investigated the open-source versions of Grid Engine to profit from freely available updates and features and to be prepared for the challenges to come. We report on our experience during the testing, migration and early production phases, and will give an outlook on future plans for the Son of Grid Engine on different DESY resources.
        Speaker: Pirmin Fix (DESY)
        Slides
    • 10:30 11:00
      Coffee Break 30m
    • 11:00 12:30
      Grid & Cloud
      • 11:00
        Virtualisation working group progress report 20m
        This presentation will summarise the progress of the Virtualisation working group since the Vancouver meeting.
        Speaker: Tony Cass (CERN)
        Slides
      • 11:20
        Image publishing and subscribing. 20m
        Publishing images involves creating consistent, up-to-date images and managing image metadata. At DESY (HH), VM images are created using virt-install, kickstart, vmimagemanager and puppet. The HEPiX VWG has recently changed the image format due to multi-core job requirements, and the image list subscriber now provides an event interface to allow easier integration with clouds and with projects such as the StratusLab marketplace.
        Speaker: Owen Millington Synge
        Slides
      • 11:40
        Virtualisation & Cloud Projects at RAL Tier 1 30m
        Update on various virtualisation and cloud computing projects at the RAL Tier 1
        Speaker: Ian Collier (UK Tier1 Centre)
        Slides
      • 12:10
        CHOS in Production -- Supporting Multiple Linux Environments on PDSF at NERSC 20m
        The CHOS[1] software package combines a Linux kernel module, a PAM module, and batch system integration to provide a mechanism for concurrently supporting multiple Linux environments on a single Linux system. This presentation gives an introduction to CHOS and details how NERSC has deployed this utility on the PDSF HPC system to meet the complex, and often conflicting, software environment requirements of multiple applications. The CHOS utility has been in continuous use on PDSF for over 8 years, and has proven to be a robust and simple approach to ensure optimal software environments for HENP workloads (a conceptual sketch of the environment selection follows this entry). [1] CHOS was written by Shane Canon of NERSC, and the code is available on GitHub[2]. The CHOS technology is explained in detail in the paper at [3]. [2] http://github.com/scanon/chos/ [3] http://indico.cern.ch/getFile.py/access?contribId=476&sessionId=10&resId=1&materialId=paper&confId=0
        Speaker: Larry Pezzaglia (LBNL)
        Slides
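        A conceptual sketch of the environment-selection idea described above: each user names the environment they want in ~/.chos, and CHOS switches them into it at login. The real switching is done by the kernel and PAM modules; the file location and fallback name below are illustrative assumptions:

        ```python
        # Conceptual sketch only; CHOS itself performs the actual switch in
        # its kernel module / PAM module, not in user-space Python.
        import os

        def chosen_environment(default: str = "sl6") -> str:  # fallback is illustrative
            try:
                with open(os.path.expanduser("~/.chos")) as f:
                    return f.readline().strip() or default
            except FileNotFoundError:
                return default

        print("would enter environment:", chosen_environment())
        ```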
    • 12:30 14:00
      Lunch 1h 30m
    • 14:00 15:30
      Business Continuity
      • 14:00
        Status of CERN Business Continuity 30m
        This talk will give an overview of the current status of business continuity at CERN and explain the steps that are planned to improve it.
        Speaker: Wayne Salter (CERN)
        Slides
      • 14:30
        Business Continuity Efforts at Fermilab 30m
        Business Continuity Efforts at Fermilab
        Speaker: Dr Keith Chadwick (Fermilab)
        Paper
        Slides
      • 15:00
        ITIL and Business Continuity (Service Perspective) 30m
        Based on ITIL best practices, support for overall business continuity is ensured by managing and controlling the risks that could seriously affect the corresponding services. The identification of risks and the provision of measures to mitigate or eliminate threats to the system play an important role in achieving the level of service required to ensure business continuity. The CERN Service Management project is starting to lay the basis for a future risk management activity covering the services of the IT and GS departments. A formal approach will be defined to analyse the service assets, threats and vulnerabilities, and to establish countermeasures to increase the reliability of the services. This talk presents the ITIL principles of business continuity and risk management and describes several practical cases applied to some of the most important IT and GS services.
        Speaker: Dr Patricia Mendez Lorenzo (CERN)
        Slides
    • 15:30 16:00
      Coffee Break + group photo 30m
    • 16:00 16:30
      Business Continuity
      • 16:00
        Change Control at RAL 30m
        In 2009 the RAL Tier-1 introduced a formal change control process as a means of driving cultural changes in the way we tested and deployed new services. Although the process was designed from the bottom up, a mid-term review found remarkable similarity with the ITIL model for change management. This talk describes how the process evolved and its impact on team culture, considers outcome metrics, and discusses the challenges of risk mitigation and accurate risk assessment.
        Speaker: Dr John Gordon (Particle Physics)
        Slides
    • 16:30 17:00
      IT Infrastructure
      • 16:30
        Scientific Linux Status April 2012 30m
        Current status of Scientific Linux.
        Speaker: connie sieh (Fermilab)
        Slides
    • 17:00 18:00
      HEPiX Board (closed) Institute of Physics

    • 19:00 21:45
      Cultural Event 2h 45m Baroque refectory of The Dominican Convent

      Jilská 7a, Prague 1 GPS: +50° 5' 7.89", +14° 25' 8.64"

      The Hradistan Dulcimer Band concert and banquet

    • 08:55 09:40
      Vendor Presentations
      • 08:55
        Why SAS NL? 30m
        Presentation by Western Digital.
        Speaker: David Sanford (Western Digital)
        Slides
      • 09:25
        Discussion 15m
    • 09:40 10:40
      Storage
      • 09:40
        HTTP Storage Federation - a dCache/DPM demonstration. 30m
        CERN-DM and dCache.org will demonstrate a storage federation purely based on the HTTP(S) protocol.
        Speaker: Fabrizio Furano (CERN)
        Slides
      • 10:10
        Use of NetApp at CERN IT Database Services group 30m
        This talk will present the experience of CERN Database Services group in running NetApp filers to provide a highly available NFS-based NAS infrastructure.
        Speaker: Giacomo Tenaglia (CERN)
        Slides
    • 10:40 11:10
      Coffee Break 30m
    • 11:10 12:30
      IT Infrastructure
      • 11:10
        CERN Infrastructure Projects Update 30m
        This talk will provide an update on the two main infrastructure projects, namely the upgrade and consolidation of the CERN computer centre and the remote hosting project. As the tender for remote hosting was adjudicated at the CERN Finance Committee in March, the talk will concentrate on the tender and its results. It will nonetheless also give a brief update on the computer centre upgrade, which has progressed significantly since the last HEPiX meeting.
        Speaker: Wayne Salter (CERN)
        Slides
      • 11:40
        Computer rooms and air conditioning experiences at a Tier-2 30m
        Speaker: Peter Gronbech (Particle Physics)
        Slides
    • 12:30 14:00
      Lunch 1h 30m
    • 14:00 15:30
      Security & Networking
      • 14:00
        IPv6 at Fermilab 30m
        Status of the IPv6 Deployment at Fermilab
        Speaker: Dr Keith Chadwick (Fermilab)
        Paper
        Slides
      • 14:30
        The HEPiX IPv6 Working Group 20m
        Since the Vancouver HEPiX meeting in Oct 2011, the IPv6 working group has been busy expanding its IPv6 testbed and testing data management over IPv6. Work has also started on the full survey of the IPv6 readiness of all WLCG applications, software and tools. This talk will present our experiences to date and plans for the future. A second talk by Francesco Prelz will give more details about the experiences on the distributed testbed.
        Speaker: Dr David Kelsey (STFC - Science & Technology Facilities Council (GB))
        Slides
      • 14:50
        The IPv6 reality check: what we learned so far on the IPv6 distributed testbed. 20m
        We report the outcome of the first, extremely basic application tests performed on the dual-stack testbed set up by the HEPiX IPv6 working group. Running sustained file transfers and deploying the File Transfer Service (FTS) over a mesh of gridftp servers provided enough real-life issues for a first reality check (a dual-stack client sketch follows this entry).
        Speaker: Francesco Prelz (Sezione di Milano)
        Slides
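        For illustration, a minimal sketch of dual-stack client behaviour on such a testbed: try the addresses getaddrinfo returns, falling back from IPv6 to IPv4 if a connection fails. The host name is a placeholder; 2811 is the standard gridftp control port:

        ```python
        # Dual-stack connect: getaddrinfo yields AAAA and A records; try each in turn.
        import socket

        def connect_dual_stack(host: str, port: int) -> socket.socket:
            last_err = None
            for family, socktype, proto, _, addr in socket.getaddrinfo(
                    host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
                try:
                    s = socket.socket(family, socktype, proto)
                    s.connect(addr)
                    return s
                except OSError as err:
                    last_err = err
            raise last_err or OSError("no addresses found")

        sock = connect_dual_stack("gridftp.example.org", 2811)  # placeholder host
        print("connected over", "IPv6" if sock.family == socket.AF_INET6 else "IPv4")
        sock.close()
        ```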
      • 15:10
        Federated Identity Management for HEP 20m
        A single Grid (X.509) identity certificate together with an attribute certificate from a Virtual Organisation can be used by an appropriately authorised HEP user anywhere in the world to access WLCG resources wherever they may be. There are, however, many other non-Grid distributed computing services that HEP users also need to access: webs, wikis, mailing lists, document databases, agenda systems, and other collaboration tools, to name just a few. Traditionally, access to these services has to be fully managed by the hosting site, which results in the requirement to create and manage a great many user accounts. This is very painful for both sites and users. This talk will present an overview of work being done, both in the general research and education community and by WLCG in collaboration with other scientific communities, to improve this situation.
        Speaker: Dr David Kelsey (STFC - Science & Technology Facilities Council (GB))
        Slides
    • 15:30 16:00
      Coffee Break 30m
    • 16:00 17:00
      Security & Networking
      • 16:00
        Computer Security Update 30m
        The talk tackles current trends in the computer security field: recent security events, threats, risk management and more, including in the academic world.
        Speaker: Remi Mollon (CERN)
        Slides
      • 16:30
        Cyber Security - The Road We've Traveled and Modest Predictions 30m
        One of the first cyber incidents was caused by an insect (a moth, in fact). Ever since then, the lower life forms have been trying to disrupt the virtual landscapes we construct for our users. As our virtual environments have evolved to be more complex, these other life forms have always evolved sufficiently to find new cracks in our defenses. As we continue down the current path we are reminded of a quote attributed to Einstein, "Insanity: doing the same thing over and over again and expecting different results." Of course, he also said, "Only two things are infinite, the universe and human stupidity, and I'm not sure about the former." So, let's look at how we got here and see if we can use that to guide what we should do in the future.
        Speaker: Bob Cowles (SLAC)
        Slides
    • 17:00 19:00
      Guided sightseeing walk 2h

      Guided sightseeing walk through Prague. We will go by buses to the Prague Castle and then walk to the dinner place.
      The buses depart from the workshop venue (Heyrovsky Institute) at 17:00.

    • 19:00 23:55
      Conference Dinner 4h 55m Klub Lávka

      Novotného lávka 1 110 00, Praha 1 GPS: +50° 5' 7.41", +14° 24' 47.82"
    • 09:00 10:30
      Grid & Cloud
      • 09:00
        FermiCloud Update 30m
        Update on the status of the FermiCloud project.
        Speaker: Dr Keith Chadwick (Fermilab)
        Paper
        Slides
      • 09:30
        EGI Federated Cloud Infrastructure 30m
        This presentation introduces the EGI Task Force on Federated Clouds to the worldwide Grid and Cloud communities. Building on the technology and expertise aggregated over 10 years of successful provisioning and operation of a pan-European Grid infrastructure, the Task Force pushes the frontiers of cloud interoperability further, enabling user communities to scale their computing needs across multiple cloud providers, both academic/publicly funded and commercial.
        Speaker: Ian Peter Collier (STFC - Science & Technology Facilities Council (GB))
        Slides
      • 10:00
        Helix Nebula 30m
        In Europe, EIROforum comprises the eight international organizations pursuing fundamental research in science and space exploration. In this context, CERN and several of the other organizations, including ESA and EMBL, are forming a collaboration to engage with European industry in public-private partnerships to build a European cloud infrastructure capable of supporting the missions of these organizations. As well as addressing the purely technical aspects, the collaboration will focus on issues of policy and privacy, particularly in the area of data. The goals of the work are to understand cost and service models, and to understand how the needs of the science organizations can be fulfilled by commercial compute and storage providers. This talk will discuss the complementary large-scale flagship projects proposed by CERN, ESA and EMBL, and how they will address the open questions that may eventually enable science to make large-scale use of such commercial facilities.
        Speaker: Tony Cass (CERN)
        Slides
    • 10:30 11:00
      Coffee Break 30m
    • 11:00 12:00
      IT Infrastructure
      • 11:00
        Monitoring at GRIF 30m
        The GRIF monitoring infrastructure will be presented, along with the requirements, issues, and foreseen evolutions
        Speaker: Mr Frederic Schaer (CEA)
        Slides
      • 11:30
        Quattor Update 30m
        Report from the recent Quattor Workshop in Budapest and update on developments in the Quattor toolset.
        Speaker: Ian Collier (UK Tier1 Centre)
        Slides
    • 12:00 12:20
      Wrap-Up
      • 12:00
        Wrap-Up 20m
        Speaker: Michel Jouvin (Universite de Paris-Sud 11 (FR))
        Slides
    • 12:20 13:50
      Lunch 1h 30m