HEPiX Spring 2011 Workshop

Europe/Berlin
Hörsaal / lecture hall
GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstr. 1, 64291 Darmstadt, Germany
Chairs: Michel Jouvin (LAL / IN2P3), Sandy Philpott (JLAB), Walter Schön (GSI)
Description

HEPiX meetings bring together IT system support engineers from the High Energy Physics (HEP) laboratories, institutes, and universities, such as BNL, CERN, DESY, FNAL, IN2P3, INFN, JLAB, NIKHEF, RAL, SLAC, TRIUMF and others.

Meetings have been held regularly since 1991, and are an excellent source of information for IT specialists in scientific high-performance and data-intensive computing disciplines. We welcome participation from related scientific domains for the cross-fertilization of ideas.

The hepix.org website provides links to information from previous meetings.

Participants
  • Alan Silverman
  • Alvaro Gonzalez
  • Andreas Heiss
  • Andreas Petzold
  • Andrei Maslennikov
  • Andrey Shevel
  • Bastian Neuburger
  • Bernard Chambon
  • Christopher Huhn
  • Chuck Boeheim
  • David Kelsey
  • Derek Feichtinger
  • Dmitry Ozerov
  • Dorin Lobontu
  • Doris Ressmann
  • Edgar Barabas
  • Eric Cano
  • Eric Fede
  • Fabio Hernandez
  • Felix Lee
  • Flavio Costa
  • Gang Chen
  • Giulio Eulisse
  • Hee-Jun Yoon
  • Helga Schwendicke
  • Helge Meinhard
  • Ian Collier
  • Ian Gable
  • Jan Kundrát
  • Jerome Belleman
  • Jerome Caffaro
  • Jie Tao
  • Jiri Horky
  • Johan Guldmyr
  • John Gordon
  • Jos van Wezel
  • Jose Benito Gonzalez Lopez
  • Juraj Sucik
  • Keith Chadwick
  • Knut Woller
  • Lukas Fiala
  • Lukasz Flis
  • Manfred Alef
  • Martin Bly
  • Mattias Wadenstein
  • Mattieu Puel
  • Michel Jouvin
  • Michele Michelotto
  • Milos Lokajicek
  • Muriel Gougerot
  • Ofer Rind
  • Owen Synge
  • Pascal Trouvé
  • Patrick Fuhrmann
  • Pau Tallada Crespí
  • Paul Kuipers
  • Peter Malzacher
  • Philippe Olivero
  • Pierre-Francois Honore
  • Pierrick Micout
  • Pirmin Fix
  • Randal Melen
  • Roberto Gomezel
  • Rolf Rumler
  • Romain Wartel
  • Seung Hee Lee
  • Silke Halstenberg
  • Stefan Haller
  • Steve Thorn
  • Thomas Bellman
  • Thomas Finnern
  • Thomas Kress
  • Thomas Roth
  • Tina Friedrich
  • Tomas Kouba
  • Tony Cass
  • Troy Dawson
  • Ulf Tigerstedt
  • Ulrich Schwickerath
  • Walter Schön
  • Wayne Salter
  • Wolfgang Friebel
  • Yaodong Cheng
  • Zhechka Toteva
    • 09:15 - 10:00
      Introduction (Hörsaal / lecture hall)
      • 09:15
        Registration 30m
      • 09:45
        Welcome Address 15m
        Speaker: Prof. Karlheinz Langanke (GSI Darmstadt)
    • 10:00 - 11:05
      Site Reports (Hörsaal / lecture hall)
      Conveners: Mr Alan Silverman (CERN), Philippe Olivero (CC-IN2P3)
      • 10:00
        GSI site report 15m
        New developments at GSI
        Speaker: Walter Schön (GSI)
        Slides
      • 10:15
        Fermilab Site Report - Spring 2011 HEPiX 20m
        The Fermilab site report for the Spring 2011 HEPiX
        Speaker: Dr Keith Chadwick (Fermilab)
        Paper
        Slides
      • 10:35
        GridKa Site Report 15m
        Current status and latest news at GridKa, including hardware status, batch system issues, and CPU performance.
        Speaker: Mr Manfred Alef (Karlsruhe Institute of Technology (KIT))
        Slides
      • 10:50
        Nikhef site report 15m
        Nikhef site report
        Speaker: Mr Paul Kuipers (Nikhef)
    • 11:05 - 11:35
      Coffee Break 30m (Hörsaal / lecture hall)
    • 11:35 - 13:00
      Site Reports (Hörsaal / lecture hall)
      Conveners: Mr Alan Silverman (CERN), Philippe Olivero (CC-IN2P3)
      • 11:35
        CERN site report 20m
        News from CERN since last meeting
        Speaker: Dr Helge Meinhard (CERN-IT)
        Slides
      • 11:55
        DESY Site Report 20m
        Computing at DESY news
        Speaker: Dr Wolfgang Friebel (Deutsches Elektronen-Synchrotron (DESY))
        Slides
      • 12:15
        RAL Site Report 20m
        Update on activities at RAL Tier1
        Speaker: Martin Bly (STFC-RAL)
        Slides
      • 12:35
        SLAC Site Report 25m
        Update on activities at SLAC
        Speaker: Mr Randy Melen (SLAC National Accelerator Laboratory)
        Slides
    • 13:00 - 14:00
      Lunch 1h (Hörsaal / lecture hall)
    • 14:00 - 15:30
      Networking & Security (Hörsaal / lecture hall)
      • 14:00
        Signing (and encrypted) message handling and implications for admins. 30m
        In grid computing we use an X.509 PKI security infrastructure. This infrastructure enables secure connections between hosts to deliver payloads, which often leads to scalability and reliability issues. This talk presents the alternative approach of signing messages for asynchronous handling, authenticating the payload rather than the connection. The implications of this approach will be illustrated by showing how service interdependency can be reduced and clustering simplified. AMQP (RabbitMQ) is used as the transport mechanism to illustrate these concepts. Both the openssl command line and a Python library can be used to authenticate signed messages, making scalable, secure authentication between sites' resources practical for administrators.
        Speaker: Owen Synge (DESY (HH))
        Slides
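        The verification step the abstract alludes to is compact. Below is a minimal sketch, not taken from the talk, of authenticating a signed message from Python by driving the openssl command line; the file names are illustrative assumptions.

          # Verify an S/MIME-signed message and recover its authenticated payload.
          # Paths are placeholders; the CA bundle is whatever the site trusts.
          import subprocess

          def verify_signed_message(message_file: str, ca_file: str) -> bytes:
              result = subprocess.run(
                  ["openssl", "smime", "-verify",
                   "-in", message_file,   # signed message taken off the AMQP queue
                   "-CAfile", ca_file],   # trusted CA certificate bundle
                  capture_output=True,
                  check=True,             # raises if the signature does not verify
              )
              return result.stdout        # payload authenticated independently of transport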
      • 14:30
        Computer security update 30m
        This presentation provides an update on the security landscape since the last meeting. It describes the main vectors of compromise in the academic community and discusses security risk management in general, as well as the security aspects of current hot topics in computing, for example identity federation and virtualisation.
        Speaker: Mr Romain Wartel (CERN)
        Slides
      • 15:00
        Host based intrusion detection with OSSEC 30m
        This talk describes the open source host-based intrusion detection system OSSEC. Besides an overview of its features, it explains how to use OSSEC for non-security monitoring and notification. Several possible real-life scenarios will be demonstrated and some of the current drawbacks discussed.
        Speaker: Bastian Neuburger (GSI)
        Slides
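        As a flavour of the non-security use case, the sketch below, an assumption rather than material from the talk, tails the alerts log of a default OSSEC installation under /var/ossec and reacts to a hypothetical local rule ID.

          import time

          ALERTS_LOG = "/var/ossec/logs/alerts/alerts.log"  # default install path

          def follow(path):
              # Yield lines appended to a file, like `tail -f`.
              with open(path) as f:
                  f.seek(0, 2)                  # start at the end of the file
                  while True:
                      line = f.readline()
                      if not line:
                          time.sleep(1.0)
                          continue
                      yield line

          for line in follow(ALERTS_LOG):
              if "Rule: 100100" in line:        # hypothetical local rule, e.g. disk usage
                  print("custom notification:", line.strip())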
    • 15:30 - 16:00
      Coffee Break 30m (Hörsaal / lecture hall)
    • 16:00 - 17:30
      Computing (Hörsaal / lecture hall)
      Convener: Dr Michele Michelotto (Univ. + INFN)
      • 16:00
        Batch Monitoring and Testing 30m
        In order to improve its batch service for local and Grid users, development is ongoing at CERN to design a batch monitoring system and set up a test instance. The goal is to enhance the batch service by investigating new scheduler features, fine-tuning the already used ones and decreasing the time spent in problem identification and fault resolution.
        Speaker: Mr Jerome Belleman (CERN)
        Slides
      • 16:30
        Selecting a new batch system at CC-IN2P3 30m
        Two years ago, CC-IN2P3 decided to replace its home-made batch system (BQS) with a new product. This presentation describes the selection process we set up and explains our choice.
        Speaker: Mr Bernard Chambon (CC-IN2P3/CNRS)
        Slides
      • 17:00
        Grid Engine setup at CC-IN2P3 30m
        As you know, we chose Grid Engine as the next batch system for CC-IN2P3. This presentation focuses on two aspects we have examined over the last months: 1) scalability and robustness testing; 2) specific requirements at CC-IN2P3, with problems and solutions.
        Speaker: Mr Bernard Chambon (CC-IN2P3/CNRS)
        Slides
    • 09:15 - 10:45
      Site Reports (Hörsaal / lecture hall)
      Conveners: Mr Alan Silverman (CERN), Philippe Olivero (CC-IN2P3)
      • 09:15
        CC-IN2P3 Site-Report update 15m
        A short update to the CC-IN2P3 site report, covering the main changes that have occurred since the last HEPiX meeting.
        Speaker: Mr Philippe Olivero (CC-IN2P3)
        Slides
      • 09:30
        IHEP Site Report 15m
        Report on the status of the computing system at IHEP.
        Speaker: Dr Gang Chen (Institute of High Energy Physics (IHEP))
        Slides
      • 09:45
        NDGF Site Report 15m
        Site update from NDGF, recent developments, neat things, etc.
        Speaker: Mattias Wadenstein (NDGF)
      • 10:00
        Petersburg Nuclear Physics Institute (PNPI) status report 15m
        An update on the computing infrastructure of the High Energy Physics Division (HEPD): the LAN (400 hosts), the Institute's mail service, other centralized servers, and the computing cluster. Several topics are covered: security and spam, cluster virtualization, WiFi, and video conferencing.
        Speaker: Mr Andrey Shevel (Petersburg Nuclear Physics Institute (PNPI))
        Slides
      • 10:15
        DLS site report 15m
        Overview of computing systems at Diamond, including current status and planned future developments.
        Speaker: Ms Tina Friedrich (Diamond Light Source Ltd)
        Slides
      • 10:30
        BNL Site report 15m
        A report on the current status of the RHIC/ATLAS Computing Facility at BNL, with an emphasis on developments and updates since the last fall HEPiX meeting.
        Speaker: Dr Ofer Rind (BROOKHAVEN NATIONAL LABORATORY)
        Slides
    • 10:45 - 11:15
      Coffee Break 30m (Hörsaal / lecture hall)
    • 11:15 - 11:45
      Site Reports (Hörsaal / lecture hall)
      Conveners: Mr Alan Silverman (CERN), Philippe Olivero (CC-IN2P3)
      • 11:15
        ASGC site report 15m
        ASGC current status.
        Speaker: Mr Felix Lee (Academia Sinica)
        Slides
      • 11:30
        PSI - Site report 15m
        Site report for the Paul Scherrer Institut.
        Speaker: Dr Derek Feichtinger (PSI)
        Slides
    • 11:45 - 13:15
      IT Infrastructure (Hörsaal / lecture hall)
      Convener: Dr Helge Meinhard (CERN-IT)
      • 11:45
        Drupal at CERN 30m
        Drupal is an open source content management platform used worldwide. CERN has chosen Drupal for building multilingual, content-managed web sites and applications. The infrastructure is based on a cluster of Apache web servers, MySQL database servers and storage servers, running on the SLC6 operating system. High availability is achieved with the Red Hat Cluster Suite. The talk will present the details of the Drupal configuration at CERN, the current status of the project, and the integration with existing CERN services: e-groups, CERN Authentication and the CERN Document Server.
        Speaker: Mr Juraj Sucik (CERN)
        Slides
      • 12:15
        Indico - Present and future 30m
        Indico (Integrated Digital Conference, http://indico.cern.ch) is a web-based, multi-platform conference lifecycle management system and agenda. It has also become the long-term archiving tool for documents and metadata related to all kinds of events that take place at CERN. The software is used in production at CERN (hosting more than 114,000 events, 580,000 presentations and 770,000 files, with around 10,000 visitors per day) and is installed in more than 90 institutes worldwide. Indico has changed a lot in the last three years; we will review these changes and new features and give an overview of Indico's future.
        Speaker: Mr Jose Benito Gonzalez Lopez (CERN)
        Slides
      • 12:45
        Invenio at CERN 30m
        Invenio <http://invenio-software.org/> is a software suite for running a digital library or document repository on the web. The technology covers all aspects of digital library management, from document ingestion through classification, indexing and curation to dissemination. Invenio was originally developed at CERN to run the CERN Document Server (CDS), which has managed over 1,000,000 bibliographic records in high-energy physics since 2002, covering articles, books, journals, photos, videos, and more. Invenio is nowadays co-developed by an international collaboration comprising institutes such as CERN, DESY, EPFL, FNAL and SLAC, and is used by about thirty scientific institutions worldwide. The presentation will focus on the current and future usage of Invenio at CERN: integration with other CERN IT services (Drupal, GRID, Indico, MediaArchive, AIS, etc.) as well as other HEP-related information systems, newly introduced features and workflows, usage statistics, etc. The software development strategy, including planned developments and insight into the underlying technologies, will also be covered.
        Speaker: Mr Jerome Caffaro (CERN)
        Slides
    • 13:15 - 14:15
      Lunch 1h (Hörsaal / lecture hall)
    • 14:15 - 15:45
      Computing (Hörsaal / lecture hall)
      • 14:15
        OpenMP Performance on Virtual Machines 30m
        Virtualization technology has been applied in a variety of areas, including server consolidation, High Performance Computing, and Grid and Cloud computing. Because applications do not run directly on the hardware of the host machine, virtualization generally causes a performance loss for both sequential and parallel applications. This talk studies OpenMP applications running on a virtualized multicore machine. It shows the overhead of parallelization and compares the parallel performance on virtual machines with that of native executions. In one interesting scenario, an application runs much more slowly in parallel than sequentially; a performance analysis tool is applied to investigate the cause of this abnormal behavior. The talk presents the performance optimization derived from this analysis and its results.
        Speaker: Dr Jie Tao
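        The measurement itself is simple to reproduce. The sketch below uses Python's multiprocessing as a stand-in for the talk's OpenMP runs: time the same CPU-bound work serially and at increasing worker counts, once natively and once inside a VM, and compare the speedups.

          import time
          from multiprocessing import Pool

          def burn(n: int) -> float:
              # CPU-bound kernel standing in for an OpenMP worksharing loop.
              s = 0.0
              for i in range(1, n):
                  s += 1.0 / i
              return s

          if __name__ == "__main__":
              work = [2_000_000] * 8                 # eight equal chunks of work
              t0 = time.perf_counter()
              for n in work:
                  burn(n)                            # serial baseline
              serial = time.perf_counter() - t0
              for procs in (1, 2, 4, 8):
                  t0 = time.perf_counter()
                  with Pool(procs) as pool:
                      pool.map(burn, work)
                  print(f"{procs} workers: speedup "
                        f"{serial / (time.perf_counter() - t0):.2f}")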
      • 14:45
        CMS 64bit transition and multicore plans 30m
        CMS has ported its complete software stack to run natively on 64-bit Linux and is using it for all its computing workflows, from data acquisition to final analysis. In this talk we'll present our experience with this transition, both in terms of deployment issues and actual performance gains. Moreover, we'll give an insight into what we consider our present and future challenges, focusing in particular on how we plan to exploit multi-core architectures.
        Speaker: Mr Giulio Eulisse (FERMILAB)
        Slides
      • 15:15
        Performance Comparison of Multi and Many-Core Batch Nodes 30m
        The compute power of batch nodes is measured in units of HEP-SPEC06 which is based on the industry standard SPEC CPU2006 benchmark suite. In this talk I will compare the HEP-SPEC06 scores of multi-core worker nodes with accounting data taken from the batch system.
        Speaker: Mr Manfred Alef (Karlsruhe Institute of Technology (KIT))
        Slides
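        The comparison rests on a simple normalisation: divide a node's HEP-SPEC06 score by its job slots to rate each slot, then weight accounted CPU time by that rating. A minimal sketch with made-up numbers:

          nodes = {
              # node type: (HS06 score of the whole node, job slots)
              "wn-amd-2010":   (120.0, 16),
              "wn-intel-2011": (180.0, 24),
          }

          def hs06_hours(node_type: str, cpu_hours: float) -> float:
              score, slots = nodes[node_type]
              return cpu_hours * score / slots   # HS06-weighted work per slot

          print(hs06_hours("wn-amd-2010", 10.0))    # 75.0 HS06-hours
          print(hs06_hours("wn-intel-2011", 10.0))  # 75.0 HS06-hours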
    • 15:45 - 16:15
      Coffee Break 30m (Hörsaal / lecture hall)
    • 16:15 - 17:30
      IT Infrastructure (Hörsaal / lecture hall)
      Convener: Dr Helge Meinhard (CERN-IT)
      • 16:15
        FAIR 3D Tier-0 Green-IT Cube 1h
        The FAIR computing requirements for first-level processing of the experiment data exceed those at CERN. All computing resources, including the first-level event selectors, will be hosted in one data center, which is currently being planned. It sets new standards in energy density, implementing more than 100 kW/sqm, and in energy efficiency, requiring less than 10% of the load for data center cooling while still allowing the use of general-purpose computer servers. The overall FAIR computing concept is presented, as well as the FAIR Tier-0 data center architecture.
        Speaker: Prof. Volker Lindenstruth (FIAS, GSI)
        Slides
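        The efficiency claim translates directly into a power usage effectiveness figure: if cooling consumes less than 10% of the IT load, the PUE stays below 1.1, as the small worked example below shows (the absolute load is illustrative).

          it_load_kw = 1000.0                 # illustrative IT load
          cooling_kw = 0.10 * it_load_kw      # "less than 10% for cooling"
          pue = (it_load_kw + cooling_kw) / it_load_kw
          print(f"PUE <= {pue:.2f}")          # PUE <= 1.10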
    • 09:15 - 10:45
      Storage & FileSystems (Hörsaal / lecture hall)
      • 09:15
        Evaluation of gluster file system at IHEP 30m
        GlusterFS is an open source, clustered file system capable of scaling to several petabytes and handling thousands of clients. At IHEP, we set up a testbed to evaluate the file system, covering functionality, performance and current status. The advantages and disadvantages for HEP data processing are also discussed.
        Speaker: Dr Yaodong Cheng (Institute of High Energy Physics, Chinese Academy of Sciences)
        Slides
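        One ingredient of such an evaluation is a raw streaming-throughput probe on the mounted file system. The sketch below is a crude illustration under an assumed mount point, not the benchmark used at IHEP.

          import os, time

          TESTFILE = "/mnt/glusterfs/testfile"     # hypothetical GlusterFS mount
          BLOCK = b"\0" * (4 * 1024 * 1024)        # 4 MiB blocks
          COUNT = 64                               # 256 MiB total

          t0 = time.perf_counter()
          with open(TESTFILE, "wb") as f:
              for _ in range(COUNT):
                  f.write(BLOCK)
              f.flush()
              os.fsync(f.fileno())                 # ensure data reached the servers
          elapsed = time.perf_counter() - t0
          print(f"write: {COUNT * len(BLOCK) / elapsed / 1e6:.1f} MB/s")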
      • 09:45
        Benefits of a virtualized approach to mass storage system 20m
        The talk will show the benefits of grouping a number of heterogeneous tape libraries into one virtual container of tape media and drives. Backup and archive applications send their data to this large container, which has all the mechanisms needed to control and access the tape resources (cartridges, drives, physical libraries). The implementation is based on IBM's Enterprise Removable Media Manager software.
        Speaker: Dorin Daniel Lobontu (Karlsruhe Institute of Technology (KIT))
        Slides
      • 10:05
        CASTOR status and development 30m
        We will present the performance achieved during data taking in the 2010 LHC run, including the heavy-ion run. The operational benefits reaped from the deployed improvements, as well as the roadmap for further developments to consolidate the system and lower its deployment cost, will be introduced. Our performance assessment of the new generation of Oracle tape drives, the T10000C, will also be shown.
        Speaker: Eric Cano (CERN)
        Slides
    • 10:45 - 11:15
      Coffee Break 30m (Hörsaal / lecture hall)
    • 11:15 - 12:45
      Storage & FileSystems (Hörsaal / lecture hall)
      • 11:15
        The DESY Grid-Lab, a detailed 'local access protocol' evaluation 30m
        Since mid-2010, DESY IT has been operating a performance evaluation facility the size of a small gLite Tier-2, the DESY Grid-Lab. Regular gLite software is deployed, allowing the execution of commonly used LHC analysis jobs as well as applications provided by other communities. This presentation focuses on the comparison of different implementations of XROOTD and dCap, as well as the NFS 4.1/pNFS dCache implementation. The evaluation scenarios include real-world analysis jobs of the LHC VOs, including standard HammerCloud jobs, I/O-intensive jobs provided by the ROOT team, and examples from non-HEP communities.
        Speaker: Dmitry Ozerov (DESY)
        Slides
      • 11:45
        Lustre at GSI 30m
        Lustre has been employed with great success as the general-purpose distributed file system for all experiment and theory groups at GSI. Currently 100 million files are stored on Lustre, and between batch nodes, interactive nodes and desktops there are about 500 clients with access to it. Past stability issues have been overcome by running Lustre 1.8. Hardware upgrades of the metadata servers and OSSes are under way, and the total file space will soon grow beyond 2 PB.
        Speaker: Thomas Roth (GSI)
        Slides
      • 12:15
        Evaluation of distributed file systems using trace and replay mechanism 30m
        Reliable benchmarking of file systems is a complex and time-consuming task when one has to test against a production environment to obtain relevant results. In the HEP community this usually means setting up a particular experiment's software environment, which can be a rather complicated task for a system administrator. To simplify this, we developed an application that exactly replays IO requests, reliably replicating the IO behavior of the original applications without the need to install the whole working environment. Using this application, we present a performance comparison of the Lustre, GPFS and Hadoop file systems, replaying traces of LHCb, CMS and ATLAS jobs.
        Speaker: Mr Jiri Horky (Institute of Physics of Acad. of Sciences of the Czech Rep. (ASCR))
        Slides
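        The core of such a replay tool is small: read back a recorded stream of IO requests and re-issue them against the file system under test. A minimal sketch, with an assumed (op, path, offset, size) trace format:

          import os

          trace = [                                  # illustrative recorded requests
              ("read", "/data/test.root", 0, 4096),
              ("read", "/data/test.root", 1_048_576, 65_536),
          ]

          handles = {}
          for op, path, offset, size in trace:
              if path not in handles:
                  handles[path] = os.open(path, os.O_RDONLY)
              if op == "read":
                  os.pread(handles[path], size, offset)   # replay exactly as recorded
          for fd in handles.values():
              os.close(fd)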
    • 12:45 - 13:15
      Networking & Security (Hörsaal / lecture hall)
      Convener: Dr David Kelsey (RAL)
      • 12:45
        HEPiX IPv6 Working Group 30m
        A new working group on IPv6 in HEP was discussed and agreed at the previous HEPiX meeting. The working group has recently been created and its work is just starting. This talk will present the status and plans of the working group for the year ahead.
        Speaker: Dr David Kelsey (RAL)
        Slides
    • 13:15 - 14:15
      Lunch 1h (Hörsaal / lecture hall)
    • 14:15 - 15:45
      IT Infrastructure (Hörsaal / lecture hall)
      • 14:15
        Overview of the new Computing room at CC-IN2P3 30m
        The presentation will give an overview of the newly commissioned infrastructure and computing room at CC-IN2P3, with an update on the technical infrastructure project and its improvements. It will focus on the major achievements and describe the future capacity offered up to 2019. Topics to be reviewed: building, cooling system, power distribution and confined racks, future capacity, projects and scheduling.
        Speaker: Mr Pascal Trouvé (CC-IN2P3)
        Slides
      • 14:45
        Evolution of CERN's Computing Facilities 30m
        CERN is currently evolving its computing facilities through a number of projects. This presentation will give an overview of the various projects and their current status.
        Speaker: Mr Wayne Salter (CERN)
        Slides
      • 15:15
        Implementing Service Management processes with Service-Now 30m
        The choice of Service-Now as the tool for handling the request fulfillment and incident management ITIL processes in the IT and General Services Departments at CERN has led to several months of intensive development. Besides the implementation of these two standardized ITIL processes, modelling the CERN Service Catalogue in the tool has been a very interesting task. The integration with third-party systems and workflows (SSO, GGUS, organization data, knowledge base) has started and will remain an ongoing task for the next couple of years. The biggest challenge will be the transition of existing non-ITIL processes implemented in other tools into Service-Now.
        Speaker: Zhechka Toteva (CERN)
        Slides
    • 15:45 - 16:15
      Coffee Break 30m (Hörsaal / lecture hall)
    • 16:15 - 17:00
      IT Infrastructure (Hörsaal / lecture hall)
      • 16:15
        Scientific Linux Status Report + Discussion 45m
        Progress of Scientific Linux over the past six months, what we are currently working on, and what we see in the future for Scientific Linux.
        Speaker: Mr Troy Dawson (FERMILAB)
        Slides
    • 19:00 - 22:30
      Conference Dinner 3h 30m (Hotel Restaurant Weißer Schwan, Frankfurter Landstrasse 190, 64291 Darmstadt - Arheilgen)

    • 09:15 - 10:45
      Cloud, grid and virtualization (Hörsaal / lecture hall)
      • 09:15
        Moving virtual machines images securely between sites. 30m
        A Grid service allows applications to run on many sites without modification. Virtualization adds the potential for deploying the same customized operating system at many sites. This talk presents one of many possible security infrastructures and models for sharing and deploying virtual machine images, developed within the HEPiX virtualization working group to meet the objectives of secure non-repudiation of images, auditing and fault tolerance. The talk will focus on the metadata describing the virtual machines: how to share this metadata, how to share and verify the images it describes, packaging and deployment, and how to audit the approach.
        Speaker: Owen Synge (DESY (HH))
      • 09:45
        Virtualization at CERN: a status report 30m
        We present updates to the virtualization services provided by CERN IT. CERN's internal cloud was moved into full production mode in December 2010 and has been providing virtualized batch resources since then; we will report on operational experience as well as further developments since the last meeting in Cornell, including benchmark results, OpenNebula and ISF experiences, and a first view of SLC6. The CVI self-service continues to grow rapidly (>1200 VMs on >200 hypervisors), and so do the usage requirements; we describe the service evolution of CVI 2, with a focus on Linux VMs. We will also present plans to evaluate OpenStack at CERN.
        Speaker: Dr Ulrich Schwickerath (CERN)
        Slides
      • 10:15
        StratusLab Marketplace for Sharing Virtual Machine Images 30m
        StratusLab (http://stratuslab.eu/) provides a complete, open-source solution for deploying an "Infrastructure as a Service" cloud. Using a cloud requires prepared machine and disk images, yet preparing correct, secure images remains difficult and represents a significant barrier to the adoption of cloud technologies. The StratusLab Marketplace is an image registry containing cryptographically signed metadata associated with shared images. It simultaneously allows end-users to search for existing images, image creators to publicize their images, and cloud administrators to evaluate the trustworthiness of an image. The image files themselves are stored elsewhere, either in cloud storage or in web-accessible repositories. The Marketplace thus facilitates the sharing of images and the use of IaaS cloud infrastructures, giving users access to a diverse set of existing images and cloud administrators the confidence to allow them to run. Its integration with the StratusLab distribution makes the use of registered images easy, further reducing barriers to adoption.
        Speaker: Cal Loomis (CNRS/LAL)
        Slides
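        The trust model boils down to comparing a downloaded image against the digest published in its signed metadata entry. A minimal sketch, with illustrative file names and a placeholder digest:

          import hashlib

          def sha512_of(path: str) -> str:
              h = hashlib.sha512()
              with open(path, "rb") as f:
                  for chunk in iter(lambda: f.read(1024 * 1024), b""):
                      h.update(chunk)
              return h.hexdigest()

          published_digest = "..."   # value taken from the image's metadata entry
          if sha512_of("image.qcow2") != published_digest:
              raise SystemExit("image does not match its Marketplace metadata")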
    • 10:45 - 11:15
      Coffee Break 30m (Hörsaal / lecture hall)
    • 11:15 - 12:45
      Cloud, grid and virtualization (Hörsaal / lecture hall)
      • 11:15
        Operating a distributed IaaS Cloud for BaBar MC production and user analysis 30m
        Over the last year we have established a system that replicates a standard Condor HTC environment across multiple distinct IaaS clouds of different types, including EC2, Nimbus and Eucalyptus. Users simply submit batch jobs to a Condor queue with a custom attribute pointing to the virtual machine image they would like booted to service their job. The system automatically boots instances of the requested machine type on one of the available clouds and contextualizes them to connect to the batch system. The system is in continual use for astronomy and HEP jobs. We report on our experience operating this system, which has booted over 30,000 VMs and completed over 250,000 jobs.
        Speaker: Ian Gable (University of Victoria)
        Slides
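        The user-facing half of such a system is just a Condor submit description carrying an extra attribute that names the image. In the sketch below, the attribute name +VMImage and the image URL are assumptions for illustration, not necessarily what this system uses.

          import subprocess, tempfile, textwrap

          submit = textwrap.dedent("""\
              universe   = vanilla
              executable = analysis.sh
              +VMImage   = "http://repo.example.org/images/babar-sl5.img"
              queue
          """)

          # condor_submit reads a submit description file; write one and submit it.
          with tempfile.NamedTemporaryFile("w", suffix=".sub", delete=False) as f:
              f.write(submit)
          subprocess.run(["condor_submit", f.name], check=True)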
      • 11:45
        FermiGrid Scalability and Reliability Improvements 30m
        The Fermilab Campus Grid (FermiGrid) is a meta-facility that provides grid infrastructure for scientific computing at Fermilab. It provides highly available centralized authorization and authentication services, a site portal for Globus job submission, coordination for interoperability among the various stakeholders, and grid-enabled mass storage interfaces. We currently support approximately 25,000 batch processing slots. This presentation will describe the current structure of FermiGrid and recent improvements in the scalability and reliability of our authorization and authentication services, including orders-of-magnitude improvements in our web-services-based Site AuthoriZation service (SAZ). We will also describe recent enhancements to the information system and matchmaking algorithm of our site job gateway. Finally, we will describe the FermiGrid HA2 project currently under way, which distributes our services across two buildings, making us resilient against major building outages.
        Speaker: Dr Keith Chadwick (Fermilab)
        Paper
        Slides
      • 12:15
        Adopting Infrastructure as Code to run HEP applications 30m
        GSI is a German national laboratory for heavy-ion beams, planning to build the new accelerator complex "Facility for Antiproton and Ion Research" (FAIR). In preparation for the Tier-0 computing center for FAIR, different Infrastructure as a Service (IaaS) cloud technologies have been compared in order to construct a private cloud. In parallel, effort has gone into learning how to run HEP applications efficiently in a virtual environment. The result is a private cloud testbed, called SCLab, built with the OpenNebula toolkit. The Infrastructure as Code (IaC) concept, based on the Chef configuration management system, has been adopted for the deployment and operation of HEP applications in clouds. Tools have been developed to start virtual clusters in any IaaS cloud on demand. The first successful applications are a completely virtual AliEn grid site for the ALICE experiment at the LHC and simulations for radiation protection studies for FAIR. The talk will present the design decisions and the experience gained in running HEP applications in IaaS clouds.
        Speaker: Mykhaylo Zynovyev (GSI)
        Slides
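        The "virtual clusters on demand" part can be pictured as looping over an instantiation call to the cloud toolkit. The sketch below drives the OpenNebula CLI of that era (onevm create) with an illustrative template; the template contents and node count are assumptions.

          import subprocess, tempfile, textwrap

          TEMPLATE = textwrap.dedent("""\
              NAME   = alien-worker
              CPU    = 1
              MEMORY = 2048
              DISK   = [ IMAGE = "sl5-alien-worker" ]
          """)

          with tempfile.NamedTemporaryFile("w", suffix=".one", delete=False) as f:
              f.write(TEMPLATE)

          for _ in range(4):                       # boot a four-node virtual cluster
              subprocess.run(["onevm", "create", f.name], check=True)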
    • 12:45 - 14:00
      Lunch 1h 15m (Hörsaal / lecture hall)
    • 14:00 - 16:00
      Oracle (Hörsaal / lecture hall)

      Discussion with Oracle

      • 14:00
        An introduction to Oracle Linux 40m
        In this presentation, Lenz will provide an overview about Oracle Linux, Oracle's Enterprise Linux distribution and the Oracle Unbreakable Enterprise Kernel (UEK). The session will cover the technical highlights and improvements as well as the support offerings that complement it.
        Speaker: Mr Lenz Grimmer (Oracle)
        Slides
      • 14:40
        Open Source at Oracle 30m
        In this presentation, Oracle will go over its major open source products, and their future directions.
        Speaker: Mr Gilles Gravier (Oracle)
        Slides
      • 15:10
        Discussion 50m
    • 16:00 - 16:30
      Coffee Break 30m (Hörsaal / lecture hall)
    • 16:30 - 17:30
      Storage & FileSystems (Hörsaal / lecture hall)
      Conveners: Andrei Maslennikov (CASPUR), Mr Peter van der Reest (Deutsches Elektronen-Synchrotron DESY)
      • 16:30
        Foundation of the EOFS - support the Lustre development 30m
        The European Open File Systems society (EOFS) is a non-profit organisation to coordinate the future development of Lustre. Founding members include universities, supercomputing centers and industry partners. The next Lustre release is scheduled for summer 2011.
        Speaker: Walter Schön (GSI)
        Slides
      • 17:00
        Discussion 30m
    • 17:30 - 18:30
      HEPiX Board (closed) (Seminarraum Theorie, room no. SB3 3.170, GSI)

    • 09:15 - 10:15
      IT Infrastructure (Hörsaal / lecture hall)
      Convener: Dr Helge Meinhard (CERN-IT)
      • 09:15
        Version Control Services at CERN 30m
        CERN offers three version control services: one using SVN and two older ones using CVS. The older CVS service is to be closed by Q2 2011 and merged into the high-availability CVS service on AFS, where performance has been improved to suit the needs of all users. The main SVN service has grown considerably in users, commits and repositories since it started in 2009. Our future plans include new tools for users, internal software upgrades, and improved statistics and monitoring.
        Speaker: Mr Alvaro Gonzalez Alvarez (CERN)
        Slides
      • 09:45
        CernVM-FS Production Service and Deployment 30m
        CernVM-FS is now a production service supported at CERN, distributing VO software to sites and worker nodes. This talk will describe the production service and give details of the deployment and management required to use CVMFS at sites.
        Speaker: Mr Ian Peter Collier (STFC RAL Tier1)
        Slides
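        On the site side, a routine health check is to probe the mounted repositories before letting a worker node take jobs. A minimal sketch using the cvmfs_config tool shipped with CernVM-FS:

          import subprocess

          result = subprocess.run(["cvmfs_config", "probe"],
                                  capture_output=True, text=True)
          print(result.stdout, end="")
          if result.returncode != 0:
              raise SystemExit("CVMFS probe failed; drain this worker node")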
    • 10:15 - 10:45
      Coffee Break 30m (Hörsaal / lecture hall)
    • 10:45 - 11:45
      Cloud, grid and virtualization (Hörsaal / lecture hall)
      Conveners: Dr John Gordon (STFC-RAL), Dr Keith Chadwick (Fermilab), Tony Cass (CERN)
      • 10:45
        The National Grid Service Cloud 30m
        The UK's National Grid Service is investigating how it can best make use of cloud technologies in the future. The focus is on users, not only those who want to perform computationally intensive research, but also others in the wider academic setting. The usefulness of Infrastructure as a Service clouds to this community is crucial in determining future cloud provision in this area. To examine this question, Eucalyptus-based clouds were deployed at the Universities of Edinburgh and Oxford to gain real experience from the users' perspective.
        Speaker: Dr Steve Thorn (University of Edinburgh)
        Slides
      • 11:15
        HEPiX VWG Status Report 30m
        This presentation will give an update of the activities of the HEPiX Virtualisation Working Group over the past few months, describe the current status and give an outlook on future progress.
        Speaker: Tony Cass (CERN)
        Slides
    • 11:45 - 12:00
      Wrap-up (Hörsaal / lecture hall)
      • 11:45
        Wrap-Up 15m
        Speaker: Michel Jouvin (LAL / IN2P3)
        Slides