HEPiX Workshop

America/Los_Angeles
Bldg. 66 Auditorium
Lawrence Berkeley National Laboratory
1 Cyclotron Road, Berkeley, CA 94720, USA
Iwona Sakrejda, Jay Srinivasan (Lawrence Berkeley National Lab. (LBNL)), Michel Jouvin (LAL / IN2P3), Sandy Philpott (JLAB)
Description
The HEPiX meetings bring together IT system support engineers from the High Energy Physics (HEP) laboratories and institutes, such as BNL, CERN, DESY, FNAL, IN2P3, INFN, JLAB, NIKHEF, RAL, SLAC, TRIUMF and others. They have been held regularly since 1991 and are an excellent source of information for IT specialists, which is why they also attract strong participation from organizations outside HEP.
Participants
  • Alan Silverman
  • Alexandre Schmitt
  • Andrea Chierici
  • Andrew Uselton
  • Artem Trunov
  • Booker Bense
  • Christophe Bonnaud
  • Christopher Huhn
  • Christopher Walker
  • Chuck Boeheim
  • Dan Gunter
  • David Kelsey
  • Denice Deatrich
  • Douglas McNab
  • Frederic Azevedo
  • Gary Stiehr
  • Giacomo Tenaglia
  • Helge Meinhard
  • Ian Collier
  • Ian Gable
  • Jan Svec
  • Jason Hick
  • Jay Srinivasan
  • Jiri Chudoba
  • John Bartelt
  • John Gordon
  • Jonathan Schaeffer
  • Julian Macassey
  • Juraj Sucik
  • Karl Amrhein
  • Keith Chadwick
  • Kelvin Raywood
  • Lukas Fiala
  • Maarten Litmaath
  • Marc Hausard
  • Martin Bly
  • Matt Crawford
  • Mattias Wadenstein
  • Michel Jouvin
  • Michele Michelotto
  • Miguel Oliveira
  • Neal Adams
  • Paul Kuipers
  • Philippe Olivero
  • Pierre Emmanuel Brinette
  • Pierrick Micout
  • Roberto Gomezel
  • Sandra Philpott
  • Sebastian Lopienski
  • Stefan Haller
  • Stijn De Weirdt
  • Thomas Davis
  • Tim Skirvin
  • Tobias Koenig
  • Tom Degroote
  • Tony Cass
  • Tony Chan
  • Troy Dawson
  • Ulrich Schwickerath
  • Walter Schoen
  • Wojciech Lapka
    • Opening Remarks and Conference Introduction (Bldg. 66 Auditorium)
    • Keynote Speech (Bldg. 66 Auditorium)
      • 1
        Keynote Speech by Kathy Yelick
        Slides
    • Morning Break (Bldg. 66 Auditorium)
    • Site Reports I (Bldg. 66 Auditorium)
      • 2
        CERN site report
        Summary of important news at CERN since the spring 2009 meeting
        Speaker: Dr Helge Meinhard (CERN-IT)
        Slides
      • 3
        JLab Site Report
        Status of Scientific Computing at JLab, including experimental physics and high performance computing for Lattice QCD.
        Speaker: Sandy Philpott (JLAB)
        Slides
      • 4
        Site Report GSI
        Site report GSI
        Speaker: Dr Walter Schoen (GSI)
        Slides
      • 5
        CC-IN2P3 Site Report
        News, changes, and upgrades that occurred over the last year at CC-IN2P3.
        Speaker: Mr Philippe Olivero (CC-IN2P3)
        Slides
    • 12:00
      Lunch (Bldg. 66 Auditorium)
    • Storage I (Bldg. 66 Auditorium)
      • 6
        U.S. Government Supports OpenAFS
        The U.S. Department of Energy has awarded Your File System Inc. a US$648,000 Small Business Innovation Research (SBIR) Phase II grant to support the development of a next-generation globally distributed file system that is compatible with AFS. This talk will describe the technologies that Your File System Inc. will be implementing and contributing to OpenAFS through August 2011.
        Speaker: Jeffrey Altman (Your File System Inc.)
        Slides
      • 7
        Experiences with StoRM and Lustre at an Atlas Tier-2 site
        Queen Mary, University of London has been using the StoRM SRM in front of a Lustre filesystem. We present the results of benchmarks on the Lustre filesystem, and the throughput achieved by simulated analysis using the HammerCloud framework.
        Speaker: Christopher J Walker (Queen Mary, University of London)
        Slides
      • 8
        lustre@gsi: A Petabyte file system for the analysis farm - status and outlook
        lustre@gsi: A Petabyte file system for the analysis farm - status and outlook
        Speaker: Walter Schoen (GSI)
        Slides
      • 9
        Performance of Hadoop file system on a HEPiX storage workgroup testbed
        This work is a continuation of the storage-solution testing performed by the HEPiX storage working group on its testbed at FZK. Hadoop, an Apache project, offers a cluster file system called HDFS, inspired by the Google File System and designed to run on commodity hardware. It has gained some popularity in OSG, where it has become a supported storage solution, and is currently in production at a few Tier-2 sites. In this series of tests we used the HEPiX testbed worker nodes' hard drives as the underlying storage, without external file servers or storage arrays, and evaluated the performance of this solution with the standard HEPiX storage application suite. We present the results obtained.
        Speaker: Artem Trunov (Karlsruhe Institute of Technology)
        Slides
    • Afternoon Break (Bldg. 66 Auditorium)
    • Virtualization I (Bldg. 66 Auditorium)
      • 10
        Batch virtualization project at CERN
        Between March and August 2009 a project was run at CERN to evaluate the possibilities of using virtualization at large scale, with a focus on batch computing. Two key issues were identified for this application: placing virtual machines on an appropriate hypervisor, and selecting an appropriate image, driven by actual demand. Both commercial and free software solutions exist that address the placement issue: the Virtual Machine Orchestrator (VMO), a commercial solution by Platform Computing, and the free software solution OpenNebula were evaluated during the project. For VMO, the vendor provided a first implementation of an algorithm that selects the image to deploy based on the requirements of pending batch jobs; for OpenNebula, an external mechanism still needs to be developed for this task. The presentation will cover the basic concepts of the project and the lessons learned, as well as further visions and possible implications for services offered at CERN.
        Speaker: Dr Ulrich Schwickerath (CERN)
        Slides
      • 11
        Virtualization within FermiGrid
        The current virtualization infrastructure in use within FermiGrid and the operational experience will be presented.
        Speaker: Dr Keith Chadwick (Fermilab)
        Slides
      • 12
        The Magellan Cloud Computing Project at NERSC
        NERSC and the Argonne LCF have been funded by DOE to acquire test systems to explore cloud computing technologies. We present an overview of the Cloud Computing Project at NERSC.
        Speaker: Mr Brent Draney (NERSC/LBNL)
        Slides
    • Site Reports II (Bldg. 66 Auditorium)
      • 13
        IRFU Site report
        What is new at IRFU Saclay.
        Speaker: Pierrick Micout (CEA IRFU)
        Slides
      • 14
        RAL Site Report
        Latest news from the RAL Tier1
        Speaker: Martin Bly (STFC-RAL)
        Slides
      • 15
        Site report from PDSF
        We present the current status of PDSF and updates since the last HEPiX meeting.
        Speaker: Mr Jay Srinivasan (Lawrence Berkeley National Lab. (LBNL))
        Slides
      • 16
        ScotGrid, A UK ATLAS Tier 2 Approaching Readiness for Data Taking
        This presentation will provide an overview of the very successful UK ATLAS Tier-2, ScotGrid, as we fast approach LHC data taking and data analysis. It will cover a variety of topics, ranging from an overview of the fabric and middleware, the site's current readiness, success during STEP, storage issues and site optimisations, through to disaster planning and site security. The presentation will conclude with a look into the future and how we can retain our position as one of the most successful ATLAS Tier-2 centres.
        Speaker: Dr Douglas McNab (University of Glasgow)
        Slides
      • 17
        NIKHEF site report
        NIKHEF site report
        Speaker: Paul Kuipers (NIKHEF)
        Slides
    • 10:30
      Morning Break (Bldg. 66 Auditorium)
    • Benchmarking I (Bldg. 66 Auditorium)
      • 18
        HEP-SPEC06 Measurements on Nehalem and Istanbul
        Performance of the latest Intel DP processor (Nehalem 55xx) and the latest AMD DP processor (Istanbul 24xx) measured with the HEP-SPEC06 benchmark.
        Speaker: Michele Michelotto (Univ. + INFN)
        Slides
      • 19
        Benchmarking of CPU servers
        The recent generation of Intel Xeon CPUs comes with support for simultaneous multithreading, marketed as Hyper-Threading. In addition, a new CPU feature has been added: Intel Turbo mode. The influence of these features on system performance has been tested with the HEP-SPEC06 benchmark suite with enhanced statistics, and the scaling behaviour has been studied. The results of these tests are presented, and the consequences for certain applications as well as for procurement procedures are discussed.
        Speaker: Dr Ulrich Schwickerath (CERN)
        Slides
    • Lunch (Bldg. 66 Auditorium)
    • Storage II (Bldg. 66 Auditorium)
      • 20
        Storage R&D at CERN
        This talk will present an update on R&D activities around storage at CERN. The main focus will be various activities around iSCSI technology, but an update will also be presented on the Lustre evaluation project.
        Speaker: Helge Meinhard (CERN-IT)
        Slides
      • 21
        IN2P3 HPSS Migration (v5.1 to 6.2) report
        The IN2P3 Computing Centre has used HPSS as its mass storage system since 1999. There had been no major system upgrade since 2005, and IN2P3 was still running HPSS 5.1, a version that is no longer supported by IBM and lacks T10K-B drive support. In June 2009 the system was upgraded to HPSS 6.2.2.2, which implied major changes (DCE removal, DB2 and operating system upgrades). This presentation will describe all the operations carried out to upgrade the system during a three-day downtime and the issues encountered: new HPSS 6.2 features and changes, operation planning, system preparation, core server migration, metadata migration, and problems met along the way.
        Speaker: Pierre Emmanuel Brinette (CNRS-CCIN2P3)
        Slides
      • 22
        HPSS in the Extreme Scale Era
        The High Performance Storage System (HPSS) has provided high-performance archival storage to the DOE community for the past fifteen years. It specifically serves the HEP community at LBNL/NERSC by providing archival storage for the PDSF system. This presentation will give a brief overview of how HPSS works, its more distinctive current features, our plans for the next major release (8.1), and our thoughts on preparing for Extreme Scale (2018-2020).
        Speaker: Jason Hick (LBNL)
        Slides
      • 23
        Optimizing tape data access
        TReqS is our Tape Request Scheduler. Based on BNLBatch, its goal is to sit between dCache and HPSS and reorder file requests. A first implementation has been in our production system since May of this year. We will present the problem of tape access for the LHC experiments, the solution we implemented, and TReqS in its production environment after five months of running it.
        Speaker: Jonathan Schaeffer (CCIN2P3)
        Handout
        Slides
      • 24
        First exercises with PROOF on NFS v4.1/pNFS
        At DESY we have installed a testbed for Grid and storage related issues. One of the first trials was running an ALICE PROOF job against data held in a distributed NFS 4.1 service based on dCache. Trials were also made against an industrial system, about which I am not yet allowed to speak. Comparisons were made with regard to scaling in PROOF and to I/O performance relative to NFSv3. An accompanying movie shows the running computation and the I/O received.
        Speaker: Peter van der Reest (DESY)
        ALICE Offline Documentation
        Animation
        PROOF Tutorial - Demo on pp 49,50
        Slides
    • 15:15
      Afternoon Break (Bldg. 66 Auditorium)
    • Virtualization II (Bldg. 66 Auditorium)
      • 25
        Evolution of virtual infrastructure with Hyper-V
        The Internet Services group provides the infrastructure and sophisticated management tools for virtual machine provisioning based on Hyper-V, Microsoft Virtual Machine Manager, management SOAP web services and a user web interface. This virtualisation service has already demonstrated its reliability and efficiency to a wide range of satisfied users. The infrastructure presented at the last HEPiX meeting has undergone significant improvements, allowing us to provide new features: live migration, rapid provisioning and better Linux support. This talk will present these important updates to our infrastructure and summarize the experience gained from running the Linux operating system in the virtual machines.
        Speaker: Mr Juraj Sucik (CERN)
        Slides
      • 26
        On-demand Virtualization and Grid/Cloud Integration
        INFN-T1 has implemented a solution to provide worker nodes on demand using virtualization technology. This solution gives us great flexibility, providing dynamic virtual execution environments and integrating seamlessly into our production grid. Currently 200 VM slots are available, and the number will increase further.
        Speaker: Andrea Chierici (INFN-CNAF)
        Slides
      • 27
        A Vision for Virtualisation in WLCG
        This talk presents a possible roadmap for the use of virtualisation across WLCG sites to deliver improved computing services for the experiments and users.
        Speaker: Dr Tony Cass (CERN)
        Slides
    • Site Reports III (Bldg. 66 Auditorium)
      • 28
        LIP Site Report
        Site report for all LIP sites (LIP-Lisbon, LIP-Coimbra, NCG-INGRID-PT) and activities at LIP.
        Speaker: Dr Miguel Oliveira (LIP - Laboratório de Instrumentação e Física Experimental de Partículas)
        Slides
      • 29
        INFN-T1 status report
        I will present the status report of the Italian Tier1 site
        Speaker: Andrea Chierici (INFN-CNAF)
        Slides
      • 30
        TRIUMF site report
        An external review of TRIUMF computing took place and some changes have been recommended. The Tier-1 Center has completed its acquisition for the 2009/2010 upgrade. We continue to use a mix of Xen, OpenVZ and Hyper-V for virtualisation with OpenVZ being preferred for hosted servers. For Linux desktops, we provide a repository of TRIUMF rpms which customise a standard Scientific Linux installation.
        Speaker: Kelvin Raywood (TRIUMF)
    • 10:00
      Morning Break (Bldg. 66 Auditorium)
    • Monitoring Infrastructure and Tools I (Bldg. 66 Auditorium)
      • 31
        Open Source Solution for Monitoring of Grid Services in WLCG
        Since 2005, Worldwide LHC Computing Grid (WLCG) services have been monitored by the Service Availability Monitoring (SAM) system, which has been the main source of information for the monthly WLCG availability and reliability calculations. During this time the SAM framework gained popularity amongst site and service managers and was very useful in building a robust grid infrastructure. Experience with this monitoring tool, as well as preparation for the evolution of the European grid infrastructure from EGEE to national grid initiatives (NGIs), led to the design of an enhanced, distributed model for monitoring grid services. Nagios has been adopted as the monitoring framework, and messaging technology (ActiveMQ) has been chosen as the transport mechanism. This talk covers the architecture of the new system.
        Speaker: Mr Wojciech Lapka
        Slides
      • 32
        Unified Performance and Environment Monitoring using Nagios, Ganglia and Cacti
        We present a method of monitoring the environment and performance using open source tools such as Nagios, Ganglia and Cacti to collect and display performance data as well as availability information for various components of large computing systems in an integrated fashion. We will present information on how the data is collected, viewed and analyzed, with specific examples from NERSC's Cray system.
        Speaker: Mr Thomas Davis (NERSC/LBNL)
      • 33
        Monitoring tape drives and medias at CC-IN2P3
        Due to the continuous load and intensive usage of our tape robotics, we regularly face hardware issues with tapes and tape drives. A recurrent issue is possible data loss, which forces us into a long recovery process. In order to improve reliability, we have studied commercial solutions to avoid permanent write/read errors, or at least to anticipate them. We tested two products (for a one-month period each) and purchased the one that best met our requirements. In this talk I will describe the criteria used to select the product, our daily usage after four months, and what we expect to do with it in the near future.
        Speaker: Frédéric Azevedo (CC-IN2P3)
        Slides
    • 12:00
      Lunch (Bldg. 66 Auditorium)
    • Network, Security I (Bldg. 66 Auditorium)
      • 34
        Cyber security update
        This talk gives an update on security issues affecting computers, software applications and networks during the last months. It includes information on emerging types of vulnerabilities and recent attack vectors, and provides an insight into the cyber-crime economy of 2009. This talk is based on contributions and input from the CERN Computer Security Team.
        Speaker: Mr Sebastian Lopienski (CERN)
        Slides
      • 35
        Grid Security Update
        An update on Grid security in WLCG, EGEE and EGI, concentrating on progress in operational and policy issues.
        Speaker: Dr David Kelsey (RAL)
        Slides
    • Benchmarking II (Bldg. 66 Auditorium)
      • 36
        AMD Roadmap
        HPC AMD Roadmap and Benchmarking
        Speaker: Mr Jeff Underhill (AMD)
      • 37
        Benchmarking at LIP
        Benchmarking is a key activity in all computing centres, not only for tender procedures but also for optimizing resources. At LIP we recently carried out major upgrades of all sites and deployed a new one. We report on CPU HEP-SPEC and storage benchmarking results.
        Speaker: Miguel Oliveira (LIP - Laboratório de Instrumentação e Física Experimental de Partículas)
        Slides
    • 15:15
      Afternoon Break (Bldg. 66 Auditorium)
    • Other (O/S, Applns., Data centers/Facilities) I (Bldg. 66 Auditorium)
      • 38
        The CDCE Project @ BNL
        This presentation will describe the expansion of the RHIC/ATLAS Computing Facility (RACF) to accommodate its commitments to the computational needs of the scientific programs at Brookhaven National Laboratory. The expansion has nearly tripled the footprint of the facility over the past 2+ years and allows the RACF to adequately meet our computing and storage requirements for the foreseeable future. The presentation will describe the challenges faced during the design, construction and commissioning phases of the project and will also provide an update on the current status and plans for the newly available floor space.
        Speaker: Dr Tony Chan (Brookhaven National Laboratory)
        Slides
      • 39
        Options for expanding CERN’s computing capacity without a new Building
        CERN is approaching the limit of what can be housed in its Computer Centre, but there is no clear consensus about the provision of new capacity. While discussions continue, CERN has decided to take two interim measures, partly to satisfy immediate and medium-term needs and partly to gain experience in these domains: external hosting and the acquisition and operation of container-based solutions. I will describe the options considered for each of these and present our current plans.
        Speaker: Mr Alan Silverman (CERN)
        Slides
      • 40
        Intel HPC environment for Silicon Design and Key Learnings
        Silicon design complexity is increasing every year due to new features and process technology shrinks. Additionally, business drivers such as shorter product development time, reduced headcount and lower cost are driving more pre-silicon verification, a high degree of design automation, and global multi-site design teams. These two factors (technological and business) are dramatically increasing the demand on computing and storage, driving design computing to be engineered as an HPC environment. This presentation will cover Intel's HPC design computing environment, generational improvements, and the value realized in the areas of compute clusters, very large memory servers, optimal networking, and parallel storage.
        Speaker: Mr Shesha Krishnapura (Intel)
        Slides
    • HEPiX Board Meeting (Board Only) (Bldg. 66 Auditorium)
    • Travel to Dinner (Bldg. 66 Auditorium)
    • Dinner at Cafe Venezia (Main Room, Cafe Venezia, 1799 University Ave, Berkeley, CA)
    • Network, Security II (Bldg. 66 Auditorium)
      • 41
        Web application security
        CERN hosts a large number of Web sites (CERN-related, but also private), both on the central Web Services and on machines managed by individual Web site owners. Some of these Web sites are actually interactive Web applications developed in languages like PHP, ASP, Java, Perl, Python etc., and unavoidably a fraction of them have bugs making them vulnerable to attacks such as Cross Site Scripting (XSS), Code/SQL Injection, Cross Site Request Forgery (CSRF), and so on. To address this issue, several Web application vulnerability assessment tools have been evaluated at CERN, and the chosen ones are used to find vulnerabilities before the attackers do. This talk will discuss the choice of tools, the findings, and suggestions on how Web application security can be improved in large organizations.
        Speaker: Mr Sebastian Lopienski (CERN)
        Slides
      • 42
        Security aspects of the WLCG infrastructure
        The Worldwide LHC Computing Grid (WLCG) infrastructure has been built up for the storage and analysis of the very large data volumes that will be recorded by the LHC experiments. Its existing security mechanisms and policies are foreseen to evolve in various respects, for example with an increasing use of virtual machines, pilot jobs, clouds, enhancements to data storage and access models, and potential integration with single sign-on campus-wide or federated identity management systems. To steer such evolution, input from the HEPiX community would be very desirable.
        Speaker: Maarten Litmaath (CERN)
        Slides
    • 10:00
      Morning Break (Bldg. 66 Auditorium)
    • Monitoring Infrastructure and Tools II (Bldg. 66 Auditorium)
      • 43
        Adopting Quattor for managing the UK Tier 1 fabric at RAL
        The UK Tier 1 Centre at RAL will grow significantly in size in the coming year. The need for better automation of both system deployment and ongoing configuration management prompted a survey of possible solutions. Since deciding to adopt Quattor earlier this year, we have successfully deployed our new SL5 batch service using the system and are already seeing better consistency and easier management. The talk will discuss the options we considered, and the planning and execution of deploying a complex management system while avoiding disruption to the running farm.
        Slides
      • 44
        Monitoring CC-IN2P3 services with Nagios
        At CC-IN2P3, Nagios has taken over from the previous system to become the main monitoring tool used by operations. This presentation will introduce its configuration in a Tier-1 environment and present various extra features developed at CC-IN2P3 to customize the notification system and to provide multi-user development and a failover mechanism.
        Speaker: Marc Hausard (CC-IN2P3)
        Slides
      • 45
        Deploying and Using the Lustre Monitoring Tool
        The Lustre Monitoring Tool (LMT) provides a useful view of the server-side behavior of the Lustre parallel file system. This talk presents a brief overview of the architecture of the tool and explores several use cases including tracking system health, server-side performance tuning, applications-side performance tuning, and incident evaluation, among others.
        Speaker: Mr Andrew Uselton (NERSC/LBNL)
    • Other (O/S, Applns., Data centers/Facilities) II (Bldg. 66 Auditorium)
      • 46
        Scientific Linux Status Report and Plenary Discussion
        Progress of Scientific Linux over the past six months, what we are currently working on, and what we see in the future for Scientific Linux. We will also hold a plenary discussion to gather feedback and input for the Scientific Linux developers from the HEPiX community; this may influence upcoming decisions, e.g. on distribution lifecycles and packages added to the distribution.
        Speaker: Connie Sieh (FERMILAB)
        Paper
        Slides
    • 12:15
      Lunch (Bldg. 66 Auditorium)
    • Other (O/S, Applns., Data centers/Facilities) II (contd.) (Bldg. 66 Auditorium)
      • 47
        The WLCG Technical Forum and HEPiX
        The Worldwide LHC Computing Grid (WLCG) Technical Forum has been set up for discussions between WLCG stakeholders about middleware and related matters, with a view to improving the reliability and efficiency of the WLCG infrastructure. HEPiX is a good venue for discussions pertaining to the operation, usage and evolution of computing and storage facilities, from the perspectives of WLCG sites as well as the LHC experiments. Some topics of interest: virtual machines, clouds, pilot jobs, efficient data access, security.
        Speaker: Maarten Litmaath (CERN)
        Slides
      • 48
        Update on Version Control Services at CERN
        The CERN Central Subversion Service was started as a pilot project in January 2008 and, since January 2009, has been an official service offered by CERN IT to CERN users. In the long run it is meant to replace the CERN Central CVS Server. This talk will present an overview of the CERN Version Control Services lifecycle, with an emphasis on community-driven service design and on service operation integrated with the CERN IT infrastructure.
        Speaker: Giacomo Tenaglia (CERN)
        Slides
    • ITIL (Bldg. 66 Auditorium)
      • 49
        ITIL’s roles and tools from a perspective of a Scientific Computing Centre
        The Karlsruhe Institute of Technology (KIT) was founded on 1 October 2009 through the merger of the University of Karlsruhe and the Forschungszentrum Karlsruhe. As the first new organizational unit of KIT, the Steinbuch Centre for Computing (SCC) was established, combining the former Institute for Scientific Computing of Forschungszentrum Karlsruhe and the Computing Centre of the Technical University of Karlsruhe. The merger directly affected the SCC, which now has to cover two locations 10 km apart. IT service management according to the industry standard IT Infrastructure Library (ITIL) was selected by the SCC as a strategic element to support the merging of the two existing computing centres. The ITIL service support processes, such as Incident, Problem, Change, Configuration and Release Management, are the basis of SCC's first-class IT services. The talk explains the roles and tools of each ITIL support process and pays particular attention to the special needs of GridKa, the German Tier-1 centre of the WLCG infrastructure, hosted at the SCC.
        Speaker: Mr Tobias Koenig (Karlsruhe Institute of Technology (KIT))
        Slides
      • 50
        The FermiGrid Software Acceptance Process
        The software acceptance process used by FermiGrid, together with the operational experience gained with this process, will be presented.
        Speaker: Dr Keith Chadwick (Fermilab)
        Slides
      • 51
        ITIL at CERN
        This talk will cover progress at CERN to improve service organisation through the adoption of ITIL principles.
        Speaker: Dr Tony Cass (CERN)
        Slides
    • 15:45
      Afternoon Break (Bldg. 66 Auditorium)
    • Desktop Management I (Bldg. 66 Auditorium)
      • 52
        Windows 7 horizons
        Windows 7, the next version of the Windows OS, is scheduled to be available worldwide on 22 October 2009. The CERN IT-IS group has been working with it ever since its beta release in January 2009. The purpose of this talk is to discuss this experience and to share the plans for deploying Windows 7 at CERN.
        Speaker: Juraj Sucik (CERN)
        Slides
      • 53
        Linux Desktop Management with old school NFS-root boot
        The standard Linux desktops at GSI have received their operating system via NFS, not from the local hard drive, since the last millennium. Since then this approach has been enhanced to work with shared read-only OS images that provide improved security and fast OS deployment and upgrades. Image generation and configuration management of the NFS-root desktops are completely integrated into our infrastructure for standalone servers. My talk will give an overview of the techniques currently used, the shortcomings of this approach, and an outlook on our future plans.
        Speaker: Christopher Huhn (GSI Helmholtzzentrum für Schwerionenforschung GmbH)
        Slides
      • 54
        Review of Desktop Computing Support
        We will review the Desktop Computing Support across some HEP sites.
        Speaker: Mr Alan Silverman (CERN)
        Slides
    • Network, Security III (Bldg. 66 Auditorium)
      • 55
        ESnet: Networking for Science
        This talk will give an introduction to the Energy Sciences Network (ESnet). It will outline both ESnet as a program in DOE's Office of Science and the current network implementation known as ESnet4. The roles of both components of ESnet4, the IP network and the Science Data Network (SDN), will be discussed. The presentation will also touch upon several international network collaborations in which ESnet is taking a key role to advance the usability of R&D networks for supporting science. Lastly, the talk will cover ESnet's participation in ARRA-funded research into the demonstration of 100 Gb/s wide area networks.
        Speaker: Joe Burrescia (ESnet)
        Slides
      • 56
        Network Performance Tuning
        Speaker: Mr Brian Tierney (NERSC/LBNL)
        Slides
    • Site Reports IV (Bldg. 66 Auditorium)
      • 57
        INFN Site Report
        An overview of INFN computing and networking activities.
        Speaker: Roberto Gomezel (INFN)
        Slides
      • 58
        SLAC Site Report
        Report on new personnel and projects, and status of IT and HPC at SLAC.
        Speaker: John Bartelt (SLAC)
        Slides
      • 59
        Prague Tier2 site report
        Prague Tier-2: its current status and plans for the near future.
        Speaker: Jiri Chudoba (Institute of Physics, Prague)
        Slides
      • 60
        NDGF Site Report
        News and an overview of what's happening in the NDGF region, as well as some small updates on previously covered NDGF-related topics.
        Speaker: Mattias Wadenstein (NDGF)
        Slides
    • Conference Wrap-Up (Bldg. 66 Auditorium)
        Slides