HEPiX Fall 2012 Workshop

Asia/Shanghai
C305 (Institute of High Energy Physics)

19B Yuquan Lu, Shijingshan, Beijing, China
Gang Chen (Institute of High Energy Physics), Helge Meinhard (CERN), Sandy Philpott (JLAB)
Description

HEPiX meetings bring together IT system support engineers from the High Energy Physics (HEP) laboratories, institutes, and universities, such as BNL, CERN, DESY, FNAL, IN2P3, INFN, JLAB, NIKHEF, RAL, SLAC, TRIUMF and many others.

Meetings have been held regularly since 1991, and are an excellent source of information for IT specialists in scientific high-performance and data-intensive computing disciplines. We welcome participation from related scientific domains for the cross-fertilization of ideas.

The hepix.org website provides links to information from previous meetings.

    • 09:00 09:30
      Miscellaneous C305

      Convener: Dr Helge Meinhard (CERN)
      • 09:00
        Welcome address 20m
        Speaker: Prof. Yifang Wang (IHEP)
        Slides
      • 09:20
        Workshop logistics 10m
        Speaker: Gang Chen (Chinese Academy of Sciences (CN))
        Slides
    • 09:30 10:35
      Site reports C305

      Convener: Mr Alan Silverman (CERN)
      • 09:30
        LCG-BEIJING Site Status 15m
        The presentation will show the current status of the Beijing LCG site and our plans for the future.
        Speaker: Dr Jingyan Shi (IHEP)
      • 09:45
        Australia Site Report 15m
        Details of the upgrades and changes that have recently been made at the Australian Centre of Excellence for Particle Physics at the Terascale.
        Speakers: Lucien Philip Boland (University of Melbourne (AU)), Sean Christopher Crosby (University of Melbourne (AU))
        Slides
      • 10:00
        CERN site report 20m
        News from CERN since the previous meeting
        Speaker: Dr Helge Meinhard (CERN)
        Slides
      • 10:20
        GSI site report 15m
        GSI site report
        Speaker: Dr Walter Schoen (GSI)
        Slides
    • 10:35 11:00
      Coffee break 25m C305

    • 11:00 12:35
      Site reports C305

      Convener: Mr Alan Silverman (CERN)
      • 11:00
        NDGF site report 15m
        Overview of new developments at the distributed NDGF Tier-1, featuring new hardware, a new organization and new lessons learned.
        Speaker: Erik Mattias Wadenstein
      • 11:15
        RAL Site Report 15m
        News from RAL
        Speaker: Martin Bly (STFC-RAL)
        Slides
      • 11:30
        The ATLAS Great Lakes Tier-2 (AGLT2) Site Report 15m
        We will present an update on our site since the last report and cover our work with VMware, dCache and perfSONAR-PS. In addition we will discuss our new denser storage system from Dell, recent networking changes and describe how we are integrating these into our site. We will conclude with a summary of what has worked and what problems we encountered and indicate directions for future work.
        Speaker: Shawn Mc Kee (University of Michigan (US))
        Slides
      • 11:45
        Fermilab Site Report 20m
        We present recent developments in the Scientific Computing Facilities at Fermilab. We will discuss continued improvements in site networking and wide area networking. We will show significant developments in physical facilities. We will present an overview of major computing and organizational activities in support of the scientific program.
        Speaker: Steven Timm (Fermilab)
        Slides
      • 12:05
        LAL and GRIF site report 15m
        Changes at LAL and GRIF over the past year.
        Speaker: Michel Jouvin (Universite de Paris-Sud 11 (FR))
        Slides
      • 12:20
        DESY Site Report 15m
        The site report will discuss changes and developments at the Hamburg and Zeuthen sites since Spring.
        Speaker: Mr Peter van der Reest (DESY)
        Slides
    • 12:35 12:50
      Group photo 15m C305

    • 12:50 14:00
      Lunch break 1h 10m C305

    • 14:00 15:30
      IT Infrastructure C305

      Conveners: Mr Alan Silverman (CERN), Dr Helge Meinhard (CERN)
      • 14:00
        OpenStack Chances and Practice in IHEP 30m
        OpenStack is a global collaboration of developers and cloud computing technologists producing an open-standard cloud computing platform for both public and private clouds. The project aims to deliver solutions for all types of clouds by being simple to implement, massively scalable, and feature rich. The technology consists of a series of interrelated projects delivering various components for a cloud infrastructure solution. This talk will present the status of the project from the user's and the developer's perspectives, and show how we use OpenStack to build the cloud computing environment in our data centre. We will then describe a series of experiments and compare their performance with that of our current solution. Research and development topics that concern us on OpenStack will also be covered, including a private cloud store for HEPiX users and operations monitoring. Finally, future OpenStack developments such as automatic deployment, network planning and high availability will be discussed. (A minimal client-side sketch follows this entry.)
        Speakers: Dr Yaodong Cheng (Institute of High Energy Physics, Chinese Academy of Sciences), Mr Qingbao Hu (IHEP)
        Slides
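        For readers new to OpenStack, a minimal client-side sketch of what "building a cloud computing environment" can look like: booting a VM through the Nova API with python-novaclient, the client library of this OpenStack era. The auth URL, credentials, image and flavour names are illustrative assumptions, not IHEP's actual configuration.

            # Hedged sketch: boot a test VM against an OpenStack Nova API.
            # Endpoint, credentials, image and flavour names are placeholders.
            from novaclient.v1_1 import client

            nova = client.Client("demo-user", "demo-pass", "demo-tenant",
                                 "http://cloud.example.org:5000/v2.0/")

            image = nova.images.find(name="SL6-base")    # assumed image name
            flavor = nova.flavors.find(name="m1.small")  # stock small flavour

            server = nova.servers.create("hepix-test-vm", image, flavor)
            print(server.id, server.status)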
      • 14:30
        CERN Agile Infrastructure, Road to Production 30m
        The CERN Agile Infrastructure (AI) project aims to redesign the workflow of machine and configuration management within CERN IT. As the project approaches production, the main software components (OpenStack, Puppet, Foreman) have been deployed through several iterations of scale and stability. We present the current status and the next steps for the project. (A sketch of one building block follows this entry.)
        Speaker: Steve Traylen (CERN)
        Slides
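        As a flavour of the Foreman component mentioned above, a hedged sketch that queries a Foreman server for the hosts it manages through its REST API. The hostname and credentials are placeholders; /api/hosts is part of Foreman's documented API, but the exact JSON layout varies across versions, so the code guards for both shapes.

            # Hedged sketch: list hosts known to a Foreman instance.
            import requests

            FOREMAN = "https://foreman.example.org"        # assumed hostname

            resp = requests.get(FOREMAN + "/api/hosts",
                                auth=("admin", "secret"),  # placeholder credentials
                                headers={"Accept": "application/json"})
            resp.raise_for_status()

            data = resp.json()
            # Some versions return a bare list, others {"results": [...]}
            hosts = data.get("results", []) if isinstance(data, dict) else data
            for entry in hosts:
                # Foreman 1.x wraps each record as {"host": {...}}
                host = entry.get("host", entry)
                print(host["name"])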
      • 15:00
        Integration of Lemon/LAS monitoring with the CERN Agile Infrastructure 30m
        The Agile Infrastructure (AI) project will deliver a solution for managing the CERN Computer Centre resources. Part of the solution will consist of a new monitoring infrastructure, of which the LHC Era Monitoring (Lemon) system is an early adopter. Lemon is a client/server-based monitoring system covering performance, application, environment and facilities monitoring (e.g. temperature, power consumption, cooling efficiency). The Lemon Alarming System, a Lemon extension, is used at CERN for notifying the operator about error situations. This talk covers the migration strategy to the new infrastructure as well as support for the non-Quattor environment (e.g. Puppet).
        Speaker: Ivan Fedorko (CERN)
        Slides
    • 15:30 16:00
      Coffee break 30m C305

    • 16:00 17:30
      IT Infrastructure C305

      Conveners: Mr Alan Silverman (CERN), Dr Helge Meinhard (CERN)
      • 16:00
        Quattor update - Integrating Aquilon into a grid site 30m
        Aquilon is a Quattor configuration database and management broker developed by an investment bank to meet the needs of its large worldwide grid. Providing much better relational integrity in the Quattor configuration database, and a workflow that is both more agile and more disciplined, Aquilon can transform the way Quattor is used to manage sites. This talk will discuss the RAL Tier 1 experience of deploying an Aquilon instance outside its original commercial environment and beginning to use it to manage a grid site.
        Speaker: Ian Collier (UK Tier1 Centre)
        Slides
      • 16:30
        Lync - Phone, voice mailbox, instant messaging... Get access to all of them from any place in the world. 30m
        Use your computer or portable device as your main tool for unified communications: check your colleagues' presence, make phone calls, receive call notifications, listen to your voice mailbox, answer mails and send instant messages, from any place in the world with Internet access. The presentation will summarize our experience in integrating Microsoft Lync, an Alcatel PBX and Microsoft Exchange. The goal was to provide a system that integrates VoIP telephony, mail, instant messaging and presence to enhance communication capabilities.
        Speaker: Pawel Grzywaczewski (CERN)
        Slides
      • 17:00
        Scientific Linux Infrastructure Improvements 30m
        The underlying infrastructure of Scientific Linux is starting to change. These changes should make the environment more stable and higher performing with a greater feature set. This presentation will detail some of the plans and progress thus far.
        Speaker: Pat Riehecky (Fermilab)
        Slides
    • 09:00 10:30
      IT Infrastructure C305

      Conveners: Mr Alan Silverman (CERN), Dr Helge Meinhard (CERN)
      • 09:00
        DYNES: Building a distributed networking instrument 30m
        This presentation will discuss the challenges of efficiently provisioning and deploying switch and host OS configurations enabling our collaboration to monitor, access, and repair the distributed instrument. Additionally we will cover some of the ongoing challenges post-deployment as regards configuration tracking, service verification, and monitoring the overall status of the DYNES instrument as well as enabling DYNES use for domain specific applications (like the LHC).
        Speaker: Benjeman Jay Meekhof (University of Michigan (US))
        Slides
      • 09:30
        Selecting a Business-Process-Management-System in conjunction with an Identity-and-Access-Management-System 30m
        Business processes are integral parts of everyday (non-technical) administrative tasks. Many of these tasks at DESY are still paper-bound. A joint project of the DESY administration and the High Energy Physics department was started to provide the organisational and technical prerequisites for establishing electronic workflows, using a business process management system (BPMS) as well as an identity and access management (IAM) system. Processes will thus be handled faster, traced more easily and executed in a uniform manner. The presentation will show aspects of the procedure for choosing a BPM system as well as an IAM system, and the underlying requirements.
        Speaker: Mr Dirk Jahnke-Zumbusch (DESY)
        Slides
      • 10:00
        Scientific Linux current status update 30m
        This presentation will provide an update on the current status of Scientific Linux, describe some possible future goals, and give users a chance to provide feedback on its direction.
        Speaker: Pat Riehecky (Fermilab)
        Slides
    • 10:30 11:00
      Coffee break 30m C305

    • 11:00 12:30
      Computing C305

      Conveners: Gilles Mathieu (CNRS), Michele Michelotto (Universita e INFN (IT))
      • 11:00
        A decade of Condor experience at Fermilab 30m
        The Condor batch system has been used at Fermilab for a decade in Run II reprocessing and analysis, the USCMS Tier 1 facility, and the FermiGrid General Purpose Grid Cluster. In this talk I present an overview of the operational stability, the scalability, and the best practices we have learned in building a 27,000-job-slot campus grid using Condor. (A minimal submission sketch follows this entry.)
        Speaker: Steven Timm (Fermilab)
        Slides
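        To make the unit of work concrete, a minimal sketch of the kind of job such a campus grid schedules: a textbook vanilla-universe submit description handed to condor_submit from Python. The executable and file names are placeholders.

            # Hedged sketch: submit a trivial vanilla-universe Condor job.
            import os
            import subprocess
            import tempfile
            import textwrap

            # A minimal submit description: run /bin/hostname once
            submit = textwrap.dedent("""\
                universe   = vanilla
                executable = /bin/hostname
                output     = job.out
                error      = job.err
                log        = job.log
                queue 1
            """)

            with tempfile.NamedTemporaryFile("w", suffix=".sub",
                                             delete=False) as f:
                f.write(submit)
                path = f.name

            subprocess.check_call(["condor_submit", path])  # hand the job to the schedd
            os.remove(path)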
      • 11:30
        CERN Batch System, Monitoring and Accounting 30m
        The CERN batch service runs a 60k CPU core cluster using Platform LSF. We present some of the challenges of running a service at this scale, and describe the current planning of how we aim to evolve the current system into a more dynamic, larger-scale service. As part of this, we recently undertook a project to develop new monitoring tools and upgrade the batch accounting system; we present the current state of development in this area. (A monitoring sketch follows this entry.)
        Speaker: Jerome Belleman (CERN)
        Slides
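        A hedged sketch of the kind of primitive batch-monitoring tools build on: tallying job states from `bjobs -u all` output. The column layout assumed here (JOBID USER STAT QUEUE ...) is the stock LSF default and may differ with local configuration.

            # Hedged sketch: count LSF jobs per state by parsing bjobs output.
            import subprocess
            from collections import Counter

            out = subprocess.check_output(["bjobs", "-u", "all"]).decode()

            states = Counter()
            for line in out.splitlines()[1:]:     # skip the header line
                fields = line.split()
                if len(fields) >= 3:
                    states[fields[2]] += 1        # STAT column: RUN, PEND, ...

            for state, count in sorted(states.items()):
                print(state, count)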
      • 12:00
        LRMS Migration at GridKa 30m
        This talk describes the scheduled migration to another LRMS at GridKa: the problems and limitations of the LRMS currently used at GridKa, the selection and testing of a new one, and configuration details (e.g. fair-share configuration) and experiences with a first sub-cluster that is already managed by the new LRMS.
        Speaker: Manfred Alef (Karlsruhe Institute of Technology (KIT))
        Slides
    • 12:30 14:00
      Lunch break 1h 30m C305

    • 14:00 15:30
      Storage and Filesystems C305

      Convener: Andrei Maslennikov (CASPUR/CINECA)
      • 14:00
        The Lustre file system at IHEP 30m
        Lustre has been the main distributed file system solution at IHEP for more than four years. The Lustre file system at IHEP currently has a capacity of 2.2 PB, with 50 OSSs and 500+ OSTs, running Lustre 1.8.6. Built on top of commodity disk arrays, servers and 10 Gbit Ethernet, it provides 24 GB/s of bandwidth for five high energy physics experiments. The presentation reports the status of the Lustre file system at IHEP in three parts: 1) an overview, including the deployment history, current configuration and real performance captured during production usage; 2) the I/O pattern of high energy physics computing, including file sizes, read/write extent and offset sizes, and performance optimization according to this pattern; 3) management experience abstracted from four years' production running. (A striping sketch follows this entry.)
        Speaker: Ms Lu Wang (IHEP)
        Slides
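        A small sketch of the day-to-day Lustre management this talk touches on: creating a file striped across several OSTs and inspecting the layout with the lfs tool, wrapped in Python. The mount point and stripe count are illustrative; wide striping suits large sequential HEP files at some metadata cost.

            # Hedged sketch: set and inspect Lustre striping via lfs.
            import subprocess

            path = "/lustre/demo/bigfile"        # assumed Lustre mount point

            # Create the (empty) file with its data striped across 4 OSTs
            subprocess.check_call(["lfs", "setstripe", "-c", "4", path])

            # Show the resulting layout and overall file system usage
            subprocess.check_call(["lfs", "getstripe", path])
            subprocess.check_call(["lfs", "df", "-h"])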
      • 14:30
        Lustre at GSI 30m
        Since March 2012, GSI has been running a second Lustre file system. The younger cluster introduced a host of new technologies, problems and challenges. A step-by-step migration of both data and hardware from the old to the newer system is under way. This younger installation is the current workhorse for GSI HPC, but also provides the experience and knowledge base for future projects coming with FAIR. Related is the TeraLink project, connecting compute clusters at neighbouring sites to GSI's Lustre; a first link has been established to the CSC supercomputer at Frankfurt University.
        Speaker: Thomas Roth (GSI)
        Slides
      • 15:00
        RAL Tier1 Disk only Storage Project status and plans 30m
        A working group has been investigating alternatives to Castor for disk-only storage at the RAL Tier 1. Requirements have been gathered and we are now deploying test instances of a number of technologies. This talk will discuss the reasons for the project, the requirements, and the current status and findings.
        Speaker: Ian Collier (UK Tier1 Centre)
        Slides
    • 15:30 16:00
      Coffee break 30m C305

    • 16:00 17:30
      Storage and Filesystems C305

      Convener: Andrei Maslennikov (CASPUR/CINECA)
      • 16:00
        CERN Cloud Storage Evaluation 30m
        Currently there is growing interest in cloud-based infrastructures (either private or public) as a way to implement data centres more scalably and manageably, and to include external resources more flexibly. In this context the CERN DSS group, with participation from IHEP and Huawei, has investigated several cloud storage implementations with respect to their stability, performance and scalability. In this presentation we will summarise the possible motivations for using cloud storage components, their potential roles within the HEP context, and the performance results achieved in a PB-size test system. We will describe the operational experience gained during several months of test activity and conclude with a test plan for the next evaluation phase, including replication studies between different distributed sites and different storage implementations.
        Speaker: Dirk Duellmann (CERN)
        Slides
      • 16:30
        Alternatives to Posix. Lessons with S3 Compatible Storage Systems 45m
        This presentation covers three major areas. First, a trend in storage systems away from tape and POSIX towards alternative systems that favour scale, latency tolerance, and the integration of storage backup processes, including device and system trends. Second, how the features of the OpenStack Swift and Huawei storage systems meet these trends. Third, a little about the testing that has been completed in CERN openlab. (A client sketch follows this entry.)
        Speaker: James Hughes (Huawei)
        Slides
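        A minimal client-side sketch of exercising an S3-compatible store such as those discussed above, using the boto library of the period. The endpoint hostname and credentials are placeholders.

            # Hedged sketch: write and read one object via the S3 API.
            from boto.s3.connection import S3Connection, OrdinaryCallingFormat

            conn = S3Connection(aws_access_key_id="KEY",
                                aws_secret_access_key="SECRET",
                                host="s3.example.org",          # assumed endpoint
                                calling_format=OrdinaryCallingFormat())

            bucket = conn.create_bucket("hepix-test")
            key = bucket.new_key("hello.txt")
            key.set_contents_from_string("hello, cloud storage")
            print(key.get_contents_as_string())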
    • 17:30 19:00
      Batch system BoF A419 (Institute of High Energy Physics)

      Informal meeting of participants interested in LRMS (batch systems)

    • 09:00 10:15
      Security and Networking C305

      Convener: Dr David Kelsey (STFC - Science & Technology Facilities Council (GB))
      • 09:00
        Network Traffic Analysis using the Hadoop Architecture 25m
        This report introduces a network traffic analysis tool built on the Hadoop architecture. By collecting the traffic of the egress router of a campus or an institute, the tool stores the flow records (start time, end time, source IP, destination IP, bytes, packets, flows, etc.) in HDFS, a distributed file system, as well as in RRD files. In the front end, the tool uses rrdtool graphs to draw the network flow trend chart; clicking on the trend chart brings up a detailed graph of the flow information, drawn by Highstock from data read out of HDFS. The user can also select a time slot or a time window to get the netflow information, which is calculated by MapReduce jobs running in the background. Moreover, given the IP addresses related to a particular HEP experiment, the tool can report the traffic for that experiment; it is currently used to collect the network traffic of DYB, YBJ, CMS and ATLAS. The traffic can be shown in real time as well as from historical records, and hovering over a graph displays the time slot and netflow figures. (A MapReduce sketch follows this entry.)
        Speaker: Ms Shan Zeng (IHEP)
        Slides
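        A minimal sketch of the MapReduce shape of such an analysis, written as a Hadoop Streaming job in Python: summing bytes per source IP. The one-flow-per-line input format (src_ip dst_ip bytes ...) is an assumption for illustration, not the tool's actual record layout.

            # Hedged sketch: Hadoop Streaming mapper/reducer for per-IP byte totals.
            # Run e.g.: hadoop jar hadoop-streaming.jar -input flows -output totals
            #           -file flows.py -mapper "flows.py map" -reducer "flows.py reduce"
            import sys

            def mapper():
                for line in sys.stdin:
                    fields = line.split()
                    if len(fields) >= 3:
                        # key: source IP, value: bytes in this flow record
                        print("%s\t%s" % (fields[0], fields[2]))

            def reducer():
                current, total = None, 0
                for line in sys.stdin:           # streaming sorts by key for us
                    key, value = line.rstrip("\n").split("\t")
                    if key != current:
                        if current is not None:
                            print("%s\t%d" % (current, total))
                        current, total = key, 0
                    total += int(value)
                if current is not None:
                    print("%s\t%d" % (current, total))

            if __name__ == "__main__":
                mapper() if sys.argv[1:] == ["map"] else reducer()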
      • 09:25
        ZNeTS: log your network traffic! 25m
        ZNeTS is an acronym for "The Network Traffic Supervisor". It is a tool for monitoring and recording machine traffic over months: a network introspection tool, and a response to the legal requirement in France to store one year of traffic traces. ZNeTS is very easy to deploy whatever the architecture of your network. It identifies compromised local machines (through viruses, trojans, abusive or illegal usage, DNS or MAC spoofing, etc.). The ZNeTS graphical interface is intuitive and ergonomic, the integrated metrology features offer two levels of detail, and the alerts are simple and relevant. Over the last six months, the tool has been successfully deployed as an appliance in all the IN2P3 laboratories (IN2P3 being the French national research institute for nuclear and particle physics), and we have received very positive feedback from the system administrators.
        Speaker: Mr Thierry Descombes (CNRS IN2P3)
        Slides
      • 09:50
        IPv6 deployment in IHEP 25m
        Description of the IPv6 deployment at IHEP
        Speaker: Qi Fazhi (IHEP)
        Slides
    • 10:15 10:30
      Site reports C305

      Convener: Mr Alan Silverman (CERN)
      • 10:15
        Site Report of ASGC 15m
        An update on the e-Science infrastructure at ASGC will be reported. The focus will cover the international networking, computing and storage infrastructure, overall continuous operational improvement including the data center, e-Science applications and the virtual research framework, as well as the development of the distributed cloud.
        Speaker: Mr Eric Yen (ASGC)
        Slides
    • 10:30 11:00
      Coffee break 30m C305

    • 11:00 12:30
      Security and Networking C305

      Convener: Dr David Kelsey (STFC - Science & Technology Facilities Council (GB))
      • 11:00
        IPv6 deployment status at CERN 30m
        An update on the latest IPv6 changes at CERN.
        Speaker: David Gutierrez Rueda (CERN)
        Slides
      • 11:30
        FZU IPv6 testbed updates 30m
        We present updates since Vancouver on our IPv6 testbed at FZU. We have set up Nagios and SmokePing monitoring, several middleware services, and several computing centre management procedures. (A reachability-check sketch follows this entry.)
        Speaker: Marek Elias (Institute of Physics AS CR (FZU))
        Slides
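        A small self-contained sketch of the kind of IPv6 service check such a testbed wires into its monitoring: resolve a name restricted to AF_INET6 and attempt a TCP connect, exiting with Nagios-style status codes. The target host and port are placeholders.

            # Hedged sketch: a crude IPv6 TCP reachability probe.
            import socket
            import sys

            def check_ipv6(host, port, timeout=5.0):
                """Return True if a TCP connection over IPv6 succeeds."""
                try:
                    infos = socket.getaddrinfo(host, port,
                                               socket.AF_INET6, socket.SOCK_STREAM)
                except socket.gaierror:          # no AAAA record at all
                    return False
                for family, socktype, proto, _, addr in infos:
                    s = socket.socket(family, socktype, proto)
                    s.settimeout(timeout)
                    try:
                        s.connect(addr)
                        return True
                    except socket.error:
                        continue
                    finally:
                        s.close()
                return False

            if __name__ == "__main__":
                ok = check_ipv6("www.example.org", 80)   # placeholder target
                print("OK" if ok else "CRITICAL")
                sys.exit(0 if ok else 2)                 # Nagios exit-code convention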
      • 12:00
        The HEPiX IPv6 Working Group 30m
        This talk will provide an update on the activities of the IPv6 working group since the Prague meeting.
        Speaker: Dr David Kelsey (STFC - Science & Technology Facilities Council (GB))
        Slides
    • 12:30 14:00
      Lunch break 1h 30m C305

    • 14:00 15:30
      Computing C305

      Conveners: Gilles Mathieu (CNRS), Michele Michelotto (Universita e INFN (IT))
      • 14:00
        Report from batch system BOF 30m
        Report from the BOF session the day before
        Speaker: Aresh Vedaee (CC-IN2P3 - Centre de Calcul (FR))
        Slides
      • 14:30
        Oracle Grid Engine at CC-IN2P3 - report after one year 30m
        CC-IN2P3 has been running OGE for more than one year now. After describing the current context, I will report on the difficulties encountered, solved or not, and on the new enhancements we would like to see.
        Speaker: Philippe Olivero (CC-IN2P3)
        Slides
      • 15:00
        Setting up the CSP mode in a Gridengine production cluster 30m
        None of the currently available Gridengine implementations provides authenticated access with the default setup. This opens a big and easily exploitable security hole, which may be considered severe especially in multi-community clusters. This talk will describe in detail the attack vector available in such setups. It will furthermore give a step-by-step guide to activating the certificate-based authentication in Gridengine (the so-called "CSP mode"), based on the experience at DESY.
        Speaker: Andreas Haupt (Deutsches Elektronen-Synchrotron (DE))
        Slides
    • 15:30 16:00
      Coffee break 30m C305

    • 16:00 17:00
      Computing C305

      Conveners: Gilles Mathieu (CNRS), Dr Michele Michelotto (Universita e INFN (IT))
      • 16:00
        Slurm Experiences for WLCG in the Nordics 30m
        Many compute clusters in the Nordics, including the grid-connected ones, run Slurm. This talk looks at the experience: which parts work well, what could use improvement, and some comparisons with other batch systems.
        Speaker: Erik Mattias Wadenstein
      • 16:30
        Testing SLURM batch system for a grid farm: functionalities, scalability, performance and how it works in a GRID environment 30m
        We will show all the work done to install and configure the batch system itself, together with the required security configuration. In this presentation we will show the results of the deep testing we have done on SLURM, to make sure that it covers all the needed functionality: priorities, fair share, limits, QoS, failover capabilities and others. We will also report on the possibility of exploiting this batch system in a complex mixed farm environment where grid jobs, local jobs and interactive activities are managed by the same batch system. On scalability, we will show how SLURM deals with an increasing number of nodes, CPUs and jobs served, and the performance achieved with several clients accessing the same batch server. We will also make some comparisons with other available open-source batch systems, both in terms of performance and of functionality, provide feedback on a mixed configuration with SLURM and MAUI as the job scheduler, and describe the work done to support SLURM in an EGI grid environment. (A submission sketch follows this entry.)
        Speaker: Dr Giacinto Donvito (INFN-Bari)
        Slides
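        A minimal sketch of driving SLURM during functional tests of this kind: submit a trivial job with sbatch --wrap and check it with squeue. The partition name is a placeholder.

            # Hedged sketch: submit one trivial SLURM job and list it.
            import subprocess

            # --wrap turns the quoted command into a one-line batch script
            out = subprocess.check_output(
                ["sbatch", "--partition=debug", "-N", "1",
                 "--wrap", "hostname"]).decode()
            print(out.strip())                 # e.g. "Submitted batch job 12345"

            job_id = out.split()[-1]
            # -j limits the listing to that job; -h suppresses the header
            subprocess.call(["squeue", "-h", "-j", job_id])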
    • 17:00 18:30
      HEPiX board (by invitation only) A419

      Conveners: Dr Helge Meinhard (CERN), Sandy Philpott (JLAB)
    • 09:00 10:30
      Grid, Cloud and Virtualisation C305

      Conveners: Dr John Gordon (STFC - Science & Technology Facilities Council (GB)), Dr Keith Chadwick (Fermilab)
      • 09:00
        The High-Availability FermiCloud Infrastructure-as-a-Service Facility 30m
        FermiCloud is an Infrastructure-as-a-Service private cloud built for the support of scientific computing at Fermilab. Within the past year we have deployed a facility capable of providing 24x7 service. We will present significant advances in monitoring and visualization, accounting, security, authorization, and user interface. We will also present our current plans for multi-cloud interoperability.
        Speaker: Steven Timm (Fermilab)
        Slides
      • 09:30
        Scientific data cloud infrastructure and services in the Chinese Academy of Sciences 30m
        To meet the big-data challenge in scientific research, a scientific data cloud is being built in the Chinese Academy of Sciences, comprising 12 data centers and one data archive center, and providing big-data online storage, data backup, data archiving and data-intensive analysis services. This talk will introduce the infrastructure, the key technologies (including the distributed file system, virtualization and monitoring) and the services.
        Speaker: Dr Jianhui Li (No. 4 South 4th Street, Zhongguancun, Haidian District, Beijing, China)
        Slides
      • 10:00
        EGI Federated Cloud Infrastructure 30m
        Follow-up to the talk at the last HEPiX, describing recent developments and the roadmap. Detailed abstract to follow.
        Speaker: Ian Collier (UK Tier1 Centre)
        Slides
    • 10:30 11:00
      Coffee break 30m C305

    • 11:00 12:30
      Grid, Cloud and Virtualisation C305

      Conveners: Dr John Gordon (STFC - Science & Technology Facilities Council (GB)), Dr Keith Chadwick (Fermilab)
      • 11:00
        Virtualisation working group progress report 30m
        An update on the work of the virtualisation working group.
        Speaker: Tony Cass (CERN)
        Slides
      • 11:30
        Global Accounting in the Grid and Cloud 30m
        Running jobs all over the world requires a method of recording and aggregating the usage of users and VOs to present a worldwide view of that usage. The APEL accounting system has done this successfully for CPU usage in the worldwide LHC Computing Grid since 2004. This presentation will cover the evolution of APEL and of the other systems that helped collect the data. It will also report on the accounting of other types of usage (e.g. storage), how usage from the cloud can be incorporated, and the future evolution planned or required. (An aggregation sketch follows this entry.)
        Speaker: Dr John Gordon (STFC - Science & Technology Facilities Council (GB))
        Slides
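        To illustrate the kind of aggregation such an accounting chain performs, a toy sketch in pure Python summing benchmark-normalised CPU time per VO. The record fields and the numbers are invented for illustration and are not APEL's actual schema.

            # Hedged sketch: aggregate normalised CPU usage per VO.
            from collections import defaultdict

            records = [
                {"vo": "atlas", "cpu_hours": 12.0, "hs06_per_core": 8.5},
                {"vo": "cms",   "cpu_hours":  7.5, "hs06_per_core": 9.1},
                {"vo": "atlas", "cpu_hours":  3.0, "hs06_per_core": 7.9},
            ]

            totals = defaultdict(float)
            for rec in records:
                # Normalise raw CPU hours by the benchmark power of the node
                totals[rec["vo"]] += rec["cpu_hours"] * rec["hs06_per_core"]

            for vo, hs06_hours in sorted(totals.items()):
                print("%-6s %8.1f HS06-hours" % (vo, hs06_hours))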
      • 12:00
        STFC Scientific Computing Department Cloud computing 30m
        A report on various projects investigating and using cloud computing technologies across the Scientific Computing Department.
        Speaker: Ian Collier (UK Tier1 Centre)
        Slides
    • 12:30 14:00
      Lunch break 1h 30m C305

    • 14:00 15:30
      Security and Networking C305

      Convener: Dr David Kelsey (STFC - Science & Technology Facilities Council (GB))
      • 14:00
        Data Center Network changes and extension to Wigner 30m
        The latest changes in CERN's Data Centre Network will be presented, including the migration to high-end Brocade routers, the testing and introduction of 100 Gbps in the core of the LCG network, and the bandwidth increase on the firewall system. In addition, a network architecture for the data centre extension at Wigner will be discussed.
        Speaker: David Gutierrez Rueda (CERN)
        Slides
      • 14:30
        Federated Identity Management for HEP 30m
        This talk will present an update on the activities in Federated Identity since the last HEPiX meeting.
        Speaker: Dr David Kelsey (STFC - Science & Technology Facilities Council (GB))
        Slides
      • 15:00
        Cyber security update 30m
        This talk gives an update on security trends and on issues affecting computers, software applications and networks during recent months. It includes information on emerging types of vulnerabilities and recent attack vectors, and provides an insight into the cyber-security world of 2012. New security tools developed at CERN will also be presented. This talk is based on contributions and input from the CERN Computer Security Team.
        Speaker: Mr Sebastian Lopienski (CERN)
        Slides
    • 15:30 16:00
      Coffee break 30m C305

    • 16:00 17:00
      Storage and Filesystems C305

      Convener: Andrei Maslennikov (Universita e INFN, Roma I (IT))
      • 16:00
        Bringing cloud storage to your desk with Mucura 30m
        In this contribution we present our experience building a prototype of an open source software system for operating online file repositories of extensible capacity. Built on the well-understood client-server architecture model, the system can be used by computing centers looking for solutions for providing online storage services to their individual users. The client-side component runs on the end-user’s personal computer and provides both command-line and graphical user interfaces. It supports a deliberately limited set of operations on remote files, namely storing, retrieving, organizing and sharing them. Mucura exposes the same HTTP-based standard API supported by Amazon S3 and extends it to support the certificate-based authentication mechanism used by production grid computing platforms such as WLCG. As a consequence, personal file repositories based on Mucura can be seamlessly accessed both from the user’s personal computer and from grid jobs running on the user’s behalf. This integration allows researchers to use their individual online storage space as a personal storage element conveniently managed from their personal computer. At the core of the system there are components for managing file metadata and for secure storage of the files’ contents, implemented on top of highly available, distributed, persistent and scalable key-value stores. We will present a detailed architectural view of the system, the status of development and the perspectives for the months to come. This work is inspired not only by the increasing number of commercial services available nowadays to individuals for their personal storage needs (backup, file sharing, synchronization, …) such as Amazon S3, Dropbox, SugarSync, bitcasa, etc., but also by several efforts in the same area in the academic and research worlds (NASA, SDSC, etc.). We are persuaded that the level of flexibility offered to individuals by systems of this kind adds value to the day-to-day work of scientists.
        Speaker: Mr Fabio Hernandez (IN2P3/CNRS Computing Center and IHEP Computing Center)
        Slides
      • 16:30
        News from HEPiX Storage Working Group 30m
        Speaker: Andrei Maslennikov (CASPUR/CINECA)
        Slides
    • 17:00 17:30
      Grid, Cloud and Virtualisation C305

      Conveners: Dr John Gordon (STFC - Science & Technology Facilities Council (GB)), Dr Keith Chadwick (Fermilab)
      • 17:00
        CERN and Helix Nebula, the Science Cloud 30m
        Helix Nebula, the Science Cloud, is a collaborative effort of several European organizations, including CERN, ESA and EMBL, to engage with European industry in public-private partnerships to build a European cloud infrastructure capable of supporting the missions of these organisations. During the initial pilot phase of Helix Nebula, the ATLAS experiment at CERN was selected as one of the flagship projects, and a proof-of-concept phase was defined in order to demonstrate the feasibility of integrating commercial cloud facilities into the ATLAS distributed computing infrastructure. This talk will outline the status of Helix Nebula and present the results of the ATLAS use case in particular. Three commercial cloud providers with varied infrastructures were tested; all were able to run ATLAS simulation jobs successfully, though the path to success was more difficult at some providers than at others. We will give an insight into the lessons learned, the technical recommendations for the supply side and some of the future work in the Helix Nebula partnership.
        Speaker: Fernando Harald Barreiro Megino (CERN)
        Slides
    • 19:00 22:00
      Workshop banquet
    • 09:00 10:00
      IT Infrastructure C305

      Conveners: Mr Alan Silverman (CERN), Dr Helge Meinhard (CERN)
      • 09:00
        ITIL at CC-IN2P3 30m
        The IN2P3 Computing Centre cares about the quality of its services and strives to improve its processes and tools using ITIL best practices. In this talk, I'll describe what we are doing on quality and show the different pieces of ongoing work: the ticketing system, the CMDB, the service catalogue, the business continuity plan, identity management, etc. I'll take some time to go deeper into the change of our ticketing system to OTRS: why change? Which products were evaluated, and how? What changes are expected in daily work? What more than ticketing could we do with it?
        Speaker: Frédéric Azevedo (CC-IN2P3)
        Slides
      • 09:30
        JASMINE/CEMS and EMERALD 30m
        Details of the new e-Infrastructure South services at RAL, including the 4.5 PB Panasas installation and the GPU service.
        Speaker: Martin Bly (STFC-RAL)
        Slides
    • 10:00 10:30
      Miscellaneous C305

      Convener: Dr Helge Meinhard (CERN)
      • 10:00
        Mobile web development, and CERN mobile web site 30m
        Mobile computing is clearly on the rise, but developing mobile applications, especially for multiple platforms, is a considerable effort. Fortunately, there is an alternative to native apps: web sites that are optimized for mobile devices and touch screens. In this presentation, I will discuss both solutions and present a hybrid approach. The presentation will also include a brief introduction to technologies such as jQuery, jQuery Mobile and PhoneGap. Additionally, as an example of a mobile web application, the CERN mobile web site (http://m.cern.ch) will be presented. (A server-side sketch follows this entry.)
        Speaker: Mr Sebastian Lopienski (CERN)
        Slides
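        As one concrete ingredient of the mobile-optimized approach, a hedged WSGI sketch that redirects browsers with mobile-looking User-Agent strings to a mobile variant of a site. The hostnames are placeholders and the keyword list is deliberately crude.

            # Hedged sketch: crude mobile-browser detection and redirect.
            from wsgiref.simple_server import make_server

            MOBILE_HINTS = ("iphone", "android", "ipad", "mobile")

            def app(environ, start_response):
                ua = environ.get("HTTP_USER_AGENT", "").lower()
                if any(hint in ua for hint in MOBILE_HINTS):
                    # 302 to the mobile site, mirroring the m.example.org convention
                    start_response("302 Found",
                                   [("Location", "http://m.example.org/")])
                    return [b""]
                start_response("200 OK", [("Content-Type", "text/html")])
                return [b"<html><body>Desktop site</body></html>"]

            if __name__ == "__main__":
                make_server("", 8000, app).serve_forever()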
    • 10:30 11:00
      Coffee break 30m C305

    • 11:00 12:00
      Security and Networking C305

      Convener: Dr David Kelsey (STFC - Science & Technology Facilities Council (GB))
      • 11:00
        Service Provisioning and Security Guarantee in CSTNET 30m
        CSTNET is the non-profit, nationwide academic network in China, with the aim of providing Internet services and applications for the needs of scientific research and constructing an innovative network environment for the future ICT. Service provisioning and security guarantees are key factors in the success of an operational network. In this talk, we will first give an overview of CSTNET, followed by an introduction to its infrastructure. We will then demonstrate the advanced scientific network services and applications provided by CSTNET. The main services include the network management cloud service, the network security cloud service, the unified communication service, the Duckling collaborative working environment service, and the network research and experimentation service. The main network applications include light-path provisioning for eVLBI and its tracking for the Chang’E-1&2 lunar missions, massive data transmission for the IHEP-NERSC Daya Bay neutrino experiment, and integrated services for ITER. Finally, we will present how security is guaranteed in CSTNET through the network security infrastructure and the security cloud platform. The network security cloud delivers a clear and achievable path for network administrators, using the SaaS paradigm, to achieve centralized and unified monitoring and to provide multi-tenant, on-demand, location-independent network security services. Based on the security facilities and the security operating center, both personalized special services and cloud-based general services are provided for 100 institutes of CAS, including security monitoring and situation awareness, in-depth malicious code analysis, emergency response for information security, security assessment and reinforcement, and safety training.
        Speaker: Dr Yulei Wu (CSTNET)
        Slides
      • 11:30
        Networking Tools for Sysadmins 30m
        This talk introduces Netdisco, which is used at DESY for network inventory and has been slightly enhanced there; a command-line interface for the Netdisco DB, developed at DESY, will be covered as well. The talk also discusses the concept of using netflow data to understand and analyse network traffic, which is especially suitable for sites with high traffic. Programs that can deal with netflow data will be presented: the recently released version 5 of ntop, nfdump to record netflows, and nfsen to visualize the data, as well as some plugins for nfsen. All the tools covered in the talk are IPv6-enabled.
        Speaker: Dr Wolfgang Friebel (Deutsches Elektronen-Synchrotron (DE))
        Slides
    • 12:00 12:30
      Miscellaneous C305

      Convener: Dr Helge Meinhard (CERN)
      • 12:00
        Workshop wrap-up 30m
        Closing comments
        Speaker: Dr Helge Meinhard (CERN)
        Slides