24–28 Oct 2011
Hosted by TRIUMF, SFU and the University of Victoria at the Harbour Centre - Downtown Vancouver
Canada/Pacific timezone

Contribution List

56 contributions
  1. Denice Deatrich
    24/10/2011, 09:30
    Site Reports
    Updates on the status of the Tier-1 and other TRIUMF computing news.
  2. Iwona Sakrejda
    24/10/2011, 09:45
    Site Reports
    PDSF is a networked distributed computing cluster designed primarily to meet the detector simulation and data analysis requirements of Physics, Astrophysics and Nuclear Science collaborations. Located at NERSC and benefiting from excellent network and storage infrastructure, the cluster constantly changes to keep up with the computing requirements of physics experiments like ATLAS, ALICE and Daya Bay....
  3. Dr Keith Chadwick (Fermilab)
    24/10/2011, 10:00
    Site Reports
    Fermilab Site Report - Fall 2011 HEPiX
  4. Erik Mattias Wadenstein
    24/10/2011, 10:15
    Site Reports
    Update of recent and current events at NDGF.
  5. Mr Martin Bly (STFC-RAL)
    24/10/2011, 11:00
    Site Reports
    The latest from the RAL Tier1.
  6. Dr Michele Michelotto (INFN Padua & CMS)
    24/10/2011, 11:15
    Site Reports
    Istituto Nazionale di Fisica Nucleare - Site Report
  7. Dr Helge Meinhard (CERN)
    24/10/2011, 11:30
    Site Reports
    News from CERN since last meeting
  8. Dr Shawn McKee (University of Michigan ATLAS Group)
    24/10/2011, 11:50
    Site Reports
    We will report on the ATLAS Great Lakes Tier-2 (AGLT2), one of five US ATLAS Tier-2 sites, providing a brief overview of our experiences planning, deploying, testing and maintaining our infrastructure to support the ATLAS distributed computing model. AGLT2 is one of the larger WLCG Tier-2s worldwide with 2.2 PB of dCache storage and 4500 job-slots, so we face a number of challenges in...
  9. Walter Schon
    24/10/2011, 12:05
    Site Reports
  10. Hung-Te Lee (Academia Sinica (TW))
    24/10/2011, 12:20
    Site Reports
    Site report of ASGC.
  11. Wayne Salter (CERN)
    24/10/2011, 14:00
    IT Infrastructure and Services
    There are a number of projects currently underway to improve and extend the CERN computing facilities which have been reported at previous HEPiX meetings. An update will be given on the current status of these projects and particular emphasis will be placed on efficiency improvements that have been made in the CERN Computer Centre and the resulting energy, and hence cost, savings.
  12. Jan Kundrat
    24/10/2011, 14:30
    IT Infrastructure and Services
    The proposed talk discusses the Deska project [1], our attempt at delivering an inventory database whose goal is to provide a central source of machine-readable information about one's computing center. We mention the motivation behind the project, describe the design choices we have made and talk about how the Deska system could help reduce maintenance effort on other sites.
  13. connie sieh (Fermilab)
    24/10/2011, 15:00
    IT Infrastructure and Services
    Current status and future of Scientific Linux.
  14. Veronique Lefebure (CERN)
    24/10/2011, 16:00
    IT Infrastructure and Services
    SINDES, the Secure INformation DElivery System, is a tool aimed at ensuring a sufficient level of privacy in storing and delivering confidential files. Initially written at CERN in 2005, SINDES is now being rewritten to improve its user interface, flexibility and maintainability: access control granularity, logging, file modifications, history, machine upload, unattended installations and...
  15. Matthias Schroeder (CERN)
    24/10/2011, 16:30
    IT Infrastructure and Services
    CERN has started to use OCS Inventory for the hardware and software inventory of SLC nodes on site, and plans to do the same for the MacOS nodes. I will report on the motivation for this, the setup used and the experience gained.
  16. David (John) Cownie (AMD)
    25/10/2011, 09:00
    Computing & Batch Services
    An overview of the architecture and power-efficiency features of the latest 16-core processors from AMD, including benchmark results for the HEP-SPEC suite showing the performance improvements over the current 12-core and older 6-, quad-, and dual-core processors. AMD's newest Opteron processors feature the “Bulldozer” x86 core-pair compute module, which is especially well-suited for modern...
  17. Jan Svec (Acad. of Sciences of the Czech Rep. (CZ))
    25/10/2011, 10:00
    Site Reports
    The main computing and storage facilities for LHC computing in the Czech Republic are situated at the Prague Tier-2 site. We have participated in grid activities since the beginning of the European Data Grid, and recent years have seen significant growth in our computing and storage capacities. In this talk, we will present the current state of our site, its history and plans for the near future.
  18. Dr Tony Wong (Brookhaven National Laboratory)
    25/10/2011, 10:15
    Site Reports
    This presentation will cover recent developments and operational news of the RHIC-ATLAS computing facility at BNL.
  19. Michel Jouvin (Universite de Paris-Sud 11 (FR)), Pierrick Micout (CEA)
    25/10/2011, 11:00
    Site Reports
    Site report of GRIF/LAL and GRIF/Irfu
  20. Mr Dirk Jahnke-Zumbusch (DESY)
    25/10/2011, 11:15
    Site Reports
    Current information about DESY IT, for both the Hamburg and Zeuthen sites.
  21. Mr Neal Adams (SLAC)
    25/10/2011, 11:30
    Site Reports
    Site report for SLAC.
  22. Jingyan Shi (IHEP)
    25/10/2011, 11:50
    Site Reports
    An introduction to the update activities at the IHEP computing site during the past half year.
  23. Sandy Philpott (JLAB)
    25/10/2011, 12:05
    Site Reports
    An update of scientific computing activities at JLab since the Cornell meeting, including Lustre and GPU status.
  24. Manfred Alef (Karlsruhe Institute of Technology (KIT))
    25/10/2011, 14:00
    Computing & Batch Services
    Report on current benchmark issues, e.g. the new release of SPEC CPU2006 and new processor architectures.
  25. Dr Michele Michelotto (INFN Padua & CMS)
    25/10/2011, 14:30
    Computing & Batch Services
    The HEP-SPEC06 benchmark was designed by a working group born during the HEPiX meeting at JLab. HS06 is now the standard for measuring computing power in HEP and in other scientific areas that make use of computing grids. The goal of this discussion is to understand how the HEPiX community sees the future of HS06.
  26. Mr Christopher Huhn (GSI Darmstadt)
    25/10/2011, 15:10
    IT Infrastructure and Services
    GSI has been successfully using Cfengine for configuration management for almost a decade. Even though Cfengine is powerful as well as reliable, we have started to test the configuration management system Chef as a successor or complement to Cfengine, to implement features we have been lacking up to now.
  27. Wayne Salter (CERN)
    25/10/2011, 16:00
    IT Infrastructure and Services
    A detailed study of approximately 4000 vendor interventions for hardware failures experienced in the CERN IT computing facility in 2010-2011 will be presented. The rates of parts replacement are compared for different components; as expected, disk failures dominate, with an approximately 1% quarterly replacement rate. When plotting the variation with age, a higher rate is seen in the first...
  28. Dr Giuseppe Lo Presti (CERN)
    25/10/2011, 16:30
    IT Infrastructure and Services
    The TSM server network at CERN, with its 17 TSM servers in production, 30 drives, ~1300 client nodes and ~4 PB of data, often requires an overwhelming amount of effort from the few TSM administrators to manage properly. Hence the need for a central monitoring system able to cope with the increasing number of servers, client nodes and volumes. We will discuss our approach to this issue,...
  29. Belmiro Daniel Rodrigues Moreira (CERN)
    26/10/2011, 08:30
    Grid, cloud and virtualization
    CERN is developing a set of tools to improve its cloud computing infrastructure and make it more agile. Currently there are two important active projects: CloudMan, developed in collaboration with the BARC institute, and VMIC, developed in collaboration with ASGC. The CloudMan project consists of the development of an Enterprise Graphical Management tool for IT resources...
  30. Belmiro Daniel Rodrigues Moreira (CERN)
    26/10/2011, 08:55
    Grid, cloud and virtualization
    In December 2010 CERN moved part of its batch resources into a cloud-like infrastructure, and has been running some of the batch resources in a fully virtualized infrastructure since then. This presentation will give an overview of the lessons learned from this exercise, the performance and results, impressions of operational overhead, and problems seen since the deployment of the...
  31. Owen Millington Synge
    26/10/2011, 09:25
    Grid, cloud and virtualization
    The use of signed image lists, and updates on their use.
  32. Mr Troy Dawson (Red Hat)
    26/10/2011, 09:50
    Grid, cloud and virtualization
    OpenShift is a collection of cloud services forming a solid, redefining Platform-as-a-Service (PaaS) for developers who build apps on open source technologies.
  33. Dr Tony Cass (CERN)
    26/10/2011, 10:50
    Grid, cloud and virtualization
    An update on the progress of the working group since the Spring HEPiX meeting at GSI.
  34. Iwona Sakrejda
    26/10/2011, 11:10
    Grid, cloud and virtualization
    Funded by the American Recovery and Reinvestment Act (Recovery Act) through the U.S. Department of Energy (DOE), the Magellan project was charged with a task of evaluating if cloud computing could meet specialized needs of scientists. Split between two DOE centers: the National Energy Research Scientific Computing Center (NERSC) in California and the Argonne Leadership Computing...
  35. Ian Collier (UK Tier1 Centre)
    26/10/2011, 11:35
    Grid, cloud and virtualization
    Status of work on virtualisation and cloud computing at the RAL Tier 1.
  36. Neil Johnston (Piston Cloud Computing)
    26/10/2011, 12:00
    Grid, cloud and virtualization
    OpenStack’s mission is “To produce the ubiquitous Open Source cloud computing platform that will meet the needs of public and private cloud providers regardless of size, by being simple to implement and massively scalable." This talk will review the implications of this vision for meeting the storage and compute needs of data-intensive research projects, then examine OpenStack’s potential as a...
  37. Edoardo Martelli (CERN)
    26/10/2011, 14:00
    Security & Networking
    LHCONE (LHC Open Network Environment) is the network that will provide dedicated bandwidth for LHC data transfer to Tier-2s and Tier-3s.
  38. Jason Zurawski (Internet2)
    26/10/2011, 14:30
    Security & Networking
    Scientific innovation produced by Virtual Organizations (VOs) such as the LHC demands high-capacity and highly available network technologies to link remote data creation, storage, and processing facilities. Research and Education (R&E) networks are a vital cog in this supply chain, and offer advanced capabilities to this distributed scientific project. Network operations staff spend...
  39. Mr Marek Elias (Institute of Physics AS CR, v. v. i. (FZU))
    26/10/2011, 15:00
    Security & Networking
    We are facing exhaustion of IPv4 addresses and transition to IPv6 is becoming more and more urgent. In this contribution we describe our current problems with IPv4 and our special motivation for transition to IPv6. We present our current IPv6 setup and installation of core network services like DNS and DHCPv6. We describe our PXE installation testbed and results of our experiments with...
  40. Edoardo Martelli (CERN)
    26/10/2011, 15:45
    Security & Networking
    Description of the CERN IPv6 deployment project: service definition, features, implementation plan.
  41. Dr David Kelsey (STFC - Science & Technology Facilities Council (GB))
    26/10/2011, 16:15
    Security & Networking
    This new working group was formed earlier in 2011. There have been several meetings, sub-topics have been planned and work is now well underway. This talk will present the current status and plans for the future.
  42. Simon Liu (TRIUMF (CA))
    27/10/2011, 09:00
    Storage & Filesystems
    The ATLAS Tier-1 data centre at TRIUMF provides highly efficient and scalable storage components to support LHC data analysis and production. This contribution will describe and review the storage infrastructure and configuration currently deployed at the Tier-1 data centre at TRIUMF for both disk and tape, as well as sharing past experiences. A brief outlook on test beds and future expansion...
  43. Dr Patrick Fuhrmann (DESY)
    27/10/2011, 09:30
    Storage & Filesystems
    The European Middleware Initiative is now rapidly approaching its project's halfway point. Nearly all objectives of the first year of EMI-Data have been achieved, and the feedback from the first EMI review has been very positive. Internet standards like WebDAV and NFS4.1/pNFS have been integrated into the EMI set of storage elements, and the already existing accounting record has been extended...
  44. Andrei Maslennikov (CASPUR)
    27/10/2011, 10:00
    Storage & Filesystems
  45. Mr William Maier (University of Wisconsin (US))
    27/10/2011, 11:00
    Storage & Filesystems
    The University of Wisconsin CMS Tier-2 center serves nearly a petabyte of storage and tens of thousands of hours of computation each day to the global CMS community. After seven years, the storage cluster had grown to 250 commodity servers running both the dCache distributed filesystem and the Condor batch scheduler. This multipurpose, commodity approach had quickly and efficiently scaled to...
  46. Dr Giuseppe Lo Presti (CERN)
    27/10/2011, 11:30
    Storage & Filesystems
    [Still to be confirmed] The Data and Storage Services (DSS) group at CERN develops and operates two storage solutions for the CERN Physics data, targeting both Tier0 central data recording and preservation, and user-space physics analysis. In this talk we present the current status of the two systems, CASTOR and EOS, and the foreseen evolution in the medium term.
  47. Ian Collier (UK Tier1 Centre)
    27/10/2011, 12:00
    Storage & Filesystems
    CernVM-FS has matured very quickly into a production-quality tool for distributing VO software to grid sites. CVMFS is now in production use at a number of sites. This talk will recap the technology behind CVMFS and discuss the production status of the infrastructure.
  48. Mr Roger Goff (DELL)
    27/10/2011, 14:00
    Computing & Batch Services
  49. Mr Romain Wartel (CERN)
    27/10/2011, 15:00
    Security & Networking
    This presentation provides an update of the security landscape since the last meeting. It describes the main vectors of compromises in the academic community and presents interesting recent attacks. It also covers security risks management in general, as well as the security aspects of the current hot topics in computing, for example identity federation and virtualisation.
  50. Bob Cowles (SLAC)
    27/10/2011, 15:30
    Security & Networking
    The coming of IPv6 represents the introduction of a new protocol stack, rich in features and, if the past is any guide, an interesting set of challenges for cyber security. The talk will cover both current recommendations for IPv6 configuration and open issues requiring further discussion and investigation.
  51. Mr Alan Silverman (CERN)
    28/10/2011, 09:00
    20th Anniversary
    HEPiX is 20 years old this year and this talk will try to summarise some of the significant events of those 20 years. The speaker will also try to answer the question - is HEPiX worth the money?
  52. Mr Les Cottrell (SLAC)
    28/10/2011, 09:30
    20th Anniversary
    At the inauguration of HEPiX in 1991, mainframes (and HEPVM) were on their way out with their bus & tag cables, channels with 3270 emulators and channel attached Ethernets. DEC/VMS and DECnet were still a major player in the scientific world. Mainframes and to a lesser extent VMS hosts were being replaced by Unix hosts with native TCP stacks running on thin and thicknet shared media, the...
  53. Thomas Finnern (DESY)
    28/10/2011, 10:00
    20th Anniversary
    This is a personal retrospective view on 18 years of membership in the HEPiX community. Starting in 1993, it was associated with my career as a computer system engineer, the progression of high-performance computing, and shifts of paradigm. The talk shines a few spotlights on personal and community aspects of this time by recalling projects and events.
  54. Rainer Toebbicke (CERN)
    28/10/2011, 10:45
    20th Anniversary
    Almost 20 years ago, the AFS service was born at CERN alongside a paradigm shift away from mainframe computing towards clusters. The scalable and manageable networked file system offered easy, ubiquitous access to files and greatly contributed to making this shift a success. Take a look back, with a smile rather than raised eyebrows, at how pre-Linux, pre-iPad, MegaByte and Megahertz...
  55. Mr Corrie Kost (TRIUMF)
    28/10/2011, 11:15
    20th Anniversary
    An overview of computing hardware changes from 1991 to 2011 is given from a TRIUMF perspective. Aspects discussed are Moore’s law from speed, power consumption, and cost perspectives, as well as how networks and commoditization have influenced hardware. Speculation on the near and distant future of computing hardware is provided.
  56. Sandy Philpott (JLAB)
    28/10/2011, 11:45