14–18 Oct 2013
Amsterdam, Beurs van Berlage
Europe/Amsterdam timezone

Session

Facilities, Production Infrastructures, Networking and Collaborative Tools

14 Oct 2013, 13:30
Amsterdam, Beurs van Berlage

Damrak 243, 1012 ZJ Amsterdam


  1. Dr Randy Sobie (University of Victoria (CA))
    14/10/2013, 13:30
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will...
  2. Marcos Seco Miguelez (Universidade de Santiago de Compostela (ES)), Victor Manuel Fernandez Albor (Universidade de Santiago de Compostela (ES))
    14/10/2013, 13:52
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The datacenter at the Galician Institute of High Energy Physics (IGFAE) of the Santiago de Compostela University (USC) is a computing cluster with about 150 nodes and 1250 cores that hosts the LHCb Tier-2 and Tier-3 sites. In this small datacenter, and of course in similar or bigger ones, it is very important to maintain optimal conditions of temperature, humidity and pressure. Therefore, it is a necessity...
  3. Mr Alexandr Zaytsev (Brookhaven National Laboratory (US)), Mr Kevin CASELLA (Brookhaven National Laboratory (US))
    14/10/2013, 14:14
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The RHIC & ATLAS Computing Facility (RACF) at BNL is a 15,000 sq. ft. facility hosting the IT equipment of the BNL ATLAS WLCG Tier-1 site, offline farms for the STAR and PHENIX experiments operating at the Relativistic Heavy Ion Collider (RHIC), the BNL Cloud installation, various Open Science Grid (OSG) resources, and many other small physics-research-oriented IT installations. The facility originated...
  4. Dr Tony Wong (Brookhaven National Laboratory)
    14/10/2013, 14:36
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The advent of cloud computing centers such as Amazon's EC2 and Google's Computing Engine has elicited comparisons with dedicated computing clusters. Discussions on appropriate usage of cloud resources (both academic and commercial) and costs have ensued. This presentation discusses a detailed analysis of the costs of operating and maintaining the RACF (RHIC and ATLAS Computing Facility)...
  5. Olof Barring (CERN)
    14/10/2013, 15:45
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    In May 2012 CERN signed a contract with the Wigner Data Centre in Budapest for an extension of CERN's central computing facility beyond its current boundaries set by the electrical power and cooling available for computing. The centre is operated as a remote co-location site providing rack space, electrical power and cooling for server, storage and networking equipment acquired by CERN. The...
  6. Ben Jones (CERN), Gavin Mccance (CERN), Nacho Barrientos Arias, Steve Traylen (CERN)
    14/10/2013, 16:05
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    For over a decade CERN's fabric management system has been based on home-grown solutions. Those solutions are not dynamic enough for CERN to face its new challenges such as significantly scaling out, multi-site management and the Cloud Computing model, without any additional staff. This presentation will illustrate the motivations for CERN to move to a new tool-set in the context of the Agile...
  7. Belmiro Daniel Rodrigues Moreira (LIP Laboratorio de Instrumentacao e Fisica Experimental (LIP)-Un)
    14/10/2013, 16:25
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    CERN's Infrastructure as a Service cloud is being deployed in production across the two data centres in Geneva and Budapest. This talk will describe the experiences of the first six months of production, the different uses within the organisation and the outlook for expansion to over 15,000 hypervisors based on OpenStack by 2015. The open source toolchain used, accounting and scheduling...
  8. Pedro Andrade (CERN)
    14/10/2013, 16:45
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    Computing centres are currently facing a massive rise in virtualization and cloud computing. The Agile Infrastructure (AI) project is working to deliver new solutions to ease the management of CERN Computing Centres. Part of the solution consists in a new common monitoring infrastructure which collects and manages monitoring data of all computing centre servers and associated...
  9. Dr Jose Antonio Coarasa Perez (CERN)
    14/10/2013, 17:25
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The CMS online cluster consists of more than 3000 computers. It has been exclusively used for the data acquisition of the CMS experiment at CERN, archiving around 20 TB of data per day. An OpenStack cloud layer has been deployed on part of the cluster (totalling more than 13000 cores) as a minimal overlay so as to leave the primary role of the computers untouched while allowing an...
  10. Peter Kreuzer (Rheinisch-Westfaelische Tech. Hoch. (DE))
    14/10/2013, 17:45
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    CMS is using a tiered setup of dedicated computing resources provided by sites distributed over the world and organized in WLCG. These sites pledge resources to CMS and are preparing them specially for CMS to run the experiment's applications. But there are more resources available opportunistically both on the GRID and in local university and research clusters which can be used for CMS...
  11. Dr Andrea Sciaba (CERN)
    15/10/2013, 13:30
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The Worldwide LHC Computing Grid project (WLCG) provides the computing and storage resources required by the LHC collaborations to store, process and analyse their data. It includes almost 200,000 CPU cores, 200 PB of disk storage and 200 PB of tape storage distributed among more than 150 sites. The WLCG operations team is responsible for several essential tasks, such as the coordination of...
  12. Ramon Medrano Llamas (CERN)
    15/10/2013, 13:50
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    HammerCloud was designed and built to meet the needs of the grid community to test resources and automate operations from a user perspective. Recent developments in the IT industry propose a shift towards software-defined data centres, in which every layer of the infrastructure can be offered as a service. Testing and monitoring is an integral part of the development, validation and...
  13. Adriana Telesca (CERN)
    15/10/2013, 14:10
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    ALICE (A Large Ion Collider Experiment) is a heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The ALICE Data-AcQuisition (DAQ) system handles the data flow from the sub-detector electronics to the permanent data storage in the CERN computing center. The DAQ farm consists of about 1000 devices of many...
  14. Pawel Szostek (CERN)
    15/10/2013, 14:30
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    As Moore's Law continues to deliver more and more transistors, the mainstream processor industry is preparing to expand its investments in areas other than simple core count. These new interests include deep integration of on-chip components, advanced vector units, memory, cache and interconnect technologies. We examine these moving trends with parallelized and vectorized High Energy Physics...
  15. Jason Alexander Smith (Brookhaven National Laboratory (US))
    15/10/2013, 15:45
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    Solid state drives (SSDs) provide significant improvements in random I/O performance over traditional rotating SATA and SAS drives. While the cost of SSDs has been steadily declining over the past few years, high density SSDs remain prohibitively expensive when compared to traditional drives. Currently, 1 TB SSDs generally cost more than US $1,000, while 1 TB SATA drives typically...
  16. Shawn Mc Kee (University of Michigan (US)), Simone Campana (CERN)
    15/10/2013, 16:05
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The WLCG infrastructure moved from a very rigid network topology, based on the MONARC model, to a more relaxed system, where data movement between regions or countries does not necessarily need to involve T1 centers. While this evolution brought obvious advantages, especially in terms of flexibility for the LHC experiments' data management systems, it also opened the question of how to monitor...
  17. Dr Gabriele Garzoglio (FERMI NATIONAL ACCELERATOR LABORATORY)
    15/10/2013, 16:25
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    As the need for Big Data in science becomes ever more relevant, networks around the world are upgrading their infrastructure to support high-speed interconnections. To support its mission, the high-energy physics community, as a pioneer in Big Data, has always relied on the Fermi National Accelerator Laboratory to be at the forefront of storage and data movement. This need was reiterated...
  18. David Gutierrez Rueda (CERN)
    15/10/2013, 16:45
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The network infrastructure at CERN has evolved with the increasing service and bandwidth demands of the scientific community. Analysing the massive amounts of data gathered by the experiments requires more computational power and faster networks to carry the data. The new Data Centre in Wigner and the adoption of 100Gbps in the core of the network are the latest answers to these demands. In...
  19. Dr Tony Wildish (Princeton University (US))
    15/10/2013, 17:25
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering an operational review phase in order to concretely assess areas of possible improvement and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start, the...
  20. 15/10/2013, 17:45
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    Computing and networking infrastructures across the world continue to grow to meet the increasing needs of data intensive science, notably those of the LHC and other large high energy physics collaborations. The LHC's large data volumes challenge the technology used to interconnect widely-separated sites (and their available resources) and lead to complications in the overall process of...
  21. Dave Kelsey (STFC - Science & Technology Facilities Council (GB))
    17/10/2013, 11:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The Security for Collaborating Infrastructures (SCI) group (http://www.eugridpma.org/sci/) is a collaborative activity of information security officers from several large-scale distributed computing infrastructures, including EGI, OSG, PRACE, WLCG, and XSEDE. SCI is developing a framework to enable interoperation of collaborating Grids with the aim of managing cross-Grid operational security...
  22. Dr Wenji Wu (Fermi National Accelerator Laboratory)
    17/10/2013, 11:23
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    Fermilab is the US-CMS Tier-1 centre, as well as the main data centre for several other large-scale research collaborations. As a consequence, there is a continual need to monitor and analyse large-scale data movement between Fermilab and collaboration sites for a variety of purposes, including network capacity planning and performance troubleshooting. To meet this need, Fermilab designed and...
  23. Mr Phil Demar (Fermilab)
    17/10/2013, 11:46
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    LHC networking has always been defined by high volume data movement requirements in both LAN and WAN. LAN network demands can typically be met fairly easily with high performance data center switches, albeit at high cost. LHC WAN data movement, on the other hand, presents a more complicated and difficult set of challenges. Typically, there are three high-level issues a high traffic volume...
  24. Dave Kelsey (STFC - Science & Technology Facilities Council (GB))
    17/10/2013, 12:09
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The HEPiX (http://www.hepix.org) IPv6 Working Group has been investigating the many issues which feed into the decision on the timetable for the use of IPv6 networking protocols in HEP Computing, in particular in WLCG. RIPE NCC, the European Regional Internet Registry, ran out of IPv4 addresses in September 2012. The North and South America RIRs are expected to run out in 2014. In recent...
  25. Mr Jose Benito Gonzalez Lopez (CERN)
    17/10/2013, 13:30
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    Indico has evolved into the main event organization software, room booking tool and collaboration hub for CERN. The growth in its usage has only accelerated during the past 9 years, and today Indico holds more than 215,000 events and 1,100,000 files. The growth was also substantial in terms of functionality and improvements. In the last year alone, Indico has matured considerably in 3 key...
  26. Thomas Baron (CERN)
    17/10/2013, 13:53
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    In the last few years, we have witnessed an explosion of visual collaboration initiatives in the industry. Several advances in video services and also in their underlying infrastructure are currently improving the way people collaborate globally. These advances are creating new usage paradigms: any device in any network can be used to collaborate, in most cases with an overall high quality....
  27. Dr Maria Grazia Pia (Universita e INFN (IT))
    17/10/2013, 14:16
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The broad coverage of the search for the Higgs boson in the mainstream media is a relative novelty for HEP research, whose achievements have traditionally been limited to scholarly literature. This presentation illustrates the results of a scientometric analysis of HEP computing in scientific literature, institutional media and the press, and a comparative overview of similar metrics...
  28. Dr Dirk Hoffmann (Centre de Physique des Particules de Marseille, CNRS/IN2P3)
    17/10/2013, 14:39
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The CTA (Cherenkov Telescope Array) consortium is developing a next-generation ground-based instrument for very high energy gamma-ray astronomy, made up of approximately 100 telescopes of at least three different sizes. It presently counts more than 1000 members, of whom almost 800 have a computer account to use the "CTA web services". CTA decided in 2011 to use a SharePoint 2010 "site...