14–18 Oct 2013
Amsterdam, Beurs van Berlage
Europe/Amsterdam timezone

Contribution List

468 contributions
  1. 14/10/2013, 09:00
  2. David Groep (NIKHEF (NL))
    14/10/2013, 09:05
  3. Frank Linde (NIKHEF (NL))
    14/10/2013, 09:20
  4. Axel Naumann (CERN)
    14/10/2013, 09:45
    High Energy Physics is unthinkable without C++. But C++ is not the language it used to be: today it evolves continuously to respond to new requirements, and to benefit from the streamlined delivery process of new language features to compilers. How should HEP react? After a short, subjective overview of parallel languages and extensions, the main features of C++11 will be presented, including...
  5. Dr Robert Lupton (Princeton)
    14/10/2013, 11:00
    Many of the scientific computing frameworks used in 'big science' have several million lines of source code, and software engineering is amongst their most prominent challenges, be it in high-energy physics, astronomy, or other sciences. Dr Robert Lupton of Princeton University will talk about the software engineering challenges that face scientific computing and how large scale systems...
  6. Dr Kostas Glinos (European Commission)
    14/10/2013, 11:45
    Through joint efforts with the HEP community in the early days of the EU DataGrid project, through EGEE, and via EGI-InSPIRE today, the European Commission has had a profound impact on the way computing and data management for high energy physics are done. Kostas Glinos, Head of Unit eInfrastructures of the European Commission, has been with the European Commission since 1992. He leads...
  7. Sander Klous (N)
    14/10/2013, 12:15
  8. Dr Randy Sobie (University of Victoria (CA))
    14/10/2013, 13:30
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will...
  9. Tomasz Rybczynski (AGH University of Science and Technology (PL))
    14/10/2013, 13:30
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    The LHCb experiment records millions of proton collisions every second, but only a fraction of them are useful for LHCb physics. In order to filter out the "bad events", a large farm of x86 servers (~2000 nodes) has been put in place. These servers boot and run from NFS; however, they use their local disk to temporarily store data which cannot be processed in real time...
  10. Claudio Kopper
    14/10/2013, 13:30
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    The IceCube Neutrino Observatory is a cubic kilometer-scale neutrino detector built into the ice sheet at the geographic South Pole. Light propagation in glacial ice is an important component of IceCube detector simulation that requires a large number of embarrassingly parallel calculations. The IceCube collaboration recently began using GPUs in order to simulate direct propagation of...
  11. Marco Cattaneo (CERN)
    14/10/2013, 13:30
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    The LHCb experiment has taken data between December 2009 and February 2013. The data taking conditions and trigger rate have been adjusted several times to make optimal use of the luminosity delivered by the LHC and to extend the physics potential of the experiment. By 2012, LHCb was taking data at twice the instantaneous luminosity and 2.5 times the high-level trigger rate originally...
  12. Rainer Schwemmer (CERN)
    14/10/2013, 13:30
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The LHCb Data Acquisition system reads data from over 300 read-out boards and distributes them to more than 1500 event-filter servers. It uses a simple push-protocol over Gigabit Ethernet. After filtering, the data is consolidated into files for permanent storage using a SAN-based storage system. Since the beginning of data-taking many lessons have been learned and the reliability and...
  13. Alessandro Di Girolamo (CERN)
    14/10/2013, 13:30
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    The WLCG information system is just one of the many information sources that are required to populate a VO configuration database. Other sources include central portals such as the GOCDB and the OIM from EGI and OSG respectively. Providing a coherent view of all this information that has been synchronized from many different sources is a challenging activity and has been duplicated to various...
  14. Frank-Dieter Gaede (Deutsches Elektronen-Synchrotron (DE))
    14/10/2013, 13:30
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    One of the key requirements for Higgs physics at the International Linear Collider ILC is excellent track reconstruction with very good momentum and impact parameter resolution. ILD is one of the two detector concepts at the ILC. Its central tracking system comprises a highly granular TPC, an intermediate silicon tracker and a pixel vertex detector, and it is complemented by silicon...
  15. Jim Kowalkowski (Fermilab)
    14/10/2013, 13:50
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The artdaq data acquisition software toolkit has been developed within the Fermilab Scientific Computing Division to meet the needs of current and future experiments. At its core, the toolkit provides data transfer, event building, and event analysis functionality, the latter using the art event analysis framework. In the last year, functionality has been added to the toolkit in the areas...
  16. Thomas Kuhr (KIT)
    14/10/2013, 13:52
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    The Belle II experiment, a next-generation B factory experiment at KEK, is expected to record a data volume two orders of magnitude larger than its predecessor, the Belle experiment. The data size and rate are comparable to or larger than those of the LHC experiments, requiring a change of the computing model from the Belle approach, where basically all computing resources were provided by KEK, to a...
  17. Marcos Seco Miguelez (Universidade de Santiago de Compostela (ES)), Victor Manuel Fernandez Albor (Universidade de Santiago de Compostela (ES))
    14/10/2013, 13:52
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The datacenter at the Galician Institute of High Energy Physics (IGFAE) of the Santiago de Compostela University (USC) is a computing cluster with about 150 nodes and 1250 cores that hosts the LHCb Tier-2 and Tier-3. In this small datacenter, and of course in similar or bigger ones, it is very important to keep optimal conditions of temperature, humidity and pressure. Therefore, it is a necessity...
  18. Dmytro Karpenko (University of Oslo (NO))
    14/10/2013, 13:52
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    During three years of LHC data taking, the ATLAS collaboration completed three petascale data reprocessing campaigns on the Grid, with up to 2 PB of data being reprocessed every year. In reprocessing on the Grid, failures can occur for a variety of reasons, while Grid heterogeneity makes failures hard to diagnose and repair quickly. As a result, Big Data processing on the Grid must tolerate a...
  19. Prof. Gang CHEN (INSTITUTE OF HIGH ENERGY PHYSICS), Dr Wenjing Wu (IHEP, CAS)
    14/10/2013, 13:53
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    The limitation of scheduling modules and the gradual addition of disk pools in distributed storage systems often result in imbalances among their disk pools in terms of both available space and number of files. This can cause various problems for the storage system, such as single points of failure, low system throughput, and imbalanced resource utilization and system load. An algorithm named...
  20. Philippe Canal (Fermi National Accelerator Lab. (US))
    14/10/2013, 13:53
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    We will present massively parallel high energy electromagnetic particle transportation through a finely segmented detector on the Graphics Processing Unit (GPU). Simulating events of energetic particle decay in a general-purpose high energy physics (HEP) detector requires intensive computing resources, due to the complexity of the geometry as well as the physics processes applied to particles...
  21. Leo Piilonen (Virginia Tech)
    14/10/2013, 13:55
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    I will describe the charged-track extrapolation and the muon identification modules in the Belle II data analysis code library. These modules use GEANT4E to extrapolate reconstructed charged tracks outward from the Belle II Central Drift Chamber into the outer particle-identification detectors, the electromagnetic calorimeter, and the K-long and muon (KLM) detector embedded in the iron yoke...
  22. Kael Hanson (Université Libre de Bruxelles)
    14/10/2013, 14:10
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The IceCube Neutrino Observatory is a cubic kilometer-scale neutrino detector built into the ice sheet at the geographic South Pole. The online system for IceCube comprises subsystems for data acquisition, online filtering, supernova detection, and experiment control and monitoring. The observatory records astrophysical and cosmic ray events at a rate of approximately 3 kHz and selects the...
  23. Simone Campana (CERN)
    14/10/2013, 14:14
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the WLCG distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of...
  24. Mr Alexandr Zaytsev (Brookhaven National Laboratory (US)), Mr Kevin CASELLA (Brookhaven National Laboratory (US))
    14/10/2013, 14:14
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    RHIC & ATLAS Computing Facility (RACF) at BNL is a 15000 sq. ft. facility hosting the IT equipment of the BNL ATLAS WLCG Tier-1 site, offline farms for the STAR and PHENIX experiments operating at the Relativistic Heavy Ion Collider (RHIC), BNL Cloud installation, various Open Science Grid (OSG) resources, and many other small physics research oriented IT installations. The facility originated...
  25. Gerardo Ganis (CERN)
    14/10/2013, 14:15
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    The advent of private and commercial cloud platforms has opened the question of evaluating the cost-effectiveness of such solutions for computing in High Energy Physics. Google Compute Engine (GCE) is an IaaS product launched by Google as an experimental platform during 2012 and now open to the public market. In this contribution we present the results of a set of CPU-intensive and...
  26. Xavier Espinal Curull (CERN)
    14/10/2013, 14:16
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    The Data Storage and Services (DSS) group at CERN stores and provides access to the data coming from the LHC and other physics experiments. We implement specialized storage services to provide tools for optimal data management, based on the evolution of data volumes, the available technologies and the observed experiment and user usage patterns. Our current solutions are CASTOR for...
  27. Qiming Lu (Fermi National Accelerator Laboratory)
    14/10/2013, 14:16
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    Synergia is a parallel, 3-dimensional space-charge particle-in-cell code that is widely used by the accelerator modeling community. We present our work of porting the pure MPI-based code to a hybrid of CPU and GPU computing kernels. The hybrid code uses the CUDA platform, in the same framework as the pure MPI solution. We have implemented a lock-free collaborative charge-deposition algorithm...
  28. Kunihiro Nagano (High Energy Accelerator Research Organization (JP))
    14/10/2013, 14:30
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The ATLAS trigger system has been used for the online event selection for three years of LHC data-taking and is preparing for the next run. The trigger system consists of a hardware level-1 (L1) trigger and a software high-level trigger (HLT). The high-level trigger is currently implemented in a region-of-interest based level-2 (L2) stage and an event filter (EF) operating after event building, with...
  29. Claudio Grandi (INFN - Bologna)
    14/10/2013, 14:36
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    The CMS Computing Model was developed and documented in 2004. Since then the model has evolved to be more flexible and to take advantage of new techniques, but many of the original concepts remain and are in active use. In this presentation we will discuss the changes planned for the restart of the LHC program in 2015. We will discuss the changes planned in the use and definition of the...
  30. Dr Tony Wong (Brookhaven National Laboratory)
    14/10/2013, 14:36
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The advent of cloud computing centers such as Amazon's EC2 and Google's Compute Engine has elicited comparisons with dedicated computing clusters. Discussions on appropriate usage of cloud resources (both academic and commercial) and costs have ensued. This presentation discusses a detailed analysis of the costs of operating and maintaining the RACF (RHIC and ATLAS Computing Facility)...
  31. Dr Jerome LAURET (BROOKHAVEN NATIONAL LABORATORY)
    14/10/2013, 14:36
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    User Centric Monitoring (UCM) has been a long-awaited feature in STAR, whereby programs, workflows and system “events” can be logged, broadcast and later analyzed. UCM allows users to collect and filter available job monitoring information from various resources and present it in a user-centric view rather than an administrative-centric one. The first attempt and...
  32. Christos Filippidis (Nat. Cent. for Sci. Res. Demokritos (GR))
    14/10/2013, 14:39
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    Given the current state of I/O and storage systems in petascale systems, incremental solutions in most aspects are unlikely to provide the required capabilities in exascale systems. Traditionally I/O has been considered as a separate activity that is performed before or after the main simulation or analysis computation, or periodically for activities such as check-pointing, but still as...
  33. Dr Tareq AbuZayyad (University of Utah)
    14/10/2013, 14:39
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    The Telescope Array Cosmic Rays Detector located in the Western Utah Desert is used for the observation of ultra-high energy cosmic rays. The simulation of a fluorescence detector response to cosmic rays initiated air showers presents many opportunities for parallelization. In this presentation we report on the Monte Carlo program used for the simulation of the Telescope Array fluorescence...
  34. Slava Krutelyov (Texas A & M University (US))
    14/10/2013, 14:40
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    In 2012 the LHC increased both the beam energy and intensity. The former made obsolete all of the simulation data generated for 2011; the latter increased the rate of multiple proton-proton collisions (pileup) in a single event, significantly increasing the complexity of both the reconstructed and matching simulated events. Once the pileup surpassed 10, the resources needed for the software to...
  35. Gero Müller (RWTH Aachen University)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    Many programs in experimental particle physics do not yet have a graphical interface, or impose demanding platform and software requirements. With the most recent development of the VISPA project, we provide graphical interfaces to existing software programs and access to multiple computing clusters through standard web browsers. The scalable client-server system allows analyses to be performed...
  36. John Bland (University of Liverpool)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    Liverpool is consistently amongst the top Tier-2 sites in Europe in terms of efficiency and cluster utilisation. This presentation will cover the work done at Liverpool over the last six years to maximise and maintain efficiency and productivity at their Tier 2 site, with an overview of the tools used (including established, emerging, and locally developed solutions) for monitoring, testing,...
  37. Philipp Sitzmann (Goethe University Frankfurt)
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    CMOS Monolithic Active Pixel Sensors (MAPS) have demonstrated excellent performance as tracking detectors for charged particles, combining outstanding spatial resolution (a few µm), an ultra-light material budget (50 µm) and advanced radiation tolerance (>1 Mrad, >1e13 neq/cm²). They were therefore chosen for the vertex detectors of STAR and CBM and are foreseen to equip the upgraded ALICE-ITS. They...
  38. Vincenzo Spinoso (Universita e INFN (IT)), Vincenzo Spinoso (Universita e INFN (IT))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    Running and monitoring simulations usually involves several different aspects of the entire workflow: the configuration of the job, the site issues, the software deployment at the site, the file catalogue, the transfers of the simulated data. In addition, the final product of the simulation is often the result of several sequential steps. This project tries a different approach to monitoring...
  39. Daniel Hugo Campora Perez (CERN)
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
    The LHCb Software Infrastructure is built around a flexible, extensible, single-process, single-threaded framework named Gaudi. One way to optimise the overall usage of a multi-core server, which is used for example in the Online world, is running multiple instances of Gaudi-based applications concurrently. For LHCb, this solution has been shown to work well up to 32 cores and is expected...
  40. Andrea Formica (CEA/IRFU,Centre d'etude de Saclay Gif-sur-Yvette (FR))
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    The ATLAS muon alignment system is composed of about 6000 optical sensors for the barrel muon spectrometer and the same number for the two endcap wheels. The system acquires data from every sensor continuously, with a whole read-out cycle of about 10 minutes. The read-out chain stores data inside an Oracle DB. These data are used as input for the alignment algorithms (C++ based) in...
  41. Mr MA Binsong (IPN Orsay France)
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    The PANDA (AntiProton ANnihilation at DArmstadt) experiment is one of the key projects at the future Facility for Antiproton and Ion Research (FAIR), which is currently under construction at Darmstadt. This experiment will perform precise studies of antiproton-proton and antiproton-nucleus annihilation reactions. The aim of the rich experimental program is to improve our knowledge of the...
  42. Marco Clemencic (CERN)
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
    The nightly build system used so far by LHCb has been implemented as an extension of the system developed by the CERN PH/SFT group (as presented at CHEP2010). Although this version has been working for many years, it has several limitations in terms of extensibility, management and ease of use, so it was decided to develop a new version based on a continuous integration system. In this...
  43. Mr Peter Waller (University of Liverpool (GB))
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    The focus in many software architectures of the LHC experiments is to deliver a well-designed Event Data Model (EDM). Changes and additions to the stored data are often very expensive, requiring large amounts of CPU time, disk storage and man-power. At the ATLAS experiment, such a reprocessing has only been undertaken once for data taken in 2012. However, analysts have to develop and apply...
  44. Alessandro De Salvo (Universita e INFN, Roma I (IT))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    In the ATLAS experiment, the calibration of the precision tracking chambers of the muon detector is very demanding, since the rate of muon tracks required to get a complete calibration in homogeneous conditions and to feed prompt reconstruction with fresh constants is very high (several hundred Hz for 8-10 hour runs). The calculation of calibration constants is highly CPU consuming. In...
  45. Dr Salman Toor (Helsinki Institute of Physics (FI))
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments, such as the CERN HEP analyses, requires continuous exploration of new technologies and techniques. In this article we present a hybrid solution of an open source cloud with a network file system for CMS data analysis. Our aim has been to design a scalable and...
  46. Andrea Formica (CEA/IRFU,Centre d'etude de Saclay Gif-sur-Yvette (FR))
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    ATLAS Conditions data include about 2 TB in a relational database and 400 GB of files referenced from the database. Conditions data are entered and retrieved using COOL, the API for accessing data in the LCG Conditions Database infrastructure, and are managed using an ATLAS-customized, Python-based tool set. Conditions data are required for every reconstruction and simulation job, so access to...
  47. Dmitry Ozerov (D)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    In a future-proof data preservation scenario, the software and environment employed to produce and analyse high energy physics data needs to be preserved, rather than just the data themselves. A software preservation system will be presented which allows analysis software to be migrated to the latest software versions and technologies for as long as possible, substantially extending the...
  48. Gareth Roy (U), Mark Mitchell (University of Glasgow)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    With the current trend towards "On Demand Computing" in big data environments, it becomes crucial that the deployment of services and resources is increasingly automated. With open-source projects such as Canonical's MaaS and Red Hat's Spacewalk, automated deployment is available for large scale data centre environments, but these solutions can be too complex and heavyweight for smaller,...
  49. Derek John Weitzel (University of Nebraska (US))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    Bosco is a software project developed by the Open Science Grid to help scientists better utilize their on-campus computing resources. Instead of submitting jobs through a dedicated gatekeeper, as most remote submission mechanisms do, it uses the built-in SSH protocol to gain access to the cluster. By using a common access method, SSH, we are able to simplify the interaction with the...
  50. Alexey Anisenkov (Budker Institute of Nuclear Physics (RU))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    In this paper we describe the ATLAS Grid Information System (AGIS), the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing (ADC) applications and services. The Information system centrally defines and exposes the topology of the ATLAS computing infrastructure including...
  51. Ian Collier (UK Tier1 Centre), Mr Matthew James Viljoen (STFC - Science & Technology Facilities Council (GB))
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    In this paper we shall introduce the service deployment framework based on Quattor and Microsoft HyperV at the RAL Tier 1. As an example, we will explain how the framework has been applied to CASTOR in our test infrastructure and outline our plans to roll it out into full production. CASTOR is a relatively complicated open source hierarchical storage management system in production use at...
  52. Qiyan Li (Goethe University Frankfurt)
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    CBM aims to measure open charm particles from 15-40 AGeV/c heavy ion collisions by means of secondary vertex reconstruction. The measurement concept includes the use of a free-running DAQ, real time tracking, primary and secondary vertex reconstruction and a tagging of open charm candidates based on secondary vertex information. The related detector challenge will be addressed with an...
  53. Fabrizio Furano (CERN)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    In this contribution we present a vision for the use of the HTTP protocol for data management in the context of HEP, and we present demonstrations of the use of HTTP-based protocols for storage access & management, cataloguing, federation and transfer. The support of HTTP/WebDAV, provided by frameworks for scientific data access like DPM, dCache, STORM, FTS3 and foreseen for XROOTD, can be...
  54. Francesco Giacomini (INFN CNAF)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    The success of a scientific endeavor depends, often significantly, on the ability to collect and later process large amounts of data in an efficient and effective way. Despite the enormous technological progress in areas such as electronics, networking and storage, the cost of the computing factor remains high. Moreover the limits reached by some historical directions of hardware...
  55. Carlos Solans Sanchez (CERN)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    The Tile calorimeter is one of the sub-detectors of ATLAS. In order to ensure its proper operation and assess the quality of data, many tasks must be performed by means of a variety of tools that were developed independently to satisfy different needs. As a result, these systems were commonly implemented without a global perspective of the detector and lack basic software features. Besides, in some cases...
  56. Maaike Limper (CERN)
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    As part of the CERN Openlab collaboration, an investigation has been made into the use of an SQL-based approach for physics analysis with various up-to-date software and hardware options. Currently physics analysis is done using data stored in customised ROOT ntuples that contain only the variables needed for a specific analysis. Production of these ntuples is mainly done by accessing the...
  57. Dr Giacinto Donvito (INFN-Bari)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    The Italian community in CMS has built a geographically distributed network in which all the data stored in the Italian region are available to all users for their everyday work. This activity involves, at different levels, all the CMS centers: the Tier1 at CNAF, all four Tier2s (Bari, Rome, Legnaro and Pisa), and a few Tier3s (Trieste, Perugia, etc). The federation uses the new network...
  58. Dr Samuel Cadellin Skipsey
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    Of the three most widely used implementations of the WLCG Storage Element specification, Disk Pool Manager (DPM) has the simplest implementation of file placement balancing (StoRM doesn't attempt this, leaving it to the underlying filesystem, which can be very sophisticated in itself). DPM uses a round-robin algorithm (with optional filesystem weighting) for placing files across...
  59. Shaun De Witt (STFC - Science & Technology Facilities Council (GB))
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
At the RAL Tier-1 we have been successfully running a CASTOR HSM instance for a number of years. While it performs well for disk-only storage for analysis and processing jobs, it is heavily optimised for tape usage. We have been investigating alternative technologies that could be used for online storage for analysis. We present the results of our preliminary selection and test results for...
  60. Dr Massimiliano Nastasi (INFN Milano-Bicocca)
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
To reach an optimum level of accuracy, measurements of radioactive sources require a precise determination of the detection efficiency of the experimental setup. In gamma-ray spectroscopy in particular, the high sensitivity reached nowadays demands a correct evaluation of the capability to detect source-emitted photons. The standard approach, based on an analytical...
  61. David Cameron (University of Oslo (NO))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
Grid middleware suites provide tools to perform the basic tasks of job submission and retrieval and data access; however, these tools tend to be low-level, operating on individual jobs or files and lacking higher-level concepts. User communities therefore generally develop their own application-layer software catering to their specific needs on top of the Grid middleware....
  62. Dr Roberto Ammendola (INFN Roma Tor Vergata)
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
    Modern Graphics Processing Units (GPUs) are now considered accelerators for general purpose computation. A tight interaction between the GPU and the interconnection network is the strategy to express the full potential on capability computing of a multi-GPU system on large HPC clusters; that is why an efficient and scalable interconnect is a key technology to finally deliver GPUs for...
  63. Dr Jörg Meyer (KIT - Karlsruher Institute of Technology)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
After analysis and publication, there is no need to keep experimental data online on spinning disks. For reasons of reliability and cost, inactive data are moved to tape and put into a data archive. Following a recommendation of the German Research Foundation (DFG), the data archive must provide reliable access for at least ten years, but many scientific communities wish to keep data available much longer....
  64. Mario Lassnig (CERN)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
Rucio is the successor of the current Don Quijote 2 (DQ2) system for the distributed data management (DDM) system of the ATLAS experiment. The reasons for replacing DQ2 are manifold, but besides high maintenance costs and architectural limitations, scalability concerns are on top of the list. The data collected so far by the experiment adds up to about 115 petabytes spread over 270 million...
  65. Jaroslava Schovancova (Brookhaven National Laboratory (US))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
The ATLAS Distributed Computing (ADC) Monitoring targets three groups of customers: ADC Operations, ATLAS Management, and ATLAS sites and funding agencies. The main need of ADC Operations is to identify malfunctions early and then escalate issues to an activity or a service expert. ATLAS Management uses visualisation of long-term trends and accounting information about the ATLAS...
  66. Alexey Sedov (Universitat Autònoma de Barcelona)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
ATLAS Distributed Computing Operation Shifts have evolved to meet new requirements. New monitoring tools as well as operational changes led to modifications in the organization of the shifts. In this paper we describe the roles of the shifts and their impact on the smooth operation of the complex computing grid employed in ATLAS, the influence of the discovery of a Higgs-like particle on shift operations, the...
  67. Cedric Serfon (CERN)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
The current ATLAS Distributed Data Management system (DQ2) is being replaced by a new one called Rucio. The new system has many improvements, but it requires a number of changes. One of the most significant is that Rucio will not use a local file catalogue such as the LFC, which was a central component of DQ2. Instead of querying a file catalogue that stores the association of files with...
  68. Tom Uram (ANL)
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
A number of HEP software packages used by the ATLAS experiment, including GEANT4, ROOT and ALPGEN, have been adapted to run on the IBM Blue Gene supercomputers at the Argonne Leadership Computing Facility. These computers use a non-x86 architecture and have a considerably less rich operating environment than that in common use in HEP, but they also represent a computing capacity an order of magnitude...
  69. Dr Alexander Undrus (Brookhaven National Laboratory (US))
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
    The ATLAS Nightly Build System is a facility for automatic production of software releases. Being the major component of ATLAS software infrastructure, it supports more than 50 multi-platform branches of nightly releases and provides vast opportunities for testing new packages, for verifying patches to existing software, and for migrating to new platforms and compilers. The Nightly System...
  70. Grigori Rybkin (Universite de Paris-Sud 11 (FR))
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
The ATLAS software code base is over 7 million lines, organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers, and is used by more than 2500 physicists from over 200 universities and laboratories on six continents. To meet the challenge of configuring and building this software, the Configuration Management Tool (CMT) is...
  71. Jason Alexander Smith (Brookhaven National Laboratory (US))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
Public clouds are quickly becoming a cheap and easy way to dynamically add computing resources to a local site to help handle peak computing demands. As cloud use continues to grow, the HEP community is looking to run more than just simple homogeneous VM images, which run basic data analysis batch jobs. The growing need for heterogeneous server configurations demands better...
  72. Jason Alexander Smith (Brookhaven National Laboratory (US))
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
Running a stable production service environment is important in any field. To accomplish this, a proper configuration management system is necessary, along with good change management policies. Proper testing and validation are required to protect against software or configuration changes to production services that can cause major disruptions. In this paper, we discuss how we extended...
  73. Dr Jorge Luis Rodriguez (UNIVERSITY OF FLORIDA)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
With the explosion of big data in many fields, the efficient management of knowledge about all aspects of the data analysis gains in importance. A key feature of collaboration in large-scale projects is keeping a log of what is being done and how - for private use and reuse, and for sharing selected parts with collaborators and peers, often distributed geographically on an increasingly...
  74. John Hover (Brookhaven National Laboratory (BNL)-Unknown-Unknown)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    AutoPyFactory (APF) is a next-generation pilot submission framework that has been used as part of the ATLAS workload management system (PanDA) for two years. APF is reliable, scalable, and offers easy and flexible configuration. Using a plugin-based architecture, APF polls for information from configured information and batch systems (including grid sites), decides how many additional pilot...
  75. Ludmila Marian (CERN)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    The volume of multimedia material produced by CERN is growing rapidly, fed by the increase of dissemination activities carried out by the various outreach teams, such as the central CERN Communication unit and the Experiments Outreach committees. In order for this multimedia content to be stored digitally for the long term, to be made available to end-users in the best possible conditions and...
  76. Ian Peter Collier (STFC - Science & Technology Facilities Council (GB))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
In the last three years the CernVM Filesystem (CernVM-FS) has transformed the distribution of experiment software to WLCG grid sites. CernVM-FS removes the need for local installation jobs and high-performance network file servers at sites, and often improves performance at the same time. Furthermore, the use of CernVM-FS standardizes the computing environment across the grid and removes...
  77. Stefano Dal Pra (Unknown)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
At the Italian Tier-1 Center at CNAF we are evaluating the possibility of changing the current production batch system. This activity is motivated mainly by the search for a more flexible licensing model and by the desire to avoid vendor lock-in. We performed a technology-tracking exercise and, among many possible solutions, chose to evaluate Grid Engine as an alternative because its...
  78. Victor Manuel Fernandez Albor (Universidade de Santiago de Compostela (ES))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
Communities at different locations are running their computing jobs on dedicated infrastructures without the need to worry about software, hardware or even the site where their programs are going to be executed. Nevertheless, this usually implies that they are restricted to using certain types or versions of an Operating System, because either their software needs a specific version of a system...
  79. Kenneth Bloom (University of Nebraska (US))
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
To impart hands-on training in physics analysis, the CMS experiment initiated the CMS Data Analysis School (CMSDAS). It was born three years ago at the LPC (LHC Physics Center) at Fermilab and is based on earlier workshops held at the LPC and at the CLEO experiment. As CMS transitioned from construction to data taking, the nature of the training also evolved to include more of...
  80. Mr Igor Sfiligoi (University of California San Diego)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The CMS experiment at the Large Hadron Collider is relying on the HTCondor-based glideinWMS batch system to handle most of its distributed computing needs. In order to minimize the risk of disruptions due to software and hardware problems, and also to simplify the maintenance procedures, CMS has set up its glideinWMS instance to use most of the attainable High Availability (HA) features. The...
  81. Mrs Ianna Osborne (Fermi National Accelerator Lab. (US))
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
CMS faces real challenges with the upgrade of the CMS detector through 2020. One of the challenges, from the software point of view, is managing upgrade simulations with the same software release as the 2013 scenario. We present the CMS geometry description software model and its integration with the CMS event setup and core software. The CMS geometry configuration and selection is implemented in...
  82. Dr Tony Wildish (Princeton University (US))
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
During the first LHC run, CMS saturated one hundred petabytes of storage resources with data. Storage accounting and monitoring help to meet the challenges of storage management, such as efficient space utilization, fair share between users and groups, and further resource planning. We present a newly developed CMS space-monitoring system based on the storage dumps produced at the sites. Storage...
  83. Marco Mascheroni (Universita & INFN, Milano-Bicocca (IT))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
The distributed data analysis workflow in CMS assumes that jobs run in a different location to where their results are finally stored. Typically the user outputs must be transferred from one site to another by a dedicated CMS service, AsyncStageOut. This new service was originally developed to address the inefficient use of CMS computing resources when transferring the analysis job...
  84. Dr Edward Karavakis (CERN)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    The ATLAS Experiment at the Large Hadron Collider has been collecting data for three years. The ATLAS data are distributed, processed and analysed at more than 130 grid and cloud sites across the world. The total throughput of transfers is more than 5 GB/s and data occupies more than 120 PB on disk and tape storage. At any given time, there are more than 100,000 concurrent jobs running and...
  85. Moritz Kretz (Ruprecht-Karls-Universitaet Heidelberg (DE))
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
    In 2014 the Insertable B-Layer (IBL) will extend the existing Pixel Detector of the ATLAS experiment at CERN by 12 million additional pixels. As with the already existing pixel layers, scanning and tuning procedures need to be employed for the IBL to account for aging effects and guarantee a unified response across the detector. Scanning the threshold or time-over-threshold of a front-end...
  86. Carlos Solans Sanchez (CERN)
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    After two years of operation of the LHC, the ATLAS Tile Calorimeter is undergoing the consolidation process of its front-end electronics. The first layer of certification of the repairs is performed in the experimental area with a portable test-bench which is capable of controlling and reading out all the inputs and outputs of one front-end module through dedicated cables. This testbench has...
  87. Line Everaerts (CERN)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    Using the framework of ITIL best practises, the service managers within CERN-IT have engaged into a continuous improvement process, mainly focusing on service operation. This implies an explicit effort to understand and improve all service management aspects in order to increase efficiency and effectiveness. We will present the requirements, how they were addressed and share our experiences....
  88. Mr Hiroyuki Maeda (Hiroshima Institute of Technology)
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    DAQ-Middleware is a software framework for a network-distributed data acquisition (DAQ) system that is based on the Robot Technology Middleware (RTM). The framework consists of a DAQ-Component and a DAQ-Operator. The basic functionalities such as transferring data, starting and stopping the system, and so on, are already prepared in the DAQ-Components and DAQ-Operator. The DAQ-Component is...
  89. Dr Andrea Valassi (CERN)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    CORAL and COOL are two software packages that are widely used by the LHC experiments for the management of conditions data and other types of data using relational database technologies. They have been developed and maintained within the LCG Persistency Framework, a common project of the CERN IT department with ATLAS, CMS and LHCb. The project used to include the POOL software package,...
  90. Niko Neufeld (CERN)
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
LHCb will have an upgrade of its detector in 2018. After the upgrade, the LHCb experiment will run at a high luminosity of 2x10^33 cm^-2 s^-1. The upgraded detector will be read out at 40 MHz with a highly flexible software-based triggering strategy. The Data Acquisition (DAQ) system of LHCb reads out the data fragments from the Front-End Electronics and transports them to the High-Level...
  91. Ruslan Asfandiyarov (Universite de Geneve (CH)), Yordan Ivanov Karadzhov (Universite de Geneve (CH))
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    The Electron-Muon Ranger (EMR) is a totally active scintillator detector which will be installed in the muon beam of the Muon Ionization Cooling Experiment (MICE), the main R&D project for a future neutrino factory. It is designed to measure the properties of a low energy beam composed of muons, electrons and pions, and to perform an identification on a particle by particle basis. The EMR is...
  92. Evan Niner (Indiana University), Mr Zukai Wang (University of Virginia)
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
The NOvA experiment at Fermi National Accelerator Lab, due to its unique readout and buffering design, is capable of accessing physics beyond the core neutrino oscillations program for which it was built. In particular the experiment is able to search for evidence of relic cosmic magnetic monopoles and for signs of the neutrino flash from a nearby supernova through the use of a specialized...
93. Katarzyna Wichmann (DESY)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
The data preservation project at DESY was established in 2008, shortly after data taking ended at the HERA ep collider, and soon came under the umbrella of the DPHEP global initiative. All experiments are implementing data preservation schemes to allow long-term analysis of their data, in cooperation with the DESY IT division. These novel schemes include software validation and...
  94. Dr Bodhitha Jayatilaka (Fermilab)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF experiment has nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the CDF data is present at Fermilab. The Fermilab Run II...
  95. Dr Michael Kirby (Fermi National Accelerator Laboratory)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    The Tevatron experiments have entered their post-data-taking phases but are still producing physics output at a high rate. The D0 experiment has initiated efforts to preserve both data access and full analysis capability for the collaboration members through at least 2020. These efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout...
  96. Mr Tao Lin (Institute of High Energy Physics)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
Data transfer is an essential part of grid computing. In the BESIII experiment, the results of Monte Carlo simulation must be transferred back from other sites to IHEP, and the DST files for physics analysis must be transferred from IHEP to other sites. A robust transfer system should ensure that all data are transferred correctly. DIRAC consists of cooperating distributed services and light-weight...
  97. Kai Leffhalm (Deutsches Elektronen-Synchrotron (DE))
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    The dCache storage system writes billing data into flat files or a relational database. For a midsize dCache installation there are one million entries - representing 300 MByte - per day. Gathering accounting information for a longer time interval about transfer rates per group, per file type or per user results in increasing load on the servers holding the billing information. Speeding up...
  98. Gancho Dimitrov (CERN)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. The DCS Data Viewer (DDV) is an application that provides access to historical data of the ATLAS Detector Control System (DCS) parameters and their corresponding alarm information. It features a server-client architecture: the pythonic server serves as interface to the Oracle-based conditions database and...
  99. Andreas Petzold (KIT)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
GridKa, the German WLCG Tier-1 site hosted by the Steinbuch Centre for Computing at Karlsruhe Institute of Technology, is a collaboration partner in the HEPiX IPv6 testbed. A special IPv6-enabled GridFTP server was installed previously. In 2013, the IPv6 efforts will be increased, and the installation of a new Mini-Grid site has already been started. This Mini-Grid installation is planned as a...
  100. Franco Brasolin (Universita e INFN (IT))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
With the LHC collider at CERN currently going through the period of Long Shutdown 1 (LS1), there is a remarkable opportunity to use the computing resources of the large trigger farms of the experiments for other data-processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is particularly suitable for running Monte Carlo production jobs...
  101. Tai Sakuma (Texas A & M University (US))
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    We describe the creation of 3D models of the CMS detector and events using SketchUp, a 3D modelling program. SketchUp provides a Ruby API with which we interface with the CMS Detector Description, the master source of the CMS detector geometry, to create detailed 3D models of the CMS detector. With the Ruby API we also interface with the JSON-based event format used for the iSpy event display...
  102. Sergey Belogurov (ITEP Institute for Theoretical and Experimental Physics (RU))
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
Detector geometry exchange between CAD systems and physical Monte Carlo (MC) packages, such as ROOT and Geant4, is a labor-intensive process necessary for fine design optimization. CAD and MC geometries have completely different structures and hierarchies. For this reason automatic conversion is possible only for very simple shapes. The CATIA-GDML Geometry Builder is a tool which facilitates...
  103. Xavier Espinal Curull (CERN)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
After the strategic decision in 2011 to separate Tier-0 activity from analysis, CERN-IT developed EOS as a new petascale disk-only solution to address the fast-growing need for high-performance, low-latency data access. EOS currently holds around 22 PB of usable space for the four big experiments (ALICE, ATLAS, CMS, LHCb), and we expect to grow to over 30 PB this year. EOS is one of the first production...
  104. Luisa Arrabito (LUPM Université Montpellier 2, IN2P3/CNRS)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    DIRAC (Distributed Infrastructure with Remote Agent Control) is a general framework for the management of tasks over distributed heterogeneous computing environments. It has been originally developed to support the production activities of the LHCb (Large Hadron Collider Beauty) experiment and today is extensively used by several particle physics and biology communities. Current (Fermi-LAT,...
  105. Dr Armando Fella (INFN Pisa), Mr Bruno Santeramo (INFN Bari), Cristian De Santis (Universita degli Studi di Roma Tor Vergata (IT)), Dr Giacinto Donvito (INFN-Bari), Marcin Jakub Chrzaszcz (Polish Academy of Sciences (PL)), Mr Milosz Zdybal (Institute of Nuclear Physics, Polish Academy of Science), Rafal Zbigniew Grzymkowski (P)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
In the HEP computing context, R&D studies aimed at defining the data and workload models were carried forward by the SuperB community beyond the life of the experiment itself. This work is considered of great interest for generic mid- and small-sized VOs that need to exploit the Grid for CPU-intensive tasks. We present the R&D line achievements in the design, developments...
  106. Dr Tony Wong (Brookhaven National Laboratory)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
The RHIC and ATLAS Computing Facility (RACF) at Brookhaven Lab is a dedicated data center serving the needs of the RHIC and US ATLAS communities. Since it began operations in the mid-1990s, it has operated continuously with few unplanned downtimes. In the last 24 months, Brookhaven Lab has been affected by two hurricanes and a record-breaking snowstorm. In this presentation, we discuss...
  107. Justin Lewis Salmon (University of the West of England (GB))
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    The Extended ROOT Daemon (XRootD) is a distributed, scalable system for low-latency clustered data access. XRootD is mature and widely used in HEP, both standalone and as core functionality for the EOS system at CERN, and hence requires extensive testing to ensure general stability. However, there are many difficulties posed by distributed testing, such as cluster initialization,...
  108. Stefano Piano (INFN (IT))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    Since 2003 the computing farm hosted by the INFN T3 facility in Trieste supports the activities of many scientific communities. Hundreds of jobs from 45 different VOs, including those of the LHC experiments, are processed simultaneously. The currently available shared disk space amounts to about 300 TB, while the computing power is provided by 712 cores for a total of 7400 HEP-SPEC06. Given...
  109. Dr Jorge Luis Rodriguez (UNIVERSITY OF FLORIDA)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
We have developed remote data access for large volumes of data over the Wide Area Network based on the Lustre filesystem and Kerberos authentication for security. In this paper we explore a prototype for two-step data access from worker nodes at Florida T3 centers, located behind a firewall and using a private network, to data hosted on the Lustre filesystem at the University of Florida CMS T2...
  110. Ian Gable (University of Victoria (CA))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
It has been shown possible to run HEP workloads on remote IaaS cloud resources. Typically each running Virtual Machine (VM) makes use of the CERN VM Filesystem (CVMFS), a caching HTTP file system, to minimize the size of the VM images and to simplify software installation. Each VM must be configured with an HTTP web cache, usually a Squid cache, in close proximity in order to function efficiently....
111. Dr Raul Lopes (School of Design and Engineering - Brunel University, UK)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
The performance of hash-function computations can impose a significant workload on SSL/TLS authentication servers. In the WLCG this workload also shows up in the computation of data-transfer checksums. It has been shown in the EGI grid infrastructure that the checksum computation can double the I/O load for large file transfers, leading to an increase in re-transfers and timeout errors. Storage...
  112. Tomas Kouba (Acad. of Sciences of the Czech Rep. (CZ))
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    The production usage of the new IPv6 protocol is becoming reality in the HEP community and the Computing Centre of the Institute of Physics in Prague participates in many IPv6 related activities. Our contribution will present experience with monitoring in HEPiX distributed IPv6 testbed which includes 11 remote sites. We use Nagios to check availability of services and Smokeping for...
    Go to contribution page
  113. Mr Igor Sfiligoi (University of California San Diego)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The basic premise of pilot systems is to create an overlay scheduling system on top of leased resources. By definition, leases have a limited lifetime, so any job scheduled on such resources must finish before the lease is over, or it will be killed and all its computation wasted. In order to schedule jobs to resources effectively, the pilot system thus requires the expected...
    Go to contribution page
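    The matching described above can be sketched as a simple filter: only jobs whose expected runtime fits inside the remaining lease lifetime are candidates for a pilot. A minimal illustration (field names are assumptions, not the pilot system's actual schema):

    ```python
    def schedulable(jobs, lease_remaining_s):
        # keep only jobs expected to finish before the lease expires;
        # anything longer would be killed and its computation wasted
        return [j for j in jobs if j["expected_runtime_s"] <= lease_remaining_s]

    jobs = [
        {"id": 1, "expected_runtime_s": 3_600},   # 1 hour: fits
        {"id": 2, "expected_runtime_s": 90_000},  # 25 hours: does not fit
    ]
    # with one day left on the lease, only job 1 is eligible
    assert [j["id"] for j in schedulable(jobs, 86_400)] == [1]
    ```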
  114. Ian Fisk (Fermi National Accelerator Lab. (US))
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    The Fermilab CMS Tier-1 facility provides processing, networking, and storage as one of seven Tier-1 facilities for the CMS experiment. The storage consists of approximately 15 PB of online/nearline disk managed by the dCache file system, and 22 PB of tape managed by the Enstore mass storage system. Data is transferred to and from computing centers worldwide using the CMS-developed PhEDEx...
    Go to contribution page
  115. Guenter Duckeck (Experimentalphysik-Fakultaet fuer Physik-Ludwig-Maximilians-Uni), Dr Johannes Ebke (Ludwig-Maximilians-Univ. Muenchen (DE)), Sebastian Lehrack (LMU Munich)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The Apache Hadoop software is a Java-based framework for distributed processing of large data sets across clusters of computers, using the Hadoop Distributed File System (HDFS) for data storage and backup and MapReduce as a processing platform. Hadoop is primarily designed for processing large textual data sets which can be processed in arbitrary chunks, and must be adapted to the use case of...
    Go to contribution page
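    The MapReduce model the abstract refers to can be sketched in a few lines of plain Python — a toy stand-in for Hadoop's Java API, shown only to illustrate the map/shuffle/reduce flow:

    ```python
    from collections import defaultdict
    from itertools import chain

    def map_phase(records, mapper):
        # each input record is mapped independently to (key, value) pairs
        return chain.from_iterable(mapper(r) for r in records)

    def reduce_phase(pairs, reducer):
        # shuffle: group values by key, then reduce each group
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return {k: reducer(k, vs) for k, vs in groups.items()}

    # classic word count over text chunks
    lines = ["higgs boson", "boson"]
    pairs = map_phase(lines, lambda line: [(w, 1) for w in line.split()])
    counts = reduce_phase(pairs, lambda k, vs: sum(vs))
    assert counts == {"higgs": 1, "boson": 2}
    ```

    Event data breaks the "arbitrary chunks" assumption because records are structured and must not be split mid-event — hence the adaptation work the abstract describes.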
  116. Ian Fisk (Fermi National Accelerator Lab. (US))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The physics event reconstruction in LHC/CMS is one of the biggest challenges for computing. Among the different tasks that computing systems perform, reconstruction takes most of the available CPU resources. The reconstruction time of a single event varies according to the event complexity. Measurements were made to determine this correlation precisely, creating means to...
    Go to contribution page
  117. Ian Fisk (Fermi National Accelerator Lab. (US))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    CMS production and analysis job submission is based largely on glideinWMS and pilot submissions. The transition from multiple different submission solutions, such as gLite WMS and HTCondor-based implementations, was carried out over several years and is now coming to a conclusion. The historically grown separate glideinWMS pools for different types of production jobs and analysis jobs are being unified...
    Go to contribution page
  118. Prof. Jesus Marco (IFCA (CSIC-UC) Santander Spain)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    At the end of the LEP era, the strategy for the long-term preservation of physics results and the data processing framework was not obvious. One of the possibilities analyzed at the time, prior to the widespread adoption of virtualization techniques, was the setup of a dedicated farm, to be preserved in its original state for the medium to long term, at least until the new data from LHC could...
    Go to contribution page
  119. Wim Lavrijsen (Lawrence Berkeley National Lab. (US))
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
    Intel recently released the first commercial boards of its Many Integrated Core (MIC) Architecture. MIC is Intel's solution for the domain of throughput computing, currently dominated by general purpose programming on graphics processors (GPGPU). MIC allows the use of the more familiar x86 programming model and supports standard technologies such as OpenMP, MPI, and Intel's Threading Building...
    Go to contribution page
  120. Laura Sargsyan (ANSL (Yerevan Physics Institute) (AM))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The organization of the distributed user analysis on the Worldwide LHC Computing Grid (WLCG) infrastructure is one of the most challenging tasks among the computing activities at the Large Hadron Collider. The Experiment Dashboard offers a solution that not only monitors but also manages (kill, resubmit) user tasks and jobs via a web interface. The ATLAS Dashboard Task Monitor provides...
    Go to contribution page
  121. Boris Wagner (University of Bergen (NO) for the ALICE Collaboration)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The Nordic Tier-1 for the LHC is distributed over several, sometimes smaller, computing centers. In order to minimize administration effort, we are interested in running different grid jobs over one common grid middleware. ARC is selected as the internal middleware in the Nordic Tier-1. The AliEn grid middleware, used by ALICE, has a different design philosophy than ARC. In order to use most of...
    Go to contribution page
  122. Jakub Cerkala (Technical University of Košice), Slávka Jadlovská (Department of Cybernetics and Artificial Intelligence, Faculty of Electrical Engineering and Informatics, Technical University of Košice)
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    ALICE controls data produced by the commercial SCADA system WinCC OA is stored in an Oracle database on the private experiment network. The SCADA system allows for basic access and processing of the historical data. More advanced analysis requires tools like ROOT and therefore needs a separate access method to the archives. The present scenario expects that detector experts create simple...
    Go to contribution page
  123. Max Fischer (KIT - Karlsruhe Institute of Technology (DE))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The CMS collaboration is successfully using glideinWMS for managing grid resources within the WLCG project. The glidein mechanism, with HTCondor underneath, provides a clear separation of responsibilities between administrators operating the service and users utilizing computational resources. German CMS collaborators (dCMS) have explored modern capabilities of glideinWMS and are aiming at...
    Go to contribution page
  124. Dennis Box (F)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The Fermilab Intensity Frontier Experiments use an integrated submission system known as FIFE-jobsub, part of the FIFE (Fabric for Frontier Experiments) initiative, to submit batch jobs to the Open Science Grid. FIFE-jobsub eases the burden on experimenters by integrating data transfer and site selection details in an easy to use and well documented format. FIFE-jobsub automates tedious...
    Go to contribution page
  125. Johannes Philipp Grohs (Technische Universitaet Dresden (DE))
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    The readout of the trigger signals of the ATLAS Liquid Argon (LAr) calorimeters is foreseen to be upgraded in order to prepare for operation during the first high-luminosity phase of the Large Hadron Collider (LHC). Signals with improved spatial granularity are planned to be received from the detector by a Digital Processing System (DPS) in ATCA technology and will be sent in real-time to the...
    Go to contribution page
  126. Dr Piotr Golonka (CERN)
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    Rapid growth of popularity of web applications gives rise to a plethora of reusable graphical components, such as Google Chart Tools or jQuery Sparklines, implemented in JavaScript and running inside a web browser. In the paper we describe the tool that allows for seamless integration of web-based widgets into WinCC Open Architecture, the SCADA system used commonly at CERN to build complex...
    Go to contribution page
  127. Laurent Garnier (LAL-IN2P3-CNRS)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    Geant4 application in a web browser. Geant4 is a toolkit for the simulation of the passage of particles through matter. The Geant4 visualization system supports many drivers, including OpenGL, OpenInventor, HepRep, DAWN, VRML, RayTracer, gMocren and ASCIITree, with diverse and complementary functionalities. Web applications have an increasing role in our work, and thanks to emerging...
    Go to contribution page
  128. Dr Thomas Kittelmann (European Spallation Source ESS AB)
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    The construction of the European Spallation Source ESS AB, which will become the world's most powerful source of cold and thermal neutrons (meV scale), is about to begin in Lund, Sweden, breaking ground in 2014 and coming online towards the end of the decade. Currently 22 neutron-scattering instruments are planned as the baseline suite at the facility, and a crucial part of each such beam-line...
    Go to contribution page
  129. Prof. Vladimir Ivantchenko (CERN)
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    The electromagnetic physics sub-package of the Geant4 Monte Carlo toolkit is an important component of LHC experiment simulation and other Geant4 applications. In this work we present recent progress in Geant4 electromagnetic physics modeling, with an emphasis on the new refinements for the processes of multiple and single scattering, ionisation, high-energy muon interactions, and gamma-induced...
    Go to contribution page
  130. Aurelie Pascal (CERN)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    CERN has recently renewed its obsolete VHF firemen’s radio network and replaced it with a digital one based on TETRA technology. TETRA already integrates an outdoor GPS localization system, but it appeared essential to look for a solution to also locate TETRA users in CERN’s underground facilities. The system which addresses this problem, and which has demonstrated good resistance to...
    Go to contribution page
  131. Oliver Keeble (CERN)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The GLUE 2 information schema is now fully supported in the production EGI/WLCG information system. However, to make the schema usable and allow clients to rely on the information it is important that the meaning of the published information is clearly defined, and that information providers and site configurations are validated to ensure as far as possible that what they publish is correct....
    Go to contribution page
  132. Dr Yaodong CHENG (Institute of High Energy Physics,Chinese Academy of Sciences)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    The Gluster file system adopts a no-metadata architecture, which theoretically eliminates both a central point of failure and the performance bottleneck of a metadata server. This talk will first introduce Gluster in comparison with Lustre and Hadoop. However, some of its mechanisms are not so good in the current version. For example, it has to read the extended attributes of all bricks to locate one file. And it is...
    Go to contribution page
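    The no-metadata idea can be illustrated with a toy hash-based lookup: the brick holding a file is computed from the file name, so no central metadata server is consulted. This is only a sketch; GlusterFS's actual elastic hashing algorithm and extended-attribute layout differ:

    ```python
    import hashlib

    def locate(filename, bricks):
        # derive the owning brick deterministically from a hash of the
        # file name — every client computes the same answer on its own
        h = int(hashlib.md5(filename.encode()).hexdigest(), 16)
        return bricks[h % len(bricks)]

    bricks = ["brick-a", "brick-b", "brick-c"]
    assert locate("run1/data.root", bricks) in bricks
    # deterministic: repeated lookups agree without any central lookup table
    assert locate("run1/data.root", bricks) == locate("run1/data.root", bricks)
    ```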
  133. Dr Sebastien Binet (IN2P3/LAL)
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
    Current HENP libraries and frameworks were written before multicore systems became widely deployed and used. From this environment, a 'single-thread' processing model naturally emerged but the implicit assumptions it encouraged are greatly impairing our abilities to scale in a multicore/manycore world. Thanks to C++11, C++ is finally slowly catching up with regard to concurrency...
    Go to contribution page
  134. Halyo Valerie
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    Significant new challenges are continuously confronting the High Energy Physics (HEP) experiments, in particular at the Large Hadron Collider (LHC) at CERN, which not only drives forward theoretical, experimental and detector physics but also pushes computing to its limits. The LHC delivers proton-proton collisions to the detectors at a rate of 40 MHz. This rate must be significantly reduced to comply...
    Go to contribution page
  135. Roberto Ammendola (INFN)
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    We describe a pilot project for the use of GPUs (Graphics processing units) in online triggering applications for high energy physics experiments. Two major trends can be identified in the development of trigger and DAQ systems for particle physics experiments: the massive use of general-purpose commodity systems such as commercial multicore PC farms for data acquisition, and the reduction of...
    Go to contribution page
  136. Michelle Perry (Florida State University)
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    The search for new physics has typically been guided by theoretical models with relatively few parameters. However, recently, more general models, such as the 19-parameter phenomenological minimal supersymmetric standard model (pMSSM), have been used to interpret data at the Large Hadron Collider. Unfortunately, due to the complexity of the calculations, the predictions of these models are...
    Go to contribution page
  137. Derek John Weitzel (University of Nebraska (US))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    During the last decade, large-scale federated distributed infrastructures have continually developed and expanded. One of the crucial components of a cyber-infrastructure is an accounting service that collects data related to resource utilization and identity of users using resources. The accounting service is important for verifying pledged resource allocation per particular groups and users,...
    Go to contribution page
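    The core of such an accounting service is a rollup of resource-usage records by identity and group. A minimal sketch (record fields are illustrative, not the actual accounting schema):

    ```python
    from collections import defaultdict

    def aggregate(records):
        # sum wall-clock usage per (group, user) — the kind of rollup an
        # accounting service performs to verify pledged allocations
        usage = defaultdict(float)
        for r in records:
            usage[(r["group"], r["user"])] += r["wall_hours"]
        return dict(usage)

    records = [
        {"group": "cms",   "user": "ana", "wall_hours": 10.0},
        {"group": "cms",   "user": "ana", "wall_hours": 5.0},
        {"group": "atlas", "user": "bob", "wall_hours": 2.5},
    ]
    assert aggregate(records)[("cms", "ana")] == 15.0
    ```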
  138. Johannes Elmsheuser (Ludwig-Maximilians-Univ. Muenchen (DE))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    With the exponential growth of LHC (Large Hadron Collider) data in 2012, distributed computing has become the established way to analyze collider data. The ATLAS grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centers to smaller university clusters. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to...
    Go to contribution page
  139. Dr Gabriele Garzoglio (FERMI NATIONAL ACCELERATOR LABORATORY)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service...
    Go to contribution page
  140. Maria Dimou (CERN)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    In the Worldwide LHC Computing Grid (WLCG) project the Tier centres are of paramount importance for storing and accessing experiment data and for running the batch jobs necessary for experiment production activities. Although Tier2 sites provide a significant fraction of the resources, a non-availability of resources at the Tier0 or the Tier1s can seriously harm not only WLCG Operations but...
    Go to contribution page
  141. Steven Goldfarb (University of Michigan (US))
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    On July 4, 2012, particle physics became a celebrity. Around 1,000,000,000 people (yes, 1 billion) saw rebroadcasts of two technical presentations announcing the discovery of a new boson. The occasion was a joint seminar of the CMS and ATLAS collaborations, and the target audience was members of those collaborations plus interested experts in the field of particle physics. Yet, the world ate it...
    Go to contribution page
  142. Ramon Medrano Llamas (CERN)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The recent paradigm shift toward cloud computing in IT, and general interest in "Big Data" in particular, have demonstrated that the computing requirements of HEP are no longer globally unique. Indeed, the CERN IT department and LHC experiments have already made significant R&D investments in delivering and exploiting cloud computing resources. While a number of technical evaluations of...
    Go to contribution page
  143. Wahid Bhimji (University of Edinburgh (GB))
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    “Big Data” is no longer merely a buzzword, but is business-as-usual in the private sector. High Energy Particle Physics is often cited as the archetypal Big Data use case, however it currently shares very little of the toolkit used in the private sector or other scientific communities. We present the initial phase of a programme of work designed to bridge this technology divide by both...
    Go to contribution page
  144. Alex Mann (Ludwig-Maximilians-Univ. Muenchen (DE)), Alexander Mann (Ludwig-Maximilians-Universität)
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    The ATLAS detector operated during the three years of Run 1 of the Large Hadron Collider, collecting information on a large number of proton-proton events. One of the most important results obtained so far is the discovery of a Higgs boson. More precise measurements of this particle must be performed, and there are other very important physics topics still to be explored. One of...
    Go to contribution page
  145. Stefan Kluth (Max-Planck-Institut fuer Physik (Werner-Heisenberg-Institut) (D)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as the operating system, and the HEP-SPEC 2006 (HS06) benchmark suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the...
    Go to contribution page
  146. Andre Sailer (CERN), Christian Grefe (CERN), Stephane Guillaume Poss (Centre National de la Recherche Scientifique (FR))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    ILCDIRAC was initially developed in the context of the CLIC Conceptual Design Report (CDR), published in 2012-2013. It provides a convenient interface for the mass production of the simulated events needed for the physics performance studies of the two detector concepts considered, ILD and SID. It has since been used in the ILC Detailed Baseline Detector (DBD) studies of the SID detector...
    Go to contribution page
  147. Dr Alexei Strelchenko (FNAL)
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
    Lattice Quantum Chromodynamics (LQCD) simulations are critical for understanding the validity of the Standard Model and the results of the High-Energy and Nuclear Physics experiments. Major improvements in the calculation and prediction of physical observables, such as nucleon form factors or flavor singlet meson mass, require large amounts of computer resources, of the order of hundreds of...
    Go to contribution page
  148. Kati Lassila-Perini (Helsinki Institute of Physics (FI))
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    Implementation of the CMS policy on long-term data preservation, re-use and open access has started. Current practices in providing data additional to published papers and distributing simplified data-samples for outreach are promoted and consolidated. The first measures have been taken for the analysis and data preservation for the internal use of the collaboration and for the open access to...
    Go to contribution page
  149. Enrico Mazzoni (INFN-Pisa)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    The INFN-Pisa Tier2 infrastructure is described, optimized not only for GRID CPU and Storage access, but also for a more interactive use of the resources in order to provide good solutions for the final data analysis step. The Data Center, equipped with about 5000 production cores, permits the use of modern analysis techniques realized via advanced statistical tools (like RooFit and RooStats)...
    Go to contribution page
  150. Donato De Girolamo (INFN CNAF), Mr Lorenzo Chiarelli (INFN CNAF), Mr Stefano Zani (INFN CNAF)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    The computing models of HEP experiments, starting from the LHC ones, are facing an evolution with the relaxation of the data locality paradigm: the possibility of a job accessing data files over the WAN is becoming more and more common. One of the key factors for the success of this change is the ability to use the network in the most efficient way: in the best scenario, the network...
    Go to contribution page
  151. Andrew Malone Melo (Vanderbilt University (US))
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    The LHC experiments have always depended upon a ubiquitous, highly-performing network infrastructure to enable their global scientific efforts. While the experiments were developing their software and physical infrastructures, parallel development work was occurring in the networking communities responsible for interconnecting LHC sites. During the LHC's Long Shutdown #1 (LS1) we have an...
    Go to contribution page
  152. Dr Tony Wildish (Princeton University (US))
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    The ever-increasing amount of data handled by the CMS dataflow and workflow management tools poses new challenges for cross-validation among different systems within the CMS experiment at the LHC. To approach this problem we developed an integration test suite based on the LifeCycle agent, a tool originally conceived for stress-testing new releases of PhEDEx, the CMS data-placement tool. The LifeCycle...
    Go to contribution page
  153. Ivana Hrivnacova (Universite de Paris-Sud 11 (FR))
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    g4tools, originally part of the inlib and exlib packages [1], provides a very light and easy-to-install set of C++ classes that can be used to perform analysis in a Geant4 batch program. It allows one to create and manipulate histograms and ntuples, and to write them in the supported file formats (ROOT, AIDA XML, CSV and HBOOK). It is integrated in Geant4 through analysis manager classes,...
    Go to contribution page
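    The histogram filling such a package provides can be illustrated with a toy fixed-bin example — a Python sketch of the concept, not the g4tools C++ API (which also handles under/overflow and file I/O):

    ```python
    def fill_histogram(values, nbins, lo, hi):
        # minimal fixed-width 1D histogram fill: each in-range value
        # increments the count of the bin it falls into
        counts = [0] * nbins
        width = (hi - lo) / nbins
        for v in values:
            if lo <= v < hi:
                counts[int((v - lo) / width)] += 1
        return counts

    # two bins over [0, 1): 0.1 and 0.4 land in bin 0, 0.6 in bin 1,
    # and 2.0 is out of range
    assert fill_histogram([0.1, 0.4, 0.6, 2.0], nbins=2, lo=0.0, hi=1.0) == [2, 1]
    ```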
  154. Dmitry Nilsen (Karlsruhe Institute of Technology), Dr Pavel Weber (KIT - Karlsruhe Institute of Technology (DE))
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    The complexity of the heterogeneous computing resources, services and recurring infrastructure changes at the GridKa WLCG Tier-1 computing center require a structured approach to configuration management and optimization of interplay between functional components of the whole system. A set of tools deployed at GridKa, including Puppet, Redmine, Foreman, SVN and Icinga, provides the...
    Go to contribution page
  155. Dr Andreas Gellrich (DESY)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The vast majority of jobs in the Grid are embarrassingly parallel. In particular HEP tasks are divided into atomic jobs without need for communication between them. Jobs are still neither multi-threaded nor multi-core capable. On the other hand, resource requirements reach from CPU-dominated Monte Carlo jobs to network intense analysis jobs. The main objective of any Grid site is to...
    Go to contribution page
  156. Vidmantas Zemleris (Vilnius University (LT))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    Background: The goal of virtual data service integration is to provide a coherent interface for querying a number of heterogeneous data sources (e.g., web services, web forms, proprietary systems, etc.) in cases where accurate results are necessary. This work explores various aspects of its usability. Problem: Querying is usually carried out through a structured query language, such as...
    Go to contribution page
  157. Victoria Sanchez Martinez (Instituto de Fisica Corpuscular (IFIC) UV-CSIC (ES))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    In this contribution we expose the performance of the Iberian (Spain and Portugal) ATLAS cloud during the first LHC running period (March 2010-January 2013) in the framework of the GRID Computing and Data Model. The evolution of the resources for CPU, disk and tape in the Iberian Tier1 and Tier2s is summarized. The data distribution over all ATLAS destinations is shown, focusing on the number...
    Go to contribution page
  158. Andrew John Washbrook (University of Edinburgh (GB))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    High Performance Computing (HPC) provides unprecedented computing power for a diverse range of scientific applications. As of November 2012, over 20 supercomputers deliver petaflop peak performance with the expectation of "exascale" technologies available in the next 5 years. Despite the sizeable computing resources on offer there are a number of technical barriers that limit the use of HPC...
    Go to contribution page
  159. Eygene Ryabinkin (National Research Centre Kurchatov Institute (RU))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    A review of the distributed grid computing infrastructure for the LHC experiments in Russia is given. The emphasis is placed on the Tier-1 site construction at the National Research Centre "Kurchatov Institute" (Moscow) and the Joint Institute for Nuclear Research (Dubna). In accordance with the protocol between CERN, Russia and the Joint Institute for Nuclear Research (JINR) on participation...
    Go to contribution page
  160. Luca dell'Agnello (INFN-CNAF)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    Long-term preservation of experimental data (intended as both raw and derived formats) is one of the emerging requirements coming from scientific collaborations. Within the High Energy Physics community the Data Preservation in High Energy Physics (DPHEP) group coordinates this effort. CNAF is not only one of the Tier-1s for the LHC experiments, it is also a computing center providing...
    Go to contribution page
  161. Shaun De Witt (STFC - Science & Technology Facilities Council (GB))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    WLCG is moving towards greater use of xrootd. While this will in general optimise resource usage on the grid, it can create load problems at sites when storage elements are unavailable. We present some possible methods of mitigating these problems, and the results from experiments at STFC.
    Go to contribution page
  162. Andrew John Washbrook (University of Edinburgh (GB))
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
    A number of High Energy Physics experiments have successfully run feasibility studies to demonstrate that many-core devices such as GPGPUs can be used to accelerate algorithms for trigger systems and data analysis. After this exploration phase experiments on the Large Hadron Collider are now investigating how these devices can be incorporated into key areas of their software framework in...
    Go to contribution page
  163. Mr Stephen Lloyd (University of Edinburgh)
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
    The Matrix Element Method has been used with great success in the past several years, notably for the high precision top quark mass determination, and subsequently the single top quark discovery, at the Tevatron. Unfortunately, the Matrix Element method is notoriously CPU intensive due to the complex integration performed over the full phase space of the final state particles arising from...
    Go to contribution page
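    The phase-space integration that makes the method expensive is essentially Monte Carlo integration: a weight is evaluated at many sampled phase-space points and averaged. A toy sketch with a trivial integrand standing in for the matrix-element weight (all names illustrative):

    ```python
    import random

    def mc_integrate(f, ndim, nsamples=100_000, seed=42):
        # crude Monte Carlo estimate of an integral over the unit hypercube;
        # in the real method f would be the (costly) matrix-element weight
        rng = random.Random(seed)
        total = 0.0
        for _ in range(nsamples):
            x = [rng.random() for _ in range(ndim)]
            total += f(x)
        return total / nsamples

    # toy integrand: integral of sum(x) over a 4-dim unit cube is exactly 2.0
    est = mc_integrate(lambda x: sum(x), ndim=4)
    assert abs(est - 2.0) < 0.05
    ```

    The cost scales with the number of samples times the cost of one weight evaluation, which is why the full-phase-space integral per event is so CPU intensive.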
  164. DIMITRIOS ZILASKOS (STFC)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The WLCG uses HEP-SPEC as its benchmark for measuring CPU performance. This provides a consistent and repeatable CPU benchmark to describe experiment requirements, lab commitments and existing resources. However, while HEP-SPEC has been customized to represent WLCG applications, it is not a perfect measure. The Rutherford Appleton Laboratory (RAL) is the UK Tier-1 site and provides CPU and...
    Go to contribution page
  165. Dr Jean-Roch Vlimant (CERN)
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    The analysis of the LHC data at the CMS experiment requires the production of a large number of simulated events. In 2012, CMS produced over 4 billion simulated events in about 100 thousand datasets. Over the past years a tool (PREP) has been developed for managing such a production of thousands of samples. A lot of experience working with this tool has been gained, and conclusions...
    Go to contribution page
  166. Dr Janusz Martyniak (Imperial College London)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The international Muon Ionisation Cooling Experiment (MICE) is designed to demonstrate the principle of muon ionisation cooling for the first time, for application to a future Neutrino Factory or Muon Collider. The experiment is currently under construction at the ISIS synchrotron at the Rutherford-Appleton Laboratory, UK. The configuration/condition of the experiment during each run is...
  167. Yordan Ivanov Karadzhov (Universite de Geneve (CH))
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    The Muon Ionization Cooling Experiment (MICE) is under development at the Rutherford Appleton Laboratory (UK). The goal of the experiment is to build a section of a cooling channel that can demonstrate the principle of ionization cooling and to verify its performance in a muon beam. The final setup of the experiment will be able to measure a 10% reduction in emittance (transverse phase space...
  168. Dr Patricia Mendez Lorenzo (CERN)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    The large potential and flexibility of the ServiceNow infrastructure, based on "best practices" methods, is allowing the migration of some of the ticketing systems traditionally used for tracking the servers and services available in the CERN IT Computer Center. This migration enables a standardization and globalization of the ticketing and control systems, implementing a generic system...
  169. Mark Mitchell (University of Glasgow)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    The monitoring of a grid cluster (or of any piece of reasonably scaled IT infrastructure) is a key element in the robust and consistent running of that site. There are several factors which are important to the selection of a useful monitoring framework, which include ease of use, reliability, data input and output. It is critical that data can be drawn from different instrumentation packages...
  170. Alexandre Beche (CERN)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The computing models of the LHC experiments are gradually moving from hierarchical data models with centrally managed data pre-placement towards federated storage which provides seamless access to data files independently of their location and dramatically improved recovery due to fail-over mechanisms. Enabling loosely coupled data clusters to act as a single storage resource should increase...
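    The fail-over behaviour that federated storage relies on can be sketched as follows (the catalog layout, URLs and helper names are hypothetical, for illustration only):

```python
class ReplicaUnavailable(Exception):
    pass

def open_with_failover(lfn, replica_catalog, opener):
    """Try each known replica of a logical file in turn; a failure at one
    site falls through to the next, so loosely coupled clusters appear
    to the client as a single storage resource."""
    errors = []
    for url in replica_catalog.get(lfn, []):
        try:
            return opener(url)
        except IOError as exc:
            errors.append((url, exc))
    raise ReplicaUnavailable(f"no working replica for {lfn}: {errors}")

# Toy usage: the first replica "fails", the second succeeds.
catalog = {"/lhc/data.root": ["root://siteA//data.root", "root://siteB//data.root"]}

def fake_opener(url):
    if "siteA" in url:
        raise IOError("siteA offline")
    return f"handle:{url}"

handle = open_with_failover("/lhc/data.root", catalog, fake_opener)
# handle -> "handle:root://siteB//data.root"
```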
  171. Bogdan Lobodzinski (DESY, Hamburg, Germany)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    Small Virtual Organizations (VOs) employ all components of the EMI or gLite middleware. In this framework, a monitoring system has been designed for the H1 Experiment to identify within the Grid the resources best suited for executing CPU-intensive Monte Carlo (MC) simulation tasks (jobs). Monitored resources are Computing Elements (CEs), Storage Elements (SEs), WMS servers...
  172. Georg Weidenspointner (MPE Garching)
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
    An extensively documented, quantitative study of software evolution resulting in deterioration of physical accuracy over the years is presented. The analysis concerns the energy deposited by electrons in various materials produced by Geant4 versions released between 2007 and 2013. The evolution of the functional quality of the software is objectively quantified by means of a rigorous...
  173. Dr Maria Grazia Pia (Universita e INFN (IT))
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    A large-scale project is in progress, which validates the basic constituents of the electromagnetic physics models implemented in major Monte Carlo codes (EGS, FLUKA, Geant4, ITS, MCNP, Penelope) against extensive collections of experimental data documented in the literature. These models are responsible for the physics observables and the signal generated in particle detectors, including...
  174. Ian Gable (University of Victoria (CA))
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    We review the demonstration of next generation high performance 100 Gbps networks for HEP that took place at the Supercomputing 2012 (SC12) conference in Salt Lake City. Three 100 Gbps circuits were established from the California Institute of Technology, the University of Victoria and the University of Michigan to the conference show floor. We were able to efficiently utilize these...
  175. Paul Nilsson (University of Texas at Arlington (US))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The Production and Distributed Analysis system (PanDA) has been in use in the ATLAS Experiment since 2005. It uses a sophisticated pilot system to execute submitted jobs on the worker nodes. While originally designed for ATLAS, the PanDA Pilot has recently been refactored to facilitate use outside of ATLAS. Experiments are now handled as plug-ins, and a new PanDA Pilot user only has to...
  176. Dr Peter Van Gemmeren (Argonne National Laboratory (US))
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
    The ATLAS event store employs a persistence framework with extensive navigational capabilities. These include real-time back navigation to upstream processing stages, externalizable data object references, navigation from any data object to any other both within a single file and across files, and more. The 2013-2014 shutdown of the Large Hadron Collider provides an opportunity to enhance...
  177. Anastasia Karavdina (University Mainz)
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    Precise luminosity determination is crucial for absolute cross-section measurements and scanning experiments with the fixed-target PANDA experiment at the planned antiproton accelerator HESR (FAIR, Germany). For the determination of the luminosity we will exploit elastic antiproton-proton scattering. Unfortunately, no data, or only a few data points with large uncertainties, are available in the...
  178. Christopher John Walker (University of London (GB))
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    The WLCG, and high energy physics in general, relies on remote Tier-2 sites to analyse the large quantities of data produced. Transferring this data in a timely manner requires significant tuning to make optimum usage of expensive WAN links. In this paper we describe the techniques we have used at QMUL to optimise network transfers. Use of the FTS with settings and appropriate TCP...
  179. Zoltan Mathe (CERN)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    The LHCb experiment produces a huge amount of data which has associated metadata such as run number, data taking condition (detector status when the data was taken), simulation condition, etc. The data are stored in files, replicated on the Computing Grid around the world. The LHCb Bookkeeping System provides methods for retrieving datasets based on their metadata. The metadata is stored in a...
  180. Dr Giacinto Donvito (INFN-Bari), Tommaso Boccali (Sezione di Pisa (IT))
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    In past years the Italian Ministry of Research (MIUR) funded research projects aimed at optimizing the analysis activities in the Italian CMS computing centers. A new grant started in 2013, and activities are already ongoing in 9 INFN sites, all hosting local CMS groups. The main focus will be the creation of an Italian storage federation (via Xrootd initially, and later HTTP) which...
  181. Egor Ovcharenko (ITEP Institute for Theoretical and Experimental Physics (RU))
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
    One of the current problems in HEP computing is the development of particle propagation algorithms capable of working efficiently on parallel architectures. An interesting approach in this direction has recently been introduced by the GEANT5 group at CERN [1]. Our report is devoted to the realization of similar functionality using the Intel Threading Building Blocks (TBB) library. In the prototype...
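    The task-based decomposition behind such prototypes can be sketched in Python, using a thread pool as a stand-in for TBB's task scheduler (the basket layout and transport step below are invented for illustration, not the actual prototype's code):

```python
from concurrent.futures import ThreadPoolExecutor

def propagate_basket(basket):
    """Hypothetical per-basket transport step: advance every track (position,
    velocity) by one step. In a TBB-style design each basket of tracks is an
    independent work item handed to the scheduler."""
    return [(x + vx, vx) for (x, vx) in basket]

def propagate_event(baskets, workers=4):
    # Baskets are independent, so they map naturally onto a task pool,
    # mirroring how tasks are distributed onto worker threads.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(propagate_basket, baskets))

baskets = [[(0.0, 1.0), (1.0, 0.5)], [(2.0, -1.0)]]
out = propagate_event(baskets)
```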
  182. Stewart Martin-Haugh (University of Sussex (GB))
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    We present a description of the algorithms and the performance of the ATLAS Inner Detector trigger for LHC run I, as well as prospects for a redesign of the tracking algorithms in run 2. The Inner Detector trigger algorithms are vital for many trigger signatures at ATLAS. The performance of the algorithms for muons, electrons, taus and b-jets is presented. The ATLAS trigger software after...
  183. Enrico Bonaccorsi (CERN), Francesco Sborzacchi (Istituto Nazionale Fisica Nucleare (IT)), Niko Neufeld (CERN)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    Virtualization is often adopted to satisfy different needs: reducing costs, reducing resources, simplifying maintenance and, last but not least, adding flexibility. The use of virtualization in a complex system such as a farm of PCs that controls the hardware of an experiment (PLCs, power supplies, gas systems, magnets...) puts us in a condition where not only high-performance requirements...
  184. Eduardo Bach (UNESP - Universidade Estadual Paulista (BR))
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    Distributed storage systems have evolved from providing a simple means to store data remotely to offering advanced services such as system federation and replica management. This evolution has been made possible by advances in the underlying communication technology, which plays a vital role in determining the communication efficiency of distributed systems. The dCache system, which has...
  185. Dr Dmytro Kovalskyi (Univ. of California Santa Barbara (US))
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    Databases are used in many software components of HEP computing, from monitoring and task scheduling to data storage and processing. While database design choices have a major impact on system performance, some solutions give better results out of the box than others. This paper presents detailed comparison benchmarks of the most popular open-source systems for a typical class...
  186. Christophe Haen (Univ. Blaise Pascal Clermont-Fe. II (FR))
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    The backbone of the LHCb experiment is the Online system, which is a very large and heterogeneous computing center. Making sure of the proper behavior of the many different tasks running on the more than 2000 servers represents a huge workload for the small expert-operator team and is a 24/7 task. At the occasion of CHEP 2012, we presented a prototype of a framework that we designed in order...
  187. Dr Dirk Hoffmann (Centre de Physique des Particules de Marseille, CNRS/IN2P3)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    PLUME - FEATHER is a non-profit project created to Promote economicaL, Useful and Maintained softwarE For the Higher Education And THE Research communities. The site references software, mainly Free/Libre Open Source Software (FLOSS), from French universities and national research organisations (CNRS, INRA...), laboratories or departments. Plume means feather in French. The main goals of PLUME...
  188. Graeme Andrew Stewart (CERN)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    This paper describes a popularity prediction tool for data-intensive data management systems, such as the ATLAS distributed data management (DDM) system. The tool is fed by the DDM popularity system, which produces historical reports about ATLAS data usage and provides information about the files, datasets, users and sites where data was accessed. The tool described in this contribution uses...
  189. Nathalie Rauschmayr (CERN)
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
    Due to the continuously increasing number of cores on modern CPUs, it is important to adapt HEP applications. This must be done at different levels: the software must support parallelization, and the scheduling has to distinguish between multi-core and single-core jobs. The LHCb software framework (GAUDI) provides a parallel prototype (GaudiMP), based on the multiprocessing approach. It allows a...
  190. Simone Coscetti (Sezione di Pisa (IT))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The ALEPH Collaboration took data at the LEP (CERN) electron-positron collider in the period 1989-2000, producing more than 300 scientific papers. While most Collaboration activities have stopped in recent years, the data collected still have physics potential: new theoretical models are emerging that need to be checked against data at the Z and WW production energies. An attempt to revive and...
  191. Dr Dirk Hoffmann (Centre de Physique des Particules de Marseille, CNRS/IN2P3)
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    We are developing the prototype of a high speed data acquisition (DAQ) system for the Cherenkov Telescope Array. This experiment will be the next generation ground-based gamma-ray instrument. It will be made up of approximately 100 telescopes of at least three different sizes, from 6 to 24 meters in diameter. Each camera equipping the telescopes is composed of hundreds of light detecting...
  192. Semen Lebedev (Justus-Liebig-Universitaet Giessen (DE))
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
    The software framework of the CBM experiment at FAIR - CBMROOT - has been continuously growing over the years. The increasing complexity of the framework and number of users require improvements in maintenance, reliability and in overall software development process. In this report we address the problem of the software quality assurance (QA) and testing. Two main problems are considered in...
  193. Dr Armando Fella (INFN Pisa), Mr Domenico Diacono (INFN Bari), Dr Giacinto Donvito (INFN-Bari), Mr Giovanni Marzulli (GARR), Paolo Franchini (Universita e INFN (IT)), Dr Silvio Pardi (INFN)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    In the HEP computing context, R&D studies aiming at the definition of data and workload models were carried forward by the SuperB community beyond the life of the experiment itself. This work is considered of great interest for a generic mid- or small-size VO during its Computing Model definition phase. The data-model R&D work we present starts with a general design description of the...
  194. Dr Tony Wildish (Princeton University (US))
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    PhEDEx, the data-placement tool used by the CMS experiment at the LHC, was conceived in a more trusting time. The security model was designed to provide a safe working environment for site agents and operators, but provided little more protection than that. CMS data was not sufficiently protected against accidental loss caused by operator error or software bugs, or against loss of data caused by...
  195. Adrian Buzatu (University of Glasgow (GB))
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    In high-energy physics experiments, online selection is crucial to reject most uninteresting collisions and to focus on interesting physical signals. The b-jet selection is part of the trigger strategy of the ATLAS experiment and is meant to select hadronic final states with heavy-flavor content. This is important for the selection of physics channels with more than one b-jet in the...
  196. Witold Pokorski (CERN)
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    In this paper we present the recent developments in the Geant4 hadronic framework, as well as in some of the existing physics models. Geant4 is the main simulation toolkit used by the LHC experiments and therefore a lot of effort is put into improving the physics models in order for them to have more predictive power. As a consequence, the code complexity increases, which requires...
  197. Christian Veelken (Ecole Polytechnique (FR))
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    An algorithm for reconstruction of the Higgs mass in $H \rightarrow \tau\tau$ decays is presented. The algorithm computes for each event a likelihood function $P(M_{\tau\tau})$ which quantifies the level of compatibility of a Higgs mass hypothesis $M_{\tau\tau}$, given the measured momenta of visible tau decay products plus missing transverse energy reconstructed in the event. The algorithm is...
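    The scan-and-maximize structure of such a likelihood-based mass reconstruction can be sketched as follows (the Gaussian likelihood model and its coefficients are a toy stand-in for illustration, not the actual algorithm's kinematics):

```python
import math

def toy_likelihood(m_hypothesis, m_visible, met):
    """Hypothetical stand-in for P(M_tautau): compatibility of a mass
    hypothesis with the visible mass and missing transverse energy."""
    expected = m_visible + 0.7 * met  # toy relation, not the real kinematics
    return math.exp(-0.5 * ((m_hypothesis - expected) / 10.0) ** 2)

def best_mass(m_visible, met, scan=range(50, 200)):
    # Scan mass hypotheses and return the most compatible one,
    # mirroring the per-event likelihood maximization described above.
    return max(scan, key=lambda m: toy_likelihood(m, m_visible, met))

m = best_mass(m_visible=80.0, met=60.0)
```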
  198. Mr Igor Mandrichenko (Fermilab)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    RESTful web services are a popular solution for distributed data access and information management. The performance, scalability and reliability of such services are critical for the success of data production and analysis in High Energy Physics as well as other areas of science. At FNAL, we have been successfully using a REST HTTP-based data access architecture to provide access to various types...
  199. Dr Tony Wildish (Princeton University (US))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    PhEDEx has been serving the CMS community as its data broker since 2004. Every PhEDEx operation is initiated by a request, such as a request to move data, a request to delete data, and so on. A request has its own life cycle, including creation, approval, notification, and bookkeeping, and the details depend on its type. Currently, only two kinds of requests, transfer and deletion, are fully integrated...
  200. Bertrand Bellenot (CERN)
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
    In order to be able to browse (inspect) ROOT files in a platform-independent way, a JavaScript version of the ROOT I/O subsystem has been developed. This allows the content of ROOT files to be displayed in most available web browsers, without having to install ROOT or any other software on the server or on the client. This gives direct access to ROOT files from any new device in a light way....
  201. Bertrand Bellenot (CERN)
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
    In my poster I'll present a new graphical back-end for ROOT that has been developed for the Mac OS X operating system as an alternative to the more than 15 year-old X11-based version. It represents a complete implementation of ROOT's GUI, 2D and 3D graphics based on Apple's native APIs/frameworks, written in Objective-C++.
  202. Daniela Remenska (NIKHEF (NL))
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
    A big challenge in concurrent software development is early discovery of design errors which can lead to deadlocks or race-conditions. Traditional testing does not always expose such problems in complex distributed applications. Performing more rigorous formal analysis, like model-checking, typically requires a model which is an abstraction of the system. For object-oriented software, UML is...
  203. Mr Igor Mandrichenko (Fermilab)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    Over several years, we have developed a number of collaborative tools used by groups and collaborations at FNAL, which are becoming a Suite of Scientific Collaborative Tools. Currently, the suite includes an Electronic Logbook (ECL), a Shift Scheduler, a Speakers Bureau and a Members Database. These products organize and help run the collaboration at every stage of its life...
  204. Jakob Blomer (CERN)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the integrity of the data transferred to clients over the internet....
  205. Federico Stagni (CERN), Mario Ubeda Garcia (CERN)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    In this paper we present an autonomic computing-resource management system used by LHCb for assessing the status of its Grid resources. Virtual Organization Grids include heterogeneous resources; for example, LHC experiments very often use resources not provided by WLCG, and Cloud Computing resources will soon provide a non-negligible fraction of their computing power. The lack of...
  206. Giovanni Zurzolo (Universita e INFN (IT))
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    Artificial Neural Networks (ANN) are widely used in High Energy Physics, in particular as software for data analysis. In the ATLAS experiment that collects proton-proton and heavy ion collision data at the Large Hadron Collider, ANN are mostly applied to make a quantitative judgment on the class membership of an event, using a number of variables that are supposed to discriminate between...
  207. Mr Ajay Kumar (Indian Institute of Technology Indore)
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    The PANDA experiment is one of the main experiments at the future accelerator facility FAIR which is currently under construction in Darmstadt, Germany. Experiments will be performed with intense, phase space cooled antiproton beams incident on a...
  208. Dr Guy Barrand (Universite de Paris-Sud 11 (FR))
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    Softinex is a software environment for data analysis and visualization. It covers the C++ inlib and exlib "header only" libraries, which permit, through GL-ES and a maximum of common code, the building of applications deliverable on the AppleStore (iOS), GooglePlay (Android), and traditional laptops/desktops under MacOSX, Linux and Windows, but also deliverable as a web service able to display...
  209. Dr Alexander Moibenko (Fermi National Accelerator Laboratory)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    Enstore is a tape-based Mass Storage System originally designed for the Run II Tevatron experiments at FNAL (CDF, D0). Over the years it has proven to be a reliable and scalable data archival and delivery solution that meets the diverse requirements of a variety of applications, including US CMS Tier 1, High Performance Computing, Intensity Frontier experiments, and data backups. Data intensive...
  210. Dr Simon Patton (LAWRENCE BERKELEY NATIONAL LABORATORY)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The SPADE application was first used by the IceCube experiment to move its data files from the South Pole to Wisconsin. Since then it has been adapted by the DayaBay experiment to move its data files from the experiment site, just outside Hong Kong, to both Beijing and LBNL. The aim of this software is to automate much of the data movement and warehousing that is often done by hand or with home-grown...
  211. Alastair Dewhurst (STFC - Science & Technology Facilities Council (GB))
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    During the early running of the LHC, multiple collaborations began to include Squid caches in their distributed computing models. The two main use cases are: for remotely accessing conditions data via Frontier, which is used by ATLAS and CMS; and serving collaboration software via CVMFS, which is used by ATLAS, CMS, and LHCb, and is gaining traction with some non-LHC collaborations. As a...
  212. Witold Pokorski (CERN)
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    The LCG Generator Services project provides validated, LCG compliant Monte Carlo generators code for both the theoretical and experimental communities at the LHC. It collaborates with the generators authors, as well as the experiments software developers and the experimental physicists. In this paper we present the recent developments and the future plans of the project. We start with...
  213. Benedikt Hegner (CERN)
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
    For more than ten years, the LCG Savannah portal has successfully served the LHC community to track issues in their software development cycles. In total, more than 8000 users and 400 projects use this portal. Despite its success, the underlying infrastructure that is based on the open-source project "Savane" did not keep up with the general evolution of web technologies and the increasing...
  214. Dr Xavier Espinal Curull (CERN)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    This contribution describes the evolution of the main CERN storage system, CASTOR, as it manages the bulk data stream of the LHC and other CERN experiments, achieving nearly 100 PB of stored data by the end of LHC Run 1. Over the course of 2012 the CASTOR service has addressed the Tier-0 data management requirements, focusing on a tape-backed archive solution, ensuring smooth operations of...
  215. Christopher Tunnell
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
    In the coming years, Xenon 1 T, a ten-fold expansion of Xenon 100, will further explore the dark matter WIMP parameter space and must be able to cope with correspondingly higher data rates. With a focus on sustainable software architecture, and a unique experimental scale compared to collider experiments, a high-level trigger system is being designed for the next many years of Xenon 1 T...
  216. Oliver Keeble (CERN)
    14/10/2013, 15:00
    Software Engineering, Parallelism & Multi-Core
    Poster presentation
    In the recent years, with the end of the EU Grid projects such as EGEE and EMI in sight, the management of software development, packaging and distribution has moved from a centrally organised approach to a collaborative one, across several development teams. While selecting their tools and technologies, the different teams and services have gone through several trends and fashion of product...
  217. Dr Catherine Biscarat (LPSC/IN2P3/CNRS France)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    We describe the synergy between CIMENT (a regional multidisciplinary HPC centre) and the infrastructures used for the analysis of data recorded by the ATLAS experiment at the LHC collider and the D0 experiment at the Tevatron. CIMENT is the High Performance Computing (HPC) centre developed by Grenoble University. It is a federation of several scientific departments and it is based on the...
  218. Daniele Francesco Kruse (CERN)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    Disk access and tape migrations compete for network bandwidth on CASTOR's disk servers, over various protocols: RFIO, Xroot, root and GridFTP. As there are a limited number of tape drives, it is important to keep them busy all the time, at their nominal speed. With potentially hundreds of user read streams per server, the bandwidth for the tape migrations has to be guaranteed to a controlled...
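    One common way to guarantee a share of bandwidth to one traffic class is to rate-limit the competing class with a token bucket. The sketch below is an illustrative scheme under that assumption, not CASTOR's actual mechanism:

```python
class TokenBucket:
    """Minimal token bucket: caps the bandwidth user read streams may
    consume, leaving the remainder of the link for tape migrations."""

    def __init__(self, rate_mb_s, capacity_mb):
        self.rate = rate_mb_s        # refill rate (MB/s) granted to user streams
        self.capacity = capacity_mb  # burst allowance (MB)
        self.tokens = capacity_mb
        self.clock = 0.0

    def allow(self, now, request_mb):
        # Refill proportionally to elapsed time, then spend tokens if possible.
        self.tokens = min(self.capacity, self.tokens + (now - self.clock) * self.rate)
        self.clock = now
        if request_mb <= self.tokens:
            self.tokens -= request_mb
            return True
        return False  # stream must wait; tape migration keeps its bandwidth

bucket = TokenBucket(rate_mb_s=100, capacity_mb=100)
granted = [bucket.allow(t, 60) for t in (0.0, 0.1, 1.0)]
# t=0.0: full bucket, grant; t=0.1: only 50 MB of tokens, deny; t=1.0: refilled, grant
```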
  219. Thomas Lindner (T)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    ND280 is the off-axis near detector for the T2K neutrino experiment. ND280 is a sophisticated, multiple sub-system detector designed to characterize the T2K neutrino beam and measure neutrino cross-sections. We have developed a complicated system for processing and simulating the ND280 data, using computing resources from North America, Europe and Japan. The first key challenge has been...
  220. michele pezzi (Infn-cnaf)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    In large computing centers, such as the INFN-CNAF Tier1, it is essential to be able to configure all the machines, depending on their use, in an automated way. For several years Quattor, a server provisioning tool, has been used at the Tier1 and is currently in production. Nevertheless, we have recently started a comparison study involving other tools able to provide specific server installation...
    Go to contribution page
  221. Robert Fay (University of Liverpool)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    A key aspect of ensuring optimum cluster reliability and productivity lies in keeping worker nodes in a healthy state. Testnodes is a lightweight node testing solution developed at Liverpool. While Nagios has been used locally for general monitoring of hosts and services, Testnodes is optimised to answer one question: is there any reason this node should not be accepting jobs? This tight focus...
    Go to contribution page
  222. Jason Webb (Brookhaven National Lab)
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    The STAR experiment has adopted an Abstract Geometry Modeling Language (AgML) as the primary description of our geometry model. AgML establishes a level of abstraction, decoupling the definition of the detector from the software libraries used to create the concrete geometry model. Thus, AgML allows us to support both our legacy GEANT3 simulation application and our ROOT/TGeo based...
    Go to contribution page
  223. Adriana Telesca (CERN)
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    ALICE (A Large Ion Collider Experiment) is a heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The ALICE DAQ (Data Acquisition System) is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches). The DAQ reads the data transferred from the...
    Go to contribution page
  224. Mr Barthelemy Von Haller (CERN)
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    ALICE (A Large Ion Collider Experiment) is a detector designed to study the physics of strongly interacting matter and the quark-gluon plasma produced in heavy-ion collisions at the CERN Large Hadron Collider (LHC). Due to the complexity of ALICE in terms of number of detectors and performance requirements, Data Quality Monitoring (DQM) plays an essential role in providing an online feedback...
    Go to contribution page
  225. Dr Dario Barberis (Università e INFN Genova (IT))
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    Modern scientific experiments collect vast amounts of data that must be cataloged to meet multiple use cases and search criteria. In particular, high-energy physics experiments currently in operation produce several billion events per year. A database with the references to the files including each event in every stage of processing is necessary in order to retrieve the selected events from...
    Go to contribution page
  226. Martin Woudstra (University of Manchester (GB))
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    CERN’s Large Hadron Collider (LHC) is the highest-energy proton-proton collider, also providing the highest instantaneous luminosity of any hadron collider. Bunch crossings occurred every 50 ns in the 2012 runs, and the online event selection system must reduce the event recording rate to a few hundred Hz, while events occur in a harsh environment with many overlapping proton-proton...
    Go to contribution page
  227. Rafal Zbigniew Grzymkowski (P)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    In multidisciplinary institutes the traditional approach to computing is highly inefficient: a computer cluster dedicated to a single research group is typically exploited at a rather low level. The private cloud model enables various groups to share computing resources. It can boost the efficiency of infrastructure usage by a large factor and at the same time reduce maintenance costs....
    Go to contribution page
  228. Dr Federico De Guio (CERN)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The Data Quality Monitoring (DQM) Software proved to be a central tool in the CMS experiment. Its flexibility allowed its integration in several environments: Online, for real-time detector monitoring; Offline, for the final, fine-grained Data Certification; Release-Validation, to constantly validate our reconstruction software; in Monte Carlo productions. The central tool to deliver Data...
    Go to contribution page
  229. Shima Shimizu (Kobe University (JP))
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    The ATLAS jet trigger is an important element of the event selection process, providing data samples for studies of Standard Model physics and searches for new physics at the LHC. The ATLAS jet trigger system has undergone substantial modifications over the past few years of LHC operations, as experience developed with triggering in a high luminosity and high event pileup environment. In...
    Go to contribution page
  230. Mei YE (IHEP)
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    The Daya Bay reactor neutrino experiment is designed to determine precisely the neutrino mixing angle θ13, with a sensitivity better than 0.01 in the parameter sin²2θ13 at the 90% confidence level. To achieve this goal, the collaboration has built eight functionally identical antineutrino detectors. The detectors are immersed in water pools that provide active and passive shielding against...
    Go to contribution page
  231. Mario Lassnig (CERN)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    Rucio is the next-generation data management system supporting ATLAS physics workflows in the coming decade. Historically, clients interacted with the data management system via specialised tools, but in Rucio additional methods are provided. To support filesystem-like interaction with all ATLAS data a plugin to the DMLite software stack has been developed. It is possible to mount Rucio as a...
    Go to contribution page
  232. Dr WooJin Park (KIT)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    The GridKa computing center, hosted by the Steinbuch Centre for Computing at the Karlsruhe Institute of Technology (KIT) in Germany, serves as the largest Tier-1 center used by the ALICE collaboration at the LHC. In 2013, GridKa provides 30k HEPSPEC06, 2.7 PB of disk space, and 5.25 PB of tape storage to ALICE. The 10 Gbit/s network connections from GridKa to CERN, several Tier-1 centers and...
    Go to contribution page
  233. Norman Anthony Graf (SLAC National Accelerator Laboratory (US))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The International Linear Collider (ILC) physics and detector community recently completed an exercise to demonstrate the physics capabilities of detector concepts. The Detailed Baseline Design (DBD) involved the generation, simulation, reconstruction and analysis of large samples of Monte Carlo datasets. The detector simulations utilized extremely detailed Geant4 implementations of...
    Go to contribution page
  234. Thomas Baron (CERN)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    For a long time HEP has been ahead of the curve in its usage of remote collaboration tools, such as videoconferencing and webcast, while the local CERN collaboration facilities lagged somewhat behind the expected quality standards for various reasons. That era ended in 2012 when the CERN IT department created an integrated conference room service which provides guidance and...
    Go to contribution page
  235. Mr Massimo Sgaravatto (INFN Padova)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The Legnaro-Padova Tier-2 is a computing facility serving the ALICE and CMS LHC experiments. It also supports other High Energy Physics experiments and other virtual organizations of different disciplines, which can opportunistically harness idle resources if available. The unique characteristic of this Tier-2 is its topology: the computational resources are spread in two different...
    Go to contribution page
  236. Sandra Saornil Gamarra (Universitaet Zuerich (CH))
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    The experiment control system of the LHCb experiment is continuously evolving and improving. The guidelines and structure initially defined are kept, and more common tools are made available to all sub-detectors. Although the main system control is mostly integrated and actions are executed in common for the whole LHCb experiment, there is some degree of freedom for each sub-system to...
    Go to contribution page
  237. Sebastian Neubert (CERN)
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    The LHCb experiment is a spectrometer dedicated to the study of heavy flavor at the LHC. The rate of proton-proton collisions at the LHC is 15 MHz, but resource limitations mean that only 5 kHz can be written to storage for offline analysis. For this reason the LHCb data acquisition system -- the trigger -- plays a key role in selecting signal events and rejecting background. In contrast to...
    Go to contribution page
  238. Pierrick Hanlet (Illinois Institute of Technology)
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    The Muon Ionization Cooling Experiment (MICE) is a demonstration experiment to prove the feasibility of cooling a beam of muons for use in a Neutrino Factory and/or Muon Collider. The MICE cooling channel is a section of a modified Study II cooling channel which will provide a 10% reduction in beam emittance. In order to ensure a reliable measurement, MICE will measure the beam emittance...
    Go to contribution page
  239. Joern Mahlstedt (NIKHEF (NL))
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    The LHC is the world's highest energy and luminosity proton-proton (p-p) collider. During 2012 luminosities neared 10^34 cm^-2 s^-1, with bunch crossings occurring every 50 ns. The online event selection system of the ATLAS detector must reduce the event recording rate to only a few hundred Hz while, at the same time, selecting events considered interesting. This presentation will specifically...
    Go to contribution page
  240. Pierrick Hanlet (Illinois Institute of Technology)
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    The international Muon Ionisation Cooling Experiment (MICE) is designed to demonstrate the principle of muon ionisation cooling for the first time, for application to a future Neutrino Factory or Muon Collider. In order to measure the change in beam emittance, MICE is equipped with a pair of high precision scintillating fibre trackers. The trackers are required to measure a 10% change in...
    Go to contribution page
  241. Daniele Francesco Kruse (CERN)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    Physics data stored on CERN tapes is quickly approaching the 100 PB milestone. Tape is an ever-changing technology that still follows Moore's law in terms of capacity, meaning that every year we can store more and more data on the same number of tapes. However, this doesn't come for free: the first obvious cost is the new higher-capacity media; the second, less well known, cost is related to moving...
    Go to contribution page
  242. Mr Andrey SHEVEL (Petersburg Nuclear Physics Institute)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    A small physics group (3-15 persons) might use a number of computing facilities for analysis/simulation, development/testing, and teaching. Different types of computing facilities are discussed: collaboration computing facilities, local group computing clusters (including colocation), and cloud computing. The author discusses the growing variety of computing options for small groups and...
    Go to contribution page
  243. Bob Cowles (BrightLite Information Security)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    As HEP collaborations grow in size (10 years ago, BaBar was 600 scientists; now, both CMS and ATLAS are on the order of 3000 scientists), the collaboratory has become a key factor in allowing identity management (IdM), once confined to individual sites, to scale with the number of members, number of organizations, and the complexity of the science collaborations. Over the past two decades (at...
    Go to contribution page
  244. Jason Webb (Brookhaven National Lab)
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    The STAR experiment pursues a broad range of physics topics in pp, pA and AA collisions produced by the Relativistic Heavy Ion Collider (RHIC). Such a diverse experimental program demands a simulation framework capable of supporting an equally diverse set of event generators, and a flexible event record capable of storing the (common) particle-wise and (varied) event-wise information provided...
    Go to contribution page
  245. Oliver Holme (ETH Zurich, Switzerland)
    14/10/2013, 15:00
    Data acquisition, trigger and controls
    Poster presentation
    The Electromagnetic Calorimeter (ECAL) is one of the sub-detectors of the Compact Muon Solenoid (CMS) experiment of the Large Hadron Collider (LHC) at CERN. The Detector Control System (DCS) that has been developed and implemented for the CMS ECAL was deployed in accordance with the LHC schedule and has been supporting the detector data-taking since LHC physics runs started in 2009. During...
    Go to contribution page
  246. Andrew David Lahiff (STFC - Science & Technology Facilities Council (GB))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    While migration from the grid to the cloud has been gaining increasing momentum in recent times, WLCG sites are currently still expected to accept grid job submission, and this is likely to continue for the foreseeable future. Furthermore, sites which support multiple experiments may need to provide both cloud and grid-based access to resources for some time, as not all experiments may be...
    Go to contribution page
  247. Shaun De Witt (STFC - Science & Technology Facilities Council (GB))
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    LHC experiments are moving away from a traditional HSM solution for Tier 1s in order to separate long-term tape archival from disk-only access, using the tape as a true archive (write once, read rarely). In this poster we present two methods by which this is being achieved at two distinct sites, ASGC and RAL, which have approached this change in very different ways.
    Go to contribution page
  248. Robert Fay (University of Liverpool)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    As the number of cores on chip continues to trend upwards and new CPU architectures emerge, increasing CPU density and diversity presents multiple challenges to site administrators. These include scheduling for massively multi-core systems (potentially including GPU (integrated and dedicated) and many integrated core (MIC)) to ensure a balanced throughput of jobs while preserving overall...
    Go to contribution page
  249. Daniel Hugo Campora Perez (CERN)
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    The LHCb Online Network is a real time high performance network, in which 350 data sources send data over a Gigabit Ethernet LAN to more than 1500 receiving nodes. The aggregated throughput of the application, called Event Building, is more than 60 GB/s. The protocol employed by LHCb makes the sending nodes transmit simultaneously portions of events to one receiving node at a time, which is...
    Go to contribution page
  250. Dr Daniel van der Ster (CERN), Dr Jakub Moscicki (CERN)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    AFS is a mature and reliable storage service at CERN, having worked for more than 20 years as the provider of Linux home directories and application areas. Recently, our AFS service has been growing at unprecedented rates (300% in the past year), thanks to innovations in both the hardware and software components of our file servers. This work will present how AFS is used at CERN and how...
    Go to contribution page
  251. Daniele Gregori (Istituto Nazionale di Fisica Nucleare (INFN)), Luca dell'Agnello (INFN-CNAF), Pier Paolo Ricci (INFN CNAF), Tommaso Boccali (Sezione di Pisa (IT)), Dr Vincenzo Vagnoni (INFN Bologna), Dr Vladimir Sapunenko (INFN)
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    The Mass Storage System installed at the INFN CNAF Tier-1 is one of the biggest hierarchical storage facilities in Europe. It currently provides storage resources for about 12% of all LHC data, as well as for other High Energy Physics experiments. The Grid Enabled Mass Storage System (GEMSS) is the present solution implemented at the INFN CNAF Tier-1 and it is based on a custom integration...
    Go to contribution page
  252. Ivan Antoniev Dzhunov (University of Sofia)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    Given the distributed nature of the grid and the way CPU resources are pledged and shared around the globe, VOs face the challenge of monitoring the use of these resources. For CMS and the operation of centralized workflows, monitoring how many production jobs are running and pending in the glideinWMS production pools is very important. The Dashboard SSB (Site Status Board) provides...
    Go to contribution page
  253. Dr Tomoaki Nakamura (University of Tokyo (JP))
    14/10/2013, 15:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Poster presentation
    The Tokyo Tier2 center, which is located at the International Center for Elementary Particle Physics (ICEPP) at the University of Tokyo, was established as a regional analysis center in Japan for the ATLAS experiment. Official operation within WLCG started in 2007, after several years of development beginning in 2002. In December 2012, we replaced almost all hardware as the third system...
    Go to contribution page
  254. Jetendr Shamdasani (University of the West of England (GB))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    Efficient, distributed and complex software is central to the analysis of high energy physics (HEP) data. One area that has been somewhat overlooked in recent years is the tracking of the development of HEP software, of its use in data analyses, and of its evolution over time. This area of tracking analyses to provide records of actions performed, outcomes achieved and (re-)design...
    Go to contribution page
  255. Daniele Francesco Kruse (CERN)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    Administering a large-scale, multi-protocol, hierarchical tape storage infrastructure like the one at CERN, which stores around 30 PB per year, requires an adequate monitoring system for quick spotting of malfunctions, easier debugging and on-demand report generation. The main challenges for such a system are: coping with log format diversity and information scattered among several log files,...
    Go to contribution page
  256. Morten Dam Joergensen (Niels Bohr Institute (DK))
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    The ATLAS offline data quality monitoring infrastructure functioned successfully during the 2010-2012 run of the LHC. During the 2013-14 long shutdown, a large number of upgrades will be made in response to user needs and to take advantage of new technologies - for example, deploying richer web applications, improving dynamic visualization of data, streamlining configuration, and moving...
    Go to contribution page
  257. Rahmat Rahmat (University of Mississippi (US))
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    HFGFlash is a very fast simulation of electromagnetic showers using parameterizations of the shower profiles in the Hadronic Forward Calorimeter. HFGFlash shows good agreement with collision data and previous test beam results and, in addition, can simulate showers about 10,000 times faster than Geant4. We will report the latest developments of HFGFlash...
    Go to contribution page
  258. Robin Eamonn Long (Lancaster University (GB))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The need to make maximum use of computing facilities whilst maintaining versatile and flexible setups leads to on-demand virtual machines provided through cloud computing. GridPP is currently investigating the role that cloud computing, in the form of virtual machines, can play in supporting particle physics analyses. As part of this research we look at the ability of VMware's ESXi...
    Go to contribution page
  259. Igor Sfiligoi (University of California San Diego)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    Monitoring is an important aspect of any job scheduling environment, and Grid computing is no exception. Writing quality monitoring tools is however a hard proposition, so the Open Science Grid decided to leverage existing enterprise-class tools in the context of the glideinWMS pilot infrastructure, which powers a large fraction of its Grid computing. The product chosen is the CycleServer,...
    Go to contribution page
  260. Carl Henrik Ohman (Uppsala University (SE))
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    With the advent of commercial as well as institutional and national clouds, new opportunities for on-demand computing resources become available to the HEP community. With the new cloud technologies come new challenges as well, one of which is the contextualization of cloud resources with regard to the requirements of the user and their experiment. In particular, on Google's new cloud platform Google...
    Go to contribution page
  261. Igor Sfiligoi (University of California San Diego)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The HTCondor based glideinWMS has become the product of choice for exploiting Grid resources for many communities. Unfortunately, its default operational model expects users to log into a machine running an HTCondor schedd before being able to submit their jobs. Many users would instead prefer to use their local workstation for everything. A product that addresses this problem is rcondor, a...
    Go to contribution page
  262. Antanas Norkus (Vilnius University (LT))
    14/10/2013, 15:00
    Event Processing, Simulation and Analysis
    Poster presentation
    The scrutiny and validation of the software and of the calibrations used to simulate and reconstruct collision events have been key elements of the physics performance of the CMS experiment. Such scrutiny is performed in stages by approximately one hundred experts who master specific areas of expertise, ranging from the low-level reconstruction and calibration, which are specific to a...
    Go to contribution page
  263. Stephen Jones (Liverpool University)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    VomsSnooper is a tool that provides an easy way to keep documents and sites up to date with the newest VOMS records from the Operations Portal, and removes the need for manual edits to security configuration files. Yaim is used to configure the middleware at grid sites. Specifically, Yaim processes variables that define which VOMS services are used to authenticate users of any VO. The data...
    Go to contribution page
  264. Alexandre Beche (CERN), David Tuckett (CERN)
    14/10/2013, 15:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Poster presentation
    The Worldwide LHC Computing Grid provides resources for the four main virtual organizations. Along with data processing, data distribution is the key computing activity on the WLCG infrastructure. The scale of this activity is very large: the ATLAS virtual organization (VO) alone generates and distributes more than 40 PB of data in 100 million files per year. Another challenge is the...
    Go to contribution page
  265. Matevz Tadel (Univ. of California San Diego (US))
    14/10/2013, 15:00
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    Following the smashing success of the XRootd-based USCMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd disk-based caching proxy. The first one simply starts fetching the whole file as soon as a file-open request is received, and is suitable when completely random file access is expected or it is already...
    Go to contribution page
  266. Dr Peter Elmer (Princeton University (US))
    14/10/2013, 15:45
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    Modern HEP software stacks, such as those used by the LHC experiments at CERN, involve many millions of lines of custom code per experiment, as well as a number of similarly sized shared packages (ROOT, Geant4, etc.) Thousands of people have made contributions over time to these code bases, including graduate students, postdocs, professional researchers and software/computing...
    Go to contribution page
  267. Dr Daniel van der Ster (CERN)
    14/10/2013, 15:45
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    Emerging storage requirements, such as the need for block storage for both OpenStack VMs and file services like AFS and NFS, have motivated the development of a generic backend storage service for CERN IT. The goals for such a service include (a) vendor neutrality, (b) horizontal scalability with commodity hardware, (c) fault tolerance at the disk, host, and network levels, and (d) support for...
    Go to contribution page
  268. Dr Antonio Maria Perez Calero Yzquierdo (Centro de Investigaciones Energ. Medioambientales y Tecn. (ES))
    14/10/2013, 15:45
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    In the coming years, processor architectures based on much larger numbers of cores will most likely be the model for continuing "Moore's Law"-style throughput gains. This not only results in many more jobs running the LHC Run 1 era monolithic applications in parallel; the memory requirements of these processes also push worker-node architectures to the limit. One solution is parallelizing the...
    Go to contribution page
  269. Ramon Medrano Llamas (CERN)
    14/10/2013, 15:45
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    In order to ease the management of their infrastructure, most WLCG sites are adopting cloud-based strategies. CERN, the Tier 0 of the WLCG, is completely restructuring the resource and configuration management of its computing center under the codename Agile Infrastructure. Its goal is to manage 15,000 virtual machines by means of an OpenStack middleware in order to...
    Go to contribution page
  270. Olof Barring (CERN)
    14/10/2013, 15:45
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    In May 2012 CERN signed a contract with the Wigner Data Centre in Budapest for an extension of CERN’s central computing facility beyond its current boundaries, which are set by the electrical power and cooling available for computing. The centre is operated as a remote co-location site providing rack space, electrical power and cooling for server, storage and networking equipment acquired by CERN. The...
    Go to contribution page
  271. Prof. Ivan Kisel (GSI, Gesellschaft fuer Schwerionenforschung mbH)
    14/10/2013, 15:45
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    The CBM (Compressed Baryonic Matter) experiment is an experiment being prepared to operate at the future Facility for Anti-Proton and Ion Research (FAIR, Darmstadt, Germany). Its main focus is the measurement of very rare probes, which requires interaction rates of up to 10 MHz. Together with the high multiplicity of charged tracks produced in heavy-ion collisions, this leads to huge data...
    Go to contribution page
  272. Mr Arnim Balzer (DESY, University Potsdam)
    14/10/2013, 15:45
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The High Energy Stereoscopic System (H.E.S.S.) is a system of five Imaging Atmospheric Cherenkov Telescopes (IACTs) located in the Khomas Highland in Namibia. It measures cosmic gamma-rays with very high energies (VHE; > 100 GeV) using the Earth’s atmosphere as a calorimeter. The H.E.S.S. array entered Phase II in September 2012 with the inauguration of a fifth telescope that is larger and...
    Go to contribution page
  273. Ben Jones (CERN), Gavin Mccance (CERN), Nacho Barrientos Arias, Steve Traylen (CERN)
    14/10/2013, 16:05
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    For over a decade CERN's fabric management system has been based on home-grown solutions. Those solutions are not dynamic enough for CERN to face its new challenges such as significantly scaling out, multi-site management and the Cloud Computing model, without any additional staff. This presentation will illustrate the motivations for CERN to move to a new tool-set in the context of the Agile...
    Go to contribution page
  274. Jaroslav Zalesak (Acad. of Sciences of the Czech Rep. (CZ))
    14/10/2013, 16:05
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The NOvA experiment has developed a data acquisition system that is able to continuously digitize and produce a zero-bias streaming readout for the more than 368,000 detector cells that constitute the 14 kton far detector. The NOvA DAQ system combines custom-built frontend readout and data aggregation hardware with advances in enterprise-class networking to continuously deliver data to...
  275. Jim Kowalkowski (Fermilab)
    14/10/2013, 16:07
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code....
  276. Ian Fisk (Fermi National Accelerator Lab. (US)), Jacob Thomas Linacre (Fermi National Accelerator Lab. (US))
    14/10/2013, 16:07
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    During Spring 2013, CMS processed 1 billion RAW data events at the San Diego Supercomputer Center (SDSC), a facility nearly half the size of the dedicated CMS Tier-1 processing resources. This facility has none of the permanent CMS services, service level agreements, or support normally associated with a Tier-1, and was assembled with a few weeks' notice to process only a few workflows. The size...
  277. Maxim Potekhin (Brookhaven National Laboratory (US))
    14/10/2013, 16:07
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    The ATLAS Production System is the top-level workflow manager which translates physicists' needs for production-level processing into actual workflows executed across about a hundred processing sites used globally by ATLAS. As the production workload increased in volume and complexity in recent years (the count of ATLAS production tasks is above one million, with each task containing hundreds or...
  278. Dr Jakub Moscicki (CERN)
    14/10/2013, 16:08
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    Individual users at CERN are attracted by external file hosting services such as Dropbox. This trend may lead to what is known as the "Dropbox Problem": sensitive organization data stored on servers outside of corporate control, outside of established policies, outside of enforceable SLAs and in unknown geographical locations. Mitigating this risk also provides a good incentive to rethink how...
  279. Dr Mohammad Al-Turany (GSI)
    14/10/2013, 16:10
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    The FairRoot framework is the standard framework for simulation, reconstruction and data analysis for the FAIR experiments. The framework is designed to optimize accessibility for beginners and developers, to be flexible, and to cope with future developments. FairRoot enhances the synergy between the different physics experiments within the FAIR project. Moreover, the framework is...
  280. Belmiro Daniel Rodrigues Moreira (LIP Laboratorio de Instrumentacao e Fisica Experimental (LIP)-Un)
    14/10/2013, 16:25
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    CERN's Infrastructure as a Service cloud is being deployed in production across the two data centres in Geneva and Budapest. This talk will describe the experiences of the first six months of production, the different uses within the organisation and the outlook for expansion to over 15,000 hypervisors based on OpenStack by 2015. The open source toolchain used, accounting and scheduling...
  281. Dr Christopher Jones (Fermi National Accelerator Lab. (US))
    14/10/2013, 16:25
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The DarkSide-50 dark matter experiment has recently been constructed and commissioned at the Laboratori Nazionali del Gran Sasso (LNGS). The data acquisition system for the experiment was jointly constructed by members of the LNGS Research Division and the Fermilab Scientific Computing Division, and it makes use of commercial, off-the-shelf hardware components and the artdaq DAQ software...
  282. Dr Friederike Nowak (DESY)
    14/10/2013, 16:29
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    In 2007, the National Analysis Facility (NAF) was set up at DESY within the framework of the Helmholtz Alliance "Physics at the Terascale". Its purpose was the provision of an analysis infrastructure for up-to-date research in Germany, complementing the Grid by offering interactive access to the data. It has been well received within the physics community, and has proven to...
  283. Tadashi Maeno (Brookhaven National Laboratory (US))
    14/10/2013, 16:29
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment at the LHC is the Production and Distributed Analysis (PanDA) workload management system. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not...
  284. Stefan Lohn (CERN)
    14/10/2013, 16:29
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    Software optimization is a complex process, where the intended improvements have different effects on different platforms, with multiple operating systems and an ongoing introduction of new hardware. In addition, several compilers produce differing object code as a result of different internal optimization procedures. Tracing back the impact of the optimizations is going to become more...
  285. Semen Lebedev (Justus-Liebig-Universitaet Giessen (DE))
    14/10/2013, 16:30
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    Development of fast and efficient event reconstruction algorithms is an important and challenging task in the Compressed Baryonic Matter (CBM) experiment at the future FAIR facility. The event reconstruction algorithms have to process terabytes of input data produced in particle collisions. In this contribution, several event reconstruction algorithms, which use available features of modern...
  286. Dr Wang Lu (Institute of High Energy Physics, CAS)
    14/10/2013, 16:31
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    Object storage systems based on Amazon’s Simple Storage Service (S3) have substantially developed in the last few years. The scalability, durability and elasticity characteristics of those systems make them well suited for a range of use cases where data is written, seldom updated and frequently read. Storage of images, static web sites and backup systems are some of the use cases where S3...
  287. Pedro Andrade (CERN)
    14/10/2013, 16:45
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    At the present time computing centres are facing a massive rise in virtualization and cloud computing. The Agile Infrastructure (AI) project is working to deliver new solutions to ease the management of CERN Computing Centres. Part of the solution consists in a new common monitoring infrastructure which collects and manages monitoring data of all computing centre servers and associated...
  288. Dr Radoslaw Karabowicz (GSI)
    14/10/2013, 16:45
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The PANDA experiment will be running up to 2×10^7 antiproton-proton collisions per second at energies reaching 15 GeV. The lack of simple features distinguishing the interesting events from background, as well as the strong pileup of event data streams, makes the use of a hardware trigger impossible. As a consequence the whole data stream of about 300 GB/s has to be analyzed online, i.e....
  289. Wim Lavrijsen (Lawrence Berkeley National Lab. (US))
    14/10/2013, 16:50
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    The Python programming language brings a dynamic, interactive environment to physics analysis. With PyPy, high performance can be delivered as well when making use of its tracing just-in-time (JIT) compiler and cppyy for C++ bindings, as cppyy is able to exploit common HEP coding patterns. For example, ROOT I/O with cppyy runs at speeds equal to that of optimized, hand-tuned C++. Python does...
  290. Dr Antonio Limosani (University of Melbourne (AU))
    14/10/2013, 16:51
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    The Australian Government is making an AUD 100 million investment in compute and storage for the academic community. The compute facilities are provided in the form of 24,000 CPU cores located at 8 nodes around Australia in a distributed, virtualized Infrastructure-as-a-Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All...
  291. Dr Michael Kirby (Fermi National Accelerator Laboratory)
    14/10/2013, 16:51
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    The Fabric for Frontier Experiments (FIFE) project is a new far-reaching, major-impact initiative within the Fermilab Scientific Computing Division to drive the future of computing services for Fermilab Experiments. It is a collaborative effort between computing professionals and experiment scientists to produce an end-to-end, fully integrated set of services for computing on the grid and...
  292. Seppo Sakari Heikkila (CERN)
    14/10/2013, 16:54
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    Cloud storage is an emerging architecture aiming to provide increased scalability and access performance, compared to more traditional solutions. CERN is evaluating this promise using Huawei UDS and OpenStack storage deployments, focusing on the needs of high-energy physics. Both deployed setups implement S3, one of the protocols that are emerging as standard in the cloud storage market. A set...
  293. Stefanie Lewis
    14/10/2013, 16:55
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    At the scale of the proton mass, the strong force is not well understood. Various quark models exist, but it is important to determine which quark model(s) are most accurate. Experimentally, finding resonances predicted by some models and not others would give valuable insight into this fundamental interaction. Several labs around the world use photoproduction experiments to find these missing...
  294. Dr Salvatore Tupputi (Universita e INFN (IT))
    14/10/2013, 17:25
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions which improve the reliability of the system. In this perspective a crucial case is the automatic exclusion/recovery of ATLAS computing sites storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to...
  295. Dr Jamie Shiers (CERN)
    14/10/2013, 17:25
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    The international study group on data preservation in high energy physics, DPHEP, achieved a milestone in 2012 with the publication of its eagerly anticipated large scale report, which contains a description of data preservation activities from all major high energy physics collider-based experiments and laboratories. A central message of the report is that data preservation in HEP is not...
  296. Michail Salichos (CERN)
    14/10/2013, 17:25
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    FTS is the service responsible for distributing the majority of LHC data across the WLCG infrastructure. From the experience of the last decade supporting and monitoring FTS, reliable, robust and high-performance data transfers have proved to be of high importance in the Data Management world. We are going to present the current status and features of the new File Transfer Service...
  297. Pascal Costanza (ExaScience Lab, Intel, Belgium)
    14/10/2013, 17:25
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    Using Intel's SIMD architecture (SSE, AVX) to speed up operations on containers of complex class and structure objects is challenging, because it requires that the same data members of the different objects within a container have to be laid out next to each other, in a structure of arrays (SOA) fashion. Currently, programming languages do not provide automatic ways for arranging containers as...
  298. Elizabeth Sexton-Kennedy (Fermi National Accelerator Lab. (US))
    14/10/2013, 17:25
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    Modern computing hardware is transitioning from using a single high frequency complicated computing core to many lower frequency simpler cores. As part of that transition, hardware manufacturers are urging developers to exploit concurrency in their programs via operating system threads. We will present CMS' effort to evolve our single threaded framework into a highly concurrent framework. We...
  299. Dr Jose Antonio Coarasa Perez (CERN)
    14/10/2013, 17:25
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The CMS online cluster consists of more than 3000 computers. It has been exclusively used for the data acquisition of the CMS experiment at CERN, archiving around 20 TB of data per day. An OpenStack cloud layer has been deployed on part of the cluster (totalling more than 13,000 cores) as a minimal overlay so as to leave the primary role of the computers untouched while allowing an...
  300. Dr Igor Oya (Humboldt University)
    14/10/2013, 17:25
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The Cherenkov Telescope Array (CTA) is one of the major ground-based astronomy projects being pursued and will be the largest facility for ground-based gamma-ray observations ever built. CTA will consist of two arrays: one in the Northern hemisphere composed of about 20 telescopes, and the other one in the Southern hemisphere composed of about 100 telescopes, both arrays containing...
  301. Peter Kreuzer (Rheinisch-Westfaelische Tech. Hoch. (DE))
    14/10/2013, 17:45
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    CMS is using a tiered setup of dedicated computing resources provided by sites distributed over the world and organized in WLCG. These sites pledge resources to CMS and prepare them specially for CMS to run the experiment's applications. But more resources are available opportunistically, both on the Grid and in local university and research clusters, which can be used for CMS...
  302. Evan Niner (Indiana University)
    14/10/2013, 17:45
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The NOvA detector utilizes not only a high-speed streaming readout system capable of reading out the waveforms of over 368,000 detector cells, but also a distributed timing system able to drive and program the front-end clock systems of each of these readouts, allowing each hit in the detector to be time-stamped with a universal wall-clock time. This system is used to perform an absolute...
  303. Andrei Gheata (CERN)
    14/10/2013, 17:46
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    Among the components contributing to particle transport, geometry navigation is an important consumer of CPU cycles. The tasks performed to answer "basic" queries, such as locating a point within a geometry hierarchy or accurately computing the distance to the next boundary, can become very compute-intensive for complex detector setups. Among several optimization methods already in use by...
  304. Brian Van Klaveren (SLAC)
    14/10/2013, 17:47
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    The SLAC Computing Applications group (SCA) has developed a general purpose data catalog framework, initially for use by the Fermi Gamma-Ray Space Telescope, and now in use by several other experiments. The main features of the data catalog system are:
    * Ability to organize datasets in a virtual hierarchy without regard to physical location or access protocol
    * Ability to catalog...
  305. Iban Jose Cabrillo Bartolome (Universidad de Cantabria (ES))
    14/10/2013, 17:47
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    The Altamira supercomputer at the Institute of Physics of Cantabria (IFCA) entered operation in summer 2012. Its latest-generation FDR InfiniBand network, used for message passing in parallel jobs, also supports the connection to General Parallel File System (GPFS) servers, enabling efficient processing of multiple data-demanding jobs at the same time. Sharing a common GPFS system with...
  306. Mike Hildreth (University of Notre Dame (US))
    14/10/2013, 17:48
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    Data and Software Preservation for Open Science (DASPOS) represents a first attempt to establish a formal collaboration tying together physicists from the CMS and ATLAS experiments at the LHC and the Tevatron experiments with experts in digital curation, heterogeneous high-throughput storage systems, large-scale computing systems, and grid access and infrastructure. Recently funded by the...
  307. Benedikt Hegner (CERN)
    14/10/2013, 17:50
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    In the past, the increasing demands for HEP processing resources could be fulfilled by distributing the work to more and more physical machines. Limitations in power consumption of both CPUs and entire data centers are bringing an end to this era of easy scalability. To get the most CPU performance per Watt, future hardware will be characterised by less and less memory per processor, as well...
  308. Jim Kowalkowski (Fermilab)
    15/10/2013, 09:00
    Developments in concurrency (massive multi-core, GPUs, and architectures such as ARM) are changing the physics computing landscape. In this talk Dr Jim Kowalkowski of Fermilab will describe the use of GPUs and massive multi-core, the changes that result from massive parallelization, and how this impacts data processing and models.
  309. Mr Philippe Canal (Fermi National Accelerator Lab. (US))
    15/10/2013, 09:45
    Developments in many of our key software packages, such as ROOT 6 and the next-generation Geant, will have a significant impact on the way analysis is done. Dr Philippe Canal will present a bird's-eye view of where these developments can lead us, of the way next-generation ROOT and Geant can be combined, and of how, for example, the increased use of concurrency in these key software packages...
  310. Dr Torre Wenaus (Brookhaven National Laboratory (US))
    15/10/2013, 11:00
    The computing for the LHC experiments has resulted in spectacular physics during the first few years of running. Now, the long shutdown offers the possibility to rethink some of the underlying concepts, look back at the lessons learned from this first run, and at the same time work on revised models for the period after LS1. Dr Torre Wenaus of Brookhaven National Lab will talk about the revisions...
  311. Stefano Spataro (University of Turin)
    15/10/2013, 11:45
    For many experiments, e.g. those at the LHC, design choices made a very long time ago for the computing and trigger model are still used today. Upcoming experiments have the opportunity to make new choices based on the current state of computing technology and novel ways to design reconstruction frameworks, using the experience from previous experiments as well as already existing...
  312. Martin Philipp Hellmich (University of Edinburgh (GB))
    15/10/2013, 13:30
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    Recent developments, including low power devices, cluster file systems and cloud storage, represent an explosion in the possibilities for deploying and managing grid storage. In this paper we present how different technologies can be leveraged to build a storage service with differing cost, power, performance, scalability and reliability profiles, using the popular DPM/dmlite storage solution...
  313. Dr Peter Elmer (Princeton University (US))
    15/10/2013, 13:30
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    In the last decade, power limitations led to the introduction of multicore CPUs. The cores on these processors were, however, not dramatically different from those just before the multicore era. In some sense, this was merely a tactical choice to maximize compatibility and buy time. The same scaling problems that led to the power limit are likely to push processors in the...
  314. Mario Ubeda Garcia (CERN), Victor Mendez Munoz (PIC)
    15/10/2013, 13:30
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using Dirac and its LHCb-specific extension LHCbDirac as an interware for its Distributed Computing. So far it has seamlessly integrated Grid resources and computer clusters. The cloud extension of Dirac (VMDIRAC) extends it to the integration of Cloud computing...
  315. Alessandro Lonardo (INFN, Roma I (IT))
    15/10/2013, 13:30
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The integration of GPUs in trigger and data acquisition systems is currently being investigated in several HEP experiments. At higher trigger levels, when the efficient many-core parallelization of event reconstruction algorithms is possible, the benefit of significantly reducing the number of farm computing nodes is evident. At lower levels, where typically severe real-time...
  316. Dr Andrea Sciaba (CERN)
    15/10/2013, 13:30
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The Worldwide LHC Computing Grid project (WLCG) provides the computing and storage resources required by the LHC collaborations to store, process and analyse their data. It includes almost 200,000 CPU cores, 200 PB of disk storage and 200 PB of tape storage distributed among more than 150 sites. The WLCG operations team is responsible for several essential tasks, such as the coordination of...
  317. Christopher Jung (KIT - Karlsruhe Institute of Technology (DE))
    15/10/2013, 13:30
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    Data play a central role in most fields of Science. In recent years, the amount of data from experiment, observation, and simulation has increased rapidly and the data complexity has grown. Also, communities and shared storage have become geographically more distributed. Therefore, methods and techniques applied for scientific data need to be revised and partially be replaced, while keeping...
  318. Zachary Louis Marshall (Lawrence Berkeley National Lab. (US))
    15/10/2013, 13:30
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    In the 2011/12 run the LHC delivered substantial numbers of multiple proton-proton collisions within each filled bunch-crossing, as well as multiple filled bunch-crossings within the sensitive time window of the ATLAS detector. This will increase in the near future during the run beginning in 2015. Including these effects in Monte Carlo simulation poses significant computing challenges. We present a...
  319. Petr Zejdl (CERN)
    15/10/2013, 13:50
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The CMS data acquisition (DAQ) infrastructure collects data from more than 600 custom detector Front End Drivers (FEDs). In the current implementation data is transferred from the FEDs via 3.2 Gb/s electrical links to custom interface boards, which transfer the data to a commercial Myrinet network based on 2.5 Gb/s optical links. During 2013 and 2014 the CMS DAQ system will undergo a major...
  320. Ramon Medrano Llamas (CERN)
    15/10/2013, 13:50
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    HammerCloud was designed and built to meet the needs of the grid community, testing resources and automating operations from a user perspective. Recent developments in the IT space propose a shift to software-defined data centers, in which every layer of the infrastructure can be offered as a service. Testing and monitoring is an integral part of the development, validation and...
  321. Mrs Tanya Levshina (FERMILAB)
    15/10/2013, 13:52
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    The Open Science Grid (OSG) Public Storage project is focused on improving and simplifying the management of OSG Storage. Currently, OSG doesn’t provide efficient means to manage public storage offered by participating sites. A Virtual Organization (VO) that relies on opportunistic storage has difficulties finding appropriate storage, verifying its availability, and monitoring its...
  322. Dag Larsen (University of Silesia (PL))
    15/10/2013, 13:52
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    Currently, the NA61/SHINE data production is performed on the CERN shared batch system, an approach inherited from its predecessor NA49. New data productions are initiated by manually submitting jobs to the batch system. An effort is now under way to migrate the data production to an automatic system, on top of a fully virtualised platform based on CernVM. There are several motivations for...
  323. Sebastiano Schifano (U)
    15/10/2013, 13:53
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    An interesting evolution in scientific computing is the mainstream introduction of co-processor boards that were originally built to accelerate graphics rendering and are now being used to perform general computing tasks. A peculiarity of these boards (GPGPUs, or General Purpose Graphics Processing Units, and many-core boards like the Intel Xeon Phi) is that they...
  324. Dr Paul Millar (Deutsches Elektronen-Synchrotron (DE))
    15/10/2013, 13:53
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    Storage is a continually evolving environment, with new solutions to both existing problems and new challenges. With over ten years in production use, dCache is also evolving to match this changing landscape. In this paper, we present three areas in which dCache is matching demand and driving innovation. Providing efficient access to data that maximises both streaming and random-access...
  325. Mike Hildreth (University of Notre Dame (US))
    15/10/2013, 13:55
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    Within the last year, design studies for LHC detector upgrades have begun to reach a level of detail that requires the simulation of physics processes with simulation performance at the level provided by Geant4. Full detector geometries for potential upgrades have been designed and incorporated into the CMS software. However, the extreme luminosities expected during the lifetimes of the...
  326. Rainer Schwemmer (CERN)
    15/10/2013, 14:10
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The architecture of the data acquisition for the LHCb upgrade is designed to allow for data transmission from the front-end electronics directly to the readout boards, synchronously with the bunch crossing, at a rate of 40 MHz. To connect the front-end electronics to the readout boards, the upgraded detector will require on the order of 12,000 GBT-based (3.2 Gb/s radiation-hard CERN serializer)...
  327. Adriana Telesca (CERN)
    15/10/2013, 14:10
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    ALICE (A Large Ion Collider Experiment) is a heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The ALICE Data-AcQuisition (DAQ) system handles the data flow from the sub-detector electronics to the permanent data storage in the CERN computing center. The DAQ farm consists of about 1000 devices of many...
  328. Dr Robert Illingworth (Fermilab)
    15/10/2013, 14:14
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    Fermilab Intensity Frontier experiments such as Minerva, NOvA, and MicroBooNE are now using an improved version of the Fermilab SAM data handling system. SAM was originally used by the CDF and D0 experiments for Run II of the Fermilab Tevatron to provide file metadata and location cataloguing, uploading of new files to tape storage, dataset management, file transfers between global processing...
    Go to contribution page
  329. Dr David Colling (Imperial College Sci., Tech. & Med. (GB))
    15/10/2013, 14:14
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    The High Level Trigger (HLT) farm in CMS comprises more than ten thousand processor cores; it is heavily used during data acquisition and largely unused when the detector is off. In this presentation we will cover the work done in CMS to utilize this large processing resource with cloud resource provisioning techniques. This resource, when configured with OpenStack and Agile Infrastructure...
    Go to contribution page
  330. Robert Johannes Langenberg (Technische Universitaet Muenchen (DE))
    15/10/2013, 14:15
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    The track reconstruction algorithms of the ATLAS experiment have demonstrated excellent performance in all of the data delivered so far by the LHC. However, the expected large increase in the number of interactions per bunch crossing introduces new challenges, both in the computational aspects and in the physics performance of the algorithms. With the aim of taking advantage of modern CPU design...
    Go to contribution page
  331. Daniel Funke (KIT - Karlsruhe Institute of Technology (DE))
    15/10/2013, 14:16
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN near Geneva/Switzerland is a general-purpose particle detector which led, among many other results, to the discovery of a Higgs-like particle in 2012. It comprises the largest silicon-based tracking system built to date with 75 million individual readout channels and a total surface area of 205 m^2. The...
    Go to contribution page
  332. Giacinto Donvito (Universita e INFN (IT))
    15/10/2013, 14:16
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    In this work we will show the testing activity carried out on several distributed file systems in order to check their capability of supporting HEP data analysis. In particular, we focused our attention and our tests on HadoopFS, Ceph, and GlusterFS, all of which are open-source software. HadoopFS is an Apache Foundation project and is part of a more general framework that contains: task...
    Go to contribution page
  333. Pawel Szostek (CERN)
    15/10/2013, 14:30
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    As Moore’s Law continues to deliver more and more transistors, the mainstream processor industry is preparing to expand its investments in areas other than simple core count. These new interests include deep integration of on-chip components, advanced vector units, memory, cache and interconnect technologies. We examine these moving trends with parallelized and vectorized High Energy Physics...
    Go to contribution page
  334. Josef Novy (Czech Technical University (CZ))
    15/10/2013, 14:30
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    COMPASS is a fixed-target experiment situated at the Super Proton Synchrotron (SPS) accelerator in the North Area of the CERN laboratory in Geneva, Switzerland. The experiment was commissioned during 2001 and data-taking started in 2002. The data acquisition system of the experiment is based on the DATE software package, originally developed for the ALICE experiment. In 2011, after the...
    Go to contribution page
  335. Dr Dario Menasce (INFN Milano-Bicocca)
    15/10/2013, 14:36
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    Radiation detectors usually require complex calibration procedures in order to provide reliable activity measurements. The Milano-Bicocca group has developed, over the years, a complex simulation tool, based on Geant4, that provides the functionality required to compute the correction factors necessary for such calibrations in a broad range of use cases, considering various radioactive source...
    Go to contribution page
  336. Dr Adam Lyon (Fermilab)
    15/10/2013, 14:36
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    IFDH (Intensity Frontier Data Handling) is a suite of tools for data-movement tasks for Fermilab experiments and is an important part of the FIFE (Fabric for Frontier Experiments) initiative described at this conference. IFDH encompasses moving input data from caches or storage elements to compute nodes (the "last mile" of data movement) and moving output data potentially to those caches as...
    Go to contribution page
  337. Rolf Edward Andreassen (University of Cincinnati (US))
    15/10/2013, 14:38
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    We present a general framework for maximum-likelihood fitting, in which GPUs are used to massively parallelise the per-event probability calculation. For realistic physics fits we achieve speedups, relative to executing the same algorithm on a single CPU, of several hundred.
    Go to contribution page
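The reason per-event likelihood evaluation maps so well onto GPUs is that each event's probability is computed independently, with only a final sum coupling the events. The following is a minimal sketch of that structure (ours, not the presented framework), using NumPy vectorisation as a stand-in for the one-thread-per-event GPU execution:

```python
import numpy as np

def gaussian_nll(params, events):
    """Negative log-likelihood of a Gaussian, evaluated per event."""
    mu, sigma = params
    # Per-event log-probability: purely data-parallel, no dependency
    # between events -- this is the part a GPU framework parallelises.
    logp = -0.5 * ((events - mu) / sigma) ** 2 \
           - np.log(sigma * np.sqrt(2.0 * np.pi))
    # Only this reduction couples the events.
    return -np.sum(logp)

rng = np.random.default_rng(42)
data = rng.normal(loc=1.0, scale=2.0, size=100_000)

# Sanity check: the NLL is smaller at the true parameters than away from them.
nll_true = gaussian_nll((1.0, 2.0), data)
nll_off = gaussian_nll((0.0, 2.0), data)
print(nll_true < nll_off)
```

A fit iterates this evaluation inside a minimiser; the quoted speedups of several hundred come from moving exactly the `logp` line onto many GPU threads.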
  338. Andreas Petzold (KIT - Karlsruhe Institute of Technology (DE))
    15/10/2013, 14:39
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    The need for storage continues to grow at a dazzling pace, and science and society have become dependent on access to digital data. The first sites storing an exabyte of data will be a reality in a few years. The common storage technology in small and large computer centers continues to be the magnetic disk because of its very good price/performance ratio. Storage-class memory and solid state disk...
    Go to contribution page
  339. Roberto Castello (Universite Catholique de Louvain (BE))
    15/10/2013, 14:40
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    Fast and efficient methods for the calibration and alignment of the detector play a key role in ensuring reliable physics performance for an HEP experiment. CMS has set up a solid framework for alignment and calibration purposes, in close contact with the detector and physics needs. The approximately 200 types of calibration and alignment existing for the various sub-detectors are collected by...
    Go to contribution page
  340. 15/10/2013, 15:00
  341. Dr Simon Patton (LAWRENCE BERKELEY NATIONAL LABORATORY)
    15/10/2013, 15:45
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    In March 2012 the Daya Bay Neutrino Experiment published the first measurement of the theta_13 mixing angle. The publication of this result occurred 20 days after the last data that appeared in the paper was taken, during which time normal data taking and processing continued. This achievement used over forty thousand core-hours of CPU and handled eighteen thousand files totaling 16 TB....
    Go to contribution page
  342. Markus Frank (CERN)
    15/10/2013, 15:45
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The LHCb experiment at the LHC accelerator at CERN collects collisions of particle bunches at 40 MHz. After a first level of hardware trigger with an output rate of 1 MHz, the physically interesting collisions are selected by running dedicated trigger algorithms in the High Level Trigger (HLT) computing farm. This farm consists of up to roughly 25000 CPU cores in about 1600 physical nodes, each...
    Go to contribution page
  343. Lucien Boland (University of Melbourne)
    15/10/2013, 15:45
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    The Nectar national research cloud provides compute resources to Australian researchers using OpenStack. CoEPP, a WLCG Tier2 member, wants to use Nectar’s cloud resources for Tier 2 and Tier 3 processing for ATLAS and other experiments including Belle, as well as theoretical computation. CoEPP would prefer to use the Torque job management system in the cloud because they have extensive...
    Go to contribution page
  344. Vardan Gyurjyan (Jefferson Lab)
    15/10/2013, 15:45
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    The majority of physics data processing (PDP) applications developed to date are single, sequential processes that start at a point in time and advance one step at a time until they are finished. In the current era of cloud computing and multi-core hardware architectures this approach has noticeable limitations. In this paper we present a detailed evaluation of the FBP-based Clas12 event...
    Go to contribution page
  345. Phillip Urquijo (Universitaet Bonn (DE))
    15/10/2013, 15:45
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    The Belle II experiment is a future flavour-factory experiment at the intensity-frontier SuperKEKB e+e- collider at KEK, Japan. Belle II is expected to go online in 2015 and to collect a total of 50 ab-1 of data by 2022. The data will be used to study rare flavour phenomena in the decays of B and D mesons and tau leptons, as well as heavy meson spectroscopy. Owing to the record-breaking...
    Go to contribution page
  346. Dr Tony Wildish (Princeton University (US))
    15/10/2013, 15:45
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    The data management elements in CMS are scalable, modular, and designed to work together. The main components are PhEDEx, the data transfer and location system; the Dataset Booking System (DBS), a metadata catalogue; and the Data Aggregation Service (DAS), designed to aggregate views and provide them to users and services. Tens of thousands of samples have been cataloged and petabytes of data...
    Go to contribution page
  347. Jason Alexander Smith (Brookhaven National Laboratory (US))
    15/10/2013, 15:45
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    Solid state drives (SSDs) provide significant improvements in random I/O performance over traditional rotating SATA and SAS drives. While the cost of SSDs has been steadily declining over the past few years, high-density SSDs remain prohibitively expensive compared to traditional drives. Currently, 1 TB SSDs generally cost more than US$1,000, while 1 TB SATA drives typically...
    Go to contribution page
  348. Shawn Mc Kee (University of Michigan (US)), Simone Campana (CERN)
    15/10/2013, 16:05
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The WLCG infrastructure moved from a very rigid network topology, based on the MONARC model, to a more relaxed system, where data movement between regions or countries does not necessarily need to involve T1 centers. While this evolution brought obvious advantages, especially in terms of flexibility for the LHC experiments' data management systems, it also opened the question of how to monitor...
    Go to contribution page
  349. Tomasz Bold (AGH Univ. of Science and Technology, Krakow)
    15/10/2013, 16:05
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The high level trigger (HLT) of the ATLAS experiment at the LHC selects interesting proton-proton and heavy ion collision events for the wide ranging ATLAS physics program. The HLT examines events selected by the level-1 hardware trigger using a combination of specially designed software algorithms and offline reconstruction algorithms. The flexible design of the entire trigger system was...
    Go to contribution page
  350. Georgios Lestaris (CERN)
    15/10/2013, 16:07
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    In a virtualized environment, contextualization is the process of configuring a VM instance for the needs of various deployment use cases. Contextualization in CernVM can be done by passing a handwritten context to the "user data" field of cloud APIs when running CernVM on the cloud, or by using the CernVM web interface when running the VM locally. CernVM Online is a publicly accessible web...
    Go to contribution page
  351. Niko Neufeld (CERN)
    15/10/2013, 16:07
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    The ARM architecture is a power-efficient design used in most mobile-device processors around the world today, since it provides reasonable compute performance per watt. The current LHCb software stack is designed (and expected) to build and run on machines with the x86/x86_64 architecture. This paper outlines the process of measuring the performance of the LHCb software stack...
    Go to contribution page
  352. Juan Carlos Diaz Velez (University of Wisconsin-Madison)
    15/10/2013, 16:07
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    IceProd is a data processing and management framework developed by IceCube Neutrino Observatory for processing of Monte Carlo simulations and data. IceProd runs as a separate layer on top of middleware and can take advantage of a variety of computing resources including grids and batch systems such as GLite, Condor, NorduGrid, PBS and SGE. This is accomplished by a set of dedicated daemons...
    Go to contribution page
  353. Vincent Garonne (CERN)
    15/10/2013, 16:08
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    Rucio is the next-generation Distributed Data Management (DDM) system, benefiting from recent advances in cloud and "Big Data" computing to address the scaling requirements of HEP experiments. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 140 petabytes spread worldwide across 130 sites, and...
    Go to contribution page
  354. Mathias Michel (Helmholtz-Institut Mainz)
    15/10/2013, 16:10
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    A large part of the physics program of the PANDA experiment at FAIR deals with the search for new conventional and exotic hadronic states, e.g. hybrids and glueballs. In the majority of analyses PANDA will need a Partial Wave Analysis (PWA) to identify possible candidates and to classify known states. Therefore, a new, agile and efficient PWA framework will be...
    Go to contribution page
  355. Dr Gabriele Garzoglio (FERMI NATIONAL ACCELERATOR LABORATORY)
    15/10/2013, 16:25
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    As the need for Big Data in science becomes ever more relevant, networks around the world are upgrading their infrastructure to support high-speed interconnections. To support its mission, the high-energy physics community as a pioneer in Big Data has always been relying on the Fermi National Accelerator Laboratory to be at the forefront of storage and data movement. This need was reiterated...
    Go to contribution page
  356. Dr Remi Mommsen
    15/10/2013, 16:25
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The DAQ system of the CMS experiment at the LHC is being redesigned during the accelerator shutdown in 2013/14. To reduce the interdependency of the DAQ system and the high-level trigger (HLT), we investigate the feasibility of using a file-system-based HLT. Events of ~1 MB size are built at the level-1 trigger rate of 100 kHz. The events are assembled by ~50 builder units (BUs). Each BU writes the...
    Go to contribution page
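The abstract's event size, trigger rate, and builder-unit count fix the required write throughput; a quick calculation (ours, based only on the quoted figures) shows the scale each BU's file system must sustain:

```python
# Rough throughput estimate for the file-based event builder described
# above: ~1 MB events at the 100 kHz level-1 rate, assembled by ~50 BUs.
EVENT_SIZE_MB = 1.0
L1_RATE_HZ = 100_000
N_BUILDER_UNITS = 50

total_mb_per_s = EVENT_SIZE_MB * L1_RATE_HZ        # MB/s into the farm
total_gb_per_s = total_mb_per_s / 1000.0           # ~100 GB/s aggregate
per_bu_gb_per_s = total_gb_per_s / N_BUILDER_UNITS # sustained per BU

print(f"{total_gb_per_s:.0f} GB/s total, {per_bu_gb_per_s:.1f} GB/s per BU")
```

An aggregate of ~100 GB/s, i.e. ~2 GB/s of sustained file writes per builder unit, is what makes the file-system choice a non-trivial part of the design.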
  357. Graeme Andrew Stewart (CERN)
    15/10/2013, 16:29
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    The need to run complex workflows for a high energy physics experiment such as ATLAS has always been present. However, as computing resources have become even more constrained, compared to the wealth of data generated by the LHC, the need to use resources efficiently and manage complex workflows within a single grid job have increased. In ATLAS, a new Job Transform framework has been...
    Go to contribution page
  358. Mr Davide Salomoni (INFN CNAF), Dr Elisabetta Ronchieri (INFN CNAF), Mr Marco Canaparo (INFN CNAF), Mr Vincenzo Ciaschini (INFN CNAF)
    15/10/2013, 16:29
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    Software packages in our scientific environment are constantly growing in size, and are written by any number of developers. This implies a strong churn on the code itself, and an associated risk of bugs and stability problems. This risk is unavoidable as long as the software undergoes active evolution, as it always happens with software that is still in use. However, the necessity of having...
    Go to contribution page
  359. Jakob Blomer (CERN)
    15/10/2013, 16:29
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    The traditional virtual machine building and deployment process is centered around the virtual machine hard disk image. The packages comprising the VM operating system are carefully selected, hard disk images are built for a variety of different hypervisors, and images have to be distributed and decompressed in order to instantiate a virtual machine. Within the HEP community, the CernVM...
    Go to contribution page
  360. Dr Tom Whyntie (Queen Mary, University of London/The Langton Star Centre)
    15/10/2013, 16:30
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    The Langton Ultimate Cosmic ray Intensity Detector (LUCID) experiment [1] is a satellite-based device that uses five Timepix hybrid silicon pixel detectors [2] to make measurements of the radiation environment at an altitude of approximately 660 km, i.e. in Low Earth Orbit (LEO). The experiment is due to launch aboard Surrey Satellite Technology Limited's (SSTL's) TechDemoSat-1 in Q3 of 2013....
    Go to contribution page
  361. Ilija Vukotic (University of Chicago (US))
    15/10/2013, 16:31
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    In the past year the ATLAS Collaboration has accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing for more efficient and intelligent use of computing resources by monitoring...
    Go to contribution page
  362. David Gutierrez Rueda (CERN)
    15/10/2013, 16:45
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The network infrastructure at CERN has evolved with the increasing service and bandwidth demands of the scientific community. Analysing the massive amounts of data gathered by the experiments requires more computational power and faster networks to carry the data. The new Data Centre in Wigner and the adoption of 100Gbps in the core of the network are the latest answers to these demands. In...
    Go to contribution page
  363. Mr Pierre Vande Vyvre (CERN)
    15/10/2013, 16:45
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    For the ALICE O2 Collaboration. ALICE (A Large Ion Collider Experiment) is a heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). After the second long shutdown of the LHC, the ALICE detector will be upgraded in order to make high precision measurements of rare probes at low pT, which cannot be...
    Go to contribution page
  364. Michal Husejko (CERN)
    15/10/2013, 16:51
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    This contribution describes how CERN has designed and integrated multiple essential tools for agile software development processes, ranging from a version control (Git) to issue tracking (Jira) and documentation (Wikis). Running such services in a large organisation like CERN requires many administrative actions both by users and the service providers, such as creating software projects,...
    Go to contribution page
  365. Donald Petravick (U)
    15/10/2013, 16:51
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    The Dark Energy Survey (DES) is designed to probe the origin of the accelerating universe and help uncover the nature of dark energy by measuring the 14-billion-year history of cosmic expansion with high precision. More than 120 scientists from 23 institutions in the United States, Spain, the United Kingdom, Brazil, Switzerland and Germany are working on the project. This...
    Go to contribution page
  366. Dario Berzano (CERN)
    15/10/2013, 16:51
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    PROOF, the Parallel ROOT Facility, is a ROOT-based framework which enables interactive parallelism for event-based tasks on a cluster of computing nodes. Although PROOF can be used simply from within a ROOT session with no additional requirements, deploying and configuring a PROOF cluster used to be less straightforward. Recently, great efforts have been spent to make the provisioning of...
    Go to contribution page
  367. Kenneth Bloom (University of Nebraska (US))
    15/10/2013, 16:54
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    CMS is in the process of deploying an Xrootd based infrastructure to facilitate a global data federation. The services of the federation are available to export data from half the physical capacity and the majority of sites are configured to read data over the federation as a back-up. CMS began with a relatively modest set of use-cases for recovery of failed local file opens, debugging and...
    Go to contribution page
  368. Mihai Niculescu (ISS - Institute of Space Science (RO) for the ALICE Collaboration)
    15/10/2013, 16:55
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    The visualization applications called event displays are used in every high-energy physics experiment as a fast quality-assurance method for the entire processing flow: starting from data acquisition, through data reconstruction and calibration, and finally obtaining the global 3D view. In this paper, we present a method that parallelizes this process flow and show how it is used for the ALICE...
    Go to contribution page
  369. Dr Saracco Paolo (INFN Genova (Italy))
    15/10/2013, 17:25
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    Uncertainty Quantification (UQ) addresses the issue of predicting non-statistical errors affecting the results of Monte Carlo simulations, deriving from uncertainties in the physics data and models they embed. In HEP it is relevant to particle transport in detectors, as well as to event generators. We summarize recent developments, which have established the mathematical ground of an exact...
    Go to contribution page
  370. Martin Barisits (CERN)
    15/10/2013, 17:25
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    The ATLAS Distributed Data Management system stores more than 140PB of physics data across 100 sites worldwide. To cope with the anticipated ATLAS workload of the coming decade, Rucio, the next-generation data management system has been developed. Replica management, as one of the key aspects of the system, has to satisfy critical performance requirements in order to keep pace with the...
    Go to contribution page
  371. Dr Tony Wildish (Princeton University (US))
    15/10/2013, 17:25
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering an operational review phase in order to concretely assess areas of possible improvement and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start, the...
    Go to contribution page
  372. Alberto Gianoli (Universita di Ferrara (IT))
    15/10/2013, 17:25
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The performance of "level 0" (L0) triggers is crucial to reduce and appropriately select the large amount of data produced by detectors in high energy physics experiments. This selection must be accomplished as fast as possible, since data staging within detectors is a critical resource. For example, in the NA62 experiment at CERN, the event rate is estimated at around 10 MHz, and the...
    Go to contribution page
  373. Dr Jose Caballero Bejar (Brookhaven National Laboratory (US))
    15/10/2013, 17:25
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    The Open Science Grid (OSG) encourages the concept of software portability: a user's scientific application should be able to run in as many operating system environments as possible. This is typically accomplished by compiling the software into a single static binary, or distributing any dependencies in an archive downloaded by each job. However, the concept of portability runs against the...
    Go to contribution page
  374. Elisabetta Vilucchi (Laboratori Nazionali di Frascati (LNF) - Istituto Nazionale di F)
    15/10/2013, 17:25
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    In the ATLAS computing model, Grid resources are managed by the PanDA system, a data-driven workload management system designed for production and distributed analysis. Data are stored in various formats in ROOT files, and end-user physicists have the choice of using either the ATHENA framework or ROOT directly. The ROOT way of analyzing data provides users the possibility of using PROOF to exploit...
    Go to contribution page
  375. Fons Rademakers (CERN)
    15/10/2013, 17:25
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    The parametric function classes of ROOT (TFormula and TF1) have been improved using the capabilities of Cling/LLVM. We will present how formula expressions can now be compiled on the fly using the just-in-time capabilities of LLVM/Cling. Furthermore, using the new features of C++11, one can build complex function expressions by re-using the existing mathematical functions. We will also show the...
    Go to contribution page
  376. 15/10/2013, 17:45
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    Computing and networking infrastructures across the world continue to grow to meet the increasing needs of data intensive science, notably those of the LHC and other large high energy physics collaborations. The LHC’s large data volumes challenge the technology used to interconnect widely-separated sites (and their available resources) and lead to complications in the overall process of...
    Go to contribution page
  377. Leo Piilonen (Virginia Tech)
    15/10/2013, 17:45
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    I will describe the first-level trigger in the Belle II experiment that examines the hit patterns in the K-long and muon (KLM) detector to find evidence for compact clusters (indicative of a K-long meson hadronic shower) or tracks (indicative of a charged particle from the interaction point or of a cosmic ray). The algorithm is implemented in a VIRTEX6 FPGA on a Universal Trigger Module...
    Go to contribution page
  378. Danilo Piparo (CERN)
    15/10/2013, 17:47
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    During the first four years of data taking at the Large Hadron Collider (LHC), the simulation and reconstruction programs of the experiments proved to be extremely resource consuming. In particular, for complex event simulation and reconstruction applications, the impact of evaluating elementary functions on the runtime is sizeable (up to one fourth of the total), with an obvious effect on the...
    Go to contribution page
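The standard recipe behind fast elementary-function libraries of this kind is range reduction followed by a short polynomial, trading a little accuracy for speed. The sketch below (ours, not the experiments' code) applies it to exp: write exp(x) = 2^k · exp(r) with |r| ≤ ln(2)/2, and approximate exp(r) by a degree-5 Taylor polynomial:

```python
import numpy as np

LN2 = np.log(2.0)

def fast_exp(x):
    """Illustrative fast exp: range reduction + degree-5 polynomial."""
    x = np.asarray(x, dtype=np.float64)
    k = np.rint(x / LN2)                 # integer exponent of 2
    r = x - k * LN2                      # reduced argument, |r| <= ln2/2
    # Taylor polynomial of exp(r) up to r^5, in Horner form
    p = 1 + r * (1 + r * (1/2 + r * (1/6 + r * (1/24 + r / 120))))
    return np.ldexp(p, k.astype(np.int32))   # p * 2**k

xs = np.linspace(-10.0, 10.0, 1001)
rel_err = np.max(np.abs(fast_exp(xs) / np.exp(xs) - 1.0))
print(rel_err)
```

The truncation error is bounded by the first dropped term, ~r^6/720 ≈ 2.4e-6 at |r| = ln(2)/2; raising the polynomial degree buys accuracy at the cost of the speed the abstract is after.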
  379. Mr Igor Sfiligoi (University of California San Diego)
    15/10/2013, 17:47
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    User analysis in the CMS experiment is performed in a distributed way using both Grid and dedicated resources. In order to insulate the users from the details of the computing fabric, CMS relies on the CRAB (CMS Remote Analysis Builder) package as an abstraction layer. CMS has recently switched from a client-server version of CRAB to a purely client-based solution, with ssh being used to...
    Go to contribution page
  380. Andrew Norman (Fermilab)
    15/10/2013, 17:47
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    The CernVM File System (CVMFS) provides a technology for efficiently distributing code and application files to large and varied collections of computing resources. The CVMFS model and infrastructure have been used to provide a new, scalable solution to the previously difficult task of application and code distribution for grid computing. At Fermilab, a new CVMFS-based application...
    Go to contribution page
  381. Zbigniew Baranowski (CERN)
    15/10/2013, 17:48
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    The Hadoop framework has proven to be an effective and popular approach for dealing with “Big Data” and, thanks to its scaling ability and optimised storage access, Hadoop Distributed File System-based projects such as MapReduce or HBase are seen as candidates to replace traditional relational database management systems whenever scalable speed of data processing is a priority. But do these...
    Go to contribution page
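The MapReduce programming model the abstract refers to can be illustrated in a few lines. This single-process word-count sketch is ours (the function names are not Hadoop's API); at scale, HDFS distributes the map and reduce phases across nodes, while the shuffle groups values by key between them:

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    # map: one input record -> a list of (key, value) pairs
    return [(word, 1) for word in record.split()]

def shuffle(pairs):
    # shuffle: group all values by key (the step the framework scales out)
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reduce: combine each key's values independently, hence in parallel
    return {key: sum(values) for key, values in groups.items()}

records = ["big data", "big storage", "data data"]
pairs = chain.from_iterable(map_phase(r) for r in records)
counts = reduce_phase(shuffle(pairs))
print(counts)
```

A relational database answers the same query declaratively with a `GROUP BY`; the comparison in the abstract is essentially about when this explicitly distributed model outperforms that approach.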
  382. Wouter Verkerke (NIKHEF (NL))
    15/10/2013, 17:50
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    RooFit is a library of C++ classes that facilitates data modeling in the ROOT environment. Mathematical concepts such as variables, (probability density) functions and integrals are represented as C++ objects. The package provides a flexible framework for building complex fit models through classes that mimic math operators. For all constructed models RooFit provides a concise yet powerful...
    Go to contribution page
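    The central idea above, mathematical concepts represented as live objects, can be sketched without ROOT: a variable object is shared between pdfs, so changing its value is immediately seen by every model that references it, and an operator-like class composes pdfs. This is a hedged, dependency-free illustration of the concept only; the class names below are invented and are not the RooFit API.

```python
import math

class Variable:
    """A named value that models can share, like a fit parameter."""
    def __init__(self, name, value):
        self.name, self.value = name, value

class Gaussian:
    """A pdf as an object wired to variable objects: updating a shared
    parameter is immediately reflected in the density."""
    def __init__(self, x, mean, sigma):
        self.x, self.mean, self.sigma = x, mean, sigma
    def density(self):
        z = (self.x.value - self.mean.value) / self.sigma.value
        return math.exp(-0.5 * z * z) / (self.sigma.value * math.sqrt(2 * math.pi))

class Sum:
    """Mimics an addition operator over pdfs: frac*f + (1 - frac)*g."""
    def __init__(self, frac, f, g):
        self.frac, self.f, self.g = frac, f, g
    def density(self):
        return (self.frac.value * self.f.density()
                + (1 - self.frac.value) * self.g.density())
```

    A composite model built from these objects stays consistent as its shared parameters move, which is what makes this style convenient for building complex fit models.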
  383. Brian Paul Bockelman (University of Nebraska (US))
    16/10/2013, 09:00
    Experience with processing large amounts of data is changing data models and data access patterns, both locally and over the wide area. Dr. Brian Bockelman of the University of Nebraska will present the developments in big data for particle physics, looking at data mining, extreme databases, access to data storage, and the impact thereof on data modelling at different...
    Go to contribution page
  384. Dr Edwin Valentijn (Kapteyn Institute, University of Groningen)
    16/10/2013, 09:45
    Large amounts of data now stream daily from large astronomical survey telescopes, such as LOFAR and the new generation of wide-field imagers at ESO's Paranal Observatory, but also from DNA scanners, text scanners, and other instruments. In the future these volumes will only increase with ESA's Euclid all-sky deep imaging survey mission and the SKA. Prof. Dr Edwin A. Valentijn of the Kapteyn Institute will...
    Go to contribution page
  385. Dr Pirjo-Leena Forsström (CSC)
    16/10/2013, 11:00
    Developments in data preservation and data life-cycle management are having a great impact on the computing and storage landscape. In this talk Dr Pirjo-Leena Forsström of CSC (Helsinki) will describe trends and future developments in data services for science, the humanities and culture, the way these developments are being addressed at CSC, and how this could apply to physics data.
    Go to contribution page
  386. Dr Toon Moene (KNMI)
    16/10/2013, 11:45
    Weather forecasting is both one of the most visible and one of the more demanding applications of computing today. The development of forecasting models draws heavily on parallelization and efficient exploitation of many-core systems to get predictions done in near real time. Because the computational domain is very large when using high-resolution models, domain...
    Go to contribution page
  387. David Groep (NIKHEF (NL))
    16/10/2013, 12:30
  388. 16/10/2013, 13:30
    Workshop on Data Preservation in HEP contribution
  389. 16/10/2013, 13:50
    Workshop on Data Preservation in HEP contribution
  390. 16/10/2013, 14:30
    Workshop on Data Preservation in HEP contribution
  391. 16/10/2013, 15:00
    Talks on ongoing projects: LHCb, CDF, PREDON, H2020 perspective
    Go to contribution page
  392. 16/10/2013, 16:00
    Workshop on Data Preservation in HEP contribution
  393. 16/10/2013, 16:30
  394. Cristinel Diaconu (Centre National de la Recherche Scientifique (FR))
    16/10/2013, 17:15
  395. 16/10/2013, 17:30
    Workshop on Data Preservation in HEP contribution
  396. Dr Inder Monga (ESnet)
    17/10/2013, 09:00
    Networking is one of the important factors in getting physics done, and the flows between data sources, data centres and physicists have reached an unprecedented scale. To make the next step, the network itself has to become more flexible and a schedulable resource. In this talk Dr Inder Monga of ESnet will talk about software-defined networking, the protocols and services to describe and...
    Go to contribution page
  397. Harvey Newman (California Institute of Technology (US))
    17/10/2013, 09:30
    Optical networking plays a key role in high-speed data transport, but the technology is developing at a fast pace. These developments are having a direct impact not only on local and wide area data transport, but also on ‘on-line’ systems. Dr. Harvey Newman of Caltech will talk about future trends not only in optical networking, but also look beyond to what advanced networking can enable tomorrow.
    Go to contribution page
  398. Sander Klous (N)
    17/10/2013, 10:00
  399. Werner Wiedenmann (University of Wisconsin (US))
    17/10/2013, 11:00
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    An accurate simulation of the trigger response is necessary for high-quality data analyses, and this poses a challenge. For event generation and simulated-data reconstruction, the latest software is used, to be in best agreement with the reconstructed data. The trigger response simulation, by contrast, needs to reflect the conditions at the time the data were taken. The approach we follow is to use trigger...
    Go to contribution page
  400. Illya Shapoval (CERN, KIPT), Marco Clemencic (CERN)
    17/10/2013, 11:00
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    The computing model of the LHCb experiment implies handling an evolving set of heterogeneous metadata entities and the relationships between them. The entities range from software and database states to architecture specifications and software/data deployment locations. For instance, there is an important relation between the LHCb Conditions Database (CondDB), which provides versioned, time...
    Go to contribution page
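    The versioned, time-dependent lookup that a conditions database provides can be sketched as a store of intervals of validity (IOVs): each tagged payload is valid for a half-open time range, and a query asks which payload covers a given time. A toy illustration under those assumptions (the class and method names are invented, not the CondDB API):

```python
import bisect

class ConditionsDB:
    """Toy versioned conditions store: within a tag, each payload has an
    interval of validity (IOV) [since, until); lookup is by tag and time."""
    def __init__(self):
        self.tags = {}   # tag -> sorted list of (since, until, payload)

    def store(self, tag, since, until, payload):
        self.tags.setdefault(tag, []).append((since, until, payload))
        self.tags[tag].sort()

    def lookup(self, tag, time):
        iovs = self.tags[tag]
        starts = [iov[0] for iov in iovs]
        # Find the last IOV starting at or before `time`, then check
        # that `time` falls inside it.
        i = bisect.bisect_right(starts, time) - 1
        if i >= 0 and time < iovs[i][1]:
            return iovs[i][2]
        raise KeyError(f"no IOV covering t={time} in tag {tag!r}")
```

    Different tags over the same time axis give the versioning: analyses pick a tag, then every time-dependent query is reproducible.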
  401. Mr Igor Sfiligoi (University of California San Diego)
    17/10/2013, 11:00
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    Scientific communities have been at the forefront of adopting new technologies and methodologies in computing. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade, several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using the...
    Go to contribution page
  402. Oliver Gutsche (Fermi National Accelerator Lab. (US))
    17/10/2013, 11:00
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this presentation we will discuss the operational experience from the first run. We will present the workflows...
    Go to contribution page
  403. Andrzej Nowak (CERN)
    17/10/2013, 11:00
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    This paper summarizes five years of CERN openlab’s efforts focused on the Intel Xeon Phi co-processor, from the time of its inception to public release. We consider the architecture of the device vis-à-vis the characteristics of HEP software and identify key opportunities for HEP processing, as well as scaling limitations. We report on improvements and speedups linked to parallelization...
    Go to contribution page
  404. Daniele Trocino (Northeastern University (US))
    17/10/2013, 11:00
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The CMS experiment has been designed with a 2-level trigger system: the Level 1 Trigger, implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a tradeoff between the complexity of the algorithms running on the available computing power, the...
    Go to contribution page
  405. Dave Kelsey (STFC - Science & Technology Facilities Council (GB))
    17/10/2013, 11:00
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The Security for Collaborating Infrastructures (SCI) group (http://www.eugridpma.org/sci/) is a collaborative activity of information security officers from several large-scale distributed computing infrastructures, including EGI, OSG, PRACE, WLCG, and XSEDE. SCI is developing a framework to enable interoperation of collaborating Grids with the aim of managing cross-Grid operational security...
    Go to contribution page
  406. Johannes Albrecht (Technische Universitaet Dortmund (DE))
    17/10/2013, 11:20
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The LHCb experiment is a spectrometer dedicated to the study of heavy flavor at the LHC. The rate of proton-proton collisions at the LHC is 15 MHz, but resource limitations imply that only 5 kHz can be written to storage for offline analysis. For this reason the LHCb data acquisition system -- the trigger -- plays a key role in selecting signal events and rejecting background. In contrast to...
    Go to contribution page
  407. Wolfgang Ehrenfeld (Universitaet Bonn (DE))
    17/10/2013, 11:22
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    In this presentation we will review the ATLAS Monte Carlo production setup including the different production steps involved in full and fast detector simulation. A report on the Monte Carlo production campaigns during Run 1 and Long Shutdown 1 will be presented, including details on various performance aspects. Important improvements in the workflow and software will be...
    Go to contribution page
  408. Danilo Piparo (CERN)
    17/10/2013, 11:22
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    The necessity for truly thread-safe experiment software has recently become very evident, largely driven by the evolution of CPU architectures towards exploiting increasing levels of parallelism. For high-energy physics this represents a real paradigm shift, as concurrent programming was previously limited to special, well-defined domains like control software or software framework...
    Go to contribution page
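    The basic hazard behind the thread-safety concern above is the unprotected read-modify-write: two threads incrementing a shared counter can interleave and lose updates. A minimal sketch of the standard remedy, a lock around the critical section (illustrative only; real frameworks use finer-grained or lock-free techniques):

```python
import threading

class SafeCounter:
    """A lock protects the read-modify-write sequence: without it,
    concurrent increments can interleave and silently lose updates."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self, n=1):
        with self._lock:          # critical section: only one thread at a time
            self.value += n

def run_workers(counter, workers=4, per_worker=10_000):
    """Hammer the counter from several threads concurrently."""
    def work():
        for _ in range(per_worker):
            counter.increment()
    threads = [threading.Thread(target=work) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

    With the lock, the final value is exactly workers × per_worker regardless of how the threads interleave.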
  409. Andrew McNab (University of Manchester (GB))
    17/10/2013, 11:22
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    We present a model for the operation of computing nodes at a site using virtual machines, in which the virtual machines (VMs) are created and contextualised for virtual organisations (VOs) by the site itself. For the VO, these virtual machines appear to be produced spontaneously "in the vacuum" rather than in response to requests by the VO. This model takes advantage of the pilot...
    Go to contribution page
  410. Manuel Giffels (CERN)
    17/10/2013, 11:23
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    The Data Bookkeeping Service 3 (DBS 3) provides an improved event meta data catalog for Monte Carlo and recorded data of the CMS (Compact Muon Solenoid) experiment at the Large Hadron Collider (LHC). It provides the necessary information used for tracking datasets, like data processing history, files and runs associated with a given dataset on a scale of about 10^5 datasets and more than 10^7...
    Go to contribution page
  411. Dr Wenji Wu (Fermi National Accelerator Laboratory)
    17/10/2013, 11:23
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    Fermilab is the US-CMS Tier-1 Centre, as well as the main data centre for several other large-scale research collaborations. As a consequence, there is a continual need to monitor and analyse large-scale data movement between Fermilab and collaboration sites for a variety of purposes, including network capacity planning and performance troubleshooting. To meet this need, Fermilab designed and...
    Go to contribution page
  412. Andrea Giammanco (Universite Catholique de Louvain (BE))
    17/10/2013, 11:25
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    A framework for Fast Simulation of particle interactions in the CMS detector has been developed and implemented in the overall simulation, reconstruction and analysis framework of CMS. It produces data samples in the same format as the one used by the Geant4-based (henceforth Full) Simulation and Reconstruction chain; the output of the Fast Simulation of CMS can therefore be used in the...
    Go to contribution page
  413. Andrew Norman (Fermilab)
    17/10/2013, 11:40
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The NOvA experiment is unique in its stream readout and triggering design. The experiment utilizes a sophisticated software triggering system that is able to select portions of the raw data stream to be extracted for storage, in a manner completely asynchronous to the actual readout of the detector. This asynchronous design permits NOvA to tolerate trigger decision latencies ranging from...
    Go to contribution page
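    The asynchronous design described above can be sketched as a time-ordered buffer: readout keeps appending timestamped data slices and expires the oldest ones, while trigger decisions arrive later, on their own schedule, and extract whatever slices in their time window are still buffered. A toy model only, with invented names, not the NOvA DAQ code:

```python
from collections import deque

class TriggerBuffer:
    """Raw data slices are buffered for a fixed time depth; trigger
    decisions arrive asynchronously and extract any slices still held."""
    def __init__(self, depth):
        self.depth = depth
        self.slices = deque()   # (timestamp, data), oldest first

    def readout(self, timestamp, data):
        self.slices.append((timestamp, data))
        # Expire slices older than the buffer depth.
        while self.slices and self.slices[0][0] <= timestamp - self.depth:
            self.slices.popleft()

    def trigger(self, t0, t1):
        """A (possibly late) decision: keep slices with t0 <= t < t1."""
        return [d for (t, d) in self.slices if t0 <= t < t1]
```

    The tolerable trigger latency is set by the buffer depth: decisions that arrive after their window has been expired simply return nothing.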
  414. Prof. Peter Hobson (Brunel University (GB)), Dr raul lopes (School of Design and Engineering - Brunel University, UK)
    17/10/2013, 11:44
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    Variations of kd-trees represent a fundamental data structure frequently used in geometrical algorithms, computational statistics, and clustering. They have numerous applications, for example in track fitting, in the software of the LHC experiments, and in physics in general. Computer simulations of N-body systems, for example, have seen applications in the study of dynamics of interacting...
    Go to contribution page
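    The data structure in question is compact enough to show in full: points are split recursively on alternating coordinates, and a nearest-neighbour query descends toward the target and backtracks only into branches that could still hold a closer point. A minimal sketch (dictionary nodes for brevity; production code would balance and vectorise):

```python
def build_kdtree(points, depth=0):
    """Recursively split the points on alternating coordinate axes."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid],
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, target, depth=0, best=None):
    """Descend toward the target, then backtrack only into branches
    that could still contain a closer point than the current best."""
    if node is None:
        return best
    dist = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
    if best is None or dist(node["point"]) < dist(best):
        best = node["point"]
    axis = depth % len(target)
    diff = target[axis] - node["point"][axis]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, target, depth + 1, best)
    if diff ** 2 < dist(best):          # splitting plane closer than best?
        best = nearest(far, target, depth + 1, best)
    return best
```

    The pruning test on the splitting plane is what brings the average query cost down from linear scan to logarithmic, which is why the structure appears so often in clustering and track fitting.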
  415. Dr Andrei Tsaregorodtsev (Centre National de la Recherche Scientifique (FR))
    17/10/2013, 11:44
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    DIRAC is a framework for building general-purpose distributed computing systems. It was developed originally for the LHCb HEP experiment at CERN and is now used in several other HEP and astrophysics experiments, as well as by user communities in other scientific domains. There is large interest from smaller user communities in a simple-to-use tool for accessing grid and other...
    Go to contribution page
  416. Chiara Debenedetti (University of Edinburgh (GB))
    17/10/2013, 11:45
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    The huge success of Run 1 of the LHC would not have been possible without detailed detector simulation of the experiments. The outstanding performance of the accelerator with a delivered integrated luminosity of 25 fb-1 has created an unprecedented demand for large simulated event samples. This has stretched the possibilities of the experiments due to the constraint of their computing...
    Go to contribution page
  417. Stefano Bagnasco (Universita e INFN (IT))
    17/10/2013, 11:45
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and...
    Go to contribution page
  418. Elizabeth Gallas (University of Oxford (GB))
    17/10/2013, 11:46
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    The ATLAS Conditions Database, based on the LCG Conditions Database infrastructure, contains a wide variety of information needed in online data taking and offline analysis. The total volume of ATLAS conditions data is in the multi-Terabyte range. Internally, the active data is divided into 65 separate schemas (each with hundreds of underlying tables) according to overall data taking type,...
    Go to contribution page
  419. Mr Phil Demar (Fermilab)
    17/10/2013, 11:46
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    LHC networking has always been defined by high volume data movement requirements in both LAN and WAN. LAN network demands can typically be met fairly easily with high performance data center switches, albeit at high cost. LHC WAN data movement, on the other hand, presents a more complicated and difficult set of challenges. Typically, there are three high-level issues a high traffic volume...
    Go to contribution page
  420. Nicoletta Garelli (SLAC)
    17/10/2013, 12:00
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The ATLAS experiment, aimed at recording the results of LHC proton-proton collisions, is upgrading its Trigger and Data Acquisition (TDAQ) system during the current LHC first long shutdown. The purpose of such upgrade is to add robustness and flexibility to the selection and the conveyance of the physics data, simplify the maintenance of the infrastructure, exploit new technologies and,...
    Go to contribution page
  421. Igor Sfiligoi (University of California San Diego)
    17/10/2013, 12:06
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    The computing landscape is moving at an accelerated pace to many-core computing. Nowadays, it is not unusual to get 32 cores on a single physical node. As a consequence, there is increased pressure in the pilot systems domain to move from purely single-core scheduling and allow multi-core jobs as well. In order to allow for a gradual transition from single-core to multi-core user jobs, it...
    Go to contribution page
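    The scheduling pressure described above, fitting a mix of single-core and multi-core jobs onto nodes with a fixed core count, can be illustrated with a first-fit sketch. This is a deliberately naive model of the problem, not the pilot system's actual algorithm; names and numbers are invented for illustration.

```python
def schedule(jobs, cores_per_node, nodes):
    """First-fit: place each job (name, cores) on the first node with
    enough free cores; return placements, pending jobs, and free cores.
    A mix of 1-core and multi-core jobs shows the fragmentation issue:
    a wide job can be left pending while free cores sit scattered."""
    free = [cores_per_node] * nodes
    placed, pending = {}, []
    for name, cores in jobs:
        for node in range(nodes):
            if free[node] >= cores:
                free[node] -= cores
                placed[name] = node
                break
        else:
            pending.append(name)
    return placed, pending, free
```

    In the test below, a second 8-core job stays pending even though 6 cores are free in total, because they are not free on one node: exactly the kind of trade-off a gradual single-core to multi-core transition has to manage.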
  422. Dr Jose Antonio Coarasa Perez (CERN)
    17/10/2013, 12:06
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    The CMS online cluster consists of more than 3000 computers. It has been used exclusively for the Data Acquisition of the CMS experiment at CERN, archiving around 20 TB of data per day. An OpenStack cloud layer has been deployed on part of the cluster (totalling more than 13,000 cores) as a minimal overlay so as to leave the primary role of the computers untouched while allowing an...
    Go to contribution page
  423. Dr Federico Carminati (CERN)
    17/10/2013, 12:06
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes...
    Go to contribution page
  424. Jerome Fulachier (Centre National de la Recherche Scientifique (FR))
    17/10/2013, 12:09
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    The “ATLAS Metadata Interface” framework (AMI) has been developed in the context of ATLAS, one of the largest scientific collaborations. AMI can be considered to be a mature application, since its basic architecture has been maintained for over 10 years. In this paper we will briefly describe the architecture and the main uses of the framework within the experiment (Tag Collector for...
    Go to contribution page
  425. Dave Kelsey (STFC - Science & Technology Facilities Council (GB))
    17/10/2013, 12:09
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The HEPiX (http://www.hepix.org) IPv6 Working Group has been investigating the many issues which feed into the decision on the timetable for the use of IPv6 networking protocols in HEP Computing, in particular in WLCG. RIPE NCC, the European Regional Internet Registry, ran out of IPv4 addresses in September 2012. The North and South America RIRs are expected to run out in 2014. In recent...
    Go to contribution page
  426. Mike Hildreth (University of Notre Dame (US))
    17/10/2013, 12:10
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    The total amount of Monte Carlo events produced for CMS in 2012 is about 6.5 billion. In the future run at 14 TeV larger datasets, higher particle multiplicity and higher pileup are expected. This is a new challenge for the CMS software. In particular, increasing the speed of Monte Carlo production by a significant factor without compromising the physics performance is a highly-desirable goal....
    Go to contribution page
  427. Dr Gongxing Sun (INSTITUE OF HIGH ENERGY PHYSICS)
    17/10/2013, 13:30
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    This paper brings the idea of MapReduce parallel processing to BESIII physics analysis and presents a new data analysis system structure based on the Hadoop framework. It optimises data processing by establishing an event-level metadata (TAG) database and performing event pre-selection based on TAGs, significantly reducing the number of events that need further analysis by 2-3 classes, which...
    Go to contribution page
  428. Markus Frank (CERN)
    17/10/2013, 13:30
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    The geometry, and in general, the detector description is an essential component for the development of the data processing applications in high-energy physics experiments. We will present a generic detector description toolkit, describing the guiding requirements and the architectural design for the main components of the toolkit, as well as the main implementation choices. The design is...
    Go to contribution page
  429. Paul James Laycock (University of Liverpool (GB))
    17/10/2013, 13:30
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    While a significant fraction of ATLAS physicists directly analyse the AOD (Analysis Object Data) produced at the CERN Tier 0, a much larger fraction have opted to analyse data in a flat ROOT format. The large scale production of this Derived Physics Data (DPD) format must cater for both detailed performance studies of the ATLAS detector and object reconstruction, as well as higher level and...
    Go to contribution page
  430. Mr Jose Benito Gonzalez Lopez (CERN)
    17/10/2013, 13:30
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    Indico has evolved into the main event organization software, room booking tool and collaboration hub for CERN. The growth in its usage has only accelerated during the past 9 years, and today Indico holds more than 215,000 events and 1,100,000 files. The growth has also been substantial in terms of functionality and improvements. In the last year alone, Indico has matured considerably in 3 key...
    Go to contribution page
  431. Brian Paul Bockelman (University of Nebraska (US))
    17/10/2013, 13:30
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    To efficiently read data over high-latency connections, ROOT-based applications must pay careful attention to user-level usage patterns and the configuration of the I/O layer. Starting in 2010, CMSSW began using and improving several ROOT "best practice" techniques such as enabling the TTreeCache object and avoiding reading events out-of-order. Since then, CMS has been deploying additional...
    Go to contribution page
  432. Mario Lassnig (CERN)
    17/10/2013, 13:30
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    Rucio is the next-generation data management system supporting ATLAS physics workflows in the coming decade. The software engineering process to develop Rucio is fundamentally different to existing software development approaches in the ATLAS distributed computing community. Based on a conceptual design document, development takes place using peer-reviewed code in a test-driven environment,...
    Go to contribution page
  433. Andre Georg Holzner (Univ. of California San Diego (US))
    17/10/2013, 13:30
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The Data Acquisition system of the Compact Muon Solenoid experiment at CERN assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s. By the time the LHC restarts after the 2013/14 shut-down, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime. In order to handle higher LHC...
    Go to contribution page
  434. Pierrick Hanlet (Illinois Institute of Technology)
    17/10/2013, 13:50
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    The Muon Ionization Cooling Experiment (MICE) is a demonstration experiment to prove the feasibility of cooling a beam of muons for use in a Neutrino Factory and/or Muon Collider. The MICE cooling channel is a section of a modified Study II cooling channel in which we will measure a 10% reduction in beam emittance. In order to ensure a reliable measurement, MICE will measure the beam...
    Go to contribution page
  435. Dr Edward Karavakis (CERN)
    17/10/2013, 13:52
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    The Worldwide LHC Computing Grid (WLCG) today includes more than 170 computing centres where more than 2 million jobs are executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments over such a huge heterogeneous infrastructure is extremely demanding in terms of computation, performance and reliability. Furthermore,...
    Go to contribution page
  436. Federica Legger (Ludwig-Maximilians-Univ. Muenchen (DE))
    17/10/2013, 13:52
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. ATLAS currently stores over...
    Go to contribution page
  437. Rocco Mandrysch (University of Iowa (US))
    17/10/2013, 13:53
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    In a complex multi-developer, multi-package software environment, such as the ATLAS offline Athena framework, tracking the performance of the code can be a non-trivial task in itself. In this paper we describe improvements in the instrumentation of ATLAS offline software that have given considerable insight into the performance of the code and helped to guide optimisation. Code can be...
    Go to contribution page
  438. Thomas Baron (CERN)
    17/10/2013, 13:53
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    In the last few years, we have witnessed an explosion of visual collaboration initiatives in the industry. Several advances in video services and also in their underlying infrastructure are currently improving the way people collaborate globally. These advances are creating new usage paradigms: any device in any network can be used to collaborate, in most cases with an overall high quality....
    Go to contribution page
  439. Dr Johannes Ebke (TNG Technology Consulting)
    17/10/2013, 13:53
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    In comparison to storing data packed by event, column data stores store event variables or sets of event variables in individual data packs. One well-known example is the CERN ROOT library's TTree, which has a mode where it behaves like a column store. Columnar data stores can offer fast processing of a subset of the event structure or individual variables. In the experimental Drillbit...
    Go to contribution page
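    The row-versus-column contrast drawn above is easy to make concrete: transposing event records into one array per variable means a selection on a single variable never touches the bytes of the others, which is the core I/O saving of a columnar layout. A minimal sketch (illustrative helper names, not the Drillbit or TTree API):

```python
def to_columns(events):
    """Transpose row-oriented event records into one list per variable."""
    columns = {}
    for event in events:
        for name, value in event.items():
            columns.setdefault(name, []).append(value)
    return columns

def select(columns, variable, predicate):
    """Scan a single column and return the indices of passing events.
    The other variables are never touched: that is the columnar win."""
    return [i for i, v in enumerate(columns[variable]) if predicate(v)]
```

    With events packed by row, the same cut would have had to read (or at least skip over) every variable of every event.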
  440. Prof. Adele Rimoldi (Universita e INFN (IT)), Dr Pierluigi Piersimoni (Universita de Pavia and INFN)
    17/10/2013, 13:55
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    The Italian National Centre of Hadrontherapy for Cancer Treatment (CNAO – Centro Nazionale di Adroterapia Oncologica) in Pavia, Italy, started the treatment of selected cancers with the first patients in late 2011. In the coming months CNAO plans to activate a new dedicated treatment line for the irradiation of uveal melanomas using the available active beam scanning. The beam...
    Go to contribution page
  441. Ms Silvia Amerio (University of Padova & INFN)
    17/10/2013, 14:10
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    One of the most important issues facing particle physics experiments at hadron colliders is real-time selection of interesting events for offline storage. Collision frequencies do not allow all events to be written to tape for offline analysis, and in most cases, only a small fraction can be saved. Typical trigger systems use commercial computers in the final stage of processing. Much of the...
    Go to contribution page
  442. Mr Stefano Alberto Russo (Universita degli Studi di Udine (IT))
    17/10/2013, 14:14
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    Hadoop/MapReduce is a very common and widely supported distributed computing framework. It consists of a scalable programming model named MapReduce and a locality-aware distributed file system (HDFS). Its main feature is data locality: through the fusion of computing and storage resources, and thanks to the locality-awareness of HDFS, computation can be scheduled on the nodes...
    Go to contribution page
  443. Marco Mascheroni (Universita & INFN, Milano-Bicocca (IT))
    17/10/2013, 14:14
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    ATLAS, CERN-IT, and CMS embarked on a project to develop a common system for submitting analysis jobs to the distributed computing infrastructure based on elements of PANDA. After an extensive feasibility study and development of a proof-of-concept prototype, the project has a basic infrastructure that can be used to support the analysis use case of both experiments with common services. In...
    Go to contribution page
  444. Dr Adam Lyon (Fermilab)
    17/10/2013, 14:15
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    Flexibility in producing simulations is a highly desirable but difficult-to-attain feature. A simulation program may be written for a particular purpose, such as studying a detector or aspect of an experimental apparatus, but adapting that program to answer different questions about that detector or apparatus under different situations may require recoding or a separate fork of the program....
    Go to contribution page
  445. Dr Maria Grazia Pia (Universita e INFN (IT))
    17/10/2013, 14:16
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The broad coverage of the search for the Higgs boson in the mainstream media is a relative novelty for HEP research, whose achievements have traditionally been limited to scholarly literature. This presentation illustrates the results of a scientometric analysis of HEP computing in scientific literature, institutional media and the press, and a comparative overview of similar metrics...
  446. Elizabeth Gallas (University of Oxford (GB))
    17/10/2013, 14:16
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    ATLAS maintains a rich corpus of event-by-event information that provides a global view of virtually all of the billions of events the collaboration has seen or simulated, along with sufficient auxiliary information to navigate to and retrieve data for any event at any production processing stage.  This unique resource has been employed for a range of purposes, from monitoring, statistics,...
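    The core idea of such an event index — a catalogue mapping event identifiers to pointers into the files of each processing stage — can be sketched as follows. All names and the (run, event, stage) layout here are illustrative assumptions, not the actual ATLAS schema or filenames.

    ```python
    # Hypothetical sketch of an event-level index: map (run, event)
    # identifiers to (filename, offset) pointers per production stage,
    # so any single event can be located and retrieved directly.

    index = {}

    def register(run, event, stage, filename, offset):
        # Record where this event lives at a given processing stage.
        index.setdefault((run, event), {})[stage] = (filename, offset)

    def locate(run, event, stage):
        # Navigate to the file and offset holding this event, or None.
        return index.get((run, event), {}).get(stage)

    register(202668, 12345, "RAW", "data12.RAW._0001.root", 904)
    register(202668, 12345, "AOD", "data12.AOD._0042.root", 17)
    print(locate(202668, 12345, "AOD"))  # → ('data12.AOD._0042.root', 17)
    ```

    A production-scale index replaces the in-memory dictionary with a scalable store, but the lookup pattern — event identifier in, file pointer out — is the same.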
  447. Mr Giulio Eulisse (Fermi National Accelerator Lab. (US))
    17/10/2013, 14:16
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    CMS Offline Software, CMSSW, is an extremely large software project, with roughly 3 million lines of code, about two hundred active developers, and two to three active development branches. Given the scale of the problem, from both a technical and a human point of view, keeping such a large project on track and bug free, and delivering builds for different architectures, is a challenge in...
  448. Mr Wataru Takase (High Energy Accelerator Research Organization (KEK), Japan)
    17/10/2013, 14:36
    Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models
    Oral presentation to parallel session
    In this paper we report on the setup, deployment and operation of a low-maintenance, policy-driven distributed data management system for scientific data based on the integrated Rule Oriented Data System (iRODS). The system is located at KEK, Tsukuba, Japan with a satellite system at QMUL, London, UK. The system has been running stably in production for more than two years with minimal...
  449. Hannes Sakulin (CERN)
    17/10/2013, 14:38
    Data acquisition, trigger and controls
    Oral presentation to parallel session
    We present the automation mechanisms that were added to the Data Acquisition and Run Control systems of the Compact Muon Solenoid (CMS) experiment during Run 1 of the LHC, ranging from the automation of routine tasks to automatic error recovery and context-sensitive guidance for the operator. These mechanisms helped CMS to maintain a data-taking efficiency above 90% and even to improve it...
  450. Mr Dennis Van Dok (Nikhef (NL))
    17/10/2013, 14:38
    Software Engineering, Parallelism & Multi-Core
    Oral presentation to parallel session
    The LCMAPS family of grid middleware has improved in recent years by moving from a custom build system to open-source community standards for building, packaging and distribution. This contribution outlines the improvements made and the benefits they delivered. LCMAPS, gLExec and related middleware components were developed under a series of European framework programme projects,...
  451. Giacinto Donvito (Universita e INFN (IT))
    17/10/2013, 14:38
    Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
    Oral presentation to parallel session
    In this work we present the testing activities carried out to verify whether the SLURM batch system could serve as the production batch system of a typical Tier-1/Tier-2 HEP computing centre. SLURM (Simple Linux Utility for Resource Management) is an open-source batch system developed mainly by Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe...
  452. Gancho Dimitrov (CERN)
    17/10/2013, 14:39
    Data Stores, Data Bases, and Storage Systems
    Oral presentation to parallel session
    The ATLAS Distributed Computing (ADC) project delivers production tools and services for ATLAS offline activities such as data placement and data processing on the Grid. The system has sustained the needed computing activities with high efficiency during the first run of LHC data taking, and has demonstrated flexibility in reacting promptly to new challenges. Databases are a...
  453. Dr Dirk Hoffmann (Centre de Physique des Particules de Marseille, CNRS/IN2P3)
    17/10/2013, 14:39
    Facilities, Production Infrastructures, Networking and Collaborative Tools
    Oral presentation to parallel session
    The CTA (Cherenkov Telescope Array) consortium is developing a next-generation ground-based instrument for very high energy gamma-ray astronomy, made up of approximately 100 telescopes of at least three different sizes. It presently counts more than 1000 members, of whom almost 800 have a computer account to use the "CTA web services". CTA decided in 2011 to use a SharePoint 2010 "site...
  454. Dr Gabriele Cosmo (CERN)
    17/10/2013, 14:40
    Event Processing, Simulation and Analysis
    Oral presentation to parallel session
    The Geant4 simulation toolkit reached maturity in the middle of the previous decade, providing a wide variety of established features coherently aggregated in a software product that has become the standard for detector simulation in HEP and is used in a variety of other application domains. We review the most recent capabilities introduced in the kernel, highlighting those which are...
  455. 17/10/2013, 15:00
  456. Niko Neufeld (CERN)
    17/10/2013, 15:45
  457. Florian Uhlig (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
    17/10/2013, 16:07
  458. Wahid Bhimji (University of Edinburgh (GB))
    17/10/2013, 16:29
  459. Oxana Smirnova (Lund University (SE))
    17/10/2013, 16:55
    There is an emerging trend in computing for HEP, namely, that it spills outside the traditional laboratory boundaries and benefits from becoming less HEP-specific. As all forms of research are becoming ICT-dependent, what is the high energy and nuclear physics community doing to encourage mainstream software? Or are we doing exactly the opposite? Dr. Oxana Smirnova of Lund University will chair a...
  460. David Groep (NIKHEF (NL))
    17/10/2013, 17:55
  461. 18/10/2013, 09:00
  462. Dr Stefan Roiser (CERN)
    18/10/2013, 09:50
  463. Nurcan Ozturk (University of Texas at Arlington (US))
    18/10/2013, 10:10
  464. Dr Solveig Albrand (Centre National de la Recherche Scientifique (FR))
    18/10/2013, 11:00
  465. Dr Helge Meinhard (CERN)
    18/10/2013, 11:22
  466. Hiroshi Sakamoto (University of Tokyo (JP)), Tomoaki Nakamura (University of Tokyo (JP))
    18/10/2013, 11:45
  467. David Groep (NIKHEF (NL))
    18/10/2013, 12:00
  468. Dr Richard Philip Mount (SLAC National Accelerator Laboratory (US))
    Data Stores, Data Bases, and Storage Systems
    Poster presentation
    User data analysis in high energy physics presents a challenge to spinning-disk-based storage systems. The analysis is data-intensive, yet reads are small and sparse and cover a large volume of data files. It is also unpredictable, since users respond to storage performance. We describe here a system with an array of Solid State Disks as a non-conventional, standalone file-level cache in front of...
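    The file-level caching idea described above can be sketched with a simple least-recently-used (LRU) policy. This is a hypothetical illustration, not the actual SLAC implementation: hot files are served from the fast tier, and the least-recently-used file is evicted when capacity is exceeded.

    ```python
    from collections import OrderedDict

    class FileLevelCache:
        """Hypothetical file-level LRU cache in front of slower storage."""

        def __init__(self, capacity_bytes, backend_read):
            self.capacity = capacity_bytes
            self.used = 0
            self.backend_read = backend_read  # reads a file from the slow tier
            self.cache = OrderedDict()        # path -> file contents (fast tier)

        def read(self, path):
            if path in self.cache:
                self.cache.move_to_end(path)  # hit: mark most recently used
                return self.cache[path]
            data = self.backend_read(path)    # miss: fetch from spinning disk
            self.cache[path] = data
            self.used += len(data)
            while self.used > self.capacity:  # evict least-recently-used files
                _, old = self.cache.popitem(last=False)
                self.used -= len(old)
            return data

    # Usage with a stand-in backend (illustrative paths and sizes):
    backend = {"/data/a": b"x" * 60, "/data/b": b"y" * 60}
    cache = FileLevelCache(100, backend.__getitem__)
    cache.read("/data/a")
    cache.read("/data/b")            # 60 + 60 > 100: evicts /data/a
    print("/data/a" in cache.cache)  # → False
    ```

    Caching whole files rather than blocks suits the sparse, small-read access pattern described in the abstract, since each cached file absorbs many subsequent sparse reads.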