Contribution List

535 out of 535 displayed
  1. Dr Ken Peach (OIST)
    13/04/2015, 09:00
  2. Dr Masanori Yamauchi (KEK)
    13/04/2015, 09:15
  3. Dr Takanori Hara (KEK)
    13/04/2015, 10:00
  4. Graeme Stewart (University of Glasgow (GB))
    13/04/2015, 11:00
  5. 13/04/2015, 11:45
  6. 13/04/2015, 12:15
  7. Christian Nieke (Brunswick Technical University (DE))
    13/04/2015, 14:00
    Track6: Facilities, Infrastructure, Network
    oral presentation
    Optimising a computing infrastructure on the scale of LHC requires a quantitative understanding of a complex network of many different resources and services. For this purpose the CERN IT department and the LHC experiments are collecting a large multitude of logs and performance probes, which are already successfully used for short-term analysis (e.g. operational dashboards) within each...
  8. Andrea Formica (CEA/IRFU,Centre d'etude de Saclay Gif-sur-Yvette (FR))
    13/04/2015, 14:00
    Track3: Data store and access
    oral presentation
    The ATLAS and CMS Conditions Database infrastructures have served each of the respective experiments well through LHC Run 1, providing efficient access to a wide variety of conditions information needed in online data taking and offline processing and analysis. During the long shutdown between Run 1 and Run 2, we have taken various measures to improve our systems for Run 2. In some cases, a...
  9. Stephen Gowdy (Fermi National Accelerator Lab. (US))
    13/04/2015, 14:00
    Track5: Computing activities and Computing models
    oral presentation
    The global distributed computing system (WLCG) used by the Large Hadron Collider (LHC) is evolving. The treatment of wide-area networking (WAN) as a scarce resource that needs to be strictly managed is far less necessary than originally foreseen. Static data placement and replication, intended to limit interdependencies among computing centers, is giving way to global data federations...
  10. Andreas Salzburger (CERN)
    13/04/2015, 14:00
    Track2: Offline software
    oral presentation
    Track reconstruction is one of the most complex elements of the reconstruction of events recorded by ATLAS from collisions delivered by the LHC. It is the most time consuming reconstruction component in high luminosity environments. After a hugely successful Run-1, the flat budget projections for computing resources for Run-2 of the LHC together with the demands of reconstructing higher...
  11. Federico Stagni (CERN)
    13/04/2015, 14:00
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    In the last few years, new types of computing infrastructures, such as IAAS (Infrastructure as a Service) and IAAC (Infrastructure as a Client), have gained popularity. New resources may come as part of pledged resources, while others are opportunistic. Most of these new infrastructures are based on virtualization techniques, while others are not. Meanwhile, some concepts, such as...
  12. Frederic Bruno Magniette (Ecole Polytechnique (FR))
    13/04/2015, 14:00
    Track1: Online computing
    oral presentation
    High-energy physics experiments produce huge amounts of data that need to be processed and stored for further analysis and eventually treated in real time for triggering and monitoring purposes. In addition, more and more often these requirements are also being found in other fields such as on-line video processing, proteomics and astronomical facilities. The complexity of such experiments...
  13. Zbigniew Baranowski (CERN)
    13/04/2015, 14:15
    Track3: Data store and access
    oral presentation
    During LHC Run 1, the ATLAS and LHCb databases used Oracle Streams replication technology for their use cases of data movement between online and offline Oracle databases. Moreover, ATLAS used Streams to replicate conditions data from CERN to selected Tier 1s. GoldenGate is a new technology introduced by Oracle to replace and improve on Streams, by providing better performance,...
  14. Ian Fisk (Fermi National Accelerator Lab. (US))
    13/04/2015, 14:15
    Track5: Computing activities and Computing models
    oral presentation
    Beginning in 2015, CMS will collect and produce data and simulation adding up to 10B new events a year. In order to realize the physics potential of the experiment, these events need to be stored, processed, and delivered to analysis users on a global scale. CMS has 150k processor cores and 80PB of disk storage, and there is constant pressure to reduce the resources needed and increase the...
  15. Pedro Andrade (CERN)
    13/04/2015, 14:15
    Track6: Facilities, Infrastructure, Network
    oral presentation
    Over the past two years, the operation of the CERN Data Centres went through significant changes with the introduction of new mechanisms for hardware procurement, new services for cloud infrastructure and configuration management, among other improvements. These changes resulted in an increase of resources being operated in a more dynamic environment. Today, the CERN Data Centres provide over...
  16. David Lange (Lawrence Livermore Nat. Laboratory (US))
    13/04/2015, 14:15
    Track2: Offline software
    oral presentation
    Over the past several years, the CMS experiment has made significant changes to its detector simulation and reconstruction applications motivated by the planned program of detector upgrades over the next decade. These upgrades include both completely new tracker and calorimetry systems and changes to essentially all major detector components to meet the requirements of very high pileup...
  17. Dr Paolo Branchini (INFN Roma Tre)
    13/04/2015, 14:15
    Track1: Online computing
    oral presentation
    The Data Acquisition System (DAQ) and the Front-End electronics for an array of Kinetic Inductance Detectors (KIDs) are described. KIDs are superconductive detectors, in which electrons are organized in Cooper pairs. Any incident radiation can break such pairs, generating quasi-particles, whose effect is to increase the inductance of the detector. Electrically, any KID is equivalent to a...
  18. Tadashi Maeno (Brookhaven National Laboratory (US))
    13/04/2015, 14:15
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyze the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed...
  19. Dr Simone Campana (CERN)
    13/04/2015, 14:30
    Track5: Computing activities and Computing models
    oral presentation
    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run2. An increased data rate and computing demands of the Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming...
  20. Birgit Lewendel (Deutsches Elektronen-Synchrotron (DE))
    13/04/2015, 14:30
    Track6: Facilities, Infrastructure, Network
    oral presentation
    DESY operates a multi-VO Grid site for 20 HEP and non-HEP collaborations and is one of the world-wide largest Tier-2 sites for ATLAS, CMS, LHCb, and BELLE2. In one common Grid infrastructure, computing resources are shared by all VOs according to MoUs and agreements; applying an opportunistic usage model allows free resources to be distributed among the VOs. Currently, the Grid site...
  21. Marco Rovere (CERN)
    13/04/2015, 14:30
    Track2: Offline software
    oral presentation
    The CMS tracking code is organized in several levels, known as 'iterative steps', each optimized to reconstruct a class of particle trajectories, such as those of particles originating from the primary vertex or displaced tracks from particles resulting from secondary vertices. Each iterative step consists of seeding, pattern recognition and fitting by a Kalman filter, and a final filtering and...
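    The Kalman-filter fit used in the iterative tracking described above can be illustrated by its core update step. The snippet below is a minimal one-dimensional sketch with invented numbers, not the CMS code:

    ```python
    # Minimal 1-D Kalman filter update, the heart of the track-fitting stage.
    # Illustrative sketch only; the state, variances and hit are invented.

    def kalman_update(x, P, z, R):
        """Combine predicted state x (variance P) with measurement z (variance R)."""
        K = P / (P + R)          # Kalman gain: how much to trust the measurement
        x_new = x + K * (z - x)  # updated state estimate
        P_new = (1.0 - K) * P    # uncertainty shrinks after the update
        return x_new, P_new

    # Fuse a prediction x=1.0 (variance 4.0) with a hit at z=2.0 (variance 1.0):
    x, P = kalman_update(1.0, 4.0, 2.0, 1.0)
    # gain K = 0.8, so the estimate moves to x = 1.8 with variance P = 0.8
    ```

    In a real tracker the state is a multi-dimensional track-parameter vector and this update is applied hit by hit along the trajectory during each iterative step.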
  22. Dr Antonio Perez-Calero Yzquierdo (Centro de Investigaciones Energ. Medioambientales y Tecn. (ES))
    13/04/2015, 14:30
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    The successful exploitation of the multicore processor architectures available at the computing sites is a key element of the LHC distributed computing system in the coming era of the LHC Run 2. High-pileup complex-collision events represent a challenge for the traditional sequential programming in terms of memory and processing time budget. The CMS data production and processing framework has...
  23. Roland Sipos (Eotvos Lorand University (HU))
    13/04/2015, 14:30
    Track3: Data store and access
    oral presentation
    With the restart of the LHC in 2015, the growth of the CMS Conditions dataset will continue; the need for consistent and highly available access to the Conditions is therefore a strong reason to revisit different aspects of the current data storage solutions. We present a study of alternative data storage backends for the Conditions Databases, evaluating some of the most popular NoSQL...
  24. Julian Glatzer (CERN)
    13/04/2015, 14:30
    Track1: Online computing
    oral presentation
    The ATLAS Level-1 Central Trigger (L1CT) system is a central part of ATLAS data-taking and is configured, controlled, and monitored by a software framework with emphasis on reliability and flexibility. The hardware has undergone a major upgrade for Run 2 of the LHC, in order to cope with the expected increase of instantaneous luminosity of a factor of 2 with respect to Run 1. It offers...
  25. Nathalie Rauschmayr (CERN)
    13/04/2015, 14:45
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    The main goal of a Workload Management System (WMS) is to find and allocate resources for the jobs it is handling. The more accurate the information the WMS receives about the jobs, the easier its task becomes, which directly translates into better utilization of resources. Traditionally, the information associated with each job, like expected runtime or memory...
  26. Federico Stagni (CERN)
    13/04/2015, 14:45
    Track3: Data store and access
    oral presentation
    Nowadays, many database systems are available, but they may not be optimized for storing time series data. The DIRAC job monitoring is a typical use case of such time series. So far it has been done using a MySQL database, which is not well suited for such an application. Therefore alternatives have been investigated. Choosing an appropriate database for storing huge amounts of time series is not...
  27. Olof Barring (CERN)
    13/04/2015, 14:45
    Track6: Facilities, Infrastructure, Network
    oral presentation
    The Open Compute Project, OCP ( http://www.opencompute.org/ ), was launched by Facebook in 2011 with the objective of building efficient computing infrastructures at the lowest possible cost. The technologies are released as open hardware, with the goal to develop servers and data centers following the model traditionally associated with open source software projects. In 2013...
  28. Dr Andrea Sciaba (CERN)
    13/04/2015, 14:45
    Track5: Computing activities and Computing models
    oral presentation
    The Worldwide LHC Computing Grid project (WLCG) provides the computing and storage resources required by the LHC collaborations to store, process and analyse the ~50 Petabytes of data annually generated by the LHC. The WLCG operations are coordinated by a distributed team of managers and experts and performed by people at all participating sites and from all the experiments. Several...
  29. Barbara Storaci (Universitaet Zuerich (CH))
    13/04/2015, 14:45
    Track2: Offline software
    oral presentation
    The LHCb track reconstruction uses sophisticated pattern recognition algorithms to reconstruct trajectories of charged particles. Their main feature is the use of a Hough-transform-like approach to connect track segments from different subdetectors, which allows LHCb to have no tracking stations in the magnet. While yielding a high efficiency, the track reconstruction is a major contributor...
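    The Hough-transform-like matching mentioned in the LHCb abstract above amounts to letting hits vote in a binned parameter space and keeping the most-voted bin. A toy illustration, assuming straight-line tracks through the origin (not the actual LHCb algorithm):

    ```python
    # Toy Hough-style vote accumulation: each hit votes for the slope of the
    # straight line that would connect it to the origin; the bin with the most
    # votes identifies the track candidate. Hit coordinates are invented.
    from collections import Counter

    def hough_slope(hits, bin_width=0.1):
        votes = Counter()
        for x, y in hits:
            slope = y / x                       # line through origin: y = slope * x
            votes[round(slope / bin_width)] += 1  # vote in a binned parameter space
        best_bin, _ = votes.most_common(1)[0]   # most-voted bin wins
        return best_bin * bin_width

    hits = [(1.0, 0.5), (2.0, 1.0), (3.0, 1.55), (2.0, 1.9)]  # three hits near y = 0.5x
    # hough_slope(hits) -> 0.5
    ```

    The same voting idea generalizes to connecting segments from different subdetectors: compatible segments accumulate in the same parameter-space bin, while random combinations scatter.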
  30. Eduard Ebron Simioni (Johannes-Gutenberg-Universitaet Mainz (DE))
    13/04/2015, 14:45
    Track1: Online computing
    oral presentation
    The Large Hadron Collider (LHC) in 2015 will collide proton beams with increased luminosity from $10^{34}$ up to $3 \times 10^{34}$ cm$^{-2}$ s$^{-1}$. ATLAS is an LHC experiment designed to measure decay properties of highly energetic particles produced in these proton collisions. The high luminosity places stringent physical and operational requirements on the ATLAS Trigger in order to...
  31. Dr Peter Elmer (Princeton University (US))
    13/04/2015, 15:00
    Track6: Facilities, Infrastructure, Network
    oral presentation
    Deploying the Worldwide LHC Computing Grid (WLCG) was greatly facilitated by the convergence, around the year 2000, on Linux and commodity x86 processors as a standard scientific computing platform. This homogeneity enabled a relatively simple "build once, run anywhere" model for applications. A number of factors are now driving interest in alternative platforms. Power limitations at the...
  32. Jiri Chudoba (Acad. of Sciences of the Czech Rep. (CZ))
    13/04/2015, 15:00
    Track5: Computing activities and Computing models
    oral presentation
    The Pierre Auger Observatory operates the largest system of detectors for ultra-high energy cosmic ray measurements. Comparison of theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been successfully used. The first and second versions of the production system based on bash...
  33. Dominick Rocco (urn:Google)
    13/04/2015, 15:00
    Track2: Offline software
    oral presentation
    The NOvA experiment is a long baseline neutrino oscillation experiment utilizing the NuMI beam generated at Fermilab. The experiment will measure the oscillations within a muon neutrino beam in a 300 ton Near Detector located underground at Fermilab and a functionally-identical 14 kiloton Far Detector placed 810 km away. The detectors are liquid scintillator tracking calorimeters with a...
  34. Helio Takai (Brookhaven National Laboratory (US))
    13/04/2015, 15:00
    Track1: Online computing
    oral presentation
    The global feature extractor (gFEX) is a component of the Level-1 Calorimeter trigger Phase-I upgrade for the ATLAS experiment. It is intended to identify patterns of energy associated with the hadronic decays of high-momentum Higgs, W, and Z bosons, top quarks, and exotic particles in real time at the LHC crossing rate. The single processor board will be implemented as a fast reconfigurable...
  35. Ms Marina Golosova (National Research Centre "Kurchatov Institute")
    13/04/2015, 15:00
    Track3: Data store and access
    oral presentation
    In recent years the concept of Big Data has become well established in IT. Most systems (for example Distributed Data Management or Workload Management systems) produce metadata that describes actions performed on jobs, stored data or other entities, and its volume often reaches Big Data scales. This metadata can be used to obtain information about the current...
  36. James Letts (Univ. of California San Diego (US))
    13/04/2015, 15:00
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    CMS will require access to more than 125k processor cores for the beginning of Run2 in 2015 to carry out its ambitious physics program with more events of higher complexity. During Run1 these resources were predominantly provided by a mix of grid sites and local batch resources. During the long shutdown, cloud infrastructures, diverse opportunistic resources and HPC supercomputing centers...
  37. Wesley Gohn (U)
    13/04/2015, 15:15
    Track1: Online computing
    oral presentation
    A new measurement of the anomalous magnetic moment of the muon, $a_{\mu} \equiv (g-2)/2$, will be performed at the Fermi National Accelerator Laboratory. The most recent measurement, performed at Brookhaven National Laboratory and completed in 2001, shows a 3.3-3.6 standard deviation discrepancy with the standard model value of $g$-$2$. The new measurement will accumulate 20 times those...
  38. Michael Boehler (Albert-Ludwigs-Universitaet Freiburg (DE))
    13/04/2015, 15:15
    Track3: Data store and access
    oral presentation
    The ATLAS detector consists of several sub-detector systems. Both data taking and Monte Carlo (MC) simulation rely on an accurate description of the detector conditions from every subsystem, such as calibration constants, different scenarios of pile-up and noise conditions, size and position of the beam spot, etc. In order to guarantee database availability for critical online applications...
  39. Shawn Mc Kee (University of Michigan (US))
    13/04/2015, 15:15
    Track6: Facilities, Infrastructure, Network
    oral presentation
    The Worldwide LHC Computing Grid relies on the network as a critical part of its infrastructure and therefore needs to guarantee effective network usage and prompt detection and resolution of any network issues, including connection failures, congestion, traffic routing, etc. The WLCG Network and Transfer Metrics project aims to integrate and combine all network-related monitoring data...
  40. Giuseppe Cerati (Univ. of California San Diego (US))
    13/04/2015, 15:15
    Track2: Offline software
    oral presentation
    Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within the power density limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of...
  41. Alec Habig (Univ. of Minnesota Duluth)
    13/04/2015, 15:15
    Track5: Computing activities and Computing models
    oral presentation
    The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study nu-e appearance in a nu-mu beam. Over the last few years there has been intense work to streamline the computing infrastructure in preparation for data, which started to flow in from the far detector in Fall 2013. Major accomplishments for this effort include migration to the use of offsite resources...
  42. Vincent Garonne (CERN)
    13/04/2015, 15:15
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    For more than 8 years, the Distributed Data Management (DDM) system of ATLAS called DQ2 has been able to demonstrate very large scale data management capabilities with more than 600M files, 160 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, the system does not scale for LHC run2 and a new DDM system called Rucio has been developed to be DQ2's...
  43. Bruno Heinrich Hoeft (KIT - Karlsruhe Institute of Technology (DE))
    13/04/2015, 15:30
    Track6: Facilities, Infrastructure, Network
    oral presentation
    The Steinbuch Center for Computing (SCC) at Karlsruhe Institute of Technology (KIT) was involved quite early in 100G network technology. Already in 2010, a first 100G wide area network testbed over a distance of approx. 450 km was deployed between the national research organizations KIT and FZ-Jülich, initiated by DFN (the German NREN). Only three years later, in 2013, KIT joined the Caltech SC13...
  44. Oliver Frost (DESY)
    13/04/2015, 15:30
    Track2: Offline software
    oral presentation
    With the upgraded electron-positron-collider facility, SuperKEKB and Belle II, the Japanese high energy research center KEK strives to exceed its own world record luminosity by a factor of 40. To provide a solid base for the event reconstruction within the central drift chamber in the enhanced luminosity setup, a powerful track finding algorithm coping with the higher beam induced backgrounds...
  45. Dr Baosong Shan (Beihang University (CN))
    13/04/2015, 15:30
    Track5: Computing activities and Computing models
    oral presentation
    The Alpha Magnetic Spectrometer (AMS) is a high energy physics experiment installed and operating on board the International Space Station (ISS) since May 2011 and expected to last through 2024 and beyond. The computing strategy of the AMS experiment is discussed in the paper, including software design, data processing and modelling details, simulation of the detector performance and...
  46. Ludovico Bianchi (Forschungszentrum Jülich)
    13/04/2015, 15:30
    Track1: Online computing
    oral presentation
    The PANDA experiment is a next generation particle detector planned for operation at the FAIR facility, currently under construction in Darmstadt, Germany. PANDA will detect events generated by colliding an antiproton beam on a fixed proton target, allowing studies in hadron spectroscopy, hypernuclei production, open charm and nucleon structure. The nature of hadronic collisions means that...
  47. Martin Barisits (CERN)
    13/04/2015, 15:30
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    The ATLAS Distributed Data Management system stores more than 160PB of physics data across more than 130 sites globally. Rucio, the next-generation data management system of ATLAS has been introduced to cope with the anticipated workload of the coming decade. The previous data management system DQ2 pursued a rather simplistic approach for resource management, but with the increased data volume...
  48. Dr Dario Barberis (Università e INFN Genova (IT))
    13/04/2015, 15:30
    Track3: Data store and access
    oral presentation
    The EventIndex is the complete catalogue of all ATLAS events, keeping the references to all files that contain a given event in any processing stage. It replaces the TAG database, which had been in use during LHC Run 1. For each event it contains its identifiers, the trigger pattern and the GUIDs of the files containing it. Major use cases are event picking, feeding the Event Service used on...
  49. Dr Ivan Kisel (Johann-Wolfgang-Goethe Univ. (DE))
    13/04/2015, 15:45
    Track2: Offline software
    oral presentation
    The future heavy-ion experiment CBM (FAIR/GSI, Darmstadt, Germany) will focus on the measurement of very rare probes at interaction rates up to 10 MHz with a data flow of up to 1 TB/s. The beam will provide a free stream of particles without bunch structure. That requires full online event reconstruction and selection not only in space, but also in time, so-called 4D event building and...
  50. Adam Jedrzej Otto (Ministere des affaires etrangeres et europeennes (FR))
    13/04/2015, 15:45
    Track6: Facilities, Infrastructure, Network
    oral presentation
    The LHCb experiment is preparing a major upgrade of both the detector and the data acquisition system. A system capable of transporting up to 50 Tbps of data will be required. This can only be achieved in a manageable way using 100 Gbps links. Such links have recently become available in servers, having already been available between switches for a while. We present first...
  51. Dr Tony Wildish (Princeton University (US))
    13/04/2015, 15:45
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    AsyncStageOut (ASO) is a new component of the distributed data analysis system of CMS, CRAB, designed for managing users' data. It addresses a major weakness of the previous model, namely that data movement was part of the job execution resulting in inefficient use of job slots and an unacceptable failure rate at the end of the jobs. ASO foresees the management of up to 400k files per day...
  52. Dr Takashi SUGIMOTO (Japan Synchrotron Radiation Research Institute)
    13/04/2015, 15:45
    Track5: Computing activities and Computing models
    oral presentation
    An X-ray free electron laser (XFEL) facility, SACLA, is generating ultra-short, high peak brightness, and full-spatial-coherent X-ray pulses [1]. The unique characteristics of the X-ray pulses, which have never been obtained with conventional synchrotron orbital radiation, are now opening new opportunities in a wide range of scientific fields such as atom, molecular and optical physics,...
  53. Javier Sanchez (Instituto de Fisica Corpuscular (ES))
    13/04/2015, 15:45
    Track3: Data store and access
    oral presentation
    The ATLAS EventIndex contains records of all events processed by ATLAS, in all processing stages. These records include the references to the files containing each event (the GUID of the file) and the internal “pointer” to each event in the file. This information is collected by all jobs that run at Tier-0 or on the Grid and process ATLAS events. Each job produces a snippet of information for...
  54. Christoph Paus (Massachusetts Inst. of Technology (US))
    13/04/2015, 16:30
    Track5: Computing activities and Computing models
    oral presentation
    The Dynamic Data Management (DDM) framework is designed to manage the majority of the CMS data in an automated fashion. At the moment 51 CMS Tier-2 data centers have the ability to host about 20 PB of data. Tier-1 centers will also be included adding substantially more space. The goal of DDM is to facilitate the management of the data distribution and optimize the accessibility of data for the...
  55. Dr Junichi Kanzaki (KEK)
    13/04/2015, 16:30
    Track8: Performance increase and optimization exploiting hardware features
    oral presentation
    A fast event generation system for physics processes has been developed using graphics processing units (GPUs). The system is based on the Monte Carlo integration and event generation programs BASES/SPRING, which were originally developed in FORTRAN. They were rewritten on the CUDA platform provided by NVIDIA in order to implement these programs on GPUs. Since the Monte Carlo integration...
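    The Monte Carlo integration at the heart of the BASES/SPRING approach described above averages the integrand over random sample points; since each sample is independent, the work maps naturally onto GPU threads. A plain-Python sketch of the sequential core (illustrative only, not the actual programs):

    ```python
    # Monte Carlo estimate of the integral of f over [a, b]: average the
    # integrand at n uniform random points and scale by the interval width.
    # Each sample is independent, which is what a GPU port parallelizes.
    import random

    def mc_integrate(f, a, b, n, seed=42):
        rng = random.Random(seed)  # fixed seed for a reproducible estimate
        total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
        return (b - a) * total / n

    # Estimate the integral of x^2 over [0, 1]; the exact value is 1/3.
    est = mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000)
    ```

    The statistical error shrinks as 1/sqrt(n), so throwing many more samples in parallel on a GPU directly improves the precision of both the integral and the generated event sample.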
  56. David Schultz (University of Wisconsin-Madison)
    13/04/2015, 16:30
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    We describe the overall structure and new features of the second generation of IceProd, a data processing and management framework. IceProd was developed by the IceCube Neutrino Observatory for processing of Monte Carlo simulations and detector data, and has been a key component of the IceCube offline computing infrastructure since it was first deployed in 2006. It runs fully in user space as...
  57. Stefano Zilli (CERN)
    13/04/2015, 16:30
    Track7: Clouds and virtualization
    oral presentation
    CERN has been running a production OpenStack cloud since July 2013 to support physics computing and infrastructure services for the site. This is expected to reach over 100,000 cores by the end of 2015. This talk will cover the different use cases for this service and experiences with this deployment in areas such as user management, deployment, metering and configuration of thousands of...
  58. Rolf Seuster (TRIUMF (CA))
    13/04/2015, 16:30
    Track2: Offline software
    oral presentation
    The talk will give a summary of the broad spectrum of software upgrade projects preparing ATLAS for the challenges of the upcoming LHC Run-2. Those projects include the reduction of the CPU required for reconstruction by a factor of 3 compared to 2012, which was required to meet the challenges of the expected increase in pileup and the higher data taking rate of up to 1 kHz. In addition, the new...
  59. Marko Bracko (Jozef Stefan Institute (SI))
    13/04/2015, 16:30
    Track3: Data store and access
    oral presentation
    The Belle II experiment, a next-generation B factory experiment at the KEK laboratory, Tsukuba, Japan, is expected to collect an experimental data sample fifty times larger than its predecessor, the Belle experiment. The data taking and processing rates are expected to be at least one order of magnitude larger as well. In order to cope with these large data processing rates and huge data...
  60. Thomas Beermann (Bergische Universitaet Wuppertal (DE))
    13/04/2015, 16:45
    Track5: Computing activities and Computing models
    oral presentation
    This contribution presents a study on the applicability and usefulness of dynamic data placement methods for data-intensive systems, such as ATLAS distributed data management (DDM). In this system the jobs are sent to the data, therefore having a good distribution of data is significant. Ways of forecasting workload patterns are examined which then are used to redistribute data to achieve a...
  61. Hideki Miyake (KEK)
    13/04/2015, 16:45
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    In the Belle II experiment a large amount of physics data will be continuously taken, at a production rate equivalent to that of the LHC experiments. Considerable computing, storage, and network resources are necessary to handle not only the recorded data but also substantial simulated data. Therefore Belle II exploits a distributed computing system based on the DIRAC interware. DIRAC is a general...
  62. Dr Ulrich Schwickerath (CERN)
    13/04/2015, 16:45
    Track7: Clouds and virtualization
    oral presentation
    As part of CERN's Agile Infrastructure project, large parts of the CERN batch farm have been moved to virtual machines running on CERN's private IaaS cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, were converted into hypervisors. Due to the large spread of the per-core performance (rated in HS06) in...
    Go to contribution page
  63. Christophe Haen (CERN)
    13/04/2015, 16:45
    Track3: Data store and access
    oral presentation
    In the distributed computing model of LHCb the File Catalog (FC) is a central component that keeps track of each file and replica stored on the Grid. It federates the LHCb data files in a logical namespace used by all LHCb applications. As a replica catalog, it is used for brokering jobs to sites where their input data is meant to be present, but also by jobs for finding alternative...
    Go to contribution page
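The replica-catalog brokering role described above can be sketched in a few lines. This is a toy illustration, not the actual LHCb File Catalog interface: a plain dict stands in for the catalog, and the LFNs and site names are invented.

```python
# Toy replica-catalog brokering: map each LFN to the sites holding a
# replica, and match a job to sites holding *all* of its inputs.

catalog = {
    "/lhcb/data/run1.dst": {"CERN", "GRIDKA"},
    "/lhcb/data/run2.dst": {"CERN", "CNAF"},
}

def broker(input_lfns, catalog):
    """Return the sites holding every input LFN; an empty set means the
    job needs remote access or prior replication."""
    sites = None
    for lfn in input_lfns:
        replicas = catalog.get(lfn, set())
        sites = replicas if sites is None else sites & replicas
    return sites or set()

assert broker(["/lhcb/data/run2.dst"], catalog) == {"CERN", "CNAF"}
assert broker(["/lhcb/data/run1.dst", "/lhcb/data/run2.dst"], catalog) == {"CERN"}
assert broker(["/lhcb/data/missing.dst"], catalog) == set()
```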
  64. Scott Snyder (Brookhaven National Laboratory (US))
    13/04/2015, 16:45
    Track2: Offline software
    oral presentation
    During the 2013-2014 shutdown of the Large Hadron Collider, ATLAS switched to a new event data model for analysis, called the xAOD. A key feature of this model is the separation of the object data from the objects themselves (the "auxiliary store"). Rather than being stored as member variables of the analysis classes, all object data are stored separately, as vectors of simple values. Thus, the...
    Go to contribution page
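The separation of object data from the objects themselves can be sketched compactly. This is a hypothetical Python analogue of the structure-of-arrays idea, not the ATLAS C++ interface: attributes live in parallel vectors of simple values, and each "object" is just an index into them.

```python
# Sketch of an auxiliary store: attribute data in parallel vectors,
# objects as lightweight index-holding proxies.

class AuxStore:
    def __init__(self):
        self.vectors = {}  # attribute name -> list of plain values
    def add(self, **attrs):
        for name, value in attrs.items():
            self.vectors.setdefault(name, []).append(value)

class Proxy:
    """Lightweight view of one object: holds only its index."""
    def __init__(self, store, index):
        self._store, self._index = store, index
    def __getattr__(self, name):
        # Attribute lookup is delegated to the store's vectors.
        return self._store.vectors[name][self._index]

store = AuxStore()
store.add(pt=45.0, eta=0.3)
store.add(pt=27.5, eta=-1.1)
electrons = [Proxy(store, i) for i in range(2)]
assert electrons[1].pt == 27.5
# Column-wise access is trivial, which is what makes the layout I/O friendly:
assert store.vectors["pt"] == [45.0, 27.5]
```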
  65. Dr Sami Kama (Southern Methodist University (US))
    13/04/2015, 16:45
    Track8: Performance increase and optimization exploiting hardware features
    oral presentation
    The growing size and complexity of events produced at the high luminosities expected in 2015 at the Large Hadron Collider demands much more computing power for the online event selection and for the offline data reconstruction than in the previous data taking period. In recent years, the explosive performance growth of low-cost, massively parallel processors like Graphical Processing Units...
    Go to contribution page
  66. Dr Peter Van Gemmeren (Argonne National Laboratory (US))
    13/04/2015, 17:00
    Track2: Offline software
    oral presentation
    ATLAS developed and employed for Run 1 of the Large Hadron Collider a sophisticated infrastructure for metadata handling in event processing jobs.  This infrastructure profits from a rich feature set provided by the ATLAS execution control framework, including standardized interfaces and invocation mechanisms for tools and services, segregation of transient data stores with concomitant object...
    Go to contribution page
  67. Ruben Domingo Gaspar Aparicio (CERN)
    13/04/2015, 17:00
    Track3: Data store and access
    oral presentation
    The CERN IT-DB group is migrating its storage platform, mainly NetApp NAS systems running in 7-mode but also SAN arrays, to a set of NetApp C-mode clusters. The largest one consists of 14 controllers and will hold a range of critical databases, from administration to accelerator control and experiment control databases. This talk shows our setup: network, monitoring, use of features like transparent...
    Go to contribution page
  68. Prof. Daniele Bonacorsi (University of Bologna)
    13/04/2015, 17:00
    Track5: Computing activities and Computing models
    oral presentation
    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collisions data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed...
    Go to contribution page
  69. Richard Calland
    13/04/2015, 17:00
    Track8: Performance increase and optimization exploiting hardware features
    oral presentation
    The Tokai-to-Kamioka (T2K) experiment is a second-generation long-baseline neutrino experiment, which uses a near detector to constrain systematic uncertainties for oscillation measurements with its far detector. Event-by-event reweighting of Monte Carlo (MC) events is applied to model systematic effects and construct PDFs describing predicted event distributions. However, when analysing...
    Go to contribution page
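Event-by-event reweighting as described above can be illustrated with a one-dimensional toy. This is a minimal sketch, not the T2K implementation: each MC event drawn from the nominal model gets a weight equal to the ratio of the varied to the nominal probability density, so the weighted sample predicts the distribution under the systematic variation.

```python
# Toy event-by-event reweighting: weight = P_varied(x) / P_nominal(x),
# here for a Gaussian model with a shifted mean.

import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def reweight(events, nominal, varied):
    """events: x values drawn from the nominal model.
    Returns per-event weights P_varied(x) / P_nominal(x)."""
    return [varied(x) / nominal(x) for x in events]

events = [0.0, 0.5, 1.0]
w = reweight(events,
             nominal=lambda x: gauss(x, 0.0, 1.0),
             varied=lambda x: gauss(x, 0.2, 1.0))
# Events closer to the shifted mean get weights above 1:
assert w[0] < 1.0 < w[1]
```

Because only weights change, the same MC sample can be reused for every systematic variation instead of being regenerated, which is the point of the technique.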
  70. Federico Stagni (CERN)
    13/04/2015, 17:00
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    The DIRAC workload management system used by LHCb Distributed Computing is based on Computing Resource reservation and late binding (also known as pilot job in the case of batch resources) that allows the serial execution of several jobs obtained from a central task queue. CPU resources can usually be reserved for limited duration only (e.g. batch queue time limit) and in order to optimize...
    Go to contribution page
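The late-binding pattern described in this abstract can be sketched as a loop: a pilot holds a time-limited reservation and keeps pulling jobs from a central queue while they still fit. This is a hypothetical illustration, not the DIRAC workload management API; the job names and runtimes are invented.

```python
# Toy pilot with late binding: run jobs serially from a central queue
# until the next job would exceed the remaining reservation.

import collections

Job = collections.namedtuple("Job", "name runtime")

def pilot(task_queue, wall_time):
    """Execute jobs from the queue; stop when the next job no longer
    fits in the remaining wall time."""
    executed, remaining = [], wall_time
    while task_queue and task_queue[0].runtime <= remaining:
        job = task_queue.popleft()
        remaining -= job.runtime  # "run" the job
        executed.append(job.name)
    return executed, remaining

queue = collections.deque([Job("sim1", 4), Job("sim2", 3), Job("reco1", 5)])
done, left = pilot(queue, wall_time=8)
assert done == ["sim1", "sim2"] and left == 1
```

Matching jobs to the pilot only at execution time is what lets the central queue reorder priorities without resubmitting anything to the batch system.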
  71. Andrew McNab (University of Manchester (GB))
    13/04/2015, 17:00
    Track7: Clouds and virtualization
    oral presentation
    We compare the Vac and Vcycle virtual machine lifecycle managers and our experiences in providing production job execution services for ATLAS, LHCb, and the GridPP VO at sites in the UK and at CERN. In both the Vac and Vcycle systems, the virtual machines are created outside of the experiment's job submission and pilot framework. In the case of Vac, a daemon runs on each physical host which...
    Go to contribution page
  72. Elizabeth Sexton-Kennedy (Fermi National Accelerator Lab. (US))
    13/04/2015, 17:15
    Track5: Computing activities and Computing models
    oral presentation
    Today there are many different experimental event processing frameworks in use by running or about to be running experiments. This talk will compare and contrast the different components of these frameworks and highlight the different solutions chosen by different groups.  In the past there have been attempts at shared framework projects for example the collaborations on the BaBar framework...
    Go to contribution page
  73. Andrew John Washbrook (University of Edinburgh (GB))
    13/04/2015, 17:15
    Track7: Clouds and virtualization
    oral presentation
    Cloud computing enables ubiquitous, convenient and on-demand access to a shared pool of configurable computing resources that can be rapidly provisioned with minimal management effort. The flexible and scalable nature of the cloud computing model is attractive to both industry and academia. In HEP, the use of the “cloud” has become more prevalent with LHC experiments making use of standard...
    Go to contribution page
  74. David Michael Rohr (Johann-Wolfgang-Goethe Univ. (DE))
    13/04/2015, 17:15
    Track8: Performance increase and optimization exploiting hardware features
    oral presentation
    ALICE (A Large Ion Collider Experiment) is one of the four major experiments at the Large Hadron Collider (LHC) at CERN, which is today the most powerful particle accelerator worldwide. The High Level Trigger (HLT) is an online compute farm of about 200 nodes, which reconstructs events measured by the ALICE detector in real-time. The HLT uses a custom online data-transport framework to distribute...
    Go to contribution page
  75. Jeffrey Michael Dost (Univ. of California San Diego (US))
    13/04/2015, 17:15
    Track3: Data store and access
    oral presentation
    In April of 2014, the UCSD T2 Center deployed hdfs-xrootd-fallback, a UCSD-developed software system that interfaces Hadoop with XRootD to increase reliability of the Hadoop file system. The hdfs-xrootd-fallback system allows a site to depend less on local file replication and more on global replication provided by the XRootD federation to ensure data redundancy. Deploying the software has...
    Go to contribution page
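The fallback idea above can be shown with a toy read path: try the local file system first, and only on failure fall back to a federation copy, so less local replication is needed for redundancy. This is a conceptual sketch, not the UCSD hdfs-xrootd-fallback code; plain dicts stand in for HDFS and the XRootD federation.

```python
# Toy local-first read with federation fallback.

def read_block(name, local, federation):
    """local/federation: dicts acting as stand-ins for HDFS and the
    XRootD federation. Raises KeyError only if both copies are missing."""
    try:
        return local[name], "local"
    except KeyError:
        return federation[name], "federation"

local = {"block-0001": b"..."}
federation = {"block-0001": b"...", "block-0002": b"..."}
assert read_block("block-0001", local, federation)[1] == "local"
assert read_block("block-0002", local, federation)[1] == "federation"
```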
  76. Dr Torre Wenaus (Brookhaven National Laboratory (US))
    13/04/2015, 17:15
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    The ATLAS Event Service (ES) implements a new fine grained approach to HEP event processing, designed to be agile and efficient in exploiting transient, short-lived resources such as HPC hole-filling, spot market commercial clouds, and volunteer computing. Input and output control and data flows, bookkeeping, monitoring, and data storage are all managed at the event level in an implementation...
    Go to contribution page
  77. Marco Rovere (CERN)
    13/04/2015, 17:15
    Track2: Offline software
    oral presentation
    The Data Quality Monitoring (DQM) Software is a central tool in the CMS experiment. Its flexibility allows for integration in several key environments: Online, for real-time detector monitoring; Offline, for the final, fine-grained data analysis and certification; Release-Validation, to constantly validate the functionalities and the performance of the reconstruction software; in Monte Carlo...
    Go to contribution page
  78. Marco Mascheroni (Universita & INFN, Milano-Bicocca (IT))
    13/04/2015, 17:30
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    The CMS Remote Analysis Builder (CRAB) provides the service for managing analysis tasks isolating users from the technical details of the distributed Grid infrastructure. Throughout the LHC Run 1, CRAB has been successfully employed by an average 350 distinct users every week executing about 200,000 jobs per day. In order to face the new challenges posed by the LHC Run 2, CRAB has been...
    Go to contribution page
  79. Philippe Canal (Fermi National Accelerator Lab. (US))
    13/04/2015, 17:30
    Track8: Performance increase and optimization exploiting hardware features
    oral presentation
    The recent prevalence of hardware architectures of many-core or accelerated processors opens opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. The Geant Vector Prototype has been designed both to exploit the vector capability of mainstream CPUs and to take advantage of coprocessors including NVidia’s GPU and Intel Xeon Phi. The...
    Go to contribution page
  80. Ben Couturier (CERN)
    13/04/2015, 17:30
    Track7: Clouds and virtualization
    oral presentation
    Docker & HEP: containerization of applications for development, distribution and preservation. HEP software stacks are not shallow. Indeed, HEP experiments' software is usually many applications in one (reconstruction, simulation, analysis, ...) and thus requires many libraries - developed in-house or by third parties - to be...
    Go to contribution page
  81. Jakob Blomer (CERN)
    13/04/2015, 17:30
    Track3: Data store and access
    oral presentation
    Fermilab has several physics experiments including NOvA, MicroBooNE, and the Dark Energy Survey that have computing grid-based applications that need to read from a shared set of data files. We call this type of data Auxiliary data to distinguish it from (a) Event data which tends to be different for every job, and (b) Conditions data which tends to be the same for each job in a batch of...
    Go to contribution page
  82. Dr Carl Vuosalo (University of Wisconsin (US))
    13/04/2015, 17:30
    Track2: Offline software
    oral presentation
    The CMS experiment has developed a new analysis object format (the "mini-AOD") targeted to be less than 10% of the size of the Run 1 AOD format. The motivation for the mini-AOD format is to have a small and quickly derived data format from which the majority of CMS analysis users can perform their analysis work. This format is targeted at having sufficient information to serve about 80% of CMS...
    Go to contribution page
  83. Dr Bodhitha Jayatilaka (Fermilab)
    13/04/2015, 17:30
    Track5: Computing activities and Computing models
    oral presentation
    The Open Science Grid (OSG) ties together individual experiments' computing power, connecting their resources to create a large, robust computing grid; this computing infrastructure started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero. OSG has been funded by the Department of Energy Office of Science and National Science Foundation...
    Go to contribution page
  84. Sebastian Neubert (CERN)
    13/04/2015, 17:45
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    Reproducibility of results is a fundamental quality of scientific research. However, as data analyses become more and more complex and research is increasingly carried out by larger and larger teams, it becomes a challenge to keep up this standard. The decomposition of complex problems into tasks that can be effectively distributed over a team in a reproducible manner becomes...
    Go to contribution page
  85. Federica Legger (Ludwig-Maximilians-Univ. Muenchen (DE))
    13/04/2015, 17:45
    Track5: Computing activities and Computing models
    oral presentation
    The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data for the distributed physics community is a challenging task. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user...
    Go to contribution page
  86. Michele Martinelli (INFN Rome)
    13/04/2015, 17:45
    Track8: Performance increase and optimization exploiting hardware features
    oral presentation
    The computing nodes of modern hybrid HPC systems are built using the CPU+GPU paradigm. When this class of systems is scaled to large size, the efficiency of the network connecting the GPU mesh and supporting the internode traffic is a critical factor. The adoption of a low latency, high performance dedicated network architecture, exploiting peculiar characteristics of CPU and GPU hardware,...
    Go to contribution page
  87. Sara Vallero (Universita e INFN (IT))
    13/04/2015, 17:45
    Track7: Clouds and virtualization
    oral presentation
    The INFN computing centre in Torino hosts a private Cloud, which is managed with the OpenNebula cloud controller. The infrastructure offers IaaS services to different scientific computing applications. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at LHC, an interactive analysis facility for the same experiment and a separate grid Tier-2 site for the...
    Go to contribution page
  88. Janusz Martyniak (Imperial College London)
    13/04/2015, 17:45
    Track2: Offline software
    oral presentation
    The Muon Ionization Cooling Experiment (MICE) has developed the MICE Analysis User Software (MAUS) to simulate and analyse experimental data. It serves as the primary codebase for the experiment, providing for offline batch simulation and reconstruction as well as online data quality checks. The software provides both traditional particle physics functionalities such as track reconstruction...
    Go to contribution page
  89. Johannes Elmsheuser (Ludwig-Maximilians-Univ. Muenchen (DE))
    13/04/2015, 17:45
    Track3: Data store and access
    oral presentation
    With the exponential growth of LHC (Large Hadron Collider) data in the years 2010-2012, distributed computing has become the established way to analyze collider data. The ATLAS experiment Grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centres to smaller university clusters. So far the storage technologies and access protocols to the clusters...
    Go to contribution page
  90. Thomas Maier (Ludwig-Maximilians-Univ. Muenchen (DE))
    13/04/2015, 18:00
    Track3: Data store and access
    oral presentation
    I/O is a fundamental determinant in the overall performance of physics analysis and other data-intensive scientific computing. It is, further, crucial to effective resource delivery by the facilities and infrastructure that support data-intensive science. To understand I/O performance, clean measurements in controlled environments are essential, but effective optimization also requires an...
    Go to contribution page
  91. Andrew David Lahiff (STFC - Rutherford Appleton Lab. (GB))
    13/04/2015, 18:00
    Track7: Clouds and virtualization
    oral presentation
    The recently introduced vacuum model offers an alternative to the traditional methods that virtual organisations (VOs) use to run computing tasks at sites, where they either submit jobs using grid middleware or create virtual machines (VMs) using cloud APIs. In the vacuum model VMs are created and contextualized by the site itself, and start the appropriate pilot job framework which fetches...
    Go to contribution page
  92. Dr Tian Yan (Institution of High Energy Physics, Chinese Academy of Science)
    13/04/2015, 18:00
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    For the Beijing Spectrometer III (BESIII) experiment, located at the Institute of High Energy Physics (IHEP), China, a distributed computing environment (DCE) has been set up and has been in production since 2012. The basic framework or middleware is DIRAC (Distributed Infrastructure with Remote Agent Control) with BES-DIRAC extensions. About 2000 CPU cores and 400 TB of storage contributed by...
    Go to contribution page
  93. Mr Steffen Baehr (Karlsruhe Institute of Technology)
    13/04/2015, 18:00
    Track8: Performance increase and optimization exploiting hardware features
    oral presentation
    The impending upgrade of the Belle experiment is expected to increase the generated data set by a factor of 50. This means that for the planned pixel detector, which is the closest to the interaction point, the data rates are going to increase to over 20 GB/s. Combined with data generated by the other detectors, this rate is too high to be efficiently sent out to offline processing. This is...
    Go to contribution page
  94. Dirk Hufnagel (Fermi National Accelerator Lab. (US))
    13/04/2015, 18:00
    Track5: Computing activities and Computing models
    oral presentation
    In 2015, CMS will embark on a new era of collecting LHC collisions at unprecedented rates and complexity. This will put a tremendous stress on our computing systems. Prompt Processing of the raw data by the Tier-0 infrastructure will no longer be constrained to CERN alone due to the significantly increased resource requirements. In LHC Run 2, we will need to operate it as a distributed system...
    Go to contribution page
  95. Adam Aurisano (University of Cincinnati)
    13/04/2015, 18:00
    Track2: Offline software
    oral presentation
    The NOvA experiment is a two-detector, long-baseline neutrino experiment operating in the recently upgraded NuMI muon neutrino beam. Simulating neutrino interactions and backgrounds requires many steps including: the simulation of the neutrino beam flux using FLUKA and the FLUGG interface; cosmic ray generation using CRY; neutrino interaction modeling using GENIE; and a simulation of the...
    Go to contribution page
  96. Srikanth Sridharan (CERN)
    13/04/2015, 18:15
    Track8: Performance increase and optimization exploiting hardware features
    oral presentation
    The proposed upgrade of the LHCb experiment at the Large Hadron Collider (LHC) at CERN envisages a system of 500 data sources, each generating data at 100 Gbps, the acquisition and processing of which is a challenge even for state-of-the-art FPGAs. This challenge splits in two: the Data Acquisition (DAQ) part and the algorithm acceleration part, the latter not necessarily immediately following the former....
    Go to contribution page
  97. Oliver Keeble (CERN)
    13/04/2015, 18:15
    Track3: Data store and access
    oral presentation
    The DPM project offers an excellent opportunity for comparative testing of the HTTP and xroot protocols for data analysis.
    - The DPM storage itself is multi-protocol, allowing comparisons to be performed on the same hardware
    - The DPM has been instrumented to produce an i/o monitoring stream, familiar from the xrootd project, regardless of the protocol being used for access
    - The...
    Go to contribution page
  98. Janusz Martyniak (Imperial College London)
    13/04/2015, 18:15
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    The GridPP consortium in the UK is currently testing a multi-VO DIRAC service aimed at non-LHC VOs. These VOs are typically small (fewer than two hundred members) and generally do not have a dedicated computing support post. The majority of these represent particle physics experiments (e.g. T2K, NA62 and COMET), although the scope of the DIRAC service is not limited to this field. A few VOs...
    Go to contribution page
  99. Norman Anthony Graf (SLAC National Accelerator Laboratory (US))
    13/04/2015, 18:15
    Track2: Offline software
    oral presentation
    The Heavy Photon Search (HPS) is an experiment at the Thomas Jefferson National Accelerator Facility (JLab) designed to search for a hidden sector photon (A’) in fixed target electroproduction. It uses a silicon microstrip tracking and vertexing detector inside a dipole magnet to measure charged particle trajectories and a fast electromagnetic calorimeter just downstream of the magnet to...
    Go to contribution page
  100. Dr Jonathan Dorfan (OIST)
    14/04/2015, 09:00
  101. Robert Group (University of Virginia)
    14/04/2015, 09:15
  102. 14/04/2015, 10:00
  103. Amber Boehnlein
    14/04/2015, 11:00
  104. 14/04/2015, 11:45
  105. 14/04/2015, 12:15
  106. Edgar Fajardo Hernandez (Univ. of California San Diego (US))
    14/04/2015, 14:00
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    The HTCondor-CE is the next-generation gateway software for the Open Science Grid (OSG). It is responsible for providing a network service which authorizes remote users and provides a resource provisioning service (other well-known gatekeepers include Globus GRAM, CREAM, ARC-CE, and OpenStack’s Nova). Based on the venerable HTCondor software, this new CE is simply a highly-specialized...
    Go to contribution page
  107. Jeremy Coles (University of Cambridge (GB))
    14/04/2015, 14:00
    Track6: Facilities, Infrastructure, Network
    oral presentation
    The first section of this paper elaborates on the operational status and directions within the UK Computing for Particle Physics (GridPP) project as we approach LHC Run-2. It details the pressures that have been gradually reshaping the deployed hardware and middleware environments at GridPP sites – from the increasing adoption of larger multicore nodes to the move towards alternative batch...
    Go to contribution page
  108. Tim Smith (CERN)
    14/04/2015, 14:00
    Track5: Computing activities and Computing models
    oral presentation
    In this paper we present newly launched services for open data and for long-term preservation and reuse of high-energy-physics data analyses. We follow the "data continuum" practices through several progressive data analysis phases up to the final publication. The aim is to capture all digital assets and associated knowledge inherent in the data analysis process for subsequent generations, and...
    Go to contribution page
  109. Vakho Tsulaia (Lawrence Berkeley National Lab. (US))
    14/04/2015, 14:00
    Track2: Offline software
    oral presentation
    AthenaMP is a multi-process version of the ATLAS reconstruction and data analysis framework Athena. By leveraging Linux fork and copy-on-write, it allows the sharing of memory pages between event processors running on the same compute node with little to no change in the application code. Originally targeted to optimize the memory footprint of reconstruction jobs, AthenaMP has demonstrated...
    Go to contribution page
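The fork-and-share pattern behind AthenaMP can be sketched directly with `os.fork`. This is an illustrative, Unix-only toy, not the Athena framework: the parent initializes read-only state once, then forks workers that share those memory pages copy-on-write and process disjoint event ranges.

```python
# Toy fork/copy-on-write worker pool (Unix-only): children inherit the
# parent's pages and only modified pages are duplicated.

import os

def run_workers(events, n_workers, state):
    pids = []
    for rank in range(n_workers):
        pid = os.fork()
        if pid == 0:  # child: shares `state` via copy-on-write
            my_events = events[rank::n_workers]  # disjoint event slice
            total = sum(state["calib"] * e for e in my_events)
            os._exit(0 if total >= 0 else 1)
        pids.append(pid)
    # Parent: succeed only if every child exited cleanly.
    return all(os.waitpid(p, 0)[1] == 0 for p in pids)

state = {"calib": 1.5}  # stands in for large geometry/conditions data
assert run_workers(list(range(8)), n_workers=2, state=state)
```

Because nothing is copied until a page is written, the memory cost of large read-only conditions data is paid once per node rather than once per worker.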
  110. Emilio Meschi (CERN)
    14/04/2015, 14:00
    Track1: Online computing
    oral presentation
    Technology convergences in the post-LHC era. In the course of the last three decades HEP experiments have had to face the challenge of manipulating larger and larger masses of data from increasingly complex and heterogeneous detectors with hundreds of millions of electronic channels. The traditional approach of low-level data reduction using ad-hoc electronics working on fast analog...
    Go to contribution page
  111. Dr Miguel Marquina (CERN)
    14/04/2015, 14:00
    Track7: Clouds and virtualization
    oral presentation
    Using virtualisation with CernVM has emerged as a de-facto standard among HEP experiments; it allows HEP analysis and simulation programs to run in cloud environments. Following the integration of virtualisation with BOINC and CernVM, first pioneered for simulation of event generation in the Theory group at CERN, the LHC experiments ATLAS, CMS and LHCb have all...
    Go to contribution page
  112. James Catmore (University of Oslo (NO))
    14/04/2015, 14:15
    Track2: Offline software
    oral presentation
    During the long shutdown of the LHC, the ATLAS collaboration overhauled its analysis model based on experience gained during Run 1. A significant component of the model is a "Derivation Framework" that takes the Petabyte-scale AOD output from ATLAS reconstruction and produces samples, typically Terabytes in size, targeted at specific analyses. The framework incorporates all of the...
    Go to contribution page
  113. Christopher Jon Lee (University of Johannesburg (ZA))
    14/04/2015, 14:15
    Track1: Online computing
    oral presentation
    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data, streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The online farm is composed of ~3000 servers, processing the data readout from ~100 million detector channels through multiple trigger levels. During the two years of the first Long Shutdown (LS1) there has...
    Go to contribution page
  114. David Cameron (University of Oslo (NO))
    14/04/2015, 14:15
    Track7: Clouds and virtualization
    oral presentation
    A recent common theme in HEP computing is the exploitation of opportunistic resources in order to provide the maximum statistics possible for Monte Carlo simulation. Volunteer computing has been used over the last few years in many other scientific fields and by CERN itself to run simulations of the LHC beams. The ATLAS@Home project was started to allow volunteers to run simulations of...
    Go to contribution page
  115. Andrej Filipcic (Jozef Stefan Institute (SI))
    14/04/2015, 14:15
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    Distributed computing resources available for high-energy physics research are becoming less dedicated to one type of workflow and researchers’ workloads are increasingly exploiting modern computing technologies such as parallelism. The current pilot job management model used by many experiments relies on static dedicated resources and cannot easily adapt to these changes. The model used for...
    Go to contribution page
  116. Jose Flix Molina (Centro de Investigaciones Energ. Medioambientales y Tecn. (ES))
    14/04/2015, 14:15
    Track6: Facilities, Infrastructure, Network
    oral presentation
    The LHC experiments will collect unprecedented data volumes in the next Physics run, with high pile-up collisions resulting in events which require a more complex processing. The collaborations have been asked to update their Computing Models to optimize the use of the available resources in order to cope with the Run2 conditions, in the midst of widespread funding restrictions. The changes in...
    Go to contribution page
  117. Martin Urban (Rheinisch-Westfaelische Tech. Hoch. (DE))
    14/04/2015, 14:15
    Track5: Computing activities and Computing models
    oral presentation
    VISPA provides a graphical front-end to computing infrastructures giving its users all functionality needed for working conditions comparable to a personal computer. It is a framework that can be extended with custom applications to support individual needs, e.g. graphical interfaces for experiment-specific software. By design, VISPA serves as a multi-purpose platform for many disciplines and...
    Go to contribution page
  118. Jon Kerr Nilsen (University of Oslo (NO))
    14/04/2015, 14:30
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    While current grid middlewares are quite advanced in terms of connecting jobs to resources, their client tools are generally quite minimal and features for managing large sets of jobs are left to the user to implement. The ARC Control Tower (aCT) is a very flexible job management framework that can be run on anything from a single user’s laptop to a multi-server distributed setup. aCT was...
    Go to contribution page
  119. Laurence Field (CERN)
    14/04/2015, 14:30
    Track7: Clouds and virtualization
    oral presentation
    Volunteer computing remains an untapped opportunistic resource for the LHC experiments. The use of virtualization in this domain was pioneered by the Test4theory project and enabled the running of high-energy particle physics simulations on home computers. This paper describes the model for CMS to run workloads using a similar volunteer computing platform. It is shown how the original approach...
    Go to contribution page
  120. Dr Bodhitha Jayatilaka (Fermilab)
    14/04/2015, 14:30
    Track5: Computing activities and Computing models
    oral presentation
    The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at...
    Go to contribution page
  121. Dr Sami Kama (Southern Methodist University (US))
    14/04/2015, 14:30
    Track2: Offline software
    oral presentation
    The challenge faced by HEP experiments from the current and expected architectural evolution of CPUs and co-processors is how to successfully exploit concurrency and keep memory consumption within reasonable limits. This is a major change from frameworks which were designed for serial event processing on single core processors in the 2000s. ATLAS has recently considered this problem in some...
    Go to contribution page
  122. Jeff Templon (NIKHEF (NL))
    14/04/2015, 14:30
    Track6: Facilities, Infrastructure, Network
    oral presentation
    With the advent of workloads containing explicit requests for multiple cores in a single grid job, grid sites faced a new set of challenges in workload scheduling. The most common batch schedulers deployed at HEP computing sites do a poor job at multicore scheduling when using only the native capabilities of those schedulers. This talk describes how efficient multicore scheduling was...
    Go to contribution page
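The draining approach to multicore scheduling mentioned above can be shown with a toy decision function. This is an illustrative sketch, not the NIKHEF configuration: while a multicore job waits, single-core jobs are held back so that free cores accumulate on a node instead of being backfilled away.

```python
# Toy multicore scheduling by draining: hold single-core jobs back until
# enough cores accumulate for a waiting multicore job.

def schedule(free_cores, single_queue, multi_queue, multi_width=8):
    """Return (started_multi, started_single) for one scheduling pass."""
    started_multi = started_single = 0
    if multi_queue and free_cores >= multi_width:
        started_multi = 1
        free_cores -= multi_width
    # Backfill with single-core jobs only when no multicore job is starved.
    if not multi_queue or started_multi:
        started_single = min(free_cores, single_queue)
    return started_multi, started_single

# Node still draining: 5 free cores, a multicore job waiting -> start nothing.
assert schedule(5, single_queue=10, multi_queue=1) == (0, 0)
# Drained to 9 cores: the multicore job starts, the leftover core backfills.
assert schedule(9, single_queue=10, multi_queue=1) == (1, 1)
```

The cost of this policy is the idle time of the cores held back while draining, which is exactly the inefficiency the native schedulers mentioned in the abstract fail to manage well.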
  123. Reiner Hauser (Michigan State University (US))
    14/04/2015, 14:30
    Track1: Online computing
    oral presentation
    After its first shutdown, the LHC will provide pp collisions with increased luminosity and energy. In the ATLAS experiment the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The Data Flow (DF) element of the TDAQ is a distributed hardware and software system responsible for buffering and transporting event data from the Readout system to the...
    Go to contribution page
  124. Manuel Giffels (KIT - Karlsruhe Institute of Technology (DE))
    14/04/2015, 14:45
    Track6: Facilities, Infrastructure, Network
    oral presentation
    Recent developments in high energy physics (HEP), including multi-core jobs and multi-core pilots, require data centres to gain a deep understanding of the system in order to correctly design and upgrade computing clusters. Networking in particular is a critical component, as the increased use of data federations relies on WAN connectivity and availability as a fallback for data access. The specific...
    Go to contribution page
  125. Roger Jones (Lancaster University (GB))
    14/04/2015, 14:45
    Track5: Computing activities and Computing models
    oral presentation
    Complementary to parallel open access and analysis preservation initiatives, ATLAS is taking steps to ensure that the data taken by the experiment during run-1 remain accessible and available for future analysis by the collaboration. An evaluation of what is required to achieve this is underway, examining the ATLAS data production chain to establish the effort required and potential problems....
    Go to contribution page
  126. Charles Leggett (Lawrence Berkeley National Lab. (US))
    14/04/2015, 14:45
    Track2: Offline software
    oral presentation
    The ATLAS experiment has successfully used its Gaudi/Athena software framework for data taking and analysis during the first LHC run, with billions of events successfully processed. However, the design of Gaudi/Athena dates from early 2000 and the software and the physics code has been written using a single threaded, serial design. This programming model has increasing difficulty in...
    Go to contribution page
  127. Mr Thomas Hauth (KIT - Karlsruhe Institute of Technology (DE))
    14/04/2015, 14:45
    Track7: Clouds and virtualization
    oral presentation
    Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events as well as for Monte Carlo simulation. The Institut für Experimentelle Kernphysik (EKP) at KIT is participating in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to...
    Go to contribution page
  128. Jorn Schumacher (University of Paderborn (DE))
    14/04/2015, 14:45
    Track1: Online computing
    oral presentation
    The ATLAS experiment at CERN is planning the full deployment of a new, unified link technology for connecting detector front-end electronics on the timescale of the LHC Run 4 (2025). It is estimated that roughly 8000 Gigabit Transceiver links (GBT), with transfer rates probably up to 9.6 Gbps, will replace existing links used for readout, detector control and distribution of timing and trigger...
    Go to contribution page
  129. Andres Gomez Ramirez (Johann-Wolfgang-Goethe Univ. (DE))
    14/04/2015, 14:45
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    Grid infrastructures allow users flexible, on-demand usage of computing resources over an Internet connection. A remarkable example of a Grid in High Energy Physics (HEP) research is the one used by the ALICE experiment at the European Organization for Nuclear Research (CERN). Physicists can submit jobs to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC) at...
    Go to contribution page
  130. Remi Mommsen (Fermi National Accelerator Lab. (US))
    14/04/2015, 15:00
    Track1: Online computing
    oral presentation
    The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high-level trigger (HLT) farm. The DAQ system has been redesigned during the LHC shutdown in 2013/14. The new DAQ architecture is based on state-of-the-art network technologies for the event...
    Go to contribution page
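The core task of the event builder described above is to assemble, for each event id, one fragment from every readout source before shipping the complete event to the HLT. A minimal sketch of that bookkeeping, assuming invented source names and not reflecting the actual CMS DAQ code:

```python
# Minimal event-builder sketch: collect one fragment per readout unit for
# each event id; emit the assembled event once all fragments have arrived.
# Purely illustrative; real event builders work over a network fabric.

from collections import defaultdict

class EventBuilder:
    def __init__(self, n_sources):
        self.n_sources = n_sources
        self.partial = defaultdict(dict)   # event_id -> {source: fragment}

    def add_fragment(self, event_id, source, payload):
        self.partial[event_id][source] = payload
        if len(self.partial[event_id]) == self.n_sources:
            # All fragments arrived: assemble in source order, free the slot.
            frags = self.partial.pop(event_id)
            return b"".join(frags[s] for s in sorted(frags))
        return None   # event still incomplete

eb = EventBuilder(n_sources=3)
eb.add_fragment(42, "ru1", b"A")
eb.add_fragment(42, "ru0", b"B")
event = eb.add_fragment(42, "ru2", b"C")
print(event)   # b'BAC' (assembled in ru0, ru1, ru2 order)
```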
  131. Jetendr Shamdasani (University of the West of England (GB))
    14/04/2015, 15:00
    Track5: Computing activities and Computing models
    oral presentation
    In complex data analyses it is increasingly important to capture information about the usage of data sets in addition to their preservation over time in order to ensure reproducibility of results, to verify the work of others and to ensure appropriate conditions data have been used for specific analyses. This so-called provenance data in the computer science world is defined as the history or...
    Go to contribution page
  132. Mr Phil Demar (Fermilab)
    14/04/2015, 15:00
    Track6: Facilities, Infrastructure, Network
    oral presentation
    Fermilab is in the process of upgrading its wide-area network facilities to 100GE technology. One might assume that migration would be relatively straightforward, with forklift upgrades of our existing network perimeter devices to 100GE-capable platforms, and accompanying deployment of 100GE WAN services. However, our migration to 100GE WAN technology has proven to be significantly more...
    Go to contribution page
  133. Maria Arsuaga Rios (CERN)
    14/04/2015, 15:00
    Track7: Clouds and virtualization
    oral presentation
    Amazon S3 is a widely adopted protocol for scalable cloud storage that could also fulfill storage requirements of the high-energy physics community. CERN has been evaluating this option using some key HEP applications such as ROOT and the CernVM filesystem (CvmFS) with S3 back-ends. In this contribution we present our evaluation based on two versions of the Huawei UDS storage system used from...
    Go to contribution page
  134. Dr Christopher Jones (Fermi National Accelerator Lab. (US))
    14/04/2015, 15:00
    Track2: Offline software
    oral presentation
    During 2014, the CMS Offline and Computing Organization completed the necessary changes to use the CMS threaded framework in the full production environment. Running reconstruction workflows using the multi-threaded framework is a crucial element of CMS' 2015 and beyond production plan. We will briefly discuss the design of the CMS Threaded Framework, in particular how the design affects...
    Go to contribution page
  135. Dr Tony Wildish (Princeton)
    14/04/2015, 15:00
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    The ANSE project has been working with the CMS and ATLAS experiments to bring network awareness into their middleware stacks. For CMS, this means enabling control of virtual network circuits in PhEDEx, the CMS data-transfer management system. PhEDEx orchestrates the transfer of data around the CMS experiment to the tune of 1 PB per week spread over about 70 sites. The goal of ANSE is to...
    Go to contribution page
  136. Emilio Meschi (CERN)
    14/04/2015, 15:15
    Track1: Online computing
    oral presentation
    During the LHC Long Shutdown 1, the CMS DAQ system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has...
    Go to contribution page
  137. Dr Alexei Klimentov (Brookhaven National Laboratory (US))
    14/04/2015, 15:15
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    A crucial contributor to the success of the massively scaled global computing system that delivers the analysis needs of the LHC experiments is the networking infrastructure upon which the system is built. The experiments have been able to exploit excellent high-bandwidth networking in adapting their computing models for the most efficient utilization of resources. New advanced networking...
    Go to contribution page
  138. Dr Andrew Norman (Fermilab)
    14/04/2015, 15:15
    Track5: Computing activities and Computing models
    oral presentation
    The ability of modern HEP experiments to acquire and process unprecedented amounts of data and simulation has led to an explosion in the volume of information that individual scientists deal with on a daily basis. This explosion has resulted in a need for individuals to generate and keep large “personal analysis” data sets which represent the skimmed portions of official data collections...
    Go to contribution page
  139. Luca Magnoni (CERN)
    14/04/2015, 15:15
    Track6: Facilities, Infrastructure, Network
    oral presentation
    Monitoring the WLCG infrastructure requires gathering and analyzing a high volume of heterogeneous data (e.g. data transfers, job monitoring, site tests) coming from different services and experiment-specific frameworks, in order to provide a uniform and flexible interface for scientists and sites. The current architecture, where relational database systems are used to store, process and serve...
    Go to contribution page
  140. Paul Millar (Deutsches Elektronen-Synchrotron (DE))
    14/04/2015, 15:15
    Track7: Clouds and virtualization
    oral presentation
    Traditionally storage systems have had well understood responsibilities and behaviour, codified by the POSIX standards. More sophisticated systems (such as dCache) support additional functionality, such as storing data on media with different latencies (SSDs, HDDs, tapes). From a user's perspective, this forms a relatively simple adjunct to POSIX: providing optional quality-of-service...
    Go to contribution page
  141. Dr Florian Uhlig (GSI Darmstadt)
    14/04/2015, 15:15
    Track2: Offline software
    oral presentation
    The FairRoot framework is the standard framework for simulation, reconstruction and data analysis developed at GSI for the future experiments at the FAIR facility. The framework delivers base functionality for simulation, i.e. infrastructure to easily implement a set of detectors, fields, and event generators. Moreover, the framework decouples the user code (e.g. geometry description,...
    Go to contribution page
  142. Renaud Vernet (CC-IN2P3 - Centre de Calcul (FR))
    14/04/2015, 15:30
    Track6: Facilities, Infrastructure, Network
    oral presentation
    The computing needs in the HEP community are increasing steadily, but the current funding situation in many countries is tight. As a consequence experiments, data centres, and funding agencies have to rationalize resource usage and expenditures. CC-IN2P3 (Lyon, France) provides computing resources to many experiments including LHC, and is a major partner for astroparticle projects like...
    Go to contribution page
  143. Dr Mohammad Al-Turany (CERN)
    14/04/2015, 15:30
    Track2: Offline software
    oral presentation
    The commonalities between the ALICE and FAIR experiments and their computing requirements led to the development of large parts of a common software framework in an experiment-independent way. The FairRoot project has already shown the feasibility of such an approach for the FAIR experiments, and of extending it beyond FAIR to experiments at other facilities. The ALFA framework is a joint...
    Go to contribution page
  144. Dirk Hufnagel (Fermi National Accelerator Lab. (US))
    14/04/2015, 15:30
    Track7: Clouds and virtualization
    oral presentation
    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources — resources not owned by, or a priori configured for CMS — to meet peak demands. In addition to our dedicated resources we look to add computing resources from non CMS grids, cloud resources, and...
    Go to contribution page
  145. Alessandra Forti (University of Manchester (GB))
    14/04/2015, 15:30
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    After the successful first run of the LHC, data taking will restart in early 2015 with unprecedented experimental conditions, leading to increased data volumes and event complexity. In order to process the data generated in such a scenario and exploit the multicore architectures of current CPUs, the LHC experiments have developed parallelized software for data reconstruction and simulation. A...
    Go to contribution page
  146. Georgiana Lavinia Darlea (Massachusetts Inst. of Technology (US))
    14/04/2015, 15:30
    Track1: Online computing
    oral presentation
    The CMS experiment at CERN is one of the two general-purpose detectors at the Large Hadron Collider (LHC) in the Geneva area, Switzerland. Its infrastructure has undergone massive upgrades during 2013 and 2014, which led to major changes in the philosophy of its DAQ (Data AcQuisition) system. One of the major components of this system is the Storage Manager, which is responsible for buffering...
    Go to contribution page
  147. Fons Rademakers (CERN)
    14/04/2015, 15:30
    Track5: Computing activities and Computing models
    oral presentation
    CERN openlab is a unique public-private partnership between CERN and leading ICT companies. Its mission is to accelerate the development of cutting-edge solutions to be used by the worldwide HEP community. openlab phase V started in January 2015. To bring the research conducted by openlab closer to the experiments, phase V has moved to a project-based structure which allows research...
    Go to contribution page
  148. Srecko Morovic (CERN)
    14/04/2015, 15:45
    Track1: Online computing
    oral presentation
    A flexible monitoring system has been designed for the CMS File-based Filter Farm making use of modern data mining and analytics components. All the metadata and monitoring information concerning data flow and execution of the HLT are generated locally in the form of small “documents” using the JSON encoding. These documents are indexed into a hierarchy of elasticsearch (es) clusters along...
    Go to contribution page
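The idea behind the small JSON monitoring documents in the abstract above is that per-process counters can be merged cheaply at each level of the aggregation hierarchy. A sketch of that merge step, with invented field names (not the CMS schema) and plain dict merging standing in for the elasticsearch aggregation:

```python
# Sketch of hierarchical aggregation of JSON monitoring documents: each
# HLT process emits a small counter document; higher levels merge them by
# summing numeric fields. Field names are invented for illustration.

import json

def merge_docs(docs):
    merged = {}
    for doc in docs:
        for key, value in doc.items():
            merged[key] = merged.get(key, 0) + value
    return merged

doc_a = json.loads('{"events_in": 1000, "events_accepted": 12}')
doc_b = json.loads('{"events_in": 950, "events_accepted": 9}')
farm_view = merge_docs([doc_a, doc_b])
print(json.dumps(farm_view))   # {"events_in": 1950, "events_accepted": 21}
```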
  149. Mr Romain Wartel (CERN)
    14/04/2015, 15:45
    Track5: Computing activities and Computing models
    oral presentation
    This presentation gives an overview of the current computer security landscape. It describes the main vectors of compromises in the academic community including lessons learnt, reveals inner mechanisms of the underground economy to expose how our computing resources are exploited by organised crime groups, and gives recommendations how to better protect our computing infrastructures. By...
    Go to contribution page
  150. Stefano Agosta (CERN)
    14/04/2015, 15:45
    Track6: Facilities, Infrastructure, Network
    oral presentation
    With the inexorable increase in the use of mobile devices, for both general communications and mission-critical applications, wireless connectivity is required anytime and anywhere. This requirement is addressed in office buildings through the use of Wi-Fi technology but Wi-Fi is ill adapted for use in large experiment halls and complex underground environments such as the LHC tunnel and...
    Go to contribution page
  151. Dr Wenji Wu (Fermi National Accelerator Laboratory)
    14/04/2015, 15:45
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    Multicore and manycore have become the norm for scientific computing environments. Multicore/manycore platform architectures provide advanced capabilities and features that can be exploited to enhance data movement performance for large-scale distributed computing environments, such as LHC. However, existing data movement tools do not take full advantage of these capabilities and features....
    Go to contribution page
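One way data movement tools can exploit multicore platforms, as the abstract above suggests, is to copy a file in independent chunks handled by a pool of workers. A hedged sketch of that pattern (it omits the core/NUMA pinning such tools would actually do, and uses a tiny chunk size for readability):

```python
# Sketch of multicore-friendly data movement: split a file into chunks and
# copy them with a worker pool, each worker writing its chunk at the right
# offset. Illustrative only; real tools also pin workers to cores/NUMA nodes.

import os, tempfile
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4   # deliberately tiny so the example is easy to follow

def copy_chunk(src_path, dst_path, offset):
    with open(src_path, "rb") as fin, open(dst_path, "r+b") as fout:
        fin.seek(offset)
        data = fin.read(CHUNK)
        fout.seek(offset)
        fout.write(data)

src = tempfile.NamedTemporaryFile(delete=False); src.write(b"abcdefghij"); src.close()
dst = tempfile.NamedTemporaryFile(delete=False); dst.truncate(10); dst.close()

size = os.path.getsize(src.name)
with ThreadPoolExecutor(max_workers=4) as pool:
    for off in range(0, size, CHUNK):
        pool.submit(copy_chunk, src.name, dst.name, off)   # disjoint offsets

print(open(dst.name, "rb").read())   # b'abcdefghij'
```

The chunks touch disjoint byte ranges, so the writes need no locking; the `with` block waits for all workers before the result is read back.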
  152. Marko Staric (J. Stefan Institute, Ljubljana, Slovenia)
    14/04/2015, 15:45
    Track2: Offline software
    oral presentation
    We present the software framework being developed for physics analyses using the data collected by the Belle II experiment. The analysis workflow is organized in a modular way, integrated within the Belle II software framework (BASF2). A set of physics analysis modules that perform simple, well-defined tasks common to almost all physics analyses is provided. The physics modules do...
    Go to contribution page
  153. Matthias Richter (University of Oslo (NO))
    14/04/2015, 16:30
    Track1: Online computing
    oral presentation
    An upgrade of the ALICE detector is currently being prepared for the Run 3 period of the Large Hadron Collider (LHC) at CERN, starting in 2020. The physics topics under study by ALICE during this period will require the inspection of all collisions at a rate of 50 kHz for minimum-bias Pb-Pb and 200 kHz for pp and p-Pb collisions, in order to extract physics signals embedded in a large...
    Go to contribution page
  154. Dr Makoto Asai (SLAC National Accelerator Laboratory (US))
    14/04/2015, 16:30
    Track2: Offline software
    oral presentation
    The Geant4 Collaboration released a new generation of the Geant4 simulation toolkit (version 10.0) in December 2013, and continues to improve its physics, computing performance and usability. This presentation will cover the major improvements made since version 10.0. The physics evolutions include improvement of the Fritiof hadronic model, extension of the INCL++ model to higher...
    Go to contribution page
  155. Ian Gable (University of Victoria (CA))
    14/04/2015, 16:30
    Track7: Clouds and virtualization
    oral presentation
    The use of distributed IaaS clouds with the CloudScheduler/HTCondor architecture has been in production for HEP and astronomy applications for a number of years. The design has proven to be robust and reliable for batch production using HEP clouds, academic non-HEP (opportunistic) clouds and commercial clouds. Further, the system is seamlessly integrated into the existing WLCG...
    Go to contribution page
  156. Christopher Hollowell (Brookhaven National Laboratory)
    14/04/2015, 16:30
    Track3: Data store and access
    oral presentation
    The RACF (RHIC-ATLAS Computing Facility) has operated a large, multi-purpose dedicated computing facility since the mid-1990s, serving a worldwide, geographically diverse scientific community that is a major contributor to various HEPN projects. A central component of the RACF is the Linux-based worker node cluster that is used for both computing and data storage purposes. It currently has...
    Go to contribution page
  157. Rainer Schwemmer (CERN)
    14/04/2015, 16:30
    Track8: Performance increase and optimization exploiting hardware features
    oral presentation
    For Run 2 of the LHC, LHCb is exchanging a significant part of its event filter farm with new compute nodes. For the evaluation of the best performing solution, we have developed a method to convert our high level trigger application into a stand-alone, bootable benchmark image. With additional instrumentation we turned it into a self-optimising benchmark which explores techniques such as late...
    Go to contribution page
  158. Mr Jason Alexander Smith (Brookhaven National Laboratory)
    14/04/2015, 16:30
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    Using centralized configuration management, including automation tools such as Puppet, can greatly increase provisioning speed and efficiency when configuring new systems or making changes to existing systems, reduce duplication of work, and improve automated processes. However, centralized management also brings with it a level of inherent risk: a single change in just one file can...
    Go to contribution page
  159. Alexey Rybalchenko (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
    14/04/2015, 16:45
    Track1: Online computing
    oral presentation
    After Long Shutdown 2, the upgraded ALICE detector at the LHC will produce more than a terabyte of data per second. The data, a continuous un-triggered stream, have to be distributed from about 250 First Level Processor nodes (FLPs) to O(1000) Event Processing Nodes (EPNs). Each FLP receives a small subset of the detector data, chopped into sub-timeframes. One EPN...
    Go to contribution page
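The routing constraint in the abstract above is that all sub-timeframes of a given timeframe, held by different FLPs, must end up on the same EPN. A deterministic mapping from timeframe id to EPN lets every FLP decide independently, with no central coordination. A toy sketch (not the ALICE O2 transport code; node counts are invented):

```python
# Sketch of FLP -> EPN timeframe routing: a shared deterministic function
# of the timeframe id sends all sub-timeframes of one timeframe to the
# same EPN, so complete timeframes are built on a single node.

def epn_for_timeframe(timeframe_id, n_epns):
    return timeframe_id % n_epns   # simple round-robin over EPNs

n_flps, n_epns = 4, 3
# Each FLP independently routes its sub-timeframe of timeframe 10:
targets = {f"flp{i}": epn_for_timeframe(10, n_epns) for i in range(n_flps)}
print(targets)   # every FLP picks EPN 1, so timeframe 10 is built there
```

A production system would replace the modulo with a scheme that also accounts for EPN load and availability.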
  160. Eric Cano (CERN)
    14/04/2015, 16:45
    Track3: Data store and access
    oral presentation
    CERN’s tape-based archive system has collected over 70 Petabytes of data during the first run of the LHC. The Long Shutdown is being used for migrating the complete 100 Petabytes data archive to higher-density tape media. During LHC Run 2, the archive will have to cope with yearly growth rates of up to 40-50 Petabytes. In this contribution, we will describe the scalable architecture for...
    Go to contribution page
  161. Ivana Hrivnacova (IPNO, Université Paris-Sud, CNRS/IN2P3)
    14/04/2015, 16:45
    Track2: Offline software
    oral presentation
    Virtual Monte Carlo (VMC) provides an abstract interface into Monte Carlo transport codes. A user's VMC-based application, independent of the specific Monte Carlo codes, can then be run with any of the supported simulation programs. Developed by the ALICE Offline Project and further included in ROOT, the interface and implementations have reached stability during the last decade and have...
    Go to contribution page
  162. Daniel Hugo Campora Perez (CERN)
    14/04/2015, 16:45
    Track8: Performance increase and optimization exploiting hardware features
    oral presentation
    During the data taking process at the LHC at CERN, millions of collisions are recorded every second by the LHCb detector. The LHCb "Online" computing farm, counting around 15000 cores, is dedicated to the reconstruction of the events in real time, in order to filter those with interesting physics. The ones kept are later analysed "Offline" in a more precise fashion on the Grid. This imposes...
    Go to contribution page
  163. Alessandro De Salvo (Universita e INFN, Roma I (IT))
    14/04/2015, 16:45
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    The ATLAS Installation System v2 is the evolution of the original system, used since 2003. The original tool has been completely re-designed in terms of database backend and components, adding support for submission to multiple backends, including the original WMS and the new Panda modules. The database engine has been changed from plain MySQL to Galera/Percona and the table structure has been...
    Go to contribution page
  164. Dr David Colling (Imperial College Sci., Tech. & Med. (GB))
    14/04/2015, 16:45
    Track7: Clouds and virtualization
    oral presentation
    The resources CMS is using are increasingly being offered as clouds. In Run 2 of the LHC the majority of CMS CERN resources, both in Meyrin and at the Wigner Computing Centre, will be presented as cloud resources on which CMS will have to build its own infrastructure. This infrastructure will need to run all of the CMS workflows including: Tier 0, production and user analysis. In addition, the...
    Go to contribution page
  165. Ms Bowen Kan (Institute of High Energy Physics, Chinese Academy of Sciences)
    14/04/2015, 17:00
    Track8: Performance increase and optimization exploiting hardware features
    oral presentation
    The scheduler is one of the most important components of a high-performance cluster. This paper introduces a self-adaptive dispatching system (SAPS) based on Torque/Maui, which effectively increases the resource utilization of the cluster and guarantees the high reliability of the computing platform. It provides great convenience for users running various tasks on the computing platform. First of all,...
    Go to contribution page
  166. Dr Giuseppe Avolio (CERN)
    14/04/2015, 17:00
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    Complex Event Processing (CEP) is a methodology that combines data from different sources in order to identify events or patterns that need particular attention. It has gained a lot of momentum in the computing world in the past few years and is used in ATLAS to continuously monitor the behaviour of the data acquisition system, to trigger corrective actions and to guide the experiment’s...
    Go to contribution page
  167. Dr Andrew Norman (Fermilab)
    14/04/2015, 17:00
    Track3: Data store and access
    oral presentation
    Many experiments in the HEP and Astrophysics communities generate large extremely valuable datasets, which need to be efficiently cataloged and recorded to archival storage. These datasets, both new and legacy, are often structured in a manner that is not conducive to storage and cataloging with modern data handling systems and large file archive facilities. In this paper we discuss in...
    Go to contribution page
  168. Ryan Taylor (University of Victoria (CA))
    14/04/2015, 17:00
    Track7: Clouds and virtualization
    oral presentation
    The ATLAS experiment has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology....
    Go to contribution page
  169. Josef Novy (Czech Technical University (CZ))
    14/04/2015, 17:00
    Track1: Online computing
    oral presentation
    This contribution focuses on the deployment and first results of the new data acquisition system (DAQ) of the COMPASS experiment, utilizing an FPGA-based event builder. The new DAQ system is being developed under the name RCCARS (run control, configuration, and readout system). COMPASS is a high energy physics experiment situated at the SPS particle accelerator at the CERN laboratory in Geneva, Switzerland....
    Go to contribution page
  170. Norman Anthony Graf (SLAC National Accelerator Laboratory (US))
    14/04/2015, 17:00
    Track2: Offline software
    oral presentation
    As the complexity and resolution of particle detectors increases, the need for detailed simulation of the experimental setup also increases. We have developed efficient and flexible tools for detailed physics and detector response simulations which build on the power of the Geant4 toolkit but free the end user from any C++ coding. Geant4 is the de facto high-energy physics standard for...
    Go to contribution page
  171. Karsten Schwank (DESY)
    14/04/2015, 17:15
    Track3: Data store and access
    oral presentation
    We report on the status of the data preservation project at DESY for the HERA experiments and present the latest design of the storage, which is a central element of bit-preservation. The HEP experiments based at the HERA accelerator at DESY collected large and unique datasets during the period 1992 to 2007. In addition, corresponding Monte Carlo simulation datasets were produced, which...
    Go to contribution page
  172. Mr Giulio Eulisse (Fermi National Accelerator Lab. (US))
    14/04/2015, 17:15
    Track8: Performance increase and optimization exploiting hardware features
    oral presentation
    Power consumption will be a key constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics (HEP). This makes performance-per-watt a crucial metric for selecting cost-efficient computing solutions. For this paper, we have done a wide survey of current and emerging architectures becoming available on the market including x86-64 variants, ARMv7...
    Go to contribution page
  173. Katsuki Hiraide (the University of Tokyo)
    14/04/2015, 17:15
    Track1: Online computing
    oral presentation
    XMASS is a multi-purpose low-background experiment with a large volume of liquid xenon scintillator at Kamioka in Japan. The first phase of the experiment aiming at direct detection of dark matter was commissioned in 2010 and is currently taking data. The detector uses ~830 kg of liquid xenon viewed by 642 photomultiplier tubes (PMTs). Signals from 642 PMTs are amplified and read out by 1...
    Go to contribution page
  174. Mr Federico Carminati (CERN)
    14/04/2015, 17:15
    Track2: Offline software
    oral presentation
    Detector simulation is consuming at least half of the HEP computing cycles, and even so, experiments have to take hard decisions on what to simulate, as their needs greatly surpass the availability of computing resources. New experiments still in the design phase such as FCC, CLIC and ILC as well as upgraded versions of the existing LHC detectors will push further the simulation requirements....
    Go to contribution page
  175. Dr Randy Sobie (University of Victoria (CA))
    14/04/2015, 17:15
    Track7: Clouds and virtualization
    oral presentation
    The Belle II experiment is developing a global computing system for the simulation of MC data prior to collecting real collision data in the next few years. The system utilizes the grid middleware used in the WLCG together with the DIRAC workload manager. We describe how IaaS cloud resources are being integrated into the Belle II production computing system in Australia and Canada. The IaaS...
    Go to contribution page
  176. Mr Tigran Mkrtchyan (DESY)
    14/04/2015, 17:15
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    Over the past years, storage providers in scientific infrastructures have been facing a significant change in the usage profile of their resources. While in the past a small number of experiment frameworks accessed those resources in a coherent manner, now a large number of small groups or even individuals request access in a completely chaotic way. Moreover, scientific laboratories...
    Go to contribution page
  177. David Lange (Lawrence Livermore Nat. Laboratory (US))
    14/04/2015, 17:30
    Track2: Offline software
    oral presentation
    This presentation will discuss new features of the CMS simulation for Run 2, where we have made considerable improvements during the LHC shutdown to deal with the increased event complexity and rate. For physics improvements, a migration from Geant4 9.4p03 to Geant4 10.0p02 was performed. CPU performance was improved by the introduction of the Russian roulette method inside CMS...
    Go to contribution page
  178. David Yu (BNL)
    14/04/2015, 17:30
    Track3: Data store and access
    oral presentation
    Brookhaven National Lab (BNL)'s RHIC and ATLAS Computing Facility (RACF) supports science experiments, serving as the Tier-0 center for RHIC and as a U.S. Tier-1 center for ATLAS/LHC. Scientific data is still growing exponentially after each upgrade. The RACF currently manages over 50 petabytes of data on robotic tape libraries, and we expect a 50% increase in data next year. Not only do we...
    Go to contribution page
  179. Asato Orii
    14/04/2015, 17:30
    Track1: Online computing
    oral presentation
    Super-Kamiokande (SK), a 50-kiloton water Cherenkov detector, is one of the most sensitive neutrino detectors. SK is continuously collecting data as a neutrino observatory and can also be used for supernova observations by detecting supernova burst neutrinos. It is reported that Betelgeuse (640 ly) has been shrinking by 15% over 15 years (C. H. Townes et al., 2009), and this may be an...
    Go to contribution page
  180. Peter Onyisi (University of Texas (US))
    14/04/2015, 17:30
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    During LHC Run 1, the information flow through the offline data quality monitoring in ATLAS relied heavily on chains of processes polling each other's outputs for handshaking purposes.  This resulted in a fragile architecture with many possible points of failure and an inability to monitor the overall state of the distributed system.  We report on the status of a project undertaken during the...
    Go to contribution page
  181. Jakob Blomer (CERN)
    14/04/2015, 17:30
    Track8: Performance increase and optimization exploiting hardware features
    oral presentation
    Most high-energy physics analysis jobs are embarrassingly parallel except for the final merging of the output objects, which are typically histograms. Currently, the merging of output histograms scales badly: the running time for distributed merging depends not only on the overall number of bins but also on the number of partial histogram output files. That means that while the time to analyze data...
    Go to contribution page
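As a toy illustration of the scaling problem described in this abstract (not taken from the contribution itself; `merge_two`, `merge_sequential` and `merge_tree` are hypothetical helpers, and histograms are represented simply as lists of bin contents): a sequential merge walks every partial file one after another, while a pairwise tree reduction shortens the critical path to roughly log2 of the number of files when the independent pairwise merges run in parallel.

```python
from functools import reduce

def merge_two(h1, h2):
    """Bin-wise sum of two histograms (lists of bin contents)."""
    return [a + b for a, b in zip(h1, h2)]

def merge_sequential(hists):
    """Sequential merge: one pass over the list,
    critical path ~ n_files merge steps."""
    return reduce(merge_two, hists)

def merge_tree(hists):
    """Pairwise tree reduction: same total work, but the merges at
    each level are independent, so with parallel workers the critical
    path shrinks to ~ log2(n_files) merge steps."""
    while len(hists) > 1:
        nxt = []
        for i in range(0, len(hists), 2):
            if i + 1 < len(hists):
                nxt.append(merge_two(hists[i], hists[i + 1]))
            else:
                nxt.append(hists[i])  # odd one out, carried forward
        hists = nxt
    return hists[0]
```

Both strategies give identical bin contents; only the dependency structure of the merges differs.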
  182. Andrew McNab (University of Manchester (GB))
    14/04/2015, 17:30
    Track7: Clouds and virtualization
    oral presentation
    The LHCb experiment has been running production jobs in virtual machines since 2013 as part of its DIRAC-based infrastructure. We describe the architecture of these virtual machines and the steps taken to replicate the WLCG worker node environment expected by user and production jobs. This relies on the CernVM 3 system for providing root images for virtual machines. We use the cvmfs...
    Go to contribution page
  183. Bruno Lange Ramos (Univ. Federal do Rio de Janeiro (BR))
    14/04/2015, 17:45
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    In order to manage a heterogeneous and worldwide collaboration, the ATLAS experiment developed web systems that range from supporting the process of publishing scientific papers to monitoring equipment radiation levels. These systems are vastly supported by Glance, a technology that was set forward in 2004 to create an abstraction layer on top of different databases; it automatically...
    Go to contribution page
  184. Ms Bowen Kan (Institute of High Energy Physics, Chinese Academy of Sciences)
    14/04/2015, 17:45
    Track7: Clouds and virtualization
    oral presentation
    Mass data processing and analysis contribute much to the development and discoveries of a new generation of High Energy Physics. The BESIII experiment at IHEP (Institute of High Energy Physics, Beijing, China) studies particles in the tau-charm energy region, which ranges from 2 GeV to 4.6 GeV, and requires massive storage and computing resources, which is a typical kind of data intensive...
    Go to contribution page
  185. Gaelle Boudoul (Universite Claude Bernard-Lyon I (FR))
    14/04/2015, 17:45
    Track2: Offline software
    oral presentation
    CMS Detector Description (DD) is an integral part of the CMSSW software multithreaded framework. CMS software has evolved to be more flexible and to take advantage of new techniques, but many of the original concepts remain and are in active use. In this presentation we will discuss the limitations of the Run I DD model and changes implemented for the restart of the LHC program in 2015....
    Go to contribution page
  186. Luca Mascetti (CERN)
    14/04/2015, 17:45
    Track3: Data store and access
    oral presentation
    CERN IT DSS operates the main storage resources for data taking and physics analysis, mainly via three systems: AFS, CASTOR and EOS. The total usable space available for users is about 100 PB (with relative ratios 1:20:120). EOS deploys disk resources across the two CERN computer centres (Meyrin and Wigner) with a current ratio 60% to 40%. IT DSS is also providing sizable on-demand resources for...
    Go to contribution page
  187. Max Fischer (KIT - Karlsruhe Institute of Technology (DE))
    14/04/2015, 17:45
    Track8: Performance increase and optimization exploiting hardware features
    oral presentation
    With the second run period of the LHC, high energy physics collaborations will have to face increasing computing infrastructural needs. Opportunistic resources are expected to absorb many computationally expensive tasks, such as Monte Carlo event simulation. This leaves dedicated HEP infrastructure with an increased load of analysis tasks that in turn will need to process an increased volume...
    Go to contribution page
  188. Tomonori Takahashi (Research Center for Nuclear Physics, Osaka University)
    14/04/2015, 17:45
    Track1: Online computing
    oral presentation

    1. Introduction

    The J-PARC E16 experiment aims to investigate the chiral symmetry restoration in cold nuclear matter and the origin of the hadron mass through a systematic study of the mass modification of vector mesons. In the experiment, the $e^{+}e^{-}$ decay of slowly-moving $\phi$ mesons at normal nuclear matter density is intensively studied using several nuclear targets (H,...

    Go to contribution page
  189. Andrew Hanushevsky (STANFORD LINEAR ACCELERATOR CENTER)
    14/04/2015, 18:00
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    As more experiments move to a federated model of data access the environment becomes highly distributed and decentralized. In many cases this may pose obstacles in quickly resolving site issues; especially given vast time-zone differences. Spurred by ATLAS needs, Release 4 of XRootD incorporates a special mode of access to provide remote debugging capabilities. Essentially, XRootD allows a...
    Go to contribution page
  190. Mr Andreas Joachim Peters (CERN)
    14/04/2015, 18:00
    Track3: Data store and access
    oral presentation
    Archiving data to tape is a critical operation for any storage system, especially for the EOS system at CERN which holds production data from all major LHC experiments. Each collaboration has an allocated quota it can use at any given time therefore, a mechanism for archiving "stale" data is needed so that storage space is reclaimed for online analysis operations. The archiving tool that we...
    Go to contribution page
  191. Dr Tobias Stockmanns (FZ Jülich GmbH)
    14/04/2015, 18:00
    Track2: Offline software
    oral presentation
    Future particle physics experiments are increasingly searching for rare decays whose signatures in the detector are similar to those of the huge background. For such events simple selection criteria usually do not exist, which makes it impossible to implement a hardware trigger based on a small subset of detector data. Therefore all the detector data is read out continuously and processed...
    Go to contribution page
  192. Mr Pawel Szostek (CERN)
    14/04/2015, 18:00
    Track8: Performance increase and optimization exploiting hardware features
    oral presentation
    As Moore's Law drives the silicon industry towards higher transistor counts, processor designs are becoming more and more complex. The area of development includes core count, execution ports, vector units, uncore architecture and finally instruction sets. This increasing complexity leads us to a place where access to the shared memory is the major limiting factor, making feeding the cores...
    Go to contribution page
  193. Alexander Baranov (ITEP Institute for Theoretical and Experimental Physics (RU))
    14/04/2015, 18:00
    Track7: Clouds and virtualization
    oral presentation
    Computational grid (or simply 'grid') infrastructures are powerful but restricted in several aspects: grids are incapable of running user jobs compiled with a non-authentic set of libraries, and it is difficult to restructure grids to adapt to peak loads. At the same time, if grids are not loaded with user tasks, owners still have to pay for electricity and hardware maintenance. So a grid is not...
    Go to contribution page
  194. Mr Eitaro Hamada (High Energy Accelerator Research Organization (KEK))
    14/04/2015, 18:00
    Track1: Online computing
    oral presentation
    **1. Introduction** We developed a DAQ system for the J-PARC E16 experiment by using DAQ-Middleware. We evaluated the DAQ system and confirmed that it can be applied to the experiment. The DAQ system receives an average of 660 MB/spill of data (a 2-second spill in a 6-second cycle). In order to receive such a large quantity of data, we need a network-distributed system....
    Go to contribution page
  195. Dr Sergey Linev (GSI DARMSTADT)
    14/04/2015, 18:15
    Track1: Online computing
    oral presentation
    The *Data Acquisition Backbone Core* (*DABC*) is a C++ software framework that can implement and run various data acquisition solutions on Linux platforms. In 2013 version 2 of *DABC* was released with several improvements. These developments take into account the extensive practical experience gained with *DABC v1* in detector test beams and laboratory set-ups since its first release in 2009. The...
    Go to contribution page
  196. Mikhail Hushchyn (Moscow Institute of Physics and Technology, Moscow)
    14/04/2015, 18:15
    Track3: Data store and access
    oral presentation
    The amount of data produced by the LHCb experiment every year amounts to several petabytes. This data is kept on disk and tape storage systems. Disks are much faster than tapes, but are far more expensive, and hence disk space is limited. It is impossible to fit all the data taken during the experiment's lifetime on disk, but fortunately fast access to datasets is no longer needed after the...
    Go to contribution page
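A toy sketch of the kind of placement decision this abstract describes, not the experiment's actual algorithm (the function `plan_placement` and its greedy value-per-byte ranking are assumptions for illustration): given limited disk, keep the most-accessed datasets on disk and mark the rest as candidates for migration to tape.

```python
def plan_placement(datasets, disk_capacity):
    """datasets: {name: (size, recent_accesses)}.
    Greedily fill the disk budget with the datasets that have the
    highest recent accesses per unit size; everything that does not
    fit becomes a tape-migration candidate."""
    on_disk, to_tape, used = [], [], 0
    # Rank by "popularity density": accesses per unit of size.
    ranked = sorted(datasets.items(),
                    key=lambda kv: kv[1][1] / kv[1][0],
                    reverse=True)
    for name, (size, _accesses) in ranked:
        if used + size <= disk_capacity:
            on_disk.append(name)
            used += size
        else:
            to_tape.append(name)
    return on_disk, to_tape
```

A real system would of course predict future popularity rather than rank on past accesses alone, which is exactly the problem the contribution addresses.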
  197. Sameh Mannai (Universite Catholique de Louvain (UCL) (BE))
    14/04/2015, 18:15
    Track2: Offline software
    oral presentation
    The Semi-Digital Hadronic CALorimeter (SDHCAL), using Glass Resistive Plate Chambers (GRPCs), is one of the two hadronic calorimeter options proposed by the ILD (International Large Detector) project for the future International Linear Collider (ILC) experiments. It is a sampling calorimeter with 48 layers. Each layer has a size of 1 m² and is finely segmented into cells of 1 cm², ensuring a...
    Go to contribution page
  198. Christopher Hollowell (Brookhaven National Laboratory)
    14/04/2015, 18:15
    Track8: Performance increase and optimization exploiting hardware features
    oral presentation
    Non-uniform memory access (NUMA) is a memory architecture for symmetric multiprocessing (SMP) systems where each processor is directly connected to separate memory. Indirect access to other CPUs' (remote) RAM is still possible, but such requests are slower as they must also pass through that memory's controlling CPU. In concert with a NUMA-aware operating system, the NUMA hardware...
    Go to contribution page
  199. Oliver Gutsche (Fermi National Accelerator Lab. (US))
    15/04/2015, 09:00
  200. Dr Maria Girone (CERN)
    15/04/2015, 09:45
  201. Thorsten Sven Kollegger (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
    15/04/2015, 11:00
  202. Mr Tom Fifield (OpenStack Foundation)
    15/04/2015, 11:30
  203. Sebastian Lopienski (CERN)
    15/04/2015, 12:00
  204. 15/04/2015, 12:30
  205. Manuel Delfino Reznicek (Universitat Autònoma de Barcelona (ES))
    16/04/2015, 09:00
    Track3: Data store and access
    oral presentation
    Several scientific fields, including Astrophysics, Astroparticle Physics, Cosmology, Nuclear and Particle Physics, and Research with Photons, are estimating that by the 2020 decade they will require data handling systems with data volumes approaching the zettabyte scale, distributed amongst as many as 10¹⁸ individually addressable data objects (Zettabyte-Exascale systems). It may be...
    Go to contribution page
  206. Pedro Ferreira (CERN)
    16/04/2015, 09:00
    Track6: Facilities, Infrastructure, Network
    oral presentation
    Indico has come a long way since it was first used to organize CHEP 2004. More than ten years of development have brought new features and projects, widening the application's feature set and enabling event organizers to work even more efficiently. While this has boosted the tool's usage and facilitated its adoption for a remarkable 300,000 events (at CERN alone), it has also generated a whole...
    Go to contribution page
  207. Gerardo Ganis (CERN)
    16/04/2015, 09:00
    Track7: Clouds and virtualization
    oral presentation
    Cloud resources nowadays contribute an essential share of resources for computing in high-energy physics. Such resources can be provided either by private or public IaaS clouds (e.g. OpenStack, Amazon EC2, Google Compute Engine) or by volunteers' computers (e.g. LHC@Home 2.0). In any case, experiments need to prepare a virtual machine image that provides the execution environment for the...
    Go to contribution page
  208. Dr Maria Grazia Pia (Universita e INFN (IT))
    16/04/2015, 09:00
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    **Testable physics by design.** The validation of physics calculations requires the capability to thoroughly test them. The difficulty of exposing parts of the software to adequate testing can be the source of incorrect physics functionality, which in turn may generate hard to identify systematic effects in physics observables produced by the experiments. Starting from real-life examples...
    Go to contribution page
  209. Glen Cowan (Royal Holloway, University of London)
    16/04/2015, 09:00
    Track2: Offline software
    oral presentation
    High Energy Physics has been using Machine Learning techniques (commonly known as Multivariate Analysis) since the 1990s, first with Artificial Neural Networks and more recently with Boosted Decision Trees, Random Forests, etc. Meanwhile, Machine Learning has become a full-blown field of computer science. With the emergence of Big Data, data scientists are developing new Machine Learning algorithms to...
    Go to contribution page
  210. Dr Andrea Bocci (CERN)
    16/04/2015, 09:00
    Track1: Online computing
    oral presentation
    The CMS experiment has been designed with a 2-level trigger system: the Level 1 Trigger, implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a tradeoff between the complexity of the algorithms running on the available computing power, the...
    Go to contribution page
  211. Martin Gasthuber (Deutsches Elektronen-Synchrotron (DE))
    16/04/2015, 09:15
    Track3: Data store and access
    oral presentation
    Data taking and analysis infrastructures in HEP have evolved over many years into a well-understood problem domain. In contrast to HEP, third-generation synchrotron light sources and existing and upcoming free-electron lasers are confronted with an explosion in data rates, primarily driven by recent developments in 2D pixel array detectors. The next generation will produce data in the region...
    Go to contribution page
  212. Ioannis Charalampidis (CERN)
    16/04/2015, 09:15
    Track7: Clouds and virtualization
    oral presentation
    Lately there has been a trend in scientific projects to look for computing resources in the volunteering community. In addition, to reduce the development effort required to port the scientific software stack to all the known platforms, the use of Virtual Machines (VMs) as end-projects is becoming increasingly popular. Unfortunately, the installation and the interfacing with the existing...
    Go to contribution page
  213. Elisabetta Ronchieri (INFN)
    16/04/2015, 09:15
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    Geant4 is a widespread simulation system of "particles through matter" used in several experimental areas from high energy physics and nuclear experiments to medical studies. Some of its applications may involve critical use cases; therefore they would benefit from an objective assessment of the software quality of Geant4. The issue of maintainability is especially relevant for such a widely...
    Go to contribution page
  214. Gloria Corti (CERN)
    16/04/2015, 09:15
    Track2: Offline software
    oral presentation
    In the LHCb experiment all massive processing of data is handled centrally. In the case of simulated data a wide variety of different types of Monte Carlo (MC) events has to be produced, as each physics analysis needs different sets of signal and background events. In order to cope with this large set of different types of MC events, of the order of several hundreds, a numerical event type...
    Go to contribution page
  215. Andrea Perrotta (Universita e INFN, Bologna (IT))
    16/04/2015, 09:15
    Track1: Online computing
    oral presentation
    The CMS experiment has been designed with a 2-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increases in center-of-mass energy and luminosity will raise the...
    Go to contribution page
  216. Mr Joao Correia Fernandes (CERN)
    16/04/2015, 09:15
    Track6: Facilities, Infrastructure, Network
    oral presentation
    We will present an overview of the current real-time video service offering for the LHC, in particular the operation of the CERN Vidyo service will be described in terms of consolidated performance and scale: The service is an increasingly critical part of the daily activity of the LHC collaborations, topping recently more than 50 million minutes of communication in one year, with peaks of up...
    Go to contribution page
  217. Dr Patrick Fuhrmann (DESY)
    16/04/2015, 09:30
    Track3: Data store and access
    oral presentation
    With the great success of the dCache Storage Technology in the framework of the World Wide LHC Computing Grid, an increasing number of non HEP communities were attracted to use dCache for their data management infrastructure. As a natural consequence, the dCache team was presented with new use-cases that stimulated the development of interesting dCache features. Perhaps the most important...
    Go to contribution page
  218. Dario Berzano (CERN)
    16/04/2015, 09:30
    Track7: Clouds and virtualization
    oral presentation
    During the last years, several Grid computing centers chose virtualization as a better way to manage diverse use cases with self-consistent environments on the same bare infrastructure. The maturity of control interfaces (such as OpenNebula and OpenStack) opened the possibility to easily change the amount of resources assigned to each use case by simply turning on and off virtual machines....
    Go to contribution page
  219. Danilo Piparo (CERN)
    16/04/2015, 09:30
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    The sixth release cycle of ROOT is characterised by a radical modernisation of the core software technologies the toolkit relies on: language standard, interpreter, hardware exploitation mechanisms. If, on the one hand, the change offered the opportunity of consolidating the existing codebase, in the presence of such innovations, maintaining the balance between full backward compatibility and...
    Go to contribution page
  220. Esteban Gabancho (CERN)
    16/04/2015, 09:30
    Track6: Facilities, Infrastructure, Network
    oral presentation
    With the expected release of the next stable version of Invenio, CDS is preparing a 'lab' service where users will have the opportunity to experience the powerful features of the new software. After a short introduction to Invenio next, the talk will explain the mechanisms that have been implemented to allow running parallel services with the same content exposed through two different designs and the...
    Go to contribution page
  221. Markus Frank (CERN)
    16/04/2015, 09:30
    Track1: Online computing
    oral presentation
    The LHCb experiment at the LHC accelerator at CERN collects collisions of particle bunches at 40 MHz. After a first level of hardware trigger with output of 1 MHz, the physically interesting collisions are selected by running dedicated trigger algorithms in the High Level Trigger (HLT) computing farm. This farm consists of up to roughly 25000 CPU cores in roughly 1600 physical nodes...
    Go to contribution page
  222. Dominick Rocco
    16/04/2015, 09:30
    Track2: Offline software
    oral presentation
    In this paper we present the Library Event Matching (LEM) classification technique for particle identification. The LEM technique was developed for the NOvA electron neutrino appearance analysis as an alternative but complementary approach to standard multivariate methods. Traditional multivariate PIDs are based on high-level reconstructed quantities which can obscure or discard important...
    Go to contribution page
  223. Marek Kamil Denis (CERN)
    16/04/2015, 09:45
    Track7: Clouds and virtualization
    oral presentation
    Cloud federation brings an old concept into new technology, allowing independent cloud installations to share resources. Cloud computing is starting to play a major role in HEP and e-science, allowing resources to be obtained on demand. Cloud federation supports sharing between independent organizations and companies from the commercial world, such as public clouds, bringing new ways...
    Go to contribution page
  224. Dr Paul Millar (Deutsches Elektronen-Synchrotron (DE))
    16/04/2015, 09:45
    Track3: Data store and access
    oral presentation
    The availability of cheap, easy-to-use sync-and-share cloud services has split the scientific storage world into the traditional big data management systems and the very attractive sync-and-share services. With the former, the location of data is well understood while the latter is mostly operated in the Cloud, resulting in a rather complex legal situation. Beside legal issues, those two...
    Go to contribution page
  225. Bruno Silva De Sousa (CERN)
    16/04/2015, 09:45
    Track6: Facilities, Infrastructure, Network
    oral presentation
    The emergence of social media platforms in the consumer space unlocked new ways of interaction between individuals on the web. People now develop their social networks and relations based on common interests and activities, with the choice to opt in or opt out on content of their interest. This kind of platform also has an important place to fill inside large organizations and enterprises...
    Go to contribution page
  226. Stefan Gadatsch (NIKHEF (NL))
    16/04/2015, 09:45
    Track2: Offline software
    oral presentation
    In particle physics experiments data analyses generally use Monte Carlo (MC) simulation templates to interpret the observed data. These simulated samples may depend on one or multiple model parameters, such as a shifting mass parameter, and a set of such samples may be required to scan over the various parameter values. Since detailed detector MC simulation can be time-consuming, there is...
    Go to contribution page
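The interpolation idea described in this abstract can be sketched with the simplest possible case, bin-by-bin ("vertical") linear interpolation between two templates generated at two parameter values. This is a toy stand-in, not the contribution's actual method, and the function name `interpolate_template` is hypothetical:

```python
def interpolate_template(h_lo, h_hi, x_lo, x_hi, x):
    """Linearly interpolate, bin by bin, between two MC templates
    h_lo and h_hi generated at parameter values x_lo and x_hi,
    to approximate the template at an intermediate value x."""
    f = (x - x_lo) / (x_hi - x_lo)  # interpolation fraction in [0, 1]
    return [(1 - f) * lo + f * hi for lo, hi in zip(h_lo, h_hi)]
```

Vertical interpolation fails when a parameter shifts a peak horizontally (it produces a double-peaked average rather than a shifted peak), which is why more sophisticated morphing techniques are used in practice.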
  227. Tatiana Likhomanenko (National Research Centre Kurchatov Institute (RU))
    16/04/2015, 09:45
    Track1: Online computing
    oral presentation
    The main b-physics trigger algorithm used by the LHCb experiment is the so-called topological trigger. The topological trigger selects vertices which are a) detached from the primary proton-proton collision and b) compatible with coming from the decay of a b-hadron. In LHC Run 1, this trigger utilized a custom boosted decision tree algorithm, which selected an almost 100% pure sample of...
    Go to contribution page
  228. Philippe Canal (Fermi National Accelerator Lab. (US))
    16/04/2015, 09:45
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    Following the release of version 6, ROOT has entered a new area of development. It will leverage the industrial strength compiler library shipping in ROOT 6 and its support of the C++11/14 standard, to significantly simplify and harden ROOT's interfaces and to clarify and substantially improve ROOT's support for multi-threaded environments. This talk will also recap the most important new...
    Go to contribution page
  229. Steven Goldfarb (University of Michigan (US))
    16/04/2015, 10:00
    Track6: Facilities, Infrastructure, Network
    oral presentation
    The ATLAS Education and Outreach Group is in the process of migrating its public online content to a professionally designed set of web pages built on a Drupal-based content management system. Development of the front-end design passed through several key stages, including audience surveys, stakeholder interviews, usage analytics, and a series of fast design iterations, called sprints. ...
    Go to contribution page
  230. Alessandro Manzotti (The University of Chicago)
    16/04/2015, 10:00
    Track2: Offline software
    oral presentation
    CosmoSIS [http://arxiv.org/abs/1409.3409] is a modular system for cosmological parameter estimation, based on Markov Chain Monte Carlo (MCMC) and related techniques. It provides a series of samplers, which drive the exploration of the parameter space, and a series of modules, which calculate the likelihood of the observed data for a given physical model, determined by the location of a...
    Go to contribution page
  231. Mr Andreas Joachim Peters (CERN)
    16/04/2015, 10:00
    Track3: Data store and access
    oral presentation
    EOS is an open source distributed disk storage system in production since 2011 at CERN. Development focus has been on low-latency analysis use cases for LHC and non-LHC experiments and on life-cycle management using JBOD hardware for multi-PB storage installations. The EOS design implies a split of hot and cold storage and introduced a change to the traditional HSM-based workflows...
    Go to contribution page
  232. Dr Stefano Bagnasco (I.N.F.N. TORINO)
    16/04/2015, 10:00
    Track7: Clouds and virtualization
    oral presentation
    The present work aims at optimizing the use of computing resources available at the grid Italian Tier-2 sites of the ALICE experiment at CERN LHC by making them accessible to interactive distributed analysis, thanks to modern solutions based on cloud computing. The scalability and elasticity of the computing resources via dynamic (“on-demand”) provisioning is essentially limited by the size of...
    Go to contribution page
  233. Sean Benson (CERN)
    16/04/2015, 10:00
    Track1: Online computing
    oral presentation
    The LHCb experiment will record an unprecedented dataset of beauty and charm hadron decays during Run II of the LHC, set to take place between 2015 and 2018. A key computing challenge is to store and process this data, which limits the maximum output rate of the LHCb trigger. So far, LHCb has written out a few kHz of events containing the full raw sub-detector data, which are passed through a...
    Go to contribution page
  234. Parag Mhashilkar (Fermi National Accelerator Laboratory)
    16/04/2015, 10:15
    Track7: Clouds and virtualization
    oral presentation
    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each...
    Go to contribution page
  235. Mr Giulio Eulisse (Fermi National Accelerator Lab. (US))
    16/04/2015, 10:15
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    In recent years the size and scale of scientific computing has grown significantly. Computing facilities have grown to the point where energy availability and costs have become important limiting factors for data-center size and density. At the same time, power density limitations in processors themselves are driving interest in more heterogeneous processor architectures. Optimizing...
    Go to contribution page
  236. Mr Joao Correia Fernandes (CERN)
    16/04/2015, 10:15
    Track6: Facilities, Infrastructure, Network
    oral presentation
    The CERN IT department has built over the years a performant and integrated ecosystem of collaboration tools, from videoconference and webcast services to event management software. These services have been designed and evolved in very close collaboration with the various communities surrounding the laboratory and have been massively adopted by CERN users. To cope with this very heavy usage,...
    Go to contribution page
  237. Dmytro Kresan (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
    16/04/2015, 10:15
    Track1: Online computing
    oral presentation
    The R3B (Reactions with Rare Radioactive Beams) experiment is one of the planned experiments at the future FAIR facility at GSI Darmstadt. R3B will cover experimental reaction studies with exotic nuclei far off stability, thus enabling a broad physics program with rare-isotope beams, with emphasis on nuclear structure and dynamics. Several different detection subsystems as well as...
    Go to contribution page
  238. Christoph Wissing (Deutsches Elektronen-Synchrotron (DE))
    16/04/2015, 10:15
    Track3: Data store and access
    oral presentation
    The CMS experiment at the LHC relies on 7 Tier-1 centres of the WLCG to perform the majority of its bulk processing activity, and to archive its data. During the first run of the LHC, these two functions were tightly coupled as each Tier-1 was constrained to process only the data archived on its hierarchical storage. This lack of flexibility in the assignment of processing workflows...
    Go to contribution page
  239. Dr Frederik Beaujean (LMU Munich)
    16/04/2015, 10:15
    Track2: Offline software
    oral presentation

    The Bayesian analysis toolkit (BAT)
    is a C++ package centered around Markov-chain Monte Carlo sampling. It
    is used in analyses of various particle-physics experiments such as
    ATLAS and GERDA. The software has matured over the last few years to a
    version 1.0. We will summarize the lessons learned and report on the
    current developments of a complete redesign...

    Go to contribution page
  240. Dr Samuel Cadellin Skipsey
    16/04/2015, 11:00
    Track3: Data store and access
    oral presentation
    The *Object Store* model has quickly become the de-facto basis of most commercially successful mass storage infrastructure, backing so-called "Cloud" storage such as Amazon S3, but also underlying the implementation of most parallel distributed storage systems. Many of the assumptions in object store design are similar, but not identical, to concepts in the design of Grid Storage Elements,...
    Go to contribution page
  241. Vakho Tsulaia (Lawrence Berkeley National Lab. (US))
    16/04/2015, 11:00
    Track8: Performance increase and optimization exploiting hardware features
    oral presentation
    High performance computing facilities present unique challenges and opportunities for HENP event processing. The massive scale of many HPC systems means that fractionally small utilizations can yield large returns in processing throughput. Parallel applications which can dynamically and efficiently fill any scheduling opportunities the resource presents benefit both the facility (maximal...
    Go to contribution page
  242. Stefan Nicolae Stancu (CERN)
    16/04/2015, 11:00
    Track6: Facilities, Infrastructure, Network
    oral presentation
    The LHC Optical Private Network, linking CERN and the Tier1s, and the LHC Open Network Environment, linking these to the Tier2 community, successfully supported the data transfer needs of the LHC community during Run 1 and have evolved to better serve the networking requirements of the new computing models for Run 2. We present here the current status and the key changes, notably the delivered...
    Go to contribution page
  243. Dr Andrew Norman (Fermilab)
    16/04/2015, 11:00
    Track1: Online computing
    oral presentation
    The NOvA experiment uses a continuous, free-running, dead-timeless data acquisition system to collect data from the 14 kt far detector. The DAQ system reads out the more than 344,000 detector channels and assembles the information into a raw, unfiltered, high-bandwidth data stream. The NOvA trigger systems operate in parallel to the readout and asynchronously to the primary DAQ readout/event...
    Go to contribution page
  244. Oliver Keeble (CERN)
    16/04/2015, 11:00
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    The overall success of LHC data processing depends heavily on stable, reliable and fast data distribution. The Worldwide LHC Computing Grid (WLCG) relies on the File Transfer Service (FTS) as the data movement middleware for moving sets of files from one site to another. This paper describes the components of FTS3 monitoring infrastructure and how they are built to satisfy the common and...
    Go to contribution page
  245. Lorenzo Moneta (CERN)
    16/04/2015, 11:00
    Track2: Offline software
    oral presentation
    ROOT is a C++ data analysis framework, providing advanced statistical methods needed by the HEP experiments for analysing their data. R is a free software framework for statistical computing, which complements the functionality of ROOT by including some of the latest tools developed by statistics and computing research groups. We will present the ROOT-R package, a module in ROOT, which...
    Go to contribution page
  246. Luca Mascetti (CERN)
    16/04/2015, 11:15
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    CERNBox is a cloud synchronisation service for end-users: it allows them to sync and share files on all major mobile and desktop platforms (Linux, Windows, Mac OS X, Android, iOS), aiming to provide offline availability for any data stored in the CERN EOS infrastructure. The successful beta phase of the service confirmed the high demand in the community for such easily accessible cloud storage...
    Go to contribution page
  247. Geert Jan Besjes (Radboud University Nijmegen (NL))
    16/04/2015, 11:15
    Track2: Offline software
    oral presentation
    We present a software framework for statistical data analysis, called *HistFitter*, that has been used extensively in the ATLAS Collaboration to analyze data of proton-proton collisions produced by the Large Hadron Collider at CERN. Most notably, HistFitter has become a de-facto standard in searches for supersymmetric particles since 2012, with some usage for Exotic and Higgs boson physics....
    Go to contribution page
  248. Sergey Panitkin (Brookhaven National Laboratory (US))
    16/04/2015, 11:15
    Track8: Performance increase and optimization exploiting hardware features
    oral presentation
    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently uses more than 100,000 cores at well over 100 Grid sites with a peak performance of 0.3 petaFLOPS, the next LHC data-taking run will require more resources than Grid computing can possibly...
    Go to contribution page
  249. Mr Michael Poat (Brookhaven National Laboratory)
    16/04/2015, 11:15
    Track3: Data store and access
    oral presentation
    The STAR online computing environment is an intensive, ever-growing system used for first-hand data collection and analysis. As systems become more sophisticated, they produce a denser, more detailed data output, and inefficient, limited storage systems have become an impediment to the fast feedback needed by online shift crews, who rely on data processing at near real-time speed. Motivation...
    Go to contribution page
  250. Mikolaj Krzewicki (Johann-Wolfgang-Goethe Univ. (DE))
    16/04/2015, 11:15
    Track1: Online computing
    oral presentation
    The ALICE High Level Trigger (HLT) is an online reconstruction, triggering and data compression system used in the ALICE experiment at CERN. Unique among the LHC experiments, it extensively uses modern coprocessor technologies like general purpose graphic processing units (GPGPU) and field programmable gate arrays (FPGA) in the data flow. Real-time data compression is performed using a cluster...
    Go to contribution page
  251. Vincenzo Capone (DANTE)
    16/04/2015, 11:15
    Track6: Facilities, Infrastructure, Network
    oral presentation
    The GÉANT infrastructure is the backbone that serves the scientific communities in Europe for their data movement needs and their access to international research and education networks. Using the extensive fibre footprint and infrastructure in Europe the GÉANT network delivers a portfolio of services aimed to best fit the specific needs of the users, including Authentication and Authorization...
    Go to contribution page
  252. Dr Lisa Gerhardt (LBNL)
    16/04/2015, 11:30
    Track2: Offline software
    oral presentation
    SciDB is an open-source analytical database for scalable complex analytics on very large array or multi-structured data from a variety of sources, programmable from Python and R. It runs on HPC, commodity hardware grids, or in a cloud and can manage and analyze terabytes of array-structured data and do complex analytics in-database. We present an overall description of the SciDB framework...
    Go to contribution page
  253. Parag Mhashilkar (Fermi National Accelerator Laboratory)
    16/04/2015, 11:30
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    The FabrIc for Frontier Experiments (FIFE) program is an ambitious, major-impact initiative within the Fermilab Scientific Computing Division designed to lead the computing model development for Fermilab experiments and external projects. FIFE is a collaborative effort between physicists and computing professionals to provide computing solutions for experiments of varying scale, needs, and...
    Go to contribution page
  254. Dr Tony Wildish (Princeton University (US))
    16/04/2015, 11:30
    Track6: Facilities, Infrastructure, Network
    oral presentation
    The LHC experiments have traditionally regarded the network as an unreliable resource, one which was expected to be a major source of errors and inefficiency at the time their original computing models were derived. Now, however, the network is seen as much more capable and reliable. Data are routinely transferred with high efficiency and low latency to wherever computing or storage resources...
    Go to contribution page
  255. Dr Hironori Ito (Brookhaven National Laboratory (US))
    16/04/2015, 11:30
    Track3: Data store and access
    oral presentation
    Ceph-based storage solutions have become increasingly popular within the HEP/NP community over the last few years. With the current status of the Ceph project, both its object storage and block storage layers are production ready on a large scale, and even the Ceph file system (CephFS) storage layer is rapidly getting to that state as well. This contribution contains a thorough review of...
    Go to contribution page
  256. Manuel Martin Marquez (CERN)
    16/04/2015, 11:30
    Track1: Online computing
    oral presentation
    Data science is about unlocking valuable insights and obtaining deep knowledge from data. Its application enables more efficient day-to-day operations and more intelligent decision-making processes. CERN has been very successful in developing custom data-driven control and monitoring systems. Several million control devices (sensors, front-end equipment, etc.) make up these...
    Go to contribution page
  257. Dr David Chamont (LLR - École polytechnique)
    16/04/2015, 11:30
    Track8: Performance increase and optimization exploiting hardware features
    oral presentation
    The Matrix Element Method (MEM) is a well-known, powerful approach in particle physics to extract maximal information from the events arising from the LHC pp collisions. Compared to other methods requiring training, the MEM allows direct comparisons between a theory and the observation. Since the phase space to explore has a high dimensionality, the MEM is much more CPU-time consuming at the...
    Go to contribution page
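The CPU cost described above comes from numerically integrating a matrix element over a high-dimensional phase space for every single event. A hedged sketch of why this is expensive, using plain Monte Carlo integration of a toy weight function (the integrand, dimensionality and sample counts are invented for illustration, not taken from the actual MEM code):

```python
import random

def mc_integrate(f, dim, n_samples, seed=1):
    """Plain Monte Carlo estimate of the integral of f over the unit hypercube [0,1]^dim."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        point = [rng.random() for _ in range(dim)]
        total += f(point)
    return total / n_samples

def toy_weight(point):
    """Toy 'matrix element': a separable product whose true integral is exactly 1."""
    w = 1.0
    for x in point:
        w *= 2.0 * x
    return w

# Every event needs its own integral, and the number of samples required for a
# given precision grows with the variance of the integrand -- hence the CPU cost.
estimate = mc_integrate(toy_weight, dim=6, n_samples=50000)
```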
  258. Tom Uram
    16/04/2015, 11:45
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    HEP’s demand for computing resources has grown beyond the capacity of the Grid, and these demands will accelerate with the higher energy and luminosity planned for Run II. Mira, the ten petaflops supercomputer at the Argonne Leadership Computing Facility, is a potentially significant compute resource for HEP research. Through an award of fifty million hours on Mira, we have delivered millions...
    Go to contribution page
  259. Mr Andreas Joachim Peters (CERN)
    16/04/2015, 11:45
    Track3: Data store and access
    oral presentation
    In 2013, CERN IT evaluated then deployed a petabyte-scale Ceph cluster to support OpenStack use-cases in production. As of fall 2014, this cluster stores around 300 TB of data comprising more than a thousand VM images and a similar number of block device volumes. With more than a year of smooth operations, we will present our experience and tuning best-practices. Beyond the cloud storage...
    Go to contribution page
  260. Markus Frank (CERN)
    16/04/2015, 11:45
    Track2: Offline software
    oral presentation
    The detector description is an essential component used to analyse and simulate data resulting from particle collisions in high energy physics experiments. Based on the DD4hep detector description toolkit, a flexible and data-driven simulation framework was designed using the Geant4 toolkit. We present this framework and describe the guiding requirements and the...
    Go to contribution page
  261. Taylor Childers (Argonne National Laboratory (US))
    16/04/2015, 11:45
    Track8: Performance increase and optimization exploiting hardware features
    oral presentation
    Demand for Grid resources is expected to double during LHC Run II as compared to Run I; the capacity of the grid, however, will not double. The HEP community must consider how to bridge this computing gap. Two approaches to meeting this demand include targeting larger compute resources, and using the available compute resources as efficiently as possible. Argonne’s Mira, the fifth fastest...
    Go to contribution page
  262. Yu Higuchi (High Energy Accelerator Research Organization (JP))
    16/04/2015, 11:45
    Track1: Online computing
    oral presentation
    The ATLAS trigger has been used very successfully for the online event selection during the first run of the LHC, between 2009 and 2013, at centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 (L1) trigger and a software-based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of...
    Go to contribution page
  263. Dave Kelsey (STFC - Rutherford Appleton Lab. (GB))
    16/04/2015, 11:45
    Track6: Facilities, Infrastructure, Network
    oral presentation
    The world is rapidly running out of IPv4 addresses; the number of IPv6 end systems connected to the internet is increasing; WLCG and the LHC experiments may soon have access to worker nodes and/or virtual machines (VMs) possessing only an IPv6 routable address. The HEPiX IPv6 Working Group (http://hepix-ipv6.web.cern.ch/) has been investigating, testing and planning for dual-stack services on...
    Go to contribution page
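Dual-stack readiness of a service can be probed at the name-resolution level: a dual-stack host should resolve to both IPv4 and IPv6 addresses. A small sketch using Python's standard socket module (this is a generic illustration, not the working group's actual test suite):

```python
import socket

def resolved_families(host, port=443):
    """Return which IP families (IPv4/IPv6) a host name or address literal resolves to."""
    families = set()
    for family, _type, _proto, _name, _addr in socket.getaddrinfo(host, port):
        if family == socket.AF_INET:
            families.add("IPv4")
        elif family == socket.AF_INET6:
            families.add("IPv6")
    return families

# A dual-stack service would report both families; the loopback literal below
# only demonstrates the IPv4 side without needing any network access.
loopback = resolved_families("127.0.0.1")
```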
  264. Carlo Schiavi (Universita e INFN Genova (IT))
    16/04/2015, 12:00
    Track1: Online computing
    oral presentation
    Following the successful Run-1 LHC data-taking, the long shutdown gave the opportunity for significant improvements in the ATLAS trigger capabilities, as a result of the introduction of new or improved Level-1 trigger hardware and significant restructuring of the DAQ infrastructure. To make use of these new capabilities, the High-Level trigger (HLT) software has been to a large extent...
    Go to contribution page
  265. Ms Xiaofeng LEI (Institute of High Energy Physics, University of Chinese Academy of Sciences)
    16/04/2015, 12:00
    Track2: Offline software
    oral presentation
    In the past years, we have successfully applied Hadoop to high-energy physics analysis. Although we have improved the efficiency of data analysis and reduced the cost of cluster building, there is still room for optimization, such as static pre-selection, inefficient random data reading, and the I/O bottleneck caused by FUSE, which is used to access HDFS. In order to...
    Go to contribution page
  266. Mr Andreas Joachim Peters (CERN)
    16/04/2015, 12:00
    Track3: Data store and access
    oral presentation
    The EOS storage software was designed to cover CERN disk-only storage use cases in the medium term, trading scalability against latency. To cover and prepare for long-term requirements, the CERN IT data and storage services group (DSS) is actively conducting R&D and open source contributions to experiment with a next-generation storage software based on Ceph. Ceph provides a scale-out object...
    Go to contribution page
  267. Duncan Rand (Imperial College Sci., Tech. & Med. (GB))
    16/04/2015, 12:00
    Track6: Facilities, Infrastructure, Network
    oral presentation
    Named Data Networking (NDN) is an emerging network technology based around requesting data from a network rather than from a specific host. Intermediate routers in the network cache the data. Each data packet must be signed to allow its provenance to be verified. Data blocks are addressed by a unique name which consists of a hierarchical path, a name and attributes. An example of a valid address...
    Go to contribution page
  268. Dr Robert Andrew Currie (Imperial College Sci., Tech. & Med. (GB))
    16/04/2015, 12:00
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    The DIRAC INTERWARE system was originally developed within the LHCb VO as a common interface to access distributed resources, i.e. grids, clouds and local batch systems. It has been used successfully in this context by the LHCb VO for a number of years. In April 2013 the GridPP consortium in the UK decided to offer a DIRAC service to a number of small VOs. The majority of these had been...
    Go to contribution page
  269. Robert Kutschke (Fermilab)
    16/04/2015, 12:15
    Track2: Offline software
    oral presentation
    The art event processing framework is used by almost all new experiments at Fermilab, and by several outside of Fermilab. All use art as an external product in the same sense that the compiler, ROOT, Geant4, CLHEP and boost are external products. The art team has embarked on a campaign to document art and develop training materials for new users. Many new users of art have little or no...
    Go to contribution page
  270. Christos Papadopoulos (Colorado State University)
    16/04/2015, 12:15
    Track6: Facilities, Infrastructure, Network
    oral presentation
    The Computing Models of the LHC experiments continue to evolve from the simple hierarchical MONARC model towards more agile models where data is exchanged among many Tier2 and Tier3 sites, relying both on strategic data placement and on an increased use of remote access with caching, through CMS's AAA and ATLAS' FAX projects, for example. The challenges presented...
    Go to contribution page
  271. Dr Andrew Norman (Fermilab)
    16/04/2015, 12:15
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    oral presentation
    As high energy physics experiments have grown, their operational needs and the requirements they place on computing systems have changed. These changes often require new technical solutions to meet the increased demands and functionality of the science. How do you effect sweeping change to core infrastructure without causing major interruptions to the scientific programs? This paper explores the...
    Go to contribution page
  272. Arnim Balzer (Universiteit van Amsterdam)
    16/04/2015, 12:15
    Track1: Online computing
    oral presentation
    The High Energy Stereoscopic System (H.E.S.S.) is an array of five imaging atmospheric Cherenkov telescopes located in the Khomas Highland in Namibia. Very high energy gamma rays are detected using the Imaging Atmospheric Cherenkov Technique. It separates the Cherenkov light emitted by the background of mostly hadronic air showers from the light emitted by air showers induced by gamma rays....
    Go to contribution page
  273. Niko Neufeld (CERN)
    16/04/2015, 14:00
  274. Michael Ernst (Unknown)
    16/04/2015, 14:45
  275. Jakob Blomer (CERN)
    16/04/2015, 16:00
  276. Prof. Daniele Bonacorsi (University of Bologna)
    16/04/2015, 16:30
  277. Tadashi Maeno (Brookhaven National Laboratory (US))
    16/04/2015, 16:40
    Track5: Computing activities and Computing models
  278. Andrew McNab (University of Manchester (GB))
    16/04/2015, 17:05
  279. Danilo Piparo (CERN)
    16/04/2015, 17:30
    Track8: Performance increase and optimization exploiting hardware features
  280. Dr Andrea Bocci (CERN)
    17/04/2015, 09:00
    Track1: Online computing
  281. Dr Ivan Kisel (Johann-Wolfgang-Goethe Univ. (DE))
    17/04/2015, 09:25
  282. Latchezar Betev (CERN)
    17/04/2015, 09:50
    Track3: Data store and access
  283. Marco Clemencic (CERN)
    17/04/2015, 10:45
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
  284. Jose Flix Molina (Centro de Investigaciones Energ. Medioambientales y Tecn. (ES))
    17/04/2015, 11:10
  285. Andre Sailer (CERN), Andrew David Lahiff (STFC - Rutherford Appleton Lab. (GB)), Anna Elizabeth Woodard (University of Notre Dame (US)), Aram Santogidis (CERN), Christophe Haen (CERN), Dai Kobayashi (Tokyo Institute of Technology (JP)), Mr Erekle Magradze (Georg-August-Universitaet Goettingen (DE)), Yuji Kato
    17/04/2015, 11:35
  286. Dr Richard Philip Mount (SLAC National Accelerator Laboratory (US))
    17/04/2015, 12:30
  287. Hiroshi Sakamoto (University of Tokyo (JP))
    17/04/2015, 12:50
  288. Prof. Douglas Thain (University of Notre Dame), Haiyan Meng (U), Prof. Michael Hildreth (University of Notre Dame)
    Track5: Computing activities and Computing models
    poster presentation
    The reproducibility of scientific results increasingly depends upon the preservation of computational artifacts. Although preserving a computation to be used later sounds trivial, it is surprisingly difficult due to the complexity of existing software and systems. Implicit dependencies, networked resources, and shifting compatibility all conspire to break applications that appear to work well....
    Go to contribution page
  289. Shaun de Witt (STFC)
    Track3: Data store and access
    poster presentation
    For many years the Storage Resource Manager (SRM) has been the de-facto federation technology used by WLCG. This technology has, along with the rest of the middleware stack, mediated the transfer of many Petabytes of data since the start of data taking. In recent years, other technologies have become more popular as federation technologies because they offer additional functionalities that...
    Go to contribution page
  290. Dominick Rocco
    Track2: Offline software
    poster presentation
    The NuMI Off-axis Neutrino Experiment (NOvA) is designed to study neutrino oscillations in the NuMI beam at Fermilab. Neutrinos at the Main Injector (NuMI) is currently being upgraded to provide 700 kW for NOvA. A 14 kt Far Detector in Ash River, MN and a functionally identical 0.3 kt Near Detector at Fermilab are positioned 810 km apart in the NuMI beam line. The fine granularity of the NOvA...
    Go to contribution page
  291. Domenico D'Urso (Universita e INFN (IT))
    Track2: Offline software
    poster presentation
    A flexible and modular data format implementation for HEP applications is presented. Designed to address HEP data issues, the implementation is based on the CERN ROOT toolkit. The design aims to create a data format that is as modular as possible and easily upgradable and extendable. Event information is split into different files, which may contain different parts of the event (i.e....
    Go to contribution page
  292. Mr Benjamin Farnham (CERN), Mr Piotr Pawel Nikiel (CERN)
    Track1: Online computing
    poster presentation
    This paper describes a new approach for generic design and efficient development of OPC UA servers. Development starts with creation of a design file, in XML format, describing an object-oriented information model of the target system or device. Using this model, the framework generates an executable OPC UA server application, which exposes the per-design OPC UA address space, without the...
    Go to contribution page
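The design-file-driven generation described above can be sketched as a miniature code generator: parse an XML model of the target system and emit one class skeleton per device type. The XML schema, class and variable names below are hypothetical, chosen only to illustrate the idea, and are not the framework's actual design format:

```python
import xml.etree.ElementTree as ET

# A hypothetical design file describing one device class (not the real schema).
DESIGN_XML = """
<design>
  <class name="PowerSupply">
    <variable name="voltage" type="double"/>
    <variable name="current" type="double"/>
  </class>
</design>
"""

def generate_stubs(design_xml):
    """Emit a Python class skeleton for each device class found in the design file."""
    root = ET.fromstring(design_xml)
    sections = []
    for cls in root.findall("class"):
        lines = ["class {}:".format(cls.get("name"))]
        for var in cls.findall("variable"):
            # Each design variable becomes an attribute with a type note.
            lines.append("    {} = None  # type: {}".format(var.get("name"), var.get("type")))
        sections.append("\n".join(lines))
    return "\n\n".join(sections)

stub = generate_stubs(DESIGN_XML)
```

The real framework generates a full OPC UA address space and server executable rather than plain classes, but the model-to-code step follows the same pattern.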
  293. Mr Alessandro Italiano (INFN-Bari)
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    This work presents the results of several tests that demonstrate the capabilities of HTCondor as the batch system for a big computing farm serving both LHC use cases and other scientists. The HTCondor testbed hosted at INFN-Bari is made of about 300 nodes and 15,000 CPU slots, and is meant to sustain about 50,000 jobs in the queue. The computing farm is used both by Grid users of many VOs (HEP,...
    Go to contribution page
  294. Janusz Martyniak (Imperial College London)
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The international Muon Ionisation Cooling Experiment (MICE) is designed to demonstrate the principle of muon ionisation cooling for the first time, for application to a future Neutrino Factory or Muon Collider. The experiment is currently under construction at the ISIS synchrotron at the Rutherford Appleton Laboratory, UK. As presently envisaged, the programme is divided into three Steps:...
    Go to contribution page
  295. Andrea Formica (CEA/IRFU,Centre d'etude de Saclay Gif-sur-Yvette (FR))
    Track3: Data store and access
    poster presentation
    Usage of Conditions Data in ATLAS is extensive for offline reconstruction and analysis (for example: alignment, calibration, data quality). The system is based on the LCG Conditions Database infrastructure, with read and write access via an ad hoc C++ API (COOL), a system which was developed before Run 1 data taking began. The infrastructure dictates that the data is organized into separate...
    Go to contribution page
  296. Andrea Biagioni (INFN)
    Track8: Performance increase and optimization exploiting hardware features
    poster presentation
    NaNet-10 is a four-port 10GbE PCIe Network Interface Card designed for low-latency real-time operations with GPU systems. For this purpose the design includes a UDP offload module, for fast and deterministic clock-cycle handling of the transport layer protocol, plus a GPUDirect P2P/RDMA engine for low-latency communication with NVIDIA Tesla GPU devices. A dedicated module (Merger) can...
    Go to contribution page
  297. Mike Hildreth (University of Notre Dame (US))
    Track2: Offline software
    poster presentation
    The CMS Simulation uses minimum bias events created by a "standard" event generator (e.g., Pythia) to simulate the additional interactions due to peripheral proton-proton collisions in each bunch crossing at the LHC (also known as pileup). Due to the inherent time constants of the CMS front-end electronics, many bunch crossings before and after the central bunch crossing of interest must be...
    Go to contribution page
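The mixing procedure described here amounts to drawing a Poisson-distributed number of minimum-bias interactions for each bunch crossing in a window around the signal crossing and overlaying their hits on the signal event. A simplified toy version, with the event representation and all parameters invented for illustration (the real CMSSW mixing module is far more elaborate):

```python
import math
import random

def poisson(lam, rng):
    """Sample a Poisson variate with Knuth's multiplication method."""
    threshold = math.exp(-lam)
    count, product = 0, 1.0
    while True:
        product *= rng.random()
        if product <= threshold:
            return count
        count += 1

def mix_pileup(signal_hits, minbias_pool, mean_pileup, n_crossings, rng):
    """Overlay Poisson-distributed min-bias events on the signal for each crossing."""
    mixed = [(0, hit) for hit in signal_hits]  # crossing 0 carries the signal
    for crossing in range(-n_crossings, n_crossings + 1):
        for _ in range(poisson(mean_pileup, rng)):
            event = rng.choice(minbias_pool)
            mixed.extend((crossing, hit) for hit in event)
    return mixed

rng = random.Random(0)
minbias_pool = [[0.3, 0.5], [1.1], [0.2, 0.7, 0.4]]  # toy events: hits are energies
mixed = mix_pileup([5.0, 7.5], minbias_pool, mean_pileup=20, n_crossings=2, rng=rng)
```

Tagging each hit with its bunch crossing mirrors why the front-end time constants matter: hits from neighbouring crossings pile up in the same readout window.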
  298. Alessandro De Salvo (Universita e INFN, Roma I (IT)), Dr Silvio Pardi (INFN)
    Track7: Clouds and virtualization
    poster presentation
    The advancements in technologies on provisioning end-to-end network services over geographical networks, together with the consolidation of Cloud Technologies, allow the creation of innovative scenarios for data centers. In this work, we present the architecture and performance studies concerning a prototype of distributed Tier2 infrastructure for HEP, instantiated between the two...
    Go to contribution page
  299. Ian Peter Collier (STFC - Rutherford Appleton Lab. (GB))
    Track7: Clouds and virtualization
    poster presentation
    The RAL Tier-1 has been deploying production virtual machines for several years. As we move to providing a production private cloud, managed using OpenNebula, we have experimented with a range of different ways of deploying virtual machine images on hypervisors. We present a quantitative comparison of a variety of virtual machine image and storage combinations, including monolithic Scientific...
    Go to contribution page
  300. Dr Go Iwai (KEK)
    Track2: Offline software
    poster presentation
    We describe the development of an environment for Geant4, consisting of the application and data, that gives users a faster and easier way to access Geant4 applications without having to download and build the software locally. The environment is platform neutral and offers users near-real-time performance. The environment consists of data and Geant4 libraries built using the LLVM...
    Go to contribution page
  301. Dr Dario Berzano (CERN)
    Track7: Clouds and virtualization
    poster presentation
    One of the most important steps of the software lifecycle is Quality Assurance: this process comprises both automatic tests and manual reviews, all of which must pass successfully before the software is approved for production. Some tests, such as source code static analysis, are executed on a single dedicated service: in High Energy Physics, a full simulation and reconstruction chain on a...
    Go to contribution page
  302. Andrew John Washbrook (University of Edinburgh (GB))
    Track8: Performance increase and optimization exploiting hardware features
    poster presentation
    Multivariate training and classification methods using machine learning techniques are commonly applied in data analysis at HEP experiments. Despite their success in looking for signatures of new physics beyond the standard model it is known that some of these techniques are computationally bound when input sample size and model complexity are increased. Investigating opportunities for...
    Go to contribution page
  303. Dr Domenico Giordano (CERN)
    Track7: Clouds and virtualization
    poster presentation
    Helix Nebula – the Science Cloud Initiative is a public-private partnership between Europe's leading scientific research organisations (notably CERN, EMBL and ESA) and European IT cloud providers that aims to establish a cloud-computing platform for data-intensive science within Europe. Over the past two years, Helix Nebula has built a federated cloud framework – the Helix Nebula...
    Go to contribution page
  304. Alexey Anisenkov (Budker Institute of Nuclear Physics (RU))
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The variety of the ATLAS Computing Infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data which are needed by the various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services...
    Go to contribution page
  305. Dr Stephane Guillaume Poss (Alpes Lasers SA)
    Track7: Clouds and virtualization
    poster presentation
    We provide a report on ALDIRAC, the first DIRAC extension for a commercial application. DIRAC is a complete distributed computing solution, initially implemented for the LHCb experiment but now used by a wider community. The ALDIRAC extension is designed for the Alpes Lasers SA company in Neuchatel, Switzerland, to perform the simulation of the properties of Quantum Cascade Lasers on a Cloud...
    Go to contribution page
  306. Mr Tadeas Bilka (Charles University in Prague)
    Track2: Offline software
    poster presentation
    The Belle II experiment will start taking data in 2017. The SuperKEKB accelerator will deliver a factor 40 higher luminosity in comparison to its predecessor, KEKB, to acquire a 50 times larger data sample of B-B̅ events. In order to manage higher occupancy and background, a new silicon vertex detector consisting of two inner layers of DEPFET pixel sensors surrounded by four layers of...
    Go to contribution page
  307. Javier Jimenez Pena (Instituto de Fisica Corpuscular (ES))
    Track2: Offline software
    poster presentation
    ATLAS is equipped with a tracking system built using different technologies, silicon planar sensors (pixel and micro-strip) and gaseous drift-tubes, all embedded in a 2 T solenoidal magnetic field. For the LHC Run II, the system has been upgraded with the installation of a new pixel layer, the Insertable B-layer (IBL). Offline track alignment of the ATLAS tracking system has to deal with about...
    Go to contribution page
  308. Dr Vladimir Sapunenko (INFN-CNAF (IT))
    Track3: Data store and access
    poster presentation
    Data management constitutes one of the major challenges that a geographically-distributed data centre has to face, especially when remote data access is involved. We discuss an integrated solution which enables transparent and efficient access to online and nearline data through high latency networks. This is based on the joint use of the General Parallel File System (GPFS) and of the Tivoli...
    Go to contribution page
  309. Dr Huiyoung Ryu (KISTI), Dr Junghyun Kim (KISTI), Prof. Kihyeon Cho (KISTI)
    Track2: Offline software
    poster presentation
    To search for new physics beyond the standard model, we carry out research and development of a simulation toolkit based on evolving computing architectures, in international collaboration. Using these tools, we study particle physics beyond the standard model. We present the current status of this research and development.
    Go to contribution page
  310. Robert Fischer (Rheinisch-Westfaelische Tech. Hoch. (DE))
    Track5: Computing activities and Computing models
    poster presentation
    Within CERN's new open data portal, the CMS collaboration provides a substantial fraction of its recorded data to the public. To explore and analyse the data, computing resources, an analysis framework, and documentation are required as well. While scientists can use C++ and the experiment software CMSSW in virtual machines, a simpler approach is needed, e.g. for university students who are in...
    Go to contribution page
  311. Lukas Alexander Heinrich (New York University (US))
    Track5: Computing activities and Computing models
    poster presentation
    Long before data taking, ATLAS established a policy that all analyses must be preserved. In the initial data-taking period, this was achieved with various tools and techniques. ATLAS is now reviewing its analysis preservation with the aim of bringing coherence and robustness to the process, and with a clearer view of the level of reproducibility that is reasonably achievable. The secondary...
    Go to contribution page
  312. Mr Phil Demar (Fermilab), Dr Wenji Wu (Fermi National Accelerator Laboratory)
    Track6: Facilities, Infrastructure, Network
    poster presentation
    Software-Defined Networking (SDN) has emerged as a major development direction in network technology. Conceptually, SDN enables customization of forwarding through network infrastructure on a per-flow basis. With SDN, a high impact LHC data flow could be allocated a “slice” of the network infrastructure. Functionally, the data flow would have a private path through the network infrastructure,...
    Go to contribution page
  313. Lu Wang
    Track6: Facilities, Infrastructure, Network
    poster presentation
    The cluster of CC-IHEP is a middle-sized computing system providing 10,000 CPU cores, 3 PB of disk storage, and 40 GB/s of I/O throughput. Its 1000+ users come from a series of HEP experiments including ATLAS, BESIII, CMS, DYB, JUNO, YBJ, etc. In such a system, job statistics are necessary to find performance bottlenecks, locate software pitfalls, identify suspicious behaviors and make resource...
    Go to contribution page
  314. Michi Hostettler (Universitaet Bern (CH))
    Track8: Performance increase and optimization exploiting hardware features
    poster presentation
    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing Centre, was in 2014 the highest-ranked European system on the TOP500 list, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on...
    Go to contribution page
  315. Graeme Stewart (University of Glasgow (GB))
    Track2: Offline software
    poster presentation
    To deal with the Big Data flood from the ATLAS detector, most events have to be rejected in the trigger system. The trigger rejection is complicated by the presence of a large number of minimum-bias events – the pileup. To limit pileup effects in the high-luminosity environment of the LHC Run-2, ATLAS relies on full tracking provided by the Fast TracKer (FTK) implemented with custom...
    Go to contribution page
  316. Shima Shimizu (Kobe University (JP))
    Track1: Online computing
    poster presentation
    The immense rate of proton-proton collisions at the Large Hadron Collider (LHC) must be reduced from the nominal bunch-crossing rate of 40 MHz to approximately 1 kHz before the data can be written out for offline analysis. The ATLAS Trigger System performs real-time selection of these events in order to achieve this reduction. Dedicated selection of events containing jets is uniquely challenging...
    Go to contribution page
  317. Andreu Pacheco Pages (Institut de Física d'Altes Energies - Barcelona (ES))
    Track2: Offline software
    poster presentation
    In this presentation we will review the ATLAS Monte Carlo production setup including the different production steps involved in full and fast detector simulation. A report on the Monte Carlo production campaigns during Run-I, Long Shutdown 1 (LS1) and status of the production for Run-2 will be presented. The presentation will include the details on various performance aspects....
    Go to contribution page
  318. James Catmore (University of Oslo (NO)), Roger Jones (Lancaster University (GB))
    Track2: Offline software
    poster presentation
    Based on experience gained from Run-I of the LHC, the ATLAS vertex reconstruction group has developed a refined primary vertex reconstruction strategy for Run-II. With instantaneous luminosity exceeding 10^34 cm^-2 s^-1, an average of 40 to 50 pp collisions per bunch crossing are expected. Together with the increase of the center-of-mass collision energy from 8 TeV to 13 TeV, this will create...
    Go to contribution page
  319. Andreas Salzburger (CERN)
    Track2: Offline software
    poster presentation
    The successful physics program of Run-1 of the LHC, with the discovery of the Higgs boson in 2012, has put a strong emphasis on design studies for future upgrades of the existing LHC detectors and for future accelerators. Ideas on how to cope with instantaneous luminosities well beyond the current specifications of the LHC in future tracking detectors are emerging and need sufficiently accurate...
    Go to contribution page
  320. Fabian Glaser (Georg-August-Universitaet Goettingen (DE))
    Track7: Clouds and virtualization
    poster presentation
    User analysis job demands can exceed the available computing resources, especially before major conferences, and ATLAS physics results might be delayed by this lack of resources. For these reasons, cloud R&D activities are now included in the skeleton of the ATLAS computing model, which has been extended to use resources from commercial and private cloud providers to satisfy the...
    Go to contribution page
  321. Gianluca Cerminara (CERN)
    Track2: Offline software
    poster presentation
    Fast and efficient methods for the calibration and the alignment of the detector are a key asset to exploit the physics potential of the Compact Muon Solenoid (CMS) detector and to ensure timely preparation of results for conferences and publications. To achieve this goal, the CMS experiment has set up a powerful framework. This includes automated workflows in the context of a prompt...
    Go to contribution page
  322. Mr Erekle Magradze (Georg-August-Universitaet Goettingen (DE))
    Track6: Facilities, Infrastructure, Network
    poster presentation
    High-throughput computing platforms consist of complex infrastructure and provide a number of services prone to failure. To mitigate the impact of failures on the quality of the provided services, constant monitoring and timely reaction are required, which is impossible without automation of system administration processes. This paper introduces a way of automating the process of...
    Go to contribution page
  323. Oliver Schulz (MPI for Physics, Munich)
    Track2: Offline software
    poster presentation
    GERDA is an ultra-low-background experiment designed to search for the neutrinoless double beta decay of Ge-76. The main background sources of such an experiment are minute radioactive contaminations of the experimental setup itself. Gaining a good understanding of the individual contributions to this radioactive background is vital not only for data analysis, but also guides the design...
    Go to contribution page
  324. Katsuaki Tomoyori (Japan Atomic Energy Agency)
    Track2: Offline software
    poster presentation
    In neutron protein crystallography, it should also be emphasized that the weak Bragg reflections due to the large unit cells may be buried beneath the strong background caused by the incoherent scattering of hydrogen atoms. Therefore, background estimation from the source is more reliable for improving the accuracy of the Bragg integral intensity. We propose the adoption of Statistics-sensitive...
    Go to contribution page
  325. Mr Olivier Couet (CERN)
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The ROOT reference guide is part of the code: class descriptions, method usage, examples, etc. are all embedded in the code itself. Doxygen is the reference model for extracting documentation from such a self-describing system. The ROOT documentation requires the development of specific tools (scripts) in the Doxygen context. The proposed project is the writing of these tools.
    Go to contribution page
  326. Rafal Zbigniew Grzymkowski (Polish Academy of Sciences (PL))
    Track7: Clouds and virtualization
    poster presentation
    The role of cloud computing technology in distributed computing for HEP experiments is growing rapidly. Some experiments (ATLAS, BES-III, LHCb, …) already exploit private and public cloud resources for data processing. Future experiments such as Belle II or the upgraded LHC experiments will largely rely on the availability of cloud resources, and therefore their computing models have to be adjusted...
    Go to contribution page
  327. Dr Xiaomei Zhang (Institute of High Energy Physics)
    Track5: Computing activities and Computing models
    poster presentation
    Distributed computing is necessary nowadays for high energy physics experiments to organize heterogeneous computing resources all over the world to process enormous amounts of data. The BESIII experiment in China, which has aggregated about 3 PB of data over the last 5 years, has established its own distributed computing system, based on DIRAC, as a supplement to local clusters, collecting...
    Go to contribution page
  328. Manuel Martin Marquez (CERN)
    Track3: Data store and access
    poster presentation
    CERN’s accelerator complex is an extreme data generator: every second a significant amount of highly heterogeneous data coming from control equipment and monitoring agents is persisted and needs to be analysed. Over the decades, CERN’s research and engineering teams have applied different approaches, techniques and technologies. This situation has minimized the necessary...
    Go to contribution page
  329. Luca Mazzaferro (Universita e INFN Roma Tor Vergata (IT))
    Track8: Performance increase and optimization exploiting hardware features
    poster presentation
    The possible usage of HPC resources by ATLAS is now becoming viable due to the changing nature of these systems and it is also very attractive due to the need for increasing amounts of simulated data. In recent years the architecture of HPC systems has evolved, moving away from specialized monolithic systems, to a more generic linux type platform. This change means that the deployment of...
    Go to contribution page
  330. Andreas Petzold (KIT - Karlsruhe Institute of Technology (DE))
    Track7: Clouds and virtualization
    poster presentation
    The possibilities of cloud storage for use in HEP computing have been the topic of many studies and trials. The typical cloud storage virtues (easy accessibility and expandability, relatively low cost, and a lightweight interface) have become available for local storage as well. Initially part of larger environments like OpenNebula or OpenStack Swift, vendors now offer value storage with...
    Go to contribution page
  331. Lorenzo Moneta (CERN)
    Track2: Offline software
    poster presentation
    Differentiation is ubiquitous in high energy physics, for instance for minimization algorithms used for fitting and statistical analysis, detector alignment and calibration, and for theoretical physics. Automatic differentiation (AD) avoids well-known limitations in round-offs and speed, which symbolic and numerical differentiation suffer from, by transforming the source code of...
    Go to contribution page
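The core idea of AD mentioned in the abstract above can be illustrated with a minimal forward-mode sketch using dual numbers (a toy Python illustration, not the source-transformation approach the contribution describes): each quantity carries its value and its derivative, and the arithmetic rules propagate both exactly, avoiding the round-off errors of numerical differentiation.

```python
class Dual:
    """Forward-mode AD with dual numbers: each value carries its derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1  # f'(x) = 6x + 2

x = Dual(2.0, 1.0)  # seed the derivative dx/dx = 1
y = f(x)
print(y.val, y.der)  # -> 17.0 14.0 (f(2) = 17, f'(2) = 14, both exact)
```

Symbolic differentiation would blow up expression size for deeply nested code; the dual-number approach above works at machine precision on the same operations the program already performs.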
  332. Mrs Silvia Arezzini (INFN - Pisa)
    Track8: Performance increase and optimization exploiting hardware features
    poster presentation
    "Clusteralive" is an integrated system developed to monitor and manage a few important tasks in our HPC environment. We also have other management systems, but with "Clusteralive" we can see immediately, at a glance, whether the clusters are up and running, and be sure that the most important functionality is properly instantiated. "Clusteralive" is a php...
    Go to contribution page
  333. Marian Zvada (University of Nebraska (US))
    Track3: Data store and access
    poster presentation
    Over the past three years, the CMS Collaboration has developed the “Any Data, Anytime, Anywhere” technology to make use of a global data federation that is based on the XrootD protocol. The federation is now deployed across virtually all Tier-1 and Tier-2 sites in the CMS distributed computing system. This data federation gives workflows greater flexibility for location of execution, which has...
    Go to contribution page
  334. Clint Allan Richardson (Boston University (US))
    Track1: Online computing
    poster presentation
    The two-level trigger system employed by CMS consists of the Level 1 (L1) Trigger, which is implemented using custom-built electronics, and the High Level Trigger (HLT), a farm of commercial CPUs running a streamlined version of the offline CMS reconstruction software. The operational L1 output rate of 100 kHz, together with the number of CPUs in the HLT farm, imposes a fundamental constraint...
    Go to contribution page
  335. Mrs Natalia Ratnikova (Fermilab)
    Track3: Data store and access
    poster presentation
    Storage capacity at CMS Tier-1 and Tier-2 sites reached over 100 Petabytes in 2014, and will be substantially increased during Run 2 data taking. The allocation of storage for individual users' analysis data, which is not accounted as centrally managed storage space, will be increased to up to 40%. For comprehensive tracking and monitoring of the storage utilization across all...
    Go to contribution page
  336. Mr Igor Mandrichenko (FNAL)
    Track3: Data store and access
    poster presentation
    Conditions or calibration data are an important part of high-energy physics experiments. This kind of data is typically organized in terms of intervals of validity, which require a special type of database table schema and API structure. At Fermilab we have designed and developed ConDB, a general tool to store, manage and retrieve conditions data organized into validity intervals in a...
    Go to contribution page
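The interval-of-validity organization described in the abstract above can be sketched with a toy lookup structure (hypothetical names; ConDB's actual schema and API are not shown in the abstract): each payload is keyed by the start of its validity interval, and a lookup returns the payload whose interval covers a given time.

```python
import bisect

class IovStore:
    """Toy interval-of-validity store: payloads keyed by interval start time."""
    def __init__(self):
        self._starts = []    # sorted interval start times
        self._payloads = []  # payload valid from the matching start time

    def add(self, start, payload):
        i = bisect.bisect_left(self._starts, start)
        self._starts.insert(i, start)
        self._payloads.insert(i, payload)

    def lookup(self, t):
        """Return the payload valid at time t: the latest interval starting at or before t."""
        i = bisect.bisect_right(self._starts, t) - 1
        if i < 0:
            raise KeyError("no conditions valid at t=%r" % t)
        return self._payloads[i]

store = IovStore()
store.add(0, {"pedestal": 1.0})    # valid from t=0
store.add(100, {"pedestal": 1.2})  # supersedes the first payload at t=100
print(store.lookup(50))   # -> {'pedestal': 1.0}
print(store.lookup(150))  # -> {'pedestal': 1.2}
```

This is the property that makes conditions tables unlike ordinary relational data: queries are by containment in an interval, not by exact key, which is why a dedicated schema and API are needed.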
  337. Mr Michael Poat (Brookhaven National Laboratory)
    Track6: Facilities, Infrastructure, Network
    poster presentation
    The STAR online computing environment is an intensive, ever-growing system used for real-time data collection and analysis. Composed of heterogeneous and sometimes custom-tuned machine groups (the Data Acquisition or DAQ computing, Trigger group, Slow Control and user-end data quality monitoring resources do not have the same requirements), the computing infrastructure was managed by manual...
    Go to contribution page
  338. Marko Slyz (FNAL)
    Track5: Computing activities and Computing models
    poster presentation
    The Dark Energy Survey (DES) uses a CCD camera installed in the Blanco telescope in Cerro Tololo, Chile. The goal of the survey is to study the effect known as Dark Energy. DES uses Fermigrid for nightly processing, for quality assessment of images, and for the detection of Type Ia supernovae. Nightly processing needs to be carried out for each of the 105 nights in a season that DES...
    Go to contribution page
  339. Adam Aurisano (University of Cincinnati)
    Track3: Data store and access
    poster presentation
    During operations, NOvA produces between 5,000 and 7,000 raw files per day with peaks in excess of 12,000. These files must be processed in several stages to produce fully calibrated and reconstructed analysis files. In addition, many simulated neutrino interactions must be produced and processed through the same stages as data. To accommodate the large volume of data and Monte Carlo,...
    Go to contribution page
  340. Osamu Tatebe (University of Tsukuba)
    Track3: Data store and access
    poster presentation
    Files in storage are often corrupted silently, without any explicit error. This is typically due to file system software bugs, RAID controller firmware bugs, or other causes. The most critical issue is that damaged data is read without any error. Although there are several mechanisms to detect data corruption in different layers, such as ECC in disks and memory and the TCP checksum, the data may...
    Go to contribution page
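The end-to-end detection idea behind the abstract above can be sketched as follows (a minimal illustration, not the contribution's actual mechanism): compute a digest at write time, store it alongside the file, and re-verify on every read, so corruption introduced by any intermediate layer is caught rather than silently returned.

```python
import hashlib
import os
import tempfile

def write_with_checksum(path, data):
    # Store the file together with a SHA-256 digest computed at write time.
    with open(path, "wb") as f:
        f.write(data)
    with open(path + ".sha256", "w") as f:
        f.write(hashlib.sha256(data).hexdigest())

def read_verified(path):
    # Re-compute the digest on read; silent corruption shows up as a mismatch.
    with open(path, "rb") as f:
        data = f.read()
    with open(path + ".sha256") as f:
        expected = f.read().strip()
    if hashlib.sha256(data).hexdigest() != expected:
        raise IOError("silent data corruption detected in %s" % path)
    return data

tmp = os.path.join(tempfile.mkdtemp(), "blob")
write_with_checksum(tmp, b"event data")
assert read_verified(tmp) == b"event data"
```

Per-layer checks (disk ECC, TCP checksums) each cover only their own layer; an application-level digest like this is the only check that spans the whole path from writer to reader.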
  341. Christophe Haen (CERN)
    Track3: Data store and access
    poster presentation
    The DIRAC Interware provides a development framework and a complete set of components for building distributed computing systems. The DIRAC Data Management System (DMS) offers all the necessary tools to ensure data handling operations for small and large user communities. It supports transparent access to storage resources based on multiple technologies, and is easily expandable. The...
    Go to contribution page
  342. Dr Federico Colecchia (Brunel University London)
    Track2: Offline software
    poster presentation
    The contamination from low-energy strong interactions is a major issue for data analysis at the Large Hadron Collider, particularly with reference to pileup, i.e. to proton-proton collisions from other bunch crossings. With a view to improving on the performance of pileup subtraction in higher-luminosity regimes, particle weighting methods have recently been proposed whereby the weights are...
    Go to contribution page
  343. Ruben Domingo Gaspar Aparicio (CERN)
    Track3: Data store and access
    poster presentation
    Inspired by various database-as-a-service (DBaaS) providers, the database group at CERN has developed a platform that allows the CERN user community to run a database instance with database administrator privileges, providing a full toolkit that allows the instance owner to perform backups/point-in-time recoveries, monitor specific database metrics, start/stop the instance and...
    Go to contribution page
  344. Mr Karsten Schwank (DESY)
    Track3: Data store and access
    poster presentation
    Increasingly, sites are using dCache to support communities that have different requirements from WLCG; as an example, DESY facilities and services now support photon sciences and biology groups. This presents new use-cases for dCache. Of particular interest is the chaotic file size distribution with a peak towards small files. This is problematic because tertiary storage systems, and tape in...
    Go to contribution page
  345. Mr Tigran Mkrtchyan Mkrtchyan (Deutsches Elektronen-Synchrotron DESY)
    Track3: Data store and access
    poster presentation
    For over a decade, dCache.ORG has provided software which is used at more than 80 sites around the world, providing reliable services for WLCG experiments and others. This can be achieved only with a well-established process from the whiteboard, where ideas are created, through to packages installed on production systems. Since early 2013 we have used git as our source code management...
    Go to contribution page
  346. Thomas Lindner (TRIUMF)
    Track1: Online computing
    poster presentation
    DEAP-3600 is a dark matter experiment located at SNOLAB in Ontario, Canada. The DEAP detector uses 3600 kg of liquid argon to search for the interactions of Weakly Interacting Massive Particles (WIMPs), a dark matter candidate. Light from the WIMP interactions is imaged using an array of 255 PMTs. A critical challenge for the DEAP experiment is the large background from Ar-39 beta decays...
    Go to contribution page
  347. Wim Lavrijsen (Lawrence Berkeley National Lab. (US))
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The language improvements in C++11/14 greatly reduce the amount of boilerplate code required and allow resource ownership to be clarified in interfaces. On top, the Cling C++ interpreter brings a truly interactive experience and real dynamic behavior to the language. Taken together, these developments bring C++ much closer to Python in ability, allowing the combination of PyROOT/cppyy and...
    Go to contribution page
  348. Brian Davies (STFC (RAL) (GB))
    Track6: Facilities, Infrastructure, Network
    poster presentation
    perfSONAR is a network monitoring tool set which enables the performance of wide-area communications to be analysed and eases problem identification across distributed centres. It has been widely used within WLCG since 2012 and has been crucial in identifying network problems and confirming that network changes have the desired effect. We report on examples of this within this presentation. In...
    Go to contribution page
  349. Jingyan Shi (IHEP)
    Track7: Clouds and virtualization
    poster presentation
    A batch system is a common way for a local cluster to schedule jobs running on work nodes. In some cases, jobs have to stay in the queue with no suitable work nodes available, while some job slots remain free with no suitable jobs to run. The reasons for this can vary. One of the main reasons is that the operating system running on the free work nodes is different from the one that jobs in...
    Go to contribution page
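The scheduling mismatch described above, queued jobs whose OS requirement matches none of the free work nodes, can be sketched with a toy matchmaker (hypothetical data structures, not the actual IHEP batch system interface):

```python
# Hypothetical queue and free-slot snapshots; "os" is the job's required OS.
queued_jobs = [{"id": 1, "os": "SL6"}, {"id": 2, "os": "SL5"}]
free_nodes = [{"name": "wn01", "os": "SL6"}, {"name": "wn02", "os": "SL6"}]

def match(jobs, nodes):
    """Assign each job to the first free node running the OS it requires."""
    nodes = list(nodes)  # work on a copy so callers keep their snapshot
    assignments = []
    for job in jobs:
        for node in nodes:
            if node["os"] == job["os"]:
                assignments.append((job["id"], node["name"]))
                nodes.remove(node)  # slot is now taken
                break
    return assignments

assignments = match(queued_jobs, free_nodes)
print(assignments)  # job 2 stays queued: no SL5 node among the free slots
```

Here job 2 waits even though wn02 is idle, which is exactly the waste that dynamically re-provisioning node operating systems (e.g. via virtualization) aims to eliminate.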
  350. Franco Brasolin (Universita e INFN (IT))
    Track7: Clouds and virtualization
    poster presentation
    During the LHC long shutdown period (LS1), which started in 2013, the Simulation in Point1 (Sim@P1) project takes opportunistic advantage of the trigger and data acquisition (TDAQ) farm of the ATLAS experiment. The farm provides more than 1500 compute nodes, which are particularly suitable for running event generation and Monte Carlo production jobs that are mostly CPU and not...
    Go to contribution page
  351. Andre Zibell (Bayerische Julius Max. Universitaet Wuerzburg (DE))
    Track1: Online computing
    poster presentation
    Ourania Sidiropoulou, on behalf of the ATLAS Muon Collaboration. A Micromegas (MM) quadruplet prototype with an active area of 0.5 m$^2$ that adopts the general design foreseen for the upgrade of the innermost forward muon tracking systems (Small Wheels) of the ATLAS detector in 2018-2019 has been built at CERN and is going to be tested in the ATLAS cavern environment...
    Go to contribution page
  352. Yuki Obara (University of Tokyo)
    Track1: Online computing
    poster presentation
    The purpose of the J-PARC E16 experiment is to investigate the origin of hadron mass through the chiral symmetry restoration in nuclear matter. In the experiment, we measure mass spectra of vector mesons in nuclei in the $e^{+}e^{-}$ decay channel with high precision and high statistics. We use a 30 GeV proton beam with high intensity of $10^{10}$ per spill to achieve high...
    Go to contribution page
  353. Andrew John Washbrook (University of Edinburgh (GB)), David Crooks (University of Glasgow (GB))
    Track7: Clouds and virtualization
    poster presentation
    The field of analytics, the process of analysing data to visualise meaningful patterns and trends, has become increasingly important to a wide range of scientific applications as the volume and variety of accessible data available to process (so-called Big Data) has significantly increased. There are a number of scalable analytic platforms and services which have risen in prominence (such as...
    Go to contribution page
  354. Wataru Nakai (University of Tokyo / RIKEN)
    Track2: Offline software
    poster presentation
    The J-PARC E16 experiment will be performed to measure the mass modification of vector mesons in nuclear matter at J-PARC in order to study the origin of hadron mass. In the experiment, we will measure invariant mass spectra of vector mesons in the electron and positron decay channel. We will use a 30 GeV proton beam with an intensity of $1\times10^{10}$ protons per pulse at the High-momentum...
    Go to contribution page
  355. Dr Tony Wildish (Princeton University (US))
    Track5: Computing activities and Computing models
    poster presentation
    We present an abstract view of data-transfer architectures in use in ATLAS and CMS. We use this to classify data-transfer tools not in terms of their technology, but in terms of their more basic features, such as the properties of the traffic they handle and the use-cases they serve. This classification moves the focus from programming interfaces and technologies back into the original...
    Go to contribution page
  356. Geun Chul Park (KiSTi Korea Institute of Science & Technology Information (KR))
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    AMGA (ARDA Metadata Grid Application) is a grid metadata catalog system that has been developed as a component of the EU FP7 EMI consortium, based on the requirements of the HEP (High-Energy Physics) and Biomed user communities. Currently, AMGA is exploited to manage the metadata in the gBasf2 framework at Belle II, which is one of the largest particle physics experiments in the world....
    Go to contribution page
  357. Andrew McNab (University of Manchester (GB))
    Track2: Offline software
    poster presentation
    The LHCb experiment has recorded the world’s largest sample of charmed meson decays. The search for matter-antimatter asymmetries in the charm sector requires high-precision analysis and thus intensive computing. This contribution will present a powerful method to measure matter-antimatter asymmetries in multi-body decays where GPU systems have been successfully exploited. In this method, local...
    Go to contribution page
  358. Christopher Jung (KIT - Karlsruhe Institute of Technology (DE))
    Track8: Performance increase and optimization exploiting hardware features
    poster presentation
    An ARM cluster, CEPH, ROOT and the energy balance: the total cost of ownership (TCO) of today's computing centres is increasingly driven by the power consumption of computing equipment. The question arises whether Intel-based CPUs are still the best choice for analysis tasks. Furthermore, data-driven computing models are emerging. This contribution compares performance, TCO, power and energy...
    Go to contribution page
  359. Silvia Arezzini (INFN Italy)
    Track7: Clouds and virtualization
    poster presentation
    A large scale computing center, when not dedicated to a single/few users, has to face the problem of meeting ever changing user needs with respect to operating system version, architecture, availability of attached data volumes and logins. While clouds are a typical answer to these types of questions, they introduce resource problems like higher usage of RAM, difficulty to expose bare metal...
    Go to contribution page
  360. Steven Andrew Farrell (Lawrence Berkeley National Lab)
    Track5: Computing activities and Computing models
    poster presentation
    The ATLAS analysis model has been overhauled for the upcoming run of data collection in 2015 at 13 TeV. One key component of this upgrade was the Event Data Model (EDM), which now allows for greater flexibility in the choice of analysis software framework and provides powerful new features that can be exploited by analysis software tools. A second key component of the upgrade is the...
    Go to contribution page
  361. Stefano Dal Pra (INFN)
    Track7: Clouds and virtualization
    poster presentation
    The WLCG community and many groups in the HEP community have based their computing strategy on the Grid paradigm, which has proved successful and still meets its goals. However, Grid technology has not spread much to other communities; in the commercial world, the cloud paradigm is the emerging way to provide computing services. WLCG experiments aim to achieve integration of their...
    Go to contribution page
  362. Dr Gen Kawamura (International Cloud Cooperation)
    Track7: Clouds and virtualization
    poster presentation
    Grid computing enables deployments of large scale distributed computational infrastructures among different research facilities. It has been recently proposed that the Grid infrastructure be based on cloud computing. Provisioning systems and automated management frameworks using Cobbler, Rocks, Cfengine and Puppet are being successfully applied to many systems. Having implemented these new...
    Go to contribution page
  363. Stefano Dal Pra (INFN)
    Track6: Facilities, Infrastructure, Network
    poster presentation
    Tier-1 sites providing computing power for HEP experiments are usually tightly designed for high-throughput performance. This is pursued by reducing the variety of supported use cases and tuning the remaining ones for performance, the most important of which has been single-core jobs. Moreover, the usual workload is saturation: each available core in the farm is in use and there are...
    Go to contribution page
  364. Afiq Aizuddin Bin Anuar (University of Malaya (MY))
    Track1: Online computing
    poster presentation
    The CMS experiment has been designed with a 2-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increase in center-of-mass energy and luminosity will raise the...
    Go to contribution page
  365. Mr Romain Wartel (CERN)
    Track5: Computing activities and Computing models
    poster presentation
    Federated identity management (FIM) is an arrangement made among multiple organisations that lets subscribers use the same identification data, e.g. account names & credentials, to obtain access to the secured resources and computing services of all other organisations in the group. Specifically in the various research communities there is an increased interest in a common approach as there is...
    Go to contribution page
  366. Christopher Jung (KIT - Karlsruhe Institute of Technology (DE))
    Track3: Data store and access
    poster presentation
    Most analyses in experimental high-energy physics (HEP) are based on the data analysis framework ROOT. Therefore, simulated as well as measured events are stored in ROOT trees. A typical analysis loops over events in ROOT files and selects relevant events for further processing according to certain selection criteria. The emergence of NoSQL databases provides a new means for large-scale data...
    Go to contribution page
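The analysis pattern described above, looping over events and keeping those that pass selection criteria, can be sketched as follows (with made-up event fields; a real analysis would iterate over a ROOT tree rather than a Python list):

```python
# Hypothetical event records standing in for entries of a ROOT tree.
events = [
    {"n_muons": 2, "met": 35.0},
    {"n_muons": 1, "met": 80.0},
    {"n_muons": 2, "met": 120.0},
]

def passes(event):
    # Example selection: at least two muons and missing transverse
    # energy above 50 GeV (illustrative cuts, not a real analysis).
    return event["n_muons"] >= 2 and event["met"] > 50.0

selected = [e for e in events if passes(e)]
print(len(selected))  # -> 1
```

A NoSQL store changes where this loop runs: instead of reading every event from file and filtering client-side, the selection predicate can be pushed to the database as a query.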
  367. Vikas Singhal (Department of Atomic Energy (IN))
    Track2: Offline software
    poster presentation
    The Compressed Baryonic Matter (CBM) experiment at the Facility for Anti-Proton and Ion Research (FAIR) in Darmstadt, Germany, is going to produce about 1 TByte per second of raw data at an interaction rate of 10 MHz for the measurement of very rare particles. Until now, all HEP experiments have been based on a traditional hardware-trigger approach; therefore all simulation and reconstruction...
    Go to contribution page
  368. Jerome Odier (Centre National de la Recherche Scientifique (FR))
    Track3: Data store and access
    poster presentation
    The ATLAS Metadata Interface (AMI) can be considered to be a mature application because it has existed for at least 10 years. Over the years, the number of users and the number of functions provided for these users has increased. It has been necessary to adapt the hardware infrastructure in a seamless way so that the Quality of Service remains high. We will describe the evolution of...
    Go to contribution page
  369. Brian Paul Bockelman (University of Nebraska (US)), Dr Jose Caballero Bejar (Brookhaven National Laboratory (US))
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The Open Science Grid Application Software Installation Service (OASIS) provides an application installation service for Open Science Grid (OSG) virtual organizations (VOs) built on top of the CERN Virtual Machine File System (CVMFS). This paper provides an overview and progress report of the OASIS service, which has been in production for over 18 months. OASIS can be used either...
    Go to contribution page
  370. Thomas Lindner (TRIUMF)
    Track5: Computing activities and Computing models
    poster presentation
    ND280 is the off-axis near detector for the T2K neutrino experiment; it is designed to characterize the unoscillated T2K neutrino beam and measure neutrino cross-sections. We have developed a complicated system for processing and simulating the ND280 data, using computing resources from North America, Europe and Japan. Recent work has concentrated on unifying our computing framework across...
    Go to contribution page
  371. Andrew David Lahiff (STFC - Rutherford Appleton Lab. (GB))
    Track7: Clouds and virtualization
    poster presentation
    Today it is becoming increasingly common for WLCG sites to provide both grid and cloud compute resources. In order to avoid the inefficiencies caused by static partitioning of resources it is necessary to integrate grid and cloud resources. There are two options to consider when doing this. The simplest option is to have the cloud manage all the physical hardware and use entirely virtualised...
    Go to contribution page
  372. Dr Maria Grazia Pia (Universita e INFN (IT)), Sung Hun Kim (H)
    Track2: Offline software
    poster presentation
    Geant4 recommends a set of PhysicsLists and related classes (Builders, PhysicsConstructors) to its user community to facilitate the use of Geant4 functionality despite its intrinsic physics complexity. Relatively limited documentation is available in the literature regarding Geant4 physics configuration tools, especially concerning the quantification of their accuracy, their computational...
    Go to contribution page
  373. Anna Elizabeth Woodard (University of Notre Dame (US)), Matthias Wolf (University of Notre Dame (US))
    Track5: Computing activities and Computing models
    poster presentation
    Individual scientists in high energy physics experiments like the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider require extensive use of computing resources for analysis of massive data sets. The majority of this analysis work is done at dedicated grid-enabled CMS computing facilities. University campuses offer considerable additional computing resources, but these are...
    Go to contribution page
  374. Arturo Sanchez Pineda (Universita di Napoli Federico II-Universita e INFN)
    Track2: Offline software
    poster presentation
    We explore the potential of current web applications to create online interfaces that allow visualization, interaction and real cut-based physics analysis and monitoring of processes through a web browser. The project consists of the initial development of web-based and cloud computing services to allow students and researchers to perform fast and very useful cut-based analysis on a...
    Go to contribution page
  375. Dr Marc Paterno (Fermilab)
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The scientific discovery process can be advanced by the integration of independently-developed programs run on disparate computing facilities into coherent workflows usable by scientists who are not experts in computing. For such advancement, we need a system which scientists can use to formulate analysis workflows, to integrate new components to these workflows, and to execute different...
    Go to contribution page
  376. Dr Samuel Cadellin Skipsey
    Track3: Data store and access
    poster presentation
    The state of the art in Grid style data management is to achieve increased resilience of data via multiple complete replicas of data files across multiple storage endpoints. While this is effective, it is not the most space-efficient approach to resilience, especially when the reliability of individual storage endpoints is sufficiently high that only a few will be inactive at any point in...
    Go to contribution page
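    The space-efficiency argument can be illustrated with the simplest possible erasure code, a single XOR parity block (a toy sketch, not the scheme evaluated in the contribution): n data blocks need only one extra parity block to survive a single loss, where full replication would need n extra blocks.

    ```python
    # Toy single-parity erasure code: store n data blocks plus one XOR
    # parity block; any single lost block can be rebuilt from the rest.
    from functools import reduce

    def make_parity(blocks):
        """XOR all data blocks together into one parity block."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    def recover(blocks, parity, lost_index):
        """Rebuild the block at lost_index from the survivors and the parity."""
        survivors = [b for i, b in enumerate(blocks) if i != lost_index]
        return make_parity(survivors + [parity])

    data = [b"aaaa", b"bbbb", b"cccc"]
    parity = make_parity(data)          # 1 extra block, vs 3 for replication
    assert recover(data, parity, 1) == b"bbbb"
    ```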
  377. Dr Helge Meinhard (CERN)
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    We will present how CERN's services around Issue Tracking and Version Control have evolved, and what the plans for the future are. We will describe the services' main design, integration and structure, giving special attention to the new requirements from the community of users in terms of collaboration and integration tools and how we address this challenge when defining new services based on...
    Go to contribution page
  378. Dr Bockjoo Kim (University of Florida (US)), Dr Dimitri Bourilkov (University of Florida (US)), Jorge Luis Rodriguez (Florida International University (US)), Paul Ralph Avery (University of Florida (US)), Yu Fu (University of Florida (US))
    Track3: Data store and access
    poster presentation
    One of the CMS Tier2 centers, the Florida CMS Tier2 center, has been using the Lustre filesystem for its data storage backend system since 2004. Recently, the data access pattern at our site has changed greatly due to various new access methods that include file transfers through the GridFTP servers, read access from the worker nodes, and remote read access through xrootd. In order to optimize...
    Go to contribution page
  379. Brian Davies (STFC (RAL) GB)
    Track3: Data store and access
    poster presentation
    The Rutherford Appleton Laboratory (RAL) operates the UK WLCG Tier-1 facility on behalf of GridPP. Tier-1s provide persistent archival storage (on tape at RAL) and online storage for fast-access data analysis. RAL is one of the few Tier-1s which support data management for all the major LHC experiments, as well as for a number of smaller Virtual Organisations. This allows us to compare usage...
    Go to contribution page
  380. Prof. Alberto Aloisio (Universita' di Napoli Federico II and INFN)
    Track1: Online computing
    poster presentation
    We present a feasibility study of RF transmitters and modulators based on parametric softcores fully embedded in a general-purpose FPGA fabric, without using external components. This architecture aims at providing wireless physical layers for IoT and NFC protocols with programmable hardware. We show preliminary results with latest-generation Xilinx 7-series FPGAs.
    Go to contribution page
  381. Manuel Delfino Reznicek (Universitat Autònoma de Barcelona (ES))
    Track6: Facilities, Infrastructure, Network
    poster presentation
    Energy consumption is an increasing concern for data centers. This paper summarizes recent energy efficiency upgrades at the Port d’Informació Científica (PIC) in Barcelona, Spain which have considerably lowered energy consumption. The upgrades were particularly challenging, as they involved modifying the already existing machine room, which is shared by PIC with the general IT services of the...
    Go to contribution page
  382. Alejandro Alvarez Ayllon (CERN)
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    FTS3 is the service responsible for the distribution of the LHC data across the WLCG Infrastructure. To facilitate its use outside the traditional grid environment we have provided a web application - known as WebFTS - fully oriented towards final users, and easily usable within a browser. This web application is completely decoupled from the core service, and interfaces with it via a REST...
    Go to contribution page
  383. Dr Sebastien Binet (IN2P3/LAL)
    Track2: Offline software
    poster presentation
    `fwk`: a Go-based concurrent control framework. Current HEP control frameworks were designed and written in the early 2000s, when multi-core architectures were not yet pervasive. As a consequence, an inherently sequential event-processing design emerged. Evolving current frameworks' APIs and data models, which encourage global states,...
    Go to contribution page
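    The data-flow idea behind such concurrent frameworks is that tasks declare which event-store keys they read and write, and any task whose inputs are available can run concurrently with the others. A minimal Python sketch of this scheduling idea follows (purely illustrative; `fwk` itself is written in Go, and all names here are invented):

    ```python
    # Illustrative data-flow scheduler: tasks declare the event-store keys
    # they read and the key they write; tasks whose inputs are present in
    # the event store may run concurrently.
    from concurrent.futures import ThreadPoolExecutor

    def process_event(event, tasks):
        """Run tasks in dependency order; independent tasks run in parallel."""
        pending = list(tasks)
        with ThreadPoolExecutor() as pool:
            while pending:
                runnable = [t for t in pending
                            if all(k in event for k in t["reads"])]
                futures = {pool.submit(t["fn"], event): t for t in runnable}
                for fut, t in futures.items():
                    event[t["writes"]] = fut.result()
                    pending.remove(t)
        return event

    tasks = [
        {"reads": ["raw"], "writes": "hits", "fn": lambda ev: len(ev["raw"])},
        {"reads": ["raw"], "writes": "sum",  "fn": lambda ev: sum(ev["raw"])},
        {"reads": ["hits", "sum"], "writes": "mean",
         "fn": lambda ev: ev["sum"] / ev["hits"]},
    ]
    event = process_event({"raw": [1, 2, 3]}, tasks)
    assert event["mean"] == 2.0
    ```

    Here the "hits" and "sum" tasks run in the same batch, while "mean" only becomes runnable once both of its inputs have been written.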
  384. Soon Yung Jun (Fermi National Accelerator Lab. (US))
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    Performance evaluation and analysis of large-scale computing applications is essential for optimizing the use of resources. As detector simulation is one of the most compute-intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing...
    Go to contribution page
  385. Dr Tamborini Aurora (University of Pavia - INFN Section of Pavia)
    Track2: Offline software
    poster presentation
    **Purpose** The aim of this work is to study a possible use of carbon-ion pencil beams (delivered with active scanning modality) for the treatment of ocular melanomas at the National Centre for Oncological Hadrontherapy (CNAO). The promising aspect of carbon-ion radiotherapy for the treatment of this disease lies in its superior relative biological effectiveness (RBE). The Monte...
    Go to contribution page
  386. Dr Simone Campana (CERN)
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The ATLAS Experiment at the Large Hadron Collider has collected data during Run 1 and is ready to collect data in Run 2. The ATLAS data are distributed, processed and analysed at more than 130 grid and cloud sites across the world. At any given time, there are more than 150,000 concurrent jobs running and about a million jobs are submitted on a daily basis on behalf of thousands of physicists...
    Go to contribution page
  387. Dr Randy Sobie (University of Victoria (CA))
    Track7: Clouds and virtualization
    poster presentation
    The HEP community is increasingly using clouds that are distributed around the world for running its applications. The stringent software criteria of HEP experiments require that we use the identical (secure) virtual machine (VM) image at all sites with a minimal set of site-specific customizations. Nearly all cloud systems (such as OpenStack) require that the VM image to be instantiated must...
    Go to contribution page
  388. Haykuhi Musheghyan (Georg-August-Universitaet Goettingen (DE))
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The importance of monitoring in HEP grid computing systems is growing due to a significant increase in their complexity. Computer scientists and administrators have been studying and building effective ways to gather information on, and clarify the status of, each local grid infrastructure. The HappyFace project aims at making the above-mentioned workflow possible. It aggregates, processes and...
    Go to contribution page
  389. Mr Suman Sau (Calcutta University)
    Track1: Online computing
    poster presentation
    The Compressed Baryonic Matter (CBM) experiment is part of the Facility for Antiproton and Ion Research (FAIR) at GSI in Darmstadt. This experiment will examine heavy-ion collisions in a fixed-target geometry and will be able to measure hadrons, electrons and muons. The Muon Chamber (MUCH) is used to detect low-momentum muons in an environment of high particle densities. The basic readout chain...
    Go to contribution page
  390. Vincenzo Daponte (Universite de Geneve (CH))
    Track1: Online computing
    poster presentation
    The CMS High Level Trigger (HLT) runs a streamlined version of the CMS offline reconstruction software on thousands of CPUs. The CMS software is written mostly in C++, using Python as its configuration language through an embedded CPython interpreter. The configuration of each process is made up of hundreds of "modules", organized in "sequences" and "paths". As an...
    Go to contribution page
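    The layering of modules into sequences and paths can be mimicked with a toy Python configuration (a sketch of the structure only; the classes, module names and parameters below are invented and this is not the actual CMSSW configuration API):

    ```python
    # Toy trigger configuration: "modules" are named processing steps,
    # "sequences" group modules, and "paths" chain sequences into one
    # selection chain, mirroring the layering described in the abstract.
    class Module:
        def __init__(self, name, params=None):
            self.name = name
            self.params = params or {}

    class Sequence:
        def __init__(self, *modules):
            self.modules = list(modules)

    class Path:
        def __init__(self, *sequences):
            self.sequences = list(sequences)

        def flatten(self):
            """Expand the path into the ordered list of module names."""
            return [m.name for seq in self.sequences for m in seq.modules]

    unpack = Module("rawUnpacker")
    cluster = Module("caloClusterer", {"threshold_GeV": 0.5})
    filt = Module("etFilter", {"minEt_GeV": 30.0})

    reco = Sequence(unpack, cluster)
    selection = Sequence(filt)
    hlt_path = Path(reco, selection)

    assert hlt_path.flatten() == ["rawUnpacker", "caloClusterer", "etFilter"]
    ```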
  391. Dr Maria Grazia Pia (Universita e INFN (IT)), Peter Steinbach (MPI-CBG), Stefan Kluth (Max-Planck-Institut fuer Physik (Werner-Heisenberg-Institut) (D), Dr Thomas Schoerner-Sadenius (DESY), Thomas Velz (Universitaet Bonn (DE))
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The ability to judge, use and develop code efficiently and successfully is a key ingredient in modern particle physics. Software design plays a fundamental role in the software development process and is instrumental to many critical aspects in the life-cycle of an experiment: the transparency of software design enables the validation of physics results, contributes to the effective use of...
    Go to contribution page
  392. Edgar Fajardo Hernandez (Univ. of California San Diego (US))
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The HTCondor batch system is heavily used in the HEP community as the batch system for several WLCG resources. Moreover it is the backbone of the GlideInWMS, the main pilot system used by CMS. To prepare for LHC Run 2, we are probing the scalability limits of new versions and configurations of HTCondor with the goal of reaching at least 200,000 simultaneous running jobs in a single pool. A...
    Go to contribution page
  393. Michal Husejko (CERN)
    Track8: Performance increase and optimization exploiting hardware features
    poster presentation
    In this paper we present our findings gathered during the evaluation and testing of Windows Server High Performance Computing (Windows HPC) in view of potentially using it as a production HPC system for engineering applications. The Windows HPC package, an extension of Microsoft's Windows Server product, provides all essential interfaces, utilities and management functionality for creating,...
    Go to contribution page
  394. Shawn Mc Kee (University of Michigan (US))
    Track6: Facilities, Infrastructure, Network
    poster presentation
    In today's world of distributed scientific collaborations, there are many challenges to providing reliable inter-domain network infrastructure. Network operators use a combination of active monitoring and trouble tickets to detect problems. However, some of these approaches do not scale to wide area inter-domain networks due to unavailability of data. The Pythia Network Diagnostic...
    Go to contribution page
  395. Stefano Dal Pra (INFN)
    Track7: Clouds and virtualization
    poster presentation
    While in the business world the cloud paradigm is typically implemented by purchasing resources and services from third-party providers (e.g. Amazon), in the scientific environment there is usually a need for on-premises IaaS infrastructures which allow efficient usage of the hardware distributed among (and owned by) different scientific administrative domains. In addition, the requirement of...
    Go to contribution page
  396. Mr Robert Mina (U. Virginia)
    Track1: Online computing
    poster presentation
    The NOvA collaboration has constructed a 14,000 ton, fine-grained, low-Z, total absorption tracking calorimeter at an off-axis angle to an upgraded NuMI neutrino beam. This detector, with its excellent granularity and energy resolution, and relatively low-energy neutrino thresholds was designed to observe electron neutrino appearance in a muon neutrino beam but it also has unique capabilities...
    Go to contribution page
  397. Marco Clemencic (CERN)
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The new LHCb nightly build system described at CHEP 2013 was limited by the use of JSON files for its configuration. JSON had been chosen as a temporary solution to maintain backward compatibility towards the old XML format by means of a translation function. Modern languages like Python leverage meta-programming techniques to enable the development of Domain Specific Languages...
    Go to contribution page
  398. Michael Boehler (Albert-Ludwigs-Universitaet Freiburg (DE))
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    Every day hundreds of tests are run on the Worldwide LHC Computing Grid for the ATLAS, CMS, and LHCb experiments in order to evaluate the performance and reliability of the different computing sites. All this activity is steered, controlled, and monitored by the HammerCloud testing infrastructure. Sites with failing functionality tests are auto-excluded from the ATLAS computing grid, therefore...
    Go to contribution page
  399. Ben Couturier (CERN), Marco Clemencic (CERN)
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The purpose of this paper is to describe the steps that led to an improved interface for LHCb's Nightly Builds Dashboard. The goal was an efficient application meeting the needs both of the project developers, by providing them with a user-friendly interface, and of the computing team supporting the system, by providing a dashboard allowing for better...
    Go to contribution page
  400. Jae-Hyuck Kwak (KISTI)
    Track3: Data store and access
    poster presentation
    This paper describes the recent improvements of the AMGA Python client library for the Belle II experiment. The action items for the library improvement were identified after in-depth discussions with the developers of the Belle II distributed computing group. They include GSI support, client-side metadata federation support and atomic operation support. Some of the improvements were already applied...
    Go to contribution page
  401. Alessandro De Salvo (Universita e INFN, Roma I (IT)), Domenico Elia (INFN Bari), Laura Perini (Università degli Studi e INFN Milano (IT)), Tommaso Boccali (Sezione di Pisa (IT))
    Track5: Computing activities and Computing models
    poster presentation
    In 2012, 14 Italian institutions participating in all major LHC experiments won a grant from the Italian Ministry of Research (MIUR) to optimise analysis activities and, in general, the Tier-2/Tier-3 infrastructure. We report on the activities under way and on the considerable improvement in the ease of access to resources by physicists, including those with no specific computing interests....
    Go to contribution page
  402. Andrew David Lahiff (STFC - Rutherford Appleton Lab. (GB))
    Track7: Clouds and virtualization
    poster presentation
    Today the primary method by which the LHC and other experiments run computing work at WLCG sites is grid job submission. Jobs are submitted to computing element middleware which in turn submits jobs to a batch system managing the local compute resources. With the increasing interest and usage of cloud technology, a new challenge facing sites which support multiple experiments in recent years...
    Go to contribution page
  403. Dr Luca Mazzaferro (Max-Planck-Institut fuer Physik (Werner-Heisenberg-Institut) (D)
    Track6: Facilities, Infrastructure, Network
    poster presentation
    In a grid computing infrastructure, tasks such as continuous upgrades, service installations and software deployments are part of an admin's daily work. In such an environment, tools to help with the management, provisioning and monitoring of the deployed systems and services have become crucial. As experiments such as those at the LHC increase in scale, the computing infrastructure also becomes...
    Go to contribution page
  404. Andre Sailer (CERN)
    Track2: Offline software
    poster presentation
    The DD4hep detector description toolkit offers a flexible and easy to use solution for the consistent and complete description of particle physics detectors in one single system. It provides software components addressing visualisation, simulation, reconstruction and analysis of high energy physics data. The Linear Collider community has adopted DD4hep early on in the development phase and...
    Go to contribution page
  405. Dr Alexey Poyda (NATIONAL RESEARCH CENTRE "KURCHATOV INSTITUTE"), Eygene Ryabinkin (National Research Centre Kurchatov Institute (RU); Moscow Institute for Physics and Technology, Applied computational geophysics lab), Dr Ruslan Mashinistov (NATIONAL RESEARCH CENTRE "KURCHATOV INSTITUTE"; P.N. Lebedev Institute of Physics (Russian Academy of Sciences))
    Track8: Performance increase and optimization exploiting hardware features
    poster presentation
    During LHC Run 1, ATLAS and ALICE produced more than 30 petabytes of data. That rate outstripped any other ongoing scientific effort, even in data-rich fields such as genomics and climate science. To address an unprecedented multi-petabyte data processing challenge, the experiments are relying on the computational grid infrastructure deployed by the Worldwide LHC Computing Grid (WLCG). LHC...
    Go to contribution page
  406. Dr Dario Barberis (Università e INFN Genova (IT))
    Track3: Data store and access
    poster presentation
    The ATLAS EventIndex System, developed for use in LHC Run 2, is designed to index every processed event in ATLAS, replacing the TAG System used in Run 1. Its storage infrastructure, based on Hadoop, necessitates revamping how information in this system relates to other ATLAS systems. In addition, the scope of this new application is different from that of the TAG System. It will store fewer...
    Go to contribution page
  407. Alec Habig (Univ. of Minnesota Duluth)
    Track1: Online computing
    poster presentation
    The NOvA experiment, with a baseline of 810 km, samples Fermilab's upgraded NuMI beam with a Near Detector on-site and a Far Detector (FD) at Ash River, MN, to observe oscillations of muon neutrinos. The 344,064 liquid scintillator-filled cells of the 14 kton FD provide high granularity of a large detector mass and enable us to also study non-accelerator based neutrinos with our Data Driven...
    Go to contribution page
  408. Eygene Ryabinkin (National Research Centre Kurchatov Institute (RU))
    Track7: Clouds and virtualization
    poster presentation
    Cloud technologies allow easy load balancing between different tasks and projects. From the viewpoint of data analysis in the ALICE experiment, clouds make it possible to deploy software using the CERN Virtual Machine (CernVM) and the CernVM File System (CVMFS), to run different (including outdated) versions of software for long-term data preservation, and to dynamically allocate resources for different...
    Go to contribution page
  409. Dr Maria Girone (CERN)
    Track8: Performance increase and optimization exploiting hardware features
    poster presentation
    High energy physics experiments are experiencing a growth in the number of collected and processed events that exceeds the rate of growth in computing resources sustainable by technology improvements at a flat yearly cost. This trend is expected to continue into the foreseeable future, and as the field is not expecting a big increase in support, innovative approaches are needed. In areas of...
    Go to contribution page
  410. Michal Husejko (CERN)
    Track1: Online computing
    poster presentation
    High-Level Synthesis (HLS) for Field-Programmable Gate Array (FPGA) programming is becoming a practical alternative to the well-established VHDL and Verilog languages. This paper describes a case study in the use of HLS tools to design FPGA-based data acquisition (DAQ) systems. We will present the implementation of the CERN CMS detector ECAL Data Concentrator Card (DCC) functionality in HLS...
    Go to contribution page
  411. Yuji Kato
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    Belle II is an asymmetric-energy $e^{+}e^{-}$ collider experiment at SuperKEKB in Japan. One of the main goals of Belle II is to search for physics beyond the Standard Model with a data set of about $5 \times 10^{10}$ $B\bar{B}$ pairs. In order to store such a huge amount of data, including MC events, and analyze it in a timely manner, Belle II adopted a distributed computing model with DIRAC...
    Go to contribution page
  412. Dr Sergey Linev (GSI DARMSTADT)
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    This is a further development of the JSRootIO project. The code was mostly rewritten to make it modular, with the I/O part cleanly separated from the graphics. Many new interactive features were implemented: loading of required functionality on the fly; dynamic updating of object drawings; automatic resizing of drawings when the browser window is resized; moving/resizing of...
    Go to contribution page
  413. Andrej Gorisek (Jozef Stefan Institute (SI)), Garrin Mcgoldrick (University of Toronto (CA)), Matevz Cerv (CERN)
    Track2: Offline software
    poster presentation
    The Judith software performs pixel detector analysis tasks utilising two different data streams such as those produced by the reference and tested devices typically found in a testbeam. This software addresses and fixes problems arising from the desynchronization of the two simultaneously triggered heterogeneous data streams by detecting missed triggers in either of the streams. The software...
    Go to contribution page
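    The desynchronization detection can be sketched by comparing inter-trigger time spacings of the two streams: a missed trigger in one stream shows up as a single spacing that matches the sum of two consecutive spacings in the other (illustrative Python only; the function, tolerance and timestamps are invented, not the Judith implementation):

    ```python
    # Detect a missed trigger by comparing inter-trigger spacings of two
    # streams that should have fired together.
    def find_missed_trigger(ref_times, dut_times, tol=1e-6):
        """Return the index in dut_times after which a trigger was missed,
        or None if the streams stay synchronized."""
        i = j = 0
        while i + 1 < len(ref_times) and j + 1 < len(dut_times):
            d_ref = ref_times[i + 1] - ref_times[i]
            d_dut = dut_times[j + 1] - dut_times[j]
            if abs(d_dut - d_ref) < tol:
                i += 1          # spacings agree: streams still in sync
                j += 1
                continue
            # One dut spacing spanning two ref spacings means the dut
            # stream missed the trigger between them.
            if i + 2 < len(ref_times):
                d_ref2 = ref_times[i + 2] - ref_times[i + 1]
                if abs(d_dut - (d_ref + d_ref2)) < tol:
                    return j
            return None
        return None

    ref = [0.0, 1.0, 2.5, 3.0, 4.2]
    dut = [0.0, 1.0, 3.0, 4.2]   # the trigger at t=2.5 is missing
    assert find_missed_trigger(ref, dut) == 1
    ```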
  414. Andrew Norman (Fermilab)
    Track5: Computing activities and Computing models
    poster presentation
    Modern long-baseline neutrino experiments like the NOvA experiment at Fermilab require large-scale, compute-intensive simulations of their neutrino beam fluxes and of backgrounds induced by cosmic rays. The amount of simulation required to keep the systematic uncertainties of the simulation from dominating the final physics results is often 10x to 100x that of the actual detector exposure. For...
    Go to contribution page
  415. Ben Couturier (CERN)
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    After the successful Run 1 of the LHC, the LHCb core software team has taken advantage of the long shutdown to consolidate and improve its build and deployment infrastructure. Several of the related projects have already been presented, such as the build system using Jenkins and the LHCb performance and regression testing infrastructure. Some components are completely new, like the...
    Go to contribution page
  416. Andrey Ustyuzhanin (Moscow Institute of Physics and Technology, Moscow), Nikita Kazeev (Moscow Institute of Physics and Technology, Moscow)
    Track3: Data store and access
    poster presentation
    The LHCb experiment routinely generates up to 10^10 events per year. Organizing such an amount of data in a convenient manner for interactive analysis is non-trivial. It becomes even more complicated as every event undergoes several versions of reconstructions, and users have to be able to navigate through many different versions of the same event. This paper presents the LHCb EventIndex: an...
    Go to contribution page
  417. Dr Dmytro Kovalskyi (Univ. of California San Diego (US))
    Track5: Computing activities and Computing models
    poster presentation
    Glidemon is a lightweight monitoring system for the GlideinWMS job submission infrastructure. It allows for basic information aggregation based on ClassAds in the HTCondor environment of GlideinWMS. It can easily be adapted for a specific application running on top of GlideinWMS. In CMS it is used for monitoring user and production jobs managed by CRAB and WMAgent. We will review critical design...
    Go to contribution page
  418. Domenico Elia (INFN Bari), Giorgia Miniello (Universita e INFN (IT))
    Track3: Data store and access
    poster presentation
    A cloud-based Virtual Analysis Facility (VAF) for the ALICE experiment at the LHC has been developed in Bari. Similar facilities are currently running at other Italian sites, with the aim of creating a federation of interoperating farms able to provide their computing resources for interactive distributed analysis. The facility consists of a PROOF cluster of virtual machines dynamically deployed...
    Go to contribution page
  419. Vasco Chibante Barroso (CERN)
    Track1: Online computing
    poster presentation
    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). Following a successful Run 1, which ended in February 2013, the ALICE data acquisition (DAQ) entered a consolidation phase to prepare for Run 2 which will start in the beginning of 2015. A new software...
    Go to contribution page
  420. Ian Peter Collier (STFC - Rutherford Appleton Lab. (GB))
    Track7: Clouds and virtualization
    poster presentation
    The management of risk is fundamental to the operation of any distributed computing infrastructure. Identifying the causes of incidents is essential to prevent them from re-occurring. In addition, it is a goal to contain the impact of an incident while keeping services operational. For the response to incidents to be acceptable, it needs to be commensurate with the scale of the problem. The...
    Go to contribution page
  421. Dr Stefano Bagnasco (I.N.F.N. TORINO)
    Track7: Clouds and virtualization
    poster presentation
    Elastic cloud computing applications, i.e. applications that automatically scale according to computing needs, work on the ideal assumption of infinite resources. While large public cloud infrastructures may be a reasonable approximation of this condition, scientific computing centres like WLCG Grid sites usually work in a saturated regime, in which applications compete for scarce resources...
    Go to contribution page
  422. Loic Brarda (CERN)
    Track6: Facilities, Infrastructure, Network
    poster presentation
    The LHCb experiment operates a large computing infrastructure with more than 2000 servers, 300 virtual machines and 400 embedded systems. Many of the systems are operated diskless from NFS or iSCSI root volumes. They are connected by more than 200 switches and routers. We have recently completed the migration of the management of this system from Quattor to Puppet and of the original...
    Go to contribution page
  423. Prashanth Shanmuganathan (Kent State University, USA)
    Track6: Facilities, Infrastructure, Network
    poster presentation
    The STAR collaboration's record system is a collection of heterogeneous and sparse information associated with each member and institution. In its original incarnation, only flat information was stored, revealing many restrictions, such as the lack of historical change information, the inability to keep track of members leaving and re-joining, or the inability to easily extend the saved information as...
    Go to contribution page
  424. Dr Mario Lassnig (CERN)
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The monitoring and controlling interfaces of the previous data management system DQ2 followed the evolutionary requirements and needs of the ATLAS collaboration. The new system, Rucio, has put in place a redesigned web-based interface based upon the lessons learnt from DQ2, and the increased volume of managed information. This interface encompasses both a monitoring and controlling component,...
    Go to contribution page
  425. Giacinto Donvito (INFN), Vincenzo Spinoso (INFN)
    Track7: Clouds and virtualization
    poster presentation
    INFN-Bari is involved in PRISMA and RECAS, two national projects aiming respectively at setting up an OpenStack-based cloud infrastructure for public administration and scientific data analysis, and at upgrading the computing resources to a new T1-sized infrastructure. As Bari is also a T2 for the CMS and ALICE experiments, setting up the cloud resources so that they can be used for...
    Go to contribution page
  426. Prof. Daniele Bonacorsi (University of Bologna), Nicolo Magini (CERN), Dr Tony Wildish (Princeton University (US))
    Track5: Computing activities and Computing models
    poster presentation
    During the first LHC run, the CMS experiment collected tens of Petabytes of collision and simulated data, which need to be distributed among dozens of computing centres with low latency in order to make efficient use of the resources. While the desired level of throughput has been successfully achieved, it is still common to observe transfer workflows that cannot reach full completion in a...
    Go to contribution page
  427. Kiyoshi Hayasaka (Nagoya Univ.)
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The Belle II experiment is the next-generation B factory at SuperKEKB in Japan. A sample of 50 ab${}^{-1}$ will be collected at the $\Upsilon$ resonances. In addition, a large Monte Carlo (MC) sample will be generated to optimize the event selection criteria. The large data samples are managed by a sophisticated distributed computing system. To utilize the computing resources with a high...
    Go to contribution page
  428. Cristovao Jose Domingues Cordeiro (CERN)
    Track7: Clouds and virtualization
    poster presentation
    The adoption of cloud technologies by the LHC experiments places the fabric-management burden of monitoring virtualized resources upon the VO. In addition to monitoring the status of the virtual machines and triaging the results, it must be understood whether the resources actually provided match any agreements relating to the supply. Monitoring the instantiated virtual machines is therefore a...
    Go to contribution page
  429. Jan Tomsa (Czech Technical University in Prague (CZ)), Josef Novy (Czech Technical University in Prague (CZ))
    Track1: Online computing
    poster presentation
    Nowadays, all modern high-energy physics experiments depend substantially on fast and reliable data acquisition systems that are able to collect the large quantities of data supplied by various detectors. To ensure smooth and error-free operation, it is necessary to control and monitor the behavior and state of the processes running in the system. COMPASS is a high-energy particle experiment...
    Go to contribution page
  430. Giovanni Franzoni (CERN)
    Track2: Offline software
    poster presentation
    The analysis of the LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run 1 of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile-up). In order to aggregate the...
    Go to contribution page
  431. Dr Andreas Pfeiffer (CERN)
    Track3: Data store and access
    poster presentation
    The CMS experiment at CERN's Large Hadron Collider in Geneva has redesigned the code handling the conditions data over the last few years, aiming to increase performance and enhance maintainability. The new design includes a move to serialise all payloads before storing them in the database, allowing the handling of the payloads in external tools independent of a given software release. In this...
    Go to contribution page
  432. Nikos Kasioumis (CERN)
    Track5: Computing activities and Computing models
    poster presentation
    The talk will focus on the recent developments made by the Multimedia team of the Digital Library Services to better acquire content captured at CERN and disseminate it to the general public. In collaboration with the CERN communication unit and the Photo & Video Labs, the team has built new facilities to transfer, disseminate and archive multimedia content on the CERN...
    Go to contribution page
  433. Tao Cui (IHEP(Institute of High Energy Physics, CAS,China)), Dr Yaodong Cheng (IHEP)
    Track7: Clouds and virtualization
    poster presentation
    Traditionally, physical computers are used to run high-performance computing jobs. This raises several problems, such as jobs interfering with one another, operating-system crashes caused by abnormal operation, and low utilization of computing resources. IhepCloud aims to provide job isolation and operating-system fault isolation, and to improve resource utilization, through the virtualization of computing resources....
    Go to contribution page
  434. Lorenzo Moneta (CERN)
    Track2: Offline software
    poster presentation
    Several improvements have been introduced in the Math work package in the new version 6 of ROOT. We will report on the improvements in the ROOT function classes used for fitting data objects such as histograms or trees. These include the usage of a new TFormula class, based on the capabilities of Cling, which makes it easier for the user to build complex expressions, which can be compiled on the fly...
    Go to contribution page
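The core idea behind the new TFormula class — turning a user-supplied formula string into a callable at run time — can be sketched in plain Python. This is an illustrative concept sketch only, not the ROOT/Cling API; the function and parameter names are invented for the example.

```python
import math

def make_formula(expr, params):
    """Compile a formula string into a callable f(x, *par) on the fly.
    Illustrative only: ROOT's TFormula compiles C++ via Cling instead."""
    code = compile(expr, "<formula>", "eval")
    def f(x, *par):
        env = {"x": x, "math": math}
        env.update(dict(zip(params, par)))
        # Evaluate with builtins disabled so only x, math and the
        # named parameters are visible to the expression.
        return eval(code, {"__builtins__": {}}, env)
    return f

# A Gaussian with amplitude A, mean mu and width sigma.
gaus = make_formula("A * math.exp(-0.5 * ((x - mu) / sigma) ** 2)",
                    ("A", "mu", "sigma"))
print(gaus(0.0, 2.0, 0.0, 1.0))  # 2.0 at the peak
```

Such a callable can then be handed to any generic fitter, which is the pattern the abstract describes for ROOT's fitting classes.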
  435. Mihaela Gheata (ISS - Institute of Space Science (RO))
    Track5: Computing activities and Computing models
    poster presentation
    Open access is one of the prerequisites for long term data preservation for a HEP experiment. To guarantee the usability of data analysis tools over long periods of time it is crucial that third party users from the scientific community have access to the data and associated software. The ALICE Collaboration has developed a layer of lightweight components built on top of virtualisation...
    Go to contribution page
  436. Elizabeth Sexton-Kennedy (Fermi National Accelerator Lab. (US))
    Track5: Computing activities and Computing models
    poster presentation
    The CMS experiment, in recognition of its commitment to data preservation and open access as well as to education and outreach, has made its first public release of high-level data: up to half of the proton-proton collision data at 7 TeV from 2010 in CMS Analysis Object Data format. CMS has prepared, in collaboration with CERN and the other LHC experiments, an open data web portal based on...
    Go to contribution page
  437. Dr Miguel Villaplana Perez (Università degli Studi e INFN Milano (IT))
    Track6: Facilities, Infrastructure, Network
    poster presentation
    We present the approach of the University of Milan Physics Department and the local unit of INFN to allow and encourage the sharing among different research areas of computing, storage and networking resources (the largest ones being those composing the Milan WLCG Tier-2 centre and tailored to the needs of the ATLAS experiment). Computing resources are organised as independent HTCondor...
    Go to contribution page
  438. Mr Giulio Eulisse (Fermi National Accelerator Lab. (US))
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general use open-source code. A critical ingredient to the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a...
    Go to contribution page
  439. Aram Santogidis (CERN)
    Track8: Performance increase and optimization exploiting hardware features
    poster presentation
    ALFA is the common framework of the next-generation software for the ALICE and FAIR high-energy physics experiments. It supports both offline and online processing, which includes ALICE DAQ/HLT/Offline and the FairRoot project. The framework is designed around a data-flow model with message-oriented middleware (MOM) serving as a transport layer. By using multiple data-flows concurrently it...
    Go to contribution page
  440. Norman Anthony Graf (SLAC National Accelerator Laboratory (US))
    Track2: Offline software
    poster presentation
    We describe a software toolkit for full event simulation and reconstruction in silicon tracking detectors. It features modular packages providing sophisticated simulations of the response of silicon detectors to the passage of charged particles. Sensor classes allow very detailed descriptions of charge carrier movement in silicon detectors: one can list the collecting, absorbing and reflecting...
    Go to contribution page
  441. Dr Sebastien Binet (IN2P3/LAL)
    Track2: Offline software
    poster presentation
    pawgo: an interactive analysis workstation. Current interactive analysis toolkits usually leverage a Turing-complete general programming language, such as `C++` or `python`, married with some kind of interpreter (_e.g.:_ `CINT` or `Cling`) and a graphical user interface to present results (`ROOT`, `matplotlib` or `Chaco`). An obvious advantage...
    Go to contribution page
  442. Moritz Kretz (Ruprecht-Karls-Universitaet Heidelberg (DE))
    Track1: Online computing
    poster presentation
    With the installation of the Insertable B-Layer in 2014 the Pixel Detector of the ATLAS experiment has been extended by about 12 million pixels. Scanning and tuning procedures have been implemented by employing newly designed read-out hardware which is now able to support the full detector bandwidth even for calibration. The hardware is supported by an embedded software stack running on the...
    Go to contribution page
  443. Benjamin Radburn-Smith (Purdue University (US))
    Track1: Online computing
    poster presentation
    The trigger systems of LHC detectors play a fundamental role in defining the physics capabilities of the experiments. A reduction of several orders of magnitude in the rate of collected events, with respect to the proton-proton bunch-crossing rate generated by the LHC, is mandatory to cope with the limits imposed by the readout and storage systems. An accurate and efficient online...
    Go to contribution page
  444. Dai Kobayashi (Tokyo Institute of Technology (JP))
    Track1: Online computing
    poster presentation
    The ATLAS experiment at the Large Hadron Collider (LHC) took data at centre-of-mass energies between 900 GeV and 8 TeV during Run I (2009-2013). The LHC delivered an integrated luminosity of about 20 fb$^{−1}$ in 2012, which required dedicated strategies to safeguard the highest possible physics output while effectively reducing the event rate. The Muon High Level Trigger has successfully...
    Go to contribution page
  445. Alec Habig (Univ. of Minnesota Duluth)
    Track1: Online computing
    poster presentation
    The NOvA experiment studies neutrino oscillations with two functionally identical detectors separated by a baseline of 810 km. The 14 kt far detector in Ash River, Minnesota, comprises 344,064 channels of liquid-scintillator detection cells read out via wavelength-shifting fiber into 32-channel Avalanche Photo Diodes (APDs). A custom-designed Front End Board (FEB) continuously digitizes and...
    Go to contribution page
  446. Mia Tosi (Universita' degli Studi di Padova e INFN (IT))
    Track1: Online computing
    poster presentation
    The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of the experiments. In 2015, the center-of-mass energy of proton-proton collisions will reach 13 TeV, with an unprecedented luminosity of up to $10^{34}$ cm$^{-2}$s$^{-1}$. A reduction of several orders of magnitude in the event rate is needed to reach values compatible with detector readout, offline storage and...
    Go to contribution page
  447. Jovan Mitrevski (Ludwig-Maximilians-Univ. Muenchen (DE))
    Track2: Offline software
    poster presentation
    In order to maximize the physics potential of the ATLAS detector during the LHC's Run 2, the reconstruction software has been updated. Flat computing budgets required a factor-of-three improvement in run time, while the new xAOD data format forced changes in the reconstruction algorithms. Physics performance improvements have been made in the reconstruction of various objects, using improved techniques...
    Go to contribution page
  448. Dr Vladislav Kosejk (Czech Technical University Department of Physics)
    Track2: Offline software
    poster presentation
    This paper presents an innovative telescope design based on the use of a parabolic strip as the objective. Isaac Newton was the first to address the problem of chromatic aberration, which is caused by the variation of the refractive index in a lens; he solved it with a new kind of telescope using a mirror as the objective. There are many different kinds of telescope. The most basic one is the lens...
    Go to contribution page
  449. Dr Makoto Asai (SLAC National Accelerator Laboratory (US))
    Track2: Offline software
    poster presentation
    The Geant4 electromagnetic (EM) physics sub-packages are key components of any simulation; in particular, the simulation of LHC experiments. A small variation of EM physics may affect prediction accuracy and CPU performance of large scale Monte Carlo simulations for HEP, medicine or space science. In this work we report on recent improvements of the EM models and on new validations of EM...
    Go to contribution page
  450. Christopher Jung (KIT - Karlsruhe Institute of Technology (DE))
    Track5: Computing activities and Computing models
    poster presentation
    Modern science is most often driven by data. Improvements in state-of-the-art technologies and methods in many scientific disciplines lead not only to increasing data rates, but also to the need to improve or even completely overhaul their data life cycle management. Communities usually face two kinds of challenges: generic ones like federated authorization and authentication...
    Go to contribution page
  451. Dr Isidro Gonzalez Caballero (Universidad de Oviedo (ES))
    Track5: Computing activities and Computing models
    poster presentation
    The PROOF Analysis Framework (PAF) has been designed to improve the ability of the physicist to develop software for the final stages of an analysis, where typically simple ROOT Trees are used and the amount of data processed is of the order of several terabytes. It hides the technicalities of dealing with PROOF, leaving the scientist to concentrate on the analysis. PAF is capable of using...
    Go to contribution page
  452. Gerardo Ganis (CERN)
    Track3: Data store and access
    poster presentation
    During the LHC Run-1, Grid resources in ATLAS have been managed by the PanDA and DQ2 systems. In order to meet the needs for the LHC Run-2, Prodsys2 and Rucio are used as the new ATLAS Workload and Data Management systems. The data are stored under various formats in ROOT files and end-user physicists have the choice to use either the ATHENA framework or directly ROOT. Within the ROOT data...
    Go to contribution page
  453. Federico Stagni (CERN)
    Track5: Computing activities and Computing models
    poster presentation
    The Cherenkov Telescope Array (CTA) – an array of many tens of Imaging Atmospheric Cherenkov Telescopes deployed on an unprecedented scale – is the next generation instrument in the field of very high energy gamma-ray astronomy. CTA will operate as an open observatory providing data products to the scientific community. An average data stream of about 1 GB/s for about 1000 hours of observation...
    Go to contribution page
  454. George Ryall (STFC)
    Track6: Facilities, Infrastructure, Network
    poster presentation
    Rutherford Appleton Laboratory (RAL) is part of the UK’s Science and Technology Facilities Council (STFC). The Royal Charter that established the STFC requires us to generate public awareness and encourage public engagement and dialogue in relation to the science we undertake. We firmly support this activity as it is important to encourage the next generation of students to consider studying...
    Go to contribution page
  455. James Letts (Univ. of California San Diego (US))
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The CMS experiment at the LHC relies on HTCondor and glideinWMS as its primary batch and pilot-based Grid provisioning system. So far we have been running several independent resource pools, but we are working on unifying them all to reduce the operational load and more effectively share resources between various activities in CMS. The major challenge of this unification activity is scale. The...
    Go to contribution page
  456. Dr Julius Hrivnac (Laboratoire de l'Accelerateur Lineaire (FR))
    Track3: Data store and access
    poster presentation
    The new ATLAS EventIndex catalogue uses a Hadoop cluster to store information on each event processed by ATLAS. Several tools belonging to the Hadoop eco-system are used to organise the data in HDFS, catalogue it internally, and provide the search functionality. This presentation will describe the Hadoop-based implementation of the adaptive query engine serving as the back-end for the ATLAS...
    Go to contribution page
  457. Catalin Condurache (STFC - Rutherford Appleton Lab. (GB))
    Track3: Data store and access
    poster presentation
    CernVM-FS is firmly established as a method of software distribution for the LHC experiments at the WLCG sites. Use of CernVM-FS outside the WLCG has been growing steadily, with an increasing number of Virtual Organizations (VOs), both within the High Energy Physics (HEP) community and in other disciplines (e.g. space, natural and life sciences), having identified this technology as a more efficient...
    Go to contribution page
  458. Dr Mikael Reponen (RIKEN)
    Track2: Offline software
    poster presentation
    The nucleus perturbs the atomic energy levels of atoms and ions at the ppm level and although this is a small absolute effect it is readily probed and measured by modern laser spectroscopic methods. These methods are particularly suitable for the study of short-lived radionuclides with lifetimes as short as a few milliseconds and production rates often only a few isotopes/isomers per...
    Go to contribution page
  459. Barbara Storaci (Universitaet Zuerich (CH))
    Track1: Online computing
    poster presentation
    Stable, precise spatial alignment and PID calibration are necessary to achieve optimal detector performance. During Run 2, LHCb will have new real-time detector alignment and calibration to reach equivalent performance in the online and offline reconstruction. This offers the opportunity to optimise the event selection by applying stronger constraints, as well as hadronic particle...
    Go to contribution page
  460. Dr Claudia Bertella (Johannes-Gutenberg-Universitaet Mainz (DE))
    Track1: Online computing
    poster presentation
    In high-energy physics experiments, online selection is crucial to identify the few interesting collisions from the large data volume processed. In the overall ATLAS trigger strategy, b-jet triggers are designed to identify heavy-flavor content in real-time and, in particular, provide the only option to efficiently record events with fully hadronic final states containing b-jets. In doing so,...
    Go to contribution page
  461. Dr Robert Andrew Currie (Imperial College Sci., Tech. & Med. (GB))
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The Ganga project (http://cern.ch/ganga) has long been used by several experimental communities within HEP, most notably ATLAS and LHCb. This talk describes the most recent developments in job submission and management within Ganga, with a focus on newly developed tools and features. Ganga offers a powerful unified interface for submitting complex user jobs to many different backends; this...
    Go to contribution page
  462. Laurent Garnier (LAL-IN2P3-CNRS)
    Track2: Offline software
    poster presentation
    Geant4 is a toolkit for the simulation of the passage of particles through matter. This object-oriented toolkit supports a variety of visualisation drivers including OpenGL, OpenInventor, HepRep, DAWN, VRML, RayTracer, gMocren and ASCIITree, with diverse and complementary functionalities. In 2013, Geant4-MT [1] brought multi-threading to Geant4. The OpenGL suite of visualization drivers...
    Go to contribution page
  463. Kurt Biery (Fermi National Accelerator Lab. (US))
    Track1: Online computing
    poster presentation
    The artdaq data acquisition software toolkit has been developed within the Fermilab Scientific Computing Division, and it is being used by a growing number of high-energy and cosmology experiments. It currently provides data transfer, event building, run control, and event analysis functionality. The event analysis functionality is provided by the art framework, which has also been developed...
    Go to contribution page
  464. Mr Igor Mandrichenko (Fermilab)
    Track6: Facilities, Infrastructure, Network
    poster presentation
    RESTful web services are a popular solution for distributed data access and information management. The performance, scalability and reliability of such services are critical for the success of data production and analysis in High Energy Physics, as well as in other areas of science. At FNAL, we have been successfully using an HTTP/REST-based data access architecture to provide access to various types...
    Go to contribution page
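The REST-style data access pattern the abstract refers to can be sketched with the Python standard library alone. The endpoint path, payload and run numbers below are hypothetical, invented for illustration; this is not FNAL's actual service interface.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical read-only metadata store exposed over HTTP/REST.
RUNS = {"1001": {"detector": "demo", "events": 42}}

class RunHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Map GET /runs/<id> onto a dictionary lookup.
        run_id = self.path.rsplit("/", 1)[-1]
        rec = RUNS.get(run_id)
        body = json.dumps(rec if rec else {"error": "not found"}).encode()
        self.send_response(200 if rec else 404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind to an ephemeral port and serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), RunHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/runs/1001" % server.server_port
with urllib.request.urlopen(url) as resp:
    record = json.load(resp)
print(record)  # {'detector': 'demo', 'events': 42}
server.shutdown()
```

The appeal of the approach, as the abstract notes, is that plain HTTP semantics (URLs, status codes, JSON bodies) make such services easy to scale and to consume from any client.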
  465. Prof. Soh Suzuki (KEK)
    Track6: Facilities, Infrastructure, Network
    poster presentation
    Formerly, most HEP experiments in Japan used a centralized computing model. Originally HEPnet-J had only one instance connected to the Internet; more recently it comprises many closed networks connecting domestic sites. At that time, network connectivity in Japan was very poor and the main purpose of HEPnet-J was to provide sufficient connectivity for interactive use over...
    Go to contribution page
  466. Jan Justinus Keijser (NIKHEF)
    Track8: Performance increase and optimization exploiting hardware features
    poster presentation
    In the past, grid worker nodes struck a reasonable balance between the number of cores, the amount of available memory, the available disk space and the maximum network bandwidth. This led to an operating model where worker nodes were "carved up" into single-core job slots, each of which would execute a HEP workload job. Typical worker nodes would have up to 16 computing cores, with roughly 2...
    Go to contribution page
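The "carving up" of a worker node described above comes down to simple arithmetic: the slot count is bounded by whichever resource runs out first. A minimal sketch, where the 2 GB-per-job figure is the typical rule of thumb mentioned in the abstract, not a specification:

```python
def job_slots(cores, mem_gb, mem_per_job_gb=2.0):
    """Illustrative only: single-core job slots on a worker node are
    limited by both the core count and the available memory."""
    return min(cores, int(mem_gb // mem_per_job_gb))

print(job_slots(16, 32))  # 16 -- cores and memory in balance
print(job_slots(16, 24))  # 12 -- memory-bound node
```

When per-job memory demands grow (or nodes gain many more cores), this balance breaks, which is what motivates the multi-core scheduling discussed in the contribution.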
  467. Dr Sophie Catherine Ferry (CEA/IRFU,Centre d'etude de Saclay Gif-sur-Yvette (FR))
    Track6: Facilities, Infrastructure, Network
    poster presentation
    GRIF is a distributed Tier-2 centre, made up of 6 different centres in the Paris region (France) and serving many VOs. The sub-sites are connected by a 10 Gbit/s private network and share tools for central management. One of the sub-sites, GRIF-IRFU, hosted and maintained at the CEA-Saclay centre, moved a year ago to configuration management using Puppet. Thanks to the versatility of...
    Go to contribution page
  468. Dr Mario Lassnig (CERN)
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    This contribution details the deployment of Rucio, the ATLAS Distributed Data Management system. The main complication is that Rucio interacts with a wide variety of external services, and connects globally distributed data centres under different technological and administrative control, at an unprecedented data volume. It is therefore not possible to create a duplicate instance of Rucio for...
    Go to contribution page
  469. Zbigniew Baranowski (CERN)
    Track3: Data store and access
    poster presentation
    Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system is in certain cases very demanding on the technology and therefore on costs. Notably one of the critical...
    Go to contribution page
  470. Ben Jones (CERN)
    Track7: Clouds and virtualization
    poster presentation
    When CERN migrated its infrastructure away from home-grown fabric management tools to emerging industry-standard open-source solutions, the immediate technical challenges and motivation were clear. The move to a multi-site Cloud Computing model meant that the toolchains that were growing around this ecosystem would be a good choice, the challenge was to leverage them. The use of...
    Go to contribution page
  471. Dr Alexei Klimentov (Brookhaven National Laboratory (US))
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The Big Data processing needs of the ATLAS experiment grow continuously, as more data and more use cases emerge. For Big Data processing the ATLAS experiment adopted the data transformation approach, where software applications transform the input data into outputs. In the ATLAS production system, each data transformation is represented by a task, a collection of many jobs, submitted by the...
    Go to contribution page
  472. Dr Maria Grazia Pia (Universita e INFN (IT))
    Track5: Computing activities and Computing models
    poster presentation
    An extensive scientometric assessment of the literature is presented, which documents the prominent role achieved by Monte Carlo methods, and simulation in general, in particle physics and related fields (nuclear physics, astrophysics, medical physics etc.). As an example of their pervasiveness, one can remark that currently approximately 50% of the papers published in major,...
    Go to contribution page
  473. Ms Shan Zeng (IHEP)
    Track6: Facilities, Infrastructure, Network
    poster presentation
    This paper describes two research aspects and practices of SDN at IHEP. The first is the SDN practice for data transfer across the Internet, in which a virtual private network based on SDN is designed and built, and an intelligent network routing algorithm is developed and deployed in the SDN controller to make full use of IPv6 resources. Experimental results show that this practice...
    Go to contribution page
  474. Stefan Roiser (CERN)
    Track3: Data store and access
    poster presentation
    In this contribution we describe the activities and the technical aspects that led to the construction of a public prototype for LHCb file access that is built on HTTP and WebDAV, supporting file access for distributed computing data management and data processing activities as well as seamless interactive access via web browsers. The LHCb replica naming scheme provides characteristics that...
    Go to contribution page
  475. Thomas Hartmann (KIT - Karlsruhe Institute of Technology (DE))
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The FTS service provides a transfer-job scheduler to distribute and replicate vast amounts of data over the heterogeneous WLCG infrastructures. The most recent version, FTS3, simplifies and improves flexibility compared to the channel model of the previous incarnations, while reducing the load on the service components. The improvements allow handling a higher number of transfers with a...
    Go to contribution page
  476. Tomoteru Yoshie (University of Tsukuba)
    Track3: Data store and access
    poster presentation
    JLDG is a data-grid for the lattice QCD (LQCD) community in Japan. Several large research groups in Japan have been working on lattice QCD simulations using supercomputers distributed over distant sites. The JLDG provides such collaborations with an efficient method of data management and sharing. File servers installed on 9 sites are connected to the NII SINET VPN called HEPnet-J/sc...
    Go to contribution page
  477. Wolfgang Waltenberger (Austrian Academy of Sciences (AT))
    Track2: Offline software
    poster presentation
    We present a general procedure to decompose Beyond the Standard Model (BSM) collider signatures into Simplified Model Spectrum (SMS) topologies. Our method provides a way to cast BSM predictions for the LHC in a model independent framework, which can be directly confronted with the relevant experimental constraints. Our python implementation currently focusses on supersymmetry searches with...
    Go to contribution page
  478. Thomas Hauth (KIT)
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    Belle II is a next-generation B-factory experiment that will collect 50 times more data than its predecessor, Belle. The higher luminosity at the SuperKEKB accelerator leads to higher background and requires a major upgrade of the detector. As a consequence, the simulation, reconstruction, and analysis software also has to be upgraded substantially, and most parts have in fact been newly written...
    Go to contribution page
  479. Takashi Matsushita (Austrian Academy of Sciences (AT))
    Track1: Online computing
    poster presentation
    The Global Trigger is the final step of the CMS level-1 trigger and implements a trigger menu, a set of selection requirements applied to the final list of objects from calorimeter and muon triggers to meet the physics objectives. The conditions for trigger object selection, with possible topological requirements on multi-object triggers, are combined by simple combinatorial logic (AND-OR-NOT)...
    Go to contribution page
  480. Robert Group (University of Virginia)
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study nu-e appearance in a nu-mu beam. NOvA has already produced more than 1 million Monte Carlo and detector generated files amounting to more than 1 PB in size. This data is divided between a number of parallel streams such as far and near detector beam spills, cosmic ray backgrounds, a number of data-driven...
    Go to contribution page
  481. Robert Group (University of Virginia)
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The NOvA software (NOvASoft) is written in C++ and built on the Fermilab Computing Division's ART framework, which uses ROOT analysis software. NOvASoft makes use of more than 50 external software packages, is developed by more than 50 developers, and is used by more than 100 physicists from over 30 universities and laboratories on 3 continents. The software builds are handled by Fermilab's...
    Go to contribution page
  482. Dr Santiago Gonzalez De La Hoz (Instituto de Fisica Corpuscular (ES))
    Track6: Facilities, Infrastructure, Network
    poster presentation
    The goal of this work is to describe how the Spanish ATLAS Tier-2 is addressing the main challenges of Run 2. The considerable increase in energy and luminosity for the upcoming Run 2 with respect to Run 1 has led to a revision of the ATLAS computing model, as well as of some of the main ATLAS computing tools. The adaptation to these changes will be shown, with the peculiarity that it is a...
    Go to contribution page
  483. Victor Manuel Fernandez Albor (Universidade de Santiago de Compostela (ES))
    Track7: Clouds and virtualization
    poster presentation
    Cloud computing is emerging today as the new approach followed by computing centres, since the flexibility the cloud provides is a powerful component in managing their resources. Through the use of virtualization, clouds promise to address, with the same shared set of physical resources, a large user base with different needs. However, virtualization may induce significant performance penalties...
    Go to contribution page
  484. Shaun de Witt (STFC)
    Track3: Data store and access
    poster presentation
    Within WLCG much has been discussed concerning the possible demise of the Storage Resource Manager (SRM) and replacing it with different technologies such as XrootD and WebDAV. Each of these storage interfaces presents different functionalities and experiments currently make use of all of these at different sites. At the RAL Tier-1 we have been monitoring the usage of both SRM and XrootD by...
    Go to contribution page
  485. Dr Giacinto Donvito (INFN-Bari)
    Track7: Clouds and virtualization
    poster presentation
    At INFN-Bari we have set-up an OpenStack-based cloud infrastructure in the framework of a publicly funded project, PRISMA, aimed at implementing a fully integrated PaaS+IaaS platform to provide services in the field of public administration and scientific data analysis. The IaaS testbed currently consists of 25 compute nodes providing in total almost 600 physical cores, 3 TB of RAM, 400 TB of...
    Go to contribution page
  486. Dr Peter Love (Lancaster University (GB))
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The operation of distributed computing systems requires comprehensive monitoring to ensure reliability and robustness. There are two components found in most monitoring systems: one being visually rich time-series graphs and another being notification systems for alerting operators under certain pre-defined conditions. In this paper the sonification of monitoring messages is explored using an...
    Go to contribution page
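    As a rough illustration of the idea (the mapping here is entirely hypothetical and not the one used in the paper), a sonification layer might quantise message severity onto a pentatonic scale, so that overlapping alerts remain consonant rather than dissonant:

    ```python
    # Hypothetical sketch: map a monitoring message's severity level to a
    # pitch, quantised to a C-major pentatonic scale (MIDI note -> Hz).
    A4 = 440.0
    PENTATONIC = [0, 2, 4, 7, 9]  # semitone offsets within one octave

    def severity_to_freq(severity: int, base_midi: int = 60) -> float:
        """Map severity 0..N to ascending pentatonic scale degrees."""
        octave, degree = divmod(severity, len(PENTATONIC))
        midi = base_midi + 12 * octave + PENTATONIC[degree]
        return A4 * 2 ** ((midi - 69) / 12)

    for level, name in enumerate(["DEBUG", "INFO", "WARNING", "ERROR", "FATAL"]):
        print(f"{name:8s} -> {severity_to_freq(level):7.1f} Hz")
    ```

    Quantising to a fixed scale is one common design choice in sonification work, since an operator listening to many concurrent streams needs pitch differences to carry information without becoming noise.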
  487. Mr Carlos Ghabrous Larrea (University of Wisconsin (US))
    Track1: Online computing
    poster presentation
The CMS (Compact Muon Solenoid) L1 (Level-1) Trigger electronics are composed of a large number of different cards based on the VMEBus standard. The majority of the system is being replaced to adapt the trigger to the higher collision rates the LHC will deliver after LS1 (Long Shutdown 1), the first phase of the CMS upgrade program. As a consequence, the software that controls, monitors...
    Go to contribution page
  488. Duncan Rand (Imperial College Sci., Tech. & Med. (GB))
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    In the lead up to Run 2 of the LHC the WLCG grid middleware, storage access protocols and LHC computing models are in a state of flux. The LCG utilities and SRM middleware are being phased out, IPv6 is being rolled out across the WLCG and LHC experiments are making increasing use of xrootd federated access to storage elements over the WAN. However, both client and server software and WLCG...
    Go to contribution page
  489. Adriana Telesca (CERN)
    Track6: Facilities, Infrastructure, Network
    poster presentation
ALICE (A Large Ion Collider Experiment) is an experiment at the CERN LHC (Large Hadron Collider) studying the physics of strongly interacting matter and the quark-gluon plasma. The experiment collaboration counts more than 1500 members from 148 institutes in 39 countries. During the experiment's start-up in 2008 and the following years of data taking, the information about members was...
    Go to contribution page
  490. Adriana Telesca (CERN)
    Track6: Facilities, Infrastructure, Network
    poster presentation
ALICE (A Large Ion Collider Experiment) is an experiment at the CERN LHC (Large Hadron Collider) studying the physics of strongly interacting matter and the quark-gluon plasma. The experiment operation requires a “shift” crew at the experimental site 24 hours per day, 7 days a week, composed of ALICE collaboration members. Shift duties are calculated for each institute according to...
    Go to contribution page
  491. Francesco Giovanni Sciacca (Universitaet Bern (CH))
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The current distributed computing resources used for simulating and processing collision data collected by the LHC experiments are largely based on dedicated Linux clusters. Job control and software provisioning mechanisms are quite different from the common concept of self-contained HPC applications run by particular users on specific HPC systems. This poster reports on the development...
    Go to contribution page
  492. Andreas Salzburger (CERN)
    Track2: Offline software
    poster presentation
    During the last years ATLAS has successfully deployed a new integrated simulation framework (ISF) which allows a flexible mixture of full and fast detector simulation techniques within the processing of one event. With the ISF, the simulation execution speed could be increased up to a factor 100, which makes subsequent digitisation and reconstruction processing the dominant contributions to...
    Go to contribution page
  493. Lukas Alexander Heinrich (New York University (US))
    Track1: Online computing
    poster presentation
During the 2013/14 shutdown of the Large Hadron Collider (LHC) the ATLAS first level trigger (L1T) and the data acquisition system (DAQ) were substantially upgraded to cope with the increase in luminosity and collision multiplicity expected to be delivered by the LHC in 2015. Among other upgrades, the L1T was extended on the calorimeter side (L1Calo) to better cope with pile-up and to apply...
    Go to contribution page
  494. Chia-Ling Hsu (University of Melbourne)
    Track5: Computing activities and Computing models
    poster presentation
The basf2 software framework has been developed for the Belle II experiment, the next generation B-factory experiment at the KEK Laboratory. Belle II will collect 50 times more data than the previous Belle experiment and has a commensurate increase in computing requirements. Consequently Belle II has adopted a distributed computing solution to provide the computing resources required of the...
    Go to contribution page
  495. Bruno Silva De Sousa (CERN)
    Track6: Facilities, Infrastructure, Network
    poster presentation
While travelling, we expect to have access to the Internet, or to be able to check a mailbox. But until recently, it was difficult to maintain voice conversations while away from one's working place. In some cases mobile phones can be used, but roaming charges are high when abroad. At CERN we have deployed Lync, a Voice-over-IP system that fills this gap. Once a CERN user has requested a...
    Go to contribution page
  496. Zhen Xie (Princeton University (US))
    Track1: Online computing
    poster presentation
Beam Radiation Instrumentation and Luminosity (BRIL) is a new project within CMS. It consists of several independent sub-detectors for measuring the luminosity, monitoring the beam conditions, and protecting CMS from serious radiation damage. It is beneficial for the project in the long run to use a single software infrastructure for data acquisition. Similar to the CMS central DAQ, BRIL...
    Go to contribution page
  497. Giacomo Govi (Fermi National Accelerator Lab. (US))
    Track3: Data store and access
    poster presentation
The Condition Database plays a key role in the CMS computing infrastructure. The complexity of the detector and the variety of the sub-systems involved set tight requirements for handling the Conditions. In the last two years the collaboration has put effort into the redesign of the Condition Database system, with the aim of improving the scalability and the operability for the data...
    Go to contribution page
  498. Alex Christopher Martyniuk (University College London)
    Track1: Online computing
    poster presentation
    This contribution describes the trigger selection configuration system of the ATLAS low- and high-level trigger (HLT) and the upgrades it received in preparation for LHC Run 2. The ATLAS trigger configuration system is responsible for applying the physics selection parameters for the online data taking at both trigger levels and the proper connection of the trigger lines across those...
    Go to contribution page
  499. Mr Lirim Osmani (Department of Computer Science, University of Helsinki)
    Track7: Clouds and virtualization
    poster presentation
    The topic of data storage and analysis on Cloud infrastructures has gained importance in recent years. The High Energy Physics community is interested in performing simulations and data analysis on public or private Cloud facilities. Currently the simulations and analysis are performed mostly on a computing and data Grid. The software and experience of operating on a Grid needs to be adapted...
    Go to contribution page
  500. Federico Stagni (CERN)
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    For many years the DIRAC interware (Distributed Infrastructure with Remote Agent Control) has had a web interface, allowing the users to monitor DIRAC activities and also interact with the system. Since then many new web technologies have emerged, therefore a redesign and a new implementation of the DIRAC Web portal were necessary, taking into account the lessons learnt using the old...
    Go to contribution page
  501. Aleksandra Wardzinska (CERN)
    Track5: Computing activities and Computing models
    poster presentation
    Large-scale long-term projects such as the LHC require the ability to store, manage, organize and distribute large amounts of engineering information, covering a wide spectrum of fields. This information is a living material, evolving in time, following various lifecycles. It has to reach the next generations of engineers so they understand how their predecessors designed, crafted, operated...
    Go to contribution page
  502. Roger Jones (Lancaster University (GB))
    Track7: Clouds and virtualization
    poster presentation
    With the data output from the LHC increasing, many of the LHC experiments have made significant improvements to their code to take more advantage of the underlying CPU architecture and advanced features. With the grid environment changing to heavily include virtualisation and cloud services, we look at whether these two systems can be compatible, or whether improvements in code are lost...
    Go to contribution page
  503. Dr Hans-Joachim Wenzel (Fermi National Accelerator Lab. (US))
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    We describe the Geant4 physics validation repository and the technology used to implement it. The Geant4 collaboration regularly performs validation and regression tests where results obtained with a new Geant4 version are compared to data obtained by various HEP experiments or the results of previous releases. As the number of regularly performed validation tests increases and the...
    Go to contribution page
  504. Tobias Schlüter (LMU München)
    Track2: Offline software
    poster presentation
    GENFIT is an experiment-independent, universal track-fitting package, available under a free software license. It implements a variety of track-fitting algorithms and provides the surrounding functionality needed by particle physics experiments: general handling of detector hits, supplemented with example implementations for various detector types; track extrapolation code; a track-data model...
    Go to contribution page
  505. Dr Dario Barberis (Università e INFN Genova (IT))
    Track3: Data store and access
    poster presentation
In this paper we describe specific technical solutions put in place in various database applications of the ATLAS experiment at the LHC, where we make use of several partitioning techniques available in Oracle 11g. With the broadly used range partitioning and its option of automatic interval partitioning, we add our own logic in PL/SQL procedures and scheduler jobs to sustain data sliding windows in...
    Go to contribution page
  506. Norman Anthony Graf (SLAC National Accelerator Laboratory (US))
    Track2: Offline software
    poster presentation
    Detectors at future electron-positron linear colliders such as ILC or CLIC will require unprecedentedly precise tracking, vertexing, and calorimetry in order to meet the ambitious physics goals of the experimental program. The physics performance of different detector geometries and technologies has to be realistically estimated. These assessments require sophisticated and flexible full...
    Go to contribution page
  507. Jerome Odier (Centre National de la Recherche Scientifique (FR))
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
The ATLAS Metadata Interface (AMI) can be considered to be a mature application because it has existed for at least 10 years. Over the last year, we have been adapting the application to some recently available technologies. The web interface, which previously manipulated XML documents using XSL transformations, has been migrated to Asynchronous JavaScript and XML (AJAX). Web development has been...
    Go to contribution page
  508. Mr Barthelemy Von Haller (CERN)
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    A Large Ion Collider Experiment (ALICE) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). The online Data Quality Monitoring (DQM) plays an essential role in the experiment operation by providing shifters with immediate feedback on the data being recorded in order to quickly identify and...
    Go to contribution page
  509. Eric Cano (CERN)
    Track3: Data store and access
    poster presentation
    CASTOR (the CERN Advanced STORage system) is used to store the custodial copy of all of the physics data collected from the CERN experiments, both past and present. CASTOR is a hierarchical storage management system that has a disk-based front-end and a tape-based back-end. The software responsible for controlling the tape back-end has been redesigned and redeveloped over the last year and...
    Go to contribution page
  510. Susan Kasahara (University of Minnesota)
    Track1: Online computing
    poster presentation
    The NO$\nu$A (NuMI Off-Axis $\nu_{e}$ Appearance) experiment is a long-baseline neutrino experiment using the NuMI main injector neutrino beam at Fermilab and is designed to search for $\nu_{\mu}$ ($\bar{\nu}_{\mu}$) to $\nu_{e}$ ($\bar{\nu}_{e}$) oscillations. The experiment consists of two detectors; both positioned 14 mrad off the beam axis: a 220 ton Near Detector constructed in an...
    Go to contribution page
  511. Dr Peter Shanahan (Fermilab)
    Track1: Online computing
    poster presentation
The NOvA experiment studies neutrino oscillations with 2 functionally identical detectors separated by a baseline of 810 km. The Data Acquisition (DAQ) system for the far detector in Ash River in Minnesota comprises more than 10,000 Front End Boards and a cluster of 168 custom PPC-based and 206 COTS x86 Linux nodes performing a variety of functions. An Error Handling system has been...
    Go to contribution page
  512. Prof. Gianluigi Boca (University of Pavia and INFN, Italy)
    Track2: Offline software
    poster presentation
    PANDA is an antiproton-proton experiment that will run at center-of-mass energies from 2.25 to 5.46 GeV at the new facility FAIR in Darmstadt, Germany. In order to achieve the broad range of physics goals of PANDA, a triggerless data acquisition and a high luminosity (20 MHz interaction rate) are necessary. This talk will concentrate on the Pattern Recognition software of the...
    Go to contribution page
  513. Stewart Martin-Haugh (STFC - Rutherford Appleton Lab. (GB))
    Track1: Online computing
    poster presentation
    A description of the design and performance of the newly reimplemented tracking algorithms for the ATLAS trigger for LHC Run 2, to commence in spring 2015, is provided. The ATLAS High Level Trigger (HLT) has been restructured to run as a more flexible single stage process, rather than the two separate Level 2 and Event Filter stages used during Run 1. To make optimal use of this new...
    Go to contribution page
  514. Leo Piilonen (Virginia Tech)
    Track2: Offline software
    poster presentation
    SuperKEKB and Belle II, the next generation B factory and its detector counterpart, are being constructed in Japan, as an upgrade of KEKB and Belle, respectively. The commissioning of the new SuperKEKB collider will be started in 2015. The luminosity of this e+ e− collider will be increased by a factor of 40, which will create a data sample 50 times larger than the previous Belle sample. Both...
    Go to contribution page
  515. Dr Jiaheng Zou (IHEP), Prof. Weidong Li (IHEP), Prof. Xingtao Huang (SDU)
    Track2: Offline software
    poster presentation
    SNiPER (the abbreviation of Software for Non-collider Physics ExpeRiments) has been developed based on common requirements from both cosmic ray and nuclear reactor neutrino experiments. This contribution will introduce the detailed design and implementation of the SNiPER software. Compared to the existing offline software frameworks in the high energy physics domain, the design of SNiPER is...
    Go to contribution page
  516. Dr Sergey Linev (GSI DARMSTADT)
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
The new THttpServer class in ROOT implements an HTTP server for arbitrary ROOT-based applications. It is based on the embeddable Civetweb server and provides direct access to all objects registered with the server. THttpServer also supports the FastCGI interface and can therefore be integrated with many standard web servers like Apache. The main advantage of using an HTTP server in ROOT is that one could access objects...
    Go to contribution page
  517. Eygene Ryabinkin (National Research Centre Kurchatov Institute (RU), Moscow Institute for Physics and Technology (RU))
    Track8: Performance increase and optimization exploiting hardware features
    poster presentation
We present the status of RRC-KI-T1, the new Russian Tier-1 that supports ALICE, ATLAS and LHCb. Our aim is to enter full production mode just before the beginning of Run-2, and we will talk about our current setup, deployed services and middleware, workflow, achievements and problems on the route to bringing yet another Tier-1 into WLCG. Another facet of our current activity is making the...
    Go to contribution page
  518. Andressa Gomes (Univ. Federal do Rio de Janeiro (BR)), Carlos Solans Sanchez (CERN)
    Track6: Facilities, Infrastructure, Network
    poster presentation
The ATLAS Tile Calorimeter assesses the quality of its data in order to ensure proper operation. A number of tasks are performed by running several tools and systems, which were independently developed to meet distinct collaboration requirements and do not necessarily build an effective connection among them. Thus, a program is usually implemented without a global perspective of the...
    Go to contribution page
  519. Andrew Norman (Fermilab)
    Track1: Online computing
    poster presentation
The NOvA experiment uses a GPS-based timing system both to synchronize the readout of the DAQ components internally and to establish an absolute “wall clock” reference which can be used to link the Fermilab accelerator complex with the neutrino flux that crosses the NOvA detectors. We describe the methods that were used during the commissioning of the NOvA DAQ and Timing systems to...
    Go to contribution page
  520. Tony Cass (CERN)
    Track6: Facilities, Infrastructure, Network
    poster presentation
    The advent of mobile telephony and VoIP has significantly impacted the traditional telephone exchange industry---to such an extent that private branch exchanges are likely to disappear completely in the near future. For large organisations, such as CERN, it is important to be able to smooth this transition by implementing new voice platforms that can protect past investments and the...
    Go to contribution page
  521. Alexander Baranov (ITEP Institute for Theoretical and Experimental Physics (RU))
    Track7: Clouds and virtualization
    poster presentation
The computational power distributed amongst user hardware: laptops, PCs and even smartphones, is enormous. It is not exceptional that volunteer computational networks provide computational power comparable to that of modern supercomputers. The problem is that utilization of those resources is difficult from the volunteer's (user's) point of view as well as from the computation provider's...
    Go to contribution page
  522. Andrey Ustyuzhanin (ITEP Institute for Theoretical and Experimental Physics (RU))
    Track6: Facilities, Infrastructure, Network
    poster presentation
Data analysis in fundamental sciences is nowadays an essential process that pushes the frontiers of our knowledge and leads to new discoveries. At the same time we can see that the complexity of those analyses increases exponentially due to a) the enormous volumes of the datasets being analyzed, b) the variety of techniques and algorithms one has to check inside a single analysis, c) the distributed nature of research...
    Go to contribution page
  523. Gaelle Boudoul (Universite Claude Bernard-Lyon I (FR))
    Track2: Offline software
    poster presentation
The CMS experiment has a multi-faceted detector upgrade program planned over the next decade. The silicon tracker system plans an improved pixel detector for 2017 and proposes an entirely new tracker for the high-luminosity LHC run. In this presentation, we discuss the tools developed and used in the design, simulation and reconstruction of the upgraded tracker, including completely new...
    Go to contribution page
  524. Dr Tony Wildish (Princeton University (US))
    Track5: Computing activities and Computing models
    poster presentation
At the beginning of Run-1, CMS was operating its facilities according to the MONARC model, where data transfers were strictly hierarchical in nature. Direct transfers between Tier-2 nodes were excluded, being perceived as operationally intensive and risky in an era where the network was expected to be a major source of errors. By the end of Run-1, wide-area networks were more capable and stable...
    Go to contribution page
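    The scale of the shift away from strict hierarchy can be quantified by counting possible transfer channels. A back-of-the-envelope sketch in Python, with illustrative site counts (not the actual CMS numbers):

    ```python
    # Illustrative comparison of transfer channels under a strict
    # MONARC-style tree versus a full mesh of sites.

    def hierarchical_links(n_t1: int, t2_per_t1: int) -> int:
        """Tree topology: T0-T1 links plus each T2 linked only to its parent T1."""
        return n_t1 + n_t1 * t2_per_t1

    def mesh_links(n_sites: int) -> int:
        """Full mesh: every pair of sites may transfer directly."""
        return n_sites * (n_sites - 1) // 2

    n_t1, t2_per_t1 = 7, 7                      # hypothetical counts
    n_sites = 1 + n_t1 + n_t1 * t2_per_t1       # T0 + T1s + T2s = 57
    print(hierarchical_links(n_t1, t2_per_t1))  # 56 managed channels
    print(mesh_links(n_sites))                  # 1596 possible channels
    ```

    The quadratic growth of the mesh is exactly why direct Tier-2 transfers were originally seen as operationally risky, and why stronger networks were needed before the restriction could be relaxed.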
  525. Paul Millar (Deutsches Elektronen-Synchrotron (DE))
    Track3: Data store and access
    poster presentation
X.509, the dominant identity system from grid computing, has proved unpopular with many user communities. More popular alternatives generally assume the user is interacting via their web browser. Such alternatives allow a user to authenticate with many services using the same credentials (username and password). They also allow users from different organisations to form collaborations...
    Go to contribution page
  526. Michelle Kuchera (National Superconducting Cyclotron Laboratory, Michigan State University)
    Track2: Offline software
    poster presentation
    Production of new isotopes is one of the opportunities at the intensity frontier of nuclear physics. The associated science ranges from tests of the Standard Model to exploration of the origin and evolution of the chemical elements in the universe. Leading facilities in this effort are RIBF at RIKEN, TRIUMF in Canada, and ISOLDE at CERN. New large scale facilities under development at the...
    Go to contribution page
  527. Dr Giuseppe Avolio (CERN)
    Track1: Online computing
    poster presentation
The ATLAS data acquisition (DAQ) system is controlled and configured via a software infrastructure that takes care of coherently orchestrating the data taking. While the overall architecture, established at the end of the 90’s, has proven to be solid and flexible, many software components have undergone a complete redesign or re-implementation in 2013/2014 in order to fold in the additional...
    Go to contribution page
  528. Roger Jones (Lancaster University (GB))
    Track7: Clouds and virtualization
    poster presentation
Virtualisation is a key tool on the grid. It can be used to provide varying work environments or as part of a cloud infrastructure. Virtualisation itself carries certain overheads that decrease the performance of the system, through requiring extra resources to virtualise the software and hardware stack, and through CPU cycles wasted instantiating or destroying virtual machines for each job. With the...
    Go to contribution page
  529. Christopher John Walker (University of London (GB)), Daniel Peter Traynor (University of London (GB))
    Track6: Facilities, Infrastructure, Network
    poster presentation
Jumbo frames (with an MTU of 9000 bytes rather than the Ethernet standard of 1500) have potential performance advantages for WAN transfers. Whilst many national and international research and education networks support their use, they are not widely supported at end sites. Furthermore, firewalls at some end sites block path MTU discovery, leading to potential performance bottlenecks. QMUL...
    Go to contribution page
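    The per-packet header overhead behind this advantage is easy to quantify. A minimal sketch, assuming plain IPv4 and TCP headers with no options (20 bytes each):

    ```python
    # Fraction of each IP packet that carries user data, for standard
    # (1500-byte) vs jumbo (9000-byte) Ethernet MTUs.

    def payload_efficiency(mtu: int, ip_hdr: int = 20, tcp_hdr: int = 20) -> float:
        """Payload bytes divided by total IP packet size at a given MTU."""
        return (mtu - ip_hdr - tcp_hdr) / mtu

    for mtu in (1500, 9000):
        print(f"MTU {mtu}: {payload_efficiency(mtu):.2%} payload")
    ```

    The efficiency gain (roughly 97.3% to 99.6%) is modest; in practice the larger win is the factor-of-six reduction in packets, and hence per-packet interrupts and TCP processing, for the same transfer volume.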
  530. Giovanni Franzoni (CERN)
    Track3: Data store and access
    poster presentation
    A wide range of detector commissioning, calibration and data analysis tasks is carried out by members of the Compact Muon Solenoid (CMS) collaboration using dedicated storage resources available at the CMS CERN Tier-2 centre. Relying on the functionalities of the EOS disk-only storage technology, the optimal exploitation of the CMS user/group resources has required the introduction of...
    Go to contribution page
  531. Jan Justinus Keijser (NIKHEF)
    Track6: Facilities, Infrastructure, Network
    poster presentation
The Intel Galileo Arduino board is a low-cost, low-power 32-bit Pentium-class computer. It is normally used for embedded devices but it can also run a full-blown version of Linux. Grid security can be greatly enhanced by using a hardware token for two-factor authentication. Two-factor authentication is based on the idea that in order to obtain access you need both something you know (i.e. a...
    Go to contribution page
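    The "something you have" factor of such a token typically produces a one-time password. As a sketch, here is the standard HOTP algorithm (RFC 4226), a common choice for hardware tokens; the abstract does not specify which scheme the Galileo implementation uses:

    ```python
    # RFC 4226 counter-based one-time password: HMAC-SHA1 over a 64-bit
    # counter, followed by dynamic truncation to a short decimal code.
    import hashlib
    import hmac
    import struct

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        """Return the `digits`-digit one-time password for this counter value."""
        msg = struct.pack(">Q", counter)                 # big-endian counter
        digest = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                       # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10**digits).zfill(digits)

    print(hotp(b"12345678901234567890", 0))  # RFC 4226 test vector: 755224
    ```

    The token and the server share the secret and the counter; each press of the token's button advances the counter and displays the next code, which the server verifies independently.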
  532. Marco Clemencic (CERN)
    Track2: Offline software
    poster presentation
    The LHCb Software Framework Gaudi is a C++ software framework for HEP applications used by several experiments. Although Gaudi is extremely flexible and extensible, its adoption is limited by the lack of certain components that are fundamental for the software framework of an experiment, in particular a detector description framework, whose implementation is delegated to the adopters. To...
    Go to contribution page
  533. Sandro Christian Wenzel (CERN)
    Track8: Performance increase and optimization exploiting hardware features
    poster presentation
A geometry modeller library is among the most important components of the software simulating the passage of particles through a detector, and many experiment simulations are currently based on the geometry implementations offered by Geant4 or ROOT. Here, we report on our effort to extend, re-engineer and evolve these libraries in multiple directions in order to make them...
    Go to contribution page
  534. Dr Jonathan Dorfan (OIST)
  535. Luca Magnoni (CERN)
    Track4: Middleware, software development and tools, experiment frameworks, tools for distributed computing
    poster presentation
    The WLCG monitoring system solves a challenging task of keeping track of the LHC computing activities on the WLCG infrastructure, ensuring health and performance of the distributed services at more than 160 sites. The current challenge consists of decreasing the effort needed to operate the monitoring service and to satisfy the constantly growing requirements for its scalability and...
    Go to contribution page