3–10 Aug 2016
Chicago IL USA
US/Central timezone
There is a live webcast for this event.

Session

Computing

4 Aug 2016, 11:30
Sheraton Grand Chicago, 301 East North Water Street, Chicago IL 60611 USA

Conveners

Computing: Overview

  • Randy Sobie (University of Victoria (CA))

Computing: Infrastructure

  • Elizabeth Sexton-Kennedy (Fermi National Accelerator Lab. (US))

Computing: Physics Software 1

  • Doris Kim (Soongsil University)

Computing: Physics Software 2

  • Randy Sobie (University of Victoria (CA))

Computing: Data Management

  • Salman Habib (Argonne National Laboratory)


  1. Gianluca Cerminara (CERN)
    04/08/2016, 11:30
    Computing and Data Handling
    Oral Presentation
    The restart of the LHC coincided with a period of intense activity for the CMS experiment. Both at the beginning of Run II in 2015 and at the restart of operations in 2016, the collaboration was engaged in an extensive re-commissioning of the CMS data-taking operations. After the long stop, the detector was fully aligned and calibrated. Data streams were redesigned to fit the priorities dictated by the...
  2. Jack Cranshaw (Argonne National Laboratory (US))
    04/08/2016, 11:50
    Computing and Data Handling
    Oral Presentation
    During the Long Shutdown of the LHC, the ATLAS collaboration overhauled its analysis model based on experience gained during Run 1. The main components are a new analysis format and Event Data Model that can be read directly by ROOT, as well as a "Derivation Framework" that takes the petabyte-scale output from ATLAS reconstruction and produces smaller samples targeted at specific analyses,...
  3. Hans-Joachim Wenzel (Fermi National Accelerator Lab. (US))
    04/08/2016, 12:10
    Computing and Data Handling
    Oral Presentation
    Geant4 is a toolkit for the simulation of the passage of particles through matter. Its areas of application include high energy, nuclear and accelerator physics as well as studies in medical and space science. The Geant4 collaboration regularly performs validation and regression tests through its development cycle. A validation test compares results obtained with a specific Geant4 version with...
  4. Nikiforos Nikiforou (CERN)
    04/08/2016, 12:30
    Computing and Data Handling
    Oral Presentation
    The ILC/CLIC linear collider community has for many years followed a strategy of developing common and generic software tools for studying the physics potential as well as continuously optimizing their detector concepts. The basis of the software framework is formed by the common event data model LCIO and the detector description toolkit DD4hep. DD4hep is a recently developed, generic detector...
  5. Malachi Schram
    04/08/2016, 12:50
    Computing and Data Handling
    Oral Presentation
    The Belle II experiment is the next-generation flavor factory experiment at the SuperKEKB accelerator in Tsukuba, Japan. The first physics run will take place in 2017, after which the luminosity will be increased gradually, ultimately reaching the world's highest luminosity of L = 8×10^35 cm⁻²s⁻¹ and collecting a total of 50 ab⁻¹ of data by the end of 2024. Such a huge amount of data allows us to explore the...
  6. Wesley Gohn (University of Kentucky)
    04/08/2016, 14:30
    Computing and Data Handling
    Oral Presentation
    Graphical Processing Units (GPUs) have recently become a valuable computing tool for the acquisition of data at high rates and at relatively low cost. The devices work by parallelizing the code into thousands of threads, each executing a simple process, such as identifying pulses from a waveform digitizer. The CUDA programming library can be used to effectively write code to...
  7. Burt Holzman (Fermi National Accelerator Lab. (US))
    04/08/2016, 14:50
    Computing and Data Handling
    Oral Presentation
    The need for computing in the HEP community follows cycles of peaks and valleys, driven mainly by holiday schedules, conference dates and other factors. Because of this, the classical method of provisioning these resources at dedicated facilities has drawbacks, such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by...
  8. Dr Kenneth Richard Herner (Fermi National Accelerator Laboratory (US))
    04/08/2016, 15:10
    Computing and Data Handling
    Oral Presentation
    The FabrIc for Frontier Experiments (FIFE) project is an initiative within the Fermilab Scientific Computing Division designed to steer the computing model for non-LHC Fermilab experiments across multiple physics areas. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying size, needs, and...
  9. Philippe Canal (Fermi National Accelerator Lab. (US))
    04/08/2016, 15:30
    Computing and Data Handling
    Oral Presentation
    The GeantV project aims to research and develop the next-generation simulation software describing the passage of particles through matter, targeting not only modern CPU architectures but also more exotic resources such as GPGPUs, Intel Xeon Phi, Atom or ARM, which can no longer be ignored for HEP computing. While the proof-of-concept GeantV prototype has been mainly engineered for CPU...
  10. Lisa Gerhardt (LBNL), Taylor Childers (Argonne National Laboratory (US))
    04/08/2016, 15:50
    Computing and Data Handling
    Oral Presentation

    The integration of HPC resources into the standard computing toolkit of HEP experiments is becoming important as traditional resources are being outpaced by the needs of the experiments. We will describe solutions that address some of the difficulty in running data-intensive pipelines on HPC systems. Users of NERSC HPCs are benefiting from a newly developed package called "Shifter" that...

  11. Amir Farbin (University of Texas at Arlington (US))
    05/08/2016, 11:30
    Computing and Data Handling
    Oral Presentation
    The recent Deep Learning (DL) renaissance has yielded impressive feats in industry and science that illustrate the transformative potential of replacing laborious feature engineering with automatic feature learning to simplify, enhance, and accelerate raw data processing. One area where DL is particularly helpful is in detector R&D and optimization, where analyzing prototype data or studying...
  12. Sezen Sekmen (Kyungpook National University (KR))
    05/08/2016, 11:50
    Computing and Data Handling
    Oral Presentation
    CMS is able to efficiently provide interpretations of large scans of new physics model parameter spaces, thanks to the availability of a fast simulation of the CMS detector, which serves as a fast and reliable alternative to the full, GEANT-based simulation. Fast simulation becomes particularly crucial with the current increase in LHC energy and luminosity. In this presentation, we will discuss...
  13. Ruth Pordes (Fermilab)
    05/08/2016, 12:10
    Computing and Data Handling
    Oral Presentation
    LArSoft is a toolkit that provides a software infrastructure and algorithms for the simulation, reconstruction and analysis of events in Liquid Argon Time Projection Chambers (LArTPCs). It is currently used by the ArgoNeuT, LArIAT, MicroBooNE, DUNE and SBND experiments. The LArSoft collaboration has been formed to provide an environment for the development, use, and sharing of code across...
  14. Tingjun Yang (FNAL)
    05/08/2016, 12:30
    Computing and Data Handling
    Oral Presentation
    Liquid Argon TPC (LArTPC) technology is increasingly prevalent in large-scale detectors designed to observe neutrino scattering events induced by accelerators or by natural sources. LArTPCs consist of a very high fraction of active detector material with spatial resolutions on the order of a few millimeters. Three-dimensional interactions are imaged in multiple two-dimensional views by the...
  15. Flavia De Almeida Dias (University of Edinburgh (GB))
    05/08/2016, 12:50
    Computing and Data Handling
    Oral Presentation
    A very large number of simulated events is required for physics and performance studies with the ATLAS detector at the Large Hadron Collider. Producing these with the full GEANT4 detector simulation is highly CPU intensive. As a very detailed detector simulation is not always required, fast simulation tools have been developed to reduce the calorimeter simulation time by a few orders of...
  16. Kenneth Bloom (University of Nebraska (US))
    05/08/2016, 14:30
    Computing and Data Handling
    Oral Presentation
    The CMS offline software and computing system has successfully met the challenge of LHC Run 2. In this presentation, we will discuss how the entire system was improved in anticipation of an increased trigger output rate, an increased rate of pileup interactions and the evolution of computing technology. The primary goals behind these changes were to increase the flexibility of computing facilities...
  17. Barbara Sciascia
    05/08/2016, 14:50
    Computing and Data Handling
    Oral Presentation
    In Run 2, LHCb will collect the largest data sample of charm mesons ever recorded. Novel data processing and analysis techniques are required to maximise the physics potential of this data sample with the available computing resources and data preservation constraints. A new data-driven technique has been developed to measure the efficiency of selection requirements relying on particle...
  18. Xiao-Yong Jin (Argonne National Laboratory)
    05/08/2016, 15:15
    Computing and Data Handling
    Oral Presentation
    We present a new software framework for simulating lattice field theories. It features an intuitive programming interface while simultaneously achieving high performance on supercomputers, all in one programming language, Nim. With a macro system based on its abstract syntax tree, the language enables us to check and optimize our code at compile time. It also allows us to code intrinsics that...
  19. Charles Leggett (Lawrence Berkeley National Lab. (US))
    05/08/2016, 15:35
    Computing and Data Handling
    Oral Presentation
    In order to make effective use of emerging hardware, where the amount of memory available to any CPU is rapidly decreasing as the core count continues to rise, ATLAS has begun a migration to a concurrent, multi-threaded software framework known as AthenaMT. Significant progress has been made in implementing AthenaMT: we can currently run realistic Geant4 simulations on massively...
  20. Roel Aaij (CERN)
    05/08/2016, 15:55
    Computing and Data Handling
    Oral Presentation
    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run II. Data collected at the start of the fill will be processed in a few minutes and used to update the alignment, while the calibration constants will be evaluated for each run. This procedure improves the quality of the online alignment. For example, the vertex locator is retracted and reinserted for...
  21. Dustin James Anderson (California Institute of Technology (US))
    06/08/2016, 16:15
    Computing and Data Handling
    Oral Presentation
    In 2011, the CMS collaboration introduced Data Scouting as a way to produce physics results with events that could not be stored on disk due to resource limits in the data acquisition and offline infrastructure. This technique proved effective during 2012, when 18 fb⁻¹ of 8 TeV collisions were collected. It is now a standard ingredient for CMS and ATLAS...
  22. Antonio Falabella (Universita e INFN, Bologna (IT))
    06/08/2016, 16:35
    Computing and Data Handling
    Oral Presentation
    This contribution reports on the experience of the LHCb computing team during LHC Run II and its preparation for Run III. It also gives a brief introduction to LHCbDIRAC, the tool used to interface to the experiment's distributed computing resources for its data processing and data management operations. Run II, which started in 2015, has already seen several changes in the data...
  23. Ilija Vukotic (University of Chicago (US))
    06/08/2016, 16:55
    Computing and Data Handling
    Oral Presentation
    To meet a sharply increasing demand for computing resources in LHC Run 2, ATLAS distributed computing systems reach far and wide to gather CPU and storage capacity to execute an evolving ecosystem of production and analysis workflow tools. Indeed more than a hundred computing sites from the Worldwide LHC Computing Grid, plus many "opportunistic" facilities at HPC centers, universities,...
  24. Dr Jean-Roch Vlimant (California Institute of Technology (US))
    06/08/2016, 17:15
    Computing and Data Handling
    Oral Presentation
    A Monte Carlo (MC) production for a large-scale experiment like CMS is a vast effort, extending to as many as 3000 individual samples to be produced, with different conditions (e.g., detector alignment), different inputs (e.g., parton-shower vs. ME generators) and many workflows (e.g., parametrized simulation vs. detailed GEANT-based simulation). In Run 1 there was a tight coupling of workflow...
  25. Slava Krutelyov (Univ. of California San Diego (US))
    06/08/2016, 17:45
    Computing and Data Handling
    Oral Presentation

    High-luminosity operation of the LHC is expected to deliver proton-proton collisions to the experiments with the average number of pp interactions per bunch crossing reaching 200. In this environment, reconstruction of charged-particle tracks with current algorithms dominates the reconstruction time and is increasingly computationally challenging.
    We discuss the importance of taking computing costs...

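The Geant4 contribution (item 3) describes validation tests that compare results obtained with a specific Geant4 version against reference data. As a rough illustration of what such a regression comparison might look like, here is a minimal Python sketch of a chi-square test between binned distributions; the function name, toy histograms, and agreement cut are illustrative assumptions, not Geant4's actual validation machinery.

```python
def chi2_per_bin(new_hist, ref_hist, ref_err):
    """Chi-square per bin between a new version's histogram and a reference.

    A toy stand-in for the histogram comparison a validation/regression
    test might run; the tolerance used below is an illustrative choice.
    """
    chi2 = sum((n - r) ** 2 / e ** 2
               for n, r, e in zip(new_hist, ref_hist, ref_err))
    return chi2 / len(ref_hist)

# Toy distributions: bin contents from a new release vs. the reference,
# with the reference's per-bin uncertainties.
new = [102.0, 95.0, 51.0]
ref = [100.0, 97.0, 50.0]
err = [10.0, 10.0, 7.0]
score = chi2_per_bin(new, ref, err)
print(score < 2.0)  # True -> versions agree within the toy tolerance
```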
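The Belle II contribution (item 5) quotes a design luminosity of L = 8x10^35 cm^-2 s^-1 and a 50 ab^-1 target by 2024. A quick arithmetic check of those figures, assuming the common rule of thumb of roughly 10^7 seconds of effective beam time per year:

```python
# Sanity check of the Belle II numbers in item 5.
# 1 barn = 1e-24 cm^2, so 1 ab = 1e-42 cm^2 and 1 cm^-2 = 1e-42 ab^-1.
L_inst = 8e35            # design luminosity, cm^-2 s^-1
AB_INV_PER_CM2 = 1e-42   # conversion: integrated cm^-2 -> ab^-1
seconds_per_year = 1e7   # assumed effective beam time per year

per_year = L_inst * seconds_per_year * AB_INV_PER_CM2
years_needed = 50 / per_year
print(per_year)      # 8.0 ab^-1 per year at full luminosity
print(years_needed)  # 6.25 years at full luminosity
```

A 6-to-7-year run at design luminosity is consistent with the abstract's gradual ramp-up from 2017 to a 50 ab^-1 total by 2024.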
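The GPU contribution (item 6) describes parallelizing acquisition code into thousands of threads, each executing a simple task such as identifying pulses in a digitized waveform. Below is a minimal Python sketch of that per-thread logic; on a GPU, one CUDA thread would run code like this over its own waveform or segment. The threshold value and pulse model are illustrative assumptions, not the experiment's actual parameters.

```python
def find_pulses(waveform, threshold=50):
    """Return (start_index, peak_value) for each pulse above threshold.

    Sketch of the simple per-thread task described in item 6: scan a
    digitized waveform and record contiguous runs of samples at or
    above threshold as pulses. Threshold is an illustrative choice.
    """
    pulses = []
    in_pulse = False
    for i, sample in enumerate(waveform):
        if sample >= threshold and not in_pulse:
            in_pulse, start, peak = True, i, sample
        elif in_pulse:
            if sample >= threshold:
                peak = max(peak, sample)
            else:
                pulses.append((start, peak))
                in_pulse = False
    if in_pulse:  # pulse still open at the end of the waveform
        pulses.append((start, peak))
    return pulses

# Example: two pulses in a toy waveform.
wf = [0, 0, 60, 80, 40, 0, 55, 0]
print(find_pulses(wf))  # [(2, 80), (6, 55)]
```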