21–25 Aug 2017
University of Washington, Seattle
US/Pacific timezone

Session

Track 1: Computing Technology for Physics Research

Parallel Session
21 Aug 2017, 14:00
Alder Hall (University of Washington, Seattle)

Conveners

Track 1: Computing Technology for Physics Research: Online and Trigger

  • Niko Neufeld (CERN)

Track 1: Computing Technology for Physics Research: Parallel Session

  • Stefan Roiser (CERN)

Track 1: Computing Technology for Physics Research: Online II & Software management

  • Shih-Chieh Hsu (University of Washington Seattle (US))

Track 1: Computing Technology for Physics Research: Heterogeneous resources

  • Maria Girone (CERN)

Track 1: Computing Technology for Physics Research: New techniques and old problems

  • Maria Girone (CERN)

Track 1: Computing Technology for Physics Research: Machine Learning and Accelerators

  • Niko Neufeld (CERN)

  1. Vakho Tsulaia (Lawrence Berkeley National Lab. (US))
    21/08/2017, 14:00
    Track 1: Computing Technology for Physics Research
    Oral

Data processing applications of the ATLAS experiment, such as event simulation and reconstruction, spend a considerable amount of time in the initialization phase. This phase includes loading a large number of shared libraries, reading detector geometry and conditions data from external databases, building a transient representation of the detector geometry, and initializing various algorithms and...

  2. Luca Pontisso (Sapienza Universita e INFN, Roma I (IT))
    21/08/2017, 14:20
    Track 1: Computing Technology for Physics Research
    Oral

The use of GPUs for general-purpose computational tasks, known as GPGPU for some fifteen years, has reached maturity. Applications take advantage of the parallel architectures of these devices in many different domains.
Over the last few years several works have demonstrated the effectiveness of integrating GPU-based systems into the high level trigger of various HEP...

  3. Adam Edward Barton (Lancaster University (GB))
    21/08/2017, 14:40
    Track 1: Computing Technology for Physics Research
    Oral

Over the next decade of LHC data-taking the instantaneous luminosity will reach up to 7.5 times the design value, with over 200 interactions per bunch-crossing, posing unprecedented challenges for the ATLAS trigger system.

    With the evolution of the CPU market to many-core systems, both the
    ATLAS offline reconstruction and High-Level Trigger (HLT) software
    will have to transition from a...

  4. Tommaso Colombo (CERN)
    21/08/2017, 15:00
    Track 1: Computing Technology for Physics Research
    Oral

LHCb has decided to optimise its physics reach by removing the first-level hardware trigger for LHC Run 3 and beyond. In addition to requiring fully redesigned front-end electronics, this design creates interesting challenges for the data acquisition and the rest of the Online computing system. Such a system can only be realized at a realistic cost by using as much off-the-shelf...

  5. Maciej Szymon Gladki (CERN, Geneva, Switzerland)
    21/08/2017, 15:20
    Track 1: Computing Technology for Physics Research
    Oral

The efficiency of data acquisition in the new DAQ system of the Compact Muon Solenoid (CMS) experiment for LHC Run 2 is constantly being improved. A significant factor in the data-taking efficiency is the experience of the DAQ operator, one of whose main responsibilities is to carry out the proper recovery procedure in case of a failure in data-taking. At the start of...

  6. Rosen Matev (CERN)
    21/08/2017, 15:40
    Track 1: Computing Technology for Physics Research
    Oral

The LHCb experiment plans a major upgrade of the detector and DAQ systems in the LHC long shutdown II (2018–2019). For this upgrade, a purely software based trigger system is being developed, which will have to process the full 30 MHz of bunch-crossing rate delivered by the LHC. A fivefold increase of the instantaneous luminosity in LHCb further contributes to the challenge of reconstructing...

  7. Gareth Douglas Roy (University of Glasgow (GB))
    21/08/2017, 16:30
    Track 1: Computing Technology for Physics Research
    Oral

Containers are becoming more and more prevalent in industry as the standard method of software deployment. They have many benefits for shipping software, encapsulating dependencies and turning complex software deployments into single portable units. Similar to virtual machines, but with lower overall resource requirements, greater flexibility and more transparency, they are a compelling...

  8. Edgar Fajardo Hernandez (Univ. of California San Diego (US))
    21/08/2017, 16:50
    Track 1: Computing Technology for Physics Research
    Oral

    The Worldwide LHC Computing Grid (WLCG) is the largest grid computing infrastructure in the world pooling the resources of 170 computing centers (sites). One of the advantages of grid computing is that multiple copies of data can be stored at different sites allowing user access that is independent of that site's geographic location, unique operating systems, and software. Each site is able to...

  9. Yaodong CHENG (IHEP, Beijing), Yaodong Cheng (Chinese Academy of Sciences (CN))
    21/08/2017, 17:10
    Track 1: Computing Technology for Physics Research
    Oral

Distributed computing systems such as the WLCG are widely used in high energy physics. A computing job is usually scheduled to the site where its input data was pre-staged using a file transfer system. This can lead to problems such as low CPU utilization at small sites that lack storage capacity. Furthermore, it is not flexible in a dynamic cloud computing environment. Virtual machines will be...

  10. Justas Balcas (California Institute of Technology (US))
    21/08/2017, 17:30
    Track 1: Computing Technology for Physics Research
    Oral

    The Caltech team in collaboration with network, computer science and HEP partners at the DOE laboratories and universities, has developed high-throughput data transfer methods and cost-effective data systems that have defined the state of the art for the last 15 years.

    The achievable stable throughput over continental and transoceanic distances using TCP-based open source applications,...

  11. Michael Poat (Brookhaven National Laboratory)
    21/08/2017, 17:50
    Track 1: Computing Technology for Physics Research
    Oral

The online computing environment at STAR has generated demand for high availability of services (HAS) and a resilient uptime guarantee. Such services include databases, web servers, and storage systems that users and sub-systems rely on for their critical workflows. Standard deployment of services on bare metal creates a problem if the fundamental hardware fails or loses connectivity....

  12. Robert Fischer (Rheinisch-Westfaelische Tech. Hoch. (DE))
    21/08/2017, 18:10
    Track 1: Computing Technology for Physics Research
    Oral

    In particle physics, workflow management systems are primarily used as
    tailored solutions in dedicated areas such as Monte Carlo production.
    However, physicists performing data analyses are usually required to
    steer their individual workflows manually, which is time-consuming and
    often leads to undocumented relations between particular workloads. We
    present a generic analysis design pattern...

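The generic analysis design pattern described above, in which workloads declare their dependencies instead of being steered by hand, can be illustrated with a minimal sketch. The task names and the `execute` helper below are invented for illustration and are not the system presented in the talk; the sketch only shows the make/luigi-style idea of running a dependency graph in topological order, each task exactly once.

```python
# Minimal sketch of a dependency-driven analysis workflow (hypothetical API).
class Task:
    requires = []              # task classes that must run first

    def run(self, log):
        log.append(type(self).__name__)

def execute(task, log, done=None):
    """Run `task` after recursively running its dependencies, each once."""
    if done is None:
        done = set()
    for dep_cls in task.requires:
        if dep_cls not in done:
            execute(dep_cls(), log, done)
    if type(task) not in done:
        task.run(log)
        done.add(type(task))

class Skim(Task): pass
class Histograms(Task): requires = [Skim]
class Plots(Task): requires = [Histograms, Skim]   # Skim runs only once

log = []
execute(Plots(), log)
print(log)  # ['Skim', 'Histograms', 'Plots']
```

Asking for the final `Plots` task pulls in the whole chain, which is the documented-relations property the abstract argues manual steering lacks.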
  13. Jim Pivarski (Princeton University)
    22/08/2017, 14:00
    Track 1: Computing Technology for Physics Research
    Oral

    Exploratory data analysis must have a fast response time, and some query systems used in industry (such as Impala, Kudu, Dremel, Drill, and Ibis) respond to queries about large (petabyte) datasets on a human timescale (seconds). Introducing similar systems to HEP would greatly simplify physicists' workflows. However, HEP data are most naturally expressed as objects, not tables. In particular,...

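The object-versus-table mismatch mentioned above is commonly bridged with a columnar "jagged array" layout: variable-length per-event collections are flattened into one content array plus per-event offsets. A minimal sketch in plain Python (the actual query systems named in the abstract use their own storage formats; the names here are illustrative):

```python
# Events with a variable number of particle pT values, stored columnar:
# one flat content list plus offsets marking each event's slice.
events = [[10.0, 20.0], [], [5.0, 7.5, 30.0]]

content, offsets = [], [0]
for event in events:
    content.extend(event)
    offsets.append(len(content))

def event_slice(i):
    """Recover event i's particle list from the columnar layout."""
    return content[offsets[i]:offsets[i + 1]]

print(content)   # [10.0, 20.0, 5.0, 7.5, 30.0]
print(offsets)   # [0, 2, 2, 5]
print([event_slice(i) for i in range(len(events))])
```

Columnar scans over `content` touch contiguous memory, which is why table-oriented query engines are fast; the offsets preserve the event-object structure.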
  14. Martin Ritter
    22/08/2017, 14:20
    Track 1: Computing Technology for Physics Research
    Oral

    The Belle II experiment at KEK is preparing for taking first collision data in early 2018. For the success of the experiment it is essential to have information about varying conditions available to systems worldwide in a fast and efficient manner that is straightforward to both the user and maintainer. The Belle II Conditions Database was designed to make maintenance as easy as possible. To...

  15. Elmar Ritsch (CERN)
    22/08/2017, 14:40
    Track 1: Computing Technology for Physics Research
    Oral

In the last year ATLAS has radically updated its software development infrastructure, hugely reducing the complexity of building releases and greatly improving build speed, flexibility and code testing. The first step in this transition was the adoption of CMake as the software build system over the older CMT. This required the development of an automated translation from the old system to the...

  16. Stefan Roiser (CERN)
    22/08/2017, 15:00
    Track 1: Computing Technology for Physics Research
    Oral

LHCb is planning major changes to its data processing and analysis workflows for LHC Run 3. With the hardware trigger removed, a software-only trigger running at 30 MHz will reconstruct events using the final alignment and calibration information provided during the triggering phase. These changes place a major strain on the online software framework, which needs to improve significantly. The foreseen changes...

  17. Andrew John Washbrook (University of Edinburgh (GB))
    22/08/2017, 15:20
    Track 1: Computing Technology for Physics Research
    Oral

    The regular application of software quality tools in large collaborative projects is required to reduce code defects to an acceptable level. If left unchecked the accumulation of defects invariably results in performance degradation at scale and problems with the long-term maintainability of the code. Although software quality tools are effective for identification there remains a non-trivial...

  18. Marco Meoni (INFN Sezione di Pisa, Universita' e Scuola Normale Superiore, P)
    22/08/2017, 15:40
    Track 1: Computing Technology for Physics Research
    Oral

CERN IT provides a set of Hadoop clusters featuring more than 5 PB of raw storage, with various open-source user-level tools installed for analytics purposes. For this reason, since early 2015 the CMS experiment has been storing a large set of computing metadata, including a massive number of dataset access logs. Several streamers have registered billions of traces from...

  19. Kevin Thomas Bauer (University of California Irvine (US))
    22/08/2017, 16:45
    Track 1: Computing Technology for Physics Research
    Oral

Starting during the upcoming major LHC shutdown from 2019 to 2021, the ATLAS experiment at CERN will move to the Front-End Link eXchange (FELIX) system as the interface between the data acquisition system and the trigger and detector front-end electronics. FELIX will function as a router between custom serial links and a commodity switch network, which will use industry-standard technologies...

  20. Alexandre Beche (Ecole polytechnique fédérale de Lausanne (CH))
    22/08/2017, 17:05
    Track 1: Computing Technology for Physics Research
    Oral

    The PanDA WMS - Production and Distributed Analysis Workload Management System - has been developed and used by the ATLAS experiment at the LHC (Large Hadron Collider) for all data processing and analysis challenges. BigPanDA is an extension of the PanDA WMS to run ATLAS and non-ATLAS applications on Leadership Class Facilities and supercomputers, as well as traditional grid and cloud...

  21. Dr Malachi Schram (PNNL), Malachi Schram (McGill University), Malachi Schram (Pacific Northwest National Laboratory)
    22/08/2017, 17:25
    Track 1: Computing Technology for Physics Research
    Oral

    The Belle II experiment at the SuperKEKB collider in Tsukuba, Japan, will start taking physics data in early 2018 and aims to accumulate 50/ab, or approximately 50 times more data than the Belle experiment.
    The collaboration expects it will manage and process approximately 190 PB of data.
    Computing at this scale requires efficient and coordinated use of the compute grids in North America, Asia...

  22. Kevin Fox (PNNL)
    22/08/2017, 17:45
    Track 1: Computing Technology for Physics Research
    Oral

At PNNL, we are using cutting-edge technologies and techniques to enable the physics communities we support to produce excellent science. This includes hardware virtualization using an on-premises OpenStack private cloud, a Kubernetes and Docker based container system, and Ceph, a leading software-defined storage solution. In this presentation we will discuss how we leverage these...

  23. Daniel Lo (Microsoft research), David Lange (Princeton University (US)), Gareth Roy (University of Glasgow), Ian Fisk (Simons Foundation), Jeff Hammond (Intel), Dr Tom Gibbs (NVIDIA Corporation)
    22/08/2017, 18:05
    Track 1: Computing Technology for Physics Research
    Oral
  24. Andrei Gheata (CERN)
    24/08/2017, 14:00
    Track 1: Computing Technology for Physics Research
    Oral

GeantV went through a thorough community discussion in the fall of 2016, reviewing the project's status and strategy for sharing the R&D benefits with the LHC experiments and with the HEP simulation community in general. Following up on this discussion, GeantV has embarked on an ambitious two-year roadmap aiming to deliver a beta version that has most of the performance features of the final...

  25. Mr Jiang Zhu (Sun Yat-Sen University)
    24/08/2017, 14:20
    Track 1: Computing Technology for Physics Research
    Oral

The current event display module of the Jiangmen Underground Neutrino Observatory (JUNO) is based on the ROOT EVE package. We use Unity, a multiplatform game engine, to improve its performance and make it available on different platforms. Compared with ROOT, Unity can give a more vivid demonstration of high energy physics experiments and can be ported to other platforms easily. We build...

  26. Niko Neufeld (CERN)
    24/08/2017, 14:40
    Track 1: Computing Technology for Physics Research
    Oral

The 2020 upgrade of the LHCb detector will vastly increase the rate of collisions the Online system needs to process in software in order to filter events in real time. 30 million collisions per second will pass through a selection chain where each step is executed conditionally on acceptance by the previous one.

    The Kalman filter is a process of the event reconstruction that, due to its time...

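The Kalman filter named in the abstract is a recursive predict/update estimator. As a rough illustration of the algorithm only — not the LHCb implementation, which fits multidimensional track states with detector-specific models — here is a one-dimensional sketch with invented noise parameters:

```python
def kalman_1d(measurements, q=1e-4, r=0.25):
    """1-D Kalman filter for a constant-state model: process noise q,
    measurement noise r. Returns the filtered state estimates."""
    x, p = measurements[0], 1.0     # initial state and covariance
    estimates = [x]
    for z in measurements[1:]:
        p = p + q                   # predict: covariance grows by process noise
        k = p / (p + r)             # Kalman gain: how much to trust z
        x = x + k * (z - x)         # update: pull state toward measurement
        p = (1.0 - k) * p           # update: covariance shrinks
        estimates.append(x)
    return estimates

noisy = [1.2, 0.9, 1.1, 1.0, 0.95]
print(kalman_1d(noisy))  # estimates settle near the underlying value ~1.0
```

Each measurement costs a fixed, small number of arithmetic operations, yet the filter dominates reconstruction time at a 30 MHz input rate simply through volume, which is why the abstract singles it out for optimisation.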
  27. Pier Paolo Ricci (INFN CNAF)
    24/08/2017, 15:00
    Track 1: Computing Technology for Physics Research
    Oral

The INFN CNAF Tier-1 has been the Italian national data center for INFN computing activities since 2005. As one of the reference sites for data storage and computing in the High Energy Physics (HEP) community, it offers resources to all four LHC experiments and many other HEP and non-HEP collaborations. The CDF experiment has used the INFN Tier-1 resources for many years and,...

  28. Guilherme Amadio (CERN)
    24/08/2017, 15:20
    Track 1: Computing Technology for Physics Research
    Oral

When dealing with the processing of large amounts of data, the rate at which reading and writing can take place is a critical factor. High Energy Physics data processing relying on ROOT-based persistification is no exception. The recent parallelisation of the LHC experiments' software frameworks and the analysis of the ever increasing amount of collision data collected by the experiments further...

  29. Jakob Blomer (CERN)
    24/08/2017, 15:40
    Track 1: Computing Technology for Physics Research
    Oral

The analysis of High-Energy Physics (HEP) data sets often takes place outside the realm of experiment frameworks and central computing workflows, using carefully selected "n-tuples" or Analysis Object Data (AOD) as a data source. Such n-tuples or AODs may comprise data from tens of millions of events and grow to hundreds of gigabytes or a few terabytes in size. They are typically small enough to...

  30. Sofia Vallecorsa (CERN)
    24/08/2017, 16:45
    Track 1: Computing Technology for Physics Research
    Oral

    The GeantV project introduces fine grained parallelism, vectorisation, efficient memory management and NUMA awareness in physics simulations. It is being developed to improve accuracy, while preserving, at the same time, portability through different architectures (Xeon Phi, GPU). This approach brings important performance benefits on modern architectures and a good scalability through a large...

  31. Jana Schaarschmidt (University of Washington (US))
    24/08/2017, 17:10
    Track 1: Computing Technology for Physics Research
    Oral

    Producing the very large samples of simulated events required by many physics and performance studies with the ATLAS detector using the full GEANT4 detector simulation is highly CPU intensive. Fast simulation tools are a useful way of reducing CPU requirements when detailed detector simulations are not needed. During the LHC Run-1, a fast calorimeter simulation (FastCaloSim) was successfully...

  32. Thomas Janson
    24/08/2017, 17:30
    Track 1: Computing Technology for Physics Research
    Oral

In this talk, we explore the data-flow programming approach for massively parallel computing on FPGA accelerators, where an algorithm is described as a data-flow graph and programmed in MaxJ from Maxeler Technologies. Such a directed graph consists of a small set of nodes and arcs; all nodes are fully pipelined and data moves along the arcs through the nodes. We have shown that we can implement...

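The data-flow style described above — an algorithm as a directed graph whose nodes transform streaming data — can be sketched in software. MaxJ itself compiles such graphs to FPGA hardware; the plain-Python analogy below, with invented node names, only shows the graph-evaluation idea, not anything Maxeler-specific:

```python
# Toy data-flow graph: nodes are functions, arcs map each node to the
# nodes feeding it. One input sample streams through the whole graph.
def source(x):  return x
def square(x):  return x * x
def offset(x):  return x + 1
def add(a, b):  return a + b

# add consumes the outputs of square and offset, which both consume source
graph = {square: [source], offset: [source], add: [square, offset]}

def evaluate(node, sample, cache=None):
    """Pull one sample through the graph, computing each node once."""
    if cache is None:
        cache = {}
    if node not in cache:
        inputs = [evaluate(dep, sample, cache) for dep in graph.get(node, [])]
        cache[node] = node(sample) if not inputs else node(*inputs)
    return cache[node]

stream = [1, 2, 3]
print([evaluate(add, x) for x in stream])  # x*x + (x+1) -> [3, 7, 13]
```

On an FPGA the nodes become pipelined hardware stages operating concurrently, so throughput is set by the clock rather than by this sequential recursion.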
  33. Felice Pantaleo (CERN)
    24/08/2017, 17:50
    Track 1: Computing Technology for Physics Research
    Oral

Starting from 2017, during CMS Phase-I, the increased accelerator luminosity, with the consequent increase in the number of simultaneous proton-proton collisions (pile-up), will pose significant new challenges for the CMS experiment.
The primary goal of the HLT is to apply a specific set of physics selection algorithms and to accept the events with the most interesting physics content. To cope with...

  34. Jiaheng Zou (IHEP)
    24/08/2017, 18:10
    Track 1: Computing Technology for Physics Research
    Oral

SNiPER is a general-purpose software framework for high energy physics experiments. During its development, we paid particular attention to the requirements of neutrino and cosmic ray experiments. SNiPER has now been successfully adopted by JUNO (Jiangmen Underground Neutrino Observatory) and LHAASO (Large High Altitude Air Shower Observatory). It has an important effect on the research and design...

  35. Matthias Jochen Schnepf (KIT - Karlsruhe Institute of Technology (DE))
    24/08/2017, 18:30
    Track 1: Computing Technology for Physics Research
    Oral

As a result of the excellent LHC performance in 2016, more data than expected has been recorded, leading to a higher demand for computing resources. It is already foreseeable that for the current and upcoming run periods a flat computing budget and the expected technology advances will not be sufficient to meet the future requirements. This results in a growing gap between supplied and demanded...
