21–25 Aug 2017
University of Washington, Seattle
US/Pacific timezone

Session

Poster Session

Posters
22 Aug 2017, 16:00
Alder Hall (University of Washington, Seattle)

Alder Hall

University of Washington, Seattle

Conveners

Poster Session: Poster Session and Coffee Break

  • There are no conveners in this block


Poster Session: Poster Lightning talks

  • Gordon Watts (University of Washington (US))


  1. Paul James Laycock (CERN)
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The conditions data infrastructures of both ATLAS and CMS have to deal with the management of several Terabytes of data. Distributed computing access to this data requires particular care and attention to manage request rates of up to several tens of kHz. Thanks to the large overlap in use cases and requirements, ATLAS and CMS have worked towards a common solution for conditions data management...

  2. Edgar Fajardo Hernandez (Univ. of California San Diego (US))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    With the shift in the LHC experiments from the tiered computing model, in which data was prefetched and stored at the computing site, towards a bring-data-on-the-fly model came an opportunity. Since data is now distributed to computing jobs using an XrootD federation, a clear opportunity for caching arose.

    In this document, we present the experience of installing and using a Federated Xrootd...

  3. Mr Tao Cui (IHEP (Institute of High Energy Physics, CAS, China))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    A large-scale virtual computing system requires a loosely coupled virtual resource management platform that provides the flexibility to add or remove physical resources, the convenience to upgrade the platform, and so on. OpenStack provides large-scale virtualization solutions such as "Cells" and "Tricircle/Trio2o", but because of their complexity they are difficult to deploy and maintain...

  4. Ilija Vukotic (University of Chicago (US))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    Until now, geometry information for the detector description of HEP experiments was only stored in online relational databases integrated in the experiments’ frameworks or described in files with text-based markup languages. In all cases, to build and store the detector description, a full software stack was needed.
    In this paper we present a new and scalable mechanism to store the geometry...

  5. Mr Jakub Kandra (Charles University in Prague)
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Belle II experiment is approaching its first physics run in 2018. Its full capability to operate at the precision frontier will need not only excellent performance of the SuperKEKB accelerator and the detector, but also advanced calibration methods combined with data quality monitoring.

    To deliver data in a form suitable for analysis as soon as possible, an automated Calibration Framework...

  6. Simone Campana (CERN)
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The ATLAS collaboration has started a process to understand the computing needs for the High Luminosity LHC era. Based on our best understanding of the computing model input parameters for the HL-LHC data-taking conditions, results indicate the need for a larger amount of computational and storage resources with respect to the projection of a constant yearly budget for computing in 2026. Filling the...

  7. Siarhei Padolski (BNL)
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    BigPanDA monitoring is a web-based application which provides various kinds of processing and representation of the states of Production and Distributed Analysis (PanDA) system objects. Analyzing hundreds of millions of computational entities, such as events or jobs, BigPanDA monitoring builds reports at different scales and levels of abstraction in real time. The information provided allows users to drill...

  8. Martin Ritter
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Belle II experiment at the SuperKEKB $e^{+}e^{-}$ accelerator is preparing for taking first collision data next year. For the success of the experiment it is essential to have information about varying conditions available in the simulation, reconstruction, and analysis code.

    The online and offline software has to be able to obtain conditions data from the Belle II Conditions Database in...

  9. Siarhei Padolski (BNL)
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    Scientific collaborations operating at modern facilities generate vast volumes of data and auxiliary metadata, and the information is constantly growing. High energy physics data is a long-term investment and contains the potential for physics results beyond the lifetime of a collaboration and/or experiment. Many existing HENP experiments are concluding their physics programs and looking...

  10. Fedor Ratnikov (Yandex School of Data Analysis (RU))
    22/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Daily operation of a large-scale experiment is a resource-consuming task, particularly from the perspective of routine data quality monitoring. Typically, data comes from different channels (subdetectors or other subsystems) and the global quality of the data depends on the performance of each channel. In this work, we consider the problem of predicting which channel has been affected by anomalies in...

  11. Enrico Fattibene (INFN - National Institute for Nuclear Physics)
    22/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The data management infrastructure operated at CNAF, the central computing and storage facility of INFN (the Italian Institute for Nuclear Physics), is based on both disk and tape storage resources. About 40 Petabytes of scientific data produced by the LHC (Large Hadron Collider at CERN) and other experiments in which INFN is involved are stored on tape. This is the higher-latency storage tier within...

  12. Andrei Kazarov (Petersburg Nuclear Physics Institut (RU))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The ATLAS Trigger and Data Acquisition (TDAQ) system is a large, distributed system composed of several thousand interconnected computers and tens of thousands of software processes (applications). The applications produce a large amount of operational messages (of the order of 10^4 messages per second), which need to be reliably stored and delivered to TDAQ operators in real time, and also be...

  13. Frank Berghaus (University of Victoria (CA))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    Input data for applications that run in cloud computing centres can be stored at distant repositories, often with multiple copies of the popular data stored at many sites. Locating and retrieving the remote data can be challenging, and we believe that federating the storage can address this problem. A federation would locate the closest copy of the data currently on the basis of GeoIP...

  14. Dr Ruslan Mashinistov (Russian Academy of Sciences (RU))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Production and Distributed Analysis system (PanDA), used for workload management in the ATLAS Experiment for over a decade, has in recent years expanded its reach to diverse new resource types such as HPCs, and innovative new workflows such as the event service. PanDA meets the heterogeneous resources it harvests in the PanDA pilot, which has embarked on a next-generation reengineering to...

  15. Andrey Ustyuzhanin (Yandex School of Data Analysis (RU))
    22/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    In this research, a new approach for finding rare events in high-energy physics was tested. As an example physics channel, the decay $\tau \to 3\mu$ is taken, which was published on Kaggle as an LHCb-supported challenge. The training sample consists of simulated signal and real background, so the challenge is to train the classifier in such a way that it picks up signal/background differences...

  16. Dr Philipp Eller (Penn State University)
    22/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The IceCube neutrino observatory is a cubic-kilometer-scale ice Cherenkov detector located at the South Pole. The low-energy analyses, which are used, for example, to measure neutrino oscillations, exploit shape differences in very high-statistics datasets. We present newly developed tools to estimate reliable event rate distributions from limited-statistics simulation and very fast algorithms to...

  17. Lynn Wood (Pacific Northwest National Laboratory, USA)
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Belle II Experiment at KEK is preparing for first collisions in early 2018. Processing the large amounts of data that will be produced will require conditions data to be readily available to systems worldwide in a fast and efficient manner that is straightforward to both the user and maintainer. The Belle II Conditions Database was designed to make maintenance as easy as possible. To...

  18. Aaron Tohuvavohu (Penn State University)
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Swift Gamma-Ray Burst Explorer is a uniquely capable mission, with three on-board instruments and rapid slewing capabilities. It often serves as a fast-response space observatory for everything from gravitational-wave counterpart searches to cometary science. Swift averages 125 different observations per day, and is consistently over-subscribed, responding to about one-hundred Target of...

  19. Matthias Jochen Schnepf (KIT - Karlsruhe Institute of Technology (DE))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    As a result of the excellent LHC performance in 2016, more data than expected was recorded, leading to a higher demand for computing resources. It is already foreseeable that for the current and upcoming run periods a flat computing budget and the expected technology advances will not be sufficient to meet the future requirements. This results in a growing gap between supplied and demanded...

  20. Dr Xiaomei Zhang (IHEP, Beijing)
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The IHEP distributed computing system has been built on DIRAC to integrate heterogeneous resources from collaboration institutes and commercial resource providers for data processing of IHEP experiments, and began to support JUNO in 2015. The Jiangmen Underground Neutrino Observatory (JUNO) is a multipurpose neutrino experiment located in southern China to start in 2019. The study on applying...

  21. Roger Jones (Lancaster University (GB))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The LHC and other experiments are evolving their computing models to cope with changing data volumes and rates, changing technologies in distributed computing and changing funding landscapes. The UK is reviewing the consequent network bandwidth provision required to meet the new models; there will be increasing consolidation of storage into fewer sites and increased use of caching and data...

  22. Christoph Heidecker (KIT - Karlsruhe Institute of Technology (DE))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The heavily increasing amount of data delivered by current experiments in high energy physics challenges both end users and providers of computing resources. The boosted data rates and the complexity of analyses require huge datasets to be processed. Here, short turnaround cycles are absolutely required for an efficient processing rate of analyses. This puts new limits on the provisioning of...

  23. Mariel Pettee (Yale University (US))
    22/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Tau leptons are used in a range of important ATLAS physics analyses, including the measurement of the SM Higgs boson coupling to fermions, searches for Higgs boson partners, and heavy resonances decaying into pairs of tau leptons. Events for these analyses are provided by a number of single and di-tau triggers, as well as triggers that require a tau lepton in combination with other...

  24. Dr Alexis Pompili (Universita e INFN, Bari (IT))
    22/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Graphical Processing Units (GPUs) represent one of the most sophisticated and versatile parallel computing architectures available that are nowadays entering the High Energy Physics field. GooFit is an open source tool interfacing ROOT/RooFit to the CUDA platform on nVidia GPUs (it also supports OpenMP). Specifically it acts as an interface between the MINUIT minimization algorithm and a...

  25. Michael David Sokoloff (University of Cincinnati (US))
    22/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The LHCb detector is a single-arm forward spectrometer, which has been designed for the efficient reconstruction of decays of c- and b-hadrons. LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run II. Data collected at the start of the fill are processed in a few minutes and used to update the alignment, while the calibration constants are evaluated for...

  26. Guilherme Amadio (CERN)
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    Portable and efficient vectorization is a significant challenge in large software projects such as Geant, ROOT, and experiment frameworks. Nevertheless, taking advantage of the expression of parallelism through vectorization is required by the future evolution of the landscape of particle physics, which will be characterized by a drastic increase in the amount of data produced.

    In order to...

  27. Pascal Boeschoten (Ministere des affaires etrangeres et europeennes (FR))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the strongly interacting state of matter realized in relativistic heavy-ion collisions at the CERN Large Hadron Collider (LHC). A major upgrade of the experiment is planned during the 2019-2020 long shutdown. In order to cope with a data rate 100 times higher than during LHC Run 2 and with the continuous...

  28. Julius Hrivnac (Universite de Paris-Sud 11 (FR))
    22/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The global view of the ATLAS Event Index system was presented at the last ACAT. This talk will concentrate on the architecture of the system's core component. This component handles the final stage of the event metadata import, organizes its storage, and provides fast and feature-rich access to all information. A user is able to interrogate the metadata in various ways, including by...

  29. Adam Edward Barton (Lancaster University (GB))
    22/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Physics analyses at the LHC which search for rare physics processes or measure Standard Model parameters with high precision require accurate simulations of the detector response and the event selection processes. The accurate simulation of the trigger response is crucial for determination of overall selection efficiencies and signal sensitivities. For the generation and the reconstruction of...

  30. Geoffrey Nathan Smith (University of Notre Dame (US))
    22/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    In 2017, we expect the LHC to deliver an instantaneous luminosity of roughly $2.0 \times 10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ to the CMS experiment, with about 60 simultaneous proton-proton collisions (pileup) per event. In these challenging conditions, it is important to be able to intelligently monitor the rate at which data is being collected (the trigger rate). It is not enough to simply look at the...

  31. Nikola Lazar Whallon (University of Washington (US))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Yet Another Rapid Readout (YARR) system is a DAQ system designed for the readout of the current-generation ATLAS Pixel FE-I4 chip, which has a readout bandwidth of 160 Mb/s, and of the latest readout chip currently under design by the RD53 collaboration, which has a much higher bandwidth of up to 5 Gb/s and is part of the development of new Pixel detector technology to be implemented in...

  32. Anton Josef Gamel (Albert-Ludwigs-Universitaet Freiburg (DE))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    High-Performance Computing (HPC) and other research cluster computing resources provided by universities can be useful supplements to the collaboration’s own WLCG computing resources for data analysis and production of simulated event samples. The shared HPC cluster "NEMO" at the University of Freiburg has been made available to local ATLAS users through the provisioning of virtual machines...

  33. Mikhail Titov (National Research Centre Kurchatov Institute (RU))
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Scientific computing has advanced in how it deals with massive amounts of data, since production capacities have increased significantly over the last decades. Most large science experiments require vast computing and data storage resources in order to provide results or predictions based on the data obtained. For scientific distributed computing systems with hundreds of petabytes...

  34. Valentin Volkl (University of Innsbruck (AT))
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The ACTS project aims to decouple the experiment-agnostic parts of the well-established ATLAS tracking software into a standalone package. As the first user, the Future Circular Collider (FCC) Design Study based its track reconstruction software on ACTS. In this presentation we describe the use cases and performance of ACTS in the dense tracking environment of the FCC proton-proton (FCC-hh)...

  35. Sean Murray (University of Cape Town (ZA))
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    CODE-RADE is a platform for user-driven, continuous integration and delivery of research applications in a distributed environment. Starting with 6 hypotheses describing the problem at hand, we put forward technical and social solutions to these. Combining widely-used and thoroughly-tested tools, we show how it is possible to manage the dependencies and configurations of a wide range of...

  36. Mr Petr Bouř (FNSPE CTU Prague)
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    We introduce several modifications of classical statistical tests applicable to weighted data sets in order to test homogeneity of weighted and unweighted samples, e.g. Monte Carlo simulations compared to the real data measurements. Specifically, we deal with the Kolmogorov-Smirnov, Anderson-Darling and f-divergence homogeneity tests. The asymptotic approximation of p-value and power of our...

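    The specific test statistics and p-value approximations of this contribution are not reproduced here; as a hedged illustration of the underlying idea only, a weighted two-sample Kolmogorov-Smirnov distance (the maximum gap between two weighted empirical CDFs) can be sketched in Python, with all names illustrative:

    ```python
    import numpy as np

    def weighted_ks_statistic(x, wx, y, wy):
        """Two-sample KS statistic for weighted samples: the maximum
        distance between the two weighted empirical CDFs."""
        x, wx = np.asarray(x, float), np.asarray(wx, float)
        y, wy = np.asarray(y, float), np.asarray(wy, float)
        grid = np.sort(np.concatenate([x, y]))  # evaluate at every observed point

        def ecdf(sample, weights, t):
            # Weighted ECDF: sum of weights of points <= t, normalized.
            order = np.argsort(sample)
            s, cum = sample[order], np.cumsum(weights[order]) / weights.sum()
            idx = np.searchsorted(s, t, side="right")
            return np.where(idx > 0, cum[np.clip(idx - 1, 0, len(cum) - 1)], 0.0)

        return float(np.max(np.abs(ecdf(x, wx, grid) - ecdf(y, wy, grid))))
    ```

    With all weights equal this reduces to the classical two-sample statistic; how to calibrate p-values and power for general weights is exactly the question the contribution addresses.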
  37. Dr Igor Oya (Deutsches Elektronen-Synchrotron (DESY))
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Cherenkov Telescope Array (CTA) is the next-generation atmospheric Cherenkov gamma-ray observatory. CTA will consist of two installations, one in the southern hemisphere (Cerro Armazones, Chile) and the other in the northern hemisphere (La Palma, Spain). The two sites will contain dozens of telescopes of different sizes, constituting one of the largest astronomical installations under development. The...

  38. Dominik Steinschaden (Stefan Meyer Institute)
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The $\overline{\text{P}}$ANDA experiment, currently under construction at the Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany, addresses fundamental questions in hadron and nuclear physics via interactions of antiprotons with a proton or nuclei, e.g. light and charm exotics, multi-strange baryons and hadrons in nuclei. It will be installed at the High Energy Storage Ring...

  39. Antonio Augusto Alves Junior (University of Cincinnati (US))
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Hydra is a templatized, header-only, C++11-compliant library for data analysis on massively parallel platforms targeting, but not limited to, the field of High Energy Physics research. Hydra supports the description of particle decays via phase-space Monte Carlo generation, generic function evaluation, data fitting, multidimensional adaptive numerical integration and histogramming. Hydra is...

  40. Fedor Ratnikov (Yandex School of Data Analysis (RU))
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    One of the most important aspects of data processing at LHC experiments is the particle identification (PID) algorithm. In LHCb, several different sub-detector systems provide PID information: the Ring Imaging CHerenkov (RICH) detector, the hadronic and electromagnetic calorimeters, and the muon chambers. To improve charged particle identification, several neural networks including a deep...

  41. Andrey Ustyuzhanin (Yandex School of Data Analysis (RU))
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    We investigate different approaches to the recognition of electromagnetic showers in the data collected by the international OPERA collaboration. The experiment was initially designed to detect neutrino oscillations, but the data collected can also be used to develop machine learning techniques for electromagnetic shower detection in photo-emulsion films. Such showers...

  42. Nikola Hardi (CERN)
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    Containerisation technology is becoming more and more popular because it provides an efficient way to improve deployment flexibility by packaging up code into software micro-environments. Yet, containerisation has limitations and one of the main ones is the fact that entire container images need to be transferred before they can be used. Container images can be seen as software stacks and ...

  43. Dr Kim Siang Khaw (University of Washington)
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Muon g-2 experiment at Fermilab will begin beam and detector commissioning in summer 2017 to measure the muon anomalous magnetic moment to an unprecedented level of 140 ppb. To deal with incoming data projected to be around tens of petabytes, a robust data reconstruction and analysis framework, built on Fermilab’s art event-processing framework, is developed. In this workshop, we report...

  44. Brian Paul Bockelman (University of Nebraska-Lincoln (US))
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The ROOT I/O (RIO) subsystem is foundational to most HEP experiments - it provides a file format, a set of APIs/semantics, and a reference implementation in C++. It is often found at the base of an experiment's framework and is used to serialize the experiment's data; in the case of an LHC experiment, this may be hundreds of petabytes of files! Individual physicists will further use RIO to...

  45. Soon Yung Jun (Fermi National Accelerator Lab. (US))
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    Sequences of pseudorandom numbers of high statistical quality and their efficient generation are critical for the use of Monte Carlo simulation in many areas of computational science. As high-performance parallel computing systems equipped with wider vector pipelines or many-core technologies become widely available, a variety of parallel pseudorandom number generators (PRNGs) are being...

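    The generators studied in this contribution are not reproduced here; as one concrete, hedged example of the counter-based approach that suits parallel workloads, NumPy's Philox bit generator can give every worker its own reproducible stream from a distinct key (the seed values below are arbitrary):

    ```python
    import numpy as np

    # Counter-based PRNGs such as Philox compute each output from a
    # (counter, key) pair, so parallel workers get independent,
    # reproducible streams simply by using distinct keys; no generator
    # state needs to be shared or communicated between workers.
    def worker_stream(seed, worker_id):
        """Independent stream for one worker, keyed by seed + worker_id."""
        return np.random.Generator(np.random.Philox(key=seed + worker_id))

    streams = [worker_stream(12345, i) for i in range(4)]
    draws = np.stack([g.random(1000) for g in streams])  # shape (4, 1000)
    ```

    The same keying scheme works across processes or nodes, since each stream is fully determined by its key.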
  46. Xavier Valls Pla (University Jaume I (ES))
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    In order to take full advantage of new computer architectures and to satisfy the requirement of minimizing CPU usage with an increasing amount of data to analyse, parallelisation and vectorisation have been introduced in the ROOT mathematical and statistical libraries.

    We report first on the improvements obtained in function evaluation, used for data modelling, by adding the support...

  47. Dr Tao Lin (IHEP)
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Jiangmen Underground Neutrino Observatory (JUNO) is a neutrino experiment to determine neutrino mass hierarchy. It has a central detector used for neutrino detection, which consists of a spherical acrylic vessel containing 20 kt liquid scintillator (LS) and about 18,000 20-inch photomultiplier tubes (PMT) to collect light from LS.

    As one of the important parts in JUNO offline software,...

  48. Siarhei Padolski (BNL)
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    Every scientific workflow involves an organizational part whose purpose is to plan the analysis process thoroughly according to a defined schedule and thus keep work progressing efficiently. Having information such as an estimate of the processing time or the possibility of a system outage (abnormal behaviour) will improve the planning process, provide assistance in monitoring system performance and...

  49. Dr Siarhei Padolski (BNL)
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    Modern physics experiments collect peta-scale volumes of data and utilize vast, geographically distributed computing infrastructure that serves thousands of scientists around the world. Requirements for rapid, near real-time data processing and fast analysis cycles, and the need to run massive detector simulations to support data analysis, place a special premium on the efficient use of available...

  50. Rudolf Fruhwirth (Austrian Academy of Sciences (AT))
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Circle finding and fitting is a frequent problem in the data analysis of high-energy physics experiments. In a tracker immersed in a homogeneous magnetic field, tracks with sufficiently high momentum are close to perfect circles if projected to the bending plane. In a ring-imaging Cherenkov detector, a circle of photons around the crossing point of a charged particle has to be found and...

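    The circle-finding and fitting methods of this contribution are not reproduced here; as a minimal, hedged illustration of the fitting half of the problem, the standard algebraic (Kasa) least-squares circle fit reduces it to a linear system:

    ```python
    import numpy as np

    def kasa_circle_fit(x, y):
        """Algebraic least-squares circle fit (Kasa method).

        Fits x^2 + y^2 + D*x + E*y + F = 0 by linear least squares and
        converts (D, E, F) into a center (cx, cy) and radius r.
        """
        x, y = np.asarray(x, float), np.asarray(y, float)
        A = np.column_stack([x, y, np.ones_like(x)])
        b = -(x**2 + y**2)
        (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
        cx, cy = -D / 2.0, -E / 2.0
        return cx, cy, np.sqrt(cx**2 + cy**2 - F)
    ```

    This is exact for noiseless points; for strong noise or short arcs, iterative geometric fits minimizing the point-to-circle distance are usually preferred.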
  51. Vakho Tsulaia (Lawrence Berkeley National Lab. (US))
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    ATLAS uses its multi-processing framework AthenaMP for an increasing number of workflows, including simulation, reconstruction and event data filtering (derivation). After serial initialization, AthenaMP forks worker processes that then process events in parallel, with each worker reading data individually and producing its own output. This mode, however, has inefficiencies: 1) The worker no...

  52. Ms Rui Li (Sun Yat-sen University)
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    In order to find the rare particles generated in collisions at high-energy particle colliders, we need to solve signal-versus-background classification problems. It turns out that neural networks can be used here to improve performance without any manually constructed inputs.

    This is the content of my oral report:

    1. A brief introduction to neural networks
    2. ...

  53. Mr Igor Mandrichenko (FNAL)
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Columnar data representation is known to be an efficient way to store and access data, specifically in cases when the analysis is often done based only on a small fragment of the available data structure. Data representations like Apache Parquet, on the other hand, split data horizontally to allow for easy parallelization of data analysis. Based on the general idea of columnar data storage,...

  54. Andrei Kazarov (Petersburg Nuclear Physics Institut (RU))
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    Data Acquisition (DAQ) of the ATLAS experiment is a large distributed and inhomogeneous system: it consists of thousands of interconnected computers and electronics devices that operate coherently to read out and select relevant physics data. Advanced diagnostics capabilities of the TDAQ control system are a crucial feature which contributes significantly to smooth operation and fast recovery...

  55. Simone Campana (CERN)
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    With this contribution we present the recent developments made to Rucio, the data management system of the High-Energy Physics Experiment ATLAS. Already managing 260 Petabytes of both official and user data, Rucio has seen incremental improvements throughout LHC Run-2, and is currently laying the groundwork for HEP computing in the HL-LHC era. The focus of this contribution are (a) the...

  56. Doris Yangsoo Kim (Soongsil University)
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The Belle II experiment at the SuperKEKB collider at KEK is a next-generation B factory. Phase I of the experiment has just finished, during which extensive beam studies were conducted. The collaboration is preparing for the physics run in 2018 with the full detector setup. The simulation library of the Belle II experiment is based on the Geant4 package. In this talk, we will summarize the...

  57. Ms Shuhui Huang (Sun Yat-sen University)
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    BESIII, the detector at the BEPCII accelerator, has undergone a major upgrade of the endcaps of the TOF detector to make more precise measurements. As a result, the BesVis event display system of the BESIII experiment needs to be updated. We used the ROOT Geometry package to build the geometrical structure and display system. The BesVis system plays a significantly important role in the DAQ system, reconstruction...

  58. Marcelo Vogel (Bergische Universitaet Wuppertal (DE))
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    This paper describes the deployment of ATLAS offline software in containers for software development, for use in production jobs on the grid, such as event generation, simulation, reconstruction and physics derivations, and in physics analysis. For this we are using Docker and Singularity, which are both lightweight virtualization technologies that encapsulate a piece of software inside a...

  59. Lucio Dery (Stanford)
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. This paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics...

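    The paper's actual method and benchmarks are not reproduced here; the core idea, that per-sample class proportions alone can supervise a per-event classifier, can be sketched with a toy 1-D logistic model on synthetic data (everything below is an illustrative assumption, not the authors' implementation):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 1-D feature: signal ~ N(+1, 1), background ~ N(-1, 1).
    def make_sample(n, f_signal):
        is_signal = rng.random(n) < f_signal
        x = np.where(is_signal, rng.normal(1.0, 1.0, n), rng.normal(-1.0, 1.0, n))
        return x, is_signal

    # Two mixed samples: known signal fractions, but NO per-event labels.
    (x1, y1), (x2, y2) = make_sample(5000, 0.8), make_sample(5000, 0.2)

    # Train a logistic model so the MEAN prediction on each sample matches
    # that sample's known proportion: loss = sum_s (mean(p_s) - f_s)^2.
    w = b = 0.0
    for _ in range(2000):
        gw = gb = 0.0
        for x, f in ((x1, 0.8), (x2, 0.2)):
            p = 1.0 / (1.0 + np.exp(-(w * x + b)))
            resid = p.mean() - f          # proportion mismatch
            dp = p * (1.0 - p)            # sigmoid derivative
            gw += 2.0 * resid * (dp * x).mean()
            gb += 2.0 * resid * dp.mean()
        w, b = w - 0.5 * gw, b - 0.5 * gb

    # Although trained only on proportions, the per-event output separates
    # signal from background (checked here against the hidden labels).
    pred1 = (1.0 / (1.0 + np.exp(-(w * x1 + b)))) > 0.5
    accuracy = float(np.mean(pred1 == y1))
    ```

    The hidden labels are used only for the final accuracy check, never during training, which is what makes the supervision "weak".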
  60. Gordon Watts (University of Washington (US))
    25/08/2017, 11:00
  61. Dr Philipp Eller (Penn State University)
    25/08/2017, 11:00
  62. Xavier Valls Pla (University Jaume I (ES))
    25/08/2017, 11:00
  63. Tao Lin (IHEP)
    25/08/2017, 11:00
  64. Guilherme Amadio (CERN)
    25/08/2017, 11:00
    Oral