21–25 Aug 2017
University of Washington, Seattle
US/Pacific timezone

Contribution List

196 out of 196 displayed
  1. Gordon Watts (University of Washington (US))
    21/08/2017, 08:45
  2. Takahiro Ueda (KEK)
    21/08/2017, 09:15
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    Symbolic computation is an indispensable tool for theoretical particle
    physics, especially in the context of perturbative quantum field
    theory. In this talk, I will review FORM, one of the computer algebra
    systems widely used in higher-order calculations, its design principles
    and advantages. The newly released version 4.2 will also be discussed.

  3. Dr Ben Nachman (Lawrence Berkeley National Lab. (US))
    21/08/2017, 11:00
    Oral

    Modern machine learning (ML) has introduced a new and powerful toolkit to High Energy Physics. While only a small number of these techniques are currently used in practice, research and development centered around modern ML has exploded over the last year(s). I will highlight recent advances with a focus on jet physics to be concrete. Themselves defined by unsupervised learning algorithms,...

  4. Stefano Carrazza (CERN)
    21/08/2017, 11:30
    Oral

    We start by summarizing recent and consolidated
    applications of ML in TH-HEP. We then focus on recent studies of parton distribution function determination and related tools based on machine learning algorithms and strategies. We conclude by showing future theoretical applications of ML to Monte Carlo codes.

  5. Herb Sutter (Microsoft Corporation)
    21/08/2017, 12:00
    Oral

    Can we evolve the C++ language itself to make C++ programming both more powerful and simpler, and if so, how? The only way to accomplish both of those goals at the same time is by adding abstractions that let programmers directly express their intent—to elevate comments and documentation to testable code, and elevate coding patterns and idioms into compiler-checkable declarations.

    This talk...

  6. Heather Gray (LBNL)
    21/08/2017, 12:30
    Oral

    The reconstruction of particle trajectories in the tracking detectors is one of the most complex parts of analysing the data at hadron colliders. Maximum luminosity is typically achieved at the cost of a large number of simultaneous proton-proton interactions per beam crossing. The large number of particles produced in such interactions introduces challenges both in terms of maintaining...

  7. Sebastian Skambraks (Technische Universität München)
    21/08/2017, 14:00
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    Neural networks are going to be used in the pipelined first level trigger of the upgraded flavor physics experiment Belle II at the high luminosity B factory SuperKEKB in Tsukuba, Japan. A luminosity of $\mathcal{L} = 8 \times 10^{35}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ is anticipated, 40 times larger than the world record reached with the predecessor KEKB. Background tracks, with vertices displaced along the...

  8. Vakho Tsulaia (Lawrence Berkeley National Lab. (US))
    21/08/2017, 14:00
    Track 1: Computing Technology for Physics Research
    Oral

    Data processing applications of the ATLAS experiment, such as event simulation and reconstruction, spend a considerable amount of time in the initialization phase. This phase includes loading a large number of shared libraries, reading detector geometry and conditions data from external databases, building a transient representation of the detector geometry and initializing various algorithms and...

  9. Ben Ruijl (Nikhef)
    21/08/2017, 14:00
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    We show how an extended version of the R* operation, a method to remove UV and soft IR divergences, can be used to calculate the poles of Feynman diagrams with arbitrary tensor structure from diagrams with fewer loops. We discuss solutions to combinatorial problems we encountered during the computation of the five loop QCD beta function, such as postponing Feynman rule substitutions, integral...

  10. Luca Pontisso (Sapienza Universita e INFN, Roma I (IT))
    21/08/2017, 14:20
    Track 1: Computing Technology for Physics Research
    Oral

    The use of GPUs for general-purpose computational tasks, known as GPGPU for some fifteen years now, has reached maturity. Applications take advantage of the parallel architectures of these devices in many different domains.
    Over the last few years several works have demonstrated the effectiveness of the integration of GPU-based systems in the high level trigger of various HEP...

  11. Samuel David Jones (University of Sussex (GB))
    21/08/2017, 14:20
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    Electron and photon triggers covering transverse energies from 5 GeV
    to several TeV are essential for signal selection in a wide variety of
    ATLAS physics analyses to study Standard Model processes and to search
    for new phenomena. Final states including leptons and photons had, for
    example, an important role in the discovery and measurement of the
    Higgs boson. Dedicated triggers are also used...

  12. Dr Stefano Laporta (Dipartimento di Fisica, Universita di Bologna)
    21/08/2017, 14:25
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    In this talk I will describe the results of the evaluation, up to 1100 digits of precision, of the mass-independent contribution of the 891 4-loop Feynman diagrams contributing to the electron g-2 in QED.
    I will show the analytical expressions fitted to the high-precision values,
    which contain polylogarithms of the sixth root of unity and one-dimensional integrals of products of complete elliptic...

  13. Andrew Mathew Carnes (University of Florida (US))
    21/08/2017, 14:40
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    The first implementation of Machine Learning inside a Level 1 trigger system at the LHC is presented. The Endcap Muon Track Finder at CMS uses Boosted Decision Trees to infer the momentum of muons based on 25 variables. All combinations of variables represented by 2^30 distinct patterns are evaluated using regression BDTs, whose output is stored in 2 GB look-up tables. These BDTs take...

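The precompute-then-look-up idea in this abstract can be sketched in a few lines. The following is a hypothetical toy, not the EMTF code: the model logic, bit layout, and 2^8 table size are invented for illustration (the real system indexes 2^30 patterns with trained regression BDTs and 2 GB tables). Offline, the model is evaluated once for every possible input pattern; online, momentum assignment reduces to a single fixed-latency memory access.

```python
# Toy illustration (not the CMS EMTF code): precompute a regression
# model's output for every possible packed input pattern, so that online
# inference is a single fixed-latency look-up-table access.

N_BITS = 8  # the real system indexes 2^30 patterns; 2^8 keeps the demo small

def toy_model(pattern: int) -> float:
    """Stand-in for a trained regression BDT: maps a packed bit pattern
    of track-segment variables to a momentum estimate (invented logic)."""
    dphi = pattern & 0x0F            # low nibble: a mock bend angle
    eta_bin = (pattern >> 4) & 0x0F  # high nibble: a mock eta bin
    return 1.0 / (1 + dphi) * (1 + 0.1 * eta_bin)

# Offline: evaluate the model once per pattern and store the results.
lut = [toy_model(p) for p in range(1 << N_BITS)]

# Online: momentum assignment is a single table look-up.
def predict_pt(pattern: int) -> float:
    return lut[pattern]
```

The point of the trade-off is that the cost of evaluating the BDTs is paid once offline, so the Level-1 latency budget only has to cover one memory access per track.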
  14. Adam Edward Barton (Lancaster University (GB))
    21/08/2017, 14:40
    Track 1: Computing Technology for Physics Research
    Oral

    Over the next decade of LHC data-taking the instantaneous luminosity
    will reach up to 7.5 times the design value with over 200 interactions
    per bunch-crossing and will pose unprecedented challenges for the
    ATLAS trigger system.

    With the evolution of the CPU market to many-core systems, both the
    ATLAS offline reconstruction and High-Level Trigger (HLT) software
    will have to transition from a...

  15. Dr Daniel Maitre (IPPP)
    21/08/2017, 14:50
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    The storage of computationally intensive matrix elements for NLO processes has proven a good solution for high-multiplicity processes. In this talk I will present the challenges of extending this method to calculations at NLO and offer some ways of alleviating them.

  16. Tommaso Colombo (CERN)
    21/08/2017, 15:00
    Track 1: Computing Technology for Physics Research
    Oral

    LHCb has decided to optimise its physics reach by removing the first level hardware trigger for LHC Run 3 and beyond. In addition to requiring fully redesigned front-end electronics, this design creates interesting challenges for the data acquisition and the rest of the Online computing system. Such a system can only be realized at realistic cost by using as much off-the-shelf...

  17. Dr Chao Zhang (Brookhaven National Laboratory)
    21/08/2017, 15:00
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    The Liquid Argon Time Projection Chamber (LArTPC) is an exciting detector technology that is undergoing rapid development. Due to its high density, low diffusion, and excellent time and spatial resolutions, the LArTPC is particularly attractive for applications in neutrino physics and nucleon decay, and is chosen as the detector technology for the future Deep Underground Neutrino Experiment...

  18. Xiaohui Liu (Beijing Normal University)
    21/08/2017, 15:15
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    In this talk, I will review the current status of the N-jettiness subtraction scheme and its application to Vj production at the LHC.

  19. Ilija Vukotic (University of Chicago (US))
    21/08/2017, 15:20
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    At the time when HEP computing needs were mainly fulfilled by mainframes, graphics solutions for event and detector visualizations were necessarily hardware- as well as experiment-specific and impossible to use anywhere outside of the HEP community. The big move to commodity computing did not precipitate a corresponding move of graphics solutions to industry-standard hardware and software. In this...

  20. Maciej Szymon Gladki (CERN, Geneva, Switzerland)
    21/08/2017, 15:20
    Track 1: Computing Technology for Physics Research
    Oral

    The efficiency of the Data Acquisition (DAQ) in the new DAQ system of the Compact Muon Solenoid (CMS) experiment for LHC Run-2 is constantly being improved. A significant factor in the data-taking efficiency is the experience of the DAQ operator. One of the operator's main responsibilities is to carry out the proper recovery procedure in case of a failure in data-taking. At the start of...

  21. Dr Lu Wang (Computing Center, Institute of High Energy Physics, CAS)
    21/08/2017, 15:40
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    The Jiangmen Underground Neutrino Observatory (JUNO) is a multi-purpose neutrino experiment designed to determine the neutrino mass hierarchy and precisely measure oscillation parameters. The experimental site is under a 286 m mountain, and the detector will be at a depth of 480 m. Twenty thousand tons of liquid scintillator (LS) are contained in a spherical container with a radius of 17.7 m as the central detector...

  22. Rosen Matev (CERN)
    21/08/2017, 15:40
    Track 1: Computing Technology for Physics Research
    Oral

    The LHCb experiment plans a major upgrade of the detector and DAQ systems in the LHC long shutdown II (2018–2019). For this upgrade, a purely software based trigger system is being developed, which will have to process the full 30 MHz of bunch-crossing rate delivered by the LHC. A fivefold increase of the instantaneous luminosity in LHCb further contributes to the challenge of reconstructing...

  23. Gareth Douglas Roy (University of Glasgow (GB))
    21/08/2017, 16:30
    Track 1: Computing Technology for Physics Research
    Oral

    Containers are becoming more and more prevalent in industry as the standard method of software deployment. They have many benefits for shipping software, by encapsulating dependencies and turning complex software deployments into single portable units. Similar to virtual machines, but with a lower overall resource requirement, greater flexibility and more transparency, they are a compelling...

  24. Luke Percival De Oliveira
    21/08/2017, 16:30
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    We introduce the first use of deep neural network-based generative modeling for high energy physics (HEP). Our novel Generative Adversarial Network (GAN) architecture is able to cope with the key challenges of HEP images, including sparsity and a large dynamic range. For example, our Location-Aware Generative Adversarial Network learns to produce realistic radiation patterns inside high energy...

  25. Dr Soon Yung Jun (Fermi National Accelerator Lab. (US))
    21/08/2017, 16:30
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    Simulation in high energy physics (HEP) requires the numerical solution of ordinary differential equations (ODE) to determine the trajectories of charged particles in a magnetic field when particles move throughout detector volumes. Each crossing of a volume interrupts the underlying numerical method that solves the equations of motion, triggering iterative algorithms to estimate the...

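At its core, the propagation described in this abstract is a Runge-Kutta solve of the equations of motion dx/dt = v, dv/dt = (q/m) v × B, interrupted at each volume boundary. A minimal sketch (not an actual detector-simulation integrator; the uniform field, units, and names are assumptions for illustration):

```python
# Minimal sketch (not a production field propagator): one classic RK4
# step for a charged particle in a magnetic field,
#   dx/dt = v,   dv/dt = (q/m) v x B.
# The state is a 6-tuple (x, y, z, vx, vy, vz).

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def deriv(state, q_over_m, B):
    v = state[3:]
    a = tuple(q_over_m * c for c in cross(v, B))
    return v + a  # (dx/dt, dv/dt) concatenated

def rk4_step(state, h, q_over_m, B):
    def shifted(s, k, f):
        return tuple(si + f * ki for si, ki in zip(s, k))
    k1 = deriv(state, q_over_m, B)
    k2 = deriv(shifted(state, k1, h / 2), q_over_m, B)
    k3 = deriv(shifted(state, k2, h / 2), q_over_m, B)
    k4 = deriv(shifted(state, k3, h), q_over_m, B)
    return tuple(s + h / 6 * (a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Uniform field along z: the trajectory is a helix.
state = (0.0, 0.0, 0.0, 1.0, 0.0, 0.2)
for _ in range(1000):
    state = rk4_step(state, 0.01, q_over_m=1.0, B=(0.0, 0.0, 1.0))
```

A handy check on such integrators is that a static magnetic field does no work, so |v| must stay constant along the helix; the abstract's point is that each volume crossing interrupts exactly this kind of stepping loop.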
  26. Edgar Fajardo Hernandez (Univ. of California San Diego (US))
    21/08/2017, 16:50
    Track 1: Computing Technology for Physics Research
    Oral

    The Worldwide LHC Computing Grid (WLCG) is the largest grid computing infrastructure in the world, pooling the resources of 170 computing centers (sites). One of the advantages of grid computing is that multiple copies of data can be stored at different sites, allowing user access that is independent of any one site's geographic location, operating system, and software. Each site is able to...

  27. Andrey Ustyuzhanin (Yandex School of Data Analysis (RU))
    21/08/2017, 16:55
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    Tools such as GEANT can simulate the volumetric energy deposition of particles down to certain energy and length scales.
    However, fine-grained effects such as material imperfections, low-energy charge diffusion, noise, and read-out can be difficult to model exactly and may lead to systematic differences between the simulation and the physical detector.
    In this work, we introduce a...

  28. Marcin Slodkowski (Faculty of Physics, Warsaw University of Technology (PL))
    21/08/2017, 16:55
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    This work is focused on the influence of energy deposited by jets in the
    medium on the behavior of bulk nuclear matter.
    In heavy-ion reactions, jets are widely used as probes in the study
    of the Quark-Gluon Plasma (QGP). Modeling using relativistic hydrodynamics with
    jet perturbations is employed to extract the properties of the QGP.
    In order to observe a modification of the collective...

  29. Yaodong Cheng (IHEP, Beijing, Chinese Academy of Sciences (CN))
    21/08/2017, 17:10
    Track 1: Computing Technology for Physics Research
    Oral

    Distributed computing systems such as the WLCG are widely used in high energy physics. A computing job is usually scheduled to the site where its input data were pre-staged using a file transfer system. This leads to several problems, including low CPU utilization at small sites that lack storage capacity. Furthermore, it is not flexible in a dynamic cloud computing environment. Virtual machines will be...

  30. Stefano Carrazza (CERN)
    21/08/2017, 17:20
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    Parton Distribution Functions (PDFs) are a crucial ingredient for accurate and reliable theoretical predictions for precision phenomenology at the LHC.
    The NNPDF approach to the extraction of Parton Distribution Functions relies on Monte Carlo techniques and Artificial Neural Networks to provide an unbiased determination of parton densities with a reliable determination of their...

  31. Erica Brondolin (Austrian Academy of Sciences (AT))
    21/08/2017, 17:20
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    The CMS experiment is in the process of designing a completely new tracker for the high-luminosity phase of the LHC. The latest results on the expected tracking performance of CMS will be shown, as well as the latest developments exploiting the new outer tracker's possibilities. In fact, in order to allow for a track trigger, the modules of the new outer tracker will produce stubs or vector hits...

  32. Justas Balcas (California Institute of Technology (US))
    21/08/2017, 17:30
    Track 1: Computing Technology for Physics Research
    Oral

    The Caltech team in collaboration with network, computer science and HEP partners at the DOE laboratories and universities, has developed high-throughput data transfer methods and cost-effective data systems that have defined the state of the art for the last 15 years.

    The achievable stable throughput over continental and transoceanic distances using TCP-based open source applications,...

  33. Wahid Bhimji (Lawrence Berkeley National Lab. (US))
    21/08/2017, 17:45
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    There has been considerable recent activity applying deep convolutional neural nets (CNNs) to data from particle physics experiments. Current approaches on ATLAS/CMS have largely focussed on a subset of the calorimeter, and for identifying objects or particular particle types. We explore approaches that use the entire calorimeter, combined with track information, for directly conducting...

  34. Ricardo Vilalta (University of Houston)
    21/08/2017, 17:45
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    The widespread dissemination of machine learning tools in science, particularly in astronomy, has revealed the limitation of working with simple single-task scenarios, in which any task in need of a predictive model is looked at in isolation, ignoring the existence of other similar tasks. In contrast, a new generation of techniques is emerging where predictive models can take advantage of...

  35. Michael Poat (Brookhaven National Laboratory)
    21/08/2017, 17:50
    Track 1: Computing Technology for Physics Research
    Oral

    The online computing environment at STAR has generated demand for high availability of services (HAS) and a resilient uptime guarantee. Such services include databases, web servers, and storage systems that users and sub-systems rely on for their critical workflows. Standard deployment of services on bare metal creates a problem if the fundamental hardware fails or loses connectivity....

  36. Robert Fischer (Rheinisch-Westfaelische Tech. Hoch. (DE))
    21/08/2017, 18:10
    Track 1: Computing Technology for Physics Research
    Oral

    In particle physics, workflow management systems are primarily used as
    tailored solutions in dedicated areas such as Monte Carlo production.
    However, physicists performing data analyses are usually required to
    steer their individual workflows manually, which is time-consuming and
    often leads to undocumented relations between particular workloads. We
    present a generic analysis design pattern...

  37. Daniel Sherman Riley (Cornell University (US))
    21/08/2017, 18:10
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    Faced with physical and energy density limitations on clock speed, contemporary microprocessor designers have increasingly turned to on-chip parallelism for performance gains. Examples include the Intel Xeon Phi, GPGPUs, and similar technologies. Algorithms should accordingly be designed with ample amounts of fine-grained parallelism if they are to realize the full performance of the hardware....

  38. Dr Stefano Laporta (Dipartimento di Fisica, Universita di Bologna)
    22/08/2017, 09:00
    Oral
  39. Sofia Vallecorsa (Gangneung-Wonju National University (KR))
    22/08/2017, 09:30
    Oral

    Machine Learning techniques have been used in different applications by the HEP community: in this talk, we discuss the case of detector simulation. The need for simulated events, expected in the future for LHC experiments and their High Luminosity upgrades, is increasing dramatically and requires new fast simulation solutions. We will present results of several studies on the application of...

  40. Andreas Kronfeld (Fermilab)
    22/08/2017, 10:00
    Oral

    In this talk, I will give a quick overview of physics results and computational methods in lattice QCD. Then I will outline some of the physics challenges, especially those of interest to particle physicists. Last, I will speculate on how machine-learning ideas could be applied to accelerate lattice-QCD algorithms.

  41. Walter Giele
    22/08/2017, 11:00
    Oral
  42. Ben Ruijl (Nikhef)
    22/08/2017, 11:30
    Oral

    Project HEPGame was created to apply methods from AI that have been successful for games, such as MCTS for Go, to solve problems in High Energy Physics. In this talk I will describe how MCTS helped us simplify large expressions. Additionally, I will describe how we managed to compute four loop (and some five loop) integrals in an automated way. I close with some interesting challenges for AI...

  43. Dr Ravi Panchumarthy (Intel Corporation)
    22/08/2017, 12:00
    Oral

    This presentation will share details about the Intel Nervana Deep Learning Platform and how a data scientist can use it to develop solutions for deep learning problems. The Intel Nervana DL Platform is a full-stack platform including hardware and software tools that enable data scientists to build high-accuracy deep learning solutions more quickly and cost-effectively than with alternative...

  44. Andrei Davydychev (Moscow State University and Schlumberger)
    22/08/2017, 14:00
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    It is shown how the geometrical splitting of N-point Feynman diagrams can be used to simplify the parametric integrals and reduce the number of variables in the occurring functions. As an example, a calculation of the dimensionally-regulated one-loop four-point function in general kinematics is presented.

  45. Eric Metodiev (Massachusetts Institute of Technology)
    22/08/2017, 14:00
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    Many simultaneous proton-proton collisions occur in each bunch crossing at the Large Hadron Collider (LHC). However, most of the time only one of these collisions is interesting and the rest are a source of noise (pileup). Several recent pileup mitigation techniques are able to significantly reduce the impact of pileup on a wide set of interesting observables. Using state-of-the-art machine...

  46. Jim Pivarski (Princeton University)
    22/08/2017, 14:00
    Track 1: Computing Technology for Physics Research
    Oral

    Exploratory data analysis must have a fast response time, and some query systems used in industry (such as Impala, Kudu, Dremel, Drill, and Ibis) respond to queries about large (petabyte) datasets on a human timescale (seconds). Introducing similar systems to HEP would greatly simplify physicists' workflows. However, HEP data are most naturally expressed as objects, not tables. In particular,...

  47. Martin Ritter
    22/08/2017, 14:20
    Track 1: Computing Technology for Physics Research
    Oral

    The Belle II experiment at KEK is preparing for taking first collision data in early 2018. For the success of the experiment it is essential to have information about varying conditions available to systems worldwide in a fast and efficient manner that is straightforward to both the user and maintainer. The Belle II Conditions Database was designed to make maintenance as easy as possible. To...

  48. Markus Stoye (CERN)
    22/08/2017, 14:20
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    Deep learning for jet tagging and jet calibration has recently been increasingly explored. For jet-flavor tagging, CMS’s most performant tagger for 2016 data (DeepCSV) was based on a deep neural network. The input was a set of standard tagging variables of pre-selected objects. For 2017, improved algorithms are implemented that start from particle candidates without much preselection, i.e. much...

  49. Thomas Hahn (MPI f. Physik)
    22/08/2017, 14:25
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    Loopedia is a new database for bibliographic (and other) information on loop integrals. Its bibliometry is orthogonal to that of SPIRES or arXiv in the sense that it admits searching for graph-theoretical objects, e.g. a graph's topology. We hope it will in time be able to answer the query "Find all papers pertaining to graph $X$."

  50. Elmar Ritsch (CERN)
    22/08/2017, 14:40
    Track 1: Computing Technology for Physics Research
    Oral

    In the last year ATLAS has radically updated its software development infrastructure, hugely reducing the complexity of building releases and greatly improving build speed, flexibility and code testing. The first step in this transition was the adoption of CMake as the software build system over the older CMT. This required the development of an automated translation from the old system to the...

  51. Michela Paganini (Yale University (US))
    22/08/2017, 14:45
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    The separation of b-quark initiated jets from those coming from lighter quark flavours (b-tagging) is a fundamental tool for the ATLAS physics program at the CERN Large Hadron Collider. The most powerful b-tagging algorithms combine information from low-level taggers exploiting reconstructed track and vertex information using a multivariate classifier. The potential of modern Machine Learning...

  52. Dr Hiren Patel (UMass Amherst)
    22/08/2017, 14:50
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    Package-X is a Mathematica package for analytically computing and symbolically manipulating dimensionally regulated one-loop Feynman integrals, and CollierLink is an upcoming interface to the COLLIER library. In this talk, I will review new features in the upcoming release of Package-X: calculation of cut discontinuities, and command-line readiness. Additionally, features of CollierLink will...

  53. Stefan Roiser (CERN)
    22/08/2017, 15:00
    Track 1: Computing Technology for Physics Research
    Oral

    LHCb is planning major changes to its data processing and analysis workflows for LHC Run 3. With the hardware trigger removed, a software-only trigger running at 30 MHz will reconstruct events using final alignment and calibration information provided during the triggering phase. These changes put a major strain on the online software framework, which needs to improve significantly. The foreseen changes...

  54. Aristeidis Tsaris (Fermilab)
    22/08/2017, 15:10
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    Charged particle reconstruction in dense environments, such as the detectors of the High Luminosity Large Hadron Collider (HL-LHC) is a challenging pattern recognition problem. Traditional tracking algorithms, such as the combinatorial Kalman Filter, have been used with great success in HEP experiments for years. However, these state-of-the-art techniques are inherently sequential and scale...

  55. Andrew John Washbrook (University of Edinburgh (GB))
    22/08/2017, 15:20
    Track 1: Computing Technology for Physics Research
    Oral

    The regular application of software quality tools in large collaborative projects is required to reduce code defects to an acceptable level. If left unchecked the accumulation of defects invariably results in performance degradation at scale and problems with the long-term maintainability of the code. Although software quality tools are effective for identification there remains a non-trivial...

  56. Michael Andrews (Carnegie-Mellon University (US))
    22/08/2017, 15:35
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    An essential part of new physics searches at the Large Hadron Collider
    at CERN involves event classification, or distinguishing signal decays
    from potentially many background sources. Traditional techniques have
    relied on reconstructing particle candidates and their physical
    attributes from raw sensor data. However, such reconstructed data are
    the result of a potentially lossy process of...

  57. Marco Meoni (INFN Sezione di Pisa, Universita' e Scuola Normale Superiore, P)
    22/08/2017, 15:40
    Track 1: Computing Technology for Physics Research
    Oral

    The CERN IT department provides a set of Hadoop clusters featuring more than 5 PB of raw storage. Different open-source user-level tools are installed for analytics purposes. For this reason, since early 2015, the CMS experiment has stored a large set of computing metadata, including e.g. a massive number of dataset access logs. Several streamers have recorded some billions of traces from...

  58. Paul James Laycock (CERN)
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The conditions data infrastructures of both ATLAS and CMS have to deal with the management of several terabytes of data. Distributed computing access to these data requires particular care and attention to manage request rates of up to several tens of kHz. Thanks to the large overlap in use cases and requirements, ATLAS and CMS have worked towards a common solution for conditions data management...

  59. Edgar Fajardo Hernandez (Univ. of California San Diego (US))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    With the shift of the LHC experiments from the tiered computing model, in which data were prefetched and stored at the computing site, towards a bring-data-on-the-fly model came an opportunity. Since data are now distributed to computing jobs through an XrootD data federation, a clear opportunity for caching arose.

    In this document, we present the experience of installing and using a Federated Xrootd...

  60. Mr Tao Cui (IHEP(Institute of High Energy Physics, CAS,China))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    A large-scale virtual computing system requires a loosely coupled virtual resource management platform that provides the flexibility to add or remove physical resources, the convenience to upgrade the platform, and so on. OpenStack provides large-scale virtualization solutions such as "Cells" and "Tricircle/Trio2o". But because of their complexity, they are difficult to deploy and maintain...

  61. Ilija Vukotic (University of Chicago (US))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    Until now, geometry information for the detector description of HEP experiments was only stored in online relational databases integrated in the experiments’ frameworks or described in files with text-based markup languages. In all cases, to build and store the detector description, a full software stack was needed.
    In this paper we present a new and scalable mechanism to store the geometry...

  62. Mr Jakub Kandra (Charles University in Prague)
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Belle II experiment is approaching its first physics run in 2018. Its full capability
    to operate at the precision frontier will need not only excellent performance of the SuperKEKB
    accelerator and the detector, but also advanced calibration methods combined with data quality monitoring.

    To deliver data in a form suitable for analysis as soon as possible, an automated Calibration Framework...

  63. Simone Campana (CERN)
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The ATLAS collaboration has started a process to understand the computing needs for the High Luminosity LHC era. Based on our best understanding of the computing model input parameters for the HL-LHC data-taking conditions, results indicate the need for a larger amount of computational and storage resources in 2026 than projected under a constant yearly computing budget. Filling the...

  64. Siarhei Padolski (BNL)
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    BigPanDA monitoring is a web-based application which processes and represents the states of Production and Distributed Analysis (PanDA) system objects in various ways. Analyzing hundreds of millions of computation entities, such as events or jobs, BigPanDA monitoring builds reports at different scales and levels of abstraction in real time. The information provided allows users to drill...

  65. Martin Ritter
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Belle II experiment at the SuperKEKB $e^{+}e^{-}$ accelerator is preparing for taking first collision data next year. For the success of the experiment it is essential to have information about varying conditions available in the simulation, reconstruction, and analysis code.

    The online and offline software has to be able to obtain conditions data from the Belle II Conditions Database in...

  66. Siarhei Padolski (BNL)
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    Scientific collaborations operating on modern facilities generate vast volumes of data and auxiliary metadata, and the information is constantly growing. High energy physics data is a long-term investment and holds the potential for physics results beyond the lifetime of a collaboration and/or experiment. Many existing HENP experiments are concluding their physics programs and looking...

  67. Fedor Ratnikov (Yandex School of Data Analysis (RU))
    22/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Daily operation of a large-scale experiment is a resource-consuming task, particularly from the perspective of routine data quality monitoring. Typically, data comes from different channels (subdetectors or other subsystems) and the global quality of data depends on the performance of each channel. In this work, we consider the problem of predicting which channel has been affected by anomalies in...

  68. Enrico Fattibene (INFN - National Institute for Nuclear Physics)
    22/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The data management infrastructure operated at CNAF, the central computing and storage facility of INFN (Italian Institute for Nuclear Physics), is based on both disk and tape storage resources. About 40 petabytes of scientific data produced by the LHC (Large Hadron Collider at CERN) and other experiments in which INFN is involved are stored on tape. This is the highest-latency storage tier within...

  69. Andrei Kazarov (Petersburg Nuclear Physics Institut (RU))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The ATLAS Trigger and Data Acquisition (TDAQ) is a large, distributed
    system composed of several thousand interconnected computers and tens
    of thousands of software processes (applications). Applications produce a
    large amount of operational messages (of the order of O(10^4) messages
    per second), which need to be reliably stored and delivered to TDAQ
    operators in real time, and also be...

  70. Frank Berghaus (University of Victoria (CA))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    Input data for applications that run in cloud computing centres can be stored at distant repositories, often with multiple copies of the popular data stored at many sites. Locating and retrieving the remote data can be challenging, and we believe that federating the storage can address this problem. A federation would locate the closest copy of the data currently on the basis of GeoIP...

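    The replica-selection idea above can be sketched in a few lines. This is a toy, not the federation's actual logic: the site names and probe latencies are invented, and the real system currently selects on a GeoIP basis rather than measured latency.

    ```python
    # Toy sketch of replica selection in a storage federation (hypothetical
    # sites and latencies): pick the copy with the lowest measured latency.

    def pick_replica(latencies_ms):
        """Return the site with the lowest measured latency, or None if empty."""
        if not latencies_ms:
            return None
        return min(latencies_ms, key=latencies_ms.get)

    # Hypothetical probe results for three replicas of the same file:
    probed = {"CERN": 152.0, "BNL": 71.5, "TRIUMF": 24.3}
    best = pick_replica(probed)  # -> "TRIUMF"
    ```

    A GeoIP-based selector would replace the probe dictionary with a static distance ranking; the selection step itself is the same.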
  71. Dr Ruslan Mashinistov (Russian Academy of Sciences (RU))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Production and Distributed Analysis system (PanDA), used for workload management in the ATLAS Experiment for over a decade, has in recent years expanded its reach to diverse new resource types such as HPCs, and innovative new workflows such as the event service. PanDA meets the heterogeneous resources it harvests in the PanDA pilot, which has embarked on a next-generation reengineering to...

  72. Andrey Ustyuzhanin (Yandex School of Data Analysis (RU))
    22/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    In this research, a new approach for finding rare events in high-energy physics was tested. As an example physics channel, the decay $\tau \to 3\mu$ is taken, which has been published on Kaggle within an LHCb-supported challenge. The training sample consists of simulated signal and real background, so the challenge is to train the classifier in such a way that it picks up signal/background differences...

  73. Dr Philipp Eller (Penn State University)
    22/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The IceCube neutrino observatory is a cubic-kilometer-scale ice Cherenkov detector located at the South Pole. The low-energy analyses, which are for example used to measure neutrino oscillations, exploit shape differences in very high-statistics datasets. We present newly developed tools to estimate reliable event rate distributions from limited-statistics simulation and very fast algorithms to...

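    One simple way to get a smooth rate estimate from a limited-statistics sample is a kernel density estimate instead of a jagged histogram. This is a hedged toy illustration only, not the IceCube tools themselves; the sample values and bandwidth below are invented.

    ```python
    import math

    # Toy: Gaussian kernel density estimate from a small simulation sample.
    def kde(sample, x, bandwidth):
        """Smoothed density at x from a list of sampled values."""
        norm = 1.0 / (len(sample) * bandwidth * math.sqrt(2.0 * math.pi))
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in sample)

    mc_events = [1.1, 0.9, 1.3, 0.7, 1.0, 1.2, 0.8, 1.05]  # made-up values
    density_at_1 = kde(mc_events, 1.0, bandwidth=0.2)
    ```

    The estimate integrates to one by construction, and the bandwidth sets the trade-off between smoothness and resolution.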
  74. Lynn Wood (Pacific Northwest National Laboratory, USA)
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Belle II Experiment at KEK is preparing for first collisions in early 2018. Processing the large amounts of data that will be produced will require conditions data to be readily available to systems worldwide in a fast and efficient manner that is straightforward to both the user and maintainer. The Belle II Conditions Database was designed to make maintenance as easy as possible. To...

  75. Aaron Tohuvavohu (Penn State University)
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Swift Gamma-Ray Burst Explorer is a uniquely capable mission, with three on-board instruments and rapid slewing capabilities. It often serves as a fast-response space observatory for everything from gravitational-wave counterpart searches to cometary science. Swift averages 125 different observations per day, and is consistently over-subscribed, responding to about one-hundred Target of...

  76. Matthias Jochen Schnepf (KIT - Karlsruhe Institute of Technology (DE))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    As a result of the excellent LHC performance in 2016, more data than expected has been recorded, leading to a higher demand for computing resources. It is already foreseeable that for the current and upcoming run periods a flat computing budget and the expected technology advances will not be sufficient to meet the future requirements. This results in a growing gap between supplied and demanded...

  77. Dr xiaomei zhang (IHEP,Beijing)
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The IHEP distributed computing system has been built on DIRAC to integrate heterogeneous resources from collaborating institutes and commercial resource providers for data processing of IHEP experiments, and began to support JUNO in 2015. The Jiangmen Underground Neutrino Observatory (JUNO) is a multipurpose neutrino experiment located in southern China, due to start in 2019. The study on applying...

  78. Roger Jones (Lancaster University (GB))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The LHC and other experiments are evolving their computing models to cope with changing data volumes and rates, changing technologies in distributed computing, and changing funding landscapes. The UK is reviewing the consequent network bandwidth provision required to meet the new models; there will be increasing consolidation of storage into fewer sites and increased use of caching and data...

  79. Christoph Heidecker (KIT - Karlsruhe Institute of Technology (DE))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The heavily increasing amount of data delivered by current experiments in high energy physics challenges both end users and providers of computing resources. The boosted data rates and the complexity of analyses require huge datasets to be processed. Here, short turnaround cycles are absolutely required for an efficient analysis processing rate. This puts new limits on the provisioning of...

  80. Mariel Pettee (Yale University (US))
    22/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Tau leptons are used in a range of important ATLAS physics analyses,
    including the measurement of the SM Higgs boson coupling to fermions,
    searches for Higgs boson partners, and heavy resonances decaying into
    pairs of tau leptons. Events for these analyses are provided by a
    number of single and di-tau triggers, as well as triggers that require
    a tau lepton in combination with other...

  81. Dr Alexis Pompili (Universita e INFN, Bari (IT))
    22/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Graphical Processing Units (GPUs) represent one of the most sophisticated and versatile parallel computing architectures available, and they are nowadays entering the High Energy Physics field. GooFit is an open-source tool interfacing ROOT/RooFit to the CUDA platform on NVIDIA GPUs (it also supports OpenMP). Specifically, it acts as an interface between the MINUIT minimization algorithm and a...

  82. Michael David Sokoloff (University of Cincinnati (US))
    22/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The LHCb detector is a single-arm forward spectrometer, which has been designed for the efficient reconstruction of decays of c- and b-hadrons.
    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run II. Data collected at the start of the fill are processed in a few minutes and used to update the alignment, while the calibration constants are evaluated for...

  83. Guilherme Amadio (CERN)
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    Portable and efficient vectorization is a significant challenge in large
    software projects such as Geant, ROOT, and experiment frameworks.
    Nevertheless, taking advantage of the expression of parallelism through
    vectorization is required by the future evolution of the landscape of
    particle physics, which will be characterized by a drastic increase in
    the amount of data produced.

    In order to...

  84. Pascal Boeschoten (Ministere des affaires etrangeres et europeennes (FR))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the strongly interacting state of matter realized in relativistic heavy-ion collisions at the CERN Large Hadron Collider (LHC). A major upgrade of the experiment is planned during the 2019-2020 long shutdown. In order to cope with a data rate 100 times higher than during LHC Run 2 and with the continuous...

  85. Julius Hrivnac (Universite de Paris-Sud 11 (FR))
    22/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The global view of the ATLAS Event Index system was presented at the last ACAT. This talk will concentrate on the architecture of the system's core component. This component handles the final stage of the event metadata import; it organizes its storage and provides fast and feature-rich access to all information. A user is able to interrogate the metadata in various ways, including by...

  86. Adam Edward Barton (Lancaster University (GB))
    22/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Physics analyses at the LHC which search for rare physics processes or
    measure Standard Model parameters with high precision require accurate
    simulations of the detector response and the event selection
    processes. The accurate simulation of the trigger response is crucial
    for determination of overall selection efficiencies and signal
    sensitivities. For the generation and the reconstruction of...

  87. Geoffrey Nathan Smith (University of Notre Dame (US))
    22/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    In 2017, we expect the LHC to deliver an instantaneous luminosity of roughly $2.0 \times 10^{34} cm^{-2} s^{-1}$ to the CMS experiment, with about 60 simultaneous proton-proton collisions (pileup) per event. In these challenging conditions, it is important to be able to intelligently monitor the rate at which data is being collected (the trigger rate). It is not enough to simply look at the...

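    A minimal sketch of what "intelligently monitoring" a trigger rate can mean: model the rate as a function of pileup and flag measurements that deviate from the prediction. This is a toy, not the CMS tool; the linear dependence, the numbers, and the tolerance are all illustrative assumptions.

    ```python
    # Toy: fit rate vs. pileup by least squares on reference data,
    # then flag rates that deviate from the linear prediction.

    def fit_linear(xs, ys):
        """Closed-form least-squares fit y = a + b*x."""
        n = len(xs)
        mx = sum(xs) / n
        my = sum(ys) / n
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
        a = my - b * mx
        return a, b

    def flag_anomalies(xs, ys, a, b, rel_tol=0.2):
        """Indices where the measured rate deviates >rel_tol from prediction."""
        return [i for i, (x, y) in enumerate(zip(xs, ys))
                if abs(y - (a + b * x)) > rel_tol * (a + b * x)]

    pileup = [20, 30, 40, 50, 60]
    rate_hz = [200, 300, 400, 500, 1200]   # last point is anomalous
    a, b = fit_linear(pileup[:4], rate_hz[:4])
    print(flag_anomalies(pileup, rate_hz, a, b))  # -> [4]
    ```

    A production system would use per-trigger fit functions and proper uncertainties, but the flag-against-prediction structure is the same.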
  88. Nikola Lazar Whallon (University of Washington (US))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Yet Another Rapid Readout (YARR) system is a DAQ system designed for the readout of the current-generation ATLAS Pixel FE-I4 chip, which has a readout bandwidth of 160 Mb/s, and of the latest readout chip currently under design by the RD53 collaboration, which has a much higher bandwidth of up to 5 Gb/s and is part of the development of new pixel detector technology to be implemented in...

  89. Anton Josef Gamel (Albert-Ludwigs-Universitaet Freiburg (DE))
    22/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    High-Performance Computing (HPC) and other research cluster computing resources provided by universities can be useful supplements to the collaboration’s own WLCG computing resources for data analysis and production of simulated event samples. The shared HPC cluster "NEMO" at the University of Freiburg has been made available to local ATLAS users through the provisioning of virtual machines...

  90. Oliver Gutsche (Fermi National Accelerator Lab. (US))
    22/08/2017, 16:45
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    Experimental Particle Physics has been at the forefront of analyzing the world’s largest datasets for decades. The HEP community was among the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems for distributed data processing, collectively called “Big Data” technologies, have emerged from industry and open source projects to support...

  91. Kiyoshi Kato (Kogakuin University)
    22/08/2017, 16:45
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    The direct computation method (DCM) has been developed to calculate multi-loop amplitudes for general masses and external momenta. The ultraviolet divergence is kept under control in dimensional regularization.
    We discuss the following topics in this presentation.
    Since the last report at ACAT 2016, we have extended the applicability of DCM to several scalar multi-loop integrals.
    It will also be shown...

  92. Kevin Thomas Bauer (University of California Irvine (US))
    22/08/2017, 16:45
    Track 1: Computing Technology for Physics Research
    Oral

    Starting during the upcoming major LHC shutdown from 2019-2021, the ATLAS experiment at CERN will move to the Front-End Link eXchange (FELIX) system as the interface between the data acquisition system and the trigger
    and detector front-end electronics. FELIX will function as a router between custom serial links and a commodity switch network, which will use industry-standard technologies...

  93. Henry Fredrick Schreiner (University of Cincinnati (US))
    22/08/2017, 17:05
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    The GooFit package provides physicists a simple, familiar syntax for manipulating probability density functions and performing fits, but is highly optimized for data analysis on NVIDIA GPUs and multithreaded CPU backends. GooFit is being updated to version 2.0, bringing a host of new features. A completely revamped and redesigned build system makes GooFit easier to install, develop with, and...

  94. Alexandre Beche (Ecole polytechnique fédérale de Lausanne (CH))
    22/08/2017, 17:05
    Track 1: Computing Technology for Physics Research
    Oral

    The PanDA WMS - Production and Distributed Analysis Workload Management System - has been developed and used by the ATLAS experiment at the LHC (Large Hadron Collider) for all data processing and analysis challenges. BigPanDA is an extension of the PanDA WMS to run ATLAS and non-ATLAS applications on Leadership Class Facilities and supercomputers, as well as traditional grid and cloud...

  95. Elise de Doncker (Western Michigan University)
    22/08/2017, 17:10
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    Using lattice generators, we implement lattice rules in
    CUDA for a many-core computation on GPUs. We discuss a
    high-speed evaluation of loop integrals, based on
    lattice rules combined with a suitable transformation.
    The theoretical background of the method and its
    capabilities will be outlined. Extensive results have been
    obtained for various rules and integral dimensions, and
    for...

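    The flavor of a shifted rank-1 lattice rule can be sketched in a few lines of plain Python (the talk's implementation is in CUDA with optimized generators). The Fibonacci generating vector, shift, and test integrand below are illustrative choices, not those of the talk.

    ```python
    import math

    # Toy: shifted rank-1 lattice rule on the unit cube.
    def lattice_rule(f, dim, n, z, shift):
        """Average f over the shifted rank-1 lattice {frac(i*z/n + shift)}."""
        total = 0.0
        for i in range(n):
            x = [math.modf(i * z[k] / n + shift[k])[0] for k in range(dim)]
            total += f(x)
        return total / n

    # Example: integral of cos(x0)*cos(x1) over [0,1]^2 equals sin(1)^2.
    f = lambda x: math.cos(x[0]) * math.cos(x[1])
    # n = 987 and z = (1, 610) form a classic 2D Fibonacci lattice.
    est = lattice_rule(f, 2, 987, [1, 610], [0.3, 0.7])
    ```

    The inner loop is an embarrassingly parallel sum over lattice points, which is what makes the method map so naturally onto a GPU.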
  96. David Lange (Princeton University (US))
    22/08/2017, 17:25
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    There are numerous approaches to building analysis applications across the high-energy physics community. Among them are Python-based, or at least Python-driven, analysis workflows. We aim to ease the adoption of a Python-based analysis toolkit by making it easier for non-expert users to gain access to Python tools for scientific analysis. Experimental software distributions and individual...

  97. Dr Malachi Schram (Pacific Northwest National Laboratory)
    22/08/2017, 17:25
    Track 1: Computing Technology for Physics Research
    Oral

    The Belle II experiment at the SuperKEKB collider in Tsukuba, Japan, will start taking physics data in early 2018 and aims to accumulate 50/ab, or approximately 50 times more data than the Belle experiment.
    The collaboration expects it will manage and process approximately 190 PB of data.
    Computing at this scale requires efficient and coordinated use of the compute grids in North America, Asia...

  98. Stephen Jones (MPI, Munich)
    22/08/2017, 17:35
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    We introduce pySecDec, a toolbox for the numerical evaluation of multi-scale integrals, and highlight some of the new features. The use of numerical methods for the computation of multi-loop amplitudes is described, with a particular focus on Sector Decomposition combined with Quasi-Monte-Carlo integration and importance sampling. The use of these techniques in the computation of multi-loop...

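    The importance-sampling ingredient can be illustrated on a one-dimensional toy with an endpoint singularity. This is only a sketch; pySecDec's actual transformations and Quasi-Monte-Carlo machinery are far more elaborate, and the integrand below is an invented example.

    ```python
    import math, random

    # Toy: integrate f(x) = exp(-x)/sqrt(x) on (0,1). Plain uniform Monte
    # Carlo has infinite variance here because of the 1/sqrt(x) peak at 0.
    def importance_sample(n, rng):
        # Sample x = u^2, i.e. density p(x) = 1/(2*sqrt(x)), which absorbs
        # the peak; the weight f(x)/p(x) = 2*exp(-x) is bounded and smooth.
        total = 0.0
        for _ in range(n):
            u = rng.random()
            total += 2.0 * math.exp(-u * u)
        return total / n

    rng = random.Random(42)
    est = importance_sample(100_000, rng)
    exact = math.sqrt(math.pi) * math.erf(1.0)  # ≈ 1.4936
    ```

    The change of variables turns a singular integrand into a bounded one, which is exactly the role the sector-decomposition remappings play in higher dimensions.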
  99. Soon Yung Jun (Fermi National Accelerator Lab. (US))
    22/08/2017, 17:45
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    Geant4 is the leading detector simulation toolkit used in high energy physics to design
    detectors and to optimize calibration and reconstruction software. It employs a set of carefully validated physics models to simulate interactions of particles with matter across a wide range of interaction energies. These models, especially the hadronic ones, rely largely on directly measured...

  100. Kevin Fox (PNNL)
    22/08/2017, 17:45
    Track 1: Computing Technology for Physics Research
    Oral

    At PNNL, we are using cutting-edge technologies and techniques to enable the physics communities we support to produce excellent science. This includes hardware virtualization using an on-premises OpenStack private cloud, a Kubernetes- and Docker-based container system, and Ceph, the leading software-defined storage solution. In this presentation we will discuss how we leverage these...

  101. Ayres Freitas (University of Pittsburgh)
    22/08/2017, 18:00
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    There are currently no analytical techniques for computing multi-loop integrals with an arbitrary number of different massive propagators. On the other hand, the challenge for numerical methods is to ensure sufficient accuracy and precision of the results. This contribution will give a brief overview on numerical approaches based on differential equations, dispersion relations and...

  102. Axel Naumann (CERN)
    22/08/2017, 18:05
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    ROOT https://root.cern is evolving along several new paths. At the same time it is reconsidering existing parts. This presentation will try to predict where ROOT will be in three years from now: the main themes of development and where we are already now, the big open questions as well as some of the questions that we didn't even ask yet. The oral presentation will cover the new graphics and...

  103. Daniel Lo (Microsoft research), David Lange (Princeton University (US)), Gareth Roy (University of Glasgow), Ian Fisk (Simons Foundation), Jeff Hammond (Intel), Dr Tom Gibbs (NVIDIA Corporation)
    22/08/2017, 18:05
    Track 1: Computing Technology for Physics Research
    Oral
  104. Guilherme Amadio (CERN)
    22/08/2017, 18:25
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    The bright future of particle physics at the Energy and Intensity frontiers poses
    exciting challenges to the scientific software community. The traditional strategies
    for processing and analysing data are evolving in order to cope with the ever-increasing
    complexity and size of the datasets, and in order to (i)...

  105. 22/08/2017, 19:00

    Three bars near Alder Hall know we might be coming... Join them, and discuss all things ACAT, Seattle, and relax... Go early, go late - they are open!

    • College Inn
    • Shultzy's
    • Big Time
  106. Prof. Blayne Heckel (University of Washington)
    23/08/2017, 08:45
  107. Prof. Jacob Vanderplas (University of Washington eScience Institute)
    23/08/2017, 09:00
    Oral
  108. Kyle Stuart Cranmer (New York University (US))
    23/08/2017, 09:30
    Oral
  109. Daniel Whiteson (University of California Irvine (US))
    23/08/2017, 10:00
  110. Dr Colin Williams (DWave Systems)
    23/08/2017, 11:00
    Oral
  111. Mr Tom Gibbs (NVIDIA), Dr Tom Gibbs (NVIDIA Corporation)
    23/08/2017, 11:30
    Oral
  112. Dr Andrew Putnam (Microsoft Corporation)
    23/08/2017, 12:00
    Oral

    The emergence of Cloud Computing has resulted in an explosive growth of computing power, where even moderately-sized datacenters rival the world’s most powerful supercomputers in raw compute capacity.

    Microsoft’s Catapult project has augmented its datacenters with FPGAs (Field Programmable Gate Arrays), which not only expand the compute capacity and efficiency for scientific computing, but...

  113. Sergei Gleyzer (University of Florida (US))
    24/08/2017, 09:00
    Oral
  114. Pushpalatha Bhat (Fermi National Accelerator Lab. (US))
    24/08/2017, 09:30

    The round table will be moderated by the following panelists:

    Kyle Cranmer
    Wahid Bhimji
    Michela Paganini
    Andrey Ustyuzhanin
    Sergei Gleyzer

  115. Elizabeth Sexton-Kennedy (Fermi National Accelerator Lab. (US))
    24/08/2017, 11:00
    Oral

    We’ve known for a while now that projections of computing needs for the experiments running in 10 years from now are unaffordable. Over the past year the HSF has convened a series of workshops aiming to find consensus on the needs, and produce proposals for research and development to address this challenge. At this time many of the software related drafts are far enough along to give a...

  116. Mike Hildreth (University of Notre Dame (US))
    24/08/2017, 11:30
    Oral

    Simply preserving the data from a scientific experiment is rarely sufficient to enable the re-use or re-analysis of the data. Instead, a more complete set of knowledge describing how the results were obtained, including analysis software and workflows, computation environments, and other documentation may be required. This talk explores the challenges in preserving the various knowledge...

  117. Deborah BARD (LBL)
    24/08/2017, 12:00
    Oral

    High Performance Computing (HPC) has been an integral part of HEP computing for decades, but the use of supercomputers has typically been limited to running cycle-hungry simulations for theory and experiment. Today’s supercomputers offer spectacular compute power but are not always simple to use - supercomputers have a highly specialized architecture that means that code that runs well on a...

  118. Andrei Gheata (CERN)
    24/08/2017, 14:00
    Track 1: Computing Technology for Physics Research
    Oral

    GeantV went through a thorough community discussion in the fall of 2016, reviewing the project's status and its strategy for sharing the R&D benefits with the LHC experiments and with the HEP simulation community in general. Following up on this discussion, GeantV has embarked on an ambitious 2-year roadmap aiming to deliver a beta version that has most of the performance features of the final...

  119. Kiyoshi Kato (Kogakuin University), Stephen Jones (MPI, Munich), Takahiro Ueda (KEK), Walter Giele
    24/08/2017, 14:00
    Track 3: Computations in Theoretical Physics: Techniques and Methods

    In recent years, we have seen an explosion of new results at the NNLO level and beyond for LHC processes. These advances have been achieved through both analytical and numerical techniques, depending on the process and the group that performed the calculation.

    This panel discussion will address questions such as how much the minimization of computer running time is desirable, and whether the possibility to...

  120. Andrey Ustyuzhanin (Yandex School of Data Analysis (RU))
    24/08/2017, 14:00
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    Many machine learning algorithms produce computationally complex models, and further growth in the quality of such models usually leads to slower application times. However, it is desirable to use such high-quality models under limited resources (memory or CPU time).
    This article discusses how to trade the quality of a model for the speed of its...

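    The quality-versus-speed knob can be made concrete with a toy analogy (not the talk's method — ensemble truncation stands in here for the real techniques): evaluating only the first k members of an averaged ensemble is k times cheaper, at the cost of an error shrinking only like 1/sqrt(k).

    ```python
    import random

    # Toy: an "ensemble" of noisy base predictors approximates a target value;
    # averaging the first k members trades prediction quality for speed.
    def make_ensemble(n_members, rng):
        # each member's contribution is the target plus Gaussian noise
        return [rng.gauss(0.0, 1.0) for _ in range(n_members)]

    def predict(members, k, target=3.0):
        """Average the first k members (evaluation cost grows linearly in k)."""
        return target + sum(members[:k]) / k

    rng = random.Random(0)
    noise = make_ensemble(1000, rng)
    coarse = predict(noise, 10)    # fast, noisier
    fine = predict(noise, 1000)    # 100x slower, error shrinks ~ 1/sqrt(k)
    ```

    Real systems get the same trade-off by pruning trees, quantizing weights, or distilling a large model into a smaller one.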
  121. Dr Malachi Schram (PNNL )
    24/08/2017, 14:20
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    The Belle II experiment is expected to start taking data in early 2018. Precision measurements of rare decays are a key part of the Belle II physics program and machine learning algorithms have played an important role in the measurement of small signals in high energy physics over the past several years. The authors report on the application of deep learning to the analysis of the B to K*...

  122. Mr Jiang Zhu (Sun Yat-Sen University )
    24/08/2017, 14:20
    Track 1: Computing Technology for Physics Research
    Oral

    The current event display module of the Jiangmen Underground Neutrino Observatory (JUNO) is based on the ROOT EVE package. We use Unity, a multiplatform game engine, to improve its performance and make it available on different platforms. Compared with ROOT, Unity can give a more vivid demonstration of high energy physics experiments, and it can be ported to other platforms easily. We build...

  123. Kevin Wierman (Pacific Northwest National Laboratory)
    24/08/2017, 14:40
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Liquid argon time projection chambers (LArTPCs) are an innovative technology used in neutrino physics measurements that can also be utilized to establish limits on several partial lifetimes for proton and neutron decay. Current analyses suffer from low efficiencies and purities that arise from the misidentification of nucleon decay final states as background processes and vice versa....

  124. Niko Neufeld (CERN)
    24/08/2017, 14:40
    Track 1: Computing Technology for Physics Research
    Oral

    The 2020 upgrade of the LHCb detector will vastly increase the rate of collisions the Online system needs to process in software in order to filter events in real time. 30 million collisions per second will pass through a selection chain where each step is executed conditionally on its prior acceptance.

    The Kalman filter is a process of the event reconstruction that, due to its time...

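    The predict/update structure that makes the Kalman filter such a regular, parallelizable kernel can be shown in a deliberately minimal one-dimensional, static-state form. This is a toy cousin of the LHCb track-fit filter, not the production code; the measurements and noise value are invented.

    ```python
    # Toy: a one-dimensional Kalman filter fusing repeated noisy
    # measurements of a constant quantity.
    def kalman_1d(measurements, meas_var, x0=0.0, p0=1e6):
        """Sequentially fuse measurements; returns (estimate, variance)."""
        x, p = x0, p0
        for z in measurements:
            # predict step is trivial for a static state: x and p unchanged
            k = p / (p + meas_var)       # Kalman gain
            x = x + k * (z - x)          # update estimate with the residual
            p = (1.0 - k) * p            # shrink the uncertainty
        return x, p

    est, var = kalman_1d([4.8, 5.2, 5.1, 4.9], meas_var=0.04)
    # with a diffuse prior, est converges to the sample mean ≈ 5.0
    # and var to meas_var / n ≈ 0.01
    ```

    In a track fit the state is a vector (position, slope, curvature) and the predict step propagates it through the detector, but each update has exactly this gain/residual/shrink shape — small, independent linear-algebra kernels, one per track, which is why the algorithm vectorizes and parallelizes well.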
  125. Felice Pantaleo (CERN)
    24/08/2017, 15:00
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    Starting with Run II, future development projects for the Large Hadron Collider will steadily increase the nominal luminosity, with the ultimate goal of reaching a peak luminosity of $5 \cdot 10^{34} cm^{−2}s^{−1}$ for the ATLAS and CMS experiments, planned for the High Luminosity LHC (HL-LHC) upgrade. This rise in luminosity will directly result in an increased number of simultaneous proton...

    Go to contribution page
  126. Pier Paolo Ricci (INFN CNAF)
    24/08/2017, 15:00
    Track 1: Computing Technology for Physics Research
    Oral

    The INFN CNAF Tier-1 has been the Italian national data center for INFN computing activities since 2005. As one of the reference sites for data storage and computing in the High Energy Physics (HEP) community, it offers resources to all four LHC experiments and many other HEP and non-HEP collaborations. The CDF experiment has used the INFN Tier-1 resources for many years and,...

    Go to contribution page
  127. Guilherme Amadio (CERN)
    24/08/2017, 15:20
    Track 1: Computing Technology for Physics Research
    Oral

    When dealing with the processing of large amounts of data, the rate at which
    reading and writing can take place is a critical factor. High Energy Physics
    data processing relying on ROOT-based persistification is no exception.
    The recent parallelisation of LHC experiments' software frameworks and the
    analysis of the ever increasing amount of collision data collected by
    experiments further...

    Go to contribution page
  128. Fedor Ratnikov (Yandex School of Data Analysis (RU))
    24/08/2017, 15:20
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    Reconstruction and identification in the calorimeters of modern High Energy Physics experiments is a complicated task. Solutions are usually driven by a priori knowledge about the expected properties of reconstructed objects. Such an approach is also used to distinguish single photons in the electromagnetic calorimeter of the LHCb detector at the LHC from overlapping photons produced from high momentum...

    Go to contribution page
  129. Jakob Blomer (CERN)
    24/08/2017, 15:40
    Track 1: Computing Technology for Physics Research
    Oral

    The analysis of High-Energy Physics (HEP) data sets often takes place outside the realm of experiment frameworks and central computing workflows, using carefully selected "n-tuples" or Analysis Object Data (AOD) as a data source. Such n-tuples or AODs may comprise data from tens of millions of events and grow to hundreds of gigabytes or a few terabytes in size. They are typically small enough to...

    Go to contribution page
  130. Katherine Woodruff (New Mexico State University)
    24/08/2017, 15:40
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    MicroBooNE is a liquid argon time projection chamber (LArTPC) neutrino
    experiment that is currently running in the Booster Neutrino Beam at Fermilab.
    LArTPC technology allows for high-resolution, three-dimensional representations
    of neutrino interactions. A wide variety of software tools for automated
    reconstruction and selection of particle tracks in LArTPCs are actively being
    developed....

    Go to contribution page
  131. Mikhail Titov (National Research Centre Kurchatov Institute (RU))
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Scientific computing has had to advance in how it deals with massive amounts of data, since production capacities have increased significantly over the last decades. Most large science experiments require vast computing and data storage resources in order to provide results or predictions based on the data obtained. For scientific distributed computing systems with hundreds of petabytes...

    Go to contribution page
  132. Valentin Volkl (University of Innsbruck (AT))
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The ACTS project aims to decouple the experiment-agnostic parts of the well-established ATLAS tracking software into a standalone package. As the first user, the Future Circular Collider (FCC) Design Study based its track reconstruction software on ACTS. In this presentation we describe the use cases and performance of ACTS in the dense tracking environment of the FCC proton-proton (FCC-hh)...

    Go to contribution page
  133. Sean Murray (University of Cape Town (ZA))
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    CODE-RADE is a platform for user-driven, continuous integration and delivery of research applications in a distributed environment. Starting with 6 hypotheses describing the problem at hand, we put forward technical and social solutions to these. Combining widely-used and thoroughly-tested tools, we show how it is possible to manage the dependencies and configurations of a wide range of...

    Go to contribution page
  134. Mr Petr Bouř (FNSPE CTU Prague)
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    We introduce several modifications of classical statistical tests applicable to weighted data sets in order to test the homogeneity of weighted and unweighted samples, e.g. Monte Carlo simulations compared to real data measurements. Specifically, we deal with the Kolmogorov-Smirnov, Anderson-Darling and f-divergence homogeneity tests. The asymptotic approximation of the p-value and power of our...
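
    One common way to generalize the two-sample Kolmogorov-Smirnov statistic to weighted data (a sketch of the idea, not necessarily the authors' construction) is to replace event counts with cumulative weights in the empirical CDFs:

```python
import numpy as np

def weighted_ks_statistic(x1, w1, x2, w2):
    """KS distance between two weighted samples via their weighted ECDFs."""
    grid = np.sort(np.concatenate([x1, x2]))  # evaluate both ECDFs on a common grid

    def wecdf(x, w, grid):
        order = np.argsort(x)
        xs, cw = np.asarray(x, float)[order], np.cumsum(np.asarray(w, float)[order])
        cw = cw / cw[-1]                      # normalize total weight to 1
        idx = np.searchsorted(xs, grid, side="right")
        return np.where(idx > 0, cw[np.clip(idx - 1, 0, len(cw) - 1)], 0.0)

    return float(np.max(np.abs(wecdf(x1, w1, grid) - wecdf(x2, w2, grid))))
```

    With equal weights this reduces to the classical two-sample KS statistic: identical samples give 0, fully separated samples give 1.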

    Go to contribution page
  135. Dr Igor Oya ( Deutsches Elektronen-Synchrotron (DESY))
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Cherenkov Telescope Array (CTA) is the next-generation atmospheric Cherenkov gamma-ray observatory. CTA will consist of two installations, one in the southern hemisphere (Cerro Armazones, Chile) and the other in the northern hemisphere (La Palma, Spain). The two sites will contain dozens of telescopes of different sizes, constituting one of the largest astronomical installations under development. The...

    Go to contribution page
  136. Dominik Steinschaden (Stefan Meyer Institute)
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The $\overline{\text{P}}$ANDA experiment, currently under construction at the Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany, addresses fundamental questions in hadron and nuclear physics via interactions of antiprotons with a proton or nuclei, e.g. light and charm exotics, multi-strange baryons and hadrons in nuclei. It will be installed at the High Energy Storage Ring...

    Go to contribution page
  137. Antonio Augusto Alves Junior (University of Cincinnati (US))
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Hydra is a templatized header-only, C++11-compliant library for data analysis on massively parallel platforms targeting, but not limited to, the field of High Energy Physics research.
    Hydra supports the description of particle decays via phase-space Monte Carlo generation, generic function evaluation, data fitting, multidimensional adaptive numerical integration and histogramming.
    Hydra is...

    Go to contribution page
  138. Fedor Ratnikov (Yandex School of Data Analysis (RU)), Fedor Ratnikov
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    One of the most important aspects of data processing at LHC experiments is the particle identification (PID) algorithm. In LHCb, several different sub-detector systems provide PID information: the Ring Imaging CHerenkov (RICH) detector, the hadronic and electromagnetic calorimeters, and the muon chambers. To improve charged particle identification, several neural networks including a deep...

    Go to contribution page
  139. Andrey Ustyuzhanin (Yandex School of Data Analysis (RU))
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    We investigate different approaches to the recognition of electromagnetic showers in data collected by the international OPERA collaboration. The experiment was initially designed to detect neutrino oscillations, but the collected data can also be used for the development of machine learning techniques for electromagnetic shower detection in photo emulsion films. Such showers...

    Go to contribution page
  140. Nikola Hardi (CERN)
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    Containerisation technology is becoming more and more popular because it provides an efficient way to improve deployment flexibility by packaging up code into software micro-environments. Yet, containerisation has limitations and one of the main ones is the fact that entire container images need to be transferred before they can be used. Container images can be seen as software stacks and ...

    Go to contribution page
  141. Dr Kim Siang Khaw (University of Washington)
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Muon g-2 experiment at Fermilab will begin beam and detector commissioning in summer 2017 to measure the muon anomalous magnetic moment to an unprecedented level of 140 ppb. To deal with incoming data projected to be around tens of petabytes, a robust data reconstruction and analysis framework, built on Fermilab’s art event-processing framework, has been developed. In this workshop, we report...

    Go to contribution page
  142. Brian Paul Bockelman (University of Nebraska-Lincoln (US))
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The ROOT I/O (RIO) subsystem is foundational to most HEP experiments - it provides a file format, a set of APIs/semantics, and a reference implementation in C++. It is often found at the base of an experiment's framework and is used to serialize the experiment's data; in the case of an LHC experiment, this may be hundreds of petabytes of files! Individual physicists will further use RIO to...

    Go to contribution page
  143. Soon Yung Jun (Fermi National Accelerator Lab. (US))
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    Sequences of pseudorandom numbers of high statistical quality and their
    efficient generation are critical for the use of Monte Carlo simulation
    in many areas of computational science. As high performance parallel
    computing systems equipped with wider vector pipelines or many-core
    technologies become widely available, a variety of parallel pseudo-random
    number generators (PRNGs) are being...
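
    As an illustration of one common parallel-PRNG strategy (not necessarily the one in this contribution), counter-based generators such as Philox let each worker derive an independent, reproducible stream from a shared seed and its worker id, with no mutable state shared between workers. A sketch using NumPy:

```python
import numpy as np

def worker_stream(seed, worker_id):
    """Independent, reproducible stream per worker from a counter-based PRNG."""
    # Philox is counter-based: distinct keys yield statistically
    # independent streams, so workers never need to coordinate.
    return np.random.Generator(np.random.Philox(key=seed + worker_id))

# Four workers, four independent streams; re-creating a stream replays it exactly.
streams = [worker_stream(42, i) for i in range(4)]
draws = [g.random(3) for g in streams]
```

    Re-seeding with the same (seed, worker_id) pair reproduces a stream bit-for-bit, which makes parallel Monte Carlo runs deterministic and debuggable.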

    Go to contribution page
  144. Xavier Valls Pla (University Jaume I (ES))
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    In order to take full advantage of new computer architectures and to satisfy the requirement of minimizing CPU usage with an increasing amount of data to analyse, parallelisation and vectorisation have been introduced in the ROOT mathematical and statistical libraries.

    We report first on the improvements obtained in the function evaluation, used for data modelling, by adding the support...

    Go to contribution page
  145. Dr Tao Lin (IHEP)
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    The Jiangmen Underground Neutrino Observatory (JUNO) is a neutrino experiment designed to determine the neutrino mass hierarchy. It has a central detector used for neutrino detection, which consists of a spherical acrylic vessel containing 20 kt of liquid scintillator (LS) and about 18,000 20-inch photomultiplier tubes (PMTs) to collect light from the LS.

    As one of the important parts in JUNO offline software,...

    Go to contribution page
  146. Siarhei Padolski (BNL)
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    Every scientific workflow involves an organizational part whose purpose is to plan the analysis process thoroughly according to a defined schedule and thus keep progress efficient. Information such as an estimate of the processing time or the possibility of a system outage (abnormal behaviour) will improve the planning process, assist in monitoring system performance and...

    Go to contribution page
  147. Dr Siarhei Padolski (BNL)
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    Modern physics experiments collect peta-scale volumes of data and utilize vast, geographically distributed computing infrastructure that serves thousands of scientists around the world.
    Requirements for rapid, near real-time data processing, fast analysis cycles and the need to run massive detector simulations to support data analysis place a special premium on efficient use of available...

    Go to contribution page
  148. Rudolf Fruhwirth (Austrian Academy of Sciences (AT))
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Circle finding and fitting is a frequent problem in the data analysis of high-energy physics experiments. In a tracker immersed in a homogeneous magnetic field, tracks with sufficiently high momentum are close to perfect circles if projected to the bending plane. In a ring-imaging Cherenkov detector, a circle of photons around the crossing point of a charged particle has to be found and...
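
    As a minimal illustration of the fitting problem (the Kasa algebraic fit, one standard textbook approach, not necessarily the method of this contribution): points on a circle satisfy x² + y² + Dx + Ey + F = 0, which is linear in (D, E, F) and can therefore be solved by ordinary least squares.

```python
import numpy as np

def fit_circle(x, y):
    """Kasa algebraic circle fit: least-squares solution of x^2+y^2+Dx+Ey+F=0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0           # circle center
    r = np.sqrt(cx**2 + cy**2 - F)        # radius from completing the square
    return cx, cy, r
```

    For noiseless points the fit is exact; with noise, the algebraic fit is often used as a fast seed for an iterative geometric fit.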

    Go to contribution page
  149. Vakho Tsulaia (Lawrence Berkeley National Lab. (US))
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    ATLAS uses its multi-processing framework AthenaMP for an increasing number of workflows, including simulation, reconstruction and event data filtering (derivation). After serial initialization, AthenaMP forks worker processes that then process events in parallel, with each worker reading data individually and producing its own output. This mode, however, has inefficiencies: 1) The worker no...

    Go to contribution page
  150. Ms Rui Li (Sun Yat-sen University)
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

     In order to find the rare particles generated from collisions at high-energy particle colliders, we need to solve signal-versus-background classification problems. It turns out that neural networks can be used here to improve performance without any manually constructed inputs.

     This is the content of my oral report:

    1. A brief introduction to neural networks
    2....

    Go to contribution page
  151. Mr Igor Mandrichenko (FNAL), Igor Vasilyevich Mandrichenko
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    Columnar data representation is known to be an efficient way to store and access data, specifically in cases when the analysis is often done based on only a small fragment of the available data structure. Data representations like Apache Parquet additionally split data horizontally to allow for easy parallelization of data analysis. Based on the general idea of columnar data storage,...
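
    A toy sketch of the two layouts discussed (illustrative only, not the authors' implementation): columnar storage keeps one contiguous array per attribute, so an analysis touching a single attribute reads only that column, while Parquet-style horizontal splitting cuts the columns into row groups ("stripes") for parallel processing.

```python
import numpy as np

# Row-wise events, one record per collision.
events = [{"pt": 10.0, "eta": 0.5}, {"pt": 25.0, "eta": -1.2}, {"pt": 7.5, "eta": 2.0}]

# Columnar layout: one contiguous array per attribute; a selection on
# "pt" never touches the "eta" column.
columns = {k: np.array([e[k] for e in events]) for k in events[0]}

# Horizontal split into stripes of 2 events each (Parquet-style row groups),
# each stripe independently processable by a worker.
stripes = [{k: v[i:i + 2] for k, v in columns.items()}
           for i in range(0, len(events), 2)]

high_pt = columns["pt"] > 9.0  # vectorized selection on a single column
```

    The selection reads one array instead of three dictionaries per event, which is the access-pattern advantage the abstract refers to.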

    Go to contribution page
  152. Andrei Kazarov (Petersburg Nuclear Physics Institut (RU))
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    Data Acquisition (DAQ) of the ATLAS experiment is a large distributed
    and inhomogeneous system: it consists of thousands of interconnected
    computers and electronics devices that operate coherently to read out
    and select relevant physics data. Advanced diagnostics capabilities of
    the TDAQ control system are a crucial feature which contributes
    significantly to smooth operation and fast recovery...

    Go to contribution page
  153. Simone Campana (CERN)
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    With this contribution we present the recent developments made to Rucio, the data management system of the High-Energy Physics experiment ATLAS. Already managing 260 Petabytes of both official and user data, Rucio has seen incremental improvements throughout LHC Run-2, and is currently laying the groundwork for HEP computing in the HL-LHC era. The focus of this contribution is (a) the...

    Go to contribution page
  154. Doris Yangsoo Kim (Soongsil University)
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    The Belle II experiment at the SuperKEKB collider at KEK is a next-generation B factory. Phase I of the experiment, during which extensive beam studies were conducted, has just finished. The collaboration is preparing for the physics run in 2018 with the full detector setup. The simulation library of the Belle II experiment is based on the Geant4 package. In this talk, we will summarize the...

    Go to contribution page
  155. Ms Shuhui Huang (Sun Yat-sen University)
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    BESIII, the detector at the BEPCII accelerator, has completed a major upgrade of the endcaps of its TOF detector to allow more precise measurements. As a result, the BesVis event display system of the BESIII experiment needs to be updated. We used the ROOT Geometry package to build up the geometrical structure and the display system. The BesVis system plays an important role in the DAQ system, reconstruction...

    Go to contribution page
  156. Marcelo Vogel (Bergische Universitaet Wuppertal (DE))
    24/08/2017, 16:00
    Track 1: Computing Technology for Physics Research
    Poster

    This paper describes the deployment of ATLAS offline software in containers for software development, for use in production jobs on the grid - such as event generation, simulation, reconstruction and physics derivations - and in physics analysis. For this we are using Docker and Singularity, which are both lightweight virtualization technologies that encapsulate a piece of software inside a...

    Go to contribution page
  157. Lucio Dery (Stanford)
    24/08/2017, 16:00
    Track 2: Data Analysis - Algorithms and Tools
    Poster

    As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. This paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics...
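
    A toy sketch of the idea of learning from label proportions (illustrative only; the sample composition, model and loss below are assumptions, not the paper's setup): given two mixed samples whose signal fractions are known but whose events are unlabeled, a classifier can be trained so that its batch-averaged prediction matches each known proportion, and per-event discrimination emerges as a by-product.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "mixed" samples with known signal fractions but no per-event labels.
# Signal ~ N(1, 1), background ~ N(-1, 1) are illustrative assumptions.
def make_sample(n, f_sig):
    is_sig = rng.random(n) < f_sig
    x = np.where(is_sig, rng.normal(1.0, 1.0, n), rng.normal(-1.0, 1.0, n))
    return x, is_sig

(x_a, y_a), (x_b, y_b) = make_sample(5000, 0.8), make_sample(5000, 0.2)

# Logistic model trained on the squared mismatch between the batch-mean
# prediction and the known class proportion of each sample.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(200):
    for x, f in ((x_a, 0.8), (x_b, 0.2)):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        err = p.mean() - f                  # proportion mismatch
        grad = p * (1.0 - p)                # d(sigmoid)/d(logit)
        w -= lr * 2 * err * np.mean(grad * x)
        b -= lr * 2 * err * np.mean(grad)

# Per-event classification emerges even though only proportions were given.
pred = (1.0 / (1.0 + np.exp(-(w * x_a + b)))) > 0.5
accuracy = (pred == y_a).mean()
```

    With the two Gaussians above, the proportion-only objective drives the model toward the same decision boundary a fully supervised fit would find.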

    Go to contribution page
  158. Sebastien Binet (IN2P3/LPC)
    24/08/2017, 16:45
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    High Energy and Nuclear Physics (HENP) libraries are now required to be more and
    more multi-thread-safe, if not multi-thread-friendly and multi-threaded.
    This is usually done using the new constructs and library components offered by
    the C++11 and C++14 standards.
    These components are however quite low-level (threads, mutexes, locks, ...) and
    hard to use and compose, or easy to...

    Go to contribution page
  159. Sofia Vallecorsa (CERN)
    24/08/2017, 16:45
    Track 1: Computing Technology for Physics Research
    Oral

    The GeantV project introduces fine-grained parallelism, vectorisation, efficient memory management and NUMA awareness in physics simulations. It is being developed to improve accuracy while at the same time preserving portability across different architectures (Xeon Phi, GPU). This approach brings important performance benefits on modern architectures and good scalability through a large...

    Go to contribution page
  160. Martin Urban (RWTH Aachen University)
    24/08/2017, 16:45
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    Latest developments in many research fields indicate that deep learning methods have the potential to significantly improve physics analyses.
    They not only enhance the performance of existing algorithms but also pave the way for new measurement techniques that are not possible with conventional methods.
    As the computation is highly resource-intensive both dedicated hardware and software are...

    Go to contribution page
  161. Dr Hiroshi DAISAKA (Hitotsubashi University)
    24/08/2017, 17:10
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    The evaluation of a wide variety of Feynman diagrams with multi-loop integrals and physical parameters, and its comparison with high-energy experiments, is expected to probe new physics beyond the Standard Model. We have been developing a direct computation method (DCM) of multi-loop integrals of Feynman diagrams. One of the features of our method is that we adopt double exponential (DE)...
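
    The double-exponential (DE, tanh-sinh) transformation the abstract refers to can be sketched in its standard textbook form for a finite interval (not the authors' multi-loop implementation): substituting x = tanh((π/2) sinh t) makes the integrand's endpoint behavior decay double-exponentially in t, so a simple trapezoidal sum converges extremely fast.

```python
import numpy as np

def tanh_sinh_quad(f, n=40, h=0.1):
    """Double-exponential (tanh-sinh) quadrature for the integral of f over [-1, 1]."""
    t = h * np.arange(-n, n + 1)
    u = 0.5 * np.pi * np.sinh(t)
    x = np.tanh(u)                          # abscissae clustered at the endpoints
    w = 0.5 * np.pi * np.cosh(t) / np.cosh(u) ** 2  # DE-transformed weights
    return h * np.sum(w * f(x))             # trapezoidal rule in t
```

    With h = 0.1 and 81 nodes this already reaches near machine precision for smooth integrands such as 1/(1 + x²), whose exact integral over [-1, 1] is π/2.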

    Go to contribution page
  162. Tatsumi Nitta (Waseda University (JP))
    24/08/2017, 17:10
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    By colliding protons and examining the particle emitted from the collisions, the Large Hadron Collider aims to study the interactions of quarks and gluons at the highest energies accessible in a controlled experimental way. In such collisions, the types of interactions that occur may extend beyond those encompassed by the Standard Model of particle physics. Such interactions typically occur...

    Go to contribution page
  163. Jana Schaarschmidt (University of Washington (US))
    24/08/2017, 17:10
    Track 1: Computing Technology for Physics Research
    Oral

    Producing the very large samples of simulated events required by many physics and performance studies with the ATLAS detector using the full GEANT4 detector simulation is highly CPU intensive. Fast simulation tools are a useful way of reducing CPU requirements when detailed detector simulations are not needed. During the LHC Run-1, a fast calorimeter simulation (FastCaloSim) was successfully...

    Go to contribution page
  164. Andrea Capra (TRIUMF (CA))
    24/08/2017, 17:30
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    The ALPHA experiment at CERN is designed to produce, trap and study antihydrogen, which is the antimatter counterpart of the hydrogen atom. Since hydrogen is one of the best studied physical systems, both theoretically and experimentally, experiments on antihydrogen permit a precise direct comparison between matter and antimatter. Our basic technique consists of driving an antihydrogen...

    Go to contribution page
  165. Thomas Janson
    24/08/2017, 17:30
    Track 1: Computing Technology for Physics Research
    Oral

    In this talk, we explore the data-flow programming approach for massively parallel computing on FPGA accelerators, where an algorithm is described as a data-flow graph and programmed with MaxJ from Maxeler Technologies. Such a directed graph consists of a small set of nodes and arcs. All nodes are fully pipelined and data moves along the arcs through the nodes. We have shown that we can implement...

    Go to contribution page
  166. Dr Carlos Inostroza (Universidad de Concepcion)
    24/08/2017, 17:35
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    The contraction method is a procedure that allows one to establish non-trivial relations between Lie algebras and has had successful applications in both mathematics and theoretical physics. This work deals with generalizations of the contraction procedure, with a main focus on the so-called S-expansion method, as it includes most of the other generalized contractions. Basically, the S-expansion...

    Go to contribution page
  167. Kartik Chitturi (University of Texas (US))
    24/08/2017, 17:50
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    We study the ability of different deep neural network architectures to learn various relativistic invariants and other commonly-used variables, such as the transverse momentum of a system of particles, from the four-vectors of objects in an event. This information can help guide the optimal design of networks for solving regression problems, such as trying to infer the masses of unstable...
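
    As a minimal example of the target quantities discussed (hypothetical helper functions, natural units assumed): the invariants the networks are asked to learn follow directly from the summed four-vectors of an event's objects.

```python
import numpy as np

def invariant_mass(four_vectors):
    """Invariant mass m = sqrt(E^2 - |p|^2) of a system of (E, px, py, pz) vectors."""
    E, px, py, pz = np.sum(np.asarray(four_vectors, dtype=float), axis=0)
    return float(np.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0)))

def transverse_momentum(four_vectors):
    """Transverse momentum pT = sqrt(px^2 + py^2) of the summed system."""
    _, px, py, _ = np.sum(np.asarray(four_vectors, dtype=float), axis=0)
    return float(np.hypot(px, py))
```

    For instance, two massless back-to-back particles with E = 50 each have an invariant mass of 100 in these units; the question the contribution studies is how well a network recovers such non-linear combinations from the raw components.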

    Go to contribution page
  168. Felice Pantaleo (CERN)
    24/08/2017, 17:50
    Track 1: Computing Technology for Physics Research
    Oral

    Starting from 2017, during CMS Phase-I, the increased accelerator luminosity with the consequently increased number of simultaneous proton-proton collisions (pile-up) will pose significant new challenges for the CMS experiment.
    The primary goal of the HLT is to apply a specific set of physics selection algorithms and to accept the events with the most interesting physics content. To cope with...

    Go to contribution page
  169. Dr Carlos Inostroza (Universidad de Concepcion)
    24/08/2017, 18:00
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    S-expansion of Lie algebras is a procedure that contains the Inonu-Wigner contraction and most of its generalizations. Based on a recent work in which we presented a Java library to perform S-expansions of Lie algebras [arXiv:1703.04036], we provide an extension allowing one to solve different problems. In particular, in this work we complement our library of [arXiv:1703.04036] with new...

    Go to contribution page
  170. Lukas Alexander Heinrich (New York University (US))
    24/08/2017, 18:10
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    The LHC data analysis software used to derive and publish experimental results is an important asset that must be preserved in order to fully exploit the scientific potential of a given measurement. Among others, important use cases of analysis preservation are the reproducibility of the original results and the reusability of the analysis procedure in the context of new...

    Go to contribution page
  171. Jiaheng Zou (IHEP)
    24/08/2017, 18:10
    Track 1: Computing Technology for Physics Research
    Oral

    SNiPER is a general purpose software framework for high energy physics experiments. During its development, we paid particular attention to the requirements of neutrino and cosmic ray experiments. SNiPER has now been successfully adopted by JUNO (Jiangmen Underground Neutrino Observatory) and LHAASO (Large High Altitude Air Shower Observatory). It plays an important role in the research and design...

    Go to contribution page
  172. Matthias Jochen Schnepf (KIT - Karlsruhe Institute of Technology (DE))
    24/08/2017, 18:30
    Track 1: Computing Technology for Physics Research
    Oral

    As a result of the excellent LHC performance in 2016, more data than expected was recorded, leading to a higher demand for computing resources. It is already foreseeable that for the current and upcoming run periods a flat computing budget and the expected technology advances will not be sufficient to meet future requirements. This results in a growing gap between supplied and demanded...

    Go to contribution page
  173. 24/08/2017, 19:30
  174. Dr George Langford (Syracuse University)
    25/08/2017, 09:00
    Oral

    Research has shown that diversity enhances creativity. It encourages the search for novel information and perspectives leading to better decision making and problem solving, and leads to unfettered discoveries and breakthrough innovations. Even simply being exposed to diversity can change the way you think.
    Professional development opportunities are needed to train faculty and staff to improve...

    Go to contribution page
  175. Jerome LAURET (Brookhaven National Laboratory), Maria Girone (CERN)
    25/08/2017, 09:30
    Oral

    Our panel will cover the topics of "How to create/hire diversity into teams and the competitive advantage of diverse teams".

    We would like to collect questions you may have in advance so panelists have time to prepare comprehensive answers. We will collect them until Wednesday 23rd, noon. The form for this is at...

    Go to contribution page
  176. Gordon Watts (University of Washington (US))
    25/08/2017, 11:00
  177. Dr Philipp Eller (Penn State University)
    25/08/2017, 11:00
  178. Xavier Valls Pla (University Jaume I (ES))
    25/08/2017, 11:00
  179. Tao Lin (IHEP)
    25/08/2017, 11:00
  180. Guilherme Amadio (CERN)
    25/08/2017, 11:00
  181. Daniel S. Katz (University of Illinois)
    25/08/2017, 11:40
  182. Ayres Freitas (University of Zurich)
    25/08/2017, 11:55
    Oral
  183. Sergei Gleyzer (University of Florida (US))
    25/08/2017, 12:20
  184. Shih-Chieh Hsu (University of Washington Seattle (US))
    25/08/2017, 12:45
    Oral
  185. Pushpalatha Bhat (Fermi National Accelerator Lab. (US))
    25/08/2017, 13:10
    Oral
  186. Gordon Watts (University of Washington (US))
    25/08/2017, 13:30
  187. Sebastian Skambraks (Technische Universität München)
    Track 2: Data Analysis - Algorithms and Tools
  188. Andrew Mathew Carnes (University of Florida (US))
    Track 2: Data Analysis - Algorithms and Tools
  189. Renat Sadykov (Joint Institute for Nuclear Research (RU))
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    We continue to study Bhabha scattering at the one-loop level in the SANC system. This is our first step toward developing an EW library of radiative corrections
    for the processes e+e−→ff¯ with longitudinal polarization at future colliders. Higher-order leading-logarithmic QED corrections are taken into account by means of the structure-function approach. Comparison with the existing results for unpolarized...

    Go to contribution page
  190. Fernanda Psihas (Indiana University)
    Track 2: Data Analysis - Algorithms and Tools
    Oral

    Deep Convolutional Neural Networks (CNNs) have been widely used in the field of computer vision to solve complex problems in image recognition and analysis. Recently, we have applied a Deep Convolutional Visual Network (CVN), to identify neutrino events in the NOvA experiment. NOvA is a long baseline neutrino experiment whose main goal is the measurement of neutrino oscillations. It relies on...

    Go to contribution page
  191. Lu Wang (Chinese Academy of Sciences (CN))
    Track 2: Data Analysis - Algorithms and Tools
  192. Serguei Bityukov (Institute for High Energy Physics (RU))
    Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    The concept of "the significance of the difference" is widely used in data analysis in high-energy physics. We consider the significance Type C [1]. This significance is used in works [2,3,4]. It is shown that when comparing two independent samples obtained from one general population, the distribution of estimates of the "significance of the difference" for mean values, obtained for these...

    Go to contribution page
  193. Dr Ben Nachman (Lawrence Berkeley National Lab. (US))
    Oral
  194. Samuel David Jones (University of Sussex (GB))
    Track 2: Data Analysis - Algorithms and Tools
  195. Track 3: Computations in Theoretical Physics: Techniques and Methods
    Oral

    In this work, we present results obtained with the new Three-fluid Hydrodynamics-based Event Simulator Extended by UrQMD final State interactions (THESEUS) and apply it to the description of heavy-ion collisions in the NICA/FAIR energy range. It presents the 3FH output in terms of a set of observed particles and the afterburner can be run starting from this output by means of the UrQMD model....

    Go to contribution page
  196. Chao Zhang
    Track 2: Data Analysis - Algorithms and Tools