-
Mr Simone Amoroso (University of Freiburg) | 02/09/2014, 08:00 | Data Analysis - Algorithms and Tools | Poster
The data recorded by the ATLAS experiment have been thoroughly analyzed for specific signals of physics beyond the Standard Model (SM); although these searches cover a wide variety of possible event topologies, they are not exhaustive. Events produced by new interactions or new particles might still be hidden in the data. The analysis presented here extends specific searches with a...
-
Daniele Gregori (Istituto Nazionale di Fisica Nucleare (INFN)) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
The storage and farming departments at the INFN CNAF Tier1 manage thousands of computing nodes and several hundred servers that provide access to the disk and tape storage. In particular, the storage server machines should provide the following services: efficient access to about 15 petabytes of disk space with different clusters of the GPFS file system, the data transfers...
-
Armenuhi Abramyan (A.I. Alikhanyan National Science Laboratory (Yerevan Physics Institute) Foundation), Narine Manukyan (A.I. Alikhanyan National Science Laboratory (Yerevan Physics Institute) Foundation) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
FAMoS leverages the information stored in the central AliEn file catalogue, which describes every file in a Unix-like directory structure, as well as metadata on file location and replicas. In addition, it uses the access information provided by a set of API servers, which are used by all Grid clients to access the catalogue. The main functions of FAMoS are to sort the file accesses by logical...
-
Christopher Jung (KIT - Karlsruhe Institute of Technology (DE)) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
With the introduction of federated data access to the workflows of WLCG, it is becoming increasingly important for data centers to understand specific data flows regarding storage element accesses, firewall configurations, as well as the scheduling of batch jobs themselves. As existing batch system monitoring and related system monitoring tools do not support measurements at batch job level, a...
-
Eric Chabert (Institut Pluridisciplinaire Hubert Curien (FR)) | 02/09/2014, 08:00 | Data Analysis - Algorithms and Tools | Poster
The CMS experiment has been designed with a 2-level trigger system. The Level 1 Trigger is implemented on custom-designed electronics. The High Level Trigger (HLT) is a streamlined version of the CMS offline reconstruction software running on a computer farm. Using b-tagging at trigger level will play a crucial role during the Run II data taking to ensure the Top quark, beyond the Standard...
-
Claire Gwenlan (University of Oxford (GB)) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
In this presentation we will review the ATLAS Monte Carlo production setup including the different production steps involved in full and fast detector simulation. A report on the Monte Carlo production campaigns during Run-I and Long Shutdown 1 (LS1) will be presented, including details on various performance aspects. Important improvements in the work flow and software will be...
-
Ali Mehmet Altundag (Cukurova University (TR)) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
CMS Software is a huge software development project with a large amount of source code. In large-scale and complex projects, it is important to have as complete a software documentation system as possible. The core of the documentation should be version-based and available online with the source code. CMS uses Doxygen and Twiki as its main tools to provide automated and non-automated documentation. Both of...
-
Mikhail Titov (University of Texas at Arlington (US)) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
The Production and Distributed Analysis system (PanDA) is a distributed computing workload management system for processing user analysis, group analysis, and managed production jobs on the grid. The main goal of the recommender system for PanDA is to utilize user activity to build a corresponding model of user interests that can be considered in how data needs to be distributed. As an...
-
Tommaso Colombo (CERN and Universität Heidelberg) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
The ATLAS detector at CERN records proton-proton collisions delivered by the Large Hadron Collider (LHC). The ATLAS Trigger and Data-Acquisition (TDAQ) system identifies, selects, and stores interesting collision data. These are received from the detector readout electronics at an average rate of 100 kHz. The typical event data size is 1 to 2 MB. Overall, the ATLAS TDAQ can be seen as...
-
Eckhard Von Torne (Universitaet Bonn (DE)) | 02/09/2014, 08:00 | Data Analysis - Algorithms and Tools | Poster
Deep learning neural networks are feed-forward networks with several hidden layers. Due to their complex architecture, such networks have been successfully applied in several difficult non-HEP applications such as face recognition. Recently the application of such networks has been explored in the context of particle physics. We discuss the construction and training of such neural nets...
-
Mr Eric Conte (GRPHE) | 02/09/2014, 08:00 | Data Analysis - Algorithms and Tools | Poster
The LHC experiments are currently pushing limits on new physics to a further and further extent. The interpretation of the results in the framework of any theory however relies on our ability to accurately simulate both signal and background processes. This task is in general achieved by matching matrix-element generator predictions to parton showering, and further employing hadronization and...
-
Dr Alexander Kiselev (BNL) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
The long-term upgrade plan for the RHIC facility at BNL foresees the addition of a high-energy polarized electron beam to the existing hadron machine, thus converting RHIC into an Electron-Ion Collider (eRHIC) with luminosities exceeding $10^{33}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$. The GEANT simulation framework for this future project (EicRoot) is based on FairRoot and its derivatives (PandaRoot, CbmRoot,...
-
Geoffray Michel Adde (CERN) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
EOS is a distributed file system developed and used mainly at CERN. It provides low latency, high availability, strong authentication, and multiple replication schemes, as well as multiple access protocols and features. Deployment and operations remain simple; EOS is currently used by multiple experiments at CERN and provides a total raw storage space of 65 PB. In the first part we go through a...
-
Mikel Eukeni Pozo Astigarraga (CERN) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
ATLAS is a physics experiment that collects high-energy particle collisions at the Large Hadron Collider at CERN. It uses tens of millions of electronics channels to capture the outcome of the particle bunches crossing each other every 25 ns. Since reading out and storing the complete information is not feasible (~100 TB/s), ATLAS makes use of a complex and highly distributed Trigger and...
-
Andrei Tsaregorodtsev (Marseille), Ricardo Graciani Diaz (University of Barcelona (ES)) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
The Open DISData Initiative is focusing on today’s challenges of e-Science in a collaborative effort shared among different scientific communities, relevant technology providers and major e-Infrastructure providers. The target will be to evolve from existing partial solutions towards a common platform for distributed computing able to integrate already existing grid, cloud and other local...
-
Vasil Georgiev Vasilev (CERN) | 02/09/2014, 08:00 | Data Analysis - Algorithms and Tools | Poster
Since the silicon era, programming languages have thrived: assembler, macro assembler, Fortran, C, C++, LINQ. A common characteristic across the generations is the level of abstraction. While assembly languages didn't provide abstractions, macro assemblers, Fortran, C and C++ each promised to improve on the deficiencies of the abstractions of the older ones. The increasing popularity of domain-specific...
-
Niko Neufeld (CERN) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
The LHCb Data Acquisition (DAQ) will be upgraded in 2020 to a trigger-free readout. In order to achieve this goal we will need to connect 500 nodes with a total network capacity of 40 Tb/s. To reach such a high network capacity we are testing zero-copy technology in order to maximise the theoretical link throughput without adding excessive CPU and memory bandwidth overhead, leaving free...
-
Sergey Panitkin (Brookhaven National Laboratory (US)) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide, thousands of physicists analyzing the data need remote access to hundreds of computing sites, the volume of processed data is beyond the exabyte scale, and data processing requires billions of hours of computing usage per year. The PanDA...
-
Dr Giuseppe Avolio (CERN) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
The ATLAS experiment at the Large Hadron Collider at CERN relies on a complex and highly distributed Trigger and Data Acquisition (TDAQ) system to gather and select particle collision data obtained at unprecedented energy and rates. The TDAQ system is composed of a large number of hardware and software components (about 3000 machines and more than 15000 concurrent processes at the end of...
-
Joshua Wyatt Smith (University of Cape Town (ZA)) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
High Performance Computing is relevant in many applications around the world, particularly high energy physics. Experiments such as ATLAS and CMS generate huge amounts of data which need to be analyzed at server farms located on site at CERN and around the world. Apart from the initial cost of setting up an effective server farm, the price to maintain one is enormous. Power consumption and...
-
Sara Vallero (Universita e INFN (IT)) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
The private Cloud at the Torino INFN computing centre offers IaaS services to different scientific computing applications. The infrastructure is managed with the OpenNebula cloud controller. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at LHC, an interactive analysis facility for the same experiment and a grid Tier-2 site for the BESIII collaboration,...
-
Dr Alexandre Vaniachine (ANL) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system comprising many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly, a task (comprising many jobs) has become a unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. In order to manage the diversity of...
-
Mr Dmitry SAVIN (VNIIA, Moscow) | 02/09/2014, 08:00 | Data Analysis - Algorithms and Tools | Poster
The CHIPS-TPT physics library is being developed for simulation of neutron-nuclear reactions at a new, exclusive level. The exclusive modeling conserves energy, momentum and quantum numbers in each neutron-nuclear interaction. The CHIPS-TPT algorithms are based on the exclusive CHIPS library, which is compatible with Geant4. Special CHIPS-TPT physics lists in the Geant4 format are provided,...
-
Peter Berta (Charles University (CZ)) | 02/09/2014, 08:00 | Data Analysis - Algorithms and Tools | Poster
The ability to correct jets and jet shapes for the contributions of multiple uncorrelated proton-proton interactions (pileup) largely determines the ability to identify highly boosted hadronic decays of W, Z, and Higgs bosons, or top quarks. We present a new method that operates at the level of the jet constituents and provides both performance improvement and simplification compared to...
-
Ondrej Penc (Acad. of Sciences of the Czech Rep. (CZ)) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
The performance of the ATLAS Inner Detector (ID) trigger algorithms being developed for running on the ATLAS High Level Trigger (HLT) processor farm during Run 2 of the LHC is presented. During the 2013-14 LHC long shutdown, modifications are being carried out to the LHC accelerator to increase both the beam energy and luminosity. These modifications will pose significant challenges for...
-
Jose Seixas (Univ. Federal do Rio de Janeiro (BR)) | 02/09/2014, 08:00 | Data Analysis - Algorithms and Tools | Poster
The Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment and has about 10,000 electronic channels. An Optimal Filter (OF) has been used to estimate the energy sampled by the calorimeter and applies a Quality Factor (QF) for signal acceptance. An approach using a Matched Filter (MF) has also been pursued. In order to cope with the luminosity rising...
-
Mr Dmitry Batkovich (St. Petersburg State University (RU)) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
We present a prototype of a scalable computing cloud. It is intended to be deployed on the basis of a cluster without a separate dedicated storage. The dedicated storage is replaced by distributed software storage. In addition, all cluster nodes are used both as computing nodes and as storage nodes. This increases utilization of the cluster resources and improves the fault tolerance...
-
Jaroslava Schovancova (Brookhaven National Laboratory (US)) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
The PanDA Workload Management System (WMS) has been the basis for distributed production and analysis of the ATLAS experiment at the Large Hadron Collider since early 2008. Since the start of data taking of LHC Run I, PanDA usage has ramped up to over 1 exabyte of processed data in 2013, and 1.5M peak completed jobs per day in 2014. The PanDA monitor is one of the core components of the PanDA...
-
Mr Serguei Kolos (University of California Irvine (US)) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
The ATLAS Error Reporting feature, which is used in the TDAQ environment, provides a service that allows experts and the shift crew to track and address errors relating to the data taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about errors happening at run time to...
-
Pier Paolo Ricci (INFN CNAF) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
The consolidation of Mass Storage services at the INFN CNAF Tier1 Storage department over the last 5 years has resulted in a reliable, high-performance and moderately easy-to-manage facility that can provide virtually all the storage archive, backup and database software services to several different use cases. At present the INFN CNAF Tier1 GEMSS Mass Storage System...
-
Andre Sailer (CERN) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
For the future experiments at linear electron-positron colliders (ILC or CLIC), detailed physics and detector optimisation studies are taking place in the CLICdp, ILD, and SiD groups. The physics performance of different detector geometries and technologies has to be estimated realistically. These assessments require sophisticated and flexible full detector simulation and reconstruction...
-
Pier Paolo Ricci (INFN CNAF) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
In recent years the problem of digital preservation of valuable scientific data has become one of the most important points to consider within scientific collaborations. In particular, the long-term preservation of almost all experimental data, raw and all related derived formats including calibration information, is one of the emerging requirements within the High Energy...
-
Dr Maxim Potekhin (Brookhaven National Laboratory) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
The Long-Baseline Neutrino Experiment (LBNE) will provide a unique, world-leading program for the exploration of key questions at the forefront of particle physics and astrophysics. Chief among its potential discoveries is that of matter-antimatter symmetry violation in neutrino flavor mixing. To achieve its ambitious physics objectives as a world-class facility, LBNE has been conceived around...
-
Bernardo Sotto-Maior Peralva (Juiz de Fora Federal University (BR)) | 02/09/2014, 08:00 | Data Analysis - Algorithms and Tools | Poster
The ATLAS Tile Calorimeter (TileCal) is the detector used in the reconstruction of hadrons, jets, muons and missing transverse energy from the proton-proton collisions at the Large Hadron Collider (LHC). It covers the central part of the ATLAS detector (|η|<1.6). The energy deposited by the particles is read out by approximately 5,000 cells, with double readout channels. The signal provided by...
-
Christopher Jung (KIT - Karlsruhe Institute of Technology (DE)) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
Modern data processing increasingly relies on data locality for performance and scalability, whereas the common HEP approaches aim for uniform resource pools with minimal locality, recently even across site boundaries. To combine advantages of both, the High Performance Data Analysis (HPDA) Tier 3 concept opportunistically establishes data locality via coordinated caches. In accordance...
-
Mr Batkovich Dmitry (St. Petersburg State University (RU)), Mikhail Kompaniets (St. Petersburg State University (RU)) | 02/09/2014, 08:00 | Computations in Theoretical Physics: Techniques and Methods | Poster
We present a set of tools for computations on Feynman diagrams. Various package modules implement:
- graph manipulation, serialization, symmetries and automorphisms
- calculators, which are used to calculate integrals by particular methods (analytical or numerical)
- UV-counterterms calculation using IR-rearrangement and the R* operation (minimal subtraction scheme)
The following...
-
David Abdurachmanov (Vilnius University (LT)) | 02/09/2014, 08:00 | Data Analysis - Algorithms and Tools | Poster
Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within the power density limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of...
-
Gordon Watts (University of Washington (US)) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
Modern high energy physics analysis is complex. It typically requires multiple passes over different datasets, and is often held together with a series of scripts and programs. For example, one has to first reweight the jet energy spectrum in Monte Carlo to match data before plots of any other jet related variable can be made. This requires a pass over the Monte Carlo and the Data to derive...
-
Mr Christian Glaser (RWTH Aachen University) | 02/09/2014, 08:00 | Computing Technology for Physics Research | Poster
The VISPA web framework opens a new way of collaborative work. All relevant software, data and computing resources are supplied on a common remote infrastructure. Access is provided through a web GUI, which has all the functionality needed for working conditions comparable to a personal computer. The analyses of colleagues can be reviewed and executed with just one click. Furthermore, code can be...