# ACAT 2011

5-9 September 2011
Europe/London timezone
Type: Parallel talk Session: Monday 05th - Data Analysis – Algorithms and Tools
Track: Track 2 : Data Analysis - Algorithms and Tools
Over a decade ago, the H1 Collaboration decided to embrace the object-oriented paradigm and completely redesign its data analysis model and data storage format. The event data model, based on the ROOT framework, consists of three layers - tracks and calorimeter clusters, identified particles and finally event summary data - with a singleton class providing unified access. This original sol ... More
Presented by Dr. Paul LAYCOCK on 5 Sep 2011 at 14:50
Type: Poster Session: Poster session
Track: Track 1: Computing Technology for Physics Research
The LHCb experiment has been using the CMT build and configuration tool for its software since the first versions, mainly because of its multi-platform build support and its powerful configuration management functionality. Still, CMT has some limitations in terms of build performance and the increased complexity added to the tool to cope with new use cases added over time. Therefore, we have been ... More
Presented by Marco CLEMENCIC
Type: Parallel talk Session: Thursday 08th - Data Analysis – Algorithms and Tools
Track: Track 2 : Data Analysis - Algorithms and Tools
A frequently faced task in experimental physics is to measure the probability distribution of some quantity. Often this quantity to be measured is smeared by a non-ideal detector response or by some physical process. The procedure of removing this smearing effect from the measured distribution is called unfolding, and is a delicate problem in signal processing. Due to the numerical ill-posedness of ... More
Presented by Andras LASZLO on 8 Sep 2011 at 17:00
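The ill-posedness mentioned in the abstract is why naive inversion of the response matrix fails: noise is amplified into large oscillations. As a generic illustration (not the specific method of this talk), here is a minimal Tikhonov-regularized unfolding sketch in NumPy; the 3-bin response matrix and the regularization strength `tau` are made-up toy values:

```python
import numpy as np

def unfold_tikhonov(measured, response, tau=1e-6):
    """Tikhonov-regularized unfolding: solve the regularized normal
    equations (R^T R + tau*I) x = R^T y instead of inverting R directly,
    which would amplify statistical noise (the problem is ill-posed)."""
    R = np.asarray(response, dtype=float)
    y = np.asarray(measured, dtype=float)
    A = R.T @ R + tau * np.eye(R.shape[1])
    return np.linalg.solve(A, R.T @ y)

# Toy example: a 3-bin true spectrum smeared by bin-to-bin migration.
true_spec = np.array([100.0, 200.0, 50.0])
R = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.8, 0.2],
              [0.0, 0.1, 0.8]])
measured = R @ true_spec
unfolded = unfold_tikhonov(measured, R)
print(np.round(unfolded, 2))   # recovers the true spectrum in this noiseless toy
```

With real, statistically fluctuating data the choice of `tau` trades bias against variance, which is exactly where unfolding becomes delicate.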
Type: Poster Session: Poster session
Track: Track 1: Computing Technology for Physics Research
ATLAS is the largest of several experiments built along the Large Hadron Collider at CERN, Geneva. Its aim is to measure particle production when protons collide at a very high center of mass energy, thus reproducing the behavior of matter a few instants after the Big Bang. The detecting techniques used for this purpose are very sophisticated and the amount of digitized data created by the sensing ... More
Presented by Mr. Alexandru Dan SICOE
Type: Parallel talk Session: Tuesday 06th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
Preserving data from past experiments and preserving the ability to perform analysis with old data is of growing importance in many domains of science, including High Energy Physics (HEP). A study group on this issue, DPHEP, has been established in this field to provide guidelines and a structure for international collaboration on data preservation projects in HEP. This contribution presents ... More
Presented by Yves KEMP on 6 Sep 2011 at 14:25
Presented by Dr. Denis PERRET-GALLIX on 9 Sep 2011 at 11:30
Type: Poster Session: Poster session
Track: Track 1: Computing Technology for Physics Research
This paper describes a new approach to the visualization of stored information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks that are constantly monit ... More
Presented by Mr. Adam HARWOOD, Mr. Luca MAGNONI
Type: Poster Session: Poster session
Track: Track 1: Computing Technology for Physics Research
We present a novel tool for managing data processing on grid resources. The tool provides a graphical user interface that offers new ATLAS users a quick and gentle start with computing, using a library of applications built up by previous users.
Presented by Dr. Frederik ORELLANA
Type: Parallel talk Session: Tuesday 06th - Data Analysis – Algorithms and Tools
Track: Track 2 : Data Analysis - Algorithms and Tools
Title: Advanced event reweighting for MVA training. Multivariate discrimination techniques, such as Neural Networks, are key ingredients to modern data analysis and play an important role in high energy physics. They are usually trained on simulated Monte Carlo (MC) samples to discriminate signal from background and are then applied to data. This has in general some side effects which we addr ... More
Presented by Daniel MARTSCHEI on 6 Sep 2011 at 15:15
Type: Parallel talk Session: Monday 05th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
ATLAS has recorded almost 5PB of RAW data since the LHC started running at the end of 2009. Many more derived data products and complementary simulation data have also been produced by the collaboration and, in total, 55PB is currently stored in the Worldwide LHC Computing Grid by ATLAS. All of this data is managed by the ATLAS Distributed Data Management system, called Don Quixote 2 (DQ2). ... More
Presented by Dr. Graeme Andrew STEWART on 5 Sep 2011 at 14:50
Type: Parallel talk Session: Monday 05th - Data Analysis – Algorithms and Tools
Track: Track 2 : Data Analysis - Algorithms and Tools
ATLAS is a multipurpose experiment that records the LHC collisions. In order to reconstruct the trajectories of charged particles, ATLAS is equipped with a tracking system built using distinct technologies: silicon planar sensors (both pixels and microstrips) and drift-tubes (the Inner Detector). The tracking system is embedded in a 2 T solenoidal field. In order to reach the track parameter accura ... More
Presented by Mr. Jike WANG on 5 Sep 2011 at 14:00
Type: Parallel talk Session: Thursday 08th - Data Analysis – Algorithms and Tools
Track: Track 2 : Data Analysis - Algorithms and Tools
The Barrel Hadronic calorimeter of ATLAS (Tilecal) is a detector used in the reconstruction of hadrons, jets, muons and missing transverse energy from the proton-proton collisions at the Large Hadron Collider (LHC). It comprises 10,000 channels in four readout partitions and each calorimeter cell is made of two readout channels for redundancy. The energy deposited by the particles produced in the ... More
Presented by Mr. Peralva SOTTO-MAIOR on 8 Sep 2011 at 16:10
Type: Parallel talk Session: Thursday 08th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
Traditional relational databases have not always been well matched to the needs of data-intensive sciences, but efforts are underway within the database community to attempt to address many of the requirements of large-scale scientific data management. One such effort is the open-source project SciDB. Since its earliest incarnations, SciDB has been designed for scalability in parallel and dis ... More
Presented by Dr. David MALON on 8 Sep 2011 at 14:25
Type: Parallel talk Session: Thursday 08th - Data Analysis – Algorithms and Tools
Track: Track 2 : Data Analysis - Algorithms and Tools
Adaptive Metropolis (AM) is a powerful recent algorithmic tool in numerical Bayesian data analysis. AM builds on a well-known Markov Chain Monte Carlo (MCMC) algorithm but optimizes the rate of convergence to the target distribution by automatically tuning the design parameters of the algorithm on the fly. In our data analysis problem of counting muons in the water Cherenkov signal of the surface ... More
Presented by Mr. Balázs KÉGL on 8 Sep 2011 at 14:50
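The on-the-fly tuning the abstract describes, adapting the proposal covariance from the chain history, can be sketched in a few lines. This is a generic Adaptive Metropolis toy in the spirit of Haario et al., not the Auger muon-counting code; the 2-D Gaussian target, step counts, and scaling constants are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def log_target(x):
    # Stand-in for the real posterior: a standard 2-D Gaussian.
    return -0.5 * np.dot(x, x)

def adaptive_metropolis(log_p, x0, n_steps=5000, adapt_after=500):
    d = len(x0)
    sd = 2.4 ** 2 / d           # classic scaling factor for Gaussian targets
    eps = 1e-6                  # keeps the proposal covariance non-singular
    chain = [np.array(x0, dtype=float)]
    cov = np.eye(d)
    logp = log_p(chain[-1])
    for i in range(n_steps):
        if i >= adapt_after:
            # The adaptive step: tune the proposal from the chain so far.
            cov = sd * np.cov(np.array(chain).T) + sd * eps * np.eye(d)
        prop = rng.multivariate_normal(chain[-1], cov)
        logp_prop = log_p(prop)
        if np.log(rng.random()) < logp_prop - logp:   # Metropolis accept/reject
            chain.append(prop)
            logp = logp_prop
        else:
            chain.append(chain[-1].copy())
    return np.array(chain)

chain = adaptive_metropolis(log_target, [3.0, -3.0])
print(chain[1000:].mean(axis=0))   # should be close to (0, 0) after burn-in
```

The adaptation breaks the Markov property, but under "diminishing adaptation" conditions the chain still converges to the target; that is what makes AM safe to use in practice.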
Type: Parallel talk Session: Monday 05th - Computations in Theoretical Physics
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
An analytical calculation of a non-planar 2-loop box diagram is presented. This diagram appears in the computation of higher order corrections to top-quark pair production and contains one internal massive line. The corresponding integrals are solved with differential equation and Mellin-Barnes techniques.
Presented by Mr. Andreas VON MANTEUFFEL on 5 Sep 2011 at 15:15
Type: Parallel talk Session: Monday 05th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
With the Job Execution Monitor, a user-centric job monitoring software developed at the University of Wuppertal and integrated into the Pilot-based "PanDA" job brokerage system of the WLCG, job progress and grid worker node health can be supervised in real time. Imminent error conditions can thus be detected early by the submitter and countermeasures taken. Grid site admins can access aggregated ... More
Presented by Dr. Tim DOS SANTOS on 5 Sep 2011 at 16:55
Type: Parallel talk Session: Thursday 08th - Data Analysis – Algorithms and Tools
Track: Track 2 : Data Analysis - Algorithms and Tools
We derive a kinematic variable that is sensitive to the mass of the Standard Model Higgs boson (M_H) in the H->WW*->l l nu nu-bar channel using a symbolic regression method. Explicit mass reconstruction is not possible in this channel due to the presence of two neutrinos which escape detection. The mass determination problem is that of finding a mass-sensitive function that depends on the measured obser ... More
Presented by Su Yong CHOI on 8 Sep 2011 at 14:00
Type: Poster Session: Poster session
Track: Track 1: Computing Technology for Physics Research
A crucial component of the CMS Software is the reconstruction, which translates the signals coming from the detector's readout electronics into concrete physics objects such as leptons, photons and jets. Given its relevance for all physics analyses, the behaviour and quality of the reconstruction code must be carefully monitored. In particular, the compatibility of its outputs between subsequent r ... More
Presented by Dr. Danilo PIPARO
Type: Parallel talk Session: Thursday 08th - Computations in Theoretical Physics
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
A program package will be presented which aims at the automated calculation of one-loop amplitudes for multi-particle processes. The program offers the possibility to optionally use either unitarity cuts or traditional tensor reduction of Feynman diagrams, or a combination of both. It can be used to calculate one-loop corrections to both QCD and electro-weak theory. Beyond the Standard Mod ... More
Presented by Gudrun HEINRICH on 8 Sep 2011 at 16:10
Type: Plenary talk Session: Monday 05th - Morning session
Track: Track 1: Computing Technology for Physics Research
Infrastructure-as-a-Service (IaaS) cloud computing is revolutionizing the way we acquire and manage computational and storage resources: by allowing on-demand resource leases and supporting user control over those resources it enables us to treat resource acquisition as an operational consideration rather than capital investment. The emergence of this new model raises many questions, in particular ... More
Presented by Dr. Kate KEAHEY on 5 Sep 2011 at 11:10
Type: Parallel talk Session: Tuesday 06th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
Current HENP libraries and frameworks were written before multicore systems became widely deployed and used. From this environment, a 'single-thread' processing model naturally emerged but the implicit assumptions it encouraged are greatly impairing our ability to scale in a multicore/manycore world. While parallel programming - still in an intensive phase of R&D despite the 30+ years of ... More
Presented by Dr. Sebastien BINET on 6 Sep 2011 at 15:15
Type: Parallel talk Session: Tuesday 06th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
In-line holography has recently made the transition from silver-halide based recording media, with laser reconstruction, to recording with large-area pixel detectors and computer-based reconstruction. This form of holographic imaging is used for small particulates, such as cloud or fuel droplets, marine plankton and alluvial sediments, and enables a true 3D object field to be recorded at high reso ... More
Presented by Prof. Peter R HOBSON on 6 Sep 2011 at 16:35
Type: Poster Session: Poster session
Track: Track 2 : Data Analysis - Algorithms and Tools
High Energy Physics data sets are often characterized by a huge number of events. Therefore, it is extremely important to use statistical packages able to efficiently analyze these unprecedented amounts of data. We compare the performance of the statistical packages StatPatternRecognition (SPR) and Toolkit for MultiVariate Analysis (TMVA). We focus on how CPU time and memory usage of the learning ... More
Presented by Giulio PALOMBO
Type: Plenary talk Session: Thursday 08th - Morning session
Track: Track 1: Computing Technology for Physics Research
Constant changes in computational infrastructure, like the current interest in Clouds, impose conditions on the design of applications. We must make sure that our analysis infrastructure, including source code and supporting tools, is ready for the on-demand computing (ODC) era. This presentation is about a new analysis concept, which is driven by users' needs, completely disentangled from the c ... More
Presented by Dr. Anar MANAFOV on 8 Sep 2011 at 09:40
Type: Parallel talk Session: Thursday 08th - Data Analysis – Algorithms and Tools
Track: Track 2 : Data Analysis - Algorithms and Tools
We present a new approach to simulate Beyond-Standard-Model (BSM) processes which are defined by multiple parameters. In contrast to the traditional grid-scan method where a large number of events are simulated at each point of a sparse grid in the parameter space, this new approach simulates only a few events at each of a selected number of points distributed randomly over the whole parameter spa ... More
Presented by Jiahang ZHONG on 8 Sep 2011 at 14:25
Type: Poster Session: Poster session
Track: Track 1: Computing Technology for Physics Research
Many modern applications need large amounts of computing resources both for calculations and data storage. These resources are typically found in the computing grids but also in commercial clouds and computing clusters. Various user communities have access to different types of resources. The DIRAC project provides a solution for an easy aggregation of heterogeneous computing resources for a given ... More
Presented by Dr. Andrei TSAREGORODTSEV
Type: Parallel talk Session: Tuesday 06th - Computations in Theoretical Physics
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
The method of calculating loop integrals based on the dimensional recurrence relation and the analyticity of the integrals as functions of $d$ is reviewed. Special emphasis is placed on the possibility to automate many steps of the method. New results obtained with this method are presented.
Presented by Dr. Roman LEE on 6 Sep 2011 at 14:25
Type: Poster Session: Poster session
Track: Track 2 : Data Analysis - Algorithms and Tools
Based on the ROOT TEve/TGeo classes and the standard Linear Collider data format (LCIO), a general linear collider event display has been developed. It supports the latest detector models for both the International Linear Collider (ILC) and Compact Linear Collider (CLIC) as well as test beam prototypes. It can be used to visualise various information at the generation, simulation and reconstructi ... More
Presented by Dr. Manqi RUAN
Type: Poster Session: Poster session
Track: Track 1: Computing Technology for Physics Research
Data from high-energy physics experiments are collected with significant financial and human effort and are mostly unique. However, until recently no coherent strategy existed for data preservation and re-use, and many important and complex data sets have simply been lost. While the current focus is on the LHC at CERN, in the current period several important and unique experimental programs a ... More
Presented by Dr. Roman KOGLER
Type: Poster Session: Poster session
Track: Track 1: Computing Technology for Physics Research
For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking PanDA usage has ramped up steadily, typically exceeding 500k completed jobs per day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in t ... More
Presented by Dr. Maxim POTEKHIN
Type: Parallel talk Session: Monday 05th - Computations in Theoretical Physics
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
Different forms of the generalized Crewther relation in QED and QCD are discussed. They follow from applying the OPE method to the AVV triangle amplitude in the limit when conformal symmetry is valid and when it is broken by the renormalization procedure in the various variants of the MS scheme, including the 't Hooft prescription for defining the beta-function. Special features of the consequence ... More
Presented by Dr. Andrei KATAEV on 5 Sep 2011 at 17:00
Type: Parallel talk Session: Thursday 08th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
Following a previous publication, this study investigates the impact of the regional affiliations of centres on the organisation of collaboration within the ALICE Distributed Computing infrastructure, using social network methods. A self-administered questionnaire was sent to all centre managers about support, email interactions and desired collaborations in the infrastructure. Several add ... More
Presented by Mr. Federico CARMINATI on 8 Sep 2011 at 16:35
Type: Parallel talk Session: Monday 05th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
The conversion of existing computing centres to cloud facilities is becoming popular, also because it makes more optimal use of existing resources. Inside a medium to large cloud facility, many specific virtual computing facilities might compete for the same resources, allocated elastically based on their usage and purpose, i.e. by expanding or reducing the resources allocated to currently running VMs, or by tu ... More
Presented by Dr. Dario BERZANO on 5 Sep 2011 at 14:00
Type: Parallel talk Session: Thursday 08th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
The future of high power computing is evolving towards the efficient use of highly parallel computing environments. One class of devices designed with parallelism in mind is the Graphics Processing Unit (GPU): a highly parallel, multithreaded computing device. One application where the use of massive parallelism comes naturally is Monte-Carlo simulations w ... More
Presented by Dr. Federico CARMINATI on 8 Sep 2011 at 17:00
Type: Parallel talk Session: Thursday 08th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
In this work we present parallel implementations of an algorithm used to evaluate the likelihood function in data analysis. The implementations run on a CPU, on a GPU, and on both devices cooperatively (hybrid). The execution of the algorithm can therefore take full advantage of users' commodity systems, like desktops and laptops, using all the hardware at their disposal. CPU and GP ... More
Presented by Mr. Yngve SNEEN LINDAL on 8 Sep 2011 at 16:10
Type: Plenary talk Session: Wednesday 07th - Morning session
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
We suppose that a solution to a given Feynman integral is known in terms of multiple polylogarithms, and address the question of how to find another solution which is equivalent to the former, but with a simpler analytic structure.
Presented by Dr. Vittorio DEL DUCA on 7 Sep 2011 at 10:50
Type: Parallel talk Session: Monday 05th - Computations in Theoretical Physics
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
The talk presents the new features in FormCalc 7 (and some in LoopTools), such as analytic tensor reduction, inclusion of the OPP method, and the interface to FeynHiggs.
Presented by Thomas HAHN on 5 Sep 2011 at 17:25
Type: Parallel talk Session: Monday 05th - Data Analysis – Algorithms and Tools
Track: Track 2 : Data Analysis - Algorithms and Tools
The concept of "particle flow" has been developed to optimise jet energy resolution by best separating the different components of hadronic jets. Highly granular calorimetry is mandatory and provides an unprecedented level of detail in the reconstruction of showers. This enables new approaches to shower analysis. Here the measurement and use of the showers' fractal dimension is described. ... More
Presented by Dr. Manqi RUAN on 5 Sep 2011 at 16:30
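A fractal dimension of a hit pattern can be estimated by box counting: count how many cells of side s contain at least one hit, and fit the slope of log N versus log(1/s). This is a generic sketch of the idea, not the authors' algorithm; the random point clouds below merely stand in for calorimeter hits:

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate the box-counting dimension of a 2-D point cloud.

    The number of occupied boxes of side s scales as s^(-D), so D is the
    slope of log N(s) against log(1/s)."""
    counts = []
    for s in scales:
        # Assign each point to its box by integer division of coordinates.
        boxes = set(map(tuple, np.floor(np.asarray(points) / s).astype(int)))
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
    return slope

# Sanity checks: points on a line have dimension ~1, a filled square ~2.
rng = np.random.default_rng(0)
t = rng.random(20000)
line = np.column_stack([t, t])
square = rng.random((20000, 2))
scales = [0.02, 0.05, 0.1, 0.2]
d_line = box_counting_dimension(line, scales)
d_square = box_counting_dimension(square, scales)
print(round(d_line, 2), round(d_square, 2))   # ~1 and ~2
```

For shower analysis the same counting would be done over calorimeter cells at several granularities, with non-integer dimensions distinguishing shower topologies.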
Type: Parallel talk Session: Tuesday 06th - Data Analysis – Algorithms and Tools
Track: Track 2 : Data Analysis - Algorithms and Tools
Full Reconstruction is an important analysis technique utilized at B factories, where B mesons are produced in e+e- -> Y(4S) -> BBbar processes. By reconstructing one of the two B mesons in an event fully in a hadronic final state, the properties of the other B meson are determined using momentum conservation. This allows one to measure, or search for, rare B meson decays involving one ... More
Presented by Daniel ZANDER on 6 Sep 2011 at 14:50
Type: Parallel talk Session: Monday 05th - Data Analysis – Algorithms and Tools
Track: Track 2 : Data Analysis - Algorithms and Tools
We present the concept, the implementation and the performance of a new software framework developed to provide a flexible and user-friendly environment for advanced analysis and processing of digital signals. The software has been designed to handle the full data analysis flow of GERDA, a low-background experiment which searches for the neutrinoless double beta decay of Ge-76 by using high-purity ... More
Presented by Mr. Matteo AGOSTINI on 5 Sep 2011 at 16:05
Type: Parallel talk Session: Thursday 08th - Computations in Theoretical Physics
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
Octave is one of the most widely used open source tools for numerical analysis and linear algebra. Our project aims to improve Octave by introducing support for GPU computing, in order to speed up some linear algebra operations. The core of our work is a C library that executes on the GPU some BLAS operations concerning vector-vector, vector-matrix and matrix-matrix functions. OpenCL functions are used to ... More
Presented by Dr. Attilio SANTOCCHIA on 8 Sep 2011 at 16:35
Type: Parallel talk Session: Tuesday 06th - Data Analysis – Algorithms and Tools
Track: Track 2 : Data Analysis - Algorithms and Tools
Background properties in experimental particle physics are typically estimated from large collections of events. This usually provides precise knowledge of average background distributions, but inevitably hides fluctuations. To overcome this limitation, an approach based on statistical mixture model decomposition is presented. Events are treated as heterogeneous populations comprising particles or ... More
Presented by Dr. Federico COLECCHIA on 6 Sep 2011 at 16:10
Type: Parallel talk Session: Tuesday 06th - Computations in Theoretical Physics
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
The differential reduction algorithm allows one to shift the parameters of any Horn-type hypergeometric function by arbitrary integers. The mathematical part of the algorithm was presented at ACAT08 by M. Kalmykov [6]. We will describe the status of the project and present a new version of the MATHEMATICA-based package, including several important hype ... More
Presented by Dr. Vladimir BYTEV on 6 Sep 2011 at 14:00
Type: Parallel talk Session: Monday 05th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
As cloud middleware (and cloud providers) have become more robust, various experiments with experience in Grid submission have begun to investigate taking previously Grid-enabled applications and making them compatible with Cloud Computing, which allows the available hardware resources to scale dynamically, providing access to peak-load handling capabili ... More
Presented by Andrew Malone MELO on 5 Sep 2011 at 14:25
Type: Poster Session: Poster session
Track: Track 1: Computing Technology for Physics Research
The proliferation of tools for monitoring both activities and infrastructure, together with the pressing need for prompt reaction to problems impacting data taking, data reconstruction, data reprocessing and user analysis, led to the need to better organize the huge amount of information available. The monitoring system for LHCb Grid Computing relies on many heterogeneous and inde ... More
Presented by Dr. Federico STAGNI
Type: Parallel talk Session: Thursday 08th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
Coverity's static analysis tool has been run on most of the LHC experiments' frameworks, as well as several of the packages provided to them (e.g. ROOT, Geant4). I will present how static analysis works and why it is complementary to dynamic checkers like Valgrind or test suites; typical issues discovered by static analysis; and lessons learned.
Presented by Axel NAUMANN on 8 Sep 2011 at 14:50
Type: Poster Session: Poster session
Track: Track 1: Computing Technology for Physics Research
The presentation will describe an interface within the ALICE analysis framework that allows transparent usage of the experiment's distributed resources. This analysis plug-in makes it possible to configure back-end specific parameters from a single interface and to run the same custom user analysis unchanged in many computing environments, from local workstations to PROOF clusters or GRID res ... More
Presented by Andrei GHEATA
Type: Poster Session: Poster session
Track: Track 1: Computing Technology for Physics Research
The Pierre Auger Collaboration studies ultra high energy cosmic rays which induce extensive air showers when they interact at the top of the atmosphere. The generation of simulated showers involves tracking billions of particles as the shower develops through the atmosphere. The CPU time consumption of the complete simulation of a single shower is enormous but there are techniques to reduce it wit ... More
Presented by Julio LOZANO-BAHILO
Type: Parallel talk Session: Tuesday 06th - Data Analysis – Algorithms and Tools
Track: Track 2 : Data Analysis - Algorithms and Tools
Neural networks (NN) are universal approximators. Therefore, in principle, it should be possible to use them to model any reasonably smooth probability density such as the probability density of fake missing transverse energy (MET). The modeling of fake MET is an important experimental issue in events such as $Z \rightarrow l^+ l^-$+jets, which is an important background in high-mass Higgs search ... More
Presented by Dr. Silvia TENTINDO on 6 Sep 2011 at 17:00
Type: Plenary talk Session: Tuesday 06th - Morning session
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
I discuss recently developed formulations of lattice fermions possessing near-exact chiral symmetry. These are particularly appropriate for the simulation of complex weak matrix elements. I also discuss the state of the art of supercomputing for lattice simulation.
Presented by Peter BOYLE on 6 Sep 2011 at 10:50
Type: Parallel talk Session: Tuesday 06th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
Monitoring the Grid at local, national, and global levels (the GridPP Collaboration). The Worldwide LHC Computing Grid is the computing infrastructure set up to process the experimental data coming from the experiments at the Large Hadron Collider located at CERN. GridPP is the project that provides the UK part of this infrastructure across 19 sites in the UK. To ensure that these large computa ... More
Presented by Mr. Peter GRONBECH on 6 Sep 2011 at 14:00
Type: Parallel talk Session: Thursday 08th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
Now that the LHC has started, the LHC experiments crave stability in ROOT. However, progress in computing technology is not stopping, and keeping ROOT up to date and compatible with new technologies requires a lot of work. In this presentation we will show what we are currently working on and which new technologies we try to exploit.
Presented by Fons RADEMAKERS on 8 Sep 2011 at 15:15
Type: Parallel talk Session: Thursday 08th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
The shared memory architecture of multicore CPUs provides HENP developers with the opportunity to reduce the memory footprint of their applications by sharing memory pages between the cores in a processor. ATLAS pioneered the multi-process approach to parallelizing HENP applications. Using Linux fork() and the Copy On Write mechanism we implemented a simple event task farm which allows us to share up ... More
Presented by Vakhtang TSULAIA on 8 Sep 2011 at 14:00
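The fork()-plus-copy-on-write pattern behind such an event task farm can be illustrated with a small sketch. This is not the ATLAS code: the "conditions data" payload, the worker function, and the event range are invented stand-ins. On Linux, children created by fork() inherit the parent's memory pages copy-on-write, so a large read-only payload loaded before forking is shared rather than duplicated:

```python
import multiprocessing as mp

# Large read-only payload loaded BEFORE the workers are created. With the
# 'fork' start method (the Linux default) each child inherits this list via
# copy-on-write pages, so physical memory is shared until someone writes.
conditions_data = list(range(1_000_000))

def process_event(evt_id):
    # Stand-in for per-event reconstruction reading the shared data.
    return conditions_data[evt_id % len(conditions_data)] + evt_id

ctx = mp.get_context("fork")      # be explicit about fork()-based workers
with ctx.Pool(4) as pool:         # a 4-process event task farm
    results = pool.map(process_event, range(100))
print(sum(results))               # 9900
```

The saving only holds as long as the pages stay read-only; touching the shared structures in a child triggers a private copy of the affected pages, which is why frameworks try to finish initialization before forking.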
Type: Parallel talk Session: Monday 05th - Computations in Theoretical Physics
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
Most calculations of quantum corrections in supersymmetric theories are made with dimensional reduction, which is a modification of dimensional regularization. However, it is well known that dimensional reduction is not self-consistent. A consistent regularization which does not break supersymmetry is the higher covariant derivative regularization. However, the integrals obtain ... More
Presented by Dr. Konstantin STEPANYANTZ on 5 Sep 2011 at 14:25
Type: Parallel talk Session: Monday 05th - Data Analysis – Algorithms and Tools
Track: Track 2 : Data Analysis - Algorithms and Tools
The Monte-Carlo technique enables one to generate random samples from distributions with known characteristics and helps to make probability-based inferences about the underlying physical processes. Fast and efficient Monte-Carlo particle transport code, particularly for high energy nuclear and particle physics experiments, has become an important tool, starting from the design and fabrication of det ... More
Presented by Dr. Federico CARMINATI on 5 Sep 2011 at 15:15
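The basic sampling step that transport codes are built on can be shown with inverse-transform sampling of the exponential free-path distribution. This is a textbook illustration, not the code of the talk; the mean free path of 10 (arbitrary units) and the sample size are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_path_length(mean_free_path, n):
    """Inverse-transform sampling: the distance to the next interaction
    follows p(x) = exp(-x/lam)/lam, whose CDF inverts to
    x = -lam * ln(1 - u) with u uniform in [0, 1)."""
    u = rng.random(n)
    return -mean_free_path * np.log1p(-u)   # log1p(-u) = ln(1 - u), stably

steps = sample_path_length(10.0, 200_000)
print(round(steps.mean(), 2))   # close to the mean free path of 10
```

A transport loop repeats this draw for every particle step, which is why fast, vectorizable sampling of known distributions matters so much for simulation throughput.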
Type: Plenary talk Session: Tuesday 06th - Morning session
Track: Track 2 : Data Analysis - Algorithms and Tools
For very sound reasons, including the central limit theorem and mathematical tractability, classical multivariate statistics was heavily based on the multivariate normal distribution. However, the development of powerful computers, as well as increasing numbers of very large data sets, has led to a dramatic blossoming of research in this area, and the development of entirely new tools for multiva ... More
Presented by David HAND on 6 Sep 2011 at 09:40
Type: Plenary talk Session: Tuesday 06th - Morning session
Track: Track 1: Computing Technology for Physics Research
With the introduction of clustered storage, combining a set of hosts into a single storage system, a very successful standard data access protocol, NFS 2/3, became obsolete. One of the reasons was that NFS 2/3 assumes the name service part of the protocol is served from the same host as the actual data, which is of course no longer true for clustered systems. As a result, high performance storage ... More
Presented by Patrick FUHRMANN on 6 Sep 2011 at 09:00
Type: Plenary talk Session: Thursday 08th - Morning session
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
It has become customary to think of higher order calculations as analytic, in the sense that the result should be presented in the form of known functions or constants. If such a result is obtained, numerical evaluation for practical applications or expansion in asymptotic regimes should not pose any problem. There are, however, many problems of interest, where the analytic structure, due to the n ... More
Presented by Prof. Michal CZAKON on 8 Sep 2011 at 10:50
Type: Poster Session: Poster session
Track: Track 2 : Data Analysis - Algorithms and Tools
When monitoring complex experiments, comparison is often made between regularly acquired histograms of data and reference histograms which represent the ideal state of the equipment. With the larger HEP experiments now ramping up, there is a need for automation of this task since the volume of comparisons would overwhelm human operators. However, the two-dimensional histogram comparison tools curr ... More
Presented by Dr. Ivan D REID
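A minimal sketch of the kind of bin-by-bin statistic such automated comparison tools build on, here the standard two-sample chi-square for two histograms with equal binning (the function name and flattened-list layout are illustrative, not taken from the contribution):

```python
# Sketch (not the contribution's code): two-sample chi-square comparison of
# two 2-D histograms, given as nested lists with identical binning.
def chi2_compare(h1, h2):
    """Return (chi2, ndf); bins empty in both histograms are skipped,
    and one degree of freedom is removed for the normalization."""
    n1 = sum(sum(row) for row in h1)  # total entries in each histogram
    n2 = sum(sum(row) for row in h2)
    chi2, nbins = 0.0, 0
    for row1, row2 in zip(h1, h2):
        for a, b in zip(row1, row2):
            if a + b == 0:
                continue  # no information in this bin
            # Two-sample statistic valid for unequal total counts
            chi2 += (n2 * a - n1 * b) ** 2 / (n1 * n2 * (a + b))
            nbins += 1
    return chi2, nbins - 1
```

Identical histograms give chi2 = 0; a large chi2 per degree of freedom flags a histogram for operator attention.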
Type: Parallel talk Session: Thursday 08th - Computations in Theoretical Physics
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
We present the publicly available program NGLUON, which allows the numerical evaluation of colour-ordered amplitudes at one-loop order in massless QCD, for an arbitrary number of gluons. We discuss in detail the speed as well as the numerical stability. In addition the package allows the evaluation of one-loop scattering amplitudes using ... More
Presented by Mr. Benedikt BIEDERMANN on 8 Sep 2011 at 14:00
Type: Parallel talk Session: Tuesday 06th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
In recent years, Cloud computing has become a very attractive “notion” and popular model for accessing distributed resources and has emerged as the next big trend after the so-called Grid computing approach. The onsite STAR computing resources amounting to about 3000 CPU slots have been extended by additional 1000 slots using opportunistic resources from pilot DOE/Magellan and DOE/Nimbus ... More
Presented by Dr. Jan BALEWSKI on 6 Sep 2011 at 17:00
Type: Parallel talk Session: Monday 05th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
The massive data processing in a multi-collaboration environment with geographically spread, diverse facilities will hardly be "fair" to users or use network bandwidth efficiently unless we address planning and reasoning related to data movement and placement. The need for coordinated data resource sharing and efficient plans solving the data transfer paradigm in a dynamic ... More
Presented by Mr. Michal ZEROLA on 5 Sep 2011 at 15:15
Type: Parallel talk Session: Thursday 08th - Computations in Theoretical Physics
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
A numerically stable analytic expression for a one-loop integral is one of the most important elements of accurate calculations of one-loop corrections to physical processes. It is known that these integrals are expressed in terms of some generalized classes of Gauss hypergeometric functions. Power series expansions, differential equations, contiguous relations and many other identities are known f ... More
Presented by Prof. Toshiaki KANEKO on 8 Sep 2011 at 15:15
Type: Parallel talk Session: Thursday 08th - Computations in Theoretical Physics
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
The algebraic tensor reduction of one-loop Feynman integrals with signed minors has been further developed. The C++ package PJFry by V. Yundin is now available for the reduction of 5-point one-loop tensor integrals up to rank 5. Special care is devoted to vanishing or small Gram determinants. Further, we derived extremely compact expressions for the contractions of the tensor integrals w ... More
Presented by Tord RIEMANN on 8 Sep 2011 at 14:50
Type: Parallel talk Session: Monday 05th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
We present an online measurement of the LHC beam parameters in ATLAS using the High Level Trigger (HLT). When a significant change is detected in the measured beamspot, it is distributed to the HLT. There, trigger algorithms such as b-tagging, which calculate impact parameters or decay lengths, benefit from a precise, up-to-date set of beamspot parameters. Additionally, online feedback is sent to the ... More
Presented by Emanuel Alexandre STRAUSS on 5 Sep 2011 at 17:20
Type: Parallel talk Session: Thursday 08 - Data Analysis – Algorithms and Tools
Track: Track 2 : Data Analysis - Algorithms and Tools
Electrons and photons are among the most important signatures in ATLAS. Their identification against the jet background by the online trigger system relies heavily on calorimetry information. The ATLAS online trigger comprises three cascaded levels, and the Ringer is an alternative set of algorithms that uses calorimetry information for electron detection at the second trigger level (L2). It is spli ... More
Presented by Mr. José Manoel DE SEIXAS on 8 Sep 2011 at 15:15
Type: Parallel talk Session: Tuesday 06th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
PROOF (Parallel ROOT Facility) is an extension of ROOT enabling interactive parallel analysis on clusters of computers or on a many-core machine. PROOF has been adopted and successfully utilized as one of the main analysis models by LHC experiments including ALICE and ATLAS. ALICE has seen a growing number of PROOF clusters around the world, CAF at CERN, SKAF in Slovakia and GSIAF at Darmstadt being the ma ... More
Presented by Dr. Sangsu RYU, Dr. Gerardo GANIS on 6 Sep 2011 at 17:25
Type: Parallel talk Session: Tuesday 06th - Computations in Theoretical Physics
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
New features of the symbolic algebra package Form 4 are discussed. Most importantly, these features include polynomial factorization and polynomial GCD computation. Examples of their use are shown. One of them is an exact version of Mincer which gives answers in terms of rational polynomials and 5 master integrals.
Presented by Jan KUIPERS on 6 Sep 2011 at 15:15
Type: Plenary talk Session: Wednesday 07th - Morning session
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
With the beginning of the experimental programs at the LHC, the need to describe multi-particle scattering events with high accuracy becomes more pressing. On the theoretical side, perturbative calculations at leading-order precision are not sufficient, so accounting for effects due to Next-to-Leading Order (NLO) corrections becomes mandatory. In the last few years we observed a ... More
Presented by Francesco TRAMONTANO on 7 Sep 2011 at 09:40
Type: Parallel talk Session: Thursday 08th - Computations in Theoretical Physics
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
We report our progress on the development of the Direct Computation Method (DCM), which is a fully numerical method for the computation of Feynman diagrams. Based on a combination of a numerical integration tool and a numerical extrapolation technique, all steps in the computation are carried out in a fully numerical way. The combined method is applicable to one-, two- and multi-loop diagra ... More
Presented by Dr. Fukuko YUASA on 8 Sep 2011 at 14:25
Type: Poster Session: Poster session
Track: Track 2 : Data Analysis - Algorithms and Tools
The ATLAS tau trigger system runs very challenging real time algorithms on commodity computers. Whilst in the second level trigger (L2) fast and specialized algorithms are used, in the third level trigger (Event Filter -EF-) sophisticated and detailed reconstruction algorithms run. The performance of both types of algorithms can be decoupled because they both start from the information provided by ... More
Type: Parallel talk Session: Tuesday 06th - Computations in Theoretical Physics
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
Reduze is a computer program for reducing Feynman integrals to master integrals employing the Gauss/Laporta algorithm. Reduze is written in C++ and uses the GiNaC library to perform simplifications of the algebraic prefactors in the system of equations. In this talk the new version, Reduze 2, is presented. The program supports fully parallelised computations with MPI and allows one to resume aborted ... More
Presented by Dr. Cedric STUDERUS on 6 Sep 2011 at 14:50
Type: Parallel talk Session: Monday 05th - Computations in Theoretical Physics
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
I apply commonly used regularization schemes to a multiloop calculation to examine the properties of the schemes at higher orders. I find complete consistency between the conventional dimensional regularization scheme and dimensional reduction, but I find that the four-dimensional helicity scheme produces incorrect results at next-to-next-to-leading order and singular results at next-to-next ... More
Presented by William KILGORE on 5 Sep 2011 at 14:50
Type: Parallel talk Session: Monday 05th - Computations in Theoretical Physics
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
We report results of a new regularization technique for infrared (IR) divergent loop integrals using dimensional regularization, where a positive regularization parameter epsilon (with dimension d = 4 + 2*epsilon) is introduced in the integrand to keep the integral from diverging as long as epsilon > 0. Based on an asymptotic expansion of the integral we construct a linear s ... More
Presented by Prof. Elise DE DONCKER on 5 Sep 2011 at 16:35
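The extrapolation idea can be illustrated with a sketch (assumed setup, not the authors' code): evaluate the regulated integral at several positive values of epsilon, model the results as a polynomial in epsilon by solving the Vandermonde linear system, and read off the constant term as the epsilon -> 0 limit.

```python
# Sketch of extrapolation to epsilon -> 0 (illustrative, stdlib only).
def extrapolate_to_zero(eps, vals):
    """Fit vals_i = sum_k c_k * eps_i**k exactly through the samples
    and return c_0, the estimated epsilon -> 0 limit."""
    n = len(eps)
    # Augmented Vandermonde matrix [1, e, e^2, ... | value]
    m = [[e ** k for k in range(n)] + [v] for e, v in zip(eps, vals)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    # Back substitution
    coeff = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = m[r][n] - sum(m[r][c] * coeff[c] for c in range(r + 1, n))
        coeff[r] = s / m[r][r]
    return coeff[0]
```

In practice the epsilon values form a decreasing (e.g. geometric) sequence and the fit is repeated with more terms until the extracted limit stabilizes.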
Type: Parallel talk Session: Tuesday 06th - Computations in Theoretical Physics
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
I present a method, elaborated within the NNPDF Collaboration, that allows the inclusion of the information contained in new datasets into an existing set of parton distribution functions without the need for refitting. The method exploits Bayesian inference in the space of PDF replicas, computing for each replica a chi-square with respect to the new dataset and an associated weight. These ... More
Presented by Mr. Francesco CERUTTI on 6 Sep 2011 at 16:35
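The weight assigned to each replica follows the Giele-Keller form used in the NNPDF reweighting literature, w_k proportional to chi_k^(n-1) * exp(-chi_k^2 / 2) with n the number of new data points. A sketch (the function name and the normalization to N_rep are illustrative conventions):

```python
import math

# Sketch of Bayesian reweighting of PDF replicas (Giele-Keller weights).
def replica_weights(chi2s, ndata):
    """chi2s: chi-square of each replica against the new dataset;
    ndata: number of new data points. Returns weights summing to N_rep."""
    # Work in log space to avoid overflow for large chi-square values
    logw = [((ndata - 1) / 2.0) * math.log(c) - c / 2.0 for c in chi2s]
    ref = max(logw)
    w = [math.exp(l - ref) for l in logw]
    norm = len(w) / sum(w)  # normalize so the weights sum to N_rep
    return [norm * x for x in w]
```

Reweighted observables are then weighted averages over replicas; a strongly peaked weight distribution signals that few replicas remain effective and a refit would be preferable.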
Type: Plenary talk Session: Thursday 08th - Morning session
Presented by Prof. David DE ROURE on 8 Sep 2011 at 11:30
Type: Parallel talk Session: Monday 05th - Computations in Theoretical Physics
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
Sector decomposition is a method to extract singularities from multi-dimensional polynomial parameter integrals in a universal way. Integrals of this type arise in perturbative higher order calculations in multi-loop integrals as well as in phase space integrals involving unresolved massless particles. The program 'SecDec' will be presented, which appli ... More
Presented by Jonathon CARTER on 5 Sep 2011 at 16:10
Type: Parallel talk Session: Tuesday 06th - Computations in Theoretical Physics
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
We will present a method to extract parton distribution functions from hard scattering processes based on an alternative type of neural networks, the Self-Organizing Maps (SOMs). Quantitative results including a detailed treatment of uncertainties will be presented within a Next to Leading Order analysis of both unpolarized and polarized inclusive deep inelastic scattering data. With a fully worki ... More
Presented by Prof. Simonetta LIUTI on 6 Sep 2011 at 17:00
Type: Parallel talk Session: Tuesday 06th - Data Analysis – Algorithms and Tools
Track: Track 2 : Data Analysis - Algorithms and Tools
Most classification algorithms used in high energy physics fall under the category of supervised machine learning. Such methods require a training set containing both signal and background events and are prone to classification errors should this training data be systematically inaccurate, for example due to the assumed MC model. To complement such model-dependent searches, we propose an algorithm ... More
Presented by Mr. Mikael KUUSELA on 6 Sep 2011 at 16:35
Type: Parallel talk Session: Tuesday 06th - Data Analysis – Algorithms and Tools
Track: Track 2 : Data Analysis - Algorithms and Tools
The toolkit for multivariate analysis, TMVA, provides a large set of advanced multivariate analysis techniques for signal/background classification and regression problems. These techniques are embedded in a framework capable of handling input data preprocessing and the evaluation of the results, thus providing a simple and convenient tool for multivariate techniques. The analysis techniques imple ... More
Presented by Eckhard VON TOERNE on 6 Sep 2011 at 14:00
Type: Parallel talk Session: Tuesday 06th - Computations in Theoretical Physics
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
We report on the current status of the development of parallel versions of the symbolic manipulation system FORM. Currently there are two parallel versions of FORM: one is TFORM, which is based on POSIX threads and runs on multicore machines; the other is ParFORM, which uses MPI and can run on computer clusters. By using these versions, most existing FORM programs can ... More
Presented by Takahiro UEDA on 6 Sep 2011 at 16:10
Type: Plenary talk Session: Wednesday 07th - Morning session
Track: Track 2 : Data Analysis - Algorithms and Tools
All fields of scientific research have experienced an explosion of data. Analyzing this data to extract unexpected patterns presents a computational challenge that requires new, advanced methods of analysis. DQC (Dynamic Quantum Clustering), invented by David Horn (Tel Aviv University), is a novel, interactive and highly visual approach to this problem. Studies are already underway at SLAC to ... More
Presented by Dr. Marvin WEINSTEIN on 7 Sep 2011 at 09:00
Presented by Nigel GLOVER on 9 Sep 2011 at 10:50
Presented by Dr. Jerome LAURET on 9 Sep 2011 at 09:00
Presented by Pushpalatha BHAT on 9 Sep 2011 at 09:40
Type: Parallel talk Session: Tuesday 06th - Data Analysis – Algorithms and Tools
Track: Track 2 : Data Analysis - Algorithms and Tools
Tau leptons will play an important role in the physics program at the LHC. They will be used in electroweak measurements and in detector related studies like the determination of the missing transverse energy scale, but also in searches for n ... More
Presented by Prof. Dugan O'NEIL on 6 Sep 2011 at 14:25
Type: Parallel talk Session: Monday 05th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
The Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment at CERN is the infrastructure responsible for filtering and transferring ATLAS experimental data from detectors to the mass storage system. It relies on a large, distributed computing environment, including thousands of computing nodes with thousands of applications running concurrently. In such a complex environment, informati ... More
Presented by Mr. Luca MAGNONI on 5 Sep 2011 at 16:30
Type: Parallel talk Session: Monday 05th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
EOS was designed to fulfill generic requirements on disk storage scalability and IO scheduling performance for LHC analysis use cases, following the strategy of decoupling disk and tape storage as individual storage systems. The project was set up in April 2010. Since October 2010 EOS has been evaluated by ATLAS as a disk-only storage pool at CERN for analysis use cases in the context of various WLCG de ... More
Presented by Mr. Andreas Joachim PETERS on 5 Sep 2011 at 16:05
Type: Parallel talk Session: Tuesday 06th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
The LHCb computing model was designed to support the LHCb physics program, taking into account LHCb specificities (event sizes, processing times, etc.). Within this model several key activities are defined, the most important of which are real data processing (reconstruction, stripping and streaming, group and user analysis), Monte-Carlo simulation and data replication. In this contribut ... More
Presented by Dr. Federico STAGNI, Dr. Philippe CHARPENTIER on 6 Sep 2011 at 14:50
Type: Parallel talk Session: Monday 05th - Data Analysis – Algorithms and Tools
Track: Track 2 : Data Analysis - Algorithms and Tools
The CMS all-silicon tracker consists of 16588 modules. In 2010 it was successfully aligned using tracks from cosmic rays and pp collisions, following the time-dependent movements of its innermost pixel layers. Ultimate local precision is now achieved by the determination of sensor curvatures, challenging the algorithms to determine about 200000 parameters. Remaining alignment uncertainties ar ... More
Presented by Gero FLUCKE on 5 Sep 2011 at 14:25
Type: Plenary talk Session: Thursday 08th - Morning session
Track: Track 2 : Data Analysis - Algorithms and Tools
Thanks to the large sequencing initiatives of the last 10 years we now have access to full genome sequences in digital form, in particular for laboratory species such as the mouse, whose genome is about 3.5 billion letters in size. Recent high-throughput technologies then allow one to probe the function of this genome in many different experimental conditions by sampling the genome at the rate of 2-3 bi ... More
Presented by Jacques ROUGEMONT on 8 Sep 2011 at 09:00
Type: Plenary talk Session: Monday 05th - Morning session
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
After a short introduction, sketching the structure of a typical calculation of higher-order quantum corrections, I will discuss a few examples illustrating ideas that were instrumental in obtaining some recent novel results. Attention will be given to the tools facilitating those techniques and the technical challenges. In particular, the talk will cover the progress in sector decomposition m ... More
Presented by Dr. Alexey PAK on 5 Sep 2011 at 11:50
Type: Parallel talk Session: Monday 05th - Computations in Theoretical Physics
Track: Track 3: Computations in Theoretical Physics - Techniques and Methods
A key feature of the minimal supersymmetric extension of the Standard Model (MSSM) is the existence of a light Higgs boson, whose mass is not a free parameter but an observable that can be predicted from the theory. Given that the LHC is able to measure the mass of a light Higgs with very good accuracy, a lot of effort has been put into a precise theoretical prediction. We present ... More
Presented by Dr. Philipp KANT on 5 Sep 2011 at 14:00
Type: Parallel talk Session: Tuesday 06th - Computing Technology for Physics Research
Track: Track 1: Computing Technology for Physics Research
The reconstruction and simulation of collision events is a major task in modern HEP experiments, involving several tens of thousands of standard CPUs. At the same time, graphics processors (GPUs) have become much more powerful and far outperform standard CPUs in floating-point operations thanks to their massively parallel approach. The usage of these GPUs could therefore s ... More
Presented by Dr. Christian SCHMITT on 6 Sep 2011 at 16:10
Type: Poster Session: Poster session
Track: Track 2 : Data Analysis - Algorithms and Tools
A sophisticated trigger system, capable of real-time track and vertex reconstruction, is in place in the ATLAS experiment, to reject most of the events containing uninteresting background collisions while preserving as much as possible the interesting physics signals. In this contribution we present the strategy adopted by the ATLAS collaboration for fast reconstruction of charged tracks and ... More
Presented by Dr. Andrea COCCARO
Type: Parallel talk Session: Thursday 08 - Data Analysis – Algorithms and Tools
Track: Track 2 : Data Analysis - Algorithms and Tools
A novel method to estimate probability density functions, suitable for multivariate analyses, will be presented. The implemented algorithm can work on relatively large samples, iteratively finding a non-parametric density function with adaptive kernels. With an increasing number of sample points the resulting function converges to the real probability density. Specifically, we discuss a classification ... More
Presented by Mr. Peter KOEVESARKI on 8 Sep 2011 at 16:35
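The adaptive-kernel idea can be illustrated with a one-dimensional sketch (hypothetical, following Silverman's classic adaptive-bandwidth recipe rather than the specific implementation presented in the talk): a fixed-bandwidth pilot estimate sets a per-point bandwidth that shrinks where the sample is dense and widens in the tails.

```python
import math

# Sketch of 1-D adaptive-kernel density estimation (illustrative only).
def adaptive_kde(sample, h0):
    """Return a density function built from Gaussian kernels whose
    bandwidths adapt to a pilot estimate with global bandwidth h0."""
    n = len(sample)
    def gauss(u):
        return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    def pilot(x):  # fixed-bandwidth pilot density
        return sum(gauss((x - xi) / h0) for xi in sample) / (n * h0)
    # Geometric mean of the pilot density over the sample points
    g = math.exp(sum(math.log(pilot(xi)) for xi in sample) / n)
    # Local bandwidths: wider where the pilot density is low
    h = [h0 * math.sqrt(g / pilot(xi)) for xi in sample]
    def density(x):
        return sum(gauss((x - xi) / hi) / hi
                   for xi, hi in zip(sample, h)) / n
    return density
```

The returned function integrates to one by construction; in a classifier, the ratio of such estimates for signal and background samples approximates the likelihood ratio.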
Type: Plenary talk Session: Tuesday 06th - Morning session
Track: Track 2 : Data Analysis - Algorithms and Tools
Multivariate datasets in astrophysics can be large, with the increasing volume of information now becoming available from a range of observations, from ground and Space, across the electromagnetic spectrum. The observations are in the form of raw images and/or spectra, and tables of derived quantities, obtained at multiple epochs in time. Large archives of images, spectra and catalogues are ... More
Presented by Dr. Somak RAYCHAUDHURY on 6 Sep 2011 at 11:30
Type: Parallel talk Session: Monday 05th - Data Analysis – Algorithms and Tools
Track: Track 2 : Data Analysis - Algorithms and Tools
Visual Physics Analysis (VISPA) is an analysis development environment with applications in high energy as well as astroparticle physics. VISPA provides graphical steering of the analysis flow, which is composed of self-written C++ and Python modules. The advances presented in this talk extend the scope from prototyping to the execution of analyses. A novel concept of analysis layers has been i ... More
Presented by Robert FISCHER on 5 Sep 2011 at 16:55
Presented by Prof. Geoff RODGERS on 5 Sep 2011 at 09:40
Type: Plenary talk Session: Monday 05th - Morning session
Track: Track 1: Computing Technology for Physics Research
The speaker will start by reviewing the dominant technologies chosen for the LHC Computing Grid and briefly discuss their suitability. He will then go on to look at technologies that have emerged since, but are not being seriously used. Some of these technologies are being or have been evaluated by the CERN openlab. In the last part of the talk the speaker will argue for the adoption of ... More
Presented by Sverre JARP on 5 Sep 2011 at 10:00