Prof.
Geoff Rodgers
(Brunel University)
05/09/2011, 09:40
Dr
Kate Keahey
(Argonne National Laboratory)
05/09/2011, 11:10
Infrastructure-as-a-Service (IaaS) cloud computing is revolutionizing the way we acquire and manage computational and storage resources: by allowing on-demand resource leases and supporting user control over those resources it enables us to treat resource acquisition as an operational consideration rather than capital investment. The emergence of this new model raises many questions, in...
Dr
Alexey Pak
(TTP KIT Karlsruhe)
05/09/2011, 11:50
Track 3: Computations in Theoretical Physics - Techniques and Methods
Plenary talk
After a short introduction, sketching the structure of a typical calculation of higher-order quantum corrections, I will discuss a few examples illustrating ideas that were instrumental in obtaining some recent novel results. Attention will be given to the tools facilitating those techniques and the technical challenges. In particular, the talk will cover the progress in sector ...
Mr
Jike Wang
(High Energy Group-Institute of Physics-Academia Sinica)
05/09/2011, 14:00
Track 2 : Data Analysis - Algorithms and Tools
Parallel talk
ATLAS is a multipurpose experiment that records the LHC collisions. In order to reconstruct the trajectories of charged particles, ATLAS is equipped with a tracking system (the Inner Detector) built using distinct technologies: silicon planar sensors (both pixels and microstrips) and drift tubes. The tracking system is embedded in a 2 T solenoidal field. In order to reach the track parameter...
Dr
Dario Berzano
(Sezione di Torino (INFN)-Universita e INFN)
05/09/2011, 14:00
Track 1: Computing Technology for Physics Research
Parallel talk
The conversion of existing computing centres to cloud facilities is becoming popular, also because it makes more optimal use of existing resources. Inside a medium to large cloud facility, many specific virtual computing facilities might compete for the same resources based on their usage and destination, elastically, i.e. by expanding or reducing the resources allocated to currently running VMs, or...
Dr
Philipp Kant
(Humboldt-Universität zu Berlin)
05/09/2011, 14:00
Track 3: Computations in Theoretical Physics - Techniques and Methods
Parallel talk
A key feature of the minimal supersymmetric extension of the Standard
Model (MSSM) is the existence of a light Higgs boson, the mass of
which is not a free parameter but an observable that can be predicted
from the theory. Given that the LHC is able to measure the mass of a
light Higgs with very good accuracy, a lot of effort has been put into
a precise theoretical prediction.
We...
Andrew Malone Melo
(Vanderbilt University)
05/09/2011, 14:25
Track 1: Computing Technology for Physics Research
Parallel talk
As cloud middleware (and cloud providers) have become more robust, various experiments with experience in Grid submission have begun to investigate taking previously Grid-enabled applications and making them compatible with cloud computing, which will allow dynamic scaling of the available hardware resources, providing access to peak-load handling...
Dr
Konstantin Stepanyantz
(Moscow State University)
05/09/2011, 14:25
Track 3: Computations in Theoretical Physics - Techniques and Methods
Parallel talk
Most calculations of quantum corrections in supersymmetric theories are performed with dimensional reduction, which is a modification of dimensional regularization. However, it is well known that dimensional reduction is not self-consistent. A consistent regularization that does not break supersymmetry is the higher covariant derivative regularization. However, the integrals...
Gero Flucke
(DESY (Hamburg))
05/09/2011, 14:25
Track 2 : Data Analysis - Algorithms and Tools
Parallel talk
The CMS all-silicon tracker consists of 16588 modules. In 2010 it was successfully aligned using tracks from cosmic rays and pp collisions, following the time-dependent movements of its innermost pixel layers. Ultimate local precision is now achieved by determining sensor curvatures, challenging the algorithms to determine about 200000 parameters. Remaining alignment...
Dr
Paul Laycock
(University of Liverpool)
05/09/2011, 14:50
Track 2 : Data Analysis - Algorithms and Tools
Parallel talk
Over a decade ago, the H1 Collaboration decided to embrace the
object-oriented paradigm and completely redesign its data analysis
model and data storage format. The event data model, based on the
ROOT framework, consists of three layers - tracks and calorimeter
clusters, identified particles and finally event summary data -
with a singleton class providing unified access. This...
Dr
Graeme Andrew Stewart
(CERN)
05/09/2011, 14:50
Track 1: Computing Technology for Physics Research
Parallel talk
ATLAS has recorded almost 5PB of RAW data since the LHC started
running at the end of 2009. Many more derived data products and
complementary simulation data have also been produced by the
collaboration and, in total, 55PB is currently stored in the Worldwide
LHC Computing Grid by ATLAS. All of this data is managed by the ATLAS
Distributed Data Management system, called Don Quixote 2...
William Kilgore
(Brookhaven National Lab)
05/09/2011, 14:50
Track 3: Computations in Theoretical Physics - Techniques and Methods
Parallel talk
I apply commonly used regularization schemes to a multiloop
calculation to examine the properties of the schemes at higher orders.
I find complete consistency between the conventional dimensional
regularization scheme and dimensional reduction, but I find that the
four-dimensional helicity scheme produces incorrect results at
next-to-next-to-leading order and singular results...
Mr
Andreas von Manteuffel
(University of Zurich)
05/09/2011, 15:15
Track 3: Computations in Theoretical Physics - Techniques and Methods
Parallel talk
An analytical calculation of a non-planar 2-loop box diagram is presented.
This diagram appears in the computation of higher order corrections to top-
quark pair production and contains one internal massive line. The
corresponding integrals are solved with differential equation and Mellin-Barnes
techniques.
Dr
Federico Carminati
(CERN)
05/09/2011, 15:15
Track 2 : Data Analysis - Algorithms and Tools
Parallel talk
The Monte Carlo technique enables one to generate random samples from distributions with known characteristics and helps to make probability-based inferences about the underlying physical processes. A fast and efficient Monte Carlo particle transport code, particularly for high energy nuclear and particle physics experiments, has become an important tool, starting from the design and fabrication of...
Mr
Michal Zerola
(Academy of Sciences, Czech Republic)
05/09/2011, 15:15
Track 1: Computing Technology for Physics Research
Parallel talk
Massive data processing in a multi-collaboration environment with geographically spread, diverse facilities will hardly be "fair" to users or use network bandwidth efficiently unless we address planning and reasoning related to data movement and placement. The need for coordinated data resource sharing and efficient plans solving the data transfer paradigm in a...
Mr
Matteo Agostini
(Munich Technical University)
05/09/2011, 16:05
Track 2 : Data Analysis - Algorithms and Tools
Parallel talk
We present the concept, the implementation and the performance of a new software framework developed to provide a flexible and user-friendly environment for advanced analysis and processing of digital signals. The software has been designed to handle the full data analysis flow of GERDA, a low-background experiment which searches for the neutrinoless double beta decay of Ge-76 by using...
Mr
Andreas Joachim Peters
(CERN)
05/09/2011, 16:05
Track 1: Computing Technology for Physics Research
Parallel talk
EOS was designed to fulfill generic requirements on disk storage scalability and IO scheduling performance for LHC analysis use cases following the strategy to decouple disk and tape storage as individual storage systems.
The project was set up in April 2010. Since October 2010 EOS has been evaluated by ATLAS as a disk-only storage pool at CERN for analysis use cases in the context of various...
Jonathon Carter
(University of Durham)
05/09/2011, 16:10
Track 3: Computations in Theoretical Physics - Techniques and Methods
Parallel talk
Sector decomposition is a method to extract singularities from
multi-dimensional polynomial parameter integrals in a universal way.
Integrals of this type arise in perturbative higher order calculations
in multi-loop integrals as well as
in phase space integrals involving unresolved massless particles.
The program 'SecDec' will be presented,
which...
Dr
Manqi Ruan
(Laboratoire Leprince-Ringuet (LLR)-Ecole Polytechnique)
05/09/2011, 16:30
Track 2 : Data Analysis - Algorithms and Tools
Parallel talk
The concept of "particle flow" has been developed to optimise jet energy resolution by best separating the different components of hadronic jets. Highly granular calorimetry is mandatory and provides an unprecedented level of detail in the reconstruction of showers. This enables new approaches to shower analysis. Here the measurement and use of showers' fractal dimension is described....
Mr
Luca Magnoni
(Conseil Europeen Recherche Nucl. (CERN))
05/09/2011, 16:30
Track 1: Computing Technology for Physics Research
Parallel talk
The Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment at CERN is the infrastructure responsible for filtering and transferring ATLAS experimental data from detectors to the mass storage system. It relies on a large, distributed computing environment, including thousands of computing nodes with thousands of applications running concurrently.
In such a complex environment,...
Prof.
Elise de Doncker
(Western Michigan University)
05/09/2011, 16:35
Track 3: Computations in Theoretical Physics - Techniques and Methods
Parallel talk
We report results of a new regularization technique for infrared (IR) divergent loop integrals using dimensional regularization, where a positive regularization parameter (epsilon, satisfying that the dimension d = 4+2*epsilon) is introduced in the integrand to keep the integral from diverging as long as epsilon > 0.
Based on an asymptotic expansion of the integral we construct a...
Dr
Tim dos Santos
(Bergische Universitaet Wuppertal)
05/09/2011, 16:55
Track 1: Computing Technology for Physics Research
Parallel talk
With the Job Execution Monitor, a user-centric job monitoring software developed at the University of Wuppertal and integrated into the pilot-based "PanDA" job brokerage system of the WLCG, job progress and grid worker node health can be supervised in real time. Imminent error conditions can thus be detected early by the submitter and countermeasures taken. Grid site admins can access...
Robert Fischer
(RWTH Aachen University, III. Physikalisches Institut A)
05/09/2011, 16:55
Track 2 : Data Analysis - Algorithms and Tools
Parallel talk
Visual Physics Analysis (VISPA) is an analysis development environment with applications in high energy as well as astroparticle physics. VISPA provides a graphical steering of the analysis flow, which is comprised of self-written C++ and Python modules. The advances presented in this talk extend the scope from prototyping to the execution of analyses. A novel concept of analysis layers has...
Dr
Andrei Kataev
(INR, Moscow, Russia)
05/09/2011, 17:00
Track 3: Computations in Theoretical Physics - Techniques and Methods
Parallel talk
Different forms of the generalized Crewther relation in QED and QCD are discussed. They follow from application of the OPE method to the AVV triangle amplitude in the limit where conformal symmetry is valid and is broken by the renormalization procedure in various variants of the MS scheme, including 't Hooft's prescription for defining the beta-function. Special features of the...
Emanuel Alexandre Strauss
(SLAC National Accelerator Laboratory)
05/09/2011, 17:20
Track 1: Computing Technology for Physics Research
Parallel talk
We present an online measurement of the LHC beam parameters in ATLAS using the High Level Trigger (HLT). When a significant change is detected in the measured beamspot, it is distributed to the HLT. There, trigger algorithms like b-tagging, which calculate impact parameters or decay lengths, benefit from a precise, up-to-date set of beamspot parameters. Additionally, online feedback is sent to...
Thomas Hahn
(MPI f. Physik)
05/09/2011, 17:25
Track 3: Computations in Theoretical Physics - Techniques and Methods
Parallel talk
The talk presents the new features in FormCalc 7 (and some in LoopTools), such as analytic tensor reduction, inclusion of the OPP method, and the interface to FeynHiggs.
David Hand
(Imperial College London)
06/09/2011, 09:40
For very sound reasons, including the central limit theorem and mathematical tractability, classical multivariate statistics was heavily based on the multivariate normal distribution. However, the development of powerful computers, as well as increasing numbers of very large data sets, has led to a dramatic blossoming of research in this area, and the development of entirely new tools for...
Peter Boyle
(University of Edinburgh)
06/09/2011, 10:50
Track 3: Computations in Theoretical Physics - Techniques and Methods
Plenary talk
I discuss recently developed formulations of lattice fermions possessing near-exact chiral symmetry. These are particularly appropriate for the simulation of complex weak matrix elements. I also discuss the state of the art of supercomputing for lattice simulation.
Dr
Somak Raychaudhury
(University of Birmingham)
06/09/2011, 11:30
Multivariate datasets in astrophysics can be large, with the
increasing volume of information now becoming available from a range
of observations, from the ground and from space, across the electromagnetic
spectrum. The observations are in the form of raw images and/or
spectra, and tables of derived quantities, obtained at multiple epochs
in time. Large archives of images, spectra and catalogues...
Dr
Vladimir Bytev
(JINR)
06/09/2011, 14:00
Track 3: Computations in Theoretical Physics - Techniques and Methods
Parallel talk
The differential reduction algorithm allows one to shift the values of the parameters of any Horn-type hypergeometric function by arbitrary integers. The mathematical part of the algorithm was presented at ACAT08 by M. Kalmykov [6].
We will describe the status of the project and will present a new version of the MATHEMATICA-based package, including several important...
Mr
Peter Gronbech
(Particle Physics-University of Oxford)
06/09/2011, 14:00
Track 1: Computing Technology for Physics Research
Parallel talk
Monitoring the Grid at local, national, and global levels
The GridPP Collaboration
The Worldwide LHC Computing Grid is the computing infrastructure set up to process the experimental data coming from the experiments at the Large Hadron Collider located at CERN.
GridPP is the project that provides the UK part of this infrastructure across 19 sites in the UK. To ensure that these large...
Eckhard von Toerne
(University of Bonn)
06/09/2011, 14:00
Track 2 : Data Analysis - Algorithms and Tools
Parallel talk
The toolkit for multivariate analysis, TMVA, provides a large set of advanced multivariate analysis techniques for signal/background classification and regression problems. These techniques are embedded in a framework capable of handling input data preprocessing and the evaluation of the results, thus providing a simple and convenient tool for multivariate techniques. The analysis techniques...
Yves Kemp
(Deutsches Elektronen-Synchrotron (DESY))
06/09/2011, 14:25
Track 1: Computing Technology for Physics Research
Parallel talk
Preserving data from past experiments and preserving the ability to
perform analysis with old data is of growing importance in many
domains of science, including High Energy Physics (HEP). A study group on this issue, DPHEP, has been established to provide guidelines and a structure
for international collaboration on data preservation projects in HEP.
This contribution...
Dr
Roman Lee
(Budker Institute of Nuclear Physics)
06/09/2011, 14:25
Track 3: Computations in Theoretical Physics - Techniques and Methods
Parallel talk
The method of calculation of the loop integrals based on the dimensional recurrence relation and analyticity of the integrals as functions of $d$ is reviewed. Special emphasis is made on the possibility to automatize many steps of the method. New results obtained with this method are presented.
Prof.
Dugan O'Neil
(Simon Fraser University (SFU))
06/09/2011, 14:25
Track 2 : Data Analysis - Algorithms and Tools
Parallel talk
Tau leptons will play an important role in the physics program at the
LHC. They will be used in electroweak measurements and in detector
related studies like the determination of the missing transverse
energy scale, but also in searches...
Daniel Zander
(Karlsruhe Institute of Technology)
06/09/2011, 14:50
Track 2 : Data Analysis - Algorithms and Tools
Parallel talk
Full reconstruction is an important analysis technique utilized at B factories, where B mesons are produced in e+e- -> Y(4S) -> BBbar processes. By fully reconstructing one of the two B mesons in an event in a hadronic final state, the properties of the other B meson are determined using momentum conservation. This allows one to measure or search for rare B meson decays involving...
Dr
Cedric Studerus
(University of Bielefeld)
06/09/2011, 14:50
Track 3: Computations in Theoretical Physics - Techniques and Methods
Parallel talk
Reduze is a computer program for reducing Feynman integrals to master integrals employing the Gauss/Laporta algorithm. Reduze is written in C++ and uses the GiNaC library to perform simplifications of the algebraic prefactors in the system of equations.
In this talk, the new version, Reduze 2, is presented. The program supports fully parallelised computations with MPI and allows one to resume...
Dr
Federico Stagni
(Conseil Europeen Recherche Nucl. (CERN)), Dr
Philippe Charpentier
(Conseil Europeen Recherche Nucl. (CERN))
06/09/2011, 14:50
Track 1: Computing Technology for Physics Research
Parallel talk
The LHCb computing model was designed to support the LHCb physics program, taking into account LHCb specificities (event sizes, processing times, etc.). Within this model several key activities are defined, the most important of which are real data processing (reconstruction, stripping and streaming, group and user analysis), Monte Carlo simulation and data replication. In this...
Daniel Martschei
(Inst. für Experimentelle Kernphys.-Universitaet Karlsruhe-KIT)
06/09/2011, 15:15
Track 2 : Data Analysis - Algorithms and Tools
Parallel talk
Advanced event reweighting for MVA training.
Multivariate discrimination techniques, such as Neural Networks, are key ingredients to modern data analysis and play an important role in high energy physics. They are usually trained on simulated Monte Carlo (MC) samples to discriminate signal from background and are then applied to data. This has in general some side effects which we...
Dr
Sebastien Binet
(Laboratoire de l'Accelerateur Lineaire (LAL)-Universite de Pari)
06/09/2011, 15:15
Track 1: Computing Technology for Physics Research
Parallel talk
Current HENP libraries and frameworks were written before multicore
systems became widely deployed and used.
From this environment, a 'single-thread' processing model naturally
emerged but the implicit assumptions it encouraged are greatly
impairing our abilities to scale in a multicore/manycore world.
While parallel programming - still in an intensive phase of R&D
despite the 30+...
Jan Kuipers
(Nikhef)
06/09/2011, 15:15
Track 3: Computations in Theoretical Physics - Techniques and Methods
Parallel talk
New features of the symbolic algebra package Form 4 are
discussed. Most importantly, these features include polynomial
factorization and polynomial GCD computation. Examples of
their use are shown. One of them is an exact version of Mincer which
gives answers in terms of rational polynomials and 5 master integrals.
Dr
Federico Colecchia
(University College London)
06/09/2011, 16:10
Track 2 : Data Analysis - Algorithms and Tools
Parallel talk
Background properties in experimental particle physics are typically estimated from large collections of events. This usually provides precise knowledge of average background distributions, but inevitably hides fluctuations. To overcome this limitation, an approach based on statistical mixture model decomposition is presented. Events are treated as heterogeneous populations comprising...
Takahiro Ueda
(Karlsruhe Institute of Technology)
06/09/2011, 16:10
Track 3: Computations in Theoretical Physics - Techniques and Methods
Parallel talk
We report on the current status of the development of parallel versions of the symbolic manipulation system FORM. Currently there are two parallel versions of FORM: one is TFORM, which is based on POSIX threads and runs on multicore machines; the other is ParFORM, which uses MPI and can run on computer clusters. Using these versions, most existing FORM programs...
Dr
Christian Schmitt
(Institut fuer Physik-Johannes-Gutenberg-Universitaet Mainz)
06/09/2011, 16:10
Track 1: Computing Technology for Physics Research
Parallel talk
The reconstruction and simulation of collision events is a major task in modern HEP experiments, involving several tens of thousands of standard CPUs. On the other hand, graphics processors (GPUs) have become much more powerful and by far outperform standard CPUs in terms of floating point operations due to their massively parallel approach. The usage of these GPUs could...
Prof.
Peter R Hobson
(Brunel University)
06/09/2011, 16:35
Track 1: Computing Technology for Physics Research
Parallel talk
In-line holography has recently made the transition from silver-halide based recording media, with laser reconstruction, to recording with large-area pixel detectors and computer-based reconstruction. This form of holographic imaging is used for small particulates, such as cloud or fuel droplets, marine plankton and alluvial sediments, and enables a true 3D object field to be recorded at high...
Mr
Francesco Cerutti
(Universitat de Barcelona)
06/09/2011, 16:35
Track 3: Computations in Theoretical Physics - Techniques and Methods
Parallel talk
I present a method, elaborated within the NNPDF Collaboration, that allows the inclusion of the information contained in new datasets into an existing set of parton distribution functions without the need for refitting.
The method exploits Bayesian inference in the space of PDF replicas, computing for each replica a chi-square with respect to the new dataset and an associated weight. ...
Mr
Mikael Kuusela
(Helsinki Institute of Physics (HIP))
06/09/2011, 16:35
Track 2 : Data Analysis - Algorithms and Tools
Parallel talk
Most classification algorithms used in high energy physics fall under the category of supervised machine learning. Such methods require a training set containing both signal and background events and are prone to classification errors should this training data be systematically inaccurate for example due to the assumed MC model. To complement such model-dependent searches, we propose an...
Dr
Silvia Tentindo
(Department of Physics-Florida State University)
06/09/2011, 17:00
Track 2 : Data Analysis - Algorithms and Tools
Parallel talk
Neural networks (NN) are universal approximators. Therefore, in principle, it should be possible to use them to model any reasonably smooth probability density such as the probability density of fake missing transverse energy (MET). The modeling of fake MET is an important experimental issue in events such as
$Z \rightarrow l^+ l^-$+jets, which is an important background in high-mass Higgs...
Dr
Jan Balewski
(MIT)
06/09/2011, 17:00
Track 1: Computing Technology for Physics Research
Parallel talk
In recent years, Cloud computing has become a very attractive “notion” and
popular model for accessing distributed resources and has emerged as the next
big trend after the so-called Grid computing approach.
The onsite STAR computing resources amounting to about 3000 CPU slots have
been extended by an additional 1000 slots using opportunistic resources from the pilot
DOE/Magellan and DOE/Nimbus...
Prof.
Simonetta Liuti
(University of Virginia)
06/09/2011, 17:00
Track 3: Computations in Theoretical Physics - Techniques and Methods
Parallel talk
We will present a method to extract parton distribution functions from hard scattering processes based on an alternative type of neural networks, the Self-Organizing Maps (SOMs). Quantitative results including a detailed treatment of uncertainties will be presented within a Next to Leading Order analysis of both unpolarized and polarized inclusive deep inelastic scattering data. With a fully...
Dr
Gerardo Ganis
(CERN), Dr
Sangsu Ryu
(KiSTi Korea Institute of Science & Technology Information (KiS)
06/09/2011, 17:25
Track 1: Computing Technology for Physics Research
Parallel talk
PROOF (Parallel ROOT Facility) is an extension of ROOT enabling interactive analysis in parallel on clusters of computers or a many-core machine. PROOF has been adopted and successfully utilized as one of the main analysis models by LHC experiments including ALICE and ATLAS. ALICE has seen a growing number of PROOF clusters around the world, CAF at CERN, SKAF in Slovakia and GSIAF at Darmstadt being...
Dr
Marvin Weinstein
(SLAC National Accelerator Laboratory)
07/09/2011, 09:00
All fields of scientific research have experienced an explosion of data. Analyzing this data to extract unexpected patterns presents a computational challenge that requires new, advanced methods of analysis. DQC (Dynamic Quantum Clustering), invented by David Horn (Tel Aviv University), is a novel, interactive and highly visual approach to this problem. Studies are already underway at...
Francesco Tramontano
(CERN)
07/09/2011, 09:40
Track 3: Computations in Theoretical Physics - Techniques and Methods
Plenary talk
With the beginning of the experimental programs at the LHC, the need to describe multi-particle scattering events with high accuracy becomes more pressing. On the theoretical side, perturbative calculations at leading-order precision are not sufficient, so accounting for effects due to Next-to-Leading Order (NLO) corrections becomes mandatory.
In the last few years we...
Dr
Vittorio Del Duca
(Laboratori Nazionali di Frascati (INFN))
07/09/2011, 10:50
Track 3: Computations in Theoretical Physics - Techniques and Methods
Plenary talk
We suppose that a solution to a given Feynman integral is known in terms of multiple polylogarithms, and address the question of how to find another solution which is equivalent to the former, but with a simpler analytic structure.
Dr
Anar Manafov
(GSI - Helmholtzzentrum fur Schwerionenforschung GmbH)
08/09/2011, 09:40
Constant changes in computational infrastructure, like the current interest in clouds, impose conditions on the design of applications. We must make sure that our analysis infrastructure, including source code and supporting tools, is ready for the on-demand computing (ODC) era.
This presentation is about a new analysis concept, driven by users' needs and completely disentangled from...
Prof.
Michal Czakon
(RWTH Aachen)
08/09/2011, 10:50
Track 3: Computations in Theoretical Physics - Techniques and Methods
Plenary talk
It has become customary to think of higher order calculations as analytic, in the sense that the result should be presented in the form of known functions or constants. If such a result is obtained, numerical evaluation for practical applications or expansion in asymptotic regimes should not pose any problem. There are, however, many problems of interest, where the analytic structure, due to...
Prof.
David De Roure
(Oxford e-Research Centre)
08/09/2011, 11:30
Plenary talk
Su Yong Choi
(Korea University)
08/09/2011, 14:00
Track 2 : Data Analysis - Algorithms and Tools
Parallel talk
We derive a kinematic variable that is sensitive to the mass of the Standard Model Higgs boson (M_H) in the H->WW*->l l nu nu-bar channel using the symbolic regression method. Explicit mass reconstruction is not possible in this channel due to the presence of two neutrinos which escape detection. The mass determination problem is that of finding a mass-sensitive function that depends on the measured...
Vakhtang Tsulaia
(LBL)
08/09/2011, 14:00
Track 1: Computing Technology for Physics Research
Parallel talk
The shared memory architecture of multicore CPUs provides HENP developers with the opportunity to reduce the memory footprint of their applications by sharing memory pages between the cores in a processor. ATLAS pioneered the multi-process approach to parallelizing HENP applications. Using Linux fork() and the Copy-on-Write mechanism, we implemented a simple event task farm which allows us to...
Mr
Benedikt Biedermann
(Humboldt Universität zu Berlin)
08/09/2011, 14:00
Track 3: Computations in Theoretical Physics - Techniques and Methods
Parallel talk
We present the publicly available program NGLUON, allowing the numerical evaluation of colour-ordered amplitudes at one-loop order in massless QCD. The program allows the evaluation of one-loop amplitudes for an arbitrary number of gluons. We discuss in detail the speed as well as the numerical stability. In addition, the package allows the evaluation of one-loop scattering amplitudes...
Dr
David Malon
(High Energy Physics Division-Argonne National Laboratory (ANL))
08/09/2011, 14:25
Track 1: Computing Technology for Physics Research
Parallel talk
Traditional relational databases have not always been well matched to the needs of data-intensive sciences,
but efforts are underway within the database community to attempt to address many of the requirements of large-scale
scientific data management. One such effort is the open-source project SciDB. Since its earliest incarnations,
SciDB has been designed for scalability in parallel and...
Jiahang Zhong
(Institute of Physics-Academia Sinica)
08/09/2011, 14:25
Track 2 : Data Analysis - Algorithms and Tools
Parallel talk
We present a new approach to simulate Beyond-Standard-Model (BSM) processes which are defined by multiple parameters. In contrast to the traditional grid-scan method where a large number of events are simulated at each point of a sparse grid in the parameter space, this new approach simulates only a few events at each of a selected number of points distributed randomly over the whole parameter...
Dr
Fukuko Yuasa
(KEK)
08/09/2011, 14:25
Track 3: Computations in Theoretical Physics - Techniques and Methods
Parallel talk
We report our progress on the development of the
Direct Computation Method (DCM), which is a fully
numerical method for the computation of Feynman diagrams.
Based on a combination of a numerical integration tool
and a numerical extrapolation technique, all steps in
the computation are carried out in a fully numerical
way. The combined method is applicable to one-, two-
and multi-loop...
Mr
Balázs Kégl
(Linear Accelerator Laboratory)
08/09/2011, 14:50
Track 2 : Data Analysis - Algorithms and Tools
Parallel talk
Adaptive Metropolis (AM) is a powerful recent algorithmic tool in numerical Bayesian data analysis. AM builds on a well-known Markov Chain Monte Carlo (MCMC) algorithm but optimizes the rate of convergence to the target distribution by automatically tuning the design parameters of the algorithm on the fly. In our data analysis problem of counting muons in the water Cherenkov signal of the...
Axel Naumann
(CERN)
08/09/2011, 14:50
Track 1: Computing Technology for Physics Research
Parallel talk
Coverity's static analysis tool has been run on most of the LHC experiments' frameworks, as well as several of the packages provided to them (e.g. ROOT, Geant4). I will present how static analysis works and why it is complementary to dynamic checkers like valgrind or test suites; typical issues discovered by static analysis; and lessons learned.
Tord Riemann
(DESY)
08/09/2011, 14:50
Track 3: Computations in Theoretical Physics - Techniques and Methods
Parallel talk
The algebraic tensor reduction of one-loop Feynman integrals with signed minors has been further developed. The C++ package PJFry by V. Yundin is now available for the reduction of 5-point 1-loop tensor integrals up to rank 5. Special care is devoted to vanishing or small Gram determinants. Further, we derived extremely compact expressions for the contractions of the tensor...
Fons Rademakers
(CERN)
08/09/2011, 15:15
Track 1: Computing Technology for Physics Research
Parallel talk
Now that the LHC has started, the LHC experiments crave stability in ROOT. However, progress in computing technology is not stopping, and keeping ROOT up to date and compatible with new technologies requires a lot of work. In this presentation we will show what we are currently working on and which new technologies we are trying to exploit.
Prof.
Toshiaki Kaneko
(KEK)
08/09/2011, 15:15
Track 3: Computations in Theoretical Physics - Techniques and Methods
Parallel talk
A numerically stable analytic expression of a one-loop integral is one of the most important elements of accurate calculations of one-loop corrections to physical processes. It is known that these integrals are expressed by some generalized classes of Gauss hypergeometric functions. Power series expansions, differential equations, contiguous relations and many other identities are...
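As a small illustration of the power-series representation mentioned here (not the speaker's code), the Gauss hypergeometric function 2F1 can be summed term by term for |z| < 1 and checked against a known closed form:

```python
import math

def hyp2f1_series(a, b, c, z, tol=1e-15, max_terms=10000):
    """Gauss hypergeometric 2F1(a, b; c; z) via its power series (|z| < 1)."""
    term, total = 1.0, 1.0                   # n = 0 term is 1
    for n in range(max_terms):
        # ratio of consecutive series terms: (a+n)(b+n) / ((c+n)(n+1)) * z
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
        if abs(term) < tol * abs(total):
            return total
    raise RuntimeError("series did not converge")

z = 0.5
# Known closed form: 2F1(1, 1; 2; z) = -ln(1 - z) / z
assert abs(hyp2f1_series(1, 1, 2, z) - (-math.log(1 - z) / z)) < 1e-12
```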
Mr
José Manoel de Seixas
(Univ. Federal do Rio de Janeiro (UFRJ))
08/09/2011, 15:15
Track 2 : Data Analysis - Algorithms and Tools
Parallel talk
Electrons and photons are among the most important signatures in ATLAS. Their identification against jets background by the online trigger system relies very much on calorimetry information. The ATLAS online trigger comprises three cascaded levels and the Ringer is an alternative set of algorithms that uses calorimetry information for electron detection at the second trigger level (L2). It is...
Mr
Peralva Sotto-Maior
(Universidade Federal do Rio de Janeiro (UFRJ))
08/09/2011, 16:10
Track 2 : Data Analysis - Algorithms and Tools
Parallel talk
The Barrel Hadronic calorimeter of ATLAS (Tilecal) is a detector used in the reconstruction of hadrons, jets, muons and missing transverse energy from the proton-proton collisions at the Large Hadron Collider (LHC). It comprises 10,000 channels in four readout partitions and each calorimeter cell is made of two readout channels for redundancy. The energy deposited by the particles produced in...
Gudrun Heinrich
(Max Planck Institute Munich)
08/09/2011, 16:10
Track 3: Computations in Theoretical Physics - Techniques and Methods
Parallel talk
A program package will be presented which aims at the automated calculation of one-loop amplitudes for multi-particle processes. The program offers the possibility to use either unitarity cuts or traditional tensor reduction of Feynman diagrams, or a combination of both. It can be used to calculate one-loop corrections in both QCD and electroweak theory. Beyond the Standard...
Mr
Yngve Sneen Lindal
(Norges Teknisk-Naturvitens. Univ. (NTNU) and CERN openlab)
08/09/2011, 16:10
Track 1: Computing Technology for Physics Research
Parallel talk
In this work we present parallel implementations of an algorithm used to evaluate the likelihood function in data analysis. The implementations run on the CPU, on the GPU, and on both devices cooperatively (hybrid). The algorithm can therefore take full advantage of users' commodity systems, such as desktops and laptops, fully exploiting the hardware at their disposal. CPU...
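The structure of such a data-parallel likelihood evaluation can be sketched as follows, with a thread pool standing in for the CPU/GPU devices and a toy Gaussian model (all names and the chunking scheme are illustrative, not the authors' implementation):

```python
import math
from concurrent.futures import ThreadPoolExecutor

def partial_nll(events, mu, sigma):
    """Negative log-likelihood of a Gaussian model over one chunk of events."""
    norm = math.log(sigma * math.sqrt(2 * math.pi))
    return sum(0.5 * ((x - mu) / sigma) ** 2 + norm for x in events)

def nll(events, mu, sigma, n_workers=4):
    """Split the event sample into chunks and reduce the partial sums.
    On a real hybrid system each chunk would go to a CPU core or the GPU."""
    chunks = [events[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(n_workers) as pool:
        parts = pool.map(lambda c: partial_nll(c, mu, sigma), chunks)
    return sum(parts)

events = [0.1 * i for i in range(-50, 51)]      # toy data set
serial = partial_nll(events, 0.0, 1.0)
parallel = nll(events, 0.0, 1.0)
assert abs(serial - parallel) < 1e-9            # same result, evaluated in parallel
```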
Mr
Federico Carminati
(CERN, Geneva, Switzerland)
08/09/2011, 16:35
Track 1: Computing Technology for Physics Research
Parallel talk
Following a previous publication, this study aims at investigating the impact of regional affiliations of centres on the organisation of collaboration within the ALICE Distributed Computing infrastructure, based on social network methods. A self-administered questionnaire was sent to all centre managers about support, email interactions and desired collaborations in the infrastructure. Several...
Dr
Attilio Santocchia
(Universita e INFN Perugia)
08/09/2011, 16:35
Track 3: Computations in Theoretical Physics - Techniques and Methods
Parallel talk
Octave is one of the most widely used open-source tools for numerical analysis and linear algebra. Our project aims to improve Octave by introducing support for GPU computing, in order to speed up some linear algebra operations. The core of our work is a C library that executes some BLAS operations on the GPU, covering vector-vector, vector-matrix and matrix-matrix functions. OpenCL functions are used...
Mr
Peter Koevesarki
(Physikalisches Institut-Universitaet Bonn)
08/09/2011, 16:35
Track 2 : Data Analysis - Algorithms and Tools
Parallel talk
A novel method to estimate probability density functions, suitable for multivariate analyses, will be presented. The implemented algorithm can work on relatively large samples, iteratively finding a non-parametric density function with adaptive kernels. With an increasing number of sample points the resulting function converges to the true probability density. Specifically, we discuss a...
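An Abramson-style sketch of density estimation with adaptive kernels (a simplified one-dimensional illustration, not the presented algorithm) derives per-point bandwidths from a fixed-width pilot estimate, narrowing the kernels where the data are dense:

```python
import math

def gauss(u):
    """Standard normal kernel."""
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def adaptive_kde(sample, h0=0.5):
    """Kernel density estimate with per-point bandwidths:
    a fixed-width pilot estimate sets local widths h0 * (pilot(x_i)/g)^(-1/2)."""
    n = len(sample)
    pilot = lambda x: sum(gauss((x - xi) / h0) for xi in sample) / (n * h0)
    p = [pilot(xi) for xi in sample]
    g = math.exp(sum(math.log(pi) for pi in p) / n)   # geometric mean of pilot values
    h = [h0 * (pi / g) ** -0.5 for pi in p]           # narrow kernels in dense regions
    return lambda x: sum(gauss((x - xi) / hi) / hi
                         for xi, hi in zip(sample, h)) / n

sample = [-1.2, -0.7, -0.1, 0.0, 0.3, 0.8, 1.5]
f = adaptive_kde(sample)
# Sanity check: the estimate is a proper density, integrating to ~1.
grid = [i * 0.01 for i in range(-1000, 1001)]
area = sum(f(x) for x in grid) * 0.01
assert abs(area - 1.0) < 0.01
```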
Andras Laszlo
(CERN, Geneva (on leave of absence from KFKI Research Institute for Particle and Nuclear Physics, Budapest))
08/09/2011, 17:00
Track 2 : Data Analysis - Algorithms and Tools
Parallel talk
A frequently faced task in experimental physics is to measure the probability distribution of some quantity. Often the quantity to be measured is smeared by a non-ideal detector response or by some physical process. The procedure of removing this smearing effect from the measured distribution is called unfolding, and is a delicate problem in signal processing. Due to the numerical...
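One common numerical approach to this problem is iterative (D'Agostini / Richardson-Lucy-style) unfolding. The sketch below (a toy two-bin example, not the speaker's method) repeatedly redistributes the measured counts through the response matrix:

```python
def unfold(measured, R, n_iter=2000):
    """Iterative unfolding sketch: R[i][j] = P(observed bin i | true bin j).
    Each pass rescales the current truth estimate by how well it folds
    back into the measured spectrum (Richardson-Lucy iteration)."""
    nb = len(R[0])
    eff = [sum(R[i][j] for i in range(len(R))) for j in range(nb)]   # efficiencies
    t = [sum(measured) / nb] * nb                                    # flat start
    for _ in range(n_iter):
        folded = [sum(R[i][j] * t[j] for j in range(nb)) for i in range(len(R))]
        t = [t[j] * sum(R[i][j] * measured[i] / folded[i]
                        for i in range(len(R))) / eff[j] for j in range(nb)]
    return t

R = [[0.8, 0.3],         # smearing: 20% of bin-1 truth leaks into bin 2, etc.
     [0.2, 0.7]]
true = [100.0, 50.0]
measured = [R[0][0] * true[0] + R[0][1] * true[1],   # 95.0
            R[1][0] * true[0] + R[1][1] * true[1]]   # 55.0
est = unfold(measured, R)
assert abs(est[0] - 100.0) < 1.0 and abs(est[1] - 50.0) < 1.0
```

With exact, consistent data this iteration converges to the true spectrum; with real, noisy data the iteration count acts as an implicit regularisation, which is part of what makes unfolding delicate.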
Dr
Federico Carminati
(CERN)
08/09/2011, 17:00
Track 1: Computing Technology for Physics Research
Parallel talk
The future of high-performance computing is evolving towards the efficient use of highly parallel computing environments. A class of devices designed with parallelism in mind is Graphics Processing Units (GPUs), which are highly parallel, multithreaded computing devices. One application where the use of massive parallelism comes naturally is Monte-Carlo...
Dr
Jerome Lauret
(BNL)
09/09/2011, 09:00
Pushpalatha Bhat
(Fermi National Accelerator Lab. (Fermilab))
09/09/2011, 09:40
Nigel Glover
(IPPP Durham)
09/09/2011, 10:50
Dr
Denis Perret-Gallix
(CNRS/IN2P3)
09/09/2011, 11:30
Marco Clemencic
(CERN)
Track 1: Computing Technology for Physics Research
Poster
The LHCb experiment has been using the CMT build and configuration tool for its software since the first versions, mainly because of its multi-platform build support and its powerful configuration management functionality. Still, CMT has some limitations in terms of build performance, and complexity has been added to the tool over time to cope with new use cases. Therefore, we have...
Mr
Alexandru Dan Sicoe
(CERN)
Track 1: Computing Technology for Physics Research
Poster
ATLAS is the largest of several experiments built along the Large Hadron Collider at CERN, Geneva. Its aim is to measure particle production when protons collide at a very high center-of-mass energy, thus reproducing the behavior of matter a few instants after the Big Bang. The detecting techniques used for this purpose are very sophisticated, and the amount of digitized data created by the...
Mr
Adam Harwood
(University of the West of England), Mr
Luca Magnoni
(CERN)
Track 1: Computing Technology for Physics Research
Poster
This paper describes a new approach to the visualization of stored information about the operation of the ATLAS Trigger and Data Acquisition system.
ATLAS is one of the two general purpose detectors positioned along the Large Hadron Collider at CERN.
Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks that are constantly...
Dr
Frederik Orellana
(University of Copenhagen)
Track 1: Computing Technology for Physics Research
Poster
We present a novel tool for managing data processing on grid resources. The tool provides a graphical user interface that offers new ATLAS users a quick and gentle start with computing, using a library of applications built up by previous users.
Dr
Danilo Piparo
(Conseil Europeen Recherche Nucl. (CERN))
Track 1: Computing Technology for Physics Research
Poster
A crucial component of the CMS Software is the reconstruction, which translates the signals coming from the detector's readout electronics into concrete physics objects such as leptons, photons and jets. Given its relevance for all physics analyses, the behaviour and quality of the reconstruction code must be carefully monitored. In particular, the compatibility of its outputs between...
Giulio Palombo
(California Institute of Technology)
Track 2 : Data Analysis - Algorithms and Tools
Poster
High Energy Physics data sets are often characterized by a huge number of events. Therefore, it is extremely important to use statistical packages able to efficiently analyze these unprecedented amounts of data. We compare the performance of the statistical packages StatPatternRecognition (SPR) and Toolkit for MultiVariate Analysis (TMVA). We focus on how CPU time and memory usage of the...
Dr
Roman Kogler
(DESY)
Track 1: Computing Technology for Physics Research
Poster
Data from high-energy physics experiments are collected with significant financial and human effort and are mostly unique. However, until recently no coherent strategy existed for data preservation and re-use, and many important and complex data sets have simply been lost. While the current focus is on the LHC at CERN, in the current period several important and unique experimental...
Dr
Maxim Potekhin
(Brookhaven National Laboratory)
Track 1: Computing Technology for Physics Research
Poster
For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking, PanDA usage has ramped up steadily, typically exceeding 500k completed jobs per day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges...
Dr
Andrei Tsaregorodtsev
(Centre de Physique de Particules de Marseille (CPPM)-Faculte de)
Track 1: Computing Technology for Physics Research
Poster
Many modern applications need large amounts of computing resources, both for calculations and for data storage. These resources are typically found in computing grids, but also in commercial clouds and computing clusters. Various user communities have access to different types of resources. The DIRAC project provides a solution for an easy aggregation of heterogeneous computing resources for a...
Dr
Manqi Ruan
(Laboratoire Leprince-Ringuet (LLR)-Ecole Polytechnique)
Track 2 : Data Analysis - Algorithms and Tools
Poster
Based on the ROOT TEve/TGeo classes and the standard Linear Collider data format (LCIO), a general linear collider event display has been developed. It supports the latest detector models for both the International Linear Collider (ILC) and the Compact Linear Collider (CLIC) as well as test beam prototypes. It can be used to visualise various kinds of information at the generation, simulation and...
Dr
Federico Stagni
(Conseil Europeen Recherche Nucl. (CERN))
Track 1: Computing Technology for Physics Research
Poster
The proliferation of tools for monitoring both activities and infrastructure, together with the pressing need for prompt reaction to problems impacting data taking, data reconstruction, data reprocessing and user analysis, has led to the need to better organize the huge amount of information available. The monitoring system for the LHCb Grid Computing relies on many heterogeneous and...
Andrei Gheata
(CERN)
Track 1: Computing Technology for Physics Research
Poster
The presentation will describe an interface within the ALICE analysis framework that allows transparent usage of the experiment's distributed resources. This analysis plug-in makes it possible to configure back-end specific parameters from a single interface and to run with no change the same custom user analysis in many computing environments, from local workstations to PROOF clusters or GRID...
Julio Lozano-Bahilo
(Universidad de Granada)
Track 1: Computing Technology for Physics Research
Poster
The Pierre Auger Collaboration studies ultra high energy cosmic rays which induce extensive air showers when they interact at the top of the atmosphere. The generation of simulated showers involves tracking billions of particles as the shower develops through the atmosphere. The CPU time consumption of the complete simulation of a single shower is enormous but there are techniques to reduce it...
Dr
Ivan D Reid
(Brunel University)
Track 2 : Data Analysis - Algorithms and Tools
Poster
When monitoring complex experiments, comparison is often made between regularly acquired histograms of data and reference histograms which represent the ideal state of the equipment. With the larger HEP experiments now ramping up, there is a need for automation of this task since the volume of comparisons would overwhelm human operators. However, the two-dimensional histogram comparison tools...
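A minimal automated comparison of this kind can rank runs by a bin-by-bin chi-square against the reference; the sketch below is a one-dimensional illustration with invented histograms (the talk concerns two-dimensional comparison tools):

```python
def chi2_compare(h1, h2):
    """Bin-by-bin chi-square between a data histogram and a reference,
    assuming independent Poisson counts in each (so Var(a - b) = a + b)."""
    chi2, ndf = 0.0, 0
    for a, b in zip(h1, h2):
        if a + b > 0:                        # skip empty bin pairs
            chi2 += (a - b) ** 2 / (a + b)
            ndf += 1
    return chi2, ndf

reference = [100, 200, 300, 200, 100]        # ideal state of the equipment
good_run  = [ 96, 205, 310, 190, 105]        # statistical fluctuations only
bad_run   = [100, 200, 600, 200, 100]        # hot region in bin 3

chi2_good, _ = chi2_compare(good_run, reference)
chi2_bad, _  = chi2_compare(bad_run, reference)
assert chi2_good < chi2_bad                  # the anomalous run stands out
```

In an automated system the chi-square per degree of freedom would be compared against a threshold to flag histograms for operator attention.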
Mr
Kadlecik Peter
(Theoretical High Energy Phys. Dept. (NBI)-Niels Bohr Inst. Astr)
Track 2 : Data Analysis - Algorithms and Tools
Poster
The ATLAS tau trigger system runs very challenging real-time algorithms on commodity computers. Whilst fast, specialized algorithms are used in the second-level trigger (L2), the third-level trigger (Event Filter, EF) runs sophisticated and detailed reconstruction algorithms. The performance of both types of algorithms can be decoupled because they both start from the information...
Dr
Andrea Coccaro
(Sezione di Genova (INFN)-Universita e
Track 2 : Data Analysis - Algorithms and Tools
Poster
A sophisticated trigger system, capable of real-time track and vertex reconstruction, is in place in the ATLAS experiment to reject most of the events containing uninteresting background collisions while preserving as much as possible of the interesting physics signals. In this contribution we present the strategy adopted by the ATLAS collaboration for fast reconstruction of charged tracks...