Mr
Federico Carminati
(CERN)
11/3/08, 8:45 AM
Denis Perret-Gallix
(Laboratoire d'Annecy-le-Vieux de Physique des Particules (LAPP))
11/3/08, 8:55 AM
Lawrence Pinsky
(University of Houston)
11/3/08, 9:00 AM
Intellectual Property, which includes the following areas of the law: Copyrights, Patents, Trademarks, Trade Secrets, and most recently Database Protection and Internet Law, might seem to be an issue for lawyers only. However, the increasing impact of the laws governing these areas, and the international reach of the effects of their implementation, make it important for all software...
Dr
Andrei Kataev
(Institute for Nuclear Research, Moscow, Russia)
11/3/08, 10:40 AM
Different methods for treating the results of higher-order perturbative QCD calculations of the decay width of the Standard Model Higgs boson into bottom quarks are discussed. Special attention is paid to the analysis of the $M_H$ dependence of the decay width $\Gamma(H\to \bar{b}b)$ in the cases when the mass of the b-quark is defined as the running parameter in the $\overline{MS}$-scheme and as the...
David Bailey
(Lawrence Berkeley Laboratory)
11/3/08, 11:20 AM
For the vast majority of computations done both in pure and applied physics, ordinary 64-bit floating-point arithmetic (about 16 decimal digits) is sufficient. But for a growing body of applications, this level is not sufficient. For applications such as supernova simulations, climate modeling, n-body atomic structure calculations, "double-double" (approx. 32 digits) or even "quad-double"...
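As an editorial illustration of the "double-double" idea (not code from the talk; the dd, two_sum and dd_add names are invented for this sketch), each number is carried as an unevaluated sum of two doubles, giving roughly 32 significant digits:

    #include <cstdio>

    struct dd { double hi, lo; };              // value = hi + lo

    // Error-free sum of two doubles (Knuth's two-sum): s + err == a + b exactly.
    static dd two_sum(double a, double b) {
        double s   = a + b;
        double bb  = s - a;
        double err = (a - (s - bb)) + (b - bb);
        return {s, err};
    }

    // Add two double-double numbers, renormalizing the result.
    static dd dd_add(dd a, dd b) {
        dd s = two_sum(a.hi, b.hi);
        return two_sum(s.hi, s.lo + a.lo + b.lo);
    }

    int main() {
        dd one  = {1.0, 0.0};
        dd tiny = {1e-25, 0.0};
        dd r = dd_add(one, tiny);              // the 1e-25 would vanish in plain 64-bit
        std::printf("hi = %.17g, lo = %.17g\n", r.hi, r.lo);
    }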
Mr
Andrew Hanushevsky
(Stanford Linear Accelerator Center (SLAC))
11/3/08, 12:00 PM
There are many ways to build a Storage Element. This talk surveys the common and popular architectures used to construct today's Storage Elements and presents points for consideration. The presentation then asks, "Are these architectures ready for LHC era experiments?". The answer may be surprising and certainly shows that the context in which they are used matters.
Dr
Ariel Garcia
(Forschungszentrum Karlsruhe, Germany)
11/3/08, 2:00 PM
g-Eclipse is both a user-friendly graphical user interface and a programming framework for accessing Grid and Cloud infrastructures. Based on the extension mechanism of the well-known Eclipse platform, it provides a middleware-independent core implementation including standardized user interface components. Based on these components, implementations for any available Grid and Cloud middleware...
Dr
David Lawrence
(Jefferson Lab)
11/3/08, 2:00 PM
The C++ reconstruction framework JANA has been written to support the
next generation of Nuclear Physics experiments at Jefferson Lab in
anticipation of the 12 GeV upgrade. This includes the GlueX experiment
in the planned 4th experimental hall "Hall-D". The JANA framework was designed to allow
multi-threaded event processing with a minimal impact on developers of
reconstruction software....
Warren Perkins
(Swansea University, UK)
11/3/08, 2:00 PM
3. Computation in Theoretical Physics
Parallel Talk
Unitarity methods provide an efficient way of calculating 1-loop
amplitudes for which Feynman diagram techniques are impracticable.
Recently several approaches have been developed that apply these
techniques to systematically generate amplitudes. The 'canonical basis'
implementation of the unitarity method will be discussed in detail and illustrated using seven point QCD processes.
Mikhail Titov
(Moscow Physical Engineering Inst. (MePhI))
11/3/08, 2:25 PM
There is an ATLAS-wide policy for how different types of data are distributed between centers of different levels (T0/T1/Tn); it is a well-defined and centrally operated activity (using ATLAS Central Services, which include Catalogue services, Site services, T0 services, PanDA services, etc.). At the same time, the ATLAS Operations Group has designed user-oriented services to allow ATLAS physicists to place data...
Johannes Bluemlein
(DESY)
11/3/08, 2:25 PM
3. Computation in Theoretical Physics
Parallel Talk
We present a method to unfold the complete functional dependence of single-scale quantities such as
QCD splitting functions and Wilson coefficients from a finite number of moments. These quantities
obey recursion relations which can be found in an automated way. The exact functional form is obtained
by solving the corresponding difference equations. We apply the algorithm to the QCD Wilson...
Dr
Mikhail Rogal
(DESY)
11/3/08, 2:50 PM
3. Computation in Theoretical Physics
Parallel Talk
will be sent later
Dr
Dominik Dannheim
(CERN)
11/3/08, 2:50 PM
Probability-Density Estimation (PDE) is a multivariate discrimination technique based on sampling signal and background densities in a multi-dimensional phase space. The signal and background densities are defined by event samples (from data or Monte Carlo) and are evaluated using a binary search tree (range searching). This method is a powerful classification tool for problems with highly...
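As an editorial sketch of the PDE idea (not from the talk; all names are invented), the discriminant at a test point can be estimated by counting signal and background training events inside a box around it; a binary search tree merely accelerates this range search:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Estimate local signal and background densities by counting training
    // events inside a box around the test point ("range searching").
    struct Event { std::vector<double> x; };

    static bool in_box(const Event& e, const Event& p, double half_width) {
        for (std::size_t d = 0; d < p.x.size(); ++d)
            if (std::fabs(e.x[d] - p.x[d]) > half_width) return false;
        return true;
    }

    // Returns D in [0,1]; D close to 1 means signal-like.
    double discriminant(const std::vector<Event>& sig, const std::vector<Event>& bkg,
                        const Event& point, double half_width) {
        double ns = 0, nb = 0;
        for (const Event& e : sig) if (in_box(e, point, half_width)) ++ns;
        for (const Event& e : bkg) if (in_box(e, point, half_width)) ++nb;
        return (ns + nb > 0) ? ns / (ns + nb) : 0.5;
    }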
Paul Nilsson
(University of Texas at Arlington)
11/3/08, 2:50 PM
The PanDA system was developed by US ATLAS to meet the requirements for full-scale production and distributed analysis processing for the ATLAS Experiment at CERN. The system provides an integrated service architecture with late binding of jobs, maximal automation through layered services, tight binding with the ATLAS Distributed Data Management system, and advanced job recovery and error discovery...
Mr
Andrei Gheata
(ISS/CERN)
11/3/08, 3:15 PM
The talk will describe the current status of the offline analysis framework used in ALICE. The software was designed and optimized to take advantage of distributed computing resources and to be compatible with the ALICE computing model. The framework's main features are: the possibility to use parallelism in PROOF or GRID environments, transparency of the computing infrastructure and data model, scalability...
Dr
Yoshimasa KURIHARA
(KEK)
11/3/08, 3:15 PM
3. Computation in Theoretical Physics
Parallel Talk
The automatic Feynman-amplitude calculation system GRACE has been extended to treat next-to-leading order (NLO) QCD calculations. Matrix elements of loop diagrams, as well as those of tree-level ones, can be generated using the GRACE system. Soft/collinear singularities are treated using a leading-log subtraction method.
Higher-order re-summation of the soft/collinear correction by the parton...
Giuseppe Codispoti
(Dipartimento di Fisica)
11/3/08, 4:10 PM
CRAB (CMS Remote Analysis Builder) is the tool used by CMS to enable running physics analysis in a transparent manner over data distributed across many sites. It abstracts out the interaction with the underlying batch farms, grid infrastructure and CMS workload management tools, such that it is easily usable by non-experts.
CRAB can be used as a direct interface to the computing system or...
Prof.
Vladimir Ivantchenko
(CERN, ESA)
11/3/08, 4:10 PM
3. Computation in Theoretical Physics
Parallel Talk
The status of Geant4 electromagnetic (EM) physics models is presented, focusing on the models most relevant for collider HEP experiments, at the LHC in particular. Improvements were recently made to the models for the transport of electrons, positrons, and hadrons. The revised models include those for single and multiple scattering, ionization at low and high energies, bremsstrahlung,...
Dr
Monica Verducci
(INFN RomaI)
11/3/08, 4:10 PM
The ATLAS Muon System has started to use extensively the LCG conditions database project 'COOL' as the basis for all of its conditions data storage, both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and...
Pier Paolo Ricci
(INFN CNAF)
11/3/08, 4:35 PM
The activities of the last 5 years for storage access at the INFN CNAF Tier1 can be grouped under two different solutions efficiently used in production: the CASTOR software, developed by CERN, for Hierarchical Storage Management (HSM), and the General Parallel File System (GPFS), by IBM, for disk resource management.
In addition, since last year, a promising alternative solution for...
Sergei V. Gleyzer
(Florida State University)
11/3/08, 4:35 PM
In high energy physics, variable selection and reduction are key to a high-quality multivariate analysis. Initial variable selection often leads to a variable-set cardinality greater than the underlying degrees of freedom of the model, which motivates the need for variable reduction and, more fundamentally, a consistent decision-making framework. Such a framework, called PARADIGM, based on a...
Vladimir Kolesnikov
(Joint Institute for Nuclear Research (JINR))
11/3/08, 4:35 PM
3. Computation in Theoretical Physics
Parallel Talk
Two types of SANC system output are presented.
First, the status of the stand-alone packages for calculations of the EW and
QCD NLO RC at the parton level (standard SANC FORM and/or FORTRAN modules)
is presented. A short overview of these packages is given in the Neutral Current sector:
(uu, dd) -> (mu,mu, ee) and ee(uu, dd) -> HZ;
and in the Charged Current sector:
ee(uu, dd) -> (mu nu_mu, e...
Hegoi Garitaonandia
(NIKHEF)
11/3/08, 5:00 PM
The ATLAS experiment at CERN will require about 4000 CPUs for the online data acquisition system (DAQ). When the DAQ system experiences software errors, such as event selection algorithm problems, crashes or timeouts, the fault tolerance mechanism routes the corresponding event data to the so-called debug stream. During first-beam commissioning and early data taking, a large fraction of events...
Dr
Stanislav Zub
(National Science Center, Kharkov Institute of Physics and Technology)
11/3/08, 5:00 PM
3. Computation in Theoretical Physics
Parallel Talk
The dynamics of two bodies interacting through magnetic forces is considered. The interaction model is built on a quasi-stationary approach to the electromagnetic field, and symmetric rotors with different moments of inertia are considered for the bodies. A general form of the interaction energy is derived for the case where the mass and magnetic symmetries coincide. Since the energy of interaction depends only...
Mr
Andrey Lebedev
(GSI, Darmstadt / JINR, Dubna)
11/3/08, 5:00 PM
The Compressed Baryonic Matter (CBM) experiment at the future FAIR accelerator at Darmstadt is being designed for a comprehensive measurement of hadron and lepton production in heavy-ion collisions at beam energies of 8-45 AGeV, producing events with large track multiplicity and high hit density. The setup consists of several detectors, including the silicon tracking system (STS) placed in a...
Dr
Christopher Jones
(CORNELL UNIVERSITY)
11/3/08, 5:25 PM
Event displays in HEP are used for many different purposes, e.g. algorithm debugging, commissioning, geometry checking and physics studies. The physics-studies case is unique: few users are likely to become experts on the event display, the breadth of information all such users will want to see is quite large although any one user may want only a small subset of information, and the best...
Dr
Stuart Wakefield
(Imperial College London)
11/3/08, 5:25 PM
From its conception the job management system has been distributed to increase scalability and robustness. The system consists of several applications (called prodagents), each of which manages Monte Carlo, reconstruction and skimming jobs on collections of sites within different Grid environments (OSG, NorduGrid, LCG) and submission systems (GlideIn, local batch, etc.).
Production of...
Mr
Christophe Saout
(CMS, CERN & IEKP, University of Karlsruhe)
11/3/08, 5:50 PM
The CMS Offline software contains a widespread set of algorithms to identify jets originating from the weak decay of b-quarks. Different physical properties of b-hadron decays, like lifetime information, secondary vertices and soft leptons, are exploited. The variety of selection algorithms ranges from simple and robust ones, suitable for early data-taking and online environments such as the trigger...
Dr
Elena Solfaroli
(INFN RomaI & Universita' di Roma La Sapienza), Dr
Monica Verducci
(INFN RomaI)
11/3/08, 5:50 PM
ATLAS is a large multipurpose detector, presently in the final phase
of construction at LHC, the CERN Large Hadron Collider accelerator.
In ATLAS the muon detection is performed by a huge magnetic
spectrometer, built with the Monitored Drift Tube (MDT) technology.
It consists of more than 1,000 chambers and 350,000 drift tubes,
which have to be controlled to a spatial accuracy better...
Dr
Akira Shibata
(New York University)
11/4/08, 2:00 PM
An impressive amount of effort has been put into realizing a set of frameworks to support analysis in this new paradigm of GRID computing. However, much more than half of a physicist's time is typically spent after the GRID processing of the data. Due to the private nature of this level of analysis, there has been little in the way of a common framework or methodology.
While most physicists agree to use...
Dr
Andy Buckley
(Durham University)
11/4/08, 2:00 PM
3. Computation in Theoretical Physics
Parallel Talk
Event generator programs are a ubiquitous feature of modern particle physics, since the ability to produce exclusive, unweighted simulations of high-energy events is necessary for design of detectors, analysis methods and understanding of SM backgrounds. However --- particularly in the non-perturbative areas of physics simulated by shower+hadronisation event generators --- there are many...
Tatsiana Klimkovich
(RWTH-Aachen)
11/4/08, 2:00 PM
VISPA is a novel graphical development environment for physics analysis, following an experiment-independent approach. It introduces a new way of steering a physics data analysis, combining graphical and textual programming. The purpose is to speed up the design of an analysis, and to facilitate its control.
As the software basis for VISPA the C++ toolkit Physics eXtension Library (PXL) is...
Alexandre Vaniachine
(Argonne National Laboratory)
11/4/08, 2:25 PM
HEP experiments at the LHC store petabytes of data in ROOT files described with TAG metadata. The LHC experiments have challenging goals for efficient access to this data. Physicists need to be able to compose a metadata query and rapidly retrieve the set of matching events. Such skimming operations will be the first step in the analysis of LHC data, and improved efficiency will facilitate the...
Dr
Paolo Bartalini
(CERN)
11/4/08, 2:25 PM
3. Computation in Theoretical Physics
Parallel Talk
The CMS collaboration supports a wide spectrum of Monte Carlo generator packages in its official production, each of them requiring a dedicated software integration and physics validation effort. We report on the progress of the usage of these external programs with particular emphasis on the handling and tuning of the Matrix Element tools. The first integration tests in a large scale...
Dr
Mikhail Kirsanov
(Institute for Nuclear Research (INR), Moscow)
11/4/08, 2:50 PM
3. Computation in Theoretical Physics
Parallel Talk
The Generator Services project collaborates with the Monte Carlo
generator authors and with the LHC experiments in order to prepare
validated LCG-compliant code for both the theoretical and the
experimental communities at the LHC. On the one side it provides
technical support as far as the installation and the maintenance of
the generator packages on the supported platforms is...
Ian Fisk
(Fermi National Accelerator Laboratory (FNAL))
11/4/08, 2:50 PM
The CMS Tier 0 is responsible for handling the data in the first period of its life, from being written to a disk buffer at the CMS experiment site in Cessy by the DAQ system, to the time the transfer from CERN to one of the Tier1 computing centres completes. It contains all automatic data movement, archival and processing tasks run at CERN.
This includes the bulk transfers of data from Cessy...
Mr
Ricky Egeland
(University of Minnesota – Twin Cities, Minneapolis, MN, USA)
11/4/08, 3:15 PM
The CMS PhEDEx (Physics Experiment Data Export) project is responsible for facilitating large-scale data transfers across the grid ensuring transfer reliability, enforcing data placement policy, and accurately reporting results and performance statistics. The system has evolved considerably since its creation in 2004, and has been used daily by CMS since then. Currently CMS tracks over 2 PB of...
Mr
Sergey Belov
(JINR, Dubna)
11/4/08, 3:15 PM
3. Computation in Theoretical Physics
Parallel Talk
In this talk we present a way of making the Monte Carlo simulation chain fully automated.
In recent years there has been a need for a common place to store sophisticated MC event samples prepared by experienced theorists. Such samples should also be accessible in some standard manner so that they can be easily imported and used in the experiments' software.
The main motivation behind the LCG MCDB project is to...
Mr
John Alison
(Department of Physics and Astronomy, University of Pennsylvania)
11/4/08, 4:10 PM
CERN's Large Hadron Collider (LHC) is the world's largest particle accelerator. It will collide two proton beams at an unprecedented center-of-mass energy of 14 TeV, and first colliding beams are expected during summer 2008. ATLAS is one of the two general-purpose experiments that will record the decay products of the proton-proton collisions. ATLAS is equipped with a charged-particle...
Dr
Ian Fisk
(Fermi National Accelerator Laboratory (FNAL))
11/4/08, 4:10 PM
In this presentation we will discuss the early experience with the CMS computing model, from the last large-scale challenge activities to the first days of data taking. The current version of the CMS computing model was developed in 2004 with a focus on steady-state running. In 2008 a revision of the model was made to concentrate on the unique challenges associated with the commissioning period....
Prof.
Vladimir Ivantchenko
(CERN, ESA)
11/4/08, 4:10 PM
3. Computation in Theoretical Physics
Parallel Talk
An overview of recent developments in Geant4 hadronic modeling is provided, with a focus on the start of the LHC experiments. Improvements in the Pre-Compound model, the Binary and Bertini cascades, models of elastic scattering, the quark-gluon string and Fritiof high-energy models, and low-energy neutron transport were introduced, using validation against data from thin-target experiments. Many of...
Dr
Sergey Bityukov
(Institute for High Energy Physics, Protvino)
11/4/08, 4:35 PM
3. Computation in Theoretical Physics
Parallel Talk
We compare two approaches to the combining of signal significances:
the approach in which the signal significances are considered
as corresponding random variables, and the approach using
confidence distributions. Several signal significances, which are
often used in the analysis of data in experimental physics as a measure
of the excess of the observed or expected number of...
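As a hedged illustration of the first approach (treating significances as random variables), a Stouffer-type rescaling is shown below; this is an editorial example, not necessarily the talk's prescription:

    #include <cmath>
    #include <vector>

    // If each channel's significance s_i is (approximately) a standard normal
    // random variable under the background-only hypothesis, their rescaled sum
    // is again standard normal and serves as a combined significance.
    double combine_significances(const std::vector<double>& s) {
        double sum = 0.0;
        for (double si : s) sum += si;
        return sum / std::sqrt(static_cast<double>(s.size()));
    }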
Mr
Michal ZEROLA
(Nuclear Physics Inst., Academy of Sciences, Praha)
11/4/08, 4:35 PM
In order to achieve both fast and coordinated data transfer to collaborating sites, and to create a distribution of data over multiple sites, efficient data movement is one of the most essential aspects of a distributed environment. With such capabilities at hand, truly distributed task scheduling with minimal latencies would be reachable by internationally distributed collaborations (such...
Ms
Sonia Khatchadourian
(ETIS - UMR CNRS 8051)
11/4/08, 5:00 PM
The HESS project is a major international experiment in gamma-ray
astronomy. This project relies on a system of four Cherenkov
telescopes enabling the observation of cosmic gamma rays. The
outstanding performance obtained so far in the HESS experiment has led
the research labs involved in this project to improve the existing
system: an additional telescope is currently...
Dr
Valeri FINE
(BROOKHAVEN NATIONAL LABORATORY)
11/4/08, 5:00 PM
With the era of multi-core CPUs, software parallelism is becoming both affordable and a practical need. It is especially interesting to re-evaluate the adaptability of the sophisticated, but time-consuming, event reconstruction frameworks of high energy and nuclear physics to the reality of the multi-threaded environment.
The STAR offline OO ROOT-based framework implements a well known...
Dr
Andrej Arbuzov
(Joint Institute for Nuclear Research (JINR))
11/4/08, 5:00 PM
3. Computation in Theoretical Physics
Parallel Talk
Radiative corrections to processes of single Z and W boson production
are obtained within the SANC computer system. Interplay of one-loop
QCD and electroweak corrections is studied. Higher order QED final
state radiation is taken into account. Monte Carlo event generators
at the hadronic level are constructed. Matching with general purpose
programs like HERWIG and PYTHIA is performed to...
Miroslav Morhac
(Institute of Physics, Slovak Academy of Sciences)
11/4/08, 5:25 PM
The accuracy and reliability of the analysis of spectroscopic data depend critically on the treatment applied in order to resolve strong peak overlaps, to account for continuum background contributions, and to distinguish artifacts from the responses of some detector types. Analysis of spectroscopic data can be divided into
1. estimation of peak positions (peak searching)
2. fitting of peak...
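For illustration only (a far cruder method than those discussed in the talk; all names invented), peak searching can be reduced to flagging channels that are local maxima and exceed a local background estimate:

    #include <cstddef>
    #include <vector>

    // Flag channels that are local maxima and stand out above a crude local
    // background (the average of the two neighbouring channels) by 'threshold'.
    std::vector<int> find_peaks(const std::vector<double>& spectrum, double threshold) {
        std::vector<int> peaks;
        for (std::size_t i = 1; i + 1 < spectrum.size(); ++i) {
            double bkg = 0.5 * (spectrum[i - 1] + spectrum[i + 1]);
            if (spectrum[i] > spectrum[i - 1] && spectrum[i] > spectrum[i + 1] &&
                spectrum[i] - bkg > threshold)
                peaks.push_back(static_cast<int>(i));
        }
        return peaks;
    }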
Mr
Andreas Joachim Peters
(CERN)
11/4/08, 5:25 PM
One of the biggest challenges in LHC experiments at CERN is data management for data analysis. Event tags and iterative looping over datasets for physics analysis require many file opens per second and (mainly forward) seeking access. Analyses will typically access large datasets reading terabytes in a single iteration.
A large user community requires policies for space management and a...
Mr
Tim Muenchen
(Bergische Universitaet Wuppertal)
11/4/08, 5:50 PM
As the Large Hadron Collider (LHC) at CERN, Geneva, began operation in September, the large-scale computing grid LCG (LHC Computing Grid) is meant to process and store the large amounts of data created in simulating, measuring and analyzing particle physics experimental data. Data acquired by ATLAS, one of the four big experiments at the LHC, are analyzed using compute jobs running on the...
Dr
Alexander Sherstnev
(University of Oxford)
11/5/08, 9:00 AM
We present a new version of the CompHEP program package, version 4.5. We briefly describe the new techniques and options implemented: interfaces to ROOT and HERWIG, generation of the XML-based header in event files (HepML), full implementation of the Les Houches agreements (LHA I, SUSY LHA, LHA PDF, Les Houches events), realisation of the improved von Neumann procedure for event generation, etc....
Dr
Harrison Prosper
(Department of Physics, Florida State University)
11/5/08, 9:40 AM
Multivariate methods are used routinely in particle physics research to classify objects or to discriminate signal from background.
They have also been used successfully to approximate multivariate functions. Moreover, as is evident from this conference, excellent easy-to-use implementations of these methods exist, making it possible for everyone to deploy these sophisticated methods. From...
Dr
Thomas Binoth
(University of Edinburgh)
11/5/08, 10:40 AM
In this talk I will argue that a successful description of LHC
physics needs the inclusion of higher-order corrections for
all kinds of signal and background processes. In the case
of multi-particle production the combinatorial
complexity of standard approaches triggered many new
developments which allow for the efficient evaluation
of one-loop amplitudes for LHC phenomenology. I will...
Dr
Giulio Palombo
(University of Milan - Bicocca)
11/5/08, 2:00 PM
Datasets in modern High Energy Physics (HEP) experiments are often
described by dozens or even hundreds of input variables (features).
Reducing a full feature set to a subset that most completely represents
information about data is therefore an important task in analysis of HEP
data. We compare various feature selection algorithms for supervised
learning using several datasets, such as...
Mikhail Tentyukov
(Karlsruhe University)
11/5/08, 2:00 PM
3. Computation in Theoretical Physics
Parallel Talk
We report on the status of the current development in parallelization of the symbolic manipulation system
FORM. Most existing FORM programs will be able to take advantage of
the parallel execution, without the need for modifications.
Dr
Andrea Sciaba'
(CERN, Geneva, Switzerland)
11/5/08, 2:00 PM
The computing system of the CMS experiment works using distributed resources from more than 80 computing centres worldwide. These centres, located in Europe, America and Asia are interconnected by the Worldwide LHC Computing Grid. The operation of the system requires a stable and reliable behaviour of the underlying infrastructure.
CMS has established a procedure to extensively test all...
Dr
Takahiro Ueda
(KEK)
11/5/08, 2:25 PM
3. Computation in Theoretical Physics
Parallel Talk
Nowadays the sector decomposition technique, which can isolate divergences from parametric representations of integrals, has become quite a useful tool for the numerical evaluation of Feynman loop integrals. It is used to verify the analytical results of multi-loop integrals in the Euclidean region, or in some cases is used in practice in the physical region by combining it with other methods handling...
Dr
Marcin Wolter
(Henryk Niewodniczanski Institute of Nuclear Physics PAN)
11/5/08, 2:25 PM
Tau leptons will play an important role in the physics program at the
LHC. They will not only be used in electroweak measurements and
in detector-related studies like the determination of the E_T^miss
scale, but also in searches for new phenomena like the Higgs boson or
Supersymmetry.
Due to the overwhelming background from QCD processes, highly
efficient algorithms are essential to...
Dr
André dos Anjos
(University of Wisconsin, Madison, USA)
11/5/08, 2:25 PM
The DAQ/HLT system of the ATLAS experiment at CERN, Switzerland, is being commissioned for first collisions in 2009. Presently, the system is composed of an already very large farm of computers that accounts for about one-third of its event processing capacity. Event selection is conducted in two steps after the hardware-based Level-1 Trigger: a Level-2 Trigger processes detector data based on...
Thomas Hahn
(MPI Munich)
11/5/08, 2:50 PM
3. Computation in Theoretical Physics
Parallel Talk
The talk will cover the latest version of the Feynman-diagram calculator FormCalc. The most significant improvement is the communication of intermediate expressions from FORM to Mathematica and back, for the primary purpose of introducing abbreviations at an early stage. Thus longer expressions can be treated, and a severe bottleneck, in particular for processes with high multiplicities, is removed.
Dr
Fabrizio Furano
(Conseil Europeen Recherche Nucl. (CERN))
11/5/08, 2:50 PM
In this talk we address the way the ALICE Offline Computing is starting
to exploit the possibilities given by the Scalla/Xrootd repository
globalization tools. These tools are quite general and can be adapted to
many situations, without disrupting existing designs, but adding a level
of coordination among xrootd-based storage clusters, and the ability to
interact between them.
Alexander Kryukov
(Skobeltsyn Institute for Nuclear Physics Moscow State University)
11/5/08, 2:50 PM
1. Computing Technology
Grid systems are used for calculations and data processing in various applied areas such as biomedicine, nanotechnology and materials science, cosmophysics and high energy physics, as well as in a number of industrial and commercial areas. However, one of the basic problems standing in the way of wide use of grid systems is that applied jobs, as a rule, are developed for...
Dr
Fukuko YUASA
(KEK)
11/5/08, 3:15 PM
3. Computation in Theoretical Physics
Parallel Talk
We apply a 'Direct Computation Method', which is purely numerical, to evaluate
Feynman integrals. This method is based on the combination of an efficient
numerical integration and an efficient extrapolation strategy. In addition,
high-precision arithmetic and parallelization techniques can be used if required.
We present our recent progress in the development of this method and show...
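As an editorial sketch of one common extrapolation strategy (Richardson-type acceleration; the talk's actual strategy may differ), assume the regulated integral is computed for a regulator sequence e_k = e0 * r^k:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // t[k] approximates the limit with an error expansion c1*e_k + c2*e_k^2 + ...,
    // where e_k = e0 * r^k (0 < r < 1). Each sweep of the tableau eliminates the
    // next power of the regulator, accelerating convergence toward e -> 0.
    double extrapolate(std::vector<double> t, double r) {
        for (std::size_t m = 1; m < t.size(); ++m) {
            double rm = std::pow(r, static_cast<double>(m));
            for (std::size_t k = 0; k + m < t.size(); ++k)
                t[k] = (t[k + 1] - rm * t[k]) / (1.0 - rm);
        }
        return t[0];
    }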
David Cameron
(University of Oslo)
11/5/08, 3:15 PM
The NorduGrid collaboration and its middleware product, ARC (the Advanced Resource Connector), span institutions in Scandinavia and several other countries in Europe and the rest of the world. The innovative nature of the ARC design and flexible, lightweight distribution make it an ideal choice to connect heterogeneous distributed resources for use by HEP and non-HEP applications alike. ARC...
Dr
Jerzy Nogiec
(FERMI NATIONAL ACCELERATOR LABORATORY)
11/5/08, 3:15 PM
Accelerator R&D environments produce data characterized by different levels of organization. Whereas some systems produce repetitively predictable and standardized structured data, others may produce data of unknown or changing structure. In addition, structured data, typically sets of numeric values, are frequently logically connected with unstructured content (e.g., images, graphs,...
Dr
Alfio Lazzaro
(Universita' degli Studi and INFN, Milano)
11/5/08, 4:10 PM
MINUIT is the most common package used in high energy physics for numerical minimization of multi-dimensional functions. The major algorithm of this package, MIGRAD, searches for the minimum
by using the function gradient. For each minimization iteration, MIGRAD requires the calculation of the first derivatives for each parameter of the function to be minimized.
In this presentation we will...
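As a minimal sketch of what such a derivative calculation costs (an editorial example, not MINUIT code), a central-difference gradient needs two function evaluations per parameter at every iteration:

    #include <cstddef>
    #include <functional>
    #include <vector>

    // Central-difference estimate of the gradient: two evaluations of f per
    // parameter, which is the per-iteration cost a gradient-based minimizer pays.
    std::vector<double> gradient(
            const std::function<double(const std::vector<double>&)>& f,
            std::vector<double> x, double h = 1e-5) {
        std::vector<double> g(x.size());
        for (std::size_t i = 0; i < x.size(); ++i) {
            const double xi = x[i];
            x[i] = xi + h; const double fp = f(x);
            x[i] = xi - h; const double fm = f(x);
            x[i] = xi;                       // restore the parameter
            g[i] = (fp - fm) / (2.0 * h);
        }
        return g;
    }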
Dr
Yoshimasa Kurihara
(KEK)
11/5/08, 4:10 PM
3. Computation in Theoretical Physics
Parallel Talk
Multiple polylog functions (MPL) often appear as a result of the Feynman parameter integrals in higher-order corrections in quantum field theory. Numerical evaluation of MPL with higher depth and weight is necessary for multi-loop calculations. We propose a purely numerical method to evaluate MPL using numerical contour integration in a multi-parameter complex plane. We can obtain values of MPL...
Dr
Michela Biglietti
(University of Napoli and INFN)
11/5/08, 4:35 PM
The ATLAS trigger system is designed to select rare physics processes of interest from an extremely high rate of proton-proton collisions, reducing the LHC incoming rate by a factor of about 10^7. The short LHC bunch-crossing period of 25 ns and the large background of soft-scattering events overlapped in each bunch crossing pose serious challenges, both on hardware and software, that the ATLAS trigger...
Dr
Mohammad Al-Turany
(GSI DARMSTADT)
11/5/08, 4:35 PM
The new developments in the FairRoot framework will be presented. FairRoot is the simulation and analysis framework used by the CBM and PANDA experiments at FAIR/GSI. The CMake-based building and testing system will be described. A new event display based on the EVE package from ROOT and on Geane will be shown, and the new developments for using GPUs and multi-core systems will be discussed.
Tord Riemann
(DESY)
11/5/08, 4:35 PM
3. Computation in Theoretical Physics
Parallel Talk
We present some recent results on the evaluation of massive one-loop multileg Feynman integrals, which are of relevance for LHC processes.
An efficient complete analytical tensor reduction was derived and implemented in a Mathematica package hexagon.m.
Alternatively, one may use Mellin-Barnes techniques in order to avoid the tensor reduction.
We briefly report on a new version of the...
Dr
Mikhail Kalmykov
(Hamburg U./JINR)
11/5/08, 5:00 PM
3. Computation in Theoretical Physics
Parallel Talk
Recent results on the manipulation of hypergeometric functions, namely the reduction and
construction of higher-order terms in the epsilon-expansion, are reviewed. The application of
this technique to the analytical evaluation of Feynman diagrams is considered.
Mr
Danilo Enoque Ferreira De Lima
(Federal University of Rio de Janeiro (UFRJ) - COPPE/Poli)
11/5/08, 5:00 PM
The ATLAS trigger system is responsible for selecting the interesting collision events delivered by the Large Hadron Collider (LHC). The ATLAS trigger will need to achieve a ~10^-7 rejection factor against random proton-proton collisions, and still be able to efficiently select interesting events. After a first processing level based on FPGAs and ASICs, the final event selection is based on...
Gero Flucke
(Universität Hamburg)
11/5/08, 5:00 PM
The ultimate performance of the CMS detector relies crucially on precise and prompt alignment and calibration of its components. A sizable number of workflows need to be coordinated and performed with minimal delay through the use of a computing infrastructure which is able to provide the constants for a timely reconstruction of the data for subsequent physics analysis. The framework...
Dario Berzano
(Istituto Nazionale di Fisica Nucleare (INFN) and University of Torino)
11/5/08, 5:25 PM
Current Grid deployments for LHC computing (namely the WLCG infrastructure) do not allow efficient parallel interactive processing of data. In order to allow physicists to interactively access subsets of data (e.g. for algorithm tuning and debugging before running over a full dataset) parallel Analysis Facilities based on PROOF have been deployed by the ALICE experiment at CERN and elsewhere....
Dr
Markward Britsch
(Max-Planck-Institut fuer Kernphysik (MPI))
11/5/08, 5:25 PM
A large hadron machine like the LHC, with its high track multiplicities, always asks for powerful tools that drastically reduce the large background while selecting signal events efficiently. Such tools are in fact widely needed and used in all parts of particle physics. Given the huge amount of data that will be produced at the LHC, the process of training as well as the process of...
Liliana Teodorescu
(Brunel University)
11/5/08, 5:50 PM
In order to address the data analysis challenges imposed by the complexity of the data generated by the current and future particle physics experiments, new techniques for performing various analysis tasks need to be investigated. In 2006 we introduced to the particle physics field one such new technique, based on Gene Expression Programming (GEP), and successfully applied it to an event...
Prof.
Volker Lindenstruth
(Kirchhoff Institute for Physics)
11/6/08, 9:00 AM
The ALICE High Level Trigger is a high-performance computer system, set up to process the ALICE on-line data, which exceed 25 GB/s, in real time. The most demanding detector for the event reconstruction is the ALICE TPC. The HLT implements different kinds of processing elements, including AMD and Intel processors, FPGAs and GPUs. The FPGAs perform on-the-fly cluster reconstruction, and the tracks are...
Dr
Ivan Kisel
(Gesellschaft fuer Schwerionenforschung mbH (GSI), Darmstadt, Germany)
11/6/08, 11:20 AM
On-line processing of the large data volumes produced in modern HEP experiments requires using the maximum capabilities of the computer architecture. One such powerful feature is the SIMD instruction set, which allows packing several data items into one register and operating on all of them at once, thus achieving more operations per clock cycle. The novel Cell processor extends the parallelization further by...
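As an editorial illustration (not from the talk; the add4 name is invented), packing four floats into one 128-bit SSE register lets a single instruction perform four additions:

    #include <xmmintrin.h>   // SSE intrinsics

    // Four floats per 128-bit register: each _mm_add_ps does four additions
    // in one instruction, instead of four scalar add instructions.
    void add4(const float* a, const float* b, float* out, int n) {
        int i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
        }
        for (; i < n; ++i) out[i] = a[i] + b[i];   // scalar remainder
    }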
Dr
Anwar Ghuloum
(Intel Corporation)
11/6/08, 12:00 PM
1. Computing Technology
Power consumption is the ultimate limiter to current and future processor design, leading us to focus on more power efficient architectural features such as multiple cores, more powerful vector units, and use of hardware multi-threading (in place of relatively expensive out-of-order techniques). It is (increasingly) well understood that developers face new challenges with multi-core software...
Dr
Ian Fisk
(Fermi National Accelerator Laboratory, Batavia, United States)
11/7/08, 9:00 AM
Dr
Thomas Speer
(Brown University)
11/7/08, 9:30 AM
2. Data Analysis
Prof.
Kiyoshi Kato
(Kogakuin University)
11/7/08, 10:30 AM
Mr
Mihai Niculescu
(Institute of Space Sciences)
2. Data Analysis
Poster
In this paper we present an integrated system for online Monte Carlo simulations in
High Energy Physics. Several Monte Carlo simulation codes will be implemented: GEANT, PYTHIA,
FLUKA, HIJING. The system will be structured in several basic modules. The first module will provide the
system's web interface and access to the other modules, and will allow many users to log in at the
same...
Sergei V. Gleyzer
(Florida State University)
2. Data Analysis
Poster
The Compact Muon Solenoid (CMS) experiment features an electromagnetic calorimeter (ECAL) composed of lead tungstate crystals and a sampling hadronic calorimeter (HCAL) made of brass and scintillator, along with other detectors. For hadrons, the response of the electromagnetic and hadronic calorimeters is inherently different. Because sampling calorimeters measure a fraction of the energy...
Dr
Nectarios Benekos
(University of Illinois)
2. Data Analysis
Poster
ATLAS is a large multipurpose detector, presently in the final phase
of construction at LHC, the CERN Large Hadron Collider accelerator.
In ATLAS the Muon Spectrometer (MS) is optimized to measure final state muons of 14 TeV proton-proton interactions with a good momentum resolution of 2-3% at 10-100 GeV/c and 10% at 1 TeV, and an efficiency close to 100%, taking into account the high level...
Pawel Wolniewicz
(PSNC)
1. Computing Technology
Poster
g-Eclipse is an integrated workbench framework to access the power of existing Grid infrastructures. g-Eclipse can be used at the user level or the application level. At the user level, g-Eclipse is a rich client application with a user-friendly interface which allows users to access Grid resources, operators to manage Grid resources, and developers to speed up the development cycle of new Grid...
Lee Lueking
(Fermilab, Batavia, IL, USA)
1. Computing Technology
Poster
The CMS experiment has implemented a flexible and powerful approach to enable users to find data within the CMS physics data catalog. The Dataset Bookkeeping Service (DBS) comprises a database and the services used to store and access metadata related to its physics data. In addition to the existing Web-based and programmatic APIs, a generalized query system has been designed and built. This...
Mr
Luciano Manhaes De Andrade Filho
(Universidade Federal do Rio de Janeiro)
2. Data Analysis
Poster
The hadronic calorimeter of ATLAS, TileCal, provides a large number of readout channels (about 10,000). Track detection may therefore be performed by TileCal when cosmic muons cross the detector. Muon track detection has been used extensively in the TileCal commissioning phase, for both energy and timing calibrations, and it will also be important for background noise removal during...
Dr
Federico Carminati
(CERN), Dr
Giuliana Galli Carminati, Dr
Rene Brun
(CERN)
1. Computing Technology
Poster
This poster presents a book about HEP computing, due to be published in 2009. HEP research has been constantly limited by technology, in the accelerator and detector domains as well as in that of computing. At the same time, high energy physicists have greatly contributed to the development of Information Technology. Several developments conceived for HEP have found applications well...
Kathleen Knobe
(Intel)
1. Computing Technology
Plenary
Concurrent Collections is a different way of writing parallel applications. Its major contribution is to isolate the task of specifying the application semantics from any consideration of its parallel execution. This isolation makes it much easier for the domain-expert, the physicist for example, to specify the application. It also makes the task of the tuning-expert, mapping the application...
Mrs
Maaike Limper
(NIKHEF)
2. Data Analysis
Poster
On behalf of the ATLAS Collaboration.
The ATLAS collaboration at the Large Hadron Collider at CERN intends
to study a variety of final states produced in proton-proton collisions
at the energy of 14 TeV. The precise reconstruction of trajectories of
charged and neutral particles, including those which underwent decays,
is crucial for many physics analyses. In addition, a study of...
Carlos AGUADO SANCHEZ
(CERN)
1. Computing Technology
Poster
The CernVM Virtual Software Appliance contains a minimal operating system sufficient to host the application frameworks developed by the LHC experiments. In the CernVM model, the experiment application software and its dependencies are built independently of the CernVM Virtual Machine. The procedures for building, installing and validating each software release remain in the hands and under...
Dr
Matevz Tadel
(CERN)
2. Data Analysis
Live Demo
EVE is a high-level environment using ROOT's data-processing, GUI and
OpenGL interfaces. It can serve as a framework for object management
offering hierarchical data organization, object interaction and
visualization via GUI and OpenGL representations and automatic
creation of 2D projected views. On the other hand, it can serve as a
toolkit satisfying most HEP requirements, allowing...
Mr
Loic Quertenmont
(Universite Catholique de Louvain)
2. Data Analysis
Poster
FROG is a generic framework dedicated to visualizing events in a given geometry.
It is written in C++ and uses the cross-platform OpenGL libraries. It can be applied to any particular physics experiment or detector design. The code is very light and very fast and can run on various operating systems. Moreover, FROG is self-consistent and does not require installation of ROOT or...
Alberto Falzone
(NICE srl),
Giuseppe La Rocca
(Istituto Nazionale di Fisica Nucleare (INFN) Sez. Catania – Italy),
Nicola Venuti
(NICE srl),
Roberto Barbera
(University of Catania and INFN – Italy),
Valeria Ardizzone
(Istituto Nazionale di Fisica Nucleare (INFN) Sez. Catania – Italy)
1. Computing Technology
Poster
In order to address new challenges in modern e-Science and technological developments, the need to have transparent access to distributed computational and storage resources within the grid paradigm is becoming particularly important for different applications and communities.
So far, the basic know-how required to access the grid infrastructures is not so easy to acquire, especially for not...
Prof.
Nikolai Gagunashvili
(University of Akureyri, Iceland)
2. Data Analysis
Poster
Weighted histograms in Monte Carlo simulations are often used for the estimation of
probability density functions. They are obtained as the result of a random experiment with random events that have weights. In this paper the bin contents of a weighted histogram are considered as a sum of random variables with a random number of terms. Goodness-of-fit tests for weighted histograms and for weighted...
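As an editorial sketch (not from the paper; all names invented), the practical consequence is that each bin must track both the sum of weights and the sum of squared weights, the latter giving the statistical error of the bin:

    #include <cmath>
    #include <vector>

    // Each bin content is a sum of a random number of random weights, so its
    // error is sqrt(sum of w^2), not sqrt(N) as for an unweighted histogram.
    struct WeightedBin { double sum_w = 0, sum_w2 = 0; };

    void fill(std::vector<WeightedBin>& h, double xmin, double xmax,
              double x, double w) {
        int nbins = static_cast<int>(h.size());
        int i = static_cast<int>((x - xmin) / (xmax - xmin) * nbins);
        if (i < 0 || i >= nbins) return;       // under/overflow ignored here
        h[i].sum_w  += w;
        h[i].sum_w2 += w * w;
    }

    double bin_error(const WeightedBin& b) { return std::sqrt(b.sum_w2); }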
Mr
Mario Lassnig
(CERN & University of Innsbruck, Austria), Mr
Mark Michael Hall
(Cardiff University, Wales, UK)
1. Computing Technology
Poster
In highly data-driven environments such as the LHC experiments a reliable and
high-performance distributed data management system is a primary requirement.
Existing work shows that intelligent data replication is the key to achieving
such a system, but current distributed middleware replication strategies rely
mostly on computing, network and storage properties when deciding how...
Mr
Eduardo Simas
(Federal University of Rio de Janeiro)
2. Data Analysis
Poster
The ATLAS online trigger system has three filtering levels and accesses information from calorimeters, muon chambers and the tracking system. The electron/jet channel is very important for triggering system performance as Higgs signatures may be found efficiently through decays that produce electrons as final-state particles.
Electron/jet separation relies very much on calorimeter...
Mr
Alexander Ayriyan
(JINR)
1. Computing Technology
Poster
The CICC JINR cluster was installed in 2007-2008, increasing computational power and disk space. It is generally used for distributed computing as part of the Russian Data Intensive Grid (EGEE-RDIG), working within the LHC Computing Grid (LCG).
With the superblade modules installed in mid-May 2008, the CICC JINR cluster reached a heterogeneous 560-core structure. The system consists...
Eygene Ryabinkin
(Russian Research Centre "Kurchatov Institute")
1. Computing Technology
Poster
The major subject of this talk is a status report on distributed computing for the ALICE experiment at Russian sites, just before and at the time of data taking at the Large Hadron Collider at CERN. We present the usage of the ALICE application software, AliEn[1], on top of the modern EGEE middleware gLite, for simulation and data analysis in the experiment...
Alberto Pulvirenti
(University of Catania - INFN Catania)
2. Data Analysis
Poster
ALICE is the LHC experiment most specifically aimed at studying the hot and dense nuclear matter produced in Pb-Pb collisions at 5.5 TeV, in order to investigate the properties of the Quark-Gluon Plasma, whose formation is expected in such conditions.
Among the physics topics of interest within this experiment, resonances play a fundamental role, since they allow one to probe the chiral...
Benedikt Hegner
(CERN)
1. Computing Technology
Poster
The CMS experiment at LHC has a very large body of software of its own and uses extensively software from outside the experiment. Ensuring the software quality of such a large project requires checking and testing at every level of complexity. The aim is to give the developers very quick feedback on all the relevant CMS offline workflows during the (twice daily) Integration Builds. In addition...
Dr
Victor Eduardo Bazterra
(Univ Illinois at Chicago)
2. Data Analysis
Poster
The CMS Collaboration is studying several algorithms to discriminate
jets coming from the hadronization of b quarks from the lighter
background. These will be used to identify top quarks and in searches
of the Higgs boson and non-Standard Model processes. A reliable
estimate of the performance of these algorithms is therefore crucial,
and methods to estimate efficiencies and mistag rates...
Sergio Grancagnolo
(INFN & University Lecce)
2. Data Analysis
Poster
The ATLAS trigger system has a three-level structure, implemented to retain interesting physics events, described here for the muon case ("Muon Vertical Slice"). The first level, implemented in custom hardware, uses measurements from the trigger chambers of the Muon Spectrometer to select muons with high transverse momentum and defines a Region of Interest (RoI) in the detector. RoIs are...
Miroslav Morhac
(Institute of Physics, Slovak Academy of Sciences)
2. Data Analysis
Poster
Visualization is one of the most powerful and direct ways in which the huge amount of information contained in multidimensional histograms can be conveyed in a form comprehensible to the human eye. With the increasing dimensionality of histograms (nuclear spectra), the requirements on the development of multidimensional scalar visualization techniques become striking. In this contribution we present a...