# ACAT 2008

3-7 November 2008
Ettore Majorana Foundation and Centre for Scientific Culture
Displaying 121 contributions out of 121
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
Datasets in modern High Energy Physics (HEP) experiments are often described by dozens or even hundreds of input variables (features). Reducing a full feature set to a subset that most completely represents information about data is therefore an important task in analysis of HEP data. We compare various feature selection algorithms for supervised learning using several datasets such as, for i ... More
Presented by Dr. Giulio PALOMBO on 5 Nov 2008 at 14:00
Type: Parallel Talk Session: Computing Technology for Physics Research
Track: 1. Computing Technology
Current Grid deployments for LHC computing (namely the WLCG infrastructure) do not allow efficient parallel interactive processing of data. In order to allow physicists to interactively access subsets of data (e.g. for algorithm tuning and debugging before running over a full dataset) parallel Analysis Facilities based on PROOF have been deployed by the ALICE experiment at CERN and elsewhere. Wher ... More
Presented by Dario BERZANO on 5 Nov 2008 at 17:25
Presented by Mr. Federico CARMINATI on 7 Nov 2008 at 11:00
Type: Parallel Talk Session: Computing Technology for Physics Research
Track: 1. Computing Technology
The talk will describe the current status of the offline analysis framework used in ALICE. The software was designed and optimized to take advantage of distributed computing resources and to be compatible with the ALICE computing model. The framework's main features: possibility to use parallelism in PROOF or GRID environments, transparency of the computing infrastructure and data model, scalability and ... More
Presented by Mr. Andrei GHEATA on 3 Nov 2008 at 15:15
Type: Parallel Talk Session: Computing Technology for Physics Research
Track: 1. Computing Technology
The ATLAS experiment at CERN will require about 4000 CPUs for the online data acquisition system (DAQ). When the DAQ system experiences software errors, such as event selection algorithm problems, crashes or timeouts, the fault tolerance mechanism routes the corresponding event data to the so called debug stream. During first beam commissioning and early data taking, a large fraction of events is ... More
Presented by Hegoi GARITAONANDIA on 3 Nov 2008 at 17:00
Type: Poster Track: 2. Data Analysis
ATLAS is a large multipurpose detector, presently in the final phase of construction at LHC, the CERN Large Hadron Collider accelerator. In ATLAS the Muon Spectrometer (MS) is optimized to measure final state muons of 14 TeV proton-proton interactions with a good momentum resolution of 2-3% at 10-100 GeV/c and 10% at 1 TeV, and an efficiency close to 100%, taking into account the high level back ... More
Presented by Dr. Nectarios BENEKOS
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
The ATLAS trigger system is designed to select rare physics processes of interest from an extremely high rate of proton-proton collisions, reducing the LHC incoming rate of about 10^7. The short LHC bunch crossing period of 25 ns and the large background of soft-scattering events overlapped in each bunch crossing pose serious challenges, both on hardware and software, that the ATLAS trigger must ... More
Presented by Dr. Michela BIGLIETTI on 5 Nov 2008 at 16:35
Type: Poster Track: 2. Data Analysis
In this paper we present an integrated system for online Monte Carlo simulations in High Energy Physics. Several Monte Carlo simulation codes will be implemented: GEANT, PYTHIA, FLUKA, HIJING. The system will be structured in several basic modules. The first module will provide the system's web interface, give access to the other modules and allow many users to log in at the same time. A ... More
Presented by Mr. Mihai NICULESCU
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
CERN's Large Hadron Collider (LHC) is the world's largest particle accelerator. It will collide two proton beams at an unprecedented center-of-mass energy of 14 TeV, and first colliding beams are expected during summer 2008. ATLAS is one of the two general-purpose experiments that will record the decay products of the proton-proton collisions. ATLAS is equipped with a charged-particle tracking s ... More
Presented by Mr. John ALISON on 4 Nov 2008 at 16:10
Type: Poster Track: 2. Data Analysis
The Compact Muon Solenoid (CMS) experiment features an electromagnetic calorimeter (ECAL) composed of lead tungstate crystals and a sampling hadronic calorimeter (HCAL) made of brass and scintillator, along with other detectors. For hadrons, the response of the electromagnetic and hadronic calorimeters is inherently different. Because sampling calorimeters measure a fraction of the energy spread o ... More
Presented by Sergei V. GLEYZER
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
The CMS Offline software contains a widespread set of algorithms to identify jets originating from the weak decay of b-quarks. Different physical properties of b-hadron decays, like lifetime information, secondary vertices and soft leptons, are exploited. The variety of selection algorithms ranges from simple and robust ones, suitable for early data-taking and online environments such as the trigger syste ... More
Presented by Mr. Christophe SAOUT on 3 Nov 2008 at 17:50
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
A large hadron machine like the LHC, with its high track multiplicities, always asks for powerful tools that drastically reduce the large background while selecting signal events efficiently. In fact, such tools are widely needed and used in all areas of particle physics. Given the huge amount of data that will be produced at the LHC, the process of training as well as the process of applying th ... More
Presented by Dr. Markward BRITSCH on 5 Nov 2008 at 17:25
Type: Plenary Session: Monday, 03 November 2008 - Morning session 2
Track: 3. Computation in Theoretical Physics
There are many ways to build a Storage Element. This talk surveys the common and popular architectures used to construct today's Storage Elements and presents points for consideration. The presentation then asks, "Are these architectures ready for LHC era experiments?". The answer may be surprising and certainly shows that the context in which they are used matters.
Presented by Mr. Andrew HANUSHEVSKY on 3 Nov 2008 at 12:00
Type: Plenary Session: Monday, 03 November 2008 - Morning session 1
Track: 1. Computing Technology
Intellectual Property, which includes the following areas of the law: Copyrights, Patents, Trademarks, Trade Secrets, and most recently Database Protection and Internet Law, might seem to be an issue for lawyers only. However, increasingly the impact of the laws governing these areas and the International reach of the effects of their implementation makes it important for all software developers ... More
Presented by Lawrence PINSKY on 3 Nov 2008 at 09:00
Type: Poster Track: 1. Computing Technology
g-Eclipse is an integrated workbench framework to access the power of existing Grid infrastructures. It can be used at the user level or the application level. At the user level, g-Eclipse is a rich client application with a user-friendly interface which allows users to access Grid resources, operators to manage Grid resources, and developers to speed up the development cycle of new Grid applicati ... More
Presented by Pawel WOLNIEWICZ
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
High performance computing with a large code base and C++ has proved to be a good combination. But when it comes to storing data, C++ is a really bad choice: it offers no support for serialization, type definitions are amazingly complex to parse, and the dependency analysis (what does object A need to be stored?) is incredibly difficult. Nevertheless, the LHC data consists of C++ objects that are ... More
Presented by Axel NAUMANN on 4 Nov 2008 at 15:15
Type: Poster Track: 1. Computing Technology
The CMS experiment has implemented a flexible and powerful approach to enable users to find data within the CMS physics data catalog. The Dataset Bookkeeping Service (DBS) comprises a database and the services used to store and access metadata related to its physics data. In addition to the existing Web-based and programmatic API, a generalized query system has been designed and built. This query s ... More
Presented by Lee LUEKING
Type: Poster Track: 1. Computing Technology
The CernVM Virtual Software Appliance contains a minimal operating system sufficient to host the application frameworks developed by the LHC experiments. In the CernVM model the experiment application software and its dependencies are built independently from the CernVM Virtual Machine. The procedures for building, installing and validating each software release remain in the hands and under the responsibilit ... More
Presented by Carlos AGUADO SANCHEZ
Type: Plenary Session: Thursday, 06 November 2008
Track: 1. Computing Technology
CernVM is a Virtual Software Appliance to run physics applications from the LHC experiments at CERN. The virtual appliance provides a complete, portable and easy to install and configure user environment for developing and running LHC data analysis on any end-user computer (laptop, desktop) and on the Grid independently of operating system software and hardware platform (Linux, Windows, MacOS). Th ... More
Presented by Predrag BUNCIC on 6 Nov 2008 at 09:40
Type: Plenary Session: Wednesday, 05 November 2008
Track: 2. Data Analysis
Code quality has traditionally been decomposed into internal and external quality. In this talk, I will discuss the differences between these two views and I will consider the contexts in which either of the two becomes the main quality goal. I will argue that for physics software the programmer's perspective, focused on the internal quality, is the most important one. Then, I will revise the ... More
Presented by Paolo TONELLA on 5 Nov 2008 at 11:20
Type: Poster Track: 2. Data Analysis
The hadronic calorimeter of ATLAS, TileCal, provides a large number of readout channels (about 10,000). Therefore, track detection may be performed by TileCal when cosmic muons cross the detector. Muon track detection has been used extensively in the TileCal commissioning phase, for both energy and timing calibrations, and it will also be important for background noise removal during nominal ... More
Presented by Mr. Luciano MANHAES DE ANDRADE FILHO
Type: Plenary Session: Wednesday, 05 November 2008
Track: 3. Computation in Theoretical Physics
We present a new version of the CompHEP program package, version 4.5. We briefly describe the new techniques and options implemented: interfaces to ROOT and HERWIG, generation of the XML-based header in event files (HepML), full implementation of the Les Houches agreements (LHA I, SUSY LHA, LHA PDF, Les Houches events), realisation of the improved von Neumann procedure for event generation, etc. We al ... More
Presented by Dr. Alexander SHERSTNEV on 5 Nov 2008 at 09:00
Type: Parallel Talk Session: Methodology of Computations in Theoretical Physics - Session 1
Track: 3. Computation in Theoretical Physics
will be sent later
Presented by Dr. Mikhail ROGAL on 3 Nov 2008 at 14:50
Presented by Dr. Ian FISK on 7 Nov 2008 at 09:00
Type: Poster Track: 1. Computing Technology
This poster presents a book about HEP computing, due to be published in 2009. HEP research has been constantly limited by technology, in the accelerator and detector domains as well as in that of computing. At the same time, High Energy physicists have greatly contributed to the development of Information Technology. Several developments conceived for HEP have found applications well bey ... More
Presented by Dr. Rene BRUN, Dr. Federico CARMINATI, Dr. Giuliana GALLI CARMINATI
Type: Plenary Session: Thursday, 06 November 2008
Track: 1. Computing Technology
Concurrent Collections is a different way of writing parallel applications. Its major contribution is to isolate the task of specifying the application semantics from any consideration of its parallel execution. This isolation makes it much easier for the domain-expert, the physicist for example, to specify the application. It also makes the task of the tuning-expert, mapping the application to a ... More
Presented by Kathleen KNOBE
Type: Poster Track: 2. Data Analysis
On behalf of the ATLAS Collaboration. The ATLAS collaboration at the Large Hadron Collider at CERN intends to study a variety of final states produced in proton-proton collisions at the energy of 14 TeV. The precise reconstruction of trajectories of charged and neutral particles, including those which underwent decays, is crucial for many physics analyses. In addition, a study of tracking pe ... More
Presented by Mrs. Maaike LIMPER
Type: Parallel Talk Session: Methodology of Computations in Theoretical Physics - Session 1
Track: 3. Computation in Theoretical Physics
We report on the status of the current development in parallelization of the symbolic manipulation system FORM. Most existing FORM programs will be able to take advantage of the parallel execution, without the need for modifications.
Presented by Mikhail TENTYUKOV on 5 Nov 2008 at 14:00
Session: Friday, 07 November 2008
Track: 2. Data Analysis
Presented by Dr. Thomas SPEER on 7 Nov 2008 at 09:30
Type: Parallel Talk Session: Tuesday, 04 November 2008 - Morning session 2
Track: 2. Data Analysis
In this talk we describe the latest developments in the PROOF system. PROOF is the parallel extension of ROOT and allows large datasets to be processed in parallel on large clusters and/or multi-core machines. The recent developments have focused on readying PROOF for the imminent data analysis tasks of the LHC experiments. Main improvements have been made in the areas of overall robustness and fa ... More
Presented by Dr. Gerardo GANIS on 4 Nov 2008 at 10:40
Type: Parallel Talk Session: Computing Technology for Physics Research - Session 1
Track: 1. Computing Technology
The CMS PhEDEx (Physics Experiment Data Export) project is responsible for facilitating large-scale data transfers across the grid ensuring transfer reliability, enforcing data placement policy, and accurately reporting results and performance statistics. The system has evolved considerably since its creation in 2004, and has been used daily by CMS since then. Currently CMS tracks over 2 PB of dat ... More
Presented by Mr. Ricky EGELAND on 4 Nov 2008 at 15:15
Type: Parallel Talk Session: Computing Technology for Physics Research
Track: 1. Computing Technology
There is an ATLAS-wide policy governing how different types of data are distributed between centers of different levels (T0/T1/Tn); it is a well-defined and centrally operated activity (using ATLAS Central Services, which include Catalogue services, Site services, T0 services, Panda services, etc.). At the same time, the ATLAS Operations Group designed user-oriented services to allow ATLAS physicists to place data replicati ... More
Presented by Mikhail TITOV on 3 Nov 2008 at 14:25
Type: Parallel Talk Session: Methodology of Computations in Theoretical Physics - Session 1
Track: 3. Computation in Theoretical Physics
The Generator Services project collaborates with the Monte Carlo generators authors and with the LHC experiments in order to prepare validated LCG compliant code for both the theoretical and the experimental communities at the LHC. On the one side it provides the technical support as far as the installation and the maintenance of the generators packages on the supported platforms is concerned ... More
Presented by Dr. Mikhail KIRSANOV on 4 Nov 2008 at 14:50
Type: Parallel Talk Session: Computing Technology for Physics Research - Session 1
Track: 1. Computing Technology
The LHC machine has just started operations. Very soon, Petabytes of data from the ATLAS detector will need to be processed, distributed worldwide, re-processed and finally analyzed. This data-intensive physics analysis chain relies on a fabric of computer centers on three different sub-grids: the Open Science Grid, the LHC Computing Grid and the Nordugrid Data Facility--all part of the Worldwid ... More
Presented by Guido NEGRI on 4 Nov 2008 at 14:25
Type: Parallel Talk Session: Computing Technology for Physics Research
Track: 1. Computing Technology
CRAB (CMS Remote Analysis Builder) is the tool used by CMS to enable running physics analysis in a transparent manner over data distributed across many sites. It abstracts out the interaction with the underlying batch farms, grid infrastructure and CMS workload management tools, such that it is easily usable by non-experts. CRAB can be used as a direct interface to the computing system or can d ... More
Presented by Giuseppe CODISPOTI on 3 Nov 2008 at 16:10
Type: Live Demo Track: 2. Data Analysis
EVE is a high-level environment using ROOT's data-processing, GUI and OpenGL interfaces. It can serve as a framework for object management, offering hierarchical data organization, object interaction and visualization via GUI and OpenGL representations, and automatic creation of 2D projected views. On the other hand, it can serve as a toolkit satisfying most HEP requirements, allowing visualizat ... More
Presented by Dr. Matevz TADEL
Type: Parallel Talk Session: Computing Technology for Physics Research - Session 2
Track: 1. Computing Technology
In this presentation we will discuss the early experience with the CMS computing model, from the last large-scale challenge activities to the first days of data taking. The current version of the CMS computing model was developed in 2004 with a focus on steady-state running. In 2008 a revision of the model was made to concentrate on the unique challenges associated with the commissioning period. The t ... More
Presented by Dr. Ian FISK on 4 Nov 2008 at 16:10
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
The HESS project is a major international experiment currently performed in gamma astronomy. This project relies on a system of four Cherenkov telescopes enabling the observation of cosmic gamma rays. The outstanding performance obtained so far in the HESS experiment has led the research labs involved in this project to improve the existing system: an additional telescope is currently being b ... More
Presented by Ms. Sonia KHATCHADOURIAN on 4 Nov 2008 at 17:00
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
In order to address the data analysis challenges imposed by the complexity of the data generated by the current and future particle physics experiments, new techniques for performing various analysis tasks need to be investigated. In 2006 we introduced to the particle physics field one such new technique, based on Gene Expression Programming (GEP), and successfully applied it to an event selectio ... More
Presented by Liliana TEODORESCU on 5 Nov 2008 at 17:50
Type: Parallel Talk Session: Computing Technology for Physics Research - Session 2
Track: 1. Computing Technology
With the era of multi-core CPUs, software parallelism is becoming both affordable as well as a practical need. Especially interesting is to re-evaluate the adaptability of the high energy and nuclear physics sophisticated, but time-consuming, event reconstruction frameworks to the reality of the multi-threaded environment. The STAR offline OO ROOT-based framework implements a well known "st ... More
Presented by Dr. Valeri FINE on 4 Nov 2008 at 17:00
Type: Poster Track: 2. Data Analysis
FROG is a generic framework dedicated to visualizing events in a given geometry. It is written in C++ and uses OpenGL cross-platform libraries. It can be applied to any particular physics experiment or detector design. The code is very light and very fast and can run on various Operating Systems. Moreover, FROG is self-consistent and does not require installation of ROOT or Experiment so ... More
Presented by Mr. Loic QUERTENMONT
Type: Parallel Talk Session: Computing Technology for Physics Research
Track: 1. Computing Technology
The new developments in the FairRoot framework will be presented. FairRoot is the simulation and analysis framework used by the CBM and PANDA experiments at FAIR/GSI. The CMake-based building and testing system will be described. A new event display based on the EVE package from ROOT and Geane will be shown, and the new developments for using GPUs and multi-core systems will also be discussed.
Presented by Dr. Mohammad AL-TURANY on 5 Nov 2008 at 16:35
Type: Parallel Talk Session: Methodology of Computations in Theoretical Physics - Session 2
Track: 3. Computation in Theoretical Physics
Recent results on the manipulation of hypergeometric functions, namely the reduction and construction of higher-order terms in the epsilon-expansion, are reviewed. The application of this technique to the analytical evaluation of Feynman diagrams is considered.
Presented by Dr. Mikhail KALMYKOV on 5 Nov 2008 at 17:00
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
Event displays in HEP are used for many different purposes, e.g. algorithm debugging, commissioning, geometry checking and physics studies. The physics studies case is unique since few users are likely to become experts on the event display; the breadth of information all such users will want to see is quite large, although any one user may only want a small subset of it, and the best way to ... More
Presented by Dr. Christopher JONES on 3 Nov 2008 at 17:25
Type: Plenary Session: Thursday, 06 November 2008
Track: 1. Computing Technology
This talk will start by reminding the audience that Moore's law is very much alive (even after 40+ years of existence). Transistors will continue to double for every new silicon generation every other year. Chip designers are therefore trying every possible "trick" for putting the transistors to good use. The most notable one is to push more parallelism into each CPU: More and longer vectors, m ... More
Presented by Mr. Sverre JARP on 6 Nov 2008 at 10:40
Type: Parallel Talk Session: Methodology of Computations in Theoretical Physics - Session 1
Track: 3. Computation in Theoretical Physics
The talk will cover the latest version of the Feynman-diagram calculator FormCalc. The most significant improvement is the communication of intermediate expressions from FORM to Mathematica and back, for the primary purpose of introducing abbreviations at an early stage. Thus, longer expressions can be treated and a severe bottleneck in particular for processes with high multiplicities removed.
Presented by Thomas HAHN on 5 Nov 2008 at 14:50
Type: Parallel Talk Session: Methodology of Computations in Theoretical Physics - Session 1
Track: 3. Computation in Theoretical Physics
We present a method to unfold the complete functional dependence of single-scale quantities, such as QCD splitting functions and Wilson coefficients, from a finite number of moments. These quantities obey recursion relations which can be found in an automated way. The exact functional form is obtained by solving the corresponding difference equations. We apply the algorithm to the QCD Wilson coefficie ... More
Presented by Johannes BLUEMLEIN on 3 Nov 2008 at 14:25
Type: Poster Track: 1. Computing Technology
In order to address new challenges in modern e-Science and technological developments, the need for transparent access to distributed computational and storage resources within the grid paradigm is becoming of particular importance for different applications and communities. So far, the basic know-how required to access the grid infrastructures is not so easy to acquire, especially for non-ITC ... More
Presented by Valeria ARDIZZONE, Roberto BARBERA, Alberto FALZONE, Giuseppe LA ROCCA, Nicola VENUTI
Type: Plenary Session: Thursday, 06 November 2008
Track: 2. Data Analysis
The ALICE High Level Trigger is a high-performance computer set up to process the ALICE online data, exceeding 25 GB/s, in real time. The most demanding detector for event reconstruction is the ALICE TPC. The HLT implements different kinds of processing elements, including AMD and Intel processors, FPGAs and GPUs. The FPGAs perform on-the-fly cluster reconstruction, and the tracks are planned ... More
Presented by Prof. Volker LINDENSTRUTH on 6 Nov 2008 at 09:00
Type: Poster Track: 2. Data Analysis
Weighted histograms in Monte Carlo simulations are often used for the estimation of probability density functions. They are obtained as the result of a random experiment with random events that have weights. In this paper the bin contents of a weighted histogram are considered as a sum of random variables with a random number of terms. Goodness-of-fit tests for weighted histograms and for weighted hist ... More
Presented by Prof. Nikolai GAGUNASHVILI
Type: Parallel Talk Session: Methodology of Computations in Theoretical Physics - Session 2
Track: 3. Computation in Theoretical Physics
An overview of recent developments for the Geant4 hadronic modeling is provided with a focus on the start of the LHC experiments. Improvements in Pre-Compound model, Binary and Bertini cascades, models of elastic scattering, quark-gluon string and Fritiof high energy models, and low-energy neutron transport were introduced using validation versus data from thin target experiments. Many of these de ... More
Presented by Prof. Vladimir IVANTCHENKO on 4 Nov 2008 at 16:10
Type: Plenary Session: Monday, 03 November 2008 - Morning session 2
Track: 3. Computation in Theoretical Physics
Different methods for treating the results of higher-order perturbative QCD calculations of the decay width of the Standard Model Higgs boson into bottom quarks are discussed. Special attention is paid to the analysis of the $M_H$ dependence of the decay width $\Gamma(H\to \bar{b}b)$ in the cases when the mass of the b-quark is defined as the running parameter in the $\bar{MS}$-scheme and as the qua ... More
Presented by Dr. Andrei KATAEV on 3 Nov 2008 at 10:40
Type: Plenary Session: Monday, 03 November 2008 - Morning session 2
Track: 3. Computation in Theoretical Physics
For the vast majority of computations done both in pure and applied physics, ordinary 64-bit floating-point arithmetic (about 16 decimal digits) is sufficient. But for a growing body of applications, this level is not sufficient. For applications such as supernova simulations, climate modeling, n-body atomic structure calculations, "double-double" (approx. 32 digits) or even "quad-double" (appro ... More
Presented by David BAILEY on 3 Nov 2008 at 11:20
Type: Poster Track: 1. Computing Technology
In highly data-driven environments such as the LHC experiments a reliable and high-performance distributed data management system is a primary requirement. Existing work shows that intelligent data replication is the key to achieving such a system, but current distributed middleware replication strategies rely mostly on computing, network and storage properties when deciding how to replicate ... More
Presented by Mr. Mario LASSNIG, Mr. Mark Michael HALL
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
This presentation discusses activities at GSI to support interactive data analysis for the LHC experiment ALICE. GSI is a tier-2 centre for ALICE. One focus is a setup where it is possible to dynamically switch the resources between jobs from the Grid, jobs from the local batch system and the GSI Analysis Facility (GSIAF), a PROOF farm for fast interactive analysis. The second emphasis is on creat ... More
Presented by Anna KRESHUK on 4 Nov 2008 at 14:50
Presented by Mr. Federico CARMINATI on 3 Nov 2008 at 08:45
Presented by Denis PERRET-GALLIX on 3 Nov 2008 at 08:55
Type: Plenary Session: Tuesday, 04 November 2008 - Morning session 1
Track: 1. Computing Technology
This talk gives a high level introduction to the LLVM Compiler System (http://llvm.org/), which supports high performance compilation of C and C++ code, as well as adaptive runtime optimization and code generation. Using LLVM as a drop-in replacement for GCC offers several advantages, such as being able to optimize across files in your application, producing better generated code performance, and ... More
Presented by Mr. Chris LATTNER on 4 Nov 2008 at 09:40
Type: Plenary Session: Tuesday, 04 November 2008 - Morning session 1
Track: 2. Data Analysis
This talk will give a brief overview of the features of Java which make it well suited for use in High-Energy and Astro-physics, including recent enhancements such as the addition of parameterized types and advanced concurrency utilities, and its release as an open-source (GPL) product. I will discuss the current status of a number of Java based tools for High-Energy and Astro-physics includin ... More
Presented by Tony JOHNSON on 4 Nov 2008 at 09:00
Type: Parallel Talk Session: Computing Technology for Physics Research - Session 2
Track: 1. Computing Technology
As the Large Hadron Collider (LHC) at CERN, Geneva, began operation in September, the large-scale computing grid LCG (LHC Computing Grid) is meant to process and store the large amounts of data created in the simulation, measurement and analysis of particle physics experimental data. Data acquired by ATLAS, one of the four big experiments at the LHC, are analyzed using compute jobs running on the gri ... More
Presented by Mr. Tim MUENCHEN on 4 Nov 2008 at 17:50
Type: Parallel Talk Session: Methodology of Computations in Theoretical Physics - Session 1
Track: 3. Computation in Theoretical Physics
In this talk we present a way of making the Monte Carlo simulation chain fully automated. In recent years there has been a need for a common place to store sophisticated MC event samples prepared by experienced theorists. Such samples should also be accessible in some standard manner so they can be easily imported and used in the experiments' software. The main motivation behind the LCG MCDB project is to make ... More
Presented by Mr. Sergey BELOV on 4 Nov 2008 at 15:15
Type: Plenary Session: Wednesday, 05 November 2008
Track: 3. Computation in Theoretical Physics
In this talk I will argue that a successful description of LHC physics needs the inclusion of higher-order corrections for all kinds of signal and background processes. In the case of multi-particle production, the combinatorial complexity of standard approaches has triggered many new developments which allow for the efficient evaluation of one-loop amplitudes for LHC phenomenology. I will discu ... More
Presented by Dr. Thomas BINOTH on 5 Nov 2008 at 10:40
Type: Parallel Talk Session: Computing Technology for Physics Research
Track: 1. Computing Technology
From its conception the job management system has been distributed to increase scalability and robustness. The system consists of several applications (called prodagents), each of which manages Monte Carlo, reconstruction and skimming jobs on collections of sites within different Grid environments (OSG, NorduGrid, LCG) and submission systems (GlideIn, local batch, etc.). Production of simulated d ... More
Presented by Dr. Stuart WAKEFIELD on 3 Nov 2008 at 17:25
Type: Parallel Talk Session: Computing Technology for Physics Research
Track: 1. Computing Technology
ATLAS is a large multipurpose detector, presently in the final phase of construction at LHC, the CERN Large Hadron Collider accelerator. In ATLAS the muon detection is performed by a huge magnetic spectrometer, built with the Monitored Drift Tube (MDT) technology. It consists of more than 1,000 chambers and 350,000 drift tubes, which have to be controlled to a spatial accuracy better tha ... More
Presented by Dr. Monica VERDUCCI, Dr. Elena SOLFAROLI on 3 Nov 2008 at 17:50
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
MINUIT is the most common package used in high energy physics for numerical minimization of multi-dimensional functions. The major algorithm of this package, MIGRAD, searches for the minimum by using the function gradient. For each minimization iteration, MIGRAD requires the calculation of the first derivatives for each parameter of the function to be minimized. In this presentation we will sho ... More
Presented by Dr. Alfio LAZZARO on 5 Nov 2008 at 16:10
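The cost structure the abstract describes, one pair of function evaluations per parameter per iteration, can be sketched with a central-difference gradient (an illustration only, not MINUIT's actual implementation; all names are illustrative):

```python
def numerical_gradient(f, params, step=1e-6):
    """Central-difference estimate of the gradient of f at `params`.
    Each parameter costs two function evaluations, which is why the
    gradient dominates each minimization iteration when the target
    function (e.g. a likelihood) is expensive."""
    grad = []
    for i in range(len(params)):
        up, down = list(params), list(params)
        up[i] += step
        down[i] -= step
        grad.append((f(up) - f(down)) / (2.0 * step))
    return grad

# Example: f(x, y) = x^2 + 3y has gradient (2x, 3).
grad = numerical_gradient(lambda p: p[0] ** 2 + 3 * p[1], [2.0, 1.0])
```

For n parameters this is 2n evaluations per iteration, and the n per-parameter derivatives are independent of one another, which is what makes the calculation a natural target for parallelization.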
on 6 Nov 2008 at 14:00
Type: Parallel Talk Session: Computing Technology for Physics Research
Track: 1. Computing Technology
The activities of the last 5 years in storage access at the INFN CNAF Tier1 can be grouped under two different solutions efficiently used in production: the CASTOR software, developed by CERN, for Hierarchical Storage Management (HSM), and the General Parallel File System (GPFS), by IBM, for disk resource management. In addition, since last year, a promising alternative solution for the H ... More
Presented by Pier Paolo RICCI on 3 Nov 2008 at 16:35
Type: Parallel Talk Session: Methodology of Computations in Theoretical Physics - Session 2
Track: 3. Computation in Theoretical Physics
The dynamics of two bodies interacting through magnetic forces is considered. The model of interaction is built on a quasi-stationary approach to the electromagnetic field, and symmetric rotors with different moments of inertia are considered. The general form of the interaction energy is derived for the case in which the mass and magnetic symmetries coincide. Since the interaction energy depends only on ... More
Presented by Dr. Stanislav ZUB on 3 Nov 2008 at 17:00
Presented by Prof. Kiyoshi KATO on 7 Nov 2008 at 10:30
Type: Plenary Session: Tuesday, 04 November 2008 - Morning session 2
Track: 1. Computing Technology
The MonALISA (Monitoring Agents in A Large Integrated Services Architecture) framework provides a set of distributed services for monitoring, control, management and global optimization for large scale distributed systems. It is based on an ensemble of autonomous, multi-threaded, agent-based subsystems which are registered as dynamic services. They can be automatically discovered and used by othe ... More
Presented by Iosif LEGRAND on 4 Nov 2008 at 11:20
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
The C++ reconstruction framework JANA has been written to support the next generation of Nuclear Physics experiments at Jefferson Lab in anticipation of the 12 GeV upgrade. This includes the GlueX experiment in the planned 4th experimental hall "Hall-D". The JANA framework was designed to allow multi-threaded event processing with a minimal impact on developers of reconstruction software. As w ... More
Presented by Dr. David LAWRENCE on 3 Nov 2008 at 14:00
Type: Plenary Session: Wednesday, 05 November 2008
Track: 3. Computation in Theoretical Physics
Multivariate methods are used routinely in particle physics research to classify objects or to discriminate signal from background. They have also been used successfully to approximate multivariate functions. Moreover, as is evident from this conference, excellent easy-to-use implementations of these methods exist, making it possible for everyone to deploy these sophisticated methods. From tim ... More
Presented by Dr. Harrison PROSPER on 5 Nov 2008 at 09:40
Type: Parallel Talk Session: Methodology of Computations in Theoretical Physics - Session 1
Track: 3. Computation in Theoretical Physics
Nowadays the sector decomposition technique, which can isolate divergences from parametric representations of integrals, has become quite a useful tool for the numerical evaluation of Feynman loop integrals. It is used to verify the analytical results of multi-loop integrals in the Euclidean region, or, in some cases, applied in the physical region by combining it with other methods handling the t ... More
Presented by Dr. Takahiro UEDA on 5 Nov 2008 at 14:25
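A minimal illustration of the sector decomposition idea on a toy integral rather than a real Feynman integral: splitting the dimensionally regulated integral I(eps) = ∫_0^1 ∫_0^1 dx dy (x+y)^(eps-2) at x = y and rescaling factorizes the divergence into an explicit 1/eps pole, leaving a finite coefficient that can be evaluated numerically (this sketch is not the speaker's code):

```python
def simpson(f, a, b, n=2000):
    """Composite Simpson rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

# Sector decomposition of I(eps) = ∫_0^1 ∫_0^1 dx dy (x+y)^(eps-2):
# split at x = y and substitute y = x*t in the sector y < x (the other
# sector is identical by symmetry), giving
#     I(eps) = 2 * ∫_0^1 dx x^(eps-1) * ∫_0^1 dt (1+t)^(eps-2).
# The x-integral is exactly 1/eps, so the coefficient of the 1/eps pole
# is 2 * ∫_0^1 dt (1+t)^(-2), now a finite integral we can do numerically.
pole_coefficient = 2.0 * simpson(lambda t: (1.0 + t) ** (-2), 0.0, 1.0)
# Analytically: 2 * (1 - 1/2) = 1.
```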
Type: Parallel Talk Session: Methodology of Computations in Theoretical Physics - Session 2
Track: 3. Computation in Theoretical Physics
We present some recent results on the evaluation of massive one-loop multileg Feynman integrals, which are of relevance for LHC processes. An efficient, complete analytical tensor reduction was derived and implemented in a Mathematica package, hexagon.m. Alternatively, one may use Mellin-Barnes techniques in order to avoid the tensor reduction. We briefly report on a new version of the Mathemat ... More
Presented by Tord RIEMANN on 5 Nov 2008 at 16:35
Type: Parallel Talk Session: Methodology of Computations in Theoretical Physics - Session 1
Track: 3. Computation in Theoretical Physics
We apply a 'Direct Computation Method', which is purely numerical, to evaluate Feynman integrals. This method is based on the combination of an efficient numerical integration and an efficient extrapolation strategy. In addition, high-precision arithmetic and parallelization techniques can be used if required. We present our recent progress in the development of this method and show test ... More
Presented by Dr. Fukuko YUASA on 5 Nov 2008 at 15:15
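The combination of numerical integration with an extrapolation in the regulator, which the Direct Computation Method builds on, can be sketched on a toy one-dimensional integral with an i*eps-regulated pole (an illustration of the strategy only, not the authors' code):

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def regulated(eps):
    """I(eps) = ∫_0^1 dx / (x - 0.5 + i*eps): the singularity at
    x = 0.5 is moved off the contour by the i*eps prescription."""
    return simpson(lambda x: 1.0 / (x - 0.5 + 1j * eps), 0.0, 1.0)

# Evaluate at two regulator values and remove the O(eps) term by a
# linear (Richardson-type) extrapolation toward eps -> 0.
eps = 0.05
estimate = 2.0 * regulated(eps / 2) - regulated(eps)
exact = -1j * math.pi  # principal value 0, plus -i*pi from the pole
```

Real applications extrapolate over a longer sequence of regulator values with more sophisticated accelerators (e.g. the Wynn epsilon algorithm), but the division of labor is the same: a well-behaved quadrature at finite eps, then a limit taken numerically.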
Type: Parallel Talk Session: Methodology of Computations in Theoretical Physics - Session 2
Track: 3. Computation in Theoretical Physics
Multiple Polylog functions (MPL) often appear as results of Feynman parameter integrals in higher-order corrections in quantum field theory. Numerical evaluation of MPLs with higher depth and weight is necessary for multi-loop calculations. We propose a purely numerical method to evaluate MPLs using numerical contour integration in a multi-parameter complex plane. We can obtain values of MPL for ... More
Presented by Dr. Yoshimasa KURIHARA on 5 Nov 2008 at 16:10
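For the simplest polylogarithm beyond the ordinary logarithm, the dilogarithm Li2, a purely numerical evaluation from its integral representation looks like this (a one-parameter sketch, not the multi-parameter contour method of the talk):

```python
import math
import cmath

def li2(z, n=4000):
    """Dilogarithm Li2(z) = -∫_0^1 ln(1 - z t) / t dt, evaluated with a
    composite Simpson rule.  The integrand is finite at t = 0, where its
    limit is z."""
    def integrand(t):
        if t == 0.0:
            return complex(z)
        return -cmath.log(1 - z * t) / t
    h = 1.0 / n
    s = integrand(0.0) + integrand(1.0)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * integrand(k * h)
    return s * h / 3

# Known closed form for a cross-check: Li2(1/2) = pi^2/12 - ln(2)^2/2.
reference = math.pi ** 2 / 12 - math.log(2) ** 2 / 2
```

Higher depth and weight require nested integrals and careful contour choices around branch cuts, which is where the purely numerical multi-parameter approach of the talk comes in.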
Type: Poster Track: 2. Data Analysis
The ATLAS online trigger system has three filtering levels and accesses information from the calorimeters, muon chambers and the tracking system. The electron/jet channel is very important for trigger system performance, as Higgs signatures may be found efficiently through decays that produce electrons as final-state particles. Electron/jet separation relies very much on calorimeter inform ... More
Presented by Mr. Eduardo SIMAS
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
In high energy physics, variable selection and reduction are key to a high-quality multivariate analysis. Initial variable selection often leads to a variable set whose cardinality is greater than the underlying degrees of freedom of the model, which motivates the need for variable reduction and, more fundamentally, a consistent decision-making framework. Such a framework, called PARADIGM, based on a globa ... More
Presented by Sergei V. GLEYZER on 3 Nov 2008 at 16:35
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
Probability-Density Estimation (PDE) is a multivariate discrimination technique based on sampling signal and background densities in a multi-dimensional phase space. The signal and background densities are defined by event samples (from data or Monte Carlo) and are evaluated using a binary search tree (range searching). This method is a powerful classification tool for problems with highly non-lin ... More
Presented by Dr. Dominik DANNHEIM on 3 Nov 2008 at 14:50
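The core of the PDE classifier can be sketched as follows: count signal and background training events inside a box around the test point (the range search) and form the ratio. A production implementation does the range search with a binary tree; this illustration uses brute force and toy Gaussian samples:

```python
import random

def pde_discriminant(x, signal, background, half_width=0.5):
    """PDE output at point x: count signal and background training events
    inside a box around x and return n_s / (n_s + n_b).  Production tools
    do this range search with a binary search tree; brute force keeps the
    sketch short."""
    def in_box(p):
        return all(abs(p[d] - x[d]) <= half_width for d in range(len(x)))
    n_s = sum(1 for p in signal if in_box(p))
    n_b = sum(1 for p in background if in_box(p))
    if n_s + n_b == 0:
        return 0.5  # empty box: no discriminating information
    return n_s / float(n_s + n_b)

# Toy training samples: signal around (1, 1), background around (-1, -1).
random.seed(1)
signal = [(random.gauss(1, 0.5), random.gauss(1, 0.5)) for _ in range(500)]
background = [(random.gauss(-1, 0.5), random.gauss(-1, 0.5)) for _ in range(500)]
```

Near the signal cluster the output approaches 1, near the background cluster it approaches 0; a cut on this ratio is then the event classification.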
Type: Poster Track: 1. Computing Technology
The CICC JINR cluster was installed in 2007-2008, increasing computational power and disk storage. It is used mainly for distributed computing as part of the Russian Data Intensive Grid (EGEE-RDIG), working within the LHC Computing Grid (LCG). With the superblade modules installed in mid-May 2008, the CICC JINR cluster reached a heterogeneous 560-core structure. The system consists of thre ... More
Presented by Mr. Alexander AYRIYAN
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
HEP experiments at the LHC store petabytes of data in ROOT files described with TAG metadata. The LHC experiments have challenging goals for efficient access to this data. Physicists need to be able to compose a metadata query and rapidly retrieve the set of matching events. Such skimming operations will be the first step in the analysis of LHC data, and improved efficiency will facilitate the dis ... More
Presented by Alexandre VANIACHINE on 4 Nov 2008 at 14:25
Type: Plenary Session: Thursday, 06 November 2008
Track: 2. Data Analysis
On-line processing of the large data volumes produced in modern HEP experiments requires using the maximum capabilities of the computer architecture. One such powerful feature is the SIMD instruction set, which allows packing several data items into one register and operating on all of them at once, thus achieving more operations per clock cycle. The novel Cell processor extends the parallelization further by com ... More
Presented by Dr. Ivan KISEL on 6 Nov 2008 at 11:20
Type: Parallel Talk Session: Computing Technology for Physics Research - Session 1
Track: 1. Computing Technology
An impressive amount of effort has been put in to realize a set of frameworks to support analysis in this new paradigm of GRID computing. However, much more than half of a physicist's time is typically spent after the GRID processing of the data. Due to the private nature of this level of analysis, there has been little common framework or methodology. While most physicists agree to use ROOT as ... More
Presented by Dr. Akira SHIBATA on 4 Nov 2008 at 14:00
Type: Poster Track: 1. Computing Technology
This talk presents the distributed computing status report for the ALICE experiment at Russian sites just before, and at the time of, data taking at the Large Hadron Collider at CERN. We present the usage of the ALICE application software, AliEn[1], on top of the modern EGEE middleware, gLite, for simulation and data analysis in the experiment at t ... More
Presented by Ryabinkin EYGENE
Type: Parallel Talk Session: Methodology of Computations in Theoretical Physics - Session 2
Track: 3. Computation in Theoretical Physics
Radiative corrections to processes of single Z and W boson production are obtained within the SANC computer system. Interplay of one-loop QCD and electroweak corrections is studied. Higher order QED final state radiation is taken into account. Monte Carlo event generators at the hadronic level are constructed. Matching with general purpose programs like HERWIG and PYTHIA is performed to inclu ... More
Presented by Dr. Andrej ARBUZOV on 4 Nov 2008 at 17:00
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
Advanced mathematical and statistical computational methods are required by the LHC experiments for analyzing their data. Some of these methods are provided by the Math work package of the ROOT project, a C++ Object Oriented framework for large scale data handling applications. We present in detail the recent developments of this work package, in particular the recent improvements in the fitt ... More
Presented by Dr. Lorenzo MONETA on 4 Nov 2008 at 16:35
Type: Parallel Talk Session: Methodology of Computations in Theoretical Physics - Session 2
Track: 3. Computation in Theoretical Physics
The status of Geant4 electromagnetic (EM) physics models is presented, focusing on the models most relevant for collider HEP experiments, at the LHC in particular. Recently, improvements were made to the models for the transport of electrons and positrons, and for hadrons. The revised models include those for single and multiple scattering, ionization at low and high energies, bremsstrahlung, annihilat ... More
Presented by Prof. Vladimir IVANTCHENKO on 3 Nov 2008 at 16:10
Type: Parallel Talk Session: Methodology of Computations in Theoretical Physics - Session 1
Track: 3. Computation in Theoretical Physics
Automatic Feynman-amplitude calculation system, GRACE, has been extended to treat next-to-leading order (NLO) QCD calculations. Matrix elements of loop diagrams as well as those of tree level ones can be generated using the GRACE system. A soft/collinear singularity is treated using a leading-log subtraction method. Higher order re-summation of the soft/collinear correction by the parton shower ... More
Presented by Dr. Yoshimasa KURIHARA on 3 Nov 2008 at 15:15
Type: Poster Track: 2. Data Analysis
ALICE is the LHC experiment most specifically aimed at studying the hot and dense nuclear matter produced in Pb-Pb collisions at 5.5 TeV, in order to investigate the properties of the Quark-Gluon Plasma, whose formation is expected in such conditions. Among the physics topics of interest within this experiment, resonances play a fundamental role, since they allow one to probe the chiral symmetry ... More
Presented by Alberto PULVIRENTI
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
The ATLAS trigger system is responsible for selecting the interesting collision events delivered by the Large Hadron Collider (LHC). The ATLAS trigger will need to achieve a rejection factor of ~10^-7 against random proton-proton collisions, and still be able to efficiently select interesting events. After a first processing level based on FPGAs and ASICs, the final event selection is based on cust ... More
Presented by Mr. Danilo Enoque FERREIRA DE LIMA on 5 Nov 2008 at 17:00
Type: Parallel Talk Session: Computing Technology for Physics Research
Track: 1. Computing Technology
The offline software suite of the CMS experiment must support the production and analysis activities across a distributed computing environment. This system relies on over 100 external software packages and includes the developments of more than 250 active developers. This system requires consistent and rapid deployment of code releases, a stable code development platform, and efficient tools to e ... More
Presented by David LANGE on 5 Nov 2008 at 17:50
Type: Poster Track: 1. Computing Technology
The CMS experiment at LHC has a very large body of software of its own and uses extensively software from outside the experiment. Ensuring the software quality of such a large project requires checking and testing at every level of complexity. The aim is to give the developers very quick feedback on all the relevant CMS offline workflows during the (twice daily) Integration Builds. In addition the ... More
Presented by Benedikt HEGNER
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
The accuracy and reliability of the analysis of spectroscopic data depend critically on the data treatment: resolving strong peak overlaps, accounting for continuum background contributions, and distinguishing artifacts of the responses of some detector types. Analysis of spectroscopic data can be divided into 1) estimation of peak positions (peak searching) and 2) fitting of peak regions. One o ... More
Presented by Miroslav MORHAC on 4 Nov 2008 at 17:25
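The first of the two steps, peak searching, can be illustrated with a minimal local-maximum scan (real spectroscopy tools first smooth the spectrum and model the detector response; this sketch does neither):

```python
import math

def find_peaks(spectrum, threshold=0.0, window=2):
    """Return channels that exceed `threshold` and are strict local maxima
    within +-`window` channels.  This is the bare skeleton of peak
    searching: no smoothing, no response deconvolution."""
    peaks = []
    for i in range(window, len(spectrum) - window):
        if spectrum[i] <= threshold:
            continue
        neighbours = spectrum[i - window:i] + spectrum[i + 1:i + window + 1]
        if all(spectrum[i] > v for v in neighbours):
            peaks.append(i)
    return peaks

# Synthetic spectrum: Gaussian peaks at channels 30 and 70 on a flat background.
spectrum = [5.0 + 100.0 * math.exp(-(i - 30) ** 2 / 18.0)
                + 60.0 * math.exp(-(i - 70) ** 2 / 32.0) for i in range(100)]
```

The second step, fitting of the peak regions, would then model each found region (peak shape plus continuum) to extract positions and areas; that is where overlap resolution actually happens.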
Type: Parallel Talk Session: Methodology of Computations in Theoretical Physics - Session 2
Track: 3. Computation in Theoretical Physics
Two types of SANC system output are presented. First, the status of stand-alone packages for the calculation of the EW and QCD NLO radiative corrections at the parton level (standard SANC FORM and/or FORTRAN modules) is given. A short overview of these packages is provided in the Neutral Current sector: (uu, dd) -> (mu,mu, ee) and ee(uu, dd) -> HZ; and in the Charged Current sector: ee(uu, dd) -> (mu nu_mu, e nu_e) ... More
Presented by Vladimir KOLESNIKOV on 3 Nov 2008 at 16:35
Type: Poster Track: 2. Data Analysis
The CMS Collaboration is studying several algorithms to discriminate jets coming from the hadronization of b quarks from the lighter background. These will be used to identify top quarks and in searches for the Higgs boson and non-Standard Model processes. A reliable estimate of the performance of these algorithms is therefore crucial, and methods to estimate efficiencies and mistag rates dire ... More
Presented by Dr. Victor Eduardo BAZTERRA
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
Multivariate data analysis techniques are becoming increasingly important for high energy physics experiments. TMVA is a tool, integrated in the ROOT environment, which provides easy access to sophisticated multivariate classifiers, enabling widespread use of these very effective data selection techniques. It furthermore provides a number of pre-processing capabilities and numerous additi ... More
Presented by Joerg STELZER on 3 Nov 2008 at 14:25
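One of the classical classifiers of the kind such toolkits provide, the Fisher linear discriminant, fits in a few lines of plain Python (a dependency-free sketch of the method, not TMVA's implementation):

```python
import random

def fisher_direction(signal, background):
    """Fisher linear discriminant in two dimensions: project events onto
    w = W^-1 (mu_s - mu_b), where W is the summed within-class scatter
    matrix (2x2, inverted by hand to keep the sketch dependency-free)."""
    def mean(points):
        n = float(len(points))
        return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)
    mu_s, mu_b = mean(signal), mean(background)
    w00 = w01 = w11 = 0.0
    for points, mu in ((signal, mu_s), (background, mu_b)):
        for p in points:
            dx, dy = p[0] - mu[0], p[1] - mu[1]
            w00 += dx * dx
            w01 += dx * dy
            w11 += dy * dy
    det = w00 * w11 - w01 * w01
    d0, d1 = mu_s[0] - mu_b[0], mu_s[1] - mu_b[1]
    return ((w11 * d0 - w01 * d1) / det, (w00 * d1 - w01 * d0) / det)

# Toy samples: signal clustered around (1, 1), background around (-1, -1).
random.seed(2)
signal = [(random.gauss(1, 1), random.gauss(1, 1)) for _ in range(300)]
background = [(random.gauss(-1, 1), random.gauss(-1, 1)) for _ in range(300)]
w = fisher_direction(signal, background)
```

Cutting on the projection w[0]*x + w[1]*y then separates the two classes; the value added by a full toolkit is the common interface, pre-processing and performance evaluation around many such classifiers.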
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
Tau leptons will play an important role in the physics program at the LHC. They will not only be used in electroweak measurements and in detector related studies like the determination of the E_T^miss scale, but also in searches for new phenomena like the Higgs boson or Supersymmetry. Due to the overwhelming background from QCD processes, highly efficient algorithms are essential to identi ... More
Presented by Dr. Marcin WOLTER on 5 Nov 2008 at 14:25
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
In this talk we address the way the ALICE Offline Computing is starting to exploit the possibilities given by the Scalla/Xrootd repository globalization tools. These tools are quite general and can be adapted to many situations, without disrupting existing designs, but adding a level of coordination among xrootd-based storage clusters, and the ability to interact between them.
Presented by Dr. Fabrizio FURANO on 5 Nov 2008 at 14:50
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 1. Computing Technology
The ATLAS Muon System has started to make extensive use of the LCG conditions database project 'COOL' as the basis for all its conditions data storage, both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates, bu ... More
Presented by Dr. Monica VERDUCCI on 3 Nov 2008 at 16:10
Type: Poster Track: 2. Data Analysis
The ATLAS trigger system has a three-level structure, implemented to retain interesting physics events, described here for the muon case ("Muon Vertical Slice"). The first level, implemented in custom hardware, uses measurements from the trigger chambers of the Muon Spectrometer to select muons with high transverse momentum and defines a Region of Interest (RoI) in the detector. RoIs are then p ... More
Presented by Sergio GRANCAGNOLO
Type: Parallel Talk Session: Computing Technology for Physics Research
Track: 1. Computing Technology
The NorduGrid collaboration and its middleware product, ARC (the Advanced Resource Connector), span institutions in Scandinavia and several other countries in Europe and the rest of the world. The innovative nature of the ARC design and flexible, lightweight distribution make it an ideal choice to connect heterogeneous distributed resources for use by HEP and non-HEP applications alike. ARC has be ... More
Presented by David CAMERON on 5 Nov 2008 at 15:15
Type: Plenary Session: Monday, 03 November 2008 - Morning session 1
Track: 1. Computing Technology
The initial phase of the Blue Brain Project aims to reconstruct the detailed cellular structure and function of the neocortical column (NCC) of the young rat. As a collaboration between the Brain Mind Institute of the Ecole Polytechnique Federale de Lausanne (EPFL) and IBM the project is based on the many years of experimental data from an electrophysiology lab and a dedicated massively parallel c ... More
Presented by Mr. Felix SCHUERMANN on 3 Nov 2008 at 09:40
Type: Parallel Talk Session: Computing Technology for Physics Research
Track: 1. Computing Technology
The ultimate performance of the CMS detector relies crucially on precise and prompt alignment and calibration of its components. A sizable number of workflows need to be coordinated and performed with minimal delay through the use of a computing infrastructure which is able to provide the constants for a timely reconstruction of the data for subsequent physics analysis. The framework supporting th ... More
Presented by Gero FLUCKE on 5 Nov 2008 at 17:00
Type: Parallel Talk Session: Computing Technology for Physics Research - Session 1
Track: 1. Computing Technology
The CMS Tier 0 is responsible for handling the data in the first period of its life, from being written to a disk buffer at the CMS experiment site in Cessy by the DAQ system to the time the transfer from CERN to one of the Tier 1 computing centres completes. It contains all automatic data movement, archival and processing tasks run at CERN. This includes the bulk transfers of data from Cessy to a ... More
Presented by Ian FISK on 4 Nov 2008 at 14:50
Type: Parallel Talk Session: Computing Technology for Physics Research
Track: 1. Computing Technology
The DAQ/HLT system of the ATLAS experiment at CERN, Switzerland, is being commissioned for first collisions in 2009. Presently, the system is composed of an already very large farm of computers that accounts for about one-third of its event processing capacity. Event selection is conducted in two steps after the hardware-based Level-1 Trigger: a Level-2 Trigger processes detector data based on reg ... More
Presented by Dr. André DOS ANJOS on 5 Nov 2008 at 14:25
Type: Parallel Talk Session: Methodology of Computations in Theoretical Physics - Session 1
Track: 3. Computation in Theoretical Physics
The CMS collaboration supports a wide spectrum of Monte Carlo generator packages in its official production, each of them requiring a dedicated software integration and physics validation effort. We report on the progress of the usage of these external programs with particular emphasis on the handling and tuning of the Matrix Element tools. The first integration tests in a large scale production f ... More
Presented by Dr. Paolo BARTALINI on 4 Nov 2008 at 14:25
Type: Parallel Talk Session: Computing Technology for Physics Research
Track: 1. Computing Technology
The PanDA system was developed by US ATLAS to meet the requirements for full-scale production and distributed analysis processing for the ATLAS Experiment at CERN. The system provides an integrated service architecture with late binding of jobs, maximal automation through layered services, tight binding with the ATLAS Distributed Data Management system, and advanced job recovery and error discovery fun ... More
Presented by Paul NILSSON on 3 Nov 2008 at 14:50
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
Compiled code is fast; interpreted code is slow. There is not much we can do about that, and it is the reason why the use of interpreters in high-performance computing is usually restricted to job submission. I will show where interpreters make sense even in the context of analysis code, and which aspects have to be taken into account to make this combination a success.
Presented by Axel NAUMANN on 3 Nov 2008 at 15:15
Type: Parallel Talk Session: Computing Technology for Physics Research
Track: 1. Computing Technology
The computing system of the CMS experiment works using distributed resources from more than 80 computing centres worldwide. These centres, located in Europe, America and Asia are interconnected by the Worldwide LHC Computing Grid. The operation of the system requires a stable and reliable behaviour of the underlying infrastructure. CMS has established a procedure to extensively test all relevan ... More
Presented by Dr. Andrea SCIABA' on 5 Nov 2008 at 14:00
Session: Thursday, 06 November 2008
Track: 1. Computing Technology
Power consumption is the ultimate limiter to current and future processor design, leading us to focus on more power efficient architectural features such as multiple cores, more powerful vector units, and use of hardware multi-threading (in place of relatively expensive out-of-order techniques). It is (increasingly) well understood that developers face new challenges with multi-core software devel ... More
Presented by Dr. Anwar GHULOUM on 6 Nov 2008 at 12:00
Type: Parallel Talk Session: Methodology of Computations in Theoretical Physics - Session 1
Track: 3. Computation in Theoretical Physics
Event generator programs are a ubiquitous feature of modern particle physics, since the ability to produce exclusive, unweighted simulations of high-energy events is necessary for design of detectors, analysis methods and understanding of SM backgrounds. However --- particularly in the non-perturbative areas of physics simulated by shower+hadronisation event generators --- there are many parameter ... More
Presented by Dr. Andy BUCKLEY on 4 Nov 2008 at 14:00
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
The Compressed Baryonic Matter (CBM) experiment at the future FAIR accelerator at Darmstadt is being designed for a comprehensive measurement of hadron and lepton production in heavy-ion collisions from 8-45 AGeV beam energy, producing events with large track multiplicity and high hit density. The setup consists of several detectors, including the silicon tracking system (STS) placed in a dipole m ... More
Presented by Mr. Andrey LEBEDEV on 3 Nov 2008 at 17:00
Type: Parallel Talk Session: Methodology of Computations in Theoretical Physics - Session 2
Track: 3. Computation in Theoretical Physics
We compare two approaches to combining signal significances: one in which the signal significances are treated as corresponding random variables, and one using confidence distributions. Several signal significances, which are often used in the analysis of data in experimental physics as a measure of the excess of the observed or expected number of signal e ... More
Presented by Dr. Sergey BITYUKOV on 4 Nov 2008 at 16:35
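In the random-variable approach, the simplest textbook prescription for combining independent significances is Stouffer's weighted sum (shown here as a generic illustration, not necessarily the exact prescription compared in the talk):

```python
import math

def combine_significances(z_values, weights=None):
    """Stouffer combination of independent significances z_i:
    Z = sum(w_i * z_i) / sqrt(sum(w_i^2)).  With equal weights this
    reduces to sum(z_i) / sqrt(n).  Each z_i is treated as a standard
    normal random variable under the background-only hypothesis."""
    if weights is None:
        weights = [1.0] * len(z_values)
    num = sum(w * z for w, z in zip(weights, z_values))
    den = math.sqrt(sum(w * w for w in weights))
    return num / den

# Example: two channels at 2 and 3 sigma combine to 5/sqrt(2) ~ 3.54 sigma.
combined = combine_significances([2.0, 3.0])
```

The confidence-distribution approach replaces the normal-variable assumption with the full sampling distribution of each significance, which matters when the z_i are not well approximated as standard normal.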
Type: Parallel Talk Session: Methodology of Computations in Theoretical Physics - Session 1
Track: 3. Computation in Theoretical Physics
Unitarity methods provide an efficient way of calculating 1-loop amplitudes for which Feynman diagram techniques are impracticable. Recently several approaches have been developed that apply these techniques to systematically generate amplitudes. The 'canonical basis' implementation of the unitarity method will be discussed in detail and illustrated using seven point QCD processes.
Presented by Warren PERKINS on 3 Nov 2008 at 14:00
Type: Parallel Talk Session: Computing Technology for Physics Research - Session 2
Track: 1. Computing Technology
In order to achieve both fast and coordinated data transfer to collaborative sites and a distribution of data over multiple sites, efficient data movement is one of the most essential aspects in a distributed environment. With such capabilities at hand, truly distributed task scheduling with minimal latencies would be reachable by internationally distributed collaborations (such as ... More
Presented by Mr. Michal ZEROLA on 4 Nov 2008 at 16:35
Session: Computing Technology for Physics Research
Track: 1. Computing Technology
Grid systems are used for calculations and data processing in various applied areas such as biomedicine, nanotechnology and materials science, cosmophysics and high energy physics, as well as in a number of industrial and commercial areas. However, one of the basic problems standing in the way of wide use of grid systems is related to the fact that applied jobs are, as a rule, developed for execution ... More
Presented by Alexander KRYUKOV on 5 Nov 2008 at 14:50
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
VISPA is a novel graphical development environment for physics analysis, following an experiment-independent approach. It introduces a new way of steering a physics data analysis, combining graphical and textual programming. The purpose is to speed up the design of an analysis, and to facilitate its control. As the software basis for VISPA the C++ toolkit Physics eXtension Library (PXL) is use ... More
Presented by Tatsiana KLIMKOVICH on 4 Nov 2008 at 14:00
Type: Poster Track: 2. Data Analysis
Visualization is one of the most powerful and direct ways in which the huge amount of information contained in multidimensional histograms can be conveyed in a form comprehensible to the human eye. With the increasing dimensionality of histograms (nuclear spectra), the requirements on multidimensional scalar visualization techniques become striking. In this contribution we present a hypervolume v ... More
Presented by Miroslav MORHAC
Type: Parallel Talk Session: Data Analysis - Algorithms and Tools
Track: 2. Data Analysis
Accelerator R&D environments produce data characterized by different levels of organization. Whereas some systems produce repetitively predictable and standardized structured data, others may produce data of unknown or changing structure. In addition, structured data, typically sets of numeric values, are frequently logically connected with unstructured content (e.g., images, graphs, comments). De ... More
Presented by Dr. Jerzy NOGIEC on 5 Nov 2008 at 15:15
Type: Parallel Talk Session: Computing Technology for Physics Research - Session 2
Track: 1. Computing Technology
One of the biggest challenges in LHC experiments at CERN is data management for data analysis. Event tags and iterative looping over datasets for physics analysis require many file opens per second and (mainly forward) seeking access. Analyses will typically access large datasets reading terabytes in a single iteration. A large user community requires policies for space management and a highly pe ... More
Presented by Mr. Andreas Joachim PETERS on 4 Nov 2008 at 17:25
Type: Parallel Talk Session: Computing Technology for Physics Research
Track: 1. Computing Technology
g-Eclipse is both a user friendly graphical user interface and a programming framework for accessing Grid and Cloud infrastructures. Based on the extension mechanism of the well known Eclipse platform, it provides a middleware independent core implementation including standardized user interface components. Based on these components, implementations for any available Grid and Cloud middleware can ... More
Presented by Dr. Ariel GARCIA on 3 Nov 2008 at 14:00