# ACAT 2010

22-27 February 2010
Jaipur, India
## Contribution List (99 contributions)
Type: Parallel Talk Session: Friday, 26 February - Computing Technology for Physics Research
Track: Computing Technology for Physics Research
In recent years a new type of database has emerged in the computing landscape. These "NoSQL" databases tend to originate from large internet companies that have to serve simple data structures to millions of customers daily. The databases specialise for certain use cases or data structures and run on commodity hardware, as opposed to large traditional database clusters. In this paper we discuss ... More
Presented by Andrew MELO on 26 Feb 2010 at 16:10
Type: Poster Session: Tuesday, 23 February - Poster Session
Track: Computing Technology for Physics Research
An unprecedented amount of data will soon come out of CERN’s Large Hadron Collider (LHC). Large user communities will immediately demand data access for physics analysis. Despite the Grid and the distributed infrastructure allowing geographically distributed data mining and analysis, there will be an important concentration of user analysis activities where the data resides, nullifying, to some ... More
Presented by Fabrizio FURANO on 27 Feb 2010 at 11:20
Type: Poster Session: Tuesday, 23 February - Poster Session
Track: Data Analysis - Algorithms and Tools
The ATLAS experiment at the Large Hadron Collider is expected to start colliding proton beams in September 2009. The enormous amount of data produced (~1 PB per year) poses a great challenge to ATLAS computing. ATLAS will search for the Higgs boson and physics beyond the Standard Model. In order to meet this challenge, a suite of common Physics Analysis Tools (PAT) has been developed as pa ... More
Presented by Roger JONES
Type: Parallel Talk Session: Thursday, 25 February - Data Analysis - Algorithms and Tools
Track: Data Analysis - Algorithms and Tools
The ATLAS online filtering (trigger) system comprises three sequential filtering levels and uses information from the three subdetectors (calorimeters, muon system and tracking). The electron/jet channel is very important for trigger performance, as interesting signatures (Higgs, SUSY, etc.) may be found efficiently through decays that produce electrons as final-state particles. Electron/ ... More
Presented by Mr. Eric LEITE on 25 Feb 2010 at 14:00
Type: Parallel Talk Session: Tuesday, 23 February - Data Analysis - Algorithms and Tools
Track: Data Analysis - Algorithms and Tools
This contribution discusses a novel approach to estimate the Standard Model backgrounds based on modifying Monte Carlo predictions within their systematic uncertainties. The improved background model is obtained by altering the original predictions with successively more complex correction functions in signal-free control selections. Statistical tests indicate when sufficient compatibility with da ... More
Presented by Mr. Stephan HORNER on 23 Feb 2010 at 16:10
Type: Poster Session: Tuesday, 23 February - Poster Session
Track: Computing Technology for Physics Research
At the time of this conference, the ALICE experiment will already have data from the LHC accelerator at CERN. ALICE uses AliEn to distribute and analyze all this data among the more than eighty sites that participate in the collaboration. AliEn is a system that allows the use of distributed computing and storage resources all over the world. It hides the differences between the het ... More
Presented by Pablo SAIZ
Type: Poster Session: Tuesday, 23 February - Poster Session
Track: Data Analysis - Algorithms and Tools
CERN’s Large Hadron Collider (LHC) is the world’s largest particle accelerator. It will make two proton beams collide at an unprecedented centre-of-mass energy of 14 TeV. ATLAS is a general purpose detector which will record the products of the LHC proton-proton collisions. At the inner radii, the detector is equipped with a charged-particle tracking system built on two technologies: silicon a ... More
Presented by Dr. Stephen HAYWOOD
Type: Poster Session: Tuesday, 23 February - Poster Session
Track: Computing Technology for Physics Research
Caching in a data grid has great benefits, because it makes data objects available faster and closer to where they are needed, decreasing their retrieval time. One of the challenging tasks in designing a cache is designing its replacement policy. The replacement policy decides which set of files is to be evicted to accommodate the newly arrived file in the cache, and also whether a newly arriv ... More
Presented by J. P. ACHARA
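As a concrete illustration of what a replacement policy does (a textbook example only, not the policy proposed in this contribution), the sketch below implements the classic least-recently-used (LRU) rule in C++: on every access a file moves to the front of a recency list, and when the cache is full the file at the back is evicted.

```cpp
#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>

// Toy LRU replacement policy (illustrative only).
class LruCache {
    std::size_t capacity_;
    std::list<std::string> order_;  // front = most recently used
    std::unordered_map<std::string, std::list<std::string>::iterator> pos_;
public:
    explicit LruCache(std::size_t capacity) : capacity_(capacity) {}

    // Record an access to `file`, evicting the LRU entry if needed.
    void access(const std::string& file) {
        auto it = pos_.find(file);
        if (it != pos_.end()) {                  // cache hit: refresh recency
            order_.erase(it->second);
        } else if (order_.size() == capacity_) { // miss on a full cache
            pos_.erase(order_.back());           // evict least recently used
            order_.pop_back();
        }
        order_.push_front(file);
        pos_[file] = order_.begin();
    }
};
```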
Type: Parallel Talk Session: Tuesday, 23 February - Data Analysis - Algorithms and Tools
Track: Data Analysis - Algorithms and Tools
Reliable analysis of any experimental data is always difficult due to the presence of noise and other types of errors. This paper analyzes data obtained from photoluminescence measurements, after the annealing of interdiffused Quantum Well Heterostructures, with a recently proposed Real-coded Quantum-inspired Evolutionary Algorithm (RQiEA). The proposed algorithm directly measures interdiffusion param ... More
Presented by Mr. Ashish MANI on 23 Feb 2010 at 16:35
Type: Plenary Session: Tuesday, 23 February - Plenary Session
Track: Data Analysis - Algorithms and Tools
The MAGIC-5 Project focuses on the development of analysis algorithms for the automated detection of anomalies in medical images, compatible with the use in a distributed environment. Presently, two main research subjects are being addressed: the detection of nodules in low-dose high-resolution lung computed tomographies and the analysis of brain MRIs for the segmentation and classification of th ... More
Presented by Dr. Piergiorgio CERELLO on 23 Feb 2010 at 10:40
Type: Plenary Session: Friday, 26 February - Plenary Session
Track: Methodology of Computations in Theoretical Physics
Many-core accelerators are developing so fast that they attract researchers who are always demanding faster computers. Since many-core accelerators such as graphics processing units (GPUs) are nothing but parallel computers, we need to modify an existing application program with specific optimizations (mostly parallelization) for a given accelerator. In this paper, we des ... More
Presented by Naohito NAKASATO on 26 Feb 2010 at 10:40
Type: Plenary Session: Friday, 26 February - Plenary Session
Track: Computing Technology for Physics Research
Presented by Dr. Mohammad AL-TURANY on 26 Feb 2010 at 09:40
Type: Parallel Talk Session: Tuesday, 23 February - Methodology of Computations in Theoretical Physics
Track: Methodology of Computations in Theoretical Physics
The problem of an efficient and automated computation of scattering amplitudes at the one-loop level for processes with more than 4 particles is crucial for the analysis of the LHC data. In this presentation I will review the main features of a powerful new approach for the reduction of one-loop amplitudes that operates at the integrand level. The method, also known as OPP reduction, is an import ... More
Presented by Giovanni OSSOLA on 23 Feb 2010 at 16:30
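For orientation, the OPP method named above decomposes the numerator of a one-loop amplitude at the integrand level; schematically (the standard form from the literature, with $D_h$ the loop propagator denominators; conventions may differ from the speaker's):

```latex
N(q) = \sum_{i<j<k<l} \big[ d_{ijkl} + \tilde{d}_{ijkl}(q) \big] \prod_{h \neq i,j,k,l} D_h
     + \sum_{i<j<k}   \big[ c_{ijk}  + \tilde{c}_{ijk}(q)  \big] \prod_{h \neq i,j,k} D_h
     + \sum_{i<j}     \big[ b_{ij}   + \tilde{b}_{ij}(q)   \big] \prod_{h \neq i,j} D_h
     + \sum_{i}       \big[ a_{i}    + \tilde{a}_{i}(q)    \big] \prod_{h \neq i} D_h
```

The tilde terms are spurious and vanish upon integration, leaving the coefficients $d, c, b, a$ of the known scalar box, triangle, bubble and tadpole integrals.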
Type: Poster Session: Tuesday, 23 February - Poster Session
Track: Data Analysis - Algorithms and Tools
Particle beams are now circulating in the world’s most powerful particle accelerator, the LHC at CERN, and the experiments are ready to record data from beam. Data from first collisions will be crucial for sub-detector commissioning, making alignment and calibration high-priority activities. Executing the alignment and calibration workflow represents a complex and time-consuming task, with intricate d ... More
Presented by Mr. Hassen RIAHI
Type: Plenary Session: Wednesday, 24 February - Plenary Session
Track: Methodology of Computations in Theoretical Physics
In recent years, much progress has been made in the computation of one-loop virtual matrix elements for processes involving many external particles. In this talk I will show the importance of NLO-accuracy computations for phenomenologically relevant processes and review the recent progress that will make their automated computation tractable and their inclusion in Monte Carlo tools possible.
Presented by Dr. Daniel MAITRE on 24 Feb 2010 at 10:40
Type: Parallel Talk Session: Tuesday, 23 February - Computing Technology for Physics Research
Track: Computing Technology for Physics Research
The BNL facility, supporting the RHIC experiments as their Tier0 center and thereafter ATLAS/LHC as a Tier1 center, had to address early on the issue of efficient access to data stored in Mass Storage. Random access destroys tape performance by causing overly frequent, high-latency and time-consuming tape mounts and dismounts. Coupled with a high job throughput from multiple RHIC experiments, in t ... More
Presented by Mr. David YU on 23 Feb 2010 at 15:15
Type: Parallel Talk Session: Thursday, 25 February - Computing Technology for Physics Research
Track: Computing Technology for Physics Research
Unprecedented data challenges, both in terms of peta-scale volume and concurrent distributed computing, have arisen with the rise of statistically driven experiments such as those of the high-energy and nuclear physics community. Distributed computing strategies, heavily relying on the presence of data at the proper place and time, have further raised demands for coordination of d ... More
Presented by Mr. Michal ZEROLA on 25 Feb 2010 at 14:25
Type: Parallel Talk Session: Thursday, 25 February - Methodology of Computations in Theoretical Physics
Track: Methodology of Computations in Theoretical Physics
Processes with more than 5 legs have been on experimentalists' wish lists for a long time now. This study targets the NLO QCD corrections to such processes at the LHC. Many Feynman diagrams contribute, including those with five- and six-point functions. A Fortran code for the numerical calculation of one-loop corrections for the process $gg\rightarrow t \bar{t}+gg$ is reviewed. ... More
Presented by Dr. Theodoros DIAKONIDIS on 25 Feb 2010 at 14:00
Type: Parallel Talk Session: Tuesday, 23 February - Data Analysis - Algorithms and Tools
Track: Data Analysis - Algorithms and Tools
Imbalanced data sets containing much more background than signal instances are very common in particle physics, and will also be characteristic for the upcoming analyses of LHC data. Following up the work presented at ACAT 2008, we use the multivariate technique presented there (a rule growing algorithm with the meta-methods bagging and instance weighting) on much more imbalanced data sets, especi ... More
Presented by Markward BRITSCH on 23 Feb 2010 at 14:25
Type: Plenary Session: Afternoon session
Track: Computing Technology for Physics Research
Presented by Ian FOSTER on 22 Feb 2010 at 16:30
Session: ACAT 2010 Summary
Track: Computing Technology for Physics Research
Presented by Axel NAUMANN on 27 Feb 2010 at 09:00
Type: Parallel Talk Session: Tuesday, 23 February - Computing Technology for Physics Research
Track: Computing Technology for Physics Research
The Belle II experiment, a next-generation B factory experiment at KEK, is expected to record a data volume two orders of magnitude larger than its predecessor, the Belle experiment. The data size and rate are comparable to or larger than those of the LHC experiments, which requires changing the computing model from the Belle approach, where basically all computing resources were provided by KEK, to a more ... More
Presented by Takanori HARA on 23 Feb 2010 at 14:50
Type: Parallel Talk Session: Thursday, 25 February - Computing Technology for Physics Research
Track: Computing Technology for Physics Research
Dynamic virtual organization clusters with user-supplied virtual machines (VMs) have advantages over generic environments. These advantages include the user's a priori knowledge of the scientific tools and libraries available to programs executing in the virtualized environment, as well as other details of the environment. The user can also perform small-scale testing local ... More
Presented by Dr. Jerome LAURET on 25 Feb 2010 at 15:15
Session: ACAT 2010 Summary
Track: Data Analysis - Algorithms and Tools
Presented by Dr. Liliana TEODORESCU on 27 Feb 2010 at 09:40
Presented by Dr. Rene BRUN, Andrew HANUSHEVSKY, Tony CASS, Beob Kyun KIM, Alberto PACE on 26 Feb 2010 at 17:30
Type: Plenary Session: Thursday, 25 February - Plenary Session
Track: Computing Technology for Physics Research
Scheduling data transfers is frequently realized using heuristic approaches. This is justifiable for on-line systems where an extremely fast response is required; however, when sending large amounts of data, such as transferring large files or streaming video, it is worthwhile to do real optimization. This paper describes formal models for various networking problems, with a focus on data networks. In ... More
Presented by Prof. Roman BARTAK on 25 Feb 2010 at 09:00
Type: Plenary Session: Wednesday, 24 February - Plenary Session
Track: Computing Technology for Physics Research
In this talk we pragmatically address some general aspects of massive data access in the HEP environment, focusing on the relationships among the characteristics of the available technologies and the data access strategies that they consequently make possible. Moreover, the upcoming evolution in the computing performance available even at the personal level will likely pose new ch ... More
Presented by Dr. Fabrizio FURANO on 24 Feb 2010 at 09:00
Type: Poster Session: Tuesday, 23 February - Poster Session
Track: Computing Technology for Physics Research
The configuration of the CMS Pixel detector consists of a complex set of data that uniquely defines its startup condition. Since several of these conditions are used both to calibrate the detector over time and to properly initialize it for a physics run, all these data have been collected in a suitably designed database for historical archival and retrieval. In this talk we present a description o ... More
Presented by Dr. Marco ROVERE
Type: Parallel Talk Session: Tuesday, 23 February - Methodology of Computations in Theoretical Physics
Track: Methodology of Computations in Theoretical Physics
We provide a fully numerical, deterministic integration at the level of the three- and four-point functions, in the reduction of the one-loop hexagon integral by sector decomposition. For the corresponding two- and three-dimensional integrals we use an adaptive numerical approach applied recursively in two and three dimensions, respectively. The adaptive integration is coupled with an ext ... More
Presented by Prof. Elise DE DONCKER on 23 Feb 2010 at 15:00
Type: Parallel Talk Session: Thursday, 25 February - Computing Technology for Physics Research
Track: Computing Technology for Physics Research
Real-time data analysis at next-generation experiments is a challenge because of their enormous data rates and sizes. The Belle II experiment, the upgraded Belle experiment, must manage a data volume O(100) times the current Belle data size, collected at more than 30 kHz. Sophisticated data analysis is required for efficient data reduction in the high-level trigger farm, in ad ... More
Presented by Mr. Sogo MINEO on 25 Feb 2010 at 14:50
Type: Parallel Talk Session: Tuesday, 23 February - Computing Technology for Physics Research
Track: Computing Technology for Physics Research
EU-IndiaGrid2 - Sustainable e-infrastructures across Europe and India capitalises on the achievements of the FP6 EU-IndiaGrid project and on huge infrastructural developments in India. EU-IndiaGrid2 will act as a bridge across European and Indian e-Infrastructures, leveraging the expertise obtained by partners during the EU-IndiaGrid project. EU-IndiaGrid2 will further the continuous e-Infrastruc ... More
Presented by Dr. Alberto MASONI on 23 Feb 2010 at 14:00
Type: Parallel Talk Session: Thursday, 25 February - Data Analysis - Algorithms and Tools
Track: Data Analysis - Algorithms and Tools
Particle trajectory recognition is an important and challenging task in the Compressed Baryonic Matter (CBM) experiment at the future FAIR accelerator at Darmstadt. The tracking algorithms have to process terabytes of input data produced in particle collisions. Therefore, the speed of the tracking software is extremely important for data analysis. In this contribution, a fast parallel track reconst ... More
Presented by Mr. Andrey LEBEDEV on 25 Feb 2010 at 16:10
Type: Parallel Talk Session: Friday, 26 February - Data Analysis - Algorithms and Tools
Track: Data Analysis - Algorithms and Tools
Monte Carlo simulation of the detector response is an inevitable part of any kind of analysis which is performed with data from the LHC experiments. These simulated data sets are needed with large statistics and high precision level, which makes their production a CPU-cost intensive task. ATLAS has thus concentrated on optimizing both full and fast detector simulation techniques to achieve this go ... More
Presented by Sebastian FLEISCHMANN on 26 Feb 2010 at 16:35
Type: Parallel Talk Session: Friday, 26 February - Data Analysis - Algorithms and Tools
Track: Data Analysis - Algorithms and Tools
The Compressed Baryonic Matter (CBM) experiment at the future FAIR facility at Darmstadt will measure dileptons emitted from the hot and dense phase in heavy-ion collisions. In case of an electron measurement, a high purity of identified electrons is required in order to suppress the background. Electron identification in CBM will be performed by a Ring Imaging Cherenkov (RICH) detector and Transi ... More
Presented by Semen LEBEDEV on 26 Feb 2010 at 14:25
Type: Parallel Talk Session: Friday, 26 February - Methodology of Computations in Theoretical Physics
Track: Methodology of Computations in Theoretical Physics
Sector decomposition is, in its practical aspect, a constructive method used to evaluate Feynman integrals numerically. We present a new program that performs the sector decomposition and integrates the resulting expression. The program can also be used to expand Feynman integrals automatically in limits of momenta and masses, with the use of sector decompositions and Mellin--Barnes repr ... More
Presented by Mikhail TENTYUKOV on 26 Feb 2010 at 14:30
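To illustrate the basic step of sector decomposition (a textbook example, not necessarily this program's internal convention): splitting the unit square into the sectors $x>y$ and $y>x$ and rescaling the smaller variable remaps an overlapping singularity at the origin onto the coordinate axes:

```latex
\int_0^1 dx \int_0^1 dy \, f(x,y)
  = \int_0^1 dx \int_0^1 dt \; x \, f(x, x t)
  + \int_0^1 dy \int_0^1 dt \; y \, f(y t, y)
```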
Type: Parallel Talk Session: Friday, 26 February - Data Analysis - Algorithms and Tools
Track: Data Analysis - Algorithms and Tools
Hadronic final states in hadron-hadron collisions are often studied by clustering final state hadrons into jets, each jet approximately corresponding to a hard parton. The typical jet size in a high energy hadron collision is between 0.4 and 1.0 in eta-phi. On the other hand, there may be structures of interest in an event that are of a different scale to the jet size. For example, to a first a ... More
Presented by Dr. James William MONK on 26 Feb 2010 at 14:00
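For reference, the jet sizes quoted above are measured with the standard angular distance in the $\eta$-$\phi$ plane,

```latex
\Delta R = \sqrt{(\Delta\eta)^2 + (\Delta\phi)^2}
```

so a jet of size 0.4 to 1.0 gathers constituents within roughly that $\Delta R$ of its axis.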
Type: Poster Session: Tuesday, 23 February - Poster Session
Track: Data Analysis - Algorithms and Tools
The HEPDATA repository is a venerable collection of major HEP results from more than 35 years of particle physics activity. Historically accessed by teletype and remote terminal, the primary interaction mode has for many years been via the HEPDATA website, hosted in Durham, UK. The viability of this system has been limited by a set of legacy software choices, in particular a hierarchical database ... More
Presented by Dr. Andy BUCKLEY on 22 Feb 2010 at 15:30
Type: Parallel Talk Session: Thursday, 25 February - Data Analysis - Algorithms and Tools
Track: Data Analysis - Algorithms and Tools
The GlueX experiment will gather data at up to 3GB/s into a level-3 trigger farm, a rate unprecedented at Jefferson Lab. Monitoring will be done using the cMsg publish/subscribe system to transport ROOT objects over the network using the newly developed RootSpy package. RootSpy can be attached as a plugin to any monitoring program to "publish" its objects on the network without modification ... More
Presented by Dr. David LAWRENCE on 25 Feb 2010 at 14:25
Type: Plenary Session: Afternoon session
Track: Computing Technology for Physics Research
The ROOT system is now widely used in HEP, Nuclear Physics and many other fields. It is becoming a mature system and the software backbone for most experiments, ranging from data acquisition and controls through simulation and reconstruction to, of course, data analysis. The talk will review the history of its conception at a time when HEP was moving from the Fortran era to C++. While the original t ... More
on 22 Feb 2010 at 17:30
Type: Poster Session: Tuesday, 23 February - Poster Session
Track: Data Analysis - Algorithms and Tools
The ALICE Core Computing Project conducted an inquiry in the fall of 2009 into the social connections between the Computing Centres of the ALICE Distributed Computing infrastructure. This inquiry was based on social network analysis, a scientific method dedicated to the understanding of complex relational structures linking human beings. The paper provides innovative insights into various relational dime ... More
Presented by Dr. Federico CARMINATI
Type: Plenary Session: Thursday, 25 February - Plenary Session
Track: Computing Technology for Physics Research
Something strange has been happening in the slowly evolving, placid world of high performance computing. Software and hardware vendors have been introducing new programming models at a breakneck pace. At first blush, the proliferation of parallel programming models might seem confusing to software developers, but is it really surprising? In fact, programming models have been rapidly evolving for ... More
Presented by Dr. Anwar GHULOUM on 25 Feb 2010 at 09:40
Type: Parallel Talk Session: Friday, 26 February - Methodology of Computations in Theoretical Physics
Track: Methodology of Computations in Theoretical Physics
To compute jet cross sections at higher orders in QCD efficiently, one has to deal with infrared divergences. These divergences cancel between virtual and real corrections once the phase-space integrals are performed. To use standard numerical integration methods like Monte Carlo, the cancellation of the divergences must be performed explicitly. Usually this is done by constructing appropriate counterte ... More
Presented by Dr. Paolo BOLZONI on 26 Feb 2010 at 14:00
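Schematically, the generic subtraction construction referred to above (not necessarily the authors' specific scheme) renders both phase-space integrals separately finite:

```latex
\sigma^{\mathrm{NLO}}
  = \int_{m+1} \big[ d\sigma^{\mathrm{R}} - d\sigma^{\mathrm{A}} \big]
  + \int_{m}   \Big[ d\sigma^{\mathrm{V}} + \int_1 d\sigma^{\mathrm{A}} \Big]
```

where the counterterm $d\sigma^{\mathrm{A}}$ matches the infrared behaviour of the real correction $d\sigma^{\mathrm{R}}$ and is integrated analytically over the one-particle subspace.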
Presented by Ian FOSTER on 23 Feb 2010 at 19:00
Type: Parallel Talk Session: Thursday, 25 February - Computing Technology for Physics Research
Track: Computing Technology for Physics Research
By the time of this conference the LHC ALICE experiment at CERN will have collected a significant amount of data. To process the data that will be produced during the life time of the LHC, ALICE has developed over the last years a distributed computing model across more than 90 sites that build on the overall WLCG (World-wide LHC Computing Grid) service. ALICE implements the different Grid service ... More
Presented by Fabrizio FURANO on 25 Feb 2010 at 16:10
Session: Student Session
Presented by Prof. Lawrence PINSKY on 22 Feb 2010 at 12:10
Type: Poster Session: Tuesday, 23 February - Poster Session
Track: Computing Technology for Physics Research
For the intensive offline computation and storage needs of the LHC, the Grid has become a necessary tool. The grid software is called middleware and comes in different flavors. The ALICE experiment has developed AliEn, while ARC has been developed in the Nordic countries. The Nordic community has pledged a large amount of resources to the LHC, distributed over four countries, where the job management shou ... More
Presented by Philippe GROS
Type: Plenary Session: Tuesday, 23 February - Plenary Session
Track: Computing Technology for Physics Research
Using virtualization technology, the entire application environment of an LHC experiment, including its Linux operating system and the experiment's code, libraries and support utilities, can be incorporated into a virtual image and executed under suitable hypervisors installed on a choice of target host platforms. The Virtualization R&D project at CERN is developing CernVM, a virtual machine d ... More
Presented by Dr. Ben SEGAL on 23 Feb 2010 at 09:40
Type: Plenary Session: Thursday, 25 February - Plenary Session
Track: Methodology of Computations in Theoretical Physics
The formulation of QCD on a 4-dimensional euclidean space-time lattice is given. We describe how, with particular implementations of the lattice Dirac operator, the lattice artefacts can be reduced from a linear to a quadratic behaviour in the lattice spacing, therefore allowing the continuum limit to be reached faster. We give an account of the algorithmic aspects of the simulations, discuss the superco ... More
Presented by Dr. Karl JANSEN on 25 Feb 2010 at 11:20
Type: Parallel Talk Session: Tuesday, 23 February - Data Analysis - Algorithms and Tools
Track: Data Analysis - Algorithms and Tools
We present a new technique for accurate energy measurement of hadronically decaying tau leptons. The technique was developed and tested at the CDF experiment at the Tevatron. It employs a particle-flow algorithm complemented with a likelihood-based method for separating the contributions of overlapping energy depositions of spatially close particles. In addition to superior energy resolution ... More
Presented by Andrey ELAGIN on 23 Feb 2010 at 14:00
Session: ACAT 2010 Summary
Track: Methodology of Computations in Theoretical Physics
Presented by Peter UWER on 27 Feb 2010 at 10:40
Type: Poster Session: Tuesday, 23 February - Poster Session
Track: Computing Technology for Physics Research
Up-to-date information about a software project helps to find problems as early as possible. This includes, for example, whether a software project can be built on all supported platforms without errors, or whether the specified tests can be executed and deliver the correct results. We will present the scheme which is used within the FairRoot framework to continuously monitor the status of the proj ... More
Presented by Dr. Florian UHLIG
Session: Student Session
Presented by Dr. Alfio LAZZARO on 22 Feb 2010 at 10:50
The multicore panel will review recent activities in the multicore/manycore arena. It will consist of four people kicking off the session with short presentations, but it will mainly rely on good interaction with the audience: Mohammad Al-Turany (GSI/IT), Anwar Ghuloum (Intel Labs), Sverre Jarp (CERN/IT), Alfio Lazzaro (CERN/IT)
Presented by Dr. Mohammad AL-TURANY, Mr. Sverre JARP, Dr. Alfio LAZZARO, Anwar GHULOUM on 25 Feb 2010 at 17:30
Type: Parallel Talk Session: Friday, 26 February - Methodology of Computations in Theoretical Physics
Track: Methodology of Computations in Theoretical Physics
The importance of multiple polylogarithms (MPLs) for the calculation of loop integrals has been pointed out by many authors. We give a general discussion of the relation between MPLs and multi-loop integrals from the viewpoint of computer algebra.
Presented by Yoshimasa KURIHARA on 26 Feb 2010 at 16:00
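For reference, one common definition of the multiple polylogarithms discussed above (conventions vary across the literature):

```latex
\mathrm{Li}_{s_1,\ldots,s_k}(z_1,\ldots,z_k)
  = \sum_{n_1 > n_2 > \cdots > n_k \ge 1}
    \frac{z_1^{n_1} \cdots z_k^{n_k}}{n_1^{s_1} \cdots n_k^{s_k}}
```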
Type: Parallel Talk Session: Thursday, 25 February - Methodology of Computations in Theoretical Physics
Track: Methodology of Computations in Theoretical Physics
Data analyses in hadron collider physics depend on background simulations performed by Monte Carlo (MC) event generators. However, calculational limitations and non-perturbative effects require approximate models with adjustable parameters. In fact, we need to simultaneously tune many phenomenological parameters in a high-dimensional parameter-space in order to make the MC generator predicti ... More
Presented by Dr. James MONK on 25 Feb 2010 at 16:00
Type: Plenary Session: Friday, 26 February - Plenary Session
Track: Methodology of Computations in Theoretical Physics
Presented by Dr. Fukuko YUASA on 26 Feb 2010 at 11:20
Type: Parallel Talk Session: Tuesday, 23 February - Data Analysis - Algorithms and Tools
Track: Data Analysis - Algorithms and Tools
The penetration of a meteor into Earth’s atmosphere results in the creation of an ionized trail, able to produce forward scattering of VHF electromagnetic waves. This fact inspired the RMS (Radio Meteor Scatter) technique, which consists of detecting meteors using passive radar. Considering the characteristic of continuous acquisition inherent to the radar detection technique and the genera ... More
Presented by Mr. Eric LEITE on 23 Feb 2010 at 15:15
Type: Parallel Talk Session: Friday, 26 February - Computing Technology for Physics Research
Track: Computing Technology for Physics Research
In a World Wide distributed system like the ALICE Environment (AliEn) Grid Services, the closeness of the data to the actual computational infrastructure denotes a substantial difference in terms of resources utilization efficiency. Applications unaware of the locality of the data or the status of the storage environment can waste network bandwidth in case of slow networks or fail accessing data ... More
Presented by Mr. Costin GRIGORAS on 26 Feb 2010 at 15:15
Type: Parallel Talk Session: Friday, 26 February - Computing Technology for Physics Research
Track: Computing Technology for Physics Research
CMS is a large, general-purpose experiment at the Large Hadron Collider (LHC) at CERN. For its simulation, triggering, data reconstruction and analysis needs, CMS collaborators have developed many millions of lines of C++ code, which are used to create applications run in computer centers around the world. Maximizing the performance and efficiency of the software is highly desirable in order ... More
Presented by Dr. Peter ELMER on 26 Feb 2010 at 14:50
Type: Parallel Talk Session: Friday, 26 February - Computing Technology for Physics Research
Track: Computing Technology for Physics Research
With PROOF, the Parallel ROOT Facility, being widely adopted for LHC data analysis, it becomes more and more important to understand the different parameters that can be tuned to make the system perform optimally. In this talk we will describe a number of "best practices" to get the most out of your PROOF system, based on feedback from several pilot setups. We will describe different cluster confi ... More
Presented by Dr. Fons RADEMAKERS on 26 Feb 2010 at 14:25
Type: Poster Session: Tuesday, 23 February - Poster Session
Track: Computing Technology for Physics Research
The Parallel ROOT Facility, PROOF, is an extension of ROOT enabling interactive analysis of large sets of ROOT files in parallel on clusters of computers or many-core machines. PROOF provides an alternative to the traditional batch-oriented exploitation of distributed computing resources. The PROOF dynamic approach allows for better adaptation to the varying and unpredictable work-load during the ... More
Presented by Gerardo GANIS
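A minimal sketch of the kind of interactive workflow PROOF enables, here using the single-machine PROOF-Lite variant; the tree name, file paths and selector below are hypothetical placeholders, not part of this contribution:

```cpp
// ROOT macro sketch (all names below are placeholders).
void runProof() {
   TProof::Open("lite://");          // start a local PROOF-Lite session
   TChain chain("Events");           // hypothetical tree name
   chain.Add("data/*.root");         // hypothetical input files
   chain.SetProof();                 // route Process() through PROOF workers
   chain.Process("MySelector.C+");   // hypothetical TSelector, compiled via ACLiC
}
```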
Type: Poster Session: Tuesday, 23 February - Poster Session
Track: Computing Technology for Physics Research
PROOF on Demand (PoD) is a set of utilities which allows starting a PROOF cluster at user request on any resource management system. It provides a plug-in based system for using different job submission frontends, such as LSF or gLite WMS. PoD is fully automated, and no special knowledge is required to start using it. The main components of PoD are pod-agent and pod-console. pod-agent provides the co ... More
Presented by Mr. Anar MANAFOV
Type: Parallel Talk Session: Friday, 26 February - Data Analysis - Algorithms and Tools
Track: Data Analysis - Algorithms and Tools
Future many-core CPU and GPU architectures require relevant changes in the traditional approach to data analysis. Massive hardware parallelism at the levels of cores, threads and vectors has to be adequately reflected in mathematical, numerical and programming optimization of the algorithms used for event reconstruction and analysis. An investigation of the Kalman filter, which is the core of the ... More
Presented by Dr. Ivan KISEL on 26 Feb 2010 at 16:10
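To make the parallelization idea concrete, here is a toy C++ sketch (not the authors' code) of a one-dimensional Kalman filter measurement update applied to many track candidates at once; the structure-of-arrays layout keeps the inner loop branch-free so the compiler can vectorize it:

```cpp
#include <cstdio>

constexpr int N = 1024;  // number of track candidates

// Structure-of-arrays layout: one contiguous array per state component.
struct TrackSoA {
    float x[N];  // state estimate per track
    float C[N];  // state variance per track
};

// One Kalman measurement update: m[i] = measurement, V = measurement variance.
void kalmanUpdate(TrackSoA& t, const float* m, float V) {
    for (int i = 0; i < N; ++i) {         // branch-free, vectorizable loop
        float K = t.C[i] / (t.C[i] + V);  // Kalman gain
        t.x[i] += K * (m[i] - t.x[i]);    // blend prediction with measurement
        t.C[i] *= (1.0f - K);             // shrink the variance
    }
}

int main() {
    static TrackSoA tracks;
    static float meas[N];
    for (int i = 0; i < N; ++i) { tracks.x[i] = 0.f; tracks.C[i] = 1.f; meas[i] = 1.f; }
    kalmanUpdate(tracks, meas, 0.1f);
    std::printf("track 0: x=%.3f C=%.3f\n", tracks.x[0], tracks.C[0]);
}
```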
Type: Parallel Talk Session: Tuesday, 23 February - Methodology of Computations in Theoretical Physics
Track: Methodology of Computations in Theoretical Physics
The symbolic manipulation program FORM is specialized to handle very large algebraic expressions. Some specific features of its internal structure make FORM very well suited for parallelization. We now have parallel versions of FORM: one is based on POSIX threads and is optimal for modern multicore computers, while another uses MPI and can be used to parallelize FORM on clusters and Mas ... More
Presented by Mikhail TENTYUKOV on 23 Feb 2010 at 14:30
Type: Poster Session: Tuesday, 23 February - Poster Session
Track: Computing Technology for Physics Research
The most fundamental task in the design and analysis of a nuclear reactor core is to find the neutron distribution as a function of space, direction, energy and possibly time. The most accurate description of the average behavior of neutrons is given by the linear form of the Boltzmann transport equation. Due to the massive number of unknowns, the solution of the transport ... More
Presented by Mr. Kislay BHATT, Dr. Anurag GUPTA
Type: Poster Session: Tuesday, 23 February - Poster Session
Track: Data Analysis - Algorithms and Tools
With the startup of the LHC experiments, the community will be focused on the analysis of the collected data. The complexity of these data analyses will be a key factor in finding any new phenomena. For this reason, many data analysis tools have been developed in recent years, allowing the use of different techniques, such as likelihood-based procedures, neural networks, boosted decision tre ... More
Presented by Dr. Alfio LAZZARO
Type: Parallel Talk Session: Thursday, 25 February - Data Analysis - Algorithms and Tools
Track: Data Analysis - Algorithms and Tools
A great portion of data mining in a high-energy detector experiment is spent in the complementary tasks of track finding and track fitting. These problems correspond, respectively, to associating a set of measurements to a single particle, and to determining the parameters of the track given a candidate path [Avery 1992]. These parameters usually correspond to the 5-tuple state of the model ... More
Presented by Dr. Michael D. MCCOOL on 25 Feb 2010 at 17:00
The reconstruction of charged tracks and interaction vertices is an important step in the data analysis chain of particle physics experiments. I give a survey of the most popular methods that have been employed in the past and are currently employed by the LHC experiments. Whereas pattern recognition methods are very diverse and rather detector dependent, fitting algorithms offer less variety and ... More
Presented by Dr. Rudolf FRÜHWIRTH on 23 Feb 2010 at 09:00
Poster list can be found here: [Poster list](http://indico.cern.ch/sessionDisplay.py?sessionId=15&tab=contribs&confId=59397)
on 23 Feb 2010 at 17:00
Type: Poster Session: Tuesday, 23 February - Poster Session
Track: Computing Technology for Physics Research
The analysis and visualisation of the LHC data is a good example of human interaction with petabytes of inhomogeneous data. A proposal is presented, addressing both physics analysis and information technology, to develop a novel distributed analysis infrastructure which is scalable to allow real-time random access to, and interaction with, petabytes of data. The proposed hardware basis is a network ... More
Presented by Markward BRITSCH
Type: Parallel Talk Session: Tuesday, 23 February - Methodology of Computations in Theoretical Physics
Track: Methodology of Computations in Theoretical Physics
A new reduction of tensorial one-loop Feynman integrals with massive and massless propagators to scalar functions is introduced. The method is recursive: n-point integrals of rank R are expressed by n-point and (n-1)-point integrals of rank (R-1). The algorithm is realized in a Fortran package.
Presented by Tord RIEMANN on 23 Feb 2010 at 16:00
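Schematically, the recursion described above lowers the tensor rank at each step until only known scalar one- to four-point functions remain:

```latex
I_n^{\mu_1 \cdots \mu_R}
  \;\longrightarrow\;
  \big\{\, I_n^{\mu_1 \cdots \mu_{R-1}},\; I_{n-1}^{\mu_1 \cdots \mu_{R-1}} \,\big\}
  \;\longrightarrow\; \cdots \;\longrightarrow\;
  \{\, I_1, I_2, I_3, I_4 \,\}
```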
Type: Parallel Talk Session: Tuesday, 23 February - Data Analysis - Algorithms and Tools
Track: Data Analysis - Algorithms and Tools
In a typical offline data analysis in high-energy physics, a large number of collision events are studied. For each event, the reconstruction software of the experiments stores a large number of measured event properties in sometimes complex data objects and formats. Usually this huge amount of initial data is reduced in several analysis steps, selecting a subset of interesting events and obser ... More
Presented by Dr. Attila KRASZNAHORKAY on 23 Feb 2010 at 14:50
Type: Plenary Session: Friday, 26 February - Plenary Session
Track: Computing Technology for Physics Research
In an era where high-throughput instruments and sensors are increasingly providing us faster access to new kinds of data, it is becoming very important to have timely access to resources which allow scientists to collaborate and share data, while maintaining the ability to process vast quantities of data or run large-scale simulations when required. Built on Amazon's vast global computing infrast ... More
Presented by Dr. Singh DEEPAK on 26 Feb 2010 at 09:00
Type: Parallel Talk Session: Friday, 26 February - Methodology of Computations in Theoretical Physics
Track: Methodology of Computations in Theoretical Physics
One of the powerful tools for evaluating multi-loop/leg integrals is sector decomposition, which can isolate infrared divergences from parametric representations of the integrals. The aim of this talk is to present a new method that replaces iterated sector decomposition: the problems are converted into a set of problems in convex geometry, which can then be solved by using algorithm ... More
Presented by Dr. Toshiaki KANEKO on 26 Feb 2010 at 15:00
Session: Student Session
Presented by Matevz TADEL on 22 Feb 2010 at 09:00
Session: Student Session
Presented by Dr. Federico CARMINATI on 22 Feb 2010 at 11:30
Session: Student Session
This lecture will present key statistical concepts and methods for multivariate data analysis and their applications in high energy physics. It will discuss the meaning of multivariate statistical analysis and its benefits, present methods of data preparation for applying multivariate analysis techniques, the generic problems addressed by these techniques, and a few classes of such technique ... More
Presented by Dr. Liliana TEODORESCU on 22 Feb 2010 at 09:40
Type: Plenary Session: Wednesday, 24 February - Plenary Session
Track: Data Analysis - Algorithms and Tools
The LHC was built as a discovery machine, whether for a Higgs boson or supersymmetry. In this review talk we will concentrate on the methods used in the HEP community to test hypotheses. We will cover, via a comparative study, the range from the LEP hybrid "CLs" method and the Bayesian Tevatron exclusion techniques to the LHC frequentist discovery techniques. We will explain how to read all the e ... More
Presented by Dr. Eilam GROSS on 24 Feb 2010 at 09:40
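For reference, the LEP-style hybrid criterion mentioned above excludes a signal hypothesis via the ratio of the two tail probabilities rather than via $CL_{s+b}$ alone:

```latex
CL_s \;=\; \frac{CL_{s+b}}{CL_b},
\qquad \text{signal excluded at the 95\% level if } CL_s < 0.05 .
```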
Type: Parallel Talk Session: Tuesday, 23 February - Methodology of Computations in Theoretical Physics
Track: Methodology of Computations in Theoretical Physics
Currently there is a lot of activity in the FORM project. Much progress has been made on making it open source. Work is being done on the simplification of lengthy formulas, and routines for dealing with rational polynomials are under construction. In addition, new models of parallelization are being studied to make optimal use of current multi-processor machines.
Presented by Dr. Irina PUSHKINA on 23 Feb 2010 at 14:00
Type: Poster Session: Tuesday, 23 February - Poster Session
Track: Computing Technology for Physics Research
Open source computing clusters for scientific purposes are growing in size, complexity and heterogeneity; often they are also included in some geographically distributed computing Grid. In this framework, the difficulty of assessing the overall efficiency, identifying the bottlenecks and tracking the failures of single components is increasing continuously. In previous works we have formalized and ... More
Presented by Dr. Leonello SERVOLI
Type: Parallel Talk Session: Thursday, 25 February - Data Analysis - Algorithms and Tools
Track: Data Analysis - Algorithms and Tools
At the dawn of LHC data taking, multivariate data analysis techniques have become the core of many physics analyses. TMVA provides easy access to sophisticated multivariate classifiers and is widely used to study and deploy these for data selection. Beyond classification, most multivariate methods in TMVA perform regression optimization which can be used to predict data corrections, e.g. for ... More
Presented by Dr. Joerg STELZER on 25 Feb 2010 at 15:15
Type: Parallel Talk Session: Tuesday, 23 February - Computing Technology for Physics Research
Track: Computing Technology for Physics Research
Most software libraries have coding rules. They are usually checked by a dedicated tool which is closed source, not free, and difficult to configure. With the advent of clang, part of the LLVM compiler project, an open source C++ compiler is in reach that allows coding rules to be checked by a production grade parser through its C++ API. An implementation for ROOT's coding convention will be prese ... More
Presented by Axel NAUMANN on 23 Feb 2010 at 14:25
Type: Parallel Talk Session: Thursday, 25 February - Computing Technology for Physics Research
Track: Computing Technology for Physics Research
ALICE (A Large Ion Collider Experiment) is the detector designed to study the physics of strongly interacting matter and the quark-gluon plasma in Heavy-Ion collisions at the CERN Large Hadron Collider (LHC). The online Data Quality Monitoring (DQM) is a critical element of the data acquisition's software chain. It intends to provide shifters with precise and complete information to quickly ide ... More
Presented by Mr. Barthelemy VON HALLER on 25 Feb 2010 at 14:00
Type: Parallel Talk Session: Thursday, 25 February - Methodology of Computations in Theoretical Physics
Track: Methodology of Computations in Theoretical Physics
The talk describes the recent additions to the automated Feynman diagram computation systems FeynArts, FormCalc, and LoopTools.
Presented by Thomas HAHN on 25 Feb 2010 at 15:30
Type: Parallel Talk Session: Thursday, 25 February - Data Analysis - Algorithms and Tools
Track: Data Analysis - Algorithms and Tools
RooStats is a project to create advanced statistical tools required for the analysis of LHC data, with emphasis on discoveries, confidence intervals, and combined measurements. The idea is to provide the major statistical techniques as a set of C++ classes with coherent interfaces, which can be used on arbitrary model and datasets in a common way. The classes are built on top of RooFit, which p ... More
Presented by Dr. Lorenzo MONETA, Dr. Gregory SCHOTT on 25 Feb 2010 at 16:35
Type: Poster Session: Tuesday, 23 February - Poster Session
Track: Data Analysis - Algorithms and Tools
C. Zampolli for the ALICE Collaboration. ALICE will collect data at a rate of 1.25 GB/s during heavy-ion runs, and of 100 MB/s during p-p data taking. In a standard data taking year, the expected total data volume is of the order of 2PB. This includes raw data, reconstructed data, and the conditions data needed for the calibration and the alignment of the ALICE detectors, on top of simula ... More
Presented by Dr. Chiara ZAMPOLLI
Type: Parallel Talk Session: Thursday, 25 February - Methodology of Computations in Theoretical Physics
Track: Methodology of Computations in Theoretical Physics
Tremendous progress has been made in the automation of one-loop (or virtual) contributions to next-to-leading order (NLO) calculations in QCD, using both the conventional Feynman diagram approach and unitarity-based techniques. To obtain rates and distributions for observables at particle colliders at NLO accuracy, the real-emission and subtraction terms also have to be included in the c ... More
Presented by Dr. Rikkert FREDERIX on 25 Feb 2010 at 14:30
Type: Plenary Session: Thursday, 25 February - Plenary Session
Track: Methodology of Computations in Theoretical Physics
Presented by Dr. Alexander PUKHOV on 25 Feb 2010 at 10:40
Type: Parallel Talk Session: Friday, 26 February - Computing Technology for Physics Research
Track: Computing Technology for Physics Research
The Grid approach provides uniform access to a set of geographically distributed heterogeneous resources and services, enabling projects that would be impossible without massive computing power. Different storage projects have been developed, and a few protocols are being used to interact with them, such as GsiFtp and SRM (Storage Resource Manager). Moreover, during the last few years different Grid ... More
Presented by Dr. Mattia CINQUILLI on 26 Feb 2010 at 14:00
Type: Parallel Talk Session: Friday, 26 February - Methodology of Computations in Theoretical Physics
Track: Methodology of Computations in Theoretical Physics
A comprehensive set of one-loop integrals in a theory with Wilson fermions at $r=1$ is computed using the Burgio--Caracciolo--Pelissetto algorithm. With the use of these results, the fermionic propagator in the coordinate representation is evaluated, making it possible to extend the Luscher-Weisz procedure for two-loop integrals to the fermionic case. Computations are performed with FORM and RE ... More
Presented by Dr. Roman ROGALYOV on 26 Feb 2010 at 16:30
Type: Parallel Talk Session: Friday, 26 February - Methodology of Computations in Theoretical Physics
Track: Methodology of Computations in Theoretical Physics
We consider pair production and decay of fundamental unstable particles in the framework of a modified perturbation theory (MPT) which treats resonant contributions of unstable particles in the sense of distributions. The cross-section of the process is calculated within the NNLO of the MPT in a model that admits exact solution. Universal massless-particles contributions are taken into considerati ... More
Presented by Dr. Maksim NEKRASOV on 26 Feb 2010 at 17:00
Type: Poster Session: Tuesday, 23 February - Poster Session
Track: Data Analysis - Algorithms and Tools
New physics searches such as SUSY with the CMS detector at the LHC will require a very fine scan of the parameter space over a large number of points. Accordingly, we need to address the problem of developing a very fast setup to generate and simulate large MC samples. We have explored the use of TurboSim as a fast, standalone setup for generating such samples. TurboSim does not int ... More
Presented by Anil P. SINGH
Type: Poster Session: Tuesday, 23 February - Poster Session
Track: Computing Technology for Physics Research
Distributed computer systems pose a new class of problems, due to increased heterogeneity both from the hardware and from the user-request point of view. One possible solution is to create on-demand virtual working environments tailored to the user’s requirements, hence the need to manage such environments dynamically. This work proposes a solution based on the use of Virtual Machines ... More
Presented by Dr. Leonello SERVOLI
Type: Parallel Talk Session: Friday, 26 February - Data Analysis - Algorithms and Tools
Track: Data Analysis - Algorithms and Tools
VISPA (Visual Physics Analysis) is a novel development environment to support physicists in prototyping, execution, and verification of data analysis of any complexity. The key idea of VISPA is developing physics analyses using a combination of graphical and textual programming. In VISPA, a multipurpose window provides visual tools to design and execute modular analyses, create analysis templates, ... More
Presented by Andreas HINZMANN on 26 Feb 2010 at 17:00
Type: Parallel Talk Session: Friday, 26 February - Data Analysis - Algorithms and Tools
Track: Data Analysis - Algorithms and Tools
A lot of code written for high-level data analysis shares many similar properties, e.g. reading the data of given input files, data selection, overlap removal of physical objects, calculation of basic physical quantities, and output of the analysis results. Because of this, all too often, when writing a new piece of code, one starts by copying and pasting from old code, then modifying it for spec ... More
Presented by Riccardo Maria BIANCHI on 26 Feb 2010 at 14:50
Type: Parallel Talk Session: Thursday, 25 February - Data Analysis - Algorithms and Tools
Track: Data Analysis - Algorithms and Tools
mc4qcd is a web-based collaboration tool for the analysis of Lattice QCD data. Lattice QCD computations consist of a large-scale Markov Chain Monte Carlo, with multiple measurements performed at each MC step. Our system acquires the data by uploading log files, parses them for measurement results, filters them, mines the data for the required information by aggregating results in multiple forms, represents ... More
Presented by Prof. Massimo DI PIERRO on 25 Feb 2010 at 14:50