22–27 Feb 2010
Jaipur, India
Europe/Zurich timezone

Contribution List

  1. Matevz Tadel (CERN)
    22/02/2010, 09:00
  2. Dr Liliana Teodorescu (Brunel University)
    22/02/2010, 09:40
    This lecture will present key statistical concepts and methods for multivariate data analysis and their applications in high energy physics. It will discuss the meaning of multivariate statistical analysis and its benefits, will present methods for preparing data for multivariate analysis techniques, the generic problems addressed by these techniques and a few classes of such...
  3. Dr Alfio Lazzaro (Universita degli Studi di Milano & INFN, Milano)
    22/02/2010, 10:50
  4. Dr Federico Carminati (CERN)
    22/02/2010, 11:30
  5. Prof. Lawrence Pinsky (UNIVERSITY OF HOUSTON)
    22/02/2010, 12:10
  6. 22/02/2010, 14:30
  7. 22/02/2010, 15:30
  8. Ian Foster (Unknown)
    22/02/2010, 16:30
    Computing Technology for Physics Research
    Plenary
  9. 22/02/2010, 17:30
    Computing Technology for Physics Research
    Plenary
    The ROOT system is now widely used in HEP, Nuclear Physics and many other fields. It is becoming a mature system and the software backbone for most experiments, ranging from data acquisition and controls to simulation, reconstruction and of course data analysis. The talk will review the history of its conception at a time when HEP was moving from the Fortran era to C++. While the...
  10. Dr Rudolf Frühwirth (Institute of High Energy Physics, Vienna)
    23/02/2010, 09:00
    The reconstruction of charged tracks and interaction vertices is an important step in the data analysis chain of particle physics experiments. I give a survey of the most popular methods that have been employed in the past and are currently employed by the LHC experiments. Whereas pattern recognition methods are very diverse and rather detector dependent, fitting algorithms offer less variety...
  11. Dr Ben Segal (CERN)
    23/02/2010, 09:40
    Computing Technology for Physics Research
    Plenary
    Using virtualization technology, the entire application environment of an LHC experiment, including its Linux operating system and the experiment's code, libraries and support utilities, can be incorporated into a virtual image and executed under suitable hypervisors installed on a choice of target host platforms. The Virtualization R&D project at CERN is developing CernVM, a virtual...
  12. Dr Piergiorgio Cerello (INFN - TORINO)
    23/02/2010, 10:40
    Data Analysis - Algorithms and Tools
    Plenary
    The MAGIC-5 Project focuses on the development of analysis algorithms for the automated detection of anomalies in medical images, compatible with the use in a distributed environment. Presently, two main research subjects are being addressed: the detection of nodules in low-dose high-resolution lung computed tomographies and the analysis of brain MRIs for the segmentation and classification...
  13. Dr Alberto Masoni (INFN - Cagliari)
    23/02/2010, 14:00
    Computing Technology for Physics Research
    Parallel Talk
    EU-IndiaGrid2 - Sustainable e-infrastructures across Europe and India - capitalises on the achievements of the FP6 EU-IndiaGrid project and huge infrastructural developments in India. EU-IndiaGrid2 will act as a bridge across European and Indian e-Infrastructures, leveraging the expertise obtained by partners during the EU-IndiaGrid project. EU-IndiaGrid2 will further the continuous...
  14. Andrey Elagin (Texas A&M University (TAMU))
    23/02/2010, 14:00
    Data Analysis - Algorithms and Tools
    Parallel Talk
    We present a new technique for accurate energy measurement of hadronically decaying tau leptons. The technique was developed and tested at the CDF experiment at the Tevatron. It employs a particle flow algorithm complemented with a likelihood-based method for separating contributions of overlapping energy depositions of spatially close particles. In addition to superior energy...
  15. Dr Irina Pushkina (NIKHEF)
    23/02/2010, 14:00
    Methodology of Computations in Theoretical Physics
    Parallel Talk
    Currently there is a lot of activity in the FORM project. Much progress has been made on making it open source. Work is being done on the simplification of lengthy formulas, and routines for dealing with rational polynomials are under construction. In addition, new models of parallelization are being studied to make optimal use of current multi-processor machines.
  16. Markward Britsch (Max-Planck-Institut fuer Kernphysik (MPI))
    23/02/2010, 14:25
    Data Analysis - Algorithms and Tools
    Parallel Talk
    Imbalanced data sets containing much more background than signal instances are very common in particle physics, and will also be characteristic for the upcoming analyses of LHC data. Following up the work presented at ACAT 2008, we use the multivariate technique presented there (a rule growing algorithm with the meta-methods bagging and instance weighting) on much more imbalanced data sets,...
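One common way to handle such imbalance, instance weighting by inverse class frequency, can be sketched as follows (a generic illustration, not the rule-growing algorithm of the talk; the labels and counts are made up):

```python
from collections import Counter

def class_weights(labels):
    """Weight each class inversely to its frequency, so that every
    class contributes the same total weight to the training objective."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}

# 1 signal instance per 9 background instances:
labels = ["sig"] + ["bkg"] * 9
w = class_weights(labels)
print(w)  # each signal instance carries 9x the weight of a background one

# the total weight per class is now balanced:
totals = {c: w[c] * m for c, m in Counter(labels).items()}
print(totals)
```

With these weights, a misclassified signal instance costs the learner as much as nine misclassified background instances, which counteracts the imbalance.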
  17. Axel Naumann (CERN)
    23/02/2010, 14:25
    Computing Technology for Physics Research
    Parallel Talk
    Most software libraries have coding rules. They are usually checked by a dedicated tool which is closed source, not free, and difficult to configure. With the advent of clang, part of the LLVM compiler project, an open source C++ compiler is in reach that allows coding rules to be checked by a production grade parser through its C++ API. An implementation for ROOT's coding convention will be...
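As a toy illustration of automated coding-rule checking (plain regex on identifier strings, not the clang C++ API the talk describes; the two rules sketched here follow ROOT's published naming conventions):

```python
import re

# Two of ROOT's naming conventions expressed as regex rules.
# A production checker would use a real C++ parser such as clang;
# this sketch only illustrates the rule-checking idea.
RULES = {
    "class":  re.compile(r"^T[A-Z0-9][A-Za-z0-9]*$"),   # class names start with 'T'
    "member": re.compile(r"^f[A-Z][A-Za-z0-9]*$"),      # data members start with 'f'
}

def check(kind, name):
    """Return True if `name` satisfies the convention for `kind`."""
    return bool(RULES[kind].match(name))

violations = [(k, n)
              for k, n in [("class", "TH1"), ("class", "Histogram"),
                           ("member", "fTitle"), ("member", "title")]
              if not check(k, n)]
print(violations)  # the two identifiers that break the convention
```

The advantage of a compiler-based checker over such regexes is that it sees the parsed declarations, so it knows which identifier is a class and which is a data member instead of being told.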
  18. Mikhail Tentyukov (Karlsruhe University)
    23/02/2010, 14:30
    Methodology of Computations in Theoretical Physics
    Parallel Talk
    The symbolic manipulation program FORM is specialized to handle very large algebraic expressions. Some specific features of its internal structure make FORM very well suited for parallelization. We have now parallel versions of FORM, one is based on POSIX threads and is optimal for modern multicore computers while another one uses MPI and can be used to parallelize FORM on clusters and...
  19. Takanori Hara (KEK)
    23/02/2010, 14:50
    Computing Technology for Physics Research
    Parallel Talk
    The Belle II experiment, a next-generation B factory experiment at KEK, is expected to record a data volume two orders of magnitude larger than its predecessor, the Belle experiment. The data size and rate are comparable to or greater than those of the LHC experiments and require changing the computing model from the Belle approach, where basically all computing resources were provided by KEK, to a...
  20. Dr Attila Krasznahorkay (New York University)
    23/02/2010, 14:50
    Data Analysis - Algorithms and Tools
    Parallel Talk
    In a typical offline data analysis in high-energy physics, a large number of collision events is studied. For each event the reconstruction software of the experiments stores a large number of measured event properties, sometimes in complex data objects and formats. Usually this huge amount of initial data is reduced in several analysis steps, selecting a subset of interesting events and...
  21. Prof. Elise de Doncker (Western Michigan University)
    23/02/2010, 15:00
    Methodology of Computations in Theoretical Physics
    Parallel Talk
    We provide a fully numerical, deterministic integration at the level of the three- and four-point functions, in the reduction of the one-loop hexagon integral by sector decomposition. For the corresponding two- and three-dimensional integrals we use an adaptive numerical approach applied recursively in two and three dimensions, respectively. The adaptive integration is coupled with an...
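The recursive adaptive approach described above can be conveyed with a minimal one-dimensional sketch (my own adaptive-Simpson toy, not the authors' two- and three-dimensional code): subdivide wherever the local error estimate is too large, and recurse.

```python
def simpson(f, a, b):
    """Simpson's rule on [a, b]."""
    c = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))

def adaptive(f, a, b, tol=1e-8, whole=None):
    """Recursively subdivide until the two-panel Simpson estimate
    agrees with the one-panel estimate to within the tolerance."""
    if whole is None:
        whole = simpson(f, a, b)
    c = 0.5 * (a + b)
    left, right = simpson(f, a, c), simpson(f, c, b)
    if abs(left + right - whole) < 15.0 * tol:
        return left + right + (left + right - whole) / 15.0
    return (adaptive(f, a, c, tol / 2.0, left) +
            adaptive(f, c, b, tol / 2.0, right))

print(adaptive(lambda x: x ** 0.5, 0.0, 1.0))  # ~2/3, despite the kink at 0
```

The key property, shared with the multidimensional case, is that subdivision effort concentrates automatically near the difficult regions of the integrand.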
  22. Mr David YU (BROOKHAVEN NATIONAL LABORATORY)
    23/02/2010, 15:15
    Computing Technology for Physics Research
    Parallel Talk
    The BNL facility, supporting the RHIC experiments as their Tier0 center and thereafter ATLAS/LHC as a Tier1 center, had to address early on the issue of efficient access to data stored in Mass Storage. Random access destroys tape performance by causing overly frequent, high-latency and time-consuming tape mounts and dismounts. Coupled with a high job throughput from multiple RHIC experiments,...
  23. Mr Eric LEITE (Federal University of Rio de Janeiro)
    23/02/2010, 15:15
    Data Analysis - Algorithms and Tools
    Parallel Talk
    The penetration of a meteor into Earth’s atmosphere results in the creation of an ionized trail, able to produce the forward scattering of VHF electromagnetic waves. This fact inspired the RMS (Radio Meteor Scatter) technique, which consists of detecting meteors with passive radar. Considering the characteristic of continuous acquisition inherent to the radar detection technique and the...
  24. Tord Riemann (DESY)
    23/02/2010, 16:00
    Methodology of Computations in Theoretical Physics
    Parallel Talk
    A new reduction of tensorial one-loop Feynman integrals with massive and massless propagators to scalar functions is introduced. The method is recursive: n-point integrals of rank R are expressed by n-point and (n-1)-point integrals of rank (R-1). The algorithm is realized in a Fortran package.
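As a toy illustration of the recursion's shape only (the real reduction carries kinematic coefficients and Gram determinants, all omitted here), one can count the scalar functions the stated recursion terminates on:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def terms(n, rank):
    """Count the scalar (rank-0) functions reached by the recursion
    'rank-R n-point -> rank-(R-1) n-point + rank-(R-1) (n-1)-point'.
    Pure bookkeeping: coefficients of the actual reduction are omitted."""
    if rank == 0:
        return 1                   # a scalar n-point function: recursion ends
    if n == 1:
        return terms(1, rank - 1)  # no lower-point function to descend to
    return terms(n, rank - 1) + terms(n - 1, rank - 1)

# A rank-3 hexagon (6-point) integral in this toy scheme:
print(terms(6, 3))  # -> 8 scalar functions
```

The point of the sketch is that each recursion step lowers the rank by one while fanning out over at most two topologies, so the reduction terminates after R steps.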
  25. Mr Stephan Horner (Albert-Ludwigs-Universitaet Freiburg)
    23/02/2010, 16:10
    Data Analysis - Algorithms and Tools
    Parallel Talk
    This contribution discusses a novel approach to estimate the Standard Model backgrounds based on modifying Monte Carlo predictions within their systematic uncertainties. The improved background model is obtained by altering the original predictions with successively more complex correction functions in signal-free control selections. Statistical tests indicate when sufficient compatibility...
  26. Giovanni Ossola (New York City College of Technology (CUNY))
    23/02/2010, 16:30
    Methodology of Computations in Theoretical Physics
    Parallel Talk
    The problem of an efficient and automated computation of scattering amplitudes at the one-loop level for processes with more than 4 particles is crucial for the analysis of the LHC data. In this presentation I will review the main features of a powerful new approach for the reduction of one-loop amplitudes that operates at the integrand level. The method, also known as OPP reduction, is an...
  27. Mr Ashish Mani (Dayalbagh Educational Institute)
    23/02/2010, 16:35
    Data Analysis - Algorithms and Tools
    Parallel Talk
    Reliable analysis of any experimental data is always difficult due to the presence of noise and other types of errors. This paper analyzes data obtained from photoluminescence measurement, after the annealing of interdiffused Quantum Well Heterostructures, by a recently proposed Real coded Quantum inspired Evolutionary Algorithm (RQiEA). The proposed algorithm directly measures interdiffusion...
  28. 23/02/2010, 17:00
    The poster list can be found here.
  29. Ian Foster (Unknown)
    23/02/2010, 19:00
  30. Dr Fabrizio Furano (Conseil Europeen Recherche Nucl. (CERN))
    24/02/2010, 09:00
    Computing Technology for Physics Research
    Plenary
    In this talk we pragmatically address some general aspects of massive data access in the HEP environment, focusing on the relationships between the characteristics of the available technologies and the data access strategies that are consequently possible. Moreover, the upcoming evolution of the computing performance available at the personal level will likely pose...
  31. Dr Eilam Gross (Weissman Institute of Physical Sciences)
    24/02/2010, 09:40
    Data Analysis - Algorithms and Tools
    Plenary
    The LHC was built as a discovery machine, whether for a Higgs Boson or Supersymmetry. In this review talk we will concentrate on the methods used in the HEP community to test hypotheses. Via a comparative study, we will cover techniques from the LEP hybrid "CLs" method and the Bayesian Tevatron exclusion techniques to the LHC frequentist discovery techniques. We will explain how to read all ...
  32. Dr Daniel Maitre (IPPP, Great Britain)
    24/02/2010, 10:40
    Methodology of Computations in Theoretical Physics
    Plenary
    In the last years, much progress has been made in the computation of one-loop virtual matrix elements for processes involving many external particles. In this talk I will show the importance of NLO-accuracy computations for phenomenologically relevant processes and review the recent progress that will make their automated computation tractable and their inclusion in Monte Carlo tools possible.
  33. Prof. Roman Bartak (Charles University in Prague)
    25/02/2010, 09:00
    Computing Technology for Physics Research
    Plenary
    Scheduling data transfers is frequently done using heuristic approaches. This is justifiable for on-line systems where an extremely fast response is required; however, when sending large amounts of data, such as transferring large files or streaming video, it is worthwhile to do real optimization. This paper describes formal models for various networking problems with a focus on data networks....
  34. Dr Anwar Ghuloum (Intel Corporation)
    25/02/2010, 09:40
    Computing Technology for Physics Research
    Plenary
    Something strange has been happening in the slowly evolving, placid world of high performance computing. Software and hardware vendors have been introducing new programming models at a breakneck pace. At first blush, the proliferation of parallel programming models might seem confusing to software developers, but is it really surprising? In fact, programming models have been rapidly evolving...
  35. Dr Alexander Pukhov (Moscow State University, Russia)
    25/02/2010, 10:40
    Methodology of Computations in Theoretical Physics
    Plenary
  36. Dr Karl Jansen (NIC, DESY, Zeuthen)
    25/02/2010, 11:20
    Methodology of Computations in Theoretical Physics
    Plenary
    The formulation of QCD on a 4-dimensional euclidean space-time lattice is given. We describe how, with particular implementations of the lattice Dirac operator, the lattice artefacts can be changed from a linear to a quadratic behaviour in the lattice spacing, therefore allowing the continuum limit to be reached faster. We give an account of the algorithmic aspects of the simulations, discuss the...
  37. Mr Eric LEITE (Federal University of Rio de Janeiro)
    25/02/2010, 14:00
    Data Analysis - Algorithms and Tools
    Parallel Talk
    The ATLAS online filtering (trigger) system comprises three sequential filtering levels and uses information from the three subdetectors (calorimeters, muon system and tracking). The electron/jet channel is very important for trigger system performance, as interesting signatures (Higgs, SUSY, etc.) may be found efficiently through decays that produce electrons as final-state particles....
  38. Dr Theodoros Diakonidis (DESY,Zeuthen)
    25/02/2010, 14:00
    Methodology of Computations in Theoretical Physics
    Parallel Talk
    Processes with more than 5 legs have been on experimentalists' wish lists for a long time now. This study targets the NLO QCD corrections to such processes at the LHC. Many Feynman diagrams contribute, including those with five- and six-point functions. A Fortran code for the numerical calculation of one-loop corrections for the process $gg\rightarrow t \bar{t}+gg$ is...
  39. Mr Barthelemy von Haller (CERN)
    25/02/2010, 14:00
    Computing Technology for Physics Research
    Parallel Talk
    ALICE (A Large Ion Collider Experiment) is the detector designed to study the physics of strongly interacting matter and the quark-gluon plasma in Heavy-Ion collisions at the CERN Large Hadron Collider (LHC). The online Data Quality Monitoring (DQM) is a critical element of the data acquisition's software chain. It intends to provide shifters with precise and complete information to quickly...
  40. Mr Michal ZEROLA (Nuclear Physics Inst., Academy of Sciences)
    25/02/2010, 14:25
    Computing Technology for Physics Research
    Parallel Talk
    Unprecedented data challenges, both in terms of Peta-scale volume and concurrent distributed computing, have emerged with the rise of statistically driven experiments such as those of the high-energy and nuclear physics community. Distributed computing strategies, relying heavily on the presence of data at the proper place and time, have further raised demands for coordination...
  41. Dr David Lawrence (Jefferson Lab)
    25/02/2010, 14:25
    Data Analysis - Algorithms and Tools
    Parallel Talk
    The GlueX experiment will gather data at up to 3GB/s into a level-3 trigger farm, a rate unprecedented at Jefferson Lab. Monitoring will be done using the cMsg publish/subscribe system to transport ROOT objects over the network using the newly developed RootSpy package. RootSpy can be attached as a plugin to any monitoring program to "publish" its objects on the network without...
  42. Dr Rikkert Frederix (University Zurich)
    25/02/2010, 14:30
    Methodology of Computations in Theoretical Physics
    Parallel Talk
    Tremendous progress has been made in the automation of one-loop (or virtual) contributions to next-to-leading order (NLO) calculations in QCD, using both the conventional Feynman diagram approach and unitarity-based techniques. To obtain rates and distributions for observables at particle colliders at NLO accuracy, the real emission and subtraction terms also have to be included in...
  43. Mr Sogo Mineo (University of Tokyo)
    25/02/2010, 14:50
    Computing Technology for Physics Research
    Parallel Talk
    Real-time data analysis at next-generation experiments is a challenge because of their enormous data rate and size. The Belle II experiment, the upgrade of the Belle experiment, must manage a data volume O(100) times the current Belle data size, collected at more than 30 kHz. A sophisticated data analysis is required for efficient data reduction in the high level trigger...
  44. Prof. Massimo Di Pierro (DePaul University)
    25/02/2010, 14:50
    Data Analysis - Algorithms and Tools
    Parallel Talk
    mc4qcd is a web-based collaboration tool for the analysis of Lattice QCD data. Lattice QCD computations consist of a large-scale Markov Chain Monte Carlo, with multiple measurements performed at each MC step. Our system acquires the data by uploading log files, parses them for results of measurements, filters them, and mines the data for required information by aggregating results in multiple forms,...
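The parse-filter-aggregate step can be sketched as follows; the `MEAS <name> <value>` log format used here is a made-up assumption for illustration, not mc4qcd's actual format:

```python
import re
from collections import defaultdict

# A hypothetical excerpt of a lattice-QCD run log (format assumed).
LOG = """\
MEAS plaquette 0.5873
MEAS plaquette 0.5881
MEAS pion_mass 0.1432
MEAS plaquette 0.5877
"""

def aggregate(text):
    """Parse 'MEAS <name> <value>' lines and average each observable
    over the Monte Carlo steps in which it was measured."""
    sums = defaultdict(lambda: [0.0, 0])
    for m in re.finditer(r"^MEAS (\w+) ([0-9.eE+-]+)$", text, re.M):
        s = sums[m.group(1)]
        s[0] += float(m.group(2))
        s[1] += 1
    return {name: total / n for name, (total, n) in sums.items()}

print(aggregate(LOG))  # per-observable means over the MC steps
```

A real system would of course also track errors (e.g. via jackknife or bootstrap over the MC chain) rather than plain means.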
  45. Dr Jerome LAURET (BROOKHAVEN NATIONAL LABORATORY)
    25/02/2010, 15:15
    Computing Technology for Physics Research
    Parallel Talk
    Dynamic virtual organization clusters with user-supplied virtual machines (VMs) have advantages over generic environments. These advantages include the ability for the user to have a priori knowledge of the scientific tools and libraries available to programs executing in the virtualized environment, as well as the other details of the environment. The user can also perform small-scale testing...
  46. Dr Joerg Stelzer (DESY, Germany)
    25/02/2010, 15:15
    Data Analysis - Algorithms and Tools
    Parallel Talk
    At the dawn of LHC data taking, multivariate data analysis techniques have become the core of many physics analyses. TMVA provides easy access to sophisticated multivariate classifiers and is widely used to study and deploy these for data selection. Beyond classification, most multivariate methods in TMVA perform regression optimization which can be used to predict data corrections, e.g....
  47. Thomas Hahn (MPI Munich)
    25/02/2010, 15:30
    Methodology of Computations in Theoretical Physics
    Parallel Talk
    The talk describes recent additions to the automated Feynman diagram computation systems FeynArts, FormCalc, and LoopTools.
  48. Dr James Monk (MCnet/Cedar)
    25/02/2010, 16:00
    Methodology of Computations in Theoretical Physics
    Parallel Talk
    Data analyses in hadron collider physics depend on background simulations performed by Monte Carlo (MC) event generators. However, calculational limitations and non-perturbative effects require approximate models with adjustable parameters. In fact, we need to simultaneously tune many phenomenological parameters in a high-dimensional parameter-space in order to make the MC generator...
  49. Mr Andrey Lebedev (GSI, Darmstadt / JINR, Dubna)
    25/02/2010, 16:10
    Data Analysis - Algorithms and Tools
    Parallel Talk
    Particle trajectory recognition is an important and challenging task in the Compressed Baryonic Matter (CBM) experiment at the future FAIR accelerator at Darmstadt. The tracking algorithms have to process terabytes of input data produced in particle collisions. Therefore, the speed of the tracking software is extremely important for data analysis. In this contribution, a fast parallel track...
  50. Fabrizio Furano (CERN IT/DM)
    25/02/2010, 16:10
    Computing Technology for Physics Research
    Parallel Talk
    By the time of this conference the LHC ALICE experiment at CERN will have collected a significant amount of data. To process the data that will be produced during the life time of the LHC, ALICE has developed over the last years a distributed computing model across more than 90 sites that build on the overall WLCG (World-wide LHC Computing Grid) service. ALICE implements the different Grid...
  51. Dr Gregory Schott (Karlsruhe Institute of Technology), Dr Lorenzo Moneta (CERN)
    25/02/2010, 16:35
    Data Analysis - Algorithms and Tools
    Parallel Talk
    RooStats is a project to create advanced statistical tools required for the analysis of LHC data, with emphasis on discoveries, confidence intervals, and combined measurements. The idea is to provide the major statistical techniques as a set of C++ classes with coherent interfaces, which can be used on arbitrary models and datasets in a common way. The classes are built on top of RooFit,...
  52. Dr Michael D. McCool (Intel/University of Waterloo)
    25/02/2010, 17:00
    Data Analysis - Algorithms and Tools
    Parallel Talk
    A great portion of the data processing in a high-energy detector experiment is spent on the complementary tasks of track finding and track fitting. These problems correspond, respectively, to associating a set of measurements with a single particle, and to determining the parameters of the track given a candidate path [Avery 1992]. These parameters usually correspond to the 5-tuple state of the model...
  53. Dr Alfio Lazzaro (Universita degli Studi di Milano & INFN, Milano), Anwar Ghuloum (Intel Corporation), Dr Mohammad Al-Turany (GSI DARMSTADT), Mr Sverre Jarp (CERN)
    25/02/2010, 17:30
    The multicore panel will review recent activities in the multicore/manycore arena. Four panelists will kick off the session with short presentations, but it will mainly rely on good interaction with the audience: Mohammad Al-Turany (GSI/IT), Anwar Ghuloum (Intel Labs), Sverre Jarp (CERN/IT), Alfio Lazzaro (CERN/IT).
  54. Dr Singh Deepak (Business Development Manager - Amazon EC2)
    26/02/2010, 09:00
    Computing Technology for Physics Research
    Plenary
    In an era where high-throughput instruments and sensors are increasingly providing us faster access to new kinds of data, it is becoming very important to have timely access to resources which allow scientists to collaborate and share data while maintaining the ability to process vast quantities of data or run large-scale simulations when required. Built on Amazon's vast global computing...
  55. Dr Mohammad AL-TURANY (GSI DARMSTADT)
    26/02/2010, 09:40
    Computing Technology for Physics Research
    Plenary
  56. Naohito Nakasato (University of Aizu)
    26/02/2010, 10:40
    Methodology of Computations in Theoretical Physics
    Plenary
    Many-core accelerators are developing so fast that these devices attract researchers who are always demanding faster computers. Since many-core accelerators such as graphics processing units (GPUs) are nothing but parallel computers, we need to modify an existing application program with specific optimizations (mostly parallelization) for a given accelerator. In this paper, we...
  57. Dr Fukuko YUASA (KEK)
    26/02/2010, 11:20
    Methodology of Computations in Theoretical Physics
    Plenary
  58. Dr James William Monk (Department of Physics and Astronomy - University College London)
    26/02/2010, 14:00
    Data Analysis - Algorithms and Tools
    Parallel Talk
    Hadronic final states in hadron-hadron collisions are often studied by clustering final state hadrons into jets, each jet approximately corresponding to a hard parton. The typical jet size in a high energy hadron collision is between 0.4 and 1.0 in eta-phi. On the other hand, there may be structures of interest in an event that are of a different scale to the jet size. For example, to a...
  59. Dr Paolo Bolzoni (DESY)
    26/02/2010, 14:00
    Methodology of Computations in Theoretical Physics
    Parallel Talk
    To compute jet cross sections at higher orders in QCD efficiently, one has to deal with infrared divergences. These divergences cancel between virtual and real corrections once the phase space integrals are performed. To use standard numerical integration methods like Monte Carlo, the cancellation of the divergences must be performed explicitly. Usually this is done by constructing appropriate...
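Schematically, the subtraction construction mentioned above takes the standard NLO form (generic notation, not specific to this contribution):

```latex
\sigma^{\mathrm{NLO}}
  = \int_{m+1}\!\left[\,\mathrm{d}\sigma^{R} - \mathrm{d}\sigma^{A}\right]
  + \int_{m}\!\left[\,\mathrm{d}\sigma^{V} + \int_{1}\mathrm{d}\sigma^{A}\right]
```

Here the subtraction term $\mathrm{d}\sigma^{A}$ is constructed to mimic the soft and collinear behaviour of the real-emission contribution $\mathrm{d}\sigma^{R}$, so each bracket is separately finite and can be evaluated by Monte Carlo integration.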
  60. Dr Mattia Cinquilli (INFN, Sezione di Perugia)
    26/02/2010, 14:00
    Computing Technology for Physics Research
    Parallel Talk
    The Grid approach provides uniform access to a set of geographically distributed heterogeneous resources and services, enabling projects that would be impossible without massive computing power. Different storage projects have been developed, and a few protocols are being used to interact with them, such as GsiFtp and SRM (Storage Resource Manager). Moreover, during the last few years different...
  61. Semen Lebedev (GSI, Darmstadt / JINR, Dubna)
    26/02/2010, 14:25
    Data Analysis - Algorithms and Tools
    Parallel Talk
    The Compressed Baryonic Matter (CBM) experiment at the future FAIR facility at Darmstadt will measure dileptons emitted from the hot and dense phase in heavy-ion collisions. In case of an electron measurement, a high purity of identified electrons is required in order to suppress the background. Electron identification in CBM will be performed by a Ring Imaging Cherenkov (RICH) detector and...
  62. Dr Fons Rademakers (CERN)
    26/02/2010, 14:25
    Computing Technology for Physics Research
    Parallel Talk
    With PROOF, the parallel ROOT Facility, being widely adopted for LHC data analysis, it becomes more and more important to understand the different parameters that can be tuned to make the system perform optimally. In this talk we will describe a number of "best practices" to get the most out of your PROOF system, based on feedback from several pilot setups. We will describe different cluster...
  63. Mikhail Tentyukov (Karlsruhe University)
    26/02/2010, 14:30
    Methodology of Computations in Theoretical Physics
    Parallel Talk
    Sector decomposition in its practical aspect is a constructive method used to evaluate Feynman integrals numerically. We present a new program performing the sector decomposition and integrating the expression afterwards. Also the program can be used in order to expand Feynman integrals automatically in limits of momenta and masses with the use of sector decompositions and Mellin--Barnes...
  64. Dr Peter Elmer (PRINCETON UNIVERSITY)
    26/02/2010, 14:50
    Computing Technology for Physics Research
    Parallel Talk
    CMS is a large, general-purpose experiment at the Large Hadron Collider (LHC) at CERN. For its simulation, triggering, data reconstruction and analysis needs, CMS collaborators have developed many millions of lines of C++ code, which are used to create applications run in computer centers around the world. Maximizing the performance and efficiency of the software is highly desirable in...
  65. Riccardo Maria Bianchi (Physikalisches Institut, Albert-Ludwigs-Universitaet Freiburg)
    26/02/2010, 14:50
    Data Analysis - Algorithms and Tools
    Parallel Talk
    A lot of code written for high-level data analysis has many similar properties, e.g. reading out the data of given input files, data selection, overlap removal of physical objects, calculation of basic physical quantities and the output of the analysis results. Because of this, all too often one starts writing a new piece of code by copying and pasting from old code, then modifying it for...
  66. Dr Toshiaki KANEKO (KEK, Computing Research Center)
    26/02/2010, 15:00
    Methodology of Computations in Theoretical Physics
    Parallel Talk
    One of the powerful tools for evaluating multi-loop/leg integrals is sector decomposition, which can isolate infrared divergences from parametric representations of the integrals. The aim of this talk is to present a new method to replace iterated sector decomposition, in which the problems are converted into a set of problems in convex geometry, and then they can be solved by using...
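The textbook example of what sector decomposition achieves: for the overlapping endpoint singularity of $\int_0^1\mathrm{d}x\int_0^1\mathrm{d}y\,(x+y)^{\varepsilon-2}$, splitting the square into the sectors $x \ge y$ and $y \ge x$ and rescaling ($y = xt$ in the first sector, $x = yt$ in the second) factorizes the divergence:

```latex
I(\varepsilon)
  = \int_0^1\!\mathrm{d}x \int_0^1\!\mathrm{d}y\,(x+y)^{\varepsilon-2}
  = 2\int_0^1\!\mathrm{d}x\; x^{\varepsilon-1}
    \int_0^1\!\mathrm{d}t\,(1+t)^{\varepsilon-2}
  = \frac{1}{\varepsilon} + \mathcal{O}(1)
```

After the decomposition the $x$-integral exhibits the $1/\varepsilon$ pole explicitly, leaving a finite $t$-integral (equal to $1/2$ at $\varepsilon = 0$) that can be evaluated numerically; the talk's contribution concerns how to choose such sectors systematically via convex geometry.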
  67. Mr Costin Grigoras (CERN)
    26/02/2010, 15:15
    Computing Technology for Physics Research
    Parallel Talk
    In a World Wide distributed system like the ALICE Environment (AliEn) Grid Services, the closeness of the data to the actual computational infrastructure denotes a substantial difference in terms of resources utilization efficiency. Applications unaware of the locality of the data or the status of the storage environment can waste network bandwidth in case of slow networks or fail accessing...
  68. Yoshimasa Kurihara (KEK)
    26/02/2010, 16:00
    Methodology of Computations in Theoretical Physics
    Parallel Talk
    The importance of the multiple-polylog function (MLP) for the calculation of loop integrals has been pointed out by many authors. We give a general discussion of the relation between MLPs and multi-loop integrals from the viewpoint of computer algebra.
  69. Andrew Melo (Vanderbilt)
    26/02/2010, 16:10
    Computing Technology for Physics Research
    Parallel Talk
    In recent years a new type of database has emerged in the computing landscape. These "NoSQL" databases tend to originate from large internet companies that have to serve simple data structures to millions of customers daily. The databases specialise for certain use cases or data structures and run on commodity hardware, as opposed to large traditional database clusters. In this paper we...
    Go to contribution page
  70. Dr Ivan Kisel (Gesellschaft für Schwerionenforschung mbH (GSI))
    26/02/2010, 16:10
    Data Analysis - Algorithms and Tools
    Parallel Talk
    Future many-core CPU and GPU architectures require relevant changes in the traditional approach to data analysis. Massive hardware parallelism at the levels of cores, threads and vectors has to be adequately reflected in mathematical, numerical and programming optimization of the algorithms used for event reconstruction and analysis. An investigation of the Kalman filter, which is the core of...
    Go to contribution page
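As a hedged sketch of the idea (illustrative only, not the speaker's code), a Kalman filter measurement update can be applied to a whole batch of independent tracks at once; each element of the batch performs the same arithmetic, which is exactly the pattern that maps onto SIMD vector lanes or GPU threads:

```python
# Minimal 1-D Kalman filter measurement update over a batch of tracks.
# Each track is independent, so the loop body is trivially parallelizable
# (vector lanes, threads, or GPU cores on many-core hardware).

def kalman_update_batch(x, P, z, R):
    """x: state estimates, P: their variances, z: measurements (equal-length
    lists); R: measurement variance (scalar). Returns updated (x, P)."""
    x_new, P_new = [], []
    for xi, Pi, zi in zip(x, P, z):
        K = Pi / (Pi + R)                   # Kalman gain
        x_new.append(xi + K * (zi - xi))    # corrected state estimate
        P_new.append((1.0 - K) * Pi)        # reduced uncertainty
    return x_new, P_new
```

For example, two tracks with prior variance 1.0 and measurement variance 1.0 get gain K = 0.5, so the updated states sit halfway between prediction and measurement.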
  71. Dr Roman Rogalyov (IHEP)
    26/02/2010, 16:30
    Methodology of Computations in Theoretical Physics
    Parallel Talk
    A comprehensive set of one-loop integrals in a theory with Wilson fermions at $r=1$ is computed using the Burgio--Caracciolo--Pelissetto algorithm. With these results, the fermionic propagator in the coordinate representation is evaluated, making it possible to extend the Lüscher--Weisz procedure for two-loop integrals to the fermionic case. Computations are performed with FORM...
    Go to contribution page
  72. Sebastian Fleischmann (U. Bonn)
    26/02/2010, 16:35
    Data Analysis - Algorithms and Tools
    Parallel Talk
    Monte Carlo simulation of the detector response is an indispensable part of any analysis performed with data from the LHC experiments. These simulated data sets are needed with large statistics and at a high level of precision, which makes their production a CPU-intensive task. ATLAS has thus concentrated on optimizing both full and fast detector-simulation techniques to achieve...
    Go to contribution page
  73. Dr Maksim Nekrasov (Institute for High Energy Physics)
    26/02/2010, 17:00
    Methodology of Computations in Theoretical Physics
    Parallel Talk
    We consider pair production and decay of fundamental unstable particles in the framework of a modified perturbation theory (MPT) which treats resonant contributions of unstable particles in the sense of distributions. The cross-section of the process is calculated within the NNLO of the MPT in a model that admits an exact solution. Universal massless-particle contributions are taken into...
    Go to contribution page
  74. Andreas Hinzmann (III. Physikalisches Institut A, RWTH Aachen University, Germany)
    26/02/2010, 17:00
    Data Analysis - Algorithms and Tools
    Parallel Talk
    VISPA (Visual Physics Analysis) is a novel development environment to support physicists in prototyping, execution, and verification of data analysis of any complexity. The key idea of VISPA is developing physics analyses using a combination of graphical and textual programming. In VISPA, a multipurpose window provides visual tools to design and execute modular analyses, create analysis...
    Go to contribution page
  75. Alberto Pace (CERN), Andrew Hanushevsky (Unknown), Beob Kyun Kim (KISTI), Dr Rene Brun (CERN), Tony Cass (CERN)
    26/02/2010, 17:30
  76. Axel Naumann (CERN)
    27/02/2010, 09:00
    Computing Technology for Physics Research
  77. Dr Liliana Teodorescu (Brunel University)
    27/02/2010, 09:40
    Data Analysis - Algorithms and Tools
  78. Peter Uwer (Humboldt-Universität zu Berlin)
    27/02/2010, 10:40
    Methodology of Computations in Theoretical Physics
  79. 27/02/2010, 11:20
  80. Fabrizio Furano (Conseil Europeen Recherche Nucl. (CERN))
    Computing Technology for Physics Research
    Poster
    An unprecedented amount of data will soon come out of CERN’s Large Hadron Collider (LHC). Large user communities will immediately demand data access for physics analysis. Despite the Grid and the distributed infrastructure allowing geographically distributed data mining and analysis, there will be an important concentration of user analysis activities where the data resides, nullifying, to...
    Go to contribution page
  81. Pablo Saiz (CERN)
    Computing Technology for Physics Research
    Poster
    By the time of this conference, the ALICE experiment will already have data from the LHC accelerator at CERN. ALICE uses AliEn to distribute and analyze all these data among the more than eighty sites that participate in the collaboration. AliEn is a system that allows the use of distributed computing and storage resources all over the world. It hides the differences between the...
    Go to contribution page
  82. Dr Stephen Haywood (RAL)
    Data Analysis - Algorithms and Tools
    Poster
    CERN’s Large Hadron Collider (LHC) is the world’s largest particle accelerator. It will make two proton beams collide at an unprecedented centre-of-mass energy of 14 TeV. ATLAS is a general purpose detector which will record the products of the LHC proton-proton collisions. At the inner radii, the detector is equipped with a charged-particle tracking system built on two technologies: silicon...
    Go to contribution page
  83. J. P. Achara (LNMIIT)
    Computing Technology for Physics Research
    Poster
    Caching in a data grid has great benefits because it makes data objects available faster and closer to where they are needed, decreasing their retrieval time. One of the challenging tasks in designing a cache is designing its replacement policy. The replacement policy decides which set of files is to be evicted to accommodate a newly arrived file in the cache, and also whether a newly...
    Go to contribution page
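As a hedged baseline sketch (illustrative only, not the policy proposed in the talk), a least-recently-used (LRU) replacement policy is one common answer to the eviction question; capacity is counted in files here for simplicity, whereas a real grid cache would track bytes:

```python
# Minimal LRU replacement policy for a file cache (illustrative sketch).
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.files = OrderedDict()   # name -> size, least recent first

    def access(self, name, size):
        """Return True on a cache hit, False on a miss (file admitted)."""
        if name in self.files:                    # hit: mark as most recent
            self.files.move_to_end(name)
            return True
        while len(self.files) >= self.capacity:   # full: evict the LRU file
            self.files.popitem(last=False)
        self.files[name] = size                   # admit the new file
        return False
```

With capacity 2, accessing files a, b, a, c in turn evicts b, since a was touched more recently.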
  84. Roger Jones (Physics Department, Lancaster University)
    Data Analysis - Algorithms and Tools
    Poster
    The ATLAS experiment at the Large Hadron Collider is expected to start colliding proton beams in September 2009. The enormous amount of data produced (~1 PB per year) poses a great challenge to ATLAS computing. ATLAS will search for the Higgs boson and physics beyond the Standard Model. To meet this challenge, a suite of common Physics Analysis Tools (PAT) has been developed...
    Go to contribution page
  85. Mr Hassen Riahi (University and INFN Perugia)
    Data Analysis - Algorithms and Tools
    Poster
    Particle beams are now circulating in the world’s most powerful particle accelerator LHC at CERN and the experiments are ready to record data from beam. Data from first collisions will be crucial for sub-detector commissioning, making alignment and calibration high priority activities. Executing the alignment and calibration workflow represents a complex and time consuming task, with intricate...
    Go to contribution page
  86. Dr Marco Rovere (CERN)
    Computing Technology for Physics Research
    Poster
    The configuration of the CMS Pixel detector consists of a complex set of data that uniquely defines its startup condition. Since several of these conditions are used both to calibrate the detector over time and to properly initialize it for a physics run, all these data have been collected in a suitably designed database for historical archival and retrieval. In this talk we present a...
    Go to contribution page
  87. Dr Andy Buckley (University of Edinburgh)
    Data Analysis - Algorithms and Tools
    Poster
    The HEPDATA repository is a venerable collection of major HEP results from more than 35 years of particle physics activity. Historically accessed by teletype and remote terminal, the primary interaction mode has for many years been via the HEPDATA website, hosted in Durham, UK. The viability of this system has been limited by a set of legacy software choices, in particular a hierarchical...
    Go to contribution page
  88. Dr Federico Carminati (CERN)
    Data Analysis - Algorithms and Tools
    Poster
    In the fall of 2009 the ALICE Core Computing Project conducted an inquiry into the social connections between the computing centres of the ALICE distributed computing infrastructure. This inquiry was based on social network analysis, a scientific method dedicated to understanding the complex relational structures linking human beings. The paper provides innovative insights into various relational...
    Go to contribution page
  89. Philippe Gros (Lund University)
    Computing Technology for Physics Research
    Poster
    For the intensive offline computation and storage needs of LHC, the Grid has become a necessary tool. The grid software is called middleware, and comes in different flavors. The ALICE experiment has developed AliEn, while ARC has been developed in the Nordic countries. The Nordic community has pledged to LHC a large amount of resources distributed over four countries, where the job management...
    Go to contribution page
  90. Dr Florian Uhlig (GSI Darmstadt)
    Computing Technology for Physics Research
    Poster
    Up-to-date information about a software project helps to find problems as early as possible. This includes, for example, information on whether a software project can be built on all supported platforms without errors, or whether specified tests can be executed and deliver the correct results. We will present the scheme which is used within the FairRoot framework to continuously monitor the status of the...
    Go to contribution page
  91. Dr Alfio Lazzaro (Universita degli Studi di Milano & INFN, Milano)
    Data Analysis - Algorithms and Tools
    Poster
    With the startup of the LHC experiments, the community will focus on the analysis of the collected data. The complexity of the data analyses will be a key factor in finding possible new phenomena. For this reason many data analysis tools have been developed in recent years, allowing the use of different techniques, such as likelihood-based procedures, neural networks, boosted decision...
    Go to contribution page
  92. Dr Anurag Gupta (Scientific Officer 'F'), Mr Kislay Bhatt (Scientific Officer 'F')
    Computing Technology for Physics Research
    Poster
    The most fundamental task in the design and analysis of a nuclear reactor core is to find the neutron distribution as a function of space, direction, energy and possibly time. The most accurate description of the average behavior of neutrons is given by the linear form of the Boltzmann transport equation. Due to the massive number of unknowns, the solution of the...
    Go to contribution page
  93. Gerardo GANIS (CERN)
    Computing Technology for Physics Research
    Poster
    The Parallel ROOT Facility, PROOF, is an extension of ROOT enabling interactive analysis of large sets of ROOT files in parallel on clusters of computers or many-core machines. PROOF provides an alternative to the traditional batch-oriented exploitation of distributed computing resources. The PROOF dynamic approach allows for better adaptation to the varying and unpredictable work-load during...
    Go to contribution page
  94. Mr Anar Manafov (GSI Helmholtzzentrum für Schwerionenforschung GmbH, Germany)
    Computing Technology for Physics Research
    Poster
    PROOF on Demand (PoD) is a set of utilities that allows starting a PROOF cluster at the user's request on any resource management system. It provides a plug-in based system to use different job submission frontends, such as LSF or gLite WMS. PoD is fully automated, and no special knowledge is required to start using it. The main components of PoD are pod-agent and pod-console. pod-agent provides...
    Go to contribution page
  95. Markward Britsch (MPI for Nuclear Physics,Heidelberg, Germany)
    Computing Technology for Physics Research
    Poster
    The analysis and visualisation of the LHC data is a good example of human interaction with petabytes of inhomogeneous data. A proposal is presented, addressing both physics analysis and information technology, to develop a novel distributed analysis infrastructure which is scalable to allow real-time random access to, and interaction with, petabytes of data. The proposed hardware basis is a...
    Go to contribution page
  96. Dr Leonello Servoli (INFN - Sezione di Perugia)
    Computing Technology for Physics Research
    Poster
    Open-source computing clusters for scientific purposes are growing in size, complexity and heterogeneity; often they are also included in some geographically distributed computing grid. In this framework the difficulty of assessing the overall efficiency, identifying the bottlenecks and tracking the failures of single components is increasing continuously. In previous works we have formalized...
    Go to contribution page
  97. Dr Chiara Zampolli (CERN & CNAF-INFN)
    Data Analysis - Algorithms and Tools
    Poster
    C. Zampolli for the ALICE Collaboration. ALICE will collect data at a rate of 1.25 GB/s during heavy-ion runs, and of 100 MB/s during p-p data taking. In a standard data taking year, the expected total data volume is of the order of 2PB. This includes raw data, reconstructed data, and the conditions data needed for the calibration and the alignment of the ALICE detectors, on top of...
    Go to contribution page
  98. Anil P. Singh (Panjab University)
    Data Analysis - Algorithms and Tools
    Poster
    New physics searches such as SUSY in the CMS detector at the LHC will require a very fine scanning of the parameter space over a large number of points. Accordingly we need to address the problem of developing a very fast setup to generate and simulate large MC samples. We have explored the use of TurboSim as a fast and standalone setup for generating such samples. TurboSim does not...
    Go to contribution page
  99. Dr Leonello Servoli (INFN - Sezione di Perugia)
    Computing Technology for Physics Research
    Poster
    Distributed computer systems pose a new class of problems, due to increased heterogeneity both from the hardware and from the user-request point of view. One possible solution is to create on demand virtual working environments tailored to the user's requirements, hence the need to manage such environments dynamically. This work proposes a solution based on the use of Virtual...
    Go to contribution page