-
Mr Federico Carminati (CERN)03/11/2008, 08:45
-
Denis Perret-Gallix (Laboratoire d'Annecy-le-Vieux de Physique des Particules (LAPP))03/11/2008, 08:55
-
Lawrence Pinsky (University of Houston)03/11/2008, 09:00Intellectual Property, which includes the following areas of the law: Copyrights, Patents, Trademarks, Trade Secrets, and most recently Database Protection and Internet Law, might seem to be an issue for lawyers only. However, the increasing impact of the laws governing these areas and the international reach of the effects of their implementation make it important for all software...Go to contribution page
-
Mr Felix Schuermann03/11/2008, 09:40The initial phase of the Blue Brain Project aims to reconstruct the detailed cellular structure and function of the neocortical column (NCC) of the young rat. As a collaboration between the Brain Mind Institute of the Ecole Polytechnique Federale de Lausanne (EPFL) and IBM the project is based on the many years of experimental data from an electrophysiology lab and a dedicated massively...Go to contribution page
-
Dr Andrei Kataev (Institute for Nuclear Research, Moscow, Russia)03/11/2008, 10:40Different methods for treating the results of higher-order perturbative QCD calculations of the decay width of the Standard Model Higgs boson into bottom quarks are discussed. Special attention is paid to the analysis of the $M_H$ dependence of the decay width $\Gamma(H\to \bar{b}b)$ in the cases when the mass of the b-quark is defined as the running parameter in the $\overline{MS}$-scheme and as the...Go to contribution page
-
David Bailey (Lawrence Berkeley Laboratory)03/11/2008, 11:20For the vast majority of computations done both in pure and applied physics, ordinary 64-bit floating-point arithmetic (about 16 decimal digits) is sufficient. But for a growing body of applications, this level is not sufficient. For applications such as supernova simulations, climate modeling, n-body atomic structure calculations, "double-double" (approx. 32 digits) or even "quad-double"...Go to contribution page
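The "double-double" representation mentioned in the abstract can be sketched with error-free transformations. The following is an illustrative Python sketch of the idea only, not the speaker's library (which is implemented in C/C++ and Fortran); it uses Knuth's branch-free two-sum to carry the rounding error of each addition in a second double:

```python
def two_sum(a, b):
    """Knuth's error-free transformation: s + e equals a + b exactly."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def dd_add(x, y):
    """Add two double-double numbers (hi, lo), giving roughly 32
    significant digits instead of the usual 16."""
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    return two_sum(s, e)

# Adding 1.0 to 1e16 is lost in plain 64-bit floats (spacing there is 2.0),
# but the double-double pair keeps it in the low word:
hi, lo = dd_add((1e16, 0.0), (1.0, 0.0))
```

Here `(hi, lo)` comes back as `(1e16, 1.0)`, whereas plain `1e16 + 1.0` rounds back to `1e16`; "quad-double" extends the same idea to four components.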
-
Mr Andrew Hanushevsky (Stanford Linear Accelerator Center (SLAC))03/11/2008, 12:00There are many ways to build a Storage Element. This talk surveys the common and popular architectures used to construct today's Storage Elements and presents points for consideration. The presentation then asks, "Are these architectures ready for LHC era experiments?". The answer may be surprising and certainly shows that the context in which they are used matters.Go to contribution page
-
Dr Ariel Garcia (FORSCHUNGSZENTRUM KARLSRUHE, GERMANY)03/11/2008, 14:00g-Eclipse is both a user friendly graphical user interface and a programming framework for accessing Grid and Cloud infrastructures. Based on the extension mechanism of the well known Eclipse platform, it provides a middleware independent core implementation including standardized user interface components. Based on these components, implementations for any available Grid and Cloud middleware...Go to contribution page
-
Dr David Lawrence (Jefferson Lab)03/11/2008, 14:00The C++ reconstruction framework JANA has been written to support the next generation of Nuclear Physics experiments at Jefferson Lab in anticipation of the 12GeV upgrade. This includes the GlueX experiment in the planned 4th experimental hall "Hall-D". The JANA framework was designed to allow multi-threaded event processing with a minimal impact on developers of reconstruction software....Go to contribution page
-
Warren Perkins (Swansea University, UK)03/11/2008, 14:00 - 3. Computation in Theoretical Physics, Parallel Talk - Unitarity methods provide an efficient way of calculating 1-loop amplitudes for which Feynman diagram techniques are impracticable. Recently several approaches have been developed that apply these techniques to systematically generate amplitudes. The 'canonical basis' implementation of the unitarity method will be discussed in detail and illustrated using seven-point QCD processes.Go to contribution page
-
Mikhail Titov (Moscow Physical Engineering Inst. (MePhI))03/11/2008, 14:25There is an ATLAS-wide policy for how different types of data are distributed between centres of different levels (T0/T1/Tn); it is a well-defined and centrally operated activity (using ATLAS Central Services, which include Catalogue services, Site services, T0 services, Panda services, etc.). At the same time, the ATLAS Operations Group designed user-oriented services to allow ATLAS physicists to place data...Go to contribution page
-
Johannes Bluemlein (DESY)03/11/2008, 14:25 - 3. Computation in Theoretical Physics, Parallel Talk - We present a method to unfold the complete functional dependence of single-scale quantities such as QCD splitting functions and Wilson coefficients from a finite number of moments. These quantities obey recursion relations which can be found in an automated way. The exact functional form is obtained by solving the corresponding difference equations. We apply the algorithm to the QCD Wilson...Go to contribution page
-
Joerg Stelzer (DESY)03/11/2008, 14:25Multivariate data analysis techniques are becoming increasingly important for high energy physics experiments. TMVA is a tool, integrated in the ROOT environment, which provides easy access to sophisticated multivariate classifiers, enabling widespread use of these very effective data selection techniques. It furthermore provides a number of pre-processing capabilities and...Go to contribution page
-
Dr Mikhail Rogal (DESY)03/11/2008, 14:50 - 3. Computation in Theoretical Physics, Parallel Talk - will be sent laterGo to contribution page
-
Dr Dominik Dannheim (CERN)03/11/2008, 14:50Probability-Density Estimation (PDE) is a multivariate discrimination technique based on sampling signal and background densities in a multi-dimensional phase space. The signal and background densities are defined by event samples (from data or Monte Carlo) and are evaluated using a binary search tree (range searching). This method is a powerful classification tool for problems with highly...Go to contribution page
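The sampling idea behind PDE can be sketched in a few lines. This toy version replaces the binary search tree described in the abstract with a plain linear scan over the event samples, and all names, box sizes and data points are purely illustrative:

```python
def in_box(point, center, half_width):
    """True if `point` lies inside the axis-aligned box around `center`."""
    return all(abs(p - c) <= half_width for p, c in zip(point, center))

def pde_discriminant(query, signal, background, half_width=0.5):
    """Sampled-density discriminant D = n_s / (n_s + n_b), counting
    signal and background events in a box around the query point."""
    n_s = sum(in_box(p, query, half_width) for p in signal)
    n_b = sum(in_box(p, query, half_width) for p in background)
    if n_s + n_b == 0:
        return 0.5          # no nearby events: undecided
    return n_s / (n_s + n_b)

signal     = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0)]   # toy event samples
background = [(2.0, 2.1), (1.9, 2.0), (2.1, 1.9)]
d = pde_discriminant((0.05, 0.1), signal, background)   # near the signal cluster
```

A query near the signal cluster gives D close to 1, near the background cluster close to 0; the range-search tree in the real implementation only makes the counting step fast.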
-
Paul Nilsson (University of Texas at Arlington)03/11/2008, 14:50The PanDA system was developed by US ATLAS to meet the requirements for full scale production and distributed analysis processing for the ATLAS Experiment at CERN. The system provides an integrated service architecture with late binding of job, maximal automation through layered services, tight binding with the ATLAS Distributed Data Management system, advanced job recovery and error discovery...Go to contribution page
-
Mr Andrei Gheata (ISS/CERN)03/11/2008, 15:15The talk will describe the current status of the offline analysis framework used in ALICE. The software was designed and optimized to take advantage of distributed computing resources and be compatible with the ALICE computing model. The framework's main features: the possibility to use parallelism in PROOF or GRID environments, transparency of the computing infrastructure and data model, scalability...Go to contribution page
-
Dr Yoshimasa KURIHARA (KEK)03/11/2008, 15:15 - 3. Computation in Theoretical Physics, Parallel Talk - The automatic Feynman-amplitude calculation system GRACE has been extended to treat next-to-leading order (NLO) QCD calculations. Matrix elements of loop diagrams, as well as those of tree-level ones, can be generated using the GRACE system. Soft/collinear singularities are treated using a leading-log subtraction method. Higher-order re-summation of the soft/collinear correction by the parton...Go to contribution page
-
Axel Naumann (CERN)03/11/2008, 15:15Compiled code is fast, interpreted code is slow. There is not much we can do about it, and it is the reason why interpreter use in high performance computing is usually restricted to job submission. I will show where interpreters make sense even in the context of analysis code, and what aspects have to be taken into account to make this combination a success.Go to contribution page
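The trade-off the abstract describes can be illustrated even inside a single interpreter: the steering logic stays interpreted while the hot loop is delegated to a compiled primitive (here Python's built-in `sum`, which iterates in C). A hedged illustration of the general point, not code from the talk:

```python
import timeit

def python_loop(n):
    """Hot loop executed entirely by the interpreter."""
    total = 0
    for i in range(n):
        total += i
    return total

def builtin_sum(n):
    """Same loop delegated to a compiled primitive; only the call
    itself is interpreted."""
    return sum(range(n))

n = 100_000
assert python_loop(n) == builtin_sum(n) == n * (n - 1) // 2
t_interp   = timeit.timeit(lambda: python_loop(n), number=10)
t_compiled = timeit.timeit(lambda: builtin_sum(n), number=10)
```

On typical builds the delegated version is several times faster; the analysis-code analogue is keeping the flexible selection logic in the interpreter while the per-event number crunching runs compiled.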
-
Giuseppe Codispoti (Dipartimento di Fisica)03/11/2008, 16:10CRAB (CMS Remote Analysis Builder) is the tool used by CMS to enable running physics analysis in a transparent manner over data distributed across many sites. It abstracts out the interaction with the underlying batch farms, grid infrastructure and CMS workload management tools, such that it is easily usable by non-experts. CRAB can be used as a direct interface to the computing system or...Go to contribution page
-
Prof. Vladimir Ivantchenko (CERN, ESA)03/11/2008, 16:10 - 3. Computation in Theoretical Physics, Parallel Talk - The status of Geant4 electromagnetic (EM) physics models is presented, focusing on the models most relevant for collider HEP experiments, at the LHC in particular. Recently, improvements were undertaken in models for the transport of electrons and positrons, and for hadrons. Models revised included those for single and multiple scattering, ionization at low and high energies, bremsstrahlung,...Go to contribution page
-
Dr Monica Verducci (INFN RomaI)03/11/2008, 16:10The ATLAS Muon System has started to make extensive use of the LCG conditions database project 'COOL' as the basis for all its conditions data storage, both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and...Go to contribution page
-
Pier Paolo Ricci (INFN CNAF)03/11/2008, 16:35The activities of the last 5 years for storage access at the INFN CNAF Tier1 fall under two different solutions efficiently used in production: the CASTOR software, developed by CERN, for Hierarchical Storage Management (HSM), and the General Parallel File System (GPFS), by IBM, for disk resource management. In addition, since last year, a promising alternative solution for...Go to contribution page
-
Sergei V. Gleyzer (Florida State University)03/11/2008, 16:35In high energy physics, variable selection and reduction are key to a high quality multivariate analysis. Initial variable selection often leads to a variable set cardinality greater than the underlying degrees of freedom of the model, which motivates the need for variable reduction and, more fundamentally, a consistent decision-making framework. Such a framework, called PARADIGM, based on a...Go to contribution page
-
Vladimir Kolesnikov (Joint Institute for Nuclear Research (JINR))03/11/2008, 16:35 - 3. Computation in Theoretical Physics, Parallel Talk - Two types of SANC system output are presented. First, the status of stand-alone packages for calculations of the EW and QCD NLO RC at the parton level (standard SANC FORM and/or FORTRAN modules) is given. A short overview of these packages follows, in the Neutral Current sector: (uu, dd) -> (mu,mu, ee) and ee(uu, dd) -> HZ; and in the Charged Current sector: ee(uu, dd) -> (mu nu_mu, e...Go to contribution page
-
Hegoi Garitaonandia (NIKHEF)03/11/2008, 17:00The ATLAS experiment at CERN will require about 4000 CPUs for the online data acquisition system (DAQ). When the DAQ system experiences software errors, such as event selection algorithm problems, crashes or timeouts, the fault tolerance mechanism routes the corresponding event data to the so called debug stream. During first beam commissioning and early data taking, a large fraction of events...Go to contribution page
-
Dr Stanislav Zub (National Science Center, Kharkov Institute of Physics and Techn)03/11/2008, 17:00 - 3. Computation in Theoretical Physics, Parallel Talk - The dynamics of two bodies interacting via magnetic forces is considered. The interaction model is built on a quasi-stationary approach for the electromagnetic field, and symmetric rotors with different moments of inertia of the bodies are considered. The general form of the interaction energy is derived for the case of coinciding mass and magnetic symmetries. Since the energy of interaction depends only...Go to contribution page
-
Mr Andrey Lebedev (GSI, Darmstadt / JINR, Dubna)03/11/2008, 17:00The Compressed Baryonic Matter (CBM) experiment at the future FAIR accelerator at Darmstadt is being designed for a comprehensive measurement of hadron and lepton production in heavy-ion collisions from 8-45 AGeV beam energy, producing events with large track multiplicity and high hit density. The setup consists of several detectors, including the silicon tracking system (STS) placed in a...Go to contribution page
-
Dr Christopher Jones (CORNELL UNIVERSITY)03/11/2008, 17:25Event displays in HEP are used for many different purposes, e.g. algorithm debugging, commissioning, geometry checking and physics studies. The physics studies case is unique: few users are likely to become experts on the event display, the breadth of information all such users will want to see is quite large although any one user may only want a small subset of it, and the best...Go to contribution page
-
Dr Stuart Wakefield (Imperial College London)03/11/2008, 17:25From its conception the job management system has been distributed to increase scalability and robustness. The system consists of several applications (called prodagents) which each manage Monte Carlo, reconstruction and skimming jobs on collections of sites within different Grid environments (OSG, NorduGrid, LCG) and submission systems (GlideIn, local batch, etc.). Production of...Go to contribution page
-
Mr Christophe Saout (CMS, CERN & IEKP, University of Karlsruhe)03/11/2008, 17:50The CMS Offline software contains a widespread set of algorithms to identify jets originating from the weak decay of b-quarks. Different physical properties of b-hadron decays like lifetime information, secondary vertices and soft leptons are exploited. The variety of selection algorithms range from simple and robust ones, suitable for early data-taking and online environments as the trigger...Go to contribution page
-
Dr Elena Solfaroli (INFN RomaI & Universita' di Roma La Sapienza), Dr Monica Verducci (INFN RomaI)03/11/2008, 17:50ATLAS is a large multipurpose detector, presently in the final phase of construction at LHC, the CERN Large Hadron Collider accelerator. In ATLAS the muon detection is performed by a huge magnetic spectrometer, built with the Monitored Drift Tube (MDT) technology. It consists of more than 1,000 chambers and 350,000 drift tubes, which have to be controlled to a spatial accuracy better...Go to contribution page
-
Tony Johnson (SLAC)04/11/2008, 09:00This talk will give a brief overview of the features of Java which make it well suited for use in High-Energy and Astro-physics, including recent enhancements such as the addition of parameterized types and advanced concurrency utilities, and its release as an open-source (GPL) product. I will discuss the current status of a number of Java based tools for High-Energy and Astro-physics...Go to contribution page
-
Mr Chris Lattner04/11/2008, 09:40This talk gives a high level introduction to the LLVM Compiler System (http://llvm.org/), which supports high performance compilation of C and C++ code, as well as adaptive runtime optimization and code generation. Using LLVM as a drop-in replacement for GCC offers several advantages, such as being able to optimize across files in your application, producing better generated code performance,...Go to contribution page
-
Dr Gerardo Ganis (CERN)04/11/2008, 10:40In this talk we describe the latest developments in the PROOF system. PROOF is the parallel extension of ROOT and allows large datasets to be processed in parallel on large clusters and/or multi-core machines. The recent developments have focused on readying PROOF for the imminent data analysis tasks of the LHC experiments. Main improvements have been made in the areas of overall robustness...Go to contribution page
-
Iosif Legrand (CALTECH)04/11/2008, 11:20The MonALISA (Monitoring Agents in A Large Integrated Services Architecture) framework provides a set of distributed services for monitoring, control, management and global optimization for large scale distributed systems. It is based on an ensemble of autonomous, multi-threaded, agent-based subsystems which are registered as dynamic services. They can be automatically discovered and used by...Go to contribution page
-
Dr Akira Shibata (New York University)04/11/2008, 14:00An impressive amount of effort has been put in to realize a set of frameworks to support analysis in this new paradigm of GRID computing. However, much more than half of a physicist's time is typically spent after the GRID processing of the data. Due to the private nature of this level of analysis, there has been little common framework or methodology. While most physicists agree to use...Go to contribution page
-
Dr Andy Buckley (Durham University)04/11/2008, 14:00 - 3. Computation in Theoretical Physics, Parallel Talk - Event generator programs are a ubiquitous feature of modern particle physics, since the ability to produce exclusive, unweighted simulations of high-energy events is necessary for design of detectors, analysis methods and understanding of SM backgrounds. However --- particularly in the non-perturbative areas of physics simulated by shower+hadronisation event generators --- there are many...Go to contribution page
-
Tatsiana Klimkovich (RWTH-Aachen)04/11/2008, 14:00VISPA is a novel graphical development environment for physics analysis, following an experiment-independent approach. It introduces a new way of steering a physics data analysis, combining graphical and textual programming. The purpose is to speed up the design of an analysis, and to facilitate its control. As the software basis for VISPA the C++ toolkit Physics eXtension Library (PXL) is...Go to contribution page
-
Guido Negri (Unknown)04/11/2008, 14:25The LHC machine has just started operations. Very soon, Petabytes of data from the ATLAS detector will need to be processed, distributed worldwide, re-processed and finally analyzed. This data-intensive physics analysis chain relies on a fabric of computer centers on three different sub-grids: the Open Science Grid, the LHC Computing Grid and the Nordugrid Data Facility--all part of the...Go to contribution page
-
Alexandre Vaniachine (Argonne National Laboratory)04/11/2008, 14:25HEP experiments at the LHC store petabytes of data in ROOT files described with TAG metadata. The LHC experiments have challenging goals for efficient access to this data. Physicists need to be able to compose a metadata query and rapidly retrieve the set of matching events. Such skimming operations will be the first step in the analysis of LHC data, and improved efficiency will facilitate the...Go to contribution page
-
Dr Paolo Bartalini (CERN)04/11/2008, 14:25 - 3. Computation in Theoretical Physics, Parallel Talk - The CMS collaboration supports a wide spectrum of Monte Carlo generator packages in its official production, each of them requiring a dedicated software integration and physics validation effort. We report on the progress of the usage of these external programs with particular emphasis on the handling and tuning of the Matrix Element tools. The first integration tests in a large scale...Go to contribution page
-
Dr Mikhail Kirsanov (Institute for Nuclear Research (INR), Moscow)04/11/2008, 14:50 - 3. Computation in Theoretical Physics, Parallel Talk - The Generator Services project collaborates with the Monte Carlo generator authors and with the LHC experiments in order to prepare validated LCG-compliant code for both the theoretical and the experimental communities at the LHC. On the one side it provides technical support as far as the installation and the maintenance of the generator packages on the supported platforms is...Go to contribution page
-
Anna Kreshuk (GSI)04/11/2008, 14:50This presentation discusses activities at GSI to support interactive data analysis for the LHC experiment ALICE. GSI is a tier-2 centre for ALICE. One focus is a setup where it is possible to dynamically switch the resources between jobs from the Grid, jobs from the local batch system and the GSI Analysis Facility (GSIAF), a PROOF farm for fast interactive analysis. The second emphasis is on...Go to contribution page
-
Ian Fisk (Fermi National Accelerator Laboratory (FNAL))04/11/2008, 14:50The CMS Tier 0 is responsible for handling the data in the first period of its life, from being written to a disk buffer at the CMS experiment site in Cessy by the DAQ system, to the time the transfer from CERN to one of the Tier1 computing centres completes. It contains all automatic data movement, archival and processing tasks run at CERN. This includes the bulk transfers of data from Cessy...Go to contribution page
-
Axel Naumann (CERN)04/11/2008, 15:15High performance computing with a large code base and C++ has proved to be a good combination. But when it comes to storing data, C++ is a really bad choice: it offers no support for serialization, type definitions are amazingly complex to parse, and the dependency analysis (what does object A need to be stored?) is incredibly difficult. Nevertheless, the LHC data consists of C++ objects that...Go to contribution page
-
Mr Ricky Egeland (University of Minnesota - Twin Cities, Minneapolis, MN, USA)04/11/2008, 15:15The CMS PhEDEx (Physics Experiment Data Export) project is responsible for facilitating large-scale data transfers across the grid ensuring transfer reliability, enforcing data placement policy, and accurately reporting results and performance statistics. The system has evolved considerably since its creation in 2004, and has been used daily by CMS since then. Currently CMS tracks over 2 PB of...Go to contribution page
-
Mr Sergey Belov (JINR, Dubna)04/11/2008, 15:15 - 3. Computation in Theoretical Physics, Parallel Talk - In this talk we present a way of making the Monte Carlo simulation chain fully automated. In recent years there has been a need for a common place to store sophisticated MC event samples prepared by experienced theorists. Such samples should also be accessible in some standard manner, to be easily imported and used in the experiments' software. The main motivation behind the LCG MCDB project is to...Go to contribution page
-
Mr John Alison (Department of Physics and Astronomy, University of Pennsylvania)04/11/2008, 16:10CERN's Large Hadron Collider (LHC) is the world's largest particle accelerator. It will collide two proton beams at an unprecedented center of mass energy of 14 TeV, and first colliding beams are expected during summer 2008. ATLAS is one of the two general purpose experiments that will record the decay products of the proton-proton collisions. ATLAS is equipped with a charged-particle...Go to contribution page
-
Dr Ian Fisk (Fermi National Accelerator Laboratory (FNAL))04/11/2008, 16:10In this presentation we will discuss the early experience with the CMS computing model, from the last large-scale challenge activities to the first days of data taking. The current version of the CMS computing model was developed in 2004 with a focus on steady-state running. In 2008 a revision of the model was made to concentrate on the unique challenges associated with the commissioning period....Go to contribution page
-
Prof. Vladimir Ivantchenko (CERN, ESA)04/11/2008, 16:10 - 3. Computation in Theoretical Physics, Parallel Talk - An overview of recent developments in Geant4 hadronic modeling is provided, with a focus on the start of the LHC experiments. Improvements in the Pre-Compound model, Binary and Bertini cascades, models of elastic scattering, quark-gluon string and Fritiof high-energy models, and low-energy neutron transport were introduced using validation against data from thin-target experiments. Many of...Go to contribution page
-
Dr Lorenzo Moneta (CERN)04/11/2008, 16:35Advanced mathematical and statistical computational methods are required by the LHC experiments for analyzing their data. Some of these methods are provided by the Math work package of the ROOT project, a C++ Object Oriented framework for large scale data handling applications. We present in detail the recent developments of this work package, in particular the recent improvements in the...Go to contribution page
-
Dr Sergey Bityukov (INSTITUTE FOR HIGH ENERGY PHYSICS, PROTVINO)04/11/2008, 16:35 - 3. Computation in Theoretical Physics, Parallel Talk - We compare two approaches to the combining of signal significances: the approach in which the signal significances are considered as corresponding random variables, and the approach using confidence distributions. Several signal significances, which are often used in the analysis of data in experimental physics as a measure of the excess of the observed or expected number of...Go to contribution page
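One common recipe in the spirit of the first approach (significances as random variables) is the Stouffer-style combination: each channel's significance is treated as an independent standard-normal variate, summed, and rescaled back to unit variance. The abstract does not state which prescription the talk uses, so this sketch is purely illustrative:

```python
import math

def combine_significances(z_values):
    """Stouffer-style combination of independent significances:
    sum of standard-normal z-values, rescaled to unit variance."""
    return sum(z_values) / math.sqrt(len(z_values))

# Two independent channels with 3-sigma and 4-sigma excesses combine
# to roughly 4.95 sigma:
z_comb = combine_significances([3.0, 4.0])
```

Note the combined value exceeds either input but is below their plain sum; correlated channels or asymmetric significance definitions require the more careful treatments the talk compares.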
-
Mr Michal ZEROLA (Nuclear Physics Inst., Academy of Sciences, Praha)04/11/2008, 16:35Efficient data movement is one of the most essential aspects of a distributed environment, needed both to achieve fast and coordinated data transfer to collaborating sites and to create a distribution of data over multiple sites. With such capabilities at hand, truly distributed task scheduling with minimal latencies would be reachable by internationally distributed collaborations (such...Go to contribution page
-
Ms Sonia Khatchadourian (ETIS - UMR CNRS 8051)04/11/2008, 17:00The HESS project is a major international experiment currently performed in gamma astronomy. This project relies on a system of four Cherenkov telescopes enabling the observation of cosmic gamma rays. The outstanding performance obtained so far in the HESS experiment has led the research labs involved in this project to improve the existing system: an additional telescope is currently...Go to contribution page
-
Dr Valeri FINE (BROOKHAVEN NATIONAL LABORATORY)04/11/2008, 17:00With the era of multi-core CPUs, software parallelism is becoming both affordable as well as a practical need. Especially interesting is to re-evaluate the adaptability of the high energy and nuclear physics sophisticated, but time-consuming, event reconstruction frameworks to the reality of the multi-threaded environment. The STAR offline OO ROOT-based framework implements a well known...Go to contribution page
-
Dr Andrej Arbuzov (Joint Institute for Nuclear Research (JINR))04/11/2008, 17:00 - 3. Computation in Theoretical Physics, Parallel Talk - Radiative corrections to processes of single Z and W boson production are obtained within the SANC computer system. Interplay of one-loop QCD and electroweak corrections is studied. Higher order QED final state radiation is taken into account. Monte Carlo event generators at the hadronic level are constructed. Matching with general purpose programs like HERWIG and PYTHIA is performed to...Go to contribution page
-
Miroslav Morhac (Institute of Physics, Slovak Academy of Sciences)04/11/2008, 17:25The accuracy and reliability of the analysis of spectroscopic data depend critically on its treatment: strong peak overlaps must be resolved, continuum background contributions accounted for, and artifacts of the responses of some detector types distinguished. Analysis of spectroscopic data can be divided into 1. estimation of peak positions (peak searching) and 2. fitting of peak...Go to contribution page
-
Mr Andreas Joachim Peters (CERN)04/11/2008, 17:25One of the biggest challenges in LHC experiments at CERN is data management for data analysis. Event tags and iterative looping over datasets for physics analysis require many file opens per second and (mainly forward) seeking access. Analyses will typically access large datasets reading terabytes in a single iteration. A large user community requires policies for space management and a...Go to contribution page
-
Mr Tim Muenchen (Bergische Universitaet Wuppertal)04/11/2008, 17:50As the Large Hadron Collider (LHC) at CERN, Geneva, has begun operation in September, the large-scale computing grid LCG (LHC Computing Grid) is meant to process and store the large amounts of data created in simulating, measuring and analyzing particle physics experimental data. Data acquired by ATLAS, one of the four big experiments at the LHC, are analyzed using compute jobs running on the...Go to contribution page
-
Dr Alexander Sherstnev (University of Oxford)05/11/2008, 09:00We present a new version of the CompHEP program package, version 4.5. We describe shortly new techniques and options implemented: interfaces to ROOT and HERWIG, generation of the XML-based header in event files (HepML), full implementation of Les Houches agreements (LHA I, SUSY LHA, LHA PDF, Les Houches events), realisation of the improved von Neumann procedure for the event generation, etc....Go to contribution page
-
Dr Harrison Prosper (Department of Physics, Florida State University)05/11/2008, 09:40Multivariate methods are used routinely in particle physics research to classify objects or to discriminate signal from background. They have also been used successfully to approximate multivariate functions. Moreover, as is evident from this conference, excellent easy-to-use implementations of these methods exist, making it possible for everyone to deploy these sophisticated methods. From...Go to contribution page
-
Dr Thomas Binoth (University of Edinburgh)05/11/2008, 10:40In this talk I will motivate that a successful description of LHC physics needs the inclusion of higher-order corrections for all kinds of signal and background processes. In the case of multi-particle production, the combinatorial complexity of standard approaches has triggered many new developments which allow for the efficient evaluation of one-loop amplitudes for LHC phenomenology. I will...Go to contribution page
-
Paolo Tonella (FBK-IRST)05/11/2008, 11:20Code quality has traditionally been decomposed into internal and external quality. In this talk, I will discuss the differences between these two views and I will consider the contexts in which either of the two becomes the main quality goal. I will argue that for physics software the programmer's perspective, focused on the internal quality, is the most important one. Then, I will revise...Go to contribution page
-
Dr Giulio Palombo (University of Milan - Bicocca)05/11/2008, 14:00Datasets in modern High Energy Physics (HEP) experiments are often described by dozens or even hundreds of input variables (features). Reducing a full feature set to a subset that most completely represents information about data is therefore an important task in analysis of HEP data. We compare various feature selection algorithms for supervised learning using several datasets such as,...Go to contribution page
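A minimal example of the filter style of feature selection: each variable is scored by a simple separation measure (difference of class means over the pooled spread) and the variables are ranked by score. This is only a toy stand-in for the supervised-learning algorithms the talk compares, and all function names and data are made up for illustration:

```python
def separation(signal_vals, background_vals):
    """Per-feature separation score: |mean difference| / pooled RMS."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    pooled = (var(signal_vals) + var(background_vals)) ** 0.5
    if pooled == 0.0:
        return 0.0
    return abs(mean(signal_vals) - mean(background_vals)) / pooled

def rank_features(signal, background):
    """Rank feature indices by decreasing separation power."""
    n_feat = len(signal[0])
    scores = [separation([ev[i] for ev in signal],
                         [ev[i] for ev in background]) for i in range(n_feat)]
    return sorted(range(n_feat), key=lambda i: scores[i], reverse=True)

# Feature 1 separates the toy classes, feature 0 does not:
signal     = [(1.0, 5.0), (1.1, 5.2), (0.9, 4.8)]
background = [(1.0, 1.0), (1.1, 1.2), (0.9, 0.8)]
order = rank_features(signal, background)
```

Filter methods like this score each variable independently and cheaply; wrapper methods, by contrast, evaluate variable subsets through the classifier itself, which is the kind of trade-off such comparisons examine.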
-
Mikhail Tentyukov (Karlsruhe University)05/11/2008, 14:00 - 3. Computation in Theoretical Physics, Parallel Talk - We report on the status of the current development in parallelization of the symbolic manipulation system FORM. Most existing FORM programs will be able to take advantage of the parallel execution, without the need for modifications.Go to contribution page
-
Dr Andrea Sciaba' (CERN, Geneva, Switzerland)05/11/2008, 14:00The computing system of the CMS experiment works using distributed resources from more than 80 computing centres worldwide. These centres, located in Europe, America and Asia are interconnected by the Worldwide LHC Computing Grid. The operation of the system requires a stable and reliable behaviour of the underlying infrastructure. CMS has established a procedure to extensively test all...Go to contribution page
-
Dr Takahiro Ueda (KEK)05/11/2008, 14:25 - 3. Computation in Theoretical Physics, Parallel Talk - Nowadays the sector decomposition technique, which can isolate divergences from parametric representations of integrals, has become quite a useful tool for numerical evaluation of Feynman loop integrals. It is used to verify the analytical results of multi-loop integrals in the Euclidean region, or in some cases is practically used in the physical region by combining with other methods handling...Go to contribution page
-
Dr Marcin Wolter (Henryk Niewodniczanski Institute of Nuclear Physics PAN)05/11/2008, 14:25Tau leptons will play an important role in the physics program at the LHC. They will not only be used in electroweak measurements and in detector related studies like the determination of the E_T^miss scale, but also in searches for new phenomena like the Higgs boson or Supersymmetry. Due to the overwhelming background from QCD processes, highly efficient algorithms are essential to...Go to contribution page
-
Dr André dos Anjos (University of Wisconsin, Madison, USA), 05/11/2008, 14:25: The DAQ/HLT system of the ATLAS experiment at CERN, Switzerland, is being commissioned for first collisions in 2009. Presently, the system is composed of an already very large farm of computers that accounts for about one-third of its event processing capacity. Event selection is conducted in two steps after the hardware-based Level-1 Trigger: a Level-2 Trigger processes detector data based on...
-
Thomas Hahn (MPI Munich), 05/11/2008, 14:50, 3. Computation in Theoretical Physics (Parallel Talk): The talk will cover the latest version of the Feynman-diagram calculator FormCalc. The most significant improvement is the communication of intermediate expressions from FORM to Mathematica and back, for the primary purpose of introducing abbreviations at an early stage. Thus, longer expressions can be treated, and a severe bottleneck, in particular for processes with high multiplicities, is removed.
-
Dr Fabrizio Furano (Conseil Europeen Recherche Nucl. (CERN)), 05/11/2008, 14:50: In this talk we address the way the ALICE Offline Computing is starting to exploit the possibilities given by the Scalla/Xrootd repository globalization tools. These tools are quite general and can be adapted to many situations without disrupting existing designs, while adding a level of coordination among xrootd-based storage clusters and the ability to interact between them.
-
Alexander Kryukov (Skobeltsyn Institute for Nuclear Physics, Moscow State University), 05/11/2008, 14:50, 1. Computing Technology: Grid systems are used for calculations and data processing in various applied areas such as biomedicine, nanotechnology and materials science, cosmophysics and high energy physics, as well as in a number of industrial and commercial areas. However, one of the basic problems standing in the way of wide use of grid systems is related to the fact that applied jobs, as a rule, are developed for...
-
Dr Fukuko Yuasa (KEK), 05/11/2008, 15:15, 3. Computation in Theoretical Physics (Parallel Talk): We apply a 'Direct Computation Method', which is purely numerical, to evaluate Feynman integrals. This method is based on the combination of an efficient numerical integration and an efficient extrapolation strategy. In addition, high-precision arithmetic and parallelization techniques can be used if required. We present our recent progress in the development of this method and show...
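The "numerical integration plus extrapolation" strategy can be illustrated by classic Romberg integration, where trapezoid estimates at successively halved step sizes are Richardson-extrapolated to zero step size. This is only a generic sketch of the integrate-then-extrapolate idea (the function names are mine), not the specific algorithm of the talk:

```python
import math

def trapezoid(f, a, b, n):
    # composite trapezoid rule with n panels
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def romberg(f, a, b, levels=6):
    # R[k][0] is the trapezoid estimate with 2^k panels; Richardson
    # extrapolation then removes the h^2, h^4, ... error terms column
    # by column, accelerating convergence dramatically.
    R = [[trapezoid(f, a, b, 2 ** k)] for k in range(levels)]
    for k in range(1, levels):
        for j in range(1, k + 1):
            R[k].append(R[k][j - 1] + (R[k][j - 1] - R[k - 1][j - 1]) / (4 ** j - 1))
    return R[-1][-1]

print(romberg(math.sin, 0.0, math.pi))  # close to the exact value 2.0
```

With only 33 function evaluations the extrapolated result is accurate to many digits, whereas the raw trapezoid estimate at the same cost is only good to a few.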
-
David Cameron (University of Oslo), 05/11/2008, 15:15: The NorduGrid collaboration and its middleware product, ARC (the Advanced Resource Connector), span institutions in Scandinavia and several other countries in Europe and the rest of the world. The innovative nature of the ARC design and flexible, lightweight distribution make it an ideal choice to connect heterogeneous distributed resources for use by HEP and non-HEP applications alike. ARC...
-
Dr Jerzy Nogiec (Fermi National Accelerator Laboratory), 05/11/2008, 15:15: Accelerator R&D environments produce data characterized by different levels of organization. Whereas some systems produce repetitively predictable and standardized structured data, others may produce data of unknown or changing structure. In addition, structured data, typically sets of numeric values, are frequently logically connected with unstructured content (e.g., images, graphs,...
-
Dr Alfio Lazzaro (Universita' degli Studi and INFN, Milano), 05/11/2008, 16:10: MINUIT is the most common package used in high energy physics for the numerical minimization of multi-dimensional functions. The major algorithm of this package, MIGRAD, searches for the minimum by using the function gradient. For each minimization iteration, MIGRAD requires the calculation of the first derivatives for each parameter of the function to be minimized. In this presentation we will...
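The per-parameter derivative cost described above is easy to see in a finite-difference gradient, which is the kind of numerical differentiation MIGRAD relies on when no analytic gradient is supplied. A minimal sketch (the helper name and the toy function are mine, not MINUIT's API):

```python
def numerical_gradient(f, x, h=1e-6):
    # Central finite differences: two function evaluations per parameter,
    # which is why the gradient cost grows linearly with the number of
    # parameters -- the motivation for parallelizing this step.
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2.0 * h))
    return grad

# toy chi^2-like function with its minimum at (1, 2)
f = lambda p: (p[0] - 1.0) ** 2 + 10.0 * (p[1] - 2.0) ** 2
print(numerical_gradient(f, [3.0, 3.0]))  # approximately [4.0, 20.0]
```

Since each parameter's derivative is independent of the others, the loop over `i` is trivially parallelizable — each worker perturbs one coordinate.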
-
Dr Yoshimasa Kurihara (KEK), 05/11/2008, 16:10, 3. Computation in Theoretical Physics (Parallel Talk): Multiple polylog functions (MPLs) often appear as a result of Feynman parameter integrals in higher-order corrections in quantum field theory. Numerical evaluation of MPLs with higher depth and weight is necessary for multi-loop calculations. We propose a purely numerical method to evaluate MPLs using numerical contour integration in the multi-parameter complex plane. We can obtain values of MPLs...
-
Dr Michela Biglietti (University of Napoli and INFN), 05/11/2008, 16:35: The ATLAS trigger system is designed to select rare physics processes of interest from an extremely high rate of proton-proton collisions, reducing the LHC incoming rate by a factor of about 10^7. The short LHC bunch crossing period of 25 ns and the large background of soft-scattering events overlapping in each bunch crossing pose serious challenges, both on hardware and software, that the ATLAS trigger...
-
Dr Mohammad Al-Turany (GSI Darmstadt), 05/11/2008, 16:35: The new developments in the FairRoot framework will be presented. FairRoot is the simulation and analysis framework used by the CBM and PANDA experiments at FAIR/GSI. The CMake-based building and testing system will be described. A new event display based on the EVE package from ROOT and Geane will be shown, and the new developments for using GPUs and multi-core systems will also be discussed.
-
Tord Riemann (DESY), 05/11/2008, 16:35, 3. Computation in Theoretical Physics (Parallel Talk): We present some recent results on the evaluation of massive one-loop multileg Feynman integrals, which are of relevance for LHC processes. An efficient complete analytical tensor reduction was derived and implemented in a Mathematica package, hexagon.m. Alternatively, one may use Mellin-Barnes techniques in order to avoid the tensor reduction. We shortly report on a new version of the...
-
Dr Mikhail Kalmykov (Hamburg U./JINR), 05/11/2008, 17:00, 3. Computation in Theoretical Physics (Parallel Talk): Recent results on the manipulation of hypergeometric functions, namely the reduction and the construction of higher-order terms in the epsilon-expansion, are reviewed. The application of this technique to the analytical evaluation of Feynman diagrams is considered.
-
Mr Danilo Enoque Ferreira De Lima (Federal University of Rio de Janeiro (UFRJ) - COPPE/Poli), 05/11/2008, 17:00: The ATLAS trigger system is responsible for selecting the interesting collision events delivered by the Large Hadron Collider (LHC). The ATLAS trigger will need to achieve a rejection factor of about 10^7 against random proton-proton collisions, and still be able to efficiently select interesting events. After a first processing level based on FPGAs and ASICs, the final event selection is based on...
-
Gero Flucke (Universität Hamburg), 05/11/2008, 17:00: The ultimate performance of the CMS detector relies crucially on precise and prompt alignment and calibration of its components. A sizable number of workflows need to be coordinated and performed with minimal delay through the use of a computing infrastructure which is able to provide the constants for a timely reconstruction of the data for subsequent physics analysis. The framework...
-
Dario Berzano (Istituto Nazionale di Fisica Nucleare (INFN) and University of Torino), 05/11/2008, 17:25: Current Grid deployments for LHC computing (namely the WLCG infrastructure) do not allow efficient parallel interactive processing of data. In order to allow physicists to interactively access subsets of data (e.g. for algorithm tuning and debugging before running over a full dataset), parallel Analysis Facilities based on PROOF have been deployed by the ALICE experiment at CERN and elsewhere...
-
Dr Markward Britsch (Max-Planck-Institut fuer Kernphysik (MPI)), 05/11/2008, 17:25: A large hadron machine like the LHC, with its high track multiplicities, always asks for powerful tools that drastically reduce the large background while selecting signal events efficiently. Such tools are in fact widely needed and used in all parts of particle physics. Given the huge amount of data that will be produced at the LHC, the process of training as well as the process of...
-
Liliana Teodorescu (Brunel University), 05/11/2008, 17:50: In order to address the data analysis challenges imposed by the complexity of the data generated by the current and future particle physics experiments, new techniques for performing various analysis tasks need to be investigated. In 2006 we introduced to the particle physics field one such new technique, based on Gene Expression Programming (GEP), and successfully applied it to an event...
-
David Lange (LLNL), 05/11/2008, 17:50: The offline software suite of the CMS experiment must support the production and analysis activities across a distributed computing environment. This system relies on over 100 external software packages and includes the developments of more than 250 active developers. This system requires consistent and rapid deployment of code releases, a stable code development platform, and efficient tools...
-
Prof. Volker Lindenstruth (Kirchhoff Institute for Physics), 06/11/2008, 09:00: The ALICE High Level Trigger is a high performance computer, set up to process the ALICE on-line data, exceeding 25 GB/s, in real time. The most demanding detector for the event reconstruction is the ALICE TPC. The HLT implements different kinds of processing elements, including AMD and Intel processors, FPGAs and GPUs. The FPGAs perform an on-the-fly cluster reconstruction, and the tracks are...
-
Predrag Buncic (CERN), 06/11/2008, 09:40: CernVM is a Virtual Software Appliance to run physics applications from the LHC experiments at CERN. The virtual appliance provides a complete, portable and easy-to-install-and-configure user environment for developing and running LHC data analysis on any end-user computer (laptop, desktop) and on the Grid, independently of operating system software and hardware platform (Linux, Windows,...
-
Mr Sverre Jarp (CERN), 06/11/2008, 10:40: This talk will start by reminding the audience that Moore's law is very much alive (even after 40+ years of existence). Transistor counts will continue to double with every new silicon generation, every other year. Chip designers are therefore trying every possible "trick" for putting the transistors to good use. The most notable one is to push more parallelism into each CPU: more and longer...
-
Dr Ivan Kisel (Gesellschaft fuer Schwerionenforschung mbH (GSI), Darmstadt, Germany), 06/11/2008, 11:20: On-line processing of the large data volumes produced in modern HEP experiments requires using the maximum capabilities of the computer architecture. One such powerful feature is the SIMD instruction set, which allows several data items to be packed into one register and operated on simultaneously, thus achieving more operations per clock cycle. The novel Cell processor extends the parallelization further by...
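The register-packing idea can be mimicked in plain Python by treating a 128-bit value as four packed single-precision floats and adding them lane by lane, the way a single SSE `addps` instruction would. This is purely illustrative (a real implementation uses compiler intrinsics or vectorized C++, not Python):

```python
import struct

def simd_add_ps(reg_a: bytes, reg_b: bytes) -> bytes:
    # Interpret each 16-byte "register" as four little-endian 32-bit floats,
    # add the corresponding lanes, and repack -- conceptually one instruction
    # doing four additions at once.
    a = struct.unpack('<4f', reg_a)
    b = struct.unpack('<4f', reg_b)
    return struct.pack('<4f', *(x + y for x, y in zip(a, b)))

ra = struct.pack('<4f', 1.0, 2.0, 3.0, 4.0)
rb = struct.pack('<4f', 10.0, 20.0, 30.0, 40.0)
print(struct.unpack('<4f', simd_add_ps(ra, rb)))  # (11.0, 22.0, 33.0, 44.0)
```

Packing four (SSE) or more lanes per register is what yields the multiple-operations-per-cycle speedup the abstract refers to; the Cell's SPEs push the same model further with many such units running in parallel.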
-
Dr Anwar Ghuloum (Intel Corporation), 06/11/2008, 12:00, 1. Computing Technology: Power consumption is the ultimate limiter to current and future processor design, leading us to focus on more power-efficient architectural features such as multiple cores, more powerful vector units, and the use of hardware multi-threading (in place of relatively expensive out-of-order techniques). It is (increasingly) well understood that developers face new challenges with multi-core software...
-
06/11/2008, 14:00
-
Dr Ian Fisk (Fermi National Accelerator Laboratory, Batavia, United States), 07/11/2008, 09:00
-
Dr Thomas Speer (Brown University), 07/11/2008, 09:30, 2. Data Analysis
-
Prof. Kiyoshi Kato (Kogakuin University), 07/11/2008, 10:30
-
Mr Federico Carminati (CERN), 07/11/2008, 11:00
-
Mr Mihai Niculescu (Institute of Space Sciences), 2. Data Analysis (Poster): In this paper we present an integrated system for online Monte Carlo simulations in High Energy Physics. Several Monte Carlo simulation codes will be implemented: GEANT, PYTHIA, FLUKA, HIJING. The system will be structured in several basic modules. The first module will provide the system's web interface and the access to the other modules, and will allow the logging in of many users at the same...
-
Sergei V. Gleyzer (Florida State University), 2. Data Analysis (Poster): The Compact Muon Solenoid (CMS) experiment features an electromagnetic calorimeter (ECAL) composed of lead tungstate crystals and a sampling hadronic calorimeter (HCAL) made of brass and scintillator, along with other detectors. For hadrons, the response of the electromagnetic and hadronic calorimeters is inherently different. Because sampling calorimeters measure a fraction of the energy...
-
Dr Nectarios Benekos (University of Illinois), 2. Data Analysis (Poster): ATLAS is a large multipurpose detector, presently in the final phase of construction at the LHC, the CERN Large Hadron Collider accelerator. In ATLAS the Muon Spectrometer (MS) is optimized to measure final-state muons from 14 TeV proton-proton interactions with a good momentum resolution of 2-3% at 10-100 GeV/c and 10% at 1 TeV, and an efficiency close to 100%, taking into account the high level...
-
Pawel Wolniewicz (PSNC), 1. Computing Technology (Poster): g-Eclipse is an integrated workbench framework to access the power of existing Grid infrastructures. g-Eclipse can be used at the user level or the application level. At the user level, g-Eclipse is a rich client application with a user-friendly interface which allows users to access Grid resources, operators to manage Grid resources, and developers to speed up the development cycle of new Grid...
-
Lee Lueking (Fermilab, Batavia, IL, USA), 1. Computing Technology (Poster): The CMS experiment has implemented a flexible and powerful approach to enable users to find data within the CMS physics data catalog. The Dataset Bookkeeping Service (DBS) comprises a database and the services used to store and access metadata related to its physics data. In addition to the existing Web-based and programmatic API, a generalized query system has been designed and built. This...
-
Mr Luciano Manhaes De Andrade Filho (Universidade Federal do Rio de Janeiro), 2. Data Analysis (Poster): The hadronic calorimeter of ATLAS, TileCal, provides a large number of readout channels (about 10,000). Track detection may therefore be performed by TileCal when cosmic muons cross the detector. The muon track detection has been used extensively in the TileCal commissioning phase, for both energy and timing calibrations, and it will also be important for background noise removal during...
-
Dr Federico Carminati (CERN), Dr Giuliana Galli Carminati, Dr Rene Brun (CERN), 1. Computing Technology (Poster): This poster presents a book, due to be published in 2009, about HEP computing. HEP research has been constantly limited by technology, both in the accelerator and detector domains as well as in that of computing. At the same time, High Energy physicists have greatly contributed to the development of Information Technology. Several developments conceived for HEP have found applications well...
-
Kathleen Knobe (Intel), 1. Computing Technology (Plenary): Concurrent Collections is a different way of writing parallel applications. Its major contribution is to isolate the task of specifying the application semantics from any consideration of its parallel execution. This isolation makes it much easier for the domain expert, the physicist for example, to specify the application. It also makes the task of the tuning expert, mapping the application...
-
Mrs Maaike Limper (NIKHEF), 2. Data Analysis (Poster): On behalf of the ATLAS Collaboration. The ATLAS collaboration at the Large Hadron Collider at CERN intends to study a variety of final states produced in proton-proton collisions at the energy of 14 TeV. The precise reconstruction of the trajectories of charged and neutral particles, including those which underwent decays, is crucial for many physics analyses. In addition, a study of...
-
Carlos Aguado Sanchez (CERN), 1. Computing Technology (Poster): The CernVM Virtual Software Appliance contains a minimal operating system sufficient to host the application frameworks developed by the LHC experiments. In the CernVM model the experiment application software and its dependencies are built independently of the CernVM Virtual Machine. The procedures for building, installing and validating each software release remain in the hands and under...
-
Dr Matevz Tadel (CERN), 2. Data Analysis (Live Demo): EVE is a high-level environment using ROOT's data-processing, GUI and OpenGL interfaces. It can serve as a framework for object management, offering hierarchical data organization, object interaction and visualization via GUI and OpenGL representations, and automatic creation of 2D projected views. On the other hand, it can serve as a toolkit satisfying most HEP requirements, allowing...
-
Mr Loic Quertenmont (Universite Catholique de Louvain), 2. Data Analysis (Poster): FROG is a generic framework dedicated to visualizing events in a given geometry. It is written in C++ and uses the cross-platform OpenGL libraries. It can be applied to any particular physics experiment or detector design. The code is very light and very fast and can run on various Operating Systems. Moreover, FROG is self-consistent and does not require installation of ROOT or...
-
Alberto Falzone (NICE srl), Giuseppe La Rocca (Istituto Nazionale di Fisica Nucleare (INFN) Sez. Catania, Italy), Nicola Venuti (NICE srl), Roberto Barbera (University of Catania and INFN, Italy), Valeria Ardizzone (Istituto Nazionale di Fisica Nucleare (INFN) Sez. Catania, Italy), 1. Computing Technology (Poster): In order to address new challenges in modern e-Science and technological developments, the need for transparent access to the distributed computational and storage resources within the grid paradigm is becoming of particular importance for different applications and communities. So far, the basic know-how required to access the grid infrastructures is not so easy, especially for not...
-
Prof. Nikolai Gagunashvili (University of Akureyri, Iceland), 2. Data Analysis (Poster): Weighted histograms in Monte-Carlo simulations are often used for the estimation of probability density functions. They are obtained as the result of a random experiment with random events that have weights. In this paper the bin contents of a weighted histogram are considered as a sum of random variables with a random number of terms. Goodness-of-fit tests for weighted histograms and for weighted...
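The basic ingredients of any such test are, per bin, the sum of weights (the bin content) and the sum of squared weights (its variance estimate). The sketch below builds a naive chi-square-like statistic from those two sums; it is my own illustration of the setup, not the estimator proposed in the paper:

```python
import random

def weighted_hist_chi2(samples_with_weights, edges, expected_fractions):
    # Each bin accumulates the sum of weights W_i and of squared weights
    # s2_i; s2_i estimates Var(W_i), giving a naive variance-weighted
    # residual statistic against a model predicting fractions p_i.
    nbins = len(edges) - 1
    W, s2 = [0.0] * nbins, [0.0] * nbins
    total = 0.0
    for x, w in samples_with_weights:
        for i in range(nbins):
            if edges[i] <= x < edges[i + 1]:
                W[i] += w
                s2[i] += w * w
                break
        total += w
    return sum((W[i] - total * expected_fractions[i]) ** 2 / s2[i]
               for i in range(nbins) if s2[i] > 0)

random.seed(1)
# uniform positions in [0, 1) with random weights in [0.5, 1.5)
data = [(random.random(), 0.5 + random.random()) for _ in range(10000)]
edges = [0.0, 0.25, 0.5, 0.75, 1.0]
stat = weighted_hist_chi2(data, edges, [0.25] * 4)
print(stat)  # small when the data match the uniform model
```

A proper test must also account for the multinomial correlations between bins, which is exactly the kind of refinement the paper addresses.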
-
Mr Mario Lassnig (CERN & University of Innsbruck, Austria), Mr Mark Michael Hall (Cardiff University, Wales, UK), 1. Computing Technology (Poster): In highly data-driven environments such as the LHC experiments, a reliable and high-performance distributed data management system is a primary requirement. Existing work shows that intelligent data replication is the key to achieving such a system, but current distributed middleware replication strategies rely mostly on computing, network and storage properties when deciding how...
-
Mr Eduardo Simas (Federal University of Rio de Janeiro), 2. Data Analysis (Poster): The ATLAS online trigger system has three filtering levels and accesses information from the calorimeters, muon chambers and the tracking system. The electron/jet channel is very important for the triggering system performance, as Higgs signatures may be found efficiently through decays that produce electrons as final-state particles. Electron/jet separation relies very much on calorimeter...
-
Mr Alexander Ayriyan (JINR), 1. Computing Technology (Poster): The CICC JINR cluster was installed in 2007-2008, increasing computational power and disk space. It is generally used for distributed computing as part of the Russian Data Intensive Grid (EGEE-RDIG) to work in the LHC Computing Grid (LCG). With the superblade modules installed in mid-May 2008, the CICC JINR cluster reached a heterogeneous 560-core structure. The system consists...
-
Eygene Ryabinkin (Russian Research Centre "Kurchatov Institute"), 1. Computing Technology (Poster): The major subject of this talk is a status report on distributed computing for the ALICE experiment at Russian sites just before and at the time of data taking at the Large Hadron Collider at CERN. We present the usage of the ALICE application software, AliEn [1], on top of the modern EGEE middleware gLite for simulation and data analysis in the experiment...
-
Alberto Pulvirenti (University of Catania - INFN Catania), 2. Data Analysis (Poster): ALICE is the LHC experiment most specifically aimed at studying the hot and dense nuclear matter produced in Pb-Pb collisions at 5.5 TeV, in order to investigate the properties of the Quark-Gluon Plasma, whose formation is expected in such conditions. Among the physics topics of interest within this experiment, resonances play a fundamental role, since they allow one to probe the chiral...
-
Benedikt Hegner (CERN), 1. Computing Technology (Poster): The CMS experiment at the LHC has a very large body of software of its own and uses extensively software from outside the experiment. Ensuring the software quality of such a large project requires checking and testing at every level of complexity. The aim is to give the developers very quick feedback on all the relevant CMS offline workflows during the (twice daily) Integration Builds. In addition...
-
Dr Victor Eduardo Bazterra (Univ Illinois at Chicago), 2. Data Analysis (Poster): The CMS Collaboration is studying several algorithms to discriminate jets coming from the hadronization of b quarks from the lighter background. These will be used to identify top quarks and in searches for the Higgs boson and non-Standard Model processes. A reliable estimate of the performance of these algorithms is therefore crucial, and methods to estimate efficiencies and mistag rates...
-
Sergio Grancagnolo (INFN & University Lecce), 2. Data Analysis (Poster): The ATLAS trigger system has a three-level structure, implemented to retain interesting physics events, here described for the muon case ("Muon Vertical Slice"). The first level, implemented in custom hardware, uses measurements from the trigger chambers of the Muon Spectrometer to select muons with high transverse momentum and defines a Region of Interest (RoI) in the detector. RoIs are...
-
Miroslav Morhac (Institute of Physics, Slovak Academy of Sciences), 2. Data Analysis (Poster): Visualization is one of the most powerful and direct ways in which the huge amount of information contained in multidimensional histograms can be conveyed in a form comprehensible to the human eye. With the increasing dimensionality of histograms (nuclear spectra), the requirements on the development of multidimensional scalar visualization techniques become striking. In this contribution we present a...