13–17 Feb 2006
Tata Institute of Fundamental Research
Europe/Zurich timezone

Contribution List

441 out of 441 displayed
  1. 13/02/2006, 09:30
    Plenary
  2. Dr Jos Engelen (CERN)
    13/02/2006, 10:00
    Plenary
    oral presentation
  3. Garzoglio Gabriele (FERMI NATIONAL ACCELERATOR LABORATORY)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    In 2005, the DZero Data Reconstruction project processed 250 terabytes of data on the Grid, using 1,600 CPU-years of computing cycles in 6 months. The large computational task required a high level of refinement of the SAM-Grid system, the integrated data, job, and information management infrastructure of the RunII experiments at Fermilab. The success of the project was in part due to the...
  4. Mr Lars Schley (University Dortmund, IRF-IT, Germany)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    This paper discusses an architectural approach to enhance job scheduling in data intensive applications in HEP computing. First, a brief introduction to the current grid system based on LCG/gLite is given, current bottlenecks are identified and possible extensions to the system are described. We will propose an extended scheduling architecture, which adds a scheduling framework on top...
  5. Dr Silvio Pardi (DIPARTIMENTO DI MATEMATICA ED APPLICAZIONI "R.CACCIOPPOLI")
    13/02/2006, 11:00
    Computing Facilities and Networking
    poster
    The INFN-GRID project allows experimenting with and testing many different and innovative solutions in the GRID environment. In this research and development it is important to find the most useful solutions for simplifying the management of, and access to, the resources. In the VIRGO laboratory in Napoli we have tested a non-standard implementation based on LCG 2.6.0 by using a diskless solution...
  6. Mieczyslaw Krasny (LPNHE, University Paris)
    13/02/2006, 11:00
    Online Computing
    poster
    Traditionally, in the pre-LHC multi-purpose high-energy experiments the diversification of their physics programs has been largely decoupled from the process of data-taking - physics groups could only influence the selection criteria of recorded events according to predefined trigger menus. In particular, the physics-oriented choice of subdetector data and the implementation of refined...
  7. Mr Andreas Wildauer (UNIVERSITY OF INNSBRUCK)
    13/02/2006, 11:00
    Event processing applications
    poster
    The design of a general jet tagging algorithm for the ATLAS detector reconstruction software is presented. For many physics analyses, reliable and efficient flavour identification, 'tagging', of jets is vital in the process of reconstructing the physics content of the event. To allow for a broad range of identification methods emphasis is put on the flexibility of the framework. A...
  8. Dr Rosa Palmiero (INFN and University of Naples)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    Grid technology is attracting a lot of interest, involving hundreds of researchers and software engineers around the world. The characteristics of the Grid demand the development of a suitable monitoring system able to obtain the significant information needed to make management decisions and control system behaviour. In this paper we analyse a formal declarative interpreted...
  9. Mr William Tomlin (CERN)
    13/02/2006, 11:00
    Computing Facilities and Networking
    poster
    The collaboration between BARC and CERN is driving a series of enhancements to ELFms [1], the fabric management tool-suite developed with support from the HEP community under CERN's coordination. ELFms components are used in production at CERN and at a large number of other HEP sites for automatically installing, configuring and monitoring hundreds of clusters comprising thousands of...
  10. Mr Sven Karstensen (DESY Hamburg)
    13/02/2006, 11:00
    Computing Facilities and Networking
    poster
    The next generations of large colliders and their experiments will have the advantage that groups from all over the world will participate with their competence to meet the challenges of the future. Therefore it is necessary to become even more global than in the past, giving members the option of remote access to most controlling parts of these facilities. The experience in the past has...
  11. Dr Joachim Flammer (CERN)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    gLite is the next generation middleware for grid computing. Born from the collaborative efforts of more than 80 people in 12 different academic and industrial research centers as part of the EGEE Project, gLite provides a bleeding-edge, best-of-breed framework for building grid applications tapping into the power of distributed computing and storage resources across the Internet....
  12. Dr Alexander Borissov (University of Glasgow, Scotland, UK)
    13/02/2006, 11:00
    Event processing applications
    poster
    HERMES experiment at DESY has performed extensive measurements on diffractive production of light vector mesons (rho^0, omega, phi) in the intermediate energy region. Spin density matrix elements (SDMEs) were determined for exclusive diffractive rho^0 and phi mesons and compared with results of high energy experiments. Several methods for the extraction of SDMEs have been applied on...
  13. Dr Cristian Stanescu (Istituto Nazionale Fisica Nucleare - Sezione Roma III)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    The data taking of the ARGO-YBJ experiment in Tibet is operational with 54 RPC clusters installed and is moving rapidly to a configuration of more than 100 clusters. The paper describes the processing of the experimental data of this phase, based on a local computer farm. The software developed for data management, job submission and information retrieval is described together with the...
  14. Dr Alessandra Forti (Univ.of Milano Faculty of Art), Dr Chris Brew (CCLRC - RAL)
    13/02/2006, 11:00
    Distributed Event production and processing
    poster
    On behalf of the BaBar Computing Group, we describe enhancements to the BaBar Experiment's distributed Monte Carlo generation system to make use of European and North American GRID resources and present the results with regard to BaBar's latest cycle of Monte Carlo production. We compare job success rates and manageability issues between GRID and non-GRID production and present an investigation...
  15. Dr Michael Ernst (DESY)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    The IT Group at DESY is involved in a variety of projects ranging from Analysis of High Energy Physics Data at the HERA Collider and Synchrotron Radiation facilities to cutting edge computer science experiments focused on grid computing. In support of these activities members of the IT group have developed and deployed a local computational facility which comprises many service nodes,...
  16. Vardan Gyurjyan (JEFFERSON LAB)
    13/02/2006, 11:00
    Online Computing
    poster
    A software-agent-based control system has been implemented to control experiments running on the CLAS detector at Jefferson Lab. Within the CLAS experiments the DAQ, trigger, detector and beam-line control systems are both logically and physically separated, and are implemented independently using a common software infrastructure. The CLAS experimental control system (ECS) was designed, using earlier...
  17. Mr Alexei Sibidanov (Budker Institute of Nuclear Physics)
    13/02/2006, 11:00
    Computing Facilities and Networking
    poster
    CMD-3 is the general-purpose cryogenic magnetic detector for the VEPP-2000 electron-positron collider, which is being commissioned at the Budker Institute of Nuclear Physics (BINP, Novosibirsk, Russia). The main aspects of the physics program of the experiment are the study of known vector mesons and the search for new ones, the study of the ppbar and nnbar production cross sections in the vicinity of the threshold and...
  18. Mr Alexander Zaytsev (Budker Institute of Nuclear Physics (BINP))
    13/02/2006, 11:00
    Event processing applications
    poster
    CMD-3 is the general-purpose cryogenic magnetic detector for the VEPP-2000 electron-positron collider, which is being commissioned at the Budker Institute of Nuclear Physics (BINP, Novosibirsk, Russia). The main aspects of the physics program of the experiment are the study of known vector mesons and the search for new ones, the study of the ppbar and nnbar production cross sections in the vicinity of the threshold and...
  19. Mr Sergey Pirogov (Budker Institute of Nuclear Physics)
    13/02/2006, 11:00
    Event processing applications
    poster
    CMD-3 is the general-purpose cryogenic magnetic detector for the VEPP-2000 electron-positron collider, which is being commissioned at the Budker Institute of Nuclear Physics (BINP, Novosibirsk, Russia). The main aspects of the physics program of the experiment are the study of known vector mesons and the search for new ones, the study of the ppbar and nnbar production cross sections in the vicinity of the threshold...
  20. Mr Elliott Wolin (Jefferson Lab)
    13/02/2006, 11:00
    Online Computing
    poster
    cMsg is a highly extensible open-source framework within which one can deploy multiple underlying interprocess communication systems. It is powerful enough to support asynchronous publish/subscribe communications as well as synchronous peer-to-peer communications. It further includes a proxy system whereby client requests are transported to a remote server that actually connects to the...
  21. Dr Nayana Majumdar (Saha Institute of Nuclear Physics)
    13/02/2006, 11:00
    Event processing applications
    poster
    The three dimensional electrostatic field configuration in a multiwire proportional chamber (MWPC) has been simulated using an efficient boundary element method (BEM) solver set up to solve an integral equation of the first kind. To compute the charge densities over the bounding surfaces representing the system for known potentials, the nearly exact formulation of BEM has been implemented...
  22. Dr Monica Verducci (European Organization for Nuclear Research (CERN))
    13/02/2006, 11:00
    Event processing applications
    poster
    The size and complexity of LHC experiments raise unprecedented challenges not only in terms of detector design, construction and operation, but also in terms of software models and data persistency. One of the more challenging tasks is the calibration of the 375000 Monitored Drift Tubes, that will be used as precision tracking detectors in the Muon Spectrometer of the ATLAS experiment. An...
  23. Dr Andreas Heiss (FORSCHUNGSZENTRUM KARLSRUHE)
    13/02/2006, 11:00
    Computing Facilities and Networking
    poster
    GridKa, the German Tier-1 center in the Worldwide LHC Computing Grid (WLCG), supports all four LHC experiments, ALICE, ATLAS, CMS and LHCb, as well as, currently, some non-LHC high energy physics experiments. Several German and European Tier-2 sites will be connected to GridKa as their Tier-1. We present technical and organizational aspects pertaining to the connection and support of the Tier-2s...
  24. Moreno Marzolla (INFN Padova)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    An efficient and robust system for accessing computational resources and managing job operations is a key component of any Grid framework designed to support a large distributed computing environment. CREAM (Computing Resource Execution And Management) is a simple, minimal system designed to provide efficient processing of a large number of requests for computation on managed resources....
  25. Dr Johannes Elmsheuser (Ludwig-Maximilians-Universität München)
    13/02/2006, 11:00
    Distributed Data Analysis
    poster
    The German LHC computing resources are built on the Tier 1 center at Gridka in Karlsruhe and several planned Tier 2 centers. These facilities provide us with a testbed on which we can evaluate current distributed analysis tools. Various aspects of the analysis of simulated data using LCG middleware and local batch systems have been tested and evaluated. Here we present our experiences with...
  26. Prof. Patrick Skubic (University of Oklahoma)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    Hadron Collider experiments in progress at Fermilab's Tevatron and under construction at the Large Hadron Collider (LHC) at CERN will record many petabytes of data in pursuing the goals of understanding nature and searching for the origin of mass. Computing resources required to analyze these data far exceed the capabilities of any one institution. The computing grid has long been...
  27. Dr Ashok Agarwal (University of Victoria)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    The heterogeneity of resources in computational grids, such as the Canadian GridX1, makes application deployment a difficult task. Virtual machine environments promise to simplify this task by homogenizing the execution environment across the grid. One such environment, Xen, has been demonstrated to be a highly performing virtual machine monitor. In this work, we evaluate the...
  28. Dr Thomas Kuhr (UNIVERSITY OF KARLSRUHE, GERMANY), Mr Ulrich Kerzel (UNIVERSITY OF KARLSRUHE, GERMANY)
    13/02/2006, 11:00
    Distributed Event production and processing
    poster
    The German Grid computing centre "GridKa" offers large computing and storage facilities to the Tevatron and LHC experiments, as well as to BaBar and Compass. It has been the first large-scale CDF cluster to adopt and use the FermiGrid software "SAM" to enable users to perform data-intensive analyses. The system has been operated at production level for about 2 years. We review the challenges...
  29. Dr Yoshiji Yasu (KEK)
    13/02/2006, 11:00
    Software Tools and Information Systems
    poster
    Information Technology (IT) evolves quickly, and it is not easy to adopt software from IT into data acquisition (DAQ), because such software often depends on particular operating systems, languages and communication protocols. This dependency complicates the construction of data-acquisition software, so experimental groups tend to write their own DAQ software according to their own...
  30. Mr Laurent GARNIER (LAL-IN2P3-CNRS)
    13/02/2006, 11:00
    Software Components and Libraries
    poster
    This short communication presents our first experience with C# and Mono in an OpenScientist context, mainly an attempt to integrate Inventor within C# and then within the native GUI API that comes with C#. We also point out the perspectives, for example within AIDA.
  31. Daniela Rebuzzi (Istituto Nazionale de Fisica Nucleare (INFN))
    13/02/2006, 11:00
    Event processing applications
    poster
    The Muon Digitization is the simulation of the Raw Data Objects (RDO), or the electronic output, of the Muon Spectrometer. It has been recently completely re-written to run within the Athena framework and to interface with the Geant4 Muon Spectrometer detector simulation. The digitization process consists of two steps: in the first step, the output of the detector simulation, henceforth...
  32. Mr Laurence Field (CERN)
    13/02/2006, 11:00
    Distributed Event production and processing
    poster
    Since CHEP2005, the LHC Computing Grid (LCG) has grown from 30 sites to over 160 sites, and this has increased the load on the information system. This paper describes the recent changes to the information system that were necessary to keep pace with the expanding grid. The performance of a key component, the Berkeley Database Information Index (BDII), is given special attention. During...
  33. Mr Laurence Field (CERN)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    This paper describes the introduction of the Relational Grid Monitoring Architecture (R-GMA) into the LHC Computing Grid (LCG) as a production-quality monitoring system and how, after an initial period of production hardening, it performed during the LCG Service Challenges. The results from the initial evaluation and performance tests are presented, as well as the process of integrating R-GMA...
  34. Dr Ariel Garcia (Forschungszentrum Karlsruhe, Karlsruhe, Germany)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    The LHC Computing Grid (LCG) middleware interfaces at each site with local computing resources provided by a batch system. However, currently only the PBS/Torque, LSF and Condor resource management systems are supported out of the box in the middleware distribution. Therefore many computing centers serving scientific needs other than HEP, which in many cases use other batch systems like...
  35. Mr Laurent GARNIER (LAL-IN2P3-CNRS)
    13/02/2006, 11:00
    Software Components and Libraries
    poster
    This short communication describes work done at LAL on integrating the graphviz library within the OnX environment. graphviz is a well-known library for visualizing scenes containing boxes connected by lines; its strength lies in the routing algorithms used to connect the boxes. For example, graphviz is used by Doxygen to produce class diagrams. We want to...
  36. Valeria Bartsch (FERMILAB / University College London)
    13/02/2006, 11:00
    Distributed Event production and processing
    poster
    CDF has recently changed its data handling system from the DFC (Data File Catalogue) system to the SAM (Sequential Access to Metadata) system. This change was done as a preparation for distributed computing because SAM can handle distributed computing and provides mechanisms which enable it to work together with GRID systems. Experience shows that the usage of a new data handling system...
  37. Dr Surya Pathak (Vanderbilt University)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    Storing and accessing large volumes of data across geographically separated locations, cutting across labs and universities, in a transparent, reliable fashion is a difficult problem. There is urgency to this problem with the commissioning of the LHC around the corner (2007). The primary difficulties that need to be overcome in order to address this problem are policy-driven secure...
  38. Mr Laurence Dawson (Vanderbilt University)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    Introducing changes to a working high-performance computing environment is typically both necessary and risky. Testing these changes can be highly manpower intensive. L-TEST supplies a framework that allows the testing of complex distributed systems with reduced configuration. It reduces setting up a test to implementing the specific tasks for that test. L-TEST handles three jobs that must...
  39. Dr Pavel Nevski (BROOKHAVEN NATIONAL LABORATORY)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    During the last few years ATLAS has run a series of Data Challenges producing simulated data used to understand the detector performance. Altogether more than 100 terabytes of useful data are now spread over a few dozen storage elements on the GRID. With the emergence of Tier1 centers and the constant restructuring of storage elements there is a need to consolidate the data placement in a more...
  40. Dr Ofer Rind (Brookhaven National Laboratory), Ms Zhenping Liu (Brookhaven National Laboratory)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    The Brookhaven RHIC/ATLAS Computing Facility serves as both the tier-0 computing center for RHIC and the tier-1 computing center for ATLAS in the United States. The increasing challenge of providing local and grid-based access to very large datasets in a reliable, cost-efficient and high-performance manner, is being addressed by a large-scale deployment of dCache, the distributed disk...
  41. Dr Donald Holmgren (FERMILAB)
    13/02/2006, 11:00
    Computing Facilities and Networking
    poster
    As part of the DOE SciDAC "National Infrastructure for Lattice Gauge Computing" and DOE LQCD Projects, Fermilab builds and operates production clusters for lattice QCD simulations for the US community. We currently operate two clusters: a 128-node Pentium 4E Myrinet cluster, and a 520-node Pentium 640 Infiniband cluster. We discuss the operation of these systems and examine...
  42. Mr Sylvain Reynaud (IN2P3/CNRS)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    It is broadly admitted that grid technologies have to deal with heterogeneity in both computational and storage resources. In the context of grid operations, heterogeneity is also a major concern, especially for worldwide grid projects such as LCG and EGEE. Indeed, the use of various technologies, protocols and data formats induces complexity. As learned from our experience of participating...
  43. Dr Armando Fella (INFN, Pisa)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    The increasing instantaneous luminosity of the Tevatron collider will soon cause the computing requirements for data analysis and MC production to grow larger than the dedicated CPU resources that will be available. In order to meet future demands, CDF is investing in shared Grid resources. A significant fraction of opportunistic Grid resources will be available to CDF before the LHC era...
  44. Mr Andrew Cameron Smith (CERN, University of Edinburgh)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    LHCb's participation in LCG's Service Challenge 3 involves testing the bulk data transfer infrastructure developed to allow high bandwidth distribution of data across the grid in accordance with the computing model. To enable reliable bulk replication of data, LHCb's DIRAC system has been integrated with gLite's File Transfer Service middleware component to make use of dedicated network...
  45. Arthur Kreymer (FERMILAB)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    The SAM data handling system has been deployed successfully by the Fermilab D0 and CDF experiments, managing Petabytes of data and millions of files in a Grid working environment. D0 and CDF have large computing support staffs, have always managed their data using file catalog systems, and have participated strongly in the development of the SAM product. But we think that SAM's long term...
  46. Mr Marian ZUREK (CERN, ETICS)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    gLite is the next generation middleware for grid computing. Born from the collaborative efforts of more than 80 people in 12 different academic and industrial research centers as part of the EGEE Project, gLite provides a bleeding-edge, best-of-breed framework for building grid applications tapping into the power of distributed computing and storage resources across the Internet....
  47. Dr John Kennedy (ATLAS)
    13/02/2006, 11:00
    Distributed Event production and processing
    poster
    The presented monitoring framework builds on the experience gained during the ATLAS Data Challenge 2 and Rome physics workshop productions. During these previous productions several independent monitoring tools were created. Although these tools were created to some degree in isolation they provided a good degree of complementary functionality and are taken as a basis for the current...
  48. Mr A.J. Wilson (Rutherford Appleton Laboratory)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    R-GMA is a relational implementation of the GGF's Grid Monitoring Architecture (GMA). In some respects it can be seen as a virtual database (VDB), supporting the publishing and retrieval of time-stamped tuples. The scope of an R-GMA installation is defined by its schema and registry. The schema holds the table definitions and, in future, the authorization rules. The registry holds a list...
  49. Dr Andrew McNab (UNIVERSITY OF MANCHESTER)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    GridSite provides a Web Service hosting framework for services written as native executables (e.g. in C/C++) or in scripting languages (such as Perl and Python). These languages are of particular relevance to HEP applications, which typically have large investments of code and expertise in C++ and scripting languages. We describe the Grid-based authentication and authorization environment...
  50. Dr Grigory Trubnikov (Joint Institute for Nuclear Research, Dubna)
    13/02/2006, 11:00
    Software Components and Libraries
    poster
    The BETACOOL program, developed by the JINR electron cooling group, is a kit of algorithms based on a common format of input and output files. The program is oriented towards simulation of ion beam dynamics in a storage ring in the presence of cooling and heating effects. The version presented in this report includes three basic algorithms: simulation of r.m.s. parameters of the ion distribution...
  51. Dr Sven Hermann (Forschungszentrum Karlsruhe)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    Forschungszentrum Karlsruhe is one of the largest science and engineering research institutions in Europe. The resource centre GridKa as part of this science centre is building up a Tier 1 centre for the LHC project. Embedded in the European grid initiative EGEE, GridKa also manages the ROC (regional operation centre) for the German Swiss region. The management structure of the ROC and its...
  52. Dr Tony Chan (BROOKHAVEN NATIONAL LAB)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    The operation and management of a heterogeneous large-scale, multi-purpose computer cluster is a complex task given the competing nature of requests for resources by a large, world-wide user base. Besides providing the bulk of the computational resources to experiments at the Relativistic Heavy-Ion Collider (RHIC), this large cluster is part of the U.S. Tier 1 Computing Center for the...
  53. Mr Wayne BETTS (BROOKHAVEN NATIONAL LABORATORY)
    13/02/2006, 11:00
    Online Computing
    poster
    For any large experiment with multiple sub-systems and their respective experts spread throughout the world, real-time and near-real-time information accessible to a wide audience is critical to efficiency and success. Large and varied amounts of information about the current and past state of facilities and detector systems are necessary, both for current running, and for eventual data...
  54. Dr Szymon Gadomski (UNIVERSITY OF BERN, LABORATORY FOR HIGH ENERGY PHYSICS)
    13/02/2006, 11:00
    Distributed Event production and processing
    poster
    The Swiss ATLAS Computing prototype consists of clusters of PCs located at the universities of Bern and Geneva (Tier 3) and at the Swiss National Supercomputing Centre (CSCS) in Manno (Tier 2). In terms of software, the prototype includes ATLAS off-line releases as well as middleware for running the ATLAS off-line in a distributed way. Both batch and interactive use cases are supported....
  55. Dr Jukka Klem (Helsinki Institute of Physics HIP)
    13/02/2006, 11:00
    Event processing applications
    poster
    Projects like SETI@home use computing resources donated by the general public for scientific purposes. Many of these projects are based on the BOINC (Berkeley Open Infrastructure for Network Computing) software framework, which makes it easier to set up new public resource computing projects. BOINC is used at CERN for the LHC@home project, where more than 10000 home users donate time of their...
  56. Mr Fons Rademakers (CERN)
    13/02/2006, 11:00
    Software Tools and Information Systems
    poster
    Providing all components and designing good user interfaces requires developers to know and apply some basic principles. The different parts of the ROOT GUIs should fit together and complement each other. They must form a window through which users see the capabilities of the software system and understand how to use them. If well designed, the user interface adds quality and inspires confidence...
  57. Cristina Lazzeroni (University of Cambridge), Dr Raluca-Anca Muresan (Oxford University)
    13/02/2006, 11:00
    Event processing applications
    poster
    The LHCb experiment will make high precision studies of CP violation and other rare phenomena in B meson decays. Particle identification, in the momentum range from ~2-100 GeV/c, is essential for this physics programme, and will be provided by two Ring Imaging Cherenkov (RICH) detectors. The experiment will use several levels of trigger to reduce the 10MHz rate of visible interactions to...
  58. Mr Timur Perelmutov (FNAL)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    dCache is a distributed storage system currently used to store and deliver data on a petabyte scale in several large HEP experiments. Initially dCache was designed as a disk front-end for robotic tape storage file systems. Lately, dCache systems have been increased in scale by several orders of magnitude and considered for deployment in US-CMS T2 centers lacking expensive tape robots. This...
  59. Rene Brun (CERN)
    13/02/2006, 11:00
    Software Components and Libraries
    poster
    ROOT 2D graphics offers a wide set of data representation and visualisation techniques. Over the years, responding to user comments and requests, these have been improved and enriched. The current system is very flexible and can easily be tuned to match the user's imagination. We present a patchwork demonstrating the wide variety of output which can be produced.
  60. Rene Brun (CERN)
    13/02/2006, 11:00
    Software Components and Libraries
    poster
    Overview and examples of: the common viewer architecture (the TVirtualViewer3D interface and TBuffer3D shape hierarchy) used by all 3D viewers, and significant features of the OpenGL viewer: in-pad embedding, render styles, composite (CSG/Boolean) shapes and clipping.
    Go to contribution page
  61. Dr Stefan Roiser (CERN)
    13/02/2006, 11:00
    Software Components and Libraries
    poster
    Reflex is a package that enhances C++ with reflection capabilities. It was developed in the LCG Applications Area at CERN, and it was recently decided that it will be tightly integrated with the ROOT analysis framework, and especially with the CINT interpreter. This strategy will unify the dictionary systems of ROOT/CINT and Reflex into a common one. The advantages of this move for...
    Go to contribution page
  62. Dr David Malon (ARGONNE NATIONAL LABORATORY)
    13/02/2006, 11:00
    Software Components and Libraries
    poster
    ATLAS has deployed an inter-object association infrastructure that allows the experiment to track at the object level what data have been written and where, and to assign both object-level and process-level labels to identify data objects for later retrieval. This infrastructure provides the foundation for opportunistic run-time navigation to upstream data, and in principle supports both...
    Go to contribution page
  63. Dr Sinisa Veseli (Fermilab)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    SAMGrid presently relies on the centralized database for providing several services vital for the system operation. These services are all encapsulated in the SAMGrid Database Server, and include access to file metadata and replica catalogs, dataset and processing bookkeeping, as well as the runtime support for the SAMGrid station services. Access to the centralized database and DB Servers...
    Go to contribution page
  64. Dr Sinisa Veseli (Fermilab)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    SAMGrid is a distributed (CORBA-based) HEP data handling system presently used by three running experiments at Fermilab: D0, CDF and MINOS. User access to the SAMGrid services is provided via Python and C++ client APIs, which handle the low-level CORBA calls. Although the use of SAMGrid API's is fairly straightforward and very well documented, in practice SAMGrid users are facing numerous...
    Go to contribution page
  65. Dr Marcin Nowak (BROOKHAVEN NATIONAL LABORATORY)
    13/02/2006, 11:00
    Software Components and Libraries
    poster
    The ATLAS event data model will almost certainly change over time. ATLAS must retain the ability to read both old and new data after such a change, regulate the introduction of such changes, minimize the need to run massive data conversion jobs when such changes are introduced, and maintain the machinery to support such data conversions when they are unavoidable. In database literature,...
    Go to contribution page
  66. Dr Philip Clark (University of Edinburgh)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    ScotGrid is a distributed Tier-2 computing centre formed as a collaboration between the Universities of Durham, Edinburgh and Glasgow, as part of the UK's national particle physics grid, GridPP. This paper describes ScotGrid's current resources by institute and how these were configured to enable participation in the LCG service challenges. In addition, we outline future development plans...
    Go to contribution page
  67. Dr Valerie GAUTARD (CEA-SACLAY)
    13/02/2006, 11:00
    Event processing applications
    poster
    The muon spectrometer of the ATLAS experiment aims at reconstructing very high energy muon tracks (up to 1 TeV) with a transverse momentum resolution better than 10 %. For this purpose a resolution of 50 micrometer on the sagitta of tracks has to be achieved. Each muon track is measured with three wire chambers stations placed inside an air core toroid magnet (the chambers seat around...
    Go to contribution page
  68. Dr Jan BALEWSKI (Indiana University Cyclotron Facility)
    13/02/2006, 11:00
    Event processing applications
    poster
    One of the world's largest time projection chambers (TPC) has been used at STAR for reconstruction of collisions at luminosities yielding thousands of piled-up background tracks, resulting from a few hundred pp minBias background events or several heavy-ion background events, respectively. The combination of TPC tracks and trigger detector data used for tagging of tracks is sufficient to...
    Go to contribution page
  69. Dr Jamie Shiers (CERN)
    13/02/2006, 11:00
    Plenary
    oral presentation
  70. Walter Lampl (Department of Physics, University of Arizona)
    13/02/2006, 11:00
    Event processing applications
    poster
    The event data model for the ATLAS calorimeters in the reconstruction software is described, starting from the raw data to the analysis domain calorimeter data. The data model includes important features like compression strategies with insignificant loss of signal precision, flexible and configurable data content for high level reconstruction objects, and backward navigation from the...
    Go to contribution page
  71. Dr Tony Chan (BROOKHAVEN NATIONAL LAB)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    Monitoring a large-scale computing facility is evolving from a passive to a more active role in the LHC era, from monitoring the health, availability and performance of the facility to taking a more active and automated role in restoring availability, updating software and becoming a meta-scheduler for batch systems. This talk will discuss the experiences of the RHIC and ATLAS U.S. Tier...
    Go to contribution page
  72. Mr GUANGKUN LEI (IHEP)
    13/02/2006, 11:00
    Online Computing
    poster
    The BESIII "readout" is an interface between the DAQ framework and the FEEs. As a part of the DAQ system, the readout plays a very important role in the process of data acquisition. The principal functionality of the Readout Crate is to receive, repack, buffer and forward the data coming from the FEEs to the Readout PC. The implementation is based on commercial components: VMEbus PowerPC based single board...
    Go to contribution page
  73. Garzoglio Gabriele (FERMI NATIONAL ACCELERATOR LABORATORY)
    13/02/2006, 11:00
    Grid middleware and e-Infrastructure operation
    poster
    The SAM-Grid system is an integrated data, job, and information management infrastructure. The SAM-Grid addresses the distributed computing needs of the experiments of RunII at Fermilab. The system typically relies on SAM-Grid services deployed at the remote facilities in order to manage the computing resources. Such deployment requires special agreements with each resource provider and it...
    Go to contribution page
  74. Dr Igor Sfiligoi (INFN Frascati)
    13/02/2006, 11:00
    Distributed Event production and processing
    poster
    The CDF software model was developed with dedicated resources in mind. One of the main assumptions is to have a large set of executables, shared libraries and configuration files on a shared file system. As CDF is moving toward a Grid model, this assumption is limiting the general physics analysis to only a small set of CDF friendly sites with the appropriate file system installed. ...
    Go to contribution page
  75. Mr Laurent GARNIER (LAL-IN2P3-CNRS)
    13/02/2006, 11:00
    Software Components and Libraries
    poster
    We present a short communication on work done at LAL to visualize, within the OnX interactive environment, HEP geometries accessed through the VGM abstract interfaces. VGM and OnX were presented at CHEP'04 in Interlaken.
    Go to contribution page
  76. Dr paolo branchini (INFN)
    13/02/2006, 11:00
    Online Computing
    poster
    We describe a VLSI implementation based on FPGA of a new greedy algorithm for approximating minimum set covering in ad hoc wireless network applications. The implementation makes the algorithm suitable for embedded and real-time architectures.
    Go to contribution page
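The greedy set-covering heuristic the abstract refers to can be sketched in software as follows. This is an illustrative rendering of the generic algorithm (repeatedly pick the subset covering the most still-uncovered elements), not the authors' FPGA implementation; all names here are ours:

```python
def greedy_set_cover(universe, subsets):
    """Greedy approximation for minimum set cover: repeatedly pick the
    subset covering the most still-uncovered elements (a well-known
    logarithmic-factor approximation of the optimum)."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        # pick the subset with the largest overlap with uncovered elements
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            raise ValueError("universe cannot be covered by the given subsets")
        cover.append(best)
        uncovered -= set(best)
    return cover

# Example: cover the nodes {1..5} using as few candidate sets as possible.
cover = greedy_set_cover({1, 2, 3, 4, 5},
                         [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}])
```

In an ad hoc wireless network, the "universe" would be the nodes to reach and each subset the coverage area of a candidate relay; the hardware version in the paper pipelines this same selection loop in an FPGA.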
  77. Dr Paris Sphicas (CERN)
    13/02/2006, 11:30
    Plenary
    oral presentation
  78. Dr Ashok Jhunjhunwala (IIT, Chennai)
    13/02/2006, 12:00
    Plenary
    oral presentation
  79. Dr Pere Mato (CERN)
    13/02/2006, 14:00
    Software Components and Libraries
    oral presentation
    The Applications Area of the LCG Project is concerned with developing, deploying and maintaining that part of the physics applications software and associated supporting infrastructure software that is common among the LHC experiments. This area is managed as a number of specific projects with well-defined policies for coordination between them and with the direct participation of the...
    Go to contribution page
  80. David Adams (BNL)
    13/02/2006, 14:00
    Distributed Data Analysis
    oral presentation
    DIAL is a generic framework for distributed analysis. The heart of the system is a scheduler (also called analysis service) that receives high-level processing requests expressed in terms of an input dataset and a transformation to act on that dataset. The scheduler splits the dataset, applies the transformation to each subdataset to produce a new subdataset, and then merges these to...
    Go to contribution page
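The split/apply/merge flow the DIAL abstract describes can be illustrated with a small sketch. This is a generic map-style rendering of the idea, not the DIAL API; `run_job` and its parameters are hypothetical names:

```python
from concurrent.futures import ThreadPoolExecutor

def run_job(dataset, transform, merge, n_splits=4):
    """Split the input dataset, apply the transformation to each
    sub-dataset in parallel, then merge the partial results --
    the scheduler pattern sketched in the abstract."""
    # round-robin split into roughly equal sub-datasets
    chunks = [dataset[i::n_splits] for i in range(n_splits)]
    with ThreadPoolExecutor(max_workers=n_splits) as pool:
        partials = list(pool.map(transform, chunks))
    return merge(partials)

# A trivial "transformation": summing a value per sub-dataset, then merging.
total = run_job(list(range(10)), transform=sum, merge=sum)
```

In the real system the transformation is an arbitrary user job and each sub-dataset is dispatched to a remote worker rather than a local thread, but the dataflow is the same.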
  81. Mr Vladimir Bahyl (CERN IT-FIO)
    13/02/2006, 14:00
    Computing Facilities and Networking
    oral presentation
    Availability approaching 100% and response time converging to 0 are two factors that users expect of any system they interact with. Even if the real importance of these factors is a function of the size and nature of the project, today's users are rarely tolerant of performance issues with systems of any size. Commercial solutions for load balancing and failover are plentiful. Citrix...
    Go to contribution page
  82. Dr John Apostolakis (CERN)
    13/02/2006, 14:00
    Event processing applications
    oral presentation
    Geant4 has become an established tool, in production for the majority of LHC experiments during the past two years, and in use in many other HEP experiments and for applications in medical, space and other fields. Improvements and extensions to its capabilities continue, while its physics modeling is refined and results are accumulating for its validation for a variety of uses. An overview...
    Go to contribution page
  83. Bebo White (STANFORD LINEAR ACCELERATOR CENTER (SLAC))
    13/02/2006, 14:00
    Software Tools and Information Systems
    oral presentation
    Protégé is a free, open source ontology editor and knowledge-base framework developed at Stanford University (http://protege.stanford.edu/). The application is based on Java, is extensible, and provides a foundation for customized knowledge-based and Semantic Web applications. Protégé supports Frames, XML Schema, RDF(S), and OWL. It provides a "plug and play environment" that makes it a...
    Go to contribution page
  84. Klaus SCHOSSMAIER (CERN)
    13/02/2006, 14:00
    Online Computing
    oral presentation
    The data-acquisition software framework DATE for the ALICE experiment at the LHC has evolved over a period of several years. The latest version, DATE V5, is geared for deployment during the test and commissioning phase. The DATE software is designed to run on several hundred machines installed with Scientific Linux CERN (SLC) to handle the data streams of approximately 400 optical...
    Go to contribution page
  85. Dr Jamie Shiers (CERN)
    13/02/2006, 14:00
    Distributed Event production and processing
    oral presentation
    The LCG Service Challenges are aimed at achieving the goal of a production quality world-wide Grid that meets the requirements of the LHC experiments in terms of functionality and scale. This talk highlights the main goals of the Service Challenge programme, significant milestones as well as the key services that have been validated in production by the 4 LHC experiments. The LCG...
    Go to contribution page
  86. Frank Wuerthwein (UCSD for the OSG consortium), Ruth Pordes (Fermi National Accelerator Laboratory (FNAL)), Mrs Ruth Pordes (FERMILAB)
    13/02/2006, 14:00
    Grid middleware and e-Infrastructure operation
    oral presentation
    We report on the status and plans for the Open Science Grid Consortium, an open, shared national distributed facility in the US which supports a multi-disciplinary suite of science applications. More than fifty University and Laboratory groups, including 2 in Brazil and 3 in Asia, now have their resources and services accessible to OSG. 16 Virtual Organizations have registered their...
    Go to contribution page
  87. Lawrence S. Pinsky (University of Houston)
    13/02/2006, 14:18
    Event processing applications
    oral presentation
    The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. We review the progress achieved since the last CHEP Conference on the physics models, and some recent applications. From the point of view of hadronic physics, most of the effort is still in...
    Go to contribution page
  88. Dr Alexandre Vaniachine (ANL)
    13/02/2006, 14:20
    Software Components and Libraries
    oral presentation
    In preparation for data taking, the ATLAS experiment has run a series of large-scale computational exercises to test and validate distributed data grid solutions under development. ATLAS experience with prototypes and production systems in the Data Challenges and the Combined Test Beam provided various database connectivity requirements for applications: connection management, online-offline...
    Go to contribution page
  89. Marco Pieri (University of California, San Diego, San Diego, California, USA)
    13/02/2006, 14:20
    Online Computing
    oral presentation
    The CMS Data Acquisition system is designed to build and filter events originating from approximately 500 data sources from the detector at a maximum Level 1 trigger rate of 100 kHz and with an aggregate throughput of 100 GByte/sec. For this purpose different architectures and switch technologies have been evaluated. Events will be built in two stages: the first stage, the FED Builder,...
    Go to contribution page
  90. Dr Jรถrn Adamczewski (GSI)
    13/02/2006, 14:20
    Distributed Data Analysis
    oral presentation
    The new version 3 of the ROOT based GSI standard analysis framework GO4 (GSI Object Oriented Online Offline) has been released. GO4 provides multithreaded remote communication between analysis process and GUI process, a dynamically configurable analysis framework, and a Qt based GUI with embedded ROOT graphics. In the new version 3 a new internal object manager was developed. Its...
    Go to contribution page
  91. Bebo White (STANFORD LINEAR ACCELERATOR CENTER (SLAC))
    13/02/2006, 14:20
    Software Tools and Information Systems
    oral presentation
    The Semantic Web shows great potential in the HEP community as an aggregation mechanism for weakly structured data and a knowledge management tool for acquiring, accessing, and maintaining knowledge within experimental collaborations. FOAF (Friend-Of-A-Friend) (http://www.foaf-project.org/) is an RDFS/OWL ontology (some of the fundamental Semantic Web technologies) for expressing...
    Go to contribution page
  92. Dr Jukka Klem (Helsinki Institute of Physics HIP)
    13/02/2006, 14:20
    Distributed Event production and processing
    oral presentation
    Public resource computing uses the computing power of personal computers that belong to the general public. LHC@home is a public-resource computing project based on the BOINC (Berkeley Open Infrastructure for Network Computing) platform. BOINC is an open source software system, developed by the team behind SETI@home, that provides the infrastructure to operate a public-resource computing...
    Go to contribution page
  93. Robert Gardner (University of Chicago)
    13/02/2006, 14:20
    Grid middleware and e-Infrastructure operation
    oral presentation
    We describe the purpose, architectural definition, deployment and operational processes for the Integration Testbed (ITB) of the Open Science Grid (OSG). The ITB has been successfully used to integrate a set of functional interfaces and services required for the OSG deployment activity, leading to two major deployments of the OSG grid infrastructure. We discuss the methods and logical...
    Go to contribution page
  94. Dr Doris Ressmann (Forschungszentrum Karlsruhe)
    13/02/2006, 14:20
    Computing Facilities and Networking
    oral presentation
    At GridKa an initial capacity of 1.5 PB online and 2 PB background storage is needed for the LHC start in 2007. Afterwards the capacity is expected to grow almost exponentially. No computing site will be able to keep this amount of data in online storage, hence a highly accessible tape connection is needed. This paper describes a high-performance connection of the online storage to an IBM...
    Go to contribution page
  95. Mr Pedro Arce (Cent.de Investigac.Energeticas Medioambientales y Tecnol. (CIEMAT))
    13/02/2006, 14:36
    Event processing applications
    oral presentation
    GEANT4e is a package of the GEANT4 Toolkit that allows a track to be propagated together with its error parameters. It uses the standard GEANT4 code to propagate the track, and for the propagation it makes a helix approximation (with the step controlled by the user) using the same equations as GEANT3/GEANE. We present here a first working prototype of the GEANT4e package and compare its results...
    Go to contribution page
  96. Caitriana Nicholson (University of Glasgow), Caitriana Nicholson (Unknown), Dr David Malon (ARGONNE NATIONAL LABORATORY)
    13/02/2006, 14:40
    Software Components and Libraries
    oral presentation
    The ATLAS experiment will deploy an event-level metadata system as a key component of support for data discovery, identification, selection, and retrieval in its multi-petabyte event store. ATLAS plans to use the LCG POOL collection infrastructure to implement this system, which must satisfy a wide range of use cases and must be usable in a widely distributed environment. The system...
    Go to contribution page
  97. Michal Kwiatek (CERN)
    13/02/2006, 14:40
    Computing Facilities and Networking
    oral presentation
    Over the last few years, we have experienced a growing demand for hosting Java web applications. At the same time, it has been difficult to find an off-the-shelf solution that would enable load balancing, easy administration and a high level of isolation between applications hosted within a J2EE server. The architecture developed and used in production at CERN is based on a linux...
    Go to contribution page
  98. Dr Gennady KUZNETSOV (Rutherford Appleton Laboratory, Didcot)
    13/02/2006, 14:40
    Distributed Data Analysis
    oral presentation
    DIRAC is the LHCb Workload and Data Management system used for Monte Carlo production, data processing and distributed user analysis. Such a wide variety of applications requires a general approach to the tasks of job definition, configuration and management. In this paper, we present a suite of tools called a Production Console, which is a general framework for job formulation,...
    Go to contribution page
  99. Dr Frederik Orellana (Institute of Nuclear and Particle Physics, Université de Genève)
    13/02/2006, 14:40
    Distributed Event production and processing
    oral presentation
    In 2004, a full slice of the ATLAS detector was tested for 6 months in the H8 experimental area of the CERN SPS, in the so-called Combined Test Beam, with beams of muons, pions, electrons and photons in the range 1 to 350 GeV. Approximately 90 million events were collected, corresponding to a data volume of 4.5 terabytes. The importance of this exercise was two-fold: for the first time the...
    Go to contribution page
  100. Dr Peter Malzacher (Gesellschaft fuer Schwerionenforschung mbH (GSI))
    13/02/2006, 14:40
    Grid middleware and e-Infrastructure operation
    oral presentation
    The German Ministry for Education and Research announced a 100 million euro German e-science initiative focused on Grid computing, e-learning and knowledge management. In a first phase, started in September 2005, the Ministry has made available 17 million euro for D-Grid, which currently comprises six research consortia: five community grids - HEP-Grid (high-energy physics),...
    Go to contribution page
  101. Mr Deepak Narasimha (VMRF Deemed University)
    13/02/2006, 14:40
    Software Tools and Information Systems
    oral presentation
    The objective of the paper is to advance the research in component-based software development by including agent-oriented software engineering techniques. Agent-oriented component-based software development is the next step after object-oriented programming that promises to overcome problems, such as reusability and complexity, that have not yet been solved adequately with...
    Go to contribution page
  102. Dr marc dobson (CERN)
    13/02/2006, 14:45
    Online Computing
    oral presentation
    The needs of the ATLAS experiment at the upcoming LHC accelerator at CERN, in terms of data transmission rates and processing power, require a large cluster of computers (of the order of thousands) administered and exploited in a coherent and optimal manner. Requirements like stability, robustness and fast recovery in case of failure impose a server-client system architecture with servers...
    Go to contribution page
  103. Dr Gabriele Cosmo (CERN)
    13/02/2006, 14:54
    Event processing applications
    oral presentation
    The Geometry modeler is a key component of the Geant4 toolkit. It has been designed to exploit the features provided by the Geant4 simulation toolkit to the fullest, allowing a natural description of the geometrical structure of complex detectors, from a few up to the hundreds of thousands of volumes of the LHC experiments, as well as human phantoms for medical applications or...
    Go to contribution page
  104. Mr Sverre Jarp (CERN)
    13/02/2006, 15:00
    Software Tools and Information Systems
    oral presentation
    HEP programs commonly have very flat execution profiles, implying that the execution time is spread over many routines/methods. Consequently, compiler optimization should be applied to the whole program and not just a few inner loops. In this talk I nevertheless discuss the value of extracting some of the most solicited routines (relatively speaking) and using them to gauge overall...
    Go to contribution page
  105. A. Vaniachine (ANL)
    13/02/2006, 15:00
    Distributed Event production and processing
    oral presentation
    In the ATLAS Computing Model widely distributed applications require access to terabytes of data stored in relational databases. In preparation for data taking, the ATLAS experiment at the LHC has run a series of large-scale computational exercises to test and validate multi-tier distributed data grid solutions under development. We present operational experience in ATLAS database...
    Go to contribution page
  106. Dr Patrick Fuhrmann (DESY)
    13/02/2006, 15:00
    Computing Facilities and Networking
    oral presentation
    For the last two years, the dCache/SRM Storage Element has been successfully integrated into the LCG framework and is in heavy production at several dozens of sites, spanning a range from single-host installations up to those with some hundreds of terabytes of disk space, delivering more than 50 TBytes per day to clients. Based on the permanent feedback from our users and the detailed...
    Go to contribution page
  107. Dr Chadwick Keith (Fermilab)
    13/02/2006, 15:00
    Grid middleware and e-Infrastructure operation
    oral presentation
    FermiGrid is a cooperative project across the Fermilab Computing Division and its stakeholders which includes the following 4 key components: Centrally Managed & Supported Common Grid Services, Stakeholder Bilateral Interoperability, Development of OSG Interfaces for Fermilab and Exposure of the Permanent Storage System. The initial goals, current status and future plans for FermiGrid will...
    Go to contribution page
  108. Gerardo GANIS (CERN)
    13/02/2006, 15:00
    Distributed Data Analysis
    oral presentation
    The Parallel ROOT Facility, PROOF, enables the interactive analysis of distributed data sets in a transparent way. It exploits the inherent parallelism in data of uncorrelated events via a multi-tier architecture that optimizes I/O and CPU utilization in heterogeneous clusters with distributed storage. Being part of the ROOT framework PROOF inherits the benefits of a performant...
    Go to contribution page
  109. Dr Jamie Shiers (CERN)
    13/02/2006, 15:00
    Software Components and Libraries
    oral presentation
    The past decade has been an era of sometimes tumultuous change in the area of Computing for High Energy Physics. This talk addresses the evolution of databases in HEP, starting from the LEP era and the visions presented during the CHEP 92 panel "Databases for High Energy Physics" (D. Baden, B. Linder, R. Mount, J. Shiers). It then reviews the rise and fall of Object Databases as a "one...
    Go to contribution page
  110. Dr Beat Jost (CERN)
    13/02/2006, 15:05
    Online Computing
    oral presentation
    LHCb is one of the four experiments currently under construction at CERN's LHC accelerator. It is a single-arm spectrometer designed to study CP violation in the B-meson system with high precision. This paper will describe the LHCb online system, which consists of three sub-systems: - The Timing and Fast Control (TFC) system, responsible for distributing the clock and trigger decisions...
    Go to contribution page
  111. Dr Michel Maire (LAPP)
    13/02/2006, 15:12
    Event processing applications
    oral presentation
    The current status and the recent developments of Geant4 "Standard" electromagnetic package are presented. The design iteration of the package carried out for the last two years is completed. It provides model versus process structure of the code. The internal database of elements and materials based on the NIST databases is introduced inside the Geant4 toolkit as well. The focus of...
    Go to contribution page
  112. Mr Tigran Mkrtchyan Mkrtchyan (Deutsches Elektronen-Synchrotron DESY)
    13/02/2006, 16:00
    Computing Facilities and Networking
    oral presentation
    After successfully deploying dCache over the last few years, the dCache team reevaluated its potential for extremely large and heavily used installations. We identified the filesystem namespace module as one of the components that would very likely need a redesign to cope with the expected requirements in the medium-term future. Having presented the initial design of Chimera...
    Go to contribution page
  113. Caitriana Nicholson (University of Glasgow)
    13/02/2006, 16:00
    Distributed Data Analysis
    oral presentation
    Simulations have been performed with the grid simulator OptorSim using the expected analysis patterns from the LHC experiments and a realistic model of the LCG at LHC startup, with thousands of user analysis jobs running at over a hundred grid sites. It is shown, first, that dynamic data replication plays a significant role in the overall analysis throughput in terms of optimising job...
    Go to contribution page
  114. Mr Michel Jouvin (LAL / IN2P3)
    13/02/2006, 16:00
    Grid middleware and e-Infrastructure operation
    oral presentation
    Several HENP laboratories in the Paris region have joined together to provide an LCG/EGEE Tier2 center. This resource, called GRIF, will focus on LCG experiments but will also be open to EGEE users from other disciplines and to local users. It will provide resources for both analysis and simulation and offer a large storage space (350 TB planned by end of 2007). This Tier2 will have...
    Go to contribution page
  115. Dr Andy Buckley (Durham University), Andy Buckley (University of Cambridge)
    13/02/2006, 16:00
    Event processing applications
    oral presentation
    Accurate modelling of hadron interactions is essential for the precision analysis of data from the LHC. It is therefore imperative that the predictions of Monte Carlos used to model this physics are tested against relevant existing and future measurements. These measurements cover a wide variety of reactions, experimental observables and kinematic regions. To make this process more...
    Go to contribution page
  116. Mr Philippe Canal (FERMILAB)
    13/02/2006, 16:00
    Software Components and Libraries
    oral presentation
    Since version 4.01/03, we have continued to strengthen and improve the ROOT I/O system. In particular, we extended and optimized support for all STL collections, including adding support for member-wise streaming. The handling of TTree objects was also improved by adding support for indexing of chains, for using a bitmap algorithm to speed up searches, and for accessing an SQL table through...
    Go to contribution page
  117. Dr Roger JONES (LANCASTER UNIVERSITY)
    13/02/2006, 16:00
    Distributed Event production and processing
    oral presentation
    The ATLAS Computing Model is under continuous development. Previous exercises focussed on the Tier-0/Tier-1 interactions, with an emphasis on the resource implications and only a high-level view of the data and workflow. The work presented here attempts to describe in some detail the data and control flow from the High Level Trigger farms all the way through to the physics user. The...
    Go to contribution page
  118. Prof. Ryosuke ITOH (KEK)
    13/02/2006, 16:00
    Online Computing
    oral presentation
    The Belle experiment, which is a B-factory experiment at KEK in Japan, is currently taking data with a DAQ system based on FASTBUS readout, switchless event building and higher level trigger(HLT) farm. To cope with a higher trigger rate from the expected sizeable increase in the accelerator luminosity in coming years, the upgrade of the DAQ system is in progress. FASTBUS modules are...
    Go to contribution page
  119. Mr Giulio Eulisse (Northeastern University, Boston)
    13/02/2006, 16:00
    Software Tools and Information Systems
    oral presentation
    The CMS tracker has more than 50 million channels organized in 16540 modules, each one being a complete detector. Its monitoring requires the creation, analysis and storage of at least 4 histograms per module every few minutes. The analysis of these plots will be done by computer programs that will check the data against some reference plots and send alarms to the operator in...
    Go to contribution page
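An automated check of the kind this abstract describes, comparing each module's histogram against a reference and flagging deviations for an alarm, might look like the following sketch; the function name and tolerance are assumptions of ours, not the actual CMS monitoring code:

```python
def check_module(histogram, reference, tolerance=0.1):
    """Return True if every bin agrees with the reference within a
    relative tolerance; False signals that an alarm should be sent."""
    if len(histogram) != len(reference):
        return False  # incompatible binning is itself an error condition
    for obs, ref in zip(histogram, reference):
        if ref == 0:
            # empty reference bin: any observed content is a deviation
            if obs != 0:
                return False
        elif abs(obs - ref) / ref > tolerance:
            return False
    return True
```

A monitoring loop would run this per module and forward the module identifier of any `False` result to the operator.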
  120. Dr Alberto Ribon (CERN)
    13/02/2006, 16:18
    Event processing applications
    oral presentation
    The complexity of the Geant4 code requires careful testing of all of its components, especially before major releases. In this talk, we will concentrate on the recent development of an automatic suite for testing hadronic physics in high energy calorimetry applications. The idea is to use a simplified set of hadronic calorimeters, with different beam particle types, and various beam...
    Go to contribution page
  121. Dr Donatella Lucchesi (INFN Padova), Dr Francesco Delli Paoli (INFN Padova)
    13/02/2006, 16:20
    Distributed Data Analysis
    oral presentation
    The CDF experiment has a new trigger which selects events depending on the significance of the track impact parameters. With this trigger, a sample of events enriched in b and c mesons has been selected; it is used for several important physics analyses, such as Bs mixing. The dataset size is about 20 TBytes, corresponding to an integrated luminosity of 1 fb-1 collected by CDF....
    Go to contribution page
  122. Dr Gilbert Poulard Poulard (CERN)
    13/02/2006, 16:20
    Distributed Event production and processing
    oral presentation
    The Large Hadron Collider at CERN will start data acquisition in 2007. The ATLAS (A Toroidal LHC ApparatuS) experiment is preparing for the data handling and analysis via a series of Data Challenges and production exercises to validate its computing model and to provide useful samples of data for detector and physics studies. DC1 was conducted during 2002-03; the main goals were to put in...
    Go to contribution page
  123. Dr Ioannis Papadopoulos (CERN, IT Department, Geneva 23, CH-1211, Switzerland)
    13/02/2006, 16:20
    Software Components and Libraries
    oral presentation
    The COmmon Relational Abstraction Layer (CORAL) is a C++ software system, developed within the context of the LCG persistency framework, which provides vendor-neutral software access to relational databases with defined semantics. The SQL-free public interfaces ensure the encapsulation of all the differences that one may find among the various RDBMS flavours in terms of SQL syntax and data...
    Go to contribution page
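    The vendor-neutral access CORAL provides can be illustrated by analogy with Python's DB-API, which plays a similar role for Python programs: application code talks to a standard cursor interface while the backend driver is interchangeable. CORAL goes further by hiding SQL entirely; this sketch, using the built-in sqlite3 driver and an invented runs table, shows only the interchangeable-backend idea:

    ```python
    import sqlite3

    # A hypothetical 'runs' table queried through the standard DB-API
    # cursor interface; swapping sqlite3 for another DB-API driver
    # (e.g. a MySQL or Oracle one) leaves this code largely unchanged.
    conn = sqlite3.connect(':memory:')
    cur = conn.cursor()
    cur.execute('CREATE TABLE runs (run INTEGER PRIMARY KEY, energy REAL)')
    cur.executemany('INSERT INTO runs VALUES (?, ?)', [(1, 900.0), (2, 7000.0)])
    conn.commit()

    cur.execute('SELECT energy FROM runs WHERE run = ?', (2,))
    print(cur.fetchone()[0])   # 7000.0
    ```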
  124. Dr Hans G. Essel (GSI)
    13/02/2006, 16:20
    Online Computing
    oral presentation
    At the upcoming Facility for Antiproton and Ion Research (FAIR) at GSI, the Compressed Baryonic Matter (CBM) experiment requires a new architecture of front-end electronics, data acquisition, and event processing. The detector systems of CBM are a Silicon Tracker System, RICH detectors, a TRD, RPCs, and an electromagnetic calorimeter. The envisioned interaction rate of 10 MHz produces a...
    Go to contribution page
  125. Prof. Arshad Ali (National University of Sciences & Technology (NUST) Pakistan)
    13/02/2006, 16:20
    Grid middleware and e-Infrastructure operation
    oral presentation
    We present a report on Grid activities in Pakistan over the last three years and conclude that there is significant technical and economic activity due to the participation in Grid research and development. We started collaboration with participation in the CMS software development group at CERN and Caltech in 2001. This has led to the current setup for CMS production and the LCG Grid...
    Go to contribution page
  126. Dr Roger Cottrell (Stanford Linear Accelerator Center)
    13/02/2006, 16:20
    Computing Facilities and Networking
    oral presentation
    The future of computing for HENP applications depends increasingly on how well the global community is connected. With South Asia and Africa accounting for about 36% of the world's population, the issues of internet/network facilities are a major concern for these regions if they are to successfully partake in scientific endeavors. However, not only is the international bandwidth for these...
    Go to contribution page
  127. Fons Rademakers (CERN)
    13/02/2006, 16:20
    Software Tools and Information Systems
    oral presentation
    ROOT as a scientific data analysis framework provides a large selection of data presentation objects and utilities. The graphical capabilities of ROOT range from 2D primitives to various plots, histograms, and 3D graphical objects. Its object-oriented design and development offer considerable benefits for developing object-oriented user interfaces. The ROOT GUI classes support an...
    Go to contribution page
  128. Dr Aatos Heikkinen (HIP), Dr Barbara Mascialino (INFN Genova), Dr Francesco Di Rosa (INFN LNS), Dr Giacomo Cuttone (INFN LNS), Dr Giorgio Russo (INFN LNS), Dr Giuseppe Antonio Pablo Cirrone (INFN LNS), Dr Maria Grazia Pia (INFN GENOVA), Dr Susanna Guatelli (INFN Genova)
    13/02/2006, 16:36
    Event processing applications
    oral presentation
    A project is in progress for a systematic, rigorous, quantitative validation of all Geant4 physics models against experimental data, to be collected in a Geant4 Physics Book. Due to the complexity of Geant4 hadronic physics, the validation of Geant4 hadronic models proceeds according to a bottom-up approach (i.e. from the lower energy range up to higher energies): this approach allows...
    Go to contribution page
  129. Valeria Bartsch (FERMILAB / University College London)
    13/02/2006, 16:40
    Distributed Data Analysis
    oral presentation
    SAM is a data handling system that provides the Fermilab HEP experiments D0, CDF and MINOS with the means to catalog, distribute and track the usage of their collected and analyzed data. Annually, SAM serves petabytes of data to physics groups performing data analysis, data reconstruction and simulation at various computing centers across the world. Given the volume of the detector data, a...
    Go to contribution page
  130. Dr Andrea Valassi (CERN)
    13/02/2006, 16:40
    Software Components and Libraries
    oral presentation
    Since October 2004, the LCG Conditions Database Project has focused on the development of COOL, a new software product for the handling of experiment conditions data. COOL merges and extends the functionalities of the two previous software implementations developed in the context of the LCG common project, which were based on Oracle and MySQL. COOL is designed to minimise the...
    Go to contribution page
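    Conditions data of the kind COOL manages are keyed by intervals of validity (IOVs): each payload holds from its start time until the next payload's start. A minimal, hypothetical sketch of such a lookup (the times and payload names are invented):

    ```python
    import bisect

    # Ordered IOV start times and the payload valid from each start time.
    iov_starts = [0, 1000, 5000, 20000]
    payloads = ['calib_v1', 'calib_v2', 'calib_v3', 'calib_v4']

    def lookup(t):
        """Return the payload whose interval of validity contains time t."""
        i = bisect.bisect_right(iov_starts, t) - 1
        if i < 0:
            raise KeyError('no conditions valid at t=%s' % t)
        return payloads[i]

    print(lookup(1500))   # calib_v2
    print(lookup(20000))  # calib_v4
    ```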
  131. Dr William Badgett (Fermilab)
    13/02/2006, 16:40
    Online Computing
    oral presentation
    The CDF Experiment's control and configuration system consists of several database applications and supportive application interfaces in both Java and C++. The CDF Oracle database server runs on a SunOS platform and provides configuration data, real-time monitoring information and historical run-conditions archiving. The Java applications running on the Scientific Linux operating system...
    Go to contribution page
  132. Mr Fons Rademakers (CERN)
    13/02/2006, 16:40
    Software Tools and Information Systems
    oral presentation
    One of the main design challenges is the task of selecting appropriate Graphical User Interface (GUI) elements and organizing them to successfully meet the application requirements. - How to choose and assign the basic user interface elements (so-called widgets, from `window gadgets') to the individual interaction panels? - How to organize these panels to appropriate levels of the...
    Go to contribution page
  133. Mr Gilles Mathieu (IN2P3, Lyon), Ms Helene Cordier (IN2P3, Lyon), Mr Piotr Nyczyk (CERN)
    13/02/2006, 16:40
    Grid middleware and e-Infrastructure operation
    oral presentation
    The paper reports on the evolution of the operational model set up in the "Enabling Grids for E-sciencE" (EGEE) project, and on the implications of Grid Operations in the LHC Computing Grid (LCG). The primary tasks of Grid Operations cover monitoring of resources and services, notification of failures to the relevant contacts and problem tracking through a ticketing system. Moreover,...
    Go to contribution page
  134. Dr Gokhan Unel (University of California at Irvine and CERN)
    13/02/2006, 16:40
    Distributed Event production and processing
    oral presentation
    The ATLAS experiment at LHC will start taking data in 2007. As preparative work, a full vertical slice of the final higher level trigger and data acquisition (TDAQ) chain, "the pre-series", has been installed in the ATLAS experimental zone. In the pre-series setup, detector data are received by the readout system and next partially analyzed by the second level trigger (LVL2). On...
    Go to contribution page
  135. Richard Cavanaugh (University of Florida)
    13/02/2006, 16:40
    Computing Facilities and Networking
    oral presentation
    UltraLight is a collaboration of experimental physicists and network engineers whose purpose is to provide the network advances required to enable petabyte-scale analysis of globally distributed data. Current Grid-based infrastructures provide massive computing and storage resources, but are currently limited by their treatment of the network as an external, passive, and largely unmanaged...
    Go to contribution page
  136. Dr Barbara Mascialino (INFN Genova), Dr Federico Ravotti (CERN), Dr Maria Grazia Pia (INFN GENOVA), Dr Maurice Glaser (CERN), Dr Michael Moll (CERN), Dr Riccardo Capra (INFN Genova)
    13/02/2006, 16:54
    Event processing applications
    oral presentation
    Monitoring radiation background is a crucial task for the operation of LHC experiments. A project is in progress at CERN for the optimisation of the radiation monitors for LHC experiments. A general, flexibly configurable simulation system based on Geant4, designed to assist the engineering optimisation of LHC radiation monitor detectors, is presented. Various detector packaging...
    Go to contribution page
  137. Mr Sylvain Chapeland (CERN)
    13/02/2006, 17:00
    Online Computing
    oral presentation
    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). A large-bandwidth and flexible Data Acquisition System (DAQ) is required to collect sufficient statistics in the short running time available per year for heavy ions and to accommodate very...
    Go to contribution page
  138. Dr Flavia Donno (CERN), Dr Marco Verlato (INFN Padova)
    13/02/2006, 17:00
    Grid middleware and e-Infrastructure operation
    oral presentation
    The organization and management of the user support in a global e-science computing infrastructure such as the Worldwide LHC Computing Grid (WLCG) is one of the challenges of the grid. Given the widely distributed nature of the organization, and the spread of expertise for installing, configuring, managing and troubleshooting the grid middleware services, a standard centralized model could...
    Go to contribution page
  139. John Huth (Harvard University)
    13/02/2006, 17:00
    Distributed Data Analysis
    oral presentation
    The ATLAS experiment uses a tiered data Grid architecture that enables possibly overlapping subsets, or replicas, of original datasets to be located across the ATLAS collaboration. Many individual elements of these datasets can also be recreated locally from scratch based on a limited number of inputs. We envision a time when a user will want to determine which is more expedient,...
    Go to contribution page
  140. Mr Matthias Schneebeli (Paul Scherrer Institute, Switzerland)
    13/02/2006, 17:00
    Software Tools and Information Systems
    oral presentation
    This talk presents a new approach to writing analysis frameworks. We will point out a way of generating analysis frameworks from a short experiment description. The generation process is completely experiment independent and can thus be applied to any event-based analysis. The presentation will focus on a software package called ROME. This software generates analysis frameworks which...
    Go to contribution page
  141. Dr Roger JONES (LANCASTER UNIVERSITY)
    13/02/2006, 17:00
    Computing Facilities and Networking
    oral presentation
    Following on from the LHC experiments' computing Technical Design Reports, HEPiX, with the agreement of the LCG, formed a Storage Task Force. This group was to: examine the current LHC experiment computing models; attempt to determine the data volumes, access patterns and required data security for the various classes of data, as a function of Tier and of time; consider the current...
    Go to contribution page
  142. Robert Petkus (Brookhaven National Laboratory)
    13/02/2006, 17:00
    Distributed Event production and processing
    oral presentation
    The roles of centralized and distributed storage at the RHIC/USATLAS Computing Facility have been undergoing a redefinition as the size and demands of computing resources continue to expand. Traditional NFS solutions, while simple to deploy and maintain, are marred by performance and scalability issues, whereas distributed software solutions such as PROOF and rootd are application...
    Go to contribution page
  143. Dr Douglas Smith (STANFORD LINEAR ACCELERATOR CENTER)
    13/02/2006, 17:00
    Software Components and Libraries
    oral presentation
    The data production and analysis system of the BaBar Experiment has evolved through a series of changes since the day the first data were taken in May 1999. The changes, in particular, have also involved the persistent technologies used to store the event data, as well as a number of related databases. This talk is about CDB - the distributed Conditions Database of the BaBar Experiment. The...
    Go to contribution page
  144. Dr Satoru Kameoka (High Energy Accelerator Research Organisation)
    13/02/2006, 17:12
    Event processing applications
    oral presentation
    Geant4 is a toolkit to simulate the passage of particles through matter, based on the Monte Carlo method. Geant4 incorporates much of the available experimental data and many theoretical models over a wide energy region, extending its application scope beyond high energy physics to medical physics, astrophysics, etc. We have developed a simulation framework for a heavy ion therapy system based...
    Go to contribution page
  145. Dr Julia Andreeva (CERN)
    13/02/2006, 17:20
    Distributed Data Analysis
    oral presentation
    The ARDA project focuses on delivering analysis prototypes together with the LHC experiments. The ARDA/CMS activity delivered a fully-functional analysis prototype exposed to a pilot community of CMS users. The current integration work of key components into the CMS system is described: the activity focuses on providing a coherent monitoring layer where information from diverse sources...
    Go to contribution page
  146. Mr Rajesh Kalmady (Bhabha Atomic Research Centre)
    13/02/2006, 17:20
    Grid middleware and e-Infrastructure operation
    oral presentation
    The LHC Computing Grid (LCG) connects hundreds of sites consisting of thousands of components such as computing resources, storage resources, network infrastructure and so on. Various Grid Operation Centres (GOCs) and Regional Operations Centres (ROCs) are set up to monitor the status and operations of the grid. This paper describes Gridview, a Grid Monitoring and Visualization...
    Go to contribution page
  147. Mr Francois Fluckiger (CERN)
    13/02/2006, 17:20
    Computing Facilities and Networking
    oral presentation
    The openlab, created three years ago at CERN, was a novel concept: to involve leading IT companies in the evaluation and the integration of cutting-edge technologies or services, focusing on potential solutions for the LCG. The novelty lay in the duration of the commitment (three years during which companies provided a mix of in-kind and in-cash contributions), the level of the...
    Go to contribution page
  148. Matthew Norman (University of California at San Diego)
    13/02/2006, 17:20
    Distributed Event production and processing
    oral presentation
    The increasing instantaneous luminosity of the Tevatron collider will cause the computing requirements for data analysis and MC production to grow larger than the dedicated CPU resources that will be available. In order to meet future demands, CDF is investing in shared, Grid, resources. A significant fraction of opportunistic Grid resources will be available to CDF before the LHC era...
    Go to contribution page
  149. Dr Andreas Pfeiffer (CERN, PH/SFT)
    13/02/2006, 17:20
    Software Tools and Information Systems
    oral presentation
    In the context of the LCG Applications Area, the SPI (Software Process and Infrastructure) project provides several services to the users in the LCG projects and the experiments (mainly at the LHC). These services comprise the CERN Savannah bug-tracking service, the external software service, and services concerning configuration management and application builds, as well as software...
    Go to contribution page
  150. Marco Clemencic (CERN)
    13/02/2006, 17:20
    Software Components and Libraries
    oral presentation
    The LHCb Conditions Database (CondDB) project aims to provide the necessary tools to handle non-event time-varying data. The LCG project COOL provides a generic API to handle this type of data and an interface to it has been integrated into the LHCb framework Gaudi. The interface is based on the Persistency Service infrastructure of Gaudi, allowing the user to load it at run-time only if...
    Go to contribution page
  151. Prof. Adele Rimoldi (University of Pavia)
    13/02/2006, 17:30
    Event processing applications
    oral presentation
    The simulation program for the ATLAS experiment at CERN is currently in full operational mode and integrated into the ATLAS common analysis framework, ATHENA. The OO approach, based on GEANT4 and in use during the DC2 data challenge, has been interfaced to ATHENA and to GEANT4 using the LCG dictionaries and Python scripting. The robustness of the application was proved during the...
    Go to contribution page
  152. Mr Sergio Andreozzi (INFN-CNAF)
    13/02/2006, 17:40
    Grid middleware and e-Infrastructure operation
    oral presentation
    The Grid paradigm enables the coordination and sharing of a large number of geographically-dispersed heterogeneous resources that are contributed by different institutions. These resources are organized into virtual pools and assigned to groups of users. The monitoring of such a distributed and dynamic system raises a number of issues, like the need for dealing with administrative...
    Go to contribution page
  153. Dr Ashok Agarwal (Department of Physics and Astronomy, University of Victoria, Victoria, Canada)
    13/02/2006, 17:40
    Distributed Event production and processing
    oral presentation
    GridX1, a Canadian computational Grid, combines the resources of various Canadian research institutes and universities through the Globus Toolkit and the CondorG resource broker (RB). It has been successfully used to run ATLAS and BaBar simulation applications. GridX1 is interfaced to LCG through a RB at the TRIUMF Laboratory (Vancouver), which is an LCG computing element, and ATLAS jobs...
    Go to contribution page
  154. Dr Sergey Linev (GSI DARMSTADT)
    13/02/2006, 17:40
    Software Components and Libraries
    oral presentation
    ROOT already has powerful and flexible I/O, which can potentially be used for storing object data in SQL databases. Using ROOT I/O together with an SQL database provides advanced functionality such as guaranteed data integrity, logging of data changes, the possibility to roll back changes and many other features offered by modern databases. At the same time, data representation...
    Go to contribution page
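    The transactional guarantees listed above (integrity, rollback) are what an SQL backend adds over plain file I/O. A small illustration with Python's sqlite3, using an invented table of serialized object blobs: an update that fails before commit leaves the stored data untouched.

    ```python
    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE objects (id INTEGER PRIMARY KEY, data BLOB)')
    conn.execute('INSERT INTO objects VALUES (1, ?)', (b'serialized-v1',))
    conn.commit()

    try:
        # Start an update, then simulate a failure before it is committed.
        conn.execute('UPDATE objects SET data = ? WHERE id = 1',
                     (b'serialized-v2',))
        raise RuntimeError('simulated failure before commit')
    except RuntimeError:
        conn.rollback()   # the uncommitted change is undone

    data = conn.execute('SELECT data FROM objects WHERE id = 1').fetchone()[0]
    print(data)   # b'serialized-v1' -- the original object survived
    ```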
  155. Roger Jones (Lancaster University)
    13/02/2006, 17:48
    Event processing applications
    oral presentation
    The project "EvtGen in ATLAS" has the aim of accommodating EvtGen into the LHC-ATLAS context. As such it comprises both physics and software aspects of the development. ATLAS has developed interfaces to enable the use of EvtGen within the experiment's object-oriented simulation and data-handling framework ATHENA, and furthermore has enabled the running of the software on the LCG. ...
    Go to contribution page
  156. Dr Beat Jost (CERN)
    14/02/2006, 09:00
    Plenary
    oral presentation
  157. Dr Elizabeth Sexton-Kennedy (FNAL)
    14/02/2006, 09:30
    Plenary
    oral presentation
  158. Dr Martin Purschke (BNL)
    14/02/2006, 10:00
    Plenary
    oral presentation
  159. Dr Tony Hey (Microsoft, UK)
    14/02/2006, 11:00
    Plenary
    oral presentation
  160. Dr David Axmark (MySQL)
    14/02/2006, 11:30
    Plenary
    oral presentation
  161. Dr Alan Gara (IBM T. J. Watson Research Center)
    14/02/2006, 12:00
    Plenary
    oral presentation
  162. Dr Massimo Lamanna (CERN)
    14/02/2006, 14:00
    Distributed Data Analysis
    oral presentation
    The ARDA project focuses on delivering analysis prototypes together with the LHC experiments. Each experiment prototype is in principle independent, but commonalities have been observed. The first level of commonality is represented by mature projects which can be effectively shared across different users. The best example is GANGA, providing a toolkit to organize users' activity,...
    Go to contribution page
  163. Dr Maya Stavrianakou (FNAL)
    14/02/2006, 14:00
    Event processing applications
    oral presentation
    The CMS simulation based on the Geant4 toolkit and the CMS object-oriented framework has been in production for almost two years and has delivered a total of more than 100 million physics events for the CMS Data Challenges and Physics Technical Design Report studies. The simulation software has recently been successfully ported to the new CMS Event-Data-Model based software framework. In this...
    Go to contribution page
  164. Subir Sarkar (INFN-CNAF)
    14/02/2006, 14:00
    Distributed Event production and processing
    oral presentation
    The higher instantaneous luminosity of the Tevatron Collider forces large increases in computing requirements for the CDF experiment, which has to cover future needs of data analysis and MC production. CDF can no longer afford to rely on dedicated resources to cover all of its needs and is therefore moving toward shared Grid resources. CDF has been relying on a set of CDF Analysis...
    Go to contribution page
  165. Mr Dinesh Sarode (Computer Division, BARC, Mumbai-85, India)
    14/02/2006, 14:00
    Computing Facilities and Networking
    oral presentation
    Today huge datasets result from computer simulations (CFD, physics, chemistry etc.) and sensor measurements (medical, seismic and satellite). There is exponential growth in the computational requirements of scientific research. Modern parallel computers and the Grid are providing the required computational power for the simulation runs. Rich visualization is essential in...
    Go to contribution page
  166. Dr Maria Cristina Vistoli (Istituto Nazionale di Fisica Nucleare (INFN))
    14/02/2006, 14:00
    Grid middleware and e-Infrastructure operation
    oral presentation
    Moving from a National Grid Testbed to a Production quality Grid service for the HEP applications requires an effective operations structure and organization, proper user and operations support, flexible and efficient management and monitoring tools. Moreover the middleware releases should be easily deployable using flexible configuration tools, suitable for various and different local...
    Go to contribution page
  167. Dr Stefan Roiser (CERN)
    14/02/2006, 14:00
    Software Components and Libraries
    oral presentation
    Reflection is the ability of a programming language to introspect and interact with its own data structures at runtime without prior knowledge about them. Many recent languages (e.g. Java, Python) provide this ability inherently, but it is lacking for C++. This paper will describe a software package, Reflex, which provides reflection capabilities for C++. Reflex was developed in the...
    Go to contribution page
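    For comparison with the languages mentioned above that provide reflection inherently, a short Python example of the kind of runtime introspection Reflex brings to C++ (the Track class is purely illustrative):

    ```python
    class Track:
        """A toy data object inspected at runtime."""
        def __init__(self, pt, eta):
            self.pt = pt
            self.eta = eta

        def charge(self):
            return 1

    t = Track(42.0, 1.5)
    # Discover members and methods without prior knowledge of the type.
    print(sorted(vars(t)))                 # ['eta', 'pt']
    print([m for m in dir(t)
           if callable(getattr(t, m)) and not m.startswith('__')])  # ['charge']
    print(getattr(t, 'pt'))               # 42.0 -- read a member by name
    ```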
  168. Dr Benedetto Gorini (CERN)
    14/02/2006, 14:00
    Online Computing
    oral presentation
    The Trigger and Data Acquisition system (TDAQ) of the ATLAS experiment at the CERN Large Hadron Collider is based on a multi-level selection process and a hierarchical acquisition tree. The system, consisting of a combination of custom electronics and commercial products from the computing and telecommunication industry, is required to provide an online selection power of 10^5 and a total...
    Go to contribution page
  169. Hegoi Garitaonandia Elejabarrieta (Instituto de Fisica de Altas Energias (IFAE))
    14/02/2006, 14:00
    Software Tools and Information Systems
    oral presentation
    ATLAS Trigger & DAQ software, at six Gbytes per release, will be installed on about two thousand machines in the final system. Already during the development phase, it is tested and debugged in various Linux clusters of different sizes and network topologies. For the distribution of the software across the network there are at least two possible approaches: fixed routing points, and...
    Go to contribution page
  170. Mr Andreas Salzburger (UNIVERSITY OF INNSBRUCK)
    14/02/2006, 14:18
    Event processing applications
    oral presentation
    Various systematic physics and detector performance studies with the ATLAS detector require very large event samples. To generate those samples, a fast simulation technique is used instead of the full detector simulation, which often takes too much effort in terms of computing time and storage space. The widely used ATLAS fast simulation program ATLFAST, however, is based on initial four...
    Go to contribution page
  171. Dr Valeri FINE (BROOKHAVEN NATIONAL LABORATORY)
    14/02/2006, 14:20
    Distributed Event production and processing
    oral presentation
    Job tracking, i.e. monitoring bundles of jobs or individual job behavior from submission to completion, is becoming very complicated in the heterogeneous Grid environment. This paper presents the principles of an integrated tracking solution based on components already deployed at STAR, none of which are experiment specific: a Generic logging layer and the STAR Unified Meta-Scheduler...
    Go to contribution page
  172. Dr David Lawrence (Jefferson Lab)
    14/02/2006, 14:20
    Software Components and Libraries
    oral presentation
    The JLab Introspection Library (JIL) provides a level of introspection for C++ enabling object persistence with minimal user effort. Type information is extracted from an executable that has been compiled with debugging symbols. The compiler itself acts as a validator of the class definitions while enabling us to avoid implementing an alternate C++ preprocessor to generate dictionary...
    Go to contribution page
  173. Mr Stuart Wakefield (Imperial College, University of London, London, United Kingdom)
    14/02/2006, 14:20
    Distributed Data Analysis
    oral presentation
    BOSS (Batch Object Submission System) has been developed to provide logging and bookkeeping and real-time monitoring of jobs submitted to a local farm or a grid system. The information is persistently stored in a relational database for further processing. By means of user-supplied filters, BOSS extracts the specific job information to be logged from the standard streams of the job itself...
    Go to contribution page
  174. Anja Vest (University of Karlsruhe)
    14/02/2006, 14:20
    Grid middleware and e-Infrastructure operation
    oral presentation
    Computer clusters at universities are usually shared among many groups. As an example, the Linux cluster at the "Institut fuer Experimentelle Kernphysik" (IEKP), University of Karlsruhe, is shared between working groups of the high energy physics experiments AMS, CDF and CMS, and has successfully been integrated into the SAM grid of CDF and the LHC computing grid LCG for CMS while it still...
    Go to contribution page
  175. Dr Wenji Wu (Fermi National Accelerator Laboratory)
    14/02/2006, 14:20
    Computing Facilities and Networking
    oral presentation
    The computing models for HEP experiments are becoming ever more globally distributed and grid-based, both for technical reasons (e.g., to place computational and data resources near each other and the demand) and for strategic reasons (e.g., to leverage technology investments). To support such computing models, the network and end systems (computing and storage) face unprecedented...
    Go to contribution page
  176. Marco Mambelli (UNIVERSITY OF CHICAGO)
    14/02/2006, 14:20
    Software Tools and Information Systems
    oral presentation
    We describe the Capone workflow manager, which was designed to work on Grid3 and the Open Science Grid. It has been used extensively to run ATLAS managed and user production jobs during the past year, but has undergone major redesigns to improve reliability and scalability as a result of lessons learned (cite Prod paper). This paper introduces the main features of the new system, covering...
    Go to contribution page
  177. Mr Sebastian Neubert (Technical University Munich)
    14/02/2006, 14:25
    Online Computing
    oral presentation
    PANDA is a universal detector system being designed in the scope of the FAIR project at Darmstadt, Germany, dedicated to high precision measurements of hadronic systems in the charm quark mass region. At the HESR storage ring a beam of antiprotons will interact with internal targets to achieve the desired luminosity of 2x10^32 cm^-2 s^-1. The experiment is designed for event...
    Go to contribution page
  178. Joanna Weng (Karlsruhe/CERN)
    14/02/2006, 14:36
    Event processing applications
    oral presentation
    An object-oriented package for parameterizing electromagnetic showers in the framework of the Geant4 toolkit has been developed. This parameterization is based on the algorithms of the GFLASH package (implemented in Geant3/FORTRAN), but has been adapted to the new simulation context of Geant4. This package can substitute for the full tracking of high energy electrons/positrons (normally from...
    Go to contribution page
  179. Dr Steffen G. Kappler (III. Physikalisches Institut, RWTH Aachen university (Germany))
    14/02/2006, 14:40
    Software Components and Libraries
    oral presentation
    Physics analyses at modern collider experiments enter a new dimension of event complexity. At the LHC, for instance, physics events will consist of the final state products of on the order of 20 simultaneous collisions. In addition, a number of today's physics questions are studied in channels with complex event topologies and configuration ambiguities occurring during event analysis....
    Go to contribution page
  180. Mr Laurence Field (CERN)
    14/02/2006, 14:40
    Distributed Event production and processing
    oral presentation
    As a result of the interoperations activity between LHC Computing Grid (LCG) and Open Science Grid (OSG), it was found that the information and monitoring space within these grids is a crowded area with many closed end-to-end solutions that do not interoperate. This paper gives the current overview of the information and monitoring space within these grids and tries to find overlapping...
    Go to contribution page
  181. Mr Laurence Field (CERN)
    14/02/2006, 14:40
    Grid middleware and e-Infrastructure operation
    oral presentation
    Open Science Grid (OSG) and LHC Computing Grid (LCG) are two grid infrastructures that were built independently on top of a Virtual Data Toolkit (VDT) core. Due to the demands of the LHC Virtual Organizations (VOs), it has become necessary to ensure that these grids interoperate so that the experiments can seamlessly use them as one resource. This paper describes the work that was...
    Go to contribution page
  182. Mr Giulio Eulisse (Northeastern University, Boston)
    14/02/2006, 14:40
    Distributed Data Analysis
    oral presentation
    We describe how a new programming paradigm dubbed AJAX (Asynchronous Javascript and XML) has enabled us to develop high-performance web-based graphics applications. Specific examples are shown of our web clients for: the CMS Event Display (real-time Cosmic Challenge), remote detector monitoring with ROOT displays, and performant 3D displays of GEANT4 descriptions of LHC detectors. The...
    Go to contribution page
  183. Igor Mandrichenko (FNAL)
    14/02/2006, 14:40
    Computing Facilities and Networking
    oral presentation
    Fermilab is a high energy physics research lab that maintains a dynamic network which typically supports around 10,000 active nodes. Due to the open nature of the scientific research conducted at FNAL, the portion of the network used to support open scientific research requires high bandwidth connectivity to numerous collaborating institutions around the world, and must facilitate...
  184. Mr Florian Urmetzer (Research Assistant in the ACET centre, The University of Reading, UK)
    14/02/2006, 14:40
    Software Tools and Information Systems
    oral presentation
    Ongoing research has shown that testing grid software is complex. Automated testing mechanisms seem to be widely used, but are critically discussed on account of their efficiency and correctness in finding errors. Especially when programming distributed collaborative systems, structures get complex and systems get more error-prone. Past projects done by the authors have shown that the...
  185. Sebastian Robert Bablok (Department of Physics and Technology, University of Bergen, Norway)
    14/02/2006, 14:45
    Online Computing
    oral presentation
    The HLT, integrating all major detectors of ALICE, is designed to analyse LHC events online. A cluster of 400 to 500 dual SMP PCs will constitute the heart of the HLT system. To synchronize the HLT with the other online systems of ALICE (Data Acquisition (DAQ), Detector Control System (DCS), Trigger (TRG)) the Experiment Control System (ECS) has to be interfaced. In order to do so, the...
  186. Dr Edward Moyse (University of Massachusetts)
    14/02/2006, 14:54
    Event processing applications
    oral presentation
    The event data model (EDM) of the ATLAS experiment is presented. For large collaborations like the ATLAS experiment, common interfaces and data objects are a necessity to ensure easy maintenance and coherence of the experiment's software platform over a long period of time. The ATLAS EDM improves commonality across the detector subsystems and subgroups such as trigger, test beam...
  187. Dr Ketevi Adikle Assamagan (Brookhaven National Laboratory), PAT ATLAS (ATLAS)
    14/02/2006, 15:00
    Software Components and Libraries
    oral presentation
    The physics program at the LHC includes precision tests of the Standard Model (SM), the search for the SM Higgs boson up to 1 TeV, the search for the MSSM Higgs bosons in the entire parameter space, the search for Supersymmetry, and sensitivity to alternative scenarios such as compositeness, large extra dimensions, etc. This requires general purpose detectors with excellent performance....
  188. Mr Stuart Paterson (University of Glasgow / CPPM, Marseille)
    14/02/2006, 15:00
    Distributed Data Analysis
    oral presentation
    DIRAC is the LHCb Workload and Data Management system for Monte Carlo simulation, data processing and distributed user analysis. Using DIRAC, a variety of resources may be integrated, including individual PCs, local batch systems and the LCG grid. We report here on the progress made in extending DIRAC for distributed user analysis on LCG. In this paper we describe the advances in the...
  189. Dr Graeme A Stewart (University of Glasgow)
    14/02/2006, 15:00
    Grid middleware and e-Infrastructure operation
    oral presentation
    Data management has proved to be one of the hardest jobs to do in the grid environment. In particular, file replication has suffered problems of transport failures, client disconnections, duplication of current transfers and resultant server saturation. To address these problems the Globus and gLite grid middlewares offer new services which improve the resiliency and robustness of...
  190. Dr Dantong Yu (BROOKHAVEN NATIONAL LABORATORY), Dr Dimitrios Katramatos (BROOKHAVEN NATIONAL LABORATORY)
    14/02/2006, 15:00
    Computing Facilities and Networking
    oral presentation
    A DOE MICS/SciDac funded project, TeraPaths, deployed and prototyped the use of differentiated networking services based on a range of new transfer protocols to support the global movement of data in the high energy physics distributed computing environment. While this MPLS/LAN QoS work specifically targets networking issues at BNL, the experience acquired and expertise developed is...
  191. Dr Victor Daniel Elvira (Fermi National Accelerator Laboratory (FNAL))
    14/02/2006, 15:00
    Software Tools and Information Systems
    oral presentation
    Monte Carlo simulations are a critical component of physics analysis in a large HEP experiment such as CMS. The validation of the simulation software is therefore essential to guarantee the quality and accuracy of the Monte Carlo samples. CMS is developing a Simulation Validation Suite (SVS) consisting of a set of packages associated with the different sub-detector systems: tracker,...
  192. Gordon Watts (University of Washington)
    14/02/2006, 15:05
    Online Computing
    oral presentation
    DØ, one of two collider experiments at Fermilab's Tevatron, upgraded its DAQ system for the start of Run II. The run started in March 2001, and the DAQ system was fully operational shortly afterwards. The DAQ system is a fully networked system based on Single Board Computers (SBCs) located in VME readout crates which forward their data to a 250 node farm of commodity processors for trigger...
  193. Dr Christopher Jones (CORNELL UNIVERSITY)
    14/02/2006, 15:12
    Event processing applications
    oral presentation
    The new CMS Event Data Model and Framework that will be used for the high level trigger, reconstruction, simulation and analysis is presented. The new framework is centered around the concept of an Event. A data processing job is composed of a series of algorithms (e.g., a track finder or track fitter) that run in a particular order. The algorithms only communicate via data stored in...
  194. Dr Dirk Pleiter (DESY)
    14/02/2006, 16:00
    Computing Facilities and Networking
    oral presentation
    apeNEXT is the latest generation of massively parallel machines optimized for simulating QCD formulated on a lattice (LQCD). In autumn 2005 the commissioning of several large-scale installations of apeNEXT started, which will provide a total of 15 TFlops of compute power. This fully custom-designed computer has been developed by a European collaboration composed of groups from INFN...
  195. Dr Eric Hjort (Lawrence Berkeley National Laboratory)
    14/02/2006, 16:00
    Distributed Event production and processing
    oral presentation
    This paper describes the integration of Storage Resource Management (SRM) technology into the grid-based analysis computing framework of the STAR experiment at RHIC. Users in STAR submit jobs on the grid using the STAR Unified Meta-Scheduler (SUMS), which in turn makes best use of Condor-G to send jobs to remote sites. However, the result of each job may be sufficiently large that existing...
  196. Dr Lorenzo Moneta (CERN)
    14/02/2006, 16:00
    Software Components and Libraries
    oral presentation
    LHC experiments obtain needed mathematical and statistical computational methods via the coherent set of C++ libraries provided by the Math work package of the ROOT project. We present recent developments of this work package, formed from the merge of the ROOT and SEAL activities: (1) MathCore, a new core library, has been developed as a self contained component encompassing basic...
  197. Dr Frederick Luehring (Indiana University)
    14/02/2006, 16:00
    Software Tools and Information Systems
    oral presentation
    ATLAS is one of the largest collaborations ever attempted in the physical sciences. This paper explains how the software infrastructure is organized to manage collaborative code development by around 200 developers with varying degrees of expertise, situated in 30 different countries. We will describe how succeeding releases of the software are built, validated and subsequently deployed to...
  198. Dr Da-Peng JIN (IHEP (Institute of High Energy Physics, Beijing, China))
    14/02/2006, 16:00
    Online Computing
    oral presentation
    Physics studies form the basis of the hardware design of the BES3 trigger system. They include detector simulations, generation and optimization of the sub-detectors' trigger conditions, main trigger simulations (combining the trigger conditions from different detectors to find out the trigger efficiencies of the physical events and the rejection factors of the background events) and...
  199. Federico Carminati (CERN)
    14/02/2006, 16:00
    Event processing applications
    oral presentation
    The ALICE Offline framework is now in its 8th year of development and is close to being used for data taking. This talk will provide a short description of the history of AliRoot and then describe the latest developments. The newly added alignment framework, based on the ROOT geometrical modeller, will be described. The experience with the FLUKA Monte Carlo used for full detector...
  200. Dr Dietrich Liko (CERN)
    14/02/2006, 16:00
    Distributed Data Analysis
    oral presentation
    The ATLAS strategy follows a service oriented approach to provide Distributed Analysis capabilities to its users. Based on initial experiences with an Analysis service, the ATLAS production system has been evolved to support analysis jobs. As the ATLAS production system is based on several grid flavours (LCG, OSG and Nordugrid), analysis jobs will be supported by specific executors on the...
  201. Mr Paolo Badino (CERN)
    14/02/2006, 16:00
    Grid middleware and e-Infrastructure operation
    oral presentation
    In this paper we report on the lessons learned from the Middleware point of view while running the gLite File Transfer Service (FTS) on the LCG Service Challenge 3 setup. The FTS has been designed based on the experience gathered from the Radiant service used in Service Challenge 2, as well as the CMS Phedex transfer service. The first implementation of the FTS was put to use in the...
  202. Dr Weidong Li (IHEP, Beijing)
    14/02/2006, 16:18
    Event processing applications
    oral presentation
    BESIII is a general-purpose experiment for studying electron-positron collisions at BEPCII, which is currently under construction at IHEP, Beijing. The BESIII offline software system is built on the Gaudi architecture. This contribution describes the BESIII-specific framework implementation for offline data processing and physics analysis. We will also present the development status...
  203. Hans von der Schmitt (MPI for Physics, Munich / ATLAS)
    14/02/2006, 16:20
    Online Computing
    oral presentation
    The ATLAS detector at CERN's LHC will be exposed to proton-proton collisions at a nominal rate of 1 GHz from beams crossing at 40 MHz. A three-level trigger system will select potentially interesting events in order to reduce this rate to about 200 Hz. The first trigger level is implemented in custom-built electronics and firmware, whereas the higher trigger levels are based on software. A...
  204. Dr David Cameron (European Organization for Nuclear Research (CERN))
    14/02/2006, 16:20
    Grid middleware and e-Infrastructure operation
    oral presentation
    The ATLAS detector currently under construction at CERN's Large Hadron Collider presents data handling requirements of an unprecedented scale. From 2008 the ATLAS distributed data management (DDM) system must manage tens of petabytes of event data per year, distributed around the world: the collaboration comprises 1800 physicists participating from more than 150 universities and...
  205. Dr Chih-Hao Huang (Fermi National Accelerator Laboratory)
    14/02/2006, 16:20
    Computing Facilities and Networking
    oral presentation
    ENSTORE is a very successful petabyte-scale mass storage system developed at Fermilab. Since its inception in the late 1990s, ENSTORE has been serving the Fermilab community, as well as its collaborators, and now holds more than 3 petabytes of data on tape. New data is arriving at an ever increasing rate. One practical issue that we are confronted with is: storage technologies have been...
  206. Wim Lavrijsen (LBNL)
    14/02/2006, 16:20
    Software Tools and Information Systems
    oral presentation
    The offline and high-level trigger software for the ATLAS experiment has now fully migrated to a scheme which allows large tasks to be broken down into many functionally independent components. These components can focus, for example, on conditions or physics data access, on purely mathematical or combinatorial algorithms or on providing detector-specific geometry and calibration...
  207. Dr Isidro Gonzalez Caballero (Instituto de Fisica de Cantabria (CSIC-UC))
    14/02/2006, 16:20
    Distributed Data Analysis
    oral presentation
    A typical HEP analysis in the LHC experiments involves the processing of data corresponding to several million events, terabytes of information, to be analysed in the last phases. Currently, processing one million events in a single modern workstation takes several hours, thus slowing the analysis cycle. The desirable computing model for a physicist would be closer to a High Performance...
  208. Mr Christoph Wissing (University of Dortmund)
    14/02/2006, 16:20
    Distributed Event production and processing
    oral presentation
    The H1 experiment at HERA records electron-proton collisions provided by beam crossings at a frequency of 10 MHz. The detector has about half a million readout channels and the data acquisition allows logging of about 25 events per second with a typical size of 100 kB. The increased event rates after the upgrade of the HERA accelerator at DESY led to a more demanding usage of computing and...
  209. Philippe Canal (FNAL)
    14/02/2006, 16:20
    Software Components and Libraries
    oral presentation
    We have initiated a repository of statistics-related tools, software, and technique documentation used in HEP and related physics disciplines. Fermilab is to assume custodial responsibility for the operation of this Phystat repository, which will be in the nature of an open archival repository. Submissions of appropriate packages, papers, modules and code...
  210. Frank Gaede (DESY)
    14/02/2006, 16:36
    Event processing applications
    oral presentation
    The International Linear Collider project ILC is in a very active R&D phase where currently three different detector concepts are developed in international working groups. In order to investigate and optimize the different detector concepts and their physics potential it is highly desirable to have flexible and easy to use software tools. In this talk we present Marlin, a modular C++...
  211. Timur Perelmutov (FERMI NATIONAL ACCELERATOR LABORATORY)
    14/02/2006, 16:40
    Grid middleware and e-Infrastructure operation
    oral presentation
    The dCache collaboration actively works on the implementation and improvement of the features and the grid support of dCache storage. It has delivered a Storage Resource Manager (SRM) interface, a GridFtp server, a Resilient Manager and Interactive Web Monitoring tools. SRMs are middleware components whose function is to provide dynamic space allocation and file management of shared storage...
  212. Dr Alberto Ribon (CERN), Dr Andreas Pfeiffer (CERN), Dr Barbara Mascialino (INFN Genova), Dr Maria Grazia Pia (INFN GENOVA), Dr Paolo Viarengo (IST Genova)
    14/02/2006, 16:40
    Software Components and Libraries
    oral presentation
    Many Goodness-of-Fit tests have been collected in a new open-source Statistical Toolkit: Chi-squared, Kolmogorov-Smirnov, Goodman, Kuiper, Cramer-von Mises, Anderson-Darling, Tiku, Watson, as well as novel weighted formulations of some tests. None of the Goodness-of-Fit tests included in the toolkit is optimal for any analysis case. Statistics does not provide a universal recipe to...
  213. Dr Simone Campana (CERN)
    14/02/2006, 16:40
    Distributed Event production and processing
    oral presentation
    The LHC Computing Grid Project (LCG) provides and operates the computing support and infrastructure for the LHC experiments. In the present phase, the experiments' systems are being commissioned and the LCG Experiment Integration Support team provides support for the integration of the underlying grid middleware with the experiment-specific components. The support activity during the...
  214. Dr Conrad Steenberg (CALIFORNIA INSTITUTE OF TECHNOLOGY)
    14/02/2006, 16:40
    Distributed Data Analysis
    oral presentation
    We present the architecture and implementation of a bi-directional system for monitoring long-running jobs on large computational clusters. JobMon comprises an asynchronous intra-cluster communication server and a Clarens web service on a head node, coupled with a job wrapper for each monitored job to provide monitoring information both periodically and upon request. The Clarens web service...
  215. Mr Gianluca Comune (Michigan State University)
    14/02/2006, 16:40
    Online Computing
    oral presentation
    This paper describes an analysis and conceptual design for the steering of the ATLAS High Level Trigger (HLT). The steering is the framework that organises the event selection software. It implements the key event selection strategies of the ATLAS trigger, which are designed to minimise processing time and data transfers: reconstruction within regions of interest, menu-driven selection and...
  216. Dr Gidon Moont (GridPP/Imperial)
    14/02/2006, 16:40
    Computing Facilities and Networking
    oral presentation
    A working prototype portal for the LHC Computing Grid (LCG) is being customised for use by the T2K 280m Near Detector software group. This portal is capable of submitting jobs to the LCG and retrieving the output on behalf of the user. The T2K-specific development of the portal will create customised submission systems for the suites of production and analysis software being written by...
  217. Stefano Argiro (European Organization for Nuclear Research (CERN))
    14/02/2006, 16:40
    Software Tools and Information Systems
    oral presentation
    Releasing software for projects with large code bases is a challenging task. When developers are geographically dispersed, often in different time zones, coordination can be difficult. A successful release strategy is therefore paramount, and clear guidelines for all the stages of software development are required. The CMS experiment recently started a major refactoring of its...
  218. Dr Denis Bertini (GSI Darmstadt)
    14/02/2006, 16:54
    Event processing applications
    oral presentation
    The simulation and analysis framework of the CBM collaboration will be presented. CBM (Compressed Baryonic Matter) is an experiment at the future FAIR (Facility for Antiproton and Ion Research) in Darmstadt. The goal of the experiment is to explore the phase diagram of strongly interacting matter in high-energy nucleus-nucleus collisions. The Virtual Monte Carlo concept allows...
  219. Kostas Kordas (Laboratori Nazionali di Frascati (LNF))
    14/02/2006, 17:00
    Online Computing
    oral presentation
    The ATLAS experiment at the LHC will start taking data in 2007. Event data from proton-proton collisions will be selected in a three level trigger system which reduces the initial bunch crossing rate of 40 MHz at its first level trigger (LVL1) to 75 kHz with a fixed latency of 2.5 μs. The second level trigger (LVL2) collects and analyses Regions of Interest (RoI) identified by LVL1 and...
  220. Dr Dantong Yu (BROOKHAVEN NATIONAL LABORATORY), Dr Xin Zhao (BROOKHAVEN NATIONAL LABORATORY)
    14/02/2006, 17:00
    Grid middleware and e-Infrastructure operation
    oral presentation
    We describe two illustrative cases in which Grid middleware (GridFtp, dCache and SRM) was used successfully to transfer hundreds of terabytes of data between BNL and its remote RHIC and ATLAS collaborators. The first case involved PHENIX production data transfers to CCJ, a regional center in Japan, during the 2005 RHIC run. Approximately 270TB of data, representing 6.8 billion polarized...
  221. Dr Pablo Garcia-Abia (CIEMAT)
    14/02/2006, 17:00
    Distributed Event production and processing
    oral presentation
    In preparation for the start of the experiment, CMS must produce large quantities of detailed full-detector simulation. In this presentation we describe our experience with running official CMS Monte Carlo simulation on distributed computing resources. We will present the implementation used to generate events using the LHC Computing Grid (LCG-2) resources in Europe, as well as the...
  222. Klaus Rabbertz (Karlsruhe University)
    14/02/2006, 17:00
    Software Tools and Information Systems
    oral presentation
    Packaging and distribution of experiment-specific software becomes a complicated task when the number of versions and external dependencies increases. With the advent of Grid computing, the distribution and update process must become a simple, robust and transparent step. Furthermore, one must take into account that running a particular application requires setup of the appropriate...
  223. Mr Marco Corvo (Cnaf and Cern)
    14/02/2006, 17:00
    Distributed Data Analysis
    oral presentation
    CRAB (CMS Remote Analysis Builder) is a tool, developed by INFN within the CMS collaboration, which gives physicists the possibility to analyze large amounts of data by exploiting the huge computing power of grid distributed systems. It is currently used to analyze simulated data needed to prepare the Physics Technical Design Report. Data produced by CMS are distributed among several...
  224. Dr Ilya Narsky (California Institute of Technology), Dr Julian Bunn (California Institute of Technology (CALTECH))
    14/02/2006, 17:00
    Software Components and Libraries
    oral presentation
    Modern analysis of high energy physics (HEP) data needs advanced statistical tools to separate signal from background. A C++ package has been implemented to provide such tools for the HEP community. The package includes linear and quadratic discriminant analysis, decision trees, bump hunting (PRIM), boosting (AdaBoost), bagging and random forest algorithms, and interfaces to the...
  225. Shawn Mc Kee (High Energy Physics)
    14/02/2006, 17:00
    Computing Facilities and Networking
    oral presentation
    We will describe the networking details of the NSF-funded UltraLight project and report on its status. The project's goal is to meet the data-intensive computing challenges of the next generation of particle physics experiments with a comprehensive, network-focused agenda. The UltraLight network is a hybrid packet- and circuit-switched network infrastructure employing both "ultrascale"...
  226. Andreas Morsch (CERN)
    14/02/2006, 17:12
    Event processing applications
    oral presentation
    The ALICE Offline Project has developed a virtual interface to the detector transport code called Virtual Monte Carlo. It isolates the user code from changes of the detector simulation package and hence allows a seamless transition from GEANT3 to GEANT4 and FLUKA. Moreover, a new geometrical modeler has been developed in collaboration with the ROOT team, and successfully interfaced to...
  227. Andreas Nowack (Aachen University), Klaus Rabbertz (Karlsruhe University)
    14/02/2006, 17:20
    Software Tools and Information Systems
    oral presentation
    We describe the various tools used by CMS to create and manage the packaging and distribution of software, including the various CMS software packages and the external components upon which CMS software depends. It is crucial to manage the environment to ensure that the configuration is correct, consistent, and reproducible at the many computing centres running CMS software. We describe...
  228. Dr Peter Elmer (PRINCETON UNIVERSITY)
    14/02/2006, 17:20
    Distributed Event production and processing
    oral presentation
    The Monte Carlo Processing Service (MCPS) package is a Python based workflow modelling and job creation package used to realise CMS Software workflows and create executable jobs for different environments ranging from local node operation to wide ranging distributed computing platforms. A component based approach to modelling workflows is taken to allow both executable tasks as well as...
  229. Dr Les Cottrell (Stanford Linear Accelerator Center (SLAC))
    14/02/2006, 17:20
    Computing Facilities and Networking
    oral presentation
    High Energy and Nuclear Physics (HENP) experiments generate unprecedented volumes of data which need to be transferred, analyzed and stored. This in turn requires the ability to sustain, over long periods, the transfer of large amounts of data between collaborating sites, with relatively high throughput. Groups such as the Particle Physics Data Grid (PPDG) and Globus are developing and...
  230. Dr Alberto De Min (Politecnico di Milano)
    14/02/2006, 17:20
    Software Components and Libraries
    oral presentation
    In the last few decades operations research has made dramatic progress in providing efficient algorithms and fast software implementations to solve practical problems related to a wide range of disciplines, from logistics to finance, from political sciences to digital image analysis. After a brief introduction to the most used techniques, such as linear and mixed-integer programming,...
  231. Dr Alexandre Vaniachine (ANL)
    14/02/2006, 17:20
    Grid middleware and e-Infrastructure operation
    oral presentation
    High energy and nuclear physics applications on computational grids require efficient access to terabytes of data managed in relational databases. Databases also play a critical role in grid middleware: file catalogues, monitoring, etc. Crosscutting the computational grid infrastructure, a hyperinfrastructure of the databases emerges. The Database Access for Secure Hyperinfrastructure...
  232. Dr Tommaso Boccali (Scuola Normale Superiore and INFN Pisa)
    14/02/2006, 17:30
    Event processing applications
    oral presentation
    The Reconstruction Software for the CMS detector is designed to serve multiple use cases, from the online triggering of the High Level Trigger to the offline analysis. The software is based on the CMS Framework, and comprises reconstruction modules which can be scheduled independently. These produce and store event data ranging from low-level objects to objects useful for analysis on...
  233. Edmund Erich Widl (Institute for High Energy Physics, Vienna)
    14/02/2006, 17:40
    Software Components and Libraries
    oral presentation
    The Inner Tracker of the CMS experiment consists of approximately 20,000 sensitive modules in order to cope with the bunch crossing rate and the high particle multiplicity expected in the environment of the Large Hadron Collider. For such a large number of modules, conventional methods for track-based alignment face serious difficulties because of the large number of alignment parameters and...
  234. Dr Daniele Bonacorsi (INFN-CNAF Bologna, Italy), on behalf of the CMS Italy Tier-1 and Tier-2s
    14/02/2006, 17:40
    Distributed Event production and processing
    oral presentation
    The CMS experiment is progressing towards real LHC data handling by building and testing its Computing Model through daily experience with production-quality operations as well as in challenges of increasing complexity. The capability to simultaneously address both these complex tasks on a regional basis - e.g. within INFN - relies on the quality of the developed tools and...
  235. Dr Birger Koblitz (CERN)
    14/02/2006, 17:40
    Grid middleware and e-Infrastructure operation
    oral presentation
    We present the AMGA (ARDA Metadata Grid Application) metadata catalog, which is a part of the gLite middleware. AMGA provides a very lightweight metadata service as well as basic database access functionality on the Grid. Following a brief overview of the AMGA design, functionality, implementation and security features, we will show performance comparisons of AMGA with direct database...
  236. Marian Ivanov (CERN)
    14/02/2006, 17:48
    Event processing applications
    oral presentation
    An overview of the online reconstruction algorithms for the ALICE Time Projection Chamber and Inner Tracking System is given. Both the tracking efficiency and the time performance of the algorithms are presented in detail. The application of the tracking algorithms in possible high transverse momentum jet and open charm triggers is discussed.
  237. Andreas Joachim Peters (CERN)
    15/02/2006, 09:00
    Distributed Event production and processing
    poster
    The LHC experiments at CERN will collect data at a rate of several petabytes per year and produce several hundred files per second. Data has to be processed and transferred to many tier centres for distributed data analysis in different physics data formats increasing the amount of files to handle. All these files must be accounted for, reliably and securely tracked in a GRID environment,...
  238. Mr Davide Rebatto (INFN - MILANO)
    15/02/2006, 09:00
    Grid middleware and e-Infrastructure operation
    poster
    In current, widely deployed management schemes, intensive computing farms are locally managed by batch systems (e.g. Platform LSF, PBS/Torque, BQS, etc.). When approached from the outside, at the global (or 'grid') level, these local resource managers (LRMS) are seen as services providing at least a basic set of job operations, namely submission, status retrieval, cancellation and security...
  239. Mr Sankhadip Sengupta (Undergraduate Student, Aerospace Engineering, IIT Kharagpur, India)
    15/02/2006, 09:00
    Distributed Event production and processing
    poster
    This paper addresses the growing usage of high performance computing in modern computational fluid dynamics to simulate the flow-induced vibrations of cylindrical structures, necessary to enhance reactor safety in nuclear plants. The study is essential to prevent damage to steam tubes causing an accident due to the release of reactor coolant containing radioactive materials out of...
  240. Prof. Harvey Newman (CalTech)
    15/02/2006, 09:00
    Plenary
    oral presentation
  241. Mr Gian Luca Rubini (INFN-CNAF)
    15/02/2006, 09:00
    Grid middleware and e-Infrastructure operation
    poster
    One of the most interesting challenges of the 'computing Grid' is how to administer grid resources allocation and data access, in order to obtain an effective and optimized computing usage and a secure data access. To reach this goal, a new entity has appeared, the Virtual Organization (VO), which represents a distributed community of users, accessing a distributed computing environment....
  242. Dr Enrico Pasqualucci (Istituto Nazionale di Fisica Nucleare (INFN), Roma)
    15/02/2006, 09:00
    Online Computing
    poster
    The ATLAS DAQ and monitoring software are currently commonly used to test detectors during the commissioning phase. In this paper, their usage in MDT and RPC commissioning is described, both at the surface pre-commissioning and commissioning stations and in the ATLAS pit. Two main components are heavily used for detector tests. The ROD Crate DAQ software is based on the ATLAS ReadOut...
  243. Toby Burnett (University of Washington)
    15/02/2006, 09:00
    Event processing applications
    poster
    We have developed a package that trains and applies boosted classification trees, a technology long used by the statistics community, but only recently being explored by HEP. We will discuss its design (Object-Oriented C++), and show two examples of its use: to detect single top production in DZERO events, and for background rejection in GLAST.
  244. Gordon Watts (DZERO Collaboration)
    15/02/2006, 09:00
    Event processing applications
    poster
    DØ, one of the collider detectors at Fermilab's Tevatron, depends on efficient and pure b-quark identification for much of its high-pT physics program. DØ currently has two algorithms, one based on impact parameter and the other on explicit reconstruction of the B hadron's decay vertex. A third, combined algorithm is under development. DØ certifies all of its b-quark tagging algorithms...
  245. Dr Gene Oleynik (Fermilab)
    15/02/2006, 09:00
    Computing Facilities and Networking
    poster
    Fermilab provides a primary and tertiary permanent storage facility for its High Energy Physics program and other world wide scientific endeavors. The lifetime of the files in this facility, which are maintained in automated robotic tape libraries, is typically many years. Currently the amount of data in the Fermilab permanent store facility is 3.3 PB and growing rapidly. The...
  246. Dr Alessandra Forti (University of Manchester)
    15/02/2006, 09:00
    Grid middleware and e-Infrastructure operation
    poster
    The HEP department of the University of Manchester has purchased a 1000-node cluster. The cluster will be accessible to various VOs through EGEE/LCG grid middleware. One of the interesting aspects of the equipment bought is that each node has 2x250 GB disks, leading to a total of approximately 4TB of usable disk space. The space is intended to be managed using dcache and its resilience...
  247. Dr Jose Hernandez (CIEMAT)
    15/02/2006, 09:00
    Grid middleware and e-Infrastructure operation
    poster
    CMS has chosen to adopt a distributed model for all computing in order to cope with the requirements on computing and storage resources needed for the processing and analysis of the huge amount of data the experiment will be providing from LHC startup. The architecture is based on a tier-organised structure of computing resources, based on a Tier-0 centre at CERN, a small number of...
  248. Prof. Sridhara Dasu (UNIVERSITY OF WISCONSIN)
    15/02/2006, 09:00
    Grid middleware and e-Infrastructure operation
    poster
    The University of Wisconsin campus research computing grid is an offshoot of the Condor project, which provides middleware for many world-wide computing grids. The Grid Laboratory of Wisconsin (GLOW) and other UW-based computing facilities exploit Condor technologies to provide research computing for a variety of fields, including high energy physics projects on the UW campus. The...
  249. Dr Andrea Valassi (CERN)
    15/02/2006, 09:00
    Distributed Event production and processing
    poster
    In April 2005, the LCG Conditions Database Project delivered the first production release of the COOL software, providing basic functionalities for the handling of conditions data. Since that time, several new production releases have extended the functionalities of the software. As the project is now moving into the deployment phase in Atlas and LHCb, its priorities are the...
  250. Dr Daniele Spiga (INFN & Università degli Studi di Perugia)
    15/02/2006, 09:00
    Distributed Data Analysis
    poster
    CMS is one of the four experiments expected to take data at the LHC. On the order of a few petabytes of data per year will be stored in several computing sites all over the world. The collaboration has to provide tools for accessing and processing the data in a distributed environment, using the grid infrastructure. CRAB (Cms Remote Analysis Builder) is a user-friendly tool developed by INFN within...
  251. Dr Andreas Gellrich (for the Grid team at DESY)
    15/02/2006, 09:00
    Grid middleware and e-Infrastructure operation
    poster
    DESY is one of the world-wide leading centers for research with particle accelerators and a center for research with synchrotron light. The hadron-electron collider HERA houses four experiments which are taking data and will be operated until mid 2007. DESY has been operating a LCG-based Grid infrastructure since 2004 which was set up in the context of the EU e-science Project...
  252. Dr Kilian Schwarz (GSI)
    15/02/2006, 09:00
    Distributed Data Analysis
    poster
    The D-Grid initiative, following similar programs in the USA and the UK, shall help to set up a nationwide German Grid infrastructure. Within work package 3 of the HEP Community Grid, distributed analysis tools making use of grid resources shall be developed. A starting point is the analysis framework ROOT. A set of abstract ROOT classes (TGrid ...) provides the user interface to...
  253. Santiago Gonzalez De La Hoz (European Organization for Nuclear Research (CERN))
    15/02/2006, 09:00
    Distributed Data Analysis
    poster
    The ATLAS production system provides access to resources across several grid flavors. Based on the experiences from the last data challenge the system has evolved. While key aspects of the old system are kept (Supervisor and executors), new implementations of the components aim for a more stable and scalable operation. An important aspect is also the integration with the new data management...
  254. Natalia Ratnikova (FERMILAB)
    15/02/2006, 09:00
    Software Tools and Information Systems
    poster
    Packaging and distribution of experiment-specific software becomes a complicated task when the number of versions and external dependencies increases. In order to run a single application, it is often enough to create an appropriate runtime environment that ensures availability of required shared objects and data files. The idea of distributing software applications based on runtime...
  255. Mr Andrey Bobyshev (FERMILAB)
    15/02/2006, 09:00
    Computing Facilities and Networking
    poster
    An ACL (access control list) is one of the few tools that network administrators often use to limit access to various network objects, e.g. to restrict access to certain network areas for specific traffic patterns. ACLs are also used to control traffic forwarding, e.g. for implementing so-called policy-based routing. Nowadays there is demand to update ACLs dynamically by...
  256. Dr Jens Jensen (Rutherford Appleton Laboratory)
    15/02/2006, 09:00
    Software Components and Libraries
    poster
    The most commonly deployed library for handling Secure Sockets Layer (SSL) and Transport Layer Security (TLS) is OpenSSL. The library is used by the client to negotiate connections to the server. It also offers features for caching parts of the information that is required, thus speeding up the process and reducing the cost of renegotiation. Those features are generally not fully used. This...
  257. Mr Brian Davies (LANCASTER UNIVERSITY), Dr Roger JONES (LANCAS)
    15/02/2006, 09:00
    Computing Facilities and Networking
    poster
    The ESLEA (Exploitation of Switched Lightpaths for E-science Applications) project has been working to put switched optical lightpath technology to the service of key large scientific projects. Central to the activity is the provision of services to the ATLAS experiment. The project is facing the practical problems of finding the best way of interfacing the power (but also the...
  258. Dr David Colling (Imperial College London), Dr Olivier van der Aa (Imperial College London)
    15/02/2006, 09:00
    Grid middleware and e-Infrastructure operation
    poster
    The LCG [1] has adopted a hierarchical Grid computing model which has a Tier 0 centre at CERN, national Tier 1 centres and regional Tier 2 centres. The roles of the different Tier centres are described in the LCG Technical Design Report [2] and the levels of service required from each level of Tier centre are described in the LCG Memorandum of Understanding [3]. Many of the Tier 2 centres...
  259. Ms Natascia De Bortoli (INFN - Naples)
    15/02/2006, 09:00
    Grid middleware and e-Infrastructure operation
    poster
    Monitoring activity plays an essential role in Grid Computing: it deals with the dynamics, variety and geographical distribution of Grid resources in order to measure important parameters and provide relevant information about a Grid system related to aspects such as usage, behaviour and performance. One of the basic requirements for a monitoring service is the capability of detection and...
  260. Dr David Colling (Imperial College London)
    15/02/2006, 09:00
    Grid middleware and e-Infrastructure operation
    poster
    While remote control of, and data collection from, instrumentation was part of the initial Grid concept, most recent Grid developments have concentrated on the sharing of distributed computational and storage resources. The GRIDCC project is working to bring instrumentation back to the Grid alongside compute and storage resources. To this end we have defined an Instrument Element (IE)...
  261. Dr Livio Fano' (INFN - Universita' degli Studi di Perugia)
    15/02/2006, 09:00
    Event processing applications
    poster
    The CMS detector is a general purpose experiment for the LHC. At the design maximum luminosity more than 10**9 events/second will be produced, while the data acquisition system will be able to manage a 100 Hz bandwidth. The trigger strategy for CMS is organised in 2 steps: a first level hardware trigger is implemented taking advantage of the fast-response detectors, such as the mu-chambers and...
  262. Dr Lucas Taylor (Northeastern University, Boston)
    15/02/2006, 09:00
    Software Components and Libraries
    poster
    IGUANA is a well-established generic interactive visualisation framework based on a C++ component model and open-source graphics products. We describe developments since the last CHEP, including: the event display toolkit, with examples from CMS and D0; the generic IGUANA visualisation system for GEANT4; integration of ROOT and Hippoplot with IGUANA; and a new lightweight and portable...
  263. Mr Andrey Bobyshev (FERMILAB)
    15/02/2006, 09:00
    Computing Facilities and Networking
    poster
    To satisfy the requirements of US-CMS, D0, CDF, SDSS and other experiments, Fermilab has established an optical path to the StarLight exchange point in Chicago. It gives access to multiple experimental networks, such as UltraScience Net, UltraLight, UKLight, and others, with very high bandwidth capacity but generally sub-production level service. The ongoing LambdaStation project is ...
  264. Dr Matthew Hodges (RAL - CCLRC)
    15/02/2006, 09:00
    Grid middleware and e-Infrastructure operation
    poster
    In preparation for LHC start-up, and as part of the early production service (under the UK GridPP project), we calculate efficiencies for jobs submitted to the RAL Tier-1 Batch Farm. Early usage of the Farm was characterised by high occupancy but low efficiency of Grid jobs; improvement has, however, been observed over the last six months. This behaviour has been examined by...
  265. Mr Colin Morey (University of Manchester)
    15/02/2006, 09:00
    Grid middleware and e-Infrastructure operation
    poster
    The HEP department of the University of Manchester has purchased a 1000-node cluster. The cluster will be accessible to various VOs through EGEE/LCG grid middleware. In this talk we will describe the management, security and monitoring setup we have chosen for administering the cluster with minimum effort and mostly remotely. From remote power-up to centralised installation and...
  266. Dr Michael Gronager (Copenhagen University)
    15/02/2006, 09:00
    Grid middleware and e-Infrastructure operation
    poster
    LCG and ARC are two of the major production-ready Grid middleware solutions, used by hundreds of HEP researchers every day. Even though the middlewares are based on the same technology, there are substantial architectural and implementation divergences. An ordinary user faces difficulties trying to cross the boundaries of the two systems: ARC clients so far have not been capable...
  267. Akram Khan (Brunel University)
    15/02/2006, 09:00
    Distributed Event production and processing
    poster
    The LCG-RUS project implemented the Global Grid Forum's Resource Usage Service standard and made grid resources for the LHC accountable in a common schema (GGF-URWG). This project is part of the UK e-Science programme, with the purpose of staging grid computing from e-Research to a computational market. LCG-RUS is complementary work to the preceding MCS (Market for Computational Service) RUS...
  268. Bruno Hoeft (Forschungszentrum Karlsruhe)
    15/02/2006, 09:00
    Computing Facilities and Networking
    poster
    Besides a brief overview of the GridKa private and public LAN network, the integration into the LHC-OPN network as well as the links to the T2 sites will be presented, in view of the physical network layout as well as their higher protocol layer implementations. Results of the feasibility discussion of dynamical routes for all connections of FZK including all different types the...
  269. Dr Jeremy Coles (GridPP)
    15/02/2006, 09:00
    Grid middleware and e-Infrastructure operation
    poster
    Based on experiences from the last 18 months of UK Particle Physics Grid (GridPP) operation, this paper examines several key areas for the success of the LHC Computing Grid. Among these is the necessity of establishing useful metrics (from job level to overall operational), accurate monitoring at both the grid and local fabric levels, and mechanisms to rapidly address potentially or...
  270. Dr David Evans (FERMILAB)
    15/02/2006, 09:00
    Distributed Event production and processing
    poster
    The Shahkar Runtime Execution Environment Kit (ShREEK) is a threaded workflow execution tool designed to run and intelligently manage arbitrary task workflows within a batch job. The Kit consists of three main components: an executor that runs tasks, a control point system to allow reordering of the workflow during execution, and a thread-based pluggable monitoring framework that offers...
  271. Mr Andrey Shevel (Petersburg Nuclear Physics Institute (Russia))
    15/02/2006, 09:00
    Distributed Event production and processing
    poster
    High Energy Physics analysis is often performed on midrange computing clusters (10-50 machines) in relatively small physics groups (3-10 physicists). Such clusters are usually built from commodity equipment and run under one of several Linux flavors. In an environment of limited resources, it is important to choose the "right" cluster architecture to achieve maximum performance. We...
  272. Dr Iosif Legrand (CALTECH)
    15/02/2006, 09:00
    Grid middleware and e-Infrastructure operation
    poster
    The MonaLISA (Monitoring Agents in A Large Integrated Services Architecture) system provides a distributed service for monitoring, control and global optimization of complex grid systems and networks for high energy physics, and many other fields of data-intensive science. It is based on an ensemble of autonomous multi-threaded, agent-based subsystems which are registered as dynamic...
  273. Harald Vogt (DESY Zeuthen)
    15/02/2006, 09:00
    Online Computing
    poster
    In building a software repository of simulation and reconstruction tools for a future International Linear Collider (ILC) detector, we started with applications based on code used in the LEP experiments, with Fortran and C as programming languages. All future software development for the ILC is done using modern OO languages, mainly C++ and Java. But for comparisons and providing a smooth...
  274. Dr Enrico Pasqualucci (Istituto Nazionale di Fisica Nucleare (INFN), Roma)
    15/02/2006, 09:00
    Online Computing
    poster
    In the ATLAS experiment, fast calibration of the detector is vital to feed prompt data reconstruction with fresh calibration constants. We present the use case of the muon detector, where a high rate of muon tracks (small data size) is needed to accomplish calibration requirements. The ideal place to get data suitable for muon detector calibration is the second level trigger, where the...
  275. Dr Barbara Mascialino (INFN Genova), Prof. Gerard Montarou (Univ. Blaise Pascal Clermont-Ferrand), Dr Maria Grazia Pia (INFN GENOVA), Dr Petteri Nieminen (ESA), Prof. Philippe Moretto (CENBG), Dr Riccardo Capra (INFN Genova), Dr Sebastien Incerti (CENBG), Ziad Francis (Univ. Blaise Pascal Clermont-Ferrand)
    15/02/2006, 09:00
    Event processing applications
    poster
    The extension of Geant4 simulation capabilities down to the electronvolt scale is required for precision studies of radiation effects on electronics and detector components, and for micro-/nano-dosimetry studies in various experimental environments. A project is in progress to extend the coverage of Geant4 physics to this energy range. The complexity of the problem domain is discussed...
  276. Dr Frank van Lingen (CALIFORNIA INSTITUTE OF TECHNOLOGY)
    15/02/2006, 09:00
    Distributed Event production and processing
    poster
    We describe a set of Web Services, created to support scientists in performing distributed production tasks (e.g. Monte Carlo). The Web Services described in this paper provide a portal for scientists to execute different production workflows which can consist of many consecutive steps. The main design goal of the Web Services discussed is to provide controlled access for...
  277. Dr Julius Hrivnac (LAL)
    15/02/2006, 09:00
    Software Components and Libraries
    poster
    Efficient and friendly access to the large amount of data distributed over the wide area network is a challenge for the near future LCG experiments. The problem can be solved using current standard open technologies and tools. A JDBC standard solution has been chosen as a base for a comprehensive system for the relational data access and management. Widely available open tools have been...
  278. Dr Jiri Chudoba (Institute of Physics, Prague)
    15/02/2006, 09:00
    Computing Facilities and Networking
    poster
    Many computing farms use PBSPro or its free version OpenPBS, or the Torque and Maui products, for local batch system management. These packages are delivered with graphical tools for a status overview, but summary and detailed reports from accounting log files are not available. This poster describes a set of tools we are using for an overview of resource consumption over the last few...
  279. Dr Giacomo Govi (CERN)
    15/02/2006, 09:00
    Software Components and Libraries
    poster
    The LCG POOL project has recently moved its focus to the development of storage back-ends based on relational databases. Following the requirements of the LHC experiments, POOL has developed a framework for object persistency in relational schemas. This presentation will describe the main functionality of the package, explaining how the mechanism provided by POOL allows to...
  280. Maria Cristina Vistoli (Istituto Nazionale di Fisica Nucleare (INFN))
    15/02/2006, 09:00
    Distributed Event production and processing
    poster
    The production and analysis frameworks for LHC experiments are demanding advanced features in the middleware functionality and a complete integration with the experiment specific software environment. They also require an effective and distributed test platform where the integrated middleware functionality is verified and certified. The deployment in a production infrastructure of such...
  281. Hans von der Schmitt (MPI for Physics, Munich), Rob McPherson (University of Victoria, TRIUMF)
    15/02/2006, 09:00
    Distributed Event production and processing
    poster
    Commissioning of the ATLAS detector at the CERN Large Hadron Collider (LHC) includes, as partially overlapping phases, subsystem standalone work, integration of systems into the full detector, cosmics data taking, single beam running and finally first collisions. These tasks require services like DAQ with data recording to Tier0 and distributed data management, databases,...
  282. Anzar Afaq (FERMILAB)
    15/02/2006, 09:00
    Software Tools and Information Systems
    poster
    The idea of an application database server is not new. It is a key element in multi-tiered architectures and business application frameworks. We present here a paradigm for developing such an application server in a completely schema-independent way. We introduce a Generic Query Object Layer (QOL) and a set of Database/Query Objects (D/QO) as the key components of the multi-layer Application...
  283. Mr Leandro Franco (IN2P3/CNRS Computing Centre)
    15/02/2006, 09:00
    Grid middleware and e-Infrastructure operation
    poster
    Managing the temporary disk space used by jobs in a farm can be an operational issue. Efforts have been put into controlling this space via the batch scheduler, to make sure a job uses at most the requested amount of space and that this space is cleaned up after the end of the job. ScratchFS is a virtual file system that addresses this problem for grid as well as conventional jobs at the...
  284. Mr Aatos Heikkinen (Helsinki Institute of Physics)
    15/02/2006, 09:00
    Event processing applications
    poster
    B tagging is an important tool for separating LHC Higgs events with associated b jets from the Drell-Yan background. We extend the standard neural network (NN) approach using a multilayer perceptron in b tagging [1] to include self-organizing feature maps. We demonstrate the use of the self-organizing maps (SOM_PAK program package) and the learning vector quantization (LVQ_PAK). A...
  285. Sanjay Ranka (University of Florida)
    15/02/2006, 09:00
    Grid middleware and e-Infrastructure operation
    poster
    Grid computing is becoming a popular way of providing high performance computing for many data intensive, scientific applications. The execution of user applications must simultaneously satisfy both job execution constraints and system usage policies. The SPHINX middleware addresses both these issues. In this paper, we present performance results of SPHINX on Open Science Grid. The...
  286. Mr Randolph J. Herber (FNAL)
    15/02/2006, 09:00
    Software Tools and Information Systems
    poster
    (For the SAMGrid Team) SQLBuilder's purpose is to translate selection criteria from a high-level form into SQL query statements. The internal design is intended to permit easy changes to the selection criteria available and to permit retargeting the specific dialect of SQL generated. The initial target language will be Oracle 9i SQL. The input language will be defined in a formal grammar...
  287. Luca Magnoni (INFN - CNAF), Riccardo Zappi (INFN - CNAF)
    15/02/2006, 09:00
    Grid middleware and e-Infrastructure operation
    poster
    LHC analysis farms - present at sites collaborating with LHC experiments - have been used in the past for analyzing data coming from an experiment's production center. With time such facilities were provided with high performance storage solutions in order to respond to the demand for big capacity and fast processing capabilities. Today, Storage Area Network solutions are commonly deployed...
  288. Timothy Adam Barrass (University of Bristol)
    15/02/2006, 09:00
    Distributed Event production and processing
    poster
    Distributed data management at LHC scales is a staggering task, accompanied by equally challenging practical management issues with storage systems and wide-area networks. The CMS data transfer management system, PhEDEx, is designed to handle this task with minimum operator effort, automating the workflows from large-scale distribution of HEP experiment datasets down to reliable and scalable...
  289. Robert GARDNER (UNIVERSITY OF CHICAGO)
    15/02/2006, 09:00
    Computing Facilities and Networking
    poster
    The purpose of the Teraport project is to provide computing and network infrastructure for a university-based, multi-disciplinary, Grid-enabled analysis platform with superior network connectivity to both domestic and international networks. The facility is configured and managed as part of larger Grid infrastructures, with specific focus on integration and interoperability with...
  290. Elena Slabospitskaya (State Res.Center of Russian Feder. Inst.f.High Energy Phys. (IFVE))
    15/02/2006, 09:00
    Grid middleware and e-Infrastructure operation
    poster
    A Directed Acyclic Graph (DAG) can be used to represent a set of programs where the input, output or execution of one or more programs is dependent on one or more other programs. We developed a basic test suite for DAG jobs. It consists of 2 main parts: a) functionality tests using the CLI (in Perl). The generation of DAGs with arbitrary structure and different JDL attributes for...
  291. Predrag Buncic (CERN)
    15/02/2006, 09:00
    Distributed Event production and processing
    poster
    The ALICE Computing Team has developed since 2001 a distributed computing environment implementing a Grid paradigm under the name of AliEn. With the evolution of the middleware provided by various large grid projects in Europe and in the US (EGEE, OSG, ARC), a number of services provided by AliEn are now provided and maintained by the corresponding Grid infrastructures. AliEn has therefore...
  292. Dr Antonio Sidoti (INFN Roma1 and University "La Sapienza")
    15/02/2006, 09:00
    Online Computing
    poster
    The ATLAS experiment at the LHC proton-proton collider at CERN will be faced with several technological challenges. A three level trigger and data acquisition system has been designed to reduce the 40 MHz bunch-crossing frequency, corresponding to an interaction rate of 1 GHz at the design instantaneous luminosity, to the ~100 Hz allowed by the permanent storage system. The capability to...
  293. Dr Paolo Meridiani (INFN Sezione di Roma 1)
    15/02/2006, 09:00
    Event processing applications
    poster
    The design goal of the CMS electromagnetic calorimeter is to reach an excellent energy resolution; several aspects concur to the fulfillment of this ambitious goal. An enormous quantity of hardware monitoring data will be available, together with a laser monitoring system that will be able to follow quasi on-line the change of transparency of the crystals due to radiation damage. This...
  294. Dr Alberto Ribon (CERN), Dr Andreas Pfeiffer (CERN), Dr Barbara Mascialino (INFN Genova), Dr Maria Grazia Pia (INFN GENOVA), Dr Paolo Viarengo (IST Genova)
    15/02/2006, 09:00
    Software Components and Libraries
    poster
    Statistical methods play a significant role throughout the life-cycle of high energy physics experiments. Only a few basic tools for statistical analysis were available in the public domain FORTRAN libraries for high energy physics. Nowadays the situation is hardly changed, even among the libraries of the new generation. The present project, in progress, develops an object-oriented...
  295. Dr Paul Millar (GridPP)
    15/02/2006, 09:00
    Grid middleware and e-Infrastructure operation
    poster
    Continuing the UK's strong involvement with Grid computing, the GridPP2 project (2004--2007) has established a group to investigate the use of metadata within HEP Grid computing. Three posts (based at Glasgow) are dedicated to metadata, but the group includes others working for CERN, various LHC experiments, EGEE and further afield. An important aspect of the group's work is to provide a...
  296. Stefano Veneziano (Istituto Nazionale di Fisica Nucleare Sezione di Roma 1)
    15/02/2006, 09:00
    Online Computing
    poster
    The ATLAS Level-1 Barrel system is devoted to identify muons crossing the two outer Resistive Plate Chambers stations of the Barrel spectrometer, passing a set of programmable pT thresholds, to find their position with a granularity of Delta Eta x Delta Phi = 0.1 x 0.1, and to associate them to a specific bunch crossing number. The system sends this trigger information to the Central Trigger...
  297. Robert Gardner (University of Chicago)
    15/02/2006, 09:00
    Computing Facilities and Networking
    poster
    The Midwest U.S. ATLAS Tier2 facility being deployed jointly by the University of Chicago and Indiana University is described in terms of a set of functional capabilities and operational provisions in support of ATLAS managed Monte Carlo production and distributed analysis of datasets by individual physicist-users. We describe a two-site shared systems administration model as well as the...
  298. Prof. Homer Alfred Neal (University of Michigan)
    15/02/2006, 09:00
    Event processing applications
    poster
    We will report on a set of studies we have conducted to assess the feasibility of measuring the polarization of lambda_b hyperons in the CERN ATLAS experiment by making the first successful adaptation of the generation package EvtGen for polarized spin-1/2 particles. The simulations were based on the ATLAS version of EvtGen, a product of the ATLAS EvtGen project, reported in other ATLAS...
  299. Mr Chris Perkins (STAR)
    15/02/2006, 09:00
    Online Computing
    poster
    We describe a new, high-speed trigger network for the STAR detector at RHIC to be used during the upcoming 2006 run and thereafter. The STAR Trigger Data Pusher (STP) replaces the off-the-shelf Myrinet network used in the STAR trigger system during the first five RHIC runs. The STP will lower latencies and increase bandwidth through the trigger system. Custom electronics provide...
    Go to contribution page
  300. Dr Lorenzo Moneta (CERN)
    15/02/2006, 09:00
    Software Components and Libraries
    poster
    Aiming to provide and support a coherent set of libraries, the mathematical functionality of the ROOT project has been reorganized following a merge of the ROOT and SEAL activities. Two new libraries, coded in C++, have been released in ROOT version 5: MathCore (basic functionality) and MathMore (functionality for advanced users). We present the structure and design of these new...
    Go to contribution page
  301. Mr Krzysztof Wrona (Deutsches Elektronen-Synchrotron (DESY),Germany)
    15/02/2006, 09:00
    Distributed Event production and processing
    poster
    The HERA luminosity upgrade and enhancements of the detector have led to considerably increased demands on computing resources for the ZEUS experiment. In order to meet these higher requirements, the ZEUS computing model has been extended to support computations in the Grid environment. We show how to use the Grid services in the production system of a real experiment and point out the...
    Go to contribution page
  302. Heidi Alvarez (Florida International University), Dr Paul Avery (University of Florida)
    15/02/2006, 09:00
    Computing Facilities and Networking
    poster
    Florida International University (FIU), in collaboration with partners at Florida State University (FSU), the University of Florida (UF), and the California Institute of Technology (Caltech), and in cooperation with the National Science Foundation, is creating and operating an interregional Grid-enabled Center for High-Energy Physics Research and Educational Outreach (CHEPREO) at FIU,...
    Go to contribution page
  303. Gordon Watts (University of Washington)
    15/02/2006, 09:00
    Software Components and Libraries
    poster
    DØ is a traditional High Energy Physics collider experiment located at the Tevatron at Fermilab. As in most recent past and future experiments, almost all computing work is done on Linux using standard open-source tools like the gcc compiler, the make utility, and ROOT. I have been using the Microsoft platform for quite some time to develop physics tools and algorithms. Once developed...
    Go to contribution page
  304. Aatos Heikkinen (Helsinki Institute of Physics)
    15/02/2006, 09:00
    Event processing applications
    poster
    We present an investigation to validate Geant4 [1] Bertini cascade nuclide production by proton- and neutron-induced reactions on various target elements [2]. The production of residual nuclides is calculated in the framework of an intra-nuclear cascade, pre-equilibrium, fission, and evaporation model [3]. A 132 CPU Opteron Linux cluster running the NPACI Rocks Cluster Distribution [4,...
    Go to contribution page
  305. Dr Alessandra Forti (University of Manchester)
    15/02/2006, 09:00
    Grid middleware and e-Infrastructure operation
    poster
    The development of the grid, and the acquisition of large clusters to support major HEP experiments on it, have triggered two different requests. One is from local physicists in the major VOs, who want privileged access to their own resources; the second is to support smaller groups that will never have access to this amount of resources. Unfortunately, both these categories of users up...
    Go to contribution page
  306. Mr Francesco Maria Taurino (CNR/INFM - INFN - Dip. di Fisica Univ. di Napoli "Federico II")
    15/02/2006, 09:00
    Computing Facilities and Networking
    poster
    Virtualization is a methodology of dividing the resources of a computer into multiple execution environments, by applying one or more concepts or technologies such as hardware and software partitioning, time-sharing, partial or complete machine simulation, emulation, quality of service, and many others. These techniques can be used to consolidate the workloads of several under-utilized...
    Go to contribution page
  307. Gerardo GANIS (CERN)
    15/02/2006, 09:00
    Distributed Data Analysis
    poster
    XrdSec is the security framework developed in the context of the XROOTD project. It provides a high-level abstract security interface for client-server applications. Concrete implementations of the interface can be written for any security protocol as plugin libraries, where all technical details about the protocol are confined. Clients and server administrators can configure the system...
    Go to contribution page
  308. Andrew Hanushevsky (Stanford Linear Accelerator Center)
    15/02/2006, 09:00
    Distributed Event production and processing
    poster
    Server clustering is an effective method of increasing the pool of resources available to applications. Many clustering mechanisms exist, each with its own strengths as well as weaknesses. This paper describes the mechanism used by xrootd to provide a uniform data access space consisting of an unbounded number of independent distributed servers. We show how the mechanism is especially...
    Go to contribution page
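One simple way to picture a uniform data-access space over independent servers is a redirector that maps each file path deterministically to a node, so every client resolves the same path to the same server. This is only an illustrative sketch: the class name, server names, and hashing policy are invented and stand in for xrootd's actual location protocol.

```python
import hashlib

class Redirector:
    """Toy redirector presenting one namespace over many servers."""

    def __init__(self, servers):
        self.servers = sorted(servers)

    def locate(self, path):
        # Hash the path to pick a server deterministically, so the
        # mapping needs no central catalogue and every client agrees.
        h = int(hashlib.sha256(path.encode()).hexdigest(), 16)
        return self.servers[h % len(self.servers)]

cluster = Redirector(["data01", "data02", "data03"])
print(cluster.locate("/star/run2006/event_000123.root"))
```

Because the mapping is a pure function of the path, servers can be queried independently and the "catalogue" never becomes a bottleneck.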
  309. Les Robertson (CERN)
    15/02/2006, 09:30
    Plenary
    oral presentation
  310. Ruth Pordes (Fermi National Accelerator Laboratory (FNAL))
    15/02/2006, 10:00
    Plenary
    oral presentation
  311. Dr Peter Elmer (PRINCETON UNIVERSITY)
    15/02/2006, 11:15
    Plenary
    oral presentation
  312. Rene Brun (CERN)
    15/02/2006, 11:45
    Plenary
    oral presentation
  313. Dr Rodney Walker (SFU)
    15/02/2006, 14:00
    Grid middleware and e-Infrastructure operation
    oral presentation
    The Condor-G meta-scheduling system has been used to create a single Grid of GT2 resources from LCG and GridX1, and ARC resources from NorduGrid. Condor-G provides the submission interfaces to GT2 and ARC gatekeepers, enabling transparent submission via the scheduler. Resource status from the native information systems is converted to the Condor ClassAd format and used for matchmaking to...
    Go to contribution page
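The ClassAd-style matchmaking described above can be sketched as follows: resource status is expressed as attribute records ("ads") and each job declares a requirement evaluated against them. The attribute names, toy resource ads, and requirement syntax are invented for illustration; they are not Condor's actual ClassAd language.

```python
# Toy "ClassAds": one dictionary of attributes per resource.
machines = [
    {"Name": "lcg-ce01",  "Arch": "x86", "FreeSlots": 0,  "Grid": "LCG"},
    {"Name": "nordu-ce1", "Arch": "x86", "FreeSlots": 12, "Grid": "NorduGrid"},
    {"Name": "gridx1-ce", "Arch": "ppc", "FreeSlots": 4,  "Grid": "GridX1"},
]

# A job's requirements: a predicate over a resource ad.
job = {"Requirements": lambda ad: ad["Arch"] == "x86" and ad["FreeSlots"] > 0}

def matchmake(job, ads):
    """Return the matching ad with the most free slots, or None."""
    matches = [ad for ad in ads if job["Requirements"](ad)]
    return max(matches, key=lambda ad: ad["FreeSlots"], default=None)

print(matchmake(job, machines)["Name"])   # nordu-ce1 under these toy ads
```

The point of converting every backend's status into one common format is visible here: once LCG, GridX1 and NorduGrid resources all look like the same kind of record, one matchmaker serves them all.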
  314. Dr Eric van Herwijnen (CERN)
    15/02/2006, 14:00
    Online Computing
    oral presentation
    LHCb has an integrated Experiment Control System (ECS), based on the commercial SCADA system PVSS. The novelty of this control system is that, in addition to the usual control and monitoring of all experimental equipment, it also provides control and monitoring for software processes, namely the on-line trigger algorithms. The trigger decisions are computed by algorithms on an event...
    Go to contribution page
  315. Dr Andrei TSAREGORODTSEV (CNRS-IN2P3-CPPM, MARSEILLE)
    15/02/2006, 14:00
    Distributed Event production and processing
    oral presentation
    DIRAC is the LHCb Workload and Data Management system used for Monte Carlo production, data processing and distributed user analysis. It is designed to be light and easy to deploy, which allows different kinds of computing resources, including stand-alone PCs, computing clusters and Grid systems, to be integrated in a single system. DIRAC uses the paradigm of an overlay network of "Pilot Agents",...
    Go to contribution page
  316. Oliver Gutsche (FERMILAB)
    15/02/2006, 14:00
    Distributed Data Analysis
    oral presentation
    The CMS computing model provides reconstruction and access to recorded data of the CMS detector as well as to Monte Carlo (MC) generated data. Due to the increased complexity, these functionalities will be provided by a tier structure of globally located computing centers using GRID technologies. In the CMS baseline, user access to data is provided by the CMS Remote Analysis Builder...
    Go to contribution page
  317. Dr Douglas Smith (STANFORD LINEAR ACCELERATOR CENTER)
    15/02/2006, 14:00
    Software Tools and Information Systems
    oral presentation
    In the increasingly distributed collaborations of today's experiments, there is a need to bring people together and manage all discussions. The main ways of doing this on-line are e-mail and web forums. HyperNews is a discussion management system which bridges the two, accepting e-mail for input but also archiving the discussions in easy-to-access web pages. The...
    Go to contribution page
  318. Mr Rohitashva Sharma (BARC)
    15/02/2006, 14:00
    Computing Facilities and Networking
    oral presentation
    It is important, both for users and for load-balancing programs like LSF, PBS and CONDOR, to know the Quality of Service offered by the nodes in a cluster before submitting a job to a given node. This helps achieve optimal utilization of the nodes in a cluster. Simple metrics like load average, memory utilization etc. do not adequately describe the load on the nodes or the Quality of Service (QoS)...
    Go to contribution page
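As a sketch of the kind of composite metric argued for above, the toy score below folds several node measurements into a single number a scheduler could rank by. The choice of inputs, the normalisations, and the weights are all assumptions for illustration, not the metric proposed in the talk.

```python
def qos_score(load_avg, mem_used_frac, net_mbps_free, n_cpus):
    """Combine node measurements into one score in [0, 1]; higher is better."""
    # Normalise each component so that 1.0 means "fully available".
    cpu_avail = max(0.0, 1.0 - load_avg / n_cpus)
    mem_avail = 1.0 - mem_used_frac
    net_avail = min(1.0, net_mbps_free / 100.0)   # assume a 100 Mbps link
    # Weighted combination (weights are illustrative assumptions).
    return 0.5 * cpu_avail + 0.3 * mem_avail + 0.2 * net_avail

# A half-loaded dual-CPU node with half its memory used:
print(qos_score(load_avg=1.0, mem_used_frac=0.5, net_mbps_free=80.0, n_cpus=2))
```

A scheduler would submit to the node with the highest score rather than simply the lowest load average.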
  319. Rene Brun (CERN)
    15/02/2006, 14:00
    Software Components and Libraries
    oral presentation
    HEP experiments generally have complex geometries that have to be represented and modelled for several purposes. The most important are simulation and reconstruction, where people generally rely on some "ideal" geometry representation that is modelled within the simulation framework. The problem is that the "real" experiment geometry contains perturbations to this "perfectly aligned" model...
    Go to contribution page
  320. Mr David Primor (Tel Aviv University, ISRAEL (CERN))
    15/02/2006, 14:00
    Event processing applications
    oral presentation
    This talk presents new methods to address the problem of muon track identification in the monitored drift tube chambers (MDT) of the ATLAS Muon Spectrometer. Pattern recognition techniques, employed by the current reconstruction software suffer when exposed to the high background rates expected at the LHC. We propose new techniques, exploiting existing knowledge of the detector...
    Go to contribution page
  321. Dr GENE VAN BUREN (BROOKHAVEN NATIONAL LABORATORY)
    15/02/2006, 14:18
    Event processing applications
    oral presentation
    The Solenoid Tracker At RHIC (STAR) experiment has observed luminosity fluctuations on time scales much shorter than expected during its design and construction. These operating conditions lead to rapid, luminosity-dependent variations in distortions of data from the STAR TPC, and the planned techniques for calibrating these distortions became insufficient to provide high...
    Go to contribution page
  322. Mr Ashiq Anjum (University of the West of England)
    15/02/2006, 14:20
    Distributed Data Analysis
    oral presentation
    Results from, and progress on, the development of a Data Intensive and Network Aware (DIANA) scheduling engine, primarily for data-intensive sciences such as physics analysis, are described. Scientific analysis tasks can involve thousands of computing, data handling, and network resources, and the size of the input and output files and the amount of overall storage space allotted to a user...
    Go to contribution page
  323. Giuseppe AVELLINO (Datamat S.p.A.)
    15/02/2006, 14:20
    Grid middleware and e-Infrastructure operation
    oral presentation
    Contemporary Grids are characterized by a middleware that provides the necessary virtualization of computation and data resources for the shared working environment of the Grid. In a large-scale view, different middleware technologies and implementations have to coexist. The SOA approach provides the needed architectural backbone for interoperable environments, where different...
    Go to contribution page
  324. Mr Jakub Moscicki (CERN), Dr Maria Grazia Pia (INFN GENOVA), Dr Patricia Mendez Lorenzo (CERN), Dr Susanna Guatelli (INFN Genova)
    15/02/2006, 14:20
    Distributed Event production and processing
    oral presentation
    The quantitative results of a study concerning Geant4 simulation in a distributed computing environment (local farm and LCG GRID) are presented. The architecture of the system, based on DIANE, is presented; it allows a Geant4 application to be configured transparently for sequential execution (on a single PC) and for parallel execution on a local PC farm or on the GRID. Quantitative results...
    Go to contribution page
  325. Dr Witold Pokorski (CERN)
    15/02/2006, 14:20
    Software Components and Libraries
    oral presentation
    The Geometry Description Markup Language (GDML) is a specialised XML-based language designed as an application-independent persistent format for describing detector geometries. It serves to implement 'geometry trees' which correspond to the hierarchy of volumes a detector geometry can be composed of, to identify the position of individual solids, and to describe the...
    Go to contribution page
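A toy geometry tree in the spirit of GDML can be assembled with Python's standard XML tools: volumes form a hierarchy, and a parent places a daughter via a reference plus a position. The tag and attribute names below mimic GDML's style but are simplified for illustration and are not the actual GDML schema.

```python
import xml.etree.ElementTree as ET

# Build a minimal "geometry tree": a World volume containing one
# positioned daughter volume (simplified, GDML-like structure).
gdml = ET.Element("gdml")
structure = ET.SubElement(gdml, "structure")

barrel = ET.SubElement(structure, "volume", name="Barrel")
world = ET.SubElement(structure, "volume", name="World")
phys = ET.SubElement(world, "physvol")            # a placement node
ET.SubElement(phys, "volumeref", ref="Barrel")    # which volume to place
ET.SubElement(phys, "position", x="0", y="0", z="120")  # where (e.g. mm)

print(ET.tostring(gdml, encoding="unicode"))
```

The application-independence comes from exactly this separation: the XML records only names, references and placements, which any framework can then map onto its own geometry objects.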
  326. Mr Andrey Bobyshev (FERMILAB)
    15/02/2006, 14:20
    Computing Facilities and Networking
    oral presentation
    High Energy Physics collaborations consist of hundreds to thousands of physicists and are world-wide in scope. Experiments and applications now running, or starting soon, need the data movement capabilities now available only on advanced and/or experimental networks. The Lambda Station project steers selectable traffic through site infrastructure and onto these "high-impact" wide-area ...
    Go to contribution page
  327. Dr Muge Karagoz Unel (University of Oxford)
    15/02/2006, 14:20
    Software Tools and Information Systems
    oral presentation
    The silicon system of the ATLAS Inner Detector consists of about 6000 modules in its Semiconductor Tracker and Pixel Detector. The offline global-fit alignment algorithm therefore has to solve a problem with up to 36000 degrees of freedom. 32-bit single-CPU platforms were foreseen to be unable to handle such large-size operations needed by the algorithm. The proposed solution is...
    Go to contribution page
  328. Dr Wainer Vandelli (Università and INFN Pavia)
    15/02/2006, 14:20
    Online Computing
    oral presentation
    ATLAS is one of the four experiments under construction along the Large Hadron Collider (LHC) ring at CERN. The LHC will produce interactions at a center of mass energy equal to $\sqrt s~=~14~TeV$ at a $40~MHz$ rate. The detector consists of more than 140 million electronic channels. The challenging experimental environment and the extreme detector complexity impose the necessity of a...
    Go to contribution page
  329. Dr Anselmo Cervera Villanueva (University of Geneva)
    15/02/2006, 14:36
    Event processing applications
    oral presentation
    RecPack is a general reconstruction toolkit, which can be used as a base for any reconstruction program for a HEP detector. Its main functionalities are track finding, fitting, propagation and matching. Track fitting can be done either via conventional least-squares methods or Kalman Filter techniques. The latter, in conjunction with the matching package, allows simultaneous track finding...
    Go to contribution page
  330. Abhishek Singh RANA (University of California, San Diego, CA, USA)
    15/02/2006, 14:40
    Grid middleware and e-Infrastructure operation
    oral presentation
    We report on first experiences with building and operating an Edge Services Framework (ESF) based on Xen virtual machines instantiated via the Workspace Service available in Globus Toolkit, and developed as a joint project between EGEE, LCG, and OSG. Many computing facilities are architected with their compute and storage clusters behind firewalls. Edge Services are instantiated on a small...
    Go to contribution page
  331. Dr Ulrik Egede (IMPERIAL COLLEGE LONDON)
    15/02/2006, 14:40
    Distributed Data Analysis
    oral presentation
    Physics analysis of large amounts of data by many users requires the usage of Grid resources. It is however important that users can see a single environment for developing and testing algorithms locally and for running on large data samples on the Grid. The Ganga job wizard, developed by LHCb and ATLAS, provides physicists such an integrated environment for job preparation, bookkeeping...
    Go to contribution page
  332. Karl Harrison (High Energy Physics Group, Cavendish Laboratory)
    15/02/2006, 14:40
    Distributed Event production and processing
    oral presentation
    Ganga is a lightweight, end-user tool for job submission and monitoring and provides an open framework for multiple applications and submission backends. It is developed in a joint effort in LHCb and ATLAS. The main goal of Ganga is to effectively enable large-scale distributed data analysis for physicists working in the LHC experiments. Ganga offers simple, pleasant and consistent user...
    Go to contribution page
  333. Prof. Manuel Delfino Reznicek (Port d'Informació Científica)
    15/02/2006, 14:40
    Computing Facilities and Networking
    oral presentation
    Efficient hierarchical storage management of small size files continues to be a challenge. Storing such files directly on tape-based tertiary storage leads to extremely low operational efficiencies. Commercial tape virtualization products are few, expensive and only proven in mainframe environments. Asking the users to deal with the problem by "bundling" their files leads to a plethora of...
    Go to contribution page
  334. Mrs Doris Burckhart (CERN)
    15/02/2006, 14:40
    Online Computing
    oral presentation
    The ATLAS Data Acquisition (DAQ) and High Level Trigger (HLT) software system will initially comprise 2000 PC nodes which take part in the control, event readout, second-level trigger and event-filter operations. This large number of PCs will only be purchased shortly before data taking in 2007. The large CERN IT lxbatch facility provided the opportunity, in July 2005, to run online...
    Go to contribution page
  335. Dr Cibran Santamarina Rios (European Organization for Nuclear Research (CERN))
    15/02/2006, 14:40
    Software Tools and Information Systems
    oral presentation
    In this presentation we will discuss the design and functioning of a new tool that runs the ATLAS High Level Trigger software on Event Summary Data (ESD) files, the format for data analysis in the experiment. An example of how to implement a sequence of algorithms based on electron selection will be shown.
    Go to contribution page
  336. Dr Maxim POTEKHIN (BROOKHAVEN NATIONAL LABORATORY)
    15/02/2006, 14:40
    Software Components and Libraries
    oral presentation
    The STAR Collaboration is currently migrating its simulation software, based on Geant3, to the ROOT-based Virtual Monte Carlo framework. One critical component of the framework is the mechanism of the Geometry Description, which comprises both the geometry model as used in the application and the external language that allows the users to define and maintain the detector configuration on...
    Go to contribution page
  337. Mr Tapio Lampen (HELSINKI INSTITUTE OF PHYSICS)
    15/02/2006, 14:54
    Event processing applications
    oral presentation
    Modern tracking detectors are composed of a large number of modules assembled in a hierarchy of support structures. The sensor modules are assembled into ladders or petals; ladders and petals in turn are assembled into cylindrical or disk-like layers, and layers are assembled to make a complete tracking device. Sophisticated geometrical calibration is essential in this kind of detector...
    Go to contribution page
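The hierarchy of support structures described above is exactly why alignment corrections compose: a sensor's global position accumulates the transforms of all its parents, so shifting one ladder coherently moves every module mounted on it. A minimal sketch, using translations along a single axis and invented structure names:

```python
# Each element knows its parent and its local offset along z (mm).
hierarchy = {
    "layer1":  {"parent": None,      "dz": 0.0},
    "ladder3": {"parent": "layer1",  "dz": 25.0},
    "module7": {"parent": "ladder3", "dz": 2.5},
}

def global_z(name):
    """Accumulate local offsets up the support hierarchy."""
    z = 0.0
    while name is not None:
        z += hierarchy[name]["dz"]
        name = hierarchy[name]["parent"]
    return z

# A 100-micron alignment correction applied to the ladder moves
# all of its modules together:
hierarchy["ladder3"]["dz"] += 0.1
print(global_z("module7"))   # 27.6
```

Real alignment uses full 3D rotations and translations per level, but the composition principle is the same.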
  338. Dr Douglas Smith (STANFORD LINEAR ACCELERATOR CENTER)
    15/02/2006, 15:00
    Distributed Event production and processing
    oral presentation
    For the BaBar computing group: Two years ago BaBar changed from using a database event storage technology to using ROOT files. This change drastically affected simulation production within the experiment, as well as the bookkeeping and the distribution of the data. Despite these large changes to production, events were produced as needed and on time for analysis. In fact the...
    Go to contribution page
  339. Dr Ian Fisk (FERMILAB)
    15/02/2006, 15:00
    Computing Facilities and Networking
    oral presentation
    CMS is preparing seven remote Tier-1 computing facilities to archive and serve experiment data. These centers represent the bulk of CMS's data serving capacity, a significant resource for reprocessing data, all of the simulation archiving capacity, and operational support for Tier-2 centers and analysis facilities. In this paper we present the progress on deploying the largest remote...
    Go to contribution page
  340. Prof. Kaushik De (UNIVERSITY OF TEXAS AT ARLINGTON)
    15/02/2006, 15:00
    Distributed Data Analysis
    oral presentation
    A new offline processing system for production and analysis, Panda, has been developed for the ATLAS experiment and deployed in OSG. ATLAS will accrue tens of petabytes of data per year, and the Panda design is accordingly optimized for data intensive processing. Its development followed three years of production experience, the lessons from which drove a markedly different design for the...
    Go to contribution page
  341. Dr Christos Leonidopoulos (CERN)
    15/02/2006, 15:00
    Online Computing
    oral presentation
    The Physics and Data Quality Monitoring framework (DQM) aims at providing a homogeneous monitoring environment across various applications related to data taking at the CMS experiment. Initially developed as a monitoring application for the 1000 dual-CPU box (High-Level) Trigger Farm, it quickly expanded its scope to accommodate different groups across the experiment. The DQM organizes the...
    Go to contribution page
  342. Ms Niranjani S (Department of Information Technology, Mohamed Sathak A.J. College of Engineering, 43, Old Mahabalipuram Road, Sipcot IT Park, Egatur, Chennai - 603 103, India.)
    15/02/2006, 15:00
    Software Components and Libraries
    oral presentation
    The enormous volume of data obtained in scientific experiments often necessitates a suitable graphical representation for analysis. Surface contour is one such graphical representation which renders a pictorial view that aids in easy data interpretation. It is essentially a two-dimensional visualization of a three-dimensional surface plot. Very recently, it has been shown that Super Heavy...
    Go to contribution page
  343. Mr Jeremy Herr (University of Michigan), Dr Steven Goldfarb (University of Michigan)
    15/02/2006, 15:00
    Software Tools and Information Systems
    oral presentation
    The size and geographical diversity of the LHC collaborations present new challenges for communication and training. The Web Lecture Archive Project (WLAP), a joint project between the University of Michigan and CERN Academic and Technical Training, has been involved in recording, archiving and disseminating physics lectures and software tutorials for CERN and the ATLAS Collaboration since...
    Go to contribution page
  344. Mr Marcus Hardt (Unknown)
    15/02/2006, 15:00
    Grid middleware and e-Infrastructure operation
    oral presentation
    One problem in distributed computing is bringing together application developers and resource providers to ensure that applications work well on the resources provided. A layer of abstraction between resources and applications provides new possibilities in designing Grid solutions. This paper compares different virtualisation environments, among which are Xen (developed at the...
    Go to contribution page
  345. Vakhtang Tsulaia (UNIVERSITY OF PITTSBURGH)
    15/02/2006, 15:12
    Event processing applications
    oral presentation
    This talk addresses two issues related to the implementation of a variable software description of the ATLAS detector. The first topic is how we implement an evolving description of an evolving ATLAS detector, including special configurations at varying levels of realism, in a way which plugs into the simulation and reconstruction software. The second topic is how time-dependent...
    Go to contribution page
  346. Andrei Kazarov (Petersburg Nuclear Physics Institute (PNPI))
    15/02/2006, 16:00
    Online Computing
    oral presentation
    In order to meet the requirements of ATLAS data taking, the ATLAS Trigger-DAQ system is composed of O(1000) applications running on more than 2000 computers in a network. At such a system size, software and hardware failures are quite frequent. To minimize system downtime, the Trigger-DAQ control system shall include advanced verification and diagnostics facilities. The operator should use tests and...
    Go to contribution page
  347. Mr Jeremy Herr (University of Michigan), Dr Steven Goldfarb (University of Michigan)
    15/02/2006, 16:00
    Software Tools and Information Systems
    oral presentation
    The major challenges preventing the wide-scale generation of web lecture recordings include the compactness and price of the required hardware, the speed of the compression and posting operations, and the need for a human camera operator. We will report on efforts that have led to major progress in addressing each of these issues. We will describe the design, prototyping and pilot...
    Go to contribution page
  348. Dr Hans Wenzel (FERMILAB)
    15/02/2006, 16:00
    Computing Facilities and Networking
    oral presentation
    We report on the ongoing evaluation of new 64-bit processors as they become available to us. We present the results of benchmarking these systems in various operating modes, and we also measured their power consumption. To measure performance we use HEP- and CMS-specific applications, including the analysis tool ROOT (C++), the Monte Carlo generator Pythia (FORTRAN), and OSCAR (C++), the GEANT 4...
    Go to contribution page
  349. Mr Pavel JAKL (Nuclear Physics Inst., Academy of Sciences - Czech Republic)
    15/02/2006, 16:00
    Distributed Data Analysis
    oral presentation
    With its increasing data samples, the RHIC/STAR experiment has faced a challenging data management dilemma: solutions using cheap disks attached to processing nodes have rapidly become economically beneficial over standard centralized storage. At the cost of data management, the STAR experiment moved to a multiple component locally distributed data model rendered viable by the...
    Go to contribution page
  350. Abhishek Singh Rana (UCSD)
    15/02/2006, 16:00
    Grid middleware and e-Infrastructure operation
    oral presentation
    Securely authorizing incoming users with appropriate privileges on distributed grid computing resources is a difficult problem. In this paper we present the work of the Open Science Grid Privilege Project which is a collaboration of developers from universities and national labs to develop an authorization infrastructure to provide finer grained authorization consistently to all grid...
    Go to contribution page
  351. Dr Hartmut Stadie (Deutsches Elektronen-Synchrotron (DESY), Germany)
    15/02/2006, 16:00
    Distributed Event production and processing
    oral presentation
    The detector and collider upgrades for the HERA-II running at DESY have considerably increased the demand on computing resources for Monte Carlo production for the ZEUS experiment. To close the gap, an automated production system capable of using Grid resources has been developed and commissioned. During its first year of operation, 400 000 Grid jobs were submitted by the production...
    Go to contribution page
  352. Rene Brun (CERN)
    15/02/2006, 16:00
    Software Components and Libraries
    oral presentation
    We present an overview of the common viewer architecture (TVirtualViewer3D interface and TBuffer3D shape hierarchy) used by all 3D viewers. This ensures that clients of the viewers are decoupled from the viewers and free of specific drawing code. We detail progress on the new OpenGL viewer, the primary development focus, including architecture (publish 'on demand' model, caching, native shapes,...
    Go to contribution page
  353. Marian Ivanov (CERN)
    15/02/2006, 16:00
    Event processing applications
    oral presentation
    Track finding and fitting algorithms for the ALICE barrel detectors, the Time Projection Chamber (TPC), Inner Tracking System (ITS) and Transition Radiation Detector (TRD), based on Kalman filtering are presented. The filtering algorithm is able to cope with non-Gaussian noise and ambiguous measurements in high-density environments. The approach has been implemented within the ALICE...
    Go to contribution page
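The Kalman-filtering idea underlying such track fits can be illustrated with a one-dimensional toy filter. A real track fit propagates a full state vector and covariance matrix from hit to hit through a magnetic field; this sketch only shows the predict/update cycle on a scalar, with invented noise parameters.

```python
def kalman_1d(measurements, meas_var, process_var):
    """Filter noisy scalar measurements; returns the state after each hit."""
    x, p = measurements[0], meas_var      # initial state and its variance
    states = [x]
    for z in measurements[1:]:
        p += process_var                  # predict: uncertainty grows
        k = p / (p + meas_var)            # gain: weight of the new hit
        x += k * (z - x)                  # update state with the residual
        p *= (1.0 - k)                    # update (shrink) the variance
        states.append(x)
    return states

hits = [1.0, 1.2, 0.9, 1.1, 1.0]          # noisy measurements of ~1.0
print(kalman_1d(hits, meas_var=0.04, process_var=0.0001))
```

Robust variants, as needed in the high-density environments mentioned above, replace the simple Gaussian update with down-weighting of outlier hits.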
  354. Dr Liliana Teodorescu (Brunel University)
    15/02/2006, 16:18
    Event processing applications
    oral presentation
    Evolutionary Algorithms, with Genetic Algorithms (GA) and Genetic Programming (GP) as the best-known versions, have a gradually increasing presence in High Energy Physics. They have proven successful in solving problems such as regression, parameter optimisation and event selection. Gene Expression Programming (GEP) is a new evolutionary algorithm that combines the advantages of both GA...
    Go to contribution page
  355. Mr Bartlomiej Pawlowski (CERN), Mr Nick Ziogas (CERN), Mr Wim Van Leersum (Cern)
    15/02/2006, 16:20
    Software Tools and Information Systems
    oral presentation
    CRA is a multi-layered system with a web-based front end providing centralized management and rules enforcement in a complex, distributed computing environment such as CERN's. Much like an orchestra conductor's, CRA's role is essential and multi-functional. Account management, resource usage and consistency controls for every central computing service at CERN, with about 75000 active accounts, is...
    Go to contribution page
  356. Mr Philippe Canal (FERMILAB)
    15/02/2006, 16:20
    Grid middleware and e-Infrastructure operation
    oral presentation
    We will describe the architecture and implementation of the new accounting service for the Open Science Grid. Gratia's main goal is to provide the OSG stakeholders with a reliable and accurate set of views of the usage of resources across the OSG. Gratia implements a service-oriented, secure framework for the necessary collectors and sensors. Gratia also provides repositories and access...
    Go to contribution page
  357. Mr Carsten Germer (DESY IT)
    15/02/2006, 16:20
    Computing Facilities and Networking
    oral presentation
    Taking the implementation of ZOPE/ZMS at DESY as an example, we will show and discuss various approaches and procedures for introducing a Content Management System in a HEP institute. We will show how requirements were gathered to make decisions regarding software and hardware, how existing systems and management procedures needed to be taken into consideration, and how the project was...
    Go to contribution page
  358. Mr Fabrizio Furano (INFN sez. di Padova)
    15/02/2006, 16:20
    Distributed Data Analysis
    oral presentation
    The latencies induced by network communication often play a big role in reducing the performance of systems which access large amounts of data in a distributed environment. The problem is present in Local Area Networks, but it is much more evident in Wide Area Networks. It is generally perceived as a critical problem which makes it very difficult to access remote data. However, a more...
    Go to contribution page
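A common way to hide such latencies is read-ahead: fetch data in large blocks, and prefetch the next block before the client asks for it, so a sequential reader mostly hits a local cache instead of paying a round trip per read. The sketch below simulates the "remote" side with an invented callable; block size and prefetch policy are illustrative assumptions, not the technique of the talk.

```python
class ReadAheadFile:
    """Toy client-side cache with one-block read-ahead."""

    BLOCK = 64 * 1024                     # fetch in 64 KiB blocks

    def __init__(self, remote_read):
        self.remote_read = remote_read    # callable(offset, length) -> bytes
        self.cache = {}                   # block index -> block data

    def read(self, offset, length):
        first = offset // self.BLOCK
        last = (offset + length - 1) // self.BLOCK
        # Fetch the needed blocks plus one extra, anticipating a
        # sequential reader, so the next request hits the cache.
        for b in range(first, last + 2):
            if b not in self.cache:
                self.cache[b] = self.remote_read(b * self.BLOCK, self.BLOCK)
        data = b"".join(self.cache[b] for b in range(first, last + 1))
        start = offset - first * self.BLOCK
        return data[start:start + length]

backing = bytes(range(256)) * 1024        # 256 KiB of fake remote data
f = ReadAheadFile(lambda off, n: backing[off:off + n])
print(f.read(10, 4))
```

On a WAN each `remote_read` would cost a full round trip, so serving most small reads from `cache` is what turns many latencies into one.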
  359. Dr Dirk Duellmann Duellmann (CERN IT/LCG 3D project)
    15/02/2006, 16:20
    Distributed Event production and processing
    oral presentation
    The LCG Distributed Deployment of Databases (LCG 3D) project is a joint activity between LHC experiments and LCG tier sites to co-ordinate the set-up of database services and facilities for relational data transfers as part of the LCG infrastructure. The project goal is to provide a consistent way of accessing database services at CERN tier 0 and collaborating LCG tier sites to achieve a...
    Go to contribution page
  360. Dr Benedetto Gorini (CERN)
    15/02/2006, 16:20
    Online Computing
    oral presentation
    This paper introduces the Log Service, developed at CERN within the ATLAS TDAQ/DCS framework. This package remedies the long standing problem of attempting to direct messages to the standard output and/or error in diskless nodes with no terminal. The Log Service provides a centralized mechanism for archiving and retrieving qualified information (Log Messages) created by TDAQ applications...
    Go to contribution page
  361. Dr Valeri FINE (BROOKHAVEN NATIONAL LABORATORY)
    15/02/2006, 16:20
    Software Components and Libraries
    oral presentation
    This talk presents an overview of the main components of a unique set of tools, in use in the STAR experiment, born from the fusion of two advanced technologies: the ROOT framework and libraries and the Qt GUI and event handling package. Together, they allow creating software packages and help resolving complex data-analysis or visualization problems, enhance computer simulation or help...
    Go to contribution page
  362. Dr Christopher Jones (CORNELL UNIVERSITY)
    15/02/2006, 16:36
    Event processing applications
    oral presentation
    In order to properly understand the data taken for an HEP Event, information external to the Event must be available. Such information includes geometry descriptions, calibration values, magnetic field readings and many more. CMS has chosen a unified approach to accessing such information via a data model based on the concept of an 'Interval of Validity', IOV. This data model is...
    Go to contribution page
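The 'Interval of Validity' lookup described in the abstract can be sketched as a sorted map from IOV start times to payloads. This is a hypothetical Python illustration, not the CMS implementation; the class and field names are invented:

```python
import bisect

class IOVStore:
    """Toy interval-of-validity store: each payload is valid from its
    start time until the next payload's start time (illustrative sketch)."""

    def __init__(self):
        self._starts = []    # sorted IOV start times (e.g. run numbers)
        self._payloads = []  # payload valid from the matching start time

    def add(self, start, payload):
        # Insert while keeping the start times sorted.
        i = bisect.bisect_left(self._starts, start)
        self._starts.insert(i, start)
        self._payloads.insert(i, payload)

    def get(self, t):
        """Return the payload whose interval of validity contains time t."""
        i = bisect.bisect_right(self._starts, t) - 1
        if i < 0:
            raise KeyError("no payload valid at %r" % (t,))
        return self._payloads[i]

store = IOVStore()
store.add(1, {"pedestal": 3.2})    # valid for runs 1..99
store.add(100, {"pedestal": 3.5})  # valid from run 100 onwards
print(store.get(57)["pedestal"])   # -> 3.2
```

Any event time then resolves to exactly one payload, which is the essential property such a calibration/conditions lookup needs.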
  363. Miguel Branco (CERN)
    15/02/2006, 16:40
    Distributed Event production and processing
    oral presentation
    To validate its computing model, ATLAS, one of the four LHC experiments, conducted in Q4 of 2005 a Tier-0 scaling test. The Tier-0 is responsible for prompt reconstruction of the data coming from the event filter, and for the distribution of this data and the results of prompt reconstruction to the tier-1s. Handling the unprecedented data rates and volumes will pose a huge challenge on the...
    Go to contribution page
  364. Dr Douglas Smith (STANFORD LINEAR ACCELERATOR CENTER)
    15/02/2006, 16:40
    Distributed Data Analysis
    oral presentation
    For the BaBar Computing Group: Two years ago, the BaBar experiment changed its event store from an object oriented database system to one based on ROOT files. A new bookkeeping system was developed to manage the meta-data of these files. This system has been in constant use since that time, and has successfully provided the needed meta-data information for users' analysis jobs,...
    Go to contribution page
  365. Mr Stephan Petit (CERN)
    15/02/2006, 16:40
    Software Tools and Information Systems
    oral presentation
    Ensuring personnel and equipment safety under all conditions, while operating the complex CERN systems, is a vital condition for CERN success. By applying accurate operating and maintenance procedures as well as executing regular safety inspections, CERN has an excellent safety record. Regular safety inspections also permit the traceability of all important events that have occurred...
    Go to contribution page
  366. Mr Adrian Casajus Ramo (Departamento d' Estructura i Constituents de la Materia)
    15/02/2006, 16:40
    Grid middleware and e-Infrastructure operation
    oral presentation
    DIRAC is the LHCb Workload and Data Management System and is based on a service-oriented architecture. It enables generic distributed computing with lightweight Agents and Clients for job execution and data transfers. The DIRAC code base is 99% Python, with all remote requests handled using the XML-RPC protocol. DIRAC is used for the submission of production and analysis jobs by the LHCb...
    Go to contribution page
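The XML-RPC request style mentioned in the abstract can be illustrated with Python's standard library alone. The `submit_job` service and the JDL string below are invented for illustration and are not the actual DIRAC API:

```python
# Minimal XML-RPC round trip: a service registers a function, a lightweight
# client invokes it remotely (illustrative sketch, not DIRAC code).
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

def submit_job(jdl):
    """Toy 'job submission' service: acknowledge and return a fake job ID."""
    return {"OK": True, "JobID": hash(jdl) % 10000}

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(submit_job)
host, port = server.server_address
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client issues the remote request over HTTP/XML-RPC.
client = ServerProxy("http://%s:%d" % (host, port))
result = client.submit_job("Executable = 'test.sh';")
server.shutdown()
print(result["OK"])  # -> True
```

Because XML-RPC marshals plain dictionaries, strings and numbers, such Agents and Clients stay lightweight: no generated stubs or IDL are required.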
  367. Dr Stefan Stancu (University of California, Irvine)
    15/02/2006, 16:40
    Computing Facilities and Networking
    oral presentation
    The ATLAS experiment will rely on Ethernet networks for several purposes. A control network will provide infrastructure services and will also handle the traffic associated with control and monitoring of trigger and data acquisition (TDAQ) applications. Two independent data networks (dedicated TDAQ networks) will be used exclusively for transferring the event data within the High Level...
    Go to contribution page
  368. Dr Christopher Pinkenburg (BROOKHAVEN NATIONAL LABORATORY)
    15/02/2006, 16:40
    Online Computing
    oral presentation
    The PHENIX experiment took 2*10^9 CuCu events and more than 7*10^9 pp events during Run5 of RHIC. The total stored raw data volume was close to 500 TB. Since our DAQ bandwidth allowed us to store all events selected by the low level triggers, we did not filter events with an online processor farm, which we refer to as the level 2 trigger. Instead we ran the level 2 triggers offline in the...
    Go to contribution page
  369. Vakhtang Tsulaia (UNIVERSITY OF PITTSBURGH)
    15/02/2006, 16:40
    Software Components and Libraries
    oral presentation
    We describe an event visualization package in use in ATLAS. The package is based upon Open Inventor and its HEPVIs extensions. It is integrated into ATLAS's analysis framework, is modular and open to user extensions, co-displays the real detector description/simulation (GeoModel/GEANT) geometry together with event data, and renders in real time on regular laptop computers, using their...
    Go to contribution page
  370. Dr JUAN PALACIOS (CERN)
    15/02/2006, 16:54
    Event processing applications
    oral presentation
    The LHCb alignment framework allows clients of the LHCb detector description software suite (DetDesc) to modify the position of components of the detector at run-time and see the changes propagated to all users of the detector geometry. DetDesc is used in the simulation, digitization and reconstruction phases of data processing and the alignment framework is available in all these stages....
    Go to contribution page
  371. Dr Robert Bainbridge (Imperial College London)
    15/02/2006, 17:00
    Online Computing
    oral presentation
    The CMS silicon strip tracker (SST), comprising a sensitive area of over 200m2 and 10M readout channels, is unprecedented in its size and complexity. The readout system is based on a 128-channel analogue front-end ASIC, optical readout and an off-detector VME board, using FPGA technology, that performs digitization, zero suppression and data formatting before forwarding the detector data...
    Go to contribution page
  372. Alberto Pepe (CERN)
    15/02/2006, 17:00
    Software Tools and Information Systems
    oral presentation
    The traditional dissemination channels of research results, via article publishing in scientific journals, are facing a profound metamorphosis driven by the advent of the internet and broader access to electronic resources. This change is naturally leading away from the traditional publishing paradigm towards an archive-based approach in which institutional libraries organize, manage and...
    Go to contribution page
  373. Abhishek Singh RANA (University of California, San Diego, CA, USA)
    15/02/2006, 17:00
    Computing Facilities and Networking
    oral presentation
    We introduce gPLAZMA (grid-aware PLuggable AuthoriZation MAnagement) Architecture. Our work is motivated by a need for fine-grain security (Role Based Access Control or RBAC) in Storage Systems, and utilizes VOMS extended X.509 certificate specification for defining extra attributes (FQANs), based on RFC 3281. Our implementation, the gPLAZMA module for dCache, introduces Storage...
    Go to contribution page
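The VOMS FQAN attributes mentioned in the abstract have the general form `/vo/subgroup/.../Role=.../Capability=...`. A toy parser (an illustrative sketch, not gPLAZMA code; the helper name is invented) shows how the role information used for RBAC decisions can be extracted:

```python
def parse_fqan(fqan):
    """Split a VOMS FQAN string into VO, group, role and capability
    (illustrative sketch; not the gPLAZMA implementation)."""
    parts = [p for p in fqan.split("/") if p]
    role, capability = None, None
    groups = []
    for p in parts:
        if p.startswith("Role="):
            role = p[len("Role="):]
        elif p.startswith("Capability="):
            capability = p[len("Capability="):]
        else:
            groups.append(p)  # leading components form the group path
    return {"vo": groups[0] if groups else None,
            "group": "/" + "/".join(groups),
            "role": role,
            "capability": capability}

info = parse_fqan("/cms/uscms/Role=production/Capability=NULL")
print(info["vo"], info["role"])  # -> cms production
```

An authorization module can then map the (group, role) pair onto storage privileges, which is the fine-grained RBAC idea the abstract describes.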
  374. Dr James Shank (Boston University)
    15/02/2006, 17:00
    Distributed Event production and processing
    oral presentation
    We describe experiences and lessons learned from over a year of nearly continuous running of managed production on Grid3 for the ATLAS data challenges. Two major phases of production were performed: the first, large scale GEANT based Monte Carlo simulations ("DC2"), were followed by extensive production for the ATLAS "Rome" physics workshop incorporating several new job types (digitization,...
    Go to contribution page
  375. Andrew Hanushevsky (Stanford Linear Accelerator Center)
    15/02/2006, 17:00
    Distributed Data Analysis
    oral presentation
    When the BaBar experiment transitioned to using the Root Framework, a new data server architecture, xrootd, was developed to address event analysis needs. This architecture was deployed at SLAC two years ago and since then has also been deployed at other BaBar Tier 1 sites: IN2P3, INFN, FZK, and RAL; as well as other non-BaBar sites: CERN (Alice), BNL (Star), and Cornell (CLEO). As part of...
    Go to contribution page
  376. Mrs Tanya Levshina (FERMILAB)
    15/02/2006, 17:00
    Grid middleware and e-Infrastructure operation
    oral presentation
    Currently, grid development projects require end users to be authenticated under the auspices of a "recognized" organization, called a Virtual Organization (VO). A VO establishes resource-usage agreements with grid resource providers. The VO is responsible for authorizing its members and optionally assigning them to groups and roles within the VO. This enables fine-grained authorization...
    Go to contribution page
  377. Dr Julius Hrivnac (LAL)
    15/02/2006, 17:00
    Software Components and Libraries
    oral presentation
    Huge requirements on computing resources have made it difficult to run Frameworks of some new HEP experiments on the users' personal workstations. Fortunately, new software technology allows us to give users back at least a bit of the user-friendliness they were used to in the past. A Java Analysis Studio (JAS) plugin has been developed, which accesses the Python API of the Atlas Offline...
    Go to contribution page
  378. Adlene Hicheur (Particle Physics)
    15/02/2006, 17:12
    Event processing applications
    oral presentation
    The ATLAS Inner Detector is composed of a pixel detector (PIX), a silicon strip detector (SCT) and a Transition Radiation Tracker (TRT). The goal of the algorithm is to align the silicon based detectors (PIX and SCT) using a global fit of the alignment constants. The total number of PIX and SCT silicon modules is about 35000, leading to many challenges. The current presentation will focus...
    Go to contribution page
  379. Matevz Tadel (CERN)
    15/02/2006, 17:20
    Software Components and Libraries
    oral presentation
    ALICE Event Visualization Environment (AEVE) is a general framework for visualization of detector geometry and event-related data being developed for the ALICE experiment. Its design is guided by the large raw event size (80 MBytes) and an even larger footprint of a full simulation--reconstruction pass (1.5 TBytes). An extensible pre-processing mechanism needed to reduce the data volume,...
    Go to contribution page
  380. 15/02/2006, 17:20
    Distributed Event production and processing
    oral presentation
    Within 5 years CMS expects to be managing many tens of petabytes of data at tens of sites around the world. This represents more than an order of magnitude increase in data volume over existing HEP experiments. This presentation will describe the underlying concepts and architecture of the CMS model for distributed data management, including connections to the new CMS Event Data Model. The...
    Go to contribution page
  381. Dr Andrew McNab (UNIVERSITY OF MANCHESTER)
    15/02/2006, 17:20
    Grid middleware and e-Infrastructure operation
    oral presentation
    GridSite has extended the industry-standard Apache webserver for use within Grid projects, both by adding support for Grid security credentials such as GSI and VOMS, and with the GridHTTP protocol for bulk file transfer via HTTP. We describe how GridHTTP combines the security model of X.509/HTTPS with the performance of Apache, in local and wide area bulk transfer applications. GridSite...
    Go to contribution page
  382. Pedro Arce (Cent.de Investigac.Energeticas Medioambientales y Tecnol. (CIEMAT))
    15/02/2006, 17:30
    Event processing applications
    oral presentation
    We describe C++ software that is able to reconstruct the positions, angular orientations and internal optical parameters of any optical system described by a seamless combination of many different types of optical objects. The program also handles the propagation of uncertainties, which makes it very useful for simulating the system in the design phase. The software is currently in use by...
    Go to contribution page
  383. Matevz Tadel (CERN)
    15/02/2006, 17:40
    Software Components and Libraries
    oral presentation
    Gled is an OO research framework for fast prototyping of applications in distributed and multi-threaded environments with support for direct data interaction and dynamic visualization. It is an extension of the ROOT framework and thus inherits its core features, including object serialization, versatile I/O infrastructure (files with inner directory structures, trees, rootd), CINT -- the...
    Go to contribution page
  384. Mr Levente HAJDU (BROOKHAVEN NATIONAL LABORATORY)
    15/02/2006, 17:40
    Grid middleware and e-Infrastructure operation
    oral presentation
    In the distributed computing world of heterogeneity, sites may offer anything from the bare minimum Globus package to a plethora of advanced services. Moreover, sites may have restrictions and limitations which need to be understood by resource brokers and planners in order to take best advantage of resources and computing cycles. Facing this reality, and to take full advantage of any...
    Go to contribution page
  385. Dr Jose Hernandez (CIEMAT)
    15/02/2006, 17:40
    Distributed Event production and processing
    oral presentation
    (For the CMS Collaboration) Since CHEP04 in Interlaken, the CMS experiment has developed a baseline Computing Model and a Technical Design for the computing system it expects to need in the first years of LHC running. Significant attention was focused on the development of a data model with heavy streaming at the level of the RAW data based on trigger physics selections. We expect that...
    Go to contribution page
  386. Dr Ken Miura (National Institute of Informatics, Japan)
    16/02/2006, 09:00
    Plenary
    oral presentation
  387. Dr Gang Chen (IHEP, Beijing)
    16/02/2006, 09:30
    Plenary
    oral presentation
  388. Dr Simon Lin
    16/02/2006, 10:00
    Plenary
    oral presentation
  389. Dr Piergiorgio Cerello (INFN - TORINO)
    16/02/2006, 11:00
    Plenary
    oral presentation
  390. Mathai Joseph (Tata Research Development and Design Centre)
    16/02/2006, 11:30
    Plenary
    oral presentation
  391. Dr Rajiv Gavai (TIFR)
    16/02/2006, 12:00
    Plenary
    oral presentation
  392. Dr Mikhail Kirsanov (CERN)
    16/02/2006, 14:00
    Software Components and Libraries
    oral presentation
    The library of Monte Carlo generator tools maintained by LCG (GENSER) guarantees centralized software and physics support for the simulation of fundamental interactions, and is currently widely adopted by the LHC collaborations. While the activity in LCG Phase I mostly concentrated on the standardization, integration and maintenance of the existing Monte Carlo...
    Go to contribution page
  393. Dr Joel Snow (Langston University)
    16/02/2006, 14:00
    Grid middleware and e-Infrastructure operation
    oral presentation
    Periodically an experiment will reprocess data taken previously to take advantage of advances in its reconstruction code and improved understanding of the detector. Within a period of ~6 months the DØ experiment has reprocessed, on the grid, a large fraction (0.5 fb-1) of the Run II data. This corresponds to some 1 billion events or 250TB of data and used raw data as input, requiring...
    Go to contribution page
  394. Dr Steven Goldfarb (High Energy Physics)
    16/02/2006, 14:00
    Software Tools and Information Systems
    oral presentation
    I report on the findings and recommendations of the LCG Project's Requirements and Technical Assessment Group (RTAG 12) on Collaborative Tools for the LHC. A group comprising representatives of the LHC collaborations, CERN IT and HR, and leading experts in the field of collaborative tools evaluated the requirements of the LHC, current practices, and expected future usage, in comparison...
    Go to contribution page
  395. Dr Andrea Dotti (Universitร  and INFN Pisa)
    16/02/2006, 14:00
    Event processing applications
    oral presentation
    ATLAS is one of the four experiments under construction along the Large Hadron Collider ring at CERN. During the last few years much effort has gone into carrying out test beam sessions that allowed the performance of ATLAS sub-detectors to be assessed. During data taking we started the development of a histogram display application designed to satisfy the needs of all ATLAS...
    Go to contribution page
  396. Jens Rehn (CERN)
    16/02/2006, 14:00
    Distributed Event production and processing
    oral presentation
    Distributed data management at LHC scales is a staggering task, accompanied by equally challenging practical management issues with storage systems and wide-area networks. The CMS data transfer management system, PhEDEx, is designed to handle this task with minimum operator effort, automating the workflows from large scale distribution of HEP experiment datasets down to reliable and scalable...
    Go to contribution page
  397. Dr Catalin Meirosu (CERN and "Politehnica" Bucharest)
    16/02/2006, 14:00
    Online Computing
    oral presentation
    The Trigger and Data Acquisition System of the ATLAS experiment is currently being installed at CERN. A significant amount of computing resources will be deployed in the Online computing system, in the close proximity of the ATLAS detector. More than 3000 high-performance computers will be supported by networks composed of about 200 Ethernet switches. The architecture of the networks was...
    Go to contribution page
  398. Iosif Legrand (CALTECH)
    16/02/2006, 14:00
    Computing Facilities and Networking
    oral presentation
    To satisfy the demands of data intensive grid applications it is necessary to move to far more synergetic relationships between applications and networks. The main objective of the VINCI project is to enable data intensive applications to efficiently use and coordinate shared, hybrid network resources, to improve the performance and throughput of global-scale grid systems, such as those...
    Go to contribution page
  399. Dr Giacomo Bruno (UCL, Louvain-la-Neuve, Belgium)
    16/02/2006, 14:18
    Event processing applications
    oral presentation
    At the end of 2004 CMS decided to redesign the software framework used for simulation and reconstruction. The new design includes a completely revisited event data model. This new software will be used in the first months of 2006 for the so called Magnet Test Cosmic Challenge (MTCC). The MTCC is a slice test in which a small fraction of all the CMS detection equipment is expected to be...
    Go to contribution page
  400. Mrs Mona Aggarwal (Imperial College London)
    16/02/2006, 14:20
    Grid middleware and e-Infrastructure operation
    oral presentation
    The LCG is an operational Grid currently running at 136 sites in 36 countries, offering its users access to nearly 14,000 CPUs and approximately 8PB of storage [1]. Monitoring the state and performance of such a system is challenging but vital to successful operation. In this context the primary motivation for this research is to analyze LCG performance by doing a statistical analysis of...
    Go to contribution page
  401. Marco La Rosa (University of Melbourne)
    16/02/2006, 14:20
    Distributed Event production and processing
    oral presentation
    In 2004 the Belle Experimental Collaboration reached a critical stage in their computing requirements. Due to an increased rate of data collection an extremely large amount of simulated (Monte Carlo) data was required to correctly analyse and understand the experimental data. The resulting simulation effort consumed more CPU power than was readily available to the experiment at the host...
    Go to contribution page
  402. Dr Mathias de Riese (DESY)
    16/02/2006, 14:20
    Computing Facilities and Networking
    oral presentation
    DESY is one of the world's leading centers for research with particle accelerators and synchrotron light. The computer center manages a data volume of the order of 1 PB and houses around 1000 CPUs. During DESY's engagement as a Tier-2 center for LHC experiments these numbers will at least double. In view of these increasing activities an improved fabric management infrastructure is being...
    Go to contribution page
  403. Dr Andy Buckley (Durham University), Andy Buckley (University of Cambridge)
    16/02/2006, 14:20
    Software Tools and Information Systems
    oral presentation
    Setting up the infrastructure to manage a software project can easily become more work than writing the software itself. A variety of useful open-source tools, such as Web-based viewers for version control systems, "wikis" for collaborative discussions and bug-tracking systems are available but their use in high-energy physics, outside large collaborations, is small. We introduce the...
    Go to contribution page
  404. Dr Ben Waugh (University College London)
    16/02/2006, 14:20
    Software Components and Libraries
    oral presentation
    A common problem in particle physics is the requirement to reproduce comparisons between data and theory when the theory is a (general purpose) Monte Carlo simulation and the data are measurements of final state observables in high energy collisions. The complexity of the experiments, the observables and the models all contribute to making this a highly non-trivial task. We describe an...
    Go to contribution page
  405. Mr Michael DePhillips (BROOKHAVEN NATIONAL LABORATORY)
    16/02/2006, 14:20
    Online Computing
    oral presentation
    The STAR experiment at Brookhaven National Laboratory's Relativistic Heavy-Ion Collider (RHIC) has accumulated hundreds of millions of events over its five-year running program. With a growing physics demand for statistics, STAR has more than doubled the number of events taken each year and is planning to increase its capability by an order of magnitude to reach billion-event...
    Go to contribution page
  406. Zdenek Maxa (University College London)
    16/02/2006, 14:36
    Event processing applications
    oral presentation
    We describe the design of Atlantis, an event visualisation program for the ATLAS experiment at CERN, and the other supporting applications within the visualisation project, mainly focusing on the technologies employed. The ATLAS visualisation consists of several parts with Atlantis being the central application. The main purpose of Atlantis is to help visually investigate and intuitively...
    Go to contribution page
  407. Lassi Tuura (NORTHEASTERN UNIVERSITY, BOSTON, MA, USA)
    16/02/2006, 14:40
    Distributed Event production and processing
    oral presentation
    The most significant data challenge for CMS in 2005 has been the LCG service challenge 3 (SC3). For CMS the main purpose of the challenge was to exercise a realistic LHC startup scenario using the complete experiment system, as far as transferring and serving data, submitting jobs and collecting their output are concerned, employing the next-generation world-wide LHC computing service. A number of...
    Go to contribution page
  408. Mr Piotr Golonka (INP Cracow, CERN)
    16/02/2006, 14:40
    Software Components and Libraries
    oral presentation
    Solving the 'simulation=experiment' equation, which is the ultimate task of every HEP experiment, becomes impossible without computer simulation techniques. HEP Monte Carlo simulations, traditionally written as FORTRAN codes, became complex computational projects: their rich physical content needs to be matched with the software organization of the experimental collaborations to make them...
    Go to contribution page
  409. Mr Philippe Galvez (California Institute of Technology (CALTECH))
    16/02/2006, 14:40
    Software Tools and Information Systems
    oral presentation
    During this session we will describe and demonstrate the MonALISA (MONitoring Agents using A Large Integrated Services Architecture) and the new enhanced VRVS (Virtual Room Videoconferencing System) systems, and their integration to provide a next-generation collaboration system called EVO. The melding of these two systems creates a distributed intelligent system that provides an...
    Go to contribution page
  410. Piotr Golonka (CERN, IT/CO-BE)
    16/02/2006, 14:40
    Online Computing
    oral presentation
    The control systems of the LHC experiments are built using the common commercial product: PVSS II (from the ETM company). The JCOP Framework Project delivers a set of common tools built on top of, or extending the functionality of, PVSS (such as the control for widely used hardware, a Finite State Machine (FSM) toolkit, access control management, cooling and ventilation application)...
    Go to contribution page
  411. Mr Dirk Jahnke-Zumbusch (DESY)
    16/02/2006, 14:40
    Computing Facilities and Networking
    oral presentation
    DESY operates some thousand computers based on different operating systems. On servers and workstations, not only the operating systems but also many centrally supported software systems are in use. Most of these operating and software systems come with their own user and account management tools. Typically they do not know of each other, which makes life harder for users who have...
    Go to contribution page
  412. Dr Dirk Pleiter (DESY)
    16/02/2006, 14:40
    Grid middleware and e-Infrastructure operation
    oral presentation
    Numerical simulations of QCD formulated on the lattice (LQCD) require a huge amount of computational resources. Grid technologies can help to improve exploitation of these precious resources, e.g. by sharing the produced data on a global level. The International Lattice DataGrid (ILDG) has been founded to define the required standards needed for a grid infrastructure to be used for...
    Go to contribution page
  413. Prof. Stephen Watts (Brunel University)
    16/02/2006, 14:54
    Event processing applications
    oral presentation
    Visualisation of data in particle physics currently involves event displays, histograms and scatterplots. Since 1975 there has been an explosion of techniques for data visualisation driven by highly interactive computer systems and ideas from statistical graphics. This field has been driven by demands for data mining of large databases and genomics. Two key areas are direct manipulation of...
    Go to contribution page
  414. Dr Gene VAN BUREN (BROOKHAVEN NATIONAL LABORATORY)
    16/02/2006, 15:00
    Software Tools and Information Systems
    oral presentation
    Samples of data acquired by the STAR Experiment at RHIC are examined at various stages of processing for quality assurance (QA) purposes. As STAR continues to mature and utilize new hardware and software, it remains imperative to the experiment to work cohesively to ensure the quality of STAR data so that the collaboration may continue to produce many new physics results in the efficient...
    Go to contribution page
  415. Wim Lavrijsen (LBNL)
    16/02/2006, 15:00
    Software Components and Libraries
    oral presentation
    Eclipse is a popular, open source, development platform and application framework. It provides extensible tools and frameworks that span the complete software development lifecycle. Plugins exist for all the major parts that today make up the physicist software toolkit in ATLAS: programming environments/editors for C++ and python, browsers for CVS and SVN, networking with ssh and sftp,...
    Go to contribution page
  416. Go Iwai (JST)
    16/02/2006, 15:00
    Grid middleware and e-Infrastructure operation
    oral presentation
    A new project for advanced simulation technology in radiotherapy was launched in Oct. 2003 with funding from JST (Japan Science and Technology Agency) in Japan. The project's aim is to develop an ample set of simulation packages for radiotherapy based on Geant4, in collaboration between Geant4 developers and medical users. They need much more computing power and strong security for accurate and...
    Go to contribution page
  417. Dr Suchandra Dutta (Scuola Normale Superiore, INFN, Pisa), Dr Vincenzo Chiochia (University of Zurich)
    16/02/2006, 15:12
    Event processing applications
    oral presentation
    The CMS silicon tracker, consisting of about 17,000 detector modules divided into micro-strip and pixel sensors, will be the largest silicon tracker ever realized for high energy physics experiments. The detector performance will be monitored using applications based on the CMS Data Quality Monitoring (DQM) framework and running on the High-Level Trigger Farm as well as local DAQ systems....
    Go to contribution page
  418. Wolfgang Von Rueden (CERN)
    16/02/2006, 17:00
    Plenary
    oral presentation
  419. Dr Randall Sobie (Univeristy of Victoria)
    17/02/2006, 10:00
    Plenary
    oral presentation
  420. Lalitesh Kathragadda (Google India)
    17/02/2006, 10:15
    Plenary
    oral presentation
  421. Anirban Chakrabarti (Infosys)
    17/02/2006, 10:45
    Plenary
    oral presentation
    Grid computing technologies are transforming scientific and enterprise computing in a big way. Especially in verticals such as life sciences, energy and finance, there is tremendous pressure to reduce cost and enhance productivity. The Grid allows linking up the processors, storage and/or memory of many distributed computers to make more efficient use of all available computing...
    Go to contribution page
  422. Dr Beat Jost (CERN)
    17/02/2006, 11:15
    Plenary
    oral presentation
  423. Dr Gabriele Cosmo (CERN)
    17/02/2006, 11:35
    Plenary
    oral presentation
  424. Dr Lorenzo Moneta (CERN)
    17/02/2006, 11:55
    Plenary
    oral presentation
  425. 17/02/2006, 12:40
    Plenary
    oral presentation
    Welcome by Director, TIFR
    Address by Governor, Maharashtra
    National Anthem
    Go to contribution page
  426. 17/02/2006, 12:55
    Plenary
    oral presentation
  427. Dr Andreas Pfeiffer (CERN)
    17/02/2006, 14:30
    Plenary
    oral presentation
  428. Dr Simon Lin (Academia Sinica Grid Computing Centre)
    17/02/2006, 14:50
    Plenary
    oral presentation
  429. Mr Markus Schulz (CERN)
    17/02/2006, 15:10
    Plenary
    oral presentation
  430. Dr Gavin McCance (CERN)
    17/02/2006, 15:30
    Plenary
    oral presentation
  431. Fons Rademakers (CERN)
    17/02/2006, 15:50
    Plenary
    oral presentation
  432. Prof. A. S. Kolaskar (Pune University), Alberto Santoro (Instituto de Fisica), Dr D. P. S. Seth (Telecom Regulatory Authority of India), Prof. Harvey B Newman (CALIFORNIA INSTITUTE OF TECHNOLOGY), Dr S. Ramakrishnan (CDAC, Pune), Viatcheslav Ilin (Moscow State University)
    17/02/2006, 17:00
    Plenary
    oral presentation
  433. Markus Elsing (CERN)
    Event processing applications
    oral presentation
    Over the past 3 years the ATLAS Inner Detector reconstruction software has undergone a major redesign based on the recommendations of an internal review in spring 2003. The new track reconstruction infrastructure is characterized by: - a standardized ATLAS geometry model - a common track reconstruction data model - a suite of common extrapolation, track fitting, vertexing and pattern...
    Go to contribution page
  434. Dr marc dobson (CERN)
    Online Computing
    poster
    The ATLAS TDAQ System will be composed of 3000 processors with a few processes per processor. The Process Manager component of the TDAQ software is responsible for launching and controlling these processes. The main requirements are for robustness, availability and recoverability of the system, as well as the possibility of full launch, control and monitoring of the TDAQ processes. This...
    Go to contribution page
  435. Dr Francesco Delli Paoli (INFN Padova)
    Distributed Event production and processing
    oral presentation
    The improvements of the peak instantaneous luminosity of the Tevatron Collider will give CDF up to 2 fb-1 of new data every year, forcing the collaboration to increase proportionally the amount of Monte Carlo data it produces. This is in turn forcing the CDF collaboration to move beyond the dedicated resources it is using today and start exploiting Grid resources. Monte Carlo production...
    Go to contribution page
  436. Stephane Willocq (University of Massachusetts)
    Event processing applications
    poster
    The ATLAS detector, currently being installed at CERN, is designed to make precise measurements of 14 TeV proton-proton collisions at the LHC, starting in 2007. Arguably the clearest signatures for new physics, including the Higgs boson and supersymmetry, will involve the production of isolated final-state muons. The identification and precise reconstruction of muons are performed using a...
    Go to contribution page
  437. Plenary
    oral presentation
  438. Dr Maarten Ballintijn (MIT)
    Distributed Data Analysis
    oral presentation
    The Parallel ROOT Facility, PROOF, allows one to analyze and understand very large data sets on an interactive time scale. It makes use of the inherent parallelism in event data and implements an architecture that optimizes I/O and CPU utilization in heterogeneous clusters with distributed storage. We will present our experiences in using a very large PROOF cluster in production for the...
    Go to contribution page
  439. Stephane Willocq (University of Massachusetts)
    Event processing applications
    poster
    The Muon Spectrometer for the ATLAS experiment at the LHC is designed to identify muons with transverse momentum greater than 3 GeV/c and measure muon momenta with high precision up to the highest momenta expected at the LHC. The 50-micron sagitta resolution translates into a transverse momentum resolution of 10% for muon transverse momenta of 1 TeV/c. Precise tracking is provided by...
    Go to contribution page
  440. W. E. Brown (FERMILAB)
    Software Components and Libraries
    oral presentation
    As an active participant in the international C++ standardization effort, Fermilab has contributed significant expertise toward the analysis and design of a random-number facility suitable for incorporation into the forthcoming update to the C++ standard. A first version of this design has been promulgated as part of a recently-approved Technical Report issued by the C++ Working Group of...
    Go to contribution page
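    The random-number design described above separates a raw bit-generating engine from the distribution that shapes its output; that TR1 design was later carried into C++11's <random> header. A minimal sketch of the engine/distribution split, using the C++11 descendants of the TR1 facility (the names std::mt19937 and std::uniform_int_distribution are the standard-library successors, not taken from the contribution itself):

    ```cpp
    #include <cassert>
    #include <random>

    int main() {
        // Engine: produces a reproducible stream of raw random bits.
        std::mt19937 engine(42);  // Mersenne Twister with a fixed seed

        // Distribution: maps raw bits onto the desired range/shape.
        std::uniform_int_distribution<int> die(1, 6);

        for (int i = 0; i < 1000; ++i) {
            int v = die(engine);
            assert(v >= 1 && v <= 6);  // always within the requested range
        }

        // The same seed yields the same stream -- essential for
        // reproducible physics simulation.
        assert(std::mt19937(42)() == std::mt19937(42)());
        return 0;
    }
    ```

    Keeping engines and distributions orthogonal is the design point: any engine can drive any distribution, so simulation code can swap generators without touching the sampling logic.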
  441. Mr Olivier Martin (CERN, on pre-retirement program until July 2006)
    Computing Facilities and Networking
    oral presentation
    The ongoing evolution from packet-based networks to hybrid networks in research & education (R&E) networks, or: what are the fundamental reasons behind the growing gap between commercial and R&E networks? As exemplified by the Internet2 HOPI initiative (http://networks.internet2.edu/hopi/), the new GEANT2 backbone (http://www.dante.net/server/show/nav.00100f00d) and projects such as...
    Go to contribution page