2–9 Sept 2007
Victoria, Canada
Times displayed in the Europe/Zurich timezone
Please book accommodation as soon as possible.

Session

Poster 2

PT2
5 Sept 2007, 08:00
Victoria, Canada


  1. Marco Mambelli (University of Chicago)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
A Data Skimming Service (DSS) is a site-level service for rapid event filtering and selection from locally resident datasets, based on metadata queries to associated "tag" databases. In US ATLAS, we expect most if not all of the AOD-based datasets to be replicated to each of the five Tier 2 regional facilities in the US Tier 1 "cloud" coordinated by Brookhaven National Laboratory. ...
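The filtering step described in the abstract can be sketched with a toy tag database: select event identifiers by a metadata query, then skim only the matching events from the local dataset. The table layout, attribute names (`n_jets`, `missing_et`) and cuts below are invented for illustration; this is not the actual DSS code.

```python
# Toy tag-based skim: a metadata query against a "tag" table selects event
# IDs, and only those events are read from the locally resident dataset.
import sqlite3

def build_tag_db(rows):
    """rows: iterable of (event_id, n_jets, missing_et) -- hypothetical tags."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE tag (event_id INTEGER, n_jets INTEGER, missing_et REAL)")
    db.executemany("INSERT INTO tag VALUES (?, ?, ?)", rows)
    return db

def skim(db, events, predicate="n_jets >= 2 AND missing_et > 30.0"):
    """Return only the events whose tag record passes the metadata query."""
    wanted = {eid for (eid,) in db.execute(
        "SELECT event_id FROM tag WHERE " + predicate)}
    return [ev for ev in events if ev["id"] in wanted]

db = build_tag_db([(1, 3, 45.0), (2, 1, 80.0), (3, 4, 12.0)])
events = [{"id": 1}, {"id": 2}, {"id": 3}]
print(skim(db, events))  # only event 1 passes both cuts
```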
  2. Marco Cecchi (INFN-CNAF)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
Since the beginning, one of the design guidelines for the Workload Management System currently included in the gLite middleware was flexibility with respect to the deployment scenario: the WMS has to work correctly and efficiently in any configuration, whether centralized, decentralized or, eventually, even peer-to-peer. Yet the preferred deployment solution is to concentrate the workload...
  3. Dr Torsten Harenberg (University of Wuppertal)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    Today, one of the major challenges in science is the processing of large datasets. The LHC experiments will produce an enormous amount of results that are stored in databases or files. These data are processed by a large number of small jobs that read only chunks. Existing job monitoring tools inside the LHC Computing Grid (LCG) provide just limited functionality to the user. These...
4. Dr Silvio Pardi (University of Naples "Federico II" - C.S.I. and INFN)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
The user interface is a crucial service for guaranteeing Grid accessibility. The goal is to implement an environment able to hide the Grid's complexity and offer a familiar interface to the end user. Many graphical interfaces have been proposed to simplify Grid access, but the GUI approach appears not very congenial to UNIX developers and...
  5. Valentin Kuznetsov (Cornell University)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
    The CMS Dataset Bookkeeping System (DBS) search page is a web-based application used by physicists and production managers to find data from the CMS experiment. The main challenge in the design of the system was to map the complex, distributed data model embodied in the DBS and the Data Location Service (DLS) to a simple, intuitive interface consistent with the mental model...
  6. Mr Giacinto Piacquadio (Physikalisches Institut - Albert-Ludwigs-Universität Freiburg)
    05/09/2007, 08:00
    Event Processing
    poster
    A new inclusive secondary vertexing algorithm which exploits the topological structure of weak b- and c-hadron decays inside jets is presented. The primary goal is the application to b-jet tagging. The fragmentation of a b-quark results in a decay chain composed of a secondary vertex from the weakly decaying b-hadron and typically one or more tertiary vertices from c-hadron decays. The...
  7. Dr Sebastien Incerti (CENBG-IN2P3)
    05/09/2007, 08:00
    Event Processing
    poster
    Detailed knowledge of the microscopic pattern of energy deposition related to the particle track structure is required to study radiation effects in various domains, like electronics, gaseous detectors or biological systems. The extension of Geant4 physics down to the electronvolt scale requires not only new physics models, but also adequate design technology. For this purpose a...
  8. Michele Pioppi (CERN)
    05/09/2007, 08:00
    Event Processing
    poster
    In the CMS software, a dedicated electron track reconstruction algorithm, based on a Gaussian Sum Filter (GSF), is used. This algorithm is able to follow an electron along its complete path up to the electromagnetic calorimeter, even in the case of a large amount of Bremsstrahlung emission. Because of the significant CPU consumption of this algorithm, however, it can be run only on a...
9. Mr Pablo Martinez (Instituto de Física de Cantabria)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
A precise alignment of the Muon System is one of the requirements for CMS to reach its expected performance and cover its physics program. A first prototype of the software and computing tools to achieve this goal was successfully tested during CSA06, the Computing, Software and Analysis Challenge in 2006. Data were exported from the Tier-0 to Tier-1 and Tier-2 centres, where the alignment software...
  10. Dr Josva Kleist (Nordic Data Grid Facility)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    AliEn or Alice Environment is the Gridware developed and used within the ALICE collaboration for storing and processing data in a distributed manner. ARC (Advanced Resource Connector) is the Grid middleware deployed across the Nordic countries and gluing together the resources within the Nordic Data Grid Facility (NDGF). In this paper we will present our approach to integrate AliEn and...
  11. Mr Luca Magnoni (INFN-CNAF)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
In a Grid environment the naming capability allows users to refer to specific data resources in a physical storage system using a high-level logical identifier. This logical identifier is typically organized in a file-system-like structure, a hierarchical tree of names. Storage Resource Manager (SRM) services map the logical identifier to the physical location of data by evaluating a set of...
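The logical-to-physical name resolution described in the abstract can be illustrated with a minimal ordered rule table: the longest matching logical prefix wins, and the remainder of the path is appended to the physical target. The rule format, hostnames and paths below are hypothetical, not the actual SRM implementation.

```python
# Toy namespace resolution: map a hierarchical logical file name (LFN) to a
# physical storage URL by longest-prefix match over a set of invented rules.
RULES = [
    ("/grid/lhcb/user/", "srm://disk01.example.org/pool/user/"),
    ("/grid/lhcb/",      "srm://tape01.example.org/archive/lhcb/"),
]

def resolve(lfn):
    """Return the physical URL for a logical name via longest-prefix match."""
    for prefix, target in sorted(RULES, key=lambda r: -len(r[0])):
        if lfn.startswith(prefix):
            return target + lfn[len(prefix):]
    raise KeyError("no rule matches " + lfn)

print(resolve("/grid/lhcb/user/a/alice/data.root"))
# srm://disk01.example.org/pool/user/a/alice/data.root
```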
  12. Stephane Chauvie (INFN Genova)
    05/09/2007, 08:00
    Event Processing
    poster
An original model is presented for the simulation of the energy loss of negatively charged hadrons: it calculates the stopping power by regarding the target atoms as an ensemble of quantum harmonic oscillators. This approach makes it possible to account for charge-dependent effects in the stopping power, which are relevant at low energy: the differences between the stopping powers of positive and...
  13. Dr Jerome Lauret (BROOKHAVEN NATIONAL LABORATORY)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
Secure access to computing facilities has increasingly demanded practical tools, as the world of cyber-security infrastructure has changed the landscape to access control via gatekeepers or gateways. However, the advent of two-factor authentication (SSH keys, for example), preferred over simpler Unix-based logins, has introduced the challenging task of managing private keys and its...
  14. Dr Josva Kleist (Nordic Data Grid Facility)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    The Nordic Data Grid Facility (NDGF) consists of Grid resources running ARC middleware in Scandinavia and other countries. These resources serve many virtual organisations and contribute a large fraction of total worldwide resources for the ATLAS experiment, whose data is distributed and managed by the DQ2 software. Managing ATLAS data within NDGF and between NDGF and other Grids used by...
  15. Rolf Seuster (University of Victoria)
    05/09/2007, 08:00
    Event Processing
    poster
The ATLAS Liquid Argon Calorimeter consists of precision electromagnetic accordion calorimeters in the barrel and endcaps, hadronic calorimeters in the endcaps, and calorimeters in the forward region. The initial high energy collision data at the LHC experiments is expected in the spring of 2008. While tools for the reconstruction of the calorimeter data are quite developed through years...
  16. Dr Daniela Rebuzzi (INFN, Sezione di Pavia), Dr Nectarios Benekos (Max-Planck-Institut fur Physik)
    05/09/2007, 08:00
    Event Processing
    poster
The ATLAS detector, currently being installed at CERN, is designed to make precise measurements of 14 TeV proton-proton collisions at the LHC, starting in 2007. Arguably the clearest signatures for new physics, including the Higgs boson and supersymmetry, will involve the production of isolated final-state muons. The identification and precise reconstruction of muons are performed using...
  17. Dr Ricardo Vilalta (University of Houston)
    05/09/2007, 08:00
    Event Processing
    poster
    Advances in statistical learning have placed at our disposal a rich set of classification algorithms (e.g., neural networks, decision trees, Bayesian classifiers, support vector machines, etc.) with little or no guidelines on how to select the analysis technique most appropriate for the task at hand. In this paper we present a new approach for the automatic selection of predictive models...
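The model-selection problem described in the abstract can be sketched as a cross-validation contest between candidate classifiers: score each model on held-out folds and keep the winner. The two toy models and the Gaussian dataset below are invented for illustration; the paper's actual approach is not reproduced here.

```python
# Toy automatic model selection: pick, by k-fold cross-validation, whichever
# candidate classifier scores best on the data at hand.
import random

def majority(train, _x):
    """Baseline: always predict the most frequent training label."""
    labels = [y for _, y in train]
    return max(set(labels), key=labels.count)

def nearest_mean(train, x):
    """Predict the class whose training mean is closest to x."""
    means = {}
    for label in {y for _, y in train}:
        pts = [v for v, y in train if y == label]
        means[label] = sum(pts) / len(pts)
    return min(means, key=lambda l: abs(means[l] - x))

def cv_score(model, data, folds=5):
    """Mean accuracy of `model` over k disjoint train/test folds."""
    score = 0.0
    for k in range(folds):
        test = data[k::folds]
        train = [d for i, d in enumerate(data) if i % folds != k]
        score += sum(model(train, x) == y for x, y in test) / len(test)
    return score / folds

random.seed(1)
data = [(random.gauss(0, 1), "bkg") for _ in range(50)] + \
       [(random.gauss(3, 1), "sig") for _ in range(50)]
random.shuffle(data)
best = max([majority, nearest_mean], key=lambda m: cv_score(m, data))
print(best.__name__)  # nearest_mean separates the two Gaussians far better
```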
  18. Michal Kwiatek (CERN)
    05/09/2007, 08:00
    Collaborative tools
    poster
The digitization of CERN audio-visual archives, a major task currently in progress, will generate over 40 TB of video, audio and photo files. Storing these files is one issue, but a far more important challenge is to provide long-term coherence of the archive and to make these files available online with minimum manpower investment. An infrastructure, based on standard CERN...
  19. Dr Andrew McNab (University of Manchester)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
GridSite has extended the industry-standard Apache webserver for use within Grid projects, by adding support for Grid security credentials such as GSI and VOMS. With the addition of the GridHTTP protocol for bulk file transfer via HTTP and the development of a mapping between POSIX filesystem operations and HTTP requests, we have extended the scope of GridSite into bulk data transfer and...
  20. Dr Douglas Benjamin (Duke University)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
The CDF experiment at Fermilab produces Monte Carlo data files using computing resources on both the Open Science Grid (OSG) and LHC Computing Grid (LCG) grids. The data produced must be brought back to Fermilab for archival storage. In the past CDF produced Monte Carlo data on dedicated computer farms throughout the world. The data files were copied directly from the worker nodes to...
  21. Dr Daniele Bonacorsi (INFN-CNAF, Bologna, Italy)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
The CMS experiment operated a Computing, Software and Analysis Challenge in 2006 (CSA06). This activity is part of the constant work of CMS on computing challenges of increasing complexity to demonstrate the capability to deploy and operate a distributed computing system at the desired scale in 2008. The CSA06 challenge was a 25% exercise, and included several workflow elements: event...
  22. Dr Andreas Nowack (III. Physikalisches Institut (B), RWTH Aachen)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
In Germany, several university institutes and research centres take part in the CMS experiment. For data analysis, computing centres at different Tier levels, ranging from Tier 1 to Tier 3, exist at these places. The German Tier 1 centre GridKa at the research centre in Karlsruhe serves all four LHC experiments as well as four non-LHC experiments. With respect...
  23. Prof. Alexander Read (University of Oslo, Department of Physics)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
Computing and storage resources connected by the NorduGrid ARC middleware in the Nordic countries, Switzerland and Slovenia are a part of the ATLAS computing grid. This infrastructure is being commissioned with the ongoing ATLAS Monte Carlo simulation production in preparation for the commencement of data taking in late 2007. The unique non-intrusive architecture of ARC, its...
  24. Prof. Richard McClatchey (UWE)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
We introduce the concept, design and deployment of the DIANA meta-scheduling approach to the data analysis challenges faced by the CERN experiments. The DIANA meta-scheduler supports data-intensive bulk scheduling, is network aware, and follows a policy-centric meta-scheduling approach that will be explained in some detail. In this paper, we describe a Physics analysis case...
  25. Dr Domenico Giordano (Dipartimento Interateneo di Fisica)
    05/09/2007, 08:00
    Event Processing
    poster
The CMS Silicon Strip Tracker (SST), consisting of more than 10 million channels, is organized in about 16,000 detector modules and is the largest silicon strip tracker ever built for high energy physics experiments. In the first half of 2007 the CMS SST project is facing the important milestone of commissioning and testing a quarter of the entire SST with cosmic muons. The full...
  26. Mr Tigran Mkrtchyan Mkrtchyan (Deutsches Elektronen-Synchrotron DESY)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
Starting June 2007, all WLCG data management services have to be ready and prepared to move terabytes of data from CERN to the Tier 1 centers worldwide, and from the Tier 1s to their corresponding Tier 2s. Reliable file transfer services, like FTS, on top of the SRM v2.2 protocol are playing a major role in this game. Nevertheless, moving large chunks of data is only part of the...
  27. Mr Enrico Fattibene (INFN-CNAF, Bologna, Italy), Mr Giuseppe Misurelli (INFN-CNAF, Bologna, Italy)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
A monitoring tool for complex Grid systems can gather a huge amount of information that has to be presented to the users in the most comprehensive way. Moreover, different types of consumers could be interested in inspecting and analyzing different subsets of data. The main goal in designing a Web interface for the presentation of monitoring information is to organize the huge amount of...
  28. Dr Ricardo Graciani Diaz (Universidad de Barcelona)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
DIRAC Services and Agents are defined in the context of the DIRAC system (LHCb's Grid Workload and Data Management system), and how they cooperate to build functional sub-systems is presented. How the Services and Agents are built from the low-level DIRAC framework tools is described. Practical experience with the LHCb production system has directed the creation of the current DIRAC...
  29. Mr Adrian Casajus Ramo (Universitat de Barcelona)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
The DIRAC system is made of a number of cooperating Services and Agents that interact with each other in a client-server architecture. All DIRAC components rely on a low-level framework that provides the necessary basic functionality. In the current version of DIRAC these components have been identified as: DISET, the secure communication protocol for remote procedure call and file...
  30. Gianluca Castellani (European Organization for Nuclear Research (CERN))
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
LHCb accesses the Grid through DIRAC, its Workload and Data Management system. In DIRAC all the jobs are stored in central task queues and then pulled onto worker nodes via generic Grid jobs called Pilot Agents. These task queues are characterized by different requirements on CPU time and destination. Because the whole LHCb community is divided into sets of physicists, developers,...
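The pull model described in the abstract can be sketched in a few lines: jobs wait in a central priority queue, and a generic pilot running on a worker node pulls the highest-priority job whose requirements its resources satisfy. The field names (`cpu_time`, `destination`, `site`) and the matching rule are invented for illustration, not taken from DIRAC.

```python
# Toy central task queue with pilot-agent pull: a pilot reports its node's
# resources and receives the best job that node can actually run.
import heapq

class TaskQueue:
    def __init__(self):
        self._heap = []   # entries: (negative priority, insertion order, job)
        self._n = 0

    def submit(self, job, priority=0):
        heapq.heappush(self._heap, (-priority, self._n, job))
        self._n += 1

    def pull(self, resources):
        """Called by a pilot agent: return the best matching job, or None."""
        deferred = []
        try:
            while self._heap:
                entry = heapq.heappop(self._heap)
                job = entry[2]
                if job["cpu_time"] <= resources["cpu_time"] and \
                   job["destination"] in (None, resources["site"]):
                    return job
                deferred.append(entry)   # requirements not met on this node
        finally:
            for e in deferred:           # put skipped jobs back in the queue
                heapq.heappush(self._heap, e)
        return None

q = TaskQueue()
q.submit({"name": "reco", "cpu_time": 80, "destination": "CERN"}, priority=5)
q.submit({"name": "user", "cpu_time": 10, "destination": None}, priority=1)
job = q.pull({"site": "PIC", "cpu_time": 100})
print(job["name"])  # "reco" requires CERN, so the PIC pilot gets "user"
```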
  31. Dr Andrei Tsaregorodtsev (CNRS-IN2P3-CPPM, Marseille)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    The DIRAC system was developed in order to provide a complete solution for using distributed computing resources of the LHCb experiment at CERN for data production and analysis. It allows a concurrent use of over 10K CPUs and 10M file replicas distributed over many tens of sites. The sites can be part of a computing grid such as WLCG or standalone computing clusters all integrated in a...
  32. Andrew Cameron Smith (CERN)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    DIRAC, LHCb’s Grid Workload and Data Management System, utilises WLCG resources and middleware components to perform distributed computing tasks satisfying LHCb’s Computing Model. The Data Management System (DMS) handles data transfer and data access within LHCb. Its scope ranges from the output of the LHCb Online system to Grid-enabled storage for all data types. It supports metadata for...
  33. Dr Julius Hrivnac (LAL)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
The LCG experiments will hold large amounts of data in relational databases. Those data will be spread over many sites (Grid or not). Fast and easy access will be required not only from batch processing jobs, but also from interactive analysis. While many systems have been proposed and developed for access to file-based data in the distributed environment, methods of efficient access...
  34. Lana Abadie (CERN)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    The DPM (Disk Pool Manager) provides a lightweight and scalable managed disk storage system. In this paper, we describe the new features of the DPM. It is integrated in the grid middleware and is compatible with both VOMS and grid proxies. Besides the primary/secondary groups (or roles), the DPM supports ACLs adding more flexibility in setting file permissions. Tools ...
  35. Mr Claude Charlot (Ecole Polytechnique)
    05/09/2007, 08:00
    Event Processing
    poster
    We describe the strategy developed for electron reconstruction in CMS. Emphasis is put on isolated electrons and on recovering the bremsstrahlung losses due to the presence of the material before the ECAL. Following the strategy used for the high level triggers, a first filtering is obtained building seeds from the clusters reconstructed in the ECAL. A dedicated trajectory building is...
  36. Dr Vincenzo Ciaschini (INFN CNAF)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
While starting to use the grid in production, applications have begun to demand the implementation of complex policies regarding the use of resources. Some want to divide their users into different priority brackets and classify the resources in different classes, while others content themselves with considering all users and resources equal. Resource managers have to work on enabling...
  37. Mr Joel Closier (CERN)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    The LHCb experiment has chosen to use the SAM framework (Service Availability Monitoring Environment) provided by the WLCG developers to make extensive tests of the LHCb environment at all the accessible grid resources. The availability and the proper definition of the local Computing and Storage Elements, user interfaces as well as the WLCG software environment are checked. The same...
38. Mr Sergey Gorbunov (GSI), Dr Alexander Glazov (DESY)
    05/09/2007, 08:00
    Event Processing
    poster
    Stand-alone event reconstruction was developed for the Forward and the Backward Silicon Trackers of the H1 experiment at HERA. The reconstruction module includes the pattern recognition algorithm, a track fitter and primary vertex finder. The reconstruction algorithm shows high efficiency and speed. The detector alignment was performed to within an accuracy of 10 um which...
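The track-fitter stage mentioned in the abstract can be illustrated with the simplest possible case: a linear least-squares fit of a straight line to the (z, x) hit positions that pattern recognition has grouped into a track candidate. The hit values are invented; this is not the H1 reconstruction code.

```python
# Minimal straight-line track fit: fit x = a + b*z to a list of hits by
# linear least squares and recover the intercept a and slope b.
def fit_line(hits):
    """hits: list of (z, x) positions; returns (a, b) for x = a + b*z."""
    n = len(hits)
    sz = sum(z for z, _ in hits)
    sx = sum(x for _, x in hits)
    szz = sum(z * z for z, _ in hits)
    szx = sum(z * x for z, x in hits)
    b = (n * szx - sz * sx) / (n * szz - sz * sz)   # slope
    a = (sx - b * sz) / n                           # intercept
    return a, b

# Hits generated exactly from x = 1.0 + 0.5*z, so the fit recovers (1.0, 0.5).
hits = [(z, 1.0 + 0.5 * z) for z in (10, 20, 30, 40)]
a, b = fit_line(hits)
print(a, b)  # 1.0 0.5
```

In a real tracker each hit would carry a measurement error, turning this into a weighted fit, and the vertex finder would then combine several fitted tracks.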
  39. Mr Trunov Artem (CC-IN2P3 (Lyon) and EKP (Karlsruhe))
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
We present our experience in setting up an xrootd storage cluster at CC-IN2P3, an LCG Tier-1 computing centre. The solution consists of an xrootd storage cluster made of NAS boxes and includes an interface to dCache/SRM and the Mass Storage System. A feature of this system is the integration of PROOF to facilitate analysis. The setup makes it possible to take advantage of a reduced administrative burden,...
  40. Ludek Matyska (CESNET)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
Grid middleware stacks, including gLite, have matured to the point of being able to process up to millions of jobs per day. Logging and Bookkeeping, the gLite job-tracking service, keeps pace with this rate; however, it is not designed to provide a long-term archive of executed jobs. ATLAS, representative of a large user community, addresses this issue with its own job catalogue (prodDB)....
  41. Mr Kyu Park (Department of Electrical and Computer Engineering, University of Florida)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
    A primary goal of the NSF-funded UltraLight Project is to expand existing data-intensive grid computing infrastructures to the next level by enabling a managed network that provides dynamically constructed end-to-end paths (optically or virtually, in whole or in part). Network bandwidth used to be the primary limiting factor, but with the recent advent of 10Gb/s network paths end-to-end,...
  42. Ms Ying Ying Li (University of Cambridge)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    The DIRAC workload-management system of the LHCb experiment allows coordinated use of globally distributed computing power and data storage. The system was initially deployed only on Linux platforms, where it has been used very successfully both for collaboration-wide production activities and for single- user physics studies. To increase the resources available to LHCb, DIRAC has...
  43. Dr Klaus Goetzen (GSI Darmstadt)
    05/09/2007, 08:00
    Event Processing
    poster
    As one of the primary experiments to be located at the new Facility for Antiproton and Ion Research in Darmstadt the PANDA experiment aims for high quality hadron spectroscopy from antiproton proton collisions. The versatile and comprehensive projected physics program requires an elaborate detector design. The detector for the PANDA experiment will be a very complex machine consisting of...
  44. Dr Manuel Venancio Gallas Torreira (CERN)
    05/09/2007, 08:00
    Event Processing
    poster
    Based on the ATLAS TileCal 2002 test-beam setup example, we present here the technical, software aspects of a possible solution to the problem of using two different simulation engines, like Geant4 and Fluka, with the common geometry and digitization code. The specific use case we discuss here, which is probably the most common one, is when the Geant4 application is already implemented....
  45. Mr Edmund Widl (Institut für Hochenergiephysik (HEPHY Vienna))
    05/09/2007, 08:00
    Event Processing
    poster
    The Kalman alignment algorithm (KAA) has been specifically developed to cope with the demands that arise from the specifications of the CMS Tracker. The algorithmic concept is based on the Kalman filter formalism and is designed to avoid the inversion of large matrices. Most notably, the KAA strikes a balance between conventional global and local track-based alignment algorithms, by...
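The key point of the abstract, avoiding the inversion of large matrices by updating the alignment estimate track by track with the Kalman filter formalism, can be shown in a one-parameter toy where every Kalman "matrix" is a scalar. The noise values and the single-offset model are invented; this is not the KAA itself.

```python
# Toy sequential alignment: estimate one sensor offset from per-track
# residuals z_k = offset + noise, one Kalman update per track, so no large
# global matrix is ever built or inverted.
import random

def kalman_align(residuals, sigma2=0.04, p0=1.0):
    """Return (estimate, variance) after processing all track residuals."""
    x, p = 0.0, p0                # state estimate and its variance
    for z in residuals:
        k = p / (p + sigma2)      # gain: a scalar division, no big inverse
        x = x + k * (z - x)       # update the estimate with this residual
        p = (1 - k) * p           # shrink the uncertainty
    return x, p

random.seed(7)
true_offset = 0.3
residuals = [true_offset + random.gauss(0, 0.2) for _ in range(500)]
x, p = kalman_align(residuals)
print(round(x, 2))  # converges near the true 0.3 offset
```

With many alignment parameters the state becomes a vector and the gain a small matrix, but the update remains per-track, which is the balance between global and local methods the abstract refers to.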
  46. Remi Mollon (CERN)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    GFAL, or Grid File Access Library, is a C library developed by LCG to give a uniform POSIX interface to local and remote Storage Elements on the Grid. LCG-Util is a set of tools to copy/replicate/delete files and register them in a Grid File Catalog. In order to match experiment requirements, these two components had to evolve. Thus, the new Storage ...
  47. Ted Hesselroth (Fermi National Accelerator Laboratory)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    gPlazma is the authorization mechanism for the distributed storage system dCache. Clients are authorized based on a grid proxy and may be allowed various privileges based on a role contained in the proxy. Multiple authorization mechanisms may be deployed through gPlazma, such as legacy dcache-kpwd, grid-mapfile, grid-vorolemap, or GUMS. Site-authorization through SAZ is also supported....
  48. Mr Laurence Field (CERN)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
Grid Information Systems are mission-critical components for production grid infrastructures. They provide detailed information which is needed for the optimal distribution of jobs, data management and overall monitoring of the Grid. As the number of sites within these infrastructures continues to grow, it must be understood whether the current systems have the capacity to handle the extra...
  49. Alexandre Vaniachine (Argonne National Laboratory)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    To process the vast amount of data from high energy physics experiments, physicists rely on Computational and Data Grids; yet, the distribution, installation, and updating of a myriad of different versions of different programs over the Grid environment is complicated, time-consuming, and error-prone. We report on the development of a Grid Software Installation Management Framework...
  50. Ms Alessandra Forti (University of Manchester)
    05/09/2007, 08:00
    Collaborative tools
    poster
The System Management Working Group (SMWG), composed of system administrators from HEPiX and grid sites, has been set up to address the fabric management problems that HEP sites might have. The group is open, and its goal is not to implement new tools but to share what is already in use at sites according to existing best practices. Some sites are already publicly sharing their tools and sensors and some other...
  51. Prof. Nobuhiko Katayama (High Energy Accelerator Research Organization)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    The Belle experiment operates at the KEKB accelerator, a high luminosity asymmetric energy e+ e- collider. The Belle collaboration studies CP violation in decays of B meson to answer one of the fundamental questions of Nature, the matter-anti-matter asymmetry. Currently, Belle accumulates more than one million B Bbar meson pairs that correspond to about 1.2 TB of raw data in one...
  52. Alfonso Mantero (INFN Genova)
    05/09/2007, 08:00
    Event Processing
    poster
    A component of the Geant4 toolkit is responsible for the simulation of atomic relaxation: it is part of a modelling approach of electromagnetic interactions that takes into account the detailed atomic structure of matter, by describing particle interactions at the level of the atomic shells of the target material. The accuracy of Geant4 Atomic Relaxation has been evaluated against the...
  53. Dr Daniela Rebuzzi (INFN Pavia and Pavia University)
    05/09/2007, 08:00
    Event Processing
    poster
The ATLAS Muon Spectrometer is designed to reach a very high transverse momentum resolution for muons in a pT range extending from 6 GeV/c up to 1 TeV/c. The most demanding design goal is an overall uncertainty of 50 microns on the sagitta of a muon with pT = 1 TeV/c. Such precision requires accurate control of the positions of the muon detectors and of their movements during the...
  54. Aatos Heikkinen (Helsinki Institute of Physics, HIP)
    05/09/2007, 08:00
    Event Processing
    poster
We introduce a new implementation of the Liege cascade INCL4 with ABLA evaporation in Geant4. INCL4 treats hadron, deuterium, tritium, and helium beams up to 3 GeV energy, while ABLA provides treatment for light evaporation residues. The physics models in INCL4 and ABLA are reviewed with a focus on recent additions. Implementation details, such as the first version of the object-oriented...
  55. Timur Perelmutov (FERMI NATIONAL ACCELERATOR LABORATORY)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
The Storage Resource Manager (SRM) and WLCG collaborations recently defined version 2.2 of the SRM protocol, with the goal of satisfying the requirements of the LHC experiments. The dCache team has now finished the implementation of all SRM v2.2 elements required by the WLCG. The new functions include space reservation, more advanced data transfer, and new namespace and permission...
  56. Mr Thomas Doherty (University of Glasgow)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    AMI is an application which stores and allows access to dataset metadata for the ATLAS experiment. It provides a set of generic tools for managing database applications. It has a three-tier architecture with a core that supports a connection to any RDBMS using JDBC and SQL. The middle layer assumes that the databases have an AMI compliant self-describing structure. It provides a...
  57. Mr Jay Packard (BNL)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    Identity mapping is necessary when a site's resources do not use GRID credentials natively, but instead use a different mechanism to identify users, such as UNIX accounts or Kerberos principals. In these cases, the GRID credential for each incoming job must be associated with an appropriate site credential. Many sites consist of a heterogeneous environment with multiple gatekeepers, which...
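One common mapping mechanism of the kind the abstract describes is a grid-mapfile associating certificate subject DNs with local UNIX accounts. The entries below are invented and the parser is a minimal sketch; real deployments use site-specific files and richer rules (pooled accounts, VOMS roles).

```python
# Toy grid-mapfile lookup: parse '"<subject DN>" <local account>' lines and
# map an incoming job's grid credential to a site credential.
import shlex

MAPFILE = '''
# hypothetical entries
"/DC=org/DC=example/CN=Alice Adams" alice
"/DC=org/DC=example/CN=Bob Brown" cmsprod
'''

def parse_gridmap(text):
    """Return {subject DN: local account} from grid-mapfile-style text."""
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        dn, account = shlex.split(line)   # handles the quoted, space-laden DN
        mapping[dn] = account
    return mapping

def map_identity(mapping, dn):
    """Return the local account for a grid DN, or None if unmapped."""
    return mapping.get(dn)

gridmap = parse_gridmap(MAPFILE)
print(map_identity(gridmap, "/DC=org/DC=example/CN=Alice Adams"))  # alice
```

A site-wide service would centralize exactly this lookup so that multiple gatekeepers hand out consistent mappings.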
  58. Akos Frohner (CERN)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    The goal of the Medical Data Management (MDM) task is to provide secure (encrypted and under access control) access to medical images, which are stored at hospitals in DICOM servers or are replicated to standard grid Storage Elements (SE) elsewhere. In gLite 3.0 there are three major components to satisfy the requirements: The dCache/DICOM SE is a special SE, which...
  59. Dr Robert Harakaly (CERN)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    Configuration is an essential part of the deployment process of any software product. In the case of Grid middleware the variety and complexity of grid services coupled with multiple deployment scenarios make the provision of a coherent configuration both more important and more difficult. The configuration system must provide a simple interface which strikes a balance between the...
    Go to contribution page
  60. Dr Iosif Legrand (CALTECH)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    MonALISA (Monitoring Agents in A Large Integrated Services Architecture) provides a distributed service for monitoring, control and global optimization of complex systems, including the grids and networks used by the LHC experiments. MonALISA is based on an ensemble of autonomous, multi-threaded, agent-based subsystems which are able to collaborate and cooperate to perform a wide range of...
    Go to contribution page
  61. Gianluca Castellani (CERN)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    Facilities offered by WLCG are extensively used by LHCb in all aspects of their computing activity. A real time knowledge of the status of all Grid components involved is needed to optimize their exploitation. This is achieved by employing different monitoring services each one supplying a specific overview of the system. SAME tests are used in LHCb for monitoring the status of CE...
    Go to contribution page
  62. Dr Paul Millar (GridPP)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    Computing resources in HEP are increasingly delivered utilising grid technologies, which presents new challenges in terms of monitoring. Monitoring involves the flow of information between different communities: the various resource-providers and the different user communities. The challenge is providing information so everyone can find what they need: from the local site administrators,...
    Go to contribution page
  63. Dr Sergio Andreozzi (INFN-CNAF)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    GridICE is an open source distributed monitoring tool for Grid systems that is integrated in the gLite middleware and provides continuous monitoring of the EGEE infrastructure. The main goals of GridICE are: to provide both summary and detailed views of the status and availability of Grid resources, to highlight a number of pre-defined fault situations, and to present usage information. In...
    Go to contribution page
  64. Mr Sylvain Reynaud (IN2P3/CNRS)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    Advanced capabilities available in today's batch systems are fundamental for operators of high-performance computing centres in order to provide a high-quality service to their local users. Existing middleware allows sites to expose grid-enabled interfaces to the basic functionalities offered by the site's computing service. However, it does not provide enough mechanisms for...
    Go to contribution page
  65. Dr Graeme Stewart (University of Glasgow)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    When operational, the Large Hadron Collider experiments at CERN will collect tens of petabytes of physics data per year. The worldwide LHC computing grid (WLCG) will distribute this data to over two hundred Tier-1 and Tier-2 computing centres, enabling particle physicists around the globe to access the data for analysis. Different middleware solutions exist for effective management of...
    Go to contribution page
  66. Mr Martin Radicke (DESY Hamburg)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    The dCache software has become a major storage element in the WLCG, providing high-speed file transfers by caching datasets on potentially thousands of disk servers in front of tertiary storage. Currently, dCache's model of separately connecting all disk servers to the tape backend, which leads to locally controlled flush and restore behaviour, has shown some inefficiencies with respect to tape drive...
    Go to contribution page
  67. Dr Marco La Rosa (The University of Melbourne)
    05/09/2007, 08:00
    Software components, tools and databases
    poster
    With the proliferation of multi-core x86 processors, it is reasonable to ask whether the supporting infrastructure of the system (memory bandwidth, IO bandwidth, etc.) can handle as many jobs as there are cores. Furthermore, are traditional benchmarks like SpecINT and SpecFloat adequate for assessing multi-core systems in real computing situations? In this paper we present the results of...
    Go to contribution page
  68. Michal Kwiatek (CERN)
    05/09/2007, 08:00
    Collaborative tools
    poster
    For many years at CERN we had a very sophisticated print server infrastructure which supported several different protocols (AppleTalk, IPX and TCP/IP) and many different printing standards. Today's situation differs a lot: we have a much more homogeneous network infrastructure, where TCP/IP is used everywhere, and we have fewer printer models, almost all of which work with current standards...
    Go to contribution page
  69. Mr Alexander Kulyavtsev (FNAL)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
    dCache is a distributed storage system which today stores and serves petabytes of data in several large HEP experiments. Resilient dCache is a top level service within dCache, created to address reliability and file availability issues when storing data for extended periods of time on disk-only storage systems. The Resilience Manager automatically keeps the number of copies within...
    Go to contribution page
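The replica-count policy that such a resilience service enforces, keeping the number of disk copies of each file within configured bounds, can be sketched as follows. The bounds, pool names and file ids are invented; this is not the dCache Resilience Manager code:

```python
# Toy sketch of a replica-count policy: keep the number of disk
# copies of each file between a configured minimum and maximum.
# Bounds, pool names and file ids below are hypothetical.

MIN_COPIES, MAX_COPIES = 2, 3

def plan_actions(replica_map):
    """replica_map: file id -> set of pools currently holding a copy.
    Returns a list of (action, file_id, count) tuples describing
    how many copies to create or remove for each file."""
    actions = []
    for file_id, pools in replica_map.items():
        n = len(pools)
        if n < MIN_COPIES:
            actions.append(("replicate", file_id, MIN_COPIES - n))
        elif n > MAX_COPIES:
            actions.append(("reduce", file_id, n - MAX_COPIES))
    return actions

print(plan_actions({
    "f1": {"poolA"},                               # below minimum
    "f2": {"poolA", "poolB"},                      # within bounds
    "f3": {"poolA", "poolB", "poolC", "poolD"},    # above maximum
}))
```

A real resilience manager additionally has to react to pool failures and drains, re-running exactly this kind of plan whenever the set of available pools changes.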
  70. Dr Gregory Dubois-Felsmann (SLAC)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    The BaBar experiment currently uses approximately 4000 KSI2k on dedicated Tier 1 and Tier 2 compute farms to produce Monte Carlo events and to create analysis datasets from detector and Monte Carlo events. This need will double in the next two years, requiring additional resources. We describe enhancements to the BaBar experiment's distributed system for the creation of skimmed...
    Go to contribution page
  71. Dr Maria Grazia Pia (INFN GENOVA)
    05/09/2007, 08:00
    Collaborative tools
    poster
    Journal publication plays a fundamental role in scientific research, with practical effects on researchers' academic careers and their standing with funding agencies. An analysis of publications about high energy physics computing in refereed journals is presented, based in part on the author's experience as a member of the Editorial Board of a major journal in Nuclear Technology. The statistical...
    Go to contribution page
  72. Prof. Sridhara Dasu (University of Wisconsin)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
    We describe the ideas behind, and present performance results from, a rapid-response adaptive computing environment (RACE) that we set up at the UW-Madison CMS Tier-2 computing center. RACE uses Condor technologies to allow rapid response to a certain class of jobs, while temporarily suspending longer-running jobs. RACE allows us to use our entire farm for long-running production jobs, but also...
    Go to contribution page
  73. Sophie Lemaitre (CERN)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    The LFC (LCG File Catalogue) allows retrieving and registering the location of physical replicas in the grid infrastructure, given an LFN (Logical File Name) or a GUID (Grid Unique Identifier). Authentication is based on GSI (Grid Security Infrastructure), and authorization also uses VOMS. The catalogue has been installed at more than 100 sites. It is essential to provide consistent
    Go to contribution page
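The lookup the LFC provides, LFN or GUID to physical replicas, can be illustrated with a minimal in-memory sketch. The class, method names, LFNs and SURLs below are hypothetical and do not reflect the actual LFC client API:

```python
# Minimal in-memory sketch of the file-catalogue idea: resolve a
# logical file name (LFN) or GUID to the physical replicas (SURLs)
# registered for it. All names below are invented for illustration.

class FileCatalogue:
    def __init__(self):
        self._guid_by_lfn = {}
        self._replicas_by_guid = {}

    def register(self, lfn, guid, surl):
        """Record that `surl` is a physical replica of the file."""
        self._guid_by_lfn[lfn] = guid
        self._replicas_by_guid.setdefault(guid, []).append(surl)

    def replicas(self, lfn=None, guid=None):
        """Look up replicas by LFN, or directly by GUID."""
        if guid is None:
            guid = self._guid_by_lfn[lfn]
        return list(self._replicas_by_guid.get(guid, []))

cat = FileCatalogue()
cat.register("/grid/vo/data/file1", "guid-0001",
             "srm://se1.example.org/vo/file1")
cat.register("/grid/vo/data/file1", "guid-0001",
             "srm://se2.example.org/vo/file1")
print(cat.replicas(lfn="/grid/vo/data/file1"))
```

The GUID indirection is the important design point: the logical name can be renamed without touching the replica records, because replicas hang off the immutable GUID.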
  74. Nancy Marinelli (University of Notre Dame)
    05/09/2007, 08:00
    Event Processing
    poster
    A seed/track finding algorithm has been developed for the reconstruction of e+e- pairs from converted photons. It combines the information of the electromagnetic calorimeter with the accurate information provided by the tracker. ECAL-seeded track finding is used to locate the approximate vertex of the conversion. Tracks found with this method are then used as input to further inside-out...
    Go to contribution page
  75. Dr Kilian Schwarz (GSI)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
    Now that all LHC experiments have managed to run globally distributed Monte Carlo productions on the Grid, the development of tools for equally distributed data analysis comes to the fore. Suitable interfaces must be provided to grant physicists access to this world. The starting point is the analysis framework ROOT/PROOF, which is widely used within the HEP community....
    Go to contribution page
  76. Dr Andy Buckley (Durham University)
    05/09/2007, 08:00
    Event Processing
    poster
    The Rivet system is a framework for validation of Monte Carlo event generators against archived experimental data, and together with JetWeb and HepData forms a core element of the CEDAR event generator tuning programme. It is also an essential tool in the development of next generation event generators by members of the MCnet network. Written primarily in C++, Rivet provides a uniform...
    Go to contribution page
  77. Emmanuel Ormancey (CERN)
    05/09/2007, 08:00
    Collaborative tools
    poster
    Nearly every large organization uses a tool to broadcast messages and information across the internal campus (messages like alerts announcing interruptions in services, or simply information about upcoming events). The tool typically allows administrators (operators) to send "targeted" messages which are sent only to specific groups of users or computers (for instance only those...
    Go to contribution page
  78. Dr Gregory Dubois-Felsmann (SLAC)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    The BaBar experiment needs a fast and efficient procedure for distributing jobs to produce a large number of simulated events for analysis purposes. We discuss the benefits and drawbacks of mapping the traditional production schema onto the grid paradigm, and describe the structure implemented on the standard "public" resources of the INFN-Grid project. Data access/distribution on sites...
    Go to contribution page
  79. Dr Steven Goldfarb (University of Michigan)
    05/09/2007, 08:00
    Collaborative tools
    poster
    "Shaping Collaboration 2006" was a workshop held in Geneva, on December 11-13, 2006, to examine the status and future of collaborative tool technology and its usage for large global scientific collaborations, such as those of the CERN LHC (Large Hadron Collider). The workshop brought together some of the leading experts in the field of collaborative tools (WACE 2006) with physicists and...
    Go to contribution page
  80. Dr Yaodong Cheng (Institute of High Energy Physics,Chinese Academy of Sciences)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    Currently more and more heterogeneous resources are integrated into LCG. Sharing LCG files across different platforms, including different operating systems and grid middlewares, is a basic issue. We implemented a web service interface for LFC and simulated an LCG file access client using the Globus Java CoG Kit.
    Go to contribution page
  81. Dr Dorian Kcira (University of Louvain)
    05/09/2007, 08:00
    Event Processing
    poster
    With a total area of more than 200 square meters and about 16000 silicon detectors the Tracker of the CMS experiment will be the largest silicon detector ever built. The CMS silicon Tracker will detect charged tracks and will play a determinant role in lepton reconstruction and heavy flavour quark tagging. A general overview of the Tracker data handling software, which allows the...
    Go to contribution page
  82. Dr Paul Miyagawa (University of Manchester)
    05/09/2007, 08:00
    Event Processing
    poster
    The ATLAS solenoid produces a magnetic field which enables the Inner Detector to measure track momentum by track curvature. This solenoidal magnetic field was measured using a rotating-arm mapping machine and, after removing mapping machine effects, has been understood to the 0.05% level. As tracking algorithms require the field strength at many different points, the representation of...
    Go to contribution page
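Since tracking algorithms query the field at arbitrary points, a measured map is typically stored at discrete grid points and interpolated. A minimal one-dimensional sketch of that idea follows; the sample values are invented, and the actual ATLAS field representation is more elaborate than this:

```python
# Hedged sketch: a measured field map stored at discrete grid points,
# interpolated so tracking code can query the field at arbitrary
# positions. One-dimensional linear interpolation is shown; the real
# ATLAS representation is more elaborate. Sample values are invented.
import bisect

def interpolate_field(z_grid, bz_grid, z):
    """Linearly interpolate Bz (tesla) at position z (metres)."""
    if not z_grid[0] <= z <= z_grid[-1]:
        raise ValueError("z outside the mapped volume")
    # Find the grid cell containing z, clamping at the upper edge.
    i = min(bisect.bisect_right(z_grid, z) - 1, len(z_grid) - 2)
    frac = (z - z_grid[i]) / (z_grid[i + 1] - z_grid[i])
    return bz_grid[i] + frac * (bz_grid[i + 1] - bz_grid[i])

# Invented sample map: the field falls off towards the coil end.
z_grid = [0.0, 1.0, 2.0, 3.0]
bz_grid = [2.0, 1.9, 1.6, 1.0]
print(interpolate_field(z_grid, bz_grid, 1.5))  # midway between 1.9 and 1.6
```

A production field service would interpolate in three dimensions and cache the cell lookup, since the field is queried at every propagation step of every track.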
  83. Dr Pavel Nevski (Brookhaven National Laboratory (BNL))
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    In order to be ready for physics analysis, the ATLAS experiment is running a worldwide Monte Carlo production for many different physics samples with different detector conditions. Job definition is the starting point of the ATLAS production system. This is a common interface for the ATLAS community to submit jobs for processing by the distributed production system used for all...
    Go to contribution page
  84. Robert Petkus (Brookhaven National Laboratory)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
    The RHIC/USATLAS Computing Facility at BNL has evaluated high-performance, low-cost storage solutions in order to complement a substantial distributed file system deployment of dCache (>400 TB) and xrootd (>130 TB). Currently, these file systems are spread across disk-heavy computational nodes providing over 1.3 PB of aggregate local storage. While this model has proven sufficient to...
    Go to contribution page
  85. Dr Andrea Sciabà (CERN)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    The main goal of the Experiment Integration and Support (EIS) team in WLCG is to help the LHC experiments use the gLite middleware proficiently as part of their computing frameworks. This contribution gives an overview of the activities of the EIS team, focusing on a few that are particularly important for the experiments. One activity is the evaluation of the gLite workload...
    Go to contribution page
  86. Prof. Vladimir Ivantchenko (CERN, ESA)
    05/09/2007, 08:00
    Event Processing
    poster
    The testing suite for validation of Geant4 hadronic generators against the data of thin target experiments is presented. Results of comparisons with neutron and pion production data are shown for different Geant4 hadronic generators in the beam momentum interval 0.5–12.9 GeV/c.
    Go to contribution page
  87. Tapio Lampen (Helsinki Institute of Physics HIP)
    05/09/2007, 08:00
    Event Processing
    poster
    We demonstrate the use of a ROOT Toolkit for Multivariate Data Analysis (TMVA) in tagging b-jets associated with heavy neutral MSSM Higgs bosons at the LHC. The associated b-jets can be used to extract Higgs events from the Drell-Yan background, for which the associated jets are mainly light quark and gluon jets. TMVA provides an evaluation for different multivariate classification...
    Go to contribution page
  88. Suren Chilingaryan (The Institute of Data Processing and Electronics, Forschungszentrum Karlsruhe)
    05/09/2007, 08:00
    Collaborative tools
    poster
    For reliable and timely forecasts of dangerous Space Weather conditions, world-wide networks of particle detectors are located at different latitudes, longitudes and altitudes. To provide better integration of these networks, the DAS (Data Acquisition System) faces the challenge of establishing reliable data exchange between multiple network nodes which are often located in hardly...
    Go to contribution page
  89. Dr Solveig Albrand (LPSC/IN2P3/UJF Grenoble France)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
    AMI was chosen as the ATLAS dataset selection interface in July 2006. It should become the main interface for searching for ATLAS data using physics metadata criteria. AMI has been implemented as a generic database management framework which allows parallel searching over many catalogues, which may have differing schema. The main features of the web interface will be described; in...
    Go to contribution page
  90. Dr Andy Buckley (Durham University)
    05/09/2007, 08:00
    Event Processing
    poster
    Monte Carlo event generators are an essential tool for modern particle physics; they simulate aspects of collider events ranging from the parton-level "hard process" to cascades of QCD radiation in both initial and final states, non-perturbative hadronization processes, underlying event physics and specific particle decays. LHC events in particular are so complex that event generator...
    Go to contribution page
  91. Dr Daniele Bonacorsi (INFN-CNAF, Bologna, Italy)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
    Early in 2007 the CMS experiment deployed a traffic load generator infrastructure, aimed at providing CMS Computing Centers (Tiers of the WLCG) with a means for debugging, load-testing and commissioning data transfer routes among them. The LoadTest is built upon, and relies on, the PhEDEx dataset transfer tool as a reliable data replication system in use by CMS. On top of PhEDEx, the CMS...
    Go to contribution page
  92. Dr Andrew McNab (University of Manchester)
    05/09/2007, 08:00
    Collaborative tools
    poster
    We describe the operation of www.gridpp.ac.uk, the website provided for GridPP and its precursor, UK HEP Grid, since 2000, and explain the operational procedures of the service and the various collaborative tools and components that were adapted or developed for use on the site. We pay particular attention to the security issues surrounding such a prominent site, and how the GridSite...
    Go to contribution page
  93. Dr Raja Nandakumar (Rutherford Appleton Laboratory)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
    The worldwide computing grid is essential to the LHC experiments in analysing the data collected by the detectors. Within LHCb, the computing model aims to simulate data at Tier-2 grid sites as well as on non-grid resources. The reconstruction, stripping and analysis of the produced LHCb data will primarily take place at the Tier-1 centres. The computing data challenge DC06 started in May 2006...
    Go to contribution page
  94. Mr Rudolf Frühwirth (Inst. of High Energy Physics, Vienna)
    05/09/2007, 08:00
    Event Processing
    poster
    We present the "LiC Detector Toy" ("LiC" for Linear Collider) program, a simple but powerful software tool for detector design, modification and geometry studies. It allows the user to determine the resolution of reconstructed track parameters for the purpose of comparing and optimizing various detector set-ups. It consists of a simplified simulation of the detector measurements, taking...
    Go to contribution page
  95. Mr Antonio Retico (CERN)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    The WLCG/EGEE Pre-Production Service (PPS) is a grid infrastructure whose goal is to give WLCG/EGEE users early access to new services, in order to evaluate new features and changes in the middleware before new versions are actually deployed in production. The PPS grid comprises about 30 sites providing resources and manpower. The service contributes to the overall quality of the grid...
    Go to contribution page
  96. Dr Winfried A. Mitaroff (Institute of High Energy Physics (HEPHY) of the Austrian Academy of Sciences, Vienna)
    05/09/2007, 08:00
    Event Processing
    poster
    A detector-independent toolkit (RAVE) is being developed for the reconstruction of the common interaction vertices from a set of reconstructed tracks. It deals both with "finding" (pattern recognition of track bundles) and with "fitting" (estimation of vertex position and track momenta). The algorithms used so far include robust adaptive filters which are derived from the CMS...
    Go to contribution page
  97. Dr Fabio Cossutti (INFN)
    05/09/2007, 08:00
    Event Processing
    poster
    The CMS Collaboration has developed a detailed simulation of the electromagnetic calorimeter (ECAL), which has been fully integrated in the collaboration software framework CMSSW. The simulation is based on the Geant4 detector simulation toolkit for the modelling of the passage of particles through matter and magnetic field. The geometrical description of the detector is being...
    Go to contribution page
  98. Dr Sergio Andreozzi (INFN-CNAF)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    A key advantage of Grid systems is the capability of sharing heterogeneous resources and services across traditional administrative and organizational domains. This capability enables the creation of virtual pools of resources that can be assigned to groups of users. One of the problems that the utilization of such pools presents is the awareness of the resources, i.e., the fact that...
    Go to contribution page
  99. Mr Riccardo Zappi (INFN-CNAF)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    In Grid systems, a core resource being shared among geographically-dispersed communities of users is storage. For this resource, a standard interface specification (Storage Resource Management, or SRM) was defined and is being evolved in the context of the Open Grid Forum. By implementing this interface, all storage resources that are part of a Grid can be managed in a homogeneous fashion. In...
    Go to contribution page
  100. Dr Piergiulio Lenzi (Dipartimento di Fisica)
    05/09/2007, 08:00
    Event Processing
    poster
    The first application of one of the official CMS tracking algorithms, known as the Combinatorial Track Finder, to real cosmic muon data is described. The CMS tracking system consists of a silicon pixel vertex detector and a surrounding silicon microstrip detector. The silicon strip tracker consists of 10 barrel layers and 12 endcap disks on each side. The system is currently going through...
    Go to contribution page
  101. Dr Andrea Fontana (INFN-Pavia)
    05/09/2007, 08:00
    Event Processing
    poster
    The concept of the Virtual Monte Carlo makes it possible to use different Monte Carlo programs to simulate particle physics detectors without changing the geometry definition or the detector response simulation. In this context, to study the reconstruction capabilities of a detector, the availability of a tool to extrapolate the track parameters and their associated errors due to magnetic field,...
    Go to contribution page
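To illustrate what extrapolating track parameters through a magnetic field involves, here is a hedged sketch of fourth-order Runge-Kutta propagation of a charged particle's direction in a uniform field. The constants, momentum, field value and step size are chosen for illustration; this is not the tool described in the contribution:

```python
# Hedged illustration of track extrapolation: integrate the direction
# of a charged particle along its path length s, du/ds = k (u x B),
# with a fourth-order Runge-Kutta step. The field, momentum and step
# size below are invented for illustration.

K = 0.29979  # curvature constant in 1/m per (T * GeV^-1)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def derivative(state, q_over_p, B):
    # state = (x, y, z, ux, uy, uz); (ux, uy, uz) is the unit direction.
    u = state[3:]
    c = cross(u, B)
    k = K * q_over_p
    return u + (k*c[0], k*c[1], k*c[2])

def rk4_step(state, q_over_p, B, h):
    def shifted(s, d, f):
        return tuple(si + f*di for si, di in zip(s, d))
    k1 = derivative(state, q_over_p, B)
    k2 = derivative(shifted(state, k1, h/2), q_over_p, B)
    k3 = derivative(shifted(state, k2, h/2), q_over_p, B)
    k4 = derivative(shifted(state, k3, h), q_over_p, B)
    return tuple(s + h/6*(a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Propagate a 1 GeV/c, charge +1 particle, initially along x,
# through 1 m of a uniform 1 T field along z, in 100 steps.
state = (0.0, 0.0, 0.0, 1.0, 0.0, 0.0)
for _ in range(100):
    state = rk4_step(state, 1.0, (0.0, 0.0, 1.0), 0.01)
```

The direction bends towards negative y, as expected for a positive charge moving along +x in a field along +z, and its norm stays at unity to numerical precision; a realistic extrapolator additionally transports the covariance matrix of the track parameters and accounts for material effects.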
  102. Dr Gabriele Compostella (University Of Trento INFN Padova)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
    When the CDF experiment was developing its software infrastructure, most computing was done on dedicated clusters. As a result, libraries, configuration files, and large executables were deployed over a shared file system. As CDF started to move into the Grid world, the assumption of having a shared file system showed its limits. In a widely distributed computing model, such as the...
    Go to contribution page
  103. Don Petravick (FNAL)
    05/09/2007, 08:00
    Grid middleware and tools
    poster
    Computing in High Energy Physics and other sciences is quickly moving toward the Grid paradigm, with resources being distributed over hundreds of independent pools scattered over the five continents. The transition from a tightly controlled, centralized computing paradigm to a shared, widely distributed model, while bringing many benefits, has also introduced new problems, a major one...
    Go to contribution page
  104. Mr Andreas Weindl (FZ Karlsruhe / IK), Dr Harald Schieler (FZ Karlsruhe / IK)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
    The KASCADE-Grande experiment is a multi-detector installation at the site of the Forschungszentrum Karlsruhe, Germany, to measure and study extensive air showers induced in the atmosphere by primary cosmic rays in the energy range from 10^14 to 10^18 eV. For three of the detector components, Web-based online event displays have been implemented. They provide, in a fast and simplified way,...
    Go to contribution page
  105. Wilko Kroeger (SLAC)
    05/09/2007, 08:00
    Distributed data analysis and information management
    poster
    The BaBar experiment stores its reconstructed event data in ROOT files, which amount to more than one petabyte and more than two million files. All the data are stored in the mass storage system (HPSS) at SLAC, and part of the data is exported to Tier-A sites. Fast and reliable access to the data is provided by Xrootd at all sites. It integrates with a mass storage system, and files that...
    Go to contribution page