21–27 Mar 2009
Prague
Europe/Prague timezone

Contribution List

533 out of 533 displayed
  1. Dr Gabriele Garzoglio (FERMI NATIONAL ACCELERATOR LABORATORY)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    In recent years, it has become more and more evident that software threat communities are taking an increasing interest in Grid infrastructures. To mitigate the security risk associated with the increased number of attacks, the Grid software development community needs to scale up its effort to reduce software vulnerabilities. This can be achieved by introducing security review processes as a...
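The path-based XML access this abstract describes can be illustrated with the limited XPath subset in Python's standard library. This is a sketch only: the tag and attribute names below are invented, and JANA's parser is its own C++ implementation.

```python
# Minimal illustration of path-based access to numbers in an XML
# geometry description. All tag and attribute names are hypothetical.
import xml.etree.ElementTree as ET

xml_doc = """
<geometry>
  <detector name="drift_chamber">
    <layer id="1" radius="12.5"/>
    <layer id="2" radius="14.0"/>
  </detector>
</geometry>
"""

root = ET.fromstring(xml_doc)
# ElementTree supports a limited XPath subset: select one element
# via predicates on its attributes, then read an attribute value.
layer = root.find("./detector[@name='drift_chamber']/layer[@id='2']")
radius = float(layer.get("radius"))
print(radius)  # 14.0
```

This is the essence of letting reconstruction components pick individual numbers out of a geometry file by path rather than hand-written parsing code.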
  2. Dr Sanjay Padhi (UCSD)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    This paper presents a web based Job Monitoring framework for individual Grid sites that allows users to follow in detail their jobs in quasi-real time. The framework consists of several independent components, (a) a set of sensors that run on the site CE and worker nodes and update a database, (b) a simple yet extensible web services framework and (c) an Ajax powered web interface having a...
  3. Dr David Lawrence (Jefferson Lab)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    A minimal XPath 1.0 parser has been implemented within the JANA framework that allows easy access to attributes or tags in an XML document. The motivating application was to access geometry information from XML files in the HDDS specification (derived from ATLAS's AGDD). The system allows components in the reconstruction package to pick out individual numbers from a collection of XML...
  4. Daniel Colin Van Der Ster (Conseil Europeen Recherche Nucl. (CERN))
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    Ganga provides a uniform interface for running ATLAS user analyses on a number of local, batch, and grid backends. PanDA is a pilot-based production and distributed analysis system developed and used extensively by ATLAS. This work presents the implementation and usage experiences of a PanDA backend for Ganga. Built upon reusable application libraries from GangaAtlas and PanDA, the Ganga PanDA...
  5. Mr Andrey TSYGANOV (Moscow Physical Engineering Inst. (MePhI))
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    CERN, the European Laboratory for Particle Physics, located in Geneva - Switzerland, has recently started the Large Hadron Collider (LHC), a 27 km particle accelerator. The CERN Engineering and Equipment Data Management Service (EDMS) provides support for managing engineering and equipment information throughout the entire lifecycle of a project. Based on several both in-house developed and...
  6. Dr Suren Chilingaryan (The Institute of Data Processing and Electronics, Forschungszentrum Karlsruhe)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    During the operation of high energy physics experiments, a large amount of slow control data is recorded. It is necessary to examine all collected data, checking the integrity and validity of the measurements. With the growing maturity of AJAX technologies it becomes possible to construct sophisticated interfaces using web technologies only. Our solution for handling time series, generally slow control...
  7. Dr David Lawrence (Jefferson Lab)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Factory models are often used in object oriented programming to allow more complicated and controlled instantiation than is easily done with a standard C++ constructor. The alternative factory model implemented in the JANA event processing framework addresses issues of data integrity important to the type of reconstruction software developed for experimental HENP. The data on...
  8. Ms Gerhild Maier (Johannes Kepler Universität Linz)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    Grid computing is associated with a complex, large-scale, heterogeneous and distributed environment. The combination of different Grid infrastructures, middleware implementations, and job submission tools into one reliable production system is a challenging task. Given the impracticality of providing an absolutely fail-safe system, strong error reporting and handling is a crucial part of...
  9. Dr David Malon (Argonne National Laboratory), Dr Peter Van Gemmeren (Argonne National Laboratory)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    At a data rate of 200 hertz, event metadata records ("TAGs," in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise "data mining," but our interest is different. Advanced statistical methods and tools such as...
  10. José Mejia (Rechenzentrum Garching)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The ATLAS computing Grid consists of several hundred compute clusters distributed around the world as part of the Worldwide LHC Computing Grid (WLCG). The Grid middleware and the ATLAS software, which have to be installed on each site, often require a certain Linux distribution and sometimes even a specific version thereof. On the other hand, mostly for maintenance reasons, computer centres...
  11. Dr John Kennedy (LMU Munich)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The organisation and operations model of the ATLAS T1-T2 federation/cloud associated with the GridKa T1 in Karlsruhe is described. Attention is paid to cloud-level services and the experience gained during the last years of operation. The ATLAS GridKa Cloud is large and diverse, spanning 5 countries and 2 ROCs, and currently comprises 13 core sites. A well defined and tested operations...
  12. Lassi Tuura (Northeastern University)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The CMS experiment at the Large Hadron Collider has deployed numerous web-based services in order to serve the collaboration effectively. We present the two-phase authentication and authorisation system in use in the data quality and computing monitoring services, and in the data- and workload management services. We describe our techniques intended to provide a high level of security with...
  13. Marco Clemencic (European Organization for Nuclear Research (CERN))
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    An extensive test suite is the first step towards the delivery of robust software, but it is not always easy to implement, especially in projects with many developers. An easy-to-use and flexible infrastructure for writing and executing the tests reduces the work each developer has to do to instrument their packages with tests. At the same time, the infrastructure gives the same look and...
  14. Mr Ricardo Manuel Salgueiro Domingues da Silva (CERN)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    A frequent source of concern for resource providers is the efficient use of computing resources in their centres. This has a direct impact on requests for new resources. There are two different but strongly correlated aspects to be considered: while users are mostly interested in a good turn-around time for their jobs, resource providers are mostly interested in a high and efficient usage...
  15. Alessandro De Salvo (Istituto Nazionale di Fisica Nucleare Sezione di Roma 1)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The measurement of experiment software performance is very important in order to choose the most effective resources to use and to discover the bottlenecks of the code implementation. In this work we present the benchmark techniques used to measure ATLAS software performance through the ATLAS offline testing engine Kit Validation and the online portal Global Kit...
  16. Dr Florian Uhlig (GSI Darmstadt)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    One of the challenges of software development for large experiments is managing the contributions of globally distributed teams. In order to keep the teams synchronized, strong quality control is important. For a software project this means testing, on all supported platforms, whether the project can be built from source, whether it runs, and, in the end, whether the program delivers the...
  17. Witold Pokorski (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    We present the new monitoring system for CASTOR (CERN Advanced STORage) which allows an integrated view on all the different storage components. With the massive data-taking phase approaching, CASTOR is one of the key elements of the software needed by the LHC experiments. It has to provide a reliable storage machinery for saving the event data, as well as to enable an efficient...
  18. Dr Antonio Pierro (INFN-BARI)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The web application service, part of the conditions database system, serves applications and users outside event processing. The application server is built upon the conditions Python API in the CMS offline software framework. It responds to HTTP requests on various conditions database instances. The main client of the application server is the conditions database web GUI, which currently...
  19. Edward Karavakis (Brunel University-CERN)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    Dashboard is a monitoring system developed for the LHC experiments in order to provide a view of the Grid infrastructure from the perspective of the Virtual Organisation. The CMS Dashboard provides a reliable monitoring system that enables a transparent view of the experiment's activities across different middleware implementations and combines the Grid monitoring data with information that...
  20. Lassi Tuura (Northeastern University)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    A central component of the data quality monitoring system of the CMS experiment at the Large Hadron Collider is a web site for browsing data quality histograms. The production servers in data taking provide access to several hundred thousand histograms per run, both live online and from up to several terabytes of archived histograms for the online data taking, Tier-0 prompt...
  21. Natalia Ratnikova (Fermilab-ITEP(Moscow)-Karlsruhe University(Germany))
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The CMS Software project CMSSW embraces more than a thousand packages organized in over a hundred subsystems covering the areas of analysis, event display, reconstruction, simulation, detector description, data formats, framework, utilities and tools. The release integration process is highly automated, using tools developed or adopted by CMS. Packaging in rpm format is a built-in step in the...
  22. Mr Shahzad Muzaffar (NORTHEASTERN UNIVERSITY)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The CMS offline software consists of over two million lines of code actively developed by hundreds of developers from all around the world. Optimal builds and distribution of such a large scale system for production and analysis activities for hundreds of sites and multiple platforms are major challenges. Recent developments have not only optimized the whole process but also helped us identify...
  23. Dr Thomas Kress (RWTH Aachen, III. Physikal. Institut B)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The Tier-2 centers in CMS are the only location, besides the specialized analysis facility at CERN, where users are able to obtain guaranteed access to CMS data samples. The Tier-1 centers are used primarily for organized processing and storage. The Tier-1s are specified with data export and network capacity to allow the Tier-2 centers to refresh the data in disk storage regularly for...
  24. Dr Ajit Kumar Mohapatra (University of Wisconsin, Madison, USA)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The CMS experiment has been using the Open Science Grid, through its US Tier-2 computing centers, from its very beginning for production of Monte Carlo simulations. In this talk we will describe the evolution of the usage patterns indicating the best practices that have been identified. In addition to describing the production metrics and how they have been met, we will also present the...
  25. Dr Alessandra Fanfani (on behalf of CMS - INFN-BOLOGNA (ITALY))
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    CMS has identified the distributed Tier-2 sites as the primary location for physics analysis. There is a specialized analysis cluster at CERN, but it represents approximately 15% of the total computing available to analysis users. The more than 40 Tier-2s on 4 continents will provide analysis computing and user storage resources for the vast majority of physicists in CMS. The CMS estimate is...
  26. Andrea Valassi (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The COOL project provides software components and tools for the handling of the LHC experiment conditions data. The project is a collaboration between the CERN IT Department and Atlas and LHCb, the two experiments that have chosen it as the base of their conditions database infrastructure. COOL supports persistency for several relational technologies (Oracle, MySQL and SQLite), based on the...
  27. Prof. Kihyeon Cho (KISTI)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    KISTI (Korea Institute of Science and Technology Information) is Korea's national headquarters for supercomputing, networking, Grid and e-Science. We have been working on cyberinfrastructure for high energy physics experiments, especially the CDF and ALICE experiments. We introduce the cyberinfrastructure, which includes resources, Grid and e-Science, for these experiments. The goal of...
  28. Cédric Serfon (LMU München)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    A set of tools has been developed to ensure the Data Management operations (deletion, movement of data within a site, and consistency checks) within the German cloud for ATLAS. These tools, which use local protocols that allow fast and efficient processing, are described hereafter and presented in the context of the operational procedures of the cloud. A particular emphasis is put on the...
  29. Dr Ashok Agarwal (University of Victoria, Victoria, BC, Canada)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    An interface between dCache and the local Tivoli Storage Manager (TSM) tape storage facility has been developed at the University of Victoria (UVic) for High Energy Physics (HEP) applications. The interface is responsible for transferring the data from disk pools to tape and retrieving data from tape to disk pools. It also checks the consistency between the PNFS filename space and the TSM...
  30. Dirk Hufnagel (Conseil Europeen Recherche Nucl. (CERN))
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The CMS Tier 0 is responsible for handling the data in the first period of its life, from being written to a disk buffer at the CMS experiment site in Cessy by the DAQ system until the transfer from CERN to one of the Tier-1 computing centres completes. It contains all automatic data movement, archival and processing tasks run at CERN. This includes the bulk transfers of data from Cessy to...
  31. Mr Adrian Casajus Ramo (Departament d' Estructura i Constituents de la Materia)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    DIRAC, the LHCb community Grid solution, provides access to a vast amount of computing and storage resources to a large number of users. In DIRAC users are organized in groups with different needs and permissions. In order to ensure that only allowed users can access the resources and to enforce that there are no abuses, security is mandatory. All DIRAC services and clients use secure...
  32. Galina Shabratova (Joint Inst. for Nuclear Research (JINR))
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    A. Bogdanov^3, L. Malinina^2, V. Mitsyn^2, Y. Lyublev^9, Y. Kharlov^8, A. Kiryanov^4, D. Peresounko^5, E. Ryabinkin^5, G. Shabratova^2, L. Stepanova^1, V. Tikhomirov^3, W. Urazmetov^8, A. Zarochentsev^6, D. Utkin^2, L. Yancurova^2, S. Zotkin^8. (1) Institute for Nuclear Research of the Russian Academy of Sciences, Troitsk, Russia; (2) Joint Institute for Nuclear Research, Dubna, Russia; (3) Moscow Engineering Physics Institute,...
  33. Predrag Buncic (CERN)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    Infrastructure-as-a-Service (IaaS) providers allow users to easily acquire on-demand computing and storage resources. For each user they provide an isolated environment in the form of Virtual Machines which can be used to run services and deploy applications. This approach, also known as 'cloud computing', has proved to be viable for a variety of commercial applications. Currently there are...
  34. Mr omer khalid (CERN)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    Omer Khalid, Paul Nillson, Kate Keahey, Markus Schulz --- Given the proliferation of virtualization technology in every technological domain, we have been investigating enabling virtualization in the LCG Grid to bring in virtualization benefits such as isolation, security and environment portability, using virtual machines as job execution containers. There are many different ways to...
  35. Paul Rossman (Fermi National Accelerator Lab. (Fermilab))
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    CMS utilizes a distributed infrastructure of computing centers to custodially store data, to provide organized processing resources, and to provide analysis computing resources for users. Integrated over the whole system, even in the first year of data taking, the available disk storage approaches 10 petabytes of space. Maintaining consistency between the data bookkeeping, the data transfer...
  36. Matevz Tadel (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    EVE is a high-level visualization library using ROOT's data-processing, GUI and OpenGL interfaces. It is designed as a framework for object management offering hierarchical data organization, object interaction and visualization via GUI and OpenGL representations. Automatic creation of 2D projected views is also supported. On the other hand, it can serve as an event visualization toolkit...
  37. Prof. Roger Jones (Lancaster University)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    Despite the all too brief availability of beam-related data, much has been learned about the usage patterns and operational requirements of the ATLAS computing model since Autumn 2007. Bottom-up estimates are now more detailed, and cosmic ray running has exercised much of the model in both duration and volume. Significant revisions have been made in the resource estimates, and in the usage of...
  38. Claudio Grandi (INFN Bologna)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The CMS Collaboration relies on 7 globally distributed Tier-1 computing centers located at large universities and national laboratories for a second custodial copy of the CMS RAW data and primary copy of the simulated data, data serving capacity to Tier-2 centers for analysis, and the bulk of the reprocessing and event selection capacity in the experiment. The Tier-1 sites have a challenging...
  39. Dr Tomasz Wlodek (Brookhaven National Laboratory (BNL)), Dr Yuri Smirnov (Brookhaven National Laboratory (BNL))
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The PanDA distributed production and analysis system has been in production use for ATLAS data processing and analysis since late 2005 in the US, and globally throughout ATLAS since early 2008. Its core architecture is based on a set of stateless web services served by Apache and backed by a suite of MySQL databases that are the repository for all PanDA information: active and archival...
  40. Juraj Sucik (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    CERN has a successful experience with running the Server Self Service Center (S3C) for virtual server provisioning, which is based on Microsoft Virtual Server 2005. With the introduction of Windows Server 2008 and its built-in hypervisor-based virtualization (Hyper-V) there are new possibilities for the expansion of the current service. Observing a growing industry trend of provisioning Virtual...
  41. Mr Michele De Gruttola (INFN, Sezione di Napoli - Universita & INFN, Napoli/ CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Reliable population of the condition database is critical for the correct operation of the online selection as well as of the offline reconstruction and analysis of data. We will describe here the system put in place in the CMS experiment to populate the database and make condition data promptly available online for the high-level trigger and offline for reconstruction. The system has been...
  42. Loic Quertenmont (Universite Catholique de Louvain)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    FROG is a generic framework dedicated to visualizing events in a given geometry. It has been written in C++ and uses the OpenGL cross-platform libraries. It can be applied to any particular physics experiment or detector design. The code is very light and very fast and can run on various operating systems. Moreover, FROG is self-consistent and does not require installation of ROOT or...
  43. Victor Diez Gonzalez (Univ. Rov. i Virg., Tech. Sch. Eng.-/CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Geant4 is a toolkit to simulate the passage of particles through matter, and is widely used in HEP, in medical physics and for space applications. Ongoing developments and improvements require regular integration testing for new or modified code. The current system uses a customised version of the Bonsai Mozilla tool to collect and select tags for testing, a set of shell and...
  44. Mr Laurent GARNIER (LAL-IN2P3-CNRS)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Qt is a powerful cross-platform application framework: free (even on Windows) and used by many people and applications. That is why the latest developments in the Geant4 visualization group come with a new driver based on the Qt toolkit. The Qt library has OpenGL support, so all 3D scenes can be moved with the mouse (as in the OpenInventor driver). This driver tries to bring together all the features already...
  45. Mr Luiz Henrique Ramos De Azevedo Evora (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    During the operation, maintenance, and dismantling periods of the ATLAS Experiment, the traceability of all detector equipment must be guaranteed for logistic and safety matters. The running of the Large Hadron Collider will expose the ATLAS detector to radiation. Therefore, CERN shall follow specific regulation from French and Swiss authorities for equipment removal, transport, repair, and...
  46. Dr Jose Caballero (Brookhaven National Laboratory (BNL))
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    Worker nodes on the grid exhibit great diversity, making it difficult to offer uniform processing resources. A pilot job architecture, which probes the environment on the remote worker node before pulling down a payload job, can help. Pilot jobs become smart wrappers, preparing an appropriate environment for job execution and providing logging and monitoring capabilities. PanDA (Production...
  47. Dr Bogdan Lobodzinski (DESY, Hamburg,Germany)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The H1 Collaboration at HERA has entered the period of high precision analyses based on the final data sample. These analyses require a massive production of simulated Monte Carlo (MC) events. The H1 MC framework, created by the H1 Collaboration, is software for mass MC production on the LCG Grid infrastructure and on a local batch system. The aim of the tool is full automatization of the...
  48. Dr Sebastian Böser (University College London)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Within the last years, the HepMC data format has established itself as the standard data format for the simulation of high-energy physics interactions and is commonly used by all four LHC experiments. At the energies of the proton-proton collisions at the LHC, a full description of the generation of these events and the subsequent interactions with the detector typically involves several...
  49. Axel Naumann (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    C++ does not offer access to reflection data: the types and their members as well as their memory layout are not accessible. Reflex adds that: it can be used to describe classes and any other types, to lookup and call functions, to lookup and access data members, to create and delete instances of types. It is rather unique and attracts considerable interest also outside of high energy...
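For contrast with the C++ situation this abstract describes, the operations Reflex adds (looking a type up by name, inspecting data members, calling functions by name, creating instances) are native in a dynamic language. A minimal Python sketch of those same operations, with an invented example class:

```python
# The reflection operations Reflex brings to C++, shown in Python,
# where they are built in. The Track class here is hypothetical.
class Track:
    def __init__(self, momentum):
        self.momentum = momentum

    def scaled(self, factor):
        return self.momentum * factor

# Look up the type by name and create an instance dynamically.
cls = globals()["Track"]
obj = cls(momentum=2.5)

# Inspect data members, then look up and call a function by name.
members = vars(obj)                       # {'momentum': 2.5}
method = getattr(obj, "scaled")
print(members["momentum"], method(2.0))   # 2.5 5.0
```

Reflex provides C++ equivalents of each of these steps via generated type dictionaries rather than language support.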
  50. Dr David Dykstra (Fermilab)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The CMS experiment requires worldwide access to conditions data by nearly a hundred thousand processing jobs daily. This is accomplished using a software subsystem called Frontier. This system translates database queries into HTTP, looks up the results in a central database at CERN, and caches the results in an industry-standard HTTP proxy/caching server called Squid. One of the most...
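The core Frontier idea described above is to turn identical read-only database queries into cacheable HTTP requests. A toy read-through cache keyed by the query text can make the payoff concrete (a sketch only: no real database or Squid is involved, and all names here are invented):

```python
# Toy read-through cache illustrating the Frontier idea: identical
# read-only queries are served from a cache instead of repeatedly
# hitting the central database. All names are invented.
from urllib.parse import quote

class QueryCache:
    def __init__(self, backend):
        self.backend = backend   # callable: query string -> result
        self.cache = {}          # stands in for the Squid proxy layer
        self.hits = 0
        self.misses = 0

    def url_for(self, query):
        # Frontier-style: encode the query into a GET URL so any
        # standard HTTP proxy can cache the response.
        return "http://frontier.example/query?q=" + quote(query)

    def execute(self, query):
        key = self.url_for(query)
        if key in self.cache:
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        result = self.backend(query)
        self.cache[key] = result
        return result

# A fake "central database" that records how often it is queried.
calls = []
def central_db(query):
    calls.append(query)
    return {"query": query, "payload": [1, 2, 3]}

frontier = QueryCache(central_db)
for _ in range(1000):                 # many jobs, identical query
    frontier.execute("SELECT * FROM conditions WHERE run=42")
print(frontier.misses, frontier.hits)  # 1 999
```

One thousand identical jobs cost the central database a single query; everything else is served from the cache, which is the load reduction Frontier and Squid provide at scale.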
  51. Dr Hartmut Stadie (Universität Hamburg)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    While the Grid infrastructure for the LHC experiments is well suited for batch-like analysis, it does not support the final steps of an analysis on a reduced data set, e.g. the optimization of cuts and derivation of the final plots. Usually this part is done interactively. However, for the LHC these steps might still require a large amount of data. The German "National Analysis Facility"(NAF)...
  52. Dr Vladimir Korenkov (Joint Institute for Nuclear Research (JINR))
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    Different monitoring systems are now extensively used to keep an eye on real time state of each service of distributed grid infrastructures and jobs running on the Grid. Tracking current services’ state as well as the history of state changes allows rapid error fixing, planning future massive productions, revealing regularities of Grid operation and many other things. Along with...
  53. Marco Mambelli (UNIVERSITY OF CHICAGO)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The ATLAS experiment is projected to collect over one billion events/year during the first few years of operation. The efficient selection of events for various physics analyses across all appropriate samples presents a significant technical challenge. ATLAS computing infrastructure...
  54. Dr Pavel Nevski (BNL)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    In addition to challenges in computing and data handling, ATLAS and other LHC experiments place a great burden on users to configure and manage the large number of parameters and options needed to carry out distributed computing tasks. Management of distributed physics data is being made more transparent by dedicated ATLAS grid computing technologies, such as PanDA (a pilot-based job...
  55. Prof. Marco Cattaneo (CERN)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    LHCb had been planning to commission its High Level Trigger software and Data Quality monitoring procedures using real collisions data from the LHC pilot run. Following the LHC incident on 19th September 2008, it was decided to commission the system using simulated data. This “Full Experiment System Test” consists of: - Injection of simulated minimum bias events into the full HLT farm,...
  56. Luciano Piccoli (Fermilab)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    Large computing clusters used for scientific processing suffer from systemic failures when operated over long continuous periods for executing workflows. Diagnosing job problems and faults leading to eventual failures in this complex environment is difficult, specifically when the success of whole workflow might be affected by a single job failure. In this paper, we introduce a model-based,...
  57. Alexey Zhelezov (Physikalisches Institut, Universitaet Heidelberg)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    LHC experiments are producing very large volumes of data, either accumulated from the detectors or generated via Monte-Carlo modeling. The data should be processed as quickly as possible to provide users with the input for their analysis. Processing multiple hundreds of terabytes of data necessitates the generation, submission and tracking of a huge number of grid jobs running all over the...
  58. Noriza Satam (Department of Mathematics, Faculty of Science, Universiti Teknologi Malaysia), Norma Alias (Institute of Ibnu Sina, Universiti Teknologi Malaysia)
    23/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    New Iterative Alternating Group Explicit (NAGE) is a powerful parallel numerical algorithm for multidimensional temperature prediction. The discretization is based on the finite difference method for a parabolic partial differential equation (PDE). The 3-dimensional temperature visualization is critical since it involves large-scale computational complexity. The three fundamental...
    Go to contribution page
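    As an illustration of the discretization class this contribution builds on, the sketch below performs a single explicit (FTCS) finite-difference step for the 1-D parabolic heat equation; the NAGE scheme itself is a more elaborate group-explicit iterative method, so this is only a minimal stand-in, not the algorithm of the paper.

```python
# Illustrative only: one explicit (FTCS) finite-difference step for the 1-D
# parabolic heat equation u_t = alpha * u_xx. NAGE is an iterative
# group-explicit variant; this just shows the underlying discretization idea.

def ftcs_step(u, r):
    """Advance interior points one time step; r = alpha*dt/dx**2 (stable for r <= 0.5)."""
    new = u[:]  # boundary values are kept fixed
    for i in range(1, len(u) - 1):
        new[i] = u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
    return new

u1 = ftcs_step([0.0, 1.0, 0.0], 0.25)  # -> [0.0, 0.5, 0.0]: the spike diffuses
```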
  59. Mr Andrew Baranovski (FNAL)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    In a shared computing environment, activities orchestrated by workflow management systems often need to span organizational and ownership domains. In such a setting, common tasks, such as the collection and display of metrics and debugging information, are challenged by the informational entropy inherent to independently maintained and owned software sub-components. Because such information...
    Go to contribution page
  60. Benjamin Gaidioz (CERN)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The ATLAS production system is one of the most critical components in the experiment's distributed system, and this becomes even more true now that real data has entered the scene. Monitoring such a system is a non-trivial task, all the more so when two of its main characteristics are the flexibility in the submission of job processing units and the heterogeneity of the resources it uses. In...
    Go to contribution page
  61. Dr Xavier Espinal (PIC/IFAE)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The ATLAS distributed computing activities involve about 200 computing centers distributed world-wide and need people on shift covering 24 hours per day. Data distribution, data reprocessing, user analysis and Monte Carlo event simulation run continuously. Reliable performance of the whole ATLAS computing community is of crucial importance to meet the ambitious physics goals of the ATLAS...
    Go to contribution page
  62. Alexander Undrus (BROOKHAVEN NATIONAL LABORATORY, USA)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The system of automated multi-platform software nightly builds is a major component in ATLAS collaborative software organization and code approval scheme. Code developers from more than 30 countries use about 25 branches of nightly releases for testing new packages, validation of patches to existing software, and migration to new platforms and compilers. The successful nightly releases...
    Go to contribution page
  63. Dr Philippe Calfayan (Ludwig-Maximilians-University Munich)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The PROOF (Parallel ROOT Facility) library is designed to perform parallelized ROOT-based analyses with a heterogeneous cluster of computers. The installation, configuration and monitoring of PROOF have been carried out using the Grid-Computing environments dedicated to the ATLAS experiment. A PROOF cluster hosted at the Leibniz Rechenzentrum (LRZ) and consisting of a scalable amount of...
    Go to contribution page
  64. Dr Alfio Lazzaro (Universita and INFN, Milano / CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    MINUIT is the most common package used in high energy physics for numerical minimization of multi-dimensional functions. The major algorithm of this package, MIGRAD, searches for the minimum by using the gradient function. For each minimization iteration, MIGRAD requires the calculation of the first derivatives for each parameter of the function to be minimized. Minimization is required for...
    Go to contribution page
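    The MIGRAD iteration described above needs the first derivative with respect to each parameter at every step; when no analytic gradient is supplied, a minimizer must estimate it numerically. The central-difference sketch below illustrates that cost (one pair of function calls per parameter per iteration); the helper name and step size are illustrative, not MINUIT's actual internals.

```python
# Sketch of a central-difference gradient, the kind of per-parameter first
# derivative a minimizer such as MIGRAD needs at each iteration.
# (Illustrative helper, not MINUIT's internal derivative-stepping logic.)

def numerical_gradient(f, params, h=1e-6):
    """Estimate df/dp_i for each parameter via central differences."""
    grad = []
    for i in range(len(params)):
        up = list(params); up[i] += h
        down = list(params); down[i] -= h
        grad.append((f(up) - f(down)) / (2 * h))
    return grad

# Example: f(x, y) = x**2 + 3*y has gradient (2x, 3).
g = numerical_gradient(lambda p: p[0] ** 2 + 3 * p[1], [2.0, 5.0])  # g ≈ [4.0, 3.0]
```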
  65. Dr Niklaus Berger (Institute for High Energy Physics, Beijing)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Partial wave analysis is an important tool for determining resonance properties in hadron spectroscopy. For large data samples however, the un-binned likelihood fits employed are computationally very expensive. At the Beijing Spectrometer (BES) III experiment, an increase in statistics compared to earlier experiments of up to two orders of magnitude is expected. In order to allow for a timely...
    Go to contribution page
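    The cost the abstract alludes to comes from the structure of an un-binned likelihood fit: the negative log-likelihood sums one pdf evaluation per event, so every minimizer step touches the full data sample. The toy below uses a Gaussian pdf purely as a stand-in for the far costlier partial-wave amplitude model.

```python
# Toy illustration of un-binned likelihood fitting: the NLL is a sum over all
# events, so its cost grows linearly with sample size at every fit iteration.
# A Gaussian pdf stands in for the real partial-wave amplitude model.
import math

def nll(mu, sigma, data):
    """Un-binned negative log-likelihood of a Gaussian over all events."""
    norm = math.log(sigma * math.sqrt(2 * math.pi))
    return sum(norm + 0.5 * ((x - mu) / sigma) ** 2 for x in data)

data = [-1.0, 0.0, 1.0]
# The NLL is smaller at the sample mean (mu = 0) than away from it.
```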
  66. Alexandre Vaniachine (Argonne National Laboratory), David Malon (Argonne National Laboratory), Jack Cranshaw (Argonne National Laboratory), Jérôme Lauret (Brookhaven National Laboratory), Paul Hamill (Tech-X Corporation), Valeri Fine (Brookhaven National Laboratory)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    High Energy and Nuclear Physics (HENP) experiments store petabytes of event data and terabytes of calibrations data in ROOT files. The Petaminer project develops a custom MySQL storage engine to enable the MySQL query processor to directly access experimental data stored in ROOT files. Our project addresses the problem of efficient navigation to petabytes of HENP experimental data...
    Go to contribution page
  67. Mr Igor Sfiligoi (Fermilab)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In...
    Go to contribution page
  68. Marco Clemencic (European Organization for Nuclear Research (CERN))
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The LHCb software, from simulation to user analysis, is based on the framework Gaudi. The extreme flexibility that the framework provides, through its component model and its system of plug-ins, allows us to define a specific application by its behavior more than by its code. The application is then described by some configuration files read by the bootstrap executable (shared by all...
    Go to contribution page
  69. Ms Elena Oliver (Instituto de Fisica Corpuscular (IFIC) - Universidad de Valencia)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The ATLAS data taking is due to start in Spring 2009. In this contribution, given that expectation, a rigorous evaluation of the readiness parameters of the Spanish ATLAS Distributed Tier-2 is given. Special attention will be paid to the readiness to perform Physics Analysis from different points of view: Network Efficiency, Data Discovery, Data Management, Production of...
    Go to contribution page
  70. Mr Olivier Couet (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The ROOT framework provides many visualization techniques. Lately, several new ones have been implemented. This poster will present all the visualization techniques ROOT provides, highlighting how best to use each of them.
    Go to contribution page
  71. Prof. Gordon Watts (UNIVERSITY OF WASHINGTON)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    ROOT.NET provides an interface between Microsoft’s Common Language Runtime (CLR) and .NET technology and the ubiquitous particle physics analysis tool, ROOT. This tool automatically generates a series of efficient wrappers around the ROOT API. Unlike pyROOT, these wrappers are statically typed and so are highly efficient as compared to the Python wrappers. The connection to .NET means that one...
    Go to contribution page
  72. Mr Romain Wartel (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Different computing grids may provide services to the same user community, and in addition, a grid resource provider may share its resources across different unrelated user communities. Security incidents are therefore increasingly prone to propagate from one resource center to another, either via the user community or via cooperating grid infrastructures. As a result, related and...
    Go to contribution page
  73. Mr Jan KAPITAN (Nuclear Physics Inst., Academy of Sciences, Praha)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    High Energy Nuclear Physics (HENP) collaborations’ experience shows that the computing resources available from a single site are often neither sufficient nor able to satisfy the needs of remote collaborators eager to carry out their analysis in the fastest and most convenient way. From latencies in network connectivity to the lack of interactivity, having a fully functional software stack on local resources is...
    Go to contribution page
  74. Ms Jaroslava Schovancova (Institute of Physics, Prague), Dr Jiri Chudoba (Institute of Physics, Prague)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The Pierre Auger Observatory studies ultra-high energy cosmic rays. Interactions of these particles with the nuclei of air gases at energies many orders of magnitude above the current accelerator capabilities induce unprecedented extensive air showers in the atmosphere. Different interaction models are used to describe the first interactions in such showers and their predictions are...
    Go to contribution page
  75. Dr Simon Metson (H.H. Wills Physics Laboratory)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    In a collaboration the size of CMS (approx. 3000 users, and almost 100 computing centres of varying size), communication and accurate information about the sites it has access to are vital in co-ordinating the multitude of computing tasks required for smooth running. SiteDB is a tool developed by CMS to track sites available to the collaboration, the allocation to CMS of resources available at...
    Go to contribution page
  76. Dr Ricardo Graciani Diaz (Universidad de Barcelona)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The usage of CPU resources by LHCb on the Grid is dominated by two different applications: Gauss and Brunel. Gauss is the application performing the Monte Carlo simulation of proton-proton collisions. Brunel is the application responsible for the reconstruction of the signals recorded by the detector, converting them into objects that can be used for later physics analysis of the data (tracks,...
    Go to contribution page
  77. Dr Dagmar Adamova (Nuclear Physics Institute AS CR)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The Czech Republic (CR) has been participating in the LHC Computing Grid project (LCG) since 2003, and gradually a middle-sized Tier-2 centre has been built in Prague, delivering computing services for national HEP experiment groups including the ALICE project at the LHC. We present a brief overview of the computing activities and services being performed in the CR for the ALICE...
    Go to contribution page
  78. Pier Paolo Ricci (INFN CNAF)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    In the framework of WLCG, the Tier-1 computing centres have very stringent requirements in the sector of data storage, in terms of size, performance and reliability. For some years at the INFN-CNAF Tier-1 we have been using two distinct storage systems: Castor as the tape-based storage solution (also known as the D0T1 storage class in WLCG language) and the General Parallel File...
    Go to contribution page
  79. Mr Matti Kortelainen (Helsinki Institute of Physics)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    We study the performance of different ways of running a physics analysis in preparation for the analysis of petabytes of data in the LHC era. Our test cases include running the analysis code in a Linux cluster with a single thread in ROOT, with the Parallel ROOT Facility (PROOF), and in parallel via the Grid interface with the ARC middleware. We use on the order of millions of Pythia8...
    Go to contribution page
  80. Dr Monica Verducci (INFN Roma)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The ATLAS Muon Spectrometer is the outer part of the ATLAS detector at the LHC. It has been designed to detect charged particles exiting the barrel and end-cap calorimeters and to measure their momentum in the pseudorapidity range |η| < 2.7. The challenging momentum-measurement performance requires accurate monitoring of detector and calibration parameters and a highly complex architecture to...
    Go to contribution page
  81. Pedro Salgado (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The ATLAS Distributed Data Management system, Don Quijote2 (DQ2), has been in use since 2004. Its goal is to manage tens of petabytes of data per year, distributed among the WLCG. One of the most critical components of DQ2 is the central catalogues, which comprise a set of web services with a database back-end and a distributed memory-object caching system. This component has proven to...
    Go to contribution page
  82. Dr Vincent Garonne (CERN)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The DQ2 Distributed Data Management system is the system developed and used by ATLAS for handling very large datasets. It encompasses data bookkeeping, management of large-scale production transfers as well as end-user data access requests. In this paper, we describe the design and implementation of the DQ2 accounting service. It collects different data usage information in order to show...
    Go to contribution page
  83. Dr Solveig Albrand (LPSC)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    AMI is the main interface for searching for ATLAS datasets using physics metadata criteria. AMI has been implemented as a generic database management framework which allows parallel searching over many catalogues, which may have differing schema, and may be distributed geographically, using different RDBMS. The main features of the web interface will be described; in particular the powerful...
    Go to contribution page
  84. Florbela Viegas (CERN)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary...
    Go to contribution page
  85. Dr Daniele Bonacorsi (Universita & INFN, Bologna)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The CMS Facilities and Infrastructure Operations group is responsible for providing and maintaining a working distributed computing fabric with a consistent working environment for Data operations and the physics user community. Its mandate is to maintain the core CMS computing services; ensure the coherent deployment of Grid or site specific components (such as workload management, file...
    Go to contribution page
  86. Dr Lee Lueking (FERMILAB)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The CMS experiment has implemented a flexible and powerful approach enabling users to find data within the CMS physics data catalog. The Dataset Bookkeeping Service (DBS) comprises a database and the services used to store and access metadata related to its physics data. In addition to the existing WEB based and programmatic API, a generalized query system has been designed and built. This...
    Go to contribution page
  87. Dr Andrea Sartirana (INFN-CNAF)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The CMS experiment is preparing for data taking in many computing activities, including the testing, deployment and operation of various storage solutions to support the computing workflows of the experiment. Some Tier-1 and Tier-2 centers supporting the collaboration are deploying and commissioning StoRM storage systems: that is, POSIX-based disk storage systems on top of which StoRM...
    Go to contribution page
  88. Zoltan Mathe (UCD Dublin)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The LHCb Bookkeeping is a system for the storage and retrieval of metadata associated with LHCb datasets, e.g. whether a dataset is real or simulated data, which running period it is associated with, how it was processed, and all the other relevant characteristics of the files. The metadata are stored in an Oracle database which is interrogated using services provided by the LHCb DIRAC3...
    Go to contribution page
  89. Hubert Degaudenzi (European Organization for Nuclear Research (CERN))
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The installation of the LHCb software is handled by a single python script: install_project.py. This bootstrap script is unique in allowing the installation of software projects on various operating systems (Linux, Windows, MacOSX). It is designed for LHCb software deployment for a single user or for multiple users, in a shared area or on the Grid. It retrieves the software packages and...
    Go to contribution page
  90. Bertrand Bellenot (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Description of the new implementation of the ROOT browser
    Go to contribution page
  91. Dr Hubert Degaudenzi (CERN), Karol Kruzelecki (Cracow University of Technology-Unknown-Unknown)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The core software stack, both from the LCG Application Area and LHCb, consists of more than 25 C++/Fortran/Python projects built for about 20 different configurations on Linux, Windows and MacOSX. To these projects one can also add about 20 external software packages (Boost, Python, Qt, CLHEP, ...) which also have to be built for the same configurations. In order to reduce the time of...
    Go to contribution page
  92. Ilektra Christidi (Physics Department - Aristotle Univ. of Thessaloniki)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The ATLAS detector has been designed to exploit the full discovery potential of the LHC proton-proton collider at CERN, at the c.m. energy of 14 TeV. Its Muon Spectrometer (MS) has been optimized to measure final state muons from those interactions with good momentum resolution (3-10% for momenta of 100 GeV/c - 1 TeV/c). In order to ensure that the hardware, DAQ and reconstruction software of...
    Go to contribution page
  93. Dr Mine Altunay (FERMILAB)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Open Science Grid stakeholders invariably depend on multiple infrastructures to build their community-based distributed systems. To meet this need, OSG has built new gateways with TeraGrid, Campus Grids, and Regional Grids (NYSGrid, BrazilGrid). This has brought new security challenges for the OSG architecture and operations. The impact of security incidents now has a larger scope and...
    Go to contribution page
  94. Bertrand Bellenot (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Description of the ROOT event recorder, a GUI testing and validation tool.
    Go to contribution page
  95. Anar Manafov (GSI)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Memory monitoring is a very important part of complex project development. Open Source tools, such as valgrind, are available for the task; however, their performance penalties make them unsuitable for debugging long, CPU-intensive programs, such as reconstruction or simulation. We have developed the TMemStat tool, which, while not providing the full functionality of valgrind, gives...
    Go to contribution page
  96. David Chamont (Laboratoire Leprince-Ringuet (LLR)-Ecole Polytechnique-Unknown)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Like many experiments, FERMI stores its data within ROOT trees. A very common activity of physicists is the tuning of selection criteria which define the events of interest, thus cutting and pruning the ROOT trees to extract all the data linked to those specific events. It is rather straightforward to write a ROOT script to skim a single kind of data, for example the...
    Go to contribution page
  97. Dr Richard Wilkinson (California Institute of Technology)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    In 2008, the CMS experiment made the transition from a custom-parsed language for job configuration to using Python. The current CMS software release has over 180,000 lines of Python configuration code. We describe the new configuration system, the motivation for the change, the transition itself, and our experiences with the new configuration language.
    Go to contribution page
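    To illustrate the configuration-as-Python pattern this contribution describes, the sketch below shows job configurations as ordinary Python objects that can be composed, parameterized and inspected before a job runs. All class and method names here are hypothetical stand-ins, not the actual CMS FWCore API.

```python
# Hypothetical sketch of configuration-as-Python (names are illustrative, not
# the CMS FWCore API): modules and processes are plain Python objects, so a
# configuration can be built, composed and validated programmatically.

class Module:
    def __init__(self, name, **params):
        self.name, self.params = name, dict(params)

class Process:
    def __init__(self, name):
        self.name, self.path = name, []
    def add(self, module):
        self.path.append(module)
        return self  # allow chaining
    def dump(self):
        """Inspect the configured path as plain data."""
        return [(m.name, m.params) for m in self.path]

process = Process("RECO")
process.add(Module("tracker", threshold=0.5)).add(Module("muons", chambers=4))
```

    Because the configuration is code, common fragments can be factored into functions and reused across jobs, which is one of the usual motivations for such a change.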
  98. Dr Oliver Gutsche (FERMILAB)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The CMS software stack currently consists of more than 2 Million lines of code developed by over 250 authors with a new version being released every week. CMS has setup a release validation process for quality assurance which enables the developers to compare to previous releases and references. This process provides the developers with reconstructed datasets of real data and MC samples....
    Go to contribution page
  99. Tatsiana Klimkovich (RWTH Aachen University)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    VISPA is a novel development environment for high energy physics analyses which enables physicists to combine graphical and textual work. A physics analysis cycle consists of prototyping, performing, and verifying the analysis. The main feature of VISPA is a multipurpose window for visual steering of analysis steps, creation of analysis templates, and browsing physics event data at different...
    Go to contribution page
  100. Prof. Rodriguez Jorge Luis (Florida Int'l University)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The CMS experiment will generate tens of petabytes of data per year, data that will be processed, moved and stored in large computing facilities at locations all over the globe. Each of these facilities deploys complex and sophisticated hardware and software components which require dedicated expertise lacking at many of the universities and institutions wanting access to the data as soon as it...
    Go to contribution page
  101. Jiri Drahos (chair of the Academy of Sciences of the Czech Republic), Vaclav Hampl (rector of the Charles University in Prague), Vaclav Havlicek (rector of the Czech Technical University in Prague)
    23/03/2009, 09:00
    Plenary
  102. Prof. Sergio Bertolucci (CERN)
    23/03/2009, 09:30
    Plenary
    oral
    The LHC Machine and Experiments: Status and Prospects
    Go to contribution page
  103. Dr Neil Geddes (RAL)
    23/03/2009, 10:00
    Plenary
    oral
    A personal review of WLCG and the readiness for first real LHC data, highlighting some particular successes, concerns and challenges that lie ahead.
    Go to contribution page
  104. Dr Niko Neufeld (CERN)
    23/03/2009, 11:30
    Plenary
    oral
    Data Acquisition systems are an integral part of their respective experiments. They are designed to meet the needs set by the physics programme. Despite some very interesting differences in architecture, the unprecedented data rates at the LHC have led to a lot of commonalities among the four large LHC data acquisition systems. All of them rely on commercial local area network technology and...
    Go to contribution page
  105. Prof. Kors Bos (NIKHEF)
    23/03/2009, 12:00
    Plenary
    oral
    Status and Prospects of The LHC Experiments Computing
    Go to contribution page
  106. Les Robertson (CERN)
    23/03/2009, 12:30
    Plenary
    oral
    For various reasons the computing facility for LHC data analysis has been organised as a widely distributed computational grid. Will this be able to meet the requirements of the experiments as LHC energy and luminosity ramp up? Will grid operation become a basic component of science infrastructure? Will virtualisation and the cloud model eliminate the need for complex grid middleware? Will...
    Go to contribution page
  107. Dr Lucas Taylor (Northeastern U., Boston)
    23/03/2009, 14:00
    Collaborative Tools
    oral
    The CMS Experiment at the LHC is establishing a global network of inter-connected "CMS Centres" for controls, operations and monitoring. These support: (1) CMS data quality monitoring, detector calibrations, and analysis; and (2) computing operations for the processing, storage and distribution of CMS data. We describe the infrastructure, computing, software, and communications systems...
    Go to contribution page
  108. Dr Johannes Gutleber (CERN)
    23/03/2009, 14:00
    Online Computing
    oral
    The CMS data acquisition system is made of two major subsystems: event building and event filter. The presented paper describes the architecture and design of the software that processes the data flow in the currently operating experiment. The central DAQ system relies heavily on industry standard networks and processing equipment. Adopting a single software infrastructure in all...
    Go to contribution page
  109. Dr Peter Elmer (PRINCETON UNIVERSITY)
    23/03/2009, 14:00
    Event Processing
    oral
    Performance of an experiment's simulation, reconstruction and analysis software is of critical importance to physics competitiveness and making optimum use of the available budget. In the last 18 months the performance improvement program in the CMS experiment has produced more than a ten-fold gain in reconstruction performance alone, a significant reduction in mass storage system...
    Go to contribution page
  110. Dr Zhen Xie (Princeton University)
    23/03/2009, 14:00
    Software Components, Tools and Databases
    oral
    Non-event data describing detector conditions change with time and come from different data sources. They are accessible by physicists within the offline event-processing applications for precise calibration of reconstructed data as well as for data-quality control purposes. Over the past three years CMS has developed and deployed a software system managing such data. Object-relational...
    Go to contribution page
  111. Dr Jeremy Coles (University of Cambridge - GridPP)
    23/03/2009, 14:00
    Grid Middleware and Networking Technologies
    oral
    During 2008 we have seen several notable changes in the way the LHC experiments have tried to tackle outstanding gaps in the implementation of their computing models. The development of space tokens and changes in job submission and data movement tools are key examples. The first section of this paper will review these changes and the technical/configuration impacts they have had at the site...
    Go to contribution page
  112. Dr Jakub Moscicki (CERN IT/GS), Dr Patricia Mendez Lorenzo (CERN IT/GS)
    23/03/2009, 14:00
    Distributed Processing and Analysis
    oral
    Recently a growing number of various applications have been quickly and successfully enabled on the Grid by the CERN Grid application support team. This allowed the applications to achieve and publish large-scale results in short time which otherwise would not be possible. The examples of successful Grid applications include the medical and particle physics simulation (Geant4, Garfield),...
    Go to contribution page
  113. Alexandre Vaniachine (Argonne), Rodney Walker (LMU Munich)
    23/03/2009, 14:20
    Software Components, Tools and Databases
    oral
    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic workflow, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed precise determination of required distributed database resources. In distributed...
    Go to contribution page
  114. Valentin Kuznetsov (Cornell University)
    23/03/2009, 14:20
    Distributed Processing and Analysis
    oral
    The CMS experiment has a distributed computing model, supporting thousands of physicists at hundreds of sites around the world. While this is a suitable solution for "day to day" work in the LHC era, there are edge use-cases that Grid solutions do not satisfy. Occasionally it is desirable to have direct access to a file on a user's desktop or laptop; for code development, debugging or examining...
    Go to contribution page
  115. Mr Adrian Casajus Ramo (Departament d' Estructura i Constituents de la Materia)
    23/03/2009, 14:20
    Collaborative Tools
    oral
    Traditionally, interaction between users and the Grid is done with command-line tools. However, these tools are difficult for a non-expert user to use, providing minimal help and generating output that is not always easy to understand, especially in case of errors. Graphical User Interfaces are typically limited to providing access to the monitoring or accounting information and concentrate on some...
    Go to contribution page
  116. Mr Giulio Eulisse (NORTHEASTERN UNIVERSITY OF BOSTON (MA) U.S.A.)
    23/03/2009, 14:20
    Event Processing
    oral
    In 2007 the CMS experiment first reported some initial findings on the impedance mismatch between HEP use of C++ and the current generation of compilers and CPUs. Since then we have continued our analysis of the CMS experiment code base, including the external packages we use. We have found that large amounts of C++ code have been written largely ignoring any physical reality of the...
    Go to contribution page
  117. Tobias Koenig (Karlsruhe Institute of Technology (KIT))
    23/03/2009, 14:20
    Grid Middleware and Networking Technologies
    oral
    Offering sustainable Grid services to users and other computing centres is the main aim of GridKa, the German Tier-1 centre of the WLCG infrastructure. The availability and reliability of IT services directly influence the customers’ satisfaction as well as the reputation of the service provider, not to forget the economic aspects. It is thus important to concentrate on processes and...
    Go to contribution page
  118. Werner Wiedenmann (University of Wisconsin)
    23/03/2009, 14:20
    Online Computing
    oral
    Event selection in the ATLAS High Level Trigger is accomplished to a large extent by reusing software components and event selection algorithms developed and tested in an offline environment. Many of these offline software modules are not specifically designed to run in a heavily multi-threaded online data-flow environment. The ATLAS High Level Trigger (HLT) framework based on the GAUDI and...
    Go to contribution page
  119. Mr SooHyung Lee (Korea Univ.)
    23/03/2009, 14:40
    Online Computing
    oral
    The real-time data analysis at next-generation experiments is a challenge because of their enormous data rate and size. The SuperKEKB experiment, the upgraded Belle experiment, will need to process 100 times more data than the current experiment, taken at 10 kHz. Offline-level data analysis in the HLT farm is necessary for efficient data reduction. The real-time processing of huge data is also...
    Go to contribution page
  120. Ms Maite Barroso (CERN), Nicholas Thackray (CERN)
    23/03/2009, 14:40
    Grid Middleware and Networking Technologies
    oral
    A review of the evolution of WLCG/EGEE grid operations (authors: Maria Barroso, Diana Bosio, David Collados, Maria Dimou, Antonio Retico, John Shade, Nick Thackray, Steve Traylen, Romain Wartel). As the EGEE grid infrastructure continues to grow in size, complexity and usage, the task of ensuring the continued, uninterrupted availability of the grid services to the ever increasing number...
    Go to contribution page
  121. Dr Daniel van der Ster (CERN)
    23/03/2009, 14:40
    Distributed Processing and Analysis
    oral
    Ganga has been widely used for several years in ATLAS, LHCb and a handful of other communities in the context of the EGEE project. Ganga provides a simple yet powerful interface for submitting and managing jobs on a variety of computing backends. The tool helps users configure applications and keep track of their work. With the major release of version 5 in summer 2008, Ganga's main...
    Go to contribution page
  122. Andrea Valassi (CERN)
    23/03/2009, 14:40
    Software Components, Tools and Databases
    oral
    The LCG Persistency Framework consists of three software packages (POOL, CORAL and COOL) that address the data access requirements of the LHC experiments in several different areas. The project is the result of the collaboration between the CERN IT Department and the three experiments (ATLAS, CMS and LHCb) that are using some or all of the Persistency Framework components to access their data....
    Go to contribution page
  123. Mr Jeremy Herr (U. of Michigan)
    23/03/2009, 14:40
    Collaborative Tools
    oral
    The ATLAS Collaboratory Project at the University of Michigan has been a leader in the area of collaborative tools since 1999. Its activities include the development of standards, software and hardware tools for lecture archiving, and making recommendations for videoconferencing and remote teaching facilities. Starting in 2006 our group became involved in classroom recordings, and in early...
    Go to contribution page
  124. Zachary Marshall (Caltech, USA & Columbia University, USA)
    23/03/2009, 14:40
    Event Processing
    oral
    The ATLAS Simulation validation project proceeds in two distinct phases: the first is computing validation; the second is physics performance, which must be tested and compared to available data. The infrastructure needed at each stage of validation is described here. In ATLAS, software development is controlled by nightly builds to check stability and performance. The complete...
    Go to contribution page
  125. Laura Perini (INFN Milano), Tiziana Ferrari (INFN CNAF)
    23/03/2009, 15:00
    Grid Middleware and Networking Technologies
    oral
    International research collaborations increasingly require secure sharing of resources owned by the partner organizations and distributed among different administration domains. Examples of resources include data, computing facilities (commodity computer clusters, HPC systems, etc.), storage space, metadata from remote archives, scientific instruments, sensors, etc. Sharing is made possible...
    Go to contribution page
  126. Dr Douglas Smith (STANFORD LINEAR ACCELERATOR CENTER)
    23/03/2009, 15:00
    Distributed Processing and Analysis
    oral
    The Babar experiment produced one of the largest datasets in high energy physics. To provide for many different concurrent analyses, the data are skimmed into many data streams before analysis can begin, multiplying the size of the dataset both in bytes and in number of files. As a large-scale problem of job management and data control, the Babar Task Manager system was...
    Go to contribution page
  127. Dr Maria Girone (CERN)
    23/03/2009, 15:00
    Software Components, Tools and Databases
    oral
    Originally deployed at CERN for the construction of LEP, relational databases now play a key role in the experiments' production chains, from online acquisition through to offline production, data distribution, reprocessing and analysis. They are also a fundamental building block for the Tier0 and Tier1 data management services. We summarize the key requirements in terms of availability,...
    Go to contribution page
  128. Dr Clara Gaspar (CERN)
    23/03/2009, 15:00
    Online Computing
    oral
    LHCb has designed and implemented an integrated Experiment Control System. The Control System uses the same concepts and the same tools to control and monitor all parts of the experiment: the Data Acquisition System, the Timing and the Trigger Systems, the High Level Trigger Farm, the Detector Control System, the Experiment's Infrastructure and the interaction with the CERN Technical Services...
    Go to contribution page
  129. Dr Thomas Kittelmann (University of Pittsburgh)
    23/03/2009, 15:00
    Event Processing
    oral
    We present an event display for the ATLAS Experiment, called Virtual Point 1 (VP1), designed initially for deployment at point 1 of the LHC, the location of the ATLAS detector. The Qt/OpenGL based application provides truthful and interactive 3D representations of both event and non-event data, and now serves a general-purpose role within the experiment. Thus, VP1 is used both online (in...
    Go to contribution page
  130. Dr Dimitri BOURILKOV (University of Floria)
    23/03/2009, 15:00
    Collaborative Tools
    oral
    A key feature of collaboration in large-scale scientific projects is keeping a log of what is being done and how - for private use and reuse, and for sharing selected parts with collaborators and peers, often distributed geographically on an increasingly global scale. Even better if this log is automatic, created on the fly while a scientist or software developer is working in a habitual...
    Go to contribution page
  131. Mrs Ruth Pordes (FERMILAB)
    23/03/2009, 15:20
    Grid Middleware and Networking Technologies
    oral
    The Open Science Grid's usage has ramped up by more than 25% in the past twelve months, due both to increased throughput by the core stakeholders – US LHC, LIGO and Run II – and to increased usage by non-physics communities. We present and analyze this ramp-up, together with the issues encountered and the implications for the future. It is important to understand the value of collaborative...
    Go to contribution page
  132. Dr Andrea Valassi (CERN)
    23/03/2009, 15:20
    Software Components, Tools and Databases
    oral
    The CORAL package is the CERN LCG Persistency Framework common relational database abstraction layer for accessing the data of the LHC experiments that is stored using relational database technologies. A traditional two-tier client-server model is presently used by most CORAL applications accessing relational database servers such as Oracle, MySQL, SQLite. A different model, involving a...
    Go to contribution page
  133. Philippe Galvez (California Institute of Technology (CALTECH))
    23/03/2009, 15:20
    Collaborative Tools
    oral
    The EVO (Enabling Virtual Organizations) system is based on a new, distributed architecture, leveraging more than ten years of unique experience in developing and operating large distributed production collaboration systems. Its primary objective is to provide the High Energy and Nuclear Physics experiments with a system/service that meets their unique requirements of usability,...
    Go to contribution page
  134. Kovalskyi Dmytro (University of California, Santa Barbara)
    23/03/2009, 15:20
    Event Processing
    oral
    Fireworks is a CMS event display specialized for physics studies. This specialization allows the use of a stylized rather than 3D-accurate representation where appropriate. Data handling is greatly simplified by using only reconstructed information and an ideal geometry. Fireworks provides an easy-to-use interface which allows a physicist to concentrate only on the data to...
    Go to contribution page
  135. Dr Fabrizio Furano (Conseil Europeen Recherche Nucl. (CERN))
    23/03/2009, 15:20
    Distributed Processing and Analysis
    oral
    The Scalla/Xrootd software suite is a set of tools and suggested methods useful to build scalable, fault tolerant and high performance storage systems for POSIX-like data access. One of the most important recent development efforts is to implement technologies able to deal with the characteristics of Wide Area Networks, and find solutions in order to allow data analysis applications to...
    Go to contribution page
  136. Ms Chiara Zampolli (CERN)
    23/03/2009, 15:20
    Online Computing
    oral
    The ALICE experiment is the dedicated heavy-ion experiment at the CERN LHC and will take data with a bandwidth of up to 1.25 GB/s. It consists of 18 subdetectors that interact with five online systems (DAQ, DCS, ECS, HLT and Trigger). Data recorded are read out by DAQ in a raw data stream produced by the subdetectors. In addition the subdetectors produce conditions data derived from the raw...
    Go to contribution page
  137. Dr David Malon (Argonne National Laboratory), Dr Elizabeth Gallas (University of Oxford)
    23/03/2009, 15:40
    Software Components, Tools and Databases
    oral
    Metadata--data about data--arise in many contexts, from many diverse sources, and at many levels in ATLAS. Familiar examples include run-level, luminosity-block-level, and event-level metadata, and, related to processing and organization, dataset-level and file-level metadata, but these categories are neither exhaustive nor orthogonal. Some metadata are known a priori, in advance of...
    Go to contribution page
  138. Dr Donatella Lucchesi (University and INFN Padova)
    23/03/2009, 15:40
    Grid Middleware and Networking Technologies
    oral
    The CDF II experiment has been taking data at FNAL since 2001. The CDF computing architecture has evolved from initially using dedicated computing farms to using decentralized Grid-based resources on the EGEE grid, Open Science Grid and FNAL Campus grid. In order to deliver high quality physics results in a timely manner to a running experiment, CDF has had to adapt to Grid with minimum...
    Go to contribution page
  139. Dr Erik Gottschalk (Fermi National Accelerator Laboratory (FNAL))
    23/03/2009, 15:40
    Collaborative Tools
    oral
    We describe the use of professional-quality high-definition (HD) videoconferencing systems for daily HEP experiment operations and large-scale media events. For CMS operations at the Large Hadron Collider, we use such systems for permanently running "telepresence" communications between the CMS Control Room in France and major offline CMS Centres at CERN, DESY, and Fermilab, and with a...
    Go to contribution page
  140. Dr Alexei Klimentov (BNL)
    23/03/2009, 15:40
    Distributed Processing and Analysis
    oral
    We present our experience with distributed reprocessing of the LHC beam and cosmic ray data taken with the ATLAS detector during 2008/2009. Raw data were distributed from CERN to ATLAS Tier-1 centers, reprocessed and validated. The reconstructed data were consolidated at CERN and ten WLCG ATLAS Tier-1 centers and made available for physics analysis. The reprocessing was done...
    Go to contribution page
  141. Prof. Gordon Watts (UNIVERSITY OF WASHINGTON)
    23/03/2009, 15:40
    Online Computing
    oral
    The DZERO Level 3 Trigger and data acquisition system has been running successfully since March 2001, taking data for the DZERO experiment located at the Tevatron at the Fermi National Accelerator Laboratory. Based on commodity parts, it reads out 65 VME front-end crates and delivers the resulting 250 MB of data to one of 1200 processing cores for a high-level trigger decision at a rate of 1 kHz. Accepted...
    Go to contribution page
  142. Oliver Gutsche (FERMILAB)
    23/03/2009, 15:40
    Event Processing
    oral
    The CMS software stack currently consists of more than 2 million lines of code developed by over 250 authors with a new version being released every week. CMS has setup a central release validation process for quality assurance which enables the developers to compare the performance to previous releases and references. This process provides the developers with reconstructed datasets of...
    Go to contribution page
  143. Michele Michelotto (INFN + Hepix)
    23/03/2009, 16:30
    Hardware and Computing Fabrics
    oral
    The SPEC INT benchmark has been used as a performance reference for computing in the HEP community for the past 20 years. The SPEC CPU INT 2000 (SI2K) unit of performance has been used by the major HEP experiments both in the Computing Technical Design Report for the LHC experiments and in the evaluation of the Computing Centres. At recent HEPiX meetings several HEP sites have reported...
    Go to contribution page
  144. Sophie Lemaitre (CERN)
    23/03/2009, 16:30
    Distributed Processing and Analysis
    oral
    One of the current problem areas for sustainable WLCG operations is in the area of data management and data transfer. The systems involved (e.g. Castor, dCache, DPM, FTS, gridFTP, OPN network) are rather complex and have multiple layers - failures can and do occur in any layer and due to the diversity of systems involved, the differences in the information they have available and their...
    Go to contribution page
  145. Dr Jack Cranshaw (Argonne National Laboratory), Dr Qizhi Zhang (Argonne National Laboratory)
    23/03/2009, 16:30
    Software Components, Tools and Databases
    oral
    ATLAS has developed and deployed event-level selection services based upon event metadata records ("tags") and supporting file and database technology. These services allow physicists to extract events that satisfy their selection predicates from any stage of data processing and use them as input to later analyses. One component of these services is a web-based Event-Level Selection...
    Go to contribution page
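    The event-level selection idea in the abstract above - extracting events whose metadata records ("tags") satisfy a selection predicate - can be sketched in a few lines of Python. This is purely illustrative: the record fields (`run`, `event`, `n_muons`, `missing_et`) and the predicate are invented for the example and are not the actual ATLAS TAG schema.

    ```python
    # Illustrative sketch of tag-based event selection.
    # Field names below are hypothetical, not the ATLAS TAG schema.

    def select_events(tags, predicate):
        """Return (run, event) identifiers whose tag record satisfies the predicate."""
        return [(t["run"], t["event"]) for t in tags if predicate(t)]

    tags = [
        {"run": 1, "event": 10, "n_muons": 2, "missing_et": 35.0},
        {"run": 1, "event": 11, "n_muons": 0, "missing_et": 80.0},
        {"run": 2, "event": 5,  "n_muons": 1, "missing_et": 60.0},
    ]

    # e.g. events with at least one muon and missing ET above 50 (GeV)
    selected = select_events(
        tags, lambda t: t["n_muons"] >= 1 and t["missing_et"] > 50.0)
    print(selected)  # [(2, 5)]
    ```

    The point of such services is that the predicate runs over compact tag records only; the selected identifiers are then used to fetch full events from any processing stage.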
  146. Benedikt Hegner (CERN)
    23/03/2009, 16:30
    Event Processing
    oral
    The re-engineered CMS EDM was presented at CHEP in 2006. Since that time we have gained a lot of operational experience with the chosen model. We will present some of our findings, and attempt to evaluate how well it is meeting its goals. We will discuss some of the new features that have been added since 2006 as well as some of the problems that have been addressed. Also discussed is the...
    Go to contribution page
  147. Mr Gilles Mathieu (STFC, Didcot, UK)
    23/03/2009, 16:30
    Grid Middleware and Networking Technologies
    oral
    All grid projects have to deal with topology and operational information like resource distribution, contact lists and downtime declarations. Storing, maintaining and publishing this information properly is one of the key elements to successful grid operations. The solution adopted by EGEE and WLCG projects is a central repository that hosts this information and makes it available to users and...
    Go to contribution page
  148. Daniel Sonnick (University of Applied Sciences Kaiserslautern)
    23/03/2009, 16:30
    Online Computing
    oral
    In LHCb, raw data files are created on a high-performance storage system using custom, speed-optimized file-writing software. The file writing is orchestrated by a database, which represents the life-cycle of a file and is the entry point for all file-related operations such as run-start, run-stop, file-migration, file-pinning and ultimately file-deletion. File copying to the...
    Go to contribution page
  149. Dr Jose Hernandez (CIEMAT)
    23/03/2009, 16:50
    Grid Middleware and Networking Technologies
    oral
    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing...
    Go to contribution page
  150. Mr Matteo Marone (Universita degli Studi di Torino - Universita &amp; INFN, Torino)
    23/03/2009, 16:50
    Online Computing
    oral
    The CMS detector at LHC is equipped with a high precision lead tungstate crystal electromagnetic calorimeter (ECAL). The front-end boards and the photodetectors are monitored using a network of DCU (Detector Control Unit) chips located on the detector electronics. The DCU data are accessible through token rings controlled by an XDAQ based software component. Relevant parameters are...
    Go to contribution page
  151. Mr Sverre Jarp (CERN)
    23/03/2009, 16:50
    Hardware and Computing Fabrics
    oral
    In CERN openlab we have been running tests with a server using a low-power ATOM N330 dual-core/dual-thread processor, deploying both HEP offline and online programs. The talk will report on the results, both for single runs and for maximum-throughput runs, and will also report on the results of thermal measurements. It will also show how the price/performance of an ATOM system compares to a...
    Go to contribution page
  152. Dr Christopher Jones (Fermi National Accelerator Laboratory)
    23/03/2009, 16:50
    Event Processing
    oral
    The CMS Offline framework stores provenance information within CMS's standard ROOT event data files. The provenance information is used to track how every data product was constructed including what other data products were read in order to do the construction. We will present how the framework gathers the provenance information, the efforts necessary to minimize the space used to store the...
    Go to contribution page
  153. David Lawrence (Jefferson Lab)
    23/03/2009, 16:50
    Software Components, Tools and Databases
    oral
    Calibrations and conditions databases can be accessed from within the JANA Event Processing framework through the API defined in its JCalibration base class. This system allows constants to be retrieved through a single line of C++ code, with most of the context implied by the run currently being analyzed. The API is designed to support everything from databases to web services to flat files...
    Go to contribution page
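    The "single line of code with the run context implied" pattern described above can be sketched as follows. This is a hypothetical Python mock, not JANA's actual C++ JCalibration API: the class names, the `for_run`/`get` methods and the constant-set name are all invented for illustration.

    ```python
    # Illustrative mock of a calibration API in which the run context is
    # bound once, so user code needs only a single call naming the constants.

    class CalibrationStore:
        def __init__(self, constants):
            # constants: {(run, namepath): values}; a flat dict standing in
            # for a database, web service, or flat-file backend
            self._constants = constants

        def for_run(self, run):
            return RunCalibration(self._constants, run)

    class RunCalibration:
        def __init__(self, constants, run):
            self._constants, self._run = constants, run

        def get(self, namepath):
            # the run being analyzed is implied; the caller names only the set
            return self._constants[(self._run, namepath)]

    store = CalibrationStore({(100, "TOF/pedestals"): [1.2, 1.3, 1.1]})
    calib = store.for_run(100)
    pedestals = calib.get("TOF/pedestals")  # the "single line" in user code
    print(pedestals)  # [1.2, 1.3, 1.1]
    ```

    Because the backend sits behind one abstract interface, swapping a database for a web service or a flat file leaves the user-facing call unchanged.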
  154. Mr Levente HAJDU (BROOKHAVEN NATIONAL LABORATORY)
    23/03/2009, 16:50
    Distributed Processing and Analysis
    oral
    Processing datasets on the order of tens of terabytes is an onerous task faced by production coordinators everywhere. Users solicit data productions and, especially for simulation data, the vast number of parameters (and sometimes incomplete requests) points to the need to track, control and archive all requests made, so that the production team can handle them in a coordinated way. With...
    Go to contribution page
  155. Mr Xin Zhao (Brookhaven National Laboratory,USA)
    23/03/2009, 17:10
    Grid Middleware and Networking Technologies
    oral
    ATLAS Grid production, like many other VO applications, requires the software packages to be installed on remote sites in advance. Therefore, a dynamic and reliable system for installing the ATLAS software releases on Grid sites is crucial to guarantee the timely and smooth start of ATLAS production and reduce its failure rate. In this talk, we discuss the issues encountered in the...
    Go to contribution page
  156. Tony Cass (CERN)
    23/03/2009, 17:10
    Hardware and Computing Fabrics
    oral
    The current level of demand for Green Data Centres has created a growing market for consultants providing advice on how to meet the requirement for high levels of electrical power and, above all, cooling capacity both economically and ecologically. How should one choose, in the face of the many competing claims, the right concept for a cooling system in order to reach the right power level,...
    Go to contribution page
  157. Norbert Neumeister (Purdue University)
    23/03/2009, 17:10
    Distributed Processing and Analysis
    oral
    We present a Web portal for CMS Grid submission and management. Grid portals can deliver complex grid solutions to users without the need to download, install and maintain specialized software, or to worry about setting up site-specific components. The goal is to reduce the complexity of the user grid experience and to bring the full power of the grid to physicists engaged in LHC analysis...
    Go to contribution page
  158. Giovanni Petrucciani (SNS & INFN Pisa, CERN)
    23/03/2009, 17:10
    Event Processing
    oral
    The CMS Physics Analysis Toolkit (PAT) is presented. The PAT is a high-level analysis layer enabling the development of common analysis efforts across and within Physics Analysis Groups. It aims at fulfilling the needs of most CMS analyses, providing both ease-of-use for the beginner and flexibility for the advanced user. The main PAT concepts are described in detail and some examples from...
    Go to contribution page
  159. Mr Barthélémy von Haller (CERN)
    23/03/2009, 17:10
    Online Computing
    oral
    ALICE is one of the four experiments installed at the CERN Large Hadron Collider (LHC), especially designed for the study of heavy-ion collisions. The online Data Quality Monitoring (DQM) is an important part of the data acquisition (DAQ) software. It involves the online gathering, the analysis by user-defined algorithms and the visualization of monitored data. This paper presents the final...
    Go to contribution page
  160. Dr Ilse Koenig (GSI Darmstadt)
    23/03/2009, 17:10
    Software Components, Tools and Databases
    oral
    Since 2002 the HADES experiment at GSI has employed an Oracle database for storing all parameters relevant to simulation and data analysis. The implementation features flexible, multi-dimensional and easy-to-use version management. Direct interfaces to the ROOT-based analysis and simulation framework HYDRA allow for automated initialization based on current or historic data, which is needed...
    Go to contribution page
  161. Mr Simon Liu (TRIUMF)
    23/03/2009, 17:30
    Hardware and Computing Fabrics
    oral
    We describe in this paper the design and implementation of Tapeguy, a high-performance, non-proprietary Hierarchical Storage Management system (HSM) interfaced to dCache for efficient tertiary storage operations. The system has been successfully implemented at the Canadian Tier-1 Centre at TRIUMF. The ATLAS experiment will collect a very large amount of data (approximately 3.5...
    Go to contribution page
  162. Barbara Martelli (INFN)
    23/03/2009, 17:30
    Software Components, Tools and Databases
    oral
    The LCG File Catalog (LFC) is a key component of the LHC Computing Grid (LCG) middleware, as it contains the mapping between all logical and physical file names on the Grid. The ATLAS computing model foresees multiple local LFCs hosted in each Tier-1 and Tier-0, containing all information about files stored in that cloud. As the local LFC contents are presently not replicated, this results in...
    Go to contribution page
  163. Dr Hannes Sakulin (European Organization for Nuclear Research (CERN))
    23/03/2009, 17:30
    Online Computing
    oral
    The CMS Data Acquisition cluster, which runs around 10000 applications, is configured dynamically at run time. XML configuration documents determine what applications are executed on each node and over what networks these applications communicate. Through this mechanism the DAQ System may be adapted to the required performance, partitioned in order to perform (test-) runs in parallel, or...
    Go to contribution page
  164. Dr Graeme Andrew Stewart (University of Glasgow)
    23/03/2009, 17:30
    Grid Middleware and Networking Technologies
    oral
    The ATLAS Production and Distributed Analysis System (PanDA) is a key component of the ATLAS distributed computing infrastructure. All ATLAS production jobs, and a substantial amount of user and group analysis jobs, pass through the PanDA system which manages their execution on the grid. PanDA also plays a key role in production task definition and the dataset replication request system....
    Go to contribution page
  165. Mr Philippe Canal (Fermilab)
    23/03/2009, 17:30
    Event Processing
    oral
    One of the main strengths of ROOT I/O is its inherent support for schema evolution. Two distinct modes are supported: one manual, via a hand-coded Streamer function, and one fully automatic, via the ROOT StreamerInfo. One drawback of Streamer functions is that they are not usable by TTrees in split mode. Until now, the automatic schema evolution mechanism could not be customized by the...
    Go to contribution page
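    The two schema-evolution modes contrasted above (a fully automatic, rule-driven mapping versus a hand-coded Streamer function for conversions the rules cannot express) can be caricatured in a short Python sketch. The record layout and rule format here are invented for the example and are not ROOT's actual mechanism.

    ```python
    # Illustrative contrast of automatic vs. manual schema evolution when
    # reading records written with an old class version ("_v" field).

    OLD_VERSION, NEW_VERSION = 1, 2

    def auto_evolve(record, rules):
        """Automatic mode: apply declarative rename rules to an old record."""
        return {rules.get(k, k): v for k, v in record.items() if k != "_v"}

    def manual_streamer(record):
        """Manual mode: arbitrary hand-coded conversion, e.g. a unit change."""
        out = {k: v for k, v in record.items() if k != "_v"}
        out["energy_gev"] = out.pop("energy_mev") / 1000.0
        return out

    old = {"_v": OLD_VERSION, "px": 1.0, "energy_mev": 2500.0}
    print(auto_evolve(old, {"px": "p_x"}))  # {'p_x': 1.0, 'energy_mev': 2500.0}
    print(manual_streamer(old))             # {'px': 1.0, 'energy_gev': 2.5}
    ```

    The automatic mode needs only declarative rules, so it can be applied member-by-member (the property that makes it compatible with split storage), whereas the hand-coded function can do anything but treats the object as an opaque whole.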
  166. Mr Marco Meoni (CERN)
    23/03/2009, 17:30
    Distributed Processing and Analysis
    oral
    The ALICE experiment at CERN LHC is intensively using a PROOF cluster for fast analysis and reconstruction. The current system (CAF - CERN Analysis Facility) consists of some 120 CPU cores and about 45 TB of local space. One of the most important aspects of the data analysis on the CAF is the speed with which it can be carried out. Fast feedback on the collected data can be obtained, which...
    Go to contribution page
  167. Dr Shaun Roe (CERN)
    23/03/2009, 17:50
    Software Components, Tools and Databases
    oral
    The COOL database in ATLAS is primarily used for storing detector conditions data, but also status flags, which are uploaded summaries of information indicating detector reliability during a run. This paper introduces the use of CherryPy, a Python application server which acts as an intermediate layer between a web interface and the database, providing a simple means of storing to and...
    Go to contribution page
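    The role of the intermediate layer described above - validating uploads from a web interface and mediating reads and writes of per-run status flags against the database - can be sketched with a small stdlib-only Python class. Everything here is hypothetical: the flag values, method names and run number are invented, and a plain dict stands in for the COOL database behind the CherryPy server.

    ```python
    # Illustrative sketch of an intermediate service layer for detector
    # status flags; the flag vocabulary and API are invented for the example.

    ALLOWED_FLAGS = {"GREEN", "YELLOW", "RED", "UNKNOWN"}

    class StatusFlagService:
        def __init__(self):
            self._db = {}  # {(run, detector): flag}; stands in for COOL

        def upload(self, run, detector, flag):
            # validate before anything reaches the database
            if flag not in ALLOWED_FLAGS:
                raise ValueError(f"invalid status flag: {flag}")
            self._db[(run, detector)] = flag

        def query(self, run, detector):
            return self._db.get((run, detector), "UNKNOWN")

    svc = StatusFlagService()
    svc.upload(152166, "SCT", "GREEN")
    print(svc.query(152166, "SCT"))  # GREEN
    print(svc.query(152166, "TRT"))  # UNKNOWN
    ```

    In a real deployment the two methods would be exposed as HTTP endpoints by the application server, keeping validation and database access out of the web interface itself.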
  168. Dr James Letts (Department of Physics-Univ. of California at San Diego (UCSD))
    23/03/2009, 17:50
    Distributed Processing and Analysis
    oral
    During normal data taking CMS expects to support potentially as many as 2000 analysis users. In 2008 more than 800 individuals submitted a remote analysis job to the CMS computing infrastructure. The bulk of these users will be supported at the more than 40 CMS Tier-2 centers. Supporting a globally distributed community of users on a globally distributed set of computing clusters is...
    Go to contribution page
  169. Dr Andrea Sciabà (CERN)
    23/03/2009, 17:50
    Grid Middleware and Networking Technologies
    oral
    The LHC experiments (ALICE, ATLAS, CMS and LHCb) rely for the data acquisition, processing, distribution, analysis and simulation on complex computing systems, run using a variety of services, provided by the experiment services, the WLCG Grid and the different computing centres. These services range from the most basic (network, batch systems, file systems) to the mass storage services or the...
    Go to contribution page
  170. Mr Pavel JAKL (Nuclear Physics Inst., Academy of Sciences, Praha)
    23/03/2009, 17:50
    Hardware and Computing Fabrics
    oral
    Any experiment facing petabyte-scale problems needs a highly scalable mass storage system (MSS) to keep a permanent copy of its valuable data. But beyond the permanent storage aspects, the sheer amount of data makes complete dataset availability on “live storage” (centralized or aggregated space such as that provided by Scalla/Xrootd) cost-prohibitive, implying that a dynamic...
    Go to contribution page
  171. Giovanni Polese (Lappeenranta Univ. of Technology)
    23/03/2009, 17:50
    Online Computing
    oral
    The Resistive Plate Chamber system is composed of 912 double-gap chambers equipped with about 10^4 front-end boards. Correct and safe operation of the RPC system requires a sophisticated and complex online Detector Control System, able to monitor and control 10^4 hardware devices distributed over an area of about 5000 m^2. The RPC DCS acquires, monitors and stores about 10^5 parameters...
    Go to contribution page
  172. Dr Frank Gaede (DESY IT)
    23/03/2009, 17:50
    Event Processing
    oral
    The International Linear Collider is the next large accelerator project in High Energy Physics. ILD is one of three international working groups developing a detector concept for the ILC; it was created by merging the two concept studies LDC and GLD in 2007. ILD uses a modular C++ application framework (Marlin) that is based on the international...
    Go to contribution page
  173. Dr Douglas Smith (STANFORD LINEAR ACCELERATOR CENTER)
    23/03/2009, 18:10
    Distributed Processing and Analysis
    oral
    The Babar experiment has been running at the SLAC National Accelerator Laboratory for the past nine years and has recorded 500 fb-1 of data. The final data run of the experiment finished in April 2008. Once data taking was complete, the final processing of all Babar data was started. This was the largest computing production effort in the history of Babar, including a reprocessing of...
    Go to contribution page
  174. Alina Corso-Radu (University of California, Irvine)
    23/03/2009, 18:10
    Online Computing
    oral
    ATLAS is one of the four experiments at the Large Hadron Collider (LHC) at CERN, which was put into operation this year. The challenging experimental environment and the extreme detector complexity required the development of a highly scalable distributed monitoring framework, which is currently used to monitor the quality of the data being taken as well as the operational conditions of the...
    Go to contribution page
  175. Stephen Wolbers (FNAL)
    23/03/2009, 18:10
    Hardware and Computing Fabrics
    oral
    As part of its mission to provide integrated storage for a variety of experiments and use patterns, Fermilab's Computing Division examines emerging technologies and reevaluates existing ones to identify the storage solutions satisfying stakeholders' requirements, while providing adequate reliability, security, data integrity and maintainability. We formulated a set of criteria and then...
    Go to contribution page
  176. Prof. Harvey Newman (Caltech)
    23/03/2009, 18:10
    Grid Middleware and Networking Technologies
    oral
    I will review the status, outlook, recent technology trends and state-of-the-art developments in the major networks serving the high energy physics community in the LHC era. I will also cover progress in reducing or closing the Digital Divide that separates scientists in several world regions from the mainstream, from the perspective of the ICFA Standing Committee on Inter-regional Connectivity.
    Go to contribution page
  177. Dr Rainer Mankel (DESY)
    23/03/2009, 18:10
    Event Processing
    oral
    The CMS experiment has performed a comprehensive challenge during May 2008 to test the full scope of offline data handling and analysis activities needed for data taking during the first few weeks of LHC collider operations. It constitutes the first full-scale challenge with large statistics under the conditions expected at the start-up of the LHC, including the expected initial mis-alignments...
    Go to contribution page
  178. Andressa Sivolella Gomes (Universidade Federal do Rio de Janeiro (UFRJ))
    23/03/2009, 18:10
    Software Components, Tools and Databases
    oral
    The ATLAS detector consists of four major components: inner tracker, calorimeter, muon spectrometer and magnet system. The Tile Calorimeter (TileCal) has 4 partitions; each partition has 64 modules and each module has up to 48 channels. During the ATLAS commissioning phase, a group of physicists needs to analyze the Tile Calorimeter data quality, generate reports and update...
    Go to contribution page
  179. Alexander Mazurov (CERN)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    CASTOR provides a powerful and rich interface for managing files and pools of files backed by tape storage. The API is modelled very closely on that of a POSIX filesystem, with part of the actual I/O handled by the rfio library. While the API is very close to POSIX, it is still separate, which unfortunately makes it impossible to use standard tools and scripts straight away....
    Go to contribution page
  180. Mr Aatos Heikkinen (Helsinki Institute of Physics, HIP)
    24/03/2009, 08:00
    Event Processing
    poster
    We present a new Geant4 physics list prepared for nuclear physics applications in the domain dominated by spallation. We discuss new Geant4 models based on the translation of the INCL intra-nuclear cascade and ABLA de-excitation codes into C++, which are used in the physics list. The INCL model is well established for targets heavier than aluminium and projectile energies from ~150 MeV up to 2.5...
    Go to contribution page
  181. Dimosthenis Sokaras (N.C.S.R. Demokritos, Institute of Nuclear Physics)
    24/03/2009, 08:00
    Event Processing
    poster
    Well established values for the X-ray fundamental parameters (fluorescence yields, characteristic lines branching ratios, mass absorption coefficients, etc.) are very important but not adequate for an accurate reference-free quantitative X-Ray Fluorescence (XRF) analysis. Secondary ionization processes following photon induced primary ionizations in matter may contribute significantly to the...
    Go to contribution page
  182. Karsten Koeneke (Deutsches Elektronen-Synchrotron (DESY))
    24/03/2009, 08:00
    Event Processing
    poster
    In the commissioning phase of the ATLAS experiment, low-level Event Summary Data (ESD) are analyzed to evaluate the performance of the individual subdetectors, the performance of the reconstruction and particle identification algorithms, and to obtain calibration coefficients. In the Grid model of distributed analysis, these data must be transferred to Tier-1 and Tier-2 sites before they can be...
    Go to contribution page
  183. Dr Rudi Frühwirth (Institut fuer Hochenergiephysik (HEPHY)-Oesterreichische Akademi)
    24/03/2009, 08:00
    Event Processing
    poster
    Reconstruction of interaction vertices is an essential step in the reconstruction chain of a modern collider experiment such as CMS; the primary ("collision") vertex is reconstructed in every event within the CMS reconstruction program, CMSSW. However, the task of finding and fitting secondary ("decay") vertices also plays an important role in several physics cases such as the reconstruction...
    Go to contribution page
  184. Dr Kilian Schwarz (GSI)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    GSI Darmstadt is hosting a Tier2 centre for the ALICE experiment, providing about 10% of ALICE Tier2 resources. According to the computing model, the tasks of a Tier2 centre are scheduled and unscheduled analysis as well as Monte Carlo simulation. To accomplish this, a large water-cooled compute cluster has been set up and configured, currently consisting of 200 CPUs (1500 cores). After intensive...
    Go to contribution page
  185. Dr Marian Ivanov (GSI)
    24/03/2009, 08:00
    Event Processing
    poster
    We will present a particle identification algorithm, as well as a calibration and performance study, for the ALICE Time Projection Chamber (TPC) using the dE/dx measurement. New calibration algorithms had to be developed, since the simple geometrical corrections were only suitable at the 5-10% level. The PID calibration consists of the following parts: gain calibration, energy deposit calibration as...
    Go to contribution page
  186. Dr Marian Ivanov (GSI)
    24/03/2009, 08:00
    Event Processing
    poster
    We will present our studies of the performance of the reconstruction in the ALICE Time projection chamber (TPC). The reconstruction algorithm in question is based on the Kalman filter. The performance is characterized by the resolution in the position, angle and momenta as a function of particle properties (momentum, position). The resulting momentum parametrization is compared with the...
    Go to contribution page
  187. Daniel Kollar
    24/03/2009, 08:00
    Event Processing
    poster
    CERN's Large Hadron Collider (LHC) is the world's largest particle accelerator. ATLAS is one of the two general-purpose experiments, equipped with a charged-particle tracking system built on two technologies: silicon and drift-tube based detectors, composing the ATLAS Inner Detector (ID). The required precision for the alignment of the most sensitive coordinates of the silicon sensors is just...
    Go to contribution page
  188. Jan Amoraal (NIKHEF), Wouter Hulsbergen (NIKHEF)
    24/03/2009, 08:00
    Event Processing
    poster
    We report on an implementation of a global chi-square algorithm for the simultaneous alignment of all tracking systems in the LHCb detector. Our algorithm uses hit residuals from the standard LHCb track fit, which is based on a Kalman filter. The algorithm is implemented in the LHCb reconstruction framework and exploits the fact that all sensitive detector elements have the same geometry...
    Go to contribution page
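    The global chi-square alignment approach outlined above is commonly formulated as follows (a generic sketch of the method, not necessarily the exact LHCb notation): the track-hit residuals r are linearized in the alignment parameters α, and minimizing the total chi-square yields normal equations for the correction δα.

    ```latex
    \chi^2(\alpha) = \sum_{\mathrm{tracks}} r(\alpha)^{T} V^{-1} r(\alpha),
    \qquad r(\alpha) \approx r_0 + A\,\delta\alpha,
    \quad A = \frac{\partial r}{\partial \alpha}
    % Setting the derivative with respect to \delta\alpha to zero gives:
    \delta\alpha = -\Big(\sum_{\mathrm{tracks}} A^{T} V^{-1} A\Big)^{-1}
                   \sum_{\mathrm{tracks}} A^{T} V^{-1} r_0
    ```

    Here V is the covariance of the residuals; when the residuals come from a Kalman-filter track fit, the correlations the fit induces between hits also enter V.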
  189. Vitali CHOUTKO (CERN)
    24/03/2009, 08:00
    Event Processing
    poster
    The ROOT-based event model for the AMS experiment is presented. By adding a few pragmas to the main ROOT code, parallel processing of ROOT chains on local multi-core machines became possible. The scheme does not require any merging of the user-defined output information (like histograms, etc.), and no pre-installation procedure is needed. The scalability of the scheme is...
    Go to contribution page
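    The pragma-based parallelism described in the abstract is internal to ROOT, but the underlying idea of partitioning a chain's entries across cores can be sketched generically. The following is a minimal illustration with hypothetical names; plain Python dicts stand in for the ROOT chain and histograms, and, unlike the AMS scheme, the per-worker outputs here are merged explicitly:

    ```python
    from multiprocessing import Pool

    def split_ranges(n_entries, n_workers):
        """Partition [0, n_entries) into contiguous per-worker chunks."""
        base, extra = divmod(n_entries, n_workers)
        ranges, start = [], 0
        for w in range(n_workers):
            stop = start + base + (1 if w < extra else 0)
            ranges.append((start, stop))
            start = stop
        return ranges

    def process_range(rng):
        """Stand-in for looping over one chunk of a chain and filling a
        worker-local histogram (here: a simple dict keyed by bin)."""
        start, stop = rng
        hist = {}
        for entry in range(start, stop):   # would be chain.GetEntry(entry)
            bin_ = entry % 4               # hypothetical observable
            hist[bin_] = hist.get(bin_, 0) + 1
        return hist

    def merge(hists):
        """Combine per-worker histograms into one."""
        out = {}
        for h in hists:
            for k, v in h.items():
                out[k] = out.get(k, 0) + v
        return out

    if __name__ == "__main__":
        ranges = split_ranges(1000, 4)
        with Pool(4) as pool:
            total = merge(pool.map(process_range, ranges))
        print(sum(total.values()))  # 1000
    ```

    Each worker touches a disjoint entry range, so no locking is needed during the event loop; only the final combination step sees all outputs.
    
    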
  190. Dr Edmund Widl (Institut für Hochenergiephysik (HEPHY Vienna))
    24/03/2009, 08:00
    Event Processing
    poster
    One of the main components of the CMS experiment is the Inner Tracker. This device, designed to measure the trajectories of charged particles, is composed of approximately 16,000 planar silicon detector modules, which makes it the biggest of its kind. However, systematic measurement errors, caused by unavoidable inaccuracies in the construction and assembly phase, reduce the precision of the...
    Go to contribution page
  191. Stefan Kluth (Max-Planck-Institut für Physik)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    We show how the ATLAS offline software is ported to the Amazon Elastic Compute Cloud (EC2). We prepare an Amazon Machine Image (AMI) on the basis of the standard ATLAS platform, Scientific Linux 4 (SL4). Then an instance of the AMI is started on EC2, and we install and validate a recent release of the ATLAS offline software distribution kit. The installed software is archived as an image...
    Go to contribution page
  192. Dr David Lawrence (Jefferson Lab)
    24/03/2009, 08:00
    Event Processing
    poster
    Automatic ROOT tree creation is achieved in the JANA Event Processing Framework through a special plugin. The janaroot plugin can automatically define a TTree from the data objects passed through the framework, without using a ROOT dictionary. Details on how this is achieved, as well as possible applications, will be presented.
    Go to contribution page
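    The dictionary-free tree definition described above can be illustrated by analogy with language-level reflection. This is a hedged sketch (the `Hit` class and all names are hypothetical; plain dicts stand in for a TTree): the branch layout is inferred from a data object's attributes rather than from a pre-generated dictionary.

    ```python
    def infer_branches(obj):
        """Derive a branch-name -> type-name layout from a plain data object
        by inspecting its public attributes (the analogue of defining a
        TTree without a ROOT dictionary)."""
        return {name: type(value).__name__
                for name, value in vars(obj).items()
                if not name.startswith("_")}

    def fill_rows(objects):
        """Flatten objects into rows that match the inferred branch layout."""
        if not objects:
            return []
        layout = infer_branches(objects[0])
        return [{name: getattr(obj, name) for name in layout}
                for obj in objects]

    class Hit:
        """Hypothetical reconstruction object."""
        def __init__(self, x, y, energy):
            self.x, self.y, self.energy = x, y, energy

    hits = [Hit(1.0, 2.0, 3.5), Hit(0.5, -1.0, 7.2)]
    print(infer_branches(hits[0]))  # {'x': 'float', 'y': 'float', 'energy': 'float'}
    ```

    The appeal of this pattern is that reconstruction objects need no tree-writing code of their own; any object handed to the plugin yields branches automatically.
    
    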
  193. Robert Petkus (Brookhaven National Laboratory)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    Gluster, a free cluster file-system scalable to several peta-bytes, is under evaluation at the RHIC/USATLAS Computing Facility. Several production SunFire x4500 (Thumper) NFS servers were dual-purposed as storage bricks and aggregated into a single parallel file-system using TCP/IP as an interconnect. Armed with a paucity of new hardware, the objective was to simultaneously allow traditional...
    Go to contribution page
  194. Dr Peter Kreuzer (RWTH Aachen IIIA)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    The CMS CERN Analysis Facility (CAF) was primarily designed to host a large variety of latency-critical workflows. These break down into alignment and calibration, detector commissioning and diagnosis, and high-interest physics analysis requiring fast turnaround. In addition to the low-latency requirement on the batch farm, another mandatory condition is efficient access to the RAW...
    Go to contribution page
  195. Andrea Di Simone (INFN Roma2)
    24/03/2009, 08:00
    Event Processing
    poster
    Resistive Plate Chambers (RPC) are used in ATLAS to provide the first-level muon trigger in the barrel region. The total size of the system is about 16000 m2, read out by about 350000 electronic channels. In order to reach the needed trigger performance, a precise knowledge of the detector working point is necessary, and the high number of readout channels calls for severe requirements on...
    Go to contribution page
  196. Dr Silvia Maselli (INFN Torino)
    24/03/2009, 08:00
    Event Processing
    poster
    The calibration process of the Barrel Muon DT System of CMS as developed and tuned during the recent cosmic data run is presented. The calibration data reduction method, the full work flow of the procedure and final results are presented for real and simulated data.
    Go to contribution page
  197. Mr James Jackson (H.H. Wills Physics Laboratory - University of Bristol)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    The UK LCG Tier-1 computing centre, located at the Rutherford Appleton Laboratory, is responsible for the custodial storage and processing of the raw data from all four LHC experiments: CMS, ATLAS, LHCb and ALICE. The demands of data import, processing, export and custodial tape archival place unique requirements on the mass storage system used. The UK Tier-1 uses CASTOR as the storage...
    Go to contribution page
  198. Rodrigo Sierra Moral (CERN)
    24/03/2009, 08:00
    Collaborative Tools
    poster
    Scientists all over the world collaborate with the CERN laboratory day by day. They must be able to communicate effectively on their joint projects at any time, so telephone conferences have become indispensable and widely used. The traditional conference system, managed by 6 switchboard operators, was hosting more than 20,000 hours and 5,500 conferences per year. However, the system needed to be...
    Go to contribution page
  199. Mr Carlos Ghabrous (CERN)
    24/03/2009, 08:00
    Collaborative Tools
    poster
    As a result of the tremendous development of GSM services over the last years, the number of related services used by organizations has drastically increased. Monitoring GSM services is therefore becoming a business-critical issue, so as to be able to react appropriately in case of an incident. In order to provide GSM coverage in all the CERN underground facilities, more than 50 km of...
    Go to contribution page
  200. Dr Lucas Taylor (Northeastern U., Boston)
    24/03/2009, 08:00
    Collaborative Tools
    poster
    The CMS Experiment at the LHC is establishing a global network of inter-connected "CMS Centres" for controls, operations and monitoring at CERN, Fermilab, DESY and a number of other sites in Asia, Europe, Russia, South America, and the USA. "ci2i" ("see eye to eye") is a generic Web tool, using Java and Tomcat, for managing: hundreds of display screens in many locations; monitoring...
    Go to contribution page
  201. Miroslav Siket (CERN)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    LHC computing requirements are such that the number of CPU and storage nodes and the complexity of the services to be managed bring new challenges. Operations like checking configuration consistency, executing actions on nodes, moving them between clusters etc. are very frequent. These scaling challenges are the basis for CluMan, a new cluster management tool being designed and...
    Go to contribution page
  202. Martin Gasthuber (DESY)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    With the first analysis-capable data from the LHC on the horizon, more and more sites are facing the question of how to build a highly efficient analysis facility for their local physicists, mostly attached to a Tier2/3. The most important ingredient for such a facility is the underlying storage system, and here the selected option for the data management and data access system - well...
    Go to contribution page
  203. Mr Stuart Wakefield (Imperial College)
    24/03/2009, 08:00
    Event Processing
    poster
    ProdAgent is a set of tools to assist in producing various data products, such as Monte Carlo simulation, prompt reconstruction, re-reconstruction and skimming. In this paper we briefly discuss the ProdAgent architecture, and focus on the experience of using this system in recent computing challenges, feedback from these challenges, and future work. The computing challenges have proven...
    Go to contribution page
  204. Johanna Fleckner (CERN / University of Mainz)
    24/03/2009, 08:00
    Event Processing
    poster
    T. Cornelissen, on behalf of the ATLAS inner detector software group. Several million cosmic tracks were recorded during the combined ATLAS runs in autumn 2008. Using these cosmic-ray events as well as first beam events, the software infrastructure of the inner detector of the ATLAS experiment (pixel and microstrip silicon detectors as well as straw tubes with additional transition...
    Go to contribution page
  205. Arshak Tonoyan (CERN)
    24/03/2009, 08:00
    Event Processing
    poster
    Looking towards first LHC collisions, the ATLAS detector is being commissioned using all types of physics data available: cosmic rays and events produced during a few days of LHC single beam operations. In addition to putting in place the trigger and data acquisition chains, commissioning of the full software chain is a main goal. This is interesting not only to ensure that the reconstruction,...
    Go to contribution page
  206. David Futyan (Imperial College, University of London)
    24/03/2009, 08:00
    Event Processing
    poster
    The CMS experiment has developed a powerful framework to ensure the precise and prompt alignment and calibration of its components, which is a major prerequisite to achieve the optimal performance for physics analysis. The prompt alignment and calibration strategy harnesses computing resources both at the Tier-0 site and the CERN Analysis Facility (CAF) to ensure fast turnaround for updating...
    Go to contribution page
  207. Mr Gheni Abla (General Atomics)
    24/03/2009, 08:00
    Online Computing
    poster
    Increasing utilization of the Internet and convenient web technologies has made the web portal a major application interface for remote participation in and control of scientific instruments. While web portals have provided a centralized gateway for multiple computational services, the amount of visual output is often overwhelming due to the high volume of data generated by complex scientific...
    Go to contribution page
  208. Sunanda Banerjee (Fermilab, USA)
    24/03/2009, 08:00
    Event Processing
    poster
    CMS is looking forward to tuning the detector simulation using the forthcoming collision data from the LHC. CMS established a task force in February 2008 in order to understand and reconcile the discrepancies observed between the CMS calorimetry simulation and the test-beam data recorded during 2004 and 2006. Within this framework, significant effort has been made to develop a strategy for tuning fast...
    Go to contribution page
  209. Robert Petkus (Brookhaven National Laboratory)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    Over the last two years, the USATLAS Computing Facility at BNL has managed a highly performant, reliable and cost-effective dCache storage cluster using SunFire x4500/4540 (Thumper/Thor) storage servers. The design of a discrete storage cluster signaled a departure from a model where storage resides locally on a disk-heavy compute farm. The consequent alteration of data flow mandated a...
    Go to contribution page
  210. Prof. Gordon Watts (UNIVERSITY OF WASHINGTON)
    24/03/2009, 08:00
    Collaborative Tools
    poster
    Particle physics conferences lasting a week (like CHEP) can have hundreds of talks and posters presented. Current conference web interfaces (like Indico) are well suited to finding a talk by author or by time slot. However, browsing the complete material of a modern large conference is not user-friendly. Browsing involves continually making the expensive transition between HTML viewing and...
    Go to contribution page
  211. Dr Filippo Costa (CERN)
    24/03/2009, 08:00
    Event Processing
    poster
    ALICE (A Large Ion Collider Experiment) is an experiment at the LHC (Large Hadron Collider) optimized for the study of heavy-ion collisions. The main aim of the experiment is to study the behavior of strongly interacting matter and the quark-gluon plasma. In order to be ready for the first real physics interactions, the 18 sub-detectors composing ALICE have been tested using cosmic rays and...
    Go to contribution page
  212. Dr Martin Aleksa (for the LAr conference committee) (CERN)
    24/03/2009, 08:00
    Event Processing
    poster
    The Liquid Argon (LAr) calorimeter is a key detector component in the ATLAS experiment at the LHC, designed to provide precision measurements of electrons, photons, jets and missing transverse energy. A critical element in the precision measurement is the electronic calibration. The LAr calorimeter has been installed in the ATLAS cavern and filled with liquid argon since 2006. The...
    Go to contribution page
  213. Mrs Elisabetta Ronchieri (INFN CNAF)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    Many High Energy Physics experiments must share and transfer large volumes of data. Therefore, the maximization of data throughput is a key issue, requiring detailed analysis and setup optimization of the underlying infrastructure and services. In Grid computing, the data transfer protocol called GridFTP is widely used for efficiently transferring data in conjunction with various types of file...
    Go to contribution page
  214. Marc Deissenroth, Marc Deissenroth (Universität Heidelberg)
    24/03/2009, 08:00
    Event Processing
    poster
    We report results obtained with different track-based algorithms for the alignment of the LHCb detector with first data. The large-area Muon Detector and Outer Tracker have been aligned with a large sample of tracks from cosmic rays. The three silicon detectors --- VELO, TT-station and Inner Tracker --- have been aligned with beam-induced events from the LHC injection line. We compare...
    Go to contribution page
  215. Dr Pablo Cirrone (INFN-LNS)
    24/03/2009, 08:00
    Event Processing
    poster
    Geant4 is a Monte Carlo toolkit describing the transport and interaction of particles with matter. Geant4 covers all particles and materials, and its geometry description allows for complex geometries. Initially focused on high-energy applications, the use of Geant4 is also growing in different domains, such as radioprotection, dosimetry, space radiation and external radiotherapy with proton and carbon...
    Go to contribution page
  216. Luca Lista (INFN Sezione di Napoli)
    24/03/2009, 08:00
    Event Processing
    poster
    We present a parser to evaluate expressions and boolean selections that is applied to CMS event data for event filtering and analysis purposes. The parser is based on a Boost Spirit grammar definition, and uses the Reflex dictionary for class introspection. The parser allows a natural definition of expressions and cuts in user configurations, and provides good run-time performance compared to...
    Go to contribution page
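    The original parser is built on Boost Spirit with Reflex introspection; as a rough stand-in for the same idea (a restricted expression grammar evaluated against event quantities), here is a hedged Python sketch using the standard `ast` module, with only whitelisted operations allowed. All variable names in the example cuts are hypothetical:

    ```python
    import ast
    import operator as op

    # Whitelisted operations; anything else in the cut string is rejected.
    _BINOPS = {ast.Add: op.add, ast.Sub: op.sub,
               ast.Mult: op.mul, ast.Div: op.truediv}
    _CMPOPS = {ast.Gt: op.gt, ast.Lt: op.lt, ast.GtE: op.ge,
               ast.LtE: op.le, ast.Eq: op.eq, ast.NotEq: op.ne}
    _FUNCS = {"abs": abs, "min": min, "max": max}

    def evaluate(expr, variables):
        """Evaluate a cut such as 'pt > 20 and abs(eta) < 2.5' against a
        dict of event quantities, walking the parsed syntax tree."""
        def ev(node):
            if isinstance(node, ast.Expression):
                return ev(node.body)
            if isinstance(node, ast.Constant):
                return node.value
            if isinstance(node, ast.Name):
                return variables[node.id]          # event quantity lookup
            if isinstance(node, ast.BinOp) and type(node.op) in _BINOPS:
                return _BINOPS[type(node.op)](ev(node.left), ev(node.right))
            if isinstance(node, ast.BoolOp):
                vals = [ev(v) for v in node.values]
                return all(vals) if isinstance(node.op, ast.And) else any(vals)
            if isinstance(node, ast.Compare):
                left, result = ev(node.left), True
                for cmp_op, comparator in zip(node.ops, node.comparators):
                    right = ev(comparator)
                    result = result and _CMPOPS[type(cmp_op)](left, right)
                    left = right
                return result
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                return _FUNCS[node.func.id](*[ev(a) for a in node.args])
            if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
                return -ev(node.operand)
            raise ValueError("disallowed syntax in cut expression")
        return ev(ast.parse(expr, mode="eval"))
    ```

    Restricting evaluation to an explicit whitelist is what makes such a parser safe to expose in user configuration files, which is also the role the grammar definition plays in the C++ original.
    
    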
  217. Douglas Orbaker (University of Rochester)
    24/03/2009, 08:00
    Event Processing
    poster
    The experiments at the Large Hadron Collider (LHC) will start their search for answers to some of the remaining puzzles of particle physics in 2008. All of these experiments rely on a very precise Monte Carlo Simulation of the physical and technical processes in the detectors. A fast simulation has been developed within the CMS experiment, which is between 100-1000 times faster than its...
    Go to contribution page
  218. Lorenzo Moneta (CERN), Prof. Nikolai GAGUNASHVILI (University of Akureyri, Iceland)
    24/03/2009, 08:00
    Event Processing
    poster
    Weighted histograms are often used for the estimation of probability density functions in High Energy Physics. The bin contents of a weighted histogram can be considered as a sum of random variables with a random number of terms. A generalization of Pearson's chi-square statistic for weighted histograms, and for weighted histograms with unknown normalization, has been recently proposed...
    Go to contribution page
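    The paper's generalized statistic is not reproduced here, but the motivation can be illustrated: with weighted fills, the per-bin variance is no longer given by the bin content, and a common estimator is the sum of squared weights. A minimal sketch, assuming hypothetical model bin probabilities `probs` and effective total `n_total` (a common approximation, not the paper's statistic):

    ```python
    def weighted_chi2(weights_per_bin, probs, n_total):
        """Chi-square comparison of a weighted histogram against model bin
        probabilities. The per-bin variance is estimated by sum(w^2), the
        usual variance estimator for weighted fills."""
        chi2 = 0.0
        for bin_weights, p in zip(weights_per_bin, probs):
            w_sum = sum(bin_weights)                 # bin content
            var = sum(w * w for w in bin_weights)    # sum of squared weights
            expected = n_total * p
            if var > 0:
                chi2 += (w_sum - expected) ** 2 / var
        return chi2
    ```

    For unit weights this reduces to the familiar Pearson form with the observed count in the denominator, since sum(w^2) then equals the bin content.
    
    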
  219. Mr Sverre Jarp (CERN)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    This talk will start by reminding the audience that Moore's law is very much alive. Transistors will continue to double for every new silicon generation every other year. Chip designers are therefore trying every possible "trick" for putting the transistors to good use. The most notable one is to push more parallelism into each CPU: More and longer vectors, more parallel execution units, more...
    Go to contribution page
  220. Prof. Vladimir Ivantchenko (CERN, ESA)
    24/03/2009, 08:00
    Event Processing
    poster
    The process of multiple scattering of charged particles is an important component of Monte Carlo transport. At high energy it defines the deviation of particles from ideal tracks and the limitation of spatial resolution. Multiple scattering of low-energy electrons defines the energy response and resolution of electromagnetic calorimeters. Recent progress in the development of multiple scattering models within...
    Go to contribution page
  221. Ian Gable (University of Victoria)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    Virtualization technologies such as Xen can be used to satisfy the disparate and often incompatible system requirements of different user groups in shared-use computing facilities. This capability is particularly important for HEP applications, which often have restrictive requirements. The use of virtualization adds flexibility; however, it is essential that the virtualization...
    Go to contribution page
  222. Cano Ay (University of Goettingen)
    24/03/2009, 08:00
    Event Processing
    poster
    HepMCAnalyser is a tool for generator validation and comparisons. It is a stable, easy-to-use and extendable framework allowing easy access to and integration of generator-level analysis. It comprises a class library with benchmark physics processes to analyse HepMC generator output and to fill ROOT histograms. A web interface is provided to display all or selected histograms, compare...
    Go to contribution page
  223. Dr Federico Calzolari (Scuola Normale Superiore - INFN Pisa)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    High availability has always been one of the main problems for a data center. Until now, high availability was achieved by host-per-host redundancy, a highly expensive method in terms of hardware and human costs. A new approach to the problem can be offered by virtualization. Using virtualization, it is possible to achieve a redundancy system for all the services running in a data center. This...
    Go to contribution page
  224. Dr Steven Aplin (DESY)
    24/03/2009, 08:00
    Event Processing
    poster
    The International Linear Collider is proposed as the next large accelerator project in High Energy Physics. The ILD Detector Concept Study is one of three international groups working on designing a detector to be used at the ILC. The ILD Detector is being optimised to employ the so called Particle Flow paradigm. Such an approach means that hardware alone will not be able to realise the full...
    Go to contribution page
  225. Simon Taylor (Jefferson Lab)
    24/03/2009, 08:00
    Event Processing
    poster
    The future GlueX detector in Hall D at Jefferson Lab is a large acceptance (almost 4pi) spectrometer designed to facilitate the study of the excitation of the gluonic field binding quark--anti-quark pairs into mesons. A large solenoidal magnet will provide a 2.2-Tesla field that will be used to momentum-analyze the charged particles emerging from a liquid hydrogen target. The...
    Go to contribution page
  226. Kati Lassila-Perini (Helsinki Institute of Physics HIP)
    24/03/2009, 08:00
    Collaborative Tools
    poster
    Complete and up-to-date documentation is essential for efficient data analysis in a large and complex collaboration like CMS. Good documentation reduces the time spent in problem solving for users and software developers. The scientists in our research environment do not necessarily have the interests or skills of professional technical writers. This results in inconsistencies in the...
    Go to contribution page
  227. Radoslav Ivanov (Unknown)
    24/03/2009, 08:00
    Collaborative Tools
    poster
    The status of high-energy physics (HEP) information systems has been jointly analyzed by the libraries of CERN, DESY, Fermilab and SLAC. As a result, the four laboratories have started the INSPIRE project – a new platform built by moving the successful SPIRES features and content, curated at DESY, Fermilab and SLAC, into the open-source CDS Invenio digital library software that was developed...
    Go to contribution page
  228. Mrs Ianna Osborne (NORTHEASTERN UNIVERSITY)
    24/03/2009, 08:00
    Event Processing
    poster
    Geneva, 10 September 2008. The first beam in the Large Hadron Collider at CERN was successfully steered around the full 27 kilometers of the world's most powerful particle accelerator at 10h28 this morning. This historic event marks a key moment in the transition from over two decades of preparation to a new era of scientific discovery. (http://www.interactions.org/cms/?pid=1026796) From...
    Go to contribution page
  229. Dr Monica Verducci (INFN RomaI)
    24/03/2009, 08:00
    Event Processing
    poster
    ATLAS is a large multipurpose detector, presently in the final phase of construction at LHC, the CERN Large Hadron Collider accelerator. In ATLAS the muon detection is performed by a huge magnetic spectrometer, built with the Monitored Drift Tube (MDT) technology. It consists of more than 1,000 chambers and 350,000 drift tubes, which have to be controlled to a spatial accuracy better than 10...
    Go to contribution page
  230. Mitja Majerle (Nuclear Physics institute AS CR, Rez)
    24/03/2009, 08:00
    Event Processing
    poster
    Monte Carlo codes MCNPX and FLUKA are used to analyze the experiments on simplified Accelerator Driven Systems, which are performed at the Joint Institute for Nuclear Research Dubna. At the experiments, protons or deuterons with the energy in the GeV range are directed to thick, lead targets surrounded by different moderators and neutron multipliers. Monte Carlo simulations of these...
    Go to contribution page
  231. Dr David Lawrence (Jefferson Lab)
    24/03/2009, 08:00
    Event Processing
    poster
    Multi-threading is a tool that is not only well suited to high-statistics event analysis, but is particularly useful for taking advantage of the next generation of many-core CPUs. The JANA event processing framework has been designed to implement multi-threading through the use of POSIX threads. Thoughtful implementation allows reconstruction packages to be developed that are thread-enabled...
    Go to contribution page
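    JANA's actual implementation uses POSIX threads in C++; as a loose illustration of the event-parallel pattern it describes (whole events dispatched to worker threads that keep thread-local results, so the event loop itself needs no locking), here is a minimal Python sketch with hypothetical names:

    ```python
    import queue
    import threading

    def run_workers(events, n_threads, process):
        """Dispatch whole events to worker threads. Each worker pulls from
        a shared queue and accumulates into a thread-local list; the shared
        result list is touched only once per worker, under a lock."""
        q = queue.Queue()
        for ev in events:
            q.put(ev)

        results, lock = [], threading.Lock()

        def worker():
            local = []                     # thread-local accumulation
            while True:
                try:
                    ev = q.get_nowait()
                except queue.Empty:
                    break
                local.append(process(ev))  # per-event reconstruction
            with lock:                     # single synchronized hand-off
                results.extend(local)

        threads = [threading.Thread(target=worker) for _ in range(n_threads)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return results
    ```

    Because each thread owns its working data and synchronizes only at the end, reconstruction code written for one thread runs unchanged with many, which is the property the abstract attributes to thread-enabled packages.
    
    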
  232. Dr Rosy Nikolaidou (CEA Saclay)
    24/03/2009, 08:00
    Event Processing
    poster
    ATLAS is one of the four experiments at the Large Hadron Collider (LHC) at CERN. This experiment has been designed to study a large range of physics including searches for previously unobserved phenomena such as the Higgs Boson and super-symmetry. The ATLAS Muon Spectrometer (MS) is optimized to measure final state muons in a large momentum range, from a few GeV up to TeV. Its momentum...
    Go to contribution page
  233. Mr Igor Mandrichenko (FNAL)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    Fermilab is a high energy physics research lab that maintains a highly dynamic network which typically supports around 15,000 active nodes. Due to the open nature of the scientific research conducted at FNAL, the portion of the network used to support open scientific research requires high bandwidth connectivity to numerous collaborating institutions around the world, and must...
    Go to contribution page
  234. Dr Yaodong CHENG (Institute of High Energy Physics,Chinese Academy of Sciences)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    Some large experiments at IHEP will generate more than 5 petabytes of data in the next few years, which brings great challenges for data analysis and storage. CERN CASTOR version 1 was first deployed at IHEP in 2003, but it is now difficult for it to meet the new requirements. Taking into account issues of management, commercial software etc., we decided not to upgrade CASTOR from version 1 to version 2....
    Go to contribution page
  235. Dr Peter Van Gemmeren (Argonne National Laboratory)
    24/03/2009, 08:00
    Event Processing
    poster
    In ATLAS software, TAGs are event metadata records that can be stored in various technologies, including ROOT files and relational databases. TAGs are used to identify and extract events that satisfy certain selection predicates, which can be coded as SQL-style queries. Several new developments in file-based TAG infrastructure are presented. TAG collection files support in-file metadata...
    Go to contribution page
  236. Andreu Pacheco (IFAE Barcelona), Davide Costanzo (University of Sheffield), Iacopo Vivarelli (INFN and University of Pisa), Manuel Gallas (CERN)
    24/03/2009, 08:00
    Event Processing
    poster
    The ATLAS experiment recently entered the data-taking phase, with the focus shifting from software development to validation. The ATLAS software has to be robust enough to process large datasets and to produce the high-quality output needed for the experiment's scientific exploitation. The validation process is discussed in this talk, starting from the validation of the nightly builds and...
    Go to contribution page
  237. Keith Rose (Dept. of Physics and Astronomy-Rutgers, State Univ. of New Jerse)
    24/03/2009, 08:00
    Event Processing
    poster
    The silicon pixel detector in CMS contains approximately 66 million channels, and will provide extremely high tracking resolution for the experiment. To ensure the data collected is valid, it must be monitored continuously at all levels of acquisition and reconstruction. The Pixel Data Quality Monitoring process ensures that the detector, as well as the data acquisition and reconstruction...
    Go to contribution page
  238. Dr Alessandra Doria (INFN Napoli)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    The large potential storage and computing power available in the modern grid and data centre infrastructures enable the development of the next generation grid-based computing paradigm, in which a large number of clusters are interconnected through high speed networks. Each cluster is composed of several or often hundreds of computers and devices each with its own specific role in the grid. In...
    Go to contribution page
  239. Dr Maria Grazia Pia (INFN GENOVA)
    24/03/2009, 08:00
    Event Processing
    poster
    An R&D project, named NANO5, has been recently launched at INFN to address fundamental methods in radiation transport simulation and revisit the Geant4 kernel design to cope with new experimental requirements. The project, which gathers an international collaborating team, focuses on simulation at different scales in the same environment. This issue requires novel methodological approaches to...
    Go to contribution page
  240. Mr Danilo Piparo (Universitaet Karlsruhe)
    24/03/2009, 08:00
    Event Processing
    poster
    RSC is a software framework based on the RooFit technology and developed for the CMS experiment community; its scope is to allow the modelling and combination of multiple analysis channels, together with the accomplishment of statistical studies performed through a variety of methods described in the literature and implemented as classes. The design of these classes is oriented to the...
    Go to contribution page
  241. Dr Kristian Harder (RAL)
    24/03/2009, 08:00
    Event Processing
    poster
    The luminosity upgrade of the Large Hadron Collider (SLHC) is foreseen to start from 2013. An eventual factor-of-ten increase in LHC statistics will have a major impact on the LHC physics program. However, as well as offering the possibility of increased physics potential, the SLHC will create an extreme operating environment for the detectors, particularly the tracking devices and the...
    Go to contribution page
  242. Luca Dell'Agnello (INFN)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    In the framework of WLCG, the Tier-1 computing centres have very stringent requirements for data storage, in terms of size, performance and reliability. For some years, at the INFN-CNAF Tier-1, we have been using two distinct storage systems: Castor as the tape-based storage solution (also known as the D0T1 storage class in WLCG language) and the General Parallel...
    Go to contribution page
  243. Dr Szymon Gadomski (DPNC, University of Geneva)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    Computing for ATLAS in Switzerland has two Tier-3 sites with several years of experience, owned by the Universities of Berne and Geneva. They have been used for ATLAS Monte Carlo production, centrally controlled via NorduGrid, since 2005. The Tier-3 sites are under continuous development. In the case of Geneva, the proximity of CERN leads to additional use cases, related to commissioning of...
    Go to contribution page
  244. Prof. Gordon Watts (UNIVERSITY OF WASHINGTON), Dr Laurent Vacavant (CPPM)
    24/03/2009, 08:00
    Event Processing
    poster
    The ATLAS detector, one of the two general-purpose experiments at the Large Hadron Collider, will take high-energy collision data for the first time in 2009. Its physics program encompasses everything from Standard Model physics to specific searches for beyond-the-Standard-Model signatures. One important aspect of separating the signal from large Standard Model backgrounds...
    Go to contribution page
  245. John Chapman (Dept. of Physics, Cavendish Lab.)
    24/03/2009, 08:00
    Event Processing
    poster
    The ATLAS digitization project is steered by a top-level Python digitization package which ensures uniform and consistent configuration across the subdetectors. The properties of the digitization algorithms were tuned to reproduce the detector response seen in lab tests, test-beam data and cosmic-ray running. Dead channels and noise rates are read from database tables to reproduce conditions...
    Go to contribution page
  246. Simone Frosali (Dipartimento di Fisica - Universita di Firenze)
    24/03/2009, 08:00
    Event Processing
    poster
    The CMS Silicon Strip Tracker (SST) consists of 25000 silicon microstrip sensors covering an area of 210 m², with 10 million readout channels. Starting from December 2007 the SST was inserted and connected inside the CMS experiment, and since summer 2008 it has been commissioned using cosmic muons with and without magnetic field. During this data taking the performance of the SST has been...
    Go to contribution page
  247. Dr Gabriele Benelli (CERN PH Dept (for the CMS collaboration))
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    The demanding computing needs of the CMS experiment require thoughtful planning and management of its computing infrastructure. A key factor in this process is the use of realistic benchmarks when assessing the computing power of the different architectures available. In recent years a discrepancy has been observed between the CPU performance estimates given by the reference benchmark for HEP...
    Go to contribution page
  248. Roberto Valerio (Cinvestav Unidad Guadalajara)
    24/03/2009, 08:00
    Event Processing
    poster
    Decision tree learning constitutes a suitable approach to classification due to its ability to partition the input (variable) space into regions of class-uniform events, while providing a structure amenable to interpretation (as opposed to other methods such as neural networks). But an inherent limitation of decision tree learning is the progressive lessening of the statistical support of the...
    Go to contribution page
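The statistical-support limitation described in this abstract can be illustrated with a small, self-contained sketch (generic Python, not the authors' code): each successive split of the variable space leaves fewer training events behind each node, so deep leaves rest on progressively weaker statistics.

```python
# Illustrative sketch of shrinking statistical support in decision tree
# learning: splits partition the training sample, so leaf populations shrink.
import random

random.seed(1)

def grow(events, depth, min_support=20):
    """Recursively split events on a random threshold; stop when the
    remaining sample is too small to support a statistically sound cut."""
    if depth == 0 or len(events) < min_support:
        return {"support": len(events)}  # leaf: events backing this region
    cut = random.uniform(0.2, 0.8)
    left = [x for x in events if x < cut]
    right = [x for x in events if x >= cut]
    return {"cut": cut,
            "left": grow(left, depth - 1, min_support),
            "right": grow(right, depth - 1, min_support)}

def leaf_supports(node):
    if "support" in node:
        return [node["support"]]
    return leaf_supports(node["left"]) + leaf_supports(node["right"])

tree = grow([random.random() for _ in range(1000)], depth=6)
supports = leaf_supports(tree)
print(min(supports), max(supports))  # deep leaves rest on far fewer events
```

The leaf populations always sum to the original sample size, since every split is a disjoint partition; only their distribution across leaves degrades with depth.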
  249. Dr Ma Xiang (Institute of High energy Physics, Chinese Academy of Sciences)
    24/03/2009, 08:00
    Event Processing
    poster
    BEPCII/BESIII (Beijing Electron Positron Collider / Beijing Spectrometer) was installed and operated successfully in July 2008 and has been in commissioning since September 2008. The luminosity has now reached 1.3×10^32 cm^-2 s^-1 at 489 mA × 530 mA with 90 bunches. About 13M psi(2S) physics events have been collected by BESIII. The offline data analysis system of BESIII has been tested and operated to handle...
    Go to contribution page
  250. Rodrigues Figueiredo Eduardo (University Glasgow)
    24/03/2009, 08:00
    Event Processing
    poster
    The reconstruction of charged particles in the LHCb tracking systems consists of two parts. The pattern recognition links the signals belonging to the same particle. The track fitter running after the pattern recognition extracts the best parameter estimate out of the reconstructed tracks. A dedicated Kalman-Fitter is used for this purpose. The track model employed in the fit is based on...
    Go to contribution page
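The fitting step mentioned in this abstract can be sketched in one dimension (a toy illustration only, not the LHCb implementation, which fits full track states including material effects): a Kalman filter sequentially combines each new measurement with its current prediction via a gain factor.

```python
# Minimal one-dimensional Kalman-filter sketch (illustrative only):
# repeatedly blend a predicted state with each new noisy measurement.
def kalman_fit(measurements, meas_var=1.0, process_var=0.01):
    """Sequentially combine noisy position measurements into a best estimate."""
    x, P = measurements[0], meas_var   # initial state and its variance
    for z in measurements[1:]:
        P += process_var               # predict: process noise inflates uncertainty
        K = P / (P + meas_var)         # Kalman gain: relative weight of the data
        x += K * (z - x)               # update the state with the residual
        P *= (1.0 - K)                 # updated variance shrinks
    return x, P

est, var = kalman_fit([1.1, 0.9, 1.05, 0.95, 1.0])
```

Because the update is a convex combination of prediction and measurement, the estimate stays within the range of the inputs while its variance decreases with every added hit.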
  251. Xie Yuguang (Institute of High energy Physics, Chinese Academy of Sciences)
    24/03/2009, 08:00
    Event Processing
    poster
    The new spectrometer for the challenging physics in the tau-charm energy region, BESIII, has been constructed and has entered the commissioning phase at BEPCII, the upgraded e+e- collider with peak luminosity up to 10^33 cm^-2 s^-1 in Beijing, China. The BESIII muon detector will mainly contribute to distinguishing muons from hadrons, especially pions. The Resistive Plate Chambers (RPCs)...
    Go to contribution page
  252. Andrea Dotti (INFN and Università Pisa)
    24/03/2009, 08:00
    Event Processing
    poster
    The challenging experimental environment and the extreme complexity of modern high-energy physics experiments make online monitoring an essential tool to assess the quality of the acquired data. The Online Histogram Presenter (OHP) is the ATLAS tool to display histograms produced by the online monitoring system. In spite of the name, the Online Histogram Presenter is much more than just a...
    Go to contribution page
  253. Mr Gilbert Grosdidier (LAL/IN2P3/CNRS)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    The study and design of a very ambitious petaflop cluster exclusively dedicated to Lattice QCD simulations started in early ’08 among a consortium of 7 laboratories (IN2P3, CNRS, INRIA, CEA) and 2 SMEs. This consortium received a grant from the French ANR agency in July, and the PetaQCD project kickoff is expected to take place in January ’09. Building upon several years of fruitful...
    Go to contribution page
  254. Wouter Verkerke (NIKHEF)
    24/03/2009, 08:00
    Event Processing
    poster
    RooFit is a library of C++ classes that facilitate data modeling in the ROOT environment. Mathematical concepts such as variables, (probability density) functions and integrals are represented as C++ objects. The package provides a flexible framework for building complex fit models through classes that mimic math operators, and is straightforward to extend. For all constructed models RooFit...
    Go to contribution page
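The design idea described here, mathematical concepts represented as objects and composed through operator-like classes, can be sketched in a few lines of plain Python (an illustration of the pattern only; the class names are invented and this is not the RooFit C++ API):

```python
# Sketch of the "math concepts as objects" design: PDFs are callable objects
# and an operator-like class composes them into a larger model.
import math

class Gaussian:
    """A normalized Gaussian probability density as an object."""
    def __init__(self, mean, sigma):
        self.mean, self.sigma = mean, sigma
    def __call__(self, x):
        z = (x - self.mean) / self.sigma
        return math.exp(-0.5 * z * z) / (self.sigma * math.sqrt(2 * math.pi))

class Sum:
    """Mimics an addition operator: a fraction-weighted sum of two PDFs."""
    def __init__(self, frac, pdf1, pdf2):
        self.frac, self.pdf1, self.pdf2 = frac, pdf1, pdf2
    def __call__(self, x):
        return self.frac * self.pdf1(x) + (1 - self.frac) * self.pdf2(x)

signal = Gaussian(0.0, 1.0)       # narrow component
background = Gaussian(0.0, 5.0)   # broad component
model = Sum(0.3, signal, background)
```

Because every composite is itself a callable PDF, models extend naturally: a `Sum` can wrap another `Sum`, mirroring how operator-mimicking classes build complex fit models from simple ones.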
  255. Zachary Marshall (Caltech, USA & Columbia University, USA)
    24/03/2009, 08:00
    Event Processing
    poster
    The simulation suite for ATLAS is in a mature phase, ready to cope with the challenge of the 2009 data. The simulation framework, already integrated in the ATLAS framework (Athena), offers a set of pre-configured applications for full ATLAS simulation, combined test-beam setups, cosmic-ray setups and old standalone test beams. Each detector component was carefully described in full detail and...
    Go to contribution page
  256. Fred Luehring (Indiana University)
    24/03/2009, 08:00
    Collaborative Tools
    poster
    The ATLAS Experiment, with over 2000 collaborators, needs efficient and effective means of communicating information. The Collaboration has been using the TWiki Web at CERN for over three years and now has more than 7000 web pages, some of which are protected. This number greatly exceeds the number of “static” HTML pages, and in the last year, there has been a significant migration to the...
    Go to contribution page
  257. Dr Peter Speckmayer (CERN)
    24/03/2009, 08:00
    Event Processing
    poster
    The toolkit for multivariate analysis, TMVA, provides a large set of advanced multivariate analysis techniques for signal/background classification. In addition, TMVA now also contains regression analysis, all embedded in a framework capable of handling the pre-processing of the data and the evaluation of the output, thus allowing a simple and convenient use of multivariate techniques. The...
    Go to contribution page
  258. Mr Andrey Lebedev (GSI, Darmstadt / JINR, Dubna)
    24/03/2009, 08:00
    Event Processing
    poster
    The Compressed Baryonic Matter (CBM) experiment at the future FAIR accelerator at Darmstadt is being designed for a comprehensive measurement of hadron and lepton production in heavy-ion collisions at beam energies from 8 to 45 AGeV, producing events with large track multiplicity and high hit density. The setup consists of several detectors, including as tracking detectors the silicon tracking system...
    Go to contribution page
  259. Mr Bruno Lenzi (CEA - Saclay)
    24/03/2009, 08:00
    Event Processing
    poster
    Muons in the ATLAS detector are reconstructed by combining information from the Inner Detector and the Muon Spectrometer (MS), located in the outermost part of the experiment. Until they reach the MS, muons typically traverse 100 radiation lengths (X0) of material, most of it instrumented by the electromagnetic and hadronic calorimeters. The proper account for multiple scattering and...
    Go to contribution page
  260. Dr Ingo Fröhlich (Goethe-University)
    24/03/2009, 08:00
    Event Processing
    poster
    Because experimental setups are usually not able to cover the full solid angle, event generators are very important tools for experiments. Here, theoretical calculations provide valuable input, as they can describe specific distributions for parts of the kinematic variables very precisely. The caveat is that an event has several degrees of freedom which can be...
    Go to contribution page
  261. Prof. Vladimir Ivantchenko (CERN, ESA)
    24/03/2009, 08:00
    Event Processing
    poster
    The standard electromagnetic physics packages of Geant4 are used for the simulation of particle transport and HEP detector response. The requirements on the precision and stability of the computations are strong; for example, the calorimeter response for ATLAS and CMS should be reproduced to well within 1%. To maintain and control the long-term quality of the package, software suites for validation and...
    Go to contribution page
  262. Dr Tomasz Szumlak (Glasgow)
    24/03/2009, 08:00
    Event Processing
    poster
    The LHCb experiment is dedicated to studying CP violation and rare decay phenomena. In order to achieve these physics goals, precise tracking and vertexing around the interaction point is crucial. This is provided by the VELO (VErtex LOcator) silicon detector. After digitization, large FPGAs are employed to run several algorithms to suppress noise and reconstruct clusters. This is...
    Go to contribution page
  263. Christian Helft (LAL/IN2P3/CNRS)
    24/03/2009, 08:00
    Collaborative Tools
    poster
    IN2P3, the institute bringing together HEP laboratories in France alongside CEA's IRFU, opened a videoconferencing service in 2002 based on an H.323 MCU. This service has grown steadily since then, serving French communities beyond HEP, to reach an average of about 30 different conferences a day. The relatively small amount of manpower that has been devoted to this project can be...
    Go to contribution page
  264. Mr Joao Fernandes (CERN)
    24/03/2009, 08:00
    Collaborative Tools
    poster
    Several recent initiatives have been put in place by the CERN IT Department to improve the user experience in remote dispersed meetings and remote collaboration at large in the LHC communities worldwide. We will present an analysis of the factors which were historically limiting the efficiency of remote dispersed meetings and describe the consequent actions which were undertaken at CERN to...
    Go to contribution page
  265. Ruth Pordes (FNAL)
    24/03/2009, 09:00
    Plenary
    oral
    The reach and diversity of computationally based Collaboratories continues to expand. The quantity and quality of remote processing and storage continues to advance with new additional entrants from the Commercial Clouds and coverage by Campus, Regional and National Grids. Ensuring interoperability across all these computing facilities is an important responsibility for the common...
    Go to contribution page
  266. Dr Erik Gottschalk (FNAL)
    24/03/2009, 09:30
    Plenary
    oral
    Commissioning the LHC accelerator and experiments will be a vital part of the worldwide high-energy physics program in 2009. Remote operations centers have been established in various locations around the world to support collaboration on LHC activities. For the CMS experiment the development of remote operations centers began with the LHC@FNAL ROC and has evolved into a unified approach with...
    Go to contribution page
  267. Prof. Martin Sevior (University of Melbourne)
    24/03/2009, 10:00
    Plenary
    oral
    The SuperBelle project to increase the luminosity of the KEKB collider by a factor of 50 will search for physics beyond the Standard Model through precision measurements and the investigation of rare processes in flavour physics. The data rate expected from the experiment is comparable to that of a current-era LHC experiment, with commensurate computing needs. Incorporating commercial cloud...
    Go to contribution page
  268. Gregg McKnight (IBM)
    24/03/2009, 11:30
    Commercial
    oral
    In 2008 IBM shattered the U.S. patent record becoming the first company to surpass 4,000 patents in a single year - the 16th consecutive year that IBM has achieved U.S. patent leadership. Come learn how IBM has leveraged our deep Research and Development innovation to deliver the iDataPlex server solution. With over 40 patented innovations, the iDataPlex product is one of the ...
    Go to contribution page
  269. Dr Steve Pawlowski (Intel)
    24/03/2009, 12:00
    Commercial
    oral
    Today's processor designs face some significant challenges in the coming years. Compute demands are projected to continue growing at a compound aggregate growth rate of 45% per year, with seemingly no end in sight. Energy as well as property, plant and equipment costs also continue to increase. Processor designers can no longer afford to trade off increasing power for increasing...
    Go to contribution page
  270. Prof. Dean Nelson (SUN)
    24/03/2009, 12:30
    Plenary
    oral
    "Change is the law of life. And those who look only to the past or present are certain to miss the future" - John F. Kennedy. The Data Center landscape is changing at an incredible rate. Demand is increasing and technology is advancing rapidly, more so than at any other time in our history. Data Center operational cost increases, growing consumption, and the corresponding carbon...
    Go to contribution page
  271. Dr Shaun Roe (CERN)
    24/03/2009, 14:00
    Software Components, Tools and Databases
    oral
    The combination of three relatively recent technologies is described which allows an easy path from database retrieval to interactive web display. SQL queries on an Oracle database can be performed in a manner which directly return an XML description of the result, and Ajax techniques (Asynchronous Javascript And XML) are used to dynamically inject the data into a web display accompanied by an...
    Go to contribution page
  272. Mr Jose Benito Gonzalez Lopez (CERN)
    24/03/2009, 14:00
    Collaborative Tools
    oral
    While the remote collaboration services at CERN slowly aggregate around the Indico event management software, its new version, the result of a careful maturation process, includes improvements which will set a new reference in its domain. The presentation will focus on the description of the new features of the tool and the user feedback process, which resulted in a new record of usability....
    Go to contribution page
  273. Dr Georg Weidenspointner (MPE and MPI-HLL , Munich, Germany)
    24/03/2009, 14:00
    Event Processing
    oral
    The production of particle induced X-ray emission (PIXE) resulting from the de-excitation of an ionized atom is an important physical effect that is not yet accurately modelled in Geant4, nor in other general-purpose Monte Carlo systems. Its simulation concerns use cases in various physics domains – from precision evaluation of spatial energy deposit patterns to material analysis, low...
    Go to contribution page
  274. Mr Andrew Hanushevsky (SLAC National Accelerator Laboratory)
    24/03/2009, 14:00
    Distributed Processing and Analysis
    oral
    Scalla (also known as xrootd) is quickly becoming a significant part of LHC data analysis as a stand-alone clustered data server (US ATLAS T2 and the CERN Analysis Farm), a globally clustered data sharing framework (ALICE), and an integral part of PROOF-based analysis (multiple experiments). Until recently, xrootd did not fit well in the LHC Grid infrastructure as a Storage Element (SE), largely...
    Go to contribution page
  275. Roberto Divià (CERN)
    24/03/2009, 14:00
    Hardware and Computing Fabrics
    oral
    The ALICE (A Large Ion Collider Experiment) Data Acquisition (DAQ) system has the unprecedented requirement to ensure a very high volume, sustained data stream between the ALICE Detector and the Permanent Data Storage (PDS) system which is used as main data repository for Event processing and Offline Computing. The key component to accomplish this task is the Transient Data Storage System...
    Go to contribution page
  276. Dr Philippe Trautmann (Sun Microsystems)
    24/03/2009, 14:00
    Commercial
    oral
  277. Dr Steven Goldfarb (University of Michigan)
    24/03/2009, 14:20
    Collaborative Tools
    oral
    I report major progress in the field of Collaborative Tools, concerning the organization, design and deployment of facilities at CERN, in support of the LHC. This presentation discusses important steps made during the past year and a half, including the identification of resources for equipment and manpower, the development of a competent team of experts, tightening of the user-feedback loop,...
    Go to contribution page
  278. Dr Sergey Panitkin (Department of Physics - Brookhaven National Laboratory (BNL))
    24/03/2009, 14:20
    Distributed Processing and Analysis
    oral
    The Parallel ROOT Facility (PROOF) is a distributed analysis system which makes it possible to exploit the inherent event-level parallelism of high energy physics data. PROOF can be configured to work with centralized storage systems, but it is especially effective together with distributed local storage systems like Xrootd, where data are distributed over computing nodes. It works efficiently on...
    Go to contribution page
  279. Oliver Oberst (Karlsruhe Institute of Technology)
    24/03/2009, 14:20
    Hardware and Computing Fabrics
    oral
    Today's experiments in HEP use only a limited number of operating system flavours. Their software might be validated on only a single OS platform. Resource providers might prefer other operating systems for the installation of the batch infrastructure. This is especially the case if a cluster is shared with other communities, or with communities that have stricter security requirements....
    Go to contribution page
  280. Sunanda Banerjee (Fermilab)
    24/03/2009, 14:20
    Event Processing
    oral
    Geant4 provides a number of physics models at intermediate energies (corresponding to incident momenta in the range 1-20 GeV/c). Recently, these models have been validated with existing data from a number of experiments: (a) inclusive proton and neutron production with a variety of beams (pi^-, pi^+, p) at different energies between 1 and 9 GeV/c on a number of nuclear targets (from beryllium...
    Go to contribution page
  281. Andreas Hinzmann (RWTH Aachen University)
    24/03/2009, 14:20
    Software Components, Tools and Databases
    oral
    The job configuration system of the CMS experiment is based on the Python programming language. Software modules and their order of execution are both represented by Python objects. In order to investigate and verify configuration parameters and dependencies naturally appearing in modular software, CMS employs a graphical tool. This tool visualizes the configuration objects, their...
    Go to contribution page
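The idea of representing modules and their execution order as Python objects, from which a tool can derive a graph to display, can be sketched as follows (a generic illustration with invented names; this is not the CMS configuration API):

```python
# Sketch: configuration as Python objects. A Path holds an ordered list of
# Module objects; a visualization tool would draw the edges between them.
class Module:
    """A software module with its configuration parameters."""
    def __init__(self, name, **params):
        self.name, self.params = name, params

class Path:
    """An ordered sequence of modules, as in a modular framework config."""
    def __init__(self, *modules):
        self.modules = list(modules)
    def edges(self):
        # consecutive modules form the arrows a graphical tool would draw
        return [(a.name, b.name)
                for a, b in zip(self.modules, self.modules[1:])]

unpack = Module("rawDataUnpacker", label="RAW")
clusterize = Module("trackerClusterizer", threshold=3.0)
tracks = Module("trackProducer", minHits=5)

reco = Path(unpack, clusterize, tracks)
print(reco.edges())
```

Because the configuration is ordinary Python data, a tool can inspect it programmatically, which is exactly what makes graphical browsing of parameters and dependencies feasible.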
  282. Lassi Tuura (Northeastern University)
    24/03/2009, 14:40
    Distributed Processing and Analysis
    oral
    In the last two years the CMS experiment has commissioned a full end to end data quality monitoring system in tandem with progress in the detector commissioning. We present the data quality monitoring and certification systems in place, from online data taking to delivering certified data sets for physics analyses, release validation and offline re-reconstruction activities at Tier-1s. We...
    Go to contribution page
  283. Prof. Dean Nelson (SUN)
    24/03/2009, 14:40
    Commercial
    oral
  284. Albert Puig Navarro (Universidad de Barcelona), Markus Frank (CERN)
    24/03/2009, 14:40
    Online Computing
    oral
    The LHCb experiment at the LHC accelerator at CERN will collide particle bunches at 40 MHz. After a first level of hardware trigger with output at 1 MHz, the physically interesting collisions will be selected by running dedicated trigger algorithms in the High Level Trigger (HLT) computing farm. It consists of up to roughly 16000 CPU cores and 44TB of storage space. Although limited by...
    Go to contribution page
  285. Prof. Leo Piilonen (Virginia Tech)
    24/03/2009, 14:40
    Event Processing
    oral
    We report on the use of the GEANT4E, the track extrapolation feature written by Pedro Arce, in the analysis of data from Belle experiment: (1) to project charged tracks from the tracking devices outward to the particle identification devices, thereby assisting in the identification of the particle type of each charged track, and (2) to project charged tracks from the tracking...
    Go to contribution page
  286. Ricardo SALGUEIRO DOMINGUES DA SILVA (CERN)
    24/03/2009, 14:40
    Hardware and Computing Fabrics
    oral
    The ramping up of available resources for LHC data analysis at the different sites continues. Most sites are currently running SL(C)4. However, this operating system is already rather old, and it is becoming difficult to get the required hardware drivers and to get the best out of recent hardware. A possible way out is the migration to SL(C)5-based systems where possible, in...
    Go to contribution page
  287. Benedikt Hegner (CERN)
    24/03/2009, 14:40
    Software Components, Tools and Databases
    oral
    Being a highly dynamic language and allowing reliable programming with quick turnarounds, Python is a widely used programming language in CMS. Most of the tools used in workflow management and the GRID interface tools are written in this language. Also most of the tools used in the context of release management: integration builds, release building and deploying, as well as performance...
    Go to contribution page
  288. Dr Andreas Salzburger (DESY & CERN)
    24/03/2009, 15:00
    Event Processing
    oral
    With the completion of installation of the ATLAS detector in 2008 and the first days of data taking, the ATLAS collaboration is increasingly focusing on the future upgrade of the ATLAS tracking devices. Radiation damage will make it necessary to replace the innermost silicon layer (b-layer) after about five years of operation. In addition, with future luminosity upgrades of the LHC machine...
    Go to contribution page
  289. Dr Alessandro Di Mattia (MSU)
    24/03/2009, 15:00
    Online Computing
    oral
    ATLAS is one of the two general-purpose detectors at the Large Hadron Collider (LHC). The trigger system is responsible for making the online selection of interesting collision events. At the LHC design luminosity of 10^34 cm-2s-1 it will need to achieve a rejection factor of the order of 10^-7 against random proton-proton interactions, while selecting with high efficiency events that are...
    Go to contribution page
  290. Dr Stuart Paterson (CERN)
    24/03/2009, 15:00
    Distributed Processing and Analysis
    oral
    DIRAC, the LHCb community Grid solution, uses generic pilot jobs to obtain a virtual pool of resources for the VO community. In this way agents can request the highest priority user or production jobs from a central task queue and VO policies can be applied with full knowledge of current and previous activities. In this paper the performance of the DIRAC WMS will be presented with emphasis...
    Go to contribution page
  291. Andreas Haupt (DESY), Yves Kemp (DESY)
    24/03/2009, 15:00
    Hardware and Computing Fabrics
    oral
    In the framework of a broad collaboration among German particle physicists - the strategic Helmholtz Alliance "Physics at the Terascale" - an analysis facility has been set up at DESY. The facility is intended to provide the best possible analysis infrastructure for researchers of the ATLAS, CMS, LHCb and ILC experiments, and also for theory researchers. In the first part of the contribution, we...
    Go to contribution page
  292. Dr Pere Mato (CERN)
    24/03/2009, 15:00
    Software Components, Tools and Databases
    oral
    GAUDI is a software framework in C++ used to build event data processing applications using a set of standard components with well-defined interfaces. Simulation, high-level trigger, reconstruction, and analysis programs used by several experiments are developed using GAUDI. These applications can be configured and driven by simple Python scripts. Given the fact that a considerable amount of...
    Go to contribution page
  293. Mr Andrei Gheata (CERN/ISS)
    24/03/2009, 15:20
    Distributed Processing and Analysis
    oral
    The ALICE offline group has developed a set of tools that formalize data access patterns and impose certain rules on how individual data analysis modules have to be structured in order to maximize the data processing efficiency at the whole-collaboration scale. The ALICE analysis framework was developed and extensively tested on MC reconstructed data during the last 2 years in the ALICE...
    Go to contribution page
  294. Dr Sebastien Binet (LBNL)
    24/03/2009, 15:20
    Software Components, Tools and Databases
    oral
    Computers are no longer getting faster: instead, they are growing more and more CPUs, each of which is no faster than the previous generation. This increase in the number of cores evidently calls for more parallelism in HENP software. While end-users' stand-alone analysis applications are relatively easy to modify, LHC experiments' frameworks, being mostly written with a single 'thread'...
    Go to contribution page
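The event-level parallelism that stand-alone analysis applications can adopt is simple to sketch (a generic Python illustration, not an experiment framework): independent events are dispatched to a pool of workers and the per-event results merged at the end.

```python
# Sketch of event-level parallelism: events are independent, so they can be
# farmed out to workers and the results collected in order.
from concurrent.futures import ThreadPoolExecutor

def process_event(event):
    # stand-in for the reconstruction or analysis of one event
    return sum(hit * hit for hit in event)

events = [[1, 2, 3], [4, 5], [6]]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_event, events))
print(results)  # map preserves event order regardless of completion order
```

The hard part alluded to in the abstract is not this dispatch loop but the framework state behind `process_event`: services and caches written for a single thread must be made safe to share or duplicated per worker.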
  295. Dr Isidro Gonzalez Caballero (Instituto de Fisica de Cantabria, Grupo de Altas Energias)
    24/03/2009, 15:20
    Hardware and Computing Fabrics
    oral
    In the CMS computing model, about one third of the computing resources are located at Tier-2 sites, which are distributed across the countries in the collaboration. These sites are the primary platform for user analyses; they host datasets that are created at Tier-1 sites, and users from all CMS institutes submit analysis jobs that run on those data through grid interfaces. They are also the...
    Go to contribution page
  296. Dr Silvia Amerio (University of Padova & INFN Padova)
    24/03/2009, 15:20
    Online Computing
    oral
    The Silicon Vertex Trigger (SVT) is a processor developed at the CDF experiment to perform fast and precise online track reconstruction. SVT is made of two pipelined processors: the Associative Memory, finding low-precision tracks, and the Track Fitter, refining the track quality with high-precision fits. We will describe the architecture and the performance of a next-generation track fitter,...
    Go to contribution page
  297. Dr Fabio Cossutti (INFN Trieste)
    24/03/2009, 15:20
    Event Processing
    oral
    The CMS simulation has been operational within the new CMS software framework for more than 3 years. While the description of the detector, in particular in the forward region, is being completed, during the last year the emphasis of the work has been put on fine tuning of the physics output. The existing test beam data for the different components of the calorimetric system have been...
    Go to contribution page
  298. Olivier Martin (Ictconsulting)
    24/03/2009, 15:20
    Grid Middleware and Networking Technologies
    oral
    Despite many coordinated efforts to promote the use of IPv6, the migration from IPv4 is far from being up to the expectations of most Internet experts. However, time is running fast and unallocated IPv4 address space should run out within the next 3 years or so. The speaker will attempt to explain the reasons behind the lack of enthusiasm for IPv6, in particular, the lack of suitable migration...
    Go to contribution page
  299. Mr Alexander Zaytsev (Budker Institute of Nuclear Physics (BINP))
    24/03/2009, 15:40
    Software Components, Tools and Databases
    oral
    The Hierarchy Software Development Framework provides a lightweight tool for building portable modular applications that perform automated data analysis tasks in batch mode. Design and development activities on the project began in March 2005, and from the very beginning it targeted the case of building experimental data processing applications for the CMD-3...
    Go to contribution page
  300. Vasco Chibante Barroso (CERN)
    24/03/2009, 15:40
    Online Computing
    oral
    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). Specific calibration tasks are performed regularly for each of the 18 ALICE sub-detectors in order to achieve the most accurate physics measurements. These procedures involve event analysis in a wide...
    Go to contribution page
  301. Matevz Tadel (CERN)
    24/03/2009, 15:40
    Event Processing
    oral
    HEP computing is approaching the end of an era when simulation parallelisation could be performed simply by running one instance of full simulation per core. The increasing number of cores and appearance of hardware-thread support both pose a severe limitation on memory and memory-bandwidth available to each execution unit. Typical simulation and reconstruction jobs of AliRoot differ...
    Go to contribution page
  302. Dr Graeme Andrew Stewart (University of Glasgow), Dr Michael John Kenyon (University of Glasgow), Dr Samuel Skipsey (University of Glasgow)
    24/03/2009, 15:40
    Hardware and Computing Fabrics
    oral
    ScotGrid is a distributed Tier-2 centre in the UK with sites in Durham, Edinburgh and Glasgow. ScotGrid has undergone a huge expansion in hardware in anticipation of the LHC and now provides more than 4MSI2K and 500TB to the LHC VOs. Scaling up to this level of provision has brought many challenges to the Tier-2 and we show in this paper how we have adopted new methods of organising...
  303. Gabriele Garzoglio (FERMI NATIONAL ACCELERATOR LABORATORY)
    24/03/2009, 15:40
    Grid Middleware and Networking Technologies
    oral
    Grids enable uniform access to resources by implementing standard interfaces to resource gateways. Gateways control access privileges to resources using users' identities and personal attributes, which are available through Grid credentials. Typically, gateways implement access control by mapping Grid credentials to local privileges. In the Open Science Grid (OSG), privileges are granted on...
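    The credential-to-privilege mapping described above can be sketched as a role-aware lookup. This is a minimal sketch assuming a toy policy table; the VO names, account names and the `map_credential` helper are illustrative, not the actual OSG mapping configuration:

```python
# Illustrative grid-to-local mapping: (VO, role) -> local Unix account.
# Real OSG mapping is policy-driven and site-configurable; this table is invented.
MAPPING_POLICY = {
    ("cms", "production"): "cmsprod",
    ("cms", None): "cmsuser",
    ("atlas", None): "atlasuser",
}

def map_credential(vo, role=None):
    """Return the local account for a credential, preferring the role-specific rule."""
    for key in ((vo, role), (vo, None)):
        if key in MAPPING_POLICY:
            return MAPPING_POLICY[key]
    raise PermissionError(f"no mapping for VO {vo!r}")

print(map_credential("cms", "production"))  # cmsprod
print(map_credential("cms"))                # cmsuser
```

A role-specific rule wins over the VO default, mirroring the idea that a production credential carries broader privileges than a plain membership attribute.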
  304. Dr Giacinto Donvito (INFN-Bari)
    24/03/2009, 16:30
    Distributed Processing and Analysis
    oral
    The Job Submitting Tool provides a solution for the submission of a large number of jobs to the grid in an unattended way. The tool manages grid submission, bookkeeping and resubmission of failed jobs. It also allows the status of each job to be monitored in real time within the same framework. The key elements of this tool are: a relational DB that contains all the...
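    The submission, bookkeeping and resubmission cycle built around a relational DB can be sketched with SQLite. The table layout, status values and retry limit below are illustrative assumptions, not the tool's actual schema:

```python
import sqlite3

# Illustrative bookkeeping DB: one row per job, status updated from grid callbacks.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT, attempts INTEGER)")
db.executemany("INSERT INTO jobs VALUES (?, 'SUBMITTED', 1)", [(i,) for i in range(5)])

def update_status(job_id, status):
    """Record the status reported by the grid middleware."""
    db.execute("UPDATE jobs SET status = ? WHERE id = ?", (status, job_id))

def resubmit_failed(max_attempts=3):
    """Re-queue failed jobs until the retry limit is reached; return their ids."""
    rows = db.execute(
        "SELECT id, attempts FROM jobs WHERE status = 'FAILED' AND attempts < ?",
        (max_attempts,)).fetchall()
    for job_id, attempts in rows:
        db.execute("UPDATE jobs SET status = 'SUBMITTED', attempts = ? WHERE id = ?",
                   (attempts + 1, job_id))
    return [r[0] for r in rows]

update_status(2, "FAILED")
update_status(4, "DONE")
print(resubmit_failed())  # prints [2]: only the failed job is re-queued
```

Keeping all state in one table is what lets the same framework drive both monitoring queries and the resubmission loop.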
  305. Dr Hans G. Essel (GSI)
    24/03/2009, 16:30
    Online Computing
    oral
    For the new experiments at FAIR, new concepts for data acquisition systems have to be developed, such as the distribution of self-triggered, time-stamped data streams over high-performance networks for event building. The Data Acquisition Backbone Core (DABC) is a general-purpose software framework designed for the implementation of such data acquisition systems. It is based on C++ and...
  306. Dr Cano Ay (Goettingen)
    24/03/2009, 16:30
    Event Processing
    oral
    The ATLAS software framework, Athena, is written in C++, using Python for job configuration scripts. Physics generators, which provide the four-vectors describing the results of LHC collisions, are in general written by third parties and are not part of Athena. These libraries are linked from the LCG Generator Services (GENSER) distribution. Generators are run from within Athena and put the...
  307. Marco Clemencic (European Organization for Nuclear Research (CERN))
    24/03/2009, 16:30
    Software Components, Tools and Databases
    oral
    Ten years after its first version, the Gaudi software framework has undergone many changes and improvements, with a consequent increase of the code base. Those changes were almost always introduced preserving backward compatibility and minimising changes to the framework itself; obsolete code has been removed only rarely. After a release of Gaudi targeted to the...
  308. Dr Sergey Panitkin (Department of Physics - Brookhaven National Laboratory (BNL))
    24/03/2009, 16:30
    Hardware and Computing Fabrics
    oral
    Solid State Drives (SSDs) are a very promising storage technology for High Energy Physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited to situations where multiple jobs concurrently access data located on the same drive. They also have lower energy consumption and higher vibration tolerance than Hard Disk Drives (HDDs), which...
  309. Andrea Ceccanti (INFN CNAF, Bologna, Italy), Tanya Levshina (FERMI NATIONAL ACCELERATOR LABORATORY)
    24/03/2009, 16:30
    Grid Middleware and Networking Technologies
    oral
    The Grid community uses two well-established registration services, which allow users to be authenticated under the auspices of Virtual Organizations (VOs). The Virtual Organization Membership Service (VOMS), developed in the context of the Enabling Grid for E-sciencE (EGEE) project, is an Attribute Authority service that issues attributes expressing membership information of a subject...
  310. Dr Mark Sutton (University of Sheffield)
    24/03/2009, 16:50
    Online Computing
    oral
    The ATLAS experiment is one of two general-purpose experiments at the Large Hadron Collider (LHC). It has a three-level trigger, designed to reduce the 40 MHz bunch-crossing rate to about 200 Hz for recording. Online track reconstruction, an essential ingredient to achieve this design goal, is performed at the software-based second (L2) and third levels (Event Filter, EF), running on farms of...
  311. Mr Frank van Lingen (California Institute of Technology), Mr Stuart Wakefield (Imperial College)
    24/03/2009, 16:50
    Software Components, Tools and Databases
    oral
    Three different projects within CMS produce various workflow related data products: CRAB (analysis centric), ProdAgent (simulation production centric), T0 (real time sorting and reconstruction of real events). Although their data products and workflows are different, they all deal with job life cycle management (creation, submission, tracking, and cleanup of jobs). WMCore provides a set of...
  312. Mr Rune Sjoen (Bergen University College)
    24/03/2009, 16:50
    Hardware and Computing Fabrics
    oral
    The ATLAS data network interconnects up to 2000 processors using up to 200 edge switches and five multi-blade chassis devices. Classical, SNMP-based, network monitoring provides statistics on aggregate traffic, but something else is needed to be able to quantify single traffic flows. sFlow is an industry standard which enables an Ethernet switch to take a sample of the packets...
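    The principle behind sFlow, estimating per-flow traffic from 1-in-N packet samples, can be illustrated with a toy sketch. The flow keys and sampling rate are invented for the example, and a real sFlow agent samples pseudo-randomly rather than deterministically:

```python
from collections import Counter

SAMPLING_RATE = 4  # take roughly every 4th packet (illustrative rate)

def sample_and_estimate(packets, rate=SAMPLING_RATE):
    """Count sampled packets per flow, then scale up by the sampling rate."""
    sampled = Counter()
    for i, flow_key in enumerate(packets):
        if i % rate == 0:           # deterministic 1-in-N for the sketch;
            sampled[flow_key] += 1  # real sFlow samples pseudo-randomly
    return {flow: n * rate for flow, n in sampled.items()}

# 80 packets of flow "A" interleaved with 40 of flow "B"
packets = ["A", "A", "B"] * 40
print(sample_and_estimate(packets))
```

Because only a small fraction of packets is examined, the switch can keep up at line rate while still yielding statistically useful per-flow estimates.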
  313. Eduardo Rodrigues Figueiredo (University of Glasgow), Manuel Schiller (Universität Heidelberg)
    24/03/2009, 16:50
    Event Processing
    oral
    The LHCb Tracking system consists of four major sub-detectors and a dedicated magnet. A sequence of algorithms has been developed to optimally exploit the capabilities of all tracking sub-detectors. Different configurations of the same algorithms are used to reconstruct tracks at various stages of the trigger system, in the standard offline pattern recognition and under initial conditions...
  314. Gerardo GANIS (CERN)
    24/03/2009, 16:50
    Distributed Processing and Analysis
    oral
    PROOF-Lite is an implementation of the Parallel ROOT Facility (PROOF) optimized for many-core machines. It gives ROOT users a straightforward way to exploit many-core machines by using all cores in parallel for a data analysis or generic computing task controlled via the ROOT TSelector mechanism. PROOF-Lite is, as the name suggests, a lite version of PROOF, where the multi-tier...
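    The TSelector-style workflow (initialise, process each event, merge partial results) that PROOF-Lite parallelises across cores can be mimicked in a rough Python sketch using worker processes. This is a conceptual analogue only, not actual PROOF or ROOT code:

```python
from multiprocessing import Pool

def process_chunk(events):
    """Worker: run the per-event 'Process' step over one chunk of events."""
    histogram = {}
    for e in events:
        bin_ = e % 10                 # toy observable
        histogram[bin_] = histogram.get(bin_, 0) + 1
    return histogram

def merge(partials):
    """Master: merge partial results, analogous to TSelector::Terminate."""
    total = {}
    for h in partials:
        for k, v in h.items():
            total[k] = total.get(k, 0) + v
    return total

if __name__ == "__main__":
    events = list(range(1000))
    chunks = [events[i::4] for i in range(4)]   # one chunk per core
    with Pool(4) as pool:
        partials = pool.map(process_chunk, chunks)
    print(merge(partials))   # each of the 10 bins holds 100 events
```

The key property, as in PROOF-Lite, is that the per-event step is embarrassingly parallel and only the small partial results need merging at the end.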
  315. Andrea Ceccanti (CNAF - INFN), John White White (Helsinki Institute of Physics HIP)
    24/03/2009, 16:50
    Grid Middleware and Networking Technologies
    oral
    The new authorization service of the gLite middleware stack is presented. In the EGEE-II project, the overall authorization study and review gave recommendations that the authorization should be rationalized throughout the middleware stack. As per the accepted recommendations, the new authorization service is designed to focus on EGEE gLite computational components: WMS, CREAM, and...
  316. Belmiro Pinto (Universidade de Lisboa)
    24/03/2009, 17:10
    Online Computing
    oral
    The ATLAS experiment uses a complex trigger strategy to achieve the necessary Event Filter output rate, making it possible to optimize the storage and processing needs of these data. These needs are described in the ATLAS Computing Model, which embraces Grid concepts. The output coming from the Event Filter will consist of three main streams: a primary stream, the express stream and...
  317. Peter Onyisi (University of Chicago)
    24/03/2009, 17:10
    Software Components, Tools and Databases
    oral
    The ATLAS experiment at the Large Hadron Collider reads out 100 million electronic channels at a rate of 200 Hz. Before the data are shipped to storage and analysis centres across the world, they have to be checked for irregularities which would render them scientifically useless. Data quality offline monitoring provides prompt feedback from full first-pass event reconstruction...
  318. Maria Assunta Borgia (Unknown)
    24/03/2009, 17:10
    Event Processing
    oral
    The CMS Silicon Strip Tracker (SST), consisting of more than 10 million channels organized in about 16,000 detector modules, is the largest silicon strip tracker ever built for high energy physics experiments. The Data Quality Monitoring system for the Tracker has been developed within the CMS Software framework. More than 100,000 monitorable quantities need to be managed by the...
  319. Dr Oliver Keeble (CERN)
    24/03/2009, 17:10
    Grid Middleware and Networking Technologies
    oral
    Grid computing as currently understood is normally enabled through the deployment of integrated software distributions which expose specific interfaces to core resources (data, CPU), provide clients and also higher level services. This paper examines the reasons for this reliance on large distributions and discusses whether the benefits are genuinely worth the considerable investment...
  320. Mr Eric Grancher (CERN)
    24/03/2009, 17:10
    Hardware and Computing Fabrics
    oral
    The Oracle database system is used extensively in the High Energy Physics community. Access to the storage subsystem is one of the major components of the Oracle database. Oracle has introduced new ways to access and manage the storage subsystem in recent years, such as ASM (10.1), Direct NFS (11.1) and Exadata (11.1). This paper presents our experience with the different features linked to...
  321. Dr Sanjay Padhi (UCSD)
    24/03/2009, 17:10
    Distributed Processing and Analysis
    oral
    With the evolution of various grid federations, the Condor glide-ins represent a key feature in providing a homogeneous pool of resources using late-binding technology. The CMS collaboration uses the glide-in based Workload Management System, glideinWMS, for production (ProdAgent) and distributed analysis (CRAB) of the data. The Condor glide-in daemons traverse to the worker nodes,...
  322. Dr Jason Smith (Brookhaven National Laboratory), Ms Mizuki Karasawa (Brookhaven National Laboratory)
    24/03/2009, 17:30
    Hardware and Computing Fabrics
    oral
    The RACF provides computing support to a broad spectrum of scientific programs at Brookhaven. The continuing growth of the facility, the diverse needs of the scientific programs and the increasingly prominent role of distributed computing require the RACF to move from a system-based to a service-based SLA with our user communities. A service-based SLA allows the RACF to coordinate more...
  323. Dr Dantong Yu (BROOKHAVEN NATIONAL LABORATORY)
    24/03/2009, 17:30
    Distributed Processing and Analysis
    oral
    PanDA, the ATLAS Production and Distributed Analysis framework, has been identified as one of the most important services provided by the ATLAS Tier-1 facility at Brookhaven National Laboratory (BNL), and has been enhanced to what is now a 24x7x365 production system. During this period, PanDA has remained under active development for additional functionality and bug fixes, and processing requirements have...
  324. Dr Simone Pagan Griso (University and INFN Padova)
    24/03/2009, 17:30
    Grid Middleware and Networking Technologies
    oral
    Large international collaborations that use de-centralized computing models are becoming the custom rather than the exception in High Energy Physics. A good computing model for such big and geographically spread collaborations has to deal with the distribution of the experiment-specific software around the world. When the CDF experiment developed its software infrastructure, most computing was done on...
  325. Martin Woudstra (University of Massachusetts)
    24/03/2009, 17:30
    Event Processing
    oral
    The Muon Spectrometer for the ATLAS experiment at the LHC is designed to identify muons with transverse momentum greater than 3 GeV/c and measure muon momenta with high precision up to the highest momenta expected at the LHC. The 50-micron sagitta resolution translates into a transverse momentum resolution of 10% for muon transverse momenta of 1 TeV/c. The design resolution requires an...
  326. Zachary Miller (University of Wisconsin)
    24/03/2009, 17:30
    Software Components, Tools and Databases
    oral
    Many secure communication libraries used by distributed systems, such as SSL, TLS, and Kerberos, fail to make a clear distinction between the authentication, session, and communication layers. In this paper we introduce CEDAR, the secure communication library used by the Condor High Throughput Computing software, and present the advantages to a distributed computing system resulting...
  327. Mr Pablo Martinez Ruiz Del Arbol (Instituto de Física de Cantabria)
    24/03/2009, 17:30
    Online Computing
    oral
    The alignment of the Muon System of CMS is performed using different techniques: photogrammetry measurements, optical alignment and alignment with tracks. For track-based alignment, several methods are employed, ranging from a hit-impact point (HIP) algorithm and a procedure exploiting chamber overlaps to a global fit method based on the Millepede approach. For start-up alignment, cosmic muon...
  328. Dr Arno Straessner (IKTP, TU Dresden), Dr Matthias Schott (CERN)
    24/03/2009, 17:50
    Event Processing
    oral
    The determination of the ATLAS detector performance in data is essential for all physics analyses and even more important to understand the detector during the first data taking period. Hence a common framework for the performance determination provides a useful and important tool for various applications. We report on the implementation of a performance tool with common software...
  329. Johannes Elmsheuser (Ludwig-Maximilians-Universität München)
    24/03/2009, 17:50
    Distributed Processing and Analysis
    oral
    Distributed data analysis using Grid resources is one of the fundamental applications in high energy physics to be addressed and realized before the start of LHC data taking. The demands on resource management are very high: in every experiment, up to a thousand physicists will be submitting analysis jobs to the Grid. Appropriate user interfaces and helper applications have to be...
  330. Dr Rene Brun (CERN)
    24/03/2009, 17:50
    Software Components, Tools and Databases
    oral
    In the last few years ROOT has continued to consolidate and improve the existing code base and infrastructure. This includes a very smooth transition to SVN that subsequently enabled us to reorganize the existing libraries into semantic packages, which in turn helps improve the documentation. We also continued to improve performance and reduce the memory footprint, for example...
  331. Dr Ian Bird (CERN)
    24/03/2009, 17:50
    Grid Middleware and Networking Technologies
    oral
    This paper will provide a review of the middleware that is currently used in WLCG, and how that compares to what was initially expected when the project started. The talk will look at some of the lessons to be learned, and why what is in use today is sometimes quite different from what may have been anticipated. For the future it is clear that finding the effort for long term support and...
  332. Jean-Christophe Garnier (Conseil Europeen Recherche Nucl. (CERN)-Unknown-Unknown)
    24/03/2009, 17:50
    Online Computing
    oral
    The High Level Trigger and Data Acquisition system selects about 2 kHz of events out of the 40 MHz of beam crossings. The selected events are consolidated into files on onsite storage and then sent to permanent storage for subsequent analysis on the Grid. For local and full-chain tests, a method is needed to exercise the data flow through the High Level Trigger when there are no actual data available....
  333. Mr Christopher Hollowell (Brookhaven National Laboratory), Mr Robert Petkus (Brookhaven National Laboratory)
    24/03/2009, 17:50
    Hardware and Computing Fabrics
    oral
    The RHIC/ATLAS Computing Facility (RACF) processor farm at Brookhaven National Laboratory currently provides over 7200 CPU cores (over 13 million SpecInt2000 of processing power) for computation. Our ability to supply this level of computational capacity in a data center limited by physical space, cooling and electrical power is primarily due to the availability of increasingly dense...
  334. Dr Josva Kleist (Nordic Data Grid Facility)
    24/03/2009, 18:10
    Grid Middleware and Networking Technologies
    oral
    The Tier-1 facility operated by the Nordic DataGrid Facility (NDGF) differs significantly from other Tier-1s in several aspects: it is not located at one or a few sites but is instead distributed throughout the Nordic countries, and it is not under the governance of a single organisation but is instead built from resources under the control of a number of different national organisations. Being...
  335. Pablo Saiz (CERN)
    24/03/2009, 18:10
    Grid Middleware and Networking Technologies
    oral
    AliEn is the Grid interface that ALICE has developed for its distributed computing. AliEn provides all the components needed to build a distributed environment, including a file and metadata catalogue, a priority-based job execution model and a file replication system. Another of the components provided by AliEn is an automatic software package installation service, PackMan....
  336. David Gonzalez Maline (CERN)
    24/03/2009, 18:10
    Software Components, Tools and Databases
    oral
    ROOT, as a scientific data analysis framework, provides extensive capabilities via graphical user interfaces (GUIs) for performing interactive analysis and visualizing data objects such as histograms and graphs. A new fitting interface has been developed for performing, exploring and comparing fits on data point sets such as histograms, multi-dimensional graphs or trees. With this new...
  337. Peter Hristov (CERN)
    24/03/2009, 18:10
    Event Processing
    oral
    The fast feedback from the offline reconstruction is essential for understanding the ALICE detector and the reconstruction software, especially for the first LHC physics studies. For this purpose, ALICE offline reconstruction based on the Parallel ROOT Facility (PROOF) has been designed and developed. The architecture and implementation are briefly described. Particular attention is given to...
  338. Bjoern Hallvard Samset (Fysisk institutt - University of Oslo)
    24/03/2009, 18:10
    Distributed Processing and Analysis
    oral
    A significant amount of the computing resources available to the ATLAS experiment at the LHC are connected via the ARC grid middleware. ATLAS ARC-enabled resources, which consist of both major computing centers at Tier-1 level and lesser, local clusters at Tier-2 and 3 level, have shown excellent performance running heavy Monte Carlo (MC) production for the experiment. However, with the...
  339. Mr Bjorn (on behalf of the ATLAS Tile Calorimeter system) Nordkvist (Stockholm University)
    24/03/2009, 18:10
    Online Computing
    oral
    The ATLAS Tile Calorimeter is ready for data taking during the proton-proton collisions provided by the Large Hadron Collider (LHC). The Tile Calorimeter is a sampling calorimeter with iron absorbers and scintillators as the active medium. The scintillators are read out by wavelength-shifting fibers and PMTs. The LHC provides collisions every 25 ns, putting very stringent requirements on the...
  340. Prof. Hans Döbbeling (DANTE)
    25/03/2009, 09:00
    Plenary
    oral
    Optical Networks - Evolution and Future
  341. Mine Altunay (FERMI NATIONAL ACCELERATOR LABORATORY)
    25/03/2009, 09:30
    Plenary
    oral
    Grid Security and Identity Management
  342. Prof. Jerome Lauret (BNL)
    25/03/2009, 10:00
    Plenary
    oral
    Computing for the RHIC Experiments
  343. Dr Paolo Calafiura (LBL)
    25/03/2009, 11:00
    Plenary
    oral
    When experiments get close to data taking, the pace of software development becomes frantic, and experiment librarians and software developers rely on performance monitoring and optimization to keep core resource usage (memory and CPU) under control. Performance monitoring and optimization share many tools, but they are distinct processes with very different workflows. In this talk we...
  344. Prof. Markus Elsing (CERN)
    25/03/2009, 11:30
    Plenary
    oral
    After more than a decade of software development the LHC experiments have successfully released their offline software for the commissioning with data. Sophisticated detector description models are necessary to match the physics requirements on the simulation, while fast geometries are in use to speed up the high level trigger and offline track reconstruction. The experiments explore...
  345. Dr Dirk Duellmann (CERN)
    25/03/2009, 12:00
    Plenary
    oral
    Data and metadata management at the Petabyte scale remains among the key challenges for the High Energy Physics community. Efficient distribution of, and reliable access to, Petabytes of distributed data in files and relational databases will be required to exploit the physics potential of LHC data and the resources available to the experiments in the world-wide LHC computing grid. In this presentation...
  346. Dr Dantong Yu (BROOKHAVEN NATIONAL LABORATORY)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The TeraPaths, Lambda Station, and Phoebus projects were funded by the Department Of Energy's (DOE) network research program to support efficient, predictable, prioritized petascale data replication in modern high-speed networks, directly address the "last-mile" problem between local computing resources and WAN paths, and provide interfaces to modern, high performance hybrid networks with low...
  347. Mr Eiji Inoue (KEK)
    26/03/2009, 08:00
    Online Computing
    poster
    We report on a DAQ system based on DAQ-Middleware. The system consists of a GUI client application and CC/NET readout programs. CC/NET is a CAMAC crate controller module created in joint research between TOYO Corporation and KEK. Based on pipeline processing, CC/NET can operate at the limit speed of the CAMAC specification. It has a single-board computer running the Linux operating system...
  348. Dr Iosif Legrand (CALTECH)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    To satisfy the demands of data intensive applications it is necessary to move to far more synergetic relationships between data transfer applications and the network infrastructure. The main objective of the High Performance Data Transfer Service we present is to effectively use the available network infrastructure capacity and to coordinate, manage and control large data transfer tasks...
  349. Dr Jingyan Shi (IHEP)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The operation of the BESIII experiment started in July 2008. More than 5 PB of data will be produced in the coming 5 years. To increase the efficiency of data analysis and simulation, it is sometimes necessary for physicists to cut a long job into a number of small jobs and execute them in a distributed mode. A tool has been developed for BESIII job submission and management. With the tool,...
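    Cutting one long job into near-equal small jobs, as such a tool must, amounts to partitioning the event range. A minimal sketch, assuming jobs are described by a first-event offset and an event count (the field names are invented):

```python
def split_job(first_event, n_events, n_subjobs):
    """Partition [first_event, first_event + n_events) into near-equal subranges."""
    base, extra = divmod(n_events, n_subjobs)
    subjobs, start = [], first_event
    for i in range(n_subjobs):
        count = base + (1 if i < extra else 0)  # spread the remainder over the first subjobs
        subjobs.append({"first": start, "count": count})
        start += count
    return subjobs

print(split_job(0, 10, 3))
# [{'first': 0, 'count': 4}, {'first': 4, 'count': 3}, {'first': 7, 'count': 3}]
```

Each dictionary is then enough to generate one independent grid job, and the union of the subranges covers the original job exactly once.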
  350. Dr Wenji Wu (Fermi National Accelerator Laboratory)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Distributed petascale computing involves analysis of massive data sets in a large-scale cluster computing environment. Its major concern is to efficiently and rapidly move the data sets to the computation and send results back to users or storage. However, the needed efficiency of data movement has hardly been achieved in practice. Present cluster operating systems usually are general-purpose...
  351. Dr Gabriele Compostella (CNAF INFN), Dr Manoj Kumar Jha (INFN Bologna)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Being a large international collaboration established well before the full development of the Grid as the main computing tool for High Energy Physics, CDF has recently changed and improved its computing model, decentralizing some parts of it in order to be able to exploit the rising number of distributed resources available nowadays. Despite those efforts, while the large majority of CDF...
  352. Stefano Bagnasco (INFN Torino)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Current Grid deployments for LHC computing (namely the WLCG infrastructure) do not allow efficient parallel interactive processing of data. In order to allow physicists to interactively access subsets of data (e.g. for algorithm tuning and debugging before running over a full dataset) parallel Analysis Facilities based on PROOF have been deployed by the ALICE experiment at CERN and elsewhere....
  353. Mr Roland Moser (CERN and Technical University of Vienna)
    26/03/2009, 08:00
    Online Computing
    poster
    The CMS Data Acquisition System consists of O(1000) interdependent services. A monitoring system providing exception and application-specific data is essential for the operation of this cluster. Due to the number of involved services, the amount of monitoring data is higher than a human operator can handle efficiently. Thus, moving the expert knowledge for error analysis from the operator to...
  354. Mr Mario Lassnig (CERN & University of Innsbruck)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Unrestricted user behaviour is becoming one of the most critical properties in data intensive supercomputing. While policies can help to maintain a usable environment in clearly directed cases, it is important to know how users interact with the system so that it can be adapted dynamically, automatically and timely. We present a statistical and generative model that can replicate and simulate...
  355. Dr Andrew Stephen McGough (Imperial College London)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The Grid as an environment for large-scale job execution is now moving beyond the prototyping phase to real deployments on national and international scales, providing real computational cycles to application scientists. As the Grid moves into production, characteristics of how users are exploiting the resources and how the resources are coping with production load are essential in...
  356. Dr Vivian ODell (FNAL)
    26/03/2009, 08:00
    Online Computing
    poster
    The CMS event builder assembles events accepted by the first level trigger and makes them available to the high-level trigger. The system needs to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GBytes/s originating from approximately 500 sources. This paper presents the chosen hardware and software architecture. The system consists of 2 stages: an initial...
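    The assembly step, collecting one fragment per source and releasing an event once all sources have reported, can be sketched as follows. The source count is reduced to three for the example, and the class and method names are illustrative, not the actual CMS software:

```python
N_SOURCES = 3  # the real system has ~500 sources; 3 keeps the sketch small

class EventBuilder:
    """Assemble fragments keyed by event ID; emit an event once all sources report."""
    def __init__(self, n_sources=N_SOURCES):
        self.n_sources = n_sources
        self.partial = {}  # event_id -> {source_id: payload}

    def add_fragment(self, event_id, source_id, payload):
        frags = self.partial.setdefault(event_id, {})
        frags[source_id] = payload
        if len(frags) == self.n_sources:      # event complete: hand it downstream
            return self.partial.pop(event_id)
        return None                           # still waiting for other sources

eb = EventBuilder()
eb.add_fragment(42, 0, b"aa")
eb.add_fragment(42, 1, b"bb")
print(eb.add_fragment(42, 2, b"cc"))  # complete event: {0: b'aa', 1: b'bb', 2: b'cc'}
```

Staging such builders in two layers, as the abstract describes, reduces the fan-in each node must handle while keeping the same complete-when-all-sources-report logic.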
  357. Dr Wainer Vandelli (Conseil Europeen Recherche Nucl. (CERN))
    26/03/2009, 08:00
    Online Computing
    poster
    The ATLAS DataFlow infrastructure is responsible for the collection and conveyance of event data from the detector front-end electronics to the mass storage. Several optimized and multi-threaded applications fulfill this purpose operating over a multi-stage Gigabit Ethernet network which is the backbone of the ATLAS Trigger and Data Acquisition System. The system must be able to efficiently...
  358. Raquel Pezoa Rivera (Univ. Tecnica Federico Santa Maria (UTFSM))
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The ATLAS Distributed Computing system provides a set of tools and libraries enabling data movement, processing and analysis in a grid environment. While it was reaching a state of maturity high enough for real data taking, it became clear that one component was missing: one exposing consistent information regarding site topology, service and resource information from all three distinct ATLAS grids (EGEE,...
  359. Denis Oliveira Damazio (Brookhaven National Laboratory)
    26/03/2009, 08:00
    Online Computing
    poster
    The ATLAS detector is undergoing an intense commissioning effort with cosmic rays, preparing for the first LHC collisions next spring. Combined runs with all of the ATLAS subsystems are being taken in order to evaluate the detector performance. This is also a unique opportunity for the trigger system to be studied with different detector operation modes, such as different event rates and...
  360. Dr Luca Fiorini (IFAE Barcelona)
    26/03/2009, 08:00
    Online Computing
    poster
    TileCal is the barrel hadronic calorimeter of the ATLAS experiment, presently in an advanced state of commissioning with cosmic and single-beam data at the LHC accelerator. The complexity of the experiment, the number of electronics channels and the high rate of acquired events require a systematic strategy for preparing the system for data taking. This is done through a precise...
  361. Mr Costin Grigoras (CERN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    A complex software environment such as the ALICE Computing Grid infrastructure requires permanent control and management for the large set of services involved. Automating control procedures reduces the human interaction with the various components of the system and yields better availability of the overall system. In this paper we will present how we used the MonALISA framework to gather,...
  362. Hongyu ZHANG (Experimental Physics Center, Experimental Physics Center, Chinese Academy of Sciences, Beijing, China)
    26/03/2009, 08:00
    Online Computing
    poster
    BEPCII is designed with a peak luminosity of 10^33 cm^-2 s^-1. After the Level 1 trigger, the event rate is estimated to be around 4000 Hz at the J/ψ peak. A pipelined front-end electronics system has been designed and developed, and the BESIII DAQ system has been built to satisfy the requirements of event readout and processing at such a high event rate. The BESIII DAQ system consists of about 100 high...
  363. Riccardo Zappi (INFN-CNAF)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    In the storage model adopted by WLCG, the quality of service for a storage capacity provided by an SRM-based service is described by the concept of Storage Class. In this context, two parameters are relevant: the Retention Policy and the Access Latency. With the advent of cloud-based resources, virtualized storage capabilities are available like the Amazon Simple Storage Service (Amazon S3)....
    Go to contribution page
  364. Dr Daniele Bonacorsi (CMS experiment / INFN-CNAF, Bologna, Italy)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    During February and May 2008, CMS participated in the Combined Computing Readiness Challenge (CCRC'08) together with all the other LHC experiments. The purpose of this worldwide exercise was to check the readiness of the computing infrastructure for LHC data taking. Another set of major CMS tests, the Computing, Software and Analysis challenge (CSA'08), as well as CMS cosmic runs, were also...
    Go to contribution page
  365. Dr Timm Steinbeck (Institute of Physics)
    26/03/2009, 08:00
    Online Computing
    poster
    For the ALICE heavy-ion experiment a large cluster will be used to perform the last triggering stages in the High Level Trigger. For the first year of operation the cluster consists of about 100 SMP nodes with 4 or 8 CPU cores each, to be increased to more than 1000 nodes for the later years of operation. During the commissioning phases of the detector, the preparations for first LHC...
    Go to contribution page
  366. Claudia Ciocca (INFN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    In the framework of WLCG, Tier-1s need to manage large volumes of data in the PB range. Moreover, they need to be able to exchange data with CERN and with the other centres (both Tier-1s and Tier-2s) at a sustained throughput of the order of hundreds of MB/s over the WAN, while at the same time offering fast and reliable access to the computing farm. In order to cope with...
    Go to contribution page
  367. Dr Volker Friese (GSI Darmstadt)
    26/03/2009, 08:00
    Online Computing
    poster
    The Compressed Baryonic Matter experiment (CBM) is one of the core experiments to be operated at the future FAIR accelerator complex in Darmstadt, Germany, from 2014 on. It will investigate heavy-ion collisions at moderate beam energies but extreme interaction rates, which give access to extremely rare probes such as open charm or charmonium decays near the production threshold. The high...
    Go to contribution page
  368. Daniel Charles Bradley (High Energy Physics)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    A number of recent enhancements to the Condor batch system have been stimulated by the challenges of LHC computing. The result is a more robust, scalable, and flexible computing platform. One product of this effort is the Condor JobRouter, which serves as a high-throughput scheduler for feeding multiple (e.g. grid) queues from a single input job queue. We describe its principles and how it...
    Go to contribution page
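    The single-input-queue idea behind the JobRouter can be sketched as a toy scheduler (illustrative only; the function names and the least-loaded routing policy are assumptions, not the actual Condor JobRouter logic):

    ```python
    import heapq
    from collections import Counter

    def route_jobs(jobs, queues):
        """Feed jobs from one input list into several destination queues,
        always picking the currently least-loaded queue (toy model)."""
        # heap of (current load, queue name)
        heap = [(0, name) for name in sorted(queues)]
        heapq.heapify(heap)
        assignment = {}
        for job in jobs:
            load, name = heapq.heappop(heap)
            assignment[job] = name
            heapq.heappush(heap, (load + 1, name))
        return assignment

    result = route_jobs([f"job{i}" for i in range(6)], ["siteA", "siteB", "siteC"])
    counts = Counter(result.values())  # jobs spread evenly: 2 per queue
    ```

    The real JobRouter additionally retries failed jobs and applies per-route policies; this sketch only shows the fan-out from one queue to many.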
  369. Vardan Gyurjyan (JEFFERSON LAB)
    26/03/2009, 08:00
    Online Computing
    poster
    The ever-growing heterogeneity of physics experiment control systems presents a real challenge: how to uniformly describe control system components and their operational details. The Control Oriented Ontology Language (COOL) is an experiment control meta-data modeling language that provides a generic means for the concise and uniform representation of physics experiment control processes and components,...
    Go to contribution page
  370. Xavier Mol (Forschungszentrum Karlsruhe)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    D-Grid is the German initiative for building a national computing grid. Users who want to work within the German grid need dedicated software, called ‘middleware’. As D-Grid site administrators are free to choose their middleware according to the needs of their users, the project ‘DGI (D-Grid Integration) reference installation’ was launched. Its purpose is to assist the site...
    Go to contribution page
  371. Mr Antonio Delgado Peris (CIEMAT)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Grid infrastructures constitute nowadays the core of the computing facilities of the biggest LHC experiments. These experiments produce and manage petabytes of data per year and run thousands of computing jobs every day to process that data. It is the duty of metaschedulers to allocate the tasks to the most appropriate resources at the proper time. Our work reviews the policies that have...
    Go to contribution page
  372. Peter Onyisi (University of Chicago)
    26/03/2009, 08:00
    Online Computing
    poster
    At the ATLAS experiment, the Detector Control System (DCS) is used to oversee detector conditions and supervise the running of equipment. It is essential that information from the DCS about the status of individual sub-detectors be extracted and taken into account when determining the quality of data taken and its suitability for different analyses. DCS information is written online to...
    Go to contribution page
  373. Mr Yuriy Ilchenko (SMU)
    26/03/2009, 08:00
    Online Computing
    poster
    The start of collisions at the LHC brings with it much excitement and many unknowns. It’s essential at this point in the experiment to be prepared with user-friendly tools to quickly and efficiently determine the quality of the data. Easy visualization of data for the shift crew and experts is one of the key factors in the data quality assessment process. The Data Quality Monitoring...
    Go to contribution page
  374. Dr Hiroyuki Matsunaga (ICEPP, University of Tokyo)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    A Tier-2 regional center is running at the University of Tokyo in Japan. This center receives a large amount of ATLAS experiment data from the Tier-1 center in France. Although the link between the two centers has 10 Gbps bandwidth, it is not a dedicated link but is shared with other traffic, and the round-trip time is 280 ms. It is not easy to exploit the available bandwidth...
    Go to contribution page
  375. Mr Vladlen Timciuc (California Institute of Technology)
    26/03/2009, 08:00
    Online Computing
    poster
    The CMS detector at the LHC is equipped with a high-precision electromagnetic crystal calorimeter (ECAL). The crystals experience a transparency change when exposed to radiation during LHC operation, which recovers in the absence of irradiation on a time scale of hours. This change of the crystal response is monitored with a laser system which performs a transparency measurement of each crystal of...
    Go to contribution page
  376. Dr Silke Halstenberg (Karlsruhe Institute of Technology)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The dCache installation at GridKa, the German Tier-1 center, is ready for LHC data taking. After years of tuning and dry runs, several software and operational bottlenecks have been identified. This contribution describes several procedures to improve the stability and reliability of the Tier-1 storage setup, ranging from redundant hardware and disaster planning to fine-grained monitoring...
    Go to contribution page
  377. Mr Tigran Mkrtchyan Mkrtchyan (Deutsches Elektronen-Synchrotron DESY)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Starting in spring 2009, all WLCG data management services have to be ready and prepared to move terabytes of data from CERN to the Tier-1 centers worldwide, and from the Tier-1s to their corresponding Tier-2s. Reliable file transfer services, like FTS, on top of the SRM v2.2 protocol play a major role in this game. Nevertheless, moving large chunks of data is only part of the challenge....
    Go to contribution page
  378. Dr Paul Millar (DESY)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    In the gLite grid model a site will typically have a Storage Element (SE) that has no direct mechanism for updating any central or experiment-specific catalogues. This loose coupling was a deliberate decision that simplifies SE design; however, a consequence of this is that the catalogues may provide an incorrect view of what is stored on a SE. In this paper, we present work to allow...
    Go to contribution page
  379. Dr James Letts (Department of Physics-Univ. of California at San Diego (UCSD))
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The CMS experiment at CERN is preparing for LHC data taking in several computing preparation activities. In early 2007 a traffic load generator infrastructure for distributed data transfer tests was designed and deployed to equip the WLCG Tiers which support the CMS Virtual Organization with a means for debugging, load-testing and commissioning data transfer routes among CMS Computing Centres....
    Go to contribution page
  380. Dr Sergio Andreozzi (INFN-CNAF)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The GLUE 2.0 specification is an upcoming OGF specification for standards-based Grid resource characterization, supporting functionalities such as discovery, selection and monitoring. An XML Schema realization of GLUE 2.0 is available; nevertheless, Grids still lack a standard information service interface, so there is no uniformly agreed solution for exposing resource descriptions. On...
    Go to contribution page
  381. Dr Vincenzo Spinoso (INFN, Bari)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Together with the start of LHC, high-energy physics researchers will start massive usage of LHC Tier2s. It is essential to supply physics user groups with a simple and intuitive “user-level” summary of their associated T2 services’ status, showing for example available, busy and unavailable resources. At the same time, site administrators need “technical level” monitoring, namely a view of...
    Go to contribution page
  382. Gabriel Caillat (LAL, Univ. Paris Sud, IN2P3/CNRS)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Desktop grids, such as XtremWeb and BOINC, and service grids, such as EGEE, are two different approaches for science communities to gather computing power from a large number of computing resources. Nevertheless, little work has been done to combine these two Grid technologies in order to establish a seamless and vast grid resource pool. In this paper we present the EGEE service grid, the...
    Go to contribution page
  383. Mr Michal ZEROLA (Nuclear Physics Inst., Academy of Sciences)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    For the past decade, HENP experiments have been moving towards a distributed computing model in an effort to concurrently process tasks over enormous data sets that have been growing in size over time. In order to optimize all available (geographically spread) resources and minimize the processing time, it is also necessary to address the question of efficient data transfers and...
    Go to contribution page
  384. Dr Simone Campana (CERN/IT/GS)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The ATLAS Experiment at CERN developed an automated system for the distribution of simulated and detector data. This system, which partly consists of various ATLAS-specific services, relies strongly on the WLCG service infrastructure, at the level of middleware components, service deployment and operations. Because of the complexity of the system and its highly distributed nature, a...
    Go to contribution page
  385. Julia Andreeva (CERN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    One of the most important conclusions from the analysis of the CCRC'08 results and from operational experience after CCRC'08 is that the experiment-specific monitoring systems are the main sources of monitoring information. They are widely used by people taking computing shifts, and they are the first to detect problems of various natures. Though these systems provide rather...
    Go to contribution page
  386. Dr Chadwick Keith (Fermilab)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The architecture of FermiGrid facilitates seamless interoperation of the multiple heterogeneous Fermilab resources with the resources of the other...
    Go to contribution page
  387. Pablo Saiz (CERN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The LHC experiments are going to start collecting data during the spring of 2009. The number of people and centers involved in these experiments sets a new record in the physics community: in CMS alone there are more than 3600 physicists and more than 60 centers distributed all over the world. Managing such a large number of distributed sites and services is not a trivial task....
    Go to contribution page
  388. Dr Armin Scheurer (Karlsruhe Institute of Technology)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The CMS computing model anticipates various hierarchically linked tier centres to counter the challenges provided by the enormous amounts of data which will be collected by the CMS detector at the Large Hadron Collider, LHC, at CERN. During the past years, various computing exercises were performed to test the readiness of the computing infrastructure, the Grid middleware and the experiment's...
    Go to contribution page
  389. Mr Philippe Canal (Fermilab)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The Open Science Grid's usage accounting solution is a system known as "Gratia". Now that it has been deployed successfully, the Open Science Grid's next accounting challenge is to correctly interpret and make the best possible use of the information collected. One such issue is: "Did we use, and/or get credit for, the resource we think we used?" Another example is the problem of ensuring that...
    Go to contribution page
  390. Mr David Collados Polidura (CERN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The Worldwide LHC Computing Grid (WLCG) is based on a four-tiered model that comprises collaborating resources from different grid infrastructures such as EGEE and OSG. While grid middleware provides core services on a variety of platforms, monitoring tools like Gridview, SAM, Dashboards and GStat are used for the monitoring, visualization and evaluation of the WLCG infrastructure. The...
    Go to contribution page
  391. Andrew McNab (Unknown)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    We present an overview of the current status of the GridSite toolkit, describing the new security model for interactive and programmatic uses introduced in the last year. We discuss our experiences of implementing these internal changes and how they have been promoted by requirements from users and wider security trends in Grids (such as CSRF). Finally, we explain how these have improved the...
    Go to contribution page
  392. Mr Laurence Field (CERN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Authors: Laurence Field, Felix Ehm, Joanna Huang, Min Tsai. Grid Information Systems are mission-critical components in today's production grid infrastructures. They enable users, applications and services to discover which services exist in the infrastructure, along with further information about each service's structure and state. It is therefore important that the information system components...
    Go to contribution page
  393. Dantong Yu (BNL)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Modern nuclear and high-energy experiments yield large amounts of data and thus require efficient, high-capacity storage and transfer. BNL, the hosting site for the RHIC experiments and the US center for LHC ATLAS, plays a pivotal role in transferring data to and from other sites in the US and around the world in a tiered fashion for data distribution and processing. Each component in the...
    Go to contribution page
  394. Gyoergy Vesztergombi (Res. Inst. Particle & Nucl. Phys. - Hungarian Academy of Science)
    26/03/2009, 08:00
    Online Computing
    poster
    An unusually high-intensity (10^11 protons/sec) beam is planned to be extracted onto fixed targets at the FAIR accelerator at energies of up to 90 GeV. Using this beam, the FAIR-CBM experiment provides a unique high-luminosity facility to measure high-pT phenomena with unprecedented sensitivity, exceeding that of previous experiments by orders of magnitude. With a 1% interaction target, the expected minimum bias event...
    Go to contribution page
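    The quoted beam intensity and target interaction fraction fix the expected interaction rate by simple arithmetic (a back-of-the-envelope check, not a figure from the contribution itself):

    ```python
    # Numbers taken from the abstract: 10^11 protons/sec on a 1% interaction target.
    beam_rate = 1e11         # protons per second delivered to the target
    target_fraction = 0.01   # fraction of beam protons interacting (1% target)

    # Expected minimum-bias interaction rate
    interaction_rate = beam_rate * target_fraction  # 1e9 interactions per second
    ```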
  395. Dr Christopher Jung (Forschungszentrum Karlsruhe)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Most Tier-1 centers of the LHC Computing Grid use dCache as their storage system. dCache uses a cost model incorporating CPU and space costs for the distribution of data on its disk pools. Storage resources at Tier-1 centers are usually upgraded once or twice a year according to given milestones. One of the effects of this procedure is the accumulation of heterogeneous hardware resources....
    Go to contribution page
  396. Timur Perelmutov (FERMI NATIONAL ACCELERATOR LABORATORY)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The dCache disk caching file system has been chosen by a majority of LHC Experiments' Tier 1 centers for their data storage needs. It is also deployed at many Tier 2 centers. In preparation for the LHC startup, very large installations of dCache - up to 3 Petabytes of disk - have already been deployed, and the systems have operated at transfer rates exceeding 2000 MB/s over the WAN. As the LHC...
    Go to contribution page
  397. Ms Giulia Taurelli (CERN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    HSM systems such as the CERN’s Advanced STORage manager (CASTOR) [1] are responsible for storing Petabytes of data which is first cached on disk and then persistently stored on tape media. The contents of these tapes are regularly repacked from older, lower-density media to new-generation, higher-density media in order to free up physical space and ensure long term data integrity and...
    Go to contribution page
  398. Mr laurence field (cern)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Authors: Laurence Field, Markus Schulz, Felix Ehm, Tim Dyce. Grid Information Systems are mission-critical components in today's production grid infrastructures. They enable users, applications and services to discover which services exist in the infrastructure, along with further information about each service's structure and state. As the Grid Information System is pervasive throughout the...
    Go to contribution page
  399. Dr Tony Wildish (PRINCETON)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    PhEDEx, the CMS data-placement system, uses the FTS service to transfer files. Towards the end of 2007, PhEDEx was beginning to show some serious scaling issues: excessive numbers of processes on the site VOBOX running PhEDEx, poor efficiency in the use of FTS job slots, high latency for failure retries, and other problems. The core PhEDEx architecture was changed in May 2008 to eliminate...
    Go to contribution page
  400. Dr Sergey Linev (GSI Darmstadt)
    26/03/2009, 08:00
    Online Computing
    poster
    New experiments at FAIR, such as CBM, require new data acquisition concepts: instead of a central trigger, self-triggered electronics with time-stamped readout are to be used. A first prototype of such a system was implemented in the form of a CBM readout controller (ROC) board, which is designed to read time-stamped data from a front-end board equipped with nXYTER chips and transfer that...
    Go to contribution page
  401. Daniel Bradley (University of Wisconsin)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Physicists have access to thousands of CPUs in grid federations such as OSG and EGEE. With the start-up of the LHC, it is essential for individuals or groups of users to wrap together available resources from multiple sites across multiple grids under a higher user-controlled layer in order to provide a homogeneous pool of available resources. One such system is glideinWMS, which is based on...
    Go to contribution page
  402. Sergey Kalinin (Universite Catholique de Louvain)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    As the Large Hadron Collider (LHC) at CERN, Geneva, began operation in September 2008, the large-scale LHC Computing Grid (LCG) is meant to process and store the large amounts of data created in simulating, measuring and analyzing particle physics experimental data. Data acquired by ATLAS, one of the four big experiments at the LHC, are analyzed using compute jobs running...
    Go to contribution page
  403. Lev Shamardin (Scobeltsyn Institute of Nuclear Physics, Moscow State University (SINP MSU))
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Grid systems are used for computation and data processing in various applied areas such as biomedicine, nanotechnology and materials science, cosmophysics and high-energy physics, as well as in a number of industrial and commercial areas. The traditional method of executing jobs in a grid is to run them directly on the cluster nodes, which limits the choice of the operational environment...
    Go to contribution page
  404. Somogyi Peter (Technical University of Budapest)
    26/03/2009, 08:00
    Online Computing
    poster
    LHCb is one of the four major experiments nearing completion at the Large Hadron Collider (LHC). Monitoring the quality of the acquired data is important because it allows verification of the detector performance. Anomalies, such as missing values or unexpected distributions, can be indicators of a malfunctioning detector, resulting in poor data quality. Spotting faulty components can be...
    Go to contribution page
  405. Dr Andrea Chierici (INFN-CNAF)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Quattor is a system administration toolkit providing a powerful, portable, and modular set of tools for the automated installation, configuration, and management of clusters and farms. It is developed as a community effort and provided as open-source software. Today, quattor is being used to manage at least 10 separate infrastructures spread across Europe. These range from massive single-site...
    Go to contribution page
  406. Mr Adolfo Vazquez (Universidad Complutense de Madrid)
    26/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The MAGIC telescope, a 17-meter Cherenkov telescope located on La Palma (Canary Islands), is dedicated to the study of the universe in very-high-energy gamma rays. These particles arrive at the Earth's atmosphere, producing atmospheric showers of secondary particles that can be detected on the ground through their Cherenkov radiation. MAGIC relies on a large number of Monte Carlo simulations for the...
    Go to contribution page
  407. Jeremiah Jet Goodson (Department of Physics - State University of New York (SUNY))
    26/03/2009, 08:00
    Online Computing
    poster
    The ATLAS detector at the Large Hadron Collider is expected to collect an unprecedented wealth of new data at a completely new energy scale. In particular its Liquid Argon electromagnetic and hadronic calorimeters will play an essential role in measuring final states with electrons and photons and in contributing to the measurement of jets and missing transverse energy. Efficient monitoring...
    Go to contribution page
  408. Luciano Orsini (CERN)
    26/03/2009, 08:00
    Online Computing
    poster
    The CMS data acquisition system comprises O(10000) interdependent services that need to be monitored in near real time. The ability to monitor a large number of distributed applications accurately and effectively is of paramount importance for operation. Application monitoring entails the collection of a large number of simple and composed values made available by the software...
    Go to contribution page
  409. Dr Raja Nandakumar (Rutherford Appleton Laboratory)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    DIRAC, the LHCb community Grid solution, is intended to reliably run large data mining activities. The DIRAC system consists of various services (which wait to be contacted to perform actions) and agents (which carry out periodic activities) to direct jobs as required. An important part of ensuring the reliability of the infrastructure is the monitoring and logging of these DIRAC distributed...
    Go to contribution page
  410. Mr Daniel Filipe Rocha Da Cunha Rodrigues (CERN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The MSG (Messaging System for the Grid) is a set of tools that make a Message Oriented platform available for communication between grid monitoring components. It has been designed specifically to work with the EGEE operational tools and acts as an integration platform to improve the reliability and scalability of the existing operational services. MSG is a core component as WLCG monitoring...
    Go to contribution page
  411. Mr Andrey Bobyshev (FERMILAB)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    There are a number of active projects to design and develop a data control plane capability that steers traffic onto alternate network paths instead of the default path provided through standard IP connectivity. Lambda Station, developed by Fermilab and Caltech, is one example of such a solution, and is currently deployed at the US CMS Tier-1 facility at Fermilab and various Tier-2 sites. When the...
    Go to contribution page
  412. Vakhtang Tsiskaridze (Tbilisi State University, Georgia)
    26/03/2009, 08:00
    Online Computing
    poster
    At present, the Tile Calorimeter ROD DSPs reconstruct Amplitude, Time and Quality Factor (QF) parameters at a 100 kHz rate using the Optimal Filtering reconstruction method. If the QF is good enough, only the Amplitude, Time and QF are stored; otherwise the data quality is considered bad and it is proposed to store the raw data for further studies. Without any compression, bandwidth limitation...
    Go to contribution page
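    The store-or-keep-raw decision described above can be sketched as follows (a minimal model; the function name, output layout and QF cut value are assumptions, not the actual DSP code):

    ```python
    def pack_channel(amplitude, time, qf, raw_samples, qf_cut=15.0):
        """Toy model of the ROD DSP output decision."""
        if qf < qf_cut:
            # Good-quality fit: ship only the reconstructed triplet.
            return {"amp": amplitude, "time": time, "qf": qf}
        # Poor fit: keep the raw samples as well, so the channel
        # can be re-reconstructed offline.
        return {"amp": amplitude, "time": time, "qf": qf, "raw": raw_samples}

    samples = [51, 60, 740, 512, 120, 70, 55]
    good = pack_channel(812.5, 1.2, 3.4, samples)   # compact record only
    bad = pack_channel(812.5, 1.2, 42.0, samples)   # raw samples retained
    ```

    The bandwidth saving comes from the good-QF case, which replaces the full sample train with three numbers.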
  413. Daniele Cesini (INFN CNAF)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The Workload Management System is the gLite service supporting the distributed production and analysis activities of various HEP experiments. It is responsible for dispatching computing jobs to remote computing facilities by matching job requirements against the resource status information collected from the Grid information services. Given the distributed and heterogeneous nature of the Grid, the...
    Go to contribution page
  414. Chendong FU (IHEP, Beijing)
    26/03/2009, 08:00
    Online Computing
    poster
    BEPCII is the electron-positron collider with the highest luminosity in the tau-charm energy region, and BESIII is the corresponding detector, with greatly improved detection capabilities. For this accelerator and detector, the event trigger rate is rather high. In order to reduce the background level and the recording burden on the computers, an online event filtering algorithm has been established. Such an...
    Go to contribution page
  415. Dr Greig Cowan (University of Edinburgh)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The ScotGrid distributed Tier-2 now provides more than 4 MSI2k and 500 TB for LHC computing, spread across three sites at Durham, Edinburgh and Glasgow. Tier-2 sites have a dual role to play in the computing models of the LHC VOs. Firstly, their CPU resources are used for the generation of Monte Carlo event data. Secondly, the end-user analysis object data is distributed to the site...
    Go to contribution page
  416. Dr Silvio Pardi (INFN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The quality of the connectivity provided by the network infrastructure of a Grid is a crucial factor in guaranteeing the accessibility of Grid services, scheduling processing and data-transfer activity on the Grid efficiently, and meeting QoS expectations. Yet most Grid applications do not take into consideration the expected performance of the network resources they plan to use. In this paper we...
    Go to contribution page
  417. Dr Jose Antonio Coarasa Perez (Department of Physics - Univ. of California at San Diego (UCSD))
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The Open Science Grid middleware stack has seen intensive development over the past years and has become more and more mature, as increasing numbers of sites have been successfully added to the infrastructure. Considerable effort has been put into consolidating this infrastructure and enabling it to provide a high degree of scalability, reliability and usability. A thorough evaluation of its...
    Go to contribution page
  418. Dr Max Böhm (EDS / CERN openlab)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    GridMap (http://gridmap.cern.ch) was introduced to the community at the EGEE'07 conference as a new monitoring tool that provides better visualization of, and insight into, the state of the Grid than previous tools. Since then it has become quite popular in the grid community. Its two-dimensional graphical visualization technique based on treemaps, coupled with a simple, responsive AJAX-based rich...
    Go to contribution page
  419. Dr Maxim Potekhin (BROOKHAVEN NATIONAL LABORATORY)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The Panda Workload Management System is designed around the concept of the Pilot Job: a "smart wrapper" for the payload executable that can probe the environment on the remote worker node before pulling down the payload from the server and executing it. Such a design allows for improved logging and monitoring capabilities as well as flexibility in Workload Management. In the Grid...
    Go to contribution page
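    The probe-then-pull pilot idea can be sketched as follows (hypothetical names and checks; a real Panda pilot fetches its payload from the Panda server and reports far richer diagnostics):

    ```python
    import os
    import shutil
    import subprocess
    import sys

    def run_pilot(fetch_payload):
        """Minimal pilot-job wrapper: probe the worker node first,
        then pull and execute the payload only if the node looks usable."""
        probe = {
            "python": sys.version.split()[0],
            "free_gb": shutil.disk_usage(os.getcwd()).free / 1e9,
        }
        if probe["free_gb"] < 0.01:
            # Report back instead of failing mid-payload.
            return ("refused", probe)
        payload_cmd = fetch_payload(probe)  # in a real pilot: an HTTP call to the WMS
        result = subprocess.run(payload_cmd, capture_output=True, text=True)
        return ("ran", result.stdout.strip())

    status, out = run_pilot(lambda probe: [sys.executable, "-c", "print('payload ok')"])
    ```

    The key point is the ordering: the environment check happens before any payload is downloaded, so a broken node costs only a cheap probe rather than a failed job.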
  420. Dr Ricardo Graciani Diaz (Universitat de Barcelona)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    DIRAC, the LHCb community Grid solution, has pioneered the use of pilot jobs in the Grid. Pilot jobs provide a homogeneous interface to a heterogeneous set of computing resources. At the same time, pilot jobs allow the scheduling decision to be delayed until the last moment, thus taking into account the precise running conditions at the resource and last-moment requests to the system. The DIRAC...
    Go to contribution page
  421. Dr Marie-Christine Sawley (ETHZ)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Resource tracking, like usage monitoring, relies on fine-granularity information communicated by each site on the Grid. Data is later aggregated and analysed from different perspectives to yield global figures which will be used for decision making. The dynamic information collected from distributed sites must therefore be comprehensive, pertinent and coherent with upstream (planning) and...
    Go to contribution page
  422. Mr Antonio Ceseracciu (SLAC)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The Network Engineering team at the SLAC National Accelerator Laboratory is required to manage an increasing number and variety of network devices with a fixed amount of human resources. At the same time, networking equipment has acquired more intelligence, offering introspection and visibility into the network. Making such information readily available to network engineers and user support...
    Go to contribution page
  423. Mr Andrey Bobyshev (FERMILAB)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Emerging dynamic circuit services are being developed and deployed to facilitate high-impact data movement within the research and education communities. These services normally require network awareness in the applications in order to establish an end-to-end path on demand programmatically. This approach has significant difficulties because user applications need to be modified to support...
    Go to contribution page
  424. Mr Parag Mhashilkar (Fermi National Accelerator Laboratory)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The Open Science Grid (OSG) offers access to hundreds of compute elements (CEs) and storage elements (SEs) via standard Grid interfaces. The Resource Selection Service (ReSS) is a push-based workload management system integrated with the OSG information systems and resources. ReSS integrates standard Grid tools such as Condor, as a brokering service, and gLite CEMon, for gathering and...
    Go to contribution page
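The push-based matchmaking idea can be caricatured in a few lines (a toy sketch only; real ReSS uses Condor ClassAds and gLite CEMon, and the resource names and attributes below are invented):

```python
# Caricature of ClassAd-style matchmaking: a job's requirements are a list
# of predicates evaluated against each resource's attribute dictionary;
# the first compute element satisfying all of them is selected.
# Illustrative only -- not the ReSS implementation.

def match(job_requirements, resources):
    """Return the name of the first resource satisfying every requirement."""
    for name, attrs in resources.items():
        if all(check(attrs) for check in job_requirements):
            return name
    return None  # no resource matched

# Hypothetical compute elements advertising their attributes.
resources = {
    "ce01.example.org": {"free_slots": 0,  "os": "SL4", "memory_mb": 2048},
    "ce02.example.org": {"free_slots": 12, "os": "SL4", "memory_mb": 4096},
}

# A job asking for a free slot and at least 2 GB of memory.
job_requirements = [
    lambda r: r["free_slots"] > 0,
    lambda r: r["memory_mb"] >= 2048,
]
selected = match(job_requirements, resources)
```

In the real system the predicates are ClassAd expressions evaluated symmetrically against both the job and the resource ads.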
  425. Mr Volker Buege (Inst. fuer Experimentelle Kernphysik - Universitaet Karlsruhe)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    An efficient administration of computing centres requires sophisticated tools for monitoring the local infrastructure. Sharing such resources in a grid infrastructure, like the Worldwide LHC Computing Grid (WLCG), brings with it a large number of external monitoring systems offering information on the status of a grid site's services. This huge flood of information from many...
    Go to contribution page
  426. Dr Bohumil Franek (Rutherford Appleton Laboratory)
    26/03/2009, 08:00
    Online Computing
    poster
    In the SMI++ framework, the real world is viewed as a collection of objects behaving as finite state machines. These objects can represent real entities, such as hardware devices or software tasks, or they can represent abstract subsystems. A special language (SML) is provided for the object description. The SML description is then interpreted by a Logic Engine (coded in C++) to drive the...
    Go to contribution page
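The core idea, objects modelled as finite state machines driven by a logic engine, can be sketched briefly (an illustrative Python toy, not SMI++ or SML; the device name and states are invented):

```python
# Toy finite-state-machine object in the spirit of SMI++: an object holds a
# current state and a table of allowed actions per state. In SMI++ itself,
# objects are described in SML and driven by the C++ Logic Engine.

class FSMObject:
    def __init__(self, name, initial, transitions):
        # transitions maps (state, action) -> next_state
        self.name = name
        self.state = initial
        self.transitions = transitions

    def handle(self, action):
        key = (self.state, action)
        if key not in self.transitions:
            raise ValueError(f"{self.name}: '{action}' not allowed in {self.state}")
        self.state = self.transitions[key]
        return self.state

# A hypothetical high-voltage channel with three states.
hv = FSMObject("HVChannel", "OFF",
               {("OFF", "switch_on"): "RAMPING",
                ("RAMPING", "ready"): "ON",
                ("ON", "switch_off"): "OFF"})
hv.handle("switch_on")
hv.handle("ready")
```

Abstract subsystems are then built by composing such objects, with parent objects reacting to state changes of their children.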
  427. Mr Ales Krenek (CESNET, CZECH REPUBLIC), Mr Jiri Sitera (CESNET, CZECH REPUBLIC), Mr Ludek Matyska (CESNET, CZECH REPUBLIC), Mr Miroslav Ruda (CESNET, CZECH REPUBLIC), Mr Zdenek Sustr (CESNET, CZECH REPUBLIC)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Logging and Bookkeeping (L&B) is a gLite subsystem responsible for tracking jobs on the grid. Normally the user interacts with it via the glite-wms-job-status and glite-wms-job-logging-info commands. Here we present other, less widely known but still useful L&B usage patterns which are available with recently developed L&B features. L&B exposes an HTML interface; pointing a web browser...
    Go to contribution page
  428. Dr Jens Jensen (STFC-RAL)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    We show how to achieve interoperation between SDSC's Storage Resource Broker (SRB) and the Storage Resource Manager (SRM) implementations used in the Large Hadron Collider Computing Grid. Interoperation is achieved using gLite tools to demonstrate file transfers between two different grids. This presentation is different from the work demonstrated by the authors and collaborators at SC2007...
    Go to contribution page
  429. Dr Andreas Gellrich (DESY)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    DESY is one of the world-leading centers for research with particle accelerators and synchrotron light. In HEP, DESY participates in the LHC as a Tier-2 center, supports ongoing analyses of HERA data, is a leading partner for the ILC, and runs the National Analysis Facility (NAF) for LHC and ILC. For research with synchrotron light, major new facilities are operated and built (FLASH,...
    Go to contribution page
  430. Mr Alexander Zaytsev (Budker Institute of Nuclear Physics (BINP))
    26/03/2009, 08:00
    Online Computing
    poster
    This contribution gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which administers the TDAQ computing environment supporting the High Level Trigger, Event Filter and other subsystems of the ATLAS detector operating at the LHC at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers,...
    Go to contribution page
  431. Vasco Chibante Barroso (CERN)
    26/03/2009, 08:00
    Online Computing
    poster
    All major experiments need tools that provide a way to keep a record of the events and activities, both during commissioning and operations. In ALICE (A Large Ion Collider Experiment) at CERN, this task is performed by the Alice Electronic Logbook (eLogbook), a custom-made application developed and maintained by the Data-Acquisition group (DAQ). Started as a statistics repository, the eLogbook...
    Go to contribution page
  432. Pablo Saiz (CERN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Once the ALICE experiment starts collecting data, it will gather up to 4 PB of information per year. The data will be analyzed in centers distributed all over the world. Each of these centers might have a different software environment. To be able to use all these resources in a uniform way, ALICE has developed AliEn, a Grid layer that provides the same interface independently of the...
    Go to contribution page
  433. Christian Ohm (Department of Physics, Stockholm University)
    26/03/2009, 08:00
    Online Computing
    poster
    The ATLAS BPTX stations consist of electrostatic button pick-up detectors located 175 m from ATLAS along the beam pipe on both sides. The pick-ups are installed as part of the LHC beam instrumentation and are used by ATLAS for timing purposes. The usage of the BPTX signals in ATLAS is twofold: they are used both in the trigger system and for LHC beam monitoring. The ATLAS Trigger...
    Go to contribution page
  434. Ricardo Rocha (CERN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The ATLAS Distributed Data Management (DDM) system is now at the point of focusing almost all its efforts on operations, after successfully delivering a high-quality product which has proved to scale to the extreme requirements of the experiment's users. The monitoring effort has followed the same path and is now focused mostly on the shifters and experts operating the system. In this paper we...
    Go to contribution page
  435. Alessandro De Salvo (Istituto Nazionale di Fisica Nucleare Sezione di Roma 1)
    26/03/2009, 08:00
    Online Computing
    poster
    The calibration of the ATLAS MDT chambers will be performed at remote sites, called Remote Calibration Centers. Each center will process the calibration data for its assigned part of the detector and send the results back to CERN, for general use in reconstruction and analysis, within 24 hours of the calibration data taking. In this work we present the data extraction mechanism, the data...
    Go to contribution page
  436. Remigius K Mommsen (FNAL, Chicago, Illinois, USA)
    26/03/2009, 08:00
    Online Computing
    poster
    The CMS event builder assembles events accepted by the first-level trigger and makes them available to the high-level trigger. The event builder needs to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GBytes/s originating from approximately 500 sources. This paper presents the chosen hardware and software architecture. The system consists of two stages: an...
    Go to contribution page
  437. Dr Jose Flix Molina (Port d'Informació Científica, PIC (CIEMAT - IFAE - UAB), Bellaterra, Spain)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The computing system of the CMS experiment works using distributed resources from more than 60 computing centres worldwide. These centres, located in Europe, America and Asia are interconnected by the Worldwide LHC Computing Grid. The operation of the system requires a stable and reliable behaviour of the underlying infrastructure. CMS has established a procedure to extensively test all...
    Go to contribution page
  438. Mr Yuriy Ilchenko (SMU)
    26/03/2009, 08:00
    Online Computing
    poster
    The ATLAS experiment's data acquisition system is distributed across the nodes of large farms. Online monitoring and data quality assessment run alongside this system. A mechanism is required that integrates the monitoring data from different nodes and makes it available to shift crews. This integration includes, but is not limited to, summation or averaging of histograms and summation of trigger...
    Go to contribution page
  439. Dr Marco Cecchi (INFN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The gLite Workload Management System (WMS) has been designed and developed to represent a reliable and efficient entry point to high-end services available on a Grid. The WMS translates user requirements and preferences into specific operations and decisions - dictated by the general status of all other Grid services it interoperates with - while taking responsibility to bring requests to...
    Go to contribution page
  440. Dr Alessandro Di Girolamo (CERN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    This contribution describes how part of the monitoring of the services used in the computing systems of the LHC experiments has been integrated with the Service Level Status (SLS) framework. The LHC experiments are using an increasing number of complex and heterogeneous services: SLS makes it possible to group all these different services and to report their status and availability by...
    Go to contribution page
  441. Dr Doris Ressmann (Karlsruhe Institute of Technology)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    All four LHC experiments are served by GridKa, the German WLCG Tier-1 at the Steinbuch Centre for Computing of the Karlsruhe Institute of Technology (KIT). Each of the experiments requires a significantly different setup of the dCache data management system. Therefore the use of a single dCache instance for all experiments can have negative effects at different levels, e.g. SRM, space manager...
    Go to contribution page
  442. Mr Fernando Guimaraes Ferreira (Univ. Federal do Rio de Janeiro (UFRJ))
    26/03/2009, 08:00
    Online Computing
    poster
    The web system described here provides functionalities to monitor the Detector Control System (DCS) acquired data. The DCS is responsible for overseeing the coherent and safe operation of the ATLAS experiment hardware. In the context of the Hadronic Tile Calorimeter Detector, it controls the power supplies of the readout electronics acquiring voltages, currents, temperatures and coolant...
    Go to contribution page
  443. Mr Lourenço Vaz (LIP - Coimbra)
    26/03/2009, 08:00
    Online Computing
    poster
    Data describing the conditions of the ATLAS detector and the Trigger and Data Acquisition system are stored in the Conditions DataBases (CDB), and may range from simple values to complex objects such as online system messages or monitoring histograms. The CDB are deployed on COOL, a common infrastructure for reading and writing conditions data. Conditions data produced online are saved to an...
    Go to contribution page
  444. Dr Josva Kleist (Nordic Data Grid Facility)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Interoperability of grid infrastructures is becoming increasingly important with the emergence of large-scale grid infrastructures based on national and regional initiatives. To achieve interoperability of grid infrastructures, the adaptation and bridging of many different systems and services need to be tackled. A grid infrastructure offers services for authentication, authorization, accounting,...
    Go to contribution page
  445. Prof. Jorge Rodiguez (Florida International University), Dr Yujun Wu (University of Florida)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The CMS experiment is expected to produce a few petabytes of data a year and distribute them globally. Within the CMS computing infrastructure, most user analyses and the production of Monte Carlo events will be carried out at some 50 CMS Tier-2 sites. How to store the data and allow physicists to access them efficiently has been a challenge, especially for Tier-2...
    Go to contribution page
  446. Torsten Antoni (GGUS, KIT-SCC)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The user and operations support of the EGEE series of projects can be characterized as "regional support with central coordination". Its central building block is the GGUS portal, which acts as an entry point for users and support staff. It also serves as an integration platform for the distributed support effort. As WLCG relies heavily on the EGEE infrastructure, it is important that the support...
    Go to contribution page
  447. Prof. Vincenzo Innocente (CERN)
    26/03/2009, 09:00
    Plenary
    oral
    Computing in the years since 2000 has been characterized by the advent of "multicore" CPUs. Effective exploitation of this new kind of computing architecture requires the adaptation of legacy software and eventually a shift of programming paradigms to massive parallelism. In this talk we will introduce the reasons that brought about the introduction of "multicore" hardware and the consequences ...
    Go to contribution page
  448. Dr Pere Mato (CERN)
    26/03/2009, 09:30
    Plenary
    oral
    Distributed Data Analysis and Tools
    Go to contribution page
  449. Dr Cristinel Diaconu (CPPM IN2P3)
    26/03/2009, 10:00
    Plenary
    oral
    High energy physics experiments collect data over long periods of time and exploit these data to produce physics publications. The scientific potential of an experiment is in principle defined and exhausted during the collaboration's lifetime. However, continuous improvement of the scientific grounds, such as theory, experiment, simulation, new ideas or unexpected discoveries, may lead to...
    Go to contribution page
  450. Dr Harry Renshall (CERN), Dr Jamie Shiers (CERN)
    26/03/2009, 11:30
    Plenary
    oral
    This talk will summarize the main points that were discussed, and where possible agreed, at the WLCG Collaboration workshop held in Prague during the weekend immediately preceding CHEP. The topics for the workshop include: an analysis of the experience with WLCG services from 2008 data taking and processing; requirements and schedule(s) for 2009; readiness for 2009
    Go to contribution page
  451. Eva Hladka (CESNET)
    26/03/2009, 12:00
    Plenary
    oral
  452. Sasaki Takashi (KEK)
    26/03/2009, 12:30
    Plenary
    oral
  453. Predrag Buncic (CERN)
    26/03/2009, 14:00
    Software Components, Tools and Databases
    oral
    CernVM is a Virtual Software Appliance to run physics applications from the LHC experiments at CERN. The virtual appliance provides a complete, portable and easy to install and configure user environment for developing and running LHC data analysis on any end-user computer (laptop, desktop) and on the Grid independently of operating system software and hardware platform (Linux, Windows,...
    Go to contribution page
  454. Julia Andreeva (CERN)
    26/03/2009, 14:00
    Grid Middleware and Networking Technologies
    oral
    Job processing and data transfer are the main computing activities on the WLCG infrastructure. Reliable monitoring of the job processing on the WLCG scope is a complicated task due to the complexity of the infrastructure itself and the diversity of the currently used job submission methods. The talk will describe the new strategy for the job monitoring on the WLCG scope, covering...
    Go to contribution page
  455. Mr Ricky Egeland (Minnesota)
    26/03/2009, 14:00
    Grid Middleware and Networking Technologies
    oral
    The PhEDEx Data Service provides access to information from the central PhEDEx database, as well as certificate-authenticated managerial operations such as requesting the transfer or deletion of data. The Data Service is integrated with the 'SiteDB' service for fine-grained access control, providing a safe and secure environment for operations. A plugin architecture allows server-side modules...
    Go to contribution page
  456. Semen Lebedev (GSI, Darmstadt / JINR, Dubna)
    26/03/2009, 14:00
    Event Processing
    oral
    The Compressed Baryonic Matter (CBM) experiment at the future FAIR facility at Darmstadt will measure dileptons emitted from the hot and dense phase in heavy-ion collisions. In the case of electron measurements, a high purity of identified electrons is required in order to suppress the background. Electron identification in CBM will be performed by a Ring Imaging Cherenkov (RICH) detector and...
    Go to contribution page
  457. Peter Hristov (CERN)
    26/03/2009, 14:00
    Software Components, Tools and Databases
    oral
    ALICE has been developing the AliRoot framework for Offline computing since 1998. This talk will critically review the development and present status of the framework. The current functionality for simulation, reconstruction, alignment, calibration and analysis will be described and commented upon. The integration with the Grid and PROOF systems will be described and discussed. The talk will also...
    Go to contribution page
  458. Dr Guido NEGRI (CERN)
    26/03/2009, 14:00
    Distributed Processing and Analysis
    oral
    Within the ATLAS hierarchical, multi-tier computing infrastructure, the Tier-0 centre at CERN is mainly responsible for promptly processing the raw data coming from the online DAQ system, archiving the raw and derived data on tape, registering the data with the relevant catalogues, and distributing them to the associated Tier-1 centres. The Tier-0 is already fully functional. It has...
    Go to contribution page
  459. Dr Andrea Chierici (INFN-CNAF)
    26/03/2009, 14:20
    Software Components, Tools and Databases
    oral
    Virtualization is a proven software technology that is rapidly transforming the IT landscape and fundamentally changing the way people compute. Recently all major software producers (e.g. Microsoft and RedHat) developed or acquired virtualization technologies. Our institute is a Tier-1 for the LHC experiments and is experiencing many benefits from virtualization technologies, such as...
    Go to contribution page
  460. Fabrizio Furano (Conseil Europeen Recherche Nucl. (CERN))
    26/03/2009, 14:20
    Distributed Processing and Analysis
    oral
    Performance, reliability and scalability in data access are key issues in the context of Grid computing and High Energy Physics (HEP) data analysis. We present the technical details and the results of a large scale validation and performance measurement achieved at the INFN Tier1, the central computing facility of the Italian National Institute for Nuclear Research (INFN). The aim of this work...
    Go to contribution page
  461. Dr Johan Messchendorp (for the PANDA collaboration) (University of Groningen)
    26/03/2009, 14:20
    Software Components, Tools and Databases
    oral
    The Panda experiment at the future facility FAIR will provide valuable data for our present understanding of the strong interaction. In preparation for the experiments, large-scale simulations for design and feasibility studies are performed exploiting a new software framework, Fair/PandaROOT, which is based on ROOT and the Virtual Monte Carlo (VMC) interface. In this paper, the various...
    Go to contribution page
  462. Mr Alberto Pace (CERN)
    26/03/2009, 14:20
    Grid Middleware and Networking Technologies
    oral
    Data management components at CERN form the backbone for the production and analysis activities of the experiments at the LHC accelerator. Significant amounts of data (15 PB/y) will need to be collected from the online systems, reconstructed and distributed to other sites participating in the Worldwide LHC Computing Grid for further analysis. More recently, significant resources to support local...
    Go to contribution page
  463. Dr Janusz Martyniak (Imperial College London)
    26/03/2009, 14:20
    Grid Middleware and Networking Technologies
    oral
    In this paper we describe the architecture and operation of the Real Time Monitor (RTM), developed by the Grid team in the HEP group at Imperial College London. It is arguably the most popular dissemination tool within the EGEE Grid, having been used on many occasions, including the GridFest and LHC inauguration events held at CERN in October 2008. The RTM gathers information from EGEE sites...
    Go to contribution page
  464. Mrs Rosi REED (University of California, Davis)
    26/03/2009, 14:20
    Event Processing
    oral
    Vertex finding is an important part of accurately reconstructing events at STAR, since many physics parameters, such as the transverse momentum of primary particles, depend on the vertex location. Many analyses depend on trigger selection and require an accurate determination of where the interaction that fired the trigger occurred. Here we present two vertex-finding methods, the Pile-Up Proof...
    Go to contribution page
  465. Dr Kirill Prokofiev (CERN)
    26/03/2009, 14:40
    Event Processing
    oral
    In anticipation of the first LHC data, a considerable effort has been devoted to ensuring the efficient reconstruction of vertices in the ATLAS detector. This includes the reconstruction of photon conversions, long-lived particles and secondary vertices in jets, as well as the finding and fitting of primary vertices. The implementation of the corresponding algorithms requires a modular design...
    Go to contribution page
  466. Ákos Frohner (CERN)
    26/03/2009, 14:40
    Grid Middleware and Networking Technologies
    oral
    Data management is one of the cornerstones of the distributed production computing environment that the EGEE project aims to provide for an e-Science infrastructure. We have designed and implemented a set of services and client components addressing the diverse requirements of all user communities. LHC experiments as main users will generate and distribute...
    Go to contribution page
  467. Dr Maria Grazia Pia (INFN GENOVA)
    26/03/2009, 14:40
    Software Components, Tools and Databases
    oral
    Geant4 is nowadays a mature Monte Carlo system; new functionality has been extensively added to the toolkit since its first public release in 1998. Nevertheless, its architectural design and software technology features have remained substantially unchanged since their original conception in the RD44 phase of the mid '90s. An R&D project has recently been launched at INFN to revisit Geant4...
    Go to contribution page
  468. Mr David Collados (CERN)
    26/03/2009, 14:40
    Grid Middleware and Networking Technologies
    oral
    Authors: David Collados, Judit Novak, John Shade, Konstantin Skaburskas, Wojciech Lapka. It is now four years since the first prototypes of tools and tests started to monitor the Worldwide LHC Computing Grid (WLCG) services. One of these tools is the Service Availability Monitoring (SAM) framework, which superseded the SFT tool and has become a keystone for the monthly WLCG availability...
    Go to contribution page
  469. Dr David Mason (FNAL)
    26/03/2009, 14:40
    Distributed Processing and Analysis
    oral
    CMS' infrastructure to process, store and analyze data is based on worldwide distributed tiers of computing resources. Monitoring and troubleshooting of all parts of the computing infrastructure, and importantly of the experiment-specific data flows and workflows running on this infrastructure, is essential to guarantee timely delivery of processed data to physicists. This is especially...
    Go to contribution page
  470. Dr Yushu Yao (LBNL)
    26/03/2009, 14:40
    Software Components, Tools and Databases
    oral
    ATLAS software has been developed mostly on the CERN Linux cluster lxplus[1] or on similar facilities at the experiment's Tier-1 centers. The fast rise of virtualization technology has the potential to change this model, turning every laptop or desktop into an ATLAS analysis platform. In the context of the CernVM project[2], we are developing a suite of tools and CernVM plug-in extensions to...
    Go to contribution page
  471. Dr Andrea Bocci (Università and INFN, Pisa)
    26/03/2009, 15:00
    Event Processing
    oral
    The CMS offline software contains a widespread set of algorithms to identify jets originating from the weak decay of b-quarks. Different physical properties of b-hadron decays, such as lifetime information, secondary vertices and soft leptons, are exploited. The variety of selection algorithms ranges from simple and robust ones, suitable for early data-taking and online environments such as the...
    Go to contribution page
  472. Dr Patrick Fuhrmann (DESY)
    26/03/2009, 15:00
    Grid Middleware and Networking Technologies
    oral
    At the time of CHEP'09, the LHC Computing Grid approach and implementation is rapidly approaching the moment it finally has to prove its feasibility. The same is true for dCache, the grid middleware storage component meant to store and manage the largest share of LHC data outside the LHC Tier-0. This presentation will report on the impact of recently deployed dCache sub-components,...
    Go to contribution page
  473. Dr Mohammad Al-Turany (GSI DARMSTADT)
    26/03/2009, 15:00
    Software Components, Tools and Databases
    oral
    FairRoot is the simulation and analysis framework used by the CBM and PANDA experiments at FAIR/GSI. The use of GPUs for event reconstruction in FairRoot will be presented. The fact that CUDA (Nvidia's Compute Unified Device Architecture) development tools work alongside the conventional C/C++ compiler makes it possible to mix GPU code with general-purpose code for the host CPU, based on...
    Go to contribution page
  474. Ramiro Voicu (California Institute of Technology)
    26/03/2009, 15:00
    Grid Middleware and Networking Technologies
    oral
    USLHCNet provides transatlantic connections of the Tier-1 computing facilities at Fermilab and Brookhaven with the Tier-0 and Tier-1 facilities at CERN, as well as Tier-1s elsewhere in Europe and Asia. Together with ESnet, Internet2 and GÉANT, USLHCNet also supports connections between the Tier-2 centers. The USLHCNet core infrastructure uses the Ciena Core Director devices that provide...
    Go to contribution page
  475. Wolfgang Ehrenfeld (DESY)
    26/03/2009, 15:00
    Software Components, Tools and Databases
    oral
    The ATLAS trigger system is responsible for selecting the interesting collision events delivered by the Large Hadron Collider (LHC). The ATLAS trigger will need to achieve a rejection factor of ~10^-7 against random proton-proton collisions, and still be able to efficiently select interesting events. After a first processing level based on hardware, the final event selection is based on custom...
    Go to contribution page
  476. Dr Andrew Maier (CERN)
    26/03/2009, 15:00
    Distributed Processing and Analysis
    oral
    Ganga (http://cern.ch/ganga) is a job-management tool that offers a simple, efficient and consistent user experience in a variety of heterogeneous environments: from local clusters to global Grid systems. Experiment-specific plugins allow Ganga to be customised for each experiment. This paper will describe the LHCb plugins for Ganga. For LHCb users, Ganga is the job submission tool of choice...
    Go to contribution page
  477. Daniel Colin Van Der Ster (Conseil Europeen Recherche Nucl. (CERN))
    26/03/2009, 15:20
    Distributed Processing and Analysis
    oral
    Effective distributed user analysis requires a system which meets the demands of running arbitrary user applications on sites with varied configurations and availabilities. The challenge of tracking such a system requires a tool to monitor not only the functional statuses of each grid site, but also to perform large-scale analysis challenges on the ATLAS grids. This work presents one such...
    Go to contribution page
  478. Mr Aatos Heikkinen (Helsinki Institute of Physics)
    26/03/2009, 15:20
    Event Processing
    oral
    We report our experience of using the ROOT package TMVA for multivariate data analysis, for the problem of tau tagging in the framework of heavy charged MSSM Higgs boson searches at the LHC. With a generator-level analysis, we investigate how in the ideal case tau tagging could be performed and hadronic tau decays separated from the hadronic jets of the QCD multi-jet background present in...
    Go to contribution page
  479. Dr Hans wenzel (Fermilab), Dr Marian Zvada (Fermilab)
    26/03/2009, 15:20
    Software Components, Tools and Databases
    oral
    We will present the monitoring system for the analysis farm of the CDF experiment at the Tevatron (CAF). All monitoring data are collected in a relational database (PostgreSQL), with SQL providing a common interface to the monitoring data. These monitoring data are displayed by a web application in the form of JavaServer Pages served by the Apache Tomcat server. For the database...
    Go to contribution page
  480. Luca Magnoni (INFN CNAF)
    26/03/2009, 15:20
    Grid Middleware and Networking Technologies
    oral
    StoRM is a Storage Resource Manager (SRM) service adopted in the context of WLCG to provide data management capabilities on high-performing cluster and parallel file systems such as Lustre and GPFS. The experience gained in the readiness challenges of the LHC Grid infrastructure proves that the scalability and performance of SRM services are key characteristics for providing effective and reliable storage...
    Go to contribution page
  481. Robert Quick (Indiana University)
    26/03/2009, 15:20
    Grid Middleware and Networking Technologies
    oral
    The Open Science Grid (OSG) Resource and Service Validation (RSV) project seeks to provide solutions for several grid fabric monitoring problems, while at the same time providing a bridge between the OSG operations and monitoring infrastructure and the WLCG (Worldwide LHC Computing Grid) infrastructure. The RSV-based OSG fabric monitoring begins with local resource fabric monitoring, which...
    Go to contribution page
  482. Dr Brinick Simmons (Department of Physics and Astronomy - University College London)
    26/03/2009, 15:20
    Software Components, Tools and Databases
    oral
    The ATLAS experiment's RunTimeTester (RTT) is a software testing framework into which software package developers can plug their tests, have them run automatically, and obtain feedback via email and the web. The RTT processes the ATLAS nightly build releases, using acron to launch runs on a dedicated cluster at CERN, and submitting user jobs to private LSF batch queues. Running higher...
    Go to contribution page
  483. Heidi Schellman (Northwestern University)
    26/03/2009, 15:40
    Event Processing
    oral
    The Minerva experiment is a small fully active neutrino experiment which will run in 2010 in the NUMI beamline at Fermilab. The offline computing framework is based on the GAUDI framework. The small Minerva software development team has used the GAUDI code base to produce a functional software environment for simulation of neutrino interactions generated by the GENIE generator and analysis...
    Go to contribution page
  484. Pablo SAIZ (CERN)
    26/03/2009, 15:40
    Distributed Processing and Analysis
    oral
    WLCG relies on the SAM (Service Availability Monitoring) infrastructure to monitor the behaviour of sites and as a powerful debugging tool. SAM is also used by individual experiments and VOs (Virtual Organisations) to submit application-specific tests to the grid. This degree of specificity implies additional requirements in terms of visualisation and manipulation of the test results provided...
    Go to contribution page
  485. Dr Stefan Roiser (CERN)
    26/03/2009, 15:40
    Software Components, Tools and Databases
    oral
    The LCG Applications Area at CERN provides basic software components for the LHC experiments, such as ROOT, POOL and COOL, which are developed in-house, and also a set of "external" software packages (~70) which are needed in addition, such as Python, Boost, Qt, CLHEP, etc. These packages target many different areas of HEP computing such as data persistency, math, simulation, grid computing,...
    Go to contribution page
  486. Dr Stephen Burke (RUTHERFORD APPLETON LABORATORY)
    26/03/2009, 15:40
    Grid Middleware and Networking Technologies
    oral
    The GLUE information schema has been in use in the LCG/EGEE production Grid since the first version was defined in 2002. In 2007 a major redesign of GLUE, version 2.0, was started in the context of the Open Grid Forum following the creation of the GLUE Working Group. This process has taken input from a number of Grid projects, but as a major user of the version 1 schema LCG/EGEE has had a...
    Go to contribution page
  487. Dr Jamie Shiers (CERN)
    26/03/2009, 15:40
    Grid Middleware and Networking Technologies
    oral
    The WLCG service has been declared officially open for production and analysis during the LCG Grid Fest held at CERN - with live contributions from around the world - on Friday 3rd October 2008. But the service is not without its problems - services or even sites suffer degradation or complete outage with painful repercussions on experiment activities, the operations and service model is...
    Go to contribution page
  488. Mr Alexander Zaytsev (Budker Institute of Nuclear Physics (BINP))
    26/03/2009, 16:30
    Event Processing
    oral
    CMD-3 is the general-purpose cryogenic magnetic detector for the VEPP-2000 electron-positron collider, which is being commissioned at the Budker Institute of Nuclear Physics (BINP, Novosibirsk, Russia). The main aspects of the physics program of the experiment are precision measurements of hadronic cross sections, the study of known and the search for new vector mesons, and the study of ppbar and nnbar production...
    Go to contribution page
  489. Mr Pierre VANDE VYVRE (CERN)
    26/03/2009, 16:30
    Online Computing
    oral
    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). A large-bandwidth and flexible Data Acquisition System (DAQ) has been designed and deployed to collect sufficient statistics in the short running time available per year for heavy ions and to...
    Go to contribution page
  490. Dr Jukka Klem (Helsinki Institute of Physics HIP)
    26/03/2009, 16:30
    Grid Middleware and Networking Technologies
    oral
    The Compact Muon Solenoid (CMS) is one of the LHC (Large Hadron Collider) experiments at CERN. CMS computing relies on different grid infrastructures to provide calculation and storage resources. The major grid middleware stacks used for CMS computing are gLite, OSG and ARC (Advanced Resource Connector). Helsinki Institute of Physics (HIP) builds one of the Tier-2 centers for CMS computing....
    Go to contribution page
  491. Axel Naumann (CERN)
    26/03/2009, 16:30
    Software Components, Tools and Databases
    oral
    ROOT is planning to replace a large part of its C++ interpreter CINT. The new implementation will be based on the LLVM compiler infrastructure. LLVM is developed by, among others, Apple, Adobe, and the University of Illinois at Urbana-Champaign; it is open source. Once available, LLVM will offer an ISO-compliant C++ parser, a bytecode generator and execution engine, a just-in-time compiler, and...
    Go to contribution page
  492. Ulrich Schwickerath (CERN)
    26/03/2009, 16:30
    Distributed Processing and Analysis
    oral
    Instrumentation of jobs throughout their lifecycle is not obvious, as they are quite independent after being submitted, crossing multiple environments and locations until landing on a worker node. In order to measure correctly the resources used at each step, and to compare them with the view from a Fabric Infrastructure, we propose a solution using the Messaging System for the Grids (MSG)...
    Go to contribution page
  493. Dr Alessandro di Girolamo (CERN IT/GS), Dr Andrea Sciaba (CERN IT/GS), Dr Elisa Lanciotti (CERN IT/GS), Dr Nicolo Magini (CERN IT/GS), Dr Patricia Mendez Lorenzo (CERN IT/GS), Dr Roberto Santinelli (CERN IT/GS), Dr Simone Campana (CERN IT/GS), Dr Vincenzo Miccio (CERN IT/GS)
    26/03/2009, 16:30
    Grid Middleware and Networking Technologies
    oral
    In a few months, the four LHC detectors will collect data at a significant rate that is expected to ramp up to around 15PB per year. To process such a large quantity of data, the experiments have developed over the last years distributed computing models that build on the overall WLCG service. These implement the different services provided by the gLite middleware into the computing models of...
    Go to contribution page
  494. Dr Oxana Smirnova (Lund University / NDGF)
    26/03/2009, 16:50
    Grid Middleware and Networking Technologies
    oral
    The Advanced Resource Connector (ARC) middleware introduced by NorduGrid is one of the leading Grid solutions used by scientists worldwide. Its simplicity, reliability and portability, matched by unparalleled efficiency, make it attractive for large-scale facilities like the Nordic DataGrid Facility (NDGF) and its Tier1 center, and also for smaller scale projects. Being well-proven in...
    Go to contribution page
  495. Prof. Joel Snow (Langston University)
    26/03/2009, 16:50
    Distributed Processing and Analysis
    oral
    DZero uses a variety of resources on four continents to pursue a strategy of flexibility and automation in the generation of simulation data. This strategy provides a resilient and opportunistic system which ensures an adequate and timely supply of simulation data to support DZero's physics analyses. A mixture of facilities, dedicated and opportunistic, specialized and generic, large...
    Go to contribution page
  496. Fred Luehring (Indiana University)
    26/03/2009, 16:50
    Software Components, Tools and Databases
    oral
    We update our CHEP06 presentation on the ATLAS experiment software infrastructure used to build, validate, distribute, and document the ATLAS offline software. The ATLAS collaboration's computational resources and software developers are distributed around the globe in more than 30 countries. The ATLAS offline code base is currently over 5 MSLOC in 10000+ C++ classes organized into about...
    Go to contribution page
  497. Lorenzo Moneta (on behalf of the ROOT, TMVA, RooFit and RooStats teams)
    26/03/2009, 16:50
    Event Processing
    oral
    ROOT, a data analysis framework, provides advanced mathematical and statistical methods needed by the LHC experiments for analyzing their data. In addition, the ROOT distribution includes packages such as TMVA, which provides advanced multivariate analysis tools for both classification and regression, and RooFit, for performing data modeling and complex fitting. Recently a large effort is...
    Go to contribution page
  498. Dr Jose Antonio Coarasa Perez (Department of Physics - Univ. of California at San Diego (UCSD) and CERN, Geneva, Switzerland)
    26/03/2009, 16:50
    Online Computing
    oral
    The CMS online cluster consists of more than 2000 computers, mostly under Scientific Linux CERN, running the 10000 application instances responsible for the data acquisition and experiment control on a 24/7 basis. The challenging size of the cluster constrained the design and implementation of the infrastructure: - The critical nature of the control applications demands a tight...
    Go to contribution page
  499. Giuseppe Codispoti (Dipartimento di Fisica)
    26/03/2009, 16:50
    Grid Middleware and Networking Technologies
    oral
    The CMS experiment at the LHC started using the Resource Broker (by the EDG and LCG projects) to submit production and analysis jobs to distributed computing resources of the WLCG infrastructure over 6 years ago. In 2006 it started using the gLite Workload Management System (WMS) and Logging & Bookkeeping (LB). In the current configuration the interaction with the gLite-WMS/LB happens through the CMS...
    Go to contribution page
  500. Gabriele Garzoglio (FERMI NATIONAL ACCELERATOR LABORATORY)
    26/03/2009, 17:10
    Grid Middleware and Networking Technologies
    oral
    The Open Science Grid (OSG) and the Enabling Grids for E-sciencE (EGEE) have a common security model, based on Public Key Infrastructure. Grid resources grant access to users on the basis of their membership in a Virtual Organization (VO), rather than their personal identity. Users push VO membership information to resources in the form of identity attributes, thus declaring that resources will be...
    Go to contribution page
  501. Daniele Spiga (Universita degli Studi di Perugia & CERN)
    26/03/2009, 17:10
    Distributed Processing and Analysis
    oral
    CMS has a distributed computing model, based on a hierarchy of tiered regional computing centres. However, the end physicist is interested not in the details of the computing model nor in the complexity of the underlying infrastructure, but only in accessing and using the remote services easily and efficiently. The CMS Remote Analysis Builder (CRAB) is the official CMS tool that allows access to...
    Go to contribution page
  502. Daniel Kollár (CERN)
    26/03/2009, 17:10
    Event Processing
    oral
    The main goals of a typical data analysis are to compare model predictions with data, to draw conclusions on the validity of the model as a representation of the data, and to extract the possible values of parameters within the context of a model. The Bayesian Analysis Toolkit, BAT, is a tool developed to evaluate the posterior probability distribution for models and their parameters. It is...
    Go to contribution page
  503. David Lange (LLNL)
    26/03/2009, 17:10
    Software Components, Tools and Databases
    oral
    The offline software suite of the Compact Muon Solenoid (CMS) experiment must support the production and analysis activities across the distributed computing environment developed by the LHC experiments. This system relies on over 100 external software packages and includes the developments of hundreds of active developers. The applications of this software require consistent and rapid...
    Go to contribution page
  504. Mr Thilo Pauly (CERN)
    26/03/2009, 17:10
    Online Computing
    oral
    The ATLAS Level-1 Central Trigger (L1CT) electronics is a central part of ATLAS data-taking. It receives the 40 MHz bunch clock from the LHC machine and distributes it to all sub-detectors. It initiates the detector read-out by forming the Level-1 Accept decision, which is based on information from the calorimeter and muon trigger processors, plus a variety of additional trigger inputs from...
    Go to contribution page
  505. Massimo Sgaravatto (INFN Padova)
    26/03/2009, 17:10
    Grid Middleware and Networking Technologies
    oral
    In this paper we describe the use of CREAM and CEMON for job submission and management within the gLite Grid middleware. Both CREAM and CEMON address one of the most fundamental operations of a Grid middleware, that is job submission and management. Specifically, CREAM is a job management service used for submitting, managing and monitoring computational jobs. CEMON is an event...
    Go to contribution page
  506. Dr Marian Zvada (Fermilab)
    26/03/2009, 17:30
    Grid Middleware and Networking Technologies
    oral
    Many members of large science collaborations already have specialized grids available to advance their research, but the need for more computing resources for data analysis has forced the Collider Detector at Fermilab (CDF) collaboration to move beyond the usage of dedicated resources and start exploiting Grid resources. Nowadays, the CDF experiment is increasingly relying on...
    Go to contribution page
  507. Mr Maxim Grigoriev (FERMILAB)
    26/03/2009, 17:30
    Grid Middleware and Networking Technologies
    oral
    Fermilab hosts the US Tier-1 center for data storage and analysis of the Large Hadron Collider's (LHC) Compact Muon Solenoid (CMS) experiment. To satisfy operational requirements for the LHC networking model, the networking group at Fermilab, in collaboration with Internet2 and ESnet, is participating in the perfSONAR-PS project. This collaboration has created a collection of network...
    Go to contribution page
  508. Dr Alessandra Doria (INFN Napoli)
    26/03/2009, 17:30
    Distributed Processing and Analysis
    oral
    An optimized use of the grid computing resources in the ATLAS experiment requires the enforcement of a mechanism of job priorities and of resource sharing among the different activities inside the ATLAS VO. This mechanism has been implemented through the VOViews publication in the information system and the fair share implementation per UNIX group in the batch system. The VOView concept...
    Go to contribution page
  509. Yoshiji Yasu (High Energy Accelerator Research Organization (KEK))
    26/03/2009, 17:30
    Online Computing
    oral
    DAQ-Middleware is a software framework for network-distributed DAQ systems based on Robot Technology Middleware, which is an international standard of the Object Management Group (OMG) in robotics and was developed by AIST. DAQ-Component is the software unit of DAQ-Middleware. Basic components have already been developed. For example, Gatherer is a readout component, Logger is a logging component, Monitor is...
    Go to contribution page
  510. Mr Dmitri Konstantinov (IHEP Protvino)
    26/03/2009, 17:30
    Software Components, Tools and Databases
    oral
    The Generator Services project collaborates with the Monte Carlo generator authors and with the LHC experiments in order to prepare validated LCG-compliant code for both the theoretical and the experimental communities at the LHC. On the one side it provides the technical support as far as the installation and the maintenance of the generator packages on the supported platforms is...
    Go to contribution page
  511. Andrea Ventura (INFN Lecce, Universita' degli Studi del Salento, Dipartimento di Fisica, Lecce)
    26/03/2009, 17:30
    Event Processing
    oral
    The ATLAS experiment at CERN's Large Hadron Collider has been designed and built for new discoveries in High Energy Physics as well as for precision measurements of Standard Model parameters. To match the limited data acquisition capability, at the LHC design luminosity the ATLAS trigger system will have to select a very small rate of physically interesting events (~200 Hz) among about 40...
    Go to contribution page
  512. Mr Andrzej Nowak (CERN)
    26/03/2009, 17:50
    Software Components, Tools and Databases
    oral
    At CHEP2007 we reported on the perfmon2 subsystem as a tool for interfacing to the PMUs (Performance Monitoring Units) which are found in the hardware of all modern processors (from AMD, Intel, SUN, IBM, MIPS, etc.). The intent was always to get the subsystem into the Linux kernel by default. The talk will report on how progress is now being made (after long discussions) and also show the...
    Go to contribution page
  513. Mogens Dam (Niels Bohr Institute)
    26/03/2009, 17:50
    Event Processing
    oral
    The ATLAS tau trigger is a challenging component of the online event selection, as it has to apply a rejection of 10^6 in a very short time with a typical signal efficiency of 80%. Whilst in the first hardware level narrow calorimeter jets are selected, in the second and third software levels candidates are refined on the basis of simple but fast (second level) and slow but accurate (third...
    Go to contribution page
  514. Dr Andrei TSAREGORODTSEV (CNRS-IN2P3-CPPM, MARSEILLE)
    26/03/2009, 17:50
    Grid Middleware and Networking Technologies
    oral
    DIRAC, the LHCb community Grid solution, was considerably reengineered in order to meet all the requirements for processing the data coming from the LHCb experiment. It is covering all the tasks starting with raw data transportation from the experiment area to the grid storage, data processing up to the final user analysis. The reengineered DIRAC3 version of the system includes a...
    Go to contribution page
  515. Mr Anar Manafov (GSI)
    26/03/2009, 17:50
    Distributed Processing and Analysis
    oral
    “PROOF on demand” is a set of utilities that allows a PROOF cluster to be started at user request, on a batch farm or on the Grid. It provides a plug-in based system that allows different job-submission frontends, such as LSF or the gLite WMS, to be used. The main components of “PROOF on demand” are PROOFAgent and PAConsole. PROOFAgent provides the communication layer between the xrootd...
    Go to contribution page
  516. Giovanni Organtini (Univ. + INFN Roma 1)
    26/03/2009, 17:50
    Online Computing
    oral
    The Electromagnetic Calorimeter (ECAL) of the CMS experiment at the LHC is made of about 75000 scintillating crystals. The detector properties must be continuously monitored in order to ensure the extreme stability and precision required by its design. This leads to a very large volume of non-event data to be accessed continuously by shifters, experts, automatic monitoring tasks,...
    Go to contribution page
  517. Mr Philip DeMar (FERMILAB)
    26/03/2009, 17:50
    Grid Middleware and Networking Technologies
    oral
    Fermilab has been one of the earliest sites to deploy data circuits in production for wide-area high impact data movement. The US-CMS Tier-1 Center at Fermilab uses end-to-end (E2E) circuits to support data movement with the Tier-0 Center at CERN, as well as with all of the US-CMS Tier-2 sites. On average, 75% of the network traffic into and out of the Laboratory is carried on E2E circuits....
    Go to contribution page
  518. Ian Fisk (Fermi National Accelerator Laboratory (FNAL))
    26/03/2009, 18:10
    Distributed Processing and Analysis
    oral
    CMS is in the process of commissioning a complex detector and a globally distributed computing model simultaneously. This represents a unique challenge for the current generation of experiments. Even at the beginning there are not sufficient analysis or organized processing resources at CERN alone. In this presentation we will discuss the unique computing challenges CMS expects to face during...
    Go to contribution page
  519. Dr Ivan Kisel (GSI Helmholtzzentrum für Schwerionenforschung GmbH, Darmstadt)
    26/03/2009, 18:10
    Online Computing
    oral
    The CBM Collaboration is building a dedicated heavy-ion experiment to investigate the properties of highly compressed baryonic matter as it is produced in nucleus-nucleus collisions at the Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany. This requires the collection of a huge number of events, which can only be obtained by very high reaction rates and long data-taking periods....
    Go to contribution page
  520. Robert Petkus (Brookhaven National Laboratory)
    26/03/2009, 18:10
    Software Components, Tools and Databases
    oral
    Robust, centralized system and application logging services are vital to all computing organizations, regardless of size. For the past year, the RHIC/USATLAS Computing Facility (RACF) has dramatically augmented the utility of logging services with Splunk. Splunk is a powerful application that functions as a log search engine, providing fast, real-time access to data from servers,...
    Go to contribution page
  521. Dr Alina Grigoras (CERN PH/AIP), Dr Andreas Joachim Peters (CERN IT/DM), Dr Costin Grigoras (CERN PH/AIP), Dr Fabrizio Furano (CERN IT/GS), Dr Federico Carminati (CERN PH/AIP), Dr Latchezar Betev (CERN PH/AIP), Dr Pablo Saiz (CERN IT/GS), Dr Patricia Mendez Lorenzo (CERN IT/GS), Dr Predrag Buncic (CERN PH/SFT), Dr Stefano Bagnasco (INFN/Torino)
    26/03/2009, 18:10
    Grid Middleware and Networking Technologies
    oral
    With the startup of LHC, the ALICE detector will collect data at a rate that, after two years, will reach 4PB per year. To process such a large quantity of data, ALICE has developed over ten years a distributed computing environment, called AliEn, integrated with the WLCG environment. The ALICE environment presents several original solutions, which have shown their viability in a number of...
    Go to contribution page
  522. Vasile Mihai Ghete (Institut fuer Hochenergiephysik (HEPHY))
    26/03/2009, 18:10
    Event Processing
    oral
    The CMS L1 Trigger processes the muon and calorimeter detector data using a complex system of custom hardware processors. A bit-level emulation of the trigger data processing has been developed. This is used to validate and monitor the trigger hardware, to simulate the trigger response in Monte Carlo data, and, for some components, to seed higher-level triggers. The multiple use cases are...
    Go to contribution page
  523. Volker Guelzow (Unknown)
    27/03/2009, 09:00
    Plenary
  524. Dr Elizabeth Sexton-Kennedy (FNAL)
    27/03/2009, 09:30
    Plenary
  525. Dr Julius Hrivnac (LAL)
    27/03/2009, 10:00
    Plenary
  526. Dr Ales Krenek (MASARYK UNIVERSITY, BRNO, CZECH REPUBLIC)
    27/03/2009, 11:00
    Plenary
  527. Dagmar Adamova (Nuclear Physics Institute)
    27/03/2009, 11:30
    Plenary
  528. Dr Dario Barberis (CERN/Genoa)
    27/03/2009, 12:00
    Plenary
  529. Stella Shen (Academia Sinica), Vicky Pei-Hua HUANG (Academia Sinica)
    27/03/2009, 12:30
    Plenary
  530. Milos Lokajicek (Institute of Physics)
    27/03/2009, 12:40
    Plenary
    oral
  531. Commercial
    oral
  532. Commercial
    oral
  533. Dr Gregory Dubois-Felsmann (SLAC)
    Plenary
    oral
    Lessons learned from the large pre-LHC experiments
    Go to contribution page