21–27 Mar 2009
Prague
Europe/Prague timezone

Session: Poster session
23 Mar 2009, 08:00
Prague Congress Centre, 5. května 65, 140 00 Prague 4, Czech Republic

  1. Dr Gabriele Garzoglio (FERMI NATIONAL ACCELERATOR LABORATORY)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    In recent years, it has become more and more evident that software threat communities are taking an increasing interest in Grid infrastructures. To mitigate the security risk associated with the increased number of attacks, the Grid software development community needs to scale up its effort to reduce software vulnerabilities. This can be achieved by introducing security review processes as a...
  2. Dr Sanjay Padhi (UCSD)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    This paper presents a web-based job monitoring framework for individual Grid sites that allows users to follow their jobs in detail in quasi-real time. The framework consists of several independent components: (a) a set of sensors that run on the site CE and worker nodes and update a database, (b) a simple yet extensible web-services framework, and (c) an Ajax-powered web interface having a...
  3. Dr David Lawrence (Jefferson Lab)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    A minimal XPath 1.0 parser has been implemented within the JANA framework that allows easy access to attributes or tags in an XML document. The motivating implementation was to access geometry information from XML files in the HDDS specification (derived from ATLAS's AGDD). The system allows components in the reconstruction package to pick out individual numbers from a collection of XML...
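    A minimal sketch of the kind of access such a parser enables (hypothetical class and method names, not the actual JANA interface): the parser flattens the document into path/value pairs that reconstruction components can query by an XPath-like string.

        #include <iostream>
        #include <map>
        #include <stdexcept>
        #include <string>

        // Hypothetical sketch: the XML document is flattened into
        // path -> value pairs, e.g. "/HDDS/element[@name='Si']/@density" -> "2.33",
        // so reconstruction code can pick out individual numbers by path.
        class XPathLite {
        public:
            void add(const std::string& path, const std::string& value) { values_[path] = value; }
            double getNumber(const std::string& path) const {
                auto it = values_.find(path);
                if (it == values_.end()) throw std::runtime_error("path not found: " + path);
                return std::stod(it->second);
            }
        private:
            std::map<std::string, std::string> values_;
        };

        int main() {
            XPathLite geom;
            geom.add("/HDDS/element[@name='Si']/@density", "2.33");
            std::cout << geom.getNumber("/HDDS/element[@name='Si']/@density") << "\n";
        }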
  4. Daniel Colin Van Der Ster (Conseil Europeen Recherche Nucl. (CERN))
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    Ganga provides a uniform interface for running ATLAS user analyses on a number of local, batch, and grid backends. PanDA is a pilot-based production and distributed analysis system developed and used extensively by ATLAS. This work presents the implementation and usage experiences of a PanDA backend for Ganga. Built upon reusable application libraries from GangaAtlas and PanDA, the Ganga PanDA...
  5. Mr Andrey TSYGANOV (Moscow Physical Engineering Inst. (MePhI))
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    CERN, the European Laboratory for Particle Physics, located in Geneva, Switzerland, has recently started the Large Hadron Collider (LHC), a 27 km particle accelerator. The CERN Engineering and Equipment Data Management Service (EDMS) provides support for managing engineering and equipment information throughout the entire lifecycle of a project. Based on several both in-house developed and...
  6. Dr Suren Chilingaryan (The Institute of Data Processing and Electronics, Forschungszentrum Karlsruhe)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    During the operation of high-energy physics experiments a large amount of slow-control data is recorded. It is necessary to examine all collected data to check the integrity and validity of the measurements. With the growing maturity of AJAX technologies it becomes possible to construct sophisticated interfaces using web technologies only. Our solution for handling time series, generally slow control...
  7. Dr David Lawrence (Jefferson Lab)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Factory models are often used in object-oriented programming to allow more complicated and controlled instantiation than is easily done with a standard C++ constructor. The alternative factory model implemented in the JANA event-processing framework addresses issues of data integrity important to the type of reconstruction software developed for experimental HENP. The data on...
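    For readers unfamiliar with the pattern, here is a generic sketch of a factory with per-event caching (generic names, not JANA's actual classes): the product is built once per event on first request and handed out as a read-only collection afterwards, which is what protects data integrity.

        #include <memory>
        #include <vector>

        struct JObject { virtual ~JObject() = default; };  // base class for event data

        // Generic sketch of the pattern: lazy, once-per-event production.
        class Factory {
        public:
            virtual ~Factory() = default;
            const std::vector<std::unique_ptr<JObject>>& get() {
                if (!produced_) { produce(data_); produced_ = true; }
                return data_;
            }
            void clear() { data_.clear(); produced_ = false; }  // reset at end of event
        protected:
            virtual void produce(std::vector<std::unique_ptr<JObject>>& out) = 0;
        private:
            std::vector<std::unique_ptr<JObject>> data_;
            bool produced_ = false;
        };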
  8. Ms Gerhild Maier (Johannes Kepler Universität Linz)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    Grid computing is associated with a complex, large-scale, heterogeneous and distributed environment. The combination of different Grid infrastructures, middleware implementations, and job submission tools into one reliable production system is a challenging task. Given the impracticality of providing an absolutely fail-safe system, strong error reporting and handling is a crucial part of...
  9. Dr David Malon (Argonne National Laboratory), Dr Peter Van Gemmeren (Argonne National Laboratory)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    At a data rate of 200 Hz, event metadata records ("TAGs," in ATLAS parlance) provide fertile ground for the development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise "data mining," but our interest is different. Advanced statistical methods and tools such as...
  10. José Mejia (Rechenzentrum Garching)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The ATLAS computing Grid consists of several hundred compute clusters distributed around the world as part of the Worldwide LHC Computing Grid (WLCG). The Grid middleware and the ATLAS software, which have to be installed on each site, often require a certain Linux distribution and sometimes even a specific version thereof. On the other hand, mostly for maintenance reasons, computer centres...
  11. Dr John Kennedy (LMU Munich)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The organisation and operations model of the ATLAS T1-T2 federation/cloud associated with the GridKa T1 in Karlsruhe is described. Attention is paid to cloud-level services and the experience gained during the last years of operation. The ATLAS GridKa cloud is large and diverse, spanning 5 countries and 2 ROCs, and currently comprises 13 core sites. A well defined and tested operations...
  12. Lassi Tuura (Northeastern University)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The CMS experiment at the Large Hadron Collider has deployed numerous web-based services in order to serve the collaboration effectively. We present the two-phase authentication and authorisation system in use in the data quality and computing monitoring services, and in the data- and workload management services. We describe our techniques intended to provide a high level of security with...
  13. Marco Clemencic (European Organization for Nuclear Research (CERN))
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    An extensive test suite is the first step towards the delivery of robust software, but it is not always easy to implement, especially in projects with many developers. An easy-to-use and flexible infrastructure for writing and executing the tests reduces the work each developer has to do to instrument his packages with tests. At the same time, the infrastructure gives the same look and...
  14. Mr Ricardo Manuel Salgueiro Domingues da Silva (CERN)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    A frequent source of concern for resource providers is the efficient use of computing resources in their centres. This has a direct impact on requests for new resources. There are two different but strongly correlated aspects to be considered: while users are mostly interested in a good turn-around time for their jobs, resource providers are mostly interested in a high and efficient usage...
  15. Alessandro De Salvo (Istituto Nazionale di Fisica Nucleare Sezione di Roma 1)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The measurement of the experiment software performance is a very important metric for choosing the most effective resources to be used and for discovering the bottlenecks of the code implementation. In this work we present the benchmark techniques used to measure the ATLAS software performance through the ATLAS offline testing engine Kit Validation and the online portal Global Kit...
  16. Dr Florian Uhlig (GSI Darmstadt)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    One of the challenges of software development for large experiments is managing the contributions from globally distributed teams. In order to keep the teams synchronized, strong quality control is important. For a software project this means testing, on all supported platforms, whether the project can be built from source, whether it runs, and in the end whether the program delivers the...
  17. Witold Pokorski (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    We present the new monitoring system for CASTOR (CERN Advanced STORage), which allows an integrated view of all the different storage components. With the massive data-taking phase approaching, CASTOR is one of the key elements of the software needed by the LHC experiments. It has to provide reliable storage machinery for saving the event data, as well as to enable an efficient...
  18. Dr Antonio Pierro (INFN-BARI)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The web application service, as part of the conditions database system, serves applications and users outside the event processing. The application server is built upon the conditions Python API in the CMS offline software framework. It responds to HTTP requests on various conditions database instances. The main client of the application server is the conditions database web GUI, which currently...
  19. Edward Karavakis (Brunel University-CERN)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    Dashboard is a monitoring system developed for the LHC experiments in order to provide a view of the Grid infrastructure from the perspective of the Virtual Organisation. The CMS Dashboard provides a reliable monitoring system that enables a transparent view of the experiment activities across different middleware implementations and combines the Grid monitoring data with information that...
  20. Lassi Tuura (Northeastern University)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    A central component of the data quality monitoring system of the CMS experiment at the Large Hadron Collider is a web site for browsing data quality histograms. The production servers in data taking provide access to several hundred thousand histograms per run, both live online and as up to several terabytes of archived histograms for the online data taking, Tier-0 prompt...
  21. Natalia Ratnikova (Fermilab-ITEP(Moscow)-Karlsruhe University(Germany))
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The CMS Software project CMSSW embraces more than a thousand packages organized in over a hundred subsystems covering the areas of analysis, event display, reconstruction, simulation, detector description, data formats, framework, utilities and tools. The release integration process is highly automated, using tools developed or adopted by CMS. Packaging in rpm format is a built-in step in the...
  22. Mr Shahzad Muzaffar (NORTHEASTERN UNIVERSITY)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The CMS offline software consists of over two million lines of code actively developed by hundreds of developers from all around the world. Optimal builds and distribution of such a large scale system for production and analysis activities for hundreds of sites and multiple platforms are major challenges. Recent developments have not only optimized the whole process but also helped us identify...
  23. Dr Thomas Kress (RWTH Aachen, III. Physikal. Institut B)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The Tier-2 centers in CMS are the only locations, besides the specialized analysis facility at CERN, where users are able to obtain guaranteed access to CMS data samples. The Tier-1 centers are used primarily for organized processing and storage. The Tier-1s are specified with data export and network capacity to allow the Tier-2 centers to refresh the data in disk storage regularly for...
  24. Dr Ajit Kumar Mohapatra (University of Wisconsin, Madison, USA)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The CMS experiment has been using the Open Science Grid, through its US Tier-2 computing centers, from its very beginning for production of Monte Carlo simulations. In this talk we will describe the evolution of the usage patterns indicating the best practices that have been identified. In addition to describing the production metrics and how they have been met, we will also present the...
  25. Dr Alessandra Fanfani (on behalf of CMS - INFN-BOLOGNA (ITALY))
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    CMS has identified the distributed Tier-2 sites as the primary location for physics analysis. There is a specialized analysis cluster at CERN, but it represents approximately 15% of the total computing available to analysis users. The more than 40 Tier-2s on 4 continents will provide analysis computing and user storage resources for the vast majority of physicists in CMS. The CMS estimate is...
  26. Andrea Valassi (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The COOL project provides software components and tools for the handling of the LHC experiment conditions data. The project is a collaboration between the CERN IT Department and ATLAS and LHCb, the two experiments that have chosen it as the basis of their conditions database infrastructure. COOL supports persistency for several relational technologies (Oracle, MySQL and SQLite), based on the...
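    The central idea of a conditions database, payloads tagged with an interval of validity and looked up by time, can be sketched as follows (a simplification for illustration, not the COOL API):

        #include <cstdint>
        #include <iterator>
        #include <map>
        #include <stdexcept>
        #include <string>

        using TimeStamp = std::uint64_t;

        // Simplified conditions store: each payload is valid from its 'since'
        // time until the start of the next payload; find() returns the payload
        // whose interval of validity covers t.
        class IovStore {
        public:
            void store(TimeStamp since, const std::string& payload) { byStart_[since] = payload; }
            const std::string& find(TimeStamp t) const {
                auto it = byStart_.upper_bound(t);  // first interval starting after t
                if (it == byStart_.begin()) throw std::runtime_error("no valid payload");
                return std::prev(it)->second;       // interval covering t
            }
        private:
            std::map<TimeStamp, std::string> byStart_;
        };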
  27. Prof. Kihyeon Cho (KISTI)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    KISTI (Korea Institute of Science and Technology Information) is the national headquarters for supercomputing, networking, Grid and e-Science in Korea. We have been working on cyberinfrastructure for high-energy physics experiments, especially the CDF and ALICE experiments. We introduce this cyberinfrastructure, which includes resources, Grid and e-Science for these experiments. The goal of...
  28. Cédric Serfon (LMU München)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    A set of tools has been developed to carry out the Data Management operations (deletion, movement of data within a site, and consistency checks) within the German cloud for ATLAS. These tools, which use local protocols that allow fast and efficient processing, are described hereafter and presented in the context of the operational procedures of the cloud. A particular emphasis is put on the...
  29. Dr Ashok Agarwal (University of Victoria, Victoria, BC, Canada)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    An interface between dCache and the local Tivoli Storage Manager (TSM) tape storage facility has been developed at the University of Victoria (UVic) for High Energy Physics (HEP) applications. The interface is responsible for transferring the data from disk pools to tape and retrieving data from tape to disk pools. It also checks the consistency between the PNFS filename space and the TSM...
  30. Dirk Hufnagel (Conseil Europeen Recherche Nucl. (CERN))
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The CMS Tier-0 is responsible for handling the data in the first period of its life, from being written to a disk buffer at the CMS experiment site in Cessy by the DAQ system until the transfer from CERN to one of the Tier-1 computing centres completes. It contains all automatic data movement, archival and processing tasks run at CERN. This includes the bulk transfers of data from Cessy to...
  31. Mr Adrian Casajus Ramo (Departament d' Estructura i Constituents de la Materia)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    DIRAC, the LHCb community Grid solution, provides access to a vast amount of computing and storage resources for a large number of users. In DIRAC, users are organized in groups with different needs and permissions. Security is mandatory in order to ensure that only allowed users can access the resources and to prevent abuse. All DIRAC services and clients use secure...
  32. Galina Shabratova (Joint Inst. for Nuclear Research (JINR))
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    A. Bogdanov (3), L. Malinina (2), V. Mitsyn (2), Y. Lyublev (9), Y. Kharlov (8), A. Kiryanov (4), D. Peresounko (5), E. Ryabinkin (5), G. Shabratova (2), L. Stepanova (1), V. Tikhomirov (3), W. Urazmetov (8), A. Zarochentsev (6), D. Utkin (2), L. Yancurova (2), S. Zotkin (8). Affiliations: (1) Institute for Nuclear Research of the Russian Academy of Sciences, Troitsk, Russia; (2) Joint Institute for Nuclear Research, Dubna, Russia; (3) Moscow Engineering Physics Institute,...
  33. Predrag Buncic (CERN)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    Infrastructure-as-a-Service (IaaS) providers allow users to easily acquire on-demand computing and storage resources. For each user they provide an isolated environment in the form of Virtual Machines which can be used to run services and deploy applications. This approach, also known as 'cloud computing', has proved to be viable for a variety of commercial applications. Currently there are...
  34. Mr Omer Khalid (CERN)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    Omer Khalid, Paul Nillson, Kate Keahey, Markus Schulz --- Given the proliferation of virtualization technology in every technological domain, we have been investigating enabling virtualization in the LCG Grid to bring in benefits such as isolation, security and environment portability, using virtual machines as job execution containers. There are many different ways to...
  35. Paul Rossman (Fermi National Accelerator Lab. (Fermilab))
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    CMS utilizes a distributed infrastructure of computing centers to custodially store data, to provide organized processing resources, and to provide analysis computing resources for users. Integrated over the whole system, even in the first year of data taking, the available disk storage approaches 10 petabytes of space. Maintaining consistency between the data bookkeeping, the data transfer...
  36. Matevz Tadel (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    EVE is a high-level visualization library using ROOT's data-processing, GUI and OpenGL interfaces. It is designed as a framework for object management offering hierarchical data organization, object interaction and visualization via GUI and OpenGL representations. Automatic creation of 2D projected views is also supported. On the other hand, it can serve as an event visualization toolkit...
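    A minimal ROOT macro using EVE might look like this (standard TEve classes; the point coordinates are made up for illustration):

        // eve_points.C -- run with: root -l eve_points.C
        void eve_points()
        {
            TEveManager::Create();                       // start the EVE application
            TEvePointSet* hits = new TEvePointSet("hits");
            hits->SetNextPoint( 0.0, 0.0, 0.0);          // illustrative coordinates
            hits->SetNextPoint(10.0, 5.0, 2.0);
            hits->SetMarkerColor(kYellow);
            gEve->AddElement(hits);                      // insert into the object hierarchy
            gEve->Redraw3D(kTRUE);                       // reset cameras and draw
        }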
  37. Prof. Roger Jones (Lancaster University)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    Despite the all too brief availability of beam-related data, much has been learned about the usage patterns and operational requirements of the ATLAS computing model since Autumn 2007. Bottom-up estimates are now more detailed, and cosmic ray running has exercised much of the model in both duration and volume. Significant revisions have been made in the resource estimates, and in the usage of...
  38. Claudio Grandi (INFN Bologna)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The CMS Collaboration relies on 7 globally distributed Tier-1 computing centers located at large universities and national laboratories for a second custodial copy of the CMS RAW data and primary copy of the simulated data, data serving capacity to Tier-2 centers for analysis, and the bulk of the reprocessing and event selection capacity in the experiment. The Tier-1 sites have a challenging...
  39. Dr Tomasz Wlodek (Brookhaven National Laboratory (BNL)), Dr Yuri Smirnov (Brookhaven National Laboratory (BNL))
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The PanDA distributed production and analysis system has been in production use for ATLAS data processing and analysis since late 2005 in the US, and globally throughout ATLAS since early 2008. Its core architecture is based on a set of stateless web services served by Apache and backed by a suite of MySQL databases that are the repository for all PanDA information: active and archival...
  40. Juraj Sucik (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    CERN has successful experience with running the Server Self Service Center (S3C) for virtual server provisioning, which is based on Microsoft Virtual Server 2005. With the introduction of Windows Server 2008 and its built-in hypervisor-based virtualization (Hyper-V), there are new possibilities for the expansion of the current service. Observing a growing industry trend of provisioning Virtual...
  41. Mr Michele De Gruttola (INFN, Sezione di Napoli - Universita & INFN, Napoli/ CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Reliable population of the condition database is critical for the correct operation of the online selection as well as of the offline reconstruction and analysis of data. We will describe here the system put in place in the CMS experiment to populate the database and make condition data promptly available online for the high-level trigger and offline for reconstruction. The system has been...
  42. Loic Quertenmont (Universite Catholique de Louvain)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    FROG is a generic framework dedicated to visualizing events in a given geometry. It is written in C++ and uses the cross-platform OpenGL libraries. It can be applied to any particular physics experiment or detector design. The code is very light and very fast and runs on various operating systems. Moreover, FROG is self-consistent and does not require installation of ROOT or...
  43. Victor Diez Gonzalez (Univ. Rov. i Virg., Tech. Sch. Eng.-/CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Geant4 is a toolkit to simulate the passage of particles through matter, and is widely used in HEP, in medical physics and for space applications. Ongoing developments and improvements require regular integration testing for new or modified code. The current system uses a customised version of the Bonsai Mozilla tool to collect and select tags for testing, a set of shell and...
  44. Mr Laurent GARNIER (LAL-IN2P3-CNRS)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Qt is a powerful cross-platform application framework, free (even on Windows) and used by many people and applications. That is why the latest developments of the Geant4 visualization group include a new driver based on the Qt toolkit. The Qt library has OpenGL support, so all 3D scenes can be moved with the mouse (as in the OpenInventor driver). This driver tries to provide all the features already...
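    A minimal way to bring up a Qt-based session from a Geant4 application might look like this (assuming Geant4 was built with Qt support; detector, physics and action setup are omitted):

        #include "G4RunManager.hh"
        #include "G4UImanager.hh"
        #include "G4UIExecutive.hh"
        #include "G4VisExecutive.hh"

        int main(int argc, char** argv)
        {
            G4RunManager runManager;       // detector/physics/action setup omitted
            G4VisExecutive visManager;
            visManager.Initialize();
            G4UIExecutive ui(argc, argv);  // selects the Qt session when available
            G4UImanager::GetUIpointer()->ApplyCommand("/vis/open OGLSQt");  // Qt OpenGL viewer
            ui.SessionStart();
            return 0;
        }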
  45. Mr Luiz Henrique Ramos De Azevedo Evora (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    During the operation, maintenance, and dismantling periods of the ATLAS Experiment, the traceability of all detector equipment must be guaranteed for logistic and safety matters. The running of the Large Hadron Collider will expose the ATLAS detector to radiation. Therefore, CERN shall follow specific regulation from French and Swiss authorities for equipment removal, transport, repair, and...
  46. Dr Jose Caballero (Brookhaven National Laboratory (BNL))
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    Worker nodes on the grid exhibit great diversity, making it difficult to offer uniform processing resources. A pilot job architecture, which probes the environment on the remote worker node before pulling down a payload job, can help. Pilot jobs become smart wrappers, preparing an appropriate environment for job execution and providing logging and monitoring capabilities. PanDA (Production...
  47. Dr Bogdan Lobodzinski (DESY, Hamburg,Germany)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The H1 Collaboration at HERA has entered the period of high-precision analyses based on the final data sample. These analyses require massive production of simulated Monte Carlo (MC) events. The H1 MC framework is software created by the H1 Collaboration for mass MC production on the LCG Grid infrastructure and on a local batch system. The aim of the tool is full automation of the...
  48. Dr Sebastian Böser (University College London)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    In recent years, the HepMC data format has established itself as the standard data format for the simulation of high-energy physics interactions and is commonly used by all four LHC experiments. At the energies of the proton-proton collisions at the LHC, a full description of the generation of these events and the subsequent interactions with the detector typically involves several...
  49. Axel Naumann (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    C++ does not offer access to reflection data: the types and their members as well as their memory layout are not accessible. Reflex adds that: it can be used to describe classes and any other types, to lookup and call functions, to lookup and access data members, to create and delete instances of types. It is rather unique and attracts considerable interest also outside of high energy...
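    A sketch of the kind of usage Reflex enables (based on the ROOT::Reflex API; exact signatures may vary between versions):

        #include "Reflex/Type.h"
        #include "Reflex/Member.h"
        #include "Reflex/Object.h"
        #include <cstddef>
        #include <string>

        using namespace ROOT::Reflex;

        void inspect(const std::string& className)
        {
            Type t = Type::ByName(className);   // look up a type by name
            if (!t) return;                     // no dictionary loaded for it
            Object obj = t.Construct();         // create an instance
            for (size_t i = 0; i < t.DataMemberSize(); ++i) {
                Member m = t.DataMemberAt(i);   // m.Name(), m.TypeOf(), ...
                (void)m;
            }
            t.Destruct(obj.Address());          // delete the instance
        }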
  50. Dr David Dykstra (Fermilab)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The CMS experiment requires worldwide access to conditions data by nearly a hundred thousand processing jobs daily. This is accomplished using a software subsystem called Frontier. This system translates database queries into http, looks up the results in a central database at CERN, and caches the results in an industry-standard http proxy/caching server called Squid. One of the most...
  51. Dr Hartmut Stadie (Universität Hamburg)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    While the Grid infrastructure for the LHC experiments is well suited for batch-like analysis, it does not support the final steps of an analysis on a reduced data set, e.g. the optimization of cuts and derivation of the final plots. Usually this part is done interactively. However, for the LHC these steps might still require a large amount of data. The German "National Analysis Facility"(NAF)...
  52. Dr Vladimir Korenkov (Joint Institute for Nuclear Research (JINR))
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    Different monitoring systems are now extensively used to keep an eye on real time state of each service of distributed grid infrastructures and jobs running on the Grid. Tracking current services’ state as well as the history of state changes allows rapid error fixing, planning future massive productions, revealing regularities of Grid operation and many other things. Along with...
  53. Marco Mambelli (UNIVERSITY OF CHICAGO)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The ATLAS experiment is projected to collect over one billion events/year during the first few years of operation. The efficient selection of events for various physics analyses across all appropriate samples presents a significant technical challenge. ATLAS computing infrastructure...
  54. Dr Pavel Nevski (BNL)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    In addition to the challenges of computing and data handling, ATLAS and the other LHC experiments place a great burden on users to configure and manage the large number of parameters and options needed to carry out distributed computing tasks. Management of distributed physics data is being made more transparent by dedicated ATLAS grid computing technologies, such as PanDA (a pilot-based job...
  55. Prof. Marco Cattaneo (CERN)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    LHCb had been planning to commission its High Level Trigger software and Data Quality monitoring procedures using real collisions data from the LHC pilot run. Following the LHC incident on 19th September 2008, it was decided to commission the system using simulated data. This “Full Experiment System Test” consists of: - Injection of simulated minimum bias events into the full HLT farm,...
  56. Luciano Piccoli (Fermilab)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    Large computing clusters used for scientific processing suffer from systemic failures when operated over long continuous periods for executing workflows. Diagnosing job problems and faults leading to eventual failures in this complex environment is difficult, especially when the success of the whole workflow might be affected by a single job failure. In this paper, we introduce a model-based,...
  57. Alexey Zhelezov (Physikalisches Institut, Universitaet Heidelberg)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    LHC experiments are producing very large volumes of data, either accumulated from the detectors or generated via Monte Carlo modeling. The data should be processed as quickly as possible to provide users with the input for their analysis. Processing multiple hundreds of terabytes of data necessitates generating, submitting and tracking a huge number of grid jobs running all over the...
  58. Noriza Satam (Department of Mathematics, Faculty of Science, Universiti Teknologi Malaysia), Norma Alias (Institute of Ibnu Sina, Universiti Teknologi Malaysia)
    23/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    New Iterative Alternating Group Explicit (NAGE) is a powerful parallel numerical algorithm for multidimensional temperature prediction. The discretization is based on the finite difference method for partial differential equations (PDEs) of parabolic type. The 3-dimensional temperature visualization is critical since it involves large-scale computational complexity. The three fundamental...
  59. Mr Andrew Baranovski (FNAL)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    In a shared computing environment, activities orchestrated by workflow management systems often need to span organizational and ownership domains. In such a setting, common tasks, such as the collection and display of metrics and debugging information, are challenged by the informational entropy inherent to independently maintained and owned software sub-components. Because such information...
  60. Benjamin Gaidioz (CERN)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The ATLAS production system is one of the most critical components in the experiment's distributed system, and this becomes even more true now that real data have entered the scene. Monitoring such a system is a non-trivial task, even more so when two of its main characteristics are the flexibility in the submission of job processing units and the heterogeneity of the resources it uses. In...
  61. Dr Xavier Espinal (PIC/IFAE)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The ATLAS distributed computing activities involve about 200 computing centers distributed worldwide and need people on shift covering 24 hours per day. Data distribution, data reprocessing, user analysis and Monte Carlo event simulation run continuously. Reliable performance of the whole ATLAS computing community is of crucial importance for meeting the ambitious physics goals of the ATLAS...
  62. Alexander Undrus (BROOKHAVEN NATIONAL LABORATORY, USA)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The system of automated multi-platform software nightly builds is a major component in ATLAS collaborative software organization and code approval scheme. Code developers from more than 30 countries use about 25 branches of nightly releases for testing new packages, validation of patches to existing software, and migration to new platforms and compilers. The successful nightly releases...
  63. Dr Philippe Calfayan (Ludwig-Maximilians-University Munich)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The PROOF (Parallel ROOT Facility) library is designed to perform parallelized ROOT-based analyses with a heterogeneous cluster of computers. The installation, configuration and monitoring of PROOF have been carried out using the Grid-Computing environments dedicated to the ATLAS experiment. A PROOF cluster hosted at the Leibniz Rechenzentrum (LRZ) and consisting of a scalable amount of...
  64. Dr Alfio Lazzaro (Universita and INFN, Milano / CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    MINUIT is the most common package used in high energy physics for numerical minimization of multi-dimensional functions. The major algorithm of this package, MIGRAD, searches for the minimum by using the gradient function. For each minimization iteration, MIGRAD requires the calculation of the first derivatives for each parameter of the function to be minimized. Minimization is required for...
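    The per-parameter first derivatives MIGRAD needs are independent finite differences, which is what makes their calculation a natural candidate for parallelization. A sketch with central differences and a fixed step (MINUIT itself chooses its steps adaptively):

        #include <cstddef>
        #include <functional>
        #include <vector>

        // Central-difference gradient: one independent pair of evaluations per
        // parameter, so the loop iterations can run in parallel (e.g. one per thread).
        std::vector<double> gradient(const std::function<double(const std::vector<double>&)>& f,
                                     std::vector<double> x, double h = 1e-6)
        {
            std::vector<double> g(x.size());
            for (std::size_t i = 0; i < x.size(); ++i) {
                const double xi = x[i];
                x[i] = xi + h; const double fPlus  = f(x);
                x[i] = xi - h; const double fMinus = f(x);
                x[i] = xi;  // restore the parameter
                g[i] = (fPlus - fMinus) / (2.0 * h);
            }
            return g;
        }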
  65. Dr Niklaus Berger (Institute for High Energy Physics, Beijing)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Partial wave analysis is an important tool for determining resonance properties in hadron spectroscopy. For large data samples, however, the un-binned likelihood fits employed are computationally very expensive. At the Beijing Spectrometer (BES) III experiment, an increase in statistics compared to earlier experiments of up to two orders of magnitude is expected. In order to allow for a timely...
  66. Alexandre Vaniachine (Argonne National Laboratory), David Malon (Argonne National Laboratory), Jack Cranshaw (Argonne National Laboratory), Jérôme Lauret (Brookhaven National Laboratory), Paul Hamill (Tech-X Corporation), Valeri Fine (Brookhaven National Laboratory)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    High Energy and Nuclear Physics (HENP) experiments store petabytes of event data and terabytes of calibrations data in ROOT files. The Petaminer project develops a custom MySQL storage engine to enable the MySQL query processor to directly access experimental data stored in ROOT files. Our project is addressing a problem of efficient navigation to petabytes of HENP experimental data...
  67. Mr Igor Sfiligoi (Fermilab)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In...
  68. Marco Clemencic (European Organization for Nuclear Research (CERN))
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The LHCb software, from simulation to user analysis, is based on the Gaudi framework. The extreme flexibility that the framework provides, through its component model and its system of plug-ins, allows us to define a specific application by its behavior more than by its code. The application is then described by some configuration files read by the bootstrap executable (shared by all...
  69. Ms Elena Oliver (Instituto de Fisica Corpuscular (IFIC) - Universidad de Valencia)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    ATLAS data taking is due to start in Spring 2009. Given this expectation, this contribution presents a rigorous evaluation of the readiness parameters of the Spanish ATLAS Distributed Tier-2. Special attention will be paid to the readiness to perform Physics Analysis from different points of view: Network Efficiency, Data Discovery, Data Management, Production of...
  70. Mr Olivier Couet (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The ROOT framework provides many visualization techniques. Lately, several new ones have been implemented. This poster presents all the visualization techniques ROOT provides, highlighting the best use one can make of each of them.
  71. Prof. Gordon Watts (UNIVERSITY OF WASHINGTON)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    ROOT.NET provides an interface between Microsoft’s Common Language Runtime (CLR) and .NET technology and the ubiquitous particle physics analysis tool, ROOT. This tool automatically generates a series of efficient wrappers around the ROOT API. Unlike pyROOT, these wrappers are statically typed and so are highly efficient as compared to the Python wrappers. The connection to .NET means that one...
  72. Mr Romain Wartel (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Different computing grids may provide services to the same user community, and in addition a grid resource provider may share its resources across different, unrelated user communities. Security incidents are therefore increasingly prone to propagate from one resource center to another, either via the user community or via cooperating grid infrastructures. As a result, related and...
  73. Mr Jan KAPITAN (Nuclear Physics Inst., Academy of Sciences, Praha)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    High Energy Nuclear Physics (HENP) collaborations’ experience shows that the computing resources available from a single site are often neither sufficient nor able to satisfy the needs of remote collaborators eager to carry out their analyses in the fastest and most convenient way. From latencies in network connectivity to the lack of interactivity, having a fully functional software stack on local resources is...
  74. Ms Jaroslava Schovancova (Institute of Physics, Prague), Dr Jiri Chudoba (Institute of Physics, Prague)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The Pierre Auger Observatory studies ultra-high energy cosmic rays. Interactions of these particles with the nuclei of air gases at energies many orders of magnitude above the current accelerator capabilities induce unprecedented extensive air showers in the atmosphere. Different interaction models are used to describe the first interactions in such showers and their predictions are...
  75. Dr Simon Metson (H.H. Wills Physics Laboratory)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    In a collaboration the size of CMS (approx. 3000 users and almost 100 computing centres of varying size), communication and accurate information about the sites it has access to are vital in co-ordinating the multitude of computing tasks required for smooth running. SiteDB is a tool developed by CMS to track the sites available to the collaboration, the allocation to CMS of resources available at...
  76. Dr Ricardo Graciani Diaz (Universidad de Barcelona)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The usage of CPU resources by LHCb on the Grid is dominated by two different applications: Gauss and Brunel. Gauss is the application performing the Monte Carlo simulation of proton-proton collisions. Brunel is the application responsible for the reconstruction of the signals recorded by the detector, converting them into objects that can be used for the later physics analysis of the data (tracks,...
  77. Dr Dagmar Adamova (Nuclear Physics Institute AS CR)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The Czech Republic (CR) has been participating in the LHC Computing Grid project (LCG) since 2003, and gradually a middle-sized Tier-2 center has been built in Prague, delivering computing services for national HEP experiment groups, including the ALICE project at the LHC. We present a brief overview of the computing activities and services being performed in the CR for the ALICE...
  78. Pier Paolo Ricci (INFN CNAF)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    In the framework of WLCG, the Tier-1 computing centres have very stringent requirements in the sector of data storage, in terms of size, performance and reliability. For some years, at the INFN-CNAF Tier-1, we have been using two distinct storage systems: Castor as the tape-based storage solution (also known as the D0T1 storage class in WLCG language) and the General Parallel File...
  79. Mr Matti Kortelainen (Helsinki Institute of Physics)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    We study the performance of different ways of running a physics analysis in preparation for the analysis of petabytes of data in the LHC era. Our test cases include running the analysis code in a Linux cluster with a single thread in ROOT, with the Parallel ROOT Facility (PROOF), and in parallel via the Grid interface with the ARC middleware. We use on the order of millions of Pythia8...
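    For reference, steering an analysis through PROOF from ROOT takes only a few lines (the cluster name, tree name and selector below are placeholders):

        // ROOT macro sketch; "proof-master", "physics" and MyAnalysis are placeholders.
        {
            TProof::Open("proof-master");  // connect to the PROOF cluster
            TChain ch("physics");
            ch.Add("data/*.root");
            ch.SetProof();                 // route Process() through the PROOF workers
            ch.Process("MyAnalysis.C+");   // user's TSelector, compiled with ACLiC
        }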
  80. Dr Monica Verducci (INFN Roma)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The ATLAS Muon Spectrometer is the outer part of the ATLAS detector at the LHC. It has been designed to detect charged particles exiting the barrel and end-cap calorimeters and to measure their momentum in the pseudorapidity range |η| < 2.7. The challenging momentum-measurement performance requires accurate monitoring of detector and calibration parameters and a highly complex architecture to...
  81. Pedro Salgado (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The ATLAS Distributed Data Management system, Don Quijote2 (DQ2), has been in use since 2004. Its goal is to manage tens of petabytes of data per year, distributed across the WLCG. One of the most critical components of DQ2 is the central catalogues, which comprise a set of web services with a database back-end and a distributed memory-object caching system. This component has proven to...
  82. Dr Vincent Garonne (CERN)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The DQ2 Distributed Data Management system is the system developed and used by ATLAS for handling very large datasets. It encompasses data bookkeeping and management of large-scale production transfers as well as end-users' data access requests. In this paper, we describe the design and implementation of the DQ2 accounting service. It collects various data-usage information in order to show...
  83. Dr Solveig Albrand (LPSC)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    AMI is the main interface for searching for ATLAS datasets using physics metadata criteria. AMI has been implemented as a generic database management framework which allows parallel searching over many catalogues, which may have differing schema, and may be distributed geographically, using different RDBMS. The main features of the web interface will be described; in particular the powerful...
  84. Florbela Viegas (CERN)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary...
  85. Dr Daniele Bonacorsi (Universita & INFN, Bologna)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The CMS Facilities and Infrastructure Operations group is responsible for providing and maintaining a working distributed computing fabric with a consistent working environment for Data operations and the physics user community. Its mandate is to maintain the core CMS computing services; ensure the coherent deployment of Grid or site specific components (such as workload management, file...
  86. Dr Lee Lueking (FERMILAB)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The CMS experiment has implemented a flexible and powerful approach enabling users to find data within the CMS physics data catalog. The Dataset Bookkeeping Service (DBS) comprises a database and the services used to store and access metadata related to its physics data. In addition to the existing web-based and programmatic APIs, a generalized query system has been designed and built. This...
  87. Dr Andrea Sartirana (INFN-CNAF)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The CMS experiment is preparing for data taking in many computing activities, including the testing, deployment and operation of various storage solutions to support the computing workflows of the experiment. Some Tier-1 and Tier-2 centers supporting the collaboration are deploying and commissioning StoRM storage systems, that is, POSIX-based disk storage systems on top of which StoRM...
  88. Zoltan Mathe (UCD Dublin)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The LHCb Bookkeeping is a system for the storage and retrieval of metadata associated with LHCb datasets, e.g. whether the data are real or simulated, which running period they are associated with, how they were processed, and all the other relevant characteristics of the files. The metadata are stored in an Oracle database which is interrogated using services provided by the LHCb DIRAC3...
  89. Hubert Degaudenzi (European Organization for Nuclear Research (CERN))
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The installation of the LHCb software is handled by a single Python script: install_project.py. This bootstrap script is unique in allowing the installation of software projects on various operating systems (Linux, Windows, MacOSX). It is designed for LHCb software deployment for a single user or for multiple users, in a shared area or on the Grid. It retrieves the software packages and...
  90. Bertrand Bellenot (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Description of the new implementation of the ROOT browser
  91. Dr Hubert Degaudenzi (CERN), Karol Kruzelecki (Cracow University of Technology-Unknown-Unknown)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The core software stack, both from the LCG Application Area and LHCb, consists of more than 25 C++/Fortran/Python projects built for about 20 different configurations on Linux, Windows and MacOSX. To these projects one can also add about 20 external software packages (Boost, Python, Qt, CLHEP, ...) which also have to be built for the same configurations. In order to reduce the time of...
  92. Ilektra Christidi (Physics Department - Aristotle Univ. of Thessaloniki)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The ATLAS detector has been designed to exploit the full discovery potential of the LHC proton-proton collider at CERN, at the c.m. energy of 14 TeV. Its Muon Spectrometer (MS) has been optimized to measure final-state muons from those interactions with good momentum resolution (3-10% for momenta of 100 GeV/c-1 TeV/c). In order to ensure that the hardware, DAQ and reconstruction software of...
  93. Dr Mine Altunay (FERMILAB)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Open Science Grid stakeholders invariably depend on multiple infrastructures to build their community-based distributed systems. To meet this need, OSG has built new gateways with TeraGrid, Campus Grids, and Regional Grids (NYSGrid, BrazilGrid). This has brought new security challenges for the OSG architecture and operations. The impact of security incidents now has a larger scope and...
  94. Bertrand Bellenot (CERN)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Description of the ROOT event recorder, a GUI testing and validation tool.
  95. Anar Manafov (GSI)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Memory monitoring is a very important part of complex project development. Open-source tools such as valgrind are available for the task; however, their performance penalties make them unsuitable for debugging long, CPU-intensive programs such as reconstruction or simulation. We have developed the TMemStat tool, which, while not providing the full functionality of valgrind, gives...
  96. David Chamont (Laboratoire Leprince-Ringuet (LLR)-Ecole Polytechnique-Unknown)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    Like many experiments, FERMI stores its data within ROOT trees. A very common activity of physicists is the tuning of selection criteria which define the events of interest, thus cutting and pruning the ROOT trees to extract all the data linked to those specific events. It is rather straightforward to write a ROOT script to skim a single kind of data, for example the...
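    Skimming a single tree is indeed only a few lines of ROOT (the tree and branch names below are hypothetical); the difficulty the abstract points to lies in doing this consistently across all the trees that refer to the same events.

        // ROOT macro sketch: copy only the events passing a selection.
        {
            TFile in("input.root");
            TTree* events = (TTree*) in.Get("Events");
            TFile out("skim.root", "RECREATE");
            TTree* skim = events->CopyTree("nTracks > 2 && energy > 30");  // hypothetical cut
            skim->Write();
            out.Close();
        }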
  97. Dr Richard Wilkinson (California Institute of Technology)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    In 2008, the CMS experiment made the transition from a custom-parsed language for job configuration to using Python. The current CMS software release has over 180,000 lines of Python configuration code. We describe the new configuration system, the motivation for the change, the transition itself, and our experiences with the new configuration language.
  98. Dr Oliver Gutsche (FERMILAB)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    The CMS software stack currently consists of more than 2 million lines of code developed by over 250 authors, with a new version being released every week. CMS has set up a release validation process for quality assurance which enables the developers to compare to previous releases and references. This process provides the developers with reconstructed datasets of real data and MC samples....
  99. Tatsiana Klimkovich (RWTH Aachen University)
    23/03/2009, 08:00
    Software Components, Tools and Databases
    poster
    VISPA is a novel development environment for high energy physics analyses which enables physicists to combine graphical and textual work. A physics analysis cycle consists of prototyping, performing, and verifying the analysis. The main feature of VISPA is a multipurpose window for visual steering of analysis steps, creation of analysis templates, and browsing physics event data at different...
  100. Prof. Rodriguez Jorge Luis (Florida Int'l University)
    23/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The CMS experiment will generate tens of petabytes of data per year, data that will be processed, moved and stored in large computing facilities at locations all over the globe. Each of these facilities deploys complex and sophisticated hardware and software components which require dedicated expertise, lacking at many of the universities and institutions wanting access to the data as soon as it...
  101. Alexander Mazurov (CERN)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    CASTOR provides a powerful and rich interface for managing files and pools of files backed by tape storage. The API is modelled very closely on that of a POSIX filesystem, where part of the actual I/O is handled by the rfio library. While the API is very close to POSIX, it is still separate, which unfortunately makes it impossible to use standard tools and scripts straight away....
  102. Mr Aatos Heikkinen (Helsinki Institute of Physics, HIP)
    24/03/2009, 08:00
    Event Processing
    poster
    We present a new Geant4 physics list prepared for nuclear physics applications in the domain dominated by spallation. We discuss new Geant4 models based on the translation of the INCL intra-nuclear cascade and ABLA de-excitation codes into C++ and used in the physics list. The INCL model is well established for targets heavier than aluminium and projectile energies from ~ 150 MeV up to 2.5...
  103. Dimosthenis Sokaras (N.C.S.R. Demokritos, Institute of Nuclear Physics)
    24/03/2009, 08:00
    Event Processing
    poster
    Well established values for the X-ray fundamental parameters (fluorescence yields, characteristic lines branching ratios, mass absorption coefficients, etc.) are very important but not adequate for an accurate reference-free quantitative X-Ray Fluorescence (XRF) analysis. Secondary ionization processes following photon induced primary ionizations in matter may contribute significantly to the...
  104. Karsten Koeneke (Deutsches Elektronen-Synchrotron (DESY))
    24/03/2009, 08:00
    Event Processing
    poster
    In the commissioning phase of the ATLAS experiment, low-level Event Summary Data (ESD) are analyzed to evaluate the performance of the individual subdetectors, the performance of the reconstruction and particle identification algorithms, and obtain calibration coefficients. In the GRID model of distributed analysis, these data must be transferred to Tier-1 and Tier-2 sites before they can be...
    Go to contribution page
  105. Dr Rudi Frühwirth (Institut fuer Hochenergiephysik (HEPHY)-Oesterreichische Akademi)
    24/03/2009, 08:00
    Event Processing
    poster
    Reconstruction of interaction vertices is an essential step in the reconstruction chain of a modern collider experiment such as CMS; the primary ("collision") vertex is reconstructed in every event within the CMS reconstruction program, CMSSW. However, the task of finding and fitting secondary ("decay") vertices also plays an important role in several physics cases such as the reconstruction...
    Go to contribution page
  106. Dr Kilian Schwarz (GSI)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    GSI Darmstadt is hosting a Tier2 centre for the ALICE experiment, providing about 10% of ALICE Tier2 resources. According to the computing model, the tasks of a Tier2 centre are scheduled and unscheduled analysis as well as Monte Carlo simulation. To accomplish this, a large water-cooled compute cluster has been set up and configured, currently consisting of 200 CPUs (1500 cores). After intensive...
    Go to contribution page
  107. Dr Marian Ivanov (GSI)
    24/03/2009, 08:00
    Event Processing
    poster
    We will present a particle identification algorithm, as well as a calibration and performance study, for the ALICE Time Projection Chamber (TPC) using the dE/dx measurement. New calibration algorithms had to be developed, since the simple geometrical corrections were only suitable at the 5-10% level. The PID calibration consists of the following parts: gain calibration, energy deposit calibration as...
    Go to contribution page
  108. Dr Marian Ivanov (GSI)
    24/03/2009, 08:00
    Event Processing
    poster
    We will present our studies of the performance of the reconstruction in the ALICE Time Projection Chamber (TPC). The reconstruction algorithm in question is based on the Kalman filter. The performance is characterized by the resolution in position, angle and momentum as a function of particle properties (momentum, position). The resulting momentum parametrization is compared with the...
    Go to contribution page
  109. Daniel Kollar
    24/03/2009, 08:00
    Event Processing
    poster
    CERN's Large Hadron Collider (LHC) is the world's largest particle accelerator. ATLAS is one of the two general-purpose experiments, equipped with a charged-particle tracking system built on two technologies, silicon and drift-tube based detectors, composing the ATLAS Inner Detector (ID). The required precision for the alignment of the most sensitive coordinates of the silicon sensors is just...
    Go to contribution page
  110. Jan Amoraal (NIKHEF), Wouter Hulsbergen (NIKHEF)
    24/03/2009, 08:00
    Event Processing
    poster
    We report on an implementation of a global chi-square algorithm for the simultaneous alignment of all tracking systems in the LHCb detector. Our algorithm uses hit residuals from the standard LHCb track fit, which is based on a Kalman filter. The algorithm is implemented in the LHCb reconstruction framework and exploits the fact that all sensitive detector elements have the same geometry... (The generic form of such a minimization is sketched after this entry.)
    Go to contribution page
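    In textbook least-squares form (standard notation, not the LHCb-specific implementation), such a global alignment minimizes the total chi-square of the residuals and updates all alignment parameters at once:

        \chi^2(\alpha) = \sum_{\mathrm{tracks}} r(\alpha)^{T} V^{-1} r(\alpha),
        \qquad
        \Delta\alpha = -\left(A^{T} V^{-1} A\right)^{-1} A^{T} V^{-1} r,
        \qquad
        A = \frac{\partial r}{\partial \alpha},

    where r are the hit residuals from the track fit and V their covariance; iterating this Newton step aligns all detector elements simultaneously.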
  111. Vitali CHOUTKO (CERN)
    24/03/2009, 08:00
    Event Processing
    poster
    The ROOT-based event model for the AMS experiment is presented. By adding a few pragmas to the main ROOT code, parallel processing of ROOT chains on local multi-core machines became possible. The scheme does not require any merging of the user-defined output information (like histograms, etc.), nor is any pre-installation procedure needed. The scalability of the scheme is...
    Go to contribution page
  112. Dr Edmund Widl (Institut für Hochenergiephysik (HEPHY Vienna))
    24/03/2009, 08:00
    Event Processing
    poster
    One of the main components of the CMS experiment is the Inner Tracker. This device, designed to measure the trajectories of charged particles, is composed of approximately 16,000 planar silicon detector modules, which makes it the biggest of its kind. However, systematic measurement errors, caused by unavoidable inaccuracies in the construction and assembly phase, reduce the precision of the...
    Go to contribution page
  113. Stefan Kluth (Max-Planck-Institut für Physik)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    We show how the ATLAS offline software is ported to the Amazon Elastic Compute Cloud (EC2). We prepare an Amazon Machine Image (AMI) on the basis of the standard ATLAS platform, Scientific Linux 4 (SL4). Then an instance of the SL4 AMI is started on EC2 and we install and validate a recent release of the ATLAS offline software distribution kit. The installed software is archived as an image...
    Go to contribution page
  114. Dr David Lawrence (Jefferson Lab)
    24/03/2009, 08:00
    Event Processing
    poster
    Automatic ROOT tree creation is achieved in the JANA Event Processing Framework through a special plugin. The janaroot plugin can automatically define a TTree from the data objects passed through the framework without using a ROOT dictionary. Details on how this is achieved as well as possible applications will be presented. (A toy example of dictionary-free tree creation follows this entry.)
    Go to contribution page
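    For orientation, the underlying ROOT mechanism such a plugin can exploit is that leaf-list branches are created from plain memory addresses and type codes, so no class dictionary is required. A toy sketch follows; the branch names and types are hypothetical, not taken from janaroot.

        #include "TFile.h"
        #include "TTree.h"

        int main() {
            TFile f("events.root", "RECREATE");
            TTree tree("T", "tree defined without a ROOT dictionary");
            double E = 0; int ntrk = 0;
            // Leaf-list branches ("E/D", "ntrk/I") are built from raw addresses
            // and type codes, so no dictionary for a user class is needed.
            tree.Branch("E", &E, "E/D");
            tree.Branch("ntrk", &ntrk, "ntrk/I");
            for (int i = 0; i < 100; ++i) { E = 0.5 * i; ntrk = i % 7; tree.Fill(); }
            tree.Write();
            return 0;
        }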
  115. Robert Petkus (Brookhaven National Laboratory)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    Gluster, a free cluster file-system scalable to several peta-bytes, is under evaluation at the RHIC/USATLAS Computing Facility. Several production SunFire x4500 (Thumper) NFS servers were dual-purposed as storage bricks and aggregated into a single parallel file-system using TCP/IP as an interconnect. Armed with a paucity of new hardware, the objective was to simultaneously allow traditional...
    Go to contribution page
  116. Dr Peter Kreuzer (RWTH Aachen IIIA)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    The CMS CERN Analysis Facility (CAF) was primarily designed to host a large variety of latency-critical workflows. These break down into alignment and calibration, detector commissioning and diagnosis, and high-interest physics analysis requiring fast turnaround. In addition to the low-latency requirement on the batch farm, another mandatory condition is the efficient access to the RAW...
    Go to contribution page
  117. Andrea Di Simone (INFN Roma2)
    24/03/2009, 08:00
    Event Processing
    poster
    Resistive Plate Chambers (RPC) are used in ATLAS to provide the first-level muon trigger in the barrel region. The total size of the system is about 16,000 m², read out by about 350,000 electronic channels. In order to reach the needed trigger performance, a precise knowledge of the detector working point is necessary, and the high number of readout channels calls for severe requirements on...
    Go to contribution page
  118. Dr Silvia Maselli (INFN Torino)
    24/03/2009, 08:00
    Event Processing
    poster
    The calibration process of the Barrel Muon DT System of CMS, as developed and tuned during the recent cosmic data run, is presented. The calibration data reduction method, the full workflow of the procedure and the final results are presented for real and simulated data.
    Go to contribution page
  119. Mr James Jackson (H.H. Wills Physics Laboratory - University of Bristol)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    The UK LCG Tier-1 computing centre located at the Rutherford Appleton Laboratory is responsible for the custodial storage and processing of the raw data from all four LHC experiments: CMS, ATLAS, LHCb and ALICE. The demands of data import, processing, export and custodial tape archival place unique requirements on the mass storage system used. The UK Tier-1 uses CASTOR as the storage...
    Go to contribution page
  120. Rodrigo Sierra Moral (CERN)
    24/03/2009, 08:00
    Collaborative Tools
    poster
    Scientists all over the world collaborate with the CERN laboratory day by day. They must be able to communicate effectively on their joint projects at any time, so telephone conferences become indispensable and widely used. The traditional conference system, managed by 6 switchboard operators, was hosting more than 20,000 hours and 5,500 conferences per year. However, the system needed to be...
    Go to contribution page
  121. Mr Carlos Ghabrous (CERN)
    24/03/2009, 08:00
    Collaborative Tools
    poster
    As a result of the tremendous development of GSM services over the last years, the number of related services used by organizations has drastically increased. Therefore, monitoring GSM services is becoming a business-critical issue in order to be able to react appropriately in case of incident. In order to provide GSM coverage in all the CERN underground facilities, more than 50 km of...
    Go to contribution page
  122. Dr Lucas Taylor (Northeastern U., Boston)
    24/03/2009, 08:00
    Collaborative Tools
    poster
    The CMS Experiment at the LHC is establishing a global network of inter-connected "CMS Centres" for controls, operations and monitoring at CERN, Fermilab, DESY and a number of other sites in Asia, Europe, Russia, South America, and the USA. "ci2i" ("see eye to eye") is a generic Web tool, using Java and Tomcat, for managing: hundreds of display screens in many locations; monitoring...
    Go to contribution page
  123. Miroslav Siket (CERN)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    LHC computing requirements are such that the number of CPU and storage nodes and the complexity of the services to be managed bring new challenges. Operations like checking configuration consistency, executing actions on nodes, or moving them between clusters are very frequent. These scaling challenges are the basis for CluMan, a new cluster management tool being designed and...
    Go to contribution page
  124. Martin Gasthuber (DESY)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    With the first analysis-capable LHC data on the horizon, more and more sites are facing the problem of building a highly efficient analysis facility for their local physicists, mostly attached to a Tier-2/3. The most important ingredient for such a facility is the underlying storage system, and here the selected option for the data management and data access system - well...
    Go to contribution page
  125. Mr Stuart Wakefield (Imperial College)
    24/03/2009, 08:00
    Event Processing
    poster
    ProdAgent is a set of tools to assist in producing various data products such as Monte Carlo simulation, prompt reconstruction, re-reconstruction and skimming. In this paper we briefly discuss the ProdAgent architecture, and focus on the experience in using this system in recent computing challenges, feedback from these challenges, and future work. The computing challenges have proven...
    Go to contribution page
  126. Johanna Fleckner (CERN / University of Mainz)
    24/03/2009, 08:00
    Event Processing
    poster
    T. Cornelissen, on behalf of the ATLAS inner detector software group. Several million cosmic tracks were recorded during the combined ATLAS runs in the autumn of 2008. Using these cosmic-ray events as well as first-beam events, the software infrastructure of the inner detector of the ATLAS experiment (pixel and microstrip silicon detectors as well as straw tubes with additional transition...
    Go to contribution page
  127. Arshak Tonoyan (CERN)
    24/03/2009, 08:00
    Event Processing
    poster
    Looking towards first LHC collisions, the ATLAS detector is being commissioned using all types of physics data available: cosmic rays and events produced during a few days of LHC single beam operations. In addition to putting in place the trigger and data acquisition chains, commissioning of the full software chain is a main goal. This is interesting not only to ensure that the reconstruction,...
    Go to contribution page
  128. David Futyan (Imperial College, University of London)
    24/03/2009, 08:00
    Event Processing
    poster
    The CMS experiment has developed a powerful framework to ensure the precise and prompt alignment and calibration of its components, which is a major prerequisite to achieve the optimal performance for physics analysis. The prompt alignment and calibration strategy harnesses computing resources both at the Tier-0 site and the CERN Analysis Facility (CAF) to ensure fast turnaround for updating...
    Go to contribution page
  129. Mr Gheni Abla (General Atomics)
    24/03/2009, 08:00
    Online Computing
    poster
    Increasing utilization of the Internet and convenient web technologies has made the web-portal a major application interface for remote participation and control of scientific instruments. While web-portals have provided a centralized gateway for multiple computational services, the amount of visual output often is overwhelming due to the high volume of data generated by complex scientific...
    Go to contribution page
  130. Sunanda Banerjee (Fermilab, USA)
    24/03/2009, 08:00
    Event Processing
    poster
    CMS is looking forward to tuning the detector simulation using the forthcoming collision data from the LHC. CMS established a task force in February 2008 in order to understand and reconcile the discrepancies observed between the CMS calorimetry simulation and the test beam data recorded during 2004 and 2006. Within this framework, significant effort has been made to develop a strategy of tuning fast...
    Go to contribution page
  131. Robert Petkus (Brookhaven National Laboratory)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    Over the last two years, the USATLAS Computing Facility at BNL has managed a highly performant, reliable, and cost-effective dCache storage cluster using SunFire x4500/4540 (Thumper/Thor) storage servers. The design of a discrete storage cluster signaled a departure from a model where storage resides locally on a disk-heavy compute farm. The consequent alteration of data flow mandated a...
    Go to contribution page
  132. Prof. Gordon Watts (UNIVERSITY OF WASHINGTON)
    24/03/2009, 08:00
    Collaborative Tools
    poster
    Particle physics conferences lasting a week (like CHEP) can have hundreds of talks and posters presented. Current conference web interfaces (like Indico) are well suited to finding a talk by author or by time-slot. However, browsing the complete material of a modern large conference is not user friendly. Browsing involves continually making the expensive transition between HTML viewing and...
    Go to contribution page
  133. Dr Filippo Costa (CERN)
    24/03/2009, 08:00
    Event Processing
    poster
    ALICE (A Large Ion Collider Experiment) is an experiment at the LHC (Large Hadron Collider) optimized for the study of heavy-ion collisions. The main aim of the experiment is to study the behavior of strongly interacting matter and the quark-gluon plasma. In order to be ready for the first real physics interactions, the 18 sub-detectors composing ALICE have been tested using cosmic rays and...
    Go to contribution page
  134. Dr Martin Aleksa (for the LAr conference committee) (CERN)
    24/03/2009, 08:00
    Event Processing
    poster
    The Liquid Argon (LAr) calorimeter is a key detector component in the ATLAS experiment at the LHC, designed to provide precision measurements of electrons, photons, jets and missing transverse energy. A critical element in the precision measurement is the electronic calibration. The LAr calorimeter has been installed in the ATLAS cavern and filled with liquid argon since 2006. The...
    Go to contribution page
  135. Mrs Elisabetta Ronchieri (INFN CNAF)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    Many High Energy Physics experiments must share and transfer large volumes of data. Therefore, the maximization of data throughput is a key issue, requiring detailed analysis and setup optimization of the underlying infrastructure and services. In Grid computing, the data transfer protocol called GridFTP is widely used for efficiently transferring data in conjunction with various types of file...
    Go to contribution page
  136. Marc Deissenroth (Universität Heidelberg)
    24/03/2009, 08:00
    Event Processing
    poster
    We report results obtained with different track-based algorithms for the alignment of the LHCb detector with first data. The large-area Muon Detector and Outer Tracker have been aligned with a large sample of tracks from cosmic rays. The three silicon detectors --- VELO, TT-station and Inner Tracker --- have been aligned with beam-induced events from the LHC injection line. We compare...
    Go to contribution page
  137. Dr Pablo Cirrone (INFN-LNS)
    24/03/2009, 08:00
    Event Processing
    poster
    Geant4 is a Monte Carlo toolkit describing the transport and interaction of particles with matter. Geant4 covers all particles and materials, and its geometry description allows for complex geometries. Initially focused on high-energy applications, the use of Geant4 is also growing in different fields, such as radioprotection, dosimetry, space radiation and external radiotherapy with proton and carbon...
    Go to contribution page
  138. Luca Lista (INFN Sezione di Napoli)
    24/03/2009, 08:00
    Event Processing
    poster
    We present a parser to evaluate expressions and boolean selections that is applied to CMS event data for event filtering and analysis purposes. The parser is based on a Boost.Spirit grammar definition, and uses the Reflex dictionary for class introspection. The parser allows a natural definition of expressions and cuts in the user's configuration, and provides good run-time performance compared to... (A toy cut-evaluation sketch follows this entry.)
    Go to contribution page
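    As a toy stand-in for the idea (deliberately not the Boost.Spirit/Reflex implementation itself), the sketch below evaluates a conjunction of simple cuts against named event quantities; all variable names are hypothetical and only '<', '>' and '&&' are supported.

        #include <map>
        #include <sstream>
        #include <string>
        #include <iostream>

        // Evaluate a conjunction of simple cuts like "pt > 20 && eta < 2.5"
        // against a map of event variables.
        bool passes(const std::string& cut, const std::map<std::string, double>& ev) {
            std::istringstream in(cut);
            std::string name, op, junk;
            double value;
            while (in >> name >> op >> value) {
                double x = ev.at(name);
                if (op == ">" ? !(x > value) : !(x < value)) return false;
                in >> junk;   // consume a trailing "&&", if any
            }
            return true;
        }

        int main() {
            std::map<std::string, double> ev{{"pt", 35.0}, {"eta", 1.2}};
            std::cout << passes("pt > 20 && eta < 2.5", ev) << "\n";   // prints 1
        }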
  139. Douglas Orbaker (University of Rochester)
    24/03/2009, 08:00
    Event Processing
    poster
    The experiments at the Large Hadron Collider (LHC) will start their search for answers to some of the remaining puzzles of particle physics in 2008. All of these experiments rely on a very precise Monte Carlo simulation of the physical and technical processes in the detectors. A fast simulation has been developed within the CMS experiment, which is between 100 and 1000 times faster than its...
    Go to contribution page
  140. Lorenzo Moneta (CERN), Prof. Nikolai GAGUNASHVILI (University of Akureyri, Iceland)
    24/03/2009, 08:00
    Event Processing
    poster
    Weighted histograms are often used for the estimation of probability density functions in High Energy Physics. The bin contents of a weighted histogram can be considered as a sum of random variables with a random number of terms. A generalization of Pearson's chi-square statistic for weighted histograms and for weighted histograms with unknown normalization has recently been proposed... (The classical statistic being generalized is recalled after this entry.)
    Go to contribution page
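    For context, the classical Pearson statistic being generalized reads, for an unweighted histogram with m bins, contents n_i, total N and hypothesized bin probabilities p_i:

        \chi^2 = \sum_{i=1}^{m} \frac{(n_i - N p_i)^2}{N p_i}

    In the weighted case the bin contents become sums of weights, with the bin variances estimated from the sums of squared weights; the exact generalized statistic, including the unknown-normalization case, is the subject of the cited proposal.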
  141. Mr Sverre Jarp (CERN)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    This talk will start by reminding the audience that Moore's law is very much alive: transistor counts will continue to double with every new silicon generation, every other year. Chip designers are therefore trying every possible "trick" to put the transistors to good use. The most notable one is to push more parallelism into each CPU: more and longer vectors, more parallel execution units, more...
    Go to contribution page
  142. Prof. Vladimir Ivantchenko (CERN, ESA)
    24/03/2009, 08:00
    Event Processing
    poster
    The process of multiple scattering of charged particles is an important component of Monte Carlo transport. At high energy it determines the deviation of particles from ideal tracks and limits the spatial resolution. Multiple scattering of low-energy electrons determines the energy response and resolution of electromagnetic calorimeters. Recent progress in the development of multiple scattering models within...
    Go to contribution page
  143. Ian Gable (University of Victoria)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    Virtualization technologies such as Xen can be used in order to satisfy the disparate and often incompatible system requirements of different user groups in shared-use computing facilities. This capability is particularly important for HEP applications, which often have restrictive requirements. The use of virtualization adds flexibility; however, it is essential that the virtualization...
    Go to contribution page
  144. Cano Ay (University of Goettingen)
    24/03/2009, 08:00
    Event Processing
    poster
    HepMCAnalyser is a tool for generator validation and comparisons. It is a stable, easy-to-use and extendable framework allowing for easy access to and integration of generator-level analysis. It comprises a class library with benchmark physics processes to analyse HepMC generator output and to fill ROOT histograms. A web interface is provided to display all or selected histograms, compare...
    Go to contribution page
  145. Dr Federico Calzolari (Scuola Normale Superiore - INFN Pisa)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    High availability has always been one of the main problems for a data center. Until now, high availability has been achieved by host-per-host redundancy, a highly expensive method in terms of hardware and human costs. A new approach to the problem can be offered by virtualization. Using virtualization, it is possible to achieve a redundancy system for all the services running on a data center. This...
    Go to contribution page
  146. Dr Steven Aplin (DESY)
    24/03/2009, 08:00
    Event Processing
    poster
    The International Linear Collider is proposed as the next large accelerator project in High Energy Physics. The ILD Detector Concept Study is one of three international groups working on designing a detector to be used at the ILC. The ILD Detector is being optimised to employ the so-called Particle Flow paradigm. Such an approach means that hardware alone will not be able to realise the full...
    Go to contribution page
  147. Simon Taylor (Jefferson Lab)
    24/03/2009, 08:00
    Event Processing
    poster
    The future GlueX detector in Hall D at Jefferson Lab is a large-acceptance (almost 4π) spectrometer designed to facilitate the study of the excitation of the gluonic field binding quark-antiquark pairs into mesons. A large solenoidal magnet will provide a 2.2 Tesla field that will be used to momentum-analyze the charged particles emerging from a liquid hydrogen target. The...
    Go to contribution page
  148. Kati Lassila-Perini (Helsinki Institute of Physics HIP)
    24/03/2009, 08:00
    Collaborative Tools
    poster
    Complete and up-to-date documentation is essential for efficient data analysis in a large and complex collaboration like CMS. Good documentation reduces the time spent in problem solving for users and software developers. The scientists in our research environment do not necessarily have the interests or skills of professional technical writers. This results in inconsistencies in the...
    Go to contribution page
  149. Radoslav Ivanov (Unknown)
    24/03/2009, 08:00
    Collaborative Tools
    poster
    The status of high-energy physics (HEP) information systems has been jointly analyzed by the libraries of CERN, DESY, Fermilab and SLAC. As a result, the four laboratories have started the INSPIRE project – a new platform built by moving the successful SPIRES features and content, curated at DESY, Fermilab and SLAC, into the open-source CDS Invenio digital library software that was developed...
    Go to contribution page
  150. Mrs Ianna Osborne (NORTHEASTERN UNIVERSITY)
    24/03/2009, 08:00
    Event Processing
    poster
    Geneva, 10 September 2008. The first beam in the Large Hadron Collider at CERN was successfully steered around the full 27 kilometers of the world's most powerful particle accelerator at 10h28 this morning. This historic event marks a key moment in the transition from over two decades of preparation to a new era of scientific discovery. (http://www.interactions.org/cms/?pid=1026796) From...
    Go to contribution page
  151. Dr Monica Verducci (INFN RomaI)
    24/03/2009, 08:00
    Event Processing
    poster
    ATLAS is a large multipurpose detector, presently in the final phase of construction at LHC, the CERN Large Hadron Collider accelerator. In ATLAS the muon detection is performed by a huge magnetic spectrometer, built with the Monitored Drift Tube (MDT) technology. It consists of more than 1,000 chambers and 350,000 drift tubes, which have to be controlled to a spatial accuracy better than 10...
    Go to contribution page
  152. Mitja Majerle (Nuclear Physics institute AS CR, Rez)
    24/03/2009, 08:00
    Event Processing
    poster
    Monte Carlo codes MCNPX and FLUKA are used to analyze the experiments on simplified Accelerator Driven Systems performed at the Joint Institute for Nuclear Research, Dubna. In these experiments, protons or deuterons with energies in the GeV range are directed onto thick lead targets surrounded by different moderators and neutron multipliers. Monte Carlo simulations of these...
    Go to contribution page
  153. Dr David Lawrence (Jefferson Lab)
    24/03/2009, 08:00
    Event Processing
    poster
    Multi-threading is a tool that is not only well suited to high-statistics event analysis, but is particularly useful for taking advantage of the next generation of many-core CPUs. The JANA event processing framework has been designed to implement multi-threading through the use of POSIX threads. Thoughtful implementation allows reconstruction packages to be developed that are thread-enabled... (A minimal worker-thread sketch follows this entry.)
    Go to contribution page
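    A minimal sketch of the pattern, assuming nothing of the JANA API itself: plain POSIX threads pull event numbers from a shared counter and process them independently.

        #include <pthread.h>
        #include <cstdio>

        static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
        static int next_event = 0;
        static const int n_events = 1000;

        // Each worker claims the next unprocessed event number under a
        // mutex, then reconstructs it independently of the other threads.
        static void* worker(void*) {
            for (;;) {
                pthread_mutex_lock(&mtx);
                int evt = (next_event < n_events) ? next_event++ : -1;
                pthread_mutex_unlock(&mtx);
                if (evt < 0) return nullptr;
                // ... run thread-enabled reconstruction on event 'evt' ...
            }
        }

        int main() {
            pthread_t t[4];
            for (pthread_t& h : t) pthread_create(&h, nullptr, worker, nullptr);
            for (pthread_t& h : t) pthread_join(h, nullptr);
            std::printf("processed %d events on 4 threads\n", n_events);
            return 0;
        }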
  154. Dr Rosy Nikolaidou (CEA Saclay)
    24/03/2009, 08:00
    Event Processing
    poster
    ATLAS is one of the four experiments at the Large Hadron Collider (LHC) at CERN. This experiment has been designed to study a large range of physics including searches for previously unobserved phenomena such as the Higgs Boson and super-symmetry. The ATLAS Muon Spectrometer (MS) is optimized to measure final state muons in a large momentum range, from a few GeV up to TeV. Its momentum...
    Go to contribution page
  155. Mr Igor Mandrichenko (FNAL)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    Fermilab is a high energy physics research lab that maintains a highly dynamic network which typically supports around 15,000 active nodes. Due to the open nature of the scientific research conducted at FNAL, the portion of the network used to support open scientific research requires high bandwidth connectivity to numerous collaborating institutions around the world, and must...
    Go to contribution page
  156. Dr Yaodong CHENG (Institute of High Energy Physics,Chinese Academy of Sciences)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    Some large experiments at IHEP will generate more than 5 petabytes of data in the next few years, which brings great challenges for data analysis and storage. CERN CASTOR version 1 was first deployed at IHEP in 2003, but it is now difficult to meet the new requirements. Taking into account the issues of management, commercial software etc., we decided not to upgrade CASTOR from version 1 to version 2....
    Go to contribution page
  157. Dr Peter Van Gemmeren (Argonne National Laboratory)
    24/03/2009, 08:00
    Event Processing
    poster
    In ATLAS software, TAGs are event metadata records that can be stored in various technologies, including ROOT files and relational databases. TAGs are used to identify and extract events that satisfy certain selection predicates, which can be coded as SQL-style queries. Several new developments in file-based TAG infrastructure are presented. TAG collection files support in-file metadata...
    Go to contribution page
  158. Andreu Pacheco (IFAE Barcelona), Davide Costanzo (University of Sheffield), Iacopo Vivarelli (INFN and University of Pisa), Manuel Gallas (CERN)
    24/03/2009, 08:00
    Event Processing
    poster
    The ATLAS experiment recently entered the data-taking phase, with the focus shifting from software development to validation. The ATLAS software has to be robust enough to process large datasets and to produce the high-quality output needed for the experiment's scientific exploitation. The validation process is discussed in this talk, starting from the validation of the nightly builds and...
    Go to contribution page
  159. Keith Rose (Dept. of Physics and Astronomy-Rutgers, State Univ. of New Jerse)
    24/03/2009, 08:00
    Event Processing
    poster
    The silicon pixel detector in CMS contains approximately 66 million channels, and will provide extremely high tracking resolution for the experiment. To ensure the data collected is valid, it must be monitored continuously at all levels of acquisition and reconstruction. The Pixel Data Quality Monitoring process ensures that the detector, as well as the data acquisition and reconstruction...
    Go to contribution page
  160. Dr Alessandra Doria (INFN Napoli)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    The large potential storage and computing power available in the modern grid and data centre infrastructures enable the development of the next generation grid-based computing paradigm, in which a large number of clusters are interconnected through high speed networks. Each cluster is composed of several or often hundreds of computers and devices each with its own specific role in the grid. In...
    Go to contribution page
  161. Dr Maria Grazia Pia (INFN GENOVA)
    24/03/2009, 08:00
    Event Processing
    poster
    An R&D project, named NANO5, has recently been launched at INFN to address fundamental methods in radiation transport simulation and revisit the Geant4 kernel design to cope with new experimental requirements. The project, which gathers an international collaborating team, focuses on simulation at different scales in the same environment. This issue requires novel methodological approaches to...
    Go to contribution page
  162. Mr Danilo Piparo (Universitaet Karlsruhe)
    24/03/2009, 08:00
    Event Processing
    poster
    RSC is a software framework based on the RooFit technology, developed for the CMS experiment community, whose scope is to allow the modelling and combination of multiple analysis channels together with statistical studies. This is performed through a variety of methods described in the literature, implemented as classes. The design of these classes is oriented to the...
    Go to contribution page
  163. Dr Kristian Harder (RAL)
    24/03/2009, 08:00
    Event Processing
    poster
    The luminosity upgrade of the Large Hadron Collider (SLHC) is foreseen to start from 2013. An eventual factor-of-ten increase in LHC statistics will have a major impact on the LHC physics program. However, the SLHC, as well as offering the possibility of increasing the physics potential, will create an extreme operating environment for the detectors, particularly the tracking devices and the...
    Go to contribution page
  164. Luca Dell'Agnello (INFN)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    In the framework of WLCG, the Tier-1 computing centres have very stringent requirements in the sector of data storage, in terms of size, performance and reliability. For some years, at the INFN-CNAF Tier-1, we have been using two distinct storage systems: Castor as the tape-based storage solution (also known as the D0T1 storage class in WLCG language) and the General Parallel...
    Go to contribution page
  165. Dr Szymon Gadomski (DPNC, University of Geneva)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    Computing for ATLAS in Switzerland has two Tier-3 sites with several years of experience, owned by the Universities of Berne and Geneva. They have been used for ATLAS Monte Carlo production, centrally controlled via NorduGrid, since 2005. The Tier-3 sites are under continuous development. In the case of Geneva, the proximity of CERN leads to additional use cases, related to the commissioning of...
    Go to contribution page
  166. Prof. Gordon Watts (UNIVERSITY OF WASHINGTON), Dr Laurent Vacavant (CPPM)
    24/03/2009, 08:00
    Event Processing
    poster
    The ATLAS detector, one of the two collider experiments at the Large Hadron Collider, will take high energy collision data for the first time in 2009. A general purpose detector, its physics program encompasses everything from Standard Model physics to specific searches for beyond-the-standard-model signatures. One important aspect of separating the signal from large Standard Model backgrounds...
    Go to contribution page
  167. John Chapman (Dept. of Physics, Cavendish Lab.)
    24/03/2009, 08:00
    Event Processing
    poster
    The ATLAS digitization project is steered by a top-level Python digitization package which ensures uniform and consistent configuration across the subdetectors. The properties of the digitization algorithms were tuned to reproduce the detector response seen in lab tests, test beam data and cosmic ray running. Dead channels and noise rates are read from database tables to reproduce conditions...
    Go to contribution page
  168. Simone Frosali (Dipartimento di Fisica - Universita di Firenze)
    24/03/2009, 08:00
    Event Processing
    poster
    The CMS Silicon Strip Tracker (SST) consists of 25,000 silicon microstrip sensors covering an area of 210 m² and 10 million readout channels. Starting from December 2007 the SST has been inserted and connected inside the CMS experiment, and since summer 2008 it has been commissioned using cosmic muons with and without magnetic field. During this data taking the performance of the SST has been...
    Go to contribution page
  169. Dr Gabriele Benelli (CERN PH Dept (for the CMS collaboration))
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    The demanding computing needs of the CMS experiment require thoughtful planning and management of its computing infrastructure. A key factor in this process is the use of realistic benchmarks when assessing the computing power of the different architectures available. In recent years a discrepancy has been observed between the cpu performance estimates given by the reference benchmark for HEP...
    Go to contribution page
  170. Roberto Valerio (Cinvestav Unidad Guadalajara)
    24/03/2009, 08:00
    Event Processing
    poster
    Decision tree learning constitutes a suitable approach to classification due to its ability to partition the input (variable) space into regions of class-uniform events, while providing a structure amenable to interpretation (as opposed to other methods such as neural networks). But an inherent limitation of decision tree learning is the progressive lessening of the statistical support of the...
    Go to contribution page
  171. Dr Ma Xiang (Institute of High energy Physics, Chinese Academy of Sciences)
    24/03/2009, 08:00
    Event Processing
    poster
    The BEPCII/BESIII (Beijing Electron Positron Collider / Beijing Spectrometer) complex was installed and operated successfully in July 2008 and has been in commissioning since September 2008. The luminosity has now reached 1.3×10^32 cm^-2 s^-1 at 489 mA × 530 mA with 90 bunches. About 13M ψ(2S) physics events have been collected by BESIII. The offline data analysis system of BESIII has been tested and operated to handle...
    Go to contribution page
  172. Rodrigues Figueiredo Eduardo (University Glasgow)
    24/03/2009, 08:00
    Event Processing
    poster
    The reconstruction of charged particles in the LHCb tracking systems consists of two parts. The pattern recognition links the signals belonging to the same particle. The track fitter, running after the pattern recognition, extracts the best parameter estimate from the reconstructed tracks. A dedicated Kalman fitter is used for this purpose. The track model employed in the fit is based on... (The standard filter equations are recalled after this entry.)
    Go to contribution page
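    The textbook Kalman-filter prediction and update underlying such a fit, in standard notation (F the track-model transport matrix, Q the process noise from material effects, H the measurement projection, m_k a hit with covariance V_k):

        x_{k|k-1} = F_k\, x_{k-1},
        \qquad
        C_{k|k-1} = F_k\, C_{k-1} F_k^{T} + Q_k,

        K_k = C_{k|k-1} H_k^{T} \left( V_k + H_k C_{k|k-1} H_k^{T} \right)^{-1},
        \qquad
        x_k = x_{k|k-1} + K_k \left( m_k - H_k\, x_{k|k-1} \right)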
  173. Xie Yuguang (Institute of High energy Physics, Chinese Academy of Sciences)
    24/03/2009, 08:00
    Event Processing
    poster
    The new spectrometer for the challenging physics in the tau-charm energy region, BESIII, has been constructed and has entered the commissioning phase at BEPCII, the upgraded e+e- collider in Beijing, China, with peak luminosity up to 10^33 cm^-2 s^-1. The BESIII muon detector will mainly contribute to distinguishing muons from hadrons, especially pions. The Resistive Plate Chambers (RPCs)...
    Go to contribution page
  174. Andrea Dotti (INFN and Università Pisa)
    24/03/2009, 08:00
    Event Processing
    poster
    The challenging experimental environment and the extreme complexity of modern high-energy physics experiments make online monitoring an essential tool to assess the quality of the acquired data. The Online Histogram Presenter (OHP) is the ATLAS tool to display histograms produced by the online monitoring system. In spite of the name, the Online Histogram Presenter is much more than just a...
    Go to contribution page
  175. Mr Gilbert Grosdidier (LAL/IN2P3/CNRS)
    24/03/2009, 08:00
    Hardware and Computing Fabrics
    poster
    The study and design of a very ambitious petaflop cluster exclusively dedicated to Lattice QCD simulations started in early ’08 among a consortium of 7 laboratories (IN2P3, CNRS, INRIA, CEA) and 2 SMEs. This consortium received a grant from the French ANR agency in July, and the PetaQCD project kickoff is expected to take place in January ’09. Building upon several years of fruitful...
    Go to contribution page
  176. Wouter Verkerke (NIKHEF)
    24/03/2009, 08:00
    Event Processing
    poster
    RooFit is a library of C++ classes that facilitate data modeling in the ROOT environment. Mathematical concepts such as variables, (probability density) functions and integrals are represented as C++ objects. The package provides a flexible framework for building complex fit models through classes that mimic math operators, and is straightforward to extend. For all constructed models RooFit... (A minimal usage example follows this entry.)
    Go to contribution page
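    A minimal usage example of this object style, using standard RooFit classes; the Gaussian toy model is illustrative, not taken from the poster.

        #include "RooRealVar.h"
        #include "RooGaussian.h"
        #include "RooDataSet.h"

        void fit_gauss() {
            // Variables and pdfs are C++ objects that mirror the mathematics.
            RooRealVar x("x", "observable", -10, 10);
            RooRealVar mean("mean", "mean", 0, -10, 10);
            RooRealVar sigma("sigma", "width", 2, 0.1, 10);
            RooGaussian gauss("gauss", "signal pdf", x, mean, sigma);
            // Generate a toy dataset from the model, then fit it back.
            RooDataSet* data = gauss.generate(x, 1000);
            gauss.fitTo(*data);
        }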
  177. Zachary Marshall (Caltech, USA & Columbia University, USA)
    24/03/2009, 08:00
    Event Processing
    poster
    The Simulation suite for ATLAS is in a mature phase ready to cope with the challenge of the 2009 data. The simulation framework already integrated in the ATLAS framework (Athena) offers a set of pre-configured applications for full ATLAS simulation, combined test beam setups, cosmic ray setups and old standalone test-beams. Each detector component was carefully described in all details and...
    Go to contribution page
  178. Fred Luehring (Indiana University)
    24/03/2009, 08:00
    Collaborative Tools
    poster
    The ATLAS Experiment, with over 2000 collaborators, needs efficient and effective means of communicating information. The Collaboration has been using the TWiki Web at CERN for over three years and now has more than 7000 web pages, some of which are protected. This number greatly exceeds the number of “static” HTML pages, and in the last year, there has been a significant migration to the...
    Go to contribution page
  179. Dr Peter Speckmayer (CERN)
    24/03/2009, 08:00
    Event Processing
    poster
    The toolkit for multivariate analysis, TMVA, provides a large set of advanced multivariate analysis techniques for signal/background classification. In addition, TMVA now also contains regression analysis, all embedded in a framework capable of handling the pre-processing of the data and the evaluation of the output, thus allowing a simple and convenient use of multivariate techniques. The... (A minimal booking example follows this entry.)
    Go to contribution page
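    A minimal booking sequence in the Factory style of TMVA at the time; the input trees, variable names and option strings below are placeholders, not taken from the poster.

        #include "TFile.h"
        #include "TTree.h"
        #include "TMVA/Factory.h"
        #include "TMVA/Types.h"

        void train(TTree* sigTree, TTree* bkgTree) {
            TFile* out = TFile::Open("TMVA.root", "RECREATE");
            TMVA::Factory factory("TMVAClassification", out, "!V");
            // Declare discriminating variables and the training samples.
            factory.AddVariable("var1", 'F');
            factory.AddVariable("var2", 'F');
            factory.AddSignalTree(sigTree, 1.0);
            factory.AddBackgroundTree(bkgTree, 1.0);
            factory.PrepareTrainingAndTestTree("", "SplitMode=Random");
            // Book one classifier; TMVA then trains, tests and evaluates it.
            factory.BookMethod(TMVA::Types::kBDT, "BDT", "NTrees=400");
            factory.TrainAllMethods();
            factory.TestAllMethods();
            factory.EvaluateAllMethods();
            out->Close();
        }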
  180. Mr Andrey Lebedev (GSI, Darmstadt / JINR, Dubna)
    24/03/2009, 08:00
    Event Processing
    poster
    The Compressed Baryonic Matter (CBM) experiment at the future FAIR accelerator at Darmstadt is being designed for a comprehensive measurement of hadron and lepton production in heavy-ion collisions from 8-45 AGeV beam energy, producing events with large track multiplicity and high hit density. The setup consists of several detectors including as tracking detectors the silicon tracking system...
    Go to contribution page
  181. Mr Bruno Lenzi (CEA - Saclay)
    24/03/2009, 08:00
    Event Processing
    poster
    Muons in the ATLAS detector are reconstructed by combining information from the Inner Detector and the Muon Spectrometer (MS), located in the outermost part of the experiment. Until they reach the MS, muons typically traverse 100 radiation lengths (X0) of material, most of it instrumented by the electromagnetic and hadronic calorimeters. The proper account for multiple scattering and...
    Go to contribution page
  182. Dr Ingo Fröhlich (Goethe-University)
    24/03/2009, 08:00
    Event Processing
    poster
    Because experimental setups are usually not able to cover the full solid angle, event generators are very important tools for experiments. Here, theoretical calculations provide valuable input as they can describe specific distributions for parts of the kinematic variables very precisely. The caveat is that an event has several degrees of freedom which can be...
    Go to contribution page
  183. Prof. Vladimir Ivantchenko (CERN, ESA)
    24/03/2009, 08:00
    Event Processing
    poster
    The standard electromagnetic physics packages of Geant4 are used for the simulation of particle transport and HEP detector response. The requirements on the precision and stability of the computations are strong; for example, the calorimeter response for ATLAS and CMS should be reproduced well within 1%. To keep and control the long-standing quality of the package, software suites for validation and...
    Go to contribution page
  184. Dr Tomasz Szumlak (Glasgow)
    24/03/2009, 08:00
    Event Processing
    poster
    The LHCb experiment is dedicated to studying CP violation and rare decays phenomena. In order to achieve these physics goals precise tracking and vertexing around the interaction point is crucial. This is provided by the VELO (VErtex LOcator) silicon detector. After digitization, large FPGAs are employed to run several algorithms to suppress noise and reconstruct clusters. This is...
    Go to contribution page
  185. Christian Helft (LAL/IN2P3/CNRS)
    24/03/2009, 08:00
    Collaborative Tools
    poster
    IN2P3, the institute bringing together HEP laboratories in France alongside CEA's IRFU, opened a videoconferencing service in 2002 based on an H.323 MCU. This service has grown steadily since then, serving other French communities besides the HEP one, to reach an average of about 30 different conferences a day. The relatively small amount of manpower that has been devoted to this project can be...
    Go to contribution page
  186. Mr Joao Fernandes (CERN)
    24/03/2009, 08:00
    Collaborative Tools
    poster
    Several recent initiatives have been put in place by the CERN IT Department to improve the user experience in remote dispersed meetings and remote collaboration at large in the LHC communities worldwide. We will present an analysis of the factors which were historically limiting the efficiency of remote dispersed meetings and describe the consequent actions which were undertaken at CERN to...
    Go to contribution page
  187. Dr Dantong Yu (BROOKHAVEN NATIONAL LABORATORY)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The TeraPaths, Lambda Station, and Phoebus projects were funded by the Department Of Energy's (DOE) network research program to support efficient, predictable, prioritized petascale data replication in modern high-speed networks, directly address the "last-mile" problem between local computing resources and WAN paths, and provide interfaces to modern, high performance hybrid networks with low...
    Go to contribution page
  188. Mr Eiji Inoue (KEK)
    26/03/2009, 08:00
    Online Computing
    poster
    We report on a DAQ system based on DAQ-Middleware. The system consists of a GUI client application and CC/NET readout programs. CC/NET is a CAMAC crate controller module created by us in joint research between the TOYO Corporation and KEK. Based on pipeline processing, CC/NET can operate at the speed limit of the CAMAC specification. It has a single-board computer on which the Linux operating system...
    Go to contribution page
  189. Dr Iosif Legrand (CALTECH)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    To satisfy the demands of data intensive applications it is necessary to move to far more synergetic relationships between data transfer applications and the network infrastructure. The main objective of the High Performance Data Transfer Service we present is to effectively use the available network infrastructure capacity and to coordinate, manage and control large data transfer tasks...
    Go to contribution page
  190. Dr Jingyan Shi (IHEP)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The operation of the BESIII experiment started in July 2008. More than 5 PB of data will be produced in the coming 5 years. To increase the efficiency of data analysis and simulation, it is sometimes necessary for physicists to split a long job into a number of small jobs and execute them in a distributed mode. A tool has been developed for BESIII job submission and management. With the tool,...
    Go to contribution page
  191. Dr Wenji Wu (Fermi National Accelerator Laboratory)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Distributed petascale computing involves analysis of massive data sets in a large-scale cluster computing environment. Its major concern is to efficiently and rapidly move the data sets to the computation and send results back to users or storage. However, the needed efficiency of data movement has hardly been achieved in practice. Present cluster operating systems usually are general-purpose...
    Go to contribution page
  192. Dr Gabriele Compostella (CNAF INFN), Dr Manoj Kumar Jha (INFN Bologna)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Being a large international collaboration established well before the full development of the Grid as the main computing tool for High Energy Physics, CDF has recently changed and improved its computing model, decentralizing some parts of it in order to be able to exploit the rising number of distributed resources available nowadays. Despite those efforts, while the large majority of CDF...
    Go to contribution page
  193. Stefano Bagnasco (INFN Torino)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Current Grid deployments for LHC computing (namely the WLCG infrastructure) do not allow efficient parallel interactive processing of data. In order to allow physicists to interactively access subsets of data (e.g. for algorithm tuning and debugging before running over a full dataset) parallel Analysis Facilities based on PROOF have been deployed by the ALICE experiment at CERN and elsewhere....
    Go to contribution page
  194. Mr Roland Moser (CERN and Technical University of Vienna)
    26/03/2009, 08:00
    Online Computing
    poster
    The CMS Data Acquisition System consists of O(1000) interdependent services. A monitoring system providing exception and application-specific data is essential for the operation of this cluster. Due to the number of services involved, the amount of monitoring data is higher than a human operator can handle efficiently. Thus moving the expert knowledge for error analysis from the operator to...
    Go to contribution page
  195. Mr Mario Lassnig (CERN & University of Innsbruck)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Unrestricted user behaviour is becoming one of the most critical properties in data intensive supercomputing. While policies can help to maintain a usable environment in clearly directed cases, it is important to know how users interact with the system so that it can be adapted dynamically, automatically and timely. We present a statistical and generative model that can replicate and simulate...
    Go to contribution page
  196. Dr Andrew Stephen McGough (Imperial College London)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The Grid as an environment for large-scale job execution is now moving beyond the prototyping phase to real deployments on national and international scales, providing real computational cycles to application scientists. As the Grid moves into production, characteristics of how users are exploiting the resources and how the resources are coping with production load are essential in...
    Go to contribution page
  197. Dr Vivian ODell (FNAL)
    26/03/2009, 08:00
    Online Computing
    poster
    The CMS event builder assembles events accepted by the first-level trigger and makes them available to the high-level trigger. The system needs to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GBytes/s originating from approximately 500 sources. This paper presents the chosen hardware and software architecture. The system consists of two stages: an initial... (The event-size arithmetic implied by these figures is worked out after this entry.)
    Go to contribution page
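    The quoted figures fix the scale of the problem: dividing the aggregate throughput by the trigger rate gives the average event size, and dividing that by the number of sources gives the typical per-source fragment size,

        \frac{100\ \mathrm{GB/s}}{100\ \mathrm{kHz}} = 1\ \mathrm{MB/event},
        \qquad
        \frac{1\ \mathrm{MB}}{500\ \mathrm{sources}} \approx 2\ \mathrm{kB/fragment}.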
  198. Dr Wainer Vandelli (Conseil Europeen Recherche Nucl. (CERN))
    26/03/2009, 08:00
    Online Computing
    poster
    The ATLAS DataFlow infrastructure is responsible for the collection and conveyance of event data from the detector front-end electronics to the mass storage. Several optimized and multi-threaded applications fulfill this purpose operating over a multi-stage Gigabit Ethernet network which is the backbone of the ATLAS Trigger and Data Acquisition System. The system must be able to efficiently...
    Go to contribution page
  199. Raquel Pezoa Rivera (Univ. Tecnica Federico Santa Maria (UTFSM))
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The ATLAS Distributed Computing system provides a set of tools and libraries enabling data movement, processing and analysis in a grid environment. While it was reaching a state of maturity high enough for real data taking, it became clear that one component was missing: a service exposing consistent information regarding site topology, services and resources from all three distinct ATLAS grids (EGEE,...
    Go to contribution page
  200. Denis Oliveira Damazio (Brookhaven National Laboratory)
    26/03/2009, 08:00
    Online Computing
    poster
    The ATLAS detector is undergoing an intense commissioning effort with cosmic rays, preparing for the first LHC collisions next spring. Combined runs with all of the ATLAS subsystems are being taken in order to evaluate the detector performance. This is also a unique opportunity for the trigger system to be studied with different detector operation modes, such as different event rates and...
    Go to contribution page
  201. Dr Luca Fiorini (IFAE Barcelona)
    26/03/2009, 08:00
    Online Computing
    poster
    TileCal is the barrel hadronic calorimeter of the ATLAS experiment, presently in an advanced state of commissioning with cosmic and single-beam data at the LHC accelerator. The complexity of the experiment, the number of electronics channels and the high rate of acquired events require a systematic strategy for preparing the system for data taking. This is done through a precise...
    Go to contribution page
  202. Mr Costin Grigoras (CERN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    A complex software environment such as the ALICE Computing Grid infrastructure requires permanent control and management for the large set of services involved. Automating control procedures reduces the human interaction with the various components of the system and yields better availability of the overall system. In this paper we will present how we used the MonALISA framework to gather,...
    Go to contribution page
  203. Hongyu ZHANG (Experimental Physics Center, Experimental Physics Center, Chinese Academy of Sciences, Beijing, China)
    26/03/2009, 08:00
    Online Computing
    poster
    BEPCII is designed for a peak luminosity of 10^33 cm^-2 s^-1. After the Level 1 trigger, the event rate is estimated to be around 4000 Hz at the J/ψ peak. A pipelined front-end electronics system has been designed and developed, and the BESIII DAQ system has been built to satisfy the requirements of event readout and processing at such a high event rate. The BESIII DAQ system consists of about 100 high...
    Go to contribution page
  204. Riccardo Zappi (INFN-CNAF)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    In the storage model adopted by WLCG, the quality of service for a storage capacity provided by an SRM-based service is described by the concept of Storage Class. In this context, two parameters are relevant: the Retention Policy and the Access Latency. With the advent of cloud-based resources, virtualized storage capabilities are available like the Amazon Simple Storage Service (Amazon S3)....
    Go to contribution page
  205. Dr Daniele Bonacorsi (CMS experiment / INFN-CNAF, Bologna, Italy)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    During February and May 2008, CMS participated in the Combined Computing Readiness Challenge (CCRC'08) together with all the other LHC experiments. The purpose of this world-wide exercise was to check the readiness of the computing infrastructure for LHC data taking. Another set of major CMS tests, called the Computing, Software and Analysis challenge (CSA'08), as well as CMS cosmic runs, were also...
    Go to contribution page
  206. Dr Timm Steinbeck (Institute of Physics)
    26/03/2009, 08:00
    Online Computing
    poster
    For the ALICE heavy-ion experiment a large cluster will be used to perform the last triggering stages in the High Level Trigger. For the first year of operation the cluster consists of about 100 SMP nodes with 4 or 8 CPU cores each, to be increased to more than 1000 nodes for the later years of operation. During the commissioning phases of the detector, the preparations for first LHC...
    Go to contribution page
  207. Claudia Ciocca (INFN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    In the framework of WLCG, Tier1s need to manage large volumes of data in the PB range. Moreover, they need to be able to exchange data with CERN and with the other centres (both Tier1s and Tier2s) at a sustained throughput of the order of hundreds of MB/s over the WAN, while at the same time offering fast and reliable access to the computing farm. In order to cope with...
    Go to contribution page
  208. Dr Volker Friese (GSI Darmstadt)
    26/03/2009, 08:00
    Online Computing
    poster
    The Compressed Baryonic Matter experiment (CBM) is one of the core experiments to be operated at the future FAIR accelerator complex in Darmstadt, Germany, from 2014 on. It will investigate heavy-ion collisions at moderate beam energies but extreme interaction rates, which give access to extremely rare probes such as open charm or charmonium decays near the production threshold. The high...
    Go to contribution page
  209. Daniel Charles Bradley (High Energy Physics)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    A number of recent enhancements to the Condor batch system have been stimulated by the challenges of LHC computing. The result is a more robust, scalable, and flexible computing platform. One product of this effort is the Condor JobRouter, which serves as a high-throughput scheduler for feeding multiple (e.g. grid) queues from a single input job queue. We describe its principles and how it...
    Go to contribution page
  210. Vardan Gyurjyan (JEFFERSON LAB)
    26/03/2009, 08:00
    Online Computing
    poster
    The ever-growing heterogeneity of physics experiment control systems presents a real challenge to uniformly describing control system components and their operational details. Control Oriented Ontology Language (COOL) is an experiment control meta-data modeling language that provides a generic means for the concise and uniform representation of physics experiment control processes and components,...
    Go to contribution page
  211. Xavier Mol (Forschungszentrum Karlsruhe)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    D-Grid is the German initiative for building a national computing grid. When its customers want to work within the German grid, they need dedicated software, called ‘middleware’. As D-Grid site administrators are free to choose their middleware according to the needs of their users, the project ‘DGI (D-Grid Integration) reference installation’ was launched. Its purpose is to assist the site...
    Go to contribution page
  212. Mr Antonio Delgado Peris (CIEMAT)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Grid infrastructures constitute nowadays the core of the computing facilities of the biggest LHC experiments. These experiments produce and manage petabytes of data per year and run thousands of computing jobs every day to process that data. It is the duty of metaschedulers to allocate the tasks to the most appropriate resources at the proper time. Our work reviews the policies that have...
    Go to contribution page
  213. Peter Onyisi (University of Chicago)
    26/03/2009, 08:00
    Online Computing
    poster
    At the ATLAS experiment, the Detector Control System (DCS) is used to oversee detector conditions and supervise the running of equipment. It is essential that information from the DCS about the status of individual sub-detectors be extracted and taken into account when determining the quality of data taken and its suitability for different analyses. DCS information is written online to...
    Go to contribution page
  214. Mr Yuriy Ilchenko (SMU)
    26/03/2009, 08:00
    Online Computing
    poster
    The start of collisions at the LHC brings with it much excitement and many unknowns. It’s essential at this point in the experiment to be prepared with user-friendly tools to quickly and efficiently determine the quality of the data. Easy visualization of data for the shift crew and experts is one of the key factors in the data quality assessment process. The Data Quality Monitoring...
    Go to contribution page
  215. Dr Hiroyuki Matsunaga (ICEPP, University of Tokyo)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    A Tier-2 regional center is running at the University of Tokyo in Japan. This center receives a large amount of ATLAS experiment data from the Tier-1 center in France. Although the link between the two centers has 10 Gbps bandwidth, it is not a dedicated link but is shared with other traffic, and the round-trip time is 280 ms. It is not easy to exploit the available bandwidth... (a worked bandwidth-delay estimate follows this entry).
    Go to contribution page
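A worked estimate of why such a link is hard to fill, using only the figures quoted in the abstract (10 Gbps, 280 ms RTT): the bandwidth-delay product sets how much data TCP must keep in flight, and dividing by a typical per-stream window suggests how many parallel streams are needed. The 4 MB window is an assumed, illustrative value.

```python
# Bandwidth-delay product for the 10 Gb/s, 280 ms path quoted in the abstract:
# this is the amount of data TCP must keep in flight to fill the link.
bandwidth_bps = 10e9   # 10 Gbps link
rtt_s = 0.280          # 280 ms round-trip time
bdp_bytes = bandwidth_bps * rtt_s / 8
print(f"required in-flight data: {bdp_bytes / 1e6:.0f} MB")  # 350 MB

# With an assumed per-stream TCP window of 4 MB, the number of parallel
# streams needed to approach the full bandwidth is roughly:
window_bytes = 4e6
print(f"parallel streams needed: {bdp_bytes / window_bytes:.0f}")  # ~88
```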
  216. Mr Vladlen Timciuc (California Institute of Technology)
    26/03/2009, 08:00
    Online Computing
    poster
    The CMS detector at LHC is equipped with a high precision electromagnetic crystal calorimeter (ECAL). The crystals experience a transparency change when exposed to radiation during LHC operation, which recovers in the absence of irradiation on a time scale of hours. This change of the crystal response is monitored with a laser system which performs a transparency measurement of each crystal of...
    Go to contribution page
  217. Dr Silke Halstenberg (Karlsruhe Institute of Technology)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The dCache installation at GridKa, the German Tier-1 center, is ready for LHC data taking. After years of tuning and dry runs, several software and operational bottlenecks have been identified. This contribution describes several procedures to improve the stability and reliability of the Tier-1 storage setup. These range from redundant hardware and disaster planning to fine-grained monitoring...
    Go to contribution page
  218. Mr Tigran Mkrtchyan Mkrtchyan (Deutsches Elektronen-Synchrotron DESY)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Starting in spring 2009, all WLCG data management services have to be ready and prepared to move terabytes of data from CERN to the Tier-1 centers worldwide, and from the Tier-1s to their corresponding Tier-2s. Reliable file transfer services like FTS, on top of the SRM v2.2 protocol, are playing a major role in this game. Nevertheless, moving large chunks of data is only part of the challenge....
    Go to contribution page
  219. Dr Paul Millar (DESY)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    In the gLite grid model a site will typically have a Storage Element (SE) that has no direct mechanism for updating any central or experiment-specific catalogues. This loose coupling was a deliberate decision that simplifies SE design; however, a consequence of this is that the catalogues may provide an incorrect view of what is stored on an SE. In this paper, we present work to allow... (a toy consistency check follows this entry).
    Go to contribution page
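A minimal sketch of the kind of consistency check this work motivates: compare a dump of what the SE actually holds with the catalogue's view of it, yielding "dark data" and lost files. The file lists below are stand-ins for real SRM and catalogue queries.

```python
# Minimal SE/catalogue consistency check: set comparison between a dump of
# the files actually on the Storage Element and the catalogue's view.
se_contents = {"/atlas/raw/run1.root", "/atlas/raw/run2.root", "/atlas/tmp/orphan.root"}
catalogue_view = {"/atlas/raw/run1.root", "/atlas/raw/run2.root", "/atlas/raw/lost.root"}

dark_data = se_contents - catalogue_view    # on disk, unknown to the catalogue
lost_files = catalogue_view - se_contents   # catalogued, but not on the SE

print("dark data (candidates for cleanup):", sorted(dark_data))
print("lost files (catalogue entries to repair):", sorted(lost_files))
```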
  220. Dr James Letts (Department of Physics-Univ. of California at San Diego (UCSD))
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The CMS experiment at CERN is preparing for LHC data taking in several computing preparation activities. In early 2007 a traffic load generator infrastructure for distributed data transfer tests was designed and deployed to equip the WLCG Tiers which support the CMS Virtual Organization with a means for debugging, load-testing and commissioning data transfer routes among CMS Computing Centres....
    Go to contribution page
  221. Dr Sergio Andreozzi (INFN-CNAF)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The GLUE 2.0 specification is an upcoming OGF specification for standards-based Grid resource characterization to support functionalities such as discovery, selection and monitoring. An XML Schema realization of GLUE 2.0 is available; nevertheless, Grids still lack a standard information service interface. Therefore, there is no uniformly agreed solution to expose resource descriptions. On... (a simplified parsing sketch follows this entry).
    Go to contribution page
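A sketch of consuming a GLUE-2.0-flavoured XML resource description with only the Python standard library. The element and attribute names below are simplified assumptions; the normative GLUE 2.0 XML Schema is considerably richer.

```python
# Parse a simplified, GLUE-2.0-style service description.
# Element/attribute names are illustrative, not the normative schema.
import xml.etree.ElementTree as ET

doc = """
<Services>
  <Service ID="urn:site-a:srm">
    <Type>org.glite.srm</Type>
    <Endpoint URL="httpg://se.site-a.example:8443/srm/managerv2"/>
  </Service>
  <Service ID="urn:site-a:ce">
    <Type>org.glite.ce.CREAM</Type>
    <Endpoint URL="https://ce.site-a.example:8443/ce-cream"/>
  </Service>
</Services>
"""

root = ET.fromstring(doc)
for svc in root.findall("Service"):
    stype = svc.findtext("Type")
    url = svc.find("Endpoint").get("URL")
    print(f"{svc.get('ID')}: {stype} at {url}")
```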
  222. Dr Vincenzo Spinoso (INFN, Bari)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Together with the start of LHC, high-energy physics researchers will start massive usage of LHC Tier2s. It is essential to supply physics user groups with a simple and intuitive “user-level” summary of their associated T2 services’ status, showing for example available, busy and unavailable resources. At the same time, site administrators need “technical level” monitoring, namely a view of...
    Go to contribution page
  223. Gabriel Caillat (LAL, Univ. Paris Sud, IN2P3/CNRS)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Desktop grids, such as XtremWeb and BOINC, and service grids, such as EGEE, are two different approaches for science communities to gather computing power from a large number of computing resources. Nevertheless, little work has been done to combine these two Grid technologies in order to establish a seamless and vast grid resource pool. In this paper we present the EGEE service grid, the...
    Go to contribution page
  224. Mr Michal ZEROLA (Nuclear Physics Inst., Academy of Sciences)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    For the past decade, HENP experiments have been heading towards a distributed computing model in an effort to concurrently process tasks over enormous data sets that have been increasing in size as a function of time. In order to optimize all available (geographically spread) resources and minimize the processing time, it is necessary also to address the question of efficient data transfers and...
    Go to contribution page
  225. Dr Simone Campana (CERN/IT/GS)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The ATLAS Experiment at CERN has developed an automated system for the distribution of simulated and detector data. This system, which partially consists of various ATLAS-specific services, relies strongly on the WLCG service infrastructure, at the level of middleware components, service deployment and operations. Because of the complexity of the system and its highly distributed nature, a...
    Go to contribution page
  226. Julia Andreeva (CERN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    One of the most important conclusions from the analysis of the CCRC08 results and from operational experience after CCRC08 is that the LHC experiment-specific monitoring systems are the main sources of monitoring information. They are widely used by people taking computing shifts, and they are the first to detect problems of various natures. Though these systems provide rather...
    Go to contribution page
  227. Dr Chadwick Keith (Fermilab)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The architecture of FermiGrid facilitates seamless interoperation of the multiple heterogeneous Fermilab resources with the resources of the other...
    Go to contribution page
  228. Pablo Saiz (CERN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The LHC experiments are going to start collecting data during the spring of 2009. The number of people and centers involved in such experiments sets a new record in the physics community. For instance, in CMS there are more than 3600 physicists and more than 60 centers distributed all over the world. Managing such a large number of distributed sites and services is not a trivial task....
    Go to contribution page
  229. Dr Armin Scheurer (Karlsruhe Institute of Technology)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The CMS computing model anticipates various hierarchically linked tier centres to counter the challenges provided by the enormous amounts of data which will be collected by the CMS detector at the Large Hadron Collider, LHC, at CERN. During the past years, various computing exercises were performed to test the readiness of the computing infrastructure, the Grid middleware and the experiment's...
    Go to contribution page
  230. Mr Philippe Canal (Fermilab)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The Open Science Grid's usage accounting solution is a system known as "Gratia." Now that it has been deployed successfully, the Open Science Grid's next accounting challenge is to correctly interpret and make the best possible use of the information collected. One such issue is: "Did we use, and/or get credit for, the resource we think we used?" Another example is the problem of ensuring that...
    Go to contribution page
  231. Mr David Collados Polidura (CERN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The Worldwide LHC Computing Grid (WLCG) is based on a four-tiered model that comprises collaborating resources from different grid infrastructures such as EGEE and OSG. While grid middleware provides core services on a variety of platforms, monitoring tools like Gridview, SAM, Dashboards and GStat are being used for monitoring, visualization and evaluation of the WLCG infrastructure. The...
    Go to contribution page
  232. Andrew McNab (Unknown)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    We present an overview of the current status of the GridSite toolkit, describing the new security model for interactive and programmatic uses introduced in the last year. We discuss our experiences of implementing these internal changes and how they have been promoted by requirements from users and wider security trends in Grids (such as CSRF). Finally, we explain how these have improved the...
    Go to contribution page
  233. Mr Laurence Field (CERN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Authors: Laurence Field, Felix Ehm, Joanna Huang, Min Tsai. Grid Information Systems are mission-critical components in today's production grid infrastructures. They enable users, applications and services to discover which services exist in the infrastructure, as well as further information about the service structure and state. It is therefore important that the information system components...
    Go to contribution page
  234. Dantong Yu (BNL)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Modern nuclear and high energy experiments yield large amounts of data and thus require efficient, high-capacity storage and transfer. BNL, the hosting site for the RHIC experiments and the US center for LHC ATLAS, plays a pivotal role in transferring data to and from other sites in the US and around the world in a tiered fashion for data distribution and processing. Each component in the...
    Go to contribution page
  235. Gyoergy Vesztergombi (Res. Inst. Particle & Nucl. Phys. - Hungarian Academy of Science)
    26/03/2009, 08:00
    Online Computing
    poster
    An unusually high-intensity (10**11 protons/sec) beam is planned to be extracted for fixed targets at the FAIR accelerator, at energies up to 90 GeV. Using this beam, the FAIR-CBM experiment provides a unique high-luminosity facility to measure high-pT phenomena with unprecedented sensitivity, exceeding that of previous experiments by orders of magnitude. With a 1% target, the expected minimum bias event...
    Go to contribution page
  236. Dr Christopher Jung (Forschungszentrum Karlsruhe)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Most Tier-1 centers of the LHC Computing Grid use dCache as their storage system. dCache uses a cost model incorporating CPU and space costs for the distribution of data on its disk pools (a toy version of such a cost model follows this entry). Storage resources at Tier-1 centers are usually upgraded once or twice a year according to given milestones. One of the effects of this procedure is the accumulation of heterogeneous hardware resources....
    Go to contribution page
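A toy version of a dCache-style cost model, assuming a pool's combined cost reflects how full it is (space cost) and how busy it is (CPU cost), with new data going to the cheapest pool. The weights and pool figures are invented for the example.

```python
# Toy pool-selection cost model in the spirit of dCache: combine a space
# cost and a load ("CPU") cost per pool, write to the cheapest pool.
pools = {
    "pool1": {"free_fraction": 0.60, "active_movers": 12, "max_movers": 100},
    "pool2": {"free_fraction": 0.15, "active_movers": 5,  "max_movers": 100},
    "pool3": {"free_fraction": 0.40, "active_movers": 80, "max_movers": 100},
}

SPACE_WEIGHT, CPU_WEIGHT = 1.0, 1.0  # hypothetical tuning knobs

def combined_cost(p):
    space_cost = 1.0 - p["free_fraction"]            # fuller pool -> pricier
    cpu_cost = p["active_movers"] / p["max_movers"]  # busier pool -> pricier
    return SPACE_WEIGHT * space_cost + CPU_WEIGHT * cpu_cost

best = min(pools, key=lambda name: combined_cost(pools[name]))
for name in pools:
    print(f"{name}: cost={combined_cost(pools[name]):.2f}")
print("write goes to:", best)
```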
  237. Timur Perelmutov (FERMI NATIONAL ACCELERATOR LABORATORY)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The dCache disk caching file system has been chosen by a majority of LHC Experiments' Tier 1 centers for their data storage needs. It is also deployed at many Tier 2 centers. In preparation for the LHC startup, very large installations of dCache - up to 3 Petabytes of disk - have already been deployed, and the systems have operated at transfer rates exceeding 2000 MB/s over the WAN. As the LHC...
    Go to contribution page
  238. Ms Giulia Taurelli (CERN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    HSM systems such as CERN's Advanced STORage manager (CASTOR) [1] are responsible for storing petabytes of data, which is first cached on disk and then persistently stored on tape media. The contents of these tapes are regularly repacked from older, lower-density media to new-generation, higher-density media in order to free up physical space and ensure long-term data integrity and...
    Go to contribution page
  239. Mr laurence field (cern)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Authors: Laurence Field, Markus Schulz, Felix Ehm, Tim Dyce. Grid Information Systems are mission-critical components in today's production grid infrastructures. They enable users, applications and services to discover which services exist in the infrastructure, as well as further information about the service structure and state. As the Grid Information System is pervasive throughout the...
    Go to contribution page
  240. Dr Tony Wildish (PRINCETON)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    PhEDEx, the CMS data-placement system, uses the FTS service to transfer files. Towards the end of 2007 PhEDEx was beginning to show some serious scaling issues, with excessive numbers of processes on the site VOBOX running PhEDEx, poor efficiency in the use of FTS job slots, high latency for failure retries, and other problems. The core PhEDEx architecture was changed in May 2008 to eliminate...
    Go to contribution page
  241. Dr Sergey Linev (GSI Darmstadt)
    26/03/2009, 08:00
    Online Computing
    poster
    New experiments at FAIR, such as CBM, require new concepts of data acquisition systems in which, instead of a central trigger, self-triggered electronics with time-stamped readout are used. A first prototype of such a system was implemented in the form of a CBM readout controller (ROC) board, which is designed to read time-stamped data from a front-end board equipped with nXYTER chips and transfer that...
    Go to contribution page
  242. Daniel Bradley (University of Wisconsin)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Physicists have access to thousands of CPUs in grid federations such as OSG and EGEE. With the start-up of the LHC, it is essential for individuals or groups of users to wrap together available resources from multiple sites across multiple grids under a higher user-controlled layer in order to provide a homogeneous pool of available resources. One such system is glideinWMS, which is based on...
    Go to contribution page
  243. Sergey Kalinin (Universite Catholique de Louvain)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    With the Large Hadron Collider (LHC) at CERN, Geneva, having begun operation in September, the large-scale computing grid LCG (LHC Computing Grid) is meant to process and store the large amounts of data created in simulating, measuring and analyzing particle physics experimental data. Data acquired by ATLAS, one of the four big experiments at the LHC, are analyzed using compute jobs running...
    Go to contribution page
  244. Lev Shamardin (Scobeltsyn Institute of Nuclear Physics, Moscow State University (SINP MSU))
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Grid systems are used for calculations and data processing in various applied areas such as biomedicine, nanotechnology and materials science, cosmophysics and high energy physics, as well as in a number of industrial and commercial areas. The traditional method of executing jobs in a grid is to run them directly on the cluster nodes. This limits the choice of the operational environment...
    Go to contribution page
  245. Somogyi Peter (Technical University of Budapest)
    26/03/2009, 08:00
    Online Computing
    poster
    LHCb is one of the four major experiments under completion at the Large Hadron Collider (LHC). Monitoring the quality of the acquired data is important, because it allows the verification of the detector performance. Anomalies, such as missing values or unexpected distributions can be indicators of a malfunctioning detector, resulting in poor data quality. Spotting faulty components can be...
    Go to contribution page
  246. Dr Andrea Chierici (INFN-CNAF)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Quattor is a system administration toolkit providing a powerful, portable, and modular set of tools for the automated installation, configuration, and management of clusters and farms. It is developed as a community effort and provided as open-source software. Today, quattor is being used to manage at least 10 separate infrastructures spread across Europe. These range from massive single-site...
    Go to contribution page
  247. Mr Adolfo Vazquez (Universidad Complutense de Madrid)
    26/03/2009, 08:00
    Distributed Processing and Analysis
    poster
    The MAGIC telescope, a 17-meter Cherenkov telescope located on La Palma (Canary Islands), is dedicated to the study of the universe in Very High Energy gamma-rays. These particles arrive at the Earth's atmosphere producing atmospheric showers of secondary particles that can be detected on the ground through their Cherenkov radiation. MAGIC relies on a large number of Monte Carlo simulations for the...
    Go to contribution page
  248. Jeremiah Jet Goodson (Department of Physics - State University of New York (SUNY))
    26/03/2009, 08:00
    Online Computing
    poster
    The ATLAS detector at the Large Hadron Collider is expected to collect an unprecedented wealth of new data at a completely new energy scale. In particular its Liquid Argon electromagnetic and hadronic calorimeters will play an essential role in measuring final states with electrons and photons and in contributing to the measurement of jets and missing transverse energy. Efficient monitoring...
    Go to contribution page
  249. Luciano Orsini (CERN)
    26/03/2009, 08:00
    Online Computing
    poster
    The CMS data acquisition system comprises O(10000) interdependent services that need to be monitored in near real-time. The ability to monitor a large number of distributed applications accurately and effectively is of paramount importance for operation. Application monitoring entails the collection of a large number of simple and composed values made available by the software...
    Go to contribution page
  250. Dr Raja Nandakumar (Rutherford Appleton Laboratory)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    DIRAC, the LHCb community Grid solution, is intended to reliably run large data mining activities. The DIRAC system consists of various services (which wait to be contacted to perform actions) and agents (which carry out periodic activities) to direct jobs as required. An important part of ensuring the reliability of the infrastructure is the monitoring and logging of these DIRAC distributed...
    Go to contribution page
  251. Mr Daniel Filipe Rocha Da Cunha Rodrigues (CERN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The MSG (Messaging System for the Grid) is a set of tools that make a Message Oriented platform available for communication between grid monitoring components. It has been designed specifically to work with the EGEE operational tools and acts as an integration platform to improve the reliability and scalability of the existing operational services. MSG is a core component as WLCG monitoring...
    Go to contribution page
  252. Mr Andrey Bobyshev (FERMILAB)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    There are a number of active projects to design and develop a data control plane capability that steers traffic onto alternate network paths, instead of the default path provided through standard IP connectivity. Lambda Station, developed by Fermilab and Caltech, is one example of such a solution, and is currently deployed at the US CMS Tier1 facility at Fermilab and various Tier2 sites. When the...
    Go to contribution page
  253. Vakhtang Tsiskaridze (Tbilisi State University, Georgia)
    26/03/2009, 08:00
    Online Computing
    poster
    At present, the Tile Calorimeter ROD DSPs calculate amplitude, time and quality factor (QF) parameters at a 100 kHz rate using the Optimal Filtering reconstruction method (a sketch of the idea follows this entry). If the QF is good enough, only amplitude, time and QF are stored; otherwise the data quality is considered bad and it is proposed to store the raw data for further studies. Without any compression, bandwidth limitation...
    Go to contribution page
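A sketch of the Optimal Filtering idea: amplitude and time come from linear combinations of the digitized samples, and the quality factor measures the residual against the expected pulse shape. The weights and pulse shape below are placeholders; real coefficients are derived from the measured pulse shape and the noise autocorrelation.

```python
# Optimal Filtering sketch: A = sum(a_i * s_i), t = sum(b_i * s_i) / A,
# QF = sum((s_i - A * g_i)^2). All coefficients here are placeholders.
samples = [2.0, 10.0, 48.0, 30.0, 12.0, 4.0, 2.0]   # pedestal-subtracted ADC counts
a = [0.0, 0.1, 0.8, 0.3, 0.1, 0.0, 0.0]             # amplitude weights (placeholder)
b = [0.0, -0.3, 0.0, 0.25, 0.1, 0.0, 0.0]           # time weights (placeholder)
g = [0.04, 0.21, 1.0, 0.63, 0.25, 0.08, 0.04]       # normalized pulse shape (placeholder)

amplitude = sum(ai * si for ai, si in zip(a, samples))
time = sum(bi * si for bi, si in zip(b, samples)) / amplitude
qf = sum((si - amplitude * gi) ** 2 for si, gi in zip(samples, g))

print(f"A = {amplitude:.1f} ADC counts, t = {time:.2f}, QF = {qf:.1f}")
# The DSP keeps only (A, t, QF) when QF is good, and ships raw samples otherwise.
```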
  254. Daniele Cesini (INFN CNAF)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The Workload Management System is the gLite service supporting the distributed production and analysis activities of various HEP experiments. It is responsible for dispatching computing jobs to remote computing facilities by matching job requirements against the resource status information collected from the Grid information services (a toy matchmaker follows this entry). Given the distributed and heterogeneous nature of the Grid, the...
    Go to contribution page
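A toy matchmaker in the spirit of the WMS: filter the resources that satisfy the job's requirements, then rank the survivors (here, by free slots). The attribute names are invented for the example and are not JDL or ClassAd syntax.

```python
# Toy matchmaking: requirements filter, then a ranking step.
resources = [
    {"name": "ce-a", "os": "SL4", "free_slots": 120, "vo": {"cms", "atlas"}},
    {"name": "ce-b", "os": "SL5", "free_slots": 15,  "vo": {"cms"}},
    {"name": "ce-c", "os": "SL4", "free_slots": 0,   "vo": {"lhcb"}},
]
job = {"os": "SL4", "min_slots": 1, "vo": "cms"}

def matches(res, job):
    return (res["os"] == job["os"]
            and res["free_slots"] >= job["min_slots"]
            and job["vo"] in res["vo"])

candidates = [r for r in resources if matches(r, job)]
best = max(candidates, key=lambda r: r["free_slots"], default=None)
print("dispatched to:", best["name"] if best else "no matching resource")
```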
  255. Chendong FU (IHEP, Beijing)
    26/03/2009, 08:00
    Online Computing
    poster
    BEPCII is the electron-positron collider with the highest luminosity in the tau-charm energy region, and BESIII is the corresponding detector with greatly improved detection capability. For this accelerator and detector, the event trigger rate is rather high. In order to reduce the background level and the recording burden on the computers, an online event filtering algorithm has been established. Such an...
    Go to contribution page
  256. Dr Greig Cowan (University of Edinburgh)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The ScotGrid distributed Tier-2 now provides more than 4 MSI2K and 500 TB for LHC computing, spread across three sites at Durham, Edinburgh and Glasgow. Tier-2 sites have a dual role to play in the computing models of the LHC VOs. Firstly, their CPU resources are used for the generation of Monte Carlo event data. Secondly, the end user analysis object data is distributed to the site...
    Go to contribution page
  257. Dr Silvio Pardi (INFN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The quality of the connectivity provided by the network infrastructure of a Grid is a crucial factor in guaranteeing the accessibility of Grid services, scheduling processing and data-transfer activity on the Grid efficiently, and meeting QoS expectations. Yet most Grid applications do not take into consideration the expected performance of the network resources they plan to use. In this paper we...
    Go to contribution page
  258. Dr Jose Antonio Coarasa Perez (Department of Physics - Univ. of California at San Diego (UCSD))
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The Open Science Grid middleware stack has seen intensive development over the past years and has become more and more mature, as increasing numbers of sites have been successfully added to the infrastructure. Considerable effort has been put into consolidating this infrastructure and enabling it to provide a high degree of scalability, reliability and usability. A thorough evaluation of its...
    Go to contribution page
  259. Dr Max Böhm (EDS / CERN openlab)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    GridMap (http://gridmap.cern.ch) was introduced to the community at the EGEE'07 conference as a new monitoring tool that provides better visualization of, and insight into, the state of the Grid than previous tools. Since then it has become quite popular in the grid community. Its two-dimensional graphical visualization technique based on treemaps, coupled with a simple, responsive AJAX-based rich...
    Go to contribution page
  260. Dr Maxim Potekhin (BROOKHAVEN NATIONAL LABORATORY)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The Panda Workload Management System is designed around the concept of the Pilot Job - a "smart wrapper" for the payload executable that can probe the environment on the remote worker node before pulling down the payload from the server and executing it. Such a design allows for improved logging and monitoring capabilities as well as flexibility in Workload Management (a skeleton of the pilot idea follows this entry). In the Grid...
    Go to contribution page
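A skeleton of the pilot-job pattern described above: probe the worker node first, ask a central server for a payload that fits, then execute it. The endpoint URL and message format are hypothetical, not the actual PanDA protocol.

```python
# Pilot-job skeleton: probe, pull payload, execute, report.
# The server endpoint and the JSON exchange are invented for illustration.
import json, os, shutil, subprocess, urllib.request

def probe_environment():
    # Collect facts a broker may care about before committing a payload.
    return {
        "free_disk_gb": shutil.disk_usage("/tmp").free / 1e9,
        "cpus": os.cpu_count(),
    }

def fetch_payload(env):
    req = urllib.request.Request(
        "https://panda.example.org/getJob",   # hypothetical endpoint
        data=json.dumps(env).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)                # e.g. {"cmd": ["sh", "run.sh"]}

def main():
    env = probe_environment()
    job = fetch_payload(env)
    result = subprocess.run(job["cmd"], capture_output=True)
    print("payload finished with exit code", result.returncode)

if __name__ == "__main__":
    main()
```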
  261. Dr Ricardo Graciani Diaz (Universitat de Barcelona)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    DIRAC, the LHCb community Grid solution, has pioneered the use of pilot jobs in the Grid. Pilot jobs provide a homogeneous interface to a heterogeneous set of computing resources. At the same time, pilot jobs make it possible to delay the scheduling decision to the last moment, thus taking into account the precise running conditions at the resource and last-moment requests to the system. The DIRAC...
    Go to contribution page
  262. Dr Marie-Christine Sawley (ETHZ)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Resource tracking, like usage monitoring, relies on fine-granularity information communicated by each site on the Grid. Data is later aggregated to be analysed under different perspectives to yield global figures which will be used for decision making. The dynamic information collected from distributed sites must therefore be comprehensive, pertinent and coherent with upstream (planning) and...
    Go to contribution page
  263. Mr Antonio Ceseracciu (SLAC)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The Network Engineering team at the SLAC National Accelerator Laboratory is required to manage an increasing number and variety of network devices with a fixed amount of human resources. At the same time, networking equipment has acquired more intelligence to gain introspection and visibility onto the network. Making such information readily available for network engineers and user support...
    Go to contribution page
  264. Mr Andrey Bobyshev (FERMILAB)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Emerging dynamic circuit services are being developed and deployed to facilitate high impact data movement within the research and education communities. These services normally require network awareness in the applications, in order to establish an end-to-end path on-demand programmatically. This approach has significant difficulties because user applications need to be modified to support...
    Go to contribution page
  265. Mr Parag Mhashilkar (Fermi National Accelerator Laboratory)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The Open Science Grid (OSG) offers access to hundreds of Compute Elements (CE) and Storage Elements (SE) via standard Grid interfaces. The Resource Selection Service (ReSS) is a push-based workload management system that is integrated with the OSG information systems and resources. ReSS integrates standard Grid tools, such as Condor as a brokering service and the gLite CEMon for gathering and...
    Go to contribution page
  266. Mr Volker Buege (Inst. fuer Experimentelle Kernphysik - Universitaet Karlsruhe)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    An efficient administration of computing centres requires sophisticated tools for monitoring the local infrastructure. Sharing such resources in a grid infrastructure, like the Worldwide LHC Computing Grid (WLCG), brings with it a large number of external monitoring systems offering information on the status of the services of a grid site. This huge flood of information from many...
    Go to contribution page
  267. Dr Bohumil Franek (Rutherford Appleton Laboratory)
    26/03/2009, 08:00
    Online Computing
    poster
    In the SMI++ framework, the real world is viewed as a collection of objects behaving as finite state machines (a miniature analogue follows this entry). These objects can represent real entities, such as hardware devices or software tasks, or they can represent abstract subsystems. A special language (SML) is provided for the object description. The SML description is then interpreted by a Logic Engine (coded in C++) to drive the...
    Go to contribution page
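A miniature Python analogue of an SMI++-style object: a finite state machine with named states and allowed actions. In SMI++ itself, objects are described in SML and driven by the C++ Logic Engine; the device, states and actions here are invented for illustration.

```python
# Minimal finite-state-machine object in the spirit of SMI++.
class FSMObject:
    def __init__(self, name, initial, transitions):
        self.name = name
        self.state = initial
        # transitions: (current_state, action) -> next_state
        self.transitions = transitions

    def handle(self, action):
        key = (self.state, action)
        if key not in self.transitions:
            raise ValueError(f"{self.name}: '{action}' not allowed in {self.state}")
        self.state = self.transitions[key]
        print(f"{self.name}: {key[0]} --{action}--> {self.state}")

# A hypothetical high-voltage channel modelled as a state machine.
hv_channel = FSMObject(
    "HV_CHANNEL",
    initial="OFF",
    transitions={
        ("OFF", "SWITCH_ON"): "RAMPING",
        ("RAMPING", "READY"): "ON",
        ("ON", "SWITCH_OFF"): "OFF",
    },
)
hv_channel.handle("SWITCH_ON")
hv_channel.handle("READY")
```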
  268. Mr Ales Krenek (CESNET, CZECH REPUBLIC), Mr Jiri Sitera (CESNET, CZECH REPUBLIC), Mr Ludek Matyska (CESNET, CZECH REPUBLIC), Mr Miroslav Ruda (CESNET, CZECH REPUBLIC), Mr Zdenek Sustr (CESNET, CZECH REPUBLIC)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Logging and Bookkeeping (L&B) is a gLite subsystem responsible for tracking jobs on the grid. Normally the user interacts with it via the glite-wms-job-status and glite-wms-job-logging-info commands. Here we present other, less generally known but still useful L&B usage patterns which are available with recently developed L&B features. L&B exposes an HTML interface; pointing a web browser... (a scripted-access sketch follows this entry).
    Go to contribution page
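A sketch of scripted access to the HTML interface mentioned above, assuming (as the abstract suggests) that an L&B job identifier can be dereferenced like any https URL to obtain a rendered status page. The job ID below is a placeholder.

```python
# Fetch a job's L&B status page over HTTPS; the job ID is a placeholder
# standing in for a real gLite job identifier (itself an https URL).
import urllib.request

jobid = "https://lb.example.org:9000/abcdef123456"  # placeholder L&B job ID
with urllib.request.urlopen(jobid) as resp:
    page = resp.read().decode()
print(page[:200])  # first lines of the HTML rendering of the job status
```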
  269. Dr Jens Jensen (STFC-RAL)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    We show how to achieve interoperation between SDSC's Storage Resource Broker (SRB) and the Storage Resource Manager (SRM) implementations used in the Large Hadron Collider Computing Grid. Interoperation is achieved using gLite tools, to demonstrate file transfers between two different grids. This presentation is different from the work demonstrated by the authors and collaborators at SC2007...
    Go to contribution page
  270. Dr Andreas Gellrich (DESY)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    DESY is one of the world-wide leading centers for research with particle accelerators and synchrotron light. In HEP, DESY participates in the LHC as a Tier-2 center, supports ongoing analyses of HERA data, is a leading partner for the ILC, and runs the National Analysis Facility (NAF) for LHC and ILC. For research with synchrotron light, major new facilities are operated and built (FLASH,...
    Go to contribution page
  271. Mr Alexander Zaytsev (Budker Institute of Nuclear Physics (BINP))
    26/03/2009, 08:00
    Online Computing
    poster
    This contribution gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which deals with the administration of the TDAQ computing environment supporting the High Level Trigger, Event Filter and other subsystems of the ATLAS detector operating at the LHC machine at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers,...
    Go to contribution page
  272. Vasco Chibante Barroso (CERN)
    26/03/2009, 08:00
    Online Computing
    poster
    All major experiments need tools that provide a way to keep a record of the events and activities, both during commissioning and operations. In ALICE (A Large Ion Collider Experiment) at CERN, this task is performed by the Alice Electronic Logbook (eLogbook), a custom-made application developed and maintained by the Data-Acquisition group (DAQ). Started as a statistics repository, the eLogbook...
    Go to contribution page
  273. Pablo Saiz (CERN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Once the ALICE experiment starts collecting data, it will gather up to 4 PB of information per year. The data will be analyzed in centers distributed all over the world. Each of these centers might have a different software environment. To be able to use all these resources in a similar way, ALICE has developed AliEn, a GRID layer that provides the same interface independently of the...
    Go to contribution page
  274. Christian Ohm (Department of Physics, Stockholm University)
    26/03/2009, 08:00
    Online Computing
    poster
    The ATLAS BPTX stations consist of electrostatic button pick-up detectors, located 175 m along the beam pipe on both sides of ATLAS. The pick-ups are installed as a part of the LHC beam instrumentation and used by ATLAS for timing purposes. The usage of the BPTX signals in ATLAS is twofold: they are used both in the trigger system and for LHC beam monitoring. The ATLAS Trigger...
    Go to contribution page
  275. Ricardo Rocha (CERN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The ATLAS Distributed Data Management (DDM) system is now at the point of focusing almost all its efforts to operations after successfully delivering a high quality product which has proved to scale to the extreme requirements of the experiment users. The monitoring effort has followed the same path and is now focusing mostly on the shifters and experts operating the system. In this paper we...
    Go to contribution page
  276. Alessandro De Salvo (Istituto Nazionale di Fisica Nucleare Sezione di Roma 1)
    26/03/2009, 08:00
    Online Computing
    poster
    The calibration of the ATLAS MDT chambers will be performed at remote sites, called Remote Calibration Centers. Each center will process the calibration data for the assigned part of the detector and send the results back to CERN, for general use in reconstruction and analysis, within 24 hours of the calibration data taking. In this work we present the data extraction mechanism, the data...
    Go to contribution page
  277. Remigius K Mommsen (FNAL, Chicago, Illinois, USA)
    26/03/2009, 08:00
    Online Computing
    poster
    The CMS event builder assembles events accepted by the first level trigger and makes them available to the high-level trigger. The event builder needs to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GBytes/s originating from approximately 500 sources. This paper presents the chosen hardware and software architecture. The system consists of two stages: an...
    Go to contribution page
  278. Dr Jose Flix Molina (Port d'Informació Científica, PIC (CIEMAT - IFAE - UAB), Bellaterra, Spain)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The computing system of the CMS experiment works using distributed resources from more than 60 computing centres worldwide. These centres, located in Europe, America and Asia are interconnected by the Worldwide LHC Computing Grid. The operation of the system requires a stable and reliable behaviour of the underlying infrastructure. CMS has established a procedure to extensively test all...
    Go to contribution page
  279. Mr Yuriy Ilchenko (SMU)
    26/03/2009, 08:00
    Online Computing
    poster
    The ATLAS experiment's data acquisition system is distributed across the nodes of large farms. Online monitoring and data quality assessment run alongside this system. A mechanism is required that integrates the monitoring data from different nodes and makes it available to the shift crews. This integration includes, but is not limited to, summation or averaging of histograms (a bin-wise summation sketch follows this entry) and summation of trigger...
    Go to contribution page
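The core of such a gatherer, reduced to its simplest operation: bin-wise summation of per-node histograms into one farm-wide histogram for the shift crew. Node names and binning are invented; a real gatherer collects these over the DAQ network.

```python
# Bin-wise summation of per-node histograms into a farm-wide view.
node_histograms = {
    "node01": [5, 12, 30, 9, 1],
    "node02": [4, 15, 28, 11, 0],
    "node03": [6, 10, 33, 8, 2],
}

def gather(histos):
    # All inputs must share the same binning for a bin-wise sum to be valid.
    nbins = {len(h) for h in histos.values()}
    assert len(nbins) == 1, "inconsistent binning across nodes"
    return [sum(bins) for bins in zip(*histos.values())]

total = gather(node_histograms)
print("farm-wide histogram:", total)
```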
  280. Dr Marco Cecchi (INFN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The gLite Workload Management System (WMS) has been designed and developed to represent a reliable and efficient entry point to high-end services available on a Grid. The WMS translates user requirements and preferences into specific operations and decisions - dictated by the general status of all other Grid services it interoperates with - while taking responsibility to bring requests to...
    Go to contribution page
  281. Dr Alessandro Di Girolamo (CERN)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    This contribution describes how part of the monitoring of the services used in the computing systems of the LHC experiments has been integrated with the Service Level Status (SLS) framework. The LHC experiments are using an increasing number of complex and heterogeneous services: SLS makes it possible to group all these different services and to report their status and their availability by...
    Go to contribution page
  282. Dr Doris Ressmann (Karlsruher Institut of Technology)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    All four LHC experiments are served by GridKa, the German WLCG Tier-1 at the Steinbuch Centre for Computing of the Karlsruhe Institute of Technology (KIT). Each of the experiments requires a significantly different setup of the dCache data management system. Therefore the use of a single dCache instance for all experiments can have negative effects at different levels, e.g. SRM, space manager...
    Go to contribution page
  283. Mr Fernando Guimaraes Ferreira (Univ. Federal do Rio de Janeiro (UFRJ))
    26/03/2009, 08:00
    Online Computing
    poster
    The web system described here provides functionalities to monitor the data acquired by the Detector Control System (DCS). The DCS is responsible for overseeing the coherent and safe operation of the ATLAS experiment hardware. In the context of the Hadronic Tile Calorimeter Detector, it controls the power supplies of the readout electronics, acquiring voltages, currents, temperatures and coolant...
    Go to contribution page
  284. Mr Lourenço Vaz (LIP - Coimbra)
    26/03/2009, 08:00
    Online Computing
    poster
    Data describing the conditions of the ATLAS detector and the Trigger and Data Acquisition system are stored in the Conditions DataBases (CDB), and may include from simple values to complex objects like online system messages or monitoring histograms. The CDB are deployed on COOL, a common infrastructure for reading and writing conditions data. Conditions data produced online are saved to an...
    Go to contribution page
  285. Dr Josva Kleist (Nordic Data Grid Facility)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    Interoperability of grid infrastructures is becoming increasingly important with the emergence of large-scale grid infrastructures based on national and regional initiatives. To achieve interoperability of grid infrastructures, adaptations and the bridging of many different systems and services need to be tackled. A grid infrastructure offers services for authentication, authorization, accounting,...
    Go to contribution page
  286. Prof. Jorge Rodiguez (Florida International University), Dr Yujun Wu (University of Florida)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The CMS experiment is expected to produce a few petabytes of data a year and distribute them globally. Within the CMS computing infrastructure, most user analyses and the production of Monte Carlo events will be carried out at some 50 CMS Tier-2 sites. How to store the data and allow physicists to access them efficiently has been a challenge, especially for Tier-2...
    Go to contribution page
  287. Torsten Antoni (GGUS, KIT-SCC)
    26/03/2009, 08:00
    Grid Middleware and Networking Technologies
    poster
    The user and operations support of the EGEE series of projects can be captioned "regional support with central coordination". Its central building block is the GGUS portal, which acts as an entry point for users and support staff. It also acts as an integration platform for the distributed support effort. As WLCG relies heavily on the EGEE infrastructure, it is important that the support...
    Go to contribution page