-
Dr Simon Patton (LAWRENCE BERKELEY NATIONAL LABORATORY) · 03/09/2007, 08:00 · Software components, tools and databases · poster
The Unified Software Development Process (USDP) defines a process for developing software from the initial inception to the final delivery. The process creates a number of different models of the final deliverable: the use case, analysis, design, deployment, implementation and test models. These models are developed using an iterative approach that breaks down into four main phases...
-
Dr Sven Hermann (Forschungszentrum Karlsruhe) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
Forschungszentrum Karlsruhe is one of the largest science and engineering research institutions in Europe. The resource centre GridKa, part of this science centre, is building up a Tier 1 centre for the LHC project. Embedded in the European grid initiative EGEE, GridKa also manages the ROC (regional operation centre) for the German/Swiss region. A ROC is responsible for regional...
-
Alasdair Earl (CERN) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
The RPMVerify package is a lightweight intrusion detection system (IDS) used at CERN as part of the wider security infrastructure. The package provides information about potentially nefarious changes to software which has been deployed using the RedHat Package Management system (RPM). The purpose of the RPMVerify project has been to produce a system which makes use of the...
-
Mr Shahryar Khan (Stanford Linear Accelerator Center) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
The future of Computing in High Energy Physics (HEP) applications depends on both the Network and Grid infrastructure. Some South Asian countries such as India and Pakistan are making progress in this direction by not only building Grid clusters, but also by improving their network infrastructure. However, to facilitate the use of these resources, they need to overcome the issues of...
-
Mr Andrey Tsyganov (Moscow Physical Engineering Inst. (MePhI)) · 03/09/2007, 08:00 · Software components, tools and databases · poster
CERN, the European Laboratory for Particle Physics, located in Geneva, Switzerland, is currently building the LHC, a 27 km particle accelerator. The equipment life-cycle management of this project is provided by the Engineering and Equipment Data Management System (EDMS) Service. Using Oracle, it supports the management and follow-up of different kinds of documentation through the whole...
-
Dr Nick Garfield (CERN) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
As computing systems become more distributed, networks increase in throughput, and resources become increasingly dispersed over multiple administrative domains, even continents, there is a greater need to know the performance limits of the underlying protocols which form the foundations of complex computing and networking architectures. One such protocol is the Network...
-
Mr Ulrich Fuchs (CERN & Ludwig-Maximilians-Universität München) · 03/09/2007, 08:00 · Online Computing · poster
ALICE is a dedicated heavy-ion detector to exploit the physics potential of nucleus-nucleus (lead-lead) interactions at LHC energies. The aim is to study the physics of strongly interacting matter at extreme energy densities, where the formation of a new phase of matter, the quark-gluon plasma, is expected. Running in heavy-ion mode, the data rate from event building to permanent...
-
Mr Belmiro Antonio Venda Pinto (Faculdade de Ciencias - Universidade de Lisboa) · 03/09/2007, 08:00 · Online Computing · poster
The ATLAS experiment uses a complex trigger strategy to achieve the necessary Event Filter output rate, making it possible to optimize the storage and processing needs of these data. These needs are described in the ATLAS Computing Model, which embraces Grid concepts. The output coming from the Event Filter will consist of four main streams: the physics stream, express stream,...
-
Dr David Malon (Argonne National Laboratory) · 03/09/2007, 08:00 · Software components, tools and databases · poster
In the ATLAS event store, files are sometimes "an inconvenient truth." From the point of view of the ATLAS distributed data management system, files are too small: datasets are the units of interest. From the point of view of the ATLAS event store architecture, files are simply a physical clustering optimization: the units of interest are event collections, sets of events that...
-
Jos Van Wezel (Forschungszentrum Karlsruhe (FZK/GridKa)) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
The disk pool managers in use in the HEP community focus on managing disk storage but at the same time rely on a mass storage (i.e. tape based) system, either to offload data that has not been touched for a long time or for archival purposes. Traditionally, tape handling systems like HPSS by IBM or Enstore developed at FNAL are used because they offer specialized features to overcome the...
-
Kaushik De (UT-Arlington) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
During 2006-07, the ATLAS experiment at the Large Hadron Collider launched a massive Monte Carlo simulation production exercise to commission software and computing systems in preparation for data in 2007. In this talk, we will describe the goals and objectives of this exercise, the software systems used, and the tiered computing infrastructure deployed worldwide. More than half a petabyte...
-
Dr Monica Verducci (European Organization for Nuclear Research (CERN)) · 03/09/2007, 08:00 · Software components, tools and databases · poster
One of the most challenging tasks faced by the LHC experiments will be the storage of "non-event data" produced by calibration and alignment stream processes into the Conditions Database. For the handling of this complex experiment conditions data, the LCG Conditions Database Project has implemented COOL, a new software product designed to minimise the duplication of effort by developing a...
-
Mr Brice Copy (CERN) · 03/09/2007, 08:00 · Software components, tools and databases · poster
The maintenance and operation of the ATLAS detector will involve thousands of contributors from 170 physics institutes. Planning and coordinating the action of ATLAS members, ensuring their expertise is properly leveraged and that no parts of the detector are under- or overstaffed will be a challenging task. The ATLAS Maintenance and Operation (ATLAS M&O) application offers a fluent web...
-
Nils Gollub (CERN / University of Uppsala) · 03/09/2007, 08:00 · Software components, tools and databases · poster
The ATLAS Tile Calorimeter (TileCal) is presently involved in an intense phase of commissioning with cosmic rays and subsystem integration. Various monitoring programs have been developed at different levels of the data flow to tune the set-up of the detector running conditions and to provide a fast and reliable assessment of the data quality. The presentation will focus on the...
-
Dr Amir Farbin (European Organization for Nuclear Research (CERN)) · 03/09/2007, 08:00 · Software components, tools and databases · poster
The EventView Analysis Framework is currently the basis for much of the analysis software employed by various ATLAS physics groups (for example the Top, SUSY, Higgs, and Exotics working groups). In ATLAS's central data preparation, this framework provides an assessment of data quality and the first analysis of physics data for the whole collaboration. An EventView is a self-consistent...
-
Mr Bruno Hoeft (Forschungszentrum Karlsruhe) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
While many fields relevant to Grid security are already covered by existing working groups, their remit rarely goes beyond the scope of the Grid infrastructure itself. However, security issues pertaining to the internal set-up of compute centres have at least as much impact on Grid security. Thus, this talk will briefly present the EU ISSeG project (Integrated Site Security for Grids)...
-
Dr Andreas Gellrich (DESY) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
As a partner of the international EGEE project in the German/Switzerland federation (DECH) and as a member of the national D-GRID initiative, DESY operates a large-scale production-grade Grid infrastructure with hundreds of CPU cores and hundreds of Terabytes of disk storage. As a Tier-2/3 center for ATLAS and CMS, DESY plays a leading role in Grid computing in Germany. DESY strongly supports...
-
Artur Barczyk (Caltech) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
Most of today's data networks are a mixture of packet switched and circuit switched technologies, with Ethernet/IP on the campus and in data centers, and SONET/SDH over the wide area infrastructure. SONET/SDH allows creating dedicated circuits with bandwidth guarantees along the path, suitable for the use of aggressive transport protocols optimised for fast data transfer and without...
-
Dr Bockjoo Kim (University of Florida) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
The CMS experiment will begin data collection at the end of 2007 and has released its software with a new framework since the end of 2005. The CMS experiment employs tiered distributed computing based on Grids: the LHC Computing Grid (LCG) and the Open Science Grid (OSG). There are approximately 37 tiered CMS centers around the world. The number of CMS software releases was three...
-
Dirk Hufnagel (for the CMS Offline/Computing group) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
With the upcoming LHC engineering run in November, the CMS Tier0 computing effort will be one of the most important activities of the experiment. The CMS Tier0 is responsible for all data handling and processing of real data events in the first period of their life, from when the data is written by the DAQ system to a disk buffer at the CMS experiment site to when it is transferred...
-
Dr Carl Timmer (TJNAF) · 03/09/2007, 08:00 · Online Computing · poster
cMsg is software used to send and receive messages in the Jefferson Lab online and run-control systems. It was created to replace the several IPC software packages in use with a single API. cMsg is asynchronous in nature, running a callback for each message received. However, it also includes synchronous routines for convenience. At the framework level, cMsg is a thin API layer in...
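As an illustration of the callback style described here, a minimal publish/subscribe sketch (hypothetical names; the real cMsg API is a C/C++/Java library and differs in detail):

```python
# Minimal callback-driven message bus, sketching the asynchronous pattern
# with a synchronous convenience routine. Not the actual cMsg API.
import queue
import threading

class MessageBus:
    def __init__(self):
        self._subs = []                 # (subject, callback) pairs
        self._inbox = queue.Queue()     # feeds the synchronous receive()

    def subscribe(self, subject, callback):
        """Register a callback to be run for each message on `subject`."""
        self._subs.append((subject, callback))

    def send(self, subject, text):
        """Asynchronous send: callbacks fire on worker threads."""
        for subj, cb in self._subs:
            if subj == subject:
                threading.Thread(target=cb, args=(text,)).start()
        self._inbox.put((subject, text))

    def receive(self, timeout=None):
        """Synchronous convenience routine: block until a message arrives."""
        return self._inbox.get(timeout=timeout)

bus = MessageBus()
bus.subscribe("run/status", lambda msg: print("callback got:", msg))
bus.send("run/status", "running")
print("sync receive:", bus.receive(timeout=1.0))
```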
-
Rosy Nikolaidou (DAPNIA) · 03/09/2007, 08:00 · Online Computing · poster
The Muon Spectrometer of the ATLAS experiment is made of a large toroidal magnet, arrays of high-pressure drift tubes for precise tracking and dedicated fast detectors for the first-level trigger. All the detectors in the barrel toroid have been installed and commissioning has started with cosmic rays. These detectors are arranged in three concentric rings and the total area is about...
-
Dr Ivan D. Reid (School of Design and Engineering - Brunel University, UK) · 03/09/2007, 08:00 · Software components, tools and databases · poster
Goodness-of-fit statistics measure the compatibility of random samples against some theoretical probability distribution function. The classical one-dimensional Kolmogorov-Smirnov test is a non-parametric statistic for comparing two empirical distributions, which defines the largest absolute difference between the two cumulative probability distribution functions as a measure of...
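The statistic is straightforward to compute; a short NumPy sketch of the two-sample version (arbitrary test samples, for illustration only):

```python
# Two-sample Kolmogorov-Smirnov statistic as defined above: the largest
# absolute difference between the two empirical CDFs.
import numpy as np

def ks_statistic(a, b):
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])                  # evaluate both ECDFs here
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return np.max(np.abs(cdf_a - cdf_b))

rng = np.random.default_rng(42)
x = rng.normal(0.0, 1.0, 500)
y = rng.normal(0.2, 1.0, 500)
print("D =", ks_statistic(x, y))   # cross-check: scipy.stats.ks_2samp(x, y)
```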
-
Mr Georges Kohnen (Université de Mons-Hainaut) · 03/09/2007, 08:00 · Online Computing · poster
The IceCube neutrino telescope is a cubic kilometer Cherenkov detector currently under construction in the deep ice at the geographic South Pole. As of 2007, it has reached more than 25% of its final instrumented volume and is actively taking data. We will briefly describe the design and current status, as well as the physics goals of the detector. The main focus will, however, be on the...
-
Mr Martin Gasthuber (Deutsches Elektronen Synchrotron (DESY)) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
Based on today's understanding of LHC-scale analysis requirements and the clear dominance of fast and high capacity random access storage, this talk will present a generic architecture for a national facility based on existing components from various computing domains. The following key areas will be discussed in detail and solutions will be proposed, building the overall...
-
Craig Dowell (Univ. of Washington) · 03/09/2007, 08:00 · Software components, tools and databases · poster
The ATLAS Muon Spectrometer is constructed out of 1200 drift tube chambers with a total area of nearly 7000 square meters. It must determine muon track positions to a very high precision despite its large size, necessitating complex real-time alignment measurements. Each chamber, as well as approximately 50 alignment reference bars in the endcap region, is equipped with CCD cameras,...
-
Marco Clemencic (European Organization for Nuclear Research (CERN)) · 03/09/2007, 08:00 · Software components, tools and databases · poster
The COOL software has been chosen by both ATLAS and LHCb as the base of their conditions database infrastructure. The main focus of the COOL project in 2007 will be the deployment, testing and validation of Oracle-based COOL database services at Tier0 and Tier1. In this context, COOL software development will concentrate on service-related issues, and in particular on the optimization...
-
Dr Dantong Yu (Brookhaven National Laboratory), Dr Dimitrios Katramatos (Brookhaven National Laboratory), Dr Shawn McKee (University of Michigan) · 03/09/2007, 08:00 · Computer facilities, production grids and networking
Supporting reliable, predictable, and efficient global movement of data in high-energy physics distributed computing environments requires the capability to provide guaranteed bandwidth to selected data flows and schedule network usage appropriately. The DOE-funded TeraPaths project at Brookhaven National Laboratory (BNL), currently in its second year, is developing methods and tools that...
-
Dr Hans G. Essel (GSI) · 03/09/2007, 08:00 · Online Computing · poster
(European FP6 program "HadronPhysics", JRA1 "FutureDAQ", contract number RII3-CT-2004-506078.) For the new experiments at FAIR, like CBM, new concepts of data acquisition systems have to be developed, such as the distribution of self-triggered, time-stamped data streams over high performance networks for event building. The DAQ backbone DABC is designed for FAIR detector tests, readout...
-
Dr Giuseppe Della Ricca (Univ. of Trieste and INFN) · 03/09/2007, 08:00 · Online Computing · poster
The electromagnetic calorimeter of the Compact Muon Solenoid experiment will play a central role in the achievement of the full physics performance of the detector at the LHC. The detector performance will be monitored using applications based on the CMS Data Quality Monitoring (DQM) framework and running on the High-Level Trigger Farm as well as on local DAQ systems. The monitorable...
-
Dr Doris Ressmann (Forschungszentrum Karlsruhe) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
The grid era brings upon new and steeply rising demands in data storage. The GridKa project at Forschungszentrum Karlsruhe delivers its share of the computation and storage requirements of all LHC and 4 other HEP experiments. Access throughput from the worker nodes to the storage can be as high as 2 GB/s. At the same time, a continuous throughput on the order of 300-400 MB/s into and...
-
Dr Niko Neufeld (CERN) · 03/09/2007, 08:00 · Online Computing · poster
Events selected by LHCb's online event filtering farm will be assembled into raw data files of about 2 GB. Under nominal conditions about two such files will be produced per minute. These files must be copied to tape storage and made available online to various calibration and monitoring tasks. The life cycle and state transitions of each file are managed by means of a dedicated data-...
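Taking the quoted figures at face value, the implied sustained rate is easy to estimate (a back-of-envelope sketch; duty cycle and overheads are ignored):

```python
# Throughput implied by the abstract's numbers: ~2 GB per file,
# ~2 files per minute under nominal conditions.
file_size_gb = 2.0
files_per_minute = 2.0

mb_per_second = file_size_gb * 1024 * files_per_minute / 60
tb_per_day = file_size_gb * files_per_minute * 60 * 24 / 1024

print(f"~{mb_per_second:.0f} MB/s sustained")   # ~68 MB/s
print(f"~{tb_per_day:.1f} TB/day to tape")      # ~5.6 TB/day
```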
-
Dr Manuela Cirilli (University of Michigan) · 03/09/2007, 08:00 · Software components, tools and databases · poster
The calibration of the 375,000 ATLAS Monitored Drift Tubes will be a highly challenging task: a dedicated set of data will be extracted from the second level trigger of the experiment and streamed to three remote Tier-2 Calibration Centres. This presentation reviews the complex chain of databases envisaged to support the MDT Calibration and describes the actual status of the...
-
Dr Wolfgang Waltenberger (Hephy Vienna) · 03/09/2007, 08:00 · Software components, tools and databases · poster
A tool is presented that is capable of reading from and writing to several different file formats. Currently supported file formats are ROOT, HBook, HDF, XML, Sqlite3 and a few text file formats. A plugin mechanism decouples the file-format specific "backends" from the main library. All data are internally represented as "heterogeneous hierarchic tuples"; no other data structure exists in...
-
Ian Fisk (Fermi National Accelerator Laboratory (FNAL)) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
CMS is preparing seven remote Tier-1 computing facilities to archive and serve experiment data. These centers represent the bulk of CMS's data serving capacity, a significant resource for reprocessing data, all of the simulation archiving capacity, and operational support for Tier-2 centers and analysis facilities. In this paper we present the progress on deploying the largest remote...
-
Irina Sourikova (BROOKHAVEN NATIONAL LABORATORY) · 03/09/2007, 08:00 · Online Computing · poster
After seven years of running and collecting 2 Petabytes of physics data, the PHENIX experiment at the Relativistic Heavy Ion Collider (RHIC) has gained a lot of experience with database management systems (DBMS). Serving all of the experiment's operations (data taking, production and analysis), databases provide 24/7 access to calibrations and book-keeping information for hundreds of...
-
Dr Iosif Legrand (CALTECH), Ramiro Voicu (CALTECH) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
The efficient use of high-speed networks to transfer large data sets is an essential component for many scientific applications including CERN's LCG experiments. We present an efficient data transfer application, Fast Data Transfer (FDT), and a distributed agent system (LISA) able to monitor, configure, control and globally coordinate complex, large scale data transfers. FDT is an...
-
Prof. Toby Burnett (University of Washington) · 03/09/2007, 08:00 · Software components, tools and databases · poster
Applications often need to have many parameters defined for execution. A few can be handled on the command line, but this does not scale very well. I present a simple use of embedded Python that makes it easy to specify configuration data for applications, avoiding wired-in constants or elaborate parsing that is difficult to justify for small or one-off applications. But the...
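A minimal sketch of the idea, with a made-up file name and parameters: the application executes a Python file and reads plain variables out of its namespace, so configuration gets full Python expressiveness without any custom parser:

```python
# Embedded-Python-style configuration: run a user-written config file and
# harvest its variables. File name and parameter names are illustrative.
from pathlib import Path

def load_config(path):
    ns = {}
    exec(Path(path).read_text(), ns)      # execute the config file
    return {k: v for k, v in ns.items() if not k.startswith("_")}

Path("job_config.py").write_text(
    "n_events = 1000\n"
    "beam_energy = 7000.0      # GeV\n"
    "detectors = ['tracker', 'calorimeter']\n"
)
cfg = load_config("job_config.py")
print(cfg["n_events"], cfg["beam_energy"], cfg["detectors"])
```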
-
Dr Wenji Wu (FERMILAB) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
The computing models for LHC experiments are globally distributed and grid-based. In such a computing model, the experiments' data must be reliably and efficiently transferred from CERN to Tier-1 regional centers, processed, and distributed to other centers around the world. Obstacles to good network performance arise from many causes and can be a major impediment to the success of this...
-
Elisabetta Ronchieri (INFN CNAF) · 03/09/2007, 08:00 · Software components, tools and databases · poster
People involved in modular projects need to improve the software build process, planning the correct execution order and detecting circular dependencies. The lack of suitable tools may cause delays in the development, deployment and maintenance of the software. Experience in such projects has shown that the combined use of version control and build systems is not able to support the...
-
Mr Alexander Withers (Brookhaven National Laboratory) · 03/09/2007, 08:00 · Software components, tools and databases · poster
The PostgreSQL database is a vital component of critical services at the RHIC/USATLAS Computing Facility such as the Quill subsystem of the Condor Project and both PNFS and SRM within dCache. Current deployments are relatively unsophisticated, utilizing default configurations on small-scale commodity hardware. However, a substantial increase in projected growth has exposed deficiencies...
-
Dr Maria Grazia Pia (INFN Genova) · 03/09/2007, 08:00 · Software components, tools and databases · poster
The Statistical Toolkit provides an extensive collection of algorithms for the comparison of two data samples: in addition to the chi-squared test, it includes all the tests based on the empirical distribution function documented in the literature for binned and unbinned distributions. Some of these tests, like the Kolmogorov-Smirnov one, are widely used; others, like the Anderson-Darling...
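For orientation, EDF-based two-sample tests of this kind are available in SciPy (shown below as a stand-in; this is not the Statistical Toolkit's own API):

```python
# Kolmogorov-Smirnov and k-sample Anderson-Darling comparisons of two
# unbinned samples. Sample sizes and distributions are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.exponential(1.0, 300)        # "observed" sample
reference = rng.exponential(1.1, 300)   # "reference" sample

print(stats.ks_2samp(data, reference))          # KS: max ECDF distance
print(stats.anderson_ksamp([data, reference]))  # AD: tail-sensitive EDF test
```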
-
Dr Elliott Wolin (Jefferson Lab) · 03/09/2007, 08:00 · Software components, tools and databases · poster
EVIO is a lightweight event I/O package consisting of an object-oriented layer on top of a pre-existing, highly efficient, C-based event I/O package. The latter, part of the JLab CODA package, has been in use in JLab high-speed DAQ systems for many years, but other underlying disk I/O packages could be substituted. The event format on disk, a packed tree-like hierarchy of banks, maps...
-
Dr Jose Hernandez (CIEMAT) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
CMS undertakes periodic computing challenges of increasing scale and complexity to test its computing model and Grid computing systems. The computing challenges are aimed at establishing a working distributed computing system that implements the CMS computing model based on an underlying multi-flavour grid infrastructure. CMS dataflows and data processing workflows are exercised during a...
-
Mr Luis March (Instituto de Fisica Corpuscular) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
The Spanish ATLAS Tier-2 is geographically distributed between three HEP institutes: IFAE (Barcelona), IFIC (Valencia) and UAM (Madrid). Currently it has a computing power of about 400 kSI2k of CPU, a disk storage capacity of 40 TB and a network bandwidth, connecting the three sites and the nearest Tier-1, of 1 Gb/s. These resources will increase with time in parallel to those of...
-
Tomas Kouba (Institute of Physics - Acad. of Sciences of the Czech Rep. (ASCR)) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
Each Tier 2 site is monitored by various services from outside. The Prague T2 is monitored by SAM tests, GSTAT monitoring, the RTM from RAL, regional Nagios monitoring and experiment specific tools. Besides that, we monitor our own site for hardware and software failures and middleware status. All these tools produce output that must be regularly checked by site administrators. We...
-
Mr Alessandro Italiano (INFN-CNAF) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
The day-to-day operations on a big computer center farm like that of a Tier1 can be numerous. Opening or closing a host, changing the batch system configuration, replacing a disk, reinstalling a host and so on is just a short list of what can and will really happen. In these conditions, remembering all that has been done can be really difficult. Typically a big farm is managed by a team, so it...
-
Dr Keith Chadwick (Fermilab) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of...
-
Manuel Gallas (CERN) · 03/09/2007, 08:00 · Software components, tools and databases · poster
Based on the ATLAS TileCal 2002 test-beam setup example, we present here the technical, software aspects of a possible solution to the problem of using two different simulation engines, like Geant4 and Fluka, with the common geometry and digitization code. The specific use case we discuss here, which is probably the most common one, is when the Geant4 application is already...
-
Prof. Wolfgang Kuehn (Univ. Giessen, II. Physikalisches Institut) · 03/09/2007, 08:00 · Online Computing · poster
PANDA is a new universal detector for antiproton physics at the HESR facility at FAIR/GSI. The PANDA data acquisition system has to handle interaction rates of the order of 10^7/s and data rates of several 100 Gb/s. FPGA-based compute nodes with multi-Gb/s bandwidth capability using the ATCA architecture are designed to handle tasks such as event building, feature extraction and...
-
Kathy Pommes (CERN) · 03/09/2007, 08:00 · Software components, tools and databases · poster
During the construction and commissioning phases of the ATLAS Collaboration, data related to the installation, testing and performance of the equipment are stored in distinctive databases. Each group acquires information and saves it in repositories placed on different servers, using diverse technologies. Both data modeling and terminology may vary among the storage areas. The...
-
Dr Sven Gabriel (Forschungszentrum Karlsruhe) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
GridKa is the German Tier1 centre in the Worldwide LHC Computing Grid (WLCG). It is part of the Institut für Wissenschaftliches Rechnen (IWR) at the Forschungszentrum Karlsruhe (FZK). It started in 2002 as the successor of the "Regional Data and Computing Centre in Germany" (RDCCG). GridKa supports all four LHC experiments, ALICE, ATLAS, CMS and LHCb, and four non-LHC high energy physics...
-
Dr Christopher Jones (Cornell University) · 03/09/2007, 08:00 · Software components, tools and databases · poster
When doing an HEP analysis, physicists typically repeat the same operations over and over while applying minor variations. Performing the operations, as well as remembering the changes made during each iteration, can be a very tedious process. HEPTrails is an analysis application written in Python and built on top of the University of Utah's VisTrails system, which provides workflow and full...
-
Dr Enrico Mazzoni (INFN Pisa) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
We report on the tests performed in the INFN Pisa Computing Centre with some of the latest generation storage devices. Fibre Channel and NAS solutions have been tested in a realistic environment, both participating in the worldwide CMS Service Challenges and simulating analysis patterns with more than 500 jobs accessing data files concurrently. Both usage patterns have highlighted the...
-
Dr David Bailey (University of Manchester), Dr Robert Appleby (University of Manchester) · 03/09/2007, 08:00 · Software components, tools and databases · poster
Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the...
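The stream-computing idea is essentially data parallelism: apply the same transport map to every particle in a large array. A toy NumPy sketch of thin-lens tracking (a textbook FODO-like cell, not the authors' simulation):

```python
# Toy data-parallel tracking: one thin-lens cell applied to all particles
# at once, with NumPy arrays standing in for a GPU stream processor.
import numpy as np

def quad_kick(x, xp, k1L):
    """Thin-lens quadrupole kick: angle change proportional to offset."""
    return x, xp - k1L * x

def drift(x, xp, L):
    """Field-free drift of length L."""
    return x + L * xp, xp

rng = np.random.default_rng(0)
x  = rng.normal(0.0, 1e-3, 100_000)   # transverse positions [m]
xp = rng.normal(0.0, 1e-4, 100_000)   # angles [rad]

for _ in range(100):                  # 100 "turns" of the toy cell
    x, xp = drift(*quad_kick(x, xp, +0.5), 1.0)
    x, xp = drift(*quad_kick(x, xp, -0.5), 1.0)

print("rms x after tracking:", x.std())
```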
-
Mr Enrico Fattibene (INFN-CNAF, Bologna, Italy), Mr Federico Pescarmona (INFN-Torino, Italy), Mr Giuseppe Misurelli (INFN-CNAF, Bologna, Italy), Mr Stefano Dal Pra (INFN-Padova, Italy) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
In a production-quality Grid infrastructure, accounting data play a key role in understanding how the allocated resources have been used. The different types of Grid users have to be taken into account in order to provide different subsets of accounting data based on the specific role covered by a Grid user. Grid end users, VO (Virtual Organization) managers, site administrators...
-
Dr Patricia Conde Muíño (LIP-Lisbon) · 03/09/2007, 08:00 · Online Computing · poster
ATLAS is one of the four major LHC experiments, designed to cover a wide range of physics topics. In order to cope with a rate of 40 MHz and 25 interactions per bunch crossing, the ATLAS trigger system is divided into three different levels. The first one (LVL1, hardware based) identifies signatures in 2 microseconds that are confirmed by the following trigger levels (software based)...
-
Antonio Amorim (Universidade de Lisboa (SIM and FCUL, Lisbon)) · 03/09/2007, 08:00 · Software components, tools and databases · poster
The ATLAS conditions databases will be used to manage information of quite diverse nature and level of complexity. The infrastructure is being built using the LCG COOL infrastructure and provides a powerful information sharing gateway to many different systems. The nature of the stored information ranges from temporal series of simple values to very complex objects describing...
-
Luca dell'Agnello (INFN-CNAF) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
INFN CNAF is a multi-experiment computing center acting as Tier-1 for LCG but also supporting other HEP and non-HEP experiments and Virtual Organizations. The CNAF Tier-1 is one of the main Resource Centers of the Grid Infrastructure (WLCG/EGEE); the preferred access method to the center is through WLCG/EGEE and INFNGRID middleware and services. Critical issues to be addressed to meet...
-
Prof. Manuel Delfino Reznicek (Port d'Informació Científica (PIC)) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
A new data center for the MAGIC Gamma Ray Telescope, located at the Roque de los Muchachos observatory in the Canary Islands, Spain, has been deployed at the Port d'Informació Científica in Barcelona. The MAGIC Datacenter at PIC receives all the raw data produced by MAGIC, either via the network or on tape cartridges, and provides archiving, rapid processing for quality control and...
-
Tomasz Wlodek (Brookhaven National Laboratory) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
Managing a large number of heterogeneous grid servers with different service requirements poses great challenges. We describe a cost-effective integrated operation framework which manages hardware inventory, monitors services, raises alarms with different severity levels and tracks the facility's response to them. The system is based on open source components: RT (Request Tracker) tracks...
-
Jonathan Butterworth (University College London) · 03/09/2007, 08:00 · Software components, tools and databases · poster
Accurate modelling of high energy hadron interactions is essential for the precision analysis of data from the LHC. It is therefore imperative that the predictions of Monte Carlos used to model this physics are tested against existing and future measurements. These measurements cover a wide variety of reactions, experimental observables and kinematic regions. To make this process more...
-
Antonio Amorim (Universidade de Lisboa (SIM and FCUL, Lisbon)) · 03/09/2007, 08:00 · Online Computing · poster
The access of the ATLAS Trigger and Data Acquisition (TDAQ) systems to the Conditions databases has strong requirements on reliability and performance. Several applications were developed to support the integration of Conditions database access with the online services in TDAQ, like the interface to the Information Services and to the TDAQ configuration. The DBStressor was developed to test and stress...
-
Vincenzo Chiochia (Universität Zürich) · 03/09/2007, 08:00 · Online Computing · poster
The CMS Pixel Detector is hosted inside the large solenoid generating a magnetic field of 4 T. The electron-hole pairs produced by particles traversing the pixel sensors will thus experience the Lorentz force due to the combined presence of magnetic and electric field. This results in a systematic shift of the charge distribution. In order to achieve a high position resolution a...
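For orientation, the size of this effect follows from the standard Lorentz-angle formula (a rough sketch with approximate low-field values; the actual CMS calibration depends on bias voltage and temperature):

```python
# Standard Lorentz-angle estimate, tan(theta_L) = r_H * mu * B.
# Values below are illustrative, not CMS calibration constants.
import math

mu_e = 0.135    # low-field electron mobility in silicon [m^2 / (V s)]
r_H  = 1.1      # Hall factor for electrons, approximate
B    = 4.0      # CMS solenoid field [T]
d    = 285e-6   # sensor thickness [m]

tan_theta_L = r_H * mu_e * B          # tan of the Lorentz angle
shift = d * tan_theta_L               # transverse shift across the bulk

print(f"theta_L ~ {math.degrees(math.atan(tan_theta_L)):.0f} deg, "
      f"charge shift ~ {shift * 1e6:.0f} um over {d * 1e6:.0f} um")
```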
-
Dr Robert Bainbridge (Imperial College London) · 03/09/2007, 08:00 · Online Computing · poster
The CMS silicon strip tracker is unprecedented in terms of its size and complexity, providing a sensitive area of >200 m^2 and comprising 10M readout channels. Its data acquisition system is based around a custom analogue front-end ASIC, an analogue optical link system and an off-detector VME board that performs digitization, zero-suppression and data formatting. These data are forwarded...
-
Dr Ichiro Adachi (KEK) · 03/09/2007, 08:00 · Software components, tools and databases · poster
The Belle experiment has been operational since 1999 and we have processed more than 700 fb^-1 of data so far. To cope with ever-increasing data, complete automation of the event processing is one of the most critical issues. In addition, unified management of the processing jobs and the processed data files to be analyzed is very important, especially to deal with ~400K data files amounting...
-
Mr Philip DeMar (FERMILAB) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
Advances in wide area network service offerings, coupled with comparable developments in local area network technology, have enabled many HEP sites to keep their offsite network bandwidth ahead of demand. For most sites, the more difficult and costly aspect of increasing wide area network capacity is the local loop, which connects the facility LAN to the wide area service provider(s)...
-
Dr Andreas Heiss (Forschungszentrum Karlsruhe) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
Within the Worldwide LHC Computing Grid (WLCG), a Tier-1 centre like the German GridKa computing facility has to provide significant CPU and storage resources as well as several Grid services with a high level of quality. GridKa currently supports all four LHC experiments, ALICE, ATLAS, CMS and LHCb, as well as four non-LHC high energy physics experiments, and is about to significantly...
-
Dr David Lawrence (Jefferson Lab) · 03/09/2007, 08:00 · Software components, tools and databases · poster
The C++ reconstruction framework JANA has been written to support the next generation of Nuclear Physics experiments at Jefferson Lab in anticipation of the 12 GeV upgrade. The JANA framework was designed to allow multi-threaded event processing with a minimal impact on developers of reconstruction software. As we enter the multi-core (and soon many-core) era, thread-enabled code will...
-
Dr Stefan Roiser (CERN) · 03/09/2007, 08:00 · Software components, tools and databases · poster
The Software Process and Infrastructure project (SPI) of the LCG Applications Area (AA) is responsible for a set of services for software build, software packaging, software distribution, communication and quality assurance. Recently a new tool has been developed in SPI for the automatic configuration and build of the LCG AA software stack which is used for nightly builds. In this talk...
-
Dr Markus Frank (CERN) · 03/09/2007, 08:00 · Online Computing · poster
The High Level Trigger and Data Acquisition system selects about 2 kHz of events out of the 40 MHz of beam crossings. The selected events are sent to permanent storage for subsequent analysis. In order to ensure the quality of the collected data, identify possible malfunctions of the detector and perform calibration and alignment checks, a small fraction of the accepted events is...
-
Prof. Manuel Delfino Reznicek (Port d'Informació Científica (PIC)) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
Small files pose performance issues for Mass Storage Systems, particularly those using magnetic tape. The ViVo project reported at CHEP06 solved some of these problems by using Virtual Volumes based on ISO images containing the small files, and only storing and retrieving these images from the MSS. Retrieval was handled using Unix automounters, requiring deployment of ISO servers with a...
-
Eric Grancher (CERN) · 03/09/2007, 08:00 · Software components, tools and databases · poster
Database applications increasingly demand higher performance. This is especially true in the context of the LHC accelerator, LHC experiments, and LHC Computing Grid projects at CERN. Oracle RAC (Real Application Cluster) is a cluster solution which allows a database to be served by several nodes, and is a technology that is being exploited successfully at CERN and at LCG Tier1 sites...
-
Ms Geraldine Conti (EPFL) · 03/09/2007, 08:00 · Software components, tools and databases · poster
The LHCb warm magnet has been designed to provide an integrated field of 4 Tm for tracks coming from the primary vertex. To ensure good momentum resolution of a few per mil, an accurate description of the magnetic field map is needed. This is achieved by combining the information from a TOSCA-based simulation and data from measurements. The paper presents the fit method applied to...
-
Dr Sebastien Binet (LBNL) · 03/09/2007, 08:00 · Software components, tools and databases · poster
LHC experiments are entering a phase where optimization in view of data taking and improvements in robustness are of major importance. Any reduction in event data size can bring very significant savings in the amount of hardware (disk and tape in particular) needed to process data. Another area of concern and potential major gains is reducing the memory size and I/O bandwidth...
-
Miguel Coelho Dos Santos (CERN) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
We present our design, development and deployment of a portable monitoring system for the CERN Archival and Storage System (Castor) based on its existing internal database infrastructure and deployment architecture. This new monitoring architecture is seen as an important requirement for future development and support. Castor is now deployed at several sites which use...
-
Mr Martin Bly (STFC/RAL) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
The GridPP Tier-1 Centre at RAL is one of 10 Tier-1 centres worldwide preparing for the start of LHC data taking in late 2007. The RAL Tier-1 is expected to provide a reliable grid-based computing service running thousands of simultaneous batch jobs with access to a multi-petabyte CASTOR-managed disk storage pool and tape silo, and will support the ATLAS, CMS and LHCb experiments as well...
-
Dr Stefano Mersi (INFN & Università di Firenze) · 03/09/2007, 08:00 · Online Computing · poster
The CMS silicon strip tracker comprises a sensitive area of >200 m^2 and 10M readout channels. Its data acquisition system is based around a custom analogue front-end ASIC, an analogue optical link system and an off-detector VME board that performs digitization, zero-suppression and data formatting. The data acquisition system uses the CMS online software framework, known as XDAQ, to...
-
Dr Oliver Keeble (CERN) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
We describe an approach to maintaining a large integrated software distribution, the gLite middleware. We explain why we have moved away from the concept of regular releases of the entire distribution, favoring instead a multi-speed approach where components can evolve at their own pace. An overview of our implementation of such a release process is given, explaining the full life cycle...
-
Dr Marc Dobson (CERN) · 03/09/2007, 08:00 · Online Computing · poster
The ATLAS experiment will use of order three thousand nodes for the online processing farms. The administration of such a large cluster is a challenge, especially due to the high impact of any down time. The ability to quickly and remotely turn machines on and off, especially following a power cut, and the ability to monitor the hardware health whether the machine is on or off, are some of the...
-
Dr Charles Leggett (LAWRENCE BERKELEY NATIONAL LABORATORY) · 03/09/2007, 08:00 · Software components, tools and databases · poster
Runtime memory usage in experiments has grown enormously in recent years, especially in large experiments like ATLAS. However, it is difficult to break down total memory usage, as indicated by OS-level tools, to identify the precise users and abusers. Without a detailed knowledge of memory footprints, monitoring memory growth as an experiment evolves in order to control ballooning...
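In Python one gets this kind of per-allocation-site breakdown from tracemalloc; a generic sketch (not the ATLAS tooling, which targets C++ frameworks):

```python
# tracemalloc attributes heap usage to the code lines that allocated it,
# giving the per-component breakdown that OS-level totals cannot.
import tracemalloc

tracemalloc.start()

big_table = [list(range(1000)) for _ in range(1000)]   # deliberate memory hog
labels = {i: f"row-{i}" for i in range(10_000)}        # a smaller consumer

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:5]:
    print(stat)    # file:line, allocated block count, total size
```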
-
Mr Sebastian Lopienski (CERN) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
Nowadays, IT departments provide, and people use, computing services of an increasingly heterogeneous nature. There is thus a growing need for a status display that groups these different services and reports status and availability in a uniform way. The Service Level Status (SLS) system addresses these needs by providing a web-based display that dynamically shows availability, basic...
-
Dr Ilya Narsky (California Institute of Technology) · 03/09/2007, 08:00 · Software components, tools and databases · poster
SPR implements various tools for supervised learning such as boosting (three flavors), bagging, random forest, neural networks, decision trees, a bump hunter (PRIM), a multi-class learner, logistic regression, linear and quadratic discriminant analysis, and others. Presented at CHEP 2006, SPR has since been extended with several important features. The package has been stripped of CLHEP...
-
Dr Rene Brun (CERN) · 03/09/2007, 08:00 · Software components, tools and databases · poster
A poster (two A0 pages) shows the main software systems used in HEP in the period 1970-2010, from their conception to their death. Graphics bands are used to indicate the relative importance of each system or tool in the following categories: machines and operating systems; storage systems and access libraries; networking and communication software; compiled languages; code...
-
Mr Andreas Unterkircher (CERN) · 03/09/2007, 08:00 · Software components, tools and databases · poster
We describe the methodology for testing gLite releases. Starting from the needs given by the EGEE software management process, we illustrate our design choices for testing gLite. For certifying patches, different test scenarios have to be considered: regular regression tests, stress tests and manual verification of bug fixes. Conflicts arise if these tests are all carried out at the same...
-
Mr Ian Gable (University of Victoria) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
The ATLAS Canada computing model consists of a Tier-1 computing centre located at the TRIUMF Laboratory in Vancouver, Canada, and two distributed Tier-2 computing centres: one in Eastern Canada and one in Western Canada. Each distributed Tier-2 computing centre is made up of a group of universities. To meet the network requirements of each institution, HEPnet Canada and CANARIE...
-
Alessandro De Salvo (Istituto Nazionale di Fisica Nucleare Sezione di Roma 1) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
The huge amount of resources available in the Grids, and the necessity to have the most up-to-date experiment software deployed at all the sites within a few hours, have highlighted the need for automatic installation systems for the LHC experiments. In this paper we describe the ATLAS system for experiment software installation in LCG/EGEE, based on the Lightweight Job Submission Framework...
-
Ms Elizabeth Sexton-Kennedy (FNAL) · 03/09/2007, 08:00 · Online Computing · poster
With the turn-on of the LHC, the CMS DAQ system expects to log petabytes of experiment data in the coming years. The CMS Storage Manager system is part of the high bandwidth event data handling pipeline of the CMS high level DAQ. It has two primary functions. Each Storage Manager instance collects data from the sub-farm, or DAQ slice, of the Event Filter farm it has been assigned...
-
Lorenzo Masetti (CERN) · 03/09/2007, 08:00 · Online Computing · poster
The Tracker Control System (TCS) is a distributed control software to operate 2000 power supplies for the silicon modules of the CMS Tracker and monitor its environmental sensors. TCS must thus be able to handle 10^4 power supply parameters, 10^3 environmental probes from the Programmable Logic Controllers of the Tracker Safety System (TSS), 10^5 parameters read via DAQ from the...
-
Prof. Gang Chen (IHEP, China) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
The Beijing Electron Spectrometer (BESIII) experiment will produce 5 PB of data in the next five years. Grid computing is used to meet this challenge. This paper introduces the BES grid computing model and specific technologies, including automatic data replication, fine-grained job scheduling and so on.
-
Emil Obreshkov (INRNE/CERN) · 03/09/2007, 08:00 · Software components, tools and databases · poster
The ATLAS offline software comprises over 1000 software packages organized into 10 projects that are built on a variety of compiler and operating system combinations every night. File-level parallelism, package-level parallelism and multi-core build servers are used to perform simultaneous builds of 6 platforms that are merged into a single installation on AFS. This in turn is used to...
-
Go Iwai (KEK/CRC) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
The Belle Experiment is an ongoing experiment with an asymmetric electron-positron collider at KEK and already has a few PB of data in total, including hundreds of TB of DST (Data Summary Tape) and MC data. It is too difficult to export the existing data to LCG (LHC Computing Grid) physically because of the huge amount of data. We set up an SRB (Storage Resource Broker) server to access them by...
-
Mr Sigve Haug (LHEP University of Bern) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
Since 2005 the Swiss ATLAS Grid has been in production. It comprises four clusters at one Tier 2 and two Tier 3 sites. About 800 heterogeneous cores and 60 TB of disk space are connected by a dark fibre network operated at 10 gigabits per second. Three different operating systems are deployed. The Tier 2 cluster runs both LCG and NorduGrid middleware (ARC), while the Tier 3 clusters run only the...
-
Dr Tony Cass (CERN) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
CERN, as other sites, has been preparing computing services for the arrival of LHC data for some time, more than 11 years if everything started at the First LHC Computing Workshop, held in Padova in June 1996. With LHC data taking now just around the corner, this presentation takes a look back at preparations at CERN and considers some of the key choices made along the way. Which were...
-
Dan Nae (California Institute of Technology (CALTECH)) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
In this paper we present the design, implementation and evolution of the mission-oriented USLHCNet for HEP research. The design philosophy behind our network is to help meet the data-intensive computing challenges of the next generation of particle physics experiments with a comprehensive, network-focused approach. Instead of treating the network as a static, unchanging and unmanaged set of...
-
Dr Patricia Conde Muíño (LIP-Lisbon) · 03/09/2007, 08:00 · Software components, tools and databases · poster
With the PHEASANT project, a DSVQL (domain-specific visual query language) was proposed for the purpose of providing a tool that could increase users' productivity when producing query code for data analysis. The previous project aimed at proving the feasibility of the concept and methodology by introducing the concept of DSLs. We are now concentrating on implementation issues in order to deploy a final tool. The concept of domain...
-
Konstantinos Bachas (Aristotle University of Thessaloniki) · 03/09/2007, 08:00 · Software components, tools and databases · poster
The measurement of the muon energy deposition in the calorimeters is an integral part of muon identification, track isolation and correction for catastrophic muon energy losses, which are the prerequisites to the ultimate goal of refitting the muon track using calorimeter information as well. To this end, an accurate energy loss measurement method in the calorimeters is developed which...
-
Kenneth Bloom (University of Nebraska-Lincoln) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
The CMS computing model relies heavily on the use of "Tier-2" computing centers. At LHC startup, the typical Tier-2 center will have 1 MSpecInt2K of CPU resources, 200 TB of disk for data storage, and a WAN connection of at least 1 Gbit/s. These centers will be the primary sites for the production of large-scale simulation samples and for the hosting of experiment data for user...
-
Mark Donszelmann (SLAC) · 03/09/2007, 08:00 · Software components, tools and databases · poster
Maven is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting and documentation from a single XML file which declaratively specifies the project's properties. In short, Maven replaces Make or Ant, adds the handling of dependencies and generates documentation and a project website. Maven...
-
Dr Ulrich Schwickerath (CERN) · 03/09/2007, 08:00 · Software components, tools and databases · poster
LSF 7, the latest version of Platform's batch workload management system, addresses many issues which limited the ability of LSF 6.1 to support large scale batch farms, such as the lxbatch service at CERN. In this paper we will present the status of the evaluation and deployment of LSF 7 at CERN, including issues concerning the integration of LSF 7 with the gLite grid...
-
Mr Colin Morey (Manchester University) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
Cfengine is a middle to high level policy language and autonomous agent for building expert systems to administer and configure large computer clusters. It is ideal for large-scale cluster management and is highly portable across varying computer platforms, allowing the management of multiple architectures and node types within the same farm. As well as being a highly capable...
-
Mr Andrey Bobyshev (FERMILAB) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
At Fermilab, there is a long history of utilizing network flow data collected from site routers for various analyses, including network performance characterization, anomalous traffic detection, investigation of computer security incidents, network traffic statistics and others. Fermilab's flow analysis model is currently built as a distributed system that collects flow data from the site...
-
Dr David Alexander (Tech-X Corporation) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
Nuclear and high-energy physicists routinely execute data processing and data analysis jobs on a Grid and need to be able to monitor their jobs' execution at an arbitrary site at any time. Existing Grid monitoring tools provide abundant information about the whole system, but are geared towards production jobs and well suited for Grid administrators, while the information tailored towards...
-
Prof. Gordon Watts (University of Washington) · 03/09/2007, 08:00 · Software components, tools and databases · poster
ROOT is firmly based on C++ and makes use of many of its features, templates and multiple inheritance in particular. Many modern languages like Java, C# and Python are missing these features or have radically different implementations. These programming languages, however, have many advantages to offer scientists, including improved programming paradigms, development...
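One existing route of this kind is PyROOT, which ships with ROOT and exposes its C++ classes to Python (shown below as an illustration; the poster's own bridges for other languages may work differently):

```python
# Driving ROOT's C++ classes from Python via PyROOT. Requires a ROOT
# installation with Python bindings enabled.
import ROOT

h = ROOT.TH1F("h", "Gaussian fill;x;entries", 100, -4.0, 4.0)
rnd = ROOT.TRandom3(12345)
for _ in range(10_000):
    h.Fill(rnd.Gaus(0.0, 1.0))

print("mean =", h.GetMean(), "rms =", h.GetRMS())
h.Fit("gaus", "Q")   # quiet fit with ROOT's C++ fitter, driven from Python
```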
-
Hegoi Garitaonandia Elejabarrieta (Instituto de Fisica de Altas Energias (IFAE)) · 03/09/2007, 08:00 · Online Computing · poster
The ATLAS Trigger & Data Acquisition System has been designed to use more than 2000 CPUs. During the current development stage it is crucial to test the system on a number of CPUs of similar scale. A dedicated farm of this size is difficult to find, and can only be made available for short periods. On the other hand, many large farms have recently become available as part of computing...
-
Prof. Harvey Newman (CALTECH) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
The main objective of the VINCI project is to enable data intensive applications to efficiently use and coordinate shared, hybrid network resources, to improve the performance and throughput of global-scale grid systems, such as those used in high energy physics. VINCI uses a set of agent-based services implemented in the MonALISA framework to enable the efficient use of network resources,...
-
Dr Tony Chan (BROOKHAVEN NATIONAL LAB) · 03/09/2007, 08:00 · Computer facilities, production grids and networking · poster
The Brookhaven Computing Facility provides for the computing needs of the RHIC experiments, supports the U.S. Tier 1 center for the ATLAS experiment at the LHC and provides computing support for the LSST experiment. The multi-purpose mission of the facility requires a complex computing infrastructure to meet different requirements and can result in duplication of services with a large...
-
Andrea Dotti (INFN) · 03/09/2007, 08:00 · Software components, tools and databases · poster
During the ATLAS detector commissioning phase, installed readout electronics must pass performance standards tests. The resulting data must be analyzed to ensure correct operation. For the Tile Calorimeter, developers plug their code into a specific framework for physics data-processing. Collaboration members, taking shifts on commissioning work, interpret the results, in thousands of...