-
Dr Simon Patton (LAWRENCE BERKELEY NATIONAL LABORATORY), 03/09/2007, 08:00
The Unified Software Development Process (USDP) defines a process for developing software from the initial inception to the final delivery. The process creates a number of different models of the final deliverable: the use case, analysis, design, deployment, implementation and test models. These models are developed using an iterative approach that breaks down into four main phases...
-
Dr Sven Hermann (Forschungszentrum Karlsruhe), 03/09/2007, 08:00
Forschungszentrum Karlsruhe is one of the largest science and engineering research institutions in Europe. The resource centre GridKa, as part of this science centre, is building up a Tier 1 centre for the LHC project. Embedded in the European grid initiative EGEE, GridKa also manages the ROC (regional operation centre) for the German/Swiss region. A ROC is responsible for regional...
-
Alasdair Earl (CERN), 03/09/2007, 08:00
The RPMVerify package is a lightweight intrusion detection system (IDS) used at CERN as part of the wider security infrastructure. The package provides information about potentially nefarious changes to software which has been deployed using the RedHat Package Management system (RPM). The purpose of the RPMVerify project has been to produce a system which makes use of the...
-
Mr Shahryar Khan (Stanford Linear Accelerator Center), 03/09/2007, 08:00
The future of computing in High Energy Physics (HEP) applications depends on both the network and Grid infrastructure. Some South Asian countries such as India and Pakistan are making progress in this direction, not only by building Grid clusters but also by improving their network infrastructure. However, to facilitate the use of these resources, they need to overcome the issues of...
-
Mr Andrey Tsyganov (Moscow Physical Engineering Inst. (MePhI)), 03/09/2007, 08:00
CERN, the European Laboratory for Particle Physics, located in Geneva, Switzerland, is currently building the LHC, a 27 km particle accelerator. The equipment life-cycle management of this project is provided by the Engineering and Equipment Data Management System (EDMS) Service. Using Oracle, it supports the management and follow-up of different kinds of documentation through the whole...
-
Dr Nick Garfield (CERN), 03/09/2007, 08:00
As computing systems become more distributed, as networks increase in throughput, and as resources become ever more dispersed over multiple administrative domains, even continents, there is a greater need to know the performance limits of the underlying protocols which form the foundations of complex computing and networking architectures. One such protocol is the Network...
-
Mr Ulrich Fuchs (CERN & Ludwig-Maximilians-Universität München), 03/09/2007, 08:00
ALICE is a dedicated heavy-ion detector to exploit the physics potential of nucleus-nucleus (lead-lead) interactions at LHC energies. The aim is to study the physics of strongly interacting matter at extreme energy densities, where the formation of a new phase of matter, the quark-gluon plasma, is expected. Running in heavy-ion mode, the data rate from event building to permanent...
-
Mr Belmiro Antonio Venda Pinto (Faculdade de Ciencias - Universidade de Lisboa), 03/09/2007, 08:00
The ATLAS experiment uses a complex trigger strategy to achieve the necessary Event Filter output rate, making it possible to optimize the storage and processing needs of these data. These needs are described in the ATLAS Computing Model, which embraces Grid concepts. The output coming from the Event Filter will consist of four main streams: the physics stream, the express stream,...
-
Dr David Malon (Argonne National Laboratory), 03/09/2007, 08:00
In the ATLAS event store, files are sometimes "an inconvenient truth." From the point of view of the ATLAS distributed data management system, files are too small: datasets are the units of interest. From the point of view of the ATLAS event store architecture, files are simply a physical clustering optimization: the units of interest are event collections, sets of events that...
-
Jos Van Wezel (Forschungszentrum Karlsruhe (FZK/GridKa)), 03/09/2007, 08:00
The disk pool managers in use in the HEP community focus on managing disk storage but at the same time rely on a mass storage (i.e. tape-based) system, either to offload data that has not been touched for a long time or for archival purposes. Traditionally, tape handling systems like HPSS by IBM or Enstore, developed at FNAL, are used because they offer specialized features to overcome the...
-
Kaushik De (UT-Arlington), 03/09/2007, 08:00
During 2006-07, the ATLAS experiment at the Large Hadron Collider launched a massive Monte Carlo simulation production exercise to commission software and computing systems in preparation for data in 2007. In this talk, we will describe the goals and objectives of this exercise, the software systems used, and the tiered computing infrastructure deployed worldwide. More than half a petabyte...
-
Dr Monica Verducci (European Organization for Nuclear Research (CERN)), 03/09/2007, 08:00
One of the most challenging tasks faced by the LHC experiments will be the storage of "non-event data" produced by calibration and alignment stream processes in the Conditions Database. For the handling of this complex experiment conditions data, the LCG Conditions Database Project has implemented COOL, a new software product designed to minimise the duplication of effort by developing a...
-
Mr Brice Copy (CERN), 03/09/2007, 08:00
The maintenance and operation of the ATLAS detector will involve thousands of contributors from 170 physics institutes. Planning and coordinating the action of ATLAS members, ensuring their expertise is properly leveraged, and ensuring that no parts of the detector are under- or overstaffed will be a challenging task. The ATLAS Maintenance and Operation (ATLAS M&O) application offers a fluent web...
-
Nils Gollub (CERN), Nils Gollub (University of Uppsala), 03/09/2007, 08:00
The ATLAS Tile Calorimeter detector (TileCal) is presently involved in an intense phase of commissioning with cosmic rays and subsystem integration. Various monitoring programs have been developed at different levels of the data flow to tune the set-up of the detector running conditions and to provide a fast and reliable assessment of the data quality. The presentation will focus on the...
-
Dr Amir Farbin (European Organization for Nuclear Research (CERN)), 03/09/2007, 08:00
The EventView Analysis Framework is currently the basis for much of the analysis software employed by various ATLAS physics groups (for example the Top, SUSY, Higgs, and Exotics working groups). In ATLAS's central data preparation, this framework provides an assessment of data quality and the first analysis of physics data for the whole collaboration. An EventView is a self-consistent...
-
Mr Bruno Hoeft (Forschungszentrum Karlsruhe), 03/09/2007, 08:00
While many fields relevant to Grid security are already covered by existing working groups, their remit rarely goes beyond the scope of the Grid infrastructure itself. However, security issues pertaining to the internal set-up of compute centres have at least as much impact on Grid security. Thus, this talk will briefly present the EU ISSeG project (Integrated Site Security for Grids)....
-
Dr Andreas Gellrich (DESY), 03/09/2007, 08:00
As a partner of the international EGEE project in the German/Switzerland federation (DECH) and as a member of the national D-GRID initiative, DESY operates a large-scale production-grade Grid infrastructure with hundreds of CPU cores and hundreds of terabytes of disk storage. As a Tier-2/3 centre for ATLAS and CMS, DESY plays a leading role in Grid computing in Germany. DESY strongly supports...
-
Artur Barczyk (Caltech), 03/09/2007, 08:00
Most of today's data networks are a mixture of packet-switched and circuit-switched technologies, with Ethernet/IP on the campus and in data centers, and SONET/SDH over the wide area infrastructure. SONET/SDH allows creating dedicated circuits with bandwidth guarantees along the path, suitable for the use of aggressive transport protocols optimised for fast data transfer and without...
-
Dr Bockjoo Kim (University of Florida), 03/09/2007, 08:00
The CMS experiment will begin data collection at the end of 2007 and has been releasing its software within a new framework since the end of 2005. The CMS experiment employs tiered distributed computing based on two Grids, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). There are approximately 37 tiered CMS centers around the world. The number of CMS software releases was three...
-
Dirk Hufnagel (for the CMS Offline/Computing group), 03/09/2007, 08:00
With the upcoming LHC engineering run in November, the CMS Tier0 computing effort will be one of the most important activities of the experiment. The CMS Tier0 is responsible for all data handling and processing of real data events in the first period of their life, from when the data is written by the DAQ system to a disk buffer at the CMS experiment site to when it is transferred...
-
Dr Carl Timmer (TJNAF), 03/09/2007, 08:00
cMsg is software used to send and receive messages in the Jefferson Lab online and runcontrol systems. It was created to replace the several IPC software packages in use with a single API. cMsg is asynchronous in nature, running a callback for each message received. However, it also includes synchronous routines for convenience. At the framework level, cMsg is a thin API layer in...
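The asynchronous, callback-per-message pattern described in this abstract can be pictured with a small generic publish/subscribe toy in Python. This is a hedged sketch of the pattern only, not the real cMsg API; all names here (Bus, subscribe, send) are invented.

    import queue
    import threading
    import time

    class Bus:
        """Toy asynchronous message bus: send() returns immediately,
        and a background thread runs one callback per received message."""
        def __init__(self):
            self.subs = {}                    # subject -> list of callbacks
            self.q = queue.Queue()
            threading.Thread(target=self._dispatch, daemon=True).start()

        def subscribe(self, subject, callback):
            self.subs.setdefault(subject, []).append(callback)

        def send(self, subject, text):
            self.q.put((subject, text))       # asynchronous: no waiting

        def _dispatch(self):
            while True:
                subject, text = self.q.get()
                for cb in self.subs.get(subject, []):
                    cb(text)                  # callback fired per message

    bus = Bus()
    bus.subscribe("runcontrol/status", lambda msg: print("got:", msg))
    bus.send("runcontrol/status", "running")
    time.sleep(0.1)                           # let the dispatcher thread run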
-
Rosy Nikolaidou (DAPNIA), 03/09/2007, 08:00
The Muon Spectrometer of the ATLAS experiment is made of a large toroidal magnet, arrays of high-pressure drift tubes for precise tracking, and dedicated fast detectors for the first-level trigger. All the detectors in the barrel toroid have been installed and commissioning has started with cosmic rays. These detectors are arranged in three concentric rings and the total area is about...
-
Dr Ivan D. Reid (School of Design and Engineering - Brunel University, UK), 03/09/2007, 08:00
Goodness-of-fit statistics measure the compatibility of random samples against some theoretical probability distribution function. The classical one-dimensional Kolmogorov-Smirnov test is a non-parametric statistic for comparing two empirical distributions, which defines the largest absolute difference between the two cumulative probability distribution functions as a measure of...
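As a concrete illustration of the statistic described above, here is a minimal Python sketch of the two-sample Kolmogorov-Smirnov distance, i.e. the largest absolute difference between the two empirical CDFs. The sample data are invented.

    import numpy as np

    def ks_statistic(sample1, sample2):
        """Largest absolute difference between the two empirical CDFs."""
        pooled = np.sort(np.concatenate([sample1, sample2]))
        cdf1 = np.searchsorted(np.sort(sample1), pooled, side="right") / len(sample1)
        cdf2 = np.searchsorted(np.sort(sample2), pooled, side="right") / len(sample2)
        return np.max(np.abs(cdf1 - cdf2))

    rng = np.random.default_rng(42)
    d = ks_statistic(rng.normal(size=1000), rng.normal(0.1, 1.0, size=1000))
    print("KS distance D =", d)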
-
Mr Georges Kohnen (Université de Mons-Hainaut), 03/09/2007, 08:00
The IceCube neutrino telescope is a cubic kilometer Cherenkov detector currently under construction in the deep ice at the geographic South Pole. As of 2007, it has reached more than 25% of its final instrumented volume and is actively taking data. We will briefly describe the design and current status, as well as the physics goals of the detector. The main focus will, however, be on the...
-
Mr Martin Gasthuber (Deutsches Elektronen Synchrotron (DESY)), 03/09/2007, 08:00
Based on today's understanding of LHC-scale analysis requirements and the clear dominance of fast and high-capacity random access storage, this talk will present a generic architecture for a national facility based on existing components from various computing domains. The following key areas will be discussed in detail and solutions will be proposed, building the overall...
-
Craig Dowell (Univ. of Washington), 03/09/2007, 08:00
The ATLAS Muon Spectrometer is constructed out of 1200 drift tube chambers with a total area of nearly 7000 square meters. It must determine muon track positions to a very high precision despite its large size, necessitating complex real-time alignment measurements. Each chamber, as well as approximately 50 alignment reference bars in the endcap region, is equipped with CCD cameras,...
-
Marco Clemencic (European Organization for Nuclear Research (CERN)), 03/09/2007, 08:00
The COOL software has been chosen by both ATLAS and LHCb as the base of their conditions database infrastructure. The main focus of the COOL project in 2007 will be the deployment, testing and validation of Oracle-based COOL database services at Tier0 and Tier1. In this context, COOL software development will concentrate on service-related issues, and in particular on the optimization...
-
Dr Dantong Yu (Brookhaven National Laboratory), Dr Dimitrios Katramatos (Brookhaven National Laboratory), Dr Shawn McKee (University of Michigan), 03/09/2007, 08:00
Supporting reliable, predictable, and efficient global movement of data in high-energy physics distributed computing environments requires the capability to provide guaranteed bandwidth to selected data flows and to schedule network usage appropriately. The DOE-funded TeraPaths project at Brookhaven National Laboratory (BNL), currently in its second year, is developing methods and tools that...
-
Dr Hans G. Essel (GSI), 03/09/2007, 08:00
(European FP6 program "HadronPhysics", JRA1 "FutureDAQ", contract number RII3-CT-2004-506078) For the new experiments at FAIR, like CBM, new concepts of data acquisition systems have to be developed, such as the distribution of self-triggered, time-stamped data streams over high-performance networks for event building. The DAQ backbone DABC is designed for FAIR detector tests, readout...
-
Dr Giuseppe Della Ricca (Univ. of Trieste and INFN), 03/09/2007, 08:00
The electromagnetic calorimeter of the Compact Muon Solenoid experiment will play a central role in the achievement of the full physics performance of the detector at the LHC. The detector performance will be monitored using applications based on the CMS Data Quality Monitoring (DQM) framework and running on the High-Level Trigger Farm as well as on local DAQ systems. The monitorable...
-
Dr Doris Ressmann (Forschungszentrum Karlsruhe), 03/09/2007, 08:00
The grid era brings new and steeply rising demands in data storage. The GridKa project at Forschungszentrum Karlsruhe delivers its share of the computation and storage requirements of all LHC and four other HEP experiments. Access throughput from the worker nodes to the storage can be as high as 2 GB/s. At the same time a continuous throughput in the order of 300-400 MB/s into and...
-
Dr Niko Neufeld (CERN), 03/09/2007, 08:00
Events selected by LHCb's online event filtering farm will be assembled into raw data files of about 2 GB. Under nominal conditions about 2 such files will be produced per minute. These files must be copied to tape storage and made available online to various calibration and monitoring tasks. The life cycle and state transitions of each file are managed by means of a dedicated data-...
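The life-cycle management mentioned here can be pictured as a small state machine in which only the listed transitions are legal. The sketch below is a generic illustration with invented state names, not LHCb's actual data mover.

    # Allowed transitions for one raw-data file (state names invented).
    ALLOWED = {
        "open":      {"closed"},
        "closed":    {"migrating"},
        "migrating": {"on_tape"},
        "on_tape":   {"verified"},
    }

    class RawFile:
        def __init__(self, name):
            self.name, self.state = name, "open"

        def advance(self, new_state):
            if new_state not in ALLOWED.get(self.state, set()):
                raise ValueError(f"{self.name}: {self.state} -> {new_state} not allowed")
            self.state = new_state

    f = RawFile("run001_file0001.raw")
    for s in ("closed", "migrating", "on_tape", "verified"):
        f.advance(s)          # any out-of-order transition raises an error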
-
Dr Manuela Cirilli (University of Michigan), 03/09/2007, 08:00
The calibration of the 375000 ATLAS Monitored Drift Tubes will be a highly challenging task: a dedicated set of data will be extracted from the second level trigger of the experiment and streamed to three remote Tier-2 Calibration Centres. This presentation reviews the complex chain of databases envisaged to support the MDT Calibration and describes the current status of the...
-
Dr Wolfgang Waltenberger (Hephy Vienna), 03/09/2007, 08:00
A tool is presented that is capable of reading from and writing to several different file formats. Currently supported file formats are ROOT, HBook, HDF, XML, Sqlite3 and a few text file formats. A plugin mechanism decouples the file-format-specific "backends" from the main library. All data are internally represented as "heterogeneous hierarchic tuples"; no other data structure exists in...
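A hedged sketch of the two ideas in this abstract: a single in-memory hierarchic tuple representation (nested dicts/lists here) and format-specific backends registered through a plugin mechanism. The registry, decorator and writer interface below are invented for illustration and are not the tool's actual API.

    BACKENDS = {}

    def backend(fmt):
        """Decorator registering a writer class for one file format."""
        def register(cls):
            BACKENDS[fmt] = cls
            return cls
        return register

    @backend("xml")
    class XmlWriter:
        def write(self, tup, path):
            ...  # serialize the nested tuple as XML

    @backend("sqlite3")
    class SqliteWriter:
        def write(self, tup, path):
            ...  # flatten the tuple into tables

    def save(tup, path, fmt):
        BACKENDS[fmt]().write(tup, path)   # main library never sees formats

    # A "heterogeneous hierarchic tuple": values of mixed type, nested.
    event = {"run": 1234, "tracks": [{"pt": 3.1}, {"pt": 0.7}]}
    save(event, "out.xml", "xml")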
-
Ian Fisk (Fermi National Accelerator Laboratory (FNAL)), 03/09/2007, 08:00
CMS is preparing seven remote Tier-1 computing facilities to archive and serve experiment data. These centers represent the bulk of CMS's data serving capacity, a significant resource for reprocessing data, all of the simulation archiving capacity, and operational support for Tier-2 centers and analysis facilities. In this paper we present the progress on deploying the largest remote...
-
Irina Sourikova (BROOKHAVEN NATIONAL LABORATORY), 03/09/2007, 08:00
After seven years of running and collecting 2 petabytes of physics data, the PHENIX experiment at the Relativistic Heavy Ion Collider (RHIC) has gained a lot of experience with database management systems (DBMS). Serving all of the experiment's operations - data taking, production and analysis - databases provide 24/7 access to calibrations and book-keeping information for hundreds of...
-
Dr Iosif Legrand (CALTECH), Ramiro Voicu (CALTECH), 03/09/2007, 08:00
The efficient use of high-speed networks to transfer large data sets is an essential component for many scientific applications including CERN’s LCG experiments. We present an efficient data transfer application, Fast Data Transfer (FDT), and a distributed agent system (LISA) able to monitor, configure, control and globally coordinate complex, large scale data transfers. FDT is an...
-
Prof. Toby Burnett (University of Washington), 03/09/2007, 08:00
Applications often need many parameters defined for execution. A few can be handled on the command line, but this does not scale very well. I present a simple use of embedded Python that makes it easy to specify configuration data for applications, avoiding wired-in constants or elaborate parsing code that is difficult to justify for small or one-off applications. But the...
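A minimal sketch of the embedded-configuration idea, under the assumption that the parameters live in an ordinary Python file executed at startup, so no custom parser is needed. The file name and parameter names are invented.

    import runpy

    # Application defaults (names invented for illustration).
    defaults = {"n_events": 1000, "output_file": "out.root", "verbose": False}

    def load_config(path):
        """Execute a Python config file; its top-level names override defaults."""
        params = dict(defaults)
        user = runpy.run_path(path)          # run the file, get its globals
        params.update({k: v for k, v in user.items() if k in defaults})
        return params

    # config.py might contain only:  n_events = 50000
    # cfg = load_config("config.py")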
-
Dr Wenji Wu (FERMILAB), 03/09/2007, 08:00
The computing models for LHC experiments are globally distributed and grid-based. In such a computing model, the experiments’ data must be reliably and efficiently transferred from CERN to Tier-1 regional centers, processed, and distributed to other centers around the world. Obstacles to good network performance arise from many causes and can be a major impediment to the success of this...
-
Elisabetta Ronchieri (INFN CNAF), 03/09/2007, 08:00
People involved in modular projects need to improve the software build process, planning the correct execution order and detecting circular dependencies. The lack of suitable tools may cause delays in the development, deployment and maintenance of the software. Experience in such projects has shown that the arranged use of version control and build systems is not able to support the...
-
Mr Alexander Withers (Brookhaven National Laboratory), 03/09/2007, 08:00
The PostgreSQL database is a vital component of critical services at the RHIC/USATLAS Computing Facility, such as the Quill subsystem of the Condor Project and both PNFS and SRM within dCache. Current deployments are relatively unsophisticated, utilizing default configurations on small-scale commodity hardware. However, a substantial increase in projected growth has exposed deficiencies...
-
Dr Maria Grazia Pia (INFN Genova), 03/09/2007, 08:00
The Statistical Toolkit provides an extensive collection of algorithms for the comparison of two data samples: in addition to the chi-squared test, it includes all the tests based on the empirical distribution function documented in the literature for binned and unbinned distributions. Some of these tests, like the Kolmogorov-Smirnov one, are widely used; others, like the Anderson-Darling...
-
Dr Elliott Wolin (Jefferson Lab), 03/09/2007, 08:00
EVIO is a lightweight event I/O package consisting of an object-oriented layer on top of a pre-existing, highly efficient, C-based event I/O package. The latter, part of the JLab CODA package, has been in use in JLab high-speed DAQ systems for many years, but other underlying disk I/O packages could be substituted. The event format on disk, a packed tree-like hierarchy of banks, maps...
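The packed tree-like hierarchy of banks can be sketched as a recursive container: each bank carries a tag, a data type, and either raw data or child banks. The field names below are invented for illustration and do not reflect the actual EVIO layout.

    from dataclasses import dataclass, field
    from typing import List, Union

    @dataclass
    class Bank:
        tag: int
        dtype: str                                 # e.g. "int32" or "bank"
        payload: Union[List["Bank"], List[int]] = field(default_factory=list)

    # A toy event: a top-level bank containing a data bank and a nested bank.
    event = Bank(tag=1, dtype="bank", payload=[
        Bank(tag=10, dtype="int32", payload=[42, 43, 44]),   # hit data
        Bank(tag=20, dtype="bank", payload=[                 # nested bank
            Bank(tag=21, dtype="int32", payload=[7]),
        ]),
    ])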
-
Dr Jose Hernandez (CIEMAT), 03/09/2007, 08:00
CMS undertakes periodic computing challenges of increasing scale and complexity to test its computing model and Grid computing systems. The computing challenges are aimed at establishing a working distributed computing system that implements the CMS computing model based on an underlying multi-flavour grid infrastructure. CMS dataflows and data processing workflows are exercised during a...
-
Mr LUIS MARCH (Instituto de Fisica Corpuscular), 03/09/2007, 08:00
The Spanish ATLAS Tier-2 is geographically distributed between three HEP institutes: IFAE (Barcelona), IFIC (Valencia) and UAM (Madrid). Currently it has a computing power of about 400 kSI2k CPU, a disk storage capacity of 40 TB and a network bandwidth, connecting the three sites and the nearest Tier-1, of 1 Gb/s. These resources will increase with time in parallel to those of...
-
Tomas Kouba (Institute of Physics - Acad. of Sciences of the Czech Rep. (ASCR)), 03/09/2007, 08:00
Each Tier 2 site is monitored by various services from outside. The Prague T2 is monitored by SAM tests, GSTAT monitoring, RTM from RAL, regional Nagios monitoring and experiment specific tools. Besides that, we monitor our own site for hardware and software failures and middleware status. All these tools produce an output that must be regularly checked by site administrators. We...
-
Mr Alessandro Italiano (INFN-CNAF), 03/09/2007, 08:00
Every day, operations on a big computer centre farm like that of a Tier1 can be numerous. Opening or closing a host, changing the batch system configuration, replacing a disk, or reinstalling a host is just a short list of what can and will really happen. In these conditions, remembering all that has been done can be really difficult. Typically a big farm is managed by a team, so it...
-
Dr Chadwick Keith (Fermilab), 03/09/2007, 08:00
Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of...
-
Manuel Gallas (CERN), 03/09/2007, 08:00
Based on the ATLAS TileCal 2002 test-beam setup example, we present here the technical, software aspects of a possible solution to the problem of using two different simulation engines, like Geant4 and Fluka, with the common geometry and digitization code. The specific use case we discuss here, which is probably the most common one, is when the Geant4 application is already...
-
Prof. Wolfgang Kuehn (Univ. Giessen, II. Physikalisches Institut), 03/09/2007, 08:00
PANDA is a new universal detector for antiproton physics at the HESR facility at FAIR/GSI. The PANDA data acquisition system has to handle interaction rates of the order of 10^7/s and data rates of several 100 Gb/s. FPGA-based compute nodes with multi-Gb/s bandwidth capability using the ATCA architecture are designed to handle tasks such as event building, feature extraction and...
-
Kathy Pommes (CERN), 03/09/2007, 08:00
During the construction and commissioning phases of the ATLAS Collaboration, data related to the installation, testing and performance of the equipment are stored in distinctive databases. Each group acquires information and saves it in repositories placed on different servers, using diverse technologies. Both data modeling and terminology may vary among the storage areas. The...
-
Dr Sven Gabriel (Forschungszentrum Karlsruhe), 03/09/2007, 08:00
GridKa is the German Tier1 centre in the Worldwide LHC Computing Grid (WLCG). It is part of the Institut für Wissenschaftliches Rechnen (IWR) at the Forschungszentrum Karlsruhe (FZK). It started in 2002 as the successor of the "Regional Data and Computing Centre in Germany" (RDCCG). GridKa supports all four LHC experiments, ALICE, ATLAS, CMS and LHCb, and four non-LHC high energy physics...
-
Dr Christopher Jones (Cornell University), 03/09/2007, 08:00
When doing an HEP analysis, physicists typically repeat the same operations over and over while applying minor variations. Performing the operations, as well as remembering the changes made during each iteration, can be a very tedious process. HEPTrails is an analysis application written in Python and built on top of the University of Utah's VisTrails system, which provides workflow and full...
-
Dr Enrico Mazzoni (INFN Pisa), 03/09/2007, 08:00
We report on the tests performed in the INFN Pisa Computing Centre with some of the latest generation storage devices. Fibre Channel and NAS solutions have been tested in a realistic environment, both by participating in the worldwide CMS Service Challenges and by simulating analysis patterns with more than 500 jobs accessing data files concurrently. Both usage patterns have highlighted the...
-
Dr David Bailey (University of Manchester), Dr Robert Appleby (University of Manchester), 03/09/2007, 08:00
Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the...
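Why stream computing fits this problem: transporting many particles through a linear machine element is one small matrix applied to a whole array of phase-space vectors, an operation that maps directly onto stream/GPU hardware. A hedged numpy sketch using the standard thin-lens quadrupole and drift matrices; the focal length, drift length and particle distribution are invented.

    import numpy as np

    n = 1_000_000
    # One (x, x') pair per particle, all processed at once.
    state = np.random.default_rng(0).normal(0, 1e-3, size=(2, n))

    f = 10.0                                   # focal length in metres (invented)
    thin_quad = np.array([[1.0,      0.0],     # thin-lens quadrupole
                          [-1.0 / f, 1.0]])
    drift = np.array([[1.0, 2.0],              # 2 m drift space
                      [0.0, 1.0]])

    for turn in range(100):                    # many turns of a circular machine
        state = drift @ (thin_quad @ state)    # same operation on every particle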
-
Mr Enrico Fattibene (INFN-CNAF, Bologna, Italy), Mr Federico Pescarmona (INFN-Torino, Italy), Mr Giuseppe Misurelli (INFN-CNAF, Bologna, Italy), Mr Stefano Dal Pra (INFN-Padova, Italy), 03/09/2007, 08:00
In a production-quality Grid infrastructure, accounting data play a key role in understanding how the allocated resources have been used. The different types of Grid user have to be taken into account in order to provide different subsets of accounting data, based on the specific role covered by a Grid user. Grid end users, VO (Virtual Organization) managers, site administrators...
-
Dr Patricia Conde Muíño (LIP-Lisbon), 03/09/2007, 08:00
ATLAS is one of the four major LHC experiments, designed to cover a wide range of physics topics. In order to cope with a rate of 40 MHz and 25 interactions per bunch crossing, the ATLAS trigger system is divided into three levels. The first one (LVL1, hardware based) identifies signatures in 2 microseconds that are confirmed by the following trigger levels (software based)....
-
Antonio Amorim (Universidade de Lisboa (SIM and FCUL, Lisbon)), 03/09/2007, 08:00
The ATLAS conditions databases will be used to manage information of quite diverse nature and level of complexity. The infrastructure is being built using the LCG COOL infrastructure and provides a powerful information sharing gateway upon many different systems. The nature of the stored information ranges from temporal series of simple values to very complex objects describing...
-
Luca dell'Agnello (INFN-CNAF), 03/09/2007, 08:00
INFN CNAF is a multi-experiment computing center acting as a Tier-1 for LCG but also supporting other HEP and non-HEP experiments and Virtual Organizations. The CNAF Tier-1 is one of the main Resource Centers of the Grid Infrastructure (WLCG/EGEE); the preferred access method to the center is through WLCG/EGEE and INFNGRID middleware and services. Critical issues to be addressed to meet...
-
Prof. Manuel Delfino Reznicek (Port d'Informació Científica (PIC)), 03/09/2007, 08:00
A new data center for the MAGIC Gamma Ray Telescope, located at the Roque de los Muchachos observatory in the Canary Islands, Spain, has been deployed at the Port d'Informació Científica in Barcelona. The MAGIC Datacenter at PIC receives all the raw data produced by MAGIC, either via the network or on tape cartridges, and provides archiving, rapid processing for quality control and...
-
Tomasz Wlodek (Brookhaven National Laboratory), 03/09/2007, 08:00
Managing a large number of heterogeneous grid servers with different service requirements poses great challenges. We describe a cost-effective integrated operation framework which manages hardware inventory, monitors services, raises alarms with different severity levels and tracks the facility response to them. The system is based on open source components: RT (Request Tracking) tracks...
-
Jonathan Butterworth (University College London), 03/09/2007, 08:00
Accurate modelling of high energy hadron interactions is essential for the precision analysis of data from the LHC. It is therefore imperative that the predictions of Monte Carlos used to model this physics are tested against existing and future measurements. These measurements cover a wide variety of reactions, experimental observables and kinematic regions. To make this process more...
-
Antonio Amorim (Universidade de Lisboa (SIM and FCUL, Lisbon)), 03/09/2007, 08:00
The access of the ATLAS Trigger and Data Acquisition (TDAQ) system to the Conditions databases has strong requirements on reliability and performance. Several applications were developed to support the integration of Conditions database access with the online services in TDAQ, like the interface to the Information Services and to the TDAQ configuration. The DBStressor was developed to test and stress...
-
Vincenzo Chiochia (Universitat Zurich), 03/09/2007, 08:00
The CMS Pixel Detector is hosted inside the large solenoid generating a magnetic field of 4 T. The electron-hole pairs produced by particles traversing the pixel sensors will thus experience the Lorentz force due to the combined presence of magnetic and electric fields. This results in a systematic shift of the charge distribution. In order to achieve a high position resolution a...
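A back-of-the-envelope sketch of the shift in question: for E perpendicular to B the drifting charge moves at the Lorentz angle theta_L, commonly written tan(theta_L) = r_H * mu * B, so the collected charge is displaced by roughly t * tan(theta_L) across a sensor of thickness t. All numbers below except the 4 T field from the abstract are illustrative, not CMS values.

    import math

    B   = 4.0      # magnetic field, tesla (from the abstract)
    mu  = 0.1      # effective electron drift mobility, m^2/(V s), illustrative
    r_H = 1.1      # Hall factor, illustrative
    t   = 285e-6   # sensor thickness, metres, illustrative

    tan_theta_L = r_H * mu * B
    shift = t * tan_theta_L
    print(f"Lorentz angle ~ {math.degrees(math.atan(tan_theta_L)):.1f} deg, "
          f"charge shift ~ {shift * 1e6:.0f} um")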
-
Dr Robert Bainbridge (Imperial College London), 03/09/2007, 08:00
The CMS silicon strip tracker is unprecedented in terms of its size and complexity, providing a sensitive area of >200 m^2 and comprising 10M readout channels. Its data acquisition system is based around a custom analogue front-end ASIC, an analogue optical link system and an off-detector VME board that performs digitization, zero-suppression and data formatting. These data are forwarded...
-
Dr Ichiro Adachi (KEK), 03/09/2007, 08:00
The Belle experiment has been operational since 1999 and we have processed more than 700/fb of data so far. To cope with ever increasing data, complete automation of the event processing is one of the most critical issues. In addition, unified management of the processing jobs and the processed data files to be analyzed is very important, especially to deal with ~400K data files amounting...
-
Mr Philip DeMar (FERMILAB), 03/09/2007, 08:00
Advances in wide area network service offerings, coupled with comparable developments in local area network technology, have enabled many HEP sites to keep their offsite network bandwidth ahead of demand. For most sites, the more difficult and costly aspect of increasing wide area network capacity is the local loop, which connects the facility LAN to the wide area service provider(s)....
-
Dr Andreas Heiss (Forschungszentrum Karlsruhe), 03/09/2007, 08:00
Within the Worldwide LHC Computing Grid (WLCG), a Tier-1 centre like the German GridKa computing facility has to provide significant CPU and storage resources as well as several Grid services with a high level of quality. GridKa currently supports all four LHC experiments, ALICE, ATLAS, CMS and LHCb, as well as four non-LHC high energy physics experiments, and is about to significantly...
-
Dr David Lawrence (Jefferson Lab), 03/09/2007, 08:00
The C++ reconstruction framework JANA has been written to support the next generation of Nuclear Physics experiments at Jefferson Lab in anticipation of the 12 GeV upgrade. The JANA framework was designed to allow multi-threaded event processing with a minimal impact on developers of reconstruction software. As we enter the multi-core (and soon many-core) era, thread-enabled code will...
-
Dr Stefan Roiser (CERN), 03/09/2007, 08:00
The Software Process and Infrastructure project (SPI) of the LCG Applications Area (AA) is responsible for a set of services for software build, software packaging, software distribution, communication and quality assurance. Recently a new tool has been developed in SPI for the automatic configuration and build of the LCG AA software stack, which is used for nightly builds. In this talk...
-
Dr Markus Frank (CERN), 03/09/2007, 08:00
The High Level Trigger and Data Acquisition system selects about 2 kHz of events out of the 40 MHz of beam crossings. The selected events are sent to permanent storage for subsequent analysis. In order to ensure the quality of the collected data, identify possible malfunctions of the detector and perform calibration and alignment checks, a small fraction of the accepted events is...
-
Prof. Manuel Delfino Reznicek (Port d'Informació Científica (PIC)), 03/09/2007, 08:00
Small files pose performance issues for Mass Storage Systems, particularly those using magnetic tape. The ViVo project, reported at CHEP06, solved some of these problems by using Virtual Volumes based on ISO images containing the small files, and only storing and retrieving these images from the MSS. Retrieval was handled using Unix automounters, requiring deployment of ISO servers with a...
-
Eric Grancher (CERN), 03/09/2007, 08:00
Database applications increasingly demand higher performance. This is especially true in the context of the LHC accelerator, LHC experiments, and LHC Computing Grid projects at CERN. Oracle RAC (Real Application Cluster) is a cluster solution which allows a database to be served by several nodes, and is a technology that is being exploited successfully at CERN and at LCG Tier1 sites....
-
Ms Geraldine Conti (EPFL), 03/09/2007, 08:00
The LHCb warm magnet has been designed to provide an integrated field of 4 Tm for tracks coming from the primary vertex. To ensure good momentum resolution of a few per mil, an accurate description of the magnetic field map is needed. This is achieved by combining the information from a TOSCA-based simulation with data from measurements. The paper presents the fit method applied to...
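The combination of a simulated field shape with measured probe values can be illustrated as a simple least-squares fit of a scale and an offset; the actual LHCb fit method is more elaborate, and all data below are invented.

    import numpy as np

    z      = np.linspace(-2.0, 8.0, 50)                 # probe positions, m
    b_sim  = np.exp(-0.5 * ((z - 3.0) / 2.0) ** 2)      # simulated B_y shape, T
    b_meas = 1.02 * b_sim + np.random.default_rng(1).normal(0, 0.002, z.size)

    # Solve  b_meas ~ scale * b_sim + offset  in the least-squares sense.
    A = np.column_stack([b_sim, np.ones_like(b_sim)])
    (scale, offset), *_ = np.linalg.lstsq(A, b_meas, rcond=None)
    print(f"fitted scale = {scale:.4f}, offset = {offset:.5f} T")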
-
Dr Sebastien Binet (LBNL), 03/09/2007, 08:00
LHC experiments are entering a phase where optimization in view of data taking, as well as robustness improvements, are of major importance. Any reduction in event data size can bring very significant savings in the amount of hardware (disk and tape in particular) needed to process data. Another area of concern and potential major gains is reducing the memory size and I/O bandwidth...
-
Miguel Coelho Dos Santos (CERN), 03/09/2007, 08:00
We present our design, development and deployment of a portable monitoring system for the CERN Archival and Storage System (Castor), based on its existing internal database infrastructure and deployment architecture. This new monitoring architecture is seen as an important requirement for future development and support. Castor is now deployed at several sites which use...
-
Mr Martin Bly (STFC/RAL), 03/09/2007, 08:00
The GRIDPP Tier-1 Centre at RAL is one of 10 Tier-1 centres worldwide preparing for the start of LHC data taking in late 2007. The RAL Tier-1 is expected to provide a reliable grid-based computing service running thousands of simultaneous batch jobs with access to a multi-petabyte CASTOR-managed disk storage pool and tape silo, and will support the ATLAS, CMS and LHCb experiments as well...
-
Dr Stefano Mersi (INFN & Università di Firenze), 03/09/2007, 08:00
The CMS silicon strip tracker comprises a sensitive area of >200 m^2 and 10M readout channels. Its data acquisition system is based around a custom analogue front-end ASIC, an analogue optical link system and an off-detector VME board that performs digitization, zero-suppression and data formatting. The data acquisition system uses the CMS online software framework, known as XDAQ, to...
-
Dr Oliver Keeble (CERN), 03/09/2007, 08:00
We describe an approach to maintaining a large integrated software distribution, the gLite middleware. We describe why we have moved away from the concept of regular releases of the entire distribution, favoring instead a multispeed approach where components can evolve at their own pace. An overview of our implementation of such a release process is given, explaining the full life cycle...
-
Dr Marc Dobson (CERN), 03/09/2007, 08:00
The ATLAS experiment will use on the order of three thousand nodes for the online processing farms. The administration of such a large cluster is a challenge, especially due to the high impact of any downtime. The ability to quickly and remotely turn machines on or off, especially following a power cut, and the ability to monitor the hardware health whether the machine is on or off, are some of the...
-
Dr Charles Leggett (LAWRENCE BERKELEY NATIONAL LABORATORY), 03/09/2007, 08:00
Runtime memory usage in experiments has grown enormously in recent years, especially in large experiments like ATLAS. However, it is difficult to break down the total memory usage indicated by OS-level tools in order to identify the precise users and abusers. Without detailed knowledge of memory footprints, monitoring memory growth as an experiment evolves in order to control ballooning...
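As an illustration of the kind of per-component memory accounting the abstract argues for (the ATLAS tooling targets C++ and is not shown here), Python's standard tracemalloc module attributes live allocations to their source lines:

    import tracemalloc

    tracemalloc.start()

    # Stand-in "component" whose allocations we want to attribute.
    big_list = [float(i) for i in range(1_000_000)]

    snapshot = tracemalloc.take_snapshot()
    for stat in snapshot.statistics("lineno")[:3]:   # top allocation sites
        print(stat)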
-
Mr Sebastian Lopienski (CERN), 03/09/2007, 08:00
Nowadays, IT departments provide, and people use, computing services of an increasingly heterogeneous nature. There is thus a growing need for a status display that groups these different services and reports status and availability in a uniform way. The Service Level Status (SLS) system addresses these needs by providing a web-based display that dynamically shows availability, basic...
-
Dr Ilya Narsky (California Institute of Technology), 03/09/2007, 08:00
SPR implements various tools for supervised learning such as boosting (3 flavors), bagging, random forest, neural networks, decision trees, bump hunter (PRIM), multi-class learner, logistic regression, linear and quadratic discriminant analysis, and others. Presented at CHEP 2006, SPR has been extended with several important features since then. The package has been stripped of CLHEP...
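SPR itself is a C++ package; purely to illustrate the kind of supervised-learning workflow listed above (boosting, random forests, train/validate splits), here is the analogous loop in Python's scikit-learn on a synthetic dataset. Nothing below is SPR's API.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic two-class problem standing in for signal vs background.
    X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for clf in (AdaBoostClassifier(), RandomForestClassifier()):
        clf.fit(X_tr, y_tr)
        print(type(clf).__name__, "test accuracy:", clf.score(X_te, y_te))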
-
Dr Rene Brun (CERN), 03/09/2007, 08:00
A poster (two A0 pages) shows the main software systems used in HEP in the period 1970 -> 2010, from their conception to their death. Graphics bands are used to indicate the relative importance of each system or tool in the following categories: machines and operating systems; storage systems and access libraries; networking and communication software; compiled languages; code...
-
Mr Andreas Unterkircher (CERN), 03/09/2007, 08:00
We describe the methodology for testing gLite releases. Starting from the needs given by the EGEE software management process, we illustrate our design choices for testing gLite. For certifying patches, different test scenarios have to be considered: regular regression tests, stress tests and manual verification of bug fixes. Conflicts arise if these tests are all carried out at the same...
-
Mr Ian Gable (University of Victoria), 03/09/2007, 08:00
The ATLAS Canada computing model consists of a Tier-1 computing centre located at the TRIUMF Laboratory in Vancouver, Canada, and two distributed Tier-2 computing centres: one in Eastern Canada and one in Western Canada. Each distributed Tier-2 computing centre is made up of a group of universities. To meet the network requirements of each institution, HEPnet Canada and CANARIE...
-
Alessandro De Salvo (Istituto Nazionale di Fisica Nucleare Sezione di Roma 1), 03/09/2007, 08:00
The huge amount of resources available in the Grids, and the necessity to have the most up-to-date experiment software deployed at all sites within a few hours, have highlighted the need for automatic installation systems for the LHC experiments. In this paper we describe the ATLAS system for experiment software installation in LCG/EGEE, based on the Lightweight Job Submission Framework...
-
Ms Elizabeth Sexton-Kennedy (FNAL), 03/09/2007, 08:00
With the turn-on of the LHC, the CMS DAQ system expects to log petabytes of experiment data in the coming years. The CMS Storage Manager system is part of the high-bandwidth event data handling pipeline of the CMS high level DAQ. It has two primary functions. Each Storage Manager instance collects data from the sub-farm, or DAQ slice, of the Event Filter farm it has been assigned...
-
Lorenzo Masetti (CERN), 03/09/2007, 08:00
The Tracker Control System (TCS) is a distributed control software to operate 2000 power supplies for the silicon modules of the CMS Tracker and monitor its environmental sensors. TCS must thus be able to handle 10^4 power supply parameters, 10^3 environmental probes from the Programmable Logic Controllers of the Tracker Safety System (TSS), and 10^5 parameters read via DAQ from the...
-
Prof. Gang Chen (IHEP, China), 03/09/2007, 08:00
The Beijing Electron Spectrometer (BESIII) experiment will produce 5 PB of data in the next five years. Grid computing is used to meet this challenge. This paper introduces the BES grid computing model and specific technologies, including automatic data replication, fine-grained job scheduling and so on.
-
Obreshkov Emil (INRNE/CERN), 03/09/2007, 08:00
The ATLAS offline software comprises over 1000 software packages organized into 10 projects that are built on a variety of compiler and operating system combinations every night. File-level parallelism, package-level parallelism and multi-core build servers are used to perform simultaneous builds of 6 platforms, which are merged into a single installation on AFS. This in turn is used to...
-
Go Iwai (KEK/CRC), 03/09/2007, 08:00
The Belle Experiment is an ongoing experiment with an asymmetric electron-positron collider at KEK and already has a few PB of data in total, including hundreds of TB of DST (Data Summary Tape) and MC data. It is very difficult to physically export the existing data to the LCG (LHC Computing Grid) because of the huge data volume. We set up an SRB (Storage Resource Broker) server to access them by...
-
Mr Sigve Haug (LHEP University of Bern), 03/09/2007, 08:00
The Swiss ATLAS Grid has been in production since 2005. It comprises four clusters at one Tier 2 and two Tier 3 sites. About 800 heterogeneous cores and 60 TB of disk space are connected by a dark fibre network operated at 10 gigabits per second. Three different operating systems are deployed. The Tier 2 cluster runs both LCG and NorduGrid middleware (ARC), while the Tier 3 clusters run only the...
-
Dr Tony Cass (CERN), 03/09/2007, 08:00
CERN, like other sites, has been preparing computing services for the arrival of LHC data for some time: more than 11 years, if everything started at the First LHC Computing Workshop, held in Padova in June 1996. With LHC data taking now just around the corner, this presentation takes a look back at preparations at CERN and considers some of the key choices made along the way. Which were...
-
Dan Nae (California Institute of Technology (CALTECH)), 03/09/2007, 08:00
In this paper we present the design, implementation and evolution of the mission-oriented USLHCNet for HEP research. The design philosophy behind our network is to help meet the data-intensive computing challenges of the next generation of particle physics experiments with a comprehensive, network-focused approach. Instead of treating the network as a static, unchanging and unmanaged set of...
-
Dr Patricia Conde Muíño (LIP-Lisbon), 03/09/2007, 08:00
With the PHEASANT project, a DSVQL was proposed with the purpose of providing a tool that could increase users' productivity when producing query code for data analysis. The previous project aimed at proof of concept and methodology feasibility by introducing the concept of DSLs. We are now concentrating on implementation issues in order to deploy a final tool. The concept of domain...
-
Konstantinos Bachas (Aristotle University of Thessaloniki), 03/09/2007, 08:00
The measurement of the muon energy deposition in the calorimeters is an integral part of muon identification, track isolation and correction for catastrophic muon energy losses, which are the prerequisites to the ultimate goal of refitting the muon track using calorimeter information as well. To this end, an accurate energy loss measurement method in the calorimeters is developed which...
-
Kenneth Bloom (University of Nebraska-Lincoln), 03/09/2007, 08:00
The CMS computing model relies heavily on the use of "Tier-2" computing centers. At LHC startup, the typical Tier-2 center will have 1 MSpecInt2K of CPU resources, 200 TB of disk for data storage, and a WAN connection of at least 1 Gbit/s. These centers will be the primary sites for the production of large-scale simulation samples and for the hosting of experiment data for user...
-
Mark Donszelmann (SLAC), 03/09/2007, 08:00
Maven is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting and documentation from a single XML file which declaratively specifies the project's properties. In short, Maven replaces Make or Ant, adds the handling of dependencies and generates documentation and a project website. Maven...
-
Dr Ulrich Schwickerath (CERN), 03/09/2007, 08:00
LSF 7, the latest version of Platform's batch workload management system, addresses many issues which limited the ability of LSF 6.1 to support large scale batch farms, such as the lxbatch service at CERN. In this paper we will present the status of the evaluation and deployment of LSF 7 at CERN, including issues concerning the integration of LSF 7 with the gLite grid...
-
Mr Colin Morey (Manchester University), 03/09/2007, 08:00
Cfengine is a middle- to high-level policy language and autonomous agent for building expert systems to administrate and configure large computer clusters. It is ideal for large-scale cluster management and is highly portable across varying computer platforms, allowing the management of multiple architectures and node types within the same farm. As well as being a highly capable...
-
Mr Andrey Bobyshev (FERMILAB), 03/09/2007, 08:00
At Fermilab, there is a long history of utilizing network flow data collected from site routers for various analyses, including network performance characterization, anomalous traffic detection, investigation of computer security incidents, network traffic statistics and others. Fermilab’s flow analysis model is currently built as a distributed system that collects flow data from the site...
-
Dr David Alexander (Tech-X Corporation), 03/09/2007, 08:00
Nuclear and high-energy physicists routinely execute data processing and data analysis jobs on a Grid and need to be able to monitor their jobs' execution at an arbitrary site at any time. Existing Grid monitoring tools provide abundant information about the whole system, but are geared towards production jobs and well suited for Grid administrators, while the information tailored towards...
-
Prof. Gordon Watts (University of Washington), 03/09/2007, 08:00
ROOT is firmly based on C++ and makes use of many of its features – templates and multiple inheritance, in particular. Many modern languages like Java, C# and Python are missing these features or have radically different implementations. These programming languages, however, have many advantages to offer scientists, including improved programming paradigms, development...
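One existing example of this kind of language bridging is ROOT's own Python binding, PyROOT, which reflects the C++ classes into Python at runtime. The snippet below uses only standard ROOT classes and requires a ROOT installation with PyROOT enabled.

    import ROOT

    # Create and fill a C++ TH1F histogram entirely from Python.
    h = ROOT.TH1F("h", "Gaussian test;x;entries", 100, -5, 5)
    h.FillRandom("gaus", 10000)   # fill from ROOT's built-in "gaus" function
    print("mean =", h.GetMean(), "rms =", h.GetRMS())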
-
Hegoi Garitaonandia Elejabarrieta (Instituto de Fisica de Altas Energias (IFAE)), 03/09/2007, 08:00
The ATLAS Trigger & Data Acquisition System has been designed to use more than 2000 CPUs. During the current development stage it is crucial to test the system on a number of CPUs of similar scale. A dedicated farm of this size is difficult to find, and can only be made available for short periods. On the other hand, many large farms have become available recently as part of computing...
-
Prof. Harvey Newman (CALTECH), 03/09/2007, 08:00
The main objective of the VINCI project is to enable data intensive applications to efficiently use and coordinate shared, hybrid network resources, to improve the performance and throughput of global-scale grid systems, such as those used in high energy physics. VINCI uses a set of agent-based services implemented in the MonALISA framework to enable the efficient use of network resources,...
-
Dr Tony Chan (BROOKHAVEN NATIONAL LAB), 03/09/2007, 08:00
The Brookhaven Computing Facility provides for the computing needs of the RHIC experiments, supports the U.S. Tier 1 center for the ATLAS experiment at the LHC and provides computing support for the LSST experiment. The multi-purpose mission of the facility requires a complex computing infrastructure to meet different requirements, and can result in duplication of services with a large...
-
Andrea Dotti (INFN), 03/09/2007, 08:00
During the ATLAS detector commissioning phase, installed readout electronics must pass performance standards tests. The resulting data must be analyzed to ensure correct operation. For the Tile Calorimeter, developers plug their code into a specific framework for physics data-processing. Collaboration members, taking shifts on commissioning work, interpret the results, in thousands of...
-
03/09/2007, 09:00
-
Tejinder Virdee (CERN/Imperial College), 03/09/2007, 09:15
The current status of the LHC machine and the experiments, especially the general-purpose experiments, will be given. Also discussed will be the preparations for the physics run in 2008. The prospects for physics, with an emphasis on what can be expected with an integrated luminosity of 1 fb-1, will be outlined.
-
Les Robertson (CERN), 03/09/2007, 10:00
The talk will review the progress so far in setting up the distributed computing services for LHC data handling and analysis, and look at some of the challenges we face when the real data begins to flow.
-
Dr Ian Fisk (FERMILAB), 03/09/2007, 11:00
-
Sylvain Chapeland (CERN), 03/09/2007, 11:30
The CERN Large Hadron Collider (LHC) is one of the most awesome science tools ever built. To fully exploit the potential of this great instrument, a huge design and development effort has been initiated in order to ensure that measurements can optimally flow out from the detectors in terms of quantity, selectivity, and integrity, be accessible for online monitoring and be recorded for...
-
Dr Eng Lim Goh (SGI), 03/09/2007, 12:00
Cluster systems now comprise 50% to 90% of the High Performance Computing (HPC) market. However, with computing and storage needs outpacing Moore's law, the traditional approach of scaling is giving rise to facility, administrative and performance issues. Details of industry trends and unmet customer requirements for cluster computing will be presented. Implications on systems and...
-
Dr Amir Farbin (European Organization for Nuclear Research (CERN)), 03/09/2007, 14:00
As we near the collection of the first data from the Large Hadron Collider, the ATLAS collaboration is preparing the software and computing infrastructure to allow quick analysis of the first data and support of the long-term steady-state ATLAS physics program. As part of this effort, considerable attention has been paid to the "Analysis Model", a vision of the interplay of the...
-
Dr Steven Goldfarb (University of Michigan), 03/09/2007, 14:00
I report on major current activities in the domain of Collaborative Tools, focusing on development for the LHC collaborations and HEP in general, including audio and video conferencing, web archiving, and more. This presentation addresses the follow-up to the LCG RTAG 12 Final Report (presented at CHEP 2006), including the formation of the RCTF (Remote Collaboration Task Force) to...
-
Dr Andrew Maier (CERN), 03/09/2007, 14:00
Ganga, the job-management system (http://cern.ch/ganga), developed as an ATLAS-LHCb common project, offers a simple, efficient and consistent user experience in a variety of heterogeneous environments: from local clusters to global Grid systems. Ganga helps end-users to organise their analysis activities on the Grid by providing automatic persistency of the job's metadata. A user has...
-
Pablo Saiz (CERN), 03/09/2007, 14:00
Thanks to the grid, users have access to computing resources distributed all over the world. The grid hides the complexity and the differences of its heterogeneous components. In order for this to work, it is vital that all the elements are set up properly, and that they can interact with each other. It is also very important that errors are detected as soon as possible, and that the...
-
Dr Jamie Shiers (CERN), 03/09/2007, 14:00
This talk summarises the main lessons learnt from deploying WLCG production services, with a focus on reliability, scalability and accountability, which lead to both manageability and usability. Each topic is analysed in turn. Techniques for zero-user-visible downtime for the main service interventions are described, together with pathological cases that need special treatment. The...
-
Prof. Adele Rimoldi (Pavia University & INFN), 03/09/2007, 14:00
The ATLAS detector is entering the final phases of construction and commissioning in order to be ready to take data during the first LHC commissioning run, foreseen by the end of 2007. A good understanding of the experiment performance from the beginning is essential to efficiently debug the detector and assess its physics potential in view of the physics runs which are going to take...
-
Dr Akram Khan (Brunel University), 03/09/2007, 14:20
ASAP is a system for enabling distributed analysis for CMS physicists. It was created with the aim of simplifying the transition from a locally running application to one that is distributed across the Grid. The experience gained in operating the system for the past 2 years has been used to redevelop a more robust, performant and scalable version. ASAP consists of a client for job...
-
Prof. Nobuhiko Katayama (High Energy Accelerator Research Organization), 03/09/2007, 14:20
We developed the original CABS language more than 10 years ago. The main objective of the language was to describe a decay of a particle as simply as possible in the context of usual HEP data analysis. A decay mode, for example, can be defined as follows:
    define Cand Dzerobar kpi 2 { K+ identified pi- identified }
    hist 1d inv_mass 0 80 1.5 2.3 "all momentum"
    cut inv_mass...
-
Mr Philippe Galvez (California Institute of Technology), 03/09/2007, 14:20
The EVO (Enabling Virtual Organizations) system is based on a new, distributed and unique architecture, leveraging the 10+ years of unique experience of developing and operating the large distributed production-based VRVS collaboration system. The primary objective is to provide to the High Energy and Nuclear Physics experiments a system/service that meets their unique requirements of...
-
Sunanda Banerjee (Fermilab/TIFR)03/09/2007, 14:20The CMS simulation based on the Geant4 toolkit and the CMS object-oriented framework has been in production for more than three years and has delivered a total of more than 200 M physics events for the CMS Data Challenges and Physics Technical Design Report studies. The simulation software has been successfully ported to the new CMS Event-Data-Model based software framework and is used in...Go to contribution page
-
Dr Markus Schulz (CERN)03/09/2007, 14:20 (Computer facilities, production grids and networking; oral presentation) Today's production Grids connect large numbers of distributed hosts using high throughput networks and hence are valuable targets for attackers. In the same way users transparently access any Grid service independently of its location, an attacker may attempt to propagate an attack to different sites that are part of a Grid. In order to contain and resolve the incident, and since such an...Go to contribution page
-
Dr Oliver Gutsche (FERMILAB)03/09/2007, 14:20The CMS computing model to process and analyze LHC collision data follows a data-location driven approach and uses the WLCG infrastructure to provide access to GRID resources. As a preparation for data taking beginning at the end of 2007, CMS tests its computing model during dedicated data challenges. Within the CMS computing model, user analysis plays an important role in the CMS...Go to contribution page
-
Mr Emmanuel Ormancey (CERN)03/09/2007, 14:40The need for Single Sign On has always been restricted by the lack of cross-platform solutions: a single sign on working only on one platform or technology is nearly useless. The recent improvements in the Web Services Federation (WS-Federation) standard, enabling federation of identity, attribute, authentication and authorization information, can now provide real extended Single Sign On...Go to contribution page
-
Dr Amber Boehnlein (FERMI NATIONAL ACCELERATOR LABORATORY)03/09/2007, 14:40High energy physics experiments periodically reprocess data, in order to take advantage of improved understanding of the detector and the data processing code. Between February and May 2007, the DZero experiment will reprocess a substantial fraction of its dataset. This consists of half a billion events, corresponding to more than 100 TB of data, organized in 300,000 files. The...Go to contribution page
-
Luca Lista (INFN Napoli)03/09/2007, 14:40By the end of 2007 the CMS experiment will start running and petabytes of data will be produced every year. To make analysis of this huge amount of data possible, the CMS Physics Tools package builds the highest layer of the CMS experiment software. A core part of this package is the Candidate Model providing a coherent interface to different types of data. Standard tasks like combinatorial...Go to contribution page
-
Mr Jan Fiete Grosse Oetringhaus (CERN)03/09/2007, 14:40 (Distributed data analysis and information management; oral presentation) ALICE (A Large Ion Collider Experiment) at the LHC plans to use a PROOF cluster at CERN (CAF - Cern Analysis Facility) for fast analysis. The system is especially aimed at the prototyping phase of analyses that need a high number of development iterations and thus require a short response time. Typical examples are the tuning of cuts during the development of an analysis as well as...Go to contribution page
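As a hedged sketch of the interactive-analysis style PROOF enables (file, tree and selector names are placeholders; on the CAF one would open a connection to the cluster's master rather than a local PROOF-Lite session):

    import ROOT

    proof = ROOT.TProof.Open("lite://")   # placeholder for the CAF master URL
    chain = ROOT.TChain("esdTree")        # illustrative tree name
    chain.Add("AliESDs.root")             # illustrative input file
    chain.SetProof()                      # route processing through PROOF
    chain.Process("MySelector.C+")        # selector compiled and run on workers

Because each iteration is just another Process() call, cuts can be re-tuned and re-run with the short response time the abstract emphasises.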
-
Mrs Ruth Pordes (FERMILAB)03/09/2007, 14:40 (Computer facilities, production grids and networking; oral presentation) The Open Science Grid (OSG) is receiving five years of funding across six program offices of the Department of Energy Office of Science and the National Science Foundation. OSG is responsible for operating a secure production-quality distributed infrastructure, a reference software stack including the Virtual Data Toolkit (VDT), extending the capabilities of the high throughput virtual...Go to contribution page
-
Thomas Paul (Northeastern University)03/09/2007, 14:40The Pierre Auger Observatory aims to discover the nature and origins of the highest energy cosmic rays. The large number of physicists involved in the project and the diversity of simulation and reconstruction tasks pose a challenge for the offline analysis software, not unlike the challenges confronting software for very large high energy physics experiments. Previously we have...Go to contribution page
-
Dr Stuart Paterson (CERN)03/09/2007, 15:00 (Distributed data analysis and information management; oral presentation) The LHCb distributed data analysis system consists of the Ganga job submission front-end and the DIRAC Workload and Data Management System. Ganga is jointly developed with ATLAS and allows LHCb users to submit jobs to several backends, including several batch systems, LCG and DIRAC. The DIRAC API provides a transparent and secure way for users to run jobs on the Grid and is the default...Go to contribution page
-
James William Monk (Department of Physics and Astronomy - University College London)03/09/2007, 15:00The Durham HepData database has for many years provided an up-to-date archive of published numerical data from HEP experiments worldwide. In anticipation of the abundance of new data expected from the LHC, the database is undergoing a complete metamorphosis to add new features and improve the scope for use of the database by external applications. The core of the HepData restructuring is...Go to contribution page
-
Norman Graf (SLAC)03/09/2007, 15:00The International Linear Collider (ILC) promises to provide electron-positron collisions at unprecedented energy and luminosities. Designing the detectors to extract the physics from these events requires efficient tools to simulate the detector response and reconstruct the events. The detector response package, slic, is based on the Geant4 toolkit and adds a thin layer of C++ code....Go to contribution page
-
Tadashi Maeno (Brookhaven National Laboratory)03/09/2007, 15:00A new distributed software system was developed in the fall of 2005 for the ATLAS experiment at the LHC. This system, called PanDA, provides an integrated service architecture with late binding of jobs, maximal automation through layered services, tight binding with ATLAS distributed data management (DDM) system, advanced error discovery and recovery procedures, and other features. In this...Go to contribution page
-
Prof. Richard McClatchey (University of the West of England)03/09/2007, 15:00The Health-e-Child (HeC) project is an EC Framework Programme 6 Integrated Project that aims at developing an integrated healthcare platform for paediatrics. Through this platform biomedical informaticians will integrate heterogeneous data and perform epidemiological studies across Europe. The main objective of the project is to gain a comprehensive view of a child's health by...Go to contribution page
-
Dr Jeremy Coles (RAL)03/09/2007, 15:00 (Computer facilities, production grids and networking; oral presentation) Over the last few years, UK research centres have provided significant computing resources for many high-energy physics collaborations under the guidance of the GridPP project. This paper reviews recent progress in the Grid deployment and operations area including findings from recent experiment and infrastructure service challenges. These results are discussed in the context of how GridPP...Go to contribution page
-
Dr Pablo Saiz (CERN)03/09/2007, 15:20Starting from the end of this year, the ALICE detector will collect data at a rate that, after two years, will reach 4PB per year. To process such a large quantity of data, ALICE has developed over the last seven years a distributed computing environment, called AliEn, integrated in the WLCG environment. The ALICE environment presents several original solutions, which have shown their...Go to contribution page
-
Dr Maria Grazia Pia (INFN Genova)03/09/2007, 15:20Computational tools originating from high energy physics developments provide solutions to common problems in other disciplines: this study presents quantitative results concerning the application of HEP simulation and analysis tools, and of the grid technology, to dosimetry for oncological radiotherapy. The study covered all three major radiotherapy techniques: therapy...Go to contribution page
-
Dr Pavel Murat (Fermilab)03/09/2007, 15:20 (Computer facilities, production grids and networking; oral presentation) The CDF II detector at Fermilab has been taking physics data since 2002. The architecture of the CDF computing system has substantially evolved during the years of data taking and has now reached a stable configuration which will allow the experiment to process and analyse the data until the end of Run II. We describe the major architectural components of the CDF offline computing - dedicated...Go to contribution page
-
Dr Johannes Elmsheuser (Ludwig-Maximilians-Universität München)03/09/2007, 15:20 (Distributed data analysis and information management; oral presentation) Distributed data analysis using Grid resources is one of the fundamental applications in high energy physics to be addressed and realized before the start of LHC data taking. The demands on resource management are very high. In every experiment up to a thousand physicists will be submitting analysis jobs into the Grid. Appropriate user interfaces and helper applications have to be made...Go to contribution page
-
Dr Alberto Di Meglio (CERN)03/09/2007, 15:20The ETICS system is a distributed software configuration, build and test system designed to fulfill the need to improve the quality, reliability and interoperability of distributed software in general and grid software in particular. The ETICS project is a consortium of five partners (CERN, INFN, Engineering Ingegneria Informatica, 4D Soft and the University of Wisconsin-Madison)....Go to contribution page
-
Dr Frank Gaede (DESY IT)03/09/2007, 15:20The International Linear Collider is the next large accelerator project in High Energy Physics. The Large Detector Concept (LDC) study is one of four international working groups that are developing a detector concept for the ILC. The LDC uses a modular C++ application framework (Marlin) that is based on the international data format LCIO. It allows the distributed development of...Go to contribution page
-
Mr Lars Fischer (Nordic Data Grid Facility)03/09/2007, 15:40 (Computer facilities, production grids and networking; oral presentation) The Tier-1 facility operated by the Nordic DataGrid Facility (NDGF) differs significantly from other Tier-1s in several aspects: it is not located at one or a few locations but is instead distributed throughout the Nordic countries; it is not under the governance of a single organization but is instead a "virtual" Tier-1 built out of resources under the control of a number of different national...Go to contribution page
-
Giulio Eulisse (Northeastern University)03/09/2007, 15:40We describe a relatively new effort within CMS to converge on a set of web-based tools, using state-of-the-art industry techniques, to engage with the CMS offline computing system. CMS collaborators require tools to monitor various components of the computing system and interact with the system itself. The current state of the various CMS web tools is described alongside current planned...Go to contribution page
-
Mr Adam Kocoloski (MIT)03/09/2007, 15:40 (Distributed data analysis and information management; oral presentation) Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a...Go to contribution page
-
Mr Matevz Tadel (CERN)03/09/2007, 15:40ALICE Event Visualization Environment (AliEVE) is based on ROOT and its GUI, 2D & 3D graphics classes. A small application kernel provides for registration and management of visualization objects. CINT scripts are used as an extensible mechanism for data extraction, selection and processing as well as for steering of frequent event- related tasks. AliEVE is used for event visualization in...Go to contribution page
-
Dr Flavia Donno (CERN)03/09/2007, 15:40Storage Services are crucial components of the Worldwide LHC Computing Grid (WLCG) infrastructure spanning more than 200 sites and serving computing and storage resources to the High Energy Physics LHC communities. Up to tens of Petabytes of data are collected every year by the 4 LHC experiments at CERN. To process these large data volumes it is important to establish a protocol and a very...Go to contribution page
-
Dr Lee Lueking (FERMILAB)03/09/2007, 15:40 (Distributed data analysis and information management; oral presentation) The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It includes the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the system provides support for...Go to contribution page
-
Dr Patrick Fuhrmann (DESY)03/09/2007, 16:30With the start of the Large Hadron Collider at CERN at the end of 2007, the associated experiments will feed the major share of their data into the dCache Storage Element technology at most of the Tier I centers and many of the Tier IIs, including the larger sites. For a project not having its center of gravity at CERN, and receiving contributions from various loosely coupled sites in...Go to contribution page
-
Dirk Duellmann (CERN)03/09/2007, 16:30The CORAL package has been developed as part of the LCG Persistency Framework project, to provide the LHC experiments with a single C++ access layer supporting a variety of relational database systems. In the last two years, CORAL has been integrated as database foundation in several LHC experiment frameworks and is used in both offline and online domains. Also, the other LCG...Go to contribution page
-
Leandro Franco (CERN)03/09/2007, 16:30 (Distributed data analysis and information management; oral presentation) Particle accelerators produce huge amounts of information in every experiment and such quantities cannot be stored easily in a personal computer. For that reason, most of the analysis is done using remote storage servers (this will be particularly true when the Large Hadron Collider starts its operation in 2007). Seeing how bandwidth has increased in the last few years, the biggest...Go to contribution page
-
Dr Richard Mount (SLAC)03/09/2007, 16:30 (Computer facilities, production grids and networking; oral presentation) The PetaCache project started at SLAC in 2004 with support from DOE Computer Science and the SLAC HEP program. PetaCache focuses on using cost-effective solid state storage for the hottest data under analysis. We chart the evolution of metrics such as accesses per second per dollar for different storage technologies and deduce the near inevitability of a massive use of solid-state...Go to contribution page
-
Prof. Gordon Watts (University of Washington)03/09/2007, 16:30The DZERO experiment records proton-antiproton collisions at the Fermilab Tevatron collider. The DZERO Level 3 data acquisition (DAQ) system is required to transfer event fragments of approximately 1-20 kilobytes from 63 VME crate sources to any of approximately 240 processing nodes at a rate of 1 kHz. It is built upon a Cisco 6509 Ethernet switch, standard PCs, and commodity VME...Go to contribution page
-
Dr Ivana Hrivnacova (IPN)03/09/2007, 16:30The Virtual Monte Carlo (VMC) provides an abstract interface to the Monte Carlo transport codes Geant3, Geant4 and Fluka. A user's VMC-based application, independent of the specific Monte Carlo codes, can then be run with all three simulation programs. The VMC has been developed by the ALICE Offline Project and has since drawn attention from more experimental...Go to contribution page
-
Dr Gerd Behrmann (Nordic Data Grid Facility)03/09/2007, 16:50The LCG collaboration encompasses a number of Tier 1 centers. The Nordic LCG Tier 1, in contrast to other Tier 1 centers, is distributed over most of Scandinavia. A distributed setup was chosen for both political and technical reasons, but it also presents a number of unique challenges. dCache is well known and respected as a powerful distributed storage resource manager, and was chosen...Go to contribution page
-
Dr Tsukasa Aso (Toyama National College of Maritime Technology, JST CREST)03/09/2007, 16:50The GEANT4 Monte Carlo code provides many powerful functions for conducting particle transport simulations with great reliability and flexibility. GEANT4 has been extending its application fields beyond high energy physics to medical physics. Using reliable simulation for radiation therapy, it will become possible to validate treatment planning and select the...Go to contribution page
-
Dr Giuseppe Lo Presti (CERN/INFN)03/09/2007, 16:50 (Computer facilities, production grids and networking; oral presentation) In this paper we present the architecture design of the CERN Advanced Storage system (CASTOR) and its new disk cache management layer (CASTOR2). Mass storage systems at CERN have evolved over time to meet growing requirements, both in terms of scalability and fault resiliency. CASTOR2 has been designed as a Grid-capable storage resource sharing facility, with a database-centric...Go to contribution page
-
Marco Clemencic (European Organization for Nuclear Research (CERN))03/09/2007, 16:50The COOL project provides software components and tools for the handling of the LHC experiment conditions data. COOL software development is the result of a collaboration between the CERN IT Department and Atlas and LHCb, the two experiments that have chosen it as the base of their conditions database infrastructure. COOL supports persistency for several relational technologies...Go to contribution page
-
Lassi Tuura (Northeastern University)03/09/2007, 16:50 (Distributed data analysis and information management; oral presentation) The CMS experiment will need to sustain uninterrupted high reliability, high throughput and very diverse data transfer activities as the LHC operations start. PhEDEx, the CMS data transfer system, will be responsible for the full range of the transfer needs of the experiment. Covering the entire spectrum is a demanding task: from the critical high-throughput transfers between CERN and...Go to contribution page
-
Dr Simon George (Royal Holloway)03/09/2007, 16:50The High Level Trigger (HLT) of the ATLAS experiment at the Large Hadron Collider receives events which pass the LVL1 trigger at ~75 kHz and has to reduce the rate to ~200 Hz while retaining the most interesting physics. It is a software trigger and performs the reduction in two stages: the LVL2 trigger should take ~10 ms and the Event Filter (EF) ~1 s. At the heart of the HLT is the...Go to contribution page
-
Emilio Meschi (CERN)03/09/2007, 17:05The CMS experiment at the CERN Large Hadron Collider is currently being commissioned and is scheduled to collect the first pp collision data towards the end of 2007. CMS features a two-level trigger system. The Level-1 trigger, based on custom hardware, is designed to reduce the collision rate of 40 MHz to approximately 100 kHz. Data for events accepted by the Level-1 trigger are read...Go to contribution page
-
Dr Douglas Smith (Stanford Linear Accelerator Center)03/09/2007, 17:10 (Distributed data analysis and information management; oral presentation) The BaBar high energy experiment has been running for many years now, and has resulted in a data set of over a petabyte in size, containing over two million files. The management of this set of data has to support the requirements of further data production along with a physics community that has vastly different needs. To support these needs the BaBar bookkeeping system was developed,...Go to contribution page
-
Dr Horst Goeringer (GSI)03/09/2007, 17:10 (Computer facilities, production grids and networking; oral presentation) GSI in Darmstadt (Germany) is a center for heavy ion research and hosts an Alice Tier2 center. For the future FAIR experiments at GSI, CBM and Panda, the planned data rates will reach those of the current LHC experiments at Cern. For more than ten years gStore, the GSI Mass Storage System, has been successfully in operation. It is a hierarchical storage system with a unique name...Go to contribution page
-
Barbara Martelli (Italian INFN National Center for Telematics and Informatics (CNAF))03/09/2007, 17:10Database replication is a key topic in the LHC Computing GRID environment to allow processing of data in a distributed environment. In particular the LHCb computing model relies on the LHC File Catalog (LFC). LFC is the database catalog which stores information about files spread across the GRID, their logical names and the physical locations of all their replicas. The LHCb computing model...Go to contribution page
-
Prof. Vladimir Ivantchenko (CERN, ESA)03/09/2007, 17:10The current status of the Standard EM package of the Geant4 toolkit is described. The precision of the simulation results is discussed, with a focus on the LHC experiments. Comparisons of the simulation with experimental data are shown.Go to contribution page
-
Dr Markus Schulz (CERN)03/09/2007, 17:10As part of the EGEE project, the data management group at CERN has developed and supports a number of tools for various aspects of data management: a file catalog (LFC), a key store for encryption keys (Hydra), a grid file access library (GFAL) which transparently uses various byte access protocols to access data in various storage systems, and a set of utilities (lcg_utils) for higher level...Go to contribution page
-
Dr Markus Frank (CERN)03/09/2007, 17:20The High Level Trigger and Data Acquisition system of the LHCb experiment at the CERN Large Hadron Collider must handle proton-proton collisions from beams crossing at 40 MHz. After a hardware-based first level trigger events have to be processed at the rate of 1 MHz and filtered by software-based trigger applications that run in a trigger farm consisting of up to 2000 PCs. The final...Go to contribution page
-
Paul Avery (University of Florida)03/09/2007, 17:30 (Computer facilities, production grids and networking; oral presentation) UltraLight is a collaboration of experimental physicists and network engineers whose purpose is to provide the network advances required to enable and facilitate petabyte-scale analysis of globally distributed data. Existing Grid-based infrastructures provide massive computing and storage resources, but are currently limited by their treatment of the network as an external, passive, and...Go to contribution page
-
Andrew Cameron Smith (CERN)03/09/2007, 17:30 (Distributed data analysis and information management; oral presentation) The LHCb Computing Model describes the dataflow model for all stages in the processing of real and simulated events and defines the role of LHCb associated Tier1 and Tier2 computing centres. The WLCG ‘dressed rehearsal’ exercise aims to allow LHC experiments to deploy the full chain of their Computing Models, making use of all underlying WLCG services and resources, in preparation for real...Go to contribution page
-
Dr Maria Grazia Pia (INFN Genova)03/09/2007, 17:30A project is in progress for a systematic, quantitative validation of Geant4 physics models against experimental data. Due to the complexity of Geant4 physics, the validation of Geant4 hadronic models proceeds according to a bottom-up approach (i.e. from the lower energy range up to higher energies): this approach, which is different from the one adopted in the LCG Simulation Validation...Go to contribution page
-
Mr Mario Lassnig (CERN & University of Innsbruck, Austria)03/09/2007, 17:30The ATLAS detector at CERN's Large Hadron Collider presents data handling requirements on an unprecedented scale. From 2008 on the ATLAS distributed data management system (DQ2) must manage tens of petabytes of event data per year, distributed globally via the LCG, OSG and NDGF computing grids, now known as the WLCG. Since its inception in 2005 DQ2 has continuously managed all datasets...Go to contribution page
-
Mr Michael DePhillips (BROOKHAVEN NATIONAL LABORATORY)03/09/2007, 17:30Database demands resulting from offline analysis and production of data at the STAR experiment at Brookhaven National Laboratory's Relativistic Heavy-Ion Collider have steadily increased over the last six years of data taking. Each year STAR more than doubles the number of events taken, anticipating billion-event capability as early as next year. The challenges...Go to contribution page
-
Leonard Apanasevich (University of Illinois at Chicago)03/09/2007, 17:40The High Level Trigger (HLT) that runs in the 1000 dual-CPU box Filter Farm of the CMS experiment is a set of sophisticated software tools for selecting a very small fraction of interesting events in real time. The coherent tuning of these algorithms to accommodate multiple physics channels is a key issue for CMS, one that literally defines the reach of the experiment's physics program....Go to contribution page
-
Ms Alessandra Forti (University of Manchester)03/09/2007, 17:50 (Computer facilities, production grids and networking; oral presentation) The HEP department of the University of Manchester has purchased a 1000-node cluster. The cluster is dedicated to running EGEE and LCG software and is currently supporting 12 active VOs. Each node is equipped with 2x250 GB disks for a total of 500 GB; there is no tape storage behind them, nor are RAID arrays used. Three different storage solutions are currently being deployed to...Go to contribution page
-
Wolfgang Ehrenfeld (Univ. of Hamburg/DESY)03/09/2007, 17:50The simulation of the ATLAS detector is largely dominated by the showering of electromagnetic particles in the heavy parts of the detector, especially the electromagnetic barrel and endcap calorimeters. Two procedures have been developed to accelerate the processing time of EM particles in these regions: (1) a fast shower parameterization and (2) a frozen shower library. Both work...Go to contribution page
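A frozen shower library can be pictured as a lookup table of pre-simulated showers, binned in particle energy and position; the following toy sketch (the binning and data layout are invented for illustration, not ATLAS code) shows the substitution step:

    import bisect, random

    ENERGY_EDGES = [1.0, 2.0, 5.0, 10.0, 20.0]   # GeV, illustrative binning
    library = {}   # (energy_bin, eta_bin) -> list of stored showers, each a
                   # list of (dx, dy, dz, edep) hits, filled beforehand from
                   # full Geant4 simulations

    def lookup_shower(energy, eta):
        """Replace full tracking of an EM particle by a stored shower."""
        e_bin = bisect.bisect(ENERGY_EDGES, energy)
        eta_bin = int(abs(eta) * 10)             # 0.1-wide eta bins
        candidates = library.get((e_bin, eta_bin))
        if not candidates:
            return None                          # fall back to full simulation
        return random.choice(candidates)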
-
Dr Caitriana Nicholson (University of Glasgow)03/09/2007, 17:50The ATLAS Tag Database is an event-level metadata system, designed to allow efficient identification and selection of interesting events for user analysis. By making first-level cuts using queries on a relational database, the size of an analysis input sample could be greatly reduced and thus the time taken for the analysis reduced. Deployment of such a Tag database is underway, but to be...Go to contribution page
-
Dr Syed Naqvi (CoreGRID Network of Excellence)03/09/2007, 17:50Security requirements of service oriented architectures (SOA) are considerably higher than those of classical information technology (IT) architectures. Loose coupling – the inherent benefit of SOA – stipulates security as a service so as to circumvent tight binding of the services. The services integration interfaces are developed with minimal assumptions between the sending and receiving...Go to contribution page
-
Teresa Maria Fonseca Martin (CERN)03/09/2007, 17:55The ATLAS experiment under construction at CERN is due to begin operation at the end of 2007. The detector will record the results of proton-proton collisions at a centre-of-mass energy of 14 TeV. The trigger is a three-tier system designed to identify in real-time potentially interesting events that are then saved for detailed offline analysis. The trigger system will select...Go to contribution page
-
Dmitry Emeliyanov (RAL)03/09/2007, 18:10The unprecedented rate of beauty production at the LHC will yield high statistics for measurements such as CP violation and Bs oscillation and will provide the opportunity to search for and study very rare decays, such as B→... The trigger is a vital component for this work and must select events containing the channels of interest from a huge background in order to reduce the 40 MHz...Go to contribution page
-
Miron Livny (University of Wisconsin)04/09/2007, 08:30
-
James Sexton (IBM)04/09/2007, 09:00IBM's Blue Gene/L system has demonstrated that it is now feasible to run applications at sustained performances of hundreds of teraflops. The next generation Blue Gene/P system is designed to scale up to a peak performance of 3.6 petaflops. This talk will look at some of the key application successes already achieved at the 100 TF scale. It will then address the emerging petascale...Go to contribution page
-
Bill St Arnaud (CANARIE)04/09/2007, 09:30
-
Dr Rene Brun (CERN)04/09/2007, 11:00The BOOT project was introduced at CHEP06 and is gradually being implemented in the ROOT project. The first phase of the project consisted of a major restructuring of the ROOT core classes such that only a small subset is required when starting a ROOT application (including user libraries). Thanks to this first phase, the virtual address space required by the interactive version...Go to contribution page
-
Dr Ian Fisk (FNAL)04/09/2007, 11:00 (Computer facilities, production grids and networking; oral presentation) In preparation for the start of the experiment, CMS has conducted computing, software, and analysis challenges to demonstrate the functionality, scalability, and usability of the computing and software components. These challenges are designed to validate the CMS distributed computing model by demonstrating the functionality of many components simultaneously. In the challenges CMS...Go to contribution page
-
Dr Marianne Bargiotti (European Organization for Nuclear Research (CERN))04/09/2007, 11:00The DIRAC Data Management System (DMS) relies on both WLCG Data Management services (LCG File Catalogues, Storage Resource Managers and FTS) and LHCb-specific components (Bookkeeping Metadata File Catalogue). The complexity of both the DMS and its interactions with numerous WLCG components, as well as the instability of the facilities concerned, has frequently led to unexpected problems...Go to contribution page
-
Dan Fraser (Globus)04/09/2007, 11:00Globus software was developed to enable previously disconnected communities to securely share computational resources and data that span organizational boundaries. As a community driven project, the Globus community is continually creating and enhancing Grid technology to make it easier to administer Grids as well as lowering the barriers to entry for both Grid users and Grid...Go to contribution page
-
Dr Roger Jones (LANCAS)04/09/2007, 11:00 (Distributed data analysis and information management; oral presentation) The ATLAS Computing Model was constructed after early tests and was captured in the ATLAS Computing TDR in June 2005. Since then, the grid tools and services have evolved and their performance is starting to be understood through large-scale exercises. As real data taking becomes imminent, the computing model continues to evolve, with robustness and reliability being the watchwords for...Go to contribution page
-
Dr Markus Schulz (CERN)04/09/2007, 11:20A key feature of WLCG's multi-tier model is a robust and reliable file transfer service that efficiently moves bulk data sets between the various tiers, corresponding to the different stages of production and user analysis. We describe the file transfer service in detail, covering both the Tier-0 data export and the inter-tier data transfers, discussing the transition and lessons learned in moving...Go to contribution page
-
Dr Simone Pagan Griso (University and INFN Padova)04/09/2007, 11:20 (Distributed data analysis and information management; oral presentation) The upgrades of the Tevatron collider and of the CDF detector have considerably increased the demand on computing resources for the CDF experiment, in particular for Monte Carlo production. This has forced the collaboration to move beyond the usage of dedicated resources and start exploiting Grid resources. The CDF Analysis Farm (CAF) model has been reimplemented into LcgCAF ...Go to contribution page
-
Mr Michel Jouvin (LAL / IN2P3)04/09/2007, 11:20 (Computer facilities, production grids and networking; oral presentation) Quattor is a tool aimed at efficient management of fabrics with hundreds or thousands of Linux machines, while still being easy enough to manage smaller clusters. It was originally developed inside the European Data Grid (EDG) project. It is now in use at more than 30 grid sites running gLite middleware, ranging from small LCG T3 sites to very large ones like CERN. Main goals and specific...Go to contribution page
-
Mr Olivier Couet (CERN)04/09/2007, 11:20The ROOT graphical libraries provide support for many different functions including basic graphics, high-level visualization techniques, output on files, 3D viewing etc. They use well-known world standards to render graphics on screen, to produce high-quality output files, and to generate images for Web publishing. Many techniques allow visualization of all the basic ROOT data types,...Go to contribution page
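The basic workflow the abstract describes (draw on screen, then produce both publication-quality and Web output) looks like this in a minimal PyROOT example:

    import ROOT

    canvas = ROOT.TCanvas("c1", "Demo", 800, 600)
    hist = ROOT.TH1F("h1", "Gaussian;x;entries", 100, -4, 4)
    hist.FillRandom("gaus", 10000)     # fill from the predefined Gaussian
    hist.Draw()
    canvas.Print("hist.eps")           # vector output for publication
    canvas.Print("hist.png")           # bitmap image for Web publishing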
-
Prof. Shahram Rahatlou (Univ di Roma La Sapienza), Dr Tommaso Boccali (INFN Sezione di Pisa)04/09/2007, 11:20At the end of 2007 the first colliding beams from the LHC are expected. The CMS Computing Model enforces the use of the same software (with different performance settings) for offline and online (HLT) operations; this is particularly true for the reconstruction software: the different settings must allow a processing time per event (typically, numbers for 2×10^33 luminosity are given) of 50 ms...Go to contribution page
-
Dr Matevz Tadel (CERN)04/09/2007, 11:35OpenGL has been promoted to become the main 3D rendering engine of ROOT. This required a major re-modularization of OpenGL support on all levels, from the basic window-system specific interface to medium-level object representation and top-level scene management. This new architecture allows seamless integration of external scene-graph libraries into the ROOT OpenGL viewer as well as...Go to contribution page
-
Swagato Banerjee (University of Victoria)04/09/2007, 11:40Experience with validating GEANT4 v7 and v8 against v6 in BaBar (S. Banerjee, P. Kim, W. Lockman, and D. Wright for the BaBar Computing Group): The BaBar experiment at SLAC has been using the GEANT4 package version 6 for simulation of the detector response to the passage of particles through its material. Since 2005 and 2006, respectively,...Go to contribution page
-
Mr Igor Sfiligoi (FNAL)04/09/2007, 11:40Grids are making it possible for Virtual Organizations (VOs) to run hundreds of thousands of jobs per day. However, the resources are distributed among hundreds of independent Grid sites. A higher-level Workload Management System (WMS) is thus necessary. glideinWMS is a pilot-based WMS, inheriting several useful features: 1) Late binding: Pilots are sent to all suitable Grid...Go to contribution page
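The late-binding feature can be illustrated schematically (this is a toy pull model, not glideinWMS code; the queue and the health checks are invented for illustration):

    import os, shutil
    from collections import deque

    class TaskQueue:
        """Stand-in for the central VO queue; the real one is a remote service."""
        def __init__(self, jobs):
            self._jobs = deque(jobs)
        def pull(self):
            return self._jobs.popleft() if self._jobs else None

    def node_is_healthy():
        # Minimal sanity checks a pilot might run before accepting work.
        return shutil.disk_usage("/tmp").free > 1_000_000_000 and os.cpu_count() >= 1

    def run_pilot(queue):
        if not node_is_healthy():
            return                 # a bad node never receives a real job
        while True:
            job = queue.pull()     # late binding: the job is chosen only now,
            if job is None:        # on a node the pilot has already validated
                break
            job()

    run_pilot(TaskQueue([lambda: print("analysis job 1"),
                         lambda: print("analysis job 2")]))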
-
Torsten Antoni (Forschungszentrum Karlsruhe)04/09/2007, 11:40 (Computer facilities, production grids and networking; oral presentation) The organization and management of the user support in a global e-science computing infrastructure such as EGEE is one of the challenges of the grid. Given the widely distributed nature of the organisation, and the spread of expertise for installing, configuring, managing and troubleshooting the grid middleware services, a standard centralized model could not be deployed in EGEE. This...Go to contribution page
-
Dr Hartmut Stadie (Universitaet Hamburg)04/09/2007, 11:40 (Distributed data analysis and information management; oral presentation) The detector and collider upgrades for the HERA-II running at DESY have considerably increased the demand on computing resources for the ZEUS experiment. To meet the demand, ZEUS commissioned an automated Monte Carlo (MC) production system capable of using Grid resources in November 2004. Since then, more than one billion events have been simulated and reconstructed on the Grid, which corresponds...Go to contribution page
-
Mr Philippe Canal (FERMILAB)04/09/2007, 11:50For the last several months the main focus of development in the ROOT I/O package has been code consolidation and performance improvements. Access to remote files is affected both by bandwidth and latency. We introduced a pre-fetch mechanism to minimize the number of transactions between client and server and hence reduce the effect of latency. We will review the...Go to contribution page
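In current ROOT versions this kind of pre-fetching is exposed through the TTree cache; a hedged PyROOT sketch (the URL, tree name and cache size are placeholders, and the exact cache API has evolved across ROOT releases):

    import ROOT

    # Placeholder URL for a remote file served over xrootd.
    f = ROOT.TFile.Open("root://server.example.org//data/events.root")
    tree = f.Get("Events")
    tree.SetCacheSize(30 * 1024 * 1024)    # 30 MB read cache
    tree.AddBranchToCache("*", True)       # prefetch baskets for all branches
    for i in range(tree.GetEntries()):
        tree.GetEntry(i)                   # baskets now arrive in few large reads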
-
Dr Ashok Agarwal (University of Victoria)04/09/2007, 12:00 (Distributed data analysis and information management; oral presentation) The present paper highlights the approach used to design and implement a web services based BaBar Monte Carlo (MC) production grid using Globus Toolkit version 4. The grid integrates the resources of two clusters at the University of Victoria, using the ClassAd mechanism provided by the Condor-G metascheduler. Each cluster uses the Portable Batch System (PBS) as its local resource...Go to contribution page
-
Mr Luigi Zangrando (INFN Padova)04/09/2007, 12:00Modern GRID middlewares are built around components providing basic functionality, such as data storage, authentication and security, job management, resource monitoring and reservation. In this paper we describe the Computing Resource Execution and Management (CREAM) service. CREAM provides a Web service-based job execution and management capability for Grid systems; in particular, it is...Go to contribution page
-
Mr Antonio Retico (CERN)04/09/2007, 12:00 (Computer facilities, production grids and networking; oral presentation) Grids have the potential to revolutionise computing by providing ubiquitous, on demand access to computational services and resources. They promise to allow for on demand access and composition of computational services provided by multiple independent sources. Grids can also provide unprecedented levels of parallelism for high-performance applications. On the other hand, grid...Go to contribution page
-
Mr Andrei Gheata (CERN/ISS)04/09/2007, 12:05The ROOT geometry modeller (TGeo) offers powerful tools for detector geometry description. The package provides several functionalities, such as navigation, geometry checking, enhanced visualization and a geometry-editing GUI, all using ROOT I/O. A new interface module, g4root, was recently developed to take advantage of ROOT geometry navigation optimizations in the context of GEANT4...Go to contribution page
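A minimal PyROOT example of the kind of TGeo description such navigation operates on (the materials and shapes are illustrative):

    import ROOT

    geom = ROOT.TGeoManager("world", "Minimal detector geometry")
    mat_vac = ROOT.TGeoMaterial("Vacuum", 0, 0, 0)
    med_vac = ROOT.TGeoMedium("Vacuum", 1, mat_vac)
    mat_al = ROOT.TGeoMaterial("Al", 26.98, 13, 2.7)
    med_al = ROOT.TGeoMedium("Al", 2, mat_al)

    top = geom.MakeBox("TOP", med_vac, 100., 100., 100.)  # half-lengths in cm
    geom.SetTopVolume(top)
    pipe = geom.MakeTube("PIPE", med_al, 0., 5., 50.)     # rmin, rmax, dz
    top.AddNode(pipe, 1)                                  # placed at the origin
    geom.CloseGeometry()                                  # build navigation tables
    print(geom.FindNode(0., 0., 0.).GetName())            # navigation query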
-
04/09/2007, 14:00 (oral presentation)
-
04/09/2007, 14:05 (oral presentation) Effective security needs resources and support from senior management. This session will look at some ways of gaining that support by establishing a common understanding of risk.Go to contribution page
-
04/09/2007, 15:00This session will look to establish a common understanding of risk and introduce the ISSeG risk assessment questionnaire.Go to contribution page
-
04/09/2007, 16:30This session will look at some of the emerging recommendations that can be used at sites to improve security.Go to contribution page
-
04/09/2007, 17:30
-
Marco Mambelli (University of Chicago)05/09/2007, 08:00A Data Skimming Service (DSS) is a site-level service for rapid event filtering and selection from locally resident datasets based on metadata queries to associated "tag" databases. In US ATLAS, we expect most if not all of the AOD-based datasets to be replicated to each of the five Tier 2 regional facilities in the US Tier 1 "cloud" coordinated by Brookhaven National Laboratory. ...Go to contribution page
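The tag-query step can be pictured with a toy relational example (sqlite3 is used here for self-containment; the real tag databases and attribute names differ):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE tags (run INT, event INT, nmuon INT, met REAL)")
    conn.executemany("INSERT INTO tags VALUES (?,?,?,?)",
                     [(1, 1, 0, 12.0), (1, 2, 2, 55.0), (1, 3, 1, 80.0)])

    # First-level cut on event-level metadata; only the selected (run, event)
    # pairs are then read from the locally resident AOD files.
    selected = conn.execute(
        "SELECT run, event FROM tags WHERE nmuon >= 1 AND met > 50").fetchall()
    print(selected)   # [(1, 2), (1, 3)]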
-
Marco Cecchi (INFN-CNAF)05/09/2007, 08:00Since the beginning, one of the design guidelines for the Workload Management System currently included in the gLite middleware was flexibility with respect to the deployment scenario: the WMS has to work correctly and efficiently in any configuration: centralized, decentralized, and prospectively even peer-to-peer. Yet the preferred deployment solution is to concentrate the workload...Go to contribution page
-
Dr Torsten Harenberg (University of Wuppertal)05/09/2007, 08:00Today, one of the major challenges in science is the processing of large datasets. The LHC experiments will produce an enormous amount of results that are stored in databases or files. These data are processed by a large number of small jobs that each read only chunks of the data. Existing job monitoring tools inside the LHC Computing Grid (LCG) provide only limited functionality to the user. These...Go to contribution page
-
Dr Silvio Pardi (University of Naples ``Federico II'' - C.S.I. and INFN)05/09/2007, 08:00The user interface is a crucial service for guaranteeing Grid accessibility. The goal is the implementation of an environment able to hide the grid complexity and offer a familiar interface to the final user. Currently many graphical interfaces have been proposed to simplify grid access, but the GUI approach appears not very congenial to UNIX developers and...Go to contribution page
-
Valentin Kuznetsov (Cornell University)05/09/2007, 08:00The CMS Dataset Bookkeeping System (DBS) search page is a web-based application used by physicists and production managers to find data from the CMS experiment. The main challenge in the design of the system was to map the complex, distributed data model embodied in the DBS and the Data Location Service (DLS) to a simple, intuitive interface consistent with the mental model...Go to contribution page
-
Mr Giacinto Piacquadio (Physikalisches Institut - Albert-Ludwigs-Universität Freiburg)05/09/2007, 08:00A new inclusive secondary vertexing algorithm which exploits the topological structure of weak b- and c-hadron decays inside jets is presented. The primary goal is the application to b-jet tagging. The fragmentation of a b-quark results in a decay chain composed of a secondary vertex from the weakly decaying b-hadron and typically one or more tertiary vertices from c-hadron decays. The...Go to contribution page
-
Dr Sebastien Incerti (CENBG-IN2P3)05/09/2007, 08:00Detailed knowledge of the microscopic pattern of energy deposition related to the particle track structure is required to study radiation effects in various domains, like electronics, gaseous detectors or biological systems. The extension of Geant4 physics down to the electronvolt scale requires not only new physics models, but also adequate design technology. For this purpose a...Go to contribution page
-
Michele Pioppi (CERN)05/09/2007, 08:00In the CMS software, a dedicated electron track reconstruction algorithm, based on a Gaussian Sum Filter (GSF), is used. This algorithm is able to follow an electron along its complete path up to the electromagnetic calorimeter, even in the case of a large amount of Bremsstrahlung emission. Because of the significant CPU consumption of this algorithm, however, it can be run only on a...Go to contribution page
-
Mr Pablo Martinez (Instituto de Física de Cantabria)05/09/2007, 08:00A precise alignment of the Muon System is one of the requirements for CMS to reach its expected performance and cover its physics program. A first prototype of the software and computing tools to achieve this goal was successfully tested during CSA06, the Computing, Software and Analysis Challenge in 2006. Data was exported from Tier-0 to Tier-1 and Tier-2, where the alignment software...Go to contribution page
-
Dr Josva Kleist (Nordic Data Grid Facility)05/09/2007, 08:00AliEn or Alice Environment is the Gridware developed and used within the ALICE collaboration for storing and processing data in a distributed manner. ARC (Advanced Resource Connector) is the Grid middleware deployed across the Nordic countries and gluing together the resources within the Nordic Data Grid Facility (NDGF). In this paper we will present our approach to integrate AliEn and...Go to contribution page
-
Mr Luca Magnoni (INFN-CNAF)05/09/2007, 08:00In a Grid environment the naming capability allows users to refer to specific data resources in a physical storage system using a high level logical identifier. This logical identifier is typically organized in a file system like structure, a hierarchical tree of names. Storage Resource Manager (SRM) services map the logical identifier to the physical location of data evaluating a set of...Go to contribution page
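The mapping idea can be sketched as rule evaluation over the logical namespace (a toy illustration only; real SRM implementations also evaluate storage classes, space tokens and site policy, and the prefixes and endpoints below are invented):

    # Toy rules mapping a logical file name (LFN) to a physical location.
    RULES = [
        ("/grid/expt/mc/",   "srm://se.example.org/pool_tape/"),
        ("/grid/expt/user/", "srm://se.example.org/pool_disk/"),
    ]

    def resolve(lfn):
        for prefix, pool in RULES:
            if lfn.startswith(prefix):
                return pool + lfn[len(prefix):]
        raise KeyError("no rule matches " + lfn)

    print(resolve("/grid/expt/user/a/alice/ntuple.root"))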
-
Stephane Chauvie (INFN Genova)05/09/2007, 08:00An original model is presented for the simulation of the energy loss of negatively charged hadrons: it calculates the stopping power by regarding the target atoms as an ensemble of quantum harmonic oscillators. This approach makes it possible to account for charge-dependent effects in the stopping power, which are relevant at low energy: the differences between the stopping powers of positive and...Go to contribution page
-
Dr Jerome Lauret (BROOKHAVEN NATIONAL LABORATORY)05/09/2007, 08:00Secure access to computing facilities has increasingly demanded practical tools as the world of cyber-security infrastructure has changed the landscape to access control via gatekeepers or gateways. However, the advent of two-factor authentication (SSH keys, for example), preferred over simpler Unix-based login, has introduced the challenging task of managing private keys and its...Go to contribution page
-
Dr Josva Kleist (Nordic Data Grid Facility)05/09/2007, 08:00The Nordic Data Grid Facility (NDGF) consists of Grid resources running ARC middleware in Scandinavia and other countries. These resources serve many virtual organisations and contribute a large fraction of total worldwide resources for the ATLAS experiment, whose data is distributed and managed by the DQ2 software. Managing ATLAS data within NDGF and between NDGF and other Grids used by...Go to contribution page
-
Rolf Seuster (University of Victoria)05/09/2007, 08:00The ATLAS Liquid Argon Calorimeter consists of precision electromagnetic accordion calorimeters in the barrel and endcaps, hadronic calorimeters in the endcaps, and calorimeters in the forward region. The initial high energy collision data at the LHC experiments are expected in the spring of 2008. While tools for the reconstruction of the calorimeter data are quite developed through years...Go to contribution page
-
Dr Daniela Rebuzzi (INFN, Sezione di Pavia), Dr Nectarios Benekos (Max-Planck-Institut fur Physik)05/09/2007, 08:00The ATLAS detector, currently being installed at CERN, is designed to make precise measurements of 14 TeV proton-proton collisions at the LHC, starting in 2007. Arguably the clearest signatures for new physics, including the Higgs Boson and supersymmetry, will involve the production of isolated final-state muons. The identification and precise reconstruction of muons are performed using...Go to contribution page
-
Dr Ricardo Vilalta (University of Houston)05/09/2007, 08:00Advances in statistical learning have placed at our disposal a rich set of classification algorithms (e.g., neural networks, decision trees, Bayesian classifiers, support vector machines, etc.) with little or no guidelines on how to select the analysis technique most appropriate for the task at hand. In this paper we present a new approach for the automatic selection of predictive models...Go to contribution page
-
Michal Kwiatek (CERN)05/09/2007, 08:00The digitization of CERN audio-visual archives, a major task currently in progress, will generate over 40 TB of video, audio and photo files. Storing these files is one issue, but a far more important challenge is to provide long-term coherence of the archive and to make these files available online with minimum manpower investment. An infrastructure, based on standard CERN...Go to contribution page
-
Dr Andrew McNab (University of Manchester)05/09/2007, 08:00GridSite has extended the industry-standard Apache webserver for use within Grid projects, by adding support for Grid security credentials such as GSI and VOMS. With the addition of the GridHTTP protocol for bulk file transfer via HTTP and the development of a mapping between POSIX filesystem operations and HTTP requests we have extended the scope of GridSite into bulk data transfer and...Go to contribution page
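The POSIX-to-HTTP mapping can be exercised with any HTTP client presenting a grid credential; a hedged Python sketch (the host, paths and credential locations are placeholders, and GridHTTP-specific details such as redirection are omitted):

    import http.client, ssl

    ctx = ssl.create_default_context(cafile="/etc/grid-security/ca.pem")
    ctx.load_cert_chain("/tmp/x509up_u1000")   # proxy cert+key, placeholder path
    conn = http.client.HTTPSConnection("storage.example.org", context=ctx)

    conn.request("PUT", "/dir/file.dat", body=b"payload")   # write()  -> PUT
    conn.getresponse().read()
    conn.request("GET", "/dir/file.dat")                    # read()   -> GET
    print(conn.getresponse().read())
    conn.request("DELETE", "/dir/file.dat")                 # unlink() -> DELETE
    print(conn.getresponse().status)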
-
Dr Douglas Benjamin (Duke University)05/09/2007, 08:00The CDF experiment at Fermilab produces Monte Carlo data files using computing resources on both the Open Science Grid (OSG) and LHC Computing Grid (LCG) grids. The data produced must be brought back to Fermilab for archival storage. In the past CDF produced Monte Carlo data on dedicated computer farms throughout the world. The data files were copied directly from the worker nodes to...Go to contribution page
-
Dr Daniele Bonacorsi (INFN-CNAF, Bologna, Italy)05/09/2007, 08:00The CMS experiment operated a Computing, Software and Analysis Challenge in 2006 (CSA06). This activity is part of the constant work of CMS on computing challenges of increasing complexity, to demonstrate the capability to deploy and operate a distributed computing system at the desired scale in 2008. The CSA06 challenge was a 25% exercise, and included several workflow elements: event...Go to contribution page
-
Dr Andreas Nowack (III. Physikalisches Institut (B), RWTH Aachen)05/09/2007, 08:00In Germany, several university institutes and research centres take part in the CMS experiment. Concerning data analysis, a number of computing centres at different Tier levels, ranging from Tier 1 to Tier 3, exist at these places. The German Tier 1 centre GridKa at the research centre at Karlsruhe serves all four LHC experiments as well as four non-LHC experiments. With respect...Go to contribution page
-
Prof. Alexander Read (University of Oslo, Department of Physics)05/09/2007, 08:00Computing and storage resources connected by the Nordugrid ARC middleware in the Nordic countries, Switzerland and Slovenia are a part of the ATLAS computing grid. This infrastructure is being commissioned with the ongoing ATLAS Monte Carlo simulation production in preparation for the commencement of data taking in late 2007. The unique non-intrusive architecture of ARC, its...Go to contribution page
-
Prof. Richard McClatchey (UWE)05/09/2007, 08:00We introduce the concept, design and deployment of the DIANA meta-scheduling approach to solving the data analysis challenge faced by the CERN experiments. The DIANA meta-scheduler supports data intensive bulk scheduling, is network aware and follows a policy-centric meta-scheduling approach that will be explained in some detail. In this paper, we describe a Physics analysis case...Go to contribution page
-
Dr Domenico Giordano (Dipartimento Interateneo di Fisica)05/09/2007, 08:00The CMS Silicon Strip Tracker (SST), consisting of more than 10 million channels, is organized in about 16,000 detector modules and is the largest silicon strip tracker ever built for high energy physics experiments. In the first half of 2007 the CMS SST project is facing the important milestone of commissioning and testing a quarter of the entire SST with cosmic muons. The full...Go to contribution page
-
Mr Tigran Mkrtchyan Mkrtchyan (Deutsches Elektronen-Synchrotron DESY)05/09/2007, 08:00Starting June 2007, all WLCG data management services have to be ready and prepared to move terabytes of data from CERN to the Tier 1 centers worldwide, and from the Tier 1s to their corresponding Tier 2s. Reliable file transfer services, like FTS, on top of the SRM v2.2 protocol are playing a major role in this game. Nevertheless, moving large chunks of data is only part of the...Go to contribution page
-
Mr Enrico Fattibene (INFN-CNAF, Bologna, Italy), Mr Giuseppe Misurelli (INFN-CNAF, Bologna, Italy)05/09/2007, 08:00A monitoring tool for complex Grid systems can gather a huge amount of information that has to be presented to the users in the most comprehensive way. Moreover, different types of consumers could be interested in inspecting and analyzing different subsets of data. The main goal in designing a Web interface for the presentation of monitoring information is to organize the huge amount of...Go to contribution page
-
Dr Ricardo Graciani Diaz (Universidad de Barcelona)05/09/2007, 08:00DIRAC Services and Agents are defined in the context of the DIRAC system (LHCb's Grid Workload and Data Management system), and how they cooperate to build functional sub-systems is presented. How the Services and Agents are built from the low-level DIRAC framework tools is described. Practical experience with the LHCb production system has directed the creation of the current DIRAC...Go to contribution page
-
Mr Adrian Casajus Ramo (Universitat de Barcelona)05/09/2007, 08:00The DIRAC system is made of a number of cooperating Services and Agents that interact with each other in a client-server architecture. All DIRAC components rely on a low-level framework that provides the necessary basic functionality. In the current version of DIRAC these components have been identified as: DISET, the secure communication protocol for remote procedure call and file...Go to contribution page
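From the client side, DISET remote procedure calls go through an RPCClient; a hedged sketch (the service follows DIRAC's System/Component naming convention, and the method shown is illustrative of the S_OK/S_ERROR return pattern):

    # Run inside a DIRAC-enabled environment with a valid grid proxy.
    from DIRAC.Core.Base import Script
    Script.parseCommandLine()          # initialise the DIRAC configuration

    from DIRAC.Core.DISET.RPCClient import RPCClient

    monitoring = RPCClient("WorkloadManagement/JobMonitoring")
    result = monitoring.getApplicationStates()   # illustrative method name
    if result["OK"]:
        print(result["Value"])
    else:
        print("call failed:", result["Message"])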
-
Gianluca Castellani (European Organization for Nuclear Research (CERN))05/09/2007, 08:00LHCb accesses the Grid through DIRAC, its Workload and Data Management system. In DIRAC all the jobs are stored in central task queues and then pulled onto worker nodes via generic Grid jobs called Pilot Agents. These task queues are characterized by different requirements on CPU time and destination. Because the whole LHCb community is divided into sets of physicists, developers,...Go to contribution page
-
Dr Andrei Tsaregorodtsev (CNRS-IN2P3-CPPM, Marseille)05/09/2007, 08:00The DIRAC system was developed in order to provide a complete solution for using distributed computing resources of the LHCb experiment at CERN for data production and analysis. It allows a concurrent use of over 10K CPUs and 10M file replicas distributed over many tens of sites. The sites can be part of a computing grid such as WLCG or standalone computing clusters all integrated in a...Go to contribution page
-
Andrew Cameron Smith (CERN)05/09/2007, 08:00DIRAC, LHCb’s Grid Workload and Data Management System, utilises WLCG resources and middleware components to perform distributed computing tasks satisfying LHCb’s Computing Model. The Data Management System (DMS) handles data transfer and data access within LHCb. Its scope ranges from the output of the LHCb Online system to Grid-enabled storage for all data types. It supports metadata for...Go to contribution page
-
Dr Julius Hrivnac (LAL)05/09/2007, 08:00LCG experiments will keep large amounts of data in relational databases. Those data will be spread over many sites (Grid or not). Fast and easy access will be required not only by batch processing jobs, but also by interactive analysis. While many systems have been proposed and developed for access to file-based data in a distributed environment, methods of efficient access...Go to contribution page
-
Lana Abadie (CERN)05/09/2007, 08:00The DPM (Disk Pool Manager) provides a lightweight and scalable managed disk storage system. In this paper, we describe the new features of the DPM. It is integrated in the grid middleware and is compatible with both VOMS and grid proxies. Besides the primary/secondary groups (or roles), the DPM supports ACLs adding more flexibility in setting file permissions. Tools ...Go to contribution page
-
Mr Claude Charlot (Ecole Polytechnique)05/09/2007, 08:00We describe the strategy developed for electron reconstruction in CMS. Emphasis is put on isolated electrons and on recovering the bremsstrahlung losses due to the presence of the material before the ECAL. Following the strategy used for the high level triggers, a first filtering is obtained building seeds from the clusters reconstructed in the ECAL. A dedicated trajectory building is...Go to contribution page
-
Dr Vincenzo Ciaschini (INFN CNAF)05/09/2007, 08:00While starting to use the grid in production, applications have begun to demand the implementation of complex policies regarding the use of resources. Some want to divide their users into different priority brackets and classify the resources in different classes; others are content to consider all users and resources equal. Resource managers have to work on enabling...Go to contribution page
-
Mr Joel Closier (CERN)05/09/2007, 08:00The LHCb experiment has chosen to use the SAM framework (Service Availability Monitoring Environment) provided by the WLCG developers to make extensive tests of the LHCb environment at all the accessible grid resources. The availability and the proper definition of the local Computing and Storage Elements, user interfaces as well as the WLCG software environment are checked. The same...Go to contribution page
-
Mr Sergey Gorbunov (GSI), Dr Alexander Glazov (DESY)05/09/2007, 08:00Stand-alone event reconstruction was developed for the Forward and the Backward Silicon Trackers of the H1 experiment at HERA. The reconstruction module includes the pattern recognition algorithm, a track fitter and a primary vertex finder. The reconstruction algorithm shows high efficiency and speed. The detector alignment was performed to within an accuracy of 10 um which...Go to contribution page
-
Mr Trunov Artem (CC-IN2P3 (Lyon) and EKP (Karlsruhe))05/09/2007, 08:00We present our experience in setting up an xrootd storage cluster at CC-IN2P3 - an LCG Tier-1 computing centre. The solution consists of an xrootd storage cluster made of NAS boxes and includes an interface to dCache/SRM and the Mass Storage System. A feature of this system is the integration of PROOF to facilitate analysis. The setup allows one to take advantage of a reduced administrative burden,...Go to contribution page
-
Ludek Matyska (CESNET)05/09/2007, 08:00Grid middleware stacks, including gLite, have matured into the state of being able to process up to millions of jobs per day. Logging and Bookkeeping, the gLite job-tracking service, keeps pace with this rate; however, it is not designed to provide a long-term archive of executed jobs. ATLAS---representative of a large user community---addresses this issue with its own job catalogue (prodDB)....Go to contribution page
-
Mr Kyu Park (Department of Electrical and Computer Engineering, University of Florida)05/09/2007, 08:00A primary goal of the NSF-funded UltraLight Project is to expand existing data-intensive grid computing infrastructures to the next level by enabling a managed network that provides dynamically constructed end-to-end paths (optically or virtually, in whole or in part). Network bandwidth used to be the primary limiting factor, but with the recent advent of 10Gb/s network paths end-to-end,...Go to contribution page
-
Ms Ying Ying Li (University of Cambridge)05/09/2007, 08:00The DIRAC workload-management system of the LHCb experiment allows coordinated use of globally distributed computing power and data storage. The system was initially deployed only on Linux platforms, where it has been used very successfully both for collaboration-wide production activities and for single-user physics studies. To increase the resources available to LHCb, DIRAC has...Go to contribution page
-
Dr Klaus Goetzen (GSI Darmstadt)05/09/2007, 08:00As one of the primary experiments to be located at the new Facility for Antiproton and Ion Research in Darmstadt, the PANDA experiment aims for high-quality hadron spectroscopy from antiproton-proton collisions. The versatile and comprehensive projected physics program requires an elaborate detector design. The detector for the PANDA experiment will be a very complex machine consisting of...Go to contribution page
-
Dr Manuel Venancio Gallas Torreira (CERN)05/09/2007, 08:00Based on the ATLAS TileCal 2002 test-beam setup example, we present here the technical and software aspects of a possible solution to the problem of using two different simulation engines, such as Geant4 and Fluka, with common geometry and digitization code. The specific use case we discuss here, which is probably the most common one, is when the Geant4 application is already implemented....Go to contribution page
-
Mr Edmund Widl (Institut für Hochenergiephysik (HEPHY Vienna))05/09/2007, 08:00The Kalman alignment algorithm (KAA) has been specifically developed to cope with the demands that arise from the specifications of the CMS Tracker. The algorithmic concept is based on the Kalman filter formalism and is designed to avoid the inversion of large matrices. Most notably, the KAA strikes a balance between conventional global and local track-based alignment algorithms, by...Go to contribution page
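The core of any Kalman-filter-based alignment is a measurement update that avoids inverting large matrices. The scalar sketch below (purely illustrative, not the KAA itself) updates one alignment parameter and its variance from a single track residual.

    def kalman_update(a, P, r, h, V):
        """One scalar Kalman measurement update for an alignment parameter.

        a: current estimate of the module shift
        P: its variance
        r: track-hit residual attributed to the misalignment
        h: derivative of the residual w.r.t. the shift
        V: measurement variance of the residual
        """
        K = P * h / (h * P * h + V)   # Kalman gain
        a_new = a + K * (r - h * a)   # updated estimate
        P_new = (1.0 - K * h) * P     # updated (shrinking) variance
        return a_new, P_new

    # Example: hits repeatedly suggest a ~50 um shift, measured with 100 um noise.
    a, P = 0.0, 1.0e4  # start from no shift with a loose prior (um^2)
    for _ in range(100):
        a, P = kalman_update(a, P, r=50.0, h=1.0, V=100.0**2)
    print(round(a, 1), round(P, 1))  # estimate converges towards 50 um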
-
Remi Mollon (CERN)05/09/2007, 08:00GFAL, or Grid File Access Library, is a C library developed by LCG to give a uniform POSIX interface to local and remote Storage Elements on the Grid. LCG-Util is a set of tools to copy/replicate/delete files and register them in a Grid File Catalog. In order to match experiment requirements, these two components had to evolve. Thus, the new Storage ...Go to contribution page
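The value of a uniform POSIX-style layer is that client code never has to branch on where a file actually lives. The toy Python sketch below (hypothetical class and function names; the real GFAL is a C library) conveys the idea by dispatching an open() call on the URL scheme.

    from urllib.parse import urlparse

    class LocalBackend:
        def open(self, path, mode="rb"):
            return open(path, mode)

    class RemoteBackend:          # stand-in for an SRM/gsiftp-capable backend
        def open(self, path, mode="rb"):
            raise NotImplementedError("would negotiate a transfer URL here")

    BACKENDS = {"file": LocalBackend(), "srm": RemoteBackend()}

    def grid_open(url, mode="rb"):
        """POSIX-style open that hides whether the file is local or on an SE."""
        scheme = urlparse(url).scheme or "file"
        return BACKENDS[scheme].open(urlparse(url).path, mode)

    # with grid_open("file:///etc/hostname") as f:
    #     print(f.read())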
-
Ted Hesselroth (Fermi National Accelerator Laboratory)05/09/2007, 08:00gPlazma is the authorization mechanism for the distributed storage system dCache. Clients are authorized based on a grid proxy and may be allowed various privileges based on a role contained in the proxy. Multiple authorization mechanisms may be deployed through gPlazma, such as legacy dcache-kpwd, grid-mapfile, grid-vorolemap, or GUMS. Site-authorization through SAZ is also supported....Go to contribution page
-
Mr Laurence Field (CERN)05/09/2007, 08:00Grid Information Systems are mission-critical components for production grid infrastructures. They provide detailed information which is needed for the optimal distribution of jobs, data management and overall monitoring of the Grid. As the number of sites within these infrastructures continues to grow, it must be understood whether the current systems have the capacity to handle the extra...Go to contribution page
-
Alexandre Vaniachine (Argonne National Laboratory)05/09/2007, 08:00To process the vast amount of data from high energy physics experiments, physicists rely on Computational and Data Grids; yet, the distribution, installation, and updating of a myriad of different versions of different programs over the Grid environment is complicated, time-consuming, and error-prone. We report on the development of a Grid Software Installation Management Framework...Go to contribution page
-
Ms Alessandra Forti (University of Manchester)05/09/2007, 08:00A System Management Working Group (SMWG) of system administrators from HEPiX and grid sites has been set up to address the fabric management problems that HEP sites might have. The group is open and its goal is not to implement new tools but to share what is already in use at sites according to existing best practices. Some sites are already publicly sharing their tools and sensors and some other...Go to contribution page
-
Prof. Nobuhiko Katayama (High Energy Accelerator Research Organization)05/09/2007, 08:00The Belle experiment operates at the KEKB accelerator, a high luminosity asymmetric energy e+ e- collider. The Belle collaboration studies CP violation in decays of B meson to answer one of the fundamental questions of Nature, the matter-anti-matter asymmetry. Currently, Belle accumulates more than one million B Bbar meson pairs that correspond to about 1.2 TB of raw data in one...Go to contribution page
-
Alfonso Mantero (INFN Genova)05/09/2007, 08:00A component of the Geant4 toolkit is responsible for the simulation of atomic relaxation: it is part of a modelling approach of electromagnetic interactions that takes into account the detailed atomic structure of matter, by describing particle interactions at the level of the atomic shells of the target material. The accuracy of Geant4 Atomic Relaxation has been evaluated against the...Go to contribution page
-
Dr Daniela Rebuzzi (INFN Pavia and Pavia University)05/09/2007, 08:00The Atlas Muon Spectrometer is designed to reach a very high transverse momentum resolution for muons in a pT range extending from 6 GeV/c up to 1 TeV/c. The most demanding design goal is an overall uncertainty of 50 microns on the sagitta of a muon with pT = 1 TeV/c. Such precision requires an accurate control of the positions of the muon detectors and of their movements during the...Go to contribution page
-
Aatos Heikkinen (Helsinki Institute of Physics, HIP)05/09/2007, 08:00We introduce a new implementation of the Liege cascade INCL4 with ABLA evaporation in Geant4. INCL4 treats hadron, deuterium, tritium, and helium beams up to 3 GeV energy, while ABLA provides treatment of light evaporation residues. The physics models in INCL4 and ABLA are reviewed with a focus on recent additions. Implementation details, such as the first version of object oriented...Go to contribution page
-
Timur Perelmutov (FERMI NATIONAL ACCELERATOR LABORATORY)05/09/2007, 08:00The Storage Resource Manager (SRM) and WLCG collaborations recently defined version 2.2 of the SRM protocol, with the goal of satisfying the requirements of the LHC experiments. The dCache team has now finished the implementation of all SRM v2.2 elements required by the WLCG. The new functions include space reservation, more advanced data transfer, and new namespace and permission...Go to contribution page
-
Mr Thomas Doherty (University of Glasgow)05/09/2007, 08:00AMI is an application which stores and allows access to dataset metadata for the ATLAS experiment. It provides a set of generic tools for managing database applications. It has a three-tier architecture with a core that supports a connection to any RDBMS using JDBC and SQL. The middle layer assumes that the databases have an AMI compliant self-describing structure. It provides a...Go to contribution page
-
Mr Jay Packard (BNL)05/09/2007, 08:00Identity mapping is necessary when a site's resources do not use GRID credentials natively, but instead use a different mechanism to identify users, such as UNIX accounts or Kerberos principals. In these cases, the GRID credential for each incoming job must be associated with an appropriate site credential. Many sites consist of a heterogeneous environment with multiple gatekeepers, which...Go to contribution page
-
Akos Frohner (CERN)05/09/2007, 08:00The goal of the Medical Data Management (MDM) task is to provide secure (encrypted and under access control) access to medical images, which are stored at hospitals in DICOM servers or are replicated to standard grid Storage Elements (SE) elsewhere. In gLite 3.0 there are three major components to satisfy the requirements: The dCache/DICOM SE is a special SE, which...Go to contribution page
-
Dr Robert Harakaly (CERN)05/09/2007, 08:00Configuration is an essential part of the deployment process of any software product. In the case of Grid middleware the variety and complexity of grid services coupled with multiple deployment scenarios make the provision of a coherent configuration both more important and more difficult. The configuration system must provide a simple interface which strikes a balance between the...Go to contribution page
-
Dr Iosif Legrand (CALTECH)05/09/2007, 08:00MonALISA (Monitoring Agents in A Large Integrated Services Architecture) provides a distributed service for monitoring, control and global optimization of complex systems including the grids and networks used by the LHC experiments. MonALISA is based on an ensemble of autonomous multi-threaded, agent-based subsystems which are able to collaborate and cooperate to perform a wide range of...Go to contribution page
-
Gianluca Castellani (CERN)05/09/2007, 08:00Facilities offered by WLCG are extensively used by LHCb in all aspects of their computing activity. Real-time knowledge of the status of all Grid components involved is needed to optimize their exploitation. This is achieved by employing different monitoring services, each one supplying a specific overview of the system. SAME tests are used in LHCb for monitoring the status of CE...Go to contribution page
-
Dr Paul Millar (GridPP)05/09/2007, 08:00Computing resources in HEP are increasingly delivered utilising grid technologies, which presents new challenges in terms of monitoring. Monitoring involves the flow of information between different communities: the various resource-providers and the different user communities. The challenge is providing information so everyone can find what they need: from the local site administrators,...Go to contribution page
-
Dr Sergio Andreozzi (INFN-CNAF)05/09/2007, 08:00GridICE is an open source distributed monitoring tool for Grid systems that is integrated in the gLite middleware and provides continuous monitoring of the EGEE infrastructure. The main goals of GridICE are: to provide both a summary and a detailed view of the status and availability of Grid resources, to highlight a number of pre-defined fault situations and to present usage information. In...Go to contribution page
-
Mr Sylvain Reynaud (IN2P3/CNRS)05/09/2007, 08:00Advanced capabilities available in today's batch systems are fundamental for operators of high-performance computing centers in order to provide a high-quality service to their local users. Existing middleware stacks allow sites to expose grid-enabled interfaces to the basic functionalities offered by the site’s computing service. However, they do not provide enough mechanisms for...Go to contribution page
-
Dr Graeme Stewart (University of Glasgow)05/09/2007, 08:00When operational, the Large Hadron Collider experiments at CERN will collect tens of petabytes of physics data per year. The worldwide LHC computing grid (WLCG) will distribute this data to over two hundred Tier-1 and Tier-2 computing centres, enabling particle physicists around the globe to access the data for analysis. Different middleware solutions exist for effective management of...Go to contribution page
-
Mr Martin Radicke (DESY Hamburg)05/09/2007, 08:00The dCache software has become a major storage element in the WLCG, providing high-speed file transfers by caching datasets on potentially thousands of disk servers in front of tertiary storage. Currently dCache's model of separately connecting all disk servers to the tape backend, which leads to locally controlled flush and restore behavior, has shown some inefficiencies with respect to tape drive...Go to contribution page
-
Dr Marco La Rosa (The University of Melbourne)05/09/2007, 08:00With the proliferation of multi-core x86 processors, it is reasonable to ask whether the supporting infrastructure of the system (memory bandwidth, IO bandwidth, etc.) can handle as many jobs as there are cores. Furthermore, are traditional benchmarks like SpecINT and SpecFloat adequate for assessing multi-core systems in real computing situations? In this paper we present the results of...Go to contribution page
-
Michal Kwiatek (CERN)05/09/2007, 08:00For many years at CERN we had a very sophisticated print server infrastructure which supported several different protocols (AppleTalk, IPX and TCP/IP) and many different printing standards. Today’s situation differs a lot: we have a much more homogeneous network infrastructure, where TCP/IP is used everywhere, and we have fewer printer models, almost all of which work with current standards...Go to contribution page
-
Mr Alexander Kulyavtsev (FNAL)05/09/2007, 08:00dCache is a distributed storage system which today stores and serves petabytes of data in several large HEP experiments. Resilient dCache is a top level service within dCache, created to address reliability and file availability issues when storing data for extended periods of time on disk-only storage systems. The Resilience Manager automatically keeps the number of copies within...Go to contribution page
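The heart of such a resilience service is a loop that keeps each file's replica count inside a configured window, copying when a file is under-replicated and deleting when it is over-replicated. A minimal sketch of that logic, with invented pool names and no claim to match the actual Resilience Manager:

    # Toy resilience loop: keep each file's replica count within [n_min, n_max].
    def adjust_replicas(replicas, pools, n_min=2, n_max=3):
        """replicas: list of pool names currently holding the file."""
        actions = []
        while len(replicas) < n_min:                 # under-replicated: copy
            target = next(p for p in pools if p not in replicas)
            replicas.append(target)
            actions.append(("copy", target))
        while len(replicas) > n_max:                 # over-replicated: delete
            victim = replicas.pop()
            actions.append(("delete", victim))
        return actions

    pools = ["pool1", "pool2", "pool3", "pool4"]
    print(adjust_replicas(["pool1"], pools))                             # one copy
    print(adjust_replicas(["pool1", "pool2", "pool3", "pool4"], pools))  # one delete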
-
Dr Gregory Dubois-Felsmann (SLAC)05/09/2007, 08:00The BaBar experiment currently uses approximately 4000 KSI2k on dedicated Tier 1 and Tier 2 compute farms to produce Monte Carlo events and to create analysis datasets from detector and Monte Carlo events. This need will double in the next two years requiring additional resources. We describe enhancements to the BaBar experiment's distributed system for the creation of skimmed...Go to contribution page
-
Dr Maria Grazia Pia (INFN GENOVA)05/09/2007, 08:00Journal publication plays a fundamental role in scientific research, and has practical effects on researchers’ academic careers and standing with funding agencies. An analysis is presented, also based on the author’s experience as a member of the Editorial Board of a major journal in Nuclear Technology, of publications about high energy physics computing in refereed journals. The statistical...Go to contribution page
-
Prof. Sridhara Dasu (University of Wisconsin)05/09/2007, 08:00We describe the ideas and present performance results from a rapid-response adaptive computing environment (RACE) that we set up at the UW-Madison CMS Tier-2 computing center. RACE uses Condor technologies to allow rapid response to certain classes of jobs, while temporarily suspending longer-running jobs. RACE allows us to use our entire farm for long-running production jobs, but also...Go to contribution page
-
Sophie Lemaitre (CERN)05/09/2007, 08:00The LFC (LCG File Catalogue) allows retrieving and registering the location of physical replicas in the grid infrastructure, given an LFN (Logical File Name) or a GUID (Grid Unique Identifier). Authentication is based on GSI (Grid Security Infrastructure) and authorization also uses VOMS. The catalogue has been installed at more than 100 sites. It is essential to provide consistent ...Go to contribution page
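The catalogue's basic contract, resolving an LFN or GUID to its physical replicas, can be sketched in a few lines. This is a toy model of the mapping only, not the LFC implementation or its client API:

    # Toy file catalogue: LFN -> GUID -> physical replicas (SURLs).
    class FileCatalogue:
        def __init__(self):
            self.lfn_to_guid = {}
            self.guid_to_replicas = {}

        def register(self, lfn, guid, surl):
            self.lfn_to_guid[lfn] = guid
            self.guid_to_replicas.setdefault(guid, []).append(surl)

        def replicas(self, name):
            guid = self.lfn_to_guid.get(name, name)  # accept LFN or GUID
            return self.guid_to_replicas.get(guid, [])

    cat = FileCatalogue()
    cat.register("/grid/lhcb/run1/raw.dst", "guid-1234",
                 "srm://se.cern.ch/lhcb/raw.dst")
    cat.register("/grid/lhcb/run1/raw.dst", "guid-1234",
                 "srm://se.gridka.de/lhcb/raw.dst")
    print(cat.replicas("/grid/lhcb/run1/raw.dst"))  # both replicas
    print(cat.replicas("guid-1234"))                # same lookup by GUID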
-
Nancy Marinelli (University of Notre Dame)05/09/2007, 08:00A seed/track finding algorithm has been developed for reconstruction of e+e- pairs from converted photons. It combines the information of the electromagnetic calorimeter with the accurate information provided by the tracker. ECAL-seeded track finding is used to locate the approximate vertex of the conversion. Tracks found with this method are then used as input to further inside-out...Go to contribution page
-
Dr Kilian Schwarz (GSI)05/09/2007, 08:00After all LHC experiments managed to run globally distributed Monte Carlo productions on the Grid, the development of tools for equally distributed data analysis now stands in the foreground. To give physicists access to this world, suitable interfaces must be provided. The analysis framework ROOT/PROOF, which enjoys a wide distribution within the HEP community, serves as a starting point....Go to contribution page
-
Dr Andy Buckley (Durham University)05/09/2007, 08:00The Rivet system is a framework for validation of Monte Carlo event generators against archived experimental data, and together with JetWeb and HepData forms a core element of the CEDAR event generator tuning programme. It is also an essential tool in the development of next generation event generators by members of the MCnet network. Written primarily in C++, Rivet provides a uniform...Go to contribution page
-
Emmanuel Ormancey (CERN)05/09/2007, 08:00Nearly every large organization uses a tool to broadcast messages and information across the internal campus (messages like alerts announcing interruptions in services, or just information about upcoming events). The tool typically allows administrators (operators) to send "targeted" messages which are sent only to a specific group of users or computers (for instance only those ones...Go to contribution page
-
Dr Gregory Dubois-Felsmann (SLAC)05/09/2007, 08:00The BaBar experiment needs a fast and efficient procedure for distributing jobs to produce a large number of simulated events for analysis purposes. We discuss the benefits and drawbacks of mapping the traditional production schema onto the grid paradigm, and describe the structure implemented on the standard "public" resources of the INFN-Grid project. Data access/distribution on sites...Go to contribution page
-
Dr Steven Goldfarb (University of Michigan)05/09/2007, 08:00"Shaping Collaboration 2006" was a workshop held in Geneva, on December 11-13, 2006, to examine the status and future of collaborative tool technology and its usage for large global scientific collaborations, such as those of the CERN LHC (Large Hadron Collider). The workshop brought together some of the leading experts in the field of collaborative tools (WACE 2006) with physicists and...Go to contribution page
-
Dr Yaodong Cheng (Institute of High Energy Physics,Chinese Academy of Sciences)05/09/2007, 08:00Currently more and more heterogeneous resources are integrated into LCG. Sharing LCG files across different platforms, including different OSes and grid middlewares, is a basic issue. We implemented a web service interface for LFC and simulated an LCG file access client using the Globus Java CoG Kit.Go to contribution page
-
Dr Dorian Kcira (University of Louvain)05/09/2007, 08:00With a total area of more than 200 square meters and about 16000 silicon detectors the Tracker of the CMS experiment will be the largest silicon detector ever built. The CMS silicon Tracker will detect charged tracks and will play a determining role in lepton reconstruction and heavy-flavour quark tagging. A general overview of the Tracker data handling software, which allows the...Go to contribution page
-
Dr Paul Miyagawa (University of Manchester)05/09/2007, 08:00The ATLAS solenoid produces a magnetic field which enables the Inner Detector to measure track momentum by track curvature. This solenoidal magnetic field was measured using a rotating-arm mapping machine and, after removing mapping machine effects, has been understood to the 0.05% level. As tracking algorithms require the field strength at many different points, the representation of...Go to contribution page
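One common representation choice for such a measured field is a lookup table on a grid of map points with interpolation between them, trading memory for evaluation speed at the many points tracking needs. A minimal 2D sketch with invented Bz values (not the ATLAS field service):

    # Toy (r, z) field-map lookup: store Bz on a coarse grid and interpolate
    # bilinearly between grid points instead of evaluating a full field model.
    def interpolate(grid, r, z, dr, dz):
        i, j = int(r // dr), int(z // dz)
        fr, fz = r / dr - i, z / dz - j
        return ((1 - fr) * (1 - fz) * grid[i][j]
                + fr * (1 - fz) * grid[i + 1][j]
                + (1 - fr) * fz * grid[i][j + 1]
                + fr * fz * grid[i + 1][j + 1])

    # 3x3 grid of Bz values (tesla), spacing 50 cm in r and z (invented).
    grid = [[2.00, 1.98, 1.90],
            [1.99, 1.97, 1.89],
            [1.95, 1.93, 1.85]]
    print(round(interpolate(grid, r=25.0, z=75.0, dr=50.0, dz=50.0), 4))  # 1.935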
-
Dr Pavel Nevski (Brookhaven National Laboratory (BNL))05/09/2007, 08:00In order to be ready for physics analysis, the ATLAS experiment is running a worldwide Monte Carlo production for many different physics samples with different detector conditions. Job definition is the starting point of the ATLAS production system. It is a common interface for the ATLAS community to submit jobs for processing by the Distributed production system used for all...Go to contribution page
-
Robert Petkus (Brookhaven National Laboratory)05/09/2007, 08:00The RHIC/USATLAS Computing Facility at BNL has evaluated high-performance, low-cost storage solutions in order to complement a substantial distributed file system deployment of dCache (>400 TB) and xrootd (>130 TB). Currently, these file systems are spread across disk-heavy computational nodes providing over 1.3 PB of aggregate local storage. While this model has proven sufficient to...Go to contribution page
-
Dr Andrea Sciabà (CERN)05/09/2007, 08:00The main goal of the Experiment Integration and Support (EIS) team in WLCG is to help the LHC experiments use the gLite middleware proficiently as part of their computing frameworks. This contribution gives an overview of the activities of the EIS team, and focuses on a few of them that are particularly important for the experiments. One activity is the evaluation of the gLite workload...Go to contribution page
-
Prof. Vladimir Ivantchenko (CERN, ESA)05/09/2007, 08:00The testing suite for validation of Geant4 hadronic generators with the data of thin target experiments is presented. The results of comparisons with neutron and pion production data are shown for different Geant4 hadronic generators for the beam momentum interval 0.5 – 12.9 GeV/c.Go to contribution page
-
Tapio Lampen (Helsinki Institute of Physics HIP)05/09/2007, 08:00We demonstrate the use of a ROOT Toolkit for Multivariate Data Analysis (TMVA) in tagging b-jets associated with heavy neutral MSSM Higgs bosons at the LHC. The associated b-jets can be used to extract Higgs events from the Drell-Yan background, for which the associated jets are mainly light quark and gluon jets. TMVA provides an evaluation for different multivariate classification...Go to contribution page
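As an illustration of the kind of classifier such a toolkit evaluates, the sketch below trains a simple Fisher discriminant on two invented jet variables and reports b-tag efficiency and mistag rate at a naive cut. All numbers are made up for illustration; TMVA itself offers many more methods and a proper evaluation machinery.

    import numpy as np

    # Two toy discriminating variables per jet, e.g. impact-parameter
    # significance and secondary-vertex mass (purely illustrative values).
    rng = np.random.default_rng(42)
    sig = rng.normal([3.0, 2.0], [1.0, 0.8], size=(1000, 2))   # b-jets
    bkg = rng.normal([0.5, 0.7], [1.0, 0.8], size=(1000, 2))   # light jets

    # Fisher discriminant: w ~ (C_sig + C_bkg)^-1 (mu_sig - mu_bkg)
    cov = np.cov(sig.T) + np.cov(bkg.T)
    w = np.linalg.solve(cov, sig.mean(axis=0) - bkg.mean(axis=0))

    score_sig, score_bkg = sig @ w, bkg @ w
    cut = 0.5 * (score_sig.mean() + score_bkg.mean())
    eff = (score_sig > cut).mean()       # b-tag efficiency
    mistag = (score_bkg > cut).mean()    # light-jet mistag rate
    print(f"efficiency {eff:.2f}, mistag {mistag:.2f}")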
-
Suren Chilingaryan (The Institute of Data Processing and Electronics, Forschungszentrum Karlsruhe)05/09/2007, 08:00For reliable and timely forecasts of dangerous Space Weather conditions, world-wide networks of particle detectors are located at different latitudes, longitudes and altitudes. To provide better integration of these networks, the DAS (Data Acquisition System) faces the challenge of establishing reliable data exchange between multiple network nodes which are often located in hardly...Go to contribution page
-
Dr Solveig Albrand (LPSC/IN2P3/UJF Grenoble France)05/09/2007, 08:00AMI was chosen as the ATLAS dataset selection interface in July 2006. It should become the main interface for searching for ATLAS data using physics metadata criteria. AMI has been implemented as a generic database management framework which allows parallel searching over many catalogues, which may have differing schema. The main features of the web interface will be described; in...Go to contribution page
-
Dr Andy Buckley (Durham University)05/09/2007, 08:00Monte Carlo event generators are an essential tool for modern particle physics; they simulate aspects of collider events ranging from the parton-level "hard process" to cascades of QCD radiation in both initial and final states, non-perturbative hadronization processes, underlying event physics and specific particle decays. LHC events in particular are so complex that event generator...Go to contribution page
-
Dr Daniele Bonacorsi (INFN-CNAF, Bologna, Italy)05/09/2007, 08:00Early in 2007 the CMS experiment deployed a traffic load generator infrastructure, aimed at providing CMS Computing Centers (Tiers of the WLCG) with a means for debugging, load-testing and commissioning data transfer routes among them. The LoadTest is built upon, and relies on, the PhEDEx dataset transfer tool as a reliable data replication system in use by CMS. On top of PhEDEx, the CMS...Go to contribution page
-
Dr Andrew McNab (University of Manchester)05/09/2007, 08:00We describe the operation of www.gridpp.ac.uk, the website provided for GridPP and its precursor, UK HEP Grid, since 2000, and explain the operational procedures of the service and the various collaborative tools and components that were adapted or developed for use on the site. We pay particular attention to the security issues surrounding such a prominent site, and how the GridSite...Go to contribution page
-
Dr Raja Nandakumar (Rutherford Appleton Laboratory)05/09/2007, 08:00The worldwide computing grid is essential to the LHC experiments in analysing the data collected by the detectors. Within LHCb, the computing model aims to simulate data at Tier-2 grid sites as well as on non-grid resources. The reconstruction, stripping and analysis of the produced LHCb data will primarily take place at the Tier-1 centres. The computing data challenge DC06 started in May 2006...Go to contribution page
-
Mr Rudolf Frühwirth (Inst. of High Energy Physics, Vienna)05/09/2007, 08:00We present the "LiC Detector Toy" ("LiC" for Linear Collider) program, a simple but powerful software tool for detector design, modification and geometry studies. It allows the user to determine the resolution of reconstructed track parameters for the purpose of comparing and optimizing various detector set-ups. It consists of a simplified simulation of the detector measurements, taking...Go to contribution page
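The principle of such a detector toy can be shown compactly: simulate smeared hits for a known track, fit them, and take the spread of the fitted parameters as the achievable resolution of that set-up. A minimal sketch with straight tracks and an invented geometry (far simpler than the actual program):

    import random

    # Fit y = a + b*x by least squares and return the slope b.
    def fit_line(xs, ys):
        n = len(xs)
        sx, sy = sum(xs), sum(ys)
        sxx = sum(x * x for x in xs)
        sxy = sum(x * y for x, y in zip(xs, ys))
        return (n * sxy - sx * sy) / (n * sxx - sx * sx)

    layers = [10.0 * i for i in range(1, 7)]      # 6 layers, 10 cm apart
    sigma_hit = 0.002                             # 20 um point resolution (cm)
    random.seed(1)
    slopes = []
    for _ in range(2000):
        hits = [0.1 * x + random.gauss(0.0, sigma_hit) for x in layers]
        slopes.append(fit_line(layers, hits))
    mean = sum(slopes) / len(slopes)
    rms = (sum((s - mean) ** 2 for s in slopes) / len(slopes)) ** 0.5
    print(f"slope resolution: {rms:.2e}")  # compare between detector variants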
-
Mr Antonio Retico (CERN)05/09/2007, 08:00The WLCG/EGEE Pre-Production Service (PPS) is a grid infrastructure whose goal is to give early access to new services to WLCG/EGEE users, in order to evaluate new features and changes in the middleware before new versions are actually deployed in production. The PPS grid counts about 30 sites providing resources and manpower. The service contributes to the overall quality of the grid...Go to contribution page
-
Dr Winfried A. Mitaroff (Institute of High Energy Physics (HEPHY) of the Austrian Academy of Sciences, Vienna)05/09/2007, 08:00A detector-independent toolkit (RAVE) is being developed for the reconstruction of the common interaction vertices from a set of reconstructed tracks. It deals both with "finding" (pattern recognition of track bundles) and with "fitting" (estimation of vertex position and track momenta). The algorithms used so far include robust adaptive filters which are derived from the CMS...Go to contribution page
-
Dr Fabio Cossutti (INFN)05/09/2007, 08:00The CMS Collaboration has developed a detailed simulation of the electromagnetic calorimeter (ECAL), which has been fully integrated in the collaboration software framework CMSSW. The simulation is based on the Geant4 detector simulation toolkit for the modelling of the passage of particles through matter and magnetic field. The geometrical description of the detector is being...Go to contribution page
-
Dr Sergio Andreozzi (INFN-CNAF)05/09/2007, 08:00A key advantage of Grid systems is the capability of sharing heterogeneous resources and services across traditional administrative and organizational domains. This capability enables the creation of virtual pools of resources that can be assigned to groups of users. One of the problems that the utilization of such pools presents is the awareness of the resources, i.e., the fact that...Go to contribution page
-
Mr Riccardo Zappi (INFN-CNAF)05/09/2007, 08:00In Grid systems, a core resource being shared among geographically-dispersed communities of users is storage. For this resource, a standard interface specification (Storage Resource Management or SRM) was defined and is being evolved in the context of the Open Grid Forum. By implementing this interface, all storage resources that are part of a Grid can be managed in a homogeneous fashion. In...Go to contribution page
-
Dr Piergiulio Lenzi (Dipartimento di Fisica)05/09/2007, 08:00The first application of one of the official CMS tracking algorithms, known as the Combinatorial Track Finder, to cosmic muon real data is described. The CMS tracking system consists of a silicon pixel vertex detector and a surrounding silicon microstrip detector. The silicon strip tracker consists of 10 barrel layers and 12 endcap disks on each side. The system is currently going through...Go to contribution page
-
Dr Andrea Fontana (INFN-Pavia)05/09/2007, 08:00The concept of the Virtual Monte Carlo allows the use of different Monte Carlo programs to simulate particle physics detectors without changing the geometry definition or the detector response simulation. In this context, to study the reconstruction capabilities of a detector, a tool is needed to extrapolate the track parameters and their associated errors due to magnetic field,...Go to contribution page
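Track extrapolation in a magnetic field ultimately integrates the equation of motion dp/ds ∝ q p̂ × B along the trajectory. The sketch below performs one fourth-order Runge-Kutta step for position and momentum; it is a generic textbook integrator under simplified assumptions (uniform field, no material effects), not the tool described above.

    # dp/ds = 0.2998 * q * (p_hat x B), with p in GeV/c, B in T, s in m.
    def lorentz(q, p, B):
        k = 0.299792458 * q / (sum(pi * pi for pi in p) ** 0.5)
        return [k * (p[1] * B[2] - p[2] * B[1]),
                k * (p[2] * B[0] - p[0] * B[2]),
                k * (p[0] * B[1] - p[1] * B[0])]

    def rk4_step(x, p, B, q, h):
        def deriv(x_, p_):
            pmag = sum(pi * pi for pi in p_) ** 0.5
            return [pi / pmag for pi in p_], lorentz(q, p_, B)  # dx/ds, dp/ds
        k1x, k1p = deriv(x, p)
        k2x, k2p = deriv([xi + 0.5*h*d for xi, d in zip(x, k1x)],
                         [pi + 0.5*h*d for pi, d in zip(p, k1p)])
        k3x, k3p = deriv([xi + 0.5*h*d for xi, d in zip(x, k2x)],
                         [pi + 0.5*h*d for pi, d in zip(p, k2p)])
        k4x, k4p = deriv([xi + h*d for xi, d in zip(x, k3x)],
                         [pi + h*d for pi, d in zip(p, k3p)])
        x = [xi + h/6*(a + 2*b + 2*c + d)
             for xi, a, b, c, d in zip(x, k1x, k2x, k3x, k4x)]
        p = [pi + h/6*(a + 2*b + 2*c + d)
             for pi, a, b, c, d in zip(p, k1p, k2p, k3p, k4p)]
        return x, p

    # 1 GeV/c track in a 2 T solenoid field, 1 cm step:
    print(rk4_step([0, 0, 0], [1.0, 0.0, 0.0], B=[0, 0, 2.0], q=1, h=0.01))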
-
Dr Gabriele Compostella (University Of Trento INFN Padova)05/09/2007, 08:00When the CDF experiment was developing its software infrastructure, most computing was done on dedicated clusters. As a result, libraries, configuration files, and large executables were deployed over a shared file system. As CDF started to move into the Grid world, the assumption of having a shared file system showed its limits. In a widely distributed computing model, such as the...Go to contribution page
-
Don Petravick (FNAL)05/09/2007, 08:00Computing in High Energy Physics and other sciences is quickly moving toward the Grid paradigm, with resources being distributed over hundreds of independent pools scattered over the five continents. The transition from a tightly controlled, centralized computing paradigm to a shared, widely distributed model, while bringing many benefits, has also introduced new problems, a major one...Go to contribution page
-
Mr Andreas Weindl (FZ Karlsruhe / IK), Dr Harald Schieler (FZ Karlsruhe / IK)05/09/2007, 08:00The KASCADE-Grande experiment is a multi-detector installation at the site of the Forschungszentrum Karlsruhe, Germany, to measure and study extensive air showers induced in the atmosphere by primary cosmic rays in the energy range from 10^14 to 10^18 eV. For three of the detector components, Web-based online event displays have been implemented. They provide in a fast and simplified way...Go to contribution page
-
Wilko Kroeger (SLAC)05/09/2007, 08:00The BaBar experiment stores its reconstructed event data in ROOT files which amount to more than one petabyte and more than two million files. All the data are stored in the mass storage system (HPSS) at SLAC, and part of the data is exported to Tier-A sites. Fast and reliable access to the data is provided by xrootd at all sites. It integrates with a mass storage system, and files that...Go to contribution page
-
Alberto Pace (CERN)05/09/2007, 08:30
-
Mr Laurence Field (CERN)05/09/2007, 09:00Over recent years a number of grid projects have emerged which have built grid infrastructures that are now the computing backbones for various user communities. A significant number of these user communities are artificially limited to only one grid due to the different middleware used in each grid project. Grid interoperation is trying to bridge these differences and enable virtual...Go to contribution page
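Much of grid interoperation reduces, in practice, to translating one middleware's job or resource description into another's. A toy adapter makes the pattern concrete; the attribute names and output dialects below are invented for illustration, not taken from any particular middleware.

    # A neutral job description, rendered into two middleware dialects.
    JOB = {"executable": "sim.sh", "cpu_time": 3600, "output": "hits.root"}

    def to_dialect_a(job):   # e.g. a JDL-like text rendering
        return 'Executable = "%s";\nMaxCPUTime = %d;' % (
            job["executable"], job["cpu_time"])

    def to_dialect_b(job):   # e.g. an XML-like rendering
        return "<job><exe>%s</exe><cputime>%d</cputime></job>" % (
            job["executable"], job["cpu_time"])

    print(to_dialect_a(JOB))
    print(to_dialect_b(JOB))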
-
Prof. Frank Wuerthwein (UCSD)05/09/2007, 09:30
-
Harvey Newman (California Institute of Technology (CALTECH))05/09/2007, 11:00Networks of sufficient and rapidly increasing end-to-end capability, as well as a high degree of reliability, are vital for the LHC and other major HEP programs. Our bandwidth usage on the major national backbones and intercontinental links used by our field has progressed by a factor of several hundred over the past decade, and the outlook is for a similar increase over the next decade. This...Go to contribution page
-
S. Pawlowski (Intel)05/09/2007, 11:30Dozens of cores will not be a dream. Multiple processor cores drive energy efficient performance for highly parallel applications. However, looking beyond cores, achieving balanced high performance throughput has many challenges. Intel Senior Fellow and CTO of Digital Enterprise Group Steve Pawlowski will provide his technology vision to address bandwidth, capacity and power needs on...Go to contribution page
-
Mr Sverre Jarp (CERN)05/09/2007, 12:00In the CERN openlab we have looked at how well LHC software matches the execution capabilities of current and, to some extent, future processors. Thanks to current silicon processes, transistor counts in the billions (10^9) have become commonplace and microprocessor manufacturers have been deploying transistors in multiple ways to increase performance. In this talk I will review the...Go to contribution page
-
Paul Nilsson (UT-Arlington)05/09/2007, 14:00The PanDA software provides a highly performant distributed production and distributed analysis system. It is the first system in the ATLAS experiment to use a pilot-based late job delivery technique. In this talk, we will describe the architecture of the pilot system used in PanDA. Unique features have been implemented for highly reliable automation in a distributed environment....Go to contribution page
-
Marco Clemencic (European Organization for Nuclear Research (CERN))05/09/2007, 14:00The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The...Go to contribution page
-
Dirk Duellmann (CERN)05/09/2007, 14:00Relational database services are a key component of the computing models for the Large Hadron Collider (LHC). A large proportion of non-event data including detector conditions, calibration, geometry and production bookkeeping metadata require reliable storage and query services in the LHC Computing Grid (LCG). Also core grid services to catalogue and distribute data cannot operate...Go to contribution page
-
Lorenzo Moneta (CERN)05/09/2007, 14:00Advanced mathematical and statistical computational methods are required by the LHC experiments to analyze their data. These methods are provided by the Math work package of the ROOT project. We present an overview of the recent developments of this work package, describing in detail the restructuring of the core mathematical library into a coherent set of new C++ classes and...Go to contribution page
-
Dr Matthias Wittgen (SLAC)05/09/2007, 14:00The BaBar slow control system uses EPICS (Experimental Physics and Industrial Control System) running on 17 VME based single board computers (SBCs). EPICS supports the real-time operating systems vxWorks and RTEMS. During the 2004/05 shutdown BaBar started to install a new detector component, the Limited Streamer Tubes (LST), adding over 20000 high voltage channels and about 350...Go to contribution page
-
Boris Mangano (University of California, San Diego)05/09/2007, 14:00With nominal collision energies of 14 TeV at luminosities of 10^34 cm^-2 s^-1, the LHC will explore energies an order of magnitude higher than colliders before. This poses big challenges for the tracking system and the tracking software to reconstruct tracks in the primary collision and the ~20 underlying events. CMS has built a full silicon tracking system consisting of an inner pixel...Go to contribution page
-
Dr Lee Lueking (FERMILAB)05/09/2007, 14:20The CMS experiment at the LHC has established an infrastructure using the FroNTier framework to deliver conditions (i.e. calibration, alignment, etc.) data to processing clients worldwide. FroNTier is a simple web service approach providing client HTTP access to a central database service. The system for CMS has been developed to work with POOL which provides object relational mapping...Go to contribution page
-
Dr Stuart Paterson (CERN)05/09/2007, 14:20The LHCb DIRAC Workload and Data Management System employs advanced optimization techniques in order to dynamically allocate resources. The paradigms realized by DIRAC, such as late binding through the Pilot Agent approach, have proven to be highly successful. For example, this has allowed the principles of workload management to be applied not only at the time of user job submission to...Go to contribution page
-
Dr Xavier Espinal (PIC/IFAE)05/09/2007, 14:20In preparation for first data at the LHC, a series of Data Challenges, of increasing scale and complexity, have been performed. Large quantities of simulated data have been produced on three different Grids, integrated into the ATLAS production system. During 2006, the emphasis moved towards providing stable continuous production, as is required in the immediate run-up to first data, and...Go to contribution page
-
Mr Federico Carminati (CERN)05/09/2007, 14:20Since 1998 the ALICE Offline Project has developed an integrated offline framework (AliRoot) and a distributed computing environment (AliEn) to process the data of the ALICE experiment. These systems are integrated with the LCG computing infrastructure, and in particular with the ROOT system and with the WLCG Grid middleware, but they also present a number of original solutions, which...Go to contribution page
-
Mr Filimon Roukoutakis (CERN & University of Athens)05/09/2007, 14:20ALICE is one of the experiments under installation at the CERN Large Hadron Collider, dedicated to the study of Heavy-Ion Collisions. The final ALICE Data Acquisition system has been installed and is being used for the testing and commissioning of detectors. Data Quality Monitoring (DQM) is an important aspect of the online procedures for a HEP experiment. In this presentation we overview the...Go to contribution page
-
Mr Sergio Gonzalez-Sevilla (Instituto de Fisica Corpuscular (IFIC) UV-CSIC)05/09/2007, 14:20It is foreseen that the Large Hadron Collider will start its operations and collide proton beams during November 2007. ATLAS is one of the four LHC experiments currently under preparation. The alignment of the ATLAS tracking system is one of the challenges that the experiment must solve in order to achieve its physics goals. The tracking system comprises two silicon technologies: pixel...Go to contribution page
-
Mr Serguei Kolos (University of California Irvine)05/09/2007, 14:35Data Quality Monitoring (DQM) is an important and integral part of the data taking and data reconstruction of HEP experiments. In an online environment, DQM provides the shift crew with live information beyond basic monitoring. This is used to overcome problems promptly and help avoid taking faulty data. During the off-line reconstruction DQM is used for more complex analysis of physics...Go to contribution page
-
Mr Jose Hernandez Calama (CIEMAT)05/09/2007, 14:40Monte Carlo production in CMS has received a major boost in performance and scale since last CHEP conference. The production system has been re-engineered in order to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two...Go to contribution page
-
Alexandre Vaniachine (Argonne National Laboratory)05/09/2007, 14:40In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and...Go to contribution page
-
Dr Yuri Fisyak (BROOKHAVEN NATIONAL LABORATORY)05/09/2007, 14:40The STAR experiment was primarily designed to detect signals of a possible phase transition in nuclear matter. Its layout, typical for a collider experiment, contains a large Time Projection Chamber (TPC) in a Solenoid Magnet, a set of four layers of combined silicon strip and silicon drift detectors for secondary vertex reconstruction plus other detectors. In this presentation, we will...Go to contribution page
-
Florbela Viegas (CERN)05/09/2007, 14:40The ATLAS experiment at LHC will make extensive use of relational databases in both online and offline contexts, running to O(TBytes) per year. Two of the most challenging applications in terms of data volume and access patterns are conditions data, making use of the LHC conditions database, COOL, and the TAG database, that stores summary event quantities allowing a rapid selection of...Go to contribution page
-
Mr Marco Cecchi (INFN cnaf)05/09/2007, 14:40The gLite Workload Management System (WMS) is a collection of components providing a service responsible for the distribution and management of tasks across resources available on a Grid. The main purpose is to accept a request of execution of a job from a client, find appropriate resources to satisfy it and follow it until completion. Different aspects of job management are accomplished...Go to contribution page
-
Dr William Badgett (Fermilab)05/09/2007, 14:50We present the Online Web Based Monitoring (WBM) system of the CMS experiment, consisting of a web services framework based on Jakarta/Tomcat and the Root data display package. Due to security concerns, many monitoring applications of the CMS experiment cannot be run outside of the experimental site. As such, in order to allow remote users access to CMS experimental status information,...Go to contribution page
-
Mr Igor Sfiligoi (FNAL)05/09/2007, 15:00The advent of the Grids has made it possible for any user to run hundreds of thousands of jobs in a matter of days. However, the batch slots are not organized in a common pool, but are instead grouped in independent pools at hundreds of Grid sites distributed among the five continents. A higher-level Workload Management System (WMS) that aggregates resources from many sites is thus...Go to contribution page
-
Dr Markus Stoye (Inst. f. Experimentalphysik, Universitaet Hamburg)05/09/2007, 15:00The CMS silicon tracker comprises about 17000 silicon modules. Its radius and length of 120 cm and 560 cm, respectively, make it the largest silicon tracker ever built. To fully exploit the precise hit measurements, it is necessary to determine the positions and orientations of the silicon modules to the level of μm and μrad, respectively. Among other track based alignment algorithms,...Go to contribution page
-
Maria Girone (CERN)05/09/2007, 15:00Physics meta-data stored in relational databases play a crucial role in the Large Hadron Collider (LHC) experiments and also in the operation of the Worldwide LHC Computing Grid (WLCG) services. A large proportion of non-event data such as detector conditions, calibration, geometry and production bookkeeping relies heavily on databases. Also, the core Grid services that catalogue and...Go to contribution page
-
Dr Douglas Smith (Stanford Linear Accelerator Center)05/09/2007, 15:00There is a need for a large dataset of simulated events for use in analysis of the data from the BaBar high energy physics experiment. The largest cycle of this production in the history of the experiment was just completed in the past year, simulating events against all detector conditions in the history of the experiment, resulting in over eleven billion events in eighteen months. ...Go to contribution page
-
Mr Igor Soloviev (CERN/PNPI)05/09/2007, 15:05This paper describes challenging requirements on the configuration service. It presents the status of the implementation and testing one year before the start of the ATLAS experiment at CERN, providing details of: - capabilities of the underlying OKS* object manager to store and to archive configuration descriptions, its user and programming interfaces; - the organization of configuration...Go to contribution page
-
Dr Martin Weber (RWTH Aachen, Germany)05/09/2007, 15:20The full-silicon tracker of the CMS experiment with its 15148 strip and 1440 pixel modules is of an unprecedented size. For optimal track-parameter resolution, the position and orientation of its modules need to be determined with a precision of a few micrometers. Starting from the inclusion of survey measurements, the use of a hardware alignment system, and track based alignment, this...Go to contribution page
-
Dr Michael Wilson (European Organisation for Nuclear Research (CERN))05/09/2007, 15:20Assessing the quality of data recorded with the Atlas detector is crucial for commissioning and operating the detector to achieve sound physics measurements. In particular, the fast assessment of complex quantities obtained during event reconstruction and the ability to easily track them over time are especially important given the large data throughput and the distributed nature of the...Go to contribution page
-
Smirnov Yuri (Brookhaven National Laboratory)05/09/2007, 15:20The Open Science Grid infrastructure provides one of the largest distributed computing systems deployed in the ATLAS experiment at the LHC. During the CSC exercise in 2006-2007, OSG resources provided about one third of the worldwide distributed computing resources available in ATLAS. About half a petabyte of ATLAS MC data is stored on OSG sites. About 2000k SpecInt2000 of CPU capacity is available....Go to contribution page
-
Ms Helen McGlone (University of Glasgow/CERN)05/09/2007, 15:20The ATLAS TAG database is a multi-terabyte event-level metadata selection system, intended to allow discovery, selection of and navigation to events of interest to an analysis. The TAG database encompasses file- and relational-database-resident event-level metadata, distributed across all ATLAS Tiers. ...Go to contribution page
-
Dr Sanjay Padhi (University of Wisconsin-Madison)05/09/2007, 15:20With the evolution of various Grid technologies, along with the first LHC collisions foreseen this year, a homogeneous and interoperable production system for ATLAS is a necessity. We present CRONUS, a Condor glide-in based ATLAS Production Executor. The Condor glide-in daemons travel to the worker nodes, submitted via Condor-G or the gLite RB. Once activated, they preserve the...Go to contribution page
-
Dr Yao Zhang (Institute of High Energy Physics, Chinese Academy of Sciences)05/09/2007, 15:20The BESIII detector will be commissioned at the upgraded Beijing Electron Positron Collider (BEPCII) at the end of 2007. The drift chamber (MDC), which is one of the most important sub-detectors of the BESIII detector, is expected to provide good momentum resolution (0.5% @ 1 GeV/c) and tracking efficiency in the range 0.1~2.0 GeV/c. This makes stringent demands on the performance of...Go to contribution page
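The quoted momentum resolution can be related to the chamber's sagitta measurement through pT = 0.3·B·R (pT in GeV/c, B in tesla, R in metres) and s = L²/(8R) over a lever arm L; since pT is inversely proportional to the sagitta, dpT/pT roughly follows the relative sagitta error. The numeric sketch below uses illustrative values for the field, lever arm and point accuracy, not BESIII design figures.

    B = 1.0              # tesla (illustrative)
    pT = 1.0             # GeV/c
    R = pT / (0.3 * B)   # bending radius in metres
    L = 0.8              # 80 cm lever arm (illustrative)
    s = L**2 / (8 * R)   # sagitta in metres
    sigma_s = 130e-6     # assumed effective sagitta accuracy, 130 um
    print(f"R = {R:.2f} m, sagitta = {s*1e3:.2f} mm, "
          f"dpT/pT ~ {sigma_s / s:.1%}")   # order of 0.5% at 1 GeV/c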
-
Vardan Gyurjyan (Jefferson Lab)05/09/2007, 15:35AFECS is a pure Java based software framework for designing and implementing distributed control systems. AFECS creates a control system environment as a collection of software agents behaving as finite state machines. These agents can represent real entities, such as hardware devices, software tasks, or control subsystems. A special control oriented ontology language (COOL), based on RDFS...Go to contribution page
-
Dr Steve Fisher (RAL)05/09/2007, 15:40R-GMA, as deployed by LCG, is a large distributed system. We are currently addressing some design issues to make it highly reliable, and fault tolerant. In validating the new design, there were two classes of problems to consider: one related to the flow of data and the other to the loss of control messages. R-GMA streams data from one place to another; there is a need to consider the...Go to contribution page
-
Mr Dave Evans (Fermi National Laboratory)05/09/2007, 15:40The CMS production system has undergone a major architectural upgrade from its predecessor, with the goals of reducing the operations manpower requirement and preparing for the large scale production required by the CMS physics plan. This paper discusses the CMS Monte Carlo Workload Management architecture. The system consist of 3 major components: ProdRequest, ProdAgent, and ProdMgr...Go to contribution page
-
Mr Juan Manuel Guijarro (CERN)05/09/2007, 15:40The Database and Engineering Services Group of CERN's Information Technology Department provides the Oracle based Central Data Base services used in many activities at CERN. In order to provide High Availability and ease management for those services, a NAS (Network Attached Storage) based infrastructure has been set up. It runs several instances of the Oracle RAC (Real...Go to contribution page
-
Dr Conrad Steenberg (Caltech)05/09/2007, 15:40We describe how we have used the Clarens Grid Portal Toolkit to develop powerful application and browser-level interfaces to ROOT and Pythia. The Clarens Toolkit is a codebase that was initially developed under the auspices of the Grid Analysis Environment project at Caltech, with the goal of enabling LHC physicists engaged in analysis to bring the full power of the Grid to their desktops,...Go to contribution page
-
Dr Stefano Spataro (II Physikalisches Institut, Universität Giessen (Germany))05/09/2007, 15:40The PANDA detector will be located at the future GSI accelerator FAIR. Its primary objective is the investigation of strong interaction with anti-proton beams, in the range up to 15 GeV/c as momentum of the incoming anti-proton. The PANDA offline simulation framework is called “PandaRoot”, as it is based upon the ROOT 5.12 package. It is characterized by a high versatility; it allows...Go to contribution page
-
Luca Malgeri (CERN)05/09/2007, 16:30The calibration software framework is a crucial ingredient for all LHC experiments. In this report we focus on the technical challenges of this effort in the CMS experiment. It spans from careful design of the database infrastructure, for quick and safe storing and retrieving of calibration constants, to algorithm optimization to cope with the time and workflow constraints of High...Go to contribution page
-
Mr Philippe Canal (FERMILAB)05/09/2007, 16:30We will review the architecture and implementation of the accounting service for the Open Science Grid. Gratia's main goal is to provide the OSG stakeholders with a reliable and accurate set of views of the usage of resources across the OSG. We will review the status of deployment of Gratia across the OSG and its upcoming development. We will also discuss some aspects of current OSG...Go to contribution page
-
Mr Nicholas Robinson (CERN)05/09/2007, 16:30CERN has long been committed to the free dissemination of scientific research results and theories. Towards this end, CERN's own institutional repository, the CERN Document Server (CDS) offers access to CERN works and to all related scholarly literature in the HEP domain. Hosting over 500 document collections containing more than 900,000 records, CDS provides access to anything from...Go to contribution page
-
Dr Jörg Stelzer (CERN)05/09/2007, 16:30In high-energy physics, with the search for ever smaller signals in ever larger data sets, it has become essential to extract a maximum of the available information from the data. Multivariate classification methods based on machine learning techniques have become a fundamental ingredient to most analyses. Also the multivariate classifiers themselves have significantly evolved in recent...Go to contribution page
-
Mr Philip DeMar (FERMILAB)05/09/2007, 16:30Fermilab hosts the American Tier-1 Center for the LHC/CMS experiment. In preparation for the startup of CMS, and building upon extensive experience supporting TeVatron experiments and other science collaborations, the Laboratory has established high bandwidth, end-to-end (E2E) circuits with a number of US-CMS Tier2 sites, as well as other research facilities in the collaboration. These...Go to contribution page
-
Mrs Maria Del Carmen Barandela Pazos (University of Vigo)05/09/2007, 16:45In a High Energy Physics experiment it is fundamental to handle information related to the status of the detector and its environment at the time of the acquired event. This type of time-varying non-event data are often grouped under the term “conditions”. LHCb’s Experiment Control System groups all the infrastructure for the configuration, control and monitoring of all the...Go to contribution page
-
Dr Denis Bertini (GSI)05/09/2007, 16:50The experiment design studies at FAIR are done using a ROOT-based simulation and analysis framework: FairRoot. The framework uses the Virtual Monte Carlo concept, which allows simulation to be performed using Geant3, Geant4 or Fluka without changing the user code. The same framework is then used for data analysis. An Oracle database with built-in version management is used to...Go to contribution page
-
Martin Flechl (IKP, Uppsala Universitet)05/09/2007, 16:50A Grid is defined as "coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations". Over recent years a number of grid projects, many of which have a strong regional presence, have emerged to help coordinate institutions and enable grids. Today, we face a situation where a number of grid projects exist, most of which have slightly...Go to contribution page
-
Dr Douglas Smith (Stanford Linear Accelerator Center)05/09/2007, 16:50International multi-institutional high energy physics experiments require easy means for collaborators to communicate coherently in a global community. To fill this need, the HyperNews system has been widely used in HEP. HyperNews is a discussion management system which is a hybrid between a web-based forum system and a mailing list system. Its goal is to provide a tool for distributed...Go to contribution page
-
Mr Maxim Grigoriev (FERMILAB)05/09/2007, 16:50The LHC experiments will start very soon, creating immense data volumes capable of demanding allocation of an entire network circuit for task-driven applications. Circuit-based alternate network paths are one solution to meeting the LHC high bandwidth network requirements. The Lambda Station project is aimed at addressing growing requirements for dynamic allocation of alternate network...Go to contribution page
-
O Solovyanov (IHEP, Protvino, Russia)05/09/2007, 17:00An online control system to calibrate and monitor the ATLAS Barrel hadronic calorimeter (TileCal) with a movable radioactive source, driven by liquid flow, is described. To read out and control the system, online software has been developed, using ATLAS TDAQ components such as DVS (Diagnostic and Verification System) to verify the HW before running, IS (Information Server) for data and...Go to contribution page
-
Julia Andreeva (CERN)05/09/2007, 17:10The goal of the Grid is to provide coherent access to distributed computing resources. All LHC experiments are using several Grid infrastructures and a variety of middleware flavors. Due to the complexity and heterogeneity of a distributed system, monitoring represents a challenging task. Independently of the underlying platform, the experiments need to have a complete and uniform...Go to contribution page
-
Dr Matt Crawford (FERMILAB)05/09/2007, 17:10Computer facilities, production grids and networking (oral presentation) Due to shortages of IPv4 address space - real or artificial - many HEP computing installations have turned to NAT and application gateways. These workarounds carry a high cost in application complexity and performance. Recently a few HEP facilities have begun to deploy IPv6 and it is expected that many more must follow within several years. While IPv6 removes the problem of address...Go to contribution page
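Much of the application-level pain disappears when code resolves addresses with protocol-agnostic APIs instead of hard-coding IPv4. A minimal Python sketch of the pattern follows; the host name and port are placeholders.

```python
import socket

def connect_any(host, port):
    """Open a TCP connection over whichever IP version the resolver
    offers, trying candidates in resolver order (IPv6 first on hosts
    configured that way)."""
    last_err = OSError("no usable address for %s:%s" % (host, port))
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(addr)
            return sock
        except OSError as err:
            last_err = err
    raise last_err

# The same call then works whether a site is IPv4-only, IPv6-only or dual-stack:
# sock = connect_any("gridftp.example.org", 2811)
```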
-
Dr Thijs Cornelissen (CERN)05/09/2007, 17:10While most high energy experiments use track fitting software that is based on the Kalman technique, the ATLAS offline reconstruction has several global track fitters available. One of these is the global chi^2 fitter, which is based on the scattering angle formulation of the track fit. One of the advantages of this method over the Kalman fit is that it can provide the scattering angles...Go to contribution page
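For readers unfamiliar with the scattering-angle formulation, a generic textbook form of such a global fit (not necessarily the exact ATLAS implementation) treats the scattering angles as explicit fit parameters next to the track parameters:

\[
\chi^2(\mathbf{p},\boldsymbol{\theta}) \;=\; \sum_{i\,\in\,\mathrm{hits}} \frac{\bigl(m_i - h_i(\mathbf{p},\boldsymbol{\theta})\bigr)^2}{\sigma_i^2} \;+\; \sum_{j\,\in\,\mathrm{scatterers}} \frac{\theta_j^2}{\sigma_{\mathrm{MS},j}^2}
\]

where \(m_i\) are the measurements with uncertainties \(\sigma_i\), \(h_i\) is the track-model prediction for track parameters \(\mathbf{p}\) and scattering angles \(\boldsymbol{\theta}\), and \(\sigma_{\mathrm{MS},j}\) is the expected multiple-scattering spread at scatterer \(j\). Because the \(\theta_j\) are minimized over explicitly, the fit yields them directly, which a standard Kalman filter does not.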
-
Mr Jeremy Herr (University of Michigan)05/09/2007, 17:10Large scientific collaborations as well as universities have a growing need for multimedia archiving of meetings and courses. Collaborations need to disseminate training and news to their wide-ranging members, and universities seek to provide their students with more useful studying tools. The University of Michigan ATLAS Collaboratory Project has been involved in the recording and...Go to contribution page
-
Mr Sebastian Robert Bablok (Department of Physics and Technology, University of Bergen)05/09/2007, 17:15The ALICE HLT is designed to perform event analysis, including calibration of the different ALICE detectors, online. The detector analysis codes process data using the latest calibration and condition settings of the experiment. This requires high reliability of the interfaces to the various other systems operating ALICE. In order to have a comparable analysis with the results from...Go to contribution page
-
Mr Maxim Grigoriev (FERMILAB)05/09/2007, 17:30Computer facilities, production grids and networking (oral presentation) End-to-end (E2E) circuits are used to carry high impact data movement into and out of the US CMS Tier-1 Center at Fermilab. E2E circuits have been implemented to facilitate the movement of raw experiment data from Tier-0, as well as processed data to and from a number of the US Tier-2 sites. Troubleshooting and monitoring those circuits presents a challenge, since the circuits typically...Go to contribution page
-
Norman Graf (SLAC)05/09/2007, 17:30High energy physics is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF)...Go to contribution page
-
Dr Ronan McNulty (University College Dublin, School of Physics)05/09/2007, 17:30As programs and their environments become increasingly complex, more effort must be invested in presenting the user with a simple yet comprehensive interface. Feicim is a tool that unifies the representation of data and algorithms. It provides resource discovery of data-files, data-content and algorithm implementation through an intuitive graphical user interface. It allows...Go to contribution page
-
Dr Antonio Pierro (INFN-BARI)05/09/2007, 17:30The monitoring of grid user activity and application performance is extremely useful to plan resource usage strategies, particularly in cases of complex applications. Large VOs, like the LHC ones, do their monitoring by means of dashboards. Other VOs or communities, like for example the BioinforGRID one, are characterized by a greater diversification of the application types: so...Go to contribution page
-
Dr Boyd Jamie (CERN)05/09/2007, 17:30The ATLAS detector at CERN's LHC will be exposed to proton-proton collisions from beams crossing at 40 MHz. At the design luminosity there are roughly 23 collisions per bunch crossing. ATLAS has designed a three-level trigger system to select potentially interesting events. The first-level trigger, implemented in custom-built electronics, reduces the incoming rate to less than 100 kHz...Go to contribution page
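The rejection these numbers imply is worth making explicit; the arithmetic below uses only the figures quoted in the abstract.

\[
R_{\mathrm{interactions}} \approx 40\,\mathrm{MHz} \times 23 \approx 9\times10^{8}\ \mathrm{collisions/s}, \qquad \frac{40\,\mathrm{MHz}}{100\,\mathrm{kHz}} = 400
\]

In other words, the first-level trigger alone must discard at least 399 of every 400 bunch crossings, and the software levels must then reduce the rate further still.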
-
Tumanov Alexander (T.W. Bonner Nuclear Laboratory)05/09/2007, 17:45Unprecedented data rates that are expected at the LHC put high demand on the speed of the detector data acquisition system. The CSC subdetector located in the Muon Endcaps of the CMS detector has a data readout system equivalent in size to that of a whole Tevatron detector (60 VME crates in the CSC DAQ equal to the whole D0 DAQ size). As a part of the HLT, the CSC data unpacking...Go to contribution page
-
James Casey (CERN)05/09/2007, 17:50During 2006, the Worldwide LHC Computing Grid Project (WLCG) constituted several working groups in the area of fabric and application monitoring with the mandate of improving the reliability and availability of the grid infrastructure through improved monitoring of the grid fabric. This talk will discuss the ‘Grid Service Monitoring’ Working Group, whose aim is to evaluate the...Go to contribution page
-
Dr Christopher Jones (Cornell University)05/09/2007, 17:50The CMS offline software suite uses a layered approach to provide several different environments suitable for a wide range of analysis styles. At the heart of all the environments is the ROOT-based event data model file format. The simplest environment uses "bare" ROOT to read files directly, without the use of any CMS-specific supporting libraries. This is useful for performing...Go to contribution page
-
Dr Luc Goossens (CERN)05/09/2007, 17:50Computer facilities, production grids and networking (oral presentation) ATLAS is a multi-purpose experiment at the LHC at CERN, which will start taking data in November 2007. Handling and processing the unprecedented data rates expected at the LHC (at nominal operation, ATLAS will record about 10 PB of raw data per year) poses a huge challenge for the computing infrastructure. The ATLAS Computing Model foresees a multi-tier hierarchical model to perform this...Go to contribution page
-
Mr Keith Beattie (LBNL)05/09/2007, 17:50In this experiential paper we report on lessons learned during the development of the data acquisition software for the IceCube project - specifically, how to effectively address the unique challenges presented by a distributed, collaborative, multi-institutional, multi-disciplined project such as this. While development progress in software projects is often described solely in terms of...Go to contribution page
-
Dietrich Liko (CERN)06/09/2007, 08:30Dietrich Liko: Dietrich Liko is a researcher at the Institute for High Energy Physics of the Austrian Academy of Sciences. He is currently on leave to participate in the development of analysis tools for the grid with the EGEE project and as ATLAS Distributed Analysis Coordinator.Go to contribution page
-
Dr Amber Boehnlein (FERMI NATIONAL ACCELERATOR LABORATORY)06/09/2007, 09:00
-
Peter Tenenbaum (SLAC)06/09/2007, 09:30The Global Design Effort for the International Linear Collider (ILC) has made use of modern computing capabilities in a number of areas: modeling the desired (accelerating) and undesired (wakefields, RF deflections) fields in the RF cavities, simulations of accelerator operations and tuning, prediction of accelerator uptime based on component performance and overall site design, and computer...Go to contribution page
-
Dr Richard Mount (SLAC)06/09/2007, 11:00
-
Dr Jamie Shiers (CERN)06/09/2007, 11:30This talk summarises the main discussions and issues raised at the WLCG Collaboration workshop held immediately prior to CHEP. The workshop itself will focus on service needs for initial data taking: commissioning, calibration and alignment, early physics. Target audience: all active sites plus experiments. We start with a detailed update on the schedule and operation of the...Go to contribution page
-
Peter Clarke (School of Physics - University of Edinburgh)06/09/2007, 12:00
-
Dr Lucas Taylor (Northeastern University, Boston)06/09/2007, 14:00Distributed data analysis and information management (oral presentation) The CMS experiment is about to embark on its first physics run at the LHC. To maximize the effectiveness of physicists and technical experts at CERN and worldwide and to facilitate their communications, CMS has established several dedicated and inter-connected operations and monitoring centers. These include a traditional “Control Room” at the CMS site in France, a “CMS Centre” for...Go to contribution page
-
Sylvain Chapeland (CERN)06/09/2007, 14:00ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). A large bandwidth and flexible Data Acquisition System (DAQ) has been designed and deployed to collect sufficient statistics in the short running time available per year for heavy ion and to...Go to contribution page
-
Dr Stephen Burke (Rutherford Appleton Laboratory, UK)06/09/2007, 14:00A common information schema for the description of Grid resources and services is an essential requirement for interoperating Grid infrastructures, and its implementation interacts with every Grid component. In this context, the GLUE information schema was originally defined in 2002 as a joint project between the European DataGrid and DataTAG projects and the US iVDGL (the...Go to contribution page
-
Mr Tomasz Maciej Frueboes (Institute of Experimental Physics - University of Warsaw)06/09/2007, 14:00The CMS detector will start its operation at the end of 2007. Until that time great care must be taken to ensure that the hardware operation is fully understood. We present an example of how emulation software helps achieve this goal in the CMS Level-1 RPC Trigger system. The design of the RPC trigger allows sets of so-called test pulses to be inserted at any stage of the hardware...Go to contribution page
-
Dr Lukas Nellen (I. de Ciencias Nucleares, UNAM)06/09/2007, 14:00Computer facilities, production grids and networking (oral presentation) The EELA project aims at building a grid infrastructure in Latin America and at attracting users to this infrastructure. The EELA infrastructure is based on the gLite middleware, developed by the EGEE project. A test-bed, including several European and Latin American countries, was set up in the first months of the project. Several applications from different areas, especially...Go to contribution page
-
Dr Ivana Hrivnacova (IPN, Orsay, France)06/09/2007, 14:00The Virtual Geometry Model (VGM) was introduced at CHEP in 2004, where its concept, based on the abstract interfaces to geometry objects, has been presented. Since then, it has undergone a design evolution to pure abstract interfaces, it has been consolidated and completed with more advanced features. Currently it is used in Geant4 VMC for the support of TGeo geometry definition...Go to contribution page
-
Dr Alexei Klimentov (BNL)06/09/2007, 14:20Computer facilities, production grids and networking (oral presentation) The ATLAS Distributed Data Management Operations Team unites experts from Tier-1 and Tier-2 computer centers. The group is responsible for the day-to-day ATLAS data distribution between different sites and centers. In our paper we describe the ATLAS DDM operation model and address the data management and operation issues. A series of Functional Tests has been conducted in the past and is in...Go to contribution page
-
Dr Helen Hayward (University of Liverpool)06/09/2007, 14:20The inner detector of the ATLAS experiment is in the process of being commissioned using cosmic ray events. First tests were performed in the SR1 assembly hall at CERN with both barrel and endcaps for all different detector technologies (pixels and microstrips silicon detectors as well as straw tubes with additional transition radiation detection). Integration with the rest of the ATLAS...Go to contribution page
-
Dr David Groep (NIKHEF)06/09/2007, 14:20The majority of compute resources in today’s scientific grids are based on Unix and Unix-like operating systems. In this world, user and user-group management is based around the well-known and trusted concepts of ‘user IDs’ and ‘group IDs’ that are local to the resource; in contrast, the grid concepts of user and group management are centered around globally assigned user identities and...Go to contribution page
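The classic bridge between the two worlds is a mapping from global certificate identities to local accounts. As a hedged sketch, the parser below handles a simplified version of the well-known Globus grid-mapfile format (a quoted distinguished name followed by local account names); the file path and example line are placeholders.

```python
import shlex

def load_gridmap(path):
    """Parse a simplified grid-mapfile: each non-comment line maps a
    quoted certificate DN to one or more local Unix account names."""
    mapping = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            parts = shlex.split(line)      # shlex keeps the quoted DN intact
            dn, accounts = parts[0], parts[1:]
            mapping[dn] = accounts
    return mapping

# Example line in such a file (illustrative only):
#   "/C=CH/O=Example/CN=Jane Doe" vo001
# load_gridmap("grid-mapfile")["/C=CH/O=Example/CN=Jane Doe"]  # -> ['vo001']
```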
-
Dr Frank Gaede (DESY IT)06/09/2007, 14:20The ILC is in a very active R&D phase where currently four international working groups are developing different detector designs. Increasing the interoperability of the software frameworks that are used in these studies is mandatory for comparing and optimizing the detector concepts. One key ingredient for interoperability is the geometry description. We present a new package (LCGO)...Go to contribution page
-
Dr John Kennedy (LMU Munich)06/09/2007, 14:20Distributed data analysis and information management (oral presentation) The ATLAS production system is responsible for the distribution of O(100,000) jobs per day to over 100 sites worldwide. The tracking and correlation of errors and resource usage within such a large distributed system is of extreme importance. The monitoring system presented here is designed to abstract the monitoring information away from the central database of jobs....Go to contribution page
-
Dr Martin Purschke (BROOKHAVEN NATIONAL LABORATORY)06/09/2007, 14:20The PHENIX experiment at the Relativistic Heavy Ion Collider (RHIC) has commissioned several new detector systems which are part of the general readout for the first time in the RHIC Run 7, which is currently under way. In each of the RHIC Run periods since 2003, PHENIX has collected about 0.5 PB of data. For Run 7 we expect record luminosities for the Au-Au beams, which will lead to...Go to contribution page
-
Dr Alexander Oh (CERN)06/09/2007, 14:35The CMS experiment at the LHC at CERN will start taking data towards the end of 2007. To configure, control and monitor the experiment during data-taking the Run Control and Monitoring System (RCMS) was developed. This paper describes the architecture and the technology used to implement the RCMS, as well as the deployment and commissioning strategy of this online software component...Go to contribution page
-
Lassi Tuura (Northeastern University)06/09/2007, 14:40The CMS experiment at LHC has a very large body of software of its own and uses extensively software from outside the experiment. Understanding the performance of such a complex system is a very challenging task, not least because there are extremely few developer tools capable of profiling software systems of this scale, or producing useful reports. CMS has mainly used IgProf,...Go to contribution page
-
Dr Daniele Bonacorsi (INFN-CNAF, Bologna, Italy)06/09/2007, 14:40Computer facilities, production grids and networking (oral presentation) The CMS experiment is gaining experience towards the data taking in several computing preparation activities, and a roadmap towards a mature computing operations model stands as a primary target. The responsibility of the Computing Operations projects in the complex CMS computing environment spans a wide area and aims at integrating the management of the CMS Facilities Infrastructure,...Go to contribution page
-
Dr Tofigh Azemoon (Stanford Linear Accelerator Center)06/09/2007, 14:40Distributed data analysis and information management (oral presentation) Petascale systems are in existence today and will become widespread in the next few years. Such systems are inevitably very complex, highly distributed and heterogeneous. Monitoring a petascale system in real time and understanding its status at any given moment without impacting its performance is a highly intricate task. Common approaches and off the shelf tools are either...Go to contribution page
-
Giuseppe Bagliesi (INFN Sezione di Pisa)06/09/2007, 14:40Tau leptons surely play a key role in the physics studies at the LHC. Interests in using tau leptons include (but are not limited to) their ability to offer a relatively low background environment, a competitive way of probing new physics, as well as the possibility to explore new physics regions not accessible otherwise. The tau identification and reconstruction algorithms developed for...Go to contribution page
-
Dr Andrew McNab (University of Manchester)06/09/2007, 14:40Components of the GridSite system are used within WLCG and gLite to process security credentials and access policies. We describe recent extensions to this system to include the Shibboleth authentication framework of Internet2, and how the GridSite architecture can now import a wide variety of credential types, including one-time passcodes, X.509, GSI, VOMS, Shibboleth and OpenID, and then...Go to contribution page
-
Dr Benedetto Gorini (CERN)06/09/2007, 14:50During 2006 and early 2007, integration and commissioning of trigger and data acquisition (TDAQ) equipment in the ATLAS experimental area have progressed. Much of the work has focussed on a final prototype setup consisting of around 80 computers representing a subset of the full TDAQ system. There have been a series of technical runs using this setup. Various tests have been run...Go to contribution page
-
Mr Riccardo Zappi (INFN-CNAF)06/09/2007, 15:00In the near future, data on the order of hundreds of petabytes will be spread across multiple storage systems worldwide, dispersed in, potentially, billions of replicated data items. Users are typically agnostic about the location of their data and want to get access by either specifying logical names or using some lookup mechanism. A global namespace is a logical layer that allows...Go to contribution page
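At its simplest, such a namespace reduces to a mapping from a logical file name to a set of physical replicas, with a site-selection policy on top. A toy sketch of the lookup follows; all names and URLs are invented for illustration.

```python
# Toy replica catalogue: one logical file name, many physical replicas.
CATALOGUE = {
    "/grid/myvo/run1234/file001.root": [
        "srm://se.site-a.example/myvo/run1234/file001.root",
        "srm://se.site-b.example/myvo/run1234/file001.root",
    ],
}

def resolve(lfn, prefer=None):
    """Return a physical replica for a logical file name, optionally
    preferring replicas whose URL contains a given site substring."""
    replicas = CATALOGUE[lfn]
    if prefer:
        for url in replicas:
            if prefer in url:
                return url
    return replicas[0]   # fall back to the first known replica

print(resolve("/grid/myvo/run1234/file001.root", prefer="site-b"))
```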
-
Ricardo Rocha (CERN)06/09/2007, 15:00Distributed data analysis and information management (oral presentation) The ATLAS Distributed Data Management (DDM) system is evolving to provide a production-quality service for data distribution and data management support for production and users' analysis. Monitoring the different components in the system has emerged as one of the key issues to achieve this goal. Its distributed nature over different grid infrastructures (EGEE, OSG and NDGF)...Go to contribution page
-
Dr Sebastien Binet (LBNL)06/09/2007, 15:00Python does not, as a rule, allow many optimizations, because there are too many things that can change dynamically. However, a lot of HEP analysis work consists of logically immutable blocks of code that are executed many times: looping over events, fitting data samples, making plots. In fact, most parallelization relies on this. There is therefore room for optimizations. There are...Go to contribution page
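One concrete instance of exploiting that immutability is hoisting name lookups out of an event loop: the bindings cannot change while the loop runs, so resolving them once is safe and measurably faster in CPython. The snippet below is a minimal self-contained illustration, not code from the talk.

```python
import math
import random
import timeit

# A stand-in "event sample": transverse momentum components.
events = [(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) for _ in range(100000)]

def slow():
    out = []
    for px, py in events:
        out.append(math.hypot(px, py))   # global + attribute lookup every event
    return out

def fast():
    out = []
    append = out.append     # hoisted: these bindings never change in the loop
    hypot = math.hypot
    for px, py in events:
        append(hypot(px, py))
    return out

print("slow/fast time ratio:",
      timeit.timeit(slow, number=10) / timeit.timeit(fast, number=10))
```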
-
Mr Pavel Reznicek (IPNP, Charles University in Prague)06/09/2007, 15:00The LHC experiments will search for physics phenomena beyond the Standard Model (BSM). Highly sensitive tests of beauty hadrons will represent an alternative approach to this research. The analyses of complex decay chains of beauty hadrons will require involving several nodes, and detector tracks made by these reactions must be extracted efficiently from other events to make...Go to contribution page
-
Luca dell'Agnello (INFN-CNAF)06/09/2007, 15:00Computer facilities, production grids and networking (oral presentation) Performance, reliability and scalability in data access are key issues when considered in the context of HEP data processing and analysis applications. The importance of these topics is even larger when considering the quantity of data and the request load that an LHC data center has to support. In this paper we give the results and the technical details of a large scale validation,...Go to contribution page
-
Dr Niko Neufeld (CERN)06/09/2007, 15:05The first-level trigger of LHCb accepts events at a rate of 1 MHz. After preprocessing in custom FPGA-based boards these events are distributed to a large farm of PC-servers using a high-speed Gigabit Ethernet network. Synchronisation and event management is achieved by the Timing and Trigger system of LHCb. Due to the complex nature of the selection of B-events, which are the...Go to contribution page
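The scale of the event-building network follows directly from the accept rate. As a rough illustration only, assuming an average event size of 50 kB (an assumed figure, not taken from the abstract):

\[
B = R \times S = 10^{6}\,\mathrm{s}^{-1} \times 50\,\mathrm{kB} = 50\,\mathrm{GB/s} = 400\,\mathrm{Gb/s}
\]

so even before protocol overhead, the farm input corresponds to the aggregate capacity of roughly 400 fully loaded Gigabit Ethernet links, which is why the traffic must be spread over a large switching fabric and many farm nodes.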
-
Valerio Venturi (INFN)06/09/2007, 15:20The Virtual Organization Membership Service (VOMS) is a system for managing users in a Virtual Organization. It manages and releases users' information such as group membership, roles, and other authorization data. VOMS was born with the aim of supporting dynamic, fine grained, and multi-stakeholder access control to enable coordinated sharing in virtual organizations. The current software...Go to contribution page
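The authorization data VOMS releases is commonly carried as fully qualified attribute names (FQANs) of the form /vo/group/Role=.../Capability=.... A small parser sketch for that syntax follows; field handling is deliberately simplified for illustration.

```python
def parse_fqan(fqan):
    """Split a VOMS FQAN such as
    '/atlas/production/Role=lcgadmin/Capability=NULL' into its parts."""
    groups, role, capability = [], None, None
    for part in fqan.strip("/").split("/"):
        if part.startswith("Role="):
            role = part[len("Role="):]
        elif part.startswith("Capability="):
            capability = part[len("Capability="):]
        else:
            groups.append(part)
    return {
        "vo": groups[0],
        "group": "/" + "/".join(groups),
        "role": None if role in (None, "NULL") else role,
        "capability": None if capability in (None, "NULL") else capability,
    }

print(parse_fqan("/atlas/production/Role=lcgadmin/Capability=NULL"))
# -> {'vo': 'atlas', 'group': '/atlas/production',
#     'role': 'lcgadmin', 'capability': None}
```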
-
Dr Fons Rademakers (CERN)06/09/2007, 15:20Distributed data analysis and information management (oral presentation) The goal of PROOF (Parallel ROOt Facility) is to enable interactive analysis of large data sets in parallel on a distributed cluster or multi-core machine. PROOF represents a high-performance alternative to a traditional batch-oriented computing system. The ALICE collaboration is planning to use PROOF at the CERN Analysis Facility (CAF) and has been stress testing the system since mid...Go to contribution page
-
Dr Robert Bainbridge (Imperial College London)06/09/2007, 15:20The CMS silicon strip tracker, providing a sensitive area of >200 m^2 and comprising 10M readout channels, is undergoing final assembly at the tracker integration facility at CERN. The strip tracker community is currently working to develop and integrate the online and offline software frameworks, known as XDAQ and CMSSW respectively, for the purposes of data acquisition and detector...Go to contribution page
-
Dr Kirill Prokofiev (University of Sheffield)06/09/2007, 15:20In the harsh environment of the Large Hadron Collider at CERN (design luminosity of 10^34 cm-2s-1) efficient reconstruction of the signal primary vertex is crucial for many physics analyses. Described in this paper are primary vertex reconstruction strategies implemented in the ATLAS software framework Athena. The implementation of the algorithms follows a very modular design based on...Go to contribution page
-
Jan van ELDIK (CERN)06/09/2007, 15:20Computer facilities, production grids and networking (oral presentation) This paper presents work, both completed and planned, for streamlining the deployment, operation and re-tasking of Castor2 instances. We present a summary of what has recently been done to reduce the human intervention necessary for bringing systems into operation; including the automation of Grid host certificate requests and deployment in conjunction with the CERN Trusted CA and...Go to contribution page
-
Dr Jörg Stelzer (CERN, Switzerland)06/09/2007, 15:20The ATLAS detector at CERN's LHC will be exposed to proton-proton collisions at a rate of 40 MHz. To reduce the data rate, only potentially interesting events are selected by a three-level trigger system. The first level is implemented in custom-made electronics, reducing the data output rate to less than 100 kHz. The second and third levels are software triggers with a final output rate...Go to contribution page
-
Mr Levente Hajdu (BROOKHAVEN NATIONAL LABORATORY)06/09/2007, 15:35Keeping a clear and accurate experiment log is important for any scientific experiment. The concept is certainly not new, but keeping accurate and useful records for a Nuclear Physics experiment such as RHIC/STAR is not a priori a simple matter: STAR operates 24 hours a day for six months out of the year, with more than 24 shift crews operating 16 different subsystems (some located...Go to contribution page
-
Mr Fabrizio Furano (INFN sez. di Padova)06/09/2007, 15:40Distributed data analysis and information management (oral presentation) HEP data processing and analysis applications typically deal with the problem of accessing and processing data at high speed. Recent study, development and test work has shown that the latencies due to data access can often be hidden by parallelizing them with the data processing, thus giving the ability to have applications which process remote data with a high level of...Go to contribution page
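The latency-hiding idea can be demonstrated with a bounded read-ahead buffer between a reader thread and the processing loop: while one block is being processed, the next ones are already in flight. The sketch below is a generic illustration of the technique with simulated delays, not the implementation discussed in the talk.

```python
import queue
import threading
import time

def reader(block_ids, buf):
    """Fetch data blocks ahead of the consumer (sleep stands in for
    remote-read latency)."""
    for bid in block_ids:
        time.sleep(0.05)          # simulated network round trip
        buf.put(bid)
    buf.put(None)                 # end-of-stream marker

def process(buf):
    while True:
        block = buf.get()
        if block is None:
            break
        time.sleep(0.05)          # simulated per-block processing

buf = queue.Queue(maxsize=8)      # bounded read-ahead window
t = threading.Thread(target=reader, args=(range(20), buf))
start = time.time()
t.start()
process(buf)
t.join()
# Serial read-then-process would take ~20 * (0.05 + 0.05) = 2.0 s;
# overlapping them brings this close to ~1.05 s.
print("elapsed: %.2fs" % (time.time() - start))
```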
-
Dirk Duellmann (CERN)06/09/2007, 15:40The CORAL package is the LCG Persistency Framework foundation for accessing relational databases. From the start CORAL has been designed to facilitate the deployment of the LHC experiment database applications in a distributed computing environment. This contribution focuses on the description of CORAL features for distributed database deployment. In particular we cover -...Go to contribution page
-
Mr Timur Perelmutov (FERMILAB)06/09/2007, 15:40Computer facilities, production grids and networking (oral presentation) The Storage Resource Manager (SRM) and WLCG collaborations recently defined version 2.2 of the SRM protocol, with the goal of satisfying the requirements of the LHC experiments. The dCache team has now finished the implementation of all SRM v2.2 elements required by the WLCG. The new functions include space reservation, more advanced data transfer, and new namespace and permission...Go to contribution page
-
Norman Graf (SLAC)06/09/2007, 15:40The International Linear Collider (ILC) promises to provide electron-positron collisions at unprecedented energy and luminosities. The relative democracy with which final states are produced at these high energies places a premium on the efficiency and resolution with which events can be reconstructed. In particular, the physics program places very demanding requirements on the dijet...Go to contribution page
-
Valentin Kuznetsov (Cornell University)06/09/2007, 15:40We discuss the rapid development of a large scale data discovery service for the CMS experiment using modern AJAX techniques and the Python language. To implement a flexible interface capable of accommodating several different versions of the DBS database, we used a "stack" approach. Asynchronous JavaScript and XML (AJAX) together with an SQL abstraction layer, template engine, code...Go to contribution page
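The server side of such a service boils down to answering asynchronous browser queries with structured data. As a toy illustration only, using nothing but the Python standard library (the real DBS discovery service API and software stack are not reproduced here):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

# Invented stand-in for a dataset catalogue backed by DBS.
DATASETS = ["/Run2007A/MinBias/RECO", "/Run2007A/Jet/RECO"]

class DiscoveryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        qs = parse_qs(urlparse(self.path).query)
        pattern = qs.get("pattern", [""])[0].strip("*")
        hits = [d for d in DATASETS if pattern in d]
        body = json.dumps({"results": hits}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)   # consumed by the browser's AJAX callback

if __name__ == "__main__":
    # e.g. GET http://localhost:8080/?pattern=*MinBias* returns JSON hits
    HTTPServer(("localhost", 8080), DiscoveryHandler).serve_forever()
```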
-
Dr Haleh Hadavand (Southern Methodist University)06/09/2007, 16:30The ATLAS experiment of the LHC is now taking its first data by collecting cosmic ray events. The full reconstruction chain including all sub-systems (inner detector, calorimeters and muon spectrometer) is being commissioned with this kind of data for the first time. Specific adaptations to deal with particles not coming from the interaction point and not synchronized with the readout...Go to contribution page
-
Dr Maxim Potekhin (BROOKHAVEN NATIONAL LABORATORY)06/09/2007, 16:30Computer facilities, production grids and networking (oral presentation) The simulation program for the STAR experiment at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory is growing in scope and responsiveness to the needs of the research conducted by the Physics Working Groups. In addition, there is a significant ongoing R&D activity aimed at future upgrades of the STAR detector, which also requires extensive simulations support. The...Go to contribution page
-
Dr Mikhail Kirsanov (Institute for Nuclear Research (INR))06/09/2007, 16:30The Generator Services project collaborates with the Monte Carlo generator authors and with the LHC experiments in order to prepare validated LCG compliant code for both the theoretical and the experimental communities at the LHC. On the one side it provides the technical support as far as the installation and the maintenance of the generator packages on the supported platforms is...Go to contribution page
-
Dan Flath (SLAC)06/09/2007, 16:30Distributed data analysis and information management (oral presentation) The Data Handling Pipeline ("Pipeline") has been developed for the Gamma-Ray Large Area Space Telescope (GLAST) launching at the end of 2007. Its goal is to generically process graphs of dependent tasks, maintaining a full record of its state, history and data products. In cataloging the relationship between data, analysis results, software versions, as well as statistics (memory usage,...Go to contribution page
-
Dr Greig A Cowan (University of Edinburgh)06/09/2007, 16:30The start of data taking this year at the Large Hadron Collider will herald a new era in data volumes and distributed processing in particle physics. Data volumes of 100s of Terabytes will be shipped to Tier-2 centres for analysis by the LHC experiments using the Worldwide LHC Computing Grid (WLCG). In many countries Tier-2 centres are distributed between a number of institutes, e.g.,...Go to contribution page
-
Igor Sfiligoi (Fermilab)06/09/2007, 16:50Computer facilities, production grids and networking (oral presentation) Pilot jobs are becoming increasingly popular in the Grid world. Experiments like ATLAS and CDF are using them in production, while others, like CMS, are actively evaluating them. Pilot jobs enter Grid sites using a generic pilot credential, and once on a worker node, call home to fetch the job of an actual user. However, this operation mode poses several new security problems when...Go to contribution page
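The operational pattern at issue, a generic pilot that calls home for the real user's work, can be sketched in a few lines. Everything below (the URL and the payload fields) is hypothetical and exists only to show where the security problem arises.

```python
import json
import subprocess
import urllib.request

# Hypothetical task-queue endpoint; the abstract names the pattern,
# not this interface.
TASK_QUEUE_URL = "https://pilot-server.example.org/getjob"

def run_pilot():
    """Fetch one user job and execute it under the pilot's credential."""
    with urllib.request.urlopen(TASK_QUEUE_URL) as resp:
        job = json.load(resp)     # e.g. {"user": "...", "cmd": [...], ...}
    if not job:
        return                    # nothing queued: the pilot exits quietly
    # This is the crux of the security problem the talk addresses: the
    # command below runs under the generic pilot credential, not under
    # the identity of the user who submitted it.
    subprocess.run(job["cmd"], timeout=job.get("wall_limit", 3600))
```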
-
Mrs Ianna Osborne (Northeastern University)06/09/2007, 16:50The event display and data quality monitoring visualisation systems are especially crucial for commissioning CMS in the imminent CMS physics run at the LHC. They have already proved invaluable for the CMS magnet test and cosmic challenge. We describe how these systems are used to navigate and filter the immense amounts of complex event data from the CMS detector and prepare clear and...Go to contribution page
-
Dr Vitaly Choutko (Massachusetts Institute of Technology (MIT))06/09/2007, 16:50Distributed data analysis and information management (oral presentation) The AMS-02 detector will be installed on the ISS for at least 3 years. The data will be transmitted from the ISS to the NASA Marshall Space Flight Center (MSFC, Huntsville, Alabama) and transferred to CERN (Geneva, Switzerland) for processing and analysis. We are presenting the AMS-02 Ground Data Handling scenario and requirements for AMS ground centers: the Payload Operation and Control Center (POCC)...Go to contribution page
-
Mr Ian Gable (University of Victoria)06/09/2007, 16:50Deployment of HEP application in heterogeneous grid environments can be challenging because many of the applications are dependent on specific OS versions and have a large number of complex software dependencies. Virtual machine monitors such as Xen could ease the deployment burden by allowing applications to be packaged complete with their execution environments. Our previous work has...Go to contribution page
-
Dr Peter Elmer (Princeton University)06/09/2007, 16:50Modern HEP experiments at colliders typically require offline software systems consisting of many millions of lines of code. The software is developed by hundreds of geographically distributed developers and is often used actively for 10-15 years or longer. The tools and technologies to support this HEP software development model have long been an interesting topic at CHEP conferences....Go to contribution page
-
Dr Andrea Dotti (Università and INFN Pisa)06/09/2007, 17:10The Tile Calorimeter (TileCal) is the central hadronic calorimeter of the ATLAS experiment presently in an advanced state of installation and commissioning at the LHC accelerator. The complexity of the experiment, the number of electronics channels and the high rate of acquired events requires a detailed commissioning of the detector, during the installation phase of the experiment and...Go to contribution page
-
Dr Simone Campana (CERN/IT/PSS)06/09/2007, 17:10Computer facilities, production grids and networking (oral presentation) The ATLAS experiment has been running continuous simulated event production for more than two years. A considerable fraction of the jobs is submitted daily and handled via the gLite Workload Management System, which overcomes several limitations of the previous LCG Resource Broker. The gLite WMS has been tested very intensively for the LHC experiments' use cases for more than six months,...Go to contribution page
-
Dr Alfredo Pagano (INFN/CNAF, Bologna, Italy)06/09/2007, 17:10Worldwide grid projects such as EGEE and WLCG need services with high availability, not only for grid usage, but also for associated operations. In particular, tools used for daily activities or operational procedures are considered critical. In this context, the goal of the work done to solve the EGEE failover problem is to propose, implement and document well-established mechanisms and...Go to contribution page
-
Mr Sverre Jarp (CERN)06/09/2007, 17:10A new interface to the performance monitoring hardware of almost all supported hardware processors (AMD, IBM, INTEL, SUN, etc.) is in the process of being added to the Linux 2.6 kernel. CERN openlab has participated in some of the development together with one of the key developers from HP labs. In this talk we review the capabilities of this interface on relevant platforms, such as the...Go to contribution page
-
Dr Nicola De Filippis (INFN Bari)06/09/2007, 17:10Distributed data analysis and information management (oral presentation) The Tracker detector has been taking real data with cosmics at the Tracker Integration Facility (TIF) at CERN. First DAQ checks and on-line monitoring tasks are executed at the Tracker Analysis Centre (TAC), which is a dedicated Control Room at TIF with limited computing resources. A set of software agents were developed to perform the real-time data conversion in a standard Event...Go to contribution page
-
Dr Marcin Nowak (Brookhaven National Laboratory)06/09/2007, 17:30In anticipation of data taking, ATLAS has undertaken a program of work to develop an explicit state representation of the experiment's complex transient event data model. This effort has provided both an opportunity to consider explicitly the structure, organization, and content of the ATLAS persistent event store before writing tens of petabytes of data (replacing simple...Go to contribution page
-
Mr Pavel Jakl (Nuclear Physics Institute, Academy of Sciences of the Czech Republic)06/09/2007, 17:30Distributed data analysis and information management (oral presentation) Facing the reality of storage economics, NP experiments such as RHIC/STAR have been engaged in a shift in the analysis model, and now heavily rely on using cheap disks attached to processing nodes, as such a model is extremely beneficial over expensive centralized storage. Additionally, exploiting storage aggregates with enhanced distributed computing capabilities such as dynamic space...Go to contribution page
-
Victor Serbo (SLAC)06/09/2007, 17:30JAIDA is a Java implementation of the Abstract Interfaces for Data Analysis (AIDA); it is part of the FreeHEP library. JAIDA allows Java programmers to quickly and easily create histograms, scatter plots and tuples, perform fits, view plots and store and retrieve analysis objects from files. JAIDA can be used either in a non-graphical environment (for batch processing) or with a GUI. Files...Go to contribution page
-
Mr Robert Stober (Platform Computing)06/09/2007, 17:30Universus refers to an extension to Platform LSF that provides a secure, transparent, one-way interface from an LSF cluster to any foreign cluster. A foreign cluster is a local or remote cluster managed by a non-LSF workload management system. Universus schedules work to foreign clusters as it would to any other execution host. Beyond its ability to interface with foreign workload...Go to contribution page
-
Mr Sergey Chechelnitskiy (Simon Fraser University)06/09/2007, 17:30Computer facilities, production grids and networking (oral presentation) SFU is responsible for running two different clusters: one is designed for WestGrid internal jobs with its specific software, and the other runs ATLAS jobs only. In addition to a different software configuration, the ATLAS cluster requires a different networking configuration. We would also like the flexibility of running jobs on different hardware. That is why it has been...Go to contribution page
-
Ms Zhenping Liu (BROOKHAVEN NATIONAL LABORATORY)06/09/2007, 17:50Computer facilities, production grids and networking (oral presentation) BNL ATLAS Computing Facility needs to provide a Grid-based storage system with these requirements: a total of one gigabyte per second of incoming and outgoing data rate between BNL and ATLAS T0, T1 and T2 sites, thousands of reconstruction/analysis jobs accessing locally stored data objects, three petabytes of disk/tape storage in 2007 scaling up to 25 petabytes by 2011, and a...Go to contribution page
-
Mr Giulio Eulisse (Northeastern University of Boston)06/09/2007, 17:50CMS software depends on over one hundred external packages; it is therefore obvious that being able to manage the way they are built, deployed and configured, and their dependencies (both among themselves and with respect to core CMS software), is a critical part of the system. We present a completely new system used to build and distribute CMS software which has enabled us to go from...Go to contribution page
-
Mr Andreas Salzburger (University of Innsbruck & CERN)06/09/2007, 17:50The track reconstruction of modern high energy physics experiments is a very complex task that puts stringent requirements onto the software realisation. The ATLAS track reconstruction software has been in the past dominated by a collection of individual packages, each of which incorporating a different intrinsic event data model, different data flow sequences and calibration data. The...Go to contribution page
-
Daniele Spiga (Universita degli Studi di Perugia)06/09/2007, 17:50Distributed data analysis and information management (oral presentation) Starting from 2007 the CMS experiment will produce several Pbytes of data each year, to be distributed over many computing centers located in many different countries. The CMS computing model defines how the data are to be distributed such that CMS physicists can access them in an efficient manner in order to perform their physics analyses. CRAB (CMS Remote Analysis Builder) is a...Go to contribution page
-
Niko Neufeld (CERN)07/09/2007, 08:30
-
Prof. Roger Jones (Lancaster University)07/09/2007, 08:50
-
Mr Federico Carminati (CERN)07/09/2007, 09:10
-
Patricia McBride (Fermi National Accelerator Laboratory (FNAL))07/09/2007, 09:30
-
Kors Bos (NIKHEF)07/09/2007, 10:30
-
Dr Ian Bird (CERN)07/09/2007, 10:50
-
Matthias Kasemann (CERN)07/09/2007, 11:10
-
07/09/2007, 11:40
-
Dan Fraser (Globus)Globus software was developed to enable previously disconnected communities to securely share computational resources and data that span organizational boundaries. As a community driven project, the Globus community is continually creating and enhancing Grid technology to make it easier to administer Grids as well as lowering the barriers to entry for both Grid users and Grid developers. In...Go to contribution page
-
IBM's Blue Gene/L system has demonstrated that it is now feasible to run applications at sustained performances of hundreds of teraflops. The next generation Blue Gene/P system is designed to scale up to a peak performance of 3.6 Petaflops. This talk will look at some of the key application successes already achieved at the 100TF scale. It will then address the emerging petascale...Go to contribution page
-
Alberto Pace (CERN)Plenary (oral presentation) This talk will introduce identity management concepts and discuss various issues associated with its implementation. The presentation will try to highlight technical, legal, and social aspects that must be foreseen when defining the numerous processes that an identity management infrastructure must support.Go to contribution page
-
Mr Jose Miguel Dana Perez (CERN), Mr Xavier Grehant (CERN)Grid middleware and tools (oral presentation) Today virtualization is used in computing centers to supply execution environments to a variety of users and applications. Appropriate flavours and configurations can be booted depending on the requirement, and at the same time the resources of a single server can be shared while preserving isolation between the environments. In order to optimize distributed resource sharing,...Go to contribution page
-
Dr Jamie Shiers (CERN)