19–23 Apr 2010
LNEC
Europe/Lisbon timezone

Contribution List

52 contributions
  1. Mario David (LIP Laboratório de Instrumentação e Física Experimental de Partículas), Michel Jouvin (LAL / IN2P3), Sandy Philpott (JLAB)
    19/04/2010, 13:00
  2. 19/04/2010, 13:30
  3. Martin Bly (STFC-RAL)
    19/04/2010, 14:00
    Site Reports
    Developments at RAL
  4. Mr Christopher Hollowell (Brookhaven National Laboratory)
    19/04/2010, 14:15
    Site Reports
    New developments at the RHIC/ATLAS Computing Facility (RACF) at Brookhaven National Laboratory.
  5. Dr Helge Meinhard (CERN-IT)
    19/04/2010, 14:30
    Site Reports
    Report on news at CERN-IT since the Fall meeting in Berkeley
  6. Andrei Maslennikov (CASPUR)
    19/04/2010, 15:15
  7. Patrick Fuhrmann (DESY)
    19/04/2010, 15:45
    The talk will cover experience accumulated with NFS 4.1/dCache at DESY.
  8. Mr Andreas Haupt (Deutsches Elektronen-Synchrotron (DESY))
    19/04/2010, 16:15
    Storage & Filesystems
    High data throughput is a key feature of today's LHC analysis centres. This talk will present the approach of the DESY site Zeuthen to building a high-performance dCache out of commodity hardware. Additional optimisations to achieve further throughput enhancements will also be mentioned. By presenting current benchmarks, we show results competitive with more expensive installations.
  9. Wolfgang Friebel (Deutsches Elektronen-Synchrotron (DESY))
    19/04/2010, 16:45
    DESY site report
  10. Andrey Shevel (Petersburg Nuclear Physics Institute (PNPI))
    19/04/2010, 17:00
    Site Reports
    In this short presentation I plan to describe the responsibilities of the Computing Systems Department (CSD): the LAN (400 hosts), the mail service for the Institute, other centralised servers, and the computing cluster. The computing needs of a small physics laboratory, experience with the cluster at PNPI HEPD, and plans for the cluster's future in connection with LHC data analysis are discussed in more detail.
  11. Alberto Proença (Dep. Informática, Universidade do Minho)
    20/04/2010, 09:00
    The third paradigm of research, known today as e-Science, requires that the computational sciences community rely heavily on HPC clusters to run their simulations. At the core of each cluster lie commodity devices with increasingly parallel capabilities. The two most successful processing devices today complement each other: multicore CPUs and manycore GPUs as accelerator vector devices...
  12. Randy Melen (SLAC)
    20/04/2010, 09:45
    SLAC site update since the Fall 2009 HEPiX meeting.
  13. Dr Keith Chadwick (Fermilab)
    20/04/2010, 10:00
    Site Reports
    Site Report from Fermilab
  14. John Bartelt (SLAC)
    20/04/2010, 10:30
    On January 19th, 2010, a series of powerful thunderstorms knocked out all power to the SLAC site (and surrounding areas). While the SLAC Facilities personnel worked to restore power to the computer center and the rest of the site, Computing Division planned for restoring services. This included setting priorities for such things as business systems needed to prepare the payroll and servers...
  15. Tim Bell (CERN)
    20/04/2010, 11:00
    The CERN evaluation of Lustre as a potential solution for home directories, project space and physics storage has now been completed. This experience will be presented, along with the outlook for bulk data storage and life-cycle management in a capacity-hungry and performance-sensitive environment.
  16. Alf Wachsmann (SLAC)
    20/04/2010, 11:30
    I will present the experience and lessons learned from the first few months of real users taking real data at LCLS. From those lessons, a new offline data analysis facility was designed and will be installed before the next set of users arrives on May 6.
  17. Vladimir Sapunenko (CNAF)
    20/04/2010, 13:30
    A new grid-enabled MSS solution based on IBM products (GPFS and TSM) has been developed and commissioned into production for the LHC experiments at the CNAF Tier-1. An architectural view as well as installation and configuration details will be presented. We will also present technical details of the data migration from CASTOR to GEMSS for the CMS, LHCb and ATLAS experiments.
  18. Jeffrey Altman (Your File System Inc.), Simon Wilkinson (Your File System Inc.)
    20/04/2010, 14:00
    Storage & Filesystems
    This talk will focus on two areas of performance improvement that will ship with the OpenAFS 1.6 series this coming summer. Significant work on improving performance has been completed on the OpenAFS 1.5 branch. Linux memory management has been heavily overhauled, removing a number of deadlock conditions and significantly speeding access to data which is already held in the local...
  19. Cyril L'Orphelin (CNRS/CCIN2P3)
    20/04/2010, 14:45
    Monitoring & Infrastructure tools
    Lavoisier is an open-source tool developed at CC-IN2P3 and used in the EGEE/LCG project by the Operations Portal. Lavoisier is an extensible service designed to provide a unified view of data collected from multiple heterogeneous data sources (e.g. databases, web services, LDAP, text files, XML files...). This unified view is represented as XML documents, which can be queried using standard...
  20. Mr Alberto Ciampa (INFN, Sezione di Pisa)
    20/04/2010, 15:15
    Monitoring & Infrastructure tools
    In data centres, the need to account for the resources used by different groups is growing. In this talk (and the corresponding note) the context and the scientific computing activity will be defined. Taking a medium-sized data centre (INFN Pisa) as an example, an industrial production approach will be followed. A first methodology for the quantitative evaluation of the following data...
  21. Vladimir Sapunenko (INFN-CNAF)
    21/04/2010, 09:00
    News from INFN-T1
  22. Jay Srinivasan (Lawrence Berkeley National Lab. (LBNL))
    21/04/2010, 09:15
    We will present a report on the current status of the PDSF cluster at NERSC/LBNL.
  23. Mr Esteban Freire Garcia (CESGA)
    21/04/2010, 09:30
    Grid and WLCG
    Grid Engine (GE) is an open-source batch system with complete documentation and support for advanced features such as the possibility to configure a shadow master host for failover purposes, support for up to 10,000 nodes per master server, application-level checkpointing, array jobs, DRMAA, fully integrated MPI support, a complete administration GUI, a web-based accounting tool (ARCo), etc....
  24. Oliver Keeble (CERN)
    21/04/2010, 10:00
    The 2010 program of work for the CERN Grid Data Management team is presented, describing the planned development activity for 2010. The plan for FTS, DPM/LFC and gfal/lcg_util is described along with longer term perspectives.
  25. Mr Tiago Sá (Uminho)
    21/04/2010, 11:00
    EGEE is a dynamic organism whose requirements constantly evolve over time. The deployment of UMinho-CP, an EGEE site supporting Civil Protection related activities, revealed new challenges, some of which have been overcome, while others are planned as future work. A great effort has been made to automate the distribution, installation and configuration of an EGEE site, for...
  26. Tony Cass (CERN)
    21/04/2010, 11:30
    Virtualization
    The HEPiX working group on Virtualisation started its activities after the last HEPiX meeting. This talk will present the current status and future plans.
  27. Ulrich Schwickerath (CERN)
    21/04/2010, 13:30
    During the HEPiX meeting 2009 in Berkeley, several reports related to virtualization efforts at CERN were given. This presentation will be a status report on these projects, with a focus on service consolidation and batch virtualization. Recent developments such as the evaluation of the Infrastructure Sharing Facility (ISF) from Platform Computing, issues seen with the tools we use, and the...
  28. Marc Rodriguez Espadamala (PIC)
    21/04/2010, 14:00
    Since PIC supports the data management of several scientific projects, it faces requirements that might not be possible to satisfy in a homogeneous infrastructure. Looking for an alternative approach, we have chosen to export the specific requirements into a virtualized environment. In order to minimize the impact on our physical computing infrastructure it was necessary to...
  29. Ian Gable (University of Victoria)
    21/04/2010, 14:30
    We have developed a method using Condor and a simple software component we call Cloud Scheduler to run High Throughput Computing workloads on multiple clouds situated at different sites. Cloud Scheduler is able to instantiate user-customized VMs to meet the requirements of jobs in Condor queues. Users supply Condor job attributes to indicate the type of VM required to complete their job....
  30. Sandy Philpott (JLAB)
    22/04/2010, 09:00
    An update on JLab's Scientific Computing facilities, including the new ARRA CPU and GPU clusters for USQCD, Lustre, the experimental physics data analysis farm, data storage, and networking.
  31. Lorenzo Dini (CERN)
    22/04/2010, 09:15
    During the past years, the Grid Deployment and Grid Technology groups at CERN have gained experience in using virtualization to support the gLite Grid Middleware software-engineering process. Some technologies such as VNode and VMLoader have been developed to leverage virtualization in supporting activities such as development, build, test, integration and certification of software. This...
  32. Prof. Miguel Mira da Silva (IST)
    22/04/2010, 09:45
    The relationship between information systems and management outside large organizations has traditionally been one way only, with information systems supporting management but management not supporting information systems. However, with information systems becoming ever more critical and complex, the need to manage information systems is becoming pervasive in most, if not all,...
  33. Thomas Finnern (DESY)
    22/04/2010, 11:00
    This talk describes techniques for providing highly available network services with the help of a hardware-based load-balancing cluster handled by sophisticated control software. An abstract representation of web, mail and other services is implemented on the basis of application-specific routing on network layers 3 to 7. By combining application, network and security issues in one point you may...
  34. Carlos Garcia Fernandez (CERN)
    22/04/2010, 11:30
    With the growing number of Oracle database instances and Oracle application servers, and the need to control the necessary resources in terms of manpower, electricity and cooling, virtualisation is a strategic direction that is being seriously considered at CERN for databases and application servers. Oracle VM is the Oracle certified and supported server virtualisation solution for...
  35. Mr Thomas LEIBOVICI (CEA)
    22/04/2010, 13:30
    Storage & Filesystems
    Lustre-HSM binding is a collaborative effort between Sun and CEA. This presentation will introduce the first version of Lustre-HSM: the HSM support model, its features, components, development status, and roadmap. It will also give examples of importing files from an existing HPSS namespace, archiving and restoring data, releasing disk space...
  36. Mr Ian Peter Collier (STFC/RAL)
    22/04/2010, 14:00
    Monitoring & Infrastructure tools
    The RAL Tier1 is continuing to benefit from bringing more of the Tier1 systems under Quattor control. That experience will be presented along with a report on recent and planned developments with Quattor across the wider grid and non-grid environments.
  37. Mr Troy Dawson (FERMILAB)
    22/04/2010, 14:30
    Monitoring & Infrastructure tools
    Spacewalk is an open-source Linux systems-management solution. It is the upstream community project from which the Red Hat Network Satellite product is derived. Koji is Fedora's build platform. Fermilab is currently testing Spacewalk for various roles: desktop and server monitoring, system configuration, and errata information. Although we are currently still in...
  38. 22/04/2010, 15:00
  39. Mr Michele Michelotto (Univ. + INFN)
    23/04/2010, 09:00
    Benchmarking
    We had access to a remote server with two AMD 6174 processors. Preliminary measurements will be shown. Some early measurements on an Intel 5600 worker node will also be presented.
  40. João Martins (LIP)
    23/04/2010, 09:30
    Benchmarking
    In this report we present the influence of hyperthreading on CPU performance when running the HEP-SPEC2006 benchmark suite on a two-quad-core-CPU shared-memory system (HP BL460c G6). This study was performed as a function of the number of running instances (from eight to sixteen), with and without SPEC RATE. We concluded that the elapsed application run time can be clearly reduced if...
  41. Mr Troy Dawson (FERMILAB)
    23/04/2010, 10:00
    Operating Systems & Applications
    Progress of Scientific Linux over the past six months: what we are currently working on and what we see in the future for Scientific Linux. We will also hold a plenary discussion to gather feedback and input for the Scientific Linux developers from the HEPiX community. This may influence upcoming decisions, e.g. on distribution lifecycles and packages added to the distribution.
  42. Michal Budzowski (CERN)
    23/04/2010, 11:00
    Operating Systems & Applications
    Windows 7 has been officially supported at CERN since the end of March. The official Windows 7 service was preceded by a pilot project that started in December 2009. We will present our experience gathered during the pilot phase as well as trends and deployment plans for Windows 7 at CERN.
  43. Mr Pete Jones (CERN)
    23/04/2010, 11:30
    Storage and Filesystems III
    TWiki was introduced at CERN in 2003 following a request from a small group of software developers. The service soon grew in popularity, and today there are over 7000 registered editors of 60000 topics. This presentation takes a look at the current service, the problems experienced, and future developments for TWiki at CERN.
  44. Michel Jouvin (LAL / IN2P3), Sandy Philpott (JLAB)
    23/04/2010, 12:00
  45. 23/04/2010, 15:00
  46. Alf Wachsmann (SLAC)
    Miscellaneous
    I will present the experience and lessons learned from the first few months of real users taking real data at LCLS. From those lessons, a new offline data analysis facility was designed and will be installed before the next set of users arrives on May 6.
  47. Mattias Wadenstein (NDGF)
    Overview and recent news in NDGF and the associated sub-sites.
  48. Mattias Wadenstein (NDGF)
    The huge amount of data needed for analysis jobs made the naive data caching approach used by the ARC grid manager fall over. This talk gives details on the solutions implemented and under implementation to make an efficient distributed analysis facility in the ATLAS NDGF cloud.
  49. Francisco Martinez Ramirez (PIC)
    Recent development and news from the PIC site, as well as an overview of the status of our installations, deployed software and current topics we are working on.
  50. Mario David (LIP Laboratório de Instrumentação e Física Experimental de Partículas)
    Grid and WLCG
    The Portuguese WLCG Tier-2 for ATLAS and CMS will be presented. Its status, performance and issues will be discussed.
  51. Dr Ulrich Schwickerath (CERN)
    Security & Networking
    In recent years, High Energy Physics sites have significantly improved their collaboration and are providing services to users from a growing number of locations. The resulting attack surface, along with the increased sophistication of the attacks, has been a decisive factor encouraging all the involved security teams to cooperate very closely. New challenges in the security...