Pre-GDB - Grid Storage Services

Europe/Zurich
IT Auditorium (CERN)

Flavia Donno (CERN)
Description
In-depth discussions with the developers, who will give configuration and management hints to site admins; site admins will in turn have an opportunity to list their problems and express their requests.
EVO Conference: http://evo.caltech.edu/evoGate/koala.jnlp?meeting=e8eteDv8vuava9IaasI8
EVO Phone Bridge: 599586
Minutes
Participants
  • Ammar Benabdelkader
  • Andrew Elwell
  • Andrew Smith
  • Doris Ressmann
  • Erik Mattias Wadenstein
  • Francisco Martinez
  • Gerd Behrmann
  • Greig Cowan
  • Hiroyuki Matsunaga
  • Jeff Templon
  • John Gordon
  • Lorne Levinson
  • Marco Dias
  • Mario David
  • Martin Gasthuber
  • Oleg Tsigenov
  • Patrick Fuhrmann
  • Paul Millar
  • Reda Tafirout
  • Serge Vrijaldenhoven
  • Silke Halstenberg
  • Simon LIN
  • Ron Trompert
    • 10:00 - 12:30
      Tier-2s

      This session will be dedicated to Tier-2s running DPM and StoRM.

      • 10:00
        Introduction and experiment requirements 15m
        Speaker: Dr Flavia Donno (CERN)
        Slides
      • 10:15
        FTS: an update 15m
        Speaker: Akos Frohner (CERN)
        Slides
        Items arising from the discussion after the presentation:
        • Full listing of channel parameters: https://savannah.cern.ch/bugs/?43819
        • Checksums support: https://savannah.cern.ch/bugs/?43825 (a checksum sketch follows this item)
        • [FTM] cleanup and publishing of transfer logs: https://savannah.cern.ch/bugs/?43837
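        On the checksum item above: FTS and the gLite data-management tools generally use Adler32 for end-to-end file verification. Below is a minimal sketch (not FTS code) of how a site admin could recompute the checksum of a transferred file locally as a cross-check:

          import sys
          import zlib

          def adler32_checksum(path, chunk_size=1024 * 1024):
              """Adler32 of a file, as the 8-character zero-padded hex
              string commonly used for grid transfer verification."""
              value = 1  # Adler32 seed value
              with open(path, 'rb') as f:
                  while True:
                      chunk = f.read(chunk_size)
                      if not chunk:
                          break
                      value = zlib.adler32(chunk, value)
              # Mask to 32 bits so the result is platform-independent.
              return '%08x' % (value & 0xffffffff)

          if __name__ == '__main__':
              print(adler32_checksum(sys.argv[1]))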
      • 10:30
        GFAL and lcg_util 15m
        Speaker: Remi Mollon (CERN)
        Slides
      • 10:45
        DPM status and plans 20m
        An update on DPM status, plans and the currently recommended release.
        Speaker: David Smith (CERN)
        Slides
      • 11:05
        DPM configuration and discussions - questions from the sites 20m
        • user quotas
        • tools for performing common administration tasks
        • access control for spaces
        • a more advanced pool selection mechanism for DPM
        • improved (centralised) logging
        • tools for checking SE-LFC synchronisation (see the sketch at the end of this item)
        • Nagios-style alarms
        • Are all of the current SEs set up properly to deal optimally with local user analysis?
        Speaker: All
        Items arising from the discussion:
        • Addition of more filesystem selection methods: https://savannah.cern.ch/bugs/?43932
        • Tool to balance filesystems: https://savannah.cern.ch/bugs/?43931
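        On the SE-LFC synchronisation point above: the check is essentially a set comparison. A minimal sketch, assuming one-SURL-per-line text dumps of the catalogue and of the files actually present on the SE; the file names lfc_dump.txt and se_dump.txt are placeholders:

          def read_entries(path):
              """Read one SURL/path per line, ignoring blank lines."""
              with open(path) as f:
                  return set(line.strip() for line in f if line.strip())

          lfc = read_entries('lfc_dump.txt')  # entries registered in the LFC
          se = read_entries('se_dump.txt')    # files actually on the SE

          # On disk but not registered: "dark data", candidates for cleanup.
          for surl in sorted(se - lfc):
              print('not in LFC: %s' % surl)

          # Registered but not on disk: lost or orphaned replicas.
          for surl in sorted(lfc - se):
              print('missing on SE: %s' % surl)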
      • 11:25
        StoRM status and new release 30m
        Speaker: Luca Magnoni (CNAF)
        Slides
      • 11:55
        StoRM configuration and discussions - questions from the sites 30m
        • Some sites are testing StoRM now; some are using GPFS and others Lustre. Can we expect exactly the same functionality from both types of SE?
        • access control for storage areas
        • gridFTP load balancer
        Speaker: All
    • 12:30 - 14:00
      Lunch break 1h 30m
    • 14:00 - 17:00
      dCache for Tier-1s and Tier-2s

      This session will cover dCache-specific issues and discussions.

      • 14:00
        dCache status and plans 20m
        An update on dCache releases recommended for Tier-2s
        Speaker: Dr Patrick Fuhrmann (DESY)
      • 14:20
        Installation and configuration hints for Tier-2s running dCache 30m
        Advice on how to install, configure and run a dCache installation at a Tier-2
        Speaker: Dr Patrick Fuhrmann (DESY)
        Slides
      • 14:50
        Coffee break 15m
      • 15:05
        dCache configuration for Tier-1s 40m
        Speaker: Dr Patrick Fuhrmann (DESY)
        • TRIUMF 10m
          Speaker: Reda Tafirout (TRIUMF)
          Slides
        • SARA 10m
          Speaker: Ron Trompert (SARA)
          Slides
      • 15:45
        dCache - questions from the sites 1h
        1. Splitting SRM instances: is it feasible?
        2. Configuration of gsiftp doors.
        3. Hardware setup at sites (32 or 64 bits? How much memory and where? ...) and recommended software packages to use (Java version, DB, etc.).
        4. Which versions on the head nodes and which versions on the pools?
        5. Avoiding high load on the PNFS node (busy threads in PnfsManager, long queue).
        6. Limiting the number of requests of a certain type (put requests), globally or per user.
        7. "We currently experience a long-standing problem of storage availability at CERN that I think is worth discussing (again) at the pre-GDB. One disk server of the T0D1 service class has now been down for 2 weeks. We have a few thousand files on there that are inaccessible. It is a real burden to figure out, even if we could get the list (which we don't have). How do sites envisage facing such problems?" - Philippe Charpentier
        8. Managing databases: the Putrequest and Putrequesthistory tables are becoming big. Should "VACUUM ANALYZE" be performed? How often? (A maintenance sketch follows this list.)
        9. What should clients be careful about using? What are the most expensive calls (e.g. srmls -l)?
        10. How big can directories be without running into PNFS slowness?
        11. What is the advice for dCache sites regarding migrating to Chimera? It seems as if no one has moved yet.
        12. Are all of the current SEs set up properly to deal optimally with local user analysis?
        13. Implementations for 64-bit architectures.
        14. Is there a plan for a user-friendly tool for dCache installation?
        15. What is the status of the dCache dynamic information providers?
        16. The "fast" PNFS implementation.
        17. Poor ability of some dCache components to scale.
        18. No automation for disaster recovery: many disaster-recovery actions have to be done by hand, which is not our target. dCache somewhat hinders the deployment of high-availability solutions, though it allows easy rapid-recovery solutions.
        19. PIC: monitoring of our system as a whole is not at the level we would like it to be. Performance bottlenecks are difficult to find; we are working to improve this.
        20. IN2P3: massive prestaging from the tape system to dCache is not efficient enough. When prestaging through SRM, dCache sends the prestaging requests to HPSS in small simultaneous packets (say 10), which causes a lot of inefficiency in tape mounting and unmounting. This is the big bottleneck we experience in the prestaging exercises. (A tape-ordering sketch follows this list.)
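        On question 8: a minimal sketch of running the maintenance by hand from the SRM head node, using the table names quoted in the question; the connection parameters are placeholders, and whether manual runs are needed at all (versus relying on PostgreSQL's autovacuum) is exactly the open question. Note that VACUUM cannot run inside a transaction block, hence the autocommit isolation level.

          import psycopg2

          # Placeholder connection parameters; adjust for your head node.
          conn = psycopg2.connect(host='localhost', dbname='dcache',
                                  user='srmdcache')
          # VACUUM must run outside a transaction block.
          conn.set_isolation_level(
              psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

          cur = conn.cursor()
          for table in ('putrequest', 'putrequesthistory'):
              # Reclaim dead rows and refresh planner statistics.
              cur.execute('VACUUM ANALYZE ' + table)
          cur.close()
          conn.close()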
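        On question 20: independently of any dCache-side fix, a common client-side mitigation is to order the recall list by cartridge before submitting it, so that files on the same tape are requested together and each tape is mounted only once. A minimal sketch; the tape_map lookup is hypothetical (in practice it would be fed from an HPSS metadata dump), and the submission step stands in for whatever SRM client a site drives, e.g. srm-bring-online:

          def order_by_tape(surls, tape_of):
              """Group SURLs by cartridge so each tape is mounted once."""
              by_tape = {}
              for surl in surls:
                  by_tape.setdefault(tape_of(surl), []).append(surl)
              ordered = []
              for tape in sorted(by_tape):
                  ordered.extend(by_tape[tape])
              return ordered

          # Hypothetical file-to-cartridge mapping.
          tape_map = {'/pnfs/site/file_a': 'T00001',
                      '/pnfs/site/file_b': 'T00002',
                      '/pnfs/site/file_c': 'T00001'}

          ordered = order_by_tape(sorted(tape_map), tape_map.get)
          # Submit 'ordered' in a few large bring-online requests
          # (e.g. via srm-bring-online) rather than many small ones.
          print(ordered)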