Pre-GDB - Grid Storage Services
Europe/Zurich
IT Auditorium (CERN)
Flavia Donno (CERN)
Description
In-depth discussions with the developers, who will give configuration and management hints to site admins; site admins will have an opportunity to list their problems and express their requests.
EVO Conference:
http://evo.caltech.edu/evoGate/koala.jnlp?meeting=e8eteDv8vuava9IaasI8
EVO Phone Bridge: 599586
Participants
Ammar Benabdelkader
Andrew Elwell
Andrew Smith
Doris Ressmann
Erik Mattias Wadenstein
Francisco Martinez
Gerd Behrmann
Greig Cowan
Hiroyuki Matsunaga
Jeff Templon
John Gordon
Lorne Levinson
Marco Dias
Mario David
Martin Gasthuber
Oleg Tsigenov
Patrick Fuhrmann
Paul Millar
Reda Tafirout
Serge Vrijaldenhoven
Silke Halstenberg
Simon LIN
Ron Trompert
10:00 → 12:30  Tier-2s
This session will be dedicated to Tier-2s running DPM and StoRM.
10:00 → 10:15  FTS: an update (15m). Speaker: Akos Frohner (CERN)
10:45 → 11:05  DPM configuration and discussions - questions from the sites (20m). Speaker: All
* user quotas
* tools for performing common administration tasks
* access control for spaces
* more advanced pool selection mechanism for DPM
* improved logging (centralised)
* tools for checking SE-LFC synchronisation (a sketch follows this list)
* Nagios-style alarms
* Are all of the current SEs set up properly in order to optimally deal with local user analysis?
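On the SE-LFC synchronisation point above, a minimal consistency-check sketch is shown here. It assumes two plain-text dump files (the file names are hypothetical): one listing the SURLs known to the DPM name server and one listing the replica SURLs registered in the LFC for this SE. The dumps themselves would have to be produced with the site's own DPM and LFC tooling; this is only a discussion aid, not an official tool.

# Minimal SE-LFC consistency check (illustrative sketch, hypothetical input files).
#   dpm_surls.txt - one SURL per line, as known to the DPM name server
#   lfc_surls.txt - one SURL per line, as registered in the LFC for this SE
def load_surls(path):
    """Return the set of non-empty, stripped SURLs found in a dump file."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

dpm = load_surls("dpm_surls.txt")   # dump from the DPM side (hypothetical)
lfc = load_surls("lfc_surls.txt")   # dump from the LFC side (hypothetical)

dark_data = dpm - lfc    # on the SE but not registered in the catalogue
lost_files = lfc - dpm   # registered in the catalogue but missing on the SE

print("SURLs on SE but not in LFC (dark data): %d" % len(dark_data))
print("SURLs in LFC but not on SE (lost files): %d" % len(lost_files))
for surl in sorted(lost_files):
    print("LOST: " + surl)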
11:25 → 11:55  StoRM configuration and discussions - questions from the sites (30m). Speaker: All
* Some sites are testing StoRM now. Some sites are using GPFS and others Lustre. Can we expect exactly the same functionality from both types of SEs?
* access control for storage areas
* gridFTP load balancer (a sketch follows this list)
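As a discussion aid for the gridFTP load balancer point above, here is a minimal sketch of a round-robin door selector of the kind a site might put behind a DNS alias or a small redirecting front end. The host names are placeholders and this is not StoRM's own balancing mechanism; 2811 is the usual gridFTP control port.

# Illustrative round-robin selector for gridFTP doors (sketch only, hypothetical hosts).
import itertools
import socket

DOORS = ["gridftp01.example.org", "gridftp02.example.org", "gridftp03.example.org"]
GRIDFTP_PORT = 2811  # standard gridFTP control-channel port

_rr = itertools.cycle(DOORS)

def door_alive(host, port=GRIDFTP_PORT, timeout=2.0):
    """Return True if the door accepts a TCP connection on the control port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def next_door():
    """Return the next responsive door in round-robin order, or None if all are down."""
    for _ in range(len(DOORS)):
        host = next(_rr)
        if door_alive(host):
            return host
    return None

if __name__ == "__main__":
    print("Selected gridFTP door:", next_door())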
12:30 → 14:00  Lunch break (1h 30m)
14:00 → 17:00  dCache for Tier-1s and Tier-2s
This session will cover dCache-specific issues and discussions.
14:00  dCache status and plans (20m). An update on dCache releases recommended for Tier-2s. Speaker: Dr Patrick Fuhrmann (DESY)
14:20 → 14:50  Coffee break (15m)
15:05 → 15:45  dCache - questions from the sites (1h)
1. Splitting SRM instances: is it feasible?
2. Configuration of gsiftp doors.
3. Hardware setup at sites (32- or 64-bit? How much memory and where? ...) and recommended software packages to use (Java version, DB, etc.).
4. What versions on the head nodes and what versions on the pools?
5. Avoiding high load on the PNFS node (busy threads in PnfsManager, long queue).
6. Limiting the number of requests of a certain type (put requests), globally or per user.
7. "We currently experience a long-standing problem of storage availability at CERN that I think is worth discussing (again) at the preGDB. One disk server of the T0D1 service class has now been down for 2 weeks. We have a few thousand files on there that are inaccessible :((( It is a real burden to figure out, even if we could get the list (which we don't have). How do sites envisage to face such problems?" - Philippe Charpentier
8. Managing databases: the Putrequest and Putrequesthistory tables are becoming big. Should "[VACUUM] ANALYZE" be performed? How often? (A sketch of such a maintenance routine follows this list.)
9. What should clients be careful about using? What are the most expensive calls (e.g. srmls -l)?
10. How big should directories be in order not to run into PNFS slowness?
11. What is the advice for dCache sites regarding migrating to Chimera? It seems as if no one has moved yet.
12. Are all of the current SEs set up properly in order to optimally deal with local user analysis?
13. Implementations for 64-bit architectures.
14. Is there a plan for a user-friendly tool for dCache installation?
15. What is the status of the dCache dynamic information providers?
16. The "fast" PNFS implementation.
17. Poor ability of some dCache components to scale.
18. No automation for disaster recovery. Many disaster-recovery actions have to be done by hand, which is not our target. dCache somewhat hinders the possibility of deploying high-availability solutions, though it allows easy rapid-recovery solutions.
19. PIC: Monitoring of our system as a whole is not at the level we would like it to be. Performance bottlenecks are difficult to find. We are working to improve this.
20. IN2P3: Massive prestaging from the tape system to dCache is not efficient enough: when prestaging through SRM, dCache sends the prestaging requests in small batches (say 10) simultaneously to HPSS. This causes a lot of inefficiency in tape mounting and unmounting. This is the big bottleneck we experience in the prestaging exercises.
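For point 8 above, a minimal sketch of a periodic maintenance routine is shown here, assuming a PostgreSQL back end for the SRM request tables. The connection string and table names are only placeholders echoing the discussion point and should be taken from the site's own dCache/SRM database setup. VACUUM cannot run inside an explicit transaction, hence the autocommit setting.

# Sketch of a periodic VACUUM ANALYZE of the SRM request tables (illustrative only).
# Connection parameters and table names are placeholders for a site-specific setup.
import psycopg2

TABLES = ["putrequest", "putrequesthistory"]  # placeholder names from the discussion point

def vacuum_srm_tables(dsn="dbname=dcache user=srmdcache host=localhost"):
    conn = psycopg2.connect(dsn)
    conn.autocommit = True  # VACUUM must run outside an explicit transaction
    try:
        cur = conn.cursor()
        for table in TABLES:
            # Reclaims dead rows and refreshes planner statistics for the table.
            # Table names come from a fixed internal list, so plain interpolation is acceptable here.
            cur.execute("VACUUM ANALYZE %s" % table)
        cur.close()
    finally:
        conn.close()

if __name__ == "__main__":
    vacuum_srm_tables()

Such a routine would typically be run from cron during quiet periods; how often it should run is exactly the question raised in point 8.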