Distributed Database Operations Workshop wrap-up
1) All sites presented their status and plans. It was agreed that the database resources allocated for STEP'09 and for the 2009/2010 run are within the experiments' requests.
2) The evaluation of RHEL 5 is very positive, and some Tier1 sites already run on it. CERN Physics Databases will upgrade by summer 2009.
3) Tracking of 3D-related interventions and activities (scheduled and unscheduled) will be kept on our TWiki and reviewed at every phone meeting. It will continue to be the input to the daily operations meetings at 3 pm and to the weekly WLCG MB service report.
(The phone meetings may take place weekly during the ~3 weeks of STEP'09, most likely the first 3 weeks of June.)
4) Good progress is acknowledged on the Tier1 master replication of AMI from IN2P3-Lyon to CERN and on the Tier2 master replication of the ATLAS Muon Centres to CERN.
5) It is agreed that Tier1 site re-synchronization is the full responsibility of the Tier1 sites. CERN will supervise and assist the sites exercising it. RAL has volunteered to re-synchronize ASGC when the site is ready.
6) Very good progress with the ATLAS scalability tests and with handling DB overload via the pilot query; this can be extended should the use case arise.
7) The database migration of NDGF to new hardware (from Helsinki to Oslo) will be performed using Data Guard if possible.
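Before a Data Guard switchover, the common readiness check is that the standby's apply lag is near zero. A minimal sketch of that check, assuming the lag string comes from Oracle's V$DATAGUARD_STATS view in its "+DD HH:MM:SS" interval format (the helper names and the 60-second threshold are illustrative, not part of any agreed procedure):

```python
# Sketch: parse the "apply lag" interval reported by Oracle's
# V$DATAGUARD_STATS view ("+DD HH:MM:SS") and decide whether the
# standby is close enough to current for a switchover.

def lag_to_seconds(lag: str) -> int:
    """Convert an Oracle day-to-second interval like '+00 00:00:07'
    into a total number of seconds."""
    days_part, time_part = lag.lstrip("+").split()
    hours, minutes, seconds = (int(x) for x in time_part.split(":"))
    return int(days_part) * 86400 + hours * 3600 + minutes * 60 + seconds

def switchover_safe(apply_lag: str, max_seconds: int = 60) -> bool:
    """The standby is considered ready when its apply lag is below
    an (arbitrary) threshold."""
    return lag_to_seconds(apply_lag) <= max_seconds
```

On a live system the input would come from something like `SELECT value FROM v$dataguard_stats WHERE name = 'apply lag'`, run on the standby.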
8) A logical standby of the ATLAS local LFC catalog at CNAF has been set up at Roma1. This solution should also be evaluated for similar use cases by other Tier1 sites.
9) The review of Oracle Service Requests will continue to be held in the Distributed Database Operations phone meetings. Progress was reported on the first meeting with Oracle on this subject.
10) The next workshop will likely take place at CERN, possibly in conjunction with WLCG events (July and/or September).
11) The invited talks on GAIA processing challenges and GSI experience were much appreciated and considered important for widening the interests of our database community beyond WLCG towards other scientific disciplines.
12) Several CASTOR DB operational issues and procedures were discussed:
- thread count and DB session determination as part of deployment tests
- integration of CASTOR DB monitoring into the 3D monitoring setup
- the need for reliable execution of CASTOR cleanup jobs as part of the database service
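The reliability concern around cleanup jobs is essentially about retries and failure reporting, so that a skipped run cannot go unnoticed. A hypothetical sketch of the retry wrapper a service-side scheduler might use (the function names and the cleanup callable are assumptions, not the actual CASTOR jobs):

```python
import time

def run_with_retries(job, attempts: int = 3, delay: float = 1.0):
    """Run a cleanup callable, retrying on failure and reporting every
    failed attempt so silent skips cannot happen.
    Returns (succeeded, tries_used)."""
    for attempt in range(1, attempts + 1):
        try:
            job()
            return True, attempt
        except Exception as exc:
            # In a real service this would feed the monitoring setup
            # rather than stdout.
            print(f"cleanup attempt {attempt}/{attempts} failed: {exc}")
            if attempt < attempts:
                time.sleep(delay)
    return False, attempts
```

The key design point is that the wrapper always produces an outcome, success after N tries or a definitive failure, which is what makes the execution auditable as part of the database service.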
13) It was further proposed that the CASTOR Tier1 sites jointly define a standard CASTOR Tier1 DB configuration as the basis for an identical test setup at CERN:
- as a testbed to reproduce Tier1 CASTOR or DB problems
- for certification tests as part of the release procedure
There are minutes attached to this event.