Present: Brian Davies, Jeremy Coles, Matt Doidge, Derek Ross, Olivier van der Aa, Chris Brew, Stephen Burke, Phil Roffe, Graeme Stewart, Gianfranco Sciacca, Winnie Lacesso, Giuseppe Mazza, Philippa Strange, Mona Aggarwal, Duncan Rand.

Site Review
-----------

Sheffield: Job submission problem.
Oxford: Warning about CAs.
Edinburgh: gStat warning, but this is a known gStat bug.

Experiment problems: LHCb should link to logs when raising tickets. ATLAS have started doing this - see Steve Lloyd's page (http://hepwww.ph.qmul.ac.uk/~lloyd/atlas/atest.php). Some problems may be related to software installation.

N.B. Queued jobs are not necessarily a problem if the site is busy. ATLAS jobs at Lancaster were observed to do this - many nodes can come free at once. ATLAS production currently takes account of all queued jobs at a site, which can lead to problems when another VO queues a lot of jobs. ATLAS will move to using pilot (glide-in) jobs soon.

Sites will really need help in debugging complex compilation problems - they need to be able to contact experts to find out whether an issue is an ATLAS software-installation problem or one for which the site is responsible.

Site problems: Lancaster had some problems related to the failure of their dCache headnode and the way that dCache published storage information. Experiments should be getting data about SEs from the BDII, not gStat. Brunel reported a problem with APEL data continuity.

WLCG Collaboration workshop
---------------------------

Contact Jeremy if there is a problem with accommodation. Please review the conditions attached to the experiment visit (bring your passport, no high heels, ...) and email Jamie Shiers if you cannot go.

Issues to raise:
Olivier: Move to SL4.
Brian: How are experiments going to have data produced at T2s moved to the T1?
Stephen: ATLAS define clouds - we expect all UK T2-produced data to go to RAL. Transfers should happen with FTS - sites should not have to install anything extra.
GridPP networking forward look
------------------------------

We need a projection in order to request better connections from clusters to other sites, e.g. a 1Gb _dedicated_ connection to SJ5. CMS are worried about peak rates. Rates in 2009 and 2010 are high - we can't just look at the 2008 figures. T2Cs will discuss this with sites. In LT2 no site, except QM, will have a dedicated connection to the London MAN. We will have to plan based on expectation, not on current usage.

Site testing
------------

Graeme and Jamie ran file-transfer tests at sites last year. These now need to be rerun using the new CASTOR at the T1. We also need to look at LAN testing - the connections between WNs and disk. There should be some milestones next week.

Graeme and Greig have started testing rfio between WNs and storage. Initial results look pretty good (at least at Glasgow). Watch out that WNs may only have 100Mb networking. Some work on this at Lancaster (Peter?).

Simon: What is the required bandwidth between the WNs and the storage system? We have data from CMS, but no numbers yet from ATLAS. Watch out that the numbers are likely to be averages, not peaks.

AOB
---

Sites need to build up disk resources this year. CPU/disk ratios need to be addressed: 1:4 for CMS, 1:3 for ATLAS (TB:kSI2K). However, it was noted that current disk is underutilised.

APEL accounting - sites need to check that their SI2K values are correct. Dave and Greig have also modified the accounting to deal with storage as well. Sites should check that their storage figures are correct. (Linked from http://www.gridpp.ac.uk/deployment/links.html.)

ACTION: On all T2 sites: check for continuity of APEL data and that storage stats are correct.

DPM site survey: reminder to fill this in and send it to Sophie and to Greig and Graeme.

GridPP oversight committee in Feb - Jeremy will be preparing statistics on sites (utilisation, efficiency, etc.).
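The CPU:disk ratios above translate directly into a disk-provisioning target for a site. A minimal sketch of the arithmetic - the site capacity figure used here is purely illustrative, not taken from the minutes:

```python
# Disk implied by the CPU:disk ratios noted in AOB:
# 1 TB of disk per 4 kSI2K for CMS, 1 TB per 3 kSI2K for ATLAS.
RATIOS_KSI2K_PER_TB = {"CMS": 4.0, "ATLAS": 3.0}

def required_disk_tb(cpu_ksi2k, vo):
    """Disk (TB) a site should provide for the given VO, from its CPU capacity."""
    return cpu_ksi2k / RATIOS_KSI2K_PER_TB[vo]

if __name__ == "__main__":
    cpu = 600.0  # hypothetical site capacity in kSI2K, for illustration only
    for vo in ("CMS", "ATLAS"):
        print(f"{vo}: {required_disk_tb(cpu, vo):.0f} TB of disk needed")
```

So a hypothetical 600 kSI2K site would need 150 TB for CMS or 200 TB for ATLAS - a useful cross-check against the APEL and storage figures sites are being asked to verify.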