GDB

Europe/Zurich
IT Auditorium (CERN)

John Gordon (STFC-RAL)
Description
WLCG Grid Deployment Board monthly meeting
Discussion minutes

April 2011 – GDB (CERN)
 
Introduction (John Gordon)
LHCONE meeting, Paris.
IB: On the experiment requirements… look at the MB slides from yesterday for a summary of the request. The overview is that for 2011 the requirements are a little increased in some cases and a lot in others, and this relates to the pile-up events. For 2012 the requirements increase slightly over this year… ATLAS and CMS have worked hard to stay within this year's resources. LHCb have increased requirements because of the charm trigger and ALICE have increased but placed the requirement at CERN.
JG: This now goes to the RRB and is not so relevant to sites until decisions are made by the funding bodies. ATLAS and CMS have changed operational models to keep requirements down. For ATLAS it is what they keep on disk/tape and CMS have deleted some of what they had on tape.
July workshop dates are now fixed (11th-13th) so the July GDB has been cancelled. Next meeting is May 11th.
Next week is the EGI User Forum in Vilnius. Spring HEPiX is 2nd-6th May in Darmstadt.
LHCONE feedback from the Paris meeting on 5th April: “I think it was a rather good meeting; network experts were interested to meet with real sites!” (MJ). Many European NRENs were present, along with DANTE, a few site representatives and D. Buonacorsi for the experiments.
In May the discussions will be on the topics of: EMI-1 release; Information Service; HEPiX; Virtualisation; multicore and whole box scheduling; EOS and glexec deployment & testing.
MS: What do you mean that gLite 3.2 will get only security fixes for 6 months? In the discussion between WLCG and the director of the project (Alberto), he did not say this was a fixed duration – it was foreseen that if things went well these dates would be used. Nobody knows what will be required.
JG: That is reassuring.
IB: That should be rephrased – it was our requirement.
 
Installed Capacity (John Gordon)
The RRB has been asking for some time how people have been meeting (or not meeting) the pledges. For a year or more we have been working on gathering the information from the BDII etc. This is my update.
Last time this was looked at in the GDB, there were 13 sites not publishing. Now there are 9. Please could reps take a look at which sites are not publishing and ask them to publish.
OS: CSC on your list has been renamed.
JG: Probably we need to follow that change through offline.
There are now 34 sites not publishing the LHC VO shares.
One site said they would not do this publishing – to be discussed offline. The overall report is attached to the agenda.
Slide 7 looks at the Tier-2 accounting data. For the US, are the pledge figures correct?
IB: The pledge is correct.
LdA: Does this plot take into account the reconfiguration in Italy?
SF: No, not yet.
 
Core/CPU figures are possible but worth checking in case the cores per WN are being published rather than cores per CPU.
NDGF – number of cores is correct
Matthias: The ARC info system only cares about nodes and cores, not sockets – so the number of CPUs is not published.
JG: gLite may do something similar as the number of CPUs is not used… but it is published.
SF: Estonia realise they have published their HS06 wrongly by a factor of 4.
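For reps who want to check what their site actually publishes, a query along the following lines against a top-level (or site) BDII shows the relevant GLUE 1.3 attributes; the BDII host and cluster name are only examples:

    # Hedged example: inspect the CPU/core and HS06 figures a site publishes.
    ldapsearch -x -LLL -H ldap://lcg-bdii.cern.ch:2170 -b o=grid \
      '(&(objectClass=GlueSubCluster)(GlueChunkKey=GlueClusterUniqueID=ce.example.org))' \
      GlueSubClusterPhysicalCPUs GlueSubClusterLogicalCPUs \
      GlueHostArchitectureSMPSize GlueHostBenchmarkSI00
    # PhysicalCPUs should be the total sockets and LogicalCPUs the total cores in
    # the subcluster; SMPSize is cores per worker node, a common source of confusion.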
 
Request ALL sites to publish and check the data that is being published. The report will go to the RRB and will be automated if it is what they want to see.
SF: To follow on from the MB discussion, we need to understand where the T2s are with their installed 2011 pledge. There is not much time before next week’s RRB. I have information for about half the sites.
JG: Your slides in the MB showed which sites you are waiting for now?
 
CERNVM-FS
 Security review (Ian Collier)
ST: You mentioned upgrades potentially cannot be done while running. In the future I think upgrades should be okay – just not now while doing operational changes.
IC: For some things you will have to dismount.
ST: Sure – it will become less frequent though.
 
Status of deployment at CERN (Steve Traylen)
JG: A site could say can we have a Stratum 2 …
ST: Well that is a normal squid server. Stratum 1 has a full replica.
JG: Slide 9 FUSE talks straight to Stratum 1? Would this happen for every WN?
ST: It could. We may have to whitelist Squid servers if it becomes a problem.
?: How do we deal with installation problems? There are a few files that need to be modified. Can I modify just at the top level… how quickly does the cache update?
ST: The catalogue will be modified (SQL database).
??: Catalogue published within 1 hr. Create new file. Catalogue contains a time to live stamp.
?: So we can do a quick post-install fix?
??: Yes.
The client initially downloads the time-to-live. After that it will update and look for new files. The time-to-live can be reset to less than an hour but is currently at about 24 hours. The manager of the stratum-0 decides the time.
ST: Considerably quicker than sgm route.
JG: You said it is obvious why we do not do virus scanning. You could at least do a scan of the stratum-0.
ST: It has not been done traditionally.
??: The proper time would be when you publish the software.
GM: Could we misconfigure the squid so that it gets the old version?
We are further along now than 1.5 weeks ago. We now need to work out how to do a snapshot so that rollback is possible. The migration will happen over the summer period.
Ulrich: You mentioned cache sizes up to 40GB. This would be a showstopper for CERN since VMs have individual caches – say for 48-core machines.
ST: RAL required to give 100GB per job slot.
There is the issue of running jobs of a given experiment on the same nodes to avoid re-caching between jobs.
CVMFS – 10GB can be defined for each cache and overridden.
IC: The mount point and cache area are separate. On the hypervisors…. There may be a way to share.
MJ: On many of our WNs (80GB disks) we would not be able to run CVMFS if each repository cache is kept…
ST: One VO can not flush the cache of another.
There is a benefit in preserving a cache between jobs but still a gain in using CVMFS anyway.
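As a rough illustration of the knobs being discussed, a client configuration along the following lines sets the proxy, the cache location and a per-repository quota, and can cap how long a cached catalogue is trusted; the values and host names are purely illustrative, not a recommendation:

    # Illustrative /etc/cvmfs/default.local – example values only.
    CVMFS_REPOSITORIES=atlas.cern.ch,cms.cern.ch,lhcb.cern.ch
    CVMFS_HTTP_PROXY="http://squid01.example.org:3128|http://squid02.example.org:3128"
    CVMFS_CACHE_BASE=/var/cache/cvmfs2    # local cache area (per VM or shared disk)
    CVMFS_QUOTA_LIMIT=10000               # soft cache quota in MB, per repository
    CVMFS_MAX_TTL=60                      # optional cap on the catalogue time-to-live (minutes)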
 
Site deployment (Ian Collier)
The grid is used to validate releases of Geant4. In May/June the plan is to use the file system for deployment. Are there any sites not going to use it?
ST: If you want to find out then we need to publish a new tag – perhaps CVMFS. Will put something on the talk list. As for how many are using it… we need to look at the number of proxy servers using us.
They should also publish the repositories they support – not just that they support CVMFS.
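A hedged sketch of how a site's advertised support could be checked, assuming the tag ends up in the standard GLUE 1.3 software tags (the site BDII host, site name and tag text are illustrative):

    # Query a site BDII for software tags mentioning CVMFS.
    ldapsearch -x -LLL -H ldap://site-bdii.example.org:2170 -b mds-vo-name=EXAMPLE-SITE,o=grid \
      '(objectClass=GlueSubCluster)' GlueHostApplicationSoftwareRunTimeEnvironment \
      | grep -i cvmfs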
JG: What about deployment?
IC: We have quattor profiles. There is also the setup script that can be used as a worked example.
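For sites not using quattor, the equivalent manual steps look roughly as follows; the package names assume the CernVM-FS yum repository is already configured, and /etc/cvmfs/default.local is as sketched in the cache discussion above:

    # Hedged sketch of a manual worker-node install; the quattor profiles and
    # setup script mentioned above achieve the same result.
    yum install -y cvmfs cvmfs-init-scripts   # client packages
    cvmfs_config setup                        # set up fuse/autofs integration
    # ... populate /etc/cvmfs/default.local as sketched earlier ...
    cvmfs_config probe                        # mount and check each configured repository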
 
Middleware Update (Maria Alandes Pradillo)
JG: Is this a release of the BDII with the static part in the update?
MAP: I will need to check with Laurence.
 
Upcoming releases: gLite 3.2 TORQUE has several patches – a new version of Torque.
MJ: Does this include Maui?
MAP: I don’t think so but I will check and get back to you.
ML: Maui has not changed.
Mario D: The new GOCDB URL – it is hardwired in one of the top-bdii scripts and they need to switch to the new URL…. The latest bdii version has the new .eu URL in it.

EMI 1 Release Status (Cristina Aiftimiei)
Do we need to formalise the tarball request from WLCG?
MS: Looking at the nightly builds for RC3 it is building about half the things which does not correlate with the summary in the slide.
CA: The target for the build is 60% for RC3.
SB: What is the implication for this to be released for WLCG? It may not be “nearly out” as far as WLCG is concerned.
JG: EMI will release in to their repository… and take other things… what we don’t know is how long the validation step in EGI will take.
SB: So there could be several months where there are no new releases.
MS: You need to negotiate with the product teams.
Tiziana F: Standard updates will be applied until October 2011.
JG: There may be patches but not new functionality being back ported. It would be agreed product by product.
Alberto M: That is correct.
Looking at the UMD link (on the agenda)… to get an idea of priorities.
JG: It is worth WLCG taking a view on what our top priorities are. CERN can presumably input as any of the NGI members.
TF: Yes but they are representing CERN. Can integrate WLCG input with NGI channels.
JG: Well I can do that via a request tracker ticket to UMD team. We could ask all the NGIs represented to do the same but one entry stating WLCG requirements should be sufficient. As for the mechanism for deciding the requirements…
IB: It should come out of the operations meeting.
ADM: On the earlier discussion… 60% of the builds are functional but all the executables/binaries have been checked.
 
CREAM – status (John Gordon)
23 WLCG sites with no CREAM CEs.
ML: Note that some of the sites listed are OSG sites.

CREAM & LCG-CE (Maria Alandes Pradillo)
JG: Notice that the number of LCG-CEs is going down.
MJ: D0 were one VO who relied on the LCG-CE. I believe they are now ready to start a move to CREAM.
TF: About the timeline – just about okay. Could put standard updates to October (to align with gLite 3.2) and end support/security updates in April 2012.
JG: The LCG-CE is not in gLite 3.2.
SB: The WMS is also in need of a deadline.
MAP: Best to have it in EMI-1 first. Right now it is fully supported. Once EMI-1 out we should think about the retirement schedule for the WMS.
ML: At CERN we will take the WMS as soon as it comes out of EMI and that should help speed up the timeline a bit.

Status of ACE availabilities (Wojciech Lapka)
JG: Last month there were 30+ sites reporting differences and now about 15.
WL: Yes. We get data from ATP.
Services must be correctly declared in the GOCDB
By mid-April the identified issues should be resolved… so for the start of May we could consider generating reports using the last 2 weeks of April.
ML: For the foreseeable future we need the algorithm that allows an OR over the CEs. LCG-CEs should be dropping quickly on the WLCG side…
IB: What is the difference – surely it is always better to allow any?
JG: Well it could be that counting working CREAM CEs as part of the availability will speed up the transition.
ML: The calculation should take into consideration what the LCG VOs can use.
IB: We want to maintain flexibility.
SB: There is a difference between global tests and VO tests. Some VOs require say CREAM and so for them the availability may be different.
ML: Yes you need to include the services of interest to a VO in the calculation for a VO.
JG: What happens when WLCG sites only want to use CREAM?
IB: We need to remember that there is the generic ops test and also the more specific measure given by the experiment tests.
JG: Yes – but with the AND at the moment it is broken. Sounds like leaving the LCG-CE in the availability calculation is not a problem.
TF: The ops availability should be an OR .. since there are other VOs who may need the LCG-CE. We are mixing support, availability and use.
JG: Typically the MB signs off changes in the availability calculation – can we sign off based on two weeks running in April?
IB: If the two weeks look good and any differences are understood then we can probably sign it off then. What is the status of the experiment tests?
ALICE and ATLAS have switched to ACE. The others have not…
IB: Then we could say from May or June that anyone without CREAM (or ARC or… ) is failing.
ML: If WLCG experiments want to use CREAM then they could change their algorithm and have sites without CREAM failing their metrics.
JG: … Okay, so we agree with the proposed timeline and will review in May. I think all the experiments have now signed off on using CREAM.
ML: Still fewer than LCG-CEs but they are now all running.
 
MUPJ – gLexec update (Maarten Litmaath)
SB: You say Tier-2s can look at it but unless sites start failing tests they may not.
ML: There is a test available but it is not compulsory. We will need to watch the uptake in May/June. Many T2s are going to look into this. There may be issues in the batch systems that preclude meeting the deadlines.
SB: Finding the bugs is part of the problem.
JG: We need a good spread of sites to volunteer – to cover, for example, SGE. We need to be sure of full matrix coverage. Even different setups of the same batch system can cause new problems.
ML: We have a tentative milestone but making it happen at the 70-80% level may be what is possible. We need some of the experiment production managers to do some dedicated tests of real work.
 
UK glexec experience (John Gordon)

ML: The site that opened the ticket relies on extra group IDs in their batch system – anything spawned should be killed. Glexec just resets that environment. Not a huge problem to fix but there is a bit of work to be done so getting it fixed to meet the June deadline may be tricky. A solution is underway. Other batch systems did not rely on this feature for clean-up so did not see it – imperfections in the job leave things behind. This is a fair ticket for that site.
JG: You mentioned that there were helper scripts etc… Are there other things sites should do to work around these problems?
ML: In the questionnaire done last year on MUPJ/glexec, at least DESY mentioned that if the UID changes then cleanup of sandboxes is a problem. They had an epilogue script to clean up as the normal user. Ways were suggested at the time to clear “stuff” from any job no longer running. At that time it meant DESY could not run glexec in setuid mode.
There is the issue of the job directory being writable by the original user but not the target user.
SB: This relates to the old issue of tmp directories.
ML: glexec bootstraps the process… goes to the home directory… the target script would cd into the required directory. If you use glexec naively then you will run into many problems. There are helpers… these are issues for the users of glexec, not the sites.
JG: Could be …
MJ: The last issue is for the site not the user. For sites using shared home directories, the users will not be aware of what is being used. We need to get this well documented.
ML: To first order it should not be a problem for the site. It is up to the developers of the pilot job frameworks to figure out how to start a pilot job correctly… and to make sure the target payload goes to the correct target area. We could make glexec more helpful in this area but some of the VOs are just now testing this and do not use the pilot framework to submit the jobs. To second order, yes, it is a problem for the site.
MJ: Where we use shared directories we do not want to rely on users… We want a wrapper that makes it transparent to the user.
JG: One question that follows is whether this is under the site’s control.
ML: Jobs coming via a pilot factory will not run into this problem. Perhaps it would be more intuitive if the current directory were kept. It is a bit like running the su command: if run with an option you also get put in the home directory. So it is best to first cd to the area the job created… Okay, so we can probably use one of the tickets to create a bug and take it from there.
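To make the pattern concrete, a hedged pilot-side sketch follows; the paths, proxy locations and payload script name are illustrative, and the mkgltmpdir helper mentioned in the chat wraps this up more carefully than the naive chmod used here:

    # Pilot creates an area the target account can write to, then the payload
    # cds into it before doing any real work.
    export GLEXEC_CLIENT_CERT=/path/to/payload.proxy    # proxy of the payload owner
    export GLEXEC_SOURCE_PROXY=/path/to/payload.proxy   # copied across for the target identity
    scratch=$(mktemp -d "$PWD/payload.XXXXXX")
    chmod o+rwx "$scratch"    # naive; mkgltmpdir does this more carefully
    # run_payload.sh is assumed to have been staged into $scratch by the pilot
    /opt/glite/sbin/glexec /bin/sh -c "cd '$scratch' && ./run_payload.sh"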
JG: Is there a worry then that the way sites test things is not going to be…
So we need the twiki mentioned earlier.
Failure to read proxy from NFS is now fixed. Patch committed.  Likewise with ARGUS about account name issues.
SB: But for the next release which is EMI-1.
ML: We need to see whether it makes sense to do this in gLite 3.2. ARGUS is a standalone service. Also CREAM 1.7 needs to be able to use ARGUS… it is tested in EMI-1. There may be some issues with gLite 3.2 backports.  There are a few things in EMI-1 that are of interest.
For the first release, 3 months are given for resolving problems.
JG: The problem with the tarball install – the glexec binary itself needs to be built.
ML: glexec has its own independent life; it is more like a system component. We can co-locate with software from these projects. The issue was taken up by the glexec developers and is now in shape for anyone to take up.
ML: In the questionnaire there was some feedback that running glexec setuid may be an issue for some sites and we need to decide what to do there.
For next time it would be good to see the Nagios tests and experiment progress at the T1s. Claudio, for example, is working on a test for CMS to conclude whether user mapping is okay for them… over the standard ops test.
 
Communicating Machine Features to Batch Jobs (Tony Cass) – talk to the MB on 8th March
No comments or discussion.
 
Technical discussions in GDB (Ian Bird)
There has been a need for more technical discussions for a while now. We are a collaboration yet in some areas we do not collaborate – people do not volunteer to take things forward.
JG: Presumably some of the topics will be full day discussions. There is also a pre-GDB slot that could be used.
SB: Many of the areas are very general. Is the idea to pick a narrow topic and get to a decision at the end of the discussion?
IB: We should be deciding what we want to do and not be driven by wider projects.
SB: What about X509?
IB: I hear many people saying that X509 is a barrier to some communities using the grid. There is an issue about whether existing credentials from institutes could be used to generate proxies on the fly. Is this something we should be pushing or not, as it may have implications for us?
OX: Really this is about grid authorisation and virtual organisations.
JG: The other issue is about commonality.
IB: There was an ATLAS meeting yesterday where some topics were discussed and it was clear on some of these topics that they would appreciate WLCG help in collaborating with others – e.g. on noSQL databases.
JG: On the issue of MB – GDB duplication … what is the forum to get MB discussions out to the wider community?
JT: A good example is the collaboration now on dataset popularity frameworks between the experiments.
JG: How do we make progress on it?
IB: I will take it forward.
 
Meeting closed at 16:20.
 
 
EVO chat:
[09:00:27] CERN 31-3-004 joined
[09:03:18] Pete Gronbech joined
[09:04:58] Jeremy Coles Pete can you hear John?
[09:05:37] Pete Gronbech I can hear but my mike is not working
[09:05:50] Jeff Templon joined
[09:08:30] Pierre Girard joined
[09:12:26] Denise Heagerty joined
[09:12:26] Denise Heagerty left
[09:15:49] Yannick Patois joined
[09:17:34] Tiziana Ferrari joined
[09:17:39] Tiziana Ferrari left
[09:18:34] peter solagna joined
[09:27:02] Martin Bly joined
[09:39:12] Duncan Rand joined
[09:40:32] Derek Ross joined
[09:40:32] Derek Ross left
[09:40:39] Derek Ross joined
[09:50:29] Martin Bly left
[09:53:06] Pablo Fernandez joined
[09:53:28] David Kelsey joined
[09:57:19] Duncan Rand left
[10:00:13] Martin Bly joined
[10:07:28] Tiziana Ferrari left
[10:10:43] Martin Bly left
[10:22:46] Mario David joined
[10:31:38] Jeff Templon left
[10:34:07] Jeff Templon joined
[10:38:39] Jeff Templon lost the sound
[10:38:43] Yannick Patois sound ?
[10:38:52] Jeff Templon hallloooooooo
[10:38:56] Derek Ross left
[10:39:01] Yannick Patois No sound from EVO.
[10:39:11] Derek Ross joined
[10:39:16] Yannick Patois back, thanks.
[10:39:18] Pete Gronbech good
[10:39:19] Jeff Templon that's better
[10:39:37] Jeremy Coles The mic was switched off!
[10:58:22] Jeff Templon the thing to do would be to replace, for each VO for which CVMFS is switched on, ALL the sw tags for that Vo
[10:58:33] Jeff Templon using the single sw tag "CVMFS"
[10:58:46] Jeff Templon in this way you don't need to publish the list of VOs using CVMFS
[10:59:13] Jeff Templon nice by product is that the information system is reduced in size by about 30% due to disappearance of all ATLAS sw tags.
[11:00:16] Jeremy Coles Breaking for lunch. Back at 14:00 CET.
[11:00:23] Jeff Templon you don't need all those tags if you know you have cvmfs
[11:00:28] Jeff Templon everything is in there by default!
[11:01:02] Derek Ross left
[11:01:26] peter solagna left
[11:01:37] Jeff Templon left
[12:52:58] Sam Skipsey joined
[12:53:16] Cristina Aiftimiei joined
[12:53:27] Alberto Di Meglio joined
[12:53:31] Brian Davies joined
[13:02:53] Paolo Veronesi joined
[13:11:50] Jeff Templon joined
[13:23:24] peter solagna joined
[13:23:24] Andrew Elwell joined
[13:29:30] Massimo Sgaravatto joined
[13:35:14] Tiziana Ferrari joined
[13:59:11] Alvaro Fernandez joined
[13:59:11] Alvaro Fernandez left
[13:59:11] Jeff Templon how about we have that discussion at the point that the lcg-CE actually becomes unsupported and not now???
[13:59:11] Tiziana Ferrari indeed
[14:10:45] Martin Bly joined
[14:21:06] Jeremy Coles Is the info system used with gluececapacility (as in Maarten's slide) or a flag is being set.
[14:21:31] Oscar Koeroo joined
[14:21:32] Oscar Koeroo left
[14:25:47] Oscar Koeroo joined
[14:27:45] Jeremy Coles John has a summary of the SGE issues in slides coming next.
[14:43:25] Oscar Koeroo Maarten: this is what we see that pilot fw builders are doing
[14:37:38] Jeff Templon can't hear the questions / comments from the room
[14:38:47] Jeff Templon John microphone
[14:38:54] Oscar Koeroo mic please
[14:39:00] Jeff Templon only hear maart
[14:39:06] Jeff Templon yep!
[14:39:06] Oscar Koeroo yes
[14:39:46] Oscar Koeroo what is a better alternative?
[14:39:55] Oscar Koeroo $HOME ? $TMPDIR
[14:39:59] Oscar Koeroo other
[14:40:16] Oscar Koeroo we've asked user communities, and had no oppions
[14:40:24] Oscar Koeroo (the pilot job frameworks)
[14:43:16] Oscar Koeroo Maarten: this is what we see that pilot fw builders are doing
[14:43:43] Oscar Koeroo Staying in the same dir is problematic, because that directory is not writeable to the target
[14:44:31] Oscar Koeroo So the pilot needs to create a writeable directory, for the payload to chdir into
[14:44:35] Jeff Templon you say if pilot starts in shared home, then glexec'd job starts there ... if pilot starts in tmpdir, glexec'd job starts there
[14:44:45] Oscar Koeroo It should be a gLexec usage pattern
[14:45:33] Jeff Templon Maart
[14:45:57] Jeff Templon and where it starts ... is a site decision, so this solves Michel's problem
[14:46:57] Jeff Templon the solution i discussed has been there for more than a year
[14:47:20] Jeff Templon mkgltmpdir ....
[14:47:33] Oscar Koeroo the mkgltmpdir script is now also shipped in the glexec wn release.
[14:47:47] Jeff Templon read the fine material 
[14:47:51] Oscar Koeroo it makes the magic happen
[14:48:28] Oscar Koeroo the tool wrappes glexec and makes a writeable area, in the pilot's CWD
[14:49:31] Oscar Koeroo 69332 is fixed
[14:50:22] Jeff Templon suggest removing 69359 and putting that on a slide labelled "argus problems"
[14:53:09] Oscar Koeroo 69362 is going to be addressed soon after EMI-1 release. You have to have the VOMS clients first 
[14:54:36] Oscar Koeroo SRPMS are avaialble for IN2P3
[14:54:45] Oscar Koeroo incl src-tarballs
[15:04:07] Jeff Templon if the project is accepted than it becomes supported for us and hence better than best effort. action is on me to submit the request to the national project.
 
 
 
 
 
 
 
 
    • 10:00-12:00
      Morning
      Convener: Dr John Gordon (STFC-RAL)
      • 10:00
        Introduction 30m
        Speaker: Dr John Gordon (STFC-RAL)
        Slides
      • 10:30
        Installed Capacity 30m
        New Tier2 Report
        Speaker: Dr John Gordon (STFC-RAL)
        Slides
        T2 Report
      • 11:00
        CERNVMFS 1h
        • Security Review 20m
          Speaker: Ian Collier (UK Tier1 Centre)
          Slides
        • Status of Production Service at CERN 20m
          Speaker: Steve Traylen (CERN)
          Slides
        • Site Deployment 20m
          Speaker: Ian Collier (UK Tier1 Centre)
          Slides
    • 12:00-14:00
      Lunch 2h
    • 14:00-17:00
      Afternoon
      Convener: Dr John Gordon (STFC-RAL)
      • 14:00
        Middleware 30m
        gLite 3.1 retirement; gLite 3.2 status; EMI-1 status; UMD priorities
        Speaker: Maria Alandes Pradillo (Unknown)
      • 14:30
        CREAM 30m
        Signoff of availability calculations; lifetime of the LCG-CE?
        Speakers: Dr John Gordon (STFC-RAL), Maria Alandes Pradillo (Unknown), Wojciech Lapka (Unknown)
        CREAM
        Status of ACE
      • 15:00
        Multi-User Pilot Jobs 30m
        • gLExec 15m
          Tier1 status; Tier2 status; showstoppers?
          Slides
          UK Experiences
        • Experiment Workflows 15m
          Readiness for widespread deployment; coexistence of glexec and non-glexec sites?
      • 15:30
        WN Environment 15m
        Speakers: Dr John Gordon (STFC-RAL), Tony Cass (CERN)
        Slides
      • 15:45
        Technical Discussions 15m
        Speaker: Ian Bird (CERN)
        Slides