28-R-15 (CERN conferencing service (joining details below))
Weekly OSG, EGEE, WLCG infrastructure coordination meeting.
We discuss the weekly running of the production grid infrastructure based on weekly reports from the attendees. The reported issues are discussed, assigned to the relevant teams, followed up, and escalated when needed. The meeting is also the forum for the sites to get a summary of the weekly WLCG activities and plans.
OSG operations team
EGEE operations team
EGEE ROC managers
WLCG coordination representatives
WLCG Tier-1 representatives
other site representatives (optional)
To dial in to the conference:
a. Dial +41227676000
b. Enter access code 0140768
NB: Reports were not received in advance of the meeting from:
ROCs: Italy, Russia, SE Europe, UK/I
VOs: ALICE, ATLAS, LHCb, BioMed
Recording of the meeting
Feedback on last meeting's minutes
<big> Grid-Operator-on-Duty handover </big>
From: France / NDGF
To: CERN / SE Europe
Report from France COD:
A new wiki page has been set up for the COD to follow up operational use cases and their status: https://twiki.cern.ch/twiki/bin/view/EGEE/OperationalUseCasesAndStatus#Use_cases_status
Two related issues in that list are of high priority:
retention period and SD in SAM
As COD, we would like the retention period to be reduced to 1 day, as decided at ARM 11 (February 2008). This is still not the case.
I want to add the recurrent case of YerPhI to the handover, to be discussed at the WLCG meeting:
This site is not able to provide production quality: it has serious network problems and its availability is very low.
What is the benefit, for the site and for the users, of keeping the site in production under these conditions?
Report from NDGF COD:
The main issue was the large number of alarms generated after the certificate for lcg-voms.cern.ch was changed. Most of these were probably because sites had not yet upgraded their host certificates. Konstantin Skaburskas replied to my query with some useful information. The situation made creating tickets from alarms unviable.
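As an aside, the kind of check a site admin would run in this situation is to inspect the host certificate's subject and expiry with openssl. The sketch below generates a throwaway self-signed certificate as a stand-in for a real host certificate (the CN and file names are illustrative only; a real check would point at the site's actual hostcert file):

```shell
# Generate a throwaway self-signed certificate as a stand-in for a real
# host certificate (CN and file names are illustrative only)
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
    -subj "/CN=example-host.example.org" \
    -keyout demo-hostkey.pem -out demo-hostcert.pem 2>/dev/null

# Inspect the subject and expiry date, as one would for a site's
# real host certificate file
openssl x509 -in demo-hostcert.pem -noout -subject -enddate
```

The `-enddate` output (`notAfter=...`) is the first thing to check when alarms suggest an outdated or expired certificate.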
As Friday was a holiday, tickets expiring on that day were postponed to this week. I think many Russian sites may also have been on holiday last week.
It is still very noticeable that you have to update the alarm fields a few times before an OK alarm is properly switched off.
<big> PPS Report & Issues </big>
PPS reports were not received from these ROCs:
AP, IT, RU, SEE, SWE, UKI
Issues from EGEE ROCs:
ROC France (IN2P3-CC-PPS):
A new lcg-CE (cclcgvmli10.in2p3.fr) was set up to provide access to our x86_64 WNs, installed with the WN_TAR-x86_64 3.1.5-0 distribution. For the time being, 4 WNs (~30 job slots) are available. All VOs are invited to test their 64-bit software through this PPS CE.
<big> gLite Release News</big>
Now in production
No releases to production last week.
Last one: gLite3.1 Update21
Details in http://glite.web.cern.ch/glite/packages/R3.1/updates.asp
Now in pre-production
No releases to pre-production last week.
Last one: gLite3.1.0 PPS Update26
Soon in production
Release of gLite 3.1 Update22 in preparation.
The update, to be released next Wednesday, will contain:
SGE Engine enabled on lcg-CE
fix for DENY tags to lcg-info-dynamic-scheduler
dCache 184.108.40.206.p6 (first dCache 1.8 release)
Rebuild MPI_utils mpich RPM with Fortran wrappers
first version of the dynamic service publisher, replacing the previous static configuration
new VOMS core 1.8.3-4 (affecting VOMS servers and clients on UI, WN, VOBOX, CE, SE_dpm, LFC, WMS, LB)
Many bug fixes. Fully backward compatible.
fix to trustmanager install script
lcg-infosites: new option to query for the WMS and the LB associated with a given VO. The -f option to filter based on the site name is also available.
bug fixes for edg-gridftp-client
<big> EGEE issues coming from ROC reports </big>
ROC France: - IN2P3-SUBATECH: I would like to discuss the sec-fp test: I use a world-writable directory /dlocal on the worker nodes as EDG_WL_SCRATCH, and consequently the sec-fp test reports a warning. Can we exclude the directory referenced by EDG_WL_SCRATCH from the test?
As this variable is site-wide and used by all VOs, I do not see a simple way to avoid the top directory being world-writable (apart from using the sticky bit, as for /tmp).
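For reference, the sticky-bit setup mentioned above can be sketched as follows; the directory name here is a local example, not the site's actual /dlocal. Mode 1777 (as used for /tmp) keeps the directory world-writable while preventing users from deleting or renaming each other's files:

```shell
# Illustrative only: create a world-writable scratch directory with the
# sticky bit set, /tmp-style (the path is an example, not the real /dlocal)
mkdir -p scratch-demo
chmod 1777 scratch-demo

# The leading '1' in the octal mode confirms the sticky bit is set
stat -c '%a %n' scratch-demo
```

Whether the sec-fp test could be taught to treat a sticky-bit directory as acceptable is exactly the question raised above.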
ROC SW Europe: Comment from the PIC site: Last Thursday and Friday there was a scheduled downtime at PIC. We scheduled the downtime in the GOCDB some days ago, and this generated some informational e-mails from the CIC portal. We know this because we received copies of them, but we do not know who exactly received this notification. We would like a way to ensure that the VO managers of the affected VOs did receive this e-mail. How can we do this now?
<big> WLCG issues coming from ROC reports </big>
No items reported this week.
<big>WLCG Service Interventions (with dates / times where known) </big>
The Classic SEs at IN2P3-LPC are planned to be removed from production on 15 May:
Please backup your data before that date.
The old Edinburgh site, ce.epcc.ed.ac.uk, will be retired from use in one week's time (1 May 2008). Storage services, via srm.epcc.ed.ac.uk, will remain accessible via the new Edinburgh site, ce.glite.ecdf.ed.ac.uk, for some time after this, although the intention is to slowly migrate to newer storage.
This means that support for several VOs will be dropped by Edinburgh, as they are not part of UKI-SCOTGRID-ECDF's supported VO list. In particular, these VOs are:
alice, babar, biomed, cdf, cms, dzero, esr, fusion, geant4, hone, magic, minos, na48, planck, sixt, t2k and zeus
At the start of May, the site egee.man.poznan.pl will be removed from production and shut down. Please backup your data stored on storage elements belonging to this site.
GOG-Singapore would like to decommission their site by June 2, 2008. The hardware and services at the site will be shut down permanently. Please migrate any data that is still needed by your VO before the site is disabled.
The site currently supports the following VOs: alice, atlas, lhcb, cms, biomed, dteam and ops
Data certification, Processing at the T0:
CERN CPUs are busy mostly with CMSSW 205 RelVal production. Validated releases: CMSSW V2.0.5. On the Tier-0 side, we had the Castor upgrade of the CMS instance to 2.1.7, plus LSF and /afs interventions. The /afs problem for the cmsprod volume seems fixed (moving to a fresh volume did the trick).
Still ~10 CSA07 "long tails" workflows running the HLT step. Finished most of the requests with FastSim 1.8.4; running some large MadGraph workflows. Started the iCSA08 pre-production: in progress. These data need to get back to CERN for further manipulation and injection to T1 sites for the CSA08 exercise.
DPG requests with CMSSW_184: 10M cosmics done, 4M cosmic (4T) done (GEN-SIM-DIGI-RECO, running AlcaReco), 6M BeamHalo done, 4M MinBias done, 1M Zmumu done; 4M cosmic (0T) will start soon. DPG requests with CMSSW_177: 1M TIF cosmics (all files at CERN; Reco can start). FastSim production with CMSSW_184: 8 QCD workflows (6 done, 2 running), 9 photonjets done, 10 photonjets_etgam done, 1 Bphys done; 7 QCD workflows showed a substantial job-crash rate, so production of these was stopped and the situation is under investigation. Pre-CSA08 production with CMSSW_205 running.
Data Transfers and Integrity, DDT-2/LT status:
Production transfers of the pre-CSA data in the /Prod instance suffer from the "FILE_EXISTS" problem on Castor at CERN: Castor experts suggest it is related not to Castor itself but to the SRM 1.1 -> 2.2 upgrade; SRM/storageware experts have already been contacted. --- Production subscriptions older than 2 months are being suspended and cleaned up to prepare for May. Change in the transfer priorities to accommodate the CSA/CCRC use cases in May: done. Stop & start of PhEDEx agents to move PhEDEx names to a more consistent CMS naming convention: done (only a few sites in the tails still need fixes: no worries). --- DDT status: progress continues; efforts now go into the debugging of non-regional routes in the CCRC scope. Day-by-day details at https://twiki.cern.ch/twiki/bin/view/CMS/DDTLinkExercising, visual overview at http://magini.web.cern.ch/magini/ddt.html.