28-R-15 (CERN conferencing service (joining details below))
Weekly OSG, EGEE, WLCG infrastructure coordination meeting.
We discuss the weekly running of the production grid infrastructure based on weekly reports from the attendees. The reported issues are discussed, assigned to the relevant teams, followed up and escalated when needed. The meeting is also the forum for the sites to get a summary of the weekly WLCG activities and plans.
OSG operations team
EGEE operations team
EGEE ROC managers
WLCG coordination representatives
WLCG Tier-1 representatives
other site representatives (optional)
To dial in to the conference:
a. Dial +41227676000
b. Enter access code 0148141
OR join via the web interface (please specify your name & affiliation)
France: Since 01/06/2009, one of the regional Top BDIIs, hosted at GRIF, has had problems, initially due to an air-cooling system failure. The GRIF WMS consequently had problems because it was linked to this Top BDII.
France: IN2P3-CC: the MSS software update ended successfully on Friday. The dCache SE is now fully available.
DECH: We needed to ban some users for various reasons: completely filling /tmp (VOs icecube and biomed) and running hundreds of jobs that were killed by the CPU time limit (ATLAS).
The first two cases were quickly fixed via GGUS. The ATLAS case has been open for almost two weeks:
(Assigned to VOsupport)
How should sites react when users get banned? The LHC VOs have alarm tickets to sites; how should sites approach the VOs?
SWE: During the migration of worker nodes from 32-bit to 64-bit, PIC faced many problems related to the dependencies of LHC software on 32/64-bit libraries. We are not happy with the situation of having production releases that are poorly tested against the experiments' software (at least for LHC):
- thread in LCG-ROLLOUT: "libstdc++-devel.i386 and libstdc++-devel.x86_64"
Reply from Integration and Certification: we are working with the Applications Area to produce a meta-rpm that pulls in the OS libraries needed by the HEP VOs.
Grid Service Interventions
ALL TIMES IN UTC+2
Downtimes affecting the WLCG Tier-1 sites:
NDGF-T1: At risk: 08:00 9 Jun - 00:00 11 Jun. Services: Bergen will update the fimm cluster and the Tier-1 machines (compute nodes, dCache machines, grid middleware servers) to Rocks 5.1 with CentOS 5.3 at UiB. This will degrade services slightly.
RAL-LCG2: OUTAGE: 10:00 8 Jun - 10:00 15 Jun. Services: Relocation to new machine room [IN PROGRESS].
NDGF-T1: OUTAGE: 00:15 8 Jun - 04:15 8 Jun. Services: GEANT's circuit provider will be performing maintenance on the dark fibre route COP-FRA.
NDGF-T1: At risk: 7:30 5 Jun - 15:00 8 Jun. Services: Some dCache pools crashed this morning. Some ATLAS and ALICE files will be unavailable until the pools have been brought back online. Most pools are back, but two are still giving us problems. Investigation in progress. [IN PROGRESS]
Maria Dimou, Rob Quick
(OSG - Indiana University)
Discussion of open tickets for OSG
It is now urgent to get an OSG answer on the site email as per https://savannah.cern.ch/support/?107531
Ticket analysis done today by Guenter Grein:
1. GGUS Ticket #49049 (OSG #6926)
Ticket is in progress in GGUS but closed in OSG.
Reason: GGUS received the "Closing" mail before the delayed update mails, and those update mails then made the mail parser set the GGUS ticket back to "in progress".
Conclusion: the mail parser works correctly, but problems occur in case of mail delays, especially when more than one update mail is sent within a short time.
-> I closed this ticket manually.
2. GGUS Ticket #48962 (OSG #6924)
Both tickets open -> ok
3. GGUS Ticket #48737 (OSG #6922)
Both tickets open -> ok
4. GGUS Ticket #37059 (OSG #6926)
Both tickets open -> ok
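The parser problem in ticket #49049 above is an ordering race: status mails applied in the order they arrive can leave a ticket in the wrong state when delivery is delayed. A minimal sketch (a hypothetical data model and timestamps, not the actual GGUS parser) of why sorting by the mail's sent timestamp avoids the race:

```python
from datetime import datetime

def apply_in_arrival_order(ticket, mails):
    # Apply status mails in the order they arrived (the behavior
    # described in the minutes).
    for mail in mails:
        ticket["status"] = mail["status"]
    return ticket

def apply_in_sent_order(ticket, mails):
    # Sort by the sent timestamp first, so a delayed update mail
    # cannot overwrite a later "Closing" mail.
    for mail in sorted(mails, key=lambda m: m["sent"]):
        ticket["status"] = mail["status"]
    return ticket

# Arrival order: the "Closing" mail arrived first; the earlier
# update mail was delayed in delivery (hypothetical timestamps).
mails = [
    {"sent": datetime(2009, 6, 8, 10, 5), "status": "closed"},
    {"sent": datetime(2009, 6, 8, 10, 0), "status": "in progress"},
]

print(apply_in_arrival_order({"status": "open"}, mails)["status"])  # in progress (the bug)
print(apply_in_sent_order({"status": "open"}, mails)["status"])     # closed (intended)
```

Manual closing, as done here, remains the fallback when mails have already been applied out of order.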