To enable an iCal export link, your account needs an API key. This key allows other applications to access data in Indico through the provided link, even when you are not using or logged into Indico yourself. Once created, you can manage your key at any time by going to 'My Profile' and looking under the tab entitled 'HTTP API'. Further information about HTTP API keys can be found in the Indico documentation.
In addition to having an API key associated with your account, exporting private event information requires the use of a persistent signature. This enables API URLs that do not expire after a few minutes, so while the setting is active, anyone in possession of the provided link can access the information. It is therefore extremely important that you keep these links private and for your use only. If you think someone else may have gained access to a link using this key, immediately create a new key pair on the 'My Profile' page under the 'HTTP API' tab and update the iCalendar links afterwards.
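As a rough illustration of how such signed export links are built, the sketch below follows Indico's documented HMAC-SHA1 request-signing scheme: the query parameters (including the API key) are sorted, appended to the export path, and signed with the secret key. This is a minimal sketch only; the example path and parameter values are hypothetical, and the exact endpoint and parameter names should be checked against the Indico HTTP API documentation.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

def build_signed_url(path, params, api_key, secret_key, persistent=True):
    """Build a signed Indico HTTP API URL (sketch of the HMAC-SHA1 scheme)."""
    items = dict(params)
    items['apikey'] = api_key
    if not persistent:
        # Non-persistent links embed a timestamp and expire after a few minutes;
        # persistent signatures omit it, so the link stays valid until the key
        # pair is regenerated.
        items['timestamp'] = str(int(time.time()))
    # The signature covers the path plus the sorted query string.
    query = urlencode(sorted(items.items()))
    signature = hmac.new(secret_key.encode(),
                         f'{path}?{query}'.encode(),
                         hashlib.sha1).hexdigest()
    return f'{path}?{query}&signature={signature}'

# Hypothetical example: a persistent iCal export link for one event.
url = build_signed_url('/export/event/1234.ics', {'detail': 'events'},
                       'my-api-key', 'my-secret')
```

Because a persistent link carries no timestamp, the same inputs always yield the same URL, which is precisely why such links must be kept private.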
Permanent link for public information only:
Permanent link for all public and protected information:
In-depth discussions with the developers, who will give configuration and management hints to site admins; site admins will have an opportunity to list their problems and express their requests.
EVO Phone Bridge: 599586
Erik Mattias Wadenstein
This session will be dedicated to Tier-2s running DPM and StoRM.
Introduction and experiment requirements (15m)
FTS: an update (15m)
Items arising from the discussion after the presentation:
Full listing of channel parameters:
[FTM] cleanup and publishing of transfer logs:
GFAL and lcg_util (15m)
DPM status and plans (20m)
An update on DPM status, plans and the currently recommended release.
DPM configuration and discussions - questions from the sites (20m)
* user quotas
* tools for performing common administration tasks
* access control for spaces
* more advanced pool selection mechanism for DPM
* improved logging (centralised)
* tools for checking SE-LFC synchronisation
* nagios style alarms
* Are all of the current SEs set up properly in order to optimally deal with local user analysis?
Items arising from the discussion:
Addition of more filesystem selection methods
Tool to balance filesystems
StoRM status and new release (30m)
StoRM configuration and discussions - questions from the sites (30m)
* Some sites are testing StoRM now. Some sites are using GPFS and others Lustre. Can we expect exactly the same functionality from both types of SEs?
* access control for storage area
* gridFTP load balancer
This session will cover dCache specific issues and discussions
dCache status and plans (20m)
An update on the dCache releases recommended for Tier-2s.
Installation and configuration hints for Tier-2s running dCache (30m)
Advice on how to install, configure and run a dCache installation at a Tier-2.
dCache configuration for Tier-1s (40m)
dCache - questions from the sites (1h)
1. Splitting SRM instances. Is it feasible?
2. Configuration of gsiftp doors
3. Hardware setup at sites (32 or 64bits? How much memory and where? ...) and recommended software packages to use (Java version, DB, etc.)
4. Which versions to run on the head nodes and which on the pool nodes?
5. Avoiding high load on the PNFS node (busy threads in PnfsManager, long queue).
6. Limiting the number of requests of a certain type (e.g. put requests) globally or per user.
7. "We currently experience a long-standing problem of storage availability at CERN that I think is worth discussing (again) at the preGDB. One disk server of the T0D1 service class has now been down for 2 weeks. We have a few thousand files on there that are inaccessible. It is a real burden to figure out which ones, even if we could get the list (which we don't have). How do sites envisage facing such problems?" - Philippe Charpentier
8. Managing databases: the PutRequest and PutRequestHistory tables are becoming big. Should "VACUUM ANALYZE" be performed? How often?
9. What should clients be careful about using? What are the most expensive calls (e.g. srmls -l)?
10. How large can directories be before running into PNFS slowness?
11. What is the advice for dCache sites regarding migrating to Chimera? It seems as if no one has moved yet.
12. Are all of the current SEs set up properly in order to optimally deal with local user analysis?
13. Implementations for 64-bit architectures.
14. Is there a plan for a user-friendly tool for dCache installation?
15. What is the status of the dCache dynamic information providers?
16. The "fast" PNFS implementation
17. Poor ability of some dCache components to scale.
18. No automation for disaster recovery. Many disaster-recovery actions have to be done by hand, which is not our target. dCache somewhat hinders the deployment of high-availability solutions, though it allows easy rapid-recovery solutions.
19. PIC: Monitoring of our system as a whole is not at the level we would like it to be. Performance bottlenecks are difficult to find. We are working to improve this.
20. IN2P3: Massive prestaging from the tape system to dCache is not efficient enough: when prestaging through SRM, dCache sends the prestaging requests to HPSS in small batches (say 10 at a time). This causes a lot of inefficiency in tape mounting and unmounting, and is the big bottleneck we experience in the prestaging exercises.