Detailed timetable
SC4 / pilot WLCG Service Workshop
Timezone: Europe/Zurich
at TIFR, Mumbai
Mailing list for attendees: email@example.com
The conclusions on data management are attached in the following "document".
09:00 - 18:50
Data Management
Convener: Location: Mumbai
Introduction & Workshop Goals
This talk will try to identify the key issues that we need to understand as a result of this workshop, including:
- Discussing and obtaining draft agreement on the detailed Service Plan for 2006 (what services are deployed where, schedule, agreed functionality that is delivered by each release, etc.)
- Data flows / rates (reprocessing at Tier1s, calibration etc.), e.g. the TierX-TierY ones
- The corresponding impact on Tier1s and Tier2s
The Tier0-Tier1 rates, as well as first-pass processing, are assumed to be well known.
Speaker: Jamie Shiers Material: more information
Update on LCG OPN
The status and timeline of the LCG Optical Private Network (OPN) is presented.
Speaker: David Foster Material: more information transparencies
FTS status and directions / FTS requirements from a site point of view
This session will cover recent and foreseen enhancements to the FTS s/w and service (20'). It will also provide an opportunity for sites to express their requirements in terms of operation, manageability and so forth.
Speaker: Gavin McCance, Andreas Heiss (tbc) (CERN, FZK) Material: more information transparencies
DPM / LFC
This session will summarise recent and foreseen enhancements to the DPM & LFC software and services (20'). It will also cover the site perspective from a deployment / service point of view.
Speaker: Jean-Philippe Baud, Graeme Stewart (CERN, Glasgow) Material: transparencies
- 12:00 Lunch 1h30'
CASTOR2 + CASTOR SRM
Deployment schedule of CASTOR2 and CASTOR SRM across the sites, including:
- the proposed h/w configuration to be used at CERN for SC4;
- the specific CASTOR and SRM 2.1 features provided by each release;
- the testing schedule, site & experiment validation, etc.
Speaker: Jan van Eldik, Ben Couturier - for the CASTOR / SRM team (CERN - RAL) Material: transparencies
- Summary of the dCache workshop.
- Deployment schedule of dCache 2.1 SRM across the sites, including specific features provided by each release, testing schedule, experiment validation, etc.
Speaker: Patrick Fuhrmann/Michael Ernst for the dCache team (DESY)
SRM / FTS / etc interoperability issues
By definition, the various components cited above must interoperate. We should identify any outstanding issues and produce a plan to resolve them.
- 18:30 End of day 1 20'
09:00 - 18:20
Services and Service Levels
Convener: Location: Mumbai Material: more information
List of requirements for setting up a production service - the "service dashboard" / "checklist"
Speaker: Tim Bell Material: more information
Deploying Production Grid Servers & Services
By the time of this workshop, all main Grid services at CERN will have been redeployed to address the MoU targets. This session covers the various techniques deployed as well as initial experience with running the services. Recommendations for sites deploying the same services will be offered.
Speaker: Thorsten Kleinwort Material: transparencies
- 11:00 COFFEE 30'
Service Availability Monitoring
Experience with monitoring services using the Site Functional Tests
Speaker: Piotr Nyczyk Material: transparencies
- 12:30 Lunch 1h30'
Service Level Metrics - Monitoring and Reporting
A proposal for how component-level testing could be aggregated into MoU-level services: metrics, reporting, etc. (a toy sketch of such an aggregation follows the speaker line below).
Speaker: Harry Renshall Material: more information transparencies
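As a purely illustrative sketch of the kind of aggregation proposed above (not part of the workshop material): per-component test results could be rolled up into an MoU-style availability figure. The component names and the rule that every critical test must pass are assumptions made for the example.

    # Illustrative only: roll per-component test samples up into a
    # service-level availability figure. Component names and the
    # "all critical tests must pass" rule are assumptions.

    from typing import Dict, List

    def service_up(sample: Dict[str, bool]) -> bool:
        # The service counts as available in a sampling interval
        # only if every critical component test passed.
        return all(sample.values())

    def availability(samples: List[Dict[str, bool]]) -> float:
        # Fraction of sampling intervals in which the service was up.
        if not samples:
            return 0.0
        return sum(service_up(s) for s in samples) / len(samples)

    # Example: three samples of (hypothetical) SRM component tests at one site.
    history = [
        {"ping": True, "put": True, "get": True},
        {"ping": True, "put": False, "get": True},  # one failed transfer test
        {"ping": True, "put": True, "get": True},
    ]
    print("availability = %.1f%%" % (100 * availability(history)))  # 66.7%

In a real reporting chain the per-interval samples would come from the Site Functional Tests, and the pass/fail rule per service would be part of what the proposal needs to define.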
Deploying Agreed Services at Tier1 / Tier2 Sites
In this session we attempt to reach draft agreement on the detailed Service Schedule for 2006. This includes:
- The list of services deployed at which sites;
- The service roll-out schedule;
- The service roll-out procedure, which must include both pre-testing / certification by the deployment teams and prolonged testing in a production environment (the Pre-Production System is assumed) by the experiments.
Speaker: SC team, Sites, Experiments
- The operations model for 2006 and roll-out schedule (from 'SC operations' to standard operations: what, if anything, remains particular to WLCG production?)
- Coordinating the various mailing lists, announcement mechanisms, out-of-hours coverage, ...
- The support model for 2006: moving from the stop-gap support lists set up for SC3 to GGUS support; timetable, scope, etc.
Speaker: Maite Barroso Material: transparencies
Service Levels by Service / Tier / VO
A discussion of the target service levels. The baseline is given in the attached planning document. All sites (at least the Tier1s and DESY) should come prepared to report on service levels and availability at their site.
Speaker: Helene Cordier + ALL SITES (IN2P3) Material: actionlist more information
Distributed Database Deployment
Conclusions of the workshop held at CERN on February 6th (agenda: http://agenda.cern.ch/fullAgenda.php?ida=a058495). The workshop goals are to review the status of the database project milestones on the Tier 0/1 side and on the experiment side. This boils down to:
- To what extent are the production database servers and Frontier installations at T0 and T1 becoming available? What setup problems have been encountered / solved? Which steps still need to be completed before opening the first distributed LCG database setup to users from the experiments in March? How can the remaining Tier 1 sites join / participate during the first production phase (March-September) in order to be ready to also provide database services for the full deployment milestone in autumn?
- To what extent has the development of the main applications (conditions data and other applications to be tested as part of SC4) been completed? This includes the definition of concrete detector data models, including refined estimates for volumes and access patterns, the connection between online and offline, and the connection between T0 and T1 (reports from the various distribution tests).
Speaker: Dirk Duellmann, Andrea Valassi Material: transparencies
- 18:00 End of day 2 20'
09:00 - 18:00
Experiment and ARDA Use Cases
The various experiment Use Cases, data volumes, flows and rates will be presented. In particular, reprocessing at Tier1s, detector calibration processing and analysis use cases should be addressed (including Tier1-Tier1, Tier1<->Tier2 transfers, and TierX<->TierY transfers).
Convener: Location: Mumbai
LHCb Use Cases for SC4
Speaker: Nick Brook / Ricardo Graciani Material: more information transparencies
CMS Use Cases for SC4
Speaker: Ian Fisk (TBC) Material: more information transparencies
ATLAS Use Cases for SC4
Speaker: Dario Barberis Material: more information transparencies
- 12:00 Lunch 1h30'
ALICE Use Cases for SC4
Speaker: Federico Carminati / Piergiorgio Cerello Material: transparencies
ARDA Use Cases for SC4
Speaker: Massimo Lamanna Material: transparencies
ROOT / PROOF / xrootd
Service and deployment issues: target sites, service levels, monitoring, procedures, etc.
Speaker: Rene Brun / Fons Rademakers / Andy Hanushevsky
Discussion on Analysis Requirements and Services Required
Speaker: Jamie Shiers
- 17:30 END OF WORKSHOP 20'
18:00 - 19:30
Storage Types BOF
The text below is now a summary of the BOF conclusions ordered logically; see the "minutes" link for a more faithful record of the discussions.
- It was agreed that the reasons for the volatile/durable/permanent distinction make little sense for HEP centres storing HEP data, and that issues of interest to HEP physicists (is a file on tape, who manages the cache) are not covered.
- Although the discussions on the Sunday talked about changing the SRM specification (and this may indeed be done), discussions on the Monday and Tuesday focussed more on the Glue Schema, which uses these same terms to describe Storage Areas provided by a Storage Element. Although changing/extending the Glue Schema could take some time, there is little to stop people using a new type ("fred") for these storage areas, provided all agree what is meant by the new type name (see the illustrative query sketch after this list).
- One of the Word documents attached here describes a list of possible storage types. If this list can be agreed to be exhaustive and names assigned to all types in the next week or so, then fine. (It may be possible to pick [replacement] names for just the two initial SC4 classes, but time presses. The advantage would be that a migration away from these names would then be avoided.) If not, we use durable and permanent as the sole storage types for the start of SC4, with the meanings described there.
- The SRM 1.1 interface will be used for the start of SC4.
- The consequence of points 3 & 4 is that HSM providers and sites must decide how permanent and durable storage areas can be implemented such that someone checking the information service can use the areas as required. This is to be done by February 27th.
- Thereafter, HSM providers and sites have a little longer to name the different storage types and implement all of these in the SRM 2.1 interface. The target is to have all of this done (and a migration path defined) such that we can move to the SRM 2.1 interface by October. SRM 2.1 interfaces that are not compatible with what is agreed are not to be introduced.
- Maarten Litmaath of CERN will be responsible for fostering discussions and convening meetings as necessary to ensure agreement for SRM 2.1 introduction in October.
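As an editorial illustration of the information-service check mentioned in the conclusions above (not part of the BOF record): a site or client could verify which storage areas advertise an agreed type by querying the Glue information published in a BDII. The endpoint, base DN and the GlueSAType attribute/value below are assumptions for the sketch, not an agreed interface.

    # Illustrative only: look up storage areas advertising a given storage
    # type in a Glue-schema information system (e.g. a BDII), using the
    # python-ldap package. Endpoint, base DN and the GlueSAType attribute
    # are assumptions for this sketch.

    import ldap  # pip install python-ldap

    BDII_URL = "ldap://bdii.example.org:2170"   # hypothetical endpoint
    BASE_DN = "mds-vo-name=local,o=grid"

    def find_storage_areas(sa_type: str):
        conn = ldap.initialize(BDII_URL)
        flt = "(&(objectClass=GlueSA)(GlueSAType=%s))" % sa_type
        return conn.search_s(BASE_DN, ldap.SCOPE_SUBTREE, flt,
                             ["GlueSAPath", "GlueChunkKey"])

    # e.g. check that a site's 'durable' areas are visible as agreed:
    for dn, attrs in find_storage_areas("durable"):
        print(dn, attrs.get("GlueSAPath"))

Whatever type names are finally agreed, the point of the February 27th deadline above is that a lookup of this kind must return areas that behave as the advertised type promises.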
Convener: Material: Minutes document