# DPM Collaboration and Extended Dev Meeting

Wed 14th March 2018
https://indico.cern.ch/event/713749/

# Agenda

- News from CERN
  - DPM Project Status
  - Effort projections
  - DPM project strategy 2018
- DPM 1.10 release
  - Testbeds
  - End of support for legacy components and Dome transition planning
  - Caching
- DPM Workshop 2018
- Collaboration
  - agreement
  - membership and tasks
- AOB

# Minutes
Present:
- Jiri Chudoba
- Alessandra Doria
- Fabrizio Furano
- Oliver Keeble
- Andrea Manzi
- Andrea Sartirana
- Sam Skipsey
# News from CERN
## DPM Project Status
The project is in a consolidation phase with no major developments planned. The priority is the infrastructure transition to Dome.
Andrea Manzi has taken over leadership of the FTS project and will have less time to dedicate to DPM support. He will continue to handle Puppet.
## DPM project strategy 2018
Oriented around the DPM 1.10 release and the transition to Dome.
# DPM 1.10 release
DPM 1.10 addresses all known issues. It is currently in epel-testing, where it will remain until the forthcoming xrootd release, at which point it will be rebuilt and pushed to stable. The current epel-testing release is fully testable.
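For sites that want to try it before the stable push, something like the following would pull the packages from the epel-testing repository on a CentOS 7 node. This is a minimal sketch; the package names are illustrative and should be adapted to the node's role.

```python
# Minimal sketch: install DPM 1.10 packages from epel-testing on CentOS 7.
# The package list below is illustrative, not the full set for every node role.
import subprocess

def install_from_epel_testing(packages):
    """Run yum with the epel-testing repository enabled."""
    cmd = ["yum", "-y", "--enablerepo=epel-testing", "install"] + packages
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    install_from_epel_testing(["dpm", "dmlite-dome"])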
## Testbeds
We should use "WLCG Middleware Readiness" where possible to get experiment workflows onto Dome testbeds. The DPM team wants to work with one or (preferably) two sites for each of Atlas and CMS in this way to ensure optimal pre-production-style test coverage.
- Andrea S. is ready to upgrade and trigger Middleware Readiness (CMS site).
- Andrea will also ask LAPP (Atlas site).
- Sam will get in touch with Edinburgh, who are talking to Atlas about SE testing.
- Alessandra D. is OK to upgrade after Easter (Atlas site).
## End of support for legacy components and Dome transition planning
The release of 1.10 to EPEL-stable will trigger the announcement of a 12-month support timetable for the "legacy components" (i.e. SRM, dpm, dpns, rfio); support for them will end 12 months after that announcement.
## Caching
Fabrizio reiterated the availability of "volatile pools" in the latest DPM releases and their relevance to evolving Data Management scenarios in WLCG.
Alessandra has a fellow working on DPM caching, currently with Atlas; they may also try CMS.
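As a reminder of the mechanism: a volatile pool treats its contents as a cache and delegates the fetching of missing files to a site-supplied external script. The sketch below illustrates the idea only; the argument convention (logical file name, local destination path) and the remote endpoint are assumptions for illustration, not the actual hook interface.

```python
#!/usr/bin/env python3
# Hypothetical "file pull" script for a DPM volatile pool (illustrative only).
# Assumed convention: invoked with the logical file name and a local
# destination path; exit code 0 means the pull succeeded.
import sys
import shutil
import urllib.request

REMOTE_BASE = "https://remote-store.example.org"  # assumed upstream endpoint

def pull(lfn, dest):
    """Fetch the file behind `lfn` from the upstream store into `dest`."""
    with urllib.request.urlopen(REMOTE_BASE + lfn) as resp, open(dest, "wb") as out:
        shutil.copyfileobj(resp, out)

if __name__ == "__main__":
    lfn, dest = sys.argv[1], sys.argv[2]
    pull(lfn, dest)
```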
## Multi-site DPM
France and Italy indicated interest in this scenario. The Italian scenario is a multi-site cache.
Sam: the UK decided that there wasn't much saving in a multi-site DPM, since the cost of administering the lower-level storage remains.
# DPM Workshop 2018
Alessandra: no big wish for "site reports" like before.
- Stick to highlights.
- Maybe more on caches.
- Space reservation details in Dome.

Andrea S. agreed:
- Would like feedback on particular experiences (e.g. caches), also from the experiments if possible.
- Would like a hands-on session.
- Will collect the main topics, e.g. "Dome upgrade for CMS sites, Atlas sites, etc."

Agreement that orienting sessions and advice by experiment would be beneficial.

Sam: agrees on highlights only; the UK is a bit slow with Puppet.
Jiri: show some best practices (e.g. "we don't check logs until something breaks"). Open questions, with a log-analysis sketch after this list:
- How can we compare protocol usage?
- Is our SRM usage only tests? How do we know for sure we can turn it off?
- Round-table suggestions: what are your best-practice tips? How do you analyse log files? ...
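One way to attack both questions is a small script that tallies accesses per protocol from the frontend logs and lists the distinct clients still using SRM. The log format and field names below are assumptions to be adapted to each site's own logs.

```python
# Sketch: tally per-protocol usage and distinct SRM clients from an access log.
# Assumes one access per line with tokens like "proto=srm client=1.2.3.4";
# adapt the regular expressions to the actual log format at your site.
import re
import sys
from collections import Counter

PROTO_RE = re.compile(r"proto=(\w+)")
CLIENT_RE = re.compile(r"client=(\S+)")

def tally(path):
    protocols = Counter()
    srm_clients = set()
    with open(path) as log:
        for line in log:
            m = PROTO_RE.search(line)
            if not m:
                continue
            proto = m.group(1).lower()
            protocols[proto] += 1
            if proto == "srm":
                c = CLIENT_RE.search(line)
                if c:
                    srm_clients.add(c.group(1))
    return protocols, srm_clients

if __name__ == "__main__":
    protocols, srm_clients = tally(sys.argv[1])
    for proto, count in protocols.most_common():
        print(f"{proto}: {count}")
    print("distinct SRM clients:", ", ".join(sorted(srm_clients)) or "none")
```

If the SRM clients that remain turn out to be only test and monitoring hosts, that is a concrete argument for switching SRM off.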
Fabrizio: interested in having a more round-table, interactive style of meeting this time around. He will shortly send out an announcement inviting registration for the workshop, with an expiration date on registration to help planning.

Times:
- Decided: 10am Thursday to 4pm Friday. We can put the core sessions in Thursday afternoon and Friday morning.
- Still not sure about a video link; probably broadcast only. Confirmation by end of March.
- Fee of 20-30 EUR; dinner is not included in the fee.
# Collaboration
## agreement
No comments on the agreement (https://svnweb.cern.ch/trac/lcgdm/wiki/DPMCollaborationAgreement).
## membership and tasks
### Italy
- As before, plus multi-site deployment (e.g. a multi-site cache, where the cache is headless).

### France
- Remove gridftp redirection testing.
- Add multi-site DPM.

### UK
- Admin tools.
- Dome deployment.
- Argus.
- Ansible; will publish recipes.

### Czech Rep.
- DPM workshop.
- Dome upgrade, depending on availability.
# AOB