Do we expect to run parallel transfers authorized with X.509 (using different roles) and with tokens? What would be the correct identity mapping in this case, and does the current SE implementation provide sufficient flexibility?
For DC23 we'll need to use both X.509 and tokens at the same time to be able to use production storages.
Is the current token-only testbed sufficient to cover concurrent X.509 + token usage? ... we should try to cover the use-cases needed by our production infrastructure, not only for DC23 but also for the future transition from X.509 to tokens, which might not happen overnight.
There are plans not to shift DC23 too much, but realistically it may even be postponed until after the winter conferences ... nothing official yet; we should still focus on finding potential implementation issues / missing features ASAP, because of the long design-develop-test-deploy cycle.
Same (sub)namespace for all storages
Capability-based security defined by the storage.*:/path scope.
prefix / base path can be different - configurable per issuer
everything below must follow the same structure
currently not true for all ATLAS RSEs; we need the same structure everywhere
even different spacetokens for data vs. scratch ("analysis area") vs. localgroup (institutional resources) disk
symlinks could help with transition to the new structure
supported in dCache
Echo, EOS, XRootD - can use a plugin for LFN mapping (already in use by CMS)
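To make the capability model above concrete, a minimal sketch (hypothetical function and table names, not a real SE API) of how a storage.* scope combined with a per-issuer base path prefix could be checked against a requested path:

```python
from posixpath import normpath

# Assumed per-issuer configuration: issuer -> base path prefix on this
# storage (the "configurable per issuer" prefix mentioned above).
ISSUER_PREFIX = {
    "https://atlas-auth.web.cern.ch/": "/atlas",
}

def authorized(issuer, scopes, operation, path):
    """Check whether any WLCG JWT storage.* capability scope
    (e.g. "storage.read:/data") covers `operation` on `path`."""
    prefix = ISSUER_PREFIX.get(issuer)
    if prefix is None:
        return False
    for scope in scopes:
        if ":" not in scope:
            continue
        action, scope_path = scope.split(":", 1)
        if action != "storage." + operation:
            continue
        # the capability path is relative to the issuer's base path
        granted = normpath(prefix + scope_path)
        requested = normpath(path)
        if requested == granted or requested.startswith(granted + "/"):
            return True
    return False
```

A token carrying storage.read:/data from that issuer would then authorize reads under /atlas/data; a site with a different base path changes only the prefix table, not the token, which is why everything below the prefix must follow the same structure everywhere.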
File access permissions
Only one identity for clients using storage.*:/ scopes (capability), because we can't map the identity according to different path restrictions in the storage scope. Is it possible to come up with a configuration that allows X.509 and tokens to live in one storage namespace without introducing security/permission issues?
Different storage areas currently use different permissions (data vs. scratch vs. localgroup).
does dCache consider ACLs for capability-based access?
storage.*:/ scope capability doesn't always give the client access
the mapped user identity is validated later by XRootD
this is different behavior compared to e.g. dCache
WLCG JWT-compliant EOS behavior requires precise identity mapping
technically EOS also supports ACLs
not used for LHC experiments
could be useful to solve complicated identity mapping scenarios
fortunately LHC experiments use very simple mapping and can live without ACLs
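To make the mapping problem concrete, a toy sketch (the account names and mapping tables are assumptions, not any experiment's real configuration) of how VOMS-based X.509 mapping differs from capability-token mapping:

```python
# Assumed mapping tables, loosely modeled on a typical setup: X.509 VOMS
# roles map to distinct local accounts with different area permissions,
# while every storage.*:/ capability token from one issuer collapses to
# a single local identity, since the path restriction lives in the scope.
VOMS_MAP = {
    "/atlas/Role=production": "atlasprd",  # production data areas
    "/atlas/Role=NULL": "atlas001",        # generic analysis users
}
TOKEN_MAP = {
    "https://atlas-auth.web.cern.ch/": "atlastok",
}

def map_identity(cred):
    """Return the local account a credential maps to, or None."""
    if cred["type"] == "x509":
        return VOMS_MAP.get(cred["fqan"])
    if cred["type"] == "token":
        # no per-path identities are possible here: one account per issuer
        return TOKEN_MAP.get(cred["issuer"])
    return None
```

In this sketch all token writes are owned by atlastok, so without ACLs the per-area POSIX permissions that separate atlasprd from atlas001 no longer cleanly separate data, scratch, and localgroup areas.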
Provide the developers with details of exactly how we use different X.509 identities, so they can tell if/how storage can be configured for mixed usage with tokens.
As an output of this effort, provide documentation detailing how to configure sites with common SEs for concurrent X.509 and token usage.
Discussed on the project-lcg-authz mailing list: "SE token deployment/development"
Recommended storage configuration
Do the LHC experiments have documentation on how to correctly configure their storage? Please provide links here:
ATLAS - not aware of any documentation with technical details beyond the generic ATLAS Sites Setup and Configuration and the ACL configuration for DDM - one identity for everybody (analysis data), one for production data, and multiple national identities (though storage usually supports just a space for members of one national group)
ALICE - a single user on the storage side (already using the capability security model)