- What configurations should be provided to the analyzers to run their interactive analysis? Ideally this should be a restricted set, so as not to complicate the user experience.
- What policies should be implemented to schedule the interactive analysis jobs?
- It is important to guarantee a stable software environment on both the client and the cluster side; CVMFS could make this possible. Allowing custom user containers could make this harder (e.g. conflicts between client and worker nodes).
- How can we guarantee a good interactive experience for the user? What are the expected times for resource allocation and application execution? How can we ensure good resource utilization? Dask should help with these points, since resource allocation can be progressive and resources are freed immediately after the application finishes (no idle time).
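The "progressive allocation, no idle time" point above can be pictured with a small schematic sketch. This is not real Dask code (the actual mechanism would be something like Dask's adaptive scaling on an HTCondor cluster); the class and method names here are hypothetical stand-ins that just model the behaviour: worker slots grow with the pending workload and drop to zero as soon as the application finishes.

```python
# Schematic illustration (hypothetical names, not the Dask API) of adaptive
# resource allocation: workers are granted progressively while tasks are
# pending and released as soon as the queue drains, so nothing sits idle.

class AdaptiveCluster:
    """Toy model of a cluster that scales between 0 and max_workers."""

    def __init__(self, max_workers):
        self.max_workers = max_workers
        self.workers = 0  # no resources held while idle

    def scale_for(self, pending_tasks):
        # Progressive allocation: one worker per pending task, capped at the
        # cluster limit; everything is released once the work is done.
        self.workers = min(pending_tasks, self.max_workers)
        return self.workers

cluster = AdaptiveCluster(max_workers=4)
print(cluster.scale_for(10))  # burst of work: scale up to the cap -> 4
print(cluster.scale_for(2))   # queue draining: shrink to what is needed -> 2
print(cluster.scale_for(0))   # application finished: all slots freed -> 0
```

In the real setup this policy would live in the Dask scheduler talking to HTCondor, which is exactly the scheduling question raised above.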
- How are users expected to transition from an interactive scenario to a Grid / batch one when they need their analysis to run on a bigger dataset? An analysis written in RDataFrame should stay the same regardless of the infrastructure it runs on, so RDataFrame could potentially hide those details. The SWAN interface could also help with this transition.
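The "analysis stays the same" idea can be sketched as follows. The class and constructor names below are hypothetical stand-ins, not the real ROOT API: the point is only that the analysis body is written once against the dataframe interface, so moving from an interactive run to a distributed one changes only how the dataframe is constructed.

```python
# Sketch (hypothetical names, not ROOT) of infrastructure-independent analysis
# code: the analysis only sees the dataframe interface, so swapping the
# constructor moves it between local and distributed execution unchanged.

class LocalDataFrame:
    """Stand-in for a locally-executed dataframe."""

    def __init__(self, data):
        self.data = list(data)

    def Filter(self, predicate):
        return LocalDataFrame(x for x in self.data if predicate(x))

    def Count(self):
        return len(self.data)

def analysis(df):
    # The analysis is expressed once, against the interface only.
    return df.Filter(lambda x: x > 10).Count()

# Interactive / local run:
print(analysis(LocalDataFrame([5, 12, 42])))  # -> 2

# A Grid / batch run would change only the construction step, e.g.
#   df = DistributedDataFrame(dataset, client=cluster_client)  # hypothetical
#   result = analysis(df)  # analysis code unchanged
```

SWAN could then expose that construction step in its interface, so the user never edits the analysis itself when scaling up.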
- It would be useful to collect user stories: how are people going to use this service? What would be their common workflows?
- EOS should be accessible on the HTCondor cluster side; currently an extra step is needed to place the output data of a job on EOS. This is one of the technical details to settle during the integration of SWAN and HTCondor.
1. SWAN–HTCondor integration: the respective teams will meet again to settle the technical details of this integration.
2. Involvement of the RDataFrame team and the AF WG; performance tests with real analyses. Further discussion on scheduling, caching, and user workflows.
3. Improve the user experience in the SWAN interface (allocation of HTCondor resources, Dask monitoring).