This workshop will consider the evolution of analysis in HEP, anticipating the large increases in event data volumes that upgraded and future experiments will gather. These scaling challenges make declarative models increasingly attractive, as compact and expressive ways to describe analysis work. Declarative descriptions allow many backend optimisations to be applied transparently, from the analysis engine down to the computing site, which, in the future, might be a specialised facility tuned for high-throughput analysis. In this respect, we will discuss emerging storage technologies and their potential role (object stores, content delivery networks, …) and new approaches for structuring and processing data (RDataFrame, Spark, …).
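To illustrate the idea, here is a minimal, self-contained toy sketch (deliberately *not* the real RDataFrame or Spark API) of the declarative style: the user only *describes* the work as a chain of operations, nothing executes until a result is requested, and the backend is then free to optimise, for example by fusing all filters into a single pass over the events.

```python
# Toy sketch of a declarative analysis chain (hypothetical API, for
# illustration only). Operations are recorded lazily; execution is
# deferred until a result such as Count() is requested, at which point
# the "backend" can apply all recorded filters in one pass.
class Frame:
    def __init__(self, events, filters=None):
        self.events = events
        self.filters = filters or []

    def Filter(self, pred):
        # Nothing runs here: we just extend the description of the work.
        return Frame(self.events, self.filters + [pred])

    def Count(self):
        # Trigger point: a single pass applies all recorded filters.
        return sum(1 for e in self.events
                   if all(f(e) for f in self.filters))

events = [{"pt": 10.0}, {"pt": 35.0}, {"pt": 50.0}]
n = (Frame(events)
     .Filter(lambda e: e["pt"] > 20.0)
     .Filter(lambda e: e["pt"] < 40.0)
     .Count())
print(n)  # 1
```

Because the full chain is known before anything runs, the same description could be executed single-threaded on a laptop or fanned out across a high-throughput analysis facility without any change to the user's code.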
The workshop will bring these topics together, joining communities that have traditionally been separate. Physics analysts, software developers and facilities experts will all find topics of mutual interest.