2–3 Nov 2019
University of Adelaide, North Terrace campus
Australia/Adelaide timezone

This workshop will consider the evolution of analysis in HEP, anticipating the huge increases in event data that we will gather in upgraded and future experiments. These scaling challenges make declarative models increasingly attractive, as compact and expressive ways to describe analysis work. They allow many interesting backend optimisations to be applied transparently, from the analysis engine up to the level of the computing site, which, in the future, might be a specialised facility tuned for high-throughput analysis work.
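To make the idea concrete, here is a minimal sketch of what a declarative analysis can look like, using ROOT's RDataFrame purely as one illustration (the workshop is not tied to any particular framework, and the file and branch names below are hypothetical): the analyst declares filters, derived quantities and results, and the backend is free to decide how to execute them.

```python
import ROOT

# Let the backend parallelise the event loop transparently.
ROOT.EnableImplicitMT()

# Declare *what* to compute; the analyst writes no explicit event loop.
# "data.root" and the branch names are placeholders for illustration.
df = ROOT.RDataFrame("Events", "data.root")

h = (df.Filter("nMuon == 2", "exactly two muons")
       .Define("m_mumu",
               "InvariantMass(Muon_pt, Muon_eta, Muon_phi, Muon_mass)")
       .Histo1D(("m_mumu", "Dimuon mass;m_{#mu#mu} [GeV];Events",
                 100, 0.0, 200.0), "m_mumu"))

# Evaluation is lazy: the computation graph runs in a single optimised
# pass over the data only when a result is actually requested.
h.Draw()
```

Because the whole analysis is expressed as a computation graph rather than an imperative loop, the engine can fuse operations, multithread the pass over the events, or in principle ship the graph to a remote facility without the user changing their code.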

How will this work in practice, though? If I have an analysis facility, can I still perform interactive analysis? Will funding agencies even allow us to create specialised facilities, and if so, what will the constraints be? Machine learning is likely to grow both in importance and in the scale of its resource needs - how does it fit into all of this? And how can I build bridges between extracting optimal detector performance, doing interactive analysis work, training machine learning algorithms and making the final plots for my publication?

The workshop will try to answer these questions. Such a global optimisation requires input from all of the communities concerned: physics data analysts, software developers and facilities experts will all find topics of mutual interest.

Please note the venue location - it is NOT the same as for CHEP itself.