Description
Future neutrino experiments such as DUNE are big-data experiments that will acquire petabytes of data per year; processing data at this scale is itself a significant challenge. In recent years, the use of deep learning in the reconstruction and analysis of data from LArTPC-based experiments has grown substantially. This places even greater strain on the computing resources of these experiments, because the CPU-based systems used for offline processing are not well suited to deep-learning inference. To address this problem, we adopt an "as a Service" model in which inference is provided as a web service. We demonstrate the feasibility of this approach by testing it on the full reconstruction chain of ProtoDUNE with fully simulated data, with the GPU-based inference server hosted on the Google Cloud Platform. We present encouraging results from these tests, including detailed studies of scaling behavior. Based on these results, the "as a Service" approach shows great promise as a solution to the growing deep-learning inference needs of future neutrino experiments.
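
To make the "as a Service" pattern concrete, the sketch below shows how a reconstruction job might offload a single inference request to a remote GPU server over HTTP. It assumes an NVIDIA Triton-style inference server; the server URL, model name, tensor names, and shapes are illustrative assumptions, not the configuration used in the ProtoDUNE tests.

```python
# Minimal sketch of client-side "inference as a Service", assuming an
# NVIDIA Triton-style server. The URL, model name, tensor names, and
# shapes below are hypothetical placeholders.
import numpy as np
import tritonclient.http as httpclient

# Connect to the remote GPU inference server (e.g. hosted on a cloud VM).
client = httpclient.InferenceServerClient(url="inference-server.example.com:8000")

# Prepare a batch of inputs (shape and dtype are illustrative).
batch = np.random.rand(16, 48, 48, 3).astype(np.float32)
inputs = [httpclient.InferInput("input_tensor", list(batch.shape), "FP32")]
inputs[0].set_data_from_numpy(batch)
outputs = [httpclient.InferRequestedOutput("scores")]

# Send the request: the deep-learning inference runs on the server's GPU,
# while the CPU-based reconstruction job only pays for the network call.
response = client.infer(model_name="example_model", inputs=inputs, outputs=outputs)
scores = response.as_numpy("scores")
print(scores.shape)
```

In this pattern, many CPU-only grid jobs can share a small pool of remote GPUs, which is the behavior probed by the scaling studies mentioned above.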