10-15 March 2019
Steinmatte conference center
Europe/Zurich timezone

Scaling the training of semantic segmentation on MicroBooNE events to multiple GPUs

Not scheduled

Hotel Allalin, Saas Fee, Switzerland https://allalin.ch/conference/
Poster | Track 2: Data Analysis - Algorithms and Tools | Poster Session


Alexander Hagen (PNNL)


Measurements in Liquid Argon TPC (LArTPC) neutrino detectors, such as the MicroBooNE detector at Fermilab, feature large, high-fidelity event images. Deep learning techniques have been extremely successful in classification tasks on photographs, but their application to LArTPC event images is challenging due to the large size of the events. Events in these detectors are typically two orders of magnitude larger than images found in classical challenges, such as handwritten-digit recognition on the MNIST database or object recognition on the ImageNet database. Ideally, training would occur on many instances of the entire event data, rather than on many instances of cropped regions of interest from the event data. However, such efforts lead to extremely long training cycles, which slow down the exploration of new network architectures and the hyperparameter scans needed to improve classification performance.
We present studies of scaling a LArTPC classification problem to multiple architectures, spanning multiple nodes. The studies are carried out on simulated events in the MicroBooNE detector. Institutional computing at Pacific Northwest National Laboratory and the Summit-dev machine at Oak Ridge National Laboratory's Leadership Computing Facility have been used. Additionally, we discuss difficulties encountered (and their solutions) while scaling from single- to multi-GPU training in this context.
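As a rough illustration of the scaling approach the abstract describes (not the authors' actual code), the core idea of synchronous data-parallel multi-GPU training can be sketched in a few lines: each worker computes gradients on its shard of the batch, and the gradients are averaged before the weight update. Frameworks such as Horovod or PyTorch DistributedDataParallel implement the averaging with an all-reduce across GPUs; the toy model, data, and shard count below are hypothetical.

```python
# Sketch of synchronous data-parallel training: per-worker gradients
# on batch shards, averaged to recover the full-batch gradient.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy linear model standing in for the segmentation network.
x = rng.normal(size=(64, 8))   # flattened "event image" batch
y = rng.normal(size=64)
w = rng.normal(size=8)

def grad(w, xb, yb):
    """Gradient of the mean squared error 0.5 * mean((xb @ w - yb)**2)."""
    return xb.T @ (xb @ w - yb) / len(yb)

# Single-GPU reference: gradient on the full batch.
g_full = grad(w, x, y)

# "Multi-GPU": split the batch across 4 simulated workers, then average
# the per-shard gradients (what an all-reduce would do across devices).
shards = zip(np.array_split(x, 4), np.array_split(y, 4))
g_avg = np.mean([grad(w, xs, ys) for xs, ys in shards], axis=0)

# With equal shard sizes the averaged gradient equals the full-batch one,
# which is why synchronous data parallelism preserves training dynamics
# (up to the effective batch size growing with the number of workers).
assert np.allclose(g_full, g_avg)
```

The practical difficulties alluded to in the abstract (communication overhead, learning-rate scaling with effective batch size, per-GPU memory limits for full-event images) sit on top of this simple averaging scheme.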
