The Jiangmen Underground Neutrino Observatory (JUNO) is a multipurpose neutrino experiment designed to determine the neutrino mass hierarchy and to precisely measure oscillation parameters. The experimental hall is located under a 286 m high mountain, with the detector at a depth of -480 m. The central detector (CD) consists of twenty thousand tons of liquid scintillator (LS) contained in a spherical vessel with a radius of 17.7 m. The light emitted by the LS is detected by about 17,000 20-inch photomultiplier tubes (PMTs).
For such a large LS detector, cosmic muons reach the inner detector at a rate of about 3 Hz, and the muon-induced background is one of the main backgrounds for the experiment. The most effective way to reject this background is to define a sufficiently large veto volume along the muon trajectory and then to discard, within a time window, all events whose vertices lie inside that volume. Precise reconstruction of the muon track therefore reduces unnecessary vetoes and improves the neutrino detection efficiency. Traditional reconstruction methods based on a theoretical optical model can only exploit the first-hit-time (FHT) signals of the PMTs; moreover, effects such as reflection and refraction of optical photons, the time profile of scintillation light emission, and the time resolution of the PMTs are very difficult to model, so these methods sometimes require additional corrections for the FHT bias.
In this paper, we propose a novel muon reconstruction approach based on convolutional neural networks (CNNs). The main idea is to treat the CD as a 2D image with the PMTs as its pixels, and then to apply object-detection methods from computer vision to predict the parameters of the muon trajectory. This approach can exploit both the charge and the time signals of the PMTs, and it bypasses the thorny task of optically modeling the CD. Preliminary results show that a five-layer CNN model (3 convolutional layers and 2 fully connected layers) trained on 50k Monte Carlo (MC) events achieves slightly better performance than the traditional method: on 10k testing MC events, the mean error of the injection angle is ~0.5 degrees and the mean error of the injection point is ~8 cm. We will further present improvements obtained by increasing the complexity of the CNN models, enlarging the training dataset, and optimizing the PMT arrangement in the 2D image.
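To make the architecture concrete, the following is a minimal PyTorch sketch of a five-layer CNN (3 convolutional + 2 fully connected layers) of the kind described above. The input is a two-channel "detector image" whose pixels are PMTs, one channel for charge and one for first hit time; the image dimensions (124x230), the channel counts, kernel sizes, and the choice of five output track parameters are all illustrative assumptions, not values from the source.

```python
import torch
import torch.nn as nn


class MuonTrackCNN(nn.Module):
    """Sketch of a five-layer CNN (3 conv + 2 FC) for muon track regression.

    Input:  (batch, 2, H, W) tensor; channel 0 = PMT charge, channel 1 = FHT.
    Output: (batch, 5) track parameters (hypothetical parametrisation,
            e.g. two injection angles plus a 3D injection point projection).
    """

    def __init__(self, h: int = 124, w: int = 230) -> None:
        super().__init__()
        # Three convolutional layers with stride-2 downsampling.
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Infer the flattened feature size from a dummy forward pass.
        with torch.no_grad():
            n_flat = self.features(torch.zeros(1, 2, h, w)).numel()
        # Two fully connected layers regressing the track parameters.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_flat, 128), nn.ReLU(),
            nn.Linear(128, 5),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))


model = MuonTrackCNN()
batch = torch.randn(4, 2, 124, 230)  # 4 simulated events
pred = model(batch)
print(pred.shape)  # torch.Size([4, 5])
```

In practice such a model would be trained with a regression loss (e.g. mean squared error) against the MC truth of the track parameters; the PMT-to-pixel mapping for the spherical CD is the arrangement optimization mentioned above.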