30 July 2026 to 5 August 2026
Natal, Brazil
America/Sao_Paulo timezone

Masked Autoencoder Pre-trained Transformers for Pulse-Shape Discrimination in HPGe Detectors

Not scheduled
20m

Via Costeira Sen. Dinarte Medeiros Mariz, 6664-6704 - Ponta Negra, Natal - RN, 59090-002
Talk Artificial Intelligence, Machine Learning and Quantum Computing in HEP

Speaker

Marta Babicz (University of Zurich (CH))

Description

Pulse-shape discrimination (PSD) in point-contact high-purity germanium (HPGe) detectors is a primary handle for background rejection in neutrinoless double-beta decay searches. Standard analyses reduce each waveform to a few engineered scalars (e.g., AvsE, delayed-charge recovery (DCR) and late charge (LQ)), potentially discarding information in the full time series. We benchmark end-to-end transformer models on 3,800-sample charge waveforms from the Majorana Demonstrator AI/ML data release. The model ingests the waveform and a simple current proxy (finite-difference derivative) and is trained with a multi-task objective to reproduce accept/reject labels for low/high AvsE, DCR and LQ, and to regress calibrated energy. Transformers outperform a gradient-boosted decision tree using 12 geometric features, achieving AUROC $>0.92$ for each label, with the largest gains for DCR and LQ. For the combined PSD-pass label (passing all four selections), the fine-tuned transformer reaches AUROC $0.992$ versus $0.958$ for the feature baseline. Masked autoencoder pre-training on unlabeled waveforms improves sample efficiency, reducing labeled-data needs by $\sim 2$-$4\times$ in low-label regimes. These gains make the approach promising for future HPGe programs such as LEGEND, where scalable, transferable PSD is critical; ongoing work targets robustness across detectors, conditions and the $0\nu\beta\beta$ region of interest.
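The two preprocessing ideas in the abstract can be illustrated with a short sketch: the current proxy is just a finite-difference derivative of the charge waveform, and masked-autoencoder pre-training hides random patches of the input that the model must reconstruct. The patch length (100 samples) and mask fraction (0.75) below are illustrative assumptions, not values from the talk; the actual model architecture and training loop are omitted.

```python
import numpy as np

def current_proxy(waveform):
    """Finite-difference derivative of the charge waveform,
    used here as the simple current proxy (same length as input)."""
    return np.diff(waveform, prepend=waveform[:1])

def mask_waveform(waveform, patch_len=100, mask_frac=0.75, seed=None):
    """Zero out a random fraction of fixed-length patches, in the
    style of masked-autoencoder pre-training. Returns the masked
    copy and the boolean mask (True = hidden from the encoder)."""
    rng = np.random.default_rng(seed)
    n_patches = len(waveform) // patch_len
    n_masked = int(round(mask_frac * n_patches))
    masked_idx = rng.choice(n_patches, size=n_masked, replace=False)
    mask = np.zeros(len(waveform), dtype=bool)
    for i in masked_idx:
        mask[i * patch_len:(i + 1) * patch_len] = True
    out = waveform.copy()
    out[mask] = 0.0
    return out, mask

# Toy 3,800-sample step-like charge pulse (rising edge at sample 1800)
wf = np.clip(np.arange(3800) - 1800, 0, None).astype(float)
cur = current_proxy(wf)               # peaks on the rising edge
masked, mask = mask_waveform(wf, seed=0)
```

The reconstruction loss in MAE pre-training would then be computed only on the masked samples, so the unmasked patches serve as context for the encoder.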


Author

Marta Babicz (University of Zurich (CH))

Co-author

Dr Saul Alonso Monsalve (ETH Zurich)
