A3D3 Undergraduate Summer Research Symposium

US/Pacific
https://ucsd.zoom.us/j/95846737086?pwd=Pwygoui0Ka9cUi0yW7RckjPSduOQll.1 (On Zoom)

Aobo Li, Sonata Simonaitis-boyd (UC San Diego)
    • 13:00 – 13:20
      Kilonova Posteriors for Estimating the Hubble Constant 20m

      Observations of gravitational waves emitted by compact binary mergers, together with their associated kilonovae, show promise for estimating the Hubble Constant. Data gathered from these kilonova events can be fed into different methods to calibrate a precise estimate. Our approach, grounded in Bayes' Theorem, uses Kernel Density Estimation (KDE): a kernel function is centered on each data point in the set, and the kernels are summed to form the density estimate. We then combine the KDEs from individual events using the method outlined in the repository at https://github.com/tsunhopang/KDE_multiply/tree/main and sketched below.
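
      A minimal sketch of the KDE combination step, assuming Gaussian kernels and illustrative H0 posterior samples; the sample counts, distributions, and grid bounds below are placeholders, not the analysis values:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Illustrative posterior samples of H0 from two kilonova events (placeholders).
rng = np.random.default_rng(0)
samples_a = rng.normal(70.0, 5.0, size=2000)  # event A posterior draws
samples_b = rng.normal(68.0, 4.0, size=2000)  # event B posterior draws

# Build a KDE per event: a kernel is centered on every sample and summed.
kde_a = gaussian_kde(samples_a)
kde_b = gaussian_kde(samples_b)

# Combine events by multiplying the KDEs on a common grid and renormalizing,
# in the spirit of the KDE_multiply repository cited above.
grid = np.linspace(50.0, 90.0, 1000)
combined = kde_a(grid) * kde_b(grid)
combined /= np.trapz(combined, grid)  # normalize to a unit-area density

print("Combined H0 peak:", grid[np.argmax(combined)])
```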

      Speaker: Megan Averill
    • 13:20 – 13:40
      FPGA Deployment of PointNET for Use in the KamLAND-Zen Experiment 20m

      While there are several experiments searching for neutrinoless double beta decay $(0\nu\beta\beta)$, a rare decay phenomenon, we focus our attention on the KamLAND-Zen experiment, a monolithic liquid xenon scintillator detector. Because events in the detector are observed only indirectly, each event must be reconstructed, a non-trivial process performed after all the data are collected. This means the experiment commits resources to storing data that are not relevant to the $0\nu\beta\beta$ analysis. To help solve this problem, we propose a toolchain that deploys a machine learning model, PointNET, onto a Field Programmable Gate Array so that fast inference can be performed at the time of data collection. This is the first time that hardware-algorithm co-design has been applied to $0\nu\beta\beta$ experiments.
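
      A minimal sketch of a PointNET-style classifier of the kind that could be deployed this way, assuming PyTorch; the hit features, layer sizes, and class count are illustrative assumptions, not the experiment's configuration:

```python
import torch
import torch.nn as nn

class PointNetLite(nn.Module):
    """Minimal PointNET-style classifier: a shared per-point MLP followed by a
    symmetric max-pool, so the output is invariant to the ordering of hits."""
    def __init__(self, in_features=4, num_classes=2):
        super().__init__()
        # Shared MLP applied independently to each hit, e.g. (x, y, z, charge).
        self.point_mlp = nn.Sequential(
            nn.Linear(in_features, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        self.head = nn.Linear(128, num_classes)  # signal vs. background logits

    def forward(self, points):             # points: (batch, n_hits, in_features)
        features = self.point_mlp(points)  # per-hit feature vectors
        pooled, _ = features.max(dim=1)    # order-invariant aggregation
        return self.head(pooled)

model = PointNetLite()
dummy_events = torch.randn(8, 100, 4)     # 8 toy events with 100 hits each
print(model(dummy_events).shape)          # torch.Size([8, 2])
```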

      Speaker: Alex Migala
    • 13:40 – 14:00
      Improving Data Compression with Conditional Autoencoders 20m

      As a result of the need to improve searches for new particles and measure particle properties at CERN, the LHC is undergoing a high-luminosity upgrade which will provide a dataset ten times larger than the one currently available. To avoid complications in particle reconstruction as a result of the increased number of simultaneous interactions (pileup) per collision, a radiation-hard high-granularity calorimeter (HGCal), which will measure the energy and position of particles with significantly improved precision, will be installed. This level of precision and higher levels of pileup represent a significant increase in complexity and data rates, which must be reduced by several orders of magnitude in real time to be processed. Thus, we aim to explore the application of machine learning to optimize the data compression performed by the HGCal front-end electronics through the development of a conditional autoencoder to compress data automatically before transmission.
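
      A minimal sketch of a conditional autoencoder for this kind of compression, assuming PyTorch; the input size, condition variables, and latent width below are illustrative assumptions, not the HGCal front-end specification:

```python
import torch
import torch.nn as nn

class ConditionalAutoencoder(nn.Module):
    """Toy conditional autoencoder: the encoder compresses sensor readings to a
    small latent code, and a condition vector (e.g. geometry or occupancy
    information) is supplied to both the encoder and the decoder."""
    def __init__(self, n_inputs=48, n_cond=2, n_latent=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_inputs + n_cond, 64), nn.ReLU(),
            nn.Linear(64, n_latent),           # compressed representation
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent + n_cond, 64), nn.ReLU(),
            nn.Linear(64, n_inputs),           # reconstructed readings
        )

    def forward(self, x, cond):
        z = self.encoder(torch.cat([x, cond], dim=-1))
        return self.decoder(torch.cat([z, cond], dim=-1))

model = ConditionalAutoencoder()
x, cond = torch.rand(32, 48), torch.rand(32, 2)
loss = nn.functional.mse_loss(model(x, cond), x)  # reconstruction objective
```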

      Speaker: Mariel Peczak
    • 14:00 – 14:20
      Evaluation of Machine Learning Classifiers for Characterizing Gravitational Wave Events 20m

      The detection of gravitational waves with the Laser Interferometer Gravitational-Wave Observatory (LIGO) has provided the tools to probe the furthest reaches of the universe. A rapid follow-up to compact binary coalescence (CBC) events and their electromagnetic counterparts is crucial for finding short-lived transients. After a gravitational wave (GW) detection, another particular challenge is determining a fast and efficient way of characterizing events as astrophysical or terrestrial in origin. The mergers themselves provide many data products from low-latency CBC search pipelines which can aid in discerning whether or not a GW signal is real. We present an efficient low-latency method of alert classification that feeds Bayes factors into three machine learning classification algorithms: Random Forest (RF), K-Nearest Neighbors (KNN), and Neural Network (NN), using event data from the Mock Data Challenge (MDC). We report true positive rates for the RF, KNN, and NN classifiers of 0.82, 0.84, and 0.89, respectively.
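
      A minimal sketch of the three-classifier comparison, assuming scikit-learn and synthetic stand-in features; real inputs would be Bayes factors and other low-latency pipeline data products from the MDC:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic feature matrix: each row stands in for one candidate event's
# pipeline outputs (e.g. log Bayes factor, SNR); labels 1 = astrophysical.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

classifiers = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=15),
    "NN": MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500,
                        random_state=0),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", clf.score(X_te, y_te))
```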

      Speaker: Seiya Tsukamoto
    • 14:20 – 14:40
      Episodic reinforcement learning for $0\nu\beta\beta$ signal discrimination 20m

      $0\nu\beta\beta$ decay is a Beyond the Standard Model process that, if discovered, would prove the Majorana nature of neutrinos, i.e., that they are their own antiparticles. The Majorana Demonstrator (MJD) is one experiment searching for $0\nu\beta\beta$ decay using semiconductor detectors; however, the waveform data produced by the detectors are unlabelled, and producing ground-truth labels with traditional methods is an involved process. Fortunately, machine learning methods such as reinforcement learning (RL) can perform tasks on unlabelled data. I present an episodic RL algorithm implementing Randomized Return Decomposition for binary classification of detector events from the Majorana Demonstrator Data Release for AI/ML Applications. Under stringent masking of the MJD detector data, the RL-trained classifier slightly outperforms a standard supervised learning model trained under the same conditions, showing potential for further development and even future deployment as a first-stop classifier on other $0\nu\beta\beta$ experiments such as LEGEND.
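
      A minimal sketch of the Randomized Return Decomposition objective, assuming PyTorch: a per-step proxy reward is learned so that a random subsample of steps, rescaled, regresses onto the episodic return. The state features, network sizes, and reward scheme below are illustrative assumptions, not the presented implementation:

```python
import torch
import torch.nn as nn

# Proxy reward model over per-step waveform features (16 is a made-up width).
reward_model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

def rrd_loss(states, episode_return, n_subsample=8):
    """states: (T, 16) step features; episode_return: scalar terminal reward,
    e.g. +1 if the episode's final signal/background call was correct."""
    T = states.shape[0]
    idx = torch.randperm(T)[:n_subsample]        # random subset of steps
    step_rewards = reward_model(states[idx]).sum()
    estimate = step_rewards * (T / n_subsample)  # rescaled return estimate
    return (estimate - episode_return) ** 2      # regress onto episodic return

# One toy update: a 50-step episode with a terminal classification reward.
states, ep_return = torch.randn(50, 16), torch.tensor(1.0)
loss = rrd_loss(states, ep_return)
opt.zero_grad(); loss.backward(); opt.step()
```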

      Speaker: Sonata Simonaitis-boyd (UC San Diego)
    • 14:40 – 15:00
      Neural Architecture Codesign for Physics Applications 20m

      We develop an automated pipeline to streamline neural architecture codesign for physics applications. Our method employs neural architecture search with hardware costs included in the optimization, leading to the discovery of more hardware-efficient neural architectures. The resulting models exceed baseline performance, and we show further speedups through model compression techniques such as quantization-aware training and neural network pruning.
      We synthesize the best models to high-level synthesis code for FPGA deployment with the hls4ml library, as sketched below. Additionally, our hierarchical search space provides greater flexibility in optimization and extends easily to other tasks and domains. We demonstrate this with two case studies: Bragg peak finding in materials science and jet classification in high energy physics.
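
      A minimal sketch of the hls4ml conversion step, assuming a trained Keras model; the file name, FPGA part number, and output directory are placeholders, not the pipeline's actual settings:

```python
import hls4ml
from tensorflow import keras

# Load a trained (e.g. pruned/quantized) Keras model; the path is a placeholder.
model = keras.models.load_model("best_model.h5")

# Derive an hls4ml configuration from the model and convert it to an HLS project.
config = hls4ml.utils.config_from_keras_model(model, granularity="model")
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="hls_project",
    part="xcu250-figd2104-2L-e",  # example FPGA part; an assumption
)
hls_model.compile()               # build the C simulation for quick validation
# hls_model.build(csim=False)     # run full HLS synthesis (requires Vivado/Vitis)
```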

      Speakers: Dmitri Demler, Jason Weitz