Speaker
Description
Beam imaging presents significant challenges due to the necessity of positioning imaging devices near the beam pipe, an area subjected to high levels of radiation that can damage cameras and their peripheral electronics, reducing their lifespan and reliability. With the global discontinuation of the radiation-hardened tube cameras previously used for this purpose, a robust and durable replacement imaging solution is needed. Multimode optical fibers have emerged as viable alternatives, capable of relaying the image signal to a standard CMOS camera located in a radiation-safe environment. A key challenge of this approach is mode coupling and scattering within the fiber, which distort the transmitted image and make it difficult to accurately reconstruct beam information.
This contribution showcases a method for reconstructing transverse beam distribution parameters from a distorted fiber output. The training data are produced with an experimental setup: a synthetic input dataset generated from multiple high-variance 2D Gaussian fields is propagated through multimode optical fibers, which distort the images, and a 2D convolutional autoencoder is trained to reconstruct the original inputs from the distorted outputs. The synthetic input dataset is chosen to support generalizability. Our machine learning model is tested on a real dataset of transverse beam distributions collected at CERN's CLEAR facility, achieving an average RMSE of 2.44% over four key transverse beam parameters after reconstruction on the test set. Our model further demonstrates mitigated bias in beam parameter estimation and strong generalization capability, accurately reconstructing radically different parameter distributions.
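To make the pipeline concrete, the sketch below generates one synthetic 2D Gaussian beam image and recovers transverse parameters from it via image moments. This is an illustrative assumption, not the authors' code: the abstract does not name the four parameters, so the sketch assumes they are the horizontal/vertical centroids and RMS widths, and it computes them by direct moment analysis in place of the trained autoencoder.

```python
import numpy as np

def gaussian_image(mu_x, mu_y, sig_x, sig_y, size=64):
    """Render an idealized 2D Gaussian beam profile on a size x size grid."""
    y, x = np.mgrid[0:size, 0:size]
    return np.exp(-0.5 * (((x - mu_x) / sig_x) ** 2
                          + ((y - mu_y) / sig_y) ** 2))

def beam_moments(img):
    """Estimate centroid and RMS width (assumed beam parameters)
    from normalized image moments."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    w = img / img.sum()
    mu_x, mu_y = (w * x).sum(), (w * y).sum()
    sig_x = np.sqrt((w * (x - mu_x) ** 2).sum())
    sig_y = np.sqrt((w * (y - mu_y) ** 2).sum())
    return np.array([mu_x, mu_y, sig_x, sig_y])

# Compare true vs. recovered parameters for one undistorted sample;
# in the actual work the image would first pass through the fiber and
# the autoencoder before this analysis.
true_params = np.array([30.0, 34.0, 6.0, 4.0])
img = gaussian_image(*true_params)
est = beam_moments(img)
rmse_pct = 100 * np.sqrt(np.mean(((est - true_params) / true_params) ** 2))
```

In the paper's setting, the same moment-style parameter extraction would be applied to the autoencoder's reconstruction and compared against the ground-truth input to yield the reported percentage RMSE.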