Multimodal Large Language Models (MLLMs) extend LLMs' capabilities to inputs beyond text, most often images. At the European XFEL, these models are used as Retrieval-Augmented Generation (RAG) knowledge assistants in technical and administrative domains. We present a selection of current applications and prototypes: chatbot assistants for data service support, business travel aid, vision-based document...
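As a point of reference for the RAG pattern named above, the sketch below shows a minimal retrieve-then-generate flow. It is illustrative only: the embed(), generate(), and KnowledgeBase names are assumptions, stand-ins for whatever embedding model and chat LLM a deployment actually uses, not the EuXFEL assistants themselves.

```python
# Minimal RAG sketch: embed documents, retrieve the closest ones to a query,
# and prompt an LLM with that context. embed()/generate() are placeholders.
from dataclasses import dataclass
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; swap in a real sentence-embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

def generate(prompt: str) -> str:
    """Placeholder LLM call; swap in the chat model of choice."""
    return f"[answer grounded in retrieved context]\n{prompt[:200]}"

@dataclass
class KnowledgeBase:
    docs: list[str]

    def __post_init__(self) -> None:
        self.vectors = np.stack([embed(d) for d in self.docs])

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        scores = self.vectors @ embed(query)        # cosine similarity (unit vectors)
        top = np.argsort(scores)[::-1][:k]
        return [self.docs[i] for i in top]

def answer(kb: KnowledgeBase, question: str) -> str:
    context = "\n\n".join(kb.retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

if __name__ == "__main__":
    kb = KnowledgeBase(docs=["Beamtime data access is requested via ...",
                             "Travel reimbursement forms are submitted through ..."])
    print(answer(kb, "How do I get access to my experiment data?"))
```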
The specialized terminology and complex concepts inherent in physics present significant challenges for Natural Language Processing (NLP), particularly when relying on general-purpose models. In this talk, I will discuss the development of physics-specific text embedding models designed to overcome these obstacles, beginning with PhysBERT—the first model pre-trained exclusively on a curated...
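To make the embedding-model idea concrete, the following sketch computes sentence embeddings with a BERT-style encoder through the Hugging Face transformers API and compares two physics phrases by cosine similarity. The checkpoint identifier is an assumption for illustration; substitute the published PhysBERT release.

```python
# Sentence embeddings from a physics-domain BERT encoder via mean pooling.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "path/to/physbert"  # assumed identifier; replace with the real checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed(sentences: list[str]) -> torch.Tensor:
    """Mean-pool last hidden states into one unit vector per sentence."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()   # (B, T, 1)
    emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
    return torch.nn.functional.normalize(emb, dim=-1)

vecs = embed(["undulator taper optimization", "emittance growth in the linac"])
print((vecs[0] @ vecs[1]).item())  # cosine similarity between the two phrases
```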
As particle accelerators grow in complexity, traditional control methods face increasing challenges in achieving optimal performance. This paper envisions a paradigm shift: a decentralized multi-agent framework for accelerator control, powered by Large Language Models (LLMs) and distributed among autonomous agents. We propose a self-improving decentralized system where...
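The toy sketch below illustrates one way a decentralized round of such a multi-agent loop could be wired up, with each agent owning a subsystem and an llm_propose() placeholder standing in for the LLM call. All names are assumptions for illustration, not the framework proposed in the paper.

```python
# Toy decentralized control round: each subsystem agent sees the shared observation
# plus peer broadcasts and proposes an action via a placeholder LLM call.
from dataclasses import dataclass, field

def llm_propose(role: str, observation: dict) -> dict:
    """Placeholder for an LLM call mapping an observation to a proposed action."""
    return {"agent": role, "action": "hold current setpoints", "confidence": 0.5}

@dataclass
class SubsystemAgent:
    role: str                      # e.g. "rf", "magnets", "diagnostics"
    log: list = field(default_factory=list)

    def step(self, observation: dict, peer_messages: list[str]) -> dict:
        proposal = llm_propose(self.role, {**observation, "peers": peer_messages})
        self.log.append(proposal)  # kept for later review / self-improvement
        return proposal

def control_cycle(agents: list[SubsystemAgent], observation: dict) -> list[dict]:
    """One round: every agent acts on the observation and its peers' broadcasts."""
    messages = [f"{a.role} online" for a in agents]
    return [a.step(observation, messages) for a in agents]

agents = [SubsystemAgent("rf"), SubsystemAgent("magnets"), SubsystemAgent("diagnostics")]
print(control_cycle(agents, {"beam_current_mA": 4.8}))
```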