Description
Large Language Models (LLMs) are increasingly used in particle physics as coding agents, but their role is expanding from software assistance to building the scientific analysis workflow itself. This talk examines how LLMs can function as connective elements across the stages of a modern high-energy physics analysis, from dataset discovery and metadata retrieval to analysis specification, plotting, and possibly workflow orchestration. We survey emerging applications beyond code generation, briefly highlighting operational and experiment-support use cases, and then focus on analysis as a structured, multi-step process with distinct opportunities for automation. Using open-source agent interfaces and composable Model Context Protocol (MCP) tools, we discuss how reusable “skills” can expose experiment services in a controlled and deterministic way, enabling LLMs to generate validated plotting and analysis workflows rather than ad hoc snippets. We then examine open questions around termination criteria, validation, and the boundary between framework and agent, and outline the requirements for a field-wide shared ecosystem (common interfaces, skill libraries, and evaluation practices) that supports rigorous, reproducible deployment of LLMs for physics in the HL-LHC era.
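To make the “skill” idea concrete, the sketch below shows one way a skill registry might expose an experiment service to an agent as a validated, deterministic tool. All names here (`Skill`, `find_datasets`, the stubbed dataset lookup) are hypothetical illustrations, not real experiment or MCP SDK APIs; a production version would use an actual MCP server and real metadata services.

```python
# Hypothetical sketch of a "skill" registry: each skill wraps an
# experiment service behind a declared parameter schema, so an LLM
# agent can only issue well-formed, reproducible calls.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Skill:
    name: str
    description: str          # surfaced to the agent as tool metadata
    params: dict              # expected parameter name -> Python type
    run: Callable[..., dict]  # deterministic implementation

REGISTRY: dict[str, Skill] = {}

def register(skill: Skill) -> None:
    REGISTRY[skill.name] = skill

def call(name: str, **kwargs) -> dict:
    """Validate arguments against the skill's schema before running."""
    skill = REGISTRY[name]
    for key, typ in skill.params.items():
        if key not in kwargs or not isinstance(kwargs[key], typ):
            raise ValueError(f"{name}: bad or missing argument {key!r}")
    return skill.run(**kwargs)

# Illustrative skill: dataset discovery by pattern (stubbed lookup).
register(Skill(
    name="find_datasets",
    description="Return dataset names matching a pattern.",
    params={"pattern": str},
    run=lambda pattern: {"datasets": [f"mc_{pattern}_v1"]},
))
```

An agent-issued call such as `call("find_datasets", pattern="ttbar")` then either returns a structured result or fails validation before any service is touched, which is the controlled behavior the abstract refers to.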