Description
For distributed High Throughput Computing (dHTC), the original -- and potentially still most popular -- interface for workflow management is the command line interface (CLI). Researchers have been trained on the CLI for decades, and knowledgeable users can integrate it into larger scripts with little friction. As the ecosystem has grown and matured, new interfaces have appeared, such as Application Programming Interfaces (APIs), targeting automated systems that interact with the dHTC layer; RESTful interfaces, targeting remote interactions over the internet; and web user interfaces, targeting individuals working through a browser.
In 2025, a new approach surfaced: an “AI interface” that allows Large Language Model (LLM)-based agents to invoke and interact with tools as part of an agentic, AI-driven workflow. In this work, we present new interfaces, based on the popular “Model Context Protocol” (MCP), to two common dHTC software packages: the HTCondor Software Suite and the Pelican Platform. These MCP servers provide building blocks: from a user platform such as VS Code, agents can submit jobs, check status, or transfer objects between storage systems. How far can the AI agent close the gap between these “building blocks” and “running science”? Can a user leverage these tools to work with complex cyberinfrastructure with minimal expertise? Can an agent effectively monitor and fix problematic workflows? This work explores not just the immediate functionality but how well the setup performs across sample problems encountered in the dHTC ecosystem.
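To make the “building block” idea concrete, the sketch below models, in plain Python without any MCP SDK, the pattern that MCP formalizes: a server exposes named, described tools that an agent first discovers (the protocol's tools/list operation) and then invokes by name with structured arguments (tools/call). The tool names, signatures, and return payloads here are hypothetical illustrations, not the actual HTCondor or Pelican MCP interfaces.

```python
# Schematic sketch of the MCP tool pattern: register tools, let an
# "agent" discover them, then dispatch calls. Hypothetical names only.
import json

TOOLS = {}

def tool(name, description):
    """Register a function as an agent-invocable tool."""
    def wrap(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return wrap

@tool("submit_job", "Submit a job described by a submit-file string.")
def submit_job(submit_description: str) -> dict:
    # Placeholder: a real server would invoke the batch system here.
    return {"cluster_id": 1234, "status": "submitted"}

@tool("check_status", "Report the state of a previously submitted job.")
def check_status(cluster_id: int) -> dict:
    # Placeholder: a real server would query the scheduler.
    return {"cluster_id": cluster_id, "state": "Running"}

def list_tools() -> list:
    """What the agent sees when it asks the server for its tools."""
    return [{"name": n, "description": t["description"]}
            for n, t in TOOLS.items()]

def call_tool(name: str, arguments: dict) -> str:
    """Dispatch an agent's tool call and return a JSON result."""
    return json.dumps(TOOLS[name]["fn"](**arguments))

# Agent-side interaction: discover tools, then chain two calls.
print([t["name"] for t in list_tools()])
result = json.loads(call_tool("submit_job",
                              {"submit_description": "executable = my_sim"}))
print(call_tool("check_status", {"cluster_id": result["cluster_id"]}))
```

The key design point the abstract raises is visible even in this toy: the server only supplies primitives, and it is the agent that must chain them (submit, then poll, then react) into a working scientific workflow.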