Speaker
Description
The design of advanced physics instruments is a complex and resource-intensive task: it requires optimizing numerous parameters (such as the sizes and shapes of various elements) to achieve high performance while meeting stringent cost, material, and spatial constraints. Our new work extends the approach presented in arXiv:2412.10237, which leverages Reinforcement Learning (RL) for instrument design, by additionally incorporating a varying budget. In this new method, the RL agent does not learn a single detector strategy; instead, it learns several models, each conditioned on a different resource budget. Demonstrated on calorimeter design, this conditioning enables the agent to tailor sensor placement, layer thickness, and the overall detector architecture to the available resources in a non-linear fashion. As a result, the algorithm outputs multiple optimized designs, one per budget, allowing decision-makers to assess trade-offs between performance (for example, energy resolution and efficiency) and cost or material consumption. We argue that this approach will prove highly beneficial in high-energy physics, where budget constraints evolve during experiment design in response to achievable performance and policy changes.
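The budget conditioning can be pictured with a minimal sketch (not the authors' implementation; the scalar budget input, network sizes, and all names are illustrative assumptions): a policy whose action distribution takes the resource budget as an extra input, so distinct budget values yield distinct design proposals.

```python
# Illustrative sketch only: a budget-conditioned design policy.
# Shapes, layer sizes, and the meaning of "state"/"action" are assumptions.
import torch
import torch.nn as nn


class BudgetConditionedPolicy(nn.Module):
    """Maps (design state, resource budget) to a distribution over design parameters."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(state_dim + 1, hidden),  # +1 input for the scalar budget
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, action_dim)        # e.g. layer thicknesses, sensor spacings
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, state: torch.Tensor, budget: torch.Tensor):
        # Concatenate the budget to the design state so the policy output
        # depends explicitly on the available resources.
        x = torch.cat([state, budget.unsqueeze(-1)], dim=-1)
        h = self.backbone(x)
        return torch.distributions.Normal(self.mean(h), self.log_std.exp())


# Usage: sample candidate calorimeter parameters for two different budgets.
policy = BudgetConditionedPolicy(state_dim=8, action_dim=4)
state = torch.zeros(2, 8)                  # placeholder design state
budgets = torch.tensor([0.5, 1.0])         # normalised cost budgets
actions = policy(state, budgets).sample()  # one design proposal per budget
```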
Would you like to be considered for an oral presentation? Yes