Description
The LHCb experiment operates a full-software trigger comprising two stages, labelled HLT1 and HLT2. The two stages are separated by a disk buffer, which not only allows the HLT2 processing to be asynchronous with respect to the data taking, but also allows real-time alignment and calibration to be performed prior to HLT2 processing. HLT2 then performs full offline-level reconstruction and applies complex physics selections across approximately 3,000 trigger lines. Maintaining and optimizing such a large menu of selections whilst respecting strict throughput and storage constraints is operationally challenging. Critically, the maximum allowed output rate of HLT1 is limited by the disk buffer size and the average HLT2 throughput.
This talk presents two complementary efforts to reduce the processing cost of the HLT2 selection framework. First, the framework now automatically generates control flow from existing data-flow descriptions, eliminating the overhead of authoring it manually. The implementation is deliberately simple yet effective: lines terminate as early as possible, yielding throughput improvements. Second, algorithmic overlap across the selection program was systematically characterized to quantify duplicated processing and identify candidates for future optimization. Both efforts are validated against data, using throughput benchmarks and checksums to verify data integrity. The results demonstrate throughput gains while preserving physics retention. We outline the next steps toward operational deployment of a coherent, maintainable, and cost-effective HLT2 selection framework.
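As an illustrative sketch only (not the actual HLT2 framework code), deriving control flow from a data-flow description can be thought of as walking each line's dependency graph, short-circuiting on the first failing filter, and caching shared algorithm results so they run at most once per event. All names below are hypothetical:

```python
# Hypothetical sketch: derive control flow from a data-flow description.
# Each trigger line declares only its output algorithm; the data-flow graph
# fixes the execution order, and evaluation short-circuits on the first
# failing step, so downstream algorithms of a rejected line never run.

class Algorithm:
    def __init__(self, name, inputs=(), passes=lambda event: True):
        self.name = name
        self.inputs = tuple(inputs)   # data-flow dependencies
        self.passes = passes          # per-event filter decision

def run_line(line_output, event, cache):
    """Depth-first traversal of the data-flow graph. Returns False as soon
    as any required algorithm rejects the event (early termination); the
    per-event cache ensures shared algorithms execute only once."""
    def run(alg):
        if alg.name not in cache:
            cache[alg.name] = (all(run(dep) for dep in alg.inputs)
                               and alg.passes(event))
        return cache[alg.name]
    return run(line_output)

# Two lines sharing a reconstruction step:
reco = Algorithm("track_reco")
f1 = Algorithm("pt_cut", [reco], passes=lambda e: e["pt"] > 2.0)
f2 = Algorithm("mass_cut", [reco], passes=lambda e: 5.2 < e["mass"] < 5.4)
line_a = Algorithm("LineA", [f1])
line_b = Algorithm("LineB", [f2])

event = {"pt": 3.1, "mass": 5.28}
cache = {}
decisions = {l.name: run_line(l, event, cache) for l in (line_a, line_b)}
print(decisions)
```

In this toy model, `track_reco` appears in the cache after the first line runs, so the second line reuses its result rather than recomputing it.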
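One simple way to characterize algorithmic overlap, sketched here under the assumption that each line can be reduced to the set of algorithms it schedules (the line names and algorithm names are invented for illustration), is a pairwise Jaccard index over those sets:

```python
# Hypothetical sketch: quantify shared processing between selection lines
# by comparing the sets of algorithms each line schedules.

from itertools import combinations

lines = {
    "LineA": {"track_reco", "vertex_fit", "pt_cut"},
    "LineB": {"track_reco", "vertex_fit", "mass_cut"},
    "LineC": {"calo_reco", "photon_id"},
}

def jaccard(a, b):
    """Jaccard index |A & B| / |A | B|; 1.0 means identical work."""
    return len(a & b) / len(a | b)

overlap = {
    (x, y): jaccard(lines[x], lines[y])
    for x, y in combinations(sorted(lines), 2)
}
for pair, score in sorted(overlap.items(), key=lambda kv: -kv[1]):
    print(pair, round(score, 2))
```

Pairs with a high score are candidates for merging or for hoisting shared algorithms so they run once, which is the kind of duplicated processing the characterization aims to surface.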