In 2019 Belle II will start the planned physics runs with the entire detector installed. Compared to the current collider experiments at the LHC, where all critical services are provided by CERN as the host lab and only storage and CPU resources are provided externally, Belle II and KEK chose a different, more distributed strategy. In particular, this strategy provides easier access to existing expertise and resources at the participating institutions.
Many of the services are hosted outside the host lab: DESY runs the suite of collaborative tools for issue tracking and code management, and Brookhaven National Lab runs the data management and conditions infrastructure. Proper orchestration of these services and sites is critical. Choosing this service distribution model thus both increases the pressure for, and creates opportunities for, adopting community-wide or industry-standard solutions. Better standardization will eventually make it possible to implement fallback instances for the most critical services, which is very hard in the setups chosen by previous experiments.
We will present our experience with setting up and running the computing services of Belle II at BNL, the challenges this system and the cross-lab handshakes pose, and where and why we base the work on widely accepted tools such as RUCIO. In addition, we will give an outlook on where we see future potential for more community-wide solutions for tackling conditions data and other important experiment services.