Speaker
Thomas Kuhr
(KIT)
Description
The Belle II experiment, a next-generation B factory experiment at KEK, is expected to record a data volume two orders of magnitude larger than its predecessor, the Belle experiment. The data size and rate are comparable to or greater than those of the LHC experiments and require a change of the computing model from the Belle approach, in which essentially all computing resources were provided by KEK, to a more distributed scheme. We exploit existing grid technologies, such as DIRAC for job management and AMGA for the metadata catalog, to build a distributed computing infrastructure for the Belle II experiment. The system provides an abstraction layer for collections of jobs, called a project, and for collections of files, called a dataset. This year we demonstrated for the first time the viability of our system by generating, simulating, and reconstructing 60 million events on several grid sites. The results of this Monte Carlo production campaign and the further plans for the distributed computing system of the Belle II experiment are presented in this talk.
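As an illustration of the job-management layer mentioned above, the sketch below shows a minimal job submission through the standard DIRAC Python API. It is only a sketch under assumptions: the job name, the basf2 steering-file argument, and the sandbox contents are illustrative placeholders, not the Belle II production configuration, and the project/dataset abstraction described in the abstract would wrap many such submissions.

    # Initialize the DIRAC client environment before importing the API classes.
    from DIRAC.Core.Base import Script
    Script.parseCommandLine()

    from DIRAC.Interfaces.API.Job import Job
    from DIRAC.Interfaces.API.Dirac import Dirac

    # One Monte-Carlo production job; a "project" would group many of these.
    job = Job()
    job.setName("belle2-mcprod-example")                  # hypothetical job name
    job.setExecutable("basf2", arguments="steering.py")   # hypothetical steering file
    job.setInputSandbox(["steering.py"])
    job.setOutputSandbox(["*.log"])
    job.setCPUTime(86400)

    # Submit through the DIRAC workload management system.
    dirac = Dirac()
    result = dirac.submitJob(job)
    print(result)  # dictionary with 'OK' flag and the job ID on success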
Primary authors
Hideki Miyake
(KEK)
Martin Sevior
(University of Melbourne (AU))
Takanori Hara
(KEK)
Thomas Kuhr
(KIT)