
Multi-VO Support in IHEP’s Distributed Computing Environment

13 Apr 2015, 18:00
15m
B250

Oral presentation · Track 4: Middleware, software development and tools, experiment frameworks, tools for distributed computing · Track 4 Session

Speaker

Dr Tian Yan (Institute of High Energy Physics, Chinese Academy of Sciences)

Description

The distributed computing environment (DCE) for the Beijing Spectrometer III (BESIII) experiment, located at the Institute of High Energy Physics (IHEP), China, has been set up and in production since 2012. The underlying middleware is DIRAC (Distributed Infrastructure with Remote Agent Control) with BES-DIRAC extensions. About 2,000 CPU cores and 400 TB of storage contributed by BESIII collaboration members are integrated to serve MC simulation and reconstruction jobs, as well as data transfers between sites.

Recently, this environment has been extended to support other experiments operated by IHEP, such as the Jiangmen Underground Neutrino Observatory (JUNO), the Circular Electron Positron Collider (CEPC), the Large High Altitude Air Shower Observatory (LHAASO) and the Hard X-ray Modulation Telescope (HXMT). Each experiment's community naturally forms a virtual organization (VO). These VOs have heterogeneous computing and storage resources of varying sizes, located at geographically distributed universities and institutes worldwide, and they wish to integrate those resources and make them available to their users. The established BES-DIRAC DCE was chosen as the platform to serve these requirements, with substantial multi-VO support implemented.

The advantage of providing DIRAC as a service is clear in our case. First, DIRAC servers require dedicated hardware and expert manpower to maintain, which small VOs are often unwilling to afford. Second, many universities and institutes in China are collaboration members of several of the experiments listed above and would like to contribute resources to more than one of them; a single DIRAC installation is the easiest way to manage such shared resources. Therefore, instead of setting up a dedicated DCE for each experiment, we operate a single installation for all of them, making maximal use of shared resources and manpower while remaining flexible and open to new experiments.

The key point of this single installation is the design and development of a multi-VO support scheme based on the BES-DIRAC framework. The VO Management System (VOMS) plays a central role in managing the members and resources of each VO; other services retrieve user proxy information and per-VO resource configurations from the VOMS server. In the data management subsystem, the DIRAC File Catalog (DFC), with its dynamic dataset functionality, is extended to handle files and datasets belonging to users from different VOs, and StoRM storage elements are employed to manage storage contributed by universities that belong to two or more VOs. In the workload management subsystem, the pilots, job matcher and scheduler take into account the VO of each job and the per-VO priorities defined in the resource configurations. In the frontend subsystem, a task-based request submission and management system has been developed: users of each VO can define their own workflows, submit jobs, manage data transfers, and obtain monitoring and accounting information for their requests (see the illustrative sketch below).

The experience of designing multi-VO support in IHEP's distributed computing environment may be of interest and use to other high energy physics centres operating several experiments.
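As an illustrative sketch only (not the production BES-DIRAC code), the following shows how a VO user might submit a job through the standard DIRAC Python API; the executable, arguments, and job names below are hypothetical placeholders, and in the multi-VO setup the job's VO is assumed to be derived from the VOMS group of the submitting user's proxy rather than specified explicitly.

    # Illustrative sketch: job submission via the standard DIRAC Python API.
    # The executable, arguments and job/group names are hypothetical placeholders.
    from DIRAC.Core.Base.Script import parseCommandLine
    parseCommandLine()  # initialise DIRAC and pick up the user's (VO-specific) proxy

    from DIRAC.Interfaces.API.Dirac import Dirac
    from DIRAC.Interfaces.API.Job import Job

    job = Job()
    job.setName("besiii_mc_simulation")                       # hypothetical name
    job.setExecutable("simulate.sh", arguments="jobOptions_sim.txt")
    job.setJobGroup("bes_production")                          # hypothetical group label

    # No VO is set here: the job matcher is assumed to take the VO from the
    # proxy's VOMS group, so the same interface serves BESIII, JUNO, CEPC, etc.
    result = Dirac().submitJob(job)
    print(result)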

Primary author

Dr Tian Yan (Institute of High Energy Physics, Chinese Academy of Sciences)

Co-authors

Mr Bing SUO (IHEP, CAS, China)
Mr Xianghu ZHAO (IHEP, CAS, China)
Dr Xiaomei ZHANG (IHEP, CAS, China)

Presentation materials