Critical to ITER reaching its scientific goal (Q≥10) is a data system that supports the broad range of diagnostics, data analysis, and computational simulations required for this scientific mission. Such a data system, termed ITERDB in this document, will be the centralized data access point and data archival mechanism for all of ITER's scientific data. ITERDB will provide a unified interface for accessing all types of ITER scientific data regardless of the consumer (e.g., scientist, engineer, plant operations), including interfaces for data management, archiving system administration, and health monitoring.
Due to the INB (nuclear facility) nature of ITER, the system comprises two parts: one located in the POZ (Plant Operation Zone) to collect experimental data, and another located in XPOZ (outside the Plant Operation Zone) to support offline analysis execution and storage. In this paper, we focus on the ITERDB-POZ part; the XPOZ part is still under design.
ITER is an international project involving seven DAs (Domestic Agencies). This in-kind procurement model makes integration quite challenging. To smooth integration, we developed the CODAC Core System, a platform based on RHEL and EPICS that simulates the functional CODAC behaviour. Since its first version (2010), it has been extended with new features and new APIs. ITER consists of roughly 200 systems (millions of variables in total). In this paper, we focus on the Data Acquisition Network (DAN). Many systems will stream data over DAN at rates ranging from a few hundred kB/s to 50 GB/s. We describe in this document the various components involved in the data acquisition and data storage chain.
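The core pattern behind such an acquisition-to-storage chain is the decoupling of fast data producers from a slower archiving back-end through a bounded buffer, which applies back-pressure when the writer cannot keep up. The following is an illustrative Python sketch of that pattern only; it is not the actual DAN API, and all class and method names (`StreamArchiver`, `push`, `close`) are hypothetical:

```python
import queue
import threading


class StreamArchiver:
    """Illustrative sketch of a decoupled acquisition/storage chain:
    producers push fixed-size data blocks into a bounded buffer and a
    background writer thread drains them to an archive (here, a list)."""

    def __init__(self, buffer_blocks=64):
        self._buffer = queue.Queue(maxsize=buffer_blocks)
        self.archive = []          # stand-in for the persistent store
        self._done = threading.Event()
        self._writer = threading.Thread(target=self._drain)
        self._writer.start()

    def push(self, block: bytes):
        # Blocks when the buffer is full, applying back-pressure
        # to the data source instead of dropping samples.
        self._buffer.put(block)

    def _drain(self):
        # Drain until close() is called and the buffer is empty.
        while not (self._done.is_set() and self._buffer.empty()):
            try:
                block = self._buffer.get(timeout=0.1)
            except queue.Empty:
                continue
            self.archive.append(block)

    def close(self):
        self._done.set()
        self._writer.join()


archiver = StreamArchiver()
for i in range(10):
    archiver.push(bytes([i]) * 1024)   # ten 1 kB blocks
archiver.close()
print(len(archiver.archive))           # → 10
```

In the real system the buffer and writer must of course be sized for sustained multi-GB/s rates (memory-mapped ring buffers, parallel writers), but the back-pressure structure is the same.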