Big Data Beyond Moore's Law: coming to terms with Moore-less cores, ever-increasing data volumes, and managing more resources without using more people. That is the theme of this CHEP conference, which features dedicated tracks on data acquisition, trigger and controls; event processing, simulation and analysis; distributed processing and data handling; data stores, databases and storage systems; software engineering, parallelism & multi-core programming; and facilities, production infrastructures, networking and collaborative tools.
The scientific program of CHEP 2013 will consist of plenary sessions with invited oral presentations, a number of parallel sessions comprising oral and poster presentations, and an industrial exhibition. The plenary sessions will occupy the five mornings of the conference, and the parallel sessions will be held on four afternoons. Contributions are solicited in the form of abstracts, and the Program Committee, with the help of the International Advisory Committee, will use these to finalize the program.
-
Data Acquisition, Trigger and Controls (T1)
Topics for this track include: event building and farm networks; compute farms for high-level triggering; configuration and run control; describing and managing configuration data and conditions databases; online software frameworks and tools; online calibration procedures; remote access to and control of data acquisition systems and experimental facilities.
-
Event Processing, Simulation and Analysis (T2)
Topics for this track include: event generation, simulation and reconstruction; detector geometries; physics analysis; tools and techniques for data classification and parameter fitting; event visualization and data presentation; frameworks for event processing; toolkits for simulation, reconstruction and analysis; event data models.
-
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization (T3A)
Topics for this track include: grid computing; virtualization; infrastructure as a service; clouds; distributed data processing; data management; distributed analysis; distributed processing experience, including experience with grids and clouds; experience with production and data challenges; experience with analysis using distributed resources; interactive analysis using distributed resources; solutions for coping with a heterogeneous environment; mobile computing; monitoring of user jobs and data; grid and cloud software and monitoring tools; global usage and management of resources; middleware reliability, interoperability and security; experiment-specific middleware applications.
-
Data Stores, Databases, and Storage Systems (T4)
Topics for this track include: storage management; local I/O and data access; mass storage systems; object dictionaries; event stores; metadata and supporting infrastructure; databases; access patterns and caching strategies; data preservation; data curation and long-term data reproducibility.
-
Software Engineering, Parallelism & Multi-Core (T5)
Topics for this track include: CPU/GPU architectures; tightly-coupled systems; GPGPU; concurrency; vectorization and parallelization; mathematical libraries; foundation and utility libraries; programming techniques and tools; software testing and quality assurance; configuration management; software build, release and distribution tools; documentation.
-
Facilities, Production Infrastructures, Networking and Collaborative Tools (T6)
Topics for this track include: basic hardware, benchmarks and experience; fabric virtualization; fabric management and administration; local and wide-area networking; private networks; collaborative systems: progress in technologies and applications; tele-presence and teleconferencing systems; experience in the use of teleconferencing tools.
-
Distributed Processing and Data Handling B: Experiment Data Processing, Data Handling and Computing Models (T3B)
Topics for this track include: grid computing; virtualization; infrastructure as a service; clouds; distributed data processing; data management; distributed analysis; distributed processing experience, including experience with grids and clouds; experience with production and data challenges; experience with analysis using distributed resources; interactive analysis using distributed resources; solutions for coping with a heterogeneous environment; mobile computing; monitoring of user jobs and data; grid and cloud software and monitoring tools; global usage and management of resources; middleware reliability, interoperability and security; experiment-specific middleware applications.