Peter Chochula
(CERN)
06/06/2011, 11:00
We describe the architecture and implementation of the ALICE DCS database service. The whole data flow, from the devices to the ORACLE database, as well as the interface to online and offline data consumers, is briefly reviewed.
Operational experience with the present configuration, together with future plans and requirements, is summarized in this talk.
Mr
Sylvain Chapeland
(CERN)
06/06/2011, 11:30
MySQL has been used to store and access structured information for ALICE data acquisition since 2004. It supports the implementation of 9 distinct data repositories (configurations, logs, etc.) for the online subsystems at the experimental area, each with different I/O patterns and requirements.
We will review the architecture, performance, features, and future...
Tim Bell
(CERN)
06/06/2011, 12:00
Requirements
CERN is deploying a new content management approach based on Drupal (http://drupal.org) for the main www.cern.ch site, departments and experiments. This talk will review the requirements and options for the database part of the deployment to create an infrastructure capable of supporting millions of hits per day.
Mr
Chris Roderick
(CERN), Ms
Zory Zaharieva
(CERN)
06/06/2011, 14:00
Implementations
The control and operation of the CERN accelerator complex is fully based on data-driven applications. The data foundation models this complex reality; it is necessary for the configuration of the accelerator control systems and is used in an online, dynamic way to drive the particle beams and the surrounding installations. Integrity of the data and performance of the data-interacting applications...
Mr
Ronny Billen
(CERN)
06/06/2011, 14:30
Requirements
For more than two decades, relational database designs and implementations have been satisfying data-management needs in the CERN Accelerator Sector. The requirements have always covered a wide range of functional domains, from complex control-system configuration data to the tracking of high-volume data acquisition. The requirements to store large data sets have increased by several orders of...
Christophe Delamare
(CERN),
Derek Mathieson
(CERN)
06/06/2011, 15:00
Requirements
We present the range of Administrative and Engineering applications together with expectations for future developments, growth and requirements.
Mr
Tony Wildish
(Princeton)
06/06/2011, 17:00
Requirements
We describe the current use of Oracle by CMS offline dataflow and workflow components (T0, PhEDEx, DBS).
We consider how this database use is expected to evolve over the next few years, in terms of data volume, data structure, and application use.
Marco Clemencic
(CERN PH-LBC)
07/06/2011, 09:00
Several database applications are used by the LHCb collaboration to help organize day-to-day tasks and to assist data taking, processing, and analysis.
I will present a brief overview of the technologies used and the requirements for the long term support of both the current database applications and the possible future ones.
Dr
Stefan Schlenker
(CERN)
07/06/2011, 11:00
The ATLAS detector control system (DCS) archives detector conditions data in a dedicated Oracle database using a proprietary schema (the PVSS Oracle archive), and represents one of the main users of the ATLAS online database service. The contribution will give an overview of the database usage and operational experience, e.g. with respect to data volume, insert rates, and pending issues....
Dr
Maxim Potekhin
(Brookhaven National Laboratory (BNL))
07/06/2011, 11:30
Implementations
For the past few years, the Panda Workload Management System has been the mainstay of computing power for the ATLAS experiment at the LHC. Since the start of data taking, Panda usage gradually ramped up to 840,000 jobs processed daily in the fall of 2010, and has remained at consistently high levels ever since. Given the upward trend in workload and associated monitoring data volume, the Panda team is...
Dr
Vincent Garonne
(Conseil Europeen Recherche Nucl. (CERN))
07/06/2011, 12:00
The Distributed Data Management system DQ2 is responsible for the global management of petabytes of ATLAS physics data. DQ2 has a critical dependency on Relational Database Management Systems (RDBMS), such as Oracle, as RDBMS are well suited to enforcing data integrity in online transaction processing applications. Despite these advantages, concerns have recently been raised about the scalability of...
Simon Metson
(H.H. Wills Physics Laboratory)
07/06/2011, 14:00
We discuss potential future requirements for "NoSQL" data stores managed or provided by CERN/IT, and provide some high-level observations based on our experience with these technologies.
Valentin Kuznetsov
(Cornell)
07/06/2011, 14:30
The CMS Offline project has been developing against "NoSQL" data stores since 2009 and has experience with three projects in particular: CouchDB, Kyoto Cabinet, and MongoDB. We present how these tools are used in our software, why they were chosen, and the lessons we've learnt along the way.
Dr
Dave Dykstra
(Fermilab)
07/06/2011, 16:30
Technologies
Frontier has been successfully distributing high-volume, high-throughput, long-distance data for CMS for many years, and more recently for ATLAS, greatly reducing the demands on the WLCG database servers. This talk will briefly describe the present status and cover the changes expected in the future. No major changes are foreseen, but improvements in robustness and security,...
Daniel Wang
(SLAC National Accelerator Laboratory)
07/06/2011, 17:00
The LSST catalog of celestial objects will need to answer both simple and complex queries over many billions of rows. Since no existing open-source database efficiently supports its requirements, we are developing Qserv, a prototype database-style system, to handle such volumes. Qserv uses Xrootd as a framework for data-addressed communication to a cluster of machines with...