6–7 Jun 2011
CERN
Europe/Zurich timezone

Contribution List

  1. 06/06/2011, 10:50
  2. Peter Chochula (CERN)
    06/06/2011, 11:00
    We describe the architecture and implementation of the ALICE DCS database service. The whole dataflow from devices to the ORACLE database, as well as the interface to online and offline data consumers, is briefly reviewed. The operational experience with the present configuration as well as future plans and requirements are summarized in this talk.
  3. Mr Sylvain Chapeland (CERN)
    06/06/2011, 11:30
    MySQL has been in use to store and access structured information for ALICE data acquisition since 2004. It copes with the implementation of 9 distinct data repositories (configurations, logs, etc.) for the online subsystems at the experimental area, all of them having different I/O patterns and requirements. We will review the architecture, performance, features, and future...
  4. Tim Bell (CERN)
    06/06/2011, 12:00
    Requirements
    CERN is deploying a new content management approach based on Drupal (http://drupal.org) for the main www.cern.ch site, departments and experiments. This talk will review the requirements and options for the database part of the deployment to create an infrastructure capable of supporting millions of hits per day.
  5. Mr Chris Roderick (CERN), Ms Zory Zaharieva (CERN)
    06/06/2011, 14:00
    Implementations
    The control and operation of the CERN accelerator complex is fully based on data-driven applications. The data foundation models this complex reality, is necessary for the configuration of the accelerator control systems, and is used in an online and dynamic way to drive the particle beams and surrounding installations. The integrity of the data and the performance of the data-interacting applications...
  6. Mr Ronny Billen (CERN)
    06/06/2011, 14:30
    Requirements
    For more than two decades, relational database design and implementations have been satisfying data management needs in the CERN Accelerator Sector. The requirements have always covered a wide range of functional domains, from complex controls-system configuration data to the tracking of high-volume data acquisitions. The requirements to store large data sets have increased by several orders of...
  7. Christophe Delamare (CERN), Derek Mathieson (CERN)
    06/06/2011, 15:00
    Requirements
    We present the range of Administrative and Engineering applications together with expectations for future developments, growth and requirements.
  8. Frank Glege (CERN)
    06/06/2011, 16:00
    CMS has chosen to use an online DB located at IP5, both for security reasons and to be able to take data even without a GPN connection. The online DB (OMDS) is accessed by various applications for data-acquisition configuration (through OCI libraries via TStore), detector slow control (via PVSS), and monitoring via Java or C++ libraries. It also contains offline conditions data which are...
  9. Giacomo Govi (Fermilab)
    06/06/2011, 16:30
    The CMS experiment is made of many detectors which in total sum up to 60 million channels. Calibrations and alignments are fundamental to maintaining the design performance of the experiment. The conditions database contains the alignment and calibration data for the various detectors. Conditions data sets are accessed by a tag and an interval of validity through the offline reconstruction...
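    The tag-plus-interval-of-validity access pattern mentioned above can be sketched as follows. This is an illustrative example only, not the actual CMS conditions schema: the tag name, payload values, and run-number ranges are invented for the sketch.

    ```python
    import bisect

    # Illustrative sketch (not the actual CMS conditions database schema):
    # payloads are grouped under a tag; within a tag, each payload is valid
    # from its "since" point (e.g. a run number) until the next payload's
    # "since", so a lookup is a binary search over the sorted IOV starts.
    class ConditionsTag:
        def __init__(self):
            self._since = []     # sorted IOV start points
            self._payloads = []  # _payloads[i] valid from _since[i] to _since[i+1]

        def insert(self, since, payload):
            i = bisect.bisect_left(self._since, since)
            self._since.insert(i, since)
            self._payloads.insert(i, payload)

        def lookup(self, point):
            # Find the payload whose interval of validity covers `point`.
            i = bisect.bisect_right(self._since, point) - 1
            if i < 0:
                raise KeyError(f"no IOV covers {point}")
            return self._payloads[i]

    # Hypothetical tag with two intervals of validity.
    tag = ConditionsTag()
    tag.insert(1, "alignment-A")    # valid for runs 1..99
    tag.insert(100, "alignment-B")  # valid for runs >= 100
    print(tag.lookup(57))           # alignment-A
    print(tag.lookup(150))          # alignment-B
    ```

    Reconstruction jobs would resolve a (tag, run) pair to exactly one payload this way, which is why the abstract describes access "by a tag and an interval of validity".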
  10. Mr Tony Wildish (PRINCETON)
    06/06/2011, 17:00
    Requirements
    We describe the current use of Oracle by CMS offline dataflow and workflow components (T0, PhEDEx, DBS). We consider how database use is expected to evolve over the next few years, in terms of data volume, data structure, and application use.
  11. Marco Clemencic (CERN PH-LBC)
    07/06/2011, 09:00
    Several database applications are used by the LHCb collaboration to help organize day-to-day tasks and to assist data taking, processing and analysis. I will present a brief overview of the technologies used and the requirements for the long-term support of both the current database applications and possible future ones.
  12. Dr Dario Barberis (CERN)
    07/06/2011, 09:30
    The use of databases in ATLAS is going through a continuous process of development, deployment and optimisation, in order to cope with the increasing amounts of data and new demands from the user community. In 2011 and 2012 work will concentrate on two major lines, namely the transition to Oracle 11g and re-optimisation of the existing database in Oracle, and the study of new technologies...
  13. Gancho Dimitrov (BNL)
    07/06/2011, 10:00
    It is planned that at the beginning of 2012 all ATLAS databases at CERN will be upgraded to Oracle 11g Release 2. With a view to making the ATLAS DB applications more reliable and performant, we would like to explore and evaluate the new 11g database features for development and performance tuning. The talk will describe the expected benefits of having some of the Oracle 11g...
  14. Dr Stefan Schlenker (CERN)
    07/06/2011, 11:00
    The ATLAS detector control system (DCS) archives detector conditions data in a dedicated Oracle database using a proprietary schema (PVSS Oracle archive) and represents one of the main users of the ATLAS online database service. The contribution will give an overview of the database usage and operational experience, e.g. with respect to data volume, insert rates, and pending issues....
  15. Dr Maxim Potekhin (Brookhaven National Laboratory (BNL))
    07/06/2011, 11:30
    Implementations
    For the past few years, the Panda Workload Management System has been the mainstay of computing power for the ATLAS experiment at the LHC. Since the start of data taking, Panda usage gradually ramped up to 840,000 jobs processed daily in the fall of 2010, and has remained at consistently high levels ever since. Given the upward trend in workload and associated monitoring data volume, the Panda team is...
  16. Dr Vincent Garonne (CERN)
    07/06/2011, 12:00
    The Distributed Data Management System DQ2 is responsible for the global management of petabytes of ATLAS physics data. DQ2 has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle, as RDBMS are well-suited to enforcing data integrity in online transaction processing applications. Despite these advantages, concerns have recently been raised about the scalability of...
  17. Simon Metson (H.H. Wills Physics Laboratory)
    07/06/2011, 14:00
    We discuss potential future requirements for CERN/IT managed/provided "NoSQL" data stores, and provide some high level observations based on our experiences with these technologies.
  18. Valentin Kuznetsov (Cornell)
    07/06/2011, 14:30
    The CMS Offline project has been developing against "NoSQL" data stores since 2009 and has experience with three projects in particular: CouchDB, Kyoto Cabinet and MongoDB. We present how these tools are used in our software, why they were chosen, and lessons we've learnt along the way.
  19. Mr Jerome Belleman (CERN)
    07/06/2011, 15:00
    Monitoring typically requires storing large amounts of metric samples recorded at a high rate. These samples must then be massively read back and reprocessed for analysis and visualisation purposes. For the past few years, different monitoring systems have been developed on top of NoSQL databases for the scalability they provide. Likewise, a monitoring system for the batch service...
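    The write-heavy, scan-heavy access pattern described above is why such systems bucket samples by metric and time window, the row-key layout commonly used with NoSQL stores. A minimal sketch, assuming an in-memory dict as a stand-in for the actual store (the metric names, bucket size, and helper functions are invented for illustration, not taken from the CERN batch-monitoring system):

    ```python
    from collections import defaultdict

    # Illustrative sketch: samples are grouped into one bucket per
    # (metric, hour), so reading back a time range touches only a few
    # contiguous keys instead of one key per sample.
    BUCKET = 3600  # seconds per bucket (one hour)

    store = defaultdict(list)  # stand-in for a key-value / wide-column store

    def write_sample(metric, ts, value):
        # High-rate writes: append to the bucket covering timestamp ts.
        store[(metric, ts // BUCKET)].append((ts, value))

    def read_range(metric, start, end):
        # Massive read-back: scan only the buckets overlapping [start, end].
        out = []
        for bucket in range(start // BUCket if False else start // BUCKET,
                            end // BUCKET + 1):
            out.extend(v for t, v in store[(metric, bucket)] if start <= t <= end)
        return out

    write_sample("jobs.running", 10, 120)
    write_sample("jobs.running", 3700, 130)
    write_sample("jobs.running", 7300, 125)
    print(read_range("jobs.running", 0, 4000))  # [120, 130]
    ```

    The same layout maps naturally onto the wide-column NoSQL stores the abstract alludes to, where the bucket key becomes the row key and samples become columns.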
  20. Dr Andrea Valassi (CERN)
    07/06/2011, 16:00
    This presentation will report on the current plans for the future maintenance and development of two Persistency Framework packages used by several LHC experiments for accessing Oracle databases: CORAL (the generic RDBMS access layer, used by ATLAS, CMS and LHCb) and COOL (the conditions database package used by ATLAS and LHCb). It will also cover the status and plans for the CORAL Server, the...
  21. Dr Dave Dykstra (Fermilab)
    07/06/2011, 16:30
    Technologies
    Frontier has been successfully distributing high-volume, high-throughput, long-distance data for CMS for many years, and more recently for ATLAS, greatly reducing the load on the WLCG database servers. This talk will briefly describe the present status and cover the expected changes coming in the future. No major changes are foreseen, but improvements in robustness and security,...
  22. Daniel Wang (SLAC National Accelerator Laboratory)
    07/06/2011, 17:00
    The LSST catalog of celestial objects will need to answer both simple and complex queries over many billions of rows. Since no existing open-source database efficiently supports its requirements, we are developing Qserv, a prototype database-style system, to handle such volumes. Qserv uses Xrootd as a framework for data-addressed communication to a cluster of machines with...
  23. Tony Cass (CERN)
    07/06/2011, 17:20