Track 5 Session

13 Apr 2015, 14:00
OIST

1919-1 Tancha, Onna-son, Kunigami-gun Okinawa, Japan 904-0495

Conveners

Track 5 Session: #1 (Computing Models)

  • Stefan Roiser (CERN)

Track 5 Session: #2 (Computing Activities)

  • Simone Campana (CERN)

Track 5 Session: #3 (Data Preservation, Computing Activities)

  • Gordon Watts (University of Washington (US))

Description

Computing activities and Computing models


  1. Stephen Gowdy (Fermi National Accelerator Lab. (US))
    13/04/2015, 14:00
    Track5: Computing activities and Computing models
    oral presentation
    The global distributed computing system (WLCG) used by the Large Hadron Collider (LHC) is evolving. The treatment of wide-area networking (WAN) as a scarce resource that needs to be strictly managed is far less necessary than originally foreseen. Static data placement and replication, intended to limit interdependencies among computing centers, is giving way to global data federations...
  2. Ian Fisk (Fermi National Accelerator Lab. (US))
    13/04/2015, 14:15
    Track5: Computing activities and Computing models
    oral presentation
    Beginning in 2015, CMS will collect and produce data and simulation amounting to 10B new events a year. In order to realize the physics potential of the experiment, these events need to be stored, processed, and delivered to analysis users on a global scale. CMS has 150k processor cores and 80 PB of disk storage, and there is constant pressure to reduce the resources needed and increase the...
  3. Dr Simone Campana (CERN)
    13/04/2015, 14:30
    Track5: Computing activities and Computing models
    oral presentation
    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run2. An increased data rate and computing demands of the Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming...
  4. Dr Andrea Sciaba (CERN)
    13/04/2015, 14:45
    Track5: Computing activities and Computing models
    oral presentation
    The Worldwide LHC Computing Grid project (WLCG) provides the computing and storage resources required by the LHC collaborations to store, process and analyse the ~50 Petabytes of data annually generated by the LHC. The WLCG operations are coordinated by a distributed team of managers and experts and performed by people at all participating sites and from all the experiments. Several...
  5. Jiri Chudoba (Acad. of Sciences of the Czech Rep. (CZ))
    13/04/2015, 15:00
    Track5: Computing activities and Computing models
    oral presentation
    The Pierre Auger Observatory operates the largest system of detectors for ultra-high-energy cosmic ray measurements. Comparison of theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been successfully used. The first and second versions of the production system based on bash...
  6. Alec Habig (Univ. of Minnesota Duluth)
    13/04/2015, 15:15
    Track5: Computing activities and Computing models
    oral presentation
    The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study nu-e appearance in a nu-mu beam. Over the last few years there has been intense work to streamline the computing infrastructure in preparation for data, which started to flow in from the far detector in Fall 2013. Major accomplishments for this effort include migration to the use of offsite resources...
  7. Dr Baosong Shan (Beihang University (CN))
    13/04/2015, 15:30
    Track5: Computing activities and Computing models
    oral presentation
    The Alpha Magnetic Spectrometer (AMS) is a high-energy physics experiment installed and operating on board the International Space Station (ISS) since May 2011, and expected to last through 2024 and beyond. The computing strategy of the AMS experiment is discussed in the paper, including software design, data processing and modelling details, simulation of the detector performance and...
  8. Dr Takashi SUGIMOTO (Japan Synchrotron Radiation Research Institute)
    13/04/2015, 15:45
    Track5: Computing activities and Computing models
    oral presentation
    An X-ray free electron laser (XFEL) facility, SACLA, is generating ultra-short, high-peak-brightness, and fully spatially coherent X-ray pulses [1]. The unique characteristics of the X-ray pulses, which have never been obtained with conventional synchrotron orbital radiation, are now opening new opportunities in a wide range of scientific fields such as atomic, molecular and optical physics,...
  9. Christoph Paus (Massachusetts Inst. of Technology (US))
    13/04/2015, 16:30
    Track5: Computing activities and Computing models
    oral presentation
    The Dynamic Data Management (DDM) framework is designed to manage the majority of the CMS data in an automated fashion. At the moment 51 CMS Tier-2 data centers have the ability to host about 20 PB of data. Tier-1 centers will also be included adding substantially more space. The goal of DDM is to facilitate the management of the data distribution and optimize the accessibility of data for the...
  10. Thomas Beermann (Bergische Universitaet Wuppertal (DE))
    13/04/2015, 16:45
    Track5: Computing activities and Computing models
    oral presentation
    This contribution presents a study on the applicability and usefulness of dynamic data placement methods for data-intensive systems, such as ATLAS distributed data management (DDM). In this system the jobs are sent to the data, so a good distribution of data is important. Ways of forecasting workload patterns are examined, which are then used to redistribute data to achieve a...
  11. Prof. Daniele Bonacorsi (University of Bologna)
    13/04/2015, 17:00
    Track5: Computing activities and Computing models
    oral presentation
    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collision data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among the various Tier levels, and transformed/slimmed in format and content. These data were then accessed (both locally and remotely) by large groups of distributed...
  12. Elizabeth Sexton-Kennedy (Fermi National Accelerator Lab. (US))
    13/04/2015, 17:15
    Track5: Computing activities and Computing models
    oral presentation
    Today there are many different experimental event processing frameworks in use by experiments that are running or about to be running. This talk will compare and contrast the different components of these frameworks and highlight the different solutions chosen by different groups. In the past there have been attempts at shared framework projects, for example the collaborations on the BaBar framework...
  13. Dr Bodhitha Jayatilaka (Fermilab)
    13/04/2015, 17:30
    Track5: Computing activities and Computing models
    oral presentation
    The Open Science Grid (OSG) ties together individual experiments' computing power, connecting their resources to create a large, robust computing grid; this computing infrastructure started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero. OSG has been funded by the Department of Energy Office of Science and National Science Foundation...
  14. Federica Legger (Ludwig-Maximilians-Univ. Muenchen (DE))
    13/04/2015, 17:45
    Track5: Computing activities and Computing models
    oral presentation
    The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data for the distributed physics community is a challenging task. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user...
  15. Dirk Hufnagel (Fermi National Accelerator Lab. (US))
    13/04/2015, 18:00
    Track5: Computing activities and Computing models
    oral presentation
    In 2015, CMS will embark on a new era of collecting LHC collisions at unprecedented rates and complexity. This will put tremendous stress on our computing systems. Prompt processing of the raw data by the Tier-0 infrastructure will no longer be constrained to CERN alone, due to the significantly increased resource requirements. In LHC Run 2, we will need to operate it as a distributed system...
  16. Tim Smith (CERN)
    14/04/2015, 14:00
    Track5: Computing activities and Computing models
    oral presentation
    In this paper we present newly launched services for open data and for long-term preservation and reuse of high-energy-physics data analyses. We follow the "data continuum" practices through several progressive data analysis phases up to the final publication. The aim is to capture all digital assets and associated knowledge inherent in the data analysis process for subsequent generations, and...
  17. Martin Urban (Rheinisch-Westfaelische Tech. Hoch. (DE))
    14/04/2015, 14:15
    Track5: Computing activities and Computing models
    oral presentation
    VISPA provides a graphical front-end to computing infrastructures giving its users all functionality needed for working conditions comparable to a personal computer. It is a framework that can be extended with custom applications to support individual needs, e.g. graphical interfaces for experiment-specific software. By design, VISPA serves as a multi-purpose platform for many disciplines and...
  18. Dr Bodhitha Jayatilaka (Fermilab)
    14/04/2015, 14:30
    Track5: Computing activities and Computing models
    oral presentation
    The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at...
  19. Roger Jones (Lancaster University (GB))
    14/04/2015, 14:45
    Track5: Computing activities and Computing models
    oral presentation
    Complementary to parallel open access and analysis preservation initiatives, ATLAS is taking steps to ensure that the data taken by the experiment during run-1 remain accessible and available for future analysis by the collaboration. An evaluation of what is required to achieve this is underway, examining the ATLAS data production chain to establish the effort required and potential problems....
  20. Jetendr Shamdasani (University of the West of England (GB))
    14/04/2015, 15:00
    Track5: Computing activities and Computing models
    oral presentation
    In complex data analyses it is increasingly important to capture information about the usage of data sets, in addition to their preservation over time, in order to ensure reproducibility of results, to verify the work of others, and to ensure that appropriate conditions data have been used for specific analyses. This so-called provenance data is defined in the computer science world as the history or...
  21. Dr Andrew Norman (Fermilab)
    14/04/2015, 15:15
    Track5: Computing activities and Computing models
    oral presentation
    The ability of modern HEP experiments to acquire and process unprecedented amounts of data and simulation has led to an explosion in the volume of information that individual scientists deal with on a daily basis. This explosion has resulted in a need for individuals to generate and keep large "personal analysis" data sets which represent the skimmed portions of official data collections...
  22. Fons Rademakers (CERN)
    14/04/2015, 15:30
    Track5: Computing activities and Computing models
    oral presentation
    CERN openlab is a unique public-private partnership between CERN and leading ICT companies. Its mission is to accelerate the development of cutting-edge solutions to be used by the worldwide HEP community. openlab phase V started in January 2015. To bring the research conducted in openlab closer to the experiments, phase V moved to a project-based structure which allows research...
  23. Mr Romain Wartel (CERN)
    14/04/2015, 15:45
    Track5: Computing activities and Computing models
    oral presentation
    This presentation gives an overview of the current computer security landscape. It describes the main vectors of compromise in the academic community including lessons learnt, reveals inner mechanisms of the underground economy to expose how our computing resources are exploited by organised crime groups, and gives recommendations on how to better protect our computing infrastructures. By...
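Several of the data management contributions above (the CMS Dynamic Data Management talk, item 9, and the ATLAS dynamic data placement study, item 10) revolve around the same idea: forecast dataset popularity from access history and replicate the hottest datasets within the available storage. A minimal sketch of that idea in Python follows; the function names and toy data are invented for illustration and are not the actual DDM code of either experiment.

```python
# Hypothetical popularity-driven data placement sketch (illustration only,
# not the CMS DDM or ATLAS DDM implementation).
from collections import Counter

def forecast_popularity(access_log, decay=0.5):
    """Exponentially weighted access counts per dataset.

    access_log: one list of accessed dataset names per day, oldest day
    first. Recent days carry more weight, so the score is a crude
    forecast of near-future popularity.
    """
    scores = Counter()
    weight = 1.0
    for day in reversed(access_log):  # iterate newest day first
        for dataset in day:
            scores[dataset] += weight
        weight *= decay
    return scores

def plan_replicas(scores, capacity):
    """Greedily keep the hottest datasets until capacity is exhausted."""
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [name for name, _ in ranked[:capacity]]

# Toy access log: dataset C is accessed most, and most recently.
log = [["A", "B"], ["A", "C"], ["A", "C", "C"]]  # oldest -> newest
print(plan_replicas(forecast_popularity(log), 2))  # -> ['C', 'A']
```

In production systems the forecast would of course use far richer signals (user counts, job queues, dataset age), but the greedy rank-and-fill step captures the core of popularity-based replication.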
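The provenance contribution (item 20) describes recording the history of how each data set was produced so that analyses can be reproduced and verified. A minimal sketch of such a lineage record, with invented names and toy data (not the system described in that abstract):

```python
# Hypothetical provenance-tracking sketch: each derived dataset records its
# parents and the operation that produced it; lineage() replays the chain.
from dataclasses import dataclass, field

@dataclass
class Record:
    name: str
    operation: str                      # how this dataset was produced
    parents: list = field(default_factory=list)

def lineage(record):
    """Return the (name, operation) chain that produced `record`, oldest first."""
    steps = []
    def walk(r):
        for p in r.parents:
            walk(p)                     # visit ancestors before descendants
        steps.append((r.name, r.operation))
    walk(record)
    return steps

raw = Record("raw_run_42", "detector readout")
reco = Record("reco_run_42", "reconstruction pass v2", [raw])
skim = Record("skim_mu", "muon skim", [reco])
print(lineage(skim))
```

Walking the parent links from the final skim back to the raw data yields exactly the "history" the abstract refers to: enough information to re-run (or audit) every step that produced a result.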