Conveners
Track 5 Session: #1 (Computing Models)
- Stefan Roiser (CERN)
Track 5 Session: #2 (Computing Activities)
- Simone Campana (CERN)
Track 5 Session: #3 (Data Preservation, Computing Activities)
- Gordon Watts (University of Washington (US))
Description
Computing activities and Computing models
Stephen Gowdy
(Fermi National Accelerator Lab. (US))
4/13/15, 2:00 PM
Track5: Computing activities and Computing models
oral presentation
The global distributed computing system (WLCG) used by the Large Hadron
Collider (LHC) is evolving. The treatment of wide-area-networking (WAN) as
a scarce resource that needs to be strictly managed is far less
necessary than originally foreseen. Static data placement and replication,
intended to limit interdependencies among computing centers, is giving way
to global data federations...
Ian Fisk
(Fermi National Accelerator Lab. (US))
4/13/15, 2:15 PM
Track5: Computing activities and Computing models
oral presentation
Beginning in 2015, CMS will collect and produce data and simulation adding up to 10B new events a year. In order to realize the physics potential of the experiment these events need to be stored, processed, and delivered to analysis users on a global scale. CMS has 150k processor cores and 80PB of disk storage and there is constant pressure to reduce the resources needed and increase the...
Dr
Simone Campana
(CERN)
4/13/15, 2:30 PM
Track5: Computing activities and Computing models
oral presentation
The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run2. An increased data rate and computing demands of the Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming...
Dr
Andrea Sciaba
(CERN)
4/13/15, 2:45 PM
Track5: Computing activities and Computing models
oral presentation
The Worldwide LHC Computing Grid project (WLCG) provides the computing and storage resources required by the LHC collaborations to store, process and analyse the ~50 Petabytes of data annually generated by the LHC. The WLCG operations are coordinated by a distributed team of managers and experts and performed by people at all participating sites and from all the experiments. Several...
Jiri Chudoba
(Acad. of Sciences of the Czech Rep. (CZ))
4/13/15, 3:00 PM
Track5: Computing activities and Computing models
oral presentation
The Pierre Auger Observatory operates the largest system of detectors for ultra-high energy cosmic ray measurements. Comparison of theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been successfully used. The first and second versions of the production system based on bash...
Alec Habig
(Univ. of Minnesota Duluth)
4/13/15, 3:15 PM
Track5: Computing activities and Computing models
oral presentation
The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study nu-e appearance in a nu-mu beam. Over the last few years there has been intense work to streamline the computing infrastructure in preparation for data, which started to flow in from the far detector in Fall 2013. Major accomplishments for this effort include migration to the use of offsite resources...
Dr
Baosong Shan
(Beihang University (CN))
4/13/15, 3:30 PM
Track5: Computing activities and Computing models
oral presentation
The Alpha Magnetic Spectrometer (AMS) is a high energy physics experiment installed and operating on board the International Space Station (ISS) since May 2011 and expected to last through 2024 and beyond. The computing strategy of the AMS experiment is discussed in the paper, including software design, data processing and modelling details, simulation of the detector performance and...
Dr
Takashi SUGIMOTO
(Japan Synchrotron Radiation Research Institute)
4/13/15, 3:45 PM
Track5: Computing activities and Computing models
oral presentation
An X-ray free electron laser (XFEL) facility, SACLA, is generating ultra-short, high peak brightness, and full-spatial-coherent X-ray pulses [1]. The unique characteristics of the X-ray pulses, which have never been obtained with conventional synchrotron orbital radiation, are now opening new opportunities in a wide range of scientific fields such as atom, molecular and optical physics,...
Christoph Paus
(Massachusetts Inst. of Technology (US))
4/13/15, 4:30 PM
Track5: Computing activities and Computing models
oral presentation
The Dynamic Data Management (DDM) framework is designed to manage the majority of the CMS data in an automated fashion. At the moment 51 CMS Tier-2 data centers have the ability to host about 20 PB of data. Tier-1 centers will also be included adding substantially more space. The goal of DDM is to facilitate the management of the data distribution and optimize the accessibility of data for the...
Thomas Beermann
(Bergische Universitaet Wuppertal (DE))
4/13/15, 4:45 PM
Track5: Computing activities and Computing models
oral presentation
This contribution presents a study on the applicability and usefulness of dynamic data placement methods for data-intensive systems, such as ATLAS distributed data management (DDM). In this system the jobs are sent to the data, so a good distribution of data is essential. Ways of forecasting workload patterns are examined which are then used to redistribute data to achieve a...
Prof.
Daniele Bonacorsi
(University of Bologna)
4/13/15, 5:00 PM
Track5: Computing activities and Computing models
oral presentation
During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collision data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, and transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed...
Elizabeth Sexton-Kennedy
(Fermi National Accelerator Lab. (US))
4/13/15, 5:15 PM
Track5: Computing activities and Computing models
oral presentation
Today there are many different experimental event processing frameworks in use by running or soon-to-be-running experiments. This talk will compare and contrast the different components of these frameworks and highlight the different solutions chosen by different groups. In the past there have been attempts at shared framework projects, for example the collaborations on the BaBar framework...
Dr
Bodhitha Jayatilaka
(Fermilab)
4/13/15, 5:30 PM
Track5: Computing activities and Computing models
oral presentation
The Open Science Grid (OSG) ties together individual experiments' computing power, connecting their resources to create a large, robust computing grid; this computing infrastructure started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero. OSG has been funded by the Department of Energy Office of Science and National Science Foundation...
Federica Legger
(Ludwig-Maximilians-Univ. Muenchen (DE))
4/13/15, 5:45 PM
Track5: Computing activities and Computing models
oral presentation
The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data for the distributed physics community is a challenging task. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user...
Dirk Hufnagel
(Fermi National Accelerator Lab. (US))
4/13/15, 6:00 PM
Track5: Computing activities and Computing models
oral presentation
In 2015, CMS will embark on a new era of collecting LHC collisions at unprecedented rates and complexity. This will put a tremendous stress on our computing systems. Prompt Processing of the raw data by the Tier-0 infrastructure will no longer be constrained to CERN alone due to the significantly increased resource requirements. In LHC Run 2, we will need to operate it as a distributed system...
Tim Smith
(CERN)
4/14/15, 2:00 PM
Track5: Computing activities and Computing models
oral presentation
In this paper we present newly launched services for open data and for long-term preservation and reuse of high-energy-physics data analyses. We follow the "data continuum" practices through several progressive data analysis phases up to the final publication. The aim is to capture all digital assets and associated knowledge inherent in the data analysis process for subsequent generations, and...
Martin Urban
(Rheinisch-Westfaelische Tech. Hoch. (DE))
4/14/15, 2:15 PM
Track5: Computing activities and Computing models
oral presentation
VISPA provides a graphical front-end to computing infrastructures giving its users all functionality needed for working conditions comparable to a personal computer. It is a framework that can be extended with custom applications to support individual needs, e.g. graphical interfaces for experiment-specific software. By design, VISPA serves as a multi-purpose platform for many disciplines and...
Dr
Bodhitha Jayatilaka
(Fermilab)
4/14/15, 2:30 PM
Track5: Computing activities and Computing models
oral presentation
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at...
Roger Jones
(Lancaster University (GB))
4/14/15, 2:45 PM
Track5: Computing activities and Computing models
oral presentation
Complementary to parallel open access and analysis preservation initiatives, ATLAS is taking steps to ensure that the data taken by the experiment during Run-1 remain accessible and available for future analysis by the collaboration. An evaluation of what is required to achieve this is underway, examining the ATLAS data production chain to establish the effort required and potential problems....
Jetendr Shamdasani
(University of the West of England (GB))
4/14/15, 3:00 PM
Track5: Computing activities and Computing models
oral presentation
In complex data analyses it is increasingly important to capture information about the usage of data sets, in addition to their preservation over time, in order to ensure reproducibility of results, to verify the work of others, and to ensure that appropriate conditions data have been used for specific analyses. This so-called provenance data in the computer science world is defined as the history or...
Dr
Andrew Norman
(Fermilab)
4/14/15, 3:15 PM
Track5: Computing activities and Computing models
oral presentation
The ability of modern HEP experiments to acquire and process unprecedented amounts of data and simulation has led to an explosion in the volume of information that individual scientists deal with on a daily basis. This explosion has resulted in a need for individuals to generate and keep large "personal analysis" data sets which represent the skimmed portions of official data collections...
Fons Rademakers
(CERN)
4/14/15, 3:30 PM
Track5: Computing activities and Computing models
oral presentation
CERN openlab is a unique public-private partnership between CERN and leading ICT companies. Its mission is to accelerate the development of cutting-edge solutions to be used by the worldwide HEP community. Openlab phase V started in January 2015. To bring the research conducted in openlab closer to the experiments, phase V has been changed to a project-based structure which allows research...
Mr
Romain Wartel
(CERN)
4/14/15, 3:45 PM
Track5: Computing activities and Computing models
oral presentation
This presentation gives an overview of the current computer security landscape. It describes the main vectors of compromise in the academic community including lessons learnt, reveals the inner mechanisms of the underground economy to expose how our computing resources are exploited by organised crime groups, and gives recommendations on how to better protect our computing infrastructures. By...