- Elizabeth Sexton-Kennedy (Fermi National Accelerator Lab. (US)), 10/03/2020, 09:00
- Gabriele Benelli (Brown University (US)), 10/03/2020, 09:20
- Martin Barisits (CERN), 10/03/2020, 09:30
- Dr Adam Lyon (Fermilab), 10/03/2020, 09:45
- David Michael South (Deutsches Elektronen-Synchrotron (DE)), Mario Lassnig (CERN), 10/03/2020, 11:00
- Aristeidis Fkiaras (CERN), 10/03/2020, 11:20
The ESCAPE European Union funded project aims at integrating facilities of astronomy, astroparticle and particle physics into a single collaborative cluster or data lake. The data requirements of such a data lake are at the exabyte scale, and the data should follow the FAIR principles (Findable, Accessible, Interoperable, Reusable). To fulfil those requirements, significant R&D is foreseen with...
- Steven Timm (Fermi National Accelerator Lab. (US)), 10/03/2020, 11:40
The DUNE collaboration has been using Rucio since 2018 to transport data to our many European remote storage elements. We currently have 13.8 PB of data under Rucio management at 13 remote storage elements. We present our experience thus far, as well as our future plans to make Rucio our sole file location catalog. We will present our planned data discovery system, and the role of Rucio in...
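As a purely illustrative aside (a toy sketch, not DUNE's or Rucio's actual code; all names here are hypothetical), the "sole file location catalog" role described above boils down to mapping each logical file name to the set of storage elements holding a replica:

```python
# Toy file-location catalog: logical file name -> storage elements.
# Hypothetical names for illustration only; not a real Rucio interface.
from collections import defaultdict


class FileLocationCatalog:
    def __init__(self):
        # logical file name -> set of storage element names
        self._replicas = defaultdict(set)

    def add_replica(self, lfn, storage_element):
        """Record that a replica of `lfn` exists at `storage_element`."""
        self._replicas[lfn].add(storage_element)

    def locate(self, lfn):
        """Return all storage elements holding a replica of this file."""
        return sorted(self._replicas.get(lfn, set()))


catalog = FileLocationCatalog()
catalog.add_replica("np04_raw_run005141.root", "FNAL_DCACHE")
catalog.add_replica("np04_raw_run005141.root", "RAL_ECHO")
print(catalog.locate("np04_raw_run005141.root"))  # ['FNAL_DCACHE', 'RAL_ECHO']
```

A data discovery system would then sit on top of such a catalog, answering "where are the replicas of this file?" for every downstream consumer.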
- Wilko Kroeger (SLAC National Accelerator Laboratory), 10/03/2020, 12:00
We will describe our plans for using Rucio within the data management system at the Linac Coherent Light Source (LCLS) at SLAC. An overview of the LCLS data management system will be presented, along with the role Rucio will play in cataloging, distributing and archiving data files. We are still in the testing phase but plan to use Rucio in production within the next few months.
- Thomas Beermann (Bergische Universitaet Wuppertal (DE)), 10/03/2020, 13:30
- Dimitrios Christidis (University of Texas at Arlington (US)), 10/03/2020, 13:45
- Panos Paparrigopoulos (CERN), 10/03/2020, 14:00
CRIC is a high-level information system that provides a flexible, reliable and complete topology and configuration description for a large-scale distributed heterogeneous computing infrastructure. CRIC aims to facilitate distributed computing operations for HEP experiments and consolidate WLCG topology information. Being a topology framework, CRIC offers a generic solution with out-of-the-box...
- 10/03/2020, 14:20
- Cedric Serfon (Brookhaven National Laboratory (US)), 10/03/2020, 14:40
- Arfath Pasha (MSKCC), 10/03/2020, 15:00
MSKCC's Computational Oncology group performs prospective and retrospective studies on a number of cancer types, with a focus on cancer evolution. The data being collected and managed for research comes from many sources. Broadly, the data may be categorized into molecular, imaging and clinical types. The studies tend to be cross-sectional and longitudinal. Users require heterogeneous...
- Inder Monga (ESNet), 11/03/2020, 09:00
- Katy Ellis (Science and Technology Facilities Council STFC (GB)), 11/03/2020, 09:45
- Eric Vaandering (Fermi National Accelerator Lab. (US)), 11/03/2020, 10:05
An update will be given on the CMS transition to Rucio, which is expected to be completed this year. The talk will focus on the results of scale tests, data consistency work, and improvements to the Kubernetes infrastructure.
- Andrea Manzi, 11/03/2020, 11:00
The data management requirements coming from the EGI and EOSC-Hub user communities have identified Rucio (together with a data transfer engine) as one of the possible solutions for their needs. Since the 2nd Rucio workshop, a number of enhancements and new developments (first and foremost the support for OIDC and the Kubernetes deployment improvements) have been implemented, and they are going towards the...
- iDDS: A New Service with Intelligent Orchestration and Data Transformation and Delivery (Remote), Wen Guan (University of Wisconsin (US)), 11/03/2020, 11:20
The Production and Analysis system (PanDA) has been continuously evolving in order to cope with a rapidly changing computing infrastructure and paradigm. The system is required to be more dynamic and proactive in order to integrate emerging workflows such as data carousel and active learning, in contrast to conventional HEP workflows such as Monte Carlo simulation and data...
- Cedric Serfon (Brookhaven National Laboratory (US)), 11/03/2020, 11:40
- 11/03/2020, 11:50
- Dmitry Litvintsev (Fermi National Accelerator Lab. (US)), 11/03/2020, 13:30
dCache is a highly scalable distributed storage system that is used to implement storage elements with and without tape back-ends. dCache offers a comprehensive RESTful data management interface that uses a language of QoS states and transitions to steer the data life-cycle. This interface provides functionality inspired by the experiences of the LHC and other data-intensive experiments...
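The "QoS states and transitions" vocabulary above can be pictured as a small state machine. The following is a hedged toy sketch (not the dCache implementation; the state names and transition table are illustrative assumptions):

```python
# Toy QoS life-cycle: where a file lives ("disk", "tape", "disk+tape")
# and which transitions between those states are allowed.
# Illustrative only; not dCache's actual state model or API.
ALLOWED = {
    "disk": {"tape", "disk+tape"},
    "tape": {"disk+tape"},
    "disk+tape": {"tape", "disk"},
}


def transition(current, target):
    """Return the new QoS state, or raise if the change is not allowed."""
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"QoS transition {current} -> {target} not allowed")
    return target


state = "disk"
state = transition(state, "disk+tape")  # add a tape copy alongside disk
state = transition(state, "tape")       # release the disk copy
print(state)  # tape
```

A REST interface in this spirit would expose the current state of each file and accept requests naming only the desired target state, with the system carrying out the staging or migration needed to reach it.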
- Federica Legger (Universita e INFN Torino (IT)), 11/03/2020, 13:30
In the near future, large scientific collaborations will face unprecedented computing challenges. Processing and storing exabyte datasets require a federated infrastructure of distributed computing resources. The current systems have proven to be mature and capable of meeting the experiment goals, by allowing timely delivery of scientific results. However, a substantial amount of interventions...
- Dimitrios Christidis (University of Texas at Arlington (US)), 11/03/2020, 13:50
- Julien Leduc (CERN), 11/03/2020, 13:50
CTA is designed to replace CASTOR as the CERN Tape Archive solution, in order to face the scalability and performance challenges arriving with LHC Run-3.
This presentation will focus on the current CTA deployment and will provide an up-to-date snapshot of CTA's achievements.
It will also cover the final Run-3 CTA service architecture and the underlying hardware that were deployed at the end of 2019.
- Wei Yang (SLAC National Accelerator Laboratory (US)), 11/03/2020, 14:10
- Panos Paparrigopoulos (CERN), 11/03/2020, 14:15
This contribution describes how and why we decided to create the "OpInt Framework", what it offers and how we architected it. Last year we began the development of the "Rucio OpInt" project in order to optimise the operational effort and minimise human intervention in distributed data management. When we brought "Rucio OpInt" to the Operational Intelligence forum we realised that there...
- 11/03/2020, 14:30
- Siarhei Padolski (BNL), 11/03/2020, 14:40
Reliable automation of the root cause analysis procedure is an essential prerequisite for deploying Operational Intelligence. That kind of data processing is important as an input for automatic decision making, and has its own value as an instrument for offloading shifter operations. The order of magnitude of the failure rate in distributed computing, for instance in the ATLAS experiment,...
- Kevin Michael Retzke (Fermi National Accelerator Lab. (US)), Shreyas Bhat, 11/03/2020, 15:05
- 11/03/2020, 16:00
https://events.fnal.gov/colloquium/events/event/open-17/
- David Schultz (University of Wisconsin-Madison), 12/03/2020, 09:00
- Paschalis Paschos (University of Chicago), 12/03/2020, 09:20
The search for dark matter with the XENON experiment at the LNGS laboratory in Italy enters a new phase, XENONnT, in 2020. Managed by the University of Chicago, XENON's Rucio deployment plays a central role in the data management between the collaboration's endpoints. In preparation for the new phase, there have been notable upgrades to components of the production and analysis pipeline, and they...
- Matthew Snyder (Brookhaven National Laboratory), 12/03/2020, 09:40
Rucio has evolved as a distributed data management system to be used by scientific communities beyond High Energy Physics. This includes disengaging its core code from any specific file transfer tool. In this talk I will discuss using Globus Online as a file transfer tool with Rucio, the current state of testing, and the possibilities for the future in light of NSLS-II's data ecosystem.
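The decoupling of a data management core from any single transfer tool, as described above, can be sketched as coding against a small interface that concrete tools implement. This is a hypothetical illustration only (the class and method names are invented, not Rucio's or Globus's actual API):

```python
# Illustrative plug-in pattern: the data management core depends only on
# an abstract TransferTool, so an FTS- or Globus-backed tool can be
# swapped in. Names are hypothetical, not a real Rucio interface.
from abc import ABC, abstractmethod


class TransferTool(ABC):
    @abstractmethod
    def submit(self, source_url, dest_url):
        """Queue a transfer and return a request identifier."""


class GlobusTool(TransferTool):
    def submit(self, source_url, dest_url):
        # A real implementation would call the Globus transfer service here.
        return f"globus:{source_url}->{dest_url}"


def replicate(tool: TransferTool, source_url, dest_url):
    # The core sees only the interface, never the concrete tool.
    return tool.submit(source_url, dest_url)


print(replicate(GlobusTool(), "site-a/file.h5", "site-b/file.h5"))
```

The design choice is the usual one: keeping the transfer mechanics behind one narrow seam is what lets a community choose the tool that matches its infrastructure.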
- Mr Gabriele Gaetano Fronze' (University e INFN Torino (IT), Subatech Nantes (FR)), 12/03/2020, 10:00
- Edward Karavakis (CERN), 12/03/2020, 11:00
The File Transfer Service (FTS) distributes the majority of the LHC data across the WLCG infrastructure; in 2019 it transferred more than 800 million files and a total of 0.95 exabytes of data. It is used by more than 28 experiments at CERN and in other data-intensive sciences outside of the LHC, and even beyond the High Energy Physics domain.
The FTS team has been very active in...
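A quick back-of-the-envelope check on the 2019 numbers quoted above: 0.95 exabytes spread over roughly 800 million files implies a mean file size of about 1.2 GB (taking 1 EB = 10^18 bytes):

```python
# Mean file size implied by the 2019 FTS figures quoted in the abstract:
# ~800 million files carrying a total of 0.95 EB.
files = 800e6
volume_bytes = 0.95 * 1e18            # 1 EB = 1e18 bytes (decimal units)
mean_gb = volume_bytes / files / 1e9  # mean file size in GB
print(f"mean file size ~ {mean_gb:.2f} GB")  # mean file size ~ 1.19 GB
```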
- Eli Benjamin Chadwick (Science and Technology Facilities Council STFC (GB)), 12/03/2020, 11:20
- Martin Barisits (CERN), 12/03/2020, 11:40
- 12/03/2020, 12:00
- Gabriele Gaetano Fronze' (University e INFN Torino (IT), Subatech Nantes (FR))