An update on recent INFN-T1 activities
News from the lab
Changes at the LAL and GRIF grid sites.
Site report, news and ongoing activities at the Swiss National Supercomputing Centre T2 site (CSCS-LCG2) running ATLAS, CMS and LHCb.
- More hardware issues with HPE SL4510 Gen9
- Parsing HP ADU Reports
- dCache upgrade
- IPv6
I will give a short overview of our institute and its IT capabilities.
Report on new developments and insights from NDGF. The report will focus on half a year of experience with HA dCache and how this works for us in practice.
An overview of BNL's RHIC/ATLAS Computing Facility, highlighting significant developments since the last HEPiX meeting at LBNL.
We will present an update on our site since the Fall 2016 report, covering our changes in software, tools and operations.
Some of the details to cover include changes and updates to our networking, storage and deployed middleware.
We conclude with a summary of what has worked and what problems we encountered and indicate directions for future work.
In the last year, the Nebraska site has worked hard to reinvent the services offered to its user communities. The high-throughput-computing resources have successfully transitioned to Docker, offering more flexibility in terms of OS environments. We have upgraded and improved our CVMFS infrastructure, allowing local users to heavily utilize it for data distribution. Finally, we have adopted...
An update on CERN Linux support distributions and services.
An update on the CentOS community and CERN involvement will be given. We will discuss the Software Collections, Virtualization and OpenStack SIGs and how we use them.
We will present our new Puppet-based configuration tool and its future.
A brief status of the community's work on alternative architectures (aarch64, ppc64le, etc.) will be given.
The initiative to create a journal about Software and Computing for Big Science was presented one year ago, at HEPiX Berlin. The journal has now been launched. This talk will recall the goals of the journal and explain how to contribute.
After many months of work, the WLCG Tier 1 centre at RAL has begun to deploy IPv6 addresses to production hosts. This talk will detail the work that has been done and explain the strategy that has been adopted for managing addresses in a dual-stack environment.
During the first quarter of 2017 CERN IT migrated from a Puppet 3-based service to a Puppet 4 one. We highlight the steps we took, the methods we used and the problems we discovered along the way.
SaltStack is a newer configuration management tool, originally developed for remote execution. This talk will cover my experiences with Salt in two organizations, in two different roles:
- Cleaning up an organization's use of Salt.
- Making Ceph execution modules in Python (a minimal sketch follows below).
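As a rough illustration of the second item, here is a minimal sketch of what a custom Salt execution module wrapping the Ceph CLI can look like. The module name, file location and command set are illustrative assumptions, not the modules from the talk; the file would live under `_modules/` in the Salt fileserver root and be distributed with `salt '*' saltutil.sync_modules`.

```python
# Minimal sketch of a custom Salt execution module for Ceph (hypothetical
# module name "cephtool"; saved as _modules/cephtool.py).
import json
import subprocess

__virtualname__ = 'cephtool'


def __virtual__():
    # Only load on minions where the ceph CLI is present.
    try:
        subprocess.check_output(['ceph', '--version'])
    except (OSError, subprocess.CalledProcessError):
        return (False, 'ceph command not found')
    return __virtualname__


def health():
    '''
    Return the cluster health report as a dictionary.

    CLI example:  salt 'mon01' cephtool.health
    '''
    out = subprocess.check_output(['ceph', 'health', '--format', 'json'])
    return json.loads(out.decode())


def osd_df():
    '''
    Return per-OSD utilisation, useful for spotting unbalanced OSDs.

    CLI example:  salt 'mon01' cephtool.osd_df
    '''
    out = subprocess.check_output(['ceph', 'osd', 'df', '--format', 'json'])
    return json.loads(out.decode())
```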
We will present an update on the changes at our site since our 2016 report. Through the presentation we share the advances, roadblocks and achievements concerning different aspects (Unix, grid, projects, etc.) of our facility.
We conclude with a summary and a mention of our goals.
The KEK central computer system was upgraded in September 2016. In this talk, we report our experiences operating the hierarchical storage system and the Grid system, together with their status and usage since the upgrade.
The Tokyo Tier-2 site, located at the International Center for Elementary Particle Physics (ICEPP) at the University of Tokyo, provides computing resources for the ATLAS experiment in the WLCG.
Updates on the site since the Fall 2016 meeting, including the status of batch system migration and an implementation of redundancy in the database of the storage element, will be reported.
We will present the latest status of the GSDC, and the migration plan for the administrative system will also be presented.
This report covers the current status of the IHEP site, including the new physics experiments it supports, the migration to an HTCondor cluster, the EOS and Lustre file systems deployed at IHEP, and the network upgrades made since October 2016.
This presentation provides an update on the global security landscape since the last HEPiX meeting. It describes the main vectors of compromises in the academic community including lessons learnt, presents interesting recent attacks while providing recommendations on how to best protect ourselves. It also covers security risks management in general, as well as the security aspects of the...
WLCG relies on the network as a critical part of its infrastructure and therefore needs to guarantee effective network usage and prompt detection and resolution of any network issues, including connection failures, congestion and traffic routing. The OSG Networking Area is a partner of the WLCG effort and is focused on being the primary source of networking information for its partners and...
ESnet staff are in the early stages of planning the next generation of their network, ESnet6. ESnet is providing network services to all of the large US LHC computing centers and this community is the biggest user of the current ESnet5 network. ESnet6 is expected to be online during the LHC Run 3 and Run 4. How the LHC community uses the network has a big impact on the ESnet6 project, and...
In order to provide a more secure and manageable network at IHEP, we have designed a new network architecture that will be implemented in the middle of this year. This report introduces the architecture, along with the IPv6 tests we have carried out and the monitoring tools we have deployed under it; the test results will be shown. Moreover, our research on network security...
This update from the HEPiX IPv6 Working Group will present activities during the last 6-12 months. In September 2016, the WLCG Management Board approved the group's plan for the support of IPv6-only CPU, together with the linked requirement for the deployment of production Tier 1 dual-stack storage and other services. This talk will remind HEPiX of the requirements for support of IPv6 and the...
We present an update on KEK computer security since HEPiX Spring 2016. Over the past year, several security incidents occurred at KEK and other Japanese academic sites. Consequently, we have been forced to change our computer security strategy.
In this presentation, we also report our experiences, practices, and future plans on KEK computer security.
The HEP community is facing an ever increasing wave of computer security threats, with more and more recent attacks showing a very high level of complexity. Having a Security Operations Center (SOC) in place is paramount for the early detection and remediation of such threats. Key components and recommendations to build an appropriate monitoring and detection Security Operation Center will be...
The IT-Storage group at CERN is responsible for the operations and the development of the infrastructure to accommodate all storage requirements, from the physics data generated by LHC and non-LHC experiments to users’ personal files.
This presentation will give an overview of the solutions operated by the group, current and future developments, highlighting the group strategy to...
EOS, the high-performance CERN IT distributed storage for High-Energy Physics, now provides more than 160 PB of disk and supports several workflows from data taking and reconstruction to physics analysis. With the next storage delivery the system will grow beyond the 250 PB mark. EOS also provides “sync and share” capabilities to more than 9k users for administrative, scientific and...
Network-attached online storage, aka cloud storage, is a very popular form of storage service provided by many commercial vendors. Providers include Dropbox, Box, Google Drive, MS One Drive and Amazon Cloud Drive. All have similar capabilities, providing users with quota space and custom applications to transfer data between local sites and cloud storage. In addition, all have well designed...
The rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) has prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and university clusters scattered over a large area aim to unite their resources for future production work, while at the same time providing an opportunity to support large physics...
The CernVM File System (CernVM-FS) was developed to assist WLCG High Energy Physics (HEP) collaborations to deploy software on the worldwide distributed computing infrastructure used to run data processing applications. The technology is now firmly established as the primary method for distributing WLCG experiment software, and its use by other HEP and non-HEP communities has increased...
The new Ceph based storage system, Echo, is now accepting production data from LHC VOs. This talk gives an update on the work done while reaching this milestone. It will also cover other non-Echo Ceph related work at RAL.
Since 2012, 8 physics labs from Orsay/Saclay have worked together to provide an efficient and resilient scientific computing infrastructure. After building shared hosting facilities, this year the IT teams of the 8 labs submitted a project to build a distributed data infrastructure based on Ceph technology, which was funded at the end of 2016. The objective is to deploy on 3 sites, connected with a 100G network, 1...
This is the PIC report to HEPIX Spring 2017.
We present CosmoHub, a web platform to perform interactive analysis of massive cosmological data without any SQL knowledge. CosmoHub is built on top of Apache Hive, an Apache Hadoop ecosystem component that facilitates reading, writing, and managing large datasets.
CosmoHub is hosted at the Port d'Informació Científica (PIC) and currently provides support to several international...
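As an illustration of the kind of Hive query a platform like CosmoHub runs on behalf of its users, here is a minimal sketch using the PyHive client; the host, database, table and column names are made-up assumptions, not CosmoHub internals.

```python
# Minimal sketch: run an aggregation on a large Hive table and fetch only
# the reduced result (all connection details and names are hypothetical).
from pyhive import hive

conn = hive.connect(host='hive.example.org', port=10000, database='cosmology')
cursor = conn.cursor()

# Histogram a galaxy catalogue by redshift bin, computed server-side by Hive.
cursor.execute("""
    SELECT FLOOR(redshift / 0.1) AS z_bin, COUNT(*) AS n_galaxies
    FROM galaxy_catalog
    GROUP BY FLOOR(redshift / 0.1)
    ORDER BY z_bin
""")
for z_bin, n_galaxies in cursor.fetchall():
    print(z_bin, n_galaxies)

cursor.close()
conn.close()
```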
HammerCloud is a testing service to run continuous tests or on-demand large-scale stress tests of the WLCG resources with real-life experiment jobs.
HammerCloud is used by the ATLAS and CMS experiments in production. It has been a useful service to commission both compute resources and various components of the complex distributed systems of the LHC experiments, as well as an integral part of...
Brookhaven Lab recently acquired an Intel Knights Landing (KNL) cluster consisting of 144 nodes connected with a dual-rail OmniPath (OPA) fabric. We will detail our experiences integrating this cluster into our environment, testing the performance and debugging issues relating to the fabric and hardware. Details about the integration with the batch system (Slurm) and performance issues found...
An update to JLab's Fall 2016 SciPhi-XVI KNL talk, covering the addition of 64 nodes to our Knights Landing cluster, reaching #397 on the Top500 list at 429.5 TFlops and #10 on the Green500 list at 3836.6 MFLOPS/W. It will include an overview of our cluster configuration updates, Omni-Path fabric, benchmarking, integration with Lustre and NFS over InfiniBand, as well as current open issues.
An update of the activity of the HEPiX Benchmarking Working Group will be reported.
The IHEP cluster, with more than 10,000 job slots, was migrated from PBS to HTCondor by the end of 2016. This report describes the sharing-pool scheduling policy deployed on the IHEP cluster to improve resource utilization, as well as our experience of managing HTCondor.
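A small sketch of how per-group usage in such a pool can be inspected with the HTCondor Python bindings; this is only an illustrative query of the kind used when tuning a shared-pool policy, not the IHEP scheduling configuration itself.

```python
# Minimal sketch: count claimed slots per accounting group in an HTCondor pool.
import collections
import htcondor

coll = htcondor.Collector()                      # local pool by default
claimed = coll.query(htcondor.AdTypes.Startd,
                     'State == "Claimed"',
                     ['AccountingGroup', 'RemoteOwner'])

usage = collections.Counter()
for ad in claimed:
    # Fall back to the remote owner if no accounting group is set.
    usage[ad.get('AccountingGroup', ad.get('RemoteOwner', 'unknown'))] += 1

for group, slots in usage.most_common():
    print(group, slots)
```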
The multi-user pilot job (MUPJ) model has become deeply embedded in the LHC computing ecosystem. In this model, the pilot job sent to a site's batch system will dynamically pull down one or more user payload jobs as it runs at the site.
While providing the experiments with utmost flexibility, the MUPJ presents challenges in isolation (preventing payloads from interacting with the pilot) and...
The HEPiX Benchmarking Working group has been investigating fast benchmark applications with the objective of identifying candidates that can run quickly enough to avoid wasting compute resources, but still capable of accurately representing HEP workloads. Understanding how the underlying processor microarchitecture affects the results of these benchmarks is important to foresee scenarios...
This contribution describes the CRIC (Computing Resource Information Catalog) framework, which is designed to describe the topology of the Experiments' computing models, providing a unified description of the resources and services used by the Experiments' applications.
The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centers affiliated with several partner projects....
ElastiCluster is a command-line application (and a Python API) to deploy, configure, and resize various types of computational clusters on Infrastructure-as-a-Service clouds. Currently supported is the deployment of SLURM/GridEngine/TORQUE batch clusters, Spark/Hadoop systems (with Hive and HDFS), and various types of distributed filesystems (GlusterFS, OrangeFS, Ceph) on OpenStack,...
We'll give an update on the status of our cloud, focusing on recently added features, with special attention to containers.
One obstacle to effective and efficient exploitation of public cloud resources is the work required to accommodate their different APIs. Observing that many public clouds offer varying degrees of support for container orchestration using Kubernetes, we present the results of practical experiments involving several large public cloud providers.
We also present a brief update on container...
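A minimal sketch of the provider-independent client pattern behind such experiments: once a kubeconfig for a managed cluster on any of the clouds is available, the same Kubernetes API calls work unchanged. The official Python client is used here; the cluster and label details are assumptions for illustration.

```python
# Minimal sketch: list the nodes of whichever Kubernetes cluster the current
# kubeconfig context points to, regardless of the cloud provider behind it.
from kubernetes import client, config

# Load credentials for the currently selected cluster/context
# (e.g. one created by the provider's own CLI).
config.load_kube_config()

v1 = client.CoreV1Api()
for node in v1.list_node().items:
    labels = node.metadata.labels or {}
    print(node.metadata.name,
          labels.get('beta.kubernetes.io/instance-type', 'unknown'),
          node.status.node_info.kubelet_version)
```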
The IT Storage group at CERN develops the software responsible for archiving to tape the custodial copy of the physics data generated by the LHC experiments. This software is code-named CTA (the CERN Tape Archive). It needs to be seamlessly integrated with EOS, which has become the de facto disk storage system provided by the IT Storage group for physics data. CTA and EOS integration...
IHEP distributed computing was built on DIRAC in 2012 and started operations in 2014 to meet the peak needs of the BESIII experiment. As more new experiments (JUNO, LHAASO, CEPC, etc.) with challenging data volumes come into operation or are planned at IHEP, the system has been progressively developed into a common platform supporting multiple experiments in one instance. In this platform,...
The LHC Run4 phase, also known as HL-LHC, is scheduled to start in mid 2026 and it will impose formidable challenges to the capability of processing and storing data according to the planned data acquisition rates. A tenfold increase in recorded event rates for ATLAS and CMS and a threefold increase in event pile-up will require an amount of computational power and storage far in excess of...
This is a whistle-stop tour of some of the new approaches and technologies that enable companies to derive insight from their data, both today and into the future. It compares the progress of SSD and HDD, and maps out how HDD can stay on the areal density curve for the foreseeable future, up to 100 TB per device. Technology touched on will be Dual-Actuator drives, Helium, Two-Dimensional...
In January 2016 CERN launched a new project with the aim to provide a centralised Elasticsearch service. This presentation will summarise the status of the project, challenges, experiences from the pre-production phase, and methods applied to configure access control.
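A minimal sketch of what a client of such a centralised Elasticsearch service might look like; the endpoint, port, index pattern and credentials are hypothetical, and the actual access-control mechanism of the CERN service may differ.

```python
# Minimal sketch: authenticated search against a centrally managed
# Elasticsearch endpoint (all connection details are made up).
from elasticsearch import Elasticsearch

es = Elasticsearch(
    ['https://es-service.example.ch:9203'],
    http_auth=('service_account', 'secret'),   # per-project credentials
    verify_certs=True,
)

# Count recent error-level log entries in a project-owned index.
result = es.search(
    index='myproject-logs-*',
    body={
        'query': {
            'bool': {
                'must': [
                    {'term': {'level': 'ERROR'}},
                    {'range': {'@timestamp': {'gte': 'now-1h'}}},
                ]
            }
        },
        'size': 0,
    },
)
print(result['hits']['total'])
```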
Over the past two years, the operations at CNAF, the ICT center of the Italian Institute for Nuclear Physics, have undergone significant changes. The adoption of configuration management tools, such as Puppet, and the constant increase of dynamic and cloud infrastructures have led us to investigate a new monitoring approach. The present work deals with the centralization of the monitoring...
For over a decade, the CERN IT Data Centres have been using a centralized monitoring infrastructure collecting data from hardware, services and applications via in-house sensors, metrics and notifications. Meanwhile, the LHC experiments have been relying on dedicated WLCG Dashboards visualizing and reporting the status and progress of job execution, data transfers and site availability...
This talk touches on our ongoing data collection project and its progression to the next phase, monitoring. It covers a couple of the monitoring paths taken, both the ones that look promising and the ones that failed.
Building upon last year, I'll discuss how to create a small data collection and monitoring setup. Instructions will be placed on the HEPiX twiki.
Event logging is a central source of information for IT. The syslog-ng application collects logs from many different sources, performs real-time log analysis by processing and filtering them, and finally stores the logs or routes them for further analysis. In an ideal world, all log messages come in a structured format, ready to be used for log analysis, alerting or dashboards. But in a...
We present the log infrastructure at CCIN2P3 and illustrate how syslog-ng plays a central part in it. Following up on Balabit's talk on syslog-ng's features, we present several use-cases which are likely to be of interest to the HEPiX community. For instance, we present real-life examples on how to parse and correlate operating system and batch scheduler events. We present its integration...
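The parsing and correlation described in the talk is done with syslog-ng's own parsers; purely as an illustration of the idea, here is a small Python sketch that correlates batch scheduler start/finish events by job id. The log format, field names and timestamps are invented for this example.

```python
# Minimal sketch: pair up job start/finish log lines and compute durations
# (hypothetical log format; syslog-ng itself would do this with its parsers).
import re
from datetime import datetime

START = re.compile(r'job (?P<jobid>\d+) started on (?P<node>\S+) at (?P<ts>\S+)')
END = re.compile(r'job (?P<jobid>\d+) finished with status (?P<status>\d+) at (?P<ts>\S+)')
TS_FMT = '%Y-%m-%dT%H:%M:%S'


def correlate(lines):
    """Yield (jobid, node, status, duration_seconds) for completed jobs."""
    started = {}
    for line in lines:
        m = START.search(line)
        if m:
            started[m.group('jobid')] = (m.group('node'),
                                         datetime.strptime(m.group('ts'), TS_FMT))
            continue
        m = END.search(line)
        if m and m.group('jobid') in started:
            node, t0 = started.pop(m.group('jobid'))
            t1 = datetime.strptime(m.group('ts'), TS_FMT)
            yield m.group('jobid'), node, m.group('status'), (t1 - t0).total_seconds()
```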
Report on the development of the regulation system for the chillers: the current status, targets for this phase and possible future plans.
This talk will give the current status of two on-going Data Centre projects as well as two recent incidents.
P2IO, a group of laboratories that LAL is a member of, built the first phase of a shared datacenter a few years ago, in production since October 2013. This datacenter has been designed to achieve good energy efficiency in the context of scientific computing. The extension of this datacenter is in progress to increase its capacity from 30 to 50 racks. This talk will present the lessons...
Hardware maintenance can be time consuming, depending on your processes and your retailers' ones. The goal of the talk is to describe how the end-to-end chain of handling hardware failures, from the event to the closing of the case, has been mostly automated in our machine rooms. It covers diagnostics, incident tracking, parts dispatching, statistics, processes, tools, bits of SOAP code and people...