Through participation in the Community Cluster Program of Purdue University, our Tier-2 center has for many years been one of the most productive and reliable sites for CMS computing, providing both dedicated and opportunistic resources to the collaboration. In this report we will present an overview of the site, review the successes and challenges of the last year of operation, and outline...
Updates from BNL since the KEK meeting
We will present an update on our site since the Fall 2017 report, covering changes in software, tools, and operations.
Details will include the enabling of IPv6 on all of our AGLT2 nodes, our migration to SL7, exploration of Bro/MISP at the UM site, the use of Open vSwitch on our dCache storage, and information about our newest hardware purchases and deployed...
Updates from T2_US_Nebraska, covering our experiences operating CentOS 7 with Docker/Singularity, experiments with SDN to improve HEP transfers, involvement with the Open Science Grid, and our efforts to live the IPv6 dream.
PDSF, the Parallel Distributed Systems Facility, was moved to Lawrence Berkeley National Lab from Oakland, CA in 2016. The cluster has been in continuous operation since 1996, serving high energy physics research. It is a Tier-1 site for STAR, a Tier-2 site for ALICE, and a Tier-3 site for ATLAS.
This site report will describe lessons learned and challenges met when migrating from...
The computing center of IHEP maintains an HTC cluster with 10,000 CPU cores and a grid site with about 15,000 CPU cores and more than 10 PB of storage. The presentation will cover recent progress and the next plans for the IHEP site.
News from DESY covering the last few months
News from CERN since the HEPiX Fall 2017 workshop at KEK, Tsukuba, Japan.
A brief update on the INFN-T1 site: our current status and what remains to be done to reach full functionality
News from PIC since the HEPiX Fall 2017 workshop at KEK, Tsukuba, Japan.
Site report from Nikhef
Update on activities at RAL
Recently we deployed a new cluster with worker nodes with 10 Gbps network connections, as well as new disk servers for DPM and XRootD. I will also discuss the migration from the Torque/Maui batch system to HTCondor.