Updates on the JLab high-performance and experimental physics computing environment since the spring 2016 meeting, including recent hardware installs of KNL and Broadwell compute clusters and Supermicro storage; our Intel Lustre upgrade status; 12 GeV computing updates; and Data Center modernization progress.
The site report contains the latest news and updates on computing at BNL.
Updates on the status of the Canadian Tier-1 and other TRIUMF computing news will be presented.
We will present an update on our site since the Spring 2016 report, covering our changes in software, tools and operations.
We will also report on our significant hardware purchases during summer 2016 and the impact they are having on our site.
We conclude with a summary of what has worked and what problems we encountered and indicate directions for future work.
Updates from T2_US_Nebraska covering our experiences operating CentOS 7 + Docker/SL6 worker nodes, banishing SRM in favor of LVS-balanced GridFTP, and some attempts at smashing OpenFlow + GridFTP + ONOS together to live the SDN dream.
As a major WLCG/OSG T2 site, the University of Wisconsin-Madison CMS T2 has consistently delivered highly reliable and productive services for large-scale CMS MC production/processing, data storage, and physics analysis for the last 10 years. The site utilises high-throughput computing (HTCondor), a highly available storage system (Hadoop), scalable distributed software systems (CVMFS),...
This talk will give a brief introduction to the status of the computing center at IHEP, CAS, including the local cluster, the Grid Tier-2 site for ATLAS and CMS, the file and storage systems, the cloud infrastructure, a planned HPC system, and Internet and domestic networking.
The new KEK Central Computer system started service on September 1st, 2016 after a renewal of all hardware. In this talk, we introduce the performance of the new system and the improved network connectivity with LHCONE.
News and updates from Fermilab.
The Tokyo Tier-2 site, located at the International Center for Elementary Particle Physics (ICEPP)
at the University of Tokyo, provides resources for the ATLAS experiment in WLCG. In December 2015,
almost all hardware devices were replaced as the 4th system. Operational experience with the new system
and a migration plan from CREAM-CE + Torque/Maui to ARC-CE + HTCondor will be reported.
We will provide updates on technical and managerial changes at Australia's only HEP grid computing site.
Update on SLAC Scientific Computing Service
SLAC’s Scientific Computing Services team provides long-term storage and
midrange compute capability for multiple science projects across the lab.
The team is also responsible for core enterprise (non-science) Unix
infrastructure. A sustainable hardware lifecycle is a key part of the...
Caltech site report (USCMS Tier-2 site)
A report on facility deployment, recent activities, collaborations, and plans.
News from CERN since the DESY workshop.
Latest news of activities at the RAL Tier1.
A short update on what's going on at the Italian T1 center.
News and interesting events from NDGF and NeIC.
News about GridKa Tier-1 and other KIT IT projects and infrastructure.
During the last few months, HPC @ GSI has moved servers and services to the new data center, Green IT Cube. This included moving users from the old compute cluster to the new one with a new scheduler, and moving several petabytes of data from the old Lustre cluster to the new one.
Critical to the success of ITER reaching its scientific goal (Q≥10) is a data system that supports the broad range of diagnostics, data analysis, and computational simulations required for this scientific mission. Such a data system, termed ITERDB in this document, will be the centralized data access point and data archival mechanism for all of ITER’s scientific data. ITERDB will provide a...
- hardware renewal
- dCache and OS upgrades
- Windows 10 migration
- network: IPv6
- infrastructure: monitoring
- new H2020 call EOSF
We give an update on the infrastructure, Tier-0 hosting services, Cloud services and other recent developments at the Wigner Datacenter.