17–21 Oct 2016
LBNL
US/Pacific timezone

Session

Site Report

17 Oct 2016, 09:55
Building 50 Auditorium, LBNL, Berkeley, CA 94720


  1. Sandy Philpott
    17/10/2016, 09:55
    Site Reports

    JLab high-performance and experimental physics computing environment updates since the spring 2016 meeting, including recent hardware installations of KNL and Broadwell compute clusters and Supermicro storage; our Intel Lustre upgrade status; 12 GeV computing updates; and Data Center modernization progress.

  2. William Strecker-Kellogg (Brookhaven National Lab)
    17/10/2016, 10:10
    Site Reports

    The site report contains the latest news and updates on
    computing at BNL.

  3. Denice Deatrich
    17/10/2016, 10:25
    Site Reports

    Updates on the status of the Canadian Tier-1 and other TRIUMF computing news will be presented.

  4. Shawn Mc Kee (University of Michigan (US))
    17/10/2016, 11:10
    Site Reports

    We will present an update on our site since the Spring 2016 report, covering our changes in software, tools and operations.

    We will also report on our recent significant hardware purchases during summer 2016 and the impact it is having on our site.

    We conclude with a summary of what has worked and what problems we encountered and indicate directions for future work.

  5. Garhan Attebury (University of Nebraska-Lincoln (US))
    17/10/2016, 11:25
    Site Reports

    Updates from T2_US_Nebraska covering our experiences operating CentOS 7 + Docker/SL6 worker nodes, banishing SRM in favor of LVS balanced GridFTP, and some attempts at smashing OpenFlow + GridFTP + ONOS together to live the SDN dream.

  6. Ajit Mohapatra (University of Wisconsin-Madison (US))
    17/10/2016, 11:40
    Site Reports

    As a major WLCG/OSG T2 site, the University of Wisconsin-Madison CMS T2 has consistently delivered highly reliable and productive services for large-scale CMS MC production/processing, data storage, and physics analysis for the last 10 years. The site utilises high-throughput computing (HTCondor), a highly available storage system (Hadoop), scalable distributed software systems (CVMFS),...

  7. Yaodong Cheng (IHEP)
    17/10/2016, 11:55
    Site Reports

    This talk will give a brief introduction to the status of the computing center at IHEP, CAS, including the local cluster, the Grid Tier-2 site for ATLAS and CMS, file and storage systems, cloud infrastructure, the planned HPC system, and Internet and domestic networking.

  8. Tomoaki Nakamura (KEK)
    17/10/2016, 12:10
    Site Reports

    The new KEK Central Computer system started service on September 1st, 2016, after a renewal of all hardware. In this talk, we would like to introduce the performance of the new system and the improved network connectivity with LHCONE.

  9. Rennie Scott (Fermilab)
    17/10/2016, 12:25
    Site Reports

    News and updates from Fermilab.

  10. Tomoe Kishimoto (University of Tokyo (JP))
    17/10/2016, 14:00
    Site Reports

    The Tokyo Tier-2 site, located at the International Center for Elementary Particle Physics (ICEPP)
    at the University of Tokyo, provides resources for the ATLAS experiment in WLCG. In December 2015,
    almost all hardware devices were replaced as the 4th system. Operational experience with the new system
    and a migration plan from CREAM-CE + Torque/Maui to ARC-CE + HTCondor will be reported.

  11. Lucien Philip Boland (University of Melbourne (AU))
    17/10/2016, 14:15
    Site Reports

    We will provide updates on technical and managerial changes at Australia's only HEP grid computing site.

  12. Yemi Adesanya
    17/10/2016, 17:30
    Site Reports

    Update on SLAC Scientific Computing Service

    SLAC’s Scientific Computing Services team provides long-term storage and
    midrange compute capability for multiple science projects across the lab.
    The team is also responsible for core enterprise (non-science) Unix
    infrastructure. A sustainable hardware lifecycle is a key part of the...

  13. Wayne Hendricks (California Institute of Technology (US))
    17/10/2016, 17:45
    Site Reports

    Caltech site report (USCMS Tier 2 site)

  14. Eric Yen (Academia Sinica Grid Computing), Felix.hung-te Lee (Academia Sinica (TW))
    18/10/2016, 09:00
    Site Reports

    Report on facility deployment, recent activities, collaborations and plans.

  15. Jerome Belleman (CERN)
    18/10/2016, 09:15
    Site Reports

    News from CERN since the DESY workshop.

  16. Martin Bly (STFC-RAL)
    18/10/2016, 09:30
    Site Reports

    Latest news of activities at the RAL Tier1.

  17. Paul Kuipers (Nikhef)
    18/10/2016, 09:45
    Site Reports

    Update from Nikhef

  18. Andrea Chierici (INFN-CNAF)
    18/10/2016, 10:00
    Site Reports

    A short update on what's going on at the Italian T1 center.

  19. Erik Mattias Wadenstein (University of Umeå (SE))
    18/10/2016, 10:45
    Site Reports

    News and interesting events from NDGF and NeIC.

  20. Andreas Petzold (KIT - Karlsruhe Institute of Technology (DE))
    18/10/2016, 11:00
    Site Reports

    News about GridKa Tier-1 and other KIT IT projects and infrastructure.

  21. Dr Thomas Roth (GSI Darmstadt)
    18/10/2016, 11:15
    Site Reports

    During the last few months, HPC @ GSI has moved servers and services to the new Green IT Cube data center. This included moving users from the old compute cluster to a new one with a new scheduler, and moving several petabytes of data from the old Lustre cluster to the new one.

  22. Lana Abadie (ITER)
    18/10/2016, 11:30
    Site Reports

    Critical to the success of ITER reaching its scientific goal (Q≥10) is a data system that supports the broad range of diagnostics, data analysis, and computational simulations required for this scientific mission. Such a data system, termed ITERDB in this document, will be the centralized data access point and data archival mechanism for all of ITER’s scientific data. ITERDB will provide a...

  23. Johan Henrik Guldmyr (Helsinki Institute of Physics (FI))
    18/10/2016, 11:45
    Site Reports
    • hardware renewal
    • dCache and OS upgrade
    • Ansible
  24. Sophie Ferry
    18/10/2016, 12:00
    Site Reports
    • Windows 10 migration
    • network: IPv6
    • infrastructure: monitoring
    • new H2020 call EOSF
  25. Mr Domokos Szabo (Wigner Datacenter)
    18/10/2016, 12:15
    Site Reports

    We give an update on the infrastructure, Tier-0 hosting services, Cloud services and other recent developments at the Wigner Datacenter.
