24–28 Apr 2017
Hungarian Academy of Sciences
Europe/Budapest timezone

Contribution List

67 contributions
  1. 24/04/2017, 09:00
  2. Balazs Bago (Hungarian Academy of Sciences (HU)), Szilvia Racz (Wigner Datacenter), Mr Gabor Peto (Wigner Datacenter)
    24/04/2017, 09:10
  3. Jerome Belleman (CERN)
    24/04/2017, 09:30
    Site Reports

    News from CERN since the LBNL workshop.

  4. Vladimir Sapunenko (INFN-CNAF (IT))
    24/04/2017, 09:45
    Site Reports

    An update on recent INFN-T1 activities

  5. Andreas Haupt (Deutsches Elektronen-Synchrotron (DE))
    24/04/2017, 10:00
    Site Reports

    News from the lab

  6. Michel Jouvin (Universite de Paris-Sud 11 (FR))
    24/04/2017, 10:15
    Site Reports

    Changes at LAL and GRIF grid site.

  7. Martin Bly (STFC-RAL)
    24/04/2017, 10:30
    Site Reports

    Update from the RAL Tier1

  8. Dino Conciatore (Eidgenoessische Technische Hochschule Zuerich (CH))
    24/04/2017, 11:15
    Site Reports

    Site report, news and ongoing activities at the Swiss National Supercomputing Centre T2 site (CSCS-LCG2) running ATLAS, CMS and LHCb.

  9. Johan Henrik Guldmyr (Helsinki Institute of Physics (FI))
    24/04/2017, 11:30
    Site Reports
    • More hardware issues with HPE SL4510 Gen9
    • Parsing HP ADU Reports
    • dCache upgrade
    • IPv6
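The "Parsing HP ADU Reports" item above refers to the text reports produced by HP's Array Diagnostics Utility. As an illustration only (the report layout and the helper below are hypothetical sketches, not material from the talk), a minimal Python pass that pulls non-OK physical drives out of such a report might look like:

```python
import re

def find_failed_drives(adu_text):
    """Scan the text of an HP ADU-style report for physical drives
    whose Status field is anything other than OK. The exact layout
    varies between ADU versions; this assumes 'Physical Drive ...'
    headings followed by an indented 'Status' field."""
    failed = []
    current = None
    for line in adu_text.splitlines():
        m = re.match(r"\s*Physical Drive\s+(.*)", line)
        if m:
            current = m.group(1).strip()
            continue
        m = re.match(r"\s*Status\s+(\S+)", line)
        if m and current and m.group(1) != "OK":
            failed.append((current, m.group(1)))
            current = None
    return failed

sample = """\
Physical Drive 1I:1:3
   Status                     OK
Physical Drive 1I:1:4
   Status                     Failed
"""
print(find_failed_drives(sample))  # [('1I:1:4', 'Failed')]
```

In practice such a scan would feed a monitoring system rather than print to stdout.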
  10. Alf Wachsmann (Max Delbrück Center for Molecular Medicine (MDC))
    24/04/2017, 11:45
    Site Reports

    I will give a short overview of our institute and its IT capabilities.

  11. Erik Mattias Wadenstein (University of Umeå (SE))
    24/04/2017, 12:00
    Site Reports

    Report on new developments and insights from NDGF. The report will focus on half a year of experience with HA dCache and how this works for us in practice.

  12. Ofer Rind
    24/04/2017, 12:15
    Site Reports

    An overview of BNL's RHIC/ATLAS Computing Facility, highlighting significant developments since the last HEPiX meeting at LBNL.

  13. Shawn Mc Kee (University of Michigan (US))
    24/04/2017, 12:30
    Site Reports

    We will present an update on our site since the Fall 2016 report, covering our changes in software, tools and operations.

    Some of the details to cover include changes and updates to our networking, storage and deployed middleware.

    We conclude with a summary of what has worked and what problems we encountered and indicate directions for future work.

  14. Brian Paul Bockelman (University of Nebraska-Lincoln (US))
    24/04/2017, 12:45
    Site Reports

    In the last year, the Nebraska site has worked hard to reinvent the services offered to its user communities. The high-throughput-computing resources have successfully transitioned to Docker, offering more flexibility in terms of OS environments. We have upgraded and improved our CVMFS infrastructure, allowing local users to heavily utilize it for data distribution. Finally, we have adopted...

  15. Ulrich Schwickerath (CERN)
    24/04/2017, 14:30
    End-User IT Services & Operating Systems

    An update on CERN Linux support distributions and services.

    An update on the CentOS community and CERN's involvement will be given. We will discuss the Software Collections, Virtualization and OpenStack SIGs and how we use them.

    We will present our new Puppet-based configuration tool and its future.

    A brief status report on the alternative-architecture work (aarch64, ppc64le, etc.) done by the community will be given.

  16. Michel Jouvin (Universite de Paris-Sud 11 (FR))
    24/04/2017, 14:55

    The initiative to create a journal on Software and Computing for Big Science was presented one year ago, at HEPiX Berlin. The journal has now been launched. This talk will recall the goals of the journal and explain how to contribute.

  17. James Adams (STFC RAL)
    24/04/2017, 15:20
    Security & Networking

    After many months of work, the WLCG Tier 1 centre at RAL has begun to deploy IPv6 addresses to production hosts. This talk will detail the work that has been done and explain the strategy adopted for managing addresses in a dual-stack environment.

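One common strategy for address management in a dual-stack environment (offered here purely as an illustrative sketch, not necessarily the scheme adopted at RAL) is to derive each host's IPv6 address mechanically from its existing IPv4 address, so that A and AAAA records stay in lock-step:

```python
import ipaddress

def v4_to_dualstack_v6(v4_addr, v6_prefix):
    """Illustrative scheme: embed the 32-bit IPv4 address in the
    low bits of the host's IPv6 address inside the site's /64, so
    the AAAA record can be generated mechanically from the A record.
    The prefix below is a documentation prefix, not a real site's."""
    v4 = ipaddress.IPv4Address(v4_addr)
    net = ipaddress.IPv6Network(v6_prefix)
    return net[int(v4)]

print(v4_to_dualstack_v6("130.246.1.2", "2001:db8:100::/64"))
# -> 2001:db8:100::82f6:102
```

The appeal of such a scheme is that the dual-stack mapping is stateless: no second address database needs to be kept consistent with the IPv4 one.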
  18. Jerome Belleman (CERN)
    24/04/2017, 16:15
    Basic IT Services

    During the first quarter of 2017 CERN IT migrated from a Puppet 3-based service to a Puppet 4 one. We highlight the steps we took, the methods we used and the problems we discovered along the way.

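For readers unfamiliar with what such a migration involves, one well-known class of breaking change between Puppet 3 and Puppet 4 is the move from string-only facts to a typed language with structured facts. The snippet below is a generic illustration (the class name is invented), not material from the CERN talk:

```puppet
# Puppet 3 style: $::processorcount is a String, so the comparison
# is lexicographic ("16" > "4" is false!):
if $::processorcount > '4' { include profiles::many_workers }

# Puppet 4 style: structured facts carry real Integer values,
# so the comparison is numeric:
if $facts['processors']['count'] > 4 {
  include profiles::many_workers
}
```

Hunting down constructs like the first form across a large manifest base is typical of the migration work described in talks such as this one.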
  19. Owen Synge
    24/04/2017, 16:40
    Basic IT Services

    SaltStack is a newer configuration management tool, first developed for remote execution. This talk will cover my experiences with Salt in two organizations, in two different roles:

    • Cleaning up an organization's use of Salt.
    • Writing Ceph execution modules in Python.
  20. Arnab Sinha (CEA/IRFU)
    24/04/2017, 17:05
    Site Reports

    We will present an update on the changes at our site since our 2016 report, sharing the advances, roadblocks and achievements in different areas (Unix, grid, projects, etc.) at our facility.
    We conclude with a summary and our goals.

  21. Tomoaki Nakamura (High Energy Accelerator Research Organization (JP))
    25/04/2017, 09:00
    Site Reports

    The KEK central computer system was upgraded in September 2016. In this talk, we report on our experience operating the hierarchical storage system and the Grid system, including their status and usage since the upgrade.

  22. Tomoe Kishimoto (University of Tokyo (JP))
    25/04/2017, 09:15
    Site Reports

    The Tokyo Tier-2 site, located at the International Center for Elementary Particle Physics (ICEPP) at the University of Tokyo, provides computing resources for the ATLAS experiment in the WLCG.
    Updates on the site since the Fall 2016 meeting will be reported, including the status of the batch system migration and the implementation of redundancy in the storage element database.

  23. Jeongheon Kim (Korea Institute of Science and Technology Information)
    25/04/2017, 09:30
    Site Reports

    We will present the latest status of the GSDC, as well as the migration plan for its administrative system.

  24. Jingyan Shi (IHEP)
    25/04/2017, 09:45
    Site Reports

    This report covers the current status of the IHEP site, including the new physics experiments it supports, the migration to an HTCondor cluster, the EOS and Lustre file systems deployed at IHEP, and the network upgrades since October 2016.

  25. Liviu Valsan (CERN)
    25/04/2017, 10:00
    Security & Networking

    This presentation provides an update on the global security landscape since the last HEPiX meeting. It describes the main vectors of compromises in the academic community including lessons learnt, presents interesting recent attacks while providing recommendations on how to best protect ourselves. It also covers security risks management in general, as well as the security aspects of the...

  26. Liviu Valsan (CERN)
    25/04/2017, 11:00
  27. Shawn Mc Kee (University of Michigan (US))
    25/04/2017, 14:30
    Security & Networking

    WLCG relies on the network as a critical part of its infrastructure and therefore needs to guarantee effective network usage and prompt detection and resolution of any network issues, including connection failures, congestion and traffic routing. The OSG Networking Area is a partner of the WLCG effort and is focused on being the primary source of networking information for its partners and...

  28. Mr Joe Metzger (ESnet)
    25/04/2017, 14:55
    Security & Networking

    ESnet staff are in the early stages of planning the next generation of their network, ESnet6. ESnet is providing network services to all of the large US LHC computing centers and this community is the biggest user of the current ESnet5 network. ESnet6 is expected to be online during the LHC Run 3 and Run 4. How the LHC community uses the network has a big impact on the ESnet6 project, and...

  29. Ms Shan Zeng (IHEP)
    25/04/2017, 15:20
    Security & Networking

    To provide a more secure and manageable network at IHEP, we have designed a new network architecture, to be implemented in the middle of this year. This report introduces the architecture, along with the IPv6 tests we have run and the monitoring tools we have deployed under it; the test results will be shown. Moreover, the research of the network security...

  30. Andrea Sciaba (CERN)
    25/04/2017, 16:15
    Security & Networking

    This update from the HEPiX IPv6 Working Group will present activities during the last 6-12 months. In September 2016, the WLCG Management Board approved the group's plan for the support of IPv6-only CPU, together with the linked requirement for the deployment of production Tier 1 dual-stack storage and other services. This talk will remind HEPiX of the requirements for support of IPv6 and the...

  31. Dr Tadashi Murakami (KEK)
    25/04/2017, 16:40
    Security & Networking

    We present an update on KEK computer security since HEPiX Spring 2016. Over the past year, several security incidents occurred at KEK and other Japanese academic sites, forcing us to change our computer security strategy.
    In this presentation, we also report our experiences, practices and future plans for KEK computer security.

  32. Liviu Valsan (CERN)
    25/04/2017, 17:05
    Security & Networking

    The HEP community is facing an ever increasing wave of computer security threats, with more and more recent attacks showing a very high level of complexity. Having a Security Operations Center (SOC) in place is paramount for the early detection and remediation of such threats. Key components and recommendations to build an appropriate monitoring and detection Security Operation Center will be...

  33. Luca Mascetti (CERN), Julien Leduc (CERN)
    26/04/2017, 09:00

    The IT-Storage group at CERN is responsible for the operations and development of the infrastructure that accommodates all storage requirements, from the physics data generated by LHC and non-LHC experiments to users' personal files.

    This presentation will give an overview of the solutions operated by the group, current and future developments, highlighting the group strategy to...

  34. Luca Mascetti (CERN)
    26/04/2017, 09:25

    EOS, the high-performance CERN IT distributed storage system for High-Energy Physics, now provides more than 160 PB of disk and supports several workflows, from data taking and reconstruction to physics analysis. With the next storage delivery the system will grow beyond the 250 PB mark. EOS also provides “sync and share” capabilities to more than 9k users for administrative, scientific and...

  35. Dr Hironori Ito (Brookhaven National Laboratory)
    26/04/2017, 09:50

    Network-attached online storage, aka cloud storage, is a very popular form of storage service provided by many commercial vendors, including Dropbox, Box, Google Drive, MS OneDrive and Amazon Cloud Drive. All have similar capabilities, providing users with quota space and custom applications to transfer data between local sites and cloud storage. In addition, all have well designed...

  36. Andrey Kirianov (Petersburg Nuclear Physics Institut (RU))
    26/04/2017, 10:45

    The rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) has prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and university clusters scattered over a large area aim to unite their resources for future productive work, at the same time providing an opportunity to support large physics...

  37. Catalin Condurache (STFC - Rutherford Appleton Lab. (GB))
    26/04/2017, 11:10

    The CernVM File System (CernVM-FS) was developed to assist WLCG High Energy Physics (HEP) collaborations to deploy software on the worldwide distributed computing infrastructure used to run data processing applications. The technology is now firmly established as the primary method for distributing WLCG experiment software, and its use by other HEP and non-HEP communities has increased...

  38. Tom Byrne (STFC)
    26/04/2017, 11:35

    The new Ceph-based storage system, Echo, is now accepting production data from LHC VOs. This talk gives an update on the work done to reach this milestone. It will also cover other, non-Echo Ceph-related work at RAL.

  39. Guillaume PHILIPPON (CNRS - LAL)
    26/04/2017, 12:00

    Since 2012, eight physics labs in Orsay/Saclay have worked together to provide an efficient and resilient scientific computing infrastructure. After building shared hosting facilities, this year the eight labs' IT teams submitted a project to build a distributed data infrastructure based on Ceph technology, which was funded at the end of 2016. The objective is to deploy on 3 sites, connected with a 100G network, 1...

  40. Dr Jose Flix Molina (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas)
    26/04/2017, 12:25
    Site Reports

    This is the PIC report for HEPiX Spring 2017.

  41. Jordi Casals Hernandez (University of Barcelona (ES))
    26/04/2017, 12:40
    Computing & Batch Services

    We present CosmoHub, a web platform for interactive analysis of massive cosmological data without any SQL knowledge. CosmoHub is built on top of Apache Hive, a component of the Apache Hadoop ecosystem that facilitates reading, writing and managing large datasets.

    CosmoHub is hosted at the Port d'Informació Científica (PIC) and currently provides support to several international...

  42. Jaroslava Schovancova (CERN)
    26/04/2017, 14:30
    Computing & Batch Services

    HammerCloud is a testing service that runs continuous tests or on-demand large-scale stress tests of the WLCG resources with real-life experiment jobs.

    HammerCloud is used in production by the ATLAS and CMS experiments. It has been a useful service for commissioning both compute resources and various components of the complex distributed systems of the LHC experiments, as well as an integral part of...

  43. William Strecker-Kellogg (Brookhaven National Lab)
    26/04/2017, 14:55
    Computing & Batch Services

    Brookhaven Lab recently acquired an Intel Knights Landing (KNL) cluster consisting of 144 nodes connected with a dual-rail OmniPath (OPA) fabric. We will detail our experiences integrating this cluster into our environment, testing its performance and debugging issues relating to the fabric and hardware. Details about the integration with the batch system (Slurm) and performance issues found...

  44. Sandy Philpott
    26/04/2017, 15:20
    Computing & Batch Services

    An update to JLab's Fall 2016 SciPhi-XVI KNL talk, covering the addition of 64 nodes to our Knights Landing cluster, which reached #397 on the Top500 list at 429.5 TFlops and #10 on the Green500 list at 3836.6 MFLOPS/W. It will include an overview of our cluster configuration updates, the Omni-Path fabric, benchmarking, and integration with Lustre and NFS over InfiniBand, as well as current open issues.

  45. Domenico Giordano (CERN)
    26/04/2017, 16:15
    Computing & Batch Services

    An update of the activity of the HEPiX Benchmarking Working Group will be reported.

  46. Jingyan Shi (IHEP)
    26/04/2017, 16:40
    Computing & Batch Services

    The IHEP cluster, with more than 10,000 job slots, was migrated from PBS to HTCondor by the end of 2016. This report describes the sharing-pool scheduling policy deployed on the IHEP cluster to improve resource utilization, and discusses our experience of HTCondor management.

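For context, the standard HTCondor mechanism behind this kind of sharing-pool policy is hierarchical group quotas with surplus sharing. The negotiator configuration sketch below is purely illustrative (group names and quota numbers are invented, not IHEP's actual settings):

```
# Illustrative condor_negotiator configuration: accounting groups
# with quotas, where idle slots can be borrowed across groups.
GROUP_NAMES = group_besiii, group_juno, group_lhaaso

# Static quotas in units of slots (numbers are invented):
GROUP_QUOTA_group_besiii = 6000
GROUP_QUOTA_group_juno   = 3000
GROUP_QUOTA_group_lhaaso = 1000

# Let a group exceed its quota when other groups are idle,
# and let users fall back to the unallocated surplus:
GROUP_ACCEPT_SURPLUS = True
GROUP_AUTOREGROUP    = True
```

With `GROUP_ACCEPT_SURPLUS` enabled, the pool behaves as a shared resource under load while each experiment keeps a guaranteed baseline.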
  47. Brian Paul Bockelman (University of Nebraska-Lincoln (US))
    26/04/2017, 17:05
    Computing & Batch Services

    The multi-user pilot job (MUPJ) model has become deeply embedded in the LHC computing ecosystem. In this model, the pilot job sent to a site's batch system dynamically pulls down one or more user payload jobs while it runs at the site.

    While providing the experiments with utmost flexibility, the MUPJ presents challenges in isolation (preventing payloads from interacting with the pilot) and...

  48. Luca Atzori (CERN)
    26/04/2017, 17:30
    Computing & Batch Services

    The HEPiX Benchmarking Working group has been investigating fast benchmark applications with the objective of identifying candidates that can run quickly enough to avoid wasting compute resources, but still capable of accurately representing HEP workloads. Understanding how the underlying processor microarchitecture affects the results of these benchmarks is important to foresee scenarios...

  49. Alessandro Di Girolamo (CERN)
    27/04/2017, 09:00
    Grid, Cloud & Virtualisation

    This contribution describes the CRIC (Computing Resource Information Catalog) framework, which is designed to describe the topology of the experiments' computing models, providing a unified description of the resources and services used by the experiments' applications.

    The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centers affiliated with several partner projects....

  50. Riccardo Murri
    27/04/2017, 09:25
    Grid, Cloud & Virtualisation

    ElastiCluster is a command-line application (and a Python API) to deploy, configure, and resize various types of computational clusters on Infrastructure-as-a-Service clouds. Currently supported is the deployment of SLURM/GridEngine/TORQUE batch clusters, Spark/Hadoop systems (with Hive and HDFS), and various types of distributed filesystems (GlusterFS, OrangeFS, Ceph) on OpenStack,...

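For readers new to ElastiCluster: clusters are described declaratively in an INI-style configuration file. The fragment below is a minimal, illustrative sketch (the cloud endpoint, key paths, image and node counts are invented) for a SLURM cluster on OpenStack:

```ini
# Illustrative ~/.elasticluster/config fragment (values invented)
[cloud/mycloud]
provider = openstack
auth_url = https://keystone.example.org:5000/v3
# credentials are typically taken from the OS_* environment variables

[login/ubuntu]
image_user = ubuntu
user_key_name = elasticluster
user_key_private = ~/.ssh/id_rsa
user_key_public = ~/.ssh/id_rsa.pub

[setup/slurm]
provider = ansible
frontend_groups = slurm_master
compute_groups = slurm_worker

[cluster/mycluster]
cloud = mycloud
login = ubuntu
setup = slurm
frontend_nodes = 1
compute_nodes = 8
flavor = m1.medium
ssh_to = frontend
```

A cluster described this way is then created with `elasticluster start mycluster` and torn down with `elasticluster stop mycluster`.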
  51. Luis Pigueiras (CERN)
    27/04/2017, 09:50
    Grid, Cloud & Virtualisation

    We'll give an update on the status of our cloud, focusing on recently added features, with special attention to containers.

  52. Ian Collier (STFC - Rutherford Appleton Lab. (GB))
    27/04/2017, 10:15
    Grid, Cloud & Virtualisation

    One obstacle to effective and efficient exploitation of public cloud resources is the work required to accommodate their different APIs. Observing that many public clouds offer varying degrees of support for container orchestration using Kubernetes, we present the results of practical experiments involving several large public cloud providers.
    We also present a brief update on container...

  53. Julien Leduc (CERN)
    27/04/2017, 11:15
    Grid, Cloud & Virtualisation

    The IT Storage group at CERN develops the software responsible for archiving to tape the custodial copy of the physics data generated by the LHC experiments. This software is code-named CTA (the CERN Tape Archive).
    It needs to be seamlessly integrated with EOS, which has become the de facto disk storage system provided by the IT Storage group for physics data.

    CTA and EOS integration...

  54. Xiaomei Zhang (Chinese Academy of Sciences (CN))
    27/04/2017, 11:40
    Grid, Cloud & Virtualisation

    IHEP distributed computing was built on DIRAC in 2012 and started operations in 2014 to meet the peak needs of the BESIII experiment. As more new experiments (JUNO, LHAASO, CEPC, etc.) with challenging data volumes come into operation or are planned at IHEP, the system has been progressively developed into a common platform supporting multiple experiments in one instance. In this platform,...

  55. Andrey Kiryanov (Petersburg Nuclear Physics Institute, National Research Center "Kurchatov Institute"), Andrea Sciaba (CERN)
    27/04/2017, 12:05
    Grid, Cloud & Virtualisation

    The LHC Run 4 phase, also known as HL-LHC, is scheduled to start in mid-2026 and will impose formidable challenges on the capability to process and store data at the planned acquisition rates. A tenfold increase in recorded event rates for ATLAS and CMS and a threefold increase in event pile-up will require an amount of computational power and storage far in excess of...

  56. Mr Joe Fagan (Seagate)
    27/04/2017, 12:30

    This is a whistle-stop tour of some of the new approaches and technologies that enable companies to derive insight from their data, both today and into the future. It compares the progress of SSD and HDD, and maps out how HDD can stay on the areal density curve for the foreseeable future, up to 100 TB per device.

    Technologies touched on will include Dual-Actuator drives, Helium, Two-Dimensional...

  57. Ulrich Schwickerath (CERN)
    27/04/2017, 14:30
    Basic IT Services

    In January 2016 CERN launched a new project with the aim to provide a centralised Elasticsearch service. This presentation will summarise the status of the project, challenges, experiences from the pre-production phase, and methods applied to configure access control.

  58. Mr Stefano Bovina (INFN)
    27/04/2017, 14:55
    Basic IT Services

    Over the past two years, the operations at CNAF, the ICT center of the Italian Institute for Nuclear Physics, have undergone significant changes. The adoption of configuration management tools, such as Puppet, and the constant increase of dynamic and cloud infrastructures have led us to investigate a new monitoring approach. The present work deals with the centralization of the monitoring...

  59. Jaroslava Schovancova (CERN)
    27/04/2017, 15:20
    Basic IT Services

    For over a decade, the CERN IT Data Centres have been using a centralized monitoring infrastructure, collecting data from hardware, services and applications via in-house sensors, metrics and notifications. Meanwhile, the LHC experiments have been relying on dedicated WLCG Dashboards visualizing and reporting the status and progress of job execution, data transfers and site availability...

  60. Cary Whitney (LBNL)
    27/04/2017, 16:15
    Basic IT Services

    We have an ongoing data collection project which is now progressing to its next phase: monitoring. This talk covers a couple of the monitoring paths we have taken, both those that look promising and those that failed.

    Building upon last year, I'll discuss how to create a small data collection and monitoring setup. Instructions will be placed on the HEPiX twiki.

  61. Mr Péter Czanik (Balabit)
    27/04/2017, 16:40
    Basic IT Services

    Event logging is a central source of information for IT. The syslog-ng application collects logs from many different sources, performs real-time log analysis by processing and filtering them, and finally stores the logs or routes them for further analysis.

    In an ideal world, all log messages come in a structured format, ready to be used for log analysis, alerting or dashboards. But in a...

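To make the unstructured-vs-structured point concrete, here is a small illustrative syslog-ng configuration (the port, file path and field names are invented) that receives logs over TCP, parses JSON payloads into name-value pairs and writes a structured template to a file:

```
# Illustrative syslog-ng 3.x snippet (values invented):
source s_net {
    network(ip("0.0.0.0") port(514) transport("tcp"));
};

# Turn a JSON message body into name-value pairs under ".json."
parser p_json {
    json-parser(prefix(".json."));
};

destination d_structured {
    file("/var/log/structured.log"
         template("$ISODATE $HOST ${.json.level} ${.json.msg}\n"));
};

log {
    source(s_net);
    parser(p_json);
    destination(d_structured);
};
```

Messages that do not arrive as JSON are the harder case the abstract alludes to; syslog-ng addresses those with pattern-based parsers rather than json-parser().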
  62. Fabien Wernli (CCIN2P3)
    27/04/2017, 17:05
    Basic IT Services

    We present the log infrastructure at CCIN2P3 and illustrate how syslog-ng plays a central part in it. Following up on Balabit's talk on syslog-ng's features, we present several use-cases which are likely to be of interest to the HEPiX community. For instance, we present real-life examples on how to parse and correlate operating system and batch scheduler events. We present its integration...

  63. Mr Gábor Szentiványi (Wigner Datacenter)
    28/04/2017, 09:00
    IT Facilities & Business Continuity

    A report on the development of the chillers' regulation system: the current status, the target for this phase and possible future plans.

  64. Mr Wayne Salter (CERN)
    28/04/2017, 09:25
    IT Facilities & Business Continuity

    This talk will give the current status of two ongoing Data Centre projects, as well as an account of two recent incidents.

  65. Michel Jouvin (Universite de Paris-Sud 11 (FR))
    28/04/2017, 09:50
    IT Facilities & Business Continuity

    P2IO, a group of laboratories of which LAL is a member, built the first phase of a shared datacenter a few years ago; it has been in production since October 2013. This datacenter was designed to achieve good energy efficiency in the context of scientific computing. An extension of the datacenter is in progress to increase its capacity from 30 to 50 racks. This talk will present the lessons...

  66. Mattieu Puel
    28/04/2017, 10:15
    IT Facilities & Business Continuity

    Hardware maintenance can be time-consuming, depending on your processes and those of your vendors. The goal of this talk is to describe how the end-to-end chain of handling hardware failures, from the initial event to case closing, has been largely automated in our machine rooms. It covers diagnostics, incident tracking, parts dispatching, statistics, processes, tools, bits of SOAP code and people...

  67. Tony Wong (Brookhaven National Laboratory)
    28/04/2017, 11:10
    Miscellaneous