3rd ASPERA Computing and Astroparticle Physics Workshop, 3-4 May 2012, Hannover, Germany

Albert Einstein Institute (Max Planck Institute for Gravitational Physics) Callinstrasse 38, Hannover 30167, Germany
Description
Astroparticle physics is entering a new era of discovery. It has many aims, including:
-- Using cosmic messengers (high energy photons, cosmic rays, neutrinos and gravitational waves) to understand the formation of cosmological structure and the behavior and evolution of stars, galaxies and black holes.
-- Using large sky surveys and underground dark matter search experiments to understand the material content of the Universe, and in particular the nature of dark matter and dark energy.
-- Using rare decays in underground laboratories to study the properties of the neutrino and the proton lifetime, to better understand matter and its interactions at the shortest scales.

In a few years, astroparticle physics has grown from a field of a few charismatic pioneers transgressing disciplinary frontiers into a global science activity, with large infrastructures and collaborations each involving hundreds of researchers.

The large-scale projects and activities proposed in the ASPERA Roadmap (www.aspera-eu.org) face hard problems of data collection, data storage and data mining. For some, the computing costs will be a significant fraction of the cost of the infrastructure, and the issues of computation, data-mining complexity and public access are extremely challenging.

The Hannover Workshop is the third in an annual series of workshops that directly address these data collection, storage and analysis issues.

The Lyon Workshop [7-8 October 2010] presented these computational challenges and contrasted them with the data storage and analysis models developed in the neighbouring fields of particle physics (grid and cloud computing, large databases) and astrophysics (virtual observatories, public access). It also looked at issues specific to astroparticle physics, including intelligent distributed data gathering and heterogeneous data fusion.

The Barcelona Workshop [30-31 May 2011] reviewed the current computing models developed by upcoming astroparticle observatories, including CTA, KM3NeT, Auger, Virgo/LIGO, and LSST. The availability of environmental data relevant to other fields, outreach, and links with existing centres of particle physics and astrophysics were also discussed. The workshop was organized as a dialogue between Models for Computing and Data Pipelines for astroparticle projects (the "Modellers") and Technologies for Data Processing and Computing (the "Technologists").

THIS MEETING -- The upcoming Hannover Workshop [3-4 May 2012] will focus on hardware and technology. In some cases computing is the bottleneck, so using the best and most appropriate hardware and technology will enable more and better science to be done. Because computing technology is largely driven by non-science market forces, the workshop will also involve some of the relevant market leaders, whose technology roadmaps are of great relevance. The half-day immediately following the Workshop will be used to draft an ASPERA Computing Whitepaper summarizing the conclusions of the three Workshops.

The 3rd ASPERA Computing and Astroparticle Physics Workshop is FREE to attend, but you must register.


Participants
  • Abiodun Olumide Tella
  • Alberto Gennai
  • Aleksander Paravac
  • Alex Nielsen
  • Anastasios Liolios
  • Andreas Stiller
  • Antonella Bozzi
  • Aris Karastergiou
  • Badri Krishnan
  • Bernd Machenschalk
  • Bernd Panzer-Steindel
  • Bijan Saghai
  • Bruce Allen
  • Carsten Aulbert
  • Christian Gräf
  • David Anderson
  • Denis Bastieri
  • Dominique Boutigny
  • Laurent Douchy
  • Drew Keppel
  • Edmondo Orlotti
  • Emanuel Jacobi
  • Etienne Lyard
  • Francesco Salemi
  • Gergely Debreczeni
  • Gevorg Poghosyan
  • Giovanni Lamanna
  • Heinz-Bernd Eggenstein
  • Herbert Cornelius
  • Holger Pletsch
  • Hyunjoo Kim
  • Hélène Demonfaucon
  • Ino Agrafioti
  • James Bosch
  • Jiri Chudoba
  • Karl Wette
  • Karsten Wiesner
  • Katharina Henjes-Kunst
  • Leif Nordlund
  • Luciano Rezzolla
  • Manuel Delfino Reznicek
  • Arnaud Marsollier
  • Maude Le Jeune
  • Michael Born
  • Miroslav Shaltev
  • Oliver Bock
  • Peter Wegner
  • Pranita Das
  • Rachid Lemrani
  • Roland Walter
  • Sandra Hesping
  • Stavros Katsanevas
  • Thomas Berghöfer
  • Timothy Lanfear
  • Tito Dal Canton
  • Vilmos Nemeth
  • Yifan Yang
    • 1
      Welcome to the AEI, and introduction to ASPERA

      AEI Hannover, Germany

      I'll first give you a lightning overview of the AEI. The Max Planck Institute for Gravitational Physics, also known as the Albert Einstein Institute or AEI, is the world's largest institute devoted to the study of gravitation. The main focus in Hannover is the detection of gravitational waves. These were first predicted by Einstein about a century ago; we hope to make the first direct detections with the LIGO/Virgo and GEO instruments in the coming five years. Searching for weak gravitational-wave signals in detector noise is a large-scale computing and data analysis problem, hence the relevance of this ASPERA workshop. I'll then give you a lightning overview of ASPERA. ASPERA is a network of European national government agencies responsible for coordinating and funding national research efforts in astroparticle physics. ASPERA started in July 2006 within the ERA-NET scheme of the European Commission. After a successful first three-year period, it is now continuing as ASPERA-2 for another three-year programme, working towards a sustainable body for astroparticle physics in Europe. ASPERA-2 is funded by the European Commission through the 7th Framework Programme.
      Speaker: Prof. Bruce Allen
    • 2
      Energy Efficient Computing with GPUs

      NVIDIA, UK

      The past five years have seen the use of graphics processing units (GPUs) for computation grow from being of interest to a handful of early adopters to a mainstream technology used in the world's largest supercomputers. One of the attractions of the GPU architecture is the efficiency with which it can perform computations. Energy efficiency is a key concern in the design of all modern computing systems, from the lowest-power mobile devices to the largest supercomputers; it will be paramount in the push to exascale computing. We discuss Echelon, a DARPA-funded project investigating efficient parallel computer architectures for the exascale era, and how the NVIDIA GPU architecture will evolve over the coming 5-10 years. (A schematic code example of this style of computation follows this entry.)
      Speaker: Dr Timothy Lanfear
      Slides
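
      For readers unfamiliar with the programming model: below is a minimal CUDA sketch of the throughput-oriented style of computation the talk refers to, with one lightweight thread per data element. It is illustrative only; the kernel and all names are ours, not NVIDIA's.

          // saxpy.cu -- illustrative sketch; build with: nvcc saxpy.cu
          #include <cstdio>
          #include <cuda_runtime.h>

          // One thread per element: y[i] = a*x[i] + y[i].
          __global__ void saxpy(int n, float a, const float *x, float *y) {
              int i = blockIdx.x * blockDim.x + threadIdx.x;
              if (i < n) y[i] = a * x[i] + y[i];
          }

          int main() {
              const int n = 1 << 20;
              const size_t bytes = n * sizeof(float);
              float *hx = new float[n], *hy = new float[n];
              for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

              float *dx, *dy;
              cudaMalloc((void **)&dx, bytes);
              cudaMalloc((void **)&dy, bytes);
              cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
              cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

              // Enough 256-thread blocks to cover all n elements.
              saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);

              cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
              printf("y[0] = %f (expect 5.0)\n", hy[0]);

              cudaFree(dx); cudaFree(dy);
              delete[] hx; delete[] hy;
              return 0;
          }

      The energy-efficiency argument is that such kernels spend silicon on thousands of simple arithmetic units rather than on the large caches and control logic of a CPU core.
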
    • 10:30
      Coffee
    • 3
      General-purpose, high-performance and energy-efficient x86-based computing with Many-Core Technologies (TFLOPS on a chip)

      Intel, Germany

      With Moore's Law alive and well, more and more parallelism is being introduced into all computing platforms, at all levels of integration and programming, to achieve higher performance and energy efficiency. We will discuss the new Intel® Many Integrated Core (MIC) architecture for highly parallel workloads, offering general-purpose, energy-efficient TFLOPS performance on a single chip. We will also discuss the journey to exascale, including technology trends for high performance, and look at some of Intel's R&D areas for HPC.
      Speaker: Herbert Cornelius
      Slides
    • 4
      AMD accelerator technologies: CPU, GPU and APU architectures to enable scientific computing in the future. Making accelerators accessible through the Heterogeneous System Architecture and open standards.

      AMD, Sweden

      AMD is developing processor technology in three formats: CPU, GPU and, more recently, APU. Across these areas, AMD is leading a drive toward the Heterogeneous System Architecture (HSA), an open platform standard that uses CPU and GPU processor cores as a unified processing engine. This architecture offers many benefits for HPC, including greatly enhanced application performance and significantly lower power consumption.
      Speaker: Leif Nordlund
      Slides
    • 12:30
      Lunch
    • 5
      The ASPERA process
      Speaker: Prof. Stavros Katsanevas (CNRS/IN2P3)
      Slides
    • 6
      Large Synoptic Survey Telescope (LSST): New Algorithms and Architectures

      Department of Astrophysical Sciences, Princeton University, USA

      When it enters operation, the Large Synoptic Survey Telescope will produce 15 TB of image data each night, more than any other optical survey. In many respects, applying existing algorithms at this scale is a significant technical challenge in its own right. However, the improved statistical errors and the fact that LSST is "deep, wide, and fast" will demand algorithms that are qualitatively different from those sufficient for surveys that are smaller in any one of these dimensions. In many cases, the computational demands of these more complex algorithms are considerably greater. In this talk, I will touch on several of these computational problems and the LSST collaboration's plans to address them, with a focus on difference imaging, image coaddition, and galaxy shape measurement for weak lensing. (A schematic coaddition kernel follows this entry.) A particular challenge for LSST is that the state of the art in both algorithm development and hardware architecture may change significantly before the survey begins, and our approach must be flexible enough to take advantage of both.
      Speaker: Dr Jim Bosch
      Slides
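
      To make the coaddition problem concrete, here is a minimal CUDA sketch of inverse-variance weighted image coaddition, one GPU thread per output pixel. It is our illustration, not LSST code, and it assumes the input exposures are already registered on a common pixel grid; real LSST coaddition must also handle warping, PSF matching and outlier rejection.

          // coadd.cu -- illustrative sketch of weighted image coaddition.
          __global__ void coadd(int npix, int nexp,
                                const float *img,  // nexp x npix pixel values
                                const float *var,  // per-pixel variances, same layout
                                float *out) {      // npix coadded pixels
              int i = blockIdx.x * blockDim.x + threadIdx.x;
              if (i >= npix) return;
              float num = 0.0f, den = 0.0f;
              for (int e = 0; e < nexp; ++e) {
                  float w = 1.0f / var[e * npix + i];  // inverse-variance weight
                  num += w * img[e * npix + i];
                  den += w;
              }
              // Minimum-variance estimate of the sky value at pixel i.
              out[i] = num / den;
          }
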
    • 7
      Computing in the context of the Cherenkov Telescope Array (CTA)

      IN2P3/CNRS, France

      The Cherenkov Telescope Array (CTA) – an array of tens of Cherenkov telescopes deployed on an unprecedented scale – will allow the European scientific community to remain at the forefront of research in the field of very-high-energy gamma-ray astronomy. One of the challenges in designing the CTA observatory is to handle the large amounts of data generated by the instrument and to provide simple, efficient user access at every level, in accordance with astrophysical standards, in order to serve the data and the analysis software to the physics community. The high data rate of CTA, together with the large computing power required for Monte Carlo simulations (a fundamental tool for data selection and calibration), demands dedicated computing resources, which can be handled well through a distributed computing infrastructure (DCI) approach. Preliminary work and ideas on the organization of a coherent data management system for CTA will be presented.
      Speaker: Dr Giovanni Lamanna (LAPP)
      Slides
    • 15:30
      Coffee
    • 8
      GPUs in Fermi satellite data analysis

      INFN/University of Padova, Italy

      The standard analysis of the Fermi LAT collaboration could be sped up by two orders of magnitude by porting the most time-consuming Science Tools to a GPU architecture. Using an NVIDIA S2050, with its Fermi architecture, we were able to accelerate the computation of the satellite "livetime cube", reducing the execution time from 70 minutes (CPU) to 30 seconds (GPU). (A schematic sketch of such a computation follows this entry.) Other analysis tools could benefit from GPUs, in particular the likelihood analysis and the upper-limit computation. In this talk, we will present Uriel, the Ultrafast Robotic Interface for Extended Likelihood, and many different applications where GPUs can have an impact in gamma-ray astrophysics.
      Speaker: Dr Denis Bastieri (Università di Padova)
      Slides
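
      A minimal CUDA sketch of the kind of "livetime cube" accumulation the abstract describes: for each sky pixel, histogram the observing time as a function of the angle between that pixel and the spacecraft z-axis. This is our illustration of the idea, not the Fermi Science Tools code; the data layout and names are assumptions.

          // livetime.cu -- illustrative sketch of a livetime-cube kernel.
          struct Pointing { float zx, zy, zz, livetime; };  // z-axis + time step

          __global__ void livetimeCube(int npix, int nstep, int nbins,
                                       const float *px, const float *py,
                                       const float *pz,      // unit vectors to sky pixels
                                       const Pointing *att,  // spacecraft attitude history
                                       float *cube) {        // npix x nbins output
              int i = blockIdx.x * blockDim.x + threadIdx.x;
              if (i >= npix) return;
              for (int t = 0; t < nstep; ++t) {
                  // Cosine of the pixel's inclination to the spacecraft z-axis.
                  float c = px[i]*att[t].zx + py[i]*att[t].zy + pz[i]*att[t].zz;
                  int b = (int)((c + 1.0f) * 0.5f * nbins);  // bin cos(theta) in [-1,1]
                  if (b < 0) b = 0;
                  if (b >= nbins) b = nbins - 1;
                  // One thread owns one pixel, so no atomics are needed.
                  cube[i * nbins + b] += att[t].livetime;
              }
          }

      The loop over the attitude history is identical for every pixel, which is exactly the regular, data-parallel structure on which GPUs deliver speedups like the one quoted above.
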
    • 9
      GPUs in gravitational wave data analysis

      AEI Hannover, Germany

      Searches for gravitational-wave signals from inspiralling black hole or neutron star binaries push the limits of currently available computing resources with conventional CPU-based computer clusters. Previous efforts have used the advantages of GPU hardware to accelerate computationally intensive portions of the searches by porting those computations to run on GPUs. Additional computational savings could be obtained through further code optimization and novel analysis techniques, which will of course be affected by the technologies available in the coming years. In this presentation, I will summarize the LIGO Scientific Collaboration's and the Virgo Collaboration's efforts to accelerate inspiral searches using GPUs (the core operation is sketched after this entry) and will discuss how these efforts will be focused in the coming years.
      Speaker: Drew Keppel
      Slides
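
      The core operation of these searches is a matched filter, which maps naturally onto GPUs. Below is a minimal CUDA/cuFFT sketch of the frequency-domain step: the signal-to-noise time series is the inverse FFT of the data times the conjugated template, weighted by the noise power spectral density. This is our illustration under simplified conventions (normalizations omitted), not LSC/Virgo pipeline code.

          // matched_filter.cu -- illustrative sketch; link with -lcufft.
          #include <cufft.h>

          // out[f] = data[f] * conj(tmpl[f]) / psd[f], one thread per frequency bin.
          __global__ void weightedCorrelate(int n, const cufftComplex *data,
                                            const cufftComplex *tmpl,
                                            const float *psd, cufftComplex *out) {
              int i = blockIdx.x * blockDim.x + threadIdx.x;
              if (i >= n) return;
              float re = data[i].x * tmpl[i].x + data[i].y * tmpl[i].y;
              float im = data[i].y * tmpl[i].x - data[i].x * tmpl[i].y;
              out[i].x = re / psd[i];
              out[i].y = im / psd[i];
          }

          // Host side (error checking omitted): one inverse FFT yields the
          // whole SNR time series for this template.
          void snrTimeSeries(int n, const cufftComplex *d_data,
                             const cufftComplex *d_tmpl,
                             const float *d_psd, cufftComplex *d_snr) {
              weightedCorrelate<<<(n + 255) / 256, 256>>>(n, d_data, d_tmpl,
                                                          d_psd, d_snr);
              cufftHandle plan;
              cufftPlan1d(&plan, n, CUFFT_C2C, 1);
              cufftExecC2C(plan, d_snr, d_snr, CUFFT_INVERSE);  // in place
              cufftDestroy(plan);
          }
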
    • 10
      GPUs in real-time discovery of millisecond radio transients with LOFAR

      University of Oxford, UK

      I will present a project that uses GPU technology with the next-generation LOFAR radio telescope to search for bright millisecond bursts of radio emission from astrophysical sources. GPUs provide the computing power necessary to remove, in real time, the effects of propagation of the radio emission through the ionised interstellar medium. I will present details of the specific problem, our current approach to optimisation of the relevant GPU code, and why GPUs are currently the most appropriate solution compared to other multicore technologies (the underlying dedispersion algorithm is sketched after this entry). Finally, I will describe how our current work fits in the context of the Square Kilometre Array and its pathfinders for the science of astrophysical radio transients.
      Speaker: Dr Aris Karastergiou
      Slides
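
      For readers unfamiliar with the technique: incoherent dedispersion sums the frequency channels of the telescope's filterbank data after undoing the dispersive delay dt(f) = 4.15e3 s * DM * (f^-2 - f_hi^-2), with f in MHz, for each trial dispersion measure (DM). The brute-force CUDA kernel below is our sketch of the idea, not the Oxford code; optimized implementations reuse partial sums and shared memory.

          // dedisperse.cu -- illustrative brute-force dedispersion kernel.
          // Launch with grid = dim3((nsamp + 255) / 256, ndm), block = 256.
          __global__ void dedisperse(int nchan, int nsamp, int ndm,
                                     float f_hi, float df,   // top frequency (MHz), channel width
                                     float tsamp,            // sampling time (s)
                                     const float *dmTrials,  // ndm trial DM values
                                     const float *fb,        // filterbank: nchan x nsamp
                                     float *out) {           // ndm x nsamp dedispersed series
              int t = blockIdx.x * blockDim.x + threadIdx.x;  // output time sample
              int d = blockIdx.y;                             // trial DM index
              if (t >= nsamp || d >= ndm) return;
              float dm = dmTrials[d], sum = 0.0f;
              for (int c = 0; c < nchan; ++c) {
                  float f = f_hi - c * df;  // centre frequency of channel c
                  float delay = 4.15e3f * dm * (1.0f/(f*f) - 1.0f/(f_hi*f_hi));
                  int shift = (int)(delay / tsamp + 0.5f);  // delay in samples
                  if (t + shift < nsamp) sum += fb[c * nsamp + t + shift];
              }
              // A bright burst shows up as a large sum at its true DM.
              out[d * nsamp + t] = sum;
          }
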
    • 18:30
      Aperitif and hors d'oeuvres

      Reception area, AEI

      Hannover, Germany

    • 18:30
      Tours of the AEI: 1. the ATLAS computer cluster; 2. the 10-metre gravitational wave detector prototype; 3. the LISA Pathfinder optical labs

      AEI, Hannover, Germany

    • 19:00
      Outdoor barbecue

      Tent just outside the AEI

      Hannover, Germany

    • 11
      Frontiers of Volunteer Computing

      UC Berkeley, USA

      Ten years from now, as today, the majority of the world's computing and storage resources will reside not in machine rooms but in the hands of consumers. Through volunteer computing, many of these resources can be made available to science. The first PetaFLOPS computation was done using volunteered computers, and the same is likely to be true for the ExaFLOPS milestone. Volunteer computing has existed for a decade and is being used to do breakthrough science in areas ranging from molecular biology to radio astronomy; however, it is still an emerging technology, with potential applications in many new areas, including those involving the storage and processing of large data. The landscape of volunteer computing is shaped by many factors. Some of these involve hardware technology: mobile devices, graphics processing units (GPUs), wired and wireless communication networks, memory, and storage. I will discuss trends in these areas. Other factors involve software: technologies like virtualization are making it easier for scientists to use volunteer computing, while the rise of proprietary software environments and vendor-controlled application markets is making it more difficult. Finally, I will discuss the organizational, economic, and marketing issues that must be addressed for volunteer computing to achieve its potential. (A skeleton volunteer application is sketched after this entry.)
      Speaker: Prof. David Anderson (UC Berkeley)
      Slides
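
      A minimal sketch of what a volunteer-computing application looks like, using the BOINC client API (boinc_api.h). The overall structure is standard, but the file names and the work loop are our assumptions; consult the BOINC documentation for the authoritative interface.

          // volunteer_app.cpp -- illustrative BOINC-style application skeleton.
          #include <cstdio>
          #include "boinc_api.h"

          int main() {
              boinc_init();  // attach this process to the local BOINC client

              // Logical file names are mapped to physical paths by the client.
              char path[512];
              boinc_resolve_filename("input.dat", path, sizeof(path));
              FILE *in = fopen(path, "r");  // hypothetical input file

              const long nsteps = 1000000;
              for (long i = 0; i < nsteps; ++i) {
                  // ... one unit of scientific work (e.g. one GPU kernel launch) ...
                  if (i % 10000 == 0)
                      boinc_fraction_done((double)i / nsteps);  // progress shown to the volunteer
              }

              if (in) fclose(in);
              boinc_finish(0);  // report success; does not return
              return 0;
          }
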
    • 12
      Technology and Market Trends in Computing

      CERN, Switzerland

      For the past 15 years, the CERN IT department has carried out regular (every ~2-3 years) technology and market evaluations, which are used as input for computer-centre architecture and cost/budget planning. The talk will give an overview of the various market and technology developments in the areas of data processing and data storage, covering processors, memory, HDDs, SSDs and some future technologies. Cost and technology trends for the next 3-5 years will be discussed.
      Speaker: Dr Bernd Panzer-Steindel
      Slides
    • 10:30
      Coffee
    • 13
      Computing Challenges at the Pierre Auger Observatory

      Institute of Physics, Academy of Sciences of the Czech Republic, Czech Republic

      The Pierre Auger Observatory requires substantial computing resources to simulate cosmic-ray showers with ultra-high energies, up to 10^21 eV. In the current EGI grid environment, we are able to use several thousand cores simultaneously and generate more than 1 TB of data daily. We are limited by the available resources and by the long duration of individual jobs at the highest energies, even though the simulation is already simplified by the thinning parameter of the CORSIKA simulation program (the thinning idea is sketched after this entry). Thinning, however, loses details of the time traces that would be useful for mass-composition analyses and hadronic-interaction physics. Simulations without thinning would need a thousand times more computing power and correspondingly more storage. A significant speedup could be obtained by using many CPUs, or even GPUs, to generate a single shower. We discuss current trends in grid middleware towards provisioning a whole many-core worker node to a single parallel job. The expected developments in CORSIKA and Geant4 towards parallelization and GPU usage are needed for efficient use of this new infrastructure. Possibilities for computing in clouds are also discussed.
      Speaker: Jiri Chudoba (Acad. of Sciences of the Czech Rep. (CZ))
      Slides
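
      For context, the thinning approximation mentioned above works roughly as follows: once secondary particles fall below a threshold energy, only one of them is tracked further, chosen with probability proportional to its energy and carrying a compensating statistical weight. The sketch below is our illustration of that idea for two secondaries, not CORSIKA code.

          // thinning.cpp -- illustrative sketch of energy-proportional thinning.
          #include <cstdlib>

          struct Particle { double energy, weight; };

          // Keep exactly one of two below-threshold secondaries.
          Particle thin(const Particle &a, const Particle &b) {
              double pKeepA = a.energy / (a.energy + b.energy);
              if (drand48() < pKeepA) {
                  Particle kept = a;
                  kept.weight = a.weight / pKeepA;  // weight preserves expectation values
                  return kept;
              }
              Particle kept = b;
              kept.weight = b.weight / (1.0 - pKeepA);
              return kept;
          }

      The weights keep averaged observables unbiased, but the detailed time traces are lost, which is why unthinned simulations, at roughly a thousand times the cost, are wanted.
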
    • 14
      White Paper plan and writing assignments

      IN2P3/CNRS, France

      Speaker: Prof. Stavros Katsanevas (CNRS/IN2P3)
      Slides
    • 13:00
      Lunch
    • Room 1 available for small discussions
    • Room 2 available for small discussions
    • Room 3 available for small discussions
    • Room 4 available for small discussions