Contribution List

14 contributions
  1. Prof. Bruce Allen
    03/05/2012, 09:30
    I'll first give you a lightning overview of the AEI. The Max-Planck Institute for Gravitational Physics, also known as the Albert Einstein Institute or AEI, is the world's largest institute devoted to the study of gravitation. The main focus in Hannover is the detection of gravitational waves. These were first predicted by Einstein about a century ago; we hope to make the first direct...
  2. Dr Timothy Lanfear
    03/05/2012, 09:45
    The past five years have seen the use of graphics processing units (GPUs) for computation grow from being of interest to a handful of early adopters to a mainstream technology used in the world's largest supercomputers. One of the attractions of the GPU architecture is the efficiency with which it can perform computations. Energy efficiency is a key concern in the design of all modern...
  3. Herbert Cornelius
    03/05/2012, 11:00
    With Moore's Law alive and well, more and more parallelism is being introduced into all computing platforms, at all levels of integration and programming, to achieve higher performance and energy efficiency. We will discuss the new Intel® Many Integrated Core (MIC) architecture for highly parallel workloads, offering general-purpose, energy-efficient TFLOPS performance on a single chip. We will also...
  4. Leif Nordlund
    03/05/2012, 11:45
    AMD is developing processor technology in three formats - CPU, GPU, and more recently, APU. Across these areas, AMD is leading a drive toward Heterogeneous Systems Architecture (HSA), an open platform standard that treats CPU and GPU processor cores as a unified processing engine. This architecture enables many benefits for HPC,...
  5. Prof. Stavros Katsanevas (CNRS/IN2P3)
    03/05/2012, 13:45
  6. Dr Jim Bosch
    03/05/2012, 14:00
    When it enters operation, the Large Synoptic Survey Telescope will produce 15 TB of image data each night, more than any other optical survey. In many respects, applying existing algorithms at this scale is a significant technical challenge on its own. However, the improved statistical errors and the fact that LSST is "deep, wide, and fast" will demand algorithms that are qualitatively...
  7. Dr Giovanni Lamanna (LAPP)
    03/05/2012, 14:45
    The Cherenkov Telescope Array (CTA) – an array of tens of Cherenkov telescopes deployed on an unprecedented scale – will allow the European scientific community to remain at the forefront of research in the field of very high energy gamma-ray astronomy. One of the challenges to design the CTA observatory is to handle the large amounts of data generated by the instrument and to provide...
  8. Dr Denis Bastieri (Università di Padova)
    03/05/2012, 16:00
    The standard analysis of the Fermi LAT collaboration could be sped up by two orders of magnitude by porting the most time-consuming Science Tools to a GPU architecture. Using an NVIDIA S2050, with its Fermi architecture, we were able to accelerate the computation of the satellite "livetime cube", reducing the execution time from 70 minutes (CPU) to 30 seconds (GPU). Other analysis tools could...
  9. Drew Keppel
    03/05/2012, 16:40
    Searches for gravitational-wave signals from inspiraling black hole or neutron star binaries push the limits of currently available computing resources with conventional CPU-based computer clusters. Previous efforts have exploited the advantages of GPU hardware to accelerate computationally intensive portions of the searches by porting those computations to run on the GPUs....
  10. Dr Aris Karastergiou
    03/05/2012, 17:20
    I will present a project that uses GPU technology with the next-generation LOFAR radio telescope to search for bright millisecond bursts of radio emission from astrophysical sources. GPUs provide the computing power necessary to remove, in real time, the effects of propagation of the radio emission through the ionised interstellar medium. I will present details of the specific problem, our...
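    The "removal of propagation effects" described in this abstract is incoherent dedispersion: lower radio frequencies arrive later, delayed in proportion to the dispersion measure (DM) and the inverse square of the frequency, and each frequency channel must be shifted back before summing. As a rough illustration only (this is not code from the project; the function names and pure-Python form are my own, and a real pipeline would run this per trial DM on the GPU), the standard delay formula and channel realignment look like:

```python
# Standard pulsar-astronomy dispersion constant, in MHz^2 pc^-1 cm^3 s.
K_DM = 4.148808e3


def dispersion_delay(dm, f_lo_mhz, f_hi_mhz):
    """Extra arrival delay (seconds) of the lower frequency relative to
    the higher one, for dispersion measure dm (pc cm^-3)."""
    return K_DM * dm * (f_lo_mhz ** -2 - f_hi_mhz ** -2)


def dedisperse(channels, freqs_mhz, dm, dt):
    """Incoherent dedispersion sketch: shift each frequency channel back
    by its dispersion delay (rounded to whole samples of length dt
    seconds) relative to the highest frequency, then sum the channels so
    a dispersed burst lines up again.

    channels: one list of samples per frequency, all the same length.
    """
    n = len(channels[0])
    f_ref = max(freqs_mhz)
    out = [0.0] * n
    for chan, f in zip(channels, freqs_mhz):
        shift = int(round(dispersion_delay(dm, f, f_ref) / dt))
        for i in range(n - shift):
            out[i] += chan[i + shift]
    return out
```

    Because the correct DM of an unknown source is not known in advance, such a search repeats this shift-and-sum over many trial DM values, which is what makes GPU acceleration attractive.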
  11. Prof. David Anderson (UC Berkeley)
    04/05/2012, 09:00
    Ten years from now, as today, the majority of the world's computing and storage resources will reside not in machine rooms but in the hands of consumers. Through volunteer computing, many of these resources will be available to science. The first PetaFLOPS computation was done using volunteered computers, and the same is likely to be true for the ExaFLOPS milestone. Volunteer computing has...
  12. Dr Bernd Panzer-Steindel
    04/05/2012, 09:45
    For the past 15 years, the CERN IT department has carried out regular (every ~2-3 years) technology and market evaluations, which are used as input for computer centre architecture and cost/budget planning activities. The talk will give an overview of the various market and technology developments in the area of data processing and data storage. This will cover processors, memory,...
  13. Jiri Chudoba (Acad. of Sciences of the Czech Rep. (CZ))
    04/05/2012, 11:00
    The Pierre Auger Observatory requires substantial computing resources for the simulation of cosmic-ray showers with ultra-high energies, up to 10^21 eV. In the current EGI grid environment we are able to use several thousand cores simultaneously and generate more than 1 TB of data daily. We are limited by the available resources and by the long duration of a single job at very high energies, which is already...
  14. Prof. Stavros Katsanevas (CNRS/IN2P3)
    04/05/2012, 11:45