Dr Giovanni Lamanna (LAPP)
03/05/2012, 14:45
The Cherenkov Telescope Array (CTA) – an array of tens of Cherenkov telescopes deployed on an unprecedented scale – will allow the European scientific community to remain at the forefront of research in the field of very high energy gamma-ray astronomy.
One of the challenges in designing the CTA observatory is handling the large amounts of data generated by the instrument and providing...
Dr Denis Bastieri (Università di Padova)
03/05/2012, 16:00
The standard analysis of the Fermi LAT collaboration can be sped up by two orders of magnitude by porting the most time-consuming Science Tools to a GPU architecture. Using an NVIDIA S2050, with its Fermi architecture, we were able to accelerate the computation of the satellite "livetime cube", reducing the execution time from 70 minutes (CPU) to 30 seconds (GPU). Other analysis tools could...
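The livetime-cube computation described above is naturally data-parallel, which is what makes it a good GPU candidate: the observing time accumulated in each (sky-direction, incidence-angle) bin can be computed independently. The toy Python sketch below illustrates that binning structure only; the function name, bin counts, and binning scheme are illustrative assumptions, not the actual Fermi Science Tools implementation.

```python
N_CTHETA = 40  # illustrative number of cos(theta) bins

def livetime_cube(pointings, pixels, dt):
    """Toy livetime-cube accumulation.

    pointings : list of (x, y, z) unit vectors, instrument z-axis per time step
    pixels    : list of (x, y, z) unit vectors, sky-pixel centres
    dt        : livetime per time step, seconds
    Returns cube[pixel][ctheta_bin] = accumulated seconds.
    """
    cube = [[0.0] * N_CTHETA for _ in pixels]
    for pz in pointings:
        # Each iteration of this inner loop is independent of the others:
        # this is the loop a GPU port can map onto one thread per pixel.
        for i, pd in enumerate(pixels):
            c = sum(a * b for a, b in zip(pz, pd))          # cos(theta)
            b = min(N_CTHETA - 1, max(0, int((c + 1) / 2 * N_CTHETA)))
            cube[i][b] += dt
    return cube

# Example: two pointings, two sky pixels, 1 s of livetime per step.
# Every step deposits dt into exactly one bin per pixel, so the cube
# total is steps * pixels * dt = 4.0 s here.
cube = livetime_cube([(0, 0, 1), (0, 1, 0)], [(0, 0, 1), (1, 0, 0)], 1.0)
```

Because each pixel's accumulation touches only its own row of the cube, the GPU version needs no synchronisation between pixels, which is the kind of structure behind the reported 70-minute-to-30-second speedup.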
Prof. David Anderson (UC Berkeley)
04/05/2012, 09:00
Ten years from now, as today, the majority of the world's computing and storage resources will reside not in machine rooms but in the hands of consumers. Through volunteer computing, much of this capacity will be available to science. The first PetaFLOPS computation was done using volunteered computers, and the same is likely to be true for the ExaFLOPS milestone. Volunteer computing has...
Jiri Chudoba (Acad. of Sciences of the Czech Rep. (CZ))
04/05/2012, 11:00
The Pierre Auger Observatory needs substantial computing resources for the simulation of cosmic-ray showers with ultra-high energies of up to 10^21 eV. In the current EGI grid environment we are able to use several thousand cores simultaneously and generate more than 1 TB of data daily. We are limited by the available resources and by the long duration of a single job at very high energies, which is already...
Prof. Stavros Katsanevas (CNRS/IN2P3)
04/05/2012, 11:45