GPGPU meets reality
Wednesday 27 April 2016, 17:00

17:00 - 17:10, Room: 32/1-A24
News

17:10 - 17:30, Room: 32/1-A24
The GPGPU & Many Vector Core Folly ... is there hope for HEP?
Matevz Tadel (Univ. of California San Diego (US))
GPGPUs and Intel MIC processors of the current generation were designed to run efficiently on codes that are very different from almost everything we use in HEP. While there is some good value in our attempts to modernize our code for these architectures, we should understand that we are entering a fight we cannot win outright without some concessions from the chip manufacturers. For short-term work, however, KNL seems to be the most appropriate architecture: we should be able to use it with a not-too-embarrassing degree of efficiency, provided that we continue the work on progressive adaptation of our most time-consuming algorithms.
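A hedged illustration of the "progressive adaptation" the abstract calls for (the types and function below are hypothetical, not from the talk): typical HEP hot loops iterate over arrays of structs, which forces gather/scatter access and defeats the wide vector units on KNL-class hardware. Rewriting the containers as structs of arrays lets the compiler vectorize the loop.

```cpp
#include <vector>
#include <cstddef>

// Array-of-structs: each track's fields are interleaved in memory,
// so a vectorized loop would need gather/scatter loads.
struct TrackAoS { float x, y, z, px, py, pz; };

// Struct-of-arrays: each field is contiguous, so one vector load
// covers 8-16 tracks on AVX-512 (KNL-class) hardware.
struct TracksSoA {
    std::vector<float> x, y, z, px, py, pz;
};

// Propagate all tracks by a step dt; the loop body has no
// cross-iteration dependencies, so the compiler can auto-vectorize.
void propagate(TracksSoA& t, float dt) {
    const std::size_t n = t.x.size();
    #pragma omp simd
    for (std::size_t i = 0; i < n; ++i) {
        t.x[i] += t.px[i] * dt;
        t.y[i] += t.py[i] * dt;
        t.z[i] += t.pz[i] * dt;
    }
}
```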

17:30 - 17:50, Room: 32/1-A24
Status of GPU technology and applications in HEP
Felice Pantaleo (CERN - Universität Hamburg)
The use of accelerators, in particular of Graphics Processing Units (GPUs), in High Performance Scientific Computing has been growing rapidly in recent years. GPUs have brought desktop and laptop computers to the Terascale (i.e. computational power beyond a Teraflop), clusters to the Petascale and, in the foreseeable future, will bring supercomputers to the Exascale. The status of GPU architectures, and an overview of how their programmability is evolving, will be given. A review of, and outlook for, applications of GPU technology in the field of High Energy Physics will be discussed as well.
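As a concrete (if minimal) picture of the programming model the talk surveys, here is a complete CUDA example, not taken from the talk itself: a kernel launched over a grid of threads, each handling one array element, using unified memory to avoid explicit host-device copies.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread scales exactly one element of the array.
__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float* d = nullptr;
    cudaMallocManaged(&d, n * sizeof(float));  // unified (managed) memory
    for (int i = 0; i < n; ++i) d[i] = 1.0f;

    // Launch enough 256-thread blocks to cover all n elements.
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaDeviceSynchronize();

    std::printf("d[0] = %f\n", d[0]);          // expect 2.0
    cudaFree(d);
    return 0;
}
```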

17:50 - 18:10, Room: 32/1-A24
GPU for triggering at Level0 in NA62 experiment
Gianluca Lamanna (Istituto Nazionale Fisica Nucleare Frascati (IT))
General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies and the increase in link and memory throughputs, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming practical. I will discuss the use of online parallel computing on GPUs for a synchronous low-level trigger, focusing on tests performed on the trigger system of the CERN NA62 experiment. GPUs typically show deterministic behaviour in terms of processing latency, but assessing the real-time features of a standard GPGPU system requires a careful characterization of all subsystems. The networking subsystem turns out to be the most critical in terms of latency fluctuations. Our envisioned solution to this issue is NaNet, an FPGA-based PCIe Network Interface Card (NIC) that enables a GPUDirect connection.
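The latency characterization mentioned above can be sketched with CUDA events, which timestamp on the device and therefore exclude host-side scheduler jitter; the kernel here is a hypothetical stand-in for a trigger primitive, not NA62 code.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical trigger-primitive kernel: one thread per detector hit,
// applying a toy threshold cut.
__global__ void triggerPrimitive(const float* hits, int* decisions, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) decisions[i] = hits[i] > 0.5f;
}

int main() {
    const int n = 4096;
    float* hits = nullptr; int* dec = nullptr;
    cudaMalloc(&hits, n * sizeof(float));   // contents irrelevant for timing
    cudaMalloc(&dec, n * sizeof(int));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    triggerPrimitive<<<(n + 255) / 256, 256>>>(hits, dec, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    std::printf("kernel latency: %.3f ms\n", ms);

    cudaEventDestroy(start); cudaEventDestroy(stop);
    cudaFree(hits); cudaFree(dec);
    return 0;
}
```

Repeating the measurement over many launches would expose the latency distribution; in a full system the network receive path would have to be timed the same way.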

18:10 - 18:30, Room: 32/1-A24
LHCb Experience with GPGPUs
Michael David Sokoloff (University of Cincinnati (US))
LHCb is evaluating GPGPU technologies and related issues in an effort to make hardware decisions for Run 3 circa March 2017. A key element is developing a number of demonstrators as "proof-of-principle" projects. Success is deemed necessary, but not sufficient, to move in this direction. Several important questions related to using GPUs also matter for other architectures:
- How do we convert our algorithms to be "stateless"?
- How can the framework manage GPUs and other accelerators?
- How do we write efficient parallel algorithms to take advantage of SIMD and vector processors?
- How do we determine the functional equivalence of algorithms that produce architecture-specific results?
- How do we manage memory usage?
- What level of expertise is required to write and maintain good code?
- How should we evaluate life-cycle hardware and software costs?
In this presentation, I will discuss elements of the Roadmap for an Upgrade Software and Computing TDR produced earlier this year.
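A minimal sketch of what "stateless" means in this context (the interface below is hypothetical, not LHCb's actual framework code): the algorithm keeps only immutable configuration, and all per-event state lives in the arguments and return value, so a scheduler can safely run many events concurrently or hand the work to an accelerator.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical event-data and result types.
struct Event  { std::vector<float> hits; };
struct Tracks { std::vector<int> candidates; };

// A "stateless" algorithm: operator() mutates no data members,
// so concurrent calls on different events are safe by design.
class TrackFinder {
public:
    explicit TrackFinder(float threshold) : threshold_(threshold) {}

    Tracks operator()(const Event& ev) const {
        Tracks out;
        for (std::size_t i = 0; i < ev.hits.size(); ++i)
            if (ev.hits[i] > threshold_)
                out.candidates.push_back(static_cast<int>(i));
        return out;
    }

private:
    const float threshold_;  // immutable configuration, not event state
};
```

Because a call mutates nothing in the object, one instance can be shared across a thread pool, and the same per-event body can be repackaged as a GPU kernel.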