Speaker
Tim Mattson
(Intel-Retired)
Description
We continue with OpenMP by exploring (1) how data is managed in shared-memory systems and (2) task-level parallelism. We’ll finish our journey into parallel programming with a high-level discussion of how the concepts we’ve learned with OpenMP map onto GPU programming and onto programming distributed-memory systems with MPI. The goal is a solid working knowledge of OpenMP and a high-level understanding of parallel programming beyond shared-memory, CPU-based systems.