Performance productivity challenges and research for the future of computing
by Victor Lee (Intel Corp.)
at CERN (31-3-004 – IT Amphitheatre)
Over the past few decades, compute performance has improved steadily thanks to advancements in semiconductor manufacturing technology, circuit design and computer architecture. We expect this technology treadmill to continue delivering innovations that will take us to exascale computing within this decade. However, many of the technology advances that brought us these performance improvements have also increased the complexity programmers must deal with. For example, switching from single-core to many-core processor designs circumvents the power density challenge and allows processor performance to continue to scale; yet this switch also requires developers to exploit parallelism in order to harness the available compute capability. Similar trends in other technology areas will make performance productivity one of the grand challenges in the future of computing. This talk will discuss technology trends for future computing and how these trends have created performance productivity challenges for developers. It will also survey the research that academia and industry have begun in order to overcome these challenges.
About the speaker
Victor Lee received a B.S. in Electrical Engineering from the University of Washington in 1994, and an M.S. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 1996. He joined Intel Corp. in 1997, where he worked on the Intel Pentium Pro, the Intel Pentium 4 processor, the Intel Itanium processor and the QPI interconnect architecture. He is also a key contributor to the Intel Many Integrated Core architecture and a member of the Intel Xeon Phi coprocessor design team. Victor is an IEEE senior member, and a principal engineer and research scientist in Intel’s Parallel Computing Lab. His research interests include computer architecture, parallel algorithms and applications, and auto-tuning.
Organised by Andrzej Nowak and Miguel Angel Marquina
Computing Seminars / IT Department