The reason we use computers is their speed, and the reason we use parallel computers is that they're faster than single-processor computers. Yet, after 70 years of electronic digital computing, we still do not have a solid definition of what computer 'speed' means, or even what it means to be 'faster'. Unlike measures in physics, where the definition of speed is rigorous and unequivocal, in computing there is no universally accepted definition of speed. As a result, computer customers have been misguided by dubious information when making purchases, computer designers have optimized their designs for the wrong goals, and computer programmers have chosen methods that optimize the wrong things.
This talk describes why some of the obvious and historical ways of defining 'speed' haven't served us well, and the things we've learned in the struggle to find a definition that works.
Dr. John Gustafson is a Director at Intel Labs in Santa Clara, California. John is well known in High Performance Computing, having introduced the first commercial cluster system in 1985 and having first demonstrated 1000x scalable parallel performance on real applications in 1988, for which he won the inaugural Gordon Bell Award. That demonstration marked a watershed that led to the widespread manufacture and use of highly parallel computers. It also led to a counter-argument to Amdahl's law called Gustafson's law, which some now refer to as "weak scaling". He received the IEEE Computer Society's Golden Core Award in 2007. His decisions in computer design are informed by his experience as a high performance computing user while at Ames Laboratory, Sandia National Laboratories, and the Jet Propulsion Laboratory.
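For readers unfamiliar with the contrast mentioned above, here is a minimal sketch of the two laws in their standard textbook forms (the variable names and the 5% serial-fraction example are illustrative choices, not from the talk). Amdahl's law assumes a fixed problem size, so the serial fraction caps the speedup; Gustafson's law assumes the problem grows with the machine, so the scaled speedup keeps climbing:

```python
def amdahl_speedup(serial_fraction: float, n: int) -> float:
    """Amdahl's law (fixed problem size):
    speedup = 1 / (s + (1 - s)/n), where s is the serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

def gustafson_speedup(serial_fraction: float, n: int) -> float:
    """Gustafson's law (problem size scales with n, i.e. 'weak scaling'):
    scaled speedup = n - s * (n - 1)."""
    return n - serial_fraction * (n - 1)

# Example: 5% serial work on 1024 processors.
# Amdahl predicts a speedup below 20 (capped near 1/0.05),
# while Gustafson predicts a scaled speedup near 973.
print(amdahl_speedup(0.05, 1024))
print(gustafson_speedup(0.05, 1024))
```

The divergence between the two predictions at large processor counts is exactly why "how much faster is this machine?" depends on how the question is framed.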