Speaker: Stan Williams (HP)
Description
Today's computers are roughly a factor of one billion less efficient at doing their
job than the laws of fundamental physics state that they could be. How much of this
efficiency gain will we actually be able to harvest? What are the biggest obstacles
to achieving many orders of magnitude improvement in our computing hardware, rather
than the roughly factor of two we are used to seeing with each new generation of
chip? Shrinking components to the nanoscale offers both potential advantages and
severe challenges. The transition from classical mechanics to quantum mechanics is
a major issue. Others are the problems of defect and fault tolerance: defects are
manufacturing mistakes or components that irreversibly break over time, while faults
are transient interruptions that occur during operation. Both of these issues become
bigger problems as component sizes shrink and the number of components scales up
massively. In 1955, John von Neumann showed that a completely general approach to
building a reliable machine from unreliable components would require a redundancy
overhead of at least 10,000, which would completely negate any advantages of
building at the nanoscale. We have been examining a variety of defect and fault
tolerant techniques that are specific to particular structures or functions, and are
vastly more efficient for their particular task than the general approach of von
Neumann. Our strategy is to layer these techniques on top of each other to achieve
high system reliability even when individual components are only about 97% reliable,
and with a total redundancy of less than 3. This strategy preserves the advantages of
nanoscale electronics with a relatively modest overhead.
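As a rough, illustrative calculation of why layered, structure-specific tolerance can
be so much cheaper than the general von Neumann construction, the Python sketch below
composes two assumed layers: a defect map that routes around broken crossbar wires
using spares, and a single-error-correcting code for transient faults. The 97%
component reliability is the figure quoted above; the spare count (64 wires needed
out of 80 built), the 72-cell code word for 64 data bits, and the per-access fault
rate are made-up parameters chosen only to show how the redundancy factors multiply
out to well under 3, not figures from the talk.

    # Illustrative sketch only: spare counts, code parameters, and fault rate
    # are assumptions for the example, not the actual numbers from the talk.
    from math import comb

    def binom_tail(n, p, k_min):
        """Probability that at least k_min of n independent components work,
        each working with probability p (binomial tail sum)."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

    P_COMPONENT = 0.97   # per-component reliability quoted in the abstract

    # Layer 1 (defect tolerance, assumed): the circuit needs 64 good wires and is
    # built with 80, so manufacturing defects are routed around via a defect map.
    need, built = 64, 80
    layer1_redundancy = built / need
    layer1_yield = binom_tail(built, P_COMPONENT, need)

    # Layer 2 (fault tolerance, assumed): a single-error-correcting code stores
    # 64 data bits in 72 cells; a word fails only if 2+ cells fault in one access.
    data_bits, coded_bits = 64, 72
    fault_rate = 1e-4    # assumed transient fault probability per cell per access
    layer2_redundancy = coded_bits / data_bits
    layer2_word_ok = binom_tail(coded_bits, 1 - fault_rate, coded_bits - 1)

    total_redundancy = layer1_redundancy * layer2_redundancy
    print(f"defect layer : redundancy {layer1_redundancy:.2f}, yield {layer1_yield:.6f}")
    print(f"fault layer  : redundancy {layer2_redundancy:.2f}, word survives {layer2_word_ok:.9f}")
    print(f"total redundancy {total_redundancy:.2f}x (vs. ~10,000x for the general scheme)")

With these assumed numbers the two layers together cost about 1.4x redundancy while
pushing the yield and per-access survival probability very close to 1, which is the
kind of gap between structure-specific and fully general schemes the talk describes.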