Continuous online sequence learning with an unsupervised neural network model

40/2-A01 (CERN)



There are many things humans find easy to do that computers are currently unable to do. Tasks such as visual pattern recognition, understanding spoken language, recognizing and manipulating objects by touch, and navigating in a complex world are easy for humans. Yet, despite decades of research, we have no viable algorithms for performing these and other cognitive functions on a computer.

In humans, these capabilities are largely performed by the neocortex. Hierarchical Temporal Memory (HTM) is a technology that replicates the structural and algorithmic properties of the neocortex. HTM therefore offers the promise of building machines that approach or exceed human-level performance on many cognitive tasks.

HTMs are unlike traditional programmable computers. With traditional computers, a programmer writes specific programs to solve specific problems: one program may recognize speech, while a completely different one models the weather. HTM, on the other hand, is best thought of as a memory system. HTMs are not programmed and do not execute different algorithms for different problems. Instead, HTMs “learn” how to solve problems. HTMs are trained by exposing them to sensory data, and the capability of an HTM is determined largely by what it has been exposed to.
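To make the learning-by-exposure idea concrete, the sketch below shows a deliberately simplified online sequence learner: a transition-count predictor that updates on every new input in the stream, with no separate training phase. This is not the HTM algorithm itself (HTM uses sparse distributed representations and cortical-column-inspired circuitry); the class and symbol names are illustrative assumptions.

```python
from collections import defaultdict


class OnlineSequenceLearner:
    """Toy stand-in for continuous online sequence learning.

    Counts symbol transitions in a stream and predicts the most
    frequent successor. Illustrates training by exposure only;
    it is NOT the HTM algorithm.
    """

    def __init__(self):
        # transition counts: counts[a][b] = times b followed a
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, symbol):
        # Learn continuously from each new input; no train/test split.
        if self.prev is not None:
            self.counts[self.prev][symbol] += 1
        self.prev = symbol

    def predict(self, symbol):
        # Predict the successor seen most often after `symbol`.
        successors = self.counts.get(symbol)
        if not successors:
            return None
        return max(successors, key=successors.get)


learner = OnlineSequenceLearner()
for s in "ABCABCABD":
    learner.observe(s)

print(learner.predict("A"))  # 'B' (A was always followed by B)
print(learner.predict("B"))  # 'C' (B->C seen twice, B->D once)
```

The key property shared with HTM is that learning and inference are interleaved over a continuous stream, so what the model can do is determined entirely by what it has been exposed to.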

    • 11:15 AM – 12:00 PM
      Paper discussion (45m)
      Speaker: Adrian Alan Pol (Université Paris-Saclay (FR))