The mode of operation of state-of-the-art image sensors is well suited to one thing: photography, i.e. taking an image of a still scene. Exposing an array of pixels for a defined amount of time to the light coming from a static scene is an adequate procedure for capturing its visual content. As soon as change or motion is involved, however, the paradigm of frame-based acquisition becomes fundamentally flawed. If a camera observes a dynamic scene, no matter what frame rate is chosen, it will always be wrong. Because there is no relation between the scene dynamics and the chosen frame rate, over-sampling or under-sampling will occur, and usually both at the same time: since different parts of a scene have different dynamic content, a single sampling rate governing the exposure of all pixels in the array will naturally fail to adequately capture these different scene dynamics present at the same time.

The solution is an image sensor that samples parts of the scene containing fast motion and changes at high rates and slowly changing parts at low rates, with the sampling rate going to zero where nothing changes. Unfortunately, where in a scene things change and move, and at what speed, is usually not known beforehand. One way to solve this problem is to let each individual pixel adapt and optimize its own sampling rate to the visual input it receives, by autonomously reacting to the temporal evolution of the light incident on its photosensor. As a consequence, (a) the sampling process is no longer governed by a fixed timing source but by the signal to be sampled itself, and (b) image information is acquired and transmitted not frame-wise but continuously and conditionally, only from those parts of the scene that contain relevant information. Such sensors combine ultra-high-speed operation and wide dynamic range with low data rates, outperforming conventional image sensors in many areas of machine vision.
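The per-pixel change-detection principle can be illustrated with a minimal software sketch. The code below is an illustrative model, not a description of any actual sensor circuit: it assumes a log-intensity contrast threshold and a pair of intensity frames as input, and emits an event with a polarity only for pixels whose illumination changed sufficiently. The function name, threshold value, and frame interface are all assumptions made for this example.

```python
import math

def generate_events(prev_frame, curr_frame, threshold=0.2):
    """Emit (row, col, polarity) events for pixels whose log-intensity
    change exceeds the contrast threshold; unchanged pixels stay silent,
    so the per-pixel 'sampling rate' goes to zero for static scene parts."""
    events = []
    for r, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for c, (prev, curr) in enumerate(zip(prev_row, curr_row)):
            delta = math.log(curr) - math.log(prev)  # temporal contrast
            if abs(delta) >= threshold:
                polarity = 1 if delta > 0 else -1    # brighter / darker
                events.append((r, c, polarity))
    return events

# A 2x2 scene where one pixel brightens, one darkens, two are static:
prev = [[10.0, 10.0],
        [10.0, 10.0]]
curr = [[10.0, 20.0],
        [ 5.0, 10.0]]
print(generate_events(prev, curr))  # only the two changed pixels report
```

Note the contrast with frame-based readout: a fully static scene produces an empty event list, while a fast-changing region can be polled as often as desired without paying for the unchanged pixels.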