30 June 2016 to 1 July 2016
Other Institutes
Europe/Zurich timezone

Video-Based Drone Detection for Collision Avoidance Purposes

Not scheduled
10m
Auditorium & Conference Room (Other Institutes)

ESADE Business School, Avenida Pedralbes, 60-62, 08034 Barcelona, Spain.

Speaker

Pascal Fua

Description

We are headed for a world in which the skies are occupied not only by birds and
planes but also by unmanned drones ranging from relatively large Unmanned
Aerial Vehicles (UAVs) to much smaller consumer ones. Some of these will carry
transponders that make them easy to detect, but not all will. In addition to these
unequipped drones, one must also account for other flying objects such as paragliders,
blimps, and even large birds, all of which constitute potential collision threats.
To allow aircraft to safely navigate such a crowded environment without
requiring heavy or expensive equipment, we will develop a lightweight video-based
system that can detect threats and alert the pilot to them.

We have begun developing video-based algorithms that rely on classifying
descriptors extracted from spatio-temporal image cubes. These cubes are
formed by stacking motion-stabilized image windows over several consecutive
frames, which gives more information than using a single image. What makes
this approach both practical and effective is a learning-based motion-stabilization
algorithm. Unlike algorithms that rely on optical flow, it remains effective
even when the shape of the object to be detected is blurry or barely visible. This
arises from the fact that learning-based motion compensation focuses on the
object and is more resistant to complicated backgrounds.
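To make the cube construction concrete, the following minimal Python/NumPy sketch illustrates the general idea of stacking motion-stabilized image windows from consecutive frames into a spatio-temporal cube and feeding the resulting descriptor to a classifier. The function names, the window size, and the assumption that per-frame object centres are already available are illustrative only and are not taken from the actual system.

import numpy as np

def stabilized_window(frame, center, size=32):
    """Crop a fixed-size window around the estimated object position.

    In the approach described above, these positions would come from a
    learned motion-stabilization step; here they are simply assumed given.
    """
    y, x = center
    half = size // 2
    return frame[y - half:y + half, x - half:x + half]

def spatio_temporal_cube(frames, centers, size=32):
    """Stack motion-stabilized windows from consecutive frames into a cube."""
    windows = [stabilized_window(f, c, size) for f, c in zip(frames, centers)]
    return np.stack(windows, axis=0)          # shape: (num_frames, size, size)

def detect(frames, centers, classifier, size=32):
    """Flatten the cube into a descriptor and apply a drone/no-drone classifier."""
    cube = spatio_temporal_cube(frames, centers, size)
    descriptor = cube.astype(np.float32).ravel()[None, :]
    return classifier.predict(descriptor)[0]  # any generic classifier interface

The sketch only shows why the stacked windows carry more information than a single image: the classifier sees the object's appearance over several consecutive, stabilized frames at once.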

Our current results are encouraging, but our algorithms are not yet reliable
enough, in large part because they rely on Statistical Machine Learning, and more
specifically on Deep Nets, which require very large training databases to reach their
full potential. Since such databases do not yet exist, the main goal of this project
will be to build them and then exploit them fully to achieve the required
detection reliability.

Summary

We have begun developing video-based algorithms that rely on classifying descriptors extracted from spatio-temporal image cubes for Collision Avoidance Purposes. These cubes are formed by stacking motion-stabilized image windows over several consecutive frames, which gives more information than using a single image.
