How do you build robust, visual-inertial state estimation for autonomous navigation?

Robust, Visual-Inertial State Estimation: from Frame-based to Event-based Cameras

This lecture was presented by Professor Davide Scaramuzza of the University of Zurich and ETH Zurich on September 25, 2017, as part of the Microsoft Research Talks series.

Professor Davide Scaramuzza presented the main algorithms for achieving robust, 6-DOF state estimation for mobile robots using passive sensing. Since cameras alone are not robust to high-speed motion and high-dynamic-range scenes, he described how IMUs and event-based cameras can be fused with visual information to achieve higher accuracy and robustness. He also discussed event-based cameras, which are revolutionary sensors with microsecond latency, a very high dynamic range, and a measurement update rate almost a million times faster than that of standard cameras. Finally, he showed concrete applications of these methods in the autonomous navigation of vision-controlled drones.
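Unlike a standard camera, which outputs full frames at a fixed rate, an event camera reports asynchronous per-pixel brightness changes as a stream of events, each carrying pixel coordinates, a timestamp, and a polarity. The sketch below (illustrative names and data only, not code from the lecture) shows one common way such a stream is handled in practice: accumulating events over a time window into a signed "event frame".

```python
import numpy as np

def accumulate_events(events, width, height):
    """Accumulate a list of events into a signed event frame.

    Each event is a tuple (x, y, timestamp, polarity), where
    polarity is +1 for a brightness increase and -1 for a decrease.
    Pixels with many events in the window end up with large
    absolute values; static pixels stay at zero.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, t, polarity in events:
        frame[y, x] += polarity
    return frame

# Example: three events with microsecond-scale timestamps,
# illustrating the high temporal resolution of the sensor.
events = [
    (10, 5, 1e-6, +1),
    (10, 5, 3e-6, +1),
    (20, 7, 8e-6, -1),
]
frame = accumulate_events(events, width=32, height=16)
```

Such accumulated frames are one simple bridge between the asynchronous event stream and conventional frame-based vision pipelines; the lecture's methods go further by fusing the events with IMU measurements.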

For further information, please watch the lecture video on the Microsoft Research website.