Visual Odometry

This is the visual odometry software I designed for Professor Whittaker’s 16-865 Advanced Mobile Robot Development class. The algorithm achieved less than 5% position error and reported over 99% confidence in its pose estimates.

Planetary rovers require accurate position estimation from sensory data (e.g. inertial measurement units (IMUs), quadrature encoders, and stereo cameras) to determine position, reconstruct the traversed path, and register to global navigation. IMUs drift over time and wheels slip on soil, so pairing these measurements with image data yields a much more accurate estimate of the robot’s position relative to the terrain. For position estimation to be useful, the algorithm must also run in real time, which limits the amount of computation per frame. This project designed, developed, and tested a pose estimation algorithm that achieves less than 5% error over a 1000-meter path; in comparison, wheel-encoder odometry error rose to 20% over only 100 meters.
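The write-up does not spell out how the error percentage is defined, but figures like "5% over 1000 meters" are commonly reported as end-point translation error divided by distance travelled. A minimal sketch of that (assumed) metric, with hypothetical array inputs:

```python
import numpy as np

def drift_percent(est_path, true_path):
    """Translational drift as a percentage of distance travelled.

    est_path, true_path: (N, 3) arrays of corresponding positions
    along the estimated and ground-truth trajectories.
    """
    # Total path length: sum of step distances along the true trajectory.
    path_len = np.sum(np.linalg.norm(np.diff(true_path, axis=0), axis=1))
    # End-point position error between the two trajectories.
    end_error = np.linalg.norm(est_path[-1] - true_path[-1])
    return 100.0 * end_error / path_len
```

Under this definition, 5% over 1000 meters corresponds to ending up within 50 meters of the true final position.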

Below is an illustration of the visual odometry algorithm. First, features are detected in a pair of stereo camera images. These features are then matched between the current and previous timesteps, and the robot’s direction and motion are determined from the world coordinates of the matched feature set, as sketched below.
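The original course code is not reproduced here, but one frame-to-frame step of such a stereo pipeline, assuming OpenCV with ORB features, brute-force descriptor matching, linear triangulation, and RANSAC PnP (all illustrative choices, not necessarily the project’s actual detector or solver), might look like this. `K` is the camera intrinsic matrix and `baseline` the stereo baseline in meters, both hypothetical parameters.

```python
import cv2
import numpy as np

def stereo_triangulate(kp_l, kp_r, matches, K, baseline):
    """Triangulate left/right keypoint matches into 3-D points (left-camera frame)."""
    P_l = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                    # left camera at origin
    P_r = K @ np.hstack([np.eye(3), np.array([[-baseline], [0.0], [0.0]])])  # right camera offset
    pts_l = np.float32([kp_l[m.queryIdx].pt for m in matches]).T          # 2xN pixel coords
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches]).T
    pts_h = cv2.triangulatePoints(P_l, P_r, pts_l, pts_r)                 # 4xN homogeneous
    return (pts_h[:3] / pts_h[3]).T                                       # Nx3 Euclidean

def estimate_motion(left_prev, right_prev, left_curr, K, baseline):
    """Estimate camera motion between two timesteps from a stereo pair."""
    orb = cv2.ORB_create(2000)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    # 1. Detect features in the previous stereo pair and the current left image.
    kp_l0, des_l0 = orb.detectAndCompute(left_prev, None)
    kp_r0, des_r0 = orb.detectAndCompute(right_prev, None)
    kp_l1, des_l1 = orb.detectAndCompute(left_curr, None)

    # 2. Left-right matches give 3-D landmarks at the previous timestep.
    stereo_matches = bf.match(des_l0, des_r0)
    pts3d = stereo_triangulate(kp_l0, kp_r0, stereo_matches, K, baseline)

    # 3. Match previous-left features to the current-left image (temporal matches).
    temporal_matches = bf.match(des_l0, des_l1)

    # Keep only features with both a stereo match (3-D point) and a temporal match.
    idx3d = {m.queryIdx: i for i, m in enumerate(stereo_matches)}
    obj, img = [], []
    for m in temporal_matches:
        if m.queryIdx in idx3d:
            obj.append(pts3d[idx3d[m.queryIdx]])
            img.append(kp_l1[m.trainIdx].pt)
    obj, img = np.float32(obj), np.float32(img)

    # 4. RANSAC PnP recovers the rotation and translation that project the
    #    previous-frame landmarks onto their current-frame observations.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```

Chaining the per-frame transforms reconstructs the full path; solving PnP against triangulated landmarks rather than re-triangulating every feature in both frames keeps the per-frame cost low, which matters for the real-time constraint noted above.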