Planetary rovers require accurate pose estimation from sensor data (e.g. inertial measurement units (IMUs), quadrature encoders, and stereo cameras) to determine position, reconstruct the traversed path, and register local measurements to a global navigation frame. IMUs drift over time and wheels slip on loose soil, so fusing these measurements with image data yields a more accurate estimate of the robot's position relative to the terrain. To be useful for navigation, the algorithm must run in real time, which limits the computation allowed per frame. This project designed, developed, and tested a pose estimation algorithm that achieves less than 5% error over a 1000-meter path; in comparison, wheel-encoder odometry error rose to 20% over 100 meters.
Below is an illustration of the visual odometry algorithm. First, features are detected in a pair of stereo camera images. These features are then matched between the current and previous timesteps. Finally, the rover's direction and motion are estimated from the world coordinates of the matched feature set.
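The final step above, recovering motion from matched 3-D feature positions, is commonly solved as a rigid-body alignment problem. As a minimal sketch (not the project's actual implementation), the following uses the Kabsch/SVD method with NumPy to estimate the rotation and translation that map the previous timestep's feature coordinates onto the current timestep's; the function name `estimate_motion` and the assumption of already-triangulated, outlier-free matches are illustrative.

```python
import numpy as np

def estimate_motion(prev_pts, curr_pts):
    """Estimate the rigid motion (R, t) mapping prev_pts onto curr_pts,
    i.e. curr = R @ prev + t, via the Kabsch/SVD method.

    prev_pts, curr_pts: (N, 3) arrays of matched 3-D feature positions
    (assumed already triangulated from the stereo pair and outlier-free).
    """
    # Center both point sets on their centroids.
    prev_c = prev_pts.mean(axis=0)
    curr_c = curr_pts.mean(axis=0)
    P = prev_pts - prev_c
    Q = curr_pts - curr_c

    # Cross-covariance matrix and its SVD.
    H = P.T @ Q
    U, _, Vt = np.linalg.svd(H)

    # Correct for a possible reflection so det(R) = +1.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

    # Translation follows from the centroids.
    t = curr_c - R @ prev_c
    return R, t
```

In a full pipeline this estimate would typically be wrapped in RANSAC to reject bad feature matches before the final least-squares fit.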