Mosaic and Multiview Vision-Aided Navigation
We develop a new method for vision-aided navigation based on three-view geometry. The main goal of this research is to provide position estimation in GPS-denied environments for vehicles equipped with only a standard inertial navigation system and a single camera, without using any a priori information. Images taken along the trajectory are stored and associated with partial navigation data. Using sets of three overlapping images and the associated navigation data, constraints relating the motion between the time instances of the three images are developed. In addition to the well-known epipolar constraints, these include a new constraint related to the three-view geometry of a general scene. The scale ambiguity, inherent in purely computer-vision-based motion estimation techniques, is resolved using the navigation data attached to each image. The developed constraints are fused with the inertial navigation system using an implicit extended Kalman filter. The new method reduces the position and velocity errors to the levels present when the first two images were captured, and it requires fewer computational resources than bundle adjustment or simultaneous localization and mapping. The proposed method is experimentally validated using real navigation and imagery data.
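To make the epipolar part of the constraint set concrete, the following is a minimal sketch of the residual used between any pair of overlapping views. It is not the authors' actual filter measurement model; the function names, the pose convention (`x2 = R12 @ x1 + t12`, with `t12` the relative translation expressed in the second camera frame), and the use of normalized line-of-sight vectors are all assumptions made for illustration. A residual of this form, evaluated for the three view pairs together with the additional three-view constraint, is the kind of implicit measurement an implicit extended Kalman filter can ingest.

```python
import numpy as np

def skew(t):
    # Skew-symmetric matrix so that skew(t) @ v == np.cross(t, v).
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_residual(q1, q2, R12, t12):
    # Epipolar constraint q2^T [t]_x R q1 = 0 for corresponding
    # line-of-sight vectors q1, q2 in the two camera frames.
    # R12, t12 (assumed convention: x2 = R12 @ x1 + t12) would come
    # from the partial navigation data attached to each image.
    E = skew(t12) @ R12  # essential matrix under the assumed convention
    return float(q2 @ E @ q1)

# Synthetic check: a 3-D point observed from two poses satisfies the
# constraint up to numerical noise (all values here are made up).
theta = 0.1
R12 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])
t12 = np.array([0.5, -0.2, 0.1])
P1 = np.array([1.0, 2.0, 5.0])          # point in camera-1 frame
P2 = R12 @ P1 + t12                     # same point in camera-2 frame
q1 = P1 / np.linalg.norm(P1)
q2 = P2 / np.linalg.norm(P2)
residual = epipolar_residual(q1, q2, R12, t12)
```

Because the residual is homogeneous in `q1` and `q2`, it constrains only the direction of translation, not its magnitude; this is the scale ambiguity that the abstract notes is resolved by the navigation data attached to each image.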