Visual attention-based robot self-localization
Author(s)
Ouerhani, Nabil
Bur, Alexandre
Hügli, Heinz
Date issued
September 7, 2005
In
Proceedings of the European Conference on Mobile Robotics (ECMR 2005), pp. 8-13
Subjects
Visual attention; Computer vision; Robot navigation; Markov localization; Robot localization
Abstract
This paper reports a landmark-based localization method relying on visual attention. In a learning phase, a multi-cue, multi-scale saliency-based model of visual attention is used to automatically acquire robust visual landmarks, which are integrated into a topological map of the navigation environment. During navigation, the same visual attention model detects the most salient visual features, which are then matched to the learned landmarks. The matching result yields a probabilistic measure of the robot's current location. This measure is further integrated into a more general Markov localization framework in order to take into account the structural constraints of the navigation environment, which significantly enhances the localization results. Experiments carried out with real training and test image sequences acquired by a robot in a lab environment demonstrate the potential of the proposed method.
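To make the Markov localization step described in the abstract concrete, the following is a minimal sketch, not the paper's implementation: a belief distribution over the nodes of a topological map is first propagated through a transition model encoding the environment's structural constraints, then reweighted by an observation likelihood, here standing in for the landmark-matching score. The names (`markov_localization_step`, the matrix `T`, the `likelihood` vector) and the 4-node example map are hypothetical.

```python
import numpy as np

def markov_localization_step(belief, transition, match_likelihood):
    """One Markov localization update over a topological map (illustrative sketch).

    belief           -- current probability distribution over the N map nodes
    transition       -- N x N matrix; transition[i, j] = P(next node j | node i),
                        encoding the structural constraints of the environment
    match_likelihood -- P(observation | node), e.g. a score from matching the
                        currently detected salient features against the visual
                        landmarks learned for each node
    """
    predicted = transition.T @ belief        # prediction (motion) step
    updated = match_likelihood * predicted   # correction (observation) step
    return updated / updated.sum()           # renormalize to a distribution

# Hypothetical 4-node corridor map: the robot tends to stay put or move forward.
T = np.array([
    [0.6, 0.4, 0.0, 0.0],
    [0.0, 0.6, 0.4, 0.0],
    [0.0, 0.0, 0.6, 0.4],
    [0.0, 0.0, 0.0, 1.0],
])
belief = np.full(4, 0.25)                    # uniform initial belief
likelihood = np.array([0.1, 0.7, 0.1, 0.1])  # landmark matching favors node 1
belief = markov_localization_step(belief, T, likelihood)
print(belief)
```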
Later version
http://ecmr05.univpm.it/
Publication type
conference paper
File(s)
Name
1_Ouerhani_Nabil_-_Visual_Attention-Based_Robot_Self-Localization_20051130.pdf
Type
Main Article
Size
1.53 MB
Format
Adobe PDF
