Visual attention-based robot self-localization
Author(s)
Ouerhani, Nabil
Bur, Alexandre
Hügli, Heinz
Publication date
2005-09-07
Published in
Proceedings of the European Conference on Mobile Robotics (ECMR), 2005, pp. 8-13
Abstract
This paper reports a landmark-based localization method relying on visual attention. In a learning phase, a multi-cue, multi-scale saliency-based model of visual attention is used to automatically acquire robust visual landmarks, which are integrated into a topological map of the navigation environment. During navigation, the same visual attention model detects the most salient visual features, which are then matched to the learned landmarks. The matching result yields a probabilistic measure of the robot's current location. This measure is further integrated into a more general Markov localization framework to account for the structural constraints of the navigation environment, which significantly improves the localization results. Experiments carried out with real training and test image sequences acquired by a robot in a laboratory environment demonstrate the potential of the proposed method.
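As an illustration of the Markov localization step described in the abstract, the following is a minimal sketch of a discrete Bayes filter over a topological map. All names and numbers are hypothetical: the transition matrix stands in for the environment's structural constraints, and the likelihood vector stands in for the probabilistic measure produced by landmark matching; the paper's actual attention model and matching procedure are not reproduced here.

```python
import numpy as np

# Hypothetical topological map with 4 discrete places.
# transition[i, j]: probability of moving from place i to place j,
# encoding the structural constraints of the environment.
transition = np.array([
    [0.7, 0.3, 0.0, 0.0],
    [0.2, 0.6, 0.2, 0.0],
    [0.0, 0.2, 0.6, 0.2],
    [0.0, 0.0, 0.3, 0.7],
])

def markov_localization_step(belief, likelihood):
    """One predict/update cycle of discrete Markov localization.

    belief:     current probability distribution over places, shape (N,)
    likelihood: P(observation | place), e.g. from landmark matching, shape (N,)
    """
    predicted = transition.T @ belief   # prediction from the motion model
    posterior = likelihood * predicted  # Bayesian observation update
    return posterior / posterior.sum()  # normalize to a distribution

# Example: uniform prior; landmark matching strongly favours place 2.
belief = np.full(4, 0.25)
likelihood = np.array([0.05, 0.10, 0.75, 0.10])
belief = markov_localization_step(belief, likelihood)
print(belief)
```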
Identifiers
Other version
http://ecmr05.univpm.it/
Publication type
conference paper