Visual attention-based robot self-localization

Author(s)
Ouerhani, Nabil
Editor(s)
Bur, Alexandre
Hügli, Heinz
Publication date
2005-09-07
In
Proceedings of the European Conference on Mobile Robotics, 2005, 8-13
Keywords
  • Visual attention
  • Computer Vision
  • Robot navigation
  • Markov navigation
  • Robot localization

Abstract
This paper reports a landmark-based localization method relying on visual attention. In a learning phase, the multi-cue, multi-scale saliency-based model of visual attention is used to automatically acquire robust visual landmarks that are integrated into a topological map of the navigation environment. During navigation, the same visual attention model detects the most salient visual features that are then matched to the learned landmarks. The matching result yields a probabilistic measure of the current location of the robot. Further, this measure is integrated into a more general Markov localization framework in order to take into account the structural constraints of the navigation environment, which significantly enhances the localization results. Some experiments carried out with real training and test image sequences taken by a robot in a lab environment show the potential of the proposed method.
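
As an illustration of the localization step described in the abstract, here is a minimal Python sketch of one Markov localization cycle over a topological map. It assumes a discrete set of map nodes, a transition matrix encoding the structural constraints of the environment, and per-node match scores obtained by comparing the currently detected salient features with each node's learned landmarks. All names and numbers below are illustrative assumptions, not taken from the paper.

import numpy as np

def markov_localization_step(belief, transition, match_scores):
    """One predict/update cycle over a topological map (hypothetical sketch).

    belief       -- prior probability over the N map nodes, shape (N,)
    transition   -- N x N matrix; transition[i, j] = P(next node j | current node i),
                    encoding the structural constraints of the environment
    match_scores -- per-node likelihoods from matching the currently detected
                    salient features against that node's learned landmarks
    """
    # Prediction: propagate the belief through the topological connectivity.
    predicted = transition.T @ belief
    # Update: weight by the attention-based matching likelihood and renormalize.
    posterior = predicted * match_scores
    return posterior / posterior.sum()

# Example with a 4-node corridor-like map (purely illustrative numbers).
belief = np.full(4, 0.25)
transition = np.array([[0.7, 0.3, 0.0, 0.0],
                       [0.2, 0.6, 0.2, 0.0],
                       [0.0, 0.2, 0.6, 0.2],
                       [0.0, 0.0, 0.3, 0.7]])
match_scores = np.array([0.05, 0.10, 0.70, 0.15])
print(markov_localization_step(belief, transition, match_scores))

In this sketch the attention-based matching result plays the role of the observation likelihood, while the transition matrix captures which locations are reachable from which, which is how the abstract describes the structural constraints entering the framework.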
URI
https://libra.unine.ch/handle/123456789/19079
Other version
http://ecmr05.univpm.it/
Publication type
Resource Types::text::journal::journal article
File(s) to download
 main article: 1_Ouerhani_Nabil_-_Visual_Attention-Based_Robot_Self-Localization_20051130.pdf (1.53 MB)