Monte-Carlo utility estimates for Bayesian reinforcement learning

Author(s)
Dimitrakakis, Christos 
Institut d'informatique 
Publication date
2013
In
52nd IEEE Conference on Decision and Control
Keywords
  • Machine Learning (cs.LG)
  • Machine Learning (stat.ML)
Abstract
This paper introduces a set of algorithms for Monte-Carlo Bayesian reinforcement learning. Firstly, Monte-Carlo estimation of upper bounds on the Bayes-optimal value function is employed to construct an optimistic policy. Secondly, gradient-based algorithms for approximate upper and lower bounds are introduced. Finally, we introduce a new class of gradient algorithms for Bayesian Bellman error minimisation. We theoretically show that the gradient methods are sound. Experimentally, we demonstrate the superiority of the upper bound method in terms of reward obtained. However, we also show that the Bayesian Bellman error method is a close second, despite its significant computational simplicity.
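As a rough illustration of the first idea in the abstract, the sketch below shows one common way to obtain a Monte-Carlo upper bound on the Bayes-optimal value: sample MDPs from the posterior, solve each one by value iteration, and average the per-sample optimal values, which upper-bounds the Bayes-optimal value by convexity of the maximum. This is a minimal sketch, not the authors' code; the Dirichlet posterior over transitions, known rewards, and all function names and parameters are illustrative assumptions.

    import numpy as np

    def value_iteration(P, R, gamma=0.95, tol=1e-6):
        # Optimal state values for one sampled MDP.
        # P: (S, A, S) transition probabilities, R: (S, A) expected rewards.
        V = np.zeros(P.shape[0])
        while True:
            Q = R + gamma * (P @ V)      # (S, A) action values
            V_new = Q.max(axis=1)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new
            V = V_new

    def mc_upper_bound(dirichlet_counts, R, n_samples=100, gamma=0.95, seed=None):
        # Monte-Carlo upper bound: average of per-sample optimal values over
        # MDPs drawn from a Dirichlet posterior on the transition kernel.
        # E[max_pi V(mu, pi)] >= max_pi E[V(mu, pi)], so the average is optimistic.
        rng = np.random.default_rng(seed)
        S, A, _ = dirichlet_counts.shape
        V_sum = np.zeros(S)
        for _ in range(n_samples):
            # Sample one transition kernel from the Dirichlet posterior.
            P = np.stack([[rng.dirichlet(dirichlet_counts[s, a]) for a in range(A)]
                          for s in range(S)])
            V_sum += value_iteration(P, R, gamma)
        return V_sum / n_samples

    # Example: 3 states, 2 actions, uniform prior plus a few observed counts.
    counts = np.ones((3, 2, 3)) + np.random.default_rng(0).integers(0, 5, (3, 2, 3))
    rewards = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 0.5]])
    print(mc_upper_bound(counts, rewards, n_samples=50, seed=0))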
Identifiers
Handle: https://libra.unine.ch/handle/123456789/30979
DOI: 10.1109/CDC.2013.6761048
Publication type
conference paper
File(s) to download
Main article: 1303.2506.pdf (131.22 KB)