Bayesian Reinforcement Learning via Deep, Sparse Sampling
Author(s)
Publication date
2020
In
AISTATS
Vol.
2020
Abstract
We address the problem of Bayesian reinforcement learning using efficient model-based online planning. We propose an optimism-free Bayes-adaptive algorithm that induces deeper and sparser exploration, with a theoretical bound on its performance relative to the Bayes-optimal policy at lower computational complexity. The main novelty is the use of a candidate policy generator that produces long-term options in the planning tree (over beliefs), which allows us to build much sparser and deeper trees. Experimental results on different environments show that, in comparison to the state of the art, our algorithm is computationally more efficient and obtains significantly higher reward in discrete environments.
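The abstract's key idea is that branching over a small set of candidate policies (options), each rolled out for several steps, keeps the belief tree sparse while letting it reach deeper than branching over primitive actions. The following is a minimal, hypothetical Python sketch of that idea on a toy two-state chain MDP with a Beta-Bernoulli belief; the BetaBelief class, CANDIDATES list, and plan function are illustrative assumptions, not the authors' actual algorithm or its theoretical guarantees.

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class BetaBelief:
    """Toy Bayes-adaptive node: Beta(alpha, beta) belief over the unknown
    success probability of action 1, plus the current physical state.
    (A hypothetical stand-in for a full belief-MDP model.)"""
    alpha: float
    beta: float
    state: int

    def sample_p(self):
        # Posterior sample of the unknown transition parameter.
        return random.betavariate(self.alpha, self.beta)

    def step(self, action, p):
        """Simulate one step under sampled parameter p and return
        (next belief node, reward). Action 1 tries to advance."""
        if action == 0:                       # stay: safe, zero reward
            return self, 0.0
        if random.random() < p:               # observed success: update belief
            return BetaBelief(self.alpha + 1, self.beta, 1), 1.0
        return BetaBelief(self.alpha, self.beta + 1, 0), 0.0

# Candidate policies (the "options"): map state -> primitive action.
CANDIDATES = [lambda s: 0, lambda s: 1]

def plan(belief, depth, n_samples=3, option_len=4, gamma=0.95):
    """Sparse sampling over beliefs, branching on candidate policies
    rather than primitive actions: each option is rolled out for
    option_len steps, giving a sparser but deeper tree."""
    if depth == 0:
        return 0.0
    values = []
    for policy in CANDIDATES:                 # sparse branching over options
        total = 0.0
        for _ in range(n_samples):            # few posterior samples per option
            p = belief.sample_p()             # sampled model for this rollout
            b, ret, disc = belief, 0.0, 1.0
            for _ in range(option_len):       # deep rollout of the option
                a = policy(b.state)
                b, r = b.step(a, p)
                ret += disc * r
                disc *= gamma
            ret += disc * plan(b, depth - 1, n_samples, option_len, gamma)
            total += ret
        values.append(total / n_samples)
    return max(values)

if __name__ == "__main__":
    root = BetaBelief(alpha=1, beta=1, state=0)
    print("estimated root value:", plan(root, depth=3))
```

Note the trade-off the sketch illustrates: with only len(CANDIDATES) * n_samples children per node, the same planning budget supports a larger effective horizon (depth * option_len primitive steps) than a tree that branches on every primitive action at every step.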
Identifiers
Publication type
journal article
File(s) to download