Monte-Carlo utility estimates for Bayesian reinforcement learning
Date issued
2013
In
52nd IEEE Conference on Decision and Control
Subjects
Machine Learning (cs.LG); Machine Learning (stat.ML)
Abstract
This paper introduces a set of algorithms for Monte-Carlo Bayesian reinforcement learning. First, Monte-Carlo estimation of upper bounds on the Bayes-optimal value function is used to construct an optimistic policy. Second, gradient-based algorithms for approximate upper and lower bounds are introduced. Finally, we introduce a new class of gradient algorithms for Bayesian Bellman error minimisation. We show theoretically that the gradient methods are sound. Experimentally, the upper-bound method obtains the most reward, while the Bayesian Bellman error method is a close second despite being significantly simpler computationally.
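To make the first idea concrete, below is a minimal sketch of the Monte-Carlo upper bound, assuming a discrete MDP with known rewards and a Dirichlet posterior over transition kernels: the Bayes-optimal value is bounded above by the posterior expectation of the sampled MDPs' optimal values, which can be estimated by drawing kernels from the posterior and solving each sampled MDP. All names here (`value_iteration`, `mc_upper_bound`, the `counts` layout) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Exactly solve a discrete MDP given P[s, a, s'] and R[s, a]."""
    V = np.zeros(P.shape[0])
    while True:
        Q = R + gamma * P @ V            # Q[s, a] for this sampled MDP
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q
        V = V_new

def mc_upper_bound(counts, R, n_samples=32, gamma=0.95, seed=0):
    """Estimate E_{mu ~ posterior}[V*_mu], an upper bound on the
    Bayes-optimal value, by sampling transition kernels from a
    Dirichlet posterior (counts include the prior pseudo-counts)
    and solving each sampled MDP exactly."""
    rng = np.random.default_rng(seed)
    n_states, n_actions, _ = counts.shape
    V_sum = np.zeros(n_states)
    Q_sum = np.zeros((n_states, n_actions))
    for _ in range(n_samples):
        # One posterior draw of the full transition kernel.
        P = np.array([[rng.dirichlet(counts[s, a])
                       for a in range(n_actions)]
                      for s in range(n_states)])
        V, Q = value_iteration(P, R, gamma)
        V_sum += V
        Q_sum += Q
    # Mean optimal value estimates the upper bound; greedy actions on
    # the averaged Q give one simple optimistic policy (an assumption
    # here, not necessarily the paper's construction).
    return V_sum / n_samples, Q_sum.argmax(axis=1)
```

Acting greedily with respect to the averaged Q-values is one straightforward way to turn the upper-bound estimate into an optimistic policy; the paper's own construction may differ in detail.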
Publication type
conference paper
File(s)
Name
1303.2506.pdf
Type
Main Article
Size
131.22 KB
Format
Adobe PDF
Checksum (MD5)
414d6167751fdd838fb706b9d8119230
