Exploration in POMDPs
Date issued:
In:
Vol: 1
No: 1
Pages: 24–31
Abstract
In recent work, Bayesian methods have been proposed both for exploration in Markov decision processes (MDPs) and for solving known partially observable Markov decision processes (POMDPs). In this paper we review the similarities and differences between these two problems and propose methods that address both simultaneously. This enables us to attack the Bayes-optimal reinforcement learning problem in POMDPs.
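To make the idea in the abstract concrete, the sketch below shows one common way to maintain a joint Bayesian belief over the hidden state and the unknown model in a Bayes-adaptive POMDP: a particle filter in which each particle carries a state hypothesis together with Dirichlet counts over the transition model. This is a minimal illustrative sketch under assumed conventions, not the paper's algorithm; all names (`obs_model`, `belief_update`, the known-observation-model assumption) are hypothetical.

```python
# Minimal sketch: particle-based belief update for a Bayes-adaptive POMDP.
# Each particle = (hidden state, Dirichlet counts over the unknown
# transition model, weight), so exploration (model uncertainty) and
# state estimation (partial observability) are handled in one belief.
# Assumption: the observation model is known; only transitions are learned.
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions, n_obs = 3, 2, 2
# Assumed known observation model: obs_model[a, s', z] = P(z | s', a).
obs_model = rng.dirichlet(np.ones(n_obs), size=(n_actions, n_states))

def init_particles(n_particles):
    """Uniform initial belief over states and a flat Dirichlet model prior."""
    return [
        {"state": int(rng.integers(n_states)),
         "phi": np.ones((n_actions, n_states, n_states)),  # Dirichlet counts
         "weight": 1.0 / n_particles}
        for _ in range(n_particles)
    ]

def belief_update(particles, a, z):
    """Condition the joint (state, model) belief on action a, observation z."""
    for p in particles:
        s, phi = p["state"], p["phi"]
        # Posterior-mean transition probabilities under this particle's counts.
        trans = phi[a, s] / phi[a, s].sum()
        # Likelihood of each successor state s' followed by observation z.
        w = trans * obs_model[a, :, z]
        total = w.sum()
        p["weight"] *= total                      # reweight by evidence
        s_next = int(rng.choice(n_states, p=w / total))
        phi[a, s, s_next] += 1.0                  # Bayesian model learning
        p["state"] = s_next
    total_w = sum(p["weight"] for p in particles)
    for p in particles:                           # renormalise (resampling
        p["weight"] /= total_w                    # omitted for brevity)
    return particles

particles = belief_update(init_particles(100), a=0, z=1)
```

A Bayes-optimal agent would then plan over this joint belief, so information-gathering actions that reduce either state or model uncertainty are valued automatically; the sketch covers only the belief-tracking step.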
Publication type: journal article