Showing items 1 - 10 of 56
  • Publication
    Open Access
    Bandits Meet Mechanism Design to Combat Clickbait in Online Recommendation
    (2024)
    Thomas Kleine Buening
    ;
    Aadirupa Saha
    ;
    Haifeng Xu
    We study a strategic variant of the multi-armed bandit problem, which we coin the strategic click-bandit. This model is motivated by applications in online recommendation where the choice of recommended items depends on both the click-through rates and the post-click rewards. As in classical bandits, rewards follow a fixed unknown distribution. However, we assume that the click-rate of each arm is chosen strategically by the arm (e.g., a host on Airbnb) in order to maximize the number of times it gets clicked. The algorithm designer does not know the post-click rewards nor the arms' actions (i.e., strategically chosen click-rates) in advance, and must learn both values over time. To solve this problem, we design an incentive-aware learning algorithm, UCB-S, which achieves two goals simultaneously: (a) incentivizing desirable arm behavior under uncertainty; (b) minimizing regret by learning unknown parameters. We approximately characterize all Nash equilibria of the arms under UCB-S and show a $\tilde{\mathcal{O}}(\sqrt{KT})$ regret bound uniformly in every equilibrium. We also show that incentive-unaware algorithms generally fail to achieve low regret in the strategic click-bandit. Finally, we support our theoretical results by simulations of strategic arm behavior, which confirm the effectiveness and robustness of our proposed incentive design.
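    The utility the learner must be optimistic about is the product of an arm's click-rate and its post-click reward, both estimated online. Below is a minimal sketch of such an optimistic rule; it is not the paper's exact UCB-S, whose defining feature is an additional screening step that deters undesirable strategic click-rates, and the click-rates, rewards, and noise model here are hypothetical.

```python
import numpy as np

# Hedged sketch: optimistic play on (click-rate x post-click reward).
# The actual UCB-S additionally eliminates arms whose observed click-rates
# violate the incentive condition; that screening step is omitted here.
rng = np.random.default_rng(0)
K, T = 5, 10_000
post_click = rng.uniform(0.2, 0.9, K)   # unknown post-click mean rewards (hypothetical)
click_rate = rng.uniform(0.5, 1.0, K)   # arms' strategically chosen click-rates (held fixed)

pulls, clicks, reward_sum = np.zeros(K), np.zeros(K), np.zeros(K)
for t in range(1, T + 1):
    if t <= K:
        a = t - 1                       # play each arm once to initialise
    else:
        bonus = np.sqrt(2 * np.log(t) / pulls)
        s_hat = np.minimum(clicks / pulls + bonus, 1.0)                      # optimistic click-rate
        r_hat = np.minimum(reward_sum / np.maximum(clicks, 1) + bonus, 1.0)  # optimistic reward
        a = int(np.argmax(s_hat * r_hat))                                    # utility = product
    pulls[a] += 1
    if rng.random() < click_rate[a]:    # the arm gets clicked
        clicks[a] += 1
        reward_sum[a] += post_click[a] + 0.05 * rng.standard_normal()
```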
  • Publication
    Open Access
    Strategic Linear Contextual Bandits
    (2024)
    Thomas Kleine Buening
    ;
    Aadirupa Saha
    ;
    Haifeng Xu
    Motivated by the phenomenon of strategic agents gaming a recommender system to maximize the number of times they are recommended to users, we study a strategic variant of the linear contextual bandit problem, where the arms can strategically misreport privately observed contexts to the learner. We treat the algorithm design problem as one of mechanism design under uncertainty and propose the Optimistic Grim Trigger Mechanism (OptGTM) that incentivizes the agents (i.e., arms) to report their contexts truthfully while simultaneously minimizing regret. We also show that failing to account for the strategic nature of the agents results in linear regret. However, a trade-off between mechanism design and regret minimization appears to be unavoidable. More broadly, this work aims to provide insight into the intersection of online learning and mechanism design.
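    A grim-trigger mechanism of this flavor can be caricatured in a few lines: trust reported contexts for a LinUCB-style optimistic choice, and permanently eliminate any arm whose realised rewards are statistically inconsistent with what it reported. This is only a toy in the spirit of OptGTM; the elimination test, bonus terms, and constants below are simplified stand-ins, and the arms here happen to report truthfully.

```python
import numpy as np

rng = np.random.default_rng(1)
d, K, T, lam, beta = 3, 4, 5_000, 1.0, 2.0
theta = rng.normal(size=d); theta /= np.linalg.norm(theta)  # unknown reward parameter

V, b = lam * np.eye(d), np.zeros(d)   # ridge statistics from (reported context, reward) pairs
active = set(range(K))                # arms not yet eliminated ("triggered")

for t in range(T):
    contexts = rng.normal(size=(K, d))            # privately observed contexts
    reported = contexts.copy()                    # truthful reports in this toy run
    theta_hat = np.linalg.solve(V, b)
    scores = {a: reported[a] @ theta_hat
                 + beta * np.sqrt(reported[a] @ np.linalg.solve(V, reported[a]))
              for a in active}                    # LinUCB-style optimism on reports
    a = max(scores, key=scores.get)
    r = contexts[a] @ theta + 0.1 * rng.standard_normal()
    V += np.outer(reported[a], reported[a]); b += reported[a] * r
    # grim trigger: eliminate forever if the realised reward is inconsistent
    # with the confidence band implied by the *reported* context
    width = beta * np.sqrt(reported[a] @ np.linalg.solve(V, reported[a])) + 0.5
    if abs(r - reported[a] @ theta_hat) > width and len(active) > 1:
        active.discard(a)
```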
  • Publication
    Open Access
    Environment Design for Inverse Reinforcement Learning
    (2024)
    Thomas Kleine Buening
    Learning a reward function from demonstrations suffers from low sample-efficiency. Even with abundant data, current inverse reinforcement learning methods that focus on learning from a single environment can fail to handle slight changes in the environment dynamics. We tackle these challenges through adaptive environment design. In our framework, the learner repeatedly interacts with the expert, with the former selecting environments to identify the reward function as quickly as possible from the expert’s demonstrations in said environments. This results in improvements in both sample-efficiency and robustness, as we show experimentally, for both exact and approximate inference.
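    One way to make the environment-selection step concrete: keep a posterior over reward functions and query the expert in whichever environment the posterior samples disagree about most. The sketch below is a hypothetical simplification, with a stand-in value function and random features in place of real MDPs.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy setup: 3 candidate environments summarised by feature vectors, and a
# posterior over reward weights represented by samples (all hypothetical)
envs = rng.normal(size=(3, 4))
reward_samples = rng.normal(size=(20, 4))

def value(reward_w, env_feat):
    # stand-in for the optimal value achievable in an environment under a reward
    return float(max(reward_w @ env_feat, 0.0))

# choose the environment where sampled rewards disagree most about the value:
# an expert demonstration there is the most informative about the true reward
disagreement = [np.var([value(w, e) for w in reward_samples]) for e in envs]
chosen = int(np.argmax(disagreement))
print("query the expert in environment", chosen)
```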
  • Publication
    Open Access
    Better Luck Next Time: About Robust Recourse in Binary Allocation Problems
    (2024)
    Meirav Segal
    ;
    Anne-Marie George
    ;
    Ingrid Chieh Yu
    In this work, we present the problem of algorithmic recourse in the setting of binary allocation problems. In this setting, the optimal allocation depends not only on the prediction model and the individual’s features, but also on the currently available resources, the utility function used by the decision maker, and the other individuals currently applying for the resource. We provide a method for generating counterfactual explanations under separable utilities that are monotonically increasing with prediction scores. Here, we assume that we can translate probabilities of “success”, together with some other parameters, into utility, such that the problem can be phrased as a knapsack problem and solved by known allocation policies: optimal 0–1 knapsack and greedy. We use the two policies respectively in the use cases of loans and college admissions. Moreover, we address the problem of recourse invalidation due to changes in allocation variables, under an unchanged prediction model, by presenting a method for robust recourse under distributions over these variables. Finally, we empirically compare our method with perturbation-robust recourse and show that our method can provide higher validity at a lower cost.
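    Of the two allocation policies mentioned, the greedy one is easy to sketch: admit applicants in decreasing utility-per-cost order until the budget is exhausted (the optimal 0–1 knapsack variant would replace this loop with dynamic programming). The utilities, costs, and budget below are hypothetical.

```python
def greedy_allocate(utilities, costs, budget):
    """Greedy knapsack policy: take applicants in decreasing utility-per-cost
    order while the resource budget lasts."""
    order = sorted(range(len(utilities)),
                   key=lambda i: utilities[i] / costs[i], reverse=True)
    chosen, spent = set(), 0.0
    for i in order:
        if spent + costs[i] <= budget:
            chosen.add(i)
            spent += costs[i]
    return chosen

# hypothetical example: utility = predicted success probability, unit costs
print(greedy_allocate([0.9, 0.7, 0.4, 0.3], [1.0, 1.0, 1.0, 1.0], budget=2.0))
# -> {0, 1}; a counterfactual explanation for applicant 3 would then ask for the
# smallest feature change lifting their utility above the last admitted one
```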
  • Publication
    Open Access
    Eliciting Kemeny Rankings
    (2024)
    Anne-Marie George
    We formulate the problem of eliciting agents’ preferences with the goal of finding a Kemeny ranking as a Dueling Bandits problem. Here the bandits’ arms correspond to alternatives that need to be ranked and the feedback corresponds to a pairwise comparison between alternatives by a randomly sampled agent. We consider both sampling with and without replacement, i.e., the possibility to ask the same agent about some comparison multiple times or not. We find approximation bounds for Kemeny rankings dependent on confidence intervals over estimated winning probabilities of arms. Based on these, we state algorithms to find Probably Approximately Correct (PAC) solutions and elaborate on their sample complexity for sampling with or without replacement. Furthermore, if all agents’ preferences are strict rankings over the alternatives, we provide means to prune confidence intervals and thereby guide a more efficient elicitation. We formulate several adaptive sampling methods that use look-aheads to estimate how much confidence intervals (and thus approximation guarantees) might be tightened. All described methods are compared on synthetic data.
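    For intuition, once pairwise winning probabilities have been estimated from sampled comparisons, a Kemeny ranking minimises the expected pairwise disagreement. The brute-force sketch below works from point estimates and is only viable for small numbers of alternatives (the paper's algorithms instead exploit confidence intervals to stop sampling early); the probability matrix is hypothetical.

```python
import itertools
import numpy as np

def kemeny_from_estimates(p_hat):
    """Brute-force Kemeny ranking from estimated pairwise winning probabilities
    p_hat[i, j] = P(i beats j). Only feasible for small n (O(n!) permutations)."""
    n = p_hat.shape[0]
    best, best_cost = None, np.inf
    for perm in itertools.permutations(range(n)):
        # cost: for each pair ranked (earlier, later), the probability that a
        # randomly sampled agent disagrees with that ordering
        cost = sum(p_hat[perm[j], perm[i]]
                   for i in range(n) for j in range(i + 1, n))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best

p_hat = np.array([[0.5, 0.8, 0.7],    # hypothetical estimates
                  [0.2, 0.5, 0.6],
                  [0.3, 0.4, 0.5]])
print(kemeny_from_estimates(p_hat))   # -> (0, 1, 2)
```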
  • Publication
    Open Access
    Isoperimetry is All We Need: Langevin Posterior Sampling for RL
    (2024)
    Emilio Jorge
    ;
    Debabrota Basu
    In Reinforcement Learning theory, we often impose restrictive assumptions, like linearity and RKHS structure on the model, or Gaussianity and log-concavity of the posteriors over models, to design an algorithm with provably sublinear regret. In this paper, we study whether we can design efficient low-regret RL algorithms for any isoperimetric distribution, which includes and extends the standard setups in the literature. Specifically, we show that the well-known PSRL (Posterior Sampling-based RL) algorithm yields sublinear regret if the posterior distributions satisfy the Log-Sobolev Inequality (LSI), which is a form of isoperimetry. Further, for the cases where we cannot compute or sample from an exact posterior, we propose a Langevin sampling-based algorithm design scheme, namely LaPSRL. We show that LaPSRL also achieves sublinear regret if the posteriors only satisfy LSI. Finally, we deploy a version of LaPSRL with a Langevin sampling algorithm, SARAH-LD. We numerically demonstrate its performance in different bandit and MDP environments. Experimental results validate the generality of LaPSRL across environments and its competitive performance with respect to the baselines.
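    The computational core of LaPSRL is replacing exact posterior sampling with Langevin dynamics, which needs only gradients of the log-posterior; LSI is precisely the condition under which such chains mix fast. Below is a minimal unadjusted Langevin sampler (the paper uses the variance-reduced SARAH-LD; the Gaussian target here is a toy).

```python
import numpy as np

def langevin_sample(grad_log_post, theta0, step=1e-3, n_steps=1_000, rng=None):
    """Unadjusted Langevin dynamics: approximate sampling from a posterior
    given only the gradient of its log-density."""
    if rng is None:
        rng = np.random.default_rng()
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(n_steps):
        noise = rng.standard_normal(theta.shape)
        theta += step * grad_log_post(theta) + np.sqrt(2 * step) * noise
    return theta

# toy usage: sample from N(mu, I), whose grad log-density is -(theta - mu);
# a PSRL-style agent would then plan greedily in the sampled model
mu = np.array([1.0, -2.0])
sample = langevin_sample(lambda th: -(th - mu), theta0=np.zeros(2))
```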
  • Publication
    Open Access
    Reinforcement Learning in the Wild with Maximum Likelihood-based Model Transfer
    (2023)
    Hannes Eriksson
    ;
    Debabrota Basu
    ;
    Tommy Tram
    ;
    Mina Alibeigi
    In this paper, we study the problem of transferring the available Markov Decision Process (MDP) models to learn and plan efficiently in an unknown but similar MDP. We refer to it as the Model Transfer Reinforcement Learning (MTRL) problem. First, we formulate MTRL for discrete MDPs and Linear Quadratic Regulators (LQRs) with continuous states and actions. Then, we propose a generic two-stage algorithm, MLEMTRL, to address the MTRL problem in discrete and continuous settings. In the first stage, MLEMTRL uses a constrained Maximum Likelihood Estimation (MLE)-based approach to estimate the target MDP model using a set of known MDP models. In the second stage, using the estimated target MDP model, MLEMTRL deploys a model-based planning algorithm appropriate for the MDP class. Theoretically, we prove worst-case regret bounds for MLEMTRL both in realisable and non-realisable settings. We empirically demonstrate that MLEMTRL allows faster learning in new MDPs than learning from scratch and achieves near-optimal performance depending on the similarity of the available MDPs and the target MDP.
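    Stage one admits a compact illustration if one takes "estimate the target model using a set of known models" to mean fitting simplex weights over the known transition kernels by maximum likelihood; this is a hypothetical simplification of the paper's constrained MLE, with made-up models and counts.

```python
import numpy as np
from scipy.optimize import minimize

def fit_mixture_weights(known_P, counts):
    """Fit simplex weights w so that sum_m w[m] * known_P[m] maximises the
    likelihood of observed transition counts. known_P: (M, S, A, S') models."""
    M = known_P.shape[0]

    def neg_log_lik(w):
        P = np.tensordot(w, known_P, axes=1)        # convex combination of models
        return -np.sum(counts * np.log(P + 1e-12))

    res = minimize(neg_log_lik, np.full(M, 1.0 / M),
                   bounds=[(0.0, 1.0)] * M,
                   constraints=({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},))
    return res.x

# hypothetical usage: two known 2-state, 1-action MDPs and observed counts;
# stage two would then run model-based planning in the fitted model
P0 = np.array([[[0.9, 0.1]], [[0.2, 0.8]]])
P1 = np.array([[[0.5, 0.5]], [[0.5, 0.5]]])
counts = np.array([[[70, 30]], [[40, 60]]])
print(fit_mixture_weights(np.stack([P0, P1]), counts))
```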
  • Publication
    Open Access
    Minimax-Bayes Reinforcement Learning
    (PMLR, 2023)
    Thomas Kleine Buening
    ;
    Hannes Eriksson
    ;
    Divya Grover
    ;
    Emilio Jorge
    While the Bayesian decision-theoretic framework offers an elegant solution to the problem of decision making under uncertainty, one question is how to appropriately select the prior distribution. One idea is to employ a worst-case prior. However, this is not as easy to specify in sequential decision making as in simple statistical estimation problems. This paper studies (sometimes approximate) minimax-Bayes solutions for various reinforcement learning problems to gain insights into the properties of the corresponding priors and policies. We find that while the worst-case prior depends on the setting, the corresponding minimax policies are more robust than those that assume a standard (i.e. uniform) prior.
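    In a finite toy version, the minimax-Bayes problem is a zero-sum game between a prior-choosing adversary and a Bayes-optimal agent, and can be approximated with multiplicative weights; the payoff matrix below is entirely hypothetical.

```python
import numpy as np

# U[i, j]: value of policy j when model i is the true environment (hypothetical)
U = np.array([[1.0, 0.4],
              [0.2, 0.8]])

p = np.array([0.5, 0.5])               # prior over the two candidate models
for _ in range(200):
    j = int(np.argmax(p @ U))          # Bayes-optimal policy under the current prior
    p = p * np.exp(-0.1 * U[:, j])     # adversary shifts mass to poorly handled models
    p /= p.sum()
# p approximates a worst-case prior; argmax(p @ U) is the minimax-Bayes policy
print(p, int(np.argmax(p @ U)))
```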