Bandits Meet Mechanism Design to Combat Clickbait in Online Recommendation
Author(s)
Date issued
2024
In
The Twelfth International Conference on Learning Representations
From page
1
To page
29
Subjects
bandits; mechanism design; incentive-aware learning; Nash equilibrium
Abstract
We study a strategic variant of the multi-armed bandit problem, which we coin the strategic click-bandit. This model is motivated by applications in online recommendation where the choice of recommended items depends on both the click-through rates and the post-click rewards. As in classical bandits, rewards follow a fixed unknown distribution. However, we assume that the click-rate of each arm is chosen strategically by the arm (e.g., a host on Airbnb) in order to maximize the number of times it gets clicked. The algorithm designer knows neither the post-click rewards nor the arms' actions (i.e., strategically chosen click-rates) in advance, and must learn both values over time. To solve this problem, we design an incentive-aware learning algorithm, UCB-S, which achieves two goals simultaneously: (a) incentivizing desirable arm behavior under uncertainty; (b) minimizing regret by learning unknown parameters. We approximately characterize all Nash equilibria of the arms under UCB-S and show a $\tilde{\mathcal{O}} (\sqrt{KT})$ regret bound uniformly in every equilibrium. We also show that incentive-unaware algorithms generally fail to achieve low regret in the strategic click-bandit. Finally, we support our theoretical results by simulations of strategic arm behavior which confirm the effectiveness and robustness of our proposed incentive design.
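To make the setting concrete, below is a minimal simulation sketch of the strategic click-bandit as described in the abstract, assuming Bernoulli post-click rewards, a fixed illustrative profile of strategically chosen click-rates, and a plain incentive-unaware UCB learner. It is not the paper's UCB-S mechanism, and all parameter values (K, T, alpha, s) are hypothetical.

```python
import numpy as np

# Sketch of the strategic click-bandit environment from the abstract,
# played against a plain (incentive-unaware) UCB learner.
# NOT the paper's UCB-S mechanism; parameters are illustrative assumptions.

rng = np.random.default_rng(0)

K, T = 5, 20_000
alpha = rng.uniform(0.2, 0.9, size=K)  # unknown post-click reward means
s = rng.uniform(0.5, 1.0, size=K)      # placeholder for the arms' strategically chosen click-rates

pulls = np.zeros(K)        # times each arm was recommended
reward_sum = np.zeros(K)   # total observed reward (click times post-click reward)

for t in range(1, T + 1):
    # Empirical mean estimates s_i * alpha_i; unpulled arms get an optimistic score.
    mean = np.where(pulls > 0, reward_sum / np.maximum(pulls, 1), 1.0)
    bonus = np.sqrt(2 * np.log(t) / np.maximum(pulls, 1))
    ucb = np.where(pulls > 0, mean + bonus, np.inf)
    i = int(np.argmax(ucb))

    pulls[i] += 1
    if rng.random() < s[i]:                          # arm i is clicked with its chosen rate s_i
        reward_sum[i] += rng.binomial(1, alpha[i])   # post-click reward ~ Bernoulli(alpha_i)

print("empirical utility per arm:", np.round(reward_sum / np.maximum(pulls, 1), 3))
print("arm maximizing s_i * alpha_i:", int(np.argmax(s * alpha)))
```

In this toy environment the learner only observes click-filtered rewards and treats the click-rates as given, which is the incentive-unaware behavior the abstract argues is generally insufficient when arms respond strategically.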
Event name
ICLR 2024
Location
Vienna, Austria
Publication type
conference paper
File(s)
Name
6260_Bandits_Meet_Mechanism_De.pdf
Type
Main Article
Size
3.26 MB
Format
Adobe PDF
Checksum
(MD5):3af9ca951ae78f1b5f26c8c9dc7b33c8