Learning with bandit feedback in potential games
Abstract
This paper examines the equilibrium convergence properties of no-regret learning
with exponential weights in potential games. To establish convergence with minimal
information requirements on the players’ side, we focus on two frameworks:
the semi-bandit case (where players have access to a noisy estimate of their payoff
vectors, including strategies they did not play), and the bandit case (where players
are only able to observe their in-game, realized payoffs). In the semi-bandit case,
we show that the induced sequence of play converges almost surely to a Nash
equilibrium at a quasi-exponential rate. In the bandit case, the same result holds for
"-approximations of Nash equilibria if we introduce an exploration factor " > 0
that guarantees that action choice probabilities never fall below ". In particular, if
the algorithm is run with a suitably decreasing exploration factor, the sequence of
play converges to a bona fide Nash equilibrium with probability 1.
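The abstract refers to exponential weights run with an ε-exploration factor under bandit feedback. The following is a minimal single-player sketch of that style of update; the step size gamma, the payoff oracle, and all variable names are illustrative assumptions and do not reproduce the paper's notation or algorithm verbatim.

```python
# Sketch (not the paper's code): exponential weights with bandit feedback,
# importance-weighted payoff estimates, and an eps-exploration floor.
import numpy as np

def exp_weights_bandit(payoff, n_actions, n_rounds, gamma=0.1, eps=0.05, rng=None):
    """Run a single player's exponential-weights update under bandit feedback."""
    rng = np.random.default_rng() if rng is None else rng
    scores = np.zeros(n_actions)              # cumulative payoff estimates
    x = np.full(n_actions, 1.0 / n_actions)   # initial mixed strategy
    for t in range(n_rounds):
        # Logit (softmax) choice map, mixed with the uniform distribution so
        # every action probability stays at least eps / n_actions.
        logits = scores - scores.max()         # stabilize the exponentials
        x = np.exp(logits) / np.exp(logits).sum()
        x = (1 - eps) * x + eps / n_actions
        a = rng.choice(n_actions, p=x)         # play one action
        u = payoff(a, t)                       # observe only the realized payoff
        u_hat = np.zeros(n_actions)
        u_hat[a] = u / x[a]                    # importance-weighted estimate
        scores += gamma * u_hat                # exponential-weights score update
    return x

# Example usage with a stationary two-action payoff, just to exercise the sketch.
final_mixed_strategy = exp_weights_bandit(lambda a, t: (0.3, 0.7)[a],
                                          n_actions=2, n_rounds=500)
```

In the semi-bandit case described above, the importance-weighted estimate would be replaced by a noisy observation of the full payoff vector; the decreasing exploration factor mentioned in the abstract would correspond to letting eps shrink over the rounds.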