Countering feedback delays in multi-agent learning
Abstract
We consider a model of game-theoretic learning based on online mirror descent (OMD) with asynchronous and delayed feedback information. Instead of focusing on specific games, we consider a broad class of continuous games defined by a general equilibrium stability notion which we call λ-variational stability. Our first contribution is to show that, in this class of games, the actual sequence of play induced by OMD-based learning converges to Nash equilibria provided that the feedback delays faced by the players are synchronous and bounded. Subsequently, to tackle fully decentralized, asynchronous environments with (possibly) unbounded delays between actions and feedback, we propose a variant of OMD which we call delayed mirror descent (DMD), and which relies on the repeated leveraging of past information. With this modification, the algorithm converges to Nash equilibria with no feedback synchronicity assumptions, even when the delays grow superlinearly relative to the horizon of play.