Convergence to Nash equilibrium in continuous games with noisy first-order feedback
Abstract
This paper examines the convergence of a broad class of distributed learning dynamics for games with continuous action sets. The dynamics under study comprise a multi-agent generalization of Nesterov's dual averaging (DA) method, a primal-dual mirror descent scheme that has recently seen a major resurgence in large-scale optimization and machine learning. To account for settings with high temporal variability and uncertainty, we adopt a continuous-time formulation of dual averaging and investigate the dynamics' long-run behavior when players have either noiseless or noisy information on their payoff gradients. In both the deterministic and stochastic regimes, we establish sublinear rates of convergence of the actual and averaged trajectories to Nash equilibrium under a variational stability condition.
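As a rough illustration of the scheme described above, the sketch below simulates a discrete-time analogue of multi-agent dual averaging with noisy gradient feedback in a two-player concave game whose pseudo-gradient is strongly monotone (hence variationally stable). The specific game, step-size schedule, and Euclidean mirror map are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch (assumptions noted above): Euler-style discrete analogue of
# multi-agent dual averaging with noisy first-order feedback.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-player game on [0, 1]^2 with payoffs
# u1 = -(x1 - 0.5*x2)^2 and u2 = -(x2 - 0.5*x1)^2.
# Its pseudo-gradient is strongly monotone, and the unique
# Nash equilibrium is (0, 0).
def payoff_gradients(x):
    x1, x2 = x
    v1 = -2.0 * (x1 - 0.5 * x2)   # du1/dx1
    v2 = -2.0 * (x2 - 0.5 * x1)   # du2/dx2
    return np.array([v1, v2])

def mirror_map(y):
    # With a Euclidean regularizer on [0, 1], the mirror map
    # reduces to a projection onto the action set.
    return np.clip(y, 0.0, 1.0)

x = np.array([0.9, 0.8])          # initial joint action
y = np.zeros(2)                   # aggregated gradient "scores"
x_avg, weight = np.zeros(2), 0.0  # time-averaged trajectory

for n in range(1, 10_001):
    gamma = 1.0 / np.sqrt(n)      # step-size schedule (assumption)
    # Noisy first-order feedback: gradient plus zero-mean noise.
    v_hat = payoff_gradients(x) + 0.1 * rng.standard_normal(2)
    y += gamma * v_hat            # dual averaging: aggregate gradients
    x = mirror_map(y)             # primal action via the mirror map
    # Weighted (ergodic) time average of the actual trajectory.
    x_avg = (weight * x_avg + gamma * x) / (weight + gamma)
    weight += gamma

print("actual:", x, "averaged:", x_avg)  # both settle near (0, 0)
```

Under strong monotonicity, both the last iterate and the ergodic average settle near the equilibrium (0, 0), with the averaged trajectory typically exhibiting the smoother behavior behind the sublinear rates mentioned above.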