Distributed Stochastic Optimization in Networks With Low Informational Exchange
Abstract
We consider a distributed stochastic optimization problem in networks with a finite number of nodes. Each node adjusts its action to optimize the global utility of the network, defined as the sum of the local utilities of all nodes. While gradient descent is a common technique for solving such optimization problems, computing the gradient may require substantial information exchange. In this paper, we assume that each node has access only to a noisy numerical observation of its local utility, whose closed-form expression is not available. This assumption is realistic, especially when the system is either too complex or constantly changing. Nodes may partially exchange observations of their local utilities to estimate the global utility at each time slot. We propose a distributed algorithm based on stochastic perturbation, under the assumption that each node knows only part of the local utilities of the other nodes. Using stochastic approximation tools, we prove that our algorithm converges almost surely to the optimum, provided that the objective function is smooth and strictly concave. We also derive the convergence rate under the additional assumption of a strongly concave objective function, and show that it scales as O(K^{-0.5}) after a sufficient number of iterations K > K_0, which is the optimal rate order in terms of K for our problem. Although the proposed algorithm applies to general optimization problems, we perform simulations for a typical power control problem in wireless networks and present numerical results to corroborate our claims.
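To make the stochastic-perturbation idea concrete, the sketch below shows a generic SPSA-style (simultaneous perturbation stochastic approximation) update that maximizes a utility using only noisy function evaluations, no closed-form gradient. This is a minimal, centralized single-variable illustration of the perturbation principle, not the paper's distributed multi-node algorithm; the function name `spsa_step`, the toy utility, and the step-size and perturbation schedules (`a0`, `c0`, `alpha`, `gamma`) are illustrative assumptions.

```python
import numpy as np

def spsa_step(x, utility, k, a0=0.1, c0=0.1, alpha=1.0, gamma=0.25):
    """One SPSA-style ascent step using two noisy utility observations."""
    a_k = a0 / (k + 1) ** alpha   # diminishing step size (sum diverges, sum of squares finite)
    c_k = c0 / (k + 1) ** gamma   # diminishing perturbation magnitude
    delta = np.random.choice([-1.0, 1.0], size=x.shape)  # Rademacher perturbation directions
    # Two-sided difference along the random direction; since delta_i is +/-1,
    # dividing by delta_i is the same as multiplying by it.
    g_hat = (utility(x + c_k * delta) - utility(x - c_k * delta)) / (2.0 * c_k) * delta
    return x + a_k * g_hat        # ascent step: we maximize a concave utility

# Toy usage (assumed example): a strictly concave utility observed with noise.
rng = np.random.default_rng(0)
x_star = np.array([1.0, -2.0, 0.5])

def noisy_utility(x):
    return -np.sum((x - x_star) ** 2) + 0.01 * rng.normal()

x = np.zeros(3)
for k in range(20000):
    x = spsa_step(x, noisy_utility, k)
# x is now close to x_star, despite using only noisy scalar observations.
```

The two-evaluation structure is what keeps the informational exchange low: each iteration needs only scalar utility observations rather than a full gradient vector, which mirrors the setting described above where nodes observe, and partially exchange, numerical values of their local utilities.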