Sequential stochastic blackbox optimization with zeroth-order gradient estimators
Abstract
This work considers stochastic optimization problems in which the objective function values can only be computed by a blackbox corrupted by random noise following an unknown distribution. The proposed method is based on sequential stochastic optimization (SSO): the original problem is decomposed into a sequence of subproblems. Each subproblem is solved with increasingly fine precision using a zeroth-order version of a sign stochastic gradient descent with momentum algorithm (ZO-Signum). This decomposition enables good exploration of the space while preserving the efficiency of the algorithm once it approaches the solution. Under a Lipschitz continuity assumption on the blackbox, a convergence rate in expectation is derived for the ZO-Signum algorithm. Moreover, if the blackbox is smooth and convex, or locally convex around its minima, a convergence rate to an epsilon-optimal point of the problem may be obtained for the SSO algorithm. Numerical experiments are conducted to compare the SSO algorithm with other state-of-the-art algorithms and to demonstrate its competitiveness.
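The following Python sketch illustrates the two ingredients described above: a zeroth-order signum inner solver (sign SGD with momentum, driven by a finite-difference gradient estimate) wrapped in a sequential outer loop that tightens the precision of each subproblem. It is a minimal illustration under stated assumptions, not the paper's implementation: the Gaussian-smoothing estimator, the geometric schedules for the step size and smoothing radius, and all names and parameter values (zo_signum, sso, sigma, beta, ...) are assumptions made for the example.

```python
# Minimal sketch of a ZO-Signum inner solver inside an SSO outer loop.
# Assumptions (not from the paper): Gaussian-smoothing gradient estimator,
# geometric precision schedules, and all parameter values shown here.
import numpy as np

def zo_gradient(f, x, sigma, n_samples, rng):
    """Zeroth-order gradient estimate of f at x via Gaussian smoothing."""
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        # Forward-difference estimator along a random direction u.
        g += (f(x + sigma * u) - f(x)) / sigma * u
    return g / n_samples

def zo_signum(f, x0, steps, lr, sigma, beta=0.9, n_samples=4, rng=None):
    """Sign stochastic gradient descent with momentum, using only
    function-value (zeroth-order) information."""
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    m = np.zeros_like(x)
    for _ in range(steps):
        g = zo_gradient(f, x, sigma, n_samples, rng)
        m = beta * m + (1.0 - beta) * g   # momentum on the noisy estimate
        x = x - lr * np.sign(m)           # signum update: step by sign only
    return x

def sso(f, x0, n_subproblems=5, steps=200):
    """Sequential stochastic optimization: solve a sequence of subproblems
    with increasingly fine precision (smaller step and smoothing radius)."""
    x = x0.copy()
    for k in range(n_subproblems):
        lr = 0.1 * 2.0 ** (-k)      # coarse steps early favor exploration,
        sigma = 0.5 * 2.0 ** (-k)   # finer precision near the solution
        x = zo_signum(f, x, steps, lr, sigma)
    return x

# Usage example: a quadratic blackbox corrupted by random noise.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    noisy = lambda x: np.sum((x - 2.0) ** 2) + 0.01 * rng.standard_normal()
    print(sso(noisy, np.zeros(5)))  # iterates approach the minimizer (2,...,2)
```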