Solving highly cyclic distributed optimization problems without busting the bank: a decimation-based approach
Abstract
In the context of solving large distributed constraint optimization problems (DCOPs), belief propagation and incomplete inference algorithms are natural candidates. However, when the problem structure is highly cyclic, these solution methods generally perform poorly, owing to non-convergence and a large number of exchanged messages. To improve the performance of the MaxSum inference algorithm on cyclic constraint optimization problems, we take inspiration from the belief-propagation-guided decimation used to solve satisfiability problems on sparse random graphs (k-satisfiability). We propose the novel DeciMaxSum method, which is parameterized by policies deciding when to trigger decimation, which variables to decimate, and which values to assign to decimated variables. Based on an empirical evaluation on classical constraint optimization benchmarks (graph coloring, random graphs, and the Ising model), several combinations of these policies (periodic or cycle-detection-based triggering, parallel or sequential decimation, random or deterministic variable selection, and deterministic or sampling-based value selection) outperform state-of-the-art competitors in many settings.