A Dual-Based Distributed Optimization Method on Time-Varying Networks
Abstract
We propose a time-varying dual accelerated gradient method for minimizing the average of strongly convex and smooth functions over a time-varying network with n nodes. We prove that the Time-Varying Dual Accelerated Gradient Ascent (TV-DAGA) method converges at a linear rate, so that the time to reach an ε-neighborhood of the solution grows only logarithmically in 1/ε. We test the proposed method on two classes of problems: regularized least-squares and logistic classification problems. For each class, we generate 1000 problems and use Dolan–Moré performance profiles to compare our results with those of several state-of-the-art algorithms, illustrating the efficiency of our method.
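To make the dual-ascent template concrete, the following is a minimal sketch, assuming quadratic local objectives so that the inner minimization has a closed form, and a toy time-varying graph (a ring plus one random chord per round). It is a heuristic illustration of dual gradient ascent with momentum under a changing network, not the TV-DAGA method itself: the step size alpha, momentum beta, and graph schedule are hypothetical placeholders, and no convergence guarantee is implied.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4  # number of nodes and variable dimension (toy sizes)

# Local objectives f_i(x) = 0.5 x^T H_i x - c_i^T x: strongly convex and smooth.
H = [np.eye(d) + A @ A.T for A in (rng.standard_normal((d, d)) for _ in range(n))]
Hinv = [np.linalg.inv(Hi) for Hi in H]
c = [rng.standard_normal(d) for _ in range(n)]

def laplacian(t):
    """Laplacian of the time-t communication graph: a ring plus one random chord."""
    rng_t = np.random.default_rng(t)
    L = np.zeros((n, n))
    edges = [(i, (i + 1) % n) for i in range(n)]
    edges.append(tuple(rng_t.choice(n, size=2, replace=False)))
    for i, j in edges:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    return L

def primal_from_dual(y, L):
    """Blockwise minimizer X*(y) of the Lagrangian sum_i f_i(x_i) + <y, L X>."""
    Ly = L @ y  # one communication round: node i mixes its neighbors' duals
    return np.stack([Hinv[i] @ (c[i] - Ly[i]) for i in range(n)])

# Dual ascent with a Nesterov-style extrapolation step; alpha and beta are
# untuned placeholders, not the step sizes analyzed in the paper.
alpha, beta = 0.01, 0.5
y = np.zeros((n, d)); y_prev = y.copy()
for t in range(2000):
    L = laplacian(t)
    v = y + beta * (y - y_prev)                 # momentum / extrapolation
    y_prev = y
    y = v + alpha * L @ primal_from_dual(v, L)  # ascend along dual gradient L X*(v)

X = primal_from_dual(y, laplacian(0))
print("max deviation from consensus:", np.abs(X - X.mean(axis=0)).max())
```

The key structural point the sketch reproduces is that each dual gradient evaluation costs one round of neighbor communication (the product with the current Laplacian) plus purely local computation.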
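The evaluation methodology can also be made concrete: the Dolan–Moré performance profile ρ_s(τ) reports, for each solver s, the fraction of problems on which its cost (e.g., iterations or wall-clock time) falls within a factor τ of the best cost achieved by any solver on that problem. Below is a minimal sketch of how such profiles are computed; the cost data and solver names other than TV-DAGA are synthetic placeholders, not the paper's results.

```python
import numpy as np
import matplotlib.pyplot as plt

def performance_profile(T, taus):
    """
    T: (n_problems, n_solvers) array of costs, with np.inf where a
       solver failed on a problem.
    Returns rho of shape (len(taus), n_solvers): the fraction of problems
    each solver handles within a factor tau of the best solver.
    """
    best = T.min(axis=1, keepdims=True)  # best cost per problem
    ratios = T / best                    # performance ratios r_{p,s} >= 1
    return np.array([(ratios <= tau).mean(axis=0) for tau in taus])

# Synthetic data: 1000 problems, 3 solvers (placeholder distributions)
rng = np.random.default_rng(1)
T = rng.lognormal(mean=0.0, sigma=[0.3, 0.5, 0.7], size=(1000, 3))
taus = np.linspace(1, 5, 200)
rho = performance_profile(T, taus)
for s, name in enumerate(["TV-DAGA", "solver B", "solver C"]):
    plt.plot(taus, rho[:, s], label=name)
plt.xlabel("performance ratio τ"); plt.ylabel("ρ_s(τ)")
plt.legend(); plt.show()
```

A higher curve dominates: ρ_s(1) is the fraction of problems on which solver s is the outright winner, and the value of ρ_s(τ) for large τ measures robustness across the problem set.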