A Dual-Based Distributed Optimization Method on Time-Varying Networks

Authors

  • Elham Monifi*, Department of Mathematics, Faculty of Mathematical Sciences, Sharif University of Technology, Tehran, Iran.
  • Nezam Mahdavi-Amiri, Department of Mathematics, Faculty of Mathematical Sciences, Sharif University of Technology, Tehran, Iran.

https://doi.org/10.48314/anowa.v1i2.43

Abstract

We propose a time-varying dual accelerated gradient method for minimizing the average of strongly convex and smooth local functions over a time-varying network. We prove that the Time-Varying Dual Accelerated Gradient Ascent (TV-DAGA) method converges at a linear rate, so the number of iterations needed to reach an ε-neighborhood of the solution scales logarithmically in 1/ε. We test the proposed method on two classes of problems: regularized least-squares and logistic classification problems. For each class, we generate 1000 problems and use Dolan–Moré performance profiles to compare our results with those of several state-of-the-art algorithms, illustrating the efficiency of our method.
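To make the dual-ascent idea behind the abstract concrete, the following is a minimal illustrative sketch, not the authors' TV-DAGA: Nesterov-accelerated gradient ascent on the dual of a consensus-constrained problem with simple quadratic local objectives, on a fixed network for simplicity (the paper's method handles a time-varying sequence of networks). All function names, the step-size choice, and the toy data are assumptions made for illustration.

```python
import numpy as np

# Sketch only: accelerated gradient ascent on the dual of
#   min_x  sum_i 0.5 * alpha * (x_i - b_i)^2   s.t.  W x = 0,
# where W is a graph Laplacian (W x = 0 iff all x_i agree, for a
# connected graph). The network is fixed here; TV-DAGA in the paper
# treats a time-varying network.

def dual_accel_ascent_sketch(b, W, alpha=1.0, iters=500):
    n = len(b)
    lam = np.zeros(n)          # dual variable (multipliers for W x = 0)
    y = lam.copy()             # Nesterov extrapolation point
    # Lipschitz constant of the dual gradient: ||W||_2^2 / alpha
    L = np.linalg.norm(W, 2) ** 2 / alpha
    eta = 1.0 / L
    theta_prev = 1.0
    for _ in range(iters):
        # Primal minimizer of the Lagrangian at the extrapolated point:
        # x*(y) = argmin_x sum_i 0.5*alpha*(x_i - b_i)^2 + y^T W x
        x = b - (W.T @ y) / alpha
        # The dual gradient is the constraint residual W x; ascend on it.
        lam_next = y + eta * (W @ x)
        theta = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * theta_prev ** 2))
        y = lam_next + ((theta_prev - 1.0) / theta) * (lam_next - lam)
        lam, theta_prev = lam_next, theta
    # Primal iterate recovered from the final dual point
    return b - (W.T @ lam) / alpha

# Toy example: Laplacian of a 4-node path graph, local targets b.
W = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)
b = np.array([1.0, 2.0, 3.0, 6.0])
x = dual_accel_ascent_sketch(b, W)
# x should be close to consensus at mean(b) = 3.0
```

At the dual optimum, strong duality recovers the primal solution, which for this toy objective is consensus at the average of the local targets; the linear (geometric) convergence claimed for TV-DAGA in the abstract refers to the authors' analysis, not to this simplified fixed-network sketch.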

Keywords:

Distributed learning, Distributed optimization, Time-varying networks

Published

2025-05-24

How to Cite

Monifi, E., & Mahdavi-Amiri, N. (2025). A dual-based distributed optimization method on time-varying networks. Annals of Optimization With Applications, 1(2), 110-118. https://doi.org/10.48314/anowa.v1i2.43
