On The Performance and Linear Convergence of Decentralized Primal-Dual Methods

Speaker: Sulaiman Alghunaim
Affiliation: Ph.D. Candidate, UCLA

Abstract: Decentralized multi-agent optimization is a powerful paradigm with applications in diverse fields of learning and engineering design. This dissertation focuses on multi-agent networks in which agents are connected through some topology and are allowed to share information only locally; their overall goal is to seek the minimizer of a global optimization problem through localized interactions. In decentralized consensus problems, the agents are coupled through a common consensus variable that they need to agree upon, whereas in decentralized resource allocation problems they are coupled through global affine constraints. This work studies the performance and linear convergence properties of decentralized primal-dual methods for the solution of decentralized multi-agent optimization problems.
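For concreteness, the two problem classes can be written generically as follows (the symbols J_k, w, B_k, and b are illustrative placeholders, not notation taken from the talk):

```latex
\underbrace{\min_{w}\ \sum_{k=1}^{N} J_k(w)}_{\text{consensus}}
\qquad\qquad
\underbrace{\min_{\{w_k\}}\ \sum_{k=1}^{N} J_k(w_k)
\quad \text{s.t.}\ \sum_{k=1}^{N} B_k w_k = b}_{\text{resource allocation}}
```

In the consensus problem every agent must agree on the one common variable w; in the resource allocation problem each agent owns its own variable w_k, and the agents are coupled only through the affine constraint.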

Various decentralized optimization algorithms already exist in the literature. Some are derived from a primal-dual perspective; others are derived as gradient-tracking mechanisms meant to track the average of the local gradients. Among the gradient-tracking methods are the adapt-then-combine implementations, which are based on the earlier study of diffusion strategies for adaptation and learning and have been observed to perform better than the other methods. However, it is still unclear how all of these methods relate to each other. In this dissertation, we develop a novel adapt-then-combine primal-dual algorithmic framework that, when the objective is smooth, captures most state-of-the-art gradient methods as special cases, including all variations of the gradient-tracking methods. We also develop a concise and novel analysis technique that establishes the linear convergence of this general framework under strongly convex objectives. Owing to the unified framework, the analysis reveals important characteristics of these methods, such as their convergence rates and step-size stability ranges. Moreover, the analysis reveals how the augmented-Lagrangian penalty matrix term, which is utilized in most of these methods, affects the performance of decentralized algorithms.
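As a rough illustration of the adapt-then-combine gradient-tracking idea, the sketch below runs a standard ATC tracking recursion on a toy problem (five agents on a ring, quadratic local costs). All choices here — the network, costs, and step size — are illustrative and not taken from the dissertation:

```python
import numpy as np

# Toy adapt-then-combine (ATC) gradient tracking on a ring of N agents.
# Local costs f_i(x) = 0.5*(x - b_i)^2, so the global minimizer is mean(b).
N = 5
rng = np.random.default_rng(0)
b = rng.normal(size=N)

# Doubly stochastic combination matrix for a ring (self + two neighbors).
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = A[i, (i - 1) % N] = A[i, (i + 1) % N] = 1.0 / 3.0

grad = lambda x: x - b      # stacked local gradients, one entry per agent
x = np.zeros(N)             # local iterates
y = grad(x)                 # trackers, initialized at the local gradients
alpha = 0.3                 # step size

for _ in range(200):
    x_new = A @ (x - alpha * y)              # adapt (gradient step), then combine
    y = A @ (y + grad(x_new) - grad(x))      # track the average gradient
    x = x_new

# Every agent converges to the consensus minimizer mean(b).
assert np.allclose(x, b.mean(), atol=1e-6)
```

The "adapt-then-combine" order is visible in the first update: each agent takes its local correction step first and only then averages with its neighbors, which is the structural feature the abstract credits for the improved performance of these implementations.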

One important question regarding decentralized consensus methods is whether linear convergence is achievable in non-smooth settings. For centralized proximal gradient algorithms, linear convergence has been established in the presence of a non-smooth term. In this dissertation, we close the gap between centralized and decentralized proximal gradient algorithms and show that decentralized proximal algorithms can also achieve linear convergence in the presence of a non-smooth term. Furthermore, we show that when each agent owns a different local non-smooth term, linear convergence cannot be established in general.

Most works that study decentralized optimization assume that all agents are involved in computing all variables. In many applications, however, the coupling across agents is sparse in the sense that only a few agents are involved in computing certain variables. We show how to design algorithms that exploit such sparsity structure and, more importantly, we establish analytically the importance of exploiting it in coupled large-scale networks.
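The centralized baseline mentioned above — linear convergence of proximal gradient with a non-smooth term — can be seen on a tiny l1-regularized quadratic. This is a textbook illustration under assumed toy data, not the decentralized analysis from the dissertation:

```python
import numpy as np

# Centralized proximal gradient on f(x) = 0.5*||x - b||^2 + lam*||x||_1.
# The quadratic part is strongly convex, so the iterates converge linearly
# even though the l1 term is non-smooth.

def soft(v, t):
    # Proximal operator of t*||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

b = np.array([1.5, -0.2, 0.8, -3.0])   # illustrative data
lam, alpha = 0.5, 0.6                  # regularizer weight, step size
x = np.zeros_like(b)
x_star = soft(b, lam)                  # closed-form minimizer for this problem

errs = []
for _ in range(50):
    x = soft(x - alpha * (x - b), alpha * lam)   # proximal gradient step
    errs.append(np.linalg.norm(x - x_star))

# The error contracts by at least |1 - alpha| = 0.4 per iteration.
assert errs[-1] < 1e-12
```

Because the prox operator is nonexpansive, each step inherits the contraction of the gradient step on the strongly convex quadratic, which is exactly the geometric (linear) decay the abstract refers to; the dissertation's contribution is showing when this carries over to the decentralized setting.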

Biography: Sulaiman Alghunaim received his B.S. in electrical engineering from Kuwait University in 2013. Prior to joining UCLA, he worked for Huawei Telecommunication Company in Kuwait. He received his M.S. degree from the University of California, Los Angeles in 2016. He is currently a Ph.D. candidate in the Electrical and Computer Engineering Department at UCLA. His research interests include decentralized optimization and learning.

For more information, contact Prof. Ali H. Sayed (sayed@ee.ucla.edu)

Date/Time:
Date(s) - Oct 04, 2019
1:00 pm - 2:30 pm

Location:
E-IV Tesla Room #53-125
420 Westwood Plaza - 5th Flr., Los Angeles CA 90095