Qsparse-local-SGD: Communication Efficient Distributed SGD with Quantization, Sparsification and Local Computations

Speaker: Debraj Basu
Affiliation: ECE Department - UCLA

Abstract: Large-scale distributed optimization has become increasingly important with the emergence of edge computation architectures, such as the federated learning setup, where large amounts of data are massively distributed across personal devices. A key bottleneck for many such large-scale problems is the communication overhead of exchanging information between devices over bandwidth-limited networks. Existing approaches mitigate this bottleneck either by using different forms of compression or by iteratively mixing local models. We first propose a novel class of highly communication-efficient operators that employ quantization with explicit sparsification. Furthermore, we incorporate local iterations into our algorithm, which allows communication to be infrequent and possibly asynchronous, enabling significantly reduced communication.

Putting these together, we obtain distributed Qsparse-local-SGD for federated learning, for which our analysis demonstrates convergence rates matching those of vanilla distributed SGD, for both non-convex and convex objectives; that is, the compression comes almost for "free". We characterize the asymptotically allowable number of local iterations for Qsparse-local-SGD. Our numerics demonstrate that Qsparse-local-SGD combines the bit savings of the composed operators with those of local computations, outperforming cases where these techniques are used individually.
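To make the composed operator concrete, the following is a minimal sketch of one plausible instance: top-k sparsification followed by QSGD-style unbiased stochastic quantization of the surviving entries. The function names (`top_k`, `qsgd_quantize`, `qsparse`) and the specific quantizer are illustrative assumptions, not the exact operators from the talk.

```python
import numpy as np

def top_k(x, k):
    # Keep the k largest-magnitude entries of x; zero out the rest.
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def qsgd_quantize(x, s, rng):
    # QSGD-style stochastic quantization to s uniform levels in [0, 1],
    # scaled by ||x||. Rounding up with probability equal to the fractional
    # part makes the quantizer unbiased in expectation.
    norm = np.linalg.norm(x)
    if norm == 0.0:
        return x
    level = np.abs(x) / norm * s          # real-valued level in [0, s]
    lower = np.floor(level)
    round_up = rng.random(x.shape) < (level - lower)
    return np.sign(x) * norm * (lower + round_up) / s

def qsparse(x, k, s, rng):
    # Composed operator: sparsify first, then quantize the survivors,
    # so only up to k (index, quantized value) pairs need to be sent.
    return qsgd_quantize(top_k(x, k), s, rng)

rng = np.random.default_rng(0)
g = rng.standard_normal(100)              # stand-in for a local gradient
compressed = qsparse(g, k=10, s=4, rng=rng)
```

Sparsifying before quantizing is what makes the composition pay off: the quantizer's resolution is spent only on the k coordinates that survive, and the message reduces to at most k index-value pairs.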


Debraj Basu received his B.Tech degree in Electrical Engineering from the Indian Institute of Technology Bombay in 2017. His interests predominantly span the intersection of distributed learning and information theory. Debraj was a research intern at Nanyang Technological University, Singapore, in 2015, and at Adobe in 2016 and 2018. He joined LICOS at UCLA in 2017 as a Master's student, and his recent work has focused on devising communication-efficient methods for large-scale distributed learning.

For more information, contact Prof. Suhas Diggavi ()

Date(s) - Jun 10, 2019
1:00 pm - 2:30 pm

E-IV Maxwell Room #57-124
420 Westwood Plaza - 5th Flr., Los Angeles CA 90095