Exact Diffusion Learning over Networks

Speaker: Kun Yuan
Affiliation: Ph.D. Candidate, UCLA

Abstract: In this dissertation, we study optimization, adaptation, and learning problems over connected networks, in which each agent collects and learns from its own local data and can communicate with its immediate neighbors. While no single node in the network may be capable of sophisticated behavior on its own, the agents collaborate to solve large-scale and challenging learning problems.

Different approaches have been proposed in the literature to boost the learning capabilities of networked agents. Among them, diffusion strategies have been shown to be particularly well suited, owing to their wider stability range and improved performance in adaptive scenarios compared with other methods. However, diffusion implementations suffer from a small inherent bias in their iterates: when a constant step-size is employed to solve deterministic optimization problems, the iterates generated by the diffusion strategy converge to a small neighborhood of the desired global solution, but not to the exact solution itself. This bias is not due to gradient noise arising from stochastic approximation; it is instead caused by the update structure of diffusion implementations. Its existence raises three questions: (1) What is the origin of this inherent bias? (2) Can we design new approaches that eliminate it? (3) Does correcting the bias benefit distributed optimization, distributed adaptation, or distributed learning?
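
To make the bias concrete, here is a minimal sketch, not taken from the dissertation, of the standard adapt-then-combine (ATC) diffusion recursion on a toy deterministic problem: each agent k holds a scalar quadratic cost J_k(w) = (w - b_k)^2 / 2, so the global minimizer is the average of the b_k. The cycle topology, combination weights, and step-size below are illustrative assumptions.

    # Diffusion (ATC) on a toy quadratic problem; all parameters are illustrative.
    import numpy as np

    np.random.seed(0)
    K, mu, iters = 5, 0.1, 2000
    b = np.random.randn(K)                  # local minimizers; global solution is b.mean()

    # Symmetric, doubly stochastic combination matrix for an assumed cycle topology
    A = np.zeros((K, K))
    for k in range(K):
        A[k, k] = 0.5
        A[k, (k + 1) % K] = 0.25
        A[k, (k - 1) % K] = 0.25

    w = np.zeros(K)                         # one scalar iterate per agent
    for _ in range(iters):
        psi = w - mu * (w - b)              # adaptation: local gradient step
        w = A @ psi                         # combination: average with neighbors

    # Nonzero even though the problem is deterministic: the inherent bias
    print("distance to global solution:", np.linalg.norm(w - b.mean()))

The printed distance settles at a small but nonzero value that shrinks with the step-size, even though the costs are deterministic; this is the inherent bias described above.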

This dissertation provides affirmative answers to these questions. Specifically, we design a new “exact diffusion” approach that eliminates the inherent bias in diffusion. Exact diffusion has almost the same structure as diffusion, with the addition of a “correction” step between the adaptation and combination steps. The dissertation then studies the performance of exact diffusion in the scenarios of distributed optimization, distributed adaptation, and distributed learning, respectively. For distributed optimization, exact diffusion is proven to converge exponentially fast to the exact global solution under proper conditions. For distributed adaptation, exact diffusion is proven to achieve a better steady-state mean-square error than diffusion, and this superiority is shown analytically to be more pronounced for sparsely connected networks such as line, cycle, grid, and other topologies. For distributed learning, exact diffusion can be integrated with the amortized variance-reduced gradient (AVRG) method so that it converges exponentially fast to the exact global solution while employing only stochastic gradients at each iteration. The dissertation also compares exact diffusion with other state-of-the-art methods in the literature. Extensive numerical simulations illustrate the theoretical results derived in the dissertation.
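
For concreteness, the sketch below adds the correction step to the recursion in the previous snippet. The adapt-correct-combine structure and the symmetrized combination matrix A_bar = (I + A)/2 follow the exact-diffusion construction; the toy problem, topology, and parameters remain the same illustrative assumptions as before.

    # Exact diffusion on the same toy quadratic problem; parameters are illustrative.
    import numpy as np

    np.random.seed(0)
    K, mu, iters = 5, 0.1, 2000
    b = np.random.randn(K)                  # local minimizers; global solution is b.mean()

    A = np.zeros((K, K))                    # same assumed cycle topology as before
    for k in range(K):
        A[k, k] = 0.5
        A[k, (k + 1) % K] = 0.25
        A[k, (k - 1) % K] = 0.25
    A_bar = (np.eye(K) + A) / 2             # symmetrized combination matrix

    w = np.zeros(K)
    psi_prev = w.copy()                     # initialization psi_0 = w_0
    for _ in range(iters):
        psi = w - mu * (w - b)              # adaptation: local gradient step
        phi = psi + w - psi_prev            # correction: cancels the inherent bias
        w = A_bar @ phi                     # combination: average with neighbors
        psi_prev = psi

    # Essentially zero: the iterates reach the exact global solution
    print("distance to global solution:", np.linalg.norm(w - b.mean()))

Running the two snippets side by side shows the diffusion iterates stalling at a nonzero distance from the global solution, while the exact-diffusion iterates drive that distance to numerical zero at a linear rate.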

Biography: Kun Yuan is a Ph.D. candidate in the Department of Electrical and Computer Engineering at UCLA, supervised by Prof. Ali H. Sayed. He received his M.S. degree from the University of Science and Technology of China (USTC) in 2014 and his B.E. degree from Xidian University in 2011. His research interests center on designing efficient parallel and distributed algorithms for large-scale optimization, adaptation, and machine learning problems. He is a recipient of the 2017 IEEE Signal Processing Society Young Author Best Paper Award.

For more information, contact Prof. Ali H. Sayed (sayed@ee.ucla.edu)

Date/Time:
Jun 17, 2019
10:00 am - 12:00 pm

Location:
E-IV Elliott Room #53-135
420 Westwood Plaza - 5th Flr., Los Angeles CA 90095