Robust Communication and Optimization over Dynamic Networks

Speaker: Can Karakus
Affiliation: Ph.D. Candidate - UCLA

Abstract: Many of the communication and computation networks arising in modern systems have resources that are fundamentally dynamic, time-varying, and ultimately unreliable in their availability. In wireless communication networks, this unreliability may manifest itself as variability in channel conditions and intermittent availability of undedicated resources (such as unlicensed spectrum); in large-scale distributed optimization and machine learning, it may manifest as communication bottlenecks and straggling or failed nodes, which slow down the computational task. In this thesis, we develop information-theoretically motivated approaches toward robust and reliable communication and computation systems built on networks with unreliable resources.

In the first part of the thesis, we focus on three problems in wireless networks, each of which involves opportunistically harnessing time-varying resources while providing theoretical performance guarantees. First, we show that in full-duplex uplink-downlink cellular networks, a simple, low-overhead user scheduling scheme that exploits the variations in channel conditions can optimally mitigate inter-user interference in the many-user regime. Next, we consider the use of intermittently available links over unlicensed bands to enhance communication over the licensed cellular band. We show that channel output feedback over such unreliable links, combined with quantize-map-forward relaying, still provides a generalized-degrees-of-freedom gain in interference networks. Finally, we consider the use of such intermittent links in device-to-device cooperation to enhance the cellular downlink. We develop an optimal dynamic resource allocation algorithm for such networks using stochastic approximation and graph-theoretic techniques, and show that the resulting scheme yields up to a 5-6x throughput gain for cell-edge users.

In the second part, we consider the problem of distributed optimization and learning over large-scale, yet unreliable, clusters. Focusing on a master-worker architecture, where large-scale datasets are distributed across a set of computational nodes, we develop a framework for embedding redundancy in the dataset to mitigate node failures and delays. This framework consists of an efficient linear transformation (coding) of the dataset that results in an over-complete representation, combined with a coding-oblivious application of a distributed optimization algorithm. We show that if the linear transformation is designed to satisfy certain spectral properties resembling the restricted isometry property, then nodes that fail or delay their computation can be dynamically left out of the computational process while the algorithm still converges, at a fast rate, to a reasonable solution. This obviates the need for explicit fault-tolerance mechanisms and significantly speeds up the overall computation.
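
To give a concrete flavor of the first scheduling result, the snippet below is a toy simulation of the underlying multiuser-diversity idea (an illustration only, not the speaker's actual scheme): in a full-duplex cell, if the uplink-downlink user pair with the weakest inter-user interference channel is scheduled in each slot, the interference vanishes as the number of users grows. The Rayleigh-fading model, the SNR, and all parameters are assumptions made for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    n_users, n_slots, snr = 50, 2000, 10.0  # illustrative parameters

    # Downlink rate when the scheduled uplink user leaks interference power g.
    def dl_rate(g):
        return np.log2(1.0 + snr / (1.0 + snr * g))

    opportunistic = arbitrary = 0.0
    for _ in range(n_slots):
        # Power gains of uplink-user -> downlink-user interference links
        # (Rayleigh fading, so the gains are exponentially distributed).
        G = rng.exponential(size=(n_users, n_users))
        np.fill_diagonal(G, np.inf)        # a user cannot be paired with itself
        opportunistic += dl_rate(G.min())  # schedule the least-interfering pair
        arbitrary += dl_rate(G[0, 1])      # baseline: a fixed, arbitrary pair

    print("avg downlink rate, opportunistic pairing:", opportunistic / n_slots)
    print("avg downlink rate, arbitrary pairing:    ", arbitrary / n_slots)

As the number of users grows, the minimum of the roughly n_users^2 interference gains tends to zero, so the opportunistic average approaches the interference-free rate log2(1 + snr).

Similarly, the encoded-optimization framework from the second part can be sketched for a least-squares objective. The sketch below is again an illustration under assumed choices, not the speaker's implementation: it encodes the data with a tall i.i.d. Gaussian matrix S satisfying E[S^T S] = I, distributes the encoded rows across workers, and runs gradient descent while dropping two straggling workers in every iteration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Original least-squares problem: min_w ||X w - y||^2 (toy dimensions).
    n, d = 200, 20
    X = rng.standard_normal((n, d))
    w_true = rng.standard_normal(d)
    y = X @ w_true + 0.01 * rng.standard_normal(n)

    # Redundant linear encoding: a tall random matrix with E[S^T S] = I
    # yields an over-complete representation (2x redundancy here).
    m = 2 * n
    S = rng.standard_normal((m, n)) / np.sqrt(m)
    X_enc, y_enc = S @ X, S @ y

    # Partition the encoded rows across the workers.
    n_workers = 8
    X_blocks = np.array_split(X_enc, n_workers)
    y_blocks = np.array_split(y_enc, n_workers)

    # Conservative step size from the curvature of the encoded problem.
    L = np.linalg.eigvalsh(X_enc.T @ X_enc).max()
    step = 0.5 / L

    # Coding-oblivious gradient descent: each iteration, two workers straggle
    # and are dropped; the surviving partial gradients are rescaled and summed.
    w = np.zeros(d)
    for _ in range(300):
        alive = rng.choice(n_workers, size=n_workers - 2, replace=False)
        g = np.zeros(d)
        for i in alive:
            Xi, yi = X_blocks[i], y_blocks[i]
            g += Xi.T @ (Xi @ w - yi)
        w -= step * (n_workers / alive.size) * g

    print("distance to the true model:", np.linalg.norm(w - w_true))

Because S^T S is close to the identity, the sum of partial gradients over any large enough subset of workers remains an approximate gradient of the original problem, which is why stragglers can simply be ignored.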

Biography: Can Karakus is a Ph.D. candidate in the Electrical and Computer Engineering Department at the University of California, Los Angeles (UCLA). He received his M.S. degree from UCLA in 2013 and his B.S. degree in Electrical Engineering from Bilkent University, Turkey, in 2011. His research interests are in optimization and information theory, with applications to wireless networks, machine learning, and distributed computing. He is a recipient of the UCLA Graduate Division Fellowship, the UCLA Preliminary Exam Fellowship, and the Qualcomm Roberto Padovani Award.

For more information, contact Prof. Suhas Diggavi (suhasdiggavi@ucla.edu)

Date/Time:
Feb 27, 2018
12:00 pm - 2:00 pm

Location:
E-IV Tesla Room #53-125
420 Westwood Plaza - 5th Flr., Los Angeles CA 90095