Suhas Diggavi, a professor in the Electrical and Computer Engineering department, was recently announced as a recipient of the 2020 Amazon Research Awards (ARA). Diggavi was one of 101 recipients of the award, alongside Yizhou Sun, an associate professor in UCLA’s Computer Science department.
Diggavi was selected from a record-high number of applicants, who applied to receive a “no-strings-attached” gift for broad research ideas pertaining to certain scientific subjects. Diggavi’s research, titled “Compressed private and secure distributed edge learning,” aims to address issues in machine learning: specifically, privacy, efficiency, and protection from malicious actors in collaborative learning.
These issues apply to “many information and sensing networks,” several of which are used in everyday contexts. For instance, smartphones can predict what we type through autocompletion; however, Diggavi noted that although such a device can learn aspects of our language, it cannot predict exactly what we are going to say because it lacks sufficient local data for sophisticated learning models. The algorithm could improve through a collaborative machine learning system, which would use data from multiple devices to compose a “broader prediction model.” But Diggavi explained there are caveats to such a model: “You want to do that without actually ever sharing your data, or the particular thing that you’re doing. You absolutely want to get a guarantee that nobody can figure out what your actual data is.” He noted that other challenges in building such a collaborative model include sharing large amounts of data efficiently and protecting shared data against malicious actors.
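To make the idea concrete, the collaborative setup described above can be sketched in a few lines of Python. This is an illustrative toy, not Diggavi’s actual method: two hypothetical devices each run gradient descent on their own private data and share only model weights, which a server averages into a “broader prediction model.” The names (`local_update`, `federated_average`) and the tiny dataset are invented for illustration.

```python
def local_update(weights, local_data, lr=0.1):
    """One gradient-descent step on a linear model, using only this
    device's private (x, y) pairs; the raw data never leaves the device."""
    grad = [0.0] * len(weights)
    for x, y in local_data:
        err = sum(w * xi for w, xi in zip(weights, x)) - y
        for i, xi in enumerate(x):
            grad[i] += err * xi
    n = len(local_data)
    return [w - lr * g / n for w, g in zip(weights, grad)]

def federated_average(updates):
    """Server-side step: average the devices' weight vectors."""
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(updates[0]))]

# Two devices with different private datasets, both drawn from y = 2*x
device_a = [([1.0], 2.0), ([2.0], 4.0)]
device_b = [([3.0], 6.0)]

weights = [0.0]
for _ in range(50):
    local_weights = [local_update(weights, d) for d in (device_a, device_b)]
    weights = federated_average(local_weights)
```

After a few dozen rounds the shared weight approaches 2.0, even though neither device alone holds all the data; only weights, never examples, cross the network.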
“Such valid privacy concerns can be a significant roadblock to collaborative learning,” Diggavi said. While he agreed these concerns are valid, he emphasized the benefits of collaborative learning once they are addressed: “There is some value to learning from each other because you do not have enough data of your own to get useful learning models and predictions. But you want to collaborate with very strict guarantees that nothing that you care about is leaked. How do you do that in a distributed/decentralized setup without trusted intermediaries?” This is the question Diggavi hopes to answer, one engineers have posed before, given its broader implications for collaborative and distributed machine learning.
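The question of collaborating “without trusted intermediaries” can be loosely illustrated with the pairwise-masking idea behind secure aggregation. The sketch below is a toy, not the protocol Diggavi’s project proposes: each pair of devices agrees on a shared random mask that one adds and the other subtracts, so the masks cancel in the sum. The server recovers the aggregate update while any individual masked update looks like noise. Real protocols use cryptographic key agreement and finite-field arithmetic, and handle dropouts and collusion.

```python
import random

def masked_updates(true_updates, seed=0):
    """Each pair of devices (i, j) shares a random mask: i adds it,
    j subtracts it. Every mask appears once with each sign, so all
    masks cancel when the server sums the masked updates."""
    rng = random.Random(seed)
    n = len(true_updates)
    masked = list(true_updates)
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.uniform(-100, 100)
            masked[i] += mask
            masked[j] -= mask
    return masked

true_updates = [1.5, -0.5, 2.0]        # each device's private model update
masked = masked_updates(true_updates)  # what the server actually receives

server_sum = sum(masked)   # equals sum(true_updates), up to rounding
true_sum = sum(true_updates)
```

The server learns the sum it needs for averaging, but each value it receives is offset by masks it never sees, so no single device’s update is exposed.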
The award offers recipients a variety of advantages, including unrestricted funds, access to 200+ Amazon public datasets, AWS Promotional Credits, and an Amazon research consultant who offers advice and opportunities to participate in Amazon-exclusive events. Bratin Saha, vice president of AWS Machine Learning Services, said in a statement, “We are excited by the depth of [the recipients’] proposals, as well as the opportunity to advance the science through strengthened connections among academic researchers, their institutions, and our research teams.”
Among his many honors, Professor Diggavi also recently received the prestigious Guggenheim Fellowship, and was a recipient of the 2019 Google Faculty Research Award. At UCLA, he leads the Information Theory and Systems Laboratory, where he conducts research related to information theory and its applications to learning, cyber-physical systems, security & privacy, wireless networks, bioinformatics, and neuroscience.