Security and Privacy in Dynamical Systems

Speaker: Mehrdad Showkatbakhsh
Affiliation: UCLA Ph.D. Candidate

Abstract:  Dynamical systems appear in many domains; among them, control and optimization are of particular importance. Physical processes are dynamical systems by nature, and control theory studies them in order to design mechanisms that achieve desired behaviors. Optimization algorithms, on the other hand, are inherently recursive and can therefore be modeled as dynamical systems. These systems enable an abundance of applications, and addressing their unreliability is important. In this talk, we focus on challenges arising from the vulnerability of such systems to (active) attacks on physical components and (passive) attacks that infer sensitive information. We take steps toward understanding these challenges and toward building systems that are robust against such vulnerabilities.
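To illustrate the point that optimization algorithms are recursive and hence dynamical systems, here is a minimal sketch (not from the talk; the quadratic cost f(x) = (x - 3)^2 and step size are illustrative assumptions): the gradient-descent iterate x_{k+1} = x_k - eta * f'(x_k) is exactly a discrete-time state-update equation.

```python
def gradient_descent(x0, eta=0.1, steps=100):
    """Iterate the map x -> x - eta * f'(x) for f(x) = (x - 3)**2."""
    x = x0
    for _ in range(steps):
        grad = 2 * (x - 3)   # f'(x)
        x = x - eta * grad   # the state-update equation of the dynamical system
    return x

print(gradient_descent(0.0))  # converges toward the minimizer x* = 3
```

Viewed this way, questions about the optimizer (convergence, robustness to perturbations) become questions about the stability of a dynamical system.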

Many control systems have a cyber-physical nature, meaning there is tight interaction between cyber (computation and communication) and physical (sensing and actuation) components. Cyber-Physical Systems (CPS) have enabled numerous applications in which decisions must be made based on the environment and sensory information. However, addressing the unreliability associated with such systems remains a fundamental challenge. This unreliability may stem from communication, software security, or physical vulnerabilities. In the first part of this talk, we focus on physical vulnerabilities of the sensing and actuation modules, in which an adversary manipulates these components. In particular, the two problems of “state estimation” and “system identification” are analyzed in an adversarial environment.
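As a toy illustration of attack-resilient state estimation (not the speaker's method; the scalar state and sensor values are made up): with redundant sensors measuring the same quantity, a median-based estimate tolerates a minority of maliciously corrupted readings, whereas the mean does not.

```python
import statistics

def resilient_estimate(readings):
    """Median of redundant sensor readings: robust when fewer than half are attacked."""
    return statistics.median(readings)

readings = [5.1, 4.9, 5.0, 1000.0, 5.05]   # true state ~5.0; one sensor hijacked
print(resilient_estimate(readings))         # stays near 5.0 despite the attack
print(sum(readings) / len(readings))        # the mean is dragged far from the truth
```

The general problem is harder: the adversary may corrupt different sensors over time, and the state evolves according to the system dynamics, which is what motivates the formal treatment in the talk.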

In recent years, personal data from health care, finance, and other domains have become available, enabling the learning of high-complexity models for applications ranging from medical diagnosis to financial portfolio strategies. The common paradigm for learning such models is to optimize a cost function involving the model parameters and the data. Acquiring data from individuals and publishing models based on that data can compromise users’ privacy against a passive adversary observing the training procedure. Addressing this vulnerability is crucial in the increasingly common scenario where models are built from sensitive data; indeed, privacy is one of the primary roadblocks to the large-scale use of sensitive personal data, for example in health care. In the second part of this talk, we investigate two problems in this area, “private linear regression” and “private distributed optimization,” developing and analyzing private learning mechanisms that guarantee utility while ensuring a given privacy level.
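A minimal sketch of one standard differential-privacy building block, the Laplace mechanism (illustrative only; the parameters and the mean query are assumptions, not the mechanisms developed in the talk): releasing the mean of n values clipped to [0, 1] has sensitivity 1/n, so adding Laplace noise of scale 1/(n * eps) makes the release eps-differentially private.

```python
import random

def laplace_noise(scale):
    """Sample from Laplace(0, scale) as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_mean(data, eps=1.0, lo=0.0, hi=1.0):
    """eps-differentially private mean of values clipped to [lo, hi]."""
    clipped = [min(max(x, lo), hi) for x in data]  # bound each individual's influence
    sensitivity = (hi - lo) / len(clipped)         # worst-case change from one record
    return sum(clipped) / len(clipped) + laplace_noise(sensitivity / eps)
```

The noise scale shrinks as the dataset grows, which is the utility–privacy trade-off in miniature: for fixed eps, larger n yields a more accurate private estimate.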

Biography:  Mehrdad Showkatbakhsh is a Ph.D. candidate in the Electrical and Computer Engineering Department at UCLA under the supervision of Prof. Suhas Diggavi. He received his B.S. degree in Electrical Engineering from Sharif University, Tehran, Iran, in 2013 and his M.S. degree in Electrical Engineering from the University of California, Los Angeles, in 2015. Mehrdad’s research lies in the areas of cyber-physical security and privacy and their applications to machine learning. He is a recipient of the UCLA Henry Samueli and UCLA Preliminary Exam Fellowships.

For more information, contact Prof. Suhas Diggavi (suhasdiggavi@ucla.edu)

Date/Time:
Date(s) - Mar 12, 2019
11:00 am - 1:00 pm

Location:
E-IV Faraday Room #67-124
420 Westwood Plaza - 6th Flr., Los Angeles CA 90095