Security and Privacy for Machine Learning
Machine learning is being widely deployed across many aspects of our society. Our vision is that machine learning systems will become a new attack surface: attackers will exploit vulnerabilities in machine learning algorithms and systems to subvert their security and privacy. In this research thrust, we aim to protect the confidentiality and integrity of machine learning algorithms and systems. Both users and model providers desire confidentiality: users want privacy for their sensitive training and testing data, while model providers want to keep their proprietary models, learning algorithms, and training data confidential, as these represent intellectual property. We aim to protect confidentiality for both parties. We also seek to uncover new vulnerabilities that an attacker can exploit to compromise the integrity of machine learning systems, and to design new mechanisms to mitigate them.
Confidentiality of machine learning
Confidentiality/Intellectual Property for model providers
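As a toy illustration of the threat to model providers, the sketch below shows a model-extraction attack under simplified assumptions: the provider's "proprietary model" is a secret linear regressor exposed only through a hypothetical black-box `query_api` (all names and values here are illustrative, not from any deployed system).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical proprietary model: a secret linear regressor that the
# attacker can only query as a black box.
secret_w = np.array([1.5, -2.0, 0.5])

def query_api(X):
    # The attacker observes only input -> output pairs, never secret_w.
    return X @ secret_w

# Model-extraction attack: query the API on random inputs and fit a
# surrogate by least squares; for a noiseless d-dimensional linear
# model, d linearly independent queries suffice to recover it exactly.
X_queries = rng.normal(size=(10, 3))
y_responses = query_api(X_queries)
stolen_w, *_ = np.linalg.lstsq(X_queries, y_responses, rcond=None)

print(np.allclose(stolen_w, secret_w))  # surrogate matches the secret model
```

Real models are nonlinear and APIs are rate-limited, but the same query-and-fit principle underlies practical extraction attacks.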
Confidentiality for users
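To illustrate the privacy risk to users, here is a toy membership-inference sketch. It assumes an overfit model whose confidence is highest on memorized training points; the `model_confidence` function is a hypothetical stand-in for such a model, not an API from any real system.

```python
import numpy as np

# Toy setting: an overfit model memorizes its training data; the attacker
# exploits the confidence gap between training members and non-members.
train_X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])

def model_confidence(x):
    # Stand-in for an overfit model: confidence decays with distance to
    # the nearest memorized training point (exactly 1.0 on members).
    d = np.linalg.norm(train_X - x, axis=1).min()
    return np.exp(-d)

def infer_membership(x, threshold=0.99):
    # Membership-inference attack: flag inputs on which the model is
    # suspiciously confident as likely members of the training set.
    return model_confidence(x) >= threshold

print(infer_membership(np.array([1.0, 1.0])))  # training member -> True
print(infer_membership(np.array([5.0, 3.0])))  # non-member -> False
```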
Integrity of machine learning
Integrity at prediction phase (i.e., adversarial examples): attacks, defenses, and their applications to privacy protection
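As a concrete illustration of a prediction-phase attack, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM) applied to a toy logistic-regression model; the weights, input, and perturbation budget `eps` are made-up values chosen so the example is self-contained.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained logistic-regression model: weights and bias.
w = np.array([2.0, -1.0])
b = 0.1

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    # For cross-entropy loss, the gradient w.r.t. the input x is (p - y) * w.
    p = predict(x)
    grad_x = (p - y) * w
    # FGSM: take one step of size eps in the sign of the input gradient.
    return x + eps * np.sign(grad_x)

x = np.array([0.5, 0.2])   # clean input with true label y = 1
y = 1.0
x_adv = fgsm(x, y, eps=0.6)

print(predict(x) > 0.5)      # clean input classified as 1
print(predict(x_adv) > 0.5)  # adversarial input misclassified as 0
```

The same sign-of-gradient perturbation, computed by backpropagation to the input, flips predictions of deep networks with perturbations small enough to be imperceptible.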
Integrity at training phase: poisoning attacks and their defenses
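A toy sketch of the training-phase threat: below, an attacker poisons the training set of a nearest-centroid classifier by injecting a few far-away mislabeled points, dragging one class centroid to the wrong side of the data. The dataset is entirely synthetic and chosen for clarity.

```python
import numpy as np

# Toy 1-D training set: class 0 clusters near -2, class 1 near +2.
X = np.array([-2.5, -2.0, -1.5, 1.5, 2.0, 2.5])
y = np.array([0, 0, 0, 1, 1, 1])

def train_centroids(X, y):
    # Nearest-centroid classifier: one mean per class.
    return np.array([X[y == c].mean() for c in (0, 1)])

def accuracy(centroids, X, y):
    preds = np.argmin(np.abs(X[:, None] - centroids[None, :]), axis=1)
    return (preds == y).mean()

clean_acc = accuracy(train_centroids(X, y), X, y)  # perfect on this toy set

# Poisoning attack: the attacker injects two far-away points labeled 1,
# pulling the class-1 centroid past the class-0 cluster.
X_poisoned = np.append(X, [-10.0, -10.0])
y_poisoned = np.append(y, [1, 1])
poisoned_acc = accuracy(train_centroids(X_poisoned, y_poisoned), X, y)

print(clean_acc, poisoned_acc)  # accuracy collapses after poisoning
```

Defenses typically try to detect or downweight such outlying or inconsistent training points before (or while) the model is trained.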