Security for Machine Learning
Federated machine learning (also known as collaborative learning) is an emerging paradigm for distributed machine learning applications that provides significant benefits in terms of privacy and efficiency. Federated learning allows collaborating participants to train a joint machine learning model by aggregating their locally trained models, without sharing the local, potentially sensitive data used in training. However, a straightforward application of federated learning is susceptible to so-called backdoor attacks (targeted attacks), in which an adversary seeks to manipulate the aggregated machine learning model so that it outputs adversary-chosen false predictions on specific inputs. Existing countermeasures, however, are insufficient.
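The aggregation step described above is commonly realized with federated averaging. The following is a minimal sketch, not our actual system: models are represented as flat NumPy vectors, and the function name and parameters are illustrative. It also shows the backdoor intuition, where a single malicious client can skew the average by scaling its update.

```python
import numpy as np

def fed_avg(client_models, client_sizes):
    """Weighted federated averaging of locally trained models.

    client_models: list of flat parameter vectors (NumPy arrays)
    client_sizes:  number of local training samples per client,
                   used as aggregation weights (illustrative names)
    """
    total = float(sum(client_sizes))
    return sum((n / total) * m for m, n in zip(client_models, client_sizes))

# Two benign clients agree on similar parameters.
benign_a = np.array([1.0, 1.0])
benign_b = np.array([3.0, 3.0])
print(fed_avg([benign_a, benign_b], [10, 10]))  # close to [2.0, 2.0]

# A backdoor adversary can scale its malicious update so that it
# dominates the unweighted average (model-replacement intuition).
malicious = 10.0 * np.array([5.0, -5.0])
print(fed_avg([benign_a, benign_b, malicious], [10, 10, 10]))
```

The second call illustrates why naive aggregation is fragile: the scaled malicious contribution pulls the joint model far away from the benign consensus.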
Our research focuses on systematically analyzing backdoor attacks on federated machine learning and designing a secure federated learning-based system. We investigate new techniques and design solutions that make federated learning resilient and robust against backdoor attacks by detecting and removing, or completely mitigating, the backdoor without sacrificing the accuracy of the aggregated machine learning model on its benign main task. We are also interested in a secure federated learning system that provides sufficient user privacy guarantees. To this end, we are conducting extensive research on implementing Secure Multi-Party Computation (SMPC) on top of our secure federated learning system to maintain a pre-defined level of trust.
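One widely studied building block for mitigating backdoors without harming the benign task is norm-bounding of client updates before aggregation. The sketch below is a simplified illustration under the assumption that models are flat NumPy vectors; the function name, the clipping threshold, and its value are hypothetical choices, not a description of our deployed defense.

```python
import numpy as np

def clip_updates(client_models, global_model, threshold):
    """Bound the L2 norm of each client's update before aggregation.

    A backdoored update typically needs a large deviation from the
    global model to implant its behavior; clipping limits how far any
    single client can push the aggregate. (Illustrative sketch.)
    """
    clipped = []
    for model in client_models:
        delta = model - global_model
        norm = np.linalg.norm(delta)
        scale = min(1.0, threshold / norm) if norm > 0 else 1.0
        clipped.append(global_model + scale * delta)
    return clipped

# A malicious client submits a heavily scaled update.
global_model = np.zeros(2)
malicious = np.array([10.0, 0.0])
safe = clip_updates([malicious], global_model, threshold=1.0)
print(safe[0])  # update norm is now bounded by the threshold
```

Clipping alone does not remove a backdoor, which is why it is typically combined with detection of anomalous updates; it does, however, cap the influence any single participant can exert on the aggregated model.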