Differentially Private Federated Learning in Online Social Networks

Master's Thesis

Federated learning, deployed at scale by Google, builds machine learning models from data distributed across many devices. Each user trains the model locally on their own data and sends only the resulting model update to a central server, which aggregates all updates to optimize the global model. This technique is, however, prone to several attacks aimed at revealing information about individual users.
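The protocol above can be sketched in a few lines. This is a minimal simulation, not Google's implementation: the linear model, learning rate, and client data are illustrative assumptions, and the key point is that only updates, never raw data, reach the server.

```python
import numpy as np

def local_update(global_w, X, y, lr=0.1):
    """One client's round: a single gradient-descent step of linear
    regression on local data (a stand-in for a real local model)."""
    grad = X.T @ (X @ global_w - y) / len(y)
    return -lr * grad  # only the update leaves the device, not the data

def server_round(global_w, clients):
    """Server-side aggregation: average the clients' updates."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    return global_w + np.mean(updates, axis=0)

# Simulated population: five clients, each holding 20 private samples
# generated from the same underlying linear model (demo assumption).
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(200):
    w = server_round(w, clients)
# After enough rounds, the global model recovers the shared weights.
```

Averaging the updates rather than the data is what makes attacks on individual updates, rather than on a central dataset, the relevant threat model.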

In this thesis, we work on protecting user data using Differential Privacy (DP). DP distorts the data by adding calibrated noise; this noise, however, reduces the data's utility. By exploiting the trust relationships among users in online social networks, we balance the trade-off between privacy and data utility.
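To make the trade-off concrete, the following is a minimal sketch of the Gaussian mechanism commonly used to privatize model updates in DP federated learning. The function name and all parameter values (`clip_norm`, `epsilon`, `delta`) are illustrative assumptions, not the thesis's method; the sketch only shows how shrinking the privacy budget epsilon inflates the noise and thus degrades utility.

```python
import numpy as np

def sanitize_update(update, clip_norm=1.0, epsilon=1.0, delta=1e-5, rng=None):
    """Gaussian-mechanism sketch (illustrative parameters): bound each
    user's influence by clipping the update's L2 norm, so the query's
    sensitivity is at most clip_norm, then add Gaussian noise whose
    scale grows as epsilon shrinks (stronger privacy, lower utility)."""
    rng = rng or np.random.default_rng()
    clipped = update * min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    # Standard (epsilon, delta)-DP calibration, valid for epsilon <= 1
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=update.shape)

# A raw update of norm 5 is first clipped to norm 1, then noised.
u = np.array([3.0, 4.0])
noisy = sanitize_update(u, rng=np.random.default_rng(1))
```

Since the noise scale is inversely proportional to epsilon, any mechanism that justifies a larger effective epsilon per user, for instance by leveraging trust between users, directly improves the utility of the aggregated model.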