Federated Learning enables collaborative training of deep neural networks among distributed entities without requiring the exchange of raw training data. While it offers notable advantages, this decentralized approach introduces potential vulnerabilities, particularly poisoning attacks that can manipulate the model during collaborative training.
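To make the setting concrete, here is a minimal sketch of Federated Averaging (FedAvg), the canonical aggregation step in Federated Learning: each client trains locally and only its model parameters, never its raw data, are sent to the server, which combines them weighted by each client's dataset size. The function and variable names below are illustrative, not tied to any particular framework.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg).

    client_weights: one list of layer arrays per client
    client_sizes: number of local training samples per client
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    averaged = []
    for layer in range(num_layers):
        # Each client's contribution is proportional to its data share.
        acc = sum(w[layer] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        averaged.append(acc)
    return averaged

# Two toy clients sharing a single-layer "model"
clients = [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]]
sizes = [1, 3]
global_model = fedavg(clients, sizes)
# 0.25 * [1, 2] + 0.75 * [3, 4] = [2.5, 3.5]
```

Because the server blindly trusts the submitted parameters, this aggregation step is exactly where poisoned updates enter the global model.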
The workshop will give participants an understanding of the security challenges in Federated Learning and equip them with practical skills to address these concerns. During the hands-on session, attendees will gain practical insight into the attack landscape and learn what it takes to build secure and private Federated Learning systems.
Key Workshop Components
Insights: Participants will gain comprehensive insights into the security and privacy challenges inherent in Federated Learning.
Talks: The workshop includes several sessions covering various facets of poisoning attacks and state-of-the-art detection techniques.
Hands-On Sessions: During the practical sessions, participants will implement targeted and untargeted poisoning attacks. Attendees will also explore defense mechanisms and evaluate their effectiveness.
Target Audience: The workshop is aimed at beginners in Deep Learning as well as experienced practitioners; the only prerequisite is familiarity with an imperative programming language.
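As a small preview of the hands-on material, the sketch below shows two of the building blocks mentioned above under illustrative names of our own choosing: a targeted label-flipping poisoning attack, and a coordinate-wise median aggregation, one common robust alternative to plain averaging. It is a simplified sketch, not the exact code used in the session.

```python
import numpy as np

def flip_labels(labels, source, target):
    """Targeted label-flipping attack: a malicious client relabels
    all examples of class `source` as class `target` before training."""
    poisoned = labels.copy()
    poisoned[poisoned == source] = target
    return poisoned

def median_aggregate(client_updates):
    """Coordinate-wise median over client updates: a simple robust
    aggregation that limits the influence of a single outlier client."""
    return np.median(np.stack(client_updates), axis=0)

labels = np.array([0, 1, 1, 2])
poisoned = flip_labels(labels, source=1, target=7)
# -> [0 7 7 2]

# Two honest updates plus one extreme poisoned update
updates = [np.array([1.0, 1.0]),
           np.array([1.2, 0.9]),
           np.array([50.0, -50.0])]
robust = median_aggregate(updates)
# -> [1.2 0.9], largely unaffected by the outlier
```

Comparing the median result with a plain mean of the same updates makes the effect of a single poisoned client, and the value of robust aggregation, immediately visible.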
Date: February 15, 2024
Location: IBM Watson Center Munich
Collaborators: Plattform Lernende Systeme and IBM Innovation Studio Munich