Successful PhD Defense: Phillip Rieger Completes PhD on Adversarially Robust Machine Learning

2025/05/27

Phillip has been an integral part of the System Security Lab since 2019,

initially joining as a student assistant. In March 2020, just as the

COVID-19 pandemic reshaped academic life, he began his PhD journey.

Notably, he was the lab’s first PhD student focused purely on AI and helped

establish the group’s AI research direction. Over the past five years,

Phillip has significantly contributed to the lab’s research on Federated

Learning (FL), developing robust defense mechanisms against adversarial

attacks in distributed machine learning.

His dissertation addresses key challenges in securing machine learning

under adversarial conditions, introducing novel techniques to make FL

systems resilient to poisoning and backdoor attacks, adaptive anomaly

detection systems for IoT environments, and new defense strategies for

Split Learning setups. Over the course of his PhD, Phillip published an

impressive 19 papers, including 12 at Core A* conferences, and received

two distinguished paper awards for his work on DeepFake detection and

mitigating backdoor attacks in Split Learning. Beyond publications, he

has led several FL workshops and shared his expertise through a

dedicated lecture series on the topic at TU Darmstadt.

Congratulations, Phillip, on this outstanding achievement, and all the

best for your future!