Research Topics

Security for Machine Learning

Our research on Machine Learning (ML) and Deep Learning (DL) focuses on two directions: (i) improving the security and privacy of IT systems by deploying customized ML/DL, for instance for intrusion detection and run-time attestation, and (ii) providing security for ML/DL algorithms against various attacks, for instance data and model poisoning, and inference attacks that attempt to leak information about the training data from the models.

AI for security

Applying ML to security tasks opens up various new opportunities. It allows handling and monitoring huge amounts of data that would overwhelm human analysts. Two examples of projects we are working on are detecting compromised IoT devices and evaluating risks on mobile devices:
IoT Anomaly Detection. Due to the ever-increasing number of different types of IoT devices and the continuously evolving spectrum of novel attacks, it is in practice very difficult for signature-based intrusion detection systems to detect previously unknown attacks. We developed an approach that detects compromised IoT devices based on their typical network communication. To be able to detect novel attacks as well, the normal behavior of a device is learned by a Deep Neural Network and attacks are detected as anomalies with respect to the known normal behavior (see the sketch after these examples).
Cyber-Risk Intelligence for Mobile Services. Mobile service providers face various challenges in protecting their services and associated mobile apps. They often lack effective risk management to limit the exposure of their services to threats and the potential damage caused by attacks. One solution is ML-based risk management that evaluates security risks on mobile devices and shares information about risks only in the form of machine learning models.
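
The exact detector used in the IoT project is not spelled out above; as a minimal sketch of the anomaly-detection idea, assuming an autoencoder over per-device traffic feature vectors (the feature dimension, layer sizes, and threshold below are illustrative assumptions, not the deployed system), one could train on benign traffic only and flag samples with a high reconstruction error:

    import torch
    import torch.nn as nn

    class TrafficAutoencoder(nn.Module):
        # Autoencoder over per-device traffic feature vectors (hypothetical 32-dim features).
        def __init__(self, n_features: int = 32):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(), nn.Linear(16, 8))
            self.decoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, n_features))

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def fit_normal_behavior(model, benign_traffic, epochs=20, lr=1e-3):
        # Learn the device's normal communication profile from benign samples only.
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(benign_traffic), benign_traffic)
            loss.backward()
            opt.step()
        return model

    def is_anomalous(model, samples, threshold):
        # Traffic that the model reconstructs poorly deviates from the learned normal behavior.
        with torch.no_grad():
            err = torch.mean((model(samples) - samples) ** 2, dim=-1)
        return err > threshold

The threshold would typically be calibrated on held-out benign traffic, e.g., as a high percentile of its reconstruction errors.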

Security for AI

The widespread use of ML creates new attack vectors targeting privacy, e.g., by inferring information about the training data of deployed models, or integrity, e.g., by manipulating the ML model to cause misbehavior in specific situations. Our research covers attacks on conventional, centralized learning as well as on collaborative learning, which allows multiple clients to jointly train an ML model without revealing their data. One example of such a scheme is Federated Learning (FL).
However, with the increasing adoption of Federated Learning systems, a number of security, privacy, and functional challenges arise in the design and implementation of the underlying algorithms and systems. Attacks on FL stem either from the privacy perspective, when a malicious user or the central server attempts to infer the private data of a victim user, or from the security perspective, when a malicious user aims to compromise the resulting model. As part of our research, we develop new algorithms that make FL robust against such manipulations and also protect the confidentiality of the training data.
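
As a minimal illustration of why even a single manipulated client update is a problem for FL and how robust aggregation can limit its effect (a generic textbook-style example, not one of our mitigation schemes), compare plain federated averaging with a coordinate-wise median over client updates:

    import numpy as np

    def fedavg(client_updates):
        # Plain federated averaging: one extreme update can shift the mean arbitrarily.
        return np.mean(np.stack(client_updates), axis=0)

    def coordinate_wise_median(client_updates):
        # Simple robust aggregation: the per-parameter median bounds the
        # influence of a minority of manipulated updates.
        return np.median(np.stack(client_updates), axis=0)

    # Toy round: nine honest clients plus one attacker-crafted update.
    honest = [np.random.normal(0.0, 0.1, size=4) for _ in range(9)]
    poisoned = [np.full(4, 100.0)]
    updates = honest + poisoned

    print(fedavg(updates))                  # dominated by the poisoned update
    print(coordinate_wise_median(updates))  # stays close to the honest updates

Robust aggregation alone does not address the privacy side (inference by a curious server), which typically requires additional mechanisms such as secure aggregation.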

Current Projects

Examples of our current research on security aspects of AI include:

  • We investigate the deployment of DL for further applications such as context-based intrusion detection for IoT devices or run-time attestation to detect attacks exploiting memory vulnerabilities such as buffer overflows.
  • We design new security attacks on FL that circumvent state-of-the-art attack mitigation schemes.
  • We investigate the deployment of hardware-based enclaves to strengthen the robustness, backdoor resilience, and privacy of FL.
  • We extend our research on security attacks against FL to other collaborative learning schemes.
  • We investigate security threats to Semi-Supervised Learning and Few-Shot Learning paradigms.
  • We explore new security and privacy attacks and design efficient countermeasures against them for different Deep Neural Network (DNN) architectures.