Security for Machine Learning
Our research in Machine Learning (ML) and Deep Learning (DL) has a threefold focus. Firstly, we aim to enhance the security and privacy of IT systems by deploying customised machine learning and deep learning techniques. Through applications such as intrusion detection and runtime attestation, we strengthen the overall security of IT infrastructures, protecting against emerging threats.
Secondly, we are dedicated to providing security for machine learning algorithms themselves. As the popularity of machine learning grows, so do the risks targeting the integrity and privacy of these algorithms. Our research centres on developing robust defences against attacks such as data poisoning, model poisoning, and inference attacks, which exploit vulnerabilities in machine learning models.
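To illustrate why data poisoning matters, the sketch below shows a label-flipping attack against a deliberately simple toy model. The nearest-centroid classifier and the data are hypothetical assumptions for illustration only, not a description of our actual systems: flipping the label of a single training point near the class boundary is enough to shift the learned decision boundary and change predictions.

```python
# Toy label-flipping data-poisoning sketch (illustrative assumption:
# a 1-D nearest-centroid classifier, not an actual research artefact).

def centroid(points):
    """Mean of a list of 1-D points."""
    return sum(points) / len(points)

def train(data):
    """data: list of (x, label) pairs with labels 0 or 1."""
    c0 = centroid([x for x, y in data if y == 0])
    c1 = centroid([x for x, y in data if y == 1])
    return c0, c1

def predict(model, x):
    """Assign x to the class whose centroid is nearer."""
    c0, c1 = model
    return 0 if abs(x - c0) <= abs(x - c1) else 1

clean = [(0.0, 0), (1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1), (10.0, 1)]
model_clean = train(clean)

# Adversary flips the label of the class-0 point closest to the boundary,
# dragging both centroids and shifting the decision boundary leftwards.
poisoned = [(x, 1 if x == 2.0 else y) for x, y in clean]
model_poisoned = train(poisoned)

print(predict(model_clean, 4.5))     # clean model classifies 4.5 as class 0
print(predict(model_poisoned, 4.5))  # poisoned model now yields class 1
```

Even this minimal example shows the core concern: an attacker who controls a small fraction of the training labels can move the model's decision boundary without touching the model itself, which is why robust training and data-sanitisation defences are a focus of this line of research.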
Lastly, we recognise the significance of explainability in ensuring the trustworthiness and accountability of AI systems. Our focus lies not only in enhancing the security and privacy of machine learning; we also strive to address the challenge of AI explainability.