Deep Fake
The rise of deep learning and its capacity to handle increasingly complex tasks have enabled the creation of highly realistic synthetic images and videos, known as Deep Fakes. While these technologies showcase the impressive progress of artificial intelligence, they also carry a risk of abuse, including the spread of disinformation, spear phishing, and cyberbullying. Our research group develops new algorithms for the reliable detection and identification of Deep Fakes, employing methods such as physics-augmented intelligence.
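To make the detection idea concrete, here is a minimal sketch of one classical signal-level cue: some generated images exhibit atypical high-frequency spectra compared with camera photos. The function below is a hypothetical, illustrative statistic only (it is not the group's physics-augmented method), and the name `high_freq_energy_ratio` is our own.

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy above a radial frequency cutoff.

    A naive, illustrative statistic: unusually high (or low) values can
    hint at synthesis artifacts, but real detectors use learned features.
    """
    # Power spectrum with the zero frequency shifted to the centre
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial distance from the spectrum centre, normalised to [0, 1]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    high = spectrum[r > cutoff].sum()
    return float(high / spectrum.sum())

# Demo on synthetic data: a smooth image concentrates energy at low
# frequencies, while broadband noise spreads it across the spectrum.
rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))
smooth = np.outer(np.hanning(64), np.hanning(64))
print(high_freq_energy_ratio(noise) > high_freq_energy_ratio(smooth))
```

In practice such hand-crafted statistics serve only as features; modern detectors feed images (or their spectra) into trained neural classifiers.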
Despite the risks, Deep Fakes also offer opportunities for positive applications, such as creating realistic avatars and entertainment. In addition to detection, our team researches responsible Deep Fake generation, focusing on improving real-time avatar quality and embedding robust digital watermarks, ensuring AI-generated content can be clearly identified to prevent abuse.
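As a toy illustration of the watermarking idea, the sketch below hides identification bits in the least significant bit of each pixel. This LSB scheme is deliberately simple and fragile (it does not survive compression), so it stands in for, rather than represents, the robust watermarks mentioned above; all function names here are hypothetical.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write watermark bits into the least significant bit of each pixel."""
    flat = pixels.flatten().copy()
    # Clear the LSB of the first len(bits) pixels, then set it to the bit
    flat[: bits.size] = (flat[: bits.size] & ~np.uint8(1)) | bits
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the watermark bits back out of the pixel LSBs."""
    return pixels.flatten()[:n_bits] & np.uint8(1)

# Demo: stamp a random 16-bit mark into an 8x8 image and recover it
rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
mark = rng.integers(0, 2, size=16, dtype=np.uint8)
stamped = embed_watermark(image, mark)
print(np.array_equal(extract_watermark(stamped, 16), mark))
```

Robust watermarks instead embed the signal in transform domains or directly inside the generator, so the mark survives cropping, re-encoding, and mild editing.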