“Towards privacy-aware mental health AI models” published in Nature Computational Science

AI with Confidentiality

2025/10/13

Use of Artificial Intelligence in the Diagnosis and Treatment of Mental Disorders

Researchers led by Computer Science Professor Iryna Gurevych at TU Darmstadt and the Indian Institute of Technology (IIT) Delhi are addressing a key question: How can AI-based tools in the mental health domain be designed to reliably protect patient privacy? In a new study published in Nature Computational Science, they present a roadmap for developing support systems that improve diagnosis and therapy while safeguarding sensitive information. Ensuring privacy is an essential prerequisite for using AI in mental health care.

Mental disorders are among the leading causes of disability worldwide, with severe consequences for individuals, their families, and society at large. Detecting mental disorders typically requires resource-intensive clinical interviews conducted by specialists. In addition, there is a global shortage of trained therapists. In the early stages of a mental disorder, when interventions are most effective, artificial intelligence could significantly improve diagnosis and treatment.

Privacy as a critical challenge

AI systems could support therapists by analyzing subtle signals in patients’ language, facial expressions, and choice of words. Training such systems, however, requires highly sensitive data from real therapy sessions. Speech and video data can reveal patient identities, and models trained on such data risk memorizing and unintentionally exposing personal information.

Researchers at the Ubiquitous Knowledge Processing (UKP) Lab at the Department of Computer Science at TU Darmstadt and at IIT Delhi have now published research in Nature Computational Science that outlines a new path forward. They describe how AI systems for mental health can be designed in a way that preserves the confidentiality of patient information.

To achieve this, the authors propose a development pipeline for privacy-aware AI systems that combines several approaches: removing personally identifiable information, anonymizing voice and facial data, generating synthetic data, and applying privacy-preserving training methods.

International collaboration

The first author of the study, Aishik Mandal, is part of the NLPsych group at the UKP Lab, whose researchers work at the intersection of natural language processing (NLP) and mental health to develop data-driven solutions that support both those seeking and those providing help. Co-authors are Professor Tanmoy Chakraborty (IIT Delhi), who was a visiting researcher at the UKP Lab supported by a Humboldt Research Fellowship for Experienced Researchers from the Alexander von Humboldt Foundation, and Professor Iryna Gurevych, head of the UKP Lab at TU Darmstadt.

This research was supported by the German Federal Ministry of Education and Research (BMBF) and the Hessian Ministry of Higher Education, Research, Science and the Arts (HMWK) within the framework of the National Research Center for Applied Cybersecurity ATHENE, by the LOEWE Research Excellence Chair Ubiquitous Knowledge Processing and by the LOEWE Center DYNAMIC.

Publication

Aishik Mandal, Tanmoy Chakraborty, Iryna Gurevych: Towards privacy-aware mental health AI models. Nature Computational Science.

Scientific contact

Prof. Dr. Iryna Gurevych

Hochschulstraße 10, S2|02 B110

64289 Darmstadt, Germany

Phone: +49 6151 16-25290

Email: iryna.gurevych@tu-darmstadt.de

Further information

UKP Lab

NLPsych Group

National Research Center for Applied Cybersecurity ATHENE

LOEWE Center DYNAMIC