Privacy in NLP
Motivation
The primary purpose of natural language text is to effectively convey explicit information. However, textual content also carries implicit cues and details. For instance, an author's word choices and stylometric attributes can subtly indicate their age, cultural or religious background, and other potentially sensitive characteristics.
Beyond publicly accessible textual resources such as web pages, which are commonly used in mainstream Natural Language Processing (NLP) research, there exist large collections of textual data whose access is restricted due to their sensitive nature. These include confidential corporate documents, peer reviews from scientific conferences, medical records from hospitals, and many other confidential sources. Gaining access to such datasets can be difficult, burdensome, or outright impossible, which impedes research progress in these areas.
We contend that sharing sensitive data can benefit the public if privacy is assured. Open access to specialized text domains enables many forms of learning, such as manual analysis to study domain-specific discourse, the creation of annotated datasets in Bioinformatics-NLP, a better understanding of review dynamics to make conference reviewing more objective, or the training of machine learning models, such as spam detectors, on up-to-date datasets.
However, current text anonymization methods focus only on replacing explicit entities in texts. While this approach offers high precision, it suffers from the ambiguity of defining what counts as private information and provides no formal guarantees against adversaries with access to background knowledge. While it is possible to train privacy-preserving models on private datasets and release these models with formal assurances, this limits the dataset to a single analysis, namely training that model. Other tasks, such as manual analysis, error analysis, explaining model predictions, or training additional models, become impossible after this initial analysis.
Goals
The SenPAI project aims to privatize texts with formal guarantees, using differential privacy (DP), while simultaneously preserving their utility for research. This would allow the scientific community to analyze texts in any form without breaching the privacy of their creators. In our research, we specifically focus on the following topics:
- Obscuring author-related information in texts
As previously mentioned, texts can carry explicit and implicit information about their creators. We design methods that conceal this information while preserving the semantic signals needed for research.
- Synthetic data generation with formal privacy guarantees
In the domain of tabular data, DP methods have already been successfully deployed to create artificial data that represents the original data distribution without undermining the privacy of its subjects. We expand this approach to the area of texts, particularly using non-autoregressive language models.
- Global vs. local DP for texts
We explore and compare privacy-preserving methods under the assumption that a data collector can or cannot be trusted (global vs. local DP). Local DP methods in particular need further investigation to determine whether they can achieve meaningful privacy guarantees when handling texts.
- Practical measurements of formal privacy guarantees
DP provides strong but largely theoretical protection of one's privacy. To better communicate what this means when handling written text, we create evaluation methods that translate formal guarantees into practical implications.
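To illustrate the local-DP setting mentioned above, the classic randomized-response mechanism over a finite domain can be sketched as follows. This is a minimal illustration with our own function and variable names, not code from the project:

```python
import math
import random

def randomized_response(true_value, domain, epsilon, rng=random):
    """Report a value under epsilon-local differential privacy.

    With probability e^eps / (e^eps + k - 1), where k is the domain
    size, the true value is reported; otherwise a uniformly random
    other value is reported. Each user applies this locally before
    sharing data, so no trusted data collector is required.
    """
    k = len(domain)
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p_true:
        return true_value
    # Pick uniformly among the remaining k - 1 values.
    return rng.choice([v for v in domain if v != true_value])
```

Note that for text, applying such a mechanism per token over a realistic vocabulary (k in the tens of thousands) forces the probability of keeping the true token toward uniform noise unless epsilon is very large, which is one reason why the feasibility of meaningful local-DP guarantees for text requires further investigation.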
Team
- Prof. Dr. Iryna Gurevych, Principal Investigator
- Prof. Dr. Ivan Habernal, Principal Investigator
- Sebastian Ochs, MSc, Doctoral Researcher
- Tianyu Yang, MSc, Doctoral Researcher
Funding
This research work is funded from 2023 to 2026 by the German Federal Ministry of Education and Research and the Hessen State Ministry for Higher Education, Research and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE.