DP-MLM: Differentially Private Text Rewriting Using Masked Language Models

Bibliographic Details
Title: DP-MLM: Differentially Private Text Rewriting Using Masked Language Models
Authors: Meisenbacher, Stephen; Chevli, Maulik; Vladika, Juraj; Matthes, Florian
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language
Description: The task of text privatization using Differential Privacy has recently taken the form of $\textit{text rewriting}$, in which an input text is obfuscated via the use of generative (large) language models. While these methods have shown promising results in preserving privacy, they rely on autoregressive models, which lack a mechanism to contextualize the private rewriting process. In response to this, we propose $\textbf{DP-MLM}$, a new method for differentially private text rewriting that leverages masked language models (MLMs) to rewrite text in a semantically similar $\textit{and}$ obfuscated manner. We accomplish this with a simple contextualization technique, whereby we rewrite a text one token at a time. We find that utilizing encoder-only MLMs provides better utility preservation at lower $\varepsilon$ levels, as compared to previous methods relying on larger models with a decoder. In addition, MLMs allow for greater customization of the rewriting mechanism, as opposed to generative approaches. We make the code for $\textbf{DP-MLM}$ public and reusable, available at https://github.com/sjmeis/DPMLM. (A minimal illustrative sketch of the token-by-token rewriting idea follows this record.)
Comment: 15 pages, 2 figures, 8 tables. Accepted to ACL 2024 (Findings)
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2407.00637
Accession Number: edsarx.2407.00637
Database: arXiv
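
The sketch below illustrates the general idea described in the abstract: mask each token in turn, score candidate replacements with an encoder-only MLM, and sample a replacement under a per-token privacy budget. It is not the authors' implementation (that is at https://github.com/sjmeis/DPMLM); the model name "roberta-base", the clipping bound, the epsilon value, and the exponential-mechanism-style softmax sampling are assumptions made for illustration only, not the paper's exact mechanism or sensitivity analysis.

```python
# Hedged sketch of token-by-token differentially private rewriting with a
# masked language model. NOT the authors' implementation; for the actual
# DP-MLM code see https://github.com/sjmeis/DPMLM.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "roberta-base"  # assumption: any encoder-only MLM could be used
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()


def dp_rewrite(text: str, epsilon: float = 10.0, clip: float = 5.0) -> str:
    """Rewrite `text` one token at a time under a per-token privacy budget.

    `clip` bounds each logit, so the utility scores have sensitivity 2 * clip
    (an assumption for this sketch, not the paper's exact analysis).
    """
    token_ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    rewritten = []
    for i in range(len(token_ids)):
        # Mask the i-th token; the remaining tokens provide the context.
        masked = list(token_ids)
        masked[i] = tokenizer.mask_token_id
        input_ids = torch.tensor(
            [[tokenizer.cls_token_id] + masked + [tokenizer.sep_token_id]]
        )
        with torch.no_grad():
            # Position i + 1 because of the leading [CLS]/<s> token.
            logits = model(input_ids).logits[0, i + 1]
        logits = logits.clamp(-clip, clip)  # bound the score range
        # Exponential-mechanism-style sampling:
        # P(token) proportional to exp(epsilon * score / (2 * sensitivity)).
        probs = torch.softmax(epsilon * logits / (2 * 2 * clip), dim=-1)
        new_id = torch.multinomial(probs, 1).item()
        rewritten.append(new_id)
    return tokenizer.decode(rewritten)


if __name__ == "__main__":
    print(dp_rewrite("The patient was admitted to the clinic yesterday."))
```

Lower epsilon values spread the sampling distribution over more of the vocabulary, trading utility for stronger obfuscation; higher values concentrate it on the MLM's top predictions.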