On Adversarial Examples for Text Classification by Perturbing Latent Representations

Bibliographic Details
Title: On Adversarial Examples for Text Classification by Perturbing Latent Representations
Authors: Sooksatra, Korn; Khanal, Bikram; Rivas, Pablo
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Cryptography and Security, 68T01, 68T50, I.2.7
Description: Recent advances in deep learning have significantly improved text classification. This improvement comes at a cost, however, because deep learning models are vulnerable to adversarial examples, which indicates that they are not robust. Fortunately, the input to a text classifier is discrete, which shields the classifier from state-of-the-art white-box attacks. Nonetheless, previous works have devised black-box attacks that manipulate the discrete input values to find adversarial examples. Therefore, instead of changing the discrete values, we map the input to its real-valued embedding vectors, perform state-of-the-art white-box attacks on those embeddings, and then convert the perturbed embedding vectors back into text, which we call an adversarial example. In summary, we create a framework that measures the robustness of a text classifier using the classifier's gradients (see the illustrative sketch following this record).
Comment: 7 pages
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2405.03789
Accession Number: edsarx.2405.03789
Database: arXiv
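
Illustrative sketch. The description above outlines a pipeline: embed the input text as real-valued vectors, perturb those vectors with a gradient-based white-box attack, and map the perturbed vectors back to text. The code below is a minimal sketch of that idea under stated assumptions, not the authors' framework: the toy vocabulary, the tiny classifier, the single FGSM step, the epsilon value, and the nearest-neighbor projection back to tokens are all illustrative choices introduced here for clarity.

```python
# Minimal sketch (assumptions, not the paper's implementation): embed a token
# sequence, apply one FGSM-style gradient step to the continuous embeddings,
# then map each perturbed vector to its nearest vocabulary embedding to obtain
# a candidate adversarial text.

import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = ["the", "movie", "was", "great", "terrible", "boring", "fun"]  # toy vocabulary (assumed)
EMB_DIM, NUM_CLASSES, SEQ_LEN, EPS = 16, 2, 4, 0.5                     # illustrative hyperparameters

torch.manual_seed(0)
embedding = nn.Embedding(len(VOCAB), EMB_DIM)                           # token -> real-valued vector
classifier = nn.Sequential(nn.Flatten(), nn.Linear(SEQ_LEN * EMB_DIM, NUM_CLASSES))  # toy text classifier


def fgsm_on_embeddings(token_ids: torch.Tensor, label: torch.Tensor) -> torch.Tensor:
    """White-box step: perturb the embeddings in the direction that increases the loss."""
    emb = embedding(token_ids).detach().requires_grad_(True)            # (1, SEQ_LEN, EMB_DIM)
    loss = F.cross_entropy(classifier(emb), label)
    loss.backward()
    return emb + EPS * emb.grad.sign()                                   # FGSM: eps * sign of gradient


def project_to_tokens(perturbed: torch.Tensor) -> list:
    """Map each perturbed vector back to the nearest vocabulary token (embedding-to-text step)."""
    dists = torch.cdist(perturbed.squeeze(0), embedding.weight)          # (SEQ_LEN, |VOCAB|)
    return [VOCAB[i] for i in dists.argmin(dim=1).tolist()]


if __name__ == "__main__":
    tokens = torch.tensor([[0, 1, 2, 3]])                                # "the movie was great"
    true_label = torch.tensor([1])                                       # e.g., "positive"
    adv_emb = fgsm_on_embeddings(tokens, true_label)
    print("candidate adversarial text:", " ".join(project_to_tokens(adv_emb)))
```

In practice, the paper's framework would operate on a trained classifier with a full tokenizer and vocabulary; this sketch only shows the shape of the gradient-based perturbation in embedding space and one simple way to return from the perturbed embeddings to discrete text.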