Algebraic Adversarial Attacks on Integrated Gradients

Bibliographic Details
Title: Algebraic Adversarial Attacks on Integrated Gradients
Authors: Simpson, Lachlan; Costanza, Federico; Millar, Kyle; Cheng, Adriel; Lim, Cheng-Chew; Chew, Hong Gunn
Publication Year: 2024
Collection: Computer Science; Mathematics
Subject Terms: Computer Science - Machine Learning, Mathematics - Group Theory
Description: Adversarial attacks on explainability models have drastic consequences when explanations are used to understand the reasoning of neural networks in safety-critical systems. Path methods are one class of attribution methods susceptible to such attacks. Adversarial learning is typically phrased as a constrained optimisation problem. In this work, we propose algebraic adversarial examples and study the conditions under which one can generate adversarial examples for integrated gradients. Algebraic adversarial examples provide a mathematically tractable approach to adversarial examples.
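
For reference, the description centres on integrated gradients, the path-based attribution method that the proposed attack targets. The following is a minimal sketch of the standard integrated-gradients computation only (not the paper's algebraic construction), assuming a gradient oracle f_grad and a midpoint Riemann-sum approximation of the path integral; all names are illustrative.

import numpy as np

def integrated_gradients(f_grad, x, baseline, steps=50):
    # Approximate IG_i(x) = (x_i - baseline_i) * integral_0^1 dF/dx_i(baseline + a*(x - baseline)) da
    # with a midpoint Riemann sum over `steps` points on the straight-line path.
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = sum(f_grad(baseline + a * (x - baseline)) for a in alphas) / steps
    return (x - baseline) * avg_grad

# Toy check: F(x) = sum(x**2) has gradient 2x, so attributions sum to F(x) - F(baseline).
f_grad = lambda z: 2.0 * z
x, baseline = np.array([1.0, 2.0]), np.zeros(2)
print(integrated_gradients(f_grad, x, baseline))  # approx. [1.0, 4.0]
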
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2407.16233
Accession Number: edsarx.2407.16233
Database: arXiv