Stealth edits for provably fixing or attacking large language models

Bibliographic Details
Title: Stealth edits for provably fixing or attacking large language models
Authors: Sutton, Oliver J., Zhou, Qinghua, Wang, Wei, Higham, Desmond J., Gorban, Alexander N., Bastounis, Alexander, Tyukin, Ivan Y.
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Artificial Intelligence, Computer Science - Machine Learning, 68T07, 68T50, 68W40, I.2.7, F.2.0
Description: We reveal new methods and the theoretical foundations of techniques for editing large language models. We also show how the new theory can be used to assess the editability of models and to expose their susceptibility to previously unknown malicious attacks. Our theoretical approach shows that a single metric (a specific measure of the intrinsic dimensionality of the model's features) is fundamental to predicting the success of popular editing approaches, and reveals new bridges between disparate families of editing methods. We collectively refer to these approaches as stealth editing methods, because they aim to directly and inexpensively update a model's weights to correct the model's responses to known hallucinating prompts without otherwise affecting the model's behaviour and without requiring retraining. By carefully applying the insight gleaned from our theoretical investigation, we are able to introduce a new network block -- named a jet-pack block -- which is optimised for highly selective model editing, uses only standard network operations, and can be inserted into existing networks. The intrinsic dimensionality metric also determines the vulnerability of a language model to a stealth attack: a small change to a model's weights which changes its response to a single attacker-chosen prompt. Stealth attacks do not require access to or knowledge of the model's training data, therefore representing a potent yet previously unrecognised threat to redistributed foundation models. They are computationally simple enough to be implemented in malware in many cases. Extensive experimental results illustrate and support the method and its theoretical underpinnings. Demos and source code for editing language models are available at https://github.com/qinghua-zhou/stealth-edits.
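To make the idea of a "small change to a model's weights which changes its response to a single attacker-chosen prompt" concrete, here is a minimal sketch of a rank-one weight edit to a single linear layer. All names, shapes, and the update rule are illustrative assumptions, not the paper's actual method or the API of the linked repository: the edit forces one chosen feature vector `k` to map to a corrected output `v` while leaving inputs orthogonal to `k` untouched.

```python
# Hypothetical sketch of a rank-one "stealth edit" to a linear layer.
# All names and shapes are illustrative assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 64, 32
W = rng.normal(size=(d_out, d_in))   # original layer weights

k = rng.normal(size=d_in)            # feature key of the chosen trigger prompt
v = rng.normal(size=d_out)           # desired corrected output for that prompt

# Rank-one update: W_edited @ k == v exactly, while any input orthogonal
# to k is mapped the same way as by the original W.
W_edited = W + np.outer(v - W @ k, k) / (k @ k)

assert np.allclose(W_edited @ k, v)

# Inputs orthogonal to k are unaffected by the edit.
x = rng.normal(size=d_in)
x_perp = x - (x @ k) / (k @ k) * k
assert np.allclose(W_edited @ x_perp, W @ x_perp)
```

The selectivity of such an edit on typical inputs depends on how concentrated their feature vectors are around `k`, which is where an intrinsic-dimensionality measure of the feature space plausibly enters; see the paper and repository for the actual construction.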
Comment: 24 pages, 9 figures. Open source implementation: https://github.com/qinghua-zhou/stealth-edits
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.12670
Accession Number: edsarx.2406.12670
Database: arXiv