Time Sensitive Knowledge Editing through Efficient Finetuning

Bibliographic Details
Title: Time Sensitive Knowledge Editing through Efficient Finetuning
Authors: Ge, Xiou, Mousavi, Ali, Grave, Edouard, Joulin, Armand, Qian, Kun, Han, Benjamin, Arefiyan, Mostafa, Li, Yunyao
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Machine Learning
Description: Large Language Models (LLMs) have demonstrated impressive capability in different tasks and are bringing transformative changes to many domains. However, keeping the knowledge in LLMs up-to-date remains a challenge once pretraining is complete. It is thus essential to design effective methods to both update obsolete knowledge and induce new knowledge into LLMs. Existing locate-and-edit knowledge editing (KE) methods suffer from two limitations. First, LLMs post-edited by such methods generally perform poorly on complex queries that require multi-hop reasoning. Second, the long run-time of such locate-and-edit methods makes large-scale KE infeasible in practice. In this paper, we explore Parameter-Efficient Fine-Tuning (PEFT) techniques as an alternative for KE. We curate a more comprehensive temporal KE dataset with both knowledge update and knowledge injection examples for KE performance benchmarking. We further probe the effect of fine-tuning on a range of layers in an LLM for the multi-hop QA task. We find that PEFT performs better than locate-and-edit techniques for time-sensitive knowledge edits.
Comment: ACL 2024 main
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.04496
Accession Number: edsarx.2406.04496
Database: arXiv