NEFTune: Noisy Embeddings Improve Instruction Finetuning

Bibliographic Details
Title: NEFTune: Noisy Embeddings Improve Instruction Finetuning
Authors: Jain, Neel, Chiang, Ping-yeh, Wen, Yuxin, Kirchenbauer, John, Chu, Hong-Min, Somepalli, Gowthami, Bartoldson, Brian R., Kailkhura, Bhavya, Schwarzschild, Avi, Saha, Aniruddha, Goldblum, Micah, Geiping, Jonas, Goldstein, Tom
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language, Computer Science - Machine Learning
Description: We show that language model finetuning can be improved, sometimes dramatically, with a simple augmentation. NEFTune adds noise to the embedding vectors during training. Standard finetuning of LLaMA-2-7B using Alpaca achieves 29.79% on AlpacaEval, which rises to 64.69% using noisy embeddings. NEFTune also improves over strong baselines on modern instruction datasets. Models trained with Evol-Instruct see a 10% improvement, with ShareGPT an 8% improvement, and with OpenPlatypus an 8% improvement. Even powerful models further refined with RLHF, such as LLaMA-2-Chat, benefit from additional training with NEFTune. (A minimal code sketch of the technique follows this record.)
Comment: 25 pages. Code is available on GitHub: https://github.com/neelsjain/NEFTune
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2310.05914
Accession Number: edsarx.2310.05914
Database: arXiv
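
Below is a minimal PyTorch sketch of the noisy-embedding idea summarized in the description: uniform noise is added to the output of the input embedding layer during training only. The uniform [-1, 1] noise scaled by alpha / sqrt(L * d) follows the scaling rule described in the paper; the function names (add_neftune_noise, attach_neftune), the default alpha = 5.0, and the get_input_embeddings() wiring (a Hugging Face transformers accessor) are illustrative assumptions, not the repository's exact implementation.

    import torch
    from torch import nn

    def add_neftune_noise(embedding_output: torch.Tensor, alpha: float = 5.0) -> torch.Tensor:
        # embedding_output: (batch, seq_len, embed_dim).
        # Uniform noise in [-1, 1], scaled by alpha / sqrt(seq_len * embed_dim),
        # per the scaling rule described in the paper.
        seq_len, embed_dim = embedding_output.shape[-2], embedding_output.shape[-1]
        scale = alpha / (seq_len * embed_dim) ** 0.5
        noise = torch.zeros_like(embedding_output).uniform_(-1, 1)
        return embedding_output + noise * scale

    def attach_neftune(model: nn.Module, alpha: float = 5.0):
        # Register a forward hook on the input embedding layer so that noise
        # is injected during finetuning but not at inference time.
        # get_input_embeddings() assumes a Hugging Face-style model.
        def hook(module, inputs, output):
            return add_neftune_noise(output, alpha) if module.training else output
        return model.get_input_embeddings().register_forward_hook(hook)

Because the hook checks module.training, calling model.eval() for generation or evaluation disables the noise automatically; removing the returned hook handle (handle.remove()) restores the original embedding behavior entirely.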