Fine-tuning Large Language Models with Sequential Instructions

Bibliographic Details
Title: Fine-tuning Large Language Models with Sequential Instructions
Authors: Hu, Hanxu; Yu, Simon; Chen, Pinzhen; Ponti, Edoardo M.
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language
Description: Despite the success of existing instruction-tuned models, we find that they usually struggle to respond to queries with multiple instructions. This impairs their performance on complex problems whose solution consists of multiple intermediate tasks. Thus, we contend that part of the fine-tuning data mixture should be sequential: a chain of interrelated tasks. We first approach sequential instruction tuning from a task-driven perspective, manually creating interpretable intermediate tasks for multilingual and visual question answering, namely "translate then predict" and "caption then answer". Next, we automate this process by turning instructions in existing datasets (e.g., Alpaca and FlanCoT) into diverse and complex sequential instructions, making our method general-purpose. Models that underwent our sequential instruction tuning show improved results in coding, maths, and open-ended generation. Moreover, we put forward a new benchmark named SeqEval to evaluate a model's ability to follow all the instructions in a sequence, which further corroborates the benefits of our fine-tuning method. We hope that our endeavours will open new research avenues on instruction tuning for complex tasks.
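
Illustration: the abstract describes turning single-task instructions into chained, sequential ones. The sketch below shows one plausible way to compose such a prompt; it is a minimal sketch, not the authors' released pipeline, and the function name and prompt template are hypothetical.

    def make_sequential(task_instructions):
        # Chain single-task instructions into one sequential instruction,
        # preserving their order so each step can feed the next.
        steps = [f"Step {i}: {text}"
                 for i, text in enumerate(task_instructions, start=1)]
        header = ("Complete the following tasks in order, "
                  "using each step's output as input to the next.")
        return header + "\n" + "\n".join(steps)

    # Example in the spirit of the paper's "translate then predict" task:
    print(make_sequential([
        "Translate the question into English.",
        "Answer the translated question.",
    ]))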
Comment: 21 pages, 8 figures
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2403.07794
Accession Number: edsarx.2403.07794
Database: arXiv