Report
Language Models as Science Tutors
| Field | Value |
|---|---|
| Title | Language Models as Science Tutors |
| Authors | Alexis Chevalier, Jiayi Geng, Alexander Wettig, Howard Chen, Sebastian Mizera, Toni Annala, Max Jameson Aragon, Arturo Rodríguez Fanlo, Simon Frieder, Simon Machado, Akshara Prabhakar, Ellie Thieu, Jiachen T. Wang, Zirui Wang, Xindi Wu, Mengzhou Xia, Wenhan Jia, Jiatong Yu, Jun-Jie Zhu, Zhiyong Jason Ren, Sanjeev Arora, Danqi Chen |
| Publication Year | 2024 |
| Collection | Computer Science |
| Subject Terms | Computer Science - Computation and Language |
| Description | NLP has recently made exciting progress toward training language models (LMs) with strong scientific problem-solving skills. However, model development has not focused on real-life use-cases of LMs for science, including applications in education that require processing long scientific documents. To address this, we introduce TutorEval and TutorChat. TutorEval is a diverse question-answering benchmark consisting of questions about long chapters from STEM textbooks, written by experts. TutorEval helps measure real-life usability of LMs as scientific assistants, and it is the first benchmark combining long contexts, free-form generation, and multi-disciplinary scientific knowledge. Moreover, we show that fine-tuning base models with existing dialogue datasets leads to poor performance on TutorEval. Therefore, we create TutorChat, a dataset of 80,000 long synthetic dialogues about textbooks. We use TutorChat to fine-tune Llemma models with 7B and 34B parameters. These LM tutors specialized in math have a 32K-token context window, and they excel at TutorEval while performing strongly on GSM8K and MATH. Our datasets build on open-source materials, and we release our models, data, and evaluations. |
| Comment | 8 pages without bibliography and appendix, 26 pages total |
| Document Type | Working Paper |
| Access URL | http://arxiv.org/abs/2402.11111 |
| Accession Number | edsarx.2402.11111 |
| Database | arXiv |