Uncertainty quantification in fine-tuned LLMs using LoRA ensembles

Bibliographic details
Title: Uncertainty quantification in fine-tuned LLMs using LoRA ensembles
Authors: Balabanov, Oleksandr; Linander, Hampus
Publication year: 2024
Collection: Computer Science; Statistics
Subject terms: Computer Science - Machine Learning; Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Statistics - Machine Learning
Description: Fine-tuning large language models can improve task-specific performance, although a general understanding of what the fine-tuned model has learned and forgotten, and of how far its predictions can be trusted, is still missing. We derive principled uncertainty quantification for fine-tuned LLMs, with posterior approximations using computationally efficient low-rank adaptation (LoRA) ensembles. We analyze three common multiple-choice datasets using LoRA ensembles based on Mistral-7b, and draw quantitative and qualitative conclusions on their perceived complexity and on model efficacy across the different target domains during and after fine-tuning. In particular, backed by the numerical experiments, we hypothesise about signals from entropic uncertainty measures for data domains that are inherently difficult for a given architecture to learn. (A sketch of such entropy-based ensemble measures follows this record.)
Comment: 8 pages, 4 figures
Document type: Working Paper
Access URL: http://arxiv.org/abs/2402.12264
Accession number: edsarx.2402.12264
Database: arXiv
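
The description refers to entropic uncertainty measures derived from LoRA ensembles on multiple-choice tasks. The record does not spell out the paper's exact estimators, but a standard reading is the decomposition of ensemble uncertainty into total (predictive) entropy, expected per-member entropy, and their difference (the epistemic mutual information). The Python sketch below assumes softmax probabilities over the C answer options have already been collected from M ensemble members; the array shapes, function names, and numbers are illustrative, not taken from the paper.

import numpy as np

def entropy(p: np.ndarray, eps: float = 1e-12) -> float:
    """Shannon entropy of a probability vector, in nats."""
    return float(-np.sum(p * np.log(p + eps)))

def lora_ensemble_uncertainty(member_probs: np.ndarray) -> tuple[float, float, float]:
    """Entropy decomposition for M ensemble members over C answer options.

    member_probs: shape (M, C); each row is one (hypothetical) LoRA
    member's softmax distribution over the multiple-choice answers.
    Returns (total, aleatoric, epistemic) uncertainty in nats.
    """
    mean_p = member_probs.mean(axis=0)        # ensemble-averaged predictive distribution
    total = entropy(mean_p)                   # predictive entropy of the ensemble mean
    aleatoric = float(np.mean([entropy(p) for p in member_probs]))  # expected member entropy
    epistemic = total - aleatoric             # mutual information: ensemble disagreement
    return total, aleatoric, epistemic

# Toy example: M = 4 illustrative members, C = 4 options (A-D).
member_probs = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.60, 0.20, 0.10, 0.10],
    [0.15, 0.65, 0.10, 0.10],  # one member disagrees -> raises the epistemic term
    [0.55, 0.25, 0.10, 0.10],
])
print(lora_ensemble_uncertainty(member_probs))

Under this reading, a domain where the expected per-member entropy stays high while the mutual information stays low would look irreducibly (aleatorically) hard, which is one way the entropy signals hypothesised in the description could manifest.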