SFR-RAG: Towards Contextually Faithful LLMs

Bibliographic Details
Title: SFR-RAG: Towards Contextually Faithful LLMs
Authors: Nguyen, Xuan-Phi, Pandit, Shrey, Purushwalkam, Senthil, Xu, Austin, Chen, Hailin, Ming, Yifei, Ke, Zixuan, Savarese, Silvio, Xiong, Caiming, Joty, Shafiq
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language, Computer Science - Artificial Intelligence
Description: Retrieval Augmented Generation (RAG), a paradigm that integrates external contextual information with large language models (LLMs) to enhance factual accuracy and relevance, has emerged as a pivotal area in generative AI. The LLMs used in RAG applications are required to faithfully and completely comprehend the provided context and users' questions, avoid hallucination, handle unanswerable, counterfactual or otherwise low-quality and irrelevant contexts, perform complex multi-hop reasoning and produce reliable citations. In this paper, we introduce SFR-RAG, a small LLM that is instruction-tuned with an emphasis on context-grounded generation and hallucination minimization. We also present ContextualBench, a new evaluation framework compiling multiple popular and diverse RAG benchmarks, such as HotpotQA and TriviaQA, with consistent RAG settings to ensure reproducibility and consistency in model assessments. Experimental results demonstrate that our SFR-RAG-9B model outperforms leading baselines such as Command-R+ (104B) and GPT-4o, achieving state-of-the-art results in 3 out of 7 benchmarks in ContextualBench with significantly fewer parameters. The model is also shown to be resilient to alteration in the contextual information and behave appropriately when relevant context is removed. Additionally, the SFR-RAG model maintains competitive performance in general instruction-following tasks and function-calling capabilities.
Comment: Technical report
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2409.09916
Accession Number: edsarx.2409.09916
Database: arXiv