How to Train Your Fact Verifier: Knowledge Transfer with Multimodal Open Models

Bibliographic Details
Title: How to Train Your Fact Verifier: Knowledge Transfer with Multimodal Open Models
Authors: Jaeyoung Lee, Ximing Lu, Jack Hessel, Faeze Brahman, Youngjae Yu, Yonatan Bisk, Yejin Choi, Saadia Gabriel
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language
Description: Given the growing influx of misinformation across news and social media, there is a critical need for systems that can provide effective real-time verification of news claims. Verification based on large language or multimodal models has been proposed to scale up online policing mechanisms for mitigating the spread of false and harmful content. While such systems can potentially reduce the burden on human fact-checkers, these efforts may be hampered by foundation model training data becoming outdated. In this work, we test the limits of improving foundation model performance without continual updating through an initial study of knowledge transfer using either existing intra- and inter-domain benchmarks or explanations generated by large language models (LLMs). We evaluate on 12 public benchmarks for fact-checking and misinformation detection as well as two other tasks relevant to content moderation: toxicity and stance detection. Our results on two recent multimodal fact-checking benchmarks, Mocheg and Fakeddit, indicate that knowledge transfer strategies can improve Fakeddit performance over the state of the art by up to 1.7% and Mocheg performance by up to 2.9%.
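
Illustration: the knowledge-transfer idea described in the abstract (fine-tuning on an auxiliary intra- or inter-domain benchmark before adapting to the target fact-checking benchmark) can be sketched as a simple two-stage fine-tuning loop. This is a minimal, hedged sketch, not the paper's actual pipeline; the model choice, the `finetune` helper, and the placeholder claim/label data are illustrative assumptions only.

```python
# Hypothetical two-stage knowledge-transfer sketch for claim verification:
# Stage 1 fine-tunes on an auxiliary benchmark, Stage 2 adapts to the target.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def finetune(model, tokenizer, texts, labels, epochs=1, lr=2e-5):
    """Minimal training loop over (claim text, veracity label) pairs."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for text, label in zip(texts, labels):
            enc = tokenizer(text, return_tensors="pt", truncation=True)
            out = model(**enc, labels=torch.tensor([label]))
            out.loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Stage 1: transfer from an auxiliary (intra- or inter-domain) benchmark.
aux_texts, aux_labels = ["an auxiliary benchmark claim"], [1]   # placeholder data
model = finetune(model, tokenizer, aux_texts, aux_labels)

# Stage 2: adapt to the target fact-checking benchmark (e.g. Fakeddit or Mocheg).
tgt_texts, tgt_labels = ["a target benchmark claim"], [0]       # placeholder data
model = finetune(model, tokenizer, tgt_texts, tgt_labels)
```

In the paper's setting the second stage targets multimodal benchmarks (Fakeddit, Mocheg), so the text-only classifier above stands in only for the general transfer recipe, under the assumptions noted in the lead-in.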
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2407.00369
Accession Number: edsarx.2407.00369
Database: arXiv