How Much Annotation is Needed to Compare Summarization Models?

Bibliographic Details
Title: How Much Annotation is Needed to Compare Summarization Models?
Authors: Shaib, Chantal; Barrow, Joe; Siu, Alexa F.; Wallace, Byron C.; Nenkova, Ani
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language
Description: Modern instruction-tuned models have become highly capable at text generation tasks such as summarization, and new models are expected to be released at a steady pace. In practice, one may now wish to choose, confidently but with minimal effort, the best-performing summarization model for a new domain or purpose. In this work, we empirically investigate the test sample size necessary to select a preferred model in the context of news summarization. Results reveal that comparative evaluation converges quickly for both automatic and human evaluation, with a clear preference for one system emerging from fewer than 100 examples. The human preference data allow us to quantify how well automatic scores can reproduce preference rankings across a variety of downstream summarization tasks. We find that, while automatic metrics are stable at smaller sample sizes, only some of them can moderately predict model win rates according to human preference.
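The description notes that comparative evaluation converges quickly, with a preferred system emerging from fewer than 100 examples. The sketch below is purely illustrative, not the paper's procedure: it assumes simulated per-example preference labels between two hypothetical systems A and B, and probes convergence by repeatedly subsampling n judgments and checking how often the subsample picks the same winner as the full annotation set.

```python
import random

# Illustrative only: simulated pairwise preferences between two summarization
# systems. In the paper's setting these labels would come from human judgments
# or automatic metrics, not from a random generator.
random.seed(0)
full_preferences = [1 if random.random() < 0.6 else 0 for _ in range(1000)]  # 1 = system A preferred
overall_winner_is_a = sum(full_preferences) / len(full_preferences) > 0.5

def agreement_rate(preferences, n, trials=2000):
    """Fraction of random size-n subsamples whose winner matches the full-set winner."""
    matches = 0
    for _ in range(trials):
        sample = random.sample(preferences, n)
        sample_winner_is_a = sum(sample) / n > 0.5
        matches += sample_winner_is_a == overall_winner_is_a
    return matches / trials

for n in (25, 50, 100, 200):
    print(f"n={n:3d}  agreement with full-set winner: {agreement_rate(full_preferences, n):.2f}")
```

Under these assumed settings, the printed agreement rates show how quickly a small evaluation set can identify the same winner as the full set; the true convergence behavior reported in the paper is measured on real summarization outputs and annotations.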
Comment: Preprint
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2402.18756
Accession Number: edsarx.2402.18756
Database: arXiv