What Did I Do Wrong? Quantifying LLMs' Sensitivity and Consistency to Prompt Engineering

Bibliographic Details
Title: What Did I Do Wrong? Quantifying LLMs' Sensitivity and Consistency to Prompt Engineering
Authors: Errica, Federico; Siracusano, Giuseppe; Sanvito, Davide; Bifulco, Roberto
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning; Computer Science - Software Engineering
Description: Large Language Models (LLMs) have changed the way we design and interact with software systems. Their ability to process and extract information from text has drastically improved productivity in a number of routine tasks. Developers who want to include these models in their software stack, however, face a dreadful challenge: debugging their inconsistent behavior across minor variations of the prompt. We therefore introduce two metrics for classification tasks, namely sensitivity and consistency, which are complementary to task performance. Sensitivity measures changes of predictions across rephrasings of the prompt and does not require access to ground-truth labels. Consistency, in contrast, measures how predictions vary across rephrasings for elements of the same class. We perform an empirical comparison of these metrics on text classification tasks, using them as a guideline for understanding failure modes of the LLM. Our hope is that sensitivity and consistency will be powerful allies in automatic prompt engineering frameworks to obtain LLMs that balance robustness with performance.
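To make the two metrics concrete, here is a minimal sketch of how they could be operationalized for a classification task. The paper gives the exact definitions; the formulas below are illustrative assumptions only: sensitivity is taken as the average disagreement of each item's predictions across prompt rephrasings (no labels required), and consistency as the within-class agreement of predictions across rephrasings. The function names and the majority-vote formulation are hypothetical, not the authors' notation.

```python
# Hedged sketch, NOT the paper's exact definitions.
from collections import Counter

def sensitivity(preds_per_item):
    """preds_per_item[i][j] is the label the LLM predicts for item i under
    prompt rephrasing j. Returns the mean fraction of rephrasings that
    deviate from each item's majority prediction (0 = perfectly stable,
    1 = maximally unstable). No ground-truth labels are needed."""
    scores = []
    for preds in preds_per_item:
        majority_count = Counter(preds).most_common(1)[0][1]
        scores.append(1.0 - majority_count / len(preds))
    return sum(scores) / len(scores)

def consistency(preds_per_item, true_labels):
    """Groups items by their ground-truth class and measures how uniformly
    the model predicts across rephrasings within each class (1 = all
    predictions in a class agree)."""
    by_class = {}
    for preds, label in zip(preds_per_item, true_labels):
        by_class.setdefault(label, []).extend(preds)
    per_class = []
    for preds in by_class.values():
        majority_count = Counter(preds).most_common(1)[0][1]
        per_class.append(majority_count / len(preds))
    return sum(per_class) / len(per_class)

# Example: 3 items, 4 prompt rephrasings each.
preds = [["pos", "pos", "neg", "pos"],
         ["neg", "neg", "neg", "neg"],
         ["pos", "neg", "pos", "pos"]]
labels = ["pos", "neg", "pos"]
print(sensitivity(preds))          # ~0.167: predictions mostly stable
print(consistency(preds, labels))  # 0.875: high within-class agreement
```

Note that, as the abstract states, only consistency requires ground-truth labels; sensitivity can be computed on unlabeled inputs, which is what makes it usable as a label-free debugging signal during prompt engineering.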
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.12334
Accession Number: edsarx.2406.12334
Database: arXiv