APEER: Automatic Prompt Engineering Enhances Large Language Model Reranking

Bibliographic Details
Title: APEER: Automatic Prompt Engineering Enhances Large Language Model Reranking
Authors: Jin, Can; Peng, Hongwu; Zhao, Shiyu; Wang, Zhenting; Xu, Wujiang; Han, Ligong; Zhao, Jiahui; Zhong, Kai; Rajasekaran, Sanguthevar; Metaxas, Dimitris N.
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Artificial Intelligence
Description: Large Language Models (LLMs) have significantly enhanced Information Retrieval (IR) across various modules, such as reranking. Despite impressive performance, current zero-shot relevance ranking with LLMs heavily relies on human prompt engineering. Existing automatic prompt engineering algorithms primarily focus on language modeling and classification tasks, leaving the domain of IR, particularly reranking, underexplored. Directly applying current prompt engineering algorithms to relevance ranking is challenging due to the integration of query and long passage pairs in the input, where the ranking complexity surpasses classification tasks. To reduce human effort and unlock the potential of prompt optimization in reranking, we introduce a novel automatic prompt engineering algorithm named APEER. APEER iteratively generates refined prompts through feedback and preference optimization. Extensive experiments with four LLMs and ten datasets demonstrate the substantial performance improvement of APEER over existing state-of-the-art (SoTA) manual prompts. Furthermore, we find that the prompts generated by APEER exhibit better transferability across diverse tasks and LLMs. Code is available at https://github.com/jincan333/APEER.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.14449
Accession Number: edsarx.2406.14449
Database: arXiv
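The abstract describes an iterate–score–prefer loop: a prompt is refined via feedback, and the refined candidate is kept only if it is preferred over the current best. The sketch below is a toy illustration of that loop shape only, not the paper's actual algorithm (see the linked repository for that); the keyword-based scorer and all function names here are illustrative assumptions standing in for LLM-generated feedback and an nDCG-style reranking metric.

```python
def score_prompt(prompt: str) -> int:
    # Toy stand-in for a reranking quality metric: reward prompts that
    # mention key ranking instructions. A real system would rerank
    # passages with an LLM and compute e.g. nDCG@10 on held-out queries.
    keywords = ["rank", "relevance", "passage", "query"]
    return sum(kw in prompt.lower() for kw in keywords)

def refine(prompt: str, feedback_terms: list[str]) -> str:
    # Toy "feedback" step: append one missing instruction term,
    # standing in for LLM-generated textual feedback on the prompt.
    for term in feedback_terms:
        if term not in prompt.lower():
            return prompt + f" Consider the {term}."
    return prompt

def optimize_prompt(seed_prompt: str, iterations: int = 4) -> str:
    # Iterative refinement with a preference step: keep whichever of
    # (current best, refined candidate) scores higher.
    feedback_terms = ["query", "passage", "relevance", "rank"]
    best = seed_prompt
    for _ in range(iterations):
        candidate = refine(best, feedback_terms)
        if score_prompt(candidate) > score_prompt(best):
            best = candidate
    return best
```

For example, `optimize_prompt("Sort the documents.")` grows the seed prompt one instruction term per iteration, since each accepted candidate strictly improves the toy score.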