Parameter-Efficient Prompt Tuning Makes Generalized and Calibrated Neural Text Retrievers

Bibliographic Details
Title: Parameter-Efficient Prompt Tuning Makes Generalized and Calibrated Neural Text Retrievers
Authors: Tam, Weng Lam; Liu, Xiao; Ji, Kaixuan; Xue, Lilong; Zhang, Xingjian; Dong, Yuxiao; Liu, Jiahua; Hu, Maodi; Tang, Jie
Publication Year: 2022
Subject Terms: Computer Science - Computation and Language, Computer Science - Information Retrieval, Computer Science - Machine Learning
Description: Prompt tuning updates only a small number of task-specific parameters in pre-trained models. It has achieved performance comparable to fine-tuning the full parameter set on both language understanding and generation tasks. In this work, we study the problem of prompt tuning for neural text retrievers. We introduce parameter-efficient prompt tuning for text retrieval across in-domain, cross-domain, and cross-topic settings. Through an extensive analysis, we show that the strategy can mitigate two issues faced by fine-tuning based retrieval methods: parameter inefficiency and weak generalizability. Notably, it can significantly improve the out-of-domain zero-shot generalization of retrieval models. By updating only 0.1% of the model parameters, the prompt tuning strategy helps retrieval models achieve better generalization performance than traditional methods in which all parameters are updated. Finally, to facilitate research on retrievers' cross-topic generalizability, we curate and release an academic retrieval dataset with 18K query-result pairs in 87 topics, the largest topic-specific dataset of its kind to date.
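To make the "update only 0.1% of parameters" idea concrete, the following is a minimal sketch of soft prompt tuning for a retrieval encoder. It is not the authors' released code; it assumes a BERT backbone loaded via HuggingFace transformers, 20 trainable prompt tokens, and mean pooling into a single retrieval vector, all of which are illustrative choices rather than details taken from the paper.

```python
# A minimal sketch of soft prompt tuning for a retrieval encoder.
# Assumptions (not from the paper): "bert-base-uncased" backbone,
# 20 prompt tokens, mean pooling for the text embedding.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class PromptTunedEncoder(nn.Module):
    def __init__(self, model_name="bert-base-uncased", n_prompt_tokens=20):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(model_name)
        # Freeze every backbone parameter: only the soft prompt is
        # trained, which is how prompt tuning reaches roughly 0.1%
        # trainable parameters relative to full fine-tuning.
        for p in self.backbone.parameters():
            p.requires_grad = False
        # Trainable soft prompt, initialized from random word embeddings.
        word_emb = self.backbone.get_input_embeddings().weight
        init_ids = torch.randint(0, word_emb.size(0), (n_prompt_tokens,))
        self.prompt = nn.Parameter(word_emb[init_ids].detach().clone())

    def forward(self, input_ids, attention_mask):
        tok_emb = self.backbone.get_input_embeddings()(input_ids)
        batch = input_ids.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the soft prompt and extend the attention mask to match.
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
        prompt_mask = torch.ones(batch, self.prompt.size(0),
                                 dtype=attention_mask.dtype,
                                 device=attention_mask.device)
        mask = torch.cat([prompt_mask, attention_mask], dim=1)
        out = self.backbone(inputs_embeds=inputs_embeds,
                            attention_mask=mask)
        # Mean-pool the contextual states into one retrieval vector.
        m = mask.unsqueeze(-1).float()
        return (out.last_hidden_state * m).sum(1) / m.sum(1)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = PromptTunedEncoder()
batch = tokenizer(["what is prompt tuning?"], return_tensors="pt")
query_vec = encoder(batch["input_ids"], batch["attention_mask"])
# Only the prompt embeddings receive gradients:
print([n for n, p in encoder.named_parameters() if p.requires_grad])
# -> ['prompt']
```

In a dual-encoder retrieval setup, two such encoders (or one shared encoder) would embed queries and passages, with a contrastive loss over dot-product scores; only the prompt parameters would be updated during training.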
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2207.07087
Accession Number: edsarx.2207.07087
Database: arXiv