Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting
Topics: AI (Deep Learning), Ranking, Retrieval Augmented Generation (RAG)
Researchers at Google have developed a technique called Pairwise Ranking Prompting (PRP) to improve document ranking with Large Language Models (LLMs). Rather than asking a model to score or order an entire candidate list at once, PRP reduces ranking to a series of simpler comparisons: the LLM is shown a query and two candidate passages and asked which is more relevant, and the pairwise judgments are aggregated into a final ranking. Using moderately sized open-source LLMs, PRP achieves state-of-the-art results on standard ranking benchmarks, matching or exceeding prior approaches built on much larger commercial models such as GPT-4 on several metrics. The method is also robust to input ordering and admits efficient variants, making it practical across a range of information retrieval applications.
- Affiliation: Google Research
- Authors: Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Le Yan, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky
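The core PRP operation can be illustrated with a short sketch: for each pair of candidate passages, the LLM is prompted, in both orders, to pick the more relevant one, and passages are ranked by their number of pairwise wins. This is a minimal illustration, not the authors' reference implementation; `call_llm` is a placeholder for whatever completion API is used, and the prompt wording paraphrases the style described in the paper rather than quoting its exact template.

```python
from itertools import combinations
from typing import Callable, List

def prp_prompt(query: str, passage_a: str, passage_b: str) -> str:
    # Pairwise prompt: ask which of two passages better answers the query.
    return (
        f'Given a query "{query}", which of the following two passages is '
        "more relevant to the query?\n\n"
        f"Passage A: {passage_a}\n\n"
        f"Passage B: {passage_b}\n\n"
        "Output Passage A or Passage B:"
    )

def rank_all_pairs(
    query: str,
    passages: List[str],
    call_llm: Callable[[str], str],  # placeholder: any text-completion function
) -> List[str]:
    """Rank passages by counting pairwise wins (an all-pairs style aggregation).

    Each pair is queried in both orders to reduce sensitivity to the order in
    which the two passages appear in the prompt.
    """
    scores = [0.0] * len(passages)
    for i, j in combinations(range(len(passages)), 2):
        for a, b in ((i, j), (j, i)):
            answer = call_llm(prp_prompt(query, passages[a], passages[b])).strip()
            if answer.startswith("Passage A"):
                scores[a] += 1.0
            elif answer.startswith("Passage B"):
                scores[b] += 1.0
            else:
                # Unparseable or inconsistent answer: split the credit.
                scores[a] += 0.5
                scores[b] += 0.5
    # More pairwise wins -> ranked earlier.
    order = sorted(range(len(passages)), key=lambda k: scores[k], reverse=True)
    return [passages[k] for k in order]
```

The all-pairs aggregation shown here issues O(N^2) LLM calls for N candidates; the paper also discusses cheaper variants (e.g., sorting-based and sliding-window scheduling of comparisons) that trade some accuracy for fewer calls.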