End-to-End Query Term Weighting (TW-BERT)
Topics: AI (Deep Learning), Marc Najork, Ranking, Scoring, Search Query Processing
The document discusses Term Weighting BERT (TW-BERT), a model that improves the effectiveness of lexical retrieval systems by predicting weights for query n-grams such as unigrams and bigrams. Because the predicted weights plug directly into existing scoring functions such as BM25, the approach achieves more targeted retrieval while avoiding much of the integration complexity that deep learning methods typically incur in production search systems.
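To make the integration concrete, here is a minimal sketch of how per-term weights can scale each term's contribution inside a BM25-style score. The `weighted_bm25` function and the hand-supplied `term_weights` dictionary are illustrative assumptions; in TW-BERT the weights would come from the model rather than being set by hand.

```python
import math

def weighted_bm25(term_weights, doc_terms, doc_freqs, num_docs, avgdl,
                  k1=1.2, b=0.75):
    """BM25 where each query term's contribution is scaled by a weight.

    term_weights: maps query terms to weights (here hand-supplied;
                  a TW-BERT-style model would predict these).
    doc_terms:    tokenized document.
    doc_freqs:    term -> number of documents containing the term.
    """
    dl = len(doc_terms)
    score = 0.0
    for term, w in term_weights.items():
        tf = doc_terms.count(term)
        if tf == 0:
            continue
        n = doc_freqs.get(term, 0)
        # Smoothed IDF, as in common BM25 variants.
        idf = math.log((num_docs - n + 0.5) / (n + 0.5) + 1)
        # The weight w scales this term's BM25 contribution.
        score += w * idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * dl / avgdl))
    return score
```

Upweighting one query term shifts the ranking toward documents matching that term, which is the effect the learned weights exploit.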
BM25 is a popular ranking function used in information retrieval systems to estimate the relevance of documents to a given search query. It belongs to the family of probabilistic information retrieval models, which are grounded in the probabilistic relevance framework.
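For reference, a compact implementation of the standard (unweighted) BM25 scoring function, using the common smoothed-IDF variant; parameter defaults k1=1.2 and b=0.75 are conventional choices, not values from the paper:

```python
import math

def bm25_score(query_terms, doc_terms, doc_freqs, num_docs, avgdl,
               k1=1.2, b=0.75):
    """Score one document against a query with classic BM25.

    query_terms: list of query tokens.
    doc_terms:   tokenized document.
    doc_freqs:   term -> number of documents containing the term.
    num_docs:    corpus size; avgdl: average document length.
    """
    dl = len(doc_terms)
    score = 0.0
    for term in query_terms:
        tf = doc_terms.count(term)
        if tf == 0:
            continue
        n = doc_freqs.get(term, 0)
        # Smoothed IDF keeps the value non-negative.
        idf = math.log((num_docs - n + 0.5) / (n + 0.5) + 1)
        # Term-frequency saturation (k1) and length normalization (b).
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * dl / avgdl))
    return score
```

The k1 parameter controls how quickly repeated occurrences of a term stop adding score, and b controls how strongly long documents are penalized.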