Author: Olaf Kopp, 21 January 2024
Reading time: 24 minutes

Patents and research papers for deep learning & ranking by Marc Najork


In this article I summarize the most interesting Google patents and scientific papers on ranking from Marc Najork. I will update this collection whenever a new interesting source is published. Google’s Marc Najork was involved in the most important patents and research papers on deep learning and ranking of recent years and plays a central role in this field.


Since 2013 I have been working intensively on semantic SEO and entity-based search. You can find more well-researched and well-founded information on these topics in my other articles here on the blog and at Search Engine Land.

Permutation Equivariant Document Interaction Network for Neural Learning-to-Rank

This research paper by Rama Kumar Pasumarthi, Honglei Zhuang, Xuanhui Wang, Michael Bendersky, and Marc Najork focuses on improving ranking performance in information retrieval using deep learning.

Background

  • The paper addresses the challenge of leveraging cross-document interactions to enhance ranking performance in information retrieval.
  • Traditional learning-to-rank (LTR) approaches have focused on univariate scoring functions, which evaluate documents in isolation.
  • The recent success of neural networks in LTR motivates the exploration of deep learning frameworks for capturing cross-document interactions.

Here are some examples of possible interactions between documents:

  1. Topical Relationships: Documents covering similar topics or subjects might influence each other’s relevance. For instance, in a search query about renewable energy, a document about solar power might be more relevant if there are other documents about related topics like wind energy or sustainable practices.
  2. Citation or Reference Links: In academic or research-oriented searches, documents that cite each other or are commonly referenced together could be considered more relevant. For example, two research papers where one cites the other could be mutually reinforcing in terms of relevance.
  3. User Engagement Patterns: If historical data shows that users who view one document often view another, this interaction can suggest a relevance relationship. For instance, in a product search, if users frequently view both a smartphone and a compatible accessory, these documents may interact in terms of relevance to the query.
  4. Semantic Relationships: Documents might interact based on shared or related concepts, even if they don’t explicitly mention the same keywords. Advanced natural language processing techniques can identify these semantic relationships, enhancing the understanding of document relevance.
  5. Contextual Complementarity: Sometimes, documents complement each other by providing different perspectives or types of information on a similar topic. For example, in a news search about a political event, an analytical article and a factual report might be considered complementary.
  6. Temporal Relevance: Documents might interact based on their timeliness or publication dates. For instance, in a search for current news, recent documents might interact to enhance the relevance of the most current information.
  7. Geographical Relevance: For location-based searches, documents related to or originating from the same geographical area might interact. For example, in a search for local events, news articles from local sources might be deemed more relevant when considered together.

Challenges

  • A key challenge is defining a scoring function that captures cross-document interactions while remaining permutation equivariant: reordering the input documents must permute the output scores in exactly the same way, so that no document’s score depends on the order in which the documents are fed to the model.
  • Existing models either do not guarantee permutation equivariance or are not applicable to full set ranking.

Solutions

  • The authors propose a self-attentive Document Interaction Network (attn-DIN) that extends a univariate scoring function with contextual features capturing cross-document interactions.
  • This network uses a self-attention mechanism and satisfies the permutation equivariance requirement (illustrated in the sketch after this list).
  • It can handle document sets of varying sizes and automatically learns to capture document interactions without auxiliary information.
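
To make the permutation-equivariance requirement concrete, here is a minimal NumPy sketch (my illustration, not the paper’s attn-DIN implementation): a single self-attention layer turns a set of document vectors into contextual features, and permuting the input documents permutes the outputs in exactly the same way, so no document’s score depends on input order.

```python
import numpy as np

def self_attention(X):
    """One self-attention layer over a set of document vectors.

    X: (n_docs, dim) matrix, one row per document. Each output row
    depends on the whole *set* of documents, not on their order.
    """
    d = X.shape[1]
    logits = X @ X.T / np.sqrt(d)                  # pairwise attention logits
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ X                             # contextual features

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))        # 5 documents, 8 features each
perm = rng.permutation(5)

# Permutation equivariance: permuting the inputs permutes the outputs
# identically, so per-document scores are order-independent.
assert np.allclose(self_attention(X)[perm], self_attention(X[perm]))
```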

Learnings

  • Experiments conducted on four ranking datasets (WEB30K, Istella, Gmail search, and Google Drive Quick Access) show significant improvements over state-of-the-art neural ranking models.
  • The proposed methods are competitive with gradient boosted decision tree (GBDT) based models on the WEB30K dataset.
  • The paper demonstrates the effectiveness of self-attention layers in capturing pairwise and higher-order document interactions in a permutation equivariant manner.

Implications for SEO

In summary, this research can inform a more sophisticated, context-aware, and user-focused SEO strategy, emphasizing the interconnectedness of content and the advanced capabilities of search engines in understanding and leveraging these connections.

  1. Content Interconnectivity: The research highlights the importance of how documents (or web pages) interact with each other. For SEO, this means creating content that is not only relevant on its own but also contextually connected to other content on your website or externally. This could involve strategic internal linking, referencing related topics, or building content clusters that support a central theme.
  2. Topic Authority and E-E-A-T: The concept of cross-document interaction aligns well with Google’s emphasis on Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T). By creating interconnected content that covers a topic comprehensively, a website can demonstrate authority, potentially improving its ranking for related queries.
  3. Semantic SEO: The research suggests that search engines are moving towards understanding and leveraging semantic relationships between documents. This underscores the importance of semantic SEO, where the focus is on topic relevance, context, and user intent, rather than just keywords.
  4. User Engagement Signals: If search algorithms consider the interaction between documents based on user behavior patterns, it’s crucial to optimize for user engagement. This includes improving the user experience, providing relevant and complementary content, and encouraging longer website visits and interactions.
  5. Natural Language Processing (NLP) and AI in SEO: The use of self-attention mechanisms in the research points to the growing importance of NLP and AI in search. SEO strategies should consider how search engines might use AI to understand and rank content, focusing on natural, user-focused content creation.
  6. Diverse Content Formats: Since documents can interact across different formats (like text, video, images), integrating various content types on your website could be beneficial. This approach can cater to different user preferences and potentially create richer, more interconnected content networks.
  7. Competitive Analysis and Co-Occurrence: Understanding how your content interacts with competitors’ content can be insightful. Tools that analyze keyword co-occurrence or shared backlinks can help identify how different domains are interconnected, offering opportunities for strategic content planning or partnerships.
  8. Backlink Quality and Context: The quality and context of backlinks might be evaluated based on the interrelation of the linking documents. This suggests a need for a more strategic approach to link building, focusing on relevance and the mutual context of linked documents.

Learning-to-Rank with BERT in TF-Ranking

  • Shuguang Han, Xuanhui Wang, Michael Bendersky, Marc Najork
  • TF-Ranking Team, Google Research, Mountain View, CA

Background

  • The paper describes a machine learning algorithm for document (re)ranking.
  • It utilizes BERT (Bidirectional Encoder Representations from Transformers) for encoding queries and documents.
  • On top of that, a Learning-to-Rank (LTR) model constructed with TF-Ranking (TFR) is applied to further optimize ranking performance.
  • The method was tested in the public MS MARCO benchmark and showed effective results.

TF-Ranking (TensorFlow Ranking) is an open-source library developed by Google for building machine learning models to solve ranking problems. It’s specifically designed for the TensorFlow framework. Ranking problems are common in various applications like search engines, recommendation systems, and information retrieval systems, where the goal is to sort items in a way that the most relevant or useful items appear first.

Key features and aspects of TF-Ranking include:

  1. Specialization in Ranking: Unlike general-purpose machine learning libraries, TF-Ranking focuses specifically on ranking tasks. This specialization allows it to provide tools and functionalities uniquely suited for these types of problems.
  2. Support for Various Ranking Models: TF-Ranking supports a range of ranking models, including pointwise, pairwise, and listwise approaches (compare the toy losses sketched after this list).
    • Pointwise models consider a single item at a time and predict its relevance score.
    • Pairwise models look at pairs of items and predict which item in the pair is better.
    • Listwise models consider the entire list of items and try to optimize the order of the entire list.
  3. Integration with TensorFlow: Being a part of the TensorFlow ecosystem, TF-Ranking benefits from TensorFlow’s features like automatic differentiation, scalable computation across CPUs and GPUs, and a robust community.
  4. Customizability and Extensibility: Users can customize existing models or create new ones to suit their specific ranking tasks. TF-Ranking is designed to be flexible and extensible.
  5. Scalability: TF-Ranking is built to handle large datasets and complex ranking models, making it suitable for industrial-scale applications.
  6. Research and Development: TF-Ranking is not only used for practical applications but also serves as a platform for academic and industrial research in the field of learning-to-rank.
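
To make the three loss families in point 2 concrete, here is a toy Python sketch (illustrative only, not TF-Ranking’s actual API) that computes a pointwise, a pairwise, and a listwise loss for the same three-item result list:

```python
import numpy as np

scores = np.array([2.0, 0.5, 1.0])   # model scores for 3 candidate items
labels = np.array([1.0, 0.0, 1.0])   # binary relevance labels

# Pointwise: score each item in isolation, e.g. squared error.
pointwise = np.mean((scores - labels) ** 2)

# Pairwise: logistic loss on score differences for every
# (more relevant, less relevant) pair of items.
pairs = [(i, j) for i in range(len(labels)) for j in range(len(labels))
         if labels[i] > labels[j]]
pairwise = np.mean([np.log1p(np.exp(-(scores[i] - scores[j])))
                    for i, j in pairs])

# Listwise: softmax cross-entropy over the whole list, optimizing the
# ordering of all items jointly.
probs = np.exp(scores) / np.exp(scores).sum()
listwise = -np.sum(labels * np.log(probs)) / labels.sum()

print(f"pointwise={pointwise:.3f} pairwise={pairwise:.3f} listwise={listwise:.3f}")
```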

Challenges

  • The challenge is to effectively assess and order the relevance of documents in relation to a query.
  • Prior approaches fine-tune BERT for ranking as pointwise classification over individual query-document pairs, which is less well suited to ranking than pairwise or listwise LTR objectives.

Solutions

  • Development of TFR-BERT, a generic document ranking framework that builds an LTR model by fine-tuning BERT representations of query-document pairs within TF-Ranking (a simplified sketch of this scoring scheme follows the list).
  • Experiments with the MS MARCO dataset to assess the performance of TFR-BERT.
  • Use of various loss functions (pointwise, pairwise, listwise) in TF-Ranking to improve ranking performance.
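
The following PyTorch/Hugging Face sketch mirrors the general TFR-BERT scoring scheme under my own assumptions (the actual system is implemented in TF-Ranking, and the model name, example texts, and dimensions here are placeholders): each query-document pair is encoded jointly, the [CLS] vector is projected to a score, and a listwise softmax loss is applied over the candidate list so that BERT is fine-tuned end to end.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
score_head = torch.nn.Linear(encoder.config.hidden_size, 1)

query = "what is learning to rank"
documents = ["Learning-to-rank trains models to order documents by relevance.",
             "A recipe for banana bread with walnuts."]
labels = torch.tensor([1.0, 0.0])       # relevance label per document

# Encode each (query, document) pair jointly: [CLS] query [SEP] doc [SEP]
batch = tokenizer([query] * len(documents), documents,
                  padding=True, truncation=True, return_tensors="pt")
cls_vectors = encoder(**batch).last_hidden_state[:, 0]  # [CLS] per pair
scores = score_head(cls_vectors).squeeze(-1)            # one score per doc

# Listwise softmax cross-entropy over the candidate list, one of the
# TF-Ranking losses the paper experiments with.
log_probs = torch.log_softmax(scores, dim=0)
loss = -(labels * log_probs).sum() / labels.sum()
loss.backward()   # gradients flow into BERT: end-to-end fine-tuning
```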

Learnings

  • TFR-BERT models outperform official baselines and improve upon existing approaches using the same training data and BERT checkpoints.
  • Combining ranking losses with BERT representations is effective for passage ranking.
  • Integrating RoBERTa and ELECTRA into TF-Ranking further improved performance.
  • Ensemble techniques that combine different loss functions and models led to improved performance compared to single model approaches.

The paper demonstrates the effectiveness of combining BERT representations with ranking losses and the benefits of ensemble techniques in LTR models.

Implications for SEO

  1. Importance of Natural Language Understanding: The use of BERT highlights the growing importance of natural language processing in search algorithms. SEO strategies should focus more on natural language content that aligns with user intent, rather than just targeting specific keywords.
  2. Content Quality and Relevance: The paper’s emphasis on understanding the context and relevance of documents to queries suggests that search engines are becoming better at evaluating content quality. This means SEO efforts should prioritize creating high-quality, relevant, and informative content that genuinely addresses the needs and questions of users.
  3. User Intent and Query Understanding: The ability of models like BERT to understand the nuances of language implies that search engines are getting better at discerning user intent. SEO strategies should consider different types of user intent (informational, transactional, navigational) and optimize content accordingly.
  4. Long-Form, Comprehensive Content: Given the advanced capabilities of language models to understand and evaluate content, comprehensive and well-structured long-form content might become more important. Such content can cover a topic in-depth, potentially aligning better with the sophisticated content analysis done by these models.
  5. Semantic Search Optimization: With the evolution of semantic search capabilities, SEO strategies should focus on semantic relevance, using related keywords, synonyms, and topics to build a more comprehensive and contextually relevant content strategy.
  6. Adapting to AI and Machine Learning Trends: SEO professionals should stay informed about the latest developments in AI and machine learning in search technologies. As these technologies evolve, so too will the best practices for optimizing content.
  7. Diverse Content Formats: Considering the advancements in understanding different content types, diversifying content into various formats (like videos, podcasts, infographics) might become increasingly important for SEO.

Ranking Search Result Documents

  • Countries of Publication: United States, WIPO, China, Germany
  • Last Publication Date: April 6, 2021
  • Inventors: Mike Bendersky, Marc Alexander Najork, Donald Metzler, Xuanhui Wang
  • Patent ID: US10970293B2

The patent describes methods and apparatus for using document and query features to determine a presentation characteristic for displaying a search result. It focuses on improving the relevance and presentation of search results based on past interactions and features.

Process

  • Receiving a query
  • Identifying documents that respond to the query
  • Generating query-dependent and query-independent measures for the documents based on past interactions

Factors

  1. Document and Query Features: The patent emphasizes the use of both document features (attributes or characteristics of a document) and query features (attributes or characteristics of a search query) to assess the relevance and presentation of search results. This suggests that Google considers a wide array of features from both the content of the web pages and the nature of the queries to determine how results should be ranked and presented.
  2. Past Interactions: A significant part of the patent discusses using measures associated with document features and/or query features based on past interactions. This implies that user behavior and interaction with search results (such as click-through rates, time spent on a page, etc.) could influence future rankings. The history of how users interact with documents sharing similar features to the document in question plays a role in determining its presentation in search results.
  3. Presentation Characteristics: The patent details methods for determining a presentation characteristic for search results, which could include the ranking position but also other aspects of how results are displayed. This indicates that ranking is not just about the order of results but also how information is presented to the user, potentially including rich snippets, featured boxes, or other specialized search result formats.
  4. Adaptation to Query and Document Features: The system described in the patent can generate both query-dependent and query-independent measures for a document based on past interactions. This means the ranking can adapt based on the specific query being made and the general relevance of the document, suggesting a dynamic ranking process that adjusts to the context of the search and the document’s attributes (a toy example of such a blend follows this list).
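
As a toy illustration of how such measures might be blended (the data, weights, and thresholds below are invented; the patent does not specify any of them):

```python
# Hypothetical past-interaction data: click-through rate for a document,
# either conditioned on a query feature or aggregated over all queries.
past_ctr = {
    ("doc_a", "renewable energy"): 0.42,  # query-dependent measure
    ("doc_a", None): 0.18,                # query-independent measure
}

def presentation(doc, query_feature, alpha=0.7):
    dep = past_ctr.get((doc, query_feature), 0.0)   # query-dependent
    indep = past_ctr.get((doc, None), 0.0)          # query-independent
    score = alpha * dep + (1 - alpha) * indep
    # The blended measure drives a presentation characteristic,
    # not just a rank position.
    return "prominent" if score >= 0.3 else "standard"

print(presentation("doc_a", "renewable energy"))  # -> "prominent"
```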

Implications for SEO

While the patent does not specify traditional ranking factors, it highlights a complex interplay between document characteristics, user queries, and past user interactions in determining how search results are ranked and presented. For SEO professionals, this reinforces the importance of focusing on comprehensive SEO strategies that go beyond keywords to include user experience, content quality, and engagement.

  • Focus on User Experience: Enhancing the user experience on a website can indirectly influence its ranking. Factors such as ease of navigation, page load speed, and the quality of content can affect user interactions, which in turn could influence rankings.
  • Content Relevance and Quality: High-quality, relevant content that aligns closely with user queries is likely to perform better. The patent underscores the importance of document features, suggesting that content should be optimized not just for keywords but for overall relevance and user intent.
  • Engagement Metrics: Metrics that indicate user engagement and satisfaction with content could play a role in ranking. This includes metrics like bounce rate, dwell time, and click-through rate from search results.

Training a Ranking Model

  • Countries of Publication: United States
  • Last Publication Date: April 29, 2021
  • Inventors: Donald Arthur Metzler Jr. (Marina del Rey, CA), Xuanhui Wang (Cupertino, CA), Marc Alexander Najork (Palo Alto, CA), Michael Bendersky (Sunnyvale, CA)

The document highlights the challenge of training machine learning models for ranking purposes, particularly due to biases in user interactions with search results. It addresses the need for models to accurately interpret user behavior and preferences.

The patent outlines a method for training a ranking machine learning model. It involves:

  • Receiving Training Data: Including examples with data identifying a search query, result documents from a result list, and a document selected by a user.
  • Position Data: Identifying the position of the selected document in the result list.
  • Selection Bias Value: Determining a selection bias value for each training example.
  • Importance Value: Determining an importance value for each training example based on its selection bias value.

Factors

  • Selection Bias: The process accounts for the bias in user selection based on the position of documents in search results.
  • Importance Value: Adjusts the training of the model based on the derived importance of each example, potentially improving the relevance of search results.

The patent addresses the concept of selection bias in the context of training ranking models for search engines. Selection bias is a critical factor that influences the accuracy and effectiveness of machine learning models, especially those used for ranking search results.

The patent details a method for determining selection bias by analyzing user interactions with search result documents at various positions in the result lists. The system receives experiment data, including user selections of experiment result documents from result lists. For each position in the experiment result lists, the system calculates a count of selections by users and determines a position bias value based on this count. This value represents the degree to which the position influenced the selection of the document. The position bias value corresponding to the position of the selected result document is assigned as the selection bias value for each training example.

The method includes training the ranking machine learning model using the respective importance values for the training examples, aiming to mitigate the effects of selection bias and leverage sparse click data more effectively.

By incorporating selection bias into the training process, the system can more accurately utilize sparse click data, reducing the impact of selection bias on the generated ranking scores. This leads to search results that better satisfy user informational needs.

The importance value for a given training example is a measure that defines how significant the example is in the process of training the ranking machine learning model. The importance value is determined for each training example based on its selection bias value. This can be an inverse relationship, where the importance value is inversely proportional to the selection bias value, meaning that examples with higher bias might be deemed less important in the training process to counteract their skewed influence.

The importance value is used to adjust the loss for each training example, generating an adjusted loss that reflects the example’s significance in the training process. This adjusted loss is then utilized in training the ranking machine learning model.

By determining and applying importance values, the training engine aims to reduce or eliminate the impact of position bias on the ranking scores generated by the model. This leads to a more accurate and fair ranking system that better reflects the true relevance of documents to search queries.
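
A minimal sketch of this idea with invented numbers: estimate a position bias value per result position from aggregate click counts, then weight each training example’s loss by the inverse of the bias value at the clicked position.

```python
import numpy as np

# Aggregate experiment data: clicks counted per result position.
clicks_per_position = np.array([1000, 620, 380, 210, 120])
position_bias = clicks_per_position / clicks_per_position[0]  # relative to top slot

# Training examples: (position of the clicked document, per-example loss).
examples = [(0, 0.40), (2, 0.55), (4, 0.70)]

total_loss = 0.0
for pos, loss in examples:
    importance = 1.0 / position_bias[pos]   # inverse of the selection bias value
    total_loss += importance * loss         # adjusted loss used for training

print(f"bias-adjusted training loss: {total_loss:.2f}")
```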

Implications for SEO

For SEO professionals, this patent underscores the importance of understanding how search engines might use machine learning to rank search results. Key takeaways include:

  • User Behavior: The selection and position of search results play a critical role in training ranking models, highlighting the importance of optimizing content to match user intent and improve its visibility.
  • Bias Mitigation: Understanding the biases in user selections can provide insights into optimizing content beyond traditional SEO factors.
  • Content Relevance: Enhancing the relevance and positioning of content could directly influence its selection by users, further informing SEO strategies to align with how search engines might prioritize content.

Towards Disentangling Relevance and Bias in Unbiased Learning to Rank

The study addresses the challenge of separating relevance and bias in the context of unbiased learning to rank (ULTR), focusing on mitigating the various biases that arise in implicit user feedback data such as clicks. It employs a two-tower architecture, popular in real-world applications, in which click modeling is divided into a relevance tower with regular input features and a bias tower with bias-relevant inputs such as the position of a document.

Inventors/Authors

  • Yunan Zhang, University of Illinois Urbana-Champaign
  • Le Yan, Zhen Qin, Honglei Zhuang, Jiaming Shen, Xuanhui Wang, Michael Bendersky, and Marc Najork from Google Research

Background

The study addresses the problem of bias in Unbiased Learning to Rank (ULTR), which is crucial for many real-world applications like search engines and recommendation systems. Implicit user feedback, such as clicks, often contains various biases (e.g., position bias), which can distort the training of ranking models. ULTR seeks to mitigate these biases to improve the fairness and accuracy of ranking outcomes.

Challenges

  • Confounding Bias: A significant issue identified is the confounding effect where the bias tower can be confounded with the relevance tower via the underlying true relevance. This is because positions are often determined by a previous production model, which likely contains relevance information.
  • Existing ULTR Methods: Previous methods in ULTR have not adequately addressed the confounding between relevance and bias, potentially leading to ineffective mitigation strategies.

Solutions

The study proposes two innovative solutions to mitigate the negative confounding effects and better disentangle relevance and bias:

  1. Adversarial Training Method: This method aims to disentangle the relevance information from the observation tower, ensuring that the relevance tower can learn unbiased relevance signals.
  2. Dropout Algorithm: A technique designed to zero out relevance-related neurons in the observation tower, further helping to separate relevance learning from bias (see the sketch after this list).
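
The two-tower factorization behind both solutions can be sketched as follows (a toy PyTorch model with invented dimensions, not the paper’s implementation; plain random dropout stands in here for the paper’s targeted algorithm, which zeroes out specifically relevance-correlated neurons):

```python
import torch
import torch.nn as nn

class TwoTowerClickModel(nn.Module):
    """Toy two-tower click model: P(click) combines a relevance tower
    over content features with an observation (bias) tower over
    position-like features."""

    def __init__(self, rel_dim=16, bias_dim=4, hidden=32, drop_p=0.5):
        super().__init__()
        self.relevance = nn.Sequential(
            nn.Linear(rel_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        # Dropout in the observation tower approximates the idea of
        # keeping relevance signal out of the bias tower.
        self.observation = nn.Sequential(
            nn.Linear(bias_dim, hidden), nn.ReLU(),
            nn.Dropout(p=drop_p), nn.Linear(hidden, 1))

    def forward(self, rel_feats, bias_feats):
        logit = self.relevance(rel_feats) + self.observation(bias_feats)
        return torch.sigmoid(logit)   # predicted click probability

model = TwoTowerClickModel()
p_click = model(torch.randn(8, 16), torch.randn(8, 4))  # batch of 8 examples
```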

Learnings and Insights

  • Effectiveness of Proposed Methods: Both proposed methods show promise in disentangling relevance and bias, as demonstrated by offline empirical results on controlled public datasets and a large-scale industry dataset. A live experiment on a popular web store also showed significant improvements in user clicks over the baseline, highlighting the practical applicability of these solutions.
  • Theoretical and Empirical Analysis: The study provides a comprehensive theoretical analysis and empirical evidence on the negative effects of confounding on relevance tower learning. It also explains why the proposed methods are effective in mitigating these effects.

Conclusion

The research makes significant contributions to the field of ULTR by identifying and addressing the critical issue of confounding between relevance and bias. The proposed solutions not only offer a theoretical foundation for understanding the problem but also provide practical tools for improving the fairness and accuracy of ranking systems in real-world applications.

Implications for SEO

The implications of this study for SEO revolve around the increasing sophistication of search algorithms in identifying and rewarding true relevance and quality. SEO professionals should focus on strategies that genuinely meet user needs and provide value, supported by a deep understanding of data and user behavior.

  • Emphasis on True Relevance:

The move towards disentangling relevance from bias in ranking algorithms underscores the increasing importance of genuine content relevance. For SEO, this means a shift away from tactics that exploit algorithmic biases (e.g., position bias) and a greater focus on creating content that truly matches user intent and provides value.

  • Quality and User Experience:

As search engines become better at identifying and compensating for biases in user feedback (like clicks), the quality of content and user experience will likely play an even more significant role in SEO. Websites and content that offer a superior user experience and meet the users’ informational or transactional needs more accurately could see improvements in their rankings.

  • Data-Driven SEO Strategies:

The methodologies proposed in the study, such as adversarial training and dropout algorithms for disentangling relevance from bias, highlight the importance of data-driven approaches in SEO. Understanding and leveraging data about how users interact with content can help in optimizing for true relevance and user satisfaction.

  • Increased Transparency and Fairness:

As search engines implement more sophisticated methods to reduce bias, we might see an increase in the transparency and fairness of search rankings. This could level the playing field for content creators and marketers, allowing high-quality content to surface based on merit rather than exploitation of algorithmic quirks.

  • E-E-A-T and Semantic Analysis:

The focus on mitigating bias and enhancing relevance aligns with Google’s emphasis on Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T) and semantic search. SEO strategies that prioritize in-depth, authoritative content and use semantic SEO techniques to match user queries with content meaning are likely to benefit.

  • Long-Term SEO Value:

Investing in high-quality, relevant content that addresses the user’s intent will continue to be a sustainable SEO strategy. As algorithms become more adept at recognizing and rewarding genuine relevance, the long-term value of such content is likely to increase.


End-to-End Query Term Weighting (TW-BERT)

The research paper “End-to-End Query Term Weighting” introduces an innovative method to enhance the performance of Information Retrieval (IR) systems by employing a model named Term Weighting BERT (TW-BERT).

  • Main Authors: Karan Samel (Georgia Tech) and Cheng Li, Weize Kong, Tao Chen, Mingyang Zhang, Shaleen Gupta, Swaraj Khadanga, Wensong Xu, Xingyu Wang, Kashyap Kolipaka, Michael Bendersky, Marc Najork (all from Google).

Background

  • Problem Statement: Lexical retrieval systems based on the Bag-of-Words approach are prevalent but hit their limits due to their inability to consider the context within a search query. Although deep learning methods have shown promising results in improving these systems, they are expensive to operate, difficult to integrate into existing systems, and may not generalize well to scenarios outside of their training domain.
  • Solution Approach: TW-BERT builds on existing lexical retrieval systems and learns to predict the weight for individual n-gram (e.g., uni-grams and bi-grams) search terms. These weights can be used directly by a retrieval system to conduct a search query.

Challenges

  • Integrating deep learning methods into existing production environments is complex and resource-intensive.
  • Traditional methods of weighting search terms do not consider the context of the entire query.
  • Generalizing to new, unknown domains is challenging.

Solutions

  • TW-BERT Model: A model that learns to predict weights for n-gram search terms by incorporating the scoring function used by the search engine scorer (e.g., BM25). This allows for the optimization of search term weights in an end-to-end manner (see the weighted BM25 sketch after this list).
  • Integration into Existing Systems: TW-BERT is designed to be integrated with minimal changes into existing production applications, unlike other deep learning-based search methods that would require extensive infrastructure optimizations.
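
To show where the learned weights plug in, here is an illustrative BM25 scorer with a per-term weight multiplier (corpus statistics and weights are invented; the real TW-BERT predicts weights for n-grams with a BERT encoder trained end to end against the retrieval scoring function):

```python
import math

def weighted_bm25(query_terms, term_weights, doc_tf, doc_len,
                  avg_doc_len, df, n_docs, k1=1.2, b=0.75):
    score = 0.0
    for term in query_terms:
        tf = doc_tf.get(term, 0)
        if tf == 0:
            continue
        idf = math.log(1 + (n_docs - df[term] + 0.5) / (df[term] + 0.5))
        norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
        # TW-BERT's contribution: a learned, context-dependent weight per
        # query term (or n-gram) scales the standard BM25 term score.
        score += term_weights[term] * idf * norm
    return score

query = ["solar", "panel", "cost"]
weights = {"solar": 1.4, "panel": 1.1, "cost": 0.6}  # e.g. predicted by TW-BERT
doc_tf = {"solar": 3, "panel": 2}
print(weighted_bm25(query, weights, doc_tf, doc_len=120, avg_doc_len=150,
                    df={"solar": 40, "panel": 55, "cost": 200}, n_docs=1000))
```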

Learnings

  • Performance Improvement: TW-BERT improves retrieval performance over strong baselines for term weighting within the MS MARCO dataset and in out-of-domain retrieval scenarios on TREC datasets.
  • Applicability: The weights learned by TW-BERT can be easily utilized by standard lexical retrieval systems and other retrieval techniques such as query expansion.

Methodological and Experimental Contributions

  • Methodological Contributions: Definition of the TW-BERT model, which provides n-gram weights for an input sequence of query tokens, and description of how a relevance score for a query-document pair is performed.
  • Experimental Contributions: Use of TW-BERT n-gram weights within an established lexical retriever platform to evaluate the system. TW-BERT shows improved performance in in-domain tasks and outperforms both traditional and deep learning-based retrievers in zero-shot retrieval scenarios.

This summary provides an overview of the key points and contributions of the research paper. The work demonstrates how combining traditional retrieval methods with modern deep learning techniques can significantly improve the efficiency and accuracy of search systems.

Implications for SEO

In summary, the development and application of models like TW-BERT have the potential to significantly influence SEO strategies, making them more sophisticated, context-aware, and aligned with the evolving capabilities of search engines. As these technologies become more integrated into SEO tools and practices, professionals in the field will need to stay informed and adaptable to leverage these advancements effectively.

  • Contextual Weighting: TW-BERT’s ability to predict the weight of individual n-gram search terms based on their context within a query can lead to a more nuanced understanding of user intent. For SEO, this means optimizing content not just for keywords but for the context in which those keywords are used, aligning content more closely with user intent.
  • Content Optimization: By understanding how search engines might weigh terms within queries, SEO professionals can better optimize content to match these weights, ensuring that content is more likely to be deemed relevant by search engines. This could involve focusing on specific n-grams or phrases that are weighted more heavily by models like TW-BERT.
  • Niche Optimization: For SEO practitioners working in niche domains or with specialized content, the ability of TW-BERT to improve retrieval performance in out-of-domain scenarios suggests that similar models could help these websites perform better in search results. This is particularly relevant for new or emerging topics where traditional keyword-based optimization may not be as effective.
  • Long-Tail Strategy: The use of TW-BERT for query expansion indicates that SEO strategies could benefit from a deeper focus on long-tail keywords and phrases. By understanding how terms are weighted and expanded upon in queries, SEO professionals can target a broader range of search queries, potentially capturing more traffic.

Regression Compatible Listwise Objectives for Calibrated Ranking with Binary Relevance

  • Main Authors: Aijun Bai, Rolf Jagerman, Zhen Qin, Le Yan, Pratyush Kar, Bing-Rong Lin, Xuanhui Wang, Michael Bendersky, Marc Najork
  • Institution: Google LLC

Background

  • LTR systems aim to improve ranking quality, but their output scores are not inherently scale-calibrated, limiting their use in scale-sensitive applications.
  • A combination of regression and ranking objectives can learn scale-calibrated scores, but these objectives are not necessarily compatible, leading to suboptimal outcomes.

Key Concepts

  • Learning to Rank (LTR): The goal is to train a model that can correctly rank unseen objects, focusing on improving ranking metrics.
  • Scale Calibration: Adjusting the output scores of LTR models to an external scale to improve their applicability in scale-sensitive applications.

Challenges

  • The need to develop LTR models that are effective in terms of both ranking metrics and regression metrics (for calibrating output scores).
  • The difficulty in finding a good balance between ranking quality and scale calibration without compromising performance in either area.

Solutions

  • Regression Compatible Ranking (RCR) Approach: A practical approach where the ranking and regression components are proven to be mutually aligned. This approach achieves a better compromise between the two goals.
  • The focus is mainly on binary labels, although the concept is also applicable to graded relevance.

In the specific context of the paper, regression components are used to ensure that the scores produced by a Learning-to-Rank (LTR) system are not only effective for ranking purposes but also calibrated to a certain scale that reflects the true magnitude or probability of the outcome being predicted. This is particularly important in applications where the absolute values of the predictions (and not just their relative order) are significant, such as in predicting click-through rates in online advertising.
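
The general multi-objective shape can be sketched as follows: a toy PyTorch mix of a regression loss and a listwise loss on the same logits. Note that the paper’s actual contribution is constructing the two components so that they share the same global optimum; this naive mix does not reproduce that property.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, labels, alpha=0.5):
    # Regression component: calibrates sigmoid(logit) toward the click
    # probability, yielding scale-calibrated scores.
    regression = F.binary_cross_entropy_with_logits(logits, labels)
    # Listwise ranking component: softmax cross-entropy over one
    # query's candidate list.
    log_probs = F.log_softmax(logits, dim=0)
    listwise = -(labels * log_probs).sum() / labels.sum().clamp(min=1.0)
    return alpha * regression + (1 - alpha) * listwise

logits = torch.tensor([1.2, -0.3, 0.4], requires_grad=True)  # one result list
labels = torch.tensor([1.0, 0.0, 1.0])                       # binary clicks
combined_loss(logits, labels).backward()
```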

Results and Learnings

  • The proposed method was evaluated using several public LTR benchmarks and consistently achieved either the best or competitive results in terms of both regression and ranking metrics.
  • A significant improvement in Pareto frontiers in the context of multi-objective optimization was observed.
  • When applied to YouTube Search, the approach not only improved the ranking quality of the production predicted Click-Through Rate (pCTR) model but also increased the accuracy of click prediction.
  • The approach has been successfully implemented in the YouTube production system.

This paper provides valuable insights into the development of LTR systems that are effective in terms of both ranking quality and scale calibration, an area that is particularly relevant to semantic SEO and modern search engine technology.

Implications for SEO

In summary, the implications of the research for SEO revolve around the integration of advanced machine learning models to better predict and optimize for both rankings and user engagement. As search engines continue to evolve, adopting a more sophisticated, data-driven approach to SEO will be crucial for success.

  • User Intent Matching: The calibration of scores to reflect the likelihood of user engagement (click-through rates, for example) suggests that aligning content closely with user intent is increasingly important. SEO strategies that focus on user intent and the probability of engagement can benefit from insights gained through regression-compatible ranking models.
  • Personalization: The ability to calibrate scores based on binary relevance (e.g., click or no click) can be extended to personalize search results or content recommendations on websites. For SEO, this means optimizing content not just for search engines but for specific segments of the audience, enhancing the user experience and potentially improving engagement metrics.
  • Content Quality and Relevance: The emphasis on both ranking and regression metrics underscores the importance of content quality and relevance. High-quality, relevant content is more likely to receive positive engagement signals, which can be accurately predicted and valued by advanced LTR systems. SEO strategies should continue to prioritize content that meets these criteria.
  • Data-Driven SEO: The methodologies discussed in the paper highlight the importance of a data-driven approach to SEO. By leveraging similar models that consider both ranking quality and engagement probability, SEO professionals can refine their strategies based on more nuanced insights into how content performs.
  • Advanced Analytics Tools: The adoption of regression-compatible objectives in ranking models may spur the development of more sophisticated SEO and analytics tools. These tools could offer deeper insights into how specific content elements correlate with both ranking positions and user engagement, enabling more targeted optimizations.

Leveraging Semantic and Lexical Matching to Improve the Recall of Document Retrieval Systems: A Hybrid Approach

This work provides valuable insights into improving document retrieval systems and underscores the potential of combining semantic and lexical models to enhance efficiency and effectiveness in the retrieval stage.

  • Saar Kuzi (University of Illinois at Urbana-Champaign)
  • Mingyang Zhang, Cheng Li, Michael Bendersky, Marc Najork (Google Research)

Background

  • Search engines often use a two-phase paradigm: In the first phase (retrieval stage), an initial list of documents is retrieved, and in the second phase (re-ranking stage), the documents are re-ranked to produce the final result list.
  • There is little literature on using deep neural networks to improve the retrieval stage, although their benefits for the re-ranking stage have been demonstrated.
  • Semantic matching goes beyond simple keyword matching by understanding the complex relationships between words, thus capturing the meaning or context of the query and the documents. This is crucial for retrieving relevant documents that may not contain the exact query terms but are related in context.
  • The approach is based on the premise that effective semantic models, especially in recent years, have been largely developed using deep neural networks. These models can understand the nuances of language, including synonyms, related terms, and context, which are often missed by lexical models.
  • The semantic retrieval model is built upon deep neural networks, specifically leveraging architectures like BERT (Bidirectional Encoder Representations from Transformers). BERT and similar models are pre-trained on vast amounts of text data, allowing them to understand complex language patterns and semantics.
  • For the retrieval process, the model generates embedding vectors for both queries and documents. These embeddings represent the semantic content of the text in a high-dimensional space, where the semantic similarity between a query and a document can be measured, typically using cosine similarity.
  • Improved Recall: By capturing the meaning behind the words, the semantic approach can retrieve a broader range of relevant documents, including those that do not share exact keywords with the query. This is particularly useful for addressing the vocabulary mismatch problem, where the query and relevant documents use different terms to describe the same concept.

  • Complementarity: The semantic model complements the lexical model by covering the gaps left by keyword-based retrieval. It can identify relevant documents that are semantically related to the query but would be missed by a purely lexical search.

Challenges

  • The retrieval stage aims to maximize the recall of retrieved relevant documents. Since this stage is performed against all documents in the collection, efficiency is a major requirement.
  • A retrieval based solely on a lexical model is likely not optimal, as it may fail to retrieve relevant documents that contain none of the query terms.
  • Recall vs. Precision: Semantic models tend to have lower precision at the top ranks compared to lexical models, as they retrieve a broader set of documents. The hybrid approach proposed in the paper aims to combine the strengths of both semantic and lexical models to improve overall recall without significantly compromising precision.
  • Efficiency: Running complex neural models for every query in real-time can be computationally expensive. The paper addresses this by using efficient techniques like approximate nearest neighbor (ANN) search to quickly find the most relevant document embeddings for a given query embedding.

Solutions

  • Hybrid Approach: Combining deep neural network models and lexical models for the retrieval stage. This approach leverages both semantic (based on deep neural networks) and lexical (keyword matching-based) retrieval models (a toy merge example follows this list).
  • Parallel Execution: Semantic and lexical retrievals are performed in parallel, and the two result lists are merged to create the initial list for re-ranking.
  • Weak Supervision Learning: Designing weakly supervised learning tasks to learn domain-specific knowledge for a new search scenario without needing access to large query logs.
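
A toy sketch of the parallel-retrieval-and-merge idea (scores, embeddings, and the merge rule are invented for illustration; the paper studies merging strategies in much more depth):

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query_emb = np.array([0.9, 0.1, 0.3])
doc_embs = {"d1": np.array([0.8, 0.2, 0.4]),
            "d2": np.array([0.1, 0.9, 0.2]),
            "d3": np.array([0.7, 0.0, 0.5])}

lexical = {"d1": 7.2, "d4": 5.1}   # e.g. BM25 scores from keyword matching
semantic = {d: cosine(query_emb, e) for d, e in doc_embs.items()}

def top_k(scores, k):
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Merge the two ranked lists: semantic retrieval recovers a relevant
# document (d3) that shares no exact terms with the query.
merged = list(dict.fromkeys(top_k(lexical, 2) + top_k(semantic, 2)))
print(merged)   # ['d1', 'd4', 'd3']
```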

Learnings

  • The empirical analysis demonstrates that the semantic approach can retrieve a large number of relevant documents not covered by the lexical approach.
  • By using a simple unsupervised approach for merging the result lists, significant improvements in recall can be achieved.
  • An exploration of the different characteristics of the semantically and lexically retrieved documents highlights the complementary nature of the two approaches.

Conclusions

  • The proposed hybrid document retrieval approach, leveraging lexical and semantic models, is efficient enough to be deployed in any commercial system.
  • The study emphasizes the importance of combining semantic and lexical approaches to improve recall in the retrieval stage and offers an effective end-to-end approach for weak supervision training in this phase.

Implications for SEO

The move towards semantic search represents a shift from a keyword-centric approach to a more nuanced understanding of content and user intent. For SEO professionals, this means adapting strategies to focus on semantic relevance, content quality, and the broader context in which search queries are made. By aligning SEO practices with the principles of semantic search, it’s possible to create more effective, user-centered content strategies that perform well in modern search engines.

  • Content Creation: The shift towards semantic search emphasizes the importance of creating content that is not just keyword-focused but also semantically rich and contextually relevant. This means focusing on topics, entities, and their relationships, rather than merely incorporating specific keywords.
  • Keyword Research: While traditional keyword research remains important, there’s a growing need to understand the broader topics and user intent behind search queries. Tools that provide insights into related topics, questions, and semantic relationships between terms will become increasingly valuable.
  • Understanding User Intent: SEO strategies must prioritize understanding the intent behind search queries. This involves categorizing content to match different stages of the user journey (informational, navigational, transactional) and ensuring that content addresses the underlying questions or needs.
  • Content Depth and Quality: High-quality, in-depth content that covers a topic comprehensively is likely to perform better in a semantic search landscape. This aligns with the E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) principles, as content that demonstrates expertise and covers related semantic fields is more likely to be deemed relevant.
  • Enhancing Semantic Understanding: Implementing structured data using schema markup helps search engines understand the context and semantics of the content on a webpage. This can improve content visibility for relevant queries, especially for voice search and conversational queries where semantic matching is crucial.
  • Rich Snippets and SERP Features: Proper use of schema markup can also lead to enhanced presentations in search results (e.g., rich snippets, knowledge graphs) which can improve click-through rates and visibility.
  • Semantic Link Building: The importance of link building remains, but there’s a shift towards creating semantically related links. This means focusing on linking from and to content that is contextually relevant, enhancing the semantic network around topics.
  • Internal Linking: Effective internal linking strategies that help search engines and users discover related content can reinforce the semantic relevance of content, improving site structure and SEO performance.

About Olaf Kopp

Olaf Kopp is Co-Founder, Chief Business Development Officer (CBDO) and Head of SEO at Aufgesang GmbH. He is an internationally recognized industry expert in semantic SEO, E-E-A-T, modern search engine technology, content marketing and customer journey management. As an author, Olaf Kopp writes for national and international magazines such as Search Engine Land, t3n, Website Boosting, Hubspot, Sistrix, Oncrawl, Searchmetrics, Upload … . In 2022 he was a top contributor for Search Engine Land. His blog is one of the best-known online marketing blogs in Germany. In addition, Olaf Kopp is a speaker on SEO and content marketing at conferences such as SMX, CMCx, OMT, OMX and Campixx.
