Author: Olaf Kopp
Reading time: 8 Minutes

How does Google evaluate E-E-A-T? 80+ signals for E-E-A-T


In 2022 I published for the first time an overview of E-E-A-T signals that Google could measure to evaluate E-E-A-T for domains, companies and authors. I had researched all signals from various Google sources and patents. The post was Search Engine Land’s second most popular article in 2022.

Since then, I have come across many more Google patents that provide clues to other possible E-E-A-T signals, so it is time for an update. I was able to identify over 80 possible signals from 40+ sources. All patents I refer to are in the database of my SEO Research Suite.

I keep hearing from SEOs who don’t believe that E-E-A-T has an impact on rankings and think it is just a buzzword. What needs to be understood is that Google likes to use PR terms such as Helpful Content or E-E-A-T to give its search product meaningful positive attributes. These names are just umbrella terms for many individual signals and algorithms that work independently of each other.

Google needs to identify and measure signals in order to put the E-E-A-T puzzle together and to surface more trustworthy resources in the SERPs algorithmically, so that this quality evaluation can be scaled up. This quality evaluation could also play a major role in choosing resources for training LLMs. That is why this research is so important for increasing knowledge and as a foundation for optimizing for E-E-A-T.

Neither the Google patents, the API leak nor the DOJ documents refer to E-E-A-T explicitly. During my research, I therefore focused on finding sources that mention quality, trust, authority or expertise.

More about E-E-A-T in my comprehensive guide.

Ranking dimensions on Google

In order to structure the 80+ signals for clarity, I decided to divide the signals into the ranking dimensions I developed:

  • Document-Level
  • Domain-Level
  • Source Entity Level

The dimensions of Google rankings have evolved over the years into a complex and multi-layered system. Search engine optimization used to focus primarily on optimizing individual documents. 

Google has increasingly introduced domain- and site-wide factors over the last 15 years. 

Today there are three evaluation levels: the document level, the domain level and the source entity level. Relevance-based factors such as keyword usage and content quality are evaluated at the document level. 

The domain level takes site-wide quality factors such as link profile and E-E-A-T into account. 

At the source level, the quality of the author entity is assessed according to E-E-A-T criteria. 

This multi-dimensional rating allows Google to classify content more comprehensively and deliver high-quality search results.

The signals covered in this article can be incorporated into one or more quality classifiers.

Classifiers do not perform scoring and award points; they classify source entities, domains and documents into classes such as spam, bad, medium or good. So, as confirmed by Google, there are no E-E-A-T scores, but rather classes.
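
To make the difference between scoring and classifying more tangible, here is a minimal, purely illustrative Python sketch. The feature names and thresholds are my own invention and do not come from any Google source; the only point is that the output is a class label, not a numeric E-E-A-T score.

```python
# Purely illustrative: maps aggregated signal values to a quality class
# instead of awarding a numeric "E-E-A-T score". Feature names and
# thresholds are invented for this example, not taken from Google.

from dataclasses import dataclass


@dataclass
class QualitySignals:
    original_content_ratio: float   # 0.0 - 1.0
    long_term_engagement: float     # normalized, query-independent
    inappropriate_content: bool


def classify(signals: QualitySignals) -> str:
    """Assign a class label rather than awarding points."""
    if signals.inappropriate_content:
        return "spam"
    if signals.original_content_ratio < 0.3:
        return "bad"
    if signals.original_content_ratio < 0.7 or signals.long_term_engagement < 0.5:
        return "medium"
    return "good"


print(classify(QualitySignals(0.85, 0.7, False)))  # -> "good"
```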

Overview: Signals for an E-E-A-T evaluation based on fundamental research

I have researched and compiled the following 80+ E-E-A-T signals from 39 Google patents, the whitepaper “How Google Fights Spam”, the Quality Rater Guidelines and other Google statements. I was supported by the AI Patent & Paper Analyzer from the SEO Research Suite.

E-E-A-T on Document Level (Document quality assessment)

Expertise & Experience:

    • Content originality: A high ratio of original to copied content (see the sketch after this list).
    • Comprehensive topic coverage: Satisfying both informational and navigational intents.
    • Relevance to alternative queries: The ability of content to rank for both initial and alternative queries may indicate expertise and authoritativeness on a topic.
    • Anchor text relevance: Contextually relevant anchor text and n-grams in links signal topical expertise.
    • Grammar and layout quality: Clean, professional presentation of content, which could signal expertise and professionalism.
    • Content length: In-depth resources covering topics comprehensively.
    • Frequency of updates: Regularly updated content to maintain relevance and accuracy.
    • Diversity of content types (text, video, images) that cater to different user preferences and engagement patterns.
    • Quotations and external outbound link references to authoritative sources: The number and quality of outlinks from a document to other authoritative sources may signal expertise and authoritativeness. Clean link profiles are important for establishing trustworthiness.
    • Entity relationships: Demonstrating clear, accurate relationships between entities in content could signal expertise and authoritativeness.
    • Co-occurrence patterns: Consistent, relevant co-occurrence of related entities in content may indicate topic expertise. Frequent, contextually appropriate mentions of an entity’s name in resources may also signal topical expertise.
    • Query-independent long-term user engagement with the document (e.g. CTR, dwell time)
    • High click ratios compared to other results for the same query, suggesting the page is viewed as more authoritative.
    • Consistent selection of a particular result for a query over time, implying sustained authority on the topic.
    • Association of multiple related queries with a single authoritative page, suggesting comprehensive topical coverage.
    • Search Term-Entity Selection Values: These calculated values reflect how well resources match user intent for specific search terms and entities.
    • Direct URL inputs: Boosting duration measurements for resources accessed via direct URL input, as this indicates a positive user assessment of quality. This could be seen as a signal of trust and authority.
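
As a small illustration of the first item in the list (content originality), here is a hedged sketch that estimates an originality ratio as the share of word 3-grams in a document that do not appear in a set of already known documents. This shingling approach is my own simplification for explanatory purposes, not Google’s actual method.

```python
# Hypothetical sketch: estimates an "originality ratio" as the share of
# word 3-grams in a document that do not appear in a set of known documents.
# This is my own simplification, not Google's method.

def shingles(text: str, n: int = 3) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def originality_ratio(document: str, known_documents: list[str]) -> float:
    doc_shingles = shingles(document)
    if not doc_shingles:
        return 0.0
    known = set().union(*(shingles(d) for d in known_documents)) if known_documents else set()
    original = doc_shingles - known
    return len(original) / len(doc_shingles)


corpus = ["the quick brown fox jumps over the lazy dog"]
print(round(originality_ratio("the quick brown fox eats fresh berries daily", corpus), 2))  # -> 0.67
```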

E-E-A-T at the document level represents helpful content and can be transferred site-wide or to individual website areas. The sum of high-quality, helpful content thus impacts the domain level.

E-E-A-T on Domain-Level (Site-wide or website area focused quality assessment)

Trustworthiness:

    • Domain reputation: Domain name considered as a feature for classification.
    • Association with verified business information: Consistent business details like addresses and phone numbers.
    • Reduced need for inference: By providing verified information directly, entities demonstrate transparency and trustworthiness.
    • Clean link profiles: Outbound links to authoritative sources signal trustworthiness.
    • Proximity to trusted seed sites: Short distances in link graphs to authoritative sites (see the sketch after this list).
    • Factual accuracy: Use of Knowledge-Based Trust (KBT) to assess the correctness of factual information.
    • Presence of inappropriate content: Negative signals impact domain trustworthiness.
    • Consistency across signals: Matching information across links, titles, and content.
    • Long-term user engagement sitewide: CTR and dwell time across the domain.
    • Match between domain name and business name: Domain names matching business names suggest official or authoritative status.
    • Use of HTTPS
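
To illustrate proximity to trusted seed sites, here is a small sketch that measures link-graph distance to a seed set with a breadth-first search. The graph, the seed list and the simplified handling of link direction are my own assumptions, not Google’s implementation.

```python
# Illustrative only: measures link-graph distance from a domain to a set of
# trusted "seed" sites with a breadth-first search. The graph and seed list
# are made up for this example; link direction is simplified.

from collections import deque


def distance_to_seeds(link_graph: dict[str, list[str]], start: str, seeds: set[str]) -> int | None:
    """Return the shortest number of link hops from `start` to any seed site."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node in seeds:
            return dist
        for neighbor in link_graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None  # not connected to any seed


graph = {
    "example-blog.com": ["university.edu"],
    "university.edu": ["gov-site.gov"],
}
print(distance_to_seeds(graph, "example-blog.com", {"gov-site.gov"}))  # -> 2
```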

Authoritativeness:

    • PageRank and link diversity: Varied, high-quality links from reputable sources (see the sketch after this list).
    • Historical data: Long-term consistency in rankings and site performance.
    • Network of related documents: Interlinked, relevant content boosting authority.
    • Domain-wide quality: Site-wide evaluation of authority, not just individual pages.
    • Matching topic-specific vocabulary: Using relevant terms and concepts for specific subject areas.
    • Consistent high rankings: Resources that consistently rank highly across various queries and query types (initial and alternative) may be seen as more authoritative.
    • Age of domain & content sitewide
    • Entity references: Resources that accurately and comprehensively cover relevant entities may be seen as more authoritative.
    • Consistency in being identified as a navigational resource for specific topics over time, building trust and authority.
    • Brand recognition: Queries directly referencing a site or brand may indicate its authority and trustworthiness in users’ minds
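
PageRank itself is well documented, so here is a compact power-iteration sketch on a tiny made-up link graph. It only illustrates the kind of link-based authority signal meant in the list above, not how Google computes it at scale.

```python
# Classic PageRank power iteration on a tiny made-up link graph; shown only
# to illustrate the type of link-based authority signal the list refers to.

def pagerank(links: dict[str, list[str]], damping: float = 0.85, iterations: int = 50) -> dict[str, float]:
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:            # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank


graph = {"a.com": ["b.com", "c.com"], "b.com": ["c.com"], "c.com": ["a.com"]}
print({k: round(v, 3) for k, v in pagerank(graph).items()})  # c.com collects the most rank
```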

Expertise & Experience (Helpful Content)

    • Topical focus and content originality: Original content that thoroughly covers relevant topics.
    • Content freshness: Regular updates and timely content revisions.
    • Category relevance: Strong performance in relevant content categories.
    • Broad vs. niche appeal: Whether a site has broad or niche appeal, and whether it demonstrates expertise in both niche and broad subject areas, may relate to topical expertise.
    • Presence of inappropriate content
    • Historical data: The use of historical data on site quality and past performance metrics suggests that long-term reputation, consistency and established expertise and authority over time are factored into E-E-A-T assessments.
    • Content relevance to frequently or rarely searched topics. This could signal expertise in specific subject areas, especially for niche or specialized content.
    • Comprehensive topic coverage: In-depth, well-structured content that covers a subject thoroughly and satisfies both informational and navigational intents, demonstrating depth of knowledge. Macro and micro contexts are examined to understand the depth and relevance of content on specific topics, which could indicate content quality.
    • First instance content – Content that is the first of its kind on a particular topic is valued more highly, indicating expertise and authoritativeness.
    • Matching topic-typical vocabulary: Lists of distinctive terms for different subject areas imply that search engines may evaluate site-wide topical focus and authority. Important terms and relevant entities are also considered, indicating that content should thoroughly cover key concepts and entities related to the topic (see the sketch after this list).
    • User behavior patterns: Transitions from informational to navigational queries signaling expertise.
    • Query-independent long-term user engagement site-wide or for website areas (CTR, dwell time)
      • Category relevance: The patent uses category-specific duration scores to evaluate websites. Performing well in relevant categories could signal expertise and authoritativeness in those topics.
      • Consistent performance across categories: Websites that perform well across multiple relevant categories may be seen as more authoritative and trustworthy overall.
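
To make the vocabulary-matching idea more concrete, here is a hedged sketch that measures how much of a hypothetical topical term list a site’s content actually covers. The term list and the simple word matching are my own assumptions for illustration.

```python
# A rough sketch of the idea behind "topic-typical vocabulary": compare the
# terms used across a site's pages with a list of distinctive terms for a
# subject area. The vocabulary list here is invented for the example.

from collections import Counter


def vocabulary_coverage(pages: list[str], topic_terms: set[str]) -> float:
    """Share of the topic vocabulary that actually occurs in the site's content."""
    terms_used = Counter(word for page in pages for word in page.lower().split())
    covered = {t for t in topic_terms if terms_used[t] > 0}
    return len(covered) / len(topic_terms) if topic_terms else 0.0


cardiology_terms = {"arrhythmia", "stent", "ecg", "myocardial", "hypertension"}
site_pages = ["ECG interpretation basics and arrhythmia types", "How a stent is placed"]
print(vocabulary_coverage(site_pages, cardiology_terms))  # -> 0.6
```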

E-E-A-T on Source Entity Level

Trustworthiness

  • Authentication of contributors: Verifying personal information of content creators.
  • Reputation and credibility history: An entity’s track record for providing reliable and accurate information influences its authorship score.
  • Sentiment of mentions and ratings: A consensus sentiment score for the entity (see the sketch after this list).
  • Peer influence and endorsements: Reviews or endorsements from reputable authors.
  • Trust relationships between entities: Calculating trust ranks based on perceived trustworthiness by other entities, indicating a network of trust that could relate to authority and expertise.
  • Contribution Metric: Based on critical reviews and fame rankings, this metric likely rewards entities and content creators who have made significant contributions in their field.
  • Neighbor quality: The quality of linked or related entities influences an entity’s score, suggesting that authoritative connections boost E-E-A-T signals. Affiliations among documents are also considered, potentially indicating that content from established authors or reputable sources may be valued more highly.
  • Frequency of publication of high-quality content is noted as a factor in improving reputation scores.
  • Verified credentials: Educational background, professional experience, and other verified credentials are described as contributing to an author’s credibility factor.
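
As an illustration of a consensus sentiment score, here is a toy sketch that averages the polarity of entity mentions using a tiny hand-made lexicon. Real systems would use trained sentiment models; the lexicon and the mentions are invented for this example.

```python
# Toy sketch of a "consensus sentiment" for an entity: average the polarity of
# its mentions using a tiny hand-made lexicon. The lexicon and mentions are
# invented; real systems would rely on trained sentiment models.

POSITIVE = {"trusted", "accurate", "recommended", "excellent"}
NEGATIVE = {"misleading", "scam", "inaccurate", "poor"}


def consensus_sentiment(mentions: list[str]) -> float:
    """Return a value between -1 (negative consensus) and 1 (positive consensus)."""
    scores = []
    for mention in mentions:
        words = set(mention.lower().split())
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        scores.append(max(-1, min(1, score)))
    return sum(scores) / len(scores) if scores else 0.0


mentions = ["A trusted and accurate resource", "Recommended by practitioners", "Some sections felt poor"]
print(round(consensus_sentiment(mentions), 2))  # -> 0.33
```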

Authoritativeness

  • Entity references in authoritative sources: Frequency and accuracy of mentions in authoritative sources.
  • Prize metric: Awards and recognition associated with the entity signal authority, expertise and achievement.
  • Content citation frequency: How often an entity’s content is cited by other credible sources contributes to its authorship score, signaling expertise and trustworthiness.
  • Publication history: Volume and consistency of content contributions over time.
  • Contribution metric: Significant contributions in the entity’s field, based on critical reviews.
  • Brand recognition: Queries specifically referencing the entity.
  • Matching anchor text with business names suggests recognition and authority for that entity.
  • Presence in authoritative structured online databases and encyclopedias could be an E-E-A-T signal.
  • Long-term consistency: Historical data integrity and consistency of an author’s identity across platforms are mentioned as factors in authentication scores.
  • Notable Type Metric: This combines global popularity with the importance of the entity type, suggesting that well-known entities in prominent categories may be favored.
  • Author associations with topics based on claimed authorship and user engagement with their content, suggesting expertise in specific areas.
  • User session data: By including user session nodes in a graph, the system may evaluate how a resource fits into broader user research patterns, possibly signaling its authoritativeness within a topic area.
  • Number of contents published on a topic by source entity
  • Popularity of the source entity
  • Number of backlinks / references to the source entity
  • Proportion of content that a source entity has contributed to a topical document corpus (see the sketch after this list)
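
To illustrate the last item in the list, here is a minimal sketch that computes the share of a topical document corpus contributed by a single source entity. The data structures are made up for the example.

```python
# Illustrative sketch: the share of documents in a topical corpus that were
# published by a given source entity. Data structures are invented for the example.

from dataclasses import dataclass


@dataclass
class Document:
    topic: str
    author: str


def topical_contribution_share(corpus: list[Document], author: str, topic: str) -> float:
    topical_docs = [d for d in corpus if d.topic == topic]
    if not topical_docs:
        return 0.0
    return sum(1 for d in topical_docs if d.author == author) / len(topical_docs)


corpus = [Document("semantic-seo", "olaf"), Document("semantic-seo", "anna"),
          Document("semantic-seo", "olaf"), Document("link-building", "ben")]
print(round(topical_contribution_share(corpus, "olaf", "semantic-seo"), 2))  # -> 0.67
```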

Expertise & Experience:

  • First instance content: Pioneering content on a specific topic.
  • Subject matter relevance: The alignment of the author’s expertise with the content topic.
  • Publication history – The volume, consistency, and diversity of an entity’s content contributions over time are considered, potentially indicating experience and expertise.
  • Time since the source entity’s last publication on a topic

This overview is a good starting point for orientation. I would be very grateful if you shared this knowledge and motivated me to keep updating it regularly. Thanks!

About Olaf Kopp

Olaf Kopp is Co-Founder, Chief Business Development Officer (CBDO) and Head of SEO & Content at Aufgesang GmbH. He is an internationally recognized industry expert in semantic SEO, E-E-A-T, LLMO, AI and modern search engine technology, content marketing and customer journey management. As an author, Olaf Kopp writes for national and international magazines such as Search Engine Land, t3n, Website Boosting, Hubspot, Sistrix, Oncrawl, Searchmetrics, Upload … . In 2022 he was a top contributor for Search Engine Land. His blog is one of the most famous online marketing blogs in Germany. In addition, Olaf Kopp speaks on SEO and content marketing at SMX, SERP Conf., CMCx, OMT, OMX, Campixx...

Comments

  • Satish Kumar Matta

    06.11.2024, 18:06

    I thought the Agent Rank was a primary source of EEAT by associating content quality and experience with authors and publishers. Looks like there’s a bigger network of signals which help the system. Thank you for sharing.

    • Olaf Kopp

      06.11.2024, 18:36

      All Agent Rank patents from Google are very old and have also expired. It is therefore unlikely that they are still used. I only analyse patents with active status. There are a lot of theories out there based on outdated and expired patents.
