Helpful content: What does Google really evaluate?
Since the first Helpful Content Update in 2022, the SEO world has been thinking about how to create helpful content or how to optimize for it. Hypotheses are put forward; analyses, checklists and audits are created. I don’t find most of these approaches useful, because they are derived from the perspective of a human, not of a machine or algorithm.
Google is a machine, not a human!
My SEO mantra is: “Think like an engineer, act like a human.”
The focus here is often on the nature of the content. But does Google really evaluate content according to helpfulness?
With this article I would like to invite you to a discussion.
Helpful content, what is it anyway?
Helpful content is a term that Google introduced as part of the first Helpful Content Update in August 2022. Google initially announced that the Helpful Content System was a “sitewide classifier”. It was later announced that it would also be used to rate individual documents.
Our helpful content system is designed to better ensure people see original, helpful content written by people, for people, in search results, rather than content made primarily to gain search engine traffic.
Our central ranking systems are primarily designed for use at page level. Various signals and systems are used to determine the usefulness of individual pages. There are also some website-wide signals that are also taken into account.
I already argued after the first Helpful Content Update that it was primarily a PR update, and not only because of its meaningful title. You can read my reasoning and criticism in detail here.
One of Google’s PR goals is to encourage website operators to make crawling, indexing and therefore ranking easier. At least that was the aim of the biggest updates such as the Page Speed Update, the Page Experience Update, the Spam Updates … These updates have one thing in common: their concrete, meaningful titles imply a recommendation for action and thus help Google with information retrieval.
I would have preferred to call the Helpful Content System a “User Satisfaction System”. But more on that later.
What is helpful?
To answer this question, you should take a closer look at the information retrieval terms relevance, pertinence and usefulness. As described in my article “Relevance, pertinence and quality in search engines“, these terms can be defined as follows:
Something is relevant to search engines if a document or content is significant in relation to the search query. The search query describes the situation and the context. Google determines this relevance using text-analysis methods such as BM25, TF-IDF, Word2Vec …
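To make this concrete, here is a minimal sketch of BM25 scoring in Python. The parameters k1 and b are common defaults and the toy corpus is made up; this illustrates the scoring family, not Google’s actual implementation:

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    """Score one document against a query with BM25.

    corpus: list of tokenized documents, used for IDF and length stats.
    """
    n_docs = len(corpus)
    avg_len = sum(len(d) for d in corpus) / n_docs
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        # document frequency: in how many documents does the term occur?
        df = sum(1 for d in corpus if term in d)
        if df == 0:
            continue
        idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1)
        # term frequency, dampened and normalized by document length
        norm_tf = (tf[term] * (k1 + 1)) / (
            tf[term] + k1 * (1 - b + b * len(doc_terms) / avg_len)
        )
        score += idf * norm_tf
    return score

# toy example: query "helpful content" against three tiny documents
corpus = [
    "google rates helpful content for people".split(),
    "content made for search engines only".split(),
    "helpful content is written for people not for rankings".split(),
]
query = "helpful content".split()
for doc in corpus:
    print(" ".join(doc), "->", round(bm25_score(query, doc, corpus), 3))
```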
Pertinence describes the subjective importance of a document for the user. This means that a subjective user level is added on top of the match with the search query.
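Pertinence is harder to compute because it needs a user model. Here is a minimal sketch of one conceivable approach, blending an objective relevance score with the cosine similarity between a document embedding and a hypothetical user interest profile; the formula and the weighting alpha are my illustrative assumptions, not Google’s method:

```python
import numpy as np

def pertinence(relevance, doc_vec, user_vec, alpha=0.7):
    """Blend query relevance with a subjective user fit.

    user_vec could be an average embedding of content the user has
    engaged with before; alpha weights objective relevance against
    the subjective match. Purely illustrative.
    """
    # cosine similarity between the document and the user's interest profile
    fit = float(np.dot(doc_vec, user_vec) /
                (np.linalg.norm(doc_vec) * np.linalg.norm(user_vec)))
    return alpha * relevance + (1 - alpha) * fit

# toy example with 3-dimensional "embeddings"
doc = np.array([0.9, 0.1, 0.3])
user = np.array([0.8, 0.2, 0.1])
print(round(pertinence(relevance=0.6, doc_vec=doc, user_vec=user), 3))
```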
In addition to the conditions for relevance and pertinence, usefulness also requires novelty: a document that is relevant and even pertinent, but tells the user nothing they did not already know, is not useful.
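Novelty can be operationalized with Maximal Marginal Relevance (MMR), a classic re-ranking heuristic from information retrieval. Using MMR here is my illustration of the idea, not a claim about Google’s systems:

```python
import numpy as np

def usefulness_scores(doc_vecs, relevances, seen_vecs, lam=0.6):
    """MMR-style trade-off: reward relevance, penalize similarity to
    documents the user already knows (novelty filter)."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = []
    for vec, rel in zip(doc_vecs, relevances):
        # redundancy = closeness to the most similar already-seen document
        redundancy = max((cos(vec, s) for s in seen_vecs), default=0.0)
        scores.append(lam * rel - (1 - lam) * redundancy)
    return scores

# toy example: the second document is nearly identical to one already seen
docs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
seen = [np.array([0.1, 1.0])]
print(usefulness_scores(docs, relevances=[0.8, 0.9], seen_vecs=seen))
```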
For me, pertinence and usefulness are the two levels that stand for helpfulness.
How can you algorithmically measure helpfulness, pertinence and usefulness?
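If, as I argue above, the Helpful Content System is better understood as a user satisfaction system, one conceivable proxy could be built from interaction signals such as dwell time and returns to the SERP. A minimal sketch, assuming hypothetical log data; the threshold and the metric itself are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Visit:
    dwell_seconds: float
    returned_to_serp: bool  # did the user bounce back to the results page?

def satisfaction_proxy(visits, long_click_threshold=30.0):
    """Hypothetical user-satisfaction proxy: share of 'long clicks'
    (visits with enough dwell time and no immediate return to the
    SERP). Purely illustrative, not a known Google metric."""
    if not visits:
        return 0.0
    good = sum(
        1 for v in visits
        if v.dwell_seconds >= long_click_threshold and not v.returned_to_serp
    )
    return good / len(visits)

# example: two satisfied visits out of three
visits = [Visit(45.0, False), Visit(8.0, True), Visit(120.0, False)]
print(round(satisfaction_proxy(visits), 3))
```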
Simon
22.07.2024, 02:14
Olaf, thank you for another informative article. So just to be clear, is your view that AI writers that analyze entities contained in the top-ranking results and seek to add these to an article are just a waste of time?
Another question: is there a place for a tool that measures user interaction on the page and comes up with some sort of helpfulness metric to guide owners as to the helpfulness of content?
Olaf Kopp
22.07.2024, 08:06
Hi Simon, no. You have to differentiate between the different steps of ranking and the different ranking systems. The Helpful Content System is one of them and is part of re-ranking. In my opinion, helpful content is a quality classifier that is activated in the re-ranking process. The initial ranking happens in the Ascorer or scoring process, and here content-based relevance signals are important.
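A minimal sketch of what such a two-stage pipeline could look like; relevance_score and quality_classifier are hypothetical stand-ins, not Google’s actual functions:

```python
def relevance_score(query, doc):
    # hypothetical stand-in: fraction of query terms found in the document
    terms = query.lower().split()
    return sum(t in doc.lower() for t in terms) / len(terms)

def quality_classifier(doc):
    # hypothetical stand-in: a trained model would return a
    # helpfulness probability here; we fake one from document length
    return min(1.0, len(doc.split()) / 50)

def rank(query, documents, top_k=100):
    """Two-stage sketch: content-based scoring first, then re-ranking
    where a quality classifier adjusts the order of the candidates."""
    # stage 1: initial ranking purely on relevance signals
    candidates = sorted(documents, key=lambda d: relevance_score(query, d),
                        reverse=True)[:top_k]
    # stage 2: re-ranking, demoting pages the classifier deems unhelpful
    return sorted(candidates,
                  key=lambda d: relevance_score(query, d) * quality_classifier(d),
                  reverse=True)

docs = [
    "helpful content",
    "a detailed, original article about helpful content written for "
    "people, with examples, reasoning and first-hand experience",
]
print(rank("helpful content", docs, top_k=2))
```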
Lee Stuart
23.08.2024, 05:10
Olaf, thanks for the interesting and reasoned view. I was wondering what your view is on this now that, in the latest core update, it appears that some sites previously heavily impacted by the HCU have recovered. The point of contention is that user signals have been next to zero for a long time for some of these. So do you think that G is using historical data beyond its normal look-back period, or is there another re-ranking component added, or perhaps some kind of manual intervention? Interested to hear your thoughts.
Olaf Kopp
23.08.2024, 08:22
Hi Lee, good question. The Helpful Content System is only one part of the Ranking Core. Other systems and concepts are e.g. E-E-A-T. Adjustments to the search intents can also have an influence. I think you can find at least as many examples of websites that have not recovered. This is the problem with the analysis of core updates. You will never get a complete overview, but only focus on examples that support your theses. You are subject to confirmation bias.