LLMO / Generative Engine Optimization (GEO): How do you optimize for the answers of generative AI systems?
As more and more people prefer to ask ChatGPT rather than Google when searching for products, companies need to ask themselves: how do we show up at the top of the list?
In this article, I shed light on the technical background. It becomes clear that many questions in this area still have no definitive answers. However, established branding methods seem to work here too.
Contents
- 1 What is LLMO? What is GEO?
- 2 Goals of LLMO / Generative Engine Optimization
- 3 Success metrics for LLMO / GEO
- 4 What other terms are there for Large Language Model Optimization (LLMO) and Generative Engine Optimization (GEO)?
- 5 A look at the current shift to generative AI
- 6 Which questions do we have to answer before starting LLMO / GEO?
- 7 LLMO / GEO strategies
- 8 The impact of SEO for LLMO / GEO and visibility in generative AI
What is LLMO? What is GEO?
LLMO (Large Language Model Optimization) or Generative Engine Optimization (GEO) refers to the practice of optimizing content as well as authority and reputation signals to gain better visibility in AI-generated answers on AI platforms such as ChatGPT, Perplexity, Gemini, Microsoft Copilot or Google AI Overviews.
Goals of LLMO / Generative Engine Optimization
The goal of LLMO is to improve the visibility in the responses of generative AI, as used by systems such as ChatGPT, Microsoft Copilot, Perplexity, Gemini or Claude. This goal can be achieved in two ways:
- Having your own content linked and mentioned among the referenced sources.
- Having your own brand, company or offers/products mentioned or recommended in the AI-generated results.

Success metrics for LLMO / GEO
In a world where more and more users rely on AI systems like ChatGPT and AI features like Google AI Overviews to get the information they need with fewer clicks through to websites, we have to rethink how we measure the success of our optimization for search engines and AI platforms. Rankings, organic clicks and search engine visibility should no longer be the sole focus in a generative AI world.
Here are some ideas to track instead:
- Brand popularity: brand mentions in AI-generated answers and online documents.
- Referral traffic from AI-driven platforms like ChatGPT, Perplexity, Gemini, Copilot, Google AI Overviews, Bing AI and search results.
- CTR in AI-driven platforms like ChatGPT, Perplexity, Gemini, Copilot, Google AI Overviews, Bing AI and search results.
- Brand-context match: number of co-occurrences of brand + attributes/entities in AI-generated answers, documents, prompts and queries.
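
Referral traffic from AI platforms can be approximated by tagging referrer hostnames in your log or analytics data. The sketch below is a minimal example; the hostnames in the mapping are assumptions that change over time and should be verified against your own logs.

```python
from urllib.parse import urlparse

# Example referrer hostnames for AI platforms (assumptions -- verify
# against your own log data, as these change over time).
AI_REFERRERS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "www.perplexity.ai": "Perplexity",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Copilot",
}

def classify_referrer(referrer_url: str) -> str:
    """Map a referrer URL to an AI platform, or 'other'."""
    host = urlparse(referrer_url).netloc.lower()
    return AI_REFERRERS.get(host, "other")

visits = [
    "https://chatgpt.com/",
    "https://www.perplexity.ai/search/abc",
    "https://www.google.com/",
]
print([classify_referrer(v) for v in visits])
# -> ['ChatGPT', 'Perplexity', 'other']
```

Segmenting traffic this way makes AI-driven referrals visible as their own channel next to organic search.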

What other terms are there for Large Language Model Optimization (LLMO) and Generative Engine Optimization (GEO)?
In addition to the terms Large Language Model Optimization (LLMO) and Generative Engine Optimization (GEO), other terms have become established in the discussion about optimizing content for Large Language Models (LLMs) and generative AI systems. These reflect the different perspectives and focuses in this developing field. Here are some common alternatives:
- Generative AI Optimization (GAIO): This term emphasizes the broader category of generative artificial intelligence and thus includes not only pure language models, but potentially other generative AI applications as well. This term was first mentioned by Philipp Klöckner in the Doppelgänger podcast, but also has the disadvantage that the abbreviation is not unique.
- AI Optimization (AIO): An even more general term that encompasses optimization for AI systems in general, regardless of whether they are language models or other AI technologies. This term is general enough to remain usable even if language models are superseded. However, neither the abbreviation nor the full name is unambiguous.
- Answer Engine Optimization (AEO): This term focuses on the goal of being prominent in AI-generated answers. It emphasizes the role of AI as an “answer engine” for user queries. I find this term too imprecise in terms of content, as it can be used to refer to everything and nothing, and the abbreviation is also ambiguous.
- AI SEO: A direct analogy to traditional Search Engine Optimization (SEO), but with a focus on the specifics of AI-powered search results.
- Content Optimization for AI: A descriptive term that clearly outlines the core of the topic, namely the adaptation of content to be better understood and used by AI systems.
- Digital Authority Management: This broader approach looks at the need to manage a brand’s digital authority and reputation so that it is also perceived positively in the results of AI systems. The concept was originally invented in 2021 by Olaf Kopp (Aufgesang) with a view to optimizing E-E-A-T.
The term LLMO was one of the first, along with GEO and GAIO, and I used it in my first post on Search Engine Land on the topic in October 2023. (https://searchengineland.com/large-language-model-optimization-generative-ai-outputs-433148) The advantage of this term is that the abbreviation is also unambiguous. In other words, there is no confusion due to ambiguity. The disadvantage is that if language models are replaced by another technology in the future, the term will no longer fit.
GEO = Generative Engine Optimization was coined in the research paper “GEO: Generative Engine Optimization”. https://www.kopp-online-marketing.com/patents-papers/geo-generative-engine-optimization. This term is more general and can still be used after an era of language models. The disadvantage is the ambiguity of the abbreviation.

A look at the current shift to generative AI
Applications based on generative AI are taking the world by storm. Especially for researching information and answering questions quickly, services such as ChatGPT, Gemini and others could represent serious competition for search engines such as Google – at least in part.
Basically, we should differentiate between:
- Website clicks from the search results (SERPs) or traffic that could be lost,
- and a possible reduction in general usage or a reduction in search queries.
According to a study by Gartner, search engine usage will fall by 25% by 2026 in favor of AI chatbots and virtual assistants.
Personally, I don’t believe that we will see such a shift by 2026. Nevertheless, I believe that future generations will increasingly rely on AI chatbots for researching information and products.
So I think 25% and more is quite realistic, but in five to ten years rather than two. It will be a slower but steady development. User habits remain habits!
I see the reduction in search engine traffic on websites coming our way much faster. With the introduction of SGE (now “AI Overviews”), I expect a reduction in search engine traffic of up to 20% on average in the first two years of introduction. Depending on the search intention, it could be more or less. However, I am sure that “no-click searches” will increase because generative AI is already providing the solutions and answers.
This will shorten the time in the research journey and customer journey or messy middle.

To generate awareness in the user journey, it would also be negligent to focus solely on search engine rankings and clicks or website traffic as a result.
If you ask ChatGPT today for a car that should fulfill certain characteristics, it will suggest specific models:

If you ask Gemini the same question, certain car models are also suggested, including pictures:

Interestingly, the example above recommends different car models depending on the application.
ChatGPT:
- Tesla Model Y
- Toyota Highlander Hybrid
- Hyundai Ioniq 5
- Volvo XC90 Recharge
- Ford Mustang Mach-E
- Honda CR-V Hybrid
Gemini:
- Chrysler Pacifica Hybrid
- Toyota Sienna
- Mid-Size SUVs in general
- Toyota RAV4 Hybrid
- Row SUVs
- Toyota Highlander Hybrid
This makes it clear that the underlying language model (Large Language Model, LLM) works differently depending on the AI application.
In future, it will become increasingly important for companies to be named in recommendations such as these in order to be included in a relevant set of possible solutions.
But why exactly are these models proposed by generative AI?
In order to answer this question, we need to understand a little more about the technological functioning of generative AI and LLMs.
Excursus: How LLMs work
Modern transformer-based LLMs such as GPT or Gemini are based on a statistical analysis of the co-occurrence of tokens or words.
For this purpose, texts and data are broken down into tokens for machine processing and positioned in semantic spaces with the help of vectors. Vectors can also represent whole words (Word2Vec), entities (Node2Vec) and attributes.
In semantics, a semantic space is also described as an ontology. Since LLMs are based more on statistics than on real semantics, they are not ontologies. However, thanks to the highly scalable amount of data that can be processed, AI comes closer to semantic understanding.
The semantic proximity can be determined by the Euclidean distance or the cosine angle measure in the semantic space:

In this way, the relationship between products and attributes can be established.
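Both distance measures mentioned above are straightforward to compute. The sketch below uses tiny, invented three-dimensional vectors purely for illustration; real embeddings have hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def euclidean_distance(a, b):
    """Straight-line distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy 3-dimensional embeddings (illustrative values only).
suv = [0.9, 0.8, 0.1]
hybrid = [0.8, 0.9, 0.2]
sneaker = [0.1, 0.2, 0.9]

print(cosine_similarity(suv, hybrid))   # high: semantically close
print(cosine_similarity(suv, sneaker))  # low: semantically distant
```

Terms that are semantically close, such as “SUV” and “hybrid” in this toy example, end up with a high cosine similarity and a small Euclidean distance.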
LLMs determine the relationship as part of encoding using natural language processing. This allows texts broken down into tokens to be divided into entities and attributes.
The number of co-occurrences of certain tokens increases the probability of a relationship between these tokens.
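A simple way to make this co-occurrence idea concrete is to count how often pairs of terms appear near each other in a corpus. This is a naive sketch with whitespace tokenization and a fixed token window; the corpus and window size are assumptions for illustration.

```python
from collections import Counter
from itertools import combinations

def cooccurrences(texts, terms, window=10):
    """Count, per text, whether pairs of terms appear within
    `window` tokens of each other."""
    counts = Counter()
    for text in texts:
        tokens = text.lower().split()
        positions = {t: [i for i, tok in enumerate(tokens) if tok == t]
                     for t in terms}
        for a, b in combinations(terms, 2):
            for i in positions[a]:
                if any(abs(i - j) <= window for j in positions[b]):
                    counts[(a, b)] += 1
    return counts

corpus = [
    "the tesla model y is a popular electric family suv",
    "for a hybrid family car the toyota highlander is often recommended",
]
print(cooccurrences(corpus, ["tesla", "electric", "family"]))
```

The more often a brand co-occurs with an attribute across the training data, the stronger the statistical association an LLM can learn between them.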
Language models are initially trained on large amounts of human-written text. This initial training data consists of web crawl databases, other databases, books, Wikipedia and more. The vast majority of the data used to train state-of-the-art LLMs is therefore text from publicly accessible internet resources (e.g. the latest “Common Crawl” dataset, which contains data from more than three billion pages).
It is not clear exactly which sources are used for the initial crawl.
In order to reduce hallucinations and make deeper specific subject knowledge accessible to the LLM, modern LLMs are additionally supported with content from domain-specific sources. This process takes place as part of Retrieval Augmented Generation (RAG).

Graph databases such as the Google Knowledge Graph or Shopping Graph can also be used in the context of RAG to develop a better semantic understanding.
RAG follows a two-step process:
- Retrieval: First, a search query is made to an external database to find relevant information. This database can include collections of texts, structured data or knowledge graphs.
- Augmentation: The retrieved information is then fed as context into the generative model, which generates a detailed and informed response based on both its pre-trained knowledge and the retrieved information.
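
The two-step process above can be sketched in a few lines. This is a toy illustration: real RAG systems use vector search or a search-engine index for retrieval, not the naive term-overlap scoring assumed here, and the documents are invented examples.

```python
def retrieve(query, documents, k=2):
    """Step 1 -- Retrieval: score documents by naive term overlap
    with the query and return the top-k."""
    q_terms = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def augment(query, retrieved):
    """Step 2 -- Augmentation: prepend the retrieved passages as
    context so the generative model can ground its answer."""
    context = "\n".join(f"- {doc}" for doc in retrieved)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The Google core update March 2024 targeted unhelpful content.",
    "Cosine similarity measures the angle between embedding vectors.",
    "RAG grounds LLM answers in retrieved documents.",
]
prompt = augment("What did the google core update march 2024 change?",
                 retrieve("google core update march 2024", docs))
print(prompt)
```

The assembled prompt is what the generative model actually sees: its own pre-trained knowledge plus the retrieved passages as grounding context.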
RAG significantly changes how content gets surfaced in AI responses:
Source Prioritization
RAG systems typically prioritize content from:
- Timely and authoritative news websites
- Reputable industry publications
- Established knowledge platforms
- Discussion forums
- Knowledge graphs
- Sources that rank well in the respective underlying retrieval system (Google, Bing …)
- Content that is easy to understand and process by LLMs

Content Quality Requirements
- Content must be highly relevant and authoritative to be retrieved
- Information should be clearly structured for easy extraction
- Factual accuracy becomes more important than keyword optimization
Visibility Strategy Implications
- “Retrievability is the key to visibility in AI search”
- Brands must optimize their presence across sources that RAG systems frequently retrieve from
- Content needs to be not just findable but also extractable and contextually relevant
Addressing Traditional LLM Limitations
- RAG helps mitigate hallucinations in LLMs by grounding responses in retrieved information
- It enables more up-to-date information to be included in responses
- Content that helps AI models address these limitations may be prioritized
LLMO, GEO, GAIO as a new discipline for influencing generative AI
The big challenge for companies will be to play a role not only in the familiar search engines but also in the output of language models, be it in the form of source references including links or through mentions of their own brand(s) and products.
Influencing the output of generative AI is a previously unexplored field of research. There are several theories and many names such as Large Language Model Optimization (LLMO), Generative Engine Optimization (GEO), Generative AI Optimization (GAIO).
Reliable evidence for optimization approaches from practice is still scarce. This leaves only the derivation from the technological understanding of LLMs.
Establishment as a thematically trustworthy and relevant source for non-commercially driven prompts
In the case of non-commercial prompts, the most important goal is to be named as a source, including a link to your own website.
It would be logical for AI systems with direct access to search engines to refer to the best-ranking content when compiling an answer.
Here is an example of a prompt: “google core update march 2024”

The sources mentioned in Copilot are:
- searchengineland.com
- coalitiontechnologies.com
- seroundtable.com
The rankings in the classic results (without videos and news) for the corresponding search query on Bing are as follows:
- searchengineland.com
- blog.google
- searchenginejournal.com
- yoast.com
- developers.google.com
- semrush.com
- …
Some of the sources show overlaps with the search results, but not all.
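This overlap can be quantified by comparing the set of AI-cited domains with the classic rankings. The sketch below reuses the Copilot and Bing lists from the example above; the overlap metric itself (share of cited sources that also rank) is my own illustrative choice.

```python
def source_overlap(cited, ranked):
    """Return the share of AI-cited sources that also appear in the
    classic search rankings, plus the intersection itself."""
    cited_set, ranked_set = set(cited), set(ranked)
    shared = cited_set & ranked_set
    return len(shared) / len(cited_set), shared

copilot_sources = ["searchengineland.com", "coalitiontechnologies.com",
                   "seroundtable.com"]
bing_rankings = ["searchengineland.com", "blog.google",
                 "searchenginejournal.com", "yoast.com",
                 "developers.google.com", "semrush.com"]

share, shared = source_overlap(copilot_sources, bing_rankings)
print(share, shared)  # only 1 of the 3 cited sources ranks in the Bing top results
```

Tracked over many prompts, a metric like this indicates how strongly an AI application leans on its underlying search index when selecting sources.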
For the same prompt, ChatGPT shows the following sources:

Google’s Gemini mentions the following sources:

In addition to relevance criteria, other quality criteria also appear to be used in the selection of sources; these could presumably be similar to Google’s E-E-A-T concept.
Some studies on Google’s SGE also show a high correlation with well-known brands, such as a study by Peak Ace in the tourism segment and another study by Authoritas.
Peak Ace examined which domains in the travel segment are frequently linked from the SGE:

Authoritas has investigated which domains are generally linked from the SGE:

A connection between brand strength and the selection of sources for the SGE can be surmised.
Digital brand and product positioning of commercially driven prompts
With purchase-oriented prompts, the most important goal is to be directly recommended as a brand or product by the AI, whether in shopping grids or in the generated output.
But how can this be achieved?
As always, a sensible approach is to start with the user and their prompts: understanding the user and their needs is the basis.
Prompts can provide more context than the few terms of a standard search query:

Companies should therefore aim to position their own brands and products in specific user contexts.
Frequently requested attribute classes on the market and in prompts (e.g. condition, usage, number of users …) can be an initial point of reference for finding out in which contexts brands and products should be positioned.
But where should this positioning take place?
To do this, you need to know which training data an LLM uses. This in turn depends on the LLM in question:
- If an LLM has access to a search engine, highly ranking content in this search engine could be a possible source.
- Renowned (industry) directories, (product) databases or other thematically authoritative sources could be places for optimization.
- Google’s E-E-A-T concept can also play an important role here, at least for Gemini or the SGE, in identifying trustworthy sources as well as trustworthy brands and products.
Which questions do we have to answer before starting LLMO / GEO?
You should ask yourself the following questions before starting with LLMO / Generative Engine Optimization (GEO):
- What is the specific LLMO goal? Getting cited as a source in AI-generated answers and/or positioning your brand in AI-generated answers.
- Where do you have to publish content? Your own websites? What other kinds of media?
- What is your core topic or theme?
- Which semantic signals and related terms would you like to position for?
- Who is your target audience, and which AI platforms or features are they using?
- Which AI-driven platform, LLM or AI feature do you want to optimize for?
- What are the key questions in your niche?
- Which prompts is your target group entering into AI-driven systems?
- What is your content’s extraction readiness for LLMs?
- How will you demonstrate authority and trust?
LLMO / GEO strategies
The following strategies are recommended for LLMO / GEO for more brand visibility in responses from LLMs:
- Presence in high-authority publications: Aim for mentions and articles in renowned industry publications that are frequently cited by AI systems.
- Aim for listicles and rankings: Ensure mentions in “best of” lists and product comparisons on trusted domains.
- Use Quora, Reddit and other Q&A platforms: Actively participate in relevant discussions and answer technical questions constructively.
- Promote customer reviews and UGC: Actively collect customer reviews and display them prominently on your offer and product pages.
- Establish a presence on influential e-commerce platforms: Establish a strong presence on Amazon and other relevant marketplaces.
- Build backlinks and mentions: Aim for product mentions in relevant product comparison and review websites.
- Look for co-occurrences of brands/products and attributes: optimize for these, as they are an important factor for visibility in LLMs.
- Optimization of company representations on your own website, important industry directories, sponsorships, cooperations, Google Business Profiles and more, with a view to entities and NLP extraction.

The impact of SEO for LLMO / GEO and visibility in generative AI
The influence of SEO on LLMO or Generative Engine Optimization (GEO) depends on the extent to which the respective Large Language Model relies on an underlying search engine retrieval system. Generative AI such as ChatGPT, Perplexity, Gemini or Copilot uses the retrieval systems of search engines to ground its answers. The greater the influence of content rankings within the RAG process on source selection during grounding, the more important SEO is, and vice versa. The smaller the overlap between the referenced sources of the generated answer and the top-ranked content, the less classic SEO can be used for LLMO/GEO.

By shifting to other data sources in the grounding process, such as exclusive data via collaborations, API interfaces to databases, knowledge graphs and more, the influence of search engine retrieval, and thus of SEO, on LLMO/GEO can decrease. The way in which search engine indexes are evaluated as part of the retrieval process also influences how much weight traditional SEO carries. Search engine rankings are determined with a view to the relevance and quality of the content and the trustworthiness of the originator. Traditional SEO still focuses on relevance optimization. For LLMO, it might be more important to focus on quality rating concepts such as Google’s E-E-A-T rather than on relevance. The optimization of E-E-A-T criteria is fundamentally different from relevance optimization.
Conclusion
It remains to be seen whether LLMO or GAIO will really become a legitimate strategy for influencing LLMs with regard to their own goals. On the data science side, there is skepticism. Others believe in this approach.
If this is the case, the following goals need to be achieved:
- Establish owned media as a source of training data via E-E-A-T.
- Generate mentions of the brand and products in qualified media.
- Generate co-occurrences of the own brand with other relevant entities and attributes in qualified media.
- Become part of established graph databases such as the Knowledge Graph or Shopping Graph.
The chances of success of LLM optimization are directly related to the size of the market: The more niche a market is, the easier it is to position yourself as a brand in the respective thematic context.
This means that fewer co-occurrences in the qualified media are required in order to be associated with the relevant attributes and entities in the LLMs. The larger the market, the more difficult this is, as many market players have large PR and marketing resources and a long history.
GAIO or LLMO requires significantly more resources than classic SEO to influence public perception.
At this point, I would like to refer to my concept of Digital Authority Management. You can read more about this in the article Authority Management: A new discipline in the age of SGE and E-E-A-T.
Let’s assume that LLM optimization proves to be a sensible strategy. In this case, big brands will have significant advantages in search engine positioning and generative AI results in the future due to their PR and marketing resources.
Another perspective is that search engine optimization can be continued as before, as well-ranked content is simultaneously used to train LLMs. You should also pay attention to co-occurrences between brands/products and attributes or other entities and optimize for them.
Which of these approaches will be the future for SEO is unclear and will only become clear when SGE is finally introduced.