Controlling Output Rankings in Generative Engines for LLM-based Search
Topics: Brand Context, LLM Readability, LLMO / GEO
This paper introduces CORE (Controlling Output Rankings in Generative Engines), a method for manipulating how large language models (LLMs) with search capabilities rank products and other items in their generated recommendations. The authors demonstrate that when users ask LLMs such as GPT-4o or Gemini for product recommendations, the final output rankings are heavily influenced by the initial order of results returned by external search engines, which disadvantages smaller businesses whose products appear lower in retrieval results. CORE works by appending strategically optimized content, in one of three forms (string-based, reasoning-based, or review-based), to a target product's description, pushing that product higher in the LLM's final ranked output.
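The mechanism can be illustrated with a toy sketch. The code below is not the authors' implementation: the scoring function, the `[OPTIMIZED]` marker, and all product names are hypothetical stand-ins for an LLM's position bias and for CORE's appended content. It only shows the shape of the attack: a re-ranker that favors early retrieval positions, and a target item that climbs after optimized content is appended to its description.

```python
# Toy sketch of the CORE idea (illustrative only; not the paper's code).
# A "generative engine" re-ranks retrieved products; appending optimized
# content to one target's description boosts that product's rank.

def toy_llm_score(description: str, position: int) -> float:
    """Hypothetical stand-in for an LLM's preference: earlier retrieval
    positions score higher (position bias), and persuasive appended
    content adds a fixed bonus."""
    base = 1.0 / (1 + position)  # position bias inherited from the retriever
    bonus = 0.5 if "[OPTIMIZED]" in description else 0.0
    return base + bonus

def rank(products):
    """products: list of (name, description) in retrieval order."""
    scored = [(toy_llm_score(desc, i), name)
              for i, (name, desc) in enumerate(products)]
    return [name for _, name in sorted(scored, reverse=True)]

retrieved = [
    ("BigBrand Blender", "Popular blender."),
    ("MidBrand Blender", "Solid blender."),
    ("SmallBiz Blender", "Great blender."),
]

# Without intervention, retrieval order dominates the output ranking.
print(rank(retrieved))
# → ['BigBrand Blender', 'MidBrand Blender', 'SmallBiz Blender']

# CORE-style intervention: append optimized content to the target item only.
promoted = [(n, d + " [OPTIMIZED]") if n == "SmallBiz Blender" else (n, d)
            for n, d in retrieved]
print(rank(promoted))
# → ['BigBrand Blender', 'SmallBiz Blender', 'MidBrand Blender']
```

In the sketch the target moves from last to second despite its low retrieval rank; in the actual method, the appended content is optimized text that sways the LLM itself rather than a literal marker string.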
To test this in a realistic setting, the authors created ProductBench, a large-scale benchmark covering 15 Amazon product categories with 200 products each. Experiments across four major LLMs (GPT-4o, Gemini-2.5, Claude-4, and Grok-3) show that CORE achieves average promotion success rates of 91.4% for Top-5, 86.6% for Top-3, and 80.3% for Top-1 rankings, significantly outperforming existing manipulation methods while keeping the optimized content natural-sounding.
