GEO playbook
LLM-friendly content is content that can be quoted without repair.
By Arne Kellmann. This page summarizes the operating assumptions behind the analyzer and turns them into a practical publishing brief for teams working on AI visibility.
What is LLM-friendly text?
LLM-friendly text is copy that answers directly, names entities explicitly, and keeps each section understandable when read in isolation. A model does not experience a page first as a composed narrative; it experiences a set of chunks: headings, passages, and machine-readable hints that must survive extraction.
How should a page open?
Open with a definition or direct answer. The first sentence should be quotable on its own. The second sentence should add the most important constraint, number, or outcome. Only then should the page widen into nuance, examples, or exceptions.
Why do headings matter?
Headings matter because they define retrieval boundaries. Question-led H2s and H3s make it easier for systems to map a user query to a single self-contained answer block. Long stretches of unlabeled text force the reader and the model to do unnecessary inference.
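As a sketch, a question-led structure with self-contained answer blocks might look like this (the product, prices, and policies below are invented placeholders, not claims about any real service):

```html
<!-- Each H2 poses one query-shaped question; the block under it answers in full,
     so a retrieval system can lift the pair without surrounding context. -->
<h2>What does the export API cost?</h2>
<p>The export API is free up to 1,000 calls per month; beyond that, calls are billed individually.</p>

<h2>How do I rotate an API key?</h2>
<p>Rotate a key from the dashboard under Settings; the old key stays valid for 24 hours.</p>
```

Note that each answer restates its subject ("the export API", "a key") rather than leaning on a pronoun whose antecedent lives in an earlier section.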
What should be machine-readable?
At minimum, the site should expose organization identity, authorship, breadcrumbs where relevant, and FAQ or article entities when they genuinely reflect the visible content. Structured data does not rescue weak writing, but it reduces ambiguity around who published the page and what it represents.
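One minimal sketch of that machine-readable layer is JSON-LD that mirrors the visible page. The publisher name and URL below are placeholders; the headline and author are taken from this page, and an FAQ entity should only be added when the same question-and-answer pairs actually appear in the visible content:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "GEO playbook",
  "author": { "@type": "Person", "name": "Arne Kellmann" },
  "publisher": {
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://example.com"
  }
}
</script>
```

The point of the block is disambiguation, not ranking: it states who published the page and what kind of document it is, in a form that does not depend on layout.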
What makes a page citable?
Citability comes from claims that are concrete enough to be attributed. Use named entities, specific constraints, real dates, version numbers, prices, counts, and source links placed next to the sentence they support. If the page only speaks in abstractions, the model has nothing stable to quote.