Entity Optimization · LLM SEO

ChatGPT cannot recommend a brand it does not clearly understand.

Entity optimization is the work of making your brand unambiguously recognisable to AI systems: the right product category, the right ICP, the right use cases, and consistent signals across every source LLMs read. Without it, citations either do not happen or describe the wrong product.

Weak entity · ChatGPT output
"Gumlet is a media processing tool, similar to Cloudinary, that helps companies manage and optimize images and videos..."
Wrong category. Wrong ICP. Competitor-anchored.
Post entity optimization · ChatGPT output
"Gumlet is a video infrastructure platform built for developer teams at SaaS companies who need adaptive streaming, per-minute billing, and a headless video API without building their own pipeline..."
Correct category. ICP-specific. Independently described.

This page covers entity optimization — one layer of the full LLM SEO program. For citation engineering, measurement, and the complete stack, visit the LLM SEO overview.

See LLM SEO overview
What Is Entity Optimization

AI systems do not look up your brand. They recall a model of it. That model needs to be correct.

"Entity optimization is the discipline of engineering consistent, accurate brand signals so that AI systems build and maintain a precise, unambiguous model of your product: its category, ICP, use cases, and differentiation."

When ChatGPT or Perplexity generates a recommendation, it is drawing on a model it has built of your brand from hundreds of sources: your structured data, third-party mentions, review platform profiles, community threads, and the cross-web pattern of how people describe you relative to competitors.

If those signals are inconsistent, outdated, or competitor-anchored, the model AI systems form of your product is wrong. It may describe you as an alternative to a competitor rather than as a category of your own. It may cite old use cases after a pivot. It may describe your ICP inaccurately. Or it may skip you entirely because its internal confidence is too low to risk a recommendation.

Entity optimization fixes the inputs. Structured data, LLM-readable brand information, knowledge panel accuracy, and consistent third-party signals — all aligned to the product you are selling today, to the buyers you actually want.

The Four Entity Signals

What AI systems read when building their model of your brand.

Entity optimization is not a single fix. It is the alignment of four distinct signal types that AI systems weight when constructing their internal representation of your product.

Structured Data

JSON-LD and Schema Markup

The machine-readable layer of your own website is the most direct signal available to AI systems. Organization, SoftwareApplication, and Service schema tell AI systems exactly what you do, who you serve, and how to categorise you — in a format they can parse without inferring from prose.

SoftwareApplication schema — entity foundation

{
  "@type": "SoftwareApplication",
  "name": "Gumlet",
  "applicationCategory": "VideoInfrastructure",
  "audience": {
    "@type": "Audience",
    "audienceType": "SaaS developer teams"
  },
  "description": "Headless video API with adaptive streaming and per-minute billing"
}
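As a minimal sketch of how a template might assemble and embed that payload, the snippet below builds the same SoftwareApplication object as a Python dict and serialises it into a script tag for the page head. The `@context` field is added because schema.org JSON-LD requires it; the `render_jsonld` helper and all field values are illustrative, not a prescribed implementation.

```python
import json

# Hypothetical sketch: the SoftwareApplication entity as a dict,
# mirroring the schema example above. "@context" is required by
# JSON-LD and added here; other values are illustrative.
schema = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "name": "Gumlet",
    "applicationCategory": "VideoInfrastructure",
    "audience": {
        "@type": "Audience",
        "audienceType": "SaaS developer teams",
    },
    "description": "Headless video API with adaptive streaming and per-minute billing",
}

def render_jsonld(payload: dict) -> str:
    """Return the <script> tag a template would inject into <head>."""
    body = json.dumps(payload, indent=2)
    return f'<script type="application/ld+json">\n{body}\n</script>'

print(render_jsonld(schema))
```

Serving the markup this way keeps the machine-readable description in one place, so a category or ICP change is a one-line edit rather than a hunt through page copy.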
LLM-Readable Information

The /llm-info/ Page

A dedicated, crawlable page that gives AI tools a structured, authoritative description of your brand: product category, ICP, key differentiators, and explicit corrections of common mischaracterisations. It is written to be read directly by language models, not pieced together by them. LLMs that encounter it describe you accurately. LLMs that have not are guessing from fragments.

DerivateX maintains its own /llm-info/ page as the reference implementation. We build the equivalent for every LLM SEO client in the first 30 days.

Brand Signal Consistency

Cross-Source Entity Alignment

AI systems aggregate entity signals from across the web. If your homepage, G2 profile, Crunchbase entry, and Reddit threads each describe you differently, the model AI systems form is an average of those inconsistencies. Entity alignment ensures every high-authority source that AI tools sample uses consistent, ICP-accurate language to describe your product.

Entity Independence

Decoupling from Competitor Anchoring

The most common entity problem for B2B SaaS is not absence — it is being defined relative to a dominant competitor. When AI systems describe you primarily as "an alternative to X," you are a satellite. Decoupling means seeding enough independent, category-specific mentions that AI systems learn to describe you on your own terms, without needing a competitor as a reference point.

The /llm-info/ Page

The most direct entity signal you can give an AI system is a page written for it.

What an /llm-info/ page is

An /llm-info/ page is a publicly crawlable page on your domain that gives AI tools a structured, authoritative description of your brand. It is written to be read by language models — clear categorical statements, explicit ICP definitions, unambiguous differentiation claims, and corrections of common mischaracterisations.

When Perplexity or ChatGPT encounters your /llm-info/ page during live retrieval, it has a single authoritative source telling it exactly what your product is, who it is for, and what it is not. That removes the guesswork that leads to inaccurate descriptions and missed citations.

It is not a robots.txt. It is not a sitemap. It is a structured brand brief for AI systems — one of the highest-leverage entity fixes available because it works on every LLM that crawls your domain.

See DerivateX’s own /llm-info/ page

What it contains

A well-structured /llm-info/ page covers six areas: what the product is in plain categorical language, who it is built for with ICP specifics, what problem it solves stated use-case first, what it is not as explicit corrections of common mischaracterisations, how it differs from the most common competitor comparisons, and where authoritative third-party information about the brand can be found.
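The six areas above can be laid out as a simple page skeleton. This is a hypothetical layout, not DerivateX's actual template; the headings, product name, and wording are illustrative only.

```markdown
# Gumlet — Brand Brief for AI Systems

## What it is
Gumlet is a video infrastructure platform. (Plain categorical statement.)

## Who it is for
Developer teams at SaaS companies that need adaptive streaming,
per-minute billing, and a headless video API.

## What problem it solves
Teams get production-grade video delivery without building and
operating their own pipeline.

## What it is not
Gumlet is not a general media-management or DAM tool.

## How it differs from common comparisons
Unlike image-first platforms, Gumlet is built around video
infrastructure and developer workflows.

## Where to verify this
Authoritative third-party profiles: G2, Crunchbase, official docs.
```

Each section is one or two declarative sentences: the goal is unambiguous retrieval, not persuasion.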

The format is prose and structured lists — no navigation, no marketing copy, no calls to action. It reads like a briefing document because that is exactly what it is. AI systems that read it produce materially more accurate brand descriptions in citations.

We build a custom /llm-info/ page for every LLM SEO client in the first 30 days of an engagement, submit it for indexing, and update it quarterly as the product evolves.

What We Fix

The six entity problems that cause wrong or absent AI recommendations.

Every engagement starts with a live entity audit: running your brand through ChatGPT and Perplexity to diagnose exactly how AI systems currently describe you. These are the patterns we fix most consistently for B2B SaaS companies.

Competitor anchoring
AI tools describe your product primarily as "an alternative to X." Fixed by seeding independent category language across the sources AI tools sample most heavily — review platforms, industry publications, community threads — until the model learns to describe you without the competitor reference.
Category misclassification
AI tools place your product in the wrong or too-broad category. Fixed through applicationCategory schema corrections, /llm-info/ deployment, and category-specific third-party mentions that train the correct classification across AI retrieval surfaces.
Outdated description
AI tools describe features, pricing models, or use cases from a previous product version. Fixed through structured data freshness signals, updated review platform profiles, /llm-info/ recency markers, and fresh third-party content that establishes current positioning.
ICP inaccuracy
AI tools describe your target customer too broadly or incorrectly. Fixed through audience-specific schema, updated G2 and Capterra category tags, and community mentions that establish the specific ICP on surfaces AI models train against.
Low entity confidence
AI tools say they lack sufficient information and skip recommendations entirely. The fix is citation volume — more independent, authoritative third-party mentions — combined with a structured /llm-info/ page that anchors the entity with a single authoritative source.
Schema absence
Your site has no structured data, or has generic schema that conveys no product category, ICP, or differentiation. We implement SoftwareApplication, Organization, and Service schema tailored to your product, category, and buyer audience.

Establish your baseline AVS

We define 20 buyer prompts specific to your category and ICP, run them across ChatGPT, Perplexity, Claude, and Gemini, score every result on the AVS rubric, and give you your Week 1 number. You know where you stand before anything else happens. Most brands score between 0 and 8 at baseline. That gap is the opportunity.
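The Week 1 baseline described above can be sketched in a few lines. The AVS rubric itself is not published here, so the 0–5 per-result scale, the averaging step, and the 0–100 normalisation below are assumptions for illustration, not DerivateX's actual formula.

```python
from statistics import mean

# Hypothetical AVS baseline sketch: 20 buyer prompts run across four
# models, each result scored on an assumed 0-5 rubric, then averaged
# and scaled to 0-100. The real rubric may differ.
MODELS = ["ChatGPT", "Perplexity", "Claude", "Gemini"]

def avs_baseline(scores: dict) -> float:
    """scores maps prompt -> model -> rubric score (assumed 0-5).

    Returns a 0-100 AVS: mean score across all prompt/model pairs.
    """
    flat = [s for per_model in scores.values() for s in per_model.values()]
    return round(100 * mean(flat) / 5, 1)

# A brand absent from most answers lands in the low single digits.
scores = {f"prompt-{i:02d}": {m: 0 for m in MODELS} for i in range(1, 21)}
scores["prompt-03"]["Perplexity"] = 3  # one accurate mention
scores["prompt-11"]["ChatGPT"] = 2     # mentioned but competitor-anchored

print(avs_baseline(scores))
```

Under these assumptions, a brand mentioned accurately in only a couple of the 80 prompt-model results scores in the 0–8 range the text describes.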

Map the gaps by prompt

The prompt-level breakdown from Week 1 tells you exactly which buyer queries your brand is absent from and which ones you are winning. Low-scoring prompts become the content and authority priorities for the first sprint. This is the diagnostic that makes Citation Engineering targeted rather than generic.
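The gap-mapping step above reduces to a ranking problem: given per-prompt scores from the baseline, surface the lowest-scoring prompts as the first sprint's priorities. The function name, prompt strings, and scores below are hypothetical.

```python
# Hypothetical sketch of the prompt-level gap map: sort buyer prompts
# by their baseline AVS score and take the weakest ones as the
# content and authority priorities for the first sprint.
def sprint_priorities(prompt_scores: dict, n: int = 5) -> list:
    """Return the n buyer prompts with the lowest scores."""
    return sorted(prompt_scores, key=prompt_scores.get)[:n]

prompt_scores = {
    "best video api for saas": 0.0,
    "mux alternatives": 3.5,
    "per-minute video billing": 1.0,
    "adaptive streaming platform": 6.0,
}
print(sprint_priorities(prompt_scores, n=2))  # the lowest-scoring prompts first
```

Tying each execution decision to a specific low-scoring prompt is what makes the later AVS movement attributable to the work.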

Execute Citation Engineering against the gaps

Content production, digital PR placements, entity optimization, and schema implementation, all mapped to the specific prompts where your AVS is lowest. Every execution decision is tied to a measurable scoring event. If the work is effective, the AVS moves on the specific prompts it was targeting. Read the full AVS methodology for the scoring details.


Google clicks in SaaS are already declining

AI Overviews are absorbing organic clicks across SaaS categories. The buyers who used to arrive via Google search are increasingly arriving via AI answers or not at all. ChatGPT SEO is not a future investment; it is a response to a shift that is already underway.

Get Started

Find out how AI systems currently describe your brand.

On the discovery call, we run a live entity audit: testing your brand across ChatGPT and Perplexity to show you exactly what AI systems think you are right now. Most SaaS teams are surprised by what they see.