AI Overviews

TL;DR

  • AI Overviews are Google’s AI-generated answer summaries that appear above all organic results, before any blue link is shown.
  • They redefine what visibility means in search. A page can sit in position one and still be invisible to users if it is not cited inside the AI Overview.
  • They reduce clicks on informational queries. Users who get their answer from the summary rarely scroll further, let alone click through to a website.
  • They favour content structure over keyword density. Pages that answer questions clearly in extractable formats earn citations. Pages built around keyword repetition often do not.
  • Their footprint is growing. AI Overviews now appear across a significant and expanding share of Google searches globally.
  • Citation Engineering is the practice of building AI Overview presence deliberately. DerivateX built a structured audit and optimisation framework around it, the same methodology applied across all GEO engagements.

What Are AI Overviews?

AI Overviews are AI-generated summaries that Google places at the very top of the search results page, pulling from multiple indexed sources to answer a user’s query directly, before a single organic result appears.

AI Overviews are distinct from every other element on the search results page. When a user types a query into Google, they have historically received a list of links and made a choice. When Google generates an AI Overview for that same query, the model synthesises a direct answer above everything else. The user may never reach the organic results at all.

Originally launched as Search Generative Experience (SGE) in 2023 and rebranded at Google I/O in May 2024, AI Overviews represent the most consequential structural change to the Google SERP in more than a decade. For a broad range of queries (questions, comparisons, how-to requests, and recommendations), Google now generates a prose summary that references a small number of sources and delivers a direct answer. For many users, it is the only thing they read.

For SEO and marketing teams, this creates a new competitive reality. Earning a top ranking no longer guarantees that users encounter your brand. Visibility now depends on whether your content is selected as a citation source inside the summary itself. Ranking and citation are independent outcomes. Both need to be tracked.

Why AI Overviews Matter for Marketing and SEO Teams

AI Overviews sit above organic results, not beside them. Every other result on the page, including the number one ranking, is pushed further down the moment an AI Overview appears. For teams responsible for organic performance, this changes three things in particular.

Click-through rates are declining on informational queries

Pages holding strong keyword rankings for how-to, definition, and comparison queries are seeing falling CTR as users read the AI summary and leave satisfied. The ranking position has not changed. The user behaviour surrounding it has. Revenue and pipeline impact lags the visibility shift by months, which makes early measurement critical.

Impressions and clicks are drifting apart

A page cited inside an AI Overview may accumulate significant impression volume without generating a corresponding click, because the summary resolved the query completely. Standard rank tracking does not surface this. Google Search Console’s AI Overview impression data does, and it tells a materially different story about how content is actually performing.

The citation pool is extremely small

AI Overviews typically draw from two to five sources per query. Every brand outside that pool is absent from the answer users see first. Unlike the ten organic results of a traditional SERP, there is no secondary position in an AI Overview. You are either cited or you are not.

These are structural changes to how search works, and they apply across industries and query types. Teams that treat AI Overview citation as a metric alongside ranking position and CTR will have a more accurate picture of organic performance than those that do not.

How AI Overviews Work

AI Overviews run on Google’s Gemini model and use a retrieval-augmented generation architecture. Rather than drawing purely from pre-trained knowledge, the system retrieves relevant pages from Google’s live index, uses them as reference material, and generates a grounded summary with attributed citations. What gets retrieved, and what gets cited, is not arbitrary.

Indexability is the baseline requirement

A page must be crawlable, indexed, and free of technical barriers. Pages blocked by crawl errors, covered by “no snippet” directives, or carrying thin content are removed from the retrieval pool before any quality evaluation takes place.

Content structure determines extractability

The retrieval layer assesses whether a page contains a clear, specific answer in a format it can use. Pages that open with a direct answer, use question-format headings, and present information in short and focused paragraphs are retrieved more reliably than pages written primarily for keyword placement. Narrative introductions, buried conclusions, and dense unbroken text all reduce the probability of citation regardless of ranking.

Source credibility shapes selection

Google’s systems apply E-E-A-T signals, covering Experience, Expertise, Authoritativeness, and Trustworthiness, when choosing which sources to cite. Pages with verifiable author credentials, original research or data, and demonstrated topical authority on the subject are preferred over generic content without clear attribution.

The practical takeaway is that AI Overview eligibility is not a separate technical concern. It is the outcome of content quality decisions that good editorial strategy already supports, applied to a more demanding and explicit retrieval standard.

AI Overview Citation vs Search Ranking

These are different outcomes produced by different systems, and optimising for one does not automatically improve the other.

| Criteria | Search Ranking | AI Overview Citation |
| --- | --- | --- |
| System | Search engine ranking algorithm | Gemini model with RAG retrieval layer |
| Output | Position in a list of links | Named source in a generated prose answer |
| User action required | User clicks a link and reads a page | User reads the response; brand is already cited |
| What drives it | Keywords, backlinks, technical SEO | Content structure, E-E-A-T, entity clarity |
| Can you measure it? | Yes, via rank trackers and Search Console | Yes, via Search Console AIO filter and manual prompt audits |

A brand can hold first-page rankings for every target keyword in its category and have a near-zero AI Overview citation rate. The reverse is also increasingly common: brands that have built strong citation signals are being surfaced to users who never reach the organic results below. Both outcomes matter. Neither is a substitute for the other.

Citation Frequency vs Citation Prominence

Not all AI Overview appearances are equal. Two dimensions matter independently.

What is Citation Frequency?

Citation frequency is how often a brand appears across the full set of queries relevant to its category. A brand cited in 15 out of 20 target searches has higher citation frequency than one cited in 4.

What is Citation Prominence?

Citation prominence is the weight given to the brand within each response it appears in. A brand named first as the primary recommendation scores differently from one listed fifth in a multi-option comparison, even if both technically appear in the answer.


The same distinction applies when auditing AI Overview performance with the AI Visibility Score (AVS) framework. A prominently named primary recommendation scores 5 points. A secondary mention scores 3. A passing reference in a list without context scores 1. Absence scores 0. Multiplied across a defined query set and across multiple AI tools, this produces a normalised weekly score that separates frequency from prominence and gives content teams a metric they can report on and improve against.
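As a concrete illustration, the AVS weighting described above can be sketched as a small scoring function. The point values (5/3/1/0) come from the text; the function name, data shape, and normalisation to 100 are assumptions for the sketch, not DerivateX's actual implementation.

```python
# Hypothetical AVS sketch. Point values per the framework described above;
# everything else (names, shapes, 0-100 normalisation) is illustrative.
SCORES = {"primary": 5, "secondary": 3, "passing": 1, "absent": 0}

def avs(observations):
    """Normalise weighted citation observations to a 0-100 score.

    observations: one prominence label per (query, AI tool) pair
    audited in a given week.
    """
    if not observations:
        return 0.0
    earned = sum(SCORES[label] for label in observations)
    maximum = len(observations) * SCORES["primary"]
    return round(100 * earned / maximum, 1)

# One week: 4 target queries checked across 2 AI tools (8 observations).
week = ["primary", "secondary", "absent", "passing",
        "secondary", "absent", "primary", "absent"]
print(avs(week))  # 42.5
```

Run weekly against the same query set, the normalised score makes week-over-week movement comparable even if the number of audited tools changes.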

DerivateX perspective

Why do most brands with strong Google rankings have low AI Overview citation rates, and what does it take to close the gap?

The pattern is consistent across new client audits: a brand holds page-one positions across dozens of commercial queries and is cited in fewer than 20% of AI Overviews for those same terms. The ranking exists. The citation does not. The gap is not a content volume problem.

Content built for keyword matching is structurally different from content built for machine extraction. The entity signals are often fragmented. The category vocabulary is inconsistent between the website, third-party profiles, and press coverage. The answers exist on the page, but they are buried inside narrative introductions or spread across continuous prose that a retrieval system cannot cleanly extract from.

Fixing this does not require a content sprint. It requires a signal audit and a targeted restructuring pass on the pages that already rank well. That is where DerivateX's Citation Engineering work begins, and it is why proximity and entity clarity are addressed before any new content is commissioned.

How AI Overviews Change Content Strategy

Many content patterns that performed well for keyword rankings actively reduce AI Overview citation eligibility. Understanding where the gap lies is the starting point for closing it.

Leading with context instead of answers is the most common problem

AI retrieval systems look for a direct answer near the top of a page or section. Paragraphs that establish background before stating the main point are structurally difficult to extract from. The fix is to place the answer first and support it immediately after.

Keyword repetition does not signal relevance to a language model

Content written to include a target keyword at high frequency often lacks the specific, attributable claims that AI systems need to cite with confidence. A precise answer to a well-formed question carries more weight than a page optimised around keyword density.

Breadth without structure dilutes citation signals

A single page covering ten related subtopics in continuous prose can reduce the extractability of each individual answer. Sections dedicated to one question, answered completely and concisely, retrieve more reliably than comprehensive guides where answers are embedded inside longer narrative flows.
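One way to make this structural check repeatable is a rough audit script that splits a page into question-heading sections and flags any section whose opening paragraph runs long, suggesting a buried answer. The 40-word threshold and the heading pattern are assumptions for the sketch, not documented retrieval rules.

```python
import re

def audit_sections(markdown_text, max_opening_words=40):
    """Flag question-heading sections whose opening paragraph may bury the answer.

    Illustrative heuristic only: the word threshold is an assumption,
    not a known Google limit.
    """
    flagged = []
    # Capture "## Heading?" lines and the body text until the next heading.
    pattern = r"^(#{2,3} [^\n]+\?)\n(.*?)(?=^#{2,3} |\Z)"
    for heading, body in re.findall(pattern, markdown_text, re.M | re.S):
        first_para = body.strip().split("\n\n")[0]
        if len(first_para.split()) > max_opening_words:
            flagged.append(heading.lstrip("# "))
    return flagged

sample = """## What is citation frequency?

Citation frequency is how often a brand appears across target queries.

## How do AI Overviews choose sources?

""" + " ".join(["context"] * 50) + "\n"

print(audit_sections(sample))  # ['How do AI Overviews choose sources?']
```

A section that opens with a short, direct answer passes; one that opens with fifty words of context gets flagged for a restructuring pass.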

Structured data creates a measurable advantage

FAQ schema, HowTo schema, and author entity markup signal to Google’s systems that a page contains discrete, extractable content. Pages with relevant structured data implemented correctly have a documented advantage in AI Overview citation eligibility compared to equivalent pages without it.
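For teams implementing FAQ schema, the JSON-LD shape follows schema.org's FAQPage type. A minimal generator sketch (the helper function itself is illustrative; the field names are schema.org's):

```python
import json

def faq_jsonld(pairs):
    """Build FAQPage structured data (schema.org JSON-LD) from Q/A pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }, indent=2)

print(faq_jsonld([
    ("Does ranking well on Google guarantee inclusion in AI Overviews?",
     "No. Ranking position and AI Overview citation are independent outcomes."),
]))
```

The output goes into a `<script type="application/ld+json">` tag; the Q/A text in the markup should match the visible on-page content.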

For most content teams, the highest-leverage work is not publishing new pages. It is auditing existing pages that rank well but cite poorly, identifying the structural gaps, and correcting them.

Measuring AI Overview Visibility

Conventional SEO reporting does not reflect AI Overview performance. Domain authority, ranking position, and organic session counts can all remain stable while AI Overview citation rate falls to zero on the queries that matter most. Measuring AI Overview visibility accurately requires three inputs working together.

Google Search Console provides the native citation data

Under Performance, filtering by the AI Overviews search appearance shows impression and click volume generated specifically by AIO citations. This is the most direct signal of current AIO performance available without third-party tools, but it only covers queries where citation is already occurring. It does not show where a brand is absent.

Manual prompt audits reveal the gaps Search Console cannot

Running a defined set of target queries (the 20 to 30 questions buyers in a given category are most likely to search) on a weekly basis and recording citation presence produces a directional visibility score that tracks progress over time and surfaces competitive shifts before they appear in traffic data.
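A minimal sketch of that audit loop, assuming citation observations are recorded as a set of cited brands per query (the data shapes and function names are illustrative):

```python
def citation_rate(audit, brand):
    """Share of audited queries whose AI Overview cites `brand`."""
    cited = sum(1 for brands in audit.values() if brand in brands)
    return cited / len(audit)

def shifts(last_week, this_week, brand):
    """Queries where `brand` entered or exited the citation pool."""
    entered = [q for q in this_week
               if brand in this_week[q] and brand not in last_week.get(q, set())]
    exited = [q for q in last_week
              if brand in last_week[q] and brand not in this_week.get(q, set())]
    return entered, exited

# Two weekly audits over a (hypothetical) two-query set.
wk1 = {"best crm for smb": {"BrandA", "BrandB"},
       "crm pricing comparison": {"BrandB"}}
wk2 = {"best crm for smb": {"BrandB"},
       "crm pricing comparison": {"BrandA", "BrandB"}}

print(citation_rate(wk2, "BrandA"))  # 0.5
print(shifts(wk1, wk2, "BrandA"))    # (['crm pricing comparison'], ['best crm for smb'])
```

The entered/exited lists are the early-warning signal: a competitor entering a high-intent query's citation pool shows up here weeks before it registers in traffic reports.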

Third-party SERP monitoring platforms add scale and competitive context

Tools including Semrush, Ahrefs, and dedicated AI visibility platforms track AIO appearance rates across large query sets and flag when competitors enter or exit citations on strategically important queries. For teams that want a structured approach to building and improving that baseline, DerivateX’s GEO agency service covers visibility measurement alongside end-to-end optimisation.

FAQs

1. Does ranking well on Google guarantee inclusion in AI Overviews?

No. Ranking position and AIO citation are independent outcomes driven by different signals. A page in position one may never appear in an AI Overview if its content is not structured for retrieval and extraction. A page ranking sixth or seventh may be cited consistently if it contains a clear, well-formed answer to the query. Both metrics should be tracked separately through Google Search Console.

2. Do AI Overviews appear on every search?

No. AI Overviews are triggered selectively, with the highest frequency on informational, how-to, definition, and comparison queries. Navigational and transactional queries produce AIOs less often. The share of queries generating an AI Overview continues to shift as Google refines its eligibility criteria and user experience approach.

3. Can a brand opt out of being cited in AI Overviews?

Yes, with trade-offs. Applying the “no snippet” meta tag prevents Google from extracting content from a page for use in any snippet format, including AI Overviews. This also removes featured snippet eligibility and can reduce overall SERP visibility. A more precise approach is the data-nosnippet attribute, which can be applied to specific sections of a page while leaving the remainder eligible for citation.
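A rough way to audit which of these directives a page already carries, using only the standard library (the scanner class and the sample HTML are illustrative):

```python
from html.parser import HTMLParser

class SnippetDirectiveScanner(HTMLParser):
    """Detect page-level "nosnippet" robots meta and section-level data-nosnippet."""

    def __init__(self):
        super().__init__()
        self.page_blocked = False   # <meta name="robots" content="...nosnippet...">
        self.blocked_sections = 0   # elements carrying data-nosnippet

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            if "nosnippet" in attrs.get("content", "").lower():
                self.page_blocked = True
        if "data-nosnippet" in attrs:
            self.blocked_sections += 1

html = """<html><head><meta name="robots" content="max-snippet:-1"></head>
<body><section data-nosnippet>Internal pricing notes</section>
<section>Public answer content</section></body></html>"""

scanner = SnippetDirectiveScanner()
scanner.feed(html)
print(scanner.page_blocked, scanner.blocked_sections)  # False 1
```

Here the page stays snippet-eligible overall while one section is excluded, which matches the selective approach described above.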

4. Will AI Overviews reduce organic traffic?

For informational query types, the evidence from early rollout periods points to measurable CTR declines on pages where AI Overviews appeared. The scale of impact varies by industry and how completely the summary resolves user intent. The strategic response is to earn AIO citations alongside maintaining rankings. A page cited inside the AI Overview recovers a share of the visibility that declining organic CTR removes.

5. How is optimising for AI Overviews different from optimising for featured snippets?

The underlying content principles overlap: both reward concise, directly answering content in structured formats. The distinctions are meaningful in practice. AI Overviews synthesise across several sources rather than extracting from one, so topical consistency and cross-page coverage carry more weight. They are also generated by a language model rather than algorithmically assembled, which means content needs to be interpretable in context, not just pattern-matched to a query. Pages already performing well for featured snippets are strong AIO citation candidates, but entity clarity, E-E-A-T signals, and answer block structure are specifically relevant to AI Overview eligibility in ways that featured snippet optimisation alone does not address.

Alekhya R

Focuses on SEO, AI search, and content, with an emphasis on how structured content drives visibility and pipeline for B2B SaaS companies.