Framework · 12 min read · Updated May 1, 2026

The Search Budget Framework: allocate B2B SaaS search effort by retrieval surface, not channel.

Most teams budget search by channel: SEO percent, paid percent, social percent. The Search Budget Framework reallocates by retrieval surface: Google, ChatGPT, Perplexity, Claude, Gemini, Reddit, G2. The output is a single 0 to 100 score that tells you whether your effort is sitting on the surface that converts.

Why this framework exists

The conversion gap:

AI Search — 14.2% B2B SaaS demo conversion (ChatGPT, Claude, Perplexity, Gemini referral traffic)
Google Organic — 2.8% B2B SaaS demo conversion (same offer, same product, same audience)
TL;DR

The framework in five lines.

The five takeaways every B2B SaaS marketing lead at $5M+ ARR should be able to repeat back after reading this article.

01

Most teams allocate by channel. The framework reallocates by retrieval surface: Google, AI assistants, citation engines, communities and directories.

02

AI search converts at roughly 14.2% on B2B SaaS demo intent. Google organic converts at roughly 2.8%. A 5x gap most budgets ignore.

03

Score yourself with the Search Budget Score (0 to 100). Below 60 means most of your effort sits on the surface that drives the least pipeline.

04

Five components: Retrieval Surface Audit, Allocation Map, Conversion-Weighted Reallocation, the SBS metric, and a Quarterly Rebalance Protocol.

05

Gumlet shifted 10 points off Google to AI retrieval. Six months later, 20% of inbound revenue came from AI search. Google traffic stayed flat.

The Problem

Channel-based search budgets stopped working when ChatGPT became a buying surface.

The standard SEO budget for a $5M to $20M ARR B2B SaaS company splits across SEO, paid search, paid social, content, and tools. 40% content, 20% links, 20% paid, 10% tools, 10% experiments. Most public SEO budget guides still recommend roughly this structure.

That structure assumed one thing: that all of those activities feed the same retrieval surface, Google.

In 2026 that assumption broke. ChatGPT alone processes a billion-plus search-equivalent queries per day. Perplexity, Claude, and Gemini add tens of millions more. Buyers at $5M+ ARR B2B SaaS companies start roughly a third of their vendor research in an AI assistant before they touch a Google SERP. Search effort that ranks a blog post on Google does not automatically produce a ChatGPT citation. Different surface, different mechanics.

A channel-based budget cannot answer "where does my pipeline come from?" because channels describe production inputs, not retrieval destinations.

The Anatomy

The four retrieval surfaces every B2B SaaS buyer touches before deciding.

Across our client work, B2B SaaS buyers consistently touch four classes of retrieval surface before they book a demo or submit a contact form. These are not interchangeable. The Search Budget Framework treats each as a distinct allocation target.

Google SERP
Search Engine

Still the largest surface by raw query volume. Includes the ten blue links, AI Overviews, and featured snippets. Wide-net research, comparison shopping, validation.

Optimized through: traditional SEO, keyword targeting, technical SEO, content depth, internal linking, backlinks.

AI Assistants
ChatGPT · Claude · Gemini

Conversational retrieval surfaces where buyers ask "what is the best [category] software for [use case]" and receive a synthesized recommendation.

Optimized through: Citation Engineering, structured claim density, named entity clarity, third-party coverage, definition-forward formatting.

Citation Engines
Perplexity

Live-retrieval AI tools that cite sources alongside answers. They sit between Google and ChatGPT in behaviour: they reward structural signals but pull from live web indexes, so recency still matters.

Optimized through: structured data, recent publication dates, third-party validation, comparison clusters, named entity associations.

Community & Directories
Reddit · G2 · Capterra

Reddit threads, Quora answers, G2 reviews, Capterra listings, vendor blogs. Heavily cited by all three AI surfaces above. A B2B SaaS company that is invisible here is invisible to the AI models that pull from them.

Optimized through: review campaigns, founder-led commentary, partnership content, category-specific community engagement.

The Math

The conversion differential is the entire engine of this framework.

Without it, the framework is a structural reorganization with no urgency. With it, every percentage point of misallocation has a measurable pipeline cost. These numbers are not forecasts. They are portfolio averages from DerivateX-tracked engagements.

AI Search Demo Conversion
14.2%
B2B SaaS portfolio average

AI search traffic arrives pre-qualified. A buyer who asks ChatGPT "what is the best video hosting platform for B2B SaaS with DRM and adaptive streaming" has done the explanatory work upstream. They know what they want. By the time they land on your site, they are evaluating you, not learning the category.

Google Organic Demo Conversion
2.8%
B2B SaaS portfolio average

Google traffic on the same keyword is wider and earlier. A meaningful portion of "best video hosting platform" Google search traffic is researchers, comparison-shoppers, students, and adjacent personas who are not in-market. Same keyword, same offer, less-qualified audience.

AI traffic arrives in smaller volumes than Google but converts at multiples of the rate. Per visit, the dollar value of AI traffic is materially higher. Budgeting that ignores this is leaving pipeline on the table.

Most marketing leads we audit estimate they are spending 25% to 30% on AI search. The real number is consistently in the single digits.

The gap is a specific accounting error: teams count "we rewrote a Google blog post and added FAQ schema" as AI search investment. It is not. That is Google content with extraction-friendly formatting bolted on. Real AI retrieval engineering puts a definition-forward answer in the first 60 words after the H2, surfaces attributable numerical claims in early paragraphs, and treats the page as a data-extraction surface for an LLM rather than a click-through surface for a human.

The Framework

Five components, each producing a concrete artifact.

The framework runs in five steps. Each one is mechanical and produces something your team can act on. None of it is theoretical.

01
Component 01

The Retrieval Surface Audit

List every retrieval surface where your ICP buyers ask a question your product could answer. For each surface, record three data points: whether your brand currently appears, how often it appears against your top 20 to 50 buyer queries, and what its competitive position looks like.

DerivateX Retrieval Surface Taxonomy

Surface        | Type            | Buyer Use
---------------|-----------------|----------------------------------------------------------
Google SERP    | Search Engine   | Wide-net research, comparison shopping, validation
ChatGPT        | AI Assistant    | Synthesized recommendations, conversational research
Claude         | AI Assistant    | Technical evaluation, longer-form comparison
Gemini         | AI Assistant    | Quick recommendations, integrated within Google ecosystem
Perplexity     | Citation Engine | Source-cited research, factual comparisons
Reddit         | Community       | Real-user opinion, edge cases, complaint surfacing
G2 / Capterra  | Directory       | Side-by-side feature comparison, peer review
02
Component 02

The Current Allocation Map

Pull two weeks of timesheets, content calendar entries, and project tracker tickets. For every hour spent on search, classify it by the retrieval surface it was actually engineered for. Most teams find the real distribution looks nothing like what they thought.

Most B2B SaaS Teams Have This Problem

What teams think (reported AI: ~30%): Google SEO 45% · AI Search 30% · Community 15% · Other 10%
What audits reveal (real AI: ~5%): Google SEO 70% · AI Search 5% · Community 8% · Other 17%
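The Allocation Map is spreadsheet work, but the tally can be sketched in a few lines. The tasks, hours, and surface tags below are hypothetical placeholders; only the surface classes come from the framework:

```python
from collections import defaultdict

# Hypothetical two-week timesheet sample: (task, hours, surface the work
# was actually engineered for). Task names and hours are illustrative.
timesheet = [
    ("blog post rewrites",    22, "google"),
    ("keyword research",      10, "google"),
    ("backlink outreach",      8, "google"),
    ("citation engineering",   3, "ai_assistants"),
    ("G2 review campaign",     5, "community"),
    ("reporting and tooling",  9, "other"),
]

# Tally hours per retrieval surface, then convert to a percentage distribution.
totals = defaultdict(float)
for _task, hours, surface in timesheet:
    totals[surface] += hours

total_hours = sum(totals.values())
distribution = {s: round(100 * h / total_hours, 1) for s, h in totals.items()}
print(distribution)
# e.g. {'google': 70.2, 'ai_assistants': 5.3, 'community': 8.8, 'other': 15.8}
```

With this sample the audit lands almost exactly on the "what audits reveal" distribution: 70% of hours engineered for Google, single digits for AI.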
03
Component 03

Conversion-Weighted Reallocation

For each surface, multiply its effort share by its conversion rate to get its conversion-weighted yield, then compare each surface's share of total yield to its share of total effort. A ratio above 1.0 means the surface converts at higher rates than its allocation predicts; below 1.0, it is under-yielding. Run the math on a 10-point shift from Google to AI assistants.

A 10-Point Shift, In Numbers

Starting state:

Surface       | Effort | Yield
--------------|--------|------
Google SERP   | 70%    | 1.96
AI Assistants | 5%     | 0.71
Perplexity    | 2%     | 0.22
Community     | 8%     | 0.44
Cross-surface | 15%    | 0.60
Total yield   |        | 3.93

After the 10-point shift:

Surface       | Effort | Yield
--------------|--------|------
Google SERP   | 60%    | 1.68
AI Assistants | 15%    | 2.13
Perplexity    | 4%     | 0.44
Community     | 8%     | 0.44
Cross-surface | 13%    | 0.52
Total yield   |        | 5.21

+33% conversion-weighted yield from a 10-point reallocation, before any improvement in absolute conversion rates.
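The arithmetic behind those numbers, as a sketch. The Google and AI assistant conversion rates are the article's portfolio averages; the Perplexity, community, and cross-surface rates (11%, 5.5%, 4%) are back-derived from the yield figures, not independently reported:

```python
# Yield per surface = effort share (%) x conversion rate (%) / 100.
CONV_RATE = {
    "google": 2.8,          # portfolio average from the article
    "ai_assistants": 14.2,  # portfolio average from the article
    "perplexity": 11.0,     # back-derived from the yield column
    "community": 5.5,       # back-derived
    "cross_surface": 4.0,   # back-derived
}

def total_yield(effort):
    """Sum of conversion-weighted yields across all surfaces."""
    return sum(pct * CONV_RATE[s] / 100 for s, pct in effort.items())

before = {"google": 70, "ai_assistants": 5, "perplexity": 2,
          "community": 8, "cross_surface": 15}
after = {"google": 60, "ai_assistants": 15, "perplexity": 4,
         "community": 8, "cross_surface": 13}

print(round(total_yield(before), 2))  # 3.93
print(round(total_yield(after), 2))   # 5.21
print(f"{total_yield(after) / total_yield(before) - 1:.0%}")  # 33%
```

Swap in your own tracked conversion rates per surface; the shift that maximizes total yield is the reallocation target.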
04
Component 04

The Search Budget Score (SBS)

A 0 to 100 number measuring how well your effort allocation matches a conversion-weighted distribution of buyer research across surfaces. Below 60, most of your effort is sitting on the surface that drives the least pipeline per unit of work.

SBS Score Bands

0-39 — Severe Misalignment. Distribution from an earlier era of search behaviour. Most pipeline that should be coming in is being missed.
40-59 — Significant Misalignment. Most effort is on a surface that is no longer the highest-yielding for your category. Reallocation is the most valuable move this quarter.
60-79 — Moderate Misalignment. Directionally correct, but one or two surfaces are meaningfully under-invested or over-invested.
80-100 — Aligned. Effort distribution closely tracks conversion-weighted buyer behaviour. Refinement, not reallocation.
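The SBS arithmetic (sum the absolute percentage-point gaps against a conversion-weighted ideal, normalize against a 200-point maximum misalignment, subtract from 100 — the FAQ spells this out) fits in a few lines. The ideal distribution here is a hypothetical target, not a benchmark:

```python
def search_budget_score(actual, ideal):
    """SBS = 100 - (sum of absolute pp gaps / 200-point max) * 100."""
    gap = sum(abs(actual.get(s, 0) - ideal.get(s, 0))
              for s in set(actual) | set(ideal))
    return round(100 * (1 - gap / 200), 1)

# Illustrative distributions (percentage points of effort per surface).
# The "ideal" is a made-up conversion-weighted target for demonstration.
actual = {"google": 70, "ai_assistants": 5, "perplexity": 2,
          "community": 8, "cross_surface": 15}
ideal = {"google": 45, "ai_assistants": 25, "perplexity": 8,
         "community": 12, "cross_surface": 10}

print(search_budget_score(actual, ideal))   # 70.0 -> moderate misalignment
print(search_budget_score(actual, actual))  # 100.0 -> perfectly aligned
```

The 200-point normalizer is the worst case: two distributions with zero overlap differ by 100 points on each side.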
05
Component 05

The Quarterly Rebalance Protocol

The implementation playbook. Three principles govern every rebalance. Most teams that adopt this framework start with an SBS between 50 and 65 and reach a sustained score above 75 within three to four quarters.

Principle 1: Shift 5% to 15% per quarter, never more.

A larger shift breaks pipeline. The Google retrieval engine is still a meaningful absolute pipeline contributor. Cutting it too fast strands traffic and tanks aggregate volume before the AI surface has compounded. A 10-point shift per quarter is the safe ceiling. A 5-point shift is the safe floor.

Principle 2: Cut bottom-tail content first, redirect to top-page Citation Engineering.

Inside any Google retrieval allocation there is a long tail of content that is not ranking, not converting, and not worth sustaining. Those are the first hours to redirect. Apply them to Citation Engineering on your top 10 to 20 highest-traffic pages, since AI assistants disproportionately cite content that already has authority signals.

Principle 3: Do not break Google.

The single most common rebalance failure is treating it as a switch rather than a shift. Google is still 60%+ of total search volume for most B2B SaaS categories. Redirect marginal hours, do not dismantle the existing engine. If your Google traffic drops measurably during a rebalance quarter, you moved too fast.

Live Example

How Gumlet went from 65% Google to 18% AI retrieval in six months.

Gumlet, a B2B video hosting and image optimization platform, ran the framework end to end with us. The starting Search Budget Score was in the moderate-misalignment band. The first quarter shift was a 10-point reallocation off Google to AI retrieval engineering.

~20% of inbound revenue from AI search referrals at month 6
137+ tracked citations across ChatGPT, Perplexity, Claude, and Gemini
Flat Google traffic: no measurable loss from the 10-point shift

Allocation: Before vs After

Google:    65% → 55%
AI search:  5% → 18%
Community:  8% → 11%
Timing

When to run the framework, and one moment when you should not.

The Search Budget Framework is a planning tool, not a continuous operating tool. It produces the most value at three specific moments, and there is one moment when running it is the wrong call.

Run the framework when

  • Annual planning

    If your team is already allocating next year's budget across SEO, paid, and social, that conversation is the right time to introduce the retrieval-surface axis.

  • Mid-quarter pipeline reset

    AI referral traffic has started showing up in your analytics, leadership has noticed, and someone asks where it came from and how to make more of it.

  • The pre-board CMO question

    "Where should our marketing dollars go in 2026?" Walk in with an SBS and the conversation moves from a debate about percentages to a discussion of trajectory.

Skip it when

You are pre-PMF, sub-$5M ARR, and do not yet have an existing search foundation. The framework optimizes the allocation of an existing search engine. If the engine itself does not exist, you do not have anything to allocate. Build the foundation first. Get to a meaningful base of organic traffic and content footprint, then run the framework once the underlying allocation question is real.

FAQ

Common questions from operators.

What is the Search Budget Framework, and how is it different from a regular SEO budget?

The Search Budget Framework is a 5-component method for allocating B2B SaaS search effort by retrieval surface (Google SERP, AI assistants, citation engines, community and directory surfaces) rather than by channel (SEO vs paid vs social). A regular SEO budget answers "how much do we spend on SEO?" The Search Budget Framework answers "where should our finite search effort land in retrieval terms, and how does that distribution compare to where our buyers actually research and which surfaces actually convert them?"

How much of our search effort should go to AI search?

There is no single right answer, because the right allocation depends on where your buyers actually research and what your tracked conversion rates are. A reasonable starting target for a B2B SaaS company at $5M+ ARR is 15% to 25% of total search effort going to AI retrieval engineering, weighted toward Citation Engineering on top pages plus third-party coverage expansion. The Search Budget Score gives you a way to derive the right number for your specific company instead of using a generic benchmark.

How do I calculate my Search Budget Score?

Run the Retrieval Surface Audit and the Current Allocation Map first. Calculate the conversion-weighted ideal effort distribution using your tracked conversion rates per surface (or portfolio benchmarks if you do not have your own data yet). Sum the absolute percentage-point gaps between your actual distribution and the ideal distribution. Normalize against a maximum possible misalignment of 200 percentage points. Subtract the result from 100. A score above 75 means your allocation is well aligned. A score below 60 means significant reallocation is the most valuable move this quarter.

Does running the framework mean cutting our Google SEO budget?

No, but it does mean shifting marginal hours within your existing search effort. The Gumlet reallocation moved 10 percentage points off Google retrieval (from 65% to 55%) and added 13 percentage points to AI retrieval (from 5% to 18%), all by cutting bottom-tail underperforming Google content and redirecting those hours. Google traffic held flat because the cut was concentrated on content that was not ranking or converting in the first place. The principle is "do not break Google," and it is enforced by capping each quarterly shift at 5% to 15% of total effort.

How long does a reallocation take to show results?

In our portfolio, AI citation gains start appearing in months 2 to 3 if Citation Engineering is active on top pages. Pipeline influence (AI referral traffic showing up as a measurable demo source) is typically visible at months 5 to 6. The Gumlet reallocation hit 20% of inbound revenue from AI search at the 6-month mark. Different categories compound at different speeds, but the 6-month milestone is a reasonable planning anchor.

Should we run the framework below $5M ARR?

Probably not, unless you already have an existing search engine producing meaningful organic pipeline. The framework optimizes the allocation of an existing engine. If the engine itself is small or absent, the most valuable work is building the foundation, not optimizing its allocation. The framework becomes worth running when you have at least 12 to 18 months of consistent search effort behind you and can map the existing distribution honestly.

How is the Search Budget Score different from the AI Presence Score?

The two metrics measure different things. The AI Presence Score measures outcomes: how often your brand appears in AI assistant responses, in what position, with what sentiment, across how many platforms. The Search Budget Score measures inputs: how well your team's effort allocation across retrieval surfaces matches a conversion-weighted ideal. AI Presence Score tells you whether you are visible. Search Budget Score tells you whether your allocation is set up to make you more visible over time.
What To Do This Week

Three numbers carry this article. The action is mechanical.

Roughly 5x conversion gap. A single-quarter rebalance window of 5% to 15%. No need to cut Google to gain AI. Open a spreadsheet and start.

01

Pull two weeks of search-related hours

From timesheets, content calendars, project tracker. Anything tagged content, SEO, or marketing operations counts.

02

Classify by retrieval surface

Use the four classes: Google, AI assistants, citation engines, community/directories. Be honest. The accounting trap is real.

03

Calculate your starting distribution

Two hours of work. The output is a starting allocation distribution. That is the foundation for every conversation about budget reallocation that follows.


Apoorv Sharma
Co-founder, DerivateX

Apoorv is the co-founder of DerivateX, a B2B SaaS SEO and Generative Engine Optimization (GEO) agency that engineers AI citations in ChatGPT, Perplexity, Claude, and Gemini and connects them to demo bookings and revenue pipeline. Author of the 2026 AI Visibility Benchmark Report and the Citation Engineering methodology.

Channels describe what you do. Retrieval surfaces describe where your buyers go. Budget the second.