The State of AI Visibility in B2B SaaS: 2026 Benchmark Report

Executive Summary
44.0% of B2B SaaS companies in this study score below 50 out of 100 on AI visibility, meaning nearly half are functionally invisible in the AI systems their buyers use daily. The average AI Presence Score across 50 companies is 56.9/100, with a median of 63.5. The gap between the highest scorer (Clio at 89) and the lowest (LeadSquared at 2) is 87 points.
Category dominance varies dramatically. In field service management, ServiceTitan (68) leads Jobber (41) by 27 points. In workflow automation, Zapier (63) outpaces Make (40) by 23 points. In SEO analytics, Ahrefs (83) leads Semrush (68) by 15 points. These are not marginal differences: they represent fundamentally different levels of presence in the AI-mediated buyer journey. In narrower categories like property management, AppFolio (71) and Buildium (67) are separated by only 4 points, suggesting that category maturity and competitive density affect how AI platforms differentiate between vendors.
Platform behavior is not uniform. Claude is the most selective AI platform, mentioning only 88% of companies tested, compared to ChatGPT and Gemini (both at 100%); Perplexity falls in between at 90%. Gemini produces the best average position (1.0), while ChatGPT and Claude share an average position of 1.2. Sentiment is overwhelmingly positive across all four platforms: ChatGPT and Perplexity sit at 100%, Gemini at 98%, and Claude lowest at 97.7%.
The gap between high and low scorers is structural, not random. The 26 companies scoring 60 or above average a mention rate of 18.8/30 and platform breadth of 18.4/20, with 24 of 26 present on all 4 platforms. The 8 companies scoring 35 or below average a mention rate of 3.0/30 and platform breadth of 2.5/20. The delta is 15.8 points on mention rate and 15.9 points on platform breadth. These are fixable gaps: 10 companies have perfect sentiment (20/20) but mention rates of 8/30 or lower, indicating the AI platforms already view them positively when they appear. They simply do not appear often enough.
Methodology
Company Selection
50 B2B SaaS companies were selected to represent a cross-section of the B2B SaaS landscape. Companies span categories including CRM, SEO tools, project management, payments, field service, property management, security, design, analytics, and more. The dataset includes both market leaders and mid-market challengers to capture the full range of AI visibility outcomes. Scores measure how well each company performs in Generative Engine Optimization (GEO) across all four major AI platforms.
Scoring Framework
Each company receives an AI Presence Score out of 100, composed of four sub-scores:
| Sub-Score | Max Points | What It Measures |
|---|---|---|
| Mention Rate | 30 | How frequently the company is cited in response to category-relevant buyer prompts across all 4 platforms |
| Position | 30 | Where the company appears in ranked lists or recommendation order when mentioned |
| Sentiment | 20 | Whether AI platforms describe the company positively, neutrally, or negatively |
| Platform Breadth | 20 | How many of the 4 AI platforms mention the company and how consistently |
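The report does not publish the exact aggregation formula, but for every company whose sub-scores appear in the tables below, a straight sum of the four capped sub-scores reproduces the headline score. A minimal sketch under that assumption:

```python
from dataclasses import dataclass

@dataclass
class PresenceScore:
    """Hypothetical aggregation: assumes the four sub-scores
    simply sum to a total out of 100 (not confirmed by the report)."""
    mention: int    # 0-30: citation frequency across buyer prompts
    position: int   # 0-30: rank within recommendation lists
    sentiment: int  # 0-20: positive vs neutral vs negative framing
    breadth: int    # 0-20: cross-platform presence and consistency

    def total(self) -> int:
        # Guard against out-of-range sub-scores before summing.
        assert 0 <= self.mention <= 30 and 0 <= self.position <= 30
        assert 0 <= self.sentiment <= 20 and 0 <= self.breadth <= 20
        return self.mention + self.position + self.sentiment + self.breadth

# Clio's published sub-scores reproduce its headline score:
clio = PresenceScore(mention=25, position=24, sentiment=20, breadth=20)
print(clio.total())  # 89
```

The same check holds at the bottom of the table: LeadSquared's 2/30, 0/30, 0/20, 0/20 sums to its overall score of 2.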
Prompt Design
1,400 prompts were tested: 7 prompts per company, across each of the 4 platforms, for all 50 companies (7 × 4 × 50 = 1,400). Prompts were designed to simulate real buyer queries, such as “What is the best [category] software?” and “Compare [Company A] vs [Company B].”
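The 1,400-prompt matrix is simply the cross product of companies, prompt templates, and platforms. A sketch of how such a matrix can be enumerated (the template wordings and company names here are illustrative placeholders, not the exact prompts used in the study):

```python
from itertools import product

platforms = ["ChatGPT", "Perplexity", "Claude", "Gemini"]
companies = [f"Company{i}" for i in range(50)]  # placeholder names
templates = [  # 7 illustrative buyer-query shapes
    "What is the best {category} software?",
    "Top {category} tools in 2026",
    "Compare {company} vs {competitor}",
    "Best {company} alternatives",
    "Is {company} good for {use_case}?",
    "{category} software for small teams",
    "Which {category} vendor do you recommend?",
]

# One test cell per (company, template, platform) combination.
prompt_matrix = list(product(companies, templates, platforms))
print(len(prompt_matrix))  # 1400
```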
Platforms Tested
Four AI platforms were evaluated: ChatGPT (OpenAI), Perplexity (Perplexity AI), Claude (Anthropic), and Gemini (Google). These represent the four most-used AI assistants for professional research and buying decisions as of early 2026.
Limitations
Important Caveats
Non-determinism: AI responses are inherently non-deterministic. Repeated identical prompts can yield different results. Based on repeated testing, scores carry an estimated variance of ±3 to 8 points.
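One straightforward way to put a number on run-to-run variance is to score the same company several times and inspect the spread. The run values below are invented for illustration, not measurements from this study:

```python
from statistics import mean, stdev

# Hypothetical repeated runs of the full scoring pipeline for one
# company; values are illustrative only.
runs = [63, 67, 61, 66, 64]

print(f"mean={mean(runs):.1f}, sd={stdev(runs):.1f}, "
      f"half-range=±{(max(runs) - min(runs)) / 2:.1f}")
```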
Point-in-time: This report reflects AI platform responses as of March and April 2026. AI models are updated regularly, and visibility can shift as training data and retrieval systems change.
Headline Findings
Score Distribution
The distribution of AI Presence Scores across all 50 companies reveals a bifurcated market, not a normal distribution. The largest cluster (18 companies) falls in the 61 to 80 range, while the second-largest (12 companies) sits at 41 to 60. Only 8 companies score above 80, and 12 companies score at 40 or below.
22% of companies (11 of 50) score below 40, indicating substantial invisibility, while 52% score above 60, meaning a slim majority has achieved meaningful AI presence. The median (63.5) sits well above the mean (56.9): the dense 61-to-80 cluster anchors the median, while the long tail of low scorers drags the mean down.
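The headline statistics are reproducible from the 50 overall scores listed in the Complete Dataset table, transcribed here in descending order:

```python
from statistics import mean, median

scores = [89, 86, 86, 86, 83, 83, 81, 81, 79, 79,
          77, 75, 75, 73, 72, 71, 70, 68, 68, 68,
          68, 67, 65, 65, 64, 63, 59, 53, 49, 48,
          46, 46, 45, 44, 44, 41, 41, 41, 40, 39,
          39, 36, 35, 29, 29, 28, 26, 22, 22, 2]

print(mean(scores))                               # 56.92
print(median(scores))                             # 63.5
print(sum(s < 50 for s in scores) / len(scores))  # 0.44
print(sum(s > 60 for s in scores) / len(scores))  # 0.52

# Band counts behind the distribution discussion.
bands = {"0-40":  sum(s <= 40 for s in scores),
         "41-60": sum(41 <= s <= 60 for s in scores),
         "61-80": sum(61 <= s <= 80 for s in scores),
         "81+":   sum(s > 80 for s in scores)}
print(bands)
```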
Category Analysis
Categories with multiple companies reveal how AI platforms differentiate between direct competitors. Below, each multi-company category is analyzed with a leaderboard and platform-level data.
SEO and Digital Marketing Analytics
| # | Company | Score | Mention | Position | Sentiment | Breadth | ChatGPT | Perplexity | Claude | Gemini |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Ahrefs ↗ | 83 | 22/30 | 21/30 | 20/20 | 20/20 | ✓ | ✓ | ✓ | ✓ |
| 2 | Semrush ↗ | 68 | 17/30 | 17/30 | 20/20 | 14/20 | ✓ | ✓ | ✓ | ✓ |
Ahrefs leads Semrush by 15 points. Both are present on all 4 platforms, but Ahrefs achieves a higher mention rate (22 vs 17) and position score (21 vs 17). Semrush loses 6 points on platform breadth (14 vs 20), suggesting less consistent cross-platform presence.
SEO Content Optimization
| # | Company | Score | Mention | Position | Sentiment | Breadth | ChatGPT | Perplexity | Claude | Gemini |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | SurferSEO ↗ | 75 | 17/30 | 19/30 | 19/20 | 20/20 | ✓ | ✓ | ✓ | ✓ |
| 2 | Clearscope ↗ | 68 | 14/30 | 14/30 | 20/20 | 20/20 | ✓ | ✓ | ✓ | ✓ |
SurferSEO leads Clearscope by 7 points. Both achieve identical platform breadth (20/20) and appear on all 4 platforms. The difference comes from SurferSEO’s higher mention rate (17 vs 14) and position (19 vs 14). Clearscope edges ahead on sentiment (20 vs 19).
Enterprise SEO
| # | Company | Score | Mention | Position | Sentiment | Breadth | ChatGPT | Perplexity | Claude | Gemini |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Conductor ↗ | 53 | 10/30 | 10/30 | 19/20 | 14/20 | ✓ | ✓ | ✓ | ✓ |
| 2 | BrightEdge ↗ | 39 | 6/30 | 5/30 | 20/20 | 8/20 | ✓ | ✓ | ✓ | ✓ |
Conductor leads BrightEdge by 14 points. Both appear on all 4 platforms, but BrightEdge has a mention rate of only 6/30, compared to Conductor’s 10/30. BrightEdge’s perfect sentiment (20/20) is not enough to compensate for low visibility and positioning.
Field Service Management
| # | Company | Score | Mention | Position | Sentiment | Breadth | ChatGPT | Perplexity | Claude | Gemini |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | ServiceTitan ↗ | 68 | 17/30 | 17/30 | 20/20 | 14/20 | ✓ | ✓ | ✓ | ✓ |
| 2 | Jobber ↗ | 41 | 8/30 | 6/30 | 19/20 | 8/20 | ✓ | ✗ | ✓ | ✓ |
This category has the widest gap in the study: 27 points. ServiceTitan’s mention rate is more than double Jobber’s (17 vs 8). Jobber is absent from Perplexity entirely, while ServiceTitan appears on all 4 platforms.
Property Management
| # | Company | Score | Mention | Position | Sentiment | Breadth | ChatGPT | Perplexity | Claude | Gemini |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | AppFolio ↗ | 71 | 17/30 | 14/30 | 20/20 | 20/20 | ✓ | ✓ | ✓ | ✓ |
| 2 | Buildium ↗ | 67 | 18/30 | 16/30 | 19/20 | 14/20 | ✓ | ✓ | ✓ | ✓ |
The narrowest gap among paired categories: only 4 points. Buildium actually has a higher mention rate (18 vs 17) and position (16 vs 14), but AppFolio wins on sentiment (20 vs 19) and platform breadth (20 vs 14).
Workflow Automation
| # | Company | Score | Mention | Position | Sentiment | Breadth | ChatGPT | Perplexity | Claude | Gemini |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Zapier ↗ | 63 | 15/30 | 15/30 | 19/20 | 14/20 | ✓ | ✓ | ✗ | ✓ |
| 2 | Make ↗ | 40 | 8/30 | 8/30 | 16/20 | 8/20 | ✓ | ✓ | ✓ | ✓ |
Zapier leads Make by 23 points. Zapier is absent from Claude, while Make is present on all 4 platforms. Despite broader platform presence, Make scores lower across every other sub-score, particularly mention rate (8 vs 15) and position (8 vs 15).
Product Analytics
| # | Company | Score | Mention | Position | Sentiment | Breadth | ChatGPT | Perplexity | Claude | Gemini |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Mixpanel ↗ | 68 | 15/30 | 14/30 | 19/20 | 20/20 | ✓ | ✓ | ✓ | ✓ |
| 2 | Amplitude ↗ | 49 | 11/30 | 12/30 | 18/20 | 8/20 | ✓ | ✓ | ✓ | ✓ |
Mixpanel leads Amplitude by 19 points. Both appear on all 4 platforms, but Mixpanel’s platform breadth score (20 vs 8) drives the largest portion of the gap, indicating more consistent cross-platform recognition.
Payments
| # | Company | Score | Mention | Position | Sentiment | Breadth | ChatGPT | Perplexity | Claude | Gemini |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Stripe ↗ | 65 | 16/30 | 15/30 | 20/20 | 14/20 | ✓ | ✓ | ✓ | ✓ |
| 2 | Razorpay ↗ | 39 | 6/30 | 5/30 | 20/20 | 8/20 | ✓ | ✓ | ✗ | ✓ |
Stripe leads Razorpay by 26 points. Both have perfect sentiment (20/20), but Stripe’s mention rate is nearly 3x higher (16 vs 6). Razorpay is absent from Claude.
Single-Company Categories
The following companies are the sole representative of their category in this study. Each score is an isolated data point, not a competitive benchmark.
| Company | Category | Score | ChatGPT | Perplexity | Claude | Gemini |
|---|---|---|---|---|---|---|
| Clio ↗ | Legal Practice Management | 89 | ✓ | ✓ | ✓ | ✓ |
| Procore ↗ | Construction Management | 86 | ✓ | ✓ | ✓ | ✓ |
| Loom ↗ | Async Video Communication | 86 | ✓ | ✓ | ✓ | ✓ |
| Figma ↗ | Collaborative Interface Design | 86 | ✓ | ✓ | ✓ | ✓ |
| CrowdStrike ↗ | Cloud Endpoint Protection | 83 | ✓ | ✓ | ✓ | ✓ |
| Typeform ↗ | Forms/Surveys | 81 | ✓ | ✓ | ✓ | ✓ |
| Notion ↗ | All-in-One Workspace | 81 | ✓ | ✓ | ✓ | ✓ |
| monday.com ↗ | Work Operating System | 79 | ✓ | ✓ | ✓ | ✓ |
| Veeva Systems ↗ | Life Sciences CRM | 79 | ✓ | ✓ | ✓ | ✓ |
| Salesforce ↗ | CRM Platform | 77 | ✓ | ✓ | ✓ | ✓ |
| Webflow ↗ | No-Code Website Design | 75 | ✓ | ✓ | ✓ | ✓ |
| Pipedrive ↗ | Sales Pipeline CRM | 73 | ✓ | ✓ | ✓ | ✓ |
| Intercom ↗ | Conversational Platform | 72 | ✓ | ✓ | ✓ | ✓ |
| Hotjar ↗ | User Behavior Analytics | 70 | ✓ | ✓ | ✓ | ✓ |
| DocuSign ↗ | E-Signature | 65 | ✓ | ✓ | ✓ | ✓ |
| BrowserStack ↗ | Cross-Browser Testing | 64 | ✓ | ✗ | ✓ | ✓ |
| Postman ↗ | API Development | 59 | ✓ | ✓ | ✓ | ✓ |
| ClickUp ↗ | Project Management | 48 | ✓ | ✓ | ✓ | ✓ |
| Zoho ↗ | Business Management Suite | 46 | ✓ | ✓ | ✓ | ✓ |
| Chargebee ↗ | Subscription Billing | 46 | ✓ | ✓ | ✗ | ✓ |
| Linear ↗ | Issue Tracking | 45 | ✓ | ✓ | ✓ | ✓ |
| Toast ↗ | Restaurant POS | 44 | ✓ | ✓ | ✗ | ✓ |
| Airtable ↗ | Collaborative Work Management | 44 | ✓ | ✓ | ✓ | ✓ |
| Slack ↗ | Team Communication | 41 | ✓ | ✗ | ✓ | ✓ |
| Boulevard ↗ | Salon/Spa Management | 41 | ✓ | ✓ | ✓ | ✓ |
| Mindbody ↗ | Wellness/Fitness | 36 | ✓ | ✗ | ✗ | ✓ |
| Mangools ↗ | SEO Tools Suite | 35 | ✓ | ✓ | ✓ | ✓ |
| Freshworks ↗ | Customer Engagement CRM | 29 | ✓ | ✓ | ✓ | ✓ |
| Kissflow ↗ | Workflow Automation | 29 | ✓ | ✓ | ✓ | ✓ |
| CleverTap ↗ | Customer Engagement | 28 | ✓ | ✓ | ✓ | ✓ |
| SE Ranking ↗ | SEO Platform | 26 | ✓ | ✓ | ✓ | ✓ |
| Close ↗ | Inside Sales CRM | 22 | ✓ | ✓ | ✓ | ✓ |
| WebEngage ↗ | CDP/Marketing Automation | 22 | ✓ | ✓ | ✗ | ✓ |
| LeadSquared ↗ | Sales/Marketing Automation | 2 | ✓ | ✗ | ✓ | ✓ |
Platform Analysis
Platform Comparison
| Metric | ChatGPT | Perplexity | Claude | Gemini |
|---|---|---|---|---|
| Mention Rate | 100% | 90% | 88% | 100% |
| Avg Position | 1.2 | 1.3 | 1.2 | 1.0 |
| Positive Sentiment | 100% | 100% | 97.7% | 98% |
Platform-by-Platform Breakdown
ChatGPT (OpenAI)
ChatGPT mentions 100% of the 50 companies tested, making it the most inclusive platform alongside Gemini. Its average position of 1.2 and 100% positive sentiment indicate that when ChatGPT recommends a B2B SaaS product, it does so favorably and prominently. ChatGPT’s response pattern is to generate structured lists with brief justifications for each recommendation.
Perplexity (Perplexity AI)
Perplexity mentions 90% of companies (45 of 50), with an average position of 1.3 and 100% positive sentiment. It is the second most selective platform after Claude. Perplexity differentiates itself by citing sources alongside recommendations, giving its responses a research-report quality. The 5 companies absent from Perplexity are: Mindbody, Jobber, BrowserStack, Slack, and LeadSquared.
Claude (Anthropic)
Claude is the most selective AI platform, mentioning 88% of companies (44 of 50). Its average position of 1.2 matches ChatGPT, but its positive sentiment rate of 97.7% is the lowest. Claude occasionally returns neutral sentiment rather than positive. The 6 companies absent from Claude are: Zapier, Mindbody, Toast, Razorpay, WebEngage, and Chargebee.
Gemini (Google)
Gemini matches ChatGPT at 100% mention rate and achieves the best average position of 1.0, meaning it places companies at position #1 more frequently than any other platform. Its positive sentiment rate is 98%. Gemini’s responses are concise and list-oriented, with brief contextual notes about each product.
Multi-Platform Visibility
| Platforms Mentioned On | Companies | Percentage |
|---|---|---|
| All 4 platforms | 40 | 80% |
| 3 platforms | 9 | 18% |
| 2 platforms | 1 | 2% |
40 of 50 companies (80%) are mentioned on all 4 AI platforms. 9 companies appear on 3 platforms, and 1 company (Mindbody) appears on only 2. No company in the dataset is completely absent from AI responses.
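The breadth distribution can be cross-checked against the per-platform absence lists reported in the Platform-by-Platform Breakdown (5 companies absent from Perplexity, 6 from Claude, none from ChatGPT or Gemini):

```python
# Absence lists from the platform breakdown; every company not listed
# here is mentioned on all four platforms.
absent_from = {
    "Perplexity": ["Mindbody", "Jobber", "BrowserStack", "Slack", "LeadSquared"],
    "Claude": ["Zapier", "Mindbody", "Toast", "Razorpay", "WebEngage", "Chargebee"],
}
N_COMPANIES, N_PLATFORMS = 50, 4

# Count how many platforms each company is absent from.
missing = {}
for names in absent_from.values():
    for name in names:
        missing[name] = missing.get(name, 0) + 1

# Distribution of platform breadth across the 50 companies.
dist = {N_PLATFORMS: N_COMPANIES - len(missing)}  # present everywhere
for count in missing.values():
    breadth = N_PLATFORMS - count
    dist[breadth] = dist.get(breadth, 0) + 1

print(dict(sorted(dist.items(), reverse=True)))  # {4: 40, 3: 9, 2: 1}
```

Only Mindbody appears in both absence lists, which is why exactly one company drops to 2-platform presence.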
Visibility Gaps
Within categories containing direct competitors, the gap between the highest and lowest scorer reveals how much AI visibility varies for functionally similar products.
| Category | Leader | Score | Trailer | Score | Gap |
|---|---|---|---|---|---|
| Field Service Management | ServiceTitan | 68 | Jobber | 41 | 27 pts |
| Workflow Automation | Zapier | 63 | Make | 40 | 23 pts |
| SEO Analytics | Ahrefs | 83 | Semrush | 68 | 15 pts |
| SEO Content Optimization | SurferSEO | 75 | Clearscope | 68 | 7 pts |
| Property Management | AppFolio | 71 | Buildium | 67 | 4 pts |
The widest gap is in field service management, where ServiceTitan leads Jobber by 27 points. One company has achieved strong AI presence and the other has not: Jobber is entirely absent from Perplexity. The narrowest gap is in property management (4 points), where AppFolio and Buildium are nearly indistinguishable from the AI platforms’ perspective.
Workflow automation shows a 23-point gap between Zapier and Make, despite Make being present on all 4 platforms and Zapier missing from Claude. This demonstrates that platform breadth alone does not determine the overall score. Mention frequency and position carry greater weight (60 of 100 total points combined).
What High Scorers Have in Common
Analyzing the 26 companies that scored 60 or above reveals 5 consistent patterns that distinguish them from the 8 companies scoring 35 or below.
- Near-universal platform presence. 24 of 26 high scorers (92.3%) appear on all 4 AI platforms. Their average platform breadth score is 18.4/20. By contrast, low scorers average 2.5/20 on platform breadth. Companies that are invisible on even one platform lose up to 5 points on breadth alone, and the compounding effect on mention rate is substantial. Strong entity optimization across platforms is a common trait of this group.
- High mention frequency, not just mention existence. High scorers average a mention rate of 18.8/30, compared to 3.0/30 for low scorers (a delta of 15.8 points). Being mentioned once is not enough. High scorers appear across a range of buyer prompts (best-of, comparison, alternative, use-case-specific), not just branded queries.
- Category definition, not category membership. The top scorers (Clio, Procore, Loom, Figma) are not simply participants in their categories. They are the names AI platforms use to define the category itself. When a buyer asks “What is the best construction management software?”, Procore is not merely on the list: Procore is the first answer.
- Consistent positioning across platforms. All 10 of the top 10 scorers hold position #1 on all 4 platforms. This cross-platform consistency suggests that the signals AI platforms use to rank products (training data prevalence, third-party coverage, documentation depth) are uniform across model architectures.
- Sentiment is table stakes, not a differentiator. Sentiment scores are high across the board: 44 of 50 companies score 19 or 20 out of 20 on sentiment. The difference between high and low scorers is not how AI platforms describe them but whether AI platforms mention them at all. Fixing visibility (mention rate and platform breadth) delivers more incremental score improvement than optimizing for sentiment.
Full Leaderboard
Top 10: Highest AI Presence Scores
| # | Company | Category | Score | Mention | Position | Sentiment | Breadth | Platforms |
|---|---|---|---|---|---|---|---|---|
| 1 | Clio ↗ | Legal Practice Mgmt | 89 | 25/30 | 24/30 | 20/20 | 20/20 | 4/4 |
| 2 | Procore ↗ | Construction Mgmt | 86 | 23/30 | 23/30 | 20/20 | 20/20 | 4/4 |
| 3 | Loom ↗ | Async Video | 86 | 23/30 | 23/30 | 20/20 | 20/20 | 4/4 |
| 4 | Figma ↗ | Collaborative Design | 86 | 23/30 | 23/30 | 20/20 | 20/20 | 4/4 |
| 5 | Ahrefs ↗ | SEO Analytics | 83 | 22/30 | 21/30 | 20/20 | 20/20 | 4/4 |
| 6 | CrowdStrike ↗ | Endpoint Protection | 83 | 22/30 | 21/30 | 20/20 | 20/20 | 4/4 |
| 7 | Typeform ↗ | Forms/Surveys | 81 | 21/30 | 21/30 | 19/20 | 20/20 | 4/4 |
| 8 | Notion ↗ | All-in-One Workspace | 81 | 21/30 | 20/30 | 20/20 | 20/20 | 4/4 |
| 9 | monday.com ↗ | Work OS | 79 | 20/30 | 19/30 | 20/20 | 20/20 | 4/4 |
| 10 | Veeva Systems ↗ | Life Sciences CRM | 79 | 21/30 | 18/30 | 20/20 | 20/20 | 4/4 |
Clio (89) leads the entire dataset. As the dominant legal practice management platform, Clio benefits from category clarity: there is no ambiguity about what it does, and AI platforms surface it as the default answer for legal tech queries. Its mention rate of 25/30 is the highest in the study.
Procore, Loom, and Figma (86 each) share the second-highest score. Each dominates a well-defined category (construction management, async video, collaborative design) with identical sub-score profiles: 23/30 mention, 23/30 position, 20/20 sentiment, 20/20 breadth. These companies have achieved what amounts to category ownership in AI responses.
Ahrefs and CrowdStrike (83 each) demonstrate that AI visibility correlates with strong third-party coverage. Both companies are frequently cited in independent reviews, comparison articles, and analyst reports: the exact content AI models are trained on.
All 10 companies in the top 10 are present on all 4 platforms with position #1 across the board. This uniformity confirms that AI visibility at the top tier is not platform-dependent but signal-dependent.
Bottom 10: Lowest AI Presence Scores
| # | Company | Category | Score | Mention | Position | Sentiment | Breadth | Platforms |
|---|---|---|---|---|---|---|---|---|
| 41 | BrightEdge ↗ | Enterprise SEO | 39 | 6/30 | 5/30 | 20/20 | 8/20 | 4/4 |
| 42 | Mindbody ↗ | Wellness/Fitness | 36 | 7/30 | 6/30 | 20/20 | 3/20 | 2/4 |
| 43 | Mangools ↗ | SEO Tools | 35 | 4/30 | 3/30 | 20/20 | 8/20 | 4/4 |
| 44 | Freshworks ↗ | Customer Engagement CRM | 29 | 5/30 | 1/30 | 20/20 | 3/20 | 4/4 |
| 45 | Kissflow ↗ | Workflow Automation | 29 | 3/30 | 3/30 | 20/20 | 3/20 | 4/4 |
| 46 | CleverTap ↗ | Customer Engagement | 28 | 3/30 | 2/30 | 20/20 | 3/20 | 4/4 |
| 47 | SE Ranking ↗ | SEO Platform | 26 | 5/30 | 3/30 | 15/20 | 3/20 | 4/4 |
| 48 | Close ↗ | Inside Sales CRM | 22 | 1/30 | 1/30 | 20/20 | 0/20 | 4/4 |
| 49 | WebEngage ↗ | CDP/Marketing Automation | 22 | 1/30 | 1/30 | 20/20 | 0/20 | 3/4 |
| 50 | LeadSquared ↗ | Sales/Marketing Automation | 2 | 2/30 | 0/30 | 0/20 | 0/20 | 3/4 |
LeadSquared (2) is the lowest scorer in the dataset by a wide margin. With a mention rate of 2/30, a position score of 0/30, sentiment of 0/20, and platform breadth of 0/20, it is effectively invisible to AI platforms. Although it is technically mentioned on 3 of 4 platforms, its mentions are so infrequent and so poorly positioned that it earns near-zero scores on every dimension.
Close and WebEngage (both 22) share the second-lowest score. Both have mention rates of just 1/30 and platform breadth of 0/20, indicating that while AI platforms technically know these companies exist, they almost never recommend them. Both have perfect sentiment (20/20), meaning the rare mentions they do receive are positive.
The pattern in the bottom 10: 8 of 10 have perfect or near-perfect sentiment scores (19 or 20 out of 20). The bottleneck is not perception but presence. These companies are not viewed negatively by AI: they are simply not mentioned at all.
Best Kept Secrets: High Sentiment, Low Mention Rate
10 companies have perfect sentiment (20/20) but mention rates of 8/30 or lower. These are the “best kept secrets” of the AI landscape: when they appear, AI platforms describe them positively, but they appear too infrequently to benefit from that positive perception.
| Company | Overall Score | Sentiment | Mention Rate | Opportunity |
|---|---|---|---|---|
| Close | 22 | 20/20 | 1/30 | Extreme upside: nearly invisible despite positive perception |
| WebEngage | 22 | 20/20 | 1/30 | Extreme upside: nearly invisible despite positive perception |
| Kissflow | 29 | 20/20 | 3/30 | High upside: rare mentions but uniformly positive |
| CleverTap | 28 | 20/20 | 3/30 | High upside: rare mentions but uniformly positive |
| Mangools | 35 | 20/20 | 4/30 | High upside: brand is liked but under-surfaced |
| Freshworks | 29 | 20/20 | 5/30 | High upside: recognized positively but infrequently cited |
| Razorpay | 39 | 20/20 | 6/30 | Moderate upside: positive but missing from Claude |
| BrightEdge | 39 | 20/20 | 6/30 | Moderate upside: enterprise brand underleveraged in AI |
| Mindbody | 36 | 20/20 | 7/30 | Moderate upside: only on 2 platforms, strong when present |
| Toast | 44 | 20/20 | 8/30 | Moderate upside: category leader underleveraged in AI |
These 10 companies represent the clearest optimization opportunity in the dataset. AI platforms already perceive them positively: the gap is in frequency and breadth of mentions. Increasing third-party coverage through Citation Engineering, review presence, and structured content could shift these companies from “occasionally mentioned” to “consistently recommended.”
Complete Dataset: All 50 Companies
The full dataset, sorted by overall AI Presence Score from highest to lowest.
| # | Company | Category | Score | Mention /30 | Position /30 | Sentiment /20 | Breadth /20 | ChatGPT | Perplexity | Claude | Gemini |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Clio | Legal Practice Mgmt | 89 | 25 | 24 | 20 | 20 | ✓ | ✓ | ✓ | ✓ |
| 2 | Procore | Construction Mgmt | 86 | 23 | 23 | 20 | 20 | ✓ | ✓ | ✓ | ✓ |
| 3 | Loom | Async Video | 86 | 23 | 23 | 20 | 20 | ✓ | ✓ | ✓ | ✓ |
| 4 | Figma | Collaborative Design | 86 | 23 | 23 | 20 | 20 | ✓ | ✓ | ✓ | ✓ |
| 5 | Ahrefs | SEO Analytics | 83 | 22 | 21 | 20 | 20 | ✓ | ✓ | ✓ | ✓ |
| 6 | CrowdStrike | Endpoint Protection | 83 | 22 | 21 | 20 | 20 | ✓ | ✓ | ✓ | ✓ |
| 7 | Typeform | Forms/Surveys | 81 | 21 | 21 | 19 | 20 | ✓ | ✓ | ✓ | ✓ |
| 8 | Notion | All-in-One Workspace | 81 | 21 | 20 | 20 | 20 | ✓ | ✓ | ✓ | ✓ |
| 9 | monday.com | Work OS | 79 | 20 | 19 | 20 | 20 | ✓ | ✓ | ✓ | ✓ |
| 10 | Veeva Systems | Life Sciences CRM | 79 | 21 | 18 | 20 | 20 | ✓ | ✓ | ✓ | ✓ |
| 11 | Salesforce | CRM Platform | 77 | 19 | 18 | 20 | 20 | ✓ | ✓ | ✓ | ✓ |
| 12 | Webflow | No-Code Website Design | 75 | 19 | 16 | 20 | 20 | ✓ | ✓ | ✓ | ✓ |
| 13 | SurferSEO | SEO Content Optimization | 75 | 17 | 19 | 19 | 20 | ✓ | ✓ | ✓ | ✓ |
| 14 | Pipedrive | Sales Pipeline CRM | 73 | 18 | 16 | 19 | 20 | ✓ | ✓ | ✓ | ✓ |
| 15 | Intercom | Conversational Platform | 72 | 17 | 15 | 20 | 20 | ✓ | ✓ | ✓ | ✓ |
| 16 | AppFolio | Property Mgmt | 71 | 17 | 14 | 20 | 20 | ✓ | ✓ | ✓ | ✓ |
| 17 | Hotjar | User Behavior Analytics | 70 | 17 | 15 | 18 | 20 | ✓ | ✓ | ✓ | ✓ |
| 18 | Mixpanel | Product Analytics | 68 | 15 | 14 | 19 | 20 | ✓ | ✓ | ✓ | ✓ |
| 19 | ServiceTitan | Field Service Mgmt | 68 | 17 | 17 | 20 | 14 | ✓ | ✓ | ✓ | ✓ |
| 20 | Clearscope | SEO Content Optimization | 68 | 14 | 14 | 20 | 20 | ✓ | ✓ | ✓ | ✓ |
| 21 | Semrush | SEO Analytics | 68 | 17 | 17 | 20 | 14 | ✓ | ✓ | ✓ | ✓ |
| 22 | Buildium | Property Mgmt | 67 | 18 | 16 | 19 | 14 | ✓ | ✓ | ✓ | ✓ |
| 23 | DocuSign | E-Signature | 65 | 16 | 16 | 19 | 14 | ✓ | ✓ | ✓ | ✓ |
| 24 | Stripe | Payments | 65 | 16 | 15 | 20 | 14 | ✓ | ✓ | ✓ | ✓ |
| 25 | BrowserStack | Cross-Browser Testing | 64 | 15 | 15 | 20 | 14 | ✓ | ✗ | ✓ | ✓ |
| 26 | Zapier | Workflow Automation | 63 | 15 | 15 | 19 | 14 | ✓ | ✓ | ✗ | ✓ |
| 27 | Postman | API Development | 59 | 13 | 12 | 20 | 14 | ✓ | ✓ | ✓ | ✓ |
| 28 | Conductor | Enterprise SEO | 53 | 10 | 10 | 19 | 14 | ✓ | ✓ | ✓ | ✓ |
| 29 | Amplitude | Product Analytics | 49 | 11 | 12 | 18 | 8 | ✓ | ✓ | ✓ | ✓ |
| 30 | ClickUp | Project Mgmt | 48 | 10 | 6 | 18 | 14 | ✓ | ✓ | ✓ | ✓ |
| 31 | Zoho | Business Mgmt Suite | 46 | 9 | 9 | 20 | 8 | ✓ | ✓ | ✓ | ✓ |
| 32 | Chargebee | Subscription Billing | 46 | 9 | 9 | 20 | 8 | ✓ | ✓ | ✗ | ✓ |
| 33 | Linear | Issue Tracking | 45 | 9 | 8 | 20 | 8 | ✓ | ✓ | ✓ | ✓ |
| 34 | Toast | Restaurant POS | 44 | 8 | 8 | 20 | 8 | ✓ | ✓ | ✗ | ✓ |
| 35 | Airtable | Work Mgmt | 44 | 9 | 7 | 20 | 8 | ✓ | ✓ | ✓ | ✓ |
| 36 | Slack | Team Communication | 41 | 9 | 9 | 20 | 3 | ✓ | ✗ | ✓ | ✓ |
| 37 | Jobber | Field Service Mgmt | 41 | 8 | 6 | 19 | 8 | ✓ | ✗ | ✓ | ✓ |
| 38 | Boulevard | Salon/Spa Mgmt | 41 | 8 | 5 | 20 | 8 | ✓ | ✓ | ✓ | ✓ |
| 39 | Make | Workflow Automation | 40 | 8 | 8 | 16 | 8 | ✓ | ✓ | ✓ | ✓ |
| 40 | Razorpay | Payments | 39 | 6 | 5 | 20 | 8 | ✓ | ✓ | ✗ | ✓ |
| 41 | BrightEdge | Enterprise SEO | 39 | 6 | 5 | 20 | 8 | ✓ | ✓ | ✓ | ✓ |
| 42 | Mindbody | Wellness/Fitness | 36 | 7 | 6 | 20 | 3 | ✓ | ✗ | ✗ | ✓ |
| 43 | Mangools | SEO Tools | 35 | 4 | 3 | 20 | 8 | ✓ | ✓ | ✓ | ✓ |
| 44 | Freshworks | Customer Engagement CRM | 29 | 5 | 1 | 20 | 3 | ✓ | ✓ | ✓ | ✓ |
| 45 | Kissflow | Workflow Automation | 29 | 3 | 3 | 20 | 3 | ✓ | ✓ | ✓ | ✓ |
| 46 | CleverTap | Customer Engagement | 28 | 3 | 2 | 20 | 3 | ✓ | ✓ | ✓ | ✓ |
| 47 | SE Ranking | SEO Platform | 26 | 5 | 3 | 15 | 3 | ✓ | ✓ | ✓ | ✓ |
| 48 | Close | Inside Sales CRM | 22 | 1 | 1 | 20 | 0 | ✓ | ✓ | ✓ | ✓ |
| 49 | WebEngage | CDP/Marketing Automation | 22 | 1 | 1 | 20 | 0 | ✓ | ✓ | ✗ | ✓ |
| 50 | LeadSquared | Sales/Marketing Automation | 2 | 2 | 0 | 0 | 0 | ✓ | ✗ | ✓ | ✓ |
About This Report
This report is published by DerivateX (derivatex.agency), a B2B SaaS SEO and GEO (Generative Engine Optimization) agency that helps companies get visible in both traditional search and AI-generated answers. The data is sourced from isaiaware.com, a platform that tracks how brands appear in AI-generated responses across ChatGPT, Perplexity, Claude, and Gemini.
Get Your AI Visibility Score
If your company is not in this report and you want to understand your AI visibility profile, or if you are in this report and want to improve your score, we can help.
Want a free preliminary assessment? Request a free AI visibility audit.
Questions? Reach us at hello@derivatex.agency.
Citation Guide
When referencing data from this report, use one of the following formats:
APA Style
DerivateX. (2026, April 3). The State of AI Visibility in B2B SaaS: 2026 Benchmark Report. derivatex.agency/report/ai-visibility-2026
Inline Citation
According to the 2026 AI Visibility Benchmark Report by DerivateX, 44.0% of B2B SaaS companies score below 50 out of 100 on AI visibility. (Source: derivatex.agency)
Data Attribution
Source: “The State of AI Visibility in B2B SaaS: 2026 Benchmark Report,” DerivateX, April 2026. derivatex.agency
For media inquiries or data licensing, contact hello@derivatex.agency.