
We Tested 50 Prompts — These Brands Always Appear

Matt King

May 1, 2026


We ran 50 different prompts across 6 AI platforms. That's 300 individual queries. We tracked every brand mentioned in every response.

The findings are clear: a small group of brands dominates AI recommendations across virtually every relevant query. Here's the full analysis — methodology, results, and what it means for your brand.

The Study

Platforms Tested

  • ChatGPT (GPT-4o)
  • Claude (Claude 3.5 Sonnet)
  • Google Gemini
  • Perplexity
  • Grok
  • DeepSeek

Prompt Categories

We designed 50 prompts across four categories that mirror how real users query AI platforms:

Category 1: "Best X for Y" (15 prompts)

  • "What is the best CRM for small businesses?"
  • "Best project management tool for remote teams"
  • "What's the best email marketing platform for e-commerce?"
  • "Best design tool for non-designers"
  • And 11 more across different SaaS categories

Category 2: "Compare X vs Y" (10 prompts)

  • "Salesforce vs HubSpot — which is better?"
  • "Asana vs Monday.com for project management"
  • "Slack vs Microsoft Teams"
  • "Mailchimp vs ConvertKit"
  • And 6 more head-to-head comparisons

Category 3: "Recommend a tool for Z" (15 prompts)

  • "Recommend a tool for managing customer support tickets"
  • "What should I use to build landing pages?"
  • "I need a tool for tracking website analytics"
  • "Recommend software for managing invoices and expenses"
  • And 11 more open-ended queries

Category 4: "Alternatives to X" (10 prompts)

  • "What are the best alternatives to Salesforce?"
  • "Alternatives to Slack for team communication"
  • "What can I use instead of Mailchimp?"
  • "Cheaper alternatives to Adobe Creative Suite"
  • And 6 more substitution queries

How We Measured

For each of the 300 responses, we recorded:

  • Every brand mentioned by name
  • Its position in the response (first mentioned, second, etc.)
  • Whether it received a positive, neutral, or qualified recommendation
  • The total word count dedicated to describing it
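The tracking above can be sketched in code. This is a minimal illustration of how such per-response records could be aggregated into the appearance-rate and average-position figures reported below; the brand names and numbers here are hypothetical, not study data.

```python
from collections import defaultdict

# Hypothetical records mirroring the fields tracked per response:
# brand, position (1 = mentioned first), sentiment, and word count.
responses = [
    {"brand": "ExampleCRM", "position": 1, "sentiment": "positive", "words": 85},
    {"brand": "ExampleCRM", "position": 2, "sentiment": "neutral", "words": 40},
    {"brand": "OtherTool", "position": 3, "sentiment": "qualified", "words": 22},
]

def appearance_stats(records, total_relevant_queries):
    """Aggregate appearance rate and average position per brand."""
    positions_by_brand = defaultdict(list)
    for r in records:
        positions_by_brand[r["brand"]].append(r["position"])
    return {
        brand: {
            "appearance_rate": len(positions) / total_relevant_queries,
            "avg_position": sum(positions) / len(positions),
        }
        for brand, positions in positions_by_brand.items()
    }

stats = appearance_stats(responses, total_relevant_queries=4)
# ExampleCRM appears in 2 of 4 relevant queries -> 50% appearance rate
```

The same aggregation, run over all 300 responses, produces the tables that follow.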

The Results: Brands That Always Appear

The "80% Club" — Brands Appearing in 80%+ of Relevant Queries

These brands appeared in at least 80% of queries where their category was relevant, across all six platforms:

| Brand | Category | Appearance Rate | Avg. Position |
| --- | --- | --- | --- |
| Salesforce | CRM | 94% | 1.2 |
| HubSpot | CRM / Marketing | 92% | 1.6 |
| Slack | Communication | 89% | 1.3 |
| Zoom | Video Conferencing | 88% | 1.1 |
| Notion | Productivity | 86% | 2.1 |
| Google Analytics | Analytics | 85% | 1.0 |
| Figma | Design | 84% | 1.4 |
| Mailchimp | Email Marketing | 83% | 1.5 |
| Asana | Project Management | 82% | 1.8 |
| Stripe | Payments | 81% | 1.3 |

These 10 brands are the "default answers" of the AI era. When users ask for recommendations, these names appear almost reflexively.

The "50-80% Tier" — Strong but Inconsistent

| Brand | Category | Appearance Rate | Avg. Position |
| --- | --- | --- | --- |
| Monday.com | Project Management | 78% | 2.3 |
| Canva | Design | 76% | 2.0 |
| Zendesk | Help Desk | 74% | 1.4 |
| Dropbox | Cloud Storage | 72% | 2.1 |
| Shopify | E-commerce | 71% | 1.2 |
| QuickBooks | Accounting | 70% | 1.3 |
| Pipedrive | CRM | 68% | 3.2 |
| SEMrush | SEO | 65% | 1.8 |
| Ahrefs | SEO | 63% | 1.9 |
| Freshdesk | Help Desk | 61% | 2.6 |

These brands appear frequently but are absent from certain prompt variations or platforms. They're well-known but haven't achieved the universal default status of the top 10.

The "Under 30%" Tier — The Invisible Middle

Dozens of well-known SaaS brands appeared in fewer than 30% of relevant queries. Some examples that surprised us:

  • Basecamp — 24% (despite decades of brand awareness)
  • Insightly — 18% (solid CRM with loyal users)
  • Keap — 15% (formerly Infusionsoft, significant rebrand confusion)
  • Hootsuite — 22% (once dominant in social media management)
  • Buffer — 19% (well-loved but low AI visibility)

These brands aren't small or unknown. They have real revenue and real customers. But AI platforms rarely recommend them.

Findings by Prompt Type

"Best X for Y" Queries

These queries produced the most concentrated recommendations. AI platforms default to 3-5 "safe" recommendations and rarely venture beyond the obvious leaders.

Pattern: The market leader appears first in 70%+ of responses. The second-place brand appears next in 60%+. After position 3, there's a steep dropoff in consistency.

Implication: If you're not in the top 3 of your category for these queries, you're essentially invisible for the highest-intent recommendation prompts.

"Compare X vs Y" Queries

Comparison queries are the most balanced — AI platforms generally give a fair analysis of both brands when asked to compare. But they also introduce third-party alternatives, which is where unexpected brands can gain visibility.

Pattern: When asked "Salesforce vs HubSpot," AI platforms mention both but also introduce Zoho CRM, Pipedrive, or Freshsales as alternatives in 60%+ of responses. This "and also consider" behavior is a growth opportunity for challenger brands.

"Recommend a Tool for Z" Queries

Open-ended recommendation queries produce the most varied results. Without a specific brand or category anchor, AI platforms draw from a wider pool of options.

Pattern: These queries are where niche brands perform best. When someone asks "recommend a tool for managing freelance contracts," the response includes specialized tools like Bonsai or HoneyBook that wouldn't appear in a general CRM query.

"Alternatives to X" Queries

Alternative queries are gold for challenger brands. When someone asks for alternatives to the market leader, AI platforms surface 5-8 competitors.

Pattern: The #2 brand in the category almost always appears first in "alternatives to [#1]" queries. But positions 3-5 vary significantly across platforms, creating opportunity for mid-tier brands to gain visibility.

The Patterns: What Separates Winners from Invisible Brands

Pattern 1: Content Compound Interest

The brands in the 80%+ tier have been publishing content for years — often decades. Salesforce has been producing content since the 2000s. HubSpot pioneered inbound marketing content strategy. This accumulated content creates a massive training data advantage that newer brands simply can't match overnight.

But they can start building now. Every piece of content published today is a data point for the next model training run.

Pattern 2: Presence in Comparison Content

We found a strong correlation between a brand's appearance in third-party comparison articles and its AI recommendation frequency. Brands mentioned in 100+ comparison articles on authoritative sites appeared in AI recommendations 3x more often than brands mentioned in fewer than 20.
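The relationship described above can be checked with a plain Pearson correlation. The figures below are hypothetical stand-ins for illustration only, not the study's actual dataset.

```python
# Hypothetical data: third-party comparison-article mentions vs. AI
# appearance rate for a handful of illustrative brands (not study data).
article_mentions = [150, 120, 90, 15, 8]
appearance_rate = [0.94, 0.92, 0.83, 0.24, 0.15]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(article_mentions, appearance_rate)
# A value near 1.0 indicates the strong positive relationship described above
```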

This is the most actionable insight from our study. Getting featured in comparison content — whether by creating it, earning it, or securing inclusion in existing roundups — directly drives AI visibility.

Pattern 3: Documentation as a Trust Signal

We compared the documentation depth of 80%+ brands versus under-30% brands. The difference is striking:

  • 80%+ brands: Average of 500+ public documentation pages, well-structured with clear hierarchies
  • Under 30% brands: Average of fewer than 50 public documentation pages, often poorly organized or behind login walls

AI platforms treat documentation as a high-confidence source. When documentation is thin or inaccessible, AI models simply don't have enough information to confidently recommend the product.

Pattern 4: Educational Content Creates Ambient Awareness

Brands that produce educational content beyond their product — industry guides, thought leadership, research reports — build "ambient awareness" in AI models. HubSpot's marketing blog, for instance, makes HubSpot appear in queries that aren't even about CRM or marketing tools.

This cross-category spillover is a powerful advantage. It means users encounter the brand in AI responses even when they weren't looking for that product type.

Pattern 5: The Hidden Bias Toward Established Brands

AI platforms have a structural bias toward established brands. This isn't intentional — it's an emergent property of training on web data where established brands have exponentially more mentions.

This creates a "rich get richer" dynamic:

  1. Established brand has massive web presence
  2. AI models learn to recommend it
  3. Users engage with the brand based on AI recommendation
  4. More content is created about the brand
  5. AI models learn from that new content
  6. The brand's AI visibility increases further

Breaking this cycle requires deliberate, sustained effort — but it's possible. Notion, Figma, and Linear all broke through this cycle within a few years by executing focused visibility strategies.

How Newer Brands Can Break Through

Based on our data, here's a prioritized action plan:

Phase 1: Foundation (Months 1-3)

  • Publish comprehensive, public documentation
  • Create clear, specific positioning ("the [category] for [specific audience]")
  • Build profiles on all major review sites (G2, Capterra, TrustRadius)
  • Check your AEO Score and fix foundational issues

Phase 2: Third-Party Presence (Months 3-6)

  • Actively pursue inclusion in comparison and roundup articles
  • Create your own comparison content (your brand vs. established players)
  • Engage in community discussions (Reddit, forums, social)
  • Guest post on authoritative industry sites

Phase 3: Content Scale (Months 6-12)

  • Publish educational content beyond your product
  • Build a resource library that becomes a category reference
  • Create and distribute original research
  • Monitor your visibility on the AI Visibility Index and double down on what's working

Phase 4: Continuous Optimization (Ongoing)

  • Track AI recommendations monthly across all platforms
  • Respond to model updates that shift your visibility
  • Update documentation and content regularly
  • Expand into adjacent query categories
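The monthly tracking step above can be as simple as diffing two snapshots of appearance rates and flagging meaningful shifts. The platform names and rates below are hypothetical, as is the 5-point threshold.

```python
# Hypothetical month-over-month snapshots of a brand's appearance rate
# per platform, illustrating the monthly check described above.
last_month = {"chatgpt": 0.42, "claude": 0.38, "perplexity": 0.51}
this_month = {"chatgpt": 0.40, "claude": 0.47, "perplexity": 0.33}

def visibility_shifts(before, after, threshold=0.05):
    """Flag platforms whose appearance rate moved more than `threshold`."""
    return {
        platform: round(after[platform] - before[platform], 2)
        for platform in before
        if abs(after[platform] - before[platform]) > threshold
    }

shifts = visibility_shifts(last_month, this_month)
# claude gained, perplexity dropped; chatgpt's small change is filtered out
```

A shift like the perplexity drop here is exactly the kind of model-update effect worth investigating before competitors notice it.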

For more on the specific strategies that work, see our guide on how to get your brand mentioned in ChatGPT.

The Bottom Line

Our 50-prompt, 300-response study confirms what many marketers suspect: AI platforms are becoming a primary brand discovery channel, and most brands are invisible.

The top 10 brands capture a disproportionate share of AI recommendations. But the door isn't closed for challengers. The brands that invest in AI visibility strategies now — documentation, comparison content, community presence, and clear positioning — can meaningfully improve their recommendation rates within months.

The AI Visibility Index tracks these patterns in real time. The AEO Score shows you exactly where your content needs improvement. And our analysis of the top 20 SaaS tools recommended by AI provides the competitive context you need.

The data is clear. The opportunity is now. The brands that act will be the ones AI recommends tomorrow.

Is your brand visible to AI?

Get your free AEO Score. See how your brand appears across ChatGPT, Claude, Perplexity, and other AI platforms.

Check your AEO Score (free)

No signup required. Results in under 60 seconds.

Frequently Asked Questions

How were the 50 prompts selected for this study?

We designed prompts across four categories that reflect how real users query AI: "best X for Y" recommendation queries, "compare X vs Y" comparison queries, "recommend a tool for Z" open-ended queries, and "alternatives to X" substitution queries. We covered 10 major SaaS categories with 5 prompt variations each, ensuring a mix of general and specific queries.

Do AI platforms show bias toward certain brands?

Yes — but it's a reflection of training data, not intentional favoritism. AI models are trained on web content, and brands with larger web footprints naturally appear more. This creates a structural bias toward established brands with years of accumulated content, review coverage, and community presence. It's not a paid placement — it's an emergent property of how these models learn.

Which brands appeared most consistently across all 50 prompts?

Salesforce, HubSpot, Slack, Zoom, and Notion appeared in the highest percentage of relevant queries across all six platforms. These brands have exceptional cross-category visibility — they appear not just in their primary category but in adjacent queries as well (e.g., Slack appearing in productivity queries, not just messaging queries).

Can newer brands break into AI recommendations?

Yes. Our data shows that brands like Notion, Figma, and Linear achieved strong AI visibility despite being younger than competitors. The common thread: they built exceptional documentation, strong community presence, and distinctive positioning early. Newer brands can accelerate this through focused content strategies, third-party coverage, and niche ownership.

How often should brands monitor their AI visibility?

We recommend monthly monitoring at minimum, with real-time tracking for competitive categories. AI model updates can shift brand recommendations significantly overnight. Tools like the Orbilo AI Visibility Index provide continuous monitoring across multiple platforms, so you can detect changes quickly and respond before competitors do.