
AI Visibility: The 4 Strategic Variables to Define Before You Measure

10 min read
TL;DR

Your AI visibility depends on 4 strategic variables: which prompts you measure, which sources AI cites about you, which AI platforms you prioritize, and which languages and regions you target. These are decisions to make upstream of any measurement, not technical adjustments. Demonstrated with real Swatch data.

Ask ChatGPT to recommend a watch as a gift. It suggests Seiko, Casio, Tissot. Not Swatch. Yet it's one of the most recognized watch brands in the world.

I've seen this situation dozens of times in classic SEO. A brand shows up with the same request: "We want to be number one on everything, everywhere." Today with AI, the promise is identical. And just as unrealistic if you don't lay the groundwork first.

Because your AI visibility is not a technical question. It's the result of 4 strategic variables you need to choose upstream of any measurement. Before configuring a tool, before launching an audit, before optimizing a page.

These 4 variables determine what you measure, and therefore what you can improve. Change a variable, the result changes. Swatch goes from 75% to 38% on ChatGPT depending on how the question is phrased. Same brand, same platform, two prompts: 37 points apart.

This article details these 4 variables with real data from Swatch, measured daily across 7 AI platforms for 2 months.

  • 13,240 AI responses analyzed
  • 7 AI platforms
  • 4 languages
  • 36 prompts tracked

Before detailing the 4 variables, a useful distinction.

Two approaches: visibility and narrative

When it comes to AI visibility, there are two fundamental questions:

1. Are you cited? This is the mention-first approach. Does your brand appear in AI responses? How often? Compared to your competitors? That's what most brands care about first: being found.

2. Who is talking about you? This is the citation-first approach. When AI talks about your brand, which sources does it cite? Is it your official site, a favorable editorial article, or a competitor's comparison page? The sources cited shape the narrative. If a competitor is cited when someone talks about you, they control how you're presented.

Most brands need both approaches. Each of the 4 variables below relies on one or the other.

Variable 1: the prompts you measure

Approach: mention-first. Key question: is your brand recommended when it isn't named?

This is the most immediate strategy. You build a mix of branded queries (that mention your brand) and industry queries (that describe a need without naming anyone), then you compare visibility across both.

Branded queries target direct comparison: "Swatch or Casio, which one should I pick for an everyday watch?" Industry queries target the need: "I'm looking for a watch to give as a gift with good value for money."

The gap between the two tells you whether AI knows you (branded) and whether it recommends you spontaneously (industry). That's your spontaneous awareness in the AI universe.
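The gap itself is simple arithmetic over raw measurements. A minimal sketch in Python (the field names and data shape are illustrative, not Repliq's actual format):

```python
from collections import defaultdict

def visibility_by_prompt_type(measurements):
    """Share of AI responses mentioning the brand, split by prompt type.

    `measurements` is a list of dicts with illustrative keys:
    {"prompt_type": "branded" | "industry", "brand_mentioned": bool}
    """
    counts = defaultdict(lambda: [0, 0])  # prompt_type -> [mentions, total]
    for m in measurements:
        counts[m["prompt_type"]][0] += m["brand_mentioned"]
        counts[m["prompt_type"]][1] += 1
    return {pt: mentions / total for pt, (mentions, total) in counts.items()}

# Toy data mirroring the Swatch pattern: known on branded, weak on industry.
data = (
    [{"prompt_type": "branded", "brand_mentioned": True}] * 92
    + [{"prompt_type": "branded", "brand_mentioned": False}] * 8
    + [{"prompt_type": "industry", "brand_mentioned": True}] * 38
    + [{"prompt_type": "industry", "brand_mentioned": False}] * 62
)
vis = visibility_by_prompt_type(data)
gap = vis["branded"] - vis["industry"]  # 0.92 - 0.38 = a 54-point awareness gap
```

The single number `gap` is the metric the rest of this variable turns on: a large positive value means AI knows you but doesn't recommend you spontaneously.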

What it looks like for Swatch

On industry queries (without naming the brand), Swatch only appears in 38% of ChatGPT responses on the gift prompt. On branded queries ("Swatch vs Casio"), it climbs to 92%. AI knows Swatch, but doesn't recommend it spontaneously.

ChatGPT

"If I want a watch that isn't too expensive but still looks good on the wrist, what should I go for?"

Looking for a watch that combines style and affordability? Here are some suggestions: 1. Seiko: Seiko watches are an excellent choice, they are well-made and relatively affordable. The Seiko 5 model, for instance, is often recommended. 2. Casio: Casio offers sturdy watches, especially in the G-Shock range.

ChatGPT recommends Seiko and Casio. Not Swatch. And it's no accident: across the 3 industry prompts in French, ChatGPT only cites Swatch in 38 to 54% of responses depending on how they're phrased.

The comparative analysis also shows who captures the visibility you're losing:

Mentions in AI responses (industry queries):

  • Seiko: 17,916
  • Mondaine: 13,582
  • Certina: 9,484
  • Tissot: 9,065
  • Victorinox: 4,860

Seiko dominates industry responses in accessible watchmaking. On "gift" and "value for money" queries, it's the AI's first reflex.

Why this strategy is the most actionable

The branded/industry gap points directly at a content problem. If you're cited 92% on branded but 64% on industry, AI knows you but the sources it consults for industry queries don't mention you enough. That's a clear signal for your content and PR teams.

Variable 2: the sources cited by AI

Approach: citation-first. Key question: which sources does AI use to talk about you?

Here, the question is no longer "are you cited" but "who is talking about you". When an AI answers a question about your brand, it leans on sources. Some are yours (official site, blog), others are third parties (media, competitors, forums, YouTube).

The ratio between your sources and third-party sources is your owned citation share. The higher it is, the more you control what AI says about you.
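Owned citation share is a simple ratio over the citations extracted from AI responses. A hedged sketch (the data format and function name are illustrative, not a real Repliq API):

```python
def owned_citation_share(citations, owned_domains):
    """Fraction of AI citations pointing to the brand's own properties.

    `citations` is a list of cited domains (illustrative format);
    `owned_domains` is the set of domains the brand controls.
    """
    if not citations:
        return 0.0
    owned = sum(1 for domain in citations if domain in owned_domains)
    return owned / len(citations)

# Toy example mirroring the Swatch figures: roughly 27% owned citations.
cited = (
    ["swatch.com"] * 27
    + ["casio.com"] * 7
    + ["youtube.com"] * 4
    + ["other.example"] * 62
)
share = owned_citation_share(cited, {"swatch.com"})  # 0.27
```

In practice `owned_domains` would include every property the brand controls (official site, blog, regional domains), and everything else counts as third-party narrative.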

Think of it this way: your company may have invested last year in a product repositioning or a CSR initiative. Is it reflected in how AI talks about you? Or is it still repeating an old narrative, perhaps even a comparison article published by a competitor?

What it looks like for Swatch

Out of 51,408 citations extracted from AI responses, swatch.com represents 27% of the total. It's the most cited source, by far. But that also means 73% of the narrative is shaped by third parties.

Among those third parties, competitor sites total more than 7,600 citations:

Sources cited by AI:

  • casio.com (competitor): 7%
  • youtube.com (UGC): 4%
  • mondaine.com (competitor): 4%
  • tissotwatches.com (competitor): 4%
  • seikowatches.com (competitor): 3%
  • chrono24.it (editorial): 1%

When an AI talks about Swatch, it often cites a competitor in the same response.

YouTube, the invisible source

YouTube is the 2nd most cited third-party source in AI responses about Swatch (2,119 citations). "Best affordable watches" videos directly shape Perplexity's answers. If your brand doesn't appear in those videos, it disappears from the responses that cite them.

Gap analysis goes further: it identifies sites that cite your competitors but never your brand. For Swatch, outlets like elle.de, frandroid.com, lesnumeriques.com or businessinsider.de mention Seiko and Certina in their watchmaking articles, but not Swatch. These are concrete PR opportunities: if those outlets talk about your competitors, they could talk about you too.

Sentiment completes the picture: 64% positive responses, 35% neutral, near zero negative. No crisis, but a third of responses are factual without enthusiasm. That's a lever to strengthen brand content. Our article on sentiment analysis in the premium sector details these dynamics.

Variable 3: the AI platform you target

Approach: mention-first. Key question: does every AI platform see you the same way?

This is often the most surprising discovery. What many brands don't realize is that every AI engine has its own biases. A brand can be systematically recommended by Claude and completely absent from Perplexity, for the same question.

Why? Because AI platforms don't work the same way. Claude leans more on its internal knowledge. Perplexity searches for real-time web sources and cites what it finds. ChatGPT Web does shopping search and scrapes e-commerce catalogs. Gemini generates long lists that dilute positions.

If you only measure a single platform, you miss half the picture.

What it looks like for Swatch

Repliq dashboard (illustrative data): overall AI visibility 85.8% (+2.1%), average position #2.2 (↑ 0.3), positive sentiment 64% (0% negative), 72 measurements today across 5 providers. Average position by platform: ChatGPT #1.3, Grok #1.6, Perplexity #2.0, Claude #2.3, Gemini #5.6.

Repliq dashboard: Swatch visibility by AI platform

Swatch's overall visibility is 85.8%. That looks solid. But this number hides dramatic gaps:

  • Claude recommends Swatch in 96% of responses on industry queries
  • Gemini: 97%
  • Grok: 100%
  • Perplexity: 33%
  • ChatGPT Web: 38%
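To see why a single overall score can mislead, here is a quick sketch using the per-platform industry figures above. It uses a plain unweighted mean for illustration; the 85.8% overall score is presumably weighted differently (e.g. it also includes branded prompts):

```python
# Per-platform visibility on industry queries, from the figures above.
platform_visibility = {
    "Claude": 0.96,
    "Gemini": 0.97,
    "Grok": 1.00,
    "Perplexity": 0.33,
    "ChatGPT Web": 0.38,
}

mean = sum(platform_visibility.values()) / len(platform_visibility)
spread = max(platform_visibility.values()) - min(platform_visibility.values())
# mean is about 0.73, but the spread is 67 points:
# any single aggregate hides which platforms are failing you.
```

Reporting the spread (or the per-platform breakdown itself) alongside the aggregate is what surfaces a Perplexity-sized hole in an otherwise healthy score.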

Perplexity knows Swatch (100% on branded queries). But on industry queries, it doesn't find web sources to cite that mention Swatch, so it doesn't recommend it.

Claude ✓ Brand cited

"I'm looking for a watch to give as a gift with good value for money, what would you recommend?"

For a quality gift in Switzerland, here are my suggestions: Swatch (CHF 50-150). Swiss manufacturer based in Biel/Bienne. Excellent value for money, modern and playful design. Reliable quartz movement. Wide range of styles for every taste. Tissot (entry-level, CHF 200-400). Swiss manufacture, PRX or T-Classic watches are very popular.

Same question, same moment, but Claude places Swatch first while Perplexity doesn't mention it at all. This mechanism is detailed in our article on the shift from search engines to answer engines.

The Claude vs ChatGPT contrast

On the same gift prompt, Claude cites Swatch first with prices and product details. ChatGPT only mentions it 66% of the time, but when it does, it places it in position #1.3 (first cited). Two platforms, two radically different behaviors for the same brand.

Variable 4: the languages and regions of your audience

Approach: mention-first. Key question: is your brand equally visible across every language in your markets?

AI has a strong linguistic bias. It favors languages where it finds the most quality content to draw from. For a Swiss brand operating in French, German and Italian, that gap can be significant.

For a global brand, multi-language analysis often confirms that everything is fine. For a local or regional brand, it reveals black holes.

What it looks like for Swatch

Swatch is a global brand. Its visibility on industry queries is nearly identical in French (86%), German (86%), English (86%) and Italian (85%). A 1.5-point gap, nothing actionable. Swatch has content everywhere, in every language.

That's the good-news scenario. For a regional brand with uneven language coverage, the same analysis would expose clear gaps, often between 5 and 15 points, on the language of its core market. When you cross language with platform, those gaps can widen further: a brand can drop below 30% on ChatGPT API in the language of its main market, even when it looks healthy in overall scores.

Linguistic bias is not an awareness gap

A language gap on AI rarely reflects a real awareness deficit. It reflects a content deficit: the sources AI consults in one language mention your brand less than sources in another language. That's a direct lever for content teams: the fix is editorial, not advertising.

The upstream work before any measurement

The 4 variables are not options you toggle in a tool. They're choices to make with your marketing team or your agency, before configuring anything at all.

If you start without this, you'll measure noise. Numbers that move for no clear reason, comparisons that mean nothing, actions that don't target the right levers.

If you start with these 4 choices made, measurement becomes useful. You know what you're optimizing, for whom, on which platform, in which language. Reporting follows. So do actions.

For agencies, this upstream work is also a deliverable in itself. It's often the moment when the client realizes that "being visible everywhere" is neither possible nor desirable. Our guide GEO for agencies details how to structure that conversation.

Conclusion

Your AI visibility doesn't depend on a mysterious algorithm. It depends on 4 decisions you make: which prompts to measure, which sources to monitor, which platforms to prioritize, which languages and regions to target.

These decisions are made upstream, not after. That's the difference between piloted AI ranking and background noise measured without a compass.


Methodology

Data collected daily by the Repliq platform between February 10 and April 13, 2026 on the Swatch brand: 13,240 measurements, 36 prompts, 4 languages (FR, EN, DE, IT), 51,408 citations extracted, 1,551 sources identified, 5 competitors tracked. Platforms: ChatGPT (API + Web), Claude, Gemini, Google AI Mode, Perplexity, Grok. Measurements deduplicated (1 per provider per day per prompt). To understand the KPIs used, see our article on the 3 essential KPIs for GEO.
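The deduplication rule above (one measurement per provider, per day, per prompt) can be sketched as follows. The dict keys and the first-measurement-wins policy are assumptions for illustration, not Repliq's documented behavior:

```python
def deduplicate(measurements):
    """Keep one measurement per (provider, day, prompt) triple.

    Assumes the first measurement seen for a triple wins; `measurements`
    is a list of dicts with illustrative keys.
    """
    seen = {}
    for m in measurements:
        key = (m["provider"], m["date"], m["prompt"])
        if key not in seen:
            seen[key] = m
    return list(seen.values())

raw = [
    {"provider": "chatgpt", "date": "2026-02-10", "prompt": "gift", "mentioned": True},
    {"provider": "chatgpt", "date": "2026-02-10", "prompt": "gift", "mentioned": False},  # duplicate run, dropped
    {"provider": "claude", "date": "2026-02-10", "prompt": "gift", "mentioned": True},
]
deduped = deduplicate(raw)  # 2 measurements remain
```

Deduplicating this way keeps a retried prompt from counting twice in a day's visibility score.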

Frequently Asked Questions

What is the difference between branded and industry visibility on AI?

Branded prompts mention your brand ('Swatch vs Casio'). Industry prompts describe a need without naming you ('best watch as a gift'). The gap between the two measures your spontaneous awareness with AI.

Why is my brand visible on Claude but not on ChatGPT?

ChatGPT and Perplexity rely on real-time web sources. If the articles they find don't mention you, you don't exist. Claude leans more on its internal knowledge.

What is owned citation share?

It's the portion of your own content among the sources cited by AI when they talk about you. The higher this ratio, the more you control the narrative around your brand.

How do language gaps affect AI visibility?

AI favors languages where it finds the most content to cite. A Swiss brand can be visible in French but invisible in German if its German-language content is underrepresented in the sources.

Measure your AI visibility

Discover how ChatGPT, Gemini, and Perplexity talk about your brand.

Start free audit
