# How GEO Monitor Measures AI Visibility
Transparency matters. Here is exactly how we collect data, score visibility, and generate recommendations.
## How Scans Work
When you run a scan, GEO Monitor sends your monitoring query to each selected AI engine (ChatGPT, Perplexity, Claude, Gemini, DeepSeek, Grok) via their official APIs. We append a prompt suffix asking for specific product recommendations, which encourages the engine to name brands rather than give generic advice.
Each engine query is independent. We capture the full raw response, latency, and any metadata the engine returns (such as Perplexity's native citation URLs).
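As a rough sketch, the scan loop described above might look like the following Python. The `call_engine` callback, `PROMPT_SUFFIX` wording, and record fields are illustrative stand-ins, not GEO Monitor's actual internals:

```python
from dataclasses import dataclass, field
import time

# Hypothetical suffix; the real wording is internal to GEO Monitor.
PROMPT_SUFFIX = " Please recommend specific products or brands by name."

ENGINES = ["chatgpt", "perplexity", "claude", "gemini", "deepseek", "grok"]

@dataclass
class EngineResult:
    engine: str
    raw_response: str       # full raw response text
    latency_ms: float
    metadata: dict = field(default_factory=dict)  # e.g. Perplexity citation URLs

def run_scan(query: str, call_engine) -> list[EngineResult]:
    """Send the monitoring query (plus suffix) to each engine independently."""
    results = []
    for engine in ENGINES:
        start = time.monotonic()
        text, meta = call_engine(engine, query + PROMPT_SUFFIX)
        results.append(EngineResult(engine, text, (time.monotonic() - start) * 1000, meta))
    return results
```

Each engine call is independent, so a failure or timeout on one engine does not affect the others.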
What Is a "Mention"?
A mention means your brand name (or one of your configured aliases) appeared in an AI engine's response to a query. We detect mentions using case-insensitive word-boundary matching against the full response text.
We also check for possessive attribution (e.g., "Competitor's GEO Monitor") to avoid false positives where your brand name appears as part of another entity's description.
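Case-insensitive word-boundary matching with a possessive check can be sketched as follows; the `find_mentions` helper and its exact possessive rule are illustrative assumptions, not the production detector:

```python
import re

def find_mentions(brand: str, aliases: list[str], response: str) -> list[int]:
    """Return character offsets of standalone brand mentions in a response."""
    offsets = []
    for name in [brand, *aliases]:
        # Case-insensitive, word-boundary match on the brand name or alias.
        pattern = re.compile(rf"\b{re.escape(name)}\b", re.IGNORECASE)
        for m in pattern.finditer(response):
            # Skip possessive attribution like "Competitor's GEO Monitor",
            # where the brand name is part of another entity's description.
            preceding = response[:m.start()].rstrip()
            if preceding.endswith(("'s", "\u2019s")):
                continue
            offsets.append(m.start())
    return sorted(offsets)
```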
Brands not in your configured list but mentioned by the AI are auto-discovered using formatting patterns (bold text, numbered lists, headings) and shown in the "Also Mentioned" section.
## How Position Is Determined
Position is the sequential order in which a brand appears in the AI response. The first brand mentioned is position #1, the second is #2, and so on. This is measured per response, not per engine.
Position matters because AI engines typically front-load their strongest recommendations. A brand at position #1 is more likely to be the user's takeaway than one at position #5.
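Per-response position assignment reduces to ordering brands by first appearance. A minimal sketch (the `brand_positions` helper is illustrative):

```python
def brand_positions(response: str, brands: list[str]) -> dict[str, int]:
    """Assign sequential positions by first appearance in one response."""
    first_seen = {}
    lowered = response.lower()
    for brand in brands:
        idx = lowered.find(brand.lower())
        if idx != -1:
            first_seen[brand] = idx
    # Sort by first occurrence; position #1 is the earliest-mentioned brand.
    ordered = sorted(first_seen, key=first_seen.get)
    return {brand: pos for pos, brand in enumerate(ordered, start=1)}
```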
## How Sentiment Is Evaluated
Sentiment is determined by keyword heuristics applied to the sentence containing each mention. We check for 45 positive signals (e.g., "recommend", "excellent", "reliable", "innovative") and 25 negative signals (e.g., "outdated", "expensive", "limited", "disappointing").
The positive and negative signal counts are combined and normalized to a score between -1.0 and +1.0. Scores above +0.1 are "positive", scores below -0.1 are "negative", and everything in between is "neutral".
This is a practical heuristic, not an NLP model. It works well for the structured, recommendation-oriented responses AI engines produce but may miss nuanced sarcasm or qualified praise.
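A minimal version of this heuristic, assuming a simple (positive - negative) / total normalization and truncated signal lists (the real lists have 45 and 25 entries):

```python
POSITIVE = {"recommend", "excellent", "reliable", "innovative"}   # sample of 45
NEGATIVE = {"outdated", "expensive", "limited", "disappointing"}  # sample of 25

def sentence_sentiment(sentence: str) -> tuple[float, str]:
    """Score the sentence containing a mention via keyword counts."""
    words = [w.strip(".,!?") for w in sentence.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    score = 0.0 if total == 0 else (pos - neg) / total  # normalize to [-1, +1]
    if score > 0.1:
        label = "positive"
    elif score < -0.1:
        label = "negative"
    else:
        label = "neutral"
    return score, label
```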
## How Visibility Scoring Works
The visibility score is a position-weighted metric that accounts for both where you appear and whether the engine recommended you:
| Position | Base Score | If Recommended |
|---|---|---|
| #1 | 10 | 15 (1.5x) |
| #2 | 8 | 12 (1.5x) |
| #3 | 6 | 9 (1.5x) |
| #4 | 4 | 6 (1.5x) |
| #5+ | 2 | 3 (1.5x) |
A "recommendation" is detected when the AI response contains explicit recommendation language near your brand mention (e.g., "highly recommend", "top pick", "best option"). We check for 19 distinct recommendation signal phrases.
Your total visibility score is the sum of points across all mentions. This is more meaningful than a raw mention count: the weighting means a single #1 mention is worth as much as five #5 mentions, and a #1 recommendation (15 points) outweighs them outright.
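The scoring table translates directly into code. A sketch of the weighting (helper names are illustrative):

```python
BASE_SCORES = {1: 10, 2: 8, 3: 6, 4: 4}  # positions #5 and beyond score 2
RECOMMEND_MULTIPLIER = 1.5

def mention_score(position: int, recommended: bool) -> float:
    """Points for one mention: position-weighted, 1.5x if recommended."""
    base = BASE_SCORES.get(position, 2)
    return base * RECOMMEND_MULTIPLIER if recommended else float(base)

def visibility_score(mentions: list[tuple[int, bool]]) -> float:
    """Total score: sum over all mentions across all engines."""
    return sum(mention_score(pos, rec) for pos, rec in mentions)
```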
## What Citation Gaps Mean
When AI engines answer questions, they sometimes cite sources — especially Perplexity, which returns a native list of citation URLs with every response. Other engines occasionally include URLs inline in their text.
A citation gap is a source domain that appears in AI responses about your competitors but never in responses about you. These represent opportunities: if you can get mentioned or linked from those sources, AI engines may start citing you too.
Citation data is strongest from Perplexity (native API citations) and weaker from other engines (regex-extracted URLs from response text). We aggregate both sources.
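Once citation URLs are collected (whether native or regex-extracted), gap detection is a set difference over domains. A minimal sketch, with illustrative helper names:

```python
from urllib.parse import urlparse

def domains(urls: list[str]) -> set[str]:
    """Collapse citation URLs to bare domains."""
    return {urlparse(u).netloc.removeprefix("www.") for u in urls}

def citation_gaps(competitor_urls: list[str], your_urls: list[str]) -> set[str]:
    # A gap is a domain cited in responses about competitors
    # but never in responses about you.
    return domains(competitor_urls) - domains(your_urls)
```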
## What the 10-Point GEO Audit Checks
The GEO audit fetches your website and evaluates 10 factors that influence how AI engines perceive and cite your brand:
- Structured Data (JSON-LD) — presence of schema.org markup
- FAQ Content — FAQ section with FAQPage schema
- Comparison Content — "vs" pages or alternative comparisons
- Content Freshness — datePublished/dateModified metadata
- Topical Authority — blog, resources, or guides section
- Third-Party Validation — mentions across independent sources (comparison sites, directories, Reddit, industry blogs)
- AI Bot Access — robots.txt allows GPTBot, ClaudeBot, PerplexityBot
- LLMs.txt — the emerging standard for LLM-friendly site descriptions
- Sitemap Health — sitemap.xml exists with recent lastmod dates
- Meta Signals — og:title, og:description, og:image, twitter:card, canonical URL, meta description
Each check is pass/fail. Your GEO score is the percentage of checks passed (e.g., 8/10 = 80). After scoring, we use Claude to generate 3-4 prioritized fix recommendations for any failed checks.
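Because every check is binary, the score calculation is just a percentage. A trivial sketch (the `geo_score` helper is illustrative):

```python
def geo_score(checks: dict[str, bool]) -> int:
    """Percentage of pass/fail audit checks passed, rounded to an integer."""
    passed = sum(checks.values())
    return round(100 * passed / len(checks))
```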
## Engine Volatility and Limitations
AI engine responses are non-deterministic. The same query sent twice may produce different mentions, positions, or recommendations. This is inherent to how large language models work — they sample from probability distributions, not lookup tables.
What this means for your data:
- A single scan is a snapshot, not a guarantee
- Trends over multiple scans are more reliable than individual results
- Position #1 today may become #3 tomorrow without any change on your part
- Some engines are more consistent than others (Perplexity tends to be more stable than ChatGPT)
- Failed scans (engine timeouts, rate limits) are not charged
We recommend running scans at least weekly to build a meaningful trend baseline.