FAQ
AI Visibility & GEO — Frequently asked questions
What AI visibility is, how Generative Engine Optimization (GEO) differs from SEO, how a GeoSuite audit works in practice, and what changes when ChatGPT, Gemini and Perplexity become the entry point of buyer research.
AI visibility & GEO
What is AI visibility?
AI visibility is how often and how favourably a brand is mentioned in answers from generative AI engines like ChatGPT, Gemini and Perplexity. It is the AI equivalent of a search ranking, but instead of ranked links on a results page it measures whether the brand is named, recommended or cited inside the model's natural-language response.
What is GEO (Generative Engine Optimization)?
GEO stands for Generative Engine Optimization: the practice of optimising a brand's web presence so that generative AI engines cite, mention or recommend it when answering questions. GEO is to AI answers what SEO is to a Google search results page.
How is GEO different from SEO?
SEO targets the ranked list of links on a search results page; GEO targets the synthesised answer produced by an AI engine. SEO success is measured in clicks and rankings; GEO success is measured in brand recall, share of voice and citation sentiment inside AI responses. The two overlap — clear, well-structured content helps both — but the levers and KPIs are different.
Why does AI visibility matter for my brand?
ChatGPT, Gemini, Perplexity and Google's AI Overviews are increasingly the entry point of buyer research. When a user asks 'best CRM for a small agency' or 'alternatives to product X', the brands the AI names get the consideration; brands it omits get skipped. Without an AI visibility programme you do not know whether you are being recommended, ignored, or recommended with the wrong attributes.
How GeoSuite works
Which AI engines does GeoSuite analyse?
GeoSuite audits ChatGPT (OpenAI), Google Gemini and Perplexity. These are the three engines that drive the majority of generative-AI consumer and B2B traffic today. Coverage of additional engines (Claude.ai, Meta AI, Mistral) is on the roadmap.
How does an AI visibility audit work?
GeoSuite generates hundreds of buyer-intent questions specific to your industry and brand, sends them to each AI engine, and analyses every response. For each question we extract whether your brand is mentioned, with what sentiment, alongside which competitors, and which sources the engine cited. The output is a score, a competitor map, a source list and a prioritised action plan.
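As an illustration only, the per-answer extraction step can be sketched in Python. Every name here is hypothetical, and the naive keyword matching stands in for the real entity and sentiment analysis, which GeoSuite does not publish:

```python
from dataclasses import dataclass, field

@dataclass
class AnswerAnalysis:
    question: str
    engine: str
    brand_mentioned: bool
    sentiment: str                       # "positive" | "neutral" | "negative" | "n/a"
    competitors: list = field(default_factory=list)
    cited_sources: list = field(default_factory=list)

def analyse_answer(question, engine, answer_text, brand, competitors):
    """Naive keyword-based extraction; a real audit would use an LLM or NER."""
    text = answer_text.lower()
    return AnswerAnalysis(
        question=question,
        engine=engine,
        brand_mentioned=brand.lower() in text,
        sentiment="n/a",                 # sentiment scoring omitted in this sketch
        competitors=[c for c in competitors if c.lower() in text],
        cited_sources=[],                # source extraction depends on the engine's API
    )

res = analyse_answer("best CRM for a small agency?", "chatgpt",
                     "We recommend Acme and Globex.", "Acme", ["Globex", "Initech"])
```

Aggregating these per-answer records across hundreds of questions is what produces the score, the competitor map and the source list.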
How long does an audit take?
A standard brand audit completes in roughly 5–10 minutes. The exact time depends on the number of generated questions and on how many engines have grounding (live web retrieval) active for the chosen lanes.
What is the difference between a brand audit and an e-commerce audit?
A brand audit measures visibility for a domain: how the brand is talked about as a whole. An e-commerce audit goes one level deeper, scoring each product on attribute alignment, query relevance, AI recommendation fit and market demand signal — answering the question 'which of my SKUs is the AI most likely to recommend?'.
How accurate are the results?
GeoSuite re-runs each query at non-zero temperature and aggregates results to smooth out the inherent variability of generative models. Lane-level scores stabilise after roughly 100–200 queries; brand-level recall and share of voice are reproducible within ±3–5 percentage points across consecutive audits with no underlying changes.
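The aggregation idea itself is simple: repeat the same query set several times and report the mean and spread of the per-run recall values. A minimal sketch with made-up numbers, not real audit data:

```python
import statistics

def aggregate_recall(run_results):
    """run_results: one recall value per repeated audit run
    (fraction of queries in which the brand was mentioned)."""
    mean = statistics.mean(run_results)
    spread = max(run_results) - min(run_results)
    return mean, spread

# e.g. five repeated runs of the same query set at non-zero temperature
runs = [0.38, 0.41, 0.40, 0.37, 0.42]
mean, spread = aggregate_recall(runs)
```

The mean is the reported score; the spread is the model-side variability that more repeats smooth out.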
Metrics explained
What is brand recall?
Brand recall is the share of relevant queries in which an AI engine mentions the audited brand by name. A recall of 40% means that out of 100 relevant questions the brand is mentioned in 40 of the engine's answers. Recall is the most fundamental AI visibility metric — if you are not named, nothing else matters.
What is share of voice (SoV) in AI answers?
Share of voice in AI answers is the proportion of brand mentions that go to the audited brand vs. its competitors on a given set of queries. If on 100 queries the audited brand is named 30 times and the top three competitors are named a combined 70 times, the audited brand has a 30% SoV.
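Both metrics reduce to simple ratios. A minimal sketch with invented data:

```python
def brand_recall(answers, brand):
    """Fraction of AI answers in which the brand is named."""
    hits = sum(1 for a in answers if brand in a)
    return hits / len(answers)

def share_of_voice(mention_counts, brand):
    """Brand's share of all brand mentions across the query set."""
    total = sum(mention_counts.values())
    return mention_counts[brand] / total if total else 0.0

answers = ["Acme and Globex are solid picks", "Globex leads here", "Try Acme"]
recall = brand_recall(answers, "Acme")                                   # 2/3
sov = share_of_voice({"Acme": 30, "Globex": 45, "Initech": 25}, "Acme")  # 0.30
```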
What is citation sentiment?
Citation sentiment measures whether the AI engine talks about the brand positively, neutrally or negatively, and on which attributes. A brand can have high recall but poor sentiment — frequently mentioned but always with caveats — which signals different optimisation work than low recall.
What does 'grounding' mean for AI engines?
Grounding is when an AI engine retrieves live web pages while answering a query, instead of relying only on training data. Perplexity is always grounded; ChatGPT and Gemini ground selectively, mostly on questions with current-events or commercial intent. GeoSuite reports the grounding sources used by each engine so you can see which of your pages are actually being cited.
For brands and agencies
How often should I run an audit?
Monthly is the typical cadence for active monitoring; weekly during an optimisation sprint when you are shipping changes and want to see them land. AI engines update both their training data and their grounding behaviour continuously, so a one-off audit becomes stale within weeks.
Does GeoSuite work for non-English brands and the Italian market?
Yes. Audits run in the target market's primary language by default — Italian for Italian brands, English for US/UK, German for Germany — and the question generator uses local buying patterns. GeoSuite is built and operated in Italy and the Italian market is fully supported.
Can my agency use GeoSuite for multiple clients?
Yes. Agency workspaces support multiple client brands, separate audit histories per client, role-based access for team members and white-label PDF reports on agency plans. Each client lives in its own workspace and audit data does not cross between clients.
Will optimising for AI hurt my Google ranking?
No, and in practice the opposite is true. The signals AI engines use — clear factual content, structured data, authoritative outbound citations, well-defined brand entities — overlap heavily with Google's quality signals. Most GEO interventions are also improvements for classic SEO.
How can I get cited more often by ChatGPT, Gemini and Perplexity?
The reliable levers are: explicit factual statements your competitors don't make, schema.org structured data (Organization, Product, FAQPage, BlogPosting), presence on the third-party sources AI engines cite (Wikipedia, industry directories, comparison sites, review platforms), and clear brand-entity pages so the model can disambiguate you. GeoSuite's action plan ranks these levers per audit.
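As one concrete example, FAQPage markup is schema.org JSON-LD embedded in the page. The sketch below generates it in Python; the question text is illustrative and this is generic schema.org structure, not GeoSuite output:

```python
import json

def faq_jsonld(qa_pairs):
    """Build schema.org FAQPage JSON-LD from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in qa_pairs
        ],
    }

markup = faq_jsonld([
    ("What is GEO?",
     "GEO is the practice of optimising a brand's web presence so that "
     "generative AI engines cite and recommend it."),
])
# Embed in the page inside <script type="application/ld+json">…</script>
print(json.dumps(markup, indent=2))
```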
How long changes take to land
How long does it take for an AI visibility change to actually land?
It depends on the engine. Perplexity recrawls aggressively and can cite a new page within 2–7 days. Google indexes the page in 3–14 days but ranking and AI Overviews mentions take 4–12 weeks. ChatGPT browse (live retrieval) starts citing within days; ChatGPT training-data mentions take months — they only land in the next training cut. Claude sits in between. There is no instant feedback loop: AI visibility moves over weeks, not hours.
What can I do to make crawlers find my updates faster?
Three things compress the timeline: submit the sitemap to Google Search Console and Bing Webmaster Tools right after deploying changes; keep `robots.txt` open to AI crawlers (GPTBot, PerplexityBot, ClaudeBot, Google-Extended); link the new pages from the home page or footer so crawlers reach them in one hop. With these three, expect Perplexity citations within ~1 week and Google indexing within ~2.
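A minimal `robots.txt` that keeps all four AI crawlers in could look like this (the domain is a placeholder):

```
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

Sitemap: https://example.com/sitemap.xml
```

Note that `Google-Extended` controls use of your content in Google's AI features, not regular Google Search indexing, which is governed by `Googlebot`.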
Why do I see results on Perplexity before Google?
Perplexity is grounded by design — it retrieves live web pages on every answer. New content reaches Perplexity through PerplexityBot in days, not weeks. Google operates on a slower indexing + ranking pipeline; even when the page is indexed, ranking on competitive queries takes weeks of trust-building. AI Overviews further filters to a small set of cited sources per query, so showing up there is the slowest of the bunch.
When should I re-run an audit to measure impact?
Two-week cadence during an active optimisation sprint: most Perplexity-side movement is visible within 7–14 days. Monthly cadence in steady-state monitoring. Re-running the next day rarely shows changes — and even when it does, they may be model-side variability (not your changes). GeoSuite's audit history view shows the delta between consecutive audits so the signal stays distinguishable from noise.
Practical, pricing and privacy
Is the data sent to OpenAI, Google or Perplexity used to train their models?
GeoSuite uses the providers' API endpoints, where the default policy is that prompt and response data are not used for model training. We do not send personally identifiable customer data — only the public brand name, domain and target market needed to generate the audit questions.
How is GeoSuite priced?
GeoSuite offers a free trial audit and paid plans for ongoing monitoring, agency multi-client use and e-commerce audits. Pricing and what each plan includes are shared on request: write to [email protected].