Kepten.AI samples ChatGPT, Claude, Perplexity, Gemini, and Copilot for the queries your customers actually ask — and tells you, by language and by model, where you show up and what they say.
Your buyers are increasingly asking ChatGPT, Claude, and Perplexity before search ever fires. The answer they get is now the top of your funnel — and if you can't measure it, you can't shape it.
You see traffic. You see conversions. You don't see what AI says about you to the prospects who never click through.
Runs continuously. Drift detection. Granular per-model, per-language, per-prompt scoring with confidence intervals.
Every (prompt × model × language) cell on the heatmap is a single number, blended from three signals that are each measured by a separate LLM extractor.
How often the model names you when asked the question — across repeated samples, not a one-off prompt. The presence floor.
Where in the response you appear. First in the list lands differently than buried at #8. Decays linearly with rank.
What the model says about you when it does mention you. Positive, neutral, or negative — extracted by a separate LLM and validated against a schema.
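The three signals above can be blended into a single cell score. Here is a minimal sketch, assuming a ten-slot linear position decay, a {-1, 0, 1} sentiment label from the extractor, and 0.4/0.3/0.3 weights — all illustrative assumptions, not Kepten.AI's published formula.

```python
from typing import Optional

def position_score(rank: Optional[int], max_rank: int = 10) -> float:
    """Linear decay with rank: 1.0 at #1, 0.0 at or beyond rank max_rank + 1."""
    if rank is None:  # brand absent from this sample
        return 0.0
    return max(0.0, (max_rank - (rank - 1)) / max_rank)

def blend_cell(samples: list) -> float:
    """Score one (prompt x model x language) cell from repeated samples.

    Each sample is a dict from the extractor:
      {"mentioned": bool, "rank": Optional[int], "sentiment": -1, 0, or 1}
    The 0.4 / 0.3 / 0.3 weights are illustrative assumptions.
    """
    n = len(samples)
    presence = sum(s["mentioned"] for s in samples) / n  # mention rate: the presence floor
    ranked = [position_score(s["rank"]) for s in samples if s["mentioned"]]
    position = sum(ranked) / len(ranked) if ranked else 0.0
    # Map sentiment from [-1, 1] to [0, 1] before averaging
    sents = [(s["sentiment"] + 1) / 2 for s in samples if s["mentioned"]]
    sentiment = sum(sents) / len(sents) if sents else 0.0
    return round(100 * (0.4 * presence + 0.3 * position + 0.3 * sentiment), 1)
```

Under this sketch, a cell where the brand is named first with positive sentiment in every sample scores 100; a cell where it never appears scores 0; intermittent or late mentions land in between.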
Borgo Santo Pietro, a Tuscan luxury hotel. 5 prompt intents × 3 languages × 5 models = 75 cells. Three patterns jump out before you've even read a row label.
GPT-4o averages 78 across the EN column, driven by editorial coverage in its training data. Perplexity averages 82 in IT, where live retrieval pulls in regional press.
Row 5 is dense green across every cell. Row 1 ("best Tuscan hotels") drops to 42. The brand wins when it is named directly; it loses on category discovery.
DE column mean of 51 vs. 76 for EN. There isn't enough German-language editorial in training data or live retrieval. That's the first-order intervention.
No tag installation, no SDK, no script on your site. Kepten.AI runs entirely on the model side — the same way your customers see you.
Brand, domain, category, target languages. We materialise a prompt bank tuned to that category and translate every prompt into each language.
Claude, GPT-4o, Perplexity, Gemini, and Copilot answer every prompt. We extract mentions, position, and sentiment from each response with a separate LLM extractor.
Cells colour from red (invisible) through yellow (mentioned but late) to green (named, ranked, well-described). Patterns by language and model jump out.
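The red-to-yellow-to-green mapping can be bucketed as in this sketch; the 40 and 70 thresholds are illustrative assumptions, not Kepten.AI's actual cut-offs.

```python
def cell_colour(score: float) -> str:
    """Bucket a 0-100 cell score into a heatmap colour (thresholds assumed)."""
    if score < 40:
        return "red"      # invisible: rarely or never mentioned
    if score < 70:
        return "yellow"   # mentioned, but late in the list or lukewarm
    return "green"        # named, ranked, well-described
```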
The closer your category is to "ask an AI before buying" — luxury travel, B2B SaaS, financial services, professional services — the more this matters.
Quantify the new top of funnel. Spot mention drops before they show up in pipeline. Defend share of voice in the answer.
Track every client's AI visibility on one dashboard. Per-market, per-model. White-labelled reporting for client renewals.
Prove the impact of placements. Measure the diffusion from a press hit through to the answer ChatGPT gives on your category.
A live heatmap for Borgo Santo Pietro — a Tuscan luxury hotel. 9 prompts × 3 languages × 5 models. 135 cells, fully populated.