Most brand teams have typed their company name into ChatGPT at least once. They got an answer, decided it seemed fine or slightly concerning, and moved on.
That isn't an audit. That's a single data point from a single AI model on a single day. It tells you almost nothing useful — and it certainly doesn't tell you what your customers are seeing when they ask the questions that actually drive their purchasing decisions.
A real brand narrative audit is systematic. It covers multiple AI engines, multiple customer prompts, and produces a scored output you can act on. This framework gives you that.
Run through each step with your brand. By Step 3, you will almost certainly find something you didn't expect.
## Why a One-Time ChatGPT Check Isn't Enough
The instinct to "just check ChatGPT" makes sense as a starting point. But it fails in three ways that matter.
Sample size. AI responses are non-deterministic. The same prompt run twice can produce meaningfully different answers. A single check gives you one draw from a distribution. That's not a signal — it's noise.
Coverage. ChatGPT is one of four major AI engines your customers use. Gemini, Perplexity, and Claude each draw on different training data, different web indexing, and different synthesis methods. A brand can appear prominently on one platform and be entirely absent from another. You need data from all four to understand your actual AI footprint.
Change tracking. AI models update continuously. A characterization that looked positive last quarter may have shifted. New competitor content, new reviews, new press coverage — all of it feeds into how AI models describe your brand. A one-time check gives you no baseline to measure against and no ability to detect drift.
- A substantial share of AI search sessions end without a website visit (Semrush, 2025). Decisions happen inside the AI response.
- Brands cited in AI answers earn more organic clicks, and 91% more paid clicks (BrightEdge / Seer Interactive).
The audit framework below fixes all three problems. It gives you a representative sample, full-engine coverage, and a scored baseline you can track over time.
## Step 1: Define Your 10 Core Customer Prompts
*Build the prompt list that reflects real buyer intent*
The prompts you test determine everything. If you test the wrong questions, the audit tells you nothing about what's actually reaching your customers.
Start with the questions buyers ask before they've already chosen you. Not "what is [your brand name]?" — that's a brand awareness query. You want the high-intent, category-level questions that buyers use to evaluate options.
Pull from three sources:
- Your sales team. What questions do prospects ask in the first two conversations? These are the exact questions buyers are also asking AI.
- Your support team. What brings people to your product? What problem were they trying to solve when they found you?
- Your search console. Which queries are driving your highest-value organic traffic? These are proven intent signals.
Aim for 10 prompts that span three categories: category-level ("best [product category] for [use case]"), problem-level ("how do I [specific pain point]"), and comparison-level ("is [your brand] better than [competitor]").
Write the prompts as a real buyer would type them — conversational, specific, without jargon. If your buyers are asking "what's the fastest payroll software for a 50-person company," test that exact phrasing, not "top-rated payroll solutions."
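It helps to keep the prompt list as structured data from day one, so the same file feeds both the manual audit and any later automation. A minimal sketch in Python; the category labels and example prompts are illustrative, not prescribed by this framework:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AuditPrompt:
    text: str      # exact phrasing a real buyer would type
    category: str  # "category", "problem", or "comparison"

# Illustrative prompt set; replace with your own 10 buyer questions.
PROMPTS = [
    AuditPrompt("best payroll software for a 50-person company", "category"),
    AuditPrompt("how do I cut payroll processing from days to hours", "problem"),
    AuditPrompt("is AcmePay better than PayCo", "comparison"),
]

def coverage(prompts):
    """Count prompts per category, so gaps in the list itself are visible."""
    counts = {}
    for p in prompts:
        counts[p.category] = counts.get(p.category, 0) + 1
    return counts
```

A quick `coverage(PROMPTS)` check tells you whether your 10 prompts actually span all three categories before you start testing.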
## Step 2: Test Across All Four Major AI Engines
*Run every prompt through ChatGPT, Gemini, Perplexity, and Claude*
For each of your 10 prompts, run the query on all four platforms. Record the full response — don't just note whether your brand appeared. You need the characterization, the competitors mentioned, and the framing.
What to note for each response:
- Does your brand appear? Yes / No. If yes, where in the response — first recommendation, secondary mention, footnote, not at all?
- How is your brand described? Copy the exact language AI uses. Is it positive, neutral, hedged, or negative? Does it match your intended positioning?
- Which competitors appear? List every brand mentioned. Note which ones appear before you and which appear instead of you.
- Are there any factual errors? Wrong pricing, discontinued features, inaccurate claims about your product. These are fixable — but only if you find them.
Run each prompt at least twice on each engine. AI responses vary. Two runs give you a basic sense of consistency. If you get materially different answers across runs, note that — inconsistency itself is a signal worth tracking.
Do this manually first. You need to understand what you're looking at before you automate it. Once you know what the data looks like, platforms like Shensuo can run this at scale continuously.
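The fields above translate naturally into one flat record per prompt, engine, and run, which is what you would hand to a spreadsheet or a monitoring tool later. A hedged sketch; the field names and position labels are my own choices, not a fixed schema:

```python
from dataclasses import dataclass, field

@dataclass
class ResponseRecord:
    prompt: str
    engine: str               # "chatgpt", "gemini", "perplexity", "claude"
    run: int                  # 1, 2, ... (at least two runs per engine)
    brand_position: str       # "first", "secondary", "footnote", "absent"
    description: str          # exact language the AI used, copied verbatim
    competitors: list = field(default_factory=list)
    factual_errors: list = field(default_factory=list)

def appeared(record):
    return record.brand_position != "absent"

def displaced_by(record):
    """Competitors mentioned in a response where the brand itself is absent."""
    return record.competitors if not appeared(record) else []
```

Keeping the verbatim `description` text matters most: it is the raw material for the characterization scoring in the next step.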
## Step 3: Score Your Narrative
*Grade presence, characterization, and competitive position*
Raw responses only become actionable once you turn them into scores. Score three dimensions for each prompt on each engine:
| Dimension | What it measures | Green | Yellow | Red |
|---|---|---|---|---|
| Presence | Do you appear at all? | Featured | Mentioned | Absent |
| Characterization | How are you described? | Accurate + positive | Neutral or partial | Inaccurate or negative |
| Competitive position | Who else appears and how? | You lead or appear alone | You appear alongside peers | Competitors lead without you |
Score each prompt on each engine. Your aggregate scores will show you where you're winning (featured, accurate, leading), where you're vulnerable (present but losing to competitors), and where you're invisible (absent entirely).
Pay particular attention to presence on high-intent prompts. A category-level query like "best [your product] for [your buyer]" is a direct sales opportunity. If you're absent on that query across multiple engines, you have a measurable revenue gap — not a branding problem.
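The rubric table maps directly to code. A small sketch, assuming the value labels below (they paraphrase the table's cells) and a "worst color wins" aggregation, which is one reasonable choice rather than something this framework prescribes:

```python
# Rubric from the scoring table; anything unrecognized defaults to red.
RUBRIC = {
    "presence": {"featured": "green", "mentioned": "yellow", "absent": "red"},
    "characterization": {"accurate_positive": "green", "neutral": "yellow",
                         "inaccurate_or_negative": "red"},
    "competitive": {"leading": "green", "alongside_peers": "yellow",
                    "competitors_lead": "red"},
}

def score(dimension, value):
    """Translate an observed value into a green/yellow/red grade."""
    return RUBRIC[dimension].get(value, "red")

def worst(colors):
    """Aggregate dimension grades for a prompt: the worst color wins."""
    order = {"green": 0, "yellow": 1, "red": 2}
    return max(colors, key=lambda c: order[c])
```

With this, a prompt that is featured but mischaracterized still surfaces as red in the aggregate, which is usually what you want for triage.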
> Step 3 is where most businesses get the shock. They assumed they were mentioned. They weren't.
Absence is the hardest result to see coming. Most brands assume AI knows about them — they've been around for years, they have plenty of content, they've been reviewed in publications. But AI citation is not proportional to brand age or content volume. It reflects the specific sources AI trusts for specific queries. Your category may be dominated by a few high-authority sources that happen to favor your competitors.
## Step 4: Map the Gaps
*Identify which prompts are routing buyers to competitors*
Scoring gives you data. Gap mapping turns that data into a prioritized problem list.
For every prompt where you scored Red on presence or competitive position, identify:
- Which competitor(s) appear instead of you? This is your competitive displacement list — the brands currently capturing demand that should be coming to you.
- What sources does AI cite? Many AI engines surface citations or indicate what informed their response. Those sources are your content targets — the publications, review sites, and directories you need to be present in.
- Is the gap consistent across engines? A gap on one engine is a signal. A gap across all four engines is a structural problem in how your brand is represented in the sources AI trusts.
- What is the intent level of the prompt? A gap on a high-intent comparison query ("is [your brand] worth it?") is more urgent than a gap on an awareness query. Prioritize by revenue proximity.
Build a simple table: prompt, engine, who appears instead, estimated intent level, gap type (absent, displaced, mischaracterized). That table is your action plan input.
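That table can be produced mechanically from scored rows: keep every red-scored row, classify the gap, and sort by how close the prompt is to revenue. A self-contained sketch; the intent ranking and label names are assumptions for illustration:

```python
# Lower rank = closer to a purchase decision.
INTENT_RANK = {"comparison": 0, "category": 1, "problem": 2}

def gap_type(presence, characterization, competitive):
    """Classify a row's gap; labels follow the absent/displaced/mischaracterized taxonomy."""
    if presence == "red":
        return "absent"
    if competitive == "red":
        return "displaced"
    if characterization == "red":
        return "mischaracterized"
    return None  # no gap on this row

def build_gap_table(rows):
    """rows: dicts with prompt, engine, intent, and the three color scores.
    Returns only the gap rows, sorted by revenue proximity."""
    gaps = []
    for r in rows:
        g = gap_type(r["presence"], r["characterization"], r["competitive"])
        if g:
            gaps.append({**r, "gap_type": g})
    return sorted(gaps, key=lambda r: INTENT_RANK[r["intent"]])
```

The output is the prioritized problem list Step 5 works from: comparison-query gaps first, awareness-level gaps last.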
## Step 5: Prioritize and Act
*Content fixes, citation-building, and monitoring setup*
Audit findings map to three types of action. Work through them in priority order — highest-intent gaps first.
Content fixes (weeks 1–4). If AI is mischaracterizing your brand or citing outdated information, the fastest fix is to create authoritative content that clearly states the correct information and is structured for AI synthesis. That means detailed, fact-rich pages that directly answer the questions your audit surfaced — not blog posts optimized for human reading patterns. AI is looking for specificity and source authority.
Citation-building (weeks 4–12). Presence in AI responses correlates strongly with citation in trusted sources: industry publications, roundup reviews, comparison sites, and analyst reports. For each gap prompt, identify the specific sources AI cited for your competitors. Then build a plan to earn presence in those same sources — through PR, contributed content, product reviews, or third-party validation campaigns.
Monitoring setup (ongoing). The audit you've just run is a baseline. The AI landscape shifts weekly. New model updates, new competitor content, and new web citations all change what AI says about your brand. Manual audits every quarter are a minimum. Automated monitoring — running your core prompts across all engines on a regular cadence — is the only way to catch drift before it costs you customers.
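Once a scored baseline exists, drift detection is mechanically simple: compare each (prompt, engine) pair's current color to the baseline's and flag downgrades. A minimal sketch; how you store scores and route alerts is up to you:

```python
ORDER = {"green": 0, "yellow": 1, "red": 2}

def detect_drift(baseline, current):
    """baseline, current: {(prompt, engine): color}.
    Returns the pairs whose score got worse since the baseline was taken."""
    drifts = []
    for key, new_color in current.items():
        old_color = baseline.get(key)
        if old_color and ORDER[new_color] > ORDER[old_color]:
            drifts.append((key, old_color, new_color))
    return drifts
```

Run against a fresh audit each cycle, this turns "what is AI saying about us now?" into a short list of specific prompt-and-engine pairs that need attention.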
> A brand narrative audit isn't a one-time exercise. It's a baseline. The AI landscape shifts every week.
## How Shensuo Automates All of This at Scale
The manual version of this audit takes a skilled operator two to three days the first time through. That's acceptable for establishing a baseline. It's not acceptable as an ongoing operating practice.
Shensuo runs this entire process continuously. You define your core prompt set. Shensuo tests them across ChatGPT, Gemini, Perplexity, and Claude on a rolling basis, scores each response across presence, characterization, and competitive position, and surfaces changes as they happen.
When a competitor starts appearing on a prompt where you used to lead, you know within days, not quarters. When AI characterization drifts from your intended positioning, you see the specific language that changed and the sources driving it. When a new prompt emerges in your category that you haven't covered, the data tells you before your competitors act on it.
### What Shensuo Tracks Automatically
- Narrative presence score — your mention rate across all four major AI engines, tracked over time
- Characterization analysis — the specific language AI uses to describe your brand, flagged when it deviates from your defined positioning
- Competitive displacement alerts — which competitors are appearing on your highest-value prompts
- Citation sources — what third-party content is driving AI's opinions about your brand
- Drift detection — week-over-week changes in how each AI engine describes you, so you catch negative shifts early
The businesses that treat AI narrative as a managed channel — not a background condition — are the ones that will capture AI-driven customer acquisition before their competitors understand that it's a channel at all.
Start with the manual audit above. Use it to understand what you're looking at. Then let the monitoring run continuously so you never have to wonder what AI is saying about your brand.
Shensuo — Brand Narrative Intelligence. Know what AI is saying about your business.