What this is
A modal that shows the full AI response for one prompt run, including:
- The ranked mention list (you + competitors)
- Response text with highlights
- Citations used by the response
- Downloadable output
Why it matters
- It’s the clearest “ground truth” behind your metrics.
- It tells you who you’re competing with and why you’re winning/losing.
- Citations reveal which sources to earn, improve, or replace.
Where to find it
Analytics → Prompt Analytics → Analyze → click a response row
How it works
- Meridian displays run “Properties” (platform, date, region).
- “Mentions” shows a ranked list (e.g., #1 You, #2 competitor).
- Tabs:
  - Response: the model output
  - Shopping: shopping-style output (if supported)
  - Citations: sources used
How to use it
- Open a response in Analyze.
- Confirm properties (platform/date/region) match what you’re investigating.
- Scan the mentions list to see your rank and main competitors.
- Read the response and note:
  - What criteria it uses to rank brands/products
  - What it claims about you and competitors
- Switch to Citations to see the sources (domains/URLs).
- Click Download to export the response and metadata.
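Once you have a few downloaded exports, a quick script can tally which domains the response cited. This is a minimal sketch, assuming each export is JSON with a `citations` list of objects that have a `url` field; those names are assumptions for illustration, not Meridian's actual export schema.

```python
# Hypothetical sketch: count cited domains in a downloaded response export.
# The "citations"/"url" field names are assumed, not Meridian's real schema.
import json
from collections import Counter
from urllib.parse import urlparse

def citation_domains(export_json: str) -> Counter:
    """Return a Counter of domains cited in one response export."""
    data = json.loads(export_json)
    return Counter(
        urlparse(c["url"]).netloc for c in data.get("citations", [])
    )

# Stand-in export for demonstration (real files come from the Download button).
sample = json.dumps({
    "citations": [
        {"url": "https://example-review-site.com/best-tools"},
        {"url": "https://competitor.com/docs/features"},
        {"url": "https://example-review-site.com/roundup"},
    ]
})
print(citation_domains(sample).most_common(1))
# -> [('example-review-site.com', 2)]
```

Running this across many exports shows which sources dominate your prompt space, which feeds directly into the interpretation steps below.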
How to interpret results
- If you’re #1 but the cited sources are off-page (pages you don’t own) → protect those sources (don’t lose them) and add more.
- If a competitor is #1 and the cited sources are their owned docs → you need stronger owned, “sourceable” pages.
- If the response uses “trust” language and you aren’t cited → invest in authoritative citations and structured pages.
- If the response recommends products but not yours → fix product-level information architecture and comparisons.
- If the response’s citations don’t mention you anywhere → your priority is “get into the citation set,” not rewriting copy.
Common questions / troubleshooting
- “Why do citations differ from what I see in the Citations tab?” → The Citations tab aggregates many runs; this modal shows a single run.
- “Why is Shopping empty?” → Shopping may only appear for certain prompt types/platforms.