What this is
A deep-dive report for one or more prompts, showing how you performed across platforms and what the responses contained.
Why it matters
- You can’t improve your rank without seeing your position and who outranks you.
- Platform breakdown shows where you’re strong/weak.
- Response tables show the exact responses behind the metrics.
Where to find it
Analytics → Prompt Analytics → Prompts → Analyze
How it works
- Meridian aggregates results for the selected prompts.
- It shows:
- Platform Breakdown (visibility by platform)
- Brand Visibility trend lines over time
- A response table with Mentioned, Position, All mentions, Platform, and Region columns
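Meridian computes the Platform Breakdown for you, but if you export the response table, the metric can be approximated yourself. A minimal sketch, assuming hypothetical field names that mirror the table columns (the actual export format may differ):

```python
# Hypothetical exported response rows, mirroring the table columns
# (Mentioned, Position, Platform, Region). Data is illustrative only.
rows = [
    {"platform": "ChatGPT", "mentioned": True,  "position": 2,    "region": "US"},
    {"platform": "ChatGPT", "mentioned": False, "position": None, "region": "US"},
    {"platform": "Gemini",  "mentioned": True,  "position": 1,    "region": "EU"},
    {"platform": "Gemini",  "mentioned": True,  "position": 4,    "region": "US"},
]

def platform_breakdown(rows):
    """Share of responses that mention the brand, per platform."""
    totals, hits = {}, {}
    for r in rows:
        p = r["platform"]
        totals[p] = totals.get(p, 0) + 1
        if r["mentioned"]:
            hits[p] = hits.get(p, 0) + 1
    return {p: hits.get(p, 0) / totals[p] for p in totals}

print(platform_breakdown(rows))  # e.g. {'ChatGPT': 0.5, 'Gemini': 1.0}
```

A visibility share like this per platform is what the Platform Breakdown chart surfaces at a glance.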
How to use it
- From Prompts, click Analyze on a prompt (or select multiple prompts).
- Review Platform Breakdown to see which platforms are performing best/worst.
- Review Brand Visibility trend to spot changes over time.
- Scroll to the response table:
- Find rows where you are Not mentioned
- Find rows where you’re mentioned but hold a low Position
- Click into a row / response to open Mention details (modal).
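The two table scans above (not mentioned, or mentioned but ranked low) can be sketched as a single filter over exported rows. Field names and the Position cutoff are assumptions for illustration:

```python
# Hypothetical exported rows; "position" is None when not mentioned.
rows = [
    {"prompt": "best crm tools", "platform": "ChatGPT", "mentioned": False, "position": None},
    {"prompt": "best crm tools", "platform": "Gemini",  "mentioned": True,  "position": 5},
    {"prompt": "crm pricing",    "platform": "ChatGPT", "mentioned": True,  "position": 1},
]

def needs_attention(row, max_position=3):
    """Flag rows where you're absent, or mentioned below the cutoff."""
    return (not row["mentioned"]) or row["position"] > max_position

flagged = [r for r in rows if needs_attention(r)]
# flagged contains the Not-mentioned row and the Position #5 row
```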
How to interpret results
- If Gemini is high but ChatGPT is low → optimize toward ChatGPT’s citations and formats (often different sources).
- If the trend drops on a specific date → look for citation shifts or competitor changes around that time.
- If you’re “Mentioned = Yes” but Position is #4+ → you’re included but not preferred → improve credibility + comparisons.
- If “All mentions” shows many competitors and you’re absent → you’re out of the consideration set → build targeted content + off-page mentions.
- If results differ by region → localize content (regulatory, pricing, availability) and add region-specific proof.
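The mention-related rules above amount to a small decision table. A sketch that encodes them as one function, with illustrative thresholds and a hypothetical competitor-mention count as input:

```python
def suggest_action(mentioned, position, competitor_mentions):
    """Map a response-table row to a next step, following the
    interpretation rules above. Thresholds are illustrative."""
    if not mentioned and competitor_mentions > 0:
        # Out of the consideration set entirely.
        return "build targeted content + off-page mentions"
    if mentioned and position is not None and position >= 4:
        # Included but not preferred.
        return "improve credibility + comparisons"
    return "maintain"

print(suggest_action(False, None, 5))  # -> build targeted content + off-page mentions
```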
Common questions / troubleshooting
- “Why does it say ‘Analyzing 1 prompt’?” → only one prompt is selected; select more to compare across prompts.
- “Why do I see ‘No mentions’?” → the model didn’t mention you in that run; check the citations and competitor list for that response.