What this is
A single source of truth for how Meridian labels and interprets metrics.
Why it matters
- Prevents “we’re arguing about definitions” during reporting.
- Makes KPI movement actionable (not confusing).
- Ensures teams interpret results consistently.
Where to find it
Reference → Metrics glossary
Core metrics
Visibility (%)
How often you are present in AI answers across your tracked prompts.
- Higher is better.
- Visibility can increase even if rank worsens (you’re included more often, but lower).
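As a minimal sketch, Visibility reduces to a simple ratio: the share of tracked prompts whose responses mention you at all. The record shape and field names here are hypothetical illustrations, not Meridian's actual data model:

```python
def visibility(responses: list[dict]) -> float:
    """Percent of tracked-prompt responses that mention the brand.

    responses: one record per tracked prompt, each with a boolean
    'mentioned' flag (assumed shape for illustration).
    """
    if not responses:
        return 0.0
    hits = sum(1 for r in responses if r["mentioned"])
    return 100.0 * hits / len(responses)


# Mentioned in 1 of 2 tracked prompts -> 50.0
print(visibility([{"mentioned": True}, {"mentioned": False}]))
```

Note that this counts presence only, which is why Visibility can rise while rank falls: more responses include you, but each mention may sit lower in the list.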
Citation Rate (%)
How often answers that include your brand/product also include citations (sources).
- Higher usually means more trust and more stable rankings.
- Low citation rate often means you’re “named” but not “sourced.”
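The denominator matters here: Citation Rate is computed over the responses that already mention you, not over all tracked prompts. A sketch under the same hypothetical record shape, with an assumed 'citations' list per response:

```python
def citation_rate(responses: list[dict]) -> float:
    """Percent of brand-mentioning responses that also cite a source.

    Each record is assumed to carry a boolean 'mentioned' flag and a
    'citations' list of source URLs (illustrative shape only).
    """
    mentioned = [r for r in responses if r["mentioned"]]
    if not mentioned:
        return 0.0
    cited = sum(1 for r in mentioned if r["citations"])
    return 100.0 * cited / len(mentioned)


# Two mentions, one backed by a source -> 50.0 ("named" half the time,
# "sourced" only half of that)
print(citation_rate([
    {"mentioned": True, "citations": ["https://example.com/review"]},
    {"mentioned": True, "citations": []},
    {"mentioned": False, "citations": []},
]))
```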
Sentiment (%)
A composite score of how positively AI describes your brand across dimensions.
- Higher is better.
- Dimension scores (e.g., Fees) may matter more than the overall score for ranking.
Prominence (rank)
Average position when you are mentioned (lower number is better).
- Example: #3.8 means you’re typically around 3rd–4th place.
- Prominence can worsen even if visibility stays flat.
Prompt-level metrics
Mentioned (Yes/No)
Whether your brand/product appears in a specific response.
Position (#)
Your rank in the list of mentions for that response (e.g., #1, #2).
Share of Voice (%)
How much of the “recommendation space” you own relative to competitors across tracked prompts.
Citations breakdown
- Owned: your domain(s)
- Off-page: third-party editorial sources
- Competitor: competitor domains
- Social: forums/social platforms
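Share of Voice and the citations breakdown can both be sketched as simple counting over tracked responses. The domain sets and brand names below are placeholders, not real classifications:

```python
# Placeholder domain sets; in practice these would come from your
# tracked-competitor and owned-domain configuration.
OWNED = {"yourbrand.com"}
COMPETITOR = {"rival.com"}
SOCIAL = {"reddit.com", "x.com"}


def classify_citation(domain: str) -> str:
    """Bucket a cited domain as owned / competitor / social / off-page."""
    if domain in OWNED:
        return "owned"
    if domain in COMPETITOR:
        return "competitor"
    if domain in SOCIAL:
        return "social"
    return "off-page"  # third-party editorial by default


def share_of_voice(mentions_per_prompt: list[list[str]],
                   brand: str = "YourBrand") -> float:
    """Your mentions as a percent of all brand mentions across prompts."""
    all_mentions = [b for resp in mentions_per_prompt for b in resp]
    if not all_mentions:
        return 0.0
    return 100.0 * all_mentions.count(brand) / len(all_mentions)


# 1 of 4 total mentions across two prompts -> 25.0
print(share_of_voice([["YourBrand", "Rival"], ["Rival", "Other"]]))
print(classify_citation("reddit.com"))  # social
```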
How to interpret results (quick rules)
- If Visibility is high but Prominence is poor → you’re present but not preferred → strengthen proof + structure.
- If Citation Rate is low → trust gap → improve citeability (owned pages + off-page mentions).
- If sentiment drops in one dimension → narrative gap → add explicit factual sections and FAQs.
- If one platform differs → platform ecosystem mismatch → optimize by platform.
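The rules above can be read as a small decision table. The thresholds in this sketch (70% visibility, rank 3, 30% citation rate) are illustrative assumptions, not Meridian defaults:

```python
def diagnose(metrics: dict[str, float],
             vis_floor: float = 70.0,
             rank_ceiling: float = 3.0,
             cite_floor: float = 30.0) -> list[str]:
    """Map metric patterns to the interpretation rules above.

    Thresholds are hypothetical; tune them to your own baselines.
    """
    actions = []
    if metrics["visibility"] >= vis_floor and metrics["prominence"] > rank_ceiling:
        actions.append("present but not preferred: strengthen proof + structure")
    if metrics["citation_rate"] < cite_floor:
        actions.append("trust gap: improve citeability (owned pages + off-page mentions)")
    return actions


# High visibility, weak rank, thin citations -> both rules fire
for action in diagnose({"visibility": 82.0, "prominence": 4.2, "citation_rate": 18.0}):
    print(action)
```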