The 6 core surfaces that matter for AI visibility
Most visibility improvements come from six surfaces. You do not need to work on all six at once, but it helps to understand what each one controls.

1) Website backend code (crawlability + machine readability)
This is everything that determines whether AI crawlers and retrieval systems can access and parse your pages reliably:

- robots.txt and bot protection (see the sketch at the end of this section)
- sitemaps and internal linking
- canonical/noindex behavior
- performance and stability
- structured data (schema) and metadata
How Meridian helps

- Website → Pages flags technical issues and generates implementation templates (including schema drafts).
- Website → Crawlers helps validate that AI crawlers are actually visiting your site (currently via Vercel).
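As a concrete example of the robots.txt item above, here is a minimal sketch that admits common AI crawlers. The user-agent names (GPTBot, ClaudeBot, PerplexityBot) are real public crawlers, but which ones you allow depends on which platforms matter to you; the domain and the /admin/ path are placeholders:

```
# robots.txt — explicitly admit the AI crawlers you care about.
# A crawler obeys only its most specific User-agent group, so repeat
# any Disallow rules inside each named group.
User-agent: GPTBot
Disallow: /admin/
Allow: /

User-agent: ClaudeBot
Disallow: /admin/
Allow: /

User-agent: PerplexityBot
Disallow: /admin/
Allow: /

# Everyone else
User-agent: *
Disallow: /admin/

Sitemap: https://www.example.com/sitemap.xml
```

Note that robots.txt only governs crawlers that honor it. If a CDN or bot-protection layer is challenging these user agents, fixing robots.txt alone will not restore access, which is one reason validating real crawler visits in Website → Crawlers matters.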
2) Existing front-end content (clarity + criteria coverage)
This is the content you already have on your site, and whether it answers the questions AI is trying to answer:

- clear headings that map to user intent
- direct answers early, so content is easy to summarize (sketched below)
- decision criteria and trade-offs
- proof and specifics (numbers, policies, comparisons)
- FAQs that match real buyer questions
How Meridian helps

- Website → Pages identifies content gaps and generates copy-ready improvements (FAQ drafts, templates, recommended placement).
- Analytics → Sentiment helps you see which objections you need to address with evidence on the page.
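To make “clear headings” and “direct answers early” concrete, here is an illustrative HTML skeleton. Everything in it (titles, copy, URLs, product names) is a placeholder, not a Meridian output: the H1 matches the intent, the first paragraph gives a summarizable answer, and the head handles canonical behavior explicitly:

```html
<head>
  <title>Best project management tools for small teams</title>
  <!-- One canonical URL so retrieval systems consolidate signals on it -->
  <link rel="canonical" href="https://www.example.com/best-project-tools" />
  <!-- No noindex here; only add it on pages you truly want excluded -->
</head>
<body>
  <h1>Best project management tools for small teams</h1>
  <!-- Direct answer first: summarizable without reading the full page -->
  <p>For teams under ten people, Tool A is the strongest all-rounder;
     Tool B wins if budget is the main constraint.</p>

  <h2>How we compared them</h2>
  <!-- Decision criteria, trade-offs, and proof follow under clear headings -->
</body>
```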
3) Net-new content (coverage for prompts you want to win)
Net-new content is how you expand coverage into queries where you currently have low visibility:

- “best X” prompts
- comparisons and alternatives
- use-case pages (“X for beginners,” “X for Y need”)
- category guides and source pages
How Meridian helps

- Opportunities suggests what to create next based on content gaps and citation ecosystems.
- Content takes an opportunity → creates a brief → generates an article → helps you refine and publish.
- Brand Kit makes generated content accurate, specific, and on-brand.
4) Off-page editorial and “owned” third-party sources (trust transfer)
Many categories are shaped by third-party editorial sites and “best X” lists. AI often relies on these sources as evidence. If you are missing from them, you can struggle to rank consistently, even with strong owned content. This includes:

- editorial lists and reviews
- editorial articles
- industry associations
- publications and comparison sites
How Meridian helps

- Off-page → Editorial is built from the most-cited editorial sources for your prompts, filtered to remove:
  - sites you’re already included on
  - competitor domains
- Outreach updates weekly so you always see the current citation ecosystem.
5) Off-page social / UGC (language and narrative in community spaces)
In some industries (often DTC and consumer purchase prompts), AI relies heavily on UGC sources like Reddit threads, forums, and social discussion. In others, social sources barely matter. This surface is less about links and more about trusted language existing in places AI already uses as sources:

- how people describe your category
- what objections come up
- what differentiators matter
- what “best for” positioning is repeated
How Meridian helps

- Off-page → Social surfaces the most-cited social/community pages for your prompt set.
- Teams use Social in two ways:
  - participate in existing threads that AI is actively trusting for your prompts
  - use those threads as inspiration for what to post next (and where) on your own social channels
In many cases, links matter less than the language itself. The goal is to get accurate, differentiated, helpful language about your brand and products into places AI already trusts.
6) Directories (structured listing sources)
Some industries rely heavily on directories as evidence (industry-agnostic and industry-specific). These often show up in AI answers because directory listings are structured and easy to summarize.

How Meridian helps

- Off-page → Directories surfaces the highest-impact directories for your industry that are being cited for your prompts, where you are not currently included (or not being cited).
Where review websites fit
Review sites often sit in between “editorial” and “UGC,” and they can be extremely influential in categories where buying decisions are comparison-heavy. Examples include:

- “best X” review publishers
- category-specific review sites
- marketplaces and directories
- community review platforms
Where review sites show up in Meridian:

- Analytics → Citations (right panel / driver sources)
- Off-page → Outreach targets

How to act on them:

- If the site is actively cited for your prompts, prioritize inclusion and accuracy there.
- Focus on being described with the criteria that matter for your prompts (not generic hype).
- Provide a strong, citeable source page on your site that supports any claims.
How to use Meridian to improve visibility (end-to-end)
This is a practical sequence that works well for most teams.

Step 1 — Make sure you’re tracking the right prompts
Go to Analytics → Prompts. Prompts should represent the queries you want to show up more for in AI search. It is normal (and valuable) to track higher-funnel prompts where you currently have low visibility. Avoid building a prompt set that is mostly branded or too company-specific, because it can inflate visibility without measuring discovery.

Step 2 — Diagnose one important prompt
In Analytics → Prompts, pick one “money prompt” and open Analyze. Inspect:

- who outranked you,
- what criteria the answer used,
- and what sources were cited.
This usually points to a bottleneck in one of the surfaces above:

- site readiness,
- content structure,
- net-new coverage,
- editorial inclusion,
- or social/UGC narrative.
Step 3 — Use Citations to identify the evidence ecosystem
Go to Analytics → Citations. Use it to understand:

- which sources AI trusts for your prompt set (the corpus you must compete with),
- and where your domain appears (or is missing).
That tells you whether the next step is:

- Website → Pages fixes,
- Off-page Outreach/Engage,
- or Content creation.
Step 4 — Execute on the right surface
If the bottleneck is website readiness
Go to Website → Pages and fix high-leverage source pages:

- add FAQs and schema (a markup sketch follows this list),
- improve clarity and structure,
- fix metadata and structured data issues,
- use Meridian’s copy-ready drafts where available.
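If you are hand-writing the schema rather than using Meridian’s drafts, FAQ markup is standard schema.org FAQPage JSON-LD. The company name and Q&A copy below are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Does Example Co charge setup fees?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "No. Example Co has no setup fees; plans are billed monthly."
      }
    },
    {
      "@type": "Question",
      "name": "Can I migrate data from another tool?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes. CSV import is supported on all plans."
      }
    }
  ]
}
</script>
```

Keep the markup text identical to the visible on-page answer; structured data that does not match the rendered content tends to be ignored.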
If the bottleneck is coverage (you’re not mentioned)
Go to Opportunities and generate a brief. Then go to Content and publish a page that matches the prompt format:

- listicle for “best X”
- comparison for “X vs Y”
- guide for “how to choose X”
If the bottleneck is citations (you’re not in the sources)
Go to Off-page → Outreach. Pick the highest-cited editorial targets for your prompt set and pursue inclusion. If social is a major driver in your category, use Off-page → Engage to participate in the threads AI is already citing.

If the bottleneck is narrative (you’re mentioned but not preferred)
Go to Analytics → Sentiment. Find the dimension that’s driving negativity (fees, trust, reliability, support). Then fix it where AI can cite it:

- update source pages (Website → Pages)
- add proof and clarity
- influence off-page sources if the narrative is driven externally
Step 5 — Measure impact week over week
Visibility improvements are rarely immediate. Measure over a full period (usually 7 days) and compare to Prev. Period. Watch:

- Visibility and Prominence on target prompts (Analytics → Prompts)
- Owned citations increasing (Analytics → Citations)
- Narrative improving in key dimensions (Analytics → Sentiment)
Getting your brand mentioned in more places does not, by itself, guarantee visibility increases. Many factors influence outcomes: prompt coverage, citation ecosystems, page structure, and platform-specific behavior. Meridian helps you identify what matters most and focus effort where it’s likely to compound.
A simple starting plan (one week)
If you want a safe default plan:

- Pick 5–10 target prompts.
- Fix 1–2 key source pages in Website → Pages (FAQ + schema is a strong starting point).
- Publish 1 net-new piece from Opportunities/Content aligned to those prompts.
- Do a small Outreach or Engage cadence (depending on what Citations shows is driving your category).
- Measure next week and repeat.