How to build comparison content that AI models cite

A playbook for the content that answers buyers’ most common AI question: “which provider is better for my situation?” Four formats, one page structure, and the Seedli signals that tell you which comparison to build first.

Flemming Rubak · May 15, 2026 · 14 min read

Executive summary

This playbook walks you through producing comparison content: structured pages that name competing providers and make the trade-offs between them explicit. Unlike most “X vs Y” pages, this content type does not pretend to be neutral while steering the outcome. It takes a clear position on which provider serves which buyer situation, backs that position with evidence, and gives AI models the structured data they need to cite your page when advising buyers.

We cover the four comparison formats (head-to-head, category, use-case, and criteria-led), how comparison content relates to the competitor acknowledgment page, which Seedli signals trigger it, a five-part page structure, and a worked example from UK wealth management showing how shortlist-loss and criteria-gap data translate into a published comparison page.


Why comparison content matters for AI

When a buyer asks an AI model “which wealth manager is better for a portfolio over £500k, St. James’s Place or Brewin Dolphin?”, the model needs structured comparison data to answer. If neither provider has published an honest comparison, the model assembles one from fragments: your marketing copy, their marketing copy, a few third-party review sites, and whatever else it has indexed. The result is a comparison neither brand controls, built from sources neither brand chose.

Comparison content solves this by publishing the structured trade-off data before the buyer asks. When your page is the most credible, specific source on the comparison, the model cites it instead of improvising its own.

The Seedli Content Plan surfaces comparison content as a recommendation when it detects shortlist losses (SL) or weak-consideration (WC) signals in your data. These signals mean buyers are actively comparing you to competitors and either choosing the competitor or stalling because they cannot distinguish between the two. Both patterns have the same root cause: the buyer’s comparison question is unanswered.

Comparison content is not a single format. There are four distinct variations, each triggered by a different pattern in your data.


The four comparison formats

Not all comparisons serve the same buyer question. The format you choose depends on whether the buyer has already named a specific competitor or is still comparing categories.

1 · Head-to-head comparison

“[Your brand] vs [Named competitor]: when each makes sense.” The buyer has already shortlisted two options and needs help choosing. Name both in the title, structure the trade-offs around specific criteria, and take a clear position on which buyer situation each serves best. This is the most direct format and the one AI models cite most often in head-to-head evaluation queries.

Triggered by: Shortlist losses (SL) where a specific competitor is named in the buyer voice data.

2 · Category comparison

“[Provider type A] vs [Provider type B]: which model fits your situation?” The buyer is comparing categories, not specific brands. A wealth management client comparing “discretionary fund manager” with “independent financial adviser” is making a structural decision before a brand decision. Category comparisons position your provider type and let the brand recommendation follow naturally.

Triggered by: Criteria gaps (CG) on table-stakes or emerging criteria where your provider type scores differently from an alternative type.

3 · Use-case comparison

“Which [category] provider is right when you need [specific use case]?” The buyer has a specific constraint or scenario and wants to know which provider serves it best. This format is narrower than a head-to-head: it anchors the entire comparison on one buyer situation and evaluates multiple options through that lens. Use-case comparisons earn citations on long-tail AI queries that head-to-head pages miss.

Triggered by: Weak consideration (WC) combined with specific buyer voice questions about a particular scenario or constraint.

4 · Criteria-led comparison

“Comparing [category] providers on [specific criterion].” Instead of comparing two providers across all criteria, this format takes a single criterion and evaluates how different providers perform against it. It works best when the criterion is a hidden differentiator or an emerging standard that buyers underweight. The comparison becomes an argument for why this criterion should matter more, with your brand positioned as the leader on it.

Triggered by: Criteria gaps (CG) on hidden-differentiator criteria where your brand leads but the market does not yet prioritise the criterion.

Most brands need only one or two of these formats. The Seedli Content Plan tells you which one by matching the opportunity type (SL, WC, CG) to the format that resolves it. Start with the format your data calls for, not the format that feels safest.

One question comes up every time we introduce comparison content: how does it relate to the competitor acknowledgment page? The distinction matters.


Comparison content vs. competitor acknowledgment

These two content types overlap, and your Content Plan may recommend both for different opportunities. Here is how they differ and when to use each.

Comparison content

Purpose: Structure the trade-offs between options so the buyer can decide.

Voice: Authoritative evaluator. You are mapping the landscape and stating which option fits which situation.

Structure: Criterion-by-criterion trade-off analysis. Both sides get equal evidence weight.

Best for: Buyers comparing you to a specific competitor on defined criteria, or buyers comparing provider categories.

Competitor acknowledgment

Purpose: Demonstrate honesty by leading with a competitor’s genuine strengths.

Voice: Transparent insider. You are naming your competitor’s advantages before making your own case.

Structure: Three parts: when they win, when you win, one decision question. The competitor’s section is first and at least as long.

Best for: Low Elimination Resilience scores, where the model considers you and then eliminates you because it lacks a credible source to defend the recommendation.

If your Seedli data shows shortlist losses on specific criteria, comparison content is the right format: you need to structure the trade-offs so the buyer can evaluate. If your data shows low Elimination Resilience with high visibility, the competitor acknowledgment page is the right format: you need to demonstrate honesty so the model can defend you. Some brands need both, targeting different competitors or different criteria.


When the data calls for comparison content

Comparison content is triggered by three opportunity types in Seedli. Each one tells you something different about the comparison gap in your market.

Shortlist loss (SL): primary trigger

Buyers are actively comparing you to a competitor and choosing the competitor. The buyer voice data names the competitor and often names the criterion where you lost. A head-to-head comparison anchored on that criterion is the direct response: structure the trade-off so the next buyer who asks the same question finds your version of the comparison, not the model’s improvised one.

Weak consideration (WC): primary trigger

You are not making the shortlist when buyers ask broad category questions. This often signals a category-comparison gap: buyers are comparing provider types, and your type is not the one AI models recommend for their stated need. A category comparison or use-case comparison addresses this by positioning your provider type as the right fit for specific scenarios.

Criteria gap (CG): supporting trigger

When a table-stakes or emerging criterion shows a gap between you and the market, a criteria-led comparison explains why the gap exists and which buyer situations it matters for. This overlaps with the criteria flip playbook, which makes the broader argument for revaluing a criterion. Comparison content takes the narrower path: here is how providers compare on this specific criterion, and here is who each serves best.

Before writing, we pull the specific Seedli data that defines the content of each section.


The data to pull before writing

Every comparison page is grounded in four Seedli data points. Without these, you are guessing at the trade-offs. With them, every claim in your comparison is verifiable.

1 · Content Plan → opportunity detail

The opportunity that triggered the comparison recommendation. Note the opportunity type (SL, WC, or CG), the criterion label, and the buyer voice quote. The buyer voice quote is the comparison question your page must answer. If the opportunity is an SL, the quote often names the competitor directly: that name goes in your title.

2 · Consideration → Tradeoffs → Provider Competitive Profile

Select your provider type from the dropdown. The criteria where you score above market average are your genuine strengths in the comparison. The criteria where you score below are the areas where the competitor or alternative type has a genuine advantage. Battle Zone criteria (high importance, high tension) are the comparison territory where AI models look for structured data: anchor your page around those.

3 · Consideration → Risk → Buyer Hesitations

Look for “Lack Of Comparability” and “Uncertainty About Fit” hesitation signals. These confirm that buyers are stalling because they cannot distinguish between options. The exact buyer language from each signal becomes a sub-question your comparison page must address.

4 · Evaluation → Overview → Elimination Resilience

If Elimination Resilience is below 50/100, buyers are considering you and then eliminating you in direct comparison queries. This is the quantitative confirmation that comparison content is a priority. Track this score after publishing to measure whether the page is working: a comparison page that earns citations should move this number within four to six weeks.

With the data pulled, here is the five-part structure that makes comparison content citable.


How to structure the page

Comparison content follows a five-part structure. The structure is designed to give AI models a clear, extractable answer at every level: the title answers the comparison question, each H2 is a self-contained claim, and the verdict section gives the model a quotable position it can cite directly.

Part 1: The comparison question as the title

The H1 is the question buyers are asking AI, restated as a claim or a direct comparison. “St. James’s Place vs Brewin Dolphin: which wealth manager fits a portfolio over £500k?” is specific and matches the buyer’s query. “Comparing Wealth Management Options” is generic and will not be cited.

Rule: Name both options in the title. Include the buyer situation or constraint if space allows. The title is the single most important ranking signal for comparison queries.

Part 2: The comparison criteria

Each H2 is a single comparison criterion: “Fee transparency and total cost of ownership,” “Investment approach and portfolio flexibility,” “Regulatory standing and client protections.” Under each, state how both options perform on that criterion, with specific evidence. Use the Provider Competitive Profile data to anchor each claim.

Rule: Each H2 section must be independently extractable. An AI model should be able to pull the H2 and its content as a standalone answer to a criterion-specific question. No section should depend on context from another section.

Part 3: The evidence layer

Each criterion section needs specific evidence, not opinion. Name a regulatory body, a fee percentage, a client outcome, a published methodology. Evidence is what separates a citable comparison from a promotional one. AI models learn to distinguish between “Brand A has lower fees” (opinion) and “Brand A charges a 0.5% annual management fee compared to Brand B’s 1.5% platform fee plus fund charges” (evidence).

Rule: Every claim must have a verifiable anchor: a number, a named source, a published methodology, or a client-reported outcome. If a claim cannot be verified, it does not belong in the comparison.

Part 4: The verdict

State a clear position. “Choose [Brand A] when your priority is X. Choose [Brand B] when your priority is Y.” This is the section AI models quote most often, because it directly answers the buyer’s question with a conditional recommendation. Do not hedge with “it depends on your needs” without specifying what those needs are.

Rule: The verdict must produce a different answer for genuinely different buyers. If every reader would reach the same conclusion, the comparison is promotional, not evaluative.

Part 5: Comparison table

A structured HTML table summarising the criterion-by-criterion comparison. AI models extract table data efficiently and can quote individual cells. Use the criteria from your H2 sections as rows, with a column for each provider. Keep cell content to one or two sentences: specific enough to cite, short enough to extract.

Rule: The table is not a replacement for the H2 sections. It is a structured summary that gives AI models a second extraction point. Every cell in the table should be supported by evidence in the corresponding H2 section.
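
To make the structure concrete, here is a minimal sketch of how the five parts map onto page markup. Everything in it is a placeholder: the brand names, the criterion, and the fee figures are carried over from the evidence example in Part 3, not real data.

```html
<!-- Minimal sketch of the five-part structure. All names and
     figures are placeholders from the Part 3 evidence example. -->
<article>
  <!-- Part 1: the comparison question as the title -->
  <h1>Brand A vs Brand B: which provider fits [buyer situation]?</h1>

  <!-- Parts 2 and 3: one criterion per H2, each independently
       extractable, each claim anchored to verifiable evidence -->
  <h2>Fee transparency and total cost of ownership</h2>
  <p>Brand A charges a 0.5% annual management fee, published in full.
     Brand B charges a 1.5% platform fee plus fund charges.</p>
  <!-- ...one H2 section per remaining criterion... -->

  <!-- Part 4: a conditional verdict the model can quote directly -->
  <h2>The verdict</h2>
  <p>Choose Brand A when your priority is X.
     Choose Brand B when your priority is Y.</p>

  <!-- Part 5: summary table, one row per criterion, with
       <thead>/<tbody> so each cell is independently extractable -->
  <table>
    <thead>
      <tr><th>Criterion</th><th>Brand A</th><th>Brand B</th></tr>
    </thead>
    <tbody>
      <tr>
        <td>Fee transparency</td>
        <td>0.5% annual management fee, published in full</td>
        <td>1.5% platform fee plus fund charges</td>
      </tr>
    </tbody>
  </table>
</article>
```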

Schema: what to add to the page

Article

Wraps the full page. Include datePublished and dateModified to signal freshness. AI models discount stale comparison content.

Table

Mark up the comparison table with proper <thead> and <tbody>. AI models parse HTML tables more reliably than formatted lists.

FAQPage

If you include a FAQ section addressing buyer hesitation signals, add FAQPage schema so each question-answer pair is independently indexable.
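
As a sketch, the Article and FAQPage types expressed in JSON-LD (the syntax schema.org's own examples use) might look like the following. Every field value is a placeholder, and the FAQ question is borrowed from the buyer hesitation data in the worked example below.

```html
<!-- Article schema: datePublished and dateModified signal freshness -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Brand A vs Brand B: which provider fits [buyer situation]?",
  "datePublished": "2026-05-15",
  "dateModified": "2026-05-15",
  "author": { "@type": "Organization", "name": "Your Brand" }
}
</script>

<!-- FAQPage schema: one Question/Answer pair per hesitation signal -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Are there exit penalties or transfer restrictions?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Answer with a verifiable anchor: a number, a named source, or a published policy."
      }
    }
  ]
}
</script>
```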

The structure above works for all four comparison formats. For head-to-head comparisons, the two columns are named brands. For category comparisons, they are provider types. For use-case comparisons, they are providers evaluated through a single buyer scenario. For criteria-led comparisons, the H2 sections focus on sub-dimensions of a single criterion rather than multiple criteria.

With the structure defined, here is how it comes together in a real example using Seedli data.


Worked example: UK wealth management

We walk through building a head-to-head comparison page for St. James’s Place in the UK wealth management market. The Seedli data shows a specific shortlist-loss pattern that comparison content can address directly.

Seedli signal · Content Plan opportunity

Type: Shortlist Loss (SL)

Criterion: Cost and Fee Transparency

Buyer voice: “Buyers considering St. James’s Place for portfolios above £500k are choosing Brewin Dolphin instead, citing greater fee transparency and a more flexible investment approach.”

The shortlist loss tells us exactly what comparison to build: St. James’s Place vs Brewin Dolphin, anchored on cost and fee transparency. The buyer voice names the competitor and the criterion. We now pull the Provider Competitive Profile to understand the trade-offs.

Seedli signal · Tradeoffs → Provider Competitive Profile

  • Trust & Reputation: +0.42 vs market. SJP strength: FCA regulation, established brand, FTSE 250 listed, long operating history.

  • Service & Relationship: +0.33 vs market. SJP strength: dedicated adviser model, face-to-face relationship, ongoing review structure.

  • Expected Outcomes: +0.17 vs market. SJP strength: structured financial planning across retirement, tax, estate, and protection.

  • Cost & Fee Transparency: −0.33 vs market. Competitor advantage: Brewin Dolphin publishes fee schedules openly; the SJP fee structure is perceived as complex.

  • Flexibility & Customization: −0.25 vs market. Competitor advantage: Brewin Dolphin offers discretionary and advisory models; SJP is adviser-led only.

The data shows a clear pattern: SJP leads on trust, service, and outcomes. Brewin Dolphin leads on fee transparency and investment flexibility. These five criteria become the H2 sections of the comparison page. The Buyer Hesitation data adds the sub-questions the page must address.

Seedli signal · Buyer Hesitations

Lack Of Comparability · 3 signals

Buyers cannot compare fee structures between SJP and alternatives because the pricing models are structurally different: adviser fees, platform fees, and fund charges are bundled differently.

Fear Of Overcommitment · 2 signals

Buyer language from Seedli:

  • What happens if I want to move my portfolio to another manager after two years?
  • Are there exit penalties or transfer restrictions on SJP funds?

With this data, we can draft the comparison page structure.

Page title (H1)

St. James’s Place vs Brewin Dolphin: which wealth manager fits a portfolio over £500k?

H2 sections (comparison criteria)

  • H2: Fee transparency and total cost of ownership
  • H2: Investment approach and portfolio flexibility
  • H2: Regulatory standing and client protections
  • H2: Advisory relationship and ongoing service model
  • H2: Planning breadth: retirement, tax, and estate

Verdict

“Choose Brewin Dolphin if fee visibility and direct investment control are your primary criteria, and you are comfortable managing or delegating the broader financial planning yourself. Choose St. James’s Place if you want a single adviser relationship covering investments, retirement, tax, and estate planning together, and you value ongoing face-to-face guidance over self-directed portfolio management.”

Both recommendations are conditional on a specific buyer situation. A self-directed investor and a delegation-first investor would reach different conclusions from the same page.

The comparison table summarises the five criteria with one-sentence assessments for each provider. Each cell is supported by the evidence in the corresponding H2 section. The table gives AI models a second extraction point: when a buyer asks a narrow question about one criterion, the model can pull from the table cell rather than parsing the full H2 section.
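
A sketch of what that table might contain for this page. The populated cells restate the Provider Competitive Profile data above; the bracketed cells mark where the Brewin Dolphin evidence from each H2 section would go, since the profile data quoted here does not supply it.

```html
<table>
  <thead>
    <tr><th>Criterion</th><th>St. James’s Place</th><th>Brewin Dolphin</th></tr>
  </thead>
  <tbody>
    <tr>
      <td>Fee transparency and total cost of ownership</td>
      <td>Adviser, platform, and fund charges bundled; structure perceived as complex</td>
      <td>Publishes fee schedules openly</td>
    </tr>
    <tr>
      <td>Investment approach and portfolio flexibility</td>
      <td>Adviser-led model only</td>
      <td>Discretionary and advisory models available</td>
    </tr>
    <tr>
      <td>Regulatory standing and client protections</td>
      <td>FCA-regulated, FTSE 250 listed, long operating history</td>
      <td>[evidence from the corresponding H2 section]</td>
    </tr>
    <tr>
      <td>Advisory relationship and ongoing service model</td>
      <td>Dedicated adviser, face-to-face relationship, structured ongoing reviews</td>
      <td>[evidence from the corresponding H2 section]</td>
    </tr>
    <tr>
      <td>Planning breadth: retirement, tax, and estate</td>
      <td>Structured planning across retirement, tax, estate, and protection</td>
      <td>[evidence from the corresponding H2 section]</td>
    </tr>
  </tbody>
</table>
```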

After publishing, track two Seedli metrics: Elimination Resilience (target: movement above 50/100 within four to six weeks) and the specific shortlist-loss signal that triggered the page (target: reduced frequency in subsequent data cycles). If the page is earning citations, both metrics should shift.

The example above covers the head-to-head format. The same structure applies to category, use-case, and criteria-led comparisons, with the column labels and H2 framing adjusted to match the format.


How to start today

1 · Open your Content Plan and find a comparison-content recommendation. Look for opportunities typed as Shortlist Loss (SL) or Weak Consideration (WC). The buyer voice quote tells you which competitor or category the comparison should target and which criterion to anchor on.

2 · Choose the comparison format. If the buyer voice names a specific competitor, use head-to-head. If it names a provider category, use category comparison. If it describes a specific constraint, use use-case. If it names a single criterion, use criteria-led.

3 · Pull the Provider Competitive Profile. The criteria above market average are your genuine strengths. The criteria below are the competitor’s genuine advantages. The Battle Zone criteria are the comparison territory AI models focus on. These become your H2 sections.

4 · Write the five-part page. Comparison question as the title, criteria as H2 sections with evidence for both sides, a clear verdict with conditional recommendations, and a summary comparison table. Add Article schema with datePublished and dateModified.

5 · Publish and track. Monitor Elimination Resilience and the triggering shortlist-loss signal in Seedli. If both move within four to six weeks, the page is earning citations. If not, revisit the evidence layer: the most common failure mode is claims without verifiable anchors.

If your data also shows low Elimination Resilience (below 50/100), consider building a competitor acknowledgment page alongside the comparison page. The comparison page structures the trade-offs; the acknowledgment page builds trust by leading with the competitor’s strengths. Together, they address both the information gap and the credibility gap that drive evaluation-stage losses.

For the broader taxonomy of content types that AI models use at different decision stages, see how to create content that wins in AI models.

See where buyers are comparing you to competitors

Seedli maps the shortlist losses, criteria gaps, and hesitation signals that tell you exactly which comparisons buyers are asking AI to make, and which ones you are losing.
