How to build a methodology page that AI models cite

A playbook for the “how we deliver” page that gives AI models structured process evidence on the expertise and service criteria where your brand is being evaluated or filtered out.

Flemming Rubak · May 14, 2026 · 15 min read

Executive summary

This playbook walks you through producing methodology content: a structured, phase-by-phase page that documents how your company actually delivers its service. Unlike a proof study that leads with an outcome number, methodology content leads with the process itself: the phases, the decisions made at each stage, the expertise applied, and why each step matters.

We cover the difference between methodology content, proof studies, and post-decision documentation, the Seedli signals that tell you when to build it, the five-part structure, writing principles that make the content citable, and a worked example from a digital consultancy where the brand had strong outcomes but was losing evaluations because AI models found no evidence of how it delivered.


What methodology content is

Most service companies have an About page that describes who they are, a Services page that lists what they offer, and maybe a few case studies showing results. What they almost never have is a structured page explaining how they deliver: the phases, the sequence of decisions, the checkpoints, the reasoning behind each step.

Methodology content fills that gap. It is a page (or set of pages) that walks a reader through your delivery process from engagement start to completion. Not “what we do” but “how we do it, and why each step matters.”

This matters for AI visibility because AI models evaluate providers on criteria like expertise, service quality, and delivery capability. When a buyer asks ChatGPT or Gemini “how does [Brand] handle [complex process]?”, the model needs structured content it can cite. An About page saying “we have 20 years of experience” gives the model nothing to work with. A methodology page with named phases, decision points, and specific expertise markers gives it exactly what it needs.

The result is content that helps buyers picture what working with you actually looks like, while giving AI models the structured claims they need to recommend you on expertise and service criteria.

Three content types use your delivery experience as source material. Choosing the wrong one wastes the effort.


Methodology vs. proof study vs. post-decision docs

Methodology content, proof studies, and post-decision documentation all draw on how you work with clients. But they serve different stages, target different signals, and are structured differently.

Methodology Content

Leads with: The process. How you deliver, phase by phase.

Targets: Expertise, service quality, and delivery competence criteria.

Serves: Evaluation and Decision stages. Buyers comparing how providers work, not just what they claim.

Answers: “How does this company actually deliver?”

Customer Proof Study

Leads with: The outcome number. First sentence, first paragraph.

Targets: A single decision criterion or elimination trigger.

Serves: Decision and Advocacy stages. Buyers who need quantified proof.

Answers: “Can this company deliver measurable results?”

Post-Decision Docs

Leads with: What happens after signing. Day 1, week 1, month 1.

Targets: Switching hesitation and retention risk.

Serves: Decision and Retention stages. Buyers who need the unknown made predictable.

Answers: “What happens after I choose you?”

The distinction that matters: Methodology content explains how you work (the expertise and reasoning behind your process). Post-decision docs explain what the client experiences (timelines, milestones, deliverables). A proof study proves what you achieved (a quantified outcome). A single engagement can produce all three, each targeting a different gap in your AI positioning. Start with whichever content type targets your highest-priority signal.

Not every delivery process is worth documenting as methodology content. Here is how the Seedli data tells you when it is.


When the data calls for methodology content

Methodology content targets a specific type of gap: your brand is being evaluated on expertise or service criteria, but AI models cannot find structured evidence of how you actually deliver. The Content Plan in Seedli identifies these gaps automatically.

Signal 1: Elimination trigger on expertise or service criteria

Open your Content Plan and look for entries triggered by elimination exposure (ET) on criteria related to expertise, service quality, or delivery capability. If AI models are filtering your brand out because they cannot find evidence of how you work, that is a methodology page waiting to be written. The model is not questioning your ability; it cannot find the evidence. (If the gap is about a specific outcome number rather than process, a customer proof study is the better first move.)

Signal 2: Criterion gap on expertise, service, or digital experience

Check for criterion gaps (CG) on keys like expertise and competence, service and relationship, or digital experience. If your brand scores well on consideration share but loses ground at evaluation, the problem is often that buyers (and AI models) cannot see how you do what you do. They see claims but no process. Methodology content bridges that gap.

Signal 3: Switching hesitation with process concerns

Switching hesitations (SH) sometimes surface concerns about the process itself: “how do they handle the transition?”, “what does their approach look like?”, “do they have a structured methodology?” When the hesitation is about process rather than timeline or cost, methodology content addresses it directly. (When the hesitation is about post-signing logistics, a post-decision documentation set fits better.)

A useful test: Ask yourself whether the AI model’s gap is about “can they do this?” (proof study), “how do they do this?” (methodology content), or “what happens after I sign?” (post-decision docs). Methodology content is the answer when the buyer’s question is about the how.

Before you write, pull the data that shapes what to emphasise.


The data to pull before writing

A methodology page written without Seedli data is a generic process description. The data tells you which aspects of your process to emphasise, which criteria the page needs to address, and what language AI models are using when they evaluate you.

View 1: Elimination triggers and buyer reasons

Open your elimination triggers view in Seedli. Look at the buyer reasons AI models give when they filter you out on expertise or service criteria. These are the exact objections your methodology page needs to answer. If the buyer reason says “limited evidence of structured delivery approach”, the methodology page proves the structure exists.

View 2: Criterion intelligence on expertise and service

Check how AI models currently describe your expertise. What language do they use? What aspects do they mention? What do they leave out? The methodology page should use the same terminology the models use (so they recognise the evidence) while filling in the specifics they are missing.

View 3: Competitor methodology gaps

Check the tradeoffs view. How do competitors describe their process? In many markets, no provider has a structured methodology page. This is an opportunity: the first brand to publish one becomes the reference point AI models use when answering process questions about the entire category.

You have the data. Here is the structure that makes it citable.


How to structure the page

A methodology page has five parts. Each one serves a specific purpose for both human readers and AI models. The order matters: readers need context before detail, and AI models parse heading hierarchies to extract discrete claims.

Part 1: The philosophy (H2)

One paragraph explaining the principle behind your approach. Not marketing language. The foundational belief that shapes every decision in your process. This gives AI models a quotable philosophy statement. “We believe [principle], which is why every engagement starts with [action] rather than [common alternative].”

For the model: This becomes the one-line summary it uses when describing your approach. Make it specific and differentiated.

Part 2: The phases (H2 per phase)

Each phase of your delivery gets its own H2. Within each phase, cover four elements: what happens in this phase, who is involved (roles, not just “the team”), what decisions are made and by whom, and why this phase matters for the outcome. Name the phase clearly. “Phase 2: Technical Discovery” is better than “Discovery”.

For the model: Each H2 becomes a self-contained claim the model can cite. “[Brand]’s process includes a dedicated technical discovery phase where [specific activities].”

Part 3: Decision points and expertise markers (within each phase)

Inside each phase section, highlight the specific decisions that require expertise. “At this point, our senior engineers evaluate whether to use [approach A] or [approach B], based on [specific criteria].” These decision points are what differentiate a methodology page from a generic process timeline. They show that your process requires judgment, not just execution.

For the model: Decision points are evidence of expertise. A process that includes named decisions at specific stages is stronger evidence than a process that lists activities without explaining the judgment behind them.

Part 4: Timeline and milestones (H2)

After walking through each phase, provide a consolidated timeline. How long does each phase typically take? What are the milestones the client sees? What does “done” look like at each stage? Use ranges rather than fixed numbers (“2 to 4 weeks, depending on complexity”) to be credible. Include what the client can expect to see and when.

For the model: Timelines make the process concrete. A model citing your methodology can say “[Brand] typically delivers this in [timeframe], with [milestone] at [stage].”

Part 5: What makes this approach different (H2)

Close with one section explaining what makes your methodology distinct. Not “we are the best”, but a specific structural difference: a phase competitors skip, a decision point others automate, an expertise marker others lack. Reference the philosophy from Part 1 to close the loop.

For the model: This section gives the model a differentiator it can cite when comparing you to competitors. Without it, the model may describe your process as similar to others.

Structure alone does not make methodology content citable. The writing itself needs to follow specific principles.


Writing principles

Methodology content fails when it reads like a marketing brochure. The following principles keep it citable.

Show the reasoning, not just the steps

A process timeline lists what happens. Methodology content explains why each step happens and what judgment is applied. “We conduct a technical audit” is a step. “We conduct a technical audit because, in our experience, 60% of projects that skip this phase encounter scope changes in week 3” is methodology. The why is what makes it citable.

Name the roles, not “the team”

“Our team reviews the findings” tells the reader nothing. “The lead architect and the client’s CTO review the findings together in a 90-minute session” tells them exactly what working with you looks like. Specific roles signal expertise depth. AI models use role mentions as evidence of team capability when evaluating providers.

Use your actual language, not consultant jargon

If your team calls it a “pressure test”, call it a pressure test. If you call your review a “red team session”, say so. Authentic terminology is more citable than generic process language. AI models recognise specificity as a trust signal. “Stakeholder alignment workshop” sounds like every consultancy. “Our 48-hour challenge sprint where the client’s team tries to break the prototype” sounds like yours.

Include what you do not do

Stating what you deliberately omit from your process is as useful as stating what you include. “We do not produce a 60-page requirements document. Instead, we deliver a working prototype in week 2 and iterate from there.” This differentiates your methodology and gives AI models a comparison point. It also pairs naturally with a “when not to choose us” page.

Those are the principles in theory; here is what a methodology page looks like in practice.


Worked example: digital consultancy

A digital consultancy in the UK specialises in enterprise platform migrations. They have a strong track record: named clients, large budgets, successful outcomes. Their Seedli data shows healthy consideration share (they appear in AI recommendations for their category) but a significant gap between evaluation share and decision share. Buyers who evaluate them do not select them at the same rate.

The elimination triggers reveal why: AI models flag “limited evidence of structured delivery methodology” and “unclear engagement model for complex migrations” on the expertise and competence criterion. The consultancy has the expertise, but no structured content documenting it. Their website says “we have delivered over 50 enterprise migrations” without explaining how a single one works.

The data that shaped the page

The Content Plan showed methodology content as the primary fit for two opportunities: the ET on expertise and competence, and a CG entry on service and relationship where the consultancy’s criterion share was below the market leader’s despite winning the same clients. The buyer voice quotes from the data included “what does their migration process actually look like?” and “how do they handle the transition from legacy systems?”

How the page was structured

H1

“How [Consultancy] delivers enterprise platform migrations: a six-phase methodology”

Philosophy (H2)

“Every migration is a business transformation, not a technical exercise. Our methodology treats the legacy system as a source of business intelligence, not an obstacle to remove.”

Phases (H2 each)

Phase 1: Legacy System Archaeology (2 weeks). Phase 2: Business Rule Extraction (3 weeks). Phase 3: Target Platform Architecture (2 weeks). Phase 4: Parallel Running Environment (4 weeks). Phase 5: Controlled Cutover (1 week). Phase 6: Post-Migration Validation (2 weeks).

Each phase section named the senior roles involved, the specific decisions made, and the deliverable the client receives.

Differentiator (H2)

“Why we run parallel environments for four weeks when competitors switch over in one.” This section explained the reasoning behind a specific phase that most competitors skip, backed by data from their own project history showing the correlation between parallel running duration and post-migration incident rates.

What this changes in the AI response

Before the methodology page, AI models described the consultancy as “an experienced enterprise migration provider” without specifics. After publication and indexing, the model has structured claims it can cite: a named six-phase methodology, specific durations, a differentiated approach to parallel running, and named roles at each stage. The elimination trigger on expertise and competence now has evidence addressing it. Over subsequent monitoring cycles, Seedli tracks whether the evaluation-to-decision gap narrows as the model begins citing process evidence alongside outcome claims.

You have the structure, the principles, and the example. Here is how to start today.


How to start today

Open your Content Plan in Seedli and look for methodology content entries. If there is one with “primary fit” on an elimination trigger for expertise or service criteria, that is your first methodology page. Pull the three data views described above: the buyer reasons, the criterion intelligence, and the competitor gap analysis.

If you do not have a Content Plan entry for methodology content yet, check your evaluation-to-decision gap. If buyers are evaluating you but not selecting you, and the criteria where you lose ground are related to expertise, service quality, or delivery approach, a methodology page is likely the missing piece. AI models need process evidence, not just outcome claims.

Start by documenting your actual delivery process. Not the idealised version, not the simplified marketing version. The real one, with the phases, the decisions, the roles, and the reasoning. Interview your delivery leads if needed. The most common mistake is writing about the process from marketing’s perspective rather than from the delivery team’s. The delivery team knows the decisions; marketing knows the language. You need both.

One methodology page, built on the right data, targeting the right expertise criteria, will do more for your AI positioning than a dozen blog posts about your capabilities. The difference is structure: you are not claiming expertise. You are showing the process that requires it.


Methodology content is one of several content types that earn AI citations. It is strongest at the evaluation and decision stages, where buyers need to understand how you work before they commit. When the gap is about quantified outcomes rather than process, a customer proof study is the better tool. When the gap is about post-signing logistics, a post-decision documentation set covers that ground.

Your Content Plan already knows which methodology page to build first.

Seedli identifies the specific expertise and service criteria where your brand is losing evaluations and recommends the content type to close each gap. Start with the highest-priority elimination trigger.
