Heading hierarchy as AI content map
Your headings are not formatting. AI models parse them as a navigational index that determines which section answers which buyer question.
Flemming Rubak · April 20, 2026 · 9 min read
Executive summary
Most content teams treat headings as visual structure: H1 for the title, H2s for major sections, H3s for subsections. The hierarchy exists to break up the page for human readers. AI models read the same hierarchy for a different purpose: as a content map that tells them which section answers which question, how specific the answer is, and whether the section is self-contained enough to extract without losing meaning.
A flat page with vague H2s and no H3s forces the model to read everything to find anything. A page with descriptive H2s, buyer-language H3s, and proper section nesting lets the model jump to the right section, extract a complete answer, and cite the page. This technique covers the three rules for AI-readable heading hierarchy, how to write H3s that mirror the language buyers use in AI conversations, and the self-contained section test that determines whether your content survives extraction.
Why headings matter more for AI than for SEO
In traditional SEO, headings carry keyword weight. An H1 with the target keyword signals topic relevance to Google. H2s and H3s with related terms support topical depth. The heading hierarchy is a ranking signal, and the advice is to include your target phrases in headings at each level.
AI models use headings differently. When a model receives a buyer query and needs to construct a response from web sources, it does not score pages by keyword density in headings. It parses the heading structure to understand the content architecture: what topics does this page cover, at what level of specificity, and which section is most relevant to the question the buyer asked?
The practical consequence: a page whose H2 says “Overview” and whose H3s say “Part 1,” “Part 2,” “Part 3” gives the model almost no navigational information. Every section could contain anything. The model must read the full body text under each heading to determine relevance. A page whose H2 says “How CRM migration costs break down” and whose H3s say “Data migration,” “Integration rebuilds,” “Team retraining” tells the model exactly where to find the answer to “what does it cost to switch CRMs?” without parsing a single paragraph.
AI models with browsing capabilities already use heading structure to select which section to extract when constructing an answer. Perplexity cites specific sections by heading when building its responses. ChatGPT’s browsing mode identifies relevant sections before extracting content. The heading hierarchy is the first thing these systems parse, and it determines whether your content gets read at all. [Inference: the exact mechanism varies by model and is not fully documented by any provider.]
Three rules govern whether a heading hierarchy works as an AI content map or just as visual formatting.
Three rules for AI-readable heading hierarchy
1. Each heading level narrows the scope
H1 states the page topic. H2s divide it into distinct subtopics. H3s divide each subtopic into specific claims or components. The scope narrows at each level, and no heading at a lower level should be broader than the heading above it.
This sounds obvious, but most pages violate it. An H2 that says “Migration considerations” with H3s that say “Cost,” “Timeline,” “Risk,” “Strategy” has a scope problem: “Strategy” is broader than the H2, not narrower. A model parsing this hierarchy cannot determine whether “Strategy” content belongs to the migration section or is a separate topic that was misplaced in the tree.
Before
H2: Migration considerations
H3: Cost
H3: Timeline
H3: Risk
H3: Strategy
After
H2: How CRM migration costs break down
H3: Data migration
H3: Integration rebuilds
H3: Team retraining
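To review scope narrowing across a whole page, it helps to pull the heading outline out of the HTML first and read it as a tree. A minimal sketch in TypeScript, using a regex rather than a full DOM parser (a simplification; a production version should parse the DOM):

```typescript
// Extract the heading outline from an HTML string so a writer can
// review whether each level narrows the scope of the level above it.
function headingOutline(html: string): { level: number; text: string }[] {
  const outline: { level: number; text: string }[] = [];
  const re = /<h([1-6])[^>]*>(.*?)<\/h\1>/gis;
  let m: RegExpExecArray | null;
  while ((m = re.exec(html)) !== null) {
    outline.push({ level: Number(m[1]), text: m[2].trim() });
  }
  return outline;
}
```

Run it against a draft and indent each entry by its level: scope problems like a broad H3 under a narrow H2 become visible at a glance, without reading any body text.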
2. Headings describe content, not structure
“Introduction,” “Background,” “Methodology,” “Results,” “Conclusion” are structural labels. They tell the reader where they are in the document but not what the section contains. AI models gain nothing from structural labels because every article has an introduction and a conclusion; the labels carry zero information about the specific content.
Descriptive headings encode the content: “Why Salesforce migration timelines double when custom objects exceed fifty” tells the model exactly what this section covers. A buyer asking “how long does it take to migrate from Salesforce?” can be routed to this section by heading alone. The structural label “Timeline considerations” would require the model to read the body text to determine if it answers the same question.
The test: read each heading in isolation, without the body text. Can you tell what the section argues or explains? If the heading could appear on any article in any industry, it is structural. If it could only appear on this article, it is descriptive.
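The test can be partially automated with a heuristic that flags the most common structural labels. A sketch; the label list is illustrative, not exhaustive, and a human still makes the final call:

```typescript
// Common structural labels that carry no content information.
// Illustrative list only; extend it with labels from your own drafts.
const STRUCTURAL_LABELS = new Set([
  "introduction", "background", "overview", "methodology",
  "results", "conclusion", "summary", "notes",
]);

// Flag a heading as structural if it is a generic label or a bare
// sequence marker like "Part 2" or "Step 3".
function isStructuralHeading(heading: string): boolean {
  const normalised = heading.trim().toLowerCase().replace(/[.:!]+$/, "");
  return (
    STRUCTURAL_LABELS.has(normalised) ||
    /^(part|step|section)\s+\d+$/.test(normalised)
  );
}
```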
3. Use section elements to reinforce the hierarchy
HTML <section> elements group content under a heading into a semantic unit. A <section> wrapping an H2 and its body text tells parsers (both AI models and accessibility tools) that everything inside belongs to that heading. Without the element, the association between a heading and its content is positional: the model infers that paragraphs below an H2 and above the next H2 belong to the first H2. That inference breaks when content is restructured, when sidebars interrupt flow, or when the page includes components that sit outside the heading tree.
The <section> element makes the association explicit. In React-based sites, the component that renders each section should output a <section> element with the heading as the first child. This is the pattern we use across this site: every ArticleSection component renders as a <section> with a scroll-margin anchor.
<section id="migration-costs">
<h2>How CRM migration costs break down</h2>
<p>Three cost categories account for 80%
of migration budgets...</p>
<h3>Data migration</h3>
<p>The data migration cost depends on...</p>
<h3>Integration rebuilds</h3>
<p>Every integration connected to the
source CRM must be...</p>
</section>
The three rules define the structural foundation. The next question is what language the headings should use, and the answer comes from buyer data.
H3s that mirror buyer language
H2s frame the topic. H3s are where the match between your heading and a buyer’s query happens. When a buyer asks an AI model “what are the hidden costs of switching CRMs,” the model looks for a section that answers that question. If your H3 says “Hidden costs in CRM migration,” the heading-to-query match is direct. If your H3 says “Financial considerations,” the model must infer a match from the body text.
The principle: H3 headings should use the same language buyers use when asking AI models about this topic. This is not the same as keyword stuffing. It means writing headings that a buyer would recognise as a direct answer to their question.
Where buyer language comes from
In Seedli, buyer language surfaces in three places: the Buyer Hesitations on the Risk tab (what buyers say when they stall), the criterion names on the Tradeoffs tab (what buyers call their evaluation criteria), and the switching triggers on the Retention tab (what language buyers use when describing why they left or are considering leaving). These are the phrases AI models are already processing. Your H3s should mirror them.
From criterion to heading
If Seedli shows a buyer criterion called “Flexibility & Customisation” in the Battle Zone, the H3 under your evaluation section should not say “Flexibility” alone. The buyer language pairs the two concepts. Write the H3 as “Flexibility and customisation: where CRM platforms diverge” to mirror the criterion name and signal that the section addresses the specific evaluation dimension buyers use.
From hesitation to heading
If the Risk tab shows a buyer hesitation of “Fear of Overcommitment,” the H3 in your risk section should name it directly: “The overcommitment problem in CRM migration.” A buyer asking an AI model “am I committing too much to this CRM?” will trigger a response that pulls from sections whose headings address commitment and lock-in. The heading-to-query match is stronger when the heading uses the buyer’s own framing.
From switching trigger to heading
Retention data shows why buyers leave. If the switching triggers include “integration challenges” and “dissatisfaction with support,” these become H3 candidates in a migration guide: “When integration complexity is the reason to switch” and “When support quality drives the migration decision.” Each H3 creates a section that matches a specific switching scenario AI models encounter in buyer conversations.
The pattern applies across content types. In a market reality report, the H3s under the criteria section should mirror the criterion names from Seedli. In an elimination defence page, the H3s should mirror the risk names. The data source changes; the principle stays the same: your headings should use the language AI models are already processing from buyer queries.
Using buyer language in headings solves the matching problem. The next question is whether the content under each heading survives being extracted on its own.
The self-contained section test
AI models do not always cite a full page. They extract sections. A model answering “what are the risks of migrating from Salesforce?” will pull the risk section from your migration guide, not the entire article. If that section relies on context established in a previous section to make sense, the extracted answer is incomplete or misleading.
The self-contained section test: take any H2 section (or H3 subsection) and read it without the rest of the page. Does it make a complete claim? Does it define any terms it uses? Does it state its conclusion without requiring the reader to have read an earlier section?
Sections that fail the test
“As mentioned above, the same dynamic applies here.” This sentence references content in another section. If the current section is extracted, the reference points to nothing. Any phrase that uses “as discussed,” “building on the previous section,” or “given the data above” creates a dependency that breaks on extraction.
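These dependency phrases are easy to scan for mechanically before publishing. A sketch; the phrase list is illustrative and should be extended with patterns from your own drafts:

```typescript
// Phrases that tie a section to content elsewhere on the page.
// Any match means the section may break when extracted on its own.
const BACK_REFERENCES: RegExp[] = [
  /as (mentioned|discussed|shown|noted) (above|earlier|previously)/i,
  /building on the previous section/i,
  /given the data above/i,
  /see (above|the previous section)/i,
];

// Return the matched phrases so a writer can locate and rewrite them.
function findBackReferences(sectionText: string): string[] {
  const hits: string[] = [];
  for (const pattern of BACK_REFERENCES) {
    const m = sectionText.match(pattern);
    if (m) hits.push(m[0]);
  }
  return hits;
}
```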
Sections that pass the test
Each section opens by stating what it covers and closes by stating its conclusion. If a term was defined in an earlier section, the current section includes a brief restatement. If a data point from an earlier section is relevant, it restates the number rather than pointing back. This creates redundancy in the full-page reading experience, but it makes every section independently citable.
The tradeoff
Self-contained sections repeat information. A human reader moving through the article linearly will notice the repetition. The tradeoff is deliberate: the article reads slightly more redundantly in exchange for every section being independently extractable. For long-form content that targets multiple buyer questions across its sections, this tradeoff is worth making. For short articles with a single argument, it is unnecessary.
The self-contained section test applies to every heading level, but it matters most at H2 and H3. H4s (if used) are typically so specific that they are always extracted with their parent H3. An H3 section is the atomic unit of AI extraction: broad enough to contain a complete answer, specific enough to match a buyer question.
We applied these rules to the site you are reading. Here is what the heading hierarchy looks like in practice.
What we did on this site
Before formalising these rules, our heading hierarchy was inconsistent. Some articles used descriptive headings; others defaulted to structural labels. The H3 level was underused: most articles had H2s with long prose blocks underneath and no subsection headings at all. That meant every H2 section was a single undifferentiated block from the model’s perspective.
Descriptive H2s as section promises
We rewrote every H2 to describe what the section delivers, not what structural role it plays. “When to produce a Market Reality Report” instead of “When to use this content type.” “The four Seedli screens to open” instead of “Data sources.” Each H2 makes a promise that the section fulfils, and that promise is specific enough that a model can match it to a buyer question.
Buyer-language H3s in playbooks
In the playbook articles, we added H3s that mirror the criterion names, risk categories, and buyer hesitations from Seedli data. The market reality report playbook has H3s named after the six report sections, each mirroring the category language AI models use when structuring market analyses.
Section elements and anchor IDs
Every ArticleSection component on this site outputs a <section> element with an anchor ID derived from the heading. The same ID feeds both the internal link topology (which links to specific sections, not just pages) and the table of contents. This creates a consistent addressing system: any section can be linked to by its anchor, and the heading at that anchor describes exactly what the section contains.
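The anchor-ID derivation can be sketched as a small slug function. The exact slug rules used on this site are not documented here, so treat this as an assumption about the convention:

```typescript
// Derive a stable anchor ID from a heading (assumed convention:
// lowercase, "&" expanded to "and", punctuation dropped, spaces hyphenated).
function anchorId(heading: string): string {
  return heading
    .toLowerCase()
    .replace(/&/g, "and")         // "Flexibility & Customisation" keeps both concepts
    .replace(/[^a-z0-9\s-]/g, "") // drop punctuation
    .trim()
    .replace(/\s+/g, "-");        // spaces become hyphens
}
```

Because the ID is a pure function of the heading, the table of contents, internal links, and the section markup all agree on the same address without coordination.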
Self-contained sections in worked examples
The worked examples (market reports and criteria flips) now restate data points within each section rather than referencing earlier sections. The risk section restates the competitive landscape context it needs rather than saying “as shown in the market structure section above.” This makes each section independently citable when a model extracts it in response to a specific buyer question.
The technique is straightforward. The failures are too.
What not to do
Do not skip heading levels
Going from H2 directly to H4 (skipping H3) breaks the hierarchy for both accessibility tools and AI parsers. The model interprets the missing level as either a parsing error or a structural inconsistency. If your content has only two levels of depth, use H2 and H3. If it has three, use H2, H3, and H4. Never skip a level.
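A skipped level is detectable from the flat sequence of heading levels in document order, so this rule can be enforced in a content lint step. A sketch:

```typescript
// Given heading levels in document order (e.g. [1, 2, 3, 2, 3]),
// return the indices where a level was skipped going deeper.
// Going deeper by more than one step (H2 -> H4) is a skip;
// climbing back up by any amount is always legal.
function findSkippedLevels(levels: number[]): number[] {
  const skips: number[] = [];
  for (let i = 1; i < levels.length; i++) {
    if (levels[i] > levels[i - 1] + 1) skips.push(i);
  }
  return skips;
}
```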
Do not use headings for visual emphasis
An H3 that says “Important!” or “Note:” uses a heading tag for visual styling, not for content structure. The model parses this as a section heading and expects content that matches. Use bold text, callout boxes, or other visual elements for emphasis. Reserve heading elements for content structure.
Do not write headings that only make sense in sequence
“Step 1,” “Step 2,” “Step 3” as H3 headings create numbered sections with no descriptive content. A model extracting “Step 2” in isolation has no idea what the step covers. Write the heading to describe the step: “Add dateModified to every Article schema” instead of “Step 2.” If sequencing matters, keep both: use the descriptive heading as the heading text and render the step number as a visual prefix outside it.
Do not duplicate heading text across sections
Two H3s on the same page that both say “Risk” create ambiguity. A model routing a buyer question about risk to “the risk section” cannot determine which one to extract. Every heading on a page should be unique. If two sections address risk from different angles, name the angles: “Risk from the buyer’s perspective” and “Risk from the vendor’s perspective.”
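Duplicate headings are similarly easy to catch before publishing. A sketch that compares headings case-insensitively:

```typescript
// Return every heading that appears more than once on a page,
// normalised to lowercase so "Risk" and "risk" count as duplicates.
function duplicateHeadings(headings: string[]): string[] {
  const seen = new Map<string, number>();
  for (const h of headings) {
    const key = h.trim().toLowerCase();
    seen.set(key, (seen.get(key) ?? 0) + 1);
  }
  return [...seen.entries()].filter(([, n]) => n > 1).map(([h]) => h);
}
```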
The underlying principle connects to every other technique in this series: structure is communication. Your schema tells models what your content means. Your meta description tells them what it claims. Your heading hierarchy tells them where to find each part. Together, these structural layers determine whether a model can navigate your content or has to guess.
See which buyer questions your content answers
Seedli maps the buyer language, evaluation criteria, and decision-stage queries that AI models process in your market. The heading hierarchy for your content starts with the language your buyers actually use.
Get started
This is part of the Seedli technique series on structural optimisation for AI-mediated discovery. See all techniques and playbooks.