Intent data vendors love to publish topic counts. 7,500 topics. 12,000 topics. 20,000 topics. The implied logic is that more topics means more precision, and more precision means better targeting. The actual logic is different. Most intent topics are noise, a handful are strong predictors of revenue, and the topic-count race is the wrong race to be running.
I am George, founder of Leadpipe. Our intent platform, Orbit, tracks 20,000+ topics across the Leadpipe network of 5M sites and 60B intent signals refreshed every 24 hours. After running this in production with sellers across SaaS, fintech, cybersecurity, and pro services for the last year, the pattern is clear: topic count is the wrong metric. Topic type, recency, and clustering are what predict revenue.
This post is the framework, not a ranked list of magic topics. The right topics for your business are different from the right topics for mine. The structural patterns hold across both.
The thesis
The number of topics in a vendor’s catalog is a vanity metric. The predictive power of those topics, sorted by type and recency, is what matters.
A 20,000-topic catalog with five strong predictors and 19,995 noise lines is worse than a 1,000-topic catalog with 200 strong predictors. You cannot tell the two apart from a vendor’s marketing page. You can only tell them apart by measuring lift on your own outcomes.
Why most topics are noise
Three structural reasons most topics in any large intent catalog do not predict revenue.
| Failure mode | What it looks like | Why it fails |
|---|---|---|
| Too broad | "Cloud computing," "data platforms," "marketing automation" | Fires on every reader of a tech newsletter; no buying-stage signal |
| Too narrow | "Internal-wiki tool for 5-person teams" | Insufficient sample to be reliable |
| Too generic | "Software," "B2B," "enterprise" | Defines a universe, not a buying signal |
| News / trend | "Latest CRM news" | Captures readers, not buyers |
| Awareness-stage | Intro and 101 content topics | Captures researchers, not deciders |
A buyer reading “what is identity resolution” is curious. A buyer reading “alternatives to Clearbit” is in late-stage evaluation. They might be the same person, six months apart, but the topics carry totally different revenue signal.
Intent vendors that publish raw topic counts without separating signal from noise are selling volume. Volume in this category is cheap. Predictive lift is what costs work to produce.
The topic-type hierarchy
The structure that matters, ranked from most predictive to least:
Topic-type predictive power (typical pattern):

```
Competitor / "alternatives to X"  ████████████  late-stage stress-testing
Pricing / "how much does X cost"  ████████████  active negotiation
"Best X for Y" / shortlist topics ██████████    shortlisting
Implementation / deployment       █████████     post-decision research
Integration / stack-fit           █████████     stack-fit due diligence
Use-case / workflow specifics     ███████       active scoping
Category solutions (in-market)    ███████       category awareness
Benchmark / vendor reports        █████         research / due diligence
Broad category awareness          ███           early discovery
News / trends                     ██            readers, not buyers
```
The hierarchy is intuitive when you read it. Competitor-alternatives search is late-stage, high-intent behavior. Broad category awareness is early-stage, low-conversion behavior. The useful thing is that the hierarchy is also structurally stable across sellers, which means you can build a scoring model on topic type that generalizes.
| Topic type | Why it predicts | What it looks like |
|---|---|---|
| Competitor / alternatives | Buyer is stress-testing the frontrunner | "ZoomInfo alternatives," "X vs Y" |
| Pricing / cost | Buyer is in active negotiation | "How much does X cost," "X pricing" |
| "Best X for Y" | Buyer is shortlisting | "Best CRM for agencies" |
| Implementation | Decision close to made | "How to implement X," "X setup guide" |
| Integration / stack | Stack-fit research | "X + HubSpot," "X API integration" |
| Use-case / workflow | Active scoping | Specific workflow questions |
| Category solutions | Category awareness | "Visitor identification software" |
| Benchmark / report | Due diligence research | "X industry report" |
| Broad category | Early discovery | "Marketing automation," "data platform" |
| News / trend | Readership, not buying | News and commentary |
A seller who scores by topic type rather than topic id, and weights the top of this list 3-5x higher than the bottom, gets a working intent score in a week. A seller who tries to curate a list of 300 individual “good” topics gets stuck maintaining the list.
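A minimal sketch of what scoring by topic type looks like. The weights below are illustrative assumptions, not Orbit's actual model; the point is the 3-5x spread between the top and bottom of the hierarchy, which you should tune against your own outcomes:

```python
# Illustrative topic-type weights following the hierarchy above.
# Exact values are assumptions to calibrate on your own pipeline data.
TOPIC_TYPE_WEIGHTS = {
    "competitor_alternatives": 5.0,
    "pricing_cost": 5.0,
    "best_x_for_y": 4.0,
    "implementation": 3.5,
    "integration_stack": 3.5,
    "use_case_workflow": 3.0,
    "category_solutions": 2.5,
    "benchmark_report": 1.5,
    "broad_category": 1.0,
    "news_trend": 0.5,
}

def topic_score(topic_types_fired: list[str]) -> float:
    """Score an account by the types of topics it fired, not topic ids."""
    return sum(TOPIC_TYPE_WEIGHTS.get(t, 0.0) for t in topic_types_fired)

print(topic_score(["pricing_cost", "broad_category"]))  # 6.0
```

Because the rule is type-based, it survives vendor catalog changes: a new topic slots into an existing type and is scored immediately, with no list maintenance.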
Recency beats breadth
The single most underrated finding from running intent in production: intent is perishable. A strong topic fired 45 days ago is worse signal than a mediocre topic fired yesterday.
| Recency window | Predictive value |
|---|---|
| 0-7 days | Highest. Buyer is actively researching now. |
| 8-14 days | Strong. Still in the active window. |
| 15-30 days | Moderate. Decision may be close or may have moved on. |
| 31-60 days | Weak. Often noise. |
| 60+ days | Treat as background, not signal. |
Intent data systems that batch weekly, or let their refresh go stale, lose most of the predictive value. This is the architectural reason Orbit refreshes daily: by the end of a week-long batch cycle, the strongest signals in the feed are already cold. We covered this in why Orbit refreshes intent daily, and the architecture is detailed in inside the Orbit pixel network.
If your current intent feed is weekly batch, the upgrade to daily refresh produces more lift than expanding the topic catalog. Same topics, fresher data, better outcomes.
The single-topic trap
A single topic firing is a weak signal even when the topic itself is strong. The signal that actually predicts pipeline is a cluster of topics fired in a coordinated pattern, plus on-site behavior.
| Pattern | Strength of signal |
|---|---|
| 1 broad-category topic fired | Weak |
| 1 high-intent topic fired (alternatives, pricing) | Moderate |
| 2+ high-intent topics in 14 days | Strong |
| 2+ high-intent topics + pricing-page visit on your site | Very strong |
| 3+ high-intent topics + competitor-page visit on your site | Highest |
The highest-converting pattern is not a single hot topic. It is a coordinated cluster: multiple category-adjacent topics, plus behavioral signals on your own site. This is where the Orbit approach separates from traditional topic-only intent vendors. We tie account-level intent to person-level site behavior, which is the difference between “an account at this company is researching” and “this specific buyer is in active evaluation.”
For the broader debate on how intent data is structured, see intent data vs visitor identification and our take on “our intent data only shows companies”.
How to score topics on your own data
Concrete framework. Run this on your own (seller, topic, outcome) data, not on a vendor’s marketing chart.
Step 1: Pull a 90-day baseline
Take every account that has fired at least one intent signal in the last 90 days, regardless of topic. Compute the 90-day pipeline-creation rate among those accounts. That is your baseline.
Step 2: Compute lift per topic type, not per topic
For each topic type (competitor, pricing, implementation, etc.), compute the same 90-day pipeline rate among accounts that fired at least one topic in that type. Compare to the baseline.
| Topic type | Baseline 90-day pipeline rate | Lift multiplier |
|---|---|---|
| All accounts (baseline) | X% | 1.0x |
| Competitor / alternatives | (your number) | (your multiplier) |
| Pricing / cost | (your number) | (your multiplier) |
| Implementation | (your number) | (your multiplier) |
| …etc. | | |
Topic types with a lift multiplier above 2x are your strong predictors. Topic types with multipliers below 1.2x are not adding signal beyond baseline. Treat them as noise and give them little or no weight in your scoring model.
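Steps 1 and 2 can be sketched in a few lines. The account dict shape below is a hypothetical schema (adapt the field names to your warehouse); `created_pipeline` is a boolean for pipeline creation within the 90-day window:

```python
from collections import defaultdict

def lift_by_topic_type(accounts: list[dict]) -> dict[str, float]:
    """Compute 90-day pipeline lift per topic type vs. the all-accounts baseline.

    Each account dict is assumed to look like:
      {"topic_types": {"pricing_cost", ...}, "created_pipeline": True}
    """
    # Step 1: baseline = pipeline rate across all accounts that fired anything.
    baseline = sum(a["created_pipeline"] for a in accounts) / len(accounts)
    fired, converted = defaultdict(int), defaultdict(int)
    for a in accounts:
        for t in a["topic_types"]:
            fired[t] += 1
            converted[t] += a["created_pipeline"]
    # Step 2: lift = (pipeline rate among accounts firing this type) / baseline.
    return {t: (converted[t] / fired[t]) / baseline for t in fired}
```

Running this on your own data produces the lift column in the table above; anything above 2x is a keeper, anything below 1.2x gets near-zero weight.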
Step 3: Weight by recency
Multiply each topic-type score by a recency decay. A simple working version:
| Days since last fire | Recency weight |
|---|---|
| 0-7 | 1.0 |
| 8-14 | 0.7 |
| 15-30 | 0.4 |
| 31-60 | 0.2 |
| 60+ | 0.0 |
A high-intent topic fired 45 days ago is worth less than a moderate topic fired yesterday. Your scoring model should reflect that.
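The decay table above translates directly into a lookup; the thresholds are the table's, and the weights are a starting point to tune:

```python
def recency_weight(days_since_fire: int) -> float:
    """Recency decay matching the step 3 table; weights are tunable defaults."""
    if days_since_fire <= 7:
        return 1.0
    if days_since_fire <= 14:
        return 0.7
    if days_since_fire <= 30:
        return 0.4
    if days_since_fire <= 60:
        return 0.2
    return 0.0  # 60+ days: background, not signal
```

Note the table's own example holds: a 5.0-weight topic fired 45 days ago scores 5.0 × 0.2 = 1.0, less than a 3.0-weight topic fired yesterday at 3.0 × 1.0 = 3.0.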
Step 4: Score by cluster
A single topic fire is a weak signal. Score the cluster: count of high-intent topics fired in the last 14 days, plus on-site behavior signals (pricing-page visit, comparison-page visit, return visits).
Working scoring framework:

```
Score = (sum of topic-type weights × recency weights)
      + (on-site behavior weights for last 14 days)
      + (return-visit boost)
```
This is a starting model. Refine on your own outcomes.
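Putting steps 2-4 together, a cluster score might look like the sketch below. The behavior weights and return-visit boost are hypothetical values chosen for illustration; calibrate them against your own 90-day outcomes:

```python
# Hypothetical on-site behavior weights and return-visit boost.
BEHAVIOR_WEIGHTS = {"pricing_page": 5.0, "comparison_page": 4.0}
RETURN_VISIT_BOOST = 2.0

def account_score(topic_fires, site_behaviors, return_visits):
    """Cluster score for one account.

    topic_fires: list of (type_weight, days_since_fire) tuples
    site_behaviors: behavior names seen on your site in the last 14 days
    return_visits: count of return visits in the window
    """
    def decay(days):  # recency table from step 3
        if days <= 7:
            return 1.0
        if days <= 14:
            return 0.7
        if days <= 30:
            return 0.4
        return 0.2 if days <= 60 else 0.0

    topic_part = sum(w * decay(d) for w, d in topic_fires)
    behavior_part = sum(BEHAVIOR_WEIGHTS.get(b, 0.0) for b in site_behaviors)
    boost = RETURN_VISIT_BOOST if return_visits >= 2 else 0.0
    return topic_part + behavior_part + boost
```

A single broad-category fire scores low; two high-intent fires plus a pricing-page visit stack up fast, which matches the cluster-pattern table above.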
Step 5: Tie intent to people, not just accounts
Account-level intent tells you which accounts are in-market. Person-level intent tells you which individual at that account is researching. That changes the outreach from “spray the buying committee” to “reach the one person actually driving it.” This is the Orbit thesis, and it is also the operational reason person-level intent data is the harder version of the problem.
What good intent data looks like
If you are evaluating intent vendors, throw out the topic count number. Use this checklist instead:
| Capability | Why it matters |
|---|---|
| Daily refresh | Recency dominates breadth |
| Topic-type hierarchy in the data model | You score by type, not by id |
| Account + person-level resolution | Reach the actual buyer, not the committee |
| On-site behavior integration | Cluster signals beat single topics |
| Lift transparency | Vendor can show predictive lift on existing customers |
| Daily and event-level webhooks | Realtime workflows beat weekly reports |
Vendors that pass that checklist are vendors worth a trial. Vendors that lead with “we have 20,000 topics” without backing it up with predictive-lift data are selling volume.
For the architectural picture of how Orbit handles each of those, see anatomy of an Orbit intent topic, reading the Orbit intent score, and how early Orbit detects in-market buyers.
Implications for the reader
Throw out the topic-count number when evaluating intent vendors. 20,000 topics vs 10,000 topics is irrelevant if most of the extra topics are noise. Ask the vendor for the distribution of topic predictive lift on their existing customers. If they cannot answer, they have not measured it.
Build your intent scoring on topic type, not topic id. Instead of curating a list of 300 individual topics to track, build a rules-based score that weights competitor-alternatives, pricing, and implementation topics 3-5x higher than broad category-awareness topics. This generalizes better than a curated list and survives catalog changes.
Score by cluster, not single fire. A single topic fire is a weak signal. A 3-topic cluster within 14 days plus on-site behavior is a strong signal. Build the score around the cluster.
Refresh daily, or accept the decay. If your intent feed is weekly batch, you have already burned most of the predictive value. Daily is table stakes for intent to be useful in 2026.
Tie intent to people, not just accounts. Account-level intent tells you which accounts are in-market. Person-level intent tells you which individual at that account is researching. That changes everything downstream.
Limitations of any topic-predictive analysis
Worth flagging clearly:
- Seller heterogeneity. What is a high-lift topic for one seller may be a low-lift topic for another. The patterns by topic type are more robust than any specific topic. Score the type, not the topic.
- Sample size per topic. Many topics in a 20,000-topic catalog do not have enough fires per seller to compute lift reliably. Bucket by type to avoid sample-size traps.
- Confounders. Accounts that fire competitor-alternatives topics are also more likely to be visiting your site. The lift is real, but separating intent signal from on-site behavior signal is part of why scoring by cluster is more reliable than scoring by single topic.
- Revenue lag. 90-day pipeline creation is a leading indicator, not closed revenue. For enterprise cycles, longer windows are more truthful.
The point of the exercise is not to publish a ranked list of “the 20 topics that predict revenue.” The right topics depend on your seller, your category, and your traffic. The point is to give you a framework that works on your own outcomes. Build your first Orbit audience in under 20 minutes. 5M-site pixel network, 60B+ intent signals, daily refresh, person-level resolution. Start free →