
What Does Google's Non-Commodity Turn Mean for B2B GTM?

Google's Danny Sullivan told Toronto search pros to build non-commodity content. Here's what that means for B2B go-to-market teams.

George Gogidze · 10 min read

Earlier this month at Google Search Central Live in Toronto, Danny Sullivan gave the clearest public framing of where Google is taking ranking in the AI era: the future favors non-commodity content, and commodity content is getting compressed by AI Overviews and generative answers. Most B2B marketing playbooks were built around commodity content. They are about to stop working.

I am George, founder of Leadpipe. I watch this from two sides: we publish a lot of B2B content ourselves, and we build data infrastructure that marketing teams use to measure whether content is producing pipeline. The Sullivan framing is important, and the implications for B2B GTM are not the ones most SEO takes are landing on.

The answer up front

Sullivan’s argument, compressed: Google is increasingly comfortable handling commodity content through AI Overviews and generative summaries. If a piece of content is something any LLM could produce from public sources, Google no longer needs to rank a human-written version prominently. It can synthesize. Non-commodity content, the stuff that requires first-hand experience, proprietary data, or perspective no LLM can fabricate, is what the search engine will actually send traffic to.

For B2B GTM teams, the translation is blunt. The content mill era is over. The “ultimate guide to X” written by a contractor from a brief is a dead asset. What wins is content backed by proprietary data, customer evidence, and real operator perspective. And the companies that will win that war are the ones that built the instrumentation to generate proprietary data in the first place.

The trend in one paragraph

For fifteen years, B2B SEO was a volume game. Publish broad pillar pages, rank for category terms, capture inbound, fill the funnel. That game worked because Google treated text as the primary unit of the web and rewarded comprehensive coverage. The game stopped working when LLMs made comprehensive coverage the cheapest thing in the world and AI Overviews turned the SERP into a synthesis layer. Sullivan’s Toronto remarks are Google formalizing what the data already showed: the traffic has moved toward content that only a specific operator could have written, and away from content that looks like everyone else’s.

Three forces driving the trend

Force 1: Generative answers ate commodity content

AI Overviews now appear on a large share of informational queries. For B2B, that includes most of the “what is X” and “how does X work” queries that used to reliably drive top-of-funnel traffic. When the SERP answers the question directly, a ranked article below the fold collects a fraction of the clicks it used to.

The content that keeps getting clicks in this environment shares a pattern: it offers something the generative summary cannot. A specific case study. A dataset. A perspective from someone who actually ran the thing. A counterintuitive argument grounded in experience. That is the non-commodity half of the split.

If your B2B content strategy in 2026 is mostly “keyword research plus brief plus contractor plus publish,” you are competing against free synthesis with more synthesis. That does not work.

Force 2: Everybody’s LLM-generated content looks the same

Inside any given B2B category, you can pull ten recently published “guides” and find the same structure, the same examples, the same verbs, the same benefit lists. The commodity ceiling is not just that AI can produce it. It is that AI has already produced it, multiple times, for multiple competitors, and the variance across those pieces is now a rounding error.

Google’s job, from Sullivan’s framing, is to surface the version that adds something real. If every article in the top ten is a reshuffle of the same paragraph, none of them deserves the traffic. The SERP collapses to the AI Overview and maybe one outlier with actual evidence attached.

The operator opportunity is obvious. Be the outlier with evidence.

Force 3: Zero-click is forcing B2B to earn first-party signal elsewhere

Even when you do rank for a non-commodity piece, more of the engagement is happening in the SERP, not on your site. The buyer reads your answer inside the Overview, nods, and closes the tab. You did not get a visit. You did not get a form fill. You did not capture a cookie.

That puts pressure on B2B GTM in a way that is specifically uncomfortable: you have to assume the traffic you do get is the tip of a much larger iceberg of people who consumed your content without visiting you at all. The signal has to come from somewhere else. Intent data, first-party visitor identification, and person-level intent networks are the places the signal actually lives now. SEO is no longer self-attributing.

This ties directly to why aggregated third-party cookies broke attribution and why analytics is lying about your pipeline. The measurement stack that worked when every visit produced a tracked session does not survive zero-click.

What counts as non-commodity for B2B

Sullivan did not give a rubric. Here is the working rubric I use when briefing our content team:

| Non-commodity (keep writing) | Commodity (stop writing) |
| --- | --- |
| First-hand product teardowns we can back with tests | “Ultimate guide to X” with no new data |
| Customer case studies with numbers | Listicles of tools pulled from G2 |
| Independent benchmarks we ran ourselves | “What is X” definitions competing with AI Overviews |
| Counter-thesis opinions from founder or RevOps lead | Keyword-stuffed pillar pages |
| Dataset analyses with original cuts | “10 tips” posts with generic advice |
| “We tried X and here is what happened” | “Everything you need to know about X” |
| Internal methodology we actually use | Templated integration tutorials |

A clean test for any post before it goes into the queue: could the same post have been produced by a freelancer with no access to your product, your customers, or your data? If yes, it is commodity. Kill it or reframe it.
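The freelancer test above can be sketched as a simple triage function. The flag names below are illustrative, not from any real briefing tool; the only logic that matters is the one from the test: if no input required access to your product, customers, or data, the post is commodity.

```python
def is_commodity(post: dict) -> bool:
    """Return True if a draft could have been produced by an outsider.

    `post` maps yes/no flags about the draft to booleans. The keys here
    are hypothetical examples of proprietary inputs a brief might claim.
    """
    proprietary_inputs = (
        post.get("uses_our_product_data", False),
        post.get("cites_our_customers", False),
        post.get("includes_tests_we_ran", False),
        post.get("has_named_operator_perspective", False),
    )
    # No proprietary input at all -> commodity. Kill it or reframe it.
    return not any(proprietary_inputs)


# A founder argument with no data is still non-commodity by this test.
brief = {
    "uses_our_product_data": False,
    "cites_our_customers": False,
    "includes_tests_we_ran": False,
    "has_named_operator_perspective": True,
}
print(is_commodity(brief))  # → False (non-commodity, keep it)
```

In practice this runs as an editorial checklist, not code, but the shape is the point: the gate is a disjunction over proprietary inputs, and an empty brief fails it by default.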

Example of the reframe. “Top 10 visitor identification tools in 2026” as a commodity piece is a list of vendors with generic descriptions. Our version, top 10 visitor identification software, is a list plus an independent accuracy test we ran ourselves with specific scores (8.7/10 for Leadpipe, 5.2/10 for RB2B, 4.0/10 for Warmly) that nobody else has because nobody else ran the test. Same topic. Different content class.

What this implies for B2B GTM strategy

The Sullivan framing pushes B2B content strategy toward a specific conclusion: the moat is proprietary data, not proprietary wording. This changes how you should be allocating marketing budget in 2026.

Stop funding content volume. Start funding data collection.

A $40K annual content budget spent on 80 freelance-written category posts will produce less traffic and less pipeline than the same $40K spent on three serious research reports backed by data your company actually owns. The reports compound, they get cited, they produce non-commodity content that Google can tell is real.
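The per-asset arithmetic behind that trade-off, as a quick sketch (the dollar figures come from the paragraph above; the framing as cost per asset is mine):

```python
budget = 40_000  # annual content budget from the example above

cost_per_post = budget / 80    # 80 freelance-written category posts
cost_per_report = budget / 3   # 3 serious research reports

print(f"per post:   ${cost_per_post:,.0f}")    # → per post:   $500
print(f"per report: ${cost_per_report:,.0f}")  # → per report: $13,333
```

A $500 post buys synthesis of the top-ranked results; a $13K report buys data collection, which is the part an LLM cannot reproduce.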

Build the instrumentation that generates original data.

You cannot write “we ran this test on 75,000 visitors” unless you have the infrastructure to run that test. Visitor identification, Orbit intent data, a proper CRM, and clean attribution are not just operational tools. They are raw material for the content strategy that actually works in the new SERP.

Publish perspective, not summary.

The LLM can produce the summary. It cannot produce a founder’s argument about why a category is changing. It cannot produce a RevOps leader’s war story from a painful migration. It cannot produce a customer interview where someone tells the awkward truth about a product. Those are non-commodity by construction.

Internal data plus internal voice.

The best non-commodity content marries a dataset nobody else has with a person who can interpret it. Neither alone is enough. A dataset without interpretation reads like a data dump. Interpretation without a dataset reads like another opinion piece. Together they produce work that is uncopyable in practice, which is what Sullivan was pointing at.

The commodity content graveyard

Some B2B content formats are worth writing off now. I would not fund new work in these categories unless you have a specific data angle that makes it non-commodity:

  1. Generic “what is X” category definition pieces. AI Overviews have already eaten these.
  2. Listicles built from G2 data with no independent evaluation.
  3. “10 tips for better outbound” without data or named operator experience.
  4. Keyword-chasing pillar pages on generic GTM topics.
  5. Integration tutorials that read like docs with SEO bolted on.

The replacement is not more content. The replacement is more evidence. For B2B GTM, that means publishing what is actually happening in your pipeline, what you learned from your tests, and what the data shows about your market, not what a contractor synthesized from the top-ranked results last month.

What this means for 2026 and 2027

Two predictions.

Traffic from informational queries will keep declining for most B2B sites that publish commodity content. The decline will not be graceful. It will look like a step function as AI Overviews roll out to more query types. Teams that depended on commodity organic traffic will see their MQL volume drop and blame the algorithm. The algorithm is doing exactly what Sullivan said it would.

Non-commodity content will keep producing pipeline even when click-through rates drop. Because the content that survives the non-commodity filter is the content buyers use to decide. If a CFO is picking between two vendors and one of them published the only serious independent accuracy benchmark in the category, that piece drives the decision even if it did not drive the initial click.

The practical lesson: invest the 2026 content budget in the small number of pieces that only your company could have written. Kill the volume plays. Build the data infrastructure that makes the non-commodity pieces possible.

For a longer argument about why commodity data aggregation is breaking on the same curve, see is the B2B data aggregator era ending and midbound, a new era in marketing.

Leadpipe identifies 30-40%+ of your US B2B visitors with full contact data on the Pro plan at $147/mo. No credit card to start the 500-lead trial. Start identifying visitors