What % of Paid Ad Clicks Are Returning Visitors?

Most B2B ad budgets pay new-audience CPMs to re-reach existing visitors. How to measure the leak using visitor identification, not platform cookies.

George Gogidze · 10 min read

Every B2B marketer who has ever stared at a LinkedIn retargeting CPM has asked the same question. How much of my “acquisition” budget is actually re-reaching people I already had? The question sounds simple. The answer is nearly impossible to get from inside the ad platforms, because their reporting defines “new” versus “returning” by cookie, and cookies are increasingly useless.

I am George, founder of Leadpipe. Our deterministic identity graph stitches sessions across devices, browsers, and time. That means we can answer this question with a different lens than the ad platforms can. This post is about how to think about the question, what visitor identification reveals that platform reporting hides, and the methodology you can run on your own traffic.

The short answer

You cannot get a single number that applies to every B2B site. Returning-visitor share of paid clicks varies by platform, audience targeting, ICP overlap, brand maturity, and how many retargeting campaigns are running in parallel. What I can tell you with confidence:

  • Platform-reported “new visitor” share is almost always wrong on the high side. Cookie expiry, cross-device gaps, and incognito browsing all count returning visitors as new.
  • A deterministic identity graph that stitches across 280M verified profiles and 60B intent signals catches a portion of those misclassified returners that the ad platforms do not.
  • On every B2B site I have looked at over the last year, the gap between platform-reported “new” and identity-resolved “new” is large enough to change budget decisions.

The actionable framing is this. You are almost certainly overpaying for reach that already exists. The question is by how much, and what you do about it.

Why platform “new visitor” numbers lie

Ad platforms classify a click as “new” if they cannot find a matching cookie or platform-side identifier. That sounds reasonable. In practice, it produces three structural undercounts of returning visitors.

| Source of error | What happens | Effect on “new visitor” count |
| --- | --- | --- |
| Cookie expiry | Default cookie windows are 30-90 days; longer research cycles fall outside | Inflates “new” |
| Cross-device gaps | A visitor researches on desktop, clicks ad on mobile | Inflates “new” |
| Incognito and ITP | Safari ITP, Firefox ETP, and incognito sessions break cookie continuity | Inflates “new” |
| Identity-graph blind spot | Platforms only see their own audience graph | Inflates “new” |

A deterministic identity graph that stitches sessions across devices, browsers, and time sees a portion of those journeys the platform does not. That is the entire reason buyer-side identity matters for ad measurement.

The methodology you can run yourself

Here is the framework. You do not need our data to do this. You need your own pixel data, your own ad-platform export, and roughly a quarter’s worth of clean traffic.

1. Define returning honestly

A click is “returning” if the same person had a prior session on your site within a defined window. The window matters. A 30-day window under-counts returners with longer research cycles; a 180-day window catches almost everyone, though prospects whose last visit falls outside even that window will still read as new.

I use 180 days for B2B because the research window for B2B buyers is long and most enterprise deals span multiple quarters.

2. Resolve identity before you classify

If you classify on cookies alone, you are reproducing the platforms’ blind spot. The deterministic step is to match each click to a person record using a first-party identity graph, not just an ad-platform cookie. Match rate on that step matters. On US B2B traffic, deterministic matching lands around 30-40%+ of paid-ad clicks, depending on traffic mix.

3. Bucket by recency

Once you have classified visitors as new or returning, bucket the returners. The framework I use:

| Recency bucket | What it tells you |
| --- | --- |
| Truly first visit (no prior history in 180 days) | Genuine acquisition |
| Returning, last 7 days | Direct retargeting overlap |
| Returning, 7-30 days | Active research cycle |
| Returning, 30-90 days | Long-cycle research |
| Returning, 90+ days | Re-engagement or shifted intent |

The shape of this distribution tells you where your ad budget is actually landing.
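
Steps 1 through 3 can be sketched in a few lines of Python. This is a minimal illustration, not a production pipeline; the inputs (`click_ts`, `prior_visits`) are hypothetical field names standing in for whatever your identity layer emits.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(days=180)  # the "honest" returning window for B2B research cycles

def bucket(click_ts, prior_visits):
    """Classify one identified paid click against that person's prior sessions.

    click_ts:     datetime of the paid-ad click
    prior_visits: datetimes of earlier sessions stitched to the same person
    """
    # Only sessions strictly before the click and inside the 180-day window count
    recent = [v for v in prior_visits if timedelta(0) < click_ts - v <= WINDOW]
    if not recent:
        return "truly first visit"            # no prior history inside the window
    days = (click_ts - max(recent)).days      # recency of the most recent prior session
    if days <= 7:
        return "returning, last 7 days"
    if days <= 30:
        return "returning, 7-30 days"
    if days <= 90:
        return "returning, 30-90 days"
    return "returning, 90+ days"

# Example: a click on 2026-03-01 from someone last seen 20 days earlier
click = datetime(2026, 3, 1)
history = [datetime(2025, 11, 10), datetime(2026, 2, 9)]
print(bucket(click, history))  # → returning, 7-30 days
```

Running the classifier over every identified paid click and counting the buckets gives you the distribution described above.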

4. Compare your “prospecting” labels to reality

This is the finding that changes how teams allocate budget. Even campaigns explicitly labeled as prospecting (excluding retargeting audiences, lookalike from non-converter sources, objective set to “new audience”) still pull a meaningful share of returning visitors. Three structural reasons:

  • Audience leak. Lookalike models trained on traits, not on prior site behavior, pull heavily toward existing visitors because the traits overlap.
  • Intent-keyword search. Someone who has already visited your site and is actively researching clicks your ad again on a branded or high-intent keyword.
  • Cookie expiry. The ad platform does not know the visitor returned because its cookie expired. A deterministic graph sees the prior visit the platform missed.

The practical cost: you are paying prospecting CPMs for retargeting-equivalent reach.

What the platforms get directionally right

Some directional patterns hold across B2B sites I have studied. Worth stating because they help calibrate expectations.

| Platform | Returning-visitor share, directional |
| --- | --- |
| Meta Ads | Highest, because Meta lookalike models lean on existing-visitor traits |
| Google Ads (display) | High, because display retargeting overlaps prospecting placements |
| Google Ads (search) | Mixed, depends on branded vs non-branded keyword share |
| Reddit Ads | Mixed, varies heavily by subreddit |
| LinkedIn Ads | Lowest, because professional targeting skews toward audiences with fewer repeat site visitors |
| X Ads | Mixed, varies by audience definition |

These are directional, not numeric, claims. Run the methodology above to get your own number.

Conversion rate, by prior-visit status

Returning-visitor paid clicks consistently convert at a higher rate than first-visit paid clicks. The ROAS math looks favorable to retargeting on its face. There is a trap.

Many of the buyers in your retargeting pool would have returned organically without the ad. Last-click attribution gives the ad credit for a return that was going to happen anyway. This is the counterfactual problem. The platform counts the click. It does not count the prior return visit that preceded it.

Our first-visit vs fifth-visit close rate study covers the close-rate gap in detail. The short version: returning visitors close at a meaningfully higher rate, but not all of that closing is causally driven by the retargeting click.

Implications for the reader

Audit your prospecting campaigns for audience leak. If a non-trivial share of your “prospecting” spend is hitting existing visitors, you are paying new-audience CPMs for retargeting-equivalent reach. Tighten exclusion audiences. Add your visitor list as a negative audience on every prospecting campaign, refreshed weekly.

Measure incrementality, not just ROAS, on retargeting. Run geo-lift or holdout tests periodically. Even a crude 10% holdout on your retargeting campaigns will give you a real incrementality read in a few weeks. ROAS without incrementality is a vanity number.
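
The incrementality read from a holdout reduces to simple arithmetic: compare the conversion rate of the group that saw retargeting ads to the held-out group that did not. A sketch with made-up numbers:

```python
def incremental_lift(treated_conv, treated_n, holdout_conv, holdout_n):
    """Share of the treated group's conversions the ads actually caused.

    The holdout rate is the counterfactual: buyers who would have
    converted anyway, without the retargeting click.
    """
    treated_rate = treated_conv / treated_n
    holdout_rate = holdout_conv / holdout_n
    incremental_rate = treated_rate - holdout_rate
    return incremental_rate / treated_rate

# Hypothetical: 4.0% conversion with ads vs 3.0% in a 10% holdout
print(f"{incremental_lift(400, 10_000, 30, 1_000):.0%}")  # prints "25%"
```

In this illustrative case, only a quarter of the retargeting conversions are incremental; the platform’s ROAS report would have credited the ads with all of them.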

Reframe spend toward identified returning visitors. Paid retargeting reaches anyone who was on the site. Visitor identification lets you reach the named individuals from the same pool through sales outreach (email, LinkedIn) at a fraction of the cost. For accounts you have already identified, a sales motion is almost always cheaper than re-buying them through an ad. See what to do when someone visits your pricing page for the workflow.

Build audiences on identified visitors. Feed your visitor identification output into your ad platforms to build better lookalike and exclusion audiences. The Orbit approach goes one layer deeper, targeting people researching the category off your site, before they ever click your ad. See orbit-linkedin-ads-audiences and our Google Ads optimization recipe.

Stop trusting platform-side “new vs returning” reports as ground truth. They are directional inputs, not measurements. The deterministic ground truth lives in your own first-party identity layer.

Why this matters more in 2026

Two structural shifts make this worse, not better, over the next 24 months.

Cookie deprecation accelerates. Safari’s ITP shortened first-party cookies aggressively years ago. Chrome’s privacy changes continue to compress windows. Every quarter, more sessions look “new” to ad platforms even when the same person was on your site last week.

Cross-device research is the default. B2B buyers research on multiple devices across multiple weeks before they fill out a form. Platform-side cookie matching cannot stitch those journeys cleanly. The gap between platform-reported new and identity-resolved new keeps widening.

AI-driven outreach raises the cost of waste. When your AI SDR or your ABM playbook is treating “new visitor” as a primary signal, the misclassification compounds into bad outreach decisions. See the data layer AI sales agents are missing for the broader argument.

The fix is not to abandon ad platforms. They are still the most efficient way to reach people who have not heard of you. The fix is to stop confusing platform-side reporting with measurement, and to build your own first-party identity layer underneath everything you measure.

How to wire this up with Leadpipe

If you want to run this methodology on your own traffic without building it from scratch:

  1. Install the Leadpipe pixel (2-5 minutes, JavaScript).
  2. Run it for 30-60 days to build a deterministic visit history on your own site.
  3. Tag your ad-platform clicks with UTMs that survive across sessions.
  4. Join your ad-click log against the Leadpipe identification log on email, hashed email, or person ID.
  5. Bucket by recency. Compare against the platform-side new-visitor report.
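
Steps 4 and 5 reduce to a join and two ratios. A minimal sketch, assuming the ad-click log is a list of dicts and the identification log maps a hashed email to that person’s visit history; the field names here are illustrative, not Leadpipe’s actual export schema:

```python
def compare_new_share(ad_clicks, id_log, platform_new_share):
    """Join ad clicks to the identification log and compare the
    identity-resolved "new visitor" share to the platform-reported one.

    ad_clicks: [{"hashed_email": ..., "click_ts": ...}, ...]
    id_log:    {hashed_email: [visit_ts, ...]}  # deterministic visit history
    """
    matched = [c for c in ad_clicks if c["hashed_email"] in id_log]
    # "New" means no session for this person before the click timestamp
    new = sum(
        1 for c in matched
        if not any(v < c["click_ts"] for v in id_log[c["hashed_email"]])
    )
    return {
        "match_rate": len(matched) / len(ad_clicks),
        "resolved_new_share": new / len(matched) if matched else None,
        "platform_new_share": platform_new_share,
    }

clicks = [
    {"hashed_email": "a", "click_ts": 10},
    {"hashed_email": "b", "click_ts": 12},
    {"hashed_email": "c", "click_ts": 15},  # never identified → partial denominator
]
log = {"a": [3, 8], "b": [20]}  # "a" has prior visits; "b" only a later one
# Identity-resolved new share comes out at 0.5 vs the platform's reported 0.70
print(compare_new_share(clicks, log, platform_new_share=0.70))
```

The gap between `resolved_new_share` and `platform_new_share` is the leak; the 180-day window from step 1 would be applied to the visit comparison in a fuller version.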

The output is a clean number for your traffic, your audience, your campaigns. Not a benchmark from someone else’s data. Your number. That is the only one that matters for your budget decision.

Limitations

  • Identity coverage on paid clicks. You can only classify a click as returning if you can identify the visitor on this or a prior session. Coverage on most B2B sites lands at 30-40%+ on US traffic; the unidentified remainder means you are always working with a partial denominator.
  • 180-day visit history window. A visitor whose last prior session was 200 days ago will read as “truly first visit” even though they are a long-cycle returner. Lengthening the window catches more of them but also adds noise.
  • Cross-device stitching is strong but imperfect. A buyer who switches devices during a single research cycle may not stitch perfectly to prior sessions.
  • Self-reported campaign labels drift. “Prospecting” classification depends on the seller telling you which campaigns are prospecting. Labels rot over time.

Leadpipe identifies 30-40%+ of your US B2B visitors with full contact data on the Pro plan at $147/mo. No credit card to start the 500-lead trial. Start identifying visitors →