Incrementality Testing for Creator Campaigns: Did the Ad Actually Cause the Sale?
Attribution and causation are different problems. Attribution answers: "Which channel do we credit for this conversion?" Incrementality testing answers: "Would this customer have bought anyway if we hadn't run this campaign?"
The gap between those two questions is the gap between correlation and causation. Your promo code might show 100 redemptions. But how many of those customers already knew about your product, were already on their way to buying, and simply used the code because it was available? If 60 of those 100 would have converted anyway, your "100 conversions" campaign actually drove 40 incremental ones.
For creator marketing specifically, this distinction matters more than it does for most channels, for reasons rooted in how creator audiences behave.
Why Creator Campaigns Are Vulnerable to Attribution Inflation
When a creator with 500,000 followers promotes your product, some portion of that audience already knows you. If you're a direct-to-consumer brand with any meaningful presence, existing customers, lapsed customers, and people who've already visited your site are all in that audience. They see the ad, they use the promo code (the discount is real), and they convert. Your attribution marks them as campaign-driven conversions.
They weren't incremental. They were going to buy. The creator didn't cause the sale; they just provided a discount path to a purchase that was already likely.
This inflates reported campaign performance. The greater the overlap between your existing customer base and the creator's audience, the bigger the inflation. If you're advertising with a creator whose audience is 30% your existing customer base, attribution overcount can be substantial.
What Incrementality Testing Looks Like
The clean version of an incrementality test is a holdout group experiment:
- Identify a group of people who would have been exposed to the campaign.
- Randomly split them into a treatment group (sees the campaign) and a control group (doesn't).
- After the campaign, compare conversion rates between the two groups.
- The difference is your incremental lift.
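As a sketch, the arithmetic of the holdout comparison looks like this. All counts are made up for illustration, not benchmarks:

```python
# Minimal holdout analysis: compare conversion rates between a
# randomized treatment group (saw the campaign) and a control
# group (held out). All counts below are illustrative.

def incremental_lift(treat_conv, treat_n, ctrl_conv, ctrl_n):
    """Return (treatment rate, control rate, absolute lift)."""
    treat_rate = treat_conv / treat_n
    ctrl_rate = ctrl_conv / ctrl_n
    return treat_rate, ctrl_rate, treat_rate - ctrl_rate

treat_rate, ctrl_rate, lift = incremental_lift(
    treat_conv=240, treat_n=10_000,
    ctrl_conv=150, ctrl_n=10_000,
)

# Incremental conversions = lift applied to the treated audience.
incremental = lift * 10_000
print(f"lift={lift:.2%}, ~{incremental:.0f} incremental conversions")
```

Of the 240 attributed conversions in this toy example, only about 90 are incremental; the rest would have happened anyway, which is exactly the gap attribution alone hides.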
In practice, perfect holdout groups are hard to achieve for creator campaigns because you can't control which podcast subscribers hear a specific episode or which Instagram followers see a given post. But you can approximate it.
Geographic holdout. Run a creator campaign exclusively in some regions and not others. Compare conversion rates in the exposed regions vs. unexposed regions during the campaign window, holding other variables constant. Requires meaningful geographic variation in your sales data.
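One way to read a geographic holdout is a simple difference-in-differences: compare how conversions changed in exposed regions against how they changed in holdout regions over the same window. The figures below are hypothetical:

```python
# Difference-in-differences sketch for a geographic holdout.
# Values are hypothetical weekly conversions per 10k sessions,
# which normalizes for region size.

pre  = {"exposed": 12.0, "holdout": 11.8}   # weeks before the campaign
post = {"exposed": 15.5, "holdout": 12.3}   # weeks during the campaign

exposed_change = post["exposed"] - pre["exposed"]   # campaign + background
holdout_change = post["holdout"] - pre["holdout"]   # background only
did_lift = exposed_change - holdout_change          # estimated campaign effect

print(f"estimated lift: {did_lift:.1f} conversions per 10k sessions")
```

Subtracting the holdout's change removes shared background movement (seasonality, pricing, other channels), which a raw exposed-vs-unexposed comparison would not.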
Creator-staggered testing. Run different creators in different time windows with similar audience profiles. Compare conversion rates during "on" periods vs. "off" periods, normalized for seasonality. Requires enough historical data to establish baselines.
Synthetic control. Build a statistical model of what conversions would have been without the campaign, using pre-campaign trend data. Compare actual conversions during the campaign to the synthetic baseline. More sophisticated, but doesn't require geographic or time segmentation.
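A stripped-down version of the synthetic baseline idea: fit a trend to pre-campaign weeks and project it over the campaign window. Real synthetic control methods build the counterfactual from weighted donor series rather than a single linear trend; this sketch with made-up numbers shows only the core comparison:

```python
# Linear pre-trend baseline as a stand-in for a synthetic control.
# All weekly conversion counts are illustrative.

def fit_linear(ys):
    """Ordinary least squares for y = a + b*t over t = 0..n-1."""
    n = len(ys)
    mean_x = (n - 1) / 2
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(ys)) \
        / sum((x - mean_x) ** 2 for x in range(n))
    return mean_y - b * mean_x, b

pre_campaign = [100, 104, 107, 111, 115]        # weeks 0-4, before launch
a, b = fit_linear(pre_campaign)

campaign_weeks = [5, 6, 7]
baseline = [a + b * t for t in campaign_weeks]  # projected counterfactual
actual = [140, 148, 145]                        # observed during campaign
incremental = sum(y - yhat for y, yhat in zip(actual, baseline))
print(f"~{incremental:.0f} incremental conversions over 3 weeks")
```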
A Simpler Proxy: The New Customer Rate
If true incrementality testing isn't feasible, new customer rate is the most practical proxy. What percentage of conversions attributed to a creator campaign came from people who had never bought from you before?
High new customer rate (>70%) suggests the campaign is driving real discovery and incremental demand. Low new customer rate (<30%) suggests you're largely converting existing customers or people already in your funnel, who may have converted anyway.
This doesn't prove incrementality, but it's directional. A campaign that converted 200 first-time customers is almost certainly more incremental than one that converted 200 customers who already bought from you once.
You can pull this data from your CRM or e-commerce platform by matching conversion email addresses against your customer list. It requires a data connection that not every team has set up, but it's significantly less involved than a full holdout experiment.
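The matching itself can be as simple as a set lookup, assuming you can export conversion emails and your customer list. Email normalization rules vary by platform; lowercasing alone, as here, is a simplification:

```python
# New-customer-rate check: match campaign conversions against the
# existing customer list by normalized email. All emails are fake.

existing_customers = {"a@example.com", "b@example.com", "c@example.com"}
campaign_conversions = [
    "A@example.com",   # existing customer, different casing
    "d@example.com",
    "e@example.com",
    "b@example.com",   # existing customer
    "f@example.com",
]

def normalize(email):
    return email.strip().lower()

new = [e for e in campaign_conversions
       if normalize(e) not in existing_customers]
new_customer_rate = len(new) / len(campaign_conversions)
print(f"new customer rate: {new_customer_rate:.0%}")  # 60% here
```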
The Baseline Conversion Rate Benchmark
Another practical check: compare the site-wide conversion rate during the campaign period against a comparable period without the campaign. If a creator campaign was live and your overall conversion rate didn't change, the campaign's attributed conversions likely came at the expense of organic or direct conversions: customers who would have converted anyway but happened to use the campaign's tracking mechanism.
If the conversion rate visibly increased during the campaign period, that's evidence of incremental demand generation. The campaign brought in people who wouldn't have converted otherwise.
This test is noisy: seasonality, promotions, and other channels all affect conversion rate. But for brands with enough transaction volume to see meaningful signals in weekly conversion rates, it's a quick sanity check.
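To judge whether a conversion-rate bump is signal or sampling noise, a two-proportion z-test is a reasonable first filter. Counts are illustrative; this says nothing about seasonality or other confounders, only about noise from finite volume:

```python
# Two-proportion z-test: is the campaign-period conversion rate
# distinguishable from the baseline period's, given the volumes?
from math import sqrt

def two_prop_z(c1, n1, c2, n2):
    p1, p2 = c1 / n1, c2 / n2
    pooled = (c1 + c2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_prop_z(
    c1=660, n1=50_000,   # campaign period: 1.32% conversion rate
    c2=600, n2=50_000,   # baseline period: 1.20%
)
# Rule of thumb: |z| > ~2 means the change is unlikely to be pure
# sampling noise. Here z is about 1.7: suggestive, not proof.
print(f"z = {z:.2f}")
```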
What to Do When Incrementality Is Low
Sometimes a campaign passes attribution with flying colors and fails incrementality: 100 promo code redemptions, 80% existing customers, no change in the conversion rate baseline. The honest read is that the creator drove limited incremental value and mostly redistributed purchases from direct and organic channels to the promo code.
A few possible causes:
Wrong audience. The creator's audience overlaps too heavily with your existing customer base. The campaign was essentially a discount to current customers rather than acquisition of new ones. Solution: find creators with lower audience overlap, typically found in adjacent categories or at earlier stages of category awareness.
Product saturation in that audience. The creator's audience already knows your product well and has either bought or deliberately chosen not to. Additional exposure drives diminishing returns. Solution: expand to new creator categories or reconsider the audience thesis.
Attribution inflation from coupon leakage. The promo code spread off-platform and you're counting conversions that weren't actually from the campaign audience. Solution: tighten promo code hygiene, shorter expiration, creator-exclusive formatting.
How to Present Incrementality Data
When presenting creator campaign performance internally, the cleanest framing separates attribution from incrementality:
- Attributed conversions: X (what the tracking showed)
- Estimated incremental conversions: Y (based on new customer rate or holdout data)
- Confidence in estimate: low/medium/high (based on methodology rigor)
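One lightweight way to assemble that summary. The function name, the confidence mapping, and the new-customer adjustment are all assumptions for illustration, not a standard methodology:

```python
# Hypothetical helper that builds the three-line summary. Treating
# new customers as the incremental core is a rough proxy, per the
# new-customer-rate section above.

def campaign_summary(attributed, new_customer_rate, method):
    confidence = {
        "holdout": "high",            # randomized experiment
        "geo_holdout": "medium",      # quasi-experimental
        "new_customer_rate": "low",   # directional proxy only
    }.get(method, "low")
    estimated_incremental = round(attributed * new_customer_rate)
    return {
        "attributed_conversions": attributed,
        "estimated_incremental_conversions": estimated_incremental,
        "confidence": confidence,
    }

report = campaign_summary(attributed=100, new_customer_rate=0.4,
                          method="new_customer_rate")
```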
This framing is more defensible than either inflated attribution numbers or overly skeptical dismissals of the channel. It shows you're thinking carefully about causation, which builds credibility with finance and leadership in a way that raw attributed conversion counts often don't.
The goal isn't to make campaigns look better or worse. It's to know which ones actually moved the needle so you can do more of those and less of the others.
Ready to track your podcast ad ROI?
Castlytics gives you per-campaign attribution, real-time ROI, and listener journey analytics — free to get started.
Start free — no credit card