The Real Cost of Not Tracking Podcast Ads: Wasted Spend, Wrong Cuts, and Missed Bets
The most common argument for not investing in podcast attribution is cost and complexity. "We will track it properly when we are spending more." "We do not have the engineering bandwidth right now." "Promo codes are good enough for now."
These are rational-sounding reasons to delay. The problem is that the cost of not tracking is not zero. It is concrete, it accumulates with every week of spend, and it almost always exceeds the cost of the tracking tool by a significant margin.
Here is what not tracking podcast ads actually costs.
The Wrong Campaigns Get Cut
The most damaging consequence of poor attribution is cutting a campaign that is working.
Podcast ads drive conversions in ways that are invisible to most standard analytics setups. A listener hears your ad, searches your brand name two days later, and buys. In your analytics, that sale is attributed to organic search or direct, with no connection to the podcast. If you are using last-touch digital attribution, the podcast campaign gets zero credit.
This happens at scale. Across podcast advertising budgets of all sizes, research consistently shows that link-only tracking captures 30 to 40 percent of podcast-driven conversions. The other 60 to 70 percent are attributed elsewhere or not attributed at all.
Now apply that to a budget decision. You are running two shows. Show A appears to generate 15 conversions in your analytics at a 1.2x ROAS. Show B appears to generate 40 conversions at a 3.8x ROAS. You cut Show A and double down on Show B.
But what if Show A's attribution was 25 percent complete, and its real conversion count was 60, giving a true ROAS of 4.8x? And what if Show B's attribution was 80 percent complete (a highly trackable audience that clicks links), and its real ROAS was 4.7x? You just cut your best-performing show based on incomplete data.
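The adjustment above is just division: if you only capture a fraction of conversions, the true ROAS is the observed ROAS divided by that capture rate. A minimal sketch, using the hypothetical Show A and Show B figures from the example (the capture rates are illustrative assumptions, not measured values):

```python
# Hypothetical Show A / Show B example: adjust observed ROAS by an
# assumed attribution capture rate. Capture rates here are illustrative.

def true_roas(observed_roas: float, capture_rate: float) -> float:
    """Scale an observed ROAS by the share of conversions actually tracked."""
    return observed_roas / capture_rate

show_a = true_roas(1.2, 0.25)  # 15 tracked of ~60 real conversions
show_b = true_roas(3.8, 0.80)  # 40 tracked of ~50 real conversions

print(f"Show A true ROAS: {show_a:.2f}x")  # 4.80x
print(f"Show B true ROAS: {show_b:.2f}x")  # 4.75x
```

Same formula, opposite conclusions: the show that looked three times worse is actually the marginal winner.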
This is not a hypothetical. It happens routinely to brands without multi-signal attribution.
The Wrong Campaigns Get More Budget
The flip side of cutting winners is scaling losers. If your analytics systematically overcounts conversions for easy-to-track channels and undercounts for hard-to-track channels like podcasts, you will consistently over-invest in channels that look better than they are.
The channels that tend to look best in click-based analytics are the ones closest to the conversion: retargeting, branded search, lower-funnel display. These channels benefit from the halo of awareness built by top-of-funnel channels like podcast advertising. But in last-touch attribution, they get all the credit.
The result is a feedback loop: retargeting looks great, gets more budget, continues to benefit from podcast-driven awareness while podcast spend is cut back, and then performance slowly degrades as the awareness halo shrinks. By the time the brand realises what happened, it has been running on diminishing returns for six months.
Multi-signal podcast attribution does not solve all of this, but it breaks the feedback loop by ensuring your podcast campaigns get the credit they deserve, making budget allocation decisions less distorted.
The Quantified Cost
Here is a concrete model of what poor attribution costs at two common spend levels.
Scenario 1: Cutting a High-ROAS Show
A brand is spending £1,500 per month on a podcast show. Link-only tracking shows 20 attributed conversions at a 1.1x ROAS. The campaign looks marginal. The brand cuts it.
With proper multi-signal attribution, the true conversion count would have been 70 (the link-only tracking was capturing 29 percent of conversions, typical for a show where listeners prefer to search rather than click links). True ROAS: 3.8x.
By cutting this campaign, the brand is losing £1,500 per month in ad spend that was generating approximately £5,700 in revenue. The monthly opportunity cost of cutting it is £4,200 in revenue. Annualised: £50,400 in forgone revenue.
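The Scenario 1 maths can be reproduced directly from the figures in the text (the article rounds the true ROAS to 3.8x, which is why it quotes roughly £5,700 rather than the unrounded £5,775):

```python
# Scenario 1 from the text: a show cut on link-only numbers.
spend = 1500               # monthly ad spend (GBP)
tracked_conversions = 20   # what link-only tracking reported
true_conversions = 70      # what multi-signal attribution would have found
observed_roas = 1.1

capture_rate = tracked_conversions / true_conversions   # ~29 percent
true_revenue = observed_roas * spend / capture_rate     # ~£5,775 unrounded
monthly_net = true_revenue - spend

print(f"capture rate: {capture_rate:.0%}")
print(f"true monthly revenue: £{true_revenue:,.0f}")
print(f"annualised net opportunity cost: £{monthly_net * 12:,.0f}")
```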
The attribution tool that would have caught this costs significantly less than that. Most podcast attribution platforms, including Castlytics, are priced at less than £200 per month, even at significant campaign volume.
Scenario 2: Over-investing in a Low-ROAS Show
A brand is scaling a show that appears to have a 4x ROAS in their click-based analytics. They increase the ad frequency from one per month to three per month, at £2,000 per placement. Total monthly spend: £6,000.
In reality, the show's audience is tech-savvy and click-happy. The link-based ROAS was accurate: 4x. But the incremental spend on the second and third placements is reaching an audience already saturated with the first placement. The marginal ROAS on the additional spend is closer to 0.7x.
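Blending the placements together shows why click analytics misses this. A sketch of the Scenario 2 figures (the 0.7x marginal ROAS is the text's assumption; no click-based report would surface it directly):

```python
# Scenario 2 from the text: saturation on the second and third placements.
placement_cost = 2000
first_roas = 4.0      # accurately tracked ROAS of the first placement
marginal_roas = 0.7   # assumed ROAS of each additional placement

revenue = placement_cost * first_roas + 2 * placement_cost * marginal_roas
spend = 3 * placement_cost
blended = revenue / spend

print(f"blended ROAS across three placements: {blended:.2f}x")  # 1.80x
```

A blended 1.8x still looks passable in aggregate, which is exactly how £4,000 per month of near-break-even spend hides inside a "4x" show.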
Without attribution data broken down by episode and signal type, the brand cannot see this saturation signal. They continue to overspend on placements that are not returning.
Proper attribution, including conversion rate trends across episodes of the same show, surfaces this within two to three months and prevents continued overspend.
What "Gut Feel" Attribution Actually Costs
Some brands rely on a rough proxy: if we ran a campaign and revenue went up, it probably worked. This is the minimum viable attribution approach, and it has a specific failure mode.
Brand revenue grows for many reasons simultaneously: product launches, seasonal trends, paid social campaigns, PR coverage. Correlating revenue with podcast campaigns without controlling for these other factors means you cannot distinguish a causal relationship from coincidence.
In practice, this leads to some of the most expensive attribution mistakes: attributing revenue growth to a podcast campaign that happened to run during a seasonally strong period, or, conversely, failing to credit a podcast campaign that drove real growth during an otherwise flat period because everything looked flat.
The cost of this is not just the direct misallocation. It is the accumulated wrong model of what works. Brands running on gut feel attribution often oscillate between "podcast ads are amazing" and "podcast ads do not work" without any underlying data to anchor the view. The spending decisions are correspondingly volatile.
The Attribution Tool Pays for Itself
The maths is simple. If you are spending more than £1,000 per month on podcast advertising, and you are not using multi-signal attribution, you are almost certainly making at least one material mistake in your budget allocation. The cost of that mistake, in either cut winners or scaled losers, will exceed the cost of the attribution tool within the first month or two.
Attribution is not a reporting exercise. It is the mechanism by which your podcast advertising spend improves over time. Without it, you are spending in the dark and hoping the overall numbers look good enough to justify continuing.
Stop spending on podcast ads without knowing what is working. Castlytics tracks all four attribution signals for every campaign in one dashboard. Start free and have your first attribution report within 30 days of your next campaign going live.
Related reading: Why Podcast Advertisers Undercount Conversions | Podcast ROAS Benchmarks