On Tuesday, a group of North American public radio stations published a set of guidelines on the best way for podcast companies to measure listenership. The document, which was released as a public Google Doc, gets pretty technical in its middle chunk, but those technical details are mere bells and whistles next to its central purpose: to be an actionable, catalyzing clarion call for the industry to simply and voluntarily adopt a measurement standard.
(This document, by the way, was the development that I was hinting at in this past week’s newsletter. What inconvenient timing!)
These guidelines drew a fair bit of attention from publications more reputable than your friendly neighborhood Hot Pod, including AdWeek, the Observer, Current, and the great Nieman Lab. And, based on my read, their pieces are collectively pretty helpful in sketching out the fault lines, issues, and dynamics at play. But I’ve gotten a bunch of emails asking to further explain what’s going on, and this’ll probably be super stale by the next Hot Pod drop, so here I am with a special Hot Pod on a snowy New York Friday.
Disclaimer: I’m going to try to keep this as streamlined as possible, so I’m not going to go into many of the nuances here. But it's going to be long, and it's going to be wonky, so if you're not interested in any of the advertising and/or CMS stuff, I highly recommend you skip this. We'll get much shorter on Tuesday.
THE PROBLEM. The key thing you need to know is that this is all about the advertisers.
So, essentially, a “download” doesn’t mean the same thing for all players in the space, and that’s a problem for advertisers. As Sarah van Mosel, Acast’s Chief Commercial Officer, eloquently put it in an email to me: “Buyers just need to know that when they’re spending $100K on one podcast, they’re getting the same amount of ‘stuff’ as if they spend $100K on another podcast.”
The ‘stuff,’ in this case, is the potential of getting returns on whatever their marketing goals are. Broadly speaking, for direct marketing advertisers — the Mailchimps, the Squarespaces, the Audibles — those goals tend to be tangible conversions of their promo codes, and for brand advertisers — the big money players like Ford and General Electric, which I’m told everybody wants because of said big money — those goals tend to be intangible stuff like brand awareness, winning over hearts and minds, compelling actions over a longer period of time. To cultivate ‘mindshare,’ which is a terrible word meaning simply to become top of mind for consumers when they, at some later point in their life cycle, go out and try to buy a car or a washing machine. (You know how advertising works, you’ve seen Mad Men.)
So, why doesn’t a “download” mean the same thing for everybody? In my mind, there are two components to this problem.
The first component is technical, and it’s a function of the much-lauded democratic nature of the podcasting space. The podcast universe is made up not just of a huge number of different podcast consumption apps (the native iOS Podcast app, Stitcher, Pocket Casts, Overcast) and podcast hosting platforms (LibSyn, Acast, Panoply, Art19, etc. etc.), but also a huge number of different ways of consuming and hosting podcasts that aren’t podcast specific (Soundcloud is a hosting example here). And with every different app or platform, there’s potentially a different technical way of defining and reporting what constitutes a download, a listen, a unique and meaningful impression.
How technically different can these epistemologies be? Pretty goddamn different, I’d say. The differences can be big and obvious; the most obvious example of this is the famous “download versus listen” question, with some hosting providers/players counting only full downloads as unique listens and other hosting providers counting any stream past 3 seconds (or 5 seconds or 25 seconds) as a unique listen. But the differences can also be hyper-nuanced and whimsically technical within a single category; consider Midroll VP of Biz Dev Erik Diehn’s quibble with the public radio guidelines in the Observer:
"...[Erik] said he might ratchet down the time window used in these guidelines. If two requests come from what looks like it could be one device within a day, these standards call that one download. It might be reasonable to shrink the number of hours, he argues, since multiple people on, for example, one shared private network might download the same show in a day’s time."
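To make that quibble concrete, here’s a minimal sketch of how a hosting provider might apply a deduplication rule of the kind the guidelines describe. Everything here is my assumption, not anything from the doc itself: the 24-hour window, the use of IP address plus user agent as a stand-in for “one device,” and all the names are hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical rule: two requests that look like the same "device"
# (here, same IP + same user agent) within the window count as ONE download.
WINDOW = timedelta(hours=24)  # Diehn's quibble: maybe this should shrink

def count_downloads(requests, window=WINDOW):
    """requests: iterable of (timestamp, ip, user_agent) tuples, any order."""
    last_counted = {}  # device fingerprint -> timestamp of last counted download
    downloads = 0
    for ts, ip, ua in sorted(requests):
        device = (ip, ua)
        prev = last_counted.get(device)
        if prev is None or ts - prev >= window:
            downloads += 1
            last_counted[device] = ts
    return downloads

requests = [
    (datetime(2016, 1, 1, 8),  "1.2.3.4", "Overcast"),
    (datetime(2016, 1, 1, 20), "1.2.3.4", "Overcast"),  # same device, within 24h: deduped
    (datetime(2016, 1, 2, 9),  "1.2.3.4", "Overcast"),  # 25h after the first: counts again
    (datetime(2016, 1, 1, 8),  "1.2.3.4", "Stitcher"),  # different app, same IP: separate device
]
print(count_downloads(requests))  # 3
```

Note how the shared-network problem Diehn points to falls out of this: two roommates behind the same router using the same app collapse into one “device” here, and shrinking the window splits them back apart (along with some genuine duplicates). That trade-off is exactly why the window length is worth arguing about.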
The second component is, for lack of a better word, political. Building off the abundance of different technical ways you can define a download, the podcast industry also isn’t currently incentivized to speak to advertisers in the same language. They’re also not incentivized to adequately question their metrics, which leads to a general state of download inflation. This inflation situation is at the core of the correction that Diehn alludes to in the Observer article: “If everybody adopted these standards today, some shows might come down a little bit in size and some might come down pretty dramatically.”
In this un-incentivized environment, you have the potential for the existence of two kinds of entities that, individually and collectively, can end up hurting the long-term foundational development of the industry:
Entities that are free to abuse advertisers’ lack of knowledge by jumping into the space, relying on inflated download numbers, and seeking short-term profits; and
Entities that are variously invested in the industry’s long-term foundational growth but who are unwilling to drill hard into their metrics for fear of undermining short-term profits.
So the combination of a chaotic reporting culture, the spectre of download inflation, and the potential for not-so-diligent actors leads to a situation that isn’t all that attractive for advertisers and ad agencies looking to make better use of the podcast medium. As the theory goes, that lack of a house in order is preventing more, and bigger, advertisers from jumping into the space, which everybody presumably wants, because everybody's chasing that sweet, sweet ~~scale~~.
And this environment, one would argue, is a function of the industry not having relatively strong third parties able to independently verify metrics for advertisers (like a Nielsen or a Comscore) and enforce competitively productive behavior in the space. Without a strong entity to serve as a check and balance, you’re essentially left hoping that the industry can either self-coordinate or evolve into a productive state of spontaneous order.
Enter the public radio guidelines, then, which are both a call to self-coordinate and, in my mind, a move to compel the entrance of a strong third party. More on that second part in a bit.
WILL THIS DOC HAVE ANY IMPACT? The answer is, obviously, who knows!
But here’s the thing: for measurement standardization to actually happen, it must uniformly take place across the layer of the podcast ecosystem that directly reports to the advertisers — assuming that, moving forward, this responsibility will increasingly lie with hosting providers, as the new ones that have been popping up seem to really want to own that relationship. (I have my doubts about this future.)
And how have players in this layer responded?
Some are cool with it. Acast and Panoply (my former day job employer, btw) have indicated their support and compliance, as has Midroll, which doesn’t directly do measurement but acts as the layer between a bunch of pods and advertisers; the company indicated to me that it will “certainly consider any client on a hosting provider who used this standard to be providing reliable metrics.”
And, as you can imagine, others are not so cool with it. In the Observer article (again!), a VP at LibSyn had some pretty saucy things to say: “The reality is that podcasting has been around for 11 years, and there are companies that understand podcasting methods better than NPR.” Thus we see a fault line, and perhaps a clash between generations.
Because the published guidelines are premised on voluntary adoption, the central tension to watch here is whether voluntary compliance will reach a critical mass, and whether the efficacy of that coalition will hold should any insurgents in the marketplace go rogue and reshape terms with advertisers on their own.
Honestly, I don’t think that such voluntary coalitions can adequately hold over time. Which is why I kind of think this is really about the IAB.
ON ENFORCEMENT. The Interactive Advertising Bureau (IAB) is one of those third parties I was talking about, except bigger and encompassing the full spectrum of digital media. It’s a non-profit advertising business association.
The thing you probably should know is that the IAB currently has two working groups focused on podcast advertising, aptly named the “Podcast Technical Working Group” and the “Podcast Business Working Group.” If you look at the membership makeup, you’ll see a few names that are also on the contributors list in the public radio guideline doc — notably, 3 out of 5 of the doc’s steering committee.
So, I’m personally interpreting the doc as a political move by this public radio group to speak directly to the IAB, and to nudge along whatever conversation is going on inside. I don’t think anybody can reasonably conclude that voluntary adoption of these standards is actually possible, and as such, I’m thinking that this is a way to rope the IAB further into being the third-party arbiter of advertising in the podcasting space.
Of course, the larger question remains whether the IAB, nudged or otherwise, will actually be the body that ends up enforcing these much needed standards. I’m personally not sure; as it stands, podcasting very much remains a niche industry (far too niche to demand notable enforcement resources from the IAB, whatever those resources may be), and furthermore, the IAB is tangled up with a bigger bogeyman of its own to tame: the rise of ad-blocking technology.
But, I suppose, one can hope.
SOME STRAY OBSERVATIONS
Something historically fun to think about: the whole shitty measurements situation is partly why podcast advertising has so far been dominated by direct response marketers — the Squarespaces, the Mailchimps, the Harry's, the Audibles — because these advertisers are able to get around the ambiguity of downloads by focusing on the promo codes that they provide to podcasters. By focusing on promo code conversions, these advertisers can sidestep the download as the key metric altogether. After all, what’s better: a 25,000-strong audience that generates 200 promo code uses, or a 5,000-strong audience that generates 1000 promo code uses?
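Run the numbers on that rhetorical question and the direct-response logic becomes obvious. The figures below are just the hypotheticals from the paragraph above, nothing more:

```python
# Two hypothetical shows running the same promo-code campaign.
big_show = {"audience": 25_000, "promo_uses": 200}
small_show = {"audience": 5_000, "promo_uses": 1_000}

def conversion_rate(show):
    """Promo code uses per audience member -- the metric a direct
    response advertiser actually cares about."""
    return show["promo_uses"] / show["audience"]

print(f"big show:   {conversion_rate(big_show):.1%}")    # 0.8%
print(f"small show: {conversion_rate(small_show):.1%}")  # 20.0%
```

The smaller show converts at twenty-five times the rate, which is why a direct-response buyer can shrug at download inflation: inflating the denominator only makes a show’s conversion rate look worse.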
Here’s something else that’s fun to think about: you know who doesn’t have to care jack-shit about all this hullaballoo? Subscription players, that’s who. So far, an exceedingly small number of players has adopted this model, and they are so far insulated from these politics and dynamics: off the top of my head, there’s Audible (obviously) and Artie Lange. Midroll has Howl, but that’s not their core business so far, so let’s call that a hybrid.
Some folks have pointed out to me and to others: podcast metrics may be shitty, but at least it’s not as shitty as in other mediums (particularly TV and radio). Those metrics have long been suspect in terms of their capacity to accurately describe the actual size of an audience. But those mediums have the benefit of legacy, and a long history with advertising and advertising relationships that presumably came about because, back then, buyers had few major choices. Podcasting is coming up in a world that has so, so, so many media choices, and so it has every burden to carry and everything to prove. The onus is on pods to prove value.
Another thing I like thinking about: how will the entrance of streaming players like Google Play Music and Spotify impact this measurement standards question? Podcast creators would then have to actually start playing the distributed content game and wade into that swamp.
One line in the document that raised my untrimmed eyebrow: “If public media stakeholders do not work together now to define standards for the measurement of on-demand audio, others outside of our industry will do so and may establish parameters that are incompatible with public media operating environments.” Can someone spell out the technical incompatibilities to me? I’ve always thought that “underwriting/sponsorships” are essentially the same thing as ads, with the difference being purely relational context. Why include that line in what is essentially a technical doc?
Okay, that’s enough from me. This has become too long a newsletter, and I’ve blown two hours this morning that I probably should’ve spent building the site. Oy.
But if you have any responses, and if you believe I’m getting anything wrong, let me know. This’ll live as a public Google Doc as well, to follow public radio’s suit, and I’ll add additional reader notes/corrections if and when they pop up. Let’s figure this out together.