Measure the Music That Matters

By Peter Don, BPR

Don’t be fooled by the shiny new toy.

Here’s the uncomfortable truth about many “music insight” dashboards: they’re brilliant at counting clicks but terrible at understanding listeners. Streams, skips and social trails are digital footprints—useful clues, sure—but if you program a station off footprints alone, you risk building a playlist that pleases the algorithm more than the audience. Good-quality music research does the opposite. It treats your station like a living brand with distinct fans, neighbours, and fence-sitters, then asks the right people the right questions in the right context. That’s how you get a sound, not just a bunch of songs.

First, proper research starts with people, not platforms. It recognises P1 “core” listeners (the heavy users who define your sound), P2 “secondary” listeners (regulars who spend more time elsewhere), and P3 “occasionals.” Each group has a different job to do: P1s anchor identity and consistency; P2s are where share growth often lives; P3s shape cume potential and perception. Footprint data can’t tell you which click came from a P1 superfan or a P3 tourist—it flattens everyone into the same metric. A solid design samples these segments deliberately so you can decide when to super-serve the fans and when to widen the net without eroding the core.
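The P1/P2/P3 idea can be made concrete with a small sketch. Assume each survey respondent reports (or is metered for) the share of their weekly radio time spent with your station; the cut-off values below are illustrative, not industry-standard thresholds.

```python
# Sketch: bucketing respondents into P1/P2/P3 by the share of their
# weekly radio listening given to our station. Thresholds are
# illustrative assumptions, not standard cut-offs.
def segment(share_of_listening: float) -> str:
    """share_of_listening: fraction of weekly radio time spent with us (0-1)."""
    if share_of_listening >= 0.5:
        return "P1"   # core: we are their most-listened station
    if share_of_listening >= 0.2:
        return "P2"   # secondary: regular, but spends more time elsewhere
    if share_of_listening > 0.0:
        return "P3"   # occasional: light or incidental listening
    return "non-listener"

respondents = {"Ana": 0.62, "Ben": 0.31, "Caro": 0.05, "Dev": 0.0}
print({name: segment(s) for name, s in respondents.items()})
```

The point of the sketch is the deliberate sampling it enables: once every respondent carries a segment label, song scores can be read separately for P1s and P2s instead of being flattened into one number.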

Second, quality research respects age and gender targets. Real stations live on bell curves, not in the mythic “18–64 everyone.” A contemporary pop outlet is built around younger female cores; a rock or classic rock brand may live with 25–44 or 40–65 males; adult-contemporary flavours shift with life stage. Those differences matter because “what’s a hit” depends on who’s listening, at what time, and in which environment. Digital footprints don’t know if that stream came from a 24-year-old on earbuds or a 52-year-old in the car; they rarely capture station association, and they definitely don’t test whether a song fits your brand. Strong research does—with quotas, context, and questions framed around your station.
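A quota check is simple to sketch. Suppose a classic-rock brand defines males 40–65 as its core demographic cell; the sample data and cell boundaries below are hypothetical.

```python
# Sketch: tallying a survey sample against age/gender cells, with
# males 40-65 as the assumed core target for a classic-rock brand.
from collections import Counter

def cell(age: int, gender: str) -> str:
    band = "40-65" if 40 <= age <= 65 else ("25-39" if 25 <= age <= 39 else "other")
    return f"{gender} {band}"

sample = [(52, "M"), (44, "M"), (31, "F"), (61, "M"), (29, "M")]
counts = Counter(cell(a, g) for a, g in sample)
print(counts["M 40-65"])  # prints 3: three of five respondents sit in the core cell
```

If the core cell is under-filled, recruitment continues until the quota is met; footprint data offers no equivalent lever, because it never knew the demographics in the first place.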

Third, the best studies separate familiarity from passion. Listeners cannot love what they don’t know; they can like it on first exposure, but love is learned. That’s why reliable methods track both recognition and emotional intensity, plus the “don’t like at all” landmines that drive tune-out. Footprint methods typically reward recency and ubiquity—songs with algorithmic momentum—creating circular logic: it’s big because it’s played; it’s played because it’s big. Proper testing shows you which currents will become loved with exposure, which recurrents deserve power, and which “buzzy” songs should sit out because they inflame your core.
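Separating recognition from intensity can be sketched in a few lines. Assume a common song-test convention: familiar respondents rate a song 1–5, and "never heard it" is recorded as a non-answer; the field layout is an assumption for illustration.

```python
# Sketch: familiarity vs passion vs dislike from song-test responses.
# Assumed scale: 1-5 among those who recognise the song; None means
# the respondent didn't know it. Conventions here are illustrative.
def score(ratings):
    """ratings: list of 1-5 ints, or None when the song wasn't recognised."""
    known = [r for r in ratings if r is not None]
    familiarity = len(known) / len(ratings)   # share who recognise the song
    passion = (sum(1 for r in known if r == 5) / len(known)) if known else 0.0
    dislike = (sum(1 for r in known if r == 1) / len(known)) if known else 0.0
    return familiarity, passion, dislike

# A new current: half the sample doesn't know it yet, but those who do love it.
fam, love, hate = score([5, 5, None, None, None, 4, None, 5])
print(f"familiarity={fam:.2f} passion={love:.2f} dislike={hate:.2f}")
```

Read together, the two axes tell different stories: low familiarity with high passion suggests a current worth exposure; high familiarity with rising dislike is a tune-out landmine regardless of how big its stream counts look.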

Fourth, good research builds a music signature—the fingerprint of types, eras, and textures that make your station instantly recognisable. It helps you set guardrails (how far into the 70s? which 2010s flavours? how much pop-rock vs rhythmic?) and then shows where to place highlights so the day never feels flat. Footprint data can hint at trends, but it can’t design a coherent identity; without a signature, you drift into “random great songs” radio, which is forgettable in a world of infinite playlists.

Fifth, context matters. Strong designs ask in the listening context—morning commute vs office vs weekend—because appetite changes with tasks. They also map results to station choice: why do some P2s come to you for a specific fix (e.g., 2000s pop-rock) but leave for something else? Armed with that, you can tighten clocks, pace textures, and promote the moments where you’re already winning. Digital footprints rarely know the job the music was hired to do.

Sixth, strategy > trivia. Quality research answers business questions:

  • How do we raise satisfaction among P1s without shrinking the tent?
  • Which flavours attract high-value P2s we can convert?
  • Where is familiarity high but passion lagging—and what rotation will grow it?
  • Which songs test “polarising popular” (lots of love, lots of hate) and need careful placement?
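The last question in the list lends itself to a tiny sketch: a song is "polarising popular" when love and hate are both high among those who know it. The 25% thresholds below are an illustrative assumption, not a standard.

```python
# Sketch: flagging "polarising popular" songs - lots of love AND lots
# of hate on the same 1-5 test scale. Thresholds are illustrative.
def polarising_popular(ratings, threshold=0.25):
    known = [r for r in ratings if r is not None]
    if not known:
        return False
    love = sum(1 for r in known if r == 5) / len(known)
    hate = sum(1 for r in known if r == 1) / len(known)
    return love >= threshold and hate >= threshold

print(polarising_popular([5, 5, 1, 1, 3, 5, 1, None]))  # prints True: splits the room
```

A song that trips this flag isn't necessarily dropped; it's placed carefully, away from dayparts where the haters dominate.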

Clickstreams can’t prioritise like that. They’re descriptive, not prescriptive.

Finally, the market reality: the more media choice a listener has, the more disciplined your research must be. In small or isolated markets, sloppy mixes sometimes get a pass—less competition, fewer alternatives. In big markets, a fuzzy proposition is punished fast. Robust music research provides the discipline to stay consistent and deliver regular highs: the right golds for reassurance, the right currents for freshness, the right spices for surprise—always inside your brand.

In short, footprints tell you where someone walked; great research tells you who they are, why they came, and what will make them stay. Use footprints as supporting clues, but build your station on representative samples, clear targets, passion vs familiarity metrics, and signature-driven guardrails. That’s the advantage of proper music research: it doesn’t just chase interest—it manufactures loyalty.
