The Key to Getting the Most out of Your Music Research
What can research respondents tell you about songs? Keep in mind, they’re respondents – by definition they’ll respond to anything you ask them. People who think a question is irritating or meaningless or confusing often end up becoming non-respondents. But those who keep giving answers to questions, even if those answers are not very meaningful, are the backbone of market research the world over.
A common notion is that music testing works best when it keeps respondents out of their heads – when they're reacting without consciously considering their choices, like being on autopilot. In the car, listeners aren't making reasoned choices about whether to turn a song up or switch to another station:
Good song: Their thumb glides over the thumbwheel on one side and ups the volume.
Bad song: Their other thumb nudges the switch on the other side of the steering wheel and the station is changed. No drawn-out thought process, just autopilot.
To keep them in that part of their brain, the interview needs to be kept simple:
Typically, respondents are presented with a scale offering a number of choices for each song. Every choice maps to an action, except the response for an unfamiliar hook (we wouldn't ask people to rate a song they don't know).
We know that it takes each respondent a few songs to get comfortable with the scale, so hooks are randomized to negate that placement bias. Once they get the hang of it, they fly through the interview – and that’s the way we think it should be.
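To make the mechanics concrete, here's a minimal sketch in Python of how per-respondent randomization counters placement bias. The scale labels and function names are illustrative assumptions, not NuVoodoo's actual instrument:

```python
import random

# Hypothetical response scale -- labels are illustrative, not
# NuVoodoo's actual instrument. Every choice maps to an action
# except "unfamiliar", which skips the rating entirely.
SCALE = [
    "unfamiliar",        # don't rate songs the respondent doesn't know
    "turn it off",
    "switch stations",
    "leave it on",
    "turn it up",
    "one of my favorites",
]

def build_interview(hooks, seed=None):
    """Return a per-respondent hook order.

    Shuffling independently for each respondent spreads every hook
    across all interview positions, so the first few songs -- rated
    while people are still learning the scale -- differ from person
    to person and the placement bias averages out.
    """
    order = list(hooks)
    random.Random(seed).shuffle(order)
    return order
```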
Sure, you could ask whether songs fit on your station. They'll give you an answer, because they're respondents. But they really don't know whether the song fits. That's your job. Respondents know what songs and types of songs they've heard on your station. When you ask them about "fit," they'll respond with what you've taught them. By acting on that information, you create a tautology and reinforce the too-narrow, same-sounding, repetitive playlists that have been giving radio a black eye for decades.
Worst of all, asking the additional questions about fit pulls respondents out of autopilot to think critically about an issue they've never thought about before. And knowing those additional questions are lurking after each hook means they're now thinking about issues they won't be thinking about when they're tooling down the highway at 60 miles per hour.
Or you could ask whether they perceive a song to be going up or down the chart. If they really think it’s “played out,” the song will have a high burn score and a smart programmer will pull back on exposure. If the song is really increasing in popularity, its scores will improve week over week. Again, respondents will answer the question because they’re respondents, but each additional answer means they’re thinking more like a program director and less like a listener.
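As a rough illustration of the arithmetic, burn is often reported as the share of familiar respondents who say they're tired of a song. This sketch assumes a simple (familiar, tired) data shape, which is our invention rather than any vendor's actual export format:

```python
def burn_score(responses):
    """Share of *familiar* respondents who are tired of the song.

    `responses` is a list of (familiar: bool, tired: bool) tuples --
    an assumed data shape for illustration only. Burn is computed
    only among people who know the song, since you can't be tired
    of a song you've never heard.
    """
    familiar = [tired for fam, tired in responses if fam]
    return sum(familiar) / len(familiar) if familiar else 0.0

# A song genuinely rising up the chart shows improving positive
# scores week over week (hypothetical numbers):
weekly_scores = [62.0, 65.5, 68.0]
rising = all(a < b for a, b in zip(weekly_scores, weekly_scores[1:]))
```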
Really, we think the interview works best when respondents don't think about it very much. Maybe you've heard the effect in a focus group, as listeners start rationalizing their behavior when talking about radio stations for an hour. Or you've listened to a friend justify a new car purchase with the car's great safety record or tremendous gas mileage – when you know the decision was actually driven by how the car makes them feel.
Valuable data will tell you which songs are familiar and which songs people really like. From there, it's up to expert programmers to make decisions using experience and intuition. Online services like Pandora do the best they can with algorithms, but human-curated playlists and schedules, augmented by actionable information from consumers, have created magic for many stations for many years.
Article originally posted by NuVoodoo Media Services.