Making sure opinions matter

It wouldn’t be a stretch to say that opinions form the basis of almost all market research, and without them we wouldn’t be able to obtain the key consumer insights that we do for our clients. As researchers, we should therefore treat these opinions as an invaluable asset, ensuring they are as insightful and representative as possible. While rolling out a new panel for one of our telecoms clients, I mused over some ideas for improving the value of these opinions. They may seem pretty fundamental at first glance, but they are areas often neglected during the research process.

1.      Panelists are people, not just a resource.

As researchers - whether you work agency side, client side or as a panel provider - we can all be guilty of viewing our panelists simply as a source of data, serving them exhaustive, unengaging surveys in search of quick, short-sighted answers to very tactical questions. While surveys remain an undoubtedly valuable method of research, it is important to remind ourselves that panelists are real people who would prefer to take surveys in a way more attuned to their daily lives: in an ideal world, 5-10 minute surveys they can take on whatever device they’re most comfortable with, rather than 20-minute slogs that only really function properly on a computer. Surveys that fit seamlessly into their media consumption habits are more likely to yield a response, and ultimately more reliable and insightful answers. One great example is the use of swiping, particularly if you’re trying to measure people’s instinctive, implicit reactions to brands, products or creatives. It offers a fun, intuitive way of serving short, snappy surveys, and the level of insight available can be surprisingly rich.

2.      We should know more about our respondents.

Survey participants are now more aware of the value of their answers than ever before, so it’s important we gather as much profiling data as possible to ensure our panels remain useful and representative. Extra effort should be made to gather this information unobtrusively; methods such as passive metering allow us to build detailed profiles of people’s habits and behaviours as they occur, without needing to pepper them with continuous surveys. We can also make better use of appended data for those people we’ve already recruited, or indeed include more questions at the recruitment stage to establish a richer, more interesting picture of our panelists. Imagine you could cut all of your data by people who… get the tube to work, use an iPhone, shop during Black Friday, feel optimistic about the future; the ways we can profile people are practically endless, so we should be taking advantage of this on our panels. On one of our retail accounts here, we’ve been looking at how people’s shopping habits differ if they identify as being energy conscious, and it’s thrown up some interesting insights.

3.      We can learn more from drop-outs.

Should we be trying to understand more about why people drop out of surveys? Of course, many would be quick to dismiss any follow-up or pop-up question once they’ve already decided to leave the link, but even if just a few responded, it could provide some valuable insight into what’s driving their decision to exit. Responses to this kind of question could shed light on long-standing pain points within a survey that you’d been oblivious to, as well as ideas for improvement. Although we are supposed to be the experts on survey design, our ability to judge flow and the overall participant journey is inherently biased by our close involvement with a project, particularly on long-running trackers. Feedback from frustrated drop-outs could turn out to be surprisingly useful or eye-opening, and if it prompts tweaks to your survey, it could ultimately improve response rates, data quality and even how representative your sample is. Last, and certainly not least, it shows that you care about the opinions of these people and are constantly looking for ways to improve their experience. We feel prompting people for this kind of feedback is an important part of the research process, and as such we’ve been building it into most of our online communities.

Chris Handford