I’ve run more than fifty primary customer research exercises and have probably made every mistake there is to make. This is one of those tasks that seems simple on the surface but the devil is in the detail. I want to share my checklist that any marketer or strategy consultant should use when preparing to run customer research, focusing on online, quantitative surveys.
This guide is intended for customer research to inform internal choices about strategy and tactics. If you are creating a survey to generate website content or PR coverage then some of the steps below may not apply – but I think most are helpful nonetheless.
If you have any to add – please let me know, I’m sure that the list can always be improved.
How to run great customer research
Planning your survey
Be clear on the hypotheses you’re looking to test and tie each one to specific questions, to ensure you’ll get the answers you need. This helps you avoid surveys that are too long or have sprawling logic.
Start with some small-scale qualitative research: a combination of one-to-one discussions and focus groups to inform the questions and response options for your larger-sample (and more expensive) quantitative research. I can highly recommend Kathryn Coles at White Rabbit Research for any focus groups you are looking to run.
Don’t outsource the design of your survey script to an agency or to someone who has never written customer research before – crafting a survey script is a skilled job, and it is much harder to edit a lengthy script that you aren’t happy with than to write it well in the first place. As a minimum, write the questions and a starting list of response options yourself.
Targeting your survey
When researching to understand customer acquisition behaviour, I normally exclude those customers whose last purchase is not within recent memory. Humans unfortunately have short memory spans for the finer details that you will be interested in capturing. The definition of ‘recent’ will broadly correlate with the significance of the purchase: e.g. the last week for purchasing a cup of coffee, but the last three years for selecting an ERP system.
Target your survey at both customers and non-customers – this will provide invaluable context to the perspectives of your own customers. You’ll typically need to use a panel provider to reach non-customers.
Decide the target sample size based on the statistical significance you aim to achieve and the number of ‘cross-tabs’ you want to analyse your data by. The rule of thumb is that for most businesses, where there is a large group of potential respondents (say more than 50,000), you need a sample size of 380+ for a 95% confidence level with a 5% margin of error – you can use one of many online calculators to help with this. If you want to introduce cross-tabs, then each subset of the sample within a cross-tab needs this sample size, e.g. regional data, gender, age etc. That’s why most consumer surveys start at a sample size of 2,000, e.g. to allow segmentation into five age groups when reviewing responses.
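If you’d rather not rely on an online calculator, the rule of thumb above falls out of the standard sample-size formula for a proportion, with a finite population correction. A minimal sketch in Python (the function name and defaults are my own illustration):

```python
import math

def sample_size(population, confidence_z=1.96, margin=0.05, p=0.5):
    """Required sample size for estimating a proportion.

    confidence_z: z-score for the confidence level (1.96 ~ 95%).
    margin:       desired margin of error (0.05 = +/- 5 percentage points).
    p:            assumed proportion; 0.5 is the most conservative choice.
    """
    # Infinite-population sample size: n0 = z^2 * p * (1 - p) / e^2
    n0 = (confidence_z ** 2) * p * (1 - p) / margin ** 2
    # Finite population correction shrinks n slightly for smaller populations
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

print(sample_size(50_000))     # 382 -- just over the '380+' rule of thumb
print(sample_size(1_000_000))  # 385 -- barely changes for larger populations
```

Note how the answer barely moves once the population is large, which is why the rule of thumb works. The cross-tab point follows directly: five age groups at ~380–400 respondents each gets you to the typical 2,000 overall.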
If you are running a survey with senior B2B decision makers, I would be sceptical about using large B2C-focused panel providers. Whilst they might ask their members about their job titles, they will do very little validation. I’ve seen panels with a surprisingly high proportion of Chief Executives! Instead you can use one of the expert network providers such as Third Bridge or GLG. You will pay a premium per respondent but I’ve found the quality to be much better.
Designing your survey questions
You will typically need a handful of screening questions at the start of any survey, to ensure that you are reaching your intended respondents. These are also an incredibly valuable way of understanding ‘incidence’ among the population. Make sure your panel provider sends out the survey to a representative sample, so that when you ask a question like ‘do you own’ or ‘do you use’, the responses give you a measure of incidence. As a bonus, many panel providers will not charge you for these initial screener questions.
For many products or services, both B2B and B2C, include a screener question to confirm that the respondent was the decision maker in their business or household for the most recent purchase that you intend to ask about.
Focus on actual, recent behaviour as opposed to expected behaviour or intentions – my own experience and plenty of well-documented research say that our ability as humans to predict our own behaviour is pretty limited, and we end up giving the answers we think are expected.
Ask open questions – in other words don’t lead the witness. This is a common issue I’ve seen with draft scripts. Avoid framing or anchoring in your question if you want valid answers.
A fairly standard question that I include is around provider awareness and usage – a list of providers, with options for:
‘I’ve never heard of this provider’;
‘I’m aware of this provider but haven’t used them’;
‘I’ve used this provider in the past but not currently’; and
‘I’m a current customer of this provider’.
Many survey scripts I see include questions about provider selection criteria – ‘why did you choose brand ABC’. These can be helpful, but I prefer to focus first on purchase triggers - ‘why did you decide to consider purchasing [product/service]’ - and initial research behaviour – ‘what did you do first to find potential providers of [product/service]’. As a marketer, I find these responses more actionable than selection criteria, which tend to be more about the product/service itself than the customer’s purchase journey.
Linked to this, you can ask separate questions about why a customer has continued to purchase a product/service from the same provider – the reasons may well be different from those when they first purchased. It is also worth asking how hard the customer believes it would be to switch provider.
I always include a Net Promoter Score question – for providers that the respondent has recent, direct experience of using. Make sure to follow this up to understand the reasons for high and low scores, and consider also including a free text field for ‘what would provider ABC have to change to achieve 10/10?’.
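For anyone analysing the results, the standard NPS calculation buckets the 0–10 ratings into promoters (9–10), passives (7–8) and detractors (0–6), then subtracts the percentage of detractors from the percentage of promoters. A minimal sketch:

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 ratings:
    % promoters (9-10) minus % detractors (0-6); passives (7-8) drop out."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 4 promoters and 2 detractors out of 8 responses -> (50% - 25%) = 25
print(nps([10, 9, 8, 7, 6, 3, 10, 9]))  # 25
```

The resulting score ranges from -100 (all detractors) to +100 (all promoters), which is worth remembering when benchmarking: a ‘positive’ score only means promoters outnumber detractors.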
Use language your respondents will understand (and test this out) – avoid business jargon (even in B2B surveys) and include definitions for any terms there may be some ambiguity around.
Designing your response options
Ensure you provide a ‘mutually exclusive, collectively exhaustive’ (MECE) set of response options for respondents, in particular where you will ask them to choose just one response. This is where qualitative research can be a great precursor to a survey, helping to inform your response options. Overlapping response options will cause confusion and result in poor responses.
Include ‘I don’t know’ / ‘I don’t remember’ as options. You might get slightly fewer usable responses as a result, but you will get better quality responses by excluding guesses; and it is also an interesting datapoint to see whether respondents actually recall the detail you are asking about.
When seeking sentiment-type responses (e.g. ‘agree’ vs ‘disagree’), use an even number of response options. When you have an odd number of response options, respondents tend to gravitate towards the middle, ‘neutral’ option as it is cognitively easier for them – remember that even in well-rewarded surveys, most respondents are trying to complete their responses as quickly as possible.
Running your survey
You will normally start by ‘soft launching’ your survey to a small set of respondents. Make sure to review the responses carefully, in particular any free text fields, to spot confusing questions, missing response options, and broken survey logic.
When asking your question around provider awareness, include some spoof provider names (i.e. make them up) – this is a good way to subsequently exclude respondents who are speeding through the survey. Just do a quick online search to make sure you’ve not accidentally come up with a real business name!
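The exclusion step itself is simple to automate once you export the responses. A rough sketch, assuming each response is a mapping from provider name to the chosen awareness option (the spoof brand names and the option labels here are my own made-up illustration, not from any real survey platform):

```python
# Hypothetical spoof brand names inserted into the awareness question.
SPOOF_PROVIDERS = {"Acme Cloudworks", "Northgate Digital"}

def is_speeder(response):
    """Flag a respondent who claims any awareness of a made-up provider.

    `response` maps provider name -> awareness option, e.g.
    {"RealBrand": "aware_not_used", "Acme Cloudworks": "never_heard"}.
    """
    return any(
        response.get(brand, "never_heard") != "never_heard"
        for brand in SPOOF_PROVIDERS
    )

responses = [
    {"RealBrand": "current_customer", "Acme Cloudworks": "never_heard"},
    {"RealBrand": "aware_not_used", "Acme Cloudworks": "used_in_past"},
]
clean = [r for r in responses if not is_speeder(r)]
print(len(clean))  # 1 -- the second respondent 'used' a fake brand and is excluded
```

Whatever tooling you use, apply the same filter consistently across the soft launch and the full launch so the two samples stay comparable.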
Ensure you understand the true incentives for panel respondents and that these match with the length and complexity of the survey. If you underpay you risk getting poorer quality responses. A panel provider should be willing to provide this information, and you should be wary if they are not.
Include a free text field at the end of your survey asking for any further comments or feedback. This is a useful way to find errors in the survey when you are soft launching.
Customer research is an essential input to defining your customer acquisition strategy, and this checklist will help to ensure a smooth process and high quality output. If I've missed any items from your own checklist, please let me know.
The one topic I've not covered here is how to test attitudes toward price in a survey, as that is worthy of a post by itself – watch this space.
If you’d like to discuss how to run primary customer research for your business, please Contact Me.
All views expressed in this post are the author's own and should not be relied upon for any reason. Clearly.