Why should you care about response rates?

At Critical we believe that the quality of data really matters. 

After all, it forms the basis of any insights gleaned from research. This means that thinking about data quality shouldn’t begin once the data starts to roll in: it should start during research design, be monitored throughout the project, and be checked thoroughly at the end.


Let’s look at one key influence on data quality in surveys: response rate. This is the proportion of the people invited who go on to complete the survey.

As with all quantitative research, the total number of survey responses (or sample size) is important, so a high response rate is always desirable. However, we believe that a more interesting – and more important – question for judging data quality than “How many?” is “How many of whom?”. In other words…

“Does the response rate differ by demographic or other sub-groups of interest, and if so, does this have an impact on results?”

To answer this question, let’s work through a scenario.

Here’s an example…

Imagine you work at a business, and you want to find out your current levels of customer satisfaction. You have a contact list of 10,000 customers and you email them all an invitation to complete a satisfaction survey. 1,000 of them respond. 

At a response rate of 10%, you are feeling pretty confident that you have a robust sample size. (NB: what counts as a ‘robust’ sample size varies depending on the population size and the number of sub-groups you wish to analyse by; response rates for a customer satisfaction survey would typically be 6-10%.)

You process the data and find that the overall satisfaction score is 77%. 

[Figure: different levels of satisfaction]

You check the profile of your sample against the total population, and all looks reasonable. 60% of respondents are male and 40% are female, which is great because your database is 60% male, 40% female! Your sample profile matches your population profile.  

You do some deeper analysis to try and find any interesting differences and, to your surprise, you find that your male customers have an 85% satisfaction score, whilst your female customers score only 65%. In other words, your male customers are more likely to be satisfied with your service than your female ones – this is a key finding!

But what would happen if males were less likely to complete the survey than females – i.e. the response rate differed by gender? Would we make the same important discovery?

Now imagine that you run the survey again, but this time your male customers are less interested in completing the survey than your female ones and you get a sample breakdown of 20% male to 80% female – no longer a good match to the population profile. Assuming the same satisfaction scores by gender, the overall satisfaction score is now 69% – significantly lower than the original 77%.
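To see the arithmetic, here’s a minimal sketch in Python (purely illustrative, using the figures from our scenario): the overall score is simply each group’s score weighted by its share of the sample.

    # Satisfaction scores by gender, from the scenario above
    scores = {"male": 0.85, "female": 0.65}

    def overall(sample_profile):
        # Overall score = sum of each group's score times its sample share
        return sum(scores[g] * share for g, share in sample_profile.items())

    print(round(overall({"male": 0.60, "female": 0.40}), 2))  # 0.77 - representative sample
    print(round(overall({"male": 0.20, "female": 0.80}), 2))  # 0.69 - skewed sample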

So differing response rates can have an impact on results.

Help! My sample profile doesn’t match the population profile  


Disappointing as it is, there’s no need to panic just yet! 

We can try to correct for the differing response rates by applying corrective weighting – adjusting the data so that our sample profile matches the population profile.

In this case, because males are under-represented (they make up 20% of our sample but 60% of our population), we give each male response a higher value (or weight) in our sample.

Therefore each male interview is multiplied by a factor of 3 (so that our 200 interviews represent 600 interviews overall) whereas each female interview is down-weighted by 50% (i.e. 800 interviews represent 400). After doing this, we come to the same satisfaction score of 77% as when we had a more representative sample.
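As a rough sketch (illustrative Python again, not a full weighting procedure), each group’s weight is its population share divided by its sample share, and the weighted score recovers the representative result:

    population = {"male": 0.60, "female": 0.40}  # profile of the database
    achieved = {"male": 200, "female": 800}      # interviews completed
    scores = {"male": 0.85, "female": 0.65}

    n = sum(achieved.values())
    # Weight = population share / sample share (rounded for display)
    weights = {g: round(population[g] / (achieved[g] / n), 2) for g in achieved}
    print(weights)  # {'male': 3.0, 'female': 0.5}

    # Weighted overall satisfaction score
    weighted = (sum(achieved[g] * weights[g] * scores[g] for g in achieved)
                / sum(achieved[g] * weights[g] for g in achieved))
    print(round(weighted, 2))  # 0.77 - back to the representative result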

However, these adjustments come at a cost: an increase in the margin of error, i.e. the difference between the estimated result from our sample and the true result we would get if all customers had completed the survey.
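One standard way to quantify that cost is Kish’s effective sample size, (Σw)² / Σw², where w are the individual weights. Applying it to the weights above:

    # 200 male interviews weighted by 3, 800 female interviews weighted by 0.5
    weights = [3.0] * 200 + [0.5] * 800

    # Kish's effective sample size: (sum of weights)^2 / sum of squared weights
    n_eff = sum(weights) ** 2 / sum(w ** 2 for w in weights)
    print(n_eff)  # 500.0

In other words, our 1,000 heavily weighted interviews carry roughly the precision of 500 unweighted ones, so the margin of error widens accordingly.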

How can I achieve a representative sample?

Before fieldwork begins, consider the population profile and who you wish to target. We can set quotas – targets for the number of interviews to achieve – for specific sub-groups so that our sample includes these groups and is representative of the population as a whole.

If we expect lower response rates in key sub-groups (e.g. by gender), we can try to correct for this at the outset by oversampling – sending more survey invitations to those groups than we would if inviting at random – to ensure that we achieve a robust sample size in each, as the sketch below illustrates.
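As a back-of-the-envelope sketch (the targets and response rates here are hypothetical – in practice you might estimate them from a previous wave), the number of invitations to send each group is simply the target number of interviews divided by that group’s expected response rate:

    # Hypothetical targets and expected response rates by group
    targets = {"male": 600, "female": 400}        # interviews for a 60:40 sample
    expected_rr = {"male": 0.10, "female": 0.20}  # assumed response rates

    # Invitations to send = target interviews / expected response rate
    invites = {g: round(targets[g] / expected_rr[g]) for g in targets}
    print(invites)  # {'male': 6000, 'female': 2000}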

What is a ‘good’ response rate?

For a typical consumer online survey with an organisation’s customers, we would expect to achieve a response rate of c.6-10%. But in B2B markets this can be significantly lower unless appropriate survey incentives are offered. Response rates can also differ substantially by data collection method.

But as we’ve just demonstrated, who you survey can be just as important as how many interviews are completed.

Top tips to improve response rates

At Critical we have extensive experience of surveying individuals across different markets in the UK and abroad. Here are our top tips:

1. A complete and clean database/sampling frame. It seems obvious, but sending invitations to unchecked or out-of-date email addresses won’t help. Take some time to make sure your database is as good as it can be.

2. A well-written introduction/invitation. Give people a good reason to take part; explain why their opinion counts and what the survey results will be used for. Provide reassurance of data confidentiality and reference Market Research Society guidelines where appropriate.

3. Engaging questions and good survey design. Once they have clicked on the survey link you are halfway there – don’t lose them now! Make your survey stand out visually and make it an interesting and enjoyable experience – it’s an extension of your brand, so incorporate your branding and style of communications. Consider the question text, flow, and survey routing. And crucially, make sure it’s optimised for mobile devices.

4. Survey length. Yes, we may sound like a broken record, but it’s so important! Online surveys should typically take a maximum of 15 minutes. Make use of your database – customers get frustrated if you ask them questions they think you should already know the answer to.

…that’s already a lot to consider, and we haven’t even mentioned having a choice of survey completion methods; who is responsible for sending the invitations (client or agency); timing – when is the right moment to ask for feedback; incentives; reminders…

Let us know what has worked well for you and please get in touch if you want to chat about how best to optimise your survey response rates.