Determining good and statistically significant response rates

The very first question we are often asked when we meet healthcare professionals interested in working with us is, ‘What response rates will we get with Cemplicity?’

Everyone understands that when a large proportion of your target respondents take part in your survey, you have the foundation for a successful programme.

However, achieving a census (where everyone in your target group responds) is nigh on impossible. So, most of us accept that we can only expect to work with responses from a sample of these respondents. (For the naysayers we come across who will not use data unless 100% of their patients have taken part, this paper is for you!)

An understanding of statistics is helpful in determining whether your sample is reliable enough to base decisions on. In this paper we talk about some useful statistical techniques to guide your approach.

It also helps to understand norms for response rates in the healthcare sector. We may hear about exceptionally high response rates (80%+) or exceptionally poor ones (<2%), but what should you expect? We share some of our global response rates below as a guide.


Response Rates, Samples and Significance – a Layman’s Explanation

The response rate itself is less important than the characteristics of the sample that is achieved. For example, a low response rate that achieves a sample representative of your patient population is arguably better than a high response rate that delivers a sample of just one type of patient.

In traditional research projects, a lot of attention is paid to achieving a representative sample, but often in these studies, especially when using public research panels, the response rate itself is very poor.

In continuous programmes like ours, our aim is to establish a good starting response rate, building on our experience with other clients. Then we continue to test different approaches and stay alert to changes in spam filters and algorithms to increase response rates over time. This is the role of our Customer Success team.

Once you have your responses building up in your portal, the fun starts. You are able to apply filters and compare results across different types of patients, locations and services. This is when an understanding of key statistical concepts comes in.

As a very simple guide (statisticians cover your eyes), a sample size of 1,000 is not going to get much more reliable no matter how much bigger it gets (regardless of the size of the population you are interested in). You can work with a sample of this size with confidence.

As you start to apply filters and make comparisons, you can be reasonably confident to work with two sets of data of over 100 responses each.

As your sample sizes drop below, say, 50 responses, don’t ignore them but consider them more qualitative than quantitative. For example, you would not say, “50% of my sample thinks this”. You might say, “people are talking about this, and it is important for us to listen”.
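If you are curious where these rules of thumb come from, the short sketch below illustrates them using the margin of error for a proportion (explained in the next section). This is an illustrative Python sketch, not part of any Cemplicity tooling; it assumes the most conservative proportion of 0.5 and a 95% confidence level.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a proportion.

    Assumes p = 0.5 (the most conservative choice) and a
    95% confidence level (z = 1.96) unless told otherwise.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (50, 100, 1000, 10000):
    print(f"n = {n:>5}: ±{margin_of_error(n):.1%}")

# n =    50: ±13.9%
# n =   100: ±9.8%
# n =  1000: ±3.1%
# n = 10000: ±1.0%
```

Note how little precision is gained beyond 1,000 responses: a tenfold increase in sample size only narrows the margin from roughly ±3% to ±1%.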

Two provisos. Firstly, remember the point about the difference between a census and a sample? If 100 patients are your whole target respondent group and 80 of them have responded, take note of the results. They are closely reflective of that group’s views.

Secondly, whether 100 responses or 1,000, it is important you are hearing from a range of patients, broadly similar to your service user profile. If there are weaknesses, we have a range of techniques to improve who you are reaching and who is responding to your survey invitations. This includes reviewing your data collection approach to ensure emails and mobile phone numbers are routinely collected on admission to the service, offering additional languages, top-up surveying while patients are still in the service, and various copywriting techniques.

How safe is your data as the basis for decision-making?

Two important statistical tools tell us how confident we can be that our sample represents the views of the entire population. These are Confidence Intervals and Margins of Error.

For example, if your survey shows the average patient experience rating is 80% with a margin of error of 5%, it means you are pretty sure that the true experience rate for all patients is somewhere between 75% and 85%. The range between these values is the Confidence Interval. This range accounts for sampling variability, which is the natural fluctuation that occurs because you are surveying a sample rather than the entire population. The smaller the margin of error, the more precise your estimate.

Confidence Level is another helpful statistical term. This is a measure of how sure you are that your survey results are reliable. The standard and widely accepted level of confidence used in survey research is 95% (Wisconsin Department of Health Services, 2024). This means that if you were to repeat the survey with 100 different samples, the confidence interval calculated from each sample would contain the true population value about 95 times. Higher confidence levels provide more certainty but require larger sample sizes.

To put it all together, let us say you conduct a patient experience survey with a 95% confidence level and a 5% margin of error. You find that 80% of surveyed patients are satisfied. This means you're pretty confident (95% confident) that the actual satisfaction rate for all patients in the hospital is somewhere between 75% and 85%.
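To make this concrete, here is a minimal Python sketch of the calculation behind that example. It uses the standard normal approximation for a proportion, and the sample size of 246 is a hypothetical figure chosen so the margin of error works out to roughly ±5%.

```python
import math

def confidence_interval(p_hat, n, z=1.96):
    """95% confidence interval for an observed proportion,
    using the normal approximation."""
    moe = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - moe, p_hat + moe

# Hypothetical survey: 80% of 246 respondents are satisfied.
low, high = confidence_interval(0.80, 246)
print(f"95% CI: {low:.1%} to {high:.1%}")  # 75.0% to 85.0%
```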

Determining Your Required Response Rate

Many of the tools available to help determine the optimal sample size have been developed for ad hoc research projects. A key goal with these projects is to establish the smallest sample size that will give you enough confidence in the results. It can cost a lot of money to get people to take part in a survey, so you do not want to be paying for responses that are not significantly improving the reliability of the results.

Here is a simple calculator you can use to establish optimal sample sizes for ad hoc projects, using factors like the size of your population, the margin of error you can tolerate, and your desired confidence level. But keep in mind that our continuous programmes, designed to reach all patients and with ongoing work to optimise response rates, follow a different strategy.
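As an illustration of what such a calculator does under the hood, here is a minimal Python sketch. It assumes the standard sample size formula for a proportion with a finite population correction, a most conservative proportion of 0.5 and a 95% confidence level; real calculators may round slightly differently.

```python
def required_sample_size(population, margin_of_error=0.05,
                         z=1.96, p=0.5):
    """Required responses for a proportion estimate, with a
    finite population correction. Assumes p = 0.5 (the most
    conservative choice) and a 95% confidence level (z = 1.96)."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2  # infinite-population size
    return round(n0 / (1 + (n0 - 1) / population))      # finite correction
```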

Examples

Let us consider different patient population sizes and determine the number of responses needed for statistically significant results at a 95% confidence level with a ±5% margin of error.

For a Population of 1,000 Patients per Month

Using the sample size calculator, you will find that you need approximately 278 responses to achieve statistically significant results. This would equate to a response rate of 27.8% per month.

For a Population of 500 Patients per Month

For this smaller population, you would need about 217 responses to achieve statistically significant results. This would equate to a response rate of 43.4% per month.

For a Population of 250 Patients per Month

For an even smaller population of 250 patients per month, you would need approximately 152 responses. This would equate to a 60.8% response rate per month.
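Using the sketch above, all three examples can be reproduced (give or take a response, depending on rounding):

```python
for population in (1000, 500, 250):
    n = required_sample_size(population)
    print(f"Population {population:>4}: {n} responses "
          f"({n / population:.1%} response rate)")

# Population 1000: 278 responses (27.8% response rate)
# Population  500: 217 responses (43.4% response rate)
# Population  250: 152 responses (60.8% response rate)
```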

Significance – A critical consideration before you jump in and act.

Now that you understand Confidence Intervals and Margins of Error, you know that if your patient experience rating dropped from 75% to 72% and your Margin of Error is ±5%, you shouldn’t be too worried. You can expect fluctuations between 70% and 80%. However, if the rating changed to 82% or 69%, you have cause to be elated or worried – this is a significant change. It is unlikely to have been caused by chance.

Calculating significance can be a useful step if stakeholders are getting concerned about changes in ratings or if you are deciding where to put your attention.
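If you want to check this formally rather than eyeballing the margin of error, the usual tool is a two-proportion z-test. The sketch below is illustrative only: the monthly sample sizes of 1,000 are hypothetical, and the p-values use the normal approximation.

```python
import math

def two_proportion_z_test(p1, n1, p2, n2):
    """Two-sided z-test for a difference between two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical months with 1,000 responses each.
_, p = two_proportion_z_test(0.75, 1000, 0.72, 1000)
print(f"75% vs 72%: p = {p:.3f}")  # ~0.13: consistent with normal fluctuation

_, p = two_proportion_z_test(0.75, 1000, 0.69, 1000)
print(f"75% vs 69%: p = {p:.3f}")  # ~0.003: unlikely to be chance
```

A p-value below 0.05 is the conventional threshold for calling a change significant at the 95% confidence level.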


Response Rates in a Healthcare Setting – What to Expect

Many things impact response rates, and these vary significantly across different industries and countries, as well as within the health sector. When we manage our stakeholder expectations, we try to understand how they have established their expectations so that we have a productive conversation.

A particularly common view is that surveys must be short in order to optimise response rates. This is not supported by Cemplicity’s work. The more crucial factor is that survey questions are accessible and relevant to respondents. It is a real joy working in our field because patients are so generous with their time and responses, understanding that their feedback will help others have a good experience of, and outcome from, care.

It is also accepted practice to report responses even if a person has not completed a survey (something less common in other survey fields), so we tend to pay close attention to both starting rates and completion rates, optimising both over time.

Explaining this further is beyond the scope of this paper so let us provide some indicative response rates to help set expectations for your programme.

Average response rates

According to multiple sources online, typical survey response rates can lie anywhere in the 5% to 30% range, with surveys distributed from unknown senders tending to be at the lower end of this scale (Delighted, 2021).

For our healthcare clients' Patient Experience (PREMs) programmes in 2023:

  • The average email response rate is 26%.

  • Private hospitals performed notably better, with an average email response rate of 37% and a median of 40%.

  • Private hospital inpatient settings achieved the highest email response rates, averaging 44% with a median of 49%.

  • SMS/text surveying tends to see lower response rates, with an average and median of 17%. However, this can vary, with a top-performing provider reaching a 44% response rate.

[Chart: email response rates across sectors for PREMs]

For PROMs and Symptom Tracking type surveying, the response rates are typically much higher, with some programmes aiming for close to 95%. These types of programmes have a higher benchmark because a well-designed programme will embed the feedback mechanism as part of each individual patient's clinical journey.

You can use these averages to set expectations and as a benchmark for your own programme, informing what ‘good’ looks like and where improvements can be made.


Conclusion

In this paper, we have provided a guide on what response rates to expect for your programme, noting that there can be big variations depending on your country, service and type of programme.

We have outlined some useful statistical techniques that are often used to differentiate normal data fluctuations and noise from real, significant changes.

By understanding these statistical tools, healthcare providers can be confident in their decision-making. They’ll know when to invest in strategies to improve email and mobile phone collection or add some additional surveying to reach patient cohorts that are under-represented in results.

Understanding average response rates and setting realistic survey goals helps achieve meaningful data interpretation and enhances the quality of care.
