Does the timing of surveys matter?

Last week, Josh, who leads our London-based account management team, mentioned an interesting snippet from one of our clients that brought to life an important attribute of a well-designed patient experience (PREMs) programme.

Our recommendations to clients are based on published evidence, as well as our experience across our client base of what works and what doesn’t.

We pay particular attention to evidence gathered in healthcare settings, as respondents there can behave quite differently from respondents in commercial situations.

The client in question uses two suppliers to gather their patient feedback. We’re the newcomers, and we have streamlined and automated their surveying following best-practice methodologies for PREMs, including surveying patients post-discharge rather than while they are in the clinic. We have gradually picked up work from the original supplier, although in some areas they are still collecting patient ratings on cards before people leave the clinic.

There has been a remarkably consistent difference between the two approaches in the percentage of people giving high ratings. Through Cemplicity, the client consistently averages 75% of patients giving a ‘Very Good’ rating for overall experience and 18% giving a ‘Good’ rating. Through the original supplier, with feedback captured in situ, the client has consistently averaged 90% ‘Very Good’ and 9% ‘Good’.

In an inpatient setting, evidence suggests that people need time at home post-discharge to reflect on their experience of care before giving a rating. If they are asked for a rating before leaving the hospital, or immediately afterwards, they haven’t had time to compose their thoughts and tend to be more positive (happy to have survived, perhaps?!) [1]. This is sometimes called the “Halo Effect”, a form of gratitude bias.

And other factors might be at play:

  • A person’s condition could deteriorate towards the end of their time in a clinic or hospital, so providing a rating before they leave is premature.

  • Many people are deferential towards care professionals, and this power imbalance may intimidate them into giving more agreeable, positive responses.

  • There is anecdotal evidence that staff seek out more positive patients to complete surveys, in order to gain more positive responses.

This last point is important. In commercial settings there are often financial rewards linked to high patient or client ratings, and I have learnt many times that if a system can be gamed for financial advantage, it will be. Even when there aren’t financial rewards linked to results, organisations implement PREMs to measure and benchmark across teams and services. Because of this, staff may not be deliberately choosing who is approached for feedback; it could simply happen unconsciously.

So, what are we saying to our client? Cemplicity’s mantra is that our purpose is to improve patient experiences, not simply measure them. Accurate ratings and constructive patient feedback are much better tools for improvement than artificially high ratings. We now have a great opportunity to make things better, and when we do hit 90%+ of patients saying they had a very good experience, we can be truly confident that they did.


References

  • [1] NHS England (2014) FFT Review: Quantitative strand, pp. 18-19.
