One of the best bits of my job is getting out to visit housing associations (especially when it’s sunny and the traffic is flowing). It’s always great to meet people doing their best for residents, get a better idea of how each association works, and to talk about the pros and cons of different approaches to resident surveys.
While every association is different, it’s fair to say that smaller ones are less likely to have an in-house researcher, let alone a data analyst or a data scientist. So unless they’ve hired someone who’s experienced in surveys or passionate about doing them as part of their role, my visits usually involve a useful conversation about how to measure tenant satisfaction on repairs, antisocial behaviour, complaints and all the other things which make the difference between happy and unhappy residents.
Interestingly, these conversations often cover the same four areas:
- If you haven’t had a resident satisfaction feedback meeting before, be prepared for it to take longer than you think
- There is no such thing as a STAR survey
- Resident satisfaction surveys are not necessarily a precise science
- Why response rates don’t quite work as you’d expect
Let’s dig into these issues a little deeper.
What needs to be covered in an initial resident satisfaction feedback meeting?
We’ll need to talk about:
- the latest survey techniques in the sector
- the different survey methods and how they fit with the customer experience
- reviewing previous surveys and shaping a new satisfaction questionnaire
- choosing the right survey method
- response rates and statistical reliability
- confidentiality, anonymity, GDPR and data protection
- the timetable, outcomes and reporting
- benchmarking and reporting back to boards
- best practice hints and tips
…and so the list goes on!
There is no such thing as a STAR survey
- Honest. STAR is a framework based on a large set of questions. A few of these are asked in the majority of surveys and are widely benchmarked.
- You don’t have to do a STAR survey – you don’t actually have to ask residents for their views at all, though few would suggest this is a sensible approach.
- You can include whatever questions you like in the survey – it can be as long or short as you want.
Resident satisfaction surveys are not necessarily a precise science
Take a reality check when you review the performance figures: they are not like a financial statement where you need to be accurate to the penny.
These are perception surveys, where you ask what some people describe as “woolly” questions based on a resident’s feelings – which may or may not be affected by their mood when they are completing the survey.
They are further affected by bias: survey mode bias (postal, telephone, online and face-to-face will each get you slightly different responses) and response scale bias. So, what is the difference between 91% overall satisfaction and 97% overall satisfaction?
In one benchmarking club, it’s the difference between upper and lower quartile. How crazy is that? Applying it to the real world, think about the best meal or drink you’ve ever had. If 100% is your best, how would you distinguish between 97% and 91%?
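To put some numbers on the imprecision, here’s a minimal sketch using the textbook normal-approximation confidence interval for a proportion. The helper name and the sample sizes of 100 and 400 are illustrative assumptions, not figures from any particular benchmarking club:

```python
import math

def ci_95(p: float, n: int) -> tuple[float, float]:
    """Rough 95% confidence interval for a proportion p from n responses."""
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    return (max(p - margin, 0.0), min(p + margin, 1.0))

for n in (100, 400):                   # illustrative sample sizes
    lo91, hi91 = ci_95(0.91, n)
    lo97, hi97 = ci_95(0.97, n)
    print(f"n={n}: 91% -> {lo91:.1%}-{hi91:.1%}, "
          f"97% -> {lo97:.1%}-{hi97:.1%}, overlap: {hi91 >= lo97}")
```

With 100 responses the two intervals overlap, so the 91% landlord and the 97% landlord may be statistically indistinguishable; only at larger sample sizes do the intervals pull apart.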
Confused? The CAMRA rating scale for real ale is a good way of explaining how you might understand residents’ responses. Although the experts out there will spot that the response scale is biased…!
CAMRA Rating for real ale
- Poor. Beer that is anything from barely drinkable to drinkable with considerable resentment.
- Average. Competently kept, drinkable pint but doesn’t inspire in any way, not worth moving to another pub but you drink the beer without really noticing.
- Good. Good beer in good form. You may cancel plans to move to the next pub. You want to stay for another pint and may seek out the beer again.
- Very Good. Excellent beer in excellent condition.
- Perfect. Probably the best you are ever likely to find. A seasoned drinker will award this score very rarely.
Why response rates don’t quite work as you’d expect
Response rates do matter – but not in the way you might expect.
Many people get het up about achieving a good response rate – and if you have fewer than 1,000 residents this can make all the difference between highly reliable results and less reliable results. Let’s take some examples (there’s a quick calculation sketch after the list):
- A landlord with 200 residents needs to hear from around two-thirds of them (132 responses, a 66% response rate) to achieve reliable results (a 5% margin of error)
- For the same level of reliability, a landlord with 100 residents would need 80 to respond (an 80% response rate)
- A landlord with 5,000 units would only need 357 responses (a 7% response rate) to have the same reliability.
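For anyone who wants to check the arithmetic, here’s a minimal sketch of the standard sample-size calculation (95% confidence, a worst-case 50/50 answer split, and the finite population correction) which reproduces the figures above:

```python
import math

def required_responses(population: int, margin: float = 0.05,
                       z: float = 1.96, p: float = 0.5) -> int:
    """Responses needed for the given margin of error at 95% confidence."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

for homes in (100, 200, 5000):
    needed = required_responses(homes)
    print(f"{homes} residents: {needed} responses "
          f"({needed / homes:.0%} response rate)")
```

Running it gives 80, 132 and 357 responses respectively – the figures quoted above.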
In practice, common sense prevails: if any landlord, regardless of size, achieved only a 7% response rate to a survey, this would raise a few eyebrows and prompt some awkward questions. You also need to apply gut instinct: if you have 50 residents, what response rate would you need to have confidence in the findings? Ten responses seems too low, 40 much better – where would you draw the line?
Why is all this important?
Gathering resident feedback – through survey design, data collection and reporting – is time-intensive, and you need to understand how robust the data you collect really is, because it is ultimately transformed into performance management information that shapes service reviews and improvements.
So even if you conduct your own surveys, a little advice – often around questionnaire design and analysing the results – may be a worthwhile investment.
If you’d like to talk about any or all of this, do get in touch – we can usually tailor our consultancy service to meet the needs of every landlord.