From STATUS to STAR?

SPBM and Satisfaction

With the demise of the regulatory requirement to carry out STATUS surveys, the sector needs to decide whether to adopt a new standardised approach to satisfaction measurement on a voluntary basis.

A number of SPBM members have asked us for our thoughts on satisfaction measurement, and this article sets out what we see as the main issues for smaller housing providers.

SPBM members, and smaller housing providers more generally, have an opportunity to influence where the sector goes with satisfaction measurement. Specifically, you can respond to HouseMark’s proposals to replace STATUS with STAR.

If you have not yet responded to HouseMark’s consultation, the closing date for responses is 21 April.

Links to HouseMark’s consultation documents may be found at the end of this article.

From STATUS to STAR?

The case in favour of a standardised approach is relatively straightforward. Without standardisation, providers will not be able to reliably compare performance with each other. For this reason alone, we encourage you to support efforts to agree a new collective approach to satisfaction measurement.

The case against is more complex. Doing your own thing won’t prevent you from comparing your own performance trends over time. While comparisons with other organisations are important, they are not essential in using satisfaction data to drive service improvements. In our opinion, including people’s views should also carry at least as much weight as excluding them on tightly defined methodological grounds.

We think there should be a standardised approach that provides a satisfactory means of comparing performance with other organisations. Its shape, however, should be driven primarily by its usefulness in informing action to improve services and engage residents in what you do. It also needs to be flexible enough to ensure that you can collect the data you need and minimise the need to carry out additional and costly data gathering.

Methods

In our work with smaller providers, we have always encouraged them to periodically measure resident satisfaction by means of a postal survey. Response rates from our surveys are generally over 50% and sometimes as high as 70%. Postal surveys are easy to administer, and their costs compare well with those of other methods. Telephone and face-to-face data collection methods have their place, but not in a sector-wide standardised and comparable approach.

That said, we always suggest providers offer residents the opportunity to respond online. We always include this option in the surveys we run, and we have seen over 25% of a social housing survey population respond in this way. Government initiatives continue to target universal broadband access by 2012 (see, for example, Ofcom) and, while many residents will continue using a paper questionnaire, it seems sensible to offer the option of online completion to those who prefer it. Printing and postage account for a significant part of the costs of running a survey, so it also makes sense financially to encourage more online responses wherever possible.

We also encourage providers to offer individual residents support in completing surveys by phone or home visits (sometimes in languages other than English) should they require it. While uptake is often low, it ensures people are not excluded from the survey process through no fault of their own.

We favour an approach that offers residents a choice of responding either by post or online. Residents should be offered support by phone or home visits if they need help in completing survey questionnaires.

Frequency

How frequently satisfaction surveys are carried out should be for individual providers to decide. The decision should be based on:

  • The usefulness of the data (i.e. in informing service improvement decisions)
  • Ability to act on those decisions (i.e. before the next survey)
  • What you do in-between in terms of other satisfaction research
  • The rate of change to the survey population (i.e. potential loss of comparability through, for example, growth or reductions in stock)
  • Costs (i.e. can you afford to do it?)

In our experience, surveys provide enough food for thought (and realistic action) for about two years, so providers should decide what works for them (even if that means an irregular cycle of data collection). While we would certainly like to see our benchmarking members sharing and comparing satisfaction data every year, whether it is updated annually, every two or even more years is up to them. The number of organisations involved and our ability to flag the date of data collection will mean there is always a useful and robust data set.

We think you should decide how frequently you measure resident satisfaction. We think you will probably want to carry out a survey at least every two years.

Statistical reliability

This is a difficult issue for smaller providers. For a provider with 500 homes, it will be necessary to get a 54% response rate from a census survey (i.e. a survey of all residents) to achieve results accurate to within a margin of error of +/- 4% at the 95% confidence level. This is certainly achievable.

For a provider with 250 homes, it will be necessary to get a 70% response rate from a census survey to achieve results that are accurate to the same statistical margin of error. Even if we move to a +/- 5% margin of error, a response rate of over 60% will be required. Such response rates are achievable but cannot be guaranteed. For even smaller organisations we enter the world of the unachievable.
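For anyone who wants to check these figures, the sketch below is our own illustration of the underlying arithmetic: the standard finite population correction, assuming the statistician’s worst case of a 50/50 split and a 95% confidence level. It reproduces the figures above to within rounding.

```python
import math

def required_responses(population: int, margin: float,
                       z: float = 1.96, p: float = 0.5) -> int:
    """Responses needed for +/- `margin` at 95% confidence (z = 1.96)."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2     # infinite-population sample size
    return math.ceil(n0 / (1 + n0 / population))  # finite population correction

for homes, margin in [(500, 0.04), (250, 0.04), (250, 0.05)]:
    n = required_responses(homes, margin)
    print(f"{homes} homes, +/-{margin:.0%} margin: {n} responses "
          f"(a {n / homes:.0%} census response rate)")
```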

Does this mean very small providers should not bother carrying out satisfaction surveys? Should we exclude organisations that do not achieve a defined level of statistical reliability from performance comparisons?

We are keen to ensure that any standardised approach to satisfaction measurement does not exclude smaller providers through a requirement to achieve unrealistic response rates. How we handle statistical reliability in performance comparisons does, however, require some further thought.

Sampling

This has not been an issue in our work with smaller providers. Because of the size of the survey populations, we have always recommended a census approach. In our experience, any provider with fewer than 500 homes will need to survey everyone (assuming a minimum response rate of 50%). The decision to sample gets more interesting above this level.

A provider with 750 homes might opt to survey a sample of around 500 residents. Around 250 responses would provide results at a margin of error of +/- 5% at the 95% confidence level, but at what cost? How do the 250 residents left out of the sample feel about being excluded from the survey?
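The same arithmetic, applied to this example (again our own illustrative sketch, not part of any prescribed methodology):

```python
import math

# How many responses does the 750-home example actually need for a
# +/- 5% margin of error at the 95% confidence level?
z, p, margin, population = 1.96, 0.5, 0.05, 750
n0 = (z ** 2) * p * (1 - p) / margin ** 2       # ~384 for an infinite population
needed = math.ceil(n0 / (1 + n0 / population))  # finite population correction
print(needed)  # 255 -- roughly the 250 responses quoted above
```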

Responding to a satisfaction survey is a form of involvement and potentially a platform for engagement (typically around a quarter of respondents to our surveys say they would like to find out more about getting involved). So at what level do the financial costs of a census survey outweigh the benefits? We certainly think that providers with more than 1,000 homes should consider a census approach.

If you have fewer than 1,000 homes, we think you should carry out a census survey even if a sample survey would provide sufficient confidence in the results. Providers with more than 1,000 homes might also want to consider a census approach.

Rating scales

We encourage the use of a five-point numeric scale in measuring resident satisfaction. We discourage the use of verbal scales, although, for entirely pragmatic reasons, we have often included a few questions using the STATUS verbal scale.

In our experience, social housing residents have no problems with a numeric scale, and numeric scales provide a more reliable picture of satisfaction levels. Put simply, the sizes of the gaps between ‘very satisfied’, ‘fairly satisfied’ and ‘neither satisfied nor dissatisfied’ are unknown and not necessarily equal.

What ‘neither’ means we have never really been sure, and on a bad day it produces a certain amount of existential angst. What am I if I am neither? Am I content with negative self-definition?
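One practical advantage of a numeric scale is that responses can be summarised arithmetically, which the unknown gaps of a verbal scale do not really permit. A minimal sketch, using invented response data:

```python
# Hypothetical 1-5 responses to a single question (1 = very dissatisfied,
# 5 = very satisfied); the data is invented for illustration.
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

mean_score = sum(responses) / len(responses)
pct_satisfied = sum(r >= 4 for r in responses) / len(responses)

print(f"Mean score: {mean_score:.1f} out of 5")  # 3.9
print(f"Scoring 4 or 5: {pct_satisfied:.0%}")    # 70%
```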

We are convinced that any standardised approach to satisfaction measurement should use a five-point numeric scale.

Survey costs

Smaller providers are always going to be at a disadvantage to larger providers when it comes to satisfaction survey costs, whether the survey is postal, telephone or face to face. For example, a provider with 500 homes might typically expect to pay around £10 per resident for a postal survey, whereas a provider with 5,000 homes might expect to pay less than £2 per resident. These are not insignificant costs, and the development of a new standardised approach to satisfaction must consider what opportunities exist to reduce survey costs for all providers generally and smaller providers specifically.
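The gap is largely a matter of fixed set-up costs being spread over fewer homes. A rough sketch of that arithmetic (the cost figures are our own assumptions, chosen only to show the shape of the curve, not quotations):

```python
# Rough cost model: a fixed set-up cost spread across the stock, plus a
# per-resident cost for printing, postage and data entry. Both figures
# are assumptions for illustration, not real quotations.
FIXED_COST = 4500.0  # set-up, questionnaire design, reporting (assumed)
UNIT_COST = 1.00     # printing, postage, data entry per resident (assumed)

for homes in (250, 500, 1000, 5000):
    per_resident = FIXED_COST / homes + UNIT_COST
    print(f"{homes:>5} homes: about £{per_resident:.2f} per resident")
```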

HouseMark has indicated its willingness to explore joint procurement opportunities. We will be talking to them, and we would like to talk to you about this. However, our initial modelling suggests the scope for savings is limited. Certainly there may be some scope for shaving set-up, survey design, analysis and reporting costs, but printing, postage and data entry costs are essentially fixed per resident. For the analysis and reporting, you will still want something that is tailored to you. Conducting surveys in-house may appear to save money, but staff time needs to be factored in. It may also compromise data anonymity (see below) and affect response rates.

We do think it is worth exploring joint procurement opportunities and that there is some scope for reducing costs, notably through online data collection. However, the most important issue is what you get for your money. Do you get actionable information that will help you increase resident satisfaction?

Demographics

We have always encouraged smaller providers to minimise the number of demographic and household questions in their surveys. Not only do these add to the length (and therefore the cost) of the questionnaire; they are, frankly, things that landlords should already know. If we are to have a new standardised approach, it must in our view encourage providers to ask such questions only where there are gaps and where such data cannot be obtained by other means.

We have also always encouraged smaller providers to create a link between demographic data and individual responses to satisfaction questions. Understanding your survey findings by reference to the characteristics of your resident profile (e.g. age, ethnicity, disability, estate, postcode) is one of the key ways in which you can put those findings to work. It does, however, raise important issues relating to respondent anonymity and whether it is viable to conduct surveys in-house. If you are a member of our benchmarking service, we would like to talk to you about this, whether or not we also conduct satisfaction surveys on your behalf.
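As a sketch of what that linkage makes possible (the data, field names and age bands below are invented for illustration), assuming responses and landlord records are joined on an anonymised survey reference:

```python
import pandas as pd

# Invented illustration: satisfaction scores joined to landlord-held
# demographic data via an anonymised survey reference.
responses = pd.DataFrame({
    "ref":   [101, 102, 103, 104, 105, 106],
    "score": [5, 2, 4, 3, 5, 1],  # overall satisfaction, 1-5 scale
})
residents = pd.DataFrame({
    "ref":      [101, 102, 103, 104, 105, 106],
    "age_band": ["under 35", "under 35", "35-59", "35-59", "60+", "60+"],
})

linked = responses.merge(residents, on="ref")
print(linked.groupby("age_band")["score"].mean())  # mean score by age band
```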

A standardised approach to satisfaction measurement must in our view allow you flexibility in what demographic data you collect. We think you should always link demographic and satisfaction data, and for this reason you will probably want to get an external organisation or individual to carry out your surveys on your behalf.

Benchmarking enhancements

Members of SPBM currently share and compare a number of useful satisfaction indicators, and have access to national as well as SPBM data through our partnership with HouseMark. We think the service can offer you more.

We think there is scope for developing a wider range of satisfaction measures, notably for our more specialist subscribers (e.g. we have just conducted a satisfaction survey for floating support service users). We also think there may be merit in members submitting their full satisfaction and demographic data sets to SPBM.

These would be significant enhancements to the service and would involve no small amount of work. We will run with these ideas if there is real enthusiasm among SPBM members and we can find a way for them to work.

Why satisfaction?

In our view, satisfaction measurement and related performance comparisons are an essential part of running a successful social housing business (whether or not the sector adopts a standardised approach). As you further develop your local offers and your approach to scrutiny, what are your residents going to be interested in?

Satisfaction measurement provides your resident-focused outcome measures and actionable intelligence to inform and prioritise service improvements. Without it you and your residents cannot judge value for money – whether you are ‘doing the right things’ and whether you are ‘doing things right’.
