Ten classic problems with occupant survey research

Findings from occupant surveys may not be as robust as researchers claim. Roderic Bunn highlights potential flaws in health, comfort and productivity studies

1 Leading questions It is bad practice to use leading questions in occupant surveys, as they may incline a respondent to give a particular response. Leading questions are often in the form of statements, such as ‘My office is always cold’, followed by an ‘agree/disagree’ response option. The tendency for a respondent to agree with a statement if there is any doubt is called acquiescence bias. Also see ‘exaggeration questions’.

2 Exaggeration questions In some surveys, a positive ‘agree’ response to a leading question may initiate a list of ancillary issues, such as draughtiness, that the respondent can also tick. These tend to be linked back to the initial question (e.g. about being too cold) and are therefore biased in the direction of the initial ‘agree’ response.

This magnifies the acquiescence bias further, potentially enabling a researcher to claim a strong negative bias in occupant comfort responses. This may suit a research objective, but leading questions can exaggerate conditions that, in reality, may not be as severe as the survey statistics imply.

3 Low response rates Good surveys should obtain response rates above 80%, and be representative of the population normally using the building. Response rates of 40% and below are likely to lack spatial representation (among other weaknesses) and therefore lack explanatory power.

Large cohort databases (e.g. from 50 buildings or more) often decompose into very small individual samples. Researchers relying on concatenated datasets rarely publicise their individual response rates and sample sizes.

Where sample sizes are not quoted, readers can perform a simple check: estimate the average sample size per building by dividing the quoted total sample by the quoted number of buildings.
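A minimal sketch of this check in Python (all figures are invented for illustration):

```python
# Quick plausibility check on a concatenated survey dataset.
# All figures here are hypothetical, for illustration only.
quoted_sample = 2_150      # total respondents quoted in the paper
quoted_buildings = 64      # number of buildings in the cohort

avg_per_building = quoted_sample / quoted_buildings
print(f"Average sample per building: {avg_per_building:.0f}")   # ~34

# If a typical building in the set houses, say, 200 occupants,
# the implied response rate falls far short of the 80% benchmark.
typical_occupancy = 200
print(f"Implied response rate: {avg_per_building / typical_occupancy:.0%}")  # ~17%
```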

4 Pre-categorisation of buildings making up a research set A common pre-categorisation is into ‘green’ and ‘non-green’ buildings, determined by their design environmental rating (e.g. Breeam or Leed) but rarely based on analysis of in-use performance.

Survey results, for example on perceived health or productivity, tend to be inconclusive: some green buildings can be found to perform poorly, and some conventional buildings to perform relatively well. Where this is found, it may be evidence that the researcher’s pre-categorisation was fundamentally faulty or, worse, intentionally biased.

5 Surveys conducted over long periods Some surveys are conducted over days, weeks or even months in order to obtain a quoted sample. This is more often the case with web-based surveys, which tend to deliver lower response rates in the absence of the tacit pressure applied by a researcher on site for a limited time-frame.

Open-ended survey periods increase the potential for collusion between respondents and interference by managers or other corporate players in survey answers. Furthermore, conditions in the building may change during the survey, increasing variance in the response data.

6 Large cohort databases Occupant survey data may be concatenated from surveys of 50 or more buildings. The act of combining data sets (e.g. into ‘green’ and ‘non-green’ building samples) has two major drawbacks. First, local context is eradicated, removing the drivers behind survey scores at an individual building level. The resulting statistics may therefore not be representative of individual buildings.

Second, increasing the sample size (the denominator in the standard error calculation) tightens the confidence limits around the mean values, making it much easier to find statistically significant differences (e.g. at the 95% confidence level) between two samples.
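A minimal sketch of this effect, using invented comfort scores and a standard two-sample t-test (assumes NumPy and SciPy are available):

```python
# Sketch: the same small difference in mean comfort scores (on a 1-7 scale)
# is not significant for a single building's sample, but becomes
# 'significant' once many buildings' responses are pooled.
# All numbers are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
mean_green, mean_conventional, sd = 4.6, 4.4, 1.2  # a 0.2-point difference

for n_per_group in (30, 1_500):  # one building vs. a ~50-building pool
    green = rng.normal(mean_green, sd, n_per_group)
    conventional = rng.normal(mean_conventional, sd, n_per_group)
    t, p = stats.ttest_ind(green, conventional)
    print(f"n={n_per_group:5d}  p-value={p:.4f}")

# With most seeds: p is large (~0.5) at n=30 but tiny at n=1,500 --
# statistical significance appears without any change in the
# underlying effect, purely from pooling.
```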

Concatenated datasets are commonly used to make statistical links between improvements in a given factor (e.g. ventilation rate) and perceived health. Again, such statistical certainty may not be evident in the individual buildings (also see ‘Low response rates’).

7 Climate chambers masquerading as real buildings Surveys of volunteers working in climate chambers, where conditions such as CO2 are controlled and other confounding variables are removed, can deliver indicative statistics.

However, test cells lack the complex environmental conditions of real buildings and therefore lack the explanatory power that researchers may claim or otherwise imply. Sometimes climate chambers are characterised in research papers as ‘office environments’, potentially misleading readers – and the institutions and funding agencies to which the research has been aimed – into believing that a test environment has replicated a real-world condition when it cannot, and has not.

8 Commercial funding Research including occupant surveys is best performed free of the commercial expectations that may come with corporate backing. Commercial funding does not necessarily lead to research bias, but researchers are always conscious of the aims and objectives of their commercial sponsors.

This means you have to analyse the research method and read the research conclusions extremely carefully. Research limitations are often given only a couple of lines in a report, but might be vital in determining whether the conclusions are robust or not.

9 Data hidden from view This includes sample sizes, response rates, and the nature of data distributions and their variance. Any research where you can’t see the base data, how it was generated, and how it was modified for statistical analysis should be treated with considerable caution.

10 Extremes of comparisons As with pre-categorisation (see above), some researchers like to show differences in human performance between two environmental conditions. It is generally the case that replicable statistical differences in human comfort perception and/or cognitive performance only become apparent at the extremes of two conditions, for example at 21°C versus 30°C, or 400 ppm CO2 versus 1,400 ppm CO2.
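A rough power calculation sketches why (the effect sizes below are assumptions for illustration, not drawn from any particular study):

```python
# Approximate per-group sample size needed to detect a difference at
# alpha = 0.05 (two-sided) with 80% power, using the standard normal
# approximation: n ~= 2 * (z_a + z_b)^2 / d^2. Effect sizes are assumed.
from scipy.stats import norm

z_a = norm.ppf(1 - 0.05 / 2)   # ~1.96
z_b = norm.ppf(0.80)           # ~0.84

for label, d in [("mild contrast (e.g. 21°C vs 23°C)", 0.2),
                 ("extreme contrast (e.g. 21°C vs 30°C)", 0.8)]:
    n = 2 * (z_a + z_b) ** 2 / d ** 2
    print(f"{label}: Cohen's d = {d}, n ~ {n:.0f} per group")

# Under these assumptions, a small effect needs ~392 respondents per
# condition, while the extreme contrast needs ~25 -- which is why only
# extreme comparisons tend to yield replicable 'significant' differences
# in modest samples.
```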

Researchers are often not entirely transparent about the basis of their test conditions. For example, when comparing ‘green’ buildings with ‘conventional’ buildings, the ‘green’ condition may be highly filtered, while the so-called conventional condition is, in fact, highly polluted.

Readers should be wary when claims of improvements in human productivity are being made, particularly for pre-categorised building types. The source of research funding may provide a clue.

Read Roderic Bunn’s article, ‘Without bias: How to create the perfect occupant survey’