Measuring the Impact of Sensitive Questions on Survey Response Rates

Survey design and the formulation of sensitive questions may reduce survey response rates. For example, the potential impact of including a citizenship question in the 2020 U.S. Census on response rates was at the center of recent litigation against the United States Department of Commerce. This litigation highlights the importance of survey and questionnaire design and the challenges involved in predicting the potential impact of certain questions on response rates. Empirical techniques can help quantify potential reductions in response rates and help isolate this effect from other competing factors that also drive survey response rates.

So-called sensitive questions in surveys are questions that respondents are less likely to answer, or to answer truthfully. Possible explanations for respondents' reactions to these questions include the perception that the questions are intrusive and the fear that responses will be used improperly. The inclusion of sensitive questions in a survey questionnaire may reduce response rates to individual questions (item nonresponse) and to the questionnaire as a whole (unit or total nonresponse). If these reactions are anticipated and quantified, it may be possible to mitigate their effects by allocating more resources to promoting participation through advertising and education, reassuring respondents that their responses will be used only for the intended purposes, and committing sufficient resources to nonresponse follow-up activities. The impact of a sensitive question also may be reduced through survey design itself, for example by carefully wording the question.

There are several possible approaches to empirically assessing the impact that a sensitive question may have on response rates. These typically involve comparing response rates to a survey that includes the sensitive question with response rates to a counterfactual survey that does not. In these analyses, it is important to separate the effect of the question at issue from other confounding factors that also may affect the outcome of interest, in this case the response rate to individual questions or to the survey as a whole.

One approach to assessing the impact of including a sensitive question in a survey is to conduct a controlled experiment in the form of a survey designed for this purpose. From a sample of respondents drawn from the population of interest, the survey can be used to infer the impact of the sensitive question on the overall population's willingness to respond to the questionnaire. The sampled respondents can be asked directly how their willingness to respond to the questionnaire or to particular questions would change with the inclusion of the sensitive question. Alternatively, the effect can be identified by comparing responses to questionnaires with and without the sensitive information request. As with any other survey, the reliability of this approach depends on the reliability of the survey's design, implementation, and interpretation.
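As a minimal sketch, assuming a randomized split-sample design in which one random half of the sample receives the questionnaire with the sensitive question and the other half receives it without, the comparison of response rates might be analyzed along the following lines. The counts, sample sizes, and use of Python with statsmodels are illustrative assumptions, not a description of any particular survey.

```python
# Illustrative sketch only: a randomized split-sample experiment in which
# one random half of the sample receives the questionnaire with the
# sensitive question and the other half receives it without.
# The counts below are hypothetical.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: completed questionnaires / sampled persons
completed = np.array([620, 705])   # [with sensitive question, without]
sampled = np.array([1000, 1000])

rates = completed / sampled
print(f"Response rate with question:    {rates[0]:.1%}")
print(f"Response rate without question: {rates[1]:.1%}")
print(f"Estimated impact of inclusion:  {rates[0] - rates[1]:+.1%}")

# Two-sample test of equal response rates across the two survey versions
stat, pvalue = proportions_ztest(completed, sampled)
print(f"z = {stat:.2f}, p-value = {pvalue:.4f}")
```

Because assignment to the two versions is random, the difference in response rates can be attributed to the sensitive question itself rather than to differences in who received each version.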

Another approach, rather than conducting a controlled experiment designed for this specific purpose, is to rely on a natural experiment in which response rates from prior surveys with and without the sensitive question are compared. This approach, however, presents its own challenges when other factors may explain differences in response rates across the prior surveys. Unlike a randomized experiment, in which the respondents to the survey with the sensitive question (intervention group) are interchangeable with the respondents to the same survey without the sensitive question (control group), other factors may explain differences in outcomes between two different prior surveys. Indeed, factors driving differences in respondents' propensity to respond include individual characteristics (e.g., socioeconomic and demographic characteristics), the structure of the survey (e.g., wording, length, or complexity), and efforts to promote higher response rates (e.g., media campaigns and monetary rewards). Respondents' individual characteristics, however, will not drive differences in participation rates across the two surveys if both surveys are addressed to the same respondents.
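To illustrate the difficulty, a naive comparison of overall response rates across two prior surveys conflates the effect of the sensitive question with these other differences. The figures below are hypothetical and serve only to show what such a comparison does, and does not, measure.

```python
# Illustrative sketch only: a naive comparison of overall response rates
# between a prior survey without the sensitive question and a later survey
# that includes it. The figures are hypothetical.
prior_survey_rate = 0.74    # survey without the sensitive question
later_survey_rate = 0.66    # survey with the sensitive question

naive_difference = later_survey_rate - prior_survey_rate
print(f"Naive difference in response rates: {naive_difference:+.1%}")

# This gap cannot be attributed to the sensitive question alone: the later
# survey may also differ in wording, length, complexity, or outreach
# efforts, all of which affect response rates.
```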

Survey characteristics such as length and complexity also may explain differences in response rates. To isolate the effect of interest from the effect of survey characteristics, it is possible to use a difference-in-differences technique. This quasi-experimental approach can be used in this context to isolate the impact of the sensitive question by comparing changes in outcomes for a group of respondents that is sensitive to the question (effectively an intervention group) with changes in outcomes for a group of respondents understood to be insensitive to the inclusion of the question (effectively a control group). The difference in response rates for the insensitive group across the two surveys can be explained by factors such as survey length and complexity. The change in response rates for the sensitive group above and beyond the baseline change observed for the insensitive group (the difference in differences) approximates the impact of the sensitive question on the sensitive group. In this way, the effect of the sensitive question can be estimated net of differences attributable to variation in survey characteristics.
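A minimal numerical sketch of the two-group, two-survey comparison might look as follows. The group labels and response rates are assumptions for illustration only.

```python
# Illustrative sketch only: a 2x2 difference-in-differences computation
# with hypothetical response rates. "sensitive" denotes the group expected
# to react to the question; "insensitive" is the comparison group.
rates = {
    #                survey without question, survey with question
    "insensitive": {"without": 0.78, "with": 0.74},
    "sensitive":   {"without": 0.76, "with": 0.64},
}

# Change for the insensitive (control) group: attributable to survey
# characteristics such as length and complexity
control_change = rates["insensitive"]["with"] - rates["insensitive"]["without"]

# Change for the sensitive (intervention) group: survey characteristics
# plus the effect of the sensitive question
treated_change = rates["sensitive"]["with"] - rates["sensitive"]["without"]

# Difference in differences: approximate impact of the sensitive question
did_estimate = treated_change - control_change
print(f"Change, insensitive group: {control_change:+.1%}")
print(f"Change, sensitive group:   {treated_change:+.1%}")
print(f"Difference-in-differences: {did_estimate:+.1%}")
```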

The assignment of respondents to the two groups (sensitive and insensitive), however, is not random, and differences in outcomes between the groups may be attributable, at least in part, to underlying differences between the groups themselves. To address this problem, the difference-in-differences analysis can be enriched by controlling for other explanatory variables in a regression model of response rates.
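One way to implement this, sketched below under stated assumptions, is a linear probability model estimated with statsmodels. The data file, column names, and covariates (age, income, education) are hypothetical placeholders; the key point is that the coefficient on the interaction between group membership and survey version carries the difference-in-differences estimate.

```python
# Illustrative sketch only: a regression version of the difference-in-
# differences comparison that controls for respondent characteristics.
# File name, column names, and covariates are hypothetical; `responded`
# is 1 if the sampled person completed the questionnaire, 0 otherwise.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed: one row per sampled person per survey, e.g. from
# administrative records of the two surveys.
df = pd.read_csv("survey_responses.csv")

# sensitive_group: 1 if the person belongs to the group expected to react
#                  to the question, 0 otherwise
# with_question:   1 for the survey that includes the sensitive question
# The interaction term carries the difference-in-differences estimate.
model = smf.ols(
    "responded ~ sensitive_group * with_question + age + income + education",
    data=df,
)
result = model.fit(cov_type="HC1")  # heteroskedasticity-robust std. errors

# Coefficient on the interaction approximates the impact of the sensitive
# question on the sensitive group, net of survey and respondent differences.
print(result.summary())
print("DiD estimate:", result.params["sensitive_group:with_question"])
```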

In sum, empirical techniques may be implemented to assess the impact of the introduction of a sensitive survey question on response rates. This understanding can inform the design and implementation of adequate mitigation efforts to reach desired response rates.

Senior Vice President Stuart D. Gurrea has been qualified in Federal Court as an expert witness in economics, quantitative analysis of survey data, and impact evaluation.

Principal Jonathan A. Neuberger has experience in survey design and implementation.