
Ignorable Nonresponse

Researchers who use survey data often assume that nonresponse (either unit or item nonresponse) in the survey is ignorable. That is, data gathered from respondents are used to make inferences about a more general population. This practice implicitly assumes that the units with missing or incomplete data are a random subsample of the original sample and do not differ from the population at large in any appreciable (i.e. meaningful and nonignorable) way. By definition, if nonresponse is ignorable for certain variables, then it does not contribute to bias in the estimates of those variables.

Because nonresponse error (bias) is a function of both the nonresponse rate and the difference between respondents and nonrespondents on the statistic of interest, it is possible for high nonresponse rates to yield low nonresponse errors (if the difference between respondents and nonrespondents is quite small). The important question, however, is whether there truly are no meaningful differences between respondents and nonrespondents for the variables of interest. In a major 2006 article on this topic, Robert M. Groves found no consistent pattern between the amount of nonresponse and the amount of nonresponse bias across the myriad surveys investigated. That is, in many cases the nonresponse was ignorable and in others it surely was not, and this happened regardless of whether there was a great deal of nonresponse or very little.
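This relationship can be illustrated with the standard deterministic approximation for the bias of the unadjusted respondent mean, bias(ȳ_r) ≈ (1 − r)(ȳ_r − ȳ_nr), where r is the response rate. The following minimal Python sketch uses hypothetical numbers (the rates and means are illustrative, not drawn from any study) to show how a survey with heavy nonresponse can carry less bias than one with modest nonresponse:

```python
def nonresponse_bias(response_rate, mean_respondents, mean_nonrespondents):
    """Approximate bias of the unadjusted respondent mean under the
    deterministic (fixed-response) model:
        bias = (1 - r) * (ybar_respondents - ybar_nonrespondents)
    """
    return (1 - response_rate) * (mean_respondents - mean_nonrespondents)

# 75% nonresponse, but respondents and nonrespondents differ by only
# 1 percentage point on the statistic of interest -> small bias
small_gap = nonresponse_bias(0.25, 0.52, 0.51)   # 0.75 * 0.01 = 0.0075

# Only 20% nonresponse, but a 20-point respondent/nonrespondent gap
# -> a bias several times larger despite the much higher response rate
large_gap = nonresponse_bias(0.80, 0.60, 0.40)   # 0.20 * 0.20 = 0.04
```

The point of the sketch is that the response rate alone determines neither the size nor the direction of the bias; the respondent–nonrespondent difference is the other, often unobserved, factor.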

The survey response rate is an often-used criterion for evaluating survey data quality. The underlying assumption, a conservative one, is that nonresponse is not ignorable. To achieve high response rates, survey organizations must devote a great deal of resources to minimizing nonresponse. They might lengthen the field period for data collection, use expensive locating sources to find sample members, use multiple and more expensive modes of contact, and devote additional resources (e.g. through incentives) to convince sample members to cooperate with the survey request. Complex statistical techniques may also be used after data collection to compensate for nonresponse bias. All of these techniques dramatically increase the cost of conducting surveys. In light of this, recent trends of increasing survey nonresponse make the questions of whether and when nonresponse is ignorable especially important. If nonresponse does not yield biased estimates, then by implication, there is little advantage in spending additional resources to minimize it.
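One common family of post-collection adjustments is weighting, such as post-stratification, in which respondents are reweighted so the sample's group shares match known population benchmarks (e.g. from a census). The sketch below uses entirely hypothetical groups, shares, and outcomes to show the mechanics only:

```python
# Hypothetical post-stratification sketch: reweight respondents so that
# the sample's age-group shares match assumed population benchmarks.
population_share = {"under_40": 0.5, "40_plus": 0.5}   # assumed benchmark

# (group, binary outcome) pairs for six hypothetical respondents;
# under-40s are underrepresented here (2 of 6 instead of half)
respondents = [("under_40", 1), ("under_40", 0),
               ("40_plus", 1), ("40_plus", 1),
               ("40_plus", 1), ("40_plus", 0)]

n = len(respondents)
sample_share = {g: sum(1 for grp, _ in respondents if grp == g) / n
                for g in population_share}

# Each group's weight inflates or deflates it toward its population share
weights = {g: population_share[g] / sample_share[g] for g in population_share}

weighted_mean = sum(weights[g] * y for g, y in respondents) / n
unweighted_mean = sum(y for _, y in respondents) / n
```

Here the weighting pulls the estimate toward the underrepresented group's mean; whether that reduces bias depends on the (untestable) assumption that nonrespondents within each weighting group resemble the respondents in that group.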

It is difficult to conduct research that evaluates nonresponse error because data for nonrespondents to the survey have to be available from some other source. When available, administrative records can be used to evaluate assumptions about nonrespondents. However, such studies are rare and expensive to conduct. Other methods used to evaluate nonresponse error include comparing hard-to-reach respondents with easy-to-reach and cooperative respondents, or comparing estimates in surveys with identical questionnaires but different response rates.

Though there is relatively sparse evidence that measures nonresponse error in large surveys, nonresponse error in public opinion polls has received some attention in recent years due to the political and media attention focused on such surveys. Public opinion research (especially pre-election polling) usually has a condensed field period that makes a high response rate unattainable. Key variables in these studies include commonly used measures of political and social attitudes and electoral behavior (e.g. party affiliation, ideology, media use, knowledge, engagement in politics, social integration). Most research has found few, or at most minimal (ignorable), differences in the measurement of these variables between surveys conducted in short time spans (approximately 1 week or less) with low response rates (approximately 20% to 30%) and surveys conducted with longer field periods (several months) and higher response rates (approximately 60% to 70%). With respect to sample composition, comparisons between low- and high-response rate surveys often show that both types yield estimates on most sociodemographic variables similar to data from the U.S. Census and other large government surveys. If judged by their accuracy in forecasting elections, many public opinion polls with short field periods and low response rates appear to be accurate and unbiased. This evidence leaves many researchers fairly confident that nonresponse often may be ignorable for public opinion surveys and that it is unnecessary and inefficient to increase the response rate.

...
