
Response Bias

Response bias is a general term for conditions or factors that arise during the survey response process and affect the way answers are given. Such circumstances lead to a nonrandom deviation of the answers from their true values. Because this deviation tends, on average, in the same direction across respondents, it creates a systematic error in the measure, or bias. The effect is analogous to collecting height data with a ruler that consistently adds (or subtracts) an inch from each measurement: the final outcome is an overestimation (or underestimation) of the true population parameter. Unequivocally identifying whether a survey result is affected by response bias is not as straightforward as researchers would wish. Fortunately, research identifies some conditions under which different forms of response bias arise, and this information can be used to avoid introducing such biasing elements.
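
The ruler analogy can be made concrete with a short simulation (a hedged sketch; the population parameters and error sizes below are illustrative assumptions, not survey data). Random error washes out in the average, while a systematic one-inch offset shifts the estimated mean by the same amount:

```python
import random

random.seed(42)

# Hypothetical true heights (inches) of a simulated population.
true_heights = [random.gauss(67, 3) for _ in range(10_000)]

# Random error: noise that averages out across respondents.
noisy = [h + random.gauss(0, 1) for h in true_heights]

# Systematic error (bias): every measurement shifted by +1 inch,
# like a ruler that consistently adds an inch.
biased = [h + 1 for h in true_heights]

def mean(xs):
    return sum(xs) / len(xs)

print(f"true mean:   {mean(true_heights):.2f}")
print(f"noisy mean:  {mean(noisy):.2f}")   # close to the true mean
print(f"biased mean: {mean(biased):.2f}")  # overestimates by about 1 inch
```

The point of the sketch is that larger samples reduce the random component but do nothing to the systematic component: the biased mean stays one inch off no matter how many respondents are measured.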

The concept of response bias is sometimes used, incorrectly, as a synonym for nonresponse bias, and this usage can cause misunderstanding. Nonresponse bias concerns the decision to participate in a study and the differences between those who cooperate and those from whom no data are gathered. Response bias, on the other hand, occurs once a respondent has agreed to answer, and it may in principle affect the whole sample as well as specific subgroups of the population.

Forms of Response Biases

Rather than being a direct function of respondents' own characteristics, it is often the instrument (questionnaire item) that is responsible for this deviation: something in the question or its context affects the cognitive process of responding and thereby distorts the true answer in some manner. This may occur consciously or not, and the resulting overreporting or underreporting may have different causes. A nonexhaustive list of response biases is presented here to exemplify the kinds of problems a researcher may encounter when conducting a survey.

  • Some effects are related to the length and type of the task. Aspects such as burdensomeness of the task may produce boredom or fatigue in the respondent, affecting the thoughtfulness of their answers.
  • The interaction among interviewer, respondent, and interviewing approach may also affect the way responses are produced. The most typical example is social desirability bias, but other effects may involve the interviewer's pace of speech, race, and gender. For example, fast-speaking interviewers may signal to respondents that they are expected to give quick, off-the-top-of-the-head answers. Interviewer characteristics may also affect how questions are understood.
  • The order of the questions and the order of response options may influence the likelihood of respondents to select a particular answer, eliciting context effects. The recency effect is an illustration of this type of bias; here, respondents are more likely to choose the last response options presented to them when the survey is conducted orally.
  • The wording of the question can tip the scale in one direction or another. Push polls are one example in which the wording is deliberately manipulated to obtain a particular result, but unexpected or unintended wording effects are also possible.
  • Response styles are sometimes considered a type of response bias. When response options are presented, the way respondents use them may have a biasing effect: some respondents seem to prefer a particular section of the scale over others, irrespective of their real attitude or behavior.
  • Other forms of bias may appear as a result of lack of specificity in the definition of the task. Researchers need to be aware that conveying the need for high response accuracy is not a given. Discourse norms in everyday life dictate that precise estimates of certain quantities may be inadequate, unnecessary, or even undesirable. Spending several seconds trying to recall whether an activity lasted 13 or 14 days is usually not well received by the audience of a daily life conversation, where rough estimations are common. A similar phenomenon, the rounding effect, has been observed in surveys; researchers have identified that certain values (0, 25, 50, 75, and 100) are more likely to be chosen when using scales from 0 to 100.
  • People not only distort their answers because they want to create a positive impression on their audience, they may also edit (fake) their responses because they fear the consequences that the true answer might have and therefore do not wish to reveal the right information. Respondents, for instance, may lie about their legal status if they fear that confidentiality might be breached.
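
The rounding effect mentioned above can be illustrated with a small simulation (an illustrative sketch; the 60% rounding share and the uniform distribution of true values are assumptions made for demonstration, not empirical estimates):

```python
import random

random.seed(7)

# Hypothetical "true" answers on a 0-to-100 scale.
true_values = [random.randint(0, 100) for _ in range(1_000)]

# Prototypical values that answers tend to heap on in surveys.
prototypes = [0, 25, 50, 75, 100]

def report(v):
    # Assume (for illustration only) that 60% of respondents round
    # to the nearest prototypical value rather than answer precisely.
    if random.random() < 0.6:
        return min(prototypes, key=lambda p: abs(p - v))
    return v

reported = [report(v) for v in true_values]
heaped_share = sum(r in prototypes for r in reported) / len(reported)
print(f"share of answers on 0/25/50/75/100: {heaped_share:.0%}")
```

Under these assumptions, the five prototypical values end up accounting for well over half of all reported answers, even though they make up only 5 of the 101 possible true values — the kind of heaping pattern researchers observe in real 0-to-100 scales.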

In essence, in each of the aforementioned examples, the reported information is not the data the researcher is seeking. The distortion may arise at any stage of the cognitive response process: the comprehension of the question, the retrieval of the information, the judgment, or the editing of the response. Question order effects, for example, can influence the way respondents understand the questions, while long reference periods can increase the likelihood of telescoping effects due to memory problems.

...
