
Interviewer-Related Error

Interviewer-related error is a form of measurement error and includes both the bias and the variance that interviewers can contribute to the data gathered in face-to-face and telephone surveys. In interviewer-administered surveys, interviewers can contribute much to the accuracy of the data that are gathered, but they also can be responsible for much of the nonsampling error that finds its way into those data.

The methodological literature includes startling examples of measurement error due to interviewer mistakes. In 1983, an interviewer's incorrect recording of one wealthy respondent's income resulted in the erroneous report that the richest half percent of the U.S. population held 35% of the national wealth. This finding, widely publicized, was interpreted to show that Reaganomics favored the wealthy. When the error was detected and corrected, the actual estimate was 27%, only a slight increase from the 1963 figure. Most survey designs do not feature weighting schemes that permit one interviewer's random error to have such a profound effect. Usually, random interviewer errors "cancel each other out" and do not threaten data validity.
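The mechanism behind that 1983 incident can be illustrated with a small sketch. The figures and weights below are hypothetical, not the actual survey's data; the point is only that when wealthy respondents are rare and carry distinct weights, a single miskeyed record can move a weighted share estimate dramatically.

```python
# Illustrative sketch (hypothetical numbers): how one miscoded case can
# distort a weighted estimate of the share of wealth held by the richest.

def weighted_top_share(values, weights, cutoff):
    """Share of total weighted wealth held by cases at or above the cutoff."""
    total = sum(v * w for v, w in zip(values, weights))
    top = sum(v * w for v, w in zip(values, weights) if v >= cutoff)
    return top / total

# Hypothetical sample: wealth in $1,000s, with survey weights.
values = [50, 80, 120, 300, 2_000]   # last case is in the "richest" stratum
weights = [900, 900, 900, 250, 50]   # rare wealthy cases get small weights

clean = weighted_top_share(values, weights, cutoff=1_000)

# The interviewer keys an extra digit into the wealthy respondent's record.
values_err = values[:-1] + [20_000]
miscoded = weighted_top_share(values_err, weights, cutoff=1_000)

print(f"top share (clean):    {clean:.1%}")    # one miskeyed value inflates
print(f"top share (miscoded): {miscoded:.1%}") # the top-wealth share estimate
```

Because the wealthy case is both extreme in value and distinct in weight, the single recording error dominates the weighted total, which is why most designs, lacking such weighting schemes, are far less vulnerable.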

It is systematic, rather than random, interviewer-related error (i.e., bias) that typically affects survey data. Systematic, or correlated, error occurs when interviewers make similar "mistakes" across many interviews. Such errors may actually reduce item variance, but they play havoc with the accuracy of the resulting estimates. This entry focuses on the sources of, and treatments for, systematic interviewer error and discusses efforts to prevent, measure, manage, and correct for this type of bias.
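The contrast between random and systematic error can be made concrete with a simulation. The data below are invented for illustration: random errors are idiosyncratic and mean zero, while the systematic error models many interviewers sharing the same habit of nudging extreme answers toward a typical middle value, which shrinks item variance yet badly biases a tail estimate.

```python
# Illustrative simulation (invented data): random interviewer errors largely
# cancel out, while systematic (correlated) errors can reduce item variance
# and still badly bias an estimate.
import random
import statistics

random.seed(1)
true_vals = [random.gauss(50, 10) for _ in range(5_000)]

# Random error: each interviewer's mistakes scatter around zero.
rand_vals = [v + random.gauss(0, 4) for v in true_vals]

# Systematic error: interviewers share the same habit, pulling extreme
# responses toward a typical middle value (50).
sys_vals = [50 + 0.6 * (v - 50) for v in true_vals]

def share_over(vals, cutoff=65):
    """Estimated proportion of respondents above the cutoff."""
    return sum(v > cutoff for v in vals) / len(vals)

print("stdev    true / random / systematic:",
      statistics.stdev(true_vals), statistics.stdev(rand_vals),
      statistics.stdev(sys_vals))
print("share>65 true / random / systematic:",
      share_over(true_vals), share_over(rand_vals), share_over(sys_vals))
```

The systematic version has a smaller standard deviation than the true data, yet its estimate of the share above the cutoff is far too low, whereas the random-error version stays close to the truth in the aggregate.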

Preventing Interviewer-Related Error

Prevention focuses on three basic strategies: (1) reducing or eliminating human intervention between respondent and data capture, (2) engineering error-proof questionnaires and associated data collection tools, and (3) standardizing interviewer behaviors to minimize error.

In their review of interviewer-related error, Robert M. Groves and his colleagues note that the very presence of an interviewer has been shown to bias responses. Employing computerized, scanned, or voice-response self-administration avoids both the costs and errors associated with employing human interviewers. Sloppy data resulting from respondent-related error, the bugaboo of self-administration, can be attacked through programming that mandates response and requires clarification of contradictory information.

This approach, however, has its own drawbacks. Notable among them are higher front-end costs and lead time, limitations on complexity, and lower response rates. Cybernetic approaches are expensive for all but the simplest of questionnaires; they require intensive programming, pretesting, and debugging, because the instrument must be far more error-proof than one administered by a trained interviewer. Even for simple questionnaires, the value added by an interviewer who motivates engagement and probes for focused, detailed answers compensates for, and usually exceeds, the error the interviewer contributes. Complex enumerations and life history matrices are approached with trepidation in the absence of trained interviewers. Finally, it is far easier for respondents to opt out of higher-burden self-administered surveys than to avoid or disappoint a pleasant yet determined and persistent interviewer.

Because both interviewer-administered and self-administered data collection have strengths and weaknesses, in surveys where eliminating interviewers entirely is not prudent or possible, questions known to be affected by interviewer characteristics (or by the limits of interviewers' capabilities) can be switched to self-administration. Barbara Mensch and Denise Kandel found in their reanalysis of data from a longitudinal study that young respondents who had the same interviewer over multiple data collection waves significantly underreported drug use. Their conclusion was that the development of "over-rapport" with the interviewer heightened self-censorship. Because that very rapport was responsible for the panel's extraordinary retention (ongoing response) rate, the solution was to maintain continuity of interviewer assignments but move the sensitive questions to self-administration.

...
