
Paper-and-Pencil Interviewing (PAPI)

Prior to the 1980s, essentially all survey data collection that was done by an interviewer was done via paper-and-pencil interviewing, which came to be known as PAPI. Following the microcomputer revolution of the early 1980s, computer-assisted interviewing (CAI)—for example, computer-assisted personal interviewing (CAPI), computer-assisted self-interviewing (CASI), and computer-assisted telephone interviewing (CATI)—had become commonplace by the 1990s, essentially eliminating most uses of PAPI, with some exceptions. PAPI still is used in instances where data are being gathered from a relatively small sample, with a noncomplex questionnaire, on an accelerated start-up time basis, and/or the time and effort it would take to program (and test) the instrument into a computer-assisted version simply is not justified. PAPI also serves as a backup for those times when computer systems go down and interviewers would be left without work if there were not a paper version of the questionnaire to fall back on temporarily. (Of note, mail surveys typically use paper-and-pencil questionnaires, but since they are not interviewer-administered surveys, mail questionnaires and that mode of data collection are not discussed here.)

PAPI is markedly inferior to CAI in many ways. The most important of these are (a) how sample processing is done with PAPI and (b) the limits of the complexity of the questionnaires that can be implemented via PAPI. Processing sample cases in PAPI traditionally was done manually. This required a supervisory person or staff to hand-sort “call sheets” or “control sheets” that were printed on paper, on which the interviewers filled out information each time an attempt was made to complete a questionnaire with a sampled case (e.g. at a telephone number or household address). This manual approach put practical limits on the complexity of the sample management system that could be used to sort and reprocess the active sample. It also relied entirely on the behavior and memory of the sample coordinator, which of course was fallible.

Questionnaires in PAPI cannot practically deploy complex randomization schemes that are easily programmed into and controlled by CAI. Although randomization can be built into PAPI, it typically requires that multiple versions of the questionnaire be created, printed, and randomly assigned to sampled cases. And, while randomized “starts” to question sequences can also be implemented in PAPI, interviewer error in implementing that type of randomization accurately is much more prevalent in PAPI. True randomization of the order of items within a question sequence is a nightmare to implement accurately—if not outright impossible—in PAPI when there are more than two items to randomize. The use of questions that use “fills” from answers previously given by the respondent (e.g., Earlier you said that you had gone to the hospital X times the past 3 months…) is also much more difficult to implement accurately in PAPI, whereas there are essentially no limits to its use in CAI. PAPI also has no assured way to prevent an interviewer from entering an “out-of-range” value for a particular question, whereas in CAI valid value ranges are programmed into each question asked.
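To make the contrast concrete, three of the CAI capabilities mentioned above—randomized item order, answer “fills,” and out-of-range value checks—can be sketched in a few lines of code. This is a minimal illustrative sketch, not any actual CAI system’s implementation; the function names and the question wording are hypothetical.

```python
import random

def randomized_items(items, seed=None):
    # Randomize presentation order of items within a question sequence,
    # which CAI does automatically and PAPI cannot do reliably.
    rng = random.Random(seed)  # hypothetical: seed kept for reproducibility
    order = list(items)
    rng.shuffle(order)
    return order

def validate_answer(value, lo, hi):
    # Reject out-of-range values at entry time, as CAI range checks do.
    if not (lo <= value <= hi):
        raise ValueError(f"answer {value} outside valid range {lo}-{hi}")
    return value

def fill_question(template, **prior_answers):
    # Insert a previously recorded answer ("fill") into later question text.
    return template.format(**prior_answers)

# Hypothetical usage:
items = randomized_items(["cost", "quality", "convenience"], seed=7)
visits = validate_answer(4, lo=0, hi=90)  # 0-90 is an assumed valid range
question = fill_question(
    "Earlier you said that you had gone to the hospital {n} times "
    "the past 3 months. Were any of those visits overnight stays?",
    n=visits,
)
```

The point of the sketch is that each safeguard is a few lines of logic in CAI, whereas in PAPI the same controls require printing multiple questionnaire versions or trusting interviewers to apply them by hand.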

The legibility of answers that interviewers record to open-ended questions in PAPI always is more problematic than that of the typed answers captured via CAI. All in all, there is a great deal more potential for certain types of interview-related error in data collection in PAPI than is the case with CAI.

...
