
Split-half designs are commonly used in survey research to experimentally determine the difference between two variations of a survey protocol characteristic, such as the data collection mode, the survey recruitment protocol, or the survey instrument. Other common names for such experiments are split-sample, split-ballot, or randomized experiments. Researchers using split-half experiments are usually interested in the difference between the two groups in outcomes such as survey statistics or other evaluative characteristics.

In this type of experimental design, the sample is randomly divided into two halves, and each half receives a different treatment. Random assignment of sample members to the treatments is crucial to the internal validity of the experiment: it guarantees that, on average, any observed differences between the two groups can be attributed to treatment effects rather than to differences in subsample composition. Split-half experiments have been used successfully in a variety of survey settings to study measurement error bias as well as differences in survey nonresponse rates.
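The random division described above can be sketched in a few lines. This is a minimal illustration, not a procedure from the survey literature; the function name and the use of Python's standard library `random` module are assumptions:

```python
import random

def split_half_assign(sample_ids, seed=None):
    """Randomly split a sample frame into two equal-sized treatment halves.

    Shuffling before splitting gives every sample member the same chance of
    landing in either half, so the two groups have, on average, the same
    composition -- the property that supports internal validity.
    """
    rng = random.Random(seed)  # seeded RNG so the assignment is reproducible
    ids = list(sample_ids)
    rng.shuffle(ids)
    midpoint = len(ids) // 2
    return ids[:midpoint], ids[midpoint:]

# Hypothetical frame of 10 sample members, assigned to treatments A and B.
group_a, group_b = split_half_assign(range(10), seed=1)
```

In practice the split is often stratified rather than a simple shuffle, but the principle is the same: assignment depends only on chance, never on member characteristics.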

This experimental design has been used by questionnaire designers to examine the effects of questionnaire characteristics on the answers survey respondents provide. Current knowledge of question order effects, open- versus closed-response options, scale effects, response order effects, and the inclusion versus exclusion of response options such as “Don't Know” is based on split-half experiments, which have been conducted both in field surveys and in laboratory settings. Researchers usually assume that the experimental treatment that produces the better result induces less measurement bias in the survey statistic of interest. They often conduct such experiments precisely because no gold standard, or true value, is available against which to compare the results. Thus, the difference in the statistic of interest between the two experimental groups indicates the difference in measurement error bias between treatments; without a gold standard, however, it does not indicate how much measurement error bias remains in the statistic itself.
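Without a gold standard, the analysis reduces to a between-group difference in the statistic of interest. A minimal sketch of such a comparison, using only the Python standard library; the data, the function name, and the choice of a Welch-style standard error are illustrative assumptions:

```python
import math
import statistics

def group_difference(values_a, values_b):
    """Difference in a survey statistic (here, the mean) between the two
    experimental halves, with a Welch-style standard error for the difference.

    The difference estimates the *relative* measurement error bias between
    treatments; it says nothing about the absolute bias remaining in either.
    """
    diff = statistics.fmean(values_a) - statistics.fmean(values_b)
    se = math.sqrt(statistics.variance(values_a) / len(values_a)
                   + statistics.variance(values_b) / len(values_b))
    return diff, se

# Hypothetical reports of a sensitive behavior under two question formats.
diff, se = group_difference([3, 5, 4, 6, 2, 5], [2, 3, 1, 4, 2, 3])
```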

Split-half experiments have also proven useful for studying survey nonresponse. Experimenters have used split-half designs to study how variations in field recruitment procedures (such as interviewer training techniques, amounts or types of incentive, advance and refusal letter characteristics, survey topic, and the use of new technologies such as computer-assisted self-interviewing) affect unit response, contact, and refusal rates. These experiments often treat the design feature that yields the higher response rate as the better outcome. Nonresponse bias itself is evaluated less frequently with a split-half design. Such designs have also been used to study item nonresponse as a function of both questionnaire characteristics and survey recruitment protocol characteristics.
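Comparing unit response rates between the two halves is, at its simplest, a two-proportion comparison. The sketch below uses a standard pooled two-proportion z statistic; the counts are invented for illustration and are not from any real study:

```python
import math

def response_rate_z(respondents_a, n_a, respondents_b, n_b):
    """Two-proportion z test for the difference in unit response rates
    between the two experimental halves, using the pooled rate under the
    null hypothesis that both treatments yield the same response rate."""
    rate_a = respondents_a / n_a
    rate_b = respondents_b / n_b
    pooled = (respondents_a + respondents_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return rate_a - rate_b, (rate_a - rate_b) / se

# Hypothetical: advance-letter variant A vs. variant B, 600 cases each.
diff, z = response_rate_z(420, 600, 372, 600)
```

A large positive z would be read, under the usual hypothesis, as evidence that variant A's higher response rate is a real treatment effect rather than chance; it still says nothing directly about nonresponse bias.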

Split-half experiments are an important experimental design for examining the effect of different survey protocol features on survey statistics. However, they do not automatically reveal which protocol or instrument choice is better. In general, to determine the “better” treatment, the researcher fielding a split-half experiment should use theory to predict the desirable outcome. For example, the type of advance letter that produces a higher response rate is often considered superior, under the hypothesis that higher response rates lead to more representative samples. Alternatively, the mode of data collection that elicits more reports of sensitive behaviors is considered better, under the assumption that respondents underreport such behaviors. Although split-half experiments are a powerful design that isolates, on average, the effects of different treatments, survey protocols, or other procedures on various outcomes, they become practically useful only when the survey researcher has a theory about which outcome should be preferred.
