
Intervention

Intervention research examines the effects of an intervention on an outcome of interest. The primary purpose of intervention research is to engender a desirable outcome for individuals in need (e.g., reduce depressive symptoms or strengthen reading skills). As such, intervention research might be thought of as differing from prevention research, where the goal is to prevent a negative outcome from occurring, or even from classic laboratory experimentation, where the goal is often to support specific tenets of theoretical paradigms. Assessment of an intervention's effects, the sine qua non of intervention research, varies according to study design, but typically involves both statistical and logical inferences.

The hypothetical intervention study presented next is used to illustrate important features of intervention research. Assume a researcher wants to examine the effects of parent training (i.e., intervention) on disruptive behaviors (i.e., outcome) among preschool-aged children. Of 40 families seeking treatment at a university-based clinic, 20 families were randomly assigned to an intervention condition (i.e., parent training) and the remaining families were assigned to a (wait-list) control condition. Assume the intervention was composed of six 2-hour weekly therapy sessions with the parent(s) to strengthen theoretically identified parenting practices (e.g., effective discipline strategies) believed to reduce child disruptive behaviors. Whereas parents assigned to the intervention condition attended sessions, parents assigned to the control condition received no formal intervention. In the most basic form of this intervention design, data from individuals in both groups are collected at a single baseline (i.e., preintervention) assessment and at one follow-up (i.e., postintervention) assessment.

Assessing the Intervention's Effect

In the parenting practices example, the first step in assessing the intervention's effect involves testing for a statistical association between intervention group membership (intervention vs. control) and the identified outcome (e.g., reduction in temper tantrum frequency). This is accomplished by using an appropriate inferential statistical procedure (e.g., an independent-samples t test) coupled with an effect size estimate (e.g., Cohen's d) to provide pertinent information regarding both the statistical significance and strength (i.e., the amount of benefit) of the intervention-outcome association.
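The two quantities described above can be sketched in Python. This is a minimal illustration with invented follow-up tantrum counts (the data, group sizes, and names are hypothetical, not from the study), computing an equal-variances independent-samples t statistic and Cohen's d directly rather than via a statistics library:

```python
import statistics

def pooled_variance(group1, group2):
    # Weighted average of the two sample variances (equal variances assumed).
    n1, n2 = len(group1), len(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    return ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)

def t_statistic(group1, group2):
    # Independent-samples t: mean difference over its standard error.
    n1, n2 = len(group1), len(group2)
    mean_diff = statistics.mean(group1) - statistics.mean(group2)
    se = (pooled_variance(group1, group2) * (1 / n1 + 1 / n2)) ** 0.5
    return mean_diff / se

def cohens_d(group1, group2):
    # Standardized mean difference: difference in pooled SD units.
    mean_diff = statistics.mean(group1) - statistics.mean(group2)
    return mean_diff / pooled_variance(group1, group2) ** 0.5

# Hypothetical weekly tantrum counts at the postintervention assessment.
control = [9, 8, 10, 7, 9, 11, 8, 10, 9, 8]
treated = [6, 5, 7, 4, 6, 8, 5, 7, 6, 5]

print(round(t_statistic(control, treated), 2))
print(round(cohens_d(control, treated), 2))
```

The t statistic would be referred to a t distribution with n1 + n2 - 2 degrees of freedom to judge statistical significance, while d expresses the strength of the group difference in pooled standard deviation units.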

Having established an intervention-outcome association, researchers typically wish to ascertain whether this association is causal in nature (i.e., that the intervention, not some other factor, caused the observed group difference). This more formidable endeavor of establishing an “intervention to outcome” causal connection is known to social science researchers as establishing a study's internal validity—the most venerable domain of the renowned Campbellian validity typology. Intervention studies considered to have high internal validity have no (identified) plausible alternative explanations (i.e., internal validity threats) for the intervention-outcome association. As such, the most parsimonious explanation for the results is that the intervention caused the outcome.

Random Assignment in Intervention Research

The reason random assignment is a much-heralded design feature is its role in reducing the number of alternative explanations for the intervention-outcome association. In randomized experiments involving a no-treatment control, the control condition provides crucial information regarding what would have happened to the intervention participants had they not been exposed to the intervention. Because random assignment precludes systematic pretest group differences (as the groups are probabilistically equated on all measured and unmeasured characteristics), it is unlikely that some other factor produced postintervention group differences. This protection conveyed by random assignment can, however, be undone once the study commences (e.g., by differential attrition or participant loss). Quasi-experiments, that is, intervention studies that lack random assignment to condition, are more vulnerable to internal validity threats. Thoughtful design and analysis of quasi-experiments typically involve identifying several plausible internal validity threats a priori and incorporating a mixture of design and statistical controls that attempt to rule out (or render implausible) the influence of these threats.
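The assignment mechanism itself can be sketched as follows. This minimal Python illustration randomly splits 40 hypothetical families into equal-sized intervention and control groups (the family labels and seed are invented for the example); because group membership is determined by chance alone, any pretest differences between the resulting groups are attributable only to chance:

```python
import random

def randomly_assign(units, seed=None):
    """Randomly split units into two equal-sized conditions.

    Shuffling before splitting means every unit has the same
    probability of landing in either condition.
    """
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical sample of 40 families seeking treatment.
families = [f"family_{i:02d}" for i in range(40)]
intervention, control = randomly_assign(families, seed=1)
print(len(intervention), len(control))
```

Note that this only equates the groups at baseline in expectation; as the text observes, differential attrition after assignment can still reintroduce systematic group differences.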

...
