
The “gold standard” in psychological research has traditionally been the experimental method. Such methods have merit, especially for supporting causal inferences. However, it could be argued that they are often inappropriately valued and applied without sufficient attention to the nature of the research problem at hand, especially in attempts to evaluate and improve human services and contribute to effective public policy. Applied developmental scholars, as well as program funders, legislators, and other consumers of research, need to bring these methodological gold standards into greater balance with other approaches to create a useful and valid research base for solving social problems. This entry briefly illustrates key problems related to the two primary features of the experimental method: (1) random assignment of subjects to treatment groups and (2) experimenter-controlled, uniformly applied treatments.

Random Assignment

Experimental evaluations of social programs frequently involve randomly assigning participants either to the program being evaluated or to a comparison group receiving no services or “services as usual.” In doing so, researchers control for potential differences between individuals who receive the treatment and those who do not, and thus increase the internal validity of the study, that is, the confidence with which one can attribute differences between the two groups to the treatment program.

Threats to Validity

However, this investment in internal validity may be associated with considerable loss of external validity (e.g., Sue, 1999). Specifically, outside of randomized clinical research trials, people are not randomly assigned to programs that are provided in communities (e.g., children are not randomly assigned to Head Start). Furthermore, in applied developmental research, participants typically know what treatment group they are in, and the perception and opinion of the participant about the program may substantially influence treatment effectiveness.

This can influence external validity in several ways. First, people willing to be randomly assigned in a demonstration project may differ substantially from those who would choose a particular treatment when it is offered as a community service. This may be true especially among particularly “overstudied” minority populations, who may be reluctant to volunteer for “a research study” but be quite willing to participate in services that are publicly offered in the community (Cauce, Ryan, & Grove, 1998).

Second, the treatment may produce much better outcomes for those who actively seek it out than for those who are randomly assigned to it. In this case, random assignment can produce misleading results, such as showing that a treatment is ineffective when it is actually quite effective for certain types of participants. Because of these factors, it is important that selection processes and motivational factors leading to positive outcomes be studied directly in social interventions rather than “controlled” through randomization (e.g., Koroloff & Friesen, 1997). For example, consider a design in which half of a sample of divorcing families is randomly assigned to either (a) court determination or (b) mediation, as in a traditional randomized trial, while the remaining parents choose whether they want court determination or mediation. Such a design permits a direct examination of random assignment versus selected participation.
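The design just described can be illustrated with a small simulation. The sketch below is purely hypothetical: the outcome model (in which mediation benefits mainly families who prefer it), the preference rate, and all numeric values are assumptions for illustration, not empirical findings from the studies cited above. It shows how, under such a model, the mediation group in the randomized arm would score lower on average than the self-selected mediation group in the choice arm, even though the treatment itself is unchanged.

```python
import random

random.seed(1)

def outcome(treatment, prefers_mediation):
    """Assumed outcome model (hypothetical): mediation adds a large
    benefit for families who prefer it, a small one otherwise."""
    base = 50.0
    if treatment == "mediation":
        base += 10.0 if prefers_mediation else 1.0
    return base + random.gauss(0.0, 5.0)

def run_design(n=10_000):
    """Simulate the two-arm design: even-indexed families are
    randomly assigned; odd-indexed families choose their service."""
    rand_mediation, choice_mediation = [], []
    for i in range(n):
        prefers = random.random() < 0.5  # assumed 50% prefer mediation
        if i % 2 == 0:
            # Randomized arm: assignment ignores preference.
            treat = random.choice(["court", "mediation"])
        else:
            # Choice arm: families pick the option they prefer.
            treat = "mediation" if prefers else "court"
        if treat == "mediation":
            (rand_mediation if i % 2 == 0 else choice_mediation).append(
                outcome(treat, prefers)
            )
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rand_mediation), mean(choice_mediation)

rand_mean, choice_mean = run_design()
print(f"mediation outcome, randomized arm: {rand_mean:.1f}")
print(f"mediation outcome, choice arm:     {choice_mean:.1f}")
```

Under these assumptions the randomized arm averages roughly the midpoint of the large and small benefits, while the choice arm reflects only those who sought mediation out, so comparing the two arms directly exposes the selection effect that a conventional randomized trial would leave invisible.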

...
