
Significance testing provides objective rules for determining whether a researcher's hypotheses are supported by the data. The need for objectivity arises from the fact that the hypotheses refer to values for populations of interest, whereas the data come from samples of varying reliability in representing the populations. For example, the population of interest may be people of college age, and the sample may be students from a particular class who are available at the time the data are collected. This, of course, would not represent a random sample, which would be ideal, but does represent a “handy” sample often used in social science research. (Extra statistical considerations come into play when deciding the extent to which the data from a “handy” sample generalize beyond that sample. For example, the results of a study on visual perception with a handy sample of college students should be readily generalizable.) To conclude that a particular variable influences behavior, the researcher must follow the principles of good research design in obtaining and recording the data. No amount of statistical “massaging” of the data can rescue a poorly designed study.

Even when the data come from a well-designed study, just showing that there are differences between the scores for, say, two groups is not enough. The researcher must also demonstrate that differences in behavior between the two groups are reliable; that is, they are greater than differences attributable to chance.

Chance, or, more formally, chance sampling effects, refers to preexisting differences between samples of individuals that have nothing to do with how they are classified or treated in a research study. Chance sampling effects might be substantial in a study of human behavior where the subjects' prior history of genetic and environmental influences is beyond the researcher's control. In order to eliminate any systematic bias in the assignment of subjects to conditions based on the uncontrolled factors, researchers typically use randomization to assign subjects to conditions such that all subjects have the same chance of being assigned to any condition. This is a more crucial application of randomness than selecting subjects from a population that may be imprecisely defined.
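The randomization scheme described above can be sketched in a few lines of Python. This is a minimal illustration, not a procedure from the source; the function name and condition labels are hypothetical. Shuffling the full subject list and then dealing subjects out round-robin gives every subject the same chance of landing in any condition.

```python
import random

def randomize(subjects, conditions, seed=None):
    """Randomly assign subjects to conditions in (near-)equal groups.

    Shuffling first removes any systematic bias from the original
    ordering; dealing round-robin keeps group sizes balanced.
    """
    rng = random.Random(seed)
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    k = len(conditions)
    return {cond: shuffled[i::k] for i, cond in enumerate(conditions)}

# Hypothetical example: 20 subjects, two conditions.
groups = randomize(range(20), ["treatment", "control"], seed=1)
print({cond: len(members) for cond, members in groups.items()})
# each condition receives 10 subjects, chosen at random
```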

Even with the use of randomization, however, a measure such as the sample mean can vary considerably from sample to sample, even when the populations from which the samples are drawn are equivalent. This variation arises because different individuals fall into different samples, and each individual has unique characteristics; the samples will therefore differ from one another. The variation is especially pronounced when sample size is small and a few discrepant scores can skew the results. Later, when we develop a specific example, we will see that sample size plays a crucial role in significance testing: the smaller the sample size, the larger the mean difference between groups must be in order to demonstrate a statistically significant difference.
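The point about sample-to-sample variation can be demonstrated with a short simulation (a sketch, not from the source): draw many samples of a given size from one population and measure how widely the sample means scatter. The population here is assumed to be standard normal purely for illustration.

```python
import random
import statistics

def mean_spread(n, draws=2000, seed=42):
    """Standard deviation of sample means across repeated samples
    of size n, all drawn from the same (standard normal) population."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.gauss(0, 1) for _ in range(n))
             for _ in range(draws)]
    return statistics.stdev(means)

# Sample means scatter much more widely when n is small,
# even though every sample comes from the identical population.
print("spread with n=5: ", mean_spread(5))
print("spread with n=50:", mean_spread(50))
```

The smaller samples show the larger spread, which is why a bigger observed mean difference is needed before chance can be ruled out.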

The formal process by which the researcher determines whether a difference is “statistically significant,” that is, whether it represents more than chance variation, is known as hypothesis testing.
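One concrete way to make "more than chance variation" operational is a permutation test: repeatedly reshuffle the combined scores into two arbitrary groups and ask how often chance alone produces a mean difference as large as the one observed. This is offered as one illustrative hypothesis-testing procedure, not as the specific test the source goes on to develop; the data values are invented for the example.

```python
import random
import statistics

def permutation_test(a, b, n_perm=5000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Returns the proportion of random relabelings whose absolute mean
    difference is at least as large as the observed one (the p-value).
    """
    rng = random.Random(seed)
    observed = abs(statistics.fmean(a) - statistics.fmean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.fmean(pooled[:len(a)]) -
                   statistics.fmean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical scores for two groups.
group1 = [5.1, 4.8, 6.2, 5.9, 5.4]
group2 = [4.2, 4.5, 4.0, 4.9, 4.3]
p = permutation_test(group1, group2)
print(p)  # a small p-value means chance alone rarely produces so large a difference
```

A small p-value (conventionally below .05) is taken as grounds for calling the observed difference statistically significant.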

...
