Significance Testing
Significance testing provides objective rules for determining whether a researcher's hypotheses are supported by the data. The need for objectivity arises because the hypotheses refer to values for populations of interest, whereas the data come from samples that vary in how reliably they represent those populations. For example, the population of interest may be people of college age, and the sample may be students from a particular class who happen to be available when the data are collected. This, of course, is not a random sample, which would be ideal, but a “handy” (convenience) sample of the kind often used in social science research. (Extra statistical considerations come into play when deciding how far the data from a “handy” sample generalize beyond that sample. For example, the results of a study on basic visual perception with a handy sample of college students should generalize readily, because such basic processes vary little across populations.) To conclude that a particular variable influences behavior, the researcher must follow the principles of good research design in obtaining and recording the data. No amount of statistical “massaging” can rescue a poorly designed study.
Even when the data come from a well-designed study, just showing that there are differences between the scores for, say, two groups is not enough. The researcher must also demonstrate that differences in behavior between the two groups are reliable; that is, they are greater than differences attributable to chance.
Chance, or, more formally, chance sampling effects, refers to preexisting differences between samples of individuals that have nothing to do with how they are classified or treated in a research study. Chance sampling effects can be substantial in studies of human behavior, where the subjects' prior history of genetic and environmental influences is beyond the researcher's control. To eliminate any systematic bias in the assignment of subjects to conditions based on these uncontrolled factors, researchers typically use randomization, assigning subjects to conditions so that every subject has the same chance of being assigned to any condition. This random assignment is generally more crucial than random selection of subjects from a population, which in practice may be imprecisely defined.
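The random-assignment procedure described above can be sketched in a few lines. The function name and the shuffle-then-deal scheme here are illustrative choices, not a prescribed algorithm from the entry:

```python
import random

def randomize(subjects, n_conditions, seed=None):
    """Shuffle the subject list, then deal subjects round-robin into
    conditions, so every subject has the same chance of landing in
    any condition and group sizes stay as equal as possible."""
    rng = random.Random(seed)        # seeded only to make the example reproducible
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    return [shuffled[i::n_conditions] for i in range(n_conditions)]

# Twenty subjects randomly split into two equal-size conditions:
treatment, control = randomize(range(1, 21), 2, seed=42)
```

Because assignment depends only on the shuffle, any uncontrolled subject characteristics are spread across conditions by chance rather than by systematic bias.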
Even with the use of randomization, however, a measure such as the sample mean can vary considerably from sample to sample even when the populations from which the samples are drawn are equivalent. This variation arises because different individuals fall into different samples, and each individual has unique characteristics; the samples will therefore vary from one another. This is especially true when the sample size is small and a few discrepant scores can skew the results. Later, when we develop a specific example, we will see that sample size plays a crucial role in significance testing. To preview that illustration: the smaller the sample size, the larger the mean difference between groups must be in order to demonstrate a statistically significant difference.
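The role of sample size previewed above can be made concrete with a small numerical sketch. This is not the entry's own worked example; it simply computes a pooled-variance two-sample t statistic (the quantity underlying the t-test) for the same two-point mean difference at two sample sizes, with invented scores:

```python
import math

def pooled_t(a, b):
    """Two-sample t statistic with pooled variance: the mean difference
    divided by its estimated standard error."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)          # sum of squared deviations, group a
    ssb = sum((x - mb) ** 2 for x in b)          # sum of squared deviations, group b
    sp2 = (ssa + ssb) / (na + nb - 2)            # pooled variance estimate
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Same 2-point mean difference and the same scores, but small vs. large n:
small_a, small_b = [10, 12, 14], [12, 14, 16]
large_a, large_b = small_a * 10, small_b * 10    # each score repeated 10 times

t_small = pooled_t(small_a, small_b)   # |t| ≈ 1.22: not significant with n = 3 per group
t_large = pooled_t(large_a, large_b)   # |t| ≈ 4.66: significant with n = 30 per group
```

With three scores per group the 2-point difference is well within chance variation; with thirty per group the standard error shrinks and the same difference becomes statistically significant.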
The formal process by which the researcher determines whether a difference is “statistically significant,” that is, whether it represents more than chance variation, is known as hypothesis testing.
...