Substantive Significance
Substantive significance refers to whether an observed effect is large enough to be meaningful. The concept was developed because statistical SIGNIFICANCE TESTS can find that very small effects are significant, even when they are too small to matter. Substantive significance (also called practical significance) instead focuses on whether an observed relationship, difference, or coefficient is big enough to be considered important. It is usually gauged intuitively, with different standards in different substantive realms.
For example, one way to think about the substantive significance of a difference between two percentages is in terms of how large that difference could possibly be. The largest possible difference in recidivism rates between prisons using two different prisoner release procedures would be 100%. Finding a difference in recidivism rates of 40% or 50% or 60% would likely be considered substantively significant, whereas obtaining a difference of 2% or 3% or 4% would likely be considered unimportant even if it passes the usual statistical significance tests.
Another way to think about substantive significance is as a comparison with the maximum possible difference from a mean level. If the mean recidivism rate is 30%, for example, it can be lowered by at most those 30 percentage points. An improvement of less than 10% of that maximum (.10 × .30 = 3 percentage points) might be considered too small to matter, regardless of whether it is statistically significant. In some fields, researchers are expected to specify in advance how large an effect they require for substantive significance.
STATISTICAL SIGNIFICANCE tests were developed to assess whether effects obtained in small SAMPLES are likely to be real. When applied to large samples, however, they can find that very small effects are significant. For example, a difference of .01 percentage points (e.g., 12.23% under one experimental condition versus 12.24% under another) can attain statistical significance when the number of cases is large enough. However, such a small difference is not likely to matter in most substantive realms. Hence, the concept of substantive significance was devised to call attention to the fact that statistical significance alone does not suffice. The substantive significance criterion is thus more stringent than statistical significance, rejecting the importance of some relationships that would pass conventional significance tests.
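The dependence on sample size can be made concrete with a standard two-proportion z statistic, which grows with the square root of n. The sketch below uses the entry's example rates and assumes equal group sizes:

```python
import math

# The z statistic for comparing two proportions grows with sqrt(n),
# so any nonzero difference becomes "significant" once n is large enough.
p1, p2 = 0.1223, 0.1224  # the entry's example rates

def two_prop_z(p1, p2, n_per_group):
    """z statistic for two independent proportions with equal group sizes."""
    pooled = (p1 + p2) / 2
    se = math.sqrt(pooled * (1 - pooled) * (2 / n_per_group))
    return (p1 - p2) / se

for n in (10_000, 1_000_000, 100_000_000):
    print(f"n per group = {n:>11,}: z = {two_prop_z(p1, p2, n):+.3f}")
```

For this particular 0.01-percentage-point gap, |z| only crosses the conventional 1.96 cutoff when n per group reaches roughly a hundred million; the point is that the verdict is driven by n, not by the size of the effect.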
A similar logic holds for gauging the importance of relationships. A CORRELATION of .03 would be statistically significant if it were based on several thousand observations, but it might not be large enough to be considered substantively important in many fields of application.
Fowler (1985) argues that one safeguard against reporting a statistically significant but trivial effect is to test a NULL HYPOTHESIS that specifies a nonzero effect. If the null hypothesis specifies the minimum EFFECT SIZE considered substantively important, then significance testing checks whether the observed effect is significantly greater than that minimum effect size.
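This safeguard can be sketched as a one-sided z test of a proportion difference against a nonzero null. All numbers below are hypothetical illustrations:

```python
import math

# Testing H0: (p1 - p2) <= min_effect  vs  H1: (p1 - p2) > min_effect,
# i.e., the null hypothesis specifies the minimum substantively
# important effect rather than zero.
def z_vs_minimum(p1, n1, p2, n2, min_effect):
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (diff - min_effect) / se

# Hypothetical data: old procedure 30% recidivism, new procedure 22%,
# 1,000 cases per group, minimum important effect 3 percentage points.
z = z_vs_minimum(0.30, 1000, 0.22, 1000, min_effect=0.03)
print(f"z = {z:.2f}")  # compare with the one-sided 5% cutoff, 1.645
print("significantly exceeds minimum effect:", z > 1.645)
```

Rejecting this null supports the claim that the effect is not merely nonzero but at least as large as the pre-specified minimum, so statistical and substantive significance are assessed in a single test.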