Alpha, Significance Level of Test

Alpha is a threshold value used to judge whether a test statistic is statistically significant. It is chosen by the researcher and represents an acceptable probability of a Type I error in a statistical test. Because alpha corresponds to a probability, it can range from 0 to 1. In practice, 0.01, 0.05, and 0.1 are the most commonly used values, representing a 1%, 5%, and 10% chance of a Type I error (i.e., rejecting the null hypothesis when it is in fact correct). If the p-value of a test is less than or equal to the chosen level of alpha, the result is deemed statistically significant; otherwise it is not.
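As a minimal sketch of this decision rule, the following Python snippet runs a one-sample t-test and compares the resulting p-value to a prespecified alpha. The data, sample size, and random seed here are assumptions chosen purely for illustration:

    import numpy as np
    from scipy import stats

    alpha = 0.05  # significance level, chosen before the analysis

    rng = np.random.default_rng(42)                    # hypothetical data
    sample = rng.normal(loc=0.3, scale=1.0, size=30)

    # Test H0: population mean = 0 against a two-sided alternative
    t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)

    if p_value <= alpha:
        print(f"p = {p_value:.3f} <= alpha = {alpha}: reject the null hypothesis")
    else:
        print(f"p = {p_value:.3f} > alpha = {alpha}: fail to reject the null hypothesis")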

The typical level of alpha is 0.05, but this is simply a convention: it rests on no statistical theory or criterion beyond long-standing practice that has become the accepted standard. Alpha levels of 0.1, a more lenient standard, are sometimes used; levels greater than 0.1 are rarely if ever used. All things being equal, standard errors are larger in smaller data sets, so it may make sense to choose 0.1 for alpha when working with a small data set. Conversely, in large data sets (hundreds of thousands of observations or more), it is not uncommon for nearly every test to be significant at the 0.05 level; the more stringent level of 0.01 is therefore often used (or even 0.001 in some instances). In tabular presentations of results, different symbols are conventionally used to denote significance at different values of alpha (e.g., one asterisk for 0.05, two asterisks for 0.01, three asterisks for 0.001), as in the helper sketched below. When t-values of tests are reported, it is redundant to also state significance at a given alpha.
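A small helper along these lines might map p-values to the conventional star notation; the function name is illustrative, not a standard library API:

    def significance_stars(p):
        """Map a p-value to the conventional asterisk notation."""
        if p <= 0.001:
            return "***"   # significant at alpha = 0.001
        if p <= 0.01:
            return "**"    # significant at alpha = 0.01
        if p <= 0.05:
            return "*"     # significant at alpha = 0.05
        return ""          # not significant at conventional levels

    print(significance_stars(0.03))   # prints "*"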

Best practice is to specify alpha before analyzing the data. Specifying alpha after performing an analysis opens one up to the temptation to tailor the significance level to fit the results. For example, a test with a p-value of 0.07 is not significant at the customary 0.05 level but meets what is sometimes referred to as “marginal” significance at the 0.1 level. If alpha is chosen after running the model, nothing would prevent an investigator, in this example, from choosing 0.1 simply because it achieves significance. If alpha is specified a priori, by contrast, the investigator must justify choosing 0.1 for reasons other than simply “moving the goalposts.” Another reason to specify alpha in advance is that sample size calculations require a value for alpha (or for the confidence level, which is just 1 minus alpha).
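To illustrate the last point, here is a sketch using the power calculations in statsmodels. The effect size (Cohen's d = 0.5) and desired power (0.8) are assumed values for illustration; the point is that no sample size can be computed until alpha is fixed, and a stricter alpha demands a larger sample for the same power:

    from statsmodels.stats.power import TTestIndPower

    # Required sample size per group for a two-sample t-test,
    # assuming a medium effect (Cohen's d = 0.5) and 80% power.
    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(f"required n per group at alpha = 0.05: {n_per_group:.1f}")

    n_strict = analysis.solve_power(effect_size=0.5, alpha=0.01, power=0.8)
    print(f"required n per group at alpha = 0.01: {n_strict:.1f}")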

Note that if 20 statistical models are run with alpha set at 0.05, for example, then one should expect one of them (20 × 0.05 = 1) to produce a significant result merely by chance. When multiple tests are performed, investigators sometimes apply corrections, such as the Bonferroni correction, to adjust for this, as in the sketch below. In and of itself, specifying a stringent alpha (e.g., 0.01 or 0.001) is not a guarantee of anything. In particular, if a statistical model is mis-specified, alpha does not change that.
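The following sketch simulates this situation (the number of tests, sample sizes, and seed are assumptions for illustration): it runs 20 t-tests on data generated under a true null hypothesis, counts the uncorrected false positives, and then applies the Bonferroni correction via statsmodels:

    import numpy as np
    from scipy import stats
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(0)

    # 20 one-sample t-tests on data for which the null (mean = 0) is true
    pvals = np.array([stats.ttest_1samp(rng.normal(size=30), 0.0).pvalue
                      for _ in range(20)])

    print("significant at alpha = 0.05, uncorrected:", np.sum(pvals <= 0.05))

    # Bonferroni: each test is effectively judged against alpha / 20
    reject, p_adjusted, _, _ = multipletests(pvals, alpha=0.05,
                                             method="bonferroni")
    print("significant after Bonferroni correction:", reject.sum())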

...
