Two-Tailed Test
A NULL HYPOTHESIS most often specifies one particular value of a population PARAMETER, even though we do not necessarily believe this value to be the true value of the parameter. In the study of the relationship between two variables, using REGRESSION analysis as an example, the null hypothesis used most often is that there is no relationship between the variables. That translates into the null hypothesis H0: β = 0, where β is the SLOPE of the population regression line. Most often, we study the RELATIONSHIP between two variables for the purpose of demonstrating that a relationship does exist. This is achieved if we are able to reject the null hypothesis of no relationship.
Suppose we can show, by rejecting the null hypothesis, that the slope of the population regression line is not equal to zero. Does the slope then have a positive or a negative value? Without any additional information about the slope, the alternative hypothesis becomes Ha: β ≠ 0, meaning the slope can be either negative or positive. Such an alternative hypothesis makes the test a two-tailed test, also known as a two-sided test.
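The regression test described above can be sketched in a few lines of code. The sketch below is illustrative only: the function name `slope_test` is hypothetical, and it uses a normal approximation in place of the exact t distribution, which is adequate only for reasonably large samples.

```python
from statistics import NormalDist, mean

def slope_test(x, y):
    """Two-tailed test of H0: beta = 0 for a simple regression slope.

    Hypothetical helper for illustration; uses a normal approximation
    to the t distribution, adequate for large samples.
    """
    n = len(x)
    xbar, ybar = mean(x), mean(y)
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b = sxy / sxx                       # estimated slope
    a = ybar - b * xbar                 # estimated intercept
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    se = (sse / (n - 2) / sxx) ** 0.5   # standard error of the slope
    t = b / se                          # test statistic
    p = 2 * (1 - NormalDist().cdf(abs(t)))  # two-tailed p value
    return b, t, p
```

Because the alternative is β ≠ 0, the p value counts both tails: slopes far above zero and slopes far below zero both count as evidence against the null hypothesis.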
In the original approach to this problem, as developed by Sir Ronald A. Fisher and others, a SIGNIFICANCE LEVEL was chosen in advance, say α = 0.05, for lack of any better value. Depending on the nature of the analysis, a test statistic was chosen, usually z, t, CHI-SQUARE, or F. For the standard normal variable z, 2.5% of the values are less than −1.96 and 2.5% of the values are larger than 1.96. Thus, for the normal z variable, the null hypothesis would be rejected if the observed value of z was less than −1.96 or larger than 1.96, meaning that the value fell in one of the two tails of the distribution.
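The critical values ±1.96 arise from splitting α = 0.05 equally between the two tails, which the following sketch makes explicit (the helper `reject` is a hypothetical name for illustration):

```python
from statistics import NormalDist

alpha = 0.05
# Two-tailed test: split alpha equally between the two tails,
# so each tail gets alpha/2 = 0.025 of the probability.
z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # approximately 1.96

def reject(z, z_crit=z_crit):
    """Reject H0 when z falls in either tail beyond the critical value."""
    return abs(z) > z_crit
```

With a different α, the same recipe yields different critical values; for α = 0.01, for example, each tail gets 0.005 and the cutoff is roughly ±2.58.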
This means that if the null hypothesis were true and the study were repeated a large number of times, 5% of the samples, and thereby of the computed values of z, would be either less than −1.96 or larger than 1.96. For these samples, we would erroneously reject the null hypothesis. The reason for the name "two-tailed test" is that the null hypothesis is rejected for unusually large or unusually small values of the test statistic z or t. Tests using the chi-square or the F statistic are, by their nature, set up to be two-tailed tests.
With the arrival of statistical software for data analysis, there has been a shift away from a significance level α chosen before the analysis toward the use of P VALUES computed as part of the analysis. For z and t, if we see the absolute value symbol |z| or |t|, then the corresponding p values represent two-tailed tests. If we decided before starting the analysis to use a two-tailed test and the software provides a one-tailed p value, then we have to double this p value.
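The relationship between the one-tailed and two-tailed p values for a z statistic can be sketched directly (the function name `two_tailed_p` is hypothetical, chosen for illustration):

```python
from statistics import NormalDist

def two_tailed_p(z):
    # One-tailed p value: probability of exceeding |z| in a single tail.
    one_tailed = 1 - NormalDist().cdf(abs(z))
    # Two-tailed p value: both tails count, so double the one-tailed value.
    return 2 * one_tailed
```

For z = 1.96 this returns approximately 0.05, matching the critical values of the classical α = 0.05 two-tailed test.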
It is also possible to simply report the computed p value and let the reader determine whether the test should be two-tailed or one-tailed.