One-Tailed Test
A null hypothesis most often specifies one particular value of a population parameter. We do not necessarily believe this to be the true value of the parameter, but if we can reject the null hypothesis and thereby rule out this particular value, then the parameter must equal something else. In the study of the relationship between two variables, using regression as an example, the null hypothesis is most often stated so that there is no relationship between the variables. In symbols, H0: β = 0, where β is the slope of the population regression line.
If we can reject this null hypothesis, then we have shown that β is not equal to 0, meaning that a relationship exists between the variables. Next, if the population slope is not equal to 0, what can we conclude about its value? One possibility is that we know nothing further about the slope, meaning that it could be either greater than or less than zero. Another possibility is that we have additional knowledge about the slope. Say that we know, from previous research, that the slope cannot be negative. The alternative hypothesis can then be stated as H1: β > 0. Because the null hypothesis has been rejected, it follows from the alternative hypothesis that the value of β must be greater than 0.
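As an illustration of testing H0: β = 0 against H1: β > 0, the following Python sketch fits a least-squares slope to a small made-up data set and forms the usual test statistic, the estimated slope divided by its standard error. The data and variable names are invented for the example; with so few observations the statistic would be compared to a t distribution with n − 2 degrees of freedom rather than to the normal distribution.

```python
# Illustrative one-tailed test of H0: beta = 0 vs H1: beta > 0 for a
# regression slope. The data below are made up for the example.
from math import sqrt

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n = len(x)

x_bar = sum(x) / n
y_bar = sum(y) / n

# Least-squares slope: b = Sxy / Sxx
s_xy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
s_xx = sum((xi - x_bar) ** 2 for xi in x)
b = s_xy / s_xx                      # estimated slope

# Residual sum of squares and the standard error of the slope
a = y_bar - b * x_bar                # intercept
sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
se_b = sqrt(sse / (n - 2) / s_xx)

# Compare to a t distribution with n - 2 degrees of freedom
t_stat = b / se_b
print(b, t_stat)
```

Because the alternative is one-sided, only a sufficiently large positive value of the statistic leads to rejection; a large negative value would not, however extreme.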
This is an example of a one-tailed (also called one-sided) test, so named because of the one-sided form of the alternative hypothesis: it includes values in only one direction away from the parameter value specified by the null hypothesis.
When there is a one-tailed alternative hypothesis, the null hypothesis is rejected for only one range of the test statistic. With a normal test statistic and a 5% significance level, a null hypothesis with a two-tailed alternative is rejected for z < −1.96 or z > 1.96. Half of the significance level is located in each tail of the distribution of the test statistic, and we reject for large negative or large positive values. For a one-tailed test with a 5% significance level, however, the null hypothesis is rejected for z > 1.645. The rejection region is then located in only one tail of the distribution (here, the positive tail). This is because we make use of the additional knowledge we have about the parameter—that it is greater than zero.
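As a quick check on the cutoffs quoted above, the critical values for a 5% significance level can be computed from the standard normal distribution. This Python sketch uses only the standard library; note that the one-tailed cutoff is approximately 1.645 (often rounded to 1.64 or 1.65).

```python
# Critical values for a 5% significance level under the standard
# normal distribution.
from statistics import NormalDist

z = NormalDist()  # standard normal: mean 0, standard deviation 1

# Two-tailed test: 2.5% of the probability in each tail.
two_tailed_cut = z.inv_cdf(1 - 0.05 / 2)   # about 1.96

# One-tailed test: all 5% in the upper tail.
one_tailed_cut = z.inv_cdf(1 - 0.05)       # about 1.645

print(round(two_tailed_cut, 3), round(one_tailed_cut, 3))
```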
The distinction between a two-tailed and a one-tailed alternative hypothesis can matter for a normal test statistic, say if z = 1.85. With a two-tailed alternative, the null hypothesis would not be rejected at the 5% level, but with a one-tailed alternative it would be. This situation, however, does not occur very often.
Also, with the change from a prechosen significance level to a p value computed from the data, the distinction is not as important. If we report a one-tailed p value, then the reader can easily change it to a two-tailed p value by multiplying by 2. Statistical software must be clear on whether it computes one-tailed or two-tailed p values.
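The doubling rule can be checked directly for the observed value z = 1.85 from the example above. This standard-library sketch computes the one-tailed p value as the upper-tail area and doubles it for the two-sided alternative.

```python
# One- vs two-tailed p values for an observed z = 1.85.
from statistics import NormalDist

z_obs = 1.85
p_one = 1 - NormalDist().cdf(z_obs)  # upper-tail area, about 0.032
p_two = 2 * p_one                    # doubled for a two-sided alternative

print(round(p_one, 4), round(p_two, 4))
```

The one-tailed p value falls below 0.05 while the two-tailed p value does not, which is exactly the borderline situation described above.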
...