
The z test is a statistical test procedure that uses a sample test statistic that follows a normal distribution. Statistical testing is a way of making statistical inferences about unknown population parameters. The z test is usually used to test hypotheses about (a) the mean of a population based on a single sample, (b) the proportion of successes in a population based on a single sample, (c) the difference between the means of two populations based on samples from each population, or (d) the difference between the proportions of successes in two populations based on samples from each population. In the following, capital letters such as X and Z are used to represent random variables, and the corresponding lowercase letters such as x and z are used to represent specific values of the random variables.
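Case (a), the one-sample z test for a population mean, can be sketched as follows. The data and parameter values here are hypothetical, chosen only to illustrate the computation; the test assumes the population standard deviation is known.

```python
from statistics import NormalDist
from math import sqrt

def one_sample_z_test(sample_mean, mu0, sigma, n):
    """Two-tailed one-sample z test for a population mean.

    sample_mean: observed sample mean x-bar
    mu0:         hypothesized population mean under H0
    sigma:       known population standard deviation
    n:           sample size
    Returns the test statistic z and the two-tailed p value.
    """
    # Standardize the sample mean under H0
    z = (sample_mean - mu0) / (sigma / sqrt(n))
    # Two-tailed p value: probability of a value of |Z| at least this extreme
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical example: x-bar = 52, H0: mu = 50, sigma = 10, n = 100
z, p = one_sample_z_test(52, 50, 10, 100)
# z = (52 - 50) / (10 / sqrt(100)) = 2.0, p value about 0.0455
```

With α = 0.05, this p value would lead to rejecting H0; with α = 0.01, it would not, which shows how the conclusion depends on the chosen level of significance.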

A sample in a z test is assumed to be a simple random sample. For a finite population, a simple random sample is one in which every possible sample of the same size has the same probability of being chosen; for an infinite population, it is one in which all the observations in the sample are statistically independent and drawn from the same distribution.

Two hypotheses are involved in a hypothesis test. The null hypothesis, denoted by H0, is the hypothesis that cannot be viewed as false unless sufficient evidence to the contrary is obtained. The alternative hypothesis, denoted by Ha, is the hypothesis against which the null hypothesis is tested and is viewed as true when the null hypothesis is declared false. The null hypothesis is the one for which the cost of an erroneous rejection is high. Depending on the application, there are three types of hypothesis tests—two-tailed (sided) tests, one-tailed (sided) lower tail tests, and one-tailed (sided) upper tail tests.

A correct decision is made if a true hypothesis is accepted or a false hypothesis is rejected. Although it is never known whether a correct decision is made, the probabilities of making errors can be assessed. There are two types of errors in hypothesis testing. A Type I error occurs when a null hypothesis is rejected when it is actually true. A Type II error occurs when a null hypothesis is not rejected when it is actually false. The probability of making a Type I error is usually represented by α [i.e., α = P(Type I error) = P(Rejecting H0|H0 is true)] and is called the level of significance. The two types of errors cannot both be controlled at the same time for a given sample size. Therefore, only the Type I error is controlled, by specifying α for the test. In practice, α is usually set to α = 0.01, α = 0.02, α = 0.05, or α = 0.10. Increasing the sample size reduces testing errors but also increases the cost of the study.
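The Type II error probability β can be computed for a specific alternative value of the parameter. The sketch below, using hypothetical values, finds β for the two-tailed z test of a mean: H0 is not rejected when the sample mean falls inside the acceptance region μ0 ± z_{α/2}·σ/√n, so β is the probability of that event when the true mean is μa.

```python
from statistics import NormalDist
from math import sqrt

def type_ii_error(mu0, mu_a, sigma, n, alpha=0.05):
    """Probability beta of a Type II error for a two-tailed z test of a mean.

    mu0:   hypothesized mean under H0
    mu_a:  true mean under the specific alternative considered
    sigma: known population standard deviation
    n:     sample size
    """
    se = sigma / sqrt(n)
    z_half = NormalDist().inv_cdf(1 - alpha / 2)       # z_{alpha/2}
    # Acceptance region for the sample mean under H0
    lower, upper = mu0 - z_half * se, mu0 + z_half * se
    # Probability the sample mean lands in that region when the mean is mu_a
    true_dist = NormalDist(mu_a, se)
    return true_dist.cdf(upper) - true_dist.cdf(lower)

# Hypothetical example: H0: mu = 50, true mean 52, sigma = 10, alpha = 0.05
beta_100 = type_ii_error(50, 52, 10, 100)   # beta with n = 100
beta_400 = type_ii_error(50, 52, 10, 400)   # beta shrinks as n grows
```

The comparison of `beta_100` and `beta_400` illustrates the trade-off noted above: with α held fixed, only a larger sample reduces the Type II error probability.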

A statistical decision rule specifies, for each possible outcome of the sample test statistic, which alternative, H0 or Ha, should be concluded. The set of values of the sample test statistic for which the null hypothesis H0 is not rejected is called the acceptance region, and that for which the alternative hypothesis Ha is concluded is called the rejection region. The decision rule might be established in three different but equivalent ways—critical values, action limits, and p values. When specifying decision rules, zα is used to represent the value of the standard normal random variable Z for a specific α such that P(Z > zα) = α. The value zα can be found in the standard normal probability table or through a spreadsheet or statistics software.
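As a brief sketch of the software route, Python's standard library can supply zα through the inverse normal cumulative distribution function, and the critical-value and p-value decision rules can be checked against each other. The observed statistic z = 1.80 is a hypothetical value for illustration.

```python
from statistics import NormalDist

alpha = 0.05
z_alpha = NormalDist().inv_cdf(1 - alpha)        # z_alpha for a one-tailed test
z_half = NormalDist().inv_cdf(1 - alpha / 2)     # z_{alpha/2} for a two-tailed test

# Hypothetical observed statistic for an upper-tail test
z = 1.80
p_value = 1 - NormalDist().cdf(z)

# The two forms of the decision rule are equivalent:
reject_by_critical_value = z > z_alpha
reject_by_p_value = p_value < alpha
```

For any observed z, the condition z > zα holds exactly when P(Z > z) < α, so the two rules always reach the same conclusion.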

...
