Chapter 14: Confidence Intervals


Confidence Intervals

The principle of sampling theory underpins a parametric technique for estimating confidence intervals. A confidence interval is a range of values around a sample statistic that is likely to contain an unknown population parameter at a given level of probability. For a fixed sample, the wider the confidence interval, the higher the confidence level. The normal distribution is used to calculate the limits if the population is normal and the standard deviation of the population is known. If normality cannot be assumed, a large sample size will ensure that the sampling distribution of the means is approximately normal (Oakshott, 1994).

Confidence intervals allow you to quantify the reliability of an estimate by specifying limits within which the true population value is expected to lie.
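The calculation described above can be sketched in Python. This is a minimal illustration, not from the source text: the sample values and the assumed known population standard deviation are invented for demonstration, and 1.96 is the standard normal z-value for a 95% confidence level.

```python
import math

# Hypothetical sample data; values are illustrative assumptions only.
sample = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7, 12.5, 12.0]
sigma = 0.3   # assumed known population standard deviation
z = 1.96      # z-value for a 95% confidence level (standard normal)

n = len(sample)
mean = sum(sample) / n
# Margin of error: z times the standard error of the mean (sigma / sqrt(n))
margin = z * sigma / math.sqrt(n)

lower, upper = mean - margin, mean + margin
print(f"95% CI for the mean: ({lower:.3f}, {upper:.3f})")
```

A wider interval (e.g. using z = 2.58 for 99% confidence) would carry a higher confidence level, matching the relationship stated above.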
