
Power Analysis

A primary objective of many studies is to demonstrate a difference between two or more treatments under investigation. Power analysis, also referred to as sample size calculation, plays an important role in ensuring that a sufficient number of subjects are enrolled to answer the question of interest. Specifically, it is important to design a study in such a way that it will have a high probability of showing a difference when a difference truly exists and a low probability of showing a difference when none exists. If the sample size is too small, the study will be underpowered and may lead to discarding a potentially useful treatment. Such underpowered studies often cause great confusion in the literature because they are perceived as negative studies, when in actuality this is not the case. Furthermore, from an investigator's standpoint, cost and effort are devoted to a study that fails to answer the question of interest. Although studies with larger sample sizes than required are not affected by these same concerns, such studies are wasteful of important study resources that might have been directed elsewhere. Correspondingly, sample size calculation should play an important role during the planning stage of any study.

Basic Principles for Sample Size Calculation

Sample size calculation is usually performed based on some statistical criteria controlling the Type I and Type II errors (see Table 1).

Type I Error

The Type I error (α) is the probability of rejecting the null hypothesis when it is true:

α = P(reject H0 | H0 is true).

For example, suppose that there are two groups of observations, where xi and yi (i = 1, …, n) correspond to subjects receiving treatment and control, respectively. Assume that xi and yi are independent and normally distributed with means μ1 and μ2, respectively, and a common variance of σ². Let τ = μ1 − μ2 represent the difference between the two means for the two groups. A test of equality attempts to show that one treatment is more effective than another:

H0: τ = 0 versus H1: τ ≠ 0.

In this setting, a Type I error refers to the probability of incorrectly concluding that the population means differ when there is actually no difference. The most common approach is to specify α = 0.05. Note that this implies that we would expect to reject the null hypothesis approximately 5% of the time when it is in fact true (i.e., when there is no treatment effect). However, this level is often chosen more by convention than by design; any level of Type I error can be selected for a given study.
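The meaning of α can be checked by simulation: if data for both groups are repeatedly generated under the null hypothesis (no true difference) and a two-sided test is applied at α = 0.05, the null should be rejected in roughly 5% of trials. The following sketch illustrates this with a two-sample z-test; the group size, standard deviation, number of trials, and seed are arbitrary choices for illustration, not values from the text:

```python
import random
from statistics import NormalDist, mean

# Illustrative settings (assumptions, not from the text).
random.seed(42)
alpha = 0.05          # Type I error rate
n = 30                # subjects per group
sigma = 1.0           # common standard deviation (assumed known)
z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value, ~1.96

trials = 2000
rejections = 0
for _ in range(trials):
    # Generate both groups under the null hypothesis: mu1 = mu2 = 0.
    x = [random.gauss(0.0, sigma) for _ in range(n)]
    y = [random.gauss(0.0, sigma) for _ in range(n)]
    # Two-sample z statistic for the difference in means.
    z = (mean(x) - mean(y)) / (sigma * (2 / n) ** 0.5)
    if abs(z) > z_crit:
        rejections += 1

type1_rate = rejections / trials  # should be close to alpha = 0.05
```

Because every trial is generated with no true treatment effect, each rejection here is a false positive, and their long-run frequency approximates α.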

Table 1. Possible Outcomes for a Trial

                                          Truth
  Conclusion                        No Treatment Benefit             Treatment Benefit
  Evidence of Treatment Effect      Type I Error (False Positive)    Correct Result (True Positive)
  No Evidence of Treatment Effect   Correct Result (True Negative)   Type II Error (False Negative)

Type II Error

The Type II error (β) is the probability of not rejecting the null hypothesis when it is false:

β = P(fail to reject H0 | H0 is false).

In the previous example, this refers to the probability of not concluding that the population means differ when there is actually a difference. The power of a test is defined as the probability of rejecting the null hypothesis given some assumed effect, so power = 1 − β. Hence, when a true effect is assumed, there is a clear inverse relationship between the power and the probability of a Type II error.
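For the two-sample setting above, the standard normal-approximation formula for the per-group sample size is n = 2σ²(z₁₋α/₂ + z₁₋β)²/τ², which ties together the Type I error, the Type II error, the assumed effect τ, and the variability σ. The sketch below implements this textbook formula (the formula itself is standard, though it is not stated explicitly in this excerpt); the function name and example values are illustrative:

```python
import math
from statistics import NormalDist

def sample_size_two_means(tau, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided test of H0: tau = 0,
    via the normal approximation n = 2*sigma^2*(z_{1-a/2} + z_{1-b})^2 / tau^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # controls the Type I error
    z_beta = NormalDist().inv_cdf(power)           # controls the Type II error (power = 1 - beta)
    n = 2 * sigma ** 2 * (z_alpha + z_beta) ** 2 / tau ** 2
    return math.ceil(n)  # round up to the next whole subject

# Detecting a difference of 0.5 SD with 80% power at alpha = 0.05:
n_per_group = sample_size_two_means(tau=0.5, sigma=1.0)  # 63 per group
```

Note how the formula reflects the trade-offs discussed above: demanding higher power (smaller β), a smaller α, or detecting a smaller difference τ all increase the required sample size.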

...
