
Factors that Affect the Power of a Statistical Procedure

As discussed on the page Power of a Statistical Procedure, the power of a statistical procedure depends on the specific alternative chosen (for a hypothesis test) or a similar specification, such as width of confidence interval (for a confidence interval).

The following factors also influence power:

1. Sample Size

Power depends on sample size: other things being equal, a larger sample size yields higher power, because the sampling distribution of the test statistic becomes more tightly concentrated as n grows.
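To make the dependence on sample size concrete, here is a minimal sketch using only the Python standard library. It computes the power of a one-sided z-test with known standard deviation; the particular numbers (α = 0.05, σ = 10, alternative µ = 1) are illustrative assumptions, not values from this page:

```python
from math import sqrt
from statistics import NormalDist

def z_test_power(mu0, mu1, sigma, n, alpha=0.05):
    """Power of the one-sided z-test of H0: mu = mu0 against the
    specific alternative mu = mu1 (> mu0), with known sigma."""
    z_crit = NormalDist().inv_cdf(1 - alpha)   # rejection cutoff in z units
    shift = (mu1 - mu0) * sqrt(n) / sigma      # standardized mean under the alternative
    return 1 - NormalDist().cdf(z_crit - shift)

# Power grows with n, everything else held fixed:
for n in (25, 50, 100, 200):
    print(n, round(z_test_power(0, 1, 10, n), 3))
```

Running this shows the power climbing steadily as n increases, which is the "larger sample size yields higher power" claim in quantitative form.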

2. Variance

Power also depends on variance: smaller variance yields higher power.

Example: The pictures below each show the sampling distribution of the mean under the null hypothesis µ = 0 (blue, on the left in each picture) together with the sampling distribution under the alternative hypothesis µ = 1 (green, on the right in each picture), both with sample size 25, but for different standard deviations of the underlying distributions. (Different standard deviations might arise from using two different measuring instruments, or from considering two different populations.)
[Figures: sampling distributions under the null and alternative hypotheses, n = 25, with standard deviation 10 (left figure) and standard deviation 5 (right figure)]
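The two pictured cases can be compared numerically. This sketch reuses the known-σ one-sided z-test power formula with the example's numbers (n = 25, null µ = 0, alternative µ = 1, SDs 10 and 5); the one-sided test at α = 0.05 is an assumption added for illustration:

```python
from math import sqrt
from statistics import NormalDist

def power_one_sided_z(mu0, mu1, sigma, n, alpha=0.05):
    """Power of a one-sided z-test (known sigma) against alternative mu1 > mu0."""
    z_crit = NormalDist().inv_cdf(1 - alpha)
    shift = (mu1 - mu0) * sqrt(n) / sigma
    return 1 - NormalDist().cdf(z_crit - shift)

# Same n and hypotheses, different underlying standard deviations:
p_sd10 = power_one_sided_z(0, 1, sigma=10, n=25)
p_sd5 = power_one_sided_z(0, 1, sigma=5, n=25)
print(round(p_sd10, 3), round(p_sd5, 3))
```

The smaller standard deviation gives markedly higher power, because the two sampling distributions overlap less.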

Claremont Graduate University's WISE Project's Statistical Power Applet and the Rice Virtual Lab in Statistics' Robustness Simulation can be used to illustrate this dependence interactively.

Variance can sometimes be reduced by using a better measuring instrument, by restricting to a subpopulation, or by choosing a better experimental design (see below).

3. Experimental Design

Power can sometimes be increased by adopting a different experimental design that has lower error variance. For example, stratified sampling or blocking can often reduce error variance and hence increase power. However, blocking also costs degrees of freedom for estimating the error variance, so if the blocking variable is only weakly related to the response, blocking can actually decrease power.
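The effect of removing a variance component by blocking can be sketched with the same known-σ power calculation. All numbers here are hypothetical: a within-block SD of 4, a between-block SD of 3, a true mean difference of 3, and 20 observations per group. A completely randomized design sees the combined variance; an effective blocked design leaves only the within-block part in the error term:

```python
from math import sqrt
from statistics import NormalDist

def two_sample_power(delta, sigma, n, alpha=0.05):
    """One-sided two-sample z-test power for mean difference delta,
    n observations per group, common known standard deviation sigma."""
    z_crit = NormalDist().inv_cdf(1 - alpha)
    se = sigma * sqrt(2 / n)                  # standard error of the difference in means
    return 1 - NormalDist().cdf(z_crit - delta / se)

# Hypothetical variance components (not from this page):
sigma_within, sigma_between = 4.0, 3.0
sigma_crd = sqrt(sigma_within**2 + sigma_between**2)   # completely randomized design

p_crd = two_sample_power(delta=3.0, sigma=sigma_crd, n=20)
p_blocked = two_sample_power(delta=3.0, sigma=sigma_within, n=20)
print(round(p_crd, 3), round(p_blocked, 3))
```

This ignores the degrees-of-freedom cost mentioned above (a z-test treats σ as known), so it shows the best case for blocking: the larger the between-block component relative to the within-block component, the bigger the power gain.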

For more on designs that may increase power, see:
Last updated June 2012