A common task in applied statistics is determining the sample size necessary to detect a statistically significant result. Given the intended power, we can calculate the required sample size; given an intended sample size, we can calculate the resulting power. Before we get into how this works, we need to define a few things.
Error Types
| | Truth: $H_0$ | Truth: $H_1$ |
|---|---|---|
| Test: Negative (Don't Reject) | True Negative | False Negative ($\beta$) |
| Test: Positive (Reject) | False Positive ($\alpha$) | True Positive (Power $= 1 - \beta$) |
- $\alpha$ = False Positive Rate. This is the chance of rejecting the null hypothesis, given that the null hypothesis is true.
- $\beta$ = False Negative Rate. This is the chance of failing to reject the null hypothesis, given that the alternative hypothesis is true.
- Power is the complement of $\beta$, the false negative rate. The power of the test is the chance of rejecting the null hypothesis, given that the null hypothesis is false (i.e., given that the alternative hypothesis is true).
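To make these definitions concrete, here is a minimal simulation sketch. The parameter values and the one-sided $z$-test are illustrative assumptions, not part of the original notes; the sketch estimates the false positive rate by repeatedly testing under $H_0$, and the power by repeatedly testing under a specific alternative:

```python
import random
from statistics import NormalDist

# Illustrative assumptions (not from the notes): H0: p = 0.5, H1: p = 0.6,
# a sample of n = 100, and a one-sided z-test at alpha = 0.05.
p0, pa, n, alpha = 0.5, 0.6, 100, 0.05
z_crit = NormalDist().inv_cdf(1 - alpha)  # one-sided critical value z_{1-alpha}

def rejects(p_true: float) -> bool:
    """Draw one sample of size n and test H0: p = p0 vs. H1: p > p0."""
    p_hat = sum(random.random() < p_true for _ in range(n)) / n
    z = (p_hat - p0) / (p0 * (1 - p0) / n) ** 0.5
    return z >= z_crit

trials = 20_000
alpha_hat = sum(rejects(p0) for _ in range(trials)) / trials  # estimates alpha
power_hat = sum(rejects(pa) for _ in range(trials)) / trials  # estimates 1 - beta
print(f"false positive rate ~ {alpha_hat:.3f}, power ~ {power_hat:.3f}")
```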
Using these error types, we can estimate the sample size necessary to achieve significant results in support of our alternative hypothesis. The actual calculation for power and sample size differs a little from the case of normally distributed data, because with proportion data the variance is a function of the proportion, rather than being independent of the mean.
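To spell that dependence out, for a Bernoulli variable the variance is determined entirely by the mean:

$$
X \sim \mathrm{Bernoulli}(p) \implies \mathbb{E}[X] = p, \qquad \mathrm{Var}(X) = p(1 - p),
$$

so the variance of the sample proportion $\hat{p} = \bar{X}$ is $p(1-p)/n$, which changes with $p$ and peaks at $p = 0.5$.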
Sample Size Calculation
- Case 1: One-Sided Test
Given $H_0: p = p_0$ and $H_1: p = p_a$, the required sample size is

$$
n = \left( \frac{z_{1-\alpha}\sqrt{p_0(1-p_0)} + z_{1-\beta}\sqrt{p_a(1-p_a)}}{p_a - p_0} \right)^2
$$

In this calculation we're using $p_a > p_0$. We will show later why the direction is not important, merely that we're only considering values on one side of $p_0$. Because $X$ follows a Bernoulli distribution, $\hat{p} = \bar{X}$ is a good estimator for $p$.

Remember that in a one-sided test at level $\alpha$, we're going to reject $H_0$ if $Z = \frac{\hat{p} - p_0}{\sqrt{p_0(1-p_0)/n}} \geq z_{1-\alpha}$.
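Here is a minimal code sketch of this sample-size formula. The function name and the default values for $\alpha$ and power are my own choices, not from the original notes:

```python
from math import ceil, sqrt
from statistics import NormalDist

def one_sided_sample_size(p0: float, pa: float,
                          alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size for a one-sided test of H0: p = p0 vs. H1: p = pa."""
    z_a = NormalDist().inv_cdf(1 - alpha)  # z_{1-alpha}
    z_b = NormalDist().inv_cdf(power)      # z_{1-beta}, since power = 1 - beta
    numerator = z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(pa * (1 - pa))
    return ceil((numerator / (pa - p0)) ** 2)

# e.g., detecting an increase from 50% to 60% at alpha = 0.05 with 80% power
print(one_sided_sample_size(0.5, 0.6))  # 153
```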
- Case 2: Two-Sided Test
In the two-sided test, we reject $H_0$ if $|Z| \geq z_{1-\alpha/2}$. The calculation for the two-sided test follows very similarly to the one-sided test; however, we change the $z_{1-\alpha}$ to $z_{1-\alpha/2}$ to reflect that we're allowing values on both sides of the null hypothesis. The formula for sample size is thus:

$$
n = \left( \frac{z_{1-\alpha/2}\sqrt{p_0(1-p_0)} + z_{1-\beta}\sqrt{p_a(1-p_a)}}{p_a - p_0} \right)^2
$$

All else remains the same.
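And the corresponding sketch for the two-sided case, where the only change from the one-sided version is using $z_{1-\alpha/2}$ in place of $z_{1-\alpha}$ (imports repeated so the block stands alone; names and defaults again illustrative):

```python
from math import ceil, sqrt
from statistics import NormalDist

def two_sided_sample_size(p0: float, pa: float,
                          alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size for a two-sided test of H0: p = p0 vs. H1: p = pa."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # z_{1-alpha/2}: the only change
    z_b = NormalDist().inv_cdf(power)
    numerator = z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(pa * (1 - pa))
    return ceil((numerator / (pa - p0)) ** 2)

print(two_sided_sample_size(0.5, 0.6))  # 194, vs. 153 for the one-sided test
```

The two-sided test requires a larger sample for the same power because $\alpha$ is split across both tails, pushing the critical value further out.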
Additional Links