# Comparison of two means (independent)

We begin with a basic formula for sample size. Consider two groups, a continuous measurement endpoint, a two-sided alternative, normally distributed observations with equal variances, and equal sample sizes. The basic formula is:

N = 16 / Δ², where Δ = (μ₀ − μ₁) / σ = δ / σ

Note: This is the sample size for EACH group.
Δ can be thought of as the standardized difference between means, measured in units of the standard deviation. The magnitude of clinical difference of interest and the standard deviation are combined into a single quantity. And this quantity has a famous name – it is known as the Effect Size (ES). As a guideline, Jacob Cohen classified effect sizes as small, moderate, and large (0.2, 0.5, and 0.8 for two-group comparisons); you can use these as a starting point.
In the one-sample case, the numerator is 8, instead of 16; that is, N = 8 / Δ². This situation occurs when a single sample is being compared with an external population value (i.e. a target). Note that the sample size for a one-sample case is one-half the sample size for each sample in a two-sample case. But since there are two samples, the total in the two-sample case will therefore be four times that of the one-sample case.
Example: If the standardized treatment difference Δ is expected to be 0.5, then 16/(0.5)² = 64 subjects per treatment will be needed. Hence a total of 128 subjects are required. If the study only requires one group, then 8/(0.5)² = 32 subjects will be needed; this is one-fourth of the total number in the two-sample scenario.
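The rule of thumb above is easy to put into a few lines of code. A minimal sketch in Python (the function name and the choice to round up to a whole subject are mine, not part of the rule itself):

```python
import math

def n_per_group(delta, one_sample=False):
    """Per-group sample size from the rule of thumb N = 16 / Delta^2,
    or N = 8 / Delta^2 for a one-sample comparison against a target."""
    multiplier = 8 if one_sample else 16
    return math.ceil(multiplier / delta ** 2)

# Worked example from the text: standardized difference of 0.5
print(n_per_group(0.5))                    # 64 per group (128 total)
print(n_per_group(0.5, one_sample=True))   # 32
```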
This illustrates the rule that the two-sample scenario requires four times as many observations as the one-sample scenario. The reason is that in the two-sample situation two means must be estimated, which doubles the variance of the estimated difference; in addition, two groups must be recruited, which doubles the count again.
Note that the two key ingredients are the difference to be detected (μ₀ − μ₁) and the inherent variability of the observations, indicated by σ.
Note also that the equation can be inverted to allow you to calculate the detectable difference for a given sample size N.
Δ = 4 / √N, or equivalently (μ₀ − μ₁) = 4σ / √N
For a one-sample case, replace 4 by 2.
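The inverted formula is just as easy to compute. A short sketch in Python (the function name is mine):

```python
import math

def detectable_delta(n, one_sample=False):
    """Smallest detectable standardized difference for n subjects
    (per group), from Delta = 4 / sqrt(N), or 2 / sqrt(N) for the
    one-sample case."""
    k = 2 if one_sample else 4
    return k / math.sqrt(n)

# With 64 subjects per group, the detectable standardized difference
# is 4 / sqrt(64) = 0.5, matching the earlier example.
print(detectable_delta(64))   # 0.5
```

Multiply the result by σ to convert the standardized difference back to the original measurement units.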
This rule is very robust and useful. Many sample size questions can be formulated so that this rule can be applied.
“Where does the multiplier of 16 come from?” I hear you asking.
The full formula is the following:
N = 2(zα + zβ)² / (δ/σ)²
For a two-sided α = .05, zα = 1.96 (the upper α/2 quantile); for β = .20, zβ = 0.84. Hence 2(zα + zβ)² = 2(1.96 + 0.84)² = 15.68 ≈ 16.
What if you want other values of α and β? Here is a small table of the multipliers for various values of β for a two-sided α of .05.

| Power (1 − β) | One-sample multiplier | Two-sample multiplier |
|---------------|-----------------------|-----------------------|
| 0.50          | 4                     | 8                     |
| 0.80          | 8                     | 16                    |
| 0.90          | 11                    | 21                    |
| 0.95          | 13                    | 26                    |
| 0.975         | 16                    | 31                    |
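These multipliers can be checked directly from the full formula. A quick sketch using Python's standard-library normal quantile function (`statistics.NormalDist`, available since Python 3.8); the table entries are these values rounded to whole numbers:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf           # standard normal quantile function
z_alpha = z(1 - 0.05 / 2)          # 1.96 for a two-sided alpha of .05

print("Power   One sample  Two sample")
for power in (0.50, 0.80, 0.90, 0.95, 0.975):
    z_beta = z(power)              # z for the desired power (1 - beta)
    one = (z_alpha + z_beta) ** 2  # one-sample multiplier
    two = 2 * one                  # two-sample multiplier
    print(f"{power:5.3f}  {one:10.2f}  {two:10.2f}")
```

For example, at 80% power the two-sample value is about 15.7, which rounds to the familiar 16.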

What happens if the required sample size is more than you can manage in a year? Double the treatment effect. If the sample size is too small to justify enough funding, halve the treatment effect. Usually, though, the difference should be the smallest one you would consider clinically important: even if a smaller difference were statistically significant, you wouldn't change your practice because of it.

Don't fool around with the alpha level. But you can pick beta levels of .05, .10, or .20 (or even .50, if you are really desperate). The usual choices are an alpha of .05 and a beta of .20.