Two-sample t-test calculator
This two-sample t-test calculator helps users compare the means of two independent groups to determine whether they are significantly different from each other.
The calculator reports the t-statistic, one-tailed and two-tailed p-values (for Ha: diff > 0, Ha: diff < 0, and Ha: diff ≠ 0), the degrees of freedom, the standard error (SE), and a confidence interval for the difference in means.
What is a two-sample t-test?
A two-sample t-test (also called an independent t-test) is a statistical test used to compare the means of two independent groups to determine whether there is a significant difference between them. It answers the question: “Are the two sample means significantly different, or is the observed difference due to random variation?”
When to use a two-sample t-test?
Use a two-sample t-test when:
- You have two independent groups (e.g., men vs. women, treatment vs. control group).
- The data in both groups are normally distributed (or the sample sizes are large enough for the Central Limit Theorem to apply).
- The data are continuous (e.g., test scores, heights, weights).
- The two groups may have equal or unequal variances (use the pooled t-test if the variances are similar, or Welch’s t-test if they are not).
Formula for the two-sample t-test
The test statistic (t-value) is calculated as:
t = \frac{(\bar{x}_1 - \bar{x}_2)}{SE}
where:
- \bar{x}_1, \bar{x}_2 are the sample means
- SE is the standard error of the difference between the two means
The standard error (SE) depends on whether we assume equal or unequal variances.
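For concreteness, here is a minimal Python sketch of this formula; the function name t_statistic is only illustrative and is not the calculator's internal code.

```python
def t_statistic(xbar1, xbar2, se):
    """t-value: the difference in sample means divided by its standard error."""
    return (xbar1 - xbar2) / se
```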
Equal variance (Pooled t-test)
If both groups have similar variances, we use the pooled standard deviation:
SE = \sqrt{s_p^2 \left(\frac{1}{n_1} + \frac{1}{n_2}\right)}
where the pooled variance (s_p^2) is:
s_p^2 = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}
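A short Python sketch of the pooled calculation, assuming the inputs are the two sample standard deviations and sample sizes; the helper names are illustrative only.

```python
import math

def pooled_variance(s1, n1, s2, n2):
    """Pooled variance s_p^2 combining the two sample variances."""
    return ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)

def pooled_se(s1, n1, s2, n2):
    """Standard error of the mean difference under the equal-variance assumption."""
    return math.sqrt(pooled_variance(s1, n1, s2, n2) * (1 / n1 + 1 / n2))
```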
Unequal variance (Welch’s t-test)
If the two groups have different variances, we use:
SE = \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}
Degrees of freedom (df)
The degrees of freedom (df) determine which t-distribution to use.
Equal variances:
df = n_1 + n_2 - 2
Unequal variances (Welch’s Approximation):
df = \frac{(s_1^2/n_1 + s_2^2/n_2)^2}{\frac{(s_1^2/n_1)^2}{n_1 - 1} + \frac{(s_2^2/n_2)^2}{n_2 - 1}}
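The unequal-variance case can be sketched the same way; again, the function names below are made up for illustration.

```python
import math

def welch_se(s1, n1, s2, n2):
    """Standard error of the mean difference without assuming equal variances."""
    return math.sqrt(s1**2 / n1 + s2**2 / n2)

def welch_df(s1, n1, s2, n2):
    """Welch-Satterthwaite approximation for the degrees of freedom."""
    a, b = s1**2 / n1, s2**2 / n2
    return (a + b) ** 2 / (a**2 / (n1 - 1) + b**2 / (n2 - 1))
```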
Interpreting the results
- p-value < α (commonly 0.05) → Statistically significant difference (reject the null hypothesis)
- p-value ≥ α → No significant difference (fail to reject the null hypothesis)
The test also provides a confidence interval (CI) for the difference between the means:
CI = (\bar{x}_1 - \bar{x}_2) \pm t_{\alpha/2, df} \times SE
If the confidence interval contains 0, the difference is not statistically significant.
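As a rough sketch (not the calculator's actual code), the p-value and confidence interval can be obtained from the t-statistic with scipy.stats.t; the helper names here are hypothetical.

```python
from scipy.stats import t as t_dist

def two_tailed_p(t_stat, df):
    """Two-tailed p-value: probability under H0 of a |t| at least this large."""
    return 2 * t_dist.sf(abs(t_stat), df)

def confidence_interval(mean_diff, se, df, conf=0.95):
    """Confidence interval for the difference in means: diff +/- t_crit * SE."""
    t_crit = t_dist.ppf(1 - (1 - conf) / 2, df)
    return mean_diff - t_crit * se, mean_diff + t_crit * se
```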
Example: Comparing two groups
Scenario: Do Two Schools Have Different Average Test Scores?
A researcher wants to compare the math scores of students from two different schools.
School A:
n_1 = 30, \bar{x}_1 = 78, s_1 = 10
School B:
n_2 = 25, \bar{x}_2 = 72, s_2 = 12
- Confidence level: 95%
- Assume unequal variances and use Welch’s Approximation
Results:
- t-statistic: 1.9897
- Degrees of freedom: ≈ 46.8 (Welch’s df)
- Standard error: 3.0155
- Confidence interval: approximately (-0.07, 12.07)
- p-value (two-tailed): ≈ 0.052
Conclusion: Since the two-tailed p-value (≈ 0.052) is greater than 0.05, we fail to reject the null hypothesis at the 5% significance level. The data do not provide strong evidence that the two schools’ mean math scores differ, which is consistent with the 95% confidence interval containing 0.
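To sanity-check results like these, the same summary statistics can be passed to scipy.stats.ttest_ind_from_stats with equal_var=False (Welch’s test). This is a verification sketch, not the calculator’s implementation.

```python
from scipy.stats import ttest_ind_from_stats

# Summary statistics from the example: School A vs. School B
res = ttest_ind_from_stats(mean1=78, std1=10, nobs1=30,
                           mean2=72, std2=12, nobs2=25,
                           equal_var=False)  # Welch's t-test
print(res.statistic, res.pvalue)  # roughly t ≈ 1.99, p ≈ 0.052 (two-tailed)
```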
| Feature | What it does |
| --- | --- |
| Compares two independent groups | Tests whether their means are significantly different |
| Calculates the t-statistic | Measures how large the difference in means is relative to its standard error |
| Finds the p-value | Determines statistical significance |
| Handles equal and unequal variances | Uses either the pooled or Welch’s t-test |
| Computes a confidence interval | Estimates a range for the true difference in means |
| Checks for input errors | Prevents invalid entries |