AB Test Calculator
Calculate statistical significance for two-proportion AB testing with confidence intervals and Z-score analysis
Example AB Test
Website Button Test
Group 1 (Red Button):
• Sample size: 1,000 visitors
• Clicks: 85 conversions
• Conversion rate: 8.5%
Group 2 (Blue Button):
• Sample size: 1,000 visitors
• Clicks: 112 conversions
• Conversion rate: 11.2%
Result
Z-score: -2.03
P-value: 4.28% (two-tailed)
Significant at 95% confidence (|Z| > 1.96)
Blue button performs better!
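As a cross-check, here is a minimal sketch of this calculation in Python (an illustrative reproduction using only the standard library, not the calculator's actual implementation):

```python
from math import sqrt, erfc

# Example data: Group 1 (red button) vs. Group 2 (blue button)
t1, n1 = 85, 1000     # conversions and visitors, group 1
t2, n2 = 112, 1000    # conversions and visitors, group 2

p1, p2 = t1 / n1, t2 / n2        # conversion rates: 8.5% and 11.2%
p_bar = (t1 + t2) / (n1 + n2)    # pooled proportion p̄ = 0.0985

# Pooled two-proportion Z-score
se = sqrt(p_bar * (1 - p_bar) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se               # ≈ -2.03

# Two-tailed p-value from the standard normal distribution
p_value = erfc(abs(z) / sqrt(2)) # ≈ 0.043, below 0.05

print(f"z = {z:.2f}, p = {p_value:.4f}")
```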
AB Test Requirements
• Sample size: at least 30 samples per group, so the normal approximation holds
• Random sampling: samples must be representative of the population; avoid selection bias
• Similar sizes: keep the two groups roughly balanced for reliable results
Confidence Levels
• 90% confidence: critical Z = 1.645
• 95% confidence: critical Z = 1.960
• 99% confidence: critical Z = 2.576
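These critical values come from the standard normal inverse CDF; a short sketch (assuming a two-tailed test, using Python's standard library):

```python
from statistics import NormalDist

# Two-tailed critical value: z = Φ⁻¹(1 - α/2) for each confidence level
for confidence in (0.90, 0.95, 0.99):
    alpha = 1 - confidence
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    print(f"{confidence:.0%} confidence -> critical Z ≈ {z_crit:.3f}")
```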
Understanding AB Testing
What is an AB Test?
An AB test is a statistical method for comparing two versions of something (a page, a feature, a treatment) to see which performs better. It uses a two-proportion Z-test to assess whether the observed difference between the groups is statistically significant or simply due to random chance.
When to Use AB Testing
- Website design and user interface testing
- Marketing campaign effectiveness
- Product feature comparison
- Medical treatment efficacy
Z-Score Formula
Z = (p₁ - p₂) / √[p̄(1-p̄)(1/n₁ + 1/n₂)]
where p̄ = (t₁ + t₂)/(n₁ + n₂)
- p₁, p₂: Sample proportions for groups 1 and 2
- p̄: Overall sample proportion
- n₁, n₂: Sample sizes for groups 1 and 2
- t₁, t₂: Number of positive results in each group
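A self-contained implementation of this formula might look like the sketch below (the function name and return format are illustrative assumptions, not the calculator's actual code):

```python
from math import sqrt, erfc

def two_proportion_z_test(t1: int, n1: int, t2: int, n2: int) -> tuple[float, float]:
    """Pooled two-proportion Z-test; returns (Z-score, two-tailed p-value)."""
    p1, p2 = t1 / n1, t2 / n2          # sample proportions
    p_bar = (t1 + t2) / (n1 + n2)      # overall (pooled) proportion p̄
    se = sqrt(p_bar * (1 - p_bar) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))   # two-tailed: 2 · P(Z > |z|)
    return z, p_value
```

Calling two_proportion_z_test(85, 1000, 112, 1000) reproduces the button example above (Z ≈ -2.03, p ≈ 0.043).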
Statistical Significance
Statistical significance indicates that the observed difference between the groups is unlikely to have occurred by chance alone. We reject the null hypothesis when |Z| > Zα/2, where Zα/2 is the critical Z value for the chosen confidence level (1.96 at 95% confidence).
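A short sketch of that decision rule, reusing the Z-score from the button example (the variable names are illustrative):

```python
from statistics import NormalDist

z = -2.03                                      # Z-score from the button example
alpha = 0.05                                   # for a 95% confidence level
z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # ≈ 1.96

if abs(z) > z_crit:
    print("Reject the null hypothesis: the difference is statistically significant.")
else:
    print("Fail to reject the null hypothesis: no significant difference detected.")
```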