
Free A/B Split Test Sample Size Calculator

Use this A/B Split Test Sample Size Calculator!

This easy-to-use tool helps you work out how many visitors you need per variant to compare two versions of your webpage or app.

Here's what you'll need to enter:

Baseline Conversion Rate: Your current conversion rate (e.g., the percentage of visitors who make a purchase).

Minimum Detectable Effect: The smallest change in conversion rate you want to be able to detect, relative to the baseline.

Statistical Significance: The confidence level you want for your test results (usually 95%, which corresponds to a 5% significance level).

Statistical Power: The probability of detecting a true effect (typically set at 80%).


By entering these values, you'll get the recommended sample size for each version of your test, ensuring your results are statistically reliable and meaningful.
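If you're curious how a number like this is arrived at, the sketch below shows one common approach: a two-sided, two-proportion z-test approximation, written in Python. It is an illustrative snippet rather than the calculator's exact code, and other tools may use slightly different formulas or rounding.

```python
from scipy.stats import norm  # normal quantiles for z_alpha and z_beta

def sample_size_per_variant(baseline_rate, mde_relative, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant for a two-sided two-proportion z-test.

    baseline_rate : control conversion rate, e.g. 0.05 for 5%
    mde_relative  : minimum detectable effect relative to baseline, e.g. 0.10 for a 10% lift
    alpha         : significance level (1 - confidence), e.g. 0.05 for 95% confidence
    power         : desired statistical power (1 - beta), e.g. 0.80
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde_relative)   # conversion rate you want to be able to detect
    z_alpha = norm.ppf(1 - alpha / 2)         # two-sided critical value, ~1.96 for alpha = 0.05
    z_beta = norm.ppf(power)                  # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)  # sum of the two Bernoulli variances
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return int(n) + 1                         # round up to whole visitors

# Example: 5% baseline, 10% relative MDE, 95% confidence, 80% power
print(sample_size_per_variant(0.05, 0.10))    # roughly 31,000 visitors per variant
```

With a 5% baseline and a 10% relative MDE at the default 95% confidence and 80% power, this works out to roughly 31,000 visitors in each variant.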


Understanding Type I and Type II Errors

When conducting statistical tests, such as A/B tests, it’s crucial to understand the concepts of Type I and Type II errors. These errors are related to the conclusions we draw from our data.

Type I Error (False Positive)

Definition: A Type I error occurs when you reject the null hypothesis when it is actually true. In other words, you detect an effect that is not actually there.

Example: Imagine you run an A/B test to see if a new webpage design (Version B) increases conversions compared to the current design (Version A). A Type I error would mean concluding that Version B is better when, in reality, it isn’t.

Statistical Significance: This error is controlled by the significance level (α), often set at 0.05 (5%). If your significance level is 5%, there is a 5% chance of making a Type I error.
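To make that 5% concrete, here is a small, purely illustrative Python simulation (not part of the calculator): it runs many A/A tests in which the null hypothesis is true by construction, and roughly 5% of them still come out "significant" at α = 0.05.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
alpha, p_true, n = 0.05, 0.05, 10_000        # both variants truly convert at 5%

trials, false_positives = 2_000, 0
for _ in range(trials):
    a = rng.binomial(n, p_true)              # conversions in variant A
    b = rng.binomial(n, p_true)              # conversions in variant B (same true rate)
    p_pool = (a + b) / (2 * n)               # pooled conversion rate under the null
    se = np.sqrt(p_pool * (1 - p_pool) * 2 / n)
    z = (b / n - a / n) / se                 # two-proportion z statistic
    p_value = 2 * (1 - norm.cdf(abs(z)))     # two-sided p-value
    false_positives += p_value < alpha

print(false_positives / trials)              # hovers around 0.05: the Type I error rate
```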

Type II Error (False Negative)


Definition: A Type II error occurs when you fail to reject the null hypothesis when it is actually false. This means you miss detecting an effect that is actually present.

Example: Continuing with the A/B test example, a Type II error would mean concluding that there is no difference between Version A and Version B when, in fact, Version B does increase conversions.

Statistical Power: This error is controlled by the power of the test (1 - β), often set at 0.80 (80%). If your test has 80% power, there is a 20% chance of making a Type II error.
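As an illustration of how power depends on sample size, the sketch below uses statsmodels (assuming it is installed) to compute the power of a two-proportion test at a few sample sizes for a 5% baseline and a 10% relative lift. It is an example calculation, not the calculator's own method; power climbs toward 0.80 as the per-variant sample size approaches the value recommended above.

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

# Cohen's h effect size for a 5% -> 5.5% change (a 10% relative lift)
effect = proportion_effectsize(0.055, 0.05)
analysis = NormalIndPower()

for n_per_variant in (5_000, 15_000, 31_000):
    power = analysis.power(effect_size=effect, nobs1=n_per_variant,
                           alpha=0.05, ratio=1.0, alternative='two-sided')
    # beta = 1 - power is the chance of a Type II error at this sample size
    print(f"{n_per_variant:>6} visitors/variant -> power {power:.2f}, Type II risk {1 - power:.2f}")
```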

Key Points to Remember

Type I Error: False positive; detecting a difference when none exists. Controlled by the significance level (α).

Type II Error: False negative; failing to detect a difference when one exists. Controlled by the power of the test (1 - β).

Split Testing Common Questions

Don't see the answer you're looking for? Get in touch.

What is Baseline Conversion Rate?

This is the expected conversion rate of your control group without any changes.

What is Minimum Detectable Effect?

The smallest change in conversion rate that you want to detect, expressed as a percentage increase or decrease from the baseline. For example, with a 5% baseline, a 10% relative MDE means you want to be able to detect a shift up to at least 5.5% (or down to 4.5%).

Why is Statistical Significance important?

It helps you determine how likely it is that your results are due to the changes you made rather than to random chance.

What does Statistical Power tell us?

It indicates the probability that your test will detect a true effect when there is one.