Jeff Rajeck says, “In my previous posts about A/B testing, I made the case that you need to consider the math behind it, or risk invalid, or even wrong, results.

My first suggestion was to use sample sizing, but that requires a lot of test traffic.

Here’s how to do something similar without nearly as much.

With just a little analysis, you can check the validity of your A/B test before you even conduct it. One way to achieve this is by sizing the test beforehand.

You tell an online tool your typical conversion rate and the minimum detectable effect you’re looking for, and the tool tells you how large a sample you need for each variant”.
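For context, the calculation such tools perform can be reproduced with a standard power analysis for comparing two proportions. The sketch below uses statsmodels; the 3% baseline conversion rate, one-percentage-point minimum detectable effect, 80% power, and 5% significance level are all illustrative assumptions, not figures from the article.

```python
# A minimal sketch of the sample-size calculation described above,
# using a two-proportion power analysis. All numbers are assumptions
# for illustration, not figures from the article.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.03  # typical conversion rate (assumed)
mde = 0.01       # minimum detectable effect: +1 percentage point (assumed)

# Cohen's h effect size for the two conversion rates
effect = proportion_effectsize(baseline + mde, baseline)

# Visitors needed per variant for 80% power at a 5% significance level
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"Visitors needed per variant: {n_per_variant:.0f}")
```

With these assumed numbers the answer comes out to roughly 5,300 visitors per variant, which illustrates why the sample-sizing route demands so much traffic.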

Using data science with A/B tests: chi-squared testing

Econsultancy blog
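The chi-squared check named in the article’s title can be sketched with SciPy once results are in; a minimal example, assuming a made-up 2x2 table of converted versus not-converted counts for two variants:

```python
# A minimal sketch of a chi-squared test on A/B results. The counts
# are made up for illustration; rows are variants, columns are
# conversions vs. non-conversions.
from scipy.stats import chi2_contingency

observed = [
    [180, 5820],  # variant A: conversions, non-conversions (assumed)
    [210, 5790],  # variant B: conversions, non-conversions (assumed)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
# A p-value below 0.05 suggests the difference in conversion rates
# is unlikely to be due to chance alone.
```

Note that chi2_contingency applies Yates’ continuity correction by default for 2x2 tables, which makes the test slightly conservative on small samples.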
