A/B testing is a method of comparing two versions of a webpage against each other to find out which one performs better. The process uses statistical analysis to determine which variation performs better for a given conversion goal.

Econsultancy columnist Frederic Kalinke has shared five things marketers should know about the A/B testing process.

Kalinke says, “A/B testing can be a rabbit hole from which some never re-emerge (we call these people statisticians, or even, if things get really bad, data scientists).

This article provides some pointers to help you navigate the ins and outs of A/B testing, and perhaps, escape unscathed.

Plan

When A/B testing, the plan is everything. Once you set the plan in motion, you must stick to the plan. You might be tempted to deviate, analyse results early, or tweak the experiment and move the goalposts. That way lies madness.

As the plan is everything, it needs to be clear. There is an art to a well-defined hypothesis, and there is an imperative for a single, solid metric on which the hypothesis hinges. Generally speaking, it makes most sense for marketers to use revenue; if you can’t use revenue, use the next closest thing, such as sign-ups.

Depending on how you are testing, you will also need to consider setting the sample size in advance (use this handy tool)”.
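To make the sample-size point concrete, here is a minimal sketch of the standard two-proportion sample-size calculation that tools like the one Kalinke mentions typically implement. The function name and parameters are my own for illustration; this is not the specific tool linked in the original column, and real calculators may use slightly different corrections.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p_baseline, p_expected, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant for a two-proportion z-test.

    p_baseline: current conversion rate (e.g. 0.10 for 10%)
    p_expected: conversion rate you hope the variant achieves (e.g. 0.12)
    alpha:      significance level for a two-sided test
    power:      probability of detecting the effect if it is real
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    variance = p_baseline * (1 - p_baseline) + p_expected * (1 - p_expected)
    effect = p_expected - p_baseline
    return ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Example: detecting a lift from a 10% to a 12% conversion rate
# at 95% confidence and 80% power needs a few thousand visitors
# per variant, which is why the sample size must be fixed up front.
print(sample_size_per_variant(0.10, 0.12))
```

Note how quickly the required sample grows as the expected lift shrinks; this is exactly why deciding the sample size in advance, rather than stopping when the numbers look good, matters.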

Source: “How to stay safe when A/B testing”, Econsultancy
