A/B Testing sounds like the neatest approach to marketing that has ever come down the pike. But for most of us, there is a problem. The idea behind A/B Testing is simple: you make a Version B of your current website with some change. Maybe you try a different font or color. Then you track the conversion rate of Version A versus Version B for a period of time. Whichever version wins with the higher conversion rate becomes the new Version A, and you move on to another Version B to try out. Very simple and neat.
There is a fly in the ointment for most of us: we don't have enough traffic to be confident that the result was not due to random chance. Maybe people really did like that new font better, or maybe it was a one-off result that will never repeat itself. How will you ever know?
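The standard way to answer that question is a significance test on the two conversion rates. Here is a minimal sketch, using a two-proportion z-test with entirely hypothetical traffic numbers, just to show how easily a small-sample "win" can turn out to be noise:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally well
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical week of traffic: 12 conversions out of 200 visitors on A,
# 18 out of 210 on B. B "wins", but the p-value is about 0.32, far above
# the usual 0.05 cutoff, so the difference could easily be random chance.
print(two_proportion_z_test(12, 200, 18, 210))
```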
So, how many hits does it take before you can realistically rely on the results? There are numerous calculators out there that will produce the sample size you need before you can start to feel confident. Whenever I run them, the answer is always in the thousands, which far exceeds my weekly visitor count.
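Those calculators are all doing roughly the same arithmetic: a power calculation for comparing two proportions. A rough sketch, with hypothetical baseline and lift numbers, shows why the answer lands in the thousands:

```python
from statistics import NormalDist
import math

def sample_size_per_variant(p_base, p_new, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect a change in
    conversion rate from p_base to p_new with a two-sided test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for significance
    z_beta = NormalDist().inv_cdf(power)            # critical value for power
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p_new - p_base) ** 2)

# Hypothetical case: detecting a lift from a 3% to a 3.6% conversion rate
# at 95% confidence and 80% power takes roughly 14,000 visitors per variant.
print(sample_size_per_variant(0.03, 0.036))
```

Smaller baseline rates or smaller expected lifts push that number even higher, which is exactly why low-traffic sites struggle to get a trustworthy answer.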
I very much like the idea of A/B Testing, but unless you can hit the sample sizes necessary to reach a reasonable level of confidence, don't bother with it. Producing random results disguised as scientific methodology is not likely to be helpful.