10 Ways to Make A/B Testing Go Wrong Without Any Effort

We already talked about the necessity and fun of A/B testing. Given how many great split testing tools are now available, it seems businesses shouldn't hesitate to launch a couple of split tests every quarter or so. And many actually don't hesitate.

The flip side is that this type of testing, while possible to run without QA specialists, is not only about fun: it takes knowing a few tricks to get realistic results. Many app and website makers don't suspect these tricks exist, so they end up doing the following 10 things that make their split tests fail:

  1. Being lazy to test

    That's the worst thing people do (or rather don't do) about optimization. Success is about hard work and learning to do something better. Since split testing gives you exactly that opportunity, run such tests often, whenever you come up with an idea worth testing.

  2. Having no idea what the test is for

    Every A/B test needs a hypothesis you're going to check. If the test has no specific goal and you are testing random ideas, the results will tell you nothing useful and you'll simply waste money on a pointless experiment.

  3. Not bothering to repeat a test

    Iterative tests are a must if you want to succeed. When a test fails, don't rush to test another website page. Instead, learn from the results, refine your hypothesis, then run the test again, learn more, and run it again. Split tests exist to give you something to learn from and to guide you toward the right improvement, so they are not supposed to succeed on the first run.

  4. Ignoring statistical significance

    When you ignore a parameter as important to your test's success as statistical significance, you are bound to get misleading results. To raise it, use a larger sample: 100 users can't give you a reliable idea of which of two variations is better. Whenever you run a test, aim for a minimum sample of about 1,000 users per variation to be 90% or more sure of the results; the significance sketch after this list shows one way to check.

  5. Not caring about temporal fluctuations

    User activity depends on many factors, so your test results may vary greatly over the week, as well as around national holidays. Foresee these fluctuations and run every test for at least a week to be confident in the results (a quick weekday check appears after this list).

  6. Lacking visitors

    If your site has fewer than 100 visitors and even fewer conversions, you're obviously going to wait months to reach fair statistical significance (the first sketch after this list shows the math). Instead of wasting time and money on research, just skip it, make the changes, and see what happens. Once they work and you have enough traffic, you can start considering A/B variations with more practical value.

  7. Wanting it all at once

    When app and website makers want to learn the best way of building a product all at once, they are tempted to run multivariate testing, comparing multiple variations of a product at a time. I wouldn't recommend doing this; it's something very impatient people usually do (or people who don't care about the accuracy of the results). The thing is, the more variations you compare, the less statistical significance each comparison gets (the correction sketch after this list shows the effect). It's better to check simple, specific hypotheses gradually via A/B tests.

  8. Aiming big

    A 5% gain doesn't please you much, does it? Still, you should be more realistic and appreciate even gains like these. Everybody expects at least a 20% rise in conversions, but under normal conditions a big win usually means your site hasn't been doing very well to begin with; as you attract more traffic, your A/B test gains won't look so impressive.

  9. Not learning about possible threats to test results

    Even under perfect conditions and after careful preparation, there may be unexpected factors you don't pay attention to that can affect your test results and give you the wrong idea, so you should at least be aware of them. Such threats include failures of your testing tools, outside events influencing users' attitude toward the tested variations, bugs in the variations, and so on.

  10. Failing to do segmentation

    You never know the whole truth about your split test's success until you segment the results and filter out the noise. Google Analytics handles this just right; a minimal sketch of segmenting results yourself follows this list.
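
To make the warnings about statistical significance (item 4) and low traffic (item 6) concrete, here is a minimal sketch of checking an A/B test with a two-proportion z-test, using only Python's standard library. Every traffic figure in it is hypothetical.

```python
import math

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Return the z-score and two-sided p-value for two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                   # pooled rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical outcome with 1,000 users per variation, as suggested above.
z, p = z_test_two_proportions(conv_a=50, n_a=1000, conv_b=68, n_b=1000)
print(f"z = {z:.2f}, p = {p:.3f}")   # p < 0.10 clears a 90% confidence bar

# Item 6 in numbers: with little traffic, filling both samples takes ages.
daily_visitors = 80                  # hypothetical small site
print(f"{2 * 1000 / daily_visitors:.0f} days just to gather the sample")
```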
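
Item 5 is easy to check on your own data: group conversions by weekday and look at the spread. The file name and column names below are assumptions about a hypothetical per-visitor export, not any particular tool's format.

```python
import pandas as pd

# Hypothetical per-visitor log with a timestamp and a converted flag.
visits = pd.read_csv("visits.csv", parse_dates=["timestamp"])
by_day = visits.groupby(visits["timestamp"].dt.day_name())["converted"].mean()
print(by_day)   # large weekday swings argue for running tests a full week
```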
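
Item 7 in numbers: one common way to keep the overall false-positive rate in check when comparing several variations is a Bonferroni correction (an assumption here; your testing tool may use another method). It shows plainly how each extra variation tightens the per-comparison threshold.

```python
# How multiple variations dilute significance, via a Bonferroni correction.
alpha = 0.10                  # overall false-positive budget (90% confidence)
for k in (1, 3, 7):           # number of variations compared to the control
    print(f"{k} variation(s): each comparison must reach p < {alpha / k:.3f}")
```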
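
Finally, a minimal segmentation sketch for item 10, assuming a hypothetical CSV export with variation, device, and converted columns; the file name and schema are illustrative, not a real tool's format.

```python
import pandas as pd

# Hypothetical export: one row per visitor with variation, device, converted.
df = pd.read_csv("ab_test_results.csv")

# Conversion rate per variation, broken down by device segment.
rates = (df.groupby(["device", "variation"])["converted"]
           .agg(visitors="count", conversions="sum"))
rates["rate"] = rates["conversions"] / rates["visitors"]
print(rates)   # a variation that loses overall may still win on mobile
```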

As you see, making something go wrong takes no effort, while doing something right certainly takes some. Learn from these 10 mistakes and put in the work on your way to success!