4 Common A/B Testing Mistakes (And How to Fix Them)

Carl Hargreaves

When you're creating content for the web, it's easy to make assumptions about what your audience might respond to -- but that's not necessarily the right mentality.

Enter A/B testing: one of the easiest and most popular forms of conversion rate optimization (CRO) testing available to marketers. While many businesses have seen the value in using this type of validation to improve their decision-making, others have tried it only to be left with inconclusive results -- which is frustrating, to say the least.

The trouble is, small mistakes made during A/B testing can lead to round after round of incremental optimizations that fail to produce meaningful results. To combat that, I've outlined some of the most common A/B testing mistakes (as well as their remedies) below. These tips are designed to keep your testing plans on track so you can start converting more visitors into customers. Let's dive in.

4 Common A/B Testing Mistakes (And How to Fix Them)

Problem #1: Your testing tool is faulty.

Popularity is a double-edged sword -- it’s true for high schoolers and it’s true for A/B testing software.

The ubiquity of A/B testing has led to a wide range of awesome, low-cost software for users to choose from, but it's not all of equal quality. Different tools offer different functionality, of course, but there are also subtler differences between them. And if you're unaware of these differences, your A/B tests may be in trouble before you even get started.

For example, did you know that some testing software can significantly slow down your site? This decrease in speed can have a harmful impact on your site's SEO and overall conversion rates.

In fact, on average, just one second of additional load time results in an 11% decrease in page views and a 7% decline in conversions. This creates a nightmare scenario where the websites you were hoping to improve through A/B testing are actually hindered by your efforts.

It gets worse: Your choice of A/B testing software can actually affect the results of your tests, too. Entrepreneur and influencer Neil Patel found that the A/B testing software he was using reported significant differences between variations, but when he implemented the winning page, his conversions didn't change. The problem turned out to be a faulty testing tool.

So with all these hidden pitfalls, what can you do to make sure your A/B testing software is working fine?

The Fix: Run an A/A test.

Prior to running an A/B test, you should run an A/A test with your software to ensure it's working correctly and isn't impacting site speed or performance.

For the uninitiated, an A/A test is very similar to an A/B test. The difference is that in an A/A test, both groups of users are shown the exact same page. That's right: you literally test a page against itself. While this may seem silly at first, running an A/A test lets you identify any distortionary effects caused by your testing software.

An A/A test is the one time you want your results to be boring. If you see conversion rates drop as soon as you start testing, then your tool is probably slowing down your site. If you see dramatic differences between the results for the two pages, then your software is likely faulty.
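If you want to go beyond eyeballing the dashboard, you can run the two buckets' numbers through a quick significance check of your own. Here's a minimal sketch in Python (standard library only) using a two-proportion z-test; the visitor and conversion counts are hypothetical placeholders, so swap in whatever your tool reports for each bucket.

```python
# A minimal sketch of a two-proportion z-test, assuming your tool reports
# raw visitor and conversion counts for each bucket. The numbers below are
# hypothetical placeholders -- substitute your own.
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, visitors_a, conv_b, visitors_b):
    """Two-sided p-value for a difference in conversion rates."""
    rate_a = conv_a / visitors_a
    rate_b = conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_a - rate_b) / std_err
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Both buckets saw the identical page, so we want a "boring" p-value (> 0.05).
p = two_proportion_p_value(conv_a=180, visitors_a=5000, conv_b=195, visitors_b=5000)
print(f"A/A test p-value: {p:.3f}")
```

With two identical pages, a p-value comfortably above 0.05 is the boring result you're hoping for; a very small one (or a sudden drop in overall conversions once the test starts) points back at the tool itself.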

Problem #2: You stop testing at the first significant result.

This is the statistical equivalent of taking your ball and going home. Unfortunately, when it comes to A/B testing, stopping your test as soon as you see a statistically significant result isn't just bad sportsmanship -- it also produces completely invalid results.

Many tools encourage this behavior by allowing users to stop a test as soon as statistical significance has been hit. But if you want to drive real improvement on your site, you need to fight the urge to end your tests early. This may seem counterintuitive, but the more often you check your test for significant results, the more likely you are to see incorrect ones.

The issue here is false positives: results that incorrectly show a difference between pages. The more often you check your results, the more likely you are to hit a result that's been thrown off by a false positive.

This isn't a problem if you stay calm and let the test run its course. End it at the first sign of a significant result, though, and you'll likely fall victim to a deceptive false positive.

Analytics firm Heap published the results of a simulation that shows how ending your test early compromises your results.

With standard significance testing, a 1,000-user test that's checked only once carries a 5% chance of a false positive. If the tester checks the same group of users 10 times over the course of the test, the chance of a false positive result balloons to 19.5%. Checked 100 times, that 5% chance increases eightfold to 40.1%.

These are good numbers to remember next time you get excited about early promising results.
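You don't have to take those figures on faith, either -- the peeking effect is easy to reproduce. Below is a rough sketch in Python (standard library only) that runs two identical pages with an assumed 10% conversion rate and checks for significance ten times along the way; the setup is illustrative, not Heap's actual simulation.

```python
# A rough simulation of the peeking problem: two identical pages with an
# assumed 10% true conversion rate, checked for significance 10 times.
import random
from math import sqrt
from statistics import NormalDist

def p_value(conv_a, n_a, conv_b, n_b):
    pooled = (conv_a + conv_b) / (n_a + n_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if std_err == 0:
        return 1.0
    z = (conv_a / n_a - conv_b / n_b) / std_err
    return 2 * (1 - NormalDist().cdf(abs(z)))

def stops_early(users_per_page=1000, checks=10, rate=0.10):
    """Return True if any interim check falsely hits p < 0.05."""
    conv_a = conv_b = 0
    step = users_per_page // checks
    for i in range(1, checks + 1):
        conv_a += sum(random.random() < rate for _ in range(step))
        conv_b += sum(random.random() < rate for _ in range(step))
        if p_value(conv_a, i * step, conv_b, i * step) < 0.05:
            return True  # we'd have stopped and declared a (false) winner
    return False

trials = 2000
early_stops = sum(stops_early() for _ in range(trials))
print(f"Declared a bogus winner in {early_stops / trials:.1%} of trials")
```

Even though the two pages are identical, peeking ten times declares a bogus winner in roughly the same ballpark as the 19.5% figure above -- which is exactly why a single check at a fixed sample size is the safer discipline.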

The Fix: Stick to a predetermined sample size.

To combat false positives, discipline is key. You should set a sample size in stone prior to running an A/B test and resist the urge to end your test early (no matter how promising your results look).

Don't fret if you're scratching your head over how large your sample needs to be. There are plenty of tools available online for calculating a minimum sample size; some of the most popular are from Optimizely and VWO.

One last note on sample size: Keep in mind that you’ll need to pick a realistic number for your page. While we would all love to have millions of users to test on, most of us don’t have that luxury. I suggest making a rough estimate of how long you’ll need to run your test before hitting your target sample size.
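If you'd rather see the math than trust a black box, the calculation behind those calculators is simple enough to run yourself. The sketch below uses Python's standard library; the 5% baseline conversion rate, 20% minimum detectable lift, and 2,000 weekly visitors per variation are assumptions for illustration, so plug in your own numbers.

```python
# A back-of-the-envelope minimum sample size calculation. The 5% baseline
# conversion rate, 20% relative lift, and 2,000 visitors per week per
# variation are assumptions for illustration -- plug in your own numbers.
from math import ceil
from statistics import NormalDist

def sample_size_per_variation(baseline, relative_lift, alpha=0.05, power=0.80):
    """Visitors needed per variation to detect the given relative lift."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_power = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_power) ** 2 * variance / (p1 - p2) ** 2)

n = sample_size_per_variation(baseline=0.05, relative_lift=0.20)
print(f"Visitors needed per variation: {n}")
print(f"Weeks to finish at 2,000 visitors/week per variation: {n / 2000:.1f}")
```

Dividing the required sample by your realistic weekly traffic gives you the rough test duration I suggested estimating up front.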

Problem #3: You’re only focusing on conversions.

When you're deep in the weeds of an A/B test, it's easy to focus on the trees and miss the forest. Put more literally: in A/B testing, it's easy to concentrate only on conversions and lose sight of the long-term business results they produce.

Adding new copy to your site may produce a higher conversion rate, but if the users it converts are of lower quality, that higher conversion rate can actually create a negative result for the business.

It's easy to fall victim to vanity metrics while A/B testing, yet these metrics distract you from the results that actually drive revenue. If you're testing a call-to-action that leads to a landing page, don't focus solely on clickthroughs to the landing page. Instead, measure the leads the page produces and, ideally, tie those leads to the revenue they generate.

The Fix: Test a hypothesis.

Before you start your A/B test you should outline a hypothesis you wish to validate or disprove. By focusing this hypothesis on a KPI that drives actual business results, you’ll avoid being distracted by vanity metrics.

Your A/B test should be judged on its ability to affect this KPI, not on its impact on other associated figures. So if your goal is to increase sign-ups, always judge success by measuring sign-ups, not clickthrough rates to the sign-up page.
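As a concrete (and entirely hypothetical) illustration, here's a tiny Python sketch that compares two variations on both the vanity metric and the KPI; the counts are made up, and in practice you'd pull them from your analytics tool.

```python
# An entirely hypothetical example: two variations compared on the vanity
# metric (clickthroughs to the sign-up page) and on the KPI (completed
# sign-ups). In practice you'd pull these counts from your analytics tool.
visitors = {"A": 10_000, "B": 10_000}
clicks_to_signup_page = {"A": 900, "B": 1_200}   # vanity metric
completed_signups = {"A": 310, "B": 295}         # the KPI in the hypothesis

for variation in ("A", "B"):
    ctr = clicks_to_signup_page[variation] / visitors[variation]
    signup_rate = completed_signups[variation] / visitors[variation]
    print(f"{variation}: clickthrough {ctr:.1%}, sign-up rate {signup_rate:.1%}")

# Variation B wins on clickthroughs but loses on sign-ups -- judged on the
# KPI, it isn't the better page.
```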

When working to validate or disprove your hypothesis, don't just throw out results that aren't statistically significant -- use them to inform your later tests instead. For example, if a change to your page's CTA showed a small, statistically insignificant improvement, it could be a sign that you're onto something. Try running further tests on your CTA and see if you can hit on one that produces a significant improvement.

Problem #4: You only test incremental changes.

The button color test may have ruined A/B testing; its popularity has made it the default frame of reference for how A/B testing should be used. But there's more to the practice than that. In fact, while a large website might see a big return from adjusting something small like button color, for the vast majority of us these small, incremental changes are not going to produce meaningful results.

A/B testing can force us to aim for minuscule improvements, but by focusing only on the incremental, we may be missing a much larger opportunity.

The Fix: Run periodic radical tests.

A good rule of thumb? Periodically test radical changes to your page. (This practice has come to be known as radical testing.) If you're seeing weak conversion rates, it's probably a sign you should invest time in testing a radical change rather than incremental ones.

Think of your testing efforts like a poker game: you'll need to bet big periodically if you want to see a big return.

But before you run off singing the praises of radical testing, be aware that it has some drawbacks. First, it requires more upfront labor than incremental A/B testing, because you have to invest time drafting a major page redesign. Given that time investment, I recommend conducting radical tests only periodically.

The other pitfall is that radical testing makes it hard to pinpoint which factors are having the largest impact on your site. It will tell you whether a full page overhaul moves your conversions, but not which individual changes are driving the result -- so keep that in mind before you get started.

These are a few of the most common A/B testing mistakes, but there are many, many more. Share some of the missteps you've seen in the comments below.
