11 Common A/B Testing Myths BUSTED

Ginny Mineo


When you're making marketing decisions every day, you probably don’t think about taking some time out to run an experiment. You’re busy writing email copy, designing your latest landing page, or crafting the perfect social media update -- not creating tests, optimizing treatments, or shattering null hypotheses.

But what if I told you that the latter three actions could completely transform the way you do the first three, help you make solid decisions based on data, and generate even more leads for your business?

A/B testing, also known as split testing, allows you to do just that. To run an A/B test, marketers take two different versions of one piece of content (often landing pages, emails, and calls-to-action) and test them with two similarly sized audiences. To declare a winner, marketers check whether the difference in performance between the two versions is statistically significant at a certain confidence level (typically 95% or higher). Using split testing in your marketing helps you optimize your assets for more leads and more converted customers.
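If you're curious what that split looks like under the hood, here's a minimal sketch (in Python, not tied to any particular tool) of how a visitor might be assigned to one of two similarly sized groups. The hashing approach and the experiment name are just illustrative assumptions:

```python
# A minimal sketch of splitting an audience into two similar-sized groups.
# Bucketing by hash and the "experiment_name" salt are illustrative assumptions,
# not any specific tool's implementation.
import hashlib

def assign_variant(user_id: str, experiment_name: str = "cta-color-test") -> str:
    """Deterministically assign a visitor to variant A or B (~50/50 split)."""
    digest = hashlib.md5(f"{experiment_name}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same visitor always lands in the same bucket, so each person
# sees one consistent version of the page or email.
print(assign_variant("visitor-12345"))  # e.g. "B"
```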

Unfortunately, this isn't what everyone hears about A/B testing. There are a ton of myths out there that prevent smart marketers from making accurate, data-driven decisions. To keep you from landing in that camp, we’re going to debunk some of the most common A/B testing myths out there. Let’s get started!

Myth #1: Marketers’ instincts work better than A/B testing.

Even the most talented and experienced marketers can be wrong. After years of experience, many of us have a solid understanding of what generally works to convert visitors into leads, and leads into customers -- but instincts shouldn’t be the only thing guiding our decisions.

Split testing allows you to use data to drive more traffic to your website and increase conversion rates. In fact, A/B testing has been shown to generate up to 30-40% more leads for B2B sites, and 20-25% more leads for ecommerce sites.

The moral of the story here: if you’re relying only on the HiPPO (highest paid person’s opinion) in the room, you’re missing out on the potential for increased revenue.

Myth #2: You should use A/B testing before making every single decision.

While split testing can help you with many of your marketing decisions, you don’t need to test every single decision you make. Some changes are not worth testing. For example, you don’t need to A/B test the clickthrough rates of these two headlines: “The Marketer’s Guide to Pinterest” and “A Marketer’s Guide to Pinterest.” Though split tests work well for small changes like the color of your CTA, switching out “the” for “a” won’t make a dent in your conversion rates. Now, if you wanted to test two headlines with different positioning -- yes, that would indeed warrant an A/B test.

Myth #3: A/B testing is not as effective as multivariate testing.

A/B testing and multivariate testing (MVT) are both great ways to use data to drive your marketing decisions, but they serve very different purposes. A/B testing compares two (or more) treatments that differ in a single element. MVT tests the effectiveness of many combinations of elements across many treatments at once.

For example, an A/B test would be used to test the effect of the CTA color on the conversion rate while all other elements on the page are the same -- the traffic sources, type of visitor, layout of the form, and even the accompanying copy and image. You’re trying to answer one direct question: How does the color of the CTA affect conversions? You aren’t trying to explore how combinations of elements affect conversions (e.g. how the combination of the color of the CTA, the number of fields in the form, and the type of image used affect conversions).
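To make the difference concrete, here's a quick sketch (the element names and values are made up) showing how an A/B test stays at two treatments while a multivariate test multiplies into many combinations:

```python
# Illustrative sketch: an A/B test varies one element, while a multivariate
# test crosses several elements into many combinations. All element names
# and values here are invented for the example.
from itertools import product

# A/B test: one element, two treatments
ab_treatments = [{"cta_color": "red"}, {"cta_color": "green"}]

# Multivariate test: every combination of several elements
cta_colors = ["red", "green"]
form_fields = [3, 5]
images = ["photo", "illustration"]

mvt_treatments = [
    {"cta_color": c, "form_fields": f, "image": i}
    for c, f, i in product(cta_colors, form_fields, images)
]

print(len(ab_treatments))   # 2 treatments
print(len(mvt_treatments))  # 8 treatments -- and far more traffic needed to test them
```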

So it's not that one test is more effective than another -- they're just ... different types of tests!

Myth #4: If a treatment works for one marketer, it will work for any marketer.

Though there are a ton of A/B testing case studies demonstrating the success of certain layouts, designs, and copy on conversion rates, you should never blindly follow other marketers’ success without testing it on your own first. Each testing situation is different. The original site has different traffic, audiences, products, marketing funnels, and promotions, so what works for that site may not work for yours.

That being said, taking a page out of someone else’s marketing plan can be a great jumping-off point for your own marketing activities. For example, if you’re looking to improve the clickthrough rate (CTR) on your emails, you might want to try using a personalized sender name. In 2011, we conducted a test to see if including a personal name from someone on the HubSpot marketing team in the email's "From" field would increase the email CTR. In the test, the control (from "HubSpot") had a 0.73% CTR while our treatment (from "Maggie Georgieva, HubSpot") had a 0.96% CTR -- the personalized "From" field was a clear winner with 99.9% confidence.


This test worked for our audience, so it may work for yours ... or it may not. Use A/B tests to try it for yourself and learn the best tactic for your audience, and your marketing.
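If you're wondering what that "99.9% confidence" means in practice, here's a rough sketch of the underlying math. The CTRs come from our test, but the send counts (50,000 emails per version) are a hypothetical assumption, since the actual sample sizes aren't listed here:

```python
# A rough sketch of the math behind a result like the "From" field test above.
# The CTRs are from the article; the send counts are made-up assumptions.
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(clicks_a, sends_a, clicks_b, sends_b):
    """Return the z statistic and two-sided p-value for a difference in CTR."""
    p_a, p_b = clicks_a / sends_a, clicks_b / sends_b
    p_pool = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return z, p_value

# Control "HubSpot": 0.73% CTR; treatment "Maggie Georgieva, HubSpot": 0.96% CTR
z, p = two_proportion_z_test(clicks_a=365, sends_a=50_000,
                             clicks_b=480, sends_b=50_000)
print(f"z = {z:.2f}, p = {p:.5f}, confidence ~ {(1 - p) * 100:.2f}%")
```

With sample sizes that large, the confidence comes out above 99.9%; with only a few thousand sends per version, the same CTRs wouldn't be conclusive.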

Myth #5: You need to be a tech-savvy marketer with a large budget to do A/B testing.

A/B testing doesn’t have to be expensive. If you’re operating on a near-zero-dollar budget, there are free split testing tools available like Google Analytics’ Content Experiments. Though Google’s tool is free, you will have to be a bit more tech-savvy to implement it.

Most paid A/B testing tools, including the one in HubSpot’s all-in-one marketing software, have a higher upfront cost but are much less technologically challenging. Paid tools are obviously more expensive than free tools (duh), but you may be able to work more quickly to cut down on the overhead costs.

Besides navigating technology and budget issues, you will need to be comfortable using some math to properly execute a split test. All winning tests must be statistically significant, so you’ll need to know what that means and how to use it to interpret the results. While you can determine if something is statistically significant with pen and paper, you can also use HubSpot’s free A/B testing calculator to figure it out.

Ultimately, you’ll need to have different levels of technological and mathematical knowledge depending on what resources you have available, but budgets don't need to be a barrier to A/B testing if you don’t mind diving into numbers and technology.

Myth #6: A/B testing is only for sites with a ton of traffic.

Since you only need to compare two treatments in an A/B test, you don’t need a ton of visitors to get results -- you just need enough to reach statistical significance (the point at which you have at least 95% confidence in the results). Though more visitors will give you a more accurate picture of what works and what doesn’t, there isn’t a universal minimum number of visitors for an A/B test. All you need is enough people to make sure the test is statistically significant.

There also are quite a few free tools out there to help you find how many visitors you need without getting an advanced degree in statistics.
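As a rough illustration of what those calculators are doing, here's a back-of-the-envelope sketch. The baseline conversion rate and the lift you want to detect are assumptions you'd swap for your own numbers:

```python
# A back-of-the-envelope "how many visitors do I need?" calculation.
# Baseline rate, target lift, confidence, and power are assumptions to replace
# with your own figures.
from math import ceil
from scipy.stats import norm

def visitors_per_variant(baseline, minimum_lift, confidence=0.95, power=0.80):
    """Approximate sample size per variant for a two-proportion test."""
    p1 = baseline
    p2 = baseline * (1 + minimum_lift)            # rate we hope the treatment hits
    z_alpha = norm.ppf(1 - (1 - confidence) / 2)  # two-sided significance
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2) * variance / (p2 - p1) ** 2
    return ceil(n)

# e.g. a 3% landing page conversion rate, hoping to detect a 20% relative lift
print(visitors_per_variant(baseline=0.03, minimum_lift=0.20))  # about 13,900 per variant
```

Notice how the required traffic shrinks if you're only trying to detect a big lift, and balloons if you want to catch a tiny one.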

Myth #7: A/B testing negatively affects your SEO.

One of the frequently asked questions about A/B testing is whether it can hurt your SEO. People think that if you're testing multiple versions of the same content, your website could be categorized in Google’s algorithm as duplicate content and penalized in the SERPs as a result.

This myth is completely false -- in fact, Google encourages content testing and publishes guidelines for running experiments without being penalized for duplicate content. Setting this up can get somewhat technical without A/B testing software that handles it for you, but it’s worth the investment, since being found by search engines is essential to your inbound marketing success.

Myth #8: If one treatment stands out immediately, you don’t need to continue running the rest of the test.

One of the most important things to remember when running an A/B test is to always wait until your results are statistically significant. Just as you need a statistically significant number of visitors, the length of time you run an A/B test should be based on reaching your desired confidence level. Even if one treatment seems to be winning by a landslide right away, if the sample size and time frame haven’t reached statistical significance yet, you need to keep the test running.

If you end up pulling the test before it has a chance to display accurate data, you could end up choosing the wrong test to implement in your marketing -- a potentially costly mistake. If you’d like to figure out how long your test needs to run before it reaches statistical significance, try Wingify’s test duration calculator.
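If you'd like a rough feel for the arithmetic behind a duration calculator, the back-of-the-envelope version is just your required sample size divided by your daily traffic. The sample size below could come from a calculation like the sketch under Myth #6 (or from a tool like Wingify's calculator), and the traffic figure is hypothetical:

```python
# A back-of-the-envelope sketch of how long to keep a test running.
# The sample size and daily traffic figures are hypothetical.
from math import ceil

def days_to_run(visitors_needed_per_variant, daily_visitors, variants=2):
    """Estimate how many days of traffic a test needs before calling a winner."""
    daily_per_variant = daily_visitors / variants
    return ceil(visitors_needed_per_variant / daily_per_variant)

# e.g. ~13,900 visitors needed per variant, 1,500 visitors a day split two ways
print(days_to_run(13_900, daily_visitors=1_500))  # about 19 days
```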

Myth #9: Winning treatments will always look pretty.

One of the reasons that you should A/B test in the first place is to help drive your marketing with data, not subjective opinion. Time after time, A/B tests have proven that landing pages, emails, or CTAs don’t always have to look beautiful ... they just have to work better than the alternative. Even if a test isn’t pretty, it still can drive more conversions than one with a more "beautiful" layout. Don’t rely on your judgment of the design -- use the A/B testing results to guide your marketing.

Myth #10: You’re only measuring one conversion rate.

A/B testing results shouldn’t just stop at one metric -- you should be examining how your treatment moves multiple metrics. Otherwise, you risk missing larger, more important insights. 

For example, if you were using an A/B test to see how the color of a CTA affected conversions on your company’s blog, you wouldn’t want to just look at the number of blog subscribers your test drives. You’d want to see how many visitors converted into leads, and how many leads converted into customers, too. You might actually find that the color that converted the most subscribers was resulting in a decrease in leads! And hey, that might be okay, if your goal is subscribers even at the expense of leads. Or, it might cause you to scream in horror and immediately change the color back, knowing that more subscribers aren't worth it if it's at the expense of leads.

Either way, it demonstrates the importance of looking at more than just one metric when analyzing the results of a test.
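Here's a tiny sketch (with invented numbers) of what looking at the whole funnel might look like in practice:

```python
# A small sketch of looking past a single conversion rate. The counts are
# invented; the point is to compare each funnel stage side by side.
funnel = {
    "red_cta":   {"visitors": 10_000, "subscribers": 620, "leads": 95,  "customers": 9},
    "green_cta": {"visitors": 10_000, "subscribers": 540, "leads": 130, "customers": 14},
}

for variant, stages in funnel.items():
    visitors = stages["visitors"]
    rates = {stage: count / visitors
             for stage, count in stages.items() if stage != "visitors"}
    print(variant, {stage: f"{rate:.2%}" for stage, rate in rates.items()})

# Here the red CTA "wins" on subscribers but loses on leads and customers --
# exactly the kind of trade-off Myth #10 warns about.
```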

Myth #11: After you finish your A/B test, you’re done.

So you ran an A/B test, gathered a proper sample, reached your desired confidence level, and one treatment won ... but you aren’t done yet. Whether you’ve found dramatic results or not, you should keep testing and optimizing your content for conversions and leads. In other words, A/B testing shouldn’t be a one-time experiment -- use it to continually adjust and improve your marketing.

For example, let’s say that you’ve used an A/B test to determine that a red CTA button increased conversions over a green CTA button. Now that you know which CTA to use, you should try testing which button copy continues to increase your conversion rate. By continuing to use split testing, you can use data to drive your marketing decisions, a smart and cost-effective way to grow your business!

What common A/B testing myths have you seen debunked?

Image credit: Ken's Oven

