Why Marketers' A/B Testing Shouldn't Be Limited to Small Changes

Joshua Porter

Most people who do A/B testing do it modestly, testing only one variable at a time. They test headlines, images, button text, and other important page elements that are crucial to conversion. This makes sense. These elements are simple to test, the results you get are clear, and the next step is obvious: you permanently implement the text or image that converts best. By keeping the testing to one isolated variable, you can be more confident in the results.
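To make the "clear results" point concrete, here's a minimal sketch of how you might check whether a single-variable test (say, headline A vs. headline B) produced a significant difference. The visitor and conversion counts are hypothetical, purely for illustration:

```python
# A two-tailed z-test for the difference between two conversion rates.
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))    # two-tailed
    return p_a, p_b, z, p_value

# Hypothetical counts: headline B converts at 12% vs. 10% for headline A.
p_a, p_b, z, p = two_proportion_z_test(conv_a=500, n_a=5000, conv_b=600, n_b=5000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.4f}")
```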

But there has also been a lot of discussion about the downsides of this incremental approach. The argument is that if you continually test small items over time, you won’t be able to iterate your way to a much better design. Instead, you’ll just improve in very small increments and get stuck in what has come to be called the Local Maximum, meaning that you’ve hit a glass ceiling in your design. Without a big shake-up, you won’t make any big gains.


The classic case study for this type of testing is  Google’s 41 Shades of Blue study , where Google rigorously tested the color of the links on Google Search and Gmail to see which shade of blue converted best. And because Google has a tremendous amount of traffic, they were able to actually see differences between two shades of blue that a normal person would be hard-pressed to distinguish. At huge scale, small differences are discernible, so it makes sense for Google to do that testing.

But for the vast majority of folks working with much smaller traffic, that sort of incremental testing isn’t feasible. Yet most people still seem to test one variable at a time. (As an aside, we’ve been guilty of publishing results like this ourselves, such as the red/green button test results. The purpose of that test was to show that even something as innocuous as button color can make a real difference… not that this is the best type of test to run.)

Don't Limit Yourself to Small Changes

So the answer is simple: test the entire page as a variable. Instead of testing single design elements like headlines, images, or call-to-action buttons, design two completely different pages and test them against each other. In this way, the entire page becomes the variable you’re testing. It’s just as valid a test as ever; you’re simply working at a higher level.
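As an illustration, here's a minimal sketch of how a page-level split might be assigned server-side. The visitor IDs and template names are hypothetical; in practice you'd lean on your A/B testing tool or web framework, but the idea is the same: each visitor is deterministically bucketed into one of the two designs.

```python
import hashlib

def assign_page(visitor_id: str, test_name: str = "homepage-redesign") -> str:
    """Deterministically bucket a visitor into one of two page designs."""
    digest = hashlib.md5(f"{test_name}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 2                 # even split across two designs
    return "homepage_original.html" if bucket == 0 else "homepage_redesign.html"

# The same visitor always lands on the same design across visits.
print(assign_page("visitor-42"))
print(assign_page("visitor-43"))
```

Hashing the visitor ID, rather than picking at random on every request, means returning visitors keep seeing the same page, which keeps the measurement clean.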

An Example of a Page-Level Design Test

Here is an example of a test in which a page-level design had a large effect. Luke Stevens tested two completely different versions of a homepage for his upcoming Performance Based Design book. The results weren’t even close. One outperformed the other by 131%.


Running a Page-Level Design Test

There are some differences between element-level and page-level testing. The big one is that designing a whole new page takes a lot longer than writing another headline, so creating the variations and setting up the test will take more time. But once the test starts, the differences between two completely different pages will show up much faster than the differences between two headlines: the bigger the difference between variations, the bigger the effect size, and the fewer visitors you need before the result is significant.
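Here's a rough sketch of that math, using a standard sample-size approximation for comparing two conversion rates. The baseline rate and lifts are hypothetical:

```python
# Sample size per variation shrinks with the square of the effect size.
from math import ceil

def visitors_per_variation(p1: float, p2: float,
                           z_alpha: float = 1.96,        # 95% confidence
                           z_power: float = 0.84) -> int:  # 80% power
    """Approximate visitors needed per variation for a two-proportion test."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_power) ** 2 * variance / (p1 - p2) ** 2)

# A headline tweak might lift a 10% conversion rate to 11%;
# a full redesign might plausibly lift it to 15%.
print(visitors_per_variation(0.10, 0.11))  # ~14,700 visitors per variation
print(visitors_per_variation(0.10, 0.15))  # ~680 visitors per variation
```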

Now, let’s imagine one page worked much better than the other, and you wanted to know why. Since the entire page was your variable, you can’t confidently pinpoint the exact reason: was it the different layout, style, or copy? If you don’t know why something is working (or not), you’ll probably want to roll some other UX methods into the mix, such as interviews or user testing. By combining A/B testing with other methods, you’ll get the complete picture you’re after.

We’re seeing lots of folks A/B testing large changes lately, and it’s something I’m sure we’ll see more of. I've helped a lot of people get started with page-level testing, and the results are always interesting. In one recent example, the design team for an art site was testing classic paintings by the old European masters vs. modern abstract artworks, and conversion changed drastically based on whether members of the audience were in their twenties vs. in their fifties. It turns out that younger people dig abstract art and older people like the European masters. This seems to make some sense in hindsight, but it was testing that led to the insight in the first place.
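For illustration, here's a minimal sketch of that kind of segment breakdown. The event records are entirely made up, standing in for whatever your analytics tool exports:

```python
# Break A/B results down by an audience attribute (here, age band).
from collections import defaultdict

events = [
    # (variation, age_band, converted) -- hypothetical records
    ("classic", "20s", False), ("classic", "50s", True),
    ("abstract", "20s", True), ("abstract", "50s", False),
    ("classic", "20s", False), ("abstract", "20s", True),
]

stats = defaultdict(lambda: [0, 0])  # (variation, age_band) -> [conversions, visitors]
for variation, age_band, converted in events:
    stats[(variation, age_band)][0] += int(converted)
    stats[(variation, age_band)][1] += 1

for (variation, age_band), (conv, n) in sorted(stats.items()):
    print(f"{variation:8s} {age_band}: {conv}/{n} = {conv / n:.0%}")
```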

In summary, A/B testing shouldn't just be about small changes. You can do page-level testing (and even flow-level testing) and get very interesting results, often faster and more insightful than the incremental tests A/B testing is known for. So change your testing strategy to include big leaps, and you can avoid the local maximum that much longer.

