Running experiments on your website, such as testing the copy of a headline or trying out a new color for your call-to-action buttons, is crucial to your online demand generation. These tests help you get definitive answers on what works and what doesn't. But experimental design can seem over-complicated, overwhelming, and time-consuming.
The thing is, running experiments on your website is only as complicated as you make it. So keep on reading -- we'll walk you through the basics of running an experiment. It's not rocket science, and once you understand the proper steps, you'll be running A/B tests and boosting conversions in no time.
1) Analyze data to find your website's problem areas.
First, determine which part of your website needs the most improvement. This way, you won't waste time testing things that don't need much help. For example, if you have two landing pages, one with a 5% conversion rate (CVR) and the other with a 30% CVR, focus on improving the 5% page.
If that seems awfully obvious to you, and you'd like to take the performance of more than just landing page CVR into account, let's go through a little case study. Here's some sample data for Example Enterprise:
Example Enterprise generates 60,000 website visitors and 500 leads per month.
Of the 60,000 website visitors, 60% entered through the homepage, 20% through the blog, and 20% through various other pages.
83% of Example Enterprise's leads converted on landing pages linked to from the company blog. Only 17% came from their homepage.
Example Enterprise's landing pages have an average conversion rate of 40%.
From this data, we know that Example Enterprise is getting a decent amount of website traffic. Most of the leads are coming from the blog, even though it accounts for only 20% of their traffic. The calls-to-action on the blog are doing very well -- if you do the math, they have an 8.6% click-through rate, and with that 40% landing page conversion rate, they're getting around 415 leads from the 12,000 blog visits. Not too shabby.
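If you'd like to check that math yourself, here's a quick sketch (the company and all figures are illustrative, taken straight from the case study above):

```python
# Back out the blog CTA click-through rate from the case study numbers.
visitors = 60_000
leads = 500

blog_visits = int(visitors * 0.20)        # 20% of traffic enters via the blog
blog_leads = round(leads * 0.83)          # 83% of leads convert via blog landing pages
landing_page_cvr = 0.40                   # average landing page conversion rate

# Landing page views needed to produce those leads:
lp_views = blog_leads / landing_page_cvr  # ~1,038 views

# CTA click-through rate from blog posts to landing pages:
blog_cta_ctr = lp_views / blog_visits

print(f"Blog visits: {blog_visits}, blog leads: {blog_leads}")
print(f"Blog CTA click-through rate: {blog_cta_ctr:.1%}")  # ~8.6%
```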
So what's the main problem?
Can you spot it?
Correct -- it's the homepage. The homepage generates 36,000 visits but a measly 85 leads per month. Why is that? We know their landing pages convert at a nice 40%, so the problem is likely how well the homepage drives visitors to those landing pages. Their other pages aren't doing well either -- in fact, they're not generating any leads at all. But the first priority should be the homepage, where 60% of website visitors are landing and not converting.
Now that we've narrowed down the most urgent problem, here are some questions we should ask Example Enterprise:
Is there a CTA above the fold on your homepage?
Does it stand out enough on the page?
Does the copy create enough of a sense of urgency?
Are you surfacing the right offer on your homepage?
Is the design of the page too confusing?
These are all things they can now test to improve conversions on their site.
2) Decide whether you'll be running a drastic or incremental A/B test.
What is a drastic A/B test?
A drastic A/B test is where the variant page is completely different from your control. For example, you might completely redesign the layout of your homepage, or swap in a call-to-action that features an entirely different offer and uses a different style, color scheme, and copy.
Here's an example of a drastic A/B test, which tests the layout of a thank-you page:
What is an incremental A/B test?
An incremental A/B test is where only one variable is different in the variant page from your control. For example, you might swap out an image on a landing page, try new copy for a page's headline, or change the color of a call-to-action button.
Here's an example of an incremental A/B test, which tests a landing page with no image vs. having an image.
Should you choose to run a drastic or incremental A/B test?
There is no wrong answer here. If you're new to experimental design, run a simple incremental test to get comfortable with running A/B tests while keeping your risk minimal.
According to many "best practices" you'll read online, you should run incremental tests, and the variant should only have one element that's different from your control. For example, you should test your headline OR color scheme OR form length.
But if you're an experienced marketer, I say GO ALL IN. If you want to find variations that are going to make a huge impact on your click-through rate or conversion rate, you need to make some big changes. Don't be afraid to run big tests. That drastic A/B test above yielded a 125% increase in trial sign-up conversion rates. It'd be hard to get those kinds of results after just tweaking the headline (though not impossible, mind you).
3) Choose software that makes it easy to test.
Now you want to choose software that lets you easily run your desired A/B test with minimal effort or coding know-how.
Of course, I'm fairly biased, but I use HubSpot to run our own A/B tests. In HubSpot, you can test things like:
Your landing pages - compare form submissions, CVR, new contact rates, and customers generated
If you don't use HubSpot, a couple good alternatives for A/B testing (just website elements, not email) are Optimizely and Unbounce.
4) Set up your test, wait it out, and don't touch.
Now it's time to implement your test, but as long as you've planned out what you want to test by now, this should be the easy part. The next step is to wait, which is super lame and boring, but necessary -- you need the traffic volume to your page to grow high enough to declare a winner with statistical significance.
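How much traffic is "enough" depends on your baseline conversion rate and the smallest lift you care about detecting. Here's a back-of-the-envelope sketch using the standard two-proportion power calculation (this is a generic statistical approximation, not a feature of any particular testing tool, and the 40%-to-45% numbers are just for illustration):

```python
import math

def sample_size_per_variant(baseline_cvr, expected_cvr,
                            z_alpha=1.96,   # 95% confidence (two-sided)
                            z_power=0.84):  # 80% power
    """Approximate visitors needed per variant to detect the given lift."""
    variance = (baseline_cvr * (1 - baseline_cvr)
                + expected_cvr * (1 - expected_cvr))
    effect = expected_cvr - baseline_cvr
    return math.ceil((z_alpha + z_power) ** 2 * variance / effect ** 2)

# e.g. detecting a lift from a 40% to a 45% landing page CVR:
n = sample_size_per_variant(0.40, 0.45)
print(f"~{n} visitors per variant")
```

Smaller expected lifts blow this number up fast, which is one reason drastic tests (bigger expected effects) can reach a verdict sooner than tiny incremental tweaks.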
The most important thing here is to not touch your test. If something on the variant or control is broken -- say, the form's submit button doesn't work -- fix it and restart the test. Otherwise, your data won't be sound.
It's excruciatingly frustrating to run a landing page A/B test, only to have someone mistakenly edit the page to add a sentence or something. That completely nullifies the data from that point forward, because I'd have no idea whether my test or their change influenced the final results. So just don't do it, and warn your colleagues not to touch any page you're testing without your permission.
5) Determine your test's statistical significance.
In the ideal experiment, the variant you're testing is the only variable, and all other conditions are constant. For example, the only difference would be the headline on your variant landing page, and all humans viewing the page would be perfect clones.
Of course, this is impossible. When conducting an A/B test, viewers are split evenly into two groups. But your viewers won't be perfect clones, and external factors will play a role in your results -- where the traffic came from, who's looking at your page, what time of day they're looking at it, whether or not they've had coffee yet, holidays vs. workdays, etc. This isn't much of a problem as long as the groups are split at random, but it does affect the data -- which is why we need to test for statistical significance. Once a result is statistically significant, you can be reasonably confident it wasn't just due to chance.
Using an A/B testing significance calculator, enter for both the control and the variant the total number of tries (landing page views, call-to-action impressions, emails sent, etc.) and the number of goals completed (form submissions, call-to-action clicks, email clicks).
The calculator will tell you the confidence level your data produces for the winning variation -- the closer you are to 100%, the more confident you can be that the difference between the variations is real. You should aim for a confidence level above 95%.
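If you're curious what such a calculator does under the hood, here's a minimal sketch using a two-proportion z-test and only the Python standard library (the view and conversion counts below are made up for illustration):

```python
import math

def confidence_level(tries_a, goals_a, tries_b, goals_b):
    """Two-sided confidence that the two conversion rates truly differ."""
    p_a = goals_a / tries_a
    p_b = goals_b / tries_b
    # Pooled conversion rate under the "no difference" hypothesis:
    pooled = (goals_a + goals_b) / (tries_a + tries_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / tries_a + 1 / tries_b))
    z = abs(p_a - p_b) / se
    # Standard normal CDF via erf; confidence = 1 - two-sided p-value
    p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return 1 - p_value

# Example: control with 1,000 views / 100 conversions
# vs. variant with 1,000 views / 130 conversions
conf = confidence_level(1000, 100, 1000, 130)
print(f"Confidence: {conf:.1%}")
```

In this made-up example the confidence lands just above the 95% bar, so the variant would be declared the winner; with fewer views, the same 10%-vs-13% split would not clear it.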
Originally published Jul 2, 2014 6:00:00 AM, updated June 28 2019