The last time I made an appearance here on the HubSpot Marketing Blog, I wasn’t shy about my love of experiments.
At the same time, I also wasn’t shy about my sense that, all too often, they’re conducted for the wrong reasons. We talked about how the purpose of online experiments is to answer questions about how people use your website.
But how do you know which questions to ask? And how do you know whether experiments are even a viable option to answer your questions in the first place? Before you jump in, you need to make sure you know these things.
Not sure where and when you should start? Fear not -- we’re here to help. Let’s get to it.
How to Tell If You Can Run Experiments
Before you come up with experiments to run, you need to make sure you can run them accurately. Experiments should be completely off the table until you have an established online presence and a means to track visitor behavior. To confirm you're ready, you’ll need five things.
1) High Traffic Volume
In order to trust that the results of an experiment aren’t driven by randomness, you need a high volume of traffic. Some experiments require larger sample sizes than others -- even hundreds of thousands of visitors, in some cases -- but typically, you'll need a minimum of 100 unique page views per day to reach statistical significance within a reasonable amount of time.
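To get a feel for how traffic volume translates into test duration, here's a minimal sketch using a common rule-of-thumb approximation for two-variant tests, n ≈ 16·p(1−p)/δ² per variant (roughly a 5% significance level and 80% power). The function names and the example numbers are illustrative, not from the original post.

```python
import math

def sample_size_per_variant(baseline_cvr, min_detectable_lift):
    """Rule-of-thumb sample size per variant for a two-variant test
    (~alpha = 0.05, power = 0.8): n ≈ 16 * p * (1 - p) / delta^2."""
    delta = baseline_cvr * min_detectable_lift  # absolute effect size
    n = 16 * baseline_cvr * (1 - baseline_cvr) / delta ** 2
    return math.ceil(n)

def days_to_run(daily_unique_views, baseline_cvr, min_detectable_lift, variants=2):
    """How many days of traffic it takes to fill every variant."""
    total = variants * sample_size_per_variant(baseline_cvr, min_detectable_lift)
    return math.ceil(total / daily_unique_views)

# Illustrative: a 5% baseline CVR, hoping to detect a 20% relative lift,
# with 100 unique page views per day.
needed = sample_size_per_variant(0.05, 0.20)   # 7,600 users per variant
days = days_to_run(100, 0.05, 0.20)            # 152 days at 100 views/day
```

Numbers like these are why low-traffic sites struggle with experimentation: at 100 views per day, even a modest test can take months to fill.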
2) Key Performance Indicators (KPIs)
In an experiment, your hypothesis is the statement you’re putting to the test. But what is it that you’re trying to improve as a result of this test? Those are your key performance indicators (KPIs) -- the quantifiable measures of the experiment’s success. Without them, you have no North Star to guide the purpose of your experiment, or the objectives behind it.
3) Behavioral Tracking
In order to measure and observe the performance and results of your experiment groups, you’ll need to establish which data you’ll be tracking and monitoring. In the digital realm, that might include questions like:
Which pages are people visiting?
Where did they come from?
What are they doing once they arrive at those pages? Are they converting, bouncing, or taking another action?
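The three questions above map naturally onto a per-event tracking record. Here's a minimal sketch of what such a schema might look like; the field names and example events are hypothetical, not tied to any particular analytics tool.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PageEvent:
    """Hypothetical minimal event schema covering the three questions:
    which page, where the visitor came from, and what they did there."""
    visitor_id: str
    page: str        # which page they visited
    referrer: str    # where they came from
    action: str      # "convert", "bounce", or another action
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Illustrative events
events = [
    PageEvent("u1", "/pricing", "google", "convert"),
    PageEvent("u2", "/pricing", "newsletter", "bounce"),
]

# With events in this shape, answering the questions is simple aggregation:
bounce_rate = sum(e.action == "bounce" for e in events) / len(events)
```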
4) Baseline Metrics
Even if you’re hoping to make improvements to your funnel, before you start an experiment, you should have an established, recorded funnel conversion rate (CVR). In other words, before you begin, you should be able to track:
Funnel visit -> retained customer
If you try to start an experiment without that information, you’ll have no benchmark to compare where you were prior to running it -- and therefore, you won’t know if you’re any better or worse off as a result.
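Computed as code, the baseline is just the ratio of retained customers to funnel visits; the numbers below are purely illustrative.

```python
def funnel_cvr(funnel_visits, retained_customers):
    """Baseline full-funnel conversion rate: funnel visit -> retained customer."""
    if funnel_visits == 0:
        raise ValueError("Record some traffic before computing a baseline.")
    return retained_customers / funnel_visits

# Illustrative numbers: 4,000 funnel visits, 120 retained customers
baseline = funnel_cvr(4000, 120)  # a 3% baseline CVR
```

Record this number before the experiment starts; it's the benchmark every later result gets compared against.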
5) You’ve picked all of the low-hanging fruit.
Make sure you’ve fully built out and iterated on all of the basic requirements for your funnel to operate correctly. For example, in the ecommerce sector, you might want to optimize your online product catalogue. But you can’t do so until you’ve made sure every product is listed there, you have a complete online checkout system, and visitors have a way to contact you for customer service.
We have a phrase for this step: “Don’t start hanging up pictures before you paint the walls.”
How Do I Know If I Have These Five Things?
If you find yourself asking that question, we recommend running an A/A test -- an experiment where you go through all the motions of running and tracking an experiment, without actually changing anything. We do this in three steps:
Run the dummy test for five business days.
Take the test down.
Analyze the results. When you do, ask yourself three questions:
Do you have 500+ unique users enrolled in the experiment?
Can you track both experiment groups through the full funnel?
Is funnel CVR about equal for both experiment groups?
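"About equal" can be made precise with a standard two-proportion z-test: in an A/A test, the two groups' CVRs should not differ significantly. The sketch below is one way to check all three questions at once; the threshold of 1.96 corresponds to a ~95% confidence level, and the function names and sample counts are illustrative.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-statistic: how far apart are the two groups' CVRs?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

def aa_test_passes(conv_a, n_a, conv_b, n_b, min_users=500):
    """An A/A test 'passes' when there's enough traffic and no
    significant CVR difference between the two identical groups."""
    enough_traffic = (n_a + n_b) >= min_users
    cvrs_match = abs(two_proportion_z(conv_a, n_a, conv_b, n_b)) < 1.96
    return enough_traffic and cvrs_match

# Illustrative A/A results: 30/400 vs. 28/410 conversions -- close enough
ok = aa_test_passes(30, 400, 28, 410)
```

If an A/A test fails on the third question, it usually points to a tracking or randomization problem, not a real difference between identical groups.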
So, do you have those five things? Nice job -- you’re already ahead of the curve. But experimentation still only makes sense when you can identify questions worth answering through quantitative research.
First things first, you need to pick a funnel that you want to optimize through experimentation. Once you have your funnel, identify the unanswered questions you have about how your audience moves between its stages. To identify unanswered questions, we need to take stock of what we already know.
Identifying who moves through your funnel, and why
Do you know exactly who’s entering the funnel and from where, with quantitative and qualitative data to back it up? How about why they’re entering the funnel, with the same supporting data? If you don't know the answers to these questions, this is where you should start.
Next, if you look at your funnel, can you figure out why people aren’t converting between steps?
Identifying knowledge gaps for how people move through your funnel
Let’s look at the following conversion:
Basic visit -> purchase
Our goal is to identify why people do not convert between steps in our funnel. To find out, we need to list reasons why we think people are not converting, and seek out data to back up our claims. We will know that we have listed the right reasons when we can account for more than 100% of unconverted users, with supporting data.
Are people not purchasing because:
They have unanswered questions about the product? (Let's say this reason accounts for 5% of non-purchasing users.)
They aren’t ready to make a purchase yet? (10% of non-purchasing users)
They don’t see how the product fits into their lives? (40% of non-purchasing users)
The product doesn’t align with what they are looking for? (5% of non-purchasing users)
There are better-priced alternatives? (10% of non-purchasing users)
There are alternatives with more or better features? (10% of non-purchasing users)
They lack confidence in the product or the company that sells it? (30% of non-purchasing users)
Note: These percentages total more than 100% -- a given user often has multiple reasons for deciding not to purchase.
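The accounting check above is easy to automate: sum the reason percentages, confirm they cover more than 100% of non-purchasing users, and surface the biggest gap. The reason labels and percentages below just mirror the hypothetical list above.

```python
# Hypothetical non-conversion reasons and the share of non-purchasing
# users each accounts for (from the illustrative list above).
reasons = {
    "unanswered questions about the product": 5,
    "not ready to make a purchase yet": 10,
    "don't see how the product fits into their lives": 40,
    "product doesn't align with what they're looking for": 5,
    "better-priced alternatives": 10,
    "alternatives with more or better features": 10,
    "lack confidence in the product or the company": 30,
}

total = sum(reasons.values())  # 110 -- users can hold multiple reasons
fully_accounted = total > 100  # the "right reasons" criterion from above
biggest_gap = max(reasons, key=reasons.get)  # where to dig in first
```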
If you find that you're struggling to put together a list of reasons as to why people don't convert, you'll need to gather qualitative feedback from your customers.
Once you've put together a thorough list, take a step back and look for areas of opportunity. For example, on the list above, home in on, "They don't see how the product fits into their lives," and ask, "Why?" Assuming we have product-market fit, there must be something we don't understand here. Otherwise, how could 40% of non-purchasing users be unable to see themselves using the product? That could become a fundamental question we aim to answer through quantitative experimentation.
To boil it down: Experiments answer questions. To identify experiments, you need to identify gaps in your knowledge, and to do that, list what you do know -- that will help you more easily identify what you don't.
We hope that this post has provided you with the tools to identify when you should run experiments. In my next post, we'll get into ways you can discover the unanswered questions about your funnel, and prioritize those questions to maximize your investment in a given experiment. Plus, we'll provide a helpful framework for doing so.
How do you identify which experiments to run? Let us know your approach -- and hey, we might even feature your experiment on our blog.
Originally published Sep 1, 2017 6:00:00 AM, updated September 01 2017