Though it first gained widespread popularity 20 years ago, email hasn’t changed a whole lot. The same can be said of the challenge for email marketers: You must still get your message to the right person, at the right time, in a unique and interesting way, all in the hope that it will drive traffic to your site and increase engagement with your product or service.
But consumers are a skeptical lot. Consider how many times you’ve deleted a mailing list email from your inbox. How much of it did you read? Was the content compelling? Was anything about it interesting? Did you even get past the subject line?
While a few are predicting the death of email in favor of other channels such as social media, the reality is that email use is still growing about 7 percent each year. And as Yory Wurmser of the Direct Marketing Association recently said, “Although email is still a vibrant channel, consumers — especially younger ones — use it more selectively, and marketers need to adjust accordingly.”
Email remains the most effective and direct way to connect with your customers and prospects, but its content must be more compelling than ever. It must reach the right subscriber with the right message at the right time. Do that effectively, and you’ll see a clear increase in both your overall open rates and your click-through rates (CTR). But what’s the best way to get insight into which message to send to which audience, and when?
Everyone knows about comparison shopping, but what about comparison marketing? Fast, accurate answers to these questions can be obtained through a method known as A/B testing. This approach involves sending different versions of a message to smaller segments of an overall audience and picking a winner among the variants tested. The top performer is then sent to a much larger portion of your target audience.
A/B testing can be as simple as sending two emails with different page formats (e.g., long versus short), different image sizes (or no images at all) or two different headlines and text styles (e.g., bullets versus paragraphs). Used effectively, A/B testing is a reliable indicator of which message is most likely to generate the most leads and clicks with your largest target audience. But to run a successful A/B test, you first have to set the right parameters and define the outcome you want to reach.
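To make the mechanics concrete, here’s a minimal Python sketch of how an audience might be split for such a test. The subscriber list, the 20 percent test slice and the group labels are illustrative assumptions, not any particular email platform’s API:

```python
import random

def split_audience(subscribers, test_fraction=0.2, seed=42):
    """Randomly carve a test slice of the audience into groups A and B.

    `subscribers` is a list of email addresses; `test_fraction` is the
    share of the full list used for the test. The remainder is held
    back to receive the winning version later.
    """
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    pool = list(subscribers)
    rng.shuffle(pool)

    test_size = int(len(pool) * test_fraction)
    test_pool, holdout = pool[:test_size], pool[test_size:]

    midpoint = test_size // 2  # even 50/50 split within the test slice
    return {
        "A": test_pool[:midpoint],   # e.g., photo with a smiling face
        "B": test_pool[midpoint:],   # e.g., the same email without it
        "holdout": holdout,          # gets the top performer afterward
    }

groups = split_audience([f"user{i}@example.com" for i in range(10_000)])
print({name: len(members) for name, members in groups.items()})
# {'A': 1000, 'B': 1000, 'holdout': 8000}
```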
Here are some good steps to follow for effective A/B testing:
- Set your hypothesis via the scientific method. For example: “A smiling face in the photo will increase CTR on that page.” This sets the parameters of the test and provides a quantifiable metric (in this case, clicks) on which you can base your success.
- Know your sample size. Before anything else, determine what percentage of your target group you want to reach with the A/B test. Should it be a large chunk or a smaller, more segmented list? A test group that’s too small can’t produce a reliable, valid result; a quick back-of-the-envelope calculation (like the one sketched after this list) shows roughly how many subscribers each group needs.
- Test one variable at a time. Because we’re only testing the effect a smiling face has on CTR, everything else must remain the same: copy, formatting, subject line, even the time of send. Changing anything else would confound the test, and you couldn’t tell which change drove the result.
- Set your success metric. In this test, we’ve set the metric as “clicks.” But you can realistically use any metric tied to the result you want to achieve, e.g., conversion rate, trial downloads, etc. This is especially important because a common mistake in A/B testing is looking at multiple success metrics after the test has ended and then deciding which is the most significant; picking the metric and the significance check (see the second sketch after this list) up front keeps you from declaring a winner that’s really just noise.
- Determine split-group sizes. If you have a version that’s performed well in the past (say, a subject line with high open rates), you can weight the split toward it (75/25, or even as high as 90/10). If you begin the test without a past strong performer, send the test versions to an even 50/50 split.
- Like any other best practice, use common sense. Only test the variables you believe will have a strong impact on performance, not ones likely to yield a merely marginal improvement. Not every variable needs to be tested, but at the same time, don’t omit the ones that could be of great benefit.
- Document everything. Keeping a written record of the tests you perform can save colleagues from repeating them in the future. Publishing a blog post can also be a good way to share your findings with colleagues and educate your successors.
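To put a rough number on the sample-size question above, here’s a back-of-the-envelope Python sketch using the standard two-proportion approximation. The 3 percent baseline CTR, the one-point expected lift and the conventional 5 percent significance / 80 percent power settings are all illustrative assumptions:

```python
import math

def sample_size_per_group(baseline_ctr, expected_ctr,
                          z_alpha=1.96, z_beta=0.84):
    """Approximate subscribers needed per group to detect a CTR lift.

    Uses the standard two-proportion normal approximation with a 5%
    significance level (z_alpha = 1.96) and 80% power (z_beta = 0.84).
    """
    p1, p2 = baseline_ctr, expected_ctr
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Detecting a lift from a 3% CTR to 4% takes roughly 5,300 subscribers
# in each group; smaller lifts require dramatically larger samples.
print(sample_size_per_group(0.03, 0.04))  # 5295
```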
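And once the clicks are in, a two-proportion z-test is one common way to check whether the winner’s lift is real rather than noise; many email platforms report something equivalent. The click and send counts below are hypothetical:

```python
import math

def two_proportion_z(clicks_a, sends_a, clicks_b, sends_b):
    """z statistic for the difference in CTR between groups A and B.

    Under the usual normal approximation, |z| > 1.96 means the
    difference is significant at the 5% level.
    """
    p_a = clicks_a / sends_a
    p_b = clicks_b / sends_b
    p_pool = (clicks_a + clicks_b) / (sends_a + sends_b)  # pooled CTR
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    return (p_b - p_a) / se

# 150 clicks from 5,000 sends (A) vs. 210 clicks from 5,000 sends (B):
z = two_proportion_z(150, 5000, 210, 5000)
print(f"z = {z:.2f}")  # z = 3.22, so version B's lift is statistically real
```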
With A/B testing, you can dramatically strengthen your email marketing efforts for higher CTR and more conversions. Compare and see.