A/B Testing: One of Your Most Effective Deliverability Tools

Act-On
Email Marketing

There have been lots of discussions about A/B testing (many in this very blog), but few have captured one of its best benefits: better deliverability, and better inbox placement.

Deliverability in the modern email world is one part technical (making sure SPF, DKIM, DMARC, and whatever comes next are set up) and ten parts reputation. Reputation is the biggest factor controlling inbox placement; with a good, consistent reputation, inbox placement usually comes easily. Even if an ISP puts you on a blocklist, remediating the issue is generally fast and painless when your reputation is good. A good reputation is the thing every major (and minor) email sender should work on.

Reputation, reputation, reputation.

You need a good reputation today to keep sending email. So what does A/B testing have to do with reputation? A/B testing is a step toward engagement, and engagement is a major factor in reputation. ISPs track engagement and see it as a sign that the email is wanted. This is why email senders should segment their engaged and unengaged recipients, and send first to the segment that is more engaged. And beyond showing ISPs that the sender isn't spamming, good engagement helps the bottom line for sales emails (engagement means pipeline for sales).
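As a sketch of that segmentation step (the recipient fields and the 90-day engagement window here are assumptions for illustration, not anything prescribed by a particular ISP or platform):

```python
from datetime import datetime, timedelta

def split_by_engagement(recipients, days=90, now=None):
    """Split recipients into engaged and unengaged segments.

    Each recipient is a dict with a hypothetical 'last_open' datetime
    (None if they have never opened). Anyone who opened within the last
    `days` days counts as engaged.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=days)
    engaged, unengaged = [], []
    for r in recipients:
        last = r.get("last_open")
        (engaged if last is not None and last >= cutoff else unengaged).append(r)
    return engaged, unengaged

# Sending to the engaged segment first means ISPs see strong engagement
# early in the send, before the riskier unengaged segment goes out.
```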

What is A/B testing?

(If you know what A/B testing is, you can skip this section and go on to the next to read about its effects on deliverability and tips on how to use it for engagement. If not, here is a fast explanation.)

A/B testing is a way to determine which of two versions of something performs better. It works in all sorts of places, not just email. In a traditional A/B test, the tester randomly splits a group in two. Each group then gets a version of the same thing, altered by one factor (or one set of factors).

Traditionally, the A group gets the standard thing (the "control"), and the B group gets the slightly altered thing (the "variation"). With email, it's easy to measure your results.

Example: Send Group A an email you would normally send, with a high number of images. Send Group B the same email, but cut the number of images by one-third. If click rates are better in one group, you will know why! Modern email A/B testing has been made easier by marketing automation. It's fairly easy to set up an A/B test for a small section of your recipients, and the system will automatically send the better-performing email to the rest of the group, according to the parameters you set up.
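A minimal sketch of the mechanics described above, assuming nothing about any particular platform's API: a random 50/50 split, and a winner picked by raw click rate.

```python
import random

def ab_split(recipients, seed=None):
    """Randomly split a list into A (control) and B (variation) halves."""
    rng = random.Random(seed)
    shuffled = list(recipients)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

def click_rate(clicks, sends):
    """Clicks as a fraction of emails sent."""
    return clicks / sends if sends else 0.0

def pick_winner(clicks_a, sends_a, clicks_b, sends_b):
    """Pick the variant with the higher raw click rate (ties go to A, the control).

    Real platforms typically also require the difference to be statistically
    significant before declaring a winner.
    """
    return "A" if click_rate(clicks_a, sends_a) >= click_rate(clicks_b, sends_b) else "B"
```

A marketing automation system would then send the winning version to the remaining recipients.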

A/B testing is used in most major businesses today, but your company doesn't have to be huge to use it. If your mailing list is quite small (say, under 1,000 names), you might need to A/B test the whole group rather than a subset. But even then, you'll be learning information you can apply to your next iteration. Even small changes can lead to bigger ones.

How to use an A/B test for deliverability and inbox placement

Once again, this all goes back to reputation. Past the technical barriers that are taken care of with the help of email service providers (ESPs), reputation is the biggest factor in deliverability and inbox placement. Engagement drives the reputation numbers, so if your email lists are clean and validated, the next step is upping engagement.

There are various ways to use A/B testing for emails, and I'm going to go through a list (not exhaustive, as there are nearly endless ways to do this) with some suggestions on what to try.

Opens: gateway to a new lead.

There are only a few things we can test on opens, but they have a huge impact on total engagement numbers. If recipients never open an email, they cannot click on it, and they will never get into the pipeline. There are a few major things to test with opens, in no particular order: subject lines, From addresses, and send times.

Subject Lines

A good subject line is the reason many people open an email. Crafting one can be an art, but A/B testing can definitely help get that art down to a near-science. There are several things to try here: word choice, calls to action, length, punctuation, or personalization. Here is an example:

A: January is hard on your car, use our new buff to help keep it pristine!

B: January can be hard on [Make of vehicle]s, our new buff can help keep it pristine!

Some personalization tests like this have seen amazing results, with open rates rising by 70% and opens among previously unengaged recipients increasing by over 100%. Not every test will be that good, but constant testing will make the subject line less of an art and more of a natural skill as results inform your decisions.
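Results like those are worth sanity-checking before you roll them out. A standard two-proportion z-test (a generic statistical tool, not something specific to any email platform) can tell you whether an open-rate difference is likely real or just noise; a standard-library sketch:

```python
import math

def two_proportion_z(opens_a, sends_a, opens_b, sends_b):
    """Z-statistic for the difference between two open rates, using a pooled proportion."""
    p_a = opens_a / sends_a
    p_b = opens_b / sends_b
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    return (p_b - p_a) / se

# |z| > 1.96 corresponds to roughly 95% confidence that the two rates differ.
```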

Times Sent

This comes down to testing your audience. When are they looking at their email? You may think you know, but sometimes results can surprise you, so test it!

Will an executive be more likely to look at a webinar offer…

A: Early on a Tuesday morning?

B: On a Saturday, at around 10 am?

Every company will find its own pattern, with its own audiences. You might find that one segment of your leads responds on a different day and time than another (just one more reason to segment). Test, test, and test again. And keep testing; seasons, current events, and trends can affect these patterns. The sooner you notice, the better.

From Addresses

Different emails have different audiences, and your From addresses should match each audience. Test if the:

A: VPsalesperson@ email does better or worse than

B: salesrep@ email

There are many different combinations you can try. One caveat: always make your From address appropriate to the audience and the message. If your From address misleads customers in any way, it might boost one email, but it can lead to more spam complaints in the long run.

These tests will influence open rates, and running them consistently will help you follow trends; the audience is always changing, and redoing tests you already have data on might lead to new results over time.


Clicks: the next step in engagement

Click-through rate has nearly endless factors that can be tested, so I'm going to cover only a few big ones: text-to-image ratio, call-to-action placement, call to action in text or image, personalization, and clutter percentage.

Text-to-image ratio

This is a hard balance to achieve and maintain, and it will likely change based on the email subject and audience. Beyond the baseline of not wanting an all-image email (these can trigger spam filters, and often don't work at all on mobile), there is a general consensus that emails with a good ratio of images to text have better click-through.

On an email directed at web developers, you could test:

A: 1 image for every paragraph

B: 1.5 images per paragraph

Image-to-text ratio is very audience dependent; sending to artists will likely need a different touch than sending to salespeople. Testing can help you find the right balance for each of your audiences.
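To test a ratio, you first have to measure it. Here is a rough standard-library sketch that counts images and visible text characters in an HTML email body (a simplification for illustration: it ignores alt text, CSS background images, and image dimensions):

```python
from html.parser import HTMLParser

class RatioCounter(HTMLParser):
    """Count <img> tags and visible text characters in an HTML email body."""
    def __init__(self):
        super().__init__()
        self.images = 0
        self.text_chars = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.images += 1

    def handle_data(self, data):
        self.text_chars += len(data.strip())

def text_and_images(html):
    """Return (visible_text_chars, image_count) for an HTML snippet."""
    counter = RatioCounter()
    counter.feed(html)
    return counter.text_chars, counter.images
```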

Call-to-action placement

This placement is very important and can significantly impact an email's click-through performance. Too early in an email, and the reader may not be engaged enough yet to click (you could be seen as pushy). Too late, and the reader might have already moved on to the next thing. Once again, it comes down to whom you are sending to. Is a "register for this webinar" call to action better:

A: After the first line on the subject of the webinar, or

B: After a paragraph citing the goals and author’s credentials

Similarly, adding the same link twice in a longer email (try one in the middle as a trial close, and one closer to the end) might result in better click-through rates.

Call to action in picture or text

I may sound like a broken record (the title of this post should perhaps be, "Know your audience and test for them!"), but testing whether your call to action is an image or text can be very important. If one-third of your audience views your emails with images blocked, an image-based call won't get clicked because they'll never see it! Different groups might have different filters, and some email clients (e.g., Outlook) block all images by default. Image calls to action tend to catch the eye and get more clicks, but it's important to test which works for each group you send to.


Personalization

Every email recipient knows (or should know) by now that personalization is done automatically (Hi, Jane!), but it still adds a touch that increases open rates. But how much is too much? Some email marketers have found that only two personalizations are acceptable to their audiences, whereas others seem to have no limit. You can collect recipients' actions and behaviors in your marketing automation activity histories; the more data you have on someone, the more you can personalize an email, until they wonder if it was written specifically for them (this is a good thing). All of this makes it more likely a person will click through and complete whatever action you're asking for.


Clutter percentage

Other deliverability specialists have other words for it, but what I mean by "clutter" is how clean and simple, or conversely how chaotic, the email is. Does it have images on both sides of the text? Does the text flow in a straight line, or does it have one block on the left and the next on the right? Some emails are better with more going on, if it suits the topic and audience; others need to be clean and simple. Test!

If you take anything away from this post, let it be this: you should A/B test something (really, almost anything)! For those still reading, the big takeaways are:

  • Testing yields information about an audience, so segmenting your audiences is helpful.
  • Test especially when your inbox placement is suffering from spam issues; better engagement means better reputation, which means better inbox placement.
  • Technical issues are only a portion of the email game; the rest is up to the sender, and testing makes the sender better.