How to use A/B testing effectively
January 18, 2010
Hello to all.
It’s been a while since we posted, but among other things, we’ve resolved to post more frequently in this new decade. And to kick off this new effort we’re going to address a subject much discussed in the email world: A/B split testing.
First, a definition of terms: in its basic form, an A/B split test pits two versions of an email against each other within a given campaign, each sent to a small percentage of a list. Having monitored responses to each, you then send the more effective of the two to the remainder of the list. It is important to remember that the two competing versions are sent to mutually exclusive segments of the list; that is, test recipients receive either version [A] or version [B], but not both. The purpose and great power of an A/B split test lies in the ability to determine how your users are likely to respond to an email before you've sent it to the vast majority of them. Of course, coming up with two versions of a single campaign also puts your basic assumptions about who your users are, and how they will respond to a given message, to the test, making it a teaching tool as well.
Our system has a default setting of 10% for each of the A/B segments, which means that 80% of the list is withheld. So, under the default settings, the winning email can be sent to 90% (10% test + 80% final) of the list (unless the final version of the email is a hybridized third version… So many options!). You can specify any number of recipients for your tests; just remember that the test segments should be large enough for the results to be meaningful, and small enough that the vast majority of the list receives the most effective version of your email.
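The split described above is easy to picture in code. This is not how our system does it internally; it's just a minimal sketch, assuming a simple list of addresses, that shows the default 10% / 10% / 80% arithmetic:

```python
import random

def ab_split(recipients, test_fraction=0.10, seed=42):
    """Split a mailing list into two test segments and a holdout.

    With the default 10% per segment, 80% of the list is withheld;
    the winner ultimately reaches 90% of the list (its own 10% test
    segment plus the 80% holdout).
    """
    pool = list(recipients)
    random.Random(seed).shuffle(pool)  # randomize so segments are comparable
    n_test = int(len(pool) * test_fraction)
    segment_a = pool[:n_test]
    segment_b = pool[n_test:2 * n_test]
    holdout = pool[2 * n_test:]        # receives the winning version later
    return segment_a, segment_b, holdout

emails = [f"user{i}@example.com" for i in range(1000)]
a, b, rest = ab_split(emails)
print(len(a), len(b), len(rest))  # 100 100 800
```

Shuffling before slicing matters: if the list is sorted by signup date or activity, plain slices would give you systematically different audiences in A and B.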
A 50:50 A/B test is not really an A/B test
We sometimes get requests to run A/B tests on a different proportion of a given list. The system lets you choose any fraction of your list that you specify, and quite often we're asked to run a split of 50% and 50%. As I said before, the system will let you do this, but just know that doing so defeats a central purpose of the A/B test. After all, once you've run your test on 100% of the list, it's too late to use any of the knowledge gained! And even if you were to send the same email a second time, you'd be sending under new conditions, to users at least half of whom had already received the message.
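This is also why small test segments are usually enough: with segments of a reasonable size, a real difference in response rates will show up as statistically significant long before you've burned the whole list. As a rough illustration (not part of our product, just a standard two-proportion z-test on hypothetical open counts):

```python
import math

def two_proportion_z(opens_a, n_a, opens_b, n_b):
    """Two-sided z-test for a difference in open rates between the
    A and B test segments, using the normal approximation."""
    p_a, p_b = opens_a / n_a, opens_b / n_b
    p = (opens_a + opens_b) / (n_a + n_b)            # pooled open rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: 10% segments of a 10,000-address list, 1,000 recipients each.
# Version A opened by 180 recipients, version B by 140.
z, p = two_proportion_z(180, 1000, 140, 1000)
print(round(z, 2), round(p, 3))  # 2.44 0.015
```

Here an 18% vs. 14% open rate is already significant at the conventional 0.05 level with only 10% of the list tested per version, so there's nothing to gain from a 50/50 split.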
For detailed instructions on how to perform an A/B split visit: https://docs.sailthru.com/ab_split