The Email Marketing Testing Paradox
I’m an email testing aficionado, and some might even say I’m data-obsessed. I spent years in ecommerce marketing poring over numbers and running experiments that produced exciting results for retailers. I also attend tons of industry conferences, and at every one, each speaker has at least one slide that speaks to the importance of testing.
Based on the many presentations I’ve seen, I assumed that most email marketers had begun to adopt the advanced testing mindset I’ve subscribed to for years. Imagine my shock when I stumbled across this recent data from eMarketer!
The vast majority of marketers who do say they are testing are running subject line, creative and CTA tests. Why is that shocking to me? While those elements are important, they’re not really that powerful, and tested without a systematic approach they have little impact on revenue.
It fascinates me that in 2015 digital marketers still have to convince each other that real testing is important! Why is it that we still need to convince ourselves that decisions should be driven by data and metrics and not gut feeling? Why are so many marketers afraid of testing? What is it that keeps them from implementing a testing plan?
I spoke to a few other vendors and representatives of marketing agencies while attending the MarketingSherpa Email Summit recently and they all shared the same frustrations. The consensus is that we’re all having a hard time convincing clients and fellow marketers about the importance of testing outside of things like the subject line or CTA colors.
The new way to look at subject line tests
Now, I understand that subject line testing is a part of marketing and isn’t going away any time soon. But we can be doing much, much better even with a subject line test – or a creative or CTA test!
Let me give you an example – you’re sending a campaign email and testing two subject lines: (A) Introducing Luxe Silk Pants vs. (B) The most elegant fabric in New Silhouettes. Version A ends up the winner. Great job. But what does that tell you? What insights can you take away? Pretty much nothing.
Instead, I would start by putting together a few hypotheses to test with the plan you put in place. And the key is that these tests are not one-offs, but rather examined over time.
Example hypotheses could include:
Shorter subject lines outperform longer subject lines
Uppercase subject lines outperform lowercase subject lines
A neutral tone in the subject line outperforms creative, descriptive subject lines
Personalized subject lines (with name field insertion) outperform non-personalized subject lines
Subject lines that include product names outperform subject lines that don’t
Once you’ve gone through all these hypotheses (and combinations thereof), you will end up with a pretty good understanding of which subject lines work best for your brand, and going forward you won’t have to spend hours brainstorming and debating over which subject lines to use for your A/B test. The data is already there.
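To make the “examined over time” idea concrete, here’s a minimal sketch of how you might judge one of these hypotheses statistically. The numbers are entirely hypothetical, and a two-proportion z-test is just one reasonable way to check whether an open-rate difference pooled across campaigns is real or noise:

```python
import math

def two_proportion_z(opens_a, sends_a, opens_b, sends_b):
    """Two-proportion z-test: is variant A's open rate
    significantly different from variant B's?"""
    p_a = opens_a / sends_a
    p_b = opens_b / sends_b
    p_pool = (opens_a + opens_b) / (sends_a + sends_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    return (p_a - p_b) / se

# Hypothetical totals for the "shorter subject lines win" hypothesis,
# accumulated over several campaigns rather than a single send.
z = two_proportion_z(opens_a=2450, sends_a=10000,   # short subject lines
                     opens_b=2210, sends_b=10000)   # long subject lines
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the 95% level
```

The point isn’t the statistics per se, but that each campaign’s result feeds a running hypothesis rather than being thrown away after the send.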
But the real deal is in serious longitudinal testing
These more focused tactical tests are a step in the right direction, but the tests that really excite me are the longitudinal tests–a method in which data is gathered for the same subjects repeatedly over a period of time–that some of our clients are working on. Admittedly, they are trickier to implement, they take longer to run, and monitoring results is not a one-time thing, but the insights you’ll gather here are a lot more impactful, exciting and downright amazing at times.
Consider this particular example. You’re a marketer at a luxury retailer–imagine a Rebecca Minkoff or Marc By Marc Jacobs–selling leather goods for $300+. Your average customer purchases one item per year. How do you plan to communicate with a user who just bought one of your bags, knowing that it is very unlikely she’ll make her next purchase a couple of months down the line? Do you keep showing her your product catalog over and over again, or do you provide her with valuable content: how to take care of the product, which celebrities have been spotted with the same bag she just bought? What’s the right number of emails this customer should receive following the purchase? The only way to find out which approach is most effective at turning more buyers into loyal, returning customers, and doing that faster, is with a longitudinal test set up at the user level.
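As a rough illustration (not a description of any client’s actual setup), a user-level longitudinal test might be sketched like this: each new buyer is deterministically assigned to one post-purchase content track for the whole measurement window, and the metric compared across tracks is the repeat-buyer rate. The track names and the 365-day window are hypothetical:

```python
import hashlib
from datetime import timedelta

# Hypothetical arm names for the post-purchase experiment.
TRACKS = ["catalog_only", "care_and_lifestyle_content"]

def assign_track(user_id: str) -> str:
    """Deterministic user-level assignment: the same buyer always
    lands in the same arm for the whole measurement window."""
    digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return TRACKS[digest % len(TRACKS)]

def repeat_buyer_rate(purchase_dates: dict, window_days: int = 365) -> float:
    """Share of buyers who purchased again within the window.
    purchase_dates maps user id -> sorted list of purchase dates."""
    repeats = sum(
        1 for dates in purchase_dates.values()
        if len(dates) > 1
        and (dates[1] - dates[0]) <= timedelta(days=window_days)
    )
    return repeats / len(purchase_dates) if purchase_dates else 0.0

def rates_by_track(purchase_dates: dict) -> dict:
    """Group buyers by assigned track, then compare repeat rates."""
    grouped = {t: {} for t in TRACKS}
    for user, dates in purchase_dates.items():
        grouped[assign_track(user)][user] = dates
    return {t: repeat_buyer_rate(g) for t, g in grouped.items()}
```

The deterministic assignment matters: because the arm is a function of the user, every email she receives over the following year stays consistent with her track, which is what makes the test longitudinal rather than a series of one-off sends.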
Imagine how much more exciting it would be to answer questions like that in your testing, compared to the daily subject line testing you’re conducting, and how much more impactful to say you’ve increased your return buyer rate by 2% versus reporting a 150% increase in open rates. The 2% increase translates to sizable revenue, while the 150% means more people opened your email (and not necessarily much else).
Curious about what types of longitudinal testing we’re doing for our clients here at Sailthru? Feel free to ask any questions and share your own stories in the comments!
—Marielle Hanke, Sr. Manager of Analytics and Optimization at Sailthru