The “always be testing” bug runs deep here at Marigold Engage by Sailthru, but our knack for experimentation extends far beyond testing images against text or personalized subject lines against generic ones. More specifically, we are major proponents of customer-centric longitudinal studies in addition to one-off A/B tests.

Today’s Conversions vs. Tomorrow’s Value

Whereas an ad hoc A/B test might focus on driving incremental revenue through offer testing (e.g., free shipping vs. $20 off vs. 10% off in a welcome email to see which offer yields the strongest revenue per send), cohort studies are designed to look at the impact of different treatments over time.

Let’s build on that example: seeking to optimize its welcome stream for incremental conversions, a retailer tests a 10% discount offer to new subscribers at various points in the first 14 days. Not surprisingly, the promotional offer lifts gross conversions (Marketing 101: promotion moves product!). Digging one level deeper into the numbers, the retailer notes that the offer also increased average order value (AOV), likely due to classic stockpiling effects. Sounds like we’ve found a winner in the promotion takers, right?

No, we have not. Instead, we find ourselves with several follow-on questions around the downstream impact of this promotion – namely, did the early discount offer train the customer to buy on promotion and erode downstream lifetime value?

Dissecting the Surface Metrics

Perhaps this marketer is particularly in tune with the numbers and revisits downstream performance several months after the initial welcome stream, noting that AOV remains higher for those who initially converted on discount. With this new data point, it seems as though we have finally identified a clear winner in the discount cell, have we not?

Unfortunately, we still have not. Consider the chart below, which also takes into account the purchase frequency of the two cohorts over two years. Although the discount converters’ transactions were 3% to 5.4% more valuable than those of full-price customers, they purchased 7.2% less frequently (again, possibly a function of stockpiling), so they netted out 3.3% less valuable than those with the seemingly more expensive carts.

[Chart: long_game.png — two-year value of discount vs. full-price cohorts]
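
To make that netting-out concrete, here is a minimal sketch of the arithmetic, assuming two-year customer value is simply AOV multiplied by purchase frequency (our simplification; the actual chart may weight things differently). The lift and frequency figures are the ones quoted above.

```python
# Net value change = (1 + AOV lift) x (1 + frequency change) - 1.
# The 3%-5.4% AOV lifts and the -7.2% frequency are the cohort figures above.

def net_value_change(aov_lift: float, frequency_change: float) -> float:
    """Relative change in two-year customer value vs. the full-price cohort."""
    return (1 + aov_lift) * (1 + frequency_change) - 1

for aov_lift in (0.03, 0.042, 0.054):
    delta = net_value_change(aov_lift, -0.072)
    print(f"AOV +{aov_lift:.1%} -> net value {delta:+.1%}")

# Prints roughly -4.4% to -2.2%; at the midpoint (~+4.2% AOV) the discount
# cohort nets out about 3.3% less valuable, matching the figure above.
```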

Even with this revelation, though, it’s important to revisit the QQQ: the quantity/quality quandary. Are there enough customers at that higher two-year value to keep gross revenue compelling? If the number of buyers falls considerably without that promotional incentive, the marketer will need to think twice.
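
One way to frame the QQQ is as a break-even check: gross revenue is the number of buyers times value per buyer, so a cohort that nets out 3.3% less per buyer must attract proportionally more buyers to keep pace. The buyer counts and dollar values in this sketch are entirely hypothetical; only the 3.3% figure comes from the example above.

```python
# Hypothetical QQQ check: gross revenue = number of buyers x value per buyer.
# All counts and dollar values below are invented for illustration.
cohorts = {
    "full_price": {"buyers": 10_000, "two_year_value": 100.0},
    "discount":   {"buyers": 11_000, "two_year_value": 96.7},  # ~3.3% less per buyer
}

for name, c in cohorts.items():
    print(f"{name}: ${c['buyers'] * c['two_year_value']:,.0f} gross")

# Break-even: the discount cell needs about 1 / (1 - 0.033) - 1, i.e. ~3.4%
# more buyers, just to match full-price gross revenue; fewer than that and
# the full-price cohort wins on both quality and quantity.
```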

Short-Term Experiments vs. Long-Term Learning

Recently, many Sailthru clients have leveraged cohort-based tests to assess the impact of email frequency on subscriber opt-out rates. More specifically, they use customer-level variables to define two groups: one that receives emails daily for the first 60 days and another that receives only three messages per week. From there, they can compare 60-day opt-out rates and other engagement metrics to understand the potential impact of tweaking frequency.
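
Here is a minimal sketch of that comparison, assuming per-customer records with cohort and opted_out fields (our invention for illustration, not Sailthru's actual schema):

```python
from collections import defaultdict

def opt_out_rates(records: list[dict]) -> dict[str, float]:
    """60-day opt-out rate per cohort, from per-customer records."""
    totals: dict[str, int] = defaultdict(int)
    outs: dict[str, int] = defaultdict(int)
    for r in records:
        totals[r["cohort"]] += 1
        outs[r["cohort"]] += int(r["opted_out"])
    return {cohort: outs[cohort] / totals[cohort] for cohort in totals}

# Illustrative records only; a real test would pull these from your ESP.
records = [
    {"cohort": "daily", "opted_out": True},
    {"cohort": "daily", "opted_out": False},
    {"cohort": "three_per_week", "opted_out": False},
    {"cohort": "three_per_week", "opted_out": False},
]
print(opt_out_rates(records))  # {'daily': 0.5, 'three_per_week': 0.0}
```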

Campaign-centric testing – tweaking elements such as subject lines and calls to action – can certainly be valuable for driving incremental ROI, but it’s mission-critical to rethink how you structure your tests. Rather than running separate A/B tests on your welcome email, your day-2 email, your day-7 email, and so on, should you be developing welcome series A vs. B and ensuring that one cohort receives ALL A cells and the other ALL B’s (read: cutting the test groups at the customer level vs. the campaign level)? Probably so.
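
One way to implement that customer-level cut, sketched here under our own assumptions rather than as Sailthru's implementation, is to hash the customer ID once per test so that every email in the series reuses the same assignment:

```python
import hashlib

def series_variant(customer_id: str, test_name: str = "welcome_series") -> str:
    """Assign A or B once per customer per test; stable across every send."""
    digest = hashlib.sha256(f"{test_name}:{customer_id}".encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"

# The same subscriber draws the same cell for every email in the series,
# so the test is cut at the customer level rather than the campaign level.
for email in ("welcome", "day_2", "day_7"):
    print(f"{email}: send template {series_variant('cust_123')}")
```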

So while you should always be testing, make sure you’re playing the long game while you’re at it: revisit your results regularly to confirm that optimizing for near-term conversions or engagement does not come at the expense of long-term customer value. This is the difference between driving quick wins and driving sustainable lifetime value.

— Cassie Lancellotti-Young, EVP of Client Success