A/B testing is straightforward: you split your prospects, see which group converts best, and you have your answer. It's easy, right? WRONG! In this article we explore some of the hidden challenges that make reaching the real answer with A/B testing more complex in a subscription environment. The value of a subscription product lies not just in the initial purchase but in the ongoing loyalty of the customers. A change that increases initial sign-ups but also increases churn, for example, might not be a benefit overall. Below we present an example of an innocuous-looking A/B test that would take a year to evaluate properly.
A/B testing
A/B testing is a fabulous tool for increasing sales (growth hacking) and even improving your product (lean startup). Web data contains a mass of information about how your customers use your website and your product. An A/B test simply divides your customers into two groups, A and B. Each group gets a different experience, and this allows you to determine which of the two is more successful, e.g. which results in the most sales.
A/B testing takes the guesswork out of designing websites and products because it allows you to systematically test alternatives. Should the “Buy” button be orange or green? We don’t have to guess; we can find out. The approach can uncover some very unexpected conclusions that marketers would be unlikely to reach on their own.
With a standalone purchase, A/B testing is relatively straightforward: if a change leads to more sales, it's good. With subscription sales the equation is more complicated. The value of a subscription customer depends not just on the first purchase but on their ongoing loyalty. A change that increases initial sign-ups but also increases churn might look good at first but ultimately prove to be a bad decision.
Subscription Example
For example, let us consider an A/B test that would need at least a year to evaluate properly. Imagine a website that sells a subscription product with the option of either annual or monthly billing terms. Which should be the default? Obviously, this is something we should be able to A/B test. So we run the test: in variant A, monthly billing is the default; in variant B, it's annual. Here are the results:
| Variant | Signups | Annual | Monthly |
|---------|---------|--------|---------|
| A       | 100     | 30%    | 70%     |
| B       | 90      | 50%    | 50%     |
How do we know which one is better? Suppose the subscription price is $10 per month or $100 per year. At the level of Monthly Recurring Revenue (MRR), variant A looks better: the MRR per customer has increased ($9.50 vs. $9.20), and since we also have more signups, the total MRR from group A is quite a bit greater than from group B ($950 vs. $825).
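The MRR comparison above can be sketched in a few lines of Python. This is a minimal illustration using the prices and signup splits from the example; an annual subscription is counted as $100/12 of MRR per month:

```python
MONTHLY_PRICE = 10.0   # $10 per month
ANNUAL_PRICE = 100.0   # $100 per year -> $100/12 of MRR per month

def mrr(signups, annual_share):
    """Total monthly recurring revenue for one test variant."""
    annual = signups * annual_share
    monthly = signups - annual
    return annual * (ANNUAL_PRICE / 12) + monthly * MONTHLY_PRICE

mrr_a = mrr(100, 0.30)  # variant A: monthly default
mrr_b = mrr(90, 0.50)   # variant B: annual default

print(round(mrr_a))  # 950
print(round(mrr_b))  # 825
```

On an MRR basis, variant A wins on both total revenue and revenue per customer (950/100 = $9.50 vs. 825/90 ≈ $9.20).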
However, as we know (and discussed in the article Weighing up the true value of a customer), over the customer's lifetime annual subscriptions are usually worth more than monthlies, because customers commit for longer periods of time. So instead of MRR, let us consider Customer Lifetime Revenue (CLR). If our monthly customers churn at 5% per month and our annual customers at 20% per year, then the expected lifetime is 1/0.05 = 20 months for a monthly customer ($200 of CLR) and 1/0.20 = 5 years for an annual customer ($500 of CLR). Overall, variant B becomes the winner, with a total CLR of $31,500 for population B versus $29,000 for population A.
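The CLR figures follow from a simple geometric churn model, in which the expected customer lifetime is the reciprocal of the churn rate per billing period. A sketch, again using the example's prices and churn rates:

```python
MONTHLY_PRICE = 10.0
ANNUAL_PRICE = 100.0

def clr(signups, annual_share, monthly_churn=0.05, annual_churn=0.20):
    """Total customer lifetime revenue for one test variant,
    assuming expected lifetime = 1 / churn rate per period."""
    annual = signups * annual_share
    monthly = signups - annual
    clr_annual = ANNUAL_PRICE / annual_churn    # $100 * 5 years  = $500
    clr_monthly = MONTHLY_PRICE / monthly_churn  # $10 * 20 months = $200
    return annual * clr_annual + monthly * clr_monthly

print(round(clr(100, 0.30)))  # 29000  (variant A)
print(round(clr(90, 0.50)))   # 31500  (variant B)
```

On a lifetime basis, the larger per-customer value of the annual plan more than compensates for variant B's lower signup count.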
Well, so far, so good. We got the results; we had to crunch the numbers a little, but we have our answer… Well, actually, no. In getting to this answer we made a key assumption that may not be valid: we assumed that both populations would churn at the same rates as our historical data. In fact, variant B probably contains a number of people who bought an annual subscription by mistake and who are therefore more likely to churn at the end of their first year than the customers in variant A, who had to deliberately choose the annual. Suppose, for example, that the annual churn rate of variant B was actually 10 percentage points higher than we previously supposed (30% rather than 20%). In that case, the total CLR of B drops to $24,000 and variant A turns out to be the winner after all. Unfortunately, we will have to wait a full year from when we started the test to measure the annual churn.
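This sensitivity can be checked with the same lifetime-revenue arithmetic. The sketch below repeats the CLR calculation but lets the annual churn rate vary per variant, raising variant B's from 20% to 30% to model the accidental annual purchasers:

```python
MONTHLY_PRICE = 10.0
ANNUAL_PRICE = 100.0

def total_clr(signups, annual_share, annual_churn, monthly_churn=0.05):
    """Total CLR with a variant-specific annual churn rate."""
    annual = signups * annual_share
    monthly = signups - annual
    return (annual * ANNUAL_PRICE / annual_churn
            + monthly * MONTHLY_PRICE / monthly_churn)

clr_a = total_clr(100, 0.30, annual_churn=0.20)  # historical churn
clr_b = total_clr(90, 0.50, annual_churn=0.30)   # 10 points higher

print(round(clr_a))  # 29000 -> variant A wins after all
print(round(clr_b))  # 24000
```

A 10-point shift in one churn assumption is enough to flip the verdict, which is exactly why the test cannot be called until the first annual renewals have actually been observed.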
Conclusion
A/B testing is very valuable for evaluating changes to websites and web products. But the value of a subscription purchase is extracted over time, and the value of a customer depends not just on their initial purchase but also on the probability that they will cancel or upgrade/downgrade their plan. The same changes that affect signups can also affect the distribution of those probabilities in unpredictable ways. As a result, collecting and evaluating A/B test results in a subscription environment can take much longer and be far more complicated than with simple standalone purchases.