Run A Cohort Analysis, Not A Split Test
There are plenty of ways to make people jump through a hoop, but that doesn’t make a difference if they’ve stopped dancing a few minutes later.
This is the problem with using split tests within a community.
You can increase your conversion rates by sharpening the web copy or offering bonuses to people who register or participate.
But it's only what happens over the long term that really matters.
Consider this graph from a client below:
But by week 18, this has fallen to a THIRD of cohort 3's rate.
The big win here isn’t stopping the big drop-off at the beginning. You can do that with an array of one-off novelty ideas. Big drop-offs happen in every online community.
The big win is stopping the drop-off after around week 16 (measured from the beginning of the 12-week cohort). There's no single tweak that fixes this. Instead, you need to look holistically at the experience and make sure it's a very fun or very relevant place to visit.
- When people visit, are there always featured discussions taking place?
- Are members @mentioned and included in discussions in a positive way?
- Are there questions which relative newcomers feel informed enough to answer?
- Are relative newcomers encouraged to ask their own questions, and do they feel safe doing so?
Looking at first registration or first participation metrics might seem like a smart move, but I'd focus on the post-participation experience. Turning a 0.7% into a 2.3% here (like we see above) has a HUGE payoff over the long term.
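The cohort comparison above boils down to one calculation: for each cohort, what fraction of its members are still active in week 1, week 2, and so on. A minimal sketch of that calculation, using a hypothetical activity log (the member names, cohort labels, and data below are illustrative, not from the client graph):

```python
from collections import defaultdict

# Hypothetical activity log: (member_id, week_number) pairs, where week 0
# is the week the member joined the community.
activity = [
    ("a", 0), ("a", 1), ("a", 2),
    ("b", 0), ("b", 1),
    ("c", 0),
]

# Each member belongs to a cohort (e.g. the month they registered).
cohort_of = {"a": "jan", "b": "jan", "c": "feb"}

def weekly_retention(activity, cohort_of):
    """Fraction of each cohort still active in each week since joining."""
    members = defaultdict(set)   # cohort -> all members in that cohort
    active = defaultdict(set)    # (cohort, week) -> members active that week
    for member, week in activity:
        cohort = cohort_of[member]
        members[cohort].add(member)
        active[(cohort, week)].add(member)
    return {
        (cohort, week): len(who) / len(members[cohort])
        for (cohort, week), who in active.items()
    }

retention = weekly_retention(activity, cohort_of)
print(retention[("jan", 2)])  # 0.5: one of the two January members was active in week 2
```

Plotting each cohort's retention curve on one chart, as in the graph above, is what makes the week-16 drop-off visible in a way a single conversion rate never could.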