If you manage a large, customer-service-based community, calculating the ROI is a challenge.
The common approaches are to measure the number of visitors, questions asked, questions answered, or the reduction in call volume. Each has its problems.
- Tracking the number of visitors. This is the worst metric: it counts every possible visitor regardless of quality, whether they asked a question, or whether their problem was resolved. It even includes people searching for something else entirely.
- Tracking the # questions asked. This doesn’t track whether the question was answered. The same question might be asked repeatedly until it gets an answer; each repetition would add to the return (when it should reduce it).
- Tracking the # questions answered. This doesn’t track whether the problem was resolved. The answer might be wrong. A member might ask a question, receive an answer, and then call the customer service line to check, which becomes an extra expense.
- Tracking the reduced # calls received. This doesn’t track community attribution. When WidgetCo releases a new product, a lot of people will have questions about it. Over time those questions will be answered and fewer people will need to ask them (call volume naturally declines for new products). It’s hard to attribute this decline to the community (even with a strong correlation).
Using a survey to determine problem resolution.
We instead need a system that tracks a) if the problem was resolved b) if it was resolved satisfactorily, and c) if it led to a reduction in calls to the customer service team.
Google uses a simple survey to ask whether a visitor’s problem was resolved (collecting e-mail addresses to avoid duplicate responses). They then multiply the % of respondents whose problem was resolved by the number of visitors. This provides a call-deflection number.
However, the response rate here is low (single digits). It’s also prone to non-response bias: those whose problem was resolved are more likely to click the link and complete the survey. You can’t generalize a small, self-selected sample across the entire community.
Using quotas to generalize across the community.
We need to know the demographics of the audience (you can also include habits and psychographics if you have the resources).
Then determine the breakdown of the audience’s total composition. Most organizations have this information in their CRM system. If you don’t, run an incentivized poll of a large, random sample of members.
This should tell you, for example, that your audience is 57% male and 43% female, with further age-related segments. By the end you should have ten or more segments, each with its percentage share of the audience.
Now you can survey your web visitors (even those who don’t ask questions may be receiving help) to ask whether their problem was resolved, and use this quota system to ensure the responses reflect the overall breakdown of the community.
For example, 77% of responses might come from males aged 30 to 45. If this segment comprises only 27% of your overall audience, however, weight the results so it accounts for just 27% of the responses.
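As a sketch of that weighting step (the segment names, response counts, and shares below are hypothetical), the quota adjustment can be computed like this:

```python
from collections import defaultdict

def quota_weighted_rate(responses, population_shares):
    """Weight per-segment resolution rates by each segment's share of
    the overall audience, rather than its share of the responses."""
    totals = defaultdict(int)
    resolved = defaultdict(int)
    for segment, was_resolved in responses:
        totals[segment] += 1
        resolved[segment] += was_resolved
    # Weighted average: each segment's resolution rate times its audience share.
    return sum(
        population_shares[seg] * resolved[seg] / totals[seg]
        for seg in totals
    )

# Hypothetical data: males aged 30-45 dominate the responses (77 of 100)
# but make up only 27% of the audience.
responses = (
    [("m30_45", 1)] * 70 + [("m30_45", 0)] * 7
    + [("other", 1)] * 11 + [("other", 0)] * 12
)
shares = {"m30_45": 0.27, "other": 0.73}

raw_rate = sum(r for _, r in responses) / len(responses)
weighted_rate = quota_weighted_rate(responses, shares)
print(round(raw_rate, 3), round(weighted_rate, 3))  # 0.81 vs 0.595
```

Note how the unweighted 81% resolution rate drops to roughly 59.5% once the over-represented segment is weighted down to its true audience share.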
Did it lead to less calls to the customer-service line?
Resolution alone doesn’t show whether the community resolved the problem as well as the customer service team would have. Nor does it show whether the member would have called customer support anyway (or simply lived with the problem).
Therefore, we need to ask three important questions.
1) Was your problem resolved?
2) On a scale of 1 to 5, how satisfied are you with the resolution?*
3) If the problem was not resolved, would you have called customer support?
*Question two can be replaced with any comparative customer service question (e.g. how happy are you with the customer service? How likely are you to recommend us?).
Understanding total return
Now we can make a rough calculation of the return on the customer community.
Step 1) Determine the cost per call of traditional customer-service line.
Divide the total cost of your customer service efforts by the number of calls to get the cost per call of your traditional customer service line. Let’s assume this is $1.75.
Step 2) Measure % of web visitors who didn’t call because of the community.
Multiply the % of visitors whose problem was resolved (survey question 1) by the % who would otherwise have called the customer service line (survey question 3). Let’s assume 87% had their problem resolved, and 73% would have called customer service. This tells you that 64% (0.87 × 0.73) of web visitors who would have called customer service had their problem resolved through the community instead.
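In code, this step is a single multiplication (using the example figures above):

```python
resolved_rate = 0.87       # survey question 1: problem resolved
would_have_called = 0.73   # survey question 3: would otherwise have called
deflection_rate = resolved_rate * would_have_called
print(f"{deflection_rate:.0%}")  # 64% of visitors deflected from the phone line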
Step 3) Determine the number of calls deflected because of the community.
Multiply the % of call reduction above by the total number of unique visitors to the community. If your community has received 740,000 unique visitors, you can estimate that 473,600 (64%) of them had their problem resolved through the community instead of calling.
Step 4) Determine the value of calls deflected because of the community.
Multiply the figure above (473,600) by the cost per call in step 1 ($1.75). This tells you the total return in value terms ($828,800).
Step 5) Compare the satisfaction rate from each method.
If the customer service team scores an average of 4.2 out of 5 (or using any comparative method, e.g. NPS or future purchase intentions) and the community scores 3.7, divide 3.7 by 4.2. This gives 88.1%: the community resolves problems 88.1% as well as the customer service line does.
Step 6) Multiply the total return in $ value by the comparative satisfaction rate.
Now we multiply the value of the calls deflected by the comparative satisfaction rate. In this case, $828,800 × 88.1% gives our final figure of $730,172. We can then divide this by the investment in the community (platform, staff, overheads) to determine the ROI.
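Putting steps 1 to 6 together (the investment figure is a hypothetical placeholder; everything else uses the example numbers):

```python
# Step 1: cost per call of the traditional line.
cost_per_call = 1.75
# Step 2: share of visitors deflected (87% resolved x 73% would have called, rounded).
deflection_rate = 0.64
# Step 3: calls deflected.
visitors = 740_000
calls_deflected = visitors * deflection_rate        # 473,600
# Step 4: gross value of deflected calls.
gross_value = calls_deflected * cost_per_call       # $828,800
# Step 5: comparative satisfaction (community vs. phone line).
quality_ratio = round(3.7 / 4.2, 3)                 # 0.881
# Step 6: quality-adjusted value.
net_value = gross_value * quality_ratio             # ~$730,173
# Finally, compare against the investment (hypothetical figure).
investment = 600_000
roi = net_value / investment - 1
print(f"net value ${net_value:,.0f}, ROI {roi:.1%}")
```

At roughly 21.7% under this hypothetical $600,000 investment, the community would clear the 15% to 25% threshold discussed next.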
To justify the community, we want a return that covers the costs (a positive %), beats putting the money in a bank (about 5%), and beats the opportunity cost (another 5% to 10%). This usually requires something in the region of 15% to 25%.
A few notes
1) This isn’t a perfect system. It’s a rough idea. It relies upon samples to reflect the overall community. Developing the right quotas and collecting enough data for significance is the hard part.
2) It doesn’t track other benefits: possible SEO improvements, increased lead conversion, shorter sales cycles, improved team morale and beneficial internal changes, product feedback, etc.
3) It doesn’t track ‘real’ reduction. It doesn’t actually track if the customer service expense was reduced as a result of the community. It tracks the theoretical reduction. A manager might be reluctant to let staff go.
4) ROI increases over time. A community incurs significant initial costs which negatively skew the ROI until the return has been established.
The goal of this post has been to highlight a (simplified) process for measuring the ROI of customer service communities. At present, too many biases (notably non-response and selection biases) creep into measurement. More community professionals need training in using data to avoid these biases. I hope this post helps.
If you want to discuss advanced topics in more detail, sign up to www.communitygeek.com.