Questions like “was this answer helpful?” or “did this answer solve your problem?” are a powerful means of evaluating the success of many community programs.
Most platforms support at least some form of this.

It’s also one of the easiest metrics to track. For example, if you invest more in your superuser program, you should expect a greater percentage of answers to be marked helpful. If you decide to pay people to answer questions, you can see whether that investment is having the desired impact.
But this metric can also cause problems.
Some organisations use the same question to compare phone support, online tickets, knowledge base articles, and the community. This isn’t a bad idea, as long as the numbers are taken in context.
Most of the easiest questions can be solved through documentation and knowledge base articles, so we should expect those channels to score well. Phone and ticket support offer one-to-one contact with a paid professional, so these should score well too.
A community, however, often gets the questions other channels can’t answer, and visitors are frequently already frustrated by the time they arrive. A community will thus naturally score below other channels.
But this comparison overlooks three important things.
1) Volume. What % of questions is the community solving and thus ensuring people don’t need to visit any other channel?
2) Cost. If a community solve has a 15% lower satisfaction rate at 50% of the cost, that’s a huge win.
3) Other benefits. The community offers benefits in retention, loyalty, advocacy, feedback, research, and more which won’t show up in these metrics.
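The cost point above can be sketched with some back-of-the-envelope arithmetic. All the figures below are illustrative assumptions, not real benchmarks; the point is that cost per *satisfied* resolution can favour the community even when raw satisfaction is lower:

```python
# Illustrative comparison of cost per satisfied resolution across
# support channels. Every number here is a hypothetical assumption.

channels = {
    # name: (satisfaction rate, cost per resolution in $)
    "phone":     (0.90, 12.00),
    "tickets":   (0.85, 8.00),
    # ~15 points lower satisfaction than phone, ~50% of ticket cost
    "community": (0.75, 4.00),
}

for name, (sat_rate, cost) in channels.items():
    # What each *satisfied* customer actually costs on this channel.
    cost_per_satisfied = cost / sat_rate
    print(f"{name:>9}: {sat_rate:.0%} satisfied, "
          f"${cost:.2f}/resolution, "
          f"${cost_per_satisfied:.2f}/satisfied resolution")
```

With these assumed figures, the community delivers a satisfied resolution for roughly $5.33 versus roughly $9.41 via tickets, despite the lower satisfaction score.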
By itself, this question and its result can lead to a misunderstanding of the value and benefits of a community. Don’t use the number in isolation. Attach cost and volume figures that allow a fuller appreciation of what the community contributes.
p.s. Be aware that changing the question from ‘was this answer helpful?’ to ‘did it solve your problem?’ will have a big impact on the number.