Many communities today are moving towards simple rating systems for responses.
Readers can select whether a response is helpful or not helpful. Some are measured by this metric (it’s as good a metric as any).
But what if a member asks a question you can answer but not resolve?
What if a member wants a product you no longer sell? Or to request a feature you can’t create?
You can provide the most thoughtful, detailed answer possible and it’s inevitably going to receive a downvote (or no votes) simply because it didn’t resolve the question.
Ouch!
This becomes an especially big problem during new product launches/changes, when people have a lot of questions which can’t easily be resolved.
If you’re measured by satisfaction scores (or % of helpful votes), your success will fluctuate randomly regardless of what you do.
This means two things:
- You need to tag and exclude the questions which were impossible to resolve in the first place. Be wary of the temptation to also tag questions which are merely likely to be unpopular.
- You need to take a random sample of these scores and compare them. If you don’t want to tag every question, use sampling instead. Take a random sample (>30) of votes on solvable questions and another on unsolvable questions. Compare the two and check for statistical significance. Now you can make a relatively safe adjustment for the drag caused by these questions.
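The comparison step above can be sketched as a two-proportion z-test on the helpful-vote rates of the two samples. This is a minimal illustration with made-up numbers (41/50 helpful votes on solvable questions vs. 18/50 on unsolvable ones), not real community data:

```python
import math

def two_proportion_z_test(helpful_a, n_a, helpful_b, n_b):
    """Compare helpful-vote rates between two random samples of questions.

    Returns the z statistic and the two-tailed p-value. Samples should
    each be >30 for the normal approximation to be reasonable.
    """
    p_a = helpful_a / n_a
    p_b = helpful_b / n_b
    # Pooled proportion under the null hypothesis (no real difference)
    p_pool = (helpful_a + helpful_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical samples: solvable vs. unsolvable questions
z, p = two_proportion_z_test(helpful_a=41, n_a=50, helpful_b=18, n_b=50)
if p < 0.05:
    print(f"Significant gap in helpful rates: z={z:.2f}, p={p:.4f}")
```

If the gap is significant, you have evidence that unsolvable questions genuinely drag the metric down, and you can quantify roughly how much to compensate for it.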
It’s a relatively minor problem unless you’re measured by member satisfaction. In which case it can suddenly become a very big problem indeed.