The Runaway Community – Measuring And Improving An Online Community
We’ve been fortunate to consult with organizations whose communities we would define as runaway successes (Facebook, SAP, Oracle, etc.). One major difference between these companies and communities that are merely ‘doing OK’ is the former’s incredible commitment to constant improvement. The runaway successes are constantly benchmarking, testing, and refining what they do.
The organizations that aren’t doing so well will ask questions like “What should I measure?” or “What are some good benchmarks for [x]?”
In this post, I’m going to explain how we approach community measurement with these kinds of clients and some of the processes we put in place.
Remember, this is the final week you can sign up for our strategic community management course. We won’t be running this course again for a while, so I hope you can join us.
The Measurement Fallacy
Almost everyone we’ve worked with is measuring something.
But when we ask what they do with those measurements, the answers are either really vague (“well, it tells us what’s working or not working”) or redundant (“I send them to my boss and colleagues”).
By contrast, our clients (and course participants) will have an answer like:
“If the number of useful tips created by our top experts rises by 10% as expected next month, we’ll spend more time building relationships with the insider group and less time on our newsletters. If it rises by less than 10%, we’ll try pushing the leaderboard system as the core tactic instead.”
This is the difference between having something you measure for fun and actually having a system to drive ongoing improvement. You should only measure the things you want to improve.
If you don’t know what to do with the data, why waste time collecting it? If you don’t know what will happen if a metric rises or falls, why measure it at all?
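To make this concrete, a metric worth collecting can be written down as a decision rule before the number arrives. This is a hypothetical sketch based on the example quote above; the function name, the counts, and the 10% threshold are all illustrative:

```python
# Hypothetical sketch of a metric-to-decision rule, using the 10% growth
# threshold from the example above. All names and numbers are illustrative.

def next_months_focus(tips_last_month: int, tips_this_month: int,
                      expected_growth: float = 0.10) -> str:
    """Decide where to spend community-building time next month."""
    growth = (tips_this_month - tips_last_month) / tips_last_month
    if growth >= expected_growth:
        # Tips rose as expected: double down on insider-group relationships.
        return "insider relationships"
    # Tips fell short: test the leaderboard as the core tactic instead.
    return "leaderboard"

print(next_months_focus(200, 224))  # 12% growth
print(next_months_focus(200, 210))  # 5% growth
```

The code is trivial on purpose: the point is that the decision exists before the data does. If you can’t fill in both branches, the metric probably isn’t worth tracking.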
The Interpreting Problem
But interpreting data is a huge problem, even if you have benchmarks to begin with.
Let’s imagine your goal is to increase customer satisfaction. You randomly survey a large group of active members each quarter and track results. You discover that customer satisfaction has stayed the same in the last three months. What would you do differently with this data?
Actually, take a second and think about it…
Some of you might conclude that the community isn’t working and may need to be scrapped. But what if customer satisfaction fell everywhere else except in the community? That could be a game-changing win.
You can’t produce any decent analysis unless you have the right context. This means working on four levels: execution, tactics, strategy, and objectives.
Analyzing The Four Levels For Context
If customer satisfaction scores aren’t rising, is community the wrong approach? Or is it because you had the wrong objectives, strategy, and tactics?
Let’s imagine one of your objectives to increase customer satisfaction is to get more experts sharing product tips. Your strategy might be to build a sense of competition (even jealousy) among them to generate the best tips. One tactic to fulfill that strategy is to build a leaderboard ranking everyone with an expert badge by the quality of the tips they share.
But if customer satisfaction scores aren’t rising, did you have the wrong objective, strategy, or tactics? Or perhaps the tactics were just badly executed?
We tackle this by using a framework with four levels. At each level, you are constantly learning and refining what you do. It’s a campaign of hard work, but it helps turn a static community into a runaway success.
LEVEL 1: Was The Tactic Well Executed?
It’s impossible to know if you had the wrong objective, strategy, or tactic until you know if the tactic was well executed.
This means three things:
- Did it reach a large percentage of the target audience?
- Did it significantly change the behavior of the audience it did reach?
- Did that behavior change last for a long time?
You might look at how many people visited the leaderboard, how many of those visitors shared more tips, and whether they kept visiting the leaderboard to track their ranking (for example).
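As a sketch, the three execution questions reduce to a simple funnel of ratios. Every count below is invented purely for illustration:

```python
# Hypothetical execution funnel for the leaderboard tactic. Each ratio answers
# one of the three execution questions; all counts are made up for illustration.
target_audience  = 400  # experts we wanted the leaderboard to reach
visited          = 180  # unique experts who viewed the leaderboard
shared_more_tips = 60   # visitors who then shared more tips
still_visiting   = 45   # of those, still checking their ranking a month later

reach     = visited / target_audience          # did it reach the audience?
behavior  = shared_more_tips / visited         # did it change behavior?
retention = still_visiting / shared_more_tips  # did the change last?

print(f"reach {reach:.0%}, behavior change {behavior:.0%}, retention {retention:.0%}")
```

A weak number at any one stage tells you where execution broke down: reach points to awareness, behavior change to the tactic’s design, retention to its staying power.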
I’m often amazed how many great tactics were abandoned not because they were the wrong tactics but because they were badly executed. With a little refinement (often simply more awareness-raising), you can turn them into great successes.
LEVEL 2: Did You Use The Right Tactic?
If the goal of the leaderboard was to increase a sense of competition (and jealousy), you need to measure whether the increase in metrics related to the tactic (e.g. views of the leaderboard) tracked with any observable metric relating to jealousy (sentiment, stories captured, survey results, interviews, etc.).
You can have a very successful tactic which fails to fulfill the strategy (you can also have an increase in strategy metrics that’s random and not connected to the tactic).
There should be a strong correlation between the two here. An increase in tactical metrics should be matched by an increase in the sentiment metrics that reflect your strategy.
If not, you want to move to a different tactic, e.g. interviewing or giving public prizes to the people who share the best tip each week.
LEVEL 3: Do You Have The Right Strategy?
Now you can move on to the bigger question.
Are you using the right strategy to achieve the objective in the first place?
What if you survey experts and plenty of them mention feeling competitive, but you don’t see any increase in the number of tips they’re sharing? Then you have the wrong strategy.
You might shift to a strategy based around getting members to feel a sense of satisfaction from helping each other, or a sense of pride in their own success and achievements.
LEVEL 4: Was It The Right Objective?
Now you need to check whether the objective (members sharing more tips) does influence the goal (members’ sense of satisfaction). If the correlation between the two is positive and significant, you are using the right objective. If the correlation is weak or negative, you might want to consider different objectives.
For example, instead of getting customers to share tips, you might try getting product questions answered as quickly as possible.
Allocating Time To What Works
The entire purpose of the process is to gradually find and allocate time to the areas that work at the expense of those that don’t. It gives you the freedom to test while doubling down on the areas that are working.
You should also find you reach a point of diminishing returns for most efforts; this is the optimum amount of time and resources to spend on each task.
(LEVEL 5: Did You Have The Right Goal?)
We can also add a fifth level here. Did you have the right goal?
Was it a goal that drove useful results for your organization? Was it something a community could and should achieve?
The best method to measure this is to calculate your community ROI. In practice, though, we’ve found very few organizations measure the financial return. Instead, it’s about the efficacy of the community compared with other channels at achieving the same goal. This is why you also need some idea of how effective those other channels are.
This process works if you commit to it. If you begin randomly making changes or ignoring your data, the framework fails. Work from the execution level upwards. Remember each level you go up means bigger and bigger changes.
You can improve the execution of a tactic far more easily than you can change the strategic objective. Changing the strategic objective means a new strategy and a new set of tactics to execute. That’s why you always work from the execution level upwards.
Strategic Community Management Course – FINAL WEEK
This is the final post of our FeverBee Explains series. In the series, we tried to go deeper on some of the more complex topics of our work. If you’ve enjoyed it, I hope you will join us.