Last year we introduced the Community-Driven Impact Score.
It is a system for measuring communities, similar in spirit to CSAT or NPS, which asks members to self-evaluate the impact they feel the community has had upon them.
The unique value of the Community-Driven Impact system is three-fold.
- It makes a direct causal argument for the benefit of the community. It overcomes the correlation challenge which bedevils almost every other method of measuring community.
- It is easy and flexible to deploy. You can run the survey and get results in a couple of days. You can also adapt the question to your particular goal. If you care more about retention than experience, you can adapt the question to match.
- It creates a goal other than engagement. The CDI score provides a result which balances an increase in engagement against the impact on each visitor. There is no point in driving more engagement if the impact on each visitor is plummeting.
Let’s dive a little deeper into what it is.
What Is The Community-Driven Impact Score?
The CDI score multiplies two items of data.
- The average self-reported impact of the community on each visitor (average member impact).
- The number of visitors to the community (community reach).
We will go through some caveats to both of these shortly. But essentially, the score aims to definitively show the impact of the community on its members.
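The core calculation is a simple multiplication of the two figures above. A minimal sketch, assuming the inputs described (the function name and example numbers are illustrative, not part of any official tooling):

```python
def cdi_score(average_member_impact: float, community_reach: int) -> float:
    """Community-Driven Impact = average member impact x community reach."""
    return average_member_impact * community_reach

# e.g. an average impact of 1.2 (on a -3 to +3 scale) across 40,000 visitors
print(cdi_score(1.2, 40_000))  # 48000.0
```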
Like NPS and CSAT, it’s survey (or poll) driven. That makes it easy to deploy but also means we must be cautious about how we interpret the data.
Designing The Survey
I’d recommend gathering data on the following:
- How frequently they visit the community.
- How many contributions they've made to the community.
Next, you have to choose between one of three options for the CDI Question.
Option 1: The CDI Question
The average member impact score is an adaptation of the NPS question for communities.

To put the question in the simplest terms, we ask:

“How has [the community] influenced [outcome]?”
You can select the outcomes which matter to you (you could also identify specific aspects of the community if you like).
Some example questions might include:
- How has the [brand] community influenced your decision to [renew subscription]?
- How has the [brand] community influenced your decision to [purchase item]?
- How has the [brand] community impacted your satisfaction with [product]?
- How has the [brand] community impacted your ability to achieve [desired product outcome]?
- How has the [brand] community influenced your likelihood to utilise [features of product]?
This lets members evaluate for themselves the impact the community has had on their behaviours.
Answers and Scoring
Now we have the question, we need to be mindful about the answers we offer.
There are two ways of doing this. One provides a scale that lets members state if the community has caused a negative impact as much as a positive one.
For a subscription question, this might use a scoring system of:
- Decisive impact on cancellation
- Major impact on cancellation
- Minor impact on cancellation
- No impact
- Minor impact on renewal
- Major impact on renewal
- Decisive impact on renewal
I’d strongly recommend adapting the text as needed. The highest answer might be shown as ‘decisive impact on renewal’ and the lowest as ‘decisive impact on cancellation’. The answers are scored on a scale from -3 to +3, with 0 being ‘no impact’.
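Scoring this scale is then a matter of mapping each answer to its value and averaging. A minimal sketch, assuming the seven answers above on the -3 to +3 scale (the response list is made up for illustration):

```python
# Map each answer to its position on the -3 to +3 scale.
SCALE = {
    "Decisive impact on cancellation": -3,
    "Major impact on cancellation": -2,
    "Minor impact on cancellation": -1,
    "No impact": 0,
    "Minor impact on renewal": 1,
    "Major impact on renewal": 2,
    "Decisive impact on renewal": 3,
}

def average_member_impact(responses: list[str]) -> float:
    """Average the scored answers to get the average member impact."""
    return sum(SCALE[r] for r in responses) / len(responses)

responses = ["No impact", "Major impact on renewal", "Minor impact on renewal"]
print(average_member_impact(responses))  # 1.0
```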
Option 2: Simplified CDI Question
The other approach is more similar to NPS in asking members:
“On a scale of 0 to 10, how has [community] influenced [outcome]?”
0 is shown as no impact and 10 is shown as a decisive impact.
In this approach, 9 to 10 is considered a decisive impact, 6 to 8 is a significant impact, and 4 to 5 is a minor impact.
The power of this approach is even if you don’t want to report the pure CDI score, you can still make statements like:
- 30% of our members said it had a significant impact on their decision to renew
- 10% of our members said it was the decisive factor in their decision to renew
- 55% of our members said it had a minor impact.

Once you multiply these figures by the value of a renewal, you can arrive at an approximate dollar-value range which is supported by data.
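Putting the pieces together, you can bucket the 0-to-10 responses into the bands described above and turn a band's share into an approximate dollar figure. A sketch under assumed inputs (the response scores, member count, and renewal value are invented for illustration):

```python
from collections import Counter

def bucket(score: int) -> str:
    """Band a 0-10 response per the thresholds in the text."""
    if score >= 9:
        return "decisive"
    if score >= 6:
        return "significant"
    if score >= 4:
        return "minor"
    return "no impact"

def bucket_shares(scores: list[int]) -> dict[str, float]:
    """Fraction of responses falling in each band."""
    counts = Counter(bucket(s) for s in scores)
    total = len(scores)
    return {b: counts.get(b, 0) / total
            for b in ("decisive", "significant", "minor", "no impact")}

scores = [10, 9, 7, 6, 6, 5, 4, 3, 2, 0]
shares = bucket_shares(scores)
print(shares["decisive"])  # 0.2

# e.g. 1,000 renewing members x 20% decisive x $500 renewal value
print(1_000 * shares["decisive"] * 500)  # 100000.0
```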
Option 3: The Comparative CDI Question
If you want to gather even more powerful insights, you can take the survey question one step further.
You can add a question which asks members to rank the community alongside other activities.
Please rank how important each of the following was in your decision to renew your subscription:
- Customer support.
- Ease of use.
- Outcomes of using the product.
- Annual conference.
This uses a matrix structure that lets members highlight how important the community is and how it compares to other factors the organisation might care about. This kind of data shows not just the impact of the community but how that impact compares to other channels.
Resource: You can find a simple survey template here.
How To Undertake The Survey
In the ideal scenario, you would issue the survey to a random, rotating group of customers (note: not just community members) each month. This gives you frequently updated data you can use to engage your audience.
It also helps you understand the percentage of customers who participate and avoid the typical problems with convenience sampling (i.e. just issuing a survey on your community page).
In practice, this doesn’t happen often because of valid internal concerns about how frequently to contact customers and competing groups each wanting to undertake their own surveys.
This means you’re far more likely to be able to undertake the survey in one of two ways:
- By sending an email to your community mailing list.
- By creating a pop-up survey to appear in your community.
In this scenario, the latter is probably preferable, with some caveats which we will cover below. The key is to collect at least 300 responses so the results hold some statistical validity.
While you could simply publish a pop-up poll on your community to gather data, the anonymous nature of a poll means it will be difficult to know who responded to it (i.e. is it only collecting responses from your top members who love the community?)
Correcting Survey Responses
A major challenge with most surveys is they fall victim to convenience sampling.
The most engaged customers are also most likely to respond to surveys.
It’s tough to get a typical visitor (i.e. someone who isn’t logged in) to complete a survey.
This can heavily distort the results in a favourable direction (aside: this issue bedevils call deflection surveys too).
There are two ways to tackle this.
- Only generalise the conclusion to those who complete the survey. If you only accept responses from members, you can only generalise the results across members (i.e. multiply by active members vs. total visitors). This will significantly undervalue the community as the majority of people won’t be logged in. It might still suffer from the same problem – only the most active members might reply!
- The better approach is to correct the results of your survey by weighting the responses. If you know 90% of your visitors don’t log in and 10% do, you can weight the results accordingly. For example, if the average score is 4.7 out of 5 for registered members and 3.9 for non-registered members, the weighted overall score is 3.98 ((4.7 * 0.1) + (3.9 * 0.9)).
This is an oversimplified (and somewhat extreme) example, but you get the idea. You need to correct the weightings to reflect the audience.
It is a huge help to use two different collector URLs for the survey. Show one survey to non-logged-in members and another survey to logged-in members. This makes the weighting easier.
You can also use the responses to the questions about the number of posts (or frequency of visits) to correct the survey results. If you know 95% of visitors don’t participate, you can correct for this using the method above. It still won’t be 100% accurate, but it’s far better than using raw results.
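The weighting correction above can be sketched in a few lines: combine each segment's average score, weighted by that segment's share of the real audience. The shares and scores are the illustrative figures from the text:

```python
def weighted_score(segments: list[tuple[float, float]]) -> float:
    """segments: (average_score, audience_share) pairs; shares should sum to 1."""
    return sum(score * share for score, share in segments)

# 10% of visitors are logged in (averaging 4.7); 90% are not (averaging 3.9)
print(round(weighted_score([(4.7, 0.10), (3.9, 0.90)]), 2))  # 3.98
```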
By the end of this process, you should have an average impact score: the score which estimates the average impact of the community on each individual visitor.
Calculating The Full Member Impact
While you can’t influence engagement as much as you might like, the number of visitors to a community is still important when considering the impact of a community.
Exclude Unique Visits Under 30 Seconds
The assumption here is that it takes at least 30 seconds for the community to have an impact; visits shorter than that shouldn’t be counted. This is why you probably want to exclude visits under 30 seconds from your CDI calculations.
You can set this up in Google Analytics relatively easily. Create a unique segment of users using the ‘time on page’ or ‘time on screen’ filter.
This assumes that for the community to have had an impact, a member would have to have viewed at least one page for at least 30 seconds. You can change those assumptions if you like, but they should be based on something.
Now we might recalculate our score to reflect that only 25% of visitors met this definition and have a CDI score of 12k (as opposed to 48k).
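The recalculation works out like this, under assumed inputs (an average impact of 1.2 and 40,000 total visitors are invented to reproduce the 48k and 12k figures above):

```python
average_impact = 1.2       # from the corrected survey results
total_visitors = 40_000    # all unique visitors

# Only 25% of visitors viewed a page for 30+ seconds.
qualifying_visitors = int(total_visitors * 0.25)

print(average_impact * total_visitors)       # 48000.0 (uncorrected CDI)
print(average_impact * qualifying_visitors)  # 12000.0 (corrected CDI)
```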
This is the critical score you should be held accountable to increase.
CDI Score Needs Executive Support
The biggest challenge facing the CDI score is gaining support.
CDI doesn’t measure dollar value return (although it can be useful in calculating dollar value return).
If an exec only cares about a specific dollar value return ($), then you need to follow a process to match. Yet, while the argument that it doesn’t measure dollar value return is valid, this can also be applied to CSAT, NPS, CES and a host of other common metrics.
The primary benefit of CDI is that it’s measuring the impact of the community against its intended purpose.
It isn’t using a measure of advocacy (NPS) or overall satisfaction (CSAT). It’s using a metric developed specifically to measure the impact of the community.
Summary and Next Steps
Here are some steps to follow when deploying the CDI Score.
- Present the overview of the Community-Driven Impact score. Make sure people are aware of what it is, how it works, and how to calculate it.
- Select a survey tool. Any survey tool will do. Make sure you can use multiple collectors and use different variants for logged-in vs. non-logged-in members.
- Design your survey. You can duplicate the survey template we’ve created. Decide whether you want the simple survey evaluating just the community or the comparative version benchmarking the community against other channels.
- Collect responses. Publish the survey in your community (or send via email to all customers) and collect responses. Watch out for any bugs or errors which might arise.
- Correct the results. Weight the results of the survey to reflect the audience. This is especially important for logged-in vs. non-logged-in members.
- Only accept visits over 30 seconds. The number 30 is arbitrary, but consider how long it would genuinely take for the community to impact the desired behaviour. If it’s 60 seconds or multiple visits, use that instead.
- Track the score over time. Set the score as your ultimate target to improve over time. If you have a large community, run the survey over a random sampling of members (or customers) each month.
Let FeverBee Set Up The System For You
If you don’t have the time or experience to set up a community measurement system, please feel welcome to contact us and we will set it up for you. We can:

- Develop the system from scratch to suit your specific goals.
- Design and test the initial survey.
- Develop the strategy to implement the survey and gather responses.
- Analyse and present the results (in a visually pleasing way).
- Highlight the steps you can take to improve the CDI.
- Provide the full documentation to take ownership of the system.
If you want more information, contact us.