Why You’re Misinterpreting Community Member Satisfaction Data

In most surveys, we ask a question about the relevance of the community or members’ satisfaction with it.

This is usually a Likert scale question such as:

On a scale of 0 to 10, with 10 being the highest, how relevant or irrelevant do you find the information shared in the community?

On a scale of 0 to 10, with 10 being the highest, how satisfied or dissatisfied are you with your community experience?

On a scale of 0 to 10, with 10 being the highest, how helpful or unhelpful do you find the community in achieving your goals?

Some organisations simply use a poll and calculate the results from that.

The problem is that these results are widely misinterpreted, leading to strategic blunders.

 

Why The Average Score Is Dangerously Misleading

Sadly, people love simple scores which are easy to interpret.

For example, let’s imagine you run a survey and get an average member satisfaction or relevancy score of 4.1 out of 5 (or the NPS equivalent).

A year later you run the survey again and the average is 3.8 out of 5.

This looks like a bad result (and it could be). But it’s just as likely to reflect a change in who is responding to the question.

For example, imagine in year one, your survey collects data from 70 newcomers, 20 intermediate members, and 10 top members. The following year, it collects data from 50 newcomers, 20 intermediate members, and 30 top members.

Your overall average will probably be worse. This isn’t because you’re offering a worse community experience. It’s because the percentage of responses from different groups in the sample has changed.

This is why tracking members by segment is key. You will not only get an overall average, you can also see how each segment feels about the community. You can then adjust for sampling bias to get an overall score and track that over time.
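For illustration, here is a minimal Python sketch of that adjustment, using made-up segment names, scores, and membership shares. It computes both the raw survey average (which moves whenever the mix of respondents changes) and a weighted average that holds each segment at its real share of the membership.

```python
import pandas as pd

# Hypothetical survey responses: one row per respondent, with their segment
# and their 0-10 satisfaction rating.
responses = pd.DataFrame({
    "segment": ["newcomer"] * 50 + ["intermediate"] * 20 + ["top"] * 30,
    "score":   [8] * 50 + [7] * 20 + [6] * 30,
})

# The raw average moves whenever the mix of respondents changes.
raw_overall = responses["score"].mean()

# Per-segment averages: track these individually year over year.
segment_means = responses.groupby("segment")["score"].mean()

# Assumed "true" share of each segment in the overall membership, taken from
# your member database rather than from whoever happened to respond.
population_weights = {"newcomer": 0.70, "intermediate": 0.20, "top": 0.10}

# Weight each segment's average by its real share of the membership, so a
# shift in who responds doesn't move the headline number.
adjusted_overall = sum(
    segment_means[seg] * weight for seg, weight in population_weights.items()
)

print(f"Raw average: {raw_overall:.2f}")
print(segment_means)
print(f"Sampling-adjusted average: {adjusted_overall:.2f}")
```

Tracking the per-segment averages alongside the adjusted overall score is what lets you separate a genuine change in experience from a mere change in who responded.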

 

Response Bias Favourably Distorts Survey Results

It’s genuinely hard to get people to complete a survey if they had a bad experience. Why would they bother? You could try offering money, but then you would have to know their answers in advance to know whom to offer it to. It’s a catch-22.

We see this in the data all the time. Pretty much every segment in every community scores the community 3 out of 5 or above. This often leads clients to think that an average score of 3.6 out of 5 is good (when it isn’t).

It simply reflects that most people who are really unhappy don’t tend to complete surveys.

This is also why you should be careful when member satisfaction scores are compared against other channels (e.g. customer support). This is never an apples-to-apples comparison. The methodology changes everything: how, where, and when people are surveyed in their journey will have a big impact on the outcome.

 

Why Top Members Are Usually The Least Satisfied

The other challenge is that the results vary tremendously by segment.

Before we began doing satisfaction surveys, we expected the best results would come from the top members (or longest-serving members) and the worst results would come from newcomers (or least active members).

In reality, we often find the opposite is true.

As you can see below, the veterans/top members are typically the least satisfied while the newcomers (especially those who have just joined) are the most satisfied.

Survey showing member satisfaction results

Why would the most active and longest-serving members also be the least satisfied?

Wouldn’t unhappy members simply vanish from the community altogether?

One reason is pretty obvious. Being the least satisfied or least happy isn’t the same as being unsatisfied or unhappy. Members who are truly unsatisfied or unhappy with the community don’t bother to complete a survey at all.

The more we’ve dug into the survey data and combined it with qualitative responses from different groups, the clearer our understanding has become of how and why members engage the way they do.

Top members are usually the least satisfied for three reasons.

  • They compare the community to its past state. Many top members compare the community to an idealised former state, usually a time when they were on a platform they liked more or when they knew the community team better. The more mature the community is (and the more changes it has been through), the bigger this problem becomes. They’re not evaluating the community by the daily experience they have, but against a nostalgic sense of the past (not too dissimilar from how many people have nostalgic views about the 70s, 80s, or 90s).
  • Top members notice the small problems. Top members spend the most time in a community, which means they’re more likely to notice the minor issues other members don’t. It’s best to imagine this as visiting a friend’s well-decorated home with beautiful views. The house is likely to seem better to you than it does to them; they’ve lived with all the small issues you haven’t.
  • Top members are evaluating their relationship with the brand. Top members often evaluate the community experience by how they feel the community team treats them. If they harbour a sense of past injustice or don’t feel they are getting the respect or recognition they deserve, they evaluate the community poorly. They’re not evaluating the experience as much as they are the relationship.

Ultimately this means top members are using a different rubric to evaluate the community and they’re evaluating the community against far higher standards.

 

Why Newcomers Are The Most Satisfied

Newcomers tend to be more satisfied for three reasons:

  1. They’re evaluating a small slice of the community. Newcomers only evaluate the parts of the community they’ve experienced. Often this is simply them asking a question, getting the answer they wanted, and departing. If that small slice of the community was good (and it usually is), they are satisfied.
  2. They’re still excited by the community’s potential. If someone asked you how satisfied you were with college after your first day, you would probably say highly satisfied. You’re still in the excited phase. After a year, your review would be different (and probably lower). It’s easy to forget how exciting joining a community filled with great expertise can be. There are so many things to learn and people to engage with. The excitement about the community’s potential skews the results.
  3. They’re comparing the community to search. Related to the above, newcomers often compare the community to using a search engine or tediously browsing the documentation. They’re not evaluating the community objectively; they’re comparing it to the other places where they could get the same information.

Newcomers are evaluating a small slice of the community using different comparisons and are buoyed by the excitement of its potential.

 

How To Use Community Survey Data

It’s really easy (and tragically common) to compare the groups against one another and misinterpret what the data is telling you.

This could lead you to invest more time in top members and less time in newcomers.

But this isn’t what the survey is telling you to do!

The survey is a snapshot of information collected during a given period. It’s a reflection of the natural mindset, impulses, and attributes of each unique segment of the audience.

They’re not more or less satisfied because of what you’re doing; they’re more or less satisfied because of the segment they belong to.

So what’s the point of undertaking this survey if it’s not showing you what you think?

There are three reasons:

  1. Create a benchmark for future comparison. This data lets you create a baseline (both overall and for each group) that you can use over time. This becomes your benchmark. You can (and should!) run the survey once a year and track progress, as shown in the sketch after this list. This will reveal whether satisfaction is increasing or decreasing each year, and which specific groups are driving the outcome.
  2. Compare against industry standards. We’ve been running these surveys for long enough to know when a result is an outlier worth exploring. When we compare the results of one client’s survey against the industry baseline, we can see whether the client is performing better or worse than expected. This is great data for checking whether your performance is within the expected range; if not, you can work out who to learn from.
  3. Identify unique issues. The very next question after members give a rating should ask them to explain it. Are there any specific things they especially do or don’t like? This creates opportunities to learn about specific issues you can prioritise.
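As a minimal sketch of the benchmark tracking described in the first point above (the segments and scores below are hypothetical):

```python
import pandas as pd

# Hypothetical per-segment averages from last year's survey (the baseline)
# and from this year's survey.
baseline = pd.Series({"newcomer": 8.1, "intermediate": 7.2, "top": 6.4}, name="last_year")
current = pd.Series({"newcomer": 7.9, "intermediate": 7.3, "top": 5.8}, name="this_year")

trend = pd.concat([baseline, current], axis=1)
trend["change"] = trend["this_year"] - trend["last_year"]

# Sorting by the change column shows which segments are driving the movement.
print(trend.sort_values("change"))
```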

 

Summary

It’s great to have some method of measuring member satisfaction (a poll or survey question). However, you need some understanding of your community, and of how the data was collected, to turn that data into useful insights.

Don’t use the average score without also considering the composition of respondents. Be mindful that the score will be distorted by response bias. And anticipate that top members will seem the least satisfied, and that’s ok.

Track the data over time and then use the data to decide which groups need more or less support.

What Is A Login, Post, Or Download Worth In A Community? (An Analysis by FeverBee)

It’s incredibly hard to prove the value of a community.

Unless you have the time, expertise, willpower, and capability to run a controlled trial, most data analyses only establish that a relationship exists. The direction of that relationship (and whether there are confounding variables) is often left unexplored.

However, this doesn’t mean you can’t gather some really interesting insights. You may not be able to definitively prove the value of the community, but you can estimate the possible value of the community using statistically valid methods.

 

Pragmatic Institute Needed Help To Measure Their Community

In March 2022, The Pragmatic Institute hired us to help them determine the ROI of their community.

The Pragmatic Institute is a provider of expert training for Data, Design, and Product Teams. It primarily sells to organizations, which often enroll multiple staff members in courses at any given time.

To support their work, they created an online community hosted by Higher Logic. The community was popular and active. Members were clearly getting value from participating in the community. However, it had proven difficult to show whether engaging in the community led to more sales or better outcomes for The Pragmatic Institute.

FeverBee was tasked with answering three key questions:

1) What is the connection between community participation and the number of courses taken?

2) How might community activity impact training revenue?

3) What kind of community behavior should be encouraged to improve the metrics above?

 

Methodology

Diagram: FeverBee community methodology

Step One: Preparing the dataset

The first step of the process was to build a proper, unified dataset.

We began by preparing the three datasets:

1) The dataset of Pragmatic users from Salesforce.
2) The community activity dataset from Higher Logic.
3) The sales data from Salesforce.

As with many organizations, member and customer data was scattered across multiple channels.

The community was hosted on Higher Logic which contained all the engagement-level data. Purchase data lived in one set of Salesforce reports while course completion data lived in another.

To make things even more challenging, the individuals who took the courses weren’t usually the people who paid for them. Finally, the data often contained inconsistencies in how organization names were recorded: sometimes an acronym was used, other times the full name.

Over several weeks, we cleaned the data, identified and resolved missing variables, and analyzed the outcomes.
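To illustrate the kind of preparation involved, here is a minimal Python sketch. The sample records, column names, and the crude organization-name normalization are all assumptions for the sake of the example; real Salesforce and Higher Logic exports will look different.

```python
import pandas as pd

# Toy stand-ins for the three exports described above.
activity = pd.DataFrame({                       # Higher Logic engagement data
    "email": ["ana@acme.com", "bo@acme.com", "cy@globex.io"],
    "posts": [12, 0, 3],
    "logins": [40, 2, 9],
    "downloads": [5, 0, 1],
})
users = pd.DataFrame({                          # Salesforce user records
    "email": ["ana@acme.com", "bo@acme.com", "cy@globex.io", "di@globex.io"],
    "account_name": ["ACME Corp.", "Acme Corporation", "Globex Inc", "Globex Inc"],
})
purchases = pd.DataFrame({                      # Salesforce sales data
    "account_name": ["Acme Corporation", "Globex Inc"],
    "revenue": [54000, 18000],
})

def normalize_org(name: str) -> str:
    # Crude normalization so "ACME Corp." and "Acme Corporation" match.
    name = name.lower().strip().rstrip(".")
    for suffix in (" corporation", " corp", " inc", " ltd"):
        name = name.removesuffix(suffix)
    return name

users["org_key"] = users["account_name"].map(normalize_org)
purchases["org_key"] = purchases["account_name"].map(normalize_org)

# Attach each member's community activity to their organization, then roll
# activity and revenue up to the organization level (the buyer is often not
# the person taking the course, so the organization is the unit of analysis).
member_level = users.merge(activity, on="email", how="left")
behaviors = ["posts", "logins", "downloads"]
member_level[behaviors] = member_level[behaviors].fillna(0)

org_activity = member_level.groupby("org_key")[behaviors].sum()
org_revenue = purchases.groupby("org_key")["revenue"].sum()
dataset = org_activity.join(org_revenue, how="inner")
print(dataset)
```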

 

Step Two: Running comparisons

Next, we began running comparisons between different segments of members. This included comparing active members with inactive members, and comparing members who had engaged in the community with those who hadn’t.

When variables had a sufficient distribution, we performed linear regressions to look for a linear relationship between behavior and the revenue generated per organization. We also grouped members by the number of posts they had made to determine whether a specific level of engagement was associated with an increased probability of future sales (this is known as ordinal regression).
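As a rough sketch of what these comparisons can look like in code: the data generated below is synthetic and only stands in for the shape of the real dataset, and the binned logistic model is an approximation of the grouped analysis described above rather than the exact model used.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400

# Synthetic organization-level data standing in for the real (confidential) dataset.
dataset = pd.DataFrame({
    "posts": rng.poisson(4, n),
    "logins": rng.poisson(20, n),
    "downloads": rng.poisson(2, n),
})
dataset["revenue"] = 5000 + 800 * dataset["posts"] + 50 * dataset["logins"] + rng.normal(0, 4000, n)
dataset["took_more_courses"] = (
    rng.random(n) < 0.2 + 0.03 * np.minimum(dataset["posts"], 20)
).astype(int)

# Linear regression: is community activity associated with higher revenue per organization?
ols_model = smf.ols("revenue ~ posts + logins + downloads", data=dataset).fit()
print(ols_model.summary())

# Group organizations into ordered bands by post volume, then model the
# probability of a further course purchase against those bands.
dataset["post_band"] = pd.cut(
    dataset["posts"],
    bins=[-1, 0, 3, 8, float("inf")],
    labels=["none", "low", "medium", "high"],
)
logit_model = smf.logit("took_more_courses ~ C(post_band)", data=dataset).fit()
print(logit_model.summary())
```

If the outcome itself is ordered (for example, the number of additional courses taken), a true ordinal model such as statsmodels’ OrderedModel could be substituted for the logistic step.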

 

Step Three: Presenting Results

Once we had undertaken this analysis, we examined and visualized the results to estimate the potential value of the community. This is what we found…

 

What Is The Pragmatic Community Worth?


Overall, almost every behavior was associated with greater revenue.

  • Active members (>1 post) generated 21% more revenue than inactive members.
  • Members who made at least one post contributed 44% more revenue.
  • Members who created at least one thread contributed 35% more revenue.
  • Members who logged in to the community at least once contributed 21% more revenue.
  • Members who replied to at least one direct message contributed 38% more revenue.
  • Members who contributed a document generated 38% more revenue.
  • Members who downloaded a document contributed 20% more revenue.
  • The more frequently members logged in, the greater the revenue they generated.

By our calculations, community participation was associated with over $1.5m in additional revenue.

 

Which Behaviors Were Most Valuable?

It’s one thing to know the value of the community; it’s another to dive a little deeper and determine which behaviors are most important in driving that value.

In our case, we wanted to know which community behaviors increased the probability of members taking an additional course.

Here the data showed three important metrics.

Community - which behaviors were most valuable?

  • Replying to a DM increased the probability of taking an additional course by 16%.
  • Creating a document increased the probability of taking an additional course by 23%.
  • Creating a thread increased the probability of taking an additional course by 201%.

 

Setting Priorities

The probability of taking a future course was directly related to downloading documents, creating threads, and replying to direct messages.

This helped set some clear and specific strategic priorities within the community.

These priorities included:

  • Encouraging members to create threads.
  • Creating more documents to download.
  • Getting more responses from members.

 

The Value Of Community Intelligence

There are far too many community teams who know their community is incredibly valuable but are unable to prove it.

As we’ve seen with the Pragmatic Institute, even if your data is scattered across several channels, we’re still able to combine this data and evaluate the impact of the community upon revenue, attitudes, and almost any other desired outcome.

If you can clearly define the variables, you can usually measure them. Better yet, we are able to specifically show which behaviors are most important and should be encouraged in the community at the expense of others.

As we’ve seen before, this helps you build a solid community strategy.

Once you can prove the value of the community through statistically valid methods, it becomes a lot easier to gain both internal validation and additional support.

If you know your community has value but you’re not able to definitively prove it yet, I’d suggest getting help from our community intelligence services.