How To Develop A Community Strategy

One of the biggest mistakes is to measure the success of a tactic without checking whether the tactic itself was well-executed. This can lead you to change your tactics instead of improving your execution. If a tactic has been badly executed, then the community will not fulfill its strategy and the strategy will not achieve its objective.

At this level, you’re not measuring whether they were the correct tactics in the first place. This will come next. You are simply measuring whether the tactics were well-executed.

This means you first need to define what good execution looks like. Fortunately, we already have.

Defining Successfully Executed Tactics

There is no single measure of a successful tactic. Tactics can vary wildly from one community to the next. You can, however, use a relatively simple process that was outlined earlier.

What Does Great Execution Look Like?

A successful tactic reaches a large percentage of the target audience, has a considerable impact on that audience, and that impact lasts. To measure this, we need some rough proxy metrics.

You usually don’t need to be precise here. Just get a rough idea of whether the tactic reached a large percentage of people, whether it affected those people, and whether the effect lasted.

  1. Reach. This is the easiest to measure. You identify what percentage of the target audience was reached by the tactic. You might define this as views, contributions, or whichever proxy metric best correlates with reach. The most important thing to remember is that we’re looking at the percentage of people reached, not the absolute number. Fifty people attending a webinar might look bad, unless it’s the fifty top community members you were trying to reach. Most tactics that fail simply didn’t reach enough people. So, the first thing to measure is whether the tactic reached a significant proportion (usually >50%) of the target audience.
  2. Depth. Now you need a metric which correlates with the depth of the tactic, i.e. the degree to which it impacted each individual. This is hard to measure without surveying the audience for each tactic. So, you can use a simpler proxy such as ratings, average time on page, repeat visits to the page, questions asked, responses to the CTA, etc. Your goal is to find out whether the tactic affected the people it reached.
  3. Length. The third thing to measure is the longevity of the tactic. There are two types of longevity. The first is when the tactic affects the members it reached for a long period of time; this might lead to a sustained change in behavior. The second is when the tactic stays popular for a long time, continuing to attract a large percentage of people (these might not be the same people). Very often, the tactics which show sudden success lose popularity quickly, whereas lesser-known tactics gradually scale and become increasingly popular in the long run. At this stage, we want to know how long the tactic was valuable for. A rough sketch of all three measures appears after this list.
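As a rough illustration, here is a minimal sketch of how you might compute the three proxies. All of the numbers and variable names are hypothetical; substitute whatever your analytics platform actually exports.

```python
# Hypothetical inputs; replace with your own analytics export.
target_audience_size = 400            # members the tactic was aimed at
members_reached = 230                 # e.g. unique viewers
avg_rating = 4.1                      # depth proxy: rating out of 5
monthly_reach = [230, 190, 185, 170]  # length proxy: reach per month

# 1. Reach: the percentage of the *target* audience, not an absolute number.
reach_pct = members_reached / target_audience_size * 100

# 2. Depth: any proxy that correlates with impact per person reached.
depth_ok = avg_rating >= 4.0

# 3. Length: did the tactic stay valuable, or decay quickly?
retention = monthly_reach[-1] / monthly_reach[0]

print(f"Reach: {reach_pct:.0f}% of target audience (want >50%)")
print(f"Depth proxy met (avg rating >= 4.0): {depth_ok}")
print(f"Length: month 4 reach is {retention:.0%} of month 1")
```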

Tactics to avoid are those which are popular with a tiny percentage of members. Unless this tiny percentage precisely reflects the people you want to reach, these tactics are usually a bad idea. Yet, they can often appear very successful because you’re getting positive feedback from the small percentage of people they do reach.

Likewise, tactics which appear popular by reach but have no depth (high bounce rates, low follow-through) are usually not worth keeping.

Finally, any tactic which does not have a big impact might be easy to cut.

Measuring Execution of Expert Interviews

You can use this process to measure the ‘interviews with top experts’ tactic we explored earlier. The purpose of this tactic is to foster a sense of awe and jealousy in other members. This means a lot of members need to watch the videos. This provides an easy set of metrics.

  1. Reach: The number of people watching the video / total size of the target audience. This is an obvious metric to track. If very few people sign up or watch the video, clearly the video isn’t relevant or useful to the audience. Try to make sure you have an approximate idea of how many active members you are trying to reach with the video.
  2. Depth: The number of questions, discussions or ratings about the video. These are relatively easy to track. We simply measure the number of questions asked, length of time someone watched the video (video analytics), or the rating the video was given. Pick one metric you feel is most relevant to you.
  3. Length: Views per month over time. You might also measure friend connections, increased followings on Twitter, or broader levels of respect for those featured in the interviews. These show the long-term impact of the videos.

These are all relatively good indicators that the tactic is being executed well. This might lead to some simple tracking metrics:

  1. Combined views of the video / number of active members.
  2. Number of questions asked in comments of the video or related discussion.
  3. Length of time video was watched (we assume the longer the video is watched, the more impact it will have).
  4. Twitter followings of experts.

You might set broad targets here, too. For example, you might want 15% of members to watch the videos, 30% of those viewers to ask questions, at least 50% of each video to be watched on average, and a 20% increase in the experts’ followings.

You can see an example below.

[Screenshot: example tracking metrics and targets for the expert interview videos]
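As a rough sketch, checking measured results against those broad targets might look like the following. The targets mirror the examples above; the measured figures are made up.

```python
# Targets from the example above and hypothetical measured results.
targets = {
    "watch_rate": 0.15,            # 15% of members watch the videos
    "question_rate": 0.30,         # 30% of viewers ask questions
    "avg_watch_share": 0.50,       # at least 50% of the video watched
    "expert_follow_growth": 0.20,  # 20% increase in experts' followings
}
measured = {
    "watch_rate": 0.12,
    "question_rate": 0.34,
    "avg_watch_share": 0.41,
    "expert_follow_growth": 0.25,
}

for metric, target in targets.items():
    status = "met" if measured[metric] >= target else "MISSED"
    print(f"{metric}: {measured[metric]:.0%} vs target {target:.0%} -> {status}")
```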

Analysis Of Videos

Now you can analyze why the videos did or did not succeed. We are specifically looking at the execution here, including how the videos were planned. The typical questions to answer might be:

  1. Were the videos well promoted? Here we look at how promotion of the videos compared with other promotions. This is simple benchmarking. What were the open, click-through and sign-up rates like for the videos compared with similar activities? This will tell us if the promotion of the videos differed in any significant way. Were they promoted better or worse than any other activity?
  2. Did the audience like the videos? This can be shown in the ratings of the videos or the length of time they were watched. You might compare this against similar videos or simply ask the audience to rate whether they liked the video, found it useful, and provide any other useful feedback. These are proxy metrics.
  3. Did the videos have the intended impact? This is done by experiment. You can measure the friend connections of experts before the video runs and then after the video to see the impact.

This analysis will reveal why the tactic was or wasn’t well executed. You can also gather additional information in terms of resources or effort.

If the video didn’t reach many people, was it badly promoted? If the audience didn’t like the video, was this because they didn’t find it useful or because it was badly made? If it didn’t have the intended impact, was this because members didn’t consider the person an expert?

Using the example below, you can see that the video still had a high CTR (click-through rate). This suggests the audience is receptive to videos. However, the video had a much lower average viewing time than other videos, which suggests the problem lies with the video itself. Notably, a 17-second average viewing time suggests something within the first 17 seconds was the problem.

[Screenshot: the video’s click-through rate and average viewing time compared with similar videos]

This gives us a useful insight: an easy fix would be to create a more exciting introduction that hooks people. This might involve skipping the introduction and diving straight into the single most important piece of advice shared by the experts.
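A minimal sketch of that benchmarking logic, with made-up figures standing in for your real analytics, might look like this:

```python
# Hypothetical benchmark (the average of comparable videos) vs this video.
benchmark = {"ctr": 0.08, "avg_watch_seconds": 210}
this_video = {"ctr": 0.09, "avg_watch_seconds": 17}

# A CTR at or above the benchmark suggests promotion worked and the
# audience is receptive to videos; reach is not the problem.
promotion_ok = this_video["ctr"] >= benchmark["ctr"]

# A far lower average watch time points at the video itself, and a
# 17-second average points at the opening seconds specifically.
watch_ratio = this_video["avg_watch_seconds"] / benchmark["avg_watch_seconds"]

print(f"Promotion comparable or better: {promotion_ok}")
print(f"Average watch time is {watch_ratio:.0%} of the benchmark")
if promotion_ok and watch_ratio < 0.25:
    print("Likely culprit: the video's opening, not its promotion.")
```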

Improve

This stage is relatively simple. You have three options to improve any tactic. These are:

  • Fix the mistake. If the tactic didn’t work because of an obvious mistake, fix the mistake. If the open rates of the emails promoting the videos were low, you might tweak the subject line or the sender name. If not many people attended despite being registered, you might schedule the interview at a more convenient time.
  • Allocate more resources. If the tactic didn’t work because of a clear resource problem, allocate more resources if they are available. If a specific resource would make the tactic successful, you can reallocate it from elsewhere. Likewise, if the tactic would work better with even more resources, you might want to commit them.
  • Kill it. If the execution was bad and you cannot easily or predictably (with more resources) fix it, you should stop executing those steps and (most likely) kill the tactic to free up resources for more effective tactics.

The improve stage should be explicit in stating the conditions under which you might spend more or less time on any specific tactic. You should put these conditions in place beforehand to avoid your own biases (or your team’s) later.

[Screenshot: example conditions for fixing, resourcing, or killing a tactic]
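One way to make those conditions explicit in advance is to write them down as simple, pre-agreed rules. Here is a minimal sketch; the conditions and wording are purely illustrative:

```python
def improve_decision(obvious_mistake, resource_problem, resources_available):
    """Illustrative pre-agreed rules for the improve stage.

    The conditions are hypothetical; agree on your own beforehand so
    the decision isn't bent by hindsight bias.
    """
    if obvious_mistake:
        return "Fix the mistake."
    if resource_problem and resources_available:
        return "Allocate more resources."
    return "Kill the tactic and free up the resources."

# Example: the videos failed on depth because of a weak opening, which
# is an obvious and cheap mistake to fix.
print(improve_decision(obvious_mistake=True,
                       resource_problem=False,
                       resources_available=True))
```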

Now you can easily apply the insight from our example as a practical step to improve the tactic. You have a clear action that might fix the video before you abandon the tactic. This is where measurement, analysis, and improvement combine so well.

You can do this for each of your tactics. This should not take too long and leads to continuous improvement in everything you do.

Summary

  1. Determine if the tactics were well-executed. Did they reach a large percentage of the target audience, have a considerable impact, and did that impact last for a long time?
  2. Define what you will measure using proxy metrics for reach, depth, and length. This might include views, ratings, and long-term trends. This will reveal what did and didn’t work.
  3. Analyze with context to explain why it did or didn’t work.
  4. Either fix it, kill it, or allocate more resources to it.
