Why Help Topic Ratings Don’t Improve Documentation
The platform we’re using for Headway Themes documentation has a built-in rating system—one of those really simple ones where, at the end of the article, you click the applicable option:
▲ I found this article helpful.
▼ I didn’t find this article helpful.
In the platform’s admin panel, each article shows its overall rating. As far as I could tell, each positive rating gave the article a score of 100%, and a negative rating gave it 0%. The overall score appeared to be the average of all ratings.
It didn’t take long after we launched this documentation site for me to realize that this rating system is ineffective. Here’s why.
Just the simple math behind this setup keeps it from providing any truly useful feedback. Let’s say the first person to rate a particular article says it was helpful. So it gets 100%. Great, my article was helpful! But then a second person happens along and decides the article didn’t really provide what he was looking for, so he rates it negatively. BAM! Just like that, the overall rating is now 50%, making this article look much less helpful than before.
A few more positive ratings may pull the score back up. But the situation only gets worse as ratings accumulate.
Say 12 people decide to rate a particular article; 8 rate it positively and 4 negatively. That’s a score of 800 out of 1200, giving us an overall rating of 67%. Each person’s input now counts for only 1/12 of the overall rating. And what happens when 12 more people rate the article? By the time the 24th person gets there, her input is down to a fairly insignificant 1/24 of the overall score.
As time goes on, each new rating has an ever smaller impact on the overall score, so the score will hardly ever change, even if you improve the article and it starts earning more positive ratings. It would take a long run of positive ratings to bring the overall score up noticeably.
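The dilution is easy to see in a quick sketch. (The scoring formula here is my assumption about how the platform works, based on what the admin panel showed; the numbers match the example above.)

```python
def overall_rating(positive, total):
    """Assumed platform formula: average of ratings, where each
    positive vote counts as 100 and each negative vote as 0."""
    return 100 * positive / total

# First rating is positive: the article looks perfect.
print(overall_rating(1, 1))   # 100.0

# One negative rating later, the score is cut in half.
print(overall_rating(1, 2))   # 50.0

# With 8 positive and 4 negative ratings, the score sits near 67.
print(round(overall_rating(8, 12)))   # 67

# Now watch how little a 13th (positive) rating moves the needle:
before = overall_rating(8, 12)   # ~66.7
after = overall_rating(9, 13)    # ~69.2
print(round(after - before, 1))  # 2.6 points, and shrinking with every vote
```

The swing from a single new vote shrinks roughly in proportion to 1/(n+1), which is why an established score barely reacts to fresh feedback.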
The Error Margin
As with any statistical measure, there’s some margin of error. Not everyone who visits the documentation takes the time to rate the articles they view. And there’s always the possibility that people will rate an article negatively without really trying to find the information they want.
And let’s not forget those who don’t rate it either way. The overall score will never reflect their opinions, so it may be a bit skewed (but likely less so as the number of ratings goes up).
The Missing Feedback
The biggest problem is the lack of meaningful feedback. Being told someone thought an article was helpful or not helpful doesn’t help me improve the content. It’s good to know what people do find helpful so I can do more of it. It’s better to know what’s not helpful so I’m aware of what’s missing, inaccurate, or confusing, and I can fix it.
Once these problems became clear, the team lead hid the rating options so we’d stop collecting ratings that tell us nothing. He’s in favor of having feedback, but only if we’re going to get meaningful information out of it. I’m right there with him. If we can add a way for people to attach a comment to their rating, we’ll probably bring the rating system back. But in the meantime, there’s little point.
Concrete and specific feedback I can do something about; a thumbs up or thumbs down is just a distraction.