
The Net Promoter Score and the “couldn’t care less” group…

The Net Promoter Score (or NPS) is the ‘darling’ of Sales & Marketing, Customer Success and Product professionals worldwide. While it’s a simple enough score to capture and measure, the NPS can drive various serious and sometimes far-reaching decisions. And, as many of us know, we need to be careful what we measure, because ‘what gets measured gets managed’… and that is not always a good thing.

The Net Promoter Score is one of the main metrics or KPIs used not only to gauge the relative appeal of products or services, but sometimes even to validate their very existence. For some people, it is as if a high NPS is required as a raison d’être for a Product or Service, rather than the simpler (and perhaps subconsciously distasteful?) purpose of generating revenue and staying in business. A high NPS may be the thing that glosses over the fact that businesses generally exist to make money, with everything after that being secondary, or it may be the metric towards which User Experience, Marketing and customer-facing stakeholders drive their entire strategy in pursuit of measurable ‘improvement’.

So, what is the NPS? And why is it so apparently all-conquering as a success metric in some business functions but not others? And does it (ironically) hide the major problems with a Product or Service faced by many – the majority, even – of customers…?

In simple terms, the Net Promoter Score is a value assigned by customers or users based on how likely they are to actively and explicitly recommend (or ‘promote’) a given product or service to their friends and family. In reality, the theory goes, users or customers will only recommend things they actually believe in to a friend or family member, and usually only at a relevant, spontaneous or opportune moment.

If recommending to strangers, of course, most users probably wouldn’t care whether the product or service is any good! Just ask any paid ‘influencer‘…

NPS measuring tools ask users to rate, typically on a 0–10 scale, how likely they are to actively recommend the product or service. In the standard formulation, the low scorers (‘detractors’) are netted off against the high scorers (‘promoters’) as percentages of all responses, while the middling ‘passives’ count for nothing either way. The result is the NPS.
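To make the netting concrete, here is a minimal sketch of that standard formulation in Python (the scale and thresholds below are the commonly cited ones; individual survey tools may well slice the numbers differently):

```python
# Minimal sketch of the commonly cited NPS formulation:
# respondents score 0-10; 9-10 are 'promoters', 0-6 are 'detractors',
# and 7-8 ('passives') neither add nor subtract.
# NPS = % promoters - % detractors, giving a value between -100 and +100.

def net_promoter_score(scores):
    """Compute an NPS from a list of 0-10 ratings given by respondents."""
    if not scores:
        raise ValueError("no responses to score")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Example: 5 promoters, 3 passives and 2 detractors give an NPS of +30.
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 7, 3, 5]))
```

Note, crucially, that `len(scores)` only ever counts the people who answered.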

Imagine an NPS-style survey that asks “How likely are you to recommend [some product] to your friends and family?”. It uses a scale of 1 to 9, with 1 being “Not in a million years” and 9 being “I’m doing it right now, repeatedly, to everyone I know!”. Some algorithm disregards some outliers based on some rules and comes up with an average of the remainder. This would be a form of NPS score – and what you really should be checking is whether the periodic trend is up or down. The key point is that a ‘good’ NPS should result in organic growth by referrals (including to virtual friends on social media), while a ‘bad’ NPS is a sign that customers will actively disparage (and ultimately kill!) your Product. Any changes you make should aim to produce a higher NPS at a later date.

Easy.

But pointless.

All in my humble opinion of course.

Of those people who provide a very high score, how many of them actively refer friends and family to products or services, in real life, on an ongoing basis?

Sure, asking if visitors would recommend your site, service or product might make you feel good, because people generally respond positively (so if you have a negative NPS, you really are in trouble!). But saying they would and actually doing so are entirely different things. Anyone who has ever tried to organise a weekend away with friends or family knows that the hypothetical ‘would’ and the actual ‘did’ are completely different!

But a more obvious (yet apparently irrelevant… maybe because it can’t be measured…?) cohort of customers appears to be left out of the entire NPS debate. I am one of them, and I doubt this is a small group… In short (Shock! Horror!), I usually don’t care enough about a Product or Service to warrant answering the question.

How many of us have seen feedback-gathering mechanisms like the familiar terminal with its row of smiley-face buttons, from delighted to furious, whether it be in an Airport, a Hospital, a Mall or anywhere else…?

Apart from the fact that there is no way to know why someone might select a particular button, there is an equally important question: how many people simply walk past and don’t pay it any attention (and, by extension, what do we assume about them)…?

Do we assume they simply didn’t see the question? Or saw it and dismissed it? And of those that saw it and dismissed it, why did they do that? How can we tell if those people would prefer a button further to the left or further to the right?

In other words, how do you account for those who literally couldn’t care enough to even bother answering…?

By definition, these people are implicitly telling you they don’t care enough to even click a button, so it’s reasonable to believe they won’t care enough to proactively recommend your site, service or product to their friends and family.

Why does this matter? Well, in Product innovation and development, Sales & Marketing and a variety of other business disciplines, teams can sometimes put a lot of stock in what their customers tell them they would do, buy or use. But when a product is eventually made available to the market, how likely is it that those customers will actually buy or use it? Probably about as likely as those who said they ‘would’ recommend it to a friend or family member actually doing so (which is to say, not very).

There is an equally important, related question: how many of your target audience (maybe the vast majority?) don’t care enough to answer at all? Arguably they should be counted as an ‘absolutely not’, but instead they are simply excluded. Even assuming that the non-respondents are spread across the same distribution pattern as the respondents (in some sort of confused misuse of statistics) would be completely unjustified, given the available evidence (or, more precisely, the lack thereof!).

If you don’t know what ratio of ‘asks’ were answered, then how do you know if your perfect NPS is representative of 1 in 2 of your users & customers… or 1 in 2 million? Even if you do know how many invited respondents have / have not responded, any assumption of any kind about the non-respondents is destined to be wrong.
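To put some (entirely invented) numbers on that, the same ten answers produce exactly the same score whether they came from half of your customer base or from a vanishingly small fraction of it:

```python
# Hypothetical illustration with invented numbers: identical answers,
# identical NPS, wildly different coverage of the customer base.

responses = [10, 9, 9, 10, 9, 8, 7, 7, 3, 5]
promoters = sum(1 for s in responses if s >= 9)
detractors = sum(1 for s in responses if s <= 6)
nps = 100 * (promoters - detractors) / len(responses)

for customer_base in (20, 2_000_000):
    rate = 100 * len(responses) / customer_base
    print(f"NPS {nps:+.0f} from {len(responses)} answers out of "
          f"{customer_base:,} customers ({rate:.4f}% response rate)")
```

The score itself carries no information about how representative it is.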

I’m sure some NPS-like algorithms have mechanisms in place to account for the ‘no show’ responses, but I’m equally convinced (from a Scientific perspective) that they are most probably wrong or, at best, an educated guess based on other use cases. In Science, a known unknown is never replaced by an assumed answer that is then simply assumed to be right. The known unknown here is the answer of each person who didn’t respond. Assuming they don’t matter is easy (and neutral on the surface of it). Assuming that their answers would be similar in distribution to those who did answer is a fallacy (one on which whole companies might bet the shop).

The counter-argument is that non-respondents should simply be considered negatives. But what would that achieve, and would it be the death of the NPS as a useful mechanism for spotting trends? Probably not. The good news is that it would still be the periodic trend over time that matters… not the actual number.
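As a rough sketch (the figures below are invented purely for illustration), counting every silent invitee as a detractor crushes the absolute number, but the quarter-on-quarter direction of travel can still be perfectly visible:

```python
# Invented figures: compute NPS twice per period, once from respondents
# only and once with every non-respondent counted as a detractor.

periods = {
    # period: (promoters, passives, detractors, invited_but_silent)
    "Q1": (500, 300, 200, 9_000),
    "Q2": (650, 300, 150, 8_900),
}

for period, (prom, pasv, detr, silent) in periods.items():
    responded = prom + pasv + detr
    respondents_only = 100 * (prom - detr) / responded
    pessimistic = 100 * (prom - (detr + silent)) / (responded + silent)
    print(f"{period}: respondents-only NPS {respondents_only:+.0f}, "
          f"non-respondents-as-detractors NPS {pessimistic:+.0f}")
```

In this made-up example the ‘honest’ score collapses from +30 to around -87, yet both versions move upwards from Q1 to Q2, so the trend survives.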

If the question is: “would you recommend this product or service to your friends or family?“, then an answer that basically says “I couldn’t be bothered enough to even answer this question” might be assumed to also mean “…so I certainly wouldn’t be bothered actively promoting the product or service“.
