Management consultant Peter Drucker once said: “If you can’t measure it, you can’t manage it”. That quote led to a business revolution that took us through the era of Kaplan and Norton’s Balanced Scorecard to the world today, where every interaction is measured, evaluated and optimised. Unlike other complex measurement systems, the Net Promoter Score is all about answering one simple question: how likely is it that you would recommend our company/product/service to a friend or colleague?
Customer retention is a big deal. Ask anyone in business and they will tell you the most efficient method of acquiring new customers is through recommendations. What Net Promoter does is quantify customer recommendations in such a way that it’s possible to compare companies against each other.
At a number of events I’ve attended recently, company CEOs have boasted about their score, comparing it to their competitors and peers. It happened a few hours ago during the VeeamON opening keynote, when Co-CEO and President Peter McKay noted his company’s Net Promoter Score was in the 70s – far exceeding that of his previous employer VMware and about double his competitors’ scores.
McKay also revealed that maintaining a high Net Promoter score was part of the incentive plan attached to his salary.
When a customer is asked how likely it is they would recommend your company, product or service to a friend or colleague, they give a graded score, generally on a scale of zero to ten. People who give a score of nine or ten are called “promoters”. Those who give scores between zero and six are called “detractors”, with the remainder called “passives”.
The Net Promoter Score, which can range from -100 to 100, is calculated by subtracting the percentage of responses that are detractors from the percentage that are promoters.
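The arithmetic is simple enough to sketch in a few lines of Python (a minimal illustration of the standard NPS formula, not code from any particular vendor):

```python
def net_promoter_score(scores):
    """Compute NPS from a list of 0-10 survey responses.

    Promoters score 9-10, detractors 0-6; passives (7-8) count
    toward the total number of responses but cancel out of the
    subtraction, pulling the score toward zero.
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Example: 5 promoters, 3 passives and 2 detractors out of 10
# responses gives 50% - 20%, i.e. an NPS of 30.
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 7, 4, 6]))  # 30.0
```

Note that wildly different response distributions can produce the same score, which is part of why the number on its own says so little.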
The problem with Net Promoter, in my view, is that it is a very superficial measure. While it might be interesting, it doesn’t answer the question of why people don’t recommend you. However, the Net Promoter question could be included in a broader customer satisfaction survey where you can delve into what makes your customers happy and what is important to them.
Making the Net Promoter a Key Performance Indicator (KPI) is an interesting decision. My experience with poorly set KPIs is that they can drive negative behaviour or influence people into doing what’s best for them personally rather than what’s best for the business. For example, you might be able to convince more customers to be promoters but it might come at the cost of employee satisfaction as you drive people to work longer hours.
Net Promoter, like any other performance measurement tool, can be useful. But beware of looking at it in isolation.
Comments
5 responses to “What Is The ‘Net Promoter Score’ And Why Should You Care?”
NPS is a complete load of trash! When I worked at a phone shop NPS was very heavily focused on and staff were judged based on NPS. The problem was that the system is easily manipulated.
Some staff would do all kinds of things to rig their score, and on the flip side customers would use the NPS surveys as a complaint form to the telco, giving low scores to staff that were not representative of the service they gave. This would happen even when the NPS survey was explained to each customer as not a telco survey but rather an in-store experience survey.
Always infuriated me when a score would come back as a 0 out of 10 with the comments saying “matt was fantastic to deal with in store but coverage at my cousins house is bad” (or something to that effect). Then managers etc would see the 0/10 and grill me about it!
If I ever get an NPS survey I rate 0/10 but put glowing details in the comments. Take that for Data!
There are hugely successful companies that make most of their money telling other companies how to measure effectiveness, efficiency and overall performance of entire companies all the way down to individual employees. The reason that they’re able to do that is that there isn’t really an easy answer – otherwise we’d all know and they’d be out of a job. It’s easy to pay a sales team and judge them on sales. It can be easy to measure wastage and labour-hours and work on improving them. The further you go away from making things and selling things though, the harder it is to judge performance. Design teams can sink hundreds of hours into a project and not have a great deal of feedback for many months.
As @matt0 said, the human interaction at the centre of service and hospitality is so dependent on subjective assessment by (unreliable) customers – a really bad problem dealt with adequately might be a “3/10” from a customer’s point of view, whereas a trivial problem dealt with quickly but poorly could be an “8/10”. Middle managers are notoriously difficult to assess, which is where so many stories of insane KPIs come from, not to mention all the stories of useless managers.
NPS, of course, also has a flip side. There are some people so badly burnt by a product, salesman, or store – that they become a detractor.
For example, anyone who buys a Microsoft Surface is asking for trouble. The Surface has a proprietary power supply cable that is prone to breakage because there is no strain relief. To make matters worse the Surface can’t be charged from the USB connector. When your power connector breaks you are SOL.
It pays not to push people down into the zeroes. Normally this can be done with two simple actions: following through on your promises, and communicating with the customer.
With our NPS, any score below 9 was essentially a fail. After serving customers who we knew would receive a survey (contracts, prepaid phones etc, not bill payments and things like that), staff would explain that the NPS is a reflection of the interaction in store, not a company rating, and basically plead with the customer to give a 9 or a 10. We even had little brochures to hand out to customers explaining they should score us 9 or 10!!
Personally I rarely give a 9 or 10 on any survey or review unless it has been exceptional, over and above the call of duty stuff. Then I’ll give a 9, but never ever do I give 10. As I mentioned in my comment earlier my past experience with NPS has left a very sour taste so if I ever have to respond to them myself I give 0/10 but in the comments will leave positive feedback (provided the interaction was positive) and also my actual score out of 10.
There were also cases of staff activating multiple services on customers’ accounts, or activating prepaid services in their names and keeping the SIM card so they could later respond to the survey themselves, then deactivating the service!!! All without telling the customer what was happening!!
Agree with what @spadge said that it can be very subjective. Could have 5 different people have an identical interaction and get 5 different scores.
I really hope businesses aren’t using NPS to rate staff as critically as I remember dealing with.
I’m much the same, I’ll score something a “6” when it’s as expected and change my score from there. It’d be incredibly rare for me to give a nine, especially in retail.
A hard judgement on something like an NPS is just so dumb, because it poisons otherwise useful information as people interfere with the system to generate 9s and 10s. KPIs can often suffer the same fate when they’re not well suited to a job – my KPIs this year constitute around 10-15% of my workload. I could do a terrible job by focusing on my KPIs to the exclusion of all else, but upper management would see a model employee.