The other night we went out with friends for dinner at a place with great food and terrible service. As we waited and waited and waited for our food, our poor server eventually ran out of excuses and reassurances that we would not be late for our play, and simply went to hide in the kitchen. We flagged the manager down, suggesting that an hour was a long time to wait for dinner, and we were sternly reminded that they never start preparing the main course until the appetizer is on the table. That, of course, does not explain why it took 45 minutes to cook a piece of fish, but it did make one thing clear: this manager is not compensated in any way based on customer satisfaction.
The same cannot be said of a good many managers and executives, and we have discussed some of the unintended behaviours that accompany linking compensation to customer satisfaction scores.
Here is another reason the hyper-focus on quantitative customer feedback data is a dumb idea: It doesn’t actually fix anything until it’s really, really broken.
Whether you use Net Promoter Score (NPS) or your own scorecard, chances are you are drowning in data, dashboards and tedious reports while desperately trying to figure out how to use all this big data stuff. And I’m going to bet that the only thing you are actually certain about is whether or not your Corporate Overlords are getting a decent bonus this year.
While that’s a good thing to know, for a lot of reasons, it’s actually a trailing indicator and tells you only that things were good, bad or the same at some recent point in time. It may also tell you the geographic locations of your happy and unhappy customers and which of your products is the most irritating. All good, but unless you are doing some deep qualitative work and root cause analysis, you are probably not fixing much.
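To see why, look at the arithmetic behind the headline number. Here is a minimal sketch of how an NPS gets computed; the function and the sample responses are made up for illustration, but the formula (per cent promoters minus per cent detractors) is the standard one.

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    Passives (7-8) count toward the total but cancel out of the score.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# A hypothetical month of survey responses: the single number tells you
# the temperature of the room at one point in time, and nothing about why.
print(nps([10, 9, 8, 6, 4, 9, 7, 3, 10, 5]))  # -> 0.0
```

Everything your customers wrote, said or meant gets compressed into that one number, which is exactly why the digging in the stories below matters.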
For example, a few years ago, I had a bunch of data suggesting that the user experience for one of my products was poor to very poor. These scores had been low for years and the product team had redesigned the interface more or less annually trying to fix it. So we dug and dug and dug and found that the user experience with the software was actually pretty good, but the learning curve was long. A further bit of digging uncovered that lousy documentation made learning the application difficult and customers either gave up after a few months or had taken to calling the support centre, which tagged the reason for the call as “user experience”.
My friend Andy had a similar issue with terrible scores for the service technicians his company deployed to repair food processing equipment. Month after month the NPS was in the toilet, and the chief complaint was that the technicians were showing up late. They sent the technicians on training courses and offered coaching, but the scores stayed low. It turned out that the automated system emailing customers to confirm the service call gave a two-hour window, while the scheduling system dispatching the technicians gave them a four-hour window.
Sometimes the issue isn’t even a particularly big one; it’s just that your customer service people don’t have the facts or the talk track to help you out. A while back, one of my clients was seeing a lot of customer churn, and the reason was price. Even in the follow-up interviews, customers cited price as the reason for leaving. Yet this was far from the most expensive product on the market, and it delivered terrific value. A bit of digging and some different follow-up questions revealed that a competitor had been targeting the base with a much cheaper, but also much more basic, product. The poor CSRs in the call centre just didn’t have a good way to talk customers out of leaving. We wrote up a competitive script, taught them how to have a conversation about value versus price, and churn went down.
Here’s a final example: one of my agencies recently lost a key account, quite without warning and with only a vague note about wanting a new creative direction. The VP of sales called them and got the same answer. So, too, did the CEO. Months later, they had a third party call up the lost client, and the truth came out: the client really didn’t like the account director but didn’t want to get her into trouble. A stupid reason to dump your agency, but without the qualitative small data, it would never have come out.
The point is that if all you do is count the numbers and trust the dispositioning – the tags your call centre slaps on each contact – then you are not going to move the needle much on actually fixing things.
Things to do about it:
- Stop going for so much quantity – not every transaction needs feedback. Hire a research firm and let them figure out a statistically sound sample size for you (the sketch after this list shows the basic arithmetic)
- Take the money you aren’t spending on surveying customers every ten minutes and put a real analyst on the job to help you understand the data you do have
- Set aside some more budget for follow-up calls; maybe even use a third party in case your customers aren’t telling you the truth
- Do focus groups
- Get someone to do some serious social listening: chances are, if your customers are pissed off enough to leave, they are probably pissed off enough to share
- Consider whether tying anyone’s compensation to a CSAT number is a good idea – far better to make them accountable for retention than happiness.
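On the sample-size point above: you do not need a survey after every transaction to get a statistically defensible read. Here is a minimal sketch of the arithmetic a research firm would run, using Cochran’s formula with a finite-population correction. The function name and the 95 per cent confidence, plus-or-minus five-point defaults are my illustrative assumptions; a real firm will tune them to your base and your questions.

```python
import math

def survey_sample_size(population: int,
                       confidence_z: float = 1.96,    # z-score for 95% confidence
                       margin_of_error: float = 0.05, # +/- 5 percentage points
                       proportion: float = 0.5) -> int:
    """Cochran's sample-size formula with a finite-population correction.

    proportion=0.5 is the conservative default: it maximizes variance,
    so the sample is never too small whatever the true answer turns out to be.
    """
    # Sample size for an effectively infinite population
    n0 = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    # Shrink it to reflect the fact that your customer base is finite
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# Example: a base of 20,000 customers needs roughly 377 completed surveys
# per wave, not a questionnaire stapled to every transaction.
print(survey_sample_size(20_000))  # -> 377
```

The punchline: even a base of 20,000 customers needs fewer than 400 completed surveys per wave, which frees up plenty of budget for the follow-up calls, focus groups and social listening above.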