We mentioned in the previous post that one of the most common issues among Customer Experience executives is that their organization’s satisfaction scores flatten out. It doesn’t matter how they keep score – whether it’s average satisfaction, Net Promoter or a top-box calculation – the curve inevitably plateaus after a couple of years.
This wouldn’t be a problem if they could confidently say that their organization had reached a state of customer experience perfection, but in most cases, employees and managers are painfully aware that there is still plenty of improvement to be made.
The problem with flat trend lines isn’t simply that they suggest a lack of progress. It’s also that they’re boring. It’s difficult to keep stakeholders interested and motivated when they see the same scores month after month, and many customer experience initiatives have stalled once satisfaction ratings reached a plateau.
Flat scores are actually just a sign that the VoC program needs to evolve. There are various actions that can be taken to push the program along, and different organizations approach the challenge in different ways. As a start, we offer a few do’s and don’ts. First, the do’s; next week we’ll follow with the don’ts:
Do: Bring other metrics to the foreground. Satisfaction ratings (or NPS, or however you’re keeping score) are not meant to be an end in themselves. They are intended to reflect customer attitudes and experiences as a means to achieving better business results. Eventually, satisfaction scores need to become less prominent as other success measures take the lead. Depending on what the goals of the program are, various operational and financial metrics may be brought forward, including complaint volumes, retention rates, new accounts, customer spend and average cost-to-serve. This doesn’t mean that satisfaction ratings disappear; they should continue to serve as an important indicator of the customer relationship. But as the Chinese proverb goes, “When the finger points at the moon, the fool looks at the finger.”
Do: Focus more heavily on open-ended responses. Numbers are nice because they’re easy to analyze and display. Words, on the other hand, are messy, and analyzing them is labor-intensive. As a result, it is common for VoC researchers to severely limit the use of open-ended questions on their surveys. It is also common to find that the research team is sitting on a pile of un-analyzed comments, hoping they will eventually have the time to make sense of them. That’s a missed opportunity: customers’ own words often point to the specific experiences behind a flat score that the ratings alone can’t reveal, and mining them can re-energize a program that the numbers have stopped moving.
Do: Segment the results. Rather than tracking an overall satisfaction score for the company, it is often more productive to break the scores out by relevant customer groups and monitor them separately. Different groups may have different satisfaction criteria, as well as different expected ranges of satisfaction. For example, business travelers typically give lower satisfaction ratings than pleasure travelers, even though they may, on paper, appear to be more “loyal” to a specific hotel brand or airline. Understanding how different groups are best satisfied and what the relevant ranges of their satisfaction ratings are will allow you to focus your improvement efforts more effectively.
Do: Recruit new stakeholders. As Voice of the Customer programs mature, they often apply customer feedback in new ways to meet the needs of an expanding base of internal clients. While VoC may initially be used for service recovery, front-line coaching and satisfaction monitoring, over time the information can be systematically applied to support product innovation, process improvement, vendor relations, training and communications content, and other important organizational needs. At the same time, the VoC team may evolve from an analytical and report-generating group to an internal consulting organization, working closely with a wide range of stakeholders to help them advance their business objectives.