Turn Off Delight
Posted by NRadmin on July 7, 2014

In the not too distant past, companies determined the success of their product and service offerings by measuring customer satisfaction. The idea was that if the expectations of customers were adequately met, they would return to do business again. Customer satisfaction wasn’t a very exciting notion, but for years it served as an accepted measure of performance in American industry.

At last it dawned upon a handful of smart people that satisfaction wasn’t the issue. If companies knew their business and conducted it competently, most of their customers would be satisfied. Customer Satisfaction wasn’t an achievement, it was the default state of the business.

A new rallying cry emerged: “Satisfaction isn’t enough!” And with that cry came a new buzzword: Customer Delight.

The idea was that delighted customers are better than satisfied customers because they buy more, complain less, spread positive word-of-mouth and exhibit other profitable behaviors.

Many companies took this to mean that they should exceed customer expectations every time. Like the town of Lake Wobegon, where all the children are above average, these companies could never simply meet expectations. Instead, they had to create a memorable event in the mind of every customer who walked through the door.

It sounded good at first, but unfortunately it’s a preposterous notion. If you always exceed expectations, then expectations will simply rise. With ever-escalating expectations you have to offer more and more to make an impression. Eventually your employees will be dancing naked in the aisles and flinging coins at customers to get their attention.

Despite the logical weakness of the idea, an untold number of companies jumped on the Customer Delight bandwagon. The results have ranged from comical to horrifying: Grinning clerks following customers around the store. Waiters who interrupt intimate dinner conversations to introduce themselves and make small talk. Hotel operators who answer calls from guests by saying, “How can I enhance your experience?” Legions of service workers determined to become every customer’s close personal friend.

The Customer Delight fad has been very hard on employees. In an effort to ensure that every customer experience is exceptional, companies have tied employee evaluations and incentives to customer feedback ratings in such a way that anything below a perfect score is considered a failure. The results have in many instances been counterproductive. A few examples:

  • Automobile dealerships have taken to demanding that their sales and service staff get perfect scores on their customer follow-up surveys. Predictably, customers have been given surveys with the ratings already filled in by the dealership staff. In at least one case, a customer who would not give high ratings was refused service on a subsequent visit because he had caused an employee to lose his bonus.
  • A major grocery chain sent “mystery shoppers” into its stores to make sure employees were complying with service standards. One of the standards was to smile at customers. In some stores customers began to complain about the creepy employees who wouldn’t stop smiling. Moreover, some female employees began to complain that male customers were interpreting their smiles as flirting, leading to a threatening work environment.
  • An office-supply retailer linked a portion of its store manager bonuses to customer feedback from comment cards. If even a single negative comment was received, the manager’s bonus would be reduced. Again, the result was predictable: managers spent their time making sure unhappy customers never got an opportunity to complain.

The fact is, customers don’t want to be delighted all the time. It’s exhausting, both for customers and for employees. Every consumer receives service dozens of times a week from a wide range of companies, and most of those service interactions are, and should be, routine. Customers want service workers to be efficient, helpful and pleasant, but the service interaction itself should not become the center of attention. It is the outcome that matters.

The key to achieving customer delight is not excess, but opportunity. While most service is routine, every once in a while a situation arises that is out of the ordinary: a complaint, a question, a special request, a chance for an employee to go the extra mile. If employees are trained to look for these opportunities and empowered to act on them when they appear, customers will be delighted at the right time. These rare, but exceptional, service experiences will furnish customers with positive stories about the company, and eventually the company will build a reputation that differentiates it from the competition. When it comes to delight, a little goes a long way.

 

The Cult of NPS
Posted by NRadmin on April 8, 2014

Net Promoter Score (NPS) has been around for more than a decade now, and its popularity just keeps growing. Despite some objections when it was first introduced, the attractions of NPS have proven to be more compelling than the arguments against it. NPS is popular because it’s simple, it’s intuitive, and it leads to action. Companies understand the approach and are doing something with the results.

So, from a practical point of view, NPS has proven to be effective. Unfortunately, many users approach it in a dogmatic manner, treating it as a strict formula rather than what it really should be: a useful set of guidelines. In some cases there is an almost cult-like devotion to the details of the method, with limited appreciation for the fact that it is the broader principles, rather than the specifics, that make NPS valuable.

“True believers” treat the formal aspects of NPS, such as the wording of the survey question, the 11-point rating scale, and the formula for calculating the score, as sacrosanct. But what do companies do when it doesn’t make sense to follow the NPS recipe precisely? For example:

  • Many organizations have internal conventions that require the use of specific rating scales. Can they use a 7-point or a 9-point scale to measure NPS, or must they adhere to the “correct” 11-point version as dictated by the literature?
  • What if the “likely to recommend” question is inappropriate for the target respondent group? Companies may wish to gather feedback from stakeholders who are not in a position to recommend, and who are not being relied upon to recruit new business through word-of-mouth. Is there another question that can be asked that maintains the spirit, if not the form, of NPS?
  • Does the formula for calculating the score (the percentage of promoters, who give ratings of 9 or 10, minus the percentage of detractors, who give 0 through 6) apply to all populations, or are there cases in which it needs to be re-calibrated? (A minimal sketch of this arithmetic follows the list.) For example, in certain countries (such as Japan), respondents tend to give lower ratings than in the US, while in others (such as Mexico) the ratings tend to be higher. There are also some customer relationships, particularly in the business-to-business world, where the distribution of ratings is likely to vary from the broader consumer population. Can the formula be changed in these cases?
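
For readers who want the arithmetic spelled out, here is a minimal sketch in Python. It follows the standard convention described above (ratings of 9 or 10 count as promoters, 0 through 6 as detractors); the sample ratings at the bottom are invented purely for illustration.

    # Minimal sketch of the standard NPS arithmetic on the 0-10 scale.
    # Promoters rate 9-10, detractors rate 0-6; passives (7-8) are ignored.
    def net_promoter_score(ratings):
        """Return NPS in percentage points, from -100 to +100."""
        if not ratings:
            raise ValueError("need at least one rating")
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100.0 * (promoters - detractors) / len(ratings)

    sample = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]  # hypothetical responses
    print(round(net_promoter_score(sample)))   # 5 promoters, 2 detractors -> 30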

The fact is, NPS can and should be modified when the situation calls for it. So long as the underlying principles of the method are maintained, there is no reason the details should not be adjusted to suit the circumstances.

The key is to understand these principles. There are four:

1.    Keep it simple. NPS was developed in part as a reaction to lengthy and complicated satisfaction surveys. While these surveys may collect valuable data, the reports are often perceived as too “researchy”; many managers find them confusing or intimidating. NPS is simple and easy to understand, and any modifications to the form should stay consistent with that approach.

2.    Focus on the tails of the curve. NPS is based on the notion that customers who give very high or very low ratings are more “energized” than neutral customers. As such, they are more likely to take action, either to the benefit or the detriment of the company. By focusing on the tails of the curve rather than average ratings, NPS is designed to help promote specific customer behaviors, such as retention and referral, and is therefore thought to be a more effective way to impact the company’s financial performance than simply raising average satisfaction levels.

3.    Read the comments. Customers who have something to say to a company are generally not interested in providing a lot of ratings. They just want to tell their story. Research professionals, on the other hand, would prefer to have nice, clean numbers to work with because they know that customer comments can be messy, time-consuming, frustratingly vague (or maddeningly detailed) and full of contradictions. Nevertheless, open-ended comments provide the most important information in an NPS study. They truly represent the voice of the customer, providing managers with rich data that can be applied in a variety of ways.

4.    Take action. NPS is not simply a survey or a number; it also includes a process for taking action  on the findings. In NPS programs, managers are assigned accountability for survey results and expected to systematically apply the information within their areas of responsibility. They may use NPS data to document problems and root causes, spot emerging trends, identify unhappy customers who are at risk of leaving, improve product and service offerings, or any number of other applications. However, they do not use NPS as simply another number on a dashboard.

So, keeping these principles in mind, go ahead and fiddle with the NPS formula if it makes sense for your organization.

For a more extensive discussion on this subject, download NetReflector’s article, Modifying NPS.

 

Customer Loyalty – A Fable
Posted by NRadmin on December 16, 2013

There is a charming little bedtime story that is popular among corporate managers. It goes like this . . .

A customer was lost in the woods. As he stumbled around trying to find a way out, he came upon various stores and businesses.

“Can you help me?” he asked as he approached each one.

“Forget it, pal,” they said. “You’re on your own.”


At last, as he was near despair, the customer saw a bright light through the trees. As he approached it he saw that it was the light of Our Company.

“Can you help me?” he pleaded.

“Of course,” said Our Company. “I will delight you with my service. I will be friendly and quick, and will always remember to thank you and ask if you found everything you were looking for and tell you to have a nice day.”

“Oh joy!” cried the customer. “What can I do to repay your kindness?”

“Just this,” said Our Company. “You must be loyal to me.”

“What does that mean?” asked the customer.

“It means that you will stay with me through thick and thin, better or worse. You will not fool around with other companies. You will tell your friends how wonderful I am, and you will not leave me if I occasionally do something stupid.”

“Like what?”

“Oh, you know,” said Our Company. “Like making a few billing errors or getting your order wrong. Also, you will stay with me even if my prices are higher than other companies, or if another company opens up closer to you or if someone else offers a better selection.”

“OK,” said the customer. “That sounds like a perfect deal. I accept!”

“Wonderful,” said Our Company. “Then we will have a relationship, which I will manage.”

So the customer married Our Company and they lived happily ever after.

If only fairy tales could come true.

Unfortunately, the real ending to the story is a bit more tawdry. What actually happened was this:

Not long after they got married, the customer was walking in the woods when he came upon another company.

“Wow!” said the customer as he looked the company over. “You’re really stocked!”

“Why don’t you come in and sample my wares?” said the company.

“Oh, no,” said the customer. “I cannot, for I am loyal to another.”

Continuing his walk, the customer saw yet another business.

“Come check out my prices,” it said. “Have you ever seen anything so low?”

The customer stared at the prices, which were so low they made him blush. Nevertheless, he declined the invitation to switch, saying that he was loyal to another.

“What does the other one have that I don’t?” said the low-price provider.

“Friendly service that exceeds my expectations every time,” said the customer.

“OK, but it’s your loss.”

The customer remained loyal to Our Company for some time, but eventually the temptation to stray became too great. He went to Our Company and said, “I’d like to start seeing competitors.”

Our Company was devastated. “But what about your loyalty?” it said. “Don’t I delight you anymore?”

“It’s not that. The delight is great, and nobody personalizes like you. I guess I’m just not a one-company guy. But we can still be friends.” And the customer danced away.

“Come back!” wailed Our Company. “Oh, why do the good ones always leave?”

Just then the Good Fairy of Sensible Business Practices descended from above, fastened securely by a strong wire (for she was a sensible fairy).

“What a tragedy,” she said. “You made a great couple.”

“Why, oh Good Fairy, won’t customers stay loyal to me?” said Our Company.

“It’s not you, it’s them. Customers are just natural philanderers who will break the heart of any business that wants a monogamous relationship.”

“Are there no loyal customers, then?”

“That depends on what you mean by ‘loyal’,” said the Fairy.

“I mean, customers who feel a strong bond and are devoted to me and will act on my behalf. And pay a price premium. And stay with me when I screw up. And not do business with my competitors.”

“In other words,” said the Fairy, “your idea of a loyal customer is someone who can’t make rational business decisions.”

“Exactly.”

“Well good luck with that.”

Now as it happened, Our Company eventually learned how to attract rational customers who would do many of the things that it wanted imaginary loyal customers to do.

But that’s another story . . .

 

 

 

Note To Service Professionals: Knock It Off!
Posted by NRadmin on October 7, 2013

“How has your day been going so far?” asked the barista as I stepped up to the counter.

It was 5:30 in the morning.

My day “so far” had been short and foggy. But it wouldn’t have mattered if I had been asked the question at 2:00 in the afternoon (which I was when I returned for a refill). At any time of the day it would have been a silly-sounding question.

And yet the question was clearly not the invention of this particular barista. I have been asked how my day is going so far at numerous restaurants, banks and retail stores – even by call agents on customer support lines. Somebody, somewhere, obviously thought this was a great question to break the ice, engage customers and make the retail experience more personal. It was probably included in some company’s customer service training class, and has since spread to other businesses like the latest strain of flu.

Unfortunately, “how is your day going so far” is just one example of an ever-growing repertoire of fatuous customer engagement techniques that are appearing wherever you do business. Here’s another one, which is popping up in restaurants: “How is everything tasting?” Or, this particularly frightening variation: “How are your first bites?” (What does that even mean?)

At one of the largest banks in the country, tellers have been instructed to reach over the counter and shake hands with customers. A colleague of mine has actually switched banks because he finds this practice so annoying.

All of these techniques were devised with the best of intentions. Companies are trying to create memorable customer experiences so they can build loyalty, generate positive word-of-mouth and differentiate themselves from the competition. Unfortunately, what sounds reasonable in a strategy meeting or training class can seem painfully awkward when applied in the real world.

So, note to customer service professionals: Knock it off!

Thank you, and have a nice day so far.

 

Facial Recognition and the Customer Experience
Posted by NRadmin on September 4, 2013

In an interesting use of technology to enhance the customer experience, a few retailers and hotels are now using facial recognition software to identify rich and famous patrons. The idea, of course, is that these VIPs should be given special treatment so they’ll spend more and avoid getting their feelings hurt if they aren’t recognized.

Nothing new about that – high-end establishments have always bent over backwards to please big spenders and celebrities. But now they can improve their odds of identifying the “right” people with sophisticated software that recognizes important faces, even if the appearance of the customer has changed – for example, if they have gained weight, grown a beard or are wearing sunglasses.

Of course, if the software can recognize the rich and famous (even in disguise), it ought to be able to recognize the rest of us as well. It’s just a matter of getting all our faces into a database. No one has done that yet, but it’s bound to happen sooner or later.

In the meantime, a less precise version of facial recognition is being used to do on-site market research in stores. The software in this case does not recognize individuals, but it does identify characteristics such as sex, race and approximate age. This allows researchers to understand who is attracted to products and merchandising displays and to adjust their messaging to better appeal to these customer groups.

Using facial recognition technology to enhance the service experience represents a small leap from the early claims of CRM, which promised to “personalize” service by remembering customer profiles, contact histories and individual preferences. Many of those CRM systems were a disappointment in the early days, but over time the technology and applications improved and they’ve started to deliver better results. Whether facial recognition eventually develops widespread utility remains to be seen. In the meantime, front-line service professionals will still be practicing facial recognition the old-fashioned way – by remembering who their customers are.

 

The Law of Unintended Consequences in Service
Posted by NRadmin on June 26, 2013


We came upon a post a while back on BestMoviesEver.com, which illustrates an increasingly common ripple effect from bad phone service. The poster wrote about making numerous, fruitless calls to his cable company to get a simple problem fixed. Completely frustrated, he drove to one of the company’s stores, where a salesperson had to provide the support he needed.

This isn’t just a cable issue; we have encountered this scenario before with other companies and in other industries. Customers are using brick and mortar locations, which exist to generate revenue, as back-up customer support when the contact center fails to solve the problem.

In one case, we were able to calculate how much this situation was costing. A large bank had made changes to the IVR in its contact center, making it more difficult for customers to get assistance if they did not have their 16-digit account number handy. After the IVR change, call abandon rates went up and customer satisfaction went down. Comments from unhappy customers suggested that many were taking their problems to the branch to get resolved.

We conducted a survey of branch personnel to test this hypothesis, and found that 58% of locations reported an increase in IVR-related customer traffic since the new system was activated. 80% said they spent more than an hour a week handling these issues, and 28% spent more than three hours.

 

We found that the bank was spending nearly $9 million per year in branch personnel labor costs to handle support issues that customers were unable or unwilling to get resolved through the contact center. The problem went beyond the cost of labor, of course – the situation was also causing longer wait times in the branches (resulting in lower satisfaction) and tying up branch managers and sales personnel with non-revenue-generating activities.
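
The post does not publish the inputs behind the $9 million figure (branch count, hours per branch, labor rate), so the sketch below is purely illustrative: every input is a hypothetical placeholder, chosen only to show the shape of the extrapolation from survey percentages to an annual dollar figure.

    # Hypothetical back-of-the-envelope extrapolation of branch labor cost
    # from survey results. None of these inputs come from the bank study;
    # they are placeholders that illustrate the arithmetic.
    branches       = 5000    # assumed branch network size
    pct_affected   = 0.58    # share of branches reporting extra IVR-related traffic (from the survey)
    avg_hours_week = 1.5     # assumed average hours per week spent per affected branch
    loaded_rate    = 40.0    # assumed fully loaded labor cost per hour, in dollars
    weeks_per_year = 52

    annual_cost = branches * pct_affected * avg_hours_week * loaded_rate * weeks_per_year
    print(f"Estimated annual cost: ${annual_cost:,.0f}")  # about $9.0 million with these placeholder inputs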

To be fair, the bank was also saving money by shifting more of its call volume to self-service. But until it started looking at the ripple effects, the true cost-benefit remained hidden from everyone – except its customers.

 

Looking for perfect scores? Be careful what you ask for . . .
Posted by NRadmin on June 12, 2013

One of the prevailing notions in the world of customer experience is that we should always strive to delight. We know we’ve succeeded when a customer gives us the highest possible rating on our feedback survey. If they give us a perfect score, they’re likely to be active advocates for our organization and attract new customers with their glowing endorsements.

This belief has led to some unfortunate behaviors on the part of company employees and managers, who are often under intense pressure to deliver those perfect scores. For years, consumers have complained about being pushed into giving high ratings by sales staff in automobile dealerships, in some cases being handed surveys with the ratings already filled in.

In the past few years, this trend has escalated to other industries. In a recent post on the Consumerist website, a customer sent in a photo of a note taped to a pizza box that was delivered from a major chain. It offered a $1 discount off the next pizza purchased if the customer completed the post-purchase survey. But there was a catch: “Only 5’s count.”

This type of behavior is becoming ubiquitous. We have seen it in restaurants, coffee shops, car rental offices, retail stores and bank branches. In case after case, employees practically beg customers to give them perfect ratings on point-of-sale and other post-transaction surveys.

This trend is troubling on more than one level. As a customer, it creates an uncomfortable dynamic in which the motivation for completing a questionnaire is no longer to provide honest feedback, but rather to help employees get their bonus. As a researcher, such aggressive electioneering leads to corrupt data and drives down consumer willingness to participate in surveys.

What’s the solution? For a start, companies should consider easing off on the demand that every service experience must result in a perfect survey score. In the end it’s probably better to receive honest feedback than to achieve pretend perfection.

 

When Satisfaction Scores Go Flat – Part 2
Posted by NRadmin on May 28, 2013

In the last post we mentioned some of the actions that can be taken when customer satisfaction scores flatten out. We listed a few “Do’s”; now let’s look at a few “Don’ts”:

Don’t: Shrink the scope. Satisfaction surveys can become overly focused on the needs of a specific user group, often at the expense of providing in-depth information about the customer relationship. For example, post-transaction surveys may be used primarily for coaching and rewarding call agents and other front-line service personnel, and over time become shortened to exclude any questions that are not directly related to the customer’s interaction with the agent. But this narrowly scoped data leaves out important information about the customer’s overall experience and relationship with the company. In general, Voice of the Customer programs should include both in-depth relationship surveys and transaction-based feedback, and the transaction feedback should capture information about the entire experience, not just the performance of the service agent.


Don’t: Change the scale. Some organizations fall into the trap of blaming the messenger, assuming that a different scale or manner of asking about satisfaction will change the result. Here are some hard truths:

• Bigger satisfaction scales don’t give you more precision. As a practical matter, all satisfaction analyses tend to break down into three buckets: Negative, Neutral and Positive. Whether you’re using a 5-point scale or a 100-point scale, you’ll still be looking at those three categories in the end. (A small illustration of this bucketing follows the list.)
• Using multi-dimensional indexes may not help, either. Combining and weighting several metrics, like Overall Satisfaction, Willingness to Recommend, Likelihood to Repurchase, etc., sounds scientific and gives the illusion of greater precision. Unfortunately, these formula-based indexes are seldom better predictors of business performance than simply tracking Overall Satisfaction.
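
To make the point concrete, here is a tiny sketch of that bucketing in Python. The cut points are arbitrary placeholders (bottom 40% of the scale counts as negative, top 20% as positive); real programs set them per scale and per study.

    # Sketch of collapsing any rating scale into the three buckets described above.
    # The cut points are arbitrary placeholders, not a recommendation.
    def bucket(rating, scale_max, neg_cut=0.4, pos_cut=0.8):
        """Map a 1..scale_max rating to 'Negative', 'Neutral' or 'Positive'."""
        frac = rating / scale_max
        if frac <= neg_cut:
            return "Negative"
        if frac >= pos_cut:
            return "Positive"
        return "Neutral"

    print(bucket(2, 5))     # Negative on a 5-point scale
    print(bucket(4, 5))     # Positive on a 5-point scale
    print(bucket(65, 100))  # Neutral on a 100-point scale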

Don’t: Settle for “good enough”. If satisfaction ratings have reached a plateau, it may be tempting to rationalize by claiming that further improvement is unnecessary or unaffordable. But this is seldom true. Executives at companies with superior service levels, such as Nordstrom, are frequently heard to use phrases such as, “We’re still far from perfect”, “We have a long way to go”, and “We’re always working at getting better”. If scores are flat, it’s time to work harder, not to relax.

To download a “NetReflector Best Practices” article on this topic, click here.

 

When Satisfaction Scores Go Flat – Part 1
Posted by Pgurney on May 2, 2013


We mentioned in the previous post that one of the most common issues among Customer Experience executives is that their organization’s satisfaction scores flatten out. It doesn’t matter how they keep score – whether it’s looking at average satisfaction, Net Promoter or a top-box calculation – the curve will inevitably plateau after a couple of years.

This wouldn’t be a problem if they could confidently say that their organization had reached a state of customer experience perfection, but in most cases, employees and managers are painfully aware that there is still plenty of improvement to be made.

The problem with flat trend lines isn’t simply that they suggest a lack of progress. It’s also that they’re boring. It’s difficult to keep stakeholders interested and motivated when they see the same scores month after month. Many customer experience initiatives have stalled when satisfaction ratings reach a plateau.

Flat scores are actually just a sign that the VoC program needs to evolve. There are various actions that can be taken to push the program along, and different organizations approach the challenge in different ways. As a start, we offer a few do’s and don’ts. First, the do’s; next week we’ll follow with the don’ts:


Do: Bring other metrics to the foreground. Satisfaction ratings (or NPS, or however you’re keeping score) are not meant to be an end in themselves. They are intended to reflect customer attitudes and experiences as a means to achieving better business results. Eventually, satisfaction scores need to become less prominent as other success measures take the lead. Depending on what the goals of the program are, various operational and financial metrics may be brought forward, including complaint volumes, retention rates, new accounts, customer spend and average cost-to-serve. This doesn’t mean that satisfaction ratings disappear; they should continue to serve as an important indicator of the customer relationship. But as the Chinese proverb goes, “When the finger points at the moon, the fool looks at the finger.”

Do: Focus more heavily on open-ended responses. Numbers are nice because they’re easy to analyze and display. Words, on the other hand, are messy, and analyzing them is labor-intensive. As a result, it is common for VoC researchers to  severely limit the use of open-ended questions on their surveys. It is also common to find that the research team is sitting on a pile of un-analyzed comments, hoping they will eventually have the time to make sense of them.

Do: Segment the results. Rather than tracking an overall satisfaction score for the company, it is often more productive to break the scores out by relevant customer groups and monitor them separately. Different groups may have different satisfaction criteria, as well as different expected ranges of satisfaction. For example, business travelers typically give lower satisfaction ratings than pleasure travelers, even though they may, on paper, appear to be more “loyal” to a specific hotel brand or airline. Understanding how different groups are best satisfied and what the relevant ranges of their satisfaction ratings are will allow you to focus your improvement efforts more effectively.

Do: Recruit new stakeholders. As Voice of the Customer programs mature, they often apply customer feedback in new ways to meet the needs of an expanding base of internal clients. While VoC may initially be used for service recovery, front-line coaching and satisfaction monitoring, over time the information can be systematically applied to support product innovation, process improvement, vendor relations, training and communications content, and other important organizational needs. At the same time, the VoC team may evolve from an analytical and report-generating group to an internal consulting organization, working closely with a wide range of stakeholders to help them advance their business objectives.

 

Sitting On A Pile Of Customer Comments?
Posted by NRadmin on March 19, 2013

We recently attended a small conference for customer experience executives, hosted by a group called Consero. The event consisted of a series of intimate panel discussions on a variety of topics relevant to the world of customer experience management, including Voice of the Customer (VOC) programs.

When talking with other attendees about VOC, we heard two commonly repeated complaints:

• Satisfaction scores are flat, and
• We’re sitting on a backlog of open-ended comments that we don’t have time to analyze.

In the next two posts, we’ll take up the subject of flat scores, including some do’s and don’ts for how to address this issue.
But for now, a few words about those open-ends: Don’t ignore them! The most useful information on your customer survey is hiding in the comments.

Unfortunately, it’s a lot more work to analyze text than numbers, which is why so many organizations are sitting on a pile of unread text.

Many companies are hoping they can buy into a text analytics engine to take care of the problem. But it’s not as easy as it sounds. These programs tend to be far more labor-intensive and far less precise than advertised. Over time, and with lots of tuning and calibration, they do become more useful. But if you currently have a backlog of comments and limited resources for analyzing them, consider using a third-party expert (like NetReflector) to conduct content analysis and help you get some structure and insight around the information. Once you deal with the backlog, it will be much easier to stay on top of new comments.

 

Review Wars! Companies And Customers Battle It Out
Posted by Pgurney on January 30, 2013


On-line reviews have become a major source of customer feedback for many companies. But unlike satisfaction surveys, where responses are kept private and aggregated for analysis, on-line reviews are out in the open and can have an immediate effect on a business’ reputation and success. Small business owners are especially likely to take negative reviews personally, and to fight back if they think they’ve been unfairly criticized.

The results aren’t always pretty. . .  


A restaurant owner in Ottawa was so incensed with a customer review, she retaliated by posting a racy profile on an adults-only hook-up site – using the offending customer’s name and address. She also sent offensive emails to the customer’s employer. The restaurant owner landed in court, where she was convicted of libel and served a 90-day jail sentence.

A moving company in Massachusetts sent a letter to a customer who had written a one-star review, demanding that he remove the review or face a lawsuit for libel. The customer took umbrage at the threat, and started investigating other reviews that had been posted about the company. He discovered that many of the positive postings were from paid reviewers – in other words, fake. (We mentioned in an earlier post that Gartner Research predicts that by 2014, 10% – 15% of all social media ratings and reviews will be from paid sources.) The customer publicized his findings, and the moving company quickly withdrew its threat.

What’s sauce for the goose is sauce for the gander. A new website in Bucks County, Pennsylvania helps business owners identify and avoid problem customers. Called NastyClient.com, it’s a sort of reverse Angie’s List, where businesses can look up potential customers by name to see if they’ve been causing problems for other businesses. Presumably, posting negative reviews qualifies as “causing problems.”

Sometimes a bad review has a happy ending for everyone. A store owner in California tried repeatedly to contact a customer who had given a one-star review on Yelp, hoping to correct the problem and win back the customer’s business. Receiving no response, the owner sent a note telling the customer that he would like to drop off a replacement product in person, even though the customer lived two hours away. The result: a happy customer, a Yelp review revised from one star to five stars, and great publicity for the business.

The etiquette regarding on-line reviews is evolving, both for customers and businesses. Despite the pain of seeing a negative review, smart companies are learning that it generally pays to take the high road, even if they disagree with what a customer tells them.

 

When Customer Satisfaction Scores Aren’t Telling The Whole Story
Posted by Pgurney on November 7, 2012


From PR Newswire: “Chase Bank Receives Top Marks in Customer Satisfaction Study”.

The study, conducted by Harris Polls and Google Consumer Surveys Platform, reports that 59% of Chase customers are “satisfied” or “extremely satisfied” – better than Citibank (55%), Bank of America (48%) and Wells Fargo (47%).

What’s wrong with this picture?

Two things. First, this is a classic “best of the worst” scenario. No one with experience in VOC research would consider 59% satisfaction to be satisfactory. The only reason Chase comes out on top is that the competition is weak.

Second, where are the other 7000+ US banks? Not included, of course, because they’re too small. But we can tell you with confidence that a whole lot of them have better than 59% satisfaction.

Let’s look at another source. Here’s some data from the American Customer Satisfaction Index (ACSI). It shows overall satisfaction trends among major US banks in the years running up to the global financial meltdown (Citibank is only shown for 2006 – 07, and it follows the Wells Fargo curve):

Notice that the curves are fairly flat, and in a range that can best be described as “unimpressive”. With one exception: Wachovia. Wachovia actually worked hard on its service quality, and in 2006 reached a score of 80, which is impressive for a large bank. One other national bank received similarly high ratings, although ACSI didn’t include it in its published findings: Washington Mutual.

Where are Wachovia and WaMu now? Neither survived the banking meltdown. Wachovia was purchased by Wells Fargo, and WaMu was acquired by Chase. Does that mean that Wells and Chase adopted the service standards of their new acquisitions, and have risen to a higher level of customer satisfaction? Let’s see:

Nope. They’re pretty much in the same range. Chase actually dropped four points, and in this index falls below Citi and Wells.

For interest, we added one more contender: “All Others”. That’s the dashed line that’s floating above the rest. Way above.

In other words, satisfaction with the four biggest retail banks doesn’t even match the average of the next 7,000.

So when we read that Chase “receives top marks in customer satisfaction”, it’s probably best to keep that statistic in context.

 

 

Embracing dissatisfaction…Target’s service training…Fake reviews and more…
Posted by NRadmin on November 2, 2012


Glass half full?

Judy Ward, writing in Customer Service Buzz, brings up an interesting question: “Is it more important to focus on customer satisfaction or customer dissatisfaction when analyzing customer feedback?”

Judy comes down on the side of dissatisfaction. We agree.  


We all like to emphasize the benefits of delighting customers and exceeding expectations. But the fact is that the downside of dissatisfaction is generally greater than the upside of delight.

The risks and costs associated with annoying customers can be immediate and substantial: loss of business (either complete or partial), negative word-of-mouth (generally with all the juicy details, and often broadcast to a wide social network), and significant complaint-handling and service recovery costs.

The upside of delight is fuzzier. Yes, delighted customers are more likely to make referrals and say nice things about your business, and that can lead to new customer acquisition. But some of the other claims of service gurus are more questionable.  For example, is it really true that delighted customers will increase their rate of purchasing, buy higher-ticket items, or accept higher prices? Sometimes yes; often no. The results vary considerably across industries and customer segments.

Of course it’s important to identify and promote the factors that create delight. But if you look at actual customer behaviors – not just what they tell you they intend to do – a focus on reducing dissatisfaction will probably give you a bigger bang for your buck.

Define “amazing”. . .

The Daily Mail published an article about a new campaign at Target, which is meant to counter the success of on-line retailers like Amazon.com. They plan to do this by offering “amazing” service in their stores.

Fair enough. But the Daily Mail goes on to mock Target’s training manual, which it says is “packed full of corporate buzzwords and cringe-worthy customer service tips.”


According to the Target manual, “To keep guests coming back, our service must go beyond good . . . beyond great . . . and become downright amazing. And it all begins with something called the service vibe.”

This is actually pretty typical stuff for service training content, and probably not deserving of the mocking it received in the Mail. But we were struck by one of the examples Target uses to illustrate how to amaze customers: “A moment is when we stop, smile, and ask, ‘Can I help you find something?’ Amazing is how the whole family feels when we sincerely offer help.”

Has the quality of retail service really come to the point where making eye contact and offering assistance qualifies as “amazing”?

HR to the rescue

A new study from Bruce Temkin “examines customer experience and employee engagement from the perspective of HR professionals.”

Great topic. HR ought to be a major player in any customer experience initiative, but too often they’re left out of the loop.


Temkin reports that “most HR professionals understand the importance of creating a customer-centric culture, but only 15% of them are significantly helping in those efforts.”

At NetReflector, we identify culture building as one of the six primary applications of VOC programs, and recognize HR as a key player in that effort. For more on this, check out our VOC Applications Webinar (you’ll have to register, but it’s free).

Those pesky customer reviews

Gartner Research predicts that by 2014, 10% – 15% of all social media ratings and reviews will be from paid sources – in other words, fake.


Gartner goes on to suggest that a sort of arms race will evolve, in which these phony reviews will be counteracted by a combination of pressure from the FTC and public exposure by customers and media. Companies, in the meantime, “can help to promote trust by openly embracing both positive and negative reviews. . . They should also respond to ratings and reviews in an official capacity to demonstrate willingness to engage in productive conversation with everyone.” Hear, hear.

News from afar

Did you know that Kuwait has its own national customer satisfaction index? According to the Kuwait Times, the index surveys 10,000 customers and rates 400 companies in 17 industries. The average score for Kuwaiti businesses is 7.7 out of 10, up from 7.4 in 2010.


We’re delighted to see that more and more countries are recognizing the benefits of improving the customer experience.

It’s not just the major economic powers, either. For example, the island of Barbados funds a government agency called the National Initiative for Service Excellence (NISE). Among its many activities is sending delegations to other countries on service benchmarking tours. We’ve been honored to co-host several of these tours in Seattle, where the delegates were introduced to service leaders such as Nordstrom and Starbucks.