Customer Surveys are Useless


I know we have the destruction of the Republic and basic liberty to fight against, but I want to divert briefly from such weighty topics with a rant about the uselessness of most customer surveys. I think we have discussed this topic before here on Ricochet, but a note provided by a hotel on a recent trip brought the subject back to mind. Maybe some Ricochetti are plugged into organizations in a way that lets them encourage those organizations to rethink how they score customer surveys.

What set me off was this: we recently drove across the western United States and stayed at several properties associated with one particular lower-mid-priced hotel brand (the kind that provides a basic room and a basic buffet breakfast, nothing fancy). One such property supplied guests with a note that said,

“You may receive an online survey concerning your stay with us. On these surveys, “8”s [on a 10 point scale] are below average, and we hate “8”s. We want to make sure you have an enjoyable stay with us. If you feel you cannot score us with a 9 or 10, please come by and visit with us or call the Front Office and let us know how we can make your visit the [brand name] stay you deserve.”

What’s the point of a ten-point scale if 80% of the scale is considered “failure”? Personnel at a car dealership where I have had several cars serviced have told me that the manufacturer considers any score other than a 5 on a 5-point scale to be a negative (a failure). If there is no differentiation between “met expectations,” “exceeded expectations,” and “knocked it out of the park,” what does the company learn from the survey? As a manager, I was constantly reminded that I should expect most employees to be rated a “3” on a 5-point scale (“met expectations”); only a tiny fraction of employees will merit a “5” (“outstanding”).

The top 10–20% of a satisfaction scale should be reserved for truly unexpected and exceptionally good service. Scores of 6 or 7 on a 10-point scale at a modest hotel should be a perfectly acceptable norm. I got what I expected at this hotel: budget service for a budget price. I did not receive Ritz-Carlton or Four Seasons-level service, but I’m also not paying Ritz-Carlton or Four Seasons prices. If I inflate my ratings so it looks as if I received Ritz-Carlton service at budget prices, when in fact I received the expected budget service at budget prices, the hotel learns nothing about how it is really doing against its competitors, or about which areas might offer an opportunity to improve the brand experience.

I rated (on TripAdvisor) a restaurant we visited on the trip with a very high score because the food and the service were truly exceptional compared to what I expected based on the restaurant’s general category and prices. Such top scores should be the exception, not the expectation.

An establishment learns nothing about the customer experience if the only real choices for customer ratings are “perfection” or “failure.” Grade inflation has corrupted customer surveys in addition to school grades.

Published in Business

This post was promoted to the Main Feed by a Ricochet Editor at the recommendation of Ricochet members.

There are 33 comments.

  1. Percival Thatcher
    Percival
    @Percival

    If you give me a ten point scale to judge you, you’ll get a 5 or 6 and like it.

    • #1
  2. Arahant Member
    Arahant
    @Arahant

    Full Size Tabby: An establishment learns nothing about the customer experience if the only real choices for customer ratings are “perfection” or “failure.”

    Amen.

    • #2
  3. Bryan G. Stephens Thatcher
    Bryan G. Stephens
    @BryanGStephens

    Gallup’s study on 5-point Likert scales shows that there is no difference between a merely satisfied customer and a dissatisfied customer. Engaged customers are the only ones who will continue using a service. These are the 5-star folks. Therefore, it is really binary: either you are at 5 stars, or you might as well be at one star.

    Only 5 star customers will pay the premium for services.

    Now, I think this is misapplied. But, if you liked the guy who did your car service, give the poor soul 5 stars. They are basing his pay on it. Wrongly, but that is what they are doing. 

    • #3
  4. Percival Thatcher
    Percival
    @Percival

    Bryan G. Stephens (View Comment):

    Gallup’s study on 5-point Likert scales shows that there is no difference between a merely satisfied customer and a dissatisfied customer. Engaged customers are the only ones who will continue using a service. These are the 5-star folks. Therefore, it is really binary: either you are at 5 stars, or you might as well be at one star.

    Only 5 star customers will pay the premium for services.

    Now, I think this is misapplied. But, if you liked the guy who did your car service, give the poor soul 5 stars. They are basing his pay on it. Wrongly, but that is what they are doing.

    Gallup studies get a 2/5.

    • #4
  5. 9thDistrictNeighbor Member
    9thDistrictNeighbor
    @9thDistrictNeighbor

    I never respond to the hotel-generated survey when I’m never returning.  It’s certainly pointless.

    • #5
  6. Miffed White Male Member
    Miffed White Male
    @MiffedWhiteMale

    Bryan G. Stephens (View Comment):

    Gallup’s study on 5-point Likert scales shows that there is no difference between a merely satisfied customer and a dissatisfied customer. Engaged customers are the only ones who will continue using a service. These are the 5-star folks. Therefore, it is really binary: either you are at 5 stars, or you might as well be at one star.

    Only 5 star customers will pay the premium for services.

    Now, I think this is misapplied. But, if you liked the guy who did your car service, give the poor soul 5 stars. They are basing his pay on it. Wrongly, but that is what they are doing.

    Or, ignore the survey entirely.


    • #6
  7. E. Kent Golding Moderator
    E. Kent Golding
    @EKentGolding

    I think it is Delta Airlines that has a simple yes-or-no question after you call customer service (reservations, etc.): “Would you hire the last person you talked to?” Very simple: evaluate the last person you talked to, not the 5-hour wait to get to them. I hate the wait at Delta, but if you can get through to a person, they are usually both patient and helpful. So I almost always say I would hire them.

    • #7
  8. TBA Coolidge
    TBA
    @RobtGilsdorf

    Surveys are carefully designed to get the candid, unvarnished truth in exactly the way a company prefers to imagine it. Also, they can mail you stuff and sell your info. 

    Sometimes they use them to clean house by firing people they are afraid to fire without waving a ratings spreadsheet at them first. Employees know this which is why they ask you to fill out these things online. 

    So they offload their managerial responsibilities to their customers without paying us. 

    • #8
  9. TBA Coolidge
    TBA
    @RobtGilsdorf

    And btw, my favorite check out experience involves leaving the key in the room while I take my stuff out to the car and then drive off. 

    Just because I spent the night with the hotel doesn’t mean I want to talk to it in the morning. 

    • #9
  10. Headedwest Coolidge
    Headedwest
    @Headedwest

    This is why I have quit responding to these bogus surveys.  

    I spent almost four decades as a university professor, almost all of it subject to Student Evaluation of Teaching (SET). I’m pretty sure I never got anything from SET that I didn’t already know; you can tell when parts of a course aren’t working if you are paying any attention.

    It’s easy to score high on SET: be very friendly and have low standards. A lot of my colleagues played that game, but I was never able to do the low standards part. I normally had good enough scores to survive, but I was never going to be in line for a teaching award because I just would not go to the place where you could get that.

    But for several years I taught at a Big 10 university where the teaching-ratings system was officially experimental and optional (the optional part was a lie). In practice, that meant I didn’t have to go through the kabuki-theater step of putting a student in charge of collecting and delivering the score sheets to the office. I collected and delivered them myself.

    The way this system worked was that there were 5 mandatory questions. Then I could select another 20 questions (from a very long list) that I wanted to get feedback on. In the early semesters, I went through all the questions and picked 20 of them to fill out the questionnaire. 

    But I got curious about the distribution of results.  So instead of immediately delivering the score sheets to the office and getting the averages back, I looked at the raw data. All the work I had done to select questions about my course turned out to be a waste of time. (I never tampered with the data; I just looked at it.)

    When I saw the score sheets I could easily sort them into 3 piles: the smallest pile was where the students ranked the 25 questions with some variance; they liked some parts of the course and didn’t like others. That pile was at most 10% of the submissions.

    What were the others? They were sheets where the student liked me and gave me high scores on all of the questions, or they didn’t like me and gave me low scores on all of the questions.

    My average (the only thing the department saw) was almost entirely the weighted average of the students who basically liked me and the students who basically disliked me.

    Once I saw that a few times, I quit looking and I quit choosing the optional questions. I think I got more likes when the students only had the minimum 5 questions to answer.

    I would have loved to have found out what worked and what didn’t work in any given course I taught, but SET does not give you that. 
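    The averaging problem is easy to reproduce with a toy sketch (the counts here are hypothetical, not real SET data): a bimodal pile of all-high and all-low sheets averages out to a bland middle score that describes almost nobody.

```python
# Toy illustration of how an average hides a bimodal distribution.
# The counts below are hypothetical, not real SET data.
fans = [5] * 60        # students who liked the instructor: straight 5s
critics = [1] * 30     # students who disliked the instructor: straight 1s
mixed = [3, 4, 2, 4]   # the rare sheets with genuine item-by-item variance

scores = fans + critics + mixed
mean = sum(scores) / len(scores)
print(f"mean = {mean:.2f}")  # prints "mean = 3.65", a score almost nobody gave
```

    The department sees only that middling average; the fact that nearly every sheet was a straight 5 or a straight 1 never surfaces.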


    • #10
  11. TBA Coolidge
    TBA
    @RobtGilsdorf

    Headedwest (View Comment):

    The way this system worked was that there were 5 mandatory questions. Then I could select another 20 questions (from a very long list) that I wanted to get feedback on. In the early semesters, I went through all the questions and picked 20 of them to fill out the questionnaire.

    But I got curious about the distribution of results. So instead of immediately delivering the score sheets to the office and getting the averages back, I looked at the raw data. All the work I had done to select questions about my course turned out to be a waste of time. (I never tampered with the data; I just looked at it.)

    When I saw the score sheets I could easily sort them into 3 piles: the smallest pile was where the students ranked the 25 questions with some variance; they liked some parts of the course and didn’t like others. That pile was at most 10% of the submissions.

    What were the others? They were sheets where the student liked me and gave me high scores on all of the questions, or they didn’t like me and gave me low scores on all of the questions.

    My average (the only thing the department saw) was almost entirely the weighted average of the students who basically liked me and the students who basically disliked me.

    Once I saw that a few times, I quit looking and I quit choosing the optional questions. I think I got more likes when the students only had the minimum 5 questions to answer.

    I would have loved to have found out what worked and what didn’t work in any given course I taught, but SET does not give you that.

    Who cares whether the little [redacteds] liked you or not, it’s not as if they’re going to take the course from you twice – or if they are, that’s their problem. 

    We really have to lose the notion that students are customers. They are supplicants. 

    • #11
  12. Joseph Stanko Coolidge
    Joseph Stanko
    @JosephStanko

    But these amplifiers go to 11. 

    • #12
  13. Full Size Tabby Member
    Full Size Tabby
    @FullSizeTabby

    Bryan G. Stephens (View Comment):

    Gallup’s study on 5-point Likert scales shows that there is no difference between a merely satisfied customer and a dissatisfied customer. Engaged customers are the only ones who will continue using a service. These are the 5-star folks. Therefore, it is really binary: either you are at 5 stars, or you might as well be at one star.

    Only 5 star customers will pay the premium for services.

    Now, I think this is misapplied. But, if you liked the guy who did your car service, give the poor soul 5 stars. They are basing his pay on it. Wrongly, but that is what they are doing.

    So why not just make the survey questions binary? A scale implies gradation. 

    • #13
  14. Stad Coolidge
    Stad
    @Stad

    My friends and I have been going to the same beach resort every January for a dozen years. I still get the “Are you satisfied with your stay?” surveys . . .

    • #14
  15. aardo vozz Member
    aardo vozz
    @aardovozz

    Stad (View Comment):

    My friends and I have been going to the same beach resort for a dozen years every January. I still get the “Are you satisfied with your stay?” surveys . . .

    At least they haven’t started giving out “Are you satisfied with your stay?” surveys at cemeteries.

    • #15
  16. Bryan G. Stephens Thatcher
    Bryan G. Stephens
    @BryanGStephens

    Full Size Tabby (View Comment):

    Bryan G. Stephens (View Comment):

    Gallup’s study on 5-point Likert scales shows that there is no difference between a merely satisfied customer and a dissatisfied customer. Engaged customers are the only ones who will continue using a service. These are the 5-star folks. Therefore, it is really binary: either you are at 5 stars, or you might as well be at one star.

    Only 5 star customers will pay the premium for services.

    Now, I think this is misapplied. But, if you liked the guy who did your car service, give the poor soul 5 stars. They are basing his pay on it. Wrongly, but that is what they are doing.

    So why not just make the survey questions binary? A scale implies gradation.

    I am not quite sure. 

    My own experience with client surveys tends in that direction. People are all happy or not happy. 

    • #16
  17. Headedwest Coolidge
    Headedwest
    @Headedwest

    TBA (View Comment):

    Who cares whether the little [redacteds] liked you or not, it’s not as if they’re going to take the course from you twice – or if they are, that’s their problem. 

    We really have to lose the notion that students are customers. They are supplicants. 

    I only cared about whether or not I did a good job, and I think I pretty much understood when I didn’t meet my standards. But my employers wanted NUMERICAL proof of my teaching performance, so they could rank all of us in order of achievement. And on that scale I was successful, mostly in the top quarter or top third of people in my department.

    It’s lazy management; the department head doesn’t have to do anything or even think about how well his faculty are doing; he just automatically gets them ranked. It’s the same impulse in consumer surveys: we need numerical ratings to beat up hotel managers, or whoever. Stage 1 is inventing a numerical test of a soft objective; stage 2 is people figuring out how to game it.

    I also agree completely that acting as if students are customers is inappropriate. 

    • #17
  18. Hugh Member
    Hugh
    @Hugh

    There are too many standing ovations as well.

    • #18
  19. Hoyacon Member
    Hoyacon
    @Hoyacon

    The worst surveys are the ones that are obviously designed to elicit personal information.  I get those frequently from the WSJ and others.

    “Tell us how to improve by letting us know how much you earn.”  I don’t think so.

    • #19
  20. Headedwest Coolidge
    Headedwest
    @Headedwest

    “Please give our podcast a 5-star rating.”

    • #20
  21. I Walton Member
    I Walton
    @IWalton

    I’d fill some of those things out in the past, but it’s clear they just want information on the user, so I stopped.

    • #21
  22. MiMac Thatcher
    MiMac
    @MiMac

    It is really hard to get actionable feedback, the kind you can use to make changes. For several years I was chief of the service at the local hospital, so I would receive all significantly positive or negative reviews. It soon became clear that most reviewers either liked the personality of the doctor they reviewed positively (not a bad thing, but they rarely mentioned skill, communication, knowledge, etc.) or were angered by something almost completely unrelated to the quality of their care: often the doctor wouldn’t comply with a ridiculous request (such as agreeing ahead of a procedure to use only medicines in which the patient or a family member had a financial interest), or they blamed their doctor for a completely predictable part of their hospital course.

    In my case, a colleague once told me, “You ought to read your online reviews” (this person is a doctor I respect who has asked me to take care of their family). So I pulled them up: a handful of reviews, all bad. I decided to investigate each patient’s care to find out why they were dissatisfied. It turned out NONE of the “patients” were ever in my care, not a single one! They were mad at other physicians; all I can guess is that my name came up first when they did a Google search of physicians in my specialty at the local hospital. Of course, Google couldn’t care less. But such complaints are of little use if the reviewers won’t even review the correct practitioner.

    • #22
  23. Hoyacon Member
    Hoyacon
    @Hoyacon

    What’s the general opinion on Yelp?  I’m generally distrustful of social media, and understand that it can be abused, but Yelp–viewed with a practiced eye–seems useful, particularly for places of public accommodation.

    • #23
  24. Miffed White Male Member
    Miffed White Male
    @MiffedWhiteMale

    Hoyacon (View Comment):

    What’s the general opinion on Yelp? I’m generally distrustful of social media, and understand that it can be abused, but Yelp–viewed with a practiced eye–seems useful, particularly for places of public accommodation.

    I assume anybody doing detailed reviews of public places is either a pretentious twit or has an axe to grind.

    South Park did an episode about it some years ago.  (edit:  As I recall they focused on the Pretentious Twit aspect).

    • #24
  25. Full Size Tabby Member
    Full Size Tabby
    @FullSizeTabby

    I do find written reviews on products and services (generally on the vendor’s website) sometimes useful. But I ignore a large number of them when it is clear that they have an axe to grind or are talking about something that is tangential or weird. I am amazed at the number of reviewers who clearly have unrealistic expectations, are reviewing the wrong product, or had problems because they did not follow the directions. 

    I do write reviews on TripAdvisor because I have found useful the reviews of others. I try to use lots of words and describe what I experienced. Just saying the food or service was “good” doesn’t tell the reader anything useful. 

    • #25
  26. Guruforhire Inactive
    Guruforhire
    @Guruforhire

    There is a method to the madness.

    At least with the net promoter score.

    It likely has to do with how likely you would be to recommend their product or service to others, or the likelihood of repurchasing the product/service/brand. The “it was fine” response of 5-6-7 means it’s largely an undifferentiated product or service, which is unlikely to result in word-of-mouth recommendations or loyal purchasing habits.

    So in terms of an employee evaluation, a 3 is a chair-stuffer: it’s fine, but when layoffs come . . . A 4 is the person you don’t cut at layoff time, at least not at first. A 5 is someone the business would burn down a building before laying off; these are the people the business cares about.

    So if you are a 3 or a 4, keep your resume up to date, because the business does not care about you at all, except that it would be inconvenient to get rid of you right now.
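    For reference, the standard net-promoter arithmetic fits in a few lines (a generic sketch of the published NPS formula, not any particular vendor’s survey): 9s and 10s count as promoters, 7s and 8s as passives, and 0 through 6 as detractors.

```python
# Generic sketch of the Net Promoter Score formula: on a 0-10
# "would you recommend us?" scale, 9-10 are promoters, 7-8 passives,
# 0-6 detractors, and NPS = % promoters - % detractors.
def nps(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Ten guests who all "met expectations" with 7s net exactly zero,
# while a single 6 among nine 9s still scores +80.
print(nps([7] * 10))       # 0.0
print(nps([9] * 9 + [6]))  # 80.0
```

    Under that formula an 8 simply doesn’t count toward the score, and a 6 actively subtracts, which may be exactly why the hotel in the original post “hates 8s.”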

    • #26
  27. Joseph Stanko Coolidge
    Joseph Stanko
    @JosephStanko

    Headedwest (View Comment):

    “Please give our podcast a 5-star rating.”

    That’s not nearly as annoying as the pop-ups in apps I’ve just launched begging me to give them a review.  Any app that interrupts what I was doing to ask if I love the app is ipso facto not worthy of a 5-star rating.

    • #27
  28. Miffed White Male Member
    Miffed White Male
    @MiffedWhiteMale

    Full Size Tabby (View Comment):

    I do find written reviews on products and services (generally on the vendor’s website) sometimes useful. But I ignore a large number of them when it is clear that they have an axe to grind or are talking about something that is tangential or weird. I am amazed at the number of reviewers who clearly have unrealistic expectations, are reviewing the wrong product, or had problems because they did not follow the directions.

    I do write reviews on TripAdvisor because I have found useful the reviews of others. I try to use lots of words and describe what I experienced. Just saying the food or service was “good” doesn’t tell the reader anything useful.

    My favorite is the 1-star Amazon reviews complaining about shipping problems with the product (late, damaged, wrong item or wrong color, etc.). Because that’s relevant to the specific product.

    • #28
  29. The Cynthonian Inactive
    The Cynthonian
    @TheCynthonian

    Hoyacon (View Comment):

    What’s the general opinion on Yelp? I’m generally distrustful of social media, and understand that it can be abused, but Yelp–viewed with a practiced eye–seems useful, particularly for places of public accommodation.

    Yelp is easily manipulated.

    • #29
  30. Hoyacon Member
    Hoyacon
    @Hoyacon

    The Cynthonian (View Comment):

    Hoyacon (View Comment):

    What’s the general opinion on Yelp? I’m generally distrustful of social media, and understand that it can be abused, but Yelp–viewed with a practiced eye–seems useful, particularly for places of public accommodation.

    Yelp is easily manipulated.

    I’ve heard that, but, as with any reviews, you have to read with a bit of common sense in terms of separating the real from the fake. 

    • #30