The Cons of Customer Satisfaction Surveys
Despite surveys' near-monopoly on the customer satisfaction measurement industry, forward-thinking companies may need to question them, or even dump them altogether, for the following reasons:
Sampling Bias
Sampling bias simply means that the subpopulations being sampled do not represent the population as a whole. These are the two main types of sampling bias:
Sampling Error: If the sample is too small, the results will not reliably reflect the true population. As CRM Buyer columnist Louis Columbus points out, this often leads to a secondary problem when companies attempt to compensate by over-sampling.
Non-Representative Samples: This happens when the sampling method omits one or more customer tiers. (Both effects are illustrated in the sketch following this list.)
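To make these two failure modes concrete, here is a minimal Python sketch. The customer tiers, counts, and satisfaction scores below are invented purely for illustration; they are not drawn from any of the cited sources.

```python
import random

random.seed(7)

# Hypothetical customer base with two tiers whose "true" satisfaction
# differs; every number here is invented purely for illustration.
enterprise = [random.gauss(4.2, 0.5) for _ in range(2_000)]  # smaller, happier tier
small_biz = [random.gauss(3.1, 0.7) for _ in range(8_000)]   # larger, less happy tier
population = enterprise + small_biz
true_mean = sum(population) / len(population)

# Sampling error: a tiny sample yields a noisy, unreliable estimate.
tiny_sample = random.sample(population, 15)
tiny_mean = sum(tiny_sample) / len(tiny_sample)

# Non-representative sample: polling only the enterprise tier omits
# 80% of the customer base and inflates the score.
one_tier_sample = random.sample(enterprise, 500)
one_tier_mean = sum(one_tier_sample) / len(one_tier_sample)

print(f"True mean satisfaction:       {true_mean:.2f}")
print(f"Estimate from 15 responses:   {tiny_mean:.2f}")
print(f"Estimate from one tier only:  {one_tier_mean:.2f}")
```

In this invented example, the one-tier sample overstates average satisfaction by roughly a point on a five-point scale, while the 15-response estimate simply bounces around from run to run.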
Response Bias
No matter how much care is taken in designing a representative sample of the customer base, the people who actually respond may differ vastly from the intended population due to response bias. Here are four major types of response bias:
Geographic Bias: Surveys may unwittingly leave out large sections of the population by polling in skewed geographic areas.
Temporal Bias: Phone surveys conducted during the day may be more likely to reach older segments of the population.
Changing Technology: Landline surveys fail to reach a younger demographic that is increasingly reliant on cell phones. [v], [vi]
Problems of Motivation: Highly satisfied customers are much more likely to respond to survey requests than neutral or dissatisfied customers, which skews results upward (see the sketch after this list). [vii]
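The motivation effect is easy to see in a small simulation. The Python sketch below assumes hypothetical response rates that rise with satisfaction; every score, count, and rate is invented for illustration only.

```python
import random

random.seed(11)

# Hypothetical customer base (scores and counts invented for illustration):
# most customers are neutral or dissatisfied.
customers = [5] * 2_000 + [4] * 2_000 + [3] * 3_000 + [2] * 2_000 + [1] * 1_000
true_mean = sum(customers) / len(customers)

# Assumed response rates that rise with satisfaction: happy customers
# answer the survey far more often than unhappy ones.
response_rate = {1: 0.05, 2: 0.07, 3: 0.10, 4: 0.20, 5: 0.35}
responses = [score for score in customers if random.random() < response_rate[score]]
survey_mean = sum(responses) / len(responses)

print(f"True mean satisfaction:    {true_mean:.2f}")
print(f"Mean of survey responses:  {survey_mean:.2f}  ({len(responses)} replies)")
```

With these invented rates, the survey average lands near 4.0 even though the true average is 3.2, purely because happier customers reply more often.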
Wording and Execution Biases
Let's say you've somehow managed to iron out all the problems with your survey sample design, and you're confident your population is represented in the responses you've received. You're not in the clear yet; here's where the biggest problem of all may come in: your questions themselves may be biasing your results. Customers have a variety of reasons, both intentional and unintentional, for answering in a way that does not reflect their true feelings. Below are four types of wording and execution bias to consider:
Lack of Options: The closed-ended style of questioning used in the majority of satisfaction surveys can make subjects feel obligated to give an answer even if none of the options reflects their true feelings. [viii]
Question Wording: Rewording questions in even subtle ways, or asking questions with poor pronunciation (or with a dialect), can have a significant impact on the number of "favorable" or "unfavorable" responses.[ix]
Bogus Questions: The questions are often simply irrelevant to the customer, who may respond more or less randomly to complete the survey. One of the most popular survey questions—"How satisfied are you"—has even been called the most "bogus question in the history of surveys".[x]
Happy Questions: Companies often use surveys to ask leading questions ("just wanted to make sure it all went well") that paint them in a favorable light. This may be done inadvertently, but companies also have motivations for hiding problem areas from management and executive teams.[iv]
Rigged Processes
The fact is, employees often skew their own survey results. This happens for a variety of reasons: fear of demotion, criticism from management, links between survey results and employee bonuses, or even just a lack of an outside perspective. Whatever the reason, a gamed system fails to provide informative survey results. Here are three possible ways of rigging satisfaction results:
Self-Administered QC: Quite often, the responsibility for measuring satisfaction levels falls to groups that naturally have a personal investment in receiving favorable satisfaction ratings. After all, how often have you received a comment card after you made some sort of positive remark about the service you received? And after you complained? Employees or management departments often sample only the most satisfied customers to boost their results.[iv]
Outright Cheating: Companies have other ways of inflating their scores as well. For example, if employees know that an order is going to come in late, they may push back the due date in the computer system.
Pressure to Produce Good Results: Strong incentives to demonstrate customer loyalty ironically become strong incentives to produce positive survey results, which can lead to biased research designs.[iv]
No Meaningful Information Revealed
OK, so let's put aside all of these concerns and assume that your survey was perfectly conducted in every way. Sorry, but you're still not off the hook. No matter how great your survey design is, the responses may fail to reveal anything useful or valuable about your company's approach.
Trashed Surveys: Surveys often don't reach the people who could really use them—instead they wind up stashed in a back room or, worse yet, the dumpster.[iv]
Wrong Conclusions: Recently, the correlation between high levels of customer "satisfaction" and high levels of "loyalty" has been called into question. Surveys measure what people say, not what they do, and the two aren't always linked.[iii], [xi]
Meaningless Data: Many surveys ask meaningless questions: the questions may be important from a management standpoint but don't resonate with customers.[xii] Naturally this leads to meaningless data.
Interaction Metrics Free Solution