
The Ultimate Guide to Creating a Customer Satisfaction Survey That Gets Results

If you pretend there's not a problem, does it actually go away?

Of course not. But the way many businesses treat customer satisfaction and customer success, it seems like that's the common wisdom.

Measuring customer satisfaction allows you to diagnose potential problems, both at the individual and aggregate level. More importantly, it allows you to improve over time.

How do you gauge customer satisfaction? It all comes down to customer satisfaction surveys.

Why Conduct Customer Satisfaction Surveys?

Customer satisfaction can sound somewhat stale when we talk about acronyms, methodologies, and aggregate scores, so let's put this in more concrete terms: If you don't allow your customers a place to complain, you risk them doing so to all their friends. Yes, this negative word-of-mouth happens and can break a business.

There's always some murkiness with data like this, but it's said that angry customers will tell 9 of their friends about their bad experience.

Regardless of the specific number, you know anecdotally that you're much more likely to talk about a frustrating experience than a mildly positive one.

There's a term in psychology that explains the reasoning behind this: "emotional arousal." Emotional arousal measures a combination of a physiological and psychological response to an experience.

As an example, think about the last time you gave a big speech or were anticipating an event (say in the last few minutes of a close game with your favorite sports team).

In these situations, your palms are sweaty and your heart is pounding, which are the same physiological reactions that occur when you're scared or nervous (say, when you hear a weird noise outside your tent when you're camping).

All of these are "high-arousal" emotions.

Now we can further separate these "high-arousal" emotions based on their "valence" -- how positive or negative they are.

So the big-game anxiety is a positive-valence emotion, whereas the noise outside the tent or your anger after getting booted off your flight carries negative valence.

[Chart: emotions plotted by arousal and valence. Source: The Thinking Zygote]

All this is to say that your ideal scenario is one where your customers' experience is both high-arousal and positive in valence. At least according to the academic literature, those are the conditions under which customers are most likely to rate themselves promoters and tell their friends about your company.

What you want to avoid is triggering high arousal and low valence experiences. That's when your customers give you a 1 or 2 on your NPS survey. If you can react quickly to these customers, you can mitigate the damage and even turn detractors into promoters.

But if you don't measure customer satisfaction at all, you'll never know these things.

You'll be (not so) blissfully ignorant, and you'll risk these customers running to their friends and talking about what a wreck your customer experience is.

No bueno.

Let's dive into some specific examples and templates to get started.

Customer Satisfaction Survey Examples and Templates

Customer satisfaction surveys come in a few common forms, usually executed using a popular "one question" response scale methodology like:

  • Net Promoter Score (NPS)
  • Customer Satisfaction Score (CSAT)
  • Customer Effort Score (CES)

Each of these customer satisfaction survey methodologies measures something slightly different, so it's important to consider the specifics if you hope to use this data wisely.

Net Promoter Score

Net Promoter Score is a popular survey methodology, especially for those in the technology space.

It's rare to see a company now that doesn't use the famous question: "How likely is it that you would recommend this company to a friend or colleague?"

Ask your customers this question with HubSpot's free customer feedback tool.

While this measures customer satisfaction to an extent, it's more aimed at measuring customer loyalty and referral potential. The way NPS is calculated, you end up with an aggregate score (e.g. an NPS of 38), but you can also easily segment your responses into three categories: detractors, passives, and promoters.

You calculate your Net Promoter Score by subtracting the percentage of Detractors from the percentage of Promoters.
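As a quick illustration, here's that calculation in code (a minimal sketch; the 0–6 / 7–8 / 9–10 bands are the standard NPS cutoffs for detractors, passives, and promoters):

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 ratings."""
    promoters = sum(1 for s in scores if s >= 9)   # ratings of 9-10
    detractors = sum(1 for s in scores if s <= 6)  # ratings of 0-6
    # Passives (7-8) count toward the total but neither add nor subtract
    return round(100 * (promoters - detractors) / len(scores))

# Two promoters, one passive, one detractor out of four responses
print(nps([10, 9, 8, 3]))  # 25
```

Note that the result is a number from -100 (all detractors) to +100 (all promoters), not a percentage.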


In this way, NPS is useful both for aggregate measurement and improvement (e.g. we went from an NPS of 24 to 38 this year), but it's also great for segmenting customers based on their score.

You can use that knowledge to inform customer marketing campaigns, expedite service to detractors, work on marginally improving the experience of passives, etc.

Customer Satisfaction Score (CSAT)

Customer Satisfaction Score (CSAT) is the most straightforward of the customer satisfaction survey methodologies. It's right there in the name; it measures customer satisfaction, straight up.

Usually, this is with a question such as "how satisfied were you with your experience," and a corresponding survey scale, which can be 1 – 3, 1 – 5, or 1 – 10.

There isn't a universal best practice as to which survey scale to use.
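Whatever scale you pick, you'll want a consistent way to turn raw responses into a score. One common convention (not the only one) is to report CSAT as the percentage of respondents choosing the top answers -- for example, 4s and 5s on a 1–5 scale:

```python
def csat(responses, satisfied_threshold=4):
    """Percentage of respondents at or above the 'satisfied' threshold.

    Assumes a 1-5 scale where 4 and 5 count as satisfied;
    adjust the threshold for a 1-3 or 1-10 scale.
    """
    satisfied = sum(1 for r in responses if r >= satisfied_threshold)
    return round(100 * satisfied / len(responses), 1)

print(csat([5, 4, 3, 2, 5]))  # 60.0
```

The key is to pick one convention and stick with it, so your score stays comparable over time.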


Ask your customers this question with HubSpot's free customer feedback tool.

There's some evidence that the ease of the experience is a better indicator of customer loyalty than simple satisfaction. Therefore, Customer Effort Score (CES) has become very popular recently.

Customer Effort Score (CES)

Instead of asking how satisfied the customer was, you ask them to gauge the ease of their experience.


Ask your customers this question with HubSpot's free customer feedback tool.

You're still measuring satisfaction, but in this way, you're gauging effort (the assumption being that the easier it is to complete a task, the better the experience). As it turns out, making an experience a low-effort one is one of the greatest ways to reduce frustration and disloyalty.

Customer Satisfaction Question Types & Survey Design

Survey design is important.

In fact, it's probably one of the biggest mistakes people make when conducting customer satisfaction surveys.

Just because you're not writing a blog post or an eye-catching infographic doesn't mean your survey shouldn't still be engaging, relevant, and impactful. If the design is wrong, the data won't be able to answer your questions about your customers.

Without diving too deeply into the esoteric world of advanced survey design and statistical analysis, know this: How you pose the question affects the data you'll get in return.

There are a few different types of customer satisfaction survey questions.

I'll go through a few of them here and include some pros, cons, and tips for using them.

Binary Scale Questions

The first type of survey question is a simple binary distinction:

  • Was your experience satisfying?
  • Did our product meet expectations?
  • Did this article provide the answer you were seeking?
  • Did you find what you were looking for?

The answer options for all of these are dichotomous: yes/no, thumbs up/thumbs down, etc.

The benefit of this is its simplicity. Besides, survey designers often lengthen response scales in search of deltas that may not mean much. As Jared Spool, founder of UIE, said in a talk, "Anytime you're enlarging the scale to see higher-resolution data it's probably a flag that the data means nothing."

This is also a great question for something like a knowledge base, where a binary variable helps you optimize page content. Take, for example, Optimizely's knowledge base and their use of this question:

[Screenshot: the binary feedback question on Optimizely's knowledge base]

When you're running an A/B test on an ecommerce site, you have a binary variable of conversion: you converted or you didn't. We would often like to experiment similarly on knowledge base pages, but what's the metric? Time on site? Who knows, but if you do a binary scale survey at the end, you can quite easily run a controlled experiment with the same statistical underpinnings as an ecommerce conversion rate A/B test.
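For instance, you could run a standard two-proportion z-test on the yes/no counts from two versions of a page (a sketch using only the standard library; the counts here are made up):

```python
from math import sqrt

def two_proportion_z(yes_a, n_a, yes_b, n_b):
    """z-statistic for the difference between two 'yes' rates."""
    p_a, p_b = yes_a / n_a, yes_b / n_b
    p_pool = (yes_a + yes_b) / (n_a + n_b)  # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant B's 50% yes-rate vs. variant A's 40%, 100 responses each
z = two_proportion_z(40, 100, 50, 100)
print(round(z, 2))  # 1.42 -- below the ~1.96 cutoff for 95% confidence
```

In other words, the binary survey question gives each page variant a "helpfulness rate" you can test exactly like a conversion rate.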

Two cons with binary questions:

  • You lack nuance (which in some circumstances could be a benefit).
  • You may induce survey fatigue on longer surveys with many questions.

If it's a long survey with many questions, customers can tire out and lean towards positive answers (this isn't a problem when you just have one or two questions, of course).

Multiple Choice Questions

Multiple-choice questions have three or more mutually exclusive options.

These tend to be used to collect categorical variables, things like names and labels.

In data analysis, these can be incredibly useful to chop up your data and segment based on categorical variables.


For example, in the context of customer satisfaction surveys, you could ask them what their job title is or what their business industry is, and then when you're analyzing the data, you can compare the customer satisfaction scores of various job titles or industries.
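In code, that kind of segmentation is just a group-by over the categorical answer (a sketch; the segment names and scores here are invented):

```python
from collections import defaultdict

def average_by_segment(responses):
    """Mean satisfaction score per segment (e.g. industry or job title)."""
    buckets = defaultdict(list)
    for segment, score in responses:
        buckets[segment].append(score)
    return {seg: sum(s) / len(s) for seg, s in buckets.items()}

data = [("SaaS", 9), ("SaaS", 7), ("Retail", 4), ("Retail", 6)]
print(average_by_segment(data))  # {'SaaS': 8.0, 'Retail': 5.0}
```

The same pattern works for any categorical variable you collect, which is exactly why multiple-choice questions earn their place on a survey.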

When proposing multiple choice questions on a survey, keep in mind your goals and what you'll do with the data.

If you have a ton of multiple choice questions, you can induce survey fatigue, which will skew your data, so keep it to the questions you believe have real merit.

Scale Questions

Here's where we get into the meat of customer satisfaction survey design.

Almost all popular satisfaction surveys are based on scale questions.

For example, the CSAT survey asks, "How satisfied were you with your experience?" and you rate the experience on a scale of 1-5 (a Likert scale).


The survey scale could consist of numbers, or you could use labels such as "strongly disagree, disagree, neutral, agree, and strongly agree."


There are many pros to using scale questions.

  • It's an industry standard and your customers will completely understand what to do when presented with the question.
  • You can very easily segment your data to make decisions based on individual survey responses.
  • You can easily measure data longitudinally to improve your aggregate score over time.

There's only one real disadvantage in my book: There's no qualitative insight. Therefore, you're left guessing why someone gave you a two or a seven.

It's a best practice to couple survey scale questions with open-ended feedback questions to get the best of both worlds.

Semantic Differential

Semantic differential scales are based on binary statements but you're allowed to choose a gradation of that score.

So you don't have to pick just one or the other, you can choose a place between the two poles that reflects your experience accurately.

[Example: a semantic differential scale. Source: Explorable]

These have a similar use case to scale questions, but interestingly enough, when you analyze semantic differential scales, responses often break into two factors -- positive and negative -- so they give you answers very similar to binary scales.

Open-Ended Questions

As I mentioned, the above survey questions don't allow for qualitative insights. They don't get at the “why" of an experience, only the “what."

Qualitative customer satisfaction feedback is important. It helps you identify your customers' value propositions and learn what matters most to them -- things you won't glean from a numerical or multiple-choice survey.

It's easier to skew qualitative data and cherry pick insights than you'd think, though, so be mindful of personal bias when you start deciding which questions to ask.

Getting qualitative responses helps you close the loop here, too.

Instead of reaching out blindly about a low satisfaction score, if you ask a qualitative question (and get an answer), you can follow up with a specific fix down the line.

When to Send Customer Satisfaction Surveys?

When to send your customer satisfaction surveys is another important question to consider.

When you pop the question also determines the quality of the data. While there are different strategies for conducting these surveys, ask enough experts and you'll hear this common mistake: most companies ask too late.

Companies might be tempted to use "the autopsy approach" to customer satisfaction: waiting until an event is over to figure out what went wrong with a customer. Instead, customers should be asked questions while their feedback could still have an impact.

Ideally, you'll deploy customer satisfaction surveys at different times to get different views of the customer experience at different lifecycle stages.

So, your first directive is to align your survey points with points of value that you'd like to measure in the customer experience. The more touch points you measure, the more granular your picture of the customer experience can be.

When to send your survey also depends on what type of survey it is.

Specifically in regards to CSAT surveys, we recommend sending them as soon as possible after an interaction with customer support to capture the experience when it's still fresh.

Closing the Loop: Acting on Customer Satisfaction Surveys

Now that you've got insights on your customer satisfaction levels, it's important to close the loop and actually follow up with customers in a meaningful way. Why let the data lie dormant when there are so many proactive efforts you can take here?

It's important to follow up with survey respondents. Closing the feedback loop with valuable customers who complete your satisfaction survey is simultaneously the most important and the most often ignored step in a successful customer satisfaction measurement campaign.

Making sure your team acknowledges and thanks everyone who completed the survey is critical to ensuring that customers will continue to provide you feedback -- because it's about building trust and showing them value.

You can't always pivot and deliver on every piece of feedback that comes through -- especially since some customer feedback just might not be of great value. But you can address every piece of feedback that comes through in some way -- because providing a response, even if what the customer is requesting is not something you will do, is always better than no response at all.

As with any form of data collection, one of the biggest mistakes is putting all that effort into collection and perhaps even analysis, but then coming up short when it comes to action. But that's why we collect data: to inform decisions.

How you act on your customer satisfaction data will vary according to the company and the situation (as well as resources available and many other variables), but it's important to have a plan of action. Ask yourself, "If I receive X feedback, what will I do with that information?"

Just asking this question will put you on a trajectory to improve your customer experience, as well as put you on a continuous customer feedback loop of better customer insights and actionable takeaways.

But it all starts with a simple survey, so start now by taking action and simply asking customers how their experience was.

