“Your customers will tell you exactly what they need.” I’ve heard this phrase so many times that it feels like the catchy chorus of a pop song. It’s an earworm that replays constantly in my mind.
But, when I implemented this advice, I realized that customers wouldn’t share their pains and aspirations until I asked the right questions.
The key to great customer feedback is writing great customer survey questions. That’s why I put together this guide on how to write survey questions with empathy and clarity.
We’ll cover the basics of writing surveys, the question types to prioritize, tips for phrasing your questions, and the mistakes to avoid.
5 Free Customer Satisfaction Survey Templates
Easily measure customer satisfaction and begin to improve your customer experience.
- Net Promoter Score
- CSAT Score
- Customer Effort Score
- And more!
Download Free
All fields are required.
How to Write a Survey: The Basics
What makes a good survey?
In my experience, there are five non-negotiable elements:
- An effective survey always collects specific information from customers.
- It is concise and targets a particular demographic segment.
- It includes different question types.
- Every question is framed to avoid participant bias.
- Each question is designed to be answered quickly and intuitively.
There are other considerations, too.
For example, how do you decide the order of questions? Is there even a rule to follow?
I’ll get into that in a bit when I discuss tips on how to write survey questions. First, let’s review the different question types you should prioritize in any survey you create.
Pro tip: HubSpot’s Customer Feedback Software lets you create in-depth customer surveys, seamlessly share the collected insights with your team, and analyze the feedback data to improve the audience experience.
6 Types of Questions I Typically Include in My Surveys
Surveys rely on a few specific question formats. Below, I’ve listed the six most common types:
1. Multiple Choice Questions (MCQs)
Perhaps the most common type of survey question, multiple choice questions offer participants a set of response options. They’re quick and simple to fill out, and the structured responses make data analysis a breeze.
Best for:
- Gathering demographic information from your customers.
- Determining product/service usage and consumer priorities.
There are two subtypes of MCQs.
Single-Answer MCQs
Single-answer MCQs present participants with several response options, but participants can pick only one of them.
Take a peek at the snapshot below to better understand this question format:
Multiple-Answer MCQs
Multiple-answer MCQs allow participants to choose all responses that apply to them from a list of options. This format is ideal if you lack demographic details about your audience base.
The image below shows what a typical multiple-answer MCQ looks like:
Pro tip: Always include an “Other” option that participants can pick if none of the responses apply to them. This helps avoid skewed results.
2. Rating Scale Questions
This question type asks respondents to rate something on a numerical scale assigned to a particular sentiment.
Best for:
- Gauging your company's Net Promoter Score® (NPS).
- Evaluating customer satisfaction.
- Getting clear-cut quantitative feedback from your customers.
Take an NPS survey as an example. A general rating scale question in this context would look like this:
Easy enough to understand, right?
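To see what you’d do with those responses, here’s a minimal sketch of the standard NPS calculation: respondents rating 9-10 are promoters, 0-6 are detractors, and the score is the percentage of promoters minus the percentage of detractors. The function name and sample data below are my own illustrations, not part of any particular survey tool.

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 ratings.

    Promoters rate 9-10, detractors rate 0-6, and passives (7-8) are
    ignored; NPS ranges from -100 to +100.
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical batch of 10 responses: 5 promoters, 3 passives, 2 detractors
print(nps([10, 9, 9, 10, 9, 8, 7, 8, 4, 6]))  # → 30
```

The passives dropping out of the numerator is the point of the metric: only strongly positive and strongly negative responses move the score.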
Here's where it gets a little more complex: rating scale questions have subtypes. And each one is ideally suited to a different scenario.
Likert Scale Questions
Typically used in customer satisfaction surveys, Likert scale questions evaluate how strongly respondents agree or disagree with a statement. They rely on a 5-point or 7-point rating system, where the scale may range from “strongly disagree” to “strongly agree.”
That said, there isn’t much difference between the two point systems. The 7-point scale simply allows participants to be more granular with their responses.
Semantic Differential Questions
Semantic differential questions are similar to Likert scale questions.
However, at either end of the scale, there is an opposing sentiment. For example, “How would you rate our service?” with one being “terrible” and five being “exceptional.”
Over the years, I’ve found that semantic differential questions often elicit intuitive and emotional responses from participants. The most likely reason is that the answer draws from the respondent’s experience and perception of the product or service.
Pro tip: Rating scale surveys can help you track customer sentiment over time. Simply send one to the same audience segment periodically over a given period. Changes in responses can tell you whether customer sentiment is trending positively or negatively.
3. Closed-ended Questions
All of the examples above are closed-ended questions.
In short, you assign a set number of responses to these questions. Survey participants then choose one or more options, depending on how the question is framed.
Best for:
- Gathering quantitative feedback data on different product/service aspects.
- Streamlining the data analysis process by relying on rated/ranked responses.
Pro tip: Include multiple types of closed-ended questions in your surveys. This keeps the questionnaire from feeling monotonous and keeps participants engaged.
4. Open-ended Questions
Open-ended questions are accompanied by an empty text box where respondents can answer the question as they see fit.
Best for:
- Gathering granular and qualitative feedback about your product/service.
- Identifying overlooked concerns among specific customer segments.
For me, when it comes to understanding consumer sentiment, nothing beats open-ended questions. Yet, I avoid them like the plague if I want to collect in-depth quantitative customer data on elements like product use cases, market trends, or purchasing habits.
My conversation with Jake Ward, a fellow marketer and founder of Kleo, revealed great insights about open-ended questions.
Jake shared how listening to what people want (instead of his gut) has helped him build tools like Kleo. Here’s how he uses open-ended questions to get targeted and actionable audience feedback:
“Open-ended questions provide raw, unfiltered insight that multiple-choice questions cannot. People will reveal things you didn’t ask. Kleo’s open-ended questions about LinkedIn’s biggest frustrations led to features like creator engagement analysis and content preview. If we had used multiple-choice, we might have missed those insights. Multiple-choice can provide quick quantifiable data to validate assumptions. The key is balance.”
Pro tip: Open-ended questions require considerable thought and effort to answer. So, avoid including too many of them in your survey, or participants may just choose to abandon the questionnaire.
5. Dichotomous Questions
Dichotomous questions offer only two options for participants to choose from.
Best for:
- Simplifying the survey experience and gathering factual data.
- Collecting concise feedback that's quick and easy to analyze.
A perfect example: “Are you satisfied with our customer service?” The responses would be “Yes, I'm satisfied” or “No, I'm dissatisfied.”
Here’s another example of what a dichotomous question looks like:
Pro tip: Use dichotomous questions sparingly in your surveys since they leave little to no room for data interpretation. Ideally, you’d include only one in your questionnaire.
6. Ranking Questions
Ranking questions have participants order the response options (or assign each a score) based on how important each option is to them.
Best for:
- Analyzing what customers value most about your product/service.
- Identifying gaps in product functionality to enhance the customer experience.
Now, ranking questions can reveal what a specific audience demographic values most. So, from the example above, Respondent Pool X could rank “user experience” first, while Respondent Pool Y could prioritize “ease of use” over everything else.
The only issue is that you won’t understand the “why” behind those responses.
I usually get around this problem by pairing a ranking question with an open-ended query that asks participants to explain the reasoning behind their answers.
Pro tip: If you have an extended list of options to rank, go for a MaxDiff analysis. This more advanced ranking format asks respondents to pick the most and least important options from small subsets, which simplifies comparisons across a long list.
5 Tips on How to Write Great Survey Questions
You know all about the different survey question types. Now, we get to the actual meat of the matter — how to write survey questions.
Don't worry, it’s not as difficult as you might think. And to help you along, I’ve laid out my five go-to tips below.
1. Use clear and conversational language.
Early in my career, I learned that language and phrasing are the two most important elements in a survey.
Think about it. When do participants usually fill out your questionnaires? During lunch breaks, between reading briefs, or while reviewing projects. They simply don’t have the time to puzzle out a particularly complex or vague question.
To help things along, frame your survey templates and questions so that they are:
- Short and concise.
- Quick and simple enough to answer.
- Targeted toward a specific point.
Here’s an example to give you a better idea:
I spoke to Stefan Chekanov, the CEO of Brosix, to learn how his team approaches these customer survey questions.
One of Chekanov’s best tips is to focus your survey questions on the survey’s core objective. Here’s an example he shared to contextualize this advice:
“If you are evaluating customer service quality, do not dilute the survey with questions about product features or pricing — this will only confuse respondents and muddle your results. Moreover, the order of your questions matters. Start with straightforward, engaging questions to build momentum, and ask more sensitive questions in the middle, not the end. And lastly, try to keep your surveys short — no longer than seven minutes to fill.”
2. Offer an exhaustive list of response options.
The entire point of learning how to write survey questions is to collect valuable customer feedback.
That won’t happen if you don’t know where to draw the line when framing your questionnaire.
Take this question as an example:
- “What is your gender?”
- Male
- Female
What do you think is missing in that question?
For one, inclusivity and sensitivity — the query potentially devalues a participant's sense of identity and personal experience.
Second, it’s dichotomous and lacks the necessary response options.
A quick fix to this could be something along the lines of:
- “What is your gender?”
- Male
- Female
- Transgender
- Non-binary
- Non-conforming
- Prefer not to say
In a chat with Sabas Lin, the co-founder of Know.ee, I discovered why a list of responses matters.
Sabas explained how a drop-down list of options strikes the right balance between flexibility and simplicity for responders.
“With drop-downs, customers can find answers that feel right without getting bogged down in a lengthy list or overwhelmed by open-ended responses,” he shared. “They save time for both sides — respondents get the sense they’re truly being heard without writing long explanations, and we gain feedback that’s both structured and actionable.”
Your goal with surveys is to pull data from the widest pool of consumers.
The only way to do that is by recognizing that not every respondent fits a certain category or mold. This way, you also avoid offending the sensibilities of a particular demographic.
3. Steer clear of leading questions.
I'm a big fan of courtroom dramas and TV shows. And every time a character objects and says “leading the witness,” I sigh and say “good catch.”
But what exactly is a leading question?
It’s essentially any question (open- or closed-ended) that subtly predisposes a participant toward a particular response.
Here’s one such example:
“On a scale of 1-5, how excellent is our customer service?”
In the question above, I already hinted that I think my company's customer service is “excellent.” By doing so, I’m also directing participants to feel the same way.
Instead, what I should have asked is:
“On a scale of 1-5, how would you rate our customer service?”
Federico Spiezia, a UX designer and founder of Sparkr, told me a great way to avoid leading questions. Instead of nudging participants toward a response, ask them to share an anecdote related to your survey’s theme.
“I apply The Mom Test from Rob Fitzpatrick’s book. I try to get real-world experiences as much as possible,” he explains. “A question I always ask is, ‘Tell me the last time that…’. For example, if you say you have a problem, I will ask you to tell me the last time it occurred.”
That said, the most effective way to eliminate leading questions from your survey is to:
- Cross-check for any statements/questions that seem coercive.
- Remove any questions that assume the respondent’s history or current actions.
- Avoid using any modifiers or adjectives when framing your questions.
The more you scrutinize here, the cleaner your data gets. That will help you avoid skewed or distorted results.
4. Localize hypothetical questions in a specific time frame or incident.
Asking survey participants to predict their behavior is perhaps the most unreliable way to gather customer feedback. And yet, you can see predictive or hypothetical questions in almost every survey you fill out.
You’ve probably come across them multiple times. Some of them are framed as the following:
- “How likely are you to use this product?”
- “Would you pay a monthly fee of X to use this service?”
A better way of phrasing such questions is to localize the customer’s experience in a recent, specific incident or time frame.
So, here’s what the fixed versions of those queries would look like:
| Hypothetical/Predictive | Localized |
| --- | --- |
| “How likely are you to use this product?” | “Approximately how many times did you use this product in the last 14 days?” |
| “Would you pay a monthly fee of X to use this service?” | “What’s the approximate monthly fee you would be comfortable paying for this service?” |
If you must use hypothetical or predictive questions, try not to limit respondents to precise answers. Instead, go for localized ranges and estimates.
The reason for this is two-fold:
- First, recalling exactly how many times they used a product or service can be challenging for participants.
- Second, participants are more likely to answer such questions if they feel they can be at least somewhat accurate.
Of course, ideally, you’d exclude such questions from your survey altogether. However, I realize that may not be possible in all cases.
5. Identify and split double-barreled questions.
Double-barreled questions are convoluted queries that mix multiple issues and make it difficult for participants to answer either one accurately.
Here’s a perfect example of one: “Would you say you are a hard worker and always punctual?”
The question correlates “hard work” with “punctuality” and assumes that the two traits are mutually inclusive. But what if the respondent is a hard worker who has occasionally been late? How would they answer?
One trick that I use often is to pay close attention to the “and” in my surveys.
For instance, I recently created a survey to gauge the effectiveness of my company’s internal onboarding and training process. The third question in the initial template was:
- “How would you rate our training and onboarding process?”
A quick fix for this was to split the question into two parts. So, the final survey that went out had the following:
- “How would you rate the training materials you received?”
- “How would you rate your onboarding experience?”
I’ll also disregard an element if it doesn’t meet my survey goals.
For example, if my objective was to evaluate the performance of our customer service staff, I’d solely focus on quantitative data (time to resolve complaints, total queries resolved, etc.). Product experience or functionality wouldn’t be a part of that survey.
5 Mistakes I Try To Avoid When Writing Survey Questions
You’ve got a handle on how to write survey questions. Now, it’s time to cover what to avoid.
To help you here, I’ve listed the five “don’ts” when it comes to creating your questionnaires.
1. Don’t present loaded questions.
Earlier, I talked about leading questions.
Now, we get to “loaded” queries. These questions go beyond subtle biases. Instead, they:
- Draw significantly inaccurate assumptions about the participant.
- Impose explicit bias to invoke a “desired” response.
- Significantly distort the collected data, making accurate analysis impossible.
Here’s a quick walkthrough on how to identify such questions:
| Question | Identifying Elements |
| --- | --- |
| “Have you stopped binge drinking?” | Assumes the participant binge drinks (or did binge drink). Shows explicit bias by forcing a response whether or not the question applies to them. |
Another way to get around loaded questions is to fine-tune when you send out your surveys.
For instance, let’s say you’d like to know why online shoppers prefer your site’s experience over your competitors.
In this case, you can’t pop a survey when they visit your site. That’s too early, and they aren’t your customers yet. You’d be better off waiting for them to purchase something and presenting them with a quick questionnaire on the checkout page.
2. Don’t fall for the question order bias.
Did you know that the order in which you present your survey questions can affect participant responses?
This is called question order bias. It usually occurs with closed-ended questions and can take the form of:
- Contrast bias: Where a participant’s response shifts significantly because of the order of the questions.
- Assimilation bias: Where a participant’s responses begin to align with the ideas and topics presented earlier in the survey.
The most effective solution is to group related questions into blocks, then randomize the order of questions within each block. This prevents respondents from drawing connections between specific questions.
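If you build surveys programmatically, that block-randomization idea takes only a few lines. Here’s a minimal Python sketch; the question text and block layout are hypothetical, and most survey platforms offer an equivalent built-in setting.

```python
import random

def randomize_within_blocks(blocks, seed=None):
    """Shuffle questions inside each block while keeping block order fixed.

    `blocks` is a list of lists; each inner list holds related questions.
    Returns a flat question order for one respondent.
    """
    rng = random.Random(seed)  # seedable for reproducible tests
    ordered = []
    for block in blocks:
        shuffled = list(block)  # copy so the original survey isn't mutated
        rng.shuffle(shuffled)
        ordered.extend(shuffled)
    return ordered

# Hypothetical two-block survey: UX questions first, support questions second
survey = [
    ["Q1: ease of use", "Q2: navigation", "Q3: design"],
    ["Q4: response time", "Q5: agent helpfulness"],
]
print(randomize_within_blocks(survey))
```

Each respondent sees the UX block before the support block, but the questions inside each block arrive in a different order, which dilutes both contrast and assimilation effects.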
3. Don’t confuse participants with double negatives.
A crucial step in reviewing your survey questions is to check them for double negatives. Here’s an example of what such a question looks like: “Do you disagree that Resolution 10 should not be passed?”
The question isn’t clear at all. Is it asking whether I want Resolution 10 to pass? Or does it simply want to know how I feel about Resolution 10 being passed?
Fortunately, it’s quite easy to avoid such questions in your surveys. You only have to look for instances where “no” and “not” are paired with:
- Negative prefixes like “un-,” “in-,” “non-,” or “mis-.”
- Negative adverbs (scarcely, never, seldom, barely, etc.).
- Words of exception (unless, except, if).
The best solution, though, is to keep your language neutral. No positives, no negatives (yes, that’s a double negative) — just objective questions that ask about specific issues.
4. Don’t overlap your answer/response options.
Your survey questions aren’t the only thing you must pay attention to. Correctly ordering your response options is just as important.
Take a look at the image below to get a better idea of what I’m talking about.
Now, if you are 34, would you pick the “24-34” age group or go for the “34-45” choice?
Confusing, isn’t it? It’d be better if the response options used non-overlapping ranges instead, such as “18-24,” “25-34,” “35-44,” and “45-54.”
This way, you don’t overlap or distort a participant’s answer. The best part? Any demographic data you collect from your surveys remains accurate.
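If you generate response options or bucket responses programmatically, a small sketch like this guarantees each age maps to exactly one non-overlapping bracket. The bracket labels and cutoffs are illustrative assumptions, not a standard.

```python
from bisect import bisect_right

# Non-overlapping brackets: each age belongs to exactly one label.
BOUNDS = [25, 35, 45, 55]  # lower edge of every bracket after the first
LABELS = ["18-24", "25-34", "35-44", "45-54", "55+"]

def age_bracket(age):
    """Map an age to its single bracket; a 34-year-old lands in 25-34."""
    if age < 18:
        raise ValueError("this survey targets adults (18+)")
    # bisect_right counts how many lower edges the age has passed
    return LABELS[bisect_right(BOUNDS, age)]

print(age_bracket(34))  # → 25-34
print(age_bracket(35))  # → 35-44
```

Because every boundary value belongs to exactly one bracket, the ambiguity a 34-year-old faces with “24-34” vs. “34-45” options can’t occur.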
5. Don’t forget to test your surveys.
The preferences, needs, or market challenges of your customer base are constantly evolving. Consistently testing your questionnaires is the most effective way to determine whether you’re keeping up with that evolution.
So, here’s a quick guide on how to test your surveys:
- Pretesting. Assemble a small group of participants (bonus points if they share demographic traits with your audience). Hand them a questionnaire draft to review content, language comprehension, and template design.
- Pilot Testing. Send a formalized, more structured version of the initial survey draft to a small sample group. This will give you an idea of the responses you will receive when you deploy the actual survey.
- Cognitive Interviewing. Your goal here is to understand why the previous respondents answered the way they did. Schedule 5-10 minute interviews with them. Also, ask them if the content seemed vague or made them uncomfortable.
Getting experts to evaluate the survey or using focus group discussions is also a great idea. This will help you analyze:
- The survey content’s relevance to your target audience.
- Potentially overlooked demographic traits in specific segments.
- Issues with comprehending certain complex concepts.
When all of that is done, use this final checklist to ensure that everything is in order:
Using AI Survey Makers to Design Customer Surveys: What to Keep in Mind
AI-powered survey makers offer speed and scalability.
You can set up and launch a full-fledged customer survey within minutes (or even faster). Plus, many of these AI survey tools can fast-track data analysis for a large volume of data.
However, while AI tools can be efficient, they still lack the ability to capture the finer details that humans naturally notice. After testing various AI survey makers, I noticed a consistent limitation: these tools often fail to understand customer sentiment and context.
Even when I offered contextual information to guide the AI survey tools toward meaningful questions, the responses were generic. They lacked the empathy that drives people to respond to survey questions.
Here’s an example from ChatGPT:
Notice how the questions feel mechanical and lack a personal connection with respondents? For this reason, I wouldn’t recommend relying solely on an AI survey maker. Maybe use one to get started with brainstorming question ideas, but always be sure to rewrite them in your own voice.
Besides, since most AI tools are trained on existing datasets, they naturally have biases. This bias can impact your survey’s neutrality because these tools can create leading questions based on historical data. As a result, this bias can also seep into data interpretation. AI survey makers can misinterpret responses to produce false conclusions and overlook the insights that matter.
So, for now, my advice is to stay wary of AI survey makers.
2024 Benchmarks for Customer Surveys
Finally, I want to leave you with some data points to consider before you start writing your survey.
- Surveys with 1-3 questions had the highest completion rate, at 83.34%. (Source: Survicate)
- The ideal survey length is around 8-10 questions, with 42% of consumers saying they’re willing to answer 7-10 questions as part of a survey. (Source: Glimpse-HubSpot)
- The more questions you include in your surveys, the lower their completion rates. Surveys with 9-14 questions have a completion rate of 56.28%. Those with 15 questions or more have a completion rate of 41.94%. (Source: Survicate)
- Survey response rates are primarily dependent on the channel you use and the time of deployment. Factors such as a wider participant pool do not generate a higher response rate. (Source: ScienceDirect)
- Monday, Wednesday, and Sunday are the best days to send surveys. Mondays have the highest response rates (20%), while Wednesday and Sunday follow closely behind with 19%. (Source: Zendesk)
- Just 13% of in-app surveys receive a response. (Source: Business of Apps)
- Surveys sent between 2-4 AM have the highest response rates (24%). Meanwhile, questionnaires sent between 10 AM to 11 AM have a response rate of just 16%. (Source: Zendesk)
- The average online survey response rate is 44.1%. However, 20-25% is also considered acceptable. (Source: ScienceDirect)
Start Writing Better Survey Questions
There you have it: You officially know all the steps necessary to write good survey questions. I’ve discovered that the way you phrase the questions, the types of questions you use, the answer options you provide, and the order of questions can affect the quality of the data your survey collects.
Following the tips above has helped me write stronger surveys, and I hope they do the same for you. If nothing else sticks, let me leave you with this: Make sure your questions are focused, simple, and objective.
Editor's note: This article was originally published in April 2018 and has since been updated for comprehensiveness.