What are Survey Sample Sizes & How to Find Your Sample Size

Taylor Cromwell


As a content writer who loves diving into data, I’m a big fan of surveys. They can be an incredibly powerful tool for positioning your company as a thought leader in your sphere, and they can also help you gain critical insights about your customer base.


But getting a survey right can be incredibly tricky; it’s a sophisticated blend of art and science (and likely some trial and error). One of the key components of an effective survey is your sample size: the number of people you actually gather data from to represent the larger group you want to learn about.

If the sample size is off, you risk drawing incorrect conclusions, wasting resources, or worse — making major business decisions based on unrepresentative data.


To help get you started on the right foot, I spoke with several data analysts and expert researchers to learn more about how they approach sample sizes. I’ve also included key insights from HubSpot’s very own Content Innovation team.


    If you’re a creative like me and perhaps snoozed through statistics class, let’s go over a refresher of the basics.

A sample = a smaller group that represents the larger population you want to know more about. The sample size is simply how many people are in that group.

    For example, let’s say you are running a study on trends in customer service. It’s not feasible to survey everyone in the entire customer service industry. Instead, you need to choose a sample size that serves as a snapshot of this group’s opinions, behaviors, or characteristics.

The key? The right sample size helps ensure your findings are statistically meaningful, so the conclusions actually hold up when you speak for the whole target audience.

    Why does sample size matter?

    Sample sizes play a big role in how valid your survey results turn out to be because of what’s called narrative reliability, explains Olivia Thompson, a business consultant and economist at Chamber of Commerce.

    “With smaller samples, each response is arguably more influential in the story, and if your sample is small, you will likely rely on some outlier responses. If I have to use a smaller sample in practice, I’ll back it up with qualitative follow-ups to close the gaps and try to add context. It’s also a means of compensating for any possible bias in the data if sample size isn’t optimal.”

    What is a good sample size for a survey?

    “One myth you might think is true based on all the reports big brands and research centers put out is that you need thousands of people in a survey sample for it to be as accurate as possible. That is not always the case,” explains Pamela Bump, Head of Content Innovation, Research & Growth at HubSpot.

    Here's what you actually need to know about sample sizes:

    • Minimum threshold - 100 respondents (for most business surveys).
    • Maximum recommendation - 10% of your population, but not exceeding 1,000.
    • Sweet spot - Enough to represent different segments of your audience without drowning in data.

    Why it matters: If your sample size is too small, it could yield skewed results because of the overrepresentation of outliers or extreme views. Meanwhile, a sample that’s too large can be overly complex, costly, and time-consuming to analyze. It’s a delicate balance.

    For a more accurate way to determine your sample size, you can use a formula. Let me warn you: The sample size formula can be a little overwhelming. But we'll work through it together.

    Why Confidence Levels and Margin of Error Matter

    There are some key terms you want to know before getting to sample sizes. These include confidence levels and margin of error.

• The confidence level indicates how sure you can be that your results reflect the population. Ideally, you want to aim for a 95% confidence level, meaning that if you repeated the survey many times, the results would fall within the margin of error 95 times out of 100. For some critical studies, researchers opt for a 99% confidence level.
• The margin of error quantifies how much error you’re willing to accept in your results. A 5% margin of error means results could vary by 5% in either direction. A lower margin means more accurate results, but it requires a larger sample size.

    Let’s say you are surveying customers about a new product feature. If your sample has a 5% margin of error and 60% of respondents favor the feature — the true proportion in the total population likely falls between 55% and 65%.
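To make that concrete, here’s a quick Python sketch of the same arithmetic. The z × sqrt(p(1 − p) / n) expression is the standard large-sample approximation for a proportion’s margin of error, and the response count of 400 is just an illustrative number I picked.

```python
import math

# The product-feature example: 60% favorable with a 5% margin of error
# implies the true share likely falls between 55% and 65%.
favorable = 0.60
margin_of_error = 0.05
print(f"Likely range: {favorable - margin_of_error:.0%} to {favorable + margin_of_error:.0%}")

# Going the other way: the margin of error a given number of responses buys
# you at 95% confidence (z = 1.96), with the conservative p = 0.5 assumption.
z, p, n = 1.96, 0.5, 400  # n = 400 is purely illustrative
moe = z * math.sqrt(p * (1 - p) / n)
print(f"Margin of error with {n} responses: ±{moe:.1%}")  # about ±4.9%
```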

    I’ll go over how exactly you can calculate the margin of error and confidence level below. Don’t worry!

    Sample Size Formula

n = (z² × p(1 − p) / e²) / (1 + z² × p(1 − p) / (e² × N))

The formula above is Cochran’s sample size formula with a finite population correction (it’s often mislabeled as Slovin’s formula, which is a simpler version that uses only N and e). I know that looks daunting … The good news is that you won't have to do this math yourself. There are plenty of sample size calculators, including one from HubSpot in our A/B Testing Kit. However, it's a good idea to be familiar with this formula and what it means.

    The elements of this formula are:

    • z = z-score. The number of standard deviations a given proportion is away from the mean.
• p = estimated proportion (sometimes labeled the standard deviation). Your best guess at how the population will split on a question; if you don’t know, use 0.5, which gives the most conservative (largest) sample size.
    • e = margin of error (percentage in decimal form). The percentage that tells you how much you can expect your survey results to reflect the views of the overall population.
    • N = population size. The total number of people in the group you're trying to study.

    To find your z-score, use the table below.

• 80% confidence level → z-score of 1.28
• 85% → 1.44
• 90% → 1.65
• 95% → 1.96
• 99% → 2.58

    So, if you want a 99% confidence level, you’ll need a z-score of 2.58. If you want a 95% confidence level, you’ll need a z-score of 1.96 — and so on.

Essentially, your sample size is decided based on the population, how confident you want to be in your results, and the margin of error you’d like to allow for. Let’s look at an example.

    Below, I used a sample size calculator and I put in that my population size is 10,000, and I'd like to be 95% confident in my results within a 5% margin of error. The calculator then used the formula above and told me my sample size needs to be at least 370 people.

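If you’d rather script it than click through a calculator, here’s a minimal Python sketch of the formula above. The function name, the default values, and the small z-score lookup are my own choices, and it only covers the common confidence levels.

```python
import math

def sample_size(population, confidence=0.95, margin_of_error=0.05, p=0.5):
    """Estimate the minimum survey sample size using the z-score formula
    with a finite population correction. p = 0.5 is the most conservative
    guess at how responses will split."""
    z_scores = {0.90: 1.65, 0.95: 1.96, 0.99: 2.58}  # common two-sided z-scores
    z = z_scores[confidence]

    # Sample size for a very large (effectively infinite) population.
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)

    # Finite population correction: smaller populations need fewer responses.
    return math.ceil(n0 / (1 + n0 / population))

print(sample_size(10_000))  # 370, matching the calculator example above
```

Changing margin_of_error to 0.03 or confidence to 0.99 is a quick way to see how fast the required sample grows as you tighten your requirements.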

    Pro tip: Use HubSpot’s Customer Feedback Software to easily make surveys and hear from your customers.

    Tips for Choosing the Best Sample Size

    Now that we know the formula (and how to find a calculator to do it for you), I wanted to share some of the tips that experts shared with me. This can help you find the best sample size for your business goals more easily and efficiently.


      1. Be practical with resources.

      When starting a research project, one of the most effective approaches is to begin with a smaller test sample. This can help you validate your approach before committing to a larger study — think of it as a pilot run.

      Anna Blount, VP of Industry Research at Datafy, says, “While large sample sizes yield more reliable data, they may not be necessary for every project. Weigh the level of accuracy needed against your available resources.”

      Again, keep in mind the risks involved if the study is too large. It’s likely that you can find the answers you’re looking for with a smaller, but representative, group.

Thompson agreed with this advice. She recommends starting small with 10% of your target group. “You’d be surprised how much this little test can tell you about what you’re dealing with,” she says. “This first run gives you a sense of what opinions look like and whether they’re similar or different, so you know whether you should increase your sample size or just slice yourself a small enough slice to get consistent feedback.”

      I like that this pilot approach not only helps validate your methodology but also saves you precious resources in the long run by identifying potential issues early in the process.

      Pro tip: Check out these customer satisfaction survey templates to streamline your survey creation process.

      2. Consider population and demographics.

      Understanding your target population size is crucial for determining an appropriate sample size. While larger populations typically require larger samples, there's a point of diminishing returns.

      For example, if you run a survey to understand consumer preferences for a product but only target a specific age group — your results likely won’t reflect the entire market. Instead, you’ll want to be proactive about what voices you are looking to capture.

      I like to think about it this way: If my product appeals to a wide range of demographics, I’ll need to include a diverse mix of respondents to gain comprehensive insights.

      For instance, if I’m developing a new beverage, I should consider sampling not just young adults, but also middle-aged and older consumers, as each group may have unique tastes and preferences that could influence my product’s success. Ignoring these differences could lead to skewed results, leaving me unprepared for how my product will be received across different segments of my target market.

      So my advice is: Take the time to define your population accurately and ensure your sample captures a broad array of perspectives. This will ultimately lead to more reliable data and better-informed business decisions.

      3. Account for response time and deadlines.

      These factors can significantly impact the quality, timeliness, and usefulness of the data collected. First, you’ll want to consider how long it will take your target audience to complete the survey. If you’re asking for in-depth feedback, I think it's wise to give respondents time to respond thoughtfully. However, if it’s a short, simple survey, a tighter deadline could encourage quicker participation.

Your survey (unfortunately) doesn’t exist in a bubble. Your respondents are juggling schedules of their own, which can delay responses or reduce completion rates, so it’s good to be aware of these barriers. Additionally, the timing of your survey launch matters: certain times of day or days of the week may yield higher engagement than others.

      Pro tip: If you want to boost your response rate, send friendly reminders as the deadline approaches. I often don’t mind a nudge to finish something I may have forgotten — but avoid sending too many reminders, or it could become spammy.

      4. Plan for subgroups and analysis.

      When planning your sample size, it's essential to consider not just the overall numbers but how the data will be segmented in your analysis.

      “If you want to break down your data by subgroups (like age or gender), make sure you have enough respondents in each group,” says Jonathan Pirc, Founder of Lab42 Research. “A total sample of 500 might be enough for top-line insights, but if you plan to dive deeper into demographics, you'll need more respondents to keep those results reliable.”

      I think this is particularly important for studies that aim to draw conclusions about specific demographic segments or user behaviors.

      Here’s a practical example: Say I’m surveying customer satisfaction for a new product, and I want insights by age group (like 18-24, 25-34, etc.). With a total sample of 500, if only 10% are aged 18-24, that’s just 50 responses — not enough to reliably represent that age group. To get clear insights, I may need a larger sample or set specific quotas to ensure enough responses from each age range.
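Here’s a small sketch of that quota math in Python. The 100-responses-per-subgroup minimum is my own assumption, borrowing the rule-of-thumb threshold mentioned earlier; swap in whatever per-group minimum your analysis actually needs.

```python
import math

def total_needed_for_subgroup(min_per_group, group_share):
    """Total sample required so the smallest subgroup still hits its quota."""
    return math.ceil(min_per_group / group_share)

# The 18-24 example above: that group is ~10% of respondents. If we want
# at least 100 of them, a 500-person sample won't cut it.
print(total_needed_for_subgroup(min_per_group=100, group_share=0.10))  # 1,000 total
```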

      Pro tip: Remember to focus on both the sample size for the survey as well as the sample for a specific stat you’re looking for, explains Jessica Davis, Senior Manager of Customer Insights Marketing at HubSpot. “Quotas for both keeps us accountable.”

      Finally, let's review some steps that you should take to ensure your results are statistically significant.

      1. Define your population.

The population size in your formula should be the number of the entire group of people you want to understand. If you want to know how many people in the U.S. like your product, you obviously can’t survey over 300 million people. However, it’s important to know the total number of people that you want to understand, whether it’s people in a country, from a certain company, or in a certain profession.

      Expert Insight

      Kristie (Kay) Savage, head of Fanatics, Inc.’s Gaming Group’s Data Science and Product + Customer Data Program, breaks this down a little more in-depth: “When administering a survey about sentiment, attitudes, and/or behaviors, you can calculate your sample size based on your population and tolerance for error. As a rough rule of thumb sans math, you’ll generally need at least 100 responses for marginally acceptable accuracy.”

      However, she gave examples to illustrate when 100 isn’t the right number of respondents. If you have a very small population size, say 500, and you have a high tolerance for error, you may actually only need 80 respondents. If you have a very large population of 10 million and you have a very low tolerance for error, you may actually need 1,100 respondents.

      “These numbers were calculated using a formula, and there are many online calculators dedicated to helping you find your sample size for surveys, based on two factors: population size and tolerance for error. Calculations will get you more accurate sample size needs, if you want to get more precise than the rough rule of thumb.”
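For context, here’s roughly how numbers like those fall out of the sample_size() sketch from the formula section. Savage doesn’t specify exact confidence levels or margins of error, so the settings below are my assumptions; they just happen to land near the figures she describes.

```python
# Reusing the sample_size() sketch from the formula section above.
# Small population, high tolerance for error (assumed: 95% confidence, 10% margin).
print(sample_size(500, confidence=0.95, margin_of_error=0.10))  # ~81, near the ~80 she cites

# Huge population, low tolerance for error (assumed: 95% confidence, 3% margin).
print(sample_size(10_000_000, confidence=0.95, margin_of_error=0.03))  # ~1,067, in the ballpark of ~1,100
```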

      2. Have a high confidence level with a low margin of error.

When you have your results, you want to be confident that most of your population would agree with them, with a low margin of error. Usually, the margin of error is around 5% or lower and the confidence level is around 95% or higher. That means your results could vary by about 5% in either direction, and you can be confident the true answer falls within that range 95% of the time.

      Expert Insight

      “A 5% margin of error is fine for most surveys. But if you need tighter data, say a 2-3% margin, you’ll need to significantly increase your sample size. For example, in our brand tracking studies, we often target 3% and that means bumping the sample size closer to 1,000 or more,” explains Pirc.

      3. Plan for a response rate of about 20-30%.

Of course, not everyone you send a survey to will respond, so you want to account for that. Once you’ve put your information into a sample size calculator and figured out how large your sample size needs to be, it’s important to plan for your response rate to be around 20-30%.

      In fact, an even more conservative approach would be to account for a 10-15% response rate, meaning you should send your survey to many more people than the required sample size to ensure you receive that many responses.
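Here’s what that planning looks like as a quick Python sketch; the function name is mine, and the response rates are the ones discussed above.

```python
import math

def invitations_needed(target_responses, expected_response_rate):
    """How many people to invite so that enough of them actually respond."""
    return math.ceil(target_responses / expected_response_rate)

# For the 370-person sample from the earlier calculator example:
print(invitations_needed(370, 0.25))  # 1,480 invites at a 25% response rate
print(invitations_needed(370, 0.10))  # 3,700 invites at the conservative 10% estimate
```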

      Expert Insight

      To improve response rates, Savage recommends going “where your people are.”

      “If you are a small shoe store wondering how much people like your new shoes, go to that store and hand out surveys or ask folks in person. It may hurt your response rates if you add an additional barrier, like sending the survey via email to your subscribers, and having your customer base wait for and find the email,” she said.

      “Making the ability to respond and take action as easy as possible, with limited steps involved for your customers, could increase your responses.”

      4. Estimate the standard deviation.

A low standard deviation means the results cluster tightly around the mean, while a high standard deviation means they’re spread across a much greater range. Since this step asks you to estimate how responses will vary before you’ve collected any, pick 0.5 (the most conservative estimate, corresponding to p = 0.5 in the formula above) to make sure your sample size is large enough.
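If it helps to see why 0.5 is the safe default: for a yes/no question, the spread of responses is largest when opinions split 50/50, which is exactly what a 0.5 estimate assumes. A tiny sketch:

```python
import math

# For a yes/no question, the standard deviation of responses is
# sqrt(p * (1 - p)), which peaks at 0.5 when opinions split 50/50.
# Assuming that worst case keeps your required sample size on the safe side.
for p in (0.1, 0.3, 0.5, 0.7):
    print(f"p = {p}: spread = {math.sqrt(p * (1 - p)):.2f}")
# prints 0.30, 0.46, 0.50, 0.46 -- required sample size grows with this spread
```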

      5. Focus on sample quality.

      Don’t mistake size for relevancy, Thompson explains. “I believe one of the biggest mistakes researchers can make is misjudging sample size for sample quality, especially in B2B or niche settings. In many cases, we’re just trying to collect a huge sample and do not question whether each individual can contribute to the study’s purpose,” she said.

      “In order to avoid this, I always try to keep relevancy at the forefront when I select my sample. Honestly, I’d rather work with a small pool of highly engaged participants than a larger, randomly selected group. Every data point matters. It’s important to define rigorous screening standards at the outset. I have learned that it’s much easier if you get it straight from the get-go than if you need to dig through useless information afterward.”

      6. Troubleshoot your sample size results.

      You’re almost at the finish line but may hit some snags. If your sample size is too large, you can make adjustments by increasing your margin of error or decreasing your confidence level.

      Additionally, if it is too small, sample more of your population.

      You’ll also want to adjust for non-response and subgroups:

      • Account for non-response. Increase your sample size according to expected response rates (e.g., if expecting a 20% response rate, multiply your sample size by 5).
      • Ensure adequate subgroup representation. If you’re analyzing specific groups, like age or gender, make sure there are enough respondents in each subgroup to draw valid conclusions.

      Expert Insight

      You can also try rotational sampling if you have a hard-to-reach or small target group. “Instead of pushing for one big survey, you gather a small group to start, keep them engaged over time, and then gradually swap in new participants to keep things fresh,” Thompson explained.

      “With this rolling method, you get a sample that grows over time, adapting to new insights while staying manageable. It’s a great way to remain representative and catch changes in opinion, even if the group’s small — you’re getting accurate, changing feedback without having a big sample size.”

      Optimal Sample Sizes Are Key for Meaningful Results

      I know survey sample sizes can be overwhelming, but hopefully, this guide can serve as a helpful resource for getting started. If anything, it should be clear why sample sizes are so crucial to the integrity of your survey.

If you are looking for more resources, I recommend diving into survey templates and the A/B Testing Kit mentioned above to help you get started.

      Editor's note: This article was originally published in December 2021 and has been updated for comprehensiveness.

