Employee Survey Tips: Margin of Error & Confidence Intervals
I spent seven years leading methodology and data science for employee engagement surveys at Peakon and Workday. Over that time, I’ve worked with everyone from tiny tech startups to massive retail brands, mining operations, football clubs, and even government entities. Now that I’m no longer directly involved in survey creation, I can share my honest thoughts on the most effective ways to run employee surveys.
See the complete set of tips & tricks for running employee engagement surveys.
One of the most common statistical terms you’ll come across in surveys is “margin of error,” often paired with “confidence intervals.” In the world of political polling, these concepts are critical, but in employee engagement surveys, their role is usually far smaller than most people think. Below is a closer look at why this is the case and how you can put these numbers in context when evaluating your results.
Whole-Population Surveys vs. Sampling
In a typical political poll, a small group of people is asked about their opinions, and the results are extrapolated to represent an entire population. This sampling introduces uncertainty, which is why pollsters talk about margin of error.
Employee engagement surveys are different: you’re often surveying your entire workforce, or at least a large portion of it. Since you’re not working with a tiny sample, the theoretical margin of error essentially drops to zero. Statistically, if you get responses from the entire population, you don’t need to guess what the rest of your employees might say.
The results we have are in effect for the entire population we are interested in, which means the results are the results.
- Culture Amp (an example of a survey provider that gets it)
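If you want to see that intuition in numbers, here is a minimal sketch using the finite population correction. The function name, company size, response counts, and standard deviation are all hypothetical, chosen purely for illustration:

```python
import math

def margin_of_error(sample_sd, n_responses, population_size, z=1.96):
    """95% margin of error for a mean, with the finite population correction.

    The correction factor sqrt((N - n) / (N - 1)) shrinks toward zero as
    the number of responses n approaches the population size N.
    """
    standard_error = sample_sd / math.sqrt(n_responses)
    fpc = math.sqrt((population_size - n_responses) / (population_size - 1))
    return z * standard_error * fpc

# Illustrative numbers: a 200-person company, item SD of 1.0 on a 5-point scale.
for n in (50, 100, 150, 199, 200):
    print(f"{n:>3}/200 responses: ±{margin_of_error(1.0, n, 200):.3f}")
```

At full participation the correction factor hits zero, which is exactly the “the results are the results” point above.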
Why Some Platforms Show a Margin of Error Anyway
Despite surveying everyone, you might notice that certain software providers still highlight a margin of error or show confidence intervals.
Sometimes it’s simply because the platform was built on polling-based principles, where margin of error is standard. That usually indicates the tool wasn’t designed specifically for employee surveys and lacks specialization; Qualtrics is a good example of this.
In other cases, it’s a deliberate (and cynical!) choice: some providers knowingly publish confidence intervals based on faulty statistical reasoning, simply to avoid the hassle of explaining that such metrics aren’t necessary when the whole population is surveyed.
Either way, don’t let these figures alarm you. They’re often based on formulas designed for sampling a portion of a population, not for surveying the entire group.
Non-Responders and Real-World Variability
In some organizations, not everyone completes the survey. So, the final results might not truly represent 100% of your workforce. There’s also a possibility that people who chose not to respond feel differently than those who did. In theory, this introduces some uncertainty.
However, accurately modeling non-response bias can be complex, and in most cases, it’s not substantial enough to skew results, especially if your overall response rate is healthy. The most notable exception is in very small teams where losing just a few responses can make a bigger difference. But even then, the effect is usually minor compared to the complete guesswork seen in polling-based studies.
I know of only one engagement survey provider that properly models missing data due to non-response, and that is Peakon / Workday.
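For intuition, though, here is a back-of-the-envelope sensitivity check anyone can run: bound the overall mean by assuming every non-responder answered at one extreme of the scale. This is emphatically not Peakon’s model, and the numbers below are hypothetical:

```python
def nonresponse_bounds(observed_mean, n_responses, population_size,
                       scale_min=1.0, scale_max=5.0):
    """Worst-case bounds on the population mean under non-response.

    Assumes every non-responder scored at the bottom (lower bound) or
    the top (upper bound) of the scale: a deliberately extreme scenario.
    """
    n_missing = population_size - n_responses
    total = observed_mean * n_responses
    lower = (total + n_missing * scale_min) / population_size
    upper = (total + n_missing * scale_max) / population_size
    return lower, upper

# E.g. an 85% response rate in a 200-person company, observed mean of 4.0:
low, high = nonresponse_bounds(4.0, 170, 200)
print(f"Even in the worst case, the true mean lies in [{low:.2f}, {high:.2f}]")
```

Even under this deliberately pessimistic assumption the answer stays bounded, which is a luxury that sampled polls don’t have.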
Confidence Intervals in Larger Groups
When you have 100 or more people responding, your confidence intervals become very tight: with a healthy response rate and the finite population correction applied, typically less than ±0.1 on a 5-point scale. This means that if your average is 4.0, you can be quite sure the “true” score lies somewhere between 3.9 and 4.1. In practical terms, that level of precision is more than enough to make informed decisions about employee engagement.
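To put rough numbers on that, here is a quick tabulation under hypothetical assumptions: an item SD of 1.0 on a 5-point scale, an 85% response rate, 95% confidence, and the finite population correction from the earlier sketch:

```python
import math

# Hypothetical: item SD of 1.0, ~85% response rate, 95% confidence,
# finite population correction applied as in the earlier sketch.
for population in (12, 35, 120, 600):
    n = round(population * 0.85)
    se = 1.0 / math.sqrt(n)
    fpc = math.sqrt((population - n) / (population - 1))
    print(f"{n:>3}/{population:>3} responses: ±{1.96 * se * fpc:.2f}")
```

Below roughly 30 responses the interval widens quickly, which is why small teams deserve the extra caution mentioned in the takeaways below.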
Overall, margin of error and confidence intervals are important concepts for understanding data quality, but they tend to be less of an issue when you’re surveying an entire organization. A few takeaways:
- Don’t Overthink It: If you’re surveying most or all of your workforce, your data is already quite accurate.
- Watch Out for Low Response Rates: In small teams or departments where only a handful of people respond, keep in mind your results may fluctuate more.
- Focus on Action: Even an exact score can’t fix engagement issues on its own. The real value lies in taking action on the feedback you receive.
- Communicate Clearly: If stakeholders see a “margin of error” figure, explain that it’s a carryover from sampling-based methods and rarely changes the interpretation of your results.
- One caveat: Benchmarking is an area where confidence intervals DO matter, since you’re looking to represent a wider industry population. Ensure your benchmark has 30+ companies so you get high-quality confidence intervals; a quick sketch of the calculation follows below.
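When the unit of analysis is the company rather than the employee, a standard interval over company-level means is the right shape of calculation. Here is a minimal sketch: not any provider’s actual benchmark methodology, and using a normal approximation in place of the t quantile, which is reasonable once you have 30+ companies:

```python
import math
import statistics

def benchmark_interval(company_means, confidence=0.95):
    """Confidence interval for a benchmark built from company-level means.

    Here the companies really are a sample from a wider industry, so a
    confidence interval is genuinely the right tool.
    """
    n = len(company_means)
    mean = statistics.fmean(company_means)
    sd = statistics.stdev(company_means)
    # Normal quantile as a stand-in for the t quantile; fine once n >= 30.
    z = statistics.NormalDist().inv_cdf(0.5 + confidence / 2)
    half_width = z * sd / math.sqrt(n)
    return mean - half_width, mean + half_width

# Hypothetical benchmark of 30 companies' average scores:
means = [3.9, 4.1, 4.0, 3.8, 4.2, 4.0, 3.7, 4.1, 3.9, 4.0] * 3
low, high = benchmark_interval(means)
print(f"Benchmark: {statistics.fmean(means):.2f} (95% CI {low:.2f} to {high:.2f})")
```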
Sunbeam
Sunbeam is a feedback analytics platform designed to make working with open-ended, text-based feedback as straightforward as working with scores. Too many organizations overlook the rich insights hidden in qualitative responses, and Sunbeam aims to fix that. By combining deep industry expertise with cutting-edge AI, Sunbeam makes it simple to analyze and act on text feedback, ultimately helping HR teams unlock the full potential of employee engagement data.