6 best practices for evaluating quantitative survey design

Not all quantitative surveys are created equal.

A well-written quantitative survey will deliver actionable answers and create a path for progress. A poorly written one can leave you and the team wondering, “What does any of this mean?”

Too often, people are in “just get me the data” mode and forget that the first, and often most important, step is designing the survey. Drawing from years of experience in research and insights on both the client and supplier side, here are six best practices I use to evaluate every survey design so you can get that first step right.

1. Start with a clear goal and objectives

I love the quote “Writing a survey is like going on a road trip. You don’t have to use your GPS, but you’ll get to your destination quicker and more efficiently if you do.” The essential first step is to start with a survey goal – a broad statement reflecting the desired outcome of your research.

Then establish objectives – the specific and measurable steps that will be the building blocks to achieving your survey goal. I’ll admit that writing objectives can sometimes be hard. It helps to start by brainstorming key questions and asking your cross-functional team members what they’d like to see included. Gathering team input is a great way to gain both team engagement and alignment in your research.

2. Plan your questions

The order of your questions is just as important as the questions themselves. Start with easy questions to ease respondents into the survey, and group the questions by topic so the survey is simple to follow and understand.

Keep the survey uncomplicated and short. Long surveys and difficult questions can compromise the quality of your data. Some analytical designs do add complexity, but those question types should be used purposefully and balanced with other survey content. If your survey is getting too long, I recommend going back to your goals and objectives: evaluate your questions and remove any that don’t align with your purpose.

3. Look for (unintentional) bias

Survey questions should not favor one answer over another, and they should not steer respondents toward a particular answer. Simple things to look for include making positive and negative statements equally acceptable and balanced, and randomizing answer choices when appropriate. Avoid leading questions, and keep your own opinion out of the wording.
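Most survey platforms will randomize answer order for you, but as a minimal sketch of the idea in Python (the answer choices and the ‘None of the above’ anchor here are hypothetical examples, not from any real study):

```python
import random

def randomized_choices(options, anchor_last=None):
    """Return a shuffled copy of options, optionally keeping one
    anchor option (e.g., 'None of the above') fixed at the end."""
    shuffled = [o for o in options if o != anchor_last]
    random.shuffle(shuffled)
    if anchor_last is not None:
        shuffled.append(anchor_last)
    return shuffled

# Each respondent sees the choices in a fresh random order, so no
# single choice benefits from always appearing first in the list.
choices = ["Brand A", "Brand B", "Brand C", "None of the above"]
print(randomized_choices(choices, anchor_last="None of the above"))
```

Pinning ‘None of the above’ at the end is a common convention, since anchor options read oddly in the middle of a randomized list.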

4. Ask about the “overall” first and reconsider the use of “typical”

When evaluating an experience, there are two common errors that can undercut the actionability of your results.

First, always ask about the “overall” experience before questioning the specifics, as this reduces bias. The Delta survey I received after a flight is a good example of doing this correctly. Delta asks me to rate my overall flight experience before asking about the comfort of my seat. I answered that first overall question thinking “my flight went smoothly, the flight attendants were friendly, and I got to my destination on time…They’ve earned a good rating!” But if they had first asked me about seat comfort, baggage space and snack quality – that would have steered me toward the negatives that were not the most important aspects of the trip to me and biased my overall impression.

Second, I commonly see questions like “Tell me about your typical experience.” This wording makes me pause because the results can be muddy and unactionable. A coworker once shared a food analogy that illustrates why this approach should usually be avoided.

Imagine you are asked about your ‘typical’ meal. What do you share? Is your response going to be about breakfast or dinner, a weekend or a weekday, eating alone over the sink or out with friends celebrating? Is your typical beverage of choice water, coffee, wine or beer?

We can’t know what the consumer is thinking when we ask a broad ‘typical’ question, so focus on a specific experience instead – tell me about last night’s dinner, today’s lunch, or your most recent beverage. That specificity gives you context for interpreting the responses.

Of course, there can be exceptions, but seeing “typical” should make you pause and consider if there is a more informative way to ask the question. Always be thinking of the quality of results.

5. Ask one question at a time

Avoid double-barreled questions – questions that ask about two or more things at once. For example: “Do you think the product is affordable and of good quality?” This common survey-design mistake makes it difficult for respondents to answer accurately.

I actually received a survey with this flaw the other day. It was from a wine school that wanted my opinion on a virtual education program. I was asked, “Would you be interested in topic X and be willing to pay $Y to attend?” Well, I was very interested in the topic but thought the price was too high – so how do I answer that in a single closed-ended question? I ended up selecting ‘not interested,’ but that was because of the price. If they had split that into two questions, they would have learned the topic was of high interest to me but the price needed to be adjusted.

An easy way to detect double-barreled questions is to watch for the word “and.”
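As a crude first-pass screen, you could even scan a draft questionnaire for that word programmatically. Here is a minimal Python sketch using the two flawed questions from this article plus a made-up clean one; every flag still needs human judgment, since “and” appears in plenty of perfectly good single-idea questions.

```python
import re

# Draft questions to screen (the first two are the flawed examples above;
# the third is a made-up single-idea question for contrast).
questions = [
    "Do you think the product is affordable and of good quality?",
    "Would you be interested in topic X and be willing to pay $Y to attend?",
    "How satisfied were you with your most recent delivery?",
]

# Match "and" as a whole word, regardless of capitalization.
AND_PATTERN = re.compile(r"\band\b", re.IGNORECASE)

for q in questions:
    flag = "REVIEW" if AND_PATTERN.search(q) else "OK"
    print(f"{flag}: {q}")
```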

6. Avoid the “warm tea” average

‘Warm tea’ is one of my favorite research parables, and its lesson is simple: be wary of averages. Averages are a measure of central tendency, offsetting high and low ratings into a ‘middle’ number. That kind of analysis can be particularly misleading when reporting attitudes or behaviors. You will get better, more meaningful insights by focusing on the top box(es) (those who agree) and the bottom box(es) (those who disagree).

Imagine a question asking how much you agree or disagree with the statement “I like hot tea,” and the result is that half the respondents strongly agree and half strongly disagree because they prefer cold tea. Averaging those results could lead to the recommendation ‘we should make warm tea! It will somewhat appeal to everyone.’ But no one likes warm tea. The averaged result points to a solution that would alienate both hot tea lovers and cold tea fans, and no one would buy your product.
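To make the arithmetic concrete, here is a minimal Python sketch with made-up ratings on a 5-point agreement scale that mirror the parable: the mean lands on a lukewarm 3.0, while the top- and bottom-box shares reveal the two opposite camps.

```python
# Made-up ratings on a 5-point agreement scale for "I like hot tea":
# half the respondents strongly agree (5), half strongly disagree (1).
ratings = [5] * 50 + [1] * 50

mean = sum(ratings) / len(ratings)
top_box = sum(r >= 4 for r in ratings) / len(ratings)     # agree or strongly agree
bottom_box = sum(r <= 2 for r in ratings) / len(ratings)  # disagree or strongly disagree

print(f"Mean:       {mean:.1f}   (reads like lukewarm 'neutral')")
print(f"Top box:    {top_box:.0%} love hot tea")
print(f"Bottom box: {bottom_box:.0%} prefer it cold")
# The mean of 3.0 suggests 'warm tea'; the box scores show two opposite camps.
```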

Remember, a good survey provides clear, understandable findings, identifies significant differences and makes your participants feel valued and heard – all while confidently informing your decisions. The survey design itself is critical to achieving actionable learnings.


My final piece of advice sounds like what you might hear on public transit: “if you see something, say something.” Surveys are designed for consumers, and we are all consumers first and foremost. If it doesn’t make sense to you, then ask your colleagues or the research team about it. You will own the results, so best to be sure you can stand by them.

And of course, you can ask me more questions about survey design, research and what we are seeing in the market. Email me at jdilley@magid.com.