Maximizing the Potential of Online Surveys: From Design to Data

In today’s data-driven business landscape, the importance of online surveys cannot be overstated. They are a powerful tool for gathering insights and understanding consumer behavior across markets. Their benefits include access to diverse consumer populations and faster data collection than traditional methods allow. As businesses increasingly rely on quantitative data to inform their decision-making, the need for high-quality data becomes even more pronounced.

Despite these significant advantages, online surveys remain vulnerable to a range of problems, from technical issues that affect accessibility and user experience to more nuanced concerns such as respondent engagement and data integrity. Recognizing these potential pitfalls is the first step toward mitigating their impact.

Ensuring high-quality responses before a survey officially launches is a critical step toward obtaining reliable and insightful data. This proactive approach involves a series of strategic measures designed to enhance the survey experience and retain the most qualified respondents, those who are genuinely interested in the study:

  • Testing Across Devices: One key practice is thorough testing of the questionnaire across various devices to confirm that instructions are clear, more complex survey mechanisms function correctly, and the overall experience is user-friendly. This helps identify issues that might confuse respondents and allows the survey to be refined before those obstacles ever reach the field.
  • Screener Questions: To further guarantee we’re targeting the right audience, we are thorough in crafting our upfront screener questions. These screeners are designed to filter out individuals who do not meet the specific criteria essential for the study—such as those working in market research or related industries who might bias the results. Criteria can include meeting certain age or income requirements, or specific behaviors like purchasing from a brand in the past 12 months.
  • Attrition & Respondent Fatigue: Another major concern is respondent attrition, which refers to respondents dropping out or failing to complete a survey. Attrition tends to increase with survey length. Participants are less likely to remain engaged through lengthy or complex sections, particularly towards the end of the survey. An overload of brands to select from, dense grid questions, and intricate instructions can all contribute to respondent fatigue. It’s not necessarily the respondents’ fault – an overly demanding survey can overwhelm anyone.
  • Attention Checks: When there is an incentive for completing an online survey, some respondents click through randomly instead of providing thoughtful answers. To identify these respondents, we often incorporate attention checks throughout longer online surveys. These checks can include a row in a grid question instructing them to select a specific response, or a simple question that anyone should be able to answer correctly. Respondents who fail these attention checks have their responses omitted from the final data to preserve data quality.
  • Bots: Bots can be another issue when a survey offers enticing compensation. We typically avoid this by programming reCAPTCHA checks at the beginning of our online surveys and by using IP address fingerprinting to screen out duplicate responses (a simple sketch of this kind of duplicate and attention-check flagging follows this list).
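
For illustration only, here is a minimal sketch of how duplicate IP addresses and failed attention checks might be flagged in an exported response table. The column names (ip_address, attention_check) and the instructed answer are hypothetical and not drawn from any specific survey platform or from Magid’s actual tooling:

```python
import pandas as pd

# Hypothetical in-field export; column names are illustrative only.
responses = pd.DataFrame({
    "respondent_id":   ["r1", "r2", "r3", "r4"],
    "ip_address":      ["203.0.113.5", "203.0.113.5", "198.51.100.7", "192.0.2.9"],
    "attention_check": ["Somewhat agree", "Strongly agree", "Strongly agree", "Strongly agree"],
})

# The attention-check row instructed everyone to select "Strongly agree".
EXPECTED_ANSWER = "Strongly agree"

# Flag repeated IP addresses (keep the first complete, flag later ones).
responses["dup_ip"] = responses.duplicated(subset="ip_address", keep="first")

# Flag anyone who missed the instructed response.
responses["failed_attention"] = responses["attention_check"] != EXPECTED_ANSWER

flagged = responses[responses["dup_ip"] | responses["failed_attention"]]
print(flagged[["respondent_id", "dup_ip", "failed_attention"]])
```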

At Magid, as insights professionals, we feel a strong obligation to ensure the data we collect is both accurate and valid. Our observations indicate that 10-15% of respondents on short or simple surveys tend to be of low quality, with this figure rising to 20% or even exceeding 30% on longer or more complex online surveys. These dishonest or unengaged participants add noise to the data, which can lead to less definitive or even misleading findings.

Once fieldwork is complete, we conduct another thorough vetting of respondents as an additional layer of assurance in delivering high-quality data. Some of these measures are entirely objective, based on formulas or numeric calculations, while others involve a degree of analyst judgment to determine respondent quality. Post-closure quality assurance to remove low-quality respondents often includes the following measures:

  • Speeders: These respondents complete the survey too quickly to have given the questions serious consideration. We identify them by comparing their completion time and response patterns to those of respondents who clearly engaged with the survey.
    • The industry standard, which we follow at Magid, is to disqualify respondents who complete the survey in less than one-third of the median survey length (also known as Length of Interview, or LOI).
  • Nonsense Spewers: We review all open-ended responses and remove respondents who answer with gibberish or text that does not address the question asked. When bots infiltrate a survey, they are often easy to spot in open-ended responses, where the text is typically a string of randomly generated words.
  • Unengaged Respondents: We also examine response patterns at the individual level. Unengaged respondents often fail to vary their answers or contradict themselves, which shows they are not paying attention as they respond. When a respondent gives the same answer across every row of a longer grid question, we call this straight-lining and flag them for review. A contradiction might be a respondent claiming they have never heard of a brand and later claiming to have purchased from that same brand. (A minimal sketch of how the speeder and straight-lining checks might be scripted follows this list.)
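
As a rough illustration of the two objective checks above, here is a minimal sketch of the speeder rule (completion time under one-third of the median Length of Interview) and a simple straight-lining flag, assuming responses are exported to a data frame. The column names and data are hypothetical, and in practice these flags feed analyst review rather than automated cuts alone:

```python
import pandas as pd

# Hypothetical post-field export; column names are illustrative only.
df = pd.DataFrame({
    "respondent_id": ["r1", "r2", "r3", "r4"],
    "loi_seconds":   [480, 95, 520, 610],   # length of interview in seconds
    "grid_q1":       [4, 3, 5, 5],           # answers to rows of one grid question
    "grid_q2":       [2, 3, 5, 4],
    "grid_q3":       [5, 3, 5, 2],
})

# Speeder rule: completion time under one-third of the median LOI.
median_loi = df["loi_seconds"].median()
df["speeder"] = df["loi_seconds"] < median_loi / 3

# Straight-lining: identical answers across every row of the grid question.
grid_cols = ["grid_q1", "grid_q2", "grid_q3"]
df["straight_liner"] = df[grid_cols].nunique(axis=1) == 1

review = df[df["speeder"] | df["straight_liner"]]
print(review[["respondent_id", "speeder", "straight_liner"]])
```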

Ensuring trustworthy insights is crucial for businesses in today’s data-driven decision-making environment. That is why we continually review and refine our best practices for respondent quality, including annual process reviews and guidelines tailored to each survey, to uphold the integrity of our data.

Nicole Tang is a Quantitative Analyst at Magid, and Madison Cheslock is a Quantitative Associate. Both help move forward the mission of Magid’s Consumer and Commercial Brands division.
