Unfortunately, the ability to craft clear, effective, and unbiased surveys is a skill not all product teams (or even UX firms) possess. That's a big problem. After all, if you unknowingly collect bad data (data that is unclear, exclusionary, incomplete, or apples-to-oranges), you could draw erroneous conclusions and make ill-informed decisions.
At Openfield, we understand the most common problems that plague UX research surveys, and we’ve developed a clear set of best practices that enable product teams to glean crucial insights with each round of research. This is what we’ve learned about what to avoid in UX research surveys — and how to do them right.
Before discussing Openfield's approach, let's pause to consider the most common pitfalls of poorly designed UX research surveys.
Survey questions should be as clear and specific as possible. If they are vague, overly technical, or otherwise difficult to understand, your respondents’ feedback is bound to be much less reliable.
Let's say you prepare a survey question that asks respondents to "rate how satisfied you are with the tools you use in your classroom." What you really want to know about are the EdTech tools your survey participants use. But for all they know, you could be asking about their analog tools, general digital tools (like email and Zoom), or something else entirely. Because different respondents will almost certainly interpret your unclear question differently, your data is now as vague and indecipherable as your survey question.
The way you structure your rating scale is as important as how you word your questions. Your rating scale's structure includes:
- How many points the scale offers
- How each point is labeled and defined
If your scale includes too many points, or if the points themselves are poorly defined, respondents may get confused. For example, if you don't define the middle point on a scale, some respondents may assume it means "both." Others may interpret it as a midway point on a spectrum, and still others may think it means "neutral" or "I don't care."
In general, five- to seven-point scales work well, so long as you define the end and middle points. Ten-point scales and above offer relatively little additional value while demanding significantly more mental effort from respondents.
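To illustrate, here's a minimal sketch of how a fully labeled five-point scale might be defined as data. The specific wording is our own assumption; the point is that the end points and the middle point all carry explicit, unambiguous labels:

```typescript
// A hypothetical five-point satisfaction scale with every anchor defined.
// Labeling the ends AND the middle removes guesswork: respondents can't
// read "both," "midway," or "I don't care" into an unlabeled midpoint.
const satisfactionScale: ReadonlyArray<{ value: number; label: string }> = [
  { value: 1, label: "Very dissatisfied" },
  { value: 2, label: "Somewhat dissatisfied" },
  { value: 3, label: "Neither satisfied nor dissatisfied" }, // the defined midpoint
  { value: 4, label: "Somewhat satisfied" },
  { value: 5, label: "Very satisfied" },
];
```

Because every anchor has a plain-language label, no respondent has to guess what the midpoint means.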
Biased survey questions may lead respondents to select answers they wouldn’t ordinarily choose — or unintentionally exclude respondents altogether.
For example, let’s say your survey includes a question about users’ education level. If you include “some college” as the only possible response between “high school diploma” and “four-year degree,” you may be demonstrating a bias toward four-year degrees. As a result, you effectively “erase” respondents with technical degrees or professional certifications.
Each question in your survey should address just a single variable. If you combine more than one assumption or question into a single item (a "double-barreled" question), your respondents will be forced to choose which question to base their answer on.
For example, you might ask, "How satisfied are you and your students with X product's onboarding experience?" Unless both user groups happen to feel exactly the same way about your product's onboarding, your survey respondents must now answer based on one or the other, with no way to specify whose experience they are rating.
The result? Unreliable data — and an unclear picture of what your users want and need from your product.
Beware of constructing binary or multiple-choice questions that force users to "pick a side" that isn't truly representative of their experience.
Without an “other” or write-in field, your survey data may be less meaningful than you think.
Like EdTech products themselves, UX research surveys should be designed with accessibility and inclusivity in mind. That mandate pertains to your survey’s content and format. Yet without careful attention, many surveys inadvertently include accessibility traps, from a lack of screen reader metadata to emoji rating scales.
It goes without saying: If your survey isn’t accessible to all your respondents, it’s not going to yield results that represent your full range of users.
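To make this concrete on the markup side, here's a minimal sketch of a screen reader-friendly rating question in React/TypeScript. The component and prop names are hypothetical; the techniques it demonstrates are what matter: a native fieldset/legend grouping so assistive technology announces the question with each option, keyboard-operable radio inputs, and visible text labels in place of emoji.

```tsx
import React from "react";

// Hypothetical component sketch; the accessibility techniques are the point:
// - <fieldset>/<legend> ties the question text to every option, so screen
//   readers announce the question alongside each choice
// - native radio inputs are focusable and keyboard-operable by default
// - visible text labels stand in for ambiguous emoji ratings
interface RatingQuestionProps {
  name: string;     // form field name, e.g. "onboarding-satisfaction"
  question: string; // the question text respondents (and screen readers) see
  options: ReadonlyArray<{ value: number; label: string }>;
}

export function RatingQuestion({ name, question, options }: RatingQuestionProps) {
  return (
    <fieldset>
      <legend>{question}</legend>
      {options.map(({ value, label }) => (
        <label key={value}>
          <input type="radio" name={name} value={value} />
          {label}
        </label>
      ))}
    </fieldset>
  );
}
```

Paired with a fully labeled scale like the one sketched earlier, each point becomes a real, focusable control that a screen reader can announce in context.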
It can be tempting to ask a multitude of questions in a single survey. After all, you’ve already got your respondents’ attention; why not make it count?
Unfortunately, overly long surveys breed boredom and frustration. Worse, as your respondents' attention wanes, they are less and less likely to give careful thought to each subsequent question they answer.
Knowing the pitfalls of poorly written UX surveys is the first step in avoiding them. To that end, when we partner with product teams to develop surveys, we use a number of tactics to ensure they are clear, effective, bias-free, and inclusive.
This includes:
- Writing clear, specific questions that can't be misinterpreted
- Structuring rating scales with well-defined end and middle points
- Auditing answer options for bias and exclusionary gaps
- Limiting each question to a single variable
- Offering "other" and write-in options where fixed choices fall short
- Designing for accessibility and inclusivity in both content and format
- Keeping surveys short enough to hold respondents' attention
Want to learn more about how Openfield can help you conduct UX research that takes your EdTech product to the next level? Let’s be in touch.