Statistics are essential to interpreting your UX research data. Without these calculations, you can’t know whether your research findings are reliable and meaningful (that is, statistically significant). You also can’t know how they might apply to your larger pool of users. Without statistics to back your research up, you might as well be taking a stab in the dark. Put simply, user research without statistics is about as useful as no user research at all.
Let’s say you survey 20 people about a new feature you’re developing. How do you know whether your findings are actually representative of your broader user base? And what do they mean for the 20,000 users you couldn’t talk to? With what degree of confidence can you be sure that your audience will respond a certain way?
Statistics are the only way to answer those and other important questions. In particular, statistics allow UX researchers to separate real patterns from noise, estimate how findings will generalize to the broader user base, and quantify how confident they can be in the results.
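Take the 20-person survey above. Here’s a rough sketch (in Python) of the margin of error around one hypothetical result, say 12 of your 20 respondents (60%) wanting the feature. The normal approximation used here is only a rough guide at a sample this small, but it shows how wide the plausible range really is:

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """Approximate margin of error for a sample proportion
    (normal approximation; z = 1.96 corresponds to 95% confidence)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical result: 12 of 20 respondents (60%) say they want the feature.
p_hat, n = 12 / 20, 20
moe = margin_of_error(p_hat, n)
print(f"Observed: {p_hat:.0%}, plausible range: {p_hat - moe:.0%} to {p_hat + moe:.0%}")
# Observed: 60%, plausible range: 39% to 81%
```

In other words, that confident-sounding 60% could plausibly be anything from a minority opinion to a strong majority among the 20,000 users you couldn’t reach.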
Raw data often appears to tell a certain story. But only with statistics can you really know what your data is telling you. For example, let’s say you want to compare two designs for a new feature. After showing the two options to a panel of users, you see that option A is 75% successful, while option B is 50% successful. Seems like a no-brainer, right?
Not so fast.
After performing a few statistical calculations, you discover that the two options are actually likely to perform about the same at scale. Rather than a no-brainer, you have a neck-and-neck race. At that point, you have two choices: conduct more research to see whether option A or B emerges as a clear winner, or decide between the two designs based on other criteria, such as the cost or difficulty of implementing them.
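What might those “few statistical calculations” look like? Here’s one minimal sketch, assuming (hypothetically) that eight users tried each design. It runs a two-sided Fisher’s exact test on the table of successes and failures, written out by hand so you can see the moving parts:

```python
from math import comb

def fisher_exact_p(success_a, fail_a, success_b, fail_b):
    """Two-sided Fisher's exact test for a 2x2 outcome table:
    sum the probabilities of every possible table that is no more
    likely than the one actually observed."""
    n_a = success_a + fail_a              # users who saw option A
    total = n_a + success_b + fail_b      # everyone in the panel
    successes = success_a + success_b     # successes across both designs

    def hypergeom(k):
        # Probability that exactly k of option A's users succeed,
        # if the two designs really perform the same.
        return comb(successes, k) * comb(total - successes, n_a - k) / comb(total, n_a)

    observed = hypergeom(success_a)
    p = 0.0
    for k in range(max(0, n_a - (total - successes)), min(n_a, successes) + 1):
        prob = hypergeom(k)
        if prob <= observed * (1 + 1e-9):  # at least as extreme as what we saw
            p += prob
    return p

# Hypothetical panel: option A succeeded for 6 of 8 users (75%),
# option B for 4 of 8 (50%).
print(f"p = {fisher_exact_p(6, 2, 4, 4):.2f}")  # p = 0.61
```

A p-value around 0.6 means a 6-of-8 versus 4-of-8 split is entirely consistent with the two designs performing identically; with a panel that small, the apparent 25-point gap could easily be noise.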
Another example: Let’s say you run a usability test on a new feature with ten users. Your panel gives the feature a score of 5 out of 7. Sounds pretty good, huh? But after running the numbers, you find that your panel isn’t as representative of your larger user base as you thought. In fact, if every one of your users assessed the same feature, you’d be much more likely to see a measly 2 out of 7. Back to the drawing board.
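As a sketch of how that kind of reality check works, here’s a 95% confidence interval around a panel average, using made-up ratings that average 5 out of 7 and a hard-coded t critical value for ten participants:

```python
import statistics
from math import sqrt

# Hypothetical ratings from the ten-person panel (1-7 scale), averaging 5.0.
scores = [7, 7, 7, 6, 6, 5, 4, 3, 3, 2]

n = len(scores)
mean = statistics.mean(scores)
sem = statistics.stdev(scores) / sqrt(n)  # standard error of the mean
t_crit = 2.262                            # t critical value for a 95% CI, 9 degrees of freedom

low, high = mean - t_crit * sem, mean + t_crit * sem
print(f"Mean: {mean:.1f}, 95% CI: {low:.1f} to {high:.1f}")
# Mean: 5.0, 95% CI: 3.7 to 6.3
```

Even before you account for an unrepresentative panel, ten ratings averaging 5 are consistent with a true average well below that, and a biased or unusually enthusiastic panel only widens the gap between what you measured and what your users would actually say.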
Shortchanging statistics in your UX research is never a good idea. Do that, and you’re much more likely to misread your data, overestimate how well your findings generalize, and commit to product decisions your broader user base won’t support.
Statistics are incredibly useful. But they can’t tell you exactly what to do, and they can’t guarantee that a decision based on your findings will succeed.
Remember, because you can never survey or collect data from absolutely every one of your users, you can never fully prove that your audience will respond in a particular way. You simply can’t be 100% certain. Statistics let you assign probabilities to the data you have so you can make calculated predictions. But they are still just that: predictions.
A talented statistician will give you everything you need to make an informed decision. However, you’ll still need to weigh the level of risk and investment against the probability of a successful outcome. Of course, that’s worlds better than the alternative.
By applying statistics to your research data, you can protect your UX research investment — and make the (almost certainly) right decisions for your product.