When it comes down to it, you need to find out if your design works for the end user. That’s what usability testing is all about. As a designer, you live in a small bubble, intimately familiar with the design, fluent enough to navigate complex menus with ease. But will the user attain the same fluidity? That’s what we need to discover through user interviews and UX surveys.
Know What the Interview Is Good For
User interviews are effective when you want to learn about your users’ feelings about something. They’re purely qualitative, meaning they collect data that could have infinite possible answers. Your goal in user interviews should be to discover what sticks out in a user’s mind. Questions about visual design, about UI elements, about workflow: all of these are better answered with usability testing and A/B tests. The same goes for discovering what features users “like” or “dislike.”
Collect Meaningful Data by Design
The interview is a very squishy tool, and it’s easy to accidentally collect piles of useless data. If you ask the wrong questions during the user interview, the best-case scenario is that you’ve wasted everyone’s time. The worst-case scenario, and the more likely one, is that you collect data that gives you the wrong ideas. Start with the wrong data, and you’ll quickly find yourself developing features that users don’t want, can’t use, and never needed.
Read other user surveys, ask knowledge holders, and study classic usability studies. Don’t reinvent the wheel and collect bad data for no reason.
Back Up Your Interviews With Quantitative Data
Qualitative data is made up of opinions, impressions, feedback, reviews, and responses from users. It’s characterized by a lack of numbers. Timing how long it takes a user to complete an interview would be meaningless: it’s the content of the interview that matters.
Quantitative data, on the other hand, is “hard” data. It’s made up of numbers, gathered empirically by monitoring or experimenting with a population. Without quantitative data to back up your qualitative data, you’ll never know how far you can trust users. You need to find out how much divergence there is between what users tell you and what their actions demonstrate.
Any user you interview should also participate in a usability study, where they are given multiple user interfaces and monitored as they use them. They can explain their process out loud as they go, after each step, or at the end of the process. Your systems should track everything from the time between clicks to the position of the user’s gaze. This data will tell you how much you can trust users. Do they say the site is easy to use, yet take far too long to navigate? They’re probably just being nice. Without quantitative data, there’s no way to discover that gap.
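The say/do gap described above can be surfaced with a simple cross-check once you have both kinds of data. Here is a minimal sketch, using made-up session data and an arbitrary threshold (both are assumptions for illustration, not a standard metric): it flags participants whose high self-reported ease rating contradicts an unusually slow measured task time.

```python
import statistics

# Hypothetical study data: what users *said* (ease rating, 1-5)
# versus what they *did* (measured task time in seconds).
participants = [
    # (participant, self_reported_ease, task_time_seconds)
    ("P1", 5, 210),
    ("P2", 4, 45),
    ("P3", 5, 260),
    ("P4", 2, 50),
]

# Baseline: the median task time across all sessions.
median_time = statistics.median(t for _, _, t in participants)

def flag_gaps(sessions, threshold=1.5):
    """Return participants who rated the task easy (4+) but took
    much longer than the median time -- likely just being nice."""
    return [
        name
        for name, ease, seconds in sessions
        if ease >= 4 and seconds > threshold * median_time
    ]

print(flag_gaps(participants))  # prints ['P1', 'P3']
```

Participants flagged this way are good candidates for follow-up questions: the interesting finding is not the rating or the timing alone, but the disagreement between them.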
Triangulate Your Data
To make data from user interviews effective, you need to use data triangulation. By combining data points from multiple usability studies, you can zero in on what the user actually wants and needs. This means that user interviews should be paired with a quantitative usability method, preferably one that actually watches users interact with the software. Remember that users are not designers: they do not understand what you need to know or why. Without even trying, users mislead, fabricate stories, rationalize their behavior, edit out details, and exclude pertinent factors, and those details are often the most important to a UX designer.
Ask Questions That Work
Imagine asking a user, “Would you pay $20 for this product today?” That might seem like a useful, valuable question. But only the most brutally honest user will say “no.” It takes a heart of stone to look at the person who just spent a ton of time building the product in front of you and tell them it’s worthless. Users want to make you happy, and they will always say, “Sure, I would buy it!” So while this question might seem valuable on the surface, it inevitably collects misleading and dangerous data about user satisfaction.
Furthermore, users rarely remember exactly what they did. They will fill in gaps in their memory with guesses, trying to provide details they think you want to hear or that would make sense in the context. Users will also rationalize their behavior, providing reasons they may or may not have done something. Explanations like this are rarely helpful and almost exclusively misleading. Users do not know what they want, and therefore, they cannot tell you.
Remove Your Biases
You must also be careful to remove your own bias. With your unique perspective on the interface, you have likely developed certain biases in your judgment of the design. Do not bring these into the interview by asking loaded questions. Asking about specific aspects of the design is often a dead end anyway. Users are more responsive to general follow-up questions that use their own words. If they say, “I couldn’t find the email sign-up form,” you can respond with, “Tell me more about that” to elicit further information. Defensively phrased questions like “Was it that bad?” or “What was wrong with it?”, even asked in a kindly tone, will put the user on the defensive and eliminate the chance of collecting useful data.
User interviews are a tricky process. Some designers swear by them, while others prefer the reliability of hard data. The best usability testing methodology probably includes a mix of both styles of data collection. While running your interviews, remember the Nielsen Norman Group’s basic rules of usability testing:
- Watch what people actually do.
- Do not believe what people say they do.
- Definitely don’t believe what people predict they may do in the future.